https://openstax.org/books/college-algebra-corequisite-support-2e/pages/6-review-exercises | College Algebra with Corequisite Support 2e
# Review Exercises
##### Exponential Functions
1.
Determine whether the function $y=156(0.825)^t$ represents exponential growth, exponential decay, or neither. Explain.
2.
The population of a herd of deer is represented by the function $A(t)=205(1.13)^t,$ where $t$ is given in years. To the nearest whole number, what will the herd population be after $6$ years?
3.
Find an exponential equation that passes through the points and $(5,60.75)$.
4.
Determine whether Table 1 could represent a function that is linear, exponential, or neither. If it appears to be exponential, find a function that passes through the points.

| $x$ | 1 | 2 | 3 | 4 |
|---|---|---|---|---|
| $f(x)$ | 3 | 0.9 | 0.27 | 0.081 |

Table 1
5.
A retirement account is opened with an initial deposit of $8,500 and earns 8.12% interest compounded monthly. What will the account be worth in 20 years?
6.
Hsu-Mei wants to save $5,000 for a down payment on a car. To the nearest dollar, how much will she need to invest in an account now with 7.5% APR, compounded daily, in order to reach her goal in 3 years?
7.
Does the equation $y=2.294e^{-0.654t}$ represent continuous growth, continuous decay, or neither? Explain.
8.
Suppose an investment account is opened with an initial deposit of $10,500 earning 6.25% interest, compounded continuously. How much will the account be worth after 25 years?
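Exercises 5, 6, and 8 all rest on the standard compound-interest formulas $A=P\left(1+\frac{r}{n}\right)^{nt}$ and $A=Pe^{rt}$. The following is a quick numerical sketch (my own check, not OpenStax's answer key):

```python
import math

# Standard compound-interest formulas, used by exercises 5, 6 and 8
# (a sketch for checking answers, not the book's worked solutions).
def compound(P, r, n, t):
    """Value of principal P at annual rate r, compounded n times a year for t years."""
    return P * (1 + r / n) ** (n * t)

def continuous(P, r, t):
    """Value of principal P at annual rate r, compounded continuously for t years."""
    return P * math.exp(r * t)

ex5 = compound(8500, 0.0812, 12, 20)   # exercise 5: roughly $43,000
ex8 = continuous(10500, 0.0625, 25)    # exercise 8: roughly $50,000
```

Exercise 6 is the same formula solved backwards for $P$, i.e. `goal / compound(1, 0.075, 365, 3)`.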
##### Graphs of Exponential Functions
9.
Graph the function $f(x)=3.5(2)^x$. State the domain and range and give the y-intercept.
10.
Graph the function $f(x)=4\left(\frac{1}{8}\right)^x$ and its reflection about the y-axis on the same axes, and give the y-intercept.
11.
The graph of $f(x)=6.5^x$ is reflected about the y-axis and stretched vertically by a factor of $7$. What is the equation of the new function, $g(x)$? State its y-intercept, domain, and range.
12.
The graph below shows transformations of the graph of $f(x)=2^x$. What is the equation for the transformation?

Figure 1
##### Logarithmic Functions
13.
Rewrite $\log_{17}(4913)=x$ as an equivalent exponential equation.
14.
Rewrite $\ln(s)=t$ as an equivalent exponential equation.
15.
Rewrite $a^{-\frac{2}{5}}=b$ as an equivalent logarithmic equation.
16.
Rewrite $e^{-3.5}=h$ as an equivalent logarithmic equation.
17.
Solve for $x$ if $\log_{64}(x)=\frac{1}{3}$ by converting the logarithmic equation to exponential form.
18.
Evaluate $\log_5\left(\frac{1}{125}\right)$ without using a calculator.
19.
Evaluate $\log(0.000001)$ without using a calculator.
20.
Evaluate $\log(4.005)$ using a calculator. Round to the nearest thousandth.
21.
Evaluate $\ln(e^{-0.8648})$ without using a calculator.
22.
Evaluate $\ln(\sqrt[3]{18})$ using a calculator. Round to the nearest thousandth.
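Exercises 18-21 can be spot-checked numerically. This is my own verification sketch, not part of the original exercise set:

```python
import math

# Spot-checks for exercises 18-21 (my own verification, not OpenStax's answer key).
ex18 = math.log(1 / 125, 5)          # log_5(1/125) = -3, since 5**3 = 125
ex19 = math.log10(0.000001)          # log(10**-6) = -6
ex20 = round(math.log10(4.005), 3)   # exercise 20, rounded to the nearest thousandth
ex21 = math.log(math.exp(-0.8648))   # ln(e**x) = x, so this is just -0.8648
```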
##### Graphs of Logarithmic Functions
23.
Graph the function $g(x)=\log(7x+21)-4$.
24.
Graph the function $h(x)=2\ln(9-3x)+1$.
25.
State the domain, vertical asymptote, and end behavior of the function $g(x)=\ln(4x+20)-17$.
##### Logarithmic Properties
26.
Rewrite $\ln(7r\cdot 11st)$ in expanded form.
27.
Rewrite $\log_8(x)+\log_8(5)+\log_8(y)+\log_8(13)$ in compact form.
28.
Rewrite $\log_m\left(\frac{67}{83}\right)$ in expanded form.
29.
Rewrite $\ln(z)-\ln(x)-\ln(y)$ in compact form.
30.
Rewrite $\ln\left(\frac{1}{x^5}\right)$ as a product.
31.
Rewrite $-\log_y\left(\frac{1}{12}\right)$ as a single logarithm.
32.
Use properties of logarithms to expand $\log\left(\frac{r^2 s^{11}}{t^{14}}\right)$.
33.
Use properties of logarithms to expand $\ln\left(2b\sqrt{b+1}\,\sqrt{b-1}\right)$.
34.
Condense the expression $5\ln(b)+\ln(c)+\frac{\ln(4-a)}{2}$ to a single logarithm.
35.
Condense the expression $3\log_7 v+6\log_7 w-\frac{\log_7 u}{3}$ to a single logarithm.
36.
Rewrite $\log_3(12.75)$ to base $e$.
37.
Rewrite $5^{12x-17}=125$ as a logarithm. Then apply the change of base formula to solve for $x$ using the common log. Round to the nearest thousandth.
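For exercise 37, the change of base step can be checked numerically. This is my own sketch (the exact algebra is the point of the exercise): since $5^{12x-17}=125$ gives $12x-17=\log_5(125)=\frac{\log(125)}{\log(5)}$,

```python
import math

# Exercise 37 checked numerically (my verification, not the book's solution):
# 12x - 17 = log(125)/log(5) = 3, so x = 20/12 ≈ 1.667.
x = (math.log10(125) / math.log10(5) + 17) / 12
```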
##### Exponential and Logarithmic Equations
38.
Solve $216^{3x}\cdot 216^{x}=36^{3x+2}$ by rewriting each side with a common base.
39.
Solve $125\left(\frac{1}{625}\right)^{-x-3}=5^{3}$ by rewriting each side with a common base.
40.
Use logarithms to find the exact solution for $7\cdot 17^{-9x}-7=49$. If there is no solution, write no solution.
41.
Use logarithms to find the exact solution for $3e^{6n-2}+1=-60$. If there is no solution, write no solution.
42.
Find the exact solution for $5e^{3x}-4=6$. If there is no solution, write no solution.
43.
Find the exact solution for $2e^{5x-2}-9=-56$. If there is no solution, write no solution.
44.
Find the exact solution for $5^{2x-3}=7^{x+1}$. If there is no solution, write no solution.
45.
Find the exact solution for $e^{2x}-e^{x}-110=0$. If there is no solution, write no solution.
46.
Use the definition of a logarithm to solve $-5\log_7(10n)=5$.
47.
Use the definition of a logarithm to find the exact solution for $9+6\ln(a+3)=33$.
48.
Use the one-to-one property of logarithms to find an exact solution for $\log_8(7)+\log_8(-4x)=\log_8(5)$. If there is no solution, write no solution.
49.
Use the one-to-one property of logarithms to find an exact solution for $\ln(5)+\ln(5x^2-5)=\ln(56)$. If there is no solution, write no solution.
50.
The formula for measuring sound intensity in decibels $D$ is defined by the equation $D=10\log\left(\frac{I}{I_0}\right),$ where $I$ is the intensity of the sound in watts per square meter and $I_0=10^{-12}$ is the lowest level of sound that the average person can hear. How many decibels are emitted from a large orchestra with a sound intensity of $6.3\cdot 10^{-3}$ watts per square meter?
51.
The population of a city is modeled by the equation $P(t)=256{,}114e^{0.25t}$ where $t$ is measured in years. If the city continues to grow at this rate, how many years will it take for the population to reach one million?
52.
Find the inverse function $f^{-1}$ for the exponential function $f(x)=2\cdot e^{x+1}-5$.
53.
Find the inverse function $f^{-1}$ for the logarithmic function $f(x)=0.25\cdot\log_2(x^3+1)$.
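Exercises 50 and 51 reduce to direct evaluation of the given formulas. A quick numerical check (mine, not the book's answer key):

```python
import math

# Exercises 50-51 computed numerically (my own check, not OpenStax's solutions).
I0 = 1e-12
D = 10 * math.log10(6.3e-3 / I0)          # decibel level of the orchestra, ~98 dB
t = math.log(1_000_000 / 256_114) / 0.25  # years for P(t) = 256,114 e^{0.25t} to hit 1,000,000
```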
##### Exponential and Logarithmic Models
For the following exercises, use this scenario: A doctor prescribes 300 milligrams of a therapeutic drug that decays by about 17% each hour.
54.
To the nearest minute, what is the half-life of the drug?
55.
Write an exponential model representing the amount of the drug remaining in the patient’s system after $t$ hours. Then use the formula to find the amount of the drug that would remain in the patient’s system after 24 hours. Round to the nearest hundredth of a gram.
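A hedged sketch of exercise 54 (my own model; the book may phrase the decay model slightly differently): if 17% of the drug decays each hour, then $A(t)=300(0.83)^t$, and the half-life solves $(0.83)^t=\tfrac12$:

```python
import math

# Exercise 54 (my sketch): half-life of A(t) = 300 * (0.83)**t, i.e. the t with (0.83)**t = 1/2.
half_life_hours = math.log(0.5) / math.log(0.83)
half_life_minutes = round(half_life_hours * 60)   # ~3.72 hours ≈ 223 minutes
```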
For the following exercises, use this scenario: A soup with an internal temperature of 350° Fahrenheit was taken off the stove to cool in a 71°F room. After fifteen minutes, the internal temperature of the soup was 175°F.
56.
Use Newton’s Law of Cooling to write a formula that models this situation.
57.
How many minutes will it take the soup to cool to 85°F?
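One standard way to set up exercises 56-57 (a sketch under my own variable names, not the book's notation) is $T(t)=T_{\text{room}}+(T_0-T_{\text{room}})e^{kt}$, with $k$ fitted from the 15-minute data point:

```python
import math

# Newton's Law of Cooling for exercises 56-57, solved numerically (my own sketch).
T_room, T0, T15 = 71, 350, 175
k = math.log((T15 - T_room) / (T0 - T_room)) / 15   # fit k from T(15) = 175
t_85 = math.log((85 - T_room) / (T0 - T_room)) / k  # time until the soup reaches 85°F, ~45 min
```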
For the following exercises, use this scenario: The equation $N(t)=\frac{1200}{1+199e^{-0.625t}}$ models the number of people in a school who have heard a rumor after $t$ days.
58.
How many people started the rumor?
59.
To the nearest tenth, how many days will it be before the rumor spreads to half the carrying capacity?
60.
What is the carrying capacity?
For the following exercises, enter the data from each table into a graphing calculator and graph the resulting scatter plots. Determine whether the data from the table would likely represent a function that is linear, exponential, or logarithmic.
61.

| $x$ | $f(x)$ |
|---|---|
| 1 | 3.05 |
| 2 | 4.42 |
| 3 | 6.4 |
| 4 | 9.28 |
| 5 | 13.46 |
| 6 | 19.52 |
| 7 | 28.3 |
| 8 | 41.04 |
| 9 | 59.5 |
| 10 | 86.28 |

62.

| $x$ | $f(x)$ |
|---|---|
| 0.5 | 18.05 |
| 1 | 17 |
| 3 | 15.33 |
| 5 | 14.55 |
| 7 | 14.04 |
| 10 | 13.5 |
| 12 | 13.22 |
| 13 | 13.1 |
| 15 | 12.88 |
| 17 | 12.69 |
| 20 | 12.45 |

63.
Find a formula for an exponential equation that goes through the points $(-2,100)$ and $(0,4)$. Then express the formula as an equivalent equation with base $e$.
##### Fitting Exponential Models to Data
64.
What is the carrying capacity for a population modeled by the logistic equation $P(t)=\frac{250{,}000}{1+499e^{-0.45t}}?$ What is the initial population for the model?
65.
The population of a culture of bacteria is modeled by the logistic equation $P(t)=\frac{14{,}250}{1+29e^{-0.62t}},$ where $t$ is in days. To the nearest tenth, how many days will it take the culture to reach 75% of its carrying capacity?
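Exercise 65 can be checked by inverting the logistic formula (my derivation, not the book's): $P(t)=\frac{K}{1+29e^{-0.62t}}$ reaches $0.75K$ exactly when $1+29e^{-0.62t}=\frac{4}{3}$.

```python
import math

# Exercise 65 (my sketch): solve 1 + 29 e^{-0.62 t} = 4/3 for t.
t_75 = -math.log((4 / 3 - 1) / 29) / 0.62   # ~7.2 days
```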
For the following exercises, use a graphing utility to create a scatter diagram of the data given in the table. Observe the shape of the scatter diagram to determine whether the data is best described by an exponential, logarithmic, or logistic model. Then use the appropriate regression feature to find an equation that models the data. When necessary, round values to five decimal places.
66.

| $x$ | $f(x)$ |
|---|---|
| 1 | 409.4 |
| 2 | 260.7 |
| 3 | 170.4 |
| 4 | 110.6 |
| 5 | 74 |
| 6 | 44.7 |
| 7 | 32.4 |
| 8 | 19.5 |
| 9 | 12.7 |
| 10 | 8.1 |

67.

| $x$ | $f(x)$ |
|---|---|
| 0.15 | 36.21 |
| 0.25 | 28.88 |
| 0.5 | 24.39 |
| 0.75 | 18.28 |
| 1 | 16.5 |
| 1.5 | 12.99 |
| 2 | 9.91 |
| 2.25 | 8.57 |
| 2.75 | 7.23 |
| 3 | 5.99 |
| 3.5 | 4.81 |

68.

| $x$ | $f(x)$ |
|---|---|
| 0 | 9 |
| 2 | 22.6 |
| 4 | 44.2 |
| 5 | 62.1 |
| 7 | 96.9 |
| 8 | 113.4 |
| 10 | 133.4 |
| 11 | 137.6 |
| 15 | 148.4 |
| 17 | 149.3 |
https://sabbiu.com/p-norms-taxicab-norm-euclidean-norm-and-infinity-norm/ | Sabbiu Shah
p-Norms (Taxicab Norm, Euclidean Norm and infinity-norm)
Norms are a measure of distance. The $p$-norm is defined as follows: for $p\geq1$,
$||x||_p \equiv \sqrt[p]{|x_1|^p + |x_2|^p + ... + |x_n|^p}$
Taxicab Norm (1-Norm)
When $p=1$, the norm is called the taxicab norm. The distance derived from this norm is called the Manhattan distance.
$||x||_1 \equiv |x_1| + |x_2| + ... + |x_n|$
Euclidean Norm (2-Norm)
It is the most common notion of distance. When $p=2$, the norm is called the Euclidean norm.
$||x||_2 \equiv \sqrt{|x_1|^2 + |x_2|^2 + ... + |x_n|^2}$
∞-norm
The infinity norm is defined as
$||x||_\infty \equiv max(|x_1|, |x_2|, ..., |x_n|)$
Proof
\begin{aligned} ||x||_p & = \Big( \sum_{i=1}^n |x_i|^p \Big)^{1/p}\ \ \ \ \ \ \ \ \ \ \text{Equation of p-norm}\\ ||x||_p & = m \Big( \sum_{i=1}^n \frac{|x_i|^p}{m^p} \Big)^{1/p}\ \ \ \ \ \ \ \ \ \ m=\max_i(|x_i|)\\ \end{aligned}
As $p$ approaches $\infty$, each term $\frac{|x_i|^p}{m^p}$ with $|x_i|<m$ approaches $0$, while every term with $|x_i|=m$ equals $1$. The sum is therefore trapped between $1$ and $n$, and since both $1^{1/p}\to 1$ and $n^{1/p}\to 1$ as $p\to\infty$, we get $\Big(\sum_{i=1}^n \frac{|x_i|^p}{m^p}\Big)^{1/p}\to 1.$
\begin{aligned} \therefore\ ||x||_\infty & \equiv max(|x_i|) \end{aligned}
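As a numerical illustration of this limit (my own sketch, not part of the original post), the $p$-norm of a fixed vector approaches $\max_i|x_i|$ as $p$ grows:

```python
# Illustration (mine): the p-norm of a fixed vector tends to its infinity-norm.
def p_norm(x, p):
    return sum(abs(v) ** p for v in x) ** (1.0 / p)

x = [3.0, -4.0, 1.0]
norms = {p: p_norm(x, p) for p in (1, 2, 10, 100)}
# p=1 gives 8 (taxicab), p=2 gives sqrt(26) ≈ 5.10 (euclidean),
# and large p approaches 4 = max|x_i| (infinity norm).
```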
Visualising norms as a unit circle
This section visualizes the set of points where $||x||_p = 1$. Let us consider the 2-dimensional case.
1-Norm
The equation is given as,
\begin{aligned} & ||x||_1 = |x_1| + |x_2|\\ \implies & 1= |x_1| + |x_2|\\ \end{aligned}
Thus we get the following equations,
When $x_1\geq0$ and $x_2\geq0$, $x_2=1-x_1$ [First quadrant]
When $x_1\leq0$ and $x_2\geq0$, $x_2=1+x_1$ [Second quadrant]
When $x_1\leq0$ and $x_2\leq0$, $x_2=-x_1-1$ [Third quadrant]
When $x_1\geq0$ and $x_2\leq0$, $x_2=x_1-1$ [Fourth quadrant]
Plotting these equations, we get,
2-Norm
The equation is given as,
\begin{aligned} & ||x||_2 = \sqrt{|x_1|^2 + |x_2|^2}\\ \implies & 1= x_1^2 + x_2^2\\ \end{aligned}
As this equation represents a unit circle, we get the following graph,
∞-norm
The equation is given as,
\begin{aligned} & ||x||_\infty = max(|x_1|, |x_2|)\\ \implies & 1 = max(|x_1|, |x_2|)\\ \end{aligned}
This gives the following graph,
Substituting different values of $p$, these equations can be further visualised in a Wolfram Mathematica Demonstration.
https://encyclopediaofmath.org/wiki/Difference_equation | # Difference equation
An equation containing finite differences of an unknown function. Let $y ( n) = y _ {n}$ be a function depending on an integer argument $n = 0, \pm 1, \pm 2 , . . .$; let
$$\Delta y _ {n} = \ y _ {n + 1 } - y _ {n} ,\ \ \Delta ^ {m + 1 } y _ {n} = \ \Delta ( \Delta ^ {m} y _ {n} ),$$
$$\Delta ^ {1} y _ {n} = \Delta y _ {n} ,\ m = 1, 2 \dots$$
be the finite differences. The expression $\Delta ^ {m} y _ {n}$ contains values of the function $y$ at the $m + 1$ points $n \dots n + m$. The formula
$$\tag{1 } \Delta ^ {m} y _ {n} = \ \sum _ {k=0} ^ { m } (- 1) ^ {m - k } \left ( \begin{array}{c} m \\ k \end{array} \right ) y _ {n + k }$$
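Formula (1) is easy to verify on a sample sequence. The following check (mine, not from the article) compares repeated differencing with the binomial-sum formula for $y_n = n^3$:

```python
from math import comb

# Verification sketch (mine) of formula (1): Delta^m y_n as an alternating binomial sum.
def delta(seq):
    """Forward difference: (Δy)_n = y_{n+1} - y_n."""
    return [seq[i + 1] - seq[i] for i in range(len(seq) - 1)]

y = [n ** 3 for n in range(10)]      # y_n = n^3
m = 3

iterated = y[:]
for _ in range(m):                   # Δ^m y by differencing m times
    iterated = delta(iterated)

direct = [sum((-1) ** (m - k) * comb(m, k) * y[n + k] for k in range(m + 1))
          for n in range(len(y) - m)]
# Both give the constant sequence 6 = 3!, since Δ^3 of n^3 is 3!.
```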
is valid. An equation of the form
$$\tag{2 } F ( n; y _ {n} , \Delta y _ {n} \dots \Delta ^ {m} y _ {n} ) = 0$$
is called a difference equation, where $y$ is an unknown and $F$ is a given function. Replacing the finite differences in (2) by their expressions in the values of the desired function according to (1), it reduces to an equation of the form
$$\tag{3 } F ( n; y _ {n} ,\ y _ {n + 1 } \dots y _ {n + m } ) = 0.$$
If $\partial F/ \partial y _ {n} \neq 0$, $\partial F/ \partial y _ {n + m } \neq 0$, that is, if equation (3) really does contain $y _ {n}$ as well as $y _ {n + m }$, then equation (3) is called an $m$- th order difference equation.
The most developed theory is that of linear difference equations, which has much in common with the theory of linear ordinary differential equations (see [1]–[3]).
$$\tag{4 } a _ {m} ( n) y _ {n + m } + \dots + a _ {0} ( n) y _ {n} = f _ {n}$$
is an $m$- th order linear difference equation. Here $f _ {n} = f ( n)$ is a given function and the $a _ {k} ( n)$, $k = 0 \dots m$, are given coefficients, $a _ {m} ( n) \neq 0$, $a _ {0} ( n) \neq 0$. Every function $y _ {n} = y ( n)$ satisfying equation (4) is called a solution to the difference equation. As in the case of differential equations one distinguishes particular and general solutions of the difference equation (4). A general solution to the difference equation (4) is a solution, depending on $m$ arbitrary parameters, such that each particular solution can be obtained from it by giving a certain value to the parameters. Usually the actual values of the parameters are found from supplementary conditions. The Cauchy problem is typical: Given $y _ {0} \dots y _ {m - 1 } , f _ {n}$, find the solution $y _ {n}$ to equation (4) when $n = m, m + 1 , . . .$. The existence of and a method for constructing a solution to the difference equation (4) are established according to the following scheme. Along with (4) the homogeneous difference equation
$$\tag{5 } a _ {m} ( n) y _ {n + m } + \dots + a _ {0} ( n) y _ {n} = 0$$
is considered.
The following assertions are true:
1) Let $y _ {n} ^ {(1)} \dots y _ {n} ^ {(k)}$ be solutions to equation (5) and let $c _ {1} \dots c _ {k}$ be an arbitrary collection of constants. Then the function $c _ {1} y _ {n} ^ {(1)} + \dots + c _ {k} y _ {n} ^ {(k)}$ is also a solution to equation (5).
2) If $y _ {n} ^ {(1)} \dots y _ {n} ^ {(m)}$ are $m$ solutions to equation (5) and if the determinant
$$\left | \begin{array}{ccc} y _ {0} ^ {(1)} &\dots &y _ {0} ^ {(m)} \\ \dots &\dots &\dots \\ y _ {m - 1 } ^ {(1)} &\dots &y _ {m - 1 } ^ {(m)} \\ \end{array} \ \right |$$
is non-zero, then the general solution to the homogeneous difference equation (5) has the form
$$\tag{6 } y _ {n} = \ \sum _ {k = 1 } ^ { m } c _ {k} y _ {n} ^ {(k)} ,$$
where $c _ {k}$ are arbitrary constants.
3) The general solution to the non-homogeneous difference equation (4) is the sum of any one of its particular solutions and the general solution of the homogeneous difference equation (5).
A particular solution to the non-homogeneous equation (4) can be constructed by starting from the general solution (6) of the homogeneous equation by the method of variation of parameters (see, for example, [2]). In the case of a difference equation with constant coefficients,
$$\tag{7 } a _ {m} y _ {m + n } + \dots + a _ {0} y _ {n} = 0,$$
one can find $m$ linearly independent particular solutions immediately. Namely, consider the characteristic equation
$$\tag{8 } a _ {m} q ^ {m} + a _ {m - 1 } q ^ {m - 1 } + \dots + a _ {0} = 0$$
and find its roots $q _ {1} \dots q _ {m}$. If all the roots are simple, then the functions
$$y _ {n} ^ {(1)} = \ q _ {1} ^ {n} \dots \ y _ {n} ^ {(m)} = \ q _ {m} ^ {n}$$
are a linearly independent system of solutions to equation (7). When $q _ {k}$ is a root of multiplicity $r$, the solutions
$$q _ {k} ^ {n} ,\ nq _ {k} ^ {n} \dots \ n ^ {r - 1 } q _ {k} ^ {n}$$
are linearly independent.
If the coefficients $a _ {0} \dots a _ {m}$ are real and equation (8) has a complex root, for example a simple root $q _ {k} = \rho ( \cos \phi + i \sin \phi )$, then instead of the complex solutions $q _ {k} ^ {n} , \overline{q} _ {k} ^ {n}$ one obtains two linearly independent real solutions
$$\rho ^ {n} \cos n \phi ,\ \ \rho ^ {n} \sin n \phi .$$
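The characteristic-root method can be illustrated on the Fibonacci-type equation $y_{n+2}=y_{n+1}+y_n$ (a sketch of mine, not from the article): the characteristic equation $q^2-q-1=0$ has simple roots, so the general solution is $y_n=c_1q_1^n+c_2q_2^n$, with $c_1,c_2$ fixed by the initial data $y_0=0$, $y_1=1$.

```python
# Sketch (mine): solve y_{n+2} = y_{n+1} + y_n via the roots of q^2 - q - 1 = 0.
q1 = (1 + 5 ** 0.5) / 2
q2 = (1 - 5 ** 0.5) / 2
c1 = 1 / (q1 - q2)        # from y_0 = c1 + c2 = 0 and y_1 = c1*q1 + c2*q2 = 1
c2 = -c1
closed = [round(c1 * q1 ** n + c2 * q2 ** n) for n in range(10)]

# Compare with the recurrence computed directly.
fib = [0, 1]
for _ in range(8):
    fib.append(fib[-1] + fib[-2])
```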
Suppose one has a second-order difference equation with constant real coefficients,
$$\tag{9 } a _ {2} y _ {n + 2 } + a _ {1} y _ {n + 1 } + a _ {0} y _ {n} = 0.$$
The characteristic equation
$$a _ {2} q ^ {2} + a _ {1} q + a _ {0} = 0$$
has the roots
$$q _ {1,2} = \ \frac{- a _ {1} \pm \sqrt {a _ {1} ^ {2} - 4a _ {0} a _ {2} } }{2a _ {2} } .$$
When $q _ {2} \neq q _ {1}$ it is convenient to write the general solution to (9) in the form
$$\tag{10 } y _ {n} = c _ {1} \frac{q _ {2} q _ {1} ^ {n} - q _ {1} q _ {2} ^ {n} }{q _ {2} - q _ {1} } + c _ {2} \frac{q _ {2} ^ {n} - q _ {1} ^ {n} }{q _ {2} - q _ {1} } ,$$
where $c _ {1}$ and $c _ {2}$ are arbitrary constants. If $q _ {1}$ and $q _ {2}$ are complex conjugate roots:
$$q _ {1,2} = \ \rho ( \cos \phi \pm i \sin \phi ),$$
then another representation of the general solution is
$$\tag{11 } y _ {n} = \ - c _ {1} \rho ^ {n} \frac{\sin ( n - 1) \phi }{\sin \phi } + c _ {2} \rho ^ {n - 1 } \frac{\sin n \phi }{\sin \phi } .$$
In the case of a multiple root the general solution can be obtained by taking limits in (10) or (11). It will have the form
$$y _ {n} = \ - c _ {1} ( n - 1) q _ {1} ^ {n} + c _ {2} nq _ {1} ^ {n - 1 } .$$
One can consider the Cauchy problem or various boundary value problems for second-order difference equations in the same way as for equations of arbitrary order. For example, for the Cauchy problem
$$\tag{12 } \left . \begin{array}{c} T _ {n + 2 } ( x) - 2xT _ {n + 1 } ( x) + T _ {n} ( x) = 0,\ \ n = 0, 1 \dots \\ T _ {0} ( x) = 1,\ \ T _ {1} ( x) = x, \\ \end{array} \right \}$$
where $x$ is any real number, the solution of (12) is a polynomial $T _ {n} ( x)$ of degree $n$ (a Chebyshev polynomial of the first kind), defined by
$$T _ {n} ( x) = \ \cos ( n \arccos x) = \ { \frac{1}{2} } \left [ ( x + \sqrt {x ^ {2} - 1 } ) ^ {n} + ( x + \sqrt {x ^ {2} - 1 } ) ^ {-n} \right ].$$
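The closed form can be checked against the recurrence (12) numerically (my own verification sketch, not from the article):

```python
import math

# Check (mine) that T_n(x) = cos(n arccos x) satisfies the recurrence (12)
# together with the initial conditions T_0(x) = 1, T_1(x) = x.
def T(n, x):
    return math.cos(n * math.acos(x))

x = 0.3
residuals = [T(n + 2, x) - 2 * x * T(n + 1, x) + T(n, x) for n in range(6)]
```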
A boundary value problem for a second-order difference equation is to find a function $y _ {n}$ satisfying, when $n = 1 \dots N - 1$, an equation
$$\tag{13 } Ly _ {n} = \ a _ {n} y _ {n - 1 } - c _ {n} y _ {n} + b _ {n} y _ {n + 1 } = \ - f _ {n}$$
and two linearly independent boundary conditions. Such boundary conditions can be, for example,
$$\tag{14 } y _ {0} = \ \kappa _ {1} y _ {1} + \mu _ {1} ,\ \ y _ {N} = \ \kappa _ {2} y _ {N - 1 } + \mu _ {2} ,$$
or
$$\tag{15 } y _ {0} = \ \mu _ {1} ,\ \ y _ {N} = \ \mu _ {2} .$$
The following maximum principle is valid for a second-order difference equation. Given the problem (13), (15), let the conditions
$$a _ {n} > 0,\ \ b _ {n} > 0,\ \ c _ {n} \geq \ a _ {n} + b _ {n} ,\ \ n = 1 \dots N - 1,$$
be fulfilled. Now if $Ly _ {n} \geq 0$ $( Ly _ {n} \leq 0)$, $n = 1 \dots N - 1$, then $y _ {n} \not\equiv \textrm{ const }$ cannot have a greatest positive (smallest negative) value when $n = 1 \dots N - 1$. The maximum principle implies that the boundary value problem (13), (15) is uniquely solvable and that its solution is stable under a change of the boundary conditions $\mu _ {1} , \mu _ {2}$ and the right-hand side $f _ {n}$. The shooting method (see [2]) can be applied to solve difference boundary value problems (13), (14).
One has constructed an explicit form of the solution to a non-linear difference equation
$$\tag{16 } y _ {n + 1 } = \ f _ {n} ( y _ {n} ),\ \ n = 0, 1 \dots$$
only in isolated, very special cases. For equations of the type (16) one studies qualitative questions on the behaviour of the solutions as $n \rightarrow \infty$, and a stability theory, which on the whole is analogous to the stability theory for ordinary differential equations, has been developed (see [4], [5]).
Multi-dimensional difference equations arise for difference approximations to partial differential equations (see [2], [6]). For example, the Poisson equation
$$\frac{\partial ^ {2} u }{\partial x _ {1} ^ {2} } + \frac{\partial ^ {2} u }{\partial x _ {2} ^ {2} } = \ - f ( x _ {1} , x _ {2} )$$
can be approximated by the difference equation
$$\frac{u _ {i + 1, j } - 2u _ {i,j} + u _ {i - 1, j } }{h _ {1} ^ {2} } + \frac{u _ {i, j + 1 } - 2u _ {i,j} + u _ {i, j - 1 } }{h _ {2} ^ {2} } = \ - f _ {i,j} ,$$
where
$$u _ {i,j} = \ u ( x _ {1} ^ {(i)} ,\ x _ {2} ^ {(j)} ),\ \ x _ {1} ^ {(i)} = \ ih _ {1} ,\ \ x _ {2} ^ {(j)} = \ jh _ {2} ,$$
$$i, j = 0, \pm 1, \pm 2 \dots$$
and $h _ {1}$ and $h _ {2}$ are the steps of the grid.
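A minimal iterative solver for this five-point scheme (my own sketch, not from the article) is Jacobi's method, which repeatedly replaces each interior value by the average of its four neighbours plus the source term:

```python
# Jacobi iteration for the five-point scheme above, with f = 1, h1 = h2 = h and
# zero boundary values on the unit square (illustrative sketch, mine).
n = 8
h = 1.0 / n
u = [[0.0] * (n + 1) for _ in range(n + 1)]
for _ in range(200):                  # enough sweeps to converge on this tiny grid
    new = [row[:] for row in u]
    for i in range(1, n):
        for j in range(1, n):
            new[i][j] = 0.25 * (u[i + 1][j] + u[i - 1][j]
                                + u[i][j + 1] + u[i][j - 1] + h * h * 1.0)
    u = new
# u[n // 2][n // 2] approximates the center value of the solution of -Δu = 1.
```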
A system of multi-dimensional difference equations together with additional initial and boundary conditions forms a difference scheme. Such questions as the correctness of difference problems, methods for solving them, convergence under grid refinement to the solutions of the original differential equations, are all studied in connection with multi-dimensional difference equations (see Difference schemes, theory of).
Although various mathematical and technical models leading to difference equations exist (see, for example, [4], [5]) the basic field of their application is in approximation methods for solving differential equations (see [6] and [9]).
#### References
[1] A.O. Gel'fond, "Differenzenrechnung", Deutsch. Verlag Wissenschaft. (1958) (Translated from Russian)
[2] A.A. Samarskii, E.S. Nikolaev, "Numerical methods for grid equations", 1–2, Birkhäuser (1989) (Translated from Russian)
[3] A.A. Samarskii, Yu.N. Karamzin, "Difference equations", Moscow (1978) (In Russian)
[4] D.I. Martynyuk, "Lectures on the qualitative theory of difference equations", Kiev (1972) (In Russian)
[5] A. Halanay, D. Wexler, "Qualitative theory of impulse systems", Acad. R.S. Romania (1968) (Translated from Rumanian)
[6] A.A. Samarskii, "Theorie der Differenzverfahren", Akad. Verlagsgesell. Geest u. Portig K.-D. (1984) (Translated from Russian)
[7] I.S. Berezin, N.P. Zhidkov, "Computing methods", 2, Pergamon (1973) (Translated from Russian)
[8] N.S. Bakhvalov, "Numerical methods: analysis, algebra, ordinary differential equations", MIR (1977) (Translated from Russian)
[9] A.D. Gorbunov, "Difference equations", Moscow (1972) (In Russian)
For references see also Difference scheme. In addition, [a1], [a2] below give a more general treatment of difference equations and difference operators (cf. Difference operator), as well as applications of these to differential equations.
#### References
[a1] P. Henrici, "Discrete variable methods in ordinary differential equations", Wiley (1962)
[a2] F.B. Hildebrand, "Finite-difference equations and simulations", Prentice-Hall (1968)
[a3] M.R. Spiegel, "Calculus of finite differences and difference equations", McGraw-Hill (1971)
[a4] L.M. Milne-Thomson, "The calculus of finite differences", Chelsea, reprint (1981)
[a5] N.E. Nörlund, "Vorlesungen über Differenzenrechnung", Springer (1924)
How to Cite This Entry:
Difference equation. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Difference_equation&oldid=46653
This article was adapted from an original article by A.V. Gulin, A.A. Samarskii (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article.
https://www.cut-the-knot.org/m/Algebra/MarghidanuWithGeneralization.shtml | # Dorin Marghidanu's Inequality with Generalization
### Problem
Let $a,b\in\mathbb{R},\,$ $a\lt b,\,$ and $x,y,z\in [a,b].\,$ Prove that
$\displaystyle (x+y)^2+(y+z)^2+(z+x)^2+12ab\le 4(a+b)(x+y+z).$
Where does equality hold? Generalize!
### Solution 1
For $u,v\in [a,b],\,$ $(u+v-2a)(u+v-2b)\le 0,\,$ so that
(1)
$(u+v)^2+4ab\le 2(a+b)(u+v)$
Equality occurs when $u=v=a\,$ or $u=v=b.$
If $x_k\in [a,b],\,$ for $k=1,\ldots,n,\,$ then, assuming $x_{n+1}=x_1,\,$ (1) yields
$(x_k+x_{k+1})^2+4ab\le 2(a+b)(x_k+x_{k+1})$
for $k=1,\ldots,n.\,$ Summing up, we get
\displaystyle \begin{align} \sum_{k=1}^n(x_k+x_{k+1})^2+4nab&\le 2(a+b)\sum_{k=1}^n(x_k+x_{k+1})\\ &=4(a+b)\sum_{k=1}^nx_k. \end{align}
Equality occurs when all $x_k$ are equal to either $a\,$ or $b.\,$ This solves the problem for $n=3\,$ and provides the required generalization.
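The generalized inequality is easy to test numerically; the following check (my own harness, not part of the solution) tries random points of $[a,b]^n$:

```python
import random

# Random check (mine) of the generalized inequality from Solution 1:
# sum_k (x_k + x_{k+1})^2 + 4nab <= 4(a+b) sum_k x_k, cyclically, for x_k in [a, b].
random.seed(0)
a, b = 1.0, 3.0
ok = True
for _ in range(1000):
    n = random.randint(3, 8)
    xs = [random.uniform(a, b) for _ in range(n)]
    lhs = sum((xs[k] + xs[(k + 1) % n]) ** 2 for k in range(n)) + 4 * n * a * b
    rhs = 4 * (a + b) * sum(xs)
    ok = ok and lhs <= rhs + 1e-9
```

At the equality cases $x_k\equiv a$ or $x_k\equiv b$ both sides equal $4na(a+b)$ resp. $4nb(a+b)$.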
### Solution 2
First of all note that
(1)
$(x_{i}-a)(x_{i}-b)\leq 0,$
that is
$x_{i}^{2}+ab\leq x_{i}(a+b),$
for every $x_{i}\in \lbrack a,b].$ Adding up for $i=1,2,\ldots n$ we get
(2)
$\displaystyle \sum\limits_{i=1}^{n}x_{i}^{2}+nab\leq (a+b)\sum\limits_{i=1}^{n}x_{i}.$
On the other hand, using the obvious inequality (here and in the following set $x_{n+1}:=x_{1}$)
(3)
$\displaystyle \frac{(x_{i}+x_{i+1})^{2}}{2}\leq x_{i}^{2}+x_{i+1}^{2}$
and adding up for $i=1,2,\ldots n$, we obtain
$\displaystyle \frac{1}{2}\sum\limits_{i=1}^{n}(x_{i}+x_{i+1})^{2}\leq 2\sum\limits_{i=1}^{n}x_{i}^{2}$
that is
(4)
$\displaystyle \frac{1}{4}\sum\limits_{i=1}^{n}(x_{i}+x_{i+1})^{2}\leq \sum\limits_{i=1}^{n}x_{i}^{2},$
so that, using (2) and (4), we conclude that
$\displaystyle \sum\limits_{i=1}^{n}(x_{i}+x_{i+1})^{2}+4nab\leq 4(a+b)\sum\limits_{i=1}^{n}x_{i}.$
For the equality case note that in (3) all the $x_{i}$ must have the same value and from (1) it's clear that this value can only be $a$ or $b.$
### Solution 3
Since $x,y,z\in [a,b],\,$ there are $\lambda_1,\lambda_2,\lambda_3\in [0,1]\,$ such that
$x=a\lambda_1+b(1-\lambda_1),\,y=a\lambda_2+b(1-\lambda_2),\,z=a\lambda_3+b(1-\lambda_3).$
Thus
$\displaystyle s=x+y+z=a\sum_{k=1}^3\lambda_k+b(3-\sum_{k=1}^3\lambda_k)=3\left(\lambda a+ b(1-\lambda)\right),$
where $\displaystyle \lambda=\frac{\lambda_1+\lambda_2+\lambda_3}{3}\in [0,1].\,$
\displaystyle \begin{align} \sum_{cycl}(x+y)^2 &= \sum_{cycl}(s-x)^2=3s^2-2s\cdot s+\sum_{cycl}x^2\\ &=s^2+\sum_{cycl}x^2. \end{align}
Then $\displaystyle s^2+\sum_{cycl}x^2+12ab\le 4(a+b)\sum_{cycl}x\,$ is equivalent to, say,
\displaystyle\begin{align} T&=9(\lambda a+(1-\lambda)b)^2+12ab-4(a+b)3(\lambda a+(1-\lambda)b)\\ &\qquad\qquad\qquad\qquad\qquad+\sum_{k=1}^3(\lambda_k a+(1-\lambda_k)b)^2\le 0. \end{align}
Using Jensen's inequality,
\displaystyle \begin{align} (\lambda a+(1-\lambda)b)^2&\le \lambda a^2+(1-\lambda)b^2\\ \sum_{k=1}^3(\lambda_k a+(1-\lambda_k)b)^2&\le\left(\sum_{k=1}^3\lambda_k\right) a^2+\left(\sum_{k=1}^3(1-\lambda_k)\right)b^2\\ &=3(\lambda a^2+(1-\lambda)b^2). \end{align}
Bringing it all together,
\displaystyle \begin{align} T\,&\le 9(\lambda a^2+(1-\lambda)b^2)+12ab+3(\lambda a^2+(1-\lambda)b^2)\\ &\qquad\qquad\qquad -12(a+b)((\lambda a+(1-\lambda)b)\\ &=12[\lambda a^2+(1-\lambda)b^2-(a+b)(\lambda a+(1-\lambda)b)]+12ab\\ &=12[\lambda(-ab)+(1-\lambda)(-ab)]+12ab=0. \end{align}
Using the same approach we obtain a generalization:
For $x_k\in[a,b],\,$ $k=1,2,\ldots,n,$
$\displaystyle \sum_{i=1}^n\left(\sum_{k=1}^nx_k-x_i\right)^2+n(n-1)^2ab\le (n-1)^2(a+b)\sum_{k=1}^nx_k.$
### Solution 4
Let $F(x,y,z)={\rm LHS}-{\rm RHS}.\,$ $F\,$ is convex in each variable ($\partial^2F/\partial x^2=4>0$), hence the maximum over the box $[a,b]^3\,$ is attained at a vertex. Checking the vertices (by symmetry it suffices to check $(a,a,a),\,$ $(a,a,b),\,$ $(a,b,b),\,$ $(b,b,b)$) gives $F(a,a,b)=F(a,b,b)=-2(a-b)^2\le 0\,$ and $F(a,a,a)=F(b,b,b)=0,\,$ QED.
Same argument generalizes to n variables, with $12\,$ replaced with $4n.$
### Solution 5
We have
$\displaystyle f=\frac{(x+y)^2+(x+z)^2+(y+z)^2+12 a b}{x+y+z}-4 (a+b)$
$f$ finds its extrema at $\nabla f=0$
$\displaystyle \left( \begin{array}{c} \displaystyle \frac{2 \left(-6 a b+x^2+2 x (y+z)+y z\right)}{(x+y+z)^2} =0\\ \displaystyle \frac{2 (-6 a b+x (2 y+z)+y (y+2 z))}{(x+y+z)^2} =0\\ \displaystyle 2-\frac{2 \left(6 a b+x^2+x y+y^2\right)}{(x+y+z)^2} =0\\ \end{array} \right)$
Solving, we get the critical point $x=y=z=\sqrt{a}\sqrt{b}$, the geometric mean of $a$ and $b$, which is the minimum: $f^*=-4 \left(\sqrt{a} - \sqrt{b}\right)^2$.
We can show that the function $f$ increases away from $f^*$, with maximum $f=0$ in the range where $\left\{x=a,y=a,z=a\right\}$ or $\left\{x=b,y=b,z=b\right\}$.
### Acknowledgment
Dorin Marghidanu has kindly posted this problem of his at the CutTheKnotMath facebook page and then commented with his solution (Solution 1) and the links to the solutions by Giulio Franco (Solution 2) and Marian Dinca (Solution 3); the latter offered a different kind of generalization than the other two. Solution 4 is by Lorenzo Villa; Solution 5 is by N. N. Taleb.
https://mattbaker.blog/2013/07/03/quadratic-reciprocity-and-zolotarevs-lemma/ | # Quadratic reciprocity and Zolotarev’s Lemma
I want to explain a beautiful proof of the Law of Quadratic Reciprocity from c. 1874 due to Egor Ivanovich Zolotarev (1847-1878). Some time ago I reformulated Zolotarev’s argument (as presented here) in terms of dealing cards and I posted a little note about it on my web page. After reading my write-up (which was unfortunately opaque in a couple of spots), Jerry Shurman was inspired to rework the argument and he came up with this elegant formulation which I think may be a “proof from the book”. The following exposition is my own take on Jerry’s argument. The proof requires some basic facts about permutations, all of which are proved in this handout.
Let $m$ and $n$ be odd relatively prime positive integers. You have a stack of $mn$ playing cards numbered 0 through $mn-1$ and you want to deal them onto the table in an $m \times n$ rectangular array. Consider the following three ways of doing this:
Row deal ($\rho$) : Deal the cards into rows, going left to right and top to bottom.
Column deal ($\kappa$): Deal the cards into columns, going top to bottom and left to right.
Diagonal deal ($\delta$): Deal the cards diagonally down and to the right, wrapping around from bottom to top or right to left whenever necessary.
There are corresponding actions which undo the above deals, which we will denote by Row pickup ($\rho^{-1}$), Column pickup ($\kappa^{-1}$), and Diagonal pickup ($\delta^{-1}$). Note that in a pickup move each successive card is placed under the previous one.
We combine these basic moves as follows, obtaining three different permutations of the rectangular array which we denote by $\alpha$, $\beta$, and $\gamma$:
$\alpha = \delta \circ \rho^{-1}$ is a row pickup followed by a diagonal deal.
$\beta = \delta \circ \kappa^{-1}$ is a column pickup followed by a diagonal deal.
$\gamma = \kappa \circ \rho^{-1}$ is a row pickup followed by a column deal.
For later use, we note that $\alpha$ permutes each column separately and $\beta$ permutes each row separately. (Try it with actual cards and you will easily convince yourself of this!)
It is clear that $\alpha = \beta \circ \gamma$, i.e., doing $\gamma$ and then $\beta$ is the same as doing $\alpha$. Since the sign of a permutation is a homomorphism, we have ${\rm sign}(\alpha) = {\rm sign}(\beta) \cdot {\rm sign}(\gamma)$. Believe it or not, this self-evident observation is the Law of Quadratic Reciprocity in disguise!
Let me explain.
Claim 1: ${\rm sign}(\alpha)$ is equal to the sign of the permutation of the integers modulo $m$ induced by multiplication by $n$, which we write as $\left[ \frac{n}{m} \right]$. (By symmetry, we have ${\rm sign}(\beta) = \left[ \frac{m}{n} \right]$.)
Claim 2: ${\rm sign}(\gamma) = (-1)^{\frac{(m-1)(n-1)}{4}}.$
From these two claims, we obtain:
Zolotarev’s Reciprocity Law: If $m$ and $n$ are relatively prime odd positive integers, then
$\left[ \frac{n}{m} \right] = (-1)^{\frac{(m-1)(n-1)}{4}} \left[ \frac{m}{n} \right].$
The connection with Quadratic Reciprocity is via:
Zolotarev’s Lemma: If $p$ is an odd prime and $a$ is a positive integer not divisible by $p$, then $\left[ \frac{a}{p} \right] = \left( \frac{a}{p} \right)$, where $\left( \frac{a}{p} \right)$ denotes the Legendre symbol.
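Zolotarev's Lemma is easy to spot-check by brute force: compute $\left[ \frac{a}{p} \right]$ as the sign of the multiplication-by-$a$ permutation and compare it with the Legendre symbol via Euler's criterion $a^{(p-1)/2} \bmod p$. A plain-Python sketch:

```python
def perm_sign(perm):
    """Sign of a permutation (given as a list of images) via its inversion count."""
    inversions = sum(1 for i in range(len(perm))
                     for j in range(i + 1, len(perm)) if perm[i] > perm[j])
    return -1 if inversions % 2 else 1

def zolotarev(a, p):
    """[a/p]: sign of x -> a*x mod p as a permutation of {1, ..., p-1}."""
    return perm_sign([a * x % p for x in range(1, p)])

def legendre(a, p):
    """Legendre symbol via Euler's criterion (p an odd prime not dividing a)."""
    return 1 if pow(a, (p - 1) // 2, p) == 1 else -1

agree = all(zolotarev(a, p) == legendre(a, p)
            for p in (3, 5, 7, 11, 13, 17)
            for a in range(1, p))
```

The two symbols agree for every residue class and every small prime tested.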
Combining Zolotarev’s Reciprocity Law and Zolotarev’s Lemma, we immediately obtain:
The Law of Quadratic Reciprocity: If $p$ and $q$ are distinct odd primes, then
$\left( \frac{q}{p} \right) = (-1)^{\frac{(p-1)(q-1)}{4}} \left( \frac{p}{q} \right).$
We now explain how to prove the two claims, as well as Zolotarev’s Lemma. For the explanation, it is helpful to identify the initial stack of cards with the set of integers $\{ 0,1,2,\ldots, mn-1 \}$. Also, by indexing the rows of the array by $0,1,\ldots,m-1$ and the columns by $0,1,\ldots,n-1$, we can identify the array with the set ${\mathbf Z}/m {\mathbf Z} \times {\mathbf Z}/n {\mathbf Z}$. In this way we can identify $\rho$, $\kappa$, and $\delta$ with the following maps from $\{ 0,1,2,\ldots, mn-1 \}$ to ${\mathbf Z}/m {\mathbf Z} \times {\mathbf Z}/n {\mathbf Z}$ (where $x \in \{ 0,1,\ldots,m-1 \}$ and $y \in \{ 0,1,\ldots,n-1 \}$):
$\rho(nx+y) = (x {\rm \; mod \;} m, y {\rm \; mod \;} n)$,
$\kappa(x+my) = (x {\rm \; mod \;} m, y {\rm \; mod \;} n)$,
$\delta(z) = (z {\rm \; mod \;} m, z {\rm \; mod \;} n)$.
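With these explicit formulas, the whole setup fits in a few lines of plain Python: realize $\alpha$, $\beta$, $\gamma$ as permutations of $\{0,\dots,mn-1\}$ (array positions taken in row-major order) and check the factorization $\alpha = \beta \circ \gamma$ and the two Claims numerically. The sample values of $m$ and $n$ below are arbitrary odd coprime choices:

```python
def perm_sign(perm):
    inv = sum(1 for i in range(len(perm))
              for j in range(i + 1, len(perm)) if perm[i] > perm[j])
    return -1 if inv % 2 else 1

def deal_permutations(m, n):
    """alpha, beta, gamma as permutations of {0,...,mn-1}; the array position
    (x, y) is encoded in row-major order as n*x + y."""
    idx = lambda pos: n * pos[0] + pos[1]
    rho   = lambda z: (z // n, z % n)     # row deal
    kappa = lambda z: (z % m, z // m)     # column deal
    delta = lambda z: (z % m, z % n)      # diagonal deal
    size = m * n
    alpha, beta, gamma = [0]*size, [0]*size, [0]*size
    for z in range(size):
        alpha[idx(rho(z))]  = idx(delta(z))   # alpha = delta o rho^(-1)
        beta[idx(kappa(z))] = idx(delta(z))   # beta  = delta o kappa^(-1)
        gamma[idx(rho(z))]  = idx(kappa(z))   # gamma = kappa o rho^(-1)
    return alpha, beta, gamma

m, n = 5, 9
alpha, beta, gamma = deal_permutations(m, n)

# alpha = beta o gamma, hence sign(alpha) = sign(beta) * sign(gamma)
assert all(alpha[i] == beta[gamma[i]] for i in range(m * n))
assert perm_sign(alpha) == perm_sign(beta) * perm_sign(gamma)

# Claim 1: sign(alpha) = [n/m], the sign of multiplication by n on Z/mZ
assert perm_sign(alpha) == perm_sign([n * x % m for x in range(m)])
# Claim 2: sign(gamma) = (-1)^((m-1)(n-1)/4)
assert perm_sign(gamma) == (-1) ** ((m - 1) * (n - 1) // 4)
```

Trying other odd coprime pairs gives the same agreement, exactly as the two Claims predict.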
Proof of Claim 1: By the above formulas we have $\alpha(x,y) = (nx+y,y)$ and $\beta(x,y) = (x,x+my)$. It follows that $\alpha$ is the product of the $n$ “column permutations” $\alpha_y \colon x \mapsto nx+y \pmod{m}$. But each $\alpha_y$ is a composition of multiplication by $n$ modulo $m$ with the permutation $x \mapsto x+y \pmod{m}$, which always has sign $+1$ since it’s either trivial or a product of ${\rm GCD}(y,m)$ cycles, each of length $m/{\rm GCD}(y,m)$, and we’re assuming that $m$ is odd. Since $n$ is odd as well, we obtain the claim about ${\rm sign}(\alpha)$, and the corresponding assertion for $\beta$ follows by symmetry.
Proof of Claim 2: The sign of a permutation $\tau$ of a totally ordered finite set is $(-1)$ raised to the number of inversions of $\tau$. (An inversion is a pair $(a,b)$ with $a < b$ and $\tau(b) < \tau(a)$.) We put the column-major order (the order depicted in the second figure above) on our array. Note that for $a,b$ in the array, $\gamma(b)$ is less than $\gamma(a)$ in column-major order if and only if $b < a$ in row-major order (the order depicted in the first figure above). So $\{ a,b \}$ is an inversion pair in column-major order if and only if $a$ is below and to the left of $b$. The number of such inversion pairs is $\binom{m}{2} \cdot \binom{n}{2}$, since each pair of 2-element subsets $\{ x,x' \}$ of $\{0,1,\ldots,m-1\}$ and $\{ y,y' \}$ of $\{0,1,\ldots,n-1\}$ gives rise to a unique inversion (by ordering the elements $a=(x,y),b=(x',y')$ so that $x < x'$ and $y' < y$). The claim follows since $m$ and $n$ are assumed to be odd: in that case $\binom{m}{2}\binom{n}{2} = \frac{m(m-1)}{2} \cdot \frac{n(n-1)}{2} \equiv \frac{(m-1)(n-1)}{4} \pmod{2}$. (Note that we do not require $m$ and $n$ to be relatively prime for this argument.)
Proof of Zolotarev’s Lemma:
Let $g$ be a primitive root modulo $p$, and write $a \equiv g^k \pmod{p}$. Then the permutation of $\{ 1,2,\ldots,p-1 \}$ given by multiplication by $a$ modulo $p$ has the same sign as the permutation of $\{ 0,1,\ldots,p-2 \}$ given by $x \mapsto x + k$ modulo $p-1$, since the two are conjugate under the discrete-logarithm bijection $g^x \leftrightarrow x$. The latter is a product of $c:= {\rm GCD}(k,p-1)$ cycles, each of length $\ell := (p-1)/{\rm GCD}(k,p-1)$, so $\left[ \frac{a}{p} \right] =(-1)^{c(\ell - 1)}.$ If $c$ is even then $c(\ell - 1)$ is also even, and if $c$ is odd then since $p$ is odd and $\ell c = p-1$, $\ell$ must be even and hence $c(\ell - 1)$ is odd. Note also that $c$ is even if and only if $k$ is even. It follows that $\left[ \frac{a}{p} \right] = 1$ if and only if $k$ is even, which happens if and only if $\left( \frac{a}{p} \right) = 1$.
Concluding observations:
2) The supplement to the Law of Quadratic Reciprocity determining the Legendre symbol $\left( \frac{2}{p} \right)$ is also easily deduced from Zolotarev’s Lemma, which shows that $\left( \frac{2}{p} \right)$ is $-1$ to the number of inversions of multiplication by 2 modulo $p$. A pair $\{ a, b \}$ with $a < b$ gets inverted if and only if $1 \leq a \leq (p-1)/2$ and $(p-1)/2 + 1 \leq b \leq (p-1)/2 + a$. Thus the number of inversions is $1+2+\cdots+(p-1)/2 = (p^2 - 1)/8$, which by inspection is even if $p$ is congruent to 1 or 7 modulo 8 and odd otherwise.
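The inversion count in observation 2) can be confirmed directly (plain Python):

```python
def doubling_inversions(p):
    """Number of inversions of x -> 2x mod p on {1, ..., p-1}."""
    perm = [2 * x % p for x in range(1, p)]
    return sum(1 for i in range(len(perm))
               for j in range(i + 1, len(perm)) if perm[i] > perm[j])

counts = {p: doubling_inversions(p) for p in (3, 5, 7, 11, 13, 17, 19, 23)}
matches = all(count == (p * p - 1) // 8 for p, count in counts.items())
```

For every prime tested the count equals $(p^2-1)/8$, as derived above.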
3) If we identify the set $\{ 0,1,2,\ldots, mn-1 \}$ with the set ${\mathbf Z} / mn {\mathbf Z}$ of integers modulo $mn$, the map $\delta$ is just the canonical ring isomorphism from ${\mathbf Z} / mn {\mathbf Z}$ to ${\mathbf Z}/m {\mathbf Z} \times {\mathbf Z}/n {\mathbf Z}$ afforded by the Chinese Remainder Theorem.
4) An alternative, more algebraic argument, for Claim 2 is as follows. Since the bijection $\rho$ identifies the permutation $\gamma = \kappa \rho^{-1}$ of the array with the permutation $\lambda = \rho^{-1} \gamma \rho = \rho^{-1} \kappa$ of the totally ordered set $\{ 0,1,2,\ldots, mn-1 \}$, it suffices to compute the number of inversions of $\lambda$. Using the explicit formula $\lambda(x+my) = nx+y$ we see that inversions of $\lambda$ correspond to ordered pairs $(x,y), (x',y')$, with $x,x' \in \{ 0,1,\ldots,m-1 \}$ and $y,y' \in \{ 0,1,\ldots,n-1 \}$, such that $x+my < x'+my'$ and $nx'+y' < nx+y$. The latter two conditions together are easily seen to be equivalent to $x < x'$ and $y' < y$.
5) A more group-theoretic proof of Zolotarev’s Lemma is as follows. We observe that $\left[ \frac{\cdot}{p} \right]$ is a surjective homomorphism from $({\mathbf Z} / p{\mathbf Z})^\times$ to $\{ \pm 1 \}$; surjectivity follows from the fact that if $g$ is a primitive root mod $p$ (i.e., a cyclic generator of $({\mathbf Z} / p{\mathbf Z})^\times$) then $\left[ \frac{g}{p} \right]$ is a $(p-1)$-cycle and thus has signature $-1$. The kernel of $\left[ \frac{\cdot}{p} \right]$ is therefore a subgroup of $({\mathbf Z} / p{\mathbf Z})^\times$ of index 2, but the only such subgroup is the group of quadratic residues. Thus $\left[ \frac{\cdot}{p} \right]$ coincides with the Legendre symbol $\left( \frac{\cdot}{p} \right)$.
6) Zolotarev’s Lemma generalizes to the statement that if $m,n$ are relatively prime odd positive integers then $\left[ \frac{m}{n} \right]$ is equal to the Jacobi symbol $\left( \frac{m}{n} \right)$, and the proof above then gives quadratic reciprocity for the Jacobi symbol (which is used for rapid computation of Legendre symbols). Note that in general $\left[ \frac{m}{n} \right]$ is the sign of multiplication by $m$ on ${\mathbf Z} / n{\mathbf Z}$ and not on $({\mathbf Z} / n{\mathbf Z})^\times$, though when $n$ is prime these coincide.
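The generalization in 6) can be spot-checked the same way. The sketch below (plain Python) computes $\left[ \frac{m}{n} \right]$ as the sign of multiplication by $m$ on all of ${\mathbf Z}/n{\mathbf Z}$ and compares it with the Jacobi symbol, implemented here via the standard binary reduction algorithm:

```python
from math import gcd

def perm_sign(perm):
    inv = sum(1 for i in range(len(perm))
              for j in range(i + 1, len(perm)) if perm[i] > perm[j])
    return -1 if inv % 2 else 1

def zolotarev_symbol(m, n):
    """[m/n]: sign of x -> m*x mod n on Z/nZ (n odd, gcd(m, n) = 1)."""
    return perm_sign([m * x % n for x in range(n)])

def jacobi(a, n):
    """Jacobi symbol (a/n) for odd positive n, by the usual reduction steps."""
    a %= n
    result = 1
    while a:
        while a % 2 == 0:            # pull out factors of 2: (2/n) = (-1)^((n^2-1)/8)
            a //= 2
            if n % 8 in (3, 5):
                result = -result
        a, n = n, a                  # quadratic reciprocity flip
        if a % 4 == 3 and n % 4 == 3:
            result = -result
        a %= n
    return result if n == 1 else 0

agree = all(zolotarev_symbol(m, n) == jacobi(m, n)
            for n in range(3, 40, 2)
            for m in range(1, n, 2) if gcd(m, n) == 1)
```

The two symbols agree for every odd coprime pair in the range tested.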
[Updated November 26, 2013.]
## 10 thoughts on “Quadratic reciprocity and Zolotarev’s Lemma”
1. martyweissman says:
Hi Matt,
I’m happy to see the growing popularity of Zolotarev’s proof! When I teach it, I use the following approach/notation (and this will appear in my Illustrated Theory of Numbers book eventually). It’s not as magic-tricky as the card deal, but the notation is useful for other parts of teaching number theory.
Fix two odd primes p and q throughout. Let a be an integer between 0 and p-1 and b an integer between 0 and q-1. Let S be the set of integers between 0 and pq-1. Define:
[a,b] to be the unique element of S congruent to a modulo p and b modulo q;
[a,b> to be the integer bp + a; (note that [a,b> is congruent to a, modulo p. Hence the bracket agreeing with [a,b] on the a-side)
<a,b] to be the integer aq+b. (same comment as before).
Now the three permutations of S are:
L sends [a,b] to <a,b], R sends [a,b] to [a,b>, and F sends <a,b] to [a,b>.
I find this approach works really well, especially if the notation [a,b] is introduced earlier in the context of the Chinese remainder theorem. Besides the notation, I think my usual approach is the same proof as yours (and Zolotarev’s).
• martyweissman says:
Uh oh – I think my plaintext math was interpreted as an HTML link. Sorry about that! You might guess that L sends [a,b] to \langle a,b] and R sends [a,b] to [a,b \rangle and F sends \langle a,b] to [a,b \rangle.
Whoops!
http://zbmath.org/?q=an:1119.53006&format=complete | # zbMATH — the first resource for mathematics
On the geometric structure of hypersurfaces of conullity two in Euclidean space. (English) Zbl 1119.53006
Mladenov, Ivaïlo (ed.) et al., Proceedings of the 8th international conference on geometry, integrability and quantization, Sts. Constantine and Elena, Bulgaria, June 9–14, 2006. Sofia: Bulgarian Academy of Sciences (ISBN 978-954-8495-37-0/pbk). 169-183 (2007).
Summary: We introduce the notion of a semi-developable surface of codimension two as a generalization of the notion of a developable surface of codimension two. We give a characterization of the developable and semi-developable surfaces in terms of their second fundamental forms. We prove that any hypersurface of conullity two in Euclidean space is locally a foliation of developable or semi-developable surfaces of codimension two.
##### MSC:
53A07 Higher-dimensional and -codimensional surfaces in Euclidean $n$-space
##### Keywords:
semi-developable; second fundamental forms
https://www.mathcounterexamples.net/a-diagonalizable-matrix-over-q-but-not-over-z/ | # A diagonalizable matrix over Q but not over Z
Let us recall that a square matrix $$A$$ is called diagonalizable if it is similar to a diagonal matrix, i.e. if there exists an invertible matrix $$P$$ such that $$P^{-1}AP$$ is a diagonal matrix.
It is well known that a square matrix of dimension $$n \ge 1$$ over a field $$\mathbb{K}$$ that has $$n$$ distinct eigenvalues is diagonalizable. The proof is based on the fact that eigenvectors associated to different eigenvalues are linearly independent. Therefore a family of $$n$$ eigenvectors associated to the $$n$$ distinct eigenvalues is a basis. And over a field the matrix of the change basis from the standard basis to the basis of eigenvectors is invertible.
This doesn’t remain true if we consider a matrix over the ring of the integers $$\mathbb{Z}$$, even if the matrix is diagonalizable over the field $$\mathbb{Q}$$. For a counterexample we consider the matrix:
$A=\left(\begin{array}{cc} 1 & 1\\ 0 & -1\\ \end{array}\right)$ $$A$$ is an upper triangular matrix with distinct values on the main diagonal. Hence the elements of the main diagonal are eigenvalues. In the case of $$A$$, that means that $$1$$ and $$-1$$ are eigenvalues. As dimension of $$A$$ is equal to $$2$$, $$A$$ is diagonalizable on the field of the rational numbers $$\mathbb{Q}$$ according to above statement. More precisely, the matrix
$$P=\left(\begin{array}{cc} 1 & 1\\ 0 & -2\\ \end{array}\right)$$ is a matrix of eigenvectors, $$P^{-1}=\left(\begin{array}{cc} 1 & 1/2\\ 0 & -1/2\\ \end{array}\right)$$ and $$P^{-1}AP=\left(\begin{array}{cc} 1 & 0\\ 0 & -1\\ \end{array}\right)=D$$ is diagonal. However it is not possible to diagonalize $$A$$ over the ring $$\mathbb{Z}$$. If that was the case, an invertible matrix $$Q=\left(\begin{array}{cc} a & c\\ b & d\\ \end{array}\right)$$ would exist satisfying the equality $$AQ=QD$$.
The reader will verify that it forces $$Q$$ to be written as $$Q=\left(\begin{array}{cc} a & c\\ 0 & -2c\\ \end{array}\right)$$ with $$a,c$$ integers. Hence $$\det(Q)=-2ac$$ is even and therefore cannot be an invertible element of $$\mathbb{Z}$$, so $$Q$$ cannot be an invertible matrix. We can finally conclude that $$A$$ is not diagonalizable over $$\mathbb{Z}$$. However, we notice that $$A$$ has two eigenvectors over $$\mathbb{Z}$$ and that $$A$$ is invertible over the ring of the integers.
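Both computations are quick to replay with a CAS. A sketch assuming `sympy` is available: verify $P^{-1}AP=D$ over $\mathbb{Q}$, then solve $AQ=QD$ for a general matrix $Q$ and observe the forced form with even determinant.

```python
import sympy as sp

A = sp.Matrix([[1, 1], [0, -1]])
P = sp.Matrix([[1, 1], [0, -2]])
D = sp.diag(1, -1)

# Diagonalizable over Q: P^(-1) A P = D
assert P.inv() * A * P == D

# Over Z: impose A*Q = Q*D on a general matrix Q = [[a, c], [b, d]]
a, b, c, d = sp.symbols('a b c d')
Q = sp.Matrix([[a, c], [b, d]])
forced = sp.solve(list(A * Q - Q * D), [b, d], dict=True)[0]

# forced form: b = 0 and d = -2c, so det(Q) = -2ac is even
# and hence never a unit of Z
detQ = Q.subs(forced).det()
```

The solver reproduces the forced shape of $Q$, and the determinant comes out as $-2ac$.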
https://online.stat.psu.edu/statprogram/book/export/html/574 | # M.4 Matrix Inverse
Inverse of a Matrix
The matrix B is the inverse of matrix A if $AB = BA = I$. This is often denoted as $B = A^{-1}$ or $A = B^{-1}$. When taking the inverse of the product of two matrices A and B,
$(AB)^{-1} = B^{-1} A^{-1}$
When taking the determinant of the inverse of the matrix A,
$det(A^{-1}) = \frac{1}{det(A)} = det(A)^{-1}$
Note that not all matrices have inverses. For a matrix A to have an inverse, that is to say for A to be invertible, A must be a square matrix and $det(A) \neq 0$. For that reason, invertible matrices are also called nonsingular matrices.
Two examples are shown below
$det(A) = \begin{vmatrix} 4 & 5 \\ -2 & 1 \end{vmatrix} = 4*1-5*(-2) = 14 \neq 0$
$det(C) = \begin{vmatrix} 1 & 2 & -1\\ 5 & 3 & 2 \\ 6 & 0 & 6 \end{vmatrix} = -2 \begin{vmatrix} 5 & 2 \\ 6 & 6 \end{vmatrix} + 3 \begin{vmatrix} 1 & -1\\ 6 & 6 \end{vmatrix} - 0 \begin{vmatrix} 1 & -1\\ 5 & 2 \end{vmatrix}$
$det(C)= - 2(5*6-2*6) + 3(1*6-(-1)*6) - 0(1*2-(-1)*5) = 0$
So C is not invertible, because its determinant is zero. However, A is an invertible matrix, because its determinant is nonzero. To calculate the inverse of a 2 × 2 matrix, use the formula below.
$A^{-1} = \begin{pmatrix} a_{1,1} & a_{1,2}\\ a_{2,1} & a_{2,2} \end{pmatrix} ^{-1} = \frac{1}{det(A)} \begin{pmatrix} a_{2,2} & -a_{1,2} \\ -a_{2,1} & a_{1,1} \end{pmatrix} = \frac{1}{a_{1,1} * a_{2,2} - a_{1,2}*a_{2,1}} \begin{pmatrix} a_{2,2} & -a_{1,2} \\ -a_{2,1} & a_{1,1} \end{pmatrix}$
For example
$A^{-1} = \begin{pmatrix} 4 & 5 \\ -2 & 1 \end{pmatrix} ^{-1} = \frac{1}{det(A)} \begin{pmatrix} 1 & -5 \\ 2 & 4 \end{pmatrix} = \frac{1}{4*1 - 5*(-2)} \begin{pmatrix} 1 & -5 \\ 2 & 4 \end{pmatrix} = \begin{pmatrix} \frac{1}{14} & \frac{-5}{14} \\ \frac{2}{14} & \frac{4}{14} \end{pmatrix}$
For finding the matrix inverse in general, you can use the Gauss–Jordan algorithm. However, this is a rather complicated algorithm, so usually one relies upon the computer or calculator to find the matrix inverse.
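For the 2 × 2 case the formula is easy to code up and check; a short sketch using only `fractions` from the Python standard library:

```python
from fractions import Fraction

def inverse_2x2(M):
    """Inverse of a 2x2 matrix via the adjugate formula; raises if singular."""
    (a11, a12), (a21, a22) = M
    det = a11 * a22 - a12 * a21
    if det == 0:
        raise ValueError("matrix is singular (determinant is zero)")
    d = Fraction(1, det)
    return [[ a22 * d, -a12 * d],
            [-a21 * d,  a11 * d]]

def matmul_2x2(M, N):
    return [[sum(M[i][k] * N[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[4, 5], [-2, 1]]
A_inv = inverse_2x2(A)          # entries match the worked example (reduced)
I = matmul_2x2(A, A_inv)        # identity matrix
```

Multiplying $A$ by the computed inverse returns the identity, confirming the formula.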
[1] Link ↥ Has Tooltip/Popover Toggleable Visibility | 2020-05-31T17:09:25 | {
"domain": "psu.edu",
"url": "https://online.stat.psu.edu/statprogram/book/export/html/574",
"openwebmath_score": 0.9307947158813477,
"openwebmath_perplexity": 369.41992951891507,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9920620061332853,
"lm_q2_score": 0.6584174938590246,
"lm_q1q2_score": 0.653190979831034
} |
https://en.wikipedia.org/wiki/257_(number) | 257 (number)
Cardinal: two hundred fifty-seven
Ordinal: 257th (two hundred fifty-seventh)
Factorization: prime
Prime: yes
Greek numeral: ΣΝΖ´
Roman numeral: CCLVII
Binary: $100000001_2$
Ternary: $100112_3$
Quaternary: $10001_4$
Quinary: $2012_5$
Senary: $1105_6$
Octal: $401_8$
Duodecimal: $195_{12}$
Vigesimal: $\mathrm{CH}_{20}$
Base 36: $75_{36}$
257 (two hundred [and] fifty-seven) is the natural number following 256 and preceding 258.
In mathematics
257 is a prime number of the form ${\displaystyle 2^{2^{n}}+1,}$ specifically with n = 3, and therefore a Fermat prime. Thus a regular polygon with 257 sides is constructible with compass and unmarked straightedge. It is currently the second largest known Fermat prime.[1]
It is also a balanced prime,[2] an irregular prime,[3] a prime that is one more than a square,[4] and a Jacobsthal–Lucas number.[5]
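Each of the number-theoretic facts above can be checked in a few lines of plain Python:

```python
def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

n = 257
fermat_prime = n == 2 ** (2 ** 3) + 1 and is_prime(n)   # F_3 = 2^(2^3) + 1
one_more_than_square = n == 16 ** 2 + 1

# balanced prime: the average of the nearest primes below and above
below = next(k for k in range(n - 1, 1, -1) if is_prime(k))
above = next(k for k in range(n + 1, 2 * n) if is_prime(k))
balanced = below + above == 2 * n                       # 251 + 263 = 514

# Jacobsthal-Lucas numbers: j(k) = j(k-1) + 2*j(k-2), starting 2, 1
jl = [2, 1]
while jl[-1] < n:
    jl.append(jl[-1] + 2 * jl[-2])
jacobsthal_lucas = n in jl
```

All four flags come out true, matching the properties cited above.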
There are exactly 257 combinatorially distinct convex polyhedra with eight vertices (or polyhedral graphs with eight nodes).[6]
https://opentextbc.ca/intermediatealgebraberg/chapter/4-5-geometric-word-problems/ | Chapter 4: Inequalities
# 4.5 Geometric Word Problems
It is common to run into geometry-based word problems that look at either the interior angles, perimeter, or area of shapes. When looking at interior angles, the sum of the angles of any polygon can be found by taking the number of sides, subtracting 2, and then multiplying the result by 180°. In other words:
$\text{sum of interior angles} = 180^{\circ} \times (\text{number of sides} - 2)$
This means the interior angles of a triangle add up to 180° × (3 − 2), or 180°. Any four-sided polygon will have interior angles adding to 180° × (4 − 2), or 360°. A chart can be made of these:
$\begin{array}{rrrrrr} \text{3 sides:}&180^{\circ}&\times&(3-2)&=&180^{\circ} \\ \text{4 sides:}&180^{\circ}&\times&(4-2)&=&360^{\circ} \\ \text{5 sides:}&180^{\circ}&\times&(5-2)&=&540^{\circ} \\ \text{6 sides:}&180^{\circ}&\times&(6-2)&=&720^{\circ} \\ \text{7 sides:}&180^{\circ}&\times&(7-2)&=&900^{\circ} \\ \text{8 sides:}&180^{\circ}&\times&(8-2)&=&1080^{\circ} \\ \end{array}$
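The chart is just the formula above evaluated at each number of sides; in code (plain Python):

```python
def interior_angle_sum(sides):
    """Sum of the interior angles of a polygon, in degrees."""
    return 180 * (sides - 2)

chart = {sides: interior_angle_sum(sides) for sides in range(3, 9)}
# {3: 180, 4: 360, 5: 540, 6: 720, 7: 900, 8: 1080}
```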
Example 4.5.1
The second angle $(A_2)$ of a triangle is double the first $(A_1).$ The third angle $(A_3)$ is 40° less than the first $(A_1).$ Find the three angles.
The relationships described in equation form are as follows:
$A_2 = 2A_1 \text{ and } A_3 = A_1 - 40^{\circ}$
Because the shape in question is a triangle, the interior angles add up to 180°. Therefore:
$A_1 + A_2 + A_3 = 180^{\circ}$
Which can be simplified using substitutions:
$A_1 + (2A_1) + (A_1 - 40^{\circ}) = 180^{\circ}$
Which leaves:
$\begin{array}{rrrrrrrrrrr} 2A_1&+&A_1&+&A_1&-&40^{\circ}&=&180^{\circ}&&&\\ &&&&4A_1&-&40^{\circ}&=&180^{\circ}&&\\ \\ &&&&&&4A_1&=&180^{\circ}&+&40^{\circ}\\ \\ &&&&&&A_1&=&\dfrac{220^{\circ}}{4}&\text{or}&55^{\circ} \end{array}$
This means $A_2 = 2 (55^{\circ})$ or 110° and $A_3 = 55^{\circ}-40^{\circ}$ or 15°.
Common Geometric Shapes with Related Area and Perimeter Equations
| Shape | Area | Perimeter |
| --- | --- | --- |
| Circle | $\pi r^2$ | $2\pi r$ |
| Square | $s^2$ | $4s$ |
| Rectangle | $lw$ | $2l+2w$ |
| Triangle | $\dfrac{1}{2}bh$ | $s_1+s_2+s_3$ |
| Rhombus | $bh$ | $4b$ |
| Trapezoid | $\dfrac{1}{2}\left(l_1+l_2\right)h$ | $l_1+l_2+h_1+h_2$ |
| Parallelogram | $bh$ | $2h_1+2b$ |
| Regular polygon ($n$-gon) | $\left(\dfrac{1}{2}sh\right)(\text{number of sides})$ | $s(\text{number of sides})$ |
Another common geometry word problem involves perimeter, or the distance around an object. For example, consider a rectangle, for which $\text{perimeter} = 2l + 2w.$
Example 4.5.2
If the length of a rectangle is 5 m less than twice the width, and the perimeter is 44 m long, find its length and width.
The relationships described in equation form are as follows:
$L = 2W - 5 \text{ and } P = 44$
For a rectangle, the perimeter is defined by:
$P = 2 W + 2 L$
Substituting for $L$ and the value for the perimeter yields:
$44 = 2W + 2 (2W - 5)$
Which simplifies to:
$44 = 2W + 4W - 10$
Further simplify to find the length and width:
$\begin{array}{rrrrlrrrr} 44&+&10&=&6W&&&& \\ \\ &&54&=&6W&&&& \\ \\ &&W&=&\dfrac{54}{6}&\text{or}&9&& \\ \\ &\text{So}&L&=&2(9)&-&5&\text{or}&13 \\ \end{array}$
The width is 9 m and the length is 13 m.
Other common geometric problems are:
Example 4.5.3
A 15 m cable is cut into two pieces such that the first piece is four times larger than the second. Find the length of each piece.
The relationships described in equation form are as follows:
$P_1 + P_2 = 15 \text{ and } P_1 = 4P_2$
Combining these yields:
$\begin{array}{rrrrrrr} 4P_2&+&P_2&=&15&& \\ \\ &&5P_2&=&15&& \\ \\ &&P_2&=&\dfrac{15}{5}&\text{or}&3 \end{array}$
This means that $P_2 =$ 3 m and $P_1 = 4 (3),$ or 12 m.
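Each worked example above reduces to a single linear equation, so a CAS reproduces the answers directly. A sketch assuming `sympy` is available:

```python
import sympy as sp

# Example 4.5.1: triangle angles with A2 = 2*A1 and A3 = A1 - 40
A1 = sp.symbols('A1')
a1 = sp.solve(sp.Eq(A1 + 2*A1 + (A1 - 40), 180), A1)[0]
angles = (a1, 2*a1, a1 - 40)              # 55, 110, 15 degrees

# Example 4.5.2: rectangle with L = 2W - 5 and perimeter 44
W = sp.symbols('W')
w = sp.solve(sp.Eq(2*W + 2*(2*W - 5), 44), W)[0]
length = 2*w - 5                          # W = 9 m, L = 13 m

# Example 4.5.3: 15 m cable cut so that P1 = 4*P2
P2 = sp.symbols('P2')
p2 = sp.solve(sp.Eq(4*P2 + P2, 15), P2)[0]
pieces = (4*p2, p2)                       # 12 m and 3 m
```

The solver returns exactly the values found by hand in the three examples.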
# Questions
For questions 1 to 8, write the formula defining each relation. Do not solve.
1. The length of a rectangle is 3 cm less than double the width, and the perimeter is 54 cm.
2. The length of a rectangle is 8 cm less than double its width, and the perimeter is 64 cm.
3. The length of a rectangle is 4 cm more than double its width, and the perimeter is 32 cm.
4. The first angle of a triangle is twice as large as the second and 10° larger than the third.
5. The first angle of a triangle is half as large as the second and 20° larger than the third.
6. The sum of the first and second angles of a triangle is half the amount of the third angle.
7. A 140 cm cable is cut into two pieces. The first piece is five times as long as the second.
8. A 48 m piece of hose is to be cut into two pieces such that the second piece is 5 m longer than the first.
For questions 9 to 18, write and solve the equation describing each relationship.
9. The second angle of a triangle is the same size as the first angle. The third angle is 12° larger than the first angle. How large are the angles?
10. Two angles of a triangle are the same size. The third angle is 12° smaller than the first angle. Find the measure of the angles.
11. Two angles of a triangle are the same size. The third angle is three times as large as the first. How large are the angles?
12. The second angle of a triangle is twice as large as the first. The measure of the third angle is 20° greater than the first. How large are the angles?
13. Find the dimensions of a rectangle if the perimeter is 150 cm and the length is 15 cm greater than the width.
14. If the perimeter of a rectangle is 304 cm and the length is 40 cm longer than the width, find the length and width.
15. Find the length and width of a rectangular garden if the perimeter is 152 m and the width is 22 m less than the length.
16. If the perimeter of a rectangle is 280 m and the width is 26 m less than the length, find its length and width.
17. A lab technician cuts a 12 cm piece of tubing into two pieces such that one piece is two times longer than the other. How long are the pieces?
18. An electrician cuts a 30 m piece of cable into two pieces. One piece is 2 m longer than the other. How long are the pieces?
https://phys.libretexts.org/TextBooks_and_TextMaps/Astronomy_and_Cosmology_TextMaps/Map%3A_Celestial_Mechanics_(Tatum)/2%3A_Conic_Sections/2.7%3A_The_General_Conic_Section | $$\require{cancel}$$
# 2.7: The General Conic Section
The equation $\frac{x^2}{a^2} + \frac{y^2}{b^2} = 1 \label{2.7.1} \tag{2.7.1}$
represents an ellipse whose major axis is along the $$x$$ axis and whose centre is at the origin of coordinates. But what if its centre is not at the origin, and if the major axis is at some skew angle to the $$x$$ axis? What will be the equation that represents such an ellipse? Figure $$\text{II.37}$$.
$$\text{FIGURE II.37}$$
If the centre is translated from the origin to the point $$(p, q)$$, the equation that represents the ellipse will be found by replacing $$x$$ by $$x − p$$ and $$y$$ by $$y − q$$. If the major axis is inclined at an angle θ to the $$x$$ axis, the equation that represents the ellipse will be found by replacing $$x$$ by $$x \cos θ + y \sin θ$$ and $$y$$ by $$−x \sin θ + y \cos θ$$. In any case, if the ellipse is translated or rotated or both, $$x$$ and $$y$$ will each be replaced by linear expressions in $$x$$ and $$y$$, and the resulting equation will have at most terms in $$x^2 , \ y^2 , \ xy, \ x, \ y$$ and a constant. The same is true of a parabola or a hyperbola. Thus, any of these three curves will be represented by an equation of the form
$ax^2 + 2hxy + by^2 + 2gx + 2fy + c = 0. \label{2.7.2} \tag{2.7.2}$
(The coefficients $$a$$ and $$b$$ are not the semi major and semi minor axes.) The apparently random notation for the coefficients arises because these figures are plane sections of three-dimensional surfaces (the ellipsoid, paraboloid and hyperboloid) which are described by terms involving the coordinate $$z$$ as well as $$x$$ and $$y$$. The customary notation for these three-dimensional surfaces is very systematic, but when the terms in $$z$$ are removed for the two- dimensional case, the apparently random notation $$a, \ b, \ c, \ f, \ g, \ h$$ remains. In any case, the above equation can be divided through by the constant term without loss of generality, so that the equation to an ellipse, parabola or hyperbola can be written, if preferred, as
$ax^2 + 2hxy + by^2 + 2gx + 2fy + 1 = 0. \label{2.7.3} \tag{2.7.3}$
Is the converse true? That is, does an equation of this form always necessarily represent an ellipse, parabola or hyperbola?
Not quite. For example,
$6x^2 + xy - y^2 - 17x - y + 12 = 0 \label{2.7.4} \tag{2.7.4}$
represents two straight lines (it can be factored into two linear terms - try it), while
$2x^2 -4xy + 4y^2 -4x + 4 = 0 \label{2.7.5} \tag{2.7.5}$
is satisfied only by a single point. (Find it.)
However, a plane section of a cone can be two lines or a single point, so perhaps we can now ask whether the general second degree equation must always represent a conic section. The answer is: close, but not quite.
For example,
$4x^2 + 12xy + 9y^2 + 14x + 21y + 6 = 0 \label{2.7.6} \tag{2.7.6}$
represents two parallel straight lines, while
$x^2 + y^2 + 3x + 4y + 15 = 0 \label{2.7.7} \tag{2.7.7}$
cannot be satisfied by any real $$(x,y)$$.
However, a plane can intersect a cylinder in two parallel straight lines, or a single straight line, or not at all. Therefore, if we stretch the definition of a cone somewhat to include a cylinder as a special limiting case, then we can say that the general second degree equation in $$x$$ and $$y$$ does indeed always represent a conic section.
Is there any means by which one can tell by a glance at a particular second degree equation, for example
$8x^2 + 10xy -3y^2 -2x - 4y - 2 = 0, \label{2.7.8} \tag{2.7.8}$
what type of conic section is represented? The answer is yes, and this one happens to be a hyperbola. The discrimination is made by examining the properties of the determinant
$\Delta = \begin{vmatrix} a & h & g \\ h & b & f \\ g & f & c \end{vmatrix} \label{2.7.9} \tag{2.7.9}$
I have devised a table after the design of the dichotomous tables commonly used by taxonomists in biology, in which the user is confronted by a couplet (or sometimes triplet) of alternatives, and is then directed to the next part of the table. I shall spare the reader the derivation of the table; instead, I shall describe its use.
In the table, I have used the symbol $$\bar{a}$$ to mean the cofactor of $$a$$ in the determinant, $$\bar{h}$$ the cofactor of $$h$$, $$\bar{g}$$ the cofactor of $$g$$, etc. Explicitly, these are
$\bar{a} = bc - f^2 , \label{2.7.10} \tag{2.7.10}$
$\bar{b} = ca - g^2 , \label{2.7.11} \tag{2.7.11}$
$\bar{c} = ab - h^2 , \label{2.7.12} \tag{2.7.12}$
$\bar{f} = gh - af , \label{2.7.13} \tag{2.7.13}$
$\bar{g} = hf - bg \label{2.7.14} \tag{2.7.14}$
and $\bar{h} = fg - ch . \label{2.7.15} \tag{2.7.15}$
The first column labels the choices that the user is asked to make. At the beginning, there are two choices to make, $$1$$ and $$1^\prime$$ The second column says what these choices are, and the fourth column says where to go next. Thus, if the determinant is zero, go to $$2$$; otherwise, go to $$5$$. If there is an asterisk in column $$4$$, you are finished. Column $$3$$ says what sort of a conic section you have arrived at, and column $$5$$ gives an example.
No matter what type the conic section is, the coordinates of its centre are $$(\bar{g}/\bar{c}, \ \bar{f}/\bar{c} )$$ and the angle $$θ$$ that its major or transverse axis makes with the x axis is given by
$\tan 2θ = \frac{2h}{a-b}. \label{2.7.16} \tag{2.7.16}$
Thus if $$x$$ is first replaced with $$x + \bar{g} / \bar{c}$$ and $$y$$ with $$y + \bar{f}/\bar{c}$$, and then the new $$x$$ is replaced with $$x \cos θ − y \sin θ$$ and the new $$y$$ with $$x \sin θ + y \cos θ$$, the equation will take the familiar form of a conic section with its major or transverse axis coincident with the $$x$$ axis and its centre at the origin. Any of its properties, such as the eccentricity, can then be deduced from the familiar equations. You should try this with equation $$\ref{2.7.8}$$.
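Here is a numerical sketch of that exercise (my own code, with invented helper names), carrying equation $$\ref{2.7.8}$$ through the translation and rotation just described; it reproduces the coefficients 9.933, −4.933 and −1.8163 quoted later in the text:

```python
# Sketch of the reduction described above, applied to equation 2.7.8.
# Translation uses the centre (g-bar/c-bar, f-bar/c-bar); rotation uses
# tan 2*theta = 2h/(a - b), equation 2.7.16.
import math

a, h, b, g, f, c = 8, 5, -3, -1, -2, -2
cb = a * b - h * h                     # c-bar = ab - h^2
gb = h * f - b * g                     # g-bar
fb = g * h - a * f                     # f-bar
x0, y0 = gb / cb, fb / cb              # centre
c1 = a*x0*x0 + 2*h*x0*y0 + b*y0*y0 + 2*g*x0 + 2*f*y0 + c   # new constant term

theta = 0.5 * math.atan2(2 * h, a - b)
co, si = math.cos(theta), math.sin(theta)
a2 = a*co*co + 2*h*si*co + b*si*si     # coefficient of x''^2
b2 = a*si*si - 2*h*si*co + b*co*co     # coefficient of y''^2

print(round(x0, 5), round(y0, 5))                # 0.26531 -0.22449
print(round(a2, 3), round(b2, 3), round(c1, 4))  # 9.933 -4.933 -1.8163
assert abs((a2 + b2) - (a + b)) < 1e-12          # trace invariant under rotation
```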
Key to the Conic Sections
When faced with a general second degree equation in $$x$$ and $$y$$, I often find it convenient right at the start to calculate the values of the cofactors from equations 2.7.10 − 2.7.15.
Here is an exercise that you might like to try. Show that the ellipse $$ax^2 + 2hxy + by^2 + 2gx + 2fy + 1 = 0$$ is contained within the rectangle whose sides are
$x = \frac{\bar{g} \pm \sqrt{\bar{g}^2 - \bar{a} \bar{c}}}{\bar{c}} \label{2.7.18} \tag{2.7.18}$
$y = \frac{\bar{f} \pm \sqrt{\bar{f}^2 - \bar{b} \bar{c}}}{\bar{c}} \label{2.7.19} \tag{2.7.19}$
In other words, these four lines are the vertical and horizontal tangents to the ellipse.
This is probably not of much use in celestial mechanics, but it will probably be useful in studying Lissajous ellipses, or the Stokes parameters of polarized light. It is also useful in programming a computer to draw, for example, the ellipse $$14x^2 - 4xy + 11y^2 - 44x -58y + 71 = 0$$. To do this, you will probably want to start at some value of $$x$$ and calculate the two corresponding values of $$y$$, and then move to another value of $$x$$. But at which value of $$x$$ should you start? Equation $$\ref{2.7.18}$$ will tell you.
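As a sketch (my own, with invented names), equations $$\ref{2.7.18}$$ and $$\ref{2.7.19}$$ give the bounding box of that ellipse directly. Note that they were stated for a conic with constant term 1, so the coefficients are first divided by 71:

```python
# Sketch: the horizontal and vertical extent of the ellipse
# 14x^2 - 4xy + 11y^2 - 44x - 58y + 71 = 0, from equations 2.7.18 and 2.7.19.
# Those equations assume a constant term of 1, so divide every coefficient by 71.
import math

a, h, b, g, f = (t / 71 for t in (14.0, -2.0, 11.0, -22.0, -29.0))  # now c = 1
ab = b - f * f          # a-bar = bc - f^2 with c = 1
bb = a - g * g          # b-bar = ca - g^2 with c = 1
cb = a * b - h * h      # c-bar
fb = g * h - a * f      # f-bar
gb = h * f - b * g      # g-bar

x_lo = (gb - math.sqrt(gb * gb - ab * cb)) / cb   # eq. 2.7.18
x_hi = (gb + math.sqrt(gb * gb - ab * cb)) / cb
y_lo = (fb - math.sqrt(fb * fb - bb * cb)) / cb   # eq. 2.7.19
y_hi = (fb + math.sqrt(fb * fb - bb * cb)) / cb
print(round(x_lo, 4), round(x_hi, 4))   # -0.0976 4.0976
print(round(y_lo, 4), round(y_hi, 4))   # 0.6336 5.3664
```

So a plotting loop can safely start at $$x \approx -0.0976$$ and stop at $$x \approx 4.0976$$.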
But what do equations $$\ref{2.7.18}$$ and $$\ref{2.7.19}$$ mean if the conic section equation $$ax^2 + 2hxy + by^2 + 2gx + 2fy + 1 = 0$$ is not an ellipse? They are still useful if the conic section is a hyperbola. Equations $$\ref{2.7.18}$$ and $$\ref{2.7.19}$$ are still vertical and horizontal tangents - but in this case the hyperbola is entirely outside the limits imposed by these tangents. If the axes of the hyperbola are horizontal and vertical, one or other of equations $$\ref{2.7.18}$$ and $$\ref{2.7.19}$$ will fail.
If the conic section is a parabola, equations $$\ref{2.7.18}$$ and $$\ref{2.7.19}$$ are not useful, because $$\bar{c} = 0$$. There is only one horizontal tangent and only one vertical tangent. They are given by
$x = \frac{\bar{a}}{2\bar{g}} \label{2.7.20} \tag{2.7.20}$
and $y = \frac{\bar{b}}{2\bar{f}} \tag{2.7.21} \label{2.7.21}$
If the axis of the parabola is horizontal or vertical, one or other of equations $$\ref{2.7.20}$$ and $$\ref{2.7.21}$$ will fail.
If the second degree equation represents one or two straight lines, or a point, or nothing, I imagine that all of equations $$\ref{2.7.18}$$ − $$\ref{2.7.21}$$ will fail - unless perhaps the equation represents horizontal or vertical lines. I haven’t looked into this; perhaps the reader would like to do so.
Here is a problem that you might like to try. The equation $$8x^2 + 10xy - 3y^2 - 2x - 4y - 2 = 0$$ represents a hyperbola. What are the equations to its axes, to its asymptotes, and to its conjugate hyperbola? Or, more generally, if $$ax^2 + 2hxy + by^2 + 2gx + 2fy + c = 0$$ represents a hyperbola, what are the equations to its axes, to its asymptotes, and to its conjugate hyperbola?
Before starting, one point worth noting is that the original hyperbola, its asymptotes, and the conjugate hyperbola have the same centre, which means that $$g$$ and $$f$$ are the same for each, and they have the same axes, which means that $$a$$, $$h$$, and $$b$$ are the same for each. They differ only in the constant term.
If you do the first problem, $$8x^2 + 10xy - 3y^2 - 2x - 4y -2 = 0$$ , there will be a fair amount of numerical work to do. When I did it I didn’t use either pencil and paper or a hand calculator. Rather I sat in front of a computer doing the numerical calculations with a Fortran statement for every stage of the calculation. I don’t think I could have done it otherwise without making lots of mistakes. The very first thing I did was to work out the cofactors $$\bar{a}, \ \bar{h}, \ \bar{b}, \ \bar{g}, \ \bar{f}, \ \bar{c}$$ and store them in the computer, and also the coordinates of the centre $$(x_0, y_0)$$ of the hyperbola, which are given by $$x_0 = \bar{g}/\bar{c}, \ y_0 = \bar{f} / \bar{c}$$.
Whether you do the particular numerical problem, or the more general algebraic one, I suggest that you proceed as follows. First, refer the hyperbola to a set of coordinates $$x^\prime , y^\prime$$ whose origin coincides with the axes of the hyperbola. This is done by replacing $$x$$ with $$x^\prime + x_0$$ and $$y$$ with $$y' + y_0$$. This will result in an equation of the form $$ax^{\prime 2} + 2hx^\prime y^\prime + by^{\prime 2} + c^\prime = 0$$. The coefficients of the quadratic terms will be unchanged, the linear terms will have vanished, and the constant term will have changed. At this stage I got, for the numerical example, $$8x^{\prime 2} + 10x^\prime y^\prime - 3y^{\prime 2} - 1.8163 = 0$$.
Now refer the hyperbola to a set of coordinates $$x^{\prime \prime}, y^{\prime \prime}$$ whose axes are parallel to the axes of the hyperbola. This is achieved by replacing $$x^\prime$$ with $$x^{\prime \prime} \cos θ − y^{\prime \prime} \sin θ$$ and $$y^\prime$$ with $$x^{\prime \prime} \sin θ + y^{\prime \prime} \cos θ$$, where $$\tan 2θ = 2h /(a − b)$$. There will be a small problem here, because this gives two values of $$θ$$ differing by $$90^\circ$$, and you’ll want to decide which one you want. In any case, the result will be an equation of the form $$a^{\prime \prime} x^{\prime \prime 2} + b^{\prime \prime} y^{\prime \prime 2} + c^\prime = 0$$, in which $$a^{\prime \prime}$$ and $$b^{\prime \prime}$$ are of opposite sign. Furthermore, if you happen to understand the meaning of the noise “The trace of a matrix is invariant under an orthogonal transformation”, you’ll be able to check for arithmetic mistakes by noting that $$a^{\prime \prime} + b^{\prime \prime} = a + b$$. If this is not so, you have made a mistake. Also, the constant term should be unaltered by the rotation (note the single prime on the $$c$$). At this stage, I got $$9.933 x^{\prime \prime 2} - 4.933y^{\prime \prime 2} - 1.8163 = 0$$. (All of this was done with Fortran statements on the computer - no actual calculation or writing done by me - and the numbers were stored in the computer to many significant figures).
In any case this equation can be written in the familiar form $$\dfrac{x^{\prime \prime 2}}{A^2}-\dfrac{y^{\prime \prime 2}}{B^2}=1$$, which in this case I made to be $$\dfrac{x^{\prime \prime 2}}{0.4276^2}-\dfrac{y^{\prime \prime 2}}{0.6068^2}=1$$. We are now on familiar ground. The axes of the hyperbola are $$x^{\prime \prime} = 0$$ and $$y^{\prime \prime} = 0$$, the asymptotes are $$\dfrac{x^{\prime \prime 2}}{A^2} - \dfrac{y^{\prime \prime 2}}{B^2} = 0$$ and the conjugate hyperbola is $$\dfrac{x^{\prime \prime 2}}{A^2} - \dfrac{y^{\prime \prime 2}}{B^2} = -1$$.
Now, starting from $$\dfrac{x^{\prime \prime 2}}{A^2} - \dfrac{y^{\prime \prime 2}}{B^2} = 0$$ for the asymptotes, or from $$\dfrac{x^{\prime \prime 2}}{A^2} - \dfrac{y^{\prime \prime 2}}{B^2} =-1$$ for the conjugate hyperbola, we reverse the process. We go to the single-primed coordinates by replacing $$x^{\prime \prime}$$ with $$x^\prime \cos θ + y^\prime \sin θ$$ and $$y^{\prime \prime}$$ with − $$x^\prime \sin θ + y' \cos θ$$, and then to the original coordinates by replacing $$x^\prime$$ with $$x − x_0$$ and $$y^\prime$$ with $$y − y_0$$.
This is what I find:
Original hyperbola: $$ax^2 + 2hxy + by^2 + 2gx + 2fy + c = 0$$
Conjugate hyperbola: $$ax^2 + 2hxy + by^2 + 2gx + 2fy + c_{\text{conj}} = 0,$$
where $$c_{\text{conj}} = -(2g \bar{g} + 2f \bar{f} + c \bar{c})/ \bar{c} = -(2gx_0 + 2fy_0 + c).$$
Asymptotes: $$ax^2 + 2hxy + by^2 + 2gx + 2fy + c_{\text{asymp}} = 0,$$
where $$c_{\text{asymp}}$$ can be written in any of the following equivalent forms:
$c_{\text{asymp}} = + (a \bar{g}^2 + 2h \bar{g} \bar{f} + b \bar{f}^2 ) / \bar{c}^2 = ax_0^2 + 2hx_0 y_0 + by_0^2 = -(g \bar{g} + f \bar{f}) / \bar{c} .$
[The last of these three forms can be derived very quickly by recalling that a condition for a general second degree equation in $$x$$ and $$y$$ to represent two straight lines is that the determinant $$∆$$ should be zero. A glance at this determinant will show that this implies that $$g\bar{g} + f\bar{f} + c\bar{c} = 0.$$ ]
Axes of hyperbolas: $$(y - x \tan θ - y_0 + x_0 \tan θ) ( y + x \cot θ - y_0 - x_0 \cot θ) = 0,$$
where $$\tan 2 θ = 2h / (a-b) .$$
Example:
Original hyperbola: $$8x^2 + 10xy - 3y^2 - 2x - 4y - 2 = 0$$
Conjugate hyperbola: $$8x^2 + 10xy - 3y^2 - 2x - 4y + \frac{80}{49} = 0$$
Asymptotes: $$8x^2 + 10xy - 3y^2 - 2x -4y - \frac{9}{49} = 0 ,$$
which can also be written $$(4x - y - \frac{9}{7})(2x + 3y + \frac{1}{7}) = 0$$
Axes of hyperbolas: $$( y − 0.3866x + 0.3275)( y + 2.5866 x − 0.4613) = 0$$.
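These constants can be checked with exact rational arithmetic; the following is a sketch of mine (not from the original text) using the formulas for $$c_{\text{conj}}$$ and $$c_{\text{asymp}}$$ above:

```python
# Check of the worked example: the constant terms of the conjugate hyperbola
# and of the asymptotes of 8x^2 + 10xy - 3y^2 - 2x - 4y - 2 = 0.
from fractions import Fraction as Fr

a, h, b, g, f, c = map(Fr, (8, 5, -3, -1, -2, -2))
cb = a * b - h * h            # c-bar
gb = h * f - b * g            # g-bar
fb = g * h - a * f            # f-bar
x0, y0 = gb / cb, fb / cb     # centre

c_conj = -(2 * g * x0 + 2 * f * y0 + c)          # conjugate hyperbola
c_asymp = a * x0**2 + 2 * h * x0 * y0 + b * y0**2  # asymptote pair

print(x0, y0)           # 13/49 -11/49
print(c_conj, c_asymp)  # 80/49 -9/49
```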
These are shown in the figure below - the original hyperbola in black, the conjugate in blue.
The centre is at (0.26531, −0.22449).
The slopes of the two asymptotes are 4 and $$−\frac{2}{3}$$. From equation 2.2.16 we find that the tangent of the angle between the asymptotes is $$\tan 2ψ = \frac{14}{5}$$, so that $$2ψ = 70^\circ .3$$, and the angle between the asymptote and the major axis of the original hyperbola is $$54^\circ .8$$, or $$\tan ψ = 1.419$$. This is equal (see equations 2.5.3 and 2.5.10) to $$\sqrt{e^2 - 1}$$, so the eccentricity of the original hyperbola is $$1.735$$. From Section 2.5, near equation 2.5.6, we soon find that the eccentricity of the conjugate hyperbola is $$\csc ψ = 1.223$$.
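These eccentricities can be recovered numerically from the semi-axes of the reduced equation. The sketch below is my own; it uses the five-figure values $$A \approx 0.42762$$, $$B \approx 0.60679$$ of the semi-axes:

```python
# Sketch: eccentricities from the semi-axes A, B of x''^2/A^2 - y''^2/B^2 = 1.
# For such a hyperbola tan(psi) = B/A, e = sec(psi), and the conjugate
# hyperbola has eccentricity csc(psi).
import math

A, B = 0.42762, 0.60679
psi = math.atan2(B, A)
print(round(math.tan(psi), 3))       # 1.419
print(round(1 / math.cos(psi), 3))   # 1.736  (the text rounds this to 1.735)
print(round(1 / math.sin(psi), 3))   # 1.223
```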
An interesting question occurs to me. We have found that, if $$ax^2 + 2hxy + by^2 + 2gx + 2fy + c = 0$$ is a hyperbola, then the equations to the conjugate hyperbola and the asymptotes are of a similar form, namely $$ax^2 + 2hxy + by^2 + 2gx + 2fy + c_{\text{conj}} = 0$$ and $$ax^2 + 2hxy + by^2 + 2gx + 2fy + c_{\text{asymp}} = 0$$ , and we found expressions for $$c_{\text{conj}}$$ and $$c_\text{asymp}$$. But what if $$ax^2 + 2hxy + by^2 + 2gx + 2fy + c = 0$$ is not a hyperbola? What if it is an ellipse? What do the other equations represent, given that an ellipse has neither a conjugate nor asymptotes?
For example, $$14x^2 - 4xy + 11y^2 - 44x -58y + 71 = 0$$ is an ellipse. What are $$14x^2 - 4xy + 11y^2 - 44x - 58y + 191 = 0$$ and $$14x^2 -4xy + 11y^2 - 44x - 58y + 131 = 0$$? I used the key on page 47, and it told me that the first of these equations is satisfied by no real points, which I suppose is the equation’s way of telling me that there is no such thing as the conjugate to an ellipse. The second equation was supposed to be the “asymptotes”, but the key shows me that the equation is satisfied by just one real point, namely (2 , 3), which coincides with the centre of the original ellipse. I didn’t expect that. Should I have done so?
"domain": "libretexts.org",
"url": "https://phys.libretexts.org/TextBooks_and_TextMaps/Astronomy_and_Cosmology_TextMaps/Map%3A_Celestial_Mechanics_(Tatum)/2%3A_Conic_Sections/2.7%3A_The_General_Conic_Section",
"openwebmath_score": 0.8829994201660156,
"openwebmath_perplexity": 209.50392004604868,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9920620054292072,
"lm_q2_score": 0.6584174938590246,
"lm_q1q2_score": 0.6531909793674566
} |
Journal ArticleDOI
# A note on independent sets in trees
01 Jan 1988-SIAM Journal on Discrete Mathematics (Society for Industrial and Applied Mathematics)-Vol. 1, Iss: 1, pp 105-108
TL;DR: A simple graph-theoretical proof that the largest number of maximal independent vertex sets in a tree with n vertices is given by m( T), a result first proved by Wilf.
Abstract: We give a simple graph-theoretical proof that the largest number of maximal independent vertex sets in a tree with n vertices is given by $m( T ) = \begin{cases} 2^{k - 1} + 1& {\text{if }} n = 2k, \\ 2^k & {\text{if }} n = 2k + 1, \end{cases}$ a result first proved by Wilf [SIAM J. Algebraic Discrete Methods, 7 (1986), pp. 125–130]. We also characterize those trees achieving this maximum value. Finally we investigate some related problems.
SIAM J. Disc. Math., Vol. 1, No. 1, February 1988. © 1988 Society for Industrial and Applied Mathematics.

A NOTE ON INDEPENDENT SETS IN TREES*

BRUCE E. SAGAN†

Abstract. We give a simple graph-theoretical proof that the largest number of maximal independent vertex sets in a tree with $n$ vertices is given by

$m(T) = \begin{cases} 2^{k-1} + 1 & \text{if } n = 2k, \\ 2^k & \text{if } n = 2k + 1, \end{cases}$

a result first proved by Wilf [SIAM J. Algebraic Discrete Methods, 7 (1986), pp. 125–130]. We also characterize those trees achieving this maximum value. Finally we investigate some related problems.

Key words. independent vertices, trees, extremal graphs

AMS(MOS) subject classifications. 05C35, 05C30, 05C70

1. Introduction. Herbert Wilf [5] was the first to pose the following question: What is the largest number of maximal independent vertex sets in a tree with $n$ vertices? His proof had an algebraic flavor and was somewhat complicated. Subsequently Daniel Cohen [1] was able to provide a graph-theoretical proof, but one which was still fairly complex in view of the simplicity of the bound (see Theorem 3 below). The purpose of this note is to give a simple graph-theoretical demonstration of this result which, in addition, completely characterizes all trees achieving the maximum value. J. Griggs and C. Grinstead [2] independently found a straightforward proof which is similar to ours in some respects but differs in others.

2. Maximizing independent sets. We begin with some preliminary definitions and lemmas. For any concepts that are not defined, the reader can consult Harary’s book [4]. Given a graph $G$, let $V(G)$ be the vertex set of $G$ and let $v(G) = |V(G)|$, where $|\cdot|$ denotes cardinality. Recall that a vertex $u \in V(G)$ is called an endpoint if $\deg u = 1$. We will say that a vertex $v \in V(G)$ is penultimate if $v$ is not an endpoint and $v$ is adjacent to (at least) $\deg v - 1$ endpoints. Note that $v$ is adjacent to $\deg v$ endpoints if and only if $v$ is the center of the star $K_{1,t}$.

LEMMA 1. Every finite tree $T$ with $v(T) \ge 3$ has a penultimate vertex.

Proof. The next-to-last vertex on any diameter must be penultimate. □

If $v$ is penultimate in $T$, then $T - v$ consists of $\deg v - 1$ isolated vertices and one other component called the penultimate component $P$ (if $v$ is the center of a star, choose any fixed component as the penultimate one). Now let $\text{End}\, v = \{w \notin P \mid w \text{ is adjacent to } v\}$, so that $V(T) = V(P) \sqcup \{v\} \sqcup \text{End}\, v$, where $\sqcup$ denotes disjoint union.

Call a set $I \subseteq V(G)$ independent if no two vertices of $I$ are adjacent in $G$. Now let $M(G) = \{I \subseteq V(G) \mid I \text{ is independent and maximal}\}$; i.e., if $I \in M(G)$ then there is no independent set $J$ with $I \subsetneq J$. Also set $m(G) = |M(G)|$. We wish to find the maximum value of $m(T)$ over all trees $T$ with $v(T) = n$. First, however, we need an upper bound.

* Received by the editors April 28, 1986; accepted for publication (in revised form) March 24, 1987. This research was supported in part by a NATO post-doctoral grant administered by the National Science Foundation.

† Department of Mathematics, University of Pennsylvania, Philadelphia, Pennsylvania 19104-6395. Present address: Department of Mathematics, Michigan State University, East Lansing, Michigan 48824.

FIG. 1. Batons of length 0.

LEMMA 2. Let $T$ be a tree and let $v \in V(T)$ be penultimate with corresponding component $P$. Then $m(T) \le 2m(P)$.

Proof. Let $I$ be a maximal independent set in $T$. Then either $\text{End}\, v \subseteq I$ or $v \in I$ (exclusive or). In the first case, $I = \text{End}\, v \cup I_P$ where $I_P$ is a maximal independent set of $P$. In the second, $I = \{v\} \cup (I_P - \{w\})$ where $I_P$ is a maximal independent set of $P$ and $w$ is the unique vertex of $P$ adjacent to $v$ ($w$ need not be in $I_P$). □

Define a baton of length $l$ as follows. Start with a path $L$ of length $l$ and attach any number of paths of length two to $L$’s endpoints. Hence the batons of length 0 are just “extended” stars and the first few are displayed in Fig. 1. Similarly, the batons of length 1 form a family some of whose members are shown in Fig. 2.

THEOREM 3. Among all labeled trees $T$ with $n$ vertices, the maximum value of $m(T)$ is

$m(T) = \begin{cases} 2^{k-1} + 1 & \text{if } n = 2k, \\ 2^k & \text{if } n = 2k + 1. \end{cases}$

Furthermore this maximum is attained only by the batons of length 0 (when $n$ is odd) or by the batons of lengths 1 and 3 (when $n$ is even).

Proof. Induct on $n$. The theorem can be checked by hand for $v(T) \le 10$ using Lemma 2 and Harary’s tables [4, pp. 233–234]. (The author has done this calculation and does not recommend it to the reader.) Now let $T$ be a tree with $m(T)$ maximum among all trees with $v(T) = n > 10$. By Lemma 1 there exists a penultimate vertex $v \in V(T)$ with corresponding component $P$. There are two cases depending upon the parity of $n$.

If $n$ is odd, $n = 2k + 1$, then consider the unique baton of length 0 with $n$ vertices, denoted $T_n$. Since $T_{2k+1}$ contains $k$ paths of length 2, a simple calculation shows that $m(T_{2k+1}) = 2^k$. Hence

(1) $m(T) \ge 2^k$.

FIG. 2. Batons of length 1.

FIG. 3. $G_1$ and $G_2$.

Now $v(P) \le n - 2 = 2k - 1$. However, if $v(P) < 2k - 1$, then by Lemma 2 and induction (for $n$ at least 7) we have

(2) $m(T) \le 2m(P) < 2 \cdot 2^{k-1} = 2^k$,

which contradicts (1). Thus $v(P) = 2k - 1$, which implies that $\deg v = 2$, and $\text{End}\, v = \{u\}$ for a single vertex $u$. Furthermore $P = T_{2k-1}$, since if it is not, induction applies, which yields $m(P) < 2^{k-1}$ (this baton is the unique tree attaining the maximum value); but then (2) holds as before, a contradiction, unless $P = T_{2k-1}$.

Putting all these facts together, we see that $T$ consists of a tree $T_{2k-1}$ with a path of length two $w$–$v$–$u$ attached to some $w \in V(T_{2k-1})$. This leaves only three possibilities for $T$: $T_{2k+1}$, $G_1$ or $G_2$, where $G_1$ and $G_2$ are given in Fig. 3. To eliminate $G_1$ and $G_2$ as possibilities, consider a second penultimate vertex $v'$ as shown. If $P'$ is the corresponding component, then for $n \ge 9$ we have $P' \ne T_{2k-1}$. Invoking Lemma 2 again we see that $m(G_i) \le 2m(P') < 2 \cdot 2^{k-1} = 2^k$ for $i = 1, 2$, and so neither $G_i$ maximizes $m(T)$.

For $n$ even, $n = 2k$, exactly the same line of reasoning as in the odd case can be used. It follows that the only possibilities for $T$ are those obtained by attaching a path of length 2 to a baton of length 1 or 3 on $2k - 2$ vertices. Hence $T$ is either a baton of length 1 or 3 and $m(T) = 2^{k-1} + 1$, or $T$ is one of the five graphs $H_1, \ldots, H_5$ shown in Fig. 4. Note that in $H_2$ (respectively $H_3$) we require that $\deg c \ge 3$ (respectively $\deg d \ge 2$) so that the graph does not degenerate into a baton of length 3 (respectively 1). It is easy to verify that, because $n > 10$, we can find in each of these five graphs a second penultimate vertex $v'$ such that $P'$ is not a baton of length 1 or 3. It follows from induction and from $v(P') = n - 2 = 2k - 2$ that we have $m(P') \le (2^{k-2} + 1) - 1 = 2^{k-2}$. Hence by Lemma 2, $m(H_i) \le 2m(P') \le 2 \cdot 2^{k-2} = 2^{k-1}$ for $i = 1, \ldots, 5$, which is less than the value obtained for the batons. This finishes the proof of the theorem. □

FIG. 4. $H_1$ through $H_5$.

Call a tree $T$ extremal if $m(T)$ is a maximum as compared to all other trees with the same number of vertices. Let $e(n)$ be the number of extremal trees, up to labeling, on $n$ vertices.

COROLLARY 4. The number of extremal trees on $n$ vertices is given by

$e(n) = \begin{cases} 1 & \text{if } n = 2k + 1, \\ k & \text{if } n = 2k. \end{cases}$

Proof. It is a simple matter to count the number of batons of the appropriate sizes. □

3. Remarks. Finding the minimum value of $m(T)$ is quite easy.

THEOREM 5. The minimum value of $m(T)$ over all trees with $n$ vertices, $n \ge 2$, is $m(T) = 2$. Furthermore the unique tree (up to relabeling) achieving this minimum is the star $K_{1,n-1}$.

Proof. If $v(T) \ge 2$, then for any edge $vw$ there is a maximal independent set containing $v$ and a different one containing $w$. Hence $m(T) \ge 2$, and clearly $m(K_{1,n-1}) = 2$. If $T \ne K_{1,n-1}$, then $T$ contains a path $u$–$v$–$w$–$x$. This forces $m(T) \ge 3$, since a third maximal independent set containing $u$ and $x$ also exists. □

Once one has determined the lower and upper bounds, $b$ and $B$ respectively, for a graphical invariant $\beta(G)$, one looks for an interpolation theorem. Such a result has the following form: For all integers $z$ satisfying $b \le z \le B$ there is a graph $G$ with $\beta(G) = z$. Unfortunately there is no interpolation theorem for $m(T)$, since when $v(T) = 9$ we have $2 \le m(T) \le 16$ but there is no tree with $m(T) = 15$.

We should compare our proof of Theorem 3 with that of Griggs and Grinstead mentioned in the Introduction. They also begin by proving Lemmas 1 and 2. Then, however, they use the lemmas to show that the maximum value of $m(F)$ over all forests $F$ with $v(F) = n$ is achieved precisely when $F$ is a one-factor (possibly with an isolated vertex). By carefully amalgamating the components of the one-factor, they finally obtain the bound and extremal graphs for trees.

Following the dictum that once something is proved for trees it should be extended to all connected graphs, one is led to pose the following question: What is the maximum value of $m(G)$ over all connected graphs $G$ with $v(G) = n$? Griggs, Grinstead and Guichard [3] have answered this query. Another proof has been found independently by Füredi.

Acknowledgments. I would like to thank Mihály Hujter for pointing out the proof of Lemma 1. I also thank the referee for suggestions that considerably improved the exposition.

REFERENCES

[1] D. COHEN, Counting stable sets in trees, in Séminaire Lotharingien de Combinatoire, 10th session, R. König, ed., Institut de Recherche Mathématique Avancée pub., Strasbourg, France, 1984, pp. 48–52.

[2] J. GRIGGS AND C. GRINSTEAD, private communication.

[3] J. GRIGGS, C. GRINSTEAD AND D. GUICHARD, The number of maximal independent sets in a connected graph, preprint.

[4] F. HARARY, Graph Theory, Addison-Wesley, Reading, MA, 1969.

[5] H. WILF, The number of maximal independent sets in a tree, SIAM J. Algebraic Discrete Methods, 7 (1986), pp. 125–130.
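The bound of Theorem 3 (and the minimum of Theorem 5) can be spot-checked by brute force on small trees. The following sketch is not part of the paper; the function and variable names are my own:

```python
# Sketch (not from the paper): brute-force count of maximal independent
# vertex sets, used to spot-check Theorems 3 and 5 on small trees.
from itertools import combinations

def m(n_vertices, edges):
    """Number of maximal independent vertex sets of the graph."""
    adj = {v: set() for v in range(n_vertices)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    count = 0
    for r in range(n_vertices + 1):
        for s in combinations(range(n_vertices), r):
            s = set(s)
            if any(adj[u] & s for u in s):
                continue   # not independent
            if any(not (adj[u] & s) for u in set(range(n_vertices)) - s):
                continue   # not maximal: some outside vertex could be added
            count += 1
    return count

# Baton of length 0 on n = 2k+1 = 7 vertices: centre 0 with legs 0-1-2, 0-3-4, 0-5-6.
baton = [(0, 1), (1, 2), (0, 3), (3, 4), (0, 5), (5, 6)]
print(m(7, baton))                      # 8 = 2^3, matching Theorem 3 for n = 7
star = [(0, i) for i in range(1, 7)]    # star K_{1,6} attains the minimum of Theorem 5
print(m(7, star))                       # 2
```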
##### Citations
Book ChapterDOI
TL;DR: A survey of results concerning algorithms, complexity, and applications of the maximum clique problem is presented and enumerative and exact algorithms, heuristics, and a variety of other proposed methods are discussed.
Abstract: The maximum clique problem is a classical problem in combinatorial optimization which finds important applications in different domains. In this paper we try to give a survey of results concerning algorithms, complexity, and applications of this problem, and also provide an updated bibliography. Of course, we build upon precursory works with similar goals [39, 232, 266].
1,041 citations
Journal ArticleDOI
TL;DR: The number of maximum independent sets is shown to depend on the structure within the tree of the α-critical edges and the families of trees on which these maxima are achieved are given.
Abstract: A subset of vertices is a maximum independent set if no two of the vertices are joined by an edge and the subset has maximum cardinality. In this paper we answer a question posed by Herb Wilf. We show that the greatest number of maximum independent sets for a tree of n vertices is We give the families of trees on which these maxima are achieved. Proving which trees are extremal depends upon the structure of maximum independent sets in trees. This structure is described in terms of adjacency rules between three types of vertices, those which are in all, no, or some maximum independent sets. We show that vertices that are in some but not all maximum independent sets of the tree are joined in pairs by the α-critical edges (edges whose removal increases the size of a maximum independent set). The number of maximum independent sets is shown to depend on the structure within the tree of the α-critical edges.
62 citations
Journal ArticleDOI
Abstract: In this paper, we study the problem of determining the largest number of maximum independent sets of a graph of order n. Solutions to this problem are given for various classes of graphs, including general graphs, trees, forests, (connected) graphs with at most one cycle, connected graphs and triangle-free graphs. Extremal graphs achieving the maximum values are also given.
46 citations
• ...Sagan [26] finally presented an elegant proof, in which trees attaining the upper bound were also found (as did Griggs and Grinstead [7] independently)....
[...]
Journal ArticleDOI
01 May 1993-Networks
TL;DR: It is proved that if the complement of a graph G on n vertices contains no set of t + 1 pairwise disjoint edges as an induced subgraph, then G has fewer than (n/2t)2t maximal complete subgraphs.
Abstract: Giving a partial solution to a conjecture of Balas and Yu [Networks 19 (1989) 247–253], we prove that if the complement of a graph G on n vertices contains no set of t + 1 pairwise disjoint edges as an induced subgraph, then G has fewer than (n/2t)2t maximal complete subgraphs. © 1993 by John Wiley & Sons, Inc.
46 citations
Journal IssueDOI
Abstract: The number of independent vertex subsets is a graph parameter that is, apart from its purely mathematical importance, of interest in mathematical chemistry. In particular, the problem of maximizing or minimizing the number of independent vertex subsets within a given class of graphs has already been investigated by many authors. In view of the applications of this graph parameter, trees of restricted degree are of particular interest. In the current article, we give a characterization of the trees with given maximum degree which maximize the number of independent subsets, and show that these trees also minimize the number of independent edge subsets. The structure of these trees is quite interesting and unexpected: it can be described by means of a novel digital system—in the case of maximum degree 3, we obtain a binary system using the digits 1 and 4. The proof mainly depends on an exchange lemma for branches of a tree. © 2008 Wiley Periodicals, Inc. J Graph Theory 58: 49–68, 2008 Dedicated to Prof. Robert Tichy on the occasion of his 50th birthday. This article was written while C. Heuberger was a visitor at the Center of Experimental Mathematics at the University of Stellenbosch. He thanks the center for its hospitality.
41 citations
### Cites result from "A note on independent sets in trees..."
• ...Quite a lot of similar results are given in the graph-theoretic literature as well: for instance, Hedman [4] studies the (essentially equivalent) problem of maximizing the number of cliques in graphs with a given maximal clique size, and Wilf [21] gives the largest number of maximal independent vertex sets of a tree on n vertices, see also [15]....
[...]
##### References
Book
01 Jan 1969
15,318 citations
Journal ArticleDOI
Abstract: We find the largest number of maximal independent sets of vertices that any tree of n vertices can have.
94 citations
Journal ArticleDOI
TL;DR: The maximum number of maximal independent sets which a connected graph on n vertices can have is determined, and the extremal graphs are completely characterize, thereby answering a question of Wilf.
Abstract: We determine the maximum number of maximal independent sets which a connected graph on n vertices can have, and we completely characterize the extremal graphs, thereby answering a question of Wilf.
80 citations
"domain": "typeset.io",
"url": "https://typeset.io/papers/a-note-on-independent-sets-in-trees-wtn5zzcnv7",
"openwebmath_score": 0.7027577757835388,
"openwebmath_perplexity": 342.8679602617552,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9920620054292072,
"lm_q2_score": 0.6584174938590245,
"lm_q1q2_score": 0.6531909793674565
} |
# zbMATH — the first resource for mathematics
Nonlocal boundary-value problems for abstract parabolic equations: well-posedness in Bochner spaces. (English) Zbl 1117.65077
The author considers an abstract parabolic equation $v'(t) + Av(t) = f(t)$ in which the initial condition is replaced by the nonlocal condition $v(0) = v(\lambda) + \mu$. All variables and constants take values in a Hilbert space $E$, and $A$ is a linear and possibly unbounded operator on this space. Under the assumption that the operator $-A$ generates an analytic semigroup $\{\exp(-At)\}_{t \ge 0}$ with exponential decay, it is shown that the solutions to the nonlocal parabolic equation satisfy a coercivity estimate in terms of $f$ and $\mu$, with the implication that the problem is well-posed. In addition, first and second order difference schemes are given and so-called almost coercive inequalities are established for these (the multiplier in the inequality contains the factor $\min\{1/\tau, |\ln \|A\|_{E \to E}|\}$, where $\tau$ is the time step).
##### MSC:

- 65J10 Equations with linear operators (numerical methods)
- 65M06 Finite difference methods (IVP of PDE)
- 65L05 Initial value problems for ODE (numerical methods)
- 47D06 One-parameter semigroups and linear evolution equations
- 34G10 Linear ODE in abstract spaces
- 35K90 Abstract parabolic equations
"domain": "zbmath.org",
"url": "http://zbmath.org/?q=an:1117.65077",
"openwebmath_score": 0.8438723087310791,
"openwebmath_perplexity": 4287.674141460762,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9920620051945144,
"lm_q2_score": 0.6584174938590246,
"lm_q1q2_score": 0.6531909792129308
} |
## Lesson 2
In this lesson, we will focus on a special type of game called Nim. Although it’s only one out of infinitely many possible games, understanding it in depth will be very beneficial in analysing a much larger class of games, known as impartial games. (The exact definition of this term is deferred until a later lesson.)
The rules of this game are as follows:
Start with r heaps of stones, of sizes $n_1, n_2, \dots, n_r > 0$. Play alternates between two players: at each player’s turn, he must pick a heap and remove as many stones as he wants from it, possibly even emptying the heap. However, he can remove stones from one and only one heap.
Here is an example of a game between two players:
$(3,5,\mathbf{8}) \to (3,\mathbf{5},1)\Rightarrow (\mathbf{3},2,1) \to (0,\mathbf{2},1) \Rightarrow (0,\mathbf{1},1) \to (0,0,\mathbf{1})\Rightarrow (0,0,0).$
Thus the second player wins in this match.
Of course, we could do our analysis as in the previous lesson: to check whether (3, 5, 8) is winning or losing, we could iteratively compute the status of $(a, b, c)$ for $0 \le a \le 3, 0 \le b \le 5, 0 \le c \le 8$. But this would require us to compute the status of 4 × 6 × 9 = 216 positions, even for a simple configuration like (3, 5, 8), which is way too much work. Thankfully, there is a beautiful solution to the problem.
### Binary Representation
If you understand what binary representation is, you may skip this section entirely. The crux of the matter is the following result:
Every positive integer is uniquely representable as a sum of distinct powers of two (i.e. 1, 2, 4, 8, 16 … ).
The keyword here is “distinct”. To write a number n as a sum of distinct powers of 2, let $m=2^r$ be the largest power of 2 which does not exceed n. Now consider the difference $d = n - 2^r$. If d = 0, then $n=2^r$ and there is nothing left to do. Otherwise, d > 0. We claim that $d < 2^r$; indeed, if $d \ge 2^r$ then $n = d+2^r$ would be at least $2^r + 2^r = 2^{r+1}$, which contradicts the maximality of r. Now repeat the process with d in place of n: since $d < 2^r$, every power of 2 chosen later is strictly smaller than $2^r$, so the powers are distinct and the process terminates. (q.e.d.)
Example : let’s take n=25. To take the largest power of two which is at most 25, we have to pick $2^4=16$. The difference is:
$25 - 2^4 = 9$.
Now do the same with 9: the largest power of two not exceeding it is $2^3 = 8$ and the difference is:
$9 - 2^3 = 1$.
Finally $1 = 2^0$ so we can write $25 = 2^4 + 2^3 + 2^0$. We will write this more compactly as follows:
$25 = (11001)_2$.
This is known as the binary representation of 25. We can add as many leading zeros as we want:
$25 = (11001)_2 = (011001)_2 = (0011001)_2 = \ldots$
which is akin to 37 = 037 = 0037 = … in decimal representation.
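For readers who prefer code, here is a short Python sketch (an aside, not part of the original lesson) of the greedy procedure just described:

```python
def binary_powers(n):
    """Greedily write n > 0 as a sum of distinct powers of two,
    exactly as in the argument above: repeatedly peel off the
    largest power of two not exceeding what remains."""
    powers = []
    while n > 0:
        p = 1
        while p * 2 <= n:
            p *= 2
        powers.append(p)
        n -= p
    return powers

print(binary_powers(25))   # [16, 8, 1], i.e. 25 = 2^4 + 2^3 + 2^0
```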
### Solving Nim
Now we are ready to describe our strategy for playing Nim perfectly. For simplicity, we consider only three heaps of stones: (a, b, c), where a, b, c > 0, and take the example a = 17, b = 25, c = 13. First, write a, b, c in binary notation and align them on the right. E.g.
17 = (10001) = 16 + 1
25 = (11001) = 16 + 8 + 1
13 = (01101) = 8 + 4 + 1
Now count the number of ones in each column and note their parities. In our example, we have (from left to right): 2, 2, 1, 0, 3. Their parities are even, even, odd, even, odd respectively. The big theorem is as follows:
The above Nim configuration is a losing position if and only if there is an even number of ones in each column.
Hence, the above configuration is winning (i.e. first player wins) since there is an odd number of ones in the rightmost column. One naturally asks for a proof of this theorem. Instead of a rigorous proof, we will explain a winning strategy, from which the proof should be apparent. For now, let’s look at another example.
### Example
Consider the Nim game (30, 15, 27, 10). To compute its status, let us write:
30 = (11110) = 16 + 8 + 4 + 2
15 = (01111) = 8 + 4 + 2 + 1
27 = (11011) = 16 + 8 + 2 + 1
10 = (01010) = 8 + 2
Now there are an even number of ones in every column, so the second player wins.
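As a sketch of this status test (added here, not part of the lesson), note that "every column has an even number of ones" is the same as saying the bitwise XOR of all heap sizes is zero, a fact the lesson develops in the XOR section below:

```python
from functools import reduce

def nim_is_losing(heaps):
    """A Nim position is losing for the player to move exactly when
    every binary column has an even number of ones -- equivalently,
    when the XOR of all heap sizes is zero."""
    return reduce(lambda a, b: a ^ b, heaps, 0) == 0

print(nim_is_losing((30, 15, 27, 10)))  # True: second player wins
print(nim_is_losing((17, 25, 13)))      # False: first player wins
```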
### Playing Nim
Knowing the status of the game position is just half the battle won – you need to know how to exploit a winning position to obtain an eventual victory. Below, we will explain how to do this, by looking at the case of (17, 25, 13) above. As before, write:
17 = (10001)
25 = (11001)
13 = (01101)
Starting from the leftmost position, find the first column which has an odd number of ones:
17 = (10 [0] 01)
25 = (11 [0] 01)
13 = (01 [1] 01)
Next, pick a heap number m which has a one in that column. Such a heap must exist, since otherwise there’d be no 1’s in that column (and thus an even number of ones). In our example, we must take m = 13. Now remove the binary digits of m from that column to the right.
17 = (10 001)
25 = (11 001)
? = (01 ***)
Fill in our desired binary digits in order to have an even number of ones in each column:
17 = (10 001)
25 = (11 001)
? = (01 000)
Note that this is a losing position by our theorem, so it would do us good to leave this configuration for our opponent. Also, such a move is always possible since we’re flipping the ‘1’ of heap m in the chosen column to ‘0’, and no bits to its left are changed. So the resulting number is always smaller than m. Since (1000) = 8 in binary, in our example, a good move would be:
(17, 25, 13) → (17, 25, 8).
Thus, we see that given a winning position, it is always possible to move to a losing position for the opponent. That’s only half the theory; we also need to explain why every move from a losing position results in a winning one.
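The move-finding procedure above can be sketched in a few lines of Python (mine, not from the lesson). The test `h ^ total < h` picks out exactly a heap with a 1 in the leftmost column that has an odd number of ones, and `h ^ total` is the new heap size that makes every column even:

```python
from functools import reduce

def nim_winning_move(heaps):
    """From a winning position, return (heap_index, new_size) leaving the
    opponent a position whose columns all have an even number of ones.
    Returns None if the position is already losing."""
    total = reduce(lambda a, b: a ^ b, heaps, 0)
    if total == 0:
        return None
    for i, h in enumerate(heaps):
        if h ^ total < h:  # heap i has a 1 in the leading odd column
            return i, h ^ total
    # unreachable: some heap must have a 1 in the leading odd column

print(nim_winning_move((17, 25, 13)))  # (2, 8): the move (17,25,13) -> (17,25,8)
```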
### The XOR Operation
Given a Nim game $G = (n_1, n_2, \dots, n_r)$ if $n_1, n_2, \dots, n_{r-1}$ are fixed, then there’s only one possible $n_r$ for which G is a losing position. Indeed, suppose r = 3 and $n_1 = 37, n_2 = 52$. Then to compute $n_3$, we write the binary representations:
n1 = 37 = (100101)
n2 = 52 = (110100)
n3 = ?? = (******)
For each column of $n_3$, there’s a unique bit such that the total number of ones in that column is even:
n1 = 37 = (100101)
n2 = 52 = (110100)
n3 = ?? = (010001)
This shows that $n_3 = 17$ is unique and also gives us a recipe for calculating it: simply add the two numbers in binary and ignore any carry which might occur. Addition has never been easier! Just perform 0+0 = 0, 0+1 = 1+0 = 1, 1+1 = 0 and forget the carry. This operation is called XOR for exclusive-or, or sometimes just addition without carry and denoted by $n_3 = n_1 \oplus n_2$.
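In most programming languages this addition-without-carry is the built-in bitwise XOR; a quick check (using Python’s `^` operator, an aside not in the lesson):

```python
# Python's ^ operator is exactly addition without carry:
n1, n2 = 37, 52      # (100101) and (110100) in binary
print(n1 ^ n2)       # 17, i.e. (010001) -- the unique losing third heap
```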
Incidentally, this also explains why every losing position must lead to a winning one in the next move. For if $(n_1, n_2, \dots, n_r)$ were a losing position, then:
$n_1 \oplus n_2 \oplus \dots \oplus n_r = 0$.
Another way of writing this would be $n_r = n_1 \oplus n_2 \oplus \dots \oplus n_{r-1}$. Hence, if the first player makes a move from $n_r$, then it would result in $n_r' \ne n_1 \oplus n_2 \oplus \dots \oplus n_{r-1}$ which is a winning position. Since there’s nothing special about the last heap $n_r$, any move would also result in a winning position.
### Exercises
1. Determine the status of the following Nim games:
• (17, 18, 13),
• (18, 14, 9, 21).
2. For the games in Q1 where the first player wins, find a good move for him.
3. Given that the Nim game (15, 20, 7, m) is a second-player win, what is m?
4. Mum has baked a large almond cake which is divided into unit squares as below:
Unfortunately, the black square is burnt and inedible. Now Alice and Bob play a game as follows: each child alternately cuts the cake into two (possibly unequal) pieces, by a horizontal / vertical cut which is along a gridline. He/she then eats the edible piece and leaves the other behind. Whoever is left with the burnt piece is deemed to have lost and must do the dishes. If Alice starts the game, can she avoid the dishes?
5. In Northcott’s Game, each player controls counters of a certain colour (either black or white). At his turn, he may arbitrarily shift his counters left or right any number of squares. However, no counter may jump over another counter. The player who’s unable to make any move loses. If black starts first, who wins, or will the game never end?
6. In Misere Nim, the rules are exactly the same as Nim with one caveat: the person who takes the last stick loses the game instead of winning. Based on your understanding of classical Nim, find a strategy for Misere Nim. Explain how the strategies for the two games differ.
7. (Hard) In Nimble, we have coins in the following strip: each square contains at most one coin. Play alternates between two players. Each player, at his turn, picks up any coin and shifts it to some position on its left – without sliding off the strip and without jumping over another coin. Does the first or second player win in the following configuration?
8. Analyse the following Chinese Chess position: who wins?
9. This interesting mathematical game is the subject of some research recently. The game of Chomp is played as follows: start with an m-by-n grid of squares. Two players take turns cutting out a piece of the board by (i) picking any unit square on the board, and (ii) cutting away all squares to its right and below. Whoever is forced to take the top-left square loses.
• Find a winning strategy for the first or second player on an n-by-n square board.
• Find a winning strategy for the first or second player on a 2-by-n board.
• Determine whether the first player wins in the following configuration. (P. S. The answer depends on a and b.)
• (Hard) Prove that, in general, the first player wins on any m-by-n grid if mn > 1.
This entry was posted in Notes. Bookmark the permalink.
"domain": "mathstrek.blog",
"url": "https://mathstrek.blog/2012/08/04/combinatorial-game-theory-ii/",
"openwebmath_score": 0.6622587442398071,
"openwebmath_perplexity": 434.24976725936136,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9920620047251287,
"lm_q2_score": 0.6584174938590246,
"lm_q1q2_score": 0.6531909789038791
} |
https://www.coursehero.com/sg/college-algebra/properties-of-equality/ | # Properties of Equality
The addition property of equality can be used to undo subtraction when solving equations.
Linear equations are the simplest types of equations that contain variables. In a linear equation in one variable, each term can be written as a number or a product of the variable and a number. The variable is not raised to a power, in the denominator of a fraction, or under a radical, such as a square root. The variable can be multiplied or divided by a number, called a constant, or a number may be added to or subtracted from the variable.
### Examples of Linear and Nonlinear Equations
| Linear Equations | Nonlinear Equations |
| --- | --- |
| $x+3=7$ | $x^2+4x=5$ |
| $2x=4$ | $\sqrt{x+3}=2$ |
| $\frac{x}{5}+1=\frac{2}{3}$ | $\frac{3}{x}=x$ |
| $6x+1=4x-3$ | $10^x=100$ |
A solution of an equation is any value of the variable that makes the equation true, which means both sides are equal. Solving an equation means to find all possible solutions. The most common method of solving linear equations in one variable is to use properties of equality to isolate the variable, or rewrite the equation with the variable alone on one side.
The addition property of equality states that the solution of an equation does not change after adding the same number to both sides of the equation.
If the same quantity is added to both sides of a true equation, the resulting equation is still true. If $a=b$, then:
$a+c=b+c$
Step-By-Step Example
Applying the Addition Property of Equality
Solve the equation and then check whether the solution is true.
$x-5=8$
Step 1
Use the addition property of equality. Add 5 to both sides of the equation to undo the subtraction.
\begin{aligned}x-5&=8\\x-5+5&=8+5 \end{aligned}
Step 2
Simplify by combining like terms.
\begin{aligned}x-5+5&=8+5\\x&=13 \end{aligned}
Step 3
Substitute 13 for $x$ in the original equation to determine whether the solution is true.
\begin{aligned}x-5&=8\\13-5&=8\\8&=8\end{aligned}
Solution
The equation is true. So, $x$ is equal to 13.
### Subtraction Property of Equality
The subtraction property of equality can be used to undo addition when solving equations.
The subtraction property of equality states that the solution of an equation does not change after subtracting the same number from both sides of the equation.
Subtraction Property of Equality Example
If the same quantity is subtracted from both sides of a true equation, the resulting equation is still true. If $a=b$, then:
$a-c=b-c$
Step-By-Step Example
Applying the Subtraction Property of Equality
Solve the equation and then check whether the solution is true.
$x+4=12$
Step 1
Use the subtraction property of equality. Subtract 4 from both sides to undo the addition:
\begin{aligned}x+4&=12\\x+4-4&=12-4\end{aligned}
Step 2
Simplify by combining like terms:
\begin{aligned}x+4-4&=12-4\\x&=8\end{aligned}
Step 3
Substitute 8 for $x$ in the original equation to determine whether the solution is true.
\begin{aligned}x+4&=12\\8+4&=12\\12&=12\end{aligned}
Solution
The equation is true. So, $x$ is equal to 8.
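As a sketch (mine, not part of the original page), the properties of equality can be chained mechanically. The function below solves any equation of the form $ax + b = c$ with $a \neq 0$ by applying the subtraction property and then the division property (introduced below), and checks the answer the way Step 3 does:

```python
def solve_linear(a, b, c):
    """Solve a*x + b = c for x: subtract b from both sides
    (subtraction property), then divide both sides by a != 0
    (division property)."""
    if a == 0:
        raise ValueError("a must be nonzero")
    x = (c - b) / a
    assert a * x + b == c  # check the solution, as in Step 3
    return x

print(solve_linear(1, -5, 8))   # 13.0, solving x - 5 = 8
print(solve_linear(2, 0, 18))   # 9.0,  solving 2x = 18
```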
### Multiplication Property of Equality
The multiplication property of equality can be used to undo division when solving equations.
The multiplication property of equality states that the solution of an equation does not change after multiplying both sides of an equation by the same nonzero number. Note that multiplying both sides of the equation by zero produces $0=0$, which is true for every value of the variable, so the equation no longer identifies the solution.
Multiplication Property of Equality Example
If both sides of a true equation are multiplied by the same nonzero quantity, the resulting equation is still true. If $a=b$ and $c\neq0$, then:
$ac=bc$
Step-By-Step Example
Applying the Multiplication Property of Equality
Solve the equation and then check whether the solution is true.
$\frac{x}{5}=7$
Step 1
Use the multiplication property of equality. Multiply both sides by 5 to undo the division:
\begin{aligned}\frac{x}{5}&=7\\\left(\frac{x}{5}\right)(5)&=(7) (5)\end{aligned}
Step 2
Simplify the equation.
\begin{aligned}\left(\frac{x}{5}\right)(5)&=(7) (5)\\x&=35\end{aligned}
Step 3
Substitute 35 for $x$ in the original equation to determine whether the solution is true.
\begin{aligned}\frac{x}{5}&=7\\\frac{35}{5}&=7\\7&=7 \end{aligned}
Solution
The equation is true. So, the value of $x$ is 35.
### Division Property of Equality
The division property of equality can be used to undo multiplication when solving equations.
The division property of equality states that the solution of an equation does not change after dividing both sides of an equation by the same nonzero number. Division is the inverse operation of multiplication, which means that dividing a variable by a number will undo multiplication by the same number. So the division property of equality is used to solve equations in which a variable is multiplied by a number.
Division Property of Equality Example
If both sides of a true equation are divided by the same nonzero quantity, the resulting equation is still true. If $a=b$ and $c\neq0$, then:
$\frac{a}{c}=\frac{b}{c}$
Step-By-Step Example
Applying the Division Property of Equality
Solve the equation and then check whether the solution is true.
$2x=18$
Step 1
Use the division property of equality. Divide both sides by 2 to undo the multiplication:
\begin{aligned}2x&=18\\ \frac{2x}{2}&=\frac{18}{2}\end{aligned}
Step 2
Simplify the equation.
\begin{aligned} \frac{2x}{2}&=\frac{18}{2}\\x&=9\end{aligned}
Step 3
Substitute 9 for $x$ in the original equation to determine whether the solution is true.
\begin{aligned}2x&=18\\(2)(9)&=18\\18&=18 \end{aligned}
Solution
The equation is true. So, the value of $x$ is 9. | 2019-01-19T14:52:40 | {
"domain": "coursehero.com",
"url": "https://www.coursehero.com/sg/college-algebra/properties-of-equality/",
"openwebmath_score": 0.8516951203346252,
"openwebmath_perplexity": 357.5886018251146,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.992062004490436,
"lm_q2_score": 0.6584174938590246,
"lm_q1q2_score": 0.6531909787493533
} |
http://mathonline.wikidot.com/convergence-criterion-for-series-in-hilbert-spaces | Convergence Criterion for Series in Hilbert Spaces
# Convergence Criterion for Series in Hilbert Spaces
Recall from the Bessel's Inequality for Inner Product Spaces page that if $H$ is an inner product space and $(x_n)_{n=1}^{\infty}$ is an orthonormal sequence of points in $H$ then for every $y \in H$ we have that:
(1)
\begin{align} \quad \sum_{n=1}^{\infty} |\langle y, x_n \rangle|^2 \leq \| y \|^2 \end{align}
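As a finite-dimensional sanity check (an illustration added here, not from the page), Bessel's inequality can be verified numerically in $\mathbb{R}^3$ with the standard inner product and the orthonormal pair $e_1, e_2$:

```python
# Bessel's inequality in R^3: sum of squared coefficients <= ||y||^2.
def inner(u, v):
    """Standard inner product on R^n."""
    return sum(ui * vi for ui, vi in zip(u, v))

e = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]  # an orthonormal sequence
y = (3.0, -4.0, 12.0)

bessel_sum = sum(inner(y, en) ** 2 for en in e)  # 9 + 16 = 25
print(bessel_sum, inner(y, y))                   # 25.0 <= 169.0
```

The inequality is strict here because $e_1, e_2$ do not span all of $\mathbb{R}^3$; the $z$-component of $y$ is invisible to the coefficients.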
The following theorem will give us a nice convergence criterion for a particular type of series of points in a Hilbert space.
Theorem 1: Let $H$ be a Hilbert space and let $(x_n)_{n=1}^{\infty}$ be an orthonormal sequence of points in $H$. Then for every $y \in H$ the series $\displaystyle{\sum_{n=1}^{\infty} \langle y, x_n \rangle x_n}$ converges in $H$ to some $z \in H$ such that $z - y \perp \{ x_1, x_2, ... \}$.
• Proof: For each $N \in \mathbb{N}$ define $s_N$ to be:
(2)
\begin{align} \quad s_N = \sum_{n=1}^{N} \langle y, x_n \rangle x_n \end{align}
• Then $(s_N)_{N=1}^{\infty}$ is the sequence of partial sums for the series $\displaystyle{\sum_{n=1}^{\infty} \langle y, x_n \rangle x_n}$. We will show that the sequence of partial sums converges. Now for $M \geq N+1$ we have that:
(3)
\begin{align} \quad \| s_M - s_N \|^2 = \biggr \| \sum_{n=1}^{M} \langle y, x_n \rangle x_n - \sum_{n=1}^{N} \langle y, x_n \rangle x_n \biggr \|^2 = \biggr \| \sum_{n=N+1}^{M} \langle y, x_n \rangle x_n \biggr \|^2 \end{align}
(4)
\begin{align} \quad \| s_M - s_N \|^2 = \sum_{n=N+1}^{M} |\langle y, x_n \rangle|^2 \end{align}
• Since the $x_n$ are orthonormal, the Pythagorean theorem gives the last equality. By Bessel's inequality (referenced at the top of this page), we have that $\displaystyle{\sum_{n=1}^{\infty} |\langle y, x_n \rangle|^2 \leq \| y \|^2}$, so this series of non-negative terms converges. Therefore its tails $\displaystyle{\sum_{n=N+1}^{M} |\langle y, x_n \rangle|^2}$ can be made arbitrarily small by taking $N$ large, and so $(s_N)_{N=1}^{\infty}$ is a Cauchy sequence. Since $H$ is a Hilbert space, $H$ is complete. So the series $\displaystyle{\sum_{n=1}^{\infty} \langle y, x_n \rangle x_n}$ converges to some $z \in H$.
• To show that $z - y \perp \{ x_1, x_2, ... \}$ we will show that $z - y \perp x_m$ for every $m \in \mathbb{N}$. For each $m \in \mathbb{N}$ let $N \in \mathbb{N}$ be such that $N \geq m$. Then:
(5)
\begin{align} \quad \langle s_N, x_m \rangle = \biggr \langle \sum_{n=1}^{N} \langle y, x_n \rangle x_n, x_m \biggr \rangle = \sum_{n=1}^{N} \langle y, x_n \rangle \langle x_n, x_m \rangle = \langle y, x_m \rangle \underbrace{\langle x_m, x_m \rangle}_{= 1} = \langle y, x_m \rangle \end{align}
• Therefore:
(6)
\begin{align} \quad \langle s_N - y, x_m \rangle = 0 \end{align}
• Since the inner product is a continuous function, taking the limit as $N \to \infty$ gives us that:
(7)
\begin{align} \quad 0 = \lim_{N \to \infty} \langle s_N - y, x_m \rangle = \biggr \langle \lim_{N \to \infty} s_N - y, x_m \biggr \rangle = \langle z - y, x_m \rangle \end{align}
• Therefore:
(8)
\begin{align} \quad z - y \perp \{ x_1, x_2, ... \} \quad \blacksquare \end{align} | 2017-07-27T14:16:30 | {
"domain": "wikidot.com",
"url": "http://mathonline.wikidot.com/convergence-criterion-for-series-in-hilbert-spaces",
"openwebmath_score": 0.9994683861732483,
"openwebmath_perplexity": 385.4174016548919,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9920620043730896,
"lm_q2_score": 0.6584174938590246,
"lm_q1q2_score": 0.6531909786720903
} |
https://jeremykun.com/category/linear-algebra/ | The Inner Product as a Decision Rule
The standard inner product of two vectors has some nice geometric properties. Given two vectors $x, y \in \mathbb{R}^n$, where by $x_i$ I mean the $i$-th coordinate of $x$, the standard inner product (which I will interchangeably call the dot product) is defined by the formula
$\displaystyle \langle x, y \rangle = x_1 y_1 + \dots + x_n y_n$
This formula, simple as it is, produces a lot of interesting geometry. An important such property, one which is discussed in machine learning circles more than pure math, is that it is a very convenient decision rule.
In particular, say we’re in the Euclidean plane, and we have a line $L$ passing through the origin, with $w$ being a unit vector perpendicular to $L$ (“the normal” to the line).
If you take any vector $x$, then the dot product $\langle x, w \rangle$ is positive if $x$ is on the same side of $L$ as $w$, and negative otherwise. The dot product is zero if and only if $x$ is exactly on the line $L$, including when $x$ is the zero vector.
Left: the dot product of $w$ and $x$ is positive, meaning they are on the same side of $w$. Right: The dot product is negative, and they are on opposite sides.
Here is an interactive demonstration of this property. Click the image below to go to the demo, and you can drag the vector arrowheads and see the decision rule change.
Click above to go to the demo
The code for this demo is available in a github repository.
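In code the decision rule is essentially a one-liner; here is a small Python sketch (mine, not from the post) classifying points against a line through the origin with normal $w$:

```python
def side(x, w):
    """Decision rule: the sign of <x, w> says which side of the line
    (through the origin, with normal w) the point x lies on."""
    dot = sum(xi * wi for xi, wi in zip(x, w))
    if dot > 0:
        return "same side as w"
    if dot < 0:
        return "opposite side"
    return "on the line"

w = (0.0, 1.0)               # normal to the x-axis
print(side((3.0, 2.0), w))   # same side as w
print(side((3.0, -2.0), w))  # opposite side
print(side((5.0, 0.0), w))   # on the line
```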
It’s always curious, at first, that multiplying and summing produces such geometry. Why should this seemingly trivial arithmetic do anything useful at all?
The core fact that makes it work, however, is that the dot product tells you how one vector projects onto another. When I say “projecting” a vector $x$ onto another vector $w$, I mean you take only the components of $x$ that point in the direction of $w$. The demo shows what the result looks like using the red (or green) vector.
In two dimensions this is easy to see, as you can draw the triangle which has $x$ as the hypotenuse, with $w$ spanning one of the two legs of the triangle as follows:
If we call $a$ the (vector) leg of the triangle parallel to $w$, while $b$ is the dotted line (as a vector, parallel to $L$), then as vectors $x = a + b$. The projection of $x$ onto $w$ is just $a$.
Another way to think of this is that the projection is $x$, modified by removing any part of $x$ that is perpendicular to $w$. Using some colorful language: you put your hands on either side of $x$ and $w$, and then you squish $x$ onto $w$ along the line perpendicular to $w$ (i.e., along $b$).
And if $w$ is a unit vector, then the length of $a$, that is, the length of the projection of $x$ onto $w$, is exactly the inner product $\langle x, w \rangle$.
Moreover, if the angle between $x$ and $w$ is larger than 90 degrees, the projected vector will point in the opposite direction of $w$, so it’s really a “signed” length.
Left: the projection points in the same direction as $w$. Right: the projection points in the opposite direction.
And this is precisely why the decision rule works. This 90-degree boundary is the line perpendicular to $y$.
More technically said: Let $x, y \in \mathbb{R}^n$ be two vectors, and $\langle x,y \rangle$ their dot product. Define by $\| y \|$ the length of $y$, specifically $\sqrt{\langle y, y \rangle}$. Define $\text{proj}_{y}(x)$ by first letting $y' = \frac{y}{\| y \|}$, and then letting $\text{proj}_{y}(x) = \langle x,y' \rangle y'$. In words, you scale $y$ to a unit vector $y'$, use the result to compute the inner product, and then scale $y'$ so that its length is $\langle x, y' \rangle$. Then
Theorem: Geometrically, $\text{proj}_y(x)$ is the projection of $x$ onto the line spanned by $y$.
This theorem is true for any $n$-dimensional vector space, since if you have two vectors you can simply apply the reasoning for 2-dimensions to the 2-dimensional plane containing $x$ and $y$. In that case, the decision boundary for a positive/negative output is the entire $(n-1)$-dimensional hyperplane perpendicular to $y$ (the vector being projected onto).
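A quick numerical sketch (mine, not from the post) of $\text{proj}_y(x)$, using the equivalent single-scalar form $\left(\langle x, y \rangle / \langle y, y \rangle\right) y$, which folds the two normalizations by $\| y \|$ into one division:

```python
def proj(y, x):
    """proj_y(x): the projection of x onto the line spanned by y,
    computed as (<x, y> / <y, y>) * y."""
    dot = lambda u, v: sum(ui * vi for ui, vi in zip(u, v))
    c = dot(x, y) / dot(y, y)
    return tuple(c * yi for yi in y)

x, y = (3.0, 4.0), (1.0, 0.0)
p = proj(y, x)
print(p)  # (3.0, 0.0)
# the residual x - p is perpendicular to y:
print(sum((xi - pi) * yi for xi, pi, yi in zip(x, p, y)))  # 0.0
```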
In fact, the usual formula for the angle between two vectors, i.e. the formula $\langle x, y \rangle = \|x \| \cdot \| y \| \cos \theta$, is a restatement of the projection theorem in terms of trigonometry. The $\langle x, y' \rangle$ part of the projection formula (how much you scale the output) is equal to $\| x \| \cos \theta$. At the end of this post we have a proof of the cosine-angle formula above.
Part of why this decision rule property is so important is that this is a linear function, and linear functions can be optimized relatively easily. When I say that, I specifically mean that there are many known algorithms for optimizing linear functions, which don’t have obscene runtime or space requirements. This is a big reason why mathematicians and statisticians start the mathematical modeling process with linear functions. They’re inherently simpler.
In fact, there are many techniques in machine learning—a prominent one is the so-called Kernel Trick—that exist solely to take data that is not inherently linear in nature (cannot be fruitfully analyzed by linear methods) and transform it into a dataset that is. Using the Kernel Trick as an example to foreshadow some future posts on Support Vector Machines, the idea is to take data which cannot be separated by a line, and transform it (usually by adding new coordinates) so that it can. Then the decision rule, computed in the larger space, is just a dot product. Irene Papakonstantinou neatly demonstrates this with paper folding and scissors. The tradeoff is that the size of the ambient space increases, and it might increase so much that it makes computation intractable. Luckily, the Kernel Trick avoids this by remembering where the data came from, so that one can take advantage of the smaller space to compute what would be the inner product in the larger space.
Next time we’ll see how this decision rule shows up in an optimization problem: finding the “best” hyperplane that separates an input set of red and blue points into monochromatic regions (provided that is possible). Finding this separator is a core subroutine of the Support Vector Machine technique, and therein lie interesting algorithms. After we see the core SVM algorithm, we’ll see how the Kernel Trick fits into the method to allow nonlinear decision boundaries.
Proof of the cosine angle formula
Theorem: The inner product $\langle v, w \rangle$ is equal to $\| v \| \| w \| \cos(\theta)$, where $\theta$ is the angle between the two vectors.
Note that this angle is computed in the 2-dimensional subspace spanned by $v, w$, viewed as a typical flat plane, and this is a 2-dimensional plane regardless of the dimension of $v, w$.
Proof. If either $v$ or $w$ is zero, then both sides of the equation are zero and the theorem is trivial, so we may assume both are nonzero. Label a triangle with sides $v,w$ and the third side $v-w$. Now the length of each side is $\| v \|, \| w\|,$ and $\| v-w \|$, respectively. Assume for the moment that $\theta$ is not 0 or 180 degrees, so that this triangle is not degenerate.
The law of cosines allows us to write
$\displaystyle \| v - w \|^2 = \| v \|^2 + \| w \|^2 - 2 \| v \| \| w \| \cos(\theta)$
Moreover, The left hand side is the inner product of $v-w$ with itself, i.e. $\| v - w \|^2 = \langle v-w , v-w \rangle$. We’ll expand $\langle v-w, v-w \rangle$ using two facts. The first is trivial from the formula, that inner product is symmetric: $\langle v,w \rangle = \langle w, v \rangle$. Second is that the inner product is linear in each input. In particular for the first input: $\langle x + y, z \rangle = \langle x, z \rangle + \langle y, z \rangle$ and $\langle cx, z \rangle = c \langle x, z \rangle$. The same holds for the second input by symmetry of the two inputs. Hence we can split up $\langle v-w, v-w \rangle$ as follows.
\displaystyle \begin{aligned} \langle v-w, v-w \rangle &= \langle v, v-w \rangle - \langle w, v-w \rangle \\ &= \langle v, v \rangle - \langle v, w \rangle - \langle w, v \rangle + \langle w, w \rangle \\ &= \| v \|^2 - 2 \langle v, w \rangle + \| w \|^2 \\ \end{aligned}
Combining our two offset equations, we can subtract $\| v \|^2 + \| w \|^2$ from each side and get
$\displaystyle -2 \|v \| \|w \| \cos(\theta) = -2 \langle v, w \rangle,$
Which, after dividing by $-2$, proves the theorem if $\theta \not \in \{0, 180 \}$.
Now if $\theta = 0$ or 180 degrees, the vectors are parallel, so we can write one as a scalar multiple of the other. Say $w = cv$ for $c \in \mathbb{R}$. In that case, $\langle v, cv \rangle = c \| v \| \| v \|$. Now $\| w \| = | c | \| v \|$, since a norm is a length and is hence non-negative (but $c$ can be negative). Indeed, if $v, w$ are parallel but pointing in opposite directions, then $c < 0$, so $\cos(\theta) = -1$, and $c \| v \| = - \| w \|$. Otherwise $c > 0$ and $\cos(\theta) = 1$. This allows us to write $c \| v \| \| v \| = \| w \| \| v \| \cos(\theta)$, and this completes the final case of the theorem.
$\square$
The Reasonable Effectiveness of the Multiplicative Weights Update Algorithm
Christos Papadimitriou, who studies multiplicative weights in the context of biology.
Hard to believe
Sanjeev Arora and his coauthors consider it “a basic tool [that should be] taught to all algorithms students together with divide-and-conquer, dynamic programming, and random sampling.” Christos Papadimitriou calls it “so hard to believe that it has been discovered five times and forgotten.” It has formed the basis of algorithms in machine learning, optimization, game theory, economics, biology, and more.
What mystical algorithm has such broad applications? Now that computer scientists have studied it in generality, it’s known as the Multiplicative Weights Update Algorithm (MWUA). Procedurally, the algorithm is simple. I can even describe the core idea in six lines of pseudocode. You start with a collection of $n$ objects, and each object has a weight.
Set all the object weights to be 1.
For some large number of rounds:
Pick an object at random proportionally to the weights
Some event happens
Increase the weight of the chosen object if it does well in the event
Otherwise decrease the weight
The name “multiplicative weights” comes from how we implement the last step: if the weight of the chosen object at step $t$ is $w_t$ before the event, and $G$ represents how well the object did in the event, then we’ll update the weight according to the rule:
$\displaystyle w_{t+1} = w_t (1 + G)$
Think of this as increasing the weight by a small multiple of the object’s performance on a given round.
Here is a simple example of how it might be used. You have some money you want to invest, and you have a bunch of financial experts who are telling you what to invest in every day. So each day you pick an expert, and you follow their advice, and you either make a thousand dollars, or you lose a thousand dollars, or something in between. Then you repeat, and your goal is to figure out which expert is the most reliable.
This is how we use multiplicative weights: if we number the experts $1, \dots, N$, we give each expert a weight $w_i$ which starts at 1. Then, each day we pick an expert at random (where experts with larger weights are more likely to be picked) and at the end of the day we have some gain or loss $G$. Then we update the weight of the chosen expert by multiplying it by $(1 + G / 1000)$. Sometimes you have enough information to update the weights of experts you didn’t choose, too. The theoretical guarantees of the algorithm say we’ll find the best expert quickly (“quickly” will be concrete later).
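The experts story above can be sketched in a few lines. This is my illustration, not the post's implementation (which comes later); it assumes the full-information variant in which every expert's weight is updated each round, with rewards scaled into $[-1, 1]$:

```python
def mwua(num_experts, rewards, rounds, eta=0.1):
    """Full-information multiplicative weights: each round, every expert i
    receives a reward rewards(i, t) in [-1, 1], and its weight is
    multiplied by (1 + eta * reward). Returns the final weights."""
    weights = [1.0] * num_experts
    for t in range(rounds):
        for i in range(num_experts):
            weights[i] *= 1 + eta * rewards(i, t)
    return weights

# Expert 0 is always right, expert 1 always wrong, expert 2 alternates.
rewards = lambda i, t: 1 if i == 0 else (-1 if i == 1 else (1 if t % 2 else -1))
final = mwua(3, rewards, rounds=50)
print(max(range(3), key=lambda i: final[i]))  # 0 -- the reliable expert dominates
```

After 50 rounds the weights separate exponentially: the consistently good expert's weight grows like $(1 + \eta)^t$ while the bad one's shrinks like $(1 - \eta)^t$.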
In fact, let’s play a game where you, dear reader, get to decide the rewards for each expert and each day. I programmed the multiplicative weights algorithm to react according to your choices. Click the image below to go to the demo.
This core mechanism of updating weights can be interpreted in many ways, and that’s part of the reason it has sprouted up all over mathematics and computer science. Just a few examples of where this has led:
1. In game theory, weights are the “belief” of a player about the strategy of an opponent. The most famous algorithm to use this is called Fictitious Play, and others include EXP3 for minimizing regret in the so-called “adversarial bandit learning” problem.
2. In machine learning, weights are the difficulty of a specific training example, so that higher weights mean the learning algorithm has to “try harder” to accommodate that example. The first result I’m aware of for this is the Perceptron (and similar Winnow) algorithm for learning hyperplane separators. The most famous is the AdaBoost algorithm.
3. Analogously, in optimization, the weights are the difficulty of a specific constraint, and this technique can be used to approximately solve linear and semidefinite programs. The approximation is because MWUA only provides a solution with some error.
4. In mathematical biology, the weights represent the fitness of individual alleles, and filtering reproductive success based on this and updating weights for successful organisms produces a mechanism very much like evolution. With modifications, it also provides a mechanism through which to understand sex in the context of evolutionary biology.
5. The TCP protocol, which basically defined the internet, uses additive and multiplicative weight updates (which are very similar in the analysis) to manage congestion.
6. You can get easy $\log(n)$-approximation algorithms for many NP-hard problems, such as set cover.
Additional, more technical examples can be found in this survey of Arora et al.
In the rest of this post, we’ll implement a generic Multiplicative Weights Update Algorithm, we’ll prove its main theoretical guarantee, and we’ll implement a linear program solver as an example of its applicability. As usual, all of the code used in the making of this post is available in a Github repository.
The generic MWUA algorithm
Let’s start by writing down pseudocode and an implementation for the MWUA algorithm in full generality.
In general we have some set $X$ of objects and some set $Y$ of “event outcomes” which can be completely independent. If these sets are finite, we can write down a table $M$ whose rows are objects, whose columns are outcomes, and whose $i,j$ entry $M(i,j)$ is the reward produced by object $x_i$ when the outcome is $y_j$. We will also write this as $M(x, y)$ for object $x$ and outcome $y$. The only assumption we’ll make on the rewards is that the values $M(x, y)$ are bounded by some small constant $B$ (by small I mean $B$ should not require exponentially many bits to write down as compared to the size of $X$). In symbols, $M(x,y) \in [0,B]$. There are minor modifications you can make to the algorithm if you want negative rewards, but for simplicity we will leave that out. Note the table $M$ just exists for analysis, and the algorithm does not know its values. Moreover, while the values in $M$ are static, the choice of outcome $y$ for a given round may be nondeterministic.
The MWUA algorithm randomly chooses an object $x \in X$ in every round, observing the outcome $y \in Y$, and collecting the reward $M(x,y)$ (or losing it as a penalty). The guarantee of the MWUA theorem is that the expected sum of rewards/penalties of MWUA is not much worse than if one had picked the best object (in hindsight) every single round.
Let’s describe the algorithm in notation first and build up pseudocode as we go. The input to the algorithm is the set of objects, a subroutine that observes an outcome, a black-box reward function, a learning rate parameter, and a number of rounds.
def MWUA(objects, observeOutcome, reward, learningRate, numRounds):
    ...
We define for each object $x$ a nonnegative number $w_x$ we call a “weight.” The weights will change over time, so we’ll also subscript a weight with a round number $t$, i.e. $w_{x,t}$ is the weight of object $x$ in round $t$. Initially, all the weights are $1$. Then MWUA proceeds in rounds. We start each round by drawing an object at random with probability proportional to the weights. Then we observe the outcome for that round and the reward for that round.
import random

# draw: [float] -> int
# pick an index from the given list of floats proportionally
# to the size of the entry (i.e. normalize to a probability
# distribution and draw according to the probabilities).
def draw(weights):
    choice = random.uniform(0, sum(weights))
    choiceIndex = 0

    for weight in weights:
        choice -= weight
        if choice <= 0:
            return choiceIndex

        choiceIndex += 1
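As a quick sanity check that draw really samples proportionally to the weights, here is a small empirical test (the function is repeated so the snippet runs standalone, and the seed is arbitrary):

```python
import random

def draw(weights):
    # sample an index with probability proportional to its weight
    choice = random.uniform(0, sum(weights))
    choiceIndex = 0
    for weight in weights:
        choice -= weight
        if choice <= 0:
            return choiceIndex
        choiceIndex += 1

# with weights [1, 3], index 1 should come up about 3/4 of the time
random.seed(0)
counts = [0, 0]
for _ in range(10000):
    counts[draw([1, 3])] += 1
print(counts[1] / 10000)  # roughly 0.75
```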
# MWUA: the multiplicative weights update algorithm
def MWUA(objects, observeOutcome, reward, learningRate, numRounds):
    weights = [1] * len(objects)

    for t in range(numRounds):
        chosenObjectIndex = draw(weights)
        chosenObject = objects[chosenObjectIndex]

        outcome = observeOutcome(t, weights, chosenObject)
        thisRoundReward = reward(chosenObject, outcome)
        ...
Sampling objects in this way is the same as associating a distribution $D_t$ to each round, where if $S_t = \sum_{x \in X} w_{x,t}$ then the probability of drawing $x$, which we denote $D_t(x)$, is $w_{x,t} / S_t$. We don’t need to keep track of this distribution in the actual run of the algorithm, but it will help us with the mathematical analysis.
Next comes the weight update step. Let’s call our learning rate parameter $\varepsilon$. In round $t$, say we have object $x_t$ and outcome $y_t$, so the reward is $M(x_t, y_t)$. We update the weight of the chosen object $x_t$ according to the formula:
$\displaystyle w_{x_t, t+1} = w_{x_t, t} (1 + \varepsilon M(x_t, y_t) / B)$
In the more general event that you have rewards for all objects (if not, the reward-producing function can output zero), you would perform this weight update on all objects $x \in X$. This turns into the following Python snippet, where we hide the division by $B$ into the choice of learning rate:
# MWUA: the multiplicative weights update algorithm
def MWUA(objects, observeOutcome, reward, learningRate, numRounds):
    weights = [1] * len(objects)
    cumulativeReward = 0
    outcomes = []

    for t in range(numRounds):
        chosenObjectIndex = draw(weights)
        chosenObject = objects[chosenObjectIndex]

        outcome = observeOutcome(t, weights, chosenObject)
        outcomes.append(outcome)

        thisRoundReward = reward(chosenObject, outcome)
        cumulativeReward += thisRoundReward

        for i in range(len(weights)):
            weights[i] *= (1 + learningRate * reward(objects[i], outcome))

    return weights, cumulativeReward, outcomes
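To see the pieces working together, here is a toy run on a three-expert problem where one expert always earns reward 1 and the others earn 0; the fixed rewards are invented purely for illustration. This standalone version loops over range(numRounds) and returns the weights, cumulative reward, and outcomes, matching how the linear program solver later calls MWUA:

```python
import random

def draw(weights):
    # sample an index with probability proportional to its weight
    choice = random.uniform(0, sum(weights))
    choiceIndex = 0
    for weight in weights:
        choice -= weight
        if choice <= 0:
            return choiceIndex
        choiceIndex += 1

def MWUA(objects, observeOutcome, reward, learningRate, numRounds):
    weights = [1] * len(objects)
    cumulativeReward = 0
    outcomes = []
    for t in range(numRounds):
        chosenObject = objects[draw(weights)]
        outcome = observeOutcome(t, weights, chosenObject)
        outcomes.append(outcome)
        cumulativeReward += reward(chosenObject, outcome)
        # update every object's weight, since here we know all rewards
        for i in range(len(weights)):
            weights[i] *= (1 + learningRate * reward(objects[i], outcome))
    return weights, cumulativeReward, outcomes

# toy instance: three "experts"; expert 2 always earns reward 1, the rest 0
random.seed(0)
finalWeights, cumulativeReward, _ = MWUA(
    objects=[0, 1, 2],
    observeOutcome=lambda t, weights, chosen: None,  # outcome is irrelevant here
    reward=lambda obj, outcome: 1 if obj == 2 else 0,
    learningRate=0.1,
    numRounds=50,
)
print(finalWeights)  # expert 2's weight is (1.1)**50; the others stay at 1.0
```

After fifty rounds, expert 2’s weight has grown past 100 while the others haven’t moved, so almost all of the sampling probability concentrates on the best expert.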
One of the amazing things about this algorithm is that the outcomes and rewards could be chosen adaptively by an adversary who knows everything about the MWUA algorithm (except which random numbers the algorithm generates to make its choices). This means that the rewards in round $t$ can depend on the weights in that same round! We will exploit this when we solve linear programs later in this post.
But even in such an oppressive, exploitative environment, MWUA persists and achieves its guarantee. And now we can state that guarantee.
Theorem (from Arora et al): The cumulative reward of the MWUA algorithm is, up to constant multiplicative factors, at least the cumulative reward of the best object minus $\log(n)$, where $n$ is the number of objects. (Exact formula at the end of the proof)
The core of the proof, which we’ll state as a lemma, uses one of the most elegant proof techniques in all of mathematics. It’s the idea of constructing a potential function, and tracking the change in that potential function over time. Such a proof usually has the mysterious script:
1. Define potential function, in our case $S_t$.
2. State what seems like trivial facts about the potential function to write $S_{t+1}$ in terms of $S_t$, and hence get general information about $S_T$ for some large $T$.
3. Theorem is proved.
4. Wait, what?
Clearly, coming up with a useful potential function is a difficult and prized skill.
In this proof our potential function is the sum of the weights of the objects in a given round, $S_t = \sum_{x \in X} w_{x, t}$. Now the lemma.
Lemma: Let $B$ be the bound on the size of the rewards, and $0 < \varepsilon < 1/2$ a learning parameter. Recall that $D_t(x)$ is the probability that MWUA draws object $x$ in round $t$. Write the expected reward for MWUA for round $t$ as the following (using only the definition of expected value):
$\displaystyle R_t = \sum_{x \in X} D_t(x) M(x, y_t)$
Then the claim of the lemma is:
$\displaystyle S_{t+1} \leq S_t e^{\varepsilon R_t / B}$
Proof. Expand $S_{t+1} = \sum_{x \in X} w_{x, t+1}$ using the definition of the MWUA update:
$\displaystyle \sum_{x \in X} w_{x, t+1} = \sum_{x \in X} w_{x, t}(1 + \varepsilon M(x, y_t) / B)$
Now distribute $w_{x, t}$ and split into two sums:
$\displaystyle \dots = \sum_{x \in X} w_{x, t} + \frac{\varepsilon}{B} \sum_{x \in X} w_{x,t} M(x, y_t)$
Using the fact that $D_t(x) = \frac{w_{x,t}}{S_t}$, we can replace $w_{x,t}$ with $D_t(x) S_t$, which lets us rewrite the sum in terms of $R_t$:
\displaystyle \begin{aligned} \dots &= S_t + \frac{\varepsilon S_t}{B} \sum_{x \in X} D_t(x) M(x, y_t) \\ &= S_t \left ( 1 + \frac{\varepsilon R_t}{B} \right ) \end{aligned}
And then using the fact that $(1 + x) \leq e^x$ (Taylor series), we can bound the last expression by $S_te^{\varepsilon R_t / B}$, as desired.
$\square$
Now using the lemma, we can get a hold on $S_T$ for a large $T$, namely that
$\displaystyle S_T \leq S_1 e^{\varepsilon \sum_{t=1}^T R_t / B}$
If $|X| = n$ then $S_1=n$, simplifying the above. Moreover, the sum of the weights in round $T$ is certainly greater than any single weight. Each weight $w_{x,T}$ is the product $\prod_t (1 + \varepsilon M(x, y_t)/B)$, which is at least $(1+\varepsilon)^{\sum_t M(x,y_t)/B}$ since $(1+\varepsilon)^c \leq 1 + \varepsilon c$ for $c \in [0,1]$. So for every fixed object $x \in X$,
$\displaystyle S_T \geq w_{x,T} \geq (1 + \varepsilon)^{\sum_t M(x, y_t) / B}$
Squeezing $S_T$ between these two inequalities and taking logarithms (to simplify the exponents) gives
$\displaystyle \left ( \sum_t M(x, y_t) / B \right ) \log(1+\varepsilon) \leq \log n + \frac{\varepsilon}{B} \sum_t R_t$
Multiply through by $B$, divide by $\varepsilon$, rearrange, and use the fact that when $0 < \varepsilon < 1/2$ we have $\log(1 + \varepsilon) \geq \varepsilon - \varepsilon^2$ (Taylor series) to get
$\displaystyle \sum_t R_t \geq \left [ \sum_t M(x, y_t) \right ] (1-\varepsilon) - \frac{B \log n}{\varepsilon}$
The bracketed term is the payoff of object $x$, and MWUA’s payoff is at least a fraction of that minus the logarithmic term. The bound applies to any object $x \in X$, and hence to the best one. This proves the theorem.
$\square$
Briefly discussing the bound itself, we see that the smaller the learning rate is, the closer you eventually get to the best object, but by contrast the more the subtracted quantity $B \log(n) / \varepsilon$ hurts you. If your target is an absolute error bound against the best performing object on average, you can do more algebra to determine how many rounds you need in terms of a fixed $\delta$. The answer is roughly: let $\varepsilon = O(\delta / B)$ and pick $T = O(B^2 \log(n) / \delta^2)$. See this survey for more.
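To make the parameter recipe concrete, here is the rough arithmetic for a hypothetical instance; the constants are illustrative only (the big-O hides them), so treat these as order-of-magnitude values:

```python
import math

# Illustrative parameter choice for the bound quoted above:
#   epsilon ~ delta / B (here delta / 2B) and T ~ B^2 log(n) / delta^2
B, n, delta = 1.0, 10, 0.1          # reward bound, number of objects, target error
epsilon = delta / (2 * B)
T = math.ceil(B**2 * math.log(n) / delta**2)
print(epsilon, T)  # 0.05 and 231 rounds for this toy setting
```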
MWUA for linear programs
Now we’ll approximately solve a linear program using MWUA. Recall that a linear program is an optimization problem whose goal is to minimize (or maximize) a linear function of many variables. The objective to minimize is usually given as a dot product $c \cdot x$, where $c$ is a fixed vector and $x = (x_1, x_2, \dots, x_n)$ is a vector of non-negative variables the algorithm gets to choose. The choices for $x$ are also constrained by a set of $m$ linear inequalities, $A_i \cdot x \geq b_i$, where $A_i$ is a fixed vector and $b_i$ is a scalar for $i = 1, \dots, m$. This is usually summarized by putting all the $A_i$ in a matrix, $b_i$ in a vector, as
$x_{\textup{OPT}} = \textup{argmin}_x \{ c \cdot x \mid Ax \geq b, x \geq 0 \}$
We can further simplify the constraints by assuming we know the optimal value $Z = c \cdot x_{\textup{OPT}}$ in advance, by doing a binary search (more on this later). So, if we ignore the hard constraint $Ax \geq b$, the “easy feasible region” of possible $x$‘s includes $\{ x \mid x \geq 0, c \cdot x = Z \}$.
In order to fit linear programming into the MWUA framework we have to define two things.
1. The objects: the set of linear inequalities $A_i \cdot x \geq b_i$.
2. The rewards: the error of a constraint for a special input vector $x_t$.
Number 2 is curious (why would we give a reward for error?) but it’s crucial and we’ll discuss it momentarily.
The special input $x_t$ depends on the weights in round $t$ (which is allowed, recall). Specifically, if the weights are $w = (w_1, \dots, w_m)$, we ask for a vector $x_t$ in our “easy feasible region” which satisfies
$\displaystyle (A^T w) \cdot x_t \geq w \cdot b$
For this post we call the implementation of procuring such a vector the “oracle,” since it can be seen as the black-box problem of, given a vector $\alpha$ and a scalar $\beta$ and a convex region $R$, finding a vector $x \in R$ satisfying $\alpha \cdot x \geq \beta$. This allows one to solve more complex optimization problems with the same technique, swapping in a new oracle as needed. Our choice of inputs, $\alpha = A^T w, \beta = w \cdot b$, are particular to the linear programming formulation.
Two remarks on this choice of inputs. First, the vector $A^T w$ is a weighted average of the constraints in $A$, and $w \cdot b$ is a weighted average of the thresholds. So this inequality is a “weighted average” inequality (specifically, a convex combination, since the weights are nonnegative). In particular, if no such $x$ exists, then the original linear program has no solution. Indeed, a solution $x^*$ to the original program satisfies every constraint $A_i \cdot x^* \geq b_i$; multiplying the $i$-th constraint by $w_i \geq 0$ and summing over $i$ shows that $x^*$ also satisfies the weighted-average inequality $(A^T w) \cdot x^* \geq w \cdot b$.
Second, and more important to the conceptual understanding of this algorithm, the choice of rewards and the multiplicative updates ensure that easier constraints show up less prominently in the inequality by having smaller weights. That is, if we end up overly satisfying a constraint, we penalize that object for future rounds so we don’t waste our effort on it. The byproduct of MWUA—the weights—identify the hardest constraints to satisfy, and so in each round we can put a proportionate amount of effort into solving (one of) the hard constraints. This is why it makes sense to reward error; the error is a signal for where to improve, and by over-representing the hard constraints, we force MWUA’s attention on them.
At the end, our final output is an average of the $x_t$ produced in each round, i.e. $x^* = \frac{1}{T}\sum_t x_t$. This vector satisfies all the constraints to a roughly equal degree. We will skip the proof that this vector does what we want, but see these notes for a simple proof. We’ll spend the rest of this post implementing the scheme outlined above.
Implementing the oracle
Fix the convex region $R = \{ c \cdot x = Z, x \geq 0 \}$ for a known optimal value $Z$. Define $\textup{oracle}(\alpha, \beta)$ as the problem of finding an $x \in R$ such that $\alpha \cdot x \geq \beta$.
For the case of this linear region $R$, we can simply find the index $i$ which maximizes $\alpha_i Z / c_i$. If this value is at least $\beta$, we can return the vector with $Z / c_i$ in the $i$-th position and zeros elsewhere. Otherwise, the problem has no solution.
To prove the “no solution” part, say $n=2$ and you have $x = (x_1, x_2)$ a solution to $\alpha \cdot x \geq \beta$. Then for whichever index makes $\alpha_i Z / c_i$ bigger, say $i=1$, you can increase $\alpha \cdot x$ without changing $c \cdot x = Z$ by replacing $x_1$ with $x_1 + (c_2/c_1)x_2$ and $x_2$ with zero. I.e., we’re moving the solution $x$ along the line $c \cdot x = Z$ until it reaches a vertex of the region bounded by $c \cdot x = Z$ and $x \geq 0$. This must happen when all entries but one are zero. This is the same reason why optimal solutions of (generic) linear programs occur at vertices of their feasible regions.
The code for this becomes quite simple. Note we use the numpy library in the entire codebase to make linear algebra operations fast and simple to read.
def makeOracle(c, optimalValue):
    n = len(c)

    def oracle(weightedVector, weightedThreshold):
        def quantity(i):
            return weightedVector[i] * optimalValue / c[i] if c[i] > 0 else -1

        biggest = max(range(n), key=quantity)
        if quantity(biggest) < weightedThreshold:
            raise InfeasibleException

        return numpy.array([optimalValue / c[i] if i == biggest else 0 for i in range(n)])

    return oracle
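Here is a small usage sketch of the oracle. It repeats makeOracle so it runs standalone, and it supplies a bare-bones InfeasibleException class: the post’s code raises this exception but never defines it, so this definition is an assumption about the accompanying repository:

```python
import numpy

class InfeasibleException(Exception):
    # assumed definition; the post raises this but doesn't define it
    pass

def makeOracle(c, optimalValue):
    n = len(c)
    def oracle(weightedVector, weightedThreshold):
        def quantity(i):
            return weightedVector[i] * optimalValue / c[i] if c[i] > 0 else -1
        biggest = max(range(n), key=quantity)
        if quantity(biggest) < weightedThreshold:
            raise InfeasibleException
        return numpy.array([optimalValue / c[i] if i == biggest else 0
                            for i in range(n)])
    return oracle

# example: c = (1, 2, 1) with known optimal value Z = 3; the oracle puts
# all of its mass on the single best coordinate
oracle = makeOracle(numpy.array([1, 2, 1]), 3)
x = oracle(numpy.array([1.0, 0.0, 0.0]), 0.5)
print(x)                 # [3. 0. 0.]
print(x.dot([1, 2, 1]))  # 3.0, so x lies on the easy region c . x = Z
```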
Implementing the core solver
The core solver implements the discussion from previously, given the optimal value of the linear program as input. To avoid too many single-letter variable names, we use linearObjective instead of $c$.
def solveGivenOptimalValue(A, b, linearObjective, optimalValue, learningRate=0.1):
    m, n = A.shape  # m constraints, n variables
    oracle = makeOracle(linearObjective, optimalValue)

    def reward(i, specialVector):
        ...

    def observeOutcome(_, weights, __):
        ...

    numRounds = 1000
    weights, cumulativeReward, outcomes = MWUA(
        range(m), observeOutcome, reward, learningRate, numRounds
    )
    averageVector = sum(outcomes) / numRounds

    return averageVector
First we make the oracle, then the reward and outcome-producing functions, then we invoke the MWUA subroutine. Here are those two functions; they are closures because they need access to $A$ and $b$. Note that neither $c$ nor the optimal value show up here.
def reward(i, specialVector):
    constraint = A[i]
    threshold = b[i]
    return threshold - numpy.dot(constraint, specialVector)

def observeOutcome(_, weights, __):
    weights = numpy.array(weights)
    weightedVector = A.transpose().dot(weights)
    weightedThreshold = weights.dot(b)
    return oracle(weightedVector, weightedThreshold)
Implementing the binary search, and an example
Finally, the top-level routine. The binary search for the optimal value is simple (though it could be made more sophisticated). It takes a max range for the search, and invokes the optimization subroutine, moving the upper bound down if the linear program is feasible and moving the lower bound up otherwise.
def solve(A, b, linearObjective, maxRange=1000):
    optRange = [0, maxRange]

    while optRange[1] - optRange[0] > 1e-8:
        proposedOpt = sum(optRange) / 2
        print("Attempting to solve with proposedOpt=%G" % proposedOpt)

        # Because the binary search starts so high, it results in extreme
        # reward values that must be tempered by a slow learning rate. Exercise
        # to the reader: determine absolute bounds for the rewards, and set
        # this learning rate in a more principled fashion.
        learningRate = 1 / max(2 * proposedOpt * c for c in linearObjective)
        learningRate = min(learningRate, 0.1)

        try:
            result = solveGivenOptimalValue(A, b, linearObjective, proposedOpt, learningRate)
            optRange[1] = proposedOpt
        except InfeasibleException:
            optRange[0] = proposedOpt

    return result
Finally, a simple example:
A = numpy.array([[1, 2, 3], [0, 4, 2]])
b = numpy.array([5, 6])
c = numpy.array([1, 2, 1])
x = solve(A, b, c)
print(x)
print(c.dot(x))
print(A.dot(x) - b)
The output:
Attempting to solve with proposedOpt=500
Attempting to solve with proposedOpt=250
Attempting to solve with proposedOpt=125
Attempting to solve with proposedOpt=62.5
Attempting to solve with proposedOpt=31.25
Attempting to solve with proposedOpt=15.625
Attempting to solve with proposedOpt=7.8125
Attempting to solve with proposedOpt=3.90625
Attempting to solve with proposedOpt=1.95312
Attempting to solve with proposedOpt=2.92969
Attempting to solve with proposedOpt=3.41797
Attempting to solve with proposedOpt=3.17383
Attempting to solve with proposedOpt=3.05176
Attempting to solve with proposedOpt=2.99072
Attempting to solve with proposedOpt=3.02124
Attempting to solve with proposedOpt=3.00598
Attempting to solve with proposedOpt=2.99835
Attempting to solve with proposedOpt=3.00217
Attempting to solve with proposedOpt=3.00026
Attempting to solve with proposedOpt=2.99931
Attempting to solve with proposedOpt=2.99978
Attempting to solve with proposedOpt=3.00002
Attempting to solve with proposedOpt=2.9999
Attempting to solve with proposedOpt=2.99996
Attempting to solve with proposedOpt=2.99999
Attempting to solve with proposedOpt=3.00001
Attempting to solve with proposedOpt=3
Attempting to solve with proposedOpt=3 # note %G rounds the printed values
Attempting to solve with proposedOpt=3
Attempting to solve with proposedOpt=3
Attempting to solve with proposedOpt=3
Attempting to solve with proposedOpt=3
Attempting to solve with proposedOpt=3
Attempting to solve with proposedOpt=3
Attempting to solve with proposedOpt=3
Attempting to solve with proposedOpt=3
Attempting to solve with proposedOpt=3
[ 0. 0.987 1.026]
3.00000000425
[ 5.20000072e-02 8.49831849e-09]
So there we have it. A fiendishly clever use of multiplicative weights for solving linear programs.
Discussion
One of the nice aspects of MWUA is that it’s completely transparent. If you want to know why a decision was made, you can simply look at the weights and the history of rewards of the objects. There’s also a clear interpretation of what is being optimized, as the potential function used in the proof is a measure of both quality and adaptability to change. The latter is why MWUA succeeds even in adversarial settings, and why it makes sense to think about MWUA in the context of evolutionary biology.
This even makes one imagine new problems that traditional algorithms cannot solve, but which MWUA handles with grace. For example, imagine trying to solve an “online” linear program in which over time a constraint can change. MWUA can adapt to maintain its approximate solution.
The linear programming technique is known in the literature as the Plotkin-Shmoys-Tardos framework for covering and packing problems. The same ideas extend to other convex optimization problems, including semidefinite programming.
If you’ve been reading this entire post screaming “This is just gradient descent!” then you’re both right and wrong. It bears a striking resemblance to gradient descent (see this document for details about how special cases of MWUA are gradient descent by another name), but the adaptivity of the rewards makes MWUA different.
Even though so many people have been advocating for MWUA over the past decade, it’s surprising that it doesn’t show up in the general math/CS discourse on the internet or even in many algorithms courses. The Arora survey I referenced is from 2005 and the linear programming technique I demoed is originally from 1991! I took algorithms classes wherever I could, starting undergraduate in 2007, and I didn’t even hear a whisper of this technique until midway through my PhD in theoretical CS (I did, however, study fictitious play in a game theory class). I don’t have an explanation for why this is the case, except maybe that it takes more than 20 years for techniques to make it to the classroom. At the very least, this is one good reason to go to graduate school. You learn the things (and where to look for the things) which haven’t made it to classrooms yet.
Until next time!
A Spectral Analysis of Moore Graphs
For a fixed integer $r > 0$ and an odd integer $g$, a Moore graph is an $r$-regular graph of girth $g$ which has the minimum number of vertices $n$ among all such graphs with the same regularity and girth.
(Recall, the girth of a graph is the length of its shortest cycle, and a graph is regular if all its vertices have the same degree.)
Problem (Hoffman-Singleton): Find a useful constraint on the relationship between $n$ and $r$ for Moore graphs of girth $5$ and degree $r$.
Note: Excluding trivial Moore graphs (the complete graphs, with girth $g=3$, and the cycles, with degree $r=2$), there are only two known Moore graphs: (a) the Petersen graph and (b) this crazy graph:
The solution to the problem shows that there are only a few cases left to check.
Solution: It is easy to show that the minimum number of vertices of a Moore graph of girth $5$ and degree $r$ is $1 + r + r(r-1) = r^2 + 1$. Just consider the tree:
This is the tree example for $r = 3$, but the argument should be clear for any $r$ from the branching pattern of the tree: $1 + r + r(r-1)$
Provided $n = r^2 + 1$, we will prove that $r$ must be either $3, 7,$ or $57$. The technique will be to analyze the eigenvalues of a special matrix derived from the Moore graph.
Let $A$ be the adjacency matrix of the supposed Moore graph with these properties. Let $B = A^2 = (b_{i,j})$. Using the girth and regularity we know:
• $b_{i,i} = r$ since each vertex has degree $r$.
• $b_{i,j} = 0$ if $(i,j)$ is an edge of $G$, since a walk of length 2 from $i$ to $j$ together with the edge $(i,j)$ would create a cycle of length 3, which is less than the girth.
• $b_{i,j} = 1$ if $(i,j)$ is not an edge and $i \neq j$, because (using the tree idea above) every two non-adjacent vertices have a unique common neighbor.
Let $J_n$ be the $n \times n$ matrix of all 1’s and $I_n$ the identity matrix. Then
$\displaystyle B = rI_n + J_n - I_n - A.$
We use this matrix equation to generate two equations whose solutions will restrict $r$. Since $A$ is a real symmetric matrix, it has an orthonormal basis of eigenvectors $v_1, \dots, v_n$ with eigenvalues $\lambda_1 , \dots, \lambda_n$. Moreover, by regularity we know one of these vectors is the all 1’s vector, with eigenvalue $r$. Call this $v_1 = (1, \dots, 1)$, with $\lambda_1 = r$. By orthogonality of $v_1$ with the other $v_i$, we know that $J_nv_i = 0$. We also know that, since $A$ is an adjacency matrix with zeros on the diagonal, the trace of $A$ is $\sum_i \lambda_i = 0$.
Multiply the matrices in the equation above by any $v_i$, $i > 1$ to get
\displaystyle \begin{aligned}A^2v_i &= rv_i - v_i - Av_i \\ \lambda_i^2v_i &= rv_i - v_i - \lambda_i v_i \end{aligned}
Rearranging and factoring out $v_i$ gives $\lambda_i^2 + \lambda_i - (r-1) = 0$. Let $z = 4r - 3$; then the non-$r$ eigenvalues must be one of the two roots: $\mu_1 = (-1 + \sqrt{z}) / 2$ or $\mu_2 = (-1 - \sqrt{z})/2$.
Say that $\mu_1$ occurs $a$ times and $\mu_2$ occurs $b$ times, then $n = a + b + 1$. So we have the following equations.
\displaystyle \begin{aligned} a + b + 1 &= n \\ r + a \mu_1 + b\mu_2 &= 0 \end{aligned}
From these equations you can derive that either $a = b$, in which case the trace equation gives $a = r$ and then $n = 2r + 1 = r^2 + 1$ forces $r = 2$ (the 5-cycle), or $\sqrt{z}$ is rational and hence an integer, so that $r = (m^2 + 3) / 4$ for the integer $m = \sqrt{z}$. With a tiny bit of extra algebra, this gives
$\displaystyle m(m^3 - 2m - 16(a-b)) = 15$
Implying that $m$ divides $15$, meaning $m \in \{ 1, 3, 5, 15\}$, and as a consequence $r \in \{ 1, 3, 7, 57\}$.
$\square$
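The argument is easy to check numerically on the one nontrivial Moore graph we can build by hand. The sketch below constructs the Petersen graph ($r = 3$, $n = 10$), verifies the matrix identity $B = A^2 = rI_n + J_n - I_n - A$ from the proof, and confirms that the eigenvalues other than $r = 3$ are $\mu_1 = 1$ (with multiplicity $a = 5$) and $\mu_2 = -2$ (with multiplicity $b = 4$), matching $\mu_{1,2} = (-1 \pm \sqrt{z})/2$ with $z = 4r - 3 = 9$:

```python
import numpy

# Vertices 0..4: outer 5-cycle; vertices 5..9: inner pentagram; i -- i+5 spokes.
n, r = 10, 3
A = numpy.zeros((n, n))
for i in range(5):
    for (u, v) in [(i, (i + 1) % 5),           # outer cycle edge
                   (i, i + 5),                 # spoke
                   (i + 5, 5 + (i + 2) % 5)]:  # pentagram edge
        A[u, v] = A[v, u] = 1

# the matrix identity A^2 = rI_n + J_n - I_n - A from the proof
I, J = numpy.eye(n), numpy.ones((n, n))
print(numpy.allclose(A.dot(A), r * I + J - I - A))  # True

# spectrum: r = 3 once, mu_1 = 1 five times, mu_2 = -2 four times
eigenvalues = sorted(int(round(x)) for x in numpy.linalg.eigvalsh(A))
print(eigenvalues)  # [-2, -2, -2, -2, 1, 1, 1, 1, 1, 3]
```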
Discussion: This is a strikingly clever use of spectral graph theory to answer a question about combinatorics. Spectral graph theory is precisely that, the study of what linear algebra can tell us about graphs. For a deeper dive into spectral graph theory, see the guest post I wrote on With High Probability.
If you allow for even girth, there are a few extra (infinite families of) Moore graphs, see Wikipedia for a list.
With additional techniques, one can also disprove the existence of any Moore graphs that are not among the known ones, with the exception of a possible Moore graph of girth $5$ and degree $57$ on $n = 3250$ vertices. It is unknown whether such a graph exists, but if it does, it is known that it cannot be vertex-transitive.
You should go out and find it or prove it doesn’t exist.
Hungry for more applications of linear algebra to combinatorics and computer science? The book Thirty-Three Miniatures is a fantastically entertaining book of linear algebra gems (it’s where I found the proof in this post). The exposition is lucid, and the chapters are short enough to read on my daily train commute.
Singular Value Decomposition Part 2: Theorem, Proof, Algorithm
I’m just going to jump right into the definitions and rigor, so if you haven’t read the previous post motivating the singular value decomposition, go back and do that first. This post will be theorem, proof, algorithm, data. The data set we test on is a thousand-story CNN news data set. All of the data, code, and examples used in this post are in a github repository, as usual.
We start with the best-approximating $k$-dimensional linear subspace.
Definition: Let $X = \{ x_1, \dots, x_m \}$ be a set of $m$ points in $\mathbb{R}^n$. The best approximating $k$-dimensional linear subspace of $X$ is the $k$-dimensional linear subspace $V \subset \mathbb{R}^n$ which minimizes the sum of the squared distances from the points in $X$ to $V$.
Let me clarify what I mean by minimizing the sum of squared distances. First we’ll start with the simple case: we have a vector $x \in X$, and a candidate line $L$ (a 1-dimensional subspace) that is the span of a unit vector $v$. The squared distance from $x$ to the line spanned by $v$ is the squared length of $x$ minus the squared length of the projection of $x$ onto $v$. Here’s a picture.
I’m saying that the pink vector $z$ in the picture is the difference of the black and green vectors $x-y$, and that the “distance” from $x$ to $v$ is the length of the pink vector. The reason is just the Pythagorean theorem: the vector $x$ is the hypotenuse of a right triangle whose other two sides are the projected vector $y$ and the difference vector $z$.
Let’s throw down some notation. I’ll call $\textup{proj}_v: \mathbb{R}^n \to \mathbb{R}^n$ the linear map that takes as input a vector $x$ and produces as output the projection of $x$ onto $v$. In fact we have a brief formula for this when $v$ is a unit vector. If we call $x \cdot v$ the usual dot product, then $\textup{proj}_v(x) = (x \cdot v)v$. That’s $v$ scaled by the inner product of $x$ and $v$. In the picture above, since the line $L$ is the span of the vector $v$, that means that $y = \textup{proj}_v(x)$ and $z = x -\textup{proj}_v(x) = x-y$.
The dot-product formula is useful for us because it allows us to compute the squared length of the projection by taking a dot product $|x \cdot v|^2$. So then a formula for the distance of $x$ from the line spanned by the unit vector $v$ is
$\displaystyle (\textup{dist}_v(x))^2 = \left ( \sum_{i=1}^n x_i^2 \right ) - |x \cdot v|^2$
This formula is just a restatement of the Pythagorean theorem for perpendicular vectors.
$\displaystyle \sum_{i} x_i^2 = (\textup{proj}_v(x))^2 + (\textup{dist}_v(x))^2$
In particular, the difference vector we originally called $z$ has squared length $(\textup{dist}_v(x))^2$. The vector $y$ is perpendicular to $z$ and is the projection of $x$ onto $L$; its squared length is $(\textup{proj}_v(x))^2$. And the Pythagorean theorem tells us that summing those two squared lengths gives the squared length of the hypotenuse $x$.
If we were trying to find the best approximating 1-dimensional subspace for a set of data points $X$, then we’d want to minimize the sum of the squared distances for every point $x \in X$. Namely, we want the $v$ that solves $\min_{|v|=1} \sum_{x \in X} (\textup{dist}_v(x))^2$.
With some slight algebra we can make our life easier. The short version: minimizing the sum of squared distances is the same thing as maximizing the sum of squared lengths of the projections. The longer version: let’s go back to a single point $x$ and the line spanned by $v$. The Pythagorean theorem told us that
$\displaystyle \sum_{i} x_i^2 = (\textup{proj}_v(x))^2 + (\textup{dist}_v(x))^2$
The squared length of $x$ is constant. It’s an input to the algorithm and it doesn’t change through a run of the algorithm. So we get the squared distance by subtracting $(\textup{proj}_v(x))^2$ from a constant number,
$\displaystyle \sum_{i} x_i^2 - (\textup{proj}_v(x))^2 = (\textup{dist}_v(x))^2$
which means if we want to minimize the squared distance, we can instead maximize the squared projection. Maximizing the subtracted thing minimizes the whole expression.
It works the same way if you’re summing over all the data points in $X$. In fact, we can say it much more compactly this way. If the rows of $A$ are your data points, then $Av$ contains as each entry the (signed) dot products $x_i \cdot v$. And the squared norm of this vector, $|Av|^2$, is exactly the sum of the squared lengths of the projections of the data onto the line spanned by $v$. The last thing is that maximizing a square is the same as maximizing its square root, so we can switch freely between saying our objective is to find the unit vector $v$ that maximizes $|Av|$ and that which maximizes $|Av|^2$.
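A tiny numerical check of this identity, with a made-up data matrix: the squared norm $|Av|^2$ equals the sum of the squared projection lengths $\sum_{x} |x \cdot v|^2$ over the rows:

```python
import numpy

# rows of A are data points x; v is a unit vector spanning the candidate line
A = numpy.array([[1.0, 2.0],
                 [3.0, 0.0],
                 [-1.0, 1.0]])
v = numpy.array([3.0, 4.0]) / 5  # length 1

sumOfSquaredProjections = sum(numpy.dot(x, v) ** 2 for x in A)
squaredNormAv = numpy.linalg.norm(A.dot(v)) ** 2
print(squaredNormAv, sumOfSquaredProjections)  # both approximately 8.12
```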
At this point you should be thinking,
Great, we have written down an optimization problem: $\max_{v : |v|=1} |Av|$. If we could solve this, we’d have the best 1-dimensional linear approximation to the data contained in the rows of $A$. But (1) how do we solve that problem? And (2) you promised a $k$-dimensional approximating subspace. I feel betrayed! Swindled! Bamboozled!
Here’s the fantastic thing. We can solve the 1-dimensional optimization problem efficiently (we’ll do it later in this post), and (2) is answered by the following theorem.
The SVD Theorem: Computing the best $k$-dimensional subspace reduces to $k$ applications of the one-dimensional problem.
We will prove this after we introduce the terms “singular value” and “singular vector.”
Singular values and vectors
As I just said, we can get the best $k$-dimensional approximating linear subspace by solving the one-dimensional maximization problem $k$ times. The singular vectors of $A$ are defined recursively as the solutions to these sub-problems. That is, I’ll call $v_1$ the first singular vector of $A$, and it is:
$\displaystyle v_1 = \arg \max_{v, |v|=1} |Av|$
And the corresponding first singular value, denoted $\sigma_1(A)$, is the maximal value of the optimization objective, i.e. $|Av_1|$. (I will use this term frequently, that $|Av|$ is the “objective” of the optimization problem.) Informally speaking, $(\sigma_1(A))^2$ represents how much of the data was captured by the first singular vector. Meaning, how close the vectors are to lying on the line spanned by $v_1$. Larger values imply the approximation is better. In fact, if all the data points lie on a line, then $(\sigma_1(A))^2$ is the sum of the squared norms of the rows of $A$.
Now here is where we see the reduction from the $k$-dimensional case to the 1-dimensional case. To find the best 2-dimensional subspace, you first find the best one-dimensional subspace (spanned by $v_1$), and then find the best 1-dimensional subspace, but only considering those subspaces that are the spans of unit vectors perpendicular to $v_1$. The notation for “vectors $v$ perpendicular to $v_1$” is $v \perp v_1$. Restating, the second singular vector $v_2$ is defined as
$\displaystyle v_2 = \arg \max_{v \perp v_1, |v| = 1} |Av|$
And the SVD theorem implies the subspace spanned by $\{ v_1, v_2 \}$ is the best 2-dimensional linear approximation to the data. Likewise $\sigma_2(A) = |Av_2|$ is the second singular value. Its squared magnitude tells us how much of the data that was not “captured” by $v_1$ is captured by $v_2$. Again, if the data lies in a 2-dimensional subspace, then the span of $\{ v_1, v_2 \}$ will be that subspace.
We can continue this process. Recursively define $v_k$, the $k$-th singular vector, to be the vector which maximizes $|Av|$, when $v$ is considered only among the unit vectors which are perpendicular to $\textup{span} \{ v_1, \dots, v_{k-1} \}$. The corresponding singular value $\sigma_k(A)$ is the value of the optimization problem.
As a side note, because of the way we defined the singular values as the objective values of “nested” optimization problems, the singular values are decreasing, $\sigma_1(A) \geq \sigma_2(A) \geq \dots \geq \sigma_n(A) \geq 0$. This is obvious: you only pick $v_2$ in the second optimization problem because you already picked $v_1$ which gave a bigger singular value, so $v_2$‘s objective can’t be bigger.
If you keep doing this, one of two things happens. Either you reach $v_n$, and since the domain is $n$-dimensional there are no remaining vectors to choose from, so the $v_i$ form an orthonormal basis of $\mathbb{R}^n$. This means that the data in $A$ contains a full-rank submatrix, and the data does not lie in any smaller-dimensional subspace. This is what you’d expect from real data.
Alternatively, you could get to a stage $v_k$ with $k < n$ and when you try to solve the optimization problem you find that every perpendicular $v$ has $Av = 0$. In this case, the data actually does lie in a $k$-dimensional subspace, and the first-through-$k$-th singular vectors you computed span this subspace.
Let’s do a quick sanity check: how do we know that the singular vectors $v_i$ form a basis? Well, formally they only form a basis of the row space of $A$, i.e., a basis of the subspace spanned by the data contained in the rows of $A$. But either way the point is that each $v_{i+1}$ spans a new dimension beyond the previous $v_1, \dots, v_i$, because we’re choosing $v_{i+1}$ to be orthogonal to all the previous $v_i$. So the answer to our sanity check is “by construction.”
Back to the singular vectors, the discussion from the last post tells us intuitively that the data is probably never in a small subspace. You never expect the process of finding singular vectors to stop before step $n$, and if it does you take a step back and ask if something deeper is going on. Instead, in real life you specify how much of the data you want to capture, and you keep computing singular vectors until you’ve passed the threshold. Alternatively, you specify the amount of computing resources you’d like to spend by fixing the number of singular vectors you’ll compute ahead of time, and settle for however good the $k$-dimensional approximation is.
Before we get into any code or solve the 1-dimensional optimization problem, let’s prove the SVD theorem.
Proof of SVD theorem.
Recall we’re trying to prove that the first $k$ singular vectors provide a linear subspace $W$ which maximizes the squared-sum of the projections of the data onto $W$. For $k=1$ this is trivial, because we defined $v_1$ to be the solution to that optimization problem. The case of $k=2$ contains all the important features of the general inductive step. Let $W$ be any best-approximating 2-dimensional linear subspace for the rows of $A$. We’ll show that the subspace spanned by the two singular vectors $v_1, v_2$ is at least as good (and hence equally good).
Let $w_1, w_2$ be any orthonormal basis for $W$ and let $|Aw_1|^2 + |Aw_2|^2$ be the quantity that we’re trying to maximize (and which $W$ maximizes by assumption). Moreover, we can pick the basis vector $w_2$ to be perpendicular to $v_1$. To prove this we consider two cases: either $v_1$ is already perpendicular to $W$ in which case it’s trivial, or else $v_1$ isn’t perpendicular to $W$ and you can choose $w_1$ to be $\textup{proj}_W(v_1)$ and choose $w_2$ to be any unit vector perpendicular to $w_1$.
Now since $v_1$ maximizes $|Av|$, we have $|Av_1|^2 \geq |Aw_1|^2$. Moreover, since $w_2$ is perpendicular to $v_1$, the way we chose $v_2$ also makes $|Av_2|^2 \geq |Aw_2|^2$. Hence the objective $|Av_1|^2 + |Av_2|^2 \geq |Aw_1|^2 + |Aw_2|^2$, as desired.
For the general case of $k$, the inductive hypothesis tells us that the first $k$ terms of the objective for $k+1$ singular vectors are maximized, and we just have to pick any vector $w_{k+1}$ that is perpendicular to all $v_1, v_2, \dots, v_k$; the rest of the proof is just like the 2-dimensional case.
$\square$
Now remember that in the last post we started with the definition of the SVD as a decomposition of a matrix $A = U\Sigma V^T$? And then we said that this is a certain kind of change of basis? Well the singular vectors $v_i$ together form the columns of the matrix $V$ (the rows of $V^T$), and the corresponding singular values $\sigma_i(A)$ are the diagonal entries of $\Sigma$. When $A$ is understood we’ll abbreviate the singular value as $\sigma_i$.
To reiterate with the thoughts from last post, the process of applying $A$ is exactly recovered by the process of first projecting onto the (full-rank space of) singular vectors $v_1, \dots, v_k$, scaling each coordinate of that projection according to the corresponding singular values, and then applying this $U$ thing we haven’t talked about yet.
So let’s determine what $U$ has to be. The way we picked the $v_i$ gives us an immediate suggestion: use the $Av_i$ as the columns of $U$. Indeed, define $u_i = Av_i$, the images of the singular vectors under $A$. We can swiftly show the $u_i$ span the image of $A$. The reason is because if $v = \sum_i c_i v_i$ (using all $n$ of the singular vectors $v_i$), then by linearity $Av = \sum_{i} c_i Av_i = \sum_i c_i u_i$. It is also easy to see why the $u_i$ are orthogonal (prove it as an exercise). Let’s further make sure the $u_i$ are unit vectors by redefining them as $u_i = \frac{1}{\sigma_i}Av_i$.
If you put these thoughts together, you can say exactly what $A$ does to any given vector $x$. Since the $v_i$ form an orthonormal basis, $x = \sum_i (x \cdot v_i) v_i$, and then applying $A$ gives
\displaystyle \begin{aligned}Ax &= A \left ( \sum_i (x \cdot v_i) v_i \right ) \\ &= \sum_i (x \cdot v_i) A v_i \\ &= \sum_i (x \cdot v_i) \sigma_i u_i \end{aligned}
If you’ve been closely reading this blog in the last few months, you’ll recognize a very nice way to write the last line of the above equation. It’s an outer product. So depending on your favorite symbols, you’d write this as either $A = \sum_{i} \sigma_i u_i \otimes v_i$ or $A = \sum_i \sigma_i u_i v_i^T$. Or, if you like expressing things as matrix factorizations, as $A = U\Sigma V^T$. All three are describing the same object.
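As a quick numpy check of the outer product formulation (with an illustrative matrix of my own choosing), which also verifies the orthogonality exercise for the $u_i$:

```python
import numpy as np

A = np.array([[2.0, 5.0, 3.0], [1.0, 2.0, 1.0], [4.0, 1.0, 1.0]])
U, sigma, Vt = np.linalg.svd(A, full_matrices=False)

# Rebuild A as a sum of rank-1 pieces sigma_i * (u_i outer v_i).
reconstruction = sum(sigma[i] * np.outer(U[:, i], Vt[i, :])
                     for i in range(len(sigma)))
assert np.allclose(A, reconstruction)

# The u_i (columns of U) are orthonormal, as the exercise claims.
assert np.allclose(U.T.dot(U), np.eye(3))
```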
Let’s move on to some code.
A black box example
Before we implement SVD from scratch (an urge that commands me from the depths of my soul!), let’s see a black-box example that uses existing tools. For this we’ll use the numpy library.
Recall our movie-rating matrix from the last post:
The code to compute the svd of this matrix is as simple as it gets:
from numpy.linalg import svd
movieRatings = [
[2, 5, 3],
[1, 2, 1],
[4, 1, 1],
[3, 5, 2],
[5, 3, 1],
[4, 5, 5],
[2, 4, 2],
[2, 2, 5],
]
U, singularValues, V = svd(movieRatings)
Printing these values out gives
[[-0.39458526 0.23923575 -0.35445911 -0.38062172 -0.29836818 -0.49464816 -0.30703202 -0.29763321]
[-0.15830232 0.03054913 -0.15299759 -0.45334816 0.31122898 0.23892035 -0.37313346 0.67223457]
[-0.22155201 -0.52086121 0.39334917 -0.14974792 -0.65963979 0.00488292 -0.00783684 0.25934607]
[-0.39692635 -0.08649009 -0.41052882 0.74387448 -0.10629499 0.01372565 -0.17959298 0.26333462]
[-0.34630257 -0.64128825 0.07382859 -0.04494155 0.58000668 -0.25806239 0.00211823 -0.24154726]
[-0.53347449 0.19168874 0.19949342 -0.03942604 0.00424495 0.68715732 -0.06957561 -0.40033035]
[-0.31660464 0.06109826 -0.30599517 -0.19611823 -0.01334272 0.01446975 0.85185852 0.19463493]
[-0.32840223 0.45970413 0.62354764 0.1783041 0.17631186 -0.39879476 0.06065902 0.25771578]]
[ 15.09626916 4.30056855 3.40701739]
[[-0.54184808 -0.67070995 -0.50650649]
[-0.75152295 0.11680911 0.64928336]
[ 0.37631623 -0.73246419 0.56734672]]
Now this is a bit weird, because the matrices $U, V$ are the wrong shape! Remember, there are only supposed to be three vectors since the input matrix has rank three. So what gives? This is a distinction that goes by the name “full” versus “reduced” SVD. The idea goes back to our original statement that $U \Sigma V^T$ is a decomposition with $U, V^T$ both orthogonal and square matrices. But in the derivation we did in the last section, the $U$ and $V$ were not square. The singular vectors $v_i$ could potentially stop before even becoming full rank.
In order to get to square matrices, what people sometimes do is take the two bases $v_1, \dots, v_k$ and $u_1, \dots, u_k$ and arbitrarily choose ways to complete them to a full orthonormal basis of their respective vector spaces. In other words, they just make the matrix square by filling it with data for no reason other than that it’s sometimes nice to have a complete basis. We don’t care about this. To be honest, I think the only place this comes in useful is in the desire to be particularly tidy in a mathematical formulation of something.
We can still work with it programmatically. By fudging around a bit with numpy’s shapes to get a diagonal matrix, we can reconstruct the input rating matrix from the factors.
import numpy as np

Sigma = np.vstack([
    np.diag(singularValues),
    np.zeros((5, 3)),
])
print(np.round(movieRatings - np.dot(U, np.dot(Sigma, V)), decimals=10))
And the output is, as one expects, a matrix of all zeros. Meaning that we decomposed the movie rating matrix, and built it back up from the factors.
We can actually get the SVD as we defined it (with rectangular matrices) by passing a special flag to numpy’s svd.
U, singularValues, V = svd(movieRatings, full_matrices=False)
print(U)
print(singularValues)
print(V)
Sigma = np.diag(singularValues)
print(np.round(movieRatings - np.dot(U, np.dot(Sigma, V)), decimals=10))
And the result
[[-0.39458526 0.23923575 -0.35445911]
[-0.15830232 0.03054913 -0.15299759]
[-0.22155201 -0.52086121 0.39334917]
[-0.39692635 -0.08649009 -0.41052882]
[-0.34630257 -0.64128825 0.07382859]
[-0.53347449 0.19168874 0.19949342]
[-0.31660464 0.06109826 -0.30599517]
[-0.32840223 0.45970413 0.62354764]]
[ 15.09626916 4.30056855 3.40701739]
[[-0.54184808 -0.67070995 -0.50650649]
[-0.75152295 0.11680911 0.64928336]
[ 0.37631623 -0.73246419 0.56734672]]
[[-0. -0. -0.]
[-0. -0. 0.]
[ 0. -0. 0.]
[-0. -0. -0.]
[-0. -0. -0.]
[-0. -0. -0.]
[-0. -0. -0.]
[ 0. -0. -0.]]
This makes the reconstruction less messy, since we can just multiply everything without having to add extra rows of zeros to $\Sigma$.
What do the singular vectors and values tell us about the movie rating matrix? (Besides nothing, since it’s a contrived example.) You’ll notice that the first singular value is $\sigma_1 > 15$ while the other two singular values are around $4$. This tells us that the first singular vector covers a large part of the structure of the matrix. I.e., a rank-1 matrix would be a pretty good approximation to the whole thing. As an exercise to the reader, write a program that evaluates this claim (how good is “good”?).
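For the curious, here's one way to evaluate that claim (a sketch of my own, not the official answer to the exercise): measure what fraction of the squared Frobenius norm of the matrix the rank-1 approximation captures.

```python
import numpy as np

movieRatings = np.array([
    [2, 5, 3], [1, 2, 1], [4, 1, 1], [3, 5, 2],
    [5, 3, 1], [4, 5, 5], [2, 4, 2], [2, 2, 5],
], dtype='float64')

U, sigma, Vt = np.linalg.svd(movieRatings, full_matrices=False)
rank1 = sigma[0] * np.outer(U[:, 0], Vt[0, :])

# Fraction of the squared Frobenius norm captured by the rank-1 approximation;
# this equals sigma_1^2 / (sigma_1^2 + sigma_2^2 + sigma_3^2).
captured = sigma[0] ** 2 / np.sum(sigma ** 2)
relativeError = np.linalg.norm(movieRatings - rank1) / np.linalg.norm(movieRatings)
print("fraction captured:", captured)
print("relative error:", relativeError)
```

On this matrix the rank-1 approximation captures roughly 88% of the squared Frobenius norm; whether that counts as "good" depends on the application.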
The greedy optimization routine
Now we’re going to write SVD from scratch. We’ll first implement the greedy algorithm for the 1-d optimization problem, and then we’ll perform the inductive step to get a full algorithm. Then we’ll run it on the CNN data set.
The method we’ll use to solve the 1-dimensional problem isn’t necessarily industry strength (see this document for a hint of what industry strength looks like), but it is conceptually simple. It’s called the power method. Now that we have our SVD theorem, understanding how the power method works is quite easy.
Let’s work in the language of a matrix decomposition $A = U \Sigma V^T$, more for practice with that language than anything else (using outer products would give us the same result with slightly different computations). Then let’s observe $A^T A$, wherein we’ll use the fact that $U$ is orthonormal and so $U^TU$ is the identity matrix:
$\displaystyle A^TA = (U \Sigma V^T)^T(U \Sigma V^T) = V \Sigma U^TU \Sigma V^T = V \Sigma^2 V^T$
So we can completely eliminate $U$ from the discussion, and look at just $V \Sigma^2 V^T$. And what’s nice about this matrix is that we can compute its eigenvectors, and eigenvectors turn out to be exactly the singular vectors. The corresponding eigenvalues are the squared singular values. This should be clear from the above derivation. If you apply $(V \Sigma^2 V^T)$ to any $v_i$, the only parts of the product that aren’t zero are the ones involving $v_i$ with itself, and the scalar $\sigma_i^2$ factors in smoothly. It’s dead simple to check.
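Here's a quick numerical check of that correspondence (illustrative matrix of my own choosing):

```python
import numpy as np

A = np.array([[2.0, 5.0, 3.0], [1.0, 2.0, 1.0],
              [4.0, 1.0, 1.0], [3.0, 5.0, 2.0]])
B = A.T.dot(A)

eigenvalues, eigenvectors = np.linalg.eigh(B)  # eigh sorts ascending
sigma = np.linalg.svd(A, compute_uv=False)     # svd sorts descending

# The eigenvalues of A^T A are the squared singular values of A.
assert np.allclose(sorted(eigenvalues, reverse=True), sigma ** 2)

# The top eigenvector of A^T A is the first singular vector, up to sign.
topEigenvector = eigenvectors[:, -1]
v1 = np.linalg.svd(A)[2][0]
assert abs(abs(np.dot(topEigenvector, v1)) - 1) < 1e-8
```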
Theorem: Let $x$ be a random unit vector and let $B = A^TA = V \Sigma^2 V^T$. Then with high probability, $\lim_{s \to \infty} B^s x$ is in the span of the first singular vector $v_1$. If we normalize $B^s x$ to a unit vector at each $s$, then furthermore the limit is $v_1$.
Proof. Start with a random unit vector $x$, and write it in terms of the singular vectors $x = \sum_i c_i v_i$. That means $Bx = \sum_i c_i \sigma_i^2 v_i$. If you recursively apply this logic, you get $B^s x = \sum_i c_i \sigma_i^{2s} v_i$. In particular, the dot product of $B^s x$ with any $v_j$ is $c_j \sigma_j^{2s}$.
What this means is that so long as the first singular value $\sigma_1$ is sufficiently larger than the second one $\sigma_2$, and in turn all the other singular values, the part of $B^s x$ corresponding to $v_1$ will be much larger than the rest. Recall that if you expand a vector in terms of an orthonormal basis, in this case $B^s x$ expanded in the $v_i$, the coefficient of $B^s x$ on $v_j$ is exactly the dot product. So to say that $B^sx$ converges to being in the span of $v_1$ is the same as saying that the ratio of these coefficients, $|(B^s x \cdot v_1)| / |(B^s x \cdot v_j)| \to \infty$ for any $j$. In other words, the coefficient corresponding to the first singular vector dominates all of the others. And so if we normalize, the coefficient of $B^s x$ corresponding to $v_1$ tends to 1, while the rest tend to zero.
Indeed, this ratio is just $(\sigma_1 / \sigma_j)^{2s}$ and the base of this exponential is bigger than 1.
$\square$
If you want to be a little more precise and find bounds on the number of iterations required to converge, you can. The worry is that your random starting vector is “too close” to one of the smaller singular vectors $v_j$, so that if the ratio of $\sigma_1 / \sigma_j$ is small, then the “pull” of $v_1$ won’t outweigh the pull of $v_j$ fast enough. Choosing a random unit vector allows you to ensure with high probability that this doesn’t happen. And conditioned on it not happening (or measuring “how far the event is from happening” precisely), you can compute a precise number of iterations required to converge. The last two pages of these lecture notes have all the details.
We won’t compute a precise number of iterations. Instead we’ll just compute until the angle between $B^{s+1}x$ and $B^s x$ is very small. Here’s the algorithm
import numpy as np
from numpy.linalg import norm
from random import normalvariate
from math import sqrt
def randomUnitVector(n):
    unnormalized = [normalvariate(0, 1) for _ in range(n)]
    theNorm = sqrt(sum(x * x for x in unnormalized))
    return [x / theNorm for x in unnormalized]
def svd_1d(A, epsilon=1e-10):
    ''' The one-dimensional SVD '''
    n, m = A.shape
    x = randomUnitVector(m)
    lastV = None
    currentV = x
    B = np.dot(A.T, A)

    iterations = 0
    while True:
        iterations += 1
        lastV = currentV
        currentV = np.dot(B, lastV)
        currentV = currentV / norm(currentV)

        if abs(np.dot(currentV, lastV)) > 1 - epsilon:
            print("converged in {} iterations!".format(iterations))
            return currentV
We start with a random unit vector $x$, and then loop computing $x_{t+1} = Bx_t$, renormalizing at each step. The condition for stopping is that the magnitude of the dot product between $x_t$ and $x_{t+1}$ (since they’re unit vectors, this is the cosine of the angle between them) is very close to 1.
And using it on our movie ratings example:
if __name__ == "__main__":
    movieRatings = np.array([
        [2, 5, 3],
        [1, 2, 1],
        [4, 1, 1],
        [3, 5, 2],
        [5, 3, 1],
        [4, 5, 5],
        [2, 4, 2],
        [2, 2, 5],
    ], dtype='float64')

    print(svd_1d(movieRatings))
With the result
converged in 6 iterations!
[-0.54184805 -0.67070993 -0.50650655]
Note that the sign of the vector may differ from numpy’s output because we start with a random vector.
The recursive step, getting from $v_1$ to the entire SVD, is equally straightforward. Say you start with the matrix $A$ and you compute $v_1$. You can use $v_1$ to compute $u_1$ and $\sigma_1(A)$. Then you want to ensure you’re ignoring all vectors in the span of $v_1$ for your next greedy optimization, and to do this you can simply subtract the rank 1 component of $A$ corresponding to $v_1$. I.e., set $A' = A - \sigma_1(A) u_1 v_1^T$. Then it’s easy to see that $\sigma_1(A') = \sigma_2(A)$ and basically all the singular vectors shift indices by 1 when going from $A$ to $A'$. Then you repeat.
If that’s not clear enough, here’s the code.
def svd(A, epsilon=1e-10):
    n, m = A.shape
    svdSoFar = []

    for i in range(m):
        matrixFor1D = A.copy()

        for singularValue, u, v in svdSoFar[:i]:
            matrixFor1D -= singularValue * np.outer(u, v)

        v = svd_1d(matrixFor1D, epsilon=epsilon)  # next singular vector
        u_unnormalized = np.dot(A, v)
        sigma = norm(u_unnormalized)  # next singular value
        u = u_unnormalized / sigma

        svdSoFar.append((sigma, u, v))

    # transform it into matrices of the right shape
    singularValues, us, vs = [np.array(x) for x in zip(*svdSoFar)]
    return singularValues, us.T, vs
And we can run this on our movie rating matrix to get the following
>>> theSVD = svd(movieRatings)
>>> theSVD[0]
array([ 15.09626916, 4.30056855, 3.40701739])
>>> theSVD[1]
array([[ 0.39458528, -0.23923093, 0.35446407],
[ 0.15830233, -0.03054705, 0.15299815],
[ 0.221552 , 0.52085578, -0.39336072],
[ 0.39692636, 0.08649568, 0.41052666],
[ 0.34630257, 0.64128719, -0.07384286],
[ 0.53347448, -0.19169154, -0.19948959],
[ 0.31660465, -0.0610941 , 0.30599629],
[ 0.32840221, -0.45971273, -0.62353781]])
>>> theSVD[2]
array([[ 0.54184805, 0.67071006, 0.50650638],
[ 0.75151641, -0.11679644, -0.64929321],
[-0.37632934, 0.73246611, -0.56733554]])
Checking this against our numpy output shows it’s within a reasonable level of precision (considering the power method took on the order of ten iterations!)
>>> np.round(np.abs(npSVD[0]) - np.abs(theSVD[1]), decimals=5)
array([[ -0.00000000e+00, -0.00000000e+00, 0.00000000e+00],
[ 0.00000000e+00, -0.00000000e+00, 0.00000000e+00],
[ 0.00000000e+00, -1.00000000e-05, 1.00000000e-05],
[ 0.00000000e+00, 0.00000000e+00, -0.00000000e+00],
[ 0.00000000e+00, -0.00000000e+00, 1.00000000e-05],
[ -0.00000000e+00, 0.00000000e+00, -0.00000000e+00],
[ 0.00000000e+00, -0.00000000e+00, 0.00000000e+00],
[ -0.00000000e+00, 1.00000000e-05, -1.00000000e-05]])
>>> np.round(np.abs(npSVD[2]) - np.abs(theSVD[2]), decimals=5)
array([[ 0.00000000e+00, 0.00000000e+00, -0.00000000e+00],
[ -1.00000000e-05, -1.00000000e-05, 1.00000000e-05],
[ 1.00000000e-05, 0.00000000e+00, -1.00000000e-05]])
>>> np.round(np.abs(npSVD[1]) - np.abs(theSVD[0]), decimals=5)
array([ 0., 0., -0.])
So there we have it. We added an extra little bit to the svd function, an argument $k$ which stops computing the svd after it reaches rank $k$.
CNN stories
One interesting use of the SVD is in topic modeling. Topic modeling is the process of taking a bunch of documents (news stories, or emails, or movie scripts, whatever) and grouping them by topic, where the algorithm gets to choose what counts as a “topic.” Topic modeling is just the name that natural language processing folks use instead of clustering.
The SVD can help one model topics as follows. First you construct a matrix $A$ called a document-term matrix whose rows correspond to words in some fixed dictionary and whose columns correspond to documents. The $(i,j)$ entry of $A$ contains the number of times word $i$ shows up in document $j$. Or, more precisely, some quantity derived from that count, like a normalized count. See this table on wikipedia for a list of options related to that. We’ll just pick one arbitrarily for use in this post.
The point isn’t how we normalize the data, but what the SVD of $A = U \Sigma V^T$ means in this context. Recall that the domain of $A$, as a linear map, is a vector space whose dimension is the number of stories. We think of the vectors in this space as documents, or rather as an “embedding” of the abstract concept of a document using the counts of how often each word shows up in a document as a proxy for the semantic meaning of the document. Likewise, the codomain is the space of all words, and each word is embedded by which documents it occurs in. If we compare this to the movie rating example, it’s the same thing: a movie is the vector of ratings it receives from people, and a person is the vector of ratings of various movies.
Say you take a rank 3 approximation to $A$. Then you get three singular vectors $v_1, v_2, v_3$ which form a basis for a subspace of words, i.e., the “idealized” words. These idealized words are your topics, and you can compute where a “new word” falls by looking at which documents it appears in (writing it as a vector in the domain) and saying its “topic” is the closest of the $v_1, v_2, v_3$. The same process applies to new documents. You can use this to cluster existing documents as well.
The dataset we’ll use for this post is a relatively small corpus of a thousand CNN stories picked from 2012. Here’s an excerpt from one of them
\$ cat data/cnn-stories/story479.txt
3 things to watch on Super Tuesday
Here are three things to watch for: Romney's big day. He's been the off-and-on frontrunner throughout the race, but a big Super Tuesday could begin an end game toward a sometimes hesitant base coalescing behind former Massachusetts Gov. Mitt Romney. Romney should win his home state of Massachusetts, neighboring Vermont and Virginia, ...
So let’s first build this document-term matrix with the normalized values, and then we’ll compute its SVD and see what the topics look like.
Step 1 is cleaning the data. We used a bunch of routines from the nltk library that boils down to this loop:
for filename, documentText in documentDict.items():
    tokens = tokenize(documentText)
    tagged_tokens = pos_tag(tokens)
    wnl = WordNetLemmatizer()
    stemmedTokens = [wnl.lemmatize(word, wordnetPos(tag)).lower()
                     for word, tag in tagged_tokens]
This turns the Super Tuesday story into a list of words (with repetition):
["thing", "watch", "three", "thing", "watch", "big", ... ]
If you’ll notice the name Romney doesn’t show up in the list of words. I’m only keeping the words that show up in the top 100,000 most common English words, and then lemmatizing all of the words to their roots. It’s not a perfect data cleaning job, but it’s simple and good enough for our purposes.
Now we can create the document term matrix.
def makeDocumentTermMatrix(data):
    words = allWords(data)  # get the set of all unique words
    wordToIndex = dict((word, i) for i, word in enumerate(words))
    indexToWord = dict(enumerate(words))
    indexToDocument = dict(enumerate(data))

    matrix = np.zeros((len(words), len(data)))
    for docID, document in enumerate(data):
        docWords = Counter(document['words'])
        for word, count in docWords.items():
            matrix[wordToIndex[word], docID] = count

    return matrix, (indexToWord, indexToDocument)
This creates a matrix with the raw integer counts. But what we need is a normalized count. The idea is that a common word like “thing” shows up disproportionately more often than “election,” and we don’t want raw magnitude of a word count to outweigh its semantic contribution to the classification. This is the applied math part of the algorithm design. So what we’ll do (and this technique together with SVD is called latent semantic indexing) is normalize each entry so that it measures both the frequency of a term in a document and the relative frequency of a term compared to the global frequency of that term. There are many ways to do this, and we’ll just pick one. See the github repository if you’re interested.
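For concreteness, here's a sketch of one standard option from that list, tf-idf weighting. This is illustrative only; it isn't necessarily the exact normalization used in the repository, and the function name is mine.

```python
import numpy as np

def tfidfNormalize(matrix):
    """Weight raw counts by term frequency times inverse document frequency.

    Rows are words, columns are documents, entries are raw counts.
    """
    numWords, numDocuments = matrix.shape

    # Term frequency: scale each column by the document's total word count.
    termFrequency = matrix / np.maximum(matrix.sum(axis=0), 1)

    # Inverse document frequency: words occurring in fewer documents get
    # boosted, and words occurring in every document are zeroed out entirely.
    documentsContainingWord = np.count_nonzero(matrix, axis=1)
    inverseDocumentFrequency = np.log(
        numDocuments / np.maximum(documentsContainingWord, 1))

    return termFrequency * inverseDocumentFrequency[:, np.newaxis]
```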
So now let’s compute a rank 10 decomposition and see how to cluster the results.
data = load()
matrix, (indexToWord, indexToDocument) = makeDocumentTermMatrix(data)
matrix = normalize(matrix)
sigma, U, V = svd(matrix, k=10)
This uses our svd, not numpy’s. Though numpy’s routine is much faster, it’s fun to see things work with code written from scratch. The result is too large to display here, but I can report the singular values.
>>> sigma
array([ 42.85249098, 21.85641975, 19.15989197, 16.2403354 ,
15.40456779, 14.3172779 , 13.47860033, 13.23795002,
12.98866537, 12.51307445])
Now we take our original inputs and project them onto the subspace spanned by the singular vectors. This is the part that represents each word (resp., document) in terms of the idealized words (resp., documents), the singular vectors. Then we can apply a simple k-means clustering algorithm to the result, and observe the resulting clusters as documents.
projectedDocuments = np.dot(matrix.T, U)
projectedWords = np.dot(matrix, V.T)
documentCenters, documentClustering = cluster(projectedDocuments)
wordCenters, wordClustering = cluster(projectedWords)
wordClusters = [
    [indexToWord[i] for (i, x) in enumerate(wordClustering) if x == j]
    for j in range(len(set(wordClustering)))
]

documentClusters = [
    [indexToDocument[i]['text']
     for (i, x) in enumerate(documentClustering) if x == j]
    for j in range(len(set(documentClustering)))
]
And now we can inspect individual clusters. Right off the bat we can tell the clusters aren’t quite right simply by looking at the sizes of each cluster.
>>> Counter(wordClustering)
Counter({1: 9689, 2: 1051, 8: 680, 5: 557, 3: 321, 7: 225, 4: 174, 6: 124, 9: 123})
>>> Counter(documentClustering)
Counter({7: 407, 6: 109, 0: 102, 5: 87, 9: 85, 2: 65, 8: 55, 4: 47, 3: 23, 1: 15})
What looks wrong to me is the size of the largest word cluster. If we could group words by topic, then this is saying there’s a topic with over nine thousand words associated with it! Inspecting it even closer, it includes words like “vegan,” “skunk,” and “pope.” On the other hand, some word clusters are spot on. Examine, for example, the fifth cluster which includes words very clearly associated with crime stories.
>>> wordClusters[4]
['account', 'accuse', 'act', 'affiliate', 'allegation', 'allege', 'altercation', 'anything', 'apartment', 'arrest', 'arrive', 'assault', 'attorney', 'authority', 'bag', 'black', 'blood', 'boy', 'brother', 'bullet', 'candy', 'car', 'carry', 'case', 'charge', 'chief', 'child', 'claim', 'client', 'commit', 'community', 'contact', 'convenience', 'court', 'crime', 'criminal', 'cry', 'dead', 'deadly', 'death', 'defense', 'department', 'describe', 'detail', 'determine', 'dispatcher', 'district', 'document', 'enforcement', 'evidence', 'extremely', 'family', 'father', 'fear', 'fiancee', 'file', 'five', 'foot', 'friend', 'front', 'gate', 'girl', 'girlfriend', 'grand', 'ground', 'guilty', 'gun', 'gunman', 'gunshot', 'hand', 'happen', 'harm', 'head', 'hear', 'heard', 'hoodie', 'hour', 'house', 'identify', 'immediately', 'incident', 'information', 'injury', 'investigate', 'investigation', 'investigator', 'involve', 'judge', 'jury', 'justice', 'kid', 'killing', 'lawyer', 'legal', 'letter', 'life', 'local', 'man', 'men', 'mile', 'morning', 'mother', 'murder', 'near', 'nearby', 'neighbor', 'newspaper', 'night', 'nothing', 'office', 'officer', 'online', 'outside', 'parent', 'person', 'phone', 'police', 'post', 'prison', 'profile', 'prosecute', 'prosecution', 'prosecutor', 'pull', 'racial', 'racist', 'release', 'responsible', 'return', 'review', 'role', 'saw', 'scene', 'school', 'scream', 'search', 'sentence', 'serve', 'several', 'shoot', 'shooter', 'shooting', 'shot', 'slur', 'someone', 'son', 'sound', 'spark', 'speak', 'staff', 'stand', 'store', 'story', 'student', 'surveillance', 'suspect', 'suspicious', 'tape', 'teacher', 'teen', 'teenager', 'told', 'tragedy', 'trial', 'vehicle', 'victim', 'video', 'walk', 'watch', 'wear', 'whether', 'white', 'witness', 'young']
As sad as it makes me to see that ‘black’ and ‘slur’ and ‘racial’ appear in this category, it’s a reminder that naively using the output of a machine learning algorithm can perpetuate racism.
Here’s another interesting cluster corresponding to economic words:
>>> wordClusters[6]
['agreement', 'aide', 'analyst', 'approval', 'approve', 'austerity', 'average', 'bailout', 'beneficiary', 'benefit', 'bill', 'billion', 'break', 'broadband', 'budget', 'class', 'combine', 'committee', 'compromise', 'conference', 'congressional', 'contribution', 'core', 'cost', 'currently', 'cut', 'deal', 'debt', 'defender', 'deficit', 'doc', 'drop', 'economic', 'economy', 'employee', 'employer', 'erode', 'eurozone', 'expire', 'extend', 'extension', 'fee', 'finance', 'fiscal', 'fix', 'fully', 'fund', 'funding', 'game', 'generally', 'gleefully', 'growth', 'hamper', 'highlight', 'hike', 'hire', 'holiday', 'increase', 'indifferent', 'insistence', 'insurance', 'job', 'juncture', 'latter', 'legislation', 'loser', 'low', 'lower', 'majority', 'maximum', 'measure', 'middle', 'negotiation', 'offset', 'oppose', 'package', 'pass', 'patient', 'pay', 'payment', 'payroll', 'pension', 'plight', 'portray', 'priority', 'proposal', 'provision', 'rate', 'recession', 'recovery', 'reduce', 'reduction', 'reluctance', 'repercussion', 'rest', 'revenue', 'rich', 'roughly', 'sale', 'saving', 'scientist', 'separate', 'sharp', 'showdown', 'sign', 'specialist', 'spectrum', 'spending', 'strength', 'tax', 'tea', 'tentative', 'term', 'test', 'top', 'trillion', 'turnaround', 'unemployed', 'unemployment', 'union', 'wage', 'welfare', 'worker', 'worth']
One can also inspect the stories, though the clusters are harder to print out here. Interestingly, the first cluster of documents consists of stories exclusively about Trayvon Martin. The second cluster is mostly international military conflicts. The third cluster also appears to be about international conflict, but what distinguishes it from the second cluster is that every story in the third cluster discusses Syria.
>>> len([x for x in documentClusters[1] if 'Syria' in x]) / len(documentClusters[1])
0.05555555555555555
>>> len([x for x in documentClusters[2] if 'Syria' in x]) / len(documentClusters[2])
1.0
Anyway, you can explore the data more at your leisure (and tinker with the parameters to improve it!).
Issues with the power method
Though I mentioned that the power method isn’t an industry strength algorithm I didn’t say why. Let’s revisit that before we finish. The problem is that the convergence rate of even the 1-dimensional problem depends on the ratio of the first and second singular values, $\sigma_1 / \sigma_2$. If that ratio is very close to 1, then the convergence will take a long time and need many many matrix-vector multiplications.
One way to alleviate that is to do the trick where, to compute a large power of a matrix, you iteratively square $B$. But that requires computing a matrix square (instead of a bunch of matrix-vector products), and that requires a lot of time and memory if the matrix isn’t sparse. When the matrix is sparse, you can actually do the power method quite quickly, from what I’ve heard and read.
But nevertheless, the industry-standard methods involve computing a particular matrix decomposition that is not only faster than the power method, but also numerically stable. That means that the algorithm's runtime and accuracy don't depend on slight changes in the entries of the input matrix. Indeed, you can have two matrices where $\sigma_1 / \sigma_2$ is very close to 1, but changing a single entry will make that ratio much larger. The power method's convergence depends on this ratio, so it's not numerically stable; the industry-standard technique is not affected in the same way. This technique involves something called Householder reflections. So while the power method was great for a proof of concept, there's much more work to do if you want true SVD power.
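A minimal NumPy sketch of the 1-dimensional power method discussed above (illustrative code, not from the post; for the SVD you would apply it to $B = A^T A$, and the diagonal matrix below is a hypothetical example):

```python
import numpy as np

def power_method(B, iterations=200):
    """1-D power iteration: estimate the top eigenvector and
    eigenvalue (Rayleigh quotient) of a symmetric matrix B."""
    v = np.random.default_rng(0).normal(size=B.shape[0])
    v /= np.linalg.norm(v)
    for _ in range(iterations):
        v = B @ v
        v /= np.linalg.norm(v)
    return v, v @ B @ v

# Eigenvalue ratio 10 here, so convergence is fast; a ratio
# near 1 would need far more iterations.
B = np.diag([10.0, 1.0, 0.5])
v, lam = power_method(B)
print(round(lam, 6))  # 10.0
```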
Until next time! | 2017-05-29T01:57:44 | {
"domain": "jeremykun.com",
"url": "https://jeremykun.com/category/linear-algebra/",
"openwebmath_score": 0.8876754641532898,
"openwebmath_perplexity": 730.6758729905721,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9920620068373637,
"lm_q2_score": 0.658417487156366,
"lm_q1q2_score": 0.6531909736451587
} |
https://freopen.com/math/2020/03/29/Box-Muller-Algorithm.html | # Box-Muller Algorithm
## Description
$x_1$ and $x_2$ are two independent $uniform(0,1)$ random variables, and let
$r = \sqrt{-2 \log{x_1}}, \quad \theta = 2 \pi x_2$
Then
$z_1 = r \cos{\theta}, \quad z_2 = r \sin{\theta}$
are independent $normal(0,1)$ random variables.
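A minimal Python sketch of the basic (trigonometric) form above (illustrative code, not from the original page):

```python
import math
import random

def box_muller(rng=random):
    """One pair of independent N(0,1) samples from two U(0,1) samples."""
    x1 = 1.0 - rng.random()  # in (0, 1], avoids log(0)
    x2 = rng.random()
    r = math.sqrt(-2.0 * math.log(x1))
    theta = 2.0 * math.pi * x2
    return r * math.cos(theta), r * math.sin(theta)

random.seed(1)
samples = [z for _ in range(20000) for z in box_muller()]
mean = sum(samples) / len(samples)
var = sum(z * z for z in samples) / len(samples)
# mean should be close to 0 and var close to 1
```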
## Proof
$p(z_1,z_2) = \frac{1}{2\pi} e^{\frac{-(z_1^2 + z_2^2)}{2}}$
$f(r,\theta) = \frac{1}{2\pi} e^{\frac{-r^2}{2}}$
$\theta = 2 \pi x_2.$
$\int \int p(z_1,z_2) dz_1 dz_2 = \int \int rf(r,\theta) dr d\theta$
$p(r) = r e^{\frac{-r^2}{2}}$
$F(r) = 1 - e^{\frac{-r^2}{2}}$
$F^{-1}(y) = \sqrt{-2\log(1-y)}$
$r = \sqrt{-2\log(x_1)}$
## Polar form
$s = x^2 + y^2, \quad \sin(\theta) = y/\sqrt{s}, \quad \cos(\theta) = x/\sqrt{s}$
$r = \sqrt{-2 \log{s}}, \quad z_1 = r\,y/\sqrt{s}, \quad z_2 = r\,x/\sqrt{s}$
do
$x,y \sim U(-1,1)$
while $x^2 + y^2 > 1$ | 2023-03-23T23:25:41 | {
"domain": "freopen.com",
"url": "https://freopen.com/math/2020/03/29/Box-Muller-Algorithm.html",
"openwebmath_score": 0.9632532596588135,
"openwebmath_perplexity": 9397.006252546356,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.992062006602671,
"lm_q2_score": 0.658417487156366,
"lm_q1q2_score": 0.6531909734906328
} |
https://www.tungsteno.io/post/exp-change_of_variable/ | If $F'(x)=f(x)$, then $(F(g(x)))'=f(g(x))g'(x)$. Thus, the integral of a function of the type
$$\int f(g(x))\,g'(x)\,\mathrm{d}x$$
is always solved as
$$F(g(x))+k.$$
It is very useful to denote this trick with a change of variable: let
$$t=g(x).$$
If we differentiate both sides of the expression and keep track of the variable with respect to which we are differentiating with $\mathrm{d}\cdot$, we get
$$\mathrm{d}t=g'(x)\,\mathrm{d}x,$$
and the expression above is more easily handled:
$$\int f(t)\,\mathrm{d}t=F(t)+k.$$
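A quick SymPy check of this substitution rule on a concrete example (the choice $f=\cos$, $g(x)=x^2$ is my own illustration, not from the original page):

```python
import sympy as sp

x = sp.symbols('x')
# Take f = cos and g(x) = x**2, so F = sin; the integrand is f(g(x))*g'(x).
integrand = sp.cos(x**2) * 2*x
antiderivative = sp.integrate(integrand, x)
print(antiderivative)  # sin(x**2), i.e. F(g(x))
```

Differentiating the result recovers the integrand, confirming the rule.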
This also leads to a list of "semi-immediate integrals": if $t$ is some function of $x$,
$$\int t^n\cdot t' \,\mathrm{d}x=\dfrac{t^{n+1}}{n+1}+k\qquad \text{if } n\neq -1$$ $$\int \dfrac{t'}{t}\,\mathrm{d}x=\ln\vert t\vert+k$$ $$\int e^t\cdot t' \,\mathrm{d}x= e^t+k$$ $$\int a^t\cdot t' \,\mathrm{d}x= \dfrac{a^t}{\ln a}+k$$ $$\int \sin t\cdot t' \,\mathrm{d}x= -\cos t+k$$ $$\int \cos t\cdot t' \,\mathrm{d}x= \sin t+k$$ $$\int \dfrac{t'}{1+t^2} \,\mathrm{d}x = \tan^{-1} t+k$$ $$\int \left(1+\tan^2 t\right)\cdot t' \,\mathrm{d}x= \int \dfrac{t'}{\cos^2 t} \,\mathrm{d}x = \tan t+k$$ $$\int \dfrac{t'}{\sqrt{1-t^2}} \,\mathrm{d}x = \sin^{-1} t+k$$ $$\int \dfrac{t'}{\sqrt{1+t^2}} \,\mathrm{d}x = \sinh^{-1} t+k$$ | 2020-02-18T10:26:39 | {
"domain": "tungsteno.io",
"url": "https://www.tungsteno.io/post/exp-change_of_variable/",
"openwebmath_score": 0.9674426317214966,
"openwebmath_perplexity": 112.82053265473057,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9920620061332854,
"lm_q2_score": 0.658417487156366,
"lm_q1q2_score": 0.6531909731815813
} |
http://en.wikipedia.org/wiki/Multiplicative_group_of_integers_modulo_n | # Multiplicative group of integers modulo n
Not to be confused with Integers modulo n.
In modular arithmetic the set of congruence classes relatively prime to the modulus number, say n, form a group under multiplication called the multiplicative group of integers modulo n. It is also called the group of primitive residue classes modulo n. In the theory of rings, a branch of abstract algebra, it is described as the group of units of the ring of integers modulo n. (Units refers to elements with a multiplicative inverse.)
This group is fundamental in number theory. It has found applications in cryptography, integer factorization, and primality testing. For example, by finding the order of this group, one can determine whether n is prime: n is prime if and only if the order is n − 1.
## Group axioms
It is a straightforward exercise to show that, under multiplication, the set of congruence classes modulo n which are relatively prime to n satisfy the axioms for an abelian group.
Because a ≡ b (mod n) implies that gcd(a, n) = gcd(b, n), the notion of congruence classes modulo n which are relatively prime to n is well-defined.
Since gcd(a, n) = 1 and gcd(b, n) = 1 implies gcd(ab, n) = 1, the set of classes relatively prime to n is closed under multiplication.
The natural mapping from the integers to the congruence classes modulo n that takes an integer to its congruence class modulo n respects products. This implies that the class containing 1 is the unique multiplicative identity, and also the associative and commutative laws hold. In fact it is a ring homomorphism.
Given a, gcd(a, n) = 1, finding x satisfying ax ≡ 1 (mod n) is the same as solving ax + ny = 1, which can be done by Bézout's lemma. The x found will have the property that gcd(x, n) = 1.
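The inverse computation via Bézout's lemma can be sketched with the extended Euclidean algorithm (an illustrative implementation, not part of the article; the function names are my own):

```python
def extended_gcd(a, b):
    """Return (g, x, y) with a*x + b*y == g == gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x, y = extended_gcd(b, a % b)
    return g, y, x - (a // b) * y

def mod_inverse(a, n):
    """Inverse of a modulo n, assuming gcd(a, n) == 1."""
    g, x, _ = extended_gcd(a % n, n)
    if g != 1:
        raise ValueError("a is not a unit modulo n")
    return x % n

print(mod_inverse(7, 20))  # 3, since 7*3 = 21 ≡ 1 (mod 20)
```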
## Notation
The (quotient) ring of integers modulo n is denoted $\mathbb{Z}/n\mathbb{Z}$ or $\mathbb{Z}/(n)$ (i.e., the ring of integers modulo the ideal $n\mathbb{Z} = (n)$ consisting of the multiples of n) or by $\mathbb{Z}_n$ (though the latter can be confused with the p-adic integers when n is a prime number). Depending on the author, its group of units may be written $(\mathbb{Z}/n\mathbb{Z})^*,$ $(\mathbb{Z}/n\mathbb{Z})^\times,$ $\mathrm{U}(\mathbb{Z}/n\mathbb{Z}),$ $\mathrm{E}(\mathbb{Z}/n\mathbb{Z})$ (for German Einheit, which translates as unit) or similar notations. This article uses $(\mathbb{Z}/n\mathbb{Z})^\times.$
The notation $\mathrm{C}_n$ refers to the cyclic group of order n.
## Structure
### n = 1
Modulo 1 any two integers are congruent, i.e. there is only one congruence class. Every integer is relatively prime to 1. Therefore the single congruence class modulo 1 is relatively prime to the modulus, so $(\mathbb{Z}/1\,\mathbb{Z})^\times \cong \mathrm{C}_1$ is trivial. This implies that φ(1) = 1. Since the first power of any integer is congruent to 1 modulo 1, λ(1) is also 1.
Because of its trivial nature, the case of congruences modulo 1 is generally ignored. For example, the theorem "$(\mathbb{Z}/n\mathbb{Z})^\times$ is cyclic if and only if φ(n) = λ(n)" implicitly includes the case n = 1, whereas the usual statement of Gauss's theorem "$(\mathbb{Z}/n\mathbb{Z})^\times$ is cyclic if and only if n = 2, 4, any power of an odd prime or twice any power of an odd prime," explicitly excludes 1.
### Powers of 2
Modulo 2 there is only one relatively prime congruence class, 1, so $(\mathbb{Z}/2\mathbb{Z})^\times \cong \mathrm{C}_1$ is the trivial group.
Modulo 4 there are two relatively prime congruence classes, 1 and 3, so $(\mathbb{Z}/4\mathbb{Z})^\times \cong \mathrm{C}_2,$ the cyclic group with two elements.
Modulo 8 there are four relatively prime classes, 1, 3, 5 and 7. The square of each of these is 1, so $(\mathbb{Z}/8\mathbb{Z})^\times \cong \mathrm{C}_2 \times \mathrm{C}_2,$ the Klein four-group.
Modulo 16 there are eight relatively prime classes 1, 3, 5, 7, 9, 11, 13 and 15. $\{\pm 1, \pm 7\}\cong \mathrm{C}_2 \times \mathrm{C}_2,$ is the 2-torsion subgroup (i.e. the square of each element is 1), so $(\mathbb{Z}/16\mathbb{Z})^\times$ is not cyclic. The powers of 3, $\{1, 3, 9, 11\}$ are a subgroup of order 4, as are the powers of 5, $\{1, 5, 9, 13\}.$ Thus $(\mathbb{Z}/16\mathbb{Z})^\times \cong \mathrm{C}_2 \times \mathrm{C}_4.$
The pattern shown by 8 and 16 holds[1] for higher powers $2^k$, $k > 2$: $\{\pm 1, 2^{k-1} \pm 1\}\cong \mathrm{C}_2 \times \mathrm{C}_2$ is the 2-torsion subgroup (so $(\mathbb{Z}/2^k\mathbb{Z})^\times$ is not cyclic) and the powers of 3 are a subgroup of order $2^{k-2}$, so $(\mathbb{Z}/2^k\mathbb{Z})^\times \cong \mathrm{C}_2 \times \mathrm{C}_{2^{k-2}}.$
### Powers of odd primes
For powers of odd primes $p^k$ the group is cyclic:[2]
$(\mathbb{Z}/p^k\mathbb{Z})^\times \cong \mathrm{C}_{p^{k-1}(p-1)} \cong \mathrm{C}_{\varphi(p^k)} .$
### General composite numbers
The Chinese remainder theorem[3] says that if $\;\;n=p_1^{k_1}p_2^{k_2}p_3^{k_3}\dots, \;$ then the ring $\mathbb{Z}/n\mathbb{Z}$ is the direct product of the rings corresponding to each of its prime power factors:
$\mathbb{Z}/n\mathbb{Z} \cong \mathbb{Z}/{p_1^{k_1}}\mathbb{Z}\; \times \;\mathbb{Z}/{p_2^{k_2}}\mathbb{Z} \;\times\; \mathbb{Z}/{p_3^{k_3}}\mathbb{Z}\dots\;\;$
Similarly, the group of units $(\mathbb{Z}/n\mathbb{Z})^\times$ is the direct product of the groups corresponding to each of the prime power factors:
$(\mathbb{Z}/n\mathbb{Z})^\times\cong (\mathbb{Z}/{p_1^{k_1}}\mathbb{Z})^\times \times (\mathbb{Z}/{p_2^{k_2}}\mathbb{Z})^\times \times (\mathbb{Z}/{p_3^{k_3}}\mathbb{Z})^\times \dots\;.$
#### Subgroup of false witnesses
If n is composite, there exists a subgroup of the multiplicative group, called the "group of false witnesses", in which the elements, when raised to the power n − 1, are congruent to 1 modulo n (since the residue 1, to any power, is congruent to 1 modulo n, the set of such elements is nonempty).[4] One could say, because of Fermat's Little Theorem, that such residues are "false positives" or "false witnesses" for the primality of n. 2 is the residue most often used in this basic primality check, hence 341 = 11 × 31 is famous since $2^{340}$ is congruent to 1 modulo 341, and 341 is the smallest such composite number (with respect to 2). For 341, the false witnesses subgroup contains 100 residues and so is of index 3 inside the 300 element multiplicative group mod 341.
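The numbers quoted for 341 can be checked directly by brute force (illustrative Python, not from the article):

```python
from math import gcd

n = 341  # = 11 * 31
units = [a for a in range(1, n) if gcd(a, n) == 1]
false_witnesses = [a for a in units if pow(a, n - 1, n) == 1]

print(len(units))            # 300 residues in the multiplicative group
print(len(false_witnesses))  # 100, a subgroup of index 3
print(pow(2, 340, 341))      # 1: 2 is a false witness for 341
```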
##### Examples
n = 9
The smallest example with a nontrivial subgroup of false witnesses is 9 = 3 × 3. There are 6 residues relatively prime to 9: 1, 2, 4, 5, 7, 8. Since 8 is congruent to −1 modulo 9, it follows that $8^8$ is congruent to 1 modulo 9. So 1 and 8 are false positives for the "primality" of 9 (since 9 is not actually prime). These are in fact the only ones, so the subgroup {1,8} is the subgroup of false witnesses. The same argument shows that n − 1 is a "false witness" for any odd composite n.
n = 561
561 is a Carmichael number, thus $n^{560}$ is congruent to 1 modulo 561 for any number n coprime to 561. Thus the subgroup of false witnesses is in this case not proper, it is the entire group of multiplicative units modulo 561, which consists of 320 residues.
## Properties
### Order
The order of the group is given by Euler's totient function: $| (\mathbb{Z}/n\mathbb{Z})^\times|=\varphi(n).$ This is the product of the orders of the cyclic groups in the direct product.
### Exponent
The exponent is given by the Carmichael function $\lambda(n),$ the least common multiple of the orders of the cyclic groups. Thus, $\lambda(n)$ is the smallest number for a given n such that for each a relatively prime to n, $a^{\lambda(n)} \equiv 1 \pmod n$ holds.
### Generators
The group $(\mathbb{Z}/n\mathbb{Z})^\times$ is cyclic if and only if its order $\varphi(n)$ is equal to its exponent $\lambda(n)$. This is the case when n is 2, 4, $p^k$ or $2p^k$, where p is an odd prime and k > 0. For all other values of n (except 1) the group is not cyclic.[5][6] The single generator in the cyclic case is called a primitive root modulo n.
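The criterion "cyclic exactly when $\varphi(n) = \lambda(n)$" can be verified by brute force for small n (an illustrative sketch, not from the article; `unit_group_orders` is my own helper name):

```python
from math import gcd, lcm

def unit_group_orders(n):
    """Brute-force (phi(n), lambda(n)): group order and exponent."""
    units = [a for a in range(1, n + 1) if gcd(a, n) == 1]

    def order(a):
        k, x = 1, a % n
        while x != 1:
            x = (x * a) % n
            k += 1
        return k

    return len(units), lcm(*(order(a) for a in units))

cyclic = [n for n in range(2, 30)
          if unit_group_orders(n)[0] == unit_group_orders(n)[1]]
print(cyclic)  # exactly the n of the form 2, 4, p^k, 2*p^k below 30
```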
Since all the $(\mathbb{Z}/n\mathbb{Z})^\times,$ n ≤ 7 are cyclic, another way to state this is: If n < 8 then $(\mathbb{Z}/n\mathbb{Z})^\times$ has a primitive root. If n ≥ 8 then $(\mathbb{Z}/n\mathbb{Z})^\times$ has a primitive root unless n is divisible by 4 or by two distinct odd primes.
In the general case there is one generator for each cyclic direct factor.
## Examples
This table shows the cyclic decomposition of $(\mathbb{Z}/n\mathbb{Z})^\times$ and a generating set for small values of n. The generating sets are not unique; e.g. modulo 16 both {−1, 3} and {−1, 5} will work. The generators are listed in the same order as the direct factors.
For example take n = 20. $\varphi(20)=8$ means that the order of $(\mathbb{Z}/20\mathbb{Z})^\times$ is 8 (i.e. there are 8 numbers less than 20 and coprime to it); $\lambda(20)=4$ that the fourth power of any number relatively prime to 20 is congruent to 1 (mod 20); and as for the generators, 19 has order 2, 3 has order 4, and every member of $(\mathbb{Z}/20\mathbb{Z})^\times$ is of the form $19^a \times 3^b$, where a is 0 or 1 and b is 0, 1, 2, or 3.
The powers of 19 are {±1} and the powers of 3 are {3, 9, 7, 1}. The latter and their negatives modulo 20, {17, 11, 13, 19} are all the numbers less than 20 and coprime to it. That the order of 19 is 2 and the order of 3 is 4 implies that the fourth power of every member of $(\mathbb{Z}/20\mathbb{Z})^\times$ is congruent to 1 (mod 20).
| $n$ | $(\mathbb{Z}/n\mathbb{Z})^\times$ | $\varphi(n)$ | $\lambda(n)$ | generating set | $n$ | $(\mathbb{Z}/n\mathbb{Z})^\times$ | $\varphi(n)$ | $\lambda(n)$ | generating set |
|---|---|---|---|---|---|---|---|---|---|
| 1 | C1 | 1 | 1 | 1 | 33 | C2×C10 | 20 | 10 | 10, 2 |
| 2 | C1 | 1 | 1 | 1 | 34 | C16 | 16 | 16 | 3 |
| 3 | C2 | 2 | 2 | 2 | 35 | C2×C12 | 24 | 12 | 6, 2 |
| 4 | C2 | 2 | 2 | 3 | 36 | C2×C6 | 12 | 6 | 19, 5 |
| 5 | C4 | 4 | 4 | 2 | 37 | C36 | 36 | 36 | 2 |
| 6 | C2 | 2 | 2 | 5 | 38 | C18 | 18 | 18 | 3 |
| 7 | C6 | 6 | 6 | 3 | 39 | C2×C12 | 24 | 12 | 38, 2 |
| 8 | C2×C2 | 4 | 2 | 7, 3 | 40 | C2×C2×C4 | 16 | 4 | 39, 11, 3 |
| 9 | C6 | 6 | 6 | 2 | 41 | C40 | 40 | 40 | 6 |
| 10 | C4 | 4 | 4 | 3 | 42 | C2×C6 | 12 | 6 | 13, 5 |
| 11 | C10 | 10 | 10 | 2 | 43 | C42 | 42 | 42 | 3 |
| 12 | C2×C2 | 4 | 2 | 5, 7 | 44 | C2×C10 | 20 | 10 | 43, 3 |
| 13 | C12 | 12 | 12 | 2 | 45 | C2×C12 | 24 | 12 | 44, 2 |
| 14 | C6 | 6 | 6 | 3 | 46 | C22 | 22 | 22 | 5 |
| 15 | C2×C4 | 8 | 4 | 14, 2 | 47 | C46 | 46 | 46 | 5 |
| 16 | C2×C4 | 8 | 4 | 15, 3 | 48 | C2×C2×C4 | 16 | 4 | 47, 7, 5 |
| 17 | C16 | 16 | 16 | 3 | 49 | C42 | 42 | 42 | 3 |
| 18 | C6 | 6 | 6 | 5 | 50 | C20 | 20 | 20 | 3 |
| 19 | C18 | 18 | 18 | 2 | 51 | C2×C16 | 32 | 16 | 50, 5 |
| 20 | C2×C4 | 8 | 4 | 19, 3 | 52 | C2×C12 | 24 | 12 | 51, 7 |
| 21 | C2×C6 | 12 | 6 | 20, 2 | 53 | C52 | 52 | 52 | 2 |
| 22 | C10 | 10 | 10 | 7 | 54 | C18 | 18 | 18 | 5 |
| 23 | C22 | 22 | 22 | 5 | 55 | C2×C20 | 40 | 20 | 21, 2 |
| 24 | C2×C2×C2 | 8 | 2 | 5, 7, 13 | 56 | C2×C2×C6 | 24 | 6 | 13, 29, 3 |
| 25 | C20 | 20 | 20 | 2 | 57 | C2×C18 | 36 | 18 | 20, 2 |
| 26 | C12 | 12 | 12 | 7 | 58 | C28 | 28 | 28 | 3 |
| 27 | C18 | 18 | 18 | 2 | 59 | C58 | 58 | 58 | 2 |
| 28 | C2×C6 | 12 | 6 | 13, 3 | 60 | C2×C2×C4 | 16 | 4 | 11, 19, 7 |
| 29 | C28 | 28 | 28 | 2 | 61 | C60 | 60 | 60 | 2 |
| 30 | C2×C4 | 8 | 4 | 11, 7 | 62 | C30 | 30 | 30 | 3 |
| 31 | C30 | 30 | 30 | 3 | 63 | C6×C6 | 36 | 6 | 2, 5 |
| 32 | C2×C8 | 16 | 8 | 31, 3 | 64 | C2×C16 | 32 | 16 | 63, 3 |
## Notes
1. ^ Gauss, DA, arts. 90–91
2. ^ Gauss, DA, arts. 52–56, 82–89
3. ^ Riesel covers all of this. pp. 267–275
4. ^ Erdős, Paul; Pomerance, Carl (1986). "On the number of false witnesses for a composite number". Math. Comput. 46: 259–279. doi:10.1090/s0025-5718-1986-0815848-x. Zbl 0586.10003.
5. ^
6. ^
## References
The Disquisitiones Arithmeticae has been translated from Gauss's Ciceronian Latin into English and German. The German edition includes all of his papers on number theory: all the proofs of quadratic reciprocity, the determination of the sign of the Gauss sum, the investigations into biquadratic reciprocity, and unpublished notes.
• Gauss, Carl Friedrich; Clarke, Arthur A. (translator into English) (1986), Disquisitiones Arithemeticae (Second, corrected edition), New York: Springer, ISBN 0-387-96254-9
• Gauss, Carl Friedrich; Maser, H. (translator into German) (1965), Untersuchungen uber hohere Arithmetik (Disquisitiones Arithemeticae & other papers on number theory) (Second edition), New York: Chelsea, ISBN 0-8284-0191-8
• Riesel, Hans (1994), Prime Numbers and Computer Methods for Factorization (second edition), Boston: Birkhäuser, ISBN 0-8176-3743-5 | 2014-07-25T18:39:59 | {
"domain": "wikipedia.org",
"url": "http://en.wikipedia.org/wiki/Multiplicative_group_of_integers_modulo_n",
"openwebmath_score": 0.7347819209098816,
"openwebmath_perplexity": 418.35718126885996,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9920620058985926,
"lm_q2_score": 0.658417487156366,
"lm_q1q2_score": 0.6531909730270553
} |
http://educ.jmu.edu/~waltondb/ModelCalculus/calculus-overview.html | # Section 5.1 An Overview of Calculus
When we studied sequences, we learned that the behavior of the sequence could be described in terms of its increments. (Recall 3.4.) The following summarizes what we learned about sequences.
##### Behavior of a Sequence Using Increments
Every sequence $(x_k)$ has a corresponding increment sequence, $\Delta x_k = x_{k} - x_{k-1}$. The dynamic behavior of the sequence $x$ can be described using the attributes of the increments as summarized below.
• If the increments are positive on a range, $\Delta x_k \gt 0$ for all $k=m,\ldots,n$, then the sequence $x$ is increasing on the index range $k=m-1,\ldots,n$.
• If the increments are negative on a range, $\Delta x_k \lt 0$ for all $k=m,\ldots,n$, then the sequence $x$ is decreasing on the index range $k=m-1,\ldots,n$.
• If the increments are increasing on a range, $\Delta x_k \gt \Delta x_{k-1}$ for all $k=m,\ldots,n$, then the sequence $x$ is concave up on the index range $k=m-1,\ldots,n$.
• If the increments are decreasing on a range, $\Delta x_k \lt \Delta x_{k-1}$ for all $k=m,\ldots,n$, then the sequence $x$ is concave down on the index range $k=m-1,\ldots,n$.
##### Note5.1.1
The range of index values for the sequence always starts one index earlier than the corresponding range of index values describing the increments because each increment is computed as a backward difference. The increment $\Delta x_m = x_m - x_{m-1}$ characterizes the behavior of the sequence $x$ going from index $m-1$ to index $m$. So a range of index values $k=m,\ldots,n$ for the increments characterizes the change of the sequence $x_{m-1}$ to $x_{m}$, and ultimately to $x_n$.
We now attempt to use this understanding by way of analogy to describe functions. Functions are more general than sequences, although sequences are a special subcollection of functions. A function is a general rule mapping values in a domain set to a co-domain set. For a sequence, the domain is a collection of integers. More generally, functions might have any valid set as the domain. We will usually be working with functions whose domains are intervals or unions of intervals.
When we learned about definite integrals, for a function $f$ that is integrable (which includes all continuous functions), we can define an accumulation function \begin{equation*}A(x)=\int_{a}^{x} f(z) \, dz.\end{equation*} (See 4.4.) The accumulation function measures the accumulated increments of change using $f(x)$ as the rate of change as $x$ goes from $x=a$ to the present value. The Riemann sum approximation reinforces this interpretation, \begin{equation*} \int_{a}^{x} f(z) \, dz \approx \sum_{k=1}^{n} f(z_k^*) \Delta z \end{equation*} where $\displaystyle \Delta z = \frac{x-a}{n}$, $z_k=a+k \Delta z$ and $z_k^*$ is any value in the subinterval $[z_{k-1}, z_k]$. The increments $f(z_k^*) \Delta z$ represent the product of a rate $f(z_k^*)$ and an increment of the variable $\Delta z$.
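The Riemann-sum interpretation can be sketched numerically; here $z_k^*$ is taken at subinterval midpoints (an illustrative Python sketch, not from the text, which uses Sage):

```python
def riemann_sum(f, a, x, n=10000):
    """Midpoint Riemann sum approximating A(x) = integral of f
    from a to x, taking z_k^* at the midpoint of each subinterval."""
    dz = (x - a) / n
    return sum(f(a + (k + 0.5) * dz) * dz for k in range(n))

# Accumulating the rate f(z) = 2z from a = 0 should give A(x) = x^2.
approx = riemann_sum(lambda z: 2.0 * z, 0.0, 3.0)
print(round(approx, 6))  # 9.0
```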
The rate function for an accumulation function is analogous to the increments for a sequence. The features of the rate function informs us about the behavior of the accumulation function $A(x)$. The rate function $f(x)$ is called the derivative of the accumulation function $A(x)$, and we write $f(x)=A'(x)$ or $\displaystyle f(x)=\frac{dA}{dx}$. This statement, known as the Fundamental Theorem of Calculus, serves as the central result of calculus. One of our goals is to understand this result at a level where we know not only what it says but why it is true.
##### Behavior of a Function Using Derivatives
Given an accumulation function $A(x)$ that has a derivative $A'(x)=f(x)$ defined on an interval $I$, then the behavior of $A$ is determined by the behavior of $A'=f$ as given below.
• If $A'(x) \gt 0$ for all $x \in I$, then $A(x)$ is increasing on the interval $I$.
• If $A'(x) \lt 0$ for all $x \in I$, then $A(x)$ is decreasing on the interval $I$.
• If $A'(x)$ is increasing on the interval $I$, then $A(x)$ is concave up on $I$.
• If $A'(x)$ is decreasing on the interval $I$, then $A(x)$ is concave down on $I$.
When a function is defined as an accumulation function, identifying the derivative (or rate function) is as easy as identifying the function that appears inside the integral operation. But what about other functions? Do they have derivatives as well? Unfortunately, the answer is not always. We will be studying this question throughout the course as well as learn rules for how to find the derivatives when they exist.
Note that because $A'(x)=f(x)$ is also a function, it too might have a derivative, represented by $A''(x)=f'(x)$. If so, the sign of $A''$ can tell us whether $A'$ is increasing or decreasing, and therefore gives us the concavity of the original function $A$. Also note that we will later learn some conditions in which we will be able to add the end-points of the intervals. Also, we will not generally use the name $A$ for the function of interest.
In order to use these results about the behavior of a function in terms of its derivative, we will need to have methods of determining derivatives. For now, we will let a computer assist us. Online tools, such as WolframAlpha.com or SageMathCell, are convenient enough. Because Sage allows more flexibility, some guidance is provided below.
Suppose you have a function $F(x) = x^3+4x^2$. Sage requires that you explicitly indicate multiplication. In order to work with our function, we will save it with a label y. To verify our work, we will have Sage display its results.
In Sage, once we have a label (a Sage variable), we can apply Sage operations. In this example, we want to find the derivative. We will save this with a new label as well, say dy. So long as you evaluated the above results already, you can evaluate the next step below.
If your problem involves an independent variable other than $x$, you need to let Sage know what symbols represent mathematical variables. The following script uses the same function but with $t$ as the independent variable. It also finds $f''(t)$ as the derivative of $f'(t)$.
Knowing formulas for the derivatives allows us to interpret the behavior of the original function. The following example works with the function we just differentiated with Sage.
##### Example5.1.2
Given $f(t) = t^3+4t^2$, describe the behavior of $f$ giving intervals of monotonicity and of concavity.
Solution: The derivatives are $f'(t)=3t^2+8t=t(3t+8)$ and $f''(t)=6t+8$. Since $f'(t)>0$ for $t<-\frac{8}{3}$ and for $t>0$, the function $f$ is increasing on $\left(-\infty,-\frac{8}{3}\right)$ and on $(0,\infty)$, and decreasing on $\left(-\frac{8}{3},0\right)$. Since $f''(t)>0$ exactly when $t>-\frac{4}{3}$, $f$ is concave up on $\left(-\frac{4}{3},\infty\right)$ and concave down on $\left(-\infty,-\frac{4}{3}\right)$.
##### Key Questions Still Needing Answers
This section introduced a number of things that we will study as the course progresses. It leaves a number of questions unanswered for now.
• The derivative was introduced as the rate function in a definite integral. What does the derivative measure?
• How does one mathematically define a derivative?
• How does one calculate a derivative?
• What functions even have a derivative?
• What is the precise relationship between definite integrals and derivatives (i.e., the fundamental theorem of calculus)?
• Concavity is defined by where a derivative is increasing or decreasing. What does that really mean? | 2017-03-27T16:28:29 | {
"domain": "jmu.edu",
"url": "http://educ.jmu.edu/~waltondb/ModelCalculus/calculus-overview.html",
"openwebmath_score": 0.8645172715187073,
"openwebmath_perplexity": 219.48422288076256,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9920620042557432,
"lm_q2_score": 0.658417487156366,
"lm_q1q2_score": 0.6531909719453746
} |
https://stats.stackexchange.com/questions/30858/how-to-calculate-cumulative-distribution-in-r/30860 | # How to calculate cumulative distribution in R?
I need to calculate the cumulative distribution function of a data sample.
Is there something similar to hist() in R that measure the cumulative density function?
I have tried ecdf() but I can't understand the logic.
The ecdf function applied to a data sample returns a function representing the empirical cumulative distribution function. For example:
> X = rnorm(100) # X is a sample of 100 normally distributed random variables
> P = ecdf(X) # P is a function giving the empirical CDF of X
> P(0.0) # This returns the empirical CDF at zero (should be close to 0.5)
[1] 0.52
> plot(P) # Draws a plot of the empirical CDF (see below)
If you want to have an object representing the empirical CDF evaluated at specific values (rather than as a function object) then you can do
> z = seq(-3, 3, by=0.01) # The values at which we want to evaluate the empirical CDF
> p = P(z) # p now stores the empirical CDF evaluated at the values in z
Note that p contains at most the same amount of information as P (and possibly it contains less) which in turn contains the same amount of information as X.
• Yes i know, but how is it possible to access the values of ecdf? this is a mystery for me. – emanuele Jun 21 '12 at 8:50
• If you want its value at x you simply write P(x). Note that x can be a vector (see the last couple of sentences of my answer.) – Chris Taylor Jun 21 '12 at 8:54
• @ChrisTaylor The correct terminology is empirical cumulative distribution function not density function. – Michael R. Chernick Jun 21 '12 at 14:51
What you appear to need is this, to get the accumulated distribution (the probability of getting a value <= x in a sample). ecdf returns you a function, but it appears to be made for plotting, and so the argument of that function, if the plot were a staircase, would be the index of the tread.
You can use this:
acumulated.distrib= function(sample,x){
minors= 0
for(n in sample){
if(n<=x){
minors= minors+1
}
}
return (minors/length(sample))
}
mysample = rnorm(100)
acumulated.distrib(mysample,1.21) #1.21 or any other value you want.
Sadly, the use of this function is not very fast. I don't know if R has a function that does this by returning a function, which would be more efficient.
• You seem to mix up the ECDF with its inverse. R does, indeed, compute the ECDF: its argument is a potential value of the random variable and it returns values in the interval $[0,1]$. This is readily checked. For instance, ecdf(c(-1,0,3,9))(8) returns 0.75. A generalized inverse of the ECDF is the quantile function, implemented by quantile in R. – whuber Jun 1 '15 at 16:19
I always found ecdf() to be a little confusing. Plus I think it only works in the univariate case. Ended up rolling my own function for this instead.
First install data.table. Then install my package, mltools (or just copy the empirical_cdf() method into your R environment.)
Then it's as easy as
# load packages
library(data.table)
library(mltools)
# Make some data
dt <- data.table(x=c(0.3, 1.3, 1.4, 3.6), y=c(1.2, 1.2, 3.8, 3.9))
dt
x y
1: 0.3 1.2
2: 1.3 1.2
3: 1.4 3.8
4: 3.6 3.9
### CDF of a vector
empirical_cdf(dt\$x, ubounds=seq(1, 4, by=1.0))
UpperBound N.cum CDF
1: 1 1 0.25
2: 2 3 0.75
3: 3 3 0.75
4: 4 4 1.00
### CDF of column 'x' of dt
empirical_cdf(dt, ubounds=list(x=seq(1, 4, by=1.0)))
x N.cum CDF
1: 1 1 0.25
2: 2 3 0.75
3: 3 3 0.75
4: 4 4 1.00
### CDF of columns 'x' and 'y' of dt
empirical_cdf(dt, ubounds=list(x=seq(1, 4, by=1.0), y=seq(1, 4, by=1.0)))
x y N.cum CDF
1: 1 1 0 0.00
2: 1 2 1 0.25
3: 1 3 1 0.25
4: 1 4 1 0.25
5: 2 1 0 0.00
6: 2 2 2 0.50
7: 2 3 2 0.50
8: 2 4 3 0.75
9: 3 1 0 0.00
10: 3 2 2 0.50
11: 3 3 2 0.50
12: 3 4 3 0.75
13: 4 1 0 0.00
14: 4 2 2 0.50
15: 4 3 2 0.50
16: 4 4 4 1.00
Friend, you can read the code on this blog:
sample.data = read.table ('data.txt', header = TRUE, sep = "\t")
cdf <- ggplot (data=sample.data, aes(x=Delay, group =Type, color = Type)) + stat_ecdf()
cdf
more details can be found on following link:
r cdf and histogram | 2019-11-21T13:09:16 | {
"domain": "stackexchange.com",
"url": "https://stats.stackexchange.com/questions/30858/how-to-calculate-cumulative-distribution-in-r/30860",
"openwebmath_score": 0.46444079279899597,
"openwebmath_perplexity": 1742.1527950290695,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.8652240964782011,
"lm_q2_score": 0.7549149923816048,
"lm_q1q2_score": 0.653170642201222
} |
https://mathematica.stackexchange.com/questions/163331/how-would-i-use-mathematica-to-solve-this-equation | # How would I use mathematica to solve this equation?
I don't have much experience with solving equations using mathematica. I have the following equation:
$$A=u\cdot f^{-1}(u)-\int_a^{f^{-1}(u)}f(x)dx$$ For some given constant $A>0$ and $a\in[0,1]$, and given function $f$, with the following properties:
• $f(x)\geq 0$. $f(x)=0$ on $x\in[0,a]$
• $f^{-1}(u)\in [a,1]$
• I know that $u>0$ will hold
How do I tell mathematica to solve for $u$? I don't know where to start. I would be satisfied with a numerical solution.
Actually, Perhaps there is some good tutorial that would teach me these things?
## 3 Answers
If you substitute
\[Lambda] -> (f^-1)[u]
your equation becomes
A == \[Lambda] f[\[Lambda]] - Integrate[f[x], {x, a, \[Lambda]}]
a little bit nicer!
If you know the antiderivative of f[x] you can solve your problem using FindRoot[]; otherwise, use numerical integration...
example:
gl = \[Lambda] Max[0, \[Lambda] - 0.5] -Integrate[Max[0, x - .5], {x, 0.5, \[Lambda]}] - 10
Plot[gl, {\[Lambda], 0, 10}]
NMinimize[{1, gl == 0}, \[Lambda]]
(* {\[Lambda] -> 4.5}*)
Numerical version(NIntegrate):
int[ \[Lambda]_?NumericQ] :=NIntegrate[Max[0, x - .5], {x, 0.5, \[Lambda]}]
gl = \[Lambda] Max[0, \[Lambda] - 0.5] - int[\[Lambda]] - 10
Plot[gl, {\[Lambda], 0, 10}]
NMinimize[{1, gl == 0}, \[Lambda]]
• Thank you. The problem is, I know how to numerically integrate an integral, but I don't know how to do it if $\lambda$ as you've defined it, is unknown, and to then solve for lambda – user56834 Jan 9 '18 at 14:59
• An example would help! – Ulrich Neumann Jan 9 '18 at 16:12
• Ok, say $a=0.5$, and $f(x)=max(0,x-a)$, $A =10$ – user56834 Jan 9 '18 at 17:36
Update
(My first version had a serious error.)
Your example function is simple enough that the problem can be solved exactly. If you have a more complicated function in mind, then it might make sense to use an NDSolve approach instead, but you will need to provide such an example before I show that approach.
First, here is your equation:
Block[{if = InverseFunction[f]},
eqn = A == u[A] if[u[A]] - Integrate[f[x], {x, a, if[u[A]]}]
];
eqn //TeXForm
$A=u(A) f^{(-1)}(u(A))-\int_a^{f^{(-1)}(u(A))} f(x) \, dx$
The example in the comments had:
f[x_] := Max[0, x-a]
Having the inverse will also be convenient:
if[u_] = x /. First @ Solve[f[x] == u, x, Reals]
ConditionalExpression[a + u, u > 0]
Using the above example function, we obtain:
eqn2 = Simplify[eqn /. InverseFunction[f]->if, u[A]>0]
2 A == u[A] (2 a + u[A])
Solving for u[A] yields:
Simplify[Reduce[eqn2, u[A], Reals], u[A]>0 && a>0 && A>0]
Sqrt[a^2 + 2 A] == a + u[A]
Finally, we obtain the following plot for u[A]:
Block[{a=.5},
Plot[-a + Sqrt[a^2 + 2 A], {A, 0, 10}]
]
• I don't understand why those two constraints imply $a=0$? – user56834 Jan 9 '18 at 14:57
• Where did you get $0=f^{-1}(u)$? – user56834 Jan 9 '18 at 17:39
• I am not quite sure, but it seems that the by calculating the derivative of the initial equation with respect to u one immediately finds f^(-1)(u)==0, is not it? – Alexei Boulbitch Jan 10 '18 at 11:33
• @AlexeiBoulbitch You're making the same error I originally made. Consider the equation $x^2=1$. Taking a derivative with respect to $x$ does not yield the same roots. – Carl Woll Jan 10 '18 at 15:10
• @ Carl Woll That was also my concern. On the other hand, taking a derivative is a rather standard trick for integral equations, though, of course, all information about A is lost. – Alexei Boulbitch Jan 10 '18 at 15:45
I would suggest having a look at the reference docs...or trying again to search for closely related problems with google...you may end up on S.E again.
https://reference.wolfram.com/language/ref/Integrate.html | 2019-07-18T03:27:47 | {
"domain": "stackexchange.com",
"url": "https://mathematica.stackexchange.com/questions/163331/how-would-i-use-mathematica-to-solve-this-equation",
"openwebmath_score": 0.561585009098053,
"openwebmath_perplexity": 1525.1162041938808,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.8652240964782011,
"lm_q2_score": 0.7549149868676283,
"lm_q1q2_score": 0.6531706374303967
} |
https://mathematica.stackexchange.com/questions/51127/how-to-apply-compound-styles-to-curves-in-e-g-plot | # How to apply compound styles to curves in e.g. Plot
Say, I have a colorlist
listColor={Black,Brown,Red,Cyan}
Now, I have some Plot function that I can use this list:
Plot[{Sin[x],Cos[x],x,x^2},{x,1,100},PlotStyle->listColor]
Everything went fine. Now, I wanted to make the plot style "thick"
Plot[{Sin[x],Cos[x],x,x^2},{x,1,100},PlotStyle->{Thick,listColor}]
The listColor breaks down. I understand I actually need
listColor={{Thick, Black},{Thick, Brown},{Thick, Red},{Thick, Cyan}}
But adding {Thick} to each entry of listColor by hand is tedious. Is there any way that I can append {Thick, } to each entry of the list elegantly?
I notice that using
Transpose[{Table[Thick,{i,1,4}],listColor}]
might work, but it looks unnecessarily roundabout...
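The Transpose construction sketched above is just "pair a constant with every element of a list." The same list construction in Python terms (purely illustrative, not Wolfram Language semantics):

```python
colors = ["Black", "Brown", "Red", "Cyan"]

# What Transpose[{Table[Thick, {i, 1, 4}], listColor}] builds, element by element:
via_transpose = list(zip(["Thick"] * len(colors), colors))

# The same pairing written directly:
via_thread = [("Thick", c) for c in colors]

assert via_transpose == via_thread
print(via_thread)
```

Both produce the list of (style, color) pairs; the answers below show Mathematica's idiomatic equivalents.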
You can use BaseStyle for some of the directives:
listColor = {Black, Brown, Red, Cyan}
Another way is to put everything in PlotStyle:
PlotStyle -> Thread[Directive[listColor, Thick]] (*or just*) | 2021-04-16T22:17:43 | {
"domain": "stackexchange.com",
"url": "https://mathematica.stackexchange.com/questions/51127/how-to-apply-compound-styles-to-curves-in-e-g-plot",
"openwebmath_score": 0.5510608553886414,
"openwebmath_perplexity": 6493.965579740076,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.8652240895276223,
"lm_q2_score": 0.7549149923816048,
"lm_q1q2_score": 0.6531706369541259
} |
https://gmatclub.com/forum/the-cost-price-of-m-articles-is-the-same-as-the-selling-price-of-n-262916.html | GMAT Changed on April 16th - Read about the latest changes here
It is currently 25 Apr 2018, 09:18
GMAT Club Daily Prep
Thank you for using the timer - this advanced tool can estimate your performance and suggest more practice questions. We have subscribed you to Daily Prep Questions via email.
Customized
for You
we will pick new questions that match your level based on your Timer History
Track
every week, we’ll send you an estimated GMAT score based on your performance
Practice
Pays
we will pick new questions that match your level based on your Timer History
Events & Promotions
Events & Promotions in June
Open Detailed Calendar
The cost price of m articles is the same as the selling price of n
Author Message
TAGS:
Hide Tags
Intern
Joined: 05 Dec 2017
Posts: 21
GPA: 3.35
The cost price of m articles is the same as the selling price of n [#permalink]
Show Tags
08 Apr 2018, 04:11
00:00
Difficulty:
65% (hard)
Question Stats:
42% (01:06) correct 58% (01:00) wrong based on 33 sessions
HideShow timer Statistics
The cost price of $$m$$ articles is the same as the selling price of $$n$$ articles. If the profit is 200% then what is the value of $$n$$ in terms of $$m$$?
A. $$3m$$
B. $$\frac{3}{m}$$
C. $$2m$$
D. $$\frac{m}{2}$$
E. $$\frac{m}{3}$$
[Reveal] Spoiler: OA
Math Expert
Joined: 02 Aug 2009
Posts: 5777
Re: The cost price of m articles is the same as the selling price of n [#permalink]
Show Tags
08 Apr 2018, 07:24
Jamil1992Mehedi wrote:
The cost price of $$m$$ articles is the same as the selling price of $$n$$ articles. If the profit is 200% then what is the value of $$n$$ in terms of $$m$$?
A. $$3m$$
B. $$\frac{3}{m}$$
C. $$2m$$
D. $$\frac{m}{2}$$
E. $$\frac{m}{3}$$
Profit of 200% means SP is 3 times CP..
so $$SP = 3*CP$$....so CP of m articles will be EQUAL to SP of $$\frac{m}{3}$$ articles...
It is given that CP of m = SP of n, which is equal to SP of $$\frac{m}{3}$$..so $$n=\frac{m}{3}$$
E
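The relationship can be sanity-checked with numbers: take a cost price of 1 per article, so a 200% profit makes the selling price 3. A quick check in Python (values chosen for illustration):

```python
def n_given_m(m, profit_pct=200):
    cp = 1.0                              # cost price per article (any value works)
    sp = cp * (1 + profit_pct / 100.0)    # 200% profit  =>  SP = 3 * CP
    # CP of m articles equals SP of n articles:  m * cp = n * sp
    return m * cp / sp

assert n_given_m(9) == 3.0     # n = m/3, matching answer E
assert n_given_m(12) == 4.0
```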
_________________
Absolute modulus :http://gmatclub.com/forum/absolute-modulus-a-better-understanding-210849.html#p1622372
Combination of similar and dissimilar things : http://gmatclub.com/forum/topic215915.html
GMAT online Tutor
Re: The cost price of m articles is the same as the selling price of n [#permalink] 08 Apr 2018, 07:24
Display posts from previous: Sort by | 2018-04-25T16:18:57 | {
"domain": "gmatclub.com",
"url": "https://gmatclub.com/forum/the-cost-price-of-m-articles-is-the-same-as-the-selling-price-of-n-262916.html",
"openwebmath_score": 0.4424494802951813,
"openwebmath_perplexity": 2677.2361077431146,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.8652240756264638,
"lm_q2_score": 0.7549149978955811,
"lm_q1q2_score": 0.653170631230758
} |
https://mathematica.stackexchange.com/questions/47449/how-do-i-make-the-color-of-arrowheads-in-a-graph-different-from-the-colors-of-th/157629 | # How do I make the color of arrowheads in a graph different from the colors of the arrows?
I'm graphing a Markov process
mp = DiscreteMarkovProcess[{1, 0, 0}, ({
{0.6, 0.1, 0.3},
{0.2, 0.7, 0.1},
{0.3, 0.3, 0.4}
})];
and would like to have arrows whose thicknesses corresponds to the transition probabilities, with arrowheads of a different color in the exact center of each edge. But all my attempts end up a mess.
g = Graph[mp];
Scan[(PropertyValue[{g, #}, EdgeLabels] =
PropertyValue[{g, #}, "Probability"]) &, EdgeList[g]];
Scan[(PropertyValue[{g, #}, EdgeStyle] =
Thickness[PropertyValue[{g, #}, "Probability"]/20]]) &,
EdgeList[g]];
g
The thick edges leave gaps between their ends and the nodes of the graph, and I can't figure out how to change the color of the arrow heads so that they stand out against the color of the edges.
How can I change the color of the arrowheads in my figure. How can I avoid the gaps that appear between nodes and the ends of the edges?
• Take a look at EdgeShapeFunction. – wxffles May 7 '14 at 22:47
• @wxffles: Looks intriguing; but I'm not sure where to go with it. It seems to amount to "build it from scratch". – orome May 7 '14 at 23:21
Using an EdgeShapeFunction seems to do what you want. Adapting from the examples in the help:
ef[pts_List, e_] :=
Arrow[pts]}
g = Graph[mp];
Scan[(PropertyValue[{g, #}, EdgeLabels] =
PropertyValue[{g, #}, "Probability"]) &, EdgeList[g]];
Scan[(PropertyValue[{g, #}, EdgeStyle] =
Directive[GrayLevel[.7], Thickness[PropertyValue[{g, #}, "Probability"]/20]]) &,
EdgeList[g]];
Scan[(PropertyValue[{g, #}, EdgeShapeFunction] = ef) &, EdgeList[g]];
g
It's a bit ugly, with mysterious red dots within the arrowheads. But this only reflects how little time I've put into it. With some competence and patience I suspect it could do what you want.
Edit: Something nicer:
ef[pts_List, e_] := {Arrowheads[{{0.02, 0.65,
Graphics@{Red, EdgeForm[Gray], Polygon[{{-1.5, -1}, {1.5, 0}, {-1.5, 1}}]}
}}], Arrow[pts]}
• Is there a way to keep the arc shapes of the original? – orome May 8 '14 at 0:47
• Not easily as far as I can tell. It's just using the points that it gets passed. I'm not sure what the default EdgeShapeFunctions do to it. – wxffles May 8 '14 at 1:05
• @wxffles How to access the built-in set of arrow heads is described here. – Alexey Popkov May 8 '14 at 7:09
If you don't mind having a Graphics object, you can replace the Arrowheads directives with wxffles's Arrowheads specification, and get to keep the arc shapes of the original g:
arrowheads = Arrowheads[{{0.02, 0.65, Graphics@{Red, EdgeForm[Gray],
Polygon[{{-1.5, -1}, {1.5, 0}, {-1.5, 1}}]}}}];
g2 = Show[g] /. TagBox -> (# &) /. _Arrowheads :> arrowheads
If you have to have a Graph object, you can extract the edge primitives from g2 and use them as EdgeShapeFunction for g:
edgeshapefunctions = Function /@
Cases[g2[[1]], {dirs___, _Arrowheads, _ArrowBox}, {0, Infinity}];
SetProperty[g, EdgeShapeFunction -> Thread[EdgeList[g] -> edgeshapefunctions]] | 2019-07-21T17:52:23 | {
"domain": "stackexchange.com",
"url": "https://mathematica.stackexchange.com/questions/47449/how-do-i-make-the-color-of-arrowheads-in-a-graph-different-from-the-colors-of-th/157629",
"openwebmath_score": 0.23747175931930542,
"openwebmath_perplexity": 4429.338296535963,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.8652240791017535,
"lm_q2_score": 0.7549149923816048,
"lm_q1q2_score": 0.6531706290834812
} |
https://gmatclub.com/forum/the-sum-of-the-first-k-positive-integers-is-equal-to-k-k-1-2-what-is-126289.html?fl=similar | It is currently 12 Dec 2017, 14:04
# The sum of the first k positive integers is equal to k(k+1)/2. What is
Director
Status: No dream is too large, no dreamer is too small
Joined: 14 Jul 2010
Posts: 604
Kudos [?]: 1167 [2], given: 39
The sum of the first k positive integers is equal to k(k+1)/2. What is [#permalink]
### Show Tags
19 Jan 2012, 10:16
2
KUDOS
8
This post was
BOOKMARKED
00:00
Difficulty:
65% (hard)
Question Stats:
60% (01:37) correct 40% (01:25) wrong based on 212 sessions
### HideShow timer Statistics
The sum of the first k positive integers is equal to k(k+1)/2. What is the sum of the integers from n to m, inclusive, where 0<n<m?
A. $$\frac{m(m+1)}{2} - \frac{(n+1)(n+2)}{2}$$
B. $$\frac{m(m+1)}{2} - \frac{n(n+1)}{2}$$
C. $$\frac{m(m+1)}{2} - \frac{(n-1)n}{2}$$
D. $$\frac{(m-1)m}{2} - \frac{(n+1)(n+2)}{2}$$
E. $$\frac{(m-1)m}{2} - \frac{n(n+1)}{2}$$
[Reveal] Spoiler: OA
_________________
Collections:-
PSof OG solved by GC members: http://gmatclub.com/forum/collection-ps-with-solution-from-gmatclub-110005.html
DS of OG solved by GC members: http://gmatclub.com/forum/collection-ds-with-solution-from-gmatclub-110004.html
100 GMAT PREP Quantitative collection http://gmatclub.com/forum/gmat-prep-problem-collections-114358.html
Collections of work/rate problems with solutions http://gmatclub.com/forum/collections-of-work-rate-problem-with-solutions-118919.html
Mixture problems in a file with best solutions: http://gmatclub.com/forum/mixture-problems-with-best-and-easy-solutions-all-together-124644.html
Kudos [?]: 1167 [2], given: 39
Math Expert
Joined: 02 Sep 2009
Posts: 42571
Kudos [?]: 135384 [0], given: 12691
Re: The sum of the first k positive integers is equal to k(k+1)/2. What is [#permalink]
### Show Tags
19 Jan 2012, 10:28
Expert's post
6
This post was
BOOKMARKED
Baten80 wrote:
The sum of the first k positive integers is equal to k(k+1)/2. What is the sum of the integers from n to m, inclusive, where 0<n<m?
A. $$\frac{m(m+1)}{2} - \frac{(n+1)(n+2)}{2}$$
B. $$\frac{m(m+1)}{2} - \frac{n(n+1)}{2}$$
C. $$\frac{m(m+1)}{2} - \frac{(n-1)n}{2}$$
D. $$\frac{(m-1)m}{2} - \frac{(n+1)(n+2)}{2}$$
E. $$\frac{(m-1)m}{2} - \frac{n(n+1)}{2}$$
The sum of the integers from n to m, inclusive, will be the sum of the first m positive integers minus the sum of the first n-1 integers: $$\frac{m(m+1)}{2}-\frac{(n-1)(n-1+1)}{2}=\frac{m(m+1)}{2}-\frac{(n-1)n}{2}$$.
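The formula is easy to verify by brute force; a small Python check (added for this write-up, not part of the original thread):

```python
def sum_n_to_m(n, m):
    # sum of the first m positive integers minus the sum of the first (n-1)
    return m * (m + 1) // 2 - (n - 1) * n // 2

# compare against direct summation for many (n, m) pairs with 0 < n <= m
for n in range(1, 15):
    for m in range(n, 20):
        assert sum_n_to_m(n, m) == sum(range(n, m + 1))

print(sum_n_to_m(3, 4))   # 3 + 4 = 7, matching choice C in the plug-in check below
```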
_________________
Kudos [?]: 135384 [0], given: 12691
Math Expert
Joined: 02 Sep 2009
Posts: 42571
Kudos [?]: 135384 [3], given: 12691
Re: The sum of the first k positive integers is equal to k(k+1)/2. What is [#permalink]
### Show Tags
19 Jan 2012, 10:34
3
KUDOS
Expert's post
1
This post was
BOOKMARKED
Baten80 wrote:
The sum of the first k positive integers is equal to k(k+1)/2. What is the sum of the integers from n to m, inclusive, where 0<n<m?
A. $$\frac{m(m+1)}{2} - \frac{(n+1)(n+2)}{2}$$
B. $$\frac{m(m+1)}{2} - \frac{n(n+1)}{2}$$
C. $$\frac{m(m+1)}{2} - \frac{(n-1)n}{2}$$
D. $$\frac{(m-1)m}{2} - \frac{(n+1)(n+2)}{2}$$
E. $$\frac{(m-1)m}{2} - \frac{n(n+1)}{2}$$
Or try plug-in method: let m=4 and n=3 --> then m+n=7. Let see which option yields 7.
A. $$\frac{m(m+1)}{2} - \frac{(n+1)(n+2)}{2} = 10-10=0$$;
B. $$\frac{m(m+1)}{2} - \frac{n(n+1)}{2} = 10-6=4$$;
C. $$\frac{m(m+1)}{2} - \frac{(n-1)n}{2} = 10-3=7$$ --> OK;
D. $$\frac{(m-1)m}{2} - \frac{(n+1)(n+2)}{2} = 6-10=-4$$;
E. $$\frac{(m-1)m}{2} - \frac{n(n+1)}{2} = 6-6=0$$.
_________________
Kudos [?]: 135384 [3], given: 12691
Current Student
Joined: 03 Apr 2015
Posts: 26
Kudos [?]: 7 [0], given: 6
Schools: ISB '16 (A)
Re: The sum of the first k positive integers is equal to k(k+1)/2. What is [#permalink]
### Show Tags
19 Jul 2015, 05:05
Can also be solved by using Sum of A.P formula from n to m. (A bit lengthy though)
Kudos [?]: 7 [0], given: 6
Manager
Joined: 07 Jan 2015
Posts: 91
Kudos [?]: 23 [0], given: 654
Location: Thailand
GMAT 1: 540 Q41 V23
GMAT 2: 570 Q44 V24
GMAT 3: 550 Q44 V21
GMAT 4: 660 Q48 V33
GPA: 3.31
WE: Science (Other)
Re: The sum of the first k positive integers is equal to k(k+1)/2. What is [#permalink]
### Show Tags
16 Sep 2015, 20:52
Baten80 wrote:
The sum of the first k positive integers is equal to k(k+1)/2. What is the sum of the integers from n to m, inclusive, where 0<n<m?
A. m(m+1)/2 - (n+1)(n+2)/2
B. m(m+1)/2 - n(n+1)/2
C. m(m+1)/2 - (n-1)n/2
D. (m-1)m/2 - (n+1)(n+2)/2
E. (m-1)m/2 - n(n+1)/2
But OA is different.
I think this question can be solved easily by picking numbers.
Let n = 1 and m = 2
Sum of 1 integer is 1;
Sum of 2 integers is 3
So, the sum of the integers from 1 to 2 must be 3. Let's plug N and M into the choices
A. $$\frac{2(2+1)}{2}$$ - $$\frac{(1+1)(1+2)}{2}$$ $$= 3 - 3 = 0$$
B. $$\frac{2(2+1)}{2}$$ - $$\frac{1(1+1)}{2}$$ $$= 3 - 1 = 2$$
C. $$\frac{2(2+1)}{2}$$ - $$\frac{(1-1)1}{2}$$ $$= 3 - 0 = 3$$ Bingo!
D. $$\frac{(2-1)2}{2}$$ - $$\frac{(1+1)(1+2)}{2}$$ $$= 1 - 3 = -2$$
E. $$\frac{(2-1)2}{2}$$ - $$\frac{1(1+1)}{2}$$ $$= 1 - 1 = 0$$
Correct me if I'm wrong pls
Kudos [?]: 23 [0], given: 654
Board of Directors
Joined: 17 Jul 2014
Posts: 2697
Kudos [?]: 447 [0], given: 207
Location: United States (IL)
Concentration: Finance, Economics
GMAT 1: 650 Q49 V30
GPA: 3.92
WE: General Management (Transportation)
Re: The sum of the first k positive integers is equal to k(k+1)/2. What is [#permalink]
### Show Tags
20 Dec 2015, 17:08
I solved by picking numbers.
n=12
m=15.
only answer choice C yields a valid result.
Kudos [?]: 447 [0], given: 207
Manager
Joined: 12 Nov 2015
Posts: 53
Kudos [?]: 3 [0], given: 23
Re: The sum of the first k positive integers is equal to k(k+1)/2. What is [#permalink]
### Show Tags
20 Dec 2015, 21:14
The only thing to trick here is that we need the sum of (n-1) integers to be subtracted from the sum of m integers.
Kudos [?]: 3 [0], given: 23
Non-Human User
Joined: 09 Sep 2013
Posts: 14897
Kudos [?]: 287 [0], given: 0
Re: The sum of the first k positive integers is equal to k(k+1)/2. What is [#permalink]
### Show Tags
01 Jan 2017, 13:00
Hello from the GMAT Club BumpBot!
Thanks to another GMAT Club member, I have just discovered this valuable topic, yet it had no discussion for over a year. I am now bumping it up - doing my job. I think you may find it valuable (esp those replies with Kudos).
Want to see all other topics I dig out? Follow me (click follow button on profile). You will receive a summary of all topics I bump in your profile area as well as via email.
_________________
Kudos [?]: 287 [0], given: 0
Re: The sum of the first k positive integers is equal to k(k+1)/2. What is [#permalink] 01 Jan 2017, 13:00
| 2017-12-12T22:04:24 | {
"domain": "gmatclub.com",
"url": "https://gmatclub.com/forum/the-sum-of-the-first-k-positive-integers-is-equal-to-k-k-1-2-what-is-126289.html?fl=similar",
"openwebmath_score": 0.5512520670890808,
"openwebmath_perplexity": 6500.427905585682,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. Yes\n2. Yes\n\n",
"lm_q1_score": 0.8652240964782011,
"lm_q2_score": 0.7549149758396752,
"lm_q1q2_score": 0.6531706278887459
} |
http://math.stackexchange.com/questions/106824/what-are-various-proofs-good-for?answertab=oldest | # What are various proofs good for?
There are plenty of questions around here which are proven to be right or wrong in various ways. I wonder what one can learn from these differing ways of proving something, apart from the truism that the more proofs, the better.
Let's say a statement is something like a way $A\to Z$. One proof might then break this down to $$A\to B\to \cdots \to W\to Z,$$ while the other proof takes another route $$A\to \beta \to \cdots \to \omega\to Z.$$
Is there a way to morph between the various ways and thereby learn something about the general structure?
EDIT What is it worth to have plenty of proofs for the "$\Rightarrow$" direction, if a have only one proof for "$\Leftarrow$"?
The question is very general. Examples, are welcome.
-
You may be interested in this and this. – lentic catachresis Feb 19 '12 at 23:33
I think the question is too general. – lhf Mar 23 '12 at 0:58
@draks Different proofs might be related to different areas of math. The origin of various proofs is either genius (like Gauss's MO) or different people doing maths their own way. I personally prefer a variety of proofs, since you might not get one but understand another, or because they provide different insights. Think about $$1 = \cos(x-x) = \cos(x)\cos(-x)-\sin(x)\sin(-x)=\cos ^2 x+\sin ^2 x$$ It is a purely analytical proof of the Pythagorean Theorem, which I like the most over any other. – Pedro Tamaroff Mar 23 '12 at 1:32
If you look at this from a proof-theoretic point of view, then each proof yields certain kinds of information which ideally facilitate the extraction of computable realizers or things in this fashion.
An interesting field of study is the topic of proof mining which concerns the extraction of computable realizers or uniform bounds from (possibly non-constructive) proofs. Ulrich Kohlenbach has written an extensive book on the topic [1].
Based on Gödel's Dialectica interpretation and a negative translation of formulas, one can show that if a sentence $\forall \vec{x}\exists\vec{y}A_0(\vec{x},\vec{y})$, where $A_0$ is quantifier-free, can be proven in weakly extensional Peano arithmetic using only quantifier-free choice and some universal axioms, then one can extract realizers (computable functionals) $\vec{t}$ for $\vec{y}$ such that it is constructively provable that $\forall\vec{x}A_0(\vec{x},\vec{t}(\vec{x}))$.
Therefore, it is at least for some cases possible to extract general information from proofs, independent of the actual form of the proof (however, it is important that the axioms used are in a certain set of allowed ones).
NB: When it comes to bound extraction, the quality of the bound might well depend on the actual proof. For example, there are different proofs by Euclid and Euler for the proposition "There are infinitely many prime numbers" which yield different upper bounds on the $(r+1)$th prime number. I think that this also motivates the construction of different proofs for the same theorems.
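As a concrete instance of the last point: Euclid's argument shows that some new prime divides $p_1\cdots p_r+1$, which gives the upper bound $p_{r+1}\le p_1\cdots p_r+1$ on the $(r+1)$th prime. That bound is easy to check numerically (an illustration added here, not from the original answer):

```python
primes = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]

product = 1
for r in range(1, len(primes)):
    product *= primes[r - 1]
    # Euclid: some prime outside {p_1, ..., p_r} divides product + 1,
    # hence the next prime is at most product + 1
    assert primes[r] <= product + 1
print("Euclid's bound holds for the first", len(primes), "primes")
```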
[1] Kohlenbach, Ulrich. Applied Proof Theory: Proof Interpretations and their Use in Mathematics. Heidelberg: Springer, 2008.
-
+1 Thanks for thoughts... – draks ... Feb 13 '14 at 11:24 | 2015-07-31T03:47:42 | {
"domain": "stackexchange.com",
"url": "http://math.stackexchange.com/questions/106824/what-are-various-proofs-good-for?answertab=oldest",
"openwebmath_score": 0.853007972240448,
"openwebmath_perplexity": 640.8261751897438,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.8652240895276224,
"lm_q2_score": 0.7549149813536518,
"lm_q1q2_score": 0.6531706274124754
} |
https://forum.allaboutcircuits.com/threads/matlab-code-to-find-all-prime-numbers-between-2-and-5000.80722/ | # Matlab code to find all prime numbers between 2 and 5000
#### Dapnc
Joined Feb 7, 2013
3
Need to write a matlab program that will find all prime numbers between 2 and 5000 but I can't use any built in matlab functions. Please help. It also has to run quickly.
#### tshuck
Joined Oct 18, 2012
3,534
Need to write a matlab program that will find all prime numbers between 2 and 5000 but I can't use any built in matlab functions. Please help. It also has to run quickly.
There are plenty of algorithms out there, what prevents you from using one of those? Or, are you wanting us to write the MATLAB code for you?
How quickly is quickly?
#### Dapnc
Joined Feb 7, 2013
3
Having trouble not using matlab built-in functions. Just need help getting started.
#### thatoneguy
Joined Feb 19, 2009
6,359
num = primes(5000)
Using that internal function, write the rest of the program, such as the desired output format and an optional input value for the number (just like the built-in function).
Once that is complete, write a prime number algorithm. Run both primes(5000) and yourfunc(5000) to compare time to run (remember to compile for max speed if you didn't know that). Once you get an algorithm whose running time is close to or less than the built-in function's AND that produces the same results, you are done.
That, combined with the algorithm link above should be all you need to get you started conceptually.
#### vortmax
Joined Oct 10, 2012
102
another good trick with matlab: many builtin functions are written in m-code, so you can see how they work by running edit <fcn>. However, the low level functions tend to be written in C, and so you can't see how they do their magic.
#### THE_RB
Joined Feb 11, 2008
5,438
Download a list of primes <=5000 from the internet, then install it in Matlab as a lookup table.
That is very easy to get a finished result and will really "run quickly".
#### tgstanfi
Joined Feb 22, 2013
1
Hi! I am having to do this same project and I was just wondering if you ever figured out a code that worked? I have been working on it for hours now and I started one but it ran forever and isn't right. I would greatly appreciate it!
#### tshuck
Joined Oct 18, 2012
3,534
Hi! I am having to do this same project and I was just wondering if you ever figured out a code that worked? I have been working on it for hours now and I started one but it ran forever and isn't right. I would greatly appreciate it!
This site is not meant for sharing homework answers, it is, however, for educational purposes. If you'd like to start your own thread and post what you came up with, we'd be glad to take a look at it for you...
#### John_2016
Joined Nov 23, 2016
55
clear all; clc; close all
N = 5e3;
L = 1:N;
tic
for k = 1:numel(L)
    L2{k} = 1:L(k);    % candidate divisors 1..k
end
prime_list = [];
for k = 1:N
    % k is prime iff mod(k,d) == 0 only for d = 1 and d = k,
    % i.e. exactly k-2 of the k remainders are nonzero
    if nnz(mod(k*ones(1,numel(L2{k})), L2{k})) == k-2
        prime_list = [prime_list k];
    end
end
toc
Elapsed time is 0.213874 seconds.
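For comparison with the trial-division loop above, the classic Sieve of Eratosthenes also avoids any built-in prime functions and scales much better. A sketch in Python (treat it as pseudocode for the algorithm if you need MATLAB):

```python
def primes_below(n):
    """All primes strictly less than n, via the Sieve of Eratosthenes."""
    if n < 3:
        return []
    is_prime = [True] * n
    is_prime[0] = is_prime[1] = False
    p = 2
    while p * p < n:                     # only sieve up to sqrt(n)
        if is_prime[p]:
            for multiple in range(p * p, n, p):
                is_prime[multiple] = False
        p += 1
    return [i for i, flag in enumerate(is_prime) if flag]

print(len(primes_below(5001)))           # count of primes <= 5000
```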
Following, a way to collect the result in text format:
prime_list_char=num2str(prime_list)
file_id=fopen('prime_list.txt','w');
fprintf(file_id,'Primes below %s :\n\n',num2str(N));
fprintf(file_id,'%s',prime_list_char);
fclose(file_id) | 2022-10-05T18:06:12 | {
"domain": "allaboutcircuits.com",
"url": "https://forum.allaboutcircuits.com/threads/matlab-code-to-find-all-prime-numbers-between-2-and-5000.80722/",
"openwebmath_score": 0.3761000633239746,
"openwebmath_perplexity": 940.73056531584,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.8652240860523328,
"lm_q2_score": 0.7549149813536518,
"lm_q1q2_score": 0.6531706247889272
} |
https://gmatclub.com/forum/a-group-of-8-friends-want-to-play-doubles-tennis-how-many-55369-20.html | It is currently 18 Nov 2017, 07:15
A group of 8 friends want to play doubles tennis. How many
Manager
Joined: 20 Jun 2012
Posts: 100
Kudos [?]: 45 [0], given: 52
Location: United States
Concentration: Finance, Operations
GMAT 1: 710 Q51 V25
Re: A group of 8 friends want to play doubles tennis. How many [#permalink]
Show Tags
21 Sep 2013, 01:15
Balvinder wrote:
A group of 8 friends want to play doubles tennis. How many different ways can the group be divided into 4 teams of 2 people?
A. 420
B. 2520
C. 168
D. 90
E. 105
well, this one was interesting but easy ..
first person = anyone = 1 way
their partner = one of the 7 remaining = 7 ways
next person = anyone = 1 way
their partner = one of the 5 remaining = 5 ways ... and so on,
giving 1 * 7 * 1 * 5 * 1 * 3 * 1 * 1 = 105
_________________
Forget Kudos ... be an altruist
Kudos [?]: 45 [0], given: 52
Intern
Joined: 13 Dec 2013
Posts: 39
Kudos [?]: 19 [0], given: 10
Schools: Fuqua (I), AGSM '16
GMAT 1: 620 Q42 V33
Show Tags
19 Apr 2014, 18:50
Bunuel wrote:
jeeteshsingh wrote:
Balvinder wrote:
A group of 8 friends want to play doubles tennis. How many different ways can the group be divided into 4 teams of 2 people?
A. 420
B. 2520
C. 168
D. 90
E. 105
8c2 x 6c2 x 4c2 = 2520 = B
We should divide this by 4! --> 2520/4!= 105, as the order of the teams does not matter.
For the first person we can pick a pair in 7 ways;
For the second one in 5 ways (as two are already chosen);
For the third one in 3 ways (as 4 people are already chosen);
For the fourth one there is only one left.
So we have 7*5*3*1=105
You can heck this: combination-groups-and-that-stuff-85707.html#p642634
There is also direct formula for this:
1. The number of ways in which $$mn$$ different items can be divided equally into $$m$$ groups, each containing $$n$$ objects and the order of the groups is not important is $$\frac{(mn)!}{(n!)^m*m!}$$.
2. The number of ways in which $$mn$$ different items can be divided equally into $$m$$ groups, each containing $$n$$ objects and the order of the groups is important is $$\frac{(mn)!}{(n!)^m}$$
I tried using the formula and got:
m = 4 groups
n = 8 people
(4*8)!/(8!)^4*4! but the result was way off.
Am I using it wrongly?
Appreciate the help.
Kudos [?]: 19 [0], given: 10
Math Expert
Joined: 02 Sep 2009
Posts: 42249
Kudos [?]: 132580 [0], given: 12326
Show Tags
20 Apr 2014, 02:54
Enael wrote:
Bunuel wrote:
jeeteshsingh wrote:
A group of 8 friends want to play doubles tennis. How many different ways can the group be divided into 4 teams of 2 people?
A. 420
B. 2520
C. 168
D. 90
E. 105
We should divide this by 4! --> 2520/4!= 105, as the order of the teams does not matter.
For the first person we can pick a pair in 7 ways;
For the second one in 5 ways (as two are already chosen);
For the third one in 3 ways (as 4 people are already chosen);
For the fourth one there is only one left.
So we have 7*5*3*1=105
You can heck this: combination-groups-and-that-stuff-85707.html#p642634
There is also direct formula for this:
1. The number of ways in which $$mn$$ different items can be divided equally into $$m$$ groups, each containing $$n$$ objects and the order of the groups is not important is $$\frac{(mn)!}{(n!)^m*m!}$$.
2. The number of ways in which $$mn$$ different items can be divided equally into $$m$$ groups, each containing $$n$$ objects and the order of the groups is important is $$\frac{(mn)!}{(n!)^m}$$
I tried using the formula and got:
m = 4 groups
n = 8 people
(4*8)!/(8!)^4*4! but the result was way off.
Am I using it wrongly?
Appreciate the help.
The number of ways in which $$mn$$ different items can be divided equally into $$m$$ groups, each containing $$n$$ objects and the order of the groups is not important is $$\frac{(mn)!}{(n!)^m*m!}$$.
How many different ways can the group be divided into 4 teams (m) of 2 people (n)?
$$\frac{(mn)!}{(n!)^m*m!}=\frac{8!}{(2!)^4*4!}=105$$.
Hope it helps.
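The count is small enough to confirm by brute force. The recursion below mirrors the 7·5·3·1 argument in the thread: the first unpaired person chooses a partner, and the rest pair up recursively (a quick Python check, not from the thread):

```python
def count_pairings(people):
    """Number of ways to split an even-sized group into unordered pairs."""
    people = tuple(people)
    if not people:
        return 1
    rest = people[1:]                    # people[0] must pair with someone in rest
    total = 0
    for partner in rest:
        remaining = tuple(p for p in rest if p != partner)
        total += count_pairings(remaining)
    return total

assert count_pairings(range(2)) == 1
assert count_pairings(range(4)) == 3
print(count_pairings(range(8)))          # 7 * 5 * 3 * 1 = 105
```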
_________________
Kudos [?]: 132580 [0], given: 12326
Intern
Joined: 28 Dec 2015
Posts: 42
Kudos [?]: 3 [0], given: 62
Re: A group of 8 friends want to play doubles tennis. How many [#permalink]
Show Tags
16 Jun 2016, 23:44
For the first team, 2 people can be chosen out of 8; for the second team, 2 out of the remaining 6; for the third team, 2 out of 4; and for the last team, 2 out of 2. Since the order of the teams doesn't matter (if the teams are T1, T2, T3 and T4, then the arrangement T1 T2 T3 T4 is the same as T4 T2 T3 T1), we divide by 4!:
8c2*6c2*4c2*2c2/4! = 105
Kudos [?]: 3 [0], given: 62
Manager
Joined: 18 Jun 2016
Posts: 105
Kudos [?]: 21 [0], given: 76
Location: India
Concentration: Technology, Entrepreneurship
GMAT 1: 700 Q49 V36
Re: A group of 8 friends want to play doubles tennis. How many [#permalink]
Show Tags
21 Jun 2016, 18:17
Balvinder wrote:
A group of 8 friends want to play doubles tennis. How many different ways can the group be divided into 4 teams of 2 people?
A. 420
B. 2520
C. 168
D. 90
E. 105
To choose 2 out of 8 people we have 8C2
There are 4 teams
hence 4*(8C2) = 112
so went with nearest answer E.
But what's wrong with above logic?
_________________
If my post was helpful, feel free to give kudos!
Kudos [?]: 21 [0], given: 76
Manager
Joined: 09 Aug 2016
Posts: 73
Kudos [?]: 7 [0], given: 8
Re: A group of 8 friends want to play doubles tennis. How many [#permalink]
Show Tags
19 Aug 2016, 14:45
If the teams were DISTINCT, i.e. {A,B}, {C,D}, {E,F}, {G,H} is NOT the same set as {E,F}, {C,D}, {A,B}, {G,H}, how would the question be worded?
Kudos [?]: 7 [0], given: 8
Manager
Joined: 16 Mar 2016
Posts: 135
Kudos [?]: 42 [0], given: 0
Location: France
GMAT 1: 660 Q47 V33
GPA: 3.25
A group of 8 friends want to play doubles tennis. How many [#permalink]
Show Tags
16 Oct 2016, 09:22
$$\frac{(8*7)}{2} * \frac{(6*5)}{2} * \frac{(4*3)}{2} * \frac{(2*1)}{2}$$
Then you divide this whole expression by 4*3*2*1 = 4!
You calculate, and find 105.
Kudos [?]: 42 [0], given: 0
Re: A group of 8 friends want to play doubles tennis. How many [#permalink] 17 Oct 2017, 04:32
| 2017-11-18T14:15:35 | {
"domain": "gmatclub.com",
"url": "https://gmatclub.com/forum/a-group-of-8-friends-want-to-play-doubles-tennis-how-many-55369-20.html",
"openwebmath_score": 0.3779344856739044,
"openwebmath_perplexity": 3799.059414311431,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES\n\n",
"lm_q1_score": 0.8652240825770432,
"lm_q2_score": 0.7549149813536518,
"lm_q1q2_score": 0.653170622165379
} |
https://tex.stackexchange.com/questions/480294/ploting-the-roots-of-a-polynomial | # Ploting the roots of a polynomial
I want to plot the roots of any given polynomial on the complex plane.
Example: Let $P(x)=x^4-x^3-1$ be given. I want to plot on the $Oxy$ complex plane all four roots of this polynomial.
I suppose Tikz might be useful tool on this case, but I don't have experience on this package.
• Welcome to TeX.se, of course tikz would be useful. However, it would be great if you can show what you have tried so far in the form of a MWE. – Raaja Mar 19 '19 at 15:25
• @Raaja I haven't done any effort since I'm new in TeX. I need this for some other purpose, so any help is more than welcomed. Regards :) – Emo Mar 19 '19 at 15:31
When you move to more technical mathematics you should use the sagetex package as it gives you access to an open source CAS called SAGE. The documentation on CTAN is here. Here is the "quick and dirty" way to get what you want.
\documentclass{article}
\usepackage{sagetex}
\begin{document}
\begin{sagesilent}
x = polygen(QQ)
f = x^4-x^3-1
root_list = f.roots(CC)
real_roots = []
for root in root_list:
    real_roots += [root[0].n(digits=3)]
P = list_plot(real_roots,color='red',size=25)
\end{sagesilent}
The roots of the polynomial $\sage{f}$ plotted in the complex plane
\begin{center}
\sageplot[scale=.8]{P}
\end{center}
The roots of $\sage{f}$ are $\sage{real_roots}$.
\end{document}
Here is the output: I don't know the intricacies of the code, I just hacked together some code by referring to this and this to figure out the code. I think x = polygen(QQ) will let you find roots of polynomials with rational coefficients and f.roots(CC) tells sage to find any complex roots. Since SAGE is a CAS those numbers could be objects such as sqrt(2) and we want to force them into decimals that can be plotted. That's accomplished by for root in root_list: real_roots += [root[0].n(digits=3)]. The actual plot is stored in a variable, P, by way of P = list_plot(real_roots,color='red',size=25) where color and size refer to the points which are initially too small to be seen easily. This is all done in sagesilent mode, which is like scrap paper that doesn't get into the document. In the LaTeX code, use \sage{} to get numbers/calculations and \sageplot{} to get the plots which are done in SAGE. Doing the plots through sage helps make the code short and since a CAS is doing the math, you can change the function (just remember you need multiplication between coefficients and variables) and SAGE will crunch out the result. You can, with a bit more coding, get the plot to be a nicer looking tikz plot, you can refer to how I did that for the Zeta function here. This will take quite a few extra lines. Notice that in my code, SAGE was also able to give you the 4 zeros by \sage{real_roots}. Having a CAS do the work prevents mistakes.
SAGE is not part of the LaTeX distribution; the best way to access it is through a free Cocalc account by clicking here.
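As a quick cross-check outside of LaTeX (my own addition, not part of either answer), the two real roots of $x^4-x^3-1$ can be bracketed and bisected in plain Python; the values agree with the polar radii 0.82 and 1.38 used in the TikZ answer below:

```python
def f(x):
    return x**4 - x**3 - 1

def bisect(lo, hi, tol=1e-12):
    # standard bisection; assumes f(lo) and f(hi) have opposite signs
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

r_neg = bisect(-1.0, 0.0)   # about -0.819
r_pos = bisect(1.0, 2.0)    # about 1.380
print(round(r_neg, 3), round(r_pos, 3))
```

The remaining two roots are a complex-conjugate pair (modulus about 0.94), which is why only two points sit on the real axis.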
• Thanks mate! :) – Emo Mar 19 '19 at 18:50
TikZ is not a computer algebra system. Of course you can compute the roots yourself and plot them using polar coordinates. (In principle you could even let TikZ solve the equations that determine the roots numerically, but this would arguably be a bit crazy.)
\documentclass[tikz,border=3.14mm]{standalone}
\usepackage{amsmath}
\DeclareMathOperator{\re}{Re}
\DeclareMathOperator{\im}{Im}
\begin{document}
\begin{tikzpicture}[scale=4]
\draw[-latex] (-1.5,0) -- (1.75,0) node[below left] {$\re z$};
\draw[-latex] (0,-1.5) -- (0,1.5) node[below right] {$\im z$};
\draw (1,0.05) -- (1,-0.05) node[below]{1};
\draw (0.05,1) -- (-0.05,1) node[left]{i};
\foreach \X/\Y in {-76.5/0.94,76.5/0.94,180/0.82,0/1.38}
{\fill (\X:\Y) circle[radius=0.75pt];} % loop body restored: mark each root at polar angle \X, radius \Y
\end{tikzpicture}
\end{document}
• This was very helpful to me. Yet, can we use any other package (polynom for example) to determine the roots, and then use them to plot with the help of TikZ? – Emo Mar 19 '19 at 15:56
• @Emin This is what I did here. I kindly asked Mathematica to tell me the phases and radii of the roots (zsols = z /. N[Solve[z^4 - z^3 - 1 == 0, z]]; Map[{Arg[#]*180/\[Pi], Abs[#]} &, zsols] // InputForm) and plotted them in the loop. – user121799 Mar 19 '19 at 16:00
Next time, you should post a minimal working example to attract more users to your post. Anyway, you are a new user, so this answer is for welcoming you to TeX.SE!
First of all, I don't think it has more than two real roots.
You can plot it quite easily with TikZ:
\documentclass[tikz]{standalone}
\usetikzlibrary{intersections}
\begin{document}
\begin{tikzpicture}[>=stealth,scale=2]
\draw[->] (0,-2.5)--(0,2.5) node[left] {$y$};
\draw[->,name path=ox] (-2.5,0)--(2.5,0) node[above]{$x$};
\draw (0,0) node[below left] {$O$};
\foreach \i in {-2,-1,1,2} {
\draw (-.05,\i)--(.05,\i);
\draw (0,\i) node[left] {$\i$};
\draw (\i,-.05)--(\i,.05);
\draw (\i,0) node[below] {$\i$};
}
\draw[red,name path=pl] plot[smooth,samples=500,domain=-1.1:1.6] (\x,{\x*\x*\x*\x-\x*\x*\x-1});
\path[name intersections={of=ox and pl,by={i1,i2}}];
\fill (i1) circle (1pt) node[above right] {$A$};
\fill (i2) circle (1pt) node[below right] {$B$};
\end{tikzpicture}
\end{document}
Now, when you have the intersections, you can have their coordinates:
\documentclass[tikz]{standalone}
\usetikzlibrary{intersections}
\newdimen\xa
\newdimen\xb
\newdimen\ya
\newdimen\yb
\makeatletter
\def\convertto#1#2{\strip@pt\dimexpr #2*65536/\number\dimexpr 1#1}
\makeatother
% https://tex.stackexchange.com/a/239496/156344
\begin{document}
\begin{tikzpicture}[>=stealth,scale=2]
\draw[->] (0,-2.5)--(0,2.5) node[left] {$y$};
\draw[->,name path=ox] (-2.5,0)--(2.5,0) node[above]{$x$};
\draw (0,0) node[below left] {$O$};
\foreach \i in {-2,-1,1,2} {
\draw (-.05,\i)--(.05,\i);
\draw (0,\i) node[left] {$\i$};
\draw (\i,-.05)--(\i,.05);
\draw (\i,0) node[below] {$\i$};
}
\draw[red,name path=pl] plot[smooth,samples=500,domain=-1.1:1.6] (\x,{\x*\x*\x*\x-\x*\x*\x-1});
\path[name intersections={of=ox and pl,by={i1,i2}}];
\fill (i1) circle (1pt) node[above right] {$A$};
\path (i1); \pgfgetlastxy{\xa}{\ya}
\fill (i2) circle (1pt) node[below right] {$B$};
\path (i2); \pgfgetlastxy{\xb}{\yb}
\draw (0,-3) node[text width=10cm,align=left] {%
There are two roots:\\
$A$ at $({\convertto{cm}{\xa}*2}, 0)$ and $B$ at $({\convertto{cm}{\xb}*2}, 0)$.};
\end{tikzpicture}
\end{document}
Of course you can always use \xa and \xb anywhere you want ;-)
As marmot said, TikZ is not a calculator. It can only help us find the real roots using intersections. And I don't think it is easy to do so with any LaTeX tools other than finding the roots yourself.
• Thanks a lot for the effort, but I'm asking for something else. I'm not interested to the graph of the polynomial but only the (real and complex) roots of the polynomial. I suppose first there should be a code for finding the roots of any given polynomial and then plotting those points to the complex plane. – Emo Mar 19 '19 at 15:37
• @Emin I edited my answer. – user156344 Mar 19 '19 at 15:43 | 2020-01-20T21:20:06 | {
"domain": "stackexchange.com",
"url": "https://tex.stackexchange.com/questions/480294/ploting-the-roots-of-a-polynomial",
"openwebmath_score": 0.7632899880409241,
"openwebmath_perplexity": 1154.7337699248374,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES\n\n",
"lm_q1_score": 0.8652240825770432,
"lm_q2_score": 0.7549149813536518,
"lm_q1q2_score": 0.653170622165379
} |
https://greprepclub.com/forum/if-x-0-and-two-sides-of-a-certain-triangle-2579.html |
# If x>0, and two sides of a certain triangle
Intern
Joined: 03 Jun 2016
Posts: 37
Followers: 0
Kudos [?]: 29 [0], given: 4
If x>0, and two sides of a certain triangle [#permalink] 30 Aug 2016, 02:20
Question Stats: 29% (01:38) correct, 70% (01:16) wrong, based on 75 sessions
If x>0, and two sides of a certain triangle have lengths 2x+1 and 3x+4 respectively, which of the following could be the length of the third side of the triangle?
Indicate all possible lengths.
A) 4x+5
B) x+2
C) 6x+1
D) 5x+6
E) 2x+17
Retired Moderator
Joined: 07 Jun 2014
Posts: 4804
GRE 1: Q167 V156
WE: Business Development (Energy and Utilities)
Followers: 163
Kudos [?]: 2776 [1] , given: 394
Re: If x>0, and two sides of a certain triangle [#permalink] 30 Aug 2016, 04:35
1
KUDOS
Expert's post
Hi,
I think B should be one of the solutions too
Let the triangle have side named A, B and C
A= 2x+1 and B= 3x+4
A+B= 5x+5
Now we know from triangle inequality
C < A +B
Option A
4x+5 < 5x+5
=>x > 0
Option B
x +2 < 5x+5
=> -4x < 3 => x > $$\frac{-3}{4}$$.
This is also possible: as long as x > 0, we automatically have x > $$\frac{-3}{4}$$.
Option C
6x+1 < 5x+5
x < 4
Also a possible solution exits such that 0 < x < 4.
Option D is not possible.
Option E is also similarly possible.
Intern
Joined: 03 Jun 2016
Posts: 37
Followers: 0
Kudos [?]: 29 [1] , given: 4
Re: If x>0, and two sides of a certain triangle [#permalink] 30 Aug 2016, 05:10
1
KUDOS
Let the third side be y
then (3x + 4) – (2x + 1) < y < (3x + 4) + (2x + 1)
=> x + 3 < y < 5x + 5
As with option B, x+2 is always less than x+3, therefore B is not possible.
And with option E, if we choose x=5, then 2x+17= 27
=> from inequality, 8<27<30
Hope, the solution is clear!
Intern
Joined: 20 Sep 2018
Posts: 14
Followers: 0
Kudos [?]: 1 [0], given: 0
Re: If x>0, and two sides of a certain triangle [#permalink] 11 Nov 2018, 08:19
Hey, I got the answer as A, C, E. I took x as 1, 3 and 8... But my workbook still shows the answer as incorrect. Can somebody help me with this question?
Supreme Moderator
Joined: 01 Nov 2017
Posts: 371
Followers: 10
Kudos [?]: 166 [0], given: 4
Re: If x>0, and two sides of a certain triangle [#permalink] 11 Nov 2018, 09:24
Expert's post
Reetika1990 wrote:
Hey I got the answer as A, C ,E..I took x as 1,3 and 8... But my workbook still shows the answer as incorrect. Can somebody help me with this question.
The answer is correct as A, C and E.
the equations can be formed in following way..
(I) the third side is less than the sum of the other two sides...
so third side < (2x+1) + (3x+4) or third side < 5x+5
(II) the third side is greater than the difference of the other two sides...
so third side > |(2x+1) - (3x+4)| or > |x+3|
Therefore, equation becomes $$x+3 < third side < 5x+5$$
Intern
Joined: 12 Sep 2019
Posts: 2
Followers: 0
Kudos [?]: 0 [0], given: 0
Re: If x>0, and two sides of a certain triangle [#permalink] 12 Sep 2019, 11:49
Hi All ,
While 4x+5, 6x+1, and 2x+17 are mentioned as correct answers for x>0, there might be a challenge:
x tending to 0, e.g. 0.01
2x+1 = 1.02
3x+4 = 4.03
here 6x+1 as third side will become 1.06 thus three sides as 1.02,1.06, 4.03 which is not possible
here 2x+17 = 17.02 which again is not possible
Again if x is big as x=1000
2x+1 =2001
3x+4 =3004
6x+1 becomes 6001
Intern
Joined: 09 Aug 2018
Posts: 5
Followers: 0
Kudos [?]: 3 [1] , given: 8
Re: If x>0, and two sides of a certain triangle [#permalink] 15 Oct 2019, 06:57
1
KUDOS
x+3<3rd side length<5x+5
let's say x=1000
1003<3rd side<5005
option E: 2(1000)+17 = 2017, which falls right into the region. N.B. if you consider small values for x, e.g. 2/3, then option E is invalid. But the question asks which of the following *could be* the length, not which of the following *must be*. You have to consider every possible case.
Intern
Joined: 19 May 2020
Posts: 6
Followers: 0
Kudos [?]: 5 [2] , given: 1
Re: If x>0, and two sides of a certain triangle [#permalink] 04 Jun 2020, 18:31
2
KUDOS
phoenixio wrote:
If x>0, and two sides of a certain triangle have lengths 2x+1 and 3x+4 respectively, which of the following could be the length of the third side of the triangle?
Indicate all possible lengths.
A) 4x+5
B) x+2
C) 6x+1
D) 5x+6
E) 2x+17
solution:
Of course 3x+4 > 2x+1, so the third side of the triangle satisfies (3x+4)-(2x+1) < third_side < (3x+4)+(2x+1),
which gives x+3 < third_side < 5x+5.
note: the question asked which "could be" the length of the third side.
For x>0:
A) x+3 < 4x+5 < 5x+5, which is true for all x>0.
B) x+3 < x+2 < 5x+5, never true.
C) x+3 < 6x+1 < 5x+5, not true for all values of x, but true for 2/5 < x < 4. So, could be true.
D) x+3 < 5x+6 < 5x+5, never true.
E) x+3 < 2x+17 < 5x+5, not true for all values of x, but true for x > 4. So, could be true.
A, C, E
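The "could be" logic above can also be brute-force checked: for each candidate, search for some x > 0 where all three strict triangle inequalities hold. A sketch using exact rational sample points (my own illustration; the sample grid 0.1, 0.2, …, 19.9 is an assumption that happens to cover all the witness values needed):

```python
from fractions import Fraction

candidates = {
    "A": lambda x: 4*x + 5,
    "B": lambda x: x + 2,
    "C": lambda x: 6*x + 1,
    "D": lambda x: 5*x + 6,
    "E": lambda x: 2*x + 17,
}

def can_be_third(expr):
    # look for some x > 0 where all three strict triangle inequalities hold
    for k in range(1, 200):          # x = 0.1, 0.2, ..., 19.9
        x = Fraction(k, 10)
        a, b, c = 2*x + 1, 3*x + 4, expr(x)
        if a + b > c and a + c > b and b + c > a:
            return True
    return False

possible = [name for name, expr in candidates.items() if can_be_third(expr)]
print(possible)  # ['A', 'C', 'E']
```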
| 2020-07-14T18:35:19 | {
"domain": "greprepclub.com",
"url": "https://greprepclub.com/forum/if-x-0-and-two-sides-of-a-certain-triangle-2579.html",
"openwebmath_score": 0.6305027008056641,
"openwebmath_perplexity": 5165.861746325491,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.8652240756264639,
"lm_q2_score": 0.7549149868676283,
"lm_q1q2_score": 0.6531706216891078
} |
https://stats.stackexchange.com/questions/231285/dropping-one-of-the-columns-when-using-one-hot-encoding | # Dropping one of the columns when using one-hot encoding
My understanding is that in machine learning it can be a problem if your dataset has highly correlated features, as they effectively encode the same information.
Recently someone pointed out that when you do one-hot encoding on a categorical variable you end up with correlated features, so you should drop one of them as a "reference".
For example, encoding gender as two variables, is_male and is_female, produces two features which are perfectly negatively correlated, so they suggested just using one of them, effectively setting the baseline to say male, and then seeing if the is_female column is important in the predictive algorithm.
That made sense to me but I haven't found anything online to suggest this may be the case, so is this wrong or am I missing something?
Possible (unanswered) duplicate: Does collinearity of one-hot encoded features matter for SVM and LogReg?
• you end up with correlated features, so you should drop one of them as a "reference" Dummy variables or indicator variables (these are the two names used in statistics, synonymic to "one-hot encoding" in machine learning) are correlated pairwisely anyway, be they all k or k-1 variables. So, the better word is "statistically/informationally redundant" instead of "correlated". – ttnphns Aug 23 '16 at 14:09
• The set of all k dummies is the multicollinear set because if you know values of k-1 dummies in the data you automatically know the values of that last one dummy. Some data analysis methods or algorithms require that you drop one of the k. Other are able to cope with all k. – ttnphns Aug 23 '16 at 14:09
• @ttnphns: thanks, that makes sense. Does keeping all k values theoretically make them weaker features that could/should be eliminated with dimensionality reduction? One of the arguments for using something like PCA is often to remove correlated/redundant features, I'm wondering if keeping all k variables falls in that category. – dasboth Aug 23 '16 at 14:24
• Does keeping all k values theoretically make them weaker features. No (though I'm not 100% sure what you mean by "weaker"). using something like PCA Note, just in case, that PCA on a set of dummies representing one same categorical variable has little practical point because the correlations inside the set of dummies reflect merely the relationships among the category frequencies (so if all frequencies are equal all the correlations are equal to 1/(k-1)). – ttnphns Aug 23 '16 at 15:21
• What I mean is when you use your model to evaluate feature importance (e.g. with a random forest) will it underestimate the importance of that variable if you include all k values? As in, do you get a "truer" estimate of the importance of gender if you're only using an is_male variable as opposed to both options? Maybe that doesn't make sense in this context, and it might only be an issue when you have two different variables actually encoding the same information (e.g. height in inches and height in cm). – dasboth Aug 23 '16 at 15:31
This depends on the models (and maybe even software) you want to use. With linear regression, or generalized linear models estimated by maximum likelihood (or least squares) (in R this means using functions lm or glm), you need to leave out one column. Otherwise you will get a message about some columns "left out because of singularities"$^\dagger$.
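The "singularities" message can be seen directly: with an intercept plus all k dummies the design matrix is rank-deficient, while dropping one dummy restores full column rank. A minimal pure-Python sketch (exact arithmetic via Fraction; the gender data is made up for illustration):

```python
from fractions import Fraction

def rank(rows):
    # Gaussian elimination over exact rationals
    m = [[Fraction(v) for v in row] for row in rows]
    r = 0
    for col in range(len(m[0])):
        piv = next((i for i in range(r, len(m)) if m[i][col] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(len(m)):
            if i != r and m[i][col] != 0:
                f = m[i][col] / m[r][col]
                m[i] = [a - f*b for a, b in zip(m[i], m[r])]
        r += 1
    return r

gender = [0, 1, 0, 1, 1, 0]                # 0 = male, 1 = female
X_full = [[1, 1-g, g] for g in gender]     # intercept, is_male, is_female
X_drop = [[1, g] for g in gender]          # intercept, is_female only

print(rank(X_full))  # 2: is_male + is_female equals the intercept column
print(rank(X_drop))  # 2: full column rank, no singularity
```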
But if you estimate such models with regularization, for example ridge, lasso or the elastic net, then you should not leave out any columns. The regularization takes care of the singularities, and more importantly, the prediction obtained may depend on which columns you leave out. That will not happen when you do not use regularization$^\ddagger$.
$^\dagger$ But, using factor variables, R will take care of that for you.
$^\ddagger$ Trying to answer extra question in comment: When using regularization, most often iterative methods are used (as with lasso or elasticnet) which do not need matrix inversion, so the fact that the design matrix does not have full rank is not a problem. With ridge regularization, matrix inversion may be used, but in that case the regularization term added to the matrix before inversion makes it invertible. That is a technical reason; a more profound reason is that removing one column changes the optimization problem, it changes the meaning of the parameters, and it will actually lead to different optimal solutions. As a concrete example, say you have a categorical variable with three levels, 1, 2 and 3. The corresponding parameters are $\beta_1, \beta_2, \beta_3$. Leaving out column 1 leads to $\beta_1=0$, while the other two parameters change meaning to $\beta_2-\beta_1, \beta_3-\beta_1$. So those two differences will be shrunk. If you leave out another column, other contrasts in the original parameters will be shrunk. So this changes the criterion function being optimized, and there is no reason to expect equivalent solutions! If this is not clear enough, I can add a simulated example (but not today). | 2019-06-16T23:17:57 | {
"domain": "stackexchange.com",
"url": "https://stats.stackexchange.com/questions/231285/dropping-one-of-the-columns-when-using-one-hot-encoding",
"openwebmath_score": 0.5772078037261963,
"openwebmath_perplexity": 678.1498805032202,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.8652240756264639,
"lm_q2_score": 0.7549149868676283,
"lm_q1q2_score": 0.6531706216891078
} |
https://tex.stackexchange.com/questions/227181/formulas-should-start-at-the-same-position | # Formulas should start at the same position
I am a very beginner of LaTeX and I do not have any kind of experience with it.
How can I make formulas start at the same position such that they are aligned exactly underneath each other.
\documentclass[oneside,12pt]{scrartcl}
\usepackage[ngerman]{babel}
\usepackage[fixamsmath,disallowspaces]{mathtools}
\begin{document}
\underline{Verschiebare Ger{\"a}te}
\begin{equation*}
P_j(t,s_j) =
\begin{cases}
0 & \text{f"ur } t < s_j \ Q_j(t-s_j) & \text{f"ur } s_j \leq t \leq s_j + p_j \ 0 & \text{f"ur } t \textgreater s_j + p_j
\end{cases}
mit s_j = r_j + \Delta t \\ \Delta t \leq tDoF
\end{equation*}
\end{document}
My problem refers to the last two formulas. I want the \Delta t to start exactly underneath the s_j, so the equality signs of those two formulas should start at the same position, but underneath each other.
• Perhaps the documentclass option fleqn is what you are searching for? – Juri Robl Feb 8 '15 at 12:40
• Hi and welcome, instead of shooting into the blue, a bit of scientific preparation might be a good idea. If you want to know more about typesetting maths, please have a look at Mathmode. – Johannes_B Feb 8 '15 at 12:43
• Welcome to TeX.SX! You can have a look at our starter guide to familiarize yourself further with our format. A suggestion: Do us a favour and change your username to something more telling than "user1234". – Martin Schröder Feb 8 '15 at 15:40
• You might be interested in TeXwelt.de, agerman site with the same Question and Anwer format and goLaTeX, a classical german discussion forum. – Johannes_B Feb 8 '15 at 15:51
• Remember to accept one of the answers if you find it useful. – Svend Tveskæg Feb 8 '15 at 17:22
I'm not sure I've guessed what you want but here is a try:
\documentclass{scrartcl}
\usepackage[showframe]{geometry} % used to show page width
\usepackage{mathtools}
\begin{document}
\begin{equation*}
P_{j}(t, s_{j}) =
\begin{cases}
0 & \text{f{\"u}r $t < s_{j}$,}\\
Q_{j}(t - s_{j}) & \text{f{\"u}r $s_{j} \leq t \leq s_{j} + p_{j}$,}\\
0 & \text{f{\"u}r $t > s_{j} + p_{j}$,}
\end{cases}
\end{equation*}
mit $s_{j} = r_{j} + \Delta t$ f{\"u}r $\Delta t \leq tDoF$.
\end{document}
You can use the align environment to align math at specific parts:
\documentclass[oneside,12pt]{scrartcl}
\usepackage[ngerman]{babel}
\usepackage[fixamsmath,disallowspaces]{mathtools}
\begin{document}
\begin{align*}
P_j(t,s_j) &= \begin{cases}
0 & \text{f"ur } t < s_j \\
Q_j(t-s_j) & \text{f"ur } s_j \leq t \leq s_j + p_j \\
0 & \text{f"ur } t \textgreater s_j + p_j
\end{cases}\\
%\intertext{mit}\\
\text{mit }&s_j = r_j + \Delta t \\
&\Delta t \leq tDoF
\end{align*}
\end{document}
To add text on it's own line you can use intertext or if you want it inline use text as you already did.
Or if you want the equation symbols directly underneath each other:
\text{mit }s_j &= r_j + \Delta t \\
\Delta t &\leq tDoF
You can use the align environment. Be careful to use \\ to break lines.
\documentclass[oneside,12pt]{scrartcl}
\usepackage[ngerman]{babel}
\usepackage{amsmath}
\usepackage[fixamsmath,disallowspaces]{mathtools}
\begin{document}
\underline{Verschiebare Ger{\"a}te}
\begin{align*}
P_j(t,s_j) &= \begin{cases} 0 & \text{f"ur } t < s_j \\
Q_j(t-s_j) & \text{f"ur } s_j \leq t \leq s_j + p_j \\
0 & \text{f"ur } t \textgreater s_j + p_j \end{cases} \\
\text{mit } s_j &= r_j + \Delta t \\
\Delta t &\leq tDoF
\end{align*}
\end{document} | 2019-09-15T14:07:46 | {
"domain": "stackexchange.com",
"url": "https://tex.stackexchange.com/questions/227181/formulas-should-start-at-the-same-position",
"openwebmath_score": 0.9984839558601379,
"openwebmath_perplexity": 2638.2002902258305,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.8652240791017536,
"lm_q2_score": 0.7549149813536518,
"lm_q1q2_score": 0.6531706195418309
} |
https://mathematica.stackexchange.com/questions/95473/fill-a-circle-in-an-image | # Fill a Circle in an Image
Let's suppose I have something like this (the image resulting from this):
ColorNegate[Rasterize[Graphics[Circle[{100, 100}, 50]]]]
Now, what I want to do is fill the inside of the circle (in the image, not transforming Circle->Disk :) ) with White (the color of the circle).
Ideas?
• Binarize@FillingTransform@ ColorNegate[Rasterize[Graphics[Circle[{100, 100}, 50]]]] works for your specific case, though without the binarize the color is different to the edge – dr.blochwave Sep 25 '15 at 14:54
• Also see here: mathematica.stackexchange.com/questions/7781/… – dr.blochwave Sep 25 '15 at 14:58
• Oh... Of course it's FillingTransform. I didn't know about the Binarize. Thank you. If you want to create an answer, and explain maybe better the Binarize part, I'll accept it. – mgm Sep 25 '15 at 14:59
• there you are, hopefully a better explanation of the binarization... – dr.blochwave Sep 25 '15 at 15:10
FillingTransform is what you're after:
img = FillingTransform@ColorNegate[Rasterize[Graphics[Circle[{100, 100}, 50]]]]
But this gives a gray fill because your image wasn't binary to begin with. Easy to fix, e.g. with a subsequent Binarize:
Binarize@img
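For intuition, FillingTransform essentially fills any background region that is not reachable from the image border. A rough pure-Python analogue on a toy binary "circle" (my own sketch of the idea, not Wolfram's implementation):

```python
from collections import deque

# tiny binary "image": 1 = white circle outline, 0 = black background
img = [[0]*7 for _ in range(7)]
for r, c in [(1,2),(1,3),(1,4),(2,1),(2,5),(3,1),(3,5),(4,1),(4,5),(5,2),(5,3),(5,4)]:
    img[r][c] = 1

# flood-fill the background from the border (4-connectivity);
# anything not reached and not outline is interior -> set to white
h, w = 7, 7
outside = [[False]*w for _ in range(h)]
q = deque((r, c) for r in range(h) for c in range(w)
          if (r in (0, h-1) or c in (0, w-1)) and img[r][c] == 0)
for r, c in q:
    outside[r][c] = True
while q:
    r, c = q.popleft()
    for dr, dc in ((1,0),(-1,0),(0,1),(0,-1)):
        nr, nc = r+dr, c+dc
        if 0 <= nr < h and 0 <= nc < w and img[nr][nc] == 0 and not outside[nr][nc]:
            outside[nr][nc] = True
            q.append((nr, nc))
filled = [[1 if img[r][c] or not outside[r][c] else 0 for c in range(w)] for r in range(h)]
print(filled[3][3])  # 1: the interior is now white
```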
Alternatively, ColorReplace[] might provide a more general solution.
ColorReplace[img, Gray -> White] | 2021-06-21T14:22:35 | {
"domain": "stackexchange.com",
"url": "https://mathematica.stackexchange.com/questions/95473/fill-a-circle-in-an-image",
"openwebmath_score": 0.40425586700439453,
"openwebmath_perplexity": 2040.263267662341,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.8652240791017535,
"lm_q2_score": 0.7549149813536518,
"lm_q1q2_score": 0.6531706195418308
} |
http://www.onemathematicalcat.org/Math/Geometry_obj/GeoGebra_tutorial/GeoGebra_basics.htm | GEOGEBRA TUTORIAL: GEOGEBRA BASICS
This tutorial firms up many of the concepts introduced in GeoGebra Worksheet: Triangles,
and also introduces some new features of GeoGebra.
This tutorial will be most helpful if you're trying everything in GeoGebra while you're reading.
Here's one way to do this:
• click on APPLET START; this gives you a fully-functioning GeoGebra window, but doesn't install anything on your computer
Note: Things tend to change a bit over time.
Some of the information in this lesson may look slightly different on your own version of GeoGebra.
However, everything should be similar enough for you to recognize what to do.
CHECKED and UNCHECKED VIEW SUBMENU ITEMS
In the View submenu, a checked item means that it is visible.
These work as a toggle; when you click on a checked item, it becomes unchecked.
When you click on an unchecked item, it becomes checked.
Here are some examples:
• Axes visible; no Grid
• Grid visible; no Axes
• Axes visible; Grid visible
• no Axes; no Grid
GETTING A DROP-DOWN MENU FROM THE TOOLBAR
To get a drop-down menu from the toolbar, you must click the small arrow in the bottom right corner of a tool.
If you click anywhere else, you'll just make the tool active instead of getting the submenu.
RENAMING AN OBJECT
To rename an object, right-click on the object and select ‘Rename’.
GeoGebra won't allow two different objects to have the same name:
it automatically renames, if needed, to prevent this from happening.
FREE versus DEPENDENT OBJECTS
Free objects are ones that can be freely moved with the MOVE tool.
Dependent objects depend on something else.
Here's a simple example to illustrate the difference.
To set things up:
• use the POINT tool to create two points, $\,A\,$ and $\,B\,$
• in the POINT drop-down menu, select ‘Midpoint or Center’
• click on $\,A\,$, then click on $\,B\,$; the midpoint will appear, which GeoGebra automatically labels $\,C\,$
• rename $\,C\,$ as MIDPOINT (see instructions above)
You should now see what appears below: points $\,A\,$ and $\,B\,$ are free; MIDPOINT was constructed as the midpoint of $\,A\,$ and $\,B\,$.
With the MOVE tool, you can move $\,A\,$ and $\,B\,$ wherever you please. (Try it!)
If you try to move MIDPOINT, it won't budge. The only way to control MIDPOINT is through its ‘parents’ $\,A\,$ and $\,B\,$.
(By the way, MIDPOINT is often called a ‘child’ of $\,A\,$ and $\,B\,$.)
If you hover over a dependent object, then you can see its dependency:
TRACING AN OBJECT
If you right-click on an object and check ‘Trace On’, then you can see a ‘trace’ of its movement in the geometry window.
Below, both $\,A\,$ and MIDPOINT are being traced.
(Point $\,A\,$ was moved with the MOVE tool; MIDPOINT followed accordingly.)
To make a trace disappear, uncheck ‘Trace on’.
SHOWING AND HIDING OBJECTS and LABELS
Right-click on an object and uncheck ‘Show Object’ to make it disappear; its label also disappears.
Such an object is called a hidden object.
Notice that the bullet next to a hidden object is hollow; the bullet next to a visible object is filled-in (solid).
If desired, you can uncheck ‘Show Label’ to make only a label disappear:
UNDO/REDO
GeoGebra has a wonderful ‘undo’ feature.
If you ever goof anything up, just select Edit-Undo from the menu.
(There might also be an undo/redo shortcut at the right of the toolbar.)
You can undo multiple items.
You can redo anything that you undo.
GETTING A NEW GEOGEBRA WINDOW
The easiest way to get a new GeoGebra window is to select File-New .
You are given the option of saving the current file.
LOWERCASE GREEK LETTERS FOR ANGLES
GeoGebra uses lowercase Greek letters to label angles.
For now, you should know the first three lowercase Greek letters: alpha ($\,\alpha\,$), beta ($\,\beta\,$), and gamma ($\,\gamma\,$).
Sometimes, labels get positioned in an ugly way.
You can use the MOVE tool to adjust the position of the labels.
(Two screenshots compare the result: one with ugly labeling, one with better labeling.)
DISPLAYING TEXT IN THE GEOMETRY WINDOW
You can display text in the Geometry Window using the ‘Insert Text’ tool, which is in the drop-down menu of the SLIDER tool.
‘Insert Text’ tool:
After you select a tool, instructions on its proper use appear to the right of the toolbar:
Always look in this space to see how to use a tool!
For the ‘Insert Text’ tool, click anywhere in the drawing pad, and a text box appears:
STATIC versus DYNAMIC TEXT
Static text is text that just sits there—it is static—it doesn't change.
Just type static text in the text box and click the button to accept.
Use the MOVE tool to put the text where you want it.
Dynamic text is text that changes.
It's much more exciting and useful!
Quotation marks (",") are very important when you're creating dynamic text.
Characters inside quotation marks are displayed exactly as they appear.
Outside the quotation marks, a ‘+’ sign is used to put things together.
When you type a GeoGebra object outside the quotation marks, its current value will be displayed.
PRACTICE WITH DYNAMIC TEXT
Try to duplicate each of the examples below:
In this first example, create a single point, $\,A\,$.
In the ‘Insert Text’ box, type: "A =" + A
Type "A =" + A in the text box ... As you move $\,A\,$ around, the coordinates will change.
In this second example, create a line segment from $\,A\,$ to $\,B\,$.
Type this in the text box ... Move either endpoint and watch the length change!
In this third example, use the CIRCLE tool to create a circle with center $\,A\,$ and equation $\,c\,$.
Then, follow the directions illustrated below:
Type this in the text box ... Move $\,A\,$ or $\,B\,$ and watch the dynamic text change!
CHANGING THE PROPERTIES OF DISPLAYED TEXT
You can change the properties of displayed text by right-clicking on the text box and selecting ‘Properties ...’ .
Changing the font, size, and color are all routine.
You see the changes immediately in the Geometry Window.
Two of the tabs, however, deserve some discussion: ‘Position’ and ‘Advanced’.
MAKING TEXT FOLLOW A SPECIFIC OBJECT: the ‘POSITION’ TAB
You can set up text to follow (say) a point around the screen.
Get the ‘Properties ...’ box for the desired text.
Click on the ‘Position’ tab.
Type the name of the point you want the text to follow in the ‘Starting Point’ box:
Set up your text to follow point $\,A\,$ ... ... and as you move $\,A\,$, the text follows!
DISPLAYING THE COORDINATES OF A POINT
GeoGebra uses x(A) and y(A) to denote the $\,x$-value and $\,y$-value of a point $\,A\,$.
SHOWING AN OBJECT UNDER SPECIFIED CONDITIONS
Suppose you only want to show an object (like text) when a certain condition is met.
You can use the ‘Advanced’ tab of the ‘Properties...’ box to do this!
For example, suppose you only want to show text when the $\,x$-value of a point is greater than zero.
Get the ‘Properties ...’ box for the desired text.
Set up the desired condition. In the first screenshot, the $\,x$-value is $\,1.2\,$, which is greater than zero; thus, the text is showing. In the second, the $\,x$-value is $\,-0.18\,$, which is not greater than zero; thus, the text is not showing.
EVERY GEOGEBRA OBJECT HAS ITS OWN PROPERTIES BOX
Indeed, every GeoGebra object has its own ‘Properties ...’ box.
This ‘Properties ...’ box gives you some special labeling features:
you can show only the NAME of an object; both the NAME and VALUE; or only the VALUE.
Set up point $\,A\,$ to show both name and value, and here's what you'll see.
Master the ideas from this section | 2015-07-28T17:42:30 | {
"domain": "onemathematicalcat.org",
"url": "http://www.onemathematicalcat.org/Math/Geometry_obj/GeoGebra_tutorial/GeoGebra_basics.htm",
"openwebmath_score": 0.49724870920181274,
"openwebmath_perplexity": 3191.7980340230083,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.8652240791017535,
"lm_q2_score": 0.7549149813536518,
"lm_q1q2_score": 0.6531706195418308
} |
https://ellejae.com/archive/2caa12-show-that-satisfies-the-differential-equation-with-initial-condition Differential Equation Practice Problems With Solutions
In calculus, an initial condition is the starting value used to single out the particular solution of a differential equation. The general solution contains an arbitrary constant of integration; the initial condition is what determines that constant.
Example 1. Find the solution of the differential equation $y'=-(x+1)y$ that satisfies the initial condition $y(-2)=1$.
Separating variables (for $y\neq 0$): $(x+1)\,dx=-\frac{dy}{y}$. Integrating both sides gives $\frac{x^2}{2}+x+C=-\ln(y)$ for $y>0$, so $y=Ae^{-\frac{x^2}{2}-x}$. A constant of integration has been introduced, and that is why we have the initial condition $y(-2)=1$ to determine this constant: $1=Ae^{-\frac{(-2)^2}{2}-(-2)}=Ae^{-2+2}=Ae^{0}\Rightarrow A=1$. Hence we get $y=e^{-\frac{x^2}{2}-x}$, $y>0$.
Example 2. Solve the initial value problem $\frac{dy}{dx}=10-x$, $y(0)=2$.
Step 1: Rewrite the equation, using algebra, to make integration possible (essentially you're just moving the $dx$): $dy=(10-x)\,dx$. Integrating gives $y=10x-\frac{x^2}{2}+C$. The initial condition says that when $x$ is $0$, $y=2$, so $2=10(0)-\frac{0^2}{2}+C$, giving $C=2$. Therefore the function that satisfies this differential equation with the initial condition $y(0)=2$ is $y=10x-\frac{x^2}{2}+2$.
Example 3. Find the particular solution of $f'(s)=14s-12s^3$ satisfying the given initial condition. Integrating term by term, $f(s)=\int(14s-12s^3)\,ds=\frac{14s^{1+1}}{1+1}-\frac{12s^{3+1}}{3+1}+C=7s^2-3s^4+C$; the stated condition fixes $C=185$, so the particular solution is $f(s)=7s^2-3s^4+185$.
Example 4. Find the particular solution of $f'(x)=8x$ with $f(0)=7$. Integrating, $f(x)=4x^2+C$; the condition gives $7=4(0)^2+C$, so $C=7$ and $f(x)=4x^2+7$.
Further practice: Find the solution of the differential equation that satisfies the given initial condition: $\frac{dL}{dt}=kL^2\ln t$, $L(1)=-1$. Show that $A(t)=300-250e^{0.2-0.02t}$ satisfies the differential equation $\frac{dA}{dt}=6-0.02A$ with initial condition $A(10)=50$. Solve the initial value problem $\frac{dy}{dx}=9x^2-4x+5$, $y(-1)=0$.
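The solution $y=e^{-x^2/2-x}$ with $y(-2)=1$ discussed above can also be checked numerically (a quick sketch, not part of the original page; it verifies both the ODE $y'=-(x+1)y$ and the initial condition):

```python
import math

def y(x):
    # candidate solution y = exp(-x^2/2 - x)
    return math.exp(-x * x / 2 - x)

def rhs(x):
    # right-hand side of the ODE y' = -(x + 1) * y
    return -(x + 1) * y(x)

# initial condition: y(-2) = 1
assert abs(y(-2) - 1.0) < 1e-12

# a central-difference derivative should match the ODE at several points
h = 1e-6
for x in (-2.0, -1.0, 0.0, 0.5):
    dydx = (y(x + h) - y(x - h)) / (2 * h)
    assert abs(dydx - rhs(x)) < 1e-5
print("ODE and initial condition verified")
```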
| 2022-10-02T05:57:44 | {
"domain": "ellejae.com",
"url": "https://ellejae.com/archive/2caa12-show-that-satisfies-the-differential-equation-with-initial-condition",
"openwebmath_score": 0.6854680180549622,
"openwebmath_perplexity": 1036.3032906210562,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES\n\n",
"lm_q1_score": 0.8652240825770432,
"lm_q2_score": 0.7549149758396752,
"lm_q1q2_score": 0.6531706173945536
} |
https://tex.stackexchange.com/questions/131125/better-way-to-display-long-division | # Better way to display long division?
I'm currently in the process of trying to create a worksheet for my students with long division problems for them to practice. Unfortunately, the best I've been able to come up with so far in terms of displaying long division like how they write it is:
Which could work if need be, but I thought I'd see if anyone has tooled around with this and come up with something better. To create that, all I did was type:
$\overline{)12345}$
Any suggestions for ways of making that better (so it looks more like what you'd see when using \longdiv) would be awesome.
• I'm not somewhere I can check, but if I remember correctly, kicking up the parenthesis by one size improves the appearance. a solution was published in tugboat years ago. Sep 2, 2013 at 0:07
• To add to Barbara's comment: here is what I did: \newcommand{\longdiv}[2]{#1\ \overline{\smash{\Big)}\ #2}} and it closed the gap. Sep 2, 2014 at 21:24
• Using \overline generally does not produce pretty results with shorter characters. This applies to many of the answers given below. Jan 6, 2017 at 21:27
You can give a definition of a command inspired by the one used in longdiv.sty; something along these lines:
\documentclass{article}
\newcommand\Mydiv[2]{%
$\strut#1$\kern.25em\smash{\raise.3ex\hbox{$\big)$}}$\mkern-8mu \overline{\enspace\strut#2}$}
\begin{document}
\end{document}
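A compile-ready sketch of how \Mydiv might be invoked (the operands 12 and 345 are illustrative, not from the original answer):

```latex
\documentclass{article}
\newcommand\Mydiv[2]{%
  $\strut#1$\kern.25em\smash{\raise.3ex\hbox{$\big)$}}$\mkern-8mu
  \overline{\enspace\strut#2}$}
\begin{document}
\Mydiv{12}{345}
\end{document}
```

This typesets 12 to the left of the raised parenthesis, with 345 sitting under the vinculum.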
• @Gonzalo_Medina Thanks for the answer, how would I write numbers on top of the line? Dec 30, 2014 at 16:50
\end{document}
• It would be nice if you make your example compilable, so people can just copy/paste to try it. May 20, 2019 at 7:30
• Good call. All fixed. May 20, 2019 at 7:42
• Thanks :-) Note that \scalebox is provided by graphicx, so you don't need all of TikZ to use it. I edited it for you. May 20, 2019 at 7:44
• Yes, I didn't know why it needed tikz just that it worked. Thanks. May 20, 2019 at 7:44
Here is a way to display 2011/3:
\begin{align*}
&\text{ }\text{ }\text{ }670\\
3 &\overline{\big)2011}\\
&\underline{\text{ }18}\\
&\text{ }\text{ }\text{ }21\\
&\text{ }\text{ }\underline{\text{ }21}\\
&\text{ }\text{ }\text{ }\text{ }\text{ }\text{ }1
\end{align*}
Tweak the spacing on different long divisions. As barbara beeton commented on the question, scaling up the size of the parentheses does help.
What about the polynom package? If you write
\polylongdiv{12345}{13}
You obtain | 2022-05-20T06:37:54 | {
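A minimal compilable sketch (assuming the polynom package is installed; the division shown is the one from the answer above):

```latex
\documentclass{article}
\usepackage{polynom}
\begin{document}
\polylongdiv{12345}{13}
\end{document}
```

polynom prints the full long-division tableau, including the intermediate subtractions and the remainder.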
"domain": "stackexchange.com",
"url": "https://tex.stackexchange.com/questions/131125/better-way-to-display-long-division",
"openwebmath_score": 0.9945861101150513,
"openwebmath_perplexity": 856.6165367269222,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.8652240825770432,
"lm_q2_score": 0.7549149758396752,
"lm_q1q2_score": 0.6531706173945536
} |
https://mathematica.stackexchange.com/questions/linked/88312 | 27k views
### $\LaTeX$ and Mathematica
I quite often would like to draw graphics in my $\LaTeX$ documents using Mathematica. I have encountered three problems. I would like to know if there are any workarounds to these problems I would ...
5k views
### Python-style plots in Mathematica
I love making plots in Mathematica. And I love to spend a lot of time making high-quality plots that maximize readability and aesthetics. For most cases, Mathematica can make very beautiful images, ...
4k views
### Saner alternative to ContourPlot fill
I am producing a large number of ContourPlot objects, which when exported generate notoriously large PDF files because it basically generates lots of little ...
6k views
### Do I have to code each case of this Grid full of plots separately?
I have written some custom functions to draw multi-panel graphs like this one: It's done by passing a matrix of (custom) plotting functions to a MultiPanelGraph ...
3k views
### Aligning plot axes in a graphics object
I need to align the y-axes in the plots below. I think I'm going to have to do some rasterizing and searching for vertical lines, then vary x and ...
867 views
### How to align coordinate systems of Inset and enclosing Graphics?
Suppose we have some plot with AspectRatio not being Automatic: ...
1k views
### Spacing and dimension of plots in Grid/GraphicsGrid
There is a way to align axis/picture in this kind of plot? ...
519 views
### Problem with using GraphicsColumn
I have three plots defined as: ...
573 views
### Set size of plot region [duplicate]
When generating graphs, I would like to keep the size of my plot range constant, instead of the size of the image in total (as been done with ImageSize). Please see the example below. How can I ...
280 views
### Plot with two scales for X axis
I want to do something similar to 1 Plot, 2 Scale/Axis but for the X-axis. The aim is to have physical units below, but array indices at the top for easy access to the discrete data range. So far I ...
239 views
### Size problems combining objects in a Graphics using Inset
The Problem I need to put together different graphics objects in a precise way inside a Graphics. Some objects are drawn directly by graphics primitives, some ...
67 views
### Is there a way to automate “aligning plot axes in a graphics object” for more than 2 plots?
I tried the automated solution there, and it works nicely with two plots. However, I want to know if there is any way to do the same thing for more than 2 plots. I did try, but failed to do so... ... | 2019-12-13T00:30:10 | {
"domain": "stackexchange.com",
"url": "https://mathematica.stackexchange.com/questions/linked/88312",
"openwebmath_score": 0.7076141834259033,
"openwebmath_perplexity": 1691.1335922016087,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.8652240756264639,
"lm_q2_score": 0.7549149813536518,
"lm_q1q2_score": 0.6531706169182826
} |
https://brilliant.org/discussions/thread/pencil-logic-puzzles/ | ×
# Pencil logic puzzles
Do you know Sudoku? Or maybe Paint by Numbers (also known as Nonogram)? Sudoku and Nonogram are examples of the many kinds of puzzles also known as "pencil logic puzzles" (for the lack of better terms).
Pencil logic puzzles are puzzles that usually have a grid (so you usually don't need to spare some space just to start solving), and which are typically intended to be filled with pencil or pen (and thus shouldn't be interactive). Moreover, as a logic puzzle, it should have a unique solution, reachable by (deductive) logic alone, and most often these puzzles are culture-neutral in the sense that you don't have to understand any particular culture (except possibly basic math) to solve the puzzles. This rules out things such as knights and knaves (for not having a grid and is also rather culture-heavy), logic grid puzzles (for being culture-heavy), puzzlehunt puzzles (for not being deductive), or puzzle video games (for not being doable with pencil).
So, what are examples of pencil logic puzzles? Sudoku and Nonogram are examples of them, but you might have seen examples such as Battleships, Slitherlink, or maybe recently Fillomino.
There are generally two major types of people indulging in such puzzles. The first one is publishers of these and casual consumers of such; if you've seen Sudoku books in stores, most likely they are made by these publishers. Puzzles of this kind are almost exclusively Sudoku (as the most popular sort), and mostly computer-generated. They sell pretty well, but they are usually of lower quality; after all, computers can generate plenty of puzzles, but can't determine whether they are aesthetically good and stuff.
The second one, which I belong to, is one that enjoys high-quality puzzles, those that are carefully hand-crafted. There aren't so many of us, but if you're looking, you can find a heck of a lot of them (with apologies to people that I can't think up their blogs fast enough). In fact, there is even a federation complete with its own puzzle and sudoku championships. (Naturally, they are about solving puzzles as fast as possible.)
Now, are you interested in exploring the world of high-class puzzles?
I have collected such puzzles I found in my set here. Feel free to comment here or notify me somehow if you want an inclusion. To be in this set, the puzzle must be a pencil logic puzzle, following the above rules. To be more succinct, puzzles in this set must be:
• Pencil (can be solved on paper, so not interactive; no Sokoban)
• Deductive (can be solved with deduction, no need of inductive logic; no puzzlehunt-style puzzles)
• Culture-neutral (can be solved without knowing any particular culture, except basic math; no crossword, no elimination grid, no knights and knaves)
Note by Ivan Koswara
2 years, 4 months ago
Sort by:
Do you hear the Sudoku Wiki Page calling out for you? It feels so lonely right now. Staff · 2 years, 4 months ago
Surprisingly, I am less versed in Sudoku than in other puzzles (my favorite kinds of the type are Fillomino and Heteromino at the moment). Perhaps one of the reasons is that it is so saturated with so many tricks that normal Sudoku is mostly "look for the correct trick and apply it". If I have some ideas on what to write about Sudoku, I'll write it up. · 2 years, 4 months ago
Well Yeah! I have always solved many like Battleship, Slitherlink and Sudoku.
I didn't know that there is a federation! And man, Ulrich Voigt! He has won the Championship 7 times. God Knows what his Speed would be. I am Now Way too interested in Solving more of these! :D Thanks For Sharing, Mate! · 2 years, 4 months ago
Yeah! Pumped up and already solved battleship, Sudoku, and rubik's cube · 2 years, 4 months ago
Hi Ivan !
I have always been interested in those things in which we have to challenge ourselves, just like Sudoku, Rubik's cube, the 15 puzzle, Brainvita, Tangram and related stuff...
I just feel a surge of energy flow through me when I attempt such things .
How about you ? · 2 years, 4 months ago | 2017-09-20T00:31:02 | {
"domain": "brilliant.org",
"url": "https://brilliant.org/discussions/thread/pencil-logic-puzzles/",
"openwebmath_score": 0.8015426397323608,
"openwebmath_perplexity": 2298.423050887487,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.8652240686758842,
"lm_q2_score": 0.7549149813536518,
"lm_q1q2_score": 0.6531706116711858
} |
https://gmatclub.com/forum/the-arithmetic-mean-of-a-collection-of-5-positive-integers-not-necess-245617.html | GMAT Question of the Day - Daily to your Mailbox; hard ones only
It is currently 18 Nov 2018, 00:07
### GMAT Club Daily Prep
#### Thank you for using the timer - this advanced tool can estimate your performance and suggest more practice questions. We have subscribed you to Daily Prep Questions via email.
Customized
for You
we will pick new questions that match your level based on your Timer History
Track
every week, we’ll send you an estimated GMAT score based on your performance
Practice
Pays
we will pick new questions that match your level based on your Timer History
## Events & Promotions
# The arithmetic mean of a collection of 5 positive integers, not necess
Author Message
TAGS:
### Hide Tags
Board of Directors
Joined: 01 Sep 2010
Posts: 3306
The arithmetic mean of a collection of 5 positive integers, not necess [#permalink]
### Show Tags
26 Jul 2017, 10:52
Top Contributor
9
00:00
Difficulty:
25% (medium)
Question Stats:
72% (01:30) correct 28% (01:29) wrong based on 340 sessions
### HideShow timer Statistics
The arithmetic mean of a collection of 5 positive integers, not necessarily distinct, is 9. One additional positive integer is included in the collection and the arithmetic mean of the 6 integers is computed. Is the arithmetic mean of the 6 integers at least 10?
1. The additional integer is at least 14.
2. The additional integer is a multiple of 5.
_________________
Intern
Joined: 09 Jul 2017
Posts: 1
Re: The arithmetic mean of a collection of 5 positive integers, not necess [#permalink]
### Show Tags
26 Jul 2017, 11:25
Answer should be "C": with (1) and (2) together, the smallest possible added number is 15, which makes the arithmetic mean at least 10.
Manager
Joined: 01 Feb 2017
Posts: 167
Re: The arithmetic mean of a collection of 5 positive integers, not necess [#permalink]
### Show Tags
26 Jul 2017, 11:32
Ans C:
From the question stem, we can determine the required range of the 6th integer as ≥ 15, using the formula for the arithmetic mean.
To conclude this, we need to use both Statements 1 and 2.
GMATH Teacher
Status: GMATH founder
Joined: 12 Oct 2010
Posts: 477
Re: The arithmetic mean of a collection of 5 positive integers, not necess [#permalink]
### Show Tags
03 Sep 2018, 04:21
carcass wrote:
The arithmetic mean of a collection of 5 positive integers, not necessarily distinct, is 9. One additional positive integer is included in the collection and the arithmetic mean of the 6 integers is computed. Is the arithmetic mean of the 6 integers at least 10?
1. The additional integer is at least 14.
2. The additional integer is a multiple of 5.
The solution below explores the homogeneity nature of the average.
$$\left( * \right)\,\,\,5\,\,{\text{ints}} \geqslant 1$$
$$\sum\nolimits_{\,5} {\, = } \,\,5 \cdot 9 = 45$$
$$\sum\nolimits_{\,5} {\, + \,\,x\,\,\left( {6{\text{th}}} \right)\,\,\,\,\mathop \geqslant \limits^? } \,\,\,\,6 \cdot 10\,\,\,\,\,\,\, \Leftrightarrow \,\,\,\,\,\,\,x\,\,\,\mathop \geqslant \limits^? \,\,\,15$$
$$\left( 1 \right)\,\,x \geqslant 14\,\,\,\left\{ \begin{gathered} \,x = 14\,\,\,\,\, \Rightarrow \,\,\,\,\,\left\langle {{\text{NO}}} \right\rangle \hfill \\ \,x = 15\,\,\,\,\, \Rightarrow \,\,\,\,\,\left\langle {{\text{YES}}} \right\rangle \hfill \\ \end{gathered} \right.$$
$$\left( 2 \right)\, + \,\left( * \right):\,\,\,\,\,x = 5,10,15, \ldots \,\,\,\,\,\,\left\{ \begin{gathered} \,x = 5\,\,\,\,\, \Rightarrow \,\,\,\,\,\left\langle {{\text{NO}}} \right\rangle \hfill \\ \,x = 15\,\,\,\,\, \Rightarrow \,\,\,\,\,\left\langle {{\text{YES}}} \right\rangle \hfill \\ \end{gathered} \right.$$
$$\left( {1 + 2} \right) + \,\left( * \right):\,\,\,\,x = 15,20,25, \ldots \,\,\,\,\, \Rightarrow \,\,x \geqslant 15\,\,\,\,\, \Rightarrow \,\,\,\,\,\left\langle {{\text{YES}}} \right\rangle \,\,$$
The above follows the notations and rationale taught in the GMATH method.
_________________
Fabio Skilnik :: https://GMATH.net (Math for the GMAT) or GMATH.com.br (Portuguese version)
Course release PROMO : finish our test drive till 30/Nov with (at least) 50 correct answers out of 92 (12-questions Mock included) to gain a 50% discount!
Re: The arithmetic mean of a collection of 5 positive integers, not necess &nbs [#permalink] 03 Sep 2018, 04:21
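A quick arithmetic check of the threshold the solutions above rely on (a sketch, not from any post): the five integers sum to 5 · 9 = 45, so the six-integer mean is at least 10 exactly when the added integer x is at least 15.

```python
# five positive integers with mean 9 sum to 45
total_5 = 5 * 9

def mean_6(x):
    # mean after adding the sixth positive integer x
    return (total_5 + x) / 6

# the mean reaches 10 exactly when x >= 15
assert mean_6(14) < 10       # statement (1) alone allows a "no" ...
assert mean_6(15) >= 10      # ... and a "yes", so (1) is insufficient
assert mean_6(5) < 10        # statement (2) alone allows a "no" (x = 5)
# together: x is a multiple of 5 that is at least 14, so x is 15, 20, 25, ...
for x in range(15, 100, 5):
    assert mean_6(x) >= 10   # always "yes" -> answer C
print("threshold x >= 15 confirmed")
```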
Display posts from previous: Sort by | 2018-11-18T08:07:33 | {
"domain": "gmatclub.com",
"url": "https://gmatclub.com/forum/the-arithmetic-mean-of-a-collection-of-5-positive-integers-not-necess-245617.html",
"openwebmath_score": 0.5918459296226501,
"openwebmath_perplexity": 3752.7482575104873,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. Yes\n2. Yes",
"lm_q1_score": 0.8652240686758841,
"lm_q2_score": 0.7549149813536518,
"lm_q1q2_score": 0.6531706116711857
} |
https://gmatclub.com/forum/at-an-upscale-fast-food-restaurant-shin-can-buy-3-burgers-7-shakes-49205.html | GMAT Question of the Day - Daily to your Mailbox; hard ones only
It is currently 26 Jun 2019, 04:13
GMAT Club Daily Prep
Thank you for using the timer - this advanced tool can estimate your performance and suggest more practice questions. We have subscribed you to Daily Prep Questions via email.
Customized
for You
we will pick new questions that match your level based on your Timer History
Track
every week, we’ll send you an estimated GMAT score based on your performance
Practice
Pays
we will pick new questions that match your level based on your Timer History
At an upscale fast-food restaurant, Shin can buy 3 burgers, 7 shakes,
Author Message
Director
Joined: 01 May 2007
Posts: 751
At an upscale fast-food restaurant, Shin can buy 3 burgers, 7 shakes, [#permalink]
Show Tags
22 Jul 2007, 13:59
2
14
00:00
Difficulty:
75% (hard)
Question Stats:
45% (01:12) correct 55% (01:06) wrong based on 182 sessions
HideShow timer Statistics
At an upscale fast-food restaurant, Shin can buy 3 burgers, 7 shakes, and one cola for $120. At the same place it would cost $164.50 for 4 burgers, 10 shakes, and one cola. How much would it cost for a meal of one burger, one shake, and one cola?

A. $21
B. $27
C. $31
D. $41
E. It cannot be determined
This is a challenge problem, but from the FREE PRACTICE BIN, so don't freak out on me for posting a challenge problem on the forum. I've never seen this type of explanation before. I always assumed that if you had 3 variables, you needed 3 equations to solve. I've also never seen a solution where you subtract multiple times from the same equation. Is this even kosher? Is there a way to solve this without subtracting multiple times by the same #?
Explanation
Let's suppose that the price of a burger is $B, of a shake $S, and cola's price is $C. We can then construct the equations:

3B + 7S + C = $120
4B + 10S + C = $164.50

Subtracting the first equation from the second gives us B + 3S = $44.50.
Now if we subtract the new equation two times from the first (or 3 times from the second) we will get B + S + C = $31. In any case, there is no necessity to know each item's price, just the sum.

M00-01

Manager
Joined: 27 May 2007
Posts: 119
Re: At an upscale fast-food restaurant, Shin can buy 3 burgers, 7 shakes, [#permalink]

Show Tags

22 Jul 2007, 14:50
That's really helpful! I've never seen that type of solution before either, but there's no reason why it shouldn't work. If B + 3S = $44.50 is a true statement, there's no reason why it couldn't be applied as many times as needed to get to the answer.
Senior Manager
Status: Verbal Forum Moderator
Joined: 17 Apr 2013
Posts: 463
Location: India
GMAT 1: 710 Q50 V36
GMAT 2: 750 Q51 V41
GMAT 3: 790 Q51 V49
GPA: 3.3
Re: At an upscale fast-food restaurant, Shin can buy 3 burgers, 7 shakes, [#permalink]
Show Tags
08 Sep 2014, 03:20
10
6
3B + 7S + 1C = 120
4B + 10S + 1C = 164.50
Subtracting 1 from 2
B + 3S = 44.50
Now MULTIPLY ABOVE by 2
2B + 6S = 89
First equation can be written as:
B + (2B + 6S) + S + C = 120
OR
B + S + C + 89 = 120
B + S + C = 31
_________________
Like my post Send me a Kudos It is a Good manner.
My Debrief: http://gmatclub.com/forum/how-to-score-750-and-750-i-moved-from-710-to-189016.html
Senior Manager
Joined: 13 Jun 2013
Posts: 271
Re: At an upscale fast-food restaurant, Shin can buy 3 burgers, 7 shakes, [#permalink]
Show Tags
13 Nov 2014, 01:03
1
anceer wrote:
At an upscale fast-food restaurant, Shin can buy 3 burgers, 7 shakes, and one cola for $120. At the same place it would cost$164.50 for 4 burgers, 10 shakes, and one cola. How much would it cost for a meal of one burger, one shake, and one cola?
A $21 B$27
C $31 D$41
E It cannot be determined
price of one burger=x, price of one shake= y, and price of one cola= z
3x+7y+z = 120------------1)
4x+10y+z=164.5-------------2)
subtracting 1 from 2 we have
x+3y=44.5
now multiply both sides by 3,
3x+9y=133.5------------------3)
from 2 we have
x+3x +9y+y+z=164.5
substituting the value of 3x+9y from (3), we have
x+y+z=164.5-133.5
=31
SVP
Status: The Best Or Nothing
Joined: 27 Dec 2012
Posts: 1787
Location: India
Concentration: General Management, Technology
WE: Information Technology (Computer Software)
Re: At an upscale fast-food restaurant, Shin can buy 3 burgers, 7 shakes, [#permalink]
13 Nov 2014, 02:58
3b + 7s + 1c = 120 ......... (1)
4b + 10s + 1c = 164.5 ........ (2)
(2) - (1)
1b + 3s = 44.5
3b + 9s = 44.5*3 = 133.5 .............. (3)
Rearranging equation (2)
(3b + 9s) + (1b + 1s + 1c) = 164.5
1b + 1s + 1c = 164.5 - 133.5 = 31
Math Expert
Joined: 02 Sep 2009
Posts: 55802
At an upscale fast-food restaurant, Shin can buy 3 burgers, 7 shakes, [#permalink]
13 Nov 2014, 06:21
jimmyjamesdonkey wrote:
At an upscale fast-food restaurant, Shin can buy 3 burgers, 7 shakes, and one cola for $120. At the same place it would cost $164.50 for 4 burgers, 10 shakes, and one cola. How much would it cost for a meal of one burger, one shake, and one cola?
A. $21
B. $27
C. $31
D. $41
E. It cannot be determined
This is a challenge problem, but from the FREE PRACTICE BIN, so don't freak out on me for posting a challenge problem on the forum. I've never seen this type of explanation before. I always assumed that if you had 3 variables, you needed 3 equations to solve. I've also never seen a solution where you subtract multiple times from the same equation. Is this even kosher? Is there a way to solve this without subtracting multiple times by the same number?
Explanation

At an upscale fast-food restaurant, Shin can buy 3 burgers, 7 shakes, and one cola for $120. At the same place it would cost $164.50 for 4 burgers, 10 shakes, and one cola. How much would it cost for a meal of one burger, one shake, and one cola?
A. $21
B. $27
C. $31
D. $41
E. It cannot be determined
Let's suppose that the price of a burger is $$B$$, of a shake - $$S$$ and that of a cola is $$C$$. We can then construct these equations:
$$3B+7S+C = 120$$
$$4B+10S+C = 164.5$$
Subtracting the first equation from the second gives us $$B+3S=44.5$$.
Now if we subtract the new equation two times from first or 3 times from second we will get $$B+S+C=31$$. In any case, there is no necessity to know each item's price, just the sum.
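The system is underdetermined, so the individual prices $$B$$, $$S$$, $$C$$ cannot be found, but the combination $$B+S+C$$ is pinned down. A short Python check (purely illustrative, not from the thread) sweeps the free variable and confirms the sum never moves:

```python
# The two given equations:
#   3B + 7S + C = 120
#   4B + 10S + C = 164.50
# Subtracting the first from the second gives B + 3S = 44.50.
# Pick B freely, solve for S and C, and watch B + S + C stay fixed at 31.
for B in [1.0, 5.0, 12.5]:
    S = (44.5 - B) / 3       # from B + 3S = 44.50
    C = 120 - 3 * B - 7 * S  # from the first equation
    assert abs(4 * B + 10 * S + C - 164.5) < 1e-9  # second equation also holds
    assert abs(B + S + C - 31) < 1e-9              # the asked-for sum is fixed
```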
# Conservation laws
A conservation law is a type of PDE that describes the transport of extensive quantities like mass, momentum, and energy. The most general form of a hyperbolic conservation law for a field $q$ is
$$\frac{\partial q}{\partial t} + \nabla\cdot f(q) = s,$$
where $f$ is the flux function and $s$ the sources. The solution variable $q$ could be a scalar, vector, or tensor field. Here we'll look at the simplest conservation law of them all, the advection equation: $q$ is a scalar field and $f(q) = qu$ for some velocity field $u$. As we'll see, there are lots of ways to screw up something as simple as the advection equation, so learning what the common error modes are will help when attacking harder problems like the shallow water or Euler equations.
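To get a feel for what a discrete conservation law looks like before diving into the finite element machinery, here is a minimal 1D finite-volume sketch in plain Python (a side illustration, not part of the Firedrake demo below). With constant speed $u > 0$ and a one-sided (upwind) flux, each update is a convex combination of neighboring cell values whenever the CFL number $c = u\,\delta t/\delta x \le 1$, so mass is conserved exactly and no new extrema appear:

```python
# 1D finite-volume advection on a periodic grid with the first-order upwind
# flux. For u > 0 the update is q_new[i] = (1 - c)*q[i] + c*q[i-1], a convex
# combination when 0 <= c <= 1.
def advect_1d_upwind(q, c, num_steps):
    n = len(q)
    for _ in range(num_steps):
        # Python's negative indexing handles the periodic boundary at i = 0.
        q = [(1 - c) * q[i] + c * q[i - 1] for i in range(n)]
    return q

q0 = [1.0 if 10 <= i < 20 else 0.0 for i in range(50)]
q = advect_1d_upwind(q0, c=0.5, num_steps=100)
assert abs(sum(q) - sum(q0)) < 1e-9  # total mass conserved
assert max(q) <= max(q0) + 1e-12     # no new maxima
assert min(q) >= min(q0) - 1e-12     # no new minima
```

The same telescoping-flux idea is what the DG facet integrals below are reproducing on an unstructured mesh.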
One of the challenging things about solving hyperbolic problems is that the class of reasonable solutions includes functions with jump discontinuities, and for some problems this is true even when the initial data are smooth. Compare that to, say, elliptic problems, where the solution is almost always smoother than the input data. For elliptic problems, it's common to use continuous basis functions, but when we try to use the same basis for conservation laws we can run up against some very nasty stability problems. It's possible to work around these issues by using tricks like the streamline upwind Petrov-Galerkin (SUPG) method. But almost any approach to stabilizing a CG discretization introduces (1) free parameters that can be difficult to tune right and (2) an unrealistic level of numerical diffusion. For these reasons, the discontinuous Galerkin method is very popular for these kinds of problems. The DG method has good local conservation properties, it can achieve high-order accuracy where the solution is smooth, and there are more options in how you guarantee a stable scheme.
DG is a huge subject and I couldn't possibly do justice to it here. If you want to read more about it, this paper by Cockburn and Shu is a great reference, as are these notes by Ralf Hartmann and this dissertation by Michael Crabb. Instead, I'll focus here on the effects of some of the choices you have to make when you solve these types of problems.
#### Input data
First, we want to create a domain, some function spaces, and a divergence-free velocity field $u$. The classic example is a material in uniform solid-body rotation around some fixed point $y$:
$$u(x) = \hat k \times (x - y)$$
where $\hat k$ is the unit vector in the $z$ direction.
import firedrake
from firedrake import inner, Constant, as_vector
mesh = firedrake.UnitSquareMesh(64, 64, diagonal='crossed')
x = firedrake.SpatialCoordinate(mesh)
y = Constant((.5, .5))
r = x - y
u = as_vector((-r[1], r[0]))
To have a stable timestepping scheme, we'll need to satisfy the Courant-Friedrichs-Lewy condition, which means calculating the maximum speed and the minimum cell diameter. Calculating the maximum speed exactly can be challenging; if $u$ is represented with piecewise linear basis functions, then $|u|^2$ is a quadratic function and thus might not attain its maximum value at the interpolation points. You could work around this by changing to a basis of Bernstein polynomials, but for our purposes it'll be enough to evaluate the maximum at the interpolation points and take a smaller timestep than necessary.
import numpy as np
Q = firedrake.FunctionSpace(mesh, family='CG', degree=2)
speed = firedrake.interpolate(inner(u, u), Q)
max_speed = np.sqrt(speed.dat.data_ro.max())
Q0 = firedrake.FunctionSpace(mesh, family='DG', degree=0)
diameters = firedrake.project(firedrake.CellDiameter(mesh), Q0)
min_diameter = diameters.dat.data_ro.min()
cfl_timestep = min_diameter / max_speed
print('Upper bound for CFL-stable timestep: {}'.format(cfl_timestep))
Upper bound for CFL-stable timestep: 0.022097086912079608
The initial data we'll use will be the classic bell and cone:
$$q_0 = \max\{0, 1 - |x - x_c| / r_c\} + \max\{0, 1 - |x - x_b|^2 / r_b^2\}$$
where $x_c$, $r_c$ are the center and radius of the cone and $x_b$, $r_b$ for the bell.
from firedrake import sqrt, min_value, max_value
x_c = as_vector((5/8, 5/8))
R_c = Constant(1/8)
x_b = as_vector((3/8, 3/8))
R_b = Constant(1/8)
q_expr = (
max_value(0, 1 - sqrt(inner(x - x_c, x - x_c) / R_c**2)) +
max_value(0, 1 - inner(x - x_b, x - x_b) / R_b**2)
)
q0 = firedrake.project(q_expr, Q0)
import matplotlib.pyplot as plt
import mpl_toolkits.mplot3d
fig = plt.figure()
axes = fig.add_subplot(projection='3d')
firedrake.trisurf(q0, axes=axes);
#### Fluxes
For our first experiment we'll look at the problem of choosing a numerical flux. As we'll see in the following, we have choices to make in how we determine the discrete approximation to the solution of the conservation law. This is very different from elliptic problems -- once we've decided to use the continuous Galerkin method, the only real choice is what polynomial degree we'll use.
The usual procedure to come up with a weak form for a PDE is to multiply by some smooth test function $\phi$, move some derivatives onto $\phi$, and integrate. For the conservation law written above, we would arrive at the weak form
$$\int_\Omega\left(\frac{\partial q}{\partial t}\cdot\phi - f(q)\cdot\nabla\phi\right)dx = \int_\Omega s\cdot\phi\, dx + \ldots$$
where I've used an ellipsis to stand for some boundary terms that don't particularly matter. Unfortunately, this equation doesn't quite tell the whole story. We're using discontinuous basis functions to represent the solution $q$, and ideally we would use the same basis and test functions. What happens when the test functions are discontinuous too?
Let $\phi$ be some basis function and let $K$ be the cell of the domain where $\phi$ is supported. If we apply the usual procedure, we get an element-wise weak form when integrating against $\phi$:
$$\int_K\left(\frac{\partial q}{\partial t}\phi - f(q)\cdot\nabla\phi\right)dx + \int_{\partial K}f(q)\cdot\phi n\, ds = \int_K s\cdot\phi\, dx + \ldots$$
where $n$ is the unit outward normal vector to $K$. Note that we're integrating over only a single element and not the entire domain. The problem here is that if the solution and the basis functions are discontinuous across the element, we can't uniquely define their values on the boundary.
To see why this is so, you can imagine that, instead of having a discontinuous test function, we have a sequence $\phi_\epsilon$ of continuous test functions that converge to $\phi$ in some appropriate norm. If we take the support of each element of the sequence to be contained in the interior of $K$, then the value of $q$ in the boundary integral will be its value approaching the boundary from the inside:
$$q_-(x) = \lim_{\epsilon\to 0}q(x - \epsilon n).$$
Alternatively, if we take $K$ to be contained in the interior of the support of each element of the sequence, then the value of the solution in the boundary integral will be its value approaching the boundary from the outside:
$$q_+(x) = \lim_{\epsilon\to 0}q(x + \epsilon n).$$
Finally, with the right choice of sequence we could get any weighted average of the values on either side of the interface. As a consequence, we need to make some choice of the numerical flux. The numerical flux $f^*$ is a function of the interface values $q_+$ and $q_-$ and the unit normal vector $n$. The discrete approximation to the solution will satisfy the ODE system
$$\sum_K\left\{\int_K\left(\frac{\partial q}{\partial t}\phi - f(q)\cdot\nabla\phi\right)dx + \int_{\partial K}f^*(q_-, q_+, n)\cdot\phi\, ds\right\} = \sum_K\int_K s\cdot\phi\, dx + \ldots$$
for all test functions $\phi$. What kinds of functions can make a good numerical flux? First, if the solution is continuous across an element boundary, the numerical flux should give the same value as the true physical flux:
$$f^*(q, q, n) = f(q)\cdot n.$$
This condition is called consistency and it guarantees that the exact solution is also a discrete solution. The second property we want is to have some analogue of the conservative nature of the true problem. The important thing about fluxes in physical problems is that they can't create or destroy mass, momentum, energy, etc., they only transport it around the domain. To see how we can attain a similar property for our discrete problem, first observe that the sum over all the boundary integrals is telescoping because two neighboring cells $K_-$, $K_+$ share a common face $E$. We can then rewrite the sum of all the boundary integrals as a sum over all faces $E$ of the mesh:
$$\sum_K\int_{\partial K}f^*(q_-, q_+, n)\phi\, ds = \sum_E\int_E\left\{f^*(q_-, q_+, n_-)\phi_- + f^*(q_+, q_-, n_+)\phi_+\right\}ds$$
Here $n_-$, $n_+$ are the unit outward normal vectors to $K_-$ and $K_+$ respectively. Note that $n_+ = -n_-$, i.e. the two normals point in opposite directions to each other. What happens if the test function $\phi$ is identically equal to 1 throughout the entire domain? In that case the facet integrals should sum up to 0 -- fluxes only transport, they don't create or destroy. The numerical flux is conservative if
$$f^*(q_-, q_+, n) + f^*(q_+, q_-, -n) = 0.$$
The most braindead way we can come up with a sane numerical flux is to take the average of the solution values across the cell boundary:
$$f^*(q_-, q_+, n) = \frac{1}{2}(q_- + q_+)\, u\cdot n.$$
This is called the central flux. Let's see how well it works.
from firedrake import grad, dx, ds, dS
q, ϕ = firedrake.TrialFunction(Q0), firedrake.TestFunction(Q0)
m = q * ϕ * dx
q = q0.copy(deepcopy=True)
cell_flux = -inner(grad(ϕ), q * u) * dx
n = firedrake.FacetNormal(mesh)
f = q * inner(u, n) / 2
face_flux = (f('+') - f('-')) * (ϕ('+') - ϕ('-')) * dS
q_in = Constant(0)
influx = q_in * min_value(0, inner(u, n)) * ϕ * ds
outflux = q * max_value(0, inner(u, n)) * ϕ * ds
We'll take our timestep to be 1/4 of the formal CFL-stable timestep. We need at least a factor of 1/2 for the dimension, and probably another factor of 1/2 for triangle shape.
from numpy import pi as π
final_time = 2 * π
num_steps = 4 * int(final_time / cfl_timestep)
dt = Constant(final_time / num_steps)
Since we're repeatedly solving the same linear system, we'll create problem and solver objects so that this information can be reused from one solve to the next. The solver parameters are specially chosen for the fact that the mass matrix with discontinuous Galerkin methods is block diagonal, so a block Jacobi preconditioner with exact solvers on all the blocks is exact for the whole system.
from firedrake import LinearVariationalProblem, LinearVariationalSolver
dq_dt = -(cell_flux + face_flux + influx + outflux)
δq = firedrake.Function(Q0)
problem = LinearVariationalProblem(m, dt * dq_dt, δq)
parameters = {'ksp_type': 'preonly', 'pc_type': 'bjacobi', 'sub_pc_type': 'ilu'}
solver = LinearVariationalSolver(problem, solver_parameters=parameters)
import numpy as np
qrange = np.zeros((num_steps, 2))
import tqdm
for step in tqdm.trange(num_steps, unit='timesteps'):
solver.solve()
q += δq
qrange[step, :] = q.dat.data_ro.min(), q.dat.data_ro.max()
100%|██████████| 1136/1136 [00:05<00:00, 225.35timesteps/s]
After only 250 steps the solution is already attaining values two orders of magnitude greater than it should, even while using a CFL-stable timestep. The reason for this is that the central flux, while both consistent and conservative, is numerically unstable with forward time-differencing.
fig, axes = plt.subplots()
axes.set_yscale('log')
axes.plot(qrange[:250, 1])
axes.set_xlabel('timestep')
axes.set_ylabel('solution maximum');
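The same failure mode is easy to reproduce in one dimension (a plain-Python sketch, separate from the Firedrake code). On a periodic grid, the central flux with forward Euler amounts to a centered difference in space, and a von Neumann analysis gives an amplification factor $|1 - ic\sin\theta| > 1$ for every Fourier mode, so the scheme blows up no matter how small the CFL number $c$ is:

```python
# Central flux + forward Euler in 1D: q_new[i] = q[i] - (c/2)*(q[i+1] - q[i-1])
# on a periodic grid. Every mode is amplified, so any rough initial data
# explodes after enough steps.
def advect_1d_central(q, c, num_steps):
    n = len(q)
    for _ in range(num_steps):
        q = [q[i] - 0.5 * c * (q[(i + 1) % n] - q[i - 1]) for i in range(n)]
    return q

q0 = [1.0 if i == 25 else 0.0 for i in range(50)]
q = advect_1d_central(q0, c=0.5, num_steps=200)
assert max(abs(v) for v in q) > 1e3  # blown up far past the initial max of 1
```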
Instead, we'll try the upwind numerical flux. The idea of the upwind flux is to sample from whichever side of the interface has the velocity flowing outward and not in. The numerical flux is defined as
$$f^*(q_-, q_+, n) = \begin{cases}q_-u\cdot n && u\cdot n > 0 \\ q_+u\cdot n && u\cdot n \le 0\end{cases}.$$
We can also write this in a more symmetric form as
$$f^*(q_-, q_+, n) = q_-\max\{0, u\cdot n\} + q_+\min\{0, u\cdot n\}.$$
The upwind flux is designed to mimic the stability properties of one-sided finite difference schemes for transport equations.
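As a quick sanity check (plain Python, mirroring the formulas above rather than the UFL code below), the upwind flux satisfies both of the properties we asked for, consistency and conservation:

```python
# Upwind flux for f(q) = q*u in the symmetric form
#   f*(q-, q+, n) = q- * max(0, u·n) + q+ * min(0, u·n).
def upwind_flux(q_minus, q_plus, u_n):
    return q_minus * max(0.0, u_n) + q_plus * min(0.0, u_n)

for u_n in [-1.5, 0.0, 2.0]:
    # consistency: f*(q, q, n) = q * u·n
    assert abs(upwind_flux(3.0, 3.0, u_n) - 3.0 * u_n) < 1e-12
    # conservation: f*(q-, q+, n) + f*(q+, q-, -n) = 0
    assert abs(upwind_flux(1.0, 4.0, u_n) + upwind_flux(4.0, 1.0, -u_n)) < 1e-12
```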
q = q0.copy(deepcopy=True)
cell_flux = -inner(grad(ϕ), q * u) * dx
n = firedrake.FacetNormal(mesh)
u_n = max_value(inner(u, n), 0)
f = q * u_n
face_flux = (f('+') - f('-')) * (ϕ('+') - ϕ('-')) * dS
q_in = Constant(0)
influx = q_in * min_value(0, inner(u, n)) * ϕ * ds
outflux = q * max_value(0, inner(u, n)) * ϕ * ds
dq_dt = -(cell_flux + face_flux + influx + outflux)
δq = firedrake.Function(Q0)
problem = LinearVariationalProblem(m, dt * dq_dt, δq)
parameters = {'ksp_type': 'preonly', 'pc_type': 'bjacobi', 'sub_pc_type': 'ilu'}
solver = LinearVariationalSolver(problem, solver_parameters=parameters)
qs = []
output_freq = 5
for step in tqdm.trange(num_steps, unit='timesteps'):
solver.solve()
q += δq
if step % output_freq == 0:
qs.append(q.copy(deepcopy=True))
100%|██████████| 1136/1136 [00:05<00:00, 223.81timesteps/s]
We at least get a finite answer as a result, which is a big improvement. Keeping in mind that the original data capped out at a value of 1, the peaks have shrunk considerably, and we can also see that the sharp cone is much more rounded than before.
from firedrake.plot import FunctionPlotter
fn_plotter = FunctionPlotter(mesh, num_sample_points=1)
%%capture
fig, axes = plt.subplots()
axes.set_aspect('equal')
axes.get_xaxis().set_visible(False)
axes.get_yaxis().set_visible(False)
colors = firedrake.tripcolor(
q, num_sample_points=1, vmin=0., vmax=1., shading="gouraud", axes=axes
)
from matplotlib.animation import FuncAnimation
def animate(q):
colors.set_array(fn_plotter(q))
interval = 1e3 * output_freq * float(dt)
animation = FuncAnimation(fig, animate, frames=qs, interval=interval)
from IPython.display import HTML
HTML(animation.to_html5_video())
Despite this fact, the total volume under the surface has been conserved to within rounding error.
print(firedrake.assemble(q * dx) / firedrake.assemble(q0 * dx))
0.9999713508961685
Nonetheless, the relative error in the $L^1$ norm is quite poor.
firedrake.assemble(abs(q - q0) * dx) / firedrake.assemble(q0 * dx)
0.6651047426779894
Let's see if we can improve on that by changing the finite element basis.
#### Higher-order basis functions
One of the main advantages that the discontinuous Galerkin method has over the finite volume method is that achieving higher-order convergence is straightforward if the problem is nice -- you just increase the polynomial degree. (When the problem is not nice, for example if there are shockwaves, everything goes straight to hell and the finite volume method is much less finicky about stability.) Here we'll look at what happens when we go from piecewise constant basis functions to piecewise linear.
One of the first changes we have to make is that the Courant-Friedrichs-Lewy condition is more stringent for higher-order basis functions. For piecewise constant basis functions, we have that $\delta x / \delta t \ge |u|$; for degree-$p$ polynomials, we instead need that
$$\frac{\delta x}{\delta t} \ge (2p + 1)\cdot|u|.$$
One way of looking at this higher-degree CFL condition is that the introduction of more degrees of freedom makes the effective spacing between the nodes smaller than it might be in the piecewise-constant case. The multiplicative factor of $2p + 1$ accounts for the effective shrinkage in the numerical length scale. (For more, see this paper from 2013.) Once again, we'll use a timestep that's 1/4 of the formal CFL timestep to account for the spatial dimension and the mesh quality.
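Written out as a tiny helper (just a restatement of the bound above, not Firedrake code), the degree-dependent timestep bound looks like this; the code below simply divides by 3 for the $p = 1$ case:

```python
# CFL bound with the degree-dependent factor: dt <= dx / ((2p + 1) * |u|).
def cfl_timestep(min_diameter, max_speed, degree):
    return min_diameter / ((2 * degree + 1) * max_speed)

dt0 = cfl_timestep(0.02, 1.0, degree=0)  # piecewise constant
dt1 = cfl_timestep(0.02, 1.0, degree=1)  # piecewise linear: 3x smaller
assert abs(dt1 - dt0 / 3) < 1e-15
```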
cfl_timestep = min_diameter / max_speed / 3
num_steps = 4 * int(final_time / cfl_timestep)
dt = Constant(final_time / num_steps)
We have to be a bit careful about creating the initial data. For discontinuous Galerkin discretizations, we would normally project the expression into the discrete function space. Since this is a projection in $L^2$, we might get negative values for an otherwise strictly positive expression. In this case, the positivity of the solution is vital and so instead I'm interpolating the expression for the initial data, but doing so is a little dangerous.
Q1 = firedrake.FunctionSpace(mesh, family='DG', degree=1)
q0 = firedrake.interpolate(q_expr, Q1)
q0.dat.data_ro.min(), q0.dat.data_ro.max()
(0.0, 1.0)
In almost every other respect the discretization is the same as before.
q, ϕ = firedrake.TrialFunction(Q1), firedrake.TestFunction(Q1)
m = q * ϕ * dx
q = q0.copy(deepcopy=True)
cell_flux = -inner(grad(ϕ), q * u) * dx
n = firedrake.FacetNormal(mesh)
u_n = max_value(inner(u, n), 0)
f = q * u_n
face_flux = (f('+') - f('-')) * (ϕ('+') - ϕ('-')) * dS
q_in = Constant(0)
influx = q_in * min_value(0, inner(u, n)) * ϕ * ds
outflux = q * max_value(0, inner(u, n)) * ϕ * ds
dq_dt = -(cell_flux + face_flux + influx + outflux)
δq = firedrake.Function(Q1)
problem = LinearVariationalProblem(m, dt * dq_dt, δq)
parameters = {'ksp_type': 'preonly', 'pc_type': 'bjacobi', 'sub_pc_type': 'ilu'}
solver = LinearVariationalSolver(problem, solver_parameters=parameters)
for step in tqdm.trange(num_steps, unit='timesteps'):
solver.solve()
q += δq
100%|██████████| 3412/3412 [00:40<00:00, 83.82timesteps/s]
The error in the $L^1$ norm is less than that of the degree-0 solution, which was over 60%, but it's far from perfect.
firedrake.assemble(abs(q - q0) * dx) / firedrake.assemble(q0 * dx)
0.09376446683007597
Worse yet, the final value of the solution has substantial over- and undershoots. The mathematical term for this is that the true dynamics are monotonicity-preserving -- they don't create new local maxima or minima -- but the numerical scheme is not.
fig, axes = plt.subplots()
axes.set_aspect('equal')
colors = firedrake.tripcolor(q, axes=axes)
fig.colorbar(colors);
To be precise and for later comparison we'll print out exactly how far outside the initial range the solution goes.
q.dat.data_ro.min(), q.dat.data_ro.max()
(-0.11039252600936499, 1.0315252284314207)
But of course we're only using the explicit Euler timestepping scheme, which is of first order, while our spatial discretization should be 2nd-order accurate. Can we do better if we match the asymptotic accuracy of the errors in time and space?
#### Timestepping
Choosing a finite element basis or a numerical flux is part of deciding how we'll discretize the spatial part of the differential operator. After that we have to decide how to discretize in time. The explicit Euler, which we used in the preceding code, has the virtue of simplicity. Next we'll try out the strong stability-preserving Runge-Kutta method of order 3. First, we'll create a form representing the rate of change of $q$ with the upwind flux just as we did before.
q = q0.copy(deepcopy=True)
ϕ = firedrake.TestFunction(Q1)
cell_flux = -inner(grad(ϕ), q * u) * dx
n = firedrake.FacetNormal(mesh)
u_n = max_value(inner(u, n), 0)
f = q * u_n
face_flux = (f('+') - f('-')) * (ϕ('+') - ϕ('-')) * dS
q_in = Constant(0)
influx = q_in * min_value(0, inner(u, n)) * ϕ * ds
outflux = q * max_value(0, inner(u, n)) * ϕ * ds
dq_dt = -(cell_flux + face_flux + influx + outflux)
To implement the SSPRK3 timestepping scheme, we'll introduce some auxiliary functions and solvers for the Runge-Kutta stages.
q1 = firedrake.Function(Q1)
q2 = firedrake.Function(Q1)
F2 = firedrake.replace(dq_dt, {q: q1})
F3 = firedrake.replace(dq_dt, {q: q2})
problems = [
LinearVariationalProblem(m, dt * dq_dt, δq),
LinearVariationalProblem(m, dt * F2, δq),
LinearVariationalProblem(m, dt * F3, δq)
]
solvers = [
LinearVariationalSolver(problem, solver_parameters=parameters)
for problem in problems
]
The timestepping loop is more involved; we have to separately evaluate the Runge-Kutta stages and then form the solution as an appropriate weighted sum.
for step in tqdm.trange(num_steps, unit='timesteps'):
solvers[0].solve()
q1.assign(q + δq)
solvers[1].solve()
q2.assign(3 * q / 4 + (q1 + δq) / 4)
solvers[2].solve()
q.assign(q / 3 + 2 * (q2 + δq) / 3)
100%|██████████| 3412/3412 [02:07<00:00, 26.84timesteps/s]
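As an aside, the three-stage combination in the loop above is easy to check on a scalar ODE (a side calculation, not part of the PDE demo): applied to $\dot q = f(q)$, halving the timestep should cut the error by about $2^3 = 8$, confirming third-order accuracy.

```python
import math

# The SSPRK3 stages, written for a scalar test problem dq/dt = f(q).
def ssprk3_solve(f, q, dt, num_steps):
    for _ in range(num_steps):
        q1 = q + dt * f(q)
        q2 = 3 * q / 4 + (q1 + dt * f(q1)) / 4
        q = q / 3 + 2 * (q2 + dt * f(q2)) / 3
    return q

# Integrate dq/dt = -q from t = 0 to 1; the exact answer is exp(-1).
f = lambda q: -q
errors = [abs(ssprk3_solve(f, 1.0, 1.0 / n, n) - math.exp(-1)) for n in (50, 100)]
ratio = errors[0] / errors[1]
assert 6 < ratio < 10  # close to the asymptotic third-order value of 8
```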
The SSPRK3 scheme gives a huge improvement in how well it agrees with the true solution.
firedrake.assemble(abs(q - q0) * dx) / firedrake.assemble(q0 * dx)
0.028571053235589616
In the eyeball norm, it looks like it stays pretty well within the upper and lower limits of the initial data.
fig, axes = plt.subplots()
axes.set_aspect('equal')
colors = firedrake.tripcolor(q, axes=axes)
fig.colorbar(colors);
But if we explicitly calculate the upper and lower bounds, we see that this scheme also fails to be monotonicity preserving!
q.dat.data_ro.min(), q.dat.data_ro.max()
(-0.023255380690921732, 1.0038686288761318)
The departures are relatively small but for more challenging or nonlinear problems the overshoots can become more severe. There is (unfortunately) a can't-do theorem that tells us why: the Godunov barrier. This theorem states that any linear, monotonicity-preserving scheme for hyperbolic conservation laws can be at most 1st-order accurate.
In principle this might sound like a bit of a bummer; why bother looking for higher-order accurate numerical schemes if they're doomed to do unphysical things that will likely result in instability? The operative word here is linear. The Godunov barrier does not rule out the possibility of nonlinear monotonicity-preserving schemes. I find it profoundly disturbing that we should be using nonlinear schemes to approximate the solutions of linear conservation laws, but ours is but to do and die I suppose.
#### Flux limiters
The Godunov barrier motivated the development in the early 80s of post-processing techniques that would turn an otherwise oscillatory scheme into one that does not introduce new local maxima or minima. These ideas fall under the aegis of flux limiters or slope limiters, which apply a transformation that clamps the solution in such a way as to suppress unrealistic gradients near sharp discontinuities but which leave the solution unaltered where it is smooth. The design of limiters is part science and part art. Sweby (1984) established some constraints on what a good limiter function can look like in order to guarantee that the numerical scheme is variation-diminishing. But there's a very large range within those constraints; Sweby's paper showed three different ones even in 1984 and the wiki article on flux limiters lists 15.
Flux-corrected transport is a huge subject, and rather than try to do it any kind of justice I'll instead refer you to a wonderful book by Dmitri Kuzmin. Instead, let's finish things off by looking at what happens when we add a flux limiter to our simulation above. The application of the limiter will be interleaved with all of the Runge-Kutta stages, and conveniently we can reuse the existing solvers for the SSPRK3 stages.
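To get the flavor of what a limiter does, here's a 1D sketch in plain Python using the classic minmod limiter, a simpler relative of the vertex-based limiter applied below. Each cell's reconstructed slope is no steeper than either one-sided difference of the cell averages, and is zeroed at a local extremum, so the reconstructed edge values stay within the neighboring cell averages:

```python
# minmod picks the smaller-magnitude argument when the signs agree, else zero.
def minmod(a, b):
    if a * b <= 0:
        return 0.0
    return min(abs(a), abs(b)) * (1.0 if a > 0 else -1.0)

# Limited slope in each cell of a periodic 1D grid of cell averages.
def limited_slopes(q, dx):
    n = len(q)
    return [
        minmod((q[i] - q[i - 1]) / dx, (q[(i + 1) % n] - q[i]) / dx)
        for i in range(n)
    ]

q = [0.0, 0.0, 1.0, 1.0, 0.5, 0.0]
slopes = limited_slopes(q, dx=1.0)
# Reconstructed edge values q[i] ± slope/2 stay within the neighbor averages.
for i, s in enumerate(slopes):
    neighbors = (q[i - 1], q[i], q[(i + 1) % len(q)])
    assert min(neighbors) <= q[i] - s / 2 <= max(neighbors)
    assert min(neighbors) <= q[i] + s / 2 <= max(neighbors)
```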
q.assign(q0)
limiter = firedrake.VertexBasedLimiter(Q1)
for step in tqdm.trange(num_steps, unit='timesteps'):
solvers[0].solve()
q1.assign(q + δq)
limiter.apply(q1)
solvers[1].solve()
q2.assign(3 * q / 4 + (q1 + δq) / 4)
limiter.apply(q2)
solvers[2].solve()
q.assign(q / 3 + 2 * (q2 + δq) / 3)
limiter.apply(q)
100%|██████████| 3412/3412 [02:37<00:00, 21.72timesteps/s]
The relative error in the 1-norm is nearly as good as before, but with the flux limiter the solution does a much better job staying within the bounds of the initial data.
firedrake.assemble(abs(q - q0) * dx) / firedrake.assemble(q0 * dx)
0.034105170730422026
q.dat.data_ro.min(), q.dat.data_ro.max()
(1.4278749839079737e-45, 0.958887212115741)
#### Conclusion
Hyperbolic problems are hard. There are difficult decisions to make even at the level of how to formulate the discrete problem. For this demo, we were looking at a scalar conservation law, and the upwind flux works quite well. But for systems of conservation laws, like the shallow water equations, things become much more involved. You have to know something at an analytical level about the underlying problem -- the wave speeds. Once we've decided which discrete problem to solve, going beyond first-order accuracy is filled with even more challenges. Some issues, like getting a stable enough timestep, often require manual tuning. For the linear problem shown here, we know what the wave speeds are from the outset and we have reasonable confidence that we can pick a good timestep that will work for the entire simulation. The solutions of nonlinear conservation laws can meander to regions of state space where the wave speeds are much higher than where they started and an initially stable timestep becomes unstable. The Right Thing To Do is to use an adaptive timestepping scheme. But you now have the added implementational difficulty of tracking a higher- and lower-order solution with which to inform the adaptation strategy. Hopefully this has shown what some of the typical pitfalls are and what tools are available to remedy them.
# Stokes flow
In the last post, we looked at variational principles by studying the minimal surface equation. Much of what you learn in multivariable calculus carries over equally well to infinite-dimensional spaces and we were able to leverage a lot of this intuition to design efficient solution procedures. For example, the notion of convexity carries over to variational problems and using this idea we can show that Newton's method is effective in this setting as well.
When we solved the minimal surface equation, our solution space consisted of all functions that satisfy a set of Dirichlet boundary conditions. These conditions are easy to eliminate, so our problem was essentially unconstrained. In this post, we'll look at the Stokes equations, which are a constrained optimization problem. For unconstrained problems, the convexity of the objective implies a kind of stability property that we can use to prove that roughly any finite element basis will give a convergent approximation scheme. For constrained problems we have to be much more careful about the choice of basis and this is the content of the Ladyzhenskaya-Babuška-Brezzi or LBB conditions, which I'll describe in a later post. For now, we'll focus on solving the Stokes equations using one particular discretization, the Taylor-Hood element.
The Stokes equations describe slow flow of very viscous, incompressible fluids. The fields we're solving for are the velocity $u$ and the pressure $p$. The incompressibility condition means that the velocity field is divergence-free:
$$\nabla\cdot u = 0.$$
The remaining equations state that the stresses are in balance with body forces:
$$\nabla\cdot \tau - \nabla p + f = 0,$$
where $\tau$ is the rank-2 stress tensor and $f$ are the body forces. The stress tensor must be related somehow to the velocity field. For a viscous fluid, the stress tensor is related to the rate-of-strain tensor
$$\dot\varepsilon = \frac{1}{2}\left(\nabla u + \nabla u^*\right).$$
(For solids the stress tensor is related to the gradient of the displacement rather than the velocity.) The simplest constitutive relation is that of a Newtonian fluid:
$$\tau = 2\mu\dot\varepsilon,$$
where $\mu$ is the viscosity. There are other nonlinear constitutive relations, but for now we'll just consider Newtonian fluids. If $U$ and $L$ are characteristic velocity and length scales for the particular flow at hand and $\rho$ is the fluid density, the Stokes equations are a good description when the Reynolds number is much less than 1:
$$\text{Re} \equiv \frac{\rho UL}{\mu} \ll 1.$$
When the Reynolds number is closer to or larger than 1, we need to use the full Navier-Stokes equations, which includes inertial effects as well.
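As an order-of-magnitude check, plugging in some made-up but plausible numbers (water-like density, a honey-like viscosity of about 10 Pa·s, and a creeping flow speed) puts us comfortably in the Stokes regime:

```python
# Hypothetical values, chosen only to illustrate the Reynolds number formula.
rho = 1000.0  # density, kg/m^3
U = 1e-3      # characteristic speed, m/s
L = 0.01      # characteristic length, m
mu = 10.0     # viscosity, Pa·s

Re = rho * U * L / mu  # about 1e-3
assert Re < 1  # Stokes flow is a good approximation
```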
The Stokes equations, like the Poisson equation, have a minimization principle, but for two fields instead of one. The variational principle is that the solution $u$, $p$ is a critical point of the rate of decrease of the Gibbs free energy:
$$\dot{\mathscr{G}}(u, p) = \int_\Omega\left(\mu|\dot\varepsilon(u)|^2 - p\nabla\cdot u - f\cdot u\right)dx.$$
You can show using the usual tricks that the Euler-Lagrange equations for this functional are the Stokes equations. The free energy dissipation functional consists of a positive, quadratic term in the velocity, but the pressure $p$ only acts like a Lagrange multiplier enforcing the incompressibility condition. The lack of any positivity in the pressure is part of what makes the Stokes equations so challenging to discretize and solve. While the second derivative of the objective is still symmetric, it is no longer positive-definite.
#### Demonstration¶
Here we'll work on a classic problem of flow driven by a moving boundary. The domain will consist of a circle with two holes removed. We'll imagine that these holes are cylindrical turbines that are rotating with some fixed speed and dragging the fluid along with them. As we'll see, getting a linear solver to converge for this problem is much more challenging than for the Poisson equation.
import numpy as np
from numpy import pi as π
def add_ellipse(geometry, x, y, a, b, N, lcar):
θs = np.array([2 * π * n / N for n in range(N)])
xs, ys = x + a * np.cos(θs), y + b * np.sin(θs)
points = [geometry.add_point([x, y, 0], lcar=lcar) for x, y in zip(xs, ys)]
lines = [geometry.add_line(points[n], points[(n + 1) % N])
for n in range(N)]
geometry.add_physical(lines)
line_loop = geometry.add_line_loop(lines)
return line_loop
import pygmsh
geometry = pygmsh.built_in.Geometry()
outer_line_loop = add_ellipse(geometry, x=0, y=0, a=1, b=1, N=128, lcar=1/4)
inner_loops = [
add_ellipse(geometry, x=0, y=+1/2, a=1/8, b=1/8, N=64, lcar=1/4),
add_ellipse(geometry, x=0, y=-1/2, a=1/8, b=1/8, N=64, lcar=1/4)
]
plane_surface = geometry.add_plane_surface(outer_line_loop, inner_loops)
geometry.add_physical(plane_surface)
with open('mixer.geo', 'w') as geo_file:
geo_file.write(geometry.get_code())
!gmsh -2 -format msh2 -v 0 -o mixer.msh mixer.geo
import firedrake
mesh = firedrake.Mesh('mixer.msh')
import matplotlib.pyplot as plt
fig, axes = plt.subplots()
axes.set_aspect('equal')
firedrake.triplot(mesh, axes=axes)
axes.legend();
For this problem we'll use the Taylor-Hood element: piecewise linear basis functions for the pressure and piecewise quadratic basis functions for the velocity. The Taylor-Hood element is stable for the Stokes equations in the sense that the norm of the inverse of the discretized linear system stays bounded as the mesh is refined. This is a very special property and not just any element will work.
For scalar problems, the solution is a single field, but for the Stokes equations our solution consists of a pair of a velocity and a pressure field. Firedrake includes a handy algebraic notation for defining the direct product of two function spaces.
Q = firedrake.FunctionSpace(mesh, family='CG', degree=1)
V = firedrake.VectorFunctionSpace(mesh, family='CG', degree=2)
Z = V * Q
We can access the components of a function that lives in this product space using the usual Python indexing operators, but it's more convenient to use the function firedrake.split to give us handles for the two components.
z = firedrake.Function(Z)
u, p = firedrake.split(z)
This way our code to define the objective functional looks as much like the math as possible, rather than having to constantly reference the components.
We'll use a viscosity coefficient $\mu$ of 1000. Since the diameter of the domain and the fluid velocity are both on the order of 1, the viscosity would need to be fairly large for the Stokes equations to actually be applicable.
from firedrake import inner, sym, grad, div, dx
def ε(u):
return sym(grad(u))
μ = firedrake.Constant(1e3)
𝒢̇ = (μ * inner(ε(u), ε(u)) - p * div(u)) * dx
One of the extra challenging factors about the Stokes equations is that they have a non-trivial null space. To see this, suppose we have some velocity-pressure pair $u$, $p$. The velocity field $u$ is not necessarily divergence-free, but we do need that $u\cdot n = 0$ on the boundary of the domain. If we add a constant factor $p_0$ to the pressure, then the value of the objective functional is unchanged:
\begin{align} \dot{\mathscr{G}}(u, p) - \dot{\mathscr{G}}(u, p + p_0) & = \int_\Omega p_0\nabla\cdot u\, dx \\ & = p_0\int_{\partial\Omega}u\cdot n\, ds = 0. \end{align}
In order to obtain a unique solution to the system, we can impose the additional constraint that
$$\int_\Omega p\, dx = 0,$$
or in other words that the pressure must be orthogonal to all the constant functions.
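Discretely, orthogonality to the constants just amounts to removing the mean value. A minimal numpy sketch (on a uniform grid, so the integral is proportional to the sum):

```python
import numpy as np

# Subtracting the mean removes the component of a sample pressure vector
# along the constant function, enforcing the discrete analogue of ∫p dx = 0.
p = np.array([3.0, 5.0, 1.0, 7.0])
p_zero_mean = p - p.mean()

print(p_zero_mean.sum())  # 0.0: orthogonal to the constants
```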
from firedrake import MixedVectorSpaceBasis, VectorSpaceBasis
basis = VectorSpaceBasis(constant=True)
nullspace = MixedVectorSpaceBasis(Z, [Z.sub(0), basis])
Next we have to create the boundary conditions. The only extra work we have to do here is to get the right component of the mixed function space $Z$.
x = firedrake.SpatialCoordinate(mesh)
x2 = firedrake.as_vector((0, +1/2))
r2 = firedrake.Constant(1/8)
x3 = firedrake.as_vector((0, -1/2))
r3 = firedrake.Constant(1/8)
q2 = (x - x2) / r2
q3 = (x - x3) / r3
u2 = firedrake.as_vector((-q2[1], q2[0]))
u3 = firedrake.as_vector((-q3[1], q3[0]))
from firedrake import DirichletBC, as_vector
bc1 = DirichletBC(Z.sub(0), as_vector((0, 0)), 1)
bc2 = DirichletBC(Z.sub(0), u2, 2)
bc3 = DirichletBC(Z.sub(0), u3, 3)
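The boundary velocities we just prescribed, $u = (-(y - y_0)/r, (x - x_0)/r)$, are rigid rotations of the mixer heads, so they should be purely tangential. A quick numpy check (independent of Firedrake) that this field has no normal component on a circle:

```python
import numpy as np

# Sample points on the upper mixer head and evaluate the rotational BC there.
center = np.array([0.0, 0.5])
radius = 1 / 8
θ = np.linspace(0, 2 * np.pi, 64, endpoint=False)
points = center + radius * np.stack([np.cos(θ), np.sin(θ)], axis=1)

q = (points - center) / radius             # unit radial direction
u = np.stack([-q[:, 1], q[:, 0]], axis=1)  # radial direction rotated by 90°

# The dot product with the radial (normal) direction vanishes everywhere.
print(np.abs(np.sum(u * q, axis=1)).max())  # ≈ 0
```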
Now let's see what happens if we invoke the default linear solver.
from firedrake import derivative
try:
firedrake.solve(
derivative(𝒢̇, z) == 0, z,
bcs=[bc1, bc2, bc3],
nullspace=nullspace
)
except firedrake.ConvergenceError:
print("Oh heavens, it didn't converge!")
We'll take the easy way out and use the sparse direct solver MUMPS to make sure we get an answer. This approach will work for now, but even parallel direct solvers scale poorly to large problems, especially in 3D. The proper incantation to invoke the direct solver needs a bit of explaining. For mixed problems like the Stokes equations, Firedrake will assemble a special matrix type that exploits the problem's block structure. Unfortunately MUMPS can't work with this matrix format, so we have to specify that it will use PETSc's aij matrix format with the option 'mat_type': 'aij'. Next, we'll request that the solver use an LU factorization with 'pc_type': 'lu'. Without any other options, this will use PETSc's built-in matrix factorization routines. These are fine for positive-definite matrices, but fail when the problem has a non-trivial null space. The option 'pc_factor_mat_solver_type': 'mumps' will use the MUMPS package instead of PETSc's built-in sparse direct solver.
firedrake.solve(
derivative(𝒢̇, z) == 0, z,
bcs=[bc1, bc2, bc3],
nullspace=nullspace,
solver_parameters={
'mat_type': 'aij',
'ksp_type': 'preonly',
'pc_type': 'lu',
'pc_factor_mat_solver_type': 'mumps'
}
)
Some cool features you can observe in the stream plot are the saddle point at the center of the domain and the two counter-rotating vortices that form on either side of it.
u, p = z.split()
fig, axes = plt.subplots()
axes.set_aspect('equal')
kwargs = {'resolution': 1/30, 'seed': 4, 'cmap': 'winter'}
streamlines = firedrake.streamplot(u, axes=axes, **kwargs)
fig.colorbar(streamlines);
The velocity field should be close to divergence-free; if we project the divergence into a DG(2) space we can see how large it actually is. There are some small deviations, especially around the boundary of the domain. Part of the problem is that the boundary conditions we've specified are exactly tangent to the idealized domain -- a large circle with two circular holes punched out of it -- but not to its discrete approximation by a collection of straight edges.
S = firedrake.FunctionSpace(mesh, family='DG', degree=2)
div_u = firedrake.project(div(u), S)
fig, axes = plt.subplots()
axes.set_aspect('equal')
kwargs = {'vmin': -0.01, 'vmax': +0.01, 'cmap': 'seismic'}
triangles = firedrake.tripcolor(div_u, axes=axes, **kwargs)
fig.colorbar(triangles);
For now we'll calculate and store the norm of the velocity divergence. When we try to improve on this, we'll use this value as a baseline.
linear_coords_divergence = firedrake.norm(div_u)
print(linear_coords_divergence)
0.026532615307004404
#### Higher-order geometries¶
We can try to improve on this by using curved edges for the geometry instead of straight ones. The topology of the mesh is the same; we're just adding more data describing how it's embedded into Euclidean space. In principle, gmsh can generate this for us, but reading in the file seems to be awfully annoying. To get a higher-order geometry, we'll proceed by:
1. making a quadratic vector function space
2. interpolating the linear coordinates into this space
3. patching the new coordinate field to conform to the boundary
This approach will work for our specific problem but it requires us to know things about the idealized geometry that aren't always available. So what we're about to do isn't exactly generalizable.
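The patching step is just a radial projection: each boundary vertex $p$ gets mapped to $c + r(p - c)/|p - c|$, where $c$ and $r$ are the center and radius of the idealized circle. A small numpy sketch of that map and a check that the result lands exactly on the circle:

```python
import numpy as np

def project_to_circle(p, center, radius):
    """Push a point onto the circle of the given center and radius."""
    d = p - center
    return center + radius * d / np.linalg.norm(d)

center = np.array([0.0, 0.0])
p = np.array([0.99, 0.05])          # a vertex slightly off the unit circle
q = project_to_circle(p, center, 1.0)
print(np.linalg.norm(q - center))   # 1.0: the projected point is on the circle
```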
To do the patching in step 3, we'll create boundary condition objects defined on the quadratic function space and then apply them. We need to know the numbering of the various boundary segments in order to do that, so to refresh the memory let's look at the mesh again.
fig, axes = plt.subplots()
axes.set_aspect('equal')
firedrake.triplot(mesh, axes=axes)
axes.legend();
The outer curve is boundary 1, the upper mixer head is boundary 2, and the lower head is boundary 3. With that in mind we can create the new coordinate field.
Vc = firedrake.VectorFunctionSpace(mesh, family='CG', degree=2)
from firedrake import sqrt, Constant
def fixup(x, center, radius):
distance = sqrt(inner(x - center, x - center))
return center + radius * (x - center) / distance
centers = [Constant((0., 0.)), Constant((0., +.5)), Constant((0., -0.5))]
radii = [Constant(1.), Constant(1/8), Constant(1/8)]
bcs = [firedrake.DirichletBC(Vc, fixup(x, center, radius), index + 1)
for index, (center, radius) in enumerate(zip(centers, radii))]
X0 = firedrake.interpolate(mesh.coordinates, Vc)
X = X0.copy(deepcopy=True)
for bc in bcs:
bc.apply(X)
Just as a sanity check, we'll calculate the average deviation of the new from the old coordinate field to see how different they are.
from firedrake import ds
length = firedrake.assemble(Constant(1.) * ds(mesh))
firedrake.assemble(sqrt(inner(X - X0, X - X0)) * ds) / length
0.000180710597501049
Now we can solve the Stokes equations again on this new mesh using the exact same procedures as before.
qmesh = firedrake.Mesh(X)
Q = firedrake.FunctionSpace(qmesh, family='CG', degree=1)
V = firedrake.VectorFunctionSpace(qmesh, family='CG', degree=2)
Z = V * Q
z = firedrake.Function(Z)
u, p = firedrake.split(z)
𝒢̇ = (μ * inner(ε(u), ε(u)) - p * div(u)) * dx
basis = VectorSpaceBasis(constant=True)
nullspace = MixedVectorSpaceBasis(Z, [Z.sub(0), basis])
x = firedrake.SpatialCoordinate(qmesh)
x2 = firedrake.as_vector((0, +1/2))
r2 = firedrake.Constant(1/8)
x3 = firedrake.as_vector((0, -1/2))
r3 = firedrake.Constant(1/8)
q2 = (x - x2) / r2
q3 = (x - x3) / r3
u2 = firedrake.as_vector((-q2[1], q2[0]))
u3 = firedrake.as_vector((-q3[1], q3[0]))
from firedrake import DirichletBC, as_vector
bc1 = DirichletBC(Z.sub(0), as_vector((0, 0)), 1)
bc2 = DirichletBC(Z.sub(0), u2, 2)
bc3 = DirichletBC(Z.sub(0), u3, 3)
firedrake.solve(
derivative(𝒢̇, z) == 0, z,
bcs=[bc1, bc2, bc3],
nullspace=nullspace,
solver_parameters={
'mat_type': 'aij',
'ksp_type': 'preonly',
'pc_type': 'lu',
'pc_factor_mat_solver_type': 'mumps'
}
)
S = firedrake.FunctionSpace(qmesh, family='DG', degree=2)
div_u = firedrake.project(div(u), S)
The ring of spurious divergences around the outer edge of the domain is substantially reduced with curved elements. Nonetheless, the boundary doesn't perfectly fit the circle and this imperfection means that at some points around the edge the discretized velocity field will have an unphysical, non-zero normal component.
fig, axes = plt.subplots()
axes.set_aspect('equal')
triangles = firedrake.tripcolor(div_u, axes=axes, **kwargs)
fig.colorbar(triangles);
Using a higher-order geometry reduced the norm of the velocity divergence almost by a factor of 4, which is a big improvement.
quadratic_coords_divergence = firedrake.norm(div_u)
print(linear_coords_divergence / quadratic_coords_divergence)
3.6951400937053247
#### Conclusion¶
The code above shows how to get an exact (up to rounding-error) solution to the discretized Stokes equations using MUMPS. For larger problems in 3D, using a direct method can become prohibitively expensive. The Firedrake documentation has a demo of how to use PETSc's field split preconditioners, together with matrix-free operators, to solve the Stokes equations efficiently. In subsequent posts, I'll show more about stable discretizations of mixed problems, and how to solve the Stokes equations with more exotic boundary conditions than the standard ones we've shown here.
The velocity field we calculated was not exactly divergence-free and part of this was a consequence of using a boundary condition that adapted poorly to a piecewise-linear discretized geometry. We were able to do better by increasing the polynomial degree of the geometry, and in general this is absolutely necessary to achieve the expected rates of convergence with higher-order finite element bases. Nonetheless, the support for higher-order geometries in common finite element and mesh generation packages should be better given how useful they are. I think this is an area where a little investment in resources could make a really outsized difference. The logical endpoint of this line of thinking is isogeometric analysis, which is an active area of research.
# The obstacle problem
In this post, we'll look at the obstacle problem. We've seen in previous posts examples of variational problems -- minimization of some functional with respect to a field. The classic example of a variational problem is to find the function $u$ that minimizes the Dirichlet energy
$$\mathscr{J}(u) = \int_\Omega\left(\frac{1}{2}|\nabla u|^2 - fu\right)dx$$
subject to the homogeneous Dirichlet boundary condition $u|_{\partial\Omega} = 0$. The Poisson equation is especially convenient because the objective is convex and quadratic. The obstacle problem is what you get when you add the additional constraint
$$u \ge g$$
throughout the domain. More generally, we can look at the problem of minimizing a convex functional $\mathscr{J}$ subject to the constraint that $u$ has to live in a closed, convex set $K$ of a function space $Q$. For a totally unconstrained problem, $K$ would just be the whole space $Q$.
Newton's method with line search is a very effective algorithm for solving unconstrained convex problems, even for infinite-dimensional problems like PDEs. Things get much harder when you include inequality constraints. To make matters worse, much of the literature you'll find on this subject is focused on finite-dimensional problems, where techniques like the active-set method work quite well. It's not so obvious how to generalize these methods to variational problems. In the following, I'll follow the approach in section 4.1 of this paper by Farrell, Croci, and Surowiec, which was my inspiration for writing this post.
Minimizing the action functional $\mathscr{J}$ over the convex set $K$ can be rephrased as an unconstrained problem to minimize the functional
$$\mathscr{J}(u) + \mathscr{I}(u),$$
where $\mathscr{I}$ is the indicator function of the set $K$:
$$\mathscr{I}(u) = \begin{cases}0 & u \in K \\ \infty & u \notin K\end{cases}.$$
This functional is still convex, but it can take the value $\infty$. The reformulation isn't especially useful by itself, but we can approximate it using the Moreau envelope. The envelope of $\mathscr{I}$ is defined as
$$\mathscr{I}_\gamma(u) = \min_v\left(\mathscr{I}(v) + \frac{1}{2\gamma^2}\|u - v\|^2\right).$$
In the limit as $\gamma \to 0$, $\mathscr{I}_\gamma(u) \to \mathscr{I}(u)$. The Moreau envelope is much easier to work with than the original functional because it's differentiable. In some cases it can be computed analytically; for example, when $\mathscr{I}$ is an indicator function,
$$\mathscr{I}_\gamma(u) = \frac{1}{2\gamma^2}\text{dist}\,(u, K)^2$$
where $\text{dist}$ is the distance to a convex set. We can do even better for our specific case, where $K$ is the set of all functions greater than $g$. For this choice of $K$, the distance to $K$ is
$$\text{dist}(u, K)^2 = \int_\Omega(u - g)_-^2dx,$$
where $v_- = \min(v, 0)$ is the negative part of $v$. So, our approach to solving the obstacle problem will be to find the minimizers of
$$\mathscr{J}_\gamma(u) = \int_\Omega\left(\frac{1}{2}|\nabla u|^2 - fu\right)dx + \frac{1}{2\gamma^2}\int_\Omega(u - g)_-^2dx$$
as $\gamma$ goes to 0. I've written things in such a way that $\gamma$ has units of length. Rather than take $\gamma$ to 0 we can instead stop at some fraction of the finite element mesh spacing. At that point, the errors in the finite element approximation are comparable to the distance of the approximate solution to the constraint set.
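We can check the closed form for the Moreau envelope numerically in a scalar setting. For $K = \{v : v \ge g\}$ in one dimension, a brute-force minimization over feasible $v$ should reproduce $(u - g)_-^2/2\gamma^2$:

```python
import numpy as np

# Brute-force the definition of the Moreau envelope of the indicator of
# K = {v : v ≥ g} and compare with the quadratic distance penalty.
g, γ, u = 1.0, 0.25, 0.4                  # u < g, so u is infeasible
vs = np.linspace(g, 3.0, 20001)           # feasible v only, where I(v) = 0
envelope = np.min((u - vs)**2 / (2 * γ**2))
closed_form = min(u - g, 0.0)**2 / (2 * γ**2)
print(envelope, closed_form)              # both 2.88
```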
This is a lot like the penalty method for optimization problems with equality constraints. One of the main practical considerations when applying this regularization method is that the solution $u$ only satisfies the inequality constraints approximately. For the obstacle problem this deficiency isn't so severe, but for other problems we may need the solution to stay strictly feasible. In those cases, another approach like the logarithmic barrier method might be more appropriate.
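To see the approximate feasibility concretely, here's a 1D caricature of the penalized obstacle problem that can be solved by hand: minimize $\frac{1}{2}u^2 - u$ subject to $u \ge 2$. The constrained minimizer is $u = 2$, while the penalized objective $\frac{1}{2}u^2 - u + (u - 2)_-^2/2\gamma^2$ has minimizer $u = (\gamma^2 + 2)/(\gamma^2 + 1)$, which approaches 2 from below as $\gamma \to 0$ -- always slightly infeasible:

```python
# The penalized minimizer for several γ; it converges to the constrained
# minimizer u = 2 but always sits slightly below it.
for γ in [1.0, 0.1, 0.01]:
    u = (γ**2 + 2) / (γ**2 + 1)
    print(γ, u)   # 1.5, then ≈ 1.9901, then ≈ 1.99990
```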
#### Demonstration¶
For our problem, the domain will be the unit square and the obstacle function $g$ will be the upper half of a sphere.
import firedrake
nx, ny = 64, 64
mesh = firedrake.UnitSquareMesh(nx, ny, quadrilateral=True)
Q = firedrake.FunctionSpace(mesh, family='CG', degree=1)
from firedrake import max_value, sqrt, inner, as_vector, Constant
def make_obstacle(mesh):
x = firedrake.SpatialCoordinate(mesh)
y = as_vector((1/2, 1/2))
z = 1/4
return sqrt(max_value(z**2 - inner(x - y, x - y), 0))
g = firedrake.interpolate(make_obstacle(mesh), Q)
import matplotlib.pyplot as plt
import mpl_toolkits.mplot3d
fig = plt.figure()
axes = fig.add_subplot(projection='3d')
firedrake.trisurf(g, axes=axes);
Next we'll make a few utility procedures to create the Moreau envelope of the objective functional and to calculate a search direction from a given starting guess.
from firedrake import grad, dx, min_value
def make_objective(u, g, γ):
J_elastic = 0.5 * inner(grad(u), grad(u)) * dx
J_penalty = 0.5 / γ**2 * min_value(u - g, 0)**2 * dx
return J_elastic + J_penalty
from firedrake import derivative
def update_search_direction(J, u, v):
F = derivative(J, u)
H = derivative(F, u)
bc = firedrake.DirichletBC(u.function_space(), 0, 'on_boundary')
params = {'ksp_type': 'cg', 'pc_type': 'icc'}
firedrake.solve(H == -F, v, bc, solver_parameters=params)
Let's start from a zero initial guess and see what the first search direction will be.
u = firedrake.Function(Q)
γ = Constant(1.)
J = make_objective(u, g, γ)
v = firedrake.Function(Q)
update_search_direction(J, u, v)
fig = plt.figure()
axes = fig.add_subplot(projection='3d')
firedrake.trisurf(v, axes=axes);
To make sure that a Newton-type method will converge, we'll need a routine to perform a 1D minimization along the search direction.
from scipy.optimize import minimize_scalar
from firedrake import assemble, replace
def line_search(J, u, v):
def J_line(step):
t = firedrake.Constant(step)
J_t = replace(J, {u: u + t * v})
return assemble(J_t)
result = minimize_scalar(J_line)
assert result.success
return result.x
t = line_search(J, u, v)
print(t)
1.0000833865577647
With these steps out of the way we can define a Newton search procedure and calculate a solution for our initial, rough guess of $\gamma$.
from firedrake import action
def newton_search(J, u, tolerance=1e-10, max_num_steps=30):
v = firedrake.Function(u.function_space())
F = derivative(J, u)
for step in range(max_num_steps):
update_search_direction(J, u, v)
Δ = assemble(action(F, v))
if abs(Δ) < tolerance * assemble(J):
return
t = Constant(line_search(J, u, v))
u.assign(u + t * v)
newton_search(J, u)
fig = plt.figure()
axes = fig.add_subplot(projection='3d')
firedrake.trisurf(u, axes=axes);
The solution we obtain doesn't do a good job of staying above the obstacle because we haven't used a sufficiently small value of $\gamma$.
δ = firedrake.interpolate(max_value(g - u, 0), Q)
print(firedrake.assemble(δ * dx) / firedrake.assemble(g * dx))
0.9680039453017546
Instead, we can use the solution obtained for one value of $\gamma$ to initialize a search for the solution with $\gamma / 2$ and iterate. We've chosen this slightly indirect route rather than starting from a small value of $\gamma$ directly because the problem may be very poorly conditioned. The numerical continuation approach can still give a reasonable answer even for poorly-conditioned problems.
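The continuation strategy is easy to sketch on the same 1D toy problem from before (minimize $\frac{1}{2}u^2 - u$ subject to $u \ge 2$): solve the penalized problem by Newton's method, halve $\gamma$, and use the previous minimizer as the next starting guess.

```python
def newton_1d(u, γ, num_steps=20):
    """Newton's method on ½u² - u + (u - 2)_-²/2γ², the toy penalized objective."""
    for _ in range(num_steps):
        r = min(u - 2.0, 0.0)                 # active part of the penalty
        grad = u - 1.0 + r / γ**2
        hess = 1.0 + (1.0 / γ**2 if u < 2.0 else 0.0)
        u -= grad / hess
    return u

u, γ = 1.0, 1.0                               # rough initial γ, as in the text
for _ in range(10):
    u = newton_1d(u, γ)
    γ *= 0.5
print(u)  # ≈ 2: the constrained minimizer
```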
def continuation_search(g, γ0, num_steps, contraction=0.5):
u = g.copy(deepcopy=True)
γ = Constant(γ0)
for step in range(num_steps):
J = make_objective(u, g, γ)
newton_search(J, u)
γ.assign(contraction * γ)
return u
We'll choose a number of steps so that the final value of $\gamma$ is roughly proportional to the mesh spacing.
import numpy as np
num_steps = int(np.log2(nx)) + 1
print(num_steps)
u = continuation_search(g, 1., num_steps)
7
Finally, I'll plot a cross section of the solution and the constraint $g$ so that you can see where the two coincide.
fig, axes = plt.subplots()
num_points = 51
xs = np.linspace(0., 1., num_points)
ys = 0.5 * np.ones(num_points)
X = np.array((xs, ys)).T
axes.plot(xs, g.at(X), color='tab:orange')
axes.plot(xs, u.at(X), color='tab:blue');
#### Refinement¶
The code above worked well enough for a single grid, but one of the hard parts about optimization with PDE constraints is making sure that our algorithms do sane things under mesh refinement. Many common algorithms can have different convergence rates depending on the mesh or the degree of the finite element basis. The reasons for this are a little involved, but if you want to read more, I recommend this book by Málek and Strakos.
To really make sure we're doing things right, we should run this experiment at several levels of mesh refinement. We can do this easily using the MeshHierarchy function in Firedrake.
coarse_mesh = firedrake.UnitSquareMesh(nx, ny, quadrilateral=True)
num_levels = 3
mesh_hierarchy = firedrake.MeshHierarchy(coarse_mesh, num_levels)
for level, mesh in enumerate(mesh_hierarchy):
Q = firedrake.FunctionSpace(mesh, family='CG', degree=1)
g = firedrake.interpolate(make_obstacle(mesh), Q)
num_continuation_steps = int(np.log(nx)) + level + 1
u = continuation_search(g, 1, num_continuation_steps)
print(assemble(max_value(g - u, 0) * dx))
0.0034424020045330894
0.0009409740393031862
0.00024126075715710023
6.073519021737773e-05
If we plot the volume of the region where $u$ is less than $g$, it decreases roughly by a factor of four on every mesh refinement. This rate of decrease makes sense -- the area of each cell decreases by the same amount on each refinement. Doing a more thorough convergence study would require more computational power, but for now this is a promising sign that our algorithm works correctly.
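The factor-of-four claim is easy to check directly from the values printed above:

```python
# Infeasible volumes at each refinement level, copied from the output above;
# successive ratios should hover around 4.
volumes = [
    0.0034424020045330894,
    0.0009409740393031862,
    0.00024126075715710023,
    6.073519021737773e-05,
]
ratios = [a / b for a, b in zip(volumes, volumes[1:])]
print(ratios)  # ≈ [3.66, 3.90, 3.97]
```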
#### Discussion¶
We were able to get a convergent approximation scheme for the obstacle problem by expressing the constraint as an indicator functional and then using Moreau-Yosida regularization. The idea of regularizing non-smooth optimization problems is a more general trick; we can use it for things like $L^1$ or total variation penalties as well. The Moreau envelope is another angle to look at proximal algorithms from too.
For the obstacle problem, regularization made it possible to describe every part of the algorithm using higher-level concepts (fields, functionals) without having to dive down to lower levels of abstraction (matrices, vectors). In order to implement other approaches, like the active set method, we would have no choice but to pull out the PETSc matrices and vectors that lie beneath, which is a more demanding prospect.
# Variational calculus
In this post I'll look at a classic example of a convex variational problem: computing minimal surfaces. The minimal surface problem has a simple physical interpretation in terms of soap films. Suppose you have a wire loop and you stretch a film of soap over it; what shape does the film take? The available energy that the film has to do mechanical work is proportional to the product of the surface tension and the area of the film. When the film is in equilibrium, it will minimize the energy, so it will find the surface of least area that stretches over the hoop. This shape is called a minimal surface.
Here we'll look at a geometrically simpler case where the surface can be described as the graph of a function defined on some footprint domain $\Omega$ that lives in the plane. We'll describe the position of the hoop as a function $g$ that maps the boundary $\partial\Omega$ to the reals, and the surface as a function $u$ on $\Omega$. The surface area of the graph of $u$ is the quantity
$$J(u) = \int_\Omega\sqrt{1 + |\nabla u|^2}\,dx.$$
So, our goal is to minimize the objective functional $J$ among all functions $u$ such that $u|_{\partial\Omega} = g$. This is a classic example in variational calculus, which I'll assume you're familiar with. If you haven't encountered this topic before, I learned about it from Weinstock's book.
The weak form of the Euler-Lagrange equation for $J$ is
$$\int_\Omega\frac{\nabla u\cdot\nabla v}{\sqrt{1 + |\nabla u|^2}}dx = 0$$
for all $v$ that vanish on the boundary. This PDE is just a specific way of stating the general condition that, for $u$ to be an extremum of $J$, its directional derivative along all perturbations $v$ must be 0:
$$\langle dJ(u), v\rangle = 0.$$
We can go a little bit further and calculate the second derivative of $J$ too:
$$\langle d^2J(u)\cdot v, w\rangle = \int_\Omega\frac{I - \frac{\nabla u\cdot \nabla u^*}{1 + |\nabla u|^2}}{\sqrt{1 + |\nabla u|^2}}\nabla v\cdot \nabla w\, dx.$$
Deriving this equation takes a bit of leg work, but the important part is that it looks like a symmetric, positive-definite elliptic operator, except that the conductivity tensor depends on the gradient of $u$. Since the second derivative of $J$ is positive-definite, the minimization problem is convex and thus has a unique solution.
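The algebraic heart of the convexity claim is that the "conductivity" tensor in the second derivative is symmetric positive-definite at any value of $\nabla u$. We can spot-check that with numpy at a sample gradient:

```python
import numpy as np

# C = (I - ∇u ∇uᵀ/(1 + |∇u|²)) / √(1 + |∇u|²) at a sample gradient.
grad_u = np.array([1.7, -0.4])
s = 1.0 + grad_u @ grad_u
C = (np.eye(2) - np.outer(grad_u, grad_u) / s) / np.sqrt(s)

eigenvalues = np.linalg.eigvalsh(C)
print(eigenvalues)  # both positive: C is symmetric positive-definite
```

In fact the eigenvalues of $I - \nabla u\nabla u^*/(1 + |\nabla u|^2)$ are $1/(1 + |\nabla u|^2)$ and 1, so positivity holds for every gradient, not just this one.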
There are many approaches you could take to solving the minimal surface equation. I'll examine some here using the finite element modeling package Firedrake. If you're unfamiliar with Firedrake or FEniCS, their main selling point is that, rather than write code to fill matrices and vectors yourself, these packages use an embedded domain-specific language to describe the weak form of the PDE. The library then generates efficient C code on the spot to fill these matrices and vectors. Having done all this by hand for several years I can tell you this is a big improvement!
import firedrake
To keep things simple, we'll use the unit square as our spatial domain, and we'll use piecewise quadratic finite elements.
mesh = firedrake.UnitSquareMesh(100, 100, quadrilateral=True)
Q = firedrake.FunctionSpace(mesh, family='CG', degree=2)
I'll use a test case from some course notes from a class that Douglas Arnold teaches on finite element methods. The boundary curve is
$$g = ax\cdot\sin\left(\frac{5}{2}\pi y\right).$$
In the notes, Arnold uses $a = 1/5$. When the numerical range of $g$ is small relative to the diameter of the domain, the minimal surface equation linearizes to the Laplace equation. I want to instead look at the more nonlinear case of $a > 1$, which will stress the nonlinear solver a good deal more.
x, y = firedrake.SpatialCoordinate(mesh)
from numpy import pi as π
from firedrake import sin
a = firedrake.Constant(3/2)
g = a * x * sin(5 * π * y / 2)
A picture is worth a thousand words of course.
import matplotlib.pyplot as plt
import mpl_toolkits.mplot3d
fig = plt.figure()
axes = fig.add_subplot(projection='3d')
firedrake.trisurf(firedrake.interpolate(g, Q), axes=axes);
Here we'll create the proposed solution $u$, define the objective functional, and try to find the minimizer naively using Firedrake's built-in solver. With the value for $a$ that I chose, the solver won't converge using its default settings.
u = firedrake.interpolate(g, Q)
bc = firedrake.DirichletBC(Q, g, 'on_boundary')
from firedrake import sqrt, inner, grad, dx
J = sqrt(1 + inner(grad(u), grad(u))) * dx
F = firedrake.derivative(J, u)
try:
firedrake.solve(F == 0, u, bc)
except firedrake.ConvergenceError:
print('Woops, nonlinear solver failed to converge!')
Woops, nonlinear solver failed to converge!
We could tweak these settings to make the solver converge, but instead let's try and dive deeper into what does and doesn't make for a good nonlinear solver.
##### Picard's method¶
This method is predicated on the idea that many nonlinear PDEs look like a linear problem with coefficients that depend on the solution. If you freeze those coefficients at the current guess for the solution, you get something that's fairly easy to solve and hopefully convergent. Suppose we've got a guess $u_n$ for the solution of the minimal surface equation. The Picard method would give us a next guess $u_{n + 1}$ that solves the linear PDE
$$\int_\Omega\frac{\nabla u_{n + 1}\cdot\nabla v}{\sqrt{1 + |\nabla u_n|^2}}dx = 0$$
for all $v$ that vanish on the boundary. This method is easy to implement if you know the functional form of the problem you're solving. Let's see how fast this decreases the area.
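Before the 2D version, here's a self-contained 1D analogue with finite differences: minimize $\int_0^1\sqrt{1 + u'^2}\,dx$ with $u(0) = 0$, $u(1) = 1$, whose exact minimizer is the straight line $u = x$. Each Picard step solves $-\frac{d}{dx}(c\,u') = 0$ with $c = 1/\sqrt{1 + u_n'^2}$ frozen at the previous iterate:

```python
import numpy as np

n = 50
x = np.linspace(0.0, 1.0, n + 1)
h = 1.0 / n
u = x**3                                   # a deliberately curved initial guess

for _ in range(30):
    c = 1.0 / np.sqrt(1.0 + (np.diff(u) / h)**2)   # one coefficient per cell
    # Assemble the interior tridiagonal system for -d/dx(c u') = 0.
    A = np.zeros((n - 1, n - 1))
    b = np.zeros(n - 1)
    for i in range(n - 1):
        A[i, i] = c[i] + c[i + 1]
        if i > 0:
            A[i, i - 1] = -c[i]
        if i < n - 2:
            A[i, i + 1] = -c[i + 1]
    b[-1] = c[-1] * 1.0                    # boundary value u(1) = 1
    u[1:-1] = np.linalg.solve(A, b)

print(np.abs(u - x).max())  # ≈ 0: the iterates flatten out to the straight line
```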
u.interpolate(g)
u_n = u.copy(deepcopy=True)
v = firedrake.TestFunction(Q)
G = inner(grad(u), grad(v)) / sqrt(1 + inner(grad(u_n), grad(u_n))) * dx
import numpy as np
num_iterations = 24
Js = np.zeros(num_iterations)
Js[0] = firedrake.assemble(J)
for step in range(1, num_iterations):
firedrake.solve(G == 0, u, bc)
u_n.assign(u)
Js[step] = firedrake.assemble(J)
The method converges in the eyeball norm in about 6 iterations.
fig, axes = plt.subplots()
axes.scatter(list(range(num_iterations)), Js, label='surface area')
axes.set_xlabel('iteration')
axes = axes.twinx()
axes.scatter(list(range(1, num_iterations)), -np.diff(Js) / Js[1:],
color='tab:orange', label='relative decrease')
axes.set_ylim(1e-6, 1)
axes.set_yscale('log')
fig.legend(loc='upper center');
This looks pretty good -- the iterates converge very rapidly to the minimizer. There are still reasons to look for something better though. Picard's method relies on the problem having special structure, which is true of the minimal surface equation but harder to find for other problems.
##### Newton's method (take 1)¶
One of the best known methods is due to Newton. The idea behind Newton's method is to use the Taylor expansion of the objective at the current guess $u_{n - 1}$ up to second order to define a quadratic approximation to the objective:
$$J(u_n + v) = J(u_n) + \langle F, v\rangle + \frac{1}{2}\langle Hv, v\rangle + \ldots$$
where $F = dJ(u_n)$, $H = d^2J(u_n)$ are the first and second derivatives of the objective. We can then define a new iterate as the minimizer of this quadratic problem:
$$u_{n + 1} = u_n + \text{argmin}_v\, \langle F, v\rangle + \frac{1}{2}\langle Hv, v\rangle.$$
The big advantage of Newton's method is that, for a starting guess sufficiently close to the solution, the iterates converge quadratically to the minimizer. Picard's method converges at best linearly.
One of the advantages of Newton's method is that there are many software packages for automatically calculating first and second derivatives of nonlinear functionals. So it's easy to apply to a broad class of problems. It isn't quite so clear how to select the right linear operator for Picard's method.
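The difference between linear and quadratic convergence is easiest to see on a scalar problem (this is just an illustration, not the PDE itself). Solving $x = \cos x$ by fixed-point (Picard-style) iteration versus Newton's method:

```python
import numpy as np

def picard(x, steps):
    """Fixed-point iteration x ← cos(x): linear convergence."""
    for _ in range(steps):
        x = np.cos(x)
    return x

def newton(x, steps):
    """Newton's method on f(x) = x - cos(x): quadratic convergence."""
    for _ in range(steps):
        x -= (x - np.cos(x)) / (1 + np.sin(x))
    return x

root = newton(1.0, 50)                     # heavily converged reference value
print(abs(picard(1.0, 5) - root))          # ≈ 0.04 after 5 Picard steps
print(abs(newton(1.0, 5) - root))          # essentially machine precision
```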
u.interpolate(g)
F = firedrake.derivative(J, u)
H = firedrake.derivative(F, u)
v = firedrake.Function(Q)
num_iterations = 24
Js = np.zeros(num_iterations + 1)
Js[0] = firedrake.assemble(J)
bc = firedrake.DirichletBC(Q, 0, 'on_boundary')
params = {'ksp_type': 'cg', 'pc_type': 'icc'}
try:
for step in range(1, num_iterations):
firedrake.solve(H == -F, v, bc, solver_parameters=params)
u += v
Js[step] = firedrake.assemble(J)
except firedrake.ConvergenceError:
print('Newton solver failed after {} steps!'.format(step))
Newton solver failed after 3 steps!
Doesn't bode very well does it? Let's see what the objective functional did before exploding:
print(Js[:step])
[ 4.27174587 13.03299587 2847.93224483]
Not a lot to save from the wreckage here -- the objective functional was increasing, which is just the opposite of what we want. What happened? Newton's method will converge quadratically if initialized close enough to the true solution. We don't have any idea a priori if we're close enough, and if we aren't then there's no guarantee that the iterates will converge at all. The example from Doug Arnold's course notes used a much smaller amplitude $a$ in the boundary data, so the initial guess is already within the convergence basin.
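A scalar caricature of this failure mode: Newton's method for $\arctan(x) = 0$ diverges whenever the starting guess is outside a small basin around the root at 0, with the iterates overshooting further on every step.

```python
import numpy as np

x = 2.0                                    # outside the convergence basin
history = [x]
for _ in range(5):
    x = x - np.arctan(x) * (1 + x**2)      # Newton update for f = arctan
    history.append(x)

print(history)  # the iterates blow up instead of approaching the root
```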
##### Newton's method (take 2)
But there's always hope! Suppose $v$ is a function such that the directional derivative of $J$ at $u$ along $v$ is negative:
$$\langle dJ(u), v\rangle < 0.$$
Then there must be some sufficiently small real number $t$ such that
$$J(u + t\cdot v) < J(u).$$
If we do have a descent direction in hand, then the problem of finding a better guess $u_{n + 1}$ starting from $u_n$ is reduced to the one-dimensional problem to minimize $J(u_n + t\cdot v)$ with respect to the real variable $t$.
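Here is a minimal sketch of that 1D reduction as a backtracking (Armijo) line search in plain Python. The helper name, shrink factor, and tolerance are my own choices for illustration; the sketch assumes `dJ_v`, the directional derivative $\langle dJ(u), v\rangle$, is negative.

```python
def backtracking_line_search(J, u, v, dJ_v, t0=1.0, shrink=0.5, c=1e-4):
    # Shrink t until J(u + t*v) sits sufficiently below J(u)
    # (the Armijo sufficient-decrease condition).
    t = t0
    while J(u + t * v) > J(u) + c * t * dJ_v:
        t *= shrink
    return t

# Toy check on J(u) = u^2 at u = 1 with descent direction v = -1:
J = lambda u: u * u
t = backtracking_line_search(J, 1.0, -1.0, dJ_v=-2.0)
assert J(1.0 + t * -1.0) < J(1.0)
```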
If $H$ is any symmetric, positive-definite linear operator, then
$$v = -H^{-1}dJ(u)$$
is a descent direction for $J$. While the pure Newton method can diverge for some starting guesses, it does offer up a really good way to come up with descent directions for convex problems because the second derivative of the objective is positive-definite. This suggests the following algorithm:
\begin{align} v_n & = -d^2J(u_n)^{-1}dJ(u_n) \\ t_n & = \text{argmin}_t\, J(u_n + t\cdot v_n) \\ u_{n + 1} & = u_n + t_n\cdot v_n. \end{align}
This is called the damped Newton method or the Newton line search method. We can use standard packages like scipy to do the 1D minimization, as I'll show below.
u.interpolate(g)
F = firedrake.derivative(J, u)
H = firedrake.derivative(F, u)
v = firedrake.Function(Q)
bc = firedrake.DirichletBC(Q, 0, 'on_boundary')
import scipy.optimize
t = firedrake.Constant(1)
def J_t(s):
t.assign(s)
return firedrake.assemble(firedrake.replace(J, {u: u + t * v}))
num_iterations = 24
Js = np.zeros(num_iterations)
ts = np.zeros(num_iterations)
Δs = np.zeros(num_iterations)
Js[0] = firedrake.assemble(J)
from firedrake import action
for step in range(1, num_iterations):
firedrake.solve(H == -F, v, bc, solver_parameters=params)
Δs[step] = firedrake.assemble(-action(F, v))
line_search_result = scipy.optimize.minimize_scalar(J_t)
if not line_search_result.success:
raise firedrake.ConvergenceError('Line search failed at step {}!'
.format(step))
t_min = firedrake.Constant(line_search_result.x)
u.assign(u + t_min * v)
ts[step] = t_min
Js[step] = firedrake.assemble(J)
The same convergence plot as above for Newton's method paints a very different picture.
fig, axes = plt.subplots()
axes.scatter(list(range(num_iterations)), Js, label='surface area')
axes.set_xlabel('iteration')
axes = axes.twinx()
axes.scatter(list(range(1, num_iterations)), -np.diff(Js) / Js[1:],
color='tab:orange', label='relative decrease')
axes.set_ylim(1e-16, 1)
axes.set_yscale('log')
fig.legend(loc='upper center'); | 2023-03-27T09:27:36 | {
"domain": "shapero.xyz",
"url": "https://shapero.xyz/blog/index-1.html",
"openwebmath_score": 0.7145429253578186,
"openwebmath_perplexity": 1009.2257660600867,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES\n\n",
"lm_q1_score": 0.986777180969715,
"lm_q2_score": 0.66192288918838,
"lm_q1q2_score": 0.6531704026126387
} |
https://www.ask-math.com/greatest-integer-function.html | # Greatest Integer Function
The greatest integer function is also known as floor function or step function. It is written as f(x) = $[\![x ]\!]$
The value of $[\![x ]\!]$ is the largest integer that is less than or equal to x.
Example 1 : greatest integer of 2.7 = $[\![2.7 ]\!]$. The largest integer less than or equal to 2.7 is 2.
So, $[\![2.7 ]\!]$ = 2
Example 2 : greatest integer of -1.6 = $[\![-1.6 ]\!]$. The largest integer less than or equal to -1.6 is -2.
So, $[\![-1.6 ]\!]$ = -2
Note : For a positive number that is not an integer, the greatest integer is obtained by dropping the decimal part; for a negative number that is not an integer, it is the next smaller (more negative) integer. If the number has no decimal part, the greatest integer is the number itself.
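In Python, the greatest integer function is `math.floor`, which gives a quick way to check the examples above:

```python
import math

assert math.floor(2.7) == 2    # positive: drop the decimal part
assert math.floor(-1.6) == -2  # negative: the next smaller integer
assert math.floor(5) == 5      # an integer is its own greatest integer
```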
## Limit of Greatest Integer Function
1) Find the limit of greatest integer function f(x) = $[\![x ]\!]$ as x approaches to 0 from left and right side.
Solution : First we will graph the given function f(x) = $[\![x ]\!]$. For the limit of the greatest integer function from the left side, we consider the red circle in the diagram above.
$\lim_{x->0^{-}}[\![x ]\!]$ = -1
For the limit of the greatest integer function from the right side, we consider the blue circle in the diagram above.
$\lim_{x->0^{+}}[\![x ]\!]$ = 0
The greatest integer function is a discontinuous function, as its left-side and right-side limits give different values.
2) Find the limit of greatest integer function
f(x) =$\lim_{x->4^{-}}(5[\![x ]\!] -7)$ as x approaches to 4 from left side.
Solution : Since x approaches 4 from the left, it must first go through such values as 3.1, 3.2, 3.3, 3.5, 3.9, 3.9999, and so on. But all these values become 3 under the greatest integer function $[\![x ]\!]$. So inside the limit we may replace $[\![x ]\!]$ with 3.
$\lim_{x->4^{-}}(5[\![x ]\!] -7)$ = 5(3) - 7 = 15- 7 = 8
$\lim_{x->4^{-}}(5[\![x ]\!] -7)$ = 8
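We can confirm this numerically (a quick check in Python, not part of the original solution): just left of 4, the expression is stuck at 8, while just right of 4 it jumps to 13, so the one-sided limits disagree.

```python
import math

def f(x):
    return 5 * math.floor(x) - 7

# Approaching 4 from the left, floor(x) stays at 3, so f(x) stays at 8:
left_values = [f(x) for x in (3.9, 3.99, 3.999, 3.9999)]
assert left_values == [8, 8, 8, 8]

# From the right, floor(x) = 4 and f(x) = 13, so the two-sided limit fails:
assert f(4.0001) == 13
```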
| 2021-09-26T00:28:04 | {
"domain": "ask-math.com",
"url": "https://www.ask-math.com/greatest-integer-function.html",
"openwebmath_score": 0.46929165720939636,
"openwebmath_perplexity": 452.5496607792191,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.986777180580855,
"lm_q2_score": 0.66192288918838,
"lm_q1q2_score": 0.6531704023552434
} |
https://www.gradesaver.com/textbooks/math/other-math/thinking-mathematically-6th-edition/chapter-5-number-theory-and-the-real-number-system-chapter-summary-review-and-test-review-exercises-page-336/136 | ## Thinking Mathematically (6th Edition)
$\frac{1}{2},\,\,\frac{1}{4},\,\,\frac{1}{8},\,\,\frac{1}{16},\,\,\frac{1}{32}\ \text{and}\ \frac{1}{64}$.
For the second term put $n=2$ in the general formula stated above. \begin{align} & {{a}_{2}}={{a}_{1}}{{r}^{2-1}} \\ & =\frac{1}{2}\cdot {{\left( \frac{1}{2} \right)}^{1}} \\ & =\frac{1}{2}\cdot \frac{1}{2} \\ & =\frac{1}{4} \end{align} For the third term put $n=3$ in the general formula stated above. \begin{align} & {{a}_{3}}={{a}_{1}}{{r}^{3-1}} \\ & =\frac{1}{2}\cdot {{\left( \frac{1}{2} \right)}^{2}} \\ & =\frac{1}{2}\cdot \frac{1}{4} \\ & =\frac{1}{8} \end{align} For the fourth term put $n=4$ in the general formula stated above. \begin{align} & {{a}_{4}}={{a}_{1}}{{r}^{4-1}} \\ & =\frac{1}{2}\cdot {{\left( \frac{1}{2} \right)}^{3}} \\ & =\frac{1}{2}\cdot \frac{1}{8} \\ & =\frac{1}{16} \end{align} For the fifth term put $n=5$ in the general formula stated above. \begin{align} & {{a}_{5}}={{a}_{1}}{{r}^{5-1}} \\ & =\frac{1}{2}\cdot {{\left( \frac{1}{2} \right)}^{4}} \\ & =\frac{1}{2}\cdot \frac{1}{16} \\ & =\frac{1}{32} \end{align} For the sixth term put $n=6$ in the general formula stated above. \begin{align} & {{a}_{6}}={{a}_{1}}{{r}^{6-1}} \\ & =\frac{1}{2}\cdot {{\left( \frac{1}{2} \right)}^{5}} \\ & =\frac{1}{2}\cdot \frac{1}{32} \\ & =\frac{1}{64} \end{align} The first six terms of the geometric sequence are $\frac{1}{2},\,\,\frac{1}{4},\,\,\frac{1}{8},\,\,\frac{1}{16},\,\,\frac{1}{32}\ \text{and}\ \frac{1}{64}$. | 2019-12-05T22:22:44 | {
"domain": "gradesaver.com",
"url": "https://www.gradesaver.com/textbooks/math/other-math/thinking-mathematically-6th-edition/chapter-5-number-theory-and-the-real-number-system-chapter-summary-review-and-test-review-exercises-page-336/136",
"openwebmath_score": 1.0000100135803223,
"openwebmath_perplexity": 2374.792705814474,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9867771805808551,
"lm_q2_score": 0.66192288918838,
"lm_q1q2_score": 0.6531704023552434
} |
https://forum.azimuthproject.org/plugin/ViewComment/16682 | Boolean algebras are an important kind of poset. The power set functor defines a contravariant equivalence \$$P: \text{Set}^{\text{op}} \to \text{Bool}\$$: any function \$$f: X \to Y\$$ corresponds to the *preimage* map \$$f*: PY \to PX\$$, which is a monotone map and also a boolean algebra homomorphism, meaning it preserves meets and joins (the very important fact that preimage preserves set operations).
Even more interesting, this preimage has a left and right adjoint! **Puzzle CW**: What are they? | 2020-01-20T14:16:14 | {
"domain": "azimuthproject.org",
"url": "https://forum.azimuthproject.org/plugin/ViewComment/16682",
"openwebmath_score": 0.9633446335792542,
"openwebmath_perplexity": 1323.0197183285375,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9867771801919951,
"lm_q2_score": 0.66192288918838,
"lm_q1q2_score": 0.6531704020978482
} |
https://www.gradesaver.com/textbooks/math/trigonometry/CLONE-68cac39a-c5ec-4c26-8565-a44738e90952/chapter-5-trigonometric-identities-section-5-2-verifying-trigonometric-identities-5-2-exercises-page-209/67 | ## Trigonometry (11th Edition) Clone
$$\frac{\sec^4\theta-\tan^4\theta}{\sec^2\theta+\tan^2\theta}=\sec^2\theta-\tan^2\theta$$ We simplify the left side and find that the expression is an identity.
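A quick numerical spot check (my own addition, in Python) supports the identity before verifying it algebraically:

```python
import math

def sec(t):
    return 1 / math.cos(t)

# Spot-check the identity at a few angles where cos(theta) != 0:
for theta in (0.3, 1.0, 2.0, -0.7):
    lhs = (sec(theta)**4 - math.tan(theta)**4) / (sec(theta)**2 + math.tan(theta)**2)
    rhs = sec(theta)**2 - math.tan(theta)**2
    assert math.isclose(lhs, rhs)
    assert math.isclose(rhs, 1.0)  # Pythagorean identity: sec^2 - tan^2 = 1
```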
$$\frac{\sec^4\theta-\tan^4\theta}{\sec^2\theta+\tan^2\theta}=\sec^2\theta-\tan^2\theta$$ The left side is more complicated, so we simplify it. $$A=\frac{\sec^4\theta-\tan^4\theta}{\sec^2\theta+\tan^2\theta}$$ We have $a^4-b^4=(a^2-b^2)(a^2+b^2)$. So, $$A=\frac{(\sec^2\theta-\tan^2\theta)(\sec^2\theta+\tan^2\theta)}{\sec^2\theta+\tan^2\theta}$$ $$A=\sec^2\theta-\tan^2\theta$$ The two sides are thus equal, so the expression is an identity. | 2019-10-14T02:14:01 | {
"domain": "gradesaver.com",
"url": "https://www.gradesaver.com/textbooks/math/trigonometry/CLONE-68cac39a-c5ec-4c26-8565-a44738e90952/chapter-5-trigonometric-identities-section-5-2-verifying-trigonometric-identities-5-2-exercises-page-209/67",
"openwebmath_score": 0.9635627269744873,
"openwebmath_perplexity": 278.358080664885,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9867771801919951,
"lm_q2_score": 0.66192288918838,
"lm_q1q2_score": 0.6531704020978482
} |
https://drexel28.wordpress.com/2010/06/09/munkres-chapter-2-section-19-part-ii/ | # Abstract Nonsense
## Munkres Chapter 2 Section 19 (Part II)
9.
Problem: Show that the axiom of choice (AOC) is equivalent to the statement that for any indexed family $\left\{U_{\alpha}\right\}_{\alpha\in\mathcal{A}}$ of non-empty sets, with $\mathcal{A}\ne\varnothing$, the product $\displaystyle \prod_{\alpha\in\mathcal{A}}U_\alpha\ne\varnothing$
Proof: This is pretty immediate when one writes down the actual definition of the product, namely:
$\displaystyle \prod_{\alpha\in\mathcal{A}}U_\alpha=\left\{\bold{x}:\mathcal{A}\to\bigcup_{\alpha\in\mathcal{A}}U_{\alpha}:\bold{x}(\alpha)\in U_\alpha,\text{ for every }\alpha\in\mathcal{A}\right\}$
So, if one assumes the AOC then one must assume the existence of a choice function
$\displaystyle c:\left\{U_\alpha\right\}_{\alpha\in\mathcal{A}}\to\bigcup_{\alpha\in\mathcal{A}}U_\alpha,\text{ such that }c\left(U_\alpha\right)\in U_\alpha\text{ for all }\alpha\in\mathcal{A}$
So, then if we consider $\left\{U_\alpha\right\}_{\alpha\in\mathcal{A}}\overset{\text{def.}}{=}\Omega$ as just a class of sets, the fact that we have indexed them implies there exists a surjective “indexing function”
$i:\mathcal{A}\to\Omega$
where clearly since we have already indexed our set we have that $i:\alpha\mapsto U_\alpha$. So, consider
$c\circ i:\mathcal{A}\to\bigcup_{\alpha\in\mathcal{A}}U_\alpha$
This is clearly a well-defined mapping and $\left(c\circ i\right)(\alpha)=c\left(U_\alpha\right)\in U_\alpha$ and thus
$\displaystyle c\circ i\in\prod_{\alpha\in\mathcal{A}}U_\alpha$
from where it follows that $\displaystyle \prod_{\alpha\in\mathcal{A}}U_\alpha\ne\varnothing$
Conversely, let $\Omega$ be a class of sets and let $i:\mathcal{A}\to\Omega$ be an indexing function. We may then index $\Omega$ by $\Omega=\left\{U_{\alpha}\right\}_{\alpha\in\mathcal{A}}$. Then, by assumption
$\displaystyle \prod_{\alpha\in\mathcal{A}}U_\alpha\ne\varnothing$
Thus there exists some
$\displaystyle \bold{x}\in\prod_{\alpha\in\mathcal{A}}U_\alpha$
Such that
$\displaystyle \bold{x}:\mathcal{A}\to\bigcup_{\alpha\in\mathcal{A}}U_\alpha,\text{ such that }\bold{x}(\alpha)\in U_\alpha$
Thus, we have that
$\displaystyle \bold{x}\circ i^{-1}:\left\{U_\alpha\right\}_{\alpha\in\mathcal{A}}\to\bigcup_{\alpha\in\mathcal{A}}U_\alpha$
is a well-defined mapping with
$\displaystyle \left(\bold{x}\circ i^{-1}\right)\left(U_\alpha\right)\in U_\alpha$
For each $\alpha\in\mathcal{A}$. It follows that we have produced a choice function for $\Omega$ and the conclusion follows. $\blacksquare$
Remark: We have assumed the existence of a bijective indexing function $i:\mathcal{A}\to\Omega$, but this is either A) a matter for descriptive set theory or B) obvious since $\text{id}:\Omega\to\Omega$ satisfies the conditions. This depends on your level of rigor.
10.
Problem: Let $A$ be a set; let $\left\{X_\alpha\right\}_{\alpha\in\mathcal{A}}$ be an indexed family of spaces; and let $\left\{f_\alpha\right\}_{\alpha\in\mathcal{A}}$ be an indexed family of functions $f_\alpha:A\to X_\alpha$
a) Prove there is a unique coarsest topology $\mathfrak{J}$ on $A$ relative to whish each of the functions $f_\alpha$ is continuous.
b) Let
$\mathcal{S}_\beta=\left\{f_\beta^{-1}\left(U_\beta\right):U_\beta\text{ is open in }X_\beta\right\}$
and let $\displaystyle \mathcal{S}=\bigcup_{\alpha\in\mathcal{A}}\mathcal{S}_\alpha$. Prove that $\mathcal{S}$ is a subbasis for $\mathfrak{J}$.
c) Show that the map $g:Y\to A$ is continuous relative to $\mathfrak{J}$ if and only if each map $f_\alpha\circ g:Y\to X_\alpha$ is continuous.
d) Let $\displaystyle f:A\to\prod_{\alpha\in\mathcal{A}}X_\alpha$ be defined by the equation
$f(x)=\left(f_\alpha(x)\right)_{\alpha\in\mathcal{A}}$
Let $Z$ denote the subspace $f\left(A\right)$ of the product space $\displaystyle \prod_{\alpha\in\mathcal{A}}X_\alpha$. Prove that the image under $f$ of each element of $\mathfrak{J}$ is an open set in $Z$.
Proof:
a) We first prove a lemma
Lemma: Let $\mathfrak{J}$ be a topology on $A$, then all the mappings $f_\alpha:A\to X_\alpha$ are continuous if and only if $\mathcal{S}\subseteq\mathfrak{J}$ where $\mathcal{S}$ is defined in part b).
Proof:Suppose that all the mappings $f_\alpha:A\to X_\alpha$ are continuous. Then, given any open set $U_\alpha\in X_\alpha$ we have that $f_\alpha$ is continuous and so $f_\alpha^{-1}\left(U_\alpha\right)$ is open and thus $f_{\alpha}^{-1}\left(U_\alpha\right)\in\mathfrak{J}$ from where it follows that $\mathcal{S}\subseteq\mathfrak{J}$.
Conversely, suppose that $\mathcal{S}\subseteq\mathfrak{J}$. It suffices to prove that $f_\alpha:A\to X_\alpha$ is continuous for a fixed but arbitrary $\alpha\in\mathcal{A}$. To do this, let $U$ be open in $X_\alpha$; then $f_{\alpha}^{-1}\left(U\right)\in\mathcal{S}$ and thus by assumption $f_\alpha^{-1}\left(U\right)\in\mathfrak{J}$; but this precisely says that $f_\alpha^{-1}\left(U\right)$ is open in $A$. By prior comment the conclusion follows. $\blacksquare$
So, let
$\mathcal{C}=\left\{\mathfrak{T}:\mathfrak{T}\text{ is a topology on }A\text{ and }\mathcal{S}\subseteq\mathfrak{T}\right\}$
and let
$\displaystyle \mathfrak{J}=\bigcap_{\mathfrak{T}\in\mathcal{C}}\mathfrak{T}$
By previous problem $\mathfrak{J}$ is in fact a topology on $A$, and by our lemma we also know that all the mappings $f_\alpha:A\to X_\alpha$ are continuous since $\mathcal{S}\subseteq\mathfrak{J}$. To see that it’s the coarsest such topology let $\mathfrak{U}$ be a topology for which all of the $f_\alpha:A\to X_\alpha$ are continuous. Then, by the other part of our lemma we know that $\mathcal{S}\subseteq\mathfrak{U}$ and thus $\mathfrak{U}\in\mathcal{C}$. So,
$\displaystyle \mathfrak{J}=\bigcap_{\mathfrak{T}\in\mathcal{C}}\mathfrak{T}\subseteq\mathfrak{U}$
And thus $\mathfrak{J}$ is coarser than $\mathfrak{U}$.
The uniqueness is immediate.
b) It follows from the previous problem that we must merely show that $\mathcal{S}$ is a subbasis for the topology $\mathfrak{J}$. The conclusion will follow from the following lemma (which was actually an earlier problem, but we reprove here for referential reasons):
Lemma: Let $X$ be a set and $\Omega$ be a subbasis for a topology on $X$. Then, the topology generated by $\Omega$ equals the intersection of all topologies which contain $\Omega$.
Proof: Let
$\mathcal{C}=\left\{\mathfrak{T}:\mathfrak{T}\text{ is a topology on }X\text{ and }\Omega\subseteq\mathfrak{T}\right\}$
and
$\displaystyle \mathfrak{J}=\bigcap_{\mathfrak{T}\in\mathcal{C}}\mathfrak{T}$
Also, let $\mathfrak{G}$ be the topology generated by the subbasis $\Omega$.
Clearly since $\Omega\subseteq\mathfrak{G}$ we have that $\mathfrak{J}\subseteq\mathfrak{G}$.
Conversely, let $U\in\mathfrak{G}$. Then, by definition to show that $U\in\mathfrak{J}$ it suffices to show that $U\in\mathfrak{T}$ for a fixed but arbitrary $\mathfrak{T}\in\mathcal{C}$. To do this we first note that by definition that
$\displaystyle U=\bigcup_{\alpha\in\mathcal{A}}U_\alpha$
where each
$U_\alpha=O_1\cap\cdots\cap O_{m_\alpha}$
for some $O_1,\cdots,O_{m_\alpha}\in\Omega$. Now, if $\mathfrak{T}\in\mathcal{C}$ we know (since $\Omega\subseteq\mathfrak{T}$) that $O_1,\cdots,O_{m_\alpha}\in\mathfrak{T}$ and thus
$O_1\cap\cdots\cap O_{m_\alpha}=U_\alpha\in\mathfrak{T}$
for each $\alpha\in\mathcal{A}$. It follows that $U$ is the union of sets in $\mathfrak{T}$ and thus $U\in\mathfrak{T}$. It follows from previous comment that $\mathfrak{G}\subseteq\mathfrak{J}$.
The conclusion follows. $\blacksquare$
The actual problem follows immediately from this.
c) So, let $g:Y\to A$ be some mapping and suppose that $f_\alpha\circ g:Y\to X_\alpha$ is continuous for each $\alpha\in\mathcal{A}$. Then, given a basic open set $U$ in $A$ we have that
$U=f_{\alpha_1}^{-1}\left(U_{\alpha_1}\right)\cap\cdots\cap f_{\alpha_n}^{-1}\left(U_{\alpha_n}\right)$
for some $\alpha_1,\cdots,\alpha_n$ and for some open sets $U_{\alpha_1},\cdots,U_{\alpha_n}$ in $X_{\alpha_1},\cdots,X_{\alpha_n}$ respectively. Thus $g^{-1}(U)$ may be written as
$\displaystyle g^{-1}\left(\bigcap_{j=1}^{n}f_{\alpha_j}^{-1}\left(U_{\alpha_j}\right)\right)=\bigcap_{j=1}^{n}g^{-1}\left(f_{\alpha_j}^{-1}\left(U_{\alpha_j}\right)\right)=\bigcap_{j=1}^{n}\left(f_{\alpha_j}\circ g\right)^{-1}\left(U_{\alpha_j}\right)$
but since each $f_{\alpha_j}\circ g:Y\to X_{\alpha_j}$ is continuous we see that $g^{-1}\left(U\right)$ is the finite intersection of open sets in $Y$ and thus open in $Y$. It follows that $g$ is continuous.
Conversely, suppose that $g$ is continuous then $f_\alpha\circ g:Y\to X_{\alpha}$ is continuous since it’s the composition of continuous maps.
d) First note that
$\displaystyle f^{-1}\left(\prod_{\alpha\in\mathcal{A}}U_\alpha\right)=\bigcap_{\alpha\in\mathcal{A}}f_\alpha^{-1}\left(U_\alpha\right)$
from where it follows that the initial topology under the class of maps $\{f_\alpha\}$ on $A$ is the same as the initial topology given by the single map $f$. So, in general we note that if $X$ is given the initial topology determined by $f:X\to Y$ then given an open set $f^{-1}(U)$ in $X$ we have that $f\left(f^{-1}(U)\right)=U\cap f(X)$ which is open in the subspace $f(X)$.
June 9, 2010 -
| 2017-05-24T15:42:16 | {
"domain": "wordpress.com",
"url": "https://drexel28.wordpress.com/2010/06/09/munkres-chapter-2-section-19-part-ii/",
"openwebmath_score": 0.9891128540039062,
"openwebmath_perplexity": 79.09775362080121,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.986777180191995,
"lm_q2_score": 0.6619228891883799,
"lm_q1q2_score": 0.6531704020978479
} |
http://jde27.uk/lgla/30_adjoint_accessible.html | Before we classify the representations of SU(3), I want to introduce a representation which makes sense for any Lie group, the adjoint representation, and study it for SU(3).
Definition:
Given a matrix group G with Lie algebra little g, the adjoint representation is the homomorphism capital Ad from big G to big G L of little g defined by capital Ad of g applied to X equals g X g inverse. In other words:
• the vector space on which we're acting is little g,
• capital Ad of g is a linear map from little g to little g.
It is an exercise to check that this is a representation, but I will explain why it is well-defined, in other words why g X g inverse is in little g when X is in little g and g is in big G.
Lemma:
If X is in little g and g is in big G then g X g inverse is in little g.
Proof:
We need to show that for all t in R, exp of t g X g inverse is in G. As a power series, this is: exp of (t g X g inverse) equals the identity, plus t g X g inverse, plus a half t squared g X g inverse g X g inverse, plus dot dot dot.
All of the g inverse gs sandwiched between the Xs cancel and we get identity, plus t g X g inverse, plus a half t squared g X squared g inverse, plus dot dot dot.
Since g is in G, g inverse is in G and exp of (t X) is in G for all t, we see that this product is in G for all t, which proves the lemma.
Definition:
The induced map on Lie algebras is little ad equals capital Ad star from little g to little g l of little g. (I apologise for the profusion of g's playing different notational roles).
Let's calculate little ad of X for some X in little g. This is a linear map little g to little g, so let's apply it to some Y in little g: little ad of X applied to Y is the derivative with respect to t at t = 0 of capital Ad of exp (tX) applied to Y (This is how we calculate R star for any representation R: it follows by differentiating R of exp (t X) equals exp of t R star X with respect to t). This gives: little ad of X applied to Y equals the derivative with respect to t at t = 0 of exp (t X) times Y times exp of (minus t X), which equals X exp (t X) Y exp (minus t X) minus exp (t X) times Y times X times exp of (minus t X), all evaluated at t = 0, which gives X Y minus Y X.
In other words, little ad of X applied to Y equals X bracket Y Note that this makes sense even without reference to G.
Exercise:
Since little ad equals capital Ad star we know already that it's a representation of Lie algebras, but it's possible to prove it directly from the axioms of a Lie algebra without reference to the group G i.e. that little ad of X bracket Y applied to Z equals little ad of X times little ad of Y applied to Z minus little ad of Y times little ad of X applied to Z for all X, Y, and Z in little g. Do this!
## Example: sl(2,C)
Recall that we have a basis H, X, Y for little s l 2 C. Let's compute little ad H with respect to this basis.
• H to H bracket H, which equals 0,
• X to H bracket X, which equals 2 X,
• Y to H bracket Y, which equals minus 2 Y,
so little ad H is the diagonal matrix with diagonal entries 0, 2, minus 2 with respect to this basis.
In fact, the action of H on a representation tells us the weights, so we see that the weights of the adjoint representation are minus 2, 0, and 2. In particular, the adjoint representation is isomorphic to Sym 2 C 2.
It's an exercise to compute little ad X and little ad Y.
## Example: sl(3,C)
Let's find a basis of little s l 3 C. Define E_{i j} to be the matrix with zeros everywhere except in position i, j where there is a 1, e.g. E_(1 2) is the 3-by-3 matrix 0, 1, 0; 0, 0, 0; 0, 0, 0 There are 6 such matrices with i not equal to j. Together with H_(1 3) equals 1, 0 ,0; 0, 0, 0; 0, 0, -1 and H_(2 3) equals 0, 0, 0; 0, 1, 0; 0, 0, minus 1, this gives us a basis of little s l 3 C; in other words, any tracefree complex matrix can be written as a complex linear combination of these 8 (it's an 8-dimensional Lie algebra).
More generally, we will write H theta for the diagonal matrix in little s l 3 C with diagonal entries theta_1, theta_2, theta_3 where theta equals (theta_1, theta_2, theta_3) is a vector satisfying theta_1 plus theta_2 plus theta_3 equals 0 I want to compute little ad of H_theta.
We have little ad H_(theta) H_(i j) equals 0 because the H-matrices are all diagonal (and hence all commute with one another). This means that H_{1 3} and H_{2 3} are contained in the zero-weight space of the adjoint representation. This is because exp of i H_(theta) equals the diagonal matrix with diagonal entries e to the i theta_1, e to the i theta_2, e to the minus i (theta_1 plus theta_2), so the eigenvalues of H_(theta) tell us the weights of the representation.
It turns out that little ad of H_(theta) applied to E_(i j) equals (theta_i minus theta_j) times E_(i j). For example: H_(theta) bracket E_(i j) equals theta_1, 0, 0; 0, theta_2, 0; 0, 0, minus theta_1 minus theta_2 bracket 0, 1, 0; 0, 0, 0; 0, 0, 0; which equals 0, theta_1, 0; 0, 0, 0; 0, 0, 0; minus 0, theta_2, 0; 0, 0, 0; 0, 0, 0; which equals (theta_1 minus theta_2) times E_(1 2)
Let's figure out the weights of the adjoint representation. If v is in W_(k l) then we have exp of i little ad H_(theta) applied to v equals e to the i (k theta_1 plus l theta_2) times v, so little ad of H_(theta) applied to v equals (k theta_1 plus l theta_2) times v. For example, E_(1 2) satisfies little ad H_(theta) applied to E_(1 2) equals (theta_1 minus theta_2) times E_(1 2), so E_(1 2) is in W_(1, minus 1).
Similarly, we get little ad H_(theta) E_(1 3) equals (theta_1 minus theta_3) E_(1 3), which equals ((2 theta_1) plus theta_2) times E_(1 3), so E_(1 3) is in W_(2, 1).
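These bracket computations are easy to check numerically. Below is a small plain-Python sketch (my own illustration; the values of theta 1 and theta 2 are arbitrary, chosen to be exactly representable in floating point) verifying that E 1 2 has weight (1, minus 1) and E 1 3 has weight (2, 1).

```python
def E(i, j):
    # the 3-by-3 matrix with a 1 in position (i, j) and zeros elsewhere
    M = [[0.0] * 3 for _ in range(3)]
    M[i - 1][j - 1] = 1.0
    return M

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def bracket(A, B):
    # the commutator [A, B] = AB - BA
    AB, BA = matmul(A, B), matmul(B, A)
    return [[AB[i][j] - BA[i][j] for j in range(3)] for i in range(3)]

def scale(c, M):
    return [[c * x for x in row] for row in M]

t1, t2 = 0.75, -0.25                            # arbitrary theta_1, theta_2
H = [[t1, 0, 0], [0, t2, 0], [0, 0, -t1 - t2]]  # H_theta, tracefree

assert bracket(H, E(1, 2)) == scale(t1 - t2, E(1, 2))      # weight (1, -1)
assert bracket(H, E(1, 3)) == scale(2 * t1 + t2, E(1, 3))  # weight (2, 1)
```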
Exercise:
The weight spaces that occur are: E_(1 2) in W_(1, minus 1); E_(2 1) in W_(minus 1, 1); E_(1 3) in W_(2 1); E_(3 1) in W_(minus 2, minus 1); E_(2 3) in W_(1 2); and E_(3 2) in W_(minus 1, minus 2), and the weight diagram is the hexagon shown in the figure below.
Note that the zero-weight space is spanned by H_{1 3} and H_{2 3}, which means it's 2-dimensional. We've denoted this by putting a circle around the dot at the origin in the weight diagram.
Remark:
The weight space decomposition of the adjoint representation is sufficiently important to warrant its own name: it's called the root space decomposition. The weights that occur are called roots and the weight diagram is called a root diagram.
## Pre-class exercise
Exercise:
The matrices E_{i j} inhabit the following weight spaces: E_(1 2) in W_(1, minus 1); E_(2 1) in W_(minus 1, 1); E_(1 3) in W_(2 1); E_(3 1) in W_(minus 2, minus 1); E_(2 3) in W_(1 2); and E_(3 2) in W_(minus 1, minus 2) and the weight diagram is the hexagon shown above. | 2022-10-06T08:06:07 | {
"domain": "jde27.uk",
"url": "http://jde27.uk/lgla/30_adjoint_accessible.html",
"openwebmath_score": 0.8816365003585815,
"openwebmath_perplexity": 1814.6742949855497,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.986777179414275,
"lm_q2_score": 0.66192288918838,
"lm_q1q2_score": 0.6531704015830574
} |
http://mathhelpforum.com/calculus/47975-approximation-partial-derivative.html | # Math Help - An Approximation to a partial Derivative
1. ## An Approximation to a partial Derivative
An Approximation to a partial Derivative
If a function is known to have
fx(30,24) = -.4
fy(30,24) = 1.2
f(30,24) = 50
Estimate this value of F.
f(30.2,23.9)
Thank you for helping me. I am stuck on the last part
2. Originally Posted by Applestar13
An Approximation to a partial Derivative
If a function is known to have
fx(30,24) = -.4
fy(30,24) = 1.2
f(30,24) = 50
Estimate this value of F.
f(30.2,23.9)
Thank you in advance! I am stuck on this part. Both x and y changed and I have no idea what to do.
the idea of approximating a two variable function around a given point is to use the tangent plane (at the given point) to the surface that the function represents as
the approximation, i.e. $f(x,y) \approx f(x_0,y_0)+(x-x_0)f_x(x_0,y_0)+(y-y_0)f_y(x_0,y_0).$ so if we put $x=30.2, \ y=23.9, \ x_0=30, \ y_0=24,$ and use the given info, we'll
get: $f(30.2,23.9) \approx 50 + (0.2) \times (-0.4) + (-0.1) \times (1.2) = 49.8.$
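In code, the tangent-plane estimate is a one-liner; this quick sketch (the function name is mine) just re-checks the arithmetic above:

```python
def linear_estimate(f0, fx, fy, dx, dy):
    # f(x0 + dx, y0 + dy) ≈ f(x0, y0) + dx * f_x(x0, y0) + dy * f_y(x0, y0)
    return f0 + dx * fx + dy * fy

estimate = linear_estimate(f0=50, fx=-0.4, fy=1.2, dx=0.2, dy=-0.1)
assert abs(estimate - 49.8) < 1e-9
```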
3. Originally Posted by Applestar13
An Approximation to a partial Derivative
If a function is known to have
fx(30,24) = -.4
fy(30,24) = 1.2
f(30,24) = 50
Estimate this value of F.
f(30.2,23.9)
Thank you for helping me. I am stuck on the last part
$\nabla f(30,24) = \left[ -0.4, 1.2 \right]$
$f(30.2,23.9) \approx f(30,24) + [0.2, -0.1]\cdot\nabla f(30,24) = 50 + (0.2 \times (-0.4) + (-0.1) \times 1.2) = 49.8$
RonL
4. ## Thank you
Thank you, both of you. That made perfect sense. I didn't know what to do with the difference in the initial values of x and y. | 2015-07-06T15:42:09 | {
"domain": "mathhelpforum.com",
"url": "http://mathhelpforum.com/calculus/47975-approximation-partial-derivative.html",
"openwebmath_score": 0.7871342897415161,
"openwebmath_perplexity": 905.8364658594248,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES\n\n",
"lm_q1_score": 0.9867771794142751,
"lm_q2_score": 0.66192288918838,
"lm_q1q2_score": 0.6531704015830574
} |
https://jaketae.github.io/study/stieltjes/ | # On Expectations and Integrals
Expectation is a core concept in statistics, and it is no surprise that any student interested in probability and statistics may have seen some expression like this:
$\mathbb{E}[X] = \sum_{x \in X} x f(x) \tag{1}$
In the continuous case, the expression is most commonly presented in textbooks as follows:
$\mathbb{E}[X] = \int_{x \in X} x f(x) \, dx \tag{2}$
However, this variant might throw you off, which happened to me when I first came across it a few weeks ago:
$\mathbb{E}[X] = \int_{x \in X} x \,dF(x) \tag{3}$
I mean, my calculus is rusty, but it kind of makes sense: the probability density function is, after all, the derivative of the cumulative density function, and so notationally there is some degree of coherency here.
$f(x) = \frac{d}{dx}F(x) \implies f(x) \, dx = dF(x) \tag{4}$
But still, this definition of the expected value threw me off quite a bit. What does it mean to integrate over a distribution function instead of a variable? After some research, however, the math gurus at Stack Exchange provided me with an answer. So here is a brief summary of my findings.
The integral that we all know of is called the Riemann integral. The confusing integral is in fact a generalization of the Riemann integral, known as the Riemann-Stieltjes integral (don’t ask me how to pronounce the name of the Dutch mathematician). There is an even more general interpretation of integrals called the Lebesgue integral, but we won’t get into that here.
First, let’s take a look at the definition. The definition of the integral is actually a lot simpler than what one might imagine. Here, $c_i$ is a value that falls within the interval $[x_i, x_{i+1}]$.
$\int_a^b f(x) \, dg(x) = \lim_{n \to \infty}\sum_{i=1}^n f(c_i)[g(x_{i+1}) - g(x_i)] \tag{5}$
In short, we divide the interval of integration $[a, b]$ into $n$ infinitesimal pieces. Imagine this process as being similar to what we learn in Calculus 101, where integrals are visualized as an infinite sum of skinny rectangles as the limit approaches zero. Essentially, we are doing the same thing, except that now, the base of each rectangle is defined as the difference between $g(x_{i+1})$ and $g(x_i)$ instead of $x_{i+1}$ and $x_i$ as is the case with the Riemann integral. Another way to look at this is to consider the integral as calculating the area beneath the curve represented by the parameterization $(x, y) = (g(x), f(x))$. This connection becomes a bit more apparent if we consider the fact that the Riemann integral is calculating the area beneath the curve represented by $(x, y) = (x, f(x))$. In other words, the Riemann-Stieltjes integral can be seen as dealing with a change of variables.
You might be wondering why the Riemann-Stieltjes integral is necessary in the first place. After all, the definition of expectation we already know by heart should be enough, shouldn’t it? To answer this question, consider the following function:
$F(x) = \begin{cases} 0 & x < 0 \\ \frac12 & 0 \leq x < 1 \\ 1 & x \geq 1 \end{cases} \tag{6}$
This cumulative distribution function is obviously discontinuous since it is a step function. This also means that it is not differentiable; hence, we cannot use the definition of expectation that we already know. However, this does not mean that the random variable $X$ does not have an expected value. In fact, it is possible to calculate the expectation using the Riemann-Stieltjes integral quite easily, despite the discontinuity!
The integral we wish to calculate is the following:
$\int_{-\infty}^{\infty} x \, dF(x) \tag{7}$
Therefore, we should immediately start visualizing splitting up the domain of integration, the real number line, into infinitesimal pieces. Each box will be of height $x$ and width $F(x_{i+1}) - F(x_i)$. In the context of the contrived example, this definition makes the calculation extremely easy, since $F(x_{i+1}) - F(x_i)$ equals zero in all locations but the jumps where the discontinuities occur. In other words,
$\int_{-\infty}^{\infty} x \, dF(x) = 0 \cdot \frac12 + 1 \cdot \frac12 = \frac12 \tag{8}$
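The same computation can be phrased as a tiny sketch (mine, not from the post): sum $x$ times the size of the jump of $F$ at each discontinuity of the CDF in (6).

```python
# E[X] = sum over jump points x of x * (size of the jump of F at x), for the CDF in (6).
jumps = {0.0: 0.5 - 0.0,   # F jumps from 0 to 1/2 at x = 0
         1.0: 1.0 - 0.5}   # F jumps from 1/2 to 1 at x = 1
expectation = sum(x * dF for x, dF in jumps.items())
print(expectation)  # 0.5
```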
We can easily extend this idea to calculating things like variance or other higher moments.
A more realistic example might be the Dirac delta function. Consider a constant random variable (I know it sounds oxymoronic, but the idea is that the random variable takes a single value $c$ with probability one). In this case, we can imagine the probability density function as a literal spike, in the sense that the PDF peaks at $x=c$ and is zero otherwise. The cumulative distribution function will thus exhibit a discontinuous jump from zero to 1 at $x=c$. And by the same line of logic, it is easy to see that the expected value of this random variable is $c$, as expected. Although this is a rather boring example, in that the expectation of a constant is of course the constant itself, it nonetheless demonstrates the potential applications of the Riemann-Stieltjes integral.
I hope you enjoyed reading this post. Lately, I have been busy working on some interesting projects. There is a lot of blogging and catching up to do, so stay posted for exciting updates to come!
https://www.gmatmath.online/fractions-percentages/ | # Fractions / Percentages
In doing problems that involve percentages, whether or not they also involve fractions, it’s important to remember that percentages are just fractions. For example, 87% is the same as 87/100, or, written as a decimal, it’s .87. When we do a problem that includes both fractions and percentages, it’s often easiest to convert the percentages to fractions or the fractions to percentages.
On this page we provide 3 examples of Fraction problems and 3 examples of Percentage problems.
Fraction Problems Example 1)
Harriet bought 120 fifty dollar government savings bonds at .8 of their face value, and gave 3/4 of them to grandchildren, nephews, and nieces for Christmas presents. If she sold the remainder later at 9/10 of their face value, what was the net amount she spent on the bonds?
A. $3350
B. $3000
C. $3450
D. $3500
E. $3250

Explanation: The net amount spent is equal to the original amount spent minus the proceeds from the sale of the remaining bonds:

Original Amount = .8 × 120 × 50 = 4800

# Bonds Left after Gifts = 120 – (3/4 × 120) = 120 – 90 = 30

Sale Revenue from Leftover Bonds = 9/10 × 30 × 50 = 1350

Original Amount – Sale Revenue = 4800 – 1350 = 3450

So C is the correct answer.

Fraction Problems Example 2)

Mary spends her monthly take-home pay as follows: 1/3 goes to rent; 1/6 to food and clothing; 1/8 to savings; 1/4 to miscellaneous expenses. The remaining $300 she uses for entertainment. What is her monthly take-home pay?
A. $2000
B. $2500
C. $1800
D. $2100
E. $2400

Explanation: Let x be Mary’s take-home pay. Then if we take these fractions of her pay and add 300 we should get her total take-home pay:

x/3 + x/6 + x/8 + x/4 + 300 = x

Now we solve this equation for x. First, we note that the least common denominator for 3, 6, 8, and 4 is 24:

8x/24 + 4x/24 + 3x/24 + 6x/24 + 300 = x

21x/24 + 300 = x

21x + 7200 = 24x

7200 = 24x – 21x

7200 = 3x

2400 = x

So E is the correct answer.

Fraction Problems Example 3)

Which of these is not equal to 0?

A. 1/3 + 1/5 – 8/15
B. 2/3 + 1/15 – 11/15
C. 1/5 + 3/10 – 1/2
D. 1/2 + 1/4 – 7/8
E. 1/2 + 1/5 – 7/10

Explanation:

A. 1/3 + 1/5 – 8/15 = 5/15 + 3/15 – 8/15 = 8/15 – 8/15 = 0

B. 2/3 + 1/15 – 11/15 = 10/15 + 1/15 – 11/15 = 11/15 – 11/15 = 0

C. 1/5 + 3/10 – 1/2 = 2/10 + 3/10 – 5/10 = 5/10 – 5/10 = 0

D. 1/2 + 1/4 – 7/8 = 4/8 + 2/8 – 7/8 = 6/8 – 7/8 = -1/8

E. 1/2 + 1/5 – 7/10 = 5/10 + 2/10 – 7/10 = 7/10 – 7/10 = 0

So D is the correct answer.

To practice more of these types of problems, click here.

Percentage Problems Example 1)

Arlen gets 10% commission on his total sales above $2000. He gets no commission on sales up to $2000. If he was paid $900, what were his total sales?
A. $11,000
B. $9,000
C. $10,000
D. $12,000
E. $11,100
Explanation: Let x be his total sales. Then
10% of (x – 2000) = 900
1/10 ∙ (x – 2000) = 900
x – 2000 = 900 ∙ 10
x = 9000 + 2000 = 11,000
So A is the correct answer.
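For readers who like to double-check the algebra programmatically, here is a small sketch (not part of the original page) that solves the commission equation:

```python
# Solve 10% of (x - 2000) = 900 for Arlen's total sales x.
paid, threshold, rate = 900, 2000, 0.10
x = threshold + paid / rate
print(x)  # 11000.0
```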
Percentage Problems Example 2)
In a certain state between .4 percent and .8 percent of new companies will fail in their first month of business. During the current year, 8000 new companies started operations. Which of these numbers is in the range of the number of companies that failed during the first month:
A. 20
B. 45
C. 90
D. 320
E. 6400
Explanation: This problem can be a little deceptive, because it says .4 percent and .8 percent, and our natural inclination is to interpret that as 4 percent and 8 percent or as 4 tenths or 8 tenths (which would be 40% or 80%). But the actual numbers are less than 1 percent, which is less than 1/100, so we have to treat them in that way. We apply .4 percent and .8 percent therefore to the number of 8000 new companies to find out how many failed. Let x = the number of new companies that failed in their first month:
.4% of 8000 ≤ x ≤ .8% of 8000
(.4/100) ∙ 8000 ≤ x ≤ (.8/100) ∙ 8000
(4/1000) ∙ 8000 ≤ x ≤ (8/1000) ∙ 8000
4 ∙ 8 ≤ x ≤ 8 ∙ 8
32 ≤ x ≤ 64
The only number in the list of possible answers which is in this range is 45, so B is the correct answer.
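The same range check can be done in code (a sketch of mine, not from the page):

```python
# 0.4% and 0.8% of 8000 bound the number of failures; see which answer choice fits.
low, high = 0.004 * 8000, 0.008 * 8000  # 32.0 and 64.0
options = {"A": 20, "B": 45, "C": 90, "D": 320, "E": 6400}
in_range = [name for name, count in options.items() if low <= count <= high]
print(in_range)  # ['B']
```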
Percentage Problems Example 3)
Lemon chicken is the most popular entrée at a certain Chinese restaurant. During a recent month 30% of their customers ordered take-out. If they had T customers during the month, and if 1/2 of their table service customers and 85 of their take-out customers ordered lemon chicken, how many ordered lemon chicken?
A. .50T + 85
B. .60T + 42
C. .70T + 100
D. .35T + 85
E. .40T + 30
Explanation: This problem requires us to be careful in keeping concepts straight. It also presents some difficulties by using a variable, T, to represent the number of customers. It says that 30% (.3T) of their customers ordered take-out. So that means that 70% (.7T) of their customers were table service customers. If 1/2 of these ordered lemon chicken, then
1/2 of .7T = .35T table service customers ordered lemon chicken. Adding the 85 take-out customers who ordered lemon chicken gives a total of .35T + 85. So D is the correct answer.
https://gmachine1729.livejournal.com/2021/06/06/ | ## June 6th, 2021
### On the chain rule and change of variables of integrals
Originally published at 狗和留美者不得入内. You can comment here or there.
Theorem 1 (Chain rule) Let $g: U \to V$, $f: V \to \mathbb{R}$, where $U$ and $V$ are open in $\mathbb{R}$, such that $f$ and $g$ are differentiable on their respective domains. Then $f \circ g$ is also differentiable on $U$, with $(f \circ g)'(x) = f'(g(x))\,g'(x)$ for all $x \in U$.
Proof: We first assume that there exists a neighborhood of for which . This happens in the case of by inverse function theorem. In that case, by the definition of derivative and its properties, we have
In the case of , we have that for all ,
From this, we easily verify that , which means that is differentiable at and in the case of , must hold as well.
Lemma 1 Let , be differentiable with and . Then,
Instead of , one can also use any closed interval of .
Proof: Follows directly from Fundamental Theorem of Calculus. See Theorem 2 (Newton-Leibniz axiom) of [1].
Lemma 1 is a statement of invariance of integral along parameterized smooth paths with the same endpoints.
Theorem 2 (Change of variables or u-substitution in integration) Let $g$ be a differentiable function on $[a,b]$ whose derivative $g'$ is continuous on $[a,b]$, and let $f$ be Riemann integrable on intervals in its domain. Then, $\int_a^b f(g(t))\,g'(t)\,\mathrm{d}t = \int_{g(a)}^{g(b)} f(u)\,\mathrm{d}u.$
Proof: Let $F$ be an antiderivative of $f$. By the Fundamental Theorem of Calculus, it suffices to show that the left hand side equals $F(g(b)) - F(g(a))$, which can be done by applying Lemma 1 accordingly.
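As a numerical sanity check (my addition, not in the original post), the substitution rule $\int_a^b f(g(t))\,g'(t)\,\mathrm{d}t = \int_{g(a)}^{g(b)} f(u)\,\mathrm{d}u$ can be verified for, say, $f = \cos$ and $g(t) = t^2$ on $[0, 1]$, where both sides should equal $\sin 1$:

```python
import math

def riemann(h, a, b, n=100_000):
    """Midpoint-rule approximation of the Riemann integral of h over [a, b]."""
    w = (b - a) / n
    return sum(h(a + (i + 0.5) * w) for i in range(n)) * w

f, g, dg = math.cos, (lambda t: t * t), (lambda t: 2 * t)

lhs = riemann(lambda t: f(g(t)) * dg(t), 0.0, 1.0)  # integral of f(g(t)) g'(t) over [0, 1]
rhs = riemann(f, g(0.0), g(1.0))                    # integral of f(u) over [g(0), g(1)]
print(lhs, rhs, math.sin(1.0))  # all three ≈ 0.841470...
```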
Theorem 3 (Integration by parts) Let $u, v$ be differentiable functions on $(a,b)$ and continuous on $[a,b]$. Then, $\int_a^b u(x)\,v'(x)\,\mathrm{d}x = u(b)v(b) - u(a)v(a) - \int_a^b u'(x)\,v(x)\,\mathrm{d}x.$
Proof: We have
Rearranging the above completes the proof.
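Again as a sketch of my own (not in the post), integration by parts can be checked numerically with $u(x) = x$ and $v(x) = \mathrm{e}^x$ on $[0,1]$, where $\int_0^1 x\,\mathrm{e}^x\,\mathrm{d}x = 1$:

```python
import math

def riemann(h, a, b, n=100_000):
    """Midpoint-rule approximation of the Riemann integral of h over [a, b]."""
    w = (b - a) / n
    return sum(h(a + (i + 0.5) * w) for i in range(n)) * w

u, du = (lambda x: x), (lambda x: 1.0)  # u = x,  u' = 1
v, dv = math.exp, math.exp              # v = e^x, v' = e^x

lhs = riemann(lambda x: u(x) * dv(x), 0.0, 1.0)             # integral of u v'
boundary = u(1.0) * v(1.0) - u(0.0) * v(0.0)                # [u v] from 0 to 1, i.e. e
rhs = boundary - riemann(lambda x: du(x) * v(x), 0.0, 1.0)  # minus integral of u' v
print(lhs, rhs)  # both ≈ 1.0
```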
References
### On the tangent line and osculating plane of a curve
Originally published at 狗和留美者不得入内. You can comment here or there.
Here, we will be working in .
### Analytic geometry prerequisites
Proposition 1 The distance between a point $P_0 = (x_0, y_0, z_0)$ and the plane given by $ax + by + cz + d = 0$ is $\dfrac{|ax_0 + by_0 + cz_0 + d|}{\sqrt{a^2 + b^2 + c^2}}$.
Proof: A normal vector of the plane is . We plug in to get
the solution of which is . Since every unit of corresponds to of distance, we have for our answer.
Proposition 2 The distance between a point and a straight line given by can be obtained by the magnitude of a cross product.
Proof: As for this distance, it is obtained by taking the perpendicular with respect the straight line that contains , which we shall call . We use to denote the distance between and . One notices that is equal to , where is the angle between the straight line given in the proposition and the straight line connecting and . We know that the magnitude of the cross product of two vectors is the product of their magnitudes and the sign of the angle between the two vectors, which completes our proof.
## Preliminary definitions
Definition 1 A regular curve is a connected subset of homeomorphic to some that is a line segment or a circle of radius . If the homeomorphism is in for and the rank of is maximal (equal to 1), then we say this curve is k-fold continuously differentiable. For , we say that is smooth.
Definition 2 Let a smooth curve be given by the parametric equations
The velocity vector of at is the derivative
The velocity vector field is the vector function . The speed of at is the length of the velocity vector.
Definition 3 The tangent line to a smooth curve at the point is the straight line through the point in the direction of the velocity vector .
## Tangent line and osculating plane of a curve
We let denote the length of a chord of a curve joining the points and and denote the length of a perpendicular dropped from onto the tangent line to at the point .
Lemma 1 Let be continuous in . Then,
Proof: Trivial and left to the reader.
Theorem 1
Proof: We have that and by Proposition 2 that
We have, using properties of limits and keeping Lemma 1 in mind in the process,
Definition 4 A plane is called an osculating plane to a curve at a point if
Theorem 2 At each point of a regular curve of class where , there is an osculating plane , and the vectors are orthogonal to its unit normal vector .
Proof: Based on a diagram from [1] (the figure could not be reproduced here).
### On the length and curvature of curves
Originally published at 狗和留美者不得入内. You can comment here or there.
### The length of a curve

, we obtain the differentiability of the vector function , whose derivative is

: a simple computation.
References
https://en.m.wikibooks.org/wiki/Real_Analysis/Darboux_Integral | # Real Analysis/Darboux Integral
# Real Analysis/Darboux Integral
Another popular definition of "integration" was provided by Jean Gaston Darboux and is often used in more advanced texts, such as this wikibook, due to its introductory ease. In this chapter, we will define the Darboux integral, and demonstrate the equivalence of Darboux integrals and the more widely known Riemann integrals.
## Construction
Unlike Riemann Integration, this version of the integral will forgo one assumption of the function ƒ — that it must be continuous. It will only assume that the function ƒ is bounded on [a,b]. Of course, normal assumptions for a Real Analysis course such as the function only operating on real numbers over the interval of focus can be presumed (i.e. $f:[a,b]\to\mathbb{R}$)
### Partitions
We will modify the definition of partition for the Darboux Integral so that the values a and b are also included in the set. For completeness, we will write out this new definition again.
Define Partition $\mathcal{P}$ of an interval $[a,b]$
A finite collection of real numbers such that $a = x_0 < x_1 < x_2 < \ldots < x_n = b$. It is commonly notated as $\mathcal{P} = (x_0, x_1, x_2, \ldots, x_n)$, with the number of discretely written x's being arbitrary.
Define Partition Point
An element of the Partition set.
For now, we will ignore the actual process of indexing these values. However, it should be noted that our definition of the partition does not make a claim about the relationship between the numbers; these values are not necessarily evenly distributed - but they can.
### Upper and Lower Sums
Let $\mathcal{P}$ be a partition of $[a,b]$
For every $x_i\in\mathcal{P}$, you can define two special numbers:
$m_{i} \, \dot{=} \, \inf{\{f(x) \,|\, x\in [x_{i-1}, x_i]\}}$ and
$M_{i} \, \dot{=} \, \sup{\{f(x) \,|\, x\in [x_{i-1}, x_i]\}}$
The verbal definition of these two variables is more clear; mi defines the infimum of the set of valid ƒ(x) values in between two partition points and Mi defines the supremum of the set of valid ƒ(x) values in between two partition points.
Next, we will define the key functional component of the Darboux Integral, the sums.
Definition of an Upper Sum of $f$ for $\mathcal{P}$
A function, notated as $U(f,\mathcal{P})$, and is defined as $\sum_{i=1}^n M_i(x_i-x_{i-1})$
Definition of a Lower Sum of $f$ for $\mathcal{P}$
A function, notated as $L(f,\mathcal{P})$, and is defined as $\sum_{i=1}^n m_i(x_i-x_{i-1})$
Borrowing from geometry, you will notice that both sums are essentially additions of various rectangular shapes that are tied to the function ƒ, since the height of each rectangle is defined as either a supremum or an infimum of ƒ, respectively.
It is important to note that although the upper and lower sum borrow function notation, they are not, strictly speaking, functions in the normal sense. They take partitions (whose size is an arbitrary natural number) as the input. The function ƒ is treated as a fixed constant.
There is actually only one more construction required in order to reach the Darboux integral. The only problem? This last step is to relate the upper sum and the lower sum. After all, the rectangles generated from this function leave out a lot of gaps because there are too few partition points. The more partition points there are, the more that Mi and mi converge upon the same value. The next task should be clear now; we need to prove that the upper sum and lower sum can converge onto a point.
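Before the proof, here is a quick numerical illustration (mine, not from the wikibook). For the increasing function $f(x)=x^2$ on $[0,1]$, where the infimum and supremum on each subinterval occur at its endpoints, the lower and upper sums squeeze toward $1/3$ as the partition is refined:

```python
# Lower and upper Darboux sums of f(x) = x^2 over a uniform partition of [0, 1].
# Since f is increasing here, m_i = f(x_{i-1}) and M_i = f(x_i) on [x_{i-1}, x_i].
def darboux_sums(f, a, b, n):
    xs = [a + (b - a) * i / n for i in range(n + 1)]
    lower = sum(f(xs[i - 1]) * (xs[i] - xs[i - 1]) for i in range(1, n + 1))
    upper = sum(f(xs[i]) * (xs[i] - xs[i - 1]) for i in range(1, n + 1))
    return lower, upper

for n in (10, 100, 1000):
    low, up = darboux_sums(lambda x: x * x, 0.0, 1.0, n)
    print(n, low, up)  # both columns approach 1/3 as n grows
```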
### Refinement
The second last piece of the construction requires that we prove the following two lemmas regarding our partition and our sums:
1. $L(f,\mathcal{P}) \le L(f,\mathcal{P}^*)$
2. $U(f,\mathcal{P}) \ge U(f,\mathcal{P}^*)$
Excuse me, we have to define what the partition P with the asterisk means first before we can analyze this statement.
Define the Refinement $\mathcal{P}^*$ of Partition $\mathcal{P}$
A partition such that $\mathcal{P}^* \supset \mathcal{P}$. Alternatively, $\mathcal{P}^*$ has more partition points over the same interval [a,b] than $\mathcal{P}$
Okay, why do we need to prove this? Simply put, these inequalities state that more partition points lead to a better approximation of the actual area. The lower bound will increase as it approaches the "area", while the upper bound will decrease as it approaches the "area". This should be a fact so intuitive that the idea of proving it might have never crossed your mind. However, we will prove this lemma right now. It will be needed for the final piece of the Darboux Integral puzzle.
This proof is simple and will only require inequality algebra.
For now, let's assume that $\mathcal{P}^*$ only has one more partition point than $\mathcal{P}$ (we will use this special case to prove the general case later). Given that, we will only require three partition points from these partitions in our proof: the extra partition point found only in $\mathcal{P}^*$ and its two adjacent partition points found in both $\mathcal{P}^*$ and $\mathcal{P}$.

Let $x_i,x_{i-1}\in\mathcal{P}$ and let $x^*\in\mathcal{P}^*\setminus\mathcal{P}$ be such that $x_{i-1} < x^* < x_i$. Now, we will generate the special infimum variables for the two new subintervals, which we call $m'$ and $m''$. Let

$m'=\inf \{f(x) \, | \, x\in [x_{i-1},x^*]\}$ and $m''=\inf \{f(x) \, | \, x\in [x^*,x_i]\}$

We will use all of these variables to express the lower sum of the refined partition in relation to the lower sum of the original partition:

\begin{align}L(f, \mathcal{P}) &= \sum^n_{j=1} {m_j(x_j - x_{j-1})} \text{ and } \\ L(f, \mathcal{P}^*) &= \sum^{i-1}_{j=1} {m_j(x_j - x_{j-1})} + m'(x^* - x_{i-1}) + m''(x_i - x^*) + \sum^{n}_{j=i+1} {m_j(x_j - x_{j-1})}\end{align}

The key comparison between the two expressions can be distilled by removing the shared summations (via subtraction), leaving

$m_i(x_i - x_{i-1}) \text{ and } m'(x^* - x_{i-1}) + m''(x_i - x^*)$

Since $[x_{i-1},x^*]$ and $[x^*,x_i]$ are subsets of $[x_{i-1},x_i]$, we have $m_i \le m'$ and $m_i \le m''$; together with $(x^* - x_{i-1}) + (x_i - x^*) = x_i - x_{i-1}$, it follows that

$m_i(x_i - x_{i-1}) \le m'(x^* - x_{i-1}) + m''(x_i - x^*)$

This implies that refining a partition by one point can only increase (or preserve) the lower sum. Using recursion, a refined partition of any arbitrary size can be reached. The following chain of inclusions depicts the process of recursion:

$\mathcal{P} \subset \mathcal{P}^1 \subset \mathcal{P}^2 \subset \ldots \subset \mathcal{P}^*$

$\blacksquare$
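The lemma can also be illustrated numerically (a sketch of mine, not from the wikibook), approximating each infimum by dense sampling; the refined partition's lower sum comes out at least as large:

```python
import math

def lower_sum(f, pts, samples=2000):
    """Approximate L(f, P): infima are estimated by dense sampling on each subinterval."""
    total = 0.0
    for lo, hi in zip(pts, pts[1:]):
        m = min(f(lo + (hi - lo) * j / samples) for j in range(samples + 1))
        total += m * (hi - lo)
    return total

P = [0.0, 1.0, 2.0, 3.0]
P_star = sorted(P + [0.5, 1.7, 2.2])  # a refinement of P: same endpoints, extra points
Lp, Lp_star = lower_sum(math.sin, P), lower_sum(math.sin, P_star)
print(Lp, Lp_star)  # the refined lower sum is at least as large
```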
Similarly, we can prove $U(f,\mathcal{P})\ge U(f,\mathcal{P}^*)$ using the same method, with suprema in place of infima.
### Convergence
Now that we have proved that our intuition is correct; more partitions will yield an even closer approximation from both the lower sum and the upper sum, it is only fair to see if we can bring them together. If I can use mathematical symbols freely, it can be depicted as
$L(f,\mathcal{P}_1) \rightarrow \text{Area} \leftarrow U(f,\mathcal{P}_2)$
when the upper sum (the area of overestimation) is larger than the actual "area" and the lower sum (the area of underestimation) is smaller, yet both converge upon the "area" when the partition becomes finer. However, thinking like this will lead us to avoid the mathematical pieces we have collected that can also as fairly construct our integral. The roadmap to prove the Darboux Integral leads us to the final piece,
$L(f,\mathcal{P}) \le U(f,\mathcal{P})$
where $\mathcal{P}$ can be thought of as a partition full enough to yield the perfect approximation. We will call it, for this explanation portion, a perfect partition, although the perfect partition is importantly not infinite. However, you might be wondering how to solve this; the previous lemma does not make any comparisons between the lower and upper sum. That is why we are going to prove this instead:
$L(f,\mathcal{P}_1) \le U(f,\mathcal{P}_2)$
when $\mathcal{P}_1$ and $\mathcal{P}_2$ are any partitions of [a,b]. Yes, they do not need to be the same partition, as long as they are over the same interval [a,b]. This is actually going to be simpler, because our proof will use these two sums like bounds (dare I compare it to a squeeze?).
Given that $\mathcal{P}_1$ and $\mathcal{P}_2$ are both subsets of their common refinement $\mathcal{P} = \mathcal{P}_1 \cup \mathcal{P}_2$ (our "perfect partition" for this argument), we can use our lemma to refine each of them up to $\mathcal{P}$:

$L(f,\mathcal{P}_1) \le L(f,\mathcal{P})$ and $U(f,\mathcal{P}_2) \ge U(f,\mathcal{P})$

On the common refinement itself, the upper sum is at least the lower sum, since on each subinterval the supremum is, by definition, greater than or equal to the infimum. This rules out the lower sum ever being greater:

$L(f,\mathcal{P}) \le U(f,\mathcal{P})$

Chaining the three inequalities gives

$L(f,\mathcal{P}_1) \le L(f,\mathcal{P}) \le U(f,\mathcal{P}) \le U(f,\mathcal{P}_2)$

Since this holds for arbitrary $\mathcal{P}_1$ and $\mathcal{P}_2$, taking the supremum over lower sums and the infimum over upper sums preserves the ordering:

$\sup_{\mathcal{P}_1} \{L(f,\mathcal{P}_1)\} \le \inf_{\mathcal{P}_2} \{U(f,\mathcal{P}_2)\}$

We can now conclude our proof. $\blacksquare$
### Conclusion
We come to an impasse. Our final piece yields a very strange answer about the lower and upper sum: namely, that they satisfy an inequality rather than an equality,
$L(f,\mathcal{P}) \le U(f,\mathcal{P})$
where the certainty of the number remains unknown. However, we can easily sidestep this issue by breaking it into two cases and validating one or the other. What do we mean by validation? We can define the integral, namely the Darboux Integral, as being the number ensuring the equality of the upper and lower sum. We can then define an invalid integral as maintaining the inequality. In mathematical notation, we define the integral as being
$L(f,\mathcal{P}) = U(f,\mathcal{P})$
and rejecting every other case as being an invalid integral.
From here, we completed the construction of the Darboux Integral from the bottom-up.
## Definition
The definition of Darboux Integrable for a function ƒ on $[a,b]$
is
Alternate Notations Notice. Both definitions are equivalent and only serve to clarify confusing notation.
1. If and only if $\sup_{\mathcal{P}}L(f,\mathcal{P}) = \inf_{\mathcal{P}}U(f,\mathcal{P})$, where the supremum is taken over the set of all partitions on that interval
2. If and only if $\sup{\{L(f,\mathcal{P}) \, : \, \mathcal{P} \text{ a partition of } [a,b] \}} = \inf {\{U(f,\mathcal{P}) \, : \, \mathcal{P} \text{ a partition of } [a,b] \}}$
It is commonly notated as either
1. $\int_a^b f$
2. $\int_a^b f(x) \, dx$
Based on whether you are willing to write out the function explicitly (#2) or by name (#1)
### Remarks
1. Of course, the function has to be real i.e. $f:[a,b]\to\mathbb{R}$.
2. The Darboux Integral is defined on the condition of uniqueness, unlike other concepts in this wikibook, such as limits, for which uniqueness is implied by the definition.
### Properties
Let $f:[a,b]\to\mathbb{R}$
$f$ is Darboux integrable over $[a,b]$ if and only if for every $\varepsilon>0$, there exists a partition $\mathcal{P}$ on $[a,b]$ such that $U(f,\mathcal{P})-L(f,\mathcal{P})<\varepsilon$
#### Proof
($\Rightarrow$)Let $A=\int_a^b f$ and let $\varepsilon>0$ be given. Thus, by Gap Lemma, there exists a partition $\mathcal{P}$ such that both $U(f,\mathcal{P}),L(f,\mathcal{P})\in V_{\frac{\varepsilon}{2}}(A)$, and hence $U(f,\mathcal{P})-L(f,\mathcal{P})<\varepsilon$
($\Leftarrow$)Let $\mathcal{P}_0$ be any partition on $[a,b]$. Observe that $L(f,\mathcal{P}_0)$ is a lower bound of the set $\mathcal{U}=\{U(f,\mathcal{P})|\mathcal{P}$ is any partition$\}$ and that $U(f,\mathcal{P}_0)$ is an upper bound of the set $\mathcal{L}=\{L(f,\mathcal{P})|\mathcal{P}$ is any partition$\}$
Thus, let $\alpha=\sup\mathcal{L}$ and $\beta=\inf\mathcal{U}$. As $L(f,\mathcal{P}_1)\le U(f,\mathcal{P}_2)$ for any partitions $\mathcal{P}_1,\mathcal{P}_2$, we have that $\alpha>\beta$ cannot be true. Also, since for every $\varepsilon>0$ there is a partition $\mathcal{P}$ with $\beta \le U(f,\mathcal{P}) < L(f,\mathcal{P}) + \varepsilon \le \alpha + \varepsilon$, $\alpha<\beta$ is also not possible. Hence, $\alpha=\beta=L$ (say).
As $L=\sup_{\mathcal{P}} L(f,\mathcal{P})=\inf_{\mathcal{P}} U(f,\mathcal{P})$, we have that $\int_a^b f=L$
## Equivalence of Riemann and Darboux Integrals
At first sight, it may appear that the Darboux integral is a special case of the Riemann integral. However, this is illusory, and indeed the two are equivalent.
### Lemma
(1) Let $f:[a,b]\rightarrow\mathbb{R}$ be Darboux Integrable, with integral $L$
Define function $\varepsilon (\delta)=\sup\{|L-S(f,\dot{P})|:\|\dot{P}\|=\delta\}$
(2) Then $\delta_1<\delta_2$ $\Rightarrow$ $\varepsilon(\delta_1)<\varepsilon(\delta_2)$
#### Proof
Let $\delta_1<\delta_2$. Consider set $T$ of tagged partitions $\dot{P}$ such that $\varepsilon(\delta_1)\leq|L-S(f,\dot{P})|$
Let $T'$ be the set of $\dot{P'}$ where $\dot{P'}\subset\dot{P}\in T$ and $\|\dot{P'}\|=\delta_2>\delta_1$
note that $T'\neq T$ and that the set $T'$ indeed contains all partitions $\dot{P'}$ with $\|\dot{P'}\|=\delta_2$
Now, for $\dot{P}\in T$, we can construct $\dot{P'}\in T'$ such that $|L-S(f,\dot{P'})|>|L-S(f,\dot{P})|$
Hence, $\displaystyle\sup_{\dot{P}\in T}\{|L-S(f,\dot{P})|\}<\sup_{\dot{P'}\in T'}\{|L-S(f,\dot{P'})|\}$
i.e. $\varepsilon(\delta_2)>\varepsilon(\delta_1)$
### Theorem
Let $f:[a,b]\rightarrow\mathbb{R}$
(1)$f$ is Riemann integrable on $[a,b]$ iff
(2)$f$ is Darboux Integrable on $[a,b]$
#### Proof
($\Rightarrow$) Let $\epsilon>0$ be given.
(1)$\Rightarrow$ $\exists$ tagged partition $\dot{P}$ such that $|S(f,\dot{P})-L|<\frac{\epsilon}{2}$.
Let partitions $P_1$ and $P_2$ be the same refinement of $\dot{P}$ but with different tags.
Therefore, by the triangle inequality, $|S(f,P_1)-S(f,P_2)|\le|S(f,P_1)-L|+|L-S(f,P_2)|<\epsilon$ for all such $P_1$ and $P_2$.
Gap Lemma $\Rightarrow$ $U(f,P)-L(f,P)<\epsilon$,
$\epsilon>0$ being arbitrary, using Theorem 2.1, we have that $f$ is Darboux Integrable.
($\Leftarrow$)Let $\epsilon>0$ be given.
(2), Theorem 2.1 $\Rightarrow$ $\exists$ partition $P$ such that $U(f,P)-L(f,P)<\epsilon$
Hence, $|L-S(f,P)|<\epsilon$ as $L(f,P)\leq S(f,P)\leq U(f,P)$
By Lemma 3.1, $|L-S(f,P')|<\epsilon$ if $\|P'\|<\|P\|$
Thus, if we put $\delta=\|P\|$, we have (1)
We note here that the crucial element in this proof is Lemma 3.1, as it essentially is giving an order relation between $\varepsilon$ and $\delta$, which is not directly present in either the Riemann or Darboux definition.
https://quasirandomideas.wordpress.com/2010/04/22/math2111-chapter-1-fourier-series-additional-material-l_2-convergence-of-fourier-series/ | # Math2111: Chapter 1: Fourier series. Additional Material: L_2 convergence of Fourier series.
In this blog entry you can find lecture notes for Math2111, several variable calculus. See also the table of contents for this course. This blog entry printed to pdf is available here.
In this post I present some ideas which shed light on the question why one can expect the Fourier series to converge to the function (under certain assumptions).
Complete orthonormal systems
To do so let us first study a simpler case, one with which you are familiar with. (Throughout this section I will ignore convergence questions.)
Consider the vector space $\mathbb{R}^n$. For two vectors $\boldsymbol{u} = (u_1,\ldots, u_n)^\top$ and $\boldsymbol{v} = (v_1,\ldots, v_n)^\top$ we define the inner product
$\displaystyle \langle \boldsymbol{u}, \boldsymbol{v} \rangle = u_1 v_1 + \cdots + u_n v_n.$
Assume we are given a set of orthonormal vectors $\boldsymbol{u}_1, \ldots, \boldsymbol{u}_k,$ that is, we have
$\displaystyle \langle \boldsymbol{u}_k, \boldsymbol{u}_l \rangle = \left\{ \begin{array}{rl} 0 & \mbox{if } k \neq l, \\ & \\ 1 & \mbox{if } k = l. \end{array} \right.$
Now the question arises whether for every vector $\boldsymbol{v} \in \mathbb{R}^n$ there are $\lambda_1, \ldots, \lambda_k \in \mathbb{R}$ such that
$\displaystyle \boldsymbol{v} = \lambda_1 \boldsymbol{u}_1 + \cdots + \lambda_k \boldsymbol{u}_k?$
The answer is of course: it depends. If $k = n$, then yes; if $k < n$, then no. (Notice that $k > n$ is not possible, since orthonormal vectors are linearly independent.)
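Here is a small sketch (my own, not from the notes) of this dichotomy in $\mathbb{R}^3$: with a full orthonormal set the expansion $\sum_k \langle \boldsymbol{v}, \boldsymbol{u}_k \rangle \boldsymbol{u}_k$ recovers $\boldsymbol{v}$, while with $k < n$ vectors a component is lost.

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

u = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)]  # orthonormal basis of R^3
v = (2.0, -1.0, 3.0)

def expand(vectors):
    """Return sum_k <v, u_k> u_k for the given orthonormal vectors."""
    coeffs = [dot(v, uk) for uk in vectors]
    return tuple(sum(c * uk[i] for c, uk in zip(coeffs, vectors)) for i in range(3))

print(expand(u))      # (2.0, -1.0, 3.0): v is fully recovered
print(expand(u[:2]))  # (2.0, -1.0, 0.0): the component along u_3 is lost
```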
Consider now $\ell^2(\mathbb{Z})$ equipped with the standard inner product (defined here). Then let us ask the same question. Assume we are given a set of orthonormal vectors $\boldsymbol{u}_1, \ldots, \boldsymbol{u}_k,$ that is, we have
$\displaystyle \langle \boldsymbol{u}_k, \boldsymbol{u}_l \rangle = \left\{ \begin{array}{rl} 0 & \mbox{if } k \neq l, \\ & \\ 1 & \mbox{if } k = l. \end{array} \right.$
Is it true that for every vector $\boldsymbol{v} \in \ell^2(\mathbb{Z})$ there are $\lambda_1, \ldots, \lambda_k \in \mathbb{R}$ such that
$\displaystyle \boldsymbol{v} = \lambda_1 \boldsymbol{u}_1 + \cdots + \lambda_k \boldsymbol{u}_k?$
The answer is again: it depends. If $k < \infty$, then certainly not, because $\ell^2(\mathbb{Z})$ is infinite dimensional. If $k=\infty$, then we still do not know. Why? To illustrate this, let $A_k$ be the vector whose entries are all $0$ except the entry at position $k$, which is $1$. More precisely, for $k \in \mathbb{Z}$ let
$\displaystyle A_k = (\ldots, b_{2,k}, b_{1,k}, a_{0,k}, a_{1,k}, a_{2,k}, \ldots)$
where we set
$\displaystyle \begin{array}{rcl} b_{n,k} & = & 0 \quad \mbox{if } n \neq -k, \\ && \\ b_{n,k} & = & 1 \quad \mbox{if } n = -k, \\ && \\ a_{n,k} & = & 0 \quad \mbox{if } n \neq k, \\ && \\ a_{n,k} & = & 1 \quad \mbox{if } n = k. \end{array}$
Choose now an arbitrary element
$\displaystyle C = (\ldots, d_2, d_1, c_0, c_1, c_2, \ldots) \in \ell^2(\mathbb{Z}).$
In general, which vectors $A_k$ do we need to represent $C?$ The answer is: all of them. Since we have
$\displaystyle C = \sum_{k=1}^\infty d_k A_{-k} + \sum_{k=0}^\infty c_k A_k.$
If we consider for instance the set of vectors
$\displaystyle \mathscr{A} = \{A_k \in \ell^2(\mathbb{Z}): k \in \mathbb{Z} \setminus \{23\}\}$
then the element $A_{23} \in \ell^2(\mathbb{Z})$ cannot be represented using the elements in $\mathscr{A}.$ In other words, $A_{23}$ is missing from the set $\mathscr{A}$.
Hence, for an infinite dimensional vector space it is not enough to have infinitely many orthonormal vectors in order to be able to represent any element. Notice that, if we have an arbitrary infinite set of orthonormal vectors
$\displaystyle \mathscr{B} =\{ \ldots, B_{-2}, B_{-1}, B_0, B_1, B_2, \ldots \}$
and a vector $A$ which cannot be represented by the vectors in $\mathscr{B}$, then there exists a vector $B$ such that
$\langle B, B_k\rangle = 0 \quad \mbox{for all } k \in \mathbb{Z}.$
Indeed, we can define the vector $B$ by
$\displaystyle B = A - \sum_{k \in \mathbb{Z}} \langle A, B_k \rangle B_k.$
Since $A$ cannot be represented by the elements in $\mathscr{B}$, it follows that $B$ is not the zero vector. Further
$\displaystyle \begin{array}{rcl} \langle B, B_n \rangle & = & \langle A, B_n \rangle - \sum_{k=-\infty}^\infty \langle A, B_k \rangle \langle B_k, B_n \rangle \\ && \\ & = & \langle A, B_n \rangle - \langle A, B_n \rangle \langle B_n, B_n \rangle = 0, \end{array}$
since we have chosen the elements $\ldots, B_{-2}, B_{-1}, B_0, B_1, B_2, \ldots$ such that
$\displaystyle \langle B_k, B_n \rangle = \left\{\begin{array}{rl} 1 & \mbox{if } k = n, \\ & \\ 0 & \mbox{if } k \neq n. \end{array} \right.$
Now let us consider Fourier series. Let us ignore questions concerning convergence in the following. For instance, say, let us only consider continuously differentiable functions. The same argument as for $\ell^2(\mathbb{Z})$ also applies in this case. If there were a function $f$ for which we do not have
$\displaystyle f(x) = \frac{a_0}{2} + \sum_{k=1}^\infty \left[a_k \cos kx + b_k \sin kx \right]$
then there would have to be a nonzero (continuously differentiable) $2\pi$-periodic function $\phi:\mathbb{R} \to \mathbb{R}$ such that
$\displaystyle \begin{array}{rll} \langle \phi, \cos kx \rangle & = \int_{-\pi}^\pi \phi(x) \cos kx \, \mathrm{d} x & = 0 \quad \mbox{for all } k = 0, 1, 2, \ldots \mbox{ and } \\ && \\ \langle \phi, \sin kx \rangle & = \int_{-\pi}^\pi \phi(x) \sin kx \, \mathrm{d} x & = 0 \quad \mbox{for all } k = 1, 2, \ldots . \end{array}$
Therefore every (continuously differentiable), $2\pi$ periodic function can be represented by its Fourier series if and only if the functions
$\displaystyle \cos k x, \quad k = 0, 1, 2, \ldots \quad \mbox{and } \sin kx, \quad k = 1,2, \ldots \quad (1)$
are a complete orthogonal system, that is, there is no nonzero (continuously differentiable) $2\pi$-periodic function $\phi:\mathbb{R} \to \mathbb{R}$ which is orthogonal to all trigonometric functions given in line (1).
This does indeed hold, which follows from the convergence of the Fourier series to the function in the mean square sense. We prove this result in the next section.
$\rhd$ Mean square convergence
We prove now that the Fourier series of $f$ converges to $f$ in the mean square sense. That is,
$\displaystyle \lim_{N \to \infty} \|S_Nf- f\|_2 = 0,$
where $S_N f$ is the $N$th partial sum of the Fourier series and the $L_2$-norm is defined by
$\displaystyle \|f\|_2 = \left( \int_{-\pi}^\pi |f(x)|^2 \, \mathrm{d} x \right)^{1/2}.$
To prove this we need some preparation. Instead of using $\cos kx$ for $k = 0,1,2, \ldots$ and $\sin kx$ for $k = 1,2, \ldots$ we use in the following the functions $\mathrm{e}^{{\rm i} k x}$ where $k \in \mathbb{Z}$. For complex valued functions $f, g: [-\pi, \pi] \to \mathbb{C}$, we define the inner product by
$\displaystyle \langle f, g \rangle = \int_{-\pi}^\pi f(x) \overline{g(x)} \, \mathrm{d} x,$
where $\overline{c}$ stands for the complex conjugate of a complex number $c \in \mathbb{C}$. Hence, we consider the Fourier series
$\displaystyle Sf(x) = \sum_{k=-\infty}^\infty a_k \mathrm{e}^{{\rm i} k x},$
where the Fourier coefficients are given by
$\displaystyle a_k = \frac{1}{2\pi} \int_{-\pi}^\pi f(x) \mathrm{e}^{-{\rm i} k x} \, \mathrm{d} x. \qquad\qquad\qquad (2)$
Further, the $N$th partial sum of the Fourier series is given by
$\displaystyle S_Nf(x) = \sum_{k=-N}^N a_k \mathrm{e}^{{\rm i} k x}.$
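As an added numerical sketch (not part of the original notes), the coefficients and partial sums can be approximated on a uniform grid; with the convention $a_k = \frac{1}{2\pi}\int_{-\pi}^{\pi} f(x)\,\mathrm{e}^{-\mathrm{i}kx}\,\mathrm{d}x$ the function $f(x)=\cos x$ has $a_{\pm 1}=1/2$ and all other coefficients $0$.

```python
import numpy as np

# Approximate a_k by a uniform Riemann sum (spectrally accurate for
# smooth 2pi-periodic integrands) and form the partial sum S_N f.
def fourier_coeff(f, k, n_grid=4096):
    x = np.linspace(-np.pi, np.pi, n_grid, endpoint=False)
    return np.mean(f(x) * np.exp(-1j * k * x))

def partial_sum(f, N, x):
    return sum(fourier_coeff(f, k) * np.exp(1j * k * x)
               for k in range(-N, N + 1))

a1 = fourier_coeff(np.cos, 1)              # expect 1/2
a2 = fourier_coeff(np.cos, 2)              # expect 0
approx = partial_sum(np.cos, 3, 0.7).real  # expect cos(0.7)
```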
Exercise Show that (2) holds if $f$ is a trigonometric polynomial $f(x) = \sum_{k=-N}^N a_k \, \mathrm{e}^{{\rm i} k x}$.
Definition (Dirichlet kernel) The trigonometric polynomial defined for $x \in [-\pi, \pi]$ by
$\displaystyle D_N(x) = \sum_{n=-N}^N {\rm e}^{{\rm i} n x}$
is called the $N$th Dirichlet kernel.
Exercise Show that
$\displaystyle D_N(x) = \frac{\sin ((N+1/2) x)}{\sin (x/2)} .$
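A quick numerical check of this closed form against the defining sum (an added illustration; $x=0$ is excluded to avoid the removable singularity of the quotient):

```python
import numpy as np

# Compare D_N(x) = sum_{n=-N}^{N} e^{inx} with sin((N+1/2)x)/sin(x/2).
N = 7
x = np.linspace(0.05, np.pi, 200)
direct = sum(np.exp(1j * n * x) for n in range(-N, N + 1)).real
closed = np.sin((N + 0.5) * x) / np.sin(x / 2)
max_err = np.max(np.abs(direct - closed))
```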
Definition (Fejér kernel) The trigonometric polynomial defined for $x \in [-\pi, \pi]$ by
$\displaystyle F_N(x) = \frac{1}{N} \sum_{n= 0}^{N-1} D_n(x)$
is called the $N$th Fejér kernel.
Exercise Show that
$\displaystyle F_N(x) = \frac{1}{N} \frac{\sin^2 (Nx/2)}{\sin^2 (x/2)} .$
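Again this identity can be checked numerically (an added illustration): averaging the closed forms of the Dirichlet kernels $D_0, \ldots, D_{N-1}$ reproduces the Fejér kernel, which is visibly nonnegative.

```python
import numpy as np

# F_N as the average of Dirichlet kernels versus its closed form.
N = 6
x = np.linspace(0.05, np.pi, 200)
dirichlet = lambda n: np.sin((n + 0.5) * x) / np.sin(x / 2)
avg = sum(dirichlet(n) for n in range(N)) / N
closed = np.sin(N * x / 2) ** 2 / np.sin(x / 2) ** 2 / N
max_err = np.max(np.abs(avg - closed))
min_val = np.min(closed)       # the Fejer kernel is nonnegative
```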
Lemma
The Fejér kernel satisfies the following properties:
1. For all $N \ge 1$ we have
$\displaystyle \frac{1}{2\pi} \int_{-\pi}^\pi F_N(x) \, \mathrm{d} x = 1.$
2. There exists an $M > 0$ such that for all $N \ge 1,$
$\displaystyle \int_{-\pi}^\pi |F_N(x)|\, \mathrm{d} x \le M.$
3. For every $\delta > 0,$
$\displaystyle \int_{\delta \le |x| \le \pi} |F_N(x)| \, \mathrm{d} x \to 0 \mbox{ as } N \to \infty.$
Proof
For the first part we just note that $\frac{1}{2\pi}\int_{-\pi}^\pi {\rm e}^{{\rm i} k x} \, \mathrm{d} x = 0$ for $k \neq 0$ and $1$ for $k = 0$. The second part follows from the first, since $F_N(x) = \frac{1}{N} \left(\frac{\sin(Nx/2)}{\sin(x/2)}\right)^2 \ge 0$.
To prove the third part, note that for $\delta > 0$ there exists a $c_\delta > 0$ such that $|\sin(x/2)| \ge c_\delta$ for all $\delta \le |x| \le \pi$. Hence $|F_N(x)| \le \frac{1}{N c^2_\delta}$ for all $\delta \le |x| \le \pi$ and therefore
$\displaystyle \int_{\delta \le |x| \le \pi} |F_N(x)| \, \mathrm{d} x \to 0 \mbox{ as } N \to \infty.$
Hence the result follows. $\Box$
Theorem Let $f:\mathbb{R} \to \mathbb{R}$ be continuous and periodic with period $2\pi$. Let the convolution of $f$ and $F_N$ be given by
$\displaystyle f \star F_N(x) = \frac{1}{2\pi} \int_{-\pi}^\pi f(x-y) F_N(y) \, \mathrm{d} y.$
Then
$\displaystyle \lim_{N\to \infty} \left\| f \star F_N - f \right\|_\infty = 0.$
Proof
Since $f$ is continuous it follows that $f$ is uniformly continuous on any bounded interval. For $\varepsilon > 0$ choose $\delta > 0$ such that for $|y|< \delta$ we have $|f(x-y)-f(x)| < \varepsilon$. Then, by the first property of the above lemma we have
$\displaystyle (f\star F_N)(x) - f(x) = \frac{1}{2\pi} \int_{-\pi}^\pi F_N(y) \left[f(x-y) - f(x) \right] \, \mathrm{d} y.$
Therefore,
$\displaystyle \begin{array}{rcl} 2\pi |(f\star F_N)(x) - f(x)| & \le & \int_{-\pi}^\pi F_N(y) |f(x-y)-f(x)| \, \mathrm{d} y \\ && \\ & \le & \int_{|y| < \delta} F_N(y) |f(x-y)-f(x)| \, \mathrm{d} y \\ && \\ && + \int_{\delta \le |y| \le \pi} F_N(y) |f(x-y) - f(x)| \, \mathrm{d} y \\ && \\ & \le & \varepsilon \int_{-\pi}^\pi F_N(y) \, \mathrm{d} y + 2 B \int_{\delta \le |y| \le \pi} F_N(y) \, \mathrm{d} y \\ && \\ & = & 2\pi\varepsilon + 2 B \int_{\delta \le |y| \le \pi} F_N(y) \, \mathrm{d} y, \end{array}$
where $B = \max_{-\pi \le x \le \pi} |f(x)|$. Therefore
$\displaystyle |(f\star F_N)(x) - f(x)| \le \varepsilon + \frac{B}{\pi} \int_{\delta \le |y| \le \pi} F_N(y) \, \mathrm{d} y.$
The result follows now by the third property of $F_N$. $\Box$
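The theorem can be illustrated numerically (an added sketch with arbitrary parameter choices): the Fejér mean $\sigma_N f = \frac{1}{N}\sum_{m=0}^{N-1} S_m f$, which coincides with $f \star F_N$ up to the normalisation used here, approaches the continuous function $f(x)=|x|$ in the sup norm as $N$ grows.

```python
import numpy as np

# Fejer (Cesaro) means of the Fourier partial sums of f(x) = |x|.
x = np.linspace(-np.pi, np.pi, 1024, endpoint=False)
f = np.abs(x)                  # continuous 2pi-periodic extension

def fejer_mean(N):
    coeff = {k: np.mean(f * np.exp(-1j * k * x)) for k in range(-N, N + 1)}
    partials = [sum(coeff[k] * np.exp(1j * k * x) for k in range(-m, m + 1))
                for m in range(N)]
    return (sum(partials) / N).real

err_10 = np.max(np.abs(fejer_mean(10) - f))   # sup-norm errors
err_40 = np.max(np.abs(fejer_mean(40) - f))
```

The sup-norm error shrinks as $N$ increases, in line with uniform convergence.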
The function $f\star F_N$ is a trigonometric polynomial. Indeed, using the variable transformation $z = x-y$ we obtain
$\displaystyle \begin{array}{rcl} f\star F_N(x) & = & \frac{1}{2\pi} \int_{x-\pi}^{x+\pi} f(z) F_N(x-z) \, \mathrm{d} z \\ && \\ & = & \frac{1}{2\pi} \int_{-\pi}^{\pi} f(z) F_N(x-z) \, \mathrm{d} z \\ && \\ & = & \frac{1}{2\pi N} \sum_{m=0}^{N-1} \sum_{n=-m}^m \mathrm{e}^{{\rm i} n x} \int_{-\pi}^\pi f(z) \mathrm{e}^{-{\rm i} n z} \, \mathrm{d} z \\ && \\ & = & \frac{1}{N} \sum_{m=0}^{N-1} \sum_{n=-m}^m a_n \mathrm{e}^{{\rm i} n x}, \end{array}$
where the $a_n$ are the Fourier coefficients of $f$.
Hence the theorem above shows that a continuous, $2\pi$ periodic function $f$ can be uniformly approximated by a trigonometric polynomial. The result now follows from the best approximation lemma.
Lemma (Best approximation lemma)
If $f$ is integrable with Fourier coefficients $a_n$, then
$\displaystyle \|f- \sum_{n=-N}^N a_n \mathrm{e}^{{\rm i} n x}\|_2 \le \|f - \sum_{n = -N}^N c_n {\rm e}^{{\rm i} n x}\|_2$
for any complex numbers $c_{-N}, \ldots, c_N$.
Proof
Let $a_{-N}, \ldots, a_N$ be the Fourier coefficients of $f$ and set $b_n = a_n - c_n$ for $-N \le n \le N$. Then
$\displaystyle f - \sum_{n=-N}^N c_n \mathrm{e}^{{\rm i} n x} = f - S_N f + \sum_{n = - N}^N b_n \mathrm{e}^{{\rm i} n x}.$
Now we have
$\displaystyle \langle f-S_Nf, \sum_{n=-N}^N b_n \mathrm{e}^{{\rm i} n x} \rangle = \sum_{n=-N}^N \overline{b_n} \left[\langle f, \mathrm{e}^{{\rm i} n x} \rangle - \langle S_N f, \mathrm{e}^{{\rm i} n x} \rangle\right].$
Since $\langle f, \mathrm{e}^{{\rm i} n x} \rangle = 2\pi a_n$ and $\langle S_N f, \mathrm{e}^{{\rm i} n x} \rangle = 2\pi a_n$ for $|n| \le N$, it follows that the inner product is $0$. Hence we can use the Pythagorean theorem to obtain
$\displaystyle \| f - \sum_{n=-N}^N c_n \mathrm{e}^{{\rm i} n x} \|_2^2 = \|f - S_N f \|_2^2 + \|\sum_{n = - N}^N b_n \mathrm{e}^{{\rm i} n x}\|_2^2 \ge \|f - S_N f\|_2^2.$
$\Box$
Corollary
Let $f:\mathbb{R} \to \mathbb{R}$ be $2 \pi$ periodic and continuous. Then
$\displaystyle \lim_{N\to \infty} \|f-S_Nf\|_2 = 0.$
Proof
The proof follows from the observation
$\displaystyle \|f-S_Nf\|_2 \le \|f- (f\star F_N)\|_2 \le \sqrt{2\pi} \|f-(f\star F_N)\|_\infty.$
$\Box$
The corollary states in particular that the trigonometric polynomials are dense, with respect to the norm $\|\cdot\|_2$, in the space of continuous $2\pi$-periodic functions.
So far we have shown that the mean square convergence for Fourier series holds for continuous functions. To show that it also holds for merely integrable and bounded functions we need the following lemma.
Lemma
Suppose that $f:[a,b]\to \mathbb{R}$ is integrable and bounded by $B$. Then there exists a sequence of continuous and periodic functions $\{f_m\}_{m\ge 1}$ with $f_m:[a,b]\to\mathbb{R}$ so that
$\displaystyle \sup_{x \in [a,b]} |f_m(x)| \le B \quad \mbox{for all } m = 1,2,\ldots$
and
$\displaystyle \int_a^b |f(x)-f_m(x)| \, \mathrm{d}x \to 0 \quad \mbox{as } m \to \infty.$
Proof
Given an integer $L > 0$, there is a partition of $[a,b]$ given by $a=x_{L,0} < x_{L,1} < \cdots < x_{L,N}=b$ such that the upper and lower Riemann sums of $f$ differ by at most $1/L$. Let
$\displaystyle f^\ast(x)=\sup_{x_{L,n-1}\le y \le x_{L,n}} f(y) \quad \mbox{if } x \in [x_{L,n-1},x_{L,n}] \mbox{ for } 1 \le n \le N.$
Thus we have $\sup_{x\in [a,b]} |f^\ast(x)|\le B$ and
$\displaystyle \int_a^b |f^\ast(x)-f(x)|\,\mathrm{d}x = \int_a^b (f^\ast(x)-f(x))\,\mathrm{d}x < \frac{1}{L}.$
We construct now functions $g_{L,k}$ in the following way. Let $K\in\mathbb{N}$ be large enough such that $1/K < \min_{1\le n\le N} |x_{L,n-1}-x_{L,n}|/2$. Then, for $k\ge K$ construct $g_{L,k}$ by setting
$\displaystyle g_{L,k}(x)=f^\ast(x) \quad \mbox{for } x\in [a,b]\setminus \bigcup_{0\le n \le N} (x_{L,n}-1/k,x_{L,n}+1/k).$
For $1 \le n < N$ and $x \in (x_{L,n}-1/k,x_{L,n}+1/k)$ let
$\displaystyle \begin{array}{rcl} g_{L,k}(x) &=& f^\ast(x_{L,n}-1/k) \\ && \\ && + (f^\ast(x_{L,n}+1/k)-f^\ast(x_{L,n}-1/k)) (x-x_{L,n}+1/k) k/2. \end{array}$
Further, for $x\in[a,a+1/k)$ let
$\displaystyle g_{L,k}(x)= f^\ast(a+1/k) (x-a)k$
and for $x \in (b-1/k,b]$ let
$\displaystyle g_{L,k}(x) = f^\ast(b-1/k) (b-x)k.$
For $1 \le k < K$ we set $g_{L,k}=g_{L,K}$.
Hence $g_{L,1},g_{L,2},\ldots$ is a sequence of continuous and periodic functions defined on $[a,b].$ Further, $g_{L,k}$ differs from $f^\ast$ only in the intervals $(x_{L,n}-1/k,x_{L,n}+1/k)$, and in those intervals by at most $2B$. Therefore
$\displaystyle \int_a^b |f^\ast(x)-g_{L,k}(x)|\,\mathrm{d} x \le 2BN \frac{2}{k}.$
By choosing $k\in\mathbb{N}$ large enough, say $k = k^\ast(L)$ we obtain
$\displaystyle \int_a^b |f^\ast(x)-g_{L,k^\ast(L)}(x)|\,\mathrm{d} x < \frac{1}{L}.$
Using the triangle inequality we obtain
$\displaystyle \int_a^b |f(x)-g_{L,k^\ast(L)}(x)|\, \mathrm{d} x < \frac{2}{L}.$
Now we define
$\displaystyle f_m(x)=g_{2m,k^\ast(2m)}(x) \quad\mbox{for } x\in[a,b] \mbox{ and } m = 1,2,\ldots,$
and extend $f_m$ periodically to $\mathbb{R}$. Then, $f_1,f_2,\ldots$ are periodic and continuous, and, by the above construction,
$\displaystyle \int_a^b |f(x)- f_m(x)|\,\mathrm{d}x < \frac{1}{m}$
which proves the result.
$\Box$
Theorem
Let $f:\mathbb{R} \to \mathbb{R}$ be $2 \pi$ periodic, integrable and bounded. Then
$\displaystyle \lim_{N\to \infty} \|f-S_Nf\|_2 = 0.$
Proof
Let $f$ be $2\pi$ periodic, bounded and integrable. Let $\varepsilon > 0$. Then, by the above lemma, there exists a $2\pi$ periodic, continuous function $g$ such that
$\displaystyle \sup_{x\in[-\pi,\pi]} |g(x)| \le \sup_{x\in[-\pi,\pi]} |f(x)| = B,$
and
$\displaystyle \int_{-\pi}^\pi |f(x)-g(x)|\,\mathrm{d}x < \frac{\varepsilon^2}{8B}.$
Hence we get
$\displaystyle \begin{array}{rcl} \|f-g\|^2_{2} &= &\int_{-\pi}^\pi |f(x)-g(x)| |f(x)-g(x)|\,\mathrm{d}x \\ &&\\ &\le & 2B\int_{-\pi}^\pi |f(x)-g(x)|\,\mathrm{d}x <\frac{\varepsilon^2}{4}. \end{array}$
By the corollary above, there exists a trigonometric polynomial $P$ such that $\|g-P\|_2< \varepsilon/2$ and hence, using the triangle inequality,
$\displaystyle \|f-P\|_2=\|f-g+g-P\|_2 \le \|f-g\|_2 + \|g-P\|_2 < \varepsilon.$
By the best approximation lemma it follows that
$\displaystyle \|f-S_Nf\|_2 \le \|f-P\|_2 < \varepsilon$
where $N$ is the degree of $P$. Hence the result follows.
$\Box$
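A numerical sketch of this theorem (an added illustration): for the bounded, discontinuous square wave $f(x)=\operatorname{sign}(x)$ the partial sums exhibit the Gibbs phenomenon and do not converge uniformly, yet the $L_2$ error still tends to zero.

```python
import numpy as np

# L2 error of the partial sums S_N f for the square wave.
x = np.linspace(-np.pi, np.pi, 4096, endpoint=False)
f = np.sign(x)

def l2_error(N):
    coeff = {k: np.mean(f * np.exp(-1j * k * x)) for k in range(-N, N + 1)}
    S = sum(coeff[k] * np.exp(1j * k * x) for k in range(-N, N + 1)).real
    return np.sqrt(2 * np.pi * np.mean((f - S) ** 2))

errs = [l2_error(N) for N in (4, 16, 64)]   # decreasing towards 0
```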
For more information see for example E.M. Stein and R. Shakarchi, Princeton lectures in analysis I, Fourier analysis. Princeton University Press, Princeton, 2003. See also the Riesz-Fischer theorem and the Dini test. | 2018-02-21T16:57:52 | {
"domain": "wordpress.com",
"url": "https://quasirandomideas.wordpress.com/2010/04/22/math2111-chapter-1-fourier-series-additional-material-l_2-convergence-of-fourier-series/",
"openwebmath_score": 0.976305365562439,
"openwebmath_perplexity": 98.20205888890122,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9867771782476948,
"lm_q2_score": 0.66192288918838,
"lm_q1q2_score": 0.6531704008108713
} |
https://proofwiki.org/wiki/Planes_are_Subspaces_of_Space | # Planes are Subspaces of Space
## Theorem
The two-dimensional subspaces of $\R^3$ are precisely the homogeneous planes of solid analytic geometry.
## Proof
Follows directly from Equivalent Statements for Vector Subspace Dimension One Less.
$\blacksquare$ | 2022-06-25T20:09:43 | {
"domain": "proofwiki.org",
"url": "https://proofwiki.org/wiki/Planes_are_Subspaces_of_Space",
"openwebmath_score": 0.7893891930580139,
"openwebmath_perplexity": 3211.256393099533,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9867771778588348,
"lm_q2_score": 0.66192288918838,
"lm_q1q2_score": 0.6531704005534759
} |
http://math.stackexchange.com/questions/422598/average-number-of-heads-in-filtered-coin-toss?answertab=votes | Average number of heads in filtered coin toss
I have a coin that, when tossed, produces heads with probability $p \geq 0.5$ and tails with probability $1-p$.
I start a coin-tossing experiment. Whenever I get more than one tail in a row, I discard the second tail and toss again, so that my results look like one long chain of heads with the occasional tail sprinkled in. In the long run, does the overall percentage of heads converge to 100%? How long does my result list have to be to guarantee at least $x$% heads? What if I only discard the last toss if I get more than $k$ tails in a row?
(This question comes from thinking about the caching function of my streaming music library. Please excuse a first-year undergrad's background knowledge; if this has been done a million times before I would appreciate a link to the general subject)
-
The resulting output is a Markov chain on the state space $\{h,t\}$ with transitions $p(t,h)=1$, $p(t,t)=0$, $p(h,h)=p$ and $p(h,t)=1-p$. The overall percentage of heads converges to the stationary measure of $h$. The stationary distribution $\pi$ solves $\pi(t)=p(t,t)\pi(t)+p(h,t)\pi(h)$, that is, $\pi(t)=(1-p)\pi(h)$.
Hence $\pi(h)=\dfrac1{2-p}$ and, in particular, $\pi(h)\ne1$ for every $p\ne1$.
If the strategy is to discard the last toss if one gets more than $k$ tails in a row, the resulting output is a Markov chain with memory $k$, or equivalently, a Markov chain on the state space $\{h,t\}^k$, and the same technique applies.
An alternative is to consider the length of the run of consecutive tails ending at $n$. This is again a Markov chain, this time on the state space $\{0,1,\ldots,k\}$. The transitions are $p(i,i+1)=1-p$ and $p(i,0)=p$ for every $i\leqslant k-1$, and $p(k,0)=1$. The stationary distribution $\pi_k$ solves the system $\pi_k(0)=p\pi_k(0)+\cdots+p\pi_k(k-1)+\pi_k(k)$ and $\pi_k(i)=(1-p)\pi_k(i-1)$ for $1\leqslant i\leqslant k$. Hence $\pi_k(i)=(1-p)^i\pi_k(0)$ for every state $i$, and $\pi_k(0)\cdot\sum\limits_{i=0}^k(1-p)^i=1$.
Finally, for each $k\geqslant1$, the overall percentage of heads converges to $\pi_k(0)=\dfrac{p}{1-(1-p)^{k+1}}.$
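A Monte Carlo check of this formula (an added illustration, not part of the original answer; the parameter choices $p=0.6$, $k=2$ are arbitrary):

```python
import random

# Simulate the output chain: at tail-run length k the next recorded
# symbol is necessarily heads, since any further tail is discarded.
def heads_fraction(p, k, n_steps=200_000, seed=1):
    rng = random.Random(seed)
    run = heads = 0
    for _ in range(n_steps):
        if run == k or rng.random() < p:
            heads += 1
            run = 0
        else:
            run += 1
    return heads / n_steps

p, k = 0.6, 2
expected = p / (1 - (1 - p) ** (k + 1))   # the stationary value above
observed = heads_fraction(p, k)
```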
- | 2016-02-07T17:46:27 | {
"domain": "stackexchange.com",
"url": "http://math.stackexchange.com/questions/422598/average-number-of-heads-in-filtered-coin-toss?answertab=votes",
"openwebmath_score": 0.9537276029586792,
"openwebmath_perplexity": 72.85193178130476,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9867771778588347,
"lm_q2_score": 0.66192288918838,
"lm_q1q2_score": 0.6531704005534759
} |
http://math.stackexchange.com/questions/267419/problem-on-two-ideals-and-their-quotient | # Problem on two ideals and their quotient
For two ideals $I$ and $J$ in a commutative ring $R$, define $I : J = \{a\in R : aJ \subset I\}$. In the ring $\mathbb{Z}$ of all integers, if $I = 12\mathbb {Z}$ and $J = 8\mathbb {Z}$, find $I : J$.
How should I solve this problem? Can anyone help me please? Thanks for your time.
-
Start by solving the relation $a (8 \mathbb{Z}) \subseteq (12 \mathbb{Z})$ for $a$. Find an equivalent condition on $a$ if this one is too strange to solve. P.S. this is usually called the "colon ideal" or sometimes "ideal quotient": index is something different. – Hurkyl Dec 30 '12 at 3:05
@gumti $I:J$ is called the quotient ideal not index. – user26857 Dec 30 '12 at 10:34
Which integers $a$ have the property that $a\cdot 8$ is a multiple of $12$?
If $a\cdot 8$ is a multiple of $12$, what can you say about $a\cdot x$ where $x$ is any multiple of $8$?
What does that tell you about the ideal $aJ$, where $J=(8)=\{\text{multiples of }8\}$? | 2015-07-30T18:12:54 | {
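Following the hints, the answer can also be brute-forced in a finite window (an added check, not part of the original thread): $a\cdot 8$ is a multiple of $12$ exactly when $3 \mid a$, so $I : J = 3\mathbb{Z}$.

```python
# a*8k lies in 12Z for every k as soon as a*8 does, so it suffices to
# test the single condition a*8 = 0 (mod 12) on a window of integers.
window = range(-50, 51)
quotient = [a for a in window if (a * 8) % 12 == 0]
multiples_of_3 = [a for a in window if a % 3 == 0]
```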
"domain": "stackexchange.com",
"url": "http://math.stackexchange.com/questions/267419/problem-on-two-ideals-and-their-quotient",
"openwebmath_score": 0.8056981563568115,
"openwebmath_perplexity": 144.47072976232036,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES\n\n",
"lm_q1_score": 0.9867771776644048,
"lm_q2_score": 0.66192288918838,
"lm_q1q2_score": 0.6531704004247783
} |
https://mathoverflow.net/questions/260322/the-mathfrak-l-functor-on-textsfprof | # The $\mathfrak L$ functor on $\textsf{Prof}$
$\def\L{\mathfrak{L}}\def\Prof{\mathsf{Prof}}$ Recall that Isbell duality $\text{Spec}\dashv {\cal O} : {\cal V}^{A^°} \leftrightarrows \big({\cal V}^A\big)^°$ allows us to define the functor $$\L : \Prof(A,B) \to \Prof(B,A)$$ ($\cal V$ is a cosmos in which $A,B$ are enriched categories) sending $K : A^° \times B \to \cal V$ into $\L(K) : (b,a)\mapsto {\cal V}^{A^°}(K(-,b), \hom(-,a)) = {\cal O}(K_b)_a$.
Is it true that $\L(H \circ K) \cong \L(K)\circ \L(H)$, maybe under some additional assumptions on $H,K$? Or maybe in general, but that's not a formal proof.
Is there any relation between $\text{Ran}_FK$, as a profunctor $C^° \times B\to \cal V$ and the composition $K\circ \L(F)$?
How does this construction interact with the "tautological dualiser" of $\Prof$ that sends $A \mapsto A^°$?
Is there any reference that encompasses all, or almost all, these questions?
This is the right Kan extension of $\hom_A: A \nrightarrow A$ along $K: A \nrightarrow B$ in the bicategory of profunctors. Which is to say that for every profunctor (aka bimodule) $L: B \nrightarrow A$ there is a natural bijection between morphisms $LK \to \hom_A$ and morphisms $L \to \mathcal{L}(K)$. Since $\hom_A$ is a unit $1_A$ in this bicategory, it may be more suggestive to write $\mathcal{L}(K) = Ran_K 1_A$.
Thus for $H: A \nrightarrow B$ and $K: B \nrightarrow C$, we have a canonical map $\mathcal{L}(K)K \to 1_B$, and whiskering on the right by $H$ and on the left by $\mathcal{L}(H)$, we get a composite
$$\mathcal{L}(H)\mathcal{L}(K)KH \to \mathcal{L}(H)1_B H \cong \mathcal{L}(H)H \to 1_A.$$
By the universal property of $\mathcal{L}(KH)$, we now obtain a canonical map $\mathcal{L}(H)\mathcal{L}(K) \to \mathcal{L}(KH)$.
But this map is typically not an isomorphism. The simplest type of example that comes to mind is just the classical case of bimodules over rings, where for a right-$B$ left-$C$ bimodule $K$, we have $\mathcal{L}(K) = \hom_A(K, B)$ regarded as a right-$C$ left-$B$ bimodule. We have in this situation a canonical map
$$\hom_A(H, A) \otimes_B \hom_B(K, B) \to \hom_A(K \otimes_B H, A)$$
but normally this won't be an isomorphism. Indeed, even in the humble case of vector spaces over a ground field $k$, the canonical map $V^\ast \otimes_k W^\ast \to (V \otimes W)^\ast$ isn't generally an isomorphism. Of course we do get an isomorphism here in some special cases, such as if $K$ is finitely generated projective over $B$. More abstractly, this is the situation where $K$ has a left adjoint bimodule, and here we may recall that in a 2-category or bicategory, if an arrow $K: B \nrightarrow C$ has a left adjoint $L$, then it is necessarily $L = Ran_K 1_B$, i.e., $L = \mathcal{L}(K)$ in our situation. In that case, the asserted inverse $\mathcal{L}(KH) \to \mathcal{L}(H)\mathcal{L}(K)$ of the canonical map is mated (by the adjunction $\mathcal{L}(K) \dashv K$) to an arrow $\mathcal{L}(KH)K \to \mathcal{L}(H)$, which in turn is mated to the canonical arrow $\mathcal{L}(KH)KH \to 1_A$ using the definition of right Kan extension.
[For the bicategory of profunctors or bimodules, such right adjoints $K$ are induced by functors $F: C \to \bar{B}$ where $\bar{B}$ is the Cauchy completion of $B$ (which we may think of as analogous to the category of finitely generated projective $B$-modules); specifically, $K(b, c) = \bar{B}(b, Fc)$ and its left adjoint is given by $\mathcal{L}(K)(c, b) = \bar{B}(Fc, b)$.]
By similar reasoning as above, one may calculate that if $K: B \nrightarrow C$ has a left adjoint $L$ (in a bicategory $\mathbf{B}$), then for every $F: B \nrightarrow A$ we have an isomorphism $Ran_F(K) \cong K \circ Ran_F(1_B)$, assuming these right Kan extensions exist. This is purely formal of course: for every $H: A \nrightarrow C$ we have isomorphisms
$$\mathbf{B}(A, C)(H, Ran_F(K)) \cong \mathbf{B}(B, C)(HF, K) \cong \mathbf{B}(B, B)(LHF, 1_B) \cong \mathbf{B}(A, B)(LH, Ran_F(1_B)) \cong \mathbf{B}(A, C)(H, K Ran_F(1_B)).$$
I'm not sure what could be said about interaction with the "tautological dualizer" (where now are viewing $\textbf{Prof}$ as a compact closed bicategory): essentially all of the above has to do with the bicategory structure, not the monoidal bicategory structure.
• That's what I call an answer! – Fosco Jan 24 '17 at 8:06
• Thanks! I'm not sure where in the literature this type of thing would be, but I'm sure it's there somewhere. – Todd Trimble Jan 24 '17 at 14:27
• Any reference exhibiting a feeble relation with this construction is welcome! This definition is part of a particularly "concrete" (provided "concrete" is the right adjective here :-) ) construction when $\cal V =$chain complexes or, better, a simplicially co/tensored model category. – Fosco Jan 24 '17 at 14:51
• On a separate note, I'm able to "blindly" prove that $L(H \diamond K)^a_c \cong Nat(H^c, L(K)^a) = Ran_H L(K)^a_c$ as a right Kan extension in $\bf Prof$. Your explanation trivializes this result. – Fosco Jan 24 '17 at 14:53 | 2020-10-01T20:23:33 | {
"domain": "mathoverflow.net",
"url": "https://mathoverflow.net/questions/260322/the-mathfrak-l-functor-on-textsfprof",
"openwebmath_score": 0.9708478450775146,
"openwebmath_perplexity": 231.6987091853297,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9867771774699747,
"lm_q2_score": 0.66192288918838,
"lm_q1q2_score": 0.6531704002960805
} |
https://www.gradesaver.com/textbooks/math/calculus/calculus-8th-edition/chapter-11-infinite-sequences-and-series-11-1-sequences-11-1-exercises-page-745/80 | ## Calculus 8th Edition
a) The sequence is increasing and has an upper bound. b) $\lim\limits_{n \to \infty}a_{n} = 2$
a) $a_{1}= \sqrt{2}$, $a_{n+1} = \sqrt{2+a_{n}}$. For $n=1$: $a_{2} = \sqrt{2+a_{1}} = \sqrt{2+\sqrt{2}}$, and since $\sqrt{2+\sqrt{2}} \gt \sqrt{2}$ we get $a_{2} \gt a_{1}$, which suggests that the sequence is increasing. Assume this holds for $n=k$, so $a_{k+1} \gt a_{k}$. Then $2 + a_{k+1} \gt 2+a_{k}$, hence $\sqrt{2+a_{k+1}} \gt \sqrt{2+a_{k}}$, that is, $a_{k+2} \gt a_{k+1}$. By induction, $a_{n+1} \gt a_{n}$ for all $n$, so the sequence {$a_{n}$} is increasing. Numerically: $a_{1} = \sqrt{2} \approx 1.4142$, $a_{2} = \sqrt{2+\sqrt{2}} \approx 1.84776$, $a_{3} = \sqrt{2+a_{2}} \approx 1.96157$, $a_{4} \approx 1.990$, $a_{5} \approx 1.9976$, $a_{6} \approx 1.999$, $a_{10} \approx 1.9999$; the terms approach $2$. In fact $a_{n} \lt 2$ for all $n$: $a_{1}=\sqrt{2} \lt 2$, and if $a_{k} \lt 2$ then $a_{k+1} = \sqrt{2+a_{k}} \lt \sqrt{4} = 2$. Since $2 \lt 3$, we have $a_{n} \lt 3$ for all $n \geq 1$, so $3$ is an upper bound.
b) Since {$a_{n}$} is increasing and bounded above (by $3$), {$a_{n}$} must have a limit; call it $L$, so $\lim\limits_{n \to \infty}{a_{n}} = L$ exists. Then $\lim\limits_{n \to \infty} a_{n+1} = \lim\limits_{n \to \infty} \sqrt{2+a_{n}} = \sqrt{2+\lim\limits_{n \to \infty}a_{n}} = \sqrt{2+L}$. Since $a_{n} \to L$, also $a_{n+1} \to L$ (as $n \to \infty$), so $L = \sqrt{2+L}$, $L^{2} = 2+L$, $L^{2}-L-2=0$, $(L-2)(L+1)=0$, giving $L=-1$ or $L=2$. Since $a_{1} = \sqrt{2} \gt 0$ and $a_{n}$ is increasing, $L \ne -1$, and thus $\lim\limits_{n \to \infty}a_{n} = 2$.
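A short numerical check of both parts (an added illustration): the iterates increase, stay below $2$, and approach the limit $L=2$.

```python
import math

# Iterate a_1 = sqrt(2), a_{n+1} = sqrt(2 + a_n).
a = math.sqrt(2)
terms = [a]
for _ in range(20):
    a = math.sqrt(2 + a)
    terms.append(a)

increasing = all(s < t for s, t in zip(terms, terms[1:]))
bounded = all(t < 2 for t in terms)
gap = 2 - terms[-1]            # distance to the claimed limit
```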
"domain": "gradesaver.com",
"url": "https://www.gradesaver.com/textbooks/math/calculus/calculus-8th-edition/chapter-11-infinite-sequences-and-series-11-1-sequences-11-1-exercises-page-745/80",
"openwebmath_score": 0.9934211373329163,
"openwebmath_perplexity": 59.17286879753058,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9867771774699747,
"lm_q2_score": 0.66192288918838,
"lm_q1q2_score": 0.6531704002960805
} |
https://en.wikipedia.org/wiki/Stolz-Ces%C3%A0ro_theorem | # Stolz–Cesàro theorem
(Redirected from Stolz-Cesàro theorem)
In mathematics, the Stolz–Cesàro theorem is a criterion for proving the convergence of a sequence. The theorem is named after mathematicians Otto Stolz and Ernesto Cesàro, who stated and proved it for the first time.
The Stolz–Cesàro theorem can be viewed as a generalization of the Cesàro mean, but also as a l'Hôpital's rule for sequences.
## Statement of the Theorem (the ∙/∞ case)
Let ${\displaystyle (a_{n})_{n\geq 1}}$ and ${\displaystyle (b_{n})_{n\geq 1}}$ be two sequences of real numbers. Assume that ${\displaystyle (b_{n})_{n\geq 1}}$ is a strictly monotone and divergent sequence (i.e. strictly increasing and approaching ${\displaystyle +\infty }$ or strictly decreasing and approaching ${\displaystyle -\infty }$) and that the following limit exists:
${\displaystyle \lim _{n\to \infty }{\frac {a_{n+1}-a_{n}}{b_{n+1}-b_{n}}}=\ell .\ }$
Then, the limit
${\displaystyle \lim _{n\to \infty }{\frac {a_{n}}{b_{n}}}\ }$
also exists and it is equal to ${\displaystyle \ell }$.
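A numerical illustration (an addition, with an arbitrary example): taking $a_n = 1 + \tfrac{1}{2} + \cdots + \tfrac{1}{n}$ and $b_n = \log n$, the difference quotient tends to $1$, and accordingly $a_n/b_n \to 1$, i.e. the harmonic numbers grow like $\log n$.

```python
import math

# Difference quotient versus the plain quotient for the harmonic sum.
n = 200_000
a_n = sum(1.0 / k for k in range(1, n + 1))
ratio = a_n / math.log(n)                                       # -> 1 slowly
step_ratio = (1.0 / (n + 1)) / (math.log(n + 1) - math.log(n))  # -> 1 fast
```

Note how much closer the difference quotient already is to the limit, which is exactly why the theorem is useful.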
## History
The ∞/∞ case is stated and proved on pages 173—175 of Stolz's 1885 book S and also on page 54 of Cesàro's 1888 article C.
It appears as Problem 70 in Pólya and Szegő.
## The General Form
The general form of the Stolz–Cesàro theorem is the following:[1] If ${\displaystyle (a_{n})_{n\geq 1}}$ and ${\displaystyle (b_{n})_{n\geq 1}}$ are two sequences such that ${\displaystyle (b_{n})_{n\geq 1}}$ is monotone and unbounded, then:
${\displaystyle \liminf _{n\to \infty }{\frac {a_{n+1}-a_{n}}{b_{n+1}-b_{n}}}\leq \liminf _{n\to \infty }{\frac {a_{n}}{b_{n}}}\leq \limsup _{n\to \infty }{\frac {a_{n}}{b_{n}}}\leq \limsup _{n\to \infty }{\frac {a_{n+1}-a_{n}}{b_{n+1}-b_{n}}}.}$ | 2017-03-29T02:04:46 | {
"domain": "wikipedia.org",
"url": "https://en.wikipedia.org/wiki/Stolz-Ces%C3%A0ro_theorem",
"openwebmath_score": 0.965880811214447,
"openwebmath_perplexity": 358.6059311901543,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9867771770811146,
"lm_q2_score": 0.66192288918838,
"lm_q1q2_score": 0.6531704000386851
} |
http://www.martinorr.name/blog/2012/04/25/the-masser-wustholz-isogeny-theorem/ | # Martin's Blog
## The Masser-Wüstholz isogeny theorem
Posted by Martin Orr on Wednesday, 25 April 2012 at 14:09
Let and be two isogenous abelian varieties over a number field . Can we be sure that there is an isogeny between them of small degree, where "small" is an explicit function of and ? In particular, our bound should not depend on ; this means that the bound will imply Finiteness Theorem I, and hence the Shafarevich, Tate and Mordell conjectures.
The Masser-Wüstholz isogeny theorem answers this question, at least subject to a minor condition on polarisations (I think that this was removed in a later paper of Masser and Wüstholz but it is not too important anyway -- when deducing Finiteness Theorem I you can remove the polarisation issue with Zarhin's Trick).
Theorem. (Masser, Wüstholz 1993) Let and be principally polarised abelian varieties over a number field . Suppose that there exists some isogeny . Then there is an isogeny of degree at most where and are constants depending only on the dimension of .
We will prove this using the Masser-Wüstholz period theorem which I discussed last time.
### Proof of the isogeny theorem
Recall from last time that in order to obtain an isogeny of bounded degree between a subvariety of and a subvariety of , it suffices to have a non-split subvariety of bounded degree in . The central idea of the proof is that we will use the period theorem, along with the assumption that and are isogenous, to construct such a non-split subvariety.
Actually we will construct a non-split subvariety of , of degree say. Once we have done this, it is easy to finish the proof: by the lemma from last time, there are non-zero subvarieties and and an isogeny of degree at most .
If we suppose that is simple, then we must have and at least one of the projections is an isogeny. Because the degree of is bounded, so is the degree of and hence is the desired isogeny of bounded degree.
If is not simple, then we can do something similar to get an isogeny where and of bounded degree; then we quotient out by and and induct.
### Finding a non-split subvariety
It remains to show that for any isogenous abelian varieties and , there is a non-split subvariety with degree bounded polynomially by , and .
Choose a non-zero period and a basis for . Let be the period of .
Let be the smallest abelian subvariety of whose tangent space contains . Now the period theorem gives a bound for the degree of , and the assumption that is isogenous to implies that is non-split.
To prove the latter, let be any isogeny . There are integers such that Then must be contained in Any subvariety of with a non-trivial projection to is non-split.
(Observe that the degree of may be really big, and the degree of is related to the degree of , so the degree of may be big. But this is OK because we are not using to say anything about the degree of - that comes from the period theorem.)
### Bounding the degree of the subvariety
The period theorem tells us that where and are principal polarisations of and , and is the Hermitian form associated with the polarisation of .
We want to remove from this bound, by bounding it in terms of the other quantities. Let us write This is a norm on the real vector space .
We have so it will suffice to show that we can choose and with bounded lengths.
The only condition that we have put on is that it is a period i.e. in the lattice . A fundamental domain for this lattice has volume 1 with respect to our chosen metric (this is equivalent to the polarisation being principal) and so Minkowski's theorem gives an upper bound for the length of the smallest period, depending only on the dimension :
With regard to , they must be a basis for . Again this lattice has covolume 1 and so by Minkowski's theorem, there is a basis such that for a constant depending only on .
An upper bound for the product does not imply an upper bound for the individual lengths (or for ) but we can deduce such a bound if we also have a lower bound for the . The following such bound can be deduced from a refinement of Masser's Matrix Lemma.
Lemma. Let be a principally polarised abelian variety defined over a number field . There is a constant depending only on such that every non-zero period of satisfies
### Finishing the proof
Combining the above, we get that the isogeny of least degree satisfies (The constants and are different in different inequalities in this post.)
However this bound depends on , which we wanted to avoid. We use a basic fact about the Faltings height: if there is an isogeny , then So we get that It might seem silly that we want to bound and we have just introduced it on the right hand side, but it is only a polynomial in so with a bit of rearrangement we can absorb it into the constant.
1. Barinder Banwait said on Friday, 27 April 2012 at 18:55 :
1. "Any subvariety of with a non-trivial projection to is non-split". Why is this? Are you using that is simple?
2. Right at the end, "So we get that...": Are you claiming that ?
2. Martin Orr said on Wednesday, 02 May 2012 at 08:53 :
1. No. I use that so any split subvariety contained in is in fact contained in . Since is finite, the subvariety is contained in i.e. has trivial projection to .
2. Again no. I confusingly left out some steps. We use the fact that so then we can hide the constants in the big constant.
# Understanding the definition of the direct sum of subspaces of a vector space
I have a question regarding the definition of direct sum of a vector space in relation to subspaces.
Definition: A vector space $V$ is called the direct sum of $W_1$ and $W_2$ if $W_1$ and $W_2$ are subspaces of $V$ such that $W_1\cap W_2 = \{0\}$ and $W_1 + W_2 = V$. We denote that $V$ is the direct sum of $W_1$ and $W_2$ by writing $V = W_1\oplus W_2$.
Is this definition saying that any vector in $V$ can be written as a linear combination of the vectors in the set $W_1 + W_2$?
Thanks!
The definition is saying that any vector $v \in V$ can be written as $v = w_1 + w_2$ where $w_1 \in W_1$ and $w_2 \in W_2$ (this is the condition $W_1 + W_2 = V$), and this decomposition is unique (this follows from the condition $W_1\cap W_2 = \{0\}$). You don't need to take linear combinations.
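To make the unique decomposition concrete, here is a small Python sketch (an added illustration; the particular subspaces are arbitrary choices): in $\mathbb{R}^2$ take $W_1 = \operatorname{span}\{(1,0)\}$ and $W_2 = \operatorname{span}\{(1,1)\}$. Then $W_1 \cap W_2 = \{0\}$ and $W_1 + W_2 = \mathbb{R}^2$, so every vector splits uniquely.

```python
# W1 = span{(1, 0)}, W2 = span{(1, 1)} inside R^2.
# Any v = (a, b) decomposes uniquely as v = w1 + w2 with
# w2 = (b, b) in W2 (forced by the second coordinate) and
# w1 = (a - b, 0) in W1.

def decompose(v):
    a, b = v
    w2 = (b, b)          # W2 component: both coordinates equal
    w1 = (a - b, 0)      # W1 component: whatever is left over
    return w1, w2

v = (5.0, 2.0)
w1, w2 = decompose(v)
assert (w1[0] + w2[0], w1[1] + w2[1]) == v   # v = w1 + w2
assert w1[1] == 0                            # w1 lies in W1
assert w2[0] == w2[1]                        # w2 lies in W2
```

Note how the second coordinate of $v$ forces the $W_2$ component, which is exactly the uniqueness coming from $W_1 \cap W_2 = \{0\}$.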
Consider two random complex numbers
z = u₁ + iv₁ and
w = u₂ + iv₂,
where u₁, v₁, u₂, v₂ are independent standard normal random variables (N(0,1)).
Then what is the probability distribution of the absolute value of the product |zw|?
Some empirical investigation (simulation) shows a density that rises from zero to a single peak and then decays (the original post includes a histogram, not reproduced here).
Turns out an ancient paper(*) has the answer.
If z = u₁ + iv₁ and w = u₂ + iv₂, where u₁, u₂, v₁, v₂ ~ N(0,1) (and independent), then the probability density of
r := |wz|
is given by
rK₀(r),
where K₀ denotes the modified Bessel function of the second kind with order 0.
(*) Wells, Anderson, Cell (1962) "The Distribution of the Product of Two Central or Non-Central Chi-Square Variates"
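One way to spot-check the $rK_0(r)$ density without special-function code (an added sketch, not from the post) is through the first moment: $|z|$ and $|w|$ are independent Rayleigh(1) variables, so $E|zw| = E|z|\,E|w| = \big(\sqrt{\pi/2}\big)^2 = \pi/2$, which matches $\int_0^\infty r \cdot rK_0(r)\,dr = \pi/2$.

```python
import math
import random

random.seed(0)

N = 200_000
total = 0.0
for _ in range(N):
    z = complex(random.gauss(0, 1), random.gauss(0, 1))
    w = complex(random.gauss(0, 1), random.gauss(0, 1))
    total += abs(z * w)

mean = total / N
# E|zw| = E|z| * E|w| = pi/2, since |z|, |w| are independent
# Rayleigh(1) variables; this equals the first moment of r*K0(r).
assert abs(mean - math.pi / 2) < 0.02
```

The tolerance is generous: the standard error of the simulated mean at this sample size is roughly 0.003.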
Lemma 10.15.3. Let $R$ be a ring. Let $x \in R$, $I \subset R$ an ideal, and $\mathfrak p_ i$, $i = 1, \ldots , r$ be prime ideals. Suppose that $x + I \not\subset \mathfrak p_ i$ for $i = 1, \ldots , r$. Then there exists an $y \in I$ such that $x + y \not\in \mathfrak p_ i$ for all $i$.
Proof. We may assume there are no inclusions among the $\mathfrak p_ i$. After reordering we may assume $x \not\in \mathfrak p_ i$ for $i < s$ and $x \in \mathfrak p_ i$ for $i \geq s$. If $s = r + 1$ then we are done. If not, then we can find $y \in I$ with $y \not\in \mathfrak p_ s$. Choose $f \in \bigcap _{i < s} \mathfrak p_ i$ with $f \not\in \mathfrak p_ s$. Then $x + fy$ is not contained in $\mathfrak p_1, \ldots , \mathfrak p_ s$. Thus we win by induction on $s$. $\square$
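As a toy illustration (not part of the Stacks Project text), one can specialize to $R = \mathbf{Z}$ with $\mathfrak p_i = (p_i)$: if the coset $x + I$ escapes each $(p_i)$ individually, the lemma promises a single $y \in I$ with $x + y$ avoiding all of them at once. A small search exhibits such a witness.

```python
# R = Z, I = (m), primes p_i.  Search for y in I = mZ with x + y
# not divisible by any p_i (the lemma guarantees one exists when
# the coset x + I escapes each (p_i) individually).
def avoid(x, m, primes, search=200):
    for k in range(-search, search + 1):
        y = k * m
        if all((x + y) % p != 0 for p in primes):
            return y
    raise ValueError("no witness found in search range")

x, m = 10, 3            # x = 10, I = (3); note 10 lies in (2) and in (5)
primes = [2, 5]         # but 10 + 3 = 13 escapes both, so x + I escapes each
y = avoid(x, m, primes)
assert y % m == 0
assert all((x + y) % p != 0 for p in primes)
```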
# Quadrilateral

- Edges and vertices: 4
- Schläfli symbol: {4} (for square)
- Area: various methods; see below
- Internal angle (degrees): 90° (for square)
In Euclidean plane geometry, a quadrilateral is a polygon with four sides (or 'edges') and four vertices or corners. Sometimes, the term quadrangle is used, by analogy with triangle, and sometimes tetragon for consistency with pentagon (5-sided), hexagon (6-sided) and so on. The word quadrilateral is made of the words quad (meaning "four") and lateral (meaning "of sides").
The origin of the word quadrilateral is from the two Latin words "quadri", a variant of four, and "latus" meaning side.
Quadrilaterals are simple (not self-intersecting) or complex (self-intersecting), also called crossed. Simple quadrilaterals are either convex or concave.
The interior angles of a simple quadrilateral add up to 360 degrees of arc. This is a special case of the n-gon interior angle sum formula (n - 2)×180°. In a crossed quadrilateral, the interior angles on either side of the crossing add up to 720°.[1]
All convex quadrilaterals tile the plane by repeated rotation around the midpoints of their edges.
A parallelogram is a quadrilateral with two pairs of parallel sides. Equivalent conditions are that opposite sides are of equal length; that opposite angles are equal; or that the diagonals bisect each other. Parallelograms also include the square, rectangle, rhombus and rhomboid.
• Rhombus or rhomb: all four sides are of equal length. Equivalent conditions are that opposite sides are parallel and opposite angles are equal, or that the diagonals perpendicularly bisect each other. An informal description is "a pushed-over square" (including a square).
• Rhomboid: a parallelogram in which adjacent sides are of unequal lengths and angles are oblique (not right angles). Informally: "a pushed-over rectangle with no right angles."
• Rectangle: all four angles are right angles. An equivalent condition is that the diagonals bisect each other and are equal in length. Informally: "a box or oblong" (including a square).
• Square (regular quadrilateral): all four sides are of equal length (equilateral), and all four angles are right angles. An equivalent condition is that opposite sides are parallel (a square is a parallelogram), that the diagonals perpendicularly bisect each other, and are of equal length. A quadrilateral is a square if and only if it is both a rhombus and a rectangle (four equal sides and four equal angles).
• Oblong: a term sometimes used to denote a rectangle which has unequal adjacent sides (i.e. a rectangle that is not a square).
• Kite: two pairs of adjacent sides are of equal length. This implies that one diagonal divides the kite into congruent triangles, and so the angles between the two pairs of equal sides are equal in measure. It also implies that the diagonals are perpendicular. (It is common, especially in the discussions on plane tessellations, to refer to the concave quadrilateral with these properties as a dart or arrowhead, with term kite being restricted to the convex shape.)
• Orthodiagonal quadrilateral: the diagonals cross at right angles.
• Trapezium (British English) or trapezoid (American English): one pair of opposite sides are parallel.
• Isosceles trapezium (Brit.) or isosceles trapezoid (NAm.): one pair of opposite sides are parallel and the base angles are equal in measure. This implies that the other two sides are of equal length, and that the diagonals are of equal length. An alternative definition is: "a quadrilateral with an axis of symmetry bisecting one pair of opposite sides".
• Trapezium (NAm.): no sides are parallel. (In British English this would be called an irregular quadrilateral, and was once called a trapezoid.)
• Cyclic quadrilateral: the four vertices lie on a circumscribed circle. A quadrilateral is cyclic if and only if opposite angles sum to 180°.
• Tangential quadrilateral: the four edges are tangential to an inscribed circle. Another term for a tangential polygon is inscriptible.
• Bicentric quadrilateral: both cyclic and tangential.
• Ex-tangential quadrilateral: the four extensions of the sides are tangent to an excircle.
• A geometric chevron (dart or arrowhead) is a concave quadrilateral with bilateral symmetry like a kite, but one interior angle is reflex.
• A non-planar quadrilateral is called a skew quadrilateral. Formulas to compute its dihedral angles from the edge lengths and the angle between two adjacent edges were derived for work on the properties of molecules such as cyclobutane that contain a "puckered" ring of four atoms.[2] See skew polygon for more.
## Area of a convex quadrilateral
There are various general formulas for the area K of a convex quadrilateral.
The area of a quadrilateral ABCD can be calculated using vectors. Let vectors AC and BD form the diagonals from A to C and from B to D. The area of the quadrilateral is then
$K = \frac{1}{2} |\overrightarrow{AC}\times\overrightarrow{BD}|,$
which is the magnitude of the cross product of vectors AC and BD. In two-dimensional Euclidean space, expressing vector AC as a free vector in Cartesian space equal to (x1,y1) and BD as (x2,y2), this can be rewritten as:
$K = \frac{1}{2} |x_1 y_2 - x_2 y_1|.$
The area can be expressed in trigonometric terms as
$K = \frac{1}{2} pq \cdot \sin \theta,$
where the lengths of the diagonals are p and q and the angle between them is θ.[3] In the case of an orthodiagonal quadrilateral e.g. rhombus, square, and kite, this formula reduces to $\tfrac{1}{2}pq$ since θ is 90°.
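As a quick numeric sanity check (an added sketch, not part of the original article), the diagonal-based formulas above can be compared with the standard shoelace formula on a concrete convex quadrilateral:

```python
import math

def shoelace(pts):
    """Standard polygon area for vertices given in order."""
    n = len(pts)
    s = 0.0
    for i in range(n):
        x1, y1 = pts[i]
        x2, y2 = pts[(i + 1) % n]
        s += x1 * y2 - x2 * y1
    return abs(s) / 2

def area_from_diagonals(A, B, C, D):
    """K = |AC x BD| / 2 for a convex quadrilateral ABCD."""
    ax, ay = C[0] - A[0], C[1] - A[1]   # diagonal AC
    bx, by = D[0] - B[0], D[1] - B[1]   # diagonal BD
    return abs(ax * by - bx * ay) / 2

quad = [(0, 0), (4, 0), (5, 3), (1, 4)]   # a convex quadrilateral
K1 = shoelace(quad)
K2 = area_from_diagonals(*quad)
assert math.isclose(K1, K2)

# K = (1/2) p q sin(theta), with p, q the diagonal lengths and
# theta the angle between them, gives the same number.
ax, ay = 5 - 0, 3 - 0
bx, by = 1 - 4, 4 - 0
p = math.hypot(ax, ay)
q = math.hypot(bx, by)
theta = math.acos((ax * bx + ay * by) / (p * q))
assert math.isclose(K1, 0.5 * p * q * math.sin(theta))
```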
Bretschneider's formula[4] expresses the area in terms of the sides and angles:
\begin{align} K &= \sqrt{(s-a)(s-b)(s-c)(s-d) - \tfrac{1}{2} abcd \; [ 1 + \cos (\gamma + \lambda) ]} \\ &= \sqrt{(s-a)(s-b)(s-c)(s-d) - abcd \left[ \cos^2 \left( \tfrac{\gamma + \lambda}{2} \right) \right]} \\ \end{align}
where the sides in sequence are a,b,c,d, where $s=\tfrac{1}{2}(a+b+c+d)$ is the semiperimeter, and γ and λ are any two opposite angles. This reduces to Brahmagupta's formula for the area of a cyclic quadrilateral when γ + λ = 180°.
Another area formula in terms of the sides and angles, with γ being between sides b and c and λ being between sides a and d (adjacent sides belonged to the angles), is
$K = \frac{1}{2}bc \cdot \sin \gamma + \frac{1}{2}ad \cdot \sin \lambda.$
In the case of a cyclic quadrilateral, the latter formula becomes
$K = \frac{1}{2}(ad+bc)\sin \gamma.$
In a parallelogram, where both pairs of opposite sides and angles are equal, this formula reduces to $K=ab \cdot \sin \gamma.$
Next,[5] the following formula expresses the area in terms of the sides and diagonals:
\begin{align} K &= \sqrt{(s-a)(s-b)(s-c)(s-d) - \tfrac{1}{4}(ac+bd+pq)(ac+bd-pq)} \\ &= \frac{1}{4} \sqrt{4p^{2}q^{2}- \left( a^{2}+c^{2}-b^{2}-d^{2} \right) ^{2}}, \\ \end{align}
where p and q are the diagonals. Again, this reduces to Brahmagupta's formula in the cyclic quadrilateral case, since then pq = ac + bd.
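Both the side-and-diagonal formula and its cyclic specialization (Brahmagupta) are easy to spot-check (an added sketch, not part of the original article); a rectangle is cyclic, so both formulas should return its elementary area:

```python
import math

def brahmagupta(a, b, c, d):
    """Area of a cyclic quadrilateral with sides a, b, c, d."""
    s = (a + b + c + d) / 2
    return math.sqrt((s - a) * (s - b) * (s - c) * (s - d))

def area_sides_diagonals(a, b, c, d, p, q):
    """K = (1/4) sqrt(4 p^2 q^2 - (a^2 + c^2 - b^2 - d^2)^2)."""
    t = a * a + c * c - b * b - d * d
    return 0.25 * math.sqrt(4 * p * p * q * q - t * t)

# A 3-by-4 rectangle is cyclic: sides 3, 4, 3, 4 and diagonals 5, 5.
assert math.isclose(brahmagupta(3, 4, 3, 4), 12.0)
assert math.isclose(area_sides_diagonals(3, 4, 3, 4, 5, 5), 12.0)

# Unit square: sides 1, diagonals sqrt(2); both formulas give area 1.
assert math.isclose(brahmagupta(1, 1, 1, 1), 1.0)
assert math.isclose(area_sides_diagonals(1, 1, 1, 1, math.sqrt(2), math.sqrt(2)), 1.0)
```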
Alternatively, we can write the area in terms of the sides and the intersection angle θ of the diagonals, so long as this angle is not 90°:[6]
$K = \frac{|\tan \theta|}{4} \cdot \left| a^2 + c^2 - b^2 - d^2 \right|.$
In the case of a parallelogram, the latter formula becomes
$K = \frac{|\tan \theta|}{2} \cdot \left| a^2 - b^2 \right|.$
## Area inequalities
If a convex quadrilateral has the consecutive sides a, b, c, d and the diagonals p, q, then its area K satisfies[7]
$K\le \tfrac{1}{4}(a+c)(b+d)$ with equality only for a rectangle.
$K\le \tfrac{1}{4}(a^2+b^2+c^2+d^2)$ with equality only for a square.
$K\le \tfrac{1}{4}(p^2+q^2)$ with equality only if the diagonals are perpendicular and equal.
In any convex quadrilateral ABCD, the sum of the squares of the four sides is equal to the sum of the squares of the two diagonals plus four times the square of the line segment connecting the midpoints of the diagonals. Thus
$\displaystyle AB^2 + BC^2 + CD^2 + DA^2 = AC^2 + BD^2 + 4MN^2$
where M and N are the midpoints of the diagonals AC and BD.[8]:p.126 This is sometimes known as Euler's quadrilateral theorem and is a generalization of the parallelogram law.
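Euler's relation is in fact an algebraic identity in the four vertices, so it can be spot-checked on arbitrary (even non-convex) quadrilaterals (an added sketch, not part of the original article):

```python
import math
import random

random.seed(1)

def sq_dist(P, Q):
    return (P[0] - Q[0]) ** 2 + (P[1] - Q[1]) ** 2

def mid(P, Q):
    return ((P[0] + Q[0]) / 2, (P[1] + Q[1]) / 2)

for _ in range(100):
    A, B, C, D = [(random.uniform(-5, 5), random.uniform(-5, 5)) for _ in range(4)]
    M, N = mid(A, C), mid(B, D)   # midpoints of the diagonals
    lhs = sq_dist(A, B) + sq_dist(B, C) + sq_dist(C, D) + sq_dist(D, A)
    rhs = sq_dist(A, C) + sq_dist(B, D) + 4 * sq_dist(M, N)
    assert math.isclose(lhs, rhs)
```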
Euler also generalized Ptolemy's theorem, which is an equality in a cyclic quadrilateral, into an inequality for a convex quadrilateral. It states that
$AB \cdot CD + AD \cdot BC \ge AC \cdot BD$
where there is equality if and only if the quadrilateral is cyclic.[8]:p.128-129
• The length of the diagonal that is opposite to the adjacent sides a and b at angle θ is given by $\sqrt{ a^2 + b^2 - 2ab \cos \theta }$ which is derived from the law of cosines.
• The midpoints of the sides of a quadrilateral are the vertices of a parallelogram. The area of this inner parallelogram equals one-half the area of the outer quadrilateral. The perimeter of the inner parallelogram equals the sum of the diagonals of the outer quadrilateral.
• Let exterior squares be drawn on all sides of a quadrilateral. The segments connecting the centers of opposite squares are (a) equal in length, and (b) perpendicular.
• The line segment joining the midpoints of two opposite sides of any quadrilateral, the segment joining the midpoints of the other two sides, and the segment joining the midpoints of the diagonals are concurrent and are all bisected by their point of intersection.[8]:p.125
• The internal bisectors of the angles of a quadrilateral form a cyclic quadrilateral.[8]:p.127
• Among all quadrilaterals with a given perimeter, the one with the largest area is the square. This is called the isoperimetric theorem for quadrilaterals.
• For any simple quadrilateral with given edge lengths, there is a cyclic quadrilateral with the same edge lengths.[9]
• The quadrilateral with given side lengths that has the maximum area is the cyclic quadrilateral.[9]
## Special line segments
• The two diagonals of a convex quadrilateral are the line segments that connect opposite vertices.
• The two bimedians of a convex quadrilateral are the line segments that connect the midpoints of opposite sides.[10] They intersect at the centroid of the quadrilateral.
• The four maltitudes of a convex quadrilateral are the perpendiculars to a side through the midpoint of the opposite side.[11]
• The eight tangent lengths of a tangential quadrilateral are the line segments from a vertex to the points where the incircle is tangent to the sides. From each vertex there are two congruent tangent lengths.
• The two tangency chords of a tangential quadrilateral are the line segments that connect points on opposite sides where the incircle is tangent to these sides. These are also the diagonals of the contact quadrilateral.
## Bimedians
The length of the bimedians in a convex quadrilateral with sides a, b, c, d are given by
$m_1=\tfrac{1}{2}\sqrt{-a^2+b^2-c^2+d^2+p^2+q^2}$
and
$m_2=\tfrac{1}{2}\sqrt{a^2-b^2+c^2-d^2+p^2+q^2}$
where p and q are the length of the diagonals.[12] Hence[8]:p.126
$p^2+q^2=2(m_1^2+m_2^2).$
## Taxonomy
A taxonomy of quadrilaterals is illustrated by the following graph. Lower forms are special cases of higher forms. Note that "trapezium" here is referring to the British definition (the North American equivalent is a trapezoid), and "kite" excludes the concave kite (arrowhead or dart). Inclusive definitions are used throughout.
• The diagonals of a crossed or concave quadrilateral do not intersect inside the shape.
• The diagonals of a rhombus bisect the angles.
• Let ABCD be a trapezoid (in the U.S. sense of having two parallel sides) with vertices A, B, C, and D in sequence and with parallel sides AB and DC. Let E be the intersection of the diagonals, and let F be on side DA and G be on side BC such that FEG is parallel to AB and CD. Then FG is the harmonic mean of AB and DC:
$\frac{1}{FG}=\frac{1}{2} \left( \frac{1}{AB}+ \frac{1}{DC} \right).$
• A parallelogram with equal diagonals is a rectangle.
• A cyclic quadrilateral with successive sides a, b, c, d and diagonals p, q has pq=ac+bd.
• A cyclic quadrilateral with successive vertices A, B, C, D and successive sides a=AB, b=BC, c=CD, and d=DA and with diagonals p=AC and q=BD has:
$\frac {p}{q}= \frac{ad+cb}{ab+cd},$
$p^{2}= \frac{(ac+bd)(ad+bc)}{ab+cd},$ and
$q^{2}= \frac{(ac+bd)(ab+dc)}{ad+bc}.$
• A cyclic quadrilateral with successive sides a, b, c, d and semiperimeter s has circumradius (the radius of the circumscribing circle) given by[13]
$R=\frac{1}{4} \sqrt{\frac{(ab+cd)(ac+bd)(ad+bc)}{(s-a)(s-b)(s-c)(s-d)}}.$
• A parallelogram with diagonals p, q and successive sides a, b, c, and d with d=b and c=a has
p2 + q2 = a2 + b2 + c2 + d2.
• For any point P in the interior of a rectangle with successive vertices A, B, C, D, we have
(AP)2 + (CP)2 = (BP)2 + (DP)2.
• Any line through the midpoint (centroid) of a parallelogram bisects the area.
• An orthodiagonal quadrilateral (one with perpendicular diagonals) with sides a, b, c, d in sequence has[6][8]:p.136 a2 + c2 = b2 + d2.
• There are no cyclic quadrilaterals with unequal rational sides in arithmetic progression and with rational area.[14]
• There are no cyclic quadrilaterals with unequal rational sides in geometric progression and with rational area.[14]
## References
1. ^ Stars: A Second Look
2. ^ M.P. Barnett and J.F. Capitani, Modular chemical geometry and symbolic calculation, International Journal of Quantum Chemistry, 106 (1) 215--227, 2006.
3. ^ Harries, J. "Area of a quadrilateral," Mathematical Gazette 86, July 2002, 310-311.
4. ^ R. A. Johnson, Advanced Euclidean Geometry, 2007, Dover Publ., p. 82.
5. ^ E. W. Weisstein. "Bretschneider's formula". MathWorld -- A Wolfram Web Resource.
6. ^ a b Mitchell, Douglas W., "The area of a quadrilateral," Mathematical Gazette 93, July 2009, 306-309.
7. ^ O. Bottema, Geometric Inequalities, Wolters-Noordhoff Publishing, The Netherlands, 1969, pp. 129, 132.
8. ^ a b c d e f Altshiller-Court, Nathan, College Geometry, Dover Publ., 2007.
9. ^ a b Thomas Peter, Maximizing the Area of a Quadrilateral, The College Mathematics Journal, Vol. 34, No. 4 (Sep., 2003), pp. 315-316.
10. ^ Eric W. Weisstein, MathWorld, [1]
11. ^ Eric W. Weisstein, MathWorld, [2]
12. ^ Mateescu Constantin, Answer to Inequality Of Diagonal , [3]
13. ^ Hoehn, Larry, "Circumradius of a cyclic quadrilateral," Mathematical Gazette 84, March 2000, 69-70.
14. ^ a b Buchholz, R. H., and MacDougall, J. A. "Heron quadrilaterals with sides in arithmetic or geometric progression", Bull. Austral. Math. Soc. 59 (1999), 263-269. http://journals.cambridge.org/article_S0004972700032883
Wikimedia Foundation. 2010.
# What is the unit vector that is orthogonal to the plane containing (3i + 2j - 3k) and (i -2j + 3k) ?
Dec 19, 2016
The answer is =〈0,-3/sqrt13,-2/sqrt13〉
#### Explanation:
We do a cross product to find the vector orthogonal to the plane
The vector is given by the determinant
$| \left(\hat{i} , \hat{j} , \hat{k}\right) , \left(3 , 2 , - 3\right) , \left(1 , - 2 , 3\right) |$
$= \hat{i} \left(6 - 6\right) - \hat{j} \left(9 - - 3\right) + \hat{k} \left(- 6 - 2\right)$
=〈0,-12,-8〉
Verification by doing the dot product
〈0,-12,-8〉.〈3,2,-3〉=0-24+24=0
〈0,-12,-8〉.〈1,-2,3〉=0+24-24=0
The vector is orthogonal to the other 2 vectors
The unit vector is obtained by dividing by the modulus
∥〈0,-12,-8〉∥=sqrt(0+144+64)=sqrt208=4sqrt13
The unit vector is =1/(4sqrt13)〈0,-12,-8〉
=〈0,-3/sqrt13,-2/sqrt13〉
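The arithmetic above can be double-checked in a few lines (an added sketch, not part of the Socratic answer):

```python
import math

def cross(u, v):
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

a = (3, 2, -3)
b = (1, -2, 3)
n = cross(a, b)
assert n == (0, -12, -8)                    # matches the determinant expansion
assert dot(n, a) == 0 and dot(n, b) == 0    # orthogonal to both vectors

norm = math.sqrt(dot(n, n))                 # sqrt(208) = 4*sqrt(13)
unit = tuple(c / norm for c in n)
assert math.isclose(dot(unit, unit), 1.0)
assert math.isclose(unit[1], -3 / math.sqrt(13))
assert math.isclose(unit[2], -2 / math.sqrt(13))
```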
## Example Problems on Estimators
ST702 Homework 5 on Estimators
# 7.45
Let $X_1, X_2, \dots X_n$ be iid from a distribution with mean $\mu$ and variance $\sigma^2$, and let $S^2$ be the usual unbiased estimator of $\sigma^2$. In Example 7.3.4 we saw that, under normality, the MLE has smaller MSE than $S^2$. In this exercise we will explore variance estimators some more.
## a
Show that, for any estimator of the form $aS^2$, where $a$ is some constant,
$MSE(aS^2) = E[aS^2 - \sigma^2]^2 = a^2 Var(S^2) + (a - 1)^2 \sigma^4$ \begin{align} MSE(aS^2) & = E[W - \theta]^2 \\ & = E[aS^2 - \sigma^2]^2 \\ & = Var(aS^2) + (Bias(aS^2))^2 \\ & = a^2 Var(S^2) + (E(aS^2) - \sigma^2)^2 \\ & = a^2 Var(S^2) + (a^2 E(S^2)^2 - 2a E(S^2)\sigma^2 + \sigma^4)\\ & = a^2 Var(S^2) + (a^2 (\sigma^2)^2 - 2a \sigma^2 \sigma^2 + \sigma^4) & \text{unbiased}\\ & = a^2 Var(S^2) + (a-1)^2 \sigma^4 \end{align}
## b
Show that
$Var(S^2) = \frac{ 1 }{ n } \Big( \kappa - \frac{ n-3 }{ n-1 } \Big)\sigma^4$
where $\kappa = E[X-\mu]^4 / \sigma^4$ is the kurtosis.
\begin{align} V(S^2) & = E[(S^2)^2] - \big(E[S^2]\big)^2 \end{align}

Taking $\mu = 0$ without loss of generality and writing $S^2 = \frac{ 1 }{ n-1 }\big( \sum X_i^2 - n \overline{X}^2 \big)$, a direct (if lengthy) computation of $E[(S^2)^2]$ in terms of $\mu_4 = E[X^4] = \kappa \sigma^4$ and $\sigma^4$ gives the standard identity

\begin{align} V(S^2) & = \frac{ \mu_4 }{ n } - \frac{ n-3 }{ n(n-1) } \sigma^4 = \frac{ 1 }{ n } \Big( \kappa - \frac{ n-3 }{ n-1 } \Big)\sigma^4. \end{align}

(Note that the terms $(X_i - \overline{X})^2$ are not independent, so the variance of $S^2$ cannot be computed by summing their variances.)
## c
Show that under normality, the kurtosis is $3$ and establish that, in this case, the estimator of the form $aS^2$ with the minimum MSE is $\frac{ n-1 }{ n+1 }S^2$. (Lemma 3.6.5 may be helpful)
We can bring the $\sigma^4$ inside the fraction to get
$\kappa = E\Big[ \Big( \frac{ X-\mu }{ \sigma } \Big)^4 \Big].$
Notice that the inside function now follows a $N(0,1)$ distribution. We can also make use of Lemma 3.6.5 which says
$E\Big[ g(x) (x - \mu) \Big] = \sigma^2 E(g'(x)).$
Now we can take $g(Z) = Z^3$. Then,
\begin{align} \kappa & = E[Z^4] \\ & = E[g(Z) (Z - 0)] \\ & = \sigma^2 E[g'(Z)] \\ & = \sigma^2 E[3 Z^2] \\ & = 3 \sigma^2 E[Z^2] \\ & = 3 \sigma^2 (V(Z) + E(Z)^2) \\ & = 3 \cdot 1 ( 1 + 0) \\ & = 3. \end{align}
Now we can use our MSE equation from (a) and our variance equation from (b). We are trying to optimize with respect to $a$.
\begin{align} MSE & = a^2 \frac{ 1 }{ n } \Big( \kappa - \frac{ n-3 }{ n-1 } \Big)\sigma^4 + (a-1)^2 \sigma^4 \\ \frac{ \partial MSE }{\partial a} & = 2 a \frac{ 1 }{ n } \Big( \kappa - \frac{ n-3 }{ n-1 } \Big)\sigma^4 + 2(a-1) \sigma^4 = 0 \\ a & = \frac{ \sigma^4 }{ \frac{ 1 }{ n } \Big( \kappa - \frac{ n-3 }{ n-1 } \Big)\sigma^4 + \sigma^4 } \\ & = \frac{ \sigma^4 }{ \frac{ 1 }{ n } \Big( 3 - \frac{ n-3 }{ n-1 } \Big)\sigma^4 + \sigma^4 } \\ & = \frac{ 1 }{ \frac{ 1 }{ n } \Big( 3 - \frac{ n-3 }{ n-1 } \Big) + 1 } \\ & = \frac{ n-1 }{ n+1 } \end{align}
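A quick numerical check (an added sketch, not part of the homework) that this $a$ indeed minimizes the MSE expression from part (a) under normality: the minimizer does not depend on $\sigma$, so $\sigma = 1$ is used.

```python
# Grid-search check (under normality, kappa = 3) that
# MSE(a) = a^2 * Var(S^2) + (a - 1)^2 * sigma^4
# is minimized near a = (n - 1)/(n + 1).
n = 10
kappa = 3.0
var_s2 = (kappa - (n - 3) / (n - 1)) / n   # Var(S^2) / sigma^4

def mse(a):
    return a * a * var_s2 + (a - 1) ** 2

grid = [i / 100000 for i in range(1, 150000)]
a_star = min(grid, key=mse)
assert abs(a_star - (n - 1) / (n + 1)) < 1e-3   # 9/11 for n = 10
```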
## d
If normality is not assumed, show that $MSE(aS^2)$ is minimized at
$a = \frac{ n-1 }{ n+1 + \frac{ (\kappa - 3) (n-1) }{ n } }$
which is useless as it depends on a parameter.
From above
\begin{align} a & = \frac{ \sigma^4 }{ \frac{ 1 }{ n } \Big( \kappa - \frac{ n-3 }{ n-1 } \Big)\sigma^4 + \sigma^4 } \\ & = \frac{ 1 }{ \frac{ 1 }{ n } \Big( \kappa - \frac{ n-3 }{ n-1 } \Big) + 1 }\\ & = \frac{ n-1 }{ n+1 + \frac{ (\kappa - 3) (n-1) }{ n } } \end{align}
## e
Show that
### i
for distributions with $\kappa > 3$, the optimal $a$ will satisfy $a < \frac{ n-1 }{ n+1 }$.
Taking $\kappa > 3$ will make the second term in our denominator positive, which will make the denominator larger than $n+1$. So our overall expression will be less than $\frac{ n-1 }{ n+1 }$.
### ii
for distributions with $\kappa < 3$, the optimal $a$ will satisfy $\frac{ n-1 }{ n+1 } < a < 1$.
Similarly, making $\kappa < 3$ will make the second term in our denominator negative, which will make the denominator smaller than $n+1$, so $a > \frac{ n-1 }{ n+1 }$. For the upper bound, note that $\kappa \geq 1$ always (by Jensen's inequality, $E[(X-\mu)^4] \geq (E[(X-\mu)^2])^2 = \sigma^4$), so $\frac{ (3-\kappa)(n-1) }{ n } \leq \frac{ 2(n-1) }{ n } < 2$ and the denominator stays larger than $n-1$; hence $a < 1$.
# 7.48
Suppose that $X_i, \ i = 1, \dots , n$ are iid $Bernoulli(p)$.
## a
Show that the variance of the MLE of $p$ attains the Cramér-Rao Lower Bound.
The MLE is,
\begin{align} L(p | x) & = \prod f(x_i | p) \\ & = \prod p^{x_i} (1-p)^{1-x_i}\\ & = p^{\sum x_i} (1-p)^{n-\sum x_i} \\ \\ \ell(p|x) & = \sum x_i \log(p) + (n - \sum x_i) \log(1-p) \\ \\ \frac{ \partial \ell }{\partial p } & = \sum x_i \frac{ 1 }{ p } + -(n - \sum x_i) \frac{ 1 }{ 1-p } \\ 0 & = \sum x_i \frac{ 1 }{ p } + -(n - \sum x_i) \frac{ 1 }{ 1-p } \\ \\ \sum x_i \frac{ 1 }{ p } & = (n - \sum x_i) \frac{ 1 }{ 1-p } \\ \frac{ 1-p }{ p } & = \frac{ n - \sum x_i }{ \sum x_i } \\ \widehat p & = \overline{ X }. \end{align}
The Cramér-Rao lower bound is,
\begin{align} V_\theta ( W(X)) & \geq \frac{ \Big( \frac{ \partial }{\partial \theta} E_\theta(W(X)) \Big)^2 }{ -E_\theta \Big( \frac{ \partial^2 }{\partial \theta^2} \log(f(x|\theta)) \Big) } \\ -E_\theta \Big( \frac{ \partial^2 }{\partial \theta^2} \log(f(x|\theta)) \Big) & = -E_\theta \Big( \frac{ \partial }{\partial \theta} \sum \frac{ x_i }{ p } - (n - \sum x_i) \frac{ 1 }{ 1-p } \Big) \\ & = -E_\theta \Big( - \frac{ \sum x_i }{ p^2 } + \frac{ -(n-\sum x_i) }{ (1-p)^2 } \Big) \\ & = \frac{ 1 }{ p^2 } \sum E(x_i) + E\Big( \frac{ n - \sum x_i }{ (1-p)^2 } \Big) \\ & = \frac{ n }{ p^2 } p + \frac{ n-np }{ (1-p)^2 } \\ & = \frac{ n }{ p } + \frac{ n(1-p) }{ (1-p)^2 } \\ \\ CRLB & = \frac{ 1 }{ \frac{ n }{ p } + \frac{ n(1-p) }{ (1-p)^2 } } \\ & = \frac{ p(1-p) }{ n }. \end{align}
Now we can find the variance of our estimator.
\begin{align} V(\widehat p) & = V \Big( \frac{ \sum x_i }{ n } \Big) \\ & = \frac{ 1 }{ n^2 } \sum V(x_i) \\ & = \frac{ 1 }{ n^2 } n p (1-p) \\ & = \frac{ p(1-p) }{ n }\\ & = CRLB \end{align}
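A Monte Carlo spot check (an added sketch, not part of the homework) that the variance of $\overline{X}$ matches the bound $p(1-p)/n$; the choices of $n$, $p$, and the number of replications are arbitrary:

```python
import random

random.seed(2)

n, p = 50, 0.3
crlb = p * (1 - p) / n          # p(1-p)/n = 0.0042

reps = 20000
means = []
for _ in range(reps):
    xs = [1 if random.random() < p else 0 for _ in range(n)]
    means.append(sum(xs) / n)

mu = sum(means) / reps
var_hat = sum((m - mu) ** 2 for m in means) / reps
assert abs(var_hat - crlb) < 0.0005
```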
## b
For $n \geq 4$, show that the product $X_1 X_2 X_3 X_4$ is an unbiased estimator of $p^4$. and use this fact to find the best unbiased estimator of $p^4$.
\begin{align} E\Big[ X_1 X_2 X_3 X_4 \Big] & = E(X_1)E(X_2)E(X_3)E(X_4) & \text{independence} \\ & = p^4. \end{align}
Also $\sum_{i=1}^{n} x_i$ is sufficient for $p$ by the Factorization Theorem.
\begin{align} f(X | p) & = \prod p^{x_i} (1-p)^{1-x_i} \\ & = p^{\sum x_i} (1-p)^{n-\sum x_i} \end{align}
Since $\sum x_i$ is also complete (the Bernoulli family is a full-rank exponential family), conditioning our unbiased estimator on it gives the best unbiased estimator by Rao-Blackwell together with Lehmann-Scheffé.
\begin{align} \phi(T) & = E\Big( X_1 X_2 X_3 X_4 | \sum_{i=1}^{n} X_i = t \Big) \\ & = \frac{ P(X_1, X_2, X_3, X_4 = 1 \land \sum_{i=1}^{n} X_i = t) }{ P(\sum_{i=1}^{n} X_i = t) } \\ & = \frac{ P(X_1, X_2, X_3, X_4 = 1 \land \sum_{i=5}^{n} X_i = t-4) }{ P(\sum_{i=1}^{n} X_i = t) } \\ & = \frac{ P(X_1, X_2, X_3, X_4 = 1) P(\sum_{i=5}^{n} X_i = t-4) }{ P(\sum_{i=1}^{n} X_i = t) } & \text{independence} \\ & = \frac{ p^4 \cdot {n-4 \choose t-4} p^{t-4} (1-p)^{n-4-(t-4)} }{ {n \choose t} p^t (1-p)^{n-t} } \\ & = \frac{ {n-4 \choose t-4} }{ {n \choose t} } \end{align}
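Unbiasedness of $\phi(T)$ can be verified exactly for small $n$ by summing over the binomial distribution of $T$ (an added check, not part of the original solution); note $\phi(t) = 0$ for $t < 4$, consistent with $E(X_1X_2X_3X_4 \mid T = t) = 0$ there.

```python
import math

def phi(t, n):
    """Rao-Blackwellized estimator of p^4 given T = sum X_i = t."""
    if t < 4:
        return 0.0
    return math.comb(n - 4, t - 4) / math.comb(n, t)

def expected_phi(n, p):
    # E[phi(T)] with T ~ Binomial(n, p)
    return sum(phi(t, n) * math.comb(n, t) * p**t * (1 - p) ** (n - t)
               for t in range(n + 1))

for p in (0.1, 0.5, 0.9):
    assert math.isclose(expected_phi(8, p), p**4)
```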
# 7.49
Let $X_1, \dots , X_n$ be iid $exponential(\lambda)$.
## a
Find an unbiased estimator of $\lambda$ based only on $Y = \min {X_1, \dots , X_n }$.
\begin{align} P(Y > y) & = P(X_i > y \text{ for all } i) \\ & = [1 - F(y)]^n \\ \\ f_Y(y) & = n f(y) (1- F(y))^{n-1} \\ & = n \frac{ 1 }{ \lambda } e^{-\frac{ 1 }{ \lambda } y} \Big( 1 - (1 - e^{\frac{ -1 }{ \lambda } y}) \Big)^{n-1} \\ & = n \frac{ 1 }{ \lambda } e^{- n \frac{ 1 }{ \lambda } y}. \end{align}

So $Y\sim Exp(\lambda / n)$. So $nY$ is unbiased.
$E(nY) = n E(Y) = n \lambda/n = \lambda$
## b
Find a better estimator than the one in part (a). Prove that it is better.
Since it’s an exponential family, $\sum x_i$ is sufficient and complete. Notice that $\overline{ X }$ is a function of $\sum x_i$ and is unbiased.
$E(\overline{ X }) = \frac{ 1 }{ n } \sum E(x_i) = \frac{ 1 }{ n } n \lambda = \lambda$
By Lehmann-Scheffe, $\overline{ X }$ is UMVUE, so it is better than the estimator in (a).
## c
The following data are high-stress failure times (in hours) of Kevlar/epoxy spherical vessels used in a sustained pressure environment on the space shuttle:
$50.1, 70.1, 137.0, 166.9, 170.5, 152.8, 80.5, 123.5, 112.6, 148.5, 160.0, 125.4.$
Failure times are often modeled with the exponential distribution. Estimate the mean failure time using the estimators from parts (a) and (b).
From (a) $\widehat \lambda = 50.1 \cdot 12 = 601.2$. For (b) we can take the average $\widehat \lambda = 124.825$.
# 4
Consider a random sample $X_1, X_2, \dots X_n$ from the density $f(x| \theta) = \exp\Big[ -(x-\theta) \Big]$, $\theta < x < \infty$, and 0 elsewhere. Show that the first order statistic $X_{(1)} = \min X_i$ is a complete sufficient statistic for $\theta$, and find the UMVUE of $\theta$.
\begin{align} f(X | \theta) & = \prod e^{-(x_i-\theta)} I[\theta < x_i< \infty] \\ & = e^{-\sum x_i + n \theta} I[\theta < X_{(1)}] \\ & = e^{- \sum x_i} e^{n \theta} I[\theta < X_{(1)}] \end{align}
By the factorization theorem $X_{(1)}$ is sufficient. Now we can show that it is complete. First, we need its distribution.
\begin{align} f_{X_{(1)}} & = n \cdot f_x ( 1- F_X)^{n-1} \\ & = n e^{-(x-\theta)} \Big[ 1 - \int_{\theta}^{x} e^{-(t - \theta)} dt \Big]^{n-1} \\ & = n e^{-(x-\theta)} \Big[ 1 - (1 - e^{-x + \theta}) \Big]^{n-1} \\ & = n e^{-(x-\theta)} \Big[ e^{-x + \theta} \Big]^{n-1} \\ & = n e^{n(-x + \theta)} \end{align}
Now we can show completeness.
\begin{align} E_\theta\big[ g(X_{(1)}) \big] = \int_{\theta}^{\infty} g(x)\, n e^{n(\theta - x)}\,dx & = n e^{n \theta} \int_{\theta}^{\infty} g(x) e^{- nx}\,dx \end{align}
If this vanishes for every $\theta$, then $\int_{\theta}^{\infty} g(x) e^{-nx}\,dx = 0$ for every $\theta$; differentiating in $\theta$ gives $g(\theta) e^{-n\theta} = 0$, so $g = 0$ almost everywhere and $X_{(1)}$ is complete. Now we can look at the expectation of the minimum order statistic.
\begin{align} E(X_{(1)}) & = \int_{\theta}^{\infty} x n e^{(-x + \theta)n} dx \\ & = \frac{ n (1+n\theta) }{ n^2 } \\ & = \frac{ 1 }{ n } + \theta \end{align}
So $X_{(1)} - \frac{ 1 }{ n }$ is unbiased, and by Lehmann–Scheffé it is the UMVUE of $\theta$.
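A short simulation (illustrative only) confirms the bias calculation: the raw minimum overshoots $\theta$ by $1/n$ on average, while $X_{(1)} - 1/n$ centers on $\theta$.

```python
# Simulation: draws from the shifted exponential exp(-(x - theta)) on (theta, inf).
import random
import statistics

random.seed(1)
theta, n, trials = 2.0, 8, 20000

mins = [min(theta + random.expovariate(1.0) for _ in range(n))
        for _ in range(trials)]

m = statistics.mean(mins)
print(m)              # near theta + 1/n = 2.125
print(m - 1.0 / n)    # near theta = 2.0
```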
# The graph of the parametric equations
The graph of the parametric equations
\begin{align*} x&=\cos t,\\ y&=\sin t, \end{align*}
meets the graph of the parametric equations
\begin{align*} x &= 2+ 4\cos s,\\ y &= 3+4\sin s, \end{align*}
at two points. Find the slope of the line between these two points.
Nov 28, 2018
#1
$$\text{circle 1:} \\ \begin{array}{|rcll|} \hline x &=& \cos(t) \\ y &=& \sin(t) \\ \hline x^2+y^2 &=& \cos^2(t) + \sin^2(t) \\ \mathbf{x^2+y^2} & \mathbf{=} & \mathbf{1} \\ \hline \end{array}$$
$$\text{circle 2:} \\ \begin{array}{|rcll|} \hline x &=& 2+4\cos(t) \quad \text{ or } \quad 4\cos(t) = x-2 \\ y &=& 3+4\sin(t) \quad \text{ or } \quad 4\sin(t) = y-3 \\ \hline x^2+y^2 &=& \Big( 2+4\cos(t) \Big)^2 + \Big( 3+4\sin(t) \Big)^2 \\ x^2+y^2 &=& 4+16\cos(t)+16\cos^2(t) + 9 + 24\sin(t) + 16\sin^2(t) \\ x^2+y^2 &=& 13+16\cos(t)+ 24\sin(t) + 16\Big(\cos^2(t) + \sin^2(t) \Big) \\ x^2+y^2 &=& 13+16\cos(t)+ 24\sin(t) + 16 \\ x^2+y^2 &=& 29+16\cos(t)+ 24\sin(t) \\ x^2+y^2 &=& 29+4\cdot4\cos(t)+ 6\cdot 4\sin(t) \\ x^2+y^2 &=& 29+4\cdot (x-2)+ 6\cdot (y-3) \\ x^2+y^2 &=& 29+4x-8+6y-18 \\ \mathbf{x^2+y^2} & \mathbf{=} & \mathbf{3+4x +6y} \\ \hline \end{array}$$
$$\text{intersection between the two circles:} \\ \begin{array}{|rcll|} \hline \text{circle 2: }~\mathbf{x^2+y^2} & \mathbf{=} & \mathbf{3+4x +6y} \quad &| \quad \text{circle 1: }~ \mathbf{x^2+y^2= 1} \\ 1 &=& 3+4x +6y \\ 6y+4x+3 &=& 1 \\ 6y &=& -4x-3 +1 \\ 6y &=& -4x-2 \\\\ y &=& \dfrac{-4x-2}{6} \\\\ y &=& -\dfrac{4}{6}x -\dfrac{2}{6} \\\\ y &=& -\dfrac{2}{3}x -\dfrac{1}{3} \\ \hline \end{array}$$
$$\text{the line between these two points:} \\ \begin{array}{|rcll|} \hline \mathbf{ y } & \mathbf{=} & \mathbf{ \underbrace{-\dfrac{2}{3}}_{\text{slope}}x -\dfrac{1}{3} } \\ \hline \end{array}$$
The slope of the line between these two points is $$\mathbf{-\dfrac{2}{3}}$$.
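The answer can be cross-checked numerically: substituting the radical line $y=-\tfrac{2}{3}x-\tfrac{1}{3}$ into $x^2+y^2=1$ gives $13x^2+4x-8=0$, and the two roots produce points lying on both circles.

```python
import math

# substituting y = (-2x - 1)/3 into x^2 + y^2 = 1 gives 13x^2 + 4x - 8 = 0
a, b, c = 13.0, 4.0, -8.0
disc = math.sqrt(b * b - 4 * a * c)
xs = [(-b + disc) / (2 * a), (-b - disc) / (2 * a)]
pts = [(x, (-2 * x - 1) / 3) for x in xs]

for x, y in pts:
    assert abs(x * x + y * y - 1) < 1e-12                # on circle 1
    assert abs((x - 2) ** 2 + (y - 3) ** 2 - 16) < 1e-9  # on circle 2

(x1, y1), (x2, y2) = pts
slope = (y2 - y1) / (x2 - x1)
print(slope)   # -0.666..., i.e. -2/3
```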
Nov 28, 2018
edited by heureka Nov 28, 2018
## DMat0101, Notes 3: The Fourier transform on $L^1$
1. Definition and main properties.
For ${f\in L^1({\mathbb R}^n)}$, the Fourier transform of ${f}$ is the function
$\displaystyle \mathcal{F}(f)(\xi)=\hat{f}(\xi)=\int_{{\mathbb R}^n}f(x)e^{-2\pi i x\cdot \xi}dx,\quad \xi\in{\mathbb R}^n.$
Here ${x\cdot y}$ denotes the inner product of ${x=(x_1,\ldots,x_n)}$ and ${y=(y_1,\ldots, y_n)}$:
$\displaystyle x\cdot y=\langle x,y\rangle=x_1y_1+\cdots x_n y_n.$
Observe that this inner product in ${{\mathbb R}^n}$ is compatible with the Euclidean norm since ${x\cdot x=|x|^2}$. It is easy to see that the integral above converges for every ${\xi\in{\mathbb R}^n}$ and that the Fourier transform of an ${L^1}$ function is a uniformly continuous function.
Theorem 1 Let ${f,g\in L^1({\mathbb R}^n)}$. We have the following properties.
(i) The Fourier transform is linear ${\widehat{f+g}=\hat f + \hat g}$ and ${\widehat{cf}=c \hat f}$ for any ${c\in{\mathbb C}}$.
(ii) The function ${\hat f(\xi)}$ is uniformly continuous.
(iii) The operator ${\mathcal F}$ is bounded operator from ${L^1({\mathbb R}^n)}$ to ${L^\infty({\mathbb R}^n)}$ and
$\displaystyle \|\hat f \|_{L^{\infty}({\mathbb R}^n)}\leq \|f\|_{L^1({\mathbb R}^n)}.$
(iv) (Riemann-Lebesgue) We have that
$\displaystyle \lim _{|\xi|\rightarrow +\infty} \hat f(\xi)=0.$
Proof: The properties (i), (ii) and (iii) are easy to establish and are left as an exercise. There are several ways to see (iv) based on the idea that it is enough to establish this property for a dense subspace of ${L^1({\mathbb R}^n)}$. For example, observe that if ${f}$ is the indicator function of an interval of the real line, ${f=\chi_{[a,b]}}$, then we can calculate explicitly to show that
$\displaystyle |\hat{f}(\xi)|=\bigg|\int_a ^b e^{-2\pi i x\xi}dx\bigg| =\bigg| \frac{e^{-2\pi i \xi a}-e^{-2\pi i \xi b}}{2\pi i \xi }\bigg|\lesssim \frac{1}{|\xi|}\rightarrow 0\quad\mbox{as}\quad |\xi|\rightarrow +\infty.$
Tensoring this one dimensional result one easily shows that ${\lim_{|\xi|\rightarrow +\infty}\hat f(\xi)=0}$ whenever ${f}$ is the indicator function of an ${n}$-dimensional interval of the form ${[a_1,b_1]\times\cdots\times [a_n,b_n]}$. Obviously the same is true for finite linear combinations of ${n}$-dimensional intervals since the Fourier transform is linear.
Now let ${f}$ be any function in ${L^1({\mathbb R}^n)}$ and ${\epsilon >0}$, and consider a simple function ${g}$, a finite linear combination of indicator functions of ${n}$-dimensional intervals, such that ${\|f-g\|_1<\epsilon/2}$. Let also ${M>0}$ be large enough so that ${|\hat g (\xi)|<\epsilon/2}$ whenever ${|\xi|>M}$. Using (iii) and the linearity of the Fourier transform we have that
$\displaystyle \begin{array}{rcl} |\hat f(\xi)|\leq |\widehat{(f-g)} (\xi)|+|\hat g (\xi)|\leq \|f-g\|_{L^1}+|\hat g (\xi)|<\epsilon, \end{array}$
whenever ${|\xi|>M}$, which finishes the proof. $\Box$
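A small numerical illustration of the lemma (a sketch, not needed for the proof): for $f=\chi_{[0,1]}$ the computation above gives $|\hat f(\xi)|=|\sin(\pi\xi)|/(\pi|\xi|)$, and a midpoint Riemann sum for the defining integral reproduces this decay.

```python
import cmath
import math

def fhat(xi, steps=20000):
    # midpoint Riemann sum for int_0^1 e^{-2 pi i x xi} dx
    h = 1.0 / steps
    return sum(cmath.exp(-2j * math.pi * (k + 0.5) * h * xi)
               for k in range(steps)) * h

for xi in [1.5, 10.5, 100.5]:
    # numeric value vs the closed form |sin(pi xi)| / (pi |xi|); both decay to 0
    print(xi, abs(fhat(xi)), abs(math.sin(math.pi * xi)) / (math.pi * abs(xi)))
```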
In view of (ii) and (iv) we immediately get the following.
Corollary 2 If ${f\in L^1({\mathbb R}^n)}$ then ${\hat f\in C_o({\mathbb R}^n)}$.
Exercise 1 Show the properties (ii) and (iii) in the previous Theorem.
The discussion above and especially Corollary 2 shows that a necessary condition for a function ${g}$ to be a Fourier transform of some function in ${L^1({\mathbb R}^n)}$ is ${g\in C_o({\mathbb R}^n)}$. However, this condition is far from being sufficient as there are functions ${g\in C_o({\mathbb R}^n)}$ which are not Fourier transforms of ${L^1}$ functions. See Exercise 8.
Let us now see two important examples of Fourier transforms that will be very useful in what follows.
Example 1 For ${a>0}$ let ${f(x)=e^{-\pi a|x|^2}}$. Then
$\displaystyle \hat f(\xi)=a^{-\frac{n}{2}}e^{-\frac{\pi|\xi|^2}{a}}.$
Proof: Observe that in one dimension we have
$\displaystyle \begin{array}{rcl} \hat f(\xi)&=&\int_{\mathbb R} e^{-\pi ax^2}e^{-2\pi i x\xi}dx=\int_{\mathbb R} e^{-\pi a(x+i\frac{\xi}{a})^2}dx\ e^{-\frac{\pi\xi^2}{a}}\\ \\ &=&\int_{\mathbb R} e^{-\pi ax^2} dx \ e^{-\frac{\pi\xi^2}{a}}= a^{-\frac{1}{2}} e^{-\frac{\pi\xi^2}{a}}, \end{array}$
where the third equality is a consequence of Cauchy’s theorem from complex analysis. The ${n}$-dimensional case is now immediate by tensoring the one dimensional result. $\Box$
Remark 1 Replacing ${a=1}$ in the previous example we see that ${e^{-\pi |x|^2}}$ is its own Fourier transform.
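This fixed-point property is easy to test numerically (an illustrative check): approximating the defining integral on $[-8,8]$, where the Gaussian tail is negligible, recovers $e^{-\pi\xi^2}$.

```python
import cmath
import math

def gauss_hat(xi, lo=-8.0, hi=8.0, steps=40000):
    # midpoint Riemann sum for int e^{-pi x^2} e^{-2 pi i x xi} dx
    h = (hi - lo) / steps
    total = 0j
    for k in range(steps):
        x = lo + (k + 0.5) * h
        total += math.exp(-math.pi * x * x) * cmath.exp(-2j * math.pi * x * xi)
    return total * h

for xi in [0.0, 0.7, 1.3]:
    # numerical transform vs e^{-pi xi^2}: the two columns agree
    print(xi, gauss_hat(xi).real, math.exp(-math.pi * xi * xi))
```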
Example 2 For ${a>0}$ let ${g(x)=e^{-2\pi a |x|}}$. Then
$\displaystyle \hat g(\xi)=c_n\frac{a}{(a^2+|\xi|^2)^\frac{n+1}{2}},$
where ${c_n=\Gamma((n+1)/2)/\pi^\frac{n+1}{2}}$.
Proof: The first step here is to show the subordination identity
$\displaystyle e^{-\beta}=\frac{1}{\sqrt{\pi}}\int_0 ^\infty \frac{e^{-u}}{\sqrt{u}}e^{-\beta^2/4u}du,\quad \beta>0, \ \ \ \ \ (1)$
which is a simple consequence of the identities
$\displaystyle \begin{array}{rcl} e^{-\beta}&=&\frac{2}{\pi}\int_0 ^\infty\frac{\cos \beta x}{1+x^2}dx,\\ \\ \frac{1}{1+x^2}&=&\int_0 ^\infty e^{-(1+x^2)u}du. \end{array}$
Using (1) we can write
$\displaystyle \begin{array}{rcl} \hat g(\xi)&=&\int_{{\mathbb R}^n} e^{-2\pi a|x|} e^{-2\pi i x\cdot \xi}dx \\ \\ &=&\frac{1}{\sqrt{\pi}}\int_{{\mathbb R}^n}\bigg(\int_0 ^\infty \frac{e^{-u}}{\sqrt{u}}e^{-4\pi^2a^2|x|^2/4u}du\bigg) e^{-2\pi i x\cdot \xi} dx\\ \\ &=&\frac{1}{\sqrt{\pi}}\int_0 ^\infty \frac{e^{-u}}{\sqrt{u}} \frac{1}{a^n}\bigg(\frac{u}{\pi}\bigg)^\frac{n}{2} e^{-\frac{u|\xi|^2}{a^2}}du\\ \\ &=& \frac{1}{\pi^\frac{n+1}{2} a^n}\int_0 ^\infty u^\frac{n-1}{2}e^{-u\frac{|\xi|^2}{a^2}}e^{-u}du\\ \\ &=& \frac{1}{\pi^\frac{n+1}{2} a^n} \frac{1}{\big(1+\frac{|\xi|^2}{a^2}\big)^\frac{n+1}{2}}\int_0 ^\infty u^\frac{n-1}{2}e^{-u}du\\ \\ &=& \frac{\Gamma(\frac{n+1}{2})}{\pi^\frac{n+1}{2} }\frac{a}{\big(a^2+|\xi|^2 \big)^\frac{n+1}{2}}, \end{array}$
by the definition of the ${\Gamma}$-function.$\Box$
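As a sanity check in dimension $n=1$ (illustrative, with $c_1=\Gamma(1)/\pi=1/\pi$), the transform of $e^{-2\pi a|x|}$ should be $a/(\pi(a^2+\xi^2))$; a direct numerical integration with $a=1$ agrees.

```python
import cmath
import math

def poisson_hat(xi, cutoff=12.0, steps=48000):
    # midpoint Riemann sum for int e^{-2 pi |x|} e^{-2 pi i x xi} dx (a = 1);
    # the tail beyond |x| = 12 is of size e^{-24 pi}, hence negligible
    h = 2 * cutoff / steps
    total = 0j
    for k in range(steps):
        x = -cutoff + (k + 0.5) * h
        total += math.exp(-2 * math.pi * abs(x)) * cmath.exp(-2j * math.pi * x * xi)
    return total * h

for xi in [0.0, 0.5, 2.0]:
    # numerical transform vs the predicted 1 / (pi (1 + xi^2))
    print(xi, poisson_hat(xi).real, 1.0 / (math.pi * (1 + xi * xi)))
```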
Exercise 2 This exercise gives a first (qualitative) instance of the uncertainty principle. Prove that there does not exist a non-zero integrable function on ${{\mathbb R}}$ such that both ${f}$ and ${\hat f}$ have compact support.
Hint: Observe that the function
$\displaystyle \hat f(\xi)=\int_{\mathbb R} f(x) e^{-2\pi i x\xi}dx ,$
extends to an entire function (why ?).
The definition of the Fourier transform extends without difficulty to finite Borel measures on ${{\mathbb R}^n}$. Let us denote by ${\mathcal M({\mathbb R}^n)}$ this class of finite Borel measures and let ${\mu\in \mathcal M({\mathbb R}^n)}$. We define the Fourier transform of ${\mu}$ to be the function
$\displaystyle \mathcal{F}(\mu)(\xi)=\hat{\mu}(\xi)=\int_{{\mathbb R}^n}e^{-2\pi i x\cdot \xi}d\mu(x),\quad \xi\in{\mathbb R}^n.$
We have the analogues of (i), (ii) and (iii) of Theorem 1 if we replace the ${L^1}$ norm by the total variation of the measure. However property (iv) fails, as can be seen by considering the Fourier transform of a Dirac mass at the point ${0}$. Indeed observe that
$\displaystyle \hat{\delta_0}(\xi)=\int_{{\mathbb R}^n} e^{-2\pi i x\cdot \xi}d\delta_0(x)=1,$
which is a constant function.
The Fourier transform interacts very nicely with convolutions of functions, turning them into products. This turns out to be quite important when considering translation invariant operators, as we shall see later on in the course.
Proposition 3 Let ${f,g\in L^1({\mathbb R}^n)}$. Then ${\widehat{f*g}=\hat f \hat g}$.
Exercise 3 Prove Proposition 3.
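A concrete instance of Proposition 3 (an illustrative check): take $f=g=e^{-\pi x^2}$. Then $\hat f\hat g=e^{-2\pi\xi^2}$, whose inverse transform, by Example 1 with $a=2$, is $2^{-1/2}e^{-\pi x^2/2}$; direct numerical convolution of $f$ with $g$ matches this closed form.

```python
import math

def conv_at(x, lo=-8.0, hi=8.0, steps=16000):
    # midpoint Riemann sum for (f*g)(x) = int e^{-pi t^2} e^{-pi (x-t)^2} dt
    h = (hi - lo) / steps
    total = 0.0
    for k in range(steps):
        t = lo + (k + 0.5) * h
        total += math.exp(-math.pi * t * t) * math.exp(-math.pi * (x - t) ** 2)
    return total * h

for x in [0.0, 0.5, 1.5]:
    # numerical convolution vs the closed form 2^{-1/2} e^{-pi x^2 / 2}
    print(x, conv_at(x), math.exp(-math.pi * x * x / 2) / math.sqrt(2))
```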
Another important property of the Fourier transform is the multiplication formula.
Proposition 4 (Multiplication formula) Let ${f,g\in L^1({\mathbb R}^n)}$. Then
$\displaystyle \int_{{\mathbb R}^n}\hat f(\xi)g(\xi)d\xi=\int_{{\mathbb R}^n}f(x)\hat g(x)dx.$
We will now describe some easily verified symmetries of the Fourier transform. We introduce the following basic operations on functions:
Translation operator: ${ (\tau_{x_o} f)(x)=f(x-x_o),\quad x,x_o\in {\mathbb R}^n}$
Modulation operator: ${\textnormal{Mod}_{x_o}(f)(x)=e^{2\pi i x\cdot x_o} f(x),\quad x,x_o\in {\mathbb R}^n}$
Dilation operator: ${\textnormal{Dil}_\lambda ^p(f)(x)={\lambda^{-\frac{n}{p}}}f(x/\lambda),\quad x,\in {\mathbb R}^n,\lambda>0,1\leq p\leq\infty}$.
Proposition 5 Let ${f\in L^1({\mathbb R}^n)}$ We have the following symmetries:
(i) ${ \mathcal F \tau_{x_o}=\textnormal{Mod} _{-x_o}\mathcal F}$,
(ii) ${\mathcal F \textnormal{Mod}_{\xi_o}=\tau_{\xi_o}\mathcal F}$,
(iii) ${\mathcal F \textnormal {Dil} _\lambda ^p = \textnormal {Dil} _{ \lambda^{-1} } ^{p'} \mathcal F}$, where ${\frac{1}{p}+\frac{1}{p'}=1}$.
Exercise 4 Prove the symmetries in Proposition 5 above. Also, let ${U:{\mathbb R}^n\rightarrow {\mathbb R}^n}$ be an invertible linear transformation, that is, ${U\in GL({\mathbb R}^n)}$. Define the general dilation operator
$\displaystyle ( \textnormal{Dil} _U ^pf)(x) =|\det{U}|^{-\frac{1}{p}}f(U^{-1}x), \quad x\in {\mathbb R}^n, 1\leq p\leq \infty.$
Prove that
$\displaystyle \mathcal F \textnormal{Dil}_U ^p = \textnormal{Dil}_{(U^*)^{-1}} ^{p'}\mathcal F,$
where ${U^*}$ is the (real) adjoint of ${U}$, that is the matrix for which we have ${\langle Ux, y\rangle =\langle x , U^*y\rangle}$ for all ${x,y\in {\mathbb R}^n}$.
We now come to one of the most interesting properties of the Fourier transform, the way it commutes with derivatives.
Proposition 6 (a) Suppose that ${f\in L^1({\mathbb R}^n)}$ and that ${x_kf(x)\in L^1({\mathbb R}^n)}$ for some ${1\leq k \leq n}$. Then ${\hat f}$ is differentiable with respect to ${\xi_k}$ and
$\displaystyle \frac{\partial}{\partial \xi_k} \mathcal F(f)(\xi) = \mathcal F(- 2\pi i x_k f)(\xi).$
(b) We will say that a function ${f}$ has a partial derivative in the ${L^p}$ norm with respect to ${x_k}$ if there exists a function ${g\in L^p({\mathbb R}^n)}$ such that
$\displaystyle \lim_{h_k\rightarrow 0 }\bigg(\int_{{\mathbb R}^n}\bigg|\frac{f(x+h)-f(x)}{h_k}-g(x)\bigg|^p dx\bigg)^\frac{1}{p}=0,$
where ${h=(0,\ldots,0,h_k,0,\ldots,0)}$ is a non-zero vector along the ${k}$-th coordinate axis. If ${f}$ has a partial derivative ${g}$ with respect to ${x_k}$ in the ${L^1}$-norm, then
$\displaystyle \hat g(\xi)=2\pi i \xi_k \hat{f}(\xi).$
Exercise 5 Prove Proposition 6.
A similar result that involves the classical derivatives of a function is the following:
Proposition 7 For ${k}$ a non-negative integer, suppose that ${f\in C^k({\mathbb R}^n)}$ and that ${\partial^\alpha f \in L^1({\mathbb R}^n)}$ for all ${|\alpha|\leq k}$, and ${\partial^\alpha f\in C_o({\mathbb R}^n)}$ for ${|\alpha|\leq k-1}$. Then
$\displaystyle \widehat {\partial^\alpha f}(\xi)=(2\pi i \xi)^\alpha \hat f(\xi).$
Exercise 6 Prove Proposition 7.
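The identity of Proposition 7 can be tested numerically for $f(x)=e^{-\pi x^2}$ (a sketch): here $f'(x)=-2\pi x e^{-\pi x^2}$, and its transform should equal $2\pi i\xi\,\hat f(\xi)=2\pi i\xi e^{-\pi\xi^2}$, which is purely imaginary.

```python
import cmath
import math

def fprime_hat(xi, lo=-8.0, hi=8.0, steps=16000):
    # midpoint Riemann sum for the transform of f'(x) = -2 pi x e^{-pi x^2}
    h = (hi - lo) / steps
    total = 0j
    for k in range(steps):
        x = lo + (k + 0.5) * h
        total += (-2 * math.pi * x * math.exp(-math.pi * x * x)) \
                 * cmath.exp(-2j * math.pi * x * xi)
    return total * h

for xi in [0.3, 1.0]:
    predicted = 2 * math.pi * xi * math.exp(-math.pi * xi * xi)
    # imaginary part of the numerical transform vs 2 pi xi e^{-pi xi^2}
    print(xi, fprime_hat(xi).imag, predicted)
```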
Several remarks are in order. First of all observe that Propositions 6,7 assert that the following commutation relations are true
(i) ${\mathcal F (-2\pi i x_k) = \frac{\partial}{\partial \xi_k} \mathcal F ,}$
(ii) ${ \mathcal F \frac{\partial}{\partial x_k} = 2\pi i \xi_k \mathcal F,}$
where here we abuse notation and denote by ${2\pi i x_k }$ the operator of multiplication by ${2\pi i x_k}$. Thus the Fourier transform turns derivatives to multiplication by the corresponding variable, and vice versa, it turns multiplication by the coordinate variable to a partial derivative, whenever this is technically justified. This is a manifestation of the heuristic principle that smoothness of a function translates to decay of the Fourier transform and on the other hand, decay of a function at infinity translates to smoothness of the Fourier transform.
A second remark is that these commutation relations generalize, in an obvious way, to higher derivatives. To make this more precise, let ${P}$ be a polynomial on ${{\mathbb R}^n}$:
$\displaystyle P(x)=\sum_{|\alpha|\leq d}c_\alpha x^\alpha .$
Slightly abusing notation again we write ${P(\partial ^\alpha)}$ for the differential operator
$\displaystyle P(\partial^\alpha)=\sum_{|\alpha|\leq d}c_\alpha {\partial^\alpha} .$
We then have that the following commutation relations are true
(i’) ${\mathcal F P(-2\pi i x) = P(\partial^\alpha) \mathcal F ,}$
(ii’) ${ \mathcal F P(\partial^\alpha) =P( 2\pi i \xi) \mathcal F.}$
Observe that for “nice” functions, for example ${f\in C_c({\mathbb R}^n)}$ or ${f\in \mathcal S({\mathbb R}^n)}$, Propositions 6 and 7 are automatically satisfied.
2. Inverting the Fourier transform
One of the most important problems in the theory of Fourier transforms is that of the inversion of the Fourier transform. That is, given the Fourier transform ${\hat f}$ of an ${L^1}$ function, when can we recover the original function ${f}$ from ${\hat f}$? We begin with a simple case where the recovery is quite easy.
Proposition 8 Let ${f\in L^1({\mathbb R}^n)}$ be such that ${\hat f \in L^1({\mathbb R}^n)}$. Then the inversion formula holds true. In particular we have that
$\displaystyle f(x)=\int_{{\mathbb R}^n} \hat f(\xi) e^{2\pi i x\cdot \xi} d\xi,$
for almost every ${x\in {\mathbb R}^n}$.
Proof: The proof is based on the following calculation. For ${a>0}$ we have that
$\displaystyle \begin{array}{rcl} \int_{{\mathbb R}^n}\hat f(\xi)e^{-a|\xi|^2}e^{2\pi ix\cdot \xi} d\xi&=&\int_{{\mathbb R}^n}\int_{{\mathbb R}^n} f(y) e^{-2\pi i y\cdot \xi}dy\, e^{-a|\xi|^2}e^{2\pi ix\cdot \xi} d\xi\\ \\ &=&\int_{{\mathbb R}^n} f(x+y) \int_{{\mathbb R}^n} e^{-2\pi i y\cdot\xi}e^{-a|\xi|^2} d\xi\, dy\\ \\ &=&\Big(\frac{\pi}{a}\Big)^\frac{n}{2}\int_{{\mathbb R}^n}f(x+y) e^{-\frac{\pi^2|y|^2}{a}} dy \\ \\ &=& \int_{{\mathbb R}^n}f\big(x+\sqrt{a/\pi}\,y\big) e^{-\pi|y|^2}dy, \end{array}$
where in the last equality we have used Example 1 and the change of variables ${y\mapsto \sqrt{a/\pi}\,y}$. We can thus write
$\displaystyle \begin{array}{rcl} \int_{{\mathbb R}^n}\bigg|\int_{{\mathbb R}^n} \hat f(\xi)e^{-a|\xi|^2} e^{2\pi i x\cdot \xi} d\xi -f(x) \bigg| dx&=& \int_{{\mathbb R}^n}\bigg|\int_{{\mathbb R}^n} f\big(x+\sqrt{a/\pi}\,y\big) e^{-\pi|y|^2}dy-f(x)\bigg| dx\\ \\ &=& \int_{{\mathbb R}^n}\bigg|\int_{{\mathbb R}^n} \big\{f\big(x+\sqrt{a/\pi}\,y\big) -f(x) \big\}e^{-\pi |y|^2} dy \bigg|dx \\ \\ &\leq &\int_{{\mathbb R}^n}\int_{{\mathbb R}^n} \big|f\big(x+\sqrt{a/\pi}\,y\big)-f(x)\big|dx\, e^{-\pi|y|^2}dy \\ \\ &=& \int_{{\mathbb R}^n}\big\|f-\tau_{-\sqrt{a/\pi}\,y}f\big\|_{L^1({\mathbb R}^n)}e^{-\pi|y|^2}dy. \end{array}$
Since ${\|f-\tau_{-\sqrt{a/\pi}\,y}f\|_{L^1({\mathbb R}^n)}\rightarrow 0}$ as ${a\rightarrow 0}$ and ${\|f-\tau_{-\sqrt{a/\pi}\,y}f\|_{L^1({\mathbb R}^n)}\leq 2\|f\|_{L^1({\mathbb R}^n)}}$, Lebesgue’s dominated convergence theorem shows that ${f}$ is almost everywhere equal to the ${L^1}$-limit of the sequence of functions
$\displaystyle g_a(x)=\int_{{\mathbb R}^n}\hat f(\xi)e^{-a|\xi|^2}e^{2\pi ix\cdot \xi} d\xi ,$
as ${a\rightarrow 0}$ (technically speaking we need to consider a sequence ${a_k\rightarrow0}$). On the other hand since ${\hat f\in L^1({\mathbb R}^n)}$, another application of Lebesgue’s dominated convergence theorem shows that the ${L^1}$-limit of the functions ${g_a}$ is also equal to ${\int_{{\mathbb R}^n}\hat f(\xi)e^{2\pi i x\cdot \xi}d\xi}$. This completes the proof of the proposition. $\Box$
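As an illustration of Proposition 8 (not part of the proof): for $f(x)=e^{-2\pi|x|}$ both $f$ and $\hat f(\xi)=1/(\pi(1+\xi^2))$ (Example 2 with $n=a=1$) are integrable, so the inversion integral must reproduce $f$ pointwise; truncating it to a large interval does so up to a small tail error.

```python
import cmath
import math

def inverted(x, cutoff=1000.0, steps=100000):
    # midpoint Riemann sum for int f_hat(xi) e^{2 pi i x xi} d xi on [-cutoff, cutoff];
    # the discarded tail is at most 2/(pi * cutoff), about 6.4e-4 here
    h = 2 * cutoff / steps
    total = 0j
    for k in range(steps):
        xi = -cutoff + (k + 0.5) * h
        total += cmath.exp(2j * math.pi * x * xi) / (math.pi * (1 + xi * xi))
    return total * h

for x in [0.0, 0.25, 1.0]:
    # truncated inversion integral vs the original f(x) = e^{-2 pi |x|}
    print(x, inverted(x).real, math.exp(-2 * math.pi * abs(x)))
```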
An immediate corollary is that the Fourier transform is a one-to-one operator:
Corollary 9 Let ${f_1,f_2\in L^1({\mathbb R}^n)}$ and suppose that ${\hat f_1(\xi)=\hat f_2(\xi)}$ for all ${\xi\in{\mathbb R}^n}$. Then we have that ${f_1(x)=f_2(x)}$ for almost every ${x\in{\mathbb R}^n}$.
The proof is an obvious application of Proposition 8.
Exercise 7 (i) Suppose that ${f\in C_c ^{n+1}({\mathbb R}^n)}$. Show that
$\displaystyle |\hat f(\xi)| \lesssim (1+|\xi|^2)^{-(n+1)/2}.$
Conclude that whenever ${f\in C^{n+1} _c({\mathbb R}^n)}$, we have that
$\displaystyle f(x)=\int_{{\mathbb R}^n} \hat f(\xi) e^{2\pi i x\cdot \xi} d\xi.$
(ii) Show that ${\mathcal F}$ maps the Schwartz space ${\mathcal S({\mathbb R}^n)}$ onto ${\mathcal S({\mathbb R}^n)}$.
Exercise 8 The purpose of this exercise is to show that ${\mathcal F(L^1({\mathbb R}^n))}$ is a proper subset of ${C_o({\mathbb R}^n)}$ but also that it is a dense subset of ${C_o({\mathbb R}^n)}$.
(i) Show that ${\mathcal F(L^1({\mathbb R}))}$ is a proper subset of ${C_o({\mathbb R})}$.
Hint: While there are different ways to do that, a possible approach is the following. For simplicity we just consider the case ${n=1}$:
(a) Show that ${\big| \int_a ^b \frac{\sin x}{x}dx\big| \leq B}$ for all ${0\leq |a|<|b|<\infty}$ where ${B>0}$ is a numerical constant that does not depend on ${a,b}$.
(b) Suppose that ${f\in L^1({\mathbb R})}$ is such that ${\hat f}$ is an odd function. Use (a) to show that for every ${b>0}$ we have that
$\displaystyle \bigg|\int_1 ^b \frac{\hat f(\xi)}{\xi} d\xi\bigg| \leq A,$
for some numerical constant ${A>0}$ which does not depend on ${b}$.
(c) Construct a function ${g\in C_o({\mathbb R})}$ which is not the Fourier transform of an ${L^1}$ function. To do this note that it is enough to find a function ${g\in C_o({\mathbb R})}$ which does not satisfy the condition in (b).
(ii) Show that ${\overline{\mathcal F(L^1({\mathbb R}^n))}=C_o({\mathbb R}^n)}$ where the closure is taken in the ${C_o}$ topology.
Hint: Observe that ${C_c ^\infty({\mathbb R}^n)}$ is dense in ${C_o({\mathbb R}^n)}$, in the topology of the supremum norm.
It is convenient to define the formal inverse of the Fourier transform in the following way. For ${f\in L^1({\mathbb R}^n)}$ we set
$\displaystyle \mathcal F^{-1}(f)(\xi)=\mathcal F^*(f)(\xi)=\check f(\xi)=\int_{{\mathbb R}^n}f(x) e^{2\pi i x\cdot \xi}dx=\hat f(-\xi)=\tilde {\hat f}(\xi)=\hat{\tilde f}(\xi).$
Here we denote by ${\tilde g}$ the reflection of a function ${g}$, that is, ${\tilde g(x)=g(-x)}$. Observe that ${\mathcal F^*}$ is the conjugate of the Fourier transform. Thus the operator ${\mathcal F^*}$ is very closely connected to the operator ${\mathcal F}$ and enjoys essentially the same symmetries and properties.
As we shall see later on, it is also the adjoint of the Fourier transform with respect to the ${L^2}$ inner product
$\displaystyle \langle f,g\rangle =\int_{{\mathbb R}^n} f\bar g.$
Although we haven’t yet defined the Fourier transform on ${L^2}$ we can calculate for ${f,g\in L^1\cap L^2({\mathbb R}^n)}$ that
$\displaystyle \begin{array}{rcl} \int_{{\mathbb R}^n} (\mathcal F f)\bar g &=&\int_{{\mathbb R}^n}\int_{{\mathbb R}^n}f(x)e^{-2\pi i x\cdot \xi} dx \bar g(\xi) d\xi \\ \\ &=&\int_{{\mathbb R}^n} f(x)\overline{\int_{{\mathbb R}^n} g(\xi)e^{2\pi i x\cdot \xi}d\xi} \ dx\\ \\ &=&\int_{{\mathbb R}^n} f \overline{ (\mathcal F^*(g))} \end{array}$
Proposition 8 claims that ${\mathcal F^*}$ is also the inverse of the Fourier transform in the sense that
$\displaystyle \mathcal F^* \mathcal F f=f,$
whenever ${f,\mathcal F f\in L^1({\mathbb R}^n)}$.
The proof of Proposition 8 is quite interesting in the following ways. First of all observe that we have actually showed that whenever ${f\in L^1({\mathbb R}^n)}$, ${f}$ is equal (a.e.) to the ${L^1}$ limit of the functions
$\displaystyle \int_{{\mathbb R}^n}\hat f(\xi)e^{-a|\xi|^2}e^{2\pi ix\cdot \xi} d\xi ,$
as ${a\rightarrow 0}$. This does not require any additional hypothesis and actually provides us with a method of inverting the Fourier transform of any ${L^1}$ function, at least in the ${L^1}$ sense. The second remark is that the proof of Proposition 8 can be generalized to different methods of summability. Indeed, let ${\Phi\in L^1({\mathbb R}^n)}$ be such that ${\phi=\hat \Phi\in L^1({\mathbb R}^n)}$ and ${\Phi(0)=1}$. For ${\epsilon>0}$ we consider the integrals
$\displaystyle \int_{{\mathbb R}^n} \hat f(\xi) \Phi(\epsilon \xi) e^{2\pi i x\cdot \xi}d\xi, \ \ \ \ \ (2)$
which we will call the ${\Phi}$-means of the integral ${\int_{{\mathbb R}^n}\hat f (\xi) e^{2\pi i x\cdot \xi}d\xi}$, or just the ${\Phi}$-means of ${\check f}$. Using the multiplication formula in Proposition 4 we can rewrite the means (2) as
$\displaystyle \int_{{\mathbb R}^n} \hat f(\xi) \Phi(\epsilon \xi) e^{2\pi i x\cdot \xi} d\xi =(f*\tilde \phi_\epsilon)(x), \quad x\in {\mathbb R}^n. \ \ \ \ \ (3)$
The following more general version of Proposition 8 is true.
Proposition 10 Let ${\Phi\in L^1({\mathbb R}^n)}$ be such that ${\phi=\hat \Phi\in L^1({\mathbb R}^n)}$ with ${\int \phi =1}$. We then have that the ${\Phi}$-means of ${\int \hat f(\xi)e^{2\pi i x\cdot \xi}d\xi}$,
$\displaystyle \int_{{\mathbb R}^n}\hat f(\xi) \Phi(\epsilon \xi) e^{2\pi i x\cdot\xi}d\xi,$
converge to ${f}$ in ${L^1}$, as ${\epsilon\rightarrow 0}$.
Proof: The proof is just a consequence of formula (3). Indeed, ${\tilde \phi_\epsilon}$ is an approximation to the identity since ${\tilde \phi\in L^1}$ and ${\int \tilde {\phi}(x)dx=1}$ and thus ${f*\tilde \phi_\epsilon}$ converges to ${f}$ in the ${L^1}$ norm as ${\epsilon\rightarrow 0}$. $\Box$
Proposition 8 says that the inversion formula is true whenever ${f,\hat f\in L^1({\mathbb R}^n)}$. This however is not the most natural assumption since the Fourier transform of an ${L^1}$ function need not be integrable. The idea behind Proposition 10 is to “force” ${\hat f}$ into ${L^1}$ by multiplying it by the ${L^1}$ function ${\Phi(\epsilon\xi)}$. Thus, we artificially impose some decay on ${\hat f}$. This is equivalent to smoothing out the function ${f}$ itself by convolving it with a smooth function ${\tilde \phi_\epsilon}$. Although no smoothness is explicitly assumed in Proposition 10, there is a hidden smoothness hypothesis in the requirement ${\Phi, \phi \in L^1}$. Indeed, we could have replaced this assumption by directly assuming that ${\phi}$ is (say) a smooth function with compact support and taking ${\Phi=\hat \phi}$; then the conclusion ${\Phi\in L^1({\mathbb R}^n)}$ would follow automatically. The trick of multiplying the Fourier transform of a general ${L^1}$ function with an appropriate function in ${L^1}$ or, equivalently, smoothing out the function ${f}$ itself allows us then to invert the Fourier transform, at least in the ${L^1}$-sense. This process is usually referred to as a summability method.
As we shall see now, the inversion of a Fourier transform by means of a summability method is also valid in a pointwise sense. Because of formula (3), in order to understand the pointwise convergence of the ${\Phi}$-means of ${\check{f}}$ we have to examine the pointwise convergence of the convolution ${f*\phi_\epsilon}$ to ${f}$, whenever ${\phi }$ is an approximation to the identity.
Definition 11 Let ${f\in L^1 _{\textnormal {loc}} ({\mathbb R}^n)}$. The Lebesgue set of ${f}$ is the set of points ${x\in{\mathbb R}^n}$ such that
$\displaystyle \lim_{r\rightarrow 0}\frac{1}{r^n}\int_{|y|< r}|f(x-y)-f(x)|dy=0.$
The Lebesgue set of a locally integrable function ${f}$ is closely related to the set where the integral of ${f}$ is differentiable:
Definition 12 Let ${f\in L^1 _{\textnormal {loc}} ({\mathbb R}^n)}$. The set of points where the integral of ${f}$ is differentiable is the set of points ${x\in{\mathbb R}^n}$ such that
$\displaystyle \lim_{r\rightarrow 0} \frac{1}{\Omega_n r^n}\int_{|y|< r}f(x-y)\,dy=f(x),$

where ${\Omega_n}$ is the volume of the unit ball ${B(0,1)}$ in ${{\mathbb R}^n}$. In other words, we say that the integral of ${f}$ is differentiable at some point ${x\in {\mathbb R}^n}$ if the average of ${f}$ over Euclidean balls centered at ${x}$ converges to the value of ${f}$ at the point ${x}$.
We shall come back to these notions a bit later in the course when we will introduce the maximal function of ${f}$ which is just the maximal average of ${f}$ around every point. For now we will use as a black box the following theorem:
Theorem 13 Let ${f\in L^1 _{\textnormal {loc}} ({\mathbb R}^n)}$. Then the integral of ${f}$ is differentiable at almost every point ${x\in {\mathbb R}^n}$.
While postponing the proof of this theorem for later on in the course, we can already see the following simple proposition connecting the Lebesgue set of ${f}$ to the set of points where the integral of ${f}$ is differentiable. In particular we see that almost every point in ${{\mathbb R}^n}$ is a Lebesgue point of ${f}$.
Corollary 14 Let ${f\in L^1 _{\textnormal {loc}} ({\mathbb R}^n)}$. Then almost every ${x\in{\mathbb R}^n}$ is a Lebesgue point of ${f}$.
Proof: For any rational number ${q}$ we have that the function ${f(x)-q}$ is locally integrable. Theorem 13 then implies that
$\displaystyle \lim_{r\rightarrow 0} \frac{1}{r^n}\int_{|y|\leq r}\big\{|f(x-y)-q|-|f(x)-q|\big\}dy=0,$
for almost every ${x\in {\mathbb R}^n}$. Thus the set ${F_q}$ where the previous statement is not true has measure zero and so does the set ${F:=\cup_{q\in{\mathbb Q}} F_q}$. Now let ${x\in {\mathbb R}^n \setminus F}$; we will show that ${x}$ is a Lebesgue point of ${f}$. Let ${\epsilon>0}$ and choose ${q\in {\mathbb Q}}$ such that ${|f(x)-q|<\epsilon/2}$. We then have
$\displaystyle \begin{array}{rcl} \frac{1}{\Omega_n r^n}\int_{|y|<r}|f(x-y)-f(x)|dy &\leq& \frac{1}{\Omega_n r^n}\int_{|y|<r}|f(x-y)-q|dy+|q-f(x)|. \end{array}$
The first summand converges to ${|f(x)-q|<\epsilon/2}$ as ${r\rightarrow 0}$ since ${x\notin F}$ while the second summand is smaller than ${\epsilon/2}$. This shows that the Lebesgue set of ${f}$ is contained in ${{\mathbb R}^n\setminus F}$ and thus that almost every point in ${{\mathbb R}^n}$ is a Lebesgue point of ${f}$. $\Box$
We can now give the following pointwise convergence result for approximations to the identity.
Theorem 15 Let ${\phi \in L^1({\mathbb R}^n)}$ with ${\int \phi=1}$. We define ${\psi(x):={\mathrm{esssup}}_{|y|\geq |x|}|\phi(y)|}$. If ${\psi\in L^1({\mathbb R}^n)}$ and ${f\in L^p({\mathbb R}^n)}$ for ${1\leq p \leq \infty}$ then
$\displaystyle \lim_{\epsilon\rightarrow 0} (f*\phi_\epsilon)(x)=f(x),$
whenever ${x}$ is a Lebesgue point for ${f}$. If in addition $\hat \phi \in L^1(\mathbb R^n)$ then the $\hat \phi$-means of $\check f$,
$\displaystyle \int_{\mathbb R^n} \hat f(\xi) \hat \phi(\epsilon \xi) e^{2\pi i x\cdot \xi} d\xi,$
converge to $f (x)$ as $\epsilon \to 0$ for almost every $x \in \mathbb R ^n$.
Proof: Let ${x}$ be a Lebesgue point of ${f}$ and fix ${\delta>0}$. By Corollary 14 there exists ${\eta>0}$ such that
$\displaystyle \frac{1}{r^n}\int_{|y|<r}|f(x-y)-f(x)|dy<\delta, \ \ \ \ \ (4)$

whenever ${0<r<\eta}$.
We can estimate as usual
$\displaystyle \begin{array}{rcl} |(f*\phi_\epsilon)(x)-f(x)|&=&\bigg|\int_{{\mathbb R}^n}[f(x-y)-f(x)]\phi_\epsilon(y)dy\bigg|\\ \\ &\leq& \bigg|\int_{|y|<\eta}[f(x-y)-f(x)]\phi_\epsilon(y)dy\bigg| \\ \\ &&+\bigg|\int_{|y|\geq \eta}[f(x-y)-f(x)]\phi_\epsilon(y)dy\bigg| \\ \\ &=:& I_1+I_2. \end{array}$
We claim that
$\displaystyle \psi(x)\lesssim_{n,\phi} |x|^{-n}, \quad x\in{\mathbb R}^n. \ \ \ \ \ (5)$
First of all observe that ${\psi}$ is radially decreasing. We will abuse notation and write ${\psi(x)=\psi(|x|)}$. For every ${r>0}$ we have that
$\displaystyle \int_{r/2\leq |x|\leq r}\psi(x)\,dx\;\geq\; \psi(r)\,\Omega_n r^n(1-2^{-n}),$

since ${\psi}$ is radially decreasing. Now, since ${\psi\in L^1}$, the left hand side tends to ${0}$ both as ${r\rightarrow 0}$ and as ${r\rightarrow \infty}$, and the claim follows.
We write (4) in polar coordinates to get
$\displaystyle \frac{1}{r^n}\int_{S^{n-1}}\int_0 ^r |f(x-sy')-f(x)|s^{n-1}ds d\sigma_{n-1}(y')<\delta.$
Setting ${g(s)=\int_{S^{n-1}}|f(x-sy')-f(x)| d\sigma_{n-1}(y')}$ we can rewrite the previous estimate in the form
$\displaystyle G(r):=\int_0 ^r g(s)s^{n-1}ds \leq \delta r^n,$
whenever ${|r|<\eta}$. We now estimate ${I_1}$ as follows
$\displaystyle \begin{array}{rcl} I_1&\leq& \int_{S^{n-1}}\int_0 ^\eta |f(x-ry')-f(x)|\psi_\epsilon(r)\,d\sigma_{n-1}(y')r^{n-1}dr \\ \\ &=&\int_0 ^\eta g(r)r^{n-1}\frac{1}{\epsilon^n}\psi(r/\epsilon)dr\\ \\ &=&\int_0 ^\eta G'(r)\frac{1}{\epsilon^n}\psi(r/\epsilon) dr. \end{array}$
At this point the proof simplifies a bit if we assume that ${\psi}$ is differentiable. In this case we have that ${\psi'\leq 0}$ and we can estimate the last integral by
$\displaystyle \begin{array}{rcl} \int_0 ^\eta G'(r)\frac{1}{\epsilon^n}\psi(r/\epsilon) dr&=&\frac{1}{\epsilon^n}G(\eta)\psi(\frac{\eta}{\epsilon})-\int_0 ^\eta G(r)\frac{1}{\epsilon^{n+1}}\psi'(\frac{r}{\epsilon})dr\\ \\ &\lesssim_{n,\phi}& \delta - \delta \frac{1}{\epsilon^{n+1}}\int_0 ^\eta r^n\psi'(\frac{r}{\epsilon})dr \\ \\ &=&\delta +\delta \frac{n}{\epsilon^n}\int_0 ^\eta r^{n-1}\psi(r/\epsilon)dr \\ \\ &\leq & \delta\bigg(1+\frac{n}{\omega_{n-1}}\int_{{\mathbb R}^n}\psi(x)dx\bigg). \end{array}$
The argument actually goes through without the assumption that ${\psi}$ is differentiable by a clever use of the Riemann-Stieljes integral. Note that the function ${\psi}$ is decreasing thus almost everywhere differentiable. This shows that ${I_1\lesssim_{n,\phi} \delta}$.
For ${I_2}$ we estimate as follows
$\displaystyle \begin{array}{rcl} I_2\leq \|f\|_p\|\psi_\epsilon\chi_{\{|x|\geq \eta \}}\|_{p'}+|f(x)|\|\chi_{\{|x|\geq \eta \}}\psi_\epsilon\|_1. \end{array}$
For the second summand we have that
$\displaystyle \|\chi_{\{|x|\geq \eta \}}\psi_\epsilon\|_1=\frac{1}{\epsilon^n}\int_{|x|\geq \eta}\psi(x/\epsilon)dx=\int_{|x|\geq \eta/\epsilon}\psi(x)dx\rightarrow 0,$
as ${\epsilon\rightarrow 0}$, since ${\psi\in L^1}$.
On the other hand, we have
$\displaystyle \begin{array}{rcl} \| \psi_\epsilon\chi_{\{|x|\geq \eta \}}\|_{p'}&=&\bigg(\int_{|x|\geq \eta}[\psi_\epsilon(x)]^{p'}dx\bigg)^\frac{1}{p'}=\bigg(\int_{|x|\geq \eta}[\psi_\epsilon(x)]^\frac{p'}{p}\psi_\epsilon(x)dx\bigg)^\frac{1}{p'}\\ \\ &\leq & \|\psi_\epsilon(x)\chi_{\{|x|\geq \eta \}}\|_\infty ^\frac{1}{p} \|\psi_\epsilon(x)\chi_{\{|x|\geq \eta \}}\|_1 ^\frac{1}{p'}\leq \|\psi_\epsilon(x)\chi_{\{|x|\geq \eta \}}\|_\infty ^\frac{1}{p}\|\psi\|_1 ^\frac{1}{p'}. \end{array}$
Now since ${\psi_\epsilon}$ is decreasing we have
$\displaystyle \|\psi_\epsilon(x)\chi_{\{|x|\geq \eta \}}\|_\infty \leq \psi_\epsilon(\eta)=\frac{1}{\epsilon^n}\psi(\eta/\epsilon)=\eta^{-n}\big(\frac{\eta}{\epsilon}\big)^n\psi(\eta/\epsilon)\rightarrow 0,$
when ${\epsilon\rightarrow 0}$.
We have shown that
$\displaystyle \limsup_{\epsilon\rightarrow 0} |(f*\phi_\epsilon)(x)-f(x)|\lesssim_{n,\phi} \delta,$
whenever ${x}$ is a Lebesgue point of ${f}$. Since ${\delta>0}$ was arbitrary this completes the proof of the theorem.$\Box$
Remark 2 The previous theorem is true in the case that ${\phi}$ is a radially decreasing function in ${L^1}$ or, in general, a function that satisfies a bound of the form ${|\phi(x)|\lesssim (1+|x|)^{-(n+\delta)}}$ for some ${\delta>0}$.
We conclude the discussion on the inversion of the Fourier transform with a useful corollary.
Corollary 16 Let ${f\in L^1({\mathbb R}^n)}$ and assume that ${f}$ is continuous at ${0}$ and that ${\hat f \geq 0}$. Then ${\hat f\in L^1({\mathbb R}^n)}$ and
$\displaystyle f(x)=\int_{{\mathbb R}^n}\hat f(\xi) e^{2\pi i x\cdot \xi}d\xi,$
for almost every ${x\in{\mathbb R}^n}$. In particular,
$\displaystyle f(0)=\int_{{\mathbb R}^n} \hat f(\xi)d\xi.$
Proof: By identity (3) we have that
$\displaystyle \int_{{\mathbb R}^n}\hat f(\xi) \Phi(\epsilon \xi) e^{2\pi i x\cdot \xi} d\xi=(f*\tilde \phi_\epsilon)(x),$
for all ${x\in{\mathbb R}^n}$. Observe that the functions on both sides of this identity are continuous functions of ${x}$. Now let ${\phi,\Phi}$ satisfy the conditions of Theorem 15. Assume furthermore that ${\Phi}$ is non-negative and continuous at ${0}$. For example we can consider the function ${\Phi(\xi)=\phi(\xi)=e^{-\pi |\xi|^2}}$. Now since the point ${0}$ is a point of continuity of ${f}$, it certainly belongs to the Lebesgue set of ${f}$. Thus we have that ${\lim_{\epsilon\rightarrow 0} (f*\tilde \phi_\epsilon)(0)=f(0)}$ which gives
$\displaystyle \lim_{\epsilon\rightarrow 0}\int_{{\mathbb R}^n} \hat f(\xi) \Phi(\epsilon \xi) d\xi=f(0).$
Since ${\hat f \Phi}$ is nonnegative, we can use Fatou's lemma along any sequence ${\epsilon_k\rightarrow 0}$ to write
$\displaystyle \int_{{\mathbb R}^n}\hat f(\xi) d\xi= \int_{{\mathbb R}^n}\liminf_{\epsilon_k\rightarrow 0} \hat f(\xi) \Phi(\epsilon _k\xi) d\xi\leq f(0),$
so ${\hat f \in L^1({\mathbb R}^n)}$. Thus the inversion formula holds true for ${f}$ and we get
$\displaystyle f(x)=\int_{{\mathbb R}^n} \hat f(\xi) e^{2\pi i x\cdot \xi } d\xi,$
for almost every ${x\in {\mathbb R}^n}$. However
$\displaystyle f(0)=\lim_{\epsilon\rightarrow 0}\int_{{\mathbb R}^n} \hat f(\xi) \Phi(\epsilon \xi) d\xi=\int_{{\mathbb R}^n}\lim_{\epsilon \rightarrow 0}\hat f(\xi) \Phi(\epsilon \xi) d\xi=\int_{{\mathbb R}^n}\hat f(\xi) d\xi,$
since ${\hat f\in L^1}$. $\Box$
2.1. Two special summability methods
We describe in detail two summability methods that are of special interest. These are based on the Examples 1 and 2 in the beginning of this set of notes.
The Gauss-Weierstrass summability method. By dilating the function ${W(x)=e^{-\pi|x|^2}}$ we get
$\displaystyle W(x,t):= W_{\sqrt{4\pi t}}(x)=(4\pi t)^{-\frac{n}{2}}e^{- \frac{|x|^2}{4t} }.$
The function ${W(x,t),\ t>0,}$ is called the Gauss kernel and it gives rise to the Gauss-Weierstrass method of summability. The Fourier transform of ${W}$ is
$\displaystyle \widehat{W_{\sqrt{4\pi t}} }(\xi)=\widehat W(\sqrt{4\pi t}\,\xi)=e^{-4\pi^2t|\xi|^2}.$
It is also clear that
$\displaystyle \int_{{\mathbb R}^n} W(x,t)dx=1,$
for all ${t>0}$. The discussion in the previous sections applies to the Gauss-Weierstrass summability method and we have that the means
$\displaystyle w(x,t):=\int_{{\mathbb R}^n}f(y)W(y-x,t)dy=\int_{{\mathbb R}^n} \hat f(\xi)e^{-4\pi^2 t |\xi|^2}e^{2\pi i x\cdot \xi} d\xi$
converge to ${f}$ in ${L^1({\mathbb R}^n)}$ and also in the pointwise sense, for every ${x}$ in the Lebesgue set of ${f}$. One of the aspects of Gauss-Weierstrass summability is that the function ${w(x,t)}$ defined above satisfies the heat equation:
$\displaystyle \begin{array}{rcl} \frac{\partial w}{\partial t}-\Delta w &=& 0,\quad \mbox{ on }{\mathbb R}^{n+1} _+,\\ w(x,0) &=&f(x),\quad x\in{\mathbb R}^n. \end{array}$
To see that the Gauss-Weierstrass means of ${\check f}$ satisfy the heat equation with initial data ${f}$, one can use the formula for ${w(x,t)}$ and calculate everything explicitly. However, it is easier to consider the Fourier transform of the solution ${u(x,t)}$ of the heat equation in the ${x}$ variable and show that it must agree with the Fourier transform of ${w(x,t)}$, again in the ${x}$ variable. Observe that under suitable assumptions on the initial data ${f}$ we get that the solution ${w(x,t)}$ converges to the initial data ${f}$ as time ${t\rightarrow 0}$.
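As a quick numerical sanity check (our own addition, not part of the original notes), one can verify the heat equation for the one-dimensional Gauss kernel by finite differences; the evaluation point and step size below are arbitrary choices:

```python
from math import pi, exp

def W(x, t):
    # one-dimensional Gauss kernel: W(x, t) = (4*pi*t)^(-1/2) * exp(-x^2/(4t))
    return (4 * pi * t) ** -0.5 * exp(-x * x / (4 * t))

x, t, h = 0.7, 0.5, 1e-4
dW_dt = (W(x, t + h) - W(x, t - h)) / (2 * h)                   # central difference in t
d2W_dx2 = (W(x + h, t) - 2 * W(x, t) + W(x - h, t)) / h ** 2    # second difference in x
assert abs(dW_dt - d2W_dx2) < 1e-4                              # heat equation: W_t = W_xx
```

The same check works at any interior point ${(x,t)}$ with ${t>0}$, since the kernel solves the equation exactly and the finite-difference error is of order ${h^2}$.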
Exercise 9 Let ${f(x)=e^{-\pi x^2}}$, ${x\in{\mathbb R}}$. Using the properties of the Fourier transform show that the function ${\hat f}$ satisfies the initial value problem
$\displaystyle \begin{array}{rcl} u'+2\pi x u&=&0,\\ \\ u(0)&=&1. \end{array}$
Solve the initial value problem to give an alternative proof of the fact that ${\hat f(\xi)=e^{-\pi \xi^2}}$. Observe that the differential equation above is invariant under the Fourier transform.
The Abel summability method. We consider the function ${P(x)=c_n\frac{1}{(1+|x|^2)^\frac{n+1}{2}}}$ where ${c_n=\frac{\Gamma((n+1)/2)}{\pi^\frac{n+1}{2}}}$. By dilating the function ${P}$ we have
$\displaystyle P(x,t):= P_t(x)=c_n\frac{t}{(t^2+|x|^2)^\frac{n+1}{2}}.$
The function ${P(x,t),\ t>0,}$ is called the Poisson kernel (for the upper half plane) and it gives rise to the Abel method of summability. The Fourier transform of ${P}$ is
$\displaystyle \widehat{P_t}(\xi)=\hat P(t\xi)=e^{-2\pi t|\xi|}.$
This is just a consequence of the calculation in Example 2, the inversion formula and the easily verified fact that ${P\in L^1({\mathbb R}^n)}$. It is also clear by a direct calculation or through the previous Fourier transform relation that
$\displaystyle \int_{{\mathbb R}^n} P(x,t)dx=1,$
for all ${t>0}$. Everything we have discussed in these notes applies to the Abel summability method. In particular we have that whenever ${f\in L^1({\mathbb R}^n)}$, the means
$\displaystyle u(x,t):=\int_{{\mathbb R}^n}f(y)P(y-x,t)dy=\int_{{\mathbb R}^n} \hat{f}(\xi) e^{-2\pi t|\xi|}e^{2\pi i x\cdot \xi} d\xi,$
converge to ${f}$ in ${L^1}$ as ${t\rightarrow 0}$ and also in the pointwise sense for all ${x}$ in the Lebesgue set of ${f}$. The function ${u(x,t)}$ is also called the Poisson integral or extension of ${f}$. It is not difficult to see that it satisfies the Dirichlet problem
$\displaystyle \begin{array}{rcl} \Delta u &=&0, \quad \mbox{ on }{\mathbb R}^{n+1} _+,\\ u(x,0) &=&f(x),\quad x\in{\mathbb R}^n. \end{array}$
Here we denote by ${{\mathbb R}^{n+1} _+}$ the upper half plane ${{\mathbb R}^{n+1} _+=\{(x,t):x\in {\mathbb R}^n, t>0\}}$. Thus, if we are given an ${L^1}$ function on the boundary ${{\mathbb R}^n}$, the Poisson integral of ${f}$ provides us with a harmonic function ${u(x,t)}$ in the upper half plane which has boundary value ${f}$, in the sense that ${u(x,t)}$ converges to ${f}$ as ${t\rightarrow 0}$ both in the ${L^1}$ sense as well as almost everywhere.
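As a numerical illustration (our own addition, in dimension ${n=1}$ where ${c_1=1/\pi}$), a midpoint-rule integration confirms that the Poisson kernel has total mass ${1}$, up to the explicitly controlled tail beyond the truncation radius:

```python
from math import pi

def P(x, t):
    # one-dimensional Poisson kernel: c_1 = Gamma(1)/pi = 1/pi
    return (1 / pi) * t / (t * t + x * x)

t, R, n = 0.5, 1000.0, 400_000
h = 2 * R / n
mass = sum(P(-R + (k + 0.5) * h, t) for k in range(n)) * h  # midpoint rule on [-R, R]
assert abs(mass - 1.0) < 1e-3   # the tail |x| >= R contributes about 2t/(pi*R)
```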
Remark 3 The Poisson extension of ${f\in L^p({\mathbb R}^n)}$, ${1\leq p\leq \infty}$
$\displaystyle u(x,t)=\int_{{\mathbb R}^n} f(y)P(x-y,t)dy,$
is harmonic in ${{\mathbb R}_+ ^{n+1}}$, that is, it satisfies the Laplace equation:
$\displaystyle \Delta_{x,t} u(x,t) =\sum_{j=1} ^n \frac{\partial^2}{\partial x_j ^2}u(x,t)+\frac{\partial^2}{\partial t^2} u(x,t)=0.$
This is essentially a consequence of the fact that ${\Delta_{x,t}P(x,t)=0}$ for ${(x,t)\in{\mathbb R}^{n+1} _+}$.
In general, we can ask for a harmonic function ${u(x,t)}$ in ${{\mathbb R}^{n+1} _+}$ which has boundary value ${\lim_{t\rightarrow 0}u(\cdot,t)=f\in L^p({\mathbb R}^n)}$ where the limit is taken in the ${L^p}$ sense. In the case ${p<\infty}$ this extension is uniquely given by the Poisson integral of ${f}$. Also, the same is true if ${p=\infty}$ and ${f\in C_o({\mathbb R}^n)\subset L^\infty({\mathbb R}^n)}$. On the other hand, if we ask for a function which is harmonic in ${{\mathbb R}^{n+1} _+}$, continuous in ${\overline {{\mathbb R}^{n+1} _+}}$ and has boundary function ${f}$, then no assumption on ${f}$ can guarantee that this extension is unique. Take for example ${f=0}$ and ${u_1(x,t)=0}$, ${u_2(x,t)=t}$. The solution of the Dirichlet problem becomes unique though if we require in addition that the harmonic extension is a bounded function in ${{\mathbb R}^{n+1} _+}$. See [SW] for more information.
Exercise 10 Prove the subordination identity:
$\displaystyle e^{-\beta}=\frac{1}{\sqrt{\pi}}\int_0 ^\infty \frac{e^{-u}}{\sqrt{u}}e^{-\beta^2/4u}du,\quad \beta>0. \ \ \ \ \$
For this, first prove the identities
$\displaystyle \begin{array}{rcl} e^{-\beta}&=&\frac{2}{\pi}\int_0 ^\infty\frac{\cos \beta x}{1+x^2}dx,\\ \\ \frac{1}{1+x^2}&=&\int_0 ^\infty e^{-(1+x^2)u}du. \end{array}$
The second identity above is obvious. In order to prove the first, use the theory of residues for the function
$\displaystyle f(z)=\frac{e^{i\beta z}}{1+z^2}.$
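A numerical sanity check of the subordination identity (our own sketch, not a substitute for the proof): substituting ${u=s^2}$ removes the ${1/\sqrt{u}}$ singularity at the origin, after which a simple midpoint rule suffices:

```python
from math import exp, sqrt, pi

def subordination_rhs(beta, h=1e-3, S=12.0):
    # (1/sqrt(pi)) Int_0^inf e^{-u}/sqrt(u) * e^{-beta^2/(4u)} du, with u = s^2:
    # = (2/sqrt(pi)) Int_0^inf exp(-s^2 - beta^2/(4 s^2)) ds  (midpoint rule on (0, S])
    n = int(S / h)
    total = 0.0
    for k in range(n):
        s = (k + 0.5) * h
        total += exp(-s * s - beta * beta / (4 * s * s))
    return 2 / sqrt(pi) * total * h

for beta in (0.5, 1.0, 2.0):
    assert abs(subordination_rhs(beta) - exp(-beta)) < 1e-5
```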
[Update 11 Mar 2011: Exercise 10 added.]
[Update 11 Mar 2011: Statement of Theorem 15 completed with the corollary about the convergence of the means of $\check f$.]
"domain": "wordpress.com",
"url": "https://yannisparissis.wordpress.com/2011/03/10/dmat0101-notes-3-the-fourier-transform-on-l1/",
"openwebmath_score": 0.9937453866004944,
"openwebmath_perplexity": 198.40545162314393,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9867771805808552,
"lm_q2_score": 0.6619228825191872,
"lm_q1q2_score": 0.6531703957742362
} |
http://appliedclassicalanalysis.net/2016/08/02/integrate-int_-11sqrtfrac1x1-x-mathrmd-x-2/

# Integrate $$\int_{-1}^{1}\sqrt{\frac{1+x}{1-x}} \mathrm{d} x$$
This integral appeared in Inside Interesting Integrals by Paul Nahin in the problem set of chapter 3. Using Wolfram Alpha, we get
$$\int\limits_{-1}^{1}\sqrt{\frac{1+x}{1-x}} \mathrm{d} x = \pi \label{eq:1}\tag{1}$$
Nahin suggests the following trig substitution, $$x = \cos(2y)$$.
While the form of the integrand certainly does suggest that some type of trig substitution will work, let us do it with another method. If we write the integral as
$$\int\limits_{-1}^{1} (1+x)^{\frac{1}{2}}(1-x)^{-\frac{1}{2}} \mathrm{d} x$$
this looks like a beta function. From Higher Transcendental Functions (Bateman Manuscript), Volume 1, Section 1.5.1, equation 10, we see
$$\mathrm{B}(x,y) = 2^{1-x-y} \int\limits_{0}^{1} \left[(1+t)^{x-1}(1-t)^{y-1} + (1+t)^{y-1}(1-t)^{x-1}\right] \mathrm{d} t \label{eq:2}\tag{2}$$
Let us begin with the original integral and the right half of the interval of integration
$$\int\limits_{0}^{1} (1+x)^{\frac{1}{2}}(1-x)^{-\frac{1}{2}} \mathrm{d} x = \int\limits_{0}^{1}\sqrt{\frac{1+x}{1-x}} \mathrm{d} x \label{eq:3}\tag{3}$$
Now, let us consider
$$\int\limits_{0}^{1} (1+x)^{-\frac{1}{2}}(1-x)^{\frac{1}{2}} \mathrm{d} x = \int\limits_{0}^{1}\sqrt{\frac{1-x}{1+x}} \mathrm{d} x \label{eq:4}\tag{4}$$
We let $$x=-y$$ to obtain
$$-\int\limits_{0}^{-1} \sqrt{\frac{1+y}{1-y}} \mathrm{d} y, \label{eq:5}\tag{5}$$
which we can rewrite as
$$\int\limits_{-1}^{0}\sqrt{\frac{1+x}{1-x}} \mathrm{d} x \label{eq:6}\tag{6}$$
Adding the right hand side of equation \eqref{eq:3} and equation \eqref{eq:6} yields our original integral
$$\int\limits_{-1}^{0}\sqrt{\frac{1+x}{1-x}} \mathrm{d} x + \int\limits_{0}^{1}\sqrt{\frac{1+x}{1-x}} \mathrm{d} x = \int\limits_{-1}^{1}\sqrt{\frac{1+x}{1-x}} \mathrm{d} x \label{eq:7}\tag{7}$$
Likewise, adding the left hand sides of equations \eqref{eq:4} and \eqref{eq:3} yields
$$\int\limits_{-1}^{0}\sqrt{\frac{1+x}{1-x}} \mathrm{d} x + \int\limits_{0}^{1}\sqrt{\frac{1+x}{1-x}} \mathrm{d} x = \int\limits_{0}^{1} (1+x)^{-\frac{1}{2}}(1-x)^{\frac{1}{2}} \mathrm{d} x + \int\limits_{0}^{1} (1+x)^{\frac{1}{2}}(1-x)^{-\frac{1}{2}} \mathrm{d} x$$
If we combine this result into one integral and rearrange the integrand, we see that it is the same as the integral in \eqref{eq:2} with
$$x=\frac{3}{2} \,\, \mathrm{and} \,\, y=\frac{1}{2}$$
Putting it all together, we have
$$\int\limits_{-1}^{1}\sqrt{\frac{1+x}{1-x}} \mathrm{d} x = 2\mathrm{B}\left(\frac{3}{2},\frac{1}{2}\right) = \pi$$
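A quick numerical cross-check (our own addition, using only the standard library): the closing step can be verified through the Gamma-function formula for the beta function, and the integral itself through Nahin's substitution $$x=\cos(2y)$$, which turns the integrand into the bounded function $$4\cos^2 y$$ on $$(0,\pi/2)$$:

```python
from math import gamma, pi, cos

# final step: 2 * B(3/2, 1/2) via B(x, y) = Gamma(x) Gamma(y) / Gamma(x + y)
beta_value = 2 * gamma(1.5) * gamma(0.5) / gamma(2.0)

# independent midpoint-rule check after the substitution x = cos(2y):
# the integral becomes Int_0^{pi/2} 4*cos(y)^2 dy
n = 100_000
h = (pi / 2) / n
numeric = sum(4 * cos((k + 0.5) * h) ** 2 for k in range(n)) * h

assert abs(beta_value - pi) < 1e-12
assert abs(numeric - pi) < 1e-6
```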
"domain": "appliedclassicalanalysis.net",
"url": "http://appliedclassicalanalysis.net/2016/08/02/integrate-int_-11sqrtfrac1x1-x-mathrmd-x-2/",
"openwebmath_score": 0.9360681176185608,
"openwebmath_perplexity": 1732.4572151919483,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.986777180580855,
"lm_q2_score": 0.6619228825191871,
"lm_q1q2_score": 0.6531703957742359
} |
https://www.delftstack.com/howto/python/rmse-python/

# Using RMSE in Python
Fariba Laiq Jan 03, 2023 Jan 07, 2022 Python Python Math
RMS (root mean square), also known as the quadratic mean, is the square root of the arithmetic mean of the squares of a series of numbers.
RMSE (root mean square error) gives us the difference between the actual results and the results predicted by our model. It measures the quality of our model (which uses quantitative data): how accurately our model has predicted, or the percentage of error in our model.
RMSE is one of the methods for evaluating supervised machine learning models. The larger the RMSE, the less accurate our model is, and vice versa.
There are multiple ways to find the RMSE in Python, for example by using the NumPy library or the scikit-learn library.
## The Formula for Root Mean Square Error in Python
The RMSE is computed with the following formula:
$$RMSE=\sqrt{\frac{1}{n}\sum_{i=1}^n {(predicted_{i}-actual_{i})}^2}$$
where $$n$$ is the number of observations.
## Calculate RMSE Using NumPy in Python
NumPy is a useful library for dealing with large data, numbers, arrays, and mathematical functions.
Using this library, we can easily calculate RMSE when given the actual and predicted values as an input. We will use the built-in functions of the NumPy library for performing different mathematical operations like square, mean, difference, and square root.
In the following example, we will calculate RMSE by first calculating the difference between the actual and predicted values. We then square that difference and take the mean.
Up to this step, we have the MSE. To get the RMSE, we take the square root of the MSE.
Note
To use this library, we should install it first.
Example Code:
#python 3.x
import numpy as np

actual = [1, 2, 5, 2, 7, 5]
predicted = [1, 4, 2, 9, 8, 6]

diff = np.subtract(actual, predicted)
square = np.square(diff)
MSE = square.mean()
RMSE = np.sqrt(MSE)
print("Root Mean Square Error:", RMSE)
Output:
#python 3.x
Root Mean Square Error: 3.265986323710904
## Calculate RMSE Using scikit-learn Library in Python
Another way to calculate RMSE in Python is by using the scikit-learn library.
scikit-learn is useful for machine learning. This library contains a module called sklearn.metrics containing the built-in mean_square_error function.
We will import the function from this module into our code and pass the actual and predicted values from the function call. The function will return the MSE. To calculate the RMSE, we will take MSE’s square root.
Note
To use this library, we should install it first.
Example Code:
#python 3.x
from sklearn.metrics import mean_squared_error
import math
actual = [1,2,5,2,7, 5]
predicted = [1,4,2,9,8,6]
MSE = mean_squared_error(actual, predicted)
RMSE = math.sqrt(MSE)
print("Root Mean Square Error:",RMSE)
Output:
#python 3.x
Root Mean Square Error: 3.265986323710904
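If you prefer to avoid third-party dependencies altogether, the same computation fits in a few lines of plain Python. This helper function is our own addition, not part of either library:

```python
from math import sqrt

def rmse(actual, predicted):
    # dependency-free root mean square error of two equal-length sequences
    if len(actual) != len(predicted):
        raise ValueError("actual and predicted must have the same length")
    return sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual))

print("Root Mean Square Error:", rmse([1, 2, 5, 2, 7, 5], [1, 4, 2, 9, 8, 6]))
```

This prints the same value as both examples above, since all three compute the square root of the mean squared difference.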
Author: Fariba Laiq
"domain": "delftstack.com",
"url": "https://www.delftstack.com/howto/python/rmse-python/",
"openwebmath_score": 0.20868198573589325,
"openwebmath_perplexity": 2164.4807291833217,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9867771801919951,
"lm_q2_score": 0.6619228825191871,
"lm_q1q2_score": 0.6531703955168406
} |
http://www.mathematicalfoodforthought.com/2006/01/finished-product-topic-number-theory_22.html

## Sunday, January 22, 2006
### Finished Product. Topic: Number Theory. Level: Olympiad.
Problem: (1970 IMO - #1) Find all positive integers $n$ such that the set $\{n,n+1,n+2,n+3,n+4,n+5\}$ can be partitioned into two subsets so that the product of the numbers in each subset is equal.
Solution: We claim that no such $n$ exists.
First, suppose one of the elements has a prime factor $> 5$. Then clearly none of the other elements has the same prime factor; therefore, any partition will create one set with that prime factor and one without it. Hence none of the elements has a prime factor $> 5$.
Now take the set modulo $5$. By the same argument above, there must be two elements divisible by $5$. But the first and last are the only two elements that differ by $5$, so $n \equiv n+5 \equiv 0 \pmod{5}$.
Consider the elements $n+1, n+2, n+3, n+4$. Two of them can be divisible by $2$. Of the remaining two, however, only one can be divisible by $3$ (since they differ by $2$). Hence there exists an element that is divisible by neither $2, 3,$ nor $5$. Then it must have a prime factor $> 5$, which gives us a contradiction.
So no such $n$ exist, as desired. QED.
--------------------
Practice Problem: Find an $n$ such that the above condition holds for the set $\{n,n+1,\ldots,n+7\}$ or prove that none exist.
#### 1 comment:
1. Assume that such a set exists - clearly, no element has a prime factor greater than 7, and thus must have prime factors of 2, 3, 5, and/or 7.
By mod 7, the only two numbers which are equivalent are n and n+7, thus these two must be multiples of 7 --> 7k and 7k+7, and all other numbers in the set cannot have a 7 as a prime factor.
In the remaining 6 numbers, it is clear that 2 of these are multiples of 3, and that the remaining four numbers are not multiples of 3.
Of the two numbers that were accounted for directly above, one must have been odd and one must have been even, as these elements would have the form m and m+3.
Within the remaining 4 numbers, then, we have 2 odd and 2 even numbers. These two odd numbers cannot both be multiples of 5, since they have the same parity but the absolute value of their difference is less than 10.
Thus, there exists an odd number, not divisible by 3, 5, or 7 - this implies that it has a prime factor greater than 7, but this gives us a contradiction, so no such set exists.
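Both the theorem and the comment's claim for 8 consecutive integers can also be checked by brute force. The helper below (our own sketch) tries every subset split, using the fact that a valid split forces the subset product's square to equal the product of all the elements:

```python
from itertools import combinations
from math import prod

def has_equal_partition(nums):
    # True if nums splits into two subsets with equal products; a valid
    # split forces (subset product)^2 == product of all elements
    total = prod(nums)
    return any(
        prod(subset) ** 2 == total
        for r in range(1, len(nums))
        for subset in combinations(nums, r)
    )

# no qualifying n for 6 or 8 consecutive integers, at least up to 1000:
assert not any(has_equal_partition([n + i for i in range(6)]) for n in range(1, 1001))
assert not any(has_equal_partition([n + i for i in range(8)]) for n in range(1, 1001))
```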
"domain": "mathematicalfoodforthought.com",
"url": "http://www.mathematicalfoodforthought.com/2006/01/finished-product-topic-number-theory_22.html",
"openwebmath_score": 0.9150283336639404,
"openwebmath_perplexity": 128.3106356257361,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.986777180191995,
"lm_q2_score": 0.6619228825191871,
"lm_q1q2_score": 0.6531703955168406
} |
http://makeyourownmandelbrot.blogspot.com/

## Friday, 29 December 2017
### Escape Condition for Julia and Mandelbrot Fractals
This post is reproduced from the blog for my next book, Make Your Own Algorithmic Art.
This post isn't a tutorial on creating the Julia or Mandelbrot fractals - you can learn about that here. Here we'll focussed on a specific bit of mathematics about the "escape condition" that many guides state but fail to explain.
### Basic Idea
The basic idea behind both Julia and Mandelbrot fractals is to take a starting complex number $z_0$, apply a simple function $z^2 + c$, and feed the output $z_1$ back into this function as input. Doing this repeatedly we get a sequence of output complex numbers, $z_1$, $z_2$, $z_3$ ...
$$z_{n+1} = z_{n}^2 + c$$
These output values can behave in one of three ways:
• They can get larger and larger, towards infinity. This is called escaping.
• They can get smaller and smaller, towards zero.
• They can orbit around, but without getting ever larger. In some cases they can approach a finite constant.
The fractal patterns are coloured to show which points escape, and which don't. Those that escape are outside the fractal, and those that don't are inside.
The difference between Julia and Mandelbrot fractals is only how we choose $z_0$ and $c$.
### Computational Problem
The behaviour of complex numbers under $z^2 + c$ is chaotic. That is:
• The output sequence is very often irregular, and predicting future values is very difficult without actually working out the sequence. They seem random, even if they're not.
• The sequence is very sensitive to starting conditions. That is, even a tiny change in the starting conditions can drastically change how the sequence behaves.
We can't derive a mathematical formula which tells us which points are inside the fractal, and which are outside (they escape). And this chaotic behaviour suggests we can't truly know even if we run the feedback iterations many many times, because the sequence can suddenly escape after a long period of orbiting around.
### Practical Compromise
So we have to compromise - we agree to generate a fixed number of output values, so we can render an approximate pattern. Perhaps a sequence of 50 output values is sufficient? Maybe 100? Maybe even 1000?
By experimenting, it becomes clear that 50 or 100 iterations is fine, except when we are zooming into a very small area of the fractals, where we need more iterations to be able to separate out the inside and outside regions. If we don't do this, the details of the fractal don't emerge.
### Computational Shortcut
For many starting points in a fractal pattern, the output values will get larger and larger very quickly. We want to stop calculating the sequence for two reasons:
• If the numbers get too large, this can cause our code to crash with an error, or worse, continue with incorrect calculations. The root cause of this is that the largest size of number we can store and calculate with is fixed.
We don't want to waste time and computational effort continuing to calculate a sequence which is only ever going to get larger and larger. This shortcut can be important because the calculations for detailed fractals can take a long time.
So many guides will state that we can stop calculating the sequence when the magnitude $|z_n|$ gets larger than 2.
That works well, and makes intuitive sense. If $z_n$ has got so large that it is more than 2 away from the origin, then it will forever get larger and larger. But the statement is rarely explained.
Next is a simple proof - and also a demonstration that the popular escape condition $|z_n| > 2$ is incomplete.
### Simple Proof
First let's remind ourselves of the triangle inequality, which basically says that a direct path is always shorter than an indirect path between two points.
$$| a + b | \leq |a| + |b|$$
Let's contrive to artificially expand out $|z^2|$,
$$|z^2| = |z^2 +c -c|$$
If we use that triangle inequality, with $a = z^2 +c$ and $b = -c$, we have
$$|z^2 +c -c| \leq |z^2 +c| + |-c|$$
but because $|-c|$ is just $|c|$ we have
$$|z^2 +c -c| \leq |z^2 +c| + |c|$$
Now, remember the next value in a sequence is $z^2 + c$ because that's the function we keep applying. Let's bring that to the left of that last inequality.
$$|z^2 +c| \geq |z^2| - |c|$$
Also, $|z^2|$ is the same as $|z|^2$ so we have
$$|z^2 +c| \geq |z|^2 - |c|$$
Now is the interesting part. If we say that $|z|$ is bigger than $|c|$, that would mean
$$|z|^2 - |c| \gt |z|^2 - |z|$$
That means we can also say,
$$|z^2 +c| \gt |z|^2 - |z|$$
Which can be factorised as
$$|z^2 +c| \gt |z|(|z| -1)$$
Let's rewrite that previous expression as a ratio of the sizes of the current $z_n$ and the next $z_{n+1} = z_n^2 + c$,
$$\frac {|z_{n+1}|} {|z_n|} \gt (|z| -1)$$
Now another interesting part. If we say $|z|$ is greater than 2, that means $|z| -1 \gt 1$. So we finally have
$$\frac {|z_{n+1}|} {|z_n|} \gt 1$$
Which is saying that $|z_{n+1}|$ is always greater than $|z_n|$ as long as:
• $|z| \gt |c|$, and
• $|z| \gt 2$
So we've shown that two conditions need to be true to prove that the sequence escapes, not just the traditional one. However in practice, the traditional $|z| \gt 2$ seems to work well enough.
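The escape test translates directly into code. The following sketch (our own, in the spirit of the book's Python) iterates $z_{n+1} = z_n^2 + c$ from $z_0 = 0$, the Mandelbrot choice of starting point, and stops as soon as $|z_n| > 2$:

```python
def escape_count(c, max_iter=100):
    # iterate z -> z*z + c from z = 0 (the Mandelbrot choice of z0),
    # stopping as soon as |z| exceeds 2
    z = 0
    for n in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return n              # escaped
    return max_iter               # still inside at this iteration budget

assert escape_count(0) == 100     # the origin never escapes
assert escape_count(1) == 2       # 0 -> 1 -> 2 -> 5: |5| > 2
assert escape_count(-1) == 100    # orbits 0, -1, 0, -1, ... forever
```

Points that exhaust the iteration budget are treated as inside the set at that budget, which is exactly the practical compromise described above.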
Reference
One text that does try to cover this is Chaos and Fractals: New Frontiers of Science.
## Tuesday, 19 July 2016
### Python 3 Code on GitHub
The code for the book Make Your Own Mandelbrot is now on github:
And whilst I was doing this, I updated the code to Python 3.
This required only minor changes: the xrange() function becomes the simpler range() function.
(sadly the 3d code requires the mayavi libraries which are not yet ported to Python 3)
## Tuesday, 19 April 2016
### Republished for Better Formatting
I've republished the kindle and print book.
The main reason is that a few people with older kindle devices didn't have great experiences with the formatting. For some images didn't show properly, for others, the margins were wonky, etc
This is sad because it shouldn't be that hard to get right in 2016. The core problem is that ebook file formats are not open, stable and implemented in an interoperable way. It's like the web 20 years ago - with big companies not implementing web standards properly, and deliberately trying to pervert them to their own ends. Thankfully after 20 years - that's all settled down and usable.
I took the decision to publish the ebooks using Amazon's new Kindle Textbook format. That promises to have much greater certainty over layout, even for complex content, ... like a PDF. This will be great for people who want to see a page more or less as it was intended, and certainly not mashed up.
The following are screenshots from my Android phone's Kindle app - and it looks fantastic!
There is a down side - those with older Kindles that can't support this new Textbook format, won't be able to buy the book. So happier readers, but fewer of them. I didn't take that decision lightly but not having unhappy readers was a priority for me.
## Tuesday, 3 March 2015
### London Python Group
I was lucky enough to present a flash talk on Make Your Own Mandelbrot at the London Python Group.
One of the great things about such grassroots groups is the openness, honestly and generousness - unlike corporate events. I picked up some pointers on things I didn't know:
• The IPython Cookbook has some content on interactive UI elements (widgets) for IPython Notebooks. Something I always wanted to know how to do.
• An example of GPU accelerated computation in IPython notebooks for generating Mandelbrot fractals.
## Sunday, 8 February 2015
### Make Your Own Neural Network
I'm now focussing on my next ebook Make Your Own Neural Network.
The central idea is the same, to make sure that anyone with interest and nothing more than school-level maths can understand how neural networks work, and appreciate the pretty cool concepts on the way! Again we'll use Python and assume no previous knowledge of programming. By the end of the guide, you'll have built a simple neural network that recognises human handwritten numbers.
You can follow progress and discussions at: http://makeyourownneuralnetwork.blogspot.co.uk/ and @myoneuralnet
## Friday, 24 October 2014
### LinuxVoice Magazine
I'm really pleased that LinuxVoice Magazine has published the first of my 2-part series on Python and the Mandelbrot fractals.
I hope the series will inspire those completely new to programming to try it - the tutorials require no previous experience at all.
And I also hope the mathematics - which is no more difficult than school maths - will inspire young and old by showing that it can be surprising, exciting and beautiful!
I'd like to thank Graham, the editor of LinuxVoice, for being so accommodating, helpful and patient with me.
By the way, I've been reading computer magazines for over 20 years and LinuxVoice has refreshed enthusiasm, community spirit, and quality content - best wishes for its future!
Grab a copy of issues 009 and 0010 - out now and next month!
## Saturday, 27 September 2014
### Oil Painting Fractals
I was exploring artistic filters in image editing software - you know the kind that make an image look like it was really sketched with an ink pen or painted in watercolours.
The usual software wasn't doing it for me because the effects looked very fake, so I explored further and found the free FotoSketcher. Its focus is purely on such effects - and it's brilliant. I particularly like the Painting 5 (watercolour) and Painting 6 (oil) effects - they are very realistic.
Then it struck me - what if I applied these filters to fractal images? The results, in my opinion, are fantastic! Enjoy .... and do try it yourself!
"domain": "blogspot.com",
"url": "http://makeyourownmandelbrot.blogspot.com/",
"openwebmath_score": 0.37631216645240784,
"openwebmath_perplexity": 1132.2566989987056,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9867771798031351,
"lm_q2_score": 0.6619228825191872,
"lm_q1q2_score": 0.6531703952594454
} |
https://proofwiki.org/wiki/Direct_Product_of_Sylow_p-Subgroups_is_Sylow_p-Subgroup

# Direct Product of Sylow p-Subgroups is Sylow p-Subgroup
## Theorem
Let $G_1$ and $G_2$ be groups.
Let $H_1$ and $H_2$ be subgroups of $G_1$ and $G_2$ respectively.
Let $H_1$ be a Sylow $p$-subgroup of $G_1$.
Let $H_2$ be a Sylow $p$-subgroup of $G_2$.
Then $H_1 \times H_2$ is a Sylow $p$-subgroup of $G_1 \times G_2$.
## Proof
By definition of Sylow $p$-subgroup:
$\order {H_1} = p^r$, where $p^r$ is the highest power of $p$ which is a divisor of $\order {G_1}$.
$\order {H_2} = p^s$, where $p^s$ is the highest power of $p$ which is a divisor of $\order {G_2}$.
The order of a direct product of groups is the product of the orders of the factors, so we have that:
$\order {H_1 \times H_2} = \order {H_1} \times \order {H_2} = p^{r + s}$
For the same reason $\order {G_1 \times G_2} = \order {G_1} \times \order {G_2}$, and so $p^{r + s}$ is the highest power of $p$ which is a divisor of $\order {G_1 \times G_2}$.
Hence it follows by definition that $H_1 \times H_2$ is a Sylow $p$-subgroup of $G_1 \times G_2$.
$\blacksquare$
"domain": "proofwiki.org",
"url": "https://proofwiki.org/wiki/Direct_Product_of_Sylow_p-Subgroups_is_Sylow_p-Subgroup",
"openwebmath_score": 0.9561023712158203,
"openwebmath_perplexity": 52.53486798737504,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9867771798031351,
"lm_q2_score": 0.6619228825191872,
"lm_q1q2_score": 0.6531703952594454
} |
https://ja.mathigon.org/course/intro-probability/random-variables

# Probability: Random Variables
An event may be regarded as a function of the outcome of an experiment: based on the outcome, we can say that the event occurred or didn't occur. We will often be interested in specifying richer information about the outcome of an experiment than a simple yes or no. Specifically, we will often want to specify information in the form of a real number.
For example, suppose that you will receive a dollar for each head flipped in our two-fair-flips experiment. Then your payout might be 0 dollars, 1 dollar, or 2 dollars. Because the payout, which we will call $X$, represents a value which is random (that is, dependent on the outcome of a random experiment), it is called a random variable. A random variable which takes values in some finite or countably infinite set (such as $\{0, 1, 2\}$, in this case) is called a discrete random variable.
Since a random variable associates a real number to each outcome of the experiment, in mathematical terms a random variable is a function from the sample space to $\mathbb{R}$. Using function notation, the dollar-per-head payout random variable satisfies
Note that a random variable, as a function from the sample space to $\mathbb{R}$, does not have its own uncertainty: for each outcome, its value is consistently and perfectly well defined. The randomness comes entirely from thinking of the outcome as being selected randomly from the sample space. For example, the amount of money you'll take home from tomorrow's poker night is a random quantity, but the function which maps each poker game outcome to your haul is fully specified by the rules of poker.
We can combine random variables using any operations or functions we can use to combine numbers. For example, suppose is defined to be the number of heads in the first of two coin flips. In other words, we define
and is defined to be the number of heads in the second flip. Then the random variable maps each to . This random variable is equal to , since for every .
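The two-flip setup can be made concrete in a short Python sketch. The names `X1`, `X2` and `X` below are our own labels for the random variables described above (heads on the first flip, heads on the second flip, and their sum), not notation taken from the course itself.

```python
import itertools

# Sample space for two fair coin flips; each outcome is equally likely.
omega = list(itertools.product("HT", repeat=2))   # ('H','H'), ('H','T'), ...

X1 = lambda w: 1 if w[0] == "H" else 0   # heads on the first flip
X2 = lambda w: 1 if w[1] == "H" else 0   # heads on the second flip
X = lambda w: X1(w) + X2(w)              # total number of heads (the payout)

# X is an ordinary function on the sample space; its values are 0, 1 or 2.
assert sorted(X(w) for w in omega) == [0, 1, 1, 2]
```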
Exercise
Suppose that the random variable represents a fair die roll and is defined to be the remainder when is divided by .
Define a six-element probability space on which and may be defined, and find for every integer value of .
Solution. We set From the sample space, we see that for any integer value we have
Exercise
Consider a sample space $\Omega$ and an event $A$. We define the random variable $1_A$ by $1_A(\omega) = 1$ if $\omega \in A$, and $1_A(\omega) = 0$ otherwise.
The random variable $1_A$ is called the indicator random variable for $A$. If $B$ is another event, which of the following random variables are necessarily equal?
$1_{A \cap B}$ and $1_A 1_B$
$1_{A \cup B}$ and $1_A + 1_B$
$1_{A^c}$ and $1 - 1_A$
Solution.
• Since $1_A(\omega) 1_B(\omega) = 1$ if and only if $1_A(\omega) = 1$ and $1_B(\omega) = 1$, we see that $1_{A \cap B} = 1_A 1_B$.
• Because $1_A + 1_B$ may be equal to 2 (on the intersection of $A$ and $B$), we cannot have $1_{A \cup B} = 1_A + 1_B$ in general.
• We observe that $1_{A^c} = 1 - 1_A$, because $1_{A^c}(\omega) = 1$ if and only if $\omega \notin A$.
| 2021-11-27T16:45:38 | {
"domain": "mathigon.org",
"url": "https://ja.mathigon.org/course/intro-probability/random-variables",
"openwebmath_score": 0.8250868916511536,
"openwebmath_perplexity": 265.6448386587733,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9867771798031351,
"lm_q2_score": 0.6619228825191872,
"lm_q1q2_score": 0.6531703952594454
} |
https://www.physicsforums.com/threads/maxima-and-minima-in-calculus.1044462/ | # Maxima and Minima in calculus
• MHB
WMDhamnekar
MHB
Question: Prove that the radius of the right circular cylinder of greatest curved surface area which can be inscribed in a given cone is half of that of the cone.
Let r and h be the radius and height of the right circular cylinder inscribed in a given cone of radius R and height H. Let S be the curved surface area of cylinder.
S = 2πr*h
h = H*(R – r)/R
( Would any Math help board member provide me the detailed explanation of the computation of height of right circular cylinder of greatest curved surface inscribed in a given cone with a figure (as far as possible) ?
So S = 2πr*H(R – r)/R
= $\frac{2πH}{R}(r*R – r^2)$
Differentiate w.r.t.r
$\frac{dS}{dr} = \frac{2πH}{R}(R – 2r)$
For maxima or minima
$\frac{dS}{dr} =0$
=> $\frac{2πH}{R}(R – 2r) = 0$
=> R – 2r = 0
=> R = 2r
=> $r = \frac{R}{2}$
$\frac{d^2S}{dr^2} = \frac{2πH}{R}*(0 – 2)= \frac{-4πH}{R }$(negative)
So for $r = \frac{R}{2},$ S is maximum.
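As a quick numeric cross-check of this calculus, here is a Python sketch with arbitrarily chosen cone dimensions $R = 6$ and $H = 12$ (any positive values would do); a brute-force scan of radii should find the optimum at $r = R/2 = 3$.

```python
import math

# Hypothetical cone dimensions (any positive R, H would do).
R_, H_ = 6.0, 12.0

def S_num(rr):
    """Curved surface area of the inscribed cylinder, S = 2*pi*r*H*(R - r)/R."""
    return 2 * math.pi * rr * H_ * (R_ - rr) / R_

# Scan radii on (0, R); the best one should sit at R/2 = 3.
best = max((S_num(rr), rr) for rr in [i / 1000 for i in range(1, 6000)])
assert abs(best[1] - R_ / 2) < 1e-2
```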
Kansas Boy
Draw a picture. From the side, a cone of radius R and height h looks like a triangle. I would set up a coordinate system with x-axis along the base, y-axis along the altitude, and origin at the center of the base. Then the peak is at (0, h) and one vertex is at (R, 0). The line between those two points, on the side of the cone, is given by y= -(h/R)x+ h. At x= r, y= -hr/R+ h= h(1- r/R).
The area of the curved side is $2\pi rh(1- r/R)$.
WMDhamnekar
MHB
I drew a picture describing this question. Now, how can we prove $\frac{h}{H}=\frac{(R-r)}{R}$
Last edited:
Kansas Boy
The cone has height H and radius R. Set up a coordinate system so the origin is at the center of the base and the z axis passes through the vertex. Then the vertex is at (0, 0, H) and the x-axis passes through the cone at (R, 0, 0). The line through those two points, in the xz-plane, is given by $z= H\frac{R- x}{R}$.
Taking x= r, for the cylinder, we get $h= H\frac{R- r}{R}$ or, dividing both sides by H, $\frac{h}{H}= \frac{R- r}{R}$. | 2023-03-21T02:33:12 | {
"domain": "physicsforums.com",
"url": "https://www.physicsforums.com/threads/maxima-and-minima-in-calculus.1044462/",
"openwebmath_score": 0.847327470779419,
"openwebmath_perplexity": 471.5447882972499,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9867771798031351,
"lm_q2_score": 0.6619228825191871,
"lm_q1q2_score": 0.6531703952594453
} |
https://proofwiki.org/wiki/Carmichael_Number_has_3_Odd_Prime_Factors | # Carmichael Number has 3 Odd Prime Factors
## Theorem
Let $n$ be a Carmichael number.
Then $n$ has at least $3$ distinct odd prime factors.
## Proof
By Korselt's Theorem, $n$ is odd.
Therefore $n$ has at least $1$ odd prime factor.
By Korselt's Theorem, for each prime factor of $n$:
$p^2 \nmid n$
$\paren {p - 1} \divides \paren {n - 1}$
Suppose $n = p^k$ for some odd prime $p$.
By Korselt's Theorem, $k = 1$.
However by definition of a Carmichael Number, $n$ cannot be prime.
Therefore $n$ has at least $2$ distinct odd prime factors.
Suppose $n = p^a q^b$ for distinct odd primes $p$ and $q$.
By Korselt's Theorem, the following holds:
$a = b = 1$
$n = p q$
$\paren {p - 1} \divides \paren {n - 1}$
$\paren {q - 1} \divides \paren {n - 1}$
Hence:
$\paren {p - 1} \divides \paren {n - 1 - q \paren {p - 1} } = p q - 1 - p q + q = q - 1$
Swapping $p$ and $q$ yields $\paren {q - 1} \divides \paren {p - 1}$.
Hence $p - 1 = q - 1$ and $p = q$, which is a contradiction.
Therefore $n$ has at least $3$ distinct odd prime factors.
$\blacksquare$
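The theorem can be illustrated on the smallest Carmichael number, $561 = 3 \times 11 \times 17$. The Python sketch below checks Korselt's criterion directly; note that the criterion alone is also satisfied trivially by primes, so compositeness has to be checked separately, as in the proof above.

```python
def prime_factors(n):
    """Return {prime: exponent} by trial division (fine for small n)."""
    f, d = {}, 2
    while d * d <= n:
        while n % d == 0:
            f[d] = f.get(d, 0) + 1
            n //= d
        d += 1
    if n > 1:
        f[n] = f.get(n, 0) + 1
    return f

def korselt(n):
    """Korselt's criterion: n squarefree and (p - 1) | (n - 1) for every p | n.

    Note: primes satisfy this trivially, so compositeness is a separate check.
    """
    f = prime_factors(n)
    return all(e == 1 and (n - 1) % (p - 1) == 0 for p, e in f.items())

assert prime_factors(561) == {3: 1, 11: 1, 17: 1}   # composite and squarefree
assert korselt(561)
assert len(prime_factors(561)) >= 3                 # as the theorem requires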
## Historical Note
Robert Daniel Carmichael proved that a Carmichael Number has at least 3 distinct odd prime factors in $1912$, at around the same time that he discovered that $561$ was the smallest one. | 2020-09-30T06:36:15 | {
"domain": "proofwiki.org",
"url": "https://proofwiki.org/wiki/Carmichael_Number_has_3_Odd_Prime_Factors",
"openwebmath_score": 0.8550156354904175,
"openwebmath_perplexity": 238.8129658963121,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.986777179414275,
"lm_q2_score": 0.6619228825191872,
"lm_q1q2_score": 0.6531703950020501
} |
https://docs.aptech.com/gauss/beta.html | beta¶
Purpose¶
Computes the standard Beta function, also called the Euler integral. The beta function is defined as:
$B(x, y) = \int_{0}^{1} t^{x-1} (1-t)^{y-1} \, dt$
Format¶
f = beta(x, y)
Parameters
• x (scalar or NxK matrix) – may be real or complex
• y (LxM matrix) – ExE conformable with x.
Returns
f (NxK matrix) – the value of the Beta function evaluated at each pair of corresponding elements of x and y.
Examples¶
// Set x
x = 9;
// Set y
y = 3;
// Call beta function
f = beta(x, y);
After the code above:
f = 0.0020202020
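For readers without GAUSS, the same value can be reproduced with a short Python sketch using the Gamma-function relation quoted in the Remarks below.

```python
import math

def beta(x, y):
    """Standard Beta function via B(x, y) = Gamma(x)*Gamma(y)/Gamma(x + y)."""
    return math.gamma(x) * math.gamma(y) / math.gamma(x + y)

# Mirrors the GAUSS example: beta(9, 3) = 1/495 = 0.0020202020...
assert abs(beta(9, 3) - 0.0020202020) < 1e-9
```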
Remarks¶
The Beta function’s relationship with the Gamma function is:
$B(x,y) = \frac{\Gamma(x)×\Gamma(y)}{\Gamma(x+y)}$ | 2022-05-22T20:30:39 | {
"domain": "aptech.com",
"url": "https://docs.aptech.com/gauss/beta.html",
"openwebmath_score": 0.8688353896141052,
"openwebmath_perplexity": 8418.280773871347,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES\n\n",
"lm_q1_score": 0.986777179414275,
"lm_q2_score": 0.6619228825191871,
"lm_q1q2_score": 0.65317039500205
} |
http://mathhelpforum.com/pre-calculus/103828-problem-regarding-trig-identities.html | # Math Help - Problem regarding Trig Identities
1. ## Problem regarding Trig Identities
I need to use trig identities to find the indicated trig functions. The problem that I'm stuck on gives me that tan(theta)=5; with this I'm supposed to find cot(theta), cos(theta), tan(90 - theta), and csc(theta). I am not really sure how to approach this, any help would be greatly appreciated. Thanks in advance
2. $\tan(\theta) = 5$
$\cot(\theta) = \frac{1}{\tan(\theta)}$
any easier? | 2015-04-21T05:13:05 | {
"domain": "mathhelpforum.com",
"url": "http://mathhelpforum.com/pre-calculus/103828-problem-regarding-trig-identities.html",
"openwebmath_score": 0.9357950091362,
"openwebmath_perplexity": 811.6443197826328,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES\n\n",
"lm_q1_score": 0.9867771790254151,
"lm_q2_score": 0.6619228825191872,
"lm_q1q2_score": 0.6531703947446548
} |
https://brainmass.com/math/complex-analysis/computing-integrals-via-analytic-continuation-589748 | Explore BrainMass
# Computing integrals via analytic continuation
Using the identity:
Integral from 0 to infinity of x^(-q)/(1+x) dx = pi/[sin(pi q)]
valid for 0 < q < 1
Compute the integral:
Integral from 0 to infinity of x^p/(x^2+4 x + 3) dx
for -1 < p < 1
https://brainmass.com/math/complex-analysis/computing-integrals-via-analytic-continuation-589748
#### Solution Preview
The denominator of the integrand factors as follows:
x^2+4 x + 3 = (x+1)(x+3)
We can perform the partial fraction expansion:
1/(x^2+4 x + 3 ) = 1/[(x+1)(x+3)] = 1/2 [1/(x+1) - 1/(x+3)]
The integral can thus be written as
I(p) = 1/2 Integral from 0 to infinity of x^p [1/(x+1) - 1/(x+3)] dx
It now looks straightforward to split up the integrals into two ...
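As a sanity check of the quoted identity, here is a rough numeric verification in Python at the single point $q = 1/2$, where the exact value is $\pi/\sin(\pi/2) = \pi$. The substitution $x = u^2$ is our own device to tame the endpoint singularity; it is not part of the original solution.

```python
import math

# Verify the identity at q = 1/2, where pi/sin(pi*q) = pi.
# The substitution x = u^2 turns the improper integral of x^(-1/2)/(1+x)
# into 2 * integral of du/(1 + u^2), approximated below by the midpoint
# rule, truncated at u = 1000.
q = 0.5
N, B = 200_000, 1000.0
h = B / N
approx = 2 * sum(h / (1 + ((i + 0.5) * h) ** 2) for i in range(N))

assert abs(approx - math.pi / math.sin(math.pi * q)) < 1e-2
```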
#### Solution Summary
We explain in detail how one can use analytic continuation of the given integral to compute the desired integral.
| 2021-05-10T01:54:45 | {
"domain": "brainmass.com",
"url": "https://brainmass.com/math/complex-analysis/computing-integrals-via-analytic-continuation-589748",
"openwebmath_score": 0.8065240979194641,
"openwebmath_perplexity": 2309.9834013608906,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES\n\n",
"lm_q1_score": 0.9867771790254151,
"lm_q2_score": 0.6619228825191872,
"lm_q1q2_score": 0.6531703947446548
} |
https://www.shaalaa.com/question-bank-solutions/graph-maxima-minima-the-minimum-value-x-2-250-x-a-75-b-50-c-25-d-55_46681 | Share
Books Shortlist
# Solution for The Minimum Value of $\left( x^2 + \frac{250}{x} \right)$ is (A) 75 (B) 50 (C) 25 (D) 55 - CBSE (Science) Class 12 - Mathematics
#### Question
The minimum value of $\left( x^2 + \frac{250}{x} \right)$ is
(a) 75
(b) 50
(c) 25
(d) 55
#### Solution
(a) 75
$\text { Given }: f\left( x \right) = x^2 + \frac{250}{x}$
$\Rightarrow f'\left( x \right) = 2x - \frac{250}{x^2}$
$\text { For a local maximum or a local minimum, we must have }$
$f'\left( x \right) = 0$
$\Rightarrow 2x - \frac{250}{x^2} = 0$
$\Rightarrow 2 x^3 - 250 = 0$
$\Rightarrow x^3 = 125$
$\Rightarrow x = 5$
$\text { Now,}$
$f''\left( x \right) = 2 + \frac{500}{x^3}$
$\Rightarrow f''\left( 5 \right) = 2 + \frac{500}{5^3} = \frac{750}{125} = 6 > 0$
$\text { So, x = 5 is a point of local minimum } .$
$\therefore f \left( x \right)_{\min} = 5^2 + \frac{250}{5} = 25 + 50 = 75$
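A quick numeric cross-check in Python (a brute-force grid search over $x > 0$, not part of the original solution):

```python
# Brute-force grid search for the minimum of f(x) = x^2 + 250/x on (0, 20].
f = lambda x: x * x + 250.0 / x

xs = [i / 1000 for i in range(1, 20001)]
fmin, xmin = min((f(x), x) for x in xs)

assert abs(xmin - 5) < 1e-2     # minimiser at x = 5
assert abs(fmin - 75) < 1e-3    # minimum value 75
```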
Solution The Minimum Value of ( X 2 + 250 X ) is (A) 75 (B) 50 (C) 25 (D) 55 Concept: Graph of Maxima and Minima.
| 2019-05-26T09:59:49 | {
"domain": "shaalaa.com",
"url": "https://www.shaalaa.com/question-bank-solutions/graph-maxima-minima-the-minimum-value-x-2-250-x-a-75-b-50-c-25-d-55_46681",
"openwebmath_score": 0.24300260841846466,
"openwebmath_perplexity": 853.3120647393165,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.986777179025415,
"lm_q2_score": 0.6619228825191872,
"lm_q1q2_score": 0.6531703947446547
} |
https://www.cut-the-knot.org/Curriculum/Geometry/GeoGebra/Thebault0.shtml | # In the Spirit of Thébault I
Dr. Ray Viglione from Kean University and his undergraduate student Purna Patel found a variation of Victor Thébault's first problem that dealt with the configuration of squares built on the sides of a parallelogram.
Given a parallelogram, the centers of the squares drawn on both sides of both diagonals form a parallelogram congruent to the original and rotated $90^{\circ}$ about its center.
The applet below serves to illustrate the problem and its solution:
As a backup for the GeoGebra applet, below is a plain diagram for a solution without words.
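The claim can also be verified numerically with complex arithmetic. The Python sketch below uses one arbitrarily chosen parallelogram (vertices labeled so that $D = A + C - B$) and checks that the four square centers coincide with the original vertices rotated $90^{\circ}$ about the center.

```python
# One arbitrary parallelogram ABCD (vertices as complex numbers, D = A + C - B).
A, B = 0 + 0j, 5 + 1j
C = 7 + 4j
D = A + C - B
O = (A + C) / 2                  # center of the parallelogram

def square_centers(p, q):
    """Centers of the two squares erected on segment pq, one on each side."""
    m, half = (p + q) / 2, (q - p) / 2
    return m + 1j * half, m - 1j * half

centers = set()
for p, q in [(A, C), (B, D)]:    # both diagonals, both sides
    centers.update(square_centers(p, q))

# The four centers are the original vertices rotated 90 degrees about O.
rotated = {O + 1j * (v - O) for v in (A, B, C, D)}
assert all(any(abs(c - r) < 1e-9 for r in rotated) for c in centers)
```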
### References
1. Purna Patel, Raymond Viglione, Proof Without Words: A Variation on Thébault's First Problem, The College Mathematics Journal, Vol. 44, No. 2 (March 2013), p. 135
| 2018-12-14T00:02:01 | {
"domain": "cut-the-knot.org",
"url": "https://www.cut-the-knot.org/Curriculum/Geometry/GeoGebra/Thebault0.shtml",
"openwebmath_score": 0.48492857813835144,
"openwebmath_perplexity": 1939.4636334229044,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.986777178636555,
"lm_q2_score": 0.6619228825191872,
"lm_q1q2_score": 0.6531703944872593
} |
https://www.effortlessmath.com/math-topics/how-to-find-the-number-of-solutions-in-a-system-of-equations/ | # How to Find the Number of Solutions in a System of Equations?
Depending on how the lines of a system of linear equations intersect, the system will have a different number of solutions. Here you will learn how to find the number of solutions in a system of equations.
A system of linear equations usually has a single solution, but sometimes it can have no solution (parallel lines) or infinite solutions (same line).
## A step-by-step guide to the number of solutions in a system of equations
A linear equation in two variables is an equation of the form $$ax + by + c = 0$$ where $$a, b, c ∈ R$$ and $$a$$ and $$b$$ are not both zero. When we consider a system of linear equations, we can find the number of solutions by comparing the coefficients of the variables of the equations.
### Three types of solutions of a system of linear equations
Consider the pair of linear equations in two variables $$x$$ and $$y$$:
$$a_1x+b_1y+c_1=0$$
$$a_2x+b_2y+c_2=0$$
Here $$a_1$$, $$b_1$$, $$c_1$$, $$a_2$$, $$b_2$$, $$c_2$$ are all real numbers.
Note that, $$a_1^2 + b_1^2 ≠ 0, a_2^2 + b_2^2 ≠ 0$$.
• If $$\frac{a_1}{a_2}≠ \frac{b_1}{b_2}$$, then there will be a unique solution. If we plot the graph, the lines will intersect. This type of equation is called a consistent pair of linear equations.
• If $$\frac{a_1}{a_2}= \frac{b_1}{b_2}=\frac{c_1}{c_2}$$, then there will be infinitely many solutions. The lines will coincide. This type of equation is called a dependent pair of linear equations in two variables.
• If $$\frac{a_1}{a_2}= \frac{b_1}{b_2}≠\frac{c_1}{c_2}$$, then there will be no solution. If we plot the graph, the lines will be parallel. This type of equation is called an inconsistent pair of linear equations.
### The Number of Solutions in a System of Equations – Example 1:
How many solutions does the following system have?
$$y=-2x-4$$, $$y=3x+3$$
Solution:
First, rewrite the equation to the general form:
$$-2x-y-4=0$$
$$3x-y+3=0$$
Now, compare the coefficients:
$$\frac{a_1}{a_2}$$$$=-\frac{2}{3}$$
$$\frac{b_1}{b_2}$$$$=\frac{-1}{-1}=1$$
$$\frac{a_1}{a_2}≠ \frac{b_1}{b_2}$$, Hence, this system of equations will have only one solution.
## Exercises for the Number of Solutions in a System of Equations
### Find the number of solutions in each system of equations.
1. $$\color{blue}{2x\:+\:3y\:-\:11\:=\:0,\:3x\:+\:2y\:-\:9\:=\:0}$$
2. $$\color{blue}{y=\frac{10}{3}x+\frac{9}{7},\:y=\frac{1}{8}x-\frac{3}{4}}$$
3. $$\color{blue}{y=\frac{8}{5}x+2,\:y=\frac{8}{5}x+\frac{5}{2}}$$
4. $$\color{blue}{y=-x+\frac{4}{7},\:y=-x+\frac{4}{7}}$$
1. $$\color{blue}{one\:solution}$$
2. $$\color{blue}{one\:solution}$$
3. $$\color{blue}{no\:solution}$$
4. $$\color{blue}{infinitely\:many\:solutions}$$
| 2023-03-21T10:29:37 | {
"domain": "effortlessmath.com",
"url": "https://www.effortlessmath.com/math-topics/how-to-find-the-number-of-solutions-in-a-system-of-equations/",
"openwebmath_score": 0.783394455909729,
"openwebmath_perplexity": 236.49900299182434,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9867771786365549,
"lm_q2_score": 0.6619228825191871,
"lm_q1q2_score": 0.6531703944872592
} |
https://plainmath.net/6638/to-determine-the-correct-graph-for-the-function-g-x-equal-frac-1-2-f-plus-is | To determine.The correct graph for the function[g(x)=-frac{1}{2}f(x)+1 is B
To determine.
The correct graph for the function $g\left(x\right)=-\frac{1}{2}f\left(x\right)+1$ is B
Asma Vang
Vertical translation:
For $b>0$:
The graph of $y=f\left(x\right)+b$ is the graph of $y=f\left(x\right)$ shifted up $b$ units.
The graph of $y=f\left(x\right)-b$ is the graph of $y=f\left(x\right)$ shifted down $b$ units.
Reflection:
Across the x-axis:
The graph of $y=-f\left(x\right)$ is the reflection of the graph of $y=f\left(x\right)$ across the x-axis
Across the y-axis:
The graph of $y=f\left(-x\right)$ is the reflection of the graph of $y=f\left(x\right)$ across the y-axis
Vertically Stretching or Shrinking:
The graph of $y=af\left(x\right)$ can be obtained from the graph of $y=f\left(x\right)$ by
stretching vertically for $|a|>1$ or shrinking vertically for $0<|a|<1$.
For $a<0$, the graph is also reflected across the x-axis
By the properties of transformations, the graph of $g\left(x\right)=-\frac{1}{2}f\left(x\right)+1$ is obtained as follows:
The graph of $y=f\left(x\right)$ is shrunk vertically by a factor of $\frac{1}{2}$.
Then the graph is reflected about the x-axis and shifted up one unit.
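A quick numeric check in Python that this sequence of transformations really produces $g$; the base function $f$ below is an arbitrary hypothetical choice, since any $f$ would work for this check.

```python
# Hypothetical base function; any f would work for this check.
f = lambda x: x**2 - 3
g = lambda x: -0.5 * f(x) + 1          # the given g(x) = -1/2 f(x) + 1

def transform(x):
    y = f(x)
    y = 0.5 * y      # shrink vertically by a factor of 1/2
    y = -y           # reflect across the x-axis
    return y + 1     # shift up one unit

assert all(abs(g(x) - transform(x)) < 1e-12 for x in [-2, -1, 0, 1.5, 3])
```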
From the given graphs, graph B satisfies all the conditions stated above.
Thus, the correct graph for the function $g\left(x\right)=-\frac{1}{2}f\left(x\right)+1$ is B | 2022-06-28T13:01:30 | {
"domain": "plainmath.net",
"url": "https://plainmath.net/6638/to-determine-the-correct-graph-for-the-function-g-x-equal-frac-1-2-f-plus-is",
"openwebmath_score": 0.7271022796630859,
"openwebmath_perplexity": 1594.6778369069496,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.986777178636555,
"lm_q2_score": 0.6619228825191871,
"lm_q1q2_score": 0.6531703944872592
} |
https://www.cheenta.com/medians-of-triangle-prmo-2018-problem-10/ | What is the NO-SHORTCUT approach for learning great Mathematics?
# How to Pursue Mathematics after High School?
For Students who are passionate for Mathematics and want to pursue it for higher studies in India and abroad.
Try this beautiful problem from Geometry based on medians of triangle
## Medians of triangle | PRMO | Problem 10
In a triangle ABC, the median from B to CA is perpendicular to the median from C to AB. If the median from A to BC is 30, determine $(BC^2 +AC^2+AB^2)/100$?
• $56$
• $24$
• $34$
### Key Concepts
Geometry
Medians
Centroid
Answer:$24$
PRMO-2018, Problem 10
Pre College Mathematics
## Try with Hints
We have to find out $(BC^2 +AC^2+AB^2)/100$, so we first have to find $AB$, $BC$, $CA$. Now it is given that the median from B to CA is perpendicular to the median from C to AB, and the median from A to BC is 30.
So clearly $\triangle BGC$, $\triangle BGF$, $\triangle EGC$ are right-angled triangles. Let $CF=3x$ and $BE=3y$; then clearly $CG=2x$ and $BG=2y$. Given that $AD=30$, we get $AG=20$ and $DG=10$ (as $G$ is the centroid, it divides each median in the ratio $2:1$). Therefore, from the Pythagorean theorem, we can find $BC$, $BF$, $CE$, i.e. we can find the values of $AB$, $BC$, $CA$.
Can you now finish the problem ..........
$CE^2=(2x)^2+y^2=4x^2 +y^2$
$BF^2=(2y)^2+x^2=4y^2+x^2$
Since $GD$ is the median to the hypotenuse $BC$ of the right-angled triangle $BGC$, we have $BC = 2\,GD = 20$. Also, $CG^2+BG^2=BC^2$ $\Rightarrow 4x^2 + 4y^2={20}^2$ $\Rightarrow x^2+y^2=100$
$AC^2=(2CE)^2=4(4x^2+y^2)$
$AB^2=(2BF)^2=4(4y^2+x^2)$
Can you finish the problem........
$(BC^2 +AC^2+AB^2)=20(x^2+y^2)+20^2=2400$
so, $(BC^2 +AC^2+AB^2)/100$=24
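A quick Python check with one concrete choice of $x$ and $y$ satisfying $x^2 + y^2 = 100$ (the specific pair $x = 6$, $y = 8$ is our own arbitrary choice):

```python
# Any x, y with x^2 + y^2 = 100 (forced by BC = 20); take x = 6, y = 8.
x, y = 6, 8
assert x**2 + y**2 == 100

BC2 = 4 * x**2 + 4 * y**2        # BC^2 = CG^2 + BG^2
AC2 = 4 * (4 * x**2 + y**2)      # AC^2 = (2*CE)^2
AB2 = 4 * (4 * y**2 + x**2)      # AB^2 = (2*BF)^2

assert BC2 + AC2 + AB2 == 2400
assert (BC2 + AC2 + AB2) // 100 == 24
```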
## What to do to shape your Career in Mathematics after 12th?
From the video below, let's learn from Dr. Ashani Dasgupta (a Ph.D. in Mathematics from the University of Milwaukee-Wisconsin and Founder-Faculty of Cheenta) how you can shape your career in Mathematics and pursue it after 12th in India and Abroad. These are some of the key questions that we are discussing here:
• What are some of the best colleges for Mathematics that you can aim to apply for after high school?
• How can you strategically opt for less known colleges and prepare yourself for the best universities in India or Abroad for your Masters or Ph.D. Programs?
• What are the best universities for MS, MMath, and Ph.D. Programs in India?
• What topics in Mathematics are really needed to crack some great Masters or Ph.D. level entrances?
• How can you pursue a Ph.D. in Mathematics outside India?
• What are the 5 ways Cheenta can help you to pursue Higher Mathematics in India and abroad?
## Want to Explore Advanced Mathematics at Cheenta?
Cheenta has taken an initiative of helping College and High School Passout Students with its "Open Seminars" and "Open for all Math Camps". These events are extremely useful for students who are really passionate for Mathematic and want to pursue their career in it.
To Explore and Experience Advanced Mathematics at Cheenta
### 3 comments on “Medians of triangle | PRMO-2018 | Problem 10”
How do you find that $BC = 20$ so that $BD = DC = 10$?
Oh! I got it. Because median of right triangle drawn from the vertex of right angle to the opposite side is half the length of it's hypotenuse.
1. KOUSHIK SOM says:
yes...
| 2021-08-06T00:00:22 | {
"domain": "cheenta.com",
"url": "https://www.cheenta.com/medians-of-triangle-prmo-2018-problem-10/",
"openwebmath_score": 0.2648894786834717,
"openwebmath_perplexity": 1525.8789989510872,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9867771782476948,
"lm_q2_score": 0.6619228825191872,
"lm_q1q2_score": 0.6531703942298639
} |
http://clay6.com/qa/7087/the-money-to-be-spent-for-the-welfare-of-the-employees-of-a-firm-is-proport | # The money to be spent for the welfare of the employees of a firm is proportional to the rate of change of its total revenue (marginal revenue).If the total revenue(in rupees) received from the sale of Xunits of a product is given by R(x)=$3x^2+36x+5$,find the marginal revenue,when x=5,and write which value does the question indicate.
This question appeared in 65-1,65-2 and 65-3 versions of the paper in 2013.
Toolbox:
• If the Total Revenue is given by $f(x)$, the Marginal Revenue is the rate of change of total revenue and is nothing but the first order derivative of the function $\large \frac{df(x)}{dx}$
Given: the total revenue is $R(x)=3x^2+36x+5$.
If the Total Revenue is given by $f(x)$, the Marginal Revenue is the rate of change of total revenue and is nothing but the first order derivative of the function $\large \frac{df(x)}{dx}$
$\Rightarrow$ Differentiating $R(x)$, we get $\frac{dR(x)}{dx} = 6x+36$
If $x=5$, Marginal revenue $= 6 (5) + 36$
Therefore, Marginal revenue $= 30+36 = 66$.
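A quick numeric cross-check in Python, differentiating $R$ by a central difference:

```python
# Central-difference derivative of R(x) = 3x^2 + 36x + 5 at x = 5.
R = lambda x: 3 * x**2 + 36 * x + 5
h = 1e-6
marginal = (R(5 + h) - R(5 - h)) / (2 * h)
assert abs(marginal - 66) < 1e-4   # matches dR/dx = 6x + 36 at x = 5
```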
The value indicated by the question is the employer's concern for the welfare of the employees.
edited Mar 22, 2013 | 2017-11-24T01:57:24 | {
"domain": "clay6.com",
"url": "http://clay6.com/qa/7087/the-money-to-be-spent-for-the-welfare-of-the-employees-of-a-firm-is-proport",
"openwebmath_score": 0.8562685251235962,
"openwebmath_perplexity": 437.02433110930195,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9867771778588347,
"lm_q2_score": 0.6619228825191872,
"lm_q1q2_score": 0.6531703939724685
} |
https://encyclopediaofmath.org/wiki/Conformal_radius_of_a_domain | # Conformal radius of a domain
A characteristic of a conformal mapping of a simply-connected domain, defined as follows: Let $D$ be a simply-connected domain with more than one boundary point in the $z$- plane. Let $z _ {0}$ be a point of $D$. If $z _ {0} \neq \infty$, then there exists a unique function $w = f ( z)$, holomorphic in $D$, normalized by the conditions $f ( z _ {0} ) = 0$, $f ^ { \prime } ( z _ {0} ) = 1$, that maps $D$ univalently onto the disc $\{ {w } : {| w | < r } \}$. The radius $r = r ( z _ {0} , D )$ of this disc is called the conformal radius of $D$ relative to $z _ {0}$. If $\infty \in D$, then there exists a unique function $w = f ( z)$, holomorphic in $D$ except at $\infty$, that, in a neighbourhood of $\infty$, has a Laurent expansion of the form
$$f ( z) = z + c _ {0} + c _ {1} z ^ {-} 1 + \dots ,$$
and that maps $D$ univalently onto a domain $\{ {w } : {| w | > r } \}$. In this case the quantity $r = r ( \infty , D )$ is called the conformal radius of $D$ relative to infinity. The conformal radius of $D$, $\infty \in D$, relative to infinity is equal to the transfinite diameter of the boundary $C$ of $D$ and to the capacity of the set $C$.
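As a small numerical illustration (not part of the article): for the upper half-plane $D$ and $z_0 = i$, the Möbius map $\phi(z) = (z - i)/(z + i)$ sends $D$ onto the unit disc with $\phi(i) = 0$; rescaling by $\phi'(i)$ to enforce the normalization $f'(z_0) = 1$ shows that $r(i, D) = 1/|\phi'(i)| = 2$. The Python sketch below recovers this numerically.

```python
def phi(z):
    """Mobius map sending the upper half-plane onto the unit disc, phi(i) = 0."""
    return (z - 1j) / (z + 1j)

h = 1e-6
deriv = (phi(1j + h) - phi(1j - h)) / (2 * h)   # numerical phi'(i)
assert abs(abs(deriv) - 0.5) < 1e-6             # phi'(i) = -i/2

conformal_radius = 1 / abs(deriv)               # r(i, D) = 1/|phi'(i)|
assert abs(conformal_radius - 2) < 1e-5
```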
An extension of the notion of the conformal radius of a domain to the case of an arbitrary domain $D$ in the complex $z$- plane is that of the interior radius of $D$ relative to a point $z _ {0} \in D$( in the non-Soviet literature the term "interior radius" is used primarily in the case of a simply-connected domain). Let $D$ be a domain in the complex $z$- plane, let $z _ {0}$ be a point of $D$ and suppose that a Green function $g ( z , z _ {0} )$ for $D$ with pole at $z _ {0}$ exists. Let $\gamma$ be the Robin constant of $D$ with respect to $z _ {0}$, i.e.
$$\gamma = \ \left \{ \begin{array}{lll} \lim\limits _ {z \rightarrow z _ {0} } [ g ( z , z _ {0} ) + \mathop{\rm ln} | z - z _ {0} | ] & \textrm{ for } &z _ {0} \neq \infty , \\ \lim\limits _ {z \rightarrow \infty } [ g ( z , \infty ) - \mathop{\rm ln} | z | ] & \textrm{ for } &z _ {0} = \infty . \\ \end{array} \right.$$
The quantity $r = {e ^ \gamma }$ is called the interior radius of $D$ relative to $z _ {0}$. If $D$ is a simply-connected domain whose boundary contains at least two points, then the interior radius of $D$ relative to $z _ {0} \in D$ is equal to the conformal radius of $D$ relative to $z _ {0}$. The interior radius of a domain is non-decreasing as the domain increases: If the domains $D$, $D _ {1}$ have Green functions $g ( z _ {1} , z _ {0} )$, $g _ {1} ( z , z _ {0} )$, respectively, if $z _ {0} \in D$ and if $D \subset D _ {1}$, then the following inequality holds for their interior radii $r$, $r _ {1}$ at $z _ {0}$:
$$r \leq r _ {1} .$$
The interior radius of an arbitrary domain $D$ relative to a point $z _ {0} \in D$ is defined as the least upper bound of the set of interior radii at $z _ {0}$ of all domains containing $z _ {0}$, contained in $D$ and having a Green function. In accordance with this definition, if $D$ does not have a generalized Green function, then the interior radius $r$ of $D$ at $z _ {0} \in D$ is equal to $\infty$.
#### References
[1] G.M. Goluzin, "Geometric theory of functions of a complex variable", Transl. Math. Monogr., 26, Amer. Math. Soc. (1969) (Translated from Russian)
[2] V.I. Smirnov, A.N. Lebedev, "Functions of a complex variable", M.I.T. (1968) (Translated from Russian)
[3] W.K. Hayman, "Multivalent functions", Cambridge Univ. Press (1958)
In [a2] the conformal radius of a compact connected set $E$ in the $z$- plane is defined as the conformal radius of its complement relative to infinity (as defined above). If $E$ is contained in a disc of radius $r$ and has diameter $d \geq r$, then
$$\rho \leq r \leq 4 \rho ,$$
where $\rho$ is its conformal radius in the sense of [a2].
"domain": "encyclopediaofmath.org",
"url": "https://encyclopediaofmath.org/wiki/Conformal_radius_of_a_domain",
"openwebmath_score": 0.971750020980835,
"openwebmath_perplexity": 113.56213662043339,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9867771778588347,
"lm_q2_score": 0.6619228825191872,
"lm_q1q2_score": 0.6531703939724685
} |
https://abrahamphysics.com/mechanics/massDistributions/momentOfInertia/momentOfInertia.htm | ## Moment of Inertia
The moment of inertia describes how mass is distributed around a point. How we calculate it depends on the type of system. We call a system discrete if it is composed of only point like particles. We call a system continuous if you cannot describe your system that way. Essentially, can you invoke the particle approximation for each constituent object in your system? If yes, your system is discrete If not, it is continuous. At the moment we consider discrete systems here.
### Moment of Inertia in one dimension for a discrete system
We have a system of $$N$$ objects that can each be modeled as a particle. Each object has mass $$m_i$$ and position $$x_i$$ where $$i$$ is a index distinguishing one object from another and runs from 1 to $$N$$. The moment of inertia about the origin is $$I = \sum_{i=1}^{N} m_ix_i^2$$
For the case where the moment of inertia is calcualted about some other point, replace $$x_i$$ with the distance from the location of the particles to the axis of rotation.
### Moment of Inertia in 2D
There is nothing special about doing it in two dimensions, except you have to do it twice.
### Worked Examples
There are currently no worked examples for this section. | 2021-06-23T21:16:02 | {
"domain": "abrahamphysics.com",
"url": "https://abrahamphysics.com/mechanics/massDistributions/momentOfInertia/momentOfInertia.htm",
"openwebmath_score": 0.6263046264648438,
"openwebmath_perplexity": 196.07558195499598,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9867771778588348,
"lm_q2_score": 0.6619228825191871,
"lm_q1q2_score": 0.6531703939724685
} |
https://meangreenmath.com/2015/09/22/different-ways-of-solving-a-contest-problem-part-3/ | # Different ways of solving a contest problem (Part 3)
The following problem appeared on the American High School Mathematics Examination (now called the AMC 12) in 1988:
If $3 \sin \theta = \cos \theta$, what is $\sin \theta \cos \theta$?
When I presented this problem to a group of students, I was pleasantly surprised by the amount of creativity shown when solving this problem.
Yesterday, I presented a solution using a Pythagorean identity, but I was unable to be certain if the final answer was a positive or negative without drawing a picture. Here’s a third solution that also use a Pythagorean trig identity but avoids this difficulty. Again, I begin by squaring both sides.
$9 \sin^2 \theta = \cos^2 \theta$
$9 (1 - \cos^2 \theta) = \cos^2 \theta$
$9 - 9 \cos^2 \theta = \cos^2 \theta$
$9 = 10 \cos^2 \theta$
$\displaystyle \frac{9}{10} = \cos^2 \theta$
$\displaystyle \pm \frac{3}{\sqrt{10}} = \cos \theta$
Yesterday, I used the Pythagorean identity again to find $\sin \theta$. Today, I’ll instead plug back into the original equation $3 \sin \theta = \cos \theta$:
$3 \sin \theta = \cos \theta$
$3 \sin \theta = \displaystyle \frac{3}{\sqrt{10}}$
$\sin \theta = \displaystyle \pm \frac{1}{\sqrt{10}}$
Unlike the example yesterday, the signs of $\sin \theta$ and $\cos \theta$ must agree. That is, if $\cos \theta = \displaystyle \frac{3}{\sqrt{10}}$, then $\sin \theta = \displaystyle \frac{1}{\sqrt{10}}$ must also be positive. On the other hand, if $\cos \theta = \displaystyle -\frac{3}{\sqrt{10}}$, then $\sin \theta = \displaystyle -\frac{1}{\sqrt{10}}$ must also be negative.
If they’re both positive, then
$\sin \theta \cos \theta = \displaystyle \left( \frac{1}{\sqrt{10}} \right) \left( \frac{3}{\sqrt{10}} \right) =\displaystyle \frac{3}{10}$,
and if they’re both negative, then
$\sin \theta \cos \theta = \displaystyle \left( -\frac{1}{\sqrt{10}} \right) \left( -\frac{3}{\sqrt{10}} \right) = \displaystyle \frac{3}{10}$.
Either way, the answer must be $\displaystyle \frac{3}{10}$.
This is definitely superior to the solution provided in yesterday’s post, as there’s absolutely no doubt that the product $\sin \theta \cos \theta$ must be positive. | 2017-12-13T18:57:20 | {
"domain": "meangreenmath.com",
"url": "https://meangreenmath.com/2015/09/22/different-ways-of-solving-a-contest-problem-part-3/",
"openwebmath_score": 0.8019065856933594,
"openwebmath_perplexity": 288.4872670848113,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9867771778588347,
"lm_q2_score": 0.6619228825191872,
"lm_q1q2_score": 0.6531703939724685
} |
https://math.stackexchange.com/questions/2814735/tensor-product-identity-proof-without-explicit-construction | # Tensor product identity: proof without explicit construction
Let $R$ be a commutative ring with identity, and $M$ an $R$-module. Then $$R\otimes_R M \simeq M$$ This can be easily shown directly by construction of the isomorphism, namely $r\otimes m \mapsto rm$ for $r \in R$ and $m \in M$. Indeed, let $rm=0$. Then $r \otimes m = r\left(1\otimes m\right) = 1 \otimes rm = 1 \otimes 0 = 0$. Hence the homomorphism is injective (I skip the proof that the map in question is indeed a homomorphism; this is straightforward). On the other hand, for any $m\in M$, $1\otimes m$ gets mapped to $m$, thus showing surjectivity.
My question is, does anyone know a way to prove the same result without explicit construction of the isomorphism, i.e. via the universal property?
• Basically, bilinear maps from $R\times M$ are the same as linear maps from $M$. – Angina Seng Jun 10 '18 at 15:33
Let $Z$ be an arbitrary $R$-module and consider any $R$-bilinear map $\phi: R\times M\to Z$. Also consider the natural $R$-bilinear map $\gamma: R\times M\to M$ given by $(r,m)\mapsto rm$. Note that by $R$-bilinearity $\phi(r,m)=\phi(1,rm)$.
Using these two, we construct $\psi: M\to Z$ as follows: $\psi(m)=\phi(1,m)$. Now it is easy to show that $\phi=\psi\circ \gamma$. Hence, $M$ satisfies the universal property of tensor product, so we must have $M\simeq R\otimes M$.
• This might be a dumb question, but what about uniqueness of the map $\psi$? You showed existence of such a map fitting into a commutative diagram. However, for the universal property to be satisfied the map must also be unique. Did I overlook this part in your arguments? – Schief Aug 11 '18 at 14:41
• If $f$ is any map satisfying $\phi=f\circ \gamma$ then $\phi(1,m)=f(\gamma(1,m))=f(m)$. Meaning, there is exactly one such map, namely $\psi$. – Hamed Aug 11 '18 at 14:48 | 2020-09-25T04:09:58 | {
"domain": "stackexchange.com",
"url": "https://math.stackexchange.com/questions/2814735/tensor-product-identity-proof-without-explicit-construction",
"openwebmath_score": 0.9897511005401611,
"openwebmath_perplexity": 85.50428538567193,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9867771776644046,
"lm_q2_score": 0.6619228825191871,
"lm_q1q2_score": 0.6531703938437707
} |
https://math.stackexchange.com/questions/2420304/variance-in-central-limit-theorem-for-dependent-random-variables | # Variance in central limit theorem for dependent random variables.
I am reading some notes and struggling to show something which should be basic. I will write out the problem below. We have a measure-preserving dynamical system $(X,T,\mu)$, and a measurable map $f:X\to \mathbb{R}$. We consider the sequence of identically distributed (although dependent) random variables $f, f\circ T, f\circ T^2, \ldots$. Moreover, we suppose that $\mu$ is mixing, i.e., $\lim\limits_{n\to\infty}\int_X (\psi\circ T^n)\phi \,d\mu=\left(\int\limits_X \psi \,d\mu\right)\left(\int\limits_X \phi \,d\mu\right)$.
Under some conditions (which are detailed in chapter 2 of https://vaughnclimenhaga.wordpress.com/2013/03/17/spectral-methods-3-central-limit-theorem/), the central limit theorem holds with variance $\sigma^2=\sum\limits_{n\in\mathbb{Z}}\int_X f\cdot (f\circ T^n)d\mu$.
If we write $S_nf=\sum\limits_{k=0}^{n-1}f\circ T^k$, the notes say that $\sigma^2$ can be written as $\sigma^2=\lim\limits_{n\to\infty}\frac{1}{n}\int (S_nf)^2d\mu$.
I am struggling to show why this is true. I feel an argument such as this should work but am evidently going wrong somewhere:
$\sigma^2=\lim\limits_{n\to\infty}\sum_{k=0}^{n-1}\int\limits_Xf\cdot f\circ T^kd\mu=\lim\limits_{n\to\infty}\int_X f\cdot S_nf d\mu=\lim\limits_{n\to\infty}\int_X\frac{S_n}{n}S_nd\mu=\lim\limits_{n\to\infty}\frac{1}{n}\int_X (S_nf)^2d\mu$
where for the third equality I fudged over and used the $T$ - invariance of $\mu$.
Could someone please tell me where (if anywhere) I am going wrong? Thanks!
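As an aside, a quick numeric sketch of the claimed identity, using the doubling map as an assumed concrete example: take $T(x)=2x \bmod 1$ on $[0,1)$ with Lebesgue measure and $f(x)=\cos(2\pi x)$. Then $\int_X f\cdot(f\circ T^n)\,d\mu$ equals $1/2$ for $n=0$ and $0$ otherwise, so $\sigma^2 = 1/2$, and a Monte Carlo estimate of $\frac{1}{n}\int (S_nf)^2\,d\mu$ should come out close to $1/2$.

```python
import math
import random

# Doubling map T(x) = 2x mod 1 with f(x) = cos(2*pi*x); the correlations
# int f*(f o T^n) dmu vanish for n != 0 and equal 1/2 for n = 0.
random.seed(0)
n, samples = 10, 200_000
acc = 0.0
for _ in range(samples):
    x = random.random()
    s = 0.0
    for _ in range(n):  # S_n f = sum_{k=0}^{n-1} f(T^k x)
        s += math.cos(2 * math.pi * x)
        x = (2 * x) % 1.0
    acc += s * s
estimate = acc / samples / n  # ~ (1/n) * E[(S_n f)^2]
assert abs(estimate - 0.5) < 0.05
```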
If $\sum_{n\in\Bbb{Z}}|\int_Xf\cdot(f\circ T^n)d\mu|<\infty$, then \begin{align} \frac1n\int_X(S_nf)^2d\mu&=\frac1n\sum_{i,j=0}^{n-1}\int_X (f\circ T^i)\cdot (f\circ T^j) d\mu =\frac1n\sum_{i,j=0}^{n-1} \int_Xf\cdot (f\circ T^{j-i})d\mu\\ &=\sum_{k=-n+1}^{n-1}\frac{n-|k|}n\int_Xf\cdot (f\circ T^{k})d\mu\to \sum_{k\in\Bbb{Z}}\int_Xf\cdot(f\circ T^k)d\mu\qquad \text{as $n\to\infty$}. \end{align} | 2019-07-21T15:34:44 | {
"domain": "stackexchange.com",
"url": "https://math.stackexchange.com/questions/2420304/variance-in-central-limit-theorem-for-dependent-random-variables",
"openwebmath_score": 0.9874310493469238,
"openwebmath_perplexity": 679.976418714891,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9867771774699747,
"lm_q2_score": 0.6619228825191872,
"lm_q1q2_score": 0.6531703937150732
} |
http://zbmath.org/?q=an:1205.35099 | # zbMATH — the first resource for mathematics
Existence and multiplicity results for a nonlinear stationary Schrödinger equation. (English) Zbl 1205.35099
Summary: We revisit Kristály’s result on the existence of weak solutions of the Schrödinger equation of the form
$$-\Delta u + a(x)u = \lambda\, b(x) f(u), \qquad x\in \mathbb{R}^N,\quad u\in H^{1}(\mathbb{R}^{N}),$$
where $\lambda$ is a positive parameter, $a$ and $b$ are positive functions, while $f:\mathbb{R}\to \mathbb{R}$ is sublinear at infinity and superlinear at the origin. In particular, by using Ricceri’s recent three critical points theorem, we show that, under the same hypotheses, a much more precise conclusion can be obtained.
##### MSC:
35J61 Semilinear elliptic equations; 35J20 Second order elliptic equations, variational methods; 35B45 A priori estimates for solutions of PDE | 2014-04-18T05:56:09 | {
"domain": "zbmath.org",
"url": "http://zbmath.org/?q=an:1205.35099",
"openwebmath_score": 0.8184961676597595,
"openwebmath_perplexity": 6885.258666866859,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9867771774699746,
"lm_q2_score": 0.6619228825191872,
"lm_q1q2_score": 0.6531703937150731
} |
http://mathhelpforum.com/algebra/280860-more-come-me-s-been-while.html | # Thread: More to come from me !! It’s been a while
1. ## More to come from me !! It’s been a while
Solve a^2 - 7a - 30 = 0
2. ## Re: More to come from me !! It’s been a while
I would factor here...can you name two factors of -30 whose sum is -7?
3. ## Re: More to come from me !! It’s been a while
Oh I see, it is -10 and 3, but then my question is what is done with the 0 after the GCF is found
4. ## Re: More to come from me !! It’s been a while
Also, what happens to the a in 7a?
5. ## Re: More to come from me !! It’s been a while
Originally Posted by Eddyrodriguez
Oh I see, it is -10 and 3, but than my question is what is done with the 0 after the GCF is found
Yes, and so we may write:
$\displaystyle a^2-7a-30=(a-10)(a+3)=0$
Now, equate each factor to zero, and solve for a to get the two roots.
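A quick brute-force check (my addition) that the roots obtained this way really satisfy the original equation, and that the factored form matches the expanded quadratic:

```python
# integer search for roots of a^2 - 7a - 30 = 0
roots = [a for a in range(-50, 51) if a * a - 7 * a - 30 == 0]
assert roots == [-3, 10]

# (a - 10)(a + 3) expands back to a^2 - 7a - 30 for every test value
assert all((a - 10) * (a + 3) == a * a - 7 * a - 30 for a in range(-50, 51))
```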
6. ## Re: More to come from me !! It’s been a while
The basic point in solving problems like this is the "zero product property": "if ab= 0 then either a= 0 or b= 0". Once you have that $\displaystyle (a- 10)(a+ 3)= 0$ then we must have either $\displaystyle a- 10= 0$ or $\displaystyle a+ 3= 0$. That gives, of course, two different values for a either of which satisfies the original equation. | 2018-12-12T01:20:39 | {
"domain": "mathhelpforum.com",
"url": "http://mathhelpforum.com/algebra/280860-more-come-me-s-been-while.html",
"openwebmath_score": 0.8104525804519653,
"openwebmath_perplexity": 1081.9253046340684,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9867771774699747,
"lm_q2_score": 0.6619228825191871,
"lm_q1q2_score": 0.6531703937150731
} |
https://testbook.com/question-answer/the-following-balances-are-extracted-at-the-end-of--6156d02786a3de6005a9ca8c | # The following balances are extracted at the end of the accounting period from the books of Radhey Shyam as follows:Plant & Machinery Rs. 2,00,000Furniture Rs. 50,000Building Rs. 5,00,000Depreciation is to be charged:20% on plant & machinery, 10% on furniture and 5% on Building. Calculate the amount of depreciation to be charged in the Profit and Loss account.
This question was previously asked in
UPPCL Assistant Accountant 29 Jan 2019 Official Paper
1. Rs. 45,000
2. Rs. 7,000
3. Rs. 70,000
4. Rs. 40,000
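The working is straight-line depreciation on each asset (a quick arithmetic check, my addition):

```python
# 20% of 2,00,000 + 10% of 50,000 + 5% of 5,00,000 (all in Rs.)
plant = 200_000 * 20 // 100      # 40,000
furniture = 50_000 * 10 // 100   # 5,000
building = 500_000 * 5 // 100    # 25,000
total_depreciation = plant + furniture + building
assert total_depreciation == 70_000
```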
Option 3 : Rs. 70,000 | 2022-01-28T08:02:24 | {
"domain": "testbook.com",
"url": "https://testbook.com/question-answer/the-following-balances-are-extracted-at-the-end-of--6156d02786a3de6005a9ca8c",
"openwebmath_score": 0.8471425771713257,
"openwebmath_perplexity": 13285.621478757124,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9867771774699746,
"lm_q2_score": 0.6619228825191871,
"lm_q1q2_score": 0.653170393715073
} |
https://analysisofmath.wordpress.com/tag/liebniz-series/ | # A Beautiful Convergent Series
In this post I want to find the value of the sum
$\displaystyle\sum_{n=1}^{\infty}\frac{3^n-1}{4^n} \zeta \left(n+1 \right)$
Note that for $s>1$, $\zeta(s)=\displaystyle\sum_{n=1}^{\infty}\frac{1}{n^s}$.
I use the following formula, which is called the Gregory–Leibniz–Madhava series:
$\displaystyle\sum_{n=1}^{\infty}\frac{1}{4n-3}-\frac{1}{4n-1}=\frac{\pi}{4}$
If we define $f$ function as follows:
$\displaystyle{f\left(z\right)=\frac{z}{4-3z}-\frac{z}{4-z}}$
then
$\displaystyle{f(z)=\frac{\frac{z}{4}}{1-\frac{3z}{4}}-\frac{\frac{z}{4}}{1-\frac{z}{4}}=\frac{z}{4}\sum_{n=0}^{\infty}\left(\frac{3z}{4}\right)^n-\left(\frac{z}{4}\right)^n=\frac{z}{4}\sum_{n=1}^{\infty}\left(\frac{3z}{4}\right)^n-\left(\frac{z}{4}\right)^n}$
$\displaystyle{=\frac{z}{4}\sum_{n=2}^{\infty}\left(\frac{3z}{4}\right)^{n-1}-\left(\frac{z}{4}\right)^{n-1}=\frac{1}{4}\sum_{n=2}^{\infty}\left[\left(\frac{3}{4}\right)^{n-1}-\left(\frac{1}{4}\right)^{n-1}\right]z^n=\frac{1}{4}\sum_{n=2}^{\infty}\frac{3^{n-1}-1}{4^{n-1}}z^n}.$
Now I use this theorem
THEOREM (Flajolet–Vardi): If $f\left(z \right)=\displaystyle\sum_{n=2}^{\infty}a_{n}z^n$ and $\displaystyle\sum_{n=2}^{\infty}|a_n|$ converges then,
$\displaystyle\sum_{n=1}^{\infty}f\left(\frac{1}{n}\right)=\sum_{n=2}^{\infty}a_n\,\zeta\left(n\right)$.
PROOF:
Because $\displaystyle\sum_{m=2}^{\infty}|a_m|<\infty$,
$\displaystyle\sum_{n=1}^{\infty}\displaystyle\sum_{m=2}^{\infty}|a_m|\frac{1}{n^m}\leq\displaystyle\sum_{n=1}^{\infty}\displaystyle\sum_{m=2}^{\infty}|a_m|\frac{1}{n^2}<\infty$
Hence, by Cauchy’s double series theorem, we can switch the order of summation:
$\displaystyle{\sum_{n=1}^{\infty}f\left(\frac{1}{n}\right)=\sum_{n=1}^{\infty}\sum_{m=2}^{\infty}a_m\frac{1}{n^m}=\sum_{m=2}^{\infty}a_m\sum_{n=1}^{\infty}\frac{1}{n^m}=\sum_{n=2}^{\infty}a_n\zeta(n)}$
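Before applying the theorem, here is a numerical sanity check (my addition) that, for the $f$ defined above, both sides of the theorem's conclusion agree with $\pi/4$: the left side is just the Gregory–Leibniz–Madhava series, and the right side, with $a_n = \frac{3^{n-1}-1}{4^n}$, is evaluated using a truncated zeta sum with an Euler–Maclaurin tail correction.

```python
import math

def zeta(s, K=2000):
    # truncated Dirichlet series plus an Euler-Maclaurin tail correction
    return sum(k ** -s for k in range(1, K + 1)) + K ** (1 - s) / (s - 1) - K ** -s / 2

# Left side: sum_{n>=1} f(1/n) = sum 1/(4n-3) - 1/(4n-1), the Leibniz series
lhs = sum(1 / (4 * n - 3) - 1 / (4 * n - 1) for n in range(1, 10 ** 6 + 1))

# Right side: sum_{n>=2} a_n * zeta(n) with a_n = (3^(n-1) - 1) / 4^n
rhs = sum((3 ** (n - 1) - 1) / 4 ** n * zeta(n) for n in range(2, 80))

assert abs(lhs - math.pi / 4) < 1e-5
assert abs(rhs - math.pi / 4) < 1e-8
assert abs(4 * rhs - math.pi) < 1e-7
```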
This theorem implies that
$\displaystyle{\frac{\pi}{4}=\sum_{n=1}^{\infty}f\left(\frac{1}{n}\right)=\frac{1}{4}\sum_{n=2}^{\infty}\frac{3^{n-1}-1}{4^{n-1}}\zeta(n)=\frac{1}{4}\sum_{n=1}^{\infty}\frac{3^{n}-1}{4^{n}}\zeta(n+1)}$
and
$\boxed{\displaystyle\sum_{n=1}^{\infty}\frac{3^n-1}{4^n} \zeta \left(n+1 \right)=\pi}$ | 2021-07-27T19:31:23 | {
"domain": "wordpress.com",
"url": "https://analysisofmath.wordpress.com/tag/liebniz-series/",
"openwebmath_score": 0.9943063855171204,
"openwebmath_perplexity": 798.0821168680992,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9867771770811146,
"lm_q2_score": 0.6619228825191872,
"lm_q1q2_score": 0.6531703934576778
} |
https://aquazorcarson.wordpress.com/2013/03/09/product-of-gaussian-probability-densities/ | product of Gaussian probability densities
So lately I have been following Udacity’s wonderful course on robotics and in particular was fascinated by the procedure of Kalman filter. My attempt here will be to expound on certain mathematical subtleties and missing deductive steps in the videos.
The first one has to do with updating a prediction by incorporating an observation. Somehow in my entire graduate education in probability the notion of products of density functions was de-emphasized, or maybe I was too prematurely drawn into research to take advantage of the vast number of applied courses that do teach this stuff.
Imagine we have two independent observations of the same 1-dimensional quantity x, represented by $X_1$ and $X_2$, and suppose somehow we also know the variances $\sigma_1^2$ and $\sigma_2^2$. The question now is what will be the best estimate of what x actually is based on these two observations? Here by estimate we can only mean a single number. As a perhaps stupid remark, for single observations, the observed value is the unbiased minimum variance estimate of the mean itself: so the UMV estimator of $\mu_1$ given $X_1$ is in fact $X_1$.
Obviously some parametric assumption is needed. For instance one can assume the estimate takes the form $\hat{\mu} = \alpha X_1 + (1- \alpha) X_2$ for some $\alpha \in [0,1]$, that is, the final estimate is a convex combination of the two individual ones. This assumption is in fact more natural than I first thought, since it guarantees the estimator will be unbiased.
So now we have to find a suitable $\alpha$. What property do we desire in the final estimate $\hat{x}$, treated as a random variable? Why not stipulate that its variance is minimized in addition to being unbiased (i.e., UMV)? This then becomes an interesting optimization problem:
$\displaystyle\min_\alpha \operatorname{var}\left[\alpha X_1 + (1 -\alpha) X_2\right]$. One can easily solve this to obtain
$\alpha^* = \sigma_2^2 / (\sigma_1^2 + \sigma_2^2)$.
Plugging in, we also get the following new estimate of the variance:
$\hat{\sigma}^2 = (\sigma_2^4 \sigma_1^2 + \sigma_1^4 \sigma_2^2) / (\sigma_1^2 + \sigma_2^2)^2 = 1/ (\sigma_1^{-2} + \sigma_2^{-2})$.
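A minimal numeric sketch (my addition, with example variances assumed purely for illustration) confirming both the optimal $\alpha$ and the harmonic-mean form of the combined variance:

```python
s1_sq, s2_sq = 2.0, 5.0  # assumed example variances

def combined_var(a):
    # variance of a*X1 + (1-a)*X2 for independent X1, X2
    return a * a * s1_sq + (1 - a) ** 2 * s2_sq

# grid minimization vs. the closed form alpha* = s2^2 / (s1^2 + s2^2)
a_best = min((i / 10_000 for i in range(10_001)), key=combined_var)
a_star = s2_sq / (s1_sq + s2_sq)
assert abs(a_best - a_star) < 1e-3

# combined variance equals 1 / (1/s1^2 + 1/s2^2)
assert abs(combined_var(a_star) - 1 / (1 / s1_sq + 1 / s2_sq)) < 1e-12
```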
So what does this have to do with product of Gaussian pdf’s? If you multiply the two Gaussian densities and renormalize, you get exactly the same result as above! First of all, it’s remarkable that the resulting product is again proportional to a Gaussian. The easiest way to see is by multiplying things out:
$(x-\mu_1)^2 / \sigma_1^2 + (x -\mu_2)^2 / \sigma_2^2 =$ constant + the exponent of a Gaussian with the above mean and variance.
but conceptually the Fourier transform of a Gaussian is still Gaussian, and convolution of Gaussians is Gaussian, hence the closure extends to products. Note also convolution will increase the variance, but Fourier transform takes a big variance Gaussian to a small variance one, so the renormalized product will get skinnier.
If one thinks about maximal likelihood estimator, if $\mu$ is the true mean, then given two independent observations, $X_1$ and $X_2$, their joint density is proportional to $e^{-(X_1 - \mu)^2/ \sigma_1^2 - (X_2 - \mu)^2/ \sigma_2^2}$. The MLE of $\mu$ is precisely $\hat{x}$ (as obtained by optimizing the above density over $\mu$). The variance of $\hat{x}$ similarly inherits from those of $X_1$ and $X_2$, and should coincide with $\hat{\sigma}^2$ given above. In fact the distribution of $\hat{x}$ will be Gaussian.
p.s. At first I was trying to derive the MVUE from the point of view of Bayesian update, by letting the distribution of $X_1$ be the prior and that of $X_2$ the conditional likelihood function (in this case conditioning doesn’t do anything). I cannot make sense of it. So if someone is an expert in this, please enlighten me! | 2018-01-17T11:08:15 | {
"domain": "wordpress.com",
"url": "https://aquazorcarson.wordpress.com/2013/03/09/product-of-gaussian-probability-densities/",
"openwebmath_score": 0.9224775433540344,
"openwebmath_perplexity": 270.01644361246974,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9867771770811146,
"lm_q2_score": 0.6619228825191872,
"lm_q1q2_score": 0.6531703934576778
} |
https://www.vedantu.com/maths/factors-of-120 | # Factors of 120
## Prime Factorization of 120
The factors of 120 are all the numbers that multiply together in pairs to give 120; equivalently, they are the numbers that divide 120 exactly. Factors of other numbers, such as 56 or 90, are found in the same way. The prime factors of 120 are the prime numbers that appear in its factorization, which you can find using the multiplication method. The multiplication method gives you the prime factorization of 120, and from it you can recover all the factors of the number. In this article, we will learn in detail what the factors of 120 are, what the prime factorization of 120 is, and how to find the prime factorization of 120 using a factor tree.
### Factor Pairs of 120
The factor pairs of 120 refer to all the different combinations of two factors of 120 which you multiply together to get 120. Creating all the factor pairs of 120 is a two-step process: first, list all the factors of 120; then, pair up the different combinations of these factors that multiply to 120.
All factors of 120 include 1, 2, 3, 4, 5, 6, 8, 10, 12, 15, 20, 24, 30, 40, 60, and 120.
All of the different pair combinations from these factors of 120 are called the factor pairs of 120. Given below is the list of all the factor pairs of 120. As you can see, each factor pair multiplies together to give 120.
• 1 x 120 = 120
• 2 x 60 = 120
• 3 x 40 = 120
• 4 x 30 = 120
• 5 x 24 = 120
• 6 x 20 = 120
• 8 x 15 = 120
• 10 x 12 = 120
• 12 x 10 = 120
• 15 x 8 = 120
• 20 x 6 = 120
• 24 x 5 = 120
• 30 x 4 = 120
• 40 x 3 = 120
• 60 x 2 = 120
• 120 x 1 = 120
The factors of 120 include the negative numbers as well, since minus times minus results in plus. Hence you can convert all the positive factor pairs simply by putting a minus sign in front of each factor, and you will get all the negative factor pairs of 120.
• -1 × -120 = 120
• -2 × -60 = 120
• -3 × -40 = 120
• -4 × -30 = 120
• -5 × -24 = 120
• -6 × -20 = 120
• -8 × -15 = 120
• -10 × -12 = 120
• -12 × -10 = 120
• -15 × -8 = 120
• -20 × -6 = 120
• -24 × -5 = 120
• -30 × -4 = 120
• -40 × -3 = 120
• -60 × -2 = 120
• -120 × -1 = 120
### Prime Factorization of 120
Let us now learn about the prime factorization of 120 using a factor tree.
120 is a composite number. Hence, its prime factorization is as follows:
(Factor tree: 120 → 2 × 60 → 2 × 2 × 30 → 2 × 2 × 2 × 15 → 2 × 2 × 2 × 3 × 5)
1. The first step is dividing the number 120 by the smallest prime factor, which is 2, and continuing to divide by 2 until you get a fraction.
Doing so, you get:
120 ÷ 2 = 60
60 ÷ 2 = 30
30 ÷ 2 = 15
15 ÷ 2 = 7.5
Since 7.5 is not a whole number, 2 divides no further; move on to the next prime.
2. Now, proceed to the next prime number, which is 3, and continue to divide till you get a fraction or 1. Doing so, you get:
15 ÷ 3 = 5
5 ÷ 3 ≈ 1.67, which is not a whole number, so 3 divides no further.
3. Therefore, move to the next prime number, which is 5, and continue the division process. You get:
5 ÷ 5 = 1
You have reached 1 at the end of the division process and cannot proceed further.
Hence, the prime factors of 120 are 2 × 2 × 2 × 3 × 5,
You can also write this as $2^{3}$ × 3 × 5. Here, the numbers 2, 3 and 5 are the prime numbers.
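The factor list and the prime factorization above can be verified with a short script (my addition):

```python
def factors(n):
    # all positive divisors of n
    return [d for d in range(1, n + 1) if n % d == 0]

def prime_factorization(n):
    # repeated division by the smallest prime, mirroring the steps above
    out, p = [], 2
    while p * p <= n:
        while n % p == 0:
            out.append(p)
            n //= p
        p += 1
    if n > 1:
        out.append(n)
    return out

assert factors(120) == [1, 2, 3, 4, 5, 6, 8, 10, 12, 15, 20, 24, 30, 40, 60, 120]
assert prime_factorization(120) == [2, 2, 2, 3, 5]
assert len(factors(120)) == 16
```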
1. What are all the factors of 120?
The positive factors of 120 are called all the numbers that you use to divide the number 120 to get an even number. Here is the list of all the positive factors of the number 120 in the numerical order:
1, 2, 3, 4, 5, 6, 8, 10, 12, 15, 20, 24, 30, 40, 60, and 120
The factors of 120 include negative numbers as well. Hence, all the positive factors of the number 120 can be converted to negative numbers. The list of all the negative factors of 120 is given below:
-1, -2, -3, -4, -5, -6, -8, -10, -12, -15, -20, -24, -30, -40, -60, and -120.
2. How many total factors of 120 are there?
According to number theory, the prime factors of a positive integer are the prime numbers that divide that integer exactly. The prime factorization of a given positive integer is the list of all the integer's prime factors along with their multiplicities. The process of determining these factors is known as integer factorization.
The factors of 120 are all the integers, both positive and negative whole numbers, that divide evenly into 120. The number 120 divided by a factor of 120 equals another factor of 120. Counting them, 120 has 16 positive factors and 16 negative factors, so the total number of factors of the number 120 is 32. | 2020-09-23T14:03:41 | {
"domain": "vedantu.com",
"url": "https://www.vedantu.com/maths/factors-of-120",
"openwebmath_score": 0.5938757061958313,
"openwebmath_perplexity": 312.6446849400106,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9867771770811146,
"lm_q2_score": 0.6619228825191872,
"lm_q1q2_score": 0.6531703934576778
} |
https://math.stackexchange.com/questions/2656819/can-any-large-number-n-in-mathbbz-be-written-as-a-sum-of-a-square-and-a-squ | # Can any large number $N\in \mathbb{Z}$ be written as a sum of a square and a square-free?
I'm reading an article by Dan Carmon on square free values of large polynomials over the rational function field. In this article he states the following question:
Does every sufficiently large $N\in \mathbb{Z}$ admit a representation as a sum $N = x^k+r$ of a positive $k$-th power and a positive square-free? How many such representations are there, asymptotically?
He then states that this has been proven for $k=2$ and $k=3$ by Estermann. I'm trying to find these proofs, but I'm not yet succeeding. Any idea how to prove this or where I can find the proofs?
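As an aside (my addition), a quick brute-force experiment for the $k=2$ case shows why "sufficiently large" is genuinely needed: $N=13$ has no such representation, since every candidate $13 - x^2 \in \{12, 9, 4\}$ fails to be square-free, and it appears to be the only exception below 200.

```python
def squarefree(r):
    d = 2
    while d * d <= r:
        if r % (d * d) == 0:
            return False
        d += 1
    return True

def has_rep(N):
    # N = x^2 + r with x >= 1 and r >= 1 square-free
    return any(squarefree(N - x * x)
               for x in range(1, int(N ** 0.5) + 1) if x * x < N)

exceptions = [N for N in range(2, 200) if not has_rep(N)]
assert exceptions == [13]
```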
At the end of the paper he states that "We can prove similarly the more general theorem $n=p^k+g$, where $k$ is a given exponent and $g$ is free from $k$-th power divisors." | 2021-03-05T01:19:19 | {
"domain": "stackexchange.com",
"url": "https://math.stackexchange.com/questions/2656819/can-any-large-number-n-in-mathbbz-be-written-as-a-sum-of-a-square-and-a-squ",
"openwebmath_score": 0.9405708312988281,
"openwebmath_perplexity": 80.66669689746232,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9867771770811146,
"lm_q2_score": 0.6619228825191872,
"lm_q1q2_score": 0.6531703934576778
} |
https://www.shaalaa.com/question-bank-solutions/the-mean-following-numbers-68-find-value-x-45-52-60-x-69-70-26-81-94-hence-estimate-median-median-grouped-data_19015 | Share
# The Mean of the Following Numbers is 68. Find the Value of ‘X’: 45, 52, 60, X, 69, 70, 26, 81 and 94. Hence Estimate the Median. - Mathematics
#### Question
The mean of following numbers is 68. Find the value of ‘x’.
45, 52, 60, x, 69, 70, 26, 81 and 94
Hence estimate the median.
#### Solution
Mean = (Sum of all observations) / (Total number of observations)
∴ 68 = (45 + 52 + 60 + x + 69 + 70 + 26 + 81 + 94)/9
⇒ 68 = (497 + x)/9
⇒ 612= 497+ x
⇒ x = 612 - 497
⇒ x = 115
Data in ascending order
26, 45, 52, 60, 69, 70, 81, 94, 115
Since the number of observations is odd, the median is the ((n + 1)/2)th observation.
⇒ Median = ((9 + 1)/2)th observation = 5th observation.
Hence, the median is 69
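Both steps, solving for x from the mean and then picking the middle observation, can be reproduced with a short script (my addition):

```python
data = [45, 52, 60, 69, 70, 26, 81, 94]        # the eight known numbers
x = 68 * 9 - sum(data)                          # mean * count - known sum
assert x == 115

ordered = sorted(data + [x])
median = ordered[(len(ordered) + 1) // 2 - 1]   # 5th of 9 observations
assert median == 69
```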
#### APPEARS IN
Selina Solution for Concise Mathematics for Class 10 ICSE (2020 (Latest))
Chapter 24: Measure of Central Tendency(Mean, Median, Quartiles and Mode)
Exercise 24(E) | Q: 20 | Page no. 377
2015-2016 (March) (with solutions)
Question 1.3 | 3.00 marks | 2020-07-08T22:53:21 | {
"domain": "shaalaa.com",
"url": "https://www.shaalaa.com/question-bank-solutions/the-mean-following-numbers-68-find-value-x-45-52-60-x-69-70-26-81-94-hence-estimate-median-median-grouped-data_19015",
"openwebmath_score": 0.3550638258457184,
"openwebmath_perplexity": 2677.94881643834,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9867771770811147,
"lm_q2_score": 0.6619228825191871,
"lm_q1q2_score": 0.6531703934576777
} |