\begin{document}
\begin{abstract} We prove the Liv\v{s}ic Theorem for H\"{o}lder continuous cocycles with values in Banach rings. We consider a transitive homeomorphism ${\ensuremath{\mathbf{\sigma}}:X\to X}$ that satisfies the Anosov Closing Lemma, and a H\"{o}lder continuous map ${a:X\to B^\times}$ from a compact metric space $X$ to the set of invertible elements of some Banach ring $B$. We show that it is a coboundary with a H\"{o}lder continuous transition function if and only if ${a(\ensuremath{\mathbf{\sigma}}^{n-1}p)\ldots a(\ensuremath{\mathbf{\sigma}} p)a(p)=e}$ for each periodic point $p=\ensuremath{\mathbf{\sigma}}^n p$.
\end{abstract}
\title{Liv\v{s}ic Theorem for Banach Rings}
\section{Introduction}
We assume that $X$ is a compact metric space, $G$ a complete metric group, and $\ensuremath{\mathbf{\sigma}}:X\to X$ a homeomorphism.
We say that a map $a:\mathbb{Z}\times X\to G$ is {\it a cocycle} over \ensuremath{\mathbf{\sigma}}\ if
$$a(n,x)=a(n-k,\ensuremath{\mathbf{\sigma}}^kx)a(k,x)\quad\text{for any }n,k\in\ensuremath{\mathbb{Z}}$$
Every map $a:X\to G$ generates a cocycle $a(n,x)$ defined as $$a(n,x)=a(\ensuremath{\mathbf{\sigma}}^{n-1}x)a(\ensuremath{\mathbf{\sigma}}^{n-2}x)\ldots a(x) \quad n>0$$
$$a(0,x)=Id$$
$$a(n,x)= a^{-1}(\ensuremath{\mathbf{\sigma}}^{n}x)\ldots a^{-1}(\ensuremath{\mathbf{\sigma}}^{-2}x)a^{-1}(\ensuremath{\mathbf{\sigma}}^{-1}x)\quad n<0$$
We see that $a(1,x)=a(x)$. In this paper we consider only cocycles generated by H\"older continuous maps $a:X\to G$.
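For instance, for $0<k<n$ the cocycle identity can be checked directly from this definition by splitting the product:
$$a(n,x)=\bigl(a(\ensuremath{\mathbf{\sigma}}^{n-1}x)\ldots a(\ensuremath{\mathbf{\sigma}}^{k}x)\bigr)\bigl(a(\ensuremath{\mathbf{\sigma}}^{k-1}x)\ldots a(x)\bigr)=a(n-k,\ensuremath{\mathbf{\sigma}}^kx)a(k,x)$$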
We say that a H\"older continuous map $a:X\to G$ is a {\it coboundary } (or more precisely generates a cocycle which is a coboundary) if there is a H\"older continuous function $t:X\to G$ such that
$$ a(x)=t(\ensuremath{\mathbf{\sigma}} x)t^{-1}(x)$$
The function $t(x)$ is called a {\it transition map}.
If $a(x)$ is a coboundary then it is clear that
$$a(n,x)=t(\ensuremath{\mathbf{\sigma}}^n x)t^{-1}(x)$$
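Indeed, the product defining $a(n,x)$ telescopes: for $n>0$,
$$a(n,x)=t(\ensuremath{\mathbf{\sigma}}^n x)t^{-1}(\ensuremath{\mathbf{\sigma}}^{n-1}x)\,t(\ensuremath{\mathbf{\sigma}}^{n-1}x)t^{-1}(\ensuremath{\mathbf{\sigma}}^{n-2}x)\ldots t(\ensuremath{\mathbf{\sigma}} x)t^{-1}(x)=t(\ensuremath{\mathbf{\sigma}}^n x)t^{-1}(x)$$
and the cases $n\le 0$ follow from the cocycle identity.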
The question of whether a given cocycle is a coboundary appears naturally in many important problems in dynamical systems.
There is a simple necessary condition for a cocycle to be a coboundary. If $a(x)$ is a coboundary and $p\in X$ is a periodic point $\ensuremath{\mathbf{\sigma}}^n p=p$ then
$$a(\ensuremath{\mathbf{\sigma}}^{n-1}p)\ldots a(\ensuremath{\mathbf{\sigma}} p)a(p)=a(n,p)=t(\ensuremath{\mathbf{\sigma}}^n p)t^{-1}(p)=e$$
where $e$ is the identity element in the group $G$.
We say that for a cocycle $a(n,x)$ {\it the periodic obstructions vanish} if
\begin{equation}\label{e0} a(\ensuremath{\mathbf{\sigma}}^{n-1}p)\ldots a(\ensuremath{\mathbf{\sigma}} p)a(p)=e\quad\forall p\in X\text{ with } \ensuremath{\mathbf{\sigma}}^np=p,\ n\in\mathbb{N} \end{equation}
A.~Liv\v{s}ic (see \cite{L1,L2}) proved that when \ensuremath{\mathbf{\sigma}}\ is a transitive Anosov map and the group $G$ is $\mathbb{R}$ or $\mathbb{R}^n$, a cocycle $a(x)$ is a coboundary if and only if the periodic obstructions vanish. This result is called the Liv\v{s}ic theorem. The proof of the Liv\v{s}ic theorem for other groups turned out to be harder. Nevertheless, in the last twenty years it was shown in a series of papers (see \cite{BN,PW,P,KS,NT,LW}) that for some groups, under an additional assumption on the growth rate of the cocycle $a(n,x)$, condition (\ref{e0}) is also sufficient. For example, it was shown in \cite{BN} that if $G=B^\times$ is the set of invertible elements of some Banach algebra, the periodic obstructions vanish, and $a(x)$ is close to the identity element $e$, then $a(x)$ is a coboundary.
The question remained whether this additional assumption follows from the fact that the products along periodic orbits are equal to $e$. In 2011 B.~Kalinin \cite{Ka} made a breakthrough by proving the Liv\v{s}ic theorem for functions with values in $GL(n,\mathbb{R})$, and more generally in a connected Lie group, assuming only that condition (\ref{e0}) is satisfied.
He used Lyapunov exponents for different invariant measures to estimate the growth rate of the cocycle, and then approximated the Lyapunov exponents of arbitrary invariant measures by Lyapunov exponents at periodic points. The latter step used the Oseledets Theorem. In this paper we prove that a cocycle with values in the invertible elements of a Banach ring is a coboundary if and only if the periodic obstructions vanish. There is no analog of the Oseledets Theorem for Banach rings (or even Banach algebras). Still, we can define analogs of the highest and lowest Lyapunov exponents and, using a different argument, show that they can be approximated by the values of the cocycle at periodic points. Examples of Banach rings include Banach algebras, as well as Banach algebras with a local field $\ensuremath{\mathbb{F}}$ as the field of scalars. For them our result is new. Several already known results also follow: the Liv\v{s}ic Theorem for cocycles with values in $GL(n,\ensuremath{\mathbb{R}})$ (see \cite{Ka}) and in $GL(n,\ensuremath{\mathbb{F}})$ (see \cite{LZ}).
As in \cite{Ka}, we require that the map \ensuremath{\mathbf{\sigma}}\ be transitive and have the following property.
\begin{definition} We say that a homeomorphism $\ensuremath{\mathbf{\sigma}}:X\to X$ has a {\it closing property} if there exist positive numbers $\delta_0, \ensuremath{\lambda},C$ such that for any $x\in X$ and $n>0$ with $\text{dist}(x,\ensuremath{\mathbf{\sigma}}^n x)\le \delta_0$ we can find points $p,z\in X$ where
$$\ensuremath{\mathbf{\sigma}}^n p=p$$
and for every $i=0,1,\ldots, n$
$$\text{dist}(\ensuremath{\mathbf{\sigma}}^i p,\ensuremath{\mathbf{\sigma}}^i z)\le e^{-i\ensuremath{\lambda}}C\text{dist}(x,\ensuremath{\mathbf{\sigma}}^n x)\quad \text{dist}(\ensuremath{\mathbf{\sigma}}^i x,\ensuremath{\mathbf{\sigma}}^i z)\le e^{-(n-i)\ensuremath{\lambda}}C\text{dist}(x,\ensuremath{\mathbf{\sigma}}^n x) $$
We will call $\ensuremath{\lambda}$ the expansion constant for the map \ensuremath{\mathbf{\sigma}}.
\end{definition}
Anosov maps and shifts of finite type are the main examples of maps with the closing property.
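For example, the closing property can be checked by hand for the full two-sided shift $\ensuremath{\mathbf{\sigma}}$ on $X=\{0,1\}^{\mathbb{Z}}$ with the metric $\text{dist}(x,y)=2^{-\min\{|i|\,:\,x_i\neq y_i\}}$: given $x$ with $\text{dist}(x,\ensuremath{\mathbf{\sigma}}^n x)\le 2^{-1}$, take $p$ to be the $n$-periodic sequence with $p_i=x_{i\bmod n}$, and $z$ the sequence with $z_i=x_i$ for $i\le 0$ and $z_i=p_i$ for $i>0$; comparing coordinates shows that the inequalities in the definition hold with $C=1$ and $\ensuremath{\lambda}=\ln 2$.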
\begin{definition} An associative (possibly non-commutative) ring $B$ with unity element $e$ is called a {\it Banach ring} if there is a function $\|\cdot\|:B\to\ensuremath{\mathbb{R}}$ such that
\begin{enumerate}
\item $\|a\|\ge 0$ and $\|a\|=0$ if and only if $a=0$.
\item $\|a+b\|\le \|a\|+\|b\|$.
\item $\|a\cdot b\|\le \|a\|\cdot \|b\|$.
\item The ring $B$ is a complete metric space with respect to the distance defined as $dist(a,b)=\|a-b\|$.
\end{enumerate}
\end{definition}
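For instance, any Banach algebra over $\mathbb{R}$ or $\mathbb{C}$, such as the matrix algebra $M_k(\mathbb{R})$ with the operator norm, is a Banach ring; so is the ring of $k\times k$ matrices over a local field $\ensuremath{\mathbb{F}}$ with the norm $\|(a_{ij})\|=\max_{i,j}|a_{ij}|$, which is submultiplicative by the ultrametric inequality.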
We denote by $B^\times$ the set of invertible elements of a Banach ring $B$. The main result of this paper is:
\begin{main}\label{t2} Let $X$ be a compact metric space and $\ensuremath{\mathbf{\sigma}}:X\to X$ a transitive homeomorphism with the closing property. If $a:X\to B^\times $ is an $\alpha$-H\"older continuous function such that
$$ a(\ensuremath{\mathbf{\sigma}}^{n-1}p)\ldots a(\ensuremath{\mathbf{\sigma}} p)a(p)=e\quad \forall p\in X, n\in \mathbb{N} \text{ with } \ensuremath{\mathbf{\sigma}}^np=p$$
then there exists an $\alpha$-H\"older function $t:X\to B^\times$ such that
$$ a(x)=t(\ensuremath{\mathbf{\sigma}} x)t^{-1}(x)$$
\end{main}
\section{Subadditive Cocycles}
Let $\ensuremath{\mathbf{\sigma}}:X\to X$ be a continuous map. We will call a continuous function $s(n,x):\mathbb{Z}\times X\to \mathbb{R}$ \textit{ a subadditive cocycle} over the map $\ensuremath{\mathbf{\sigma}}$ if
$$s(n+m,x)\le s(n,\ensuremath{\mathbf{\sigma}}^m x)+s(m,x)$$
It follows from Kingman's theorem on subadditive cocycles \cite{Ki, Furstenberg} that for every $\ensuremath{\mathbf{\sigma}}$-invariant measure $\mu$ and almost every $x$ there exists a number
\begin{equation}\label{e1} r(x)=\lim_{n\to\infty} \frac{s(n,x)}{n}\end{equation}
If $\mu$ is ergodic then this number is the same for a.e.\ $x$ and equals $\displaystyle{\inf_{n\ge 1}\int_X \frac{s(n,x)}{n}d\mu}$. For an ergodic $\mu$ we call this number $r_\mu$. The set of all \ensuremath{\mathbf{\sigma}}-invariant ergodic measures is denoted by $\mathcal{M}$. The points $x\in X$ for which the limit $(\ref{e1})$ exists are called regular, and the set of regular points is denoted by $\mathcal{R}$.
Of course, there could be points for which the limit in $(\ref{e1})$ does not exist.
We can also consider the numbers $s_n=\displaystyle{\max_x s(n,x)}$. They form a subadditive sequence, $s_{n+m}\le s_n+s_m$, so by Fekete's lemma the following number $r$ is well defined:
\begin{equation}\label{e2}\displaystyle{r=\lim_{n\to\infty}\frac{s_n}{n}=\inf_{n\ge 1} \frac{s_n}{n}}
\end{equation}
It is known (see \cite{S}) that if $\ensuremath{\mathbf{\sigma}}$ is continuous and $X$ is compact then
\begin{equation}\label{e3} r=\sup_{x\in\mathcal{R}} r(x)=\sup_{\mu\in\mathcal{M}} r_\mu\end{equation}
For a periodic point $p=\ensuremath{\mathbf{\sigma}}^k p$ we denote as $r_{p}$ the following quantity
$$r_{p}=\frac{ s(k,p)}{k}$$
It is easy to see that $r(p)$ exists (but can be $-\infty$) and $r(p)\le r_p$.
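Indeed, once the limit $(\ref{e1})$ exists at $p$, it can be evaluated along the subsequence $n=jk$; subadditivity together with $\ensuremath{\mathbf{\sigma}}^k p=p$ gives $s(jk,p)\le j\,s(k,p)$, so
$$r(p)=\lim_{j\to\infty}\frac{s(jk,p)}{jk}\le\frac{s(k,p)}{k}=r_p$$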
We will show that if $\ensuremath{\mathbf{\sigma}}$ has the closing property then the following holds:
\begin{theorem}\label{t3} Let $X$ be a compact metric space and $\ensuremath{\mathbf{\sigma}}:X\to X$ a homeomorphism with the closing property. We denote by $\mathcal{P}$ the set of all periodic points. If $a:X\to B^\times $ is an $\alpha$-H\"older continuous function, $a(n,x)$ is the cocycle generated by it, and ${s(n,x)=\ln \|a(n,x)\| }$, then
\begin{equation}\label{e4} r=\sup_{x\in\mathcal{R}} r(x)=\sup_{\mu\in\mathcal{M}} r_\mu\le \sup_{p\in\mathcal{P}}r_p \end{equation}
\end{theorem}
An easy corollary of this theorem is the following result, which is important for us.
\begin{corollary} \label{c3} Let $X$ be a compact metric space and $\ensuremath{\mathbf{\sigma}}:X\to X$ a homeomorphism with the closing property. If $a(n,p)=e$ for every periodic point $p$ with period $n$, then for any $\ensuremath{\varepsilon}>0$ there exists $C$ such that for all positive integers $n$ and all $x\in X$
$$\|a(n,x)\|\le Ce^{\ensuremath{\varepsilon} n}$$
$$\|a(-n,x)\|\le Ce^{\ensuremath{\varepsilon} n}$$
$$\|[a(n,x)]^{-1}\|\le Ce^{\ensuremath{\varepsilon} n}$$
\end{corollary}
\begin{proof} The first inequality follows from the fact that for the subadditive cocycle $s(n,x)=\ln\|a(n,x)\|$ we have $r_p=0$ for every periodic point $p$, and from (\ref{e4}) it follows that $r=0$. For the second inequality we consider the cocycle $b(n,x)$ over $\ensuremath{\mathbf{\sigma}}^{-1}$ generated by $a^{-1}(x)$. Below we will prove that if $a(x)$ is H\"{o}lder continuous then $a^{-1}(x)$ is also H\"{o}lder continuous, and therefore we can apply Theorem \ref{t3} to the cocycle $b(n,x)$ as well. But $a(-n,x)=b(n,x)$, and if $a(n,p)=e$ for every periodic point then
$$b(n,p)=a(-n,p)=a(-n,\ensuremath{\mathbf{\sigma}}^np)=[a(n,p)]^{-1}=e$$
So the rate of growth $r$ for $b(n,x)$ is also 0.
The last inequality follows from the fact that
$$[a(n,x)]^{-1}=b(n,\ensuremath{\mathbf{\sigma}}^n x)$$
The only thing left to show is that if $a(x)$ is $\alpha$-H\"{o}lder continuous then $a^{-1}(x)$ is also $\alpha$-H\"{o}lder continuous. For normed rings the operation of taking the inverse is continuous (see \cite{Na}); therefore $a^{-1}(x)$ is continuous and, since $X$ is compact, bounded. But
$$\|a^{-1}-b^{-1}\|=\|b^{-1}(b-a)a^{-1}\|\le\|b^{-1}\|\cdot\|(b-a)\|\cdot\|a^{-1}\|$$
Therefore, if $H$ is the H\"{o}lder constant of $a(x)$ and $\|a^{-1}(x)\|\le m'$ for all $x$, then $\|a^{-1}(x)-a^{-1}(y)\|\le m'^2H\,\text{dist}(x,y)^\alpha$, so $a^{-1}(x)$ is also $\alpha$-H\"{o}lder continuous.
\end{proof}
\section{Proof of Theorem \ref{t3}}
The following result proven in \cite[Proposition 4.2]{MK} will be used.
\begin{lemman}[A. Karlsson, G. A. Margulis]\label{MK} Let $\ensuremath{\mathbf{\sigma}}:X\to X$ be a measurable map, $\mu$ an ergodic measure, and $s(n,x)$ a subadditive cocycle. For any $\ensuremath{\epsilon}>0$, let $E_\ensuremath{\epsilon}$ be the set of $x$ in $X$ for which there exist an integer $K(x)$ and infinitely many $n$ such that
$$s(n,x)-s(n-k,\ensuremath{\mathbf{\sigma}}^kx)\ge (r_\mu-\ensuremath{\epsilon})k$$
for all $k$ with $K(x)\le k\le n$. Let $E=\cap_{\ensuremath{\epsilon}>0} E_{\ensuremath{\epsilon}}$; then $\mu(E)=1$.
\end{lemman}
If $s(n,x)=\ln\|a(n,x)\|$ then the inequality in the lemma can be rewritten as
\begin{equation}\|a(n-k,\ensuremath{\mathbf{\sigma}}^k x)\|\le \|a(n,x)\|e^{-(r_\mu-\ensuremath{\epsilon})k}\label{fmk}\end{equation}
\begin{definition} Let $\gamma,\delta$ be positive numbers and $n$ a natural number. We say that a point $y$ is $(\gamma,\delta,n)$-close to $x$ if
$$dist(\ensuremath{\mathbf{\sigma}}^k x,\ensuremath{\mathbf{\sigma}}^k y)\le \delta e^{-\gamma k} \quad \text{for all}\quad 0\le k\le n$$
\end{definition}
\begin{prop} \label{l5} Let $\ensuremath{\mathbf{\sigma}}:X\to X$ be a homeomorphism, $a:X\to B^\times$ an $\alpha$-H\"older continuous function, and $s(n,x)=\ln\|a(n,x)\|$. For any $\gamma>0$ let $S_\gamma$ be the set of points $x$ in $X$ for which there exist a number $\delta(x)>0$ and infinitely many $n$ such that for any point $y$ which is $(\gamma,\delta(x),n)$-close to $x$
\begin{equation}\label{f11}
\|a(n,y)\|\ge \frac12\|a(n,x)\|
\end{equation}
Then $\mu(S_\gamma)=1$ for any ergodic invariant measure $\mu$ with $r-r_\mu<\alpha\gamma$.
\end{prop}
\begin{proof}
Let $\mu$ be an ergodic invariant measure with $r-r_\mu<\alpha\gamma$. We choose a number $0<\ensuremath{\varepsilon}<\frac13(\gamma\alpha-(r-r_\mu))$. With respect to $\mu$, almost all points satisfy the Karlsson--Margulis Lemma with this \ensuremath{\varepsilon}, and for almost all points $r(x)=r_\mu$. Take a point $x$ in the intersection of those two sets of full measure. Using the identity
$$b_nb_{n-1}\ldots b_1-a_na_{n-1}\ldots a_1=\sum_{k=1}^n b_n\ldots b_{k+1}(b_{k}-a_{k})a_{k-1}\ldots a_1$$
we can see that
$$\|a(n,x)-a(n,y)\|=\|\sum_{k=0}^{n-1} a(n-k-1,\ensuremath{\mathbf{\sigma}}^{k+1} x)[a(\ensuremath{\mathbf{\sigma}}^{k}x)-a(\ensuremath{\mathbf{\sigma}}^{k}y)]a(k,y)\|\le$$
\begin{equation}\label{f4}
\sum_{k=0}^{n-1} \|a(n-k-1,\ensuremath{\mathbf{\sigma}}^{k+1} x)\|\cdot\|a(\ensuremath{\mathbf{\sigma}}^{k}x)-a(\ensuremath{\mathbf{\sigma}}^{k}y)\|\cdot \|a(k,y)\|
\end{equation}
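For $n=2$, for example, the identity above reads $b_2b_1-a_2a_1=b_2(b_1-a_1)+(b_2-a_2)a_1$; the general case follows by induction on $n$.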
Our goal is to show that if we choose a sufficiently small $\delta$ then for infinitely many $n$ and for every point $y$ that is $(\gamma,\delta,n)$-close to $x$, the sum $(\ref{f4})$ is smaller than $\frac12\|a(n,x)\|$.
Let $K(x,\ensuremath{\varepsilon})$ and $n$ be as in the Karlsson--Margulis Lemma, and let $y$ be a point $(\gamma,\delta,n)$-close to $x$, for some $\delta$ that we specify later. By definition $\displaystyle{r=\lim_{k\to\infty} s_k/k}$, so we can find $K\ge K(x,\ensuremath{\varepsilon})$ such that $s_{k}<k(r+\ensuremath{\varepsilon})$ for all $k\ge K$; in particular $\|a(k,y)\|\le e^{s_k}<e^{k(r+\ensuremath{\varepsilon})}$ for every $y$. For every $k\ge K$ the factors in the product
\begin{equation}\label{f3} \|a(n-k-1,\ensuremath{\mathbf{\sigma}}^{k+1} x)\|\cdot\|a(\ensuremath{\mathbf{\sigma}}^{k}x)-a(\ensuremath{\mathbf{\sigma}}^{k}y)\|\cdot \|a(k,y)\|
\end{equation} can be bounded from above as follows.\\
$\|a(n-k-1,\ensuremath{\mathbf{\sigma}}^{k+1} x)\|\le \|a(n,x)\|e^{-(r_\mu-\ensuremath{\varepsilon})(k+1)}\quad$ This follows from the Karlsson--Margulis Lemma, i.e.\ inequality (\ref{fmk}) applied with $k+1$ in place of $k$. \\
$\|a(\ensuremath{\mathbf{\sigma}}^{k}x)-a(\ensuremath{\mathbf{\sigma}}^{k}y)\|\le H\delta^\alpha e^{-k\gamma\alpha}$ where $H$ is some positive constant. This follows from the fact that $a(x)$ is H\"older continuous and $y$ is $(\gamma,\delta,n)$-close to $x$.\\
$\|a(k,y)\|\le e^{s_{k}}\le e^{k(r+\ensuremath{\varepsilon})}$. This follows from the definition of $K$.\\
Combining these inequalities, we see that the quantity in the product (\ref{f3}) is smaller than
$$ \|a(n,x)\|e^{-(r_\mu-\ensuremath{\varepsilon})(k+1)}\cdot H\delta^\alpha e^{-k\gamma\alpha}\cdot e^{k(r+\ensuremath{\varepsilon})}\le \|a(n,x)\|H_1\delta^\alpha e^{-k(\gamma\alpha-(r-r_\mu)-2\ensuremath{\varepsilon})}$$
where $H_1=He^{|r_\mu-\ensuremath{\varepsilon}|}$ absorbs the constant factor $e^{-(r_\mu-\ensuremath{\varepsilon})}$.
After simplification we can write that
$$ \|a(n-k-1,\ensuremath{\mathbf{\sigma}}^{k+1} x)\|\cdot\|a(\ensuremath{\mathbf{\sigma}}^{k}x)-a(\ensuremath{\mathbf{\sigma}}^{k}y)\|\cdot \|a(k,y)\|\le \|a(n,x)\|H_1\delta^\alpha e^{-k\ensuremath{\varepsilon}}
$$
Adding these inequalities for $K\le k\le n-1$ we see that
$$\|\sum_{k=K}^{n-1} a(n-k-1,\ensuremath{\mathbf{\sigma}}^{k+1} x)[a(\ensuremath{\mathbf{\sigma}}^{k}x)-a(\ensuremath{\mathbf{\sigma}}^{k}y)]a(k,y)\|\le$$
$$ \|a(n,x)\|H_1\delta^\alpha \sum_{k=0}^\infty e^{-k\ensuremath{\varepsilon}}= \|a(n,x)\|\frac{H_1\delta^\alpha}{1-e^{-\ensuremath{\varepsilon}}}$$
To estimate the quantity in formula (\ref{f3}) for $k<K$ we set
$$M=1+\max_x ||a(x)||$$
$$m=1+\max_x ||a^{-1}(x)||$$
Then as before
$$\|a(\ensuremath{\mathbf{\sigma}}^{k}x)-a(\ensuremath{\mathbf{\sigma}}^{k}y)\|\le H \delta^\alpha e^{-k\gamma\alpha}$$
but
$$||a(n-k-1,\ensuremath{\mathbf{\sigma}}^{k+1} x)||=||a(n,x)[a(k+1,x)]^{-1}||\le ||a(n,x)||\cdot m^{k+1}$$
and
$$||a(k,y)||\le M^{k}$$
So for $k<K$, using $m^{k+1}\le m^K$ and $M^{k}\le M^{K}$, the expression (\ref{f3}) is bounded by
$$||a(n,x)|| m^{k+1}\cdot H\delta^\alpha e^{-k\gamma\alpha}\cdot M^{k}\le ||a(n,x)||H\delta^\alpha (mM)^K$$
Finally,
$$||a(n,x)-a(n,y)||\le ||a(n,x)||\delta^\alpha \left(\frac{H_1}{1-e^{-\ensuremath{\varepsilon}}}+HK(mM)^K\right)=||a(n,x)||\delta'$$
By choosing $\delta$ sufficiently small we can make $\delta'<1/2$. Then
$$||a(n,y)||= ||a(n,x)-(a(n,x)-a(n,y))||\ge ||a(n,x)||-||a(n,x)-a(n,y)||\ge$$
$$\ge \frac12|| a(n,x)||$$
\end{proof}
To finish the proof of Theorem \ref{t3} we will need the following properties of maps with the closing property.
\begin{lemma}\label{l6} Let $\ensuremath{\mathbf{\sigma}}:X\to X$ be a homeomorphism with the closing property and expansion constant $\ensuremath{\lambda}$. Then for any positive numbers $\ensuremath{\varepsilon}$ and $\delta$ there is a number $\delta'$ such that if $dist(x,\ensuremath{\mathbf{\sigma}}^kx)\le\delta'$ and $k\ge n(1+\ensuremath{\varepsilon})$ then there is a point $p$ such that $\ensuremath{\mathbf{\sigma}}^k p=p$ and $p$ is $(\gamma,\delta,n)$-close to $x$, where $\gamma=\ensuremath{\varepsilon}\ensuremath{\lambda}$.
\end{lemma}
\begin{proof} It follows from the definition of the closing property that for $0\le i\le k$
$$dist(\ensuremath{\mathbf{\sigma}}^i x,\ensuremath{\mathbf{\sigma}}^i p)\le dist(\ensuremath{\mathbf{\sigma}}^i x,\ensuremath{\mathbf{\sigma}}^i z)+dist(\ensuremath{\mathbf{\sigma}}^i z,\ensuremath{\mathbf{\sigma}}^i p)\le 2C\delta' e^{-\ensuremath{\lambda} \min(i,k-i) }$$
The function $-\ensuremath{\lambda}\min(x,k-x)$ is convex, so the segment connecting the points $(0,0)$ and $(n,-\ensuremath{\lambda}\min(n,k-n))$ on its graph stays above the graph. The linear function corresponding to this segment is $-\gamma x$, where (assuming $k\le 2n$; see the verification below)
$$\gamma=\frac{k-n}{n}\ensuremath{\lambda}\ge\ensuremath{\varepsilon}\ensuremath{\lambda}$$
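Explicitly, when $k\le 2n$ the inequality $\ensuremath{\lambda}\min(i,k-i)\ge\gamma i$ for $0\le i\le n$ can be checked in two cases: for $i\le k/2$ we have $\min(i,k-i)=i\ge\frac{k-n}{n}i$ since $k-n\le n$, while for $k/2<i\le n$ we have $\min(i,k-i)=k-i\ge\frac{k-n}{n}i$ because $n(k-i)-(k-n)i=k(n-i)\ge 0$. When $k>2n$ we have $\min(i,k-i)=i$ for all $0\le i\le n$, and the required estimate holds with $\gamma=\ensuremath{\lambda}\ge\ensuremath{\varepsilon}\ensuremath{\lambda}$ (for $\ensuremath{\varepsilon}\le 1$, the case of interest).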
Therefore the point $p$ satisfies the following inequalities:
$$dist(\ensuremath{\mathbf{\sigma}}^i x,\ensuremath{\mathbf{\sigma}}^i p)\le 2 C\delta' e^{-\gamma i } \quad 0\le i\le n$$
If we take $\delta'=\frac{\delta}{2 C}$ we can see that $p$ is $(\gamma,\delta,n)-$close to $x$.
\end{proof}
\begin{lemma}\label{l7} Let $\ensuremath{\mathbf{\sigma}}:X\to X$ be a homeomorphism. For any $\ensuremath{\varepsilon},\delta>0$ let $P_{\ensuremath{\varepsilon},\delta}$ be the set of points $x$ in $X$ for which there is an integer $N=N(x,\ensuremath{\varepsilon},\delta)$ such that for every $n>N$ there is an integer $n(1+\ensuremath{\varepsilon})<k<n(1+2\ensuremath{\varepsilon})$ for which
$$dist(x,\ensuremath{\mathbf{\sigma}}^k x)<\delta$$
If $\displaystyle{P=\cap_{\ensuremath{\varepsilon}>0,\delta>0} P_{\ensuremath{\varepsilon},\delta}}$ then $\mu(P)=1$ for any invariant measure $\mu$.
\end{lemma}
\begin{proof} It is enough to prove the statement for ergodic invariant measures. Let $\mu$ be an invariant ergodic measure. The support of a measure is the set of all points $x$ in $X$ such that every open ball centered at $x$ has positive measure. The support of a measure on a compact metric space always has full measure {(see \cite{Fe})}. Since $X$ is compact, there is a sequence of balls $B_i$ which forms a base of the topology. If we denote by $f_i(n,x)$ the number of $k$ with $1\le k\le n$ such that $\ensuremath{\mathbf{\sigma}}^k x\in B_i$, then by Birkhoff's Ergodic Theorem $\displaystyle{\lim_{n\to\infty} \frac{f_i(n,x)}{n}}$ exists and equals $\mu(B_i)$ for almost all $x$. It is easy to see that any $x$ that belongs to the support of the measure and satisfies Birkhoff's Ergodic Theorem for all $i$ belongs to the set $P$. Indeed, choose $\delta>0$; the ball $B_\delta$ centered at $x$ has positive measure. This ball is a countable union of some of the balls $B_i$, so there exists at least one ball $B_{i_0}$ with $\mu(B_{i_0})>0$ and $B_{i_0}\subset B_\delta$. Now, using the numbers $\ensuremath{\varepsilon}$ and $\mu(B_{i_0})$, we choose a very small $\ensuremath{\epsilon}$; how small will be specified later. For this $\ensuremath{\epsilon}>0$ we can find $N$ such that $|f_{i_0}(n,x)-\mu(B_{i_0})n|<\ensuremath{\epsilon} n$ for all $n>N$. If $n>N$ and there is no $k$ such that ${n(1+\ensuremath{\varepsilon})<k<n(1+2\ensuremath{\varepsilon})}$ and ${\ensuremath{\mathbf{\sigma}}^k x\in B_{i_0}}$, then $f_{i_0}(n(1+\ensuremath{\varepsilon}),x)=f_{i_0}(n(1+2\ensuremath{\varepsilon}),x)$. This is impossible if we choose $\ensuremath{\epsilon}$ very small, because in this case
$$ (\mu(B_{i_0})+\ensuremath{\epsilon})n(1+\ensuremath{\varepsilon})\ge f_{i_0}(n(1+\ensuremath{\varepsilon}),x)=f_{i_0}(n(1+2\ensuremath{\varepsilon}),x)\ge (\mu(B_{i_0})-\ensuremath{\epsilon})n(1+2\ensuremath{\varepsilon})$$
or
$$\frac{\mu(B_{i_0})+\ensuremath{\epsilon}}{\mu(B_{i_0})-\ensuremath{\epsilon}}\ge \frac{1+2\ensuremath{\varepsilon}}{1+\ensuremath{\varepsilon}}$$
When $\ensuremath{\epsilon}$ is small the left side is as close to 1 as we want, so we get a contradiction. This means that if $N$ is sufficiently big and $n>N$ then there is $k$ such that ${\ensuremath{\mathbf{\sigma}}^k x\in B_{i_0}\subset B_\delta }$ and ${n(1+\ensuremath{\varepsilon})<k<n(1+2\ensuremath{\varepsilon})}$. Therefore the set $P$ contains the intersection of two sets of full measure, and hence has full measure.
\end{proof}
{\it Proof of Theorem \ref{t3}:} Choose any $\ensuremath{\varepsilon}>0$. We can find an ergodic invariant measure $\mu$ such that ${r-r_\mu<\min(\ensuremath{\varepsilon},\ensuremath{\varepsilon}\alpha\ensuremath{\lambda})}$. Choose a point $x$ such that $r(x)=r_\mu$ and $x$ belongs to the set $S_{\ensuremath{\varepsilon}\ensuremath{\lambda}}\cap P$, where $S_{\ensuremath{\varepsilon}\ensuremath{\lambda}}$ is as in Proposition \ref{l5} and $P$ is as in Lemma \ref{l7}. All those sets have full measure, so their intersection is not empty. For the point $x$ we can find $\delta$ such that for infinitely many $n_i$, if a point $p$ is $(\ensuremath{\varepsilon}\ensuremath{\lambda},\delta,n_i)$-close to $x$ then
\begin{equation}\label{f10}\|a(n_i,p)\|\ge\frac12\|a(n_i,x)\|\end{equation}
For this $\delta$ we can find $\delta'$ from Lemma \ref{l6}. Using this $\delta'$ and $\ensuremath{\varepsilon}$ we can find $N=N(x,\ensuremath{\varepsilon},\delta')$ from Lemma \ref{l7} such that if $n_i>N$ then there is $k$ with $n_i(1+\ensuremath{\varepsilon})\le k\le n_i(1+2\ensuremath{\varepsilon})$ and $dist(\ensuremath{\mathbf{\sigma}}^kx,x)<\delta'$. It then follows from Lemma \ref{l6} that there is a periodic point $p$ with period $k$ which is $(\ensuremath{\varepsilon}\ensuremath{\lambda},\delta,n_i)$-close to $x$ and therefore satisfies {inequality (\ref{f10})}.
Now, we estimate $\|a(k,p)\|$. Let $N'$ be a number such that if $n>N'$ then
$$\|a(n,x)\|\ge e^{n(r_\mu-\ensuremath{\varepsilon})}\ge e^{n(r-2\ensuremath{\varepsilon})}$$
We can always choose $n_i$ bigger than both $N$ and $N'$. Denote ${m=\ln\max_y\|a^{-1}(y)\|}$. Then
$$\|a(n_i,p)\|=\|a(-(k-n_i),p)a(k,p)\|\le \|a(-(k-n_i),p)\|\cdot\|a(k,p)\|$$
so
$$\|a(k,p)\|\ge \frac{\|a(n_i,p)\|}{e^{m(k-n_i)}}\ge\frac12\frac{\|a(n_i,x)\|}{e^{2m\ensuremath{\varepsilon} n_i}}\ge \frac12 e^{(r-2\ensuremath{\varepsilon}-2m\ensuremath{\varepsilon})n_i}$$
We see that
$$r_p=\frac{\ln\|a(k,p)\|}{k}\ge\frac{(r-2\ensuremath{\varepsilon}-2m\ensuremath{\varepsilon})n_i-\ln 2}{(1+2\ensuremath{\varepsilon})n_i}$$
The number $m$ does not depend on the choice of $x$, $n_i$ and $p$, so by choosing $\ensuremath{\varepsilon}$ very small and $n_i$ very big we can make $r_p$ as close to $r$ as we want.
\qed\\
\section{Proof of the Main Theorem}
After Theorem \ref{t3} is established, we can use Corollary \ref{c3} to show that the growth of $\|a(n,x)\|$ is sub-exponential. This allows us to use the idea of the original Liv\v{s}ic proof for cocycles with values in Banach rings.
H.~Bercovici and V.~Nitica \cite[Theorem 3.2]{BN} showed that if $\ensuremath{\mathbf{\sigma}}$ is a transitive Anosov map, the periodic obstructions vanish, and
\begin{equation}\label{f12}\begin{split}\|a(x)\|&\le 1+\delta\\
\|a^{-1}(x)\|&\le 1+\delta
\end{split}\end{equation}
for some $\delta$ that depends on $\ensuremath{\mathbf{\sigma}}$, then $a(x)$ is a coboundary. From Corollary \ref{c3} we get a little bit less: if the periodic obstructions vanish then for any $\delta>0$ there exists $C>0$ such that for any positive integer $n$
\begin{equation*}\begin{split}\|a(n,x)\|&\le C(1+\delta)^n\\
\|[a(n,x)]^{-1}\|&\le C(1+\delta)^n
\end{split}\end{equation*}
These inequalities are actually enough to repeat the arguments from \cite{BN} with some small changes, but we instead refer to a more general theorem proven in \cite{G}, which considers cocycles over maps satisfying the closing property, with values in abstract groups satisfying certain conditions. We will need a couple more definitions.
\begin{definition} If $G$ is a group with a metric $dist$ and $g\in G$, we define the distortion of the element $g$ as
$$|g|=\sup_{f\neq h}\left[ \frac{dist(gf,gh)}{dist{(f,h)}},\frac{dist(fg,hg)}{dist{(f,h)}},\frac{dist(g^{-1}f,g^{-1}h)}{dist{(f,h)}},\frac{dist(fg^{-1},hg^{-1})}{dist{(f,h)}}\right]$$
We say that a group is Lipschitz if $|g|<\infty$ for all $g\in G$.
\end{definition}
It is easy to see that for Banach rings, if we define $$dist(f,h)=\max(\|f-h\|,\|f^{-1}-h^{-1}\|)$$ then $|g|\le \max(\|g\|,\|g^{-1}\|)$ and $B^\times$ is Lipschitz.
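Indeed, with this metric
$$dist(gf,gh)=\max(\|g(f-h)\|,\|(f^{-1}-h^{-1})g^{-1}\|)\le\max(\|g\|,\|g^{-1}\|)\,dist(f,h)$$
and the other three ratios in the definition of $|g|$ are bounded in the same way.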
\begin{definition} We call the following number the rate of distortion of the cocycle generated by $a:X\to G$:
$$r(a)=\lim_{n\to\infty} \frac{\sup_{x\in X}\ln |a(n,x)|}{n}$$
\end{definition}
\begin{theorem}\label{t9} Let $G$ be a Lipschitz group with the property that there are numbers $\ensuremath{\epsilon}$ and $D$ such that $dist(g,e)\le\ensuremath{\epsilon}$ implies $|g|\le D$, and let \ensuremath{\mathbf{\sigma}}\ be a transitive homeomorphism with the closing property and expansion constant $\ensuremath{\lambda}$. If the rate of distortion of an $\alpha$-H\"older continuous cocycle $a:X\to G$ is smaller than $\alpha\ensuremath{\lambda}/2$ and the periodic obstructions vanish, then $a(x)$ is a coboundary with an $\alpha$-H\"older continuous transition function $t(x)$.
\end{theorem}
\begin{proof} See \cite{G}\end{proof}
{\it Proof of the Main Theorem.} If $a(n,p)=e$ for every periodic point then it follows from Corollary \ref{c3} that the distortion rate of $a(n,x)$ is at most $0$. In the group $B^\times$, if $dist(e,g)<\frac12$ then $\|e-g\|<\frac12$ and $\|e-g^{-1}\|<\frac12$, so $\|g\|,\|g^{-1}\|\le\frac32$. We see that $|g|\le\frac32$, therefore by Theorem \ref{t9} the cocycle $a(n,x)$ is a coboundary with an
$\alpha$-H\"older continuous transition function.\qed
\end{document}
\begin{document}
\pagenumbering{gobble}
\begin{titlepage}
\title{Sensitivity Oracles for All-Pairs Mincuts}
\author{
Surender Baswana\thanks{Department of Computer Science \& Engineering, IIT Kanpur, Kanpur -- 208016, India, [email protected]}
\and
Abhyuday Pandey\thanks{Department of Computer Science \& Engineering, IIT Kanpur, Kanpur -- 208016, India, [email protected]}
}
\maketitle
\begin{abstract}
{
Let $G=(V,E)$ be an undirected unweighted graph on $n$ vertices and $m$ edges. We address the problem of sensitivity oracle for all-pairs mincuts in $G$ defined as follows.
Build a compact data structure that, on receiving any pair of vertices $s,t\in V$ and failure (or insertion) of any edge as query, can efficiently report the mincut between $s$ and $t$ after the failure (or the insertion).
To the best of our knowledge, there exists no data structure for this problem which takes $o(mn)$ space and a non-trivial query time.
We present the following results.
\begin{enumerate}
\item Our first data structure occupies ${\cal O}(n^2)$ space and guarantees ${\cal O}(1)$ query time to report the value of the resulting $(s,t)$-mincut upon failure (or insertion) of any edge. Moreover, the set of vertices defining a resulting $(s,t)$-mincut after the update can be reported in ${\cal O}(n)$ time, which is worst-case optimal.
\item
Our second data structure optimizes space at the expense of increased query time. It takes ${\cal O}(m)$ space -- which is also the space taken by $G$. The query time is ${\cal O}(\min(m,n c_{s,t}))$ where $c_{s,t}$ is the value of the mincut between $s$ and $t$ in $G$. This query time is faster by a factor of $\Omega(\min(m^{1/3},\sqrt{n}))$ compared to the best known deterministic algorithm \cite{DBLP:conf/focs/GoldbergR97a,DBLP:conf/stoc/KargerL98,DBLP:journals/corr/abs-2003-08929} to compute an $(s,t)$-mincut from scratch.
\item
If we are only interested in knowing if failure (or insertion) of an edge changes the value of $(s,t)$-mincut, we can distribute our ${\cal O}(n^2)$ space data structure evenly among $n$ vertices. For any failed (or inserted) edge we only require the data structures stored at its endpoints to determine if the value of $(s,t)$-mincut has changed for any $s,t \in V$.
Moreover, using these data structures we can also output efficiently a compact encoding of all pairs of vertices whose mincut value is changed after the failure (or insertion) of the edge.
\end{enumerate}
}
\end{abstract}
\end{titlepage}
\pagebreak
\pagenumbering{arabic}
\section{Introduction}
\subfile{src/introduction}
\section{Preliminaries} \label{sec:prelimiaries}
\subfile{src/preliminaries}
\section{Insights into \texorpdfstring{$3$}{3}-vertex mincuts} \label{sec:query-transformation}
\subfile{src/compact-graph-query-transf}
\section{A Compact Graph for Query Transformation}
\subfile{src/ft-steiner-connectivity}
\section{Compact Data Structure for Sensitivity Query} \label{sec:final-ds}
\subfile{src/graph-contractions}
\section{Distributed Sensitivity Oracle}
\label{sec:distributed-sensitivity-oracle}
\subfile{src/distributed-sensitivity-oracle}
\section{Conclusion}
\label{sec:conclusion}
\subfile{src/conclusion}
\pagebreak
\pagebreak
\appendix
\subfile{src/appendix}
\end{document}
\begin{document}
\mainmatter
\title{Learning with a Drifting Target Concept}
\titlerunning{Learning with a Drifting Target Concept}
\author{Steve Hanneke \and Varun Kanade \and Liu Yang}
\authorrunning{Steve Hanneke, Varun Kanade, and Liu Yang}
\institute{Princeton, NJ USA.\\
\email{[email protected]}
\and
D\'{e}partement d'informatique, \'{E}cole normale sup\'{e}rieure, Paris, France.\\
\email{[email protected]}
\and
IBM T.J. Watson Research Center, Yorktown Heights, NY USA.\\
\email{[email protected]}
}
\maketitle
\begin{abstract}
We study the problem of learning in the presence of a drifting target concept. Specifically,
we provide bounds on the error rate at a given time, given a learner with access to a history
of independent samples labeled according to a target concept that can change on each round.
One of our main contributions is a refinement of the best previous results for
polynomial-time algorithms for the space of linear separators under a uniform distribution.
We also provide general results for an algorithm capable of adapting to a variable rate of drift
of the target concept.
Some of the results also describe an active learning variant of this setting, and provide bounds on the
number of queries for the labels of points in the sequence sufficient to obtain the stated bounds
on the error rates.
\end{abstract}
\section{Introduction}
Much of the work on statistical learning has focused on
learning settings in which the concept to be learned is static
over time.
However, there are many application areas where this is not
the case. For instance, in the problem of face recognition,
the concept to be learned actually changes as
each individual's facial features evolve over time. In this
work, we study the problem of learning with a drifting
target concept. Specifically, we consider a statistical
learning setting, in which data arrive i.i.d. in a stream,
and for each data point, the learner is required to predict
a label for the data point at that time. We are then
interested in obtaining low error rates for these predictions.
The target labels are generated from a function known to reside
in a given concept space, and at each time $t$ the target function
is allowed to change by at most some distance $\Delta_{t}$: that is,
the probability the new target function disagrees with the previous
target function on a random sample is at most $\Delta_{t}$.
This framework has previously been studied in a number of articles.
The classic works of \cite{helmbold:91,helmbold:94,bartlett:96,long:99,bartlett:00} and \cite{barve:97}
together provide a general analysis of a
very-much related setting. Though the objectives in these works are
specified slightly differently, the results established there are
easily translated into our present framework,
and we summarize many of the relevant results from this literature
in Section~\ref{sec:background}.
While the results in these classic works are general, the best guarantees
on the error rates are only known for methods having no guarantees
of computational efficiency.
In a more recent effort, the work of \cite{min_concept} studies this problem
in the specific context of learning a homogeneous linear separator,
when all the $\Delta_{t}$ values are identical.
They propose a polynomial-time algorithm (based on the modified Perceptron
algorithm of \cite{stream_perceptron}),
and prove a bound on the number of mistakes it makes as a function of
the number of samples, when the data distribution satisfies a
certain condition called ``$\lambda$-good'' (which generalizes a useful
property of the uniform distribution on the origin-centered unit sphere).
However, their result is again worse than that obtainable by the known
computationally-inefficient methods.
Thus, the natural question is whether there exists a polynomial-time algorithm
achieving roughly the same guarantees on the error rates known for the inefficient methods.
In the present work, we resolve this question in the case of learning homogeneous
linear separators under the uniform distribution, by proposing a polynomial-time
algorithm that indeed achieves roughly the same bounds on the error rates
known for the inefficient methods in the literature.
This represents the main technical contribution of this work.
We also study the interesting problem of \emph{adaptivity} of an
algorithm to the sequence of $\Delta_{t}$ values, in the setting where
$\Delta_{t}$ may itself vary over time. Since the values $\Delta_{t}$
might typically not be accessible in practice, it seems important to
have learning methods having no explicit dependence on the sequence $\Delta_{t}$.
We propose such a method below, and prove that it achieves roughly the
same bounds on the error rates known for methods in the literature
which require direct access to the $\Delta_{t}$ values.
Also in the context of variable $\Delta_{t}$ sequences, we discuss
conditions on the sequence $\Delta_{t}$ necessary and sufficient
for there to exist a learning method guaranteeing a \emph{sublinear}
rate of growth of the number of mistakes.
We additionally study an \emph{active learning} extension to this
framework, in which, at each time, after making its prediction,
the algorithm may decide whether or not to request access to the
label assigned to the data point at that time. In addition to guarantees on the
error rates (for \emph{all} times, including those for which the label was not observed),
we are also interested in bounding the number of labels we expect the algorithm to
request, as a function of the number of samples encountered thus far.
\section{Definitions and Notation}
\label{sec:definitions}
Formally, in this setting, there is a fixed distribution $\mathcal{P}$ over the instance space $\mathcal X$,
and there is a sequence of independent $\mathcal{P}$-distributed unlabeled data $X_{1},X_{2},\ldots$.
There is also a concept space $\mathbb C$, and a sequence of target functions $\mathbf{h}^{*} = \{h^{*}_{1},h^{*}_{2},\ldots\}$ in $\mathbb C$.
Each $t$ has an associated target label $Y_{t} = h^{*}_{t}(X_{t})$.
In this context, a (passive) learning algorithm is required, on each round $t$,
to produce a classifier $\hat{h}_{t}$ based on the observations $(X_{1},Y_{1}),\ldots,(X_{t-1},Y_{t-1})$,
and we denote by $\hat{Y}_{t} = \hat{h}_{t}(X_{t})$ the corresponding prediction by the algorithm
for the label of $X_{t}$. For any classifier $h$, we define ${\rm er}_{t}(h) = \mathcal{P}(x : h(x) \neq h^{*}_{t}(x))$.
We also say the algorithm makes a ``mistake'' on instance $X_{t}$ if $\hat{Y}_{t} \neq Y_{t}$;
thus, ${\rm er}_{t}(\hat{h}_{t}) = \mathbb P( \hat{Y}_{t} \neq Y_{t} | (X_{1},Y_{1}),\ldots,(X_{t-1},Y_{t-1}) )$.
For notational convenience, we will suppose the $h^{*}_{t}$ sequence is
chosen independently from the $X_{t}$ sequence (i.e., $h^{*}_{t}$ is chosen prior
to the ``draw'' of $X_{1},X_{2},\ldots \sim \mathcal{P}$), and is not random.
In each of our results, we will suppose $\mathbf{h}^{*}$ is chosen from some set $S$ of
sequences in $\mathbb C$. In particular, we are interested in describing the sequence $\mathbf{h}^{*}$
in terms of the magnitudes of \emph{changes} in $h^{*}_{t}$ from one time to the next.
Specifically, for any sequence $\mathbf{\Delta} = \{\Delta_{t}\}_{t=2}^{\infty}$ in $[0,1]$,
we denote by $S_{\mathbf{\Delta}}$ the set of all sequences $\mathbf{h}^{*}$ in $\mathbb C$ such that,
$\forall t \in \mathbb{N}$, $\mathcal{P}(x : h^{*}_{t}(x) \neq h^{*}_{t+1}(x)) \leq \Delta_{t+1}$.
Throughout this article, we denote by $d$ the VC dimension of $\mathbb C$ \cite{vapnik:71},
and we suppose $\mathbb C$ is such that $1 \leq d < \infty$.
Also, for any $x \in \mathbb{R}$, define ${\rm Log}(x) = \ln(\max\{x,e\})$.
\section{Background: $(\epsilon,S)$-Tracking Algorithms}
\label{sec:background}
As mentioned, the classic literature on learning with a drifting target concept
is expressed in terms of a slightly different model. In order to relate those
results to our present setting, we first introduce the classic setting.
Specifically, we consider a model introduced by \cite{helmbold:94},
presented here in a more-general form inspired by \cite{bartlett:00}.
For a set $S$ of sequences $\{h_{t}\}_{t=1}^{\infty}$ in $\mathbb C$,
and a value $\epsilon > 0$, an algorithm $\mathcal A$ is said to be
\emph{$(\epsilon,S)$-tracking} if $\exists t_{\epsilon} \in \mathbb{N}$ such that,
for any choice of $\mathbf{h}^{*} \in S$,
$\forall T \geq t_{\epsilon}$,
the prediction $\hat{Y}_{T}$ produced by $\mathcal A$ at time $T$ satisfies
\begin{equation*}
\mathbb P\left( \hat{Y}_{T} \neq Y_{T} \right) \leq \epsilon.
\end{equation*}
Note that the value of the probability in the above expression
may be influenced by $\{X_{t}\}_{t=1}^{T}$, $\{h^{*}_{t}\}_{t=1}^{T}$,
and any internal randomness of the algorithm $\mathcal A$.
The focus of the results expressed in this classical model is determining
sufficient conditions on the set $S$ for there to exist an $(\epsilon,S)$-tracking algorithm,
along with bounds on the sufficient size of $t_{\epsilon}$.
These conditions on $S$ typically take the form of an assumption on the
drift rate, expressed in terms of $\epsilon$. Below, we summarize
several of the strongest known results for this setting.
\subsection{Bounded Drift Rate}
\label{sec:classic-constant-drift}
The simplest, and perhaps most elegant, result for $(\epsilon,S)$-tracking algorithms
is for the set $S$ of sequences with a bounded drift rate. Specifically, for any $\Delta \in [0,1]$,
define $S_{\Delta} = S_{\mathbf{\Delta}}$, where $\mathbf{\Delta}$ is such that $\Delta_{t+1} = \Delta$ for every $t \in \mathbb{N}$.
The study of this problem was initiated in the original work of \cite{helmbold:94}.
The best known general results are due to \cite{long:99}: namely,
that for some $\Delta_{\epsilon} = \Theta( \epsilon^{2} / d )$,
for every $\epsilon \in (0,1]$, there exists an $(\epsilon,S_{\Delta})$-tracking algorithm for all values
of $\Delta \leq \Delta_{\epsilon}$.\footnote{In fact, \cite{long:99} also allowed the distribution
$\mathcal{P}$ to vary gradually over time. For simplicity, we will only discuss the case of fixed $\mathcal{P}$.}
This refined an earlier result of \cite{helmbold:94} by a logarithmic factor.
\cite{long:99} further argued that this result can be achieved with $t_{\epsilon} = \Theta(d/\epsilon)$.
The algorithm itself involves a beautiful modification of the one-inclusion graph prediction
strategy of \cite{haussler:94}; since its specification is somewhat involved,
we refer the interested reader to the original work of \cite{long:99} for the details.
\subsection{Varying Drift Rate: Nonadaptive Algorithm}
\label{sec:classic-varying-drift}
In addition to the concrete bounds for the case $\mathbf{h}^{*} \in S_{\Delta}$,
\cite{helmbold:94} additionally present an elegant general result. Specifically,
they argue that, for any $\epsilon > 0$, and any $m = \Omega\left( \frac{d}{\epsilon}{\rm Log}\frac{1}{\epsilon} \right)$,
if $\sum_{i=1}^{m} \mathcal{P}(x : h^{*}_{i}(x) \neq h^{*}_{m+1}(x)) \leq m \epsilon / 24$, then
for $\hat{h} = \mathop{\rm argmin}_{h \in \mathbb C} \sum_{i=1}^{m} \mathbbm{1}[ h(X_{i}) \neq Y_{i} ]$,
$\mathbb P( \hat{h}(X_{m+1}) \neq h^{*}_{m+1}(X_{m+1}) ) \leq \epsilon$.\footnote{They in fact
prove a more general result, which also applies to methods approximately minimizing
the number of mistakes, but for simplicity we will only discuss this basic version of the result.}
This result immediately inspires an algorithm $\mathcal A$ which, at every time $t$,
chooses a value $m_{t} \leq t-1$, and predicts $\hat{Y}_{t} = \hat{h}_{t}(X_{t})$,
for $\hat{h}_{t} = \mathop{\rm argmin}_{h \in \mathbb C} \sum_{i=t-m_{t}}^{t-1} \mathbbm{1}[ h(X_{i}) \neq Y_{i} ]$.
We are then interested in choosing $m_{t}$ to minimize the value of $\epsilon$ obtainable
via the result of \cite{helmbold:94}. However, that method is based on the
values $\mathcal{P}( x : h^{*}_{i}(x) \neq h^{*}_{t}(x) )$, which would typically not
be accessible to the algorithm. However, suppose instead we have access to a
sequence $\mathbf{\Delta}$ such that $\mathbf{h}^{*} \in S_{\mathbf{\Delta}}$.
In this case, we could approximate $\mathcal{P}( x : h^{*}_{i}(x) \neq h^{*}_{t}(x) )$
by its \emph{upper bound} $\sum_{j = i+1}^{t} \Delta_{j}$. In this case,
we are interested in choosing $m_{t}$ to minimize the smallest value of $\epsilon$
such that $\sum_{i=t-m_{t}}^{t-1} \sum_{j=i+1}^{t} \Delta_{j} \leq m_{t} \epsilon / 24$
and $m_{t} = \Omega\left( \frac{d}{\epsilon} {\rm Log}\frac{1}{\epsilon} \right)$.
One can easily verify that this minimum is obtained at a value
\begin{equation*}
m_{t} = \Theta\left( \mathop{\rm argmin}_{m \leq t-1} \frac{1}{m} \sum_{i=t-m}^{t-1} \sum_{j=i+1}^{t} \Delta_{j} + \frac{d {\rm Log}(m/d)}{m} \right),
\end{equation*}
and via the result of \cite{helmbold:94} (applied to the sequence $X_{t-m_{t}},\ldots,X_{t}$)
the resulting algorithm has
\begin{equation}
\label{eqn:hl94}
\mathbb P\left( \hat{Y}_{t} \neq Y_{t} \right) \leq O\left( \min_{1 \leq m \leq t-1} \frac{1}{m} \sum_{i=t-m}^{t-1} \sum_{j=i+1}^{t} \Delta_{j} + \frac{d {\rm Log}(m/d)}{m} \right).
\end{equation}
As a special case, if every $t$ has $\Delta_{t} = \Delta$ for a fixed value $\Delta \in [0,1]$,
this result recovers the bound $\sqrt{ d \Delta {\rm Log}(1/\Delta) }$,
which is only slightly larger than that obtainable from the best bound of \cite{long:99}.
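Indeed, in this case the double sum in \eqref{eqn:hl94} evaluates to
\begin{equation*}
\frac{1}{m} \sum_{i=t-m}^{t-1} \sum_{j=i+1}^{t} \Delta = \frac{1}{m} \sum_{\ell=1}^{m} \ell\,\Delta = \frac{(m+1)\Delta}{2},
\end{equation*}
so the right-hand side is of order $m \Delta + \frac{d {\rm Log}(m/d)}{m}$; taking $m$ proportional to $\sqrt{d\,{\rm Log}(1/\Delta)/\Delta}$ (when $t$ is large enough to allow it) balances the two terms and yields the stated $\sqrt{d \Delta {\rm Log}(1/\Delta)}$ bound up to constant factors.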
It also applies to far more general and more interesting sequences $\mathbf{\Delta}$,
including some that allow periodic large jumps (i.e., $\Delta_{t} = 1$ for some indices $t$),
others where the sequence $\Delta_{t}$ converges to $0$, and so on.
Note, however, that the algorithm obtaining this bound
directly depends on the sequence $\mathcal Deltaseq$.
One of the contributions of the present work is to remove this requirement, while
maintaining essentially the same bound, though in a slightly different form.
\subsection{Computational Efficiency}
\label{sec:classic-consistency}
\cite{helmbold:94} also proposed a reduction-based approach, which
sometimes yields computationally efficient methods, though the tolerable $\Delta$
value is smaller. Specifically, given any (randomized) polynomial-time algorithm $\mathcal A$
that produces a classifier $h \in \mathbb C$ with $\sum_{t=1}^{m} \mathbbm{1}[ h(x_{t}) \neq y_{t} ] = 0$
for any sequence $(x_1,y_1),\ldots,(x_m,y_m)$ for which such a classifier $h$ exists
(called the \emph{consistency problem}),
they propose a polynomial-time algorithm that is $(\epsilon,S_{\Delta})$-tracking
for all values of $\Delta \leq \Delta_{\epsilon}^{\prime}$,
where $\Delta_{\epsilon}^{\prime} = \Theta\left( \frac{\epsilon^{2}}{d^{2} {\rm Log}(1/\epsilon)} \right)$.
This is slightly worse (by a factor of $d {\rm Log}(1/\epsilon)$) than the drift rate tolerable by the
(typically inefficient) algorithm mentioned above.
However, it does sometimes yield computationally-efficient methods.
For instance, there are known polynomial-time algorithms for the consistency problem for the classes of
linear separators, conjunctions, and axis-aligned rectangles.
\subsection{Lower Bounds}
\label{sec:classic-lower-bound}
\cite{helmbold:94} additionally prove \emph{lower bounds} for specific concept spaces:
namely, linear separators and axis-aligned rectangles. They specifically argue that, for
$\mathbb C$ a concept space
\begin{equation*}
{\rm BASIC}_{n} = \{ \cup_{i=1}^{n} [i/n,(i+a_i)/n) : \mathbf{a} \in [0,1]^{n} \}
\end{equation*}
on $[0,1]$, under $\mathcal{P}$ the uniform distribution on $[0,1]$,
for any $\epsilon \in [0,1/e^{2}]$ and $\Delta_{\epsilon} \geq e^{4} \epsilon^{2} / n$,
for any algorithm $\mathcal A$, and any $T \in \mathbb{N}$, there exists a choice of $\mathbf{h}^{*} \in S_{\Delta_{\epsilon}}$
such that the prediction $\hat{Y}_{T}$ produced by $\mathcal A$ at time $T$ satisfies
$\mathbb P\left( \hat{Y}_{T} \neq Y_{T} \right) > \epsilon$.
Based on this, they conclude that no $(\epsilon,S_{\Delta_{\epsilon}})$-tracking algorithm exists.
Furthermore, they observe that the space ${\rm BASIC}_{n}$ is embeddable in many
commonly-studied concept spaces, including halfspaces and axis-aligned
rectangles in $\mathbb{R}^{n}$, so that for $\mathbb C$ equal to either of these spaces,
there also is no $(\epsilon,S_{\Delta_{\epsilon}})$-tracking algorithm.
\section{Adapting to Arbitrarily Varying Drift Rates}
\label{sec:general}
This section presents a general bound on the error rate at each time,
expressed as a function of the rates of drift, which are allowed to be \emph{arbitrary}.
Most-importantly, in contrast to the methods from the literature discussed above,
the method achieving this general result is \emph{adaptive} to the drift rates,
so that it requires no information about the drift rates in advance. This is an
appealing property, as it essentially allows the algorithm to learn under an \emph{arbitrary}
sequence $\mathbf{h}^{*}$ of target concepts; the difficulty of the task
is then simply reflected in the resulting bounds on the error rates:
that is, faster-changing sequences of target functions result in larger bounds on
the error rates, but do not require a change in the algorithm itself.
\subsection{Adapting to a Changing Drift Rate}
\label{sec:adaptive-varying-rate}
Recall that the method yielding \eqref{eqn:hl94} (based on the work of \cite{helmbold:94})
required access to the sequence $\mathbf{\Delta}$ of changes to achieve the stated guarantee
on the expected number of mistakes. That method is based on choosing a classifier to predict $\hat{Y}_{t}$
by minimizing the number of mistakes among the previous $m_{t}$ samples, where $m_{t}$ is a value
chosen based on the $\mathbf{\Delta}$ sequence. Thus, the key to modifying this algorithm to make it
adaptive to the $\mathbf{\Delta}$ sequence is to determine a suitable choice of $m_{t}$ without reference
to the $\mathbf{\Delta}$ sequence. The strategy we adopt here is to use the \emph{data} to determine
an appropriate value $\hat{m}_{t}$ to use. Roughly (ignoring logarithmic factors for now), the insight
that enables us to achieve this feat is that,
for the $m_{t}$ used in the above strategy, one can show that $\sum_{i=t-m_{t}}^{t-1} \mathbbm{1}[ h^{*}_{t}(X_{i}) \neq Y_{i} ]$
is roughly $\tilde{O}(d)$, and that
making the prediction $\hat{Y}_{t}$ with \emph{any} $h \in \mathbb C$ with roughly $\tilde{O}(d)$ mistakes
on these samples will suffice to obtain the stated bound on the error rate (up to logarithmic factors).
Thus, if we replace $m_{t}$ with the largest value $m$ for which $\min_{h \in \mathbb C} \sum_{i=t-m}^{t-1} \mathbbm{1}[ h(X_{i}) \neq Y_{i}]$
is roughly $\tilde{O}(d)$, then the above observation implies $m \geq m_{t}$. This then
implies that, for $\hat{h} = \mathop{\rm argmin}_{h \in \mathbb C} \sum_{i=t-m}^{t-1} \mathbbm{1}[ h(X_{i}) \neq Y_{i} ]$,
we have that $\sum_{i=t-m_{t}}^{t-1} \mathbbm{1}[ \hat{h}(X_{i}) \neq Y_{i} ]$ is also roughly $\tilde{O}(d)$,
so that the stated bound on the error rate will be achieved (aside from logarithmic factors)
by choosing $\hat{h}_{t}$ as this classifier $\hat{h}$.
There are a few technical modifications to this argument needed to get the logarithmic factors to work out properly,
and for this reason the actual algorithm and proof below are somewhat more involved.
Specifically, consider the following algorithm (the value of the universal constant $K \geq 1$ will be specified below).
\begin{bigboxit}
0. For $T = 1,2,\ldots$\\
1. \quad Let $\hat{m}_{T} \!=\! \max\!\left\{ m \!\in\! \{1,\ldots,T\!-\!1\} : \min\limits_{h \in \mathbb C} \max\limits_{m^{\prime} \leq m} \frac{\sum_{t=T-m^{\prime}}^{T-1} \mathbbm{1}[h(X_{t}) \neq Y_{t}]}{d {\rm Log}(m^{\prime}/d) + {\rm Log}(1/\delta)} < K \right\}$\\
2. \quad Let $\hat{h}_{T} = \mathop{\rm argmin}\limits_{h \in \mathbb C} \max\limits_{m^{\prime} \leq \hat{m}_{T}} \frac{\sum_{t=T-m^{\prime}}^{T-1} \mathbbm{1}[h(X_{t}) \neq Y_{t}]}{d {\rm Log}(m^{\prime}/d) + {\rm Log}(1/\delta)}$
\end{bigboxit}
Note that the classifiers $\hat{h}_{t}$ chosen by this algorithm have no dependence on $\{\Delta_{t}\}_{t \in \mathbb{N}}$,
or indeed anything other than the data $\{(X_{i},Y_{i}) : i < t\}$, and the concept space $\mathbb C$.
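
To make Steps 1--2 concrete, the following minimal sketch (ours, not part of the original analysis) computes $\hat{m}_{T}$ and $\hat{h}_{T}$ when a \emph{finite} list of candidate classifiers stands in for $\mathbb C$; the exact search over $\mathbb C$ and the convention ${\rm Log}(x) = \max\{\ln(x),1\}$ are assumptions of the sketch.
\begin{verbatim}
import numpy as np

def cap_log(x):
    # Assumed convention: Log(x) = max(ln(x), 1).
    return max(np.log(x), 1.0)

def score(h, X, Y, m, d, delta):
    # max over m' <= m of the number of mistakes of h on the last m'
    # samples, normalized by d*Log(m'/d) + Log(1/delta)  (Step 1).
    errs = [float(h(x) != y) for x, y in zip(X[-m:], Y[-m:])]
    best, run = 0.0, 0.0
    for mp in range(1, m + 1):
        run += errs[m - mp]  # adds the mp-th most recent mistake indicator
        best = max(best, run / (d * cap_log(mp / d) + cap_log(1 / delta)))
    return best

def adaptive_predictor(H, X, Y, d, delta, K):
    # H is a finite list of classifiers standing in for C (an assumption
    # of this sketch).  At time T, (X, Y) holds the T-1 past samples.
    m_hat, h_hat = 1, H[0]
    for m in range(1, len(X) + 1):
        scores = [score(h, X, Y, m, d, delta) for h in H]
        j = int(np.argmin(scores))
        if scores[j] < K:
            m_hat, h_hat = m, H[j]   # Steps 1-2 for this window size
        else:
            break  # the min-score is nondecreasing in m, so we may stop
    return m_hat, h_hat
\end{verbatim}
Since each candidate's normalized mistake count is a maximum over nested windows, the minimum over the candidates is nondecreasing in $m$, which justifies stopping at the first window where the threshold $K$ is violated.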
\begin{theorem}
\label{thm:epst-adaptive}
Fix any $\delta \in (0,1)$, and let $\mathcal A$ be the above algorithm.
For any sequence $\{\Delta_{t}\}_{t \in \mathbb{N}}$ in $[0,1]$, for any $\mathcal{P}$ and any choice of $\{h^{*}_{t}\}_{t \in \mathbb{N}} \in S_{\{\Delta_{t}\}}$,
for every $T \in \mathbb{N} \setminus \{1\}$, with probability at least $1-\delta$,
\begin{equation*}
{\rm er}_{T}\left( \hat{h}_{T} \right)
\leq O\left( \min_{1 \leq m \leq T-1} \frac{1}{m} \sum_{i=T-m}^{T-1} \sum_{j=i+1}^{T} \Delta_{j} + \frac{d {\rm Log}(m/d) + {\rm Log}(1/\delta)}{m} \right).
\end{equation*}
\end{theorem}
Before presenting the proof of this result, we first state a crucial lemma, which follows immediately
from a classic result of \cite{vapnik:82,vapnik:98}, combined with the fact (from \cite{vidyasagar:03}, Theorem 4.5)
that the VC dimension of the collection of sets $\{ \{x : h(x) \neq g(x)\} : h,g \in \mathbb C \}$ is at most $10 d$.
\begin{lemma}
\label{lem:vc-ratio}
There exists a universal constant $c \in [1,\infty)$ such that,
for any class $\mathbb C$ of VC dimension $d$, $\forall m \in \mathbb{N}$, $\forall \delta \in (0,1)$,
with probability at least $1-\delta$,
every $h,g \in \mathbb C$ have
\begin{multline*}
\left| \mathcal{P}(x : h(x) \neq g(x)) - \frac{1}{m}\sum_{t=1}^{m} \mathbbm{1}[h(X_{t}) \neq g(X_{t})] \right|
\\ \leq c \sqrt{ \left(\frac{1}{m}\sum_{t=1}^{m} \mathbbm{1}[h(X_{t}) \neq g(X_{t})] \right) \frac{d {\rm Log}(m/d)+{\rm Log}(1/\delta)}{m}}
\\ + c \frac{d {\rm Log}(m/d) + {\rm Log}(1/\delta)}{m}.
\end{multline*}
\end{lemma}
We are now ready for the proof of Theorem~\ref{thm:epst-adaptive}.
For the constant $K$ in the algorithm, we will choose $K = 145 c^{2}$,
for $c$ as in Lemma~\ref{lem:vc-ratio}.
\begin{proof}[Proof of Theorem~\ref{thm:epst-adaptive}]
Fix any $T \in \mathbb{N}$ with $T \geq 2$, and define
\begin{multline*}
m_{T}^{*} = \max\left\{ m \in \{1,\ldots,T-1\} : \forall m^{\prime} \leq m, \phantom{\sum_{t=T-m^{\prime}}^{T-1}} \right.
\\ \left. \sum_{t=T-m^{\prime}}^{T-1} \mathbbm{1}[h^{*}_{T}(X_{t}) \neq Y_{t}] < K ( d {\rm Log}(m^{\prime}/d) + {\rm Log}(1/\delta) )\right\}.
\end{multline*}
Note that
\begin{equation}
\label{eqn:adaptive-target-mistakes}
\sum_{t=T-m_{T}^{*}}^{T-1} \mathbbm{1}[h^{*}_{T}(X_{t}) \neq Y_{t}] \leq K (d {\rm Log}(m_{T}^{*}/d) + {\rm Log}(1/\delta)),
\end{equation}
and also note that (since $h^{*}_{T} \in \mathbb C$) $\hat{m}_{T} \geq m_{T}^{*}$, so that (by definition of $\hat{m}_{T}$ and $\hat{h}_{T}$)
\begin{equation*}
\sum_{t=T-m_{T}^{*}}^{T-1} \mathbbm{1}[\hat{h}_{T}(X_{t}) \neq Y_{t}] \leq K ( d {\rm Log}(m_{T}^{*}/d) + {\rm Log}(1/\delta) )
\end{equation*}
as well.
Therefore,
\begin{align*}
\sum_{t=T-m_{T}^{*}}^{T-1} \!\!\mathbbm{1}[h^{*}_{T}(X_{t}) \neq \hat{h}_{T}(X_{t})]
& \leq
\sum_{t=T-m_{T}^{*}}^{T-1} \!\!\mathbbm{1}[h^{*}_{T}(X_{t}) \neq Y_{t}]
+
\sum_{t=T-m_{T}^{*}}^{T-1} \!\!\mathbbm{1}[Y_{t} \neq \hat{h}_{T}(X_{t})]
\\ & \leq
2 K ( d {\rm Log}(m_{T}^{*}/d) + {\rm Log}(1/\delta) ).
\end{align*}
Thus, by Lemma~\ref{lem:vc-ratio}, for each $m \in \mathbb{N}$,
with probability at least $1-\delta / (6 m^{2})$, if $m_{T}^{*} = m$, then
\begin{equation*}
\mathcal{P}(x : \hat{h}_{T}(x) \neq h^{*}_{T}(x))
\leq
(2K+c \sqrt{2K} + c) \frac{d {\rm Log}(m_{T}^{*}/d) + {\rm Log}(6(m_{T}^{*})^{2}/\delta)}{m_{T}^{*}}.
\end{equation*}
Furthermore, since
${\rm Log}(6(m_{T}^{*})^{2}) \leq \sqrt{2K} d {\rm Log}(m_{T}^{*} / d)$,
this is at most
\begin{equation*}
2(K+c \sqrt{2K}) \frac{d {\rm Log}(m_{T}^{*}/d) + {\rm Log}(1/\delta)}{m_{T}^{*}}.
\end{equation*}
By a union bound (over values $m \in \mathbb{N}$), we have that with probability at least $1-\sum_{m=1}^{\infty} \delta/(6 m^{2}) \geq 1 - \delta/3$,
\begin{equation*}
\mathcal{P}(x : \hat{h}_{T}(x) \neq h^{*}_{T}(x))
\leq 2(K+c \sqrt{2K}) \frac{d {\rm Log}(m_{T}^{*}/d) + {\rm Log}(1/\delta)}{m_{T}^{*}}.
\end{equation*}
train | 0.2.4 | Let us denote
\begin{equation*}
\tilde{m}_{T} = \mathop{\rm argmin}_{m \in \{1,\ldots,T-1\}} \frac{1}{m} \sum_{i=T-m}^{T-1} \sum_{j=i+1}^{T} \Delta_{j} + \frac{d {\rm Log}(m/d) + {\rm Log}(1/\delta)}{m}.
\end{equation*}
Note that, for any $m^{\prime} \in \{1,\ldots,T-1\}$ and $\delta \in (0,1)$,
if $\tilde{m}_{T} \geq m^{\prime}$, then
\begin{align*}
& \min_{m \in \{1,\ldots,T-1\}} \frac{1}{m} \sum_{i=T-m}^{T-1} \sum_{j=i+1}^{T} \Delta_{j} + \frac{d {\rm Log}(m/d) + {\rm Log}(1/\delta)}{m}
\\ & \geq \min_{m \in \{m^{\prime},\ldots,T-1\}} \frac{1}{m} \sum_{i=T-m}^{T-1} \sum_{j=i+1}^{T} \Delta_{j}
= \frac{1}{m^{\prime}} \sum_{i=T-m^{\prime}}^{T-1} \sum_{j=i+1}^{T} \Delta_{j},
\end{align*}
while if $\tilde{m}_{T} \leq m^{\prime}$, then
\begin{align*}
& \min_{m \in \{1,\ldots,T-1\}} \frac{1}{m} \sum_{i=T-m}^{T-1} \sum_{j=i+1}^{T} \Delta_{j} + \frac{d {\rm Log}(m/d) + {\rm Log}(1/\delta)}{m}
\\ & \geq \min_{m \in \{1,\ldots,m^{\prime}\}} \frac{d {\rm Log}(m/d)+{\rm Log}(1/\delta)}{m}
= \frac{d {\rm Log}(m^{\prime}/d) + {\rm Log}(1/\delta)}{m^{\prime}}.
\end{align*}
Either way, we have that
\begin{align}
& \min_{m \in \{1,\ldots,T-1\}} \frac{1}{m} \sum_{i=T-m}^{T-1} \sum_{j=i+1}^{T} \Delta_{j} + \frac{d {\rm Log}(m/d) + {\rm Log}(1/\delta)}{m} \notag
\\ & \geq \min\left\{ \frac{d {\rm Log}(m^{\prime}/d) + {\rm Log}(1/\delta)}{m^{\prime}}, \frac{1}{m^{\prime}} \sum_{i=T-m^{\prime}}^{T-1} \sum_{j=i+1}^{T} \Delta_{j} \right\}. \label{eqn:adaptive-min-lb}
\end{align}
For any $m \in \{1,\ldots,T-1\}$,
applying Bernstein's inequality (see \cite{boucheron:13}, equation 2.10) to the random variables $\mathbbm{1}[ h^{*}_{T}(X_{i}) \neq Y_{i} ]/d$, $i \in \{T-m,\ldots,T-1\}$,
and again to the random variables $-\mathbbm{1}[h^{*}_{T}(X_{i}) \neq Y_{i}]/d$, $i \in \{T-m,\ldots,T-1\}$, together with a union bound,
we obtain that, for any $\delta \in (0,1)$, with probability at least $1 - \delta / (3m^{2})$,
\begin{align}
& \frac{1}{m} \sum_{i=T-m}^{T-1} \mathcal{P}( x : h^{*}_{T}(x) \neq h^{*}_{i}(x) ) \notag
\\ & {\hskip 1cm}- \sqrt{ \left( \frac{1}{m} \sum_{i=T-m}^{T-1} \mathcal{P}( x : h^{*}_{T}(x) \neq h^{*}_{i}(x) ) \right) \frac{2\ln(3m^{2}/\delta)}{m} } \notag
\\ & < \frac{1}{m} \sum_{i=T-m}^{T-1} \mathbbm{1}[ h^{*}_{T}(X_{i}) \neq Y_{i} ] \notag
\\ & < \frac{1}{m} \sum_{i=T-m}^{T-1} \mathcal{P}( x : h^{*}_{T}(x) \neq h^{*}_{i}(x) ) \notag
\\ & {\hskip 1cm}+ \max\begin{cases}
\sqrt{ \left( \frac{1}{m} \sum_{i=T-m}^{T-1} \mathcal{P}( x : h^{*}_{T}(x) \neq h^{*}_{i}(x) ) \right) \frac{4\ln(3m^{2}/\delta)}{m} }
\\\frac{(4/3)\ln(3m^{2}/\delta)}{m} \end{cases}.\label{eqn:adaptive-empirical-ub}
\end{align}
The left inequality implies that
\begin{equation*}
\frac{1}{m} \!\sum_{i=T-m}^{T-1}\!\!\! \mathcal{P}( x \!:\! h^{*}_{T}(x) \neq h^{*}_{i}(x) )
\leq \max\left\{ \frac{2}{m} \!\sum_{i=T-m}^{T-1} \!\!\!\mathbbm{1}[ h^{*}_{T}(X_{i}) \neq Y_{i} ], \frac{8\ln(3m^{2}/\delta)}{m} \right\}\!.
\end{equation*}
Plugging this into the right inequality in \eqref{eqn:adaptive-empirical-ub}, we obtain that
\begin{multline*}
\frac{1}{m} \sum_{i=T-m}^{T-1} \mathbbm{1}[ h^{*}_{T}(X_{i}) \neq Y_{i} ]
< \frac{1}{m} \sum_{i=T-m}^{T-1} \mathcal{P}( x : h^{*}_{T}(x) \neq h^{*}_{i}(x) )
\\ + \max\left\{ \sqrt{ \left(\frac{1}{m} \sum_{i=T-m}^{T-1} \mathbbm{1}[ h^{*}_{T}(X_{i}) \neq Y_{i} ] \right) \frac{8\ln(3m^{2}/\delta)}{m} }, \frac{\sqrt{32}\ln(3m^{2}/\delta)}{m} \right\}.
\end{multline*}
By a union bound, this holds simultaneously for all $m \in \{1,\ldots,T-1\}$ with probability at least $1-\sum_{m = 1}^{T-1} \delta / (3m^{2}) > 1 - (2/3)\delta$.
Note that, on this event,
we obtain
\begin{multline*}
\frac{1}{m} \sum_{i=T-m}^{T-1} \mathcal{P}( x : h^{*}_{T}(x) \neq h^{*}_{i}(x) )
>
\frac{1}{m} \sum_{i=T-m}^{T-1} \mathbbm{1}[ h^{*}_{T}(X_{i}) \neq Y_{i} ]
\\ - \max\left\{ \sqrt{ \left(\frac{1}{m} \sum_{i=T-m}^{T-1} \mathbbm{1}[ h^{*}_{T}(X_{i}) \neq Y_{i} ] \right) \frac{8\ln(3m^{2}/\delta)}{m} }, \frac{\sqrt{32}\ln(3m^{2}/\delta)}{m} \right\}.
\end{multline*}
In particular, taking $m = m_{T}^{*}$, and invoking maximality of $m_{T}^{*}$, if $m_{T}^{*} < T-1$, the right hand side is at least
\begin{equation*}
(K - 6c\sqrt{K}) \frac{d {\rm Log}(m_{T}^{*}/d) + {\rm Log}(1/\delta)}{m_{T}^{*}}.
\end{equation*}
Since $\frac{1}{m} \sum_{i=T-m}^{T-1} \sum_{j=i+1}^{T} \Delta_{j} \geq \frac{1}{m} \sum_{i=T-m}^{T-1} \mathcal{P}( x : h^{*}_{T}(x) \neq h^{*}_{i}(x) )$,
taking $K = 145 c^{2}$,
we have that with probability at least $1-\delta$, if $m_{T}^{*} < T-1$, then
\begin{align*}
& 10(K+c \sqrt{2K})\min_{m \in \{1,\ldots,T-1\}} \frac{1}{m} \sum_{i=T-m}^{T-1} \sum_{j=i+1}^{T} \Delta_{j} + \frac{d {\rm Log}(m/d)+{\rm Log}(1/\delta)}{m}
\\ & \geq
10(K+c \sqrt{2K})\min\left\{ \frac{d {\rm Log}(m_{T}^{*}/d)+{\rm Log}(1/\delta)}{m_{T}^{*}}, \frac{1}{m_{T}^{*}} \sum_{i=T-m_{T}^{*}}^{T-1} \sum_{j=i+1}^{T} \Delta_{j} \right\}
\\ & \geq
10(K+c \sqrt{2K})\frac{d {\rm Log}(m_{T}^{*}/d) + {\rm Log}(1/\delta)}{m_{T}^{*}}
\\ & \geq \mathcal{P}(x : \hat{h}_{T}(x) \neq h^{*}_{T}(x)).
\end{align*}
Furthermore, if $m_{T}^{*} = T-1$, then we trivially have (on the same $1-\delta$ probability event as above)
\begin{align*}
& 10(K+c \sqrt{2K})\min_{m \in \{1,\ldots,T-1\}} \frac{1}{m} \sum_{i=T-m}^{T-1} \sum_{j=i+1}^{T} \Delta_{j} + \frac{d {\rm Log}(m/d)+{\rm Log}(1/\delta)}{m}
\\ & \geq 10(K+c \sqrt{2K}) \min_{m \in \{1,\ldots,T-1\}} \frac{d {\rm Log}(m/d)+{\rm Log}(1/\delta)}{m}
\\ & = 10(K+c \sqrt{2K}) \frac{d {\rm Log}((T-1)/d)+{\rm Log}(1/\delta)}{T-1}
\\ & = 10(K+c \sqrt{2K})\frac{d {\rm Log}(m_{T}^{*}/d) + {\rm Log}(1/\delta)}{m_{T}^{*}}
\geq \mathcal{P}(x : \hat{h}_{T}(x) \neq h^{*}_{T}(x)).
\end{align*}
\qed | 2,792 | 33,201 | en |
train | 0.2.5 | \end{proof} | 5 | 33,201 | en |
train | 0.2.6 | \subsection{Conditions Guaranteeing a Sublinear Number of Mistakes}
\label{sec:sublinear}
\input{tex-files/sublinear.tex} | 38 | 33,201 | en |
train | 0.2.7 | \section{Polynomial-Time Algorithms for Linear Separators}
\label{sec:halfspaces}
In this section, we suppose $\Delta_{t} = \Delta$ for every $t \in \mathbb{N}$, for a fixed constant $\Delta > 0$,
and we consider the special case of learning homogeneous linear separators in $\mathbb{R}^{d}$ under a uniform distribution
on the origin-centered unit sphere.
In this case, the analysis of \cite{helmbold:94} mentioned in Section~\ref{sec:classic-consistency} implies that it is possible to achieve a
bound on the error rate that is $\tilde{O}(d \sqrt{\Delta})$,
using an algorithm that runs in time ${\rm poly}(d,1/\Delta,\log(1/\delta))$ (and independent of $t$) for each prediction.
This also implies that it is possible to achieve expected number of mistakes among $T$ predictions that is $\tilde{O}(d \sqrt{\Delta}) \times T$.
\cite{min_concept}\footnote{This
work in fact studies a much broader model of drift, which allows the distribution $\mathcal{P}$ to vary with time as well. However, this $\tilde{O}((d \Delta)^{1/4}) \times T$ result can be obtained
from their more-general theorem by calculating the various parameters for this particular setting.}
have since proven that a variant of the Perceptron algorithm is capable of achieving an expected number of mistakes $\tilde{O}( (d \Delta)^{1/4} ) \times T$.
Below, we improve on this result by showing that there exists an efficient algorithm that achieves a
bound on the error rate that is $\tilde{O}(\sqrt{d \Delta})$,
as was possible with the inefficient algorithm of \cite{helmbold:94,long:99} mentioned in Section~\ref{sec:classic-constant-drift}.
This leads to a bound on the expected number of mistakes that is $\tilde{O}(\sqrt{d \Delta}) \times T$.
Furthermore, our approach also allows us to present the method as an \emph{active learning}
algorithm, and to bound the expected number of queries, as a function of the
number of samples $T$, by $\tilde{O}(\sqrt{d \Delta}) \times T$.
The technique is based on a modification of the algorithm of \cite{helmbold:94},
replacing an empirical risk minimization step with (a modification of) the computationally-efficient algorithm of \cite{awasthi:13}.
Formally, define the class of homogeneous linear separators as the set of classifiers
$h_{w} : \mathbb{R}^{d} \to \{-1,+1\}$, for $w \in \mathbb{R}^{d}$ with $\|w\|=1$,
such that $h_{w}(x) = {\rm sign}( w \cdot x )$ for every $x \in \mathbb{R}^{d}$. | 709 | 33,201 | en |
train | 0.2.8 | \subsection{An Improved Guarantee for a Polynomial-Time Algorithm}
\label{sec:efficient-linsep}
We have the following result.
\begin{theorem}
\label{thm:linsep-uniform}
When $\mathbb C$ is the space of homogeneous linear separators (with $d \geq 4$)
and $\mathcal{P}$ is the uniform distribution on the surface of
the origin-centered unit sphere in $\mathbb{R}^{d}$,
for any fixed $\Delta > 0$,
for any $\delta \in (0,1/e)$,
there is an algorithm that runs in time ${\rm poly}(d,1/\Delta,\log(1/\delta))$ for each time $t$,
such that for any $\{h^{*}_{t}\}_{t \in \mathbb{N}} \in S_{\Delta}$,
for every sufficiently large $t \in \mathbb{N}$, with probability at least $1-\delta$,
\begin{equation*}
{\rm er}_{t}(\hat{h}_{t}) = O\left( \sqrt{\Delta d \log\left(\frac{1}{\delta}\right) } \right).
\end{equation*}
Also, running this algorithm with $\delta = \sqrt{\Delta d} \land 1/e$,
the expected number of mistakes among the first $T$ instances is
$O\left( \sqrt{ \Delta d \log\left(\frac{1}{\Delta d}\right) } T \right)$.
Furthermore, the algorithm can be run as an \emph{active learning} algorithm,
in which case, for this choice of $\delta$, the expected number of labels
requested by the algorithm among the first $T$ instances is
$O\left( \sqrt{\Delta d} \log^{3/2}\left(\frac{1}{\Delta d}\right) T \right)$.
\end{theorem}
We first state the algorithm used to obtain this result. It is primarily based on a
margin-based learning strategy of \cite{awasthi:13}, combined with an initialization
step based on a modified Perceptron rule from \cite{stream_perceptron,min_concept}.
For $\tau > 0$ and $x \in \mathbb{R}$, define $\ell_{\tau}(x) = \max\left\{0, 1 - \frac{x}{\tau}\right\}$.
Consider the following algorithm and subroutine;
parameters $\delta_{k}$, $m_{k}$, $\tau_{k}$, $r_{k}$, $b_{k}$, $\alpha$, and $\kappa$
will all be specified in the context of the proof; we suppose $M = \sum_{k=0}^{\lceil \log_{2}(1/\alpha) \rceil} m_{k}$.
\begin{bigboxit}
Algorithm: DriftingHalfspaces\\
0. Let $\tilde{h}_{0}$ be an arbitrary classifier in $\mathbb C$\\
1. For $i = 1,2,\ldots$\\
2. \quad $\tilde{h}_{i} \gets {\rm ABL}(M (i-1), \tilde{h}_{i-1})$\\
\end{bigboxit}
\begin{bigboxit}
Subroutine: ${\rm ModPerceptron}(t,\tilde{h})$\\
0. Let $w_{t}$ be any element of $\mathbb{R}^{d}$ with $\|w_{t}\| = 1$\\
1. For $m = t+1,t+2,\ldots,t+m_{0}$\\
2. \quad Choose $\hat{h}_{m} = \tilde{h}$ (i.e., predict $\hat{Y}_{m} = \tilde{h}(X_{m})$ as the prediction for $Y_{m}$)\\
3. \quad Request the label $Y_{m}$\\
4. \quad If $h_{w_{m-1}}(X_{m}) \neq Y_{m}$\\
5. \qquad $w_{m} \gets w_{m-1} - 2(w_{m-1} \cdot X_{m}) X_{m}$\\
6. \quad Else $w_{m} \gets w_{m-1}$\\
7. Return $w_{t+m_{0}}$
\end{bigboxit}
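
The update in Steps 4--6 is the reflection-based rule of \cite{stream_perceptron,min_concept}: on a mistake, $w$ is reflected across the hyperplane with normal $X_{m}$, and since $\|X_{m}\|=1$ this preserves $\|w\|=1$. A minimal sketch of one step (function name ours):
\begin{verbatim}
import numpy as np

def mod_perceptron_step(w, x, y):
    # One ModPerceptron update.  With ||x|| = 1, the reflection
    # w - 2(w.x)x preserves ||w|| = 1, since
    # ||w - 2(w.x)x||^2 = ||w||^2 - 4(w.x)^2 + 4(w.x)^2 ||x||^2.
    pred = 1 if w @ x > 0 else -1
    if pred != y:
        w = w - 2.0 * (w @ x) * x
    return w
\end{verbatim}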
\begin{bigboxit}
Subroutine: ${\rm ABL}(t,\tilde{h})$\\
0. Let $w_{0}$ be the return value of ${\rm ModPerceptron}(t,\tilde{h})$\\
1. For $k = 1,2,\ldots,\lceil \log_{2}(1/\alpha) \rceil$\\
2. \quad $W_{k} \gets \{\}$\\
3. \quad For $s = t + \sum_{j=0}^{k-1} m_{j} + 1, \ldots, t + \sum_{j=0}^{k} m_{j}$\\
4. \qquad Choose $\hat{h}_{s} = \tilde{h}$ (i.e., predict $\hat{Y}_{s} = \tilde{h}(X_{s})$ as the prediction for $Y_{s}$)\\
5. \qquad If $|w_{k-1} \cdot X_{s}| \leq b_{k-1}$, Request label $Y_{s}$ and let $W_{k} \gets W_{k} \cup \{(X_{s},Y_{s})\}$\\
6. \quad Find $v_{k} \in \mathbb{R}^{d}$ with $\|v_{k} - w_{k-1}\| \leq r_{k}$, $0 < \|v_{k}\| \leq 1$,
and\\ {\hskip 7mm}$\sum\limits_{(x,y) \in W_{k}} \ell_{\tau_{k}}(y (v_{k} \cdot x)) \leq \inf\limits_{v : \|v-w_{k-1}\| \leq r_{k}} \sum\limits_{(x,y) \in W_{k}} \ell_{\tau_{k}}(y (v \cdot x)) + \kappa |W_{k}|$\\
7. \quad Let $w_{k} = \frac{1}{\|v_{k}\|} v_{k}$\\
8. Return $h_{w_{\lceil \log_{2}(1/\alpha) \rceil-1}}$
\end{bigboxit}
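
Computationally, ${\rm ABL}$ combines two ingredients: the band-based label requests of Step 5, and the constrained hinge-loss minimization of Step 6, which is a convex program over the intersection of two balls. The sketch below (ours; the solver for Step 6 is abstracted away) illustrates both:
\begin{verbatim}
import numpy as np

def ell(tau, z):
    # The loss from the text: ell_tau(z) = max(0, 1 - z/tau).
    return max(0.0, 1.0 - z / tau)

def collect_band_queries(w_prev, b_prev, batch, label_oracle):
    # Step 5: request labels only inside the band |w_prev . x| <= b_prev.
    return [(x, label_oracle(x)) for x in batch
            if abs(w_prev @ x) <= b_prev]

def hinge_objective(v, W, tau):
    # The empirical objective of Step 6, to be (approximately) minimized
    # over {v : ||v - w_prev|| <= r_k, 0 < ||v|| <= 1}, e.g. by a
    # projected subgradient method.
    return sum(ell(tau, y * (v @ x)) for x, y in W)
\end{verbatim}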
Before stating the proof, we have a few additional lemmas that will be needed.
The following result for ${\rm ModPerceptron}$ was proven by \cite{min_concept}.
\begin{lemma}
\label{lem:perceptron}
Suppose $\Delta < \frac{1}{512}$.
Consider the values $w_{m}$ obtained during the execution of ${\rm ModPerceptron}(t,\tilde{h})$.
$\forall m \in \{t+1,\ldots, t+ m_{0}\}$, $\mathcal{P}(x : h_{w_{m}}(x) \neq h_{m}^{*}(x)) \leq \mathcal{P}(x : h_{w_{m-1}}(x) \neq h_{m}^{*}(x))$.
Furthermore, letting $c_{1} = \frac{\pi^{2}}{d \cdot 400 \cdot 2^{15}}$, if
$\mathcal{P}(x : h_{w_{m-1}}(x) \neq h_{m}^{*}(x)) \geq 1/32$,
then with probability at least $1/64$,
$\mathcal{P}(x : h_{w_{m}}(x) \neq h_{m}^{*}(x)) \leq (1 - c_{1}) \mathcal{P}(x : h_{w_{m-1}}(x) \neq h_{m}^{*}(x))$.
\end{lemma}
This implies the following.
\begin{lemma}
\label{lem:perceptron-init}
Suppose $\Delta \leq \frac{\pi^{2}}{400 \cdot 2^{27} (d+\ln(4/\delta))}$.
For $m_{0} = \max\{\lceil 128 (1/c_{1}) \ln(32) \rceil,$ $\lceil 512 \ln(\frac{4}{\delta}) \rceil \}$,
with probability at least $1-\delta/4$,
${\rm ModPerceptron}(t,\tilde{h})$ returns a vector $w$ with
$\mathcal{P}(x : h_{w}(x) \neq h_{t+m_{0}+1}^{*}(x)) \leq 1/16$.
\end{lemma}
\begin{proof}
By Lemma~\ref{lem:perceptron} and a union bound, in general we have
\begin{equation}
\label{eqn:perceptron-weak-update}
\mathcal{P}(x : h_{w_{m}}(x) \neq h_{m+1}^{*}(x)) \leq \mathcal{P}(x : h_{w_{m-1}}(x) \neq h_{m}^{*}(x)) + \Delta.
\end{equation}
Furthermore, if $\mathcal{P}(x : h_{w_{m-1}}(x) \neq h_{m}^{*}(x)) \geq 1/32$,
then with probability at least $1/64$,
\begin{equation}
\label{eqn:perceptron-strong-update}
\mathcal{P}(x : h_{w_{m}}(x) \neq h_{m+1}^{*}(x)) \leq (1-c_{1}) \mathcal{P}(x : h_{w_{m-1}}(x) \neq h_{m}^{*}(x)) + \Delta.
\end{equation}
In particular, this implies that the number $N$ of values $m \in \{t+1,\ldots,t+m_{0}\}$ with either
$\mathcal{P}(x : h_{w_{m-1}}(x) \neq h_{m}^{*}(x)) < 1/32$ or $\mathcal{P}(x : h_{w_{m}}(x) \neq h_{m+1}^{*}(x)) \leq (1-c_{1}) \mathcal{P}(x : h_{w_{m-1}}(x) \neq h_{m}^{*}(x)) + \Delta$
is lower-bounded by a ${\rm Binomial}(m_{0},1/64)$ random variable.
Thus, a Chernoff bound implies that with probability at least $1 - \exp\{ - m_{0} / 512 \} \geq 1 - \delta/4$,
we have $N \geq m_{0} / 128$. Suppose this happens.
Since $\Delta m_{0} \leq 1/32$, if any $m \in \{t+1,\ldots,t+m_{0}\}$ has $\mathcal{P}(x : h_{w_{m-1}}(x) \neq h_{m}^{*}(x)) < 1/32$,
then inductively applying \eqref{eqn:perceptron-weak-update} implies that
$\mathcal{P}(x : h_{w_{t+m_{0}}}(x) \neq h_{t+m_{0}+1}^{*}(x)) \leq 1/32 + \Delta m_{0} \leq 1/16$.
On the other hand, if all $m \in \{t+1,\ldots,t+m_{0}\}$ have $\mathcal{P}(x : h_{w_{m-1}}(x) \neq h_{m}^{*}(x)) \geq 1/32$,
then in particular we have $N$ values of $m \in \{t+1,\ldots,t+m_{0}\}$ satisfying \eqref{eqn:perceptron-strong-update}.
Combining this fact with \eqref{eqn:perceptron-weak-update} inductively, we have that
\begin{multline*}
\mathcal{P}(x : h_{w_{t+m_{0}}}(x) \neq h_{t+m_{0}+1}^{*}(x))
\leq (1-c_{1})^{N} \mathcal{P}(x : h_{w_{t}}(x) \neq h_{t+1}^{*}(x)) + \Delta m_{0}
\\ \leq (1-c_{1})^{(1/c_{1}) \ln(32) } \mathcal{P}(x : h_{w_{t}}(x) \neq h_{t+1}^{*}(x)) + \Delta m_{0}
\leq \frac{1}{32} + \Delta m_{0}
\leq \frac{1}{16}.
\end{multline*}
\qed
\end{proof} | 3,010 | 33,201 | en |
train | 0.2.9 | Next, we consider the execution of ${\rm ABL}(t,\tilde{h})$, and let the sets $W_{k}$ be as in that execution.
We will denote by $w^{*}$ the weight vector with $\|w^{*}\|=1$ such that $h_{t+m_{0}+1}^{*} = h_{w^{*}}$.
Also define $M_{1} = M-m_{0}$.
The proof relies on a few results proven in the work of \cite{awasthi:13}, which we summarize in the following lemmas.
Although the results were proven in a slightly different setting in that work (namely, agnostic learning under a fixed joint distribution),
one can easily verify that their proofs remain valid in our present context as well.
\begin{lemma}
\label{lem:denoised-risk}
\cite{awasthi:13}
Fix any $k \in \{1,\ldots,\lceil \log_{2}(1/\alpha) \rceil\}$.
For a universal constant $c_{7} > 0$, suppose $b_{k-1} = c_{7} 2^{1-k} / \sqrt{d}$,
and let $z_{k} = \sqrt{r_{k}^{2}/(d-1) + b_{k-1}^{2}}$.
For a universal constant $c_{1} > 0$, if $\|w^{*} - w_{k-1}\| \leq r_{k}$,
\begin{multline*}
{\hskip -3mm}\left| \mathbb E\!\left[ \sum_{(x,y) \in W_{k}} \ell_{\tau_{k}}(|w^{*} \cdot x|) \Big| w_{k-1}, |W_{k}| \right]
- \mathbb E\!\left[ \sum_{(x,y) \in W_{k}} \ell_{\tau_{k}}(y (w^{*} \cdot x)) \Big| w_{k-1}, |W_{k}| \right] \right|
\\ \leq c_{1} |W_{k}| \sqrt{2^{k} \Delta M_{1}} \frac{z_{k}}{\tau_{k}}.
\end{multline*}
\end{lemma}
\begin{lemma}
\label{lem:margin-error-concentration}
\cite{balcan:13}
For any $c > 0$, there is a constant $c^{\prime} > 0$ depending only on $c$ (i.e., not depending on $d$)
such that, for any $u,v \in \mathbb{R}^{d}$ with $\|u\|=\|v\|=1$, letting $\sigma = \mathcal{P}(x : h_{u}(x) \neq h_{v}(x))$,
if $\sigma < 1/2$, then
\begin{equation*}
\mathcal{P}\left( x : h_{u}(x) \neq h_{v}(x) \text{ and } |v \cdot x| \geq c^{\prime} \frac{\sigma}{\sqrt{d}} \right) \leq c \sigma.
\end{equation*}
\end{lemma}
The following is a well-known lemma concerning concentration around the equator for the uniform distribution (see e.g., \cite{stream_perceptron,balcan:07,awasthi:13});
for instance, it easily follows from the formulas for the area in a spherical cap derived by \cite{li:11}.
\begin{lemma}
\label{lem:uniform-P-concentration}
For any constant $C > 0$, there are constants $c_{2},c_{3} > 0$ depending only on $C$ (i.e., independent of $d$) such that,
for any $w \in \mathbb{R}^{d}$ with $\|w\|=1$, $\forall \gamma \in [0, C/\sqrt{d}]$,
\begin{equation*}
c_{2} \gamma \sqrt{d} \leq \mathcal{P}\left( x : |w \cdot x| \leq \gamma \right) \leq c_{3} \gamma \sqrt{d}.
\end{equation*}
\end{lemma}
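
As a quick numerical illustration of Lemma~\ref{lem:uniform-P-concentration} (a sanity check under arbitrarily chosen parameter values, not part of the analysis), one can estimate the band mass by sampling uniformly from the sphere:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
d = 25
gamma = 0.2 / np.sqrt(d)                       # gamma in [0, C/sqrt(d)]
X = rng.normal(size=(200000, d))
X /= np.linalg.norm(X, axis=1, keepdims=True)  # uniform on the unit sphere
w = np.eye(d)[0]                               # any fixed unit vector
band_mass = np.mean(np.abs(X @ w) <= gamma)
print(band_mass, gamma * np.sqrt(d))  # agree up to a constant factor
\end{verbatim}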
Based on this lemma, \cite{awasthi:13} prove the following.
\begin{lemma}
\label{lem:opt-margin-loss}
\cite{awasthi:13}
For $X \sim \mathcal{P}$, for any $w \in \mathbb{R}^{d}$ with $\|w\|=1$, for any $C > 0$ and $\tau, b \in [0,C/\sqrt{d}]$,
for $c_{2},c_{3}$ as in Lemma~\ref{lem:uniform-P-concentration},
\begin{equation*}
\mathbb E\left[ \ell_{\tau}( |w^{*} \cdot X| ) \Big| |w \cdot X| \leq b \right] \leq \frac{c_{3} \tau}{c_{2} b}.
\end{equation*}
\end{lemma}
The following is a slightly stronger version of a result of \cite{awasthi:13} (specifically,
the size of $m_{k}$, and consequently the bound on $|W_{k}|$, are both improved by a factor of $d$
compared to the original result).
\begin{lemma}
\label{lem:margin-error-bound}
Fix any $\delta \in (0,1/e)$.
For universal constants $c_{4},c_{5},c_{6},c_{7},c_{8},c_{9},c_{10} \in (0,\infty)$,
for an appropriate choice of $\kappa \in (0,1)$ (a universal constant),
if $\alpha = c_{9} \sqrt{\Delta d \log\left(\frac{1}{\kappa\delta}\right)}$,
for every $k \in \{1,\ldots,\lceil \log_{2}(1/\alpha) \rceil\}$,
if $b_{k-1} = c_{7} 2^{1-k} / \sqrt{d}$, $\tau_{k} = c_{8} 2^{-k} / \sqrt{d}$, $r_{k} = c_{10} 2^{-k}$, $\delta_{k} = \delta / (\lceil \log_{2}(4/\alpha) \rceil - k)^{2}$,
and $m_{k} = \left\lceil c_{5} \frac{2^{k}}{\kappa^{2}} d \log\left(\frac{1}{\kappa\delta_{k}} \right)\right\rceil$,
and if $\mathcal{P}(x : h_{w_{k-1}}(x) \neq h_{w^{*}}(x)) \leq 2^{-k-3}$,
then with probability at least $1-(4/3)\delta_{k}$,
$|W_{k}| \leq c_{6} \frac{1}{\kappa^{2}} d \log\left(\frac{1}{\kappa\delta_{k}}\right)$
and
$\mathcal{P}(x : h_{w_{k}}(x) \neq h_{w^{*}}(x)) \leq 2^{-k-4}$.
\end{lemma}
\begin{proof}
By Lemma~\ref{lem:uniform-P-concentration}, and a Chernoff and union bound,
for an appropriately large choice of $c_{5}$ and any $c_{7} > 0$,
letting $c_{2},c_{3}$ be as in Lemma~\ref{lem:uniform-P-concentration} (with $C=c_{7} \lor (c_{8}/2)$),
with probability at least $1-\delta_{k}/3$,
\begin{equation}
\label{eqn:Wk-bounds}
c_{2} c_{7} 2^{-k} m_{k}
\leq |W_{k}| \leq
4 c_{3} c_{7} 2^{-k} m_{k}.
\end{equation}
The claimed upper bound on $|W_{k}|$ follows from this second inequality.
Next note that, if $\mathcal{P}(x : h_{w_{k-1}}(x) \neq h_{w^{*}}(x)) \leq 2^{-k-3}$,
then
\begin{equation*}
\max\{ \ell_{\tau_{k}}(y (w^{*} \cdot x)) : x \in \mathbb{R}^{d}, |w_{k-1} \cdot x| \leq b_{k-1}, y \in \{-1,+1\} \} \leq c_{11} \sqrt{d}
\end{equation*}
for some universal constant $c_{11} > 0$.
Furthermore, since $\mathcal{P}(x : h_{w_{k-1}}(x) \neq h_{w^{*}}(x)) \leq 2^{-k-3}$,
we know that the angle between $w_{k-1}$ and $w^{*}$ is at most $2^{-k-3} \pi$,
so that
\begin{multline*}
\|w_{k-1} - w^{*}\|
= \sqrt{ 2 - 2 w_{k-1} \cdot w^{*} }
\leq \sqrt{ 2 - 2 \cos(2^{-k-3} \pi) }
\\ \leq \sqrt{ 2 - 2 \cos^{2}(2^{-k-3} \pi) }
= \sqrt{2} \sin(2^{-k-3} \pi) \leq 2^{-k-3} \pi \sqrt{2}.
\end{multline*}
For $c_{10} = \pi\sqrt{2} 2^{-3}$, this is $r_{k}$.
By Hoeffding's inequality (under the conditional distribution given $|W_{k}|$), the law of total probability,
Lemma~\ref{lem:denoised-risk}, and linearity of conditional expectations,
with probability at least $1-\delta_{k}/3$, for $X \sim \mathcal{P}$,
\begin{multline}
\label{eqn:opt-loss-bound}
\sum_{(x,y) \in W_{k}} \ell_{\tau_{k}}( y ( w^{*} \cdot x) )
\leq |W_{k}| \mathbb E\left[ \ell_{\tau_{k}}(|w^{*} \cdot X|) \Big| w_{k-1}, |w_{k-1} \cdot X| \leq b_{k-1} \right]
\\ + c_{1} |W_{k}| \sqrt{2^{k} \Delta M_{1}} \frac{z_{k}}{\tau_{k}}
+ \sqrt{ |W_{k}| (1/2) c_{11}^{2} d \ln(3/\delta_{k}) }.
\end{multline}
We bound each term on the right hand side separately.
By Lemma~\ref{lem:opt-margin-loss}, the first term is at most $|W_{k}|\frac{c_{3} \tau_{k}}{c_{2} b_{k-1}} = |W_{k}|\frac{c_{3} c_{8}}{2 c_{2} c_{7}}$.
Next,
\begin{equation*}
\frac{z_{k}}{\tau_{k}}
= \frac{\sqrt{c_{10}^{2} 2^{-2k}/(d-1) + 4 c_{7}^{2} 2^{-2k}/d}}{c_{8} 2^{-k} / \sqrt{d}}
\leq \frac{\sqrt{ 2c_{10}^{2} + 4 c_{7}^{2}}}{c_{8}},
\end{equation*}
while $2^{k} \leq 2/\alpha$
so that the second term is at most
\begin{equation*}
\sqrt{2} c_{1} \frac{\sqrt{ 2c_{10}^{2} + 4 c_{7}^{2}}}{c_{8}} |W_{k}| \sqrt{ \frac{\Delta M_{1}}{\alpha} }.
\end{equation*}
Noting that
\begin{equation}
\label{eqn:m-bound}
M_{1} = \sum_{k^{\prime}=1}^{\lceil \log_{2}(1/\alpha) \rceil} m_{k^{\prime}}
\leq \frac{32 c_{5}}{\kappa^{2}} \frac{1}{\alpha} d \log\left(\frac{1}{\kappa\delta}\right),
\end{equation}
we find that the second term on the right hand side of \eqref{eqn:opt-loss-bound} is at most
\begin{equation*}
\sqrt{\frac{c_{5}}{c_{9}}} \frac{8 c_{1}}{\kappa} \frac{\sqrt{ 2c_{10}^{2} + 4 c_{7}^{2}}}{c_{8}} |W_{k}| \sqrt{ \frac{\Delta d \log\left(\frac{1}{\kappa\delta}\right)}{\alpha^{2}} }
= \frac{8 c_{1} \sqrt{c_{5}}}{\kappa} \frac{\sqrt{ 2c_{10}^{2} + 4 c_{7}^{2}}}{c_{8}c_{9}} |W_{k}|.
\end{equation*}
Finally, since $d \ln(3/\delta_{k}) \leq 2 d \ln(1/\delta_{k}) \leq \frac{2 \kappa^{2}}{c_{5}} 2^{-k} m_{k}$,
and \eqref{eqn:Wk-bounds} implies $2^{-k} m_{k} \leq \frac{1}{c_{2} c_{7}} |W_{k}|$,
the third term on the right hand side of \eqref{eqn:opt-loss-bound} is at most
\begin{equation*}
|W_{k}| \frac{c_{11} \kappa}{ \sqrt{c_{2} c_{5} c_{7}} }.
\end{equation*}
Altogether, we have
\begin{equation*}
\sum_{(x,y) \in W_{k}} \ell_{\tau_{k}}( y ( w^{*} \cdot x) )
\leq |W_{k}| \left(
\frac{c_{3} c_{8}}{2 c_{2} c_{7}}
+ \frac{8 c_{1} \sqrt{c_{5}}}{\kappa} \frac{\sqrt{ 2c_{10}^{2} + 4 c_{7}^{2}}}{c_{8}c_{9}}
+ \frac{c_{11} \kappa}{ \sqrt{c_{2} c_{5} c_{7}} }\right).
\end{equation*}
Taking $c_{9} = 1/\kappa^{3}$ and $c_{8} = \kappa$, this is at most
\begin{equation*}
\kappa |W_{k}| \left(
\frac{c_{3}}{2 c_{2} c_{7}}
+ 8 c_{1} \sqrt{c_{5}}\sqrt{ 2c_{10}^{2} + 4 c_{7}^{2}}
+ \frac{c_{11}}{ \sqrt{c_{2} c_{5} c_{7}} }\right).
\end{equation*} | 3,642 | 33,201 | en |
train | 0.2.10 | Next, note that because $h_{w_{k}}(x) \neq y \Rightarrow \ell_{\tau_{k}}(y (v_{k} \cdot x)) \geq 1$,
and because (as proven above) $\|w^{*} - w_{k-1}\| \leq r_{k}$,
\begin{equation*}
|W_{k}| {\rm er}_{W_{k}}( h_{w_{k}} )
\leq \sum_{(x,y) \in W_{k}} \ell_{\tau_{k}}(y (v_{k} \cdot x))
\leq \sum_{(x,y) \in W_{k}} \ell_{\tau_{k}}(y (w^{*} \cdot x)) + \kappa |W_{k}|.
\end{equation*}
Combined with the above, we have
\begin{equation*}
|W_{k}| {\rm er}_{W_{k}}( h_{w_{k}} )
\leq \kappa |W_{k}| \left(
1 + \frac{c_{3}}{2 c_{2} c_{7}}
+ 8 c_{1} \sqrt{c_{5}}\sqrt{ 2c_{10}^{2} + 4 c_{7}^{2}}
+ \frac{c_{11}}{ \sqrt{c_{2} c_{5} c_{7}} }\right).
\end{equation*}
Let $c_{12} = 1 + \frac{c_{3}}{2 c_{2} c_{7}} + 8 c_{1} \sqrt{c_{5}}\sqrt{ 2c_{10}^{2} + 4 c_{7}^{2}} + \frac{c_{11}}{ \sqrt{c_{2} c_{5} c_{7}} }$.
Furthermore,
\begin{multline*}
|W_{k}|{\rm er}_{W_{k}}( h_{w_{k}} )
= \sum_{(x,y) \in W_{k}} \mathbbm{1}[ h_{w_{k}}(x) \neq y ]
\\ \geq \sum_{(x,y) \in W_{k}} \mathbbm{1}[ h_{w_{k}}(x) \neq h_{w^{*}}(x) ] - \sum_{(x,y) \in W_{k}} \mathbbm{1}[ h_{w^{*}}(x) \neq y ].
\end{multline*}
For an appropriately large value of $c_{5}$,
by a Chernoff bound, with probability at least $1-\delta_{k}/3$,
\begin{equation*}
\sum_{s=t+\sum_{j=0}^{k-1}m_{j} + 1}^{t+\sum_{j=0}^{k} m_{j}} \mathbbm{1}[ h_{w^{*}}(X_{s}) \neq Y_{s} ]
\leq 2 e \Delta M_{1} m_{k} + \log_{2}(3/\delta_{k}).
\end{equation*}
In particular, this implies
\begin{equation*}
\sum_{(x,y) \in W_{k}} \mathbbm{1}[ h_{w^{*}}(x) \neq y ]
\leq 2 e \Delta M_{1} m_{k} + \log_{2}(3/\delta_{k}),
\end{equation*}
so that
\begin{equation*}
\sum_{(x,y) \in W_{k}} \mathbbm{1}[ h_{w_{k}}(x) \neq h_{w^{*}}(x) ]
\leq |W_{k}|{\rm er}_{W_{k}}( h_{w_{k}} ) + 2 e \Delta M_{1} m_{k} + \log_{2}(3/\delta_{k}).
\end{equation*}
Noting that \eqref{eqn:m-bound} and \eqref{eqn:Wk-bounds} imply
\begin{align*}
\Delta M_{1} m_{k} & \leq \Delta \frac{32 c_{5}}{\kappa^{2}} \frac{ d \log\left(\frac{1}{\kappa\delta}\right) }{c_{9} \sqrt{ \Delta d \log\left(\frac{1}{\kappa\delta}\right)}} \frac{2^{k}}{c_{2} c_{7}} |W_{k}|
\leq \frac{32 c_{5}}{c_{2} c_{7} c_{9} \kappa^{2}} \sqrt{ \Delta d \log\left(\frac{1}{\kappa\delta}\right) } 2^{k} |W_{k}|
\\ & = \frac{32 c_{5}}{c_{2} c_{7} c_{9}^{2} \kappa^{2}} \alpha 2^{k} |W_{k}|
= \frac{32 c_{5} \kappa^{4}}{c_{2} c_{7}} \alpha 2^{k} |W_{k}|
\leq \frac{32 c_{5} \kappa^{4}}{c_{2} c_{7}} |W_{k}|,
\end{align*}
and \eqref{eqn:Wk-bounds} implies $\log_{2}(3/\delta_{k}) \leq \frac{2\kappa^{2}}{c_{2}c_{5}c_{7}}|W_{k}|$,
altogether we have
\begin{align*}
\sum_{(x,y) \in W_{k}} \mathbbm{1}[ h_{w_{k}}(x) \neq h_{w^{*}}(x) ]
& \leq |W_{k}|{\rm er}_{W_{k}}( h_{w_{k}} ) + \frac{64 e c_{5} \kappa^{4}}{c_{2} c_{7}} |W_{k}| + \frac{2\kappa^{2}}{c_{2}c_{5}c_{7}}|W_{k}|
\\ & \leq \kappa |W_{k}| \left( c_{12} + \frac{64 e c_{5} \kappa^{3}}{c_{2} c_{7}} + \frac{2\kappa}{c_{2}c_{5}c_{7}} \right).
\end{align*}
Letting $c_{13} = c_{12} + \frac{64 e c_{5}}{c_{2} c_{7}} + \frac{2}{c_{2}c_{5}c_{7}}$, and noting $\kappa \leq 1$,
we have
$\sum_{(x,y) \in W_{k}} \mathbbm{1}[ h_{w_{k}}(x) \neq h_{w^{*}}(x) ] \leq c_{13} \kappa |W_{k}|$.
train | 0.2.11 | Lemma~\ref{lem:vc-ratio} (applied under the conditional distribution given $|W_{k}|$)
and the law of total probability imply that with probability at least $1-\delta_{k}/3$,
\begin{align*}
|W_{k}| &\mathcal{P}\left( x : h_{w_{k}}(x) \neq h_{w^{*}}(x) \Big| |w_{k-1} \cdot x| \leq b_{k-1}\right)
\\ & \leq \sum_{(x,y) \in W_{k}} \mathbbm{1}[ h_{w_{k}}(x) \neq h_{w^{*}}(x)]
+ c_{14} \sqrt{ |W_{k}| (d \log(|W_{k}|/d) + \log(1/\delta_{k})) },
\end{align*}
for a universal constant $c_{14} > 0$.
Combined with the above, and the fact that \eqref{eqn:Wk-bounds} implies
$\log(1/\delta_{k}) \leq \frac{\kappa^{2}}{c_{2}c_{5}c_{7}}|W_{k}|$
and
\begin{align*}
d \log(|W_{k}|/d) & \leq d \log\left(\frac{8c_{3}c_{5}c_{7} \log\left(\frac{1}{\kappa\delta_{k}}\right)}{\kappa^{2}}\right)
\\ & \leq d \log\left(\frac{8 c_{3} c_{5} c_{7}}{\kappa^{3} \delta_{k}}\right)
\leq 3\log(8 \max\{c_{3},1\} c_{5} ) c_{5} d \log\left(\frac{1}{\kappa \delta_{k}}\right)
\\ & \leq 3 \log(8 \max\{c_{3},1\}) \kappa^{2} 2^{-k} m_{k}
\leq \frac{3 \log(8 \max\{c_{3},1\})}{c_{2} c_{7}} \kappa^{2} |W_{k}|,
\end{align*}
we have
\begin{align*}
|W_{k}| & \mathcal{P}\left( x : h_{w_{k}}(x) \neq h_{w^{*}}(x) \Big| |w_{k-1} \cdot x| \leq b_{k-1}\right)
\\ & \leq c_{13} \kappa |W_{k}|
+ c_{14} \sqrt{ |W_{k}| \left( \frac{3 \log(8 \max\{c_{3},1\})}{c_{2} c_{7}} \kappa^{2} |W_{k}| + \frac{\kappa^{2}}{c_{2}c_{5}c_{7}}|W_{k}| \right)}
\\ & = \kappa |W_{k}| \left( c_{13} + c_{14} \sqrt{ \frac{3 \log(8 \max\{c_{3},1\})}{c_{2} c_{7}} + \frac{1}{c_{2}c_{5}c_{7}}}\right).
\end{align*}
Thus, letting $c_{15} = \left( c_{13} + c_{14} \sqrt{ \frac{3 \log(8 \max\{c_{3},1\})}{c_{2} c_{7}} + \frac{1}{c_{2}c_{5}c_{7}}}\right)$,
we have
\begin{equation}
\label{eqn:conditional-error-bound}
\mathcal{P}\left( x : h_{w_{k}}(x) \neq h_{w^{*}}(x) \Big| |w_{k-1} \cdot x| \leq b_{k-1}\right)
\leq c_{15} \kappa.
\end{equation}
Next, note that $\|v_{k} - w_{k-1}\|^{2} = \|v_{k}\|^{2} + 1 - 2 \|v_{k}\| \cos( \pi \mathcal{P}(x : h_{w_{k}}(x) \neq h_{w_{k-1}}(x)) )$.
Thus, one implication of the fact that $\|v_{k} - w_{k-1}\| \leq r_{k}$ is that
$\frac{\|v_{k}\|}{2} + \frac{1-r_{k}^{2}}{2\|v_{k}\|} \leq \cos( \pi \mathcal{P}(x : h_{w_{k}}(x) \neq h_{w_{k-1}}(x)) )$;
since the left hand side is positive, we have $\mathcal{P}(x : h_{w_{k}}(x) \neq h_{w_{k-1}}(x)) < 1/2$.
Additionally, by differentiating, one can easily verify that for $\phi \in [0,\pi]$,
$x \mapsto \sqrt{ x^{2} + 1 - 2 x \cos(\phi) }$ is minimized at $x=\cos(\phi)$,
in which case $\sqrt{x^{2} + 1 - 2 x \cos(\phi) } = \sin(\phi)$.
Thus, $\|v_{k} - w_{k-1}\| \geq \sin( \pi \mathcal{P}(x : h_{w_{k}}(x) \neq h_{w_{k-1}}(x) ) )$.
Since $\|v_{k} - w_{k-1}\| \leq r_{k}$,
we have $\sin(\pi \mathcal{P}(x : h_{w_{k}}(x) \neq h_{w_{k-1}}(x))) \leq r_{k}$.
Since $\sin(\pi x) \geq x$ for all $x \in [0,1/2]$,
combining this with the fact (proven above) that $\mathcal{P}(x : h_{w_{k}}(x) \neq h_{w_{k-1}}(x)) < 1/2$
implies $\mathcal{P}(x : h_{w_{k}}(x) \neq h_{w_{k-1}}(x)) \leq r_{k}$.
In particular, we have that both $\mathcal{P}(x : h_{w_{k}}(x) \neq h_{w_{k-1}}(x)) \leq r_{k}$ and $\mathcal{P}(x : h_{w^{*}}(x) \neq h_{w_{k-1}}(x)) \leq 2^{-k-3} \leq r_{k}$.
Now Lemma~\ref{lem:margin-error-concentration} implies that, for any universal constant $c > 0$,
there exists a corresponding universal constant $c^{\prime} > 0$ such that
\begin{equation*}
\mathcal{P}\left(x : h_{w_{k}}(x) \neq h_{w_{k-1}}(x) \text{ and } |w_{k-1} \cdot x| \geq c^{\prime} \frac{r_{k}}{\sqrt{d}} \right) \leq c r_{k}
\end{equation*}
and
\begin{equation*}
\mathcal{P}\left(x : h_{w^{*}}(x) \neq h_{w_{k-1}}(x) \text{ and } |w_{k-1} \cdot x| \geq c^{\prime} \frac{r_{k}}{\sqrt{d}} \right) \leq c r_{k},
\end{equation*}
so that (by a union bound)
\begin{align*}
& \mathcal{P}\left(x : h_{w_{k}}(x) \neq h_{w^{*}}(x) \text{ and } |w_{k-1} \cdot x| \geq c^{\prime} \frac{r_{k}}{\sqrt{d}} \right)
\\ & \leq
\mathcal{P}\left(x : h_{w_{k}}(x) \neq h_{w_{k-1}}(x) \text{ and } |w_{k-1} \cdot x| \geq c^{\prime} \frac{r_{k}}{\sqrt{d}} \right)
\\ & +
\mathcal{P}\left(x : h_{w^{*}}(x) \neq h_{w_{k-1}}(x) \text{ and } |w_{k-1} \cdot x| \geq c^{\prime} \frac{r_{k}}{\sqrt{d}} \right)
\leq 2 c r_{k}.
\end{align*}
In particular, letting $c_{7} = c^{\prime} c_{10} / 2$, we have $c^{\prime} \frac{r_{k}}{\sqrt{d}} = b_{k-1}$.
Combining this with \eqref{eqn:conditional-error-bound}, Lemma~\ref{lem:uniform-P-concentration}, and a union bound, we have that
\begin{align*}
& \mathcal{P}\left( x : h_{w_{k}}(x) \neq h_{w^{*}}(x)\right)
\\ & \leq \mathcal{P}\left(x : h_{w_{k}}(x) \neq h_{w^{*}}(x) \text{ and } |w_{k-1} \cdot x| \geq b_{k-1} \right)
\\ & {\hskip 3mm}+ \mathcal{P}\left(x : h_{w_{k}}(x) \neq h_{w^{*}}(x) \text{ and } |w_{k-1} \cdot x| \leq b_{k-1} \right)
\\ & \leq 2 c r_{k} + \mathcal{P}\left( x : h_{w_{k}}(x) \neq h_{w^{*}}(x) \Big| |w_{k-1} \cdot x| \leq b_{k-1} \right) \mathcal{P}\left(x : |w_{k-1} \cdot x| \leq b_{k-1}\right)
\\ & \leq 2 c r_{k} + c_{15} \kappa c_{3} b_{k-1} \sqrt{d}
= \left( 2^{5} c c_{10} + c_{15} \kappa c_{3} c_{7} 2^{5} \right) 2^{-k-4}.
\end{align*}
Taking $c = \frac{1}{2^{6} c_{10}}$ and $\kappa = \frac{1}{2^{6} c_{3} c_{7} c_{15}}$,
we have $\mathcal{P}(x : h_{w_{k}}(x) \neq h_{w^{*}}(x)) \leq 2^{-k-4}$, as required.
By a union bound, this occurs with probability at least $1 - (4/3)\delta_{k}$.
\qed | 2,565 | 33,201 | en |
train | 0.2.12 | \end{proof} | 5 | 33,201 | en |
train | 0.2.13 | \begin{proof}[Proof of Theorem~\ref{thm:linsep-uniform}]
We begin with the bound on the error rate.
If $\Delta > \frac{\pi^{2}}{400 \cdot 2^{27} (d+\ln(4/\delta))}$, the result trivially holds, since then $1 \leq \frac{400 \cdot 2^{27}}{\pi^{2}} \sqrt{\Delta (d+\ln(4/\delta))}$.
Otherwise, suppose $\Delta \leq \frac{\pi^{2}}{400 \cdot 2^{27} (d+\ln(4/\delta))}$.
Fix any $i \in \mathbb{N}$.
Lemma~\ref{lem:perceptron-init} implies that, with probability at least $1-\delta/4$,
the $w_{0}$ returned in Step 0 of ${\rm ABL}(M(i-1),\tilde{h}_{i-1})$ satisfies
$\mathcal{P}(x : h_{w_{0}}(x) \neq h_{M(i-1) + m_{0}+1}^{*}(x)) \leq 1/16$.
Taking this as a base case, Lemma~\ref{lem:margin-error-bound} then inductively implies that,
with probability at least
\begin{multline*}
1 - \frac{\delta}{4} - \sum_{k=1}^{\lceil \log_{2}(1/\alpha) \rceil} (4/3) \frac{\delta}{2(\lceil \log_{2}(4/\alpha) \rceil - k)^{2}}
\geq 1 - \frac{\delta}{2} \left(1 + (4/3) \sum_{\ell=2}^{\infty} \frac{1}{\ell^{2}} \right)
\geq 1 - \delta,
\end{multline*}
every $k \in \{ 0, 1, \ldots, \lceil \log_{2}(1/\alpha) \rceil \}$ has
\begin{equation}
\label{eqn:abl-mistake-prob-raw}
\mathcal{P}(x : h_{w_{k}}(x) \neq h_{M(i-1)+m_{0}+1}^{*}(x)) \leq 2^{-k-4},
\end{equation}
and furthermore the number of labels requested during ${\rm ABL}(M(i-1),\tilde{h}_{i-1})$ total to at most (for appropriate universal constants $\hat{c}_{1},\hat{c}_{2}$)
\begin{align*}
m_{0} + \!\!\!\!\sum_{k=1}^{\lceil \log_{2}(1/\alpha) \rceil} |W_{k}|
& \leq \hat{c}_{1} \left(d + \ln\left(\frac{1}{\delta}\right) + \sum_{k=1}^{\lceil \log_{2}(1/\alpha) \rceil} d \log\left(\frac{( \lceil \log_{2}(4/\alpha) \rceil - k )^{2}}{\delta}\right) \right)
\\ & \leq \hat{c}_{2} d \log\left(\frac{1}{\mathcal Delta d}\right)\log\left(\frac{1}{\delta}\right).
\end{align*}
In particular, by a union bound, \eqref{eqn:abl-mistake-prob-raw} implies that for every $k \in \{1,\ldots,\lceil \log_{2}(1/\alpha) \rceil\}$,
every
\begin{equation*}
m \in \left\{ M(i-1) + \sum_{j=0}^{k-1} m_{j} + 1, \ldots, M(i-1) + \sum_{j=0}^{k} m_{j} \right\}
\end{equation*}
has
\begin{align*}
& \mathcal{P}(x : h_{w_{k-1}}(x) \neq h_{m}^{*}(x))
\\ & \leq \mathcal{P}(x : h_{w_{k-1}}(x) \neq h_{M(i-1)+m_{0}+1}^{*}(x)) + \mathcal{P}(x : h_{M(i-1)+m_{0}+1}^{*}(x) \neq h_{m}^{*}(x))
\\ & \leq 2^{-k-3} + \Delta M.
\end{align*}
Thus, noting that
\begin{align*}
M & = \sum_{k=0}^{\lceil \log_{2}(1/\alpha) \rceil} m_{k}
= \Theta\left( d + \log\left(\frac{1}{\delta}\right) + \sum_{k=1}^{\lceil \log_{2}(1/\alpha) \rceil} 2^{k} d \log\left(\frac{\lceil \log_{2}(1/\alpha) \rceil - k}{\delta}\right) \right)
\\ & = \Theta\left( \frac{1}{\alpha} d \log\left(\frac{1}{\delta}\right) \right)
= \Theta\left(\sqrt{\frac{d}{\Delta} \log\left(\frac{1}{\delta}\right)} \right),
\end{align*}
with probability at least $1-\delta$,
\begin{equation*}
\mathcal{P}(x : h_{w_{\lceil \log_{2}(1/\alpha) \rceil-1}}(x) \neq h^{*}_{M i}(x) ) \leq O\left( \alpha + \Delta M \right) = O\left( \sqrt{ \Delta d \log\left(\frac{1}{\delta}\right) } \right).
\end{equation*}
In particular, this implies that, with probability at least $1-\delta$, every $t \in \{M i + 1, \ldots, M (i+1)-1\}$ has
\begin{align*}
{\rm er}_{t}(\hat{h}_{t}) & \leq \mathcal{P}(x : h_{w_{\lceil \log_{2}(1/\alpha) \rceil-1}}(x) \neq h^{*}_{M i}(x) ) + \mathcal{P}( x : h^{*}_{M i}(x) \neq h^{*}_{t}(x) )
\\ & \leq O\left( \sqrt{ \Delta d \log\left(\frac{1}{\delta}\right) } \right) + \Delta M
= O\left( \sqrt{ \Delta d \log\left(\frac{1}{\delta}\right) } \right),
\end{align*}
which completes the proof of the bound on the error rate.
Setting $\delta = \sqrt{\Delta d}$, and noting that $\mathbbm{1}[ \hat{Y}_{t} \neq Y_{t} ] \leq 1$, we have that for any $t > M$,
\begin{equation*}
\mathbb P\left( \hat{Y}_{t} \neq Y_{t} \right)
= \mathbb E\left[ {\rm er}_{t}(\hat{h}_{t}) \right]
\leq O\left( \sqrt{ \Delta d \log\left(\frac{1}{\delta}\right) } \right) + \delta
= O\left( \sqrt{ \Delta d \log\left(\frac{1}{\Delta d}\right) } \right).
\end{equation*}
Thus, by linearity of the expectation,
\begin{equation*}
\mathbb E\left[ \sum_{t=1}^{T} \mathbbm{1}\left[ \hat{Y}_{t} \neq Y_{t} \right] \right]
\leq M + O\left( \sqrt{ \Delta d \log\left(\frac{1}{\Delta d}\right) } T \right)
= O\left( \sqrt{ \Delta d \log\left(\frac{1}{\Delta d}\right) } T \right).
\end{equation*}
Furthermore, as mentioned, with probability at least $1-\delta$,
the number of labels requested during the execution of ${\rm ABL}(M(i-1),\tilde{h}_{i-1})$ is at most
\begin{equation*}
O\left( d \log\left(\frac{1}{\Delta d}\right)\log\left(\frac{1}{\delta}\right) \right).
\end{equation*}
Thus, since the number of labels requested during the execution of ${\rm ABL}(M(i-1),\tilde{h}_{i-1})$ cannot exceed $M$,
letting $\delta = \sqrt{\Delta d}$, the expected number of requested labels during this execution is at most
\begin{align*}
O\left( d \log^{2}\left(\frac{1}{\Delta d}\right) \right) + \sqrt{\Delta d} M
& \leq O\left( d \log^{2}\left(\frac{1}{\Delta d}\right) \right) + O\left( d \sqrt{\log\left(\frac{1}{\Delta d}\right) } \right)
\\ & = O\left( d \log^{2}\left(\frac{1}{\Delta d}\right) \right).
\end{align*}
Thus, by linearity of the expectation, the expected number of labels requested among the first $T$ samples is at most
\begin{equation*}
O\left( d \log^{2}\left(\frac{1}{\Delta d}\right) \left\lceil \frac{T}{M} \right\rceil \right)
= O\left( \sqrt{\Delta d} \log^{3/2}\left(\frac{1}{\Delta d}\right) T \right),
\end{equation*}
which completes the proof.
\qed
\end{proof}
\paragraph{Remark:} The original work of \cite{min_concept} additionally allowed for some number $K$ of ``jumps'':
times $t$ at which $\Delta_{t} = 1$. Note that, in the above algorithm, since the influence of each sample is localized to the predictors trained
within that ``batch'' of $M$ instances, the effect of allowing such jumps would only change the bound on the number of
mistakes to $\tilde{O}\left(\sqrt{d \Delta} T + \sqrt{\frac{d}{\Delta}} K \right)$. This compares favorably to the
result of \cite{min_concept}, which is roughly $O\left( (d \Delta)^{1/4} T + \frac{d^{1/4}}{\Delta^{3/4}} K \right)$.
However, the result of \cite{min_concept} was proven for a more general setting, allowing distributions $\mathcal{P}$
that are not uniform (though they do require a relation between the angle between any two separators and the
probability mass of their disagreement region, similar to the one holding for the uniform distribution, so that the
distributions must approximately retain some properties of the uniform distribution). It is not clear whether Theorem~\ref{thm:linsep-uniform} can be
generalized to this larger family of distributions. | 2,626 | 33,201 | en |
train | 0.2.14 | \section{General Results for Active Learning}
\label{sec:general-active}
As mentioned, the above results on linear separators also provide results
for the number of queries in \emph{active learning}. One can also state
quite general results on the expected number of queries and mistakes
achievable by an active learning algorithm.
This section provides such results, for an algorithm based on
the well-known strategy of \emph{disagreement-based} active learning.
Throughout this section, we suppose $\{h^{*}_{t}\}_{t \in \mathbb{N}} \in S_{\Delta}$,
for a given $\Delta \in (0,1]$: that is, $\mathcal{P}( x : h^{*}_{t+1}(x) \neq h^{*}_{t}(x)) \leq \Delta$
for all $t \in \mathbb{N}$.
First, we introduce a few definitions.
For any set $\mathcal H \subseteq \mathbb C$, define the \emph{region of disagreement}
\begin{equation*}
\mathrm{DIS}(\mathcal H) = \{x \in \mathcal X : \exists h,g \in \mathcal H \text{ s.t. } h(x) \neq g(x) \}.
\end{equation*}
The analysis in this section is centered around the following algorithm.
The ${\rm Active}$ subroutine is from the work of \cite{hanneke:activized} (slightly modified here),
and is a variant of the $A^2$ (Agnostic Active) algorithm of \cite{A2};
the appropriate values of $M$ and $\hat{T}_{k}(\cdot)$ will be discussed below.
\begin{bigboxit}
Algorithm: ${\rm DriftingActive}$\\
0. For $i = 1,2,\ldots$\\
1. \quad ${\rm Active}(M (i-1))$\\
\end{bigboxit}
\begin{bigboxit}
Subroutine: ${\rm Active}(t)$\\
0. Let $\hat{h}_{0}$ be an arbitrary element of $\mathbb C$, and let $V_{0} \gets \mathbb C$\\
1. Predict $\hat{Y}_{t+1} = \hat{h}_{0}(X_{t+1})$ as the prediction for the value of $Y_{t+1}$\\
2. For $k = 0,1,\ldots,\log_{2}(M/2)$\\
3. \quad $Q_{k} \gets \{\}$\\
4. \quad For $s = 2^{k}+1,\ldots,2^{k+1}$\\
5. \qquad Predict $\hat{Y}_{s} = \hat{h}_{k}(X_{s})$ as the prediction for the value of $Y_{s}$\\
6. \qquad If $X_{s} \in \mathrm{DIS}(V_{k})$\\
7. \quad\qquad Request the label $Y_{s}$ and let $Q_{k} \gets Q_{k} \cup \{(X_{s},Y_{s})\}$\\
8. \quad Let $\hat{h}_{k+1} = \mathop{\rm argmin}_{h \in V_{k}} \sum_{(x,y) \in Q_{k}} \mathbbm{1}[h(x) \neq y]$\\
9. \quad Let $V_{k+1} \gets \{h \in V_{k} : \sum_{(x,y) \in Q_{k}} \mathbbm{1}[h(x) \neq y] - \mathbbm{1}[\hat{h}_{k+1}(x) \neq y] \leq \hat{T}_{k}\}$
\end{bigboxit}
As in the ${\rm DriftingHalfspaces}$ algorithm above, this ${\rm DriftingActive}$
algorithm proceeds in batches, and in each batch runs an active learning algorithm
designed to be robust to classification noise. This robustness to classification noise
translates into our setting as tolerance for the fact that there is no classifier in $\mathbb C$
that perfectly classifies all of the data. The specific algorithm employed here maintains
a set $V_{k} \subseteq \mathbb C$ of candidate classifiers, and requests the labels of samples $X_{s}$
for which there is some disagreement on the classification among classifiers in $V_{k}$.
We maintain the invariant that there is a low-error classifier contained in $V_{k}$ at all
times, and thus the points we query provide some information to help us determine
which among these remaining candidates has low error rate. Based on these queries,
we periodically (in Step 9) remove from $V_{k}$ those classifiers making a relatively excessive
number of mistakes on the queried samples, relative to the minimum among classifiers in $V_{k}$.
All predictions are made with an element of $V_{k}$.\footnote{One could alternatively proceed
as in ${\rm DriftingHalfspaces}$, using the final classifier from the previous batch, which
would also add a guarantee on the error rate achieved at all sufficiently large $t$.}
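
For intuition, the following sketch (ours, with names of our choosing) implements one inner round of ${\rm Active}$ (Steps 3 through 9) for a \emph{finite} version space standing in for the subsets of $\mathbb C$ the algorithm maintains:
\begin{verbatim}
def in_dis(V, x):
    # Membership test for the region of disagreement DIS(V).
    return len({h(x) for h in V}) > 1

def active_round(V, batch, label_oracle, T_hat):
    # One inner round (Steps 3-9): query only disagreement points, then
    # keep classifiers whose mistake count on the queried set is within
    # T_hat of the best.  V is a finite list standing in for the version
    # space; batch holds the instances X_s of this round.
    Q = [(x, label_oracle(x)) for x in batch if in_dis(V, x)]
    mistakes = [sum(h(x) != y for x, y in Q) for h in V]
    best = min(mistakes)
    V_next = [h for h, m in zip(V, mistakes) if m - best <= T_hat]
    return V_next, Q
\end{verbatim}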
We prove an abstract bound on the number of labels requested by this algorithm,
expressed in terms of the \emph{disagreement coefficient} \cite{hanneke:07b},
defined as follows. For any $r \geq 0$ and any classifier $h$, define ${\rm B}(h,r) = \{g \in \mathbb C : \mathcal{P}(x : g(x) \neq h(x)) \leq r\}$.
Then for $r_{0} \geq 0$ and any classifier $h$, define the disagreement coefficient of $h$ with respect to $\mathbb C$ under $\mathcal{P}$:
\begin{equation*}
\theta_{h}(r_{0}) = \sup_{r > r_{0}} \frac{ \mathcal{P}( \mathrm{DIS}( {\rm B}( h, r ) ) ) }{r}.
\end{equation*}
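For instance, for threshold classifiers on $\mathbb{R}$ (a standard illustration, included here only for intuition), every $g \in {\rm B}(h,r)$ disagrees with $h$ on an interval of probability mass at most $r$ adjacent to the threshold of $h$, so $\mathcal{P}(\mathrm{DIS}({\rm B}(h,r))) \leq 2r$, and hence $\theta_{h}(r_{0}) \leq 2$ for every $r_{0} \geq 0$.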
Usually, the disagreement coefficient would be used with $h$ equal the target concept;
however, since the target concept is not fixed in our setting,
we will make use of the worst-case value of the disagreement coefficient:
$\theta_{\mathbb C}(r_{0}) = \sup_{h \in \mathbb C} \theta_{h}(r_{0})$.
This quantity has been bounded for a variety of spaces $\mathbb C$ and distributions $\mathcal{P}$
(see e.g., \cite{hanneke:07b,el-yaniv:12,balcan:13}).
It is useful in bounding how quickly the region $\mathrm{DIS}(V_{k})$ collapses in the
algorithm. Thus, since the probability the algorithm requests the label of the next instance
is $\mathcal{P}(\mathrm{DIS}(V_{k}))$, the quantity $\theta_{\mathbb C}(r_{0})$ naturally arises in characterizing the
number of labels we expect this algorithm to request.
Specifically, we have the following result.\footnote{Here,
we define $\lceil x \rceil_{2} = 2^{\lceil \log_{2}(x) \rceil}$, for $x \geq 1$.}
\begin{theorem}
\label{thm:general-active}
For an appropriate universal constant $c_{1} \in [1,\infty)$,
if $\{h^{*}_{t}\}_{t \in \mathbb{N}} \in S_{\Delta}$ for some $\Delta \in (0,1]$,
then taking $M = \left\lceil c_{1} \sqrt{\frac{d}{\Delta}} \right\rceil_{2}$,
and $\hat{T}_{k} = \log_{2}(1/\sqrt{d \Delta}) + 2^{2k+2} e \Delta$,
and defining $\epsilon_{\Delta} = \sqrt{d\Delta}\, {\rm Log}(1/(d\Delta))$,
the above ${\rm DriftingActive}$ algorithm makes an expected number of mistakes among the
first $T$ instances that is
\begin{equation*}
O\left(\epsilon_{\Delta} {\rm Log}(d/\Delta) T \right) = \tilde{O}\left( \sqrt{d\Delta} \right) T
\end{equation*}
and requests an expected number of labels among the first $T$ instances that is
\begin{equation*}
O\left( \theta_{\mathbb C}( \epsilon_{\Delta} ) \epsilon_{\Delta} {\rm Log}(d/\Delta) T \right) = \tilde{O}\left( \theta_{\mathbb C}(\sqrt{d \Delta}) \sqrt{d \Delta} \right) T.
\end{equation*}
\end{theorem}
The proof of Theorem~\ref{thm:general-active} relies on an analysis of the behavior of the ${\rm Active}$ subroutine,
characterized in the following lemma.
\begin{lemma}
\label{lem:active-subroutine}
Fix any $t \in \mathbb{N}$, and consider the values obtained in the execution of ${\rm Active}(t)$.
Under the conditions of Theorem~\ref{thm:general-active},
there is a universal constant $c_{2} \in [1,\infty)$ such that,
for any $k \in \{0,1,\ldots,\log_{2}(M/2)\}$,
with probability at least $1-2\sqrt{d \Delta}$, if
$h^{*}_{t+1} \in V_{k}$,
then $h^{*}_{t+1} \in V_{k+1}$ and
$\sup_{h \in V_{k+1}} \mathcal{P}(x : h(x) \neq h^{*}_{t+1}(x)) \leq c_{2}
2^{-k} d {\rm Log}(c_{1} / \sqrt{d\Delta})$.
\end{lemma}
\begin{proof}
By a Chernoff bound, with probability at least $1-\sqrt{d \Delta}$,
\begin{equation*}
\sum_{s=2^{k}+1}^{2^{k+1}} \mathbbm{1}[h^{*}_{t+1}(X_{s}) \neq Y_{s}]
\leq \log_{2}(1/\sqrt{d \Delta}) + 2^{2k+2} e \Delta
= \hat{T}_{k}.
\end{equation*}
Therefore, if $h^{*}_{t+1} \in V_{k}$, then since every $g \in V_{k}$
agrees with $h^{*}_{t+1}$ on those points $X_{s} \notin \mathrm{DIS}(V_{k})$,
in the update in Step 9 defining $V_{k+1}$,
we have
\begin{align*}
& \sum_{(x,y) \in Q_{k}} \mathbbm{1}[h^{*}_{t+1}(x) \neq y] - \mathbbm{1}[\hat{h}_{k+1}(x) \neq y]
\\ & = \sum_{s=2^{k}+1}^{2^{k+1}} \mathbbm{1}[h^{*}_{t+1}(X_{s}) \neq Y_{s}]
- \min_{g \in V_{k}} \sum_{s=2^{k}+1}^{2^{k+1}} \mathbbm{1}[g(X_{s}) \neq Y_{s}]
\\ & \leq \sum_{s=2^{k}+1}^{2^{k+1}} \mathbbm{1}[h^{*}_{t+1}(X_{s}) \neq Y_{s}] \leq \hat{T}_{k},
\end{align*}
so that $h^{*}_{t+1} \in V_{k+1}$ as well.
Furthermore, if $h^{*}_{t+1} \in V_{k}$,
then by the definition of $V_{k+1}$,
we know every $h \in V_{k+1}$ has
\begin{equation*}
\sum_{s=2^{k}+1}^{2^{k+1}} \mathbbm{1}[ h(X_{s}) \neq Y_{s} ]
\leq \hat{T}_{k} + \sum_{s=2^{k}+1}^{2^{k+1}} \mathbbm{1}[ h^{*}_{t+1}(X_{s}) \neq Y_{s} ],
\end{equation*}
so that a triangle inequality implies
\begin{align*}
\sum_{s=2^{k}+1}^{2^{k+1}} \mathbbm{1}[ h(X_{s}) \neq h^{*}_{t+1}(X_{s}) ]
& \leq
\sum_{s=2^{k}+1}^{2^{k+1}} \mathbbm{1}[ h(X_{s}) \neq Y_{s} ]
+ \mathbbm{1}[ h^{*}_{t+1}(X_{s}) \neq Y_{s} ]
\\ & \leq
\hat{T}_{k} + 2 \sum_{s=2^{k}+1}^{2^{k+1}} \mathbbm{1}[ h^{*}_{t+1}(X_{s}) \neq Y_{s} ]
\leq 3 \hat{T}_{k}.
\end{align*}
Lemma~\ref{lem:vc-ratio} then implies that, on an additional event of
probability at least $1-\sqrt{d \Delta}$,
every $h \in V_{k+1}$ has
\begin{align*}
& \mathcal{P}(x : h(x) \neq h^{*}_{t+1}(x))
\\ & \leq 2^{-k} 3\hat{T}_{k} + c 2^{-k} \sqrt{3\hat{T}_{k} (d {\rm Log}(2^{k}/d)+{\rm Log}(1/\sqrt{d\Delta}))}
\\ & \phantom{\leq } + c 2^{-k} (d {\rm Log}(2^{k}/d) + {\rm Log}(1/\sqrt{d\Delta}))
\\ & \leq
2^{-k} 3 \log_{2}(1/\sqrt{d\Delta})
+ 2^{k} 12 e \Delta
+ c 2^{-k} \sqrt{ 6 \log_{2}(1/\sqrt{d\Delta}) d {\rm Log}(c_{1} / \sqrt{d\Delta})}
\\ & \phantom{\leq } + c 2^{-k} \sqrt{ 2^{2k} 24 e \Delta d {\rm Log}(c_{1} / \sqrt{d\Delta}) }
+ 2 c 2^{-k} d {\rm Log}(c_{1} / \sqrt{d\Delta})
\\ &
\leq
2^{-k} 3 \log_{2}(1/\sqrt{d\Delta})
+ 12 e c_{1} \sqrt{d\Delta}
+ 3 c 2^{-k} \sqrt{ d } {\rm Log}(c_{1} / \sqrt{d\Delta})
\\ & \phantom{\leq } + \sqrt{24 e} c \sqrt{d \Delta {\rm Log}(c_{1} / \sqrt{d\Delta}) }
+ 2 c 2^{-k} d {\rm Log}(c_{1} / \sqrt{d\Delta}),
\end{align*}
where $c$ is as in Lemma~\ref{lem:vc-ratio}.
Since $\sqrt{d \Delta} \leq 2 c_{1} d / M \leq c_{1} d 2^{-k}$,
this is at most
\begin{equation*}
\left(5 + 12 e c_{1}^{2} + 3 c + \sqrt{24 e} c c_{1} + 2 c\right)
2^{-k} d {\rm Log}(c_{1} / \sqrt{d\Delta}).
\end{equation*}
Letting $c_{2} = 5 + 12 e c_{1}^{2} + 3 c + \sqrt{24 e} c c_{1} + 2 c$,
we have the result by a union bound.
\qed
\end{proof}
We are now ready for the proof of Theorem~\ref{thm:general-active}.
\begin{proof}[Proof of Theorem~\ref{thm:general-active}]
Fix any $i \in \mathbb{N}$, and consider running ${\rm Active}(M(i-1))$.
Since $h^{*}_{M(i-1)+1} \in \mathbb C$,
by Lemma~\ref{lem:active-subroutine}, a union bound, and induction,
with probability at least $1-2\sqrt{d\Delta} \log_{2}(M/2)
\geq 1 - 2 \sqrt{d\Delta} \log_{2}(c_{1}\sqrt{d/\Delta})$,
every $k \in \{0,1,\ldots,\log_{2}(M/2)\}$ has
\begin{equation}
\label{eqn:general-active-radius}
\sup_{h \in V_{k}} \mathcal{P}(x : h(x) \neq h^{*}_{M(i-1)+1}(x)) \leq
c_{2} 2^{1-k} d {\rm Log}(c_{1} / \sqrt{d\Delta}).
\end{equation}
Thus, since $\hat{h}_{k} \in V_{k}$ for each $k$,
the expected number of mistakes among the predictions
$\hat{Y}_{M(i-1)+1},\ldots,\hat{Y}_{M i}$
is
\begin{align*}
& 1 + \sum_{k=0}^{\log_{2}(M/2)} \sum_{s=2^{k}+1}^{2^{k+1}} \mathbb P(\hat{h}_{k}(X_{M(i-1)+s}) \neq Y_{M(i-1)+s})
\\ & \leq 1 + \sum_{k=0}^{\log_{2}(M/2)} \sum_{s=2^{k}+1}^{2^{k+1}}
\mathbb P(h^{*}_{M(i-1)+1}(X_{M(i-1)+s}) \neq Y_{M(i-1)+s})
\\ & \phantom{\leq } + \sum_{k=0}^{\log_{2}(M/2)} \sum_{s=2^{k}+1}^{2^{k+1}} \mathbb P(\hat{h}_{k}(X_{M(i-1)+s}) \neq h^{*}_{M(i-1)+1}(X_{M(i-1)+s}))
\\ & \leq
1 + \Delta M^{2} +
\sum_{k=0}^{\log_{2}(M/2)} 2^{k} \left( c_{2} 2^{1-k} d {\rm Log}(c_{1} / \sqrt{d\Delta}) + 2\sqrt{d\Delta}\log_{2}(M/2)\right)
\\ & \leq
1 + 4 c_{1}^{2} d + 2 c_{2} d {\rm Log}(c_{1} / \sqrt{d\Delta}) \log_{2}(2 c_{1} \sqrt{d/\Delta})
+ 4c_{1} d \log_{2}(c_{1} \sqrt{d/\Delta})
\\ & =
O\left( d {\rm Log}(d/\Delta) {\rm Log}(1/(d\Delta)) \right).
\end{align*}
Furthermore, \eqref{eqn:general-active-radius} implies the algorithm only
requests the label $Y_{M(i-1)+s}$ for $s \in \{2^{k}+1,\ldots,2^{k+1}\}$
if $X_{M(i-1)+s} \in {\rm DIS}({\rm B}(h^{*}_{M(i-1)+1}, c_{2} 2^{1-k} d {\rm Log}(c_{1} / \sqrt{d\Delta})))$,
so that the expected number of labels requested among $Y_{M(i-1)+1},\ldots,Y_{M i}$ is at most
\begin{align*}
& 1 + \sum_{k=0}^{\log_{2}(M/2)} 2^{k} \left(\mathbb E[ \mathcal{P}({\rm DIS}({\rm B}(h^{*}_{M(i-1)+1}, c_{2} 2^{1-k} d {\rm Log}(c_{1}/\sqrt{d\Delta}))))] \right.
\\ & {\hskip 6cm}\left.+ 2 \sqrt{d\Delta} \log_{2}(c_{1}\sqrt{d/\Delta})\right)
\\ & \leq
1 + \theta_{\mathbb C}\left(4 c_{2} d {\rm Log}(c_{1}/\sqrt{d\Delta}) / M\right) 2 c_{2} d {\rm Log}(c_{1}/\sqrt{d\Delta}) \log_{2}(2 c_{1} \sqrt{d/\Delta})
\\ & {\hskip 6cm}+ 4 c_{1} d \log_{2}(c_{1}\sqrt{d/\Delta})
\\ & =
O\left( \theta_{\mathbb C}\left( \sqrt{d\Delta} {\rm Log}(1/(d\Delta)) \right) d {\rm Log}(d/\Delta) {\rm Log}(1/(d\Delta)) \right).
\end{align*}
Thus, the expected number of mistakes among indices $1,\ldots,T$ is at most
\begin{equation*}
O\left( d {\rm Log}(d/\Delta) {\rm Log}(1/(d\Delta)) \left\lceil \frac{T}{M} \right\rceil \right)
= O\left( \sqrt{d\Delta} {\rm Log}(d/\Delta) {\rm Log}(1/(d\Delta)) T \right),
\end{equation*}
and the expected number of labels requested among indices $1,\ldots,T$ is at most
\begin{multline*}
O\left( \theta_{\mathbb C}\left( \sqrt{d\Delta} {\rm Log}(1/(d\Delta)) \right) d {\rm Log}(d/\Delta) {\rm Log}(1/(d\Delta)) \left\lceil \frac{T}{M} \right\rceil \right)
\\ = O\left( \theta_{\mathbb C}\left( \sqrt{d\Delta} {\rm Log}(1/(d\Delta)) \right) \sqrt{d\Delta} {\rm Log}(d/\Delta) {\rm Log}(1/(d\Delta)) T \right).
\end{multline*}
\qed
\end{proof}
\end{document}
\begin{document}
\date{}
\title{ON THE UNIVERSALITY OF SOME SMARANDACHE LOOPS OF BOL-MOUFANG TYPE
\footnote{2000 Mathematics Subject Classification. Primary 20N05;
Secondary 08A05.}
\thanks{{\bf Keywords and Phrases :} Smarandache quasigroups, Smarandache loops, universality, $f,g$-principal isotopes}}
\author{T\`em\'it\'op\'e Gb\'ol\'ah\`an Ja\'iy\'e\d ol\'a\thanks{On Doctorate Programme at
the University of Agriculture Abeokuta, Nigeria.}
\thanks{All correspondence to be addressed to this author}\\
Department of Mathematics,\\
Obafemi Awolowo University, Ile Ife, Nigeria.\\
[email protected], [email protected]} \maketitle
\begin{abstract}
A Smarandache quasigroup(loop) is shown to be universal if all its
$f,g$-principal isotopes are Smarandache $f,g$-principal isotopes.
Also, weak Smarandache loops of Bol-Moufang type such as
Smarandache: left(right) Bol, Moufang and extra loops are shown to
be universal if all their $f,g$-principal isotopes are Smarandache
$f,g$-principal isotopes. Conversely, it is shown that if these weak
Smarandache loops of Bol-Moufang type are universal, then some
autotopisms are true in the weak Smarandache sub-loops of the weak
Smarandache loops of Bol-Moufang type relative to some Smarandache
elements. Furthermore, a Smarandache left(right) inverse property
loop in which all its $f,g$-principal isotopes are Smarandache
$f,g$-principal isotopes is shown to be universal if and only if it
is a Smarandache left(right) Bol loop in which all its
$f,g$-principal isotopes are Smarandache $f,g$-principal isotopes.
Also, it is established that a Smarandache inverse property loop in
which all its $f,g$-principal isotopes are Smarandache
$f,g$-principal isotopes is universal if and only if it is a
Smarandache Moufang loop in which all its $f,g$-principal isotopes
are Smarandache $f,g$-principal isotopes. Hence, some of the
autotopisms earlier mentioned are found to be true in the
Smarandache sub-loops of universal Smarandache: left(right) inverse
property loops and inverse property loops.
\end{abstract}
\section{Introduction}
W. B. Vasantha Kandasamy initiated the study of Smarandache loops
(S-loop) in 2002. In her book \cite{phd75}, she defined a
Smarandache loop (S-loop) as a loop with at least a subloop which
forms a subgroup under the binary operation of the loop called a
Smarandache subloop (S-subloop). In \cite{sma2}, the present author
defined a Smarandache quasigroup (S-quasigroup) to be a quasigroup
with at least a non-trivial associative subquasigroup called a
Smarandache subquasigroup (S-subquasigroup). Examples of Smarandache
quasigroups are given in Muktibodh \cite{muk}. For more on
quasigroups, loops and their properties, readers should check
\cite{phd3}, \cite{phd41},\cite{phd39}, \cite{phd49}, \cite{phd42}
and \cite{phd75}. In her first paper \cite{phd83}, W. B. Vasantha
Kandasamy introduced Smarandache: left(right) alternative
loops, Bol loops, Moufang loops, and Bruck loops. But in
\cite{sma1}, the present author introduced Smarandache: inverse
property loops (IPL), weak inverse property loops (WIPL), G-loops,
conjugacy closed loops (CC-loop), central loops, extra loops,
A-loops, K-loops, Bruck loops, Kikkawa loops, Burn loops and
homogeneous loops. The isotopic invariance of types and varieties of
quasigroups and loops described by one or more equivalent
identities, especially those that fall in the class of Bol-Moufang
type loops as first named by Fenyves \cite{phd56} and \cite{phd50}
in the 1960s and later on in this $21^{st}$ century by Phillips and
Vojt\v echovsk\'y \cite{phd9}, \cite{phd61} and \cite{phd124}, has
been of interest to researchers in loop theory in the recent past.
For example, loops such as Bol loops, Moufang loops, central loops
and extra loops are the most popular loops of Bol-Moufang type whose
isotopic invariance have been considered. Their identities relative
to quasigroups and loops have also been investigated by Kunen
\cite{ken1} and \cite{ken2}. A loop is said to be universal relative
to a property ${\cal P}$ if it is isotopic invariant relative to
${\cal P}$, hence such a loop is called a universal ${\cal P}$ loop.
This terminology is used extensively in \cite{phd88}. The universality of most
loops of Bol-Moufang types have been studied as summarised in
\cite{phd3}. Left(Right) Bol loops, Moufang loops, and extra loops
have all been found to be isotopic invariant. But some types of
central loops were shown to be universal in Ja\'iy\'e\d ol\'a
\cite{tope} and \cite{phdtope} under some conditions. Some other
types of loops such as A-loops, weak inverse property loops and
cross inverse property loops (CIPL) have been found to be universal
under some necessary and sufficient conditions in \cite{phd40},
\cite{phd43} and \cite{phd30} respectively. Recently, Michael Kinyon
et al. \cite{phd95}, \cite{phd118}, \cite{phd119} solved the
Belousov problem concerning the universality of F-quasigroups, which
had been open since 1967, by showing that all the isotopes of
F-quasigroups are Moufang loops.
In this work, the universality of the Smarandache concept in loops
is investigated. That is, will all isotopes of an S-loop be an
S-loop? The answer to this could be 'yes' since every isotope of a
group is a group (groups are G-loops). Also, the universality of
weak Smarandache loops, such as Smarandache Bol loops (SBL),
Smarandache Moufang loops (SML) and Smarandache extra loops (SEL)
will also be investigated despite the fact that it could be expected
to be true since Bol loops, Moufang loops and extra loops are
universal. The universality of a Smarandache inverse property loop
(SIPL) will also be considered.
\section{Preliminaries}
\begin{mydef}
A loop is called a Smarandache left inverse property loop (SLIPL) if
it has at least a non-trivial subloop with the LIP.
A loop is called a Smarandache right inverse property loop (SRIPL)
if it has at least a non-trivial subloop with the RIP.
A loop is called a Smarandache inverse property loop (SIPL) if it
has at least a non-trivial subloop with the IP.
A loop is called a Smarandache right Bol-loop (SRBL) if it has at
least a non-trivial subloop that is a right Bol(RB)-loop.
A loop is called a Smarandache left Bol-loop (SLBL) if it has at
least a non-trivial subloop that is a left Bol(LB)-loop.
A loop is called a Smarandache central-loop (SCL) if it has at least
a non-trivial subloop that is a central-loop.
A loop is called a Smarandache extra-loop (SEL) if it has at least a
non-trivial subloop that is an extra-loop.
A loop is called a Smarandache A-loop (SAL) if it has at least a
non-trivial subloop that is an A-loop.
A loop is called a Smarandache Moufang-loop (SML) if it has at least
a non-trivial subloop that is a Moufang-loop.
\end{mydef}
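For a ready source of examples of all of the above notions, note that for any loop $(L,\cdot )$ and any nontrivial group $(K,\ast )$, the direct product $L\times K$ is a loop containing the non-trivial subloop $\{e_{L}\}\times K$, which is a group. Since every group has the LIP, RIP and IP and satisfies the left Bol, right Bol, central, extra, Moufang and A-loop conditions, $L\times K$ is a Smarandache loop of each of the types just defined.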
\begin{mydef}
Let $(G,\oplus)$ and $(H,\otimes)$ be two distinct quasigroups. The
triple $(A,B,C)$ such that $A,B,C~:~(G,\oplus)\rightarrow
(H,\otimes)$ are bijections is said to be an isotopism if and only
if
\begin{displaymath}
xA\otimes yB=(x\oplus y)C~\forall~x,y\in G.
\end{displaymath}
Thus, $H$ is called an isotope of $G$ and they are said to be
isotopic. If $C=I$, then the triple is called a principal isotopism
and $(H,\otimes)=(G,\otimes )$ is called a principal isotope of
$(G,\oplus )$. If in addition, $A=R_g$, $B=L_f$, then the triple is
called an $f,g$-principal isotopism; thus $(G,\otimes )$ is referred
to as the $f,g$-principal isotope of $(G,\oplus )$.
A subloop(subquasigroup) $(S,\otimes )$ of a loop(quasigroup)
$(G,\otimes )$ is called a Smarandache $f,g$-principal isotope of
the subloop(subquasigroup) $(S,\oplus )$ of a loop(quasigroup)
$(G,\oplus )$ if for some $f,g\in S$,
\begin{displaymath}
xR_g\otimes yL_f=(x\oplus y)~\forall~x,y\in S.
\end{displaymath}
On the other hand $(G,\otimes )$ is called a Smarandache
$f,g$-principal isotope of $(G,\oplus )$ if for some $f,g\in S$,
\begin{displaymath}
xR_g\otimes yL_f=(x\oplus y)~\forall~x,y\in G
\end{displaymath}
where $(S,\oplus )$ is a S-subquasigroup(S-subloop) of $(G,\oplus
)$. In these cases, $f$ and $g$ are called Smarandache
elements(S-elements).
\end{mydef}
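For a concrete instance of these definitions, take $(G,\oplus )=(\mathbb{Z}_{n},+)$, the integers modulo $n$, and any $f,g\in \mathbb{Z}_{n}$. Then $xR_{g}=x+g$ and $yL_{f}=f+y$, so the $f,g$-principal isotope $(G,\circ )$ is given by
\begin{displaymath}
x\circ y=xR_{g}^{-1}\oplus yL_{f}^{-1}=(x-g)+(y-f)=x+y-(f+g)\pmod{n},
\end{displaymath}
a loop with two-sided identity element $f+g$. Indeed $x\mapsto x-(f+g)$ is an isomorphism of $(\mathbb{Z}_{n},\circ )$ onto $(\mathbb{Z}_{n},+)$, illustrating the remark in the introduction that every isotope of a group is a group.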
\begin{myth}\label{1:1}(\cite{phd41})
Let $(G,\oplus)$ and $(H,\otimes)$ be two distinct isotopic
loops(quasigroups). There exists an $f,g$-principal isotope
$(G,\circ )$ of $(G,\oplus)$ such that $(H,\otimes)\cong (G,\circ
)$.
\end{myth}
\begin{mycor}\label{1:2}
Let ${\cal P}$ be an isotopic invariant property in
loops(quasigroups). If $(G,\oplus)$ is a loop(quasigroup) with the
property ${\cal P}$, then $(G,\oplus)$ is a universal
loop(quasigroup) relative to the property ${\cal P}$ if and only if
every $f,g$-principal isotope $(G,\circ )$ of $(G,\oplus)$ has the
property ${\cal P}$.
\end{mycor}
{\bf Proof}\\
If $(G,\oplus)$ is a universal loop relative to the property ${\cal
P}$ then every distinct loop isotope $(H,\otimes)$ of $(G,\oplus)$
has the property ${\cal P}$. By Theorem~\ref{1:1}, there exists an
$f,g$-principal isotope $(G,\circ )$ of $(G,\oplus)$ such that
$(H,\otimes)\cong (G,\circ )$. Hence, since ${\cal P}$ is an
isomorphic invariant property, every $(G,\circ )$ has it.\\
Conversely, if every $f,g$-principal isotope $(G,\circ )$ of
$(G,\oplus)$ has the property ${\cal P}$ and since by
Theorem~\ref{1:1} for each distinct isotope $(H,\otimes)$ there
exists an $f,g$-principal isotope $(G,\circ )$ of $(G,\oplus)$ such
that $(H,\otimes)\cong (G,\circ )$, then all $(H,\otimes)$ has the
property, Thus, $(G,\oplus)$ is a universal loop relative to the
property ${\cal P}$.
\begin{mylem}\label{1:3}
Let $(G,\oplus)$ be a loop(quasigroup) with a subloop(subquasigroup)
$(S,\oplus )$. If $(G,\circ )$ is an arbitrary $f,g$-principal
isotope of $(G,\oplus)$, then $(S,\circ )$ is a
subloop(subquasigroup) of $(G,\circ)$ if $(S,\circ )$ is a
Smarandache $f,g$-principal isotope of $(S,\oplus )$.
\end{mylem}
{\bf Proof}\\
If $(S,\circ )$ is a Smarandache $f,g$-principal isotope of
$(S,\oplus )$, then for some $f,g\in S$,
\begin{displaymath}
xR_g\circ yL_f=(x\oplus y)~\forall~x,y\in S\Rightarrow x\circ
y=xR_g^{-1}\oplus yL_f^{-1}\in S~\forall~x,y\in S
\end{displaymath}
since $f,g\in S$. So, $(S,\circ )$ is a subgroupoid of $(G,\circ )$.
That $(S,\circ )$ is a subquasigroup follows from the fact that
$(S,\oplus )$ is a subquasigroup. $f\oplus g$ is a two-sided
identity element in $(S,\circ )$. Thus, $(S,\circ )$ is a subloop of
$(G,\circ )$.
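The identity claim can be verified directly: since $(f\oplus g)R_{g}^{-1}=f$ and $(f\oplus g)L_{f}^{-1}=g$ by the definition of the translation maps,
\begin{displaymath}
(f\oplus g)\circ y=f\oplus yL_{f}^{-1}=yL_{f}^{-1}L_{f}=y
\quad\textrm{and}\quad
x\circ (f\oplus g)=xR_{g}^{-1}\oplus g=xR_{g}^{-1}R_{g}=x.
\end{displaymath}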
\section{Main Results}
\subsection*{Universality of Smarandache Loops}
\begin{myth}\label{1:4}
A Smarandache quasigroup is universal if all its $f,g$-principal
isotopes are Smarandache $f,g$-principal isotopes.
\end{myth}
{\bf Proof}\\
Let $(G,\oplus)$ be a Smarandache quasigroup with a S-subquasigroup
$(S,\oplus )$. If $(G,\circ )$ is an arbitrary $f,g$-principal
isotope of $(G,\oplus)$, then by Lemma~\ref{1:3}, $(S,\circ )$ is a
subquasigroup of $(G,\circ)$ if $(S,\circ )$ is a Smarandache
$f,g$-principal isotope of $(S,\oplus )$. Let us choose all
$(S,\circ )$ in this manner. So,
\begin{displaymath}
x\circ y=xR_g^{-1}\oplus yL_f^{-1}~\forall~x,y\in S.
\end{displaymath} It shall now be shown that
\begin{displaymath}(x\circ y)\circ z=x\circ (y\circ z)~\forall~x,y,z\in
S.
\end{displaymath}
In the computations below, a juxtaposed product such as $xy$ in the
quasigroup $(G,\oplus )$ takes precedence over $x\oplus y~\forall~x,y\in G$.
\begin{displaymath}
(x\circ y)\circ z=(xR_g^{-1}\oplus yL_f^{-1})\circ z=(xg^{-1}\oplus
f^{-1}y)\circ z=(xg^{-1}\oplus f^{-1}y)R_g^{-1}\oplus
zL_f^{-1}
\end{displaymath}
\begin{displaymath}
=(xg^{-1}\oplus f^{-1}y)g^{-1}\oplus f^{-1}z=xg^{-1}\oplus
f^{-1}yg^{-1}\oplus f^{-1}z.
\end{displaymath}
\begin{displaymath}
x\circ (y\circ z)=x\circ (yR_g^{-1}\oplus zL_f^{-1})=x\circ
(yg^{-1}\oplus f^{-1}z)=xR_g^{-1}\oplus (yg^{-1}\oplus
f^{-1}z)L_f^{-1}
\end{displaymath}
\begin{displaymath}
=xg^{-1}\oplus f^{-1}(yg^{-1}\oplus
f^{-1}z)=xg^{-1}\oplus f^{-1}yg^{-1}\oplus f^{-1}z.
\end{displaymath}
The two final expressions coincide because the S-subquasigroup
$(S,\oplus )$ is associative.
Thus, $(S,\circ )$ is an S-subquasigroup of $(G,\circ )$; hence
$(G,\circ )$ is an S-quasigroup. By Theorem~\ref{1:1}, for any
isotope $(H,\otimes )$ of $(G,\oplus)$, there exists a $(G,\circ )$
such that $(H,\otimes )\cong (G,\circ )$. So we can now choose the
isomorphic image of $(S,\circ)$ which will now be an S-subquasigroup
in $(H,\otimes )$. So, $(H,\otimes )$ is an S-quasigroup. This
conclusion can also be drawn straight from Corollary~\ref{1:2}.
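In the $(\mathbb{Z}_{n},+)$ example given after the definition of Smarandache principal isotopes, this computation can be checked at a glance: there $x\circ y=x+y-(f+g)$, and both $(x\circ y)\circ z$ and $x\circ (y\circ z)$ reduce to $x+y+z-2(f+g)$, in agreement with the common value obtained above.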
\begin{myth}\label{1:5}
A Smarandache loop is universal if all its $f,g$-principal isotopes
are Smarandache $f,g$-principal isotopes. Conversely, if a
Smarandache loop is universal then
\begin{displaymath}
(I,L_fR_g^{-1}R_{f^\rho}L_f^{-1},R_g^{-1}R_{f^\rho})
\end{displaymath} is an autotopism of an S-subloop of the S-loop such that $f$ and $g$ are S-elements.
\end{myth}
{\bf Proof}\\
Every loop is a quasigroup. Hence, the first claim follows from
Theorem~\ref{1:4}. The proof of the converse is as follows. If a
Smarandache loop $(G,\oplus )$ is universal then every isotope
$(H,\otimes)$ is an S-loop i.e there exists an S-subloop $(S,\otimes
)$ in $(H,\otimes )$. Let $(G,\circ )$ be the $f,g$-principal
isotope of $(G,\oplus)$, then by Corollary~\ref{1:2}, $(G,\circ)$ is
an S-loop with say an S-subloop $(S,\circ)$. So,
\begin{displaymath}
(x\circ y)\circ z=x\circ (y\circ z)~\forall~x,y,z\in S
\end{displaymath}
where \begin{displaymath} x\circ y=xR_g^{-1}\oplus
yL_f^{-1}~\forall~x,y\in S.
\end{displaymath}
\begin{displaymath}
(xR_g^{-1}\oplus yL_f^{-1})R_g^{-1}\oplus zL_f^{-1}=xR_g^{-1}\oplus
(yR_g^{-1}\oplus zL_f^{-1})L_f^{-1}. \end{displaymath} Replacing
$xR_g^{-1}$ by $x'$, $yL_f^{-1}$ by $y'$ and taking $z=e$ in
$(S,\oplus)$ we have \begin{displaymath} (x'\oplus
y')R_g^{-1}R_{f^\rho}=x'\oplus
y'L_fR_g^{-1}R_{f^\rho}L_f^{-1}\Rightarrow
(I,L_fR_g^{-1}R_{f^\rho}L_f^{-1},R_g^{-1}R_{f^\rho}) \end{displaymath} is an
autotopism of an S-subloop $(S,\oplus )$ of the S-loop $(G,\oplus )$
such that $f$ and $g$ are S-elements.
\subsection*{Universality of Smarandache Bol, Moufang and Extra Loops}
\begin{myth}\label{1:6}
A Smarandache right(left) Bol loop is universal if all its
$f,g$-principal isotopes are Smarandache $f,g$-principal isotopes.
Conversely, if a Smarandache right(left) Bol loop is universal then
\begin{displaymath}
{\cal
T}_1=(R_gR_{f^\rho}^{-1},L_{g^\lambda}R_g^{-1}R_{f^\rho}L_f^{-1},R_g^{-1}R_{f^\rho})\bigg({\cal
T}_2=(R_{f^\rho}L_f^{-1}L_{g^\lambda}R_g^{-1},L_fL_{g^\lambda}^{-1},L_f^{-1}L_{g^\lambda})\bigg)
\end{displaymath}
is an autotopism of an SRB(SLB)-subloop of the SRBL(SLBL) such that
$f$ and $g$ are S-elements.
\end{myth}
{\bf Proof}\\
Let $(G,\oplus)$ be a SRBL(SLBL) with a S-RB(LB)-subloop $(S,\oplus
)$. If $(G,\circ )$ is an arbitrary $f,g$-principal isotope of
$(G,\oplus)$, then by Lemma~\ref{1:3}, $(S,\circ )$ is a subloop of
$(G,\circ)$ if $(S,\circ )$ is a Smarandache $f,g$-principal isotope
of $(S,\oplus )$. Let us choose all $(S,\circ )$ in this manner. So,
\begin{displaymath}
x\circ y=xR_g^{-1}\oplus yL_f^{-1}~\forall~x,y\in S.
\end{displaymath}
It is already known from \cite{phd3} that RB(LB) loops are
universal, hence $(S,\circ )$ is a RB(LB) loop thus an
S-RB(LB)-subloop of $(G,\circ)$. By Theorem~\ref{1:1}, for any
isotope $(H,\otimes )$ of $(G,\oplus)$, there exists a $(G,\circ )$
such that $(H,\otimes )\cong (G,\circ )$. So we can now choose the
isomorphic image of $(S,\circ)$ which will now be an
S-RB(LB)-subloop in $(H,\otimes )$. So, $(H,\otimes )$ is an
SRBL(SLBL). This conclusion can also be drawn straight from
Corollary~\ref{1:2}.
The proof of the converse is as follows. If a SRBL(SLBL) $(G,\oplus
)$ is universal then every isotope $(H,\otimes)$ is an SRBL(SLBL)
i.e there exists an S-RB(LB)-subloop $(S,\otimes )$ in $(H,\otimes
)$. Let $(G,\circ )$ be the $f,g$-principal isotope of $(G,\oplus)$,
then by Corollary~\ref{1:2}, $(G,\circ)$ is an SRBL(SLBL) with say
an SRB(SLB)-subloop $(S,\circ)$. So for an SRB-subloop $(S,\circ)$,
\begin{displaymath}
[(y\circ x)\circ z]\circ x=y\circ [(x\circ z)\circ
x]~\forall~x,y,z\in S\end{displaymath} where
\begin{displaymath}
x\circ y=xR_g^{-1}\oplus yL_f^{-1}~\forall~x,y\in S.
\end{displaymath}
Thus,
\begin{displaymath}
[(yR_g^{-1}\oplus xL_f^{-1})R_g^{-1}\oplus zL_f^{-1}]R_g^{-1}\oplus
xL_f^{-1}=yR_g^{-1}\oplus [(xR_g^{-1}\oplus zL_f^{-1})R_g^{-1}\oplus
xL_f^{-1}]L_f^{-1}. \end{displaymath} Replacing $yR_g^{-1}$ by $y'$,
$zL_f^{-1}$ by $z'$ and taking $x=e$ in $(S,\oplus)$ we have
\begin{displaymath}
(y'R_{f^\rho}R_g^{-1}\oplus z')R_g^{-1}R_{f^\rho}=y'\oplus
z'L_{g^\lambda}R_g^{-1}R_{f^\rho}L_f^{-1}. \end{displaymath} Again,
replace $y'R_{f^\rho}R_g^{-1}$ by $y''$ so that
\begin{displaymath}
(y''\oplus z')R_g^{-1}R_{f^\rho}=y''R_gR_{f^\rho}^{-1}\oplus
z'L_{g^\lambda}R_g^{-1}R_{f^\rho}L_f^{-1}\Rightarrow
(R_gR_{f^\rho}^{-1},L_{g^\lambda}R_g^{-1}R_{f^\rho}L_f^{-1},R_g^{-1}R_{f^\rho})
\end{displaymath}
is an autotopism of an SRB-subloop $(S,\oplus )$ of the S-loop $(G,\oplus )$ such that $f$ and $g$ are S-elements.\\
On the other hand, for an SLB-subloop $(S,\circ)$,
\begin{displaymath}
[x\circ (y\circ x)]\circ z=x\circ [y\circ (x\circ
z)]~\forall~x,y,z\in S\end{displaymath} where
\begin{displaymath}
x\circ y=xR_g^{-1}\oplus yL_f^{-1}~\forall~x,y\in S.
\end{displaymath}
Thus,
\begin{displaymath}
[xR_g^{-1}\oplus (yR_g^{-1}\oplus xL_f^{-1})L_f^{-1}]R_g^{-1}\oplus
zL_f^{-1}=xR_g^{-1}\oplus [yR_g^{-1}\oplus (xR_g^{-1}\oplus
zL_f^{-1})L_f^{-1}]L_f^{-1}.
\end{displaymath} Replacing $yR_g^{-1}$ by $y'$, $zL_f^{-1}$ by $z'$
and taking $x=e$ in $(S,\oplus)$ we have
\begin{displaymath}
y'R_{f^\rho}L_f^{-1}L_{g^\lambda}R_g^{-1}\oplus z'=(y'\oplus
z'L_{g^\lambda}L_f^{-1})L_f^{-1}L_{g^\lambda}.
\end{displaymath} Again, replace $z'L_{g^\lambda}L_f^{-1}$ by $z''$
so that
\begin{displaymath}
y'R_{f^\rho}L_f^{-1}L_{g^\lambda}R_g^{-1}\oplus
z''L_fL_{g^\lambda}^{-1}=(y'\oplus
z'')L_f^{-1}L_{g^\lambda}\Rightarrow
(R_{f^\rho}L_f^{-1}L_{g^\lambda}R_g^{-1},L_fL_{g^\lambda}^{-1},L_f^{-1}L_{g^\lambda})
\end{displaymath}
is an autotopism of an SLB-subloop $(S,\oplus )$ of the S-loop
$(G,\oplus )$ such that $f$ and $g$ are S-elements.
\begin{myth}\label{1:7}
A Smarandache Moufang loop is universal if all its $f,g$-principal
isotopes are Smarandache $f,g$-principal isotopes. Conversely, if a
Smarandache Moufang loop is universal then
\begin{displaymath}
(R_{g}L_f^{-1}L_{g^\lambda}R_g^{-1},L_fR_g^{-1}R_{f^\rho}L_f^{-1},L_f^{-1}L_{g^\lambda}R_g^{-1}R_{f^\rho}),
(R_{g}L_f^{-1}L_{g^\lambda}R_g^{-1},L_fR_g^{-1}R_{f^\rho}L_f^{-1},R_g^{-1}R_{f^\rho}L_f^{-1}L_{g^\lambda}),
\end{displaymath}
\begin{displaymath}
(R_gL_f^{-1}L_{g^\lambda}R_g^{-1}R_{f^\rho}R_g^{-1},L_fL_{g^\lambda}^{-1},L_f^{-1}L_{g^\lambda}),
(R_gR_{f^\rho}^{-1},L_fR_g^{-1}R_{f^\rho}L_f^{-1}L_{g^\lambda}L_f^{-1},R_g^{-1}R_{f^\rho}),
\end{displaymath}
\begin{displaymath}
(R_gL_f^{-1}L_{g^\lambda}R_g^{-1},L_{g^\lambda}R_g^{-1}R_{f^\rho}L_{g^\lambda}^{-1},R_g^{-1}R_{f^\rho}L_f^{-1}L_{g^\lambda}),
(R_{f^\rho}L_f^{-1}L_{g^\lambda}R_{f^\rho}^{-1},L_fR_g^{-1}R_{f^\rho}L_f^{-1},L_f^{-1}L_{g^\lambda}R_g^{-1}R_{f^\rho})
\end{displaymath}
are autotopisms of an SM-subloop of the SML such that $f$ and $g$
are S-elements.
\end{myth}
{\bf Proof}\\
Let $(G,\oplus)$ be a SML with a SM-subloop $(S,\oplus )$. If
$(G,\circ )$ is an arbitrary $f,g$-principal isotope of
$(G,\oplus)$, then by Lemma~\ref{1:3}, $(S,\circ )$ is a subloop of
$(G,\circ)$ if $(S,\circ )$ is a Smarandache $f,g$-principal isotope
of $(S,\oplus )$. Let us choose all $(S,\circ )$ in this manner. So,
\begin{displaymath}
x\circ y=xR_g^{-1}\oplus yL_f^{-1}~\forall~x,y\in S.
\end{displaymath}
It is already known from \cite{phd3} that Moufang loops are
universal, hence $(S,\circ )$ is a Moufang loop thus an SM-subloop
of $(G,\circ)$. By Theorem~\ref{1:1}, for any isotope $(H,\otimes )$
of $(G,\oplus)$, there exists a $(G,\circ )$ such that $(H,\otimes
)\cong (G,\circ )$. So we can now choose the isomorphic image of
$(S,\circ)$ which will now be an SM-subloop in $(H,\otimes )$. So,
$(H,\otimes )$ is an SML. This conclusion can also be drawn straight
from Corollary~\ref{1:2}.
The proof of the converse is as follows. If a SML $(G,\oplus )$ is
universal then every isotope $(H,\otimes)$ is an SML i.e there
exists an SM-subloop $(S,\otimes )$ in $(H,\otimes )$. Let $(G,\circ
)$ be the $f,g$-principal isotope of $(G,\oplus)$, then by
Corollary~\ref{1:2}, $(G,\circ)$ is an SML with say an SM-subloop
$(S,\circ)$. For an SM-subloop $(S,\circ)$,
\begin{displaymath}
(x\circ y)\circ (z\circ x)=[x\circ (y\circ z)]\circ
x~\forall~x,y,z\in S\end{displaymath} where
\begin{displaymath}
x\circ y=xR_g^{-1}\oplus yL_f^{-1}~\forall~x,y\in S.
\end{displaymath}
Thus,
\begin{displaymath}
(xR_g^{-1}\oplus yL_f^{-1})R_g^{-1}\oplus (zR_g^{-1}\oplus
xL_f^{-1})L_f^{-1}=[xR_g^{-1}\oplus (yR_g^{-1}\oplus
zL_f^{-1})L_f^{-1}]R_g^{-1}\oplus xL_f^{-1}.
\end{displaymath} Replacing $yR_g^{-1}$ by $y'$, $zL_f^{-1}$ by $z'$
and taking $x=e$ in $(S,\oplus)$ we have
\begin{displaymath}
y'R_gL_f^{-1}L_{g^\lambda}R_g^{-1}\oplus
z'L_fR_g^{-1}R_{f^\rho}L_f^{-1}=(y'\oplus
z')L_f^{-1}L_{g^\lambda}R_g^{-1}R_{f^\rho}\Rightarrow
\end{displaymath}
\begin{displaymath}
(R_{g}L_f^{-1}L_{g^\lambda}R_g^{-1},L_fR_g^{-1}R_{f^\rho}L_f^{-1},L_f^{-1}L_{g^\lambda}R_g^{-1}R_{f^\rho})
\end{displaymath}
is an autotopism of an SM-subloop $(S,\oplus )$ of the S-loop
$(G,\oplus )$ such that $f$ and $g$ are S-elements.
Again, for an SM-subloop $(S,\circ)$,
\begin{displaymath}
(x\circ y)\circ (z\circ x)=x\circ [(y\circ z)\circ x]
~\forall~x,y,z\in S\end{displaymath} where
\begin{displaymath}
x\circ y=xR_g^{-1}\oplus yL_f^{-1}~\forall~x,y\in S.
\end{displaymath}
Thus,
\begin{displaymath}
(xR_g^{-1}\oplus yL_f^{-1})R_g^{-1}\oplus (zR_g^{-1}\oplus
xL_f^{-1})L_f^{-1}=xR_g^{-1}\oplus [(yR_g^{-1}\oplus
zL_f^{-1})R_g^{-1}\oplus xL_f^{-1}]L_f^{-1}.
\end{displaymath} Replacing $yR_g^{-1}$ by $y'$, $zL_f^{-1}$ by $z'$
and taking $x=e$ in $(S,\oplus)$ we have
\begin{displaymath}
y'R_gL_f^{-1}L_{g^\lambda}R_g^{-1}\oplus
z'L_fR_g^{-1}R_{f^\rho}L_f^{-1}=(y'\oplus
z')R_g^{-1}R_{f^\rho}L_f^{-1}L_{g^\lambda}\Rightarrow
\end{displaymath}
\begin{displaymath}
(R_{g}L_f^{-1}L_{g^\lambda}R_g^{-1},L_fR_g^{-1}R_{f^\rho}L_f^{-1},R_g^{-1}R_{f^\rho}L_f^{-1}L_{g^\lambda})
\end{displaymath}
is an autotopism of an SM-subloop $(S,\oplus )$ of the S-loop
$(G,\oplus )$ such that $f$ and $g$ are S-elements.
Also, if $(S,\circ)$ is an SM-subloop then,
\begin{displaymath}
[(x\circ y)\circ x]\circ z=x\circ [y\circ (x\circ z)]
~\forall~x,y,z\in S\end{displaymath} where
\begin{displaymath}
x\circ y=xR_g^{-1}\oplus yL_f^{-1}~\forall~x,y\in S.
\end{displaymath}
Thus,
\begin{displaymath}
[(xR_g^{-1}\oplus yL_f^{-1})R_g^{-1}\oplus xL_f^{-1}]R_g^{-1}\oplus
zL_f^{-1}=xR_g^{-1}\oplus [yR_g^{-1}\oplus (xR_g^{-1}\oplus
zL_f^{-1})L_f^{-1}]L_f^{-1}.
\end{displaymath} Replacing $yR_g^{-1}$ by $y'$, $zL_f^{-1}$ by $z'$
and taking $x=e$ in $(S,\oplus)$ we have
\begin{displaymath}
y'R_gL_f^{-1}L_{g^\lambda}R_g^{-1}R_{f^\rho}R_g^{-1}\oplus
z'=(y'\oplus z'L_{g^\lambda}L_f^{-1})L_f^{-1}L_{g^\lambda}.
\end{displaymath}
Again, replace $z'L_{g^\lambda}L_f^{-1}$ by $z''$ so that
\begin{displaymath}
y'R_gL_f^{-1}L_{g^\lambda}R_g^{-1}R_{f^\rho}R_g^{-1}\oplus
z''L_fL_{g^\lambda}^{-1}=(y'\oplus
z'')L_f^{-1}L_{g^\lambda}\Rightarrow
(R_gL_f^{-1}L_{g^\lambda}R_g^{-1}R_{f^\rho}R_g^{-1},L_fL_{g^\lambda}^{-1},L_f^{-1}L_{g^\lambda})
\end{displaymath}
is an autotopism of an SM-subloop $(S,\oplus )$ of the S-loop
$(G,\oplus )$ such that $f$ and $g$ are S-elements.
Furthermore, if $(S,\circ)$ is an SM-subloop then,
\begin{displaymath}
[(y\circ x)\circ z]\circ x=y\circ [x\circ (z\circ x)]
~\forall~x,y,z\in S\end{displaymath} where
\begin{displaymath}
x\circ y=xR_g^{-1}\oplus yL_f^{-1}~\forall~x,y\in S.
\end{displaymath}
Thus,
\begin{displaymath}
[(yR_g^{-1}\oplus xL_f^{-1})R_g^{-1}\oplus zL_f^{-1}]R_g^{-1}\oplus
xL_f^{-1}=yR_g^{-1}\oplus [xR_g^{-1}\oplus (zR_g^{-1}\oplus
xL_f^{-1})L_f^{-1}]L_f^{-1}.
\end{displaymath} Replacing $yR_g^{-1}$ by $y'$, $zL_f^{-1}$ by $z'$
and taking $x=e$ in $(S,\oplus)$ we have
\begin{displaymath}
(y'R_{f^\rho}R_g^{-1}\oplus z')R_g^{-1}R_{f^\rho}=y'\oplus
z'L_fR_g^{-1}R_{f^\rho}L_f^{-1}L_{g^\lambda}L_f^{-1}.
\end{displaymath}
Again, replace $y'R_{f^\rho}R_g^{-1}$ by $y''$ so that
\begin{displaymath}
(y''\oplus z')R_g^{-1}R_{f^\rho}=y''R_gR_{f^\rho}^{-1}\oplus
z'L_fR_g^{-1}R_{f^\rho}L_f^{-1}L_{g^\lambda}L_f^{-1}\Rightarrow
(R_gR_{f^\rho}^{-1},L_fR_g^{-1}R_{f^\rho}L_f^{-1}L_{g^\lambda}L_f^{-1},R_g^{-1}R_{f^\rho})
\end{displaymath}
is an autotopism of an SM-subloop $(S,\oplus )$ of the S-loop
$(G,\oplus )$ such that $f$ and $g$ are S-elements.
Lastly, $(S,\oplus)$ is an SM-subloop if and only if it is both
an SRB-subloop and an SLB-subloop. So by Theorem~\ref{1:6}, ${\cal
T}_1$ and ${\cal T}_2$ are autotopisms in $(S,\oplus)$, hence ${\cal
T}_1{\cal T}_2$ and ${\cal T}_2{\cal T}_1$ are autotopisms in
$(S,\oplus)$.
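The last step uses the fact that autotopisms of $(S,\oplus )$ compose componentwise: if $xA_{i}\oplus yB_{i}=(x\oplus y)C_{i}$ for $i=1,2$, then
\begin{displaymath}
xA_{1}A_{2}\oplus yB_{1}B_{2}=(xA_{1}\oplus yB_{1})C_{2}=(x\oplus y)C_{1}C_{2},
\end{displaymath}
so $(A_{1}A_{2},B_{1}B_{2},C_{1}C_{2})$ is again an autotopism. Multiplying out ${\cal T}_{1}{\cal T}_{2}$ and ${\cal T}_{2}{\cal T}_{1}$ and cancelling adjacent inverse translations yields precisely the fifth and sixth triples in the statement of the theorem.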
\begin{myth}\label{1:8}
A Smarandache extra loop is universal if all its $f,g$-principal
isotopes are Smarandache $f,g$-principal isotopes. Conversely, if a
Smarandache extra loop is universal then
$(R_gL_f^{-1}L_{g^\lambda}R_g^{-1},L_fR_{f^\rho}^{-1}R_gL_f^{-1},L_f^{-1}L_{g^\lambda}R_{f^\rho}^{-1}R_g)$,
\begin{displaymath}
(R_gR_{f^\rho}^{-1}R_gL_f^{-1}L_{g^\lambda}R_g^{-1},L_{g^\lambda}L_f^{-1},L_f^{-1}L_{g^\lambda}),
(R_{f^\rho}R_g^{-1},L_fL_{g^\lambda}^{-1}L_fR_g^{-1}R_{f^\rho}L_f^{-1},R_g^{-1}R_{f^\rho})
\end{displaymath}
\begin{displaymath}
(R_{g}L_f^{-1}L_{g^\lambda}R_g^{-1},L_fR_g^{-1}R_{f^\rho}L_f^{-1},L_f^{-1}L_{g^\lambda}R_g^{-1}R_{f^\rho}),
(R_{g}L_f^{-1}L_{g^\lambda}R_g^{-1},L_fR_g^{-1}R_{f^\rho}L_f^{-1},R_g^{-1}R_{f^\rho}L_f^{-1}L_{g^\lambda}),
\end{displaymath}
\begin{displaymath}
(R_gL_f^{-1}L_{g^\lambda}R_g^{-1}R_{f^\rho}R_g^{-1},L_fL_{g^\lambda}^{-1},L_f^{-1}L_{g^\lambda}),
(R_gR_{f^\rho}^{-1},L_fR_g^{-1}R_{f^\rho}L_f^{-1}L_{g^\lambda}L_f^{-1},R_g^{-1}R_{f^\rho}),
\end{displaymath}
\begin{displaymath}
(R_gL_f^{-1}L_{g^\lambda}R_g^{-1},L_{g^\lambda}R_g^{-1}R_{f^\rho}L_{g^\lambda}^{-1},R_g^{-1}R_{f^\rho}L_f^{-1}L_{g^\lambda}),
(R_{f^\rho}L_f^{-1}L_{g^\lambda}R_{f^\rho}^{-1},L_fR_g^{-1}R_{f^\rho}L_f^{-1},L_f^{-1}L_{g^\lambda}R_g^{-1}R_{f^\rho}),
\end{displaymath}
are autotopisms of an SE-subloop of the SEL such that $f$ and $g$
are S-elements.
\end{myth}
{\bf Proof}\\
Let $(G,\oplus)$ be a SEL with a SE-subloop $(S,\oplus )$. If
$(G,\circ )$ is an arbitrary $f,g$-principal isotope of
$(G,\oplus)$, then by Lemma~\ref{1:3}, $(S,\circ )$ is a subloop of
$(G,\circ)$ if $(S,\circ )$ is a Smarandache $f,g$-principal isotope
of $(S,\oplus )$. Let us choose all $(S,\circ )$ in this manner. So,
\begin{displaymath}
x\circ y=xR_g^{-1}\oplus yL_f^{-1}~\forall~x,y\in S.
\end{displaymath}
In \cite{phd34} and \cite{phd36} respectively, it was shown and
stated that a loop is an extra loop if and only if it is a Moufang
loop and a CC-loop. But since CC-loops are G-loops (they are
isomorphic to all their loop isotopes), extra loops are universal,
hence $(S,\circ )$ is an extra loop thus an SE-subloop of
$(G,\circ)$. By Theorem~\ref{1:1}, for any isotope $(H,\otimes )$ of
$(G,\oplus)$, there exists a $(G,\circ )$ such that $(H,\otimes
)\cong (G,\circ )$. So we can now choose the isomorphic image of
$(S,\circ)$ which will now be an SE-subloop in $(H,\otimes )$. So,
$(H,\otimes )$ is an SEL. This conclusion can also be drawn straight
from Corollary~\ref{1:2}.
The proof of the converse is as follows. If a SEL $(G,\oplus )$ is
universal then every isotope $(H,\otimes)$ is an SEL i.e there
exists an SE-subloop $(S,\otimes )$ in $(H,\otimes )$. Let $(G,\circ
)$ be the $f,g$-principal isotope of $(G,\oplus)$, then by
Corollary~\ref{1:2}, $(G,\circ)$ is an SEL with say an SE-subloop
$(S,\circ)$. For an SE-subloop $(S,\circ)$,
\begin{displaymath}
[(x\circ y)\circ z]\circ x=x\circ [y\circ (z\circ
x)]~\forall~x,y,z\in S\end{displaymath} where
\begin{displaymath}
x\circ y=xR_g^{-1}\oplus yL_f^{-1}~\forall~x,y\in S.
\end{displaymath}
Thus,
\begin{displaymath}
[(xR_g^{-1}\oplus yL_f^{-1})R_g^{-1}\oplus zL_f^{-1}]R_g^{-1}\oplus
xL_f^{-1}=xR_g^{-1}\oplus [yR_g^{-1}\oplus (zR_g^{-1}\oplus
xL_f^{-1})L_f^{-1}]L_f^{-1}.
\end{displaymath} Replacing $yR_g^{-1}$ by $y'$, $zL_f^{-1}$ by $z'$
and taking $x=e$ in $(S,\oplus)$ we have
\begin{displaymath}
(y'R_gL_f^{-1}L_{g^\lambda}R_g^{-1}\oplus
z')R_g^{-1}R_{f^\rho}=(y'\oplus
z'L_fR_g^{-1}R_{f^\rho}L_f^{-1})L_f^{-1}L_{g^\lambda}.
\end{displaymath}
Again, replace $z'L_fR_g^{-1}R_{f^\rho}L_f^{-1}$ by $z''$ so that
\begin{displaymath}
y'R_gL_f^{-1}L_{g^\lambda}R_g^{-1}\oplus
z''L_fR_{f^\rho}^{-1}R_gL_f^{-1}=(y'\oplus
z'')L_f^{-1}L_{g^\lambda}R_{f^\rho}^{-1}R_g\Rightarrow
\end{displaymath}
\begin{displaymath}
(R_gL_f^{-1}L_{g^\lambda}R_g^{-1},L_fR_{f^\rho}^{-1}R_gL_f^{-1},L_f^{-1}L_{g^\lambda}R_{f^\rho}^{-1}R_g)
\end{displaymath}
is an autotopism of an SE-subloop $(S,\oplus )$ of the S-loop
$(G,\oplus )$ such that $f$ and $g$ are S-elements.
Again, for an SE-subloop $(S,\circ)$,
\begin{displaymath}
(x\circ y)\circ (x\circ z)=x\circ [(y\circ x)\circ z]
~\forall~x,y,z\in S\end{displaymath} where
\begin{displaymath}
x\circ y=xR_g^{-1}\oplus yL_f^{-1}~\forall~x,y\in S.
\end{displaymath}
Thus,
\begin{displaymath}
(xR_g^{-1}\oplus yL_f^{-1})R_g^{-1}\oplus (xR_g^{-1}\oplus
zL_f^{-1})L_f^{-1}=xR_g^{-1}\oplus [(yR_g^{-1}\oplus
xL_f^{-1})R_g^{-1}\oplus zL_f^{-1}]L_f^{-1}.
\end{displaymath} Replacing $yR_g^{-1}$ by $y'$, $zL_f^{-1}$ by $z'$
and taking $x=e$ in $(S,\oplus)$ we have
\begin{displaymath}
y'R_gL_f^{-1}L_{g^\lambda}R_g^{-1}\oplus
z'L_{g^\lambda}L_f^{-1}=(y'R_{f^\rho}R_g^{-1}\oplus
z')L_f^{-1}L_{g^\lambda}.
\end{displaymath}
Again, replace $y'R_{f^\rho}R_g^{-1}$ by $y''$ so that
\begin{displaymath}
y''R_gR_{f^\rho}^{-1}R_gL_f^{-1}L_{g^\lambda}R_g^{-1}\oplus
z'L_{g^\lambda}L_f^{-1}=(y''\oplus
z')L_f^{-1}L_{g^\lambda}\Rightarrow
(R_gR_{f^\rho}^{-1}R_gL_f^{-1}L_{g^\lambda}R_g^{-1},L_{g^\lambda}L_f^{-1},L_f^{-1}L_{g^\lambda})
\end{displaymath}
is an autotopism of an SE-subloop $(S,\oplus )$ of the S-loop
$(G,\oplus )$ such that $f$ and $g$ are S-elements.
Also, if $(S,\circ)$ is an SE-subloop then,
\begin{displaymath}
(y\circ x)\circ (z\circ x)=[y\circ (x\circ z)]\circ x
~\forall~x,y,z\in S\end{displaymath} where
\begin{displaymath}
x\circ y=xR_g^{-1}\oplus yL_f^{-1}~\forall~x,y\in S.
\end{displaymath}
Thus,
\begin{displaymath}
(yR_g^{-1}\oplus xL_f^{-1})R_g^{-1}\oplus (zR_g^{-1}\oplus
xL_f^{-1})L_f^{-1}= [yR_g^{-1}\oplus (xR_g^{-1}\oplus
zL_f^{-1})L_f^{-1}]R_g^{-1}\oplus xL_f^{-1}.
\end{displaymath} Replacing $yR_g^{-1}$ by $y'$, $zL_f^{-1}$ by $z'$
and taking $x=e$ in $(S,\oplus)$ we have
\begin{displaymath}
y'R_{f^\rho}R_g^{-1}\oplus z'L_fR_g^{-1}R_{f^\rho}L_f^{-1}=(y'\oplus
z'L_{g^\lambda}L_f^{-1})R_g^{-1}R_{f^\rho}.
\end{displaymath}
Again, replace $z'L_{g^\lambda}L_f^{-1}$ by $z''$ so that
\begin{displaymath}
y'R_{f^\rho}R_g^{-1}\oplus
z''L_fL_{g^\lambda}^{-1}L_fR_g^{-1}R_{f^\rho}L_f^{-1}=(y'\oplus
z'')R_g^{-1}R_{f^\rho}\Rightarrow
(R_{f^\rho}R_g^{-1},L_fL_{g^\lambda}^{-1}L_fR_g^{-1}R_{f^\rho}L_f^{-1},R_g^{-1}R_{f^\rho})
\end{displaymath}
is an autotopism of an SE-subloop $(S,\oplus )$ of the S-loop
$(G,\oplus )$ such that $f$ and $g$ are S-elements.
Lastly, $(S,\oplus)$ is an SE-subloop if and only if it is both an
SM-subloop and an SCC-subloop. So by Theorem~\ref{1:7}, the six
remaining triples are autotopisms in $(S,\oplus)$.
\subsection*{Universality of Smarandache Inverse Property Loops}
\begin{myth}\label{1:9}
A Smarandache left(right) inverse property loop in which all its
$f,g$-principal isotopes are Smarandache $f,g$-principal isotopes is
universal if and only if it is a Smarandache left(right) Bol loop in
which all its $f,g$-principal isotopes are Smarandache
$f,g$-principal isotopes.
\end{myth}
{\bf Proof}\\
Let $(G,\oplus)$ be a SLIPL with a SLIP-subloop $(S,\oplus )$. If
$(G,\circ )$ is an arbitrary $f,g$-principal isotope of
$(G,\oplus)$, then by Lemma~\ref{1:3}, $(S,\circ )$ is a subloop of
$(G,\circ)$ if $(S,\circ )$ is a Smarandache $f,g$-principal isotope
of $(S,\oplus )$. Let us choose all $(S,\circ )$ in this manner. So,
\begin{displaymath}
x\circ y=xR_g^{-1}\oplus yL_f^{-1}~\forall~x,y\in S.
\end{displaymath}
$(G,\oplus)$ is a universal SLIPL if and only if every isotope
$(H,\otimes )$ is a SLIPL. $(H,\otimes )$ is a SLIPL if and only if
it has at least a SLIP-subloop $(S,\otimes )$. By Theorem~\ref{1:1},
for any isotope $(H,\otimes )$ of $(G,\oplus)$, there exists a
$(G,\circ )$ such that $(H,\otimes )\cong (G,\circ )$. So we can now
choose the isomorphic image of $(S,\circ)$ to be $(S,\otimes )$
which is already a SLIP-subloop in $(H,\otimes )$. So, $(S,\circ)$
is also a SLIP-subloop in $(G,\circ )$. As shown in \cite{phd3},
$(S,\oplus )$ and its $f,g$-isotope (Smarandache $f,g$-isotope)
$(S,\circ)$ are SLIP-subloops if and only if $(S,\oplus )$ is a left
Bol subloop (i.e. an SLB-subloop). So, $(G,\oplus)$ is an SLBL.
Conversely, if $(G,\oplus)$ is SLBL, then there exists a
SLB-subloop $(S,\oplus )$ in $(G,\oplus)$. If $(G,\circ )$ is an
arbitrary $f,g$-principal isotope of $(G,\oplus)$, then by
Lemma~\ref{1:3}, $(S,\circ )$ is a subloop of $(G,\circ)$ if
$(S,\circ )$ is a Smarandache $f,g$-principal isotope of $(S,\oplus
)$. Let us choose all $(S,\circ )$ in this manner. So,
\begin{displaymath}
x\circ y=xR_g^{-1}\oplus yL_f^{-1}~\forall~x,y\in S.
\end{displaymath}
By Theorem~\ref{1:1}, for any isotope $(H,\otimes )$ of
$(G,\oplus)$, there exists a $(G,\circ )$ such that $(H,\otimes
)\cong (G,\circ )$. So we can now choose the isomorphic image of
$(S,\circ)$ to be $(S,\otimes )$ which is a SLB-subloop in
$(H,\otimes )$ using the same reasoning in Theorem~\ref{1:6}. So,
$(S,\circ)$ is a SLB-subloop in $(G,\circ )$. Left Bol loops have
the left inverse property(LIP), hence, $(S,\oplus )$ and $(S,\circ)$
are SLIP-subloops in $(G,\oplus)$ and $(G,\circ )$ respectively.
Thence, $(G,\oplus)$ and $(G,\circ )$ are SLBLs. Therefore,
$(G,\oplus)$ is
a universal SLIPL.\\
The proof for a Smarandache right inverse property loop is similar
and is as follows. Let $(G,\oplus)$ be a SRIPL with a SRIP-subloop
$(S,\oplus )$. If $(G,\circ )$ is an arbitrary $f,g$-principal
isotope of $(G,\oplus)$, then by Lemma~\ref{1:3}, $(S,\circ )$ is a
subloop of $(G,\circ)$ if $(S,\circ )$ is a Smarandache
$f,g$-principal isotope of $(S,\oplus )$. Let us choose all
$(S,\circ )$ in this manner. So,
\begin{displaymath}
x\circ y=xR_g^{-1}\oplus yL_f^{-1}~\forall~x,y\in S.
\end{displaymath}
$(G,\oplus)$ is a universal SRIPL if and only if every isotope
$(H,\otimes )$ is a SRIPL. $(H,\otimes )$ is a SRIPL if and only if
it has at least a SRIP-subloop $(S,\otimes )$. By Theorem~\ref{1:1},
for any isotope $(H,\otimes )$ of $(G,\oplus)$, there exists a
$(G,\circ )$ such that $(H,\otimes )\cong (G,\circ )$. So we can now
choose the isomorphic image of $(S,\circ)$ to be $(S,\otimes )$
which is already a SRIP-subloop in $(H,\otimes )$. So, $(S,\circ)$
is also a SRIP-subloop in $(G,\circ )$. As shown in \cite{phd3},
$(S,\oplus )$ and its $f,g$-isotope (Smarandache $f,g$-isotope)
$(S,\circ)$ are SRIP-subloops if and only if $(S,\oplus )$ is a
right Bol subloop (i.e. an SRB-subloop). So, $(G,\oplus)$ is an SRBL.
Conversely, if $(G,\oplus)$ is SRBL, then there exists a
SRB-subloop $(S,\oplus )$ in $(G,\oplus)$. If $(G,\circ )$ is an
arbitrary $f,g$-principal isotope of $(G,\oplus)$, then by
Lemma~\ref{1:3}, $(S,\circ )$ is a subloop of $(G,\circ)$ if
$(S,\circ )$ is a Smarandache $f,g$-principal isotope of $(S,\oplus
)$. Let us choose all $(S,\circ )$ in this manner. So,
\begin{displaymath}
x\circ y=xR_g^{-1}\oplus yL_f^{-1}~\forall~x,y\in S.
\end{displaymath}
By Theorem~\ref{1:1}, for any isotope $(H,\otimes )$ of
$(G,\oplus)$, there exists a $(G,\circ )$ such that $(H,\otimes
)\cong (G,\circ )$. So we can now choose the isomorphic image of
$(S,\circ)$ to be $(S,\otimes )$ which is a SRB-subloop in
$(H,\otimes )$ using the same reasoning in Theorem~\ref{1:6}. So,
$(S,\circ)$ is a SRB-subloop in $(G,\circ )$. Right Bol loops have
the right inverse property(RIP), hence, $(S,\oplus )$ and
$(S,\circ)$ are SRIP-subloops in $(G,\oplus)$ and $(G,\circ )$
respectively. Thence, $(G,\oplus)$ and $(G,\circ )$ are SRBLs.
Therefore, $(G,\oplus)$ is a universal SRIPL.
\begin{myth}\label{1:10}
A Smarandache inverse property loop in which all its $f,g$-principal
isotopes are Smarandache $f,g$-principal isotopes is universal if
and only if it is a Smarandache Moufang loop in which all its
$f,g$-principal isotopes are Smarandache $f,g$-principal isotopes.
\end{myth}
{\bf Proof}\\
Let $(G,\oplus)$ be a SIPL with a SIP-subloop $(S,\oplus )$. If
$(G,\circ )$ is an arbitrary $f,g$-principal isotope of
$(G,\oplus)$, then by Lemma~\ref{1:3}, $(S,\circ )$ is a subloop of
$(G,\circ)$ if $(S,\circ )$ is a Smarandache $f,g$-principal isotope
of $(S,\oplus )$. Let us choose all $(S,\circ )$ in this manner. So,
\begin{displaymath}
x\circ y=xR_g^{-1}\oplus yL_f^{-1}~\forall~x,y\in S.
\end{displaymath}
$(G,\oplus)$ is a universal SIPL if and only if every isotope
$(H,\otimes )$ is a SIPL. $(H,\otimes )$ is a SIPL if and only if it
has at least a SIP-subloop $(S,\otimes )$. By Theorem~\ref{1:1}, for
any isotope $(H,\otimes )$ of $(G,\oplus)$, there exists a $(G,\circ
)$ such that $(H,\otimes )\cong (G,\circ )$. So we can now choose
the isomorphic image of $(S,\circ)$ to be $(S,\otimes )$ which is
already a SIP-subloop in $(H,\otimes )$. So, $(S,\circ)$ is also a
SIP-subloop in $(G,\circ )$. As shown in \cite{phd3}, $(S,\oplus )$
and its $f,g$-isotope (Smarandache $f,g$-isotope) $(S,\circ)$ are
SIP-subloops if and only if $(S,\oplus )$ is a Moufang subloop (i.e.
an SM-subloop). So, $(G,\oplus)$ is an SML.
Conversely, if $(G,\oplus)$ is SML, then there exists a SM-subloop
$(S,\oplus )$ in $(G,\oplus)$. If $(G,\circ )$ is an arbitrary
$f,g$-principal isotope of $(G,\oplus)$, then by Lemma~\ref{1:3},
$(S,\circ )$ is a subloop of $(G,\circ)$ if $(S,\circ )$ is a
Smarandache $f,g$-principal isotope of $(S,\oplus )$. Let us choose
all $(S,\circ )$ in this manner. So,
\begin{displaymath}
x\circ y=xR_g^{-1}\oplus yL_f^{-1}~\forall~x,y\in S.
\end{displaymath}
By Theorem~\ref{1:1}, for any isotope $(H,\otimes )$ of
$(G,\oplus)$, there exists a $(G,\circ )$ such that $(H,\otimes
)\cong (G,\circ )$. So we can now choose the isomorphic image of
$(S,\circ)$ to be $(S,\otimes )$ which is a SM-subloop in
$(H,\otimes )$ using the same reasoning in Theorem~\ref{1:6}. So,
$(S,\circ)$ is a SM-subloop in $(G,\circ )$. Moufang loops have the
inverse property(IP), hence, $(S,\oplus )$ and $(S,\circ)$ are
SIP-subloops in $(G,\oplus)$ and $(G,\circ )$ respectively. Thence,
$(G,\oplus)$ and $(G,\circ )$ are SMLs. Therefore, $(G,\oplus)$ is a
universal SIPL.
\begin{mycor}\label{1:11}
If a Smarandache right(left) inverse property loop is universal then
\begin{displaymath}
(R_gR_{f^\rho}^{-1},L_{g^\lambda}R_g^{-1}R_{f^\rho}L_f^{-1},R_g^{-1}R_{f^\rho})\bigg(
(R_{f^\rho}L_f^{-1}L_{g^\lambda}R_g^{-1},L_fL_{g^\lambda}^{-1},L_f^{-1}L_{g^\lambda})\bigg)
\end{displaymath}
is an autotopism of an SRIP(SLIP)-subloop of the SRIPL(SLIPL) such
that $f$ and $g$ are S-elements.
\end{mycor}
{\bf Proof}\\
This follows from Theorem~\ref{1:9} and Theorem~\ref{1:6}.
\begin{mycor}\label{1:12}
If a Smarandache inverse property loop is universal then
\begin{displaymath}
(R_{g}L_f^{-1}L_{g^\lambda}R_g^{-1},L_fR_g^{-1}R_{f^\rho}L_f^{-1},L_f^{-1}L_{g^\lambda}R_g^{-1}R_{f^\rho}),
(R_{g}L_f^{-1}L_{g^\lambda}R_g^{-1},L_fR_g^{-1}R_{f^\rho}L_f^{-1},R_g^{-1}R_{f^\rho}L_f^{-1}L_{g^\lambda}),
\end{displaymath}
\begin{displaymath}
(R_gL_f^{-1}L_{g^\lambda}R_g^{-1}R_{f^\rho}R_g^{-1},L_fL_{g^\lambda}^{-1},L_f^{-1}L_{g^\lambda}),
(R_gR_{f^\rho}^{-1},L_fR_g^{-1}R_{f^\rho}L_f^{-1}L_{g^\lambda}L_f^{-1},R_g^{-1}R_{f^\rho}),
\end{displaymath}
\begin{displaymath}
(R_gL_f^{-1}L_{g^\lambda}R_g^{-1},L_{g^\lambda}R_g^{-1}R_{f^\rho}L_{g^\lambda}^{-1},R_g^{-1}R_{f^\rho}L_f^{-1}L_{g^\lambda}),
(R_{f^\rho}L_f^{-1}L_{g^\lambda}R_{f^\rho}^{-1},L_fR_g^{-1}R_{f^\rho}L_f^{-1},L_f^{-1}L_{g^\lambda}R_g^{-1}R_{f^\rho})
\end{displaymath}
\end{displaymath}
are autotopisms of an SIP-subloop of the SIPL such that $f$ and $g$
are S-elements.
\end{mycor}
{\bf Proof}\\
This follows from Theorem~\ref{1:10} and Theorem~\ref{1:7}. | 3,108 | 25,145 | en |
train | 0.3.6 | Conversely, if $(G,\oplus)$ is SML, then there exists a SM-subloop
$(S,\oplus )$ in $(G,\oplus)$. If $(G,\circ )$ is an arbitrary
$f,g$-principal isotope of $(G,\oplus)$, then by Lemma~\ref{1:3},
$(S,\circ )$ is a subloop of $(G,\circ)$ if $(S,\circ )$ is a
Smarandache $f,g$-principal isotope of $(S,\oplus )$. Let us choose
all $(S,\circ )$ in this manner. So,
\begin{equation}gin{displaymath}
x\circ y=xR_g^{-1}\oplus yL_f^{-1}~\forall~x,y\in S.
\end{displaymath}
By Theorem~\ref{1:1}, for any isotope $(H,\otimes )$ of
$(G,\oplus)$, there exists a $(G,\circ )$ such that $(H,\otimes
)\cong (G,\circ )$. So we can now choose the isomorphic image of
$(S,\circ)$ to be $(S,\otimes )$ which is a SM-subloop in
$(H,\otimes )$ using the same reasoning in Theorem~\ref{1:6}. So,
$(S,\circ)$ is a SM-subloop in $(G,\circ )$. Moufang loops have the
inverse property(IP), hence, $(S,\oplus )$ and $(S,\circ)$ are
SIP-subloops in $(G,\oplus)$ and $(G,\circ )$ respectively. Thence,
$(G,\oplus)$ and $(G,\circ )$ are SMLs. Therefore, $(G,\oplus)$ is a
universal SIPL.
\begin{equation}gin{mycor}\leftarrowbel{1:11}
If a Smarandache left(right) inverse property loop is universal then
\begin{equation}gin{displaymath}
(R_gR_{f^\rho}^{-1},L_{g^\leftarrowmbda}R_g^{-1}R_{f^\rho}L_f^{-1},R_g^{-1}R_{f^\rho})\bigg(
(R_{f^\rho}L_f^{-1}L_{g^\leftarrowmbda}R_g^{-1},L_fL_{g^\leftarrowmbda}^{-1},L_f^{-1}L_{g^\leftarrowmbda})\bigg)
\end{displaymath}
is an autotopism of an SLIP(SRIP)-subloop of the SLIPL(SRIPL) such
that $f$ and $g$ are S-elements.
\end{mycor}
{\bf Proof}\\
This follows by Theorem~\ref{1:9} and Theorem~\ref{1:11}.
\begin{equation}gin{mycor}\leftarrowbel{1:12}
If a Smarandache inverse property loop is universal then
\begin{equation}gin{displaymath}
(R_{g}L_f^{-1}L_{g^\leftarrowmbda}R_g^{-1},L_fR_g^{-1}R_{f^\rho}L_f^{-1},L_f^{-1}L_{g^\leftarrowmbda}R_g^{-1}R_{f^\rho}),
(R_{g}L_f^{-1}L_{g^\leftarrowmbda}R_g^{-1},L_fR_g^{-1}R_{f^\rho}L_f^{-1},R_g^{-1}R_{f^\rho}L_f^{-1}L_{g^\leftarrowmbda}),
\end{displaymath}
\begin{equation}gin{displaymath}
(R_gL_f^{-1}L_{g^\leftarrowmbda}R_g^{-1}R_{f^\rho}R_g^{-1},L_fL_{g^\leftarrowmbda}^{-1},L_f^{-1}L_{g^\leftarrowmbda}),
(R_gR_{f^\rho}^{-1},L_fR_g^{-1}R_{f^\rho}L_f^{-1}L_{g^\leftarrowmbda}L_f^{-1},R_g^{-1}R_{f^\rho}),
\end{displaymath}
\begin{equation}gin{displaymath}
(R_gL_f^{-1}L_{g^\leftarrowmbda}R_g^{-1},L_{g^\leftarrowmbda}R_g^{-1}R_{f^\rho}L_{g^\leftarrowmbda}^{-1},R_g^{-1}R_{f^\rho}L_f^{-1}L_{g^\leftarrowmbda}),
(R_{f^\rho}L_f^{-1}L_{g^\leftarrowmbda}R_{f^\rho}^{-1},L_fR_g^{-1}R_{f^\rho}L_f^{-1},L_f^{-1}L_{g^\leftarrowmbda}R_g^{-1}R_{f^\rho})
\end{displaymath}
are autotopisms of an SIP-subloop of the SIPL such that $f$ and $g$
are S-elements.
\end{mycor}
{\bf Proof}\\
This follows from Theorem~\ref{1:10} and Theorem~\ref{1:7}.
\begin{thebibliography}{99}
\bibitem{phd30} R. Artzy (1959), {\it Crossed inverse and related
loops}, Trans. Amer. Math. Soc. 91, 3, 480--492.
\bibitem{phd41} R. H. Bruck (1966), {\it A survey of binary systems}, Springer-Verlag, Berlin-G\"ottingen-Heidelberg, 185pp.
\bibitem{phd40} R. H. Bruck and L. J. Paige (1956), {\it Loops whose
inner mappings are automorphisms}, The Annals of Mathematics, 63,
2, 308--323.
\bibitem{phd39} O. Chein, H. O. Pflugfelder and J. D. H. Smith (1990), {\it Quasigroups and loops : Theory and applications}, Heldermann Verlag, 568pp.
\bibitem{phd49} J. D\'{e}nes and A. D. Keedwell (1974), {\it Latin squares and their applications}, Academic Press, 549pp.
\bibitem{phd50} F. Fenyves (1968), {\it Extra loops I}, Publ. Math. Debrecen, 15, 235--238.
\bibitem{phd56} F. Fenyves (1969), {\it Extra loops II}, Publ. Math. Debrecen, 16, 187--192.
\bibitem{phd42} E. G. Goodaire, E. Jespers and C. P. Milies (1996), {\it Alternative loop rings}, NHMS(184), Elsevier, 387pp.
\bibitem{phd34} E. G. Goodaire and D. A. Robinson (1990), {\it Some special conjugacy closed loops}, Canad. Math. Bull. 33, 73--78.
\bibitem{sma1} T. G. Ja\'iy\'e\d ol\'a (2006), {\it An holomorphic study of the Smarandache concept in
loops}, Scientia Magna Journal, 2, 1, 1--8.
\bibitem{sma2} T. G. Ja\'iy\'e\d ol\'a (2006), {\it Smarandache quasigroups}, Scientia Magna Journal, 2, 2, to appear.
\bibitem{phdtope} T. G. Ja\'iy\'e\d ol\'a (2005), {\it An isotopic study of
properties of central loops}, M.Sc. Dissertation, University of
Agriculture, Abeokuta.
\bibitem{tope} T. G. Ja\'iy\'e\d ol\'a and J. O. Ad\'en\'iran, {\it On isotopic characterization of central
loops}, communicated for publication.
\bibitem{phd118} T. Kepka, M. K. Kinyon, J. D. Phillips, {\it
F-quasigroups and generalised modules}, communicated for
publication.
\bibitem{phd119} T. Kepka, M. K. Kinyon, J. D. Phillips, {\it F-quasigroups isotopic to groups}, communicated for publication.
\bibitem{phd95} T.
Kepka, M. K. Kinyon, J. D. Phillips, {\it The structure of
F-quasigroups}, communicated for publication.
\bibitem{phd36} M. K. Kinyon, K. Kunen (2004), {\it The structure of
extra loops}, Quasigroups and Related Systems 12, 39--60.
\bibitem{phd124} M. K. Kinyon, J. D. Phillips and P. Vojt\v echovsk\'y (2004), {\it Loops of Bol-Moufang type with a subgroup of index
two}, Bul. Acad. \c{S}tiin\c{t}e Repub. Mold. Mat. 2(45), 1--17.
\bibitem{ken2} K. Kunen (1996), {\it Quasigroups, loops and associative laws}, J. Alg.185, 194--204.
\bibitem{ken1} K. Kunen (1996), {\it Moufang quasigroups}, J. Alg. 183, 231--234.
\bibitem{muk} A. S. Muktibodh (2006), {\it Smarandache quasigroups},
Scientia Magna Journal, 2, 1, 13--19.
\bibitem{phd88} P. T. Nagy and K. Strambach (1994), {\it Loops as
invariant sections in groups, and their geometry}, Canad. J. Math.
46, 5, 1027--1056.
\bibitem{phd43} J. M. Osborn (1961), {\it Loops with the weak
inverse property}, Pac. J. Math. 10, 295--304.
\bibitem{phd3} H. O. Pflugfelder (1990), {\it Quasigroups and loops : Introduction}, Sigma series in Pure Math. 7, Heldermann Verlag, Berlin, 147pp.
\bibitem{phd9} J. D. Phillips and P. Vojt\v echovsk\'y (2005), {\it The varieties of loops of Bol-Moufang type}, Alg. Univer. (to appear).
\bibitem{phd61} J. D. Phillips and P. Vojt\v echovsk\'y (2005), {\it The varieties of quasigroups of Bol-Moufang type : An equational
approach}, J. Alg. 293, 17--33.
\bibitem{phd75} W. B. Vasantha Kandasamy (2002), {\it Smarandache
loops}, Department of Mathematics, Indian Institute of Technology,
Madras, India, 128pp.
\bibitem{phd83} W. B. Vasantha Kandasamy (2002), {\it Smarandache
Loops}, Smarandache notions journal, 13, 252--258.
\end{thebibliography}
\end{document} | 2,932 | 25,145 | en |
train | 0.4.0 | \begin{document}
\newcounter{algnum}
\newcounter{step}
\newtheorem{alg}{Algorithm}
\newenvironment{algorithm}{\begin{alg}\mathcal End{alg}}
\mathcal Title[Joint spectral radius, Sturmian measures, finiteness conjecture]
{Joint spectral radius, Sturmian measures, and the finiteness conjecture}
\author{O.~Jenkinson \& M. Pollicott}
\address{Oliver Jenkinson;
School of Mathematical Sciences, Queen
Mary, University of London, Mile End Road, London, E1 4NS, UK.
\newline {\mathcal Tt [email protected]}}
\address{Mark Pollicott;
Mathematics Institute,
University of Warwick,
Coventry, CV4 7AL, UK.
\newline {\mathcal Tt [email protected]}}
\subjclass[2010]{Primary 15A18, 15A60; Secondary 37A99, 37B10, 68R15}
\begin{abstract}
The joint spectral radius of a pair of $2 \mathcal Times 2$ real matrices $(A_0,A_1)\in M_2(\mathcal Mathbb{R})^2$
is defined to be
$r(A_0,A_1)= \mathcal Limsup_{n\mathcal To\infty} \mathcal Max \{\|A_{i_1}\cdots A_{i_n}\|^{1/n}: i_j\in\{0,1\}\}$,
the optimal growth rate of the norm of products of these matrices.
The Lagarias-Wang finiteness conjecture \cite{lagariaswang}, asserting that $r(A_0,A_1)$ is
always the $n$th root of the spectral radius of some length-$n$ product $A_{i_1}\cdots A_{i_n}$, has been refuted by
Bousch \& Mairesse \cite{bouschmairesse}, with subsequent counterexamples presented by
Blondel, Theys \& Vladimirov \cite{btv}, Kozyakin \cite{kozyakin}, Hare, Morris, Sidorov \& Theys
\cite{hmst}.
In this article we introduce a new approach to generating finiteness counterexamples,
and use this to exhibit an open subset of $M_2(\mathcal Mathbb{R})^2$ with the property that each member $(A_0,A_1)$ of the subset generates
uncountably many counterexamples of the form $(A_0,tA_1)$.
Our methods employ ergodic theory, in particular the analysis of Sturmian invariant measures;
this approach allows a short proof
that the relation between the parameter $t$ and the Sturmian parameter $\mathcal Mathcal{P}(t)$ is a devil's staircase.
\mathcal End{abstract}
\mathcal Maketitle | 732 | 61,706 | en |
train | 0.4.1 | \section{Introduction}\mathcal Label{generalsection}
\subsection{Problem and setting}\mathcal Label{problemsetting}
For a square matrix $A$ with real entries, its \mathcal Emph{spectral radius} $r(A)$, defined as the maximum modulus of its eigenvalues, satisfies \mathcal Emph{Gelfand's formula}
$$
r(A)= \mathcal Lim_{n\mathcal To\infty} \|A^n\|^{1/n}\,,
$$
where $\|\cdot\|$ is a matrix norm.
More generally, for a finite collection $\mathcal Mathcal A=\{A_0,\mathcal Ldots, A_l\}$ of real square matrices, all of the same size,
the \mathcal Emph{joint spectral radius} $r(\mathcal Mathcal A)$ is defined by
\begin{equation}\mathcal Label{jsrlimsup}
r(\mathcal Mathcal A) =
\mathcal Limsup_{n\mathcal To\infty} \mathcal Max \{\|A_{i_1}\cdots A_{i_n}\|^{1/n}: i_j\in\{0,\mathcal Ldots, l\}\} \,,
\mathcal End{equation}
or equivalently (see e.g.~\cite{jungers}) by
\begin{equation}\mathcal Label{rjsr}
r(\mathcal Mathcal A) =
\mathcal Lim_{n \mathcal To +\infty} \mathcal Max
\{ r(A_{i_1}\cdots A_{i_n})^{1/n}: i_j\in\{0, \dots,l\}\} \,.
\mathcal End{equation}
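Formula (\ref{rjsr}) suggests a naive numerical scheme: fix a length $n$ and maximize $r(A_{i_1}\cdots A_{i_n})^{1/n}$ over all $2^n$ words. The Python sketch below (an illustrative aside; the function names are our own) computes this brute-force lower bound for a pair of $2\times 2$ matrices.
\begin{verbatim}
from itertools import product
from math import sqrt

def spec_rad(M):
    """Spectral radius of a real 2x2 matrix M = ((a, b), (c, d))."""
    (a, b), (c, d) = M
    tr, det = a + d, a * d - b * c
    disc = tr * tr - 4 * det
    if disc >= 0:                       # real eigenvalues
        return max(abs(tr + sqrt(disc)), abs(tr - sqrt(disc))) / 2
    return sqrt(det)                    # complex pair: |lambda|^2 = det

def mat_mul(M, N):
    return tuple(tuple(sum(M[i][k] * N[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

def jsr_lower_bound(A0, A1, n):
    """max over words of length n of r(A_{i_1}...A_{i_n})^{1/n}."""
    best = 0.0
    for word in product((A0, A1), repeat=n):
        P = ((1.0, 0.0), (0.0, 1.0))
        for M in word:
            P = mat_mul(P, M)
        best = max(best, spec_rad(P) ** (1.0 / n))
    return best
\end{verbatim}
For a finiteness counterexample the supremum in (\ref{rjsr}) is attained at no finite $n$, so the bound returned above is strictly smaller than $r(A_0,A_1)$ for every $n$; this is precisely what makes such examples delicate to certify.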
The notion of joint spectral radius was introduced by Rota \& Strang \cite{rotastrang},
and notably popularised by Daubechies \& Lagarias \cite{daubechieslagarias} in their work on wavelets.
Since the 1990s it has become an area of very active research interest, from both a pure and an applied perspective
(see e.g.~\cite{blondel, jungers, kozyakinbiblio, strang}).
The set $\mathcal A$ is said to have the \emph{finiteness property} if $r(\mathcal A)=r(A_{i_1}\cdots A_{i_n})^{1/n}$
for some $i_1,\ldots, i_n\in\{0,\ldots, l\}$.
It was conjectured by Lagarias \& Wang \cite{lagariaswang}
(see also Gurvits \cite{gurvits}) that every such $\mathcal A$ enjoys the finiteness property.
This so-called finiteness conjecture was, however, refuted by Bousch \& Mairesse \cite{bouschmairesse}, and a number of authors (see \cite{btv,hmst, kozyakin, morrissidorov}) have subsequently given examples of sets $\mathcal A$ for which the finiteness property fails.
A common feature of these finiteness counterexamples has been a judicious choice of a pair of $2 \times 2$ matrices $A_0,A_1$, followed by an argument that for certain $t>0$, the finiteness property fails for the set $\mathcal A(t)= \{ A_0^{(t)}, A_1^{(t)}\} = \{A_0, tA_1\}$.
In fact for many of these examples it has been observed that the family $(\mathcal A(t))_{t>0}$ can be associated with the class of \emph{Sturmian} sequences of Morse \& Hedlund \cite{morsehedlund}:
for a given $t>0$ an appropriate Sturmian sequence $(i_n)_{n=1}^\infty \in\{0,1\}^\mathbb N$ turns out to give the optimal matrix product, in the sense that the joint spectral radius $r(\mathcal A(t))$ equals
$\lim_{n\to\infty} r(A_{i_1}^{(t)}\cdots A_{i_n}^{(t)})^{1/n}$
(see \cite{btv,bouschmairesse,hmst,kozyakin,morrissidorov} for further details).
A Sturmian sequence $(i_n)_{n=1}^\infty$ has a well-defined \emph{1-frequency}
$\mathcal{P} = \lim_{N\to\infty} \frac{1}{N} \sum_{n=1}^N i_n$, and it is those sets $\mathcal A(t)$ whose associated Sturmian sequences\footnote{We follow the definition of Sturmian sequence given in \cite{bullettsentenac}, though note that
some authors refer to these as \emph{balanced} sequences,
reserving the nomenclature \emph{Sturmian} precisely for those balanced sequences with irrational 1-frequency.}
have \emph{irrational} 1-frequency which yield counterexamples to the finiteness conjecture
(see Proposition \ref{irrationalcounterexampleconnection} below for a more precise description of the connection between finiteness counterexamples and Sturmian sequences with irrational 1-frequency).
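For concreteness, a Sturmian sequence of 1-frequency $\mathcal{P}$ may be realised as a mechanical word $i_n = \lfloor (n+1)\mathcal{P}+\phi\rfloor - \lfloor n\mathcal{P}+\phi\rfloor$ for some phase $\phi$; the sketch below (illustrative Python, using this standard construction) generates such a sequence and confirms its 1-frequency empirically.
\begin{verbatim}
from math import floor

def sturmian_word(P, N, phase=0.0):
    """First N letters of the mechanical word of 1-frequency P:
    i_n = floor((n+1)P + phase) - floor(nP + phase)."""
    return [floor((n + 1) * P + phase) - floor(n * P + phase)
            for n in range(N)]

P = (3 - 5 ** 0.5) / 2                 # an irrational 1-frequency
w = sturmian_word(P, 100000)
print(sum(w) / len(w))                 # Birkhoff average, approximately P
\end{verbatim}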
For certain such families $(\mathcal A(t))_{t>0}$ (which henceforth we refer to as \emph{Sturmian families}), it has been proved by Morris \& Sidorov \cite{morrissidorov}
(see also \cite[p.~109]{bouschmairesse})
that
if $\mathcal{P}(t)$ denotes the 1-frequency associated
to $\mathcal A(t)$, then the parameter
mapping $t\mapsto \mathcal{P}(t)$ is continuous and monotone, but \emph{singular}
in the sense that
$\{t>0:\mathcal{P}(t)\notin \mathbb{Q}\}$ is nowhere dense; in other words,
the uncountably many parameters
$t$ for which finiteness counterexamples occur only constitute a thin subset\footnote{The belief that finiteness counterexamples are rare appears to be widespread; for example Maesumi \cite{maesumi} conjectures
that they constitute a set of (Lebesgue) measure zero in the space of matrices.}
of $\mathbb{R}^+$.
Examples of Sturmian families $(\mathcal A(t))_{t>0}$
have been given by
Bousch \& Mairesse \cite{bouschmairesse}, who considered the family
generated by matrix pairs of the form
\begin{equation}\label{bmfamily}
\mathcal A= \left( \begin{pmatrix} e^{\kappa h_0}+1 & 0 \cr e^\kappa & 1 \cr \end{pmatrix}\, , \
\begin{pmatrix} 1 & e^\kappa \cr 0 & e^{\kappa h_1} +1 \cr \end{pmatrix} \right)\,,\,
\kappa>0\,, h_0,h_1>0\,, h_0+h_1 < 2\,,
\end{equation}
by Kozyakin \cite{kozyakin}, who studied the family
generated by pairs of the form
\begin{equation}\label{kozyakinpairs}
\mathcal A = \left( \begin{pmatrix} 1 & 0 \cr c & d \cr \end{pmatrix}\, , \
\begin{pmatrix} a & b \cr 0 & 1 \cr \end{pmatrix} \right) \,,\,
0<a,d<1\le bc\,,
\end{equation}
and by various authors \cite{btv,hmst,morrissidorov} focusing on the family generated by
the particular pair
\begin{equation}\label{standardpair}
\mathcal A = \left( \begin{pmatrix} 1 & 0 \cr 1 & 1 \cr \end{pmatrix}\, , \
\begin{pmatrix} 1 & 1 \cr 0 & 1 \cr \end{pmatrix} \right) \,.
\end{equation}
For an invertible matrix $P$,
the simultaneous similarity $(A_0,A_1)\mapsto (P^{-1}A_0P, P^{-1}A_1P)$
leaves invariant the joint spectral radius,
and does not change the sequences $(i_n)_{n=1}^\infty$ attaining the optimal matrix product,
while if $u,v>0$ then $(uA_0,vA_1)$
has the same optimizing sequences as $(A_0, (v/u)A_1)$.
Therefore, declaring $\mathcal A=(A_0,A_1)$ and $\mathcal A'=(A_0',A_1')$ to be equivalent if
$A_0'=uP^{-1}A_0P$ and $A_1'=vP^{-1}A_1P$ for some invertible $P$ and $u,v>0$, we see that the
equivalence of $\mathcal A$ and $\mathcal A'$ implies that $(\mathcal A(t))_{t>0}$ is a Sturmian family if and only if
$(\mathcal A'(t))_{t>0}$ is.
In particular, $(\mathcal A'(t))_{t>0}$ is a Sturmian family whenever $\mathcal A'$ is equivalent to a matrix pair $\mathcal A$ of the form
(\ref{bmfamily}), (\ref{kozyakinpairs}), or (\ref{standardpair}).
The purpose of this article is to introduce an approach to studying the joint spectral radius and
generating finiteness counterexamples, which in particular yields new examples of Sturmian
families $(\mathcal A(t))_{t>0}$, i.e.~where $\mathcal A$ is not equivalent to a matrix pair of the form
(\ref{bmfamily}), (\ref{kozyakinpairs}), or (\ref{standardpair}).
Our method is conceptually different from those of previous authors, employing notions from
dynamical systems, ergodic theory, and in particular ergodic optimization (see e.g.~\cite{jeo}).
Specifically, we identify a dynamical system $T_{\mathcal A}$ with the matrix pair $\mathcal A=(A_0,A_1)$,
and cast the problem of determining the joint spectral radius $r(\mathcal A)$ in terms of ergodic optimization
(see Theorem \ref{maxtheoremintro} below): it suffices to determine the
$T_{\mathcal A}$-invariant probability measure which maximizes the integral of a certain auxiliary real-valued function $f_{\mathcal A}$.
Working with the family of \emph{$\mathcal A$-Sturmian measures} (certain probability measures
invariant under $T_{\mathcal A}$) instead of Sturmian sequences, we exploit a characterisation of these measures in terms of the smallness of their support to show that they give precisely the family
of $f_{\mathcal A(t)}$-maximizing measures, $t>0$.
In particular, whenever the $f_{\mathcal A(t)}$-maximizing measure is Sturmian of \emph{irrational} parameter
$\mathcal{P}(t)$ then $\mathcal A(t)$ is a finiteness counterexample (cf.~Proposition \ref{irrationalcounterexampleconnection}).
The $\mathcal A$-Sturmian measures are naturally identified with Sturmian measures on $\Omega = \{0,1\}^\mathbb N$, the full shift on two symbols (see Notation \ref{sturmianfullshift}). A notable feature of our approach is that the singularity of the parameter mapping
$t\mapsto \mathcal{P}(t)$ (and in particular the fact that $\{t>0:\mathcal{P}(t)\notin\mathbb{Q}\}$ is nowhere dense in $\mathbb{R}^+$)
is then readily deduced (see Theorem \ref{deviltheorem} in \S \ref{devilsection}) as a consequence of classical facts about
parameter dependence of Sturmian measures on $\Omega$
(i.e.~rather than requiring the {\it ab initio} approach of \cite{morrissidorov}).
train | 0.4.2 | \subsection{Statement of results}\mathcal Label{statementsubsection}
We use $M_2(\mathcal Mathbb{R})$ to denote the set of real $2 \mathcal Times 2$ matrices,
and focus attention on certain of its open subsets:
\begin{notation}
$M_2(\mathcal Mathbb{R}^+)$ will denote the set of \mathcal Emph{positive matrices}, i.e.~matrices in $M_2(\mathcal Mathbb{R})$
with entries in $\mathcal Mathbb{R}^+ =\{x\in\mathcal Mathbb{R}:x>0\}$,
and
$M_2^+(\mathcal Mathbb{R}^+)=\{A\in M_2(\mathcal Mathbb{R}^+):\det A>0\}$
will denote the set of \mathcal Emph{positive orientation-preserving matrices}.
\mathcal End{notation}
Turning to \mathcal Emph{pairs} of matrices, we shall consider the following open subset
of $M_2^+(\mathcal Mathbb{R}^+)^2$:
\begin{defn}\mathcal Label{frakmdefn}
Let $\mathcal Mm\subset M_2^+(\mathcal Mathbb{R}^+)^2$ denote the set of matrix pairs
\begin{gather*}
\mathcal Left( A_0, A_1\mathcal Right)
= \mathcal Left( \begin{pmatrix} a_0&b_0\\c_0&d_0\mathcal End{pmatrix} ,
\begin{pmatrix} a_1&b_1\\ c_1& d_1\mathcal End{pmatrix} \mathcal Right) \in M_2^+(\mathcal Mathbb{R}^+)^2
\mathcal End{gather*}
satisfying
\begin{equation}\mathcal Label{definequal1}
\frac{a_0}{c_0} < \frac{b_1}{d_1}
\mathcal End{equation}
and
\begin{equation}\mathcal Label{definequal2}
a_1+c_1-b_1-d_1 < 0 < a_0+c_0-b_0-d_0\,.
\mathcal End{equation}
For reasons which will become apparent later (see Proposition \mathcal Ref{concavconvexprop}),
$\mathcal Mm$ will be referred to as the set of \mathcal Emph{concave-convex matrix pairs}.
\mathcal End{defn}
Finally, our counterexamples to the Lagarias-Wang finiteness conjecture will be drawn from
a certain open subset $\mathfrak D$
(given by Definition \ref{defnofnn} below) of $\mathfrak M$
which is conveniently described in terms of quantities $\rho_A$ and $\sigma_A$ defined as follows:
\begin{defn}
For $A= \begin{pmatrix} a&b\\c&d\end{pmatrix} \in M_2^+(\mathbb{R}^+)$,
define
$$
\rho_A = \frac{2b}{a-d-2b+\sqrt{(a-d)^2+4bc}}\,,
$$
and if $a+c\neq b+d$ then define
$$
\sigma_A = \frac{b-a}{a+c-b-d}\,.
$$
\end{defn}
It turns out (see Corollary \ref{rhoposlessminusone}) that if
$(A_0,A_1)\in\mathfrak M$
then
$\sigma_{A_0}<0<\rho_{A_0}$ and $\rho_{A_1}<-1$.
The set $\mathfrak D$ is defined by imposing two inequalities:
\begin{defn}\label{defnofnn}
Define
\begin{equation*}
\mathfrak D
=
\left\{\, (A_0,A_1)\in\mathfrak M : \
\rho_{A_1} < \sigma_{A_0}
\ \text{and}\
\sigma_{A_1} < \rho_{A_0} \right\}\,.
\end{equation*}
\end{defn}
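The membership conditions above are easily tested numerically; the following Python sketch (illustrative only, with function names of our choosing) evaluates $\rho_A$ and $\sigma_A$ and checks whether a matrix pair lies in $\mathfrak M$ (via (\ref{definequal1}) and (\ref{definequal2})) and in $\mathfrak D$.
\begin{verbatim}
from math import sqrt

def rho(A):
    (a, b), (c, d) = A
    return 2 * b / (a - d - 2 * b + sqrt((a - d) ** 2 + 4 * b * c))

def sigma(A):
    (a, b), (c, d) = A
    return (b - a) / (a + c - b - d)

def in_M(A0, A1):
    """The two defining inequalities of the concave-convex pairs,
    together with positivity of entries and determinants."""
    (a0, b0), (c0, d0) = A0
    (a1, b1), (c1, d1) = A1
    pos = all(x > 0 for M in (A0, A1) for row in M for x in row)
    posdet = a0 * d0 > b0 * c0 and a1 * d1 > b1 * c1
    return (pos and posdet and a0 / c0 < b1 / d1
            and a1 + c1 - b1 - d1 < 0 < a0 + c0 - b0 - d0)

def in_D(A0, A1):
    return (in_M(A0, A1) and rho(A1) < sigma(A0)
            and sigma(A1) < rho(A0))

# a member of the two-parameter family D_2 defined below,
# with (b, c) = (1/2, 3/2):
assert in_D(((1, 0.5), (1.5, 1)), ((1, 1.5), (0.5, 1)))
\end{verbatim}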
Clearly $\mathfrak D$ is an open subset of $\mathfrak M$, hence also of $M_2(\mathbb{R})^2$.
It is also non-empty: for example
it is readily verified that the two-parameter family
\begin{equation}\label{2dimreg}
\mathfrak D_2 = \left\{
\left( \begin{pmatrix} 1&b\\c&1\end{pmatrix} ,
\begin{pmatrix} 1& c\\ b & 1\end{pmatrix} \right)
: \
bc < 1 < c \ \ , \ \ (b,c)\in (\mathbb{R}^+)^2
\right\}
\end{equation}
is a subset of $\mathfrak D$.
Note that the pair (\ref{standardpair})
studied in \cite{btv, hmst,morrissidorov}, and
corresponding to $(b,c)=(0,1)$ in (\ref{2dimreg}), lies on the boundary of both $\mathfrak D$ and $\mathfrak D_2$.
\medskip
A version of our main result is the following:
\begin{theorem}\label{nontechnictheorem}
The open subset $\mathfrak D\subset M_2(\mathbb{R})^2$ is such that if $\mathcal A=(A_0,A_1)\in\mathfrak D$ then
for uncountably many $t\in\mathbb{R}^+$, the matrix
pair $(A_0,tA_1)$
is a
finiteness counterexample.
\end{theorem}
If we define $\mathfrak E \subset M_2(\mathbb R)^2$
to be the set of matrix pairs which are equivalent to some pair in $\mathfrak D$
(recall that
$\mathcal A=(A_0,A_1)$ and $\mathcal A'=(A_0',A_1')$ are equivalent if
$A_0'=uP^{-1}A_0P$ and $A_1'=vP^{-1}A_1P$ for some invertible $P$ and $u,v>0$) then clearly:
\begin{cor}\label{nontechnictheoremcor}
The open subset $\mathfrak E\subset M_2(\mathbb{R})^2$ is such that if $\mathcal A=(A_0,A_1)\in\mathfrak E$ then
for uncountably many $t\in\mathbb{R}^+$, the matrix
pair $(A_0,tA_1)$
is a
finiteness counterexample.
\end{cor}
\begin{remark}
Theorem \ref{nontechnictheorem} yields new finiteness counterexamples, in the sense that $\mathfrak D$ contains matrix pairs
which are not equivalent to pairs satisfying (\ref{bmfamily}), (\ref{kozyakinpairs}), or (\ref{standardpair}).
To see this, note for example that
\begin{equation}\label{exampleofa}
\mathcal A = (A_0,A_1) =
\left( \begin{pmatrix} 5/8 & 3/112 \cr 7/8 & 15/16 \cr \end{pmatrix}\, , \
\begin{pmatrix} 15/16 & 1 \cr 1/128 & 7/8 \cr \end{pmatrix} \right)
\end{equation}
belongs to $\mathfrak D$.
Both $A_0$ and $A_1$ have their larger eigenvalue equal to 1, and smaller eigenvalues given by
$\lambda_0=9/16$ and $\lambda_1=13/16$, respectively.
Now both matrices in (\ref{standardpair}) have the single eigenvalue 1, so (\ref{exampleofa}) cannot be equivalent to (\ref{standardpair}); moreover (\ref{exampleofa}) is not equivalent to any matrix pair satisfying (\ref{bmfamily}), since both matrices in (\ref{bmfamily}) have the property that the larger eigenvalue is more than double the smaller eigenvalue.
Lastly, we show that $\mathcal A$ in (\ref{exampleofa}) is not equivalent to any pair $\mathcal A'$ satisfying (\ref{kozyakinpairs}),
i.e.~$\mathcal A'=(A_0',A_1')$ with
$A_0' = \begin{pmatrix} 1 & 0 \cr c & d \cr \end{pmatrix}$,
$A_1'= \begin{pmatrix} a & b \cr 0 & 1 \cr \end{pmatrix}$,
where $0<a,d<1\le bc$.
Note that both $A_0'$ and $A_1'$ have their larger eigenvalue equal to 1 (as is the case for $A_0$ and $A_1$),
and smaller eigenvalues equal to $d$ and $a$, respectively.
Thus if $\mathcal A$ and $\mathcal A'$ were equivalent then there would exist an invertible $P$ such that
$A_0'=P^{-1}A_0P$ and $A_1'=P^{-1}A_1P$ (i.e.~the positive reals $u,v$ in the above definition of equivalence must both equal 1), so that $d=\lambda_0=9/16$, $a=\lambda_1=13/16$, and $\text{trace}(A_0A_1)=\text{trace}(A_0'A_1')$.
In particular,
$1 \le bc = \text{trace}(A_0'A_1')-d-a = \text{trace}(A_0A_1)-\lambda_0-\lambda_1 = 12995/14336 <1$, a contradiction.
It follows that $\mathcal A\in\mathfrak D$ given by (\ref{exampleofa}) is not equivalent to any matrix pair satisfying
(\ref{kozyakinpairs}).
\end{remark}
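The rational arithmetic in the preceding remark can be machine-checked exactly; the following sketch (Python, exact fractions) confirms the claimed eigenvalues and the value $12995/14336$.
\begin{verbatim}
from fractions import Fraction as F

A0 = [[F(5, 8), F(3, 112)], [F(7, 8), F(15, 16)]]
A1 = [[F(15, 16), F(1)], [F(1, 128), F(7, 8)]]

def char(M, lam):   # characteristic polynomial det(M - lam * I)
    return (M[0][0] - lam) * (M[1][1] - lam) - M[0][1] * M[1][0]

assert char(A0, F(1)) == 0 and char(A0, F(9, 16)) == 0
assert char(A1, F(1)) == 0 and char(A1, F(13, 16)) == 0

P = [[sum(A0[i][k] * A1[k][j] for k in range(2)) for j in range(2)]
     for i in range(2)]
assert P[0][0] + P[1][1] - F(9, 16) - F(13, 16) == F(12995, 14336)
\end{verbatim}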
A key tool in proving Theorem \ref{nontechnictheorem} is the following Theorem \ref{maxtheoremintro}
(proved in \S \ref{induceddynsyssection} as Theorem \ref{maxtheorem})
characterising the joint spectral radius of $\mathcal A\in\mathfrak M$ in terms of maximizing the integral of a certain
function $f_{\mathcal A}$ over the set $\mathcal M_{\mathcal A}$ of probability measures invariant under an associated mapping $T_{\mathcal A}$.
More precisely, the action of any positive matrix $A$ on $(\mathbb{R}^+)^2$ induces a projective map $T_A$ (see \S \ref{inducproj}),
and if $\mathcal A=(A_0,A_1)\in\mathfrak M$
then the inverses $T_{A_0}^{-1}$, $T_{A_1}^{-1}$
together define a two-branch
dynamical system $T_{\mathcal A}$ (see \S \ref{induceddynsyssection}) on a subset of the unit interval $X$.
Defining the real-valued function $f_{\mathcal A}$,
in terms of the derivative $T_{\mathcal A}'$ and characteristic functions of the images $T_{A_0}(X)$ and $T_{A_1}(X)$,
by
\begin{equation*}
f_{\mathcal A} =
\frac{1}{2}
\left(
\log T_{\mathcal A}' +(\log \det A_0)\mathbbm{1}_{T_{A_0}(X)} +(\log \det A_1)\mathbbm{1}_{T_{A_1}(X)}
\right)
\end{equation*}
then gives:
\begin{theorem}\label{maxtheoremintro}
If $\mathcal A\in\mathfrak M$ then
\begin{equation}\label{maxtheoremintroeq}
\log r(\mathcal A) =
\max_{\mu\in\mathcal M_{\mathcal A}} \int f_{\mathcal A}\, d\mu \,.
\end{equation}
\end{theorem}
\medskip
In order to state a more precise version of Theorem \ref{nontechnictheorem},
we first need
some
basic facts concerning ergodic theory, symbolic dynamics, and Sturmian measures:
\begin{notation}\label{sturmianfullshift}
Let $\Omega = \{0,1\}^\mathbb N$ denote the set of one-sided sequences $\omega=(\omega_n)_{n=1}^\infty$,
where $\omega_n\in\{0,1\}$ for all $n\ge1$.
When equipped with the product topology, $\Omega$ becomes a compact space,
and the shift map $\sigma:\Omega\to\Omega$ defined by
$(\sigma\omega)_n = \omega_{n+1}$ for all $n\ge1$ is then continuous.
Let $\mathcal M$ denote the set of shift-invariant Borel probability measures
on $\Omega$; when equipped with the weak-$*$ topology $\mathcal M$ is compact (see \cite[Thm.~6.10]{walters}).
We equip $\Omega$ with the lexicographic order $<$,
and write $[\omega^-,\omega^+]=\{\omega\in\Omega: \omega^-\le \omega \le \omega^+\}$.
A \emph{Sturmian interval} is one of the form $[0\omega,1\omega]$, for some $\omega\in\Omega$.
A measure $\mu\in\mathcal M$ is called \emph{Sturmian} (see e.g.~\cite[Prop.~1.5]{bouschmairesse}, \cite{bullettsentenac})
if its support is contained in a Sturmian interval.
Let $\mathcal{S}\subset \mathcal M$ denote the class of Sturmian measures on $\Omega$.
For a Sturmian measure $\mu\in\mathcal{S}$, the value
$\mu([1])$, denoted $\mathcal{P}(\mu)$, is called its \emph{(Sturmian) parameter}\footnote{This
corresponds to the \emph{1-frequency} mentioned in \S \ref{problemsetting}, sometimes called
the \emph{1-ratio} (see e.g.~\cite{hmst, morrissidorov}),
or the \emph{rotation number} (see e.g.~\cite{bullettsentenac}).}, where $[1]$ denotes the (cylinder) set
$\{\omega\in\Omega : \omega_1=1\}$.
A \emph{Sturmian sequence} of parameter $\mathcal{P}$ is any point in the support of the Sturmian measure of parameter $\mathcal{P}$.
\end{notation}
The following are classical facts about Sturmian measures (see e.g.~\cite[\S 1.1]{bouschmairesse} or \cite{bullettsentenac}):
\begin{prop}\label{sturmianclassical}
\item[\, (a)]
For each Sturmian interval $[0\omega,1\omega]\subset\Omega$ there exists a unique Sturmian measure
whose support is contained in this interval.
\item[\, (b)]
The mapping $\mathcal{P}:\mathcal{S}\to[0,1]$ is a homeomorphism.
If $\mu\in\mathcal{S}$ has $\mathcal{P}(\mu)\in\mathbb{Q}$ then its support is a single $\sigma$-periodic orbit, while if $\mathcal{P}(\mu)\notin\mathbb{Q}$ then its support
is a Cantor subset of $\Omega$ which supports no other $\sigma$-invariant measure (and in particular contains no periodic orbit).
\item[\, (c)]
If $d(\omega)$ denotes the Sturmian parameter of the Sturmian
measure supported by the Sturmian interval $[0\omega,1\omega]\subset \Omega$,
then the map $d:\Omega\to [0,1]$ is
continuous, non-decreasing, and surjective.
The preimage $d^{-1}(\mathcal{P})$ is a singleton if $\mathcal{P}$ is irrational,
and a positive-length closed interval if $\mathcal{P}$ is rational.
\end{prop}
train | 0.4.3 | \begin{theorem}\mathcal Label{maxtheoremintro}
If $\mathcal Mathcal A\in\mathcal Mm$ then
\begin{equation}\mathcal Label{maxtheoremintroeq}
\mathcal Log r(\mathcal Mathcal A) =
\mathcal Max_{\mathcal Mu\in\mathcal M_\mathcal Mathcal A} \int f_\mathcal Mathcal A\, d\mathcal Mu \,.
\mathcal End{equation}
\mathcal End{theorem}
\mathcal Medskip
In order to state a more precise version of Theorem \mathcal Ref{nontechnictheorem},
we first need
some
basic facts concerning ergodic theory, symbolic dynamics, and Sturmian measures:
\begin{notation}\mathcal Label{sturmianfullshift}
Let $\Omega = \{0,1\}^\mathbb N$ denote the set of one-sided sequences $\omega=(\omega_n)_{n=1}^\infty$,
where $\omega_n\in\{0,1\}$ for all $n\ge1$.
When equipped with the product topology, $\Omega$ becomes a compact space,
and the shift map $\sigma:\Omega\mathcal To\Omega$ defined by
$(\sigma\omega)_n = \omega_{n+1}$ for all $n\ge1$ is then continuous.
Let $\mathcal M$ denote the set of shift-invariant Borel probability measures
on $\Omega$; when equipped with the weak-$*$ topology $\mathcal M$ is compact (see \cite[Thm.~6.10]{walters}).
We equip $\Omega$ with the lexicographic order $<$,
and write $[\omega^-,\omega^+]=\{\omega\in\Omega: \omega^-\mathcal Le \omega \mathcal Le \omega^+\}$.
A \mathcal Emph{Sturmian interval} is one of the form $[0\omega,1\omega]$, for some $\omega\in\Omega$.
A measure $\mathcal Mu\in\mathcal M$ is called \mathcal Emph{Sturmian} (see e.g.~\cite[Prop~1.5]{bouschmairesse}, \cite{bullettsentenac})
if its support is contained in a Sturmian interval.
Let $\mathcal Mathcal{S}\subset \mathcal M$ denote the class of Sturmian measures on $\Omega$.
For a Sturmian measure $\mathcal Mu\in\mathcal Mathcal{S}$, the value
$\mathcal Mu([1])$, denoted $\mathcal Mathcal{P}(\mathcal Mu)$, is called its \mathcal Emph{(Sturmian) parameter}\footnote{This
corresponds to the \mathcal Emph{1-frequency} mentioned in \S \mathcal Ref{problemsetting}, sometimes called
the \mathcal Emph{1-ratio} (see e.g.~\cite{hmst, morrissidorov}),
or the \mathcal Emph{rotation number} (see e.g.~\cite{bullettsentenac}).}, where $[1]$ denotes the (cylinder) set
$\{\omega\in\Omega : \omega_1=1\}$.
A \mathcal Emph{Sturmian sequence} of parameter $\mathcal Mathcal{P}$ is any point in the support of the Sturmian measure of parameter $\mathcal Mathcal{P}$.
\mathcal End{notation}
The following are classical facts about Sturmian measures (see e.g.~\cite[\S 1.1]{bouschmairesse} or \cite{bullettsentenac}):
\begin{prop}\mathcal Label{sturmianclassical}
\item[\, (a)]
For each Sturmian interval $[0\omega,1\omega]\subset\Omega$ there exists a unique Sturmian measure
whose support is contained in this interval.
\item[\, (b)]
The mapping $\mathcal Mathcal{P}:\mathcal Mathcal{S}\mathcal To[0,1]$ is a homeomorphism.
If $\mathcal Mu\in\mathcal Mathcal{S}$ has $\mathcal Mathcal{P}(\mathcal Mu)\in\mathcal Mathbb{Q}$ then its support is a single $\sigma$-periodic orbit, while if $\mathcal Mathcal{P}(\mathcal Mu)\notin\mathcal Mathbb{Q}$ then its support
is a Cantor subset of $\Omega$ which supports no other $\sigma$-invariant measure (and in particular contains no periodic orbit).
\item[\, (c)]
If $d(\omega)$ denotes the Sturmian parameter of the Sturmian
measure supported by the Sturmian interval $[0\omega,1\omega]\subset \Omega$,
then the map $d:\Omega\mathcal To [0,1]$ is
continuous, non-decreasing, and surjective.
The preimage $d^{-1}(\mathcal Mathcal{P})$ is a singleton if $\mathcal Mathcal{P}$ is irrational,
and a positive-length closed interval if $\mathcal Mathcal{P}$ is rational.
\mathcal End{prop}
For example the Sturmian measures of parameter
$1/2$, $1/3$, $2/5$, $3/8$ and $5/13$ are, respectively,
supported by the $\sigma$-periodic orbits generated by the finite words
$$
01
\, ,\
001
\, ,\
00101
\, ,\
00100101
\, ,\
0010010100101
\,,
$$
whereas the Sturmian measure of parameter
$(3-\sqrt{5})/2$ is
supported by
the smallest Cantor set
containing the $\sigma$-orbit of
$$
0010010100100101001010010010100101
\ldots
$$
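These examples are readily reproduced: each displayed word is (one period of, or an initial segment of an orbit point of) the mechanical word of the corresponding parameter, as the following illustrative sketch confirms.
\begin{verbatim}
from fractions import Fraction
from math import floor

def word(P, N):
    """First N letters of the mechanical word of parameter P."""
    return "".join(str(floor((n + 1) * P) - floor(n * P))
                   for n in range(N))

for P in (Fraction(1, 2), Fraction(1, 3), Fraction(2, 5),
          Fraction(3, 8), Fraction(5, 13)):
    print(word(P, P.denominator))
    # 01, 001, 00101, 00100101, 0010010100101

print(word((3 - 5 ** 0.5) / 2, 33))
# initial segment of the aperiodic example above
# (up to the choice of point in the orbit closure)
\end{verbatim}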
In view of Theorem \ref{maxtheoremintro}, for a matrix pair $\mathcal A\in\mathfrak M$ we are interested in
measures $\nu\in\mathcal M_{\mathcal A}$ attaining the maximum in (\ref{maxtheoremintroeq}),
i.e.~satisfying $\int f_{\mathcal A}\, d\nu = \max_{\mu\in\mathcal M_{\mathcal A}} \int f_{\mathcal A}\, d\mu$;
such $\nu$ will be called \emph{$f_{\mathcal A}$-maximizing}.
There is a topological conjugacy between $T_{\mathcal A}$ and the shift map
$\sigma:\Omega\to\Omega$,
and this induces a natural homeomorphism between $\mathcal M_{\mathcal A}$ and $\mathcal M$;
the image of any $f_{\mathcal A}$-maximizing measure under this homeomorphism will be
called a \emph{maximizing measure for $\mathcal A$}.
We then say that $\mathcal A=(A_0,A_1)\in \mathfrak M$ \emph{generates a full Sturmian family}
if the set of maximizing measures for the family $\mathcal A(t) = (A_0,tA_1)$, $t\in\mathbb{R}^+$,
is precisely the set $\mathcal{S}$ of all Sturmian measures on $\Omega$.
A more precise version of our main result Theorem \ref{nontechnictheorem} is then the following:
\begin{theorem}\label{maintheorem}
Every matrix pair in the open subset $\mathfrak D\subset M_2(\mathbb{R})^2$
(and hence the open subset $\mathfrak E\subset M_2(\mathbb{R})^2$)
generates a full Sturmian family.
\end{theorem}
Note that Theorem \ref{maintheorem} will follow from a more detailed version,
Theorem \ref{deviltheorem}, which in particular incorporates the
statement that the parameter map $t\mapsto \mathcal{P}(t)$ is a devil's staircase.
train | 0.4.4 | \subsection{Relation with previous results}\mathcal Label{relatsubsection}
The methods of this paper can also be used to give an alternative proof of some of the results mentioned above,
namely establishing the analogue of Theorem \mathcal Ref{maintheorem} in certain cases treated by
Bousch \& Mairesse \cite{bouschmairesse} and Kozyakin \cite{kozyakin},
and the case considered by
Blondel, Theys \& Vladimirov \cite{btv}, Hare, Morris, Sidorov \& Theys \cite{hmst}, and Morris \& Sidorov \cite{morrissidorov}.
As already noted, the matrix pair (\mathcal Ref{standardpair}) lies on the boundary of our open set $\mathfrak D$,
and clearly it also lies on the boundary of the set $\mathcal Mathfrak{K}\subset M_2^+(\mathcal Mathbb{R}^+)^2$ defined by
Kozyakin's conditions (\mathcal Ref{kozyakinpairs}).
It can be checked that $\mathcal Mathfrak{K}$ itself lies in the boundary of our set $\mathcal Mm$, but not in the boundary of $\mathfrak D$.
However, the subset $\mathcal Mathfrak{K}' \subset \mathcal Mathfrak{K}$ defined by
\begin{equation}\mathcal Label{kprime}
\mathcal Mathfrak{K}'
=
\mathcal Left\{
\mathcal Left( \begin{pmatrix} 1 & 0 \cr c & d \cr \mathcal End{pmatrix}\, , \
\begin{pmatrix} a & b \cr 0 & 1 \cr \mathcal End{pmatrix} \mathcal Right) \in\mathcal Mathfrak{K}
:
a\mathcal Le b\ \mathcal Text{and}\ d\mathcal Le c\mathcal Right\} \,,
\mathcal End{equation}
can be readily checked to lie in the boundary of $\mathfrak D$.
Matrices in the Bousch-Mairesse family (\mathcal Ref{bmfamily}) do not all satisfy our condition
(\mathcal Ref{definequal1}), or indeed the corresponding weak inequality,
so do not automatically belong to the boundary of $\mathcal Mm$.
However, imposing the additional condition
\begin{equation}\mathcal Label{bmadditional}
e^{2\kappa} \ge (e^{\kappa h_0}+1)(e^{\kappa h_1}+1)
\mathcal End{equation}
ensures that a matrix pair satisfying (\mathcal Ref{bmfamily}) belongs to the boundary
of $\mathcal Mm$, and indeed also belongs to the boundary of $\mathfrak D$.
In \S \mathcal Ref{adaptations} we will indicate the minor modifications to our approach needed to
handle the case of (\mathcal Ref{standardpair}), and the sub-cases of (\mathcal Ref{bmfamily}) and (\mathcal Ref{kozyakinpairs})
defined by (\mathcal Ref{bmadditional}) and (\mathcal Ref{kprime}) respectively.
\subsection{Organisation of article}
The article is organised as follows. Section \mathcal Ref{prelimsection} consists of preliminaries:
maps induced by matrices acting on projective space, Perron-Frobenius theory, and some useful notation
and identities. Section \mathcal Ref{projconvprojconc} develops the notions of projective convexity and projective concavity.
Section \mathcal Ref{induceddynsyssection} introduces the induced dynamical system $T_\mathcal Mathcal A$ for concave-convex matrix pairs $\mathcal Mathcal A$,
the formulation of joint spectral radius in terms of ergodic optimization (Theorem \mathcal Ref{maxtheorem}),
and the connection between the finiteness property and $T_\mathcal Mathcal A$-periodic orbits.
Section \mathcal Ref{sturmasection} introduces Sturmian measures and Sturmian intervals for the dynamical system $T_\mathcal Mathcal A$,
and makes the connection between finiteness counterexamples and unique maximizing measures which are Sturmian of irrational parameter.
Section \mathcal Ref{transfersection} establishes the existence of an important technical tool, the Sturmian transfer function.
After deriving some explicit formulae for extremal Sturmian intervals in Section \mathcal Ref{extsturmsection},
the key Section \mathcal Ref{particularsection} establishes the link between
Sturmian intervals and the parameter $t$ of the pair $\mathcal Mathcal A(t)$.
Section \mathcal Ref{dominatessec} treats the case of those parameters $t$ such that one matrix in the pair $\mathcal Mathcal A(t)$
dominates the other, so that the joint spectral radius $r(\mathcal Mathcal A(t))$ is simply the spectral radius of the dominating matrix.
All other parameters are considered in Section \mathcal Ref{technicalsection}, establishing that the joint spectral radius
is always attained by a unique Sturmian measure.
Finally, in Section \mathcal Ref{devilsection} we show that the map taking parameter values $t$ to the associated Sturmian
parameter
$\mathcal Mathcal{P}(t)$ is a devil's staircase. | 1,174 | 61,706 | en |
train | 0.4.5 | \section{Preliminaries}\mathcal Label{prelimsection}
\subsection{The induced map for a positive matrix}\mathcal Label{inducproj}
\begin{notation}
Throughout we use the notation $X=[0,1]$.
\mathcal End{notation}
A positive matrix $A\in M_2(\mathcal Mathbb{R}^+)$ gives a self-map
$v \mathcal Mapsto Av$
of $(\mathcal Mathbb{R}^+)^2$.
This lifts to a self-map $\widetilde A: [v]\mathcal Mapsto [Av]$
of projective space $(\mathcal Mathbb{R}^+)^2/\sim$, the equivalence relation $\sim$ being defined by
$v\sim v'$ if $v=sv'$ for some $s>0$, and $[v]$ denoting the equivalence class containing $v\in(\mathcal Mathbb{R}^+)^2$.
It is convenient to identify projective space with
\begin{equation*}
\Sigma=\mathcal Left\{ \begin{pmatrix} x\\ 1-x\mathcal End{pmatrix} : x\in (0,1)\mathcal Right\} \,,
\mathcal End{equation*}
so that the projection $\pi:(\mathcal Mathbb{R}^+)^2 \mathcal To \Sigma$ takes the form
$$
\pi: \begin{pmatrix} x\\ y\mathcal End{pmatrix} \mathcal Mapsto \begin{pmatrix} \frac{x}{x+y} \\ \frac{y}{x+y} \mathcal End{pmatrix}\,,
$$
and the projective map is represented as
$\pi\circ A:\Sigma \mathcal To\Sigma$,
taking the explicit form
\begin{equation*}
\pi\circ A:
\begin{pmatrix} x\\ 1-x \mathcal End{pmatrix} \mathcal Mapsto
\begin{pmatrix} \frac{(a-b)x+b}{(a+c-b-d)x + b+d} \\ \frac{(c-d)x+d}{(a+c-b-d)x+b+d}\mathcal End{pmatrix}\,.
\mathcal End{equation*}
This projective mapping is completely determined by its first coordinate, thereby
motivating the following definition of
the self-map $T_A$ of the unit interval $X=[0,1]$:
\begin{defn}\mathcal Label{taxasa}
For $A= \begin{pmatrix} a&b\\c&d\mathcal End{pmatrix} \in M_2^+(\mathcal Mathbb{R}^+)$,
the \mathcal Emph{induced map} $T_A:X\mathcal To X$ is defined by
$$
T_A(x)=\frac{(a-b)x+b}{(a+c-b-d)x+b+d}\,,
$$
the \mathcal Emph{induced image} $X_A$ is defined by
$$X_A=T_A(X)=\mathcal Left[\frac{b}{b+d},\frac{a}{a+c}\mathcal Right]\,,$$
and the \mathcal Emph{induced inverse map} $S_A:X_A\mathcal To X$ is given by
$$
S_A(x)= T_A^{-1}(x) = \frac{ (b+d)x - b}{-(a+c-b-d)x+a-b}
\,.
$$
\mathcal End{defn}
\begin{remark}
Defining $P=\begin{pmatrix} 1 & 0 \\ 1 & 1 \mathcal End{pmatrix}$,
the M\"obius maps $T_A$ and $S_A$ are represented, respectively, by the matrices
$PAP^{-1}$ and $PA^{-1}P^{-1}$.
\mathcal End{remark}
\begin{remark}\mathcal Label{invariantposmultiple}
The objects defined in Definition \mathcal Ref{taxasa} do not change if the matrix $A$
is multiplied by a positive real number; that is,
if $t>0$, $A \in M_2^+(\mathcal Mathbb{R}^+)$,
then $T_{tA}=T_A$ (hence $S_{tA}=S_A$), and $X_{tA}=X_A$.
\mathcal End{remark}
In view of (\mathcal Ref{definequal2}) in the definition of $\mathcal Mm$,
it suffices to restrict attention to matrices
of the following form:
\begin{notation}
Let $\mathbb M$ denote the set of matrices $A= \begin{pmatrix} a&b\\c&d\mathcal End{pmatrix} \in M_2^+(\mathcal Mathbb{R}^+)$
such that $a+c\neq b+d$.
\mathcal End{notation}
\begin{lemma}
For $A= \begin{pmatrix} a&b\\c&d\mathcal End{pmatrix} \in M_2^+(\mathcal Mathbb{R}^+)$, the map $T_A$ has a single fixed point $p_A=T_A(p_A)$ in $X$.
If $A \in \mathbb M$ then
\begin{equation}\mathcal Label{paformula}
p_A
= \frac{a-d-2b+\sqrt{(a-d)^2+4bc}}{2(a+c-b-d)}\,,
\mathcal End{equation}
and if $A\notin\mathbb M$ then
\begin{equation}\mathcal Label{paformula2}
p_A = \frac{b}{2b+d-a}\,.
\mathcal End{equation}
\mathcal End{lemma}
\begin{proof}
Uniqueness follows from the fact that $A$ has all entries strictly positive, and the formulae (\mathcal Ref{paformula})
and (\mathcal Ref{paformula2})
are
straightforward computations.
\mathcal End{proof}
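The induced map and its fixed point are easy to compute in practice; the sketch below (illustrative Python, names ours) implements $T_A$ and the closed-form fixed point of the preceding lemma, and checks $T_A(p_A)=p_A$ on a sample matrix.
\begin{verbatim}
from math import sqrt

def T(A, x):
    """Induced map T_A on X = [0, 1]."""
    (a, b), (c, d) = A
    return ((a - b) * x + b) / ((a + c - b - d) * x + b + d)

def p(A):
    """Fixed point of T_A, via the closed-form expressions above."""
    (a, b), (c, d) = A
    alpha = a + c - b - d
    if alpha != 0:
        return (a - d - 2 * b + sqrt((a - d) ** 2 + 4 * b * c)) \
               / (2 * alpha)
    return b / (2 * b + d - a)

A = ((1, 0.5), (1.5, 1))
assert abs(T(A, p(A)) - p(A)) < 1e-12
print(p(A))    # approximately 0.3660254
\end{verbatim}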
\subsection{Notation and matrix preliminaries}
For a matrix $A= \begin{pmatrix} a&b\\c&d\end{pmatrix} \in M_2^+(\mathbb{R}^+)$,
it will be useful to write
\begin{equation}\label{alphadef}
\alpha_A = a+c-b-d \,,
\end{equation}
\begin{equation}\label{betadef}
\beta_A = a-d-2b \,,
\end{equation}
\begin{equation}\label{gammadef}
\gamma_A = \sqrt{(a-d)^2+4bc}\,,
\end{equation}
noting that these quantities are related by the following identity:
\begin{lemma}\label{alphabetagammalemma}
For $A \in M_2^+(\mathbb{R}^+)$,
\begin{equation}\label{alphabetagamma}
\gamma_A^2 - \beta_A^2
=
4b \alpha_A \,.
\end{equation}
\end{lemma}
\begin{proof}
Straightforward computation.
\end{proof}
For ease of reference it will be convenient to collect together
various previously defined objects expressed in terms of the above notation.
\begin{prop}
For $A \in M_2^+(\mathbb{R}^+)$,
\begin{equation}\label{rhoaredone}
\rho_A = \frac{2b}{\beta_A + \gamma_A}\,,
\end{equation}
\begin{equation*}
T_A(x) = \frac{(a-b)x+b}{\alpha_A x+b+d}\,,
\end{equation*}
\begin{equation*}
S_A(x)
= \frac{(b+d)x-b}{-\alpha_{A}(x+\sigma_{A})} \,,
\end{equation*}
and if moreover $A\in\mathbb M$ then
\begin{equation}\label{sigmaaredone}
\sigma_A = \frac{b-a}{\alpha_A}\,,
\end{equation}
\begin{equation}\label{paformularedone}
p_A = \frac{\beta_A+\gamma_A}{2\alpha_A}
= \frac{b\, \sigma_A}{(b-a)\rho_A}\,.
\end{equation}
The set $\mathfrak M$ can be written as
\begin{equation*}
\mathfrak M
=
\left\{ (A_0,A_1) \in \mathbb M^2 : \frac{a_0}{c_0}<\frac{b_1}{d_1}
\ \text{and}\
\alpha_{A_1} < 0 < \alpha_{A_0} \right\}\,.
\end{equation*}
\end{prop}
\subsection{Perron-Frobenius theory and the joint spectral radius}
\begin{lemma}\label{pflemma}
The dominant (Perron-Frobenius) eigenvalue $\lambda_A>0$ of the matrix
$A= \begin{pmatrix} a&b\\c&d\end{pmatrix} \in M_2^+(\mathbb{R}^+)$
is given by
\begin{equation}\label{lambdaadef}
\lambda_A = \frac{1}{2}\left( a+d +\gamma_A\right)
=\frac{b}{p_A}+a-b
\,,
\end{equation}
with corresponding left eigenvector
\begin{equation*}
w_A = ( a-d +\gamma_A, 2b)\,,
\end{equation*}
and right eigenvector
\begin{equation*}
v_A= \begin{pmatrix} p_A \\ 1-p_A \end{pmatrix} \,.
\end{equation*}
\end{lemma}
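The eigendata of Lemma \ref{pflemma} can be verified numerically; the sketch below (illustrative Python) checks $w_AA=\lambda_Aw_A$ and $Av_A=\lambda_Av_A$ for a sample positive matrix.
\begin{verbatim}
from math import sqrt

a, b, c, d = 1.0, 0.5, 1.5, 1.0
gamma = sqrt((a - d) ** 2 + 4 * b * c)
lam = (a + d + gamma) / 2
w = (a - d + gamma, 2 * b)                          # left eigenvector
pA = (a - d - 2 * b + gamma) / (2 * (a + c - b - d))
v = (pA, 1 - pA)                                    # right eigenvector

assert abs(w[0] * a + w[1] * c - lam * w[0]) < 1e-12
assert abs(w[0] * b + w[1] * d - lam * w[1]) < 1e-12
assert abs(a * v[0] + b * v[1] - lam * v[0]) < 1e-12
assert abs(c * v[0] + d * v[1] - lam * v[1]) < 1e-12
assert abs(b / pA + a - b - lam) < 1e-12            # second expression
\end{verbatim}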
For $A \in M_2^+(\mathbb{R}^+)$,
the derivative of $T_A$ at its fixed point $p_A$ is related to the determinant
and Perron-Frobenius eigenvalue of $A$ as follows:
\begin{lemma}\label{derivfixedpt}
If $A \in M_2^+(\mathbb{R}^+)$ then
\begin{equation*}
T_A'(p_A) = \frac{\det A}{\lambda_A^2}\,.
\end{equation*}
\end{lemma}
\begin{proof}
This is a straightforward computation. If $A\in\mathbb M$ we can use the
expression
$p_A=\frac{\beta_A+\gamma_A}{2\alpha_A}$
(see (\ref{paformularedone})), the
derivative formula
$T_A'(x)
=
\det A\left( \alpha_A x+b+d\right)^{-2}
$, and the fact that
$\lambda_A = \frac{1}{2}\left( a+d +\gamma_A\right)
=\frac{1}{2}(\beta_A+\gamma_A)+b+d$
(see (\ref{lambdaadef})).
If $A\notin\mathbb M$ then $T_A'\equiv \frac{a-b}{b+d}$,
$\lambda_A=b+d$, and the relation $a+c=b+d$ means that $\det A = (a-b)(b+d)$, so the result follows.
\end{proof}
Since the Perron-Frobenius eigenvalue $\lambda_A$ is also the spectral radius $r(A)$,
we obtain the following corollary:
\begin{cor}\label{specradiusta}
If $A \in M_2^+(\mathbb{R}^+)$ then its spectral radius $r(A)$ satisfies
\begin{equation}\label{specradta}
r(A) = \left( \frac{\det A}{T_A'(p_A)}\right)^{1/2}\,.
\end{equation}
\end{cor}
\begin{proof}
Immediate from Lemma \ref{derivfixedpt}.
\end{proof}
\begin{notation}
Let us write finite words using the alphabet $\{0,1\}$ as
$\underline{i}=(i_1,\ldots,i_n)$, and their length as $|\underline{i}|=n$.
Let $\Omega^*$ denote the set of all such finite words; that is, $\Omega^*=\cup_{n\ge1} \{0,1\}^n$.
Given $\mathcal A=(A_0,A_1)\in M_2(\mathbb{R})^2$, and $\underline{i}\in\{0,1\}^n$, let $A(\underline{i})$
denote the product
\begin{equation}\label{matrixproduct}
A(\underline{i}) = A_{i_1}\cdots A_{i_n}\,.
\end{equation}
\end{notation}
Corollary \ref{specradiusta} then allows us to express
the joint spectral radius of a matrix pair
$\mathcal A=(A_0,A_1)\in M_2^+(\mathbb{R}^+)^2$
in terms of induced maps of the products $A(\underline{i})$ as follows:
\begin{prop}\label{jsrderiv}
If $\mathcal A=(A_0,A_1)\in M_2^+(\mathbb{R}^+)^2$, then its joint spectral radius $r(\mathcal A)$ satisfies
\begin{equation}\label{jsrderiveq}
r(\mathcal A)=
\sup_{\underline{i}\in \Omega^*}
\left( \frac{\det A(\underline{i}) }{T_{A(\underline{i}) }' (p_{A(\underline{i} )})} \right)^{1/2|\underline{i}|}\,.
\end{equation}
\end{prop}
\begin{proof}
The
expression (\ref{rjsr}) for the joint spectral radius can be written as
$$r(\mathcal A)
=\sup\left\{ r(A_{i_1}\cdots A_{i_n})^{1/n}:n\ge 1, i_j\in\{0,1\}\right\}
=
\sup_{\underline{i}\in \Omega^*} r(A(\underline{i}))^{1/|\underline{i}|}
\,,
$$
so applying Corollary \ref{specradiusta} with $A$ replaced by $A(\underline{i})$ yields the result.
\end{proof}
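Corollary \ref{specradiusta} admits a direct numerical sanity check; in the sketch below (illustrative Python) the right-hand side of (\ref{specradta}) reproduces the Perron-Frobenius eigenvalue of the matrix $A_0$ from (\ref{exampleofa}).
\begin{verbatim}
from math import sqrt

def spec_rad(A):      # dominant eigenvalue of a positive 2x2 matrix
    (a, b), (c, d) = A
    return (a + d + sqrt((a - d) ** 2 + 4 * b * c)) / 2

def fixed_point(A):
    (a, b), (c, d) = A
    alpha = a + c - b - d
    if alpha != 0:
        return (a - d - 2 * b + sqrt((a - d) ** 2 + 4 * b * c)) \
               / (2 * alpha)
    return b / (2 * b + d - a)

def T_deriv(A, x):
    (a, b), (c, d) = A
    return (a * d - b * c) / ((a + c - b - d) * x + b + d) ** 2

A = ((5 / 8, 3 / 112), (7 / 8, 15 / 16))   # the matrix A_0 of Section 1
det = (5 / 8) * (15 / 16) - (3 / 112) * (7 / 8)
assert abs(sqrt(det / T_deriv(A, fixed_point(A))) - spec_rad(A)) < 1e-12
print(spec_rad(A))                          # 1.0
\end{verbatim}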
train | 0.4.6 | \subsection{Perron-Frobenius theory and the joint spectral radius}
\begin{lemma}\mathcal Label{pflemma}
The dominant (Perron-Frobenius) eigenvalue $\mathcal Lambda_A>0$ of the matrix
$A= \begin{pmatrix} a&b\\c&d\mathcal End{pmatrix} \in M_2^+(\mathcal Mathbb{R}^+)$
is given by
\begin{equation*}
\mathcal Label{lambdaadef}
\mathcal Lambda_A = \frac{1}{2}\mathcal Left( a+d +\gamma_A\mathcal Right)
=\frac{b}{p_A}+a-b
\,,
\mathcal End{equation*}
with corresponding left eigenvector
\begin{equation*}
w_A = ( a-d +\gamma_A, 2b)\,,
\mathcal End{equation*}
and right eigenvector
\begin{equation*}
v_A= \begin{pmatrix} p_A \\ 1-p_A \mathcal End{pmatrix} \,.
\mathcal End{equation*}
\mathcal End{lemma}
For $A \in M_2^+(\mathcal Mathbb{R}^+)$,
the derivative of $T_A$ at its fixed point $p_A$ is related to the determinant
and Perron-Frobenius eigenvalue of $A$ as follows:
\begin{lemma}\mathcal Label{derivfixedpt}
If $A \in M_2^+(\mathcal Mathbb{R}^+)$ then
\begin{equation*}
T_A'(p_A) = \frac{\det A}{\mathcal Lambda_A^2}\,.
\mathcal End{equation*}
\mathcal End{lemma}
\begin{proof}
This is a straightforward computation. If $A\in\mathbb M$ we can use the
expression
$p_A=\frac{\beta_A+\gamma_A}{2\alpha_A}$
(see (\mathcal Ref{paformularedone})), the
derivative formula
$T_A'(x)
=
\det A\mathcal Left( \alpha_A x+b+d\mathcal Right)^{-2}
$, and the fact that
$\mathcal Lambda_A = \frac{1}{2}\mathcal Left( a+d +\gamma_A\mathcal Right)
=\frac{1}{2}(\beta_A+\gamma_A)+b+d$
(see (\mathcal Ref{lambdaadef})).
If $A\notin\mathbb M$ then $T_A'\mathcal Equiv \frac{a-b}{b+d}$,
$\mathcal Lambda_A=b+d$, and the relation $a+c=b+d$ means that $\det A = (a-b)(b+d)$, so the result follows.
\mathcal End{proof}
Since the Perron-Frobenius eigenvalue $\mathcal Lambda_A$ is also the spectral radius $r(A)$,
we obtain the following corollary:
\begin{cor}\mathcal Label{specradiusta}
If $A \in M_2^+(\mathcal Mathbb{R}^+)$ then its spectral radius $r(A)$ satisfies
\begin{equation}\mathcal Label{specradta}
r(A) = \mathcal Left( \frac{\det A}{T_A'(p_A)}\mathcal Right)^{1/2}\,.
\mathcal End{equation}
\mathcal End{cor}
\begin{proof}
Immediate from Lemma \mathcal Ref{derivfixedpt}.
\mathcal End{proof}
\begin{notation}
Let us write finite words using the alphabet $\{0,1\}$ as
$\underline{i}=(i_1,\mathcal Ldots,i_n)$, and their length as $|\underline{i}|=n$.
Let $\Omega^*$ denote the set of all such finite words; that is, $\Omega^*=\cup_{n\ge1} \{0,1\}^n$.
Given $\mathcal Mathcal A=(A_0,A_1)\in M_2(\mathcal Mathbb{R})^2$, and $\underline{i}\in\{0,1\}^n$, let $A(\underline{i})$
denote the product
\begin{equation}\mathcal Label{matrixproduct}
A(\underline{i}) = A_{i_1}\cdots A_{i_n}\,.
\mathcal End{equation}
\mathcal End{notation}
Corollary \mathcal Ref{specradiusta} then allows us to express
the joint spectral radius of a matrix pair
$\mathcal Mathcal A=(A_0,A_1)\in M_2^+(\mathcal Mathbb{R}^+)^2$
in terms of induced maps of the products $A(\underline{i})$ as follows:
\begin{prop}\mathcal Label{jsrderiv}
If $\mathcal Mathcal A=(A_0,A_1)\in M_2^+(\mathcal Mathbb{R}^+)^2$, then its joint spectral radius $r(\mathcal Mathcal A)$ satisfies
\begin{equation}\mathcal Label{jsrderiveq}
r(\mathcal Mathcal A)=
\sup_{\underline{i}\in \Omega^*}
\mathcal Left( \frac{\det A(\underline{i}) }{T_{A(\underline{i}) }' (p_{A(\underline{i} )})} \mathcal Right)^{1/2|\underline{i}|}\,.
\mathcal End{equation}
\mathcal End{prop}
\begin{proof}
The
expression (\mathcal Ref{rjsr}) for the joint spectral radius can be written as
$$r(\mathcal Mathcal A)
=\sup\mathcal Left\{ r(A_{i_1}\cdots A_{i_n})^{1/n}:n\ge 1, i_j\in\{0,1\}\mathcal Right\}
=
\sup_{\underline{i}\in \Omega^*} r(A(\underline{i}))^{1/|\underline{i}|}
\,,
$$
so applying Corollary \mathcal Ref{specradiusta} with $A$ replaced by $A(\underline{i})$ yields the result.
\mathcal End{proof}
\subsection{Some useful formulae}
The purpose of this short subsection is to collect together various formulae
which will prove useful in the sequel.
Firstly, we have the following two expressions for the determinant of $A$ involving $\alpha_A$ and
$\sigma_A$:
\begin{lemma}\label{detalphasigma}
For $A\in \mathbb M$, its determinant can be expressed as
\begin{equation}\label{detac}
\det A = -\alpha_A(a+c)\left( \frac{a}{a+c} + \sigma_A\right)
\end{equation}
and
\begin{equation}\label{detbd}
\det A = -\alpha_A(b+d)\left( \frac{b}{b+d} + \sigma_A\right) \,.
\end{equation}
\end{lemma}
\begin{proof}
Straightforward computation.
\end{proof}
There is a useful alternative way of expressing the quantity $\rho_A$:
\begin{lemma}\label{rhoquadratic}
For $A\in \mathbb M$,
\begin{equation}\label{altrho}
\rho_A = \frac{\gamma_A -\beta_A}{2\alpha_A}\,,
\end{equation}
and $\rho_A$
is the larger
root of the quadratic polynomial $q_A$ defined by
\begin{equation}\label{qdef}
q_A(z) = \alpha_A z^2 +\beta_A z-b
\,.
\end{equation}
\end{lemma}
\begin{proof}
The expression (\ref{altrho}) follows from
(\ref{rhoaredone}) and the identity
(\ref{alphabetagamma}).
The larger root of $q_A$ is computed to be
$$
\frac{-\beta_A+\sqrt{\beta_A^2+4\alpha_A b}}{2\alpha_A}
= \frac{-\beta_A +\gamma_A}{2\alpha_A}\,,
$$
again using (\ref{alphabetagamma}).
\end{proof}
Clearly
\begin{equation}\label{qdet}
q_A(z)
=\det \begin{pmatrix} 1 & z \\ -\alpha_A z & \beta_A z - b \end{pmatrix} \,,
\end{equation}
though the following expression will prove to be more useful:
\begin{lemma}\label{qpolylemma}
For $A\in \mathbb M$,
\begin{equation*}
q_A(z)
=\det \begin{pmatrix} 1 & z \\ b +d -\alpha_A z & (a-b) z -b \end{pmatrix} \,.
\end{equation*}
\end{lemma}
\begin{proof}
Straightforward computation.
\end{proof}
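These identities are easy to spot-check; the following sketch (illustrative Python, on an arbitrarily chosen matrix in $\mathbb M$) verifies $q_A(\rho_A)=0$ together with (\ref{detac}) and (\ref{detbd}).
\begin{verbatim}
from math import sqrt

a, b, c, d = 1.0, 0.5, 1.5, 1.0       # a sample matrix in M
alpha, beta = a + c - b - d, a - d - 2 * b
gamma = sqrt((a - d) ** 2 + 4 * b * c)
rho = 2 * b / (beta + gamma)
sigma = (b - a) / alpha
det = a * d - b * c

assert abs(alpha * rho ** 2 + beta * rho - b) < 1e-12   # q_A(rho_A) = 0
assert abs(det + alpha * (a + c) * (a / (a + c) + sigma)) < 1e-12
assert abs(det + alpha * (b + d) * (b / (b + d) + sigma)) < 1e-12
\end{verbatim}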
train | 0.4.7 | \section{Projective convexity and projective concavity}\mathcal Label{projconvprojconc}
\begin{remark}\mathcal Label{derivativesremark}
\item[\, (a)]
For $A\in\mathbb M$ and $x\in X$, the derivative formula
\begin{equation}\mathcal Label{taderiv}
T_A'(x)
=
\det A\mathcal Left( \alpha_A x+b+d\mathcal Right)^{-2}
\mathcal End{equation}
implies that if $\mathcal Mathcal A\in \mathbb M^2$ then
$T_{A_0}$ and $T_{A_1}$ are orientation preserving.
\item[\, (b)]
For $A\in\mathbb M$ and $x\in X$,
the second derivative formula
\begin{equation}\mathcal Label{tasecondderiv}
T_A''(x)
=
-2\alpha_A \det A \mathcal Left( \alpha_A x+b+d\mathcal Right)^{-3}
\mathcal End{equation}
implies that if $\mathcal Mathcal A\in\mathcal Mm$ then
$T_{A_0}'' < 0$ and $T_{A_1}'' > 0$,
i.e.~$T_{A_0}$ is strictly concave and $T_{A_1}$ is strictly convex.
\mathcal End{remark}
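For instance, for the illustrative matrix $A=\begin{pmatrix} 2&1\\1&1\end{pmatrix}$ considered above, $T_A(x)=\frac{x+1}{x+2}$ has $T_A'(x)=(x+2)^{-2}>0$ and $T_A''(x)=-2(x+2)^{-3}<0$ on $X$, in accordance with (\ref{taderiv}) and (\ref{tasecondderiv}), since $\alpha_A=1>0$ and $\det A=1$.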
Part (b) of Remark \ref{derivativesremark} motivates the following definition, partitioning $\mathbb M$ into two subsets:
\begin{defn}
A matrix $A\in \mathbb M$ will be called \emph{projectively convex} if the induced map $T_A$ is strictly convex, and
\emph{projectively concave} if the induced map $T_A$ is strictly concave.
\end{defn}
\begin{remark}
The set $\mathbb M$ is the disjoint union of the subset of
projectively convex matrices and
the subset of projectively concave matrices.
\end{remark}
Recall that
\begin{equation}\label{weig}
w_A=(w^{(1)}_A,w^{(2)}_A)
= ( a-d +\gamma_A, 2b)
\end{equation}
denotes the Perron-Frobenius left eigenvector of $A\in \mathbb M$,
and that (consequently) the right eigenvector for the other eigenvalue of $A$
is $\begin{pmatrix} w^{(2)}_A \\ -w^{(1)}_A\end{pmatrix}$.
It is useful to record the following identity:
\begin{lemma}\label{wrholem}
For $A = \begin{pmatrix} a&b\\c&d\end{pmatrix} \in \mathbb M$,
\begin{equation}\label{wrho}
\rho_A = \frac{w_A^{(2)}}{w_A^{(1)}-w_A^{(2)}}\,.
\end{equation}
\end{lemma}
\begin{proof}
Immediate from
(\ref{rhoaredone}) and
(\ref{weig}).
\end{proof}
\begin{cor}\label{rhosiminvt}
For $A\in \mathbb M$, if $Q\in M_2(\mathbb{R})$ is non-singular then
$\rho_{Q^{-1}AQ}=\rho_A$; that is, $\rho_A$ is invariant under similarities.
\end{cor}
\begin{proof}
Immediate from Lemma \ref{wrholem}, and the fact that the eigenvector $w_A$ is invariant under similarities.
\end{proof}
There are various useful characterisations of projective convexity and projective concavity:
\begin{lemma}\label{concaveequivconditions}
For $A \in \mathbb M$, the following are equivalent:
\begin{itemize}
\item[\, (i)]
$A$ is projectively concave,
\item[\, (ii)]
$\alpha_A>0$,
\item[\, (iii)]
$\rho_A>0$,
\item[\, (iv)]
$w^{(1)}_A > w^{(2)}_A$.
\end{itemize}
\end{lemma}
\begin{proof}
As noted in Remark \ref{derivativesremark} (b), the second derivative formula
(\ref{tasecondderiv}) yields the equivalence of (i) and (ii), since $\det A >0$, and a function
is strictly concave if and only if its second derivative is strictly negative.
To prove the equivalence of (ii) and (iii), we consider separately the cases where $\beta_A\ge0$ and
$\beta_A<0$.
If $\beta_A\ge0$ then $\alpha_A=\beta_A+b+c>0$, so we must simply
show that $\rho_A>0$.
But $\gamma_A>0$ by definition, hence $\beta_A+\gamma_A>0$, and therefore
(\ref{rhoaredone}) implies that $\rho_A = \frac{2b}{\beta_A + \gamma_A}>0$, as required.
If on the other hand $\beta_A<0$ then $\gamma_A - \beta_A>0$ is automatically true,
again since $\gamma_A>0$ by definition.
Using (\ref{alphabetagamma}) and (\ref{rhoaredone}) we see that
\begin{equation*}
2\alpha_A \rho_A = \gamma_A-\beta_A >0\,,
\end{equation*}
so indeed $\alpha_A>0$ if and only if $\rho_A>0$, as required.
Lastly, the equivalence of (iii) and (iv) is immediate from (\ref{wrho}), since $w_A^{(2)}>0$.
\end{proof}
\begin{lemma}\label{convexequivconditions}
For $A \in \mathbb M$, the following are equivalent:
\begin{itemize}
\item[\, (i)]
$A$ is projectively convex,
\item[\, (ii)]
$\alpha_A<0$,
\item[\, (iii)]
$\rho_A < -1$,
\item[\, (iv)]
$w^{(1)}_A < w^{(2)}_A$.
\end{itemize}
\end{lemma}
\begin{proof}
A function
is strictly convex if and only if its second derivative is strictly positive, so
the equivalence of (i) and (ii)
follows from
(\ref{tasecondderiv}), since $\det A >0$
and $\alpha_Ax+b+d = (a+c)x+(b+d)(1-x)>0$ for all $x\in X$.
To prove that (iii) is equivalent to (iv), note that (\ref{wrho})
gives $w^{(1)}_A=w^{(2)}_A(1+\rho_A^{-1})$; therefore $\rho_A<-1$ if and only if
$1+\rho_A^{-1}\in (0,1)$, if and only if $w^{(1)}_A\in(0,w^{(2)}_A)$.
Lastly, to prove the equivalence of (ii) and (iii), it follows from
Lemma \ref{concaveequivconditions} that $\alpha_A<0$ if and only if $\rho_A<0$,
but this latter inequality in fact implies $w^{(2)}_A-w^{(1)}_A>0$ by (\ref{wrho}), so
$$\rho_A = -1 - \frac{w_A^{(1)}}{w_A^{(2)}-w_A^{(1)}} <-1\,,$$
as required.
\end{proof}
Note that in Lemma \ref{convexequivconditions} the assertion is not merely that $\rho_A<0$, but that $\rho_A<-1$; this should be contrasted with the inequality $\rho_A>0$ in Lemma
\ref{concaveequivconditions}.
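By way of illustration, the matrix $A=\begin{pmatrix} 1&1\\1&2\end{pmatrix}$ has $\alpha_A=-1$, $\beta_A=-3$ and $\gamma_A=\sqrt{5}$, so $\rho_A = -\frac{3+\sqrt{5}}{2}<-1$, and $w_A=(\sqrt{5}-1,2)$ satisfies $w_A^{(1)}<w_A^{(2)}$: all four conditions of Lemma \ref{convexequivconditions} hold, in contrast with the matrix $\begin{pmatrix} 2&1\\1&1\end{pmatrix}$ considered earlier.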
It is now clear why $\mathfrak{m}$ is described as the set of
\emph{concave-convex pairs}\footnote{Note, however, the restriction that the induced images be
\emph{disjoint}, with the concave image to the \emph{left} of the convex one.}:
\begin{prop}\label{concavconvexprop}
The set $\mathfrak{m}$ consists of those matrix pairs $(A_0,A_1)\in \mathbb M^2$ such that
$A_0$ is projectively concave, $A_1$ is projectively convex, and the induced
image for $A_0$ is strictly to the left of the induced image of $A_1$.
\end{prop}
\begin{proof}
Lemmas \ref{concaveequivconditions} and \ref{convexequivconditions}
imply that the inequality $\alpha_{A_1}<0<\alpha_{A_0}$
in Definition \ref{frakmdefn}
is equivalent to
$A_0$ being projectively concave and $A_1$ being projectively convex.
The inequality $\frac{a_0}{c_0} < \frac{b_1}{d_1}$ in Definition \ref{frakmdefn}
is equivalent to
$T_{A_0}(1) = \frac{a_0}{a_0+c_0} < \frac{b_1}{b_1+d_1}=T_{A_1}(0)$, which asserts that the right
endpoint of the induced image $X_{A_0}$ is strictly to the left of the left endpoint
of the induced image $X_{A_1}$.
\end{proof}
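As a running illustration (the pair is chosen for convenience, and plays no role in the formal development), consider
$$\mathcal{A}=(A_0,A_1)
=\left( \begin{pmatrix} 3&1\\5&2\end{pmatrix}, \begin{pmatrix} 1&2\\1&3\end{pmatrix}\right)\,.$$
Here $\alpha_{A_1}=-3<0<5=\alpha_{A_0}$, and $\frac{a_0}{c_0}=\frac{3}{5}<\frac{2}{3}=\frac{b_1}{d_1}$, so $\mathcal{A}\in\mathfrak{m}$: the concave image $X_{A_0}=[\frac13,\frac38]$ lies strictly to the left of the convex image $X_{A_1}=[\frac25,\frac12]$.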
\begin{lemma}\label{lemma1}
If $A\in \mathbb M$ is projectively concave then
$\sigma_{A}<0$.
\end{lemma}
\begin{proof}
Projective concavity of $A$ means that $\alpha_A>0$,
so
by (\ref{sigmaaredone})
it suffices to show that $b<a$.
Since $\det A = ad-bc>0$
and $\alpha_A=a+c-b-d>0$
we derive
$$
a-b>d-c > \frac{bc}{a}-c = -\frac{c}{a}(a-b)\,,
$$
or in other words
$$
(a-b)\left(1+\frac{c}{a}\right)>0\,,
$$
and hence $a-b>0$,
as required.
\end{proof}
train | 0.4.8 | We can now prove the following result mentioned in \S \mathcal Ref{statementsubsection}
(note, however, that there is no constraint on the sign of $\sigma_{A_1}$ when $(A_0,A_1)\in\mathcal Mm$):
\begin{cor}\mathcal Label{rhoposlessminusone}
If $(A_0,A_1)\in \mathcal Mm$ then
$\sigma_{A_0}<0<\mathcal Rho_{A_0}$ and $\mathcal Rho_{A_1}<-1$.
\mathcal End{cor}
\begin{proof}
Immediate from Lemmas \mathcal Ref{concaveequivconditions}, \mathcal Ref{convexequivconditions},
and \mathcal Ref{lemma1}
\mathcal End{proof}
An important result is the following:
\begin{lemma}\label{posneglinfactors}
If $A\in \mathbb M$ then
\begin{equation}\label{alwayspos}
-\alpha_{A}(x+\sigma_{A}) > 0 \quad\text{for all } x\in X_{A}\,.
\end{equation}
In particular, if $A\in \mathbb M$ is projectively concave then
\begin{equation}\label{posneglinfactorseq0}
x+\sigma_{A}<0 \quad\text{for all }x\in X_{A}\,,
\end{equation}
and if $A\in \mathbb M$ is projectively convex then
\begin{equation}\label{posneglinfactorseq1}
x+\sigma_{A}>0 \quad\text{for all }x\in X_{A}\,.
\end{equation}
\end{lemma}
\begin{proof}
Clearly (\ref{alwayspos}) follows from (\ref{posneglinfactorseq0}) and
(\ref{posneglinfactorseq1}),
since $\alpha_A$ is positive if $A$ is projectively concave, and negative if
$A$ is projectively convex,
by Lemmas
\ref{concaveequivconditions} and \ref{convexequivconditions}.
To prove (\ref{posneglinfactorseq0}), note that
$ -\alpha_A(a+c)\left( \frac{a}{a+c} + \sigma_A\right) = \det A >0$
by (\ref{detac}), and if $A$ is projectively concave then $\alpha_A>0$, so
$\frac{a}{a+c} + \sigma_A < 0$. But $\frac{a}{a+c}$ is the righthand endpoint of $X_A$, so if $x\in X_A$ then $x\le \frac{a}{a+c}$, therefore
$
x+\sigma_A \le \frac{a}{a+c} + \sigma_A < 0
$, as required.
To prove (\ref{posneglinfactorseq1}), note that
$-\alpha_A(b+d)\left( \frac{b}{b+d} + \sigma_A\right) = \det A >0$
by (\ref{detbd}), and if $A$ is projectively convex then $\alpha_A<0$, so
$\frac{b}{b+d} + \sigma_A >0$.
But $\frac{b}{b+d}$ is the lefthand endpoint of $X_A$, so if $x\in X_A$ then $x\ge \frac{b}{b+d}$, therefore
$
x+\sigma_A \ge \frac{b}{b+d} + \sigma_A > 0
$, as required.
\end{proof}
\begin{cor}\label{alwaysposcor}
If $(A_0,A_1)\in\mathfrak{m}$ then
$
x+\sigma_{A_0} <0
$
for $x\in X_{A_0}$,
and
$x+\sigma_{A_1} >0
$
for $x\in X_{A_1}$, and
$-\alpha_{A_i}(x+\sigma_{A_i}) > 0$ for all $x\in X_{A_i}$, $i\in\{0,1\}$.
\end{cor}
\begin{proof}
Immediate from Lemma \ref{posneglinfactors}.
\end{proof}
\begin{lemma}\label{qrhoineq}
If $\mathcal{A}=(A_0,A_1)\in\mathfrak{m}$ then
\begin{equation*}
q_{A_1}(\rho_{A_0}) < 0
<
q_{A_0}(\rho_{A_1})\,.
\end{equation*}
\end{lemma}
\begin{proof}
Since $A_1$ is projectively convex, the leading coefficient $\alpha_{A_1}$ of
$q_{A_1}(z)= \alpha_{A_1} z^2 +\beta_{A_1} z-b_1$ is negative, and consequently
$\beta_{A_1}=\alpha_{A_1}-(b_1+c_1)$ is also negative, so all three coefficients of $q_{A_1}$
are negative, and therefore $q_{A_1}(z)<0$ for all $z> 0$.
But $\rho_{A_0} > 0$ by Lemma \ref{concaveequivconditions}, so indeed
$q_{A_1}(\rho_{A_0}) < 0$, as required.
The smaller root of $q_{A_0}$, which we shall denote by $r_{A_0}$, is given by
$$
r_{A_0} = \frac{-(\gamma_{A_0}+\beta_{A_0})}{2\alpha_{A_0}}\,.
$$
It follows that
\begin{equation}\label{qlarger}
q_{A_0}(z)= \alpha_{A_0} z^2 +\beta_{A_0} z-b_0 >0
\quad\text{for all }
z<r_{A_0}\,,
\end{equation}
since the leading coefficient $\alpha_{A_0}$ is positive, $A_0$ being projectively concave.
Now $\rho_{A_1}<-1$ by Lemma \ref{convexequivconditions}, and if we can show that $r_{A_0}> -1$
then it follows that $\rho_{A_1}<r_{A_0}$, and hence $q_{A_0}(\rho_{A_1})>0$ by (\ref{qlarger}).
To show that indeed $r_{A_0}> -1$, note that this inequality is equivalent to $2\alpha_{A_0}-\beta_{A_0}>\gamma_{A_0}$.
Both sides are positive, so this is equivalent to
$(2\alpha_{A_0}-\beta_{A_0})^2>\gamma_{A_0}^2$,
which
using (\ref{alphabetagamma})
becomes $4\alpha_{A_0}(\alpha_{A_0}-\beta_{A_0}) > 4b_0\alpha_{A_0}$.
This latter inequality is equivalent to $\alpha_{A_0}-\beta_{A_0}>b_0$, which is true because in fact
$\alpha_{A_0}-\beta_{A_0}=b_0+c_0>b_0$.
\end{proof}
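For the illustrative pair introduced after Proposition \ref{concavconvexprop}, one has $q_{A_0}(z)=5z^2-z-1$ and $q_{A_1}(z)=-3z^2-6z-2$, with $\rho_{A_0}=\frac{1+\sqrt{21}}{10}\approx 0.558$ and $\rho_{A_1}=-\frac{3+\sqrt{3}}{3}\approx -1.577$; a direct evaluation gives $q_{A_1}(\rho_{A_0})\approx -6.28<0<13.0\approx q_{A_0}(\rho_{A_1})$, as predicted by Lemma \ref{qrhoineq}.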
We deduce the following technical lemma, which will be used in \S \ref{dominatessec}:
\begin{lemma}\label{techderivlemma}
If $\mathcal{A}\in\mathfrak{m}$ then the M\"obius function
\begin{equation*}
x\mapsto \frac{x+\rho_{A_0}}{(b_1+d_1-\alpha_{A_1}\rho_{A_0})x +(a_1-b_1)\rho_{A_0}-b_1}
\end{equation*}
has strictly negative derivative,
while
the M\"obius function
\begin{equation*}
x\mapsto \frac{x+\rho_{A_1}}{(b_0+d_0-\alpha_{A_0}\rho_{A_1})x +(a_0-b_0)\rho_{A_1}-b_0}
\end{equation*}
has strictly positive derivative.
\end{lemma}
\begin{proof}
A general M\"obius map $x\mapsto \frac{Px+Q}{Rx+S}$ has derivative
$D(Rx+S)^{-2}$, where
$D=PS-QR$,
so the derivative is strictly negative if $D<0$, and strictly positive if $D>0$.
For our first M\"obius map we have
$$
D=\det \begin{pmatrix} 1 & \rho_{A_0} \\ b_1 +d_1 -\alpha_{A_1} \rho_{A_0} & (a_1-b_1) \rho_{A_0} -b_1 \end{pmatrix}
= q_{A_1}(\rho_{A_0})
$$
by Lemma \ref{qpolylemma},
and $q_{A_1}(\rho_{A_0})$ is strictly negative by Lemma \ref{qrhoineq}, so the derivative of the map is strictly negative, as required.
For our second M\"obius map we have
$$
D=\det \begin{pmatrix} 1 & \rho_{A_1} \\ b_0 +d_0 -\alpha_{A_0} \rho_{A_1} & (a_0-b_0) \rho_{A_1} -b_0 \end{pmatrix}
= q_{A_0}(\rho_{A_1})
$$
by Lemma \ref{qpolylemma},
and $q_{A_0}(\rho_{A_1})$ is strictly positive by Lemma \ref{qrhoineq}, so the derivative of the map is strictly positive, as required.
\end{proof}
train | 0.4.9 | \section{The induced dynamical system for a concave-convex matrix pair}\mathcal Label{induceddynsyssection}
\subsection{The induced dynamical system and joint spectral radius}
\begin{defn}\mathcal Label{induceddefn}
For a matrix pair $\mathcal Mathcal A=(A_0,A_1)\in\mathcal Mm$,
define the
\mathcal Emph{induced space} $X_\mathcal Mathcal A$ to be
$$
X_\mathcal Mathcal A = X_{A_0}\cup X_{A_1}\,,
$$
and define the \mathcal Emph{induced dynamical system}
$T_\mathcal Mathcal A:X_\mathcal Mathcal A\mathcal To X$ by
\begin{equation*}
T_\mathcal Mathcal A(x) =
\begin{cases}
S_{A_0}(x)&\mathcal Text{if } x\in X_{A_0}\\
S_{A_1}(x)&\mathcal Text{if }x\in X_{A_1}\,.
\mathcal End{cases}
\mathcal End{equation*}
\mathcal End{defn}
\begin{remark}\mathcal Label{taremarks}
\item[\, (a)]
The map $T_\mathcal Mathcal A:X_\mathcal Mathcal A\mathcal To X$ is Lipschitz continuous since $X_\mathcal Mathcal A=X_{A_0}\cup X_{A_1}$ is the union of disjoint intervals $X_{A_0}$ and $X_{A_1}$, and the restriction of $T_\mathcal Mathcal A$ to $X_{A_i}$ is the M\"obius mapping $S_{A_i}$, which is certainly Lipschitz continuous.
\item[\, (b)]
Note that the (surjective) induced dynamical system $T_\mathcal Mathcal A$ is naturally defined as a mapping
from $X_\mathcal Mathcal A$ to $X=[0,1]$.
To view it as a surjective \mathcal Emph{self}-mapping of some set (the natural setting for a dynamical system) we consider its restriction to
the \mathcal Emph{induced Cantor set}
$Y_\mathcal Mathcal A:=\cap_{n\ge0}T_\mathcal Mathcal A^{-n}(X_\mathcal Mathcal A)$,
and note that $T_\mathcal Mathcal A:Y_\mathcal Mathcal A\mathcal To Y_\mathcal Mathcal A$ is topologically conjugate to the shift map
$\sigma$ on $\Omega = \{0,1\}^\mathbb N$.
\mathcal End{remark}
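For the illustrative pair $\mathcal{A}$ introduced after Proposition \ref{concavconvexprop}, the two branches of $T_\mathcal{A}$ are
$$
S_{A_0}(x) = \frac{3x-1}{2-5x} \quad\text{on } X_{A_0}=[\tfrac13,\tfrac38]\,,
\qquad
S_{A_1}(x) = \frac{2-5x}{1-3x} \quad\text{on } X_{A_1}=[\tfrac25,\tfrac12]\,,
$$
each mapping its interval onto $X=[0,1]$, these being the inverses of $T_{A_0}(x)=\frac{2x+1}{5x+3}$ and $T_{A_1}(x)=\frac{2-x}{5-3x}$ respectively.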
\begin{prop}\label{jsrderivta}
If $\mathcal{A}=(A_0,A_1)\in\mathfrak{m}$ then its joint spectral radius $r(\mathcal{A})$ satisfies
\begin{equation}\label{jsrderivtaeq}
r(\mathcal{A})=
\sup_{\underline{i}\in \Omega^*}
\left( \det A(\underline{i}) \,\, (T_{\mathcal{A}}^{|\underline{i}|})' (p_{A(\underline{i} )}) \right)^{1/2|\underline{i}|}\,.
\end{equation}
\end{prop}
\begin{proof}
From Definition \ref{induceddefn} we see that $T_\mathcal{A}\circ T_{A_i}$ is the identity map on $X$,
for $i\in\{0,1\}$, since $T_\mathcal{A}$ is defined in terms of the inverses $S_{A_i}=T_{A_i}^{-1}$.
Similarly, for any $\underline{i}\in\{0,1\}^n$ we see that $T_\mathcal{A}^n\circ T_{A(\underline{i})}$ is also the
identity map on $X$, so
$(T_\mathcal{A}^n)'(T_{A(\underline{i})}(x)) \, T_{A(\underline{i})}'(x) =1$
for all $x\in X$, by the chain rule. Setting $x=p_{A(\underline{i})} = T_{A(\underline{i})}(p_{A(\underline{i})})$ we obtain
\begin{equation}\label{inversederivformu}
(T_\mathcal{A}^n)'(p_{A(\underline{i})}) = \frac{1}{ T_{A(\underline{i})}'( p_{A(\underline{i})} )} \,,
\end{equation}
and combining this with (\ref{jsrderiveq}) gives the required formula (\ref{jsrderivtaeq}).
\end{proof}
\subsection{Invariant measures for the induced dynamical system}
\begin{defn}
For $\mathcal{A}\in\mathfrak{m}$,
let $\mathcal{M}_\mathcal{A}$ denote the set
of $T_\mathcal{A}$-invariant Borel probability measures on $X=[0,1]$;
the support of any such measure is contained in $Y_\mathcal{A}=\cap_{n\ge0} T_\mathcal{A}^{-n}(X_\mathcal{A})$.
\end{defn}
The following is a well known consequence of the compactness of $Y_\mathcal{A}$ and continuity of $T_\mathcal{A}$
(see e.g.~\cite[Thm.~6.10]{walters}):
\begin{lemma}\label{macompactlemma}
The set $\mathcal{M}_\mathcal{A}$ is compact with respect to the weak$^*$ topology.
\end{lemma}
\begin{defn}\label{periodicorbitmeasuredefn}
For any $p\in X_\mathcal{A}$ that is a periodic point for $T_\mathcal{A}$, with $T_\mathcal{A}^n(p)=p$, we say that the
probability measure $\mu\in\mathcal{M}_\mathcal{A}$ defined by
\begin{equation}
\mu
=\frac{1}{n}\sum_{j=0}^{n-1} \delta_{T_\mathcal{A}^j (p)}
\end{equation}
is the corresponding \emph{periodic orbit measure} (or \emph{$T_\mathcal{A}$-periodic orbit measure}).
\end{defn}
\begin{remark}\label{conjugacymeasures}
The topological conjugacy $h_\mathcal{A}:\Omega\to Y_\mathcal{A}$ between the shift map $\sigma:\Omega\to\Omega$
and $T_\mathcal{A}:Y_\mathcal{A}\to Y_\mathcal{A}$ (cf.~Remark \ref{taremarks} (b))
induces a one-to-one correspondence
$h_\mathcal{A}^*:\mathcal{M}\to\mathcal{M}_\mathcal{A}$ between invariant measures.
\end{remark}
\begin{defn}\label{eomax}
For a bounded Borel function $f:X_\mathcal{A}\to\mathbb{R}$,
a measure $m\in\mathcal{M}_\mathcal{A}$ is called \emph{$f$-maximizing} if
$$
\int f\, dm = \sup_{\mu\in\mathcal{M}_\mathcal{A}} \int f\, d\mu
\,.
$$
\end{defn}
In the generality of Definition \ref{eomax}, the notion of an $f$-maximizing invariant measure
is part of the wider field of
so-called ergodic optimization, see e.g.~\cite{jeo}.
\begin{defn}
For $\mathcal{A}=(A_0,A_1)\in\mathfrak{m}$, define the \emph{induced function} $f_\mathcal{A}:X_\mathcal{A}\to\mathbb{R}$ by
\begin{equation}\label{function}
f_\mathcal{A} = \frac{1}{2}\left( \log T_\mathcal{A}' +(\log \det A_0)\mathbbm{1}_{X_{A_0}} +(\log \det A_1)\mathbbm{1}_{X_{A_1}} \right) \,.
\end{equation}
That is,
\begin{equation}\label{fthatis}
f_\mathcal{A}(x)=
\begin{cases}
\frac{1}{2}\left( \log S_{A_0}'(x) + \log \det A_0 \right) &\text{if }x\in X_{A_0} \\
\frac{1}{2} \left(\log S_{A_1}'(x) + \log \det A_1 \right) &\text{if }x\in X_{A_1} \,,
\end{cases}
\end{equation}
so writing $A_i= \begin{pmatrix} a_i&b_i\\c_i&d_i\end{pmatrix} $ gives
\begin{equation}\label{explicitfalt}
f_\mathcal{A}(x) = \log \left( \frac{\det A_i}{-\alpha_{A_i}(x+ \sigma_{A_i})} \right)
\quad\text{for }x\in X_{A_i}\,,
\end{equation}
where we recall from
Corollary \ref{alwaysposcor} that $\frac{\det A_i}{-\alpha_{A_i}(x+ \sigma_{A_i})}>0$
for all $x\in X_{A_i}$.
\end{defn}
\begin{remark}\label{faremark}
The function $f_\mathcal{A}$ is clearly Lipschitz continuous on each $X_{A_i}$, hence Lipschitz continuous
on $X_\mathcal{A} = X_{A_0}\cup X_{A_1}$, since the intervals $X_{A_0}$ and $X_{A_1}$ are disjoint.
\end{remark}
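For the illustrative pair $\mathcal{A}$ considered above (both matrices having determinant $1$), the formula (\ref{explicitfalt}) reads
$$
f_\mathcal{A}(x) =
\begin{cases}
-\log(2-5x) &\text{if } x\in X_{A_0}=[\tfrac13,\tfrac38]\\
-\log(3x-1) &\text{if } x\in X_{A_1}=[\tfrac25,\tfrac12]\,,
\end{cases}
$$
since $-\alpha_{A_0}(x+\sigma_{A_0})=-5(x-\tfrac25)=2-5x$ and $-\alpha_{A_1}(x+\sigma_{A_1})=3(x-\tfrac13)=3x-1$, the values $\sigma_{A_0}=-\tfrac25$ and $\sigma_{A_1}=-\tfrac13$ being forced by (\ref{detac}).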
The reason for introducing the function $f_\mathcal{A}$ is provided by the following characterisation of the joint spectral radius in terms of ergodic optimization:
\begin{theorem}\label{maxtheorem}
If $\mathcal{A}=(A_0,A_1)\in\mathfrak{m}$ then its joint spectral radius $r(\mathcal{A})$ satisfies
\begin{equation}\label{logra}
\log r(\mathcal{A}) =
\max_{\mu\in\mathcal{M}_\mathcal{A}} \int f_\mathcal{A}\, d\mu \,.
\end{equation}
\end{theorem}
\begin{proof}
From Proposition \ref{jsrderivta} we have
\begin{equation}\label{jsrderivtalog}
\log r(\mathcal{A})=
\sup_{\underline{i}\in \Omega^*}
\log \left( \det A(\underline{i}) \,\, (T_{\mathcal{A}}^{|\underline{i}|})' (p_{A(\underline{i} )}) \right)^{1/2|\underline{i}|}\,.
\end{equation}
If $\underline{i}\in\{0,1\}^n$ then
\begin{align}\label{falogdet}
\log \left( \det A(\underline{i}) \,\, (T_{\mathcal{A}}^n)' (p_{A(\underline{i} )}) \right)^{1/2|\underline{i}|}
&=
\frac{1}{2n} \left( \log \det \left( A_{i_1}\cdots A_{i_n} \right) + \log (T_{\mathcal{A}}^n)' (p_{A(\underline{i} )}) \right) \\
&=
\frac{1}{2n} \left( \log \prod_{j=1}^n \det A_{i_j} + \log \prod_{j=0}^{n-1} T_{\mathcal{A}}' (T_\mathcal{A}^j (p_{A(\underline{i} )})) \right) \\
&=
\frac{1}{n} \sum_{j=0}^{n-1} \frac{1}{2}\left( \log \det A_{i_{j+1}} + \log T_{\mathcal{A}}' (T_\mathcal{A}^j (p_{A(\underline{i} )})) \right) \\
&=
\frac{1}{n} \sum_{j=0}^{n-1} f_\mathcal{A}(T_\mathcal{A}^j (p_{A(\underline{i} )})) \,,
\end{align}
where the last step uses (\ref{function}) together with the fact that
$$
\log \det A_{i_{j+1}}
=
\left( \left( \log \det A_0 \right) \mathbbm{1}_{X_{A_0}} +(\log \det A_1)\mathbbm{1}_{X_{A_1}} \right)(T_\mathcal{A}^j (p_{A(\underline{i} )})) \,,
$$
because
\begin{equation*}
\mathbbm{1}_{X_{A_k}}(T_\mathcal{A}^j (p_{A(\underline{i} )}))
=
\begin{cases}
1& \text{if } i_{j+1}=k\\
0& \text{if } i_{j+1}\neq k \,.
\end{cases}
\end{equation*}
Combining (\ref{jsrderivtalog}) and (\ref{falogdet}) gives
\begin{equation}\label{perptmeasures}
\log r(\mathcal{A})=
\sup_{\underline{i}\in \Omega^*}
\frac{1}{|\underline{i}|} \sum_{j=0}^{|\underline{i}|-1} f_\mathcal{A}(T_\mathcal{A}^j (p_{A(\underline{i} )}))
= \sup_{\underline{i}\in \Omega^*}
\int f_\mathcal{A}\, d\mu_{\underline{i}}
\,,
\end{equation}
where
$$\mu_{\underline{i}}=\frac{1}{|\underline{i}|}\sum_{j=0}^{|\underline{i}|-1} \delta_{T_\mathcal{A}^j (p_{A(\underline{i} )})}$$
is the
periodic orbit measure (see Definition \ref{periodicorbitmeasuredefn})
for the period-$|\underline{i}|$ point
$p_{A(\underline{i} )}=T_\mathcal{A}^{|\underline{i}|}(p_{A(\underline{i} )})$, i.e.~the
unique measure in $\mathcal{M}_\mathcal{A}$ whose support equals the
periodic orbit $\{ T_\mathcal{A}^j (p_{A(\underline{i} )})\}_{j= 0}^{ |\underline{i}|-1}$.
By a result of Parthasarathy \cite{parthasarathy} (see also Sigmund \cite{sigmund}),
the collection of periodic orbit measures $\{\mu_{\underline{i}}:\underline{i}\in\Omega^*\}$
is weak$^*$ dense in the weak$^*$ compact space $\mathcal{M}_\mathcal{A}$, so
\begin{equation}\label{parthsigmund}
\sup_{\underline{i}\in \Omega^*}
\int f_\mathcal{A}\, d\mu_{\underline{i}}
= \max_{\mu\in\mathcal{M}_\mathcal{A}} \int f_\mathcal{A}\, d\mu \,,
\end{equation}
and combining with (\ref{perptmeasures}) gives the required equality
(\ref{logra}).
\end{proof}
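By way of illustration, for the pair $\mathcal{A}$ considered above, the invariant measure $\delta_{p_{A_0}}$ concentrated at the fixed point $p_{A_0}=\frac{\sqrt{21}-1}{10}\in X_{A_0}$ of $T_{A_0}$ satisfies
$$
\int f_\mathcal{A}\, d\delta_{p_{A_0}} = -\log(2-5p_{A_0}) = \log\frac{2}{5-\sqrt{21}} = \log\frac{5+\sqrt{21}}{2} = \log r(A_0)\,,
$$
so (\ref{logra}) recovers the elementary bound $r(\mathcal{A})\ge r(A_0)$; the content of Theorem \ref{maxtheorem} is that the maximum over all invariant measures attains $\log r(\mathcal{A})$ exactly.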
train | 0.4.10 | \subsection{The finiteness property and periodic orbits}
In view of Theorem \mathcal Ref{maxtheorem}, we shall be interested in those measures $m\in\mathcal M_\mathcal Mathcal A$ which
are $f_\mathcal Mathcal A$-maximizing, in the sense of Definition \mathcal Ref{eomax},
i.e.~$m$ attains the maximum in (\mathcal Ref{logra}): $\mathcal Log r(\mathcal Mathcal A) =
\mathcal Max_{\mathcal Mu\in\mathcal M_\mathcal Mathcal A} \int f_\mathcal Mathcal A\, d\mathcal Mu =\int f_\mathcal Mathcal A\, dm$.
The finiteness property for $\mathcal Mathcal A$
(which we recall means that $r(\mathcal Mathcal A)=r(A_{i_1}\cdots A_{i_n})^{1/n}$
for some $i_1,\mathcal Ldots, i_n\in\{0,1\}$)
corresponds to existence of a periodic orbit measure which
is $f_\mathcal Mathcal A$-maximizing:
\begin{prop}\mathcal Label{finitenessperiodicequivalenceprop}
$\mathcal Mathcal A\in\mathcal Mm$ has the finiteness property if and only if some $T_\mathcal Mathcal A$-periodic orbit measure is $f_\mathcal Mathcal A$-maximizing.
\mathcal End{prop}
\begin{proof}
If $\mathcal Mathcal A\in\mathcal Mm$ has the finiteness property,
and $\underline{i}\in\{0,1\}^n$ satisfies $r(\mathcal Mathcal A)= r(A(\underline{i}))^{1/n}$,
then we claim that the corresponding periodic orbit measure
$\mathcal Mu_{\underline{i}}=\frac{1}{n}\sum_{j=0}^{n-1} \delta_{T_\mathcal Mathcal A^j (p_{A(\underline{i} )})}$
is $f_\mathcal Mathcal A$-maximizing.
To see this, first note that (\mathcal Ref{logra}) gives
\begin{equation}\mathcal Label{firstnote}
\mathcal Log r(A(\underline{i}))^{1/n} = \mathcal Log r(\mathcal Mathcal A) = \mathcal Max_{\mathcal Mu\in\mathcal M_\mathcal Mathcal A} \int f_\mathcal Mathcal A\, d\mathcal Mu\,,
\mathcal End{equation}
and the lefthand side of (\mathcal Ref{firstnote}) can be written as
\begin{equation}\mathcal Label{intermediate1}
\mathcal Log r(A(\underline{i}))^{1/n} = \mathcal Log \mathcal Left( \frac{\det A(\underline{i})}{T_{A(\underline{i})}'(p_{A(\underline{i})})}\mathcal Right)^{1/2n}
= \mathcal Log \mathcal Left( \det A(\underline{i}) \,\, (T_{\mathcal Mathcal A}^n)' (p_{A(\underline{i} )}) \mathcal Right)^{1/2n}
\mathcal End{equation}
using Corollary \mathcal Ref{specradiusta} and (\mathcal Ref{inversederivformu}),
and therefore (\mathcal Ref{falogdet}) gives
\begin{equation}\mathcal Label{intermediate2}
\mathcal Log r(A(\underline{i}))^{1/n} = \frac{1}{n} \sum_{j=0}^{n-1} f_\mathcal Mathcal A(T_\mathcal Mathcal A^j (p_{A(\underline{i} )}))
=\int f_\mathcal Mathcal A\, d\mathcal Mu_{\underline{i}}\,,
\mathcal End{equation}
so (\mathcal Ref{firstnote}) implies that
$\mathcal Mu_{\underline{i}}$
is indeed $f_\mathcal Mathcal A$-maximizing.
Conversely, if some $T_\mathcal Mathcal A$-periodic orbit measure is $f_\mathcal Mathcal A$-maximizing,
then this measure is necessarily of the form $\mathcal Mu_{\underline{i}}=\frac{1}{n}\sum_{j=0}^{n-1} \delta_{T_\mathcal Mathcal A^j (p_{A(\underline{i} )})}$
for some $n\in\mathbb N$ and $\underline{i}\in\{0,1\}^n$,
and satisfies
$\int f_\mathcal Mathcal A\, d\mathcal Mu_{\underline{i}} = \mathcal Max_{\mathcal Mu\in\mathcal M_\mathcal Mathcal A} \int f_\mathcal Mathcal A\, d\mathcal Mu$.
Combining
(\mathcal Ref{logra}) with
(\mathcal Ref{intermediate1}) and (\mathcal Ref{intermediate2}) gives
$\mathcal Log r(A(\underline{i}))^{1/n} = \mathcal Log r(\mathcal Mathcal A)$, so $\mathcal Mathcal A$ has the finiteness property, as required.
\mathcal End{proof}
Note that the above proof has also established:
\begin{prop}
Suppose that $\mathcal Mathcal A\in\mathcal Mm$,
and $\underline{i}\in\{0,1\}^n$ for some $n\in\mathbb N$.
Then $r(\mathcal Mathcal A)= r(A(\underline{i}))^{1/n}$
if and only if the periodic orbit measure
$\mathcal Mu_{\underline{i}}=\frac{1}{n}\sum_{j=0}^{n-1} \delta_{T_\mathcal Mathcal A^j (p_{A(\underline{i} )})}$
is $f_\mathcal Mathcal A$-maximizing.
\mathcal End{prop}
Recall that we say $\mathcal{A}\in\mathfrak{m}$ is a \emph{finiteness counterexample} if
$r(\mathcal{A}) > r(A_{i_1}\cdots A_{i_n})^{1/n}$
for all $n\in\mathbb N$ and all choices $i_1,\ldots, i_n\in\{0,1\}$. We have:
\begin{prop}\label{counterexampleequivalenceprop}
$\mathcal{A}\in\mathfrak{m}$ is a finiteness counterexample if and only if no $T_\mathcal{A}$-periodic orbit measure is $f_\mathcal{A}$-maximizing.
In this case there exists at least one measure $\mu\in\mathcal{M}_\mathcal{A}$ that is $f_\mathcal{A}$-maximizing, and there exist uncountably many sequences $\omega\in\Omega$ such that
\begin{equation}
\label{inthelimit}
r(\mathcal{A}) = \lim_{n\to\infty} r(A_{\omega_1}\cdots A_{\omega_n})^{1/n} \,.
\end{equation}
\end{prop}
\begin{proof}
The first statement is equivalent to that of Proposition \ref{finitenessperiodicequivalenceprop}, while the existence of
an $f_\mathcal{A}$-maximizing measure $\mu$ is a consequence
(see e.g.~\cite[Prop.~2.4 (i)]{jeo})
of the continuity of $f_\mathcal{A}$ and the weak$^*$ compactness of
$\mathcal{M}_\mathcal{A}$ (see Lemma \ref{macompactlemma}). In fact
$\mu$ may be chosen to be an ergodic measure,
since it is readily shown that the set of $f_\mathcal{A}$-maximizing measures is convex, and any of its extremal points
is ergodic (see e.g.~\cite[Prop.~2.4]{jeo}).
The ergodic
theorem (see e.g.~\cite[Thm.~1.14]{walters}) then implies that
\begin{equation}\label{inthelimit2}
\log r(\mathcal{A}) = \int f_\mathcal{A}\, d\mu = \lim_{n\to\infty} \frac{1}{n}\sum_{j=0}^{n-1} f_\mathcal{A}(T_\mathcal{A}^j(p))
\end{equation}
for $\mu$-almost every $p\in Y_\mathcal{A}$.
Since no periodic orbit measure is $f_\mathcal{A}$-maximizing, the measure $\mu$
must have uncountable support, and therefore
(\ref{inthelimit2}) holds for an uncountable set of points $p\in Y_\mathcal{A}$.
We may use the topological conjugacy $h_\mathcal{A}:\Omega\to Y_\mathcal{A}$
to define the image measure $m=(h_\mathcal{A}^{-1})^*(\mu)$, which also has uncountable support, and
if we write $h_\mathcal{A}(\omega)=p$ then
\begin{equation}\label{sumproductexp}
r(A_{\omega_1}\cdots A_{\omega_n})^{1/n} = \exp\left( \frac{1}{n}\sum_{j=0}^{n-1} f_\mathcal{A}(T_\mathcal{A}^j(p)) \right) \,,
\end{equation} so
(\ref{inthelimit2}) implies $
r(\mathcal{A}) = \lim_{n\to\infty} r(A_{\omega_1}\cdots A_{\omega_n})^{1/n}
$
for $m$-almost every $\omega\in\Omega$, hence for uncountably many $\omega\in\Omega$.
\end{proof}
train | 0.4.11 | \subsection{Monotonicity properties and formulae}
The following simple lemma records that for $\mathcal Mathcal A\in\mathcal Mm$,
the induced dynamical system $T_{\mathcal Mathcal A(t)}$
is independent of $t$, and that the induced function $f_{\mathcal Mathcal A(t)}$ differs from $f_\mathcal Mathcal A$
only by the addition of a scalar multiple of the characteristic function
for the image $X_{A_1}$.
\begin{lemma}\mathcal Label{simple}
For $\mathcal Mathcal A=(A_0,A_1)\in \mathcal Mm$,
and all $t>0$,
\begin{itemize}
\item[(i)] $T_{\mathcal Mathcal A(t)} = T_\mathcal Mathcal A$ \,,
\item[(ii)] $f_{\mathcal Mathcal A(t)} = f_\mathcal Mathcal A + (\mathcal Log t)\mathcal Mathbbm{1}_{X_{A_1}}$,
\item[(iii)]
$
f_{\mathcal Mathcal A(t)}(T_{A_0}(1)) - f_{\mathcal Mathcal A(t)}(T_{A_1}(0))
=
f_{\mathcal Mathcal A}(T_{A_0}(1)) - f_{\mathcal Mathcal A}(T_{A_1}(0)) - \mathcal Log t\,,
$
\item[(iv)] $f_{\mathcal Mathcal A(t)}'=f_\mathcal Mathcal A'$, with
\begin{equation}\mathcal Label{fderivative}
f_\mathcal Mathcal A' (x)=
-( x+\sigma_{A_i})^{-1}
\quad\mathcal Text{for }x\in X_{A_i}, \ i\in\{0,1\}\,.
\mathcal End{equation}
\mathcal End{itemize}
\mathcal End{lemma}
\begin{proof}
(i) From Remark \mathcal Ref{invariantposmultiple} we see that if $t>0$ then
$T_{tA_1}=T_{A_1}$, hence
$T_{\mathcal Mathcal A(t)} = T_\mathcal Mathcal A$.
(ii)
Formula (\mathcal Ref{fthatis}) gives $f_\mathcal Mathcal A = f_{\mathcal Mathcal A(t)}$ on $X_{A_0}$,
while for $x\in X_{A_1}$ we have
$$f_{\mathcal Mathcal A(t)}(x) = \frac{1}{2}\mathcal Left( \mathcal Log S_{A_1}'(x) + \mathcal Log \det tA_1 \mathcal Right)
= \mathcal Log t +f_\mathcal Mathcal A(x)$$
since $\mathcal Log \det tA_1 = \mathcal Log \mathcal Left( t^2 \det A_1 \mathcal Right) = 2\mathcal Log t +\mathcal Log \det A_1$,
thus
$f_{\mathcal Mathcal A(t)} = f_\mathcal Mathcal A + (\mathcal Log t)\mathcal Mathbbm{1}_{X_{A_1}}$.
(iii) This is immediate from part (ii).
(iv) The formula for $f_\mathcal Mathcal A'$ follows readily from the explicit formula (\mathcal Ref{explicitfalt}) for $f_\mathcal Mathcal A$,
and is equal to $f_{\mathcal Mathcal A(t)}'$ by (ii) above.
\mathcal End{proof}
\begin{lemma}\label{posnegf}
If $\mathcal{A}=(A_0,A_1)\in\mathfrak{m}$ then
\begin{enumerate}
\item[(i)]
$f_\mathcal{A}'$ is strictly positive on $X_{A_0}$ and strictly negative on $X_{A_1}$,
\item[(ii)]
$f_\mathcal{A}$ is strictly increasing on $X_{A_0}$ and strictly decreasing on $X_{A_1}$,
\item[(iii)]
$(f_\mathcal{A}\circ T_{A_0}^i)'(x)>0$ and $(f_\mathcal{A}\circ T_{A_1}^i)'(x)<0$ for all $i\ge1$, $x\in X$.
\end{enumerate}
\end{lemma}
\begin{proof}
(i)
In view of formula (\ref{fderivative}),
it suffices to note that by Corollary \ref{alwaysposcor},
$
x+\sigma_{A_0} <0
$
for $x\in X_{A_0}$,
and
$x+\sigma_{A_1} >0
$
for $x\in X_{A_1}$.
(ii) This is an immediate consequence of (i).
(iii)
By the chain rule,
\begin{equation}\label{chain}
(f_\mathcal{A}\circ T_{A_j}^i)'(x) = f_\mathcal{A}'(T_{A_j}^i (x)) (T_{A_j}^i)'(x)\,.
\end{equation}
The second factor $(T_{A_j}^i)'(x)$ on the righthand side of (\ref{chain}) is strictly positive
for all $x\in X$, $i\ge1$, $j\in\{0,1\}$, since $T_{A_j}$ is orientation-preserving, as noted in Remark \ref{derivativesremark}.
Regarding the sign of the first factor $f_\mathcal{A}'(T_{A_j}^i (x))$ on the righthand side of (\ref{chain}),
note that since $i\ge1$ we have $T_{A_j}^i(x)\in X_{A_j}=T_{A_j}(X)$ for all $x\in X$.
Part (i) above then implies that $f_\mathcal{A}'(T_{A_j}^i (x))$
is strictly positive when $j=0$ and strictly negative when $j=1$.
It follows that
$(f_\mathcal{A}\circ T_{A_j}^i)'(x)$
is strictly positive when $j=0$ and strictly negative when $j=1$, as required.
\end{proof}
For the purposes of the following Lemma \ref{usefullem}, it will be convenient to introduce the following notation:
\begin{notation}
For a matrix $A= \begin{pmatrix} a&b\\c&d\end{pmatrix} \in \mathbb M$,
define
$$
\delta_A = \frac{b+d}{\alpha_A} = \frac{b+d}{a+c-b-d}\,.
$$
\end{notation}
We can now give another characterisation of $\rho_A$:
\begin{lemma}\label{usefullem}
For $A\in \mathbb M$,
\begin{equation*}
\rho_A = \lim_{k\to \infty} \delta_{A^k}\,.
\end{equation*}
\end{lemma}
\begin{proof}
Perron-Frobenius theory (see e.g.~\cite[Thm.~0.17]{walters}) gives
$$
\lim_{k\to\infty} \lambda_A^{-k} A^k = v \, w_A
=
\begin{pmatrix} v^{(1)} w_A^{(1)} & v^{(1)} w_A^{(2)} \\ v^{(2)} w_A^{(1)} & v^{(2)} w_A^{(2)} \end{pmatrix}\,,
$$
where the positive dominant eigenvalue $\lambda_A>0$ and corresponding left eigenvector
$w_A$ are as in Lemma \ref{pflemma}, and $v$ is a corresponding right eigenvector
(a suitable multiple of $v_A$ from Lemma \ref{pflemma}), normalised so that
$w_A v=1$.
It follows that
\begin{equation}\label{rklimit}
\lim_{k\to\infty} \delta_{A^k}
= \frac{v^{(1)} w^{(2)}_A +v^{(2)} w_A^{(2)} }{v^{(1)} w_A^{(1)}+v^{(2)} w^{(1)}_A - v^{(1)} w^{(2)}_A -v^{(2)} w^{(2)}_A }
=\frac{w^{(2)}_A}{w^{(1)}_A -w^{(2)}_A}\,,
\end{equation}
so the formula
$\rho_A = \frac{w_A^{(2)}}{w_A^{(1)}-w_A^{(2)}}$ from Lemma \ref{wrholem} concludes the proof.
\end{proof}
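For example, taking $A_0=\begin{pmatrix} 3&1\\5&2\end{pmatrix}$ from the illustrative pair, successive powers give
$\delta_{A_0}=\tfrac35=0.6$, $\delta_{A_0^2}=\tfrac{14}{25}=0.56$ and $\delta_{A_0^3}=\tfrac{67}{120}\approx 0.5583$, already close to the limit $\rho_{A_0}=\frac{1+\sqrt{21}}{10}\approx 0.55826$.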
\begin{cor}\label{useful}
For $A\in \mathbb M$, $x\in X$,
\begin{equation}\label{rhooccurence}
\sum_{n=1}^\infty (\log S_A'\circ T_A^n)'(x) = \frac{2}{x+\rho_A}\,.
\end{equation}
\end{cor}
\begin{proof}
A simple calculation using the chain rule
yields
\begin{equation}\label{negativeformula}
\sum_{n=1}^k ( \log S_A'\circ T_A^n)'(x) =
- (\log T_{A^k}')'(x)
= \frac{2}{x+\delta_{A^k}}
\end{equation}
for all $k\ge1$, so
letting $k\to\infty$
we see that the result follows from Lemma \ref{usefullem}.
\end{proof}
Recalling from (\ref{fthatis}) that $f_\mathcal{A}= \frac{1}{2}\left( \log S_{A_i}' +\log \det A_i\right)$ on $X_{A_i}$,
the following result is an immediate consequence of Corollary \ref{useful}:
\begin{cor}\label{usefulcor}
If $\mathcal{A}=(A_0,A_1)\in \mathbb M^2$ then for $i\in \{0,1\}$,
\begin{equation}\label{sumformula}
\sum_{n=1}^\infty (f_\mathcal{A}\circ T_{A_i}^n)'(x) = \frac{1}{x+\rho_{A_i}}
\quad\text{for }x\in X_{A_i}\,.
\end{equation}
\end{cor}
\begin{cor}\label{posneg}
If $\mathcal{A}=(A_0,A_1)\in \mathfrak{m}$
then for all $x\in X$,
\begin{equation}\label{posnegsums}
\sum_{n=1}^\infty (f_\mathcal{A}\circ T_{A_0}^n)'(x) >0
\quad\text{and}\quad
\sum_{n=1}^\infty (f_\mathcal{A}\circ T_{A_1}^n)'(x) <0\,,
\end{equation}
and
\begin{equation}\label{posnegxrhos}
x+\rho_{A_0}>0
\quad\text{and}\quad
x+\rho_{A_1}<0\,.
\end{equation}
\end{cor}
\begin{proof}
The inequalities in (\ref{posnegsums}) follow from Lemma \ref{posnegf} (iii),
while (\ref{posnegxrhos}) is an immediate consequence of
(\ref{sumformula}) and
(\ref{posnegsums}).
\end{proof}
\begin{remark}
The inequality $x+\rho_{A_0}>0$ in (\ref{posnegxrhos}) can also be deduced from the fact that
$\rho_{A_0}>0$ (by Corollary \ref{rhoposlessminusone}) and
$x\ge0$.
\end{remark}
train | 0.4.12 | \section{Sturmian measures associated to a concave-convex matrix pair}\mathcal Label{sturmasection}
For $\mathcal Mathcal A\in\mathcal Mm$, the induced space $X_\mathcal Mathcal A$ becomes an ordered set when equipped
with the usual order on $X=[0,1]$. In particular,
by a \mathcal Emph{sub-interval} of $X_\mathcal Mathcal A$ we mean any subset of $X_\mathcal Mathcal A$ of the form $I\cap X_\mathcal Mathcal A$
where $I$ is some sub-interval of $X$. Note that a sub-interval of $X_\mathcal Mathcal A$ is
a sub-interval of $X$ if it is contained in either $X_{A_0}$ or $X_{A_1}$; otherwise it is
a union of two disjoint intervals in $X$.
\begin{defn}
Given a matrix pair $\mathcal Mathcal A \in \mathcal Mm$,
a closed interval $\Gamma\subset X_\mathcal Mathcal A$ is called
\mathcal Emph{$\mathcal Mathcal A$-Sturmian} (or simply \mathcal Emph{Sturmian})
if $T_\mathcal Mathcal A(\mathcal Min\Gamma)=T_\mathcal Mathcal A(\mathcal Max\Gamma)$,
i.e.~its two endpoints $\mathcal Min \Gamma$ and $\mathcal Max \Gamma$ have the same image under
the induced dynamical system
$T_\mathcal Mathcal A$.
\mathcal End{defn}
\begin{remark}\mathcal Label{sturmianasturmian}
\item[\, (a)]
The topological conjugacy $h_\mathcal Mathcal A: \Omega\mathcal To Y_\mathcal Mathcal A$ (cf.~Remark \mathcal Ref{conjugacymeasures}) is
order preserving, so
if $\Gamma\subset X_\mathcal Mathcal A$ is an $\mathcal Mathcal A$-Sturmian interval,
then $h_\mathcal Mathcal A^{-1}(\Gamma\cap Y_\mathcal Mathcal A)$ is
a Sturmian interval
as defined in Notation \mathcal Ref{sturmianfullshift}
(i.e. of the form $[0\omega,1\omega]$ for some $\omega\in\Omega$).
\item[\, (b)]
For all $t>0$, an interval is $\mathcal Mathcal A$-Sturmian if and only if it is $\mathcal Mathcal A(t)$-Sturmian.
\mathcal End{remark}
\begin{defn}\label{henceforthca}
Let $\mathcal{I}_\mathcal{A}$ denote the collection of all $\mathcal{A}$-Sturmian intervals.
Note that $\mathcal{I}_\mathcal{A}$ is naturally parametrized by $X=[0,1]$:
for each $c\in X$ there is a unique $\Gamma\in\mathcal{I}_\mathcal{A}$ such that
$T_\mathcal{A}(\min \Gamma)= T_\mathcal{A}(\max\Gamma) =c$.
Henceforth we shall write $c_\mathcal{A}(\Gamma)$ to denote the common value
$T_\mathcal{A}(\min \Gamma)= T_\mathcal{A}(\max\Gamma)$ for an $\mathcal{A}$-Sturmian interval $\Gamma\in\mathcal{I}_\mathcal{A}$, noting that
\begin{equation*}
c_\mathcal{A}:\mathcal{I}_\mathcal{A}\to X
\end{equation*}
is a bijection.
As a subset of $X$, we can express $\Gamma\in\mathcal{I}_\mathcal{A}$ as
\begin{equation}\label{disjunionsturm}
\Gamma = [T_{A_0}(c_\mathcal{A}(\Gamma)), T_{A_0}(1)] \cup [T_{A_1}(0),T_{A_1}(c_\mathcal{A}(\Gamma))]\,.
\end{equation}
\end{defn}
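For instance, for the illustrative pair $\mathcal{A}$, the $\mathcal{A}$-Sturmian interval with $c_\mathcal{A}(\Gamma)=\frac12$ is
$$
\Gamma = [T_{A_0}(\tfrac12), T_{A_0}(1)] \cup [T_{A_1}(0),T_{A_1}(\tfrac12)]
= [\tfrac{4}{11},\tfrac38]\cup[\tfrac25,\tfrac37]\,,
$$
and indeed $T_\mathcal{A}(\min\Gamma)=S_{A_0}(\tfrac{4}{11})=\tfrac12=S_{A_1}(\tfrac37)=T_\mathcal{A}(\max\Gamma)$.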
\begin{remark}\label{neglectsingleton}
It is apparent from (\ref{disjunionsturm}) that, viewed as a subset of $X=[0,1]$, an $\mathcal{A}$-Sturmian interval $\Gamma$ is
\emph{always} a disjoint union of two closed intervals. Note, however, that for the two extremal cases where $c_\mathcal{A}(\Gamma)=0$ or $1$, one of the intervals in the disjoint union is a singleton set (and the other interval is, respectively, either $X_{A_0}$ or $X_{A_1}$).
These extremal cases are particularly significant, and in the calculations of \S \ref{extsturmsection}
onwards it is convenient to neglect the singleton set, thereby identifying the extremal $\mathcal{A}$-Sturmian interval
with either $X_{A_0}$ or $X_{A_1}$.
\end{remark}
\begin{defn}\label{sturmianmeasdef}
We say that a $T_\mathcal{A}$-invariant Borel probability measure on $X_\mathcal{A}$ is
\emph{$\mathcal{A}$-Sturmian}
if its support is contained in
some $\mathcal{A}$-Sturmian interval.
Let
$\mathcal{S}_\mathcal{A}$ denote the collection of $\mathcal{A}$-Sturmian measures.
\end{defn}
\begin{remark}
\begin{itemize}
\item[\, (a)]
In view of Remarks \ref{conjugacymeasures} and \ref{sturmianasturmian},
the class of $\mathcal{A}$-Sturmian measures on $X_\mathcal{A}$ is just the $h_\mathcal{A}^*$-image of the class of Sturmian measures
on the shift space $\Omega$,
i.e.~$\mathcal{S}_\mathcal{A} = h_\mathcal{A}^*(\mathcal{S})$.
In particular (cf.~Proposition \ref{sturmianclassical}\,(b)), $\mathcal{S}_\mathcal{A}$ is also naturally parametrized by $X=[0,1]$: the map $\mathcal{P}\circ (h_\mathcal{A}^*)^{-1}:\mathcal{S}_\mathcal{A}\to[0,1]$ is a homeomorphism,
and for $\mu\in\mathcal{S}_\mathcal{A}$ we refer to $\mathcal{P}\circ (h_\mathcal{A}^*)^{-1}(\mu) = \mu(X_{A_1})$
as its \emph{(Sturmian) parameter}.
\item[\, (b)]
For all $t>0$, a measure is $\mathcal{A}$-Sturmian if and only if it is $\mathcal{A}(t)$-Sturmian.
\end{itemize}
\end{remark}
In \S \ref{technicalsection} we shall identify cases where $\mathcal{A}$-Sturmian measures arise as unique maximizing measures
for $f_{\mathcal{A}(t)}$, $t>0$.
In particular, for certain $t$ the unique $f_{\mathcal{A}(t)}$-maximizing measure is a Sturmian measure of \emph{irrational} parameter, and
such $\mathcal{A}(t)$ turn out to be \emph{finiteness counterexamples}:
\begin{prop}\label{irrationalcounterexampleconnection}
If $\mathcal{A}\in\mathfrak{m}$ is such that there is a unique $f_\mathcal{A}$-maximizing measure, and this measure is an $\mathcal{A}$-Sturmian measure with irrational parameter $\mathcal{P}$, then $\mathcal{A}$ is a finiteness counterexample
(i.e.~$r(\mathcal{A}) > r(A_{i_1}\cdots A_{i_n})^{1/n}$
for all $n\in\mathbb N$ and all choices $i_1,\ldots, i_n\in\{0,1\}$).
In this case
\begin{equation}
\label{inthelimit3}
r(\mathcal{A}) = \lim_{n\to\infty} r(A_{\omega_1}\cdots A_{\omega_n})^{1/n} \,,
\end{equation}
holds for the uncountably many Sturmian sequences $\omega=(\omega_n)_{n=1}^\infty$ of parameter $\mathcal{P}$.
\end{prop}
\begin{proof}
By assumption there is a unique $f_\mathcal{A}$-maximizing measure $\mu$,
and this measure is an $\mathcal{A}$-Sturmian measure with irrational parameter $\mathcal{P}$, which in particular is not a periodic orbit measure. It follows that no $T_\mathcal{A}$-periodic orbit measure is $f_\mathcal{A}$-maximizing, so Proposition
\ref{counterexampleequivalenceprop} implies that $\mathcal{A}$ is a finiteness counterexample,
and that there exist uncountably many sequences $\omega\in\Omega$ such that (\ref{inthelimit3}) holds.
In fact the support of any $\mathcal{A}$-Sturmian measure $\mu$ is uniquely ergodic (see e.g.~\cite[Cor.~1.6]{bouschmairesse}),
so the ergodic theorem holds for the uncountably many points in the support of $\mu$ (see e.g.~\cite[Thm.~6.19]{walters}),
and therefore $\lim_{n\to\infty} \frac{1}{n}\sum_{j=0}^{n-1} f_\mathcal{A}(T_\mathcal{A}^j(p)) = \int f_\mathcal{A}\, d\mu = \log r(\mathcal{A})$
for all $p$ in the support of $\mu$.
Writing $p=h_\mathcal{A}(\omega)$ as in the proof of Proposition \ref{counterexampleequivalenceprop},
the relation (\ref{sumproductexp}) then implies that
(\ref{inthelimit3}) holds for all
points in the support of the Sturmian measure $m$,
i.e.~for all Sturmian sequences of parameter $\mathcal{P}$.
\end{proof}
train | 0.4.13 | \section{The Sturmian transfer function}\mathcal Label{transfersection}
In order to
show that the maximizing measure for $f_{\mathcal Mathcal A(t)}$ is supported in some $\mathcal Mathcal A$-Sturmian interval
$\Gamma\in\mathcal I_\mathcal Mathcal A$, our strategy will be to add a coboundary $\varphi_\Gamma - \varphi_\Gamma\circ T_\mathcal Mathcal A$,
where the corresponding \mathcal Emph{Sturmian transfer function} $\varphi_\Gamma$ is introduced below,
so that the new function $ f_\mathcal Mathcal A + \varphi_\Gamma - \varphi_\Gamma\circ T_\mathcal Mathcal A$
takes a constant value on all of $\Gamma$, and
is strictly smaller than this constant value on the complement of $\Gamma$.
This approach is patterned on ideas of Bousch \cite{bousch}
in the setting of the angle-doubling map
and degree-one trigonometric polynomials.
To proceed, it is convenient to introduce the following:
\begin{defn}
For $\mathcal Mathcal A\in\mathcal Mm$, to each $\mathcal Mathcal A$-Sturmian interval $\Gamma$ we associate the \mathcal Emph{hybrid contraction}
$\mathcal Tau_\Gamma: X\mathcal To X_\mathcal Mathcal A$, defined by
\begin{equation}\mathcal Label{hybrid}
\mathcal Tau_\Gamma(x)=
\begin{cases}
T_{A_1}(x)&\mathcal Text{if }x\in[0,c(\Gamma))\\
T_{A_0}(x)&\mathcal Text{if }x\in[c(\Gamma),1]\,.
\mathcal End{cases}
\mathcal End{equation}
\mathcal End{defn}
\begin{remark}\mathcal Label{taugammaremark}
The hybrid contraction $\mathcal Tau_\Gamma$ satisfies $\mathcal Tau_\Gamma(X)=\Gamma$,
and is piecewise Lipschitz continuous.
More precisely, its restriction to $[0,c_\mathcal Mathcal A(\Gamma))$ is Lipschitz,
as is its restriction to $[c_\mathcal Mathcal A(\Gamma),1]$.
\mathcal End{remark}
\begin{lemma}\label{phiexists}
Given $\mathcal{A}\in \mathfrak{m}$, and
an $\mathcal{A}$-Sturmian interval $\Gamma\in \mathcal{I}_\mathcal{A}$,
there exists a unique Lipschitz continuous function $\varphi_{\mathcal{A},\Gamma}:X\to\mathbb{R}$
which simultaneously satisfies\footnote{The substantial condition is (\ref{defphiformula}),
which determines $\varphi_{\mathcal{A},\Gamma}$ up to an additive constant.
The extra condition
(\ref{zeroconv}) is useful in that it removes any ambiguity when discussing $\varphi_{\mathcal{A},\Gamma}$.}
\begin{equation}\label{defphiformula}
\varphi_{\mathcal{A}, \Gamma}' = \sum_{n=1}^\infty (f_\mathcal{A}\circ \tau_{\Gamma}^n)'
\quad\text{Lebesgue a.e.,}
\end{equation}
and
\begin{equation}\label{zeroconv}
\varphi_{\mathcal{A}, \Gamma}(0)=0\,.
\end{equation}
\end{lemma}
\begin{proof}
The function $f_\mathcal{A}$
is Lipschitz,
and $\tau_\Gamma$ is piecewise Lipschitz (cf.~Remark \ref{taugammaremark}),
so each $\tau_\Gamma^n$ is piecewise Lipschitz, and therefore by Rademacher's Theorem is differentiable Lebesgue
almost everywhere, with $L^\infty$ derivative.
Now
$\|(\tau_\Gamma^n)'\|_\infty = O(\theta^n)$ as $n\to\infty$ for some $\theta \in(0,1)$,
so the sum
$$\sum_{n=1}^\infty (f_\mathcal{A}\circ \tau_\Gamma^n)'
= \sum_{n=1}^\infty (f_\mathcal{A}'\circ \tau_\Gamma^n)\cdot ( \tau_\Gamma^n)'
$$
is Lebesgue almost everywhere convergent (as its $n$th term is
$O(\theta^n)$),
and defines an $L^\infty$
function with respect to Lebesgue measure on $X$.
In particular, it has a Lipschitz antiderivative, unique up to an additive constant,
hence uniquely determined by the normalisation
$\varphi_{\mathcal{A}, \Gamma}(0)=0$.
\end{proof}
\begin{notation}
For $\mathcal{A}\in\mathfrak{m}$, $\Gamma\in\mathcal{I}_\mathcal{A}$,
the function $\varphi_\Gamma=\varphi_{\mathcal{A},\Gamma}$ whose existence and uniqueness is guaranteed
by Lemma \ref{phiexists}
will be referred to as the corresponding \emph{Sturmian transfer function}.
\end{notation}
\begin{remark}
Note that although the induced function $f_\mathcal{A}$ is only defined on $X_\mathcal{A}$, the Sturmian transfer function
$\varphi_\Gamma$ is actually defined on all of $X=[0,1]$. For the most part, however, we shall only
be interested in the restriction of $\varphi_\Gamma$ to $X_\mathcal{A}$.
More precisely, we shall be interested in certain properties of $f_\mathcal{A}+\varphi_\Gamma$,
or of $f_\mathcal{A}+ \varphi_\Gamma - \varphi_\Gamma\circ T_\mathcal{A}$,
considered as functions defined on $X_\mathcal{A}$, beginning with the following Corollary \ref{lipcont}.
\end{remark}
\begin{cor}\label{lipcont}
If $\mathcal{A}\in\mathfrak{m}$, and $\Gamma$ is any $\mathcal{A}$-Sturmian interval,
then both $f_\mathcal{A}+\varphi_\Gamma$
and $f_\mathcal{A}+\varphi_\Gamma - \varphi_\Gamma\circ T_\mathcal{A}$ are Lipschitz continuous functions on $X_\mathcal{A}$.
\end{cor}
\begin{proof}
Both $T_\mathcal{A}$ and $f_\mathcal{A}$ are Lipschitz continuous on $X_\mathcal{A}$, as noted in Remarks \ref{taremarks} and \ref{faremark}, and $\varphi_\Gamma$ is Lipschitz continuous on $X$ as noted in Lemma \ref{phiexists}, hence Lipschitz continuous
on $X_\mathcal{A}$. It follows that
both $f_\mathcal{A}+\varphi_\Gamma$
and $f_\mathcal{A}+\varphi_\Gamma - \varphi_\Gamma\circ T_\mathcal{A}$ are Lipschitz continuous on $X_\mathcal{A}$.
\end{proof}
\begin{lemma}\label{flatxai}
Suppose $\mathcal{A}\in\mathfrak{m}$, $t>0$, and $\Gamma$ is any $\mathcal{A}$-Sturmian interval.
The Lipschitz continuous function
$f_{\mathcal{A}(t)}+\varphi_\Gamma - \varphi_\Gamma\circ T_\mathcal{A}:X_\mathcal{A}\to\mathbb{R}$ has the property that its restriction to
$\Gamma\cap X_{A_0}$ is a constant function, and
its restriction to
$\Gamma\cap X_{A_1}$ is a constant function.
\end{lemma}
\begin{proof}
By Corollary \ref{lipcont}, the function $f_{\mathcal{A}(t)}+\varphi_\Gamma - \varphi_\Gamma\circ T_\mathcal{A}$ is Lipschitz continuous on $X_\mathcal{A}$, because $\mathcal{A}(t)\in\mathfrak{m}$.
So by the fundamental theorem of calculus for Lipschitz functions (see e.g.~\cite[Thm.~7.1.15]{kk}),
the required result will follow if it can be shown that
\begin{equation}\label{derivativezero2}
(f_{\mathcal{A}(t)}+\varphi_\Gamma -\varphi_\Gamma \circ T_\mathcal{A})'=0\quad \text{Lebesgue a.e. on $\Gamma$}\,.
\end{equation}
But $f_{\mathcal{A}(t)}'=f_\mathcal{A}'$, so
(\ref{derivativezero2}) is equivalent to proving that
\begin{equation}\label{derivativezero}
(f_\mathcal{A} +\varphi_\Gamma-\varphi_\Gamma\circ T_\mathcal{A})'=0\quad \text{Lebesgue a.e. on $\Gamma$}\,.
\end{equation}
To establish this almost everywhere equality,
note that
$$
f_\mathcal{A}' + \varphi_\Gamma' =
f_\mathcal{A}' + \sum_{n=1}^\infty (f_\mathcal{A}\circ \tau_\Gamma^n)'
=
\sum_{n=0}^\infty (f_\mathcal{A}\circ \tau_\Gamma^n)'
=
\sum_{n=0}^\infty (f_\mathcal{A}' \circ \tau_\Gamma^n)\cdot (\tau_\Gamma^n)'
$$
and
$$
(\varphi_\Gamma\circ T_\mathcal{A})' =
\sum_{n=1}^\infty (f_\mathcal{A}' \circ \tau_\Gamma^n \circ T_\mathcal{A})\cdot ((\tau_\Gamma^n)'\circ T_\mathcal{A})\cdot T_\mathcal{A}'
= \sum_{n=1}^\infty (f_\mathcal{A}' \circ \tau_\Gamma^{n-1})\cdot (\tau_\Gamma^{n-1})'\,,
$$
since $\tau_\Gamma\circ T_\mathcal{A}$ is the identity on $\Gamma$ (with the possible exception of a single point), whence $\tau_\Gamma^n\circ T_\mathcal{A} = \tau_\Gamma^{n-1}$ Lebesgue a.e.~on $\Gamma$;
subtracting the two expressions, we see that indeed (\ref{derivativezero}) holds.
\end{proof}
\begin{remark}
In the generality of Lemma \ref{flatxai}, the constant values assumed by
$f_{\mathcal A(t)}+\varphi_\Gamma - \varphi_\Gamma\circ T_\mathcal A$ on
$\Gamma\cap X_{A_0}$ and
$\Gamma\cap X_{A_1}$
need not coincide.
However, we shall shortly give (see Lemma \ref{flattenlemma}) an extra condition
which does ensure
that $f_{\mathcal A(t)}+\varphi_\Gamma - \varphi_\Gamma\circ T_\mathcal A$ takes the \emph{same} constant value
on the whole of $\Gamma$. Indeed this possibility is a key tool in our strategy.
\end{remark}
train | 0.4.14 | \section{The extremal Sturmian intervals}\mathcal Label{extsturmsection}
\subsection{Formulae involving extremal intervals}
As noted in Remark \mathcal Ref{neglectsingleton}, an $\mathcal Mathcal A$-Sturmian interval is the disjoint union of two closed intervals when viewed as a subset of $X=[0,1]$. However, the two \mathcal Emph{extremal} cases yield a leftmost
$\mathcal Mathcal A$-Sturmian interval equal to $X_{A_0}\cup\{T_{A_1}(0)\}$,
and a rightmost $\mathcal Mathcal A$-Sturmian interval equal to $\{T_{A_0}(1)\}\cup X_{A_1}$.
The presence of singleton sets in these expressions is notationally inconvenient, and unnecessary
for our purposes, so henceforth we neglect them.
More precisely, henceforth
the leftmost $\mathcal Mathcal A$-Sturmian interval is taken to be $X_{A_0}=T_{A_0}(X)$, and denoted
by $\Gamma_0$, so that $\mathcal Tau_{\Gamma_0}=T_{A_0}$;
the rightmost $\mathcal Mathcal A$-Sturmian interval is taken to be $X_{A_1}=T_{A_1}(X)$, and denoted
by $\Gamma_1$, so that $\mathcal Tau_{\Gamma_1}=T_{A_1}$.
When the $\mathcal Mathcal A$-Sturmian interval $\Gamma$ is either $\Gamma_0$ or $\Gamma_1$, there is an explicit formula for the Sturmian transfer function $\varphi_\Gamma$:
\begin{lemma}\label{easyphiformulae}
Suppose $\mathcal A\in\mathcal M$.
For $i\in\{0,1\}$, and all $x\in X$,
\begin{equation}\label{phiiformulae}
\varphi_{\Gamma_i}(x) = \log \left( \frac{x+\rho_{A_i}}{\rho_{A_i}} \right) \,.
\end{equation}
\end{lemma}
\begin{proof}
Now $\tau_{\Gamma_i} = T_{A_i}$, so the defining formula (\ref{defphiformula}) becomes
\begin{equation}\label{defphiiformula}
\varphi_{\Gamma_i}'(x) = \sum_{n=1}^\infty (f_\mathcal A\circ T_{A_i}^n)'(x)\,,
\end{equation}
and then (\ref{sumformula}) implies that
\begin{equation*}
\varphi_{\Gamma_i}'(x) = \frac{1}{x+\rho_{A_i}}\,.
\end{equation*}
Noting that the sign of $x+\rho_{A_i}$ is positive when $i=0$ and negative when $i=1$
(see Corollary \ref{posneg}), as well as the convention that $\varphi_{\Gamma_i}(0)=0$ (see Lemma \ref{phiexists}),
we deduce the required expression (\ref{phiiformulae}).
\end{proof}
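\begin{remark}
For the reader's convenience we record the integration implicit in the last step; this is a routine verification rather than new information. Since $\varphi_{\Gamma_i}'(u)=(u+\rho_{A_i})^{-1}$ and $\varphi_{\Gamma_i}(0)=0$,
\begin{equation*}
\varphi_{\Gamma_i}(x)
= \int_0^x \frac{du}{u+\rho_{A_i}}
= \log\left|x+\rho_{A_i}\right| - \log\left|\rho_{A_i}\right|
= \log \left( \frac{x+\rho_{A_i}}{\rho_{A_i}} \right) \,,
\end{equation*}
where the absolute values may be dropped because $u+\rho_{A_i}$ has constant sign on $[0,1]$ (positive for $i=0$, negative for $i=1$, by Corollary \ref{posneg}), so that the quotient $(x+\rho_{A_i})/\rho_{A_i}$ is positive in both cases.
\end{remark}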
\begin{defn}\label{Deltadefndefn}
Given $\mathcal A=(A_0,A_1)\in \mathcal M$ and
$\Gamma\in\mathcal I_\mathcal A$,
define $\Delta_\mathcal A(\Gamma) \in \mathbb{R}$ by
\begin{equation}\label{Deltadefn}
\Delta_\mathcal A(\Gamma)
=
\left(\varphi_\Gamma(1)-\varphi_\Gamma(0) \right)
-
\left( \varphi_\Gamma(T_{A_0}(1))-\varphi_\Gamma(T_{A_1}(0)) \right) \,,
\end{equation}
noting the equivalent expression
\begin{equation}\label{Deltadefnalt}
\Delta_\mathcal A(\Gamma)
=
\varphi_\Gamma(1)
-
\left( \varphi_\Gamma(T_{A_0}(1))-\varphi_\Gamma(T_{A_1}(0)) \right)
\end{equation}
as a consequence of the convention that $\varphi_\Gamma(0)=0$
(see Lemma \ref{phiexists}).
\end{defn}
The values $\Delta_\mathcal A(\Gamma_i)$ play an important role, so it will be useful to record
the following explicit formulae:
\begin{lemma}\label{deltagammailemma}
Suppose $\mathcal A\in\mathcal M$.
For $i\in\{0,1\}$,
\begin{equation}\label{deltagammai}
\Delta_\mathcal A(\Gamma_i) =
\log \left(
\frac{(1+\rho_{A_i}) \left(\frac{b_1}{b_1+d_1}+\rho_{A_i}\right)}{ \rho_{A_i} \left( \frac{a_0}{a_0+c_0}+\rho_{A_i}\right)}
\right) \,.
\end{equation}
\end{lemma}
\begin{proof}
This is immediate from the defining formula (\ref{Deltadefn})
(or (\ref{Deltadefnalt})) for $\Delta_\mathcal A(\Gamma_i)$, together with formula
(\ref{phiiformulae}) for $\varphi_{\Gamma_i}$, and the fact that $T_{A_0}(1)=\frac{a_0}{a_0+c_0}$
and $T_{A_1}(0) = \frac{b_1}{b_1+d_1}$.
\end{proof}
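\begin{remark}
In detail, and purely as a sanity check, formula (\ref{phiiformulae}) gives
\begin{equation*}
\varphi_{\Gamma_i}(1) = \log\frac{1+\rho_{A_i}}{\rho_{A_i}}\,,\quad
\varphi_{\Gamma_i}(T_{A_0}(1)) = \log\frac{\frac{a_0}{a_0+c_0}+\rho_{A_i}}{\rho_{A_i}}\,,\quad
\varphi_{\Gamma_i}(T_{A_1}(0)) = \log\frac{\frac{b_1}{b_1+d_1}+\rho_{A_i}}{\rho_{A_i}}\,,
\end{equation*}
and substituting these three values into (\ref{Deltadefnalt}) yields (\ref{deltagammai}), the three factors of $\rho_{A_i}$ combining into the single one appearing in the denominator.
\end{remark>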
\begin{cor}\label{deltasigns}
If $\mathcal A\in\mathcal M$ then
\begin{equation}\label{starrr}
\Delta_\mathcal A(\Gamma_1) < 0 < \Delta_\mathcal A(\Gamma_0)\,.
\end{equation}
\end{cor}
\begin{proof}
The four terms
$
\rho_{A_i}
$,
$
1+\rho_{A_i}
$,
$
\frac{a_0}{a_0+c_0}+\rho_{A_i}
$,
$
\frac{b_1}{b_1+d_1}+\rho_{A_i}
$
in (\ref{deltagammai}) are all positive if $i=0$, and all negative if $i=1$, by
the inequalities (\ref{posnegxrhos})
in Corollary \ref{posneg}.
Now $\mathcal A\in\mathcal M$ implies that (\ref{definequal1}) holds, so
$
\frac{b_1}{b_1+d_1}+\rho_{A_i}
>
\frac{a_0}{a_0+c_0}+\rho_{A_i}
$,
and clearly $1+\rho_{A_i}>\rho_{A_i}$.
Consequently
$$
\frac{(1+\rho_{A_i}) \left(\frac{b_1}{b_1+d_1}+\rho_{A_i}\right)}{ \rho_{A_i} \left( \frac{a_0}{a_0+c_0}+\rho_{A_i}\right)}
$$
is strictly greater than $1$ if $i=0$, and strictly smaller than $1$ if $i=1$.
The result then follows from Lemma \ref{deltagammailemma}.
\end{proof}
\subsection{Adaptations for other matrix pairs}\label{adaptations}
As mentioned in \S \ref{relatsubsection}, the methods of this paper can be adapted
so as to give alternative proofs
of certain results
(analogues of Theorem \ref{maintheorem})
mentioned in \S \ref{generalsection},
namely that a full Sturmian family is generated
by the matrix pair (\ref{standardpair}), and by matrix pairs corresponding to sub-cases of (\ref{bmfamily}) and (\ref{kozyakinpairs})
which lie in the boundary of $\mathfrak D$.\footnote{Note that all
of the matrix pairs in (\ref{bmfamily}), (\ref{kozyakinpairs}), (\ref{standardpair}) have the property that
$A_0$ is projectively concave and $A_1$ is projectively convex.}
In this subsection we indicate the modifications necessary to handle these cases.
Firstly, the induced space $X_\mathcal A$ may be the whole of $X=[0,1]$ rather than a disjoint union of two closed intervals:
this occurs if $a_0/c_0=b_1/d_1$ (i.e.~when (\ref{definequal1}) becomes an equality), which is the case for the pair
(\ref{standardpair}), and for (\ref{kozyakinpairs}) if $bc=1$.
Secondly, in each of the cases (\ref{bmfamily}), (\ref{kozyakinpairs}) and (\ref{standardpair}), the
induced maps $T_{A_0}$ and $T_{A_1}$ have fixed points
at
0 and 1 respectively, so that the dynamical system $T_\mathcal A$ also fixes these points.
For (\ref{standardpair}), both 0 and 1 are indifferent fixed points, i.e.~$T_{A_0}'(0)=1=T_{A_1}'(1)$.
For (\ref{bmfamily}) and (\ref{kozyakinpairs}) these fixed points are unstable for the induced maps $T_{A_0}$ and $T_{A_1}$, i.e.~$T_{A_0}'(0)>1$ and $T_{A_1}'(1)>1$, but both of these maps also have stable fixed points in the interior of $X=[0,1]$.
Consequently for (\ref{standardpair}) the dynamical system $T_\mathcal A:X\to X$ has indifferent fixed points at 0 and 1,
and no other fixed points, while for (\ref{bmfamily}) and (\ref{kozyakinpairs}) the dynamical system $T_\mathcal A$ has stable fixed points at 0 and 1, and two further unstable fixed points in the interior of $X$.
The potentially problematic stable fixed points for $T_\mathcal A$ can in fact be avoided by omitting to consider
the two extremal $\mathcal A$-Sturmian intervals: this ensures the
asymptotic bound $\|(\tau_\Gamma^n)'\|_\infty = O(\theta^n)$ as $n\to\infty$, for some
$\theta \in(0,1)$, and the existence of Sturmian transfer functions is then proved as in Lemma \ref{phiexists}.
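In outline, the reason this bound suffices is the following: since $f_\mathcal A$ is Lipschitz continuous (Remark \ref{faremark}), the chain rule gives
$$
\|(f_\mathcal A\circ\tau_\Gamma^n)'\|_\infty
\le \mathrm{Lip}(f_\mathcal A)\,\|(\tau_\Gamma^n)'\|_\infty
= O(\theta^n)\,,
$$
so the series $\sum_{n=1}^\infty (f_\mathcal A\circ\tau_\Gamma^n)'$ defining the derivative of the transfer function converges uniformly, being dominated by a geometric series.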
In the case where $T_\mathcal A$ has indifferent fixed points, it is even possible to consider
extremal $\mathcal A$-Sturmian intervals, as the series defining the Sturmian transfer function is nonetheless convergent.
The existence of Sturmian transfer functions then allows the
remainder of the method of proof to proceed essentially as for matrix pairs in $\mathfrak D$,
ultimately establishing analogues of the main result Theorem \ref{maintheorem}.
train | 0.4.15 | \section{Associating $\mathcal Mathcal A$-Sturmian intervals to parameter values}\mathcal Label{particularsection}
\begin{notation}
For a Sturmian interval $\Gamma\in\mathcal I_\mathcal Mathcal A$,
let $s_\Gamma\in\mathcal S_\mathcal Mathcal A$
denote the $\mathcal Mathcal A$-Sturmian measure supported by $\Gamma$, i.e.~$s_\Gamma$ is
the unique $T_\mathcal Mathcal A$-invariant probability measure whose support is contained in $\Gamma$.
\mathcal End{notation}
\begin{lemma}\mathcal Label{flattenlemma}
Suppose $\mathcal Mathcal A\in \mathcal Mm$.
If $t\in\mathcal Mathbb{R}^+$
and
$\Gamma\in \mathcal I_\mathcal Mathcal A$ are such that
\begin{equation}\mathcal Label{keyflatteningequation}
f_{\mathcal Mathcal A(t)}(T_{A_0}(1)) - f_{\mathcal Mathcal A(t)}(T_{A_1}(0))
= \Delta_\mathcal Mathcal A( \Gamma)\,,
\mathcal End{equation}
then the Lipschitz continuous function
$f_{\mathcal Mathcal A(t)}+\varphi_{\Gamma} - \varphi_{\Gamma}\circ T_\mathcal Mathcal A$ is
equal to the constant value
$\int f_{\mathcal Mathcal A(t)}\, ds_\Gamma$ when restricted to $\Gamma$.
\mathcal End{lemma}
\begin{proof}
By Lemma \ref{flatxai} we know that
$f_{\mathcal A(t)}+\varphi_\Gamma-\varphi_\Gamma\circ T_\mathcal A$
is constant when restricted to
$\Gamma\cap X_{A_0}$, and also constant
when restricted to $\Gamma\cap X_{A_1}$.
To prove that these constant values are the \emph{same},
it suffices to show that $f_{\mathcal A(t)}+\varphi_\Gamma - \varphi_\Gamma\circ T_\mathcal A$
takes the same value at the point $T_{A_0}(1)\in X_{A_0}$ as
it does at the point $T_{A_1}(0)\in X_{A_1}$.
But the equality
$$
\left(f_{\mathcal A(t)}+\varphi_\Gamma - \varphi_\Gamma\circ T_\mathcal A\right)(T_{A_0}(1))
=
\left(f_{\mathcal A(t)}+\varphi_\Gamma - \varphi_\Gamma\circ T_\mathcal A\right)(T_{A_1}(0))
$$
holds if and only if
$$
f_{\mathcal A(t)}(T_{A_0}(1)) - f_{\mathcal A(t)}(T_{A_1}(0)) = \varphi_\Gamma(1)-\varphi_\Gamma(0) - \left(\varphi_\Gamma(T_{A_0}(1)) - \varphi_\Gamma(T_{A_1}(0))\right)\,,
$$
using the fact that $T_\mathcal A(T_{A_0}(1))=1$ and $T_\mathcal A(T_{A_1}(0))=0$; in other words
$f_{\mathcal A(t)}(T_{A_0}(1)) - f_{\mathcal A(t)}(T_{A_1}(0)) = \Delta_\mathcal A( \Gamma)$, which is precisely
the hypothesis
(\ref{keyflatteningequation}).
Finally, integrating against $s_\Gamma$, whose support lies in $\Gamma$: the $T_\mathcal A$-invariance of $s_\Gamma$ gives $\int (\varphi_\Gamma - \varphi_\Gamma\circ T_\mathcal A)\, ds_\Gamma = 0$, so the constant value in question equals $\int f_{\mathcal A(t)}\, ds_\Gamma$.
\end{proof}
\begin{cor}\label{flattencor}
Given $\mathcal A\in \mathcal M$, if $t\in\mathbb{R}^+$
and
$\Gamma\in \mathcal I_\mathcal A$ are such that
\begin{equation}\label{keyflatteningequation1}
\log \left( \left(\frac{a_0+c_0}{b_1+d_1}\right) t^{-1} \right)
= \Delta_\mathcal A(\Gamma) \,,
\end{equation}
then the Lipschitz continuous function
$f_{\mathcal A(t)}+\varphi_{\Gamma} - \varphi_{\Gamma}\circ T_\mathcal A$ is
equal to the constant value
$\int f_{\mathcal A(t)}\, ds_\Gamma$ on $\Gamma$.
\end{cor}
\begin{proof}
By Lemma \ref{flattenlemma} it suffices to show that
$$f_{\mathcal A(t)}(T_{A_0}(1)) - f_{\mathcal A(t)}(T_{A_1}(0))
=
\log \left( \left(\frac{a_0+c_0}{b_1+d_1}\right) t^{-1} \right)\,,$$
and by
Lemma \ref{simple}(iii) this is equivalent to showing that
$$f_{\mathcal A}(T_{A_0}(1)) - f_{\mathcal A}(T_{A_1}(0))
=
\log \left( \frac{a_0+c_0}{b_1+d_1} \right)\,.$$
Substituting $T_{A_0}(1)=\frac{a_0}{a_0+c_0}$ and $T_{A_1}(0)=\frac{b_1}{b_1+d_1}$
into, respectively, the formulae (\ref{explicitfalt}) for $f_\mathcal A$ on $X_{A_0}$ and $X_{A_1}$ yields
\begin{equation}\label{fata01}
f_\mathcal A(T_{A_0}(1)) = \log(a_0+c_0)
\end{equation}
and
\begin{equation}\label{fata10}
f_\mathcal A(T_{A_1}(0)) = \log(b_1+d_1)\,,
\end{equation}
so the result follows.
\end{proof}
In view of equation
(\ref{keyflatteningequation1})
we make the following definition:
\begin{defn}
For $\mathcal A\in\mathcal M$ and $i\in\{0,1\}$, define $t_i=t_i(\mathcal A)$ by
\begin{equation}\label{tiadef}
t_i = t_i(\mathcal A) = \left(\frac{a_0+c_0}{b_1+d_1}\right) e^{-\Delta_\mathcal A(\Gamma_i)}\,,
\end{equation}
so that
\begin{equation*}
\log \left( \left(\frac{a_0+c_0}{b_1+d_1}\right) t_i^{-1} \right)
= \Delta_\mathcal A(\Gamma_i)\,.
\end{equation*}
\end{defn}
\begin{remark}
Since $e^{-\Delta_\mathcal A(\Gamma_0)} < 1 < e^{-\Delta_\mathcal A(\Gamma_1)}$
by (\ref{starrr}), it follows that
\begin{equation*}
t_0(\mathcal A) <t_1(\mathcal A)\,.
\end{equation*}
\end{remark}
\begin{lemma}
For $\mathcal A\in\mathcal M$ and
$i\in\{0,1\}$,
\begin{equation}\label{initialtiaexpressions}
t_i(\mathcal A)
=
\frac{\rho_{A_i} \left( a_0 + \rho_{A_i}(a_0+c_0) \right)}{(1+\rho_{A_i})\left(b_1+\rho_{A_i}(b_1+d_1)\right)} \,.
\end{equation}
\end{lemma}
\begin{proof}
From (\ref{deltagammai}) we see that for $i\in\{0,1\}$,
\begin{equation*}
e^{-\Delta_\mathcal A(\Gamma_i)} =
\frac{ \rho_{A_i} \left( \frac{a_0}{a_0+c_0}+\rho_{A_i}\right)}{(1+\rho_{A_i}) \left(\frac{b_1}{b_1+d_1}+\rho_{A_i}\right)}\,,
\end{equation*}
so that (\ref{tiadef}) gives
\begin{equation}\label{firsttexpression}
t_i(\mathcal A) =
\left(\frac{a_0+c_0}{b_1+d_1}\right) e^{-\Delta_\mathcal A(\Gamma_i)}
=
\frac{\rho_{A_i} \left( a_0 + \rho_{A_i}(a_0+c_0) \right)}{(1+\rho_{A_i})\left(b_1+\rho_{A_i}(b_1+d_1)\right)} \,,
\end{equation}
which is the required expression (\ref{initialtiaexpressions}).
\end{proof}
A consequence is the following property:
\begin{cor}
For $\mathcal A\in \mathcal M$, $t\in\mathbb{R}^+$, and $i\in\{0,1\}$,
\begin{equation}\label{tti}
t_i( \mathcal A(t)) = \frac{t_i(\mathcal A)}{t}\,.
\end{equation}
\end{cor}
\begin{proof}
This follows easily from
(\ref{initialtiaexpressions}),
and the easily verified fact (used only in the proof of the $i=1$ case) that
$\rho_{tA_1}=\rho_{A_1}$.
Specifically, for $i\in\{0,1\}$,
\begin{equation*}
t_i(\mathcal A(t))
=
\frac{\rho_{A_i} \left( a_0 + \rho_{A_i}(a_0+c_0) \right)}{(1+\rho_{A_i})\left(tb_1+\rho_{A_i}(tb_1+td_1)\right)}
=\frac{1}{t}\,
\frac{\rho_{A_i} \left( a_0 + \rho_{A_i}(a_0+c_0) \right)}{(1+\rho_{A_i})\left(b_1+\rho_{A_i}(b_1+d_1)\right)}
= \frac{t_i(\mathcal A)}{t}\,.
\end{equation*}
\end{proof}
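\begin{remark}
We note for later use (see Corollaries \ref{t0largerttheorem} and \ref{t1smallerttheorem} below) that the rescaling identity (\ref{tti}) immediately translates conditions on $t$ into conditions on the rescaled pair: for $t\in\mathbb{R}^+$,
\begin{equation*}
t \le t_0(\mathcal A) \iff t_0(\mathcal A(t))\ge 1
\qquad\text{and}\qquad
t \ge t_1(\mathcal A) \iff t_1(\mathcal A(t))\le 1\,.
\end{equation*}
\end{remark}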
\begin{lemma}
For $\mathcal A\in\mathcal M$, the quantities $t_0(\mathcal A)$ and $t_1(\mathcal A)$ admit the following alternative expressions:
\begin{equation}\label{tiaexpressions}
t_0(\mathcal A)
=
\frac{\det A_0}{ \left( a_0 - b_0 ( 1+\rho_{A_0}^{-1})\right) \left( d_1 +b_1(1+\rho_{A_0}^{-1})\right)}
\end{equation}
and
\begin{equation}\label{tiaexpressionsi1}
t_1(\mathcal A)
=
\frac{ \left(a_0+c_0(1+\rho_{A_1}^{-1})^{-1}\right) \left(a_1-b_1(1+\rho_{A_1}^{-1})\right)}{\det A_1}\,.
\end{equation}
\end{lemma}
\begin{proof}
Since (\ref{firsttexpression}) implies
\begin{equation}\label{intermediatetexpression}
t_0(\mathcal A) =
\frac{a_0 + \rho_{A_0}(a_0+c_0) }{(1+\rho_{A_0})\left(d_1+b_1(1+\rho_{A_0}^{-1})\right)} \,,
\end{equation}
we see that $t_0(\mathcal A)$ is equal to (\ref{tiaexpressions}) if and only if
\begin{equation}\label{ifftexpression}
\frac{a_0 + \rho_{A_0}(a_0+c_0) }{1+\rho_{A_0}}
=
\frac{a_0d_0-b_0c_0}{a_0-b_0(1+\rho_{A_0}^{-1})}\,.
\end{equation}
Clearing fractions in (\ref{ifftexpression}) reveals it to be equivalent to the equation
$$
q_{A_0}(\rho_{A_0}) = \alpha_{A_0} \rho_{A_0}^2+\beta_{A_0}\rho_{A_0}-b_0 =0\,,
$$
which is true by Lemma \ref{rhoquadratic}.
Since (\ref{firsttexpression}) implies
\begin{equation}\label{intermediatetexpression1}
t_1(\mathcal A) =
\frac{\rho_{A_1}\left( a_0+c_0(1+\rho_{A_1}^{-1})^{-1}\right)}{b_1+\rho_{A_1}(b_1+d_1)} \,,
\end{equation}
we see that $t_1(\mathcal A)$ is equal to (\ref{tiaexpressionsi1}) if and only if
\begin{equation}\label{ifftexpression1}
\frac{\rho_{A_1}}{b_1+\rho_{A_1}(b_1+d_1)}
=
\frac{a_1-b_1(1+\rho_{A_1}^{-1})}{\det A_1}\,.
\end{equation}
Clearing fractions in (\ref{ifftexpression1}) reveals it to be equivalent to the equation
$$
q_{A_1}(\rho_{A_1}) = \alpha_{A_1} \rho_{A_1}^2+\beta_{A_1}\rho_{A_1}-b_1 =0\,,
$$
which is true by Lemma \ref{rhoquadratic}.
\end{proof}
\begin{notation}
For $\mathcal A\in\mathcal M$, let $\mathcal T_\mathcal A$ denote the open interval $\left(t_0(\mathcal A), t_1(\mathcal A)\right)$.
\end{notation}
train | 0.4.16 | A consequence is the following property:
\begin{cor}
For $\mathcal Mathcal A\in \mathcal Mm$, $t\in\mathcal Mathbb{R}^+$, and $i\in\{0,1\}$,
\begin{equation}\mathcal Label{tti}
t_i( \mathcal Mathcal A(t)) = \frac{t_i(\mathcal Mathcal A)}{t}\,.
\mathcal End{equation}
\mathcal End{cor}
\begin{proof}
This follows easily from
(\mathcal Ref{initialtiaexpressions}),
and the easily verified fact (used only in the proof of the $i=1$ case) that
$\mathcal Rho_{tA_1}=\mathcal Rho_{A_1}$.
Specifically, for $i\in\{0,1\}$,
\begin{equation*}
t_i(\mathcal Mathcal A(t))
=
\frac{\mathcal Rho_{A_i} \mathcal Left( a_0 + \mathcal Rho_{A_i}(a_0+c_0) \mathcal Right)}{(1+\mathcal Rho_{A_i})\mathcal Left(tb_1+\mathcal Rho_{A_i}(tb_1+td_1)\mathcal Right)}
=\frac{1}{t}
\frac{\mathcal Rho_{A_i} \mathcal Left( a_0 + \mathcal Rho_{A_i}(a_0+c_0) \mathcal Right)}{(1+\mathcal Rho_{A_i})\mathcal Left(b_1+\mathcal Rho_{A_i}(b_1+d_1)\mathcal Right)}
= \frac{t_i(\mathcal Mathcal A)}{t}\,.
\mathcal End{equation*}
\mathcal End{proof}
\begin{lemma}
For $\mathcal Mathcal A\in\mathcal Mm$, the quantities $t_0(\mathcal Mathcal A)$ and $t_1(\mathcal Mathcal A)$ admit the following alternative expressions:
\begin{equation}\mathcal Label{tiaexpressions}
t_0(\mathcal Mathcal A)
=
\frac{\det A_0}{ \mathcal Left( a_0 - b_0 ( 1+\mathcal Rho_{A_0}^{-1})\mathcal Right) \mathcal Left( d_1 +b_1(1+\mathcal Rho_{A_0}^{-1})\mathcal Right)}
\mathcal End{equation}
and
\begin{equation}\mathcal Label{tiaexpressionsi1}
t_1(\mathcal Mathcal A)
=
\frac{ \mathcal Left(a_0+c_0(1+\mathcal Rho_{A_1}^{-1})^{-1}\mathcal Right) \mathcal Left(a_1-b_1(1+\mathcal Rho_{A_1}^{-1})\mathcal Right)}{\det A_1}\,.
\mathcal End{equation}
\mathcal End{lemma}
\begin{proof}
Since (\mathcal Ref{firsttexpression}) implies
\begin{equation}\mathcal Label{intermediatetexpression}
t_0(\mathcal Mathcal A) =
\frac{a_0 + \mathcal Rho_{A_0}(a_0+c_0) }{(1+\mathcal Rho_{A_0})\mathcal Left(d_1+b_1(1+\mathcal Rho_{A_0}^{-1})\mathcal Right)} \,,
\mathcal End{equation}
we see that $t_0(\mathcal Mathcal A)$ is equal to (\mathcal Ref{tiaexpressions}) if and only if
\begin{equation}\mathcal Label{ifftexpression}
\frac{a_0 + \mathcal Rho_{A_0}(a_0+c_0) }{1+\mathcal Rho_{A_0}}
=
\frac{a_0d_0-b_0c_0}{a_0-b_0(1+\mathcal Rho_{A_0}^{-1})}\,.
\mathcal End{equation}
Clearing fractions in (\mathcal Ref{ifftexpression}) reveals it to be equivalent to the equation
$$
q_{A_0}(\mathcal Rho_{A_0}) = \alpha_{A_0} \mathcal Rho_{A_0}^2+\beta_{A_0}\mathcal Rho_{A_0}-b_0 =0\,,
$$
which is true by Lemma \mathcal Ref{rhoquadratic}.
Since (\mathcal Ref{firsttexpression}) implies
\begin{equation}\mathcal Label{intermediatetexpression1}
t_1(\mathcal Mathcal A) =
\frac{\mathcal Rho_{A_1}\mathcal Left( a_0+c_0(1+\mathcal Rho_{A_1}^{-1})^{-1}\mathcal Right)}{b_1+\mathcal Rho_{A_1}(b_1+d_1)} \,,
\mathcal End{equation}
we see that $t_1(\mathcal Mathcal A)$ is equal to (\mathcal Ref{tiaexpressionsi1}) if and only if
\begin{equation}\mathcal Label{ifftexpression1}
\frac{\mathcal Rho_{A_1}}{b_1+\mathcal Rho_{A_1}(b_1+d_1)}
=
\frac{a_1-b_1(1+\mathcal Rho_{A_1}^{-1})}{\det A_1}\,.
\mathcal End{equation}
Clearing fractions in (\mathcal Ref{ifftexpression1}) reveals it to be equivalent to the equation
$$
q_{A_1}(\mathcal Rho_{A_1}) = \alpha_{A_1} \mathcal Rho_{A_1}^2+\beta_{A_1}\mathcal Rho_{A_1}-b_1 =0\,,
$$
which is true by Lemma \mathcal Ref{rhoquadratic}.
\mathcal End{proof}
\begin{notation}
For $\mathcal Mathcal A\in\mathcal Mm$, let $\mathcal T_\mathcal Mathcal A$ denote the open interval $\mathcal Left(t_0(\mathcal Mathcal A), t_1(\mathcal Mathcal A)\mathcal Right)$.
\mathcal End{notation}
\begin{prop}\label{analogue}
Let $\mathcal A\in\mathcal M$.
For each $t \in \mathcal T_\mathcal A$ there exists
an $\mathcal A$-Sturmian interval $\Gamma_\mathcal A(t)\in \mathcal I_\mathcal A$ such that
$f_{\mathcal A(t)}+\varphi_{\Gamma_\mathcal A(t)} - \varphi_{\Gamma_\mathcal A(t)}\circ T_\mathcal A$ is
equal to the constant value
$\int f_{\mathcal A(t)}\, ds_{\Gamma_\mathcal A(t)}$ on $\Gamma_\mathcal A(t)$.
\end{prop}
\begin{proof}
First we show that $\Delta_\mathcal A: \Gamma \mapsto \Delta_\mathcal A(\Gamma)$ is continuous.
The formula (\ref{Deltadefnalt}) defines
\begin{align*}
\Delta_\mathcal A(\Gamma)
& =
\varphi_\Gamma(1)
-
\varphi_\Gamma(T_{A_0}(1)) + \varphi_\Gamma(T_{A_1}(0)) \\
& =
\varphi_\Gamma(1)
-
\varphi_\Gamma\left(\frac{a_0}{a_0+c_0}\right) + \varphi_\Gamma\left(\frac{b_1}{b_1+d_1}\right) \,,
\end{align*}
so the continuity of $ \Delta_\mathcal A$ will follow from the fact
that $\Gamma\mapsto \varphi_\Gamma(z)$ is continuous for each $z\in X$.
To see this, first note that Lemma \ref{phiexists} gives
\begin{equation*}
\varphi_\Gamma(z)
= \varphi_\Gamma(z)-\varphi_\Gamma(0)
=\int_0^z \varphi_\Gamma'
= \int_0^z \sum_{n=1}^\infty (f_\mathcal A \circ \tau_\Gamma^n)' \,,
\end{equation*}
and re-writing this integral as
\begin{equation*}
\sum_{n=1}^\infty \int_0^z (f_\mathcal A \circ \tau_\Gamma^n)'
= \sum_{n=1}^\infty \int_{\tau^n_\Gamma[0,z]} f_\mathcal A'
= \sum_{n=1}^\infty \int \mathbbm{1}_{\tau^n_\Gamma[0,z]} f_\mathcal A'
=\int f_\mathcal A' \sum_{n=1}^\infty \mathbbm{1}_{\tau^n_\Gamma[0,z]}
\end{equation*}
gives
\begin{equation}\label{phigammaexpression}
\varphi_\Gamma(z) = \int f_\mathcal A' H_z(\Gamma) \,,
\end{equation}
where
\begin{equation*}
H_z(\Gamma) = \sum_{n=1}^\infty \mathbbm{1}_{\tau^n_\Gamma[0,z]} \,.
\end{equation*}
Now each map $H_{z,n}:\Gamma\mapsto \mathbbm{1}_{\tau^n_\Gamma[0,z]}$
clearly belongs to $C([\Gamma_0,\Gamma_1],L^1)$, the space of
continuous functions from $[\Gamma_0,\Gamma_1]$ to $L^1=L^1(dx)$,
and $\sum_{n=1}^\infty H_{z,n}$
is convergent in
$C([\Gamma_0,\Gamma_1],L^1)$, so $H_z(\cdot) \in C([\Gamma_0,\Gamma_1],L^1)$.
It then follows from (\ref{phigammaexpression}) that
$\Gamma\mapsto \varphi_\Gamma(z)$ is continuous, as required.
Now
note that
the function $G_\mathcal A$ defined by
\begin{equation}\label{gadef}
G_\mathcal A(t) = \log \left( \left(\frac{a_0+c_0}{b_1+d_1}\right) t^{-1} \right)
\end{equation}
is strictly decreasing, since $a_0,c_0,b_1,d_1>0$,
so if $t \in \mathcal T_\mathcal A = (t_0(\mathcal A),t_1(\mathcal A))$ then
\begin{equation}\label{ginside}
G_\mathcal A(t)\in \left(G_\mathcal A(t_1(\mathcal A)),G_\mathcal A(t_0(\mathcal A))\right) = (\Delta_\mathcal A(\Gamma_1),\Delta_\mathcal A(\Gamma_0))\,.
\end{equation}
Now $\Delta_\mathcal A$ is continuous, so applying
the intermediate value theorem to this function (defined on the interval $[\Gamma_0,\Gamma_1]$) we see that in view of (\ref{ginside}), there exists an $\mathcal A$-Sturmian interval, which we denote by $\Gamma_\mathcal A(t)$, such that
$\Gamma_\mathcal A(t)\in (\Gamma_0,\Gamma_1)$
and
\begin{equation}\label{deltatdefeq}
\Delta_\mathcal A(\Gamma_\mathcal A(t))=G_\mathcal A(t)\,.
\end{equation}
In other words,
\begin{equation*}
\log \left( \left(\frac{a_0+c_0}{b_1+d_1}\right) t^{-1} \right)
= \Delta_\mathcal A(\Gamma_\mathcal A(t)) \,,
\end{equation*}
so that Corollary \ref{flattencor} implies that
$f_{\mathcal A(t)}+\varphi_{\Gamma_\mathcal A(t)} - \varphi_{\Gamma_\mathcal A(t)}\circ T_\mathcal A
= \int f_{\mathcal A(t)}\, ds_{\Gamma_\mathcal A(t)}$ on $\Gamma_\mathcal A(t)$, as required.
\end{proof}
train | 0.4.17 | \section{The case when one matrix dominates}\mathcal Label{dominatessec}
It will be useful to record the value of the induced function
$f_\mathcal Mathcal A$ at the two fixed points of $T_\mathcal Mathcal A$:
\begin{lemma}\mathcal Label{fapai}
For $\mathcal Mathcal A\in\mathcal Mm$ and $i\in\{0,1\}$,
\begin{equation*}
f_\mathcal Mathcal A(p_{A_i})
=
\mathcal Log\mathcal Left( \frac{\det A_i}{a_i - b_i ( 1+\mathcal Rho_{A_i}^{-1})}
\mathcal Right)
=
\mathcal Log\mathcal Left( \frac{\det A_i}{a_i-b_i -\frac{1}{2}(\beta_{A_i}+\gamma_{A_i})}\mathcal Right)
\,.
\mathcal End{equation*}
\mathcal End{lemma}
\begin{proof}
Straightforward computation using (\mathcal Ref{rhoaredone}),
(\mathcal Ref{paformularedone}),
and (\mathcal Ref{explicitfalt}).
\mathcal End{proof}
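\begin{remark}
Comparing the two displayed expressions, the equality of their denominators amounts to the identity $b_i\,\rho_{A_i}^{-1} = \frac{1}{2}(\beta_{A_i}+\gamma_{A_i})$, i.e.~$\rho_{A_i}=2b_i/(\beta_{A_i}+\gamma_{A_i})$ (cf.~(\ref{rhoaredone})).
\end{remark>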
We first consider a sufficient condition for
the projectively concave matrix
$A_0$ to be the dominant matrix of the pair $\mathcal A=(A_0,A_1)$:
\begin{theorem}\label{t0larger1theorem}
If $\mathcal A\in\mathcal M$ is such that
\begin{equation}\label{t0larger1}
t_0(\mathcal A) \ge 1\,,
\end{equation}
then the Dirac measure at the fixed point $p_{A_0}$ is the unique $f_\mathcal A$-maximizing measure;
in particular, the joint spectral radius of $\mathcal A$ is equal to the spectral radius of $A_0$.
\end{theorem}
\begin{proof}
Choosing
$\varphi(x) = \varphi_{\Gamma_0}(x) = \log \left(\frac{x+\rho_{A_0}}{\rho_{A_0}}\right)$
ensures,
by Lemma \ref{flatxai},
that
$f_\mathcal A+\varphi-\varphi\circ T_\mathcal A$ is constant when restricted to $X_{A_0} = \Gamma_0$,
and the constant value assumed by this function is clearly $f_\mathcal A(p_{A_0})$.
The result will follow if we can show that $f_\mathcal A+\varphi-\varphi\circ T_\mathcal A$ is strictly decreasing
on $X_{A_1}$,
and that the value $(f_\mathcal A+\varphi-\varphi\circ T_\mathcal A)(\frac{b_1}{b_1+d_1})$
at the left endpoint of $X_{A_1}$ is no greater than the constant value $f_\mathcal A(p_{A_0})$.
This is because
the Dirac measure $\delta_{p_{A_0}}$ will then clearly be the unique maximizing measure for
$f_\mathcal A+\varphi-\varphi\circ T_\mathcal A$, and hence the unique maximizing measure for $f_\mathcal A$.
To compute the value $(f_\mathcal A+\varphi-\varphi\circ T_\mathcal A)(\frac{b_1}{b_1+d_1})$ we recall
from (\ref{fata10}) that
$$
f_\mathcal A\left( \frac{b_1}{b_1+d_1}\right)
= f_\mathcal A(T_{A_1}(0)) = \log(b_1+d_1)\,,
$$
and note that
$$
\varphi\left(T_\mathcal A\left(\frac{b_1}{b_1+d_1}\right)\right)=\varphi(0) = 0
\,,
$$
and
$$
\varphi\left( \frac{b_1}{b_1+d_1}\right)=\log\left( \frac{ \frac{b_1}{b_1+ d_1} +\rho_{A_0}}{\rho_{A_0}}\right)
=\log\left(\frac{ d_1 +b_1(1+\rho_{A_0}^{-1})}{b_1+d_1}\right)
\,.
$$
Therefore
\begin{equation}\label{endpointformula}
(f_\mathcal A+\varphi-\varphi\circ T_\mathcal A)\left(\frac{b_1}{b_1+d_1}\right)
=
\log \left( d_1 +b_1(1+\rho_{A_0}^{-1})\right)
\,.
\end{equation}
By Lemma \ref{fapai},
\begin{equation}\label{fpt0}
f_\mathcal A(p_{A_0})
=
\log\left( \frac{\det A_0}{a_0 - b_0 ( 1+\rho_{A_0}^{-1})}
\right)\,,
\end{equation}
so (\ref{endpointformula}) and (\ref{fpt0}) imply that the desired inequality
$$(f_\mathcal A+\varphi-\varphi\circ T_\mathcal A)\left(\frac{b_1}{b_1+d_1}\right)\le f_\mathcal A(p_{A_0})$$
is precisely the hypothesis (\ref{t0larger1}),
since
\begin{equation*}
t_0(\mathcal A)
=
\frac{\det A_0}{ \left( a_0 - b_0 ( 1+\rho_{A_0}^{-1})\right) \left( d_1 +b_1(1+\rho_{A_0}^{-1})\right)}
\end{equation*}
by (\ref{tiaexpressions}).
It remains to show that $f_\mathcal A+\varphi-\varphi\circ T_\mathcal A$ is strictly decreasing on $X_{A_1}$.
Suppose $x\in X_{A_1}$. We know by (\ref{explicitfalt}) that
$$
f_\mathcal A(x) = \log \left( \frac{\det A_1}{-\alpha_{A_1}(x+ \sigma_{A_1})} \right)\,.
$$
Now
$$
\varphi(x) = \log \left( \frac{x+\rho_{A_0}}{\rho_{A_0}}\right)\,,
$$
so
$$
\varphi(T_\mathcal A(x))
=
\log \left( \frac{S_{A_1}(x)+\rho_{A_0}}{\rho_{A_0}}\right)\,,
$$
and therefore
$$
(f_\mathcal A+\varphi-\varphi\circ T_\mathcal A)(x)
=
\log\left( \frac{\det A_1 (x+\rho_{A_0})}{-\alpha_{A_1}(x+\sigma_{A_1})(S_{A_1}(x)+\rho_{A_0})}\right)\,.
$$
It therefore suffices to show that
\begin{equation}\label{suffdec}
x\mapsto \frac{x+\rho_{A_0}}{-\alpha_{A_1}(x+\sigma_{A_1})(S_{A_1}(x)+\rho_{A_0})}
\end{equation}
is strictly decreasing.
For this note that
$$
S_{A_1}(x)+\rho_{A_0}
= \frac{(b_1+d_1)x-b_1}{-\alpha_{A_1}(x+\sigma_{A_1})}+\rho_{A_0}
=
\frac{ (b_1+d_1-\alpha_{A_1}\rho_{A_0})x +(a_1-b_1)\rho_{A_0}-b_1}{-\alpha_{A_1}(x+\sigma_{A_1})}
$$
so (\ref{suffdec}) is seen to be the M\"obius function
\begin{equation*}
x\mapsto \frac{x+\rho_{A_0}}{ (b_1+d_1-\alpha_{A_1}\rho_{A_0})x +(a_1-b_1)\rho_{A_0}-b_1} \,,
\end{equation*}
which is known to be strictly decreasing by Lemma \ref{techderivlemma}.
\end{proof}
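\begin{remark}
Unwinding the above proof: after exponentiating, the endpoint comparison reads
\begin{equation*}
d_1+b_1(1+\rho_{A_0}^{-1})
\;\le\;
\frac{\det A_0}{a_0-b_0(1+\rho_{A_0}^{-1})}\,,
\end{equation*}
which, in view of (\ref{tiaexpressions}), is a restatement of the hypothesis $t_0(\mathcal A)\ge 1$; this makes transparent how the threshold value $t_0(\mathcal A)$ arises.
\end{remark}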
As a consequence of Theorem \ref{t0larger1theorem} we obtain:
\begin{cor}\label{t0largerttheorem}
If $\mathcal A\in\mathcal M$ and $t\in \mathbb{R}^+$ are such that
\begin{equation}\label{t0largert}
t \le t_0(\mathcal A) \,,
\end{equation}
then the Dirac measure at the fixed point $p_{A_0}$ is the unique $f_{\mathcal A(t)}$-maximizing measure;
in particular, the joint spectral radius of $\mathcal A(t)$ is equal to the spectral radius of $A_0$.
\end{cor}
\begin{proof}
The assumption (\ref{t0largert}) means, using (\ref{tti}),
that $t_0(\mathcal A(t))\ge 1$, so the result follows by applying
Theorem \ref{t0larger1theorem}
with $\mathcal A$ replaced by $\mathcal A(t)$.
\end{proof}
We now turn to an analogous sufficient condition for the projectively convex matrix $A_1$
to be dominant:
\begin{theorem}\label{t1smaller1theorem}
If $\mathcal A\in\mathcal M$ is such that
\begin{equation}\label{t1smaller1}
t_1(\mathcal A) \le 1\,,
\end{equation}
then the Dirac measure at the fixed point $p_{A_1}$ is the unique $f_\mathcal A$-maximizing measure;
in particular, the joint spectral radius of $\mathcal A$ is equal to the spectral radius of $A_1$.
\end{theorem}
\begin{proof}
Choosing
$\varphi(x) = \varphi_{\Gamma_1}(x) = \log \left(\frac{x+\rho_{A_1}}{\rho_{A_1}}\right)$
ensures,
by Lemma \ref{flatxai},
that
$f_\mathcal A+\varphi-\varphi\circ T_\mathcal A$ is constant when restricted to $X_{A_1} = \Gamma_1$,
and the constant value assumed by this function is clearly $f_\mathcal A(p_{A_1})$.
The result will follow if we can show that $f_\mathcal A+\varphi-\varphi\circ T_\mathcal A$ is strictly increasing
on $X_{A_0}$,
and that the value $(f_\mathcal A+\varphi-\varphi\circ T_\mathcal A)(\frac{a_0}{a_0+c_0})$
at the right endpoint of $X_{A_0}$ is no greater than the constant value $f_\mathcal A(p_{A_1})$.
This is because
the Dirac measure $\delta_{p_{A_1}}$ will then clearly be the unique maximizing measure for
$f_\mathcal A+\varphi-\varphi\circ T_\mathcal A$, and hence the unique maximizing measure for $f_\mathcal A$.
To compute the value $(f_\mathcal A+\varphi-\varphi\circ T_\mathcal A)(\frac{a_0}{a_0+c_0})$ we recall
from (\ref{fata01}) that
$$
f_\mathcal A\left( \frac{a_0}{a_0+c_0}\right)
= f_\mathcal A(T_{A_0}(1)) = \log(a_0+c_0)\,,
$$
and note that
$$
\varphi\left(T_\mathcal A\left(\frac{a_0}{a_0+c_0}\right)\right)=\varphi(1) = \log\left( \frac{1+\rho_{A_1}}{\rho_{A_1}}\right)
= \log(1+\rho_{A_1}^{-1})
\,,
$$
and
$$
\varphi\left( \frac{a_0}{a_0+c_0}\right)=\log\left( \frac{ \frac{a_0}{a_0+c_0} +\rho_{A_1}}{\rho_{A_1}}\right)
=\log\left(\frac{ c_0 +a_0(1+\rho_{A_1}^{-1})}{a_0+c_0}\right)
\,.
$$
Therefore
\begin{equation}\label{endpointformula2}
(f_\mathcal A+\varphi-\varphi\circ T_\mathcal A)\left(\frac{a_0}{a_0+c_0}\right)
=
\log \left( a_0 +c_0(1+\rho_{A_1}^{-1})^{-1} \right)
\,.
\end{equation}
By Lemma \ref{fapai},
\begin{equation}\label{fpt02}
f_\mathcal A(p_{A_1})
=
\log\left( \frac{\det A_1}{a_1 - b_1 ( 1+\rho_{A_1}^{-1})}
\right)\,,
\end{equation}
so (\ref{endpointformula2}) and (\ref{fpt02}) imply that the desired inequality
$$(f_\mathcal A+\varphi-\varphi\circ T_\mathcal A)\left(\frac{a_0}{a_0+c_0}\right)\le f_\mathcal A(p_{A_1})$$
is precisely the hypothesis (\ref{t1smaller1}),
since
\begin{equation*}
t_1(\mathcal A)
=
\frac{ \left(a_0+c_0(1+\rho_{A_1}^{-1})^{-1}\right) \left(a_1-b_1(1+\rho_{A_1}^{-1})\right)}{\det A_1}
\end{equation*}
by (\ref{tiaexpressionsi1}).
train | 0.4.18 | As a consequence of Theorem \mathcal Ref{t0larger1theorem} we obtain:
\begin{cor}\mathcal Label{t0largerttheorem}
If $\mathcal Mathcal A\in\mathcal Mm$ and $t\in \mathcal Mathbb{R}^+$ are such that
\begin{equation}\mathcal Label{t0largert}
t \mathcal Le t_0(\mathcal Mathcal A) \,,
\mathcal End{equation}
then the Dirac measure at the fixed point $p_{A_0}$ is the unique $f_{\mathcal Mathcal A(t)}$-maximizing measure;
in particular, the joint spectral radius of $\mathcal Mathcal A(t)$ is equal to the spectral radius of $A_0$.
\mathcal End{cor}
\begin{proof}
The assumption (\mathcal Ref{t0largert}) means, using (\mathcal Ref{tti}),
that $t_0(\mathcal Mathcal A(t))\ge 1$, so the result follows by applying
Theorem \mathcal Ref{t0larger1theorem}
with $\mathcal Mathcal A$ replaced by $\mathcal Mathcal A(t)$.
\mathcal End{proof}
We now turn to an analogous sufficient condition for the projectively convex matrix $A_1$
to be dominant:
\begin{theorem}\mathcal Label{t1smaller1theorem}
If $\mathcal Mathcal A\in\mathcal Mm$ is such that
\begin{equation}\mathcal Label{t1smaller1}
t_1(\mathcal Mathcal A) \mathcal Le 1\,,
\mathcal End{equation}
then the Dirac measure at the fixed point $p_{A_1}$ is the unique $f_\mathcal Mathcal A$-maximizing measure;
in particular, the joint spectral radius of $\mathcal Mathcal A$ is equal to the spectral radius of $A_1$.
\mathcal End{theorem}
\begin{proof}
Choosing
$\varphi(x) = \varphi_{\Gamma_1}(x) = \mathcal Log \mathcal Left(\frac{x+\mathcal Rho_{A_1}}{\mathcal Rho_{A_1}}\mathcal Right)$
ensures,
by Lemma \mathcal Ref{flatxai},
that
$f_\mathcal Mathcal A+\varphi-\varphi\circ T_\mathcal Mathcal A$ is constant when restricted to $X_{A_1} = \Gamma_1$,
and the constant value assumed by this function is clearly $f_\mathcal Mathcal A(p_{A_1})$.
The result will follow if we can show that $f_\mathcal Mathcal A+\varphi-\varphi\circ T_\mathcal Mathcal A$ is strictly increasing
on $X_{A_0}$,
and that the value $(f_\mathcal Mathcal A+\varphi-\varphi\circ T_\mathcal Mathcal A)(\frac{a_0}{a_0+c_0})$
at the right endpoint of $X_{A_0}$ is no greater than the constant value $f_\mathcal Mathcal A(p_{A_1})$.
This is because
the Dirac measure $\delta_{p_{A_1}}$ will then clearly be the unique maximizing measure for
$f_\mathcal Mathcal A+\varphi-\varphi\circ T_\mathcal Mathcal A$, and hence the unique maximizing measure for $f_\mathcal Mathcal A$.
To compute the value $(f_\mathcal Mathcal A+\varphi-\varphi\circ T_\mathcal Mathcal A)(\frac{a_0}{a_0+c_0})$ we recall
from (\mathcal Ref{fata01}) that
$$
f_\mathcal Mathcal A\mathcal Left( \frac{a_0}{a_0+c_0}\mathcal Right)
= f_\mathcal Mathcal A(T_{A_0}(1)) = \mathcal Log(a_0+c_0)\,,
$$
and note that
$$
\varphi\mathcal Left(T\mathcal Left(\frac{a_0}{a_0+c_0}\mathcal Right)\mathcal Right)=\varphi(1) = \mathcal Log\mathcal Left( \frac{1+\mathcal Rho_{A_1}}{\mathcal Rho_{A_1}}\mathcal Right)
= \mathcal Log(1+\mathcal Rho_{A_1}^{-1})
\,,
$$
and
$$
\varphi\mathcal Left( \frac{a_0}{a_0+c_0}\mathcal Right)=\mathcal Log\mathcal Left( \frac{ \frac{a_0}{a_0+c_0} +\mathcal Rho_{A_1}}{\mathcal Rho_{A_1}}\mathcal Right)
=\mathcal Log\mathcal Left(\frac{\mathcal Left( c_0 +a_0(1+\mathcal Rho_{A_1}^{-1})\mathcal Right)}{a_0+c_0}\mathcal Right)
\,.
$$
Therefore
\begin{equation}\mathcal Label{endpointformula2}
(f_\mathcal Mathcal A+\varphi-\varphi\circ T_\mathcal Mathcal A)\mathcal Left(\frac{a_0}{a_0+c_0}\mathcal Right)
=
\mathcal Log \mathcal Left( a_0 +c_0(1+\mathcal Rho_{A_1}^{-1})^{-1} \mathcal Right)
\,.
\mathcal End{equation}
By Lemma \mathcal Ref{fapai},
\begin{equation}\mathcal Label{fpt02}
f_\mathcal Mathcal A(p_{A_1})
=
\mathcal Log\mathcal Left( \frac{\det A_1}{a_1 - b_1 ( 1+\mathcal Rho_{A_1}^{-1})}
\mathcal Right)\,,
\mathcal End{equation}
so (\mathcal Ref{endpointformula2}) and (\mathcal Ref{fpt02}) imply that the desired inequality
$$(f_\mathcal Mathcal A+\varphi-\varphi\circ T_\mathcal Mathcal A)\mathcal Left(\frac{a_0}{a_0+c_0}\mathcal Right)\mathcal Le f_\mathcal Mathcal A(p_{A_1})$$
is precisely the hypothesis (\mathcal Ref{t1smaller1}),
since
\begin{equation*}
t_1(\mathcal Mathcal A)
=
\frac{ \mathcal Left(a_0+c_0(1+\mathcal Rho_{A_1}^{-1})^{-1}\mathcal Right) \mathcal Left(a_1-b_1(1+\mathcal Rho_{A_1}^{-1})\mathcal Right)}{\det A_1}
\mathcal End{equation*}
by (\mathcal Ref{tiaexpressions}).
It remains to show that $f_\mathcal A+\varphi-\varphi\circ T_\mathcal A$ is strictly increasing on $X_{A_0}$.
Suppose $x\in X_{A_0}$. We know by (\ref{explicitfalt}) that
$$
f_\mathcal A(x) = \log \left( \frac{\det A_0}{-\alpha_{A_0}(x+ \sigma_{A_0})} \right)\,.
$$
Now
$$
\varphi(x) = \log \left( \frac{x+\rho_{A_1}}{\rho_{A_1}}\right)\,,
$$
so
$$
\varphi(T_\mathcal A(x))
=
\log \left( \frac{S_{A_0}(x)+\rho_{A_1}}{\rho_{A_1}}\right)\,,
$$
and therefore
$$
(f_\mathcal A+\varphi-\varphi\circ T_\mathcal A)(x)
=
\log\left( \frac{\det A_0 (x+\rho_{A_1})}{-\alpha_{A_0}(x+\sigma_{A_0})(S_{A_0}(x)+\rho_{A_1})}\right)\,.
$$
It therefore suffices to show that
\begin{equation}\label{suffinc}
x\mapsto \frac{x+\rho_{A_1}}{-\alpha_{A_0}(x+\sigma_{A_0})(S_{A_0}(x)+\rho_{A_1})}
\end{equation}
is strictly increasing.
For this note that
$$
S_{A_0}(x)+\rho_{A_1}
= \frac{(b_0+d_0)x-b_0}{-\alpha_{A_0}(x+\sigma_{A_0})}+\rho_{A_1}
=
\frac{ (b_0+d_0-\alpha_{A_0}\rho_{A_1})x +(a_0-b_0)\rho_{A_1}-b_0}{-\alpha_{A_0}(x+\sigma_{A_0})}
$$
so (\ref{suffinc}) is seen to be the M\"obius function
\begin{equation*}
x\mapsto \frac{x+\rho_{A_1}}{ (b_0+d_0-\alpha_{A_0}\rho_{A_1})x +(a_0-b_0)\rho_{A_1}-b_0} \,,
\end{equation*}
which is known to be strictly increasing by Lemma \ref{techderivlemma}.
\end{proof}
As a consequence of Theorem \ref{t1smaller1theorem}
we obtain:
\begin{cor}\label{t1smallerttheorem}
If $\mathcal A\in\mathcal M$ and $t\in \mathbb{R}^+$ are such that
\begin{equation}\label{t1smallert}
t \ge t_1(\mathcal A) \,,
\end{equation}
then the Dirac measure at the fixed point $p_{A_1}$ is the unique $f_{\mathcal A(t)}$-maximizing measure;
in particular, the joint spectral radius of $\mathcal A(t)$ is equal to the spectral radius of $tA_1$.
\end{cor}
\begin{proof}
The assumption (\ref{t1smallert}) means, using (\ref{tti}),
that $t_1(\mathcal A(t))\le 1$, so the result follows by applying
Theorem \ref{t1smaller1theorem}
with $\mathcal A$ replaced by $\mathcal A(t)$.
\end{proof}
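\begin{remark}
Corollaries \ref{t0largerttheorem} and \ref{t1smallerttheorem} together settle the maximization problem for all parameters $t$ outside the open interval $\mathcal T_\mathcal A=(t_0(\mathcal A),t_1(\mathcal A))$: there the maximizing measure is a Dirac measure at a fixed point, i.e.~a Sturmian measure of extreme parameter ($0$ or $1$). The parameters $t\in\mathcal T_\mathcal A$ are treated in Theorem \ref{thm6} below.
\end{remark}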
train | 0.4.19 | \section{Sturmian maximizing measures}\mathcal Label{technicalsection}
It is at this point that we make the extra hypothesis
that the matrix pair $\mathcal Mathcal A$ lies in the class $\mathfrak D \subset \mathcal Mm$.
By Lemma \mathcal Ref{posnegf}(ii) we know that if $\mathcal Mathcal A\in\mathcal Mm$ then $f_\mathcal Mathcal A$ is strictly increasing on $X_{A_0}$ and strictly decreasing on $X_{A_1}$; the following result asserts that
if we make the stronger hypothesis that $\mathcal Mathcal A\in\mathfrak D$ then
these monotonicity properties
are inherited by all functions formed by adding a Sturmian transfer function $\varphi_\Gamma$ to $f_\mathcal Mathcal A$.
\begin{prop}\mathcal Label{increasingdecreasingprop}
Let $\mathcal Mathcal A\in\mathfrak D$.
For each $\mathcal Mathcal A$-Sturmian interval $\Gamma\in \mathcal I_\mathcal Mathcal A$,
the function $f_\mathcal Mathcal A+\varphi_\Gamma:X_\mathcal Mathcal A\mathcal To\mathcal Mathbb{R}$ is strictly increasing on $X_{A_0}$,
and strictly decreasing on $X_{A_1}$.
\mathcal End{prop}
\begin{proof}
First suppose $x\in X_{A_0}$.
Let $0=i_0<i_1<i_2<\ldots$ be the sequence of all
integers $i\ge 0$ such that $\tau_\Gamma^{i}(x) \in X_{A_0}$.
For $k\ge0$, writing $z=\tau_\Gamma^{i_k}(x)$ we see that
if $1\le i<i_{k+1}-i_k$ then
$\tau_\Gamma^i(z)\in X_{A_1}$,
and thus
$\tau_\Gamma^i(z)=T_{A_1}^i(z)$, so that
\begin{equation}\label{tauta1pos}
\sum_{i=0}^{i_{k+1}-i_k-1} (f_\mathcal A\circ \tau_\Gamma^i)'(z)
=
f_\mathcal A'(z) + \sum_{i=1}^{i_{k+1}-i_k-1} (f_\mathcal A\circ T_{A_1}^i)'(z)
>
f_\mathcal A'(z) + \sum_{i=1}^{\infty} (f_\mathcal A\circ T_{A_1}^i)'(z)\,,
\end{equation}
where the inequality is because
$(f_\mathcal A\circ T_{A_1}^i)'(z) <0$ for all $i\ge1$, by Lemma \ref{posnegf}.
Now
$z\in X_{A_0}$, so
(\ref{fderivative}) in Lemma \ref{simple}(iii)
gives
$f_\mathcal A' (z)= -(z+\sigma_{A_0})^{-1}$ (which is positive),
and formula
(\ref{sumformula}) from Corollary \ref{usefulcor} gives
$\sum_{i=1}^\infty (f_\mathcal A\circ T_{A_1}^i)'(z) = (z+\rho_{A_1})^{-1}$ (which is negative), so
(\ref{tauta1pos}) implies that
\begin{equation}\label{sigrho}
\sum_{i=0}^{i_{k+1}-i_k-1} (f_\mathcal A\circ \tau_\Gamma^i)'(z)
>
\frac{-1}{z+\sigma_{A_0}} + \frac{1}{z+\rho_{A_1}} \,.
\end{equation}
However $\mathcal A\in\mathfrak D$, so $\rho_{A_1}<\sigma_{A_0}$, and therefore the right-hand side of
(\ref{sigrho}) is positive, so we have shown that
\begin{equation*}
\sum_{i=0}^{i_{k+1}-i_k-1} (f_\mathcal A\circ \tau_\Gamma^i)'(z)
>
0\,.
\end{equation*}
It follows that for all $k\ge0$,
$$
\sum_{n=i_k}^{i_{k+1}-1} (f_\mathcal A\circ \tau_\Gamma^n)'(x)
=
(\tau_\Gamma^{i_k})'(x) \sum_{i=0}^{i_{k+1}-i_k-1} (f_\mathcal A\circ \tau_\Gamma^i)'(z) > 0\,,
$$
and hence
$$
(f_\mathcal A+\varphi_\Gamma)'(x)
=\sum_{n=0}^\infty (f_\mathcal A\circ \tau_\Gamma^n)'(x)
=\sum_{k=0}^\infty \sum_{n=i_k}^{i_{k+1}-1} (f_\mathcal A\circ \tau_\Gamma^n)'(x) >0\,,
$$
so $f_\mathcal A+\varphi_\Gamma$ is strictly increasing on $X_{A_0}$.
Now suppose $x\in X_{A_1}$. The proof proceeds analogously to the above.
Let $0=j_0<j_1<j_2<\ldots$ be the sequence of all
integers $j\ge 0$ such that $\tau_\Gamma^{j}(x) \in X_{A_1}$.
For $k\ge0$, writing $z=\tau_\Gamma^{j_k}(x)$ we see that
if $1\le i<j_{k+1}-j_k$ then
$\tau_\Gamma^i(z)\in X_{A_0}$,
and thus
$\tau_\Gamma^i(z)=T_{A_0}^i(z)$, so that
\begin{equation}\label{sigrho2}
\sum_{i=0}^{j_{k+1}-j_k-1} (f_\mathcal A\circ \tau_\Gamma^i)'(z)
=
f_\mathcal A'(z) + \sum_{i=1}^{j_{k+1}-j_k-1} (f_\mathcal A\circ T_{A_0}^i)'(z)
<
f_\mathcal A'(z) + \sum_{i=1}^{\infty} (f_\mathcal A\circ T_{A_0}^i)'(z) \,,
\end{equation}
using the fact that
$(f_\mathcal A\circ T_{A_0}^i)'(z) >0$ for all $i\ge1$, by Lemma \ref{posnegf}.
The right-hand side of (\ref{sigrho2}) can be written as
$-(z+\sigma_{A_1})^{-1} + (z+\rho_{A_0})^{-1}$
using Lemma \ref{simple}(iii)
and
Corollary \ref{usefulcor},
and this is strictly negative since $\sigma_{A_1}<\rho_{A_0}$
because $\mathcal A\in\mathfrak D$,
so we have shown that
\begin{equation*}
\sum_{i=0}^{j_{k+1}-j_k-1} (f_\mathcal A\circ \tau_\Gamma^i)'(z)
<
\frac{-1}{z+\sigma_{A_1}} + \frac{1}{z+\rho_{A_0}} < 0\,.
\end{equation*}
It follows that for all $k\ge0$,
$$
\sum_{n=j_k}^{j_{k+1}-1} (f_\mathcal A\circ \tau_\Gamma^n)'(x)
=
(\tau_\Gamma^{j_k})'(x) \sum_{i=0}^{j_{k+1}-j_k-1} (f_\mathcal A\circ \tau_\Gamma^i)'(z) < 0\,,
$$
and hence
$$
(f_\mathcal A+\varphi_\Gamma)'(x)
=\sum_{n=0}^\infty (f_\mathcal A\circ \tau_\Gamma^n)'(x)
=\sum_{k=0}^\infty \sum_{n=j_k}^{j_{k+1}-1} (f_\mathcal A\circ \tau_\Gamma^n)'(x) <0\,,
$$
so $f_\mathcal A+\varphi_\Gamma$ is strictly decreasing on $X_{A_1}$.
\end{proof}
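\begin{remark}
Note where the hypothesis $\mathcal A\in\mathfrak D$ (rather than merely $\mathcal A\in\mathcal M$) enters the above proof: it is used exactly twice, through the two inequalities
\begin{equation*}
\rho_{A_1}<\sigma_{A_0}
\qquad\text{and}\qquad
\sigma_{A_1}<\rho_{A_0}\,,
\end{equation*}
which guarantee the strict signs of the right-hand sides of (\ref{sigrho}) and (\ref{sigrho2}) respectively.
\end{remark}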
\begin{theorem}\label{thm6}
Let $\mathcal A\in\mathfrak D$ and $t\in \mathcal T_\mathcal A= (t_0(\mathcal A),t_1(\mathcal A))$.
The $\mathcal A$-Sturmian measure
supported by
the $\mathcal A$-Sturmian interval $\Gamma_\mathcal A(t)$ is the unique maximizing measure
for $f_{\mathcal A(t)}$; thus the corresponding Sturmian measure on $\Omega = \{0,1\}^{\mathbb N}$ is the unique $\mathcal A(t)$-maximizing measure.
\end{theorem}
\begin{proof}
Let us write $\varphi = \varphi_{\Gamma_\mathcal A(t)}$ and $T=T_\mathcal A=T_{\mathcal A(t)}$.
We know that
$f_{\mathcal A(t)}+\varphi-\varphi\circ T$
is a constant function when restricted to $\Gamma_\mathcal A(t)=: [\gamma_t^-,\gamma_t^+] \cap X_\mathcal A$,
by Proposition \ref{analogue}.
In particular,
$$(f_{\mathcal A(t)}+\varphi-\varphi\circ T)\left(\gamma_t^-\right)
=
(f_{\mathcal A(t)}+\varphi-\varphi\circ T)\left(\gamma_t^+\right)\,,
$$
and because
$T(\gamma_t^-)=T(\gamma_t^+)$,
we deduce that
\begin{equation}\label{equality}
(f_{\mathcal A(t)}+\varphi)\left(\gamma_t^-\right)
=
(f_{\mathcal A(t)}+\varphi)\left(\gamma_t^+\right)
\,.
\end{equation}
But Proposition \ref{increasingdecreasingprop} (together with the fact that $f_{\mathcal A(t)}'=f_\mathcal A'$, cf.~the proof of Lemma \ref{flatxai}) implies that
$f_{\mathcal A(t)}+\varphi$ is strictly increasing on $X_{A_0}$,
and strictly decreasing on $X_{A_1}$, so together with (\ref{equality}) we deduce that
\begin{equation}\label{strongsturmian}
(f_{\mathcal A(t)}+\varphi)(x) > (f_{\mathcal A(t)}+\varphi)(y)\quad\text{for all }x\in \Gamma_\mathcal A(t),\ y\in X_\mathcal A \setminus \Gamma_\mathcal A(t)\,.
\end{equation}
Consequently, if $z,z'$ are such that $T(z)=T(z')$, with $z\in \Gamma_\mathcal A(t)$ and $z'\notin \Gamma_\mathcal A(t)$, then
$$
(f_{\mathcal A(t)}+\varphi)(z) > (f_{\mathcal A(t)}+\varphi)(z')\,,
$$
and hence
$$
(f_{\mathcal A(t)}+\varphi-\varphi\circ T)(z) > (f_{\mathcal A(t)}+\varphi-\varphi\circ T)(z')\,.
$$
In other words, the constant value of
$f_{\mathcal A(t)}+\varphi-\varphi\circ T$ on $\Gamma_\mathcal A(t)$ is its global maximum,
and this value is not attained at any point in $X_\mathcal A \setminus \Gamma_\mathcal A(t)$.
It follows that the Sturmian measure supported by $\Gamma_\mathcal A(t)$ is the unique maximizing measure
for $f_{\mathcal A(t)}+\varphi-\varphi\circ T$,
and hence the unique maximizing measure for $f_{\mathcal A(t)}$.
Thus the corresponding Sturmian measure on $\Omega=\{0,1\}^{\mathbb N}$ is the unique $\mathcal A(t)$-maximizing measure.
\end{proof}
Recall from \S \ref{generalsection} that
$\mathcal E \subset M_2(\mathbb R)^2$
denotes the set of matrix pairs which are equivalent to some pair in $\mathfrak D$, where
equivalence of
$\mathcal A=(A_0,A_1)$ and $\mathcal A'=(A_0',A_1')$ means that
$A_0'=uP^{-1}A_0P$ and $A_1'=vP^{-1}A_1P$ for some invertible $P$ and $u,v>0$.
We deduce the following theorem:
\begin{theorem}\label{deducedthm}
If $\mathcal A\in\mathcal E$ and $t\in\mathbb{R}^+$, then $\mathcal A(t)$ has a unique maximizing measure, and this maximizing measure is Sturmian.
\end{theorem}
\begin{proof}
It suffices to prove the result for $\mathcal A\in\mathfrak D$, and this is
immediate from Corollaries \ref{t0largerttheorem} and \ref{t1smallerttheorem},
Theorem \ref{thm6}, and
the fact that $\mathfrak D \subset \mathcal M$.
\end{proof}
train | 0.4.20 | \section{The parameter map is a devil's staircase}\mathcal Label{devilsection}
As noted in Remark \mathcal Ref{conjugacymeasures}, if $\mathcal Mathcal A\in\mathcal Mm$ then there is a topological conjugacy
$h_\mathcal Mathcal A:\Omega\mathcal To Y_\mathcal Mathcal A$ between the the shift map $\sigma:\Omega\mathcal To\Omega$
and the restriction of $T_\mathcal Mathcal A$ to the Cantor set $Y_\mathcal Mathcal A\subset X_\mathcal Mathcal A$; the map $h_\mathcal Mathcal A$ is strictly increasing with respect to the orders on $\Omega$ and $Y_\mathcal Mathcal A$ (cf.~Remark \mathcal Ref{sturmianasturmian}).
If $d:\Omega\mathcal To [0,1]$ is as in Proposition \mathcal Ref{sturmianclassical}\,(c), associating to
$\omega\in\Omega$ the Sturmian parameter of the measure supported by $[0\omega,1\omega]$,
then the map $d_\mathcal Mathcal A:Y_\mathcal Mathcal A\mathcal To[0,1]$ given by
$d_\mathcal Mathcal A=d\circ h_\mathcal Mathcal A^{-1}$
enjoys the same properties as $d$:
\begin{lemma}
The map $d_\mathcal Mathcal A: Y_\mathcal Mathcal A\mathcal To [0,1]$
is continuous, non-decreasing, and surjective.
The preimage $d_\mathcal Mathcal A^{-1}(\mathcal Mathcal{P})$ is a singleton if $\mathcal Mathcal{P}$ is irrational,
and a positive-length closed interval if $\mathcal Mathcal{P}$ is rational.
\mathcal End{lemma}
\begin{proof}
Immediate from Proposition \mathcal Ref{sturmianclassical}\,(c), and the fact that $h_\mathcal Mathcal A$ is strictly increasing.
\mathcal End{proof}
Note that $d_\mathcal A$ associates to $y\in Y_\mathcal A$
the parameter of the $\mathcal A$-Sturmian measure supported by the $\mathcal A$-Sturmian
interval
$c_\mathcal A^{-1}(y)$,
where
we recall from Definition \ref{henceforthca} that the identification map $c_\mathcal A:\mathcal I_\mathcal A\to [0,1]$ is defined by
$c_\mathcal A(\Gamma)=T_\mathcal A(\min \Gamma) = T_\mathcal A(\max \Gamma)$.
Of the extensions of the function $d_\mathcal A$ from the Cantor set $Y_\mathcal A$ to the interval $X=[0,1]$,
there is a unique one giving a non-decreasing self-map $d_\mathcal A:X\to [0,1]$.
This extension, which we shall also denote by $d_\mathcal A$, is continuous, and $d_\mathcal A(c)$ is just the parameter of the $\mathcal A$-Sturmian measure
$s_{c_\mathcal A^{-1}(c)}$ (i.e.~of the $\mathcal A$-Sturmian measure supported by
the $\mathcal A$-Sturmian interval $c_\mathcal A^{-1}(c)$) for each $c\in X$.
We therefore have the following:
\begin{cor}\label{dadev}
The map $d_\mathcal A: X\to [0,1]$
is continuous, non-decreasing, and surjective.
The preimage $d_\mathcal A^{-1}(\mathcal{P})$ is a singleton if $\mathcal{P}$ is irrational,
and a positive-length closed interval if $\mathcal{P}$ is rational.
\end{cor>
\begin{defn}
For $\mathcal Mathcal A\in\mathfrak D$, let $\mathcal Mathcal{P}_\mathcal Mathcal A(t)$ denote the parameter of the Sturmian maximizing measure for $\mathcal Mathcal A(t)$,
or equivalently of the $\mathcal Mathcal A$-Sturmian $f_{\mathcal Mathcal A(t)}$-maximizing measure.
This defines the \mathcal Emph{parameter map} $\mathcal Mathcal{P}_\mathcal Mathcal A:\mathcal Mathbb{R}^+\mathcal To[0,1]$.
\mathcal End{defn}
Recalling
(see Proposition \mathcal Ref{analogue})
the map $t\mathcal Mapsto \Gamma_\mathcal Mathcal A(t)$ associating $\mathcal Mathcal A$-Sturmian interval to parameter
$t\in\mathcal T_\mathcal Mathcal A = (t_0(\mathcal Mathcal A),t_1(\mathcal Mathcal A))$,
we see that in fact the map $\mathcal Mathcal{P}_\mathcal Mathcal A:\mathcal T_\mathcal Mathcal A\mathcal To X$ can be written as
\begin{equation}\mathcal Label{rfactor}
\mathcal Mathcal{P}_\mathcal Mathcal A = d_\mathcal Mathcal A\circ c_\mathcal Mathcal A\circ \Gamma_\mathcal Mathcal A\,.
\mathcal End{equation}
This means that $\mathcal Mathcal{P}_\mathcal Mathcal A$ will enjoy the same properties as established for $d_\mathcal Mathcal A$ in Corollary
\mathcal Ref{dadev}, provided $c_\mathcal Mathcal A\circ \Gamma_\mathcal Mathcal A$ is strictly increasing:
\begin{lemma}\mathcal Label{strictinccomp}
For $\mathcal Mathcal A\in\mathcal Mm$, the map $c_\mathcal Mathcal A\circ \Gamma_\mathcal Mathcal A: \mathcal T_\mathcal Mathcal A\mathcal To X$ is strictly increasing and surjective.
\mathcal End{lemma}
\begin{proof}
Recall from
(\mathcal Ref{gadef})
the function $G_\mathcal Mathcal A$ given by
$$G_\mathcal Mathcal A(t) = \mathcal Log \mathcal Left( \mathcal Left(\frac{a_0+c_0}{b_1+d_1}\mathcal Right) t^{-1} \mathcal Right)\,,$$
and that $\Gamma_\mathcal Mathcal A(t)\in \mathcal I_A$ is defined
(see (\mathcal Ref{deltatdefeq})) by the identity
\begin{equation*}
\Delta_\mathcal Mathcal A \circ \Gamma_\mathcal Mathcal A
= G_\mathcal Mathcal A
\,.
\mathcal End{equation*}
Now $G_\mathcal Mathcal A$ is strictly decreasing, so in particular injective,
therefore the map $\Gamma_\mathcal Mathcal A$ is necessarily injective.
Note that $\Gamma_\mathcal Mathcal A$ clearly extends to a continuous injection on
$\overline{\mathcal T_\mathcal Mathcal A}= [t_0(\mathcal Mathcal A),t_1(\mathcal Mathcal A)]$, with $\Gamma_\mathcal Mathcal A(t_i(\mathcal Mathcal A))=\Gamma_i$ for $i\in\{0,1\}$.
Now $c_\mathcal Mathcal A:\mathcal I_\mathcal Mathcal A\mathcal To X$ is a bijection,
so $c_\mathcal Mathcal A\circ \Gamma_\mathcal Mathcal A:\overline{\mathcal T_\mathcal Mathcal A}\mathcal To X$ is injective, and its continuity means it is strictly monotone.
But
$c_\mathcal Mathcal A(\Gamma_\mathcal Mathcal A(t_0(\mathcal Mathcal A)))=0$
and $c_\mathcal Mathcal A(\Gamma_\mathcal Mathcal A(t_1(\mathcal Mathcal A)))=1$, so the map
$c_\mathcal Mathcal A\circ\Gamma_\mathcal Mathcal A$ must be strictly increasing and surjective, as required.
\mathcal End{proof}
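Explicitly, the factorisation (\ref{rfactor}) gives, for each parameter value $\varrho\in[0,1]$,
\begin{equation*}
\mathcal{P}_\mathcal A^{-1}(\varrho)=(c_\mathcal A\circ\Gamma_\mathcal A)^{-1}\bigl(d_\mathcal A^{-1}(\varrho)\bigr),
\end{equation*}
so the strict monotonicity and continuity of $c_\mathcal A\circ\Gamma_\mathcal A$ transport singleton preimages of $d_\mathcal A$ to singletons, and positive-length intervals to positive-length intervals.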
We can now prove that the parameter map $\mathcal{P}_\mathcal A:\mathbb{R}^+\to[0,1]$ is
singular. More specifically, the properties described by the following Theorem \ref{deviltheorem}
mean it is a \emph{devil's staircase}. These properties of the parameter map had been noted
by Bousch \& Mairesse \cite{bouschmairesse} in the context of the family (\ref{bmfamily}),
and proved in detail by Morris \& Sidorov \cite{morrissidorov} for the family (\ref{standardpair}).
The following result can be viewed as a
more detailed version of Theorem
\ref{maintheorem} from \S \ref{generalsection}:
\begin{theorem}\label{deviltheorem}
If $\mathcal A\in\mathcal E$ and $t\in\mathbb{R}^+$, then $\mathcal A(t)$ has a unique maximizing measure, and this maximizing measure is Sturmian.
Let $\mathcal{P}_\mathcal A(t)$ denote the parameter of the Sturmian maximizing measure for $\mathcal A(t)$.
The parameter map $\mathcal{P}_\mathcal A:\mathbb{R}^+\to [0,1]$
is continuous, non-decreasing, and surjective.
The preimage $\mathcal{P}_\mathcal A^{-1}(\mathcal{P})$ is a singleton if $\mathcal{P}$ is irrational,
and a positive-length closed interval if $\mathcal{P}$ is rational.
\end{theorem}
\begin{proof}
The set $\mathcal E$ consists of matrix pairs which are equivalent to a matrix pair in $\mathfrak D$, so it suffices to prove the result
for $\mathcal A\in\mathfrak D$.
Theorem \ref{deducedthm} gives that
$\mathcal A(t)$ has a unique maximizing measure, and that this maximizing measure is Sturmian.
For $t\in \mathbb{R}^+\setminus \mathcal T_\mathcal A$ we know that
\begin{equation}\label{rot0}
\mathcal{P}_\mathcal A(t)=0 \quad\text{for }t\in (0,t_0(\mathcal A))
\end{equation}
by Theorem \ref{t0largerttheorem},
and
\begin{equation}\label{rot1}
\mathcal{P}_\mathcal A(t)=1 \quad\text{for }t\in (t_1(\mathcal A),\infty)
\end{equation}
by Theorem \ref{t1smallerttheorem}, since the Dirac measures at the fixed points
$p_{A_0}$ and $p_{A_1}$ are $\mathcal A$-Sturmian measures of parameters 0 and 1 respectively.
In view of (\ref{rot0}) and (\ref{rot1}), it suffices to
establish the required properties of $\mathcal{P}_\mathcal A$ on the sub-interval $\mathcal T_\mathcal A=(t_0(\mathcal A),t_1(\mathcal A))$.
Using
the factorisation
(\ref{rfactor}),
we see that this follows
from Corollary \ref{dadev}
and Lemma \ref{strictinccomp}.
\end{proof}
\begin{thebibliography}{1}
\bibitem{blondel} V. D. Blondel,
The birth of the joint spectral radius: an interview with Gilbert Strang,
{\it Linear Algebra Appl.}, {\bf 428} (2008), 2261--2264.
\bibitem{btv}
V. D. Blondel, J. Theys \& A. A. Vladimirov, An elementary counterexample to the finiteness conjecture,
{\it SIAM Journal on Matrix Analysis}, {\bf 24}
(2003), 963--970.
\bibitem{bochirams} J. Bochi \& M. Rams,
The entropy of Lyapunov-optimizing measures of some matrix cocycles,
{\it preprint}, arXiv:1312.6718.
\bibitem{bousch} T. Bousch,
Le poisson n'a pas
d'ar\^{e}tes, {\it Ann. Inst. Henri Poincar\'e (Proba. et Stat.)} {\bf
36}, (2000), 489--508.
\bibitem{bouschmairesse}
T. Bousch \& J. Mairesse,
Asymptotic height optimization for topical IFS, Tetris heaps, and the finiteness conjecture,
{\it J. Amer. Math. Soc.}, {\bf 15} (2002), 77--111.
\bibitem{bullettsentenac}
S. Bullett \& P. Sentenac,
Ordered orbits of the shift, square roots, and the
devil's staircase,
{\it Math.\ Proc.\ Camb.\ Phil.\ Soc.},
{\bf 115} (1994),
451--481.
\bibitem{daubechieslagarias}
I. Daubechies \& J. C. Lagarias,
Sets of matrices all infinite products of which converge,
{\it Linear Algebra Appl.}, {\bf 162} (1992), 227--261.
\bibitem{gurvits}
L. Gurvits,
Stability of Linear Inclusions--Part 2,
NECI Technical Report TR 96-173, 1996.
\bibitem{hmst}
K.~G.~Hare, I.~ D.~Morris, N.~Sidorov, \& J.~Theys, An explicit counterexample to the
Lagarias-Wang finiteness conjecture, {\it Adv. Math.}, {\bf 226} (2011),
4667--4701.
\bibitem{jeo}
O. Jenkinson, Ergodic optimization,
{\it Discrete \& Cont. Dyn. Sys.}, {\bf 15} (2006), 197--224.
\bibitem{jungers} R. Jungers, {\it The joint spectral radius},
vol.~385 of Lecture Notes in Control and Information Sciences,
Springer-Verlag, Berlin, 2009.
\bibitem{kk}
R. Kannan \& C. K.~Krueger,
{\it Advanced analysis on the real line},
Springer-Verlag, New York, 1996.
\bibitem{kozyakin}
V.~S.~Kozyakin,
A dynamical systems construction of a counterexample to the finiteness conjecture,
{\it in Proceedings of the 44th IEEE Conference on Decision and Control, and the European Control Conference 2005, Seville, Spain, December 2005}, pp.~2338--2343.
\bibitem{kozyakinbiblio}
V. S. Kozyakin,
An annotated bibliography on convergence of matrix products and the theory of joint/generalized spectral radius, {\it preprint}.
\bibitem{lagariaswang}
J. C. Lagarias \& Y. Wang,
The finiteness conjecture for the generalized spectral radius of a set of matrices,
{\it Linear Algebra Appl.}, {\bf 214} (1995), 17--42.
\bibitem{maesumi} M. Maesumi,
Optimal norms and the computation of joint spectral radius of matrices,
{\it Linear Algebra Appl.}, {\bf 428} (2008), 2324--2338.
\bibitem{morrissidorov}
I.~D.~Morris \& N.~Sidorov,
On a devil's staircase associated to the joint spectral radii of a family of pairs of matrices,
{\it J. Eur. Math. Soc.}, {\bf 15} (2013), 1747--1782.
\bibitem{morsehedlund} M. Morse and G. A. Hedlund, Symbolic
Dynamics II. Sturmian Trajectories, {\it Amer. J. Math.}, {\bf 62}
(1940), 1--42. | 4,014 | 61,706 | en |
train | 0.4.21 | In view of (\ref{rot0}) and (\ref{rot1}), it suffices to
establish the required properties of $\mathcal{P}_\mathcal A$ on the sub-interval $\mathcal T_\mathcal A=(t_0(\mathcal A),t_1(\mathcal A))$.
Using
the factorisation
(\ref{rfactor}),
we see that this follows
from Corollary \ref{dadev}
and Lemma \ref{strictinccomp}.
\end{proof}
\begin{thebibliography}{1}
\bibitem{blondel} V. D. Blondel,
The birth of the joint spectral radius: an interview with Gilbert Strang,
{\it Linear Algebra Appl.}, {\bf 428} (2008), 2261--2264.
\bibitem{btv}
V. D. Blondel, J. Theys \& A. A. Vladimirov, An elementary counterexample to the finiteness conjecture,
{\it SIAM Journal on Matrix Analysis}, {\bf 24}
(2003), 963--970.
\bibitem{bochirams} J. Bochi \& M. Rams,
The entropy of Lyapunov-optimizing measures of some matrix cocycles,
{\it preprint}, arXiv:1312.6718.
\bibitem{bousch} T. Bousch,
Le poisson n'a pas
d'ar\^{e}tes, {\it Ann. Inst. Henri Poincar\'e (Proba. et Stat.)} {\bf
36}, (2000), 489--508.
\bibitem{bouschmairesse}
T. Bousch \& J. Mairesse,
Asymptotic height optimization for topical IFS, Tetris heaps, and the finiteness conjecture,
{\it J. Amer. Math. Soc.}, {\bf 15} (2002), 77--111.
\bibitem{bullettsentenac}
S. Bullett \& P. Sentenac,
Ordered orbits of the shift, square roots, and the
devil's staircase,
{\it Math.\ Proc.\ Camb.\ Phil.\ Soc.},
{\bf 115} (1994),
451--481.
\bibitem{daubechieslagarias}
I. Daubechies \& J. C. Lagarias,
Sets of matrices all infinite products of which converge,
{\it Linear Algebra Appl.}, {\bf 162} (1992), 227--261.
\bibitem{gurvits}
L. Gurvits,
Stability of Linear Inclusions--Part 2,
NECI Technical Report TR 96-173, 1996.
\bibitem{hmst}
K.~G.~Hare, I.~ D.~Morris, N.~Sidorov, \& J.~Theys, An explicit counterexample to the
Lagarias-Wang finiteness conjecture, {\it Adv. Math.}, {\bf 226} (2011),
4667--4701.
\bibitem{jeo}
O. Jenkinson, Ergodic optimization,
{\it Discrete \& Cont. Dyn. Sys.}, {\bf 15} (2006), 197--224.
\bibitem{jungers} R. Jungers, {\it The joint spectral radius},
vol.~385 of Lecture Notes in Control and Information Sciences,
Springer-Verlag, Berlin, 2009.
\bibitem{kk}
R. Kannan \& C. K.~Krueger,
{\it Advanced analysis on the real line},
Springer-Verlag, New York, 1996.
\bibitem{kozyakin}
V.~S.~Kozyakin,
A dynamical systems construction of a counterexample to the finiteness conjecture,
{\it in Proceedings of the 44th IEEE Conference on Decision and Control, and the European Control Conference 2005, Seville, Spain, December 2005}, pp.~2338--2343.
\bibitem{kozyakinbiblio}
V. S. Kozyakin,
An annotated bibliography on convergence of matrix products and the theory of joint/generalized spectral radius, {\it preprint}.
\bibitem{lagariaswang}
J. C. Lagarias \& Y. Wang,
The finiteness conjecture for the generalized spectral radius of a set of matrices,
{\it Linear Algebra Appl.}, {\bf 214} (1995), 17--42.
\bibitem{maesumi} M. Maesumi,
Optimal norms and the computation of joint spectral radius of matrices,
{\it Linear Algebra Appl.}, {\bf 428} (2008), 2324--2338.
\bibitem{morrissidorov}
I.~D.~Morris \& N.~Sidorov,
On a devil's staircase associated to the joint spectral radii of a family of pairs of matrices,
{\it J. Eur. Math. Soc.}, {\bf 15} (2013), 1747--1782.
\bibitem{morsehedlund} M. Morse and G. A. Hedlund, Symbolic
Dynamics II. Sturmian Trajectories, {\it Amer. J. Math.}, {\bf 62}
(1940), 1--42.
\bibitem{parthasarathy}
K. R. Parthasarathy, On the category of ergodic measures,
{\it Illinois J. Math.}, {\bf 5} (1961), 648--656.
\bibitem{rotastrang}
G-C. Rota \& G. Strang, A note on the joint spectral radius,
{\it Indag. Math.}, {\bf 22} (1960), 379--381.
\bibitem{sigmund}
K. Sigmund,
Generic properties of invariant measures for Axiom $A$
diffeomorphisms,
{\it Invent. Math.}, {\bf 11} (1970), 99--109.
\bibitem{strang} G. Strang, {\it The joint spectral radius},
Commentary by Gilbert Strang on paper number 5, in Collected Works of Gian-Carlo Rota, 2001;
available online from http://www-math.mit.edu/$\sim$gs
\bibitem{walters}
P.~Walters, {\it An introduction to ergodic theory}, Springer, 1981.
\end{thebibliography}
\end{document} | 1,777 | 61,706 | en |
train | 0.5.0 | \begin{document}
\begin{center}
{A heuristic for the non-unicost set covering problem using local branching} ~\\
~\\
{J.E. Beasley} ~\\
~\\
~\\
Mathematics, Brunel University, Uxbridge UB8 3PH, UK
~\\
~\\
[email protected]
~\\
{http://people.brunel.ac.uk/$\sim$mastjjb/jeb/jeb.html}
~\\
~\\
April 2023\\
\end{center}
\begin{abstract}
In this paper we present a heuristic for the non-unicost set covering problem using local branching.
Local branching eliminates the need to define a problem specific search neighbourhood for any particular (zero-one) optimisation problem. It does this by incorporating a generalised Hamming distance neighbourhood into the problem, and this leads naturally to an appropriate neighbourhood search procedure.
We apply our approach to the non-unicost set covering problem. Computational results are presented for 65 test problems that have been widely considered in the literature. Our results indicate that our approach is very competitive in terms of solution quality with other approaches from the literature.
We believe that the work described here illustrates that the potential for using local branching, operating as a stand-alone matheuristic,
has not been fully exploited in the literature.
\end{abstract}
\sloppy Keywords: Hamming distance; heuristics; local branching; integer programming; neighbourhood search; set covering
\section{Introduction}
As the reader may be aware a common approach to the heuristic solution of many zero-one integer programming problems is to apply neighbourhood search. By this we mean that given a (typically feasible) solution to the problem at hand we examine \enquote{small} changes to this solution. So we examine solutions in the \enquote{neighbourhood} of this feasible solution. If we find a better feasible solution then this typically becomes the new solution and the process repeats until some termination condition is satisfied
(e.g.~computational time limit, or failure to improve on the solution).
There are a number of general neighbourhood search approaches in the literature such as simulated annealing~\cite{kirkpatrick83},
tabu search~\cite{glover90} and
variable neighbourhood search~\cite{mladenovic97,hansen01} that can be applied. Such approaches set out a general search procedure, but need particularisation for the problem at hand, e.g.~in defining the neighbourhood of a solution. Typically the neighbourhood of a solution is defined by specifying the possible moves away from a solution. These neighbourhood search approaches have been extensively used in the literature. For example, a recent search using Web of Science (http://www.webofscience.com) listed approximately 28,000 papers referring to simulated annealing, 11,000 papers referring to tabu search and 7,000 papers referring to variable neighbourhood search.
In this paper we present an optimisation based approach to neighbourhood search based upon local branching.
Local branching eliminates the need to define a problem specific search neighbourhood for any particular (zero-one) optimisation problem.
It does this by incorporating a generalised Hamming distance neighbourhood into the problem, and this leads naturally to an appropriate
neighbourhood search procedure.
The structure of this paper is as follows. In Section~\ref{opt} we define what we mean by the neighbourhood of a feasible solution to a general (zero-one) optimisation problem. We then go on to outline a search procedure that we can adopt to successively search for improved solutions.
In Section~\ref{app} we consider the example optimisation problem, the non-unicost set covering problem, to which we are going to apply
our
approach. We define the problem and consider relevant literature on the problem with especial reference to papers in the literature which report good computational results.
In Section~\ref{results} we present computational results based on applying our
approach to 65 non-unicost set covering problems that have been extensively considered by others in the literature.
Finally in Section~\ref{conc} we present our conclusions.
\section{Optimisation based neighbourhood search}
\label{opt}
In this section we first define
what we mean by the neighbourhood of a feasible solution to a general (zero-one) optimisation problem.
We then go on to outline a search procedure that we can adopt to successively search for improved solutions.
\subsection{Neighbourhood}
To illustrate our approach suppose that we have a general zero-one integer programming problem
involving $n$ zero-one variables $[x_i,~i=1,\ldots,n]$ and $m$ constraints where the optimisation problem is:
\begin{equation}
\mbox{minimise}~~~~\sum_{i=1}^n c_i x_i
\label{eq1}
\end{equation}
subject to:
\begin{equation}
\sum_{j=1}^n a_{ij} x_j \geq b_i ~~~~ i = 1, \ldots, m
\label{eq2}
\end{equation}
\begin{equation}
x_i \in \{0,1\} ~~~~i=1,\ldots,n
\label{eq3}
\end{equation}
Equation~(\ref{eq1}) is a minimisation objective. Without significant loss of generality we shall henceforth assume that all the objective function coefficients $[c_i]$ are integer. Equation~(\ref{eq2}) represents the constraints of the problem and Equation~(\ref{eq3}) the integrality condition.
Let $[X_i]$ be some feasible solution to the problem. Then in this paper we define the neighbourhood of $[X_i]$ to be any set of zero-one variable values $[x_i]$ satisfying $1 \leq \sum_{i=1}^n |x_i - X_i| \leq K$, where $K$ is a known positive constant of our choice. In other words the neighbourhood of $[X_i]$ is any set of zero-one values $[x_i]$ such that the Hamming distance between $[x_i]$ and $[X_i]$ lies between one and $K$.
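For example, if $n=3$, $[X_i]=(1,0,1)$ and $K=1$, then the neighbourhood of $[X_i]$ consists precisely of $(0,0,1)$, $(1,1,1)$ and $(1,0,0)$, i.e.~the three solutions obtained by flipping exactly one variable.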
Then consider the optimisation problem:
\begin{equation}
\mbox{minimise}~~~~\sum_{i=1}^n c_i x_i
\label{eq1a}
\end{equation}
subject to Equations~(\ref{eq2}),(\ref{eq3}) and:
\begin{equation}
1 \leq \sum_{i=1~X_i=0}^n x_i + \sum_{i=1~X_i=1}^n (1- x_i ) \leq K
\label{eqs1}
\end{equation}
\begin{equation}
\sum_{i=1}^n c_i x_i \leq \sum_{i=1}^n c_i X_i - 1
\label{eqs2}
\end{equation}
Here we have added two constraints to our original optimisation problem. In Equation~(\ref{eqs1}) the expression seen is a linearisation of the nonlinear Hamming distance $\sum_{i=1}^n |x_i - X_i|$.
Equation~(\ref{eqs1}) ensures that the Hamming distance between $[x_i]$ and $[X_i]$ is at least one (so we have a solution different from $[X_i]$) and is also less than or equal to $K$.
Equation~(\ref{eqs2}) implies that we are only interested in improved feasible solutions in the neighbourhood of $[X_i]$, i.e.~those that strictly improve on the solution value $\sum_{i=1}^n c_i X_i $ associated with the current solution. Improving on the current feasible solution cannot be guaranteed by Equation~(\ref{eqs1}) since it only constrains the structural (Hamming distance) difference between two solutions, it does not address their objective function values.
As a minor technical point, note that in integer programming terms Equation~(\ref{eqs2}) automatically implies that the Hamming distance between $[x_i]$ and $[X_i]$ is at least one.
This is because any improved feasible solution must be different from $[X_i]$. However, it could be that including an explicit lower limit of one on the Hamming distance (as in Equation~(\ref{eqs1})) improves computational performance, e.g.~by improving the linear programming relaxation solution, so we include it here.
\sloppy Our amended optimisation problem is now to minimise
Equation~(\ref{eq1a}) subject to Equations~(\ref{eq2}), (\ref{eq3}), (\ref{eqs1}) and (\ref{eqs2}). In essence
here we amend the original optimisation problem to restrict attention to distinctly different (and improved) solutions within the $K$ neighbourhood of $[X_i]$.
Note here that because of the extra constraints added to the original optimisation problem
any feasible solution
must be an improved solution as compared to $[X_i]$.
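To make the construction concrete, a minimal sketch of the amended optimisation problem follows. It is illustrative only, and is not the implementation behind the computational results reported later (which used Cplex): it is written for the open-source PuLP modelling library with its bundled CBC solver, and the names (\texttt{costs} for the column costs $[c_i]$, \texttt{rows} for the lists of columns covering each row, \texttt{X} for the incumbent $[X_i]$) are assumptions of the sketch.
\begin{verbatim}
import pulp

def amended_problem(costs, rows, X, K):
    # Minimise total cost subject to the cover constraints, the
    # Hamming-distance neighbourhood of the incumbent X, and the
    # strict-improvement cut (integer costs assumed).
    n = len(costs)
    prob = pulp.LpProblem("amended_scp", pulp.LpMinimize)
    x = [pulp.LpVariable("x%d" % i, cat="Binary") for i in range(n)]
    prob += pulp.lpSum(costs[i] * x[i] for i in range(n))
    for row in rows:
        prob += pulp.lpSum(x[j] for j in row) >= 1   # cover each row
    # linearised Hamming distance between x and the incumbent X
    dist = pulp.lpSum(x[i] for i in range(n) if X[i] == 0) + \
           pulp.lpSum(1 - x[i] for i in range(n) if X[i] == 1)
    prob += dist >= 1
    prob += dist <= K
    incumbent = sum(costs[i] * X[i] for i in range(n))
    prob += pulp.lpSum(costs[i] * x[i] for i in range(n)) <= incumbent - 1
    return prob, x
\end{verbatim}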
Use of a constraint based upon Hamming distance has previously been given in the literature by Fischetti and Lodi~\cite{fischetti03}. In their approach, which they call \enquote{local branching}, once a feasible solution $[X_i]$ is found within an enumerative scheme, for example linear programming based tree search, two tree branches are created. One of these branches, which they call the left branch, has
$ \sum_{i=1}^n |x_i - X_i| \leq K$.
The other branch, which they call the right branch, has
$ \sum_{i=1}^n |x_i - X_i| \geq K +1$.
They suggested tactical exploration of the left branch, using standard branching procedures, in the hope of finding an improved feasible solution within the Hamming distance $K$ neighbourhood of the current feasible solution before proceeding with exploration of the right branch.
They proposed varying the value of $K$ depending upon search progress: for example reducing $K$ if the left branch has not resulted in an improved solution within a specified time limit; increasing $K$ to diversify the search.
Our approach differs from local branching as described in~\cite{fischetti03} in one significant respect, namely that
we only focus on the left branch, no exploration is attempted with regard to the right branch. This is because we are focusing on generating good quality heuristic solutions, abandoning any attempt to achieve a provably optimal solution for the original problem (Equations~(\ref{eq1})-(\ref{eq3})) under investigation.
In general terms it is clear that if we solve our amended optimisation problem to proven global optimality (e.g.~using a package such as Cplex~\cite{cplex1210}) then we will either:
\begin{compactitem}
\item find an improved feasible solution within the $K$ neighbourhood of $[X_i]$, technically the minimum feasible solution within the neighbourhood; or
\item prove that there is no improved feasible solution within the neighbourhood.
\end{compactitem}
Obviously computational considerations may mean that we do not solve the amended optimisation problem to proven global optimality, but within computational limits we may still find an improved feasible solution.
As noted above any feasible solution to the amended optimisation problem must by definition be an improved solution as compared to $[X_i]$.
Obviously, as with standard neighbourhood search procedures, an improved feasible solution can be used to replace $[X_i]$ and the process repeated in a natural way. The search procedure we adopted based upon the amended optimisation problem is detailed below.
\subsection{Search procedure}
Our search procedure requires an initial feasible solution $[X_i]$ as well as three parameter values. These are:
an initial value for $K$; a value $\delta$ for incrementing $K$ so as to increase the size of the neighbourhood; and a value $L$ for the number of successive iterations we allow without improving the solution before terminating the search.
\newline
\newline
\noindent Our search procedure is:
\begin{enumerate}[label=(\alph*), noitemsep]
\item Initialise $[X_i]$, $K$, $\delta$ and $L$. Set $t \leftarrow 0$, where $t$ is the iteration counter.
\item Set $t \leftarrow t+1$. Solve the amended optimisation problem. If we find an improved feasible solution, replace $[X_i]$ with this solution.
\item If $L$ successive iterations have been performed without improving the current feasible solution then stop, else set $K \leftarrow K + \delta$ and go to step (b).
\end{enumerate}
\noindent In this procedure we increase the size of the neighbourhood (increment $K$ by $\delta$) at each iteration, irrespective of whether an improved solution has been found or not. This ensures that we continually expand the search space around the (current) feasible solution. The procedure only terminates once $L$ successive iterations have been performed without finding an improved feasible solution.
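A minimal driver for steps (a)--(c), again using PuLP/CBC purely for illustration; the per-solve time limit and the default parameter values are assumptions of the sketch, and treating only a proven-optimal solve as an improvement is a simplification of step (b):
\begin{verbatim}
def local_branching(costs, rows, X, K=5, delta=5, L=5, limit=15):
    fails = 0
    while fails < L:
        prob, x = amended_problem(costs, rows, X, K)
        prob.solve(pulp.PULP_CBC_CMD(msg=0, timeLimit=limit))
        if prob.status == pulp.LpStatusOptimal:
            # any feasible solution of the amended problem improves on X
            X = [int(round(v.varValue)) for v in x]
            fails = 0
        else:
            fails += 1
        K += delta   # enlarge the neighbourhood at every iteration
    return X
\end{verbatim}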
\subsection{Comment}
We would make a number of comments as to our
approach:
\begin{itemize}
\item Our approach draws directly on the mathematical formulation of the problem and hence can be classed as a matheuristic~\cite{boschetti2009, boschetti23, maniezzo21}.
\begin{comment}
\item It is general in nature, potentially making little use of any problem specific knowledge.
\end{comment}
\item It eliminates the need to design problem specific search neighbourhoods, since the neighbourhood is automatically incorporated into
the amended optimisation problem using the Hamming distance (Equation~(\ref{eqs1})) as discussed above.
\item If the amended optimisation problem associated with the final feasible solution found has been solved to proven global optimality then we have an \emph{\textbf{absolute guarantee}} that there is no improved feasible solution within the Hamming distance $K$ neighbourhood associated with that final feasible solution.
\item Clearly local branching is not a new concept. Indeed various authors in the literature have mentioned its use as a heuristic, e.g.~most recently~\cite{boschetti23}. \textbf{\emph{However, we believe that work described here illustrates that the potential for using local branching, operating as a stand-alone matheuristic as in the approach described in this paper,
has not been fully exploited in the literature.}}
\end{itemize} | 3,486 | 12,865 | en |
train | 0.5.1 | \section{The set covering problem}
\label{app}
\begin{comment}It is clear that to investigate the worth of the
approach given above we need to apply it to an example optimisation problem.
For this purpose we choose to use a classical zero-one optimisation problem, the set covering problem.
\end{comment}
The set covering problem is the problem of choosing a minimum cost set of columns $[x_i]$ that collectively cover each of the $m$ rows in the problem. Referring back to Equation~(\ref{eq2}) above we have that $[a_{ij}]$ is a known matrix with $a_{ij}=1$ if column $j$ covers row $i$, $a_{ij}=0$ otherwise. The values $[b_i]$ are all one.
There are two variants of the problem, one where the column costs $[c_i]$ are all one (known as the unicost set covering problem), one where the column costs $[c_i]$ are general
non-negative values (referred to as the non-unicost set covering problem, or more commonly as just the set covering problem). In the results given below we apply our approach to the
non-unicost problem. Of the two variants of the problem the non-unicost variant has attracted greater attention in the literature.
The (non-unicost) set covering problem has been considered by a number of authors in the literature as discussed below.
It is not our intention here to give a comprehensive and detailed review of the literature for the set covering problem. Indeed that would be a mammoth task, since a recent search using Web of Science (http://www.webofscience.com) listed nearly 500 papers that included the phrase \enquote{set covering} in their title.
Rather our intention is to highlight significant papers in the literature which report good computational results on set covering instances. This is because the focus of this paper is whether, for the specific example problem (set covering) considered, our
approach can yield good quality results as compared with those already reported in the literature.
\subsection{Relevant literature}
\sloppy Caprara and Toth~\cite{caprara00} give a survey of algorithms for the set covering problem prior to 2000. As is clear from their paper most of the authors in the literature since 1990 have made use of the test problems publicly available from
OR-Library~\cite{beasley1990}, see
http://people.brunel.ac.uk/$\sim$mastjjb/jeb/info.html.
In this paper we also make use of these test problems.
Lan et al~\cite{lan07} presented a heuristic approach which they called Meta-RaPS (Meta-heuristic for Randomized Priority Search). They stressed the use of randomness to avoid local optima. Their approach is a repeated application of: firstly a constructive heuristic to find a feasible solution (but including randomisation); secondly a local improvement heuristic based on neighbourhood search. Their approach also included preprocessing, both to exclude columns from consideration and to include them if a column is the only one that covers a row. To reduce the computation time associated with their neighbourhood search procedure they defined a core problem consisting of a small subset of columns.
Lan et al~\cite{lan07} reported that their heuristic is one of only two to find all optimal/best-known solutions for non-unicost instances.
They gave a table illustrating the effectiveness of different heuristics on 65 non-unicost set covering problems. Of note there is
the indirect genetic algorithm of Aickelin~\cite{aickelin02};
the genetic algorithm of Beasley and Chu~\cite{beasley96}; and
the lagrangian heuristic of Caprara et al~\cite{caprara99}.
We consider each of these three approaches below.
\begin{comment}
In terms of unicost problems they compared their approach against that of six different heuristics for 15 unicost instances with the result given showing that their approach outperforms (on average) these other heuristics.
\end{comment}
\begin{comment}
In brief, of the competitive approaches for non-unicost problem cited by Lan et al~\cite{lan07}, Caprara et al~\cite{caprara99} is a lagrangian heuristic approach using dynamic pricing for the variables and systematic use of column fixing to improve the solution. Beasley and Chu~\cite{beasley96} is a genetic algorithm approach including a new fitness-based crossover operator (fusion), a variable mutation
rate and a heuristic feasibility operator tailored specifically for the set covering problem. Aickelin~\cite{aickelin02} is a genetic algorithm approach with a decoder which works on a permuted list of the rows to be covered, with hill-climbing to improve the solution applied after the decoder has provided a suitable solution.
\end{comment}
Aickelin~\cite{aickelin02} presented a genetic algorithm approach with a decoder which works on a permuted list of the rows to be covered, with hill-climbing to improve the solution applied after the decoder has provided a suitable solution. For 65 non-unicost set covering problems they compare their approach with other approaches,~\cite{beasley96,caprara99}, taken from the literature.
Beasley and Chu~\cite{beasley96} presented a genetic algorithm approach including a new fitness-based crossover operator (fusion), a variable mutation
rate and a heuristic feasibility operator tailored specifically for the non-unicost set covering problem. They reported computational experience for
their approach on 65 non-unicost set covering problems.
Caprara et al~\cite{caprara99} presented a lagrangian heuristic approach using dynamic pricing for the variables and systematic use of column fixing to improve the solution. They made use of a number of improvements in the subgradient optimisation procedure as well as a refining procedure to improve upon any given solution. They gave a table illustrating the effectiveness of different heuristics on 65 non-unicost set covering problems showing that their approach performs well.
Naji-Azimi et al~\cite{naji10} presented an electromagnetic metaheuristic approach drawing on the work of Birbil and Fang~\cite{birbil03}. Their approach involves an initial preprocessing step and then repetitively adjusting a pool of solutions to which local search is first applied and where the solutions are then changed based on the force generated by the \enquote{charge} associated with each solution. Mutation was also applied to perturb solutions. They considered 65 non-unicost problems and compared their results with those of Lan et al~\cite{lan07}. Their results indicated that their approach was competitive with that of Lan et al~\cite{lan07}.
\begin{comment}
For 15 unicost problems they compared their results with those of Lan et al~\cite{lan07} and the GRASP algorithm of Bautista and Pereira\cite{bautista07} which indicated that (on average) they outperformed Bautista and Pereira\cite{bautista07}, but were in turn out-performed by Lan et al~\cite{lan07}. We would note here that GRASP, Greedy Randomised Adaptive Search Procedures, was originally proposed by Feo and Resende~\cite{feo95} and is an approach that first constructs an initial
solution via an adaptive randomised greedy function and then applies a local search procedure
to the constructed solution.
Naji-Azimi et al~\cite{naji10} also modified the genetic algorithm
of Beasley and Chu~\cite{beasley96} to more effectively deal with unicost problems and compared their approach with that modified genetic algorithm on 55 test problems. Their results indicated that (on average) their electromagnetic approach out-performed the modified genetic algorithm. Although their algorithm was originally designed for unicost problems they modified their approach for non-unicost problems and presented results for 65 non-unicost problems and compared their results with those of Lan et al~\cite{lan07}. Their results indicated that their approach was competitive with that of Lan et al~\cite{lan07}.
\end{comment}
\begin{comment}
Gao et al~\cite{gao15} presented a local search approach for the unicost set covering problem based upon row weighting. They used tabu search strategies to prevent cycling between solutions. They presented computational results for 70 unicost problems and compared their results with those of Bautista and Pereira\cite{bautista07}, Musliu~\cite{musliu06} and
Naji-Azimi et al~\cite{naji10}.
They reported that their approach improved the best-known solutions in 14 of the instances. Muslio~\cite{musliu06} had good overall performance on their test problems and that approach is a local search approach based on a new method for neighbourhood generation from the current solution at
each iteration as well as making use of search history in conjunction with a tabu search mechanism to avoid cycles.
\end{comment}
Reyes and Araya~\cite{reyes21} presented a greedy randomised adaptive search procedure (GRASP~\cite{feo95, festa02}) based strategy for the non-unicost set covering problem. They proposed iterated local search and reward/penalty procedures in order to accelerate convergence and improve upon the GRASP solutions. Their approach also included preprocessing both to exclude columns from consideration and to include them. They presented results, based upon 30 trials, for 65 non-unicost set covering problems.
In recent years a number of papers in the literature have applied algorithms based upon paradigms drawn from the natural world (sometimes referred to as bio-inspired metaheuristics). One example of work of this kind is Soto et al~\cite{soto17} who presented approaches based on cuckoo search and black hole optimisation.
Cuckoo search (see Yang and Deb~\cite{yang09}) is a population based approach where each \enquote{nest} in the population contains a number of \enquote{eggs} (solutions) and \enquote{cuckoos} lay eggs in randomly chosen nests. The best nests carry over to the next generation.
Black hole optimisation (see Kumar et al~\cite{kumar15}) is also a population based approach where a \enquote{black hole} attracts \enquote{stars} (solutions). Stars change locations as they are attracted by the black hole. Some stars are absorbed by the black hole and replaced by newly generated stars (solutions).
Soto et al~\cite{soto17} applied these two approaches to the non-unicost set covering problem. They applied preprocessing and presented results for 65 non-unicost set covering problems (based on 30 trials for each instance). | 2,588 | 12,865 | en |
train | 0.5.2 | \section{Computational results}
\label{results}
In this section we first discuss the non-unicost set covering test problems which we used. We then give computational results for our
approach when applied to these test problems. We also give a comparison between the results from our approach and eight other approaches
presented previously in the literature.
\subsection{Test problems}
We used the standard set of 65 non-unicost set covering test problems that are available from
OR-Library~\cite{beasley1990}, see http://people.brunel.ac.uk/$\sim$mastjjb/jeb/info.html.
We used a Windows PC with 8GB of memory and an Intel Core i5-1137G7 2.4GHz processor, a multi-core PC with four cores. The initial value of $K$ was set to 5, the neighbourhood increment $\delta$ was set to 5 and the number of successive iterations $L$ allowed without improving the current feasible solution before termination was set to 5. These values were set based on limited computational experimentation.
The initial feasible solution required to start the search was produced using a simple greedy heuristic for the set covering problem: repetitively choosing a column with the minimum value of the ratio (column cost/number of uncovered rows covered by the column). Once a solution covering all rows had been found we removed any redundant columns (those columns for which all rows which they cover are also covered by other columns).
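A sketch of this greedy construction is given below. It is illustrative only; \texttt{cols[j]} (our own notation) denotes the set of rows covered by column $j$, and the instance is assumed feasible, i.e.~every row is covered by at least one column.
\begin{verbatim}
def greedy_initial_solution(costs, cols):
    n = len(costs)
    uncovered = set().union(*cols)
    chosen = []
    while uncovered:
        # column minimising cost per newly covered row
        j = min((k for k in range(n) if cols[k] & uncovered),
                key=lambda k: costs[k] / len(cols[k] & uncovered))
        chosen.append(j)
        uncovered -= cols[j]
    # remove redundant columns, most expensive first
    for j in sorted(chosen, key=lambda k: -costs[k]):
        others = set().union(*(cols[k] for k in chosen if k != j))
        if cols[j] <= others:
            chosen.remove(j)
    return [1 if j in chosen else 0 for j in range(n)]
\end{verbatim}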
Note here that, unlike a number of other papers in the literature
(e.g.~\cite{lan07,naji10,reyes21,soto17}), we made no use of problem-specific preprocessing
to eliminate columns.
Table~\ref{table1} shows the characteristics of the 65 non-unicost problems considered. In that table we show the problem set name, the number of instances in that set, the number of rows ($m$) and columns ($n$) and the problem density ($[\sum_{i=1}^m \sum_{j=1}^n a_{ij}/mn]$ expressed as a percentage). We set a time limit of 15 seconds for the solution of each amended optimisation problem
for all the problems with $m \leq 500$, where Cplex~\cite{cplex1210} with default parameter settings was used as the solver.
For the larger problems with $m=1000$ we increased this time limit to 45 seconds.
\begin{table}[hbt!]
\centering
\renewcommand{\tabcolsep}{1mm}
\renewcommand{\arraystretch}{1.0}
\begin{tabular}{|c|c|c|c|c|}
\hline
Problem set name & Number of instances & Number of rows ($m$) & Number of columns ($n$)
& Density (\%) \\
\hline
4 & 10 & 200 & 1000 & 2 \\
5 & 10 & 200 & 2000 & 2 \\
6 & 5 & 200 & 1000 & 5 \\
A & 5 & 300 & 3000 & 2 \\
B & 5 & 300 & 3000 & 5 \\
C & 5 & 400 & 4000 & 2 \\
D & 5 & 400 & 4000 & 5 \\
NRE & 5 & 500 & 5000 & 10 \\
NRF & 5 & 500 & 5000 & 20 \\
NRG & 5 & 1000 & 10000 & 2 \\
NRH & 5 & 1000 & 10000 & 5 \\
\hline
\end{tabular}
\caption{Test problem characteristics}
\label{table1}
\end{table} | 947 | 12,865 | en |
train | 0.5.3 | \subsection{Results}
Table~\ref{table2} shows the results obtained by our optimisation approach.
In that table we show the optimal/best-known solution value (OBK) for each problem, as taken from Lan et al~\cite{lan07}. We also show the solution value as obtained by our approach,
the final value of $K$ at termination,
the total computation time (in seconds), and whether the amended optimisation problem for the final value of $K$ was solved to proven optimality or not. We have not given in Table~\ref{table2} the number of iterations made in our procedure as this can easily be deduced by dividing the final value of $K$ by the neighbourhood increment $\delta$, where we used $\delta=5$.
So for example consider problem 4.10 in Table~\ref{table2}. The optimal/best-known solution for this instance is 514 and the \enquote{o} signifies that this was the solution found by our approach. The value of $K$, the size of the neighbourhood at the final iteration, was 45 and the total solution time was 0.4 seconds. The \enquote{yes} in the solution guarantee column signifies that the amended optimisation problem associated with this final value of $K$ was solved to proven optimality, indicating that we have an absolute guarantee that there is no improved solution within a Hamming distance of $K=45$ from the solution associated with the value of 514 as found by our approach.
Considering Table~\ref{table2} it is clear that for all but one of the 65 test problems we found the optimal/best-known solution. From Table~\ref{table1} these problems are of increasing size and for all 45 problems up to and including problem set D we have the guarantee on solution quality. However for the 20 larger problems we only have one instance in which this is the case (recall here that we impose a time limit for the solution of each and every amended optimisation problem encountered during the process).
\begin{table}[hbt!]
\footnotesize
\centering
\renewcommand{\tabcolsep}{1mm}
\renewcommand{\arraystretch}{0.85}
\begin{tabular}{|c|c|c|c|c|c|}
\hline
Instance & Optimal/best-known (OBK) & Solution & Final $K$ & Time (secs)
& Solution guarantee \\
\hline
4.1 & 429 & o & 45 & 0.4 & yes \\
4.2 & 512 & o & 50 & 0.6 & yes \\
4.3 & 516 & o & 55 & 0.5 & yes \\
4.4 & 494 & o & 50 & 0.5 & yes \\
4.5 & 512 & o & 45 & 0.4 & yes \\
4.6 & 560 & o & 55 & 0.7 & yes \\
4.7 & 430 & o & 50 & 0.4 & yes \\
4.8 & 492 & o & 45 & 0.8 & yes \\
4.9 & 641 & o & 55 & 1.0 & yes \\
4.10 & 514 & o & 45 & 0.4 & yes \\
\hline
5.1 & 253 & o & 50 & 1.1 & yes \\
5.2 & 302 & o & 50 & 1.3 & yes \\
5.3 & 226 & o & 50 & 0.7 & yes \\
5.4 & 242 & o & 45 & 1.1 & yes \\
5.5 & 211 & o & 45 & 0.7 & yes \\
5.6 & 213 & o & 45 & 0.8 & yes \\
5.7 & 293 & o & 50 & 1.1 & yes \\
5.8 & 288 & o & 45 & 1.0 & yes \\
5.9 & 279 & o & 50 & 0.8 & yes \\
5.10 & 265 & o & 50 & 0.7 & yes \\
\hline
6.1 & 138 & o & 60 & 2.0 & yes \\
6.2 & 146 & o & 40 & 1.1 & yes \\
6.3 & 145 & o & 45 & 1.5 & yes \\
6.4 & 131 & o & 45 & 1.4 & yes \\
6.5 & 161 & o & 50 & 1.8 & yes \\
\hline
A1 & 253 & o & 50 & 3.7 & yes \\
A2 & 252 & o & 55 & 4.1 & yes \\
A3 & 232 & o & 50 & 2.9 & yes \\
A4 & 234 & o & 50 & 2.4 & yes \\
A5 & 236 & o & 65 & 3.3 & yes \\
\hline
B1 & 69 & o & 40 & 3.5 & yes \\
B2 & 76 & o & 45 & 9.4 & yes \\
B3 & 80 & o & 45 & 4.7 & yes \\
B4 & 79 & o & 45 & 10.2 & yes \\
B5 & 72 & o & 40 & 3.4 & yes \\
\hline
C1 & 227 & o & 55 & 4.6 & yes \\
C2 & 219 & o & 60 & 5.8 & yes \\
C3 & 243 & o & 80 & 19.6 & yes \\
C4 & 219 & o & 50 & 4.0 & yes \\
C5 & 215 & o & 55 & 5.0 & yes \\
\hline
D1 & 60 & o & 50 & 7.6 & yes \\
D2 & 66 & o & 50 & 23.0 & yes \\
D3 & 72 & o & 55 & 23.5 & yes \\
D4 & 62 & o & 45 & 10.6 & yes \\
D5 & 61 & o & 45 & 5.1 & yes \\
\hline
NRE1 & 29 & o & 40 & 78.5 & no \\
NRE2 & 30 & o & 65 & 160.2 & no \\
NRE3 & 27 & o & 60 & 139.8 & no \\
NRE4 & 28 & o & 50 & 105.3 & no \\
NRE5 & 28 & o & 40 & 76.8 & no \\
\hline
NRF1 & 14 & o & 45 & 114.2 & no \\
NRF2 & 15 & o & 35 & 78.5 & no \\
NRF3 & 14 & o & 45 & 44.4 & yes \\
NRF4 & 14 & o & 40 & 93.4 & no \\
NRF5 & 13 & o & 45 & 109.4 & no \\
\hline
NRG1 & 176 & o & 65 & 308.3 & no \\
NRG2 & 154 & o & 60 & 219.7 & no \\
NRG3 & 166 & o & 85 & 565.3 & no \\
NRG4 & 168 & o & 115 & 821.2 & no \\
NRG5 & 168 & o & 75 & 475.4 & no \\
\hline
NRH1 & 63 & 64 & 55 & 344.9 & no \\
NRH2 & 63 & o & 70 & 501.0 & no \\
NRH3 & 59 & o & 100 & 773.0 & no \\
NRH4 & 58 & o & 85 & 618.6 & no \\
NRH5 & 55 & o & 55 & 342.0 & no \\
\hline
\end{tabular}
\caption{Computational results}
\label{table2}
\end{table}
\normalsize
In order to compare our results with previous results in the literature we have taken the detailed results given by various authors and computed percentage deviation from the
optimal/best-known value, OBK, as shown in Table~\ref{table2}. In other words using the solution values given by authors in their papers we computed 100(solution value - OBK)/OBK for each individual problem and then averaged the percentage deviations. Note here that whilst previous authors may have used this percentage deviation approach their OBK values may be different from those shown in Table~\ref{table2} (since obviously best-known values may be updated over time).
Table~\ref{table3} shows this comparison, where in each case we give the average percentage deviation; the number of solutions equal
to the optimal/best-known solution (OBK) and the average computation time (in seconds), as calculated
using solution details in the papers cited.
All of these averages relate to the same 65 instances which we considered, as detailed in Table~\ref{table1}.
So for example, for the work presented in this paper the average percentage deviation is 0.02\%,
for 64 of the 65 test problems the solution found was equal to the OBK solution (as detailed in Table~\ref{table2}) and the average computation time was 94.6 seconds.
As we would expect, different authors have used different hardware and so a direct comparison between computation times is difficult, but the times given are an indication of how quickly problems are solved on average. In order to help set the times shown in context, as the papers cited range from 1996 to 2023, a span of over 25 years during which hardware has improved immensely, we also show in Table~\ref{table3} the year each paper was published.
With regard to Table~\ref{table3}:
\begin{compactitem}
\item Aickelin~\cite{aickelin02} and Beasley and Chu~\cite{beasley96} used ten trials of their algorithms. So for both these papers
in Table~\ref{table3} we have used the best solution found over the ten trials in calculating percentage deviation, and the time shown is the total time for these ten trials.
\item For Caprara et al~\cite{caprara99} the detailed times given in their paper are the times as to when the final best solution was first found. As such we have no information as to the total time taken, which is the value given for the other papers cited.
\item For Lan et al~\cite{lan07} the detailed times given in their paper are the times as to when the final best solution was first found. For this reason the time given in
Table~\ref{table3} is taken from the best performing of three different variants of their approach (the variant Meta-RaPS w/randomized priority rules, see Table 4 in~\cite{lan07}) and the time given is as reported by them in their paper as the total time taken for that variant.
\item For Reyes and Araya~\cite{reyes21} the times given in their paper appear to be the average time for each trial, where they used 30 trials. So in Table~\ref{table3} we give the total time for 30 trials and in calculating percentage deviation we used the best solution found over these 30 trials.
\item For Soto et al~\cite{soto17} the times given in their paper are the average time for each trial~\cite{private22}, where they used 30 trials. So in Table~\ref{table3} we give the total time for 30 trials and in calculating percentage deviation we used the best solution found over these 30 trials.
\end{compactitem}
\noindent \emph{\textbf{We should stress here that the papers considered in Table~\ref{table3}
represent, based upon our literature review, the most effective heuristics for the set covering problem given previously in the literature.}}
Considering Table~\ref{table3},
it seems reasonable to conclude that our
approach is very competitive in terms of solution quality for the non-unicost set covering problem as compared with other approaches given in the literature.
In order to compare our results with those obtained using Cplex alone we solved all of the 65 test problems,
using Cplex~\cite{cplex1210} with default parameter settings, but imposed a time limit for each problem equal to the corresponding solution time as shown in Table~\ref{table2}.
Over the 65 test problems Cplex gave an average percentage deviation of 0.12\%, where for 60 of the 65 test problems the solution found was equal to the OBK solution. Considering the comparative values for the approach given in this paper as in Table~\ref{table3} (average percentage deviation of 0.02\%, 64 of the 65 test problems equal to the OBK solution) it is clear that our approach is adding value as compared with using Cplex alone. | 3,820 | 12,865 | en |
train | 0.5.4 | \hline
\end{tabular}
\caption{Computational results}
\label{table2}
\end{table}
\normalsize
In order to compare our results with previous results in the literature we have taken the detailed results given by various authors and computed percentage deviation from the
optimal/best-known value, OBK, as shown in Table~\ref{table2}. In other words using the solution values given by authors in their papers we computed 100(solution value - OBK)/OBK for each individual problem and then averaged the percentage deviations. Note here that whilst previous authors may have used this percentage deviation approach their OBK values may be different from those shown in Table~\ref{table2} (since obviously best-known values may be updated over time).
Table~\ref{table3} shows this comparison, where in each case we give the average percentage deviation; the number of solutions equal
to the optimal/best-known solution (OBK) and the average computation time (in seconds), as calculated
using solution details in the papers cited.
All of these averages relate to the same 65 instances which we considered, as detailed in Table~\ref{table1}.
So for example, for the work presented in this paper the average percentage deviation is 0.02\%,
for 64 of the 65 test problems the solution found was equal to the OBK solution (as detailed in Table~\ref{table2}) and the average computation time was 94.6 seconds.
As we would expect, different authors have used different hardware and so a direct comparison between computation times is difficult, but the times given are an indication of how quickly problems are solved on average. In order to help set the times shown in context, as the papers cited range from 1996 to 2023, a span of over 25 years during which hardware has improved immensely, we also show in Table~\ref{table3} the year each paper was published.
With regard to Table~\ref{table3}:
\begin{compactitem}
\item Aickelin~\cite{aickelin02} and Beasley and Chu~\cite{beasley96} used ten trials of their algorithms. So for both these papers
in Table~\ref{table3} we have used the best solution found over the ten trials in calculating percentage deviation, and the time shown is the total time for these ten trials.
\item For Caprara et al~\cite{caprara99} the detailed times given in their paper are the times as to when the final best solution was first found. As such we have no information as to the total time taken, which is the value given for the other papers cited.
\item For Lan et al~\cite{lan07} the detailed times given in their paper are the times as to when the final best solution was first found. For this reason the time given in
Table~\ref{table3} is taken from the best performing of three different variants of their approach (the variant Meta-RaPS w/randomized priority rules, see Table 4 in~\cite{lan07}) and the time given is as reported by them in their paper as the total time taken for that variant.
\item For Reyes and Araya~\cite{reyes21} the times given in their paper appear to be the average time for each trial, where they used 30 trials. So in Table~\ref{table3} we give the total time for 30 trials and in calculating percentage deviation we used the best solution found over these 30 trials.
\item For Soto et al~\cite{soto17} the times given in their paper are the average time for each trial~\cite{private22}, where they used 30 trials. So in Table~\ref{table3} we give the total time for 30 trials and in calculating percentage deviation we used the best solution found over these 30 trials.
\end{compactitem}
\noindent \emph{\textbf{We should stress here that the papers considered in Table~\ref{table3}
represent, based upon our literature review, the most effective heuristics for the set covering problem given previously in the literature.}}
Considering Table~\ref{table3},
it seems reasonable to conclude that our
approach is very competitive in terms of solution quality for the non-unicost set covering problem as compared with other approaches given in the literature.
In order to compare our results with those obtained using Cplex alone we solved all of the 65 test problems,
using Cplex~\cite{cplex1210} with default parameter settings, but imposed a time limit for each problem equal to the corresponding solution time as shown in Table~\ref{table2}.
Over the 65 test problems Cplex gave an average percentage deviation of 0.12\%, where for 60 of the 65 test problems the solution found was equal to the OBK solution. Considering the comparative values for the approach given in this paper as in Table~\ref{table3} (average percentage deviation of 0.02\%, 64 of the 65 test problems equal to the OBK solution) it is clear that our approach is adding value as compared with using Cplex alone.
\begin{table}[hbtp!]
\centering
\renewcommand{\tabcolsep}{1mm}
\renewcommand{\arraystretch}{1.0}
\begin{tabular}{|l|c|c|c|c|}
\hline
Approach & Average & Number of & Average & Year\\
& \% deviation & OBK solutions & time (secs) & published\\
\hline
This paper & 0.02 & 64 & 94.6 & 2023\\
Aickelin~\cite{aickelin02} & 0.13 & 61 & 1179.5 & 2002 \\
Beasley and Chu~\cite{beasley96} & 0.07 & 61 & 14694.3 &1996 \\
Caprara et al~\cite{caprara99} & 0 & 65 & not known &1999 \\
Lan et al~\cite{lan07} & 0 & 65 & 878.4 & 2007 \\
Naji-Azimi et al~\cite{naji10} & 0.18 & 53& 118.4 & 2010\\
Reyes and Araya~\cite{reyes21} & 0.25 & 50& 244.7 & 2021 \\
Soto et al~\cite{soto17} black hole & 1.55 & 35 & 184.5 & 2017\\
Soto et al~\cite{soto17} cuckoo search & 1.01 &35 & 151.0 & 2017 \\
\hline
\end{tabular}
\caption{Comparison of results}
\label{table3}
\end{table} | 1,670 | 12,865 | en |
train | 0.5.5 | \section{Conclusions}
\label{conc}
In this paper we have presented
a heuristic for the non-unicost set covering problem using local branching.
Local branching eliminates the need to define a problem specific search neighbourhood for any particular (zero-one) optimisation problem. It does this by incorporating a generalised Hamming distance neighbourhood into the problem, and this leads naturally to an appropriate neighbourhood search procedure.
We applied our approach to the non-unicost set covering problem and presented computational results for 65 test problems that have been widely considered in the literature. Our results indicated that our heuristic for the set covering problem using local branching
is very competitive in terms of solution quality with other approaches from the literature.
We would also stress here the relative simplicity involved in creating a local branching based matheuristic
using the approach given in this paper. Obviously one needs a mathematical formulation of the problem at hand, but aside
from that the most that is needed is some (relatively simple) problem-specific procedure to generate an initial feasible solution.
We believe that the work described here illustrates that the potential for using local branching, operating as a stand-alone matheuristic as in the approach described in this paper,
has not been fully exploited in the literature.
We hope that the work presented here will encourage others to explore using it for other problems.
\FloatBarrier
\pagestyle{empty}
\linespread{1}
\small \normalsize
\end{document} | 354 | 12,865 | en |
train | 0.6.0 | \begin{document}
\title{Differential systems with reflection\ and matrix invariants}
\footnotetext{\footnotemark Département de Physique Théorique et Section de Mathématiques. Université de Genève, Genève, CH-1211 Switzerland. \href{mailto:[email protected]}{[email protected]}}
\footnotetext{Corresponding author. Instituto de Matemáticas, Universidade de Santiago de Compostela, 15782, Facultade de Matemáticas, Santiago, Spain. \href{mailto:[email protected]}{[email protected]}. Partially supported by project MTM2016-75140-P (AEI/FEDER, UE) and Xunta de Galicia (Spain), project EM2014/032.}
\medbreak
\betaegin{abstract}
In this work we derive important properties regarding matrix invariants which occur in the theory of differential equations with reflection.
\varepsilonnd{abstract}
\textbf{Keywords:} differential equations with reflection, matrix invariants.
\section{Introduction}
In recent works regarding the solution and Green's functions of Differential Equations with Reflection (see for instance \gammaite{Toj3, TojMI, CTMal,CT17}) the strong relation between linear analysis and linear algebra is highlighted. In particular, in the most recent of the aforementioned works, the authors obtain an explicit fundamental matrix for the system of differential equations with reflection
\betaegin{equation}\lambdaabel{hlsystem}Hu(t):=Fu'(t)+Gu'(-t)+A u(t)+Bu(-t)=0, t\in{\mathbb R},
\varepsilonnd{equation}
where $n\in{\mathbb N}$, $A,B,F,G\in{\mathcal M}_n({\mathbb R})$ and $u:{\mathbb R}\to{\mathbb R}^n$. To be precise, they prove the following result.
\begin{thm}[\cite{CT17}]\label{thmexpfm}
Assume $F-G$ and $F+G$ are invertible. Then
\begin{equation*}\label{Xseries}X(t):= \sum_{k=0}^\infty\frac{E^k t^{2k}}{(2k)!} -(F+G)^{-1}(A+B)\sum_{k=0}^\infty\frac{E^k t^{2k+1}}{(2k+1)!},\end{equation*}
where $E=(F-G)^{-1}(A-B)(F+G)^{-1}(A+B)$, is a fundamental matrix of problem \eqref{hlsystem}. If we further assume $A-B$ and $A+B$ are invertible, then $E$ is invertible and we can consider a square root $\Omega$ of $E$. Then,
\begin{equation*}\label{fme}X(t)=\cosh \Omega t -(F+G)^{-1}(A+B)\Omega^{-1}\sinh\Omega t.\end{equation*}
\end{thm}
What is more, in another recent work the authors proved an analogue of Liouville's formula for the case with reflections in systems of order two.
\begin{thm}[Abel-Jacobi-Liouville Identity \cite{CoTo17}] \label{AJLid}Let $n=2$ in equation \eqref{hlsystem}. Then $(|X|,|X'|)$ is the unique solution of the system of differential equations
\begin{equation*}\label{n2e}\begin{aligned}x''= &\tr(E)x-2y, \\y''= &-2|E|x+\tr(E)y,\end{aligned}
\end{equation*}
subject to the one point conditions \[x(0)=1,\quad y(0)=|M_+|,\quad x'(0)=-\tr(M_+),\quad y'(0)=\tr(\operatorname{Adj}(M_+)E).\]
\end{thm}
The authors also presented in that work the following conjecture:
\begin{con} For any $n\ge 1$, if $X(t)$ is a fundamental matrix of problem \eqref{hlsystem}, then $|X(t)|$ can be obtained as a component of the solution of a linear system of differential equations with constant coefficients, those coefficients depending only on the different matrix invariants of $E$, which is defined as in Theorem \ref{thmexpfm}.
\end{con}
In order to attempt to prove this conjecture, and taking into account the proof of Theorem \ref{AJLid}, we need to study the different matrix invariants of the matrices appearing in the theory.
\section{The $\mathbf Y$ matrix}
For $X(t)$ the fundamental matrix of the problem, define
\begin{equation}
Y(t) := X(t)^{-1} X'(t).
\label{Ydef}
\end{equation}
We have that $X = S_1-M_+ S_2$, where $M_+=(F+G)^{-1}(A+B)$ and $S_1$ and $S_2$ are power series in $E$ which we can formally write, by using $\Omega = \sqrt{E}$, as $S_1=\cosh(\Omega t),\ S_2=\Omega^{-1} \sinh(\Omega t).$
Notice both are indeed power series in $\Omega^2 = E$. Since $X'' = X E$ and $ (X^{-1})'= - X^{-1} X'X^{-1}$ we have that
\begin{equation}
Y'= E - Y^2.
\label{ricattiEq}
\end{equation}
Using the construction of \cite{levin} we build an associated ODE system
\begin{equation*}
z' = \left(\begin{array}{cc}
0 & E \\ I & 0
\end{array}\right)z.
\end{equation*}
The system has as fundamental matrix
\begin{equation*}
\left(\begin{array}{cc}
\cosh(\Omega t) & \Omega^{-1}\sinh(\Omega t) \\ \Omega \sinh(\Omega t) & \cosh(\Omega t)
\end{array}\right).
\end{equation*}
The solution of equation (\ref{ricattiEq}) is then given by
\begin{equation*}
Y(t) = \left[\cosh(\Omega t) Y(0)+\Omega \sinh(\Omega t) \right] \left[\Omega^{-1}\sinh(\Omega t) Y(0)+\cosh(\Omega t) \right]^{-1},
\end{equation*}
which in terms of the $S$ functions is
$Y(t) = \left[-S_1 M_+ +E S_2\right] \left[-S_2 M_+ + S_1\right]^{-1}$,
where we fix the initial condition with $Y(0)=X'(0)=-M_+$.
This seems like a commuted version of expression (\ref{Ydef}), but it is nothing more than the hypergeometric identity.
Consider the Liouville equation for $Y$ itself, that is,
\[
\left(\log |Y|\right)' = \mathrm{Tr}\left(Y^{-1} Y'\right) = \mathrm{Tr}\left( Y^{-1}E - Y \right).\] Then we have
$\mathrm{Tr}(Y^{-1}E)=\mathrm{Tr}(Y)+\left(\log |Y|\right)'$, which can be calculated in terms of invariants of $|X|$.
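As a side remark (this numerical sanity check is ours and not part of the original argument), both $X''=XE$ and the Riccati equation \eqref{ricattiEq} are easy to verify for random coefficient matrices by truncating the series of Theorem \ref{thmexpfm}:
\begin{verbatim}
# Sanity check (illustrative): X'' = X E and Y' = E - Y^2 for the
# truncated series X(t); all matrices are random, not from the paper.
import numpy as np
from math import factorial

rng = np.random.default_rng(0)
n = 3
F, G, A, B = (rng.standard_normal((n, n)) for _ in range(4))
Mp = np.linalg.inv(F + G) @ (A + B)
E = np.linalg.inv(F - G) @ (A - B) @ Mp

def X(t, K=25):
    P = [np.linalg.matrix_power(E, k) for k in range(K)]
    S1 = sum(P[k] * t**(2*k)   / factorial(2*k)   for k in range(K))
    S2 = sum(P[k] * t**(2*k+1) / factorial(2*k+1) for k in range(K))
    return S1 - Mp @ S2

t, h = 0.3, 1e-5
Xpp = (X(t + h) - 2*X(t) + X(t - h)) / h**2
print(np.allclose(Xpp, X(t) @ E, atol=1e-4))        # X'' = X E

Y = lambda s: np.linalg.inv(X(s)) @ (X(s + h) - X(s - h)) / (2*h)
Yp = (Y(t + h) - Y(t - h)) / (2*h)
print(np.allclose(Yp, E - Y(t) @ Y(t), atol=1e-3))  # Riccati equation
\end{verbatim}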
train | 0.6.1 | \section{The $\mathbf Y$ matrix}
For $X(t)$ the fundamental matrix of the problem, define
\betaegin{equation}
Y(t) := X(t)^{-1} X'(t).
\lambdaabel{Ydef}
\varepsilonnd{equation}
We have that $X = S_1-M_+ S_2$ where $S_1$ and $S_2$ s are power series in $E$ which we can formally give, by using $\Omega = \sqrt{E}$, as $S_1=\gammaosh(\Omega t),\ S_2=\Omega^{-1} \sinh(\Omega t).$
Notice both are indeed power series in $\Omega^2 = E$. Since $X'' = X E$ and $ (Y^{-1})'= - Y^{-1} Y'Y^{-1}$ we have that
\betaegin{equation}
Y'= E - Y^2.
\lambdaabel{ricattiEq}
\varepsilonnd{equation}
Using the construction of \gammaite{levin} we build an associated ODE system
\betaegin{equation*}
z' = \lambdaeft(\betaegin{array}{cc}
0 & E \\ I & 0
\varepsilonnd{array}\right)z.
\varepsilonnd{equation*}
The system has as fundamental matrix
\betaegin{equation*}
\lambdaeft(\betaegin{array}{cc}
\gammaosh(\Omega t) & \Omega^{-1}\sinh(\Omega t) \\ \Omega \sinh(\Omega t) & \gammaosh(\Omega t)
\varepsilonnd{array}\right).
\varepsilonnd{equation*}
The solution of equation (\ref{ricattiEq}) is then given by
\betaegin{equation*}
Y(t) = \lambdaeft[\gammaosh(\Omega t) Y(0)+\Omega \sinh(\Omega t) \right] \lambdaeft[\Omega^{-1}\sinh(\Omega t) Y(0)+\gammaosh(\Omega t) \right]^{-1},
\varepsilonnd{equation*}
which in terms of the $S$ functions is
$Y(t) = \lambdaeft[-S_1 M_+ +E S_2\right] \lambdaeft[-S_2 M_+ + S_1\right]^{-1}$,
where we fix the initial condition with $Y(0)=X'(0)=-M_+$.
This seems like a commuted version of expression (\ref{Ydef}), but it is nothing more than the hypergeometric identity.
Consider the Liouville equation for $Y$ itself, that is,
\[
\lambdaeft(\lambdaog |Y|\right) '= \mathrm{Tr}\lambdaeft(Y^{-1} Y'\right) = \mathrm{Tr}\lambdaeft( Y - Y^{-1}E \right).\] Then we have
$\mathrm{Tr}(Y^{-1}E)=\mathrm{Tr}(Y)-\lambdaeft(\lambdaog |Y|\right)'$, which can be calculated in terms of invariants of $|X|$.
\section{Complex systems}
The main involution occurring in the theory of complex variable is the complex conjugation ${\mathcal C}:{\mathbb C}\to {\mathbb C}$, ${\mathcal C}(z)=\overline z$. It is, in fact, a reflection with respect to the second variable if we write $z=(x,y)\in{\mathbb R}^2$: ${\mathcal C}(x,y)=(x,-y)$.
We consider now an operator $L$ acting on $z(t)$ as
\begin{equation}
\label{mainComplexEquation}
A_0 z(t) + A_1 \overline{z(t)} + B_0 z'(t)+ B_1 \overline{z'(t)},
\end{equation}
where $z: \mathbb{R}\to \mathbb{C}^n$, and $A_i,B_i \in \mathcal{M}_{n\times n}\left(\mathbb{C}\right):=\mathcal{M}$.
We can consider an extended algebra $\mathcal{M}^*$ by adjoining the linear operation of complex conjugation, which acts as
$
\mathcal{C} z = \overline{z}, z\in \mathbb{C}^n.
$
It is easy to see that the following properties hold\footnote{In fact, one could use here any involution for which matrix conjugation verifies $\mathcal{C}A\mathcal{C}\in \mathcal{M}$
by defining $\overline{A}$ suitably.},
\begin{equation}
\label{conjugationProperties}
\mathcal{C}^2 = I, \quad \mathcal{C} A = \overline{A} \mathcal{C},
\end{equation}
where $\overline{A}$ is the complex conjugate of $A$.
Then we consider the free product quotiented by these relations,
\begin{equation*}
\mathcal{M}^* = \mathcal{M} \star \left\{\mathcal{C}\right\} \left/ \left( \mathcal{C}^2 = I,\ \mathcal{C} A = \overline{A} \mathcal{C}\right)\right. .
\end{equation*}
Now, we can see that this is in fact a $\mathbb{Z}_2$-graded algebra. Due to the conditions (\ref{conjugationProperties}), we can move any $\mathcal{C}$'s to the
right, and any power of it is reduced modulo $2$. Therefore, any element $\mathbf{A}\in \mathcal{M}^*$ can be written as
$
\mathbf{A} = A_0 + A_1 \mathcal{C}$ with $A_0,A_1 \in \mathcal{M}
$.
As a vector space, $
\mathcal{M}^* = \mathcal{M} \oplus \mathcal{M}$.
The grading is clear by looking at the product of two generic elements,
\begin{equation}
\label{conjugationAlgebraProduct}
\mathbf{A} \mathbf{B}=\left(A_0 + A_1\mathcal{C}\right)\left(B_0 + B_1\mathcal{C}\right) =\left(A_0 B_0 + A_1 \overline{B_1}\right) + \left( A_0 B_1 + A_1 \overline{B_0} \right) \mathcal{C}.
\end{equation}
We can also calculate an explicit inverse, using $
\left(I-A \mathcal{C}\right)\left(I+A \mathcal{C}\right) = I - A \overline{A}
$,
from where
$\left(I+A \mathcal{C}\right)^{-1} = \left(I-A \overline{A}\right)^{-1} \left(I - A \mathcal{C}\right)$.
Since $\lambdaeft(A B\right)^{-1}=B^{-1} A^{-1}$, we can generalize it to
\begin{equation*}
\left(A_0 + A_1 \mathcal{C}\right)^{-1} = \left(A_1^{-1}A_0 - \overline{A_0^{-1}A_1}\right)^{-1} \left(A_1^{-1}-\overline{A_0^{-1}}\mathcal{C}\right).
\end{equation*}
As it is, the expression is unclear when either $A_i=0$. We rewrite
\begin{equation}
\label{conjugationAlgebraInverse}
\mathbf{A}^{-1} = \Delta\left(A_0,A_1\right) + \Delta\left(\overline{A_1},\overline{A_0}\right) \mathcal{C},
\end{equation}
with
\begin{equation*}
\Delta\left(A_0,A_1\right)=\begin{cases}
0, & A_0 = 0, A_1 \neq 0, \\
\left(A_0 - A_1 \overline{A_0^{-1}A_1}\right)^{-1}, & A_0 \neq 0, A_1 \neq 0.
\end{cases}
\end{equation*}
As a last note, this can of course be realized as a matrix algebra over $\mathbb{R}^{2n}$, although it does not lead to anything new (other than clutter). For the record,
one can take a representation
\[
\rho\left(z\right) = \left(\begin{array}{c}
\Re z \\ \hline \Im z
\end{array}\right), \quad
\rho\left(A\right) = \left(\begin{array}{c|c}
\Re A & -\Im A \\ \hline
\Im A & \Re A
\end{array}\right), \quad
\rho\left(\mathcal{C}\right) = \left(\begin{array}{c|c}
I & 0 \\ \hline 0 & -I
\end{array}\right),
\]
for which it is easy to see that properties such as (\ref{conjugationProperties}) or (\ref{conjugationAlgebraProduct}) hold.
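Before moving on, the following short sketch (ours, with random matrices as assumptions) checks numerically that the product \eqref{conjugationAlgebraProduct} is compatible with the representation $\rho$, i.e., that $\rho$ is an algebra homomorphism:
\begin{verbatim}
# Illustrative check that rho(A*B) = rho(A) rho(B) for the product
# (A0 + A1 C)(B0 + B1 C); matrices are random, purely for testing.
import numpy as np

rng = np.random.default_rng(1)
n = 2
rnd = lambda: rng.standard_normal((n, n)) + 1j*rng.standard_normal((n, n))

def star_mul(a, b):
    A0, A1 = a; B0, B1 = b
    return (A0 @ B0 + A1 @ B1.conj(), A0 @ B1 + A1 @ B0.conj())

def rho(a):
    # matrix of z -> A0 z + A1 conj(z) acting on (Re z, Im z)
    A0, A1 = a
    return np.block([[(A0 + A1).real, -(A0 - A1).imag],
                     [(A0 + A1).imag,  (A0 - A1).real]])

a, b = (rnd(), rnd()), (rnd(), rnd())
print(np.allclose(rho(star_mul(a, b)), rho(a) @ rho(b)))  # True
\end{verbatim}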
\subsection{Equation reduction}
With these tools, one can rewrite equation (\ref{mainComplexEquation}) as
$
\mathbf{B} z'(t) + \mathbf{A} z(t) = 0,
$
which can be reduced to
$
z'(t) + \left(\mathbf{B}^{-1} \mathbf{A}\right) z(t) =0
$.
This means we can just focus on the study of
\begin{equation*}
z'(t) + A_0 z(t) + A_1 \overline{z(t)} =0.
\end{equation*}
Of course, one could now look for a fundamental operator inside the $\mathcal{M}^*$ algebra, such that solutions fulfill
\begin{equation*}
z(t) = \mathbf{X}(t) z(0),
\end{equation*}
which is
\begin{equation*}
\mathbf{X} = \cosh \left(\mathbf{A} t\right) - \sinh \left(\mathbf{A} t\right).
\end{equation*}
Unfortunately, it does not seem easy to calculate terms like $\mathbf{A}^n$ at the moment. We could in principle directly get
an explicit fundamental matrix if we could write a manageable expression. However, we do learn something important. Since $\mathbf{A}^n \in \mathcal{M}^*$, then
$\cosh\left( \mathbf{A} t\right) \in \mathcal{M}^*$ too. Hence, if we want to find fundamental matrices for the problem, the ansatz must be of the form
\begin{equation*}
z(t) = \left( X_0(t) + X_1(t) \mathcal{C} \right)z(0)= X_0(t) z(0) + X_1(t) \overline{z(0)},
\end{equation*}
where $X_0(0)=I$ and $X_1(0)=0$ so as to agree with $\mathbf{X}(0)=I$.
\subsection{$\mathbf{A^n}$ generating function}
The components of $\mathbf{A}^n$ can still be algorithmically computed by using the expression
\begin{equation*}
\left(I-t \mathbf{A}\right)^{-1} = \sum_{n=0}^\infty \left(t \mathbf{A}\right)^n.
\end{equation*}
By writing the explicit inverse of $I-t\mathbf{A}=\left(I-t A_0\right)-t A_1 \mathcal{C}$, we get
\begin{small}
\begin{equation*}
\mathbf{A}^n = \frac{1}{n!} \frac{\mathrm{d}^n}{\mathrm{d}t^n}\left[ \left(A_1^{-1}\left( I - t A_0\right) - \overline{\left(I -t A_0\right)^{-1}A_1}\right)^{-1}
\left(A_1^{-1} - \overline{ \left(t^{-1}I-A_0\right)^{-1} } \mathcal{C}\right)\right]_{t=0}.
\end{equation*}
\end{small}
\subsection{Ansatz}
We take the system
\begin{equation*}
z'+A z+\overline{B z} = 0, \quad z: \mathbb{R}\to \mathbb{C}^n, \quad A,B \in \mathcal{M}
\end{equation*}
and introduce the ansatz
\begin{equation*}
z = X z_0 + \overline{Y z_0}, \quad z_0=z(0), \quad X,Y: \mathbb{R} \to \mathcal{M},
\end{equation*}
\begin{equation*}
\left(X' + A X + \overline{B} Y \right) z_0 + \overline{\left(Y'+ \overline{A} Y + B X\right)}\, \overline{z_0} = 0,
\end{equation*}
\begin{equation}
\label{fundamentalSystem}
\left\{ \begin{array}{r}
X' + A X + \overline{B} Y = 0, \\
Y'+ \overline{A} Y + B X = 0.
\end{array} \right.
\end{equation}
This is an ordinary system. Take $X''$ and substitute $Y'$ and $Y$ through equation (\ref{fundamentalSystem}),
\begin{align*}
\begin{split}
& X'' + A X' + \overline{B} \left(-B X - \overline{A} \overline{B^{-1}}\left(-X'-A X\right)\right) = 0, \\
& X'' + \left(A + \overline{B A B^{-1}}\right) X' + \left(\overline{B A B^{-1}}A - \overline{B}B\right) X = 0.
\end{split}
\end{align*}
Unsurprisingly, we get a very similar structure to the inverses in expression (\ref{conjugationAlgebraInverse}). For the sake of notation, we will rename the coefficients as
\begin{equation*}
\label{singleFundamentalMatrixEquation}
X'' + F X' + G X= 0.
\end{equation*}
Repeating the process, we get
\begin{equation*}
Y'' + \overline{F} Y' + \overline{G} Y = 0.
\end{equation*}
The initial conditions for this second order problem are given by $\mathbf{X}(0)=I$ and equation (\ref{fundamentalSystem}). That is,
\[X(0) = I, \quad Y(0)=0,\quad
X'(0) = -A, \quad Y'(0) = -B.
\]
In principle, we could now take as an ansatz
\begin{equation*}
X = \alpha e^{(\Gamma + \Omega) t} + \beta e^{(\Gamma-\Omega) t},
\end{equation*}
subject to the conditions
\begin{equation*}
X(0) = \alpha + \beta, \quad X'(0) = \alpha (\Gamma+\Omega) + \beta (\Gamma-\Omega),
\end{equation*}
which can be inverted into
\begin{align*}
\begin{split}
\alpha = &\frac{1}{2} \left[ X'(0) - X(0) \left(\Gamma-\Omega\right) \right] \Omega^{-1},\\
\beta = &-\frac{1}{2} \left[ X'(0) - X(0) \left(\Gamma+\Omega\right) \right] \Omega^{-1} .
\end{split}
\end{align*}
train | 0.6.2 | \section{Generalized matrix invariants}
In the following section we use the concept of crossed or generalized matrix invariants which can be found in \cite{simon} and \cite{cgm} among others.
\subsection{Definition and basic properties}
Let $X_1,\dots,X_N \in GL\left(n\right)$. Define
\begin{equation}\label{z} Z\left(X_1,\dots,X_N\right) := \det\left( I + \sum_{i=1}^N \alpha_i X_i \right) := \sum_{m_i} \alpha_1^{m_1}\dots\alpha_N^{m_N} Z_{m_1,\dots,m_N}\left(X_1,\dots,X_N\right) .\end{equation}
Since $\det$ is an algebraic combination of matrix entries, the expansion is a polynomial in the $\alpha_i$ variables. We can however take the sum to be over
all integer values of $m_i$ by suitably defining its $\alpha$-coefficients, $Z_{m_1,\dots,m_N}$, as zero when not corresponding to any power that appears in the $\det$ expansion.
In particular,
\begin{equation}
Z_{m_1,\dots,m_N}\left(X_1,\dots,X_N\right) = 0 \text{ if } \min\left\{m_i\right\}<0.
\label{noNegativeMultiTraces}
\end{equation}
These $Z$ coefficients then give us the \textit{generalized matrix invariants}, which reduce to the usual ones when we only consider one matrix (or set the other indices to $0$).
We can get explicit expressions in terms of traces via
\begin{equation}
\det(I+\alpha X) = e^{\mathrm{Tr}\log (I+\alpha X)}
\label{detExp}
\end{equation}
by expanding the Taylor series around $\alpha=0$ and using the linearity of the trace. This already gives, looking at the leading order of the exponential expansion,
\begin{equation}
Z_{0,\dots,0} (X_1,\dots,X_N) = 1.
\label{zeroMultiTrace}
\end{equation}
In the same way, we can reduce any expression with a $0$ index,
\begin{equation*}
Z_{n_1,\dots,n_{N-1},0} \left(X_1,\dots,X_N\right) = Z_{n_1,\dots,n_{N-1}} \left(X_1,\dots,X_{N-1}\right).
\end{equation*}
Looking at higher coefficients upon expanding the exponential returns higher invariants. For instance,
\begin{align*}
\mathrm{Tr}\log\left(I+\alpha X\right) =& \alpha \mathrm{Tr}\left(X\right) -\frac{\alpha^2}{2} \mathrm{Tr}\left(X^2\right) + \frac{\alpha^3}{3} \mathrm{Tr}\left(X^3\right)
+ O\left(\alpha^4\right), \\
Z_{1}(X) =& \mathrm{Tr}(X),\\
Z_{2}(X) =& \frac{1}{2} \left( \mathrm{Tr}(X)^2-\mathrm{Tr}(X^2) \right),\\
Z_{3}(X) =& \frac{1}{6} \left( \mathrm{Tr}(X)^3-3\mathrm{Tr}(X^2)\mathrm{Tr}(X)+2\,\mathrm{Tr}(X^3) \right),
\end{align*}
etc., but also
\begin{equation*}
Z_{1,1}(X,Y) = \mathrm{Tr}(X) \mathrm{Tr}(Y)-\mathrm{Tr}(X Y).
\end{equation*}
Of course, equation (\ref{detExp}) is usually proven using Liouville's formula. We will make contact with it again later, when looking at the derivatives of the $Z$ invariants themselves.
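These low-order expressions are easy to confirm symbolically; the following sketch (ours, with arbitrary integer matrices chosen as test data) extracts $Z_{1,1}(X,Y)$ as a coefficient of the determinant expansion \eqref{z}:
\begin{verbatim}
# Illustrative check of Z_{1,1}(X,Y) = Tr(X)Tr(Y) - Tr(XY);
# the matrices are arbitrary test data.
import sympy as sp

a, b = sp.symbols("a b")
X = sp.Matrix([[1, 2], [3, 4]])
Y = sp.Matrix([[0, 1], [5, 2]])

poly = sp.expand((sp.eye(2) + a*X + b*Y).det())
Z11 = poly.coeff(a, 1).coeff(b, 1)      # coefficient of a*b

print(sp.simplify(Z11 - (X.trace()*Y.trace() - (X*Y).trace())) == 0)
\end{verbatim}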
\subsection{Factorization}
Consider
\begin{equation*}
\det\left(I+\alpha A + \sum_{i} \beta_i B_i\right),
\end{equation*}
and the fact that
\begin{equation*}
\det\left( \alpha A \right) =\alpha^n \det A.
\end{equation*}
Extract this determinant from the original expansion,
\begin{align*}
&\det\left(I+\alpha A + \sum_{i} \beta_i B_i\right) = \sum_{l,m_i} \alpha^{l} \left(\prod \beta_i^{m_i}\right) Z_{l, m_1,\dots,m_N}\left(A,B_1,\dots,B_N\right) \\ = &\det\left(A\right) \sum_{l,m_i} \alpha^{l} \left(\prod \beta_i^{m_i}\right) Z_{n-l-\sum m_i,m_1,\dots,m_N} \left(A^{-1},A^{-1}B_1,\dots,A^{-1}B_N\right).
\end{align*}
Equating the coefficients of the two polynomials, we get a \textbf{duality} relationship
\begin{equation}\label{multiFactorization}
Z_{l, m_1,\dots,m_N}\left(A,B_1,\dots,B_N\right) = \det\left(A\right) Z_{n-l-\sum m_i,m_1,\dots,m_N} \left(A^{-1},A^{-1}B_1,\dots,A^{-1}B_N\right).\end{equation}
Already an interesting property comes from the fact that any $Z$ with negative indices must be $0$, by equation (\ref{noNegativeMultiTraces}). The dual of this statement is then
\begin{equation*}
Z_{m_1,\dots,m_N}\left(X_1,\dots,X_N\right) = 0 \text{ if } \sum_{i}m_i > n.
\end{equation*}
We will call the sum of all indices $\sum m_i$ the \textbf{order} of the trace $Z_{m_1,\dots,m_N}$.
That an invariant of order higher than the size of the matrix is zero
reduces, as expected, to the usual property of matrix invariants when we have a single matrix, and together with expression (\ref{noNegativeMultiTraces}) ensures that only
a finite number of $Z$ invariants for any given set of $X_i$ is non-zero.
We can also take a dual of equation (\ref{zeroMultiTrace}), which is the well known
\begin{equation*}
Z_{n}\left(X\right) = \det\left(X\right)
\end{equation*}
or
\begin{equation*}
1 = \det\left(X\right) Z_{n}\left(X^{-1}\right).
\end{equation*}
Now, this statement gets interesting when we introduce more matrices. Consider the two matrix case,
\begin{equation*}
Z_{l,m} \left(A,B\right) = \det\left(A\right) Z_{n-l-m,m}\left(A^{-1},A^{-1}B\right)
\end{equation*}
and set $A=X$, $B=X Y$, and $l+m=n$,
\begin{equation*}
Z_{n-m,m}\left(X, X Y\right) = \det\left(X\right) Z_{0,m}\left(X^{-1},Y\right) = \det\left(X\right) Z_m\left(Y\right).
\end{equation*}
This, which is the generalization of
\begin{equation*}
\det\left(X Y\right) = \det\left(X\right) \det\left(Y\right),
\end{equation*}
allows us to decompose order $n$ invariants of a product into products of invariants. In particular, we get the $n=2$ expression with which
we built the ODE system.
More generally,
\begin{equation}
Z_{n-\sum m_i,m_1,\dots,m_N} \left(X,X Y_1,\dots,X Y_N\right) = \det\left(X\right) Z_{m_1,\dots,m_N}\left(Y_1,\dots,Y_N\right).
\label{determinantFactorization}
\end{equation}
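Again, this can be spot-checked symbolically; a small sketch of ours (with arbitrary test matrices, $n=2$, $m=1$) follows:
\begin{verbatim}
# Illustrative check of Z_{n-m,m}(X, XY) = det(X) Z_m(Y) for n=2, m=1,
# where Z_1 = Tr; the matrices are arbitrary test data.
import sympy as sp

a, b = sp.symbols("a b")
X = sp.Matrix([[2, 1], [0, 3]])
Y = sp.Matrix([[1, 4], [2, 1]])

poly = sp.expand((sp.eye(2) + a*X + b*(X*Y)).det())
Z_11 = poly.coeff(a, 1).coeff(b, 1)     # Z_{1,1}(X, XY), order n = 2

print(sp.simplify(Z_11 - X.det()*Y.trace()) == 0)   # True
\end{verbatim}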
\subsection{Small-$\mathbf\epsilon$ expansion}
By using expression \eqref{z} we can easily derive distributivity properties, which can be applied to calculate
\begin{align*}
& Z_{l,m_1,\dots,m_N}\left(A_1+\epsilon A_2+O\left(\epsilon^2\right),B_1,\dots,B_N\right) \\
=&\sum_{i=0}^{l} \epsilon^i Z_{l-i,i,m_1,\dots,m_N}\left(A_1,A_2+O\left(\epsilon\right),B_1,\dots,B_N\right) \\
=&\sum_{i=0}^{l} \epsilon^i \left[Z_{l-i,i,m_1,\dots,m_N}\left(A_1,A_2,B_1,\dots,B_N\right) +O\left(\epsilon\right) \right] \\
=&\sum_{i=0}^{1} \epsilon^i Z_{l-i,i,m_1,\dots,m_N}\left(A_1,A_2,B_1,\dots,B_N\right) +O\left(\epsilon^2\right) \\
=&Z_{l,m_1,\dots,m_N}\left(A_1,B_1,\dots,B_N\right) + \epsilon Z_{l-1,1,m_1,\dots,m_N}\left(A_1,A_2,B_1,\dots,B_N\right) +O\left(\epsilon^2\right).
\end{align*}
\subsection{Derivatives}
As we have seen before, the derivatives of the invariants play an essential role in the theory. We would now like to have a formula for derivatives of the form
\begin{equation*}
\frac{\mathrm{d}}{\mathrm{d}t} Z_m\left(X\left(t\right)\right).
\end{equation*}
Consider
\begin{equation*}
Z^{(m_0,m_1,m_2,\dots)}\left(X\right) := Z_{m_0,m_1,m_2,\dots}\left(X,X',X'',\dots\right),
\end{equation*}
such that for some $N$ we have $m_i=0$ for every $i>N$.
We can retrieve its first derivative from its Taylor series, which we can in turn get from its small $\epsilon$ expansion.
\begin{align*}
& Z^{(m_0,m_1,\dots)}\left( X+\epsilon X'+O\left(\epsilon^2\right) \right) \\
=&Z^{(m_0,m_1,m_2,\dots)}\left(X\right)+\epsilon \left(m_1+1\right) Z_{m_0-1,m_1+1,m_2,\dots}\left(X,X',X'',\dots\right) \\ & +\epsilon \left(m_2+1\right) Z_{m_0,m_1-1,m_2+1,\dots}\left(X,X',X'',\dots\right)+\cdots
\end{align*}
Taking the $\epsilon$ term we get the first coefficient of the Taylor series, i.\,e., the first derivative,
\[
\left(Z^{(m_0,m_1,m_2,\dots)}\left(X\right)\right)' =\sum_{i=1}^\infty \left(m_i+1\right) Z^{ (m_0,\dots,m_{i-1}-1,m_i+1,\dots ) }\left(X\right).
\]
Notice that the infinite sum is merely formal, since by equation (\ref{noNegativeMultiTraces}) it is guaranteed to terminate as soon as all the remaining $m_i$ are $0$, due
to the $m_{i-1}-1$ index at every term.
For the first few derivatives, we find via recursion the general expressions
\begin{align*}
Z_{m} \left(X\right)' = \left(Z^{(m)}\right)'=&Z^{(m-1,1)},
\\
Z_{m} \left(X\right)''= \left(Z^{(m)}\right)''=& 2 Z^{(m-2,2)} + Z^{(m-1,0,1)},
\\
Z_{m} \left(X\right)''' = \left(Z^{(m)}\right)''' =& 6 Z^{(m-3,3)} + 3 Z^{(m-2,1,1)} + Z^{(m-1,0,0,1)}.
\end{align*}
Something very important (albeit somehow obvious, following Leibniz's rule for matrices), is that the order of the invariants involved in the expressions is preserved.
This allows us to use the factorization formula (\ref{determinantFactorization}) over the derivatives of the determinant, which corresponds to $Z_n$.
As a small note, if we take in the first derivative $m=n$ together with expression (\ref{multiFactorization}), we get
\begin{equation*}
\det\left(X\right)'=Z_{n-1,1}\left(X,X'\right) = \det\left(X\right) Z_1\left(X^{-1} X'\right) = \det\left(X\right) \mathrm{Tr}\left(X^{-1} X'\right),
\end{equation*}
the usual Liouville's Formula. | 3,674 | 12,170 | en |
train | 0.6.3 | \subsection{Small-$\mathbf\varepsilonpsilon$ expansion}
By using expression \varepsilonqref{z} we can easily derive distributivity properties, which can be applied to calculate
\betaegin{align*}
& Z_{l,m_1,\deltaots,m_N}\lambdaeft(A_1+\varepsilonpsilon A_2+O\lambdaeft(\varepsilonpsilon^2\right),B_1,\deltaots,B_N\right) \\
=&\sum_{i=0}^{l} \varepsilonpsilon^i Z_{l-i,i,m_1,\deltaots,m_N}\lambdaeft(A_1,A_2+O\lambdaeft(\varepsilonpsilon\right),B_1,\deltaots,B_N\right) \\
=&\sum_{i=0}^{l} \varepsilonpsilon^i \lambdaeft[Z_{l-i,i,m_1,\deltaots,m_N}\lambdaeft(A_1,A_2,B_1,\deltaots,B_N\right) +O\lambdaeft(\varepsilonpsilon\right) \right] \\
=&\sum_{i=0}^{1} \varepsilonpsilon^i Z_{l-i,i,m_1,\deltaots,m_N}\lambdaeft(A_1,A_2,B_1,\deltaots,B_N\right) +O\lambdaeft(\varepsilonpsilon^2\right) \\
=&Z_{l,m_1,\deltaots,m_N}\lambdaeft(A_1,B_1,\deltaots,B_N\right) + \varepsilonpsilon Z_{l-1,1,m_1,\deltaots,m_N}\lambdaeft(A_1,A_2,B_1,\deltaots,B_N\right) +O\lambdaeft(\varepsilonpsilon^2\right).
\varepsilonnd{align*}
\subsection{Derivatives}
As we have seen before, the derivatives of the invariants play an essential role in the theory. We would now like to have a formula for derivatives of the form
\betaegin{equation*}
\frac{\mathrm{d}}{\mathrm{d}t} Z_m\lambdaeft(X\lambdaeft(t\right)\right).
\varepsilonnd{equation*}
Consider
\betaegin{equation*}
Z^{(m_0,m_1,m_2,\deltaots)}\lambdaeft(X\right) := Z_{m_0,m_1,m_2,\deltaots}\lambdaeft(X,X',X'',\deltaots\right),
\varepsilonnd{equation*}
such that for some $N$ we have $m_i=0$ for every $i>N$.
We can retrieve its first derivative from its Taylor series, which we can in turn get from its small $\varepsilonpsilon$ expansion.
\betaegin{align*}
& Z^{(m_0,m_1,\deltaots)}\lambdaeft( X+\varepsilonpsilon X'+O\lambdaeft(\varepsilonpsilon^2\right) \right) \\
=&Z^{(m_0,m_1,m_2,\deltaots)}\lambdaeft(X\right)+\varepsilonpsilon \lambdaeft(m_1+1\right) Z_{m_0-1,m_1+1,m_2,\deltaots}\lambdaeft(X,X',X'',\deltaots\right) \\ & +\varepsilonpsilon \lambdaeft(m_2+1\right) Z_{m_0,m_1-1,m_2+1,\deltaots}\lambdaeft(X,X',X'',\deltaots\right)+\gammadots
\varepsilonnd{align*}
Taking the $\varepsilonpsilon$ term we get the first coefficient of the Taylor series, i.\,e., the first derivative,
\[
\lambdaeft(Z^{(m_0,m_1,m_2,\deltaots)}\lambdaeft(X\right)\right)' =\sum_{i=1}^\infty \lambdaeft(m_i+1\right) Z^{ (m_0,\deltaots,m_{i-1}-1,m_i+1,\deltaots ) }\lambdaeft(X\right).
\]
Notice that the infinite sum is merely formal, since by equation (\ref{noNegativeMultiTraces}) it is guaranteed to terminate as soon as all the remaining $m_i$ are $0$, due
to the $m_{i-1}-1$ index at every term.
For the first few derivatives, we find via recursion the general expressions
\betaegin{align*}
Z_{m} \lambdaeft(X\right)' = \lambdaeft(Z^{(m)}\right)'=&Z^{(m-1,1)},
\\
Z_{m} \lambdaeft(X\right)''= \lambdaeft(Z^{(m)}\right)''=& 2 Z^{(m-2,2)} + Z^{(m-1,0,1)},
\\
Z_{m} \lambdaeft(X\right)''' = \lambdaeft(Z^{(m)}\right)''' =& 6 Z^{(m-3,3)} + 3 Z^{(m-2,1,1)} + Z^{(m-1,0,0,1)}.
\varepsilonnd{align*}
Something very important (albeit somehow obvious, following Leibniz's rule for matrices), is that the order of the invariants involved in the expressions is preserved.
This allows us to use the factorization formula (\ref{determinantFactorization}) over the derivatives of the determinant, which corresponds to $Z_n$.
As a small note, if we take in the first derivative $m=n$ together with expression (\ref{multiFactorization}), we get
\betaegin{equation*}
\deltaet\lambdaeft(X\right)'=Z_{n-1,1}\lambdaeft(X,X'\right) = \deltaet\lambdaeft(X\right) Z_1\lambdaeft(X^{-1} X'\right) = \deltaet\lambdaeft(X\right) \mathrm{Tr}\lambdaeft(X^{-1} X'\right),
\varepsilonnd{equation*}
the usual Liouville's Formula.
\subsection{Application to the differential system of invariants for $\mathbf{n>2}$}
In the matrix dimension $m=2$ case, the system obtained by taking derivatives of the determinant eventually closes, since $X''=XE$. This follows from
\[\det(X)'' = Z_m(X)'' = 2 Z^{(m-2,2)}(X) + Z^{(m-1,0,1)}(X).\]
The $Z^{(m-1,0,1)}(X)$ term can be immediately rewritten as a determinant by using the duality formula,
\[Z_{m-1,0,1} (X, X', X'') = \det(X) Z_1 (X^{-1} X'') = \det(X) \tr(E),\]
and, when $m=2$,
\[Z^{(m-2,2)}(X) = Z^{(0,2)}(X) = \det(X').\]
Of course, now we can do the same for $X'$,
\[\det(X')'' = 2 Z^{(m-2,2)}(X') + Z^{(m-1,0,1)}(X')\]
and, for $m=2$,
\[\det(X')'' = 2 \det(X'') + \det(X') \tr(E) = 2 \det(E) \det(X) + \det(X') \tr(E),\]
closing the system as we had found in Theorem \ref{AJLid}. The problem is now obvious, since for $m>2$, $Z^{(m-2,2)}(X)$ will involve a non-trivial product between $X$ and $X'$. One could consider this as a new variable for the system, but its derivatives will now concern objects of the form $Z^{(m-2,1,1)}(X)$ which, if understood as yet another variable of the system, would yield upon derivation
\[Z^{(m-2,1,0,1)}(X),\ Z^{(m-2,1,0,0,1)}(X),\ Z^{(m-2,1,0,\dots,0,1)}(X),\ \dots\]
Notice that this will always involve a term in $X'$, and a term in $X$, so that we cannot perform the same trick as we did for $Z^{(m-1,0,1)}(X)$ --namely, using $X''=XE$ to factor the determinant out. Hence, the system of second derivatives of invariants for $m>2$ does not close.
\begin{thebibliography}{1}
\providecommand{\url}[1]{{#1}}
\providecommand{\urlprefix}{URL }
\expandafter\ifx\csname urlstyle\endcsname\relax
\providecommand{\doi}[1]{DOI~\discretionary{}{}{}#1}\else
\providecommand{\doi}{DOI~\discretionary{}{}{}\begingroup
\urlstyle{rm}\Url}\fi
\bibitem{Toj3}
Cabada, A., Tojo, F.A.F.: \emph{Solutions and {Green's} function of the first
order linear equation with reflection and initial conditions}.
\newblock Bound. Value Probl. \textbf{2014}(1), 99 (2014)
\bibitem{CTMal}
Cabada, A., Tojo, F.A.F.: \emph{{Green's} functions for reducible functional
differential equations}.
\newblock Bull. Malays. Math. Sci. Soc. pp. 1--22 (2016)
\bibitem{CT17}
Cabada, A., Tojo, F.A.F.: \emph{On linear differential equations and systems
with reflection}.
\newblock Applied Mathematics and Computation \textbf{305}, 84--102 (2017)
\bibitem{cgm}
Codesido, S., Grassi, A., Marino, M.: \emph{Spectral theory and mirror curves
of higher genus}.
\newblock In: Annales Henri Poincar{\'e}, pp. 1--64. Springer (2015)
\bibitem{CoTo17}
Codesido, S., Tojo, F.A.F.: \emph{A Liouville's Formula for systems with
reflection} (preprint)
\bibitem{levin}
Levin, J.: \emph{On the matrix Riccati equation}.
\newblock Proceedings of the American Mathematical Society \textbf{10}(4),
519--524 (1959)
\bibitem{simon}
Simon, B.: \emph{Notes on infinite determinants of Hilbert space operators}.
\newblock Advances in Mathematics \textbf{24}(3), 244--273 (1977)
\bibitem{TojMI}
Tojo, F.A.F.: \emph{Computation of Green's functions through algebraic
decomposition of operators}.
\newblock Boundary Value Problems \textbf{2016}(1), 167 (2016)
\end{thebibliography}
\end{document}
train | 0.7.0 | \begin{document}
\title{Discriminating between L\"uders and von Neumann measuring devices: \\ An NMR investigation }
\author{C. S. Sudheer Kumar, Abhishek Shukla, and T. S. Mahesh
}
\email{[email protected]}
\affiliation{Department of Physics and NMR Research Center,\\
Indian Institute of Science Education and Research, Pune 411008, India}
\begin{abstract}
Measurement of an observable on a quantum system involves a probabilistic collapse of the quantum state and a corresponding measurement outcome. L\"uders and von Neumann state update rules attempt to describe the above phenomenological observations. These rules are identical for a nondegenerate observable, but differ for a degenerate observable. While L\"uders rule preserves superpositions within a degenerate subspace under a measurement of the corresponding degenerate observable, the von Neumann rule does not. Recently Hegerfeldt and Mayato
[Phys. Rev. A, 85, 032116 (2012)]
had formulated a protocol to discriminate between the two types of measuring devices. Here we have reformulated this protocol for quantum registers comprising system and ancilla qubits. We then experimentally investigated this protocol using nuclear spin systems with the help of NMR techniques, and found that L\"uders rule is favoured.
\end{abstract}
\keywords{Degeneracy, State update Rules }
\pacs{ 03.65.Ta, 03.67.-a, 03.67.Ac}
\maketitle
\section{Introduction}
The quantum measurement paradox lies at the heart of the foundations of quantum mechanics \cite{D_Home_book}. It is an experimental fact that, upon measurement, a quantum state collapses into an eigenstate of the observable being measured. However, there is no collapse in the unitary evolution described by the Schr\"odinger equation, and therefore the collapse has to be imposed from outside the formalism.
Let us assume an observable $A_N$ with discrete and nondegenerate eigenspectrum. In that case, the measurement leads to a collapse of the state to one of the eigenstates of $A_N$ (see Fig. \ref{luder}).
On the other hand, if we consider an observable $A$ with a degenerate eigenspectrum, there are two extreme rules to update the state after the measurement. The most commonly used rule was postulated by Gerhart L\"uders in 1951 \cite{luders1951zustandsanderung,Luders_originl_paper}. According to it, a system existing in a superposition of degenerate eigenstates is unaffected by the measurement such that the superposition is preserved.
However, an earlier postulate by von Neumann, proposed in 1932 \cite{von_math_foundof_QM}, does not preserve such a superposition. In the latter postulate,
the measuring device refines the observable $A$ into another commuting observable $A'$ (actual system observable) having a nondegenerate spectrum. The resulting measurement collapses the state to an eigenstate of $A'$, and the original superposition is not preserved under the measurement as if the degeneracy has been lifted \cite{Discriminate_von_Luder_protocol}.
Although one generally assumes the L\"uders state update rule implicitly in quantum physics, occasionally one encounters applications of the von Neumann state update rule.
One example is in the context of Leggett-Garg inequality in multilevel quantum systems \cite{multi_level_LGI}.
In principle, measurements which are intermediate between L\"uders and von Neumann can also be conceived \cite{Discriminate_von_Luder_protocol,multi_level_LGI}.
\begin{figure}
\caption{Comparison between L\"uders and von Neumann measurement postulates. }
\label{luder}
\end{figure}
Recently, Hegerfeldt and Mayato have proposed a general protocol (HM protocol) to discriminate between L\"uders and von Neumann kind of measuring devices \cite{Discriminate_von_Luder_protocol}.
To explain this protocol we consider an observable $A$, having two-fold degenerate eigenvalues, say $+1$ and $-1$ (see Fig. \ref{HMprotocolfig}). The HM protocol involves the following steps: (i) prepare an eigenstate $\ket{\xi_\mathrm{in}}$ of $A$, (ii) let the device measure $A$, and (iii) characterize the output state. In step (ii) a L\"uders measurement will preserve the state, while a von Neumann measurement may not. The last step is simply to determine if the step (ii) has changed the state or not. If the state has changed, we conclude that the device is von Neumann. Else, either the device is of L\"uders type, or the chosen initial state $\ket{\xi_\mathrm{in}}$ happens to be a nondegenerate eigenstate of the actual system observable $A'$. To rule out the latter possibility, one may change the initial state and repeat the above steps (Fig. \ref{HMprotocolfig}). This way one can attempt to discriminate between the L\"uders and von Neumann measurement devices.
\begin{figure}
\caption{HM protocol for discriminating between L\"uders and von Neumann measurements. }
\label{HMprotocolfig}
\end{figure}
In this work, we reformulate the HM protocol for a quantum register and try to investigate it using experiments. Nuclear spin ensembles in liquid, liquid-crystalline, or solid-state systems have often been chosen as convenient testbeds for studying foundations of quantum physics \cite{suter1988study,Moussa,LGI_Soumya,Context_cssk}. Their main advantages are long coherence times and excellent control over quantum dynamics via highly developed nuclear magnetic resonance (NMR) techniques.
In section II, we briefly explain the HM protocol as adapted to an NMR setup.
The experimental details to discriminate between the L\"uders and von Neumann measuring devices are described in section III. Finally, we conclude in section IV.
train | 0.7.1 | \section{Theory}
For the sake of clarity, and also to match the experimental details described in the next section, we consider a system of two qubits.
Since the system is to be measured projectively, the dimension of the pointer basis should be greater than or equal to that of the system, and hence we need at least two ancillary qubits.
We refer to the ancillary qubits as (1,2) and system qubits as (3,4).
We use Zeeman product basis as our computational basis and denote eigenkets of $\sigma_z$, the Pauli $z$-operator,
by $\ket{0}$ and $\ket{1}$. We denote the basis vectors of system qubits as
\begin{eqnarray}
\ket{\phi_{0}} = \ket{00}, \ket{\phi_{1}} = \ket{01}, \ket{\phi_{2}} = \ket{10},
\ket{\phi_{3}} = \ket{11}.
\label{phis}
\end{eqnarray}
Let us assume a two-fold degenerate system-observable with spectral decomposition
\begin{eqnarray}
A = (\Pi_0+\Pi_1)-(\Pi_2+\Pi_3), ~~~~
\label{A}
\end{eqnarray}
where the projectors are defined as $\Pi_j=\proj{\chi_j}$, and $\ket{\chi_0}=\alpha_0\ket{\phi_0}+\beta_0\ket{\phi_1}$, $\ket{\chi_1}=\alpha_1\ket{\phi_0}+\beta_1\ket{\phi_1}$, $\ket{\chi_2}=\alpha_2\ket{\phi_2}+\beta_2\ket{\phi_3}$, $\ket{\chi_3}=\alpha_3\ket{\phi_2}+\beta_3\ket{\phi_3}$ are eigenvectors of $A$.
The projectors have the property $\Pi_k\Pi_l=\delta_{kl}\Pi_k$. We note that $A$ has no unique spectral decomposition due to the degeneracy.
We consider a measurement model, wherein a quantum system being measured undergoes a joint evolution with the measuring device, ultimately forming an entangled state. When the measuring device collapses to a particular pointer state, the system also collapses to the corresponding eigenstate. Let $Q$ be the observable corresponding to the ancilla (measuring device) and $g$ be the system-ancilla interaction strength. The joint evolution is then of the form
\begin{eqnarray}
U_\mathrm{int} = \exp(-i{\cal H}_\mathrm{int}\tau),
\end{eqnarray}
where ${\cal H}_\mathrm{int} = g ~Q \otimes A$ is the interaction Hamiltonian in units of angular frequency.
To fix the basis inside a degenerate subspace, we should choose a nondegenerate observable $A'$ which commutes with $A$, so that they are simultaneously diagonalizable and hence we can find a common eigenbasis. For simplicity we choose the computational basis $\{\ket{\phi_j}\}$ as the common eigenbasis. Then the observable $A'$ must have the following spectral decomposition
\begin{eqnarray}
A'= \sum_{j=0}^3 a'_j P_j,
\label{A'_defined}
\end{eqnarray}
where $P_j = \proj{\phi_j}$ and the nondegenerate eigenvalues $a'_j$ are yet to be determined.
Let us assume the device to be of von Neumann type, which refines the degenerate observable $A$ that is being measured into a nondegenerate observable $A'$, via a mapping $f(A')=A$.
As the refined observable $A'$ has nondegenerate eigenvalues and commutes with $A$, it fixes the basis inside the degenerate subspace. However, the choice of $A'$ is not unique, i.e., any orthonormal basis inside the degenerate subspace can be nondegenerate eigenkets of $A'$, and the von Neumann device has the freedom to choose among them \cite{von_math_foundof_QM}.
The measurement outcome is passed via the refining function $f$, such that $f(a'_0)=f(a'_1)=+1$ and $f(a'_2)=f(a'_3)=-1$. Hence the outcome is same as if $A$ is being measured. To projectively measure the observable $A'$, the measuring device has to jointly evolve with the system under the interaction Hamiltonian,
\begin{eqnarray}
{\cal H}'_\mathrm{int}= g ~Q\otimes A'.
\end{eqnarray}
For instance, we choose $Q=q_1\sigma_{1z}+q_2\sigma_{2z}$, where
$\sigma_{1z} = \sigma_z \otimes\mathbbm{1}_2$,
$\sigma_{2z} = \mathbbm{1}_2\otimes\sigma_z$ and $\mathbbm{1}_2$ is $2\times 2$ identity operator.
The joint evolution between the measuring device (ancillary qubits) and the system is described by the unitary operator
\begin{eqnarray}
U'_\mathrm{int} = \exp(-i{\cal H}'_\mathrm{int} \tau),
\end{eqnarray}
where $\tau$ is duration of the evolution.
\begin{figure}
\caption{An interpolating function $a = f(a') = (-a'^3+7a')/6$ mapping the nondegenerate eigenvalues $a'$ of $A'$ onto degenerate eigenvalues $a$ of $A$.}
\label{fofx}
\end{figure}
If each of the quantum registers is initially prepared in $\ket{\Phi_0} = \ket{++++}$, with $\ket{+} = (\ket{0}+\ket{1})/\sqrt{2}$, the state after the joint evolution is given by
\begin{eqnarray}
U'_\mathrm{int}\ket{\Phi_0} &=& \frac{1}{2} \bigg( e^{-iga'_0Q \tau}\ket{++}\ket{\phi_0}+
e^{-iga'_1Q \tau}\ket{++}\ket{\phi_1}+ \nonumber \\
&& e^{-iga'_2Q \tau}\ket{++}\ket{\phi_2}+
e^{-iga'_3Q \tau}\ket{++}\ket{\phi_3} \bigg) \nonumber \\
&=& \frac{1}{2} \sum_{j=0}^3 \ket{\psi_j}\ket{\phi_j},
\label{von_evolutn}
\end{eqnarray}
where $\ket{\phi_j}$ are as defined in Eq. \ref{phis} and $\ket{\psi_j} = \exp(-iga'_jQ \tau)\ket{++}$ represent states of the ancillary qubits. To realize the projective measurement, the pointer basis $\{\ket{\psi_j}\}$ must be orthonormal. Imposing the mutual orthogonality condition results in trigonometric constraint equations leading to a set of possible solutions. One such possible solution is
\begin{eqnarray}
\begin{array}{l|l}
a'_0 = -a'_2 = -3 & q_1 = \pi/(4g\tau) \\
a'_1 = -a'_3 = 1 & q_2 = -q_1/2.\\
\end{array}
\end{eqnarray}
Again, the von Neumann measuring device has the freedom to choose a particular pointer basis among several possible ones. Substituting the above values in Eq. \ref{A'_defined}, we obtain,
\begin{eqnarray}
A'=-3 P_0+ P_1 +3 P_2 -P_3,
\end{eqnarray}
which is obviously nondegenerate in the computational basis. The refining function $f$ can now be set up by interpolating the eigenvalue distribution (see Fig. \ref{fofx}). For the above example, we find a possible map to be $f(A') = (-A'^3+7A')/6 = A$.
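As a quick numerical illustration (ours, not part of the experiment), one can confirm that this cubic indeed folds the refined spectrum back onto the degenerate one:
\begin{verbatim}
# Check that f(A') = (-A'^3 + 7A')/6 maps the spectrum {-3,1,3,-1}
# of A' onto the degenerate spectrum {+1,+1,-1,-1} of A.
import numpy as np

Ap = np.diag([-3.0, 1.0, 3.0, -1.0])    # A' in the computational basis
f = lambda M: (-np.linalg.matrix_power(M, 3) + 7*M) / 6
print(np.diag(f(Ap)))                   # [ 1.  1. -1. -1.]
\end{verbatim}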
The quantum circuit for discriminating L\"uders and von Neumann devices, illustrated in Fig. \ref{fig_discrimnt_von_lud}, involves four qubits each of which is initialized in state $\ket{+}$. If the device is L\"uders, the system undergoes a joint evolution $U_\mathrm{int}$ with the ancilla resulting in the state
\begin{eqnarray}
U_\mathrm{int} \ket{\Phi_0} &=& \frac{1}{\sqrt{2}}
\bigg( e^{-igQ \tau}\ket{++} \frac{d_0\ket{\chi_0}+d_1\ket{\chi_1}}{\sqrt{2}} + \nonumber \\
&& e^{igQ \tau}\ket{++} \frac{d_2\ket{\chi_2}+d_3\ket{\chi_3}}{\sqrt{2}} \bigg) \nonumber \\
& = & \frac{1}{\sqrt{2}} \bigg(
\ket{\psi_1} \frac{\ket{\phi_0}+\ket{\phi_1}}{\sqrt{2}} +
\ket{\psi_3} \frac{\ket{\phi_2}+\ket{\phi_3}}{\sqrt{2}}
\bigg),~~~~
\label{Luder_evolution}
\end{eqnarray}
where the coefficients $d_j$ depend on the choice of $\ket{\chi_j}$ (defined after Eq. \ref{A}) \cite{ludernote1}.
\begin{figure}
\caption{(a) Quantum circuit to discriminate L\"uders and von Neumann devices.
(b) The NMR pulse-scheme to implement the circuit in (a).}
\label{fig_discrimnt_von_lud}
\end{figure}
Note that if the measuring device is of von Neumann type, it will instead measure $A'$, and pass the measurement outcome via the function $f$, as explained before.
After the joint evolution of system and ancilla, a selective measurement of ancilla qubits is carried out.
Generally in a quantum measurement the measuring device collapses to its pointer basis. In our scheme, we perform the projective measurement in the computational basis after transforming the ancilla qubits onto the computational basis using a similarity transformation $U_{a}^\dagger$, such that
\begin{eqnarray}
U_{a}\ket{00}=\ket{\psi_0},U_{a}\ket{01}=\ket{\psi_1},\nonumber\\
U_{a}\ket{10}=\ket{\psi_2},U_{a}\ket{11}=\ket{\psi_3}.
\label{U_ra_constraints}
\end{eqnarray}
By substituting the explicit forms of $\ket{\psi_j}$, we obtain
\begin{eqnarray}
U_{a}=\frac{1}{2}
\left[
\begin{array}{cccc}
z^3 & z^{-1} & z^{-3} & z \\
z^{9} & z^{-3} & z^{-9} & z^{3} \\
z^{-9} & z^{3} & z^{9} & z^{-3} \\
z^{-3} & z & z^{3} & z^{-1}
\end{array}
\right],
\label{U_ra}
\end{eqnarray}
where $z=\exp(i\pi/8)$.
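The following sketch (ours; $g\tau=1$ is an arbitrary normalization, since only the product $g\tau$ matters here) verifies numerically that the states $\ket{\psi_j}=\exp(-iga'_jQ\tau)\ket{++}$ form an orthonormal pointer basis and coincide with the columns of $U_a$:
\begin{verbatim}
# Numerical check of the pointer basis and of U_a; g*tau = 1 is an
# arbitrary normalization chosen for illustration.
import numpy as np

gtau = 1.0
q1 = np.pi / (4*gtau); q2 = -q1/2
sz, I2 = np.diag([1.0, -1.0]), np.eye(2)
Qd = np.diag(q1*np.kron(sz, I2) + q2*np.kron(I2, sz))  # diagonal of Q
plus = np.ones(4) / 2.0                                # |++>

psi = np.column_stack([np.exp(-1j*ap*gtau*Qd) * plus
                       for ap in (-3, 1, 3, -1)])
print(np.allclose(psi.conj().T @ psi, np.eye(4)))      # orthonormal

z = np.exp(1j*np.pi/8)
Ua = 0.5*np.array([[z**3,  z**-1, z**-3, z    ],
                   [z**9,  z**-3, z**-9, z**3 ],
                   [z**-9, z**3,  z**9,  z**-3],
                   [z**-3, z,     z**3,  z**-1]])
print(np.allclose(psi, Ua))                            # columns match
\end{verbatim}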
Finally, the ancilla is traced-out and the state of system qubits is characterized with the help of quantum state tomography.
According to the L\"uders state update rule, if a degenerate observable $A$ (as in Eq. \ref{A}) is measured on a system in state $\rho_0$, then the postmeasurement state of the ensemble is described by
\begin{eqnarray}
\rho_{L}=\sum_{l=\pm 1}{\mathbb P}_l\rho_{0}{\mathbb P}_l,
\label{rhoLgen}
\end{eqnarray}
where ${\mathbb P}_{+1} = \Pi_{0}+\Pi_{1}$,
${\mathbb P}_{-1} = \Pi_{2}+\Pi_{3}$.
For the initial state $\rho_0 = \proj{\Phi_0}$,
we obtain
\begin{eqnarray}
\rho_L =
(\mathbbm{1}_4 + \mathbbm{1}_2 \otimes \sigma_x)/4.
\label{rhoL}
\end{eqnarray}
However according to von Neumann's degeneracy breaking state update rule, the postmeasurement state of the ensemble is given by
\begin{eqnarray}
\rho_N = \sum_{j=0}^3 \Pi_j \rho_0 \Pi_j,
\label{rhoNgen}
\end{eqnarray}
where, $\Pi_j$'s are fixed by the refining observable $A'$.
Therefore, for the initial state $\rho_0 = \proj{\Phi_0}$ and the observable $A'$ (Eq. \ref{A'_defined}), the postmeasurement state collapses to a maximally mixed state, i.e.,
\begin{eqnarray}
\rho_N = \mathbbm{1}_4/4.
\label{rhoN}
\end{eqnarray}
In both the cases, the probabilities of obtaining the eigenvalues $\pm 1$ are identical, i.e.,
\begin{eqnarray}
p_{+1} &=& \mathrm{Tr}({\mathbb P}_{+1}\rho_{0}{\mathbb P}_{+1})=\sum_{j=0,1}\mathrm{Tr}(\Pi_j \rho_0 \Pi_j) ~~\mathrm{and,}\nonumber \\
p_{-1} &=& \mathrm{Tr}({\mathbb P}_{-1}\rho_{0}{\mathbb P}_{-1})=\sum_{j=2,3}\mathrm{Tr}(\Pi_j \rho_0\Pi_j).
\end{eqnarray}
Thus although, the measurement outcomes (eigenvalues) and their probabilities are identical, the postmeasurement states $\rho_L$ and $\rho_N$ are different \cite{Luders_originl_paper,von_math_foundof_QM,Discriminate_von_Luder_protocol,multi_level_LGI}.
In fact, the Uhlmann fidelity between $\rho_L$ and $\rho_N$ turns out to be $F(\rho_L,\rho_N)=\mbox{Tr}\sqrt{\sqrt{\rho_N}\rho_L\sqrt{\rho_N}}=1/\sqrt{2}$ \cite{quant_info_neilson_chuang}.
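A one-line numerical check (ours) of this fidelity value:
\begin{verbatim}
# Check F(rho_L, rho_N) = 1/sqrt(2) for the two postmeasurement states.
import numpy as np
from scipy.linalg import sqrtm

sx = np.array([[0, 1], [1, 0]])
rho_L = (np.eye(4) + np.kron(np.eye(2), sx)) / 4
rho_N = np.eye(4) / 4

F = np.trace(sqrtm(sqrtm(rho_N) @ rho_L @ sqrtm(rho_N))).real
print(np.isclose(F, 1/np.sqrt(2)))      # True
\end{verbatim}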
Therefore, it is possible to discriminate between the L\"uders and von Neumann devices by simply characterizing the final state of the system as shown by the circuit in Fig. \ref{fig_discrimnt_von_lud}.
\begin{figure}
\caption{Molecular structure of 1,2-Dibromo-3,5-difluorobenzene, Hamiltonian parameters, and the relaxation parameters. In the table, the diagonal values indicate resonance offsets ($\omega_j/2\pi$); off-diagonal values $(J_{ij}
\label{mol_H}
\end{figure} | 3,622 | 7,694 | en |
train | 0.7.2 | \section{Experiment}
We utilize the four spin-1/2 nuclei of 1,2-dibromo-3,5-difluorobenzene (DBDF) as our quantum register.
About 12 mg of DBDF was partially oriented in 600 $\upmu$l of liquid crystal MBBA. The molecular structure of DBDF and its NMR Hamiltonian parameters are shown in Fig. \ref{mol_H}.
The experiments were performed at 300 K on a 500 MHz Bruker UltraShield NMR spectrometer.
The secular part of the spin-Hamiltonian is of the form \cite{cavanagh},
\begin{eqnarray}
{\cal H}_0 &=& -\sum_{j=1}^4 \omega_j I_{jz}
+ 2\pi\sum_{j,k>j} (J_{jk}+2D_{jk}) I_{jz}I_{kz} \nonumber \\
&&+ 2\pi (J_{12}-D_{12})( I_{1x}I_{2x}+ I_{1y}I_{2y}),
\end{eqnarray}
where $\omega_j$, $J_{ij}$, and $D_{ij}$ are the resonance off-sets, indirect scalar coupling constants, and direct dipole-dipole coupling constants (Fig. \ref{mol_H}).
The strong-coupling term (i.e., the last term) is relevant only for (H$_1$, H$_2$) spins since $\vert\omega_1-\omega_2\vert < 2\pi \vert D_{12} \vert$.
We choose H$_1$, H$_2$ as ancilla (qubits 1, 2) and F$_3$, F$_4$ as the system (qubits 3, 4).
The NMR pulse diagram to implement the quantum circuit in Fig. \ref{fig_discrimnt_von_lud}(a) is shown
in Fig. \ref{fig_discrimnt_von_lud}(b). It begins with the initial state preparation. The thermal equilibrium state of the NMR system in the Zeeman eigenbasis under high-field, high-temperature, and secular approximation is given by \cite{Levitt_Spindynabook,cavanagh},
\begin{equation}
\rho_{\mathrm{eq}} =
\mathbbm{1}_{16}/16 + \sum_{j=1}^4 \epsilon_j I_{zj},
\label{rho_eq}
\end{equation}
where $\epsilon_j \sim 10^{-5}$ are the purity factors and the second term in the right hand side corresponds to the traceless deviation density matrix. The identity part is invariant under the unitary transformations and does not give rise to observable signal. Therefore only the deviation part is generally considered for both state preparation and characterization \cite{cory}.
The initial state of the quantum register assumed in the theory section, i.e., $\ket{\Phi_0}$ can be prepared by applying an Hadamard operator on each of the four qubits in a pure $\ket{0}$ state. However, in NMR, the preparation of such pure states is difficult and instead a pseudopure state is used \cite{cory}.
In our work, we utilize a technique based on preparing a pair of pseudopure states (POPS) \cite{POPS_Fung}. It involves inverting a single transition and subtracting the resulting spectrum from that of the thermal equilibrium. By inverting the $\ket{0000} \to \ket{0001}$ transition using a transition-selective $\pi$ pulse, followed by Hadamard gates ({\textsf H}) on all the spins, we obtain the POPS deviation density matrix:
\begin{eqnarray}
\rho_\mathrm{POPS} = \big(\proj{++++}-\proj{+++-}\big) \nonumber.
\end{eqnarray}
We then implemented the quantum circuit shown in Fig. \ref{fig_discrimnt_von_lud} (a) using the pulse sequence in Fig. \ref{fig_discrimnt_von_lud} (b).
As evident from circuit in Fig. \ref{fig_discrimnt_von_lud}, controls are designed to implement $U_{\mathrm{int}}$ (Eq. \ref{Luder_evolution}) since we intend to measure $A$. Whether to map it to $A'$ or not is left to the device.
The unitary operators $U_\mathrm{int}$ and $U_{a}^\dagger$ were realized by
bang-bang optimal control \cite{Bangbang_TSM}. Hadamard and tomography operations were only a few hundred microseconds long and had a simulated fidelity of about 0.99, when averaged over $\pm 10\%$ inhomogeneous RF fields.
The combined operation of $U_\mathrm{int}$ and $U_a^\dagger$ was about 17 ms in duration and had an average fidelity of over $0.933$.
The intermediate measurement on the ancilla was realized by applying strong pulsed-field-gradients (PFG). By applying a $\pi_x$ pulse on the system spins in between two symmetrically spaced PFG pulses, we realize the selective dephasing of the ancilla spins (Fig. \ref{fig_discrimnt_von_lud} (b)). The central $\pi_x$ also refocuses all the system-ancilla coherent evolutions during the ancilla measurement. When averaged over the sample volume this process retains only the diagonal terms in the density matrix of the ancilla spins and thus simulates a projective measurement of the ancilla. Setting the total duration of this process to $1/(J_{34}+2D_{34})$ also ensures refocusing of (F$_3$, F$_4$) interactions.
Finally, the density matrix of the system qubits was characterized using quantum state tomography. It involved nine independent measurements with different tomography pulses ({\textsf T}) (Fig. \ref{fig_discrimnt_von_lud} (b)) \cite{Tomography_Chuang447,Tomo_Soumya}.
The results of the quantum circuit (Fig. \ref{fig_discrimnt_von_lud}) on $\proj{++++}$ state by L\"uders and von Neumann devices are described in Eqs. \ref{rhoL} and \ref{rhoN} respectively.
For L\"uders measurement with the POPS input state
$\proj{++++}-\proj{+++-}$, the final deviation density matrix (in circuit \ref{fig_discrimnt_von_lud}) is expected to be
\begin{eqnarray}
\rho_L' = \mathbbm{1}_2 \otimes \sigma_x/2.
\label{Lud_finalstate}
\end{eqnarray}
On the other hand, for von Neumann measurement, the POPS input state leads to a maximally mixed final state with a null deviation density matrix $(\rho_N')$.
Fig. \ref{fig_tomo} compares the experimental results with the theoretically expected deviation density matrices. The correlation \cite{Correltn_fidlty_cory}
\begin{eqnarray}
C = \frac{\mathrm{Tr}[\rho'_{L}\rho'_\mathrm{exp}]}{\sqrt{\mathrm{Tr}[\rho_{L}^{\prime 2}]\mathrm{Tr}[\rho_\mathrm{exp}^{\prime 2}]}}
\label{C_corelatn}
\end{eqnarray}
between the theoretical ($\rho'_{L}$, Eq. \ref{Lud_finalstate}) and
the experimental ($\rho'_\mathrm{exp}$) deviation density matrices was 0.923. The reduction in the correlation is mainly due to coherent errors caused by imperfect unitary operators, fluctuations in the dipolar coupling constants due to temperature gradients over the sample volume, inhomogeneous RF fields, as well as due to decoherence.
The correlation expression in Eq. \ref{C_corelatn} is not directly applicable for the null-matrix $\rho'_N$. Therefore, we replace $\rho'_N$ with random traceless diagonal matrices, and obtained 0.28 as the upper bound for the correlation of $\rho'_\mathrm{exp}$ with $\rho'_N$.
Therefore we conclude that the experimental deviation density matrix is much closer to $\rho'_L$ (Eq. \ref{Lud_finalstate}), and strongly favors the L\"uders update rule.
\begin{figure}
\caption{Real (a) and imaginary (b) parts of the theoretically expected deviation density matrix for a L\"uders device ($\rho'_{L}
\label{fig_tomo}
\end{figure}
\section{Conclusions}
Quantum measurements, involving probabilistic state collapse and corresponding measurement outcomes, have always been mysterious. There have been attempts to deduce rules based on phenomenological observations. According to one of the earliest reduction rules, given by von Neumann, superposition in a degenerate subspace is destroyed by the measurement of the respective degenerate observable. This rule was later substantially modified by Gerhart L\"uders. The modified rule, which is most commonly used, implies that superpositions within the degenerate subspaces are preserved under such a measurement.
A protocol to determine whether a given measuring device is L\"uders or von Neumann was recently formulated by Hegerfeldt and Mayato \cite{Discriminate_von_Luder_protocol}. In this work, we have adapted this protocol for quantum information systems, and utilize ancilla qubits for performing a desired measurement on system qubits. Moreover, we describe an NMR experiment, with two system qubits and two ancilla qubits, to discriminate between L\"uders and von Neumann devices. Within the limitations of experimental NMR techniques, we found that the measurements are of L\"uders type.
There is a possibility that the above measurement is still of von Neumann type, if the chosen initial state happens to be a nondegenerate eigenstate of the actual system observable ($A'$). One way to rule out this possibility is by changing the initial state (Fig. \ref{HMprotocolfig}). However, it is also possible that the actual system observable is dynamic, in which case it is even more difficult to discriminate between L\"uders and von Neumann measurements. In this work we have not excluded these possibilities.
Nevertheless, the present work opens many interesting questions. For example, how can we build a von Neumann measuring device, or even an intermediate measuring device that partly breaks the degeneracy? More importantly, further research in this direction may throw some light on fundamental aspects of quantum measurement itself.
\end{document} | 2,549 | 7,694 | en |
train | 0.8.0 | \begin{document}
\begin{center} {Stability and bifurcation analysis of a SIR model with saturated incidence rate and saturated treatment}
\end{center}
\begin{center}
{\small \textsc{Erika Rivero-Esquivel}\footnote{email: [email protected]}, \textsc{Eric \'Avila-Vales}\footnote{email: [email protected]}, \textsc{Gerardo E. Garc\'ia-Almeida}\footnote{Corresponding author. email: [email protected] Tel. (999) 942 31 40 Ext. 1108.}
}
\end{center}
\begin{center} {\small \sl $^{1,2,3}$ Facultad de Matem\'aticas, Universidad Aut\'onoma de Yucat\'an, \\ Anillo Perif\'erico Norte, Tablaje 13615, C.P. 97119, M\'erida, Yucat\'an, Mexico}
\end{center}
{\small \centerline{\bf Abstract}
\begin{quote}
We study the dynamics of an SIR epidemic model with a nonlinear incidence rate, vertical transmission, vaccination of newborns, and a saturated treatment function that takes into account the limitedness of medical resources and the efficiency of their supply. Under some conditions we prove the existence of backward bifurcation and establish the stability and direction of Hopf bifurcation. We also explore how the mechanism of backward bifurcation affects the control of the infectious disease. Numerical simulations are presented to illustrate the theoretical findings.
\end{quote}}
\noindent {\bf Keywords}: Local Stability, Hopf Bifurcation, Global Stability, Backward Bifurcation
\section{Introduction}
Mathematical models that describe the dynamics of infectious diseases in
communities, regions and countries can contribute to have better approaches in
the disease control in epidemiology. Researchers always look for thresholds,
equilibria, periodic solutions, persistence and eradication of the disease.
For classical disease transmission models, it is common to have one endemic
equilibrium, and the basic reproduction number tells us that a disease
persists if it is greater than 1 and dies out if it is less than 1. This kind of
behaviour is associated with forward bifurcation. However, there are epidemic
models with multiple endemic equilibria
\cite{hadeler1997,dushoff1998,driessche2000,brauer2004}; within these models
it can happen that a stable endemic equilibrium coexists with a disease-free
equilibrium, a phenomenon called backward bifurcation \cite{hadeler1995}.
In order to prevent and control the spread of infectious diseases like
measles, tuberculosis and influenza, treatment is an important and effective
method. In classical epidemic models, the treatment rate of the infectious is
assumed to be proportional to the number of infective individuals
\cite{anderson1991}. We therefore need to investigate how the application of
treatment affects the dynamical behaviour of these diseases. In that direction,
Wang and Ruan \cite{wang2004} considered the removal rate
\[
T(I)=\left\{
\begin{array}
[c]{ll}
k, & \text{if }I>0\\
0, & \text{if }I=0.
\end{array}
\right.
\]
in the following model
\[
\begin{aligned}
\frac{dS}{dt}&= A-dS-\lambda SI, \\
\frac{dI}{dt}&=\lambda SI-(d+\gamma)I-T(I), \\
\frac{dR}{dt}&=\gamma I+T(I)-dR,
\end{aligned}
\]
where $S$, $I$, and $R$ denote the numbers of the susceptible, infective and
recovered individuals at time $t$, respectively. The authors study the
stability of equilibria and prove that the model exhibits Bogdanov--Takens
bifurcation, Hopf bifurcation and homoclinic bifurcation. In \cite{zhangliu},
the authors introduce a saturated treatment
\[
T(I)=\frac{\beta I}{1+\alpha I}.
\]
Related works are \cite{zhoufan} and \cite{wancui}. \par
Hu, Ma and Ruan \cite{humaruan} studied the model
\begin{equation}
\begin{aligned} \dfrac{dS}{dt} =& bm(S+R) - \frac{\beta SI}{1+\alpha I} -bS+p\delta I\\ \dfrac{dI}{dt} =& \frac{\beta SI}{1+\alpha I}+(q\delta -\delta -\gamma)I - T(I)\\ \dfrac{dR}{dt} =& \gamma I-bR+bm'(S+R)+T(I) \label{hmr1}
\end{aligned}
\end{equation}
The basic assumptions for the model \eqref{hmr1} are as follows. The total population
size at time $t$ is denoted by $N=S+I+R$. The newborns of $S$ and $R$ are
susceptible individuals, and the newborns of $I$ who are not vertically
infected are also susceptible individuals; $b$ denotes the death rate and
birth rate of susceptible and recovered individuals, $\delta$ denotes the death
rate and birth rate of infective individuals, and $\gamma$ is the natural recovery rate
of infective individuals. $q$ ($q\leq1$) is the vertical transmission rate;
writing $p=1-q$, we have $0\leq p\leq1$. A fraction $m^{\prime}$ of all newborns
with mothers in the susceptible and recovered classes are vaccinated and
appear in the recovered class, while the remaining fraction, $m=1-m^{\prime}$,
appears in the susceptible class. The incidence rate is described by the
nonlinear function $\beta SI/(1+\alpha I)$, where $\beta$ is a positive
constant describing the infection rate and $\alpha$ is a nonnegative constant.
The treatment rate of the disease is
\[
T(I)=\left\{
\begin{array}
[c]{ll}
kI, & \text{if }0\leq I\leq I_{0},\\
u=kI_{0}, & \text{if }I>I_{0}
\end{array}
\right.
\]
where $I_{0}$ is the infective level at which the healthcare system reaches
capacity.\newline
In this work we will extend model \eqref{hmr1} introducing the treatment rate
$\frac{\beta_{2} I}{1+\alpha_{2} I}$, where $\alpha_{2}$, $\beta_{2}>0$,
obtaining the following model
\begin{equation}
\label{hmr3}\begin{aligned} \dfrac{dS}{dt} =& bm(S+R) - \frac{\beta SI}{1+\alpha I} -bS+p\delta I\\
\dfrac{dI}{dt} =& \frac{\beta SI}{1+\alpha I}+(q\delta -\delta -\gamma)I - \frac{\beta_2 I}{1+\alpha_2 I}\\
\dfrac{dR}{dt} =& \gamma I-bR+bm'(S+R)+\frac{\beta_2 I}{1+\alpha_2 I}. \end{aligned}
\end{equation}
Because $\frac{dN}{dt}=0$, the total population $N$ is constant. For
convenience, it is assumed that $N=S+I+R=1$. By using $S+R=1-I$, the first two
equations of \eqref{hmr3} do not contain the variable $R$. Therefore, system
\eqref{hmr3} is equivalent to the following 2-dimensional system:
\begin{equation}
\label{ruanmod}\begin{aligned} \dfrac{dS}{dt} &=-\dfrac{\beta SI}{1+\alpha I}-bS+bm(1-I)+p\delta I \\ \dfrac{dI}{dt} &=\dfrac{\beta SI}{1+\alpha I}-p\delta I -\gamma I-\dfrac{\beta _2 I}{1+\alpha _2 I} . \end{aligned}
\end{equation}
The parameters in the model are described below:
\begin{itemize}
\item $S,I,R$ are the normalized susceptible, infected, and recovered
populations, respectively; therefore it follows that $S,I,R\leq1.$
\item $b$ is a positive number representing the birth and death rate of
susceptible and recovered population.
\item $\delta$ is a positive number representing the birth and death rate of
infected population.
\item $\gamma$ is a positive number giving the natural recovery rate of
infected population.
\item $q$ is positive ($q\leq1$), representing the vertical transmission rate
(disease transmission from mother to child before or during birth). It is
assumed that descendants of the susceptible and recovered classes belong to
the susceptible class, and the same holds for the fraction of the newborns of the
infected class not affected by vertical transmission.
\item $p=1-q$ therefore $0\leq p\leq1$.
\item $m^{\prime}$ is positive and is the fraction of vaccinated newborns
from susceptible and recovered mothers, who therefore belong to the recovered
class. $m=1-m^{\prime}\geq0$ is the remaining fraction of newborns, who belong to the
susceptible class.
\item $\beta$ is positive, representing the infection rate, and $\alpha$ is a
positive saturation constant (in the model the incidence rate is given by the
nonlinear function $\frac{\beta SI}{1+\alpha I}$).
\item $\frac{\beta_{2}I}{1+\alpha_{2}I}$ is the treatment function, satisfying
$\lim_{I\rightarrow\infty}\frac{\beta_{2}I}{1+\alpha_{2}I}=\frac{\beta_{2}
}{\alpha_{2}},$ where $\alpha_{2},\beta_{2}>0.$
\end{itemize}
We note that if $\alpha_{2}=0$ the treatment becomes bilinear, a case considered
in \cite{humaruan}, whereas if $\beta_{2}=0$ the treatment is null, which is not of
interest here. Therefore we will assume $\beta_{2},\alpha_{2}>0.$ \par
The paper is organized as follows: in section 2 we compute the equilibrium points and determine the conditions for their existence (as real values) and positivity; in section 3 we analyze the stability of the disease-free equilibrium and the endemic equilibrium points in terms of the value of $\mathcal{R}_{0}$ and the parameters of the treatment function. Section 4 is dedicated to the study of Hopf bifurcation of the endemic equilibrium points, and section 5 discusses all our results and gives some control measures that could be effective to eradicate the disease in each case. \par
Following \cite{humaruan} we define
\begin{equation}
\mathcal{R}_{0}:=\dfrac{\beta m}{\beta_{2}+p\delta+\gamma}.
\end{equation}
When $ \beta_2 =0 $, $\mathcal{R}_{0} $ reduces to
\begin{equation}
\mathcal{R}_{0} ^{*} = \dfrac{ \beta m}{ p \delta + \gamma },
\end{equation}
which is the basic reproduction number of model \eqref{ruanmod} without treatment.
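As a quick numerical illustration (ours, not part of the original analysis), the following Python sketch evaluates $\mathcal{R}_{0}$ and $\mathcal{R}_{0}^{*}$ and integrates system (\ref{ruanmod}); the parameter values are those used in the examples later in the paper, and all helper names are ours.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

# Parameter values taken from the examples later in the paper
alpha, alpha2 = 0.4, 10.0
beta, beta2 = 0.2, 0.1
b, gamma, delta = 0.2, 0.01, 0.01
p, m = 0.02, 0.3

R0 = beta * m / (beta2 + p * delta + gamma)    # with treatment
R0_star = beta * m / (p * delta + gamma)       # without treatment
print("R0 =", round(R0, 4), " R0* =", round(R0_star, 4))

def rhs(t, y):
    S, I = y
    incidence = beta * S * I / (1 + alpha * I)   # saturated incidence
    treatment = beta2 * I / (1 + alpha2 * I)     # saturated treatment
    dS = -incidence - b * S + b * m * (1 - I) + p * delta * I
    dI = incidence - p * delta * I - gamma * I - treatment
    return [dS, dI]

sol = solve_ivp(rhs, (0.0, 400.0), [0.6, 0.3], rtol=1e-8)
print("final (S, I) =", sol.y[:, -1])  # approaches (m, 0) since R0 < 1 here
\end{verbatim}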
\begin{lemma}
Given the initial conditions $S(0)=S_{0}>0,I(0)=I_{0}>0$, then the solution of
(\ref{ruanmod}) satisfies $S(t),I(t)>0\quad\forall t>0$ and $S(t)+I(t)\leq1$.
\label{teo1}
\end{lemma}
\begin{proof}
Take the solution $S(t),I(t)$ satisfying the initial conditions $S(0)=S_{0}
>0,I(0)=I_{0}>0$. Assume that the solution is not always positive, i.e., there
exists a $t_{0}$ such that $S(t_{0})\leq0$ or $I(t_{0})\leq0$. By Bolzano's
theorem there exists a $t_{1}\in(0,t_{0}]$ such that $S(t_{1})=0$ or
$I(t_{1})=0$, which can be written as $S(t_{1})I(t_{1})=0$ for some $t_{1}
\in(0,t_{0}]$. Let
\begin{equation}
t_{2}=\min\{t_{i}: S(t_{i})I(t_{i})=0\}.
\end{equation}
Assume first that $S(t_{2})=0$, then $\frac{dS\left( t_{2}\right) }{dt}>0$
implying that $S$ is increasing at $t=t_{2}.$ Hence $S(t)$ is negative for
values of $t<t_{2}$ near $t_{2},$ a contradiction. Therefore $S(t)>0$ $\forall
t>0$ and we must have $I(t_{2})=0,$ implying $\frac{dI(t_{2})}{dt}=0.$ Note
that if for some $t\geq0$ $I(t)=0,$ then $\frac{dI(t)}{dt}=0.$ Then any
solution with $I(0)=I_{0}=0$ will satisfy $I(t)=0$ $\forall t>0.$ By
uniqueness of solutions this fact implies that if $I(0)=I_{0}>0,$ then $I(t)$
will remain positive for all $t>0.$ Therefore $I(t_{2})=0$ leads to a
contradiction. Hence both $S$ and $I$ are nonnegative for all $t>0.$ Finally,
adding both derivatives of $S(t)$ and $I(t)$ we get:
\begin{equation}
\dfrac{d(S+I)}{dt}=-bS+bm-bmI-\gamma I-\dfrac{\beta_{2}I}{1+\alpha_{2}I}
\end{equation}
Being $S,I\geq0$, if $S+I=1$ then $0\leq S\leq1$, $0\leq I\leq1$. Analyzing
the expression $-bS+bm-bmI$,
\[
-bS+bm-bmI=b(m-mI-S)=b(m-mI-1+I)=b(m-1+I(1-m)).
\]
Note that by the definition of the model parameters, $1-m=m^{\prime}\geq0$.
Knowing that $I\leq1,$ then
\begin{equation}
I(1-m)\leq1-m\Rightarrow I(1-m)+m-1\leq0.
\end{equation}
Therefore $-bS+bm-bmI\leq0$. Hence $\frac{d(S+I)}{dt}\leq0$ and $S+I$ is
non-increasing along the line $S+I=1$, implying that $S+I\leq1$. Note also that
$S+I$ cannot be greater than 1; otherwise, from $R=1-(S+I)$, $R$ would be
negative, which is absurd.
\end{proof}
\section{Existence and positivity of equilibria}
Assume that system (\ref{ruanmod}) has a constant solution $(S_{0},I_{0})$, then:
\begin{align}
-\dfrac{\beta S_{0}I_{0}}{1+\alpha I_{0}}-bS_{0}+bm(1-I_{0})+p\delta I_{0} &
=0\label{eqec1}\\
\dfrac{\beta S_{0}I_{0}}{1+\alpha I_{0}}-p\delta I_{0}-\gamma I_{0}
-\dfrac{\beta_{2}I_{0}}{1+\alpha_{2}I_{0}} & =0 \label{eqec2}
\end{align}
From (\ref{eqec1}) we obtain
\begin{equation}
S_{0}=\dfrac{(1+\alpha I_{0})(bm(1-I_{0})+p\delta I_{0})}{\beta
I_{0}+b(1+\alpha I_{0})}. \label{eqec3}
\end{equation}
And we get from (\ref{eqec2}):
\begin{align}
& I_{0}\left( \dfrac{\beta S_{0}}{1+\alpha I_{0}}-p\delta-\gamma-\dfrac
{\beta_{2}}{1+\alpha_{2}I_{0}}\right) =0\nonumber\\
& \Rightarrow I_{0}=0\quad \text{or}\quad\dfrac{\beta S_{0}}{1+\alpha I_{0}
}-p\delta-\gamma-\dfrac{\beta_{2}}{1+\alpha_{2}I_{0}}=0. \label{eqec4}
\end{align}
If $I_{0}=0$ then $S_{0}=m$, obtaining in that way the disease-free
equilibrium $E=(m,0)$.
\begin{theorem}
System (\ref{ruanmod}) has a positive disease-free equilibrium $E=(m,0)$.
\end{theorem}
In order to obtain positive solutions of system (\ref{ruanmod}), if $I_{0}\neq0$ then:
\begin{align}
& \dfrac{\beta S_{0}}{1+\alpha I_{0}}-p\delta-\gamma-\dfrac{\beta_{2}}
{1+\alpha_{2}I_{0}} =0\nonumber\\
& \Rightarrow S_{0} =\dfrac{1+\alpha I_{0}}{\beta}\left( p\delta
+\gamma+\dfrac{\beta_{2}}{1+\alpha_{2}I_{0}}\right) \label{eqec5}
\end{align}
Equating (\ref{eqec3}) and (\ref{eqec5}) and clearing denominators, we obtain the following quadratic equation:
\begin{equation}
AI_{0}^{2}+BI_{0}+C=0. \label{eqec7}
\end{equation}
or, equivalently,
\begin{equation}
I_{0}^{2}+(B/A)I_{0}+C/A=0, \label{eqec8}
\end{equation}
where the coefficients are given by:
\begin{align}
A & =\alpha_{2}(\beta(\gamma+bm)+\alpha b(p\delta+\gamma))>0\nonumber\\
B & =\beta(\gamma+\beta_{2}+bm(1-\alpha_{2}))+b\alpha(p\delta+\gamma
+\beta_{2})+b\alpha_{2}(p\delta+\gamma)\nonumber\\
& =\beta(\gamma+\beta_{2}+bm-bm\alpha_{2})+b\alpha(1-\mathcal{R}_{0}
)(p\delta+\gamma+\beta_{2})+\beta mb\alpha+b\alpha_{2}(p\delta+\gamma
)\nonumber\\
C & =b(p\delta+\gamma+\beta_{2}-\beta m)=b(p\delta+\gamma+\beta
_{2})(1-\mathcal{R}_{0}).\label{abc}
\end{align}
Its roots are:
\begin{align}
I_{1} & =\dfrac{-B-\sqrt{B^{2}-4AC}}{2A}\nonumber\\
I_{2} & =\dfrac{-B+\sqrt{B^{2}-4AC}}{2A}. \label{I0}
\end{align}
Using these values in (\ref{eqec5}) we obtain the respective values
\begin{align}
S_{1} & =\dfrac{1+\alpha I_{1}}{\beta}\left( p\delta+\gamma+\dfrac
{\beta_{2}}{1+\alpha_{2}I_{1}}\right) \nonumber\\
S_{2} & =\dfrac{1+\alpha I_{2}}{\beta}\left( p\delta+\gamma+\dfrac
{\beta_{2}}{1+\alpha_{2}I_{2}}\right) . \label{eqec14}
\end{align}
Then our candidates for endemic equilibria are $E_{1}=(S_{1},I_{1})$ and $E_{2}=(S_{2},I_{2})$.
Note that $C=0$ if and only if $\mathcal{R}_{0}=1$, $C>0$ if and only if
$\mathcal{R}_{0}<1,$ and $C<0$ if and only if $\mathcal{R}_{0}>1$ .
\begin{figure}
\caption{Location of the sets $A_{1}$, $A_{2}$ and $A_{3}$.}
\label{fig1}
\end{figure}
For $ \mathcal{R}_{0} ^{*}>1 $ we define the following sets:
\begin{align}
A_{1} & =\{(\beta_{2},\alpha_{2}):\ \beta_{2}>0,\ 0<\alpha_{2}\leq \alpha_2^{0}\},\nonumber\\
A_{2} & =\{(\beta_{2},\alpha_{2}):\ \beta_{2}\geq g(\alpha_{2}),\ \alpha_{2}> \alpha_2^{0}>0\},\nonumber\\
A_{3} & =\{(\beta_{2},\alpha_{2}):\ 0<\beta_{2}<g(\alpha_{2}),\ \alpha_{2}> \alpha_2^{0}>0\}.
\end{align}
where
$$ \alpha_2^{0} = \frac{-\beta(mb\alpha+\gamma+bm)}{b(p\delta+\gamma-\beta m)}
\quad\text{and}\quad
g(\alpha_{2})=-\frac{1}{\beta}(b\alpha_{2}(p\delta+\gamma-\beta m)+\beta
(\gamma+bm+mb\alpha)). $$
Define:
\begin{align}
P_{1} & = 1 + \dfrac{1}{b \alpha( p \delta+ \gamma+ \beta_{2})} [ \beta(
\gamma+ \beta_{2} + bm - bm \alpha_{2} ) + \beta m b \alpha+ b \alpha_{2} ( p
\delta+ \gamma) ]\nonumber\\
R_{0}^{+} & = 1 - \dfrac{1}{b \alpha^{2} ( p \delta+ \gamma+ \beta_{2}
)}\nonumber\\
& \left[ \sqrt{- \beta\alpha( bm \alpha+ \beta_{2} + \gamma+ bm - \alpha
_{2}bm ) + \beta\alpha_{2} ( \gamma+ bm ) } - \sqrt{ \alpha_{2} ( \beta\gamma+
\beta b m + \alpha b p \delta+ \alpha b \gamma) } \right] ^{2} .
\end{align}
Figure \ref{fig1} shows the location of these sets.
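The membership test for these sets is mechanical; the sketch below (ours, valid only under the stated assumption $\mathcal{R}_{0}^{*}>1$) classifies a pair $(\beta_{2},\alpha_{2})$.
\begin{verbatim}
def region(alpha, alpha2, beta, beta2, b, gamma, delta, p, m):
    """Classify (beta2, alpha2) into A1, A2 or A3; requires R0* > 1."""
    assert beta * m / (p * delta + gamma) > 1, "defined only for R0* > 1"
    alpha2_0 = (-beta * (m * b * alpha + gamma + b * m)
                / (b * (p * delta + gamma - beta * m)))
    g = -(b * alpha2 * (p * delta + gamma - beta * m)
          + beta * (gamma + b * m + m * b * alpha)) / beta
    if alpha2 <= alpha2_0:
        return "A1"
    return "A2" if beta2 >= g else "A3"

# Parameters of an example below: beta2 = 0.0498 < g(alpha2), hence A3
print(region(0.4, 10.0, 0.2, 0.0498, 0.2, 0.01, 0.01, 0.02, 0.3))
\end{verbatim}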
\begin{theorem}
If $\mathcal{R}_{0}>1$ the system (\ref{ruanmod}) has a unique (positive)
endemic equilibrium $E_{2}$. \label{teo10}
\end{theorem}
\begin{proof}
If $\mathcal{R}_{0}>1$ then $C<0$. Since $A>0$, the product of the roots of the
quadratic equation (\ref{eqec7}) is $C/A<0$, so it has two real roots of opposite
sign, $I_{1}<0<I_{2}$. Hence there exists a unique
positive endemic equilibrium $E_{2}=(S_{2},I_{2})$.
\end{proof} | 1,990 | 38,693 | en |
\begin{theorem}
Let $0<\mathcal{R}_{0}\leq1$. For system (\ref{ruanmod}), if $ \mathcal{R}_{0} ^{*} \leq 1 $ then there are no positive endemic equilibria. Otherwise, if
$ \mathcal{R}_{0} ^{*}>1 $ the following propositions hold:
\begin{enumerate}
\item If $\mathcal{R}_{0}=1$ and $(\beta_{2},\alpha_{2})\in A_{3}$ the system
(\ref{ruanmod}) has a unique positive endemic equilibrium $E_2=(S_2
,I_2)$, where
\[
I_2=-B/A,\quad S_2=\dfrac{1+\alpha I_2}{\beta}\left( p\delta
+\gamma+\dfrac{\beta_{2}}{1+\alpha_{2}I_2}\right) .
\]
\item If $\max\{P_{1},R_{0}^{+}\}<\mathcal{R}_{0}<1$ and $(\beta_{2}
,\alpha_{2})\in A_{3},$ the system (\ref{ruanmod}) has a pair of positive
endemic equilibria $E_{1},E_{2}$.
\item If $1>\mathcal{R}_{0}=R_{0}^{+}>P_{1}$ and $(\beta_{2},\alpha_{2})\in
A_{3},$ the system (\ref{ruanmod}) has a unique positive endemic equilibrium
$E_{1}=E_{2}$.
\item If $1>\mathcal{R}_{0}=P_{1}$ and $(\beta_{2},\alpha_{2})\in A_{3},$ the
system (\ref{ruanmod}) has no positive endemic equilibria.
\item If $0<\mathcal{R}_{0}\leq1$ and $(\beta_{2},\alpha_{2})\in A_{1}\cup
A_{2},$ the system (\ref{ruanmod}) has no positive endemic equilibria.
\item If $(\beta_{2},\alpha_{2})\in A_{3}$ and $0<\mathcal{R}_{0}<\max
(R_{0}^{+},P_{1})<1,$ then there are no positive endemic equilibria.
\end{enumerate}
\label{teo4}
\end{theorem}
\begin{proof}
If $0<\mathcal{R}_{0}\leq1,$ then $C\geq0$, so the roots of the
equation $AI^{2}+BI+C=0$ cannot be real with different signs: they are real with
equal signs, complex conjugate, or at least one of them is zero. If endemic equilibria exist
and are positive, it is necessary that $B<0$. After some calculations we can see that:
\begin{align}
& B<0 \Leftrightarrow\mathcal{R}_{0}>1+\dfrac{\beta(\gamma+\beta_{2}
+bm-bm\alpha_{2})+\beta mb\alpha+b\alpha_{2}(p\delta+\gamma)}{b\alpha
(p\delta+\gamma+\beta_{2})}:=P_{1}.
\end{align}
Since $\mathcal{R}_{0}\leq1$, the condition $B<0$ forces $P_{1}<1$, hence the
expression $\beta(\gamma+\beta_{2}+bm-bm\alpha_{2})+\beta mb\alpha+b\alpha
_{2}(p\delta+\gamma)$ must be negative; this happens if and only if
\begin{gather}
\beta_{2}<-\frac{1}{\beta}(b\alpha_{2}(p\delta+\gamma-\beta
m)+\beta(\gamma+bm+mb\alpha))=g(\alpha_{2}).
\end{gather}
If $ \mathcal{R}_{0} ^{*} \leq 1 $ then $-\frac{1}{\beta}(b\alpha_{2}
(p\delta+\gamma-\beta m)+\beta(\gamma+bm+mb\alpha))<0$ and it is not possible
to find a value of $\beta_{2}$ fulfilling the previous inequality, therefore
there are no positive endemic equilibria. \par
Now, if $ \mathcal{R}_{0} ^{*}>1 $ we have the following.
\begin{enumerate}
\item If $\mathcal{R}_{0}=1$ then $C=0$ and the equation (\ref{eqec7}) is
transformed into
\begin{equation}
AI_{0}^{2}+BI_{0}=0,
\end{equation}
with $A>0$. Its roots are $I_{1}=0$ and $I_2=-B/A$, so there
exists a unique endemic equilibrium, positive if and only if $B<0$, given by $E_2=(S_2,I_2)$, where
\begin{align}
I_2 & =-B/A\nonumber\\
S_2 & =\dfrac{1+\alpha I_2}{\beta}\left( p\delta+\gamma+\dfrac{\beta
_{2}}{1+\alpha_{2}I_2}\right) . \label{eqec13}
\end{align}
Note that if $\alpha_{2}> \alpha_2^{0} \quad
\text{and} \quad \mathcal{R}_{0} ^{*}>1 $ then $g(\alpha_{2})>0$.
Hence $A_{3}$ is nonempty and its elements satisfy $B<0$, therefore if
$(\beta_{2},\alpha_{2})\in A_{3}$ there exists a unique positive endemic
equilibrium $E_2$.
\item If $\mathcal{R}_{0}<1$ then $C>0$ and the roots of the quadratic equation
for $I_{0}$ must be real of
equal sign or complex conjugate. By the previous part we know that if
$(\beta_{2},\alpha_{2})\in A_{3}$ then $P_{1}<1$, moreover if $\mathcal{R}
_{0}>P_{1}$ then $B<0$ and therefore both roots must have positive real part.
Finally, to ensure that both equilibria are real, we demand that $\Delta\geq0$. Computing $\Delta$:
\begin{align}
\Delta & =B^{2}-4AC\nonumber\\
& =A_{2}\mathcal{R}_{0}^{2}+B_{2}\mathcal{R}_{0}+C_{2}=\Delta(\mathcal{R}
_{0}),
\end{align}
where:
\begin{align}
A_2 &= \alpha^{2}b^{2}\left( p\delta+\gamma+\beta_{2}\right)^{2},\\
B_2 &= -2\left[\beta\left( \gamma+\beta_{2}+bm\left( 1-\alpha_{2}\right)\right)
+\alpha b\left( p\delta+\gamma+\beta_{2}\right) +\beta mb\alpha
+b\alpha_{2}\left( p\delta+\gamma\right)\right]\alpha b\left( p\delta+\gamma+\beta_{2}\right)\nonumber\\
&\quad +4\alpha_{2}\left( \beta\left( \gamma+bm\right) +\alpha b\left( p\delta+\gamma\right)\right)
b\left( p\delta+\gamma+\beta_{2}\right),\\
C_2 &= \left( \beta\left( \gamma+\beta_{2}+bm\left( 1-\alpha_{2}\right)\right)
+\alpha b\left( p\delta+\gamma+\beta_{2}\right) +\beta mb\alpha
+b\alpha_{2}\left( p\delta+\gamma\right)\right)^{2}\nonumber\\
&\quad -4\alpha_{2}\left( \beta\left( \gamma+bm\right) +\alpha b\left( p\delta+\gamma\right)\right)
b\left( p\delta+\gamma+\beta_{2}\right).
\end{align}
The previous expression is a quadratic function of $\mathcal{R}_{0}$. To
establish the region where $\Delta\geq0$, it is necessary to know how the roots
of $\Delta(\mathcal{R}_{0})$ behave. The discriminant of the quadratic
function $\Delta(\mathcal{R}_{0})$ is
\begin{align}
\Delta_{2} & =-16\,\mathit{\alpha_{2}}\,{b}^{2}\beta\,\left( p\delta
+\gamma+\mathit{\beta_{2}}\right) ^{2}\left( \beta\,\gamma+\beta
\,bm+\alpha\,bp\delta+\alpha\,b\gamma\right) \nonumber\\
& \left( \alpha(\alpha bm+\beta_{2}+\gamma+bm)-\alpha_{2}(\gamma+bm+\alpha
bm)\right) . \label{ec15}
\end{align}
If we assume that $\Delta_{2}<0,$ then $\alpha_{2}<\frac{\alpha(bm\alpha
+\beta_{2}+\gamma+bm)}{\gamma+bm+\alpha bm}$ and in this case we have that:
\begin{align}
\gamma+\beta_{2}+bm-bm\alpha_{2}+bm\alpha>\frac{2\beta_{2}\alpha
bm+(\gamma+bm)(\gamma+\beta_{2}+bm+bm\alpha)}{\gamma+bm+\alpha bm}>0.
\end{align}
So we get that $P_{1}>1>\mathcal{R}_{0}$, which is a contradiction with
the assumption in this part, therefore $\Delta_{2}\geq0$ and in consequence
$\Delta(\mathcal{R}_{0})$ has two real roots,
\begin{align}
R_{0}^{-} & =\dfrac{-B_{2}-\sqrt{\Delta_{2}}}{2A_{2}}\nonumber\\
& =1-\dfrac{1}{b\alpha^{2}(p\delta+\gamma+\beta_{2})}[\sqrt{-\beta
(\alpha(bm\alpha+\beta_{2}+\gamma+bm-bm\alpha_{2})-\alpha_{2}(\gamma
+bm))}\nonumber\\
& +\sqrt{\alpha_{2}(\beta(\gamma+bm)+\alpha b(p\delta+\gamma))}
]^{2},\nonumber\\
R_{0}^{+} & =\dfrac{-B_{2}+\sqrt{\Delta_{2}}}{2A_{2}}\nonumber\\
& =1-\dfrac{1}{b\alpha^{2}(p\delta+\gamma+\beta_{2})}[\sqrt{-\beta
(\alpha(bm\alpha+\beta_{2}+\gamma+bm-bm\alpha_{2})-\alpha_{2}(\gamma
+bm))}\nonumber\\
& -\sqrt{\alpha_{2}(\beta(\gamma+bm)+\alpha b(p\delta+\gamma))}]^{2}.
\end{align}
Note that, due to the nonnegativity of $\Delta_{2}$ and (\ref{ec15}),
the quantity
$$-\beta(\alpha(bm\alpha+\beta_{2}+\gamma+bm-bm\alpha
_{2})-\alpha_{2}(\gamma+bm))$$
is positive, so the square roots above are well
defined. Analyzing the derivative of
$\Delta(\mathcal{R}_{0})$ we have that $$\Delta^{\prime}(R_{0}^{+}
)=\sqrt{\Delta_{2}}>0 \quad \text{and} \quad \Delta^{\prime}(R_{0}^{-})=-\sqrt{\Delta_{2}}<0,$$
moreover $R_{0}^{-}<R_{0}^{+}$ making $\Delta$ positive for $\mathcal{R}
_{0}>R_{0}^{+}$ or $\mathcal{R}_{0}<R_{0}^{-}$. Nevertheless $$R_{0}
^{-}=1+\frac{1}{b\alpha(p\delta+\gamma+\beta_{2})}(\beta(\gamma+\beta
_{2}+bm-bm\alpha_{2}+bm \alpha))-\epsilon,$$ while $$P_{1}=1+\frac
{1}{b\alpha(p\delta+\gamma+\beta_{2})}(\beta(\gamma+\beta_{2}+bm-bm\alpha
_{2}+bm \alpha))+\epsilon_{2},$$
with $ \epsilon, \epsilon_{2}>0$, making $R_{0}^{-}<P_{1}<R_{0}^{+}$.
Therefore for $\mathcal{R}_{0}>\max(P_{1},R_{0}^{+})$ there exist
two positive endemic equilibria
$E_{1},E_{2}$, proving this part.
\item If $(\beta_{2},\alpha_{2})\in A_{3}$ then $P_{1}<1$. If $1>\mathcal{R}
_{0}>P_{1},$ then we have that $B<0$ and $C>0$, therefore we have a pair of
roots of the quadratic for $I$ with positive real part. In the previous part
it was proven that for $P_{1}<1$ the discriminant $\Delta_{2}\geq0$ and both
roots $R_{0}^{+},R_{0}^{-}$ are real and less than one. If $\mathcal{R}
_{0}=R_{0}^{+}$ then $\Delta=0$ and both roots are fused in one $I_{1}
=-B/2A=I_{2}$. Therefore we have a unique positive endemic equilibrium
$E_{1}=E_{2}$.
\item If $(\beta_{2},\alpha_{2})\in A_{3}$ then $P_{1}<1$. If $\mathcal{R}
_{0}=P_{1}<1$ then $C>0,$ implying that the roots are complex conjugate or
real of the same sign. Being $\mathcal{R}_{0}=P_{1}$ then $B=0,$ implying that
both roots have real part equal to zero, therefore there are no positive
endemic equilibria.
\item If $0<\mathcal{R}_{0}\leq1$ and $(\beta_{2},\alpha_{2})\in A_{1}\cup
A_{2}$ then $P_{1}\geq1$, therefore $\mathcal{R}_{0}\leq P_{1}$ , $B\geq0,$
and $C\geq0$. Hence there are two roots with real part zero or negative, which
are not positive equilibria.
\item If $(\beta_{2},\alpha_{2})\in A_{3}$ we have that $P_{1}<1$ and the
roots of the discriminant $R_{0}^{+},R_{0}^{-}$ are real, in addition that
$R_{0}^{-}<P_{1}$ and $R_{0}^{+}<1$ by definition of this case. If
$0<\mathcal{R}_{0}<\max\{R_{0}^{+},P_{1}\}<1,$ then $C>0$ and the roots
$I_{1},I_{2}$ are complex conjugate or real with the same sign. If
$\mathcal{R}_{0}<P_{1}$ then $B>0$, and the roots have negative real part, so
there are no positive endemic equilibria. If $0<\mathcal{R}_{0}<R_{0}^{+}$
and $\mathcal{R}_{0}>R_{0}^{-}$, then $\Delta<0$ and the roots are complex
conjugate, so there are no real endemic equilibria. If $0<\mathcal{R}
_{0}<R_{0}^{+}$ and $\mathcal{R}_{0}\leq R_{0}^{-}<P_{1},$ then this reduces to
the first case, in which there are no positive endemic equilibria.
\end{enumerate}
\begin{figure}
\caption{Graph of $\mathcal{R}_{0}$ versus $I$ (forward bifurcation).}
\label{fig12}
\end{figure}
\begin{figure}
\caption{Graph of $\mathcal{R}_{0}$ versus $I$ (backward bifurcation).}
\label{fig2}
\end{figure}
Theorem \ref{teo4} gives us a complete scenario of the existence of endemic equilibria. When $\mathcal{R}_{0}^{*}\leq1$ we have that $\mathcal{R}_{0}<1$; this follows from the fact that $\mathcal{R}_{0}<\mathcal{R}_{0}^{*}$ whenever $\beta_2>0$. Then system (\ref{ruanmod}) has only a disease-free equilibrium and no endemic equilibria. \par
Otherwise, suppose $\mathcal{R}_{0}^{*}>1$. If $(\beta_2,\alpha_2)\in A_1\cup A_2$ then there are no endemic equilibria for $0<\mathcal{R}_{0}<1$ and a unique endemic equilibrium $E_2$ when $\mathcal{R}_{0}>1$, so there exists a forward bifurcation at $\mathcal{R}_{0}=1$ from the disease-free equilibrium to $E_2$ (see figure \ref{fig12}). If $(\beta_2,\alpha_2)\in A_3$ there exist two positive endemic equilibria whenever $\max\{P_{1},R_{0}^{+}\}<\mathcal{R}_{0}<1$ ($P_1$ and $R_{0}^{+}$ depend on $\beta_2$), and we can observe the backward bifurcation of the equilibrium $E$ into two endemic equilibria (see figure \ref{fig2}).
As an immediate consequence of the previous theorem we have that if
$\mathcal{R}_{0}>1$ there exists a unique positive endemic equilibrium, while
if $\mathcal{R}_{0}<1$ and the conditions of the second part are fulfilled,
there exist two positive endemic equilibria. Hence we have the following corollary:
\end{proof}
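To visualize the forward and backward branches just described, one can sweep $\mathcal{R}_{0}$ (here through $\beta$) and record the positive roots of (\ref{eqec7}); the sketch below is ours and merely illustrates the mechanism.
\begin{verbatim}
import numpy as np

alpha, alpha2 = 0.4, 10.0
beta2, b = 0.0498, 0.2
gamma, delta, p, m = 0.01, 0.01, 0.02, 0.3

for beta in np.linspace(0.05, 0.40, 8):
    A = alpha2 * (beta * (gamma + b * m) + alpha * b * (p * delta + gamma))
    B = (beta * (gamma + beta2 + b * m * (1 - alpha2))
         + b * alpha * (p * delta + gamma + beta2)
         + b * alpha2 * (p * delta + gamma))
    C = b * (p * delta + gamma + beta2 - beta * m)
    R0 = beta * m / (beta2 + p * delta + gamma)
    disc = B * B - 4 * A * C
    if disc < 0:
        roots = []
    else:
        roots = [r for r in np.roots([A, B, C]) if r > 0]
    print(f"R0 = {R0:.3f}  positive I roots: {np.round(roots, 4)}")
\end{verbatim}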
\begin{corollary}
If $\mathcal{R}_{0}=1$, $ \mathcal{R}_{0} ^{*}>1 $ and $(\beta_{2},\alpha_{2})\in A_{3}$, system
(\ref{ruanmod}) has a backward bifurcation of the disease-free equilibrium $E$.
\end{corollary}
\begin{proof}
First we note that if $(\beta_{2},\alpha_{2})\in A_{3}$ then $R_{0}^{+}$ is
real and less than one and $P_{1}<1$; therefore we can find a neighborhood of
points in the interval $(\max\{R_{0}^{+},P_{1}\},1)$. By case 2 of
theorem \ref{teo4}, if $\mathcal{R}_{0}$ lies in this neighborhood there exist two
positive endemic equilibria $E_{1},E_{2}$; for $\mathcal{R}_{0}=1$ there
exists a unique positive endemic equilibrium $E_2$, while the other endemic
equilibrium becomes zero. Finally, for $\mathcal{R}_{0}>1$ there exists a
unique positive endemic equilibrium, as the zero ``endemic'' equilibrium becomes
negative.
\end{proof}
\section{Characteristic Equation and Stability}
The characteristic equation of the linearization of system (\ref{ruanmod}) at
the equilibrium $(S_{0},I_{0})$ is given by:
\begin{equation}
\det(DF-\lambda I)=0,
\end{equation}
where
\begin{equation}
DF=\left(
\begin{matrix}
\frac{\partial f_{1}}{\partial S} & \frac{\partial f_{1}}{\partial I}\\
\frac{\partial f_{2}}{\partial S} & \frac{\partial f_{2}}{\partial I}
\end{matrix}
\right) .
\end{equation}
The matrix is evaluated at the equilibrium $(S_{0},I_{0})$. The functions
$f_{1},f_{2}$ are the following:
\begin{align}
f_{1} & =-\dfrac{\beta SI}{1+\alpha I}-bS+bm(1-I)+p\delta I\\
f_{2} & =\dfrac{\beta SI}{1+\alpha I}-p\delta I-\gamma I-\dfrac{\beta_{2}
I}{1+\alpha_{2}I}.
\end{align}
Computing the matrix $DF$ we obtain:
\begin{equation}
DF (S,I) = \left(
\begin{matrix}
\dfrac{-\beta I}{1 + \alpha I } - b & \dfrac{-\beta S }{ (1+ \alpha I)^{2} }
-bm + p \delta\\
\dfrac{\beta I}{1 + \alpha I } & \dfrac{\beta S}{ (1+ \alpha I)^{2} } - p
\delta- \gamma- \dfrac{ \beta_{2} }{(1+ \alpha_{2} I)^{2}}
\end{matrix}
\right).
\end{equation}
\subsection{Stability of the \textit{disease-free} equilibrium}
For the disease-free equilibrium $E=(m,0)$ the Jacobian matrix is:
\[
DF(m,0)=\left(
\begin{matrix}
-b & -\beta m-bm+p\delta\\
0 & \beta m-p\delta-\gamma-\beta_{2}
\end{matrix}
\right).
\]
\begin{theorem}
If $\mathcal{R}_{0}<1$ then the equilibrium $E=(m,0)$ of model (\ref{ruanmod})
is locally asymptotically stable, while if $\mathcal{R}_{0}>1$ then it is
unstable. \label{teo5}
\end{theorem}
\begin{proof}
The characteristic equation for the equilibrium $E$ is given by
\begin{align}
P(\lambda) & =\det(DF(m,0)-\lambda I_{2x2})\nonumber\\
& =\det\left(
\begin{matrix}
-b-\lambda & -\beta m-bm+p\delta\\
0 & \beta m-p\delta-\gamma-\beta_{2}-\lambda
\end{matrix}
\right) \nonumber\\
& =(-b-\lambda)(\beta m-p\delta-\gamma-\beta_{2}-\lambda). \label{carec1}
\end{align}
The equation (\ref{carec1}) has two real roots $\lambda_{1}=-b$ and
$\lambda_{2}=\beta m-p\delta-\gamma-\beta_{2}$. By Hartman-Grobman's theorem,
if the roots of (\ref{carec1}) have non-zero real part then the solutions of
system (\ref{ruanmod}) and its linearization are qualitatively equivalent. If
both roots have negative real part then the equilibrium $E$ is locally
asymptotically stable, whilst if any of the roots has positive real part the
equilibrium is unstable. Clearly $\lambda_{1}<0$, and $\lambda_{2}<0$ if and
only if
\[
\beta m-p\delta-\gamma<\beta_{2},
\]
that is, if and only if $\mathcal{R}_{0}<1$.
\end{proof}
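A quick numerical check of this computation (our sketch): assemble the Jacobian at $E=(m,0)$ and inspect its eigenvalues.
\begin{verbatim}
import numpy as np

beta, beta2 = 0.2, 0.1
b, gamma, delta = 0.2, 0.01, 0.01
p, m = 0.02, 0.3

J = np.array([[-b, -beta * m - b * m + p * delta],
              [0.0, beta * m - p * delta - gamma - beta2]])
print(np.linalg.eigvals(J))  # both eigenvalues negative: here R0 < 1
\end{verbatim}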
According to the previous theorem and theorem \ref{teo4}, we obtain the
following result for the global stability of equilibrium $E$:
\begin{theorem}
If $0<\mathcal{R}_{0}<1$ and one of the following conditions holds:
\begin{itemize}
\item $ \mathcal{R}_{0} ^{*} \leq 1 $.
\item $\mathcal{R}_{0}=P_{1}$ and $(\beta_{2},\alpha_{2})\in A_{3}$.
\item $(\beta_{2}, \alpha_{2}) \in A_{1} \cup A_{2} $.
\item $(\beta_{2},\alpha_{2})\in A_{3}$ and $0<\mathcal{R}_{0}<\max\{R_{0}
^{+},P_{1}\}$.
\end{itemize}
Then the equilibrium $E$ of system (\ref{ruanmod}) is globally asymptotically stable.
\label{teo6}
\end{theorem}
\begin{proof}
If $0<\mathcal{R}_{0}<1$ then by theorem \ref{teo5} the equilibrium $E$ is
locally asymptotically stable. If any of the given conditions holds then by
theorem \ref{teo4} there are no endemic equilibria in the region
$D=\{S(t),I(t)\geq0\quad\forall t>0,\quad S(t)+I(t)\leq1\}$, which was
proven to be positively invariant in lemma \ref{teo1}. By \cite{perko} (page
245) any solution of (\ref{ruanmod}) starting in $D$ must approach either an
equilibrium or a closed orbit in $D$. By \cite{kelley} (theorem 3.41) if the
solution path approaches a closed orbit, then this closed orbit must enclose
an equilibrium. Nevertheless, the only equilibrium existing in $D$ is $E$ and
it is located in the boundary of $D$, therefore there is no closed orbit
enclosing it, totally contained in $D$. Hence any solution of system
(\ref{ruanmod}) with initial conditions in $D$ must approach the point $E$ as
$t$ tends to infinity. \begin{figure}
\caption{Global stability of
equilibrium $E$.}
\label{fig3}
\end{figure}
\end{proof}
\begin{example}
Take the following values for the parameters: $\alpha=0.4,\alpha_{2}
=10,\beta=0.2,b=0.2,\gamma=0.01,\delta=0.01,p=0.02,m=0.3,\beta_{2}=0.1$.
Equilibrium $E=(0.3,0)$, $\mathcal{R}_{0}=0.5445<1$. By theorem \ref{teo5}, $E$
is locally asymptotically stable, $ \alpha_2^{0} =7.42<\alpha_{2}$ and $g(\alpha_{2})=-0.1864<\beta_{2}$,
therefore $(\beta_{2},\alpha_{2})\in A_{2}$. By theorem \ref{teo4} there are
no positive endemic equilibria. Finally by theorem \ref{teo6} we have that $E$
is globally stable. See figure \ref{fig3}.
\end{example}
\begin{theorem}
If $\mathcal{R}_{0}=1$ and $\beta_{2}\neq g(\alpha_{2})$ then equilibrium $E$
is a saddle point. Moreover, if $(\beta_{2},\alpha_{2})\in A_{1}\cup A_{2}$
the region $D$ is contained in the stable manifold of $E$. \label{teo7}
\end{theorem}
\begin{proof}
If $\mathcal{R}_{0}=1$ one of the eigenvalues of the Jacobian matrix of the
system is zero, hence we cannot apply Hartman-Grobman's theorem. In order to
establish the stability of equilibrium $E$ we apply center manifold theory.
Making the change of variables $\hat{S}=S-m$, $\hat{I}=I$, we obtain the equivalent system
\begin{align}
\dfrac{d\hat{S}}{dt} & =-\dfrac{\beta(\hat{S}+m) \hat{I} }{1+\alpha \hat{I} }-b\hat
{S}-bm \hat{I} +p\delta \hat{I} \nonumber\\
\dfrac{d \hat{I} }{dt} & =\dfrac{\beta(\hat{S}+m) \hat{I} }{1+\alpha \hat{I} }-p\delta \hat{I} -\gamma
\hat{I} -\dfrac{\beta_{2} \hat{I} }{1+\alpha_{2} \hat{I} }. \label{carac6}
\end{align}
Because $\hat{I}=I$ we drop the hat and simply write $I$. This new system has an equilibrium at $\hat{E}=(0,0)$, and its Jacobian matrix
at that point is
\begin{equation}
DF(m,0)=\left(
\begin{matrix}
-b & - \beta m - bm + p \delta\\
0 & 0
\end{matrix}
\right) .\label{eqst1}
\end{equation}
Using the change of variables $\hat{S}=u-{\frac{\left( \gamma+\beta_{2}+bm\right) v}{b}}$, $I=v$
and $\beta m=p\delta+\gamma+\beta_{2}$, we obtain the equivalent system (see Appendix A):
\begin{align}
\frac{dv}{dt} & =0u+f(v,u)\nonumber\\
\frac{du}{dt} & =-bu+g(v,u), \label{eqst2}
\end{align}
where $f$ and $g$ are defined in Appendix A. \par
By \cite{carr}, system \eqref{eqst2} has a center manifold of the form
$u=h(v)$, and the flow on the
center manifold (and therefore in the system) is given by the equation
\[
v^{\prime}=f(v,h(v))\sim f(v,\phi(v)),
\]
where $h(v) = a_0 v^{2} + a_1 v^{3} + O(v^{4}) $, and $a_i$'s are given in Appendix A. Expanding the Taylor series of $f$ we obtain the flow equation
\begin{align}
v^{\prime} & =-{\frac{{b}^{3}\beta\,m+{b}^{2}{\beta}^{2}m+{b}^{3}
\gamma\,\mathit{\alpha_{2}}-{b}^{2}\beta\,p\delta+{b}^{3}\alpha\,\beta
\,m+{b}^{3}p\delta\,\mathit{\alpha_{2}}-{b}^{3}\beta\,m\mathit{\alpha_{2}}
}{{b}^{3}}}v^{2}+O(v^{3})\nonumber\\
& =Hv^{2}+O(v^{3}).
\end{align}
Therefore the dynamics of solutions near the equilibrium $\hat{E}=(0,0)$
is given by the quadratic term, whenever this term is not zero. We note that
$H=0$ if and only if
\begin{equation}
\alpha_{2}=\frac{-\beta(bm+\beta m-p\delta+b\alpha m)}{b(p\delta+\gamma-\beta
m)}.
\end{equation}
Substituting again $\mathcal{R}_{0}=1$, expressed as $\beta m=p\delta
+\gamma+\beta_{2}$, we obtain $H=0$ if and only if $\beta_{2}=g(\alpha_{2})$. \par
If $(\beta_{2},\alpha_{2})\in A_{3}$ then $H>0$ and $v^{\prime}>0$ for $v\neq0$.
If $(\beta_{2},\alpha_{2})\in A_{1}\cup A_{2}$ then $H<0$ and $v^{\prime}<0$ for
$v\neq0$. In both cases $\hat{E}$ is a saddle point. Moreover, if $(\beta
_{2},\alpha_{2})\in A_{1}\cup A_{2}$ then $H<0$ and $v^{\prime}<0$ for $v>0$.
Recalling $v(t)=I(t)$, under this assumption $I^{\prime}(t)<0$ for
$I>0$, so $I(t)\rightarrow0^{+}$, while, since $v_{1}=(1,0)$ is the stable
direction of the point $\hat{E}$, we also have $\hat{S}(t)\rightarrow0$; therefore the solutions in
the region $D$ approach the equilibrium $E$ as $t\rightarrow\infty$.
\begin{figure}
\caption{Phase plane of the system for $\mathcal{R}_{0}=1$.}
\label{fig9}
\end{figure}
\end{proof}
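The sign test on $H$ is easy to reproduce numerically; the sketch below (ours) evaluates $H$ for the parameter values of the two examples that follow, assuming as above that $\mathcal{R}_{0}=1$.
\begin{verbatim}
def H(alpha, alpha2, beta, b, gamma, delta, p, m):
    """Quadratic coefficient of the center-manifold flow (valid at R0 = 1)."""
    num = (b**3 * beta * m + b**2 * beta**2 * m + b**3 * gamma * alpha2
           - b**2 * beta * p * delta + b**3 * alpha * beta * m
           + b**3 * p * delta * alpha2 - b**3 * beta * m * alpha2)
    return -num / b**3

print(H(0.4, 10.0, 0.2, 0.2, 0.01, 0.01, 0.02, 0.3))  # > 0 : set A3
print(H(0.4, 2.0, 0.2, 0.2, 0.01, 0.01, 0.02, 0.3))   # < 0 : set A2
\end{verbatim}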
\begin{example}
Take the following values for the parameters: $\beta=0.2,\alpha=0.4,\delta
=0.01,\gamma=0.01,\alpha_{2}=10,m=0.3,p=0.02,b=0.2, \beta_2 = 0.0498$. In this case
$\mathcal{R}_{0}=1$, $ \alpha_2^{0} =1.8876$
and $g(\alpha_{2})=0.4040$, hence $(\beta_{2},\alpha_{2})\in A_{3}$.
By the first case of theorem \ref{teo4} the system has a unique endemic
equilibrium at $S_2=0.11210$, $I_2=0.4781$. By theorem \ref{teo7} the
equilibrium $E$ is a saddle point; see figure \ref{fig9}.
\end{example}
\begin{figure}
\caption{Phase plane for $\mathcal{R}_{0}=1$ with $(\beta_{2},\alpha_{2})\in A_{2}$.}
\label{fig10}
\end{figure}
\begin{example}
If we take the same values as in the previous example except $\alpha
_{2}=2$, then
$g(\alpha_{2})=0.0056<\beta_{2}$, hence $(\beta_{2},\alpha_{2})\in A_{2}$. By
theorem \ref{teo4} the system has no endemic equilibria, and by theorem
\ref{teo7} the point $E$ is a saddle point. Moreover, the region $D$ is
totally contained in the stable manifold, see figure \ref{fig10}.
\end{example}
\subsection{Stability of endemic equilibria}
The general form of the Jacobian matrix is
\begin{equation}
DF=\left(
\begin{matrix}
-\dfrac{\beta I}{1+\alpha I}-b & & -\dfrac{\beta S}{(1+\alpha I)^{2}
}-bm+p\delta\\
\dfrac{\beta I}{1+\alpha I} & & \dfrac{\beta S}{(1+\alpha I)^{2}}
-p\delta-\gamma-\dfrac{\beta_{2}}{(1+\alpha_{2}I)^{2}}
\end{matrix}
\right) .
\end{equation}
Therefore the characteristic equation for an endemic equilibrium is
\begin{align}
P(\lambda) & =\left( -\dfrac{\beta I}{1+\alpha I}-b-\lambda\right) \left(
\dfrac{\beta S}{(1+\alpha I)^{2}}-p\delta-\gamma-\dfrac{\beta_{2}}
{(1+\alpha_{2}I)^{2}}-\lambda\right) \nonumber\\
& -\left( \dfrac{\beta I}{1+\alpha I}\right) \left( -\dfrac{\beta
S}{(1+\alpha I)^{2}}-bm+p\delta\right).
\end{align}
Let us denote
\begin{align}
C_{I} & :=\frac{\beta I}{1+\alpha I}\\
C_{S} & :=\frac{\beta S}{(1+\alpha I)^{2}}\\
D_{I} & :=\frac{\beta_{2}}{(1+\alpha_{2}I)^{2}}.
\end{align}
Then the characteristic polynomial is rewritten as
\begin{align}
P(\lambda) & =\lambda^{2}+W\lambda+U \label{carec2}.
\end{align}
where:
\begin{align}
W &= C_I+b-C_S + p \delta + \gamma + D_I \\
U &= C_I \gamma + C_I D_I - b C_S + b p \delta + b \gamma + b D_I + C_I b m .
\end{align}
By the Routh--Hurwitz criterion for $n=2$, if the coefficient $W$ and the independent
term $U$ are positive then the roots of the characteristic equation have negative
real part and therefore the endemic equilibrium is locally asymptotically stable.
Note that whenever the equilibria are positive, $C_{I},C_{S},D_{I}$ will be
positive as well. Let us analyze the stability according to the value of
$\mathcal{R}_{0}$.
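In practice these conditions can be tested numerically; the helper below (ours) evaluates $W$ and $U$ at a candidate equilibrium and applies the Routh--Hurwitz test just stated (degenerate cases with $U=0$ or $W=0$ need further analysis).
\begin{verbatim}
def classify(S, I, alpha, alpha2, beta, beta2, b, gamma, delta, p, m):
    CI = beta * I / (1 + alpha * I)
    CS = beta * S / (1 + alpha * I) ** 2
    DI = beta2 / (1 + alpha2 * I) ** 2
    W = CI + b - CS + p * delta + gamma + DI
    U = (CI * gamma + CI * DI - b * CS + b * p * delta + b * gamma
         + b * DI + CI * b * m)
    if U < 0:
        return "saddle"              # real eigenvalues of opposite sign
    return "locally asymptotically stable" if W > 0 else "unstable"
\end{verbatim}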
\begin{theorem}
Whenever the equilibrium $E_1$ exists it is a saddle and therefore unstable.
\label{teo8}
\end{theorem}
\begin{proof}
Consider $E_{1}=(S_{1},I_{1})$ and its characteristic polynomial
(\ref{carec2}). By the Routh--Hurwitz criterion for quadratic polynomials, its
roots have negative real part if and only if $U>0$ and $W>0$, where $U,W$
depend on $E_{1}$. Moreover, when $U<0$ its roots are both real with different sign and when $U>0$ and $W<0$ the roots have positive real part. Computing the value of $U$ and expressing $S_{1}$ in terms
of $I_{1}$ we obtain
\begin{equation}
U=\dfrac{I_1(a_{1}I_{1}^{2}+b_{1}I_{1}+c_{1})}{(1+\alpha I_{1})(1+\alpha
_{2}I_{1})^{2}}=\dfrac{I_1 F(I_{1})}{(1+\alpha I_{1})(1+\alpha_{2}I_{1})^{2}}.
\end{equation}
where:
\begin{align}
a_{1} & =\alpha_{2}^{2}(\beta\gamma+bp\alpha\delta+b\alpha\gamma
+bm\beta) = \alpha_{2} A >0, \nonumber\\
b_{1} & =2\alpha_{2}(\beta\gamma+bp\alpha\delta+b\alpha\gamma+bm\beta
) = 2A>0,\nonumber\\
c_{1} & =\beta\beta_{2}+bm\beta+bp\alpha\delta+b\alpha\beta_{2}+\beta
\gamma-b\alpha_{2}\beta_{2}+b\alpha\gamma = B- \alpha_2 C.
\end{align}
We are assuming that the equilibrium $E_1$ exists and is positive, and this happens (by the previous section) when $B<0$ and $C>0$, so $c_1<0$. The sign of $U$ equals $\text{sgn}(F(I_{1}))$. $F(I_{1})$ has two roots of the
form:
\begin{align}
I\ast &=\dfrac{-b_{1}+\sqrt{b_{1}^{2}-4a_{1}c_{1}}}{2a_{1}} \\
I\ast\ast &=\dfrac{-b_{1}-\sqrt{b_{1}^{2}-4a_{1}c_{1}}}{2a_{1}}.
\end{align}
Here $b_1^{2}-4a_1 c_1>0$, and therefore $I\ast$ and $I\ast\ast$ are both real, with
$I\ast\ast<0$. $F(I_{1})>0$ for $I_{1}>I\ast$ and for $I_{1}<I\ast\ast$, but the second condition never holds because $I_{1}>0$; so $F(I_1)<0$ for $0 < I_1 < I\ast$. \par
Computing $I\ast$ in terms of $A,B,C$:
\begin{align}
I \ast &= - \dfrac{1}{\alpha_2} + \dfrac{1}{\alpha_2 A} \sqrt{( A^{2}- \alpha_2 AB + \alpha_2^{2} AC )} .
\end{align}
Substituting $\Delta = B^{2}-4AC>0$,
\begin{align}
I \ast &= - \dfrac{1}{\alpha_2} + \dfrac{1}{\alpha_2 A} \sqrt{\left( A^{2}- \alpha_2 AB + \frac{\alpha_2 ^{2}}{4} (B^{2}- \Delta ) \right)} \nonumber \\
&= - \dfrac{1}{\alpha_2} + \dfrac{1}{2 \alpha_2 A} \sqrt{(2A- \alpha_2 B )^{2}- \alpha_2^{2} \Delta } \nonumber \\
&> - \dfrac{1}{\alpha_2} + \dfrac{1}{2 \alpha_2 A} \left( \sqrt{(2A- \alpha_2 B )^{2}} - \sqrt{\alpha_2^{2} \Delta} \right) \nonumber \\
&= \dfrac{-B- \sqrt{\Delta} }{2A} = I_1.
\end{align}
Therefore $U<0$ and the equilibrium $E_1$ is a saddle.
\end{proof}
\begin{theorem}
Assume the conditions of theorem \ref{teo4} for existence and positivity of the endemic equilibrium $E_2$.
If $I_2< I\ast$ the equilibrium $E_2$ is unstable, while if $I_2> I\ast$ then $E_2$ is locally asymptotically stable for $s>0$ and unstable for $s<0$. \par
Here $s=m_{1}(-B+\sqrt{B^{2}-4AC})+2Am_{2}$, with
\begin{align}
m_{1} & =(r+\beta_{2}\alpha-\beta_{2}\alpha_{2}+2B\alpha_{2})A^{2}-\alpha
_{2}^{2}rAC-AB\alpha_{2}(b\alpha_{2}+2r+B^{2}\alpha_{2}^{2}r),\nonumber\\
m_{2} & =bA^{2}-AC\alpha_{2}(b\alpha_{2}+2r)+\alpha_{2}^{2}rBC,\nonumber\\
r & =\alpha(p\delta+b+\gamma)+\beta.
\end{align}
\label{teo9}
\end{theorem}
\begin{proof}
Let $E_{2}=(S_{2},I_{2})$ be real and positive, and consider its characteristic polynomial (\ref{carec2}). The equilibrium is unstable when $U<0$ and locally asymptotically stable when $U>0$, $W>0$.
Following the previous proof
\begin{equation}
U=\dfrac{I_2 (a_{1}I_{2}^{2}+b_{1}I_{2}+c_{1})}{(1+\alpha I_{2})(1+\alpha
_{2}I_{2})^{2}}=\dfrac{I_2 F(I_{2})}{(1+\alpha I_{2})(1+\alpha_{2}I_{2})^{2}}.
\end{equation}
Here $a_1,b_1,c_1$ are the same as in the previous theorem, therefore $\text{sgn}(U)=\text{sgn}(F(I_{2}))$. We have seen that $F(I_{2})$ has two real roots $I\ast$ and $I\ast\ast$. Again $F(I_{2})>0$ for $I_{2}>I\ast$ and for $I_{2}<I\ast\ast$ (which does not hold because $I\ast\ast<0$), and $F(I_2)<0$ for $0 < I_2 < I\ast$. So if $I_2< I\ast$ the equilibrium $E_2$ is unstable. \par
When $I_2> I\ast$ then $U>0$ and
\begin{align}
& W=\dfrac{1}{(1+\alpha I_{2})(1+\alpha_{2}I_{2})^{2}}[{\alpha_{{2}}}
^{2}\left( \alpha\,\gamma+b\alpha+\beta+\alpha\,p\delta\right) {I_{{2}}}
^{3}\nonumber\\
& +\alpha_{{2}}\left( b\alpha_{{2}}+2\,\alpha\,p\delta+2\,b\alpha
+2\,\alpha\,\gamma+2\,\beta\right) {I_{{2}}}^{2}\nonumber\\
& +\left( \alpha\,p\delta+b\alpha+\beta+\alpha\,\mathit{\beta_{2}
}-\mathit{\beta_{2}}\,\alpha_{{2}}+\alpha\,\gamma+2\,b\alpha_{{2}}\right)
I_{{2}}+b]\nonumber\\
& =\dfrac{G(I_{2})}{(1+\alpha I_{2})(1+\alpha_{2}I_{2})^{2}}.
\end{align}
By using the division algorithm,
\begin{align}
G(I_{2}) & =(AI_{2}^{2}+BI_{2}+C)P(I_{2})\nonumber\\
& +\frac{1}{A^{2}}[(r+\beta_{2}\alpha-\beta_{2}\alpha_{2}+2B\alpha_{2}
)A^{2}-\alpha_{2}^{2}rAC-AB\alpha_{2}(b\alpha_{2}+2r+B^{2}\alpha_{2}
^{2}r)I_{2}\nonumber\\
& +bA^{2}-AC\alpha_{2}(b\alpha_{2}+2r)+\alpha_{2}^{2}rBC],\nonumber\\
& =(AI_{2}^{2}+BI_{2}+C)P(I_{2})+\dfrac{m_{1}I_{2}+m_{2}}{A^{2}}.
\end{align}
Here $P(I_{2})$ is a polynomial in $I_{2}$ of degree one. Since $I_{2}$ is a
coordinate of an equilibrium, $AI_{2}^{2}+BI_{2}+C=0$ and
\[
G(I_{2})=\dfrac{m_{1}I_{2}+m_{2}}{A^{2}}.
\]
Hence $\text{sgn}(W)=\text{sgn}(G(I_{2}))=\text{sgn}(\frac{m_{1}I_{2}+m_{2}}{A^{2}})=\text{sgn}(m_{1}
I_{2}+m_{2}).$ Substituting the value of $I_{2},$
\[
m_{1}I_{2}+m_{2}=\frac{m_{1}}{2A}(-B+\sqrt{B^{2}-4AC})+m_{2}.
\]
It follows that $\text{sgn}(m_{1}I_{2}+m_{2})=\text{sgn}(m_{1}(-B+\sqrt{B^{2}-4AC}
)+2Am_{2})=\text{sgn}(s).$ Therefore $E_2$ is unstable if $s<0$ and locally asymptotically stable if $s>0$.
\end{proof}
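A direct transcription of this criterion (our sketch): compute $A$, $B$, $C$, $r$, $m_{1}$, $m_{2}$ and the indicator $s$, whose sign settles the stability of $E_{2}$ when $I_{2}>I\ast$.
\begin{verbatim}
import numpy as np

def s_indicator(alpha, alpha2, beta, beta2, b, gamma, delta, p, m):
    A = alpha2 * (beta * (gamma + b * m) + alpha * b * (p * delta + gamma))
    B = (beta * (gamma + beta2 + b * m * (1 - alpha2))
         + b * alpha * (p * delta + gamma + beta2)
         + b * alpha2 * (p * delta + gamma))
    C = b * (p * delta + gamma + beta2 - beta * m)
    disc = B * B - 4 * A * C
    if disc < 0:
        return np.nan                # E2 does not exist (complex roots)
    r = alpha * (p * delta + b + gamma) + beta
    m1 = ((r + beta2 * alpha - beta2 * alpha2 + 2 * B * alpha2) * A**2
          - alpha2**2 * r * A * C
          - A * B * alpha2 * (b * alpha2 + 2 * r + B**2 * alpha2**2 * r))
    m2 = (b * A**2 - A * C * alpha2 * (b * alpha2 + 2 * r)
          + alpha2**2 * r * B * C)
    return m1 * (-B + np.sqrt(disc)) + 2 * A * m2
\end{verbatim}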
\section{Hopf bifurcation}
From the previous section we know that system \eqref{ruanmod} has two positive endemic equilibria under the conditions of theorem \ref{teo4}. The equilibrium $E_1$ is always a saddle, so its stability does not change and there is no possibility of a Hopf bifurcation at it. Let us then analyse the existence of a Hopf bifurcation at the equilibrium $E_2=(S_2,I_2)$. Analysing the characteristic equation for $E_2$, it has a pair of pure imaginary roots if and only if $U>0$ and $W=0$. \par
\begin{theorem}
System \eqref{ruanmod} undergoes a Hopf bifurcation of the endemic equilibrium $E_2$ (whenever it exists) if $I_2>I^{*}$ and $s= 0$. Moreover, if $ \bar{a}_{2}<0 $, there is a family of stable periodic orbits of
\eqref{ruanmod} as $s$ decreases from 0; if $ \bar{a}_2>0 $, there is a family of unstable periodic orbits of \eqref{ruanmod} as $s$ increases from 0. \label{biftheo1}
\end{theorem}
\begin{proof}
The characteristic polynomial for $E_2$ has a pair of pure imaginary roots iff $U>0$ and $W=0$. From the proof of theorem \ref{teo8} we have that $U>0$ if and only if one of the conditions (i), (ii) is satisfied.
Moreover, $\text{sgn}(W)=\text{sgn}(s)$, so $W=0$ if and only if $s=0$. By the first part of theorem 3.4.2 of \cite{guckenheimer}, the roots $\lambda$ and $\bar{\lambda}$ of \eqref{carec2} for $E_2$ vary smoothly, so we can affirm that near $s=0$ these roots are still complex conjugate and
\begin{align}
\frac{d Re( \lambda ( s))}{d s} \mid_{s = 0} &= \frac{d}{d s } \left( \frac{1}{2} W(s) \right) \nonumber \\
&= \frac{1}{2} \frac{d}{d s } \left( \frac{1}{2 A^{3} (1+ \alpha_1 I_2)( 1+ \alpha_2 I_2)^{2} } s \right) \nonumber \\
&= \frac{1}{ 4 A^{3} (1+ \alpha_1 I_2)( 1+ \alpha_2 I_2)^{2} } \neq 0.
\end{align}
Therefore $s=0$ is the Hopf bifurcation point for \eqref{ruanmod}. \par
To analyze the behaviour of the solutions of \eqref{ruanmod} when $s=0$, we make a change of coordinates to obtain a new system, equivalent to \eqref{ruanmod}, with an equilibrium at $(0,0)$ in the $x$--$y$ plane (see Appendix B). Under this change the system becomes:
\begin{align}
\frac{dx}{dt} &= {\frac {a_{{11}}x+a_{{12}}y+c_{{1}}xy+c_{{2}}{y}^{2} }{1+\alpha_{{1}}y+
\alpha_{{1}}I_{{2}}}}, \nonumber \\
\frac{dy}{dt} &= {\frac {a_{{21}}x+a_{{22}}y+c_{{3}}xy+c_{{4}}x{y}^{2}+c_{{5}}{y}^{2}+c
_{{6}}{y}^{3} }{ \left( 1+\alpha_{{1}}y+\alpha_{{1}}I_{{2}} \right)
\left( 1+\alpha_{{2}}y+\alpha_{{2}}I_{{2}} \right) }}. \label{hmrhb1}
\end{align}
Here the $a_{ij}$'s and $c_i$'s are defined in Appendix B.
Systems \eqref{hmrhb1} and \eqref{ruanmod} are equivalent (Appendix B), so we can work with \eqref{hmrhb1}. This system has a pair of pure imaginary eigenvalues if and only if \eqref{ruanmod} has them too. As we said before, this happens if and only if one of the conditions (i), (ii) is satisfied and $s=0$. Computing the Jacobian matrix $DF(0,0)$ of \eqref{hmrhb1},
\begin{equation}
DF(0,0)= \left[ \begin{array}{cc} \dfrac{a_{11}}{1+\alpha_{1}I_{2}} & \dfrac{a_{12}}{1+\alpha_{1}I_{2}} \\[2ex]
\dfrac{a_{21}}{\left( 1+\alpha_{2}I_{2}\right)\left( 1+\alpha_{1}I_{2}\right)} & \dfrac{a_{22}}{\left( 1+\alpha_{2}I_{2}\right)\left( 1+\alpha_{1}I_{2}\right)} \end{array} \right].
\end{equation}
$$ Tr (DF(0,0)) = Tr(Df(S_2,I_2)) , \quad \det (DF(0,0)) = \det (Df(S_2,I_2)).$$
So condition $s=0$ is equivalent to $ a_{11}(1+\alpha_2 I_2)+ a_{22}=0 $ and (i),(ii) are equivalent to $ a_{22} a_{11}- a_{12} a_{21} > 0 $. \par
System \eqref{hmrhb1} can be rewritten as
\begin{align}
\dfrac{dx}{dt} &= {\dfrac {a_{{11}}x}{1+\alpha_{{1}}I_{{2}}}}+{\frac {a_{{12}}y}{1+\alpha
_{{1}}I_{{2}}}} + G_1 (x,y) \\
\dfrac{dy}{dt} &= {\frac {a_{{21}}x}{ \left( 1+\alpha_{{1}}I_{{2}} \right) \left( 1+
\alpha_{{2}}I_{{2}} \right) }}+{\frac {a_{{22}}y}{ \left( 1+\alpha_{{1
}}I_{{2}} \right) \left( 1+\alpha_{{2}}I_{{2}} \right) }} + G_2(x,y).
\end{align}
where $G_1,G_2$ are defined in Appendix B.
Let $ \Lambda = \sqrt{ \det (DF(0,0)) } $. We use the change of variables $u=x$, $v= \frac{a_{11}x}{ \Lambda( 1+ \alpha_1 I_2)} + \frac{a_{12} y}{ \Lambda(1 + \alpha_{1} I_2 ) }$ to obtain the following equivalent system:
\begin{equation}
\frac{d}{dt}\left( \begin{matrix}
u \\ v
\end{matrix}\right)= \left( \begin{matrix}
0 & \Lambda \\ - \Lambda & 0
\end{matrix}\right) \left( \begin{matrix}
u \\ v
\end{matrix}\right) + \left( \begin{matrix}
H_1 (u,v) \\ H_2(u,v)
\end{matrix}\right).
\end{equation}
Where
\begin{align}
H_1(u,v) &= -\dfrac{\left( \left( -a_{{12}}c_{{1}}+a_{{11}}c_{{2}} \right) u+ \left( -
\Lambda c_{{2}}\alpha_{{1}}I_{{2}}+ \Lambda a_{{12}
}\alpha_{{1}}- \Lambda c_{{2}} \right) v \right) \left(
\left( \Lambda+ \Lambda \alpha_{{1}}I_{{2}} \right) v-a_{{11}}u
\right)
}{a_{{12}} \left( \left( \alpha_{{1}} \Lambda + \Lambda {\alpha_{{1}}}^{2}I_{{2}} \right) v+a_{{12}}-\alpha_{{1}}a_{{
11}}u+a_{{12}}\alpha_{{1}}I_{{2}} \right)
} \\
H_2(u,v) &= - \dfrac{1}{h(u,v)} \left[ (\Lambda(1+ \alpha_1 I_2)v-a_{11}u) \left( A_1 v^{2}+A_2 uv + A_3 v + A_4 u^{2} + A_5 u \right) \right]
\end{align}
where $A_1,\ldots,A_5$ and $h(u,v)$ are defined in Appendix B.
Let
\begin{align}
\bar{a}_2 &= \dfrac{1}{16} [ (H_1)_{uuu} + (H_1)_{uvv} + (H_{2})_{uuv} + (H_2)_{vvv} ] + \dfrac{1}{16( - \Lambda)} [(H_1)_{uv} ((H_1)_{uu} + (H_1)_{vv}) \nonumber \\
& - (H_{2})_{uv} ( (H_{2})_{uu} + (H_2)_{vv} ) - (H_1)_{uu}(H_2)_{uu} + (H_1)_{vv} (H_2)_{vv}].
\end{align}
where $$(H_1)_{uuu}= \dfrac{ \partial }{ \partial u } \left( \dfrac{ \partial }{ \partial u } \left( \dfrac{ \partial H_1}{ \partial u } \right) \right)(0,0),$$
and so on ($\bar{a}_2$ is explicitly expressed in Appendix B). \par
Then, by theorem 3.4.2 of \cite{guckenheimer}, if $\bar{a}_{2} \neq 0$ there exists a surface of periodic solutions; if $\bar{a}_{2}<0$ these cycles are stable, while if $\bar{a}_{2}>0$ the cycles are repelling.
\end{proof}
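Since the Hopf point is characterized by $s=0$, candidate bifurcation values can be located by scanning a parameter for a sign change of $s$; the sketch below (ours) reuses the function \texttt{s\_indicator} from the previous section. A detected sign change is only a candidate, since the condition $U>0$ must also be verified.
\begin{verbatim}
import numpy as np

def hopf_candidates_in_beta2(beta2_grid, **pars):
    prev = None
    for b2 in beta2_grid:
        val = s_indicator(beta2=b2, **pars)
        if not np.isfinite(val):     # E2 absent for this beta2
            prev = None
            continue
        if prev is not None and np.sign(val) != np.sign(prev):
            print(f"sign change of s near beta2 = {b2:.4f}")
        prev = val

hopf_candidates_in_beta2(np.linspace(0.005, 0.05, 400),
                         alpha=0.4, alpha2=10.0, beta=0.2, b=0.2,
                         gamma=0.01, delta=0.01, p=0.02, m=0.3)
\end{verbatim}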
\section{Discussion}
As we said in the introduction, traditional epidemic models typically have stability results stated in terms of $\mathcal{R}_{0}$, so that to eradicate the disease we need only reduce $\mathcal{R}_{0}$ below 1. However, including the treatment function brings new endemic equilibria that make the dynamics of the model more complicated. Let us now discuss some control strategies for the infectious disease, analysing the parameters of the treatment function ($\alpha_{2},\beta_{2}$) and looking for conditions that allow us to eliminate the disease. We make this study by cases.\par
A first approach focuses on the definition of $\mathcal{R}_{0}$: we can see that $\mathcal{R}_{0}$ decreases when $\beta_2$ increases, so the first control measure suggested is a large value of $\beta_2$. But this is not always a good way to proceed. Let us divide our analysis into the following cases: \par
\textit{Case 1: There is no positive endemic equilibrium for $\mathcal{R}_{0} \leq 1$.} This happens when $\mathcal{R}_{0}^{*} \leq 1$ (by theorem \ref{teo4}) or when $\mathcal{R}_{0}^{*}>1$ and $(\alpha_{2},\beta_{2}) \in A_1 \cup A_2$ (theorem \ref{teo4}, number 5). In this case, if $\mathcal{R}_{0}>1$ there is a unique positive endemic equilibrium, therefore there exists a bifurcation at $\mathcal{R}_{0}=1$: from the disease-free equilibrium, which is globally asymptotically stable for $0<\mathcal{R}_{0}<1$ (by theorems \ref{teo5} and \ref{teo6}) and a saddle for $\mathcal{R}_{0}=1$ and $\beta_2 \neq g(\alpha_2)$ (theorem \ref{teo7}), to the positive endemic equilibrium $E_2$ as $\mathcal{R}_{0}$ increases. $E_2$ will be locally asymptotically stable or unstable depending on theorem \ref{teo9}, or surrounded by a limit cycle (theorem \ref{biftheo1}). If the conditions for Hopf bifurcation hold, then the stability of the limit cycle is determined by $\bar{a}_{2}$: when $\bar{a}_2<0$ the periodic orbit is stable and therefore $E_2$ is unstable, while if $\bar{a}_2>0$ the periodic orbit is unstable and $E_2$ is stable. In this case the best way to eradicate the disease is to find parameters that make $\mathcal{R}_{0}<1$, because then all the infectious states tend to $I=0$. \par
\begin{figure}
\caption{Bifurcation diagram in terms of $ \beta_2 $ and $ \alpha_2 $. The values of the parameters taken are $ \alpha = 0.4, \beta = 0.3, b= 0.2, \gamma = 0.03, \delta = 0.05, p=0.3, m=0.3 $. Here $\mathcal{R}_{0}^{*}>1$.}
\label{biffig1}
\end{figure}
\textit{Case 2: There exist endemic equilibria for $\mathcal{R}_{0} \leq 1$.} This happens when $(\alpha_{2},\beta_{2}) \in A_3$. The existence of endemic equilibria is determined by the relationship between $\mathcal{R}_{0}$ and $\max\{P_1,R_{0}^{+}\}$. Let $F(\alpha_2,\beta_2)=\mathcal{R}_{0}-R_{0}^{+}$ and $G(\alpha_2,\beta_2)=\mathcal{R}_{0}-P_1$, and focus on the implicit curves defined by $F=0$ and $G=0$. These curves divide the domain $A_3$ into the following subregions (see figure \ref{biffig1}):
\begin{align}
A_{3}^{1} & = \{( \alpha_2, \beta_2 ) \in A_3 :\ 0< \mathcal{R}_{0} < R_{0}^{+} \}, \nonumber \\
A_{3}^{2} & = \{( \alpha_2, \beta_2 ) \in A_3 :\ \mathcal{R}_{0} > R_{0}^{+} \}, \nonumber \\
A_{3}^{3} & = \{( \alpha_2, \beta_2 ) \in A_3 :\ 0< \mathcal{R}_{0} < P_1 \}, \nonumber \\
A_{3}^{4} & = \{( \alpha_2, \beta_2 ) \in A_3 :\ \mathcal{R}_{0} > P_1 \}.
\end{align}
If $(\alpha_{2},\beta_{2}) \in A_{3}^{2} \cap A_{3}^{4}$ then there exist two endemic equilibria, $E_1$ (a saddle) and $E_2$ (stable or unstable depending on the conditions of theorem \ref{teo9}, and possibly with a periodic orbit around it, theorem \ref{biftheo1}), but when $\mathcal{R}_{0}=1$ one of them becomes negative, leaving us with $E_2$. In this case $\mathcal{R}_{0}<1$ is not a sufficient condition to control the disease, because even with $\mathcal{R}_{0}<1$ we have positive endemic equilibria that could be stable, and then the disease will tend to a nonzero value; we also have the possibility of a periodic solution, or biologically, an outbreak that will apparently ``disappear'' but will re-emerge after some time. \par
The best strategy in this case is to ensure $(\alpha_{2},\beta_{2}) \in (A_{3}^{2} \cap A_{3}^{4})^{c}$, because then we have no endemic equilibria for $\mathcal{R}_{0}<1$ and the disease-free equilibrium will be globally asymptotically stable.
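This criterion can be checked mechanically; the sketch below (ours) evaluates $P_{1}$ and $R_{0}^{+}$ from their definitions in section 2 and flags parameter pairs for which endemic equilibria may persist below $\mathcal{R}_{0}=1$.
\begin{verbatim}
import numpy as np

def risky_pair(alpha, alpha2, beta, beta2, b, gamma, delta, p, m):
    """True if endemic equilibria can coexist with R0 < 1 (A3^2 cap A3^4)."""
    R0 = beta * m / (beta2 + p * delta + gamma)
    k = b * alpha * (p * delta + gamma + beta2)
    P1 = 1 + (beta * (gamma + beta2 + b * m - b * m * alpha2)
              + beta * m * b * alpha + b * alpha2 * (p * delta + gamma)) / k
    X = (-beta * alpha * (b * m * alpha + beta2 + gamma + b * m
                          - alpha2 * b * m)
         + beta * alpha2 * (gamma + b * m))
    Y = alpha2 * (beta * gamma + beta * b * m
                  + alpha * b * p * delta + alpha * b * gamma)
    if X < 0:                        # Delta_2 < 0: R0+ is not real
        return False
    R0p = 1 - (np.sqrt(X) - np.sqrt(Y))**2 / (b * alpha**2
                                              * (p * delta + gamma + beta2))
    return max(P1, R0p) < R0 < 1
\end{verbatim}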
\begin{thebibliography}{99}
\bibitem{hadeler1997} K.P. Hadeler, P. van den Driessche, \textit{Backward bifurcation in epidemic control}, Math. Biosci. 146 (1997) 15--35.
\bibitem{dushoff1998} J. Dushoff, W. Huang, C. Castillo-Chavez, \textit{Backwards bifurcations and catastrophe in simple models of fatal diseases}, J. Math. Biol. 36 (1998) 227--248.
\bibitem{driessche2000} P. van den Driessche, J. Watmough, \textit{A simple SIS epidemic model with a backward bifurcation}, J. Math. Biol. 40 (2000) 525--540.
\bibitem{brauer2004} F. Brauer, \textit{Backward bifurcations in simple vaccination models}, J. Math. Anal. Appl. 298 (2004) 418--431.
\bibitem{hadeler1995} K.P. Hadeler, C. Castillo-Chavez, \textit{A core group model for disease transmission}, Math. Biosci. 128 (1995) 41.
\bibitem{anderson1991} R.M. Anderson, R.M. May, \textit{Infectious Diseases of Humans}, Oxford University Press, London, 1991.
\bibitem{wang2004} W. Wang, S. Ruan, \textit{Bifurcation in an epidemic model with constant removal rate of the infectives}, J. Math. Anal. Appl. 291 (2004) 775.
\bibitem{humaruan} Z. Hu, W. Ma, S. Ruan, \textit{Analysis of SIR epidemic models with nonlinear incidence rate and treatment}, Math. Biosci. 238 (2012) 12--20.
\bibitem{zhonghua} Z. Zhang, Y. Suo, \textit{Qualitative analysis of a SIR epidemic model with saturated treatment rate}, J. Appl. Math. Comput. 34 (2010) 177--194.
\bibitem{likuang} B. Li, Y. Kuang, \textit{Simple food chain in a chemostat with distinct removal rates}, J. Math. Anal. Appl. 242 (2000) 75--92.
\bibitem{carr} J. Carr, \textit{Applications of Centre Manifold Theory}, Springer-Verlag, New York, 1981, pp. 1--13.
\bibitem{perko} L. Perko, \textit{Differential Equations and Dynamical Systems}, 3rd ed., Springer, New York, pp. 107--108.
\bibitem{zhoufan} L. Zhou, M. Fan, \textit{Dynamics of an SIR epidemic model with limited medical resources revisited}, Nonlinear Anal. Real World Appl. 13 (2012) 312--324.
\bibitem{kelley} W.G. Kelley, A.C. Peterson, \textit{The Theory of Differential Equations: Classical and Qualitative}, 2nd ed., Springer.
\bibitem{zhangliu} X. Zhang, X. Liu, \textit{Backward bifurcation of an epidemic model with saturated treatment function}, J. Math. Anal. Appl. 348 (2008) 433--443.
\bibitem{wancui} H. Wan, J. Cui, \textit{Rich dynamics of an epidemic model with saturation recovery}, J. Appl. Math. 2013 (2013), Article ID 314958.
\bibitem{guckenheimer} J. Guckenheimer, P. Holmes, \textit{Nonlinear Oscillations, Dynamical Systems, and Bifurcations of Vector Fields}, Springer-Verlag, New York, 1996.
\end{thebibliography}
\appendix
\section{Computing center manifold}
The Jacobian matrix of system \eqref{carac6} is
\begin{equation}
DF(m,0)=\left(
\begin{matrix}
-b & - \beta m - bm + p \delta\\
0 & 0
\end{matrix}
\right) . \label{eqst1}
\end{equation}
Its eigenvalues are $\lambda_{1}=-b$ and $\lambda_{2}=0$, with respective eigenvectors $v_{1}=(1,0)$ and $v_{2}=(-\frac{\gamma+\beta_{2}+bm}{b},1)$. Using the eigenvectors to establish a new coordinate system,
we define:
\begin{equation}
\left(
\begin{matrix}
\hat{S}\\
I
\end{matrix}
\right) =\left(
\begin{matrix}
1 & -{\frac{\gamma+\beta_{2}+bm}{b}}\\
0 & 1
\end{matrix}
\right) \left(
\begin{matrix}
u\\
v
\end{matrix}
\right), \quad\text{or}\quad\left(
\begin{matrix}
u\\
v
\end{matrix}
\right) =\left(
\begin{matrix}
1 & {\frac{\gamma+\beta_{2}+bm}{b}}\\
0 & 1
\end{matrix}
\right) \left(
\begin{matrix}
\hat{S}\\
I
\end{matrix}
\right).
\end{equation}
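As a side check, the eigenstructure quoted above can be confirmed symbolically; a minimal sketch with \texttt{sympy}, writing the off-diagonal entry after the substitution $\beta m=p\delta+\gamma+\beta_{2}$ used below:
\begin{verbatim}
# Sketch: eigenvalues and eigenvectors of DF(m,0) in Eq. (eqst1).
import sympy as sp

b, gamma, beta2, m = sp.symbols('b gamma beta2 m', positive=True)
t = -(gamma + beta2 + b*m)     # = -beta*m - b*m + p*delta
DF = sp.Matrix([[-b, t],
                [0,  0]])
print(DF.eigenvects())
# eigenvalue -b: eigenvector (1, 0)
# eigenvalue  0: eigenvector (-(gamma + beta2 + b*m)/b, 1)
\end{verbatim}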
Under this transformation the system becomes
\begin{align}
\frac{du}{dt} & ={\frac{d}{dt}}\hat{S}\left( t\right) +{\frac{\left(
\gamma+\mathit{\beta_{2}}+bm\right) {\frac{d}{dt}}I\left( t\right) }{b}
}\nonumber\\
& =-{\frac{\beta\,\left( \hat{S}+m\right) I}{1+\alpha\,I}}-b\hat
{S}-bmI+p\delta\,I+\left( \gamma+\mathit{\beta_{2}}+bm\right) \nonumber\\
& \left( {\frac{\beta\,\left( \hat{S}+m\right) I}{1+\alpha\,I}}-\left(
p\delta+\gamma\right) I-{\frac{\mathit{\beta_{2}}\,I}{1+\mathit{\alpha_{2}
}\,I}}\right) \frac{1}{b},\nonumber\\
\frac{dv}{dt} & =\frac{dI}{dt}\\
& ={\frac{\beta\,\left( \hat{S}+m\right) I}{1+\alpha\,I}}-\left(
p\delta+\gamma\right) I-{\frac{\mathit{\beta_{2}}\,I}{1+\mathit{\alpha_{2}
}\,I}}.
\end{align}
Substituting $\hat{S}=u-{\frac{\left( \gamma+\beta_2 +bm\right) v}{b}}$, $I=v$,
and $\beta m=p\delta+\gamma+\beta_{2}$, we obtain:
\begin{align}
\frac{dv}{dt} & =0u+f(v,u)\nonumber\\
\frac{du}{dt} & =-bu+g(v,u), \label{eqst2}
\end{align}
where
\begin{align}
& f(v,u)=-{\frac{v\left( -\beta\,b-\beta\,b\mathit{\alpha_{2}}\,v\right)
u}{\left( 1+\alpha\,v\right) \left( 1+\mathit{\alpha_{2}}\,v\right) b}
}\nonumber\\
& -\frac{v}{\left( 1+\alpha\,v\right) \left( 1+\mathit{\alpha_{2}
}\,v\right) b}(\left( \beta\,bm\mathit{\alpha_{2}}+b\gamma\,\alpha
\,\mathit{\alpha_{2}}-\beta\,\mathit{\alpha_{2}}\,p\delta+bp\delta
\,\alpha\,\mathit{\alpha_{2}}+{\beta}^{2}\mathit{\alpha_{2}}\,m\right)
{v}^{2}\nonumber\\
& +\left( bp\delta\,\mathit{\alpha_{2}}+\beta\,bm-\beta\,bm\mathit{\alpha_{2}}
+b\gamma\,\mathit{\alpha_{2}}+{\beta}^{2}m-\beta\,p\delta+b\alpha
\,\beta\,m\right) v),\nonumber\\
& g(v,u)=-\frac{1}{\left( 1+\alpha\,v\right) \left( 1+\mathit{\alpha_{2}
}\,v\right) b^{2}}[v((m{b}^{2}\gamma\,\alpha\,\mathit{\alpha_{2}}+2\,{\beta
}^{2}b{m}^{2}\mathit{\alpha_{2}}+\beta\,\mathit{\alpha_{2}}\,{p}^{2}{\delta
}^{2}\nonumber \\
&+{\beta}^{3}\mathit{\alpha_{2}}\,{m}^{2}-b\gamma\,p\delta\,\alpha
\,\mathit{\alpha_{2}}+b\gamma\,\alpha\,\mathit{\alpha_{2}}\,\beta\,m-2\,{\beta}^{2}
\mathit{\alpha_{2}}\,mp\delta+bp\delta\,\alpha\,\mathit{\alpha_{2}}
\,\beta\,m\nonumber \\
&-{b}^{2}m\alpha\,\mathit{\alpha_{2}}\,\beta-{\beta}^{2}
bm\mathit{\alpha_{2}}+{b}^{2}{m}^{2}\beta\,\mathit{\alpha_{2}}+{b}^{2}
mp\delta\,\alpha\,\mathit{\alpha_{2}}-b{p}^{2}{\delta}^{2}\alpha\,\mathit{\alpha_{2}}\nonumber \\
&-2\,\beta\,bm\mathit{\alpha_{2}}\,p\delta-{b}^{2}m\beta\,\mathit{\alpha_{2}}+b\beta\,\mathit{\alpha_{2}
}\,p\delta)v^{2}+({b}^{2}{m}^{2}\beta-2\,\beta\,bmp\delta-\beta\,{b}
^{2}m+2\,{\beta}^{2}b{m}^{2}\nonumber\\
& -{\beta}^{2}bm+\beta\,{p}^{2}{\delta}^{2}+2\,\beta\,bm\mathit{\alpha_{2}
}\,p\delta-bp\delta\,\alpha\,\beta\,m+{\beta}^{3}{m}^{2}+\beta\,bp\delta
-{b}^{2}\alpha\,\beta\,m-2\,{\beta}^{2}mp\delta\nonumber \\
&+{b}^{2}{m}^{2}\alpha\,\beta+b\alpha\,{\beta}^{2}{m}^{2}-u\beta\,{b}^{2}m\alpha_{{2}}-b{\beta}
^{2}u\alpha_{{2}}m+b\beta\,u\alpha_{{2}}p\delta-\gamma\,bp\delta\,\alpha_{{2}
}+\gamma\,\beta\,bm\alpha_{{2}}\nonumber \\
&+{b}^{2}mp\delta\,\alpha_{{2}}-{b}^{2}{m}
^{2}\beta\,\alpha_{{2}}+u\beta\,{b}^{2}\alpha_{{2}}-b{p}^{2}{\delta}^{2}\alpha_{{2}}-{\beta}
^{2}b{m}^{2}\alpha_{{2}}+{b}^{2}m\gamma\,\alpha_{{2}})v-{b}^{2}m\beta
\,u\nonumber \\
&+u\beta\,{b}^{2}-b{\beta}^{2}um+b\beta\,up\delta)].
\end{align}
By \cite{carr}, the system (\ref{eqst2}) has a center manifold of the form
$u=h(v)$. Let $\phi:\mathbb{R}\rightarrow\mathbb{R}$ be a smooth function and define the
annihilator:
\begin{align}
N\phi & =\phi^{\prime}(v)(f(v,\phi(v)))+b\phi-g(v,\phi(v))\nonumber\\
& =\frac{1}{b^{2}(1+\alpha v)(1+\alpha_{2}v)}[bp\delta\,\alpha\,{v}^{3}
\alpha_{{2}}\beta\,m+{b}^{2}{m}^{2}\beta\,{v}^{2}-\beta\,{v}^{2}{b}^{2}
m+{b}^{3}\phi+{b}^{3}\phi\,\alpha\,v\nonumber \\
&+{b}^{3}\phi\,\alpha_{{2}}v+{b}^{2}m\gamma\,\alpha_{2}\,{v}^{2}+\phi\,\beta\,v{b}^{2}+vb\phi
\,\beta\,p\delta+{b}^{2}mp\delta\,\alpha_{{2}}{v}^{2}-\phi\,\beta\,{v}^{2}{b}^{2}m\alpha_{{2}}\nonumber \\
&-\gamma\,bp\delta\,\alpha_{{2}}{v}^{2}+{b}^{2}m\gamma\,\alpha\,{v}^{3}\alpha_{{2}}+\gamma\,\beta\,{v}^{2}
bm\alpha_{{2}}+{b}^{2}{m}^{2}\alpha\,{v}^{2}\beta-2\,{\beta}^{2}{v}
^{2}mp\delta-{\beta}^{2}{v}^{2}b{m}^{2}\alpha_{2}\nonumber \\
&+\beta\,{v}^{2}bp\delta-{b}^{2}{v}^{2}\alpha\,\beta\,m+b\alpha\,{v}^{2}{\beta}^{2}{m}^{2}
-bp\delta\,\alpha\,{v}^{2}\beta\,m+{\beta}^{3}{v}^{2}{m}^{2}-2\,\beta\,{v}
^{2}bmp\delta+{\beta}^{3}{v}^{3}\alpha_{{2}}{m}^{2}\nonumber \\
&+2\,{\beta}^{2}{v}^{2}b{m}^{2}+\beta\,{v}^{2}{p}^{2}{\delta}^{2}-{\beta}^{2}{v}^{2}bm-\phi\,\beta
\,v{b}^{2}m-vb\phi\,{\beta}^{2}m+\beta\,{v}^{3}b\alpha_{{2}}p\delta-{b}^{2}
{v}^{3}\alpha\,\alpha_{{2}}\beta\,m\nonumber \\
&-b{p}^{2}{\delta}^{2}\alpha\,{v}^{3}\alpha_{{2}}-2\,{\beta}^{2}{v}^{3}\alpha_{{2}}mp\delta+\phi\,\beta\,{v}^{2}{b}
^{2}\alpha_{{2}}-{b}^{2}{m}^{2}\beta\,{v}^{2}\alpha_{{2}}+{b}^{2}{m}^{2}
\beta\,{v}^{3}\alpha_{{2}}-\beta\,{v}^{3}{b}^{2}m\alpha_{{2}}\nonumber \\
&-{\beta}^{2}{v}^{3}b\alpha_{{2}}m+2\,{\beta}^{2}{v}^{3}b{m}^{2}\alpha_{{2}}-b{p}^{2}{\delta}^{2}{v}
^{2}\alpha_{{2}}+\beta\,{v}^{3}\alpha_{{2}}{p}^{2}{\delta}^{2}+{b}^{3}
\phi\,\alpha\,{v}^{2}\alpha_{{2}}-{v}^{2}b\phi\,{\beta}^{2}m\alpha_{{2}}\nonumber \\
&+{v}^{2}b\phi\,\beta\,p\delta\,\alpha_{{2}}+b\gamma\,\alpha\,{v}^{3}\alpha_{{2}}\beta\,m+{b}^{2}mp\delta\,\alpha
\,{v}^{3}\alpha_{{2}}-\gamma\,bp\delta\,\alpha\,{v}^{3}\alpha_{{2}}-2\,\beta\,{v}^{3}bm\alpha_{{2}}p\delta\nonumber \\
&+2\,\beta\,{v}^{2}bm\alpha_{{2}}p\delta].
\end{align}
Assume that $\phi=a_{0}v^{2}+a_{1}v^{3}+O(v^{4})$. Then, substituting $\phi$
and $\frac{d\phi}{dv}$ into the annihilator $N\phi$ and expanding its Taylor
series, we get:
\begin{align}
N\phi & =\frac{1}{b^{2}}((\gamma\,\beta\,bm\alpha_{2}+{b}^{2}mp\delta
\,\alpha_{2}-{b}^{2}{m}^{2}\beta\,\mathit{\alpha_{2}}+2\,\beta
\,bm\mathit{\alpha_{2}}\,p\delta+2\,{\beta}^{2}b{m}^{2}+{b}^{2}{m}^{2}
\beta\nonumber\\
& -\beta\,{b}^{2}m+{b}^{3}\mathit{a_{0}}-{\beta}^{2}b{m}^{2}\mathit{\alpha
_{2}}-2\,\beta\,bmp\delta+{b}^{2}m\gamma\,\mathit{\alpha_{2}}-\gamma
\,bp\delta\,\mathit{\alpha_{2}}-{b}^{2}\alpha\,\beta\,m-{\beta}^{2}
bm\nonumber\\
& +b\alpha\,{\beta}^{2}{m}^{2}+{b}^{2}{m}^{2}\alpha\,\beta-2\,{\beta}
^{2}mp\delta+\beta\,{p}^{2}{\delta}^{2}-bp\delta\,\alpha\,\beta\,m-b{p}
^{2}{\delta}^{2}\mathit{\alpha_{2}}+\beta\,bp\delta\nonumber \\
&+{\beta}^{3}{m}^{2})v^{2}-\frac{1}{b^{2}}[\alpha\,{\beta}^{3}{m}^{2}-\mathit{a_{0}}\,\beta\,{b}
^{2}-{b}^{3}\mathit{a_{1}}-2\,bp\delta\,\alpha\,\beta\,m-{\beta}^{2}b{m}
^{2}{\mathit{\alpha_{2}}}^{2}-b{p}^{2}{\delta}^{2}{\mathit{\alpha_{2}}}^{2}\nonumber \\
&+m{b}^{2}\gamma\,{\mathit{\alpha_{2}}}^{2}-{b}^{2}{m}^{2}\beta
\,{\mathit{\alpha_{2}}}^{2}-{b}^{2}\alpha\,\beta\,m-{b}^{2}{\alpha}^{2}\beta\,m-\alpha\,{\beta}
^{2}bm+b{\alpha}^{2}{\beta}^{2}{m}^{2}+{b}^{2}{m}^{2}{\alpha}^{2}\beta\nonumber \\
&+\alpha\,\beta\,{p}^{2}{\delta}^{2}+3\,\mathit{a_{0}}\,\beta\,{b}^{2}m+3\,\mathit{a_{0}}\,b{\beta}^{2}m+2\,\mathit{a_{0}}\,{b}^{2}\gamma
\,\mathit{\alpha_{2}}+2\,\mathit{a_{0}}\,{b}^{2}p\delta\,\mathit{\alpha_{2}
}-2\,\mathit{a_{0}}\,\beta\,{b}^{2}m\mathit{\alpha_{2}}\nonumber \\
&-3\,\mathit{a_{0}}\,b\beta\,p\delta+2\,\mathit{a_{0}}\,{b}^{2}\alpha\,\beta\,m-2\,\alpha\,{\beta}^{2}mp\delta+\alpha\,\beta\,bp\delta+{b}^{2}
mp\delta\,{\mathit{\alpha_{2}}}^{2}+b\gamma\,{\mathit{\alpha_{2}}}^{2}\beta\,m\nonumber \\
&-b\gamma\,p\delta\,{\mathit{\alpha_{2}}}^{2}+2\,\beta\,bm{\mathit{\alpha_{2}}}^{2}p\delta-bp\delta\,{\alpha}^{2}\beta\,m+{b}^{2}{m}^{2}\alpha\,
\beta+2\,b\alpha\,{\beta}^{2}{m}^{2}]v^{3}+O(v^{4})).
\end{align}
Choosing the coefficients $a_{0}$ and $a_{1}$ so that the $v^{2}$ and $v^{3}$ terms vanish, i.e.,
$N\phi=O(v^{4})$, we obtain:
\begin{align}
a_{0} & =-\frac{1}{b^{3}}[{b}^{2}{m}^{2}\beta+\beta\,bp\delta-{b}^{2}
\alpha\,\beta\,m+{b}^{2}mp\delta\,\alpha_{2}-\gamma\,bp\delta\,\alpha
_{2}+\gamma\,\beta\,bm\alpha_{2}-2\,\beta\,bmp\delta\nonumber\\
& -\beta\,{b}^{2}m+2\,{\beta}^{2}b{m}^{2}-{\beta}^{2}bm+\beta\,{p}^{2}
{\delta}^{2}+2\,\beta\,bm\alpha_{2}\,p\delta-bp\delta\,\alpha\,\beta
\,m+{\beta}^{3}{m}^{2}-b{p}^{2}{\delta}^{2}\alpha_{2}\nonumber\\
& -{b}^{2}{m}^{2}\beta\,\alpha_{2}-2\,{\beta}^{2}mp\delta+{b}^{2}
m\gamma\,\alpha_{2}+b\alpha\,{\beta}^{2}{m}^{2}+{b}^{2}{m}^{2}\alpha
\,\beta-{\beta}^{2}b{m}^{2}\alpha_{2}],\\
a_{1} & =\frac{1}{b^{3}}[\alpha\,{\beta}^{3}{m}^{2}-a_{0}\,\beta\,{b}
^{2}-2\,bp\delta\,\alpha\,\beta\,m-{\beta}^{2}b{m}^{2}\alpha_{2}-b{p}
^{2}{\delta}^{2}{\mathit{\alpha_{2}}}^{2}+m{b}^{2}\gamma\,\alpha_{2}^{2}
-{b}^{2}{m}^{2}\beta\alpha_{2}^{2}\nonumber\\
& -{b}^{2}\alpha\,\beta\,m-{b}^{2}{\alpha}^{2}\beta\,m-\alpha\,{\beta}
^{2}bm+b{\alpha}^{2}{\beta}^{2}{m}^{2}+{b}^{2}{m}^{2}{\alpha}^{2}\beta
+\alpha\,\beta\,{p}^{2}{\delta}^{2}+3\,\mathit{a_{0}}\,\beta\,{b}
^{2}m\nonumber\\
& +3\,a_{0}\,b{\beta}^{2}m+2\,a_{0}\,{b}^{2}\gamma\,\alpha_{2}
+2\,\mathit{a_{0}}\,{b}^{2}p\delta\,\alpha_{2}-2\,\mathit{a_{0}}\,\beta
\,{b}^{2}m\mathit{\alpha_{2}}-3\,\mathit{a_{0}}\,b\beta\,p\delta
+2\,\mathit{a_{0}}\,{b}^{2}\alpha\,\beta\,m\nonumber\\
& -2\,\alpha\,{\beta}^{2}mp\delta+\alpha\,\beta\,bp\delta+{b}^{2}
mp\delta\,{\mathit{\alpha_{2}}}^{2}+b\gamma\,{\mathit{\alpha_{2}}}^{2}
\beta\,m-b\gamma\,p\delta\,{\mathit{\alpha_{2}}}^{2}+2\,\beta
\,bm{\mathit{\alpha_{2}}}^{2}p\delta\nonumber\\
&-bp\delta\,{\alpha}^{2}\beta\,m+{b}^{2}{m}^{2}\alpha\,\beta+2\,b\alpha\,{\beta}^{2}{m}^{2}].
\end{align}
Hence $h(v)=a_{0}v^{2}+a_{1}v^{3}+O(v^{4})$.
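The determination of $a_{0}$ and $a_{1}$ can be automated with a computer algebra system. A minimal sketch of the annihilator recipe is given below; the stand-in $f$ and $g$ are hypothetical polynomials, and for the rational expressions above one would expand $N\phi$ with \texttt{sp.series} instead of \texttt{sp.expand}.
\begin{verbatim}
# Sketch: solve N(phi) = phi'(v)*f(v,phi) + b*phi - g(v,phi) = O(v^4)
# for the coefficients of phi = a0*v^2 + a1*v^3.
import sympy as sp

v, u, b, a0, a1 = sp.symbols('v u b a0 a1')

def center_manifold_coeffs(f, g):
    phi = a0*v**2 + a1*v**3
    N = sp.expand(sp.diff(phi, v)*f.subs(u, phi) + b*phi - g.subs(u, phi))
    eqs = [sp.Eq(N.coeff(v, k), 0) for k in (2, 3)]
    sol = sp.solve(eqs, (a0, a1), dict=True)[0]
    return sp.simplify(sol[a0]), sp.simplify(sol[a1])

f_expr = v*u + v**2    # hypothetical stand-in for f(v,u)
g_expr = v**2 + v*u    # hypothetical stand-in for g(v,u)
print(center_manifold_coeffs(f_expr, g_expr))   # (1/b, -1/b**2)
\end{verbatim}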
\section{Hopf bifurcation}
To analyze the behaviour of the solutions of \eqref{ruanmod} when $ s=0 $, we make the change of coordinates $x=S-S_2$, $y=I-I_2$, to obtain a new system equivalent to \eqref{ruanmod} with an equilibrium at $(0,0)$ in the $x$-$y$ plane. Under this change the system becomes:
\begin{align}
\frac{dx}{dt} &= {\frac {a_{{11}}x+a_{{12}}y+c_{{1}}xy+c_{{2}}{y}^{2} + c_7}{1+\alpha_{{1}}y+
\alpha_{{1}}I_{{2}}}}, \nonumber \\
\frac{dy}{dt} &= {\frac {a_{{21}}x+a_{{22}}y+c_{{3}}xy+c_{{4}}x{y}^{2}+c_{{5}}{y}^{2}+c
_{{6}}{y}^{3} + c_8}{ \left( 1+\alpha_{{1}}y+\alpha_{{1}}I_{{2}} \right)
\left( 1+\alpha_{{2}}y+\alpha_{{2}}I_{{2}} \right) }}. \label{hmrhb2}
\end{align}
where
\begin{align}
a_{11} &= -b-\beta_{{1}}I_{{2}}-b\alpha_{{1}}I_{{2}} \\
a_{12} &= -2\,bm\alpha_{{1}}I_{{2}}+bm\alpha_{{1}}-b\alpha_{{1}}S_{{2}}+2\,p
\delta\,\alpha_{{1}}I_{{2}}+p\delta-bm-\beta_{{1}}S_{{2}} \\
c_{1} &=-b\alpha_{{1}}-\beta_{{1}} \\
c_{2} &= -bm\alpha_{{1}}+p\delta\,\alpha_{{1}} \\
a_{21} &=-I_{{2}} \left( -\beta_{{1}}-\beta_{{1}}\alpha_{{2}}I_{{2}} \right) \\
a_{22} &= -2\,p\delta\,\alpha_{{1}}I_{{2}}+2\,\beta_{{1}}\alpha_{{2}}S_{{2}}I_{{
2}}-3\,p\delta\,\alpha_{{1}}\alpha_{{2}}{I_{{2}}}^{2}-2\,\gamma\,
\alpha_{{1}}I_{{2}} \nonumber \\
&-2\,\gamma\,\alpha_{{2}}I_{{2}}-2\,p\delta\,\alpha_
{{2}}I_{{2}}-2\,\beta_{{2}}\alpha_{{1}}I_{{2}}-3\,\gamma\,\alpha_{{1}}
\alpha_{{2}}{I_{{2}}}^{2}-\gamma-p\delta-\beta_{{2}}+\beta_{{1}}S_{{2}
} \\
c_{3} &= 2\,\beta_{{1}}\alpha_{{2}}I_{{2}}+\beta_{{1}} \\
c_{4} &= \beta_{{1}}\alpha_{{2}} \\
c_{5} &= -3\,p\delta\,\alpha_{{1}}\alpha_{{2}}I_{{2}}-3\,\gamma\,\alpha_{{1}}
\alpha_{{2}}I_{{2}}-p\delta\,\alpha_{{1}}+\beta_{{1}}\alpha_{{2}}S_{{2
}}-\gamma\,\alpha_{{1}}-\gamma\,\alpha_{{2}}-p\delta\,\alpha_{{2}}\nonumber \\
&-\beta_{{2}}\alpha_{{1}} \\
c_{6} &= -p\delta\,\alpha_{{1}}\alpha_{{2}}-\gamma\,\alpha_{{1}}\alpha_{{2}} \\
c_7 &= -(\beta_{{1}}S_{{2}}I_{{2}}-bm\alpha_{{1}}I_{{2}}+bS_{{2}}-p\delta\,I_{{
2}}-p\delta\,\alpha_{{1}}{I_{{2}}}^{2}+b\alpha_{{1}}S_{{2}}I_{{2}}+bmI
_{{2}} \nonumber \\
&-bm+bm\alpha_{{1}}{I_{{2}}}^{2})\\
c_8 &=- I_2 [p\delta\,\alpha_{{1}}I_{{2}}+p\delta+p\delta\,\alpha_{{2}}I_{{2}}+
\gamma\,\alpha_{{2}}I_{{2}}-\beta_{{1}}\alpha_{{2}}S_{{2}}I_{{2}}+
\gamma\,\alpha_{{1}}I_{{2}}+\beta_{{2}}\alpha_{{1}}I_{{2}} \nonumber \\
&+\gamma+ \gamma\,\alpha_{{1}}\alpha_{{2}}{I_{{2}}}^{2}-\beta_{{1}}S_{{2}}+\beta
_{{2}}+p\delta\,\alpha_{{1}}\alpha_{{2}}{I_{{2}}}^{2}].
\end{align}
But from the equations for the equilibrium point we can prove that $c_7=c_8=0$, so the system we will work with is
\begin{align}
\frac{dx}{dt} &= {\frac {a_{{11}}x+a_{{12}}y+c_{{1}}xy+c_{{2}}{y}^{2} }{1+\alpha_{{1}}y+
\alpha_{{1}}I_{{2}}}}, \nonumber \\
\frac{dy}{dt} &= {\frac {a_{{21}}x+a_{{22}}y+c_{{3}}xy+c_{{4}}x{y}^{2}+c_{{5}}{y}^{2}+c
_{{6}}{y}^{3} }{ \left( 1+\alpha_{{1}}y+\alpha_{{1}}I_{{2}} \right)
\left( 1+\alpha_{{2}}y+\alpha_{{2}}I_{{2}} \right) }}. \label{hmrhb1}
\end{align}
If we denote system \eqref{ruanmod} as $ (S,I)' = f(S,I) $ and system \eqref{hmrhb2} as $(x,y)' = F(x,y)$, $ f=(f_1,f_2) $, $F=(F_1,F_2)$ then
$$ F(x,y) = f(x+S_2,y+I_2), $$
and
$$ \frac{ \partial F_i }{ \partial x } (x,y) = \frac{ \partial f_i }{ \partial S } (x+S_2,y+I_2)\frac{ \partial S }{ \partial x} (x,y) + \frac{ \partial f_i }{ \partial I } (x+S_2,y+I_2)\frac{ \partial I }{ \partial x} (x,y)= \frac{ \partial f_i }{ \partial S } (x+S_2,y+I_2)$$
$$ \frac{ \partial F_i }{ \partial y } (x,y) = \frac{ \partial f_i }{ \partial S } (x+S_2,y+I_2)\frac{ \partial S }{ \partial y} (x,y) + \frac{ \partial f_i }{ \partial I } (x+S_2,y+I_2)\frac{ \partial I }{ \partial y} (x,y)= \frac{ \partial f_i }{ \partial I } (x+S_2,y+I_2) .$$
So the Jacobian matrix $ DF(0,0) $ of \eqref{hmrhb1} at the equilibrium is equal to the Jacobian matrix $Df(S_2,I_2)$ of system \eqref{ruanmod}. We can also compute the partial derivatives of systems \eqref{hmrhb2} and \eqref{hmrhb1} to prove that they are equal, i.e.,
\begin{equation}
Df(S_2,I_2) = DF(0,0).
\end{equation}
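This identity is simply the chain rule applied to the translation $(x,y)=(S-S_2,\,I-I_2)$, and it can be spot-checked numerically; a minimal sketch with a hypothetical stand-in vector field:
\begin{verbatim}
# Sketch: the Jacobian of a planar vector field is invariant under translation.
import numpy as np

def jac(fun, z, h=1e-6):
    # central-difference Jacobian of fun: R^2 -> R^2 at the point z
    J = np.zeros((2, 2))
    for j in range(2):
        e = np.zeros(2); e[j] = h
        J[:, j] = (fun(z + e) - fun(z - e))/(2*h)
    return J

f = lambda z: np.array([np.sin(z[0])*z[1], z[0]**2 - z[1]])   # stand-in field
S2, I2 = 0.7, 0.3
F = lambda w: f(w + np.array([S2, I2]))                       # shifted field
print(np.allclose(jac(f, np.array([S2, I2])), jac(F, np.zeros(2))))  # True
\end{verbatim}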
Therefore the systems \eqref{hmrhb1} and \eqref{ruanmod} are equivalent, and we can work with system \eqref{hmrhb1}. The Jacobian matrix $DF(0,0)$ of \eqref{hmrhb1} is:
\begin{equation}
DF(0,0)= \left[ \begin {array}{cc} {\dfrac {a_{{11}}}{1+\alpha_{{1}}I_{{2}}}}&
{\dfrac {a_{{12}}}{ 1+\alpha_{{1}}I_{{2}} }}\\[2ex]
{\dfrac {a_{{21}}}{ \left( 1+\alpha_{{2}}I_{{2}} \right) \left( 1+\alpha_{{1}}I_{{2}} \right) }}&{
\dfrac {a_{{22}}}{ \left( 1+\alpha_{{2}}I_{{2}} \right) \left( 1+\alpha_{{1}}I_{{2}} \right) }}\end {array} \right].
\end{equation}
So system \eqref{hmrhb1} can be rewritten as
\begin{align}
\dfrac{dx}{dt} &= {\dfrac {a_{{11}}x}{1+\alpha_{{1}}I_{{2}}}}+{\frac {a_{{12}}y}{1+\alpha
_{{1}}I_{{2}}}} + G_1 (x,y) \\
\dfrac{dy}{dt} &= {\frac {a_{{21}}x}{ \left( 1+\alpha_{{1}}I_{{2}} \right) \left( 1+
\alpha_{{2}}I_{{2}} \right) }}+{\frac {a_{{22}}y}{ \left( 1+\alpha_{{1
}}I_{{2}} \right) \left( 1+\alpha_{{2}}I_{{2}} \right) }} + G_2(x,y).
\end{align}
where
\begin{align}
G_1 &= \dfrac{1}{(1 + \alpha_1 y+ \alpha_1 I_2)(1 + \alpha_1 I_2)} \{[(1 + \alpha_1 I_2)c_1-a_{11} \alpha_1] xy + [ c_2 ( 1 + \alpha_1 I_2 ) - \alpha_1 a_{12} ] y^{2} \} \\
G_2 &= \dfrac{1}{(1 + \alpha_1 y+ \alpha_1 I_2)(1 + \alpha_2 y+ \alpha_2 I_2)(1 + \alpha_1 I_2)(1 + \alpha_2 I_2)} \{ [c_3 (1 + \alpha_1 I_2)(1 + \alpha_2 I_2)\nonumber \\ &- a_{21} ( \alpha_2 + \alpha_1 + 2 \alpha_1 \alpha_2 I_2 ) ] x y + [ c_4 ( 1 + \alpha_1 I_2 )( 1 + \alpha_2 I_2 ) - a_{21} \alpha_1 \alpha_2 ] xy^{2}\nonumber \\ &+ [c_{5}( 1 + \alpha_1 I_2 )( 1 + \alpha_2 I_2) - a_{22} ( \alpha_2 + \alpha_1 + 2 \alpha_1 \alpha_2 I_2 ) ] y^{2} + [ c_{6} ( 1 + \alpha_1 I_2 )( 1 + \alpha_2 I_2 )\nonumber \\
&- a_{22} \alpha_1 \alpha_2 ] y^{3} \}.
\end{align}
We need the normal form of system \eqref{hmrhb1}. When $s_2=0$ and conditions (i) and (ii) are satisfied, the eigenvalues of $ DF(0,0) $ are
$$ \Lambda i , \quad - \Lambda i ,$$
with complex eigenvectors $$ v= \left( \begin{matrix}
- 1 \\ \dfrac{ - \Lambda i(1 + \alpha_1 I_2)+ a_{11} }{a_{12}}
\end{matrix}\right), \quad \bar{v} = \left( \begin{matrix}
- 1 \\ \dfrac{ \Lambda i(1 + \alpha_1 I_2)+ a_{11} }{a_{12}}
\end{matrix}\right) .$$
Using the Jordan canonical form of the matrix $DF(0,0)$ and the procedure in \cite{perko} (pp. 107--108), we use the change of variables $u=x$, $v= \frac{a_{11}x + a_{12} y}{ \Lambda(1 + \alpha_{1} I_2 ) } $, to obtain the following equivalent system:
\begin{equation}
\frac{d}{dt}\left( \begin{matrix}
u \\ v
\end{matrix}\right)= \left( \begin{matrix}
0 & \Lambda \\ - \Lambda & 0
\end{matrix}\right) \left( \begin{matrix}
u \\ v
\end{matrix}\right) + \left( \begin{matrix}
H_1 (u,v) \\ H_2(u,v)
\end{matrix}\right).
\end{equation}
where
\begin{align}
H_1(u,v) &= -\dfrac{\left( \left( -a_{{12}}c_{{1}}+a_{{11}}c_{{2}} \right) u+ \left( -
\Lambda c_{{2}}\alpha_{{1}}I_{{2}}+ \Lambda a_{{12}
}\alpha_{{1}}- \Lambda c_{{2}} \right) v \right) \left(
\left( \Lambda+ \Lambda \alpha_{{1}}I_{{2}} \right) v-a_{{11}}u
\right)
}{a_{{12}} \left( \left( \alpha_{{1}} \Lambda + \Lambda {\alpha_{{1}}}^{2}I_{{2}} \right) v+a_{{12}}-\alpha_{{1}}a_{{
11}}u+a_{{12}}\alpha_{{1}}I_{{2}} \right)
} \\
H_2(u,v) &= - \dfrac{1}{h(u,v)} \left[ (\Lambda(1+ \alpha_1 I_2)v-a_{11}u) \left( A_1 v^{2}+A_2 uv + A_3 v + A_4 u^{2} + A_5 u \right) \right].
\end{align}
And:
\begin{align*}
A_1 &= {\Lambda}^{2} \left( 1+\alpha_{{1}}I_{{2}} \right) ^{2} [ -a_{{12
}}c_{{6}}\alpha_{{2}}{I_{{2}}}^{2}\alpha_{{1}}-a_{{11}}c_{{2}}\alpha_{
{1}}{I_{{2}}}^{2}{\alpha_{{2}}}^{2}-a_{{11}}c_{{2}}\alpha_{{1}}I_{{2}}
\alpha_{{2}}\nonumber \\
&-a_{{12}}c_{{6}}\alpha_{{1}}I_{{2}}+a_{{11}}a_{{12}}\alpha
_{{1}}{\alpha_{{2}}}^{2}I_{{2}}+a_{{11}}a_{{12}}\alpha_{{1}}\alpha_{{2
}} +a_{{12}}a_{{22}}\alpha_{{1}}\alpha_{{2}}\nonumber \\
&-a_{{11}}c_{{2}}{\alpha_{{2
}}}^{2}I_{{2}}-a_{{12}}c_{{6}}\alpha_{{2}}I_{{2}}-a_{{11}}c_{{2}}
\alpha_{{2}}-a_{{12}}c_{{6}} ]
\\
A_2 &= -\Lambda\, \left( 1+\alpha_{{1}}I_{{2}} \right) [ a_{{11}}a_{{12
}}c_{{1}}{\alpha_{{2}}}^{2}\alpha_{{1}}{I_{{2}}}^{2}+{a_{{12}}}^{2}c_{
{4}}\alpha_{{2}}{I_{{2}}}^{2}\alpha_{{1}}-2\,a_{{12}}a_{{11}}c_{{6}}
\alpha_{{2}}{I_{{2}}}^{2}\alpha_{{1}}\nonumber \\
&-2\,c_{{2}}\alpha_{{1}}{I_{{2}}}^{2}{\alpha_{{2}}}^{2}{a_{{11}}}^{2} +a_{{12}}\alpha_{{1}}{\alpha_{{2}}}^{2}{a_{{11}}}^{2}I_{{2}}-2\,a_{{12}}a_{{11}}c_{{6}}\alpha_{{1}}I_{{2}
}-2\,c_{{2}}\alpha_{{1}}I_{{2}}\alpha_{{2}}{a_{{11}}}^{2}\nonumber \\
&+{a_{{12}}}^{2}c_{{4}}\alpha_{{1}}I_{{2}}+a_{{11}}a_{{12}}c_{{1}}\alpha_{{2}}\alpha
_{{1}}I_{{2}}+a_{{11}}a_{{12}}c_{{1}}{\alpha_{{2}}}^{2}I_{{2}}+{a_{{12
}}}^{2}c_{{4}}\alpha_{{2}}I_{{2}}-2\,a_{{12}}a_{{11}}c_{{6}}\alpha_{{2
}}I_{{2}}\nonumber \\
&-2\,{a_{{11}}}^{2}c_{{2}}{\alpha_{{2}}}^{2}I_{{2}}+{a_{{12}}}
^{2}c_{{4}}-2\,{a_{{11}}}^{2}c_{{2}}\alpha_{{2}}-2\,a_{{12}}a_{{11}}c_{{6}}+a_{{11}}a_{{12}}c_{{1}}\alpha_{{2}}+a_{{12}}\alpha_{{1}}\alpha_{
{2}}{a_{{11}}}^{2}\nonumber \\
&-{a_{{12}}}^{2}\alpha_{{1}}a_{{21}}\alpha_{{2}}+2\,a_{{12}}a_{{11}}a_{{22}}\alpha_{{1}}\alpha_{{2}} ] \\
A_3 &= \Lambda\, \left( 1+\alpha_{{1}}I_{{2}} \right) a_{{12}} [ -a_{{12
}}c_{{5}}\alpha_{{1}}{I_{{2}}}^{2}\alpha_{{2}}+a_{{12}}a_{{11}}\alpha_
{{1}}{\alpha_{{2}}}^{2}{I_{{2}}}^{2}+2\,a_{{12}}a_{{22}}\alpha_{{1}}
\alpha_{{2}}I_{{2}}\nonumber \\
&+2\,a_{{12}}a_{{11}}\alpha_{{1}}\alpha_{{2}}I_{{2}}
-a_{{12}}c_{{5}}\alpha_{{1}}I_{{2}}+a_{{12}}a_{{22}}\alpha_{{1}}+a_{{
11}}a_{{12}}\alpha_{{1}}-a_{{12}}c_{{5}}\alpha_{{2}}I_{{2}}+a_{{12}}a_
{{22}}\alpha_{{2}}\nonumber \\
&-2\,a_{{11}}c_{{2}}\alpha_{{1}}{I_{{2}}}^{2}\alpha_{{2}}-a_{{11}}c_{{2}}\alpha_{{1}}I_{{2}}-a_{{11}}c_{{2}}
-a_{{11}}c_{{2}}{\alpha_{{2}}}^{2}{I_{{2}}}^{2}-2\,a_{{11}}c_{{2}}\alpha_{{2}}I_{{2}} ] \\
A_4 &= -a_{{11}} [-{a_{{12}}}^{2}c_{{4}}\alpha_{{2}}I_{{2}}-a_{{11}}a_{
{12}}c_{{1}}\alpha_{{2}}\alpha_{{1}}I_{{2}}+c_{{2}}\alpha_{{1}}I_{{2}}
\alpha_{{2}}{a_{{11}}}^{2}-{a_{{12}}}^{2}c_{{4}}\alpha_{{2}}{I_{{2}}}^
{2}\alpha_{{1}}\nonumber \\
&-{a_{{12}}}^{2}c_{{4}}\alpha_{{1}}I_{{2}}-{a_{{12}}}^{2
}c_{{4}}+{a_{{12}}}^{2}\alpha_{{1}}a_{{21}}\alpha_{{2}}-a_{{12}}a_{{11
}}a_{{22}}\alpha_{{1}}\alpha_{{2}}-a_{{11}}a_{{12}}c_{{1}}{\alpha_{{2}
}}^{2}\alpha_{{1}}{I_{{2}}}^{2}\nonumber \\
&+a_{{12}}a_{{11}}c_{{6}}+{a_{{11}}}^{2}
c_{{2}}{\alpha_{{2}}}^{2}I_{{2}}+a_{{12}}a_{{11}}c_{{6}}\alpha_{{2}}{I
_{{2}}}^{2}\alpha_{{1}}+{a_{{11}}}^{2}c_{{2}}\alpha_{{2}}-a_{{11}}a_{{
12}}c_{{1}}\alpha_{{2}}\nonumber \\
&+a_{{12}}a_{{11}}c_{{6}}\alpha_{{2}}I_{{2}}+a_{{12}}a_{{11}}c_{{6}}\alpha_{{1}}I_{{2}}+c_{{2}}\alpha_{{1}}{I_{{2}}}^{
2}{\alpha_{{2}}}^{2}{a_{{11}}}^{2}-a_{{11}}a_{{12}}c_{{1}}{\alpha_{{2}}}^{2}I_{{2}} ] \\
A_5 &= a_{{12}} [ 2\,{a_{{11}}}^{2}c_{{2}}\alpha_{{2}}I_{{2}}+{a_{{11}}}
^{2}c_{{2}}\alpha_{{1}}I_{{2}}+{a_{{12}}}^{2}\alpha_{{1}}a_{{21}}+{a_{
{11}}}^{2}c_{{2}}{\alpha_{{2}}}^{2}{I_{{2}}}^{2}-a_{{12}}a_{{11}}a_{{
22}}\alpha_{{2}}\nonumber \\
&-a_{{12}}a_{{11}}a_{{22}}\alpha_{{1}}-{a_{{12}}}^{2}c_{{3}}\alpha_{{2}}I_{{2}}-{a_{{12}}}^{2}c_{{3}}\alpha_{{1}}I_{{2}}+a_{{
11}}a_{{12}}c_{{5}}+{a_{{12}}}^{2}\alpha_{{2}}a_{{21}}-a_{{12}}a_{{11}
}c_{{1}}\nonumber \\
&+2\,{a_{{12}}}^{2}\alpha_{{1}}a_{{21}}\alpha_{{2}}I_{{2}}-a_{{
12}}a_{{11}}c_{{1}}{\alpha_{{2}}}^{2}{I_{{2}}}^{2}-a_{{12}}a_{{11}}c_{
{1}}\alpha_{{1}}I_{{2}}-2\,a_{{12}}a_{{11}}c_{{1}}\alpha_{{2}}I_{{2}}\nonumber \\
&+a_{{11}}a_{{12}}c_{{5}}\alpha_{{2}}I_{{2}}+a_{{11}}a_{{12}}c_{{5}}
\alpha_{{1}}I_{{2}}-{a_{{12}}}^{2}c_{{3}}\alpha_{{1}}{I_{{2}}}^{2}
\alpha_{{2}}-{a_{{12}}}^{2}c_{{3}}+a_{{11}}a_{{12}}c_{{5}}\alpha_{{1}}
{I_{{2}}}^{2}\alpha_{{2}}\nonumber \\
&-2\,a_{{12}}a_{{11}}a_{{22}}\alpha_{{1}}
\alpha_{{2}}I_{{2}}-2\,a_{{12}}a_{{11}}c_{{1}}\alpha_{{1}}{I_{{2}}}^{2
}\alpha_{{2}}-a_{{12}}a_{{11}}c_{{1}}\alpha_{{1}}{I_{{2}}}^{3}{\alpha_
{{2}}}^{2}+2\,{a_{{11}}}^{2}c_{{2}}\alpha_{{1}}{I_{{2}}}^{2}\alpha_{{2
}}\nonumber \\
&+{a_{{11}}}^{2}c_{{2}}\alpha_{{1}}{I_{{2}}}^{3}{\alpha_{{2}}}^{2}+{a
_{{11}}}^{2}c_{{2}} ]
\\
h(u,v) &=\Lambda\, \left( 1+\alpha_{{1}}I_{{2}} \right) ^{2}a_{{12}} [
\left( \alpha_{{1}}\Lambda+\Lambda\,{\alpha_{{1}}}^{2}I_{{2}}
\right) v+a_{{12}}-\alpha_{{1}}a_{{11}}u+a_{{12}}\alpha_{{1}}I_{{2}}
] \nonumber \\
& [ \left( \alpha_{{2}}\Lambda+\alpha_{{2}}\Lambda\,
\alpha_{{1}}I_{{2}} \right) v+a_{{12}}-\alpha_{{2}}a_{{11}}u+\alpha_{{
2}}I_{{2}}a_{{12}} ] \left( 1+\alpha_{{2}}I_{{2}} \right) .
\end{align*}
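As a sanity check on the linear part of the change of variables $u=x$, $v=(a_{11}x+a_{12}y)/(\Lambda(1+\alpha_{1}I_{2}))$, the sketch below verifies numerically that a traceless $2\times 2$ matrix with positive determinant is brought to the rotation form; the matrix $M$ is a hypothetical stand-in for $DF(0,0)$, so the factor $1+\alpha_{1}I_{2}$ is absorbed into its entries.
\begin{verbatim}
# Sketch: u = x, v = (M00*x + M01*y)/Lambda turns M (tr M = 0, det M > 0)
# into [[0, Lambda], [-Lambda, 0]].
import numpy as np

M = np.array([[2.0, -5.0],
              [2.0, -2.0]])              # stand-in: tr M = 0, det M = 6
Lam = np.sqrt(np.linalg.det(M))          # Lambda = sqrt(det M)
P = np.array([[1.0, 0.0],
              [M[0, 0]/Lam, M[0, 1]/Lam]])     # (u,v)^T = P (x,y)^T
print(np.round(P @ M @ np.linalg.inv(P), 10))  # [[0, Lam], [-Lam, 0]]
\end{verbatim}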
Let
\begin{align}
\bar{a}_2 &= \dfrac{1}{16} [ (H_1)_{uuu} + (H_1)_{uvv} + (H_{2})_{uuv} + (H_2)_{vvv} ] + \dfrac{1}{16( - \Lambda)} [(H_1)_{uv} ((H_1)_{uu} + (H_1)_{vv}) \nonumber \\
& - (H_{2})_{uv} ( (H_{2})_{uu} + (H_2)_{vv} ) - (H_1)_{uu}(H_2)_{uu} + (H_1)_{vv} (H_2)_{vv}].
\end{align}
Then
\begin{align}
\bar{a}_{2} &= \dfrac{3\left( \left( -c_{{1}}\Lambda\,v{\alpha_{{1}}}^{2}I_{{2}}+\Lambda\,va_{{11}}
{\alpha_{{1}}}^{2}-a_{{12}}c_{{1}}\alpha_{{1}}I_{{2}}+a_{{11}}c_{{2}}
\alpha_{{1}}I_{{2}}-c_{{1}}\Lambda\,v\alpha_{{1}}-a_{{12}}c_{{1}}+a_{{
11}}c_{{2}} \right) a_{{12}}{a_{{11}}}^{2}\alpha_{{1}}
\right)}{8 \left( a_{{12}}+\alpha_{{1}}\Lambda\,v \right) ^{4} \left( 1+\alpha_{
{1}}I_{{2}} \right) ^{3}
} \nonumber \\
& - \dfrac{\left( -3\,a_{{11}}c_{{2}}-3\,a_{{11}}c_{{2}}\alpha_{{1}}I_{{2}}+2\,a
_{{12}}a_{{11}}\alpha_{{1}}+a_{{12}}c_{{1}}+a_{{12}}c_{{1}}\alpha_{{1}
}I_{{2}} \right) \alpha_{{1}}{\Lambda}^{2}
}{8 \left( 1+\alpha_{{1}}I_{{2}} \right) {a_{{12}}}^{3}} \nonumber \\
& - \dfrac{1}{8 \Lambda\, \left( 1+\alpha_{{1}}I_{{2}} \right) ^{4}{a_{{12}}}^{4}
\left( 1+\alpha_{{2}}I_{{2}} \right) ^{3}
}[ 2\,a_{{11}}A_{{5}}\alpha_{{1}}\Lambda+6\,a_{{11}}A_{{5}}\alpha_{{1}}
\Lambda\,\alpha_{{2}}I_{{2}}+2\,a_{{11}}A_{{5}}{\alpha_{{1}}}^{2}
\Lambda\,I_{{2}} \nonumber \\
&+4\,a_{{11}}A_{{5}}{\alpha_{{1}}}^{2}\Lambda\,{I_{{2}}
}^{2}\alpha_{{2}}+2\,a_{{11}}A_{{5}}\alpha_{{2}}\Lambda-{a_{{11}}}^{2}
A_{{3}}\alpha_{{1}}-2\,{a_{{11}}}^{2}A_{{3}}\alpha_{{1}}\alpha_{{2}}I_
{{2}}-{a_{{11}}}^{2}A_{{3}}\alpha_{{2}}-a_{{11}}A_{{2}}a_{{12}} \nonumber \\
&-a_{{11
}}A_{{2}}a_{{12}}\alpha_{{2}}I_{{2}}-a_{{11}}A_{{2}}a_{{12}}\alpha_{{1
}}I_{{2}}-a_{{11}}A_{{2}}a_{{12}}\alpha_{{1}}{I_{{2}}}^{2}\alpha_{{2}}
+A_{{4}}\Lambda\,a_{{12}}+A_{{4}}\Lambda\,a_{{12}}\alpha_{{2}}I_{{2}}\nonumber \\
&+2\,A_{{4}}\Lambda\,a_{{12}}\alpha_{{1}}I_{{2}}+2\,A_{{4}}\Lambda\,a_{{
12}}\alpha_{{1}}{I_{{2}}}^{2}\alpha_{{2}}+A_{{4}}\Lambda\,a_{{12}}{
\alpha_{{1}}}^{2}{I_{{2}}}^{2}+A_{{4}}\Lambda\,a_{{12}}{\alpha_{{1}}}^
{2}{I_{{2}}}^{3}\alpha_{{2}}]\nonumber \\
&+ \dfrac{3}{8} {\frac {(-A_{{1}}a_{{12}}-A_{{1}}a_{{12}}\alpha_{{2}}I_{{2}}+A_{{3}}
\alpha_{{1}}\Lambda+2\,A_{{3}}\alpha_{{1}}\Lambda\,\alpha_{{2}}I_{{2}}
+A_{{3}}\alpha_{{2}}\Lambda)}{ \left( 1+\alpha_{{1}}I_{{2}} \right) ^{2
}{a_{{12}}}^{4} \left( 1+\alpha_{{2}}I_{{2}} \right) ^{3}}} \nonumber \\
&- \dfrac{1}{16 \Lambda }[ -2\,{\frac {\Lambda\, \left( -2\,a_{{11}}c_{{2}}-2\,a_{{11}}c_{{2}}
\alpha_{{1}}I_{{2}}+a_{{12}}a_{{11}}\alpha_{{1}}+a_{{12}}c_{{1}}+a_{{
12}}c_{{1}}\alpha_{{1}}I_{{2}} \right) }{{a_{{12}}}^{4} \left( 1+\alpha_{{1}}I_{{2}}
\right) ^{2}}} \nonumber \\
& -2\,{\frac { \left( A_{{5}}\Lambda+A_{{5}}\Lambda\,\alpha_{{1}}I_{{2}}
-a_{{11}}A_{{3}} \right) \left( -a_{{11}}A_{{5}}+A_{{3}}\Lambda+A_{{3
}}\Lambda\,\alpha_{{1}}I_{{2}} \right) }{{\Lambda}^{2} \left( 1+\alpha
_{{1}}I_{{2}} \right) ^{6}{a_{{12}}}^{6} \left( 1+\alpha_{{2}}I_{{2}}
\right) ^{4}}} \nonumber \\
& -4\,{\frac { \left( -a_{{12}}c_{{1}}+a_{{11}}c_{{2}} \right) {a_{{11}}
}^{2}A_{{5}}}{{a_{{12}}}^{5} \left( 1+\alpha_{{1}}I_{{2}} \right) ^{4}
\Lambda\, \left( 1+\alpha_{{2}}I_{{2}} \right) ^{2}}}
+4\,{\frac { \left( -c_{{2}}\alpha_{{1}}I_{{2}}+a_{{12}}\alpha_{{1}}-c_
{{2}} \right) {\Lambda}^{2}A_{{3}}}{{a_{{12}}}^{5} \left( 1+\alpha_{{1
}}I_{{2}} \right) ^{2} \left( 1+\alpha_{{2}}I_{{2}} \right) ^{2}}}
].
\end{align}
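Because the closed form of $ \bar{a}_{2} $ above is unwieldy, it is convenient to cross-check it by applying the defining formula directly to numerically computed partial derivatives of $H_1$ and $H_2$ at the origin. A minimal sketch follows; the stand-in $H_1$, $H_2$, and $\Lambda$ are hypothetical, and in practice the expressions given earlier would be plugged in.
\begin{verbatim}
# Sketch: evaluating a2_bar from numerically computed partials of H1, H2.
import numpy as np

def d(f, i, j, h=1e-2):
    # i-th u-derivative and j-th v-derivative of f at (0, 0)
    if i > 0:
        return (d(lambda u, v: f(u + h, v), i - 1, j, h)
                - d(lambda u, v: f(u - h, v), i - 1, j, h))/(2*h)
    if j > 0:
        return (d(lambda u, v: f(u, v + h), i, j - 1, h)
                - d(lambda u, v: f(u, v - h), i, j - 1, h))/(2*h)
    return f(0.0, 0.0)

Lam = 1.0                                              # hypothetical
H1 = lambda u, v: u**3 + u*v**2 + 0.2*u**2 + 0.5*u*v   # stand-ins
H2 = lambda u, v: u**2*v + v**3 + u**2

a2_bar = ((d(H1,3,0) + d(H1,1,2) + d(H2,2,1) + d(H2,0,3))/16.0
          + (d(H1,1,1)*(d(H1,2,0) + d(H1,0,2))
             - d(H2,1,1)*(d(H2,2,0) + d(H2,0,2))
             - d(H1,2,0)*d(H2,2,0) + d(H1,0,2)*d(H2,0,2))/(16.0*(-Lam)))
print(a2_bar)   # 1.0375 for these stand-ins
\end{verbatim}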
\end{document}
\begin{document}
\title{Temporal profile of biphotons generated from a hot atomic vapor and spectrum of electromagnetically induced transparency}
\author{
Shih-Si Hsiao,$^{1,}$\footnote{Electronic address: {\tt [email protected]}}
Wei-Kai Huang,$^1$
Yi-Min Lin,$^1$
Jia-Mou Chen,$^1$
Chia-Yu Hsu,$^1$ and
Ite A. Yu$^{1,2,}$}\email{[email protected]}
\address{$^1$Department of Physics, National Tsing Hua University, Hsinchu 30013, Taiwan \\
$^2$Center for Quantum Technology, Hsinchu 30013, Taiwan
}
\begin{abstract}
We systematically studied the temporal profile of biphotons, i.e., pairs of time-correlated single photons, generated from a hot atomic vapor via the spontaneous four-wave mixing process. The measured temporal width of the biphoton wave packet, or two-photon correlation function, varied from about 70 to 580 ns as the coupling power was changed. We derived an analytical expression for the biphoton's spectral profile in the Doppler-broadened medium. The analytical expression reveals that the spectral profile is mainly determined by the effect of electromagnetically induced transparency (EIT), and behaves like a Lorentzian function with a linewidth approximately equal to the EIT linewidth. Consequently, the biphoton's temporal profile influenced by the Doppler broadening is an exponential-decay function, which was consistent with the experimental data. Employing a weak input probe field of classical light, we further measured the EIT spectra under the same experimental conditions as those in the biphoton measurements. The theoretical predictions of the biphoton wave packets calculated with the parameters determined by the classical-light EIT spectra are consistent with the experimental data. The consistency demonstrates that in the Doppler-broadened medium, the classical-light EIT spectrum is a good indicator of the biphoton's temporal profile. Besides, the measured biphoton temporal widths agree well with the predictions of the analytical formula based on the biphoton's EIT effect. This study provides an analytical way to quantitatively understand the biphoton's spectral and temporal profiles in the Doppler-broadened medium.
\end{abstract}
\maketitle
\newcommand{\FigOne}{
\begin{figure}
\caption{
Transition diagram of the SFWM process. In the presence of the pump and coupling fields, the vacuum fluctuation induces the generation of a pair of the signal and probe single photons. The pump transition has a large one-photon detuning, $\Delta_p$, such that the excitation to state $|4\rangle$ is negligible. In the experiment, a hot vapor of $^{87}$Rb atoms was used.}
\label{fig:transition_diagram}
\end{figure}
}
\newcommand{\FigTwo}{
\begin{figure}
\caption{
(a) Theoretical predictions of the biphoton's EIT spectrum of a Doppler-broadened medium. Solid lines are the numerical results calculated from Eq.~(\ref{eq:T_exact}), and dashed lines are those of the analytical formula of Eq.~(\ref{eq:T_approximation}). (b) Percentage difference between the numerical and analytical values of the FWHM of the biphoton's EIT spectrum.}
\label{fig:eit_simulation}
\end{figure}
}
\newcommand{\FigThree}{
\begin{figure}
\caption{
(a) Theoretical predictions of the biphoton's FWM spectrum of a Doppler-broadened medium. Solid lines are the numerical results calculated from Eq.~(\ref{eq:FWM_part}), and dashed lines are those of the analytical formula of Eq.~(\ref{eq:FWM_part_th}). (b) Percentage difference between the numerical and analytical values of the FWHM of the biphoton's FWM spectrum.}
\label{fig:fwm_simulation}
\end{figure}
}
\newcommand{\FigFour}{
\begin{figure}
\caption{
Comparisons between the linewidths of the biphoton's EIT, FWM, and overall spectra, i.e., $\Gamma_{\rm EIT}$, $\Gamma_{\rm FWM}$, and $\Gamma_{\rm BI}$, plotted against $\Omega_c^2$ at various values of the optical depth and decoherence rate.}
\label{fig:three_linewidth_comparison}
\end{figure}
}
\newcommand{\FigFive}{
\begin{figure}
\caption{
Theoretical predictions of the biphoton's overall spectrum of a Doppler-broadened medium. Solid lines are the numerical results given by Eq.~(\ref{eq:biphoton_frequency}), and dashed lines are the results of the analytical formula in Eq.~(\ref{eq:T_approximation2}).}
\label{fig:biphoton_simulation}
\end{figure}
}
\newcommand{\FigSix}{
\begin{figure}
\caption{
(a)-(e) From top to bottom, blue lines are the measured EIT spectra at the coupling powers of 0.5, 1, 2, 4, and 8 mW, and red lines are the theoretical predictions calculated from Eq.~(\ref{eq:T_exact}).}
\label{fig:experiment_data}
\end{figure}
}
\newcommand{\FigSeven}{
\begin{figure}
\caption{
Temporal width of the biphoton wave packet as a function of the coupling power. Red circles are obtained from the best fits of the biphoton wave packets, examples of which are shown in Fig.~\ref{fig:experiment_data}.}
\label{fig:eit_biphoton_linewidth}
\end{figure}
}
\section{Introduction} \label{sec:introduction}
The biphoton is a pair of time-correlated single photons. When the first photon is detected to trigger a quantum operation, the second photon in the same pair will be employed in the quantum operation. The first and second photons can be regarded as the heralding and heralded single photons. Biphotons can be used in quantum information processing~\cite{QI1,QI2,QI3,QI4,QI5,QI6}, such as quantum communication~\cite{Q_communication_1,Q_communication_2,Q_communication_3}, quantum memory~\cite{Q_memory_1,Q_memory_2,Q_memory_3,Q_memory_4}, and quantum interference~\cite{Q_interference_1,Q_interference_2,Q_interference_3}.
One of the mechanisms to produce biphotons is the spontaneous four-wave mixing (SFWM) process~\cite{SWFM1,SWFM2,SWFM3,SWFM4,SWFM5,SWFM6,SWFM7,SWFM8,SWFM9,SWFM10,SWFM11,SWFM12,SWFM13,SWFM14,SWFM15,SWFM16,SWFM17,SWFM18,SWFM19, SWFM20,SWFM21,SWFM22,SWFM23,SWFM24}, which has been commonly employed with media of cold or hot atomic vapors. There are several theoretical models to study the temporal profile of the biphoton. In Ref.~\cite{SWFM14}, S.~Du {\it et al}. developed the theory of the time-correlation function of biphotons in a cold atomic vapor. The authors systematically studied the two-photon correlation function, i.e., the biphoton wave packet, discussed the properties of each term in the wave packet, and predicted the effects of the dephasing rate and propagation delay time on the temporal profile of the biphoton wave packet. Based on the theoretical model in Ref.~\cite{SWFM14}, several groups conducted experiments to study the temporal profiles of biphotons with cold atoms~\cite{SWFM7,SWFM8, SWFM9,SWFM10,ColdAtoms1,ColdAtoms2}. Moreover, in Refs.~\cite{SWFM8} and \cite{SWFM14} the authors derived analytical expressions of the time-correlation function, providing a convenient method to predict the temporal profile of biphotons in cold-atom systems.
The SFWM-generated biphoton source of a room-temperature or hot atomic vapor has the merits of adjustable linewidth, tunable frequency, and high generation rate. In Ref.~\cite{SWFM18}, C.~Shu {\it et al}. reported the generation of biphotons from a hot atomic vapor, and provided a numerical simulation of the time-correlation function of biphotons. After Ref.~\cite{SWFM18}, several groups improved the efficiency of biphoton generation with hot-atom sources, and compared their data with the results of numerical simulations~\cite{SWFM23, SWFM24, HotAtom1}.
However, no analytical formula has been available so far for the temporal profile of the biphoton's time-correlation function or wave packet in a Doppler-broadened medium, and a systematic study of the temporal profile has not been reported before.
In this article, we report a systematic study of the temporal profile of the SFWM biphotons generated from a Doppler-broadened medium. Although the spectral profile of the biphoton is influenced by the electromagnetically induced transparency (EIT) effect, the four-wave mixing (FWM) process, the phase matching condition, and the Doppler effect as well as the Boltzmann distribution of the velocity, we derived an analytical expression to show that the biphoton's overall spectrum is approximately Lorentzian. Thus, the temporal profile of the biphoton wave packet behaves like an exponential-decay function with a time constant mainly determined by the EIT effect. Furthermore, we experimentally demonstrated that the measured biphoton temporal profile is predominantly characterized by the measured EIT spectrum. The predictions of the biphoton's temporal width calculated from the analytical formula are in good agreement with the experimental data.
We organize the article as follows. In Sec.~II, we start with the time-correlation function or the wave packet of the SFWM-generated biphoton in a Doppler-broadened medium, and derive the analytical expressions or formulas of the biphoton's spectral and temporal profiles. In Sec.~III, we describe our experimental setup for the measurement of the EIT spectrum with classical light and that of the biphoton wave packet. In Sec.~IV, we present the data of the EIT spectra and the biphoton wave packets, and compare them with the predictions of the theoretical model. Finally, we give a conclusion in Sec.~V.
\section{Theoretical Predictions}
Biphotons, i.e., pairs of single photons, are produced from a hot atomic vapor via the spontaneous four-wave mixing (SFWM) process. Figure~\ref{fig:transition_diagram} shows the relevant energy levels and transitions of the SFWM process for the generation of biphotons. All population is placed in the ground state $|1\rangle$. The pump field is far detuned from the transition $|1\rangle$ $\rightarrow$ $|4\rangle$, and the coupling field drives the transition $|2\rangle$ $\rightarrow$ $|3\rangle$ resonantly. Due to the vacuum fluctuation, a pair of anti-Stokes and Stokes photons can be spontaneously emitted. The frequencies of the pump field, anti-Stokes photon, coupling field, and Stokes photon form a resonant four-photon transition. Note that we employ the hyperfine optical pumping (HOP) field to empty the population in $|2\rangle$, which may contribute some residual light of the HOP field in its hollow region. The anti-Stokes photon propagates with the speed of light in vacuum, since the pump field and the anti-Stokes photon form a Raman process with a large one-photon detuning. The Stokes photon is slow light due to the effect of electromagnetically induced transparency (EIT) under the two-photon resonance of the coupling field and the Stokes photon.
\FigOne
The temporal profile of the biphoton wave packet is determined by the time-correlation function between the anti-Stokes and Stokes photons. Considering the phase-matching condition, the time-correlation function is given below \cite{SWFM14}.
\begin{eqnarray}
G^{(2)}(\tau) =
\left| \int_{-\infty}^{\infty} d\delta \frac{1}{2\pi} e^{-i\delta\tau} F(\delta) \right|^2,
\label{eq:biphoton}
\end{eqnarray}
\begin{eqnarray}
F(\delta) =
\frac{\sqrt{k_{as} k_s}L}{2} \chi(\delta)
{\rm sinc} \left[ \frac{k_s L}{4} \xi(\delta) \right]
e^{i(k_s L/4) \xi(\delta)} ,
\label{eq:biphoton_frequency}
\end{eqnarray}
where $\delta$ is the two-photon detuning between the anti-Stokes photon and pump field (or $-\delta$ is that between the Stokes photon and coupling field), $\tau$ is the delay time of the Stokes photon, $k_{as}$ and $k_s$ are the wave vectors of the two photons, $L$ is the medium length, $\chi(\delta)$ is the cross-susceptibility of the Stokes photon induced by the anti-Stokes photon, and $\xi(\delta)$ is the self-susceptibility of the Stokes photon. Considering the Doppler-broadened medium and the all-copropagating scheme, the self-susceptibility and cross-susceptibility are given by
\begin{eqnarray}
\label{eq:self_chi}
\frac{k_s L}{4} \xi(\delta) \!\!\! &=& \!\!\! \frac{\alpha_s\Gamma_3}{2} \int_{-\infty}^{\infty} d\omega_D
\frac{e^{-\omega_D^2/\Gamma_D^2}}{\sqrt{\pi}\Gamma_D}
\\ &\times&
\frac{\delta+i\gamma}{\Omega_c^2-4(\delta+i\gamma)(\delta+\omega_D+i\Gamma_3/2)},
\nonumber
\\
\label{eq:cross_chi}
\frac{\sqrt{k_{as} k_s}L}{2} \chi(\delta) \!\!\! &=& \!\!\!
\frac{\sqrt{\alpha_{as}\alpha_s}\sqrt{\Gamma_3\Gamma_4}}{4} \int_{-\infty}^{\infty} d\omega_D
\frac{e^{-\omega_D^2/\Gamma_D^2}}{\sqrt{\pi}\Gamma_D}
\\ &\times&
\frac{\Omega_p}{\Delta_p-\omega_D + i\Gamma_4/2}
\nonumber \\ &\times&
\frac{\Omega_c}{\Omega_c^2-4(\delta+i\gamma)(\delta+\omega_D+i\Gamma_3/2)},~~
\nonumber
\end{eqnarray}
where $\omega_D$ is the Doppler shift, $\Gamma_D$ is the Doppler width, $\alpha_s = n \sigma_s L$ ($n$ is the atomic density and $\sigma_s$ is the resonant absorption cross section of the Stokes transition) is the optical depth of the entire atomic ensemble interacting with the Stokes photon on resonance, $\alpha_{as}$ is the analogous optical depth of the anti-Stokes transition, $\Omega_p$ and $\Omega_c$ are the Rabi frequencies of the pump and coupling fields, $\Gamma_3$ and $\Gamma_4$ are the spontaneous decay rates of the excited states (i.e., $|3\rangle$ and $|4\rangle$ in Fig.~\ref{fig:transition_diagram}) in the Stokes and anti-Stokes transitions, respectively, $\Delta_p$ is the detuning of the pump field, and $\gamma$ is the dephasing rate of the ground-state coherence, i.e., the decoherence rate. Since the difference between $\Gamma_3$ and $\Gamma_4$ is merely about 5\% in our case, we neglect the difference and set $\Gamma_3 = \Gamma_4 \equiv \Gamma = 2\pi\times6~{\rm MHz}$ in this work. Since the temperature of the vapor cell is 57 $^{\circ}$C, $\Gamma_D =$ 54$\Gamma$ in the calculation~\cite{OurSR2018}.
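Equation~(\ref{eq:biphoton}) can be evaluated numerically for any model spectral profile $F(\delta)$. As an illustration of the connection exploited below, the following sketch (Python with NumPy assumed; the units and values are hypothetical) shows that a complex Lorentzian profile of FWHM $\Gamma_{\rm BI}$ yields an exponential-decay wave packet, $G^{(2)}(\tau)=(\Gamma_{\rm BI}^2/4)\,e^{-\Gamma_{\rm BI}\tau}$ for $\tau>0$:
\begin{verbatim}
# Sketch: direct numerical evaluation of Eq. (biphoton) for a complex
# Lorentzian F(delta), compared with the analytical exponential decay.
import numpy as np

Gamma_BI = 1.0                           # FWHM in arbitrary units
delta = np.linspace(-400, 400, 800001)
dd = delta[1] - delta[0]
F = 1.0/(1.0 - 2j*delta/Gamma_BI)        # complex Lorentzian profile

def G2(tau):
    amp = np.sum(np.exp(-1j*delta*tau)*F)*dd/(2*np.pi)
    return abs(amp)**2

for tau in (0.5, 1.0, 2.0):
    print(tau, G2(tau), (Gamma_BI**2/4)*np.exp(-Gamma_BI*tau))
\end{verbatim}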
In the biphoton's spectral profile shown in Eq.~(\ref{eq:biphoton_frequency}), we first derive the analytical expression corresponding to the EIT effect, i.e.,
\begin{eqnarray}
T_{\rm EIT} \!\!&=&\!\!
\left| \exp\!\! \left[ i\frac{k_s L}{4} \xi(\delta) \right] \right|^2
\nonumber \\
\!\!&=&\!\!
\exp\!\! \left[ -\alpha_s' \!\! \int_{-\infty}^{\infty} \!\! d\omega_D
\:e^{-\omega_D^2/\Gamma_D^2}
\frac{\beta}{(\omega_D-\omega_0)^2+\beta^2} \right],~~
\label{eq:T_exact}
\end{eqnarray}
where
\begin{eqnarray}
\label{eq:alpha_s_p}
\alpha_s'&=&\alpha_s \frac{\sqrt{\pi}\Gamma}{4\Gamma_D} \\
\label{eq:x_0}
\omega_0 &=&
\frac{\delta \Omega_c^2}{4(\delta^2+\gamma^2)} -\delta, \\
\label{eq:beta}
\beta &=&
\frac{2 \delta^2 \Gamma+\gamma(2\gamma\Gamma+\Omega_c^2)}
{4(\delta^2+\gamma^2)}
\approx \frac{2 \delta^2 \Gamma+\gamma\Omega_c^2}{4(\delta^2+\gamma^2)}.
\end{eqnarray}
The above approximation is valid under the assumption of $2\gamma\Gamma \ll \Omega_c^2$, which is very reasonable in typical EIT experiments. We name $T_{\rm EIT}$ as the biphoton's EIT spectrum.
In Eq.~(\ref{eq:T_exact}), the integral results in the Voigt function, i.e., the convolution of a Gaussian and a Lorentzian function. The Voigt function can be expressed in terms of the real part of the Faddeeva function, $w(z)$. Thus,
\begin{equation}
T_{\rm EIT} = \exp \left\{ - \alpha_s' {\rm Re}[w(z)] \right\},
\end{equation}
where $w(z)\equiv {\rm exp}(-z^2) {\rm erfc}(-i z)$ and $z=(\omega_0+i \beta)/\Gamma_D$. Re[$w(z)$] can be approximated as a Lorentzian function of the two-photon detuning ($\delta$) given by
\begin{equation}
{\rm Re}[w(z)] \approx R\left(1-\frac{A}{1+4\delta^2/\Gamma_L^2}\right),
\label{eq:F_approximation}
\end{equation}
where
\begin{eqnarray}
\label{eq:R}
R &=& \:\exp\! \left( \frac{1}{4 \Gamma_D^2} \right)
\:{\rm erfc}\! \left( \frac{1}{2 \Gamma_D} \right), \\
\label{eq:A}
A &=& 1-\frac{1}{R} \:\exp\! \left( \frac{\Omega_c^4}{16 \gamma^2 \Gamma_D^2} \right)
\:{\rm erfc}\! \left( \frac{\Omega_c^2}{4 \gamma \Gamma_D} \right), \\
\label{eq:linewidth}
\Gamma_L &=& 2\gamma
\left( 1 +\frac{\Omega_c^2}{4\gamma \Gamma_D} \right).
\end{eqnarray}
In Eq.~(\ref{eq:F_approximation}), $A/(1+4\delta^2/\Gamma_L^2)$ is the Lorentzian function and $\Gamma_L$ is its full width at half maximum (FWHM). Denote the Lorentzian function as $f(\delta)$ and $1-{\rm Re}[w(z)]/R$ as $g(\delta)$. The percentage difference between them, i.e., $\int (f- g)^2 d\delta / \int (f \cdot g) d\delta$, is always less than 5\% as long as $\gamma >$ 0.01$\Gamma$. When $\Omega_c^2/(4\gamma \Gamma_D) \gg 1$, Eq.~(\ref{eq:F_approximation}) is no longer a good approximation. Using Eq.~(\ref{eq:F_approximation}), we obtain the biphoton's EIT spectrum in the Doppler-broadened medium as follows:
\begin{eqnarray}
\label{eq:T_approximation}
T_{\rm EIT} &=& \:\exp\! \left[ -\alpha_s' R
\left( 1-\frac{A}{1+ 4\delta^2/\Gamma_L^2} \right) \right].
\end{eqnarray}
The FWHM, peak height, and baseline in the above equation are exactly the same as those in the equation below.
\begin{eqnarray}
\label{eq:T_approximation2}
T_{\rm EIT} &\approx& e^{-\alpha_s' R}
\left[1+ \frac{ e^{\alpha_s' RA}-1}{1+4\delta^2/\Gamma_{\rm EIT}^2} \right],\\
\label{eq:Gamma_EIT}
\Gamma_{\rm EIT} &=&
\Gamma_L \sqrt{\frac{ \alpha_s' R A }{ \ln[(1+e^{\alpha_s' R A})/2] } -1}.
\end{eqnarray}
The percentage difference between the right-hand sides of Eqs.~(\ref{eq:T_approximation}) and (\ref{eq:T_approximation2}) is always less than 5\%, and a smaller value of $\alpha_s' R A$ makes the difference less. According to Eq.~(\ref{eq:T_approximation2}), the biphoton's EIT spectrum is approximated as a Lorentzian peak on top of a baseline \cite{OurSR2018}.
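The Faddeeva-function representation can also be checked numerically against the underlying Gaussian--Lorentzian convolution. The sketch below verifies the textbook identity ${\rm Re}[w(x+iy)]=(y/\pi)\int e^{-t^2}/[(x-t)^2+y^2]\,dt$, in which $x$ and $y$ play the roles of $\omega_0/\Gamma_D$ and $\beta/\Gamma_D$; the constant prefactors of Eq.~(\ref{eq:T_exact}) are absorbed into $\alpha_s'$, and the values below are hypothetical.
\begin{verbatim}
# Sketch: Re[w(z)] equals the Voigt (Gaussian-Lorentzian) convolution.
import numpy as np
from scipy.special import wofz
from scipy.integrate import quad

def voigt_by_quadrature(x, y):
    integrand = lambda t: np.exp(-t**2)/((x - t)**2 + y**2)
    val, _ = quad(integrand, -np.inf, np.inf)
    return y/np.pi*val

x, y = 0.3, 0.05    # hypothetical omega_0/Gamma_D and beta/Gamma_D
print(voigt_by_quadrature(x, y))    # numerical convolution
print(wofz(x + 1j*y).real)          # Faddeeva function, same value
\end{verbatim}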
\FigTwo
We compare the biphoton's EIT spectrum calculated from the numerical integral of Eq.~(\ref{eq:T_exact}) with that calculated from the analytical formula of Eq.~(\ref{eq:T_approximation}). Figure~\ref{fig:eit_simulation}(a) shows the comparisons at different values of the coupling Rabi frequency, $\Omega_c$. As $\Omega_c$ becomes larger, i.e., a larger value of $\Omega_c^2/(4\gamma\Gamma_D)$, the deviation between the numerical and analytical spectra becomes larger. Figure~\ref{fig:eit_simulation}(b) shows the percentage difference between the numerical and analytical values of the FWHM of the biphoton's EIT spectrum. One can observe that the percentage difference increases with the value of $\Omega_c^2/(4 \gamma \Gamma_D)$.
We next derive the four-wave mixing (FWM) part of the biphoton's spectral profile shown in Eq.~(\ref{eq:biphoton_frequency}), hereafter named the biphoton's FWM spectrum. The FWM transmission spectrum is the square of the absolute value of Eq.~(\ref{eq:cross_chi}),
\begin{eqnarray}
\label{eq:FWM_part}
T_{\rm FWM}= \left| \frac{\sqrt{k_{as} k_s}L}{2} \chi(\delta) \right|^2.
\end{eqnarray}
At a large pump detuning $\Delta_p \gg \Gamma$, which is the typical experimental condition in the biphoton generation, the term $\Omega_p/(\Delta_p-\omega_D + i\Gamma/2 )$ in Eq.~(\ref{eq:cross_chi}) can be treated as a constant $\Omega_p/\Delta_p$. The integral can then be approximated by the following complex Lorentzian function:
\begin{eqnarray}
\label{eq:FWM_part_th}
T_{\rm FWM} &=&
\left| \frac{\Gamma\sqrt{\alpha_{as}\alpha_s}}{4}
\frac{\Omega_p}{\Delta_p} \right. \left. \frac{B}{1-i (2 \delta/\Gamma_{\rm FWM})} \right|^2, \\
\label{eq:Gamma_FWM}
\Gamma_{\rm FWM} &=& 2\gamma
\left( 1 +\frac{\Omega_c^2}{4\gamma \Gamma_D} \right)
= \Gamma_L, \\
\label{eq:B}
B &=& \frac{\sqrt{\pi}\Omega_c}{4 \gamma \Gamma_D}
\:{\rm exp}\! \left( \frac{\Omega_c^4}{16 \gamma^2 \Gamma_D^2} \right)
\:{\rm erfc}\! \left( \frac{\Omega_c^2}{4 \gamma \Gamma_D} \right).
\end{eqnarray}
The above approximation is valid under the condition of $\Gamma_{\rm FWM} \gg \Omega_c^2/(5.4 \Gamma_D)$, where $\Omega_c^2/(5.4 \Gamma_D)$, obtained numerically, is the separation between the two transmission peaks in the biphoton's FWM spectrum. Thus, the spectral profile of the cross-susceptibility is a Lorentzian function with a FWHM of $\Gamma_{\rm FWM}$, which is equal to $\Gamma_L$.
\FigThree
We compare the biphoton's FWM spectrum calculated from the numerical integral of Eq.~(\ref{eq:FWM_part}) with that calculated from the analytical formula of Eq.~(\ref{eq:FWM_part_th}). Figure~\ref{fig:fwm_simulation}(a) shows the numerical and analytical FWM transmission spectra at the coupling Rabi frequencies $\Omega_c$ of 2.7$\Gamma$, 3.8$\Gamma$, and 5.4$\Gamma$. The deviation between the numerical and analytical spectra is more prominent as $\Omega_c$ increases, because the separation between the two transmission peaks in the FWM spectrum gets larger with a larger value of $\Omega_c$. Figure~\ref{fig:fwm_simulation}(b) shows the percentage difference between the numerical and analytical values of the FWHM of the biphoton's FWM spectrum. One can observe that the difference increases with the value of $\Omega_c^2/(4 \gamma \Gamma_D)$.
\FigFour
The biphoton's overall spectrum shown by Eq.~(\ref{eq:biphoton_frequency}) is approximately equal to the product of the biphoton's EIT spectrum, $T_{\rm EIT}$, and the biphoton's FWM spectrum, $T_{\rm FWM}$. Since $|{\rm sinc} \! \left[ k_s L \xi(\delta) /4 \right]|$ is slowly varying within the biphoton's spectral width, dropping ${\rm sinc} \! \left[ k_s L \xi(\delta) /4 \right]$ from Eq.~(\ref{eq:biphoton_frequency}) changes the result little. As shown in Eq.~(\ref{eq:T_approximation2}), the analytical expression of the biphoton's EIT spectrum is proportional to
\begin{equation}
\label{eq:eit_bi}
1+ \frac{\sigma-1}{1+ 4\delta^2/\Gamma_{\rm EIT}^2},
\end{equation}
where $\sigma = e^{\alpha'_s R A}$ is a constant independent of $\delta$. As shown in Eq.~(\ref{eq:FWM_part_th}), the analytical expression of the biphoton's FWM spectrum is proportional to
\begin{equation}
\label{eq:fmw_bi}
\frac{1}{1+4\delta^2/(\eta \Gamma_{\rm EIT})^2},
\end{equation}
where $\eta^{-1} = \sqrt{\alpha'_s R A / \ln[(1+e^{\alpha'_s R A})/2] -1}$ based on Eqs.~(\ref{eq:Gamma_EIT}) and (\ref{eq:Gamma_FWM}), which is independent of $\delta$. When the value of $\alpha'_s R A$ is not large, we have $\sigma \approx \eta^2$, and the product of the biphoton's EIT and FWM spectral profiles gives
\begin{eqnarray}
&& \left( 1+ \frac{\sigma-1}{1+4\delta^2/\Gamma_{\rm EIT}^2} \right) \times
\frac{1}{1+4\delta^2/(\eta \Gamma_{\rm EIT})^2} \nonumber \\
&& ~\approx
\frac{\eta^2}{1+4\delta^2/\Gamma_{\rm EIT}^2}.
\end{eqnarray}
When the value of $\alpha'_s R A$ is large, $\eta$ and $\sigma$ become large. A large $\eta$ makes the biphoton's FWM linewidth far greater than the biphoton's EIT linewidth, indicating that the product of the biphoton's EIT and FWM spectral profiles is dominated by the EIT profile. We obtain
\begin{eqnarray}
&& \left( 1+ \frac{\sigma-1}{1+4\delta^2/\Gamma_{\rm EIT}^2} \right) \times
\frac{1}{1+4\delta^2/(\eta \Gamma_{\rm EIT})^2} \nonumber \\
&& ~\approx
1+ \frac{\sigma-1}{1+4\delta^2/\Gamma_{\rm EIT}^2}
\approx \frac{\sigma}{1+4\delta^2/\Gamma_{\rm EIT}^2}.
\end{eqnarray}
The second approximation above follows from $\sigma \gg 1$. Thus, the linewidth $\Gamma_{\rm BI}$ of the biphoton's overall spectrum is close to the linewidth $\Gamma_{\rm EIT}$ of the biphoton's EIT spectrum.
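A minimal numerical sketch of this argument, with an illustrative value of $\alpha'_s R A$ chosen by us and $\Gamma_{\rm EIT}$ set to unity, compares the exact product of the profiles in Eqs.~(\ref{eq:eit_bi}) and (\ref{eq:fmw_bi}) with the single-Lorentzian approximation:
\begin{verbatim}
import numpy as np

def eta_sigma(x):
    # x stands for alpha'_s * R * A; see Eqs. (eit_bi) and (fmw_bi).
    sigma = np.exp(x)
    eta = 1.0 / np.sqrt(x / np.log((1.0 + np.exp(x)) / 2.0) - 1.0)
    return eta, sigma

x = 1.0                             # illustrative value of alpha'_s*R*A
eta, sigma = eta_sigma(x)
delta = np.linspace(-3, 3, 601)     # detuning in units of Gamma_EIT
exact = ((1 + (sigma - 1) / (1 + 4 * delta**2))
         / (1 + 4 * delta**2 / eta**2))
approx = eta**2 / (1 + 4 * delta**2)
# Maximum relative deviation quantifies the quality of the
# sigma ~ eta**2 approximation at this value of x.
print(np.max(np.abs(exact - approx) / exact))
\end{verbatim}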
In Fig.~\ref{fig:three_linewidth_comparison}, we compare the ratio of $\Gamma_{\rm EIT}$ to $\Gamma_{\rm BI}$ and that of $\Gamma_{\rm FWM}$ to $\Gamma_{\rm BI}$, where $\Gamma_{\rm EIT}$, $\Gamma_{\rm BI}$, and $\Gamma_{\rm FWM}$ are numerically calculated from Eqs.~(\ref{eq:biphoton_frequency}), (\ref{eq:T_exact}), and (\ref{eq:FWM_part}), respectively. The ratios are plotted against $\Omega_c^2$ at various values of the OD ($\alpha_s$) and decoherence rate ($\gamma$).
In all the cases, it is apparent that the linewidth of the biphoton's overall spectrum is nearly the same as that of the biphoton's EIT spectrum, i.e.,
\begin{equation}
\label{eq:linewidth_bi}
\Gamma_{\rm BI} \approx \Gamma_{\rm EIT}.
\end{equation}
The comparison of the numerical results shown in Fig.~\ref{fig:three_linewidth_comparison} is consistent with the expectation from the discussion based on the analytical expressions in Eqs.~(\ref{eq:eit_bi}) and (\ref{eq:fmw_bi}).
The numerical and analytical results of the biphoton's overall spectrum are compared in Fig.~\ref{fig:biphoton_simulation}(a). The solid lines are numerically calculated from the square of Eq.~(\ref{eq:biphoton_frequency}), and the dashed lines are the results of the analytical formula in Eq.~(\ref{eq:T_approximation2}).
Figure~\ref{fig:biphoton_simulation}(b) shows the percentage difference between the numerical and analytical values of the FWHM of biphoton's overall spectrum. The numerical linewidth is determined by the Lorentzian best fit of the biphoton spectrum calculated from Eq.~(\ref{eq:biphoton_frequency}). The analytical linewidth is given by the formula in Eq.~(\ref{eq:linewidth_bi}) or equivalently Eq.~(\ref{eq:Gamma_EIT}). In Fig.~\ref{fig:biphoton_simulation}(b), the percentage difference increases with the value of $\Omega_c^2/(4 \gamma \Gamma_D)$.
\FigFive
We have derived the analytical formula of the biphoton's self-susceptibility [$(k_s L/4) \xi(\delta)$] or EIT spectrum as shown by Eq.~(\ref{eq:T_approximation2}), and that of the biphoton's cross-susceptibility [$(\sqrt{k_{as} k_s}L/2) \chi(\delta)$] or FWM spectrum as shown by Eq.~(\ref{eq:FWM_part_th}). In addition, ${\rm sinc} \! \left[ k_s L \xi(\delta) /4 \right]$ in Eq.~(\ref{eq:biphoton_frequency}) plays little role in the spectral profile. Therefore, the analytical expression of Eq.~(\ref{eq:biphoton_frequency}) is obtained, and the time-correlation function or the wave packet of biphotons in Eq.~(\ref{eq:biphoton}) becomes
\begin{eqnarray}
\label{eq:g2_approx}
G^{(2)}(\tau) &=&
C e^{-\Gamma_{\rm BI} \tau}, \\
\label{eq:C_value}
C & \propto & (\alpha_{as}\alpha_s \Gamma^2) \frac{\Omega_p^2}{\Delta_p^2} B^2 e^{-\alpha_s'R(1-A)}.
\end{eqnarray}
The biphoton's wave packet is an exponential-decay function with the decay time constant of $1/\Gamma_{\rm BI}$. As long as the value of $\Omega_c^2/(4\gamma \Gamma_D)$ is not far greater than 1, Eq.~(\ref{eq:g2_approx}) is a reasonable approximation.
\section{Experimental Setup}
We experimentally studied the temporal width and spectral linewidth of SFWM biphotons generated from a hot vapor of $^{87}$Rb atoms. The SFWM transition scheme is shown in Fig.~\ref{fig:transition_diagram}, and the actual energy levels in the experiment are specified in the caption. The time correlation function (i.e., the wave packet) of biphotons and the corresponding EIT spectrum were measured as functions of the coupling power or the square of the coupling Rabi frequency $\Omega_c^2$. In the measurement of the EIT spectrum, a weak probe laser field was employed in addition to the coupling field, and its Rabi frequency was far less than $\Omega_c$ to satisfy the perturbation limit and to affect the ground-state population distribution very little. The two laser fields form the $\Lambda$-type transition scheme with zero one-photon detuning, and their transitions are depicted in Fig.~1(a). Both laser fields have wavelengths of about 795 nm. The probe laser was injection-locked by the coupling laser with an offset frequency, which was provided by a fiber-based electro-optic modulator (EOM) with an input RF frequency of around 6835 MHz. The injection-lock scheme ensures that the two-photon detuning, $\delta$, between the probe and coupling fields is stable and free of laser frequency fluctuations. We swept the RF frequency of the EOM, i.e., the probe frequency, which is equal to the two-photon detuning, and measured the probe transmission in the spectroscopic measurement.
The coupling and input probe fields completely overlapped and propagated in the same direction in the measurement of the EIT spectrum, minimizing the decoherence rate induced by the Doppler effect. The two fields had the $s$ and $p$ polarizations and $e^{-2}$ full widths of 1.5 and 1.4 mm, respectively. The output probe beam was collected by a polarization-maintaining fiber (PMF), and its power after the PMF was detected by a Thorlabs PDA36A photo detector. The PMF's collection efficiency for the probe field was 70\%. We produced the coupling (or pump) field from an external-cavity diode laser of Toptica DL DLC pro 795 (or 780). A homemade 795 nm bare-diode laser under the injection lock generated the probe field. The coupling frequency was stabilized by saturated absorption spectroscopy, and the pump frequency was stabilized by a wave meter.
In the measurement of the biphoton wave packet or two-photon correlation function as a function of the delay time between the photon pair, we employed only the coupling and pump fields. The coupling field is the same as that used in the measurements of the EIT spectrum. The pump field had the $p$ polarization. We set the pump detuning to $2$ GHz. The $e^{-2}$ full width of the pump field was 1.2 mm. Pairs of anti-Stokes (or signal) and Stokes (or probe) photons were generated by the spontaneous four-wave mixing (SFWM) process. The coupling and pump laser fields and the anti-Stokes and Stokes photons all propagated in the same direction. This all-copropagation scheme can satisfy the phase-matching condition. The biphotons, or the pairs of anti-Stokes and Stokes photons, were collected by two PMFs, and their counts after the PMFs were detected by two single-photon counting modules (SPCMs). To block the leakages of the strong coupling and pump fields to the SPCMs, we utilized polarization filters and etalon filters, which together provide an extinction ratio of about 130 dB.
We applied a hyperfine optical pumping (HOP) field to optically pump the population out of the state of $|5S_{1/2}, F=1\rangle$ during the measurements of the biphoton wave packet and the EIT spectrum. The HOP field has a hollow-shaped beam profile, while the interaction region of the SFWM process and the EIT spectrum is in its hollow region. We set the power of the HOP field to 9 mW and stabilized its frequency to 80 MHz below the transition frequency of $|5S_{1/2},F=1\rangle$ $\rightarrow$ $|5P_{3/2},F=2\rangle$. The HOP field increased the decoherence rate a little in the experimental system. Other details of the experimental setup and measurement method of the biphoton wave packet can be found in our previous works \cite{Ourhotbiphoton1, Ourhotbiphoton2}.
\section{Results and Discussion} \label{sec:results}
We systematically measured the EIT spectra and the biphoton wave packets. The temperature of the vapor cell, which consists almost entirely of $^{87}$Rb atoms, was maintained at 57 $^{\circ}$C throughout the experiment. At this temperature, $\Gamma_D =$ 54$\Gamma$. We utilized the absorption spectrum of a weak probe field in the presence of the coupling and HOP fields to determine the value of $\alpha_s$ (OD). In the measurement of the absorption spectrum, the probe frequency was swept across the entire Doppler-broadened spectral lines of $|5S_{1/2},F=2\rangle$ $\rightarrow$ $|5P_{1/2},F=1\rangle$ and $|5S_{1/2},F=2\rangle$ $\rightarrow$ $|5P_{1/2},F=2\rangle$. We fitted the absorption spectrum and determined $\alpha_s =$ 360$\pm$10.
In the measurement of biphoton wave packets, we used five different coupling powers of 0.5, 1, 2, 4, and 8 mW. The coupling Rabi frequency was determined by the EIT spectrum, and the determination method will be explained in the next paragraph. The pump power or Rabi frequency was fixed to 0.5 mW or approximately 2.0$\Gamma$, which is estimated according to the peak intensity of the pump beam. In the measurement of EIT spectra, we set the probe power to 0.5 $\mu$W. The probe field was weak enough that it can be treated as the perturbation. The coupling power in each EIT spectrum was the same as that in the corresponding biphoton wave packet.
\FigSix
Figures~\ref{fig:experiment_data}(a)-(e) show the EIT spectra at various coupling powers. The blue lines represent the experimental data. The red lines are the theoretical predictions calculated with Eq.~(\ref{eq:T_exact}). In the theoretical calculation, we set the OD ($\alpha_s$) to 360 as determined by the absorption spectrum, and kept the ratio between the five coupling Rabi frequencies ($\Omega_c$) as the square root of the ratio between the five coupling powers, i.e., the Rabi frequencies correspond to $\Omega_{c0}$, $\sqrt{2}\Omega_{c0}$, $2\Omega_{c0}$, $2\sqrt{2}\Omega_{c0}$, and $4\Omega_{c0}$. We tuned a single value of $\Omega_{c0}$ for all EIT spectra, and varied the value of the decoherence rate ($\gamma$) for each EIT spectrum to get a good match between the data and the predictions. The value of $\Omega_{c0}$ used in the calculation was 1.75$\Gamma$, while the Rabi frequency estimated from the peak intensity of the 0.5-mW coupling power was 2.05$\Gamma$. When we increased the coupling power by a factor of 16, the value of $\gamma$ increased by a factor of 4.3. The biphoton's temporal width was mainly influenced by the coupling power. The consistency between the experimental data and the theoretical predictions is satisfactory.
The green dashed lines in Figs.~\ref{fig:experiment_data}(a)-(e) are the best fits of Lorentzian functions, since the EIT spectral profile is approximately a Lorentzian function as shown by Eq.~(\ref{eq:T_approximation2}). The measured EIT spectral profile of a Doppler-broadened medium is very close to a Lorentzian function. Please note that the biphoton's EIT spectrum is given by $|\exp[i(k_s L/4)\xi]|^2$ as shown in Eq.~(\ref{eq:T_exact}), whereas the EIT spectrum with an input probe field is given by $|\exp[i(k_s L/2) \xi]|^2$. The difference of a factor of 2 between the two expressions is due to the fact that the probe photons resulting from the SFWM process are generated everywhere in the medium, and they propagate through half of the medium length on average. To apply Eq.~(\ref{eq:T_approximation}) or (\ref{eq:T_approximation2}) to the EIT spectrum with an input probe field, $\alpha_s'$ in the equation needs to be redefined as $\alpha_s \sqrt{\pi}\Gamma/(2\Gamma_D)$, which differs from Eq.~(\ref{eq:alpha_s_p}) by a factor of 2.
\FigSeven
The blue lines in Figs.~\ref{fig:experiment_data}(f)-(j) are the experimental data of biphoton wave packets taken at the same coupling powers as those in Figs.~\ref{fig:experiment_data}(a)-(e), respectively. The red lines are the theoretical predictions calculated from Eq.~(\ref{eq:biphoton}). We set the calculation parameters of $\Omega_c$, $\alpha_s$, and $\gamma$ to the values determined by the absorption and EIT spectra. A Lorentzian function with a linewidth of 35 MHz, corresponding to the spectral profile of the etalon filter used in the experiment, was incorporated into Eq.~(\ref{eq:biphoton_frequency}) in the calculation. Since the biphoton wave packets have linewidths between 270 kHz and 2.2 MHz, the etalon filter's linewidth affects the calculation results little. The green dashed lines are the best fits of exponential-decay functions, i.e., $y(t) = y_0 + A\,{\rm exp}[-(t-t_0)/\tau]$, since the biphoton wave packet behaves approximately like an exponential-decay function, as shown by Eq.~(\ref{eq:g2_approx}). We first determined the value of $y_0$ from the baseline of the data, then fixed the value of $t_0$, and finally fitted the values of $A$ and $\tau$. The theoretical predictions and the exponential-decay best fits are in good agreement with the experimental data. In Fig.~\ref{fig:experiment_data}, we have demonstrated that the EIT spectrum measured with classical light can be a good indicator of the biphoton wave packet generated by the SFWM process.
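A minimal sketch of this fitting procedure, applied to synthetic data with hypothetical parameter values rather than to the measured wave packets, could read:
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)
t = np.linspace(0.0, 2.0, 400)               # time (arbitrary units)
y_data = (5.0 + 40.0 * np.exp(-(t - 0.1) / 0.35) * (t >= 0.1)
          + rng.normal(0.0, 1.0, t.size))    # synthetic wave packet

y0 = np.median(y_data[t > 1.8])              # baseline from the tail
t0 = 0.1                                     # fixed onset time

def model(t, amp, tau):
    # y(t) = y0 + A * exp(-(t - t0)/tau), with y0 and t0 held fixed
    return y0 + amp * np.exp(-(t - t0) / tau)

mask = t >= t0
(amp, tau), _ = curve_fit(model, t[mask], y_data[mask], p0=(30.0, 0.3))
print(amp, tau)
\end{verbatim}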
To verify whether the temporal width of the measured biphoton wave packet is consistent with the analytical formula of $1/\Gamma_{\rm BI}$, we plot the biphoton's temporal width as a function of the coupling power in Fig.~\ref{fig:eit_biphoton_linewidth}. The red circles are the biphoton's temporal widths obtained from the $e^{-1}$ time constants of the best fits as the examples shown in Figs.~\ref{fig:experiment_data}(f)-(j). Each data point is the average of repeated measurements. As for the analytical formulas of $1/\Gamma_{\rm BI}$, the blue squares are the results calculated from Eq.~(\ref{eq:Gamma_EIT}) or (\ref{eq:linewidth_bi}). The calculation parameters of $\Omega_c$, $\alpha_s$, and $\gamma$ are the values determined in Figs.~\ref{fig:experiment_data}(a)-(e). Since the values of $\Omega_c^2/ (4\gamma\Gamma_D)$ of the five coupling powers ranged between 0.71 and 2.6, Eq.~(\ref{eq:g2_approx}) as well as the formula of $1/\Gamma_{\rm BI}$ is a good approximation. As expected, the agreement between the data of the measured temporal width and the predictions of the analytical formula is satisfactory.
\section{Conclusion}
We have systematically studied the temporal profile of biphotons generated from a hot atomic vapor via the SFWM. Taking into account all velocity groups in a Doppler-broadened medium, we were able to derive the analytical formula of the biphoton wave packet or the two-photon correlation function. The derived formula shows that the biphoton's spectral profile is a Lorentzian function with the linewidth mainly determined by the EIT linewidth, $\Gamma_{\rm EIT}$, where the formula of $\Gamma_{\rm EIT}$ is given by Eq.~(\ref{eq:Gamma_EIT}). Thus, the biphoton wave packet is close to an exponential-decay function with the $e^{-1}$ decay time constant of $1/\Gamma_{\rm EIT}$.
Since the coupling power can significantly affect the EIT effect and the biphoton's temporal width, we measured the EIT spectra with classical light and the biphoton wave packets generated from the SFWM process at various coupling powers from 0.5 to 8 mW. The measured EIT spectrum exhibited a Lorentzian profile, which is consistent with the prediction of the analytical formula shown in Eq.~(\ref{eq:T_approximation2}). Under the same experimental condition as the EIT spectrum, the measured biphoton wave packet behaved mainly like an exponential-decay function, in agreement with the analytical formula shown in Eq.~(\ref{eq:g2_approx}). We utilized the experimental parameters determined by the EIT spectra to calculate the theoretical predictions of the biphoton wave packets. The agreement between the theoretical predictions and experimental data is satisfactory. Furthermore, we compared the biphoton temporal width obtained from the measurement with that calculated from the analytical formula given by Eq.~(\ref{eq:Gamma_EIT}) or (\ref{eq:linewidth_bi}). The comparison verifies the analytical formula is a reasonable approximation as long as the value of $\Omega_c^2/(4\gamma \Gamma_D)$ is not far greater than 1.
In conclusion, the EIT spectrum measured with classical light is a good indicator for the biphoton wave packet generated by the SFWM process. The analytical formulas derived in this work provide a simple way to quantitatively understand the temporal and spectral profiles of the biphotons.
\section*{Acknowledgments}
This work was supported by Grant Nos.~109-2639-M-007-001-ASP and 110-2639-M-007-001-ASP of the Ministry of Science and Technology, Taiwan.
\end{document}
\begin{document}
\title{Galloping in fast-growth natural merge sorts}
\begin{abstract}
We study the impact of sub-array merging routines
on merge-based sorting algorithms.
More precisely, we focus on the
\emph{galloping} sub-routine that
TimSort\xspace uses to merge monotonic (non-decreasing)
sub-arrays, hereafter called \emph{runs},
and on the impact on the
number of element comparisons performed
if one uses this sub-routine instead of a naïve
merging routine.
The efficiency of
TimSort\xspace and of similar sorting algorithms has often been
explained by using the notion of \emph{runs} and the
associated \emph{run-length entropy}. Here, we focus
on the related notion of \emph{dual runs},
which was introduced in the 1990s, and the
associated \emph{dual run-length entropy}.
We prove, for this complexity measure,
results that are similar to those already known
when considering standard run-induced measures:
in particular, TimSort\xspace requires only~$\mathcal{O}(n + n \log(\sigma))$ element comparisons
to sort arrays of length~$n$ with~$\sigma$
distinct values.
In order to do so, we introduce new notions
of \emph{fast-} and \emph{middle-growth} for
natural merge sorts (i.e., algorithms based on
merging runs). By using these notions, we prove
that several merge sorting algorithms,
provided that they use TimSort\xspace's galloping
sub-routine for merging runs,
are as efficient as TimSort\xspace at sorting arrays
with low run-induced or dual-run-induced
complexities.
\end{abstract}
\section{Introduction}\label{sec:intro}
In 2002, Tim Peters, a software engineer, created a new sorting algorithm,
which was called TimSort\xspace~\cite{Peters2015} and was built
on ideas from McIlroy~\cite{McIlroy1993}. This algorithm immediately
demonstrated its efficiency for sorting actual data, and was
adopted as the standard sorting algorithm in core libraries of
widespread programming languages such as Python and Java.
Hence, the prominence of such a custom-made algorithm over previously
preferred \emph{optimal} algorithms contributed to the regain of interest
in the study of sorting algorithms.
Among the best-identified reasons
behind the success of TimSort\xspace are the fact that
this algorithm is well adapted to the architecture of
computers (e.g., for dealing with cache issues) and to realistic
distributions of data. In particular, the very conception
of TimSort\xspace makes it particularly well-suited
to sorting data whose \emph{run decompositions}~\cite{BaNa13,EsCaWo92} (see Figure~\ref{fig:runs})
are simple.
Such decompositions were already used in 1973 by
Knuth's NaturalMergeSort\xspace~\cite[Section 5.2.4]{Knuth98},
which adapted the traditional
MergeSort\xspace algorithm as follows: NaturalMergeSort\xspace is based on splitting arrays into monotonic subsequences, also called \emph{runs}, and on merging these runs together.
Thus, all algorithms sharing this feature of NaturalMergeSort\xspace are also called
\emph{natural} merge sorts.
\begin{figure}
\caption{A sequence and its \emph{run decomposition}.\label{fig:runs}}
\end{figure}
In addition to being a natural merge sort, TimSort\xspace includes many
optimisations, which were carefully engineered, through extensive
testing, to offer the best complexity performances.
As a result,
the general structure of TimSort\xspace can be split into three main components:
(i) a variant of an insertion sort,
which is used to deal with \emph{small} runs,
e.g., runs of length less than 32,
(ii) a simple policy for choosing which \emph{large} runs to merge,
(iii) a sub-routine for merging these runs, based on a so-called
\emph{galloping} strategy.
The second component has been subject to
an intense scrutiny these last few years,
thereby giving birth to a great variety of {TimSort\xspace}-like
algorithms, such as
\textalpha-StackSort\xspace~\cite{AuNiPi15},
\mbox{\textalpha-MergeSort\xspace}~\cite{BuKno18},
ShiversSort\xspace~\cite{shivers02} (which \emph{predated} TimSort\xspace), adaptive ShiversSort\xspace~\cite{Ju20}, PeekSort\xspace and
PowerSort\xspace~\cite{munro2018nearly}.\label{citesorts}
On the contrary,
the first and third components,
which seem more complicated and whose effect might be
harder to quantify, have often been used as black boxes
when studying TimSort\xspace or designing variants thereof.
In what follows, we focus on the third component and prove
that it is very efficient:
whereas TimSort\xspace requires~$\mathcal{O}(n+n \log(\rho))$
comparisons to sort arrays of length~$n$
that can be decomposed as a concatenation of~$\rho$ non-decreasing arrays,
this component makes TimSort\xspace require
only~$\mathcal{O}(n+n \log(\sigma))$
comparisons to sort arrays of length~$n$
with~$\sigma$ distinct values.
\paragraph*{Context and related work}
The success of TimSort\xspace has nurtured the interest in the quest for
sorting algorithms that would be both excellent all-around and
adapted to arrays with few runs.
However, its \emph{ad hoc} conception made its complexity analysis
harder than what one might have hoped, and it is only in 2015,
a decade after TimSort\xspace had been largely deployed, that Auger et
al.~\cite{AuNiPi15} proved that
TimSort\xspace required~$\mathcal{O}(n \log(n))$ comparisons
for sorting arrays of length~$n$.
This is optimal in the model of sorting by comparisons,
if the input array can be
an arbitrary array of length~$n$.
However, taking into account the run decompositions
of the input array allows using finer-grained
complexity classes, as follows.
First, one may consider only arrays
whose run decomposition consists of~$\rho$
monotonic runs. On such arrays, the best worst-case
time complexity one may hope for is~$\mathcal{O}(n + n \log(\rho))$~\cite{Mannila1985}.
Second, we may consider even more restricted classes of
input arrays, and focus only on
those arrays that consist of~$\rho$ runs of lengths~$r_1,\ldots,r_\rho$. In that case,
every comparison-based sorting algorithm
requires at least~$n \mathcal{H} + \mathcal{O}(n)$ element comparisons,
where~$\mathcal{H}$ is defined as~$\mathcal{H} = H(r_1/n,\ldots,r_\rho/n)$ and~$H(x_1,\ldots,x_\rho) = -\sum_{i=1}^\rho x_i \log_2(x_i)$
is the general entropy function~\cite{BaNa13,Ju20,McIlroy1993}.
The number~$\mathcal{H}$ is called the
\emph{run-length entropy} of the array.
Since the early 2000s,
several natural merge sorts were proposed,
all of which were meant to offer easy-to-prove
complexity guarantees:
ShiversSort\xspace, which runs in
time~$\mathcal{O}(n \log(n))$;
\textalpha-StackSort\xspace, which, like NaturalMergeSort\xspace,
runs in
time~$\mathcal{O}(n + n \log(\rho))$;
\mbox{\textalpha-MergeSort\xspace}, which, like TimSort\xspace, runs in
time~$\mathcal{O}(n + n \mathcal{H})$;
adaptive ShiversSort\xspace, PeekSort\xspace and PowerSort\xspace, which run in time~$n \mathcal{H} + \mathcal{O}(n)$.
Except TimSort\xspace, these algorithms are, in fact, described only
as policies for deciding which runs to merge, the actual sub-routine used for merging runs
being left implicit:
since choosing a naïve merging sub-routine
does not harm the worst-case time complexities
considered above,
all authors identified
the cost of merging two runs of lengths~$m$ and~$n$
with the sum~$m+n$, and the complexity of the algorithm
with the sum of the costs of the merges performed.
One notable exception is that
of Munro and Wild~\cite{munro2018nearly}.
They compared the
running times of TimSort\xspace and of TimSort\xspace's variant obtained by using
a naïve merging routine
instead of TimSort\xspace's galloping sub-routine.
However, and although they mentioned the
challenge of finding distributions on arrays
that might benefit from galloping,
they did not address this challenge, and focused
only on arrays with a low entropy~$\mathcal{H}$.
As a result,
they unsurprisingly observed that the
galloping sub-routine looked \emph{slower}
than the naïve one.
Galloping turns out to be very efficient when sorting
arrays with few distinct values, a class of arrays that had
also been intensively studied.
As soon as 1976, Munro and Spira~\cite{MuSpi76} proposed a complexity measure~$\mathcal{H}^\ast$
related to the run-length entropy,
with the property that~$\mathcal{H}^\ast \leqslant
\log_2(\sigma)$ for arrays with~$\sigma$ values.
They also proposed an algorithm for sorting arrays of length~$n$ with~$\sigma$
values by using~$\mathcal{O}(n + n \mathcal{H}^\ast)$ comparisons.
McIlroy~\cite{McIlroy1993} then extended their work to arrays representing a permutation~$\pi$,
identifying~$\mathcal{H}^\ast$ with the run-length entropy
of~$\pi^{-1}$ and proposing a variant of
Munro and Spira's algorithm that would use~$\mathcal{O}(n + n \mathcal{H}^\ast)$ comparisons in this generalised
setting.
Similarly, Barbay et al.~\cite{BaOchoaSatti16} invented the
algorithm QuickSynergySort\xspace, which aimed at minimising the number of
comparisons, achieving a~$\mathcal{O}(n + n \mathcal{H}^\ast)$ upper bound and further refining the parameters it used, by taking into account
the interleaving between runs and dual runs.
Yet, all of these algorithms
require~$\omega(n + n \mathcal{H})$ element moves in the worst case.
Furthermore, as a side effect of being rather complicated and
lacking a proper analysis, except that
of~\cite{munro2018nearly} that hinted at its inefficiency,
the galloping sub-routine has been omitted
in various mainstream implementations of natural merge sorts,
in which it was replaced by its naïve variant.
This is the case, for instance, in library TimSort\xspace implementations
of the programming languages Swift~\cite{Swift} and
Rust~\cite{Rust}.
On the contrary, TimSort\xspace's implementation in
other languages, such as Java~\cite{Java},
Octave~\cite{Octave} or the V8 JavaScript
engine~\cite{V8}, and PowerSort\xspace's implementation in Python~\cite{Python}
include the galloping sub-routine.
\paragraph*{Contributions}
We study the time complexity of various natural
merge sort algorithms in a context where arrays are not just
parametrised by their lengths.
More precisely, we focus on a
decomposition of input arrays that is
dual to the decomposition of arrays into
monotonic runs,
and that was proposed by McIlroy~\cite{McIlroy1993}.
Consider an array~$A$ that we want to sort in a
\emph{stable} manner, i.e., in which two elements can always
considered to be distinct, if only because their positions in~$A$ are distinct. Without loss of generality,
we identify the values~$A[1],A[2],\ldots,A[n]$ with the integers from~$1$ to~$n$,
thereby making~$A$ a permutation of the set~$\{1,2,\ldots,n\}$.
A common measure of presortedness
consists in subdividing~$A$ into
distinct monotonic \emph{runs},
i.e., partitioning the set~$\{1,2,\ldots,n\}$
into intervals~$R_1,R_2,\ldots,R_\rho$
on which the function~$x \mapsto A[x]$ is monotonic.
\begin{figure}
\caption{The arrays~$A$ and~$B$ are lexicographically
equivalent to the permutation~$\tau$. Their dual runs,
represented with gray and white horizontal stripes,
have respective lengths~$4$,~$3$ and~$3$.
The mappings~$A \mapsto \tau$ and~$B \mapsto \tau$ identify them with the dual runs of~$\tau$,
i.e., with the runs of the permutation~$\tau^{-1}$.\label{fig:dual-runs}}
\end{figure}
Here, we adopt a dual approach,
which consists in partitioning the set~$\{1,2,\ldots,n\}$ into the \emph{increasing} runs~$S_1,S_2,\ldots,S_\sigma$ of the inverse permutation~$A^{-1}$.
These intervals~$S_i$ are already known under the name of
\emph{shuffled up-sequences}~\cite{BaNa13,LevPet90} or
\emph{riffle shuffles}~\cite{McIlroy1993}. In order to
underline their connection with runs, we say that these intervals are
the \emph{dual runs} of~$A$, and we denote their lengths by~$s_i$.
The process of transforming an array into a permutation and then
extracting its dual runs is illustrated in
Figure~\ref{fig:dual-runs}.
When~$A$ is \emph{not} a permutation of~$\{1,2,\ldots,n\}$,
the dual runs of~$A$ are simply the maximal
intervals~$S_i$ such that~$A$ is non-decreasing on the
set of positions~$\{j \colon A[j] \in S_i\} \subseteq \{1,2,\ldots,n\}$.
The length of a dual run is then defined as the cardinality of
that set of positions.
Thus, two lexicographically equivalent arrays have dual runs
of the same lengths:
this is the case, for instance, of the arrays~$A$,~$B$ and~$\tau$ in Figure~\ref{fig:dual-runs}.
In particular, we may see~$\tau$ and~$B$ as
canonical representatives of the array~$A$:
these are the unique permutation
of~$\{1,2,\ldots,n\}$ and the unique
array with values in~$\{1,2,\ldots,\sigma\}$
that are lexicographically equivalent with~$A$.
More generally,
an array that contains~$\sigma$ distinct values cannot have
more than~$\sigma$ dual runs.
Note that, in general, there is no non-trivial connection between
the runs of a permutation and its dual runs. For instance, a
permutation with a given number of runs may have arbitrarily many
(or few) dual runs, and conversely.
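To make the notion concrete, the following Python sketch (ours, and not part of any of the algorithms discussed here) computes the dual-run lengths of an array: after a stable sort of the positions by value, the dual runs correspond to the maximal ascending segments of the resulting position sequence.
\begin{verbatim}
def dual_run_lengths(a):
    # Stable sort of the positions by value; ties keep their
    # original left-to-right order, as required by stability.
    pos = sorted(range(len(a)), key=lambda i: a[i])
    lengths = []
    current = 1
    for i in range(1, len(pos)):
        if pos[i] > pos[i - 1]:
            current += 1          # still inside the same dual run
        else:
            lengths.append(current)
            current = 1           # a new dual run starts here
    if pos:
        lengths.append(current)
    return lengths

# Example: dual_run_lengths([1, 3, 2, 4]) == [2, 2].
\end{verbatim}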
In this article, we prove that,
by using TimSort\xspace's galloping sub-routine,
several natural merge sorts
require only~$\mathcal{O}(n + n \mathcal{H}^\ast)$ comparisons,
or even~$n \mathcal{H}^\ast + \mathcal{O}(n)$ comparisons,
where~$\mathcal{H}^\ast = H(s_1/n,\ldots,s_\sigma/n) \leqslant \log_2(\sigma)$
is called the \emph{dual run-length entropy} of the array,~$s_i$ is the length of the dual run~$S_i$, and~$H$ is the general entropy function already mentioned above.
This legitimates using TimSort\xspace's arguably complicated galloping
sub-routine rather than its naïve alternative,
in particular when sorting arrays that are
constrained to have relatively few distinct values.
This also subsumes results that have been known
since the 1970s. For instance, adapting the optimal constructions
for alphabetic Huffman codes by Hu and Tucker~\cite{HuTucker71}
or Garsia and Wachs~\cite{Garsia77} to \emph{merge trees}
(described in Section~\ref{sec:fast-growth})
already provided sorting algorithms working in time~$n \mathcal{H} + \mathcal{O}(n)$.
Our new results rely on notions that we call
\emph{fast-} and \emph{middle-growth} properties,
and which are found in natural merge sorts like \textalpha-MergeSort\xspace,
\textalpha-StackSort\xspace, adaptive ShiversSort\xspace, ShiversSort\xspace, PeekSort\xspace, PowerSort\xspace or TimSort\xspace.
More precisely, we prove that merge sorts
require~$\mathcal{O}(n + n \mathcal{H})$ comparisons
\emph{and} element moves when they possess the
fast-growth property, thereby encompassing complexity
results that were proved separately for each of these algorithms~\cite{auger2018worst,
BuKno18,Ju20,munro2018nearly},
and~$\mathcal{O}(n + n \mathcal{H}^\ast)$ comparisons when they possess the
fast- or middle-growth property, which is a completely
new result.
Finally, we prove finer complexity bounds on the number
of comparisons used by
adaptive ShiversSort\xspace, ShiversSort\xspace, NaturalMergeSort\xspace, PeekSort\xspace and PowerSort\xspace, which require only~$n \mathcal{H}^\ast + \mathcal{O}(n\log(\mathcal{H}^\ast+1)+n)$ comparisons,
nearly matching the~$n \mathcal{H} + \mathcal{O}(n)$ (or~$n \log_2(n) + \mathcal{O}(n)$ and~$n \log_2(\rho) + \mathcal{O}(n)$, in the cases of ShiversSort\xspace and NaturalMergeSort\xspace) complexity upper bound they already enjoy in terms of comparisons
and element moves.
\section{The galloping sub-routine for merging runs}
\label{sec:description}
Here, we describe the galloping sub-routine that
the algorithm TimSort\xspace uses to merge adjacent non-decreasing runs.
This sub-routine is a blend between a naïve
merging algorithm, which requires~$a+b-1$
comparisons to merge runs~$A$ and~$B$ of lengths~$a$ and~$b$,
and a dichotomy-based algorithm,
which requires~$\mathcal{O}(\log(a+b))$ comparisons in the best case,
and~$\mathcal{O}(a+b)$ comparisons in the worst case.
It depends on a parameter~$\mathbf{t}$, and
works as follows.
When merging runs~$A$ and~$B$ into one large run~$C$,
we first need to find the least integers~$k$ and~$\ell$ such that~$B[0] < A[k] \leqslant B[\ell]$: the~$k+\ell$ first elements of~$C$
are~$A[0],A[1],\ldots,A[k-1]$,~$B[0],B[1],\ldots,B[\ell-1]$,
and the remaining elements of~$C$ are obtained by merging
the sub-array of~$A$ that spans positions~$k$ to~$a$ and the
sub-array of~$B$ that spans positions~$\ell$ to~$b$.
Computing~$k$ and~$\ell$ efficiently is therefore
a crucial step towards reducing the number of comparisons
required by the merging sub-routine (and, thus, by the sorting
algorithm).
This computation is a special case of the following problem:
if one wishes to find a secret integer~$m \geqslant 1$
by choosing integers~$x \geqslant 1$
and testing whether~$x \geqslant m$,
what is, as a function of~$m$, the least number of tests that one
must perform?
Bentley and Yao~\cite{BeYa76} answer this question
by providing simple strategies, which they number~$\mathsf{B}_0, \mathsf{B}_1,\ldots$:
\begin{itemize}
\item[$\mathsf{B}_0$:] choose~$x = 1$, then~$x = 2$, and so on, until
one chooses~$x = m$, thereby finding~$m$ in~$m$ queries;
\item[$\mathsf{B}_1$:] first use~$\mathsf{B}_0$ to find~$\lceil \log_2(m) \rceil + 1$ in~$\lceil \log_2(m) \rceil + 1$
queries, i.e., choose~$x = 2^k$ until~$x \geqslant m$, then compute the bits of~$m$ (from the most significant bit
of~$m$ to the least significant one) in~$\lceil \log_2(m)
\rceil - 1$
additional queries;
Bentley and Yao call this strategy a \emph{galloping} (or \emph{exponential
search}) technique;
\item[$\mathsf{B}_{k+1}$:] like~$\mathsf{B}_1$, except that one
finds~$\lceil \log_2(m) \rceil + 1$ by using~$\mathsf{B}_k$
instead of~$\mathsf{B}_0$.
\end{itemize}
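As an illustration, the following Python sketch implements strategy~$\mathsf{B}_1$; the interface---a predicate answering the test ``$x \geqslant m$?''---and the function names are ours.
\begin{verbatim}
def galloping_search(query):
    # Strategy B_1: find the secret integer m >= 1, where
    # query(x) answers the test "x >= m".  Returns (m, #queries).
    queries = 0
    def ask(x):
        nonlocal queries
        queries += 1
        return query(x)
    k = 0
    while not ask(2 ** k):        # exponential phase: x = 1, 2, 4, ...
        k += 1
    if k == 0:
        return 1, queries         # m = 1 found with a single query
    lo, hi = 2 ** (k - 1) + 1, 2 ** k
    while lo < hi:                # binary phase: m in (2**(k-1), 2**k]
        mid = (lo + hi) // 2
        if ask(mid):
            hi = mid
        else:
            lo = mid + 1
    return lo, queries

# Example: for m = 13, galloping_search(lambda x: x >= 13) returns
# (13, 8), matching the 2 * ceil(log2(13)) = 8 queries stated above.
\end{verbatim}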
Strategy~$\mathsf{B}_0$ uses~$m$ queries,~$\mathsf{B}_1$ uses~$2 \lceil \log_2(m) \rceil$ queries
(except for~$m = 1$, where it uses one query), and each strategy~$\mathsf{B}_k$ with~$k \geqslant 2$
uses~$\log_2(m) + o(\log(m))$ queries.
Thus, if~$m$ is known to be arbitrarily large,
one should favour some strategy~$\mathsf{B}_k$
(with~$k \geqslant 1$) over the naïve strategy~$\mathsf{B}_0$.
However, when
merging runs taken from a permutation chosen uniformly at
random over the~$n!$ permutations of~$\{1,2,\ldots,n\}$, the integer~$m$ is frequently small,
which makes~$\mathsf{B}_0$ suddenly more
attractive.
In particular, the overhead of using~$\mathsf{B}_1$ instead of~$\mathsf{B}_0$ is a prohibitive~$+20\%$ or~$+33\%$ when~$m = 5$ or~$m = 3$, as illustrated
in the black cells of Table~\ref{table:1}.
\begin{table}[h]
\begin{center}
\begin{tabular}{|C{3.75mm}|C{3.35mm}|C{3.35mm}|C{3.35mm}|C{3.35mm}|C{3.35mm}|C{3.35mm}|C{3.35mm}|C{3.35mm}|C{3.35mm}|C{3.35mm}|C{3.35mm}|C{3.35mm}|C{3.35mm}|C{3.35mm}|C{3.35mm}|C{3.35mm}|C{3.35mm}|}
\hline
\clap{$m$} & \clap{$1$} & \clap{$2$} & \clap{$3$} & \clap{$4$} & \clap{$5$} & \clap{$6$} & \clap{$7$} & \clap{$8$} & \clap{$9$} & \clap{$10$} & \clap{$11$} & \clap{$12$} & \clap{$13$} & \clap{$14$} & \clap{$15$} & \clap{$16$} & \clap{$17$} \\
\hline
\clap{$\mathsf{B}_0$} & \clap{$1$} & \clap{$2$} & \clap{$3$} & \clap{$4$} & \clap{$5$} & \clap{$6$} & \clap{$7$} & \clap{$8$} & \clap{$9$} & \clap{$10$} & \clap{$11$} & \clap{$12$} & \clap{$13$} & \clap{$14$} & \clap{$15$} & \clap{$16$} & \clap{$17$} \\
\hline
\clap{$\mathsf{B}_1$} & \clap{$1$} & \clap{$2$} & \clap{\cellcolor{black}\color{white}$\mathbf{4}$} & \clap{$4$} & \clap{\cellcolor{black}\color{white}$\mathbf{6}$} & \clap{$6$} & \clap{$6$} & \clap{$6$} & \clap{$8$} & \clap{$8$} & \clap{$8$} & \clap{$8$} & \clap{$8$} & \clap{$8$} & \clap{$8$} & \clap{$8$} & \clap{$10$} \\
\hline
\end{tabular}
\end{center}
\caption{Comparison requests needed by
strategies~$\mathsf{B}_0$ and~$\mathsf{B}_1$
to find a secret integer~$m \geqslant 1$.
\label{table:1}}
\end{table}
McIlroy~\cite{McIlroy1993} addresses this issue by choosing a
parameter~$\mathbf{t}$ and using a blend between the strategies~$\mathsf{B}_0$ and~$\mathsf{B}_1$,
which consists in two successive steps~$\mathsf{C}_1$ and~$\mathsf{C}_2$:
\begin{enumerate}
\item[$\mathsf{C}_1$:] one first follows~$\mathsf{B}_0$ for up to~$\mathbf{t}$ steps, thereby choosing~$x = 1$,~$x = 2$, \ldots,~$x = \mathbf{t}$
(if~$m \leqslant \mathbf{t}-1$, one stops after choosing~$x = m$);
\item[$\mathsf{C}_2$:] if~$m \geqslant \mathbf{t}+1$, one switches to~$\mathsf{B}_1$
(or, more precisely, to a version of~$\mathsf{B}_1$ translated by~$\mathbf{t}$, since the precondition~$m \geqslant 1$ is now~$m \geqslant \mathbf{t}+1$).
\end{enumerate}
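A direct transcription of this mixed strategy, reusing the function \texttt{galloping\_search} from the previous sketch, could read as follows (our transcription of the two steps above).
\begin{verbatim}
def mcilroy_search(query, t):
    # Step C_1: naive strategy B_0 for up to t queries.
    for x in range(1, t + 1):
        if query(x):
            return x              # m <= t was found in m queries
    # Step C_2: now m >= t + 1; apply B_1 to the shifted secret m - t.
    m_shifted, _ = galloping_search(lambda x: query(x + t))
    return m_shifted + t
\end{verbatim}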
Once such a parameter~$\mathbf{t}$ is fixed,
McIlroy's mixed strategy allows retrieving~$m$
in~$\mathsf{cost}_{\mathbf{t}}(m)$ queries, where~$\mathsf{cost}_{\mathbf{t}}(m) = m$ if~$m \leqslant \mathbf{t}+2$, and~$\mathsf{cost}_{\mathbf{t}}(m) =
\mathbf{t} + 2 \lceil \log_2(m - \mathbf{t}) \rceil$
if~$m \geqslant \mathbf{t}+3$.
In practice, however, we will replace this cost function by
the following simpler upper bound.
\begin{lemma}\label{lem:cost-bound}
For all~$\mathbf{t} \geqslant 0$ and~$m \geqslant 1$, we have~$\mathsf{cost}_{\mathbf{t}}(m) \leqslant \mathsf{cost}_{\mathbf{t}}^\ast(m)$,
where
\[\mathsf{cost}_{\mathbf{t}}^\ast(m) =
\min\{(1 + 1 / (\mathbf{t}+3)) m, \mathbf{t} + 2 +
2 \log_2(m + 1)\}.\]
\end{lemma}
\begin{proof}
Since the desired inequality is immediate when~$m \leqslant \mathbf{t}+2$, we assume
that~$m \geqslant \mathbf{t}+3$.
In that case, we already have~$\mathsf{cost}_{\mathbf{t}}(m) \leqslant
\mathbf{t} + 2 (\log_2(m-\mathbf{t})+1) \leqslant
\mathbf{t}+2+\log_2(m+1)$,
and we prove now that~$\mathsf{cost}_\mathbf{t}(m) \leqslant m+1$.
Indeed, let~$u = m - \mathbf{t}$ and let~$f \colon x \mapsto x-1-2\log_2(x)$.
The function~$f$ is positive and increasing
on the interval~$[7,+\infty)$.
Thus, it suffices to check by hand that~$(m+1) - \mathsf{cost}_\mathbf{t}(m) = 0,1,0,1$
when~$u = 3,4,5,6$,
and that~$(m+1) - \mathsf{cost}_\mathbf{t}(m) \geqslant f(u) > 0$ when~$u \geqslant 7$. It follows, as expected, that~$\mathsf{cost}_{\mathbf{t}}(m) \leqslant m+1 \leqslant
(1 + 1/(\mathbf{t}+3))m$.
\end{proof}
The above discussion immediately provides us with
a cost model for
the number of comparisons performed when merging two runs.
\begin{proposition}\label{pro:2}
Let~$A$ and~$B$ be two non-decreasing runs of
lengths~$a$ and~$b$, with values in~$\{1,2,\ldots,\sigma\}$.
For each integer~$i \leqslant \sigma$, let~$a_{\rightarrow i}$ (respectively,~$b_{\rightarrow i}$) be the number of elements
in~$A$ (respectively, in~$B$) with value~$i$.
Using a merging sub-routine based on McIlroy's mixed strategy
for a fixed parameter~$\mathbf{t}$,
we need at most
\[1 + \sum_{i=1}^\sigma
\mathsf{cost}_{\mathbf{t}}^\ast(a_{\rightarrow i}) +
\mathsf{cost}_{\mathbf{t}}^\ast(b_{\rightarrow i})\]
element comparisons to merge the runs~$A$ and~$B$.
\end{proposition}
\begin{proof}
First, assume that~$a_{\rightarrow i} = 0$ for some~$i \geqslant 2$.
Replacing every value~$j \geqslant i+1$ with the value~$j-1$
in both arrays~$A$ and~$B$ does not change the behaviour
of the sub-routine and decreases the value of~$\sigma$.
Moreover, the function~$\mathsf{cost}_{\mathbf{t}}^\ast$ is
sub-additive, i.e., we have~$\mathsf{cost}_{\mathbf{t}}^\ast(m) +
\mathsf{cost}_{\mathbf{t}}^\ast(m') \geqslant \mathsf{cost}_{\mathbf{t}}^\ast(m+m')$ for all~$m \geqslant 0$ and~$m' \geqslant 0$.
Hence, without loss of generality, we assume
that~$a_{\rightarrow i} \geqslant 1$ for all~$i \geqslant 2$.
Similarly, we assume without loss of generality that~$b_{\rightarrow i} \geqslant 1$ for all~$i \leqslant \sigma-1$.
Under these assumptions, the array~$C$ that results from merging~$A$ and~$B$ consists of~$a_{\rightarrow 1}$ elements from~$A$, then~$b_{\rightarrow 1}$ elements
from~$B$,~$a_{\rightarrow 2}$ elements from~$A$,~$b_{\rightarrow 2}$ elements from~$B$,
\ldots,~$a_{\rightarrow \sigma}$ elements
from~$A$ and~$b_{\rightarrow \sigma}$ elements from~$B$.
Thus, the galloping
sub-routine consists in discovering successively
the integers~$a_{\rightarrow 1},b_{\rightarrow 1},a_{\rightarrow 2},b_{\rightarrow 2},\ldots,a_{\rightarrow \sigma}$,
each time using McIlroy's strategy based on the two steps~$\mathsf{C}_1$ and~$\mathsf{C}_2$;
checking whether~$a_{\rightarrow 1} = 0$ requires one more comparison
than prescribed by McIlroy's strategy, and the integer~$b_{\rightarrow \sigma}$
does not need to be discovered once the entire
run~$A$ has been scanned by the merging sub-routine.
\end{proof}
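The bound of Proposition~\ref{pro:2} is easy to evaluate in practice. The following sketch (ours) computes it for two given runs and a fixed parameter~$\mathbf{t}$.
\begin{verbatim}
from collections import Counter
from math import log2

def cost_t_star(t, m):
    # Upper bound cost*_t(m) of Lemma (cost-bound); cost*_t(0) = 0.
    if m == 0:
        return 0.0
    return min((1.0 + 1.0 / (t + 3)) * m, t + 2 + 2 * log2(m + 1))

def galloping_cost_bound(run_a, run_b, t):
    # Bound of Proposition 2 on the element comparisons needed to
    # merge two non-decreasing runs with the t-galloping sub-routine.
    count_a, count_b = Counter(run_a), Counter(run_b)
    values = set(count_a) | set(count_b)
    return 1 + sum(cost_t_star(t, count_a[v]) + cost_t_star(t, count_b[v])
                   for v in values)

# Example: two runs over few distinct values are cheap to merge:
# galloping_cost_bound([1]*100, [1]*50 + [2]*50, t=7) is about 64,
# far below the naive cost of 100 + 100 = 200.
\end{verbatim}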
We simply call~$\mathbf{t}$-\emph{galloping sub-routine} the
merging sub-routine based on McIlroy's mixed strategy
for a fixed parameter~$\mathbf{t}$; when the value of~$\mathbf{t}$ is irrelevant, we simply omit mentioning it.
Then, the quantity
\[1 + \sum_{i=1}^\sigma \mathsf{cost}_{\mathbf{t}}^\ast(a_{\rightarrow i}) +
\mathsf{cost}_{\mathbf{t}}^\ast(b_{\rightarrow i})\]
is called the ($\mathbf{t}$-)\emph{galloping cost} of merging~$A$ and~$B$.
By construction, this cost never exceeds~$1 + 1/(\mathbf{t}+3)$ times the \emph{naïve
cost} of merging~$A$ and~$B$, which is simply defined as~$a + b$.
Below, we study the impact of using the galloping sub-routine
instead of the naïve one, which amounts to replacing
naïve merge costs by their galloping variants.
Note that using this new galloping
cost measure is relevant
only if the cost of element comparisons is significantly
larger than the cost of element (or pointer) moves.
For example, even if we were lucky enough to observe that
each element in~$B$ is smaller than
each element in~$A$, we would perform
only~$\mathcal{O}(\log(a+b))$ element comparisons, but as many
as~$\Theta(a+b)$ element moves.
\paragraph*{Updating the parameter~$\mathbf{t}$}
We assumed above that the parameter~$\mathbf{t}$ did not
vary while the runs~$A$ and~$B$ were being merged with each other.
This is not how~$\mathbf{t}$ behaves in TimSort\xspace's
implementation of the galloping sub-routine.
Instead, the parameter~$\mathbf{t}$ is
initially set to a
constant ($\mathbf{t} = 7$ in Java), and
may change during the algorithm as follows.
In step~$\mathsf{C}_2$,
after using the strategy~$\mathsf{B}_1$,
and depending on the value of~$m$ that we found,
one may realise that using~$\mathsf{B}_0$ might have been
less expensive than using~$\mathsf{B}_1$.
In that case, the value of~$\mathbf{t}$ increases by~$1$,
and otherwise (i.e., if using~$\mathsf{B}_1$ was indeed a smart
move), it decreases by~$1$ (with a minimum of~$0$).
When sorting a random permutation,
changing the value of~$\mathbf{t}$
in that way decreases the average overhead of
sometimes using~$\mathsf{B}_1$ instead of~$\mathsf{B}_0$ to a constant.
More generally, the worst-case overhead is limited
to a sub-linear~$\mathcal{O}(\sqrt{n \log(n)})$,
as proved in Proposition~\ref{pro:TS-update}.
Deciding whether our results remain valid
when~$\mathbf{t}$ is updated
as in TimSort\xspace remains an open question.
However, in Section~\ref{subsec:variable-t},
we propose and study the following alternative update policy:
when merging runs of lengths~$a$ and~$b$,
we set~$\mathbf{t} = \lceil \log_2(a+b) \rceil$.
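In code, the two update policies could be sketched as follows; this is our transcription, and actual library implementations of TimSort\xspace differ in constants and details.
\begin{verbatim}
from math import ceil, log2

def timsort_update(t, galloping_paid_off):
    # TimSort-style policy: decrease t (down to 0) after a
    # profitable galloping episode, increase it otherwise.
    return max(0, t - 1) if galloping_paid_off else t + 1

def length_based_t(a, b):
    # Alternative policy studied below: when merging runs of
    # lengths a and b, set t = ceil(log2(a + b)).
    return ceil(log2(a + b))
\end{verbatim}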
\begin{proposition}\label{pro:TS-update}
Let~$\mathcal{A}$ be a stable merge sort algorithm with the
middle-growth property, and let~$A$ be an array
of length~$n$.
Let~$\mathsf{c}_1$ be the number of comparisons
that~$\mathcal{A}$ requires to sort~$A$ when it uses
the naïve sub-routine, and let~$\mathsf{c}_2$
be the number of comparisons that~$\mathcal{A}$ requires
to sort~$A$ when it uses the galloping sub-routine
with TimSort\xspace's update policy for the parameter~$\mathbf{t}$.
We have~$\mathsf{c}_2 \leqslant \mathsf{c}_1 +
\mathcal{O}(\sqrt{n \log(n)})$.
\end{proposition}
\begin{proof}
Below, we group the comparisons
that~$\mathcal{A}$ performs while sorting~$A$ into
\emph{steps}, which we will consider as individual
units. Steps are formed as follows.
Let~$R$ and~$R'$ be consecutive runs
that~$\mathcal{A}$ is about to merge,
and let us subdivide their concatenation~$R \cdot R'$ into~$\sigma$ dual runs~$S_1,S_2,\ldots,S_\sigma$ (note that these are
the dual runs of~$R \cdot R'$ and not the dual
runs of~$A$, i.e., some elements of~$R$ may belong
to a given dual run~$S_i$ of~$R \cdot R'$ while
belonging to distinct dual runs of~$A$).
Each step consists of those comparisons used to
discover the
elements of~$R$ (resp.,~$R'$) that belong to
a given dual run~$S_i$.
Thus, the comparisons used to merge~$R$ and~$R'$
are partitioned into~$2\sigma-1$ steps:
in the first step, we discover those elements of~$R$ that belong to~$S_1$; in the second step, those elements of~$R'$ that belong to~$S_1$; then, those elements of~$R$ that belong to~$S_2$; \ldots; and, finally,
those elements of~$R$ that belong to~$S_\sigma$.
Let~$s_1,s_2,\ldots,s_\ell$ be the steps into
which the comparisons performed by~$\mathcal{A}$ are
grouped.
By construction, each step~$s_i$ consists in
finding an integer~$m_i \geqslant 1$ (or, possibly,~$m_i = 0$ if~$s_i$ is the first step of a merge
between two consecutive runs).
If~$\mathcal{A}$ uses TimSort\xspace's update policy, the step~$s_i$
consists in using
McIlroy's strategy for a given parameter~$\mathbf{t}(s_i)$ that depends on the run~$s_i$.
We also denote by~$\mathbf{t}(s_{\ell+1})$ the parameter
value obtained after~$\mathcal{A}$ has finished sorting
the array~$A$.
Let~$\delta_i = 0$ if~$m_i \leqslant \mathbf{t}(s_i)$,
$\delta_i = 1$ if~$\mathbf{t}(s_i)+1 \leqslant m_i
\leqslant \mathbf{t}(s_i)+6$
and~$\delta_i = -1$ if~$\mathbf{t}(s_i)+7 \leqslant m_i$.
The naïve strategy~$\mathsf{B}_0$ requires~$m_i$ comparisons
to find that integer~$m_i$, and McIlroy's strategy
requires up to~$m_i + \delta_i$ comparisons.
Since~$\mathbf{t}(s_{i+1}) = \max\{0,\mathbf{t}(s_i) + \delta_i\}$, McIlroy's strategy never uses
more than~$m_i + \mathbf{t}(s_{i+1}) - \mathbf{t}(s_i)$
comparisons.
Consequently, the overhead of using TimSort\xspace's update
policy instead of a naïve merging sub-routine
is at most~$\mathbf{t}(s_{\ell+1}) - \mathbf{t}(s_1)$.
Finally, let~$\mu_i = m_1+\ldots+m_i$ for all~$i \leqslant \ell$.
We show that~$3\mu_i \geqslant \tau^2$ whenever~$\mathbf{t}(s_i) = \mathbf{t}(s_1) + \tau$.
Indeed, if~$\tau \geqslant 1$ and if~$s_i$ is
the first step for which~$\mathbf{t}(s_i) = \mathbf{t}(s_1) + \tau$,
we have~$\mathbf{t}(s_{i-1}) = \mathbf{t}(s_1) + \tau-1$.
It follows that~$3 \mu_{i-1} \geqslant (\tau-1)^2$ and~$m_i \geqslant \mathbf{t}(s_1) + \tau \geqslant \tau$,
which proves that~$3 \mu_i \geqslant (\tau-1)^2 + 3 \tau \geqslant
\tau^2$. Thus, we conclude that~$\mathbf{t}(s_{\ell+1}) - \mathbf{t}(s_1) \leqslant \sqrt{3 \mu_\ell}$. Since
Theorem~\ref{thm:middle-few-naïve} below proves that~$\mu_\ell = \mathcal{O}(n \log(n))$, the desired result
follows.
\end{proof}
Note that, in general, if~$\mathcal{A}$ is any stable
natural merge sort algorithm, it requires at
most~$\mathcal{O}(n^2)$ comparisons in the worst case,
and therefore using the galloping sub-routine
with TimSort\xspace's update policy cannot increase the number of
comparisons performed by more than~$\mathcal{O}(n)$.
\section{Fast-growth and (tight) middle-growth properties}
\label{sec:fast-growth}
In this section, we focus on two novel properties of
stable natural merge sorts, which we call \emph{fast-growth}
and \emph{middle-growth}, and on a variant of
the latter property, which we call
\emph{tight middle-growth}.
These properties capture
all TimSort\xspace-like natural merge sorts invented in the last
decade, and explain why these sorting algorithms require
only~$\mathcal{O}(n + n \mathcal{H})$ element moves and~$\mathcal{O}(n + n \min\{\mathcal{H},\mathcal{H}^\ast\})$ element comparisons.
We will prove in Section~\ref{sec:pos-fast-growth}
that many algorithms have these properties.
When applying a stable natural merge sort on an array~$A$, the elements of~$A$ are clustered into
monotonic sub-arrays called \emph{runs},
and the algorithm consists in repeatedly merging
consecutive runs into one larger run until the array itself
contains only one run.
Consequently, each element may undergo
several successive merge operations.
\emph{Merge trees}~\cite{BaNa13,Ju20,munro2018nearly} are
a convenient way to represent the succession of runs
that ever occur while~$A$ is being sorted.
\begin{definition}\label{def:merge-tree}
The \emph{merge tree} induced by a stable natural merge
sort algorithm on an array~$A$ is
the binary rooted tree~$\mathcal{T}$ defined as follows.
The nodes of~$\mathcal{T}$ are all the runs
that were present in the initial array~$A$
or that resulted from merging two runs.
The runs of the initial array are the leaves of~$\mathcal{T}$,
and when two consecutive runs~$R_1$ and~$R_2$ are merged with
each other into a new run~$\overline{R}$,
the run~$R_1$ spanning positions immediately to the left
of those of~$R_2$, they form the
left and the right children of the node~$\overline{R}$,
respectively.
\end{definition}
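For experimentation purposes, Definition~\ref{def:merge-tree} can be materialised by replaying a trace of merge operations. The Python sketch below is a convenience of these notes; the nested-pair representation is our own choice, not imposed by the definition.
\begin{verbatim}
def build_merge_tree(run_lengths, merge_positions):
    """Replay a trace of merges and return the merge tree.

    run_lengths lists the leaf runs from left to right; each entry of
    merge_positions is an index i meaning that the i-th and (i+1)-th
    runs currently present are merged.  A leaf is the integer length
    of a run; an inner node is the pair (left child, right child).
    """
    forest = list(run_lengths)
    for i in merge_positions:
        # forest[i] spans positions to the left of forest[i+1], so they
        # become the left and right children of the resulting run.
        forest[i:i + 2] = [(forest[i], forest[i + 1])]
    assert len(forest) == 1, "the array must end up as a single run"
    return forest[0]

# Runs of lengths 3, 1 and 2; the two rightmost runs are merged first.
assert build_merge_tree([3, 1, 2], [1, 0]) == (3, (1, 2))
\end{verbatim}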
Such trees ease the task of referring to
several runs that might not have occurred
simultaneously. In particular, we will often refer
to the~$i$\textsuperscript{th} \emph{ancestor}
of a run~$R$, which is just~$R$ itself if~$i = 0$,
or the parent, in the tree~$\mathcal{T}$,
of the~$(i-1)$\textsuperscript{th}
ancestor of~$R$ if~$i \geqslant 1$.
That ancestor will be denoted by~$\aR{i}$.
Before further manipulating these runs,
let us first present some notation about runs and
their lengths, which we will frequently use.
We will commonly denote runs with capital letters,
possibly with some index or adornment,
and we will then denote the length of such a run
with the same lower-case letter and the same
index or adornment. For instance,
runs named~$R$,~$R_i$,~$\aR{j}$,~$Q'$ and~$\overline{S}$
will have respective lengths~$r$,~$r_i$,~$\ar{j}$,~$q'$ and~$\overline{s}$.
Finally, we say that a run~$R$ is a \emph{left} run if it
is the left child of its parent, and that it is a
\emph{right} run if it is a right child. The root
of a merge tree is neither left nor right.
\begin{definition}\label{def:fast-growth}
We say that a stable natural merge sort algorithm~$\mathcal{A}$ has the \emph{fast-growth
property} if it satisfies the following statement:
\begin{quote}
There exist an integer~$\ell \geqslant 1$ and a real number~$\alpha > 1$
such that, for every merge tree~$\mathcal{T}$
induced by~$\mathcal{A}$ and every run~$R$ at depth~$\ell$
or more in~$\mathcal{T}$, we have~$\ar{\ell} \geqslant
\alpha r$.
\end{quote}
We also say that~$\mathcal{A}$ has the
\emph{middle-growth property} if it satisfies
the following statement:
\begin{quote}
There exists a real number~$\beta > 1$
such that, for every merge tree~$\mathcal{T}$
induced by~$\mathcal{A}$, every integer~$h \geqslant 0$
and every run~$R$ of height~$h$ in~$\mathcal{T}$, we have~$r \geqslant \beta^h$.
\end{quote}
Finally, we say that~$\mathcal{A}$ has the
\emph{tight middle-growth property} if
it satisfies the following statement:
\begin{quote}
There exists an integer~$\gamma \geqslant 0$
such that, for every merge tree~$\mathcal{T}$
induced by~$\mathcal{A}$, every integer~$h \geqslant 0$
and every run~$R$ of height~$h$ in~$\mathcal{T}$, we have~$r \geqslant 2^{h-\gamma}$.
\end{quote}
\end{definition}
Since every node of height~$h \geqslant 1$
in a merge tree
is a run of length at least~$2$,
each algorithm with the fast-growth property
or with the tight middle-growth property
also has the middle-growth property:
indeed, it suffices to
choose~$\beta = \min\{2,\alpha\}^{1/\ell}$
in the first case,
and~$\beta = 2^{1/(\gamma+1)}$ in the second one.
As a result, the fast-growth and tight middle-growth properties
are stronger than
the middle-growth property, and indeed they have stronger consequences.
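These properties can also be tested mechanically on concrete merge trees. The Python sketch below, a debugging aid of these notes rather than a proof device, checks the fast-growth inequality for given~$\ell$ and~$\alpha$, using the nested-pair representation of the previous sketch.
\begin{verbatim}
def check_fast_growth(tree, ell, alpha):
    """Check that every run R at depth >= ell satisfies
    length(ell-th ancestor of R) >= alpha * length(R)."""
    def length(node):
        return node if isinstance(node, int) else \
            length(node[0]) + length(node[1])

    ok = True

    def walk(node, ancestors):
        nonlocal ok
        # ancestors[0] is the parent, ancestors[ell-1] the ell-th ancestor
        if len(ancestors) >= ell:
            ok = ok and length(ancestors[ell - 1]) >= alpha * length(node)
        if not isinstance(node, int):
            walk(node[0], [node] + ancestors)
            walk(node[1], [node] + ancestors)

    walk(tree, [])
    return ok

assert check_fast_growth((3, (1, 2)), ell=1, alpha=1.5)
\end{verbatim}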
\begin{theorem}\label{thm:fast-few-naïve}
Let~$\mathcal{A}$ be a stable natural merge sort algorithm
with the fast-growth property.
If~$\mathcal{A}$ uses either
the galloping or the naïve
sub-routine for merging runs, it
requires~$\mathcal{O}(n + n\mathcal{H})$ element comparisons and
moves to sort arrays of length~$n$ and run-length entropy~$\mathcal{H}$.
\end{theorem}
\begin{proof}
Let~$\ell \geqslant 1$ and~$\alpha > 1$ be the integer
and the real number
mentioned in the definition of the statement
``$\mathcal{A}$ has the fast-growth property''.
Let~$A$ be an array of length~$n$ with~$\rho$ runs of lengths~$r_1,r_2,\ldots,r_\rho$,
let~$\mathcal{T}$ be the merge tree induced
by~$\mathcal{A}$ on~$A$, and let~$d_i$ be the depth
of the run~$R_i$ in the tree~$\mathcal{T}$.
The algorithm~$\mathcal{A}$ uses~$\mathcal{O}(n)$ element comparisons and
element moves to
delimit the runs it will then merge and to make
them non-decreasing.
Then, both the galloping and the naïve merging sub-routines
require~$\mathcal{O}(a+b)$ element comparisons and moves to merge two
runs of lengths~$a$ and~$b$.
Therefore, it suffices to prove that~$\sum_{R \in \mathcal{T}} r = \mathcal{O}(n + n \mathcal{H})$.
Consider some leaf~$R_i$ of the tree~$\mathcal{T}$, and let~$k = \lfloor d_i / \ell \rfloor$. The run
$\aR{k\ell}_i$ is a run of size~$\ar{k\ell}_i \geqslant \alpha^k r_i$, and thus~$n \geqslant
\ar{k\ell}_i \geqslant \alpha^k r_i$. Hence,~$d_i+1 \leqslant \ell(k+1) \leqslant \ell \left(\log_\alpha(n/r_i) + 1\right)$, and we conclude that
\[
\sum_{R \in \mathcal{T}} r = \sum_{i=1}^\rho (d_i+1) r_i \leqslant
\ell \sum_{i=1}^\rho \left(r_i \log_\alpha(n/r_i) + r_i\right)
= \ell (n \mathcal{H} / \log_2(\alpha) + n) = \mathcal{O}(n + n\mathcal{H}).\qedhere\]
\end{proof}
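The quantity~$n\mathcal{H}$ appearing in this bound is straightforward to evaluate. The Python snippet below, a small helper of these notes, matches the formula~$\mathcal{H} = \sum_{i=1}^\rho (r_i/n)\log_2(n/r_i)$.
\begin{verbatim}
import math

def run_length_entropy(run_lengths):
    """H = sum over runs of (r_i / n) * log2(n / r_i)."""
    n = sum(run_lengths)
    return sum(r / n * math.log2(n / r) for r in run_lengths)

# A single run has zero entropy; rho equal runs have entropy log2(rho).
assert run_length_entropy([42]) == 0
assert abs(run_length_entropy([5] * 8) - 3) < 1e-9
\end{verbatim}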
Similar but weaker results also hold for
algorithms with the (tight)
middle-growth property.
\begin{theorem}
\label{thm:middle-few-naïve}
Let~$\mathcal{A}$ be a stable natural merge sort algorithm
with the middle-growth property.
If~$\mathcal{A}$ uses either
the galloping or the naïve
sub-routine for merging runs, it
requires~$\mathcal{O}(n \log(n))$ element comparisons and
moves to sort arrays of length~$n$.
If, furthermore,~$\mathcal{A}$ has the tight middle-growth
property, it requires at most~$n \log_2(n) + \mathcal{O}(n)$ element comparisons and moves to sort arrays
of length~$n$.
\end{theorem}
\begin{proof}
Let us borrow the notations from the previous
proof, and
let~$\beta > 1$ be the real number mentioned in
the definition of the statement ``$\mathcal{A}$ has the
middle-growth property''.
Like in the proof of
Theorem~\ref{thm:fast-few-naïve}, it suffices
to show that~$\sum_{R \in \mathcal{T}} r = \mathcal{O}(n \log(n))$.
The~$d_i$\textsuperscript{th} ancestor of a run~$R_i$ is the root of~$\mathcal{T}$, and thus~$n \geqslant
\beta^{d_i}$. Hence,~$d_i \leqslant
\log_\beta(n)$, which proves that
\[\sum_{R \in \mathcal{T}} r = \sum_{i=1}^\rho (d_i+1) r_i \leqslant
\sum_{i=1}^\rho (\log_\beta(n)+1) r_i =
(\log_\beta(n)+1) n = \mathcal{O}(n \log(n)).\]
Similarly, if~$\mathcal{A}$ has the tight middle-growth
property, let~$\gamma$ be the integer mentioned in
the definition of the statement ``$\mathcal{A}$ has the
tight middle-growth property''.
This
time,~$n \geqslant 2^{d_i-\gamma}$, which proves
that~$d_i \leqslant \log_2(n) + \gamma$ and that
\[\sum_{R \in \mathcal{T}} r = \sum_{i=1}^\rho (d_i+1) r_i \leqslant
\sum_{i=1}^\rho (\log_2(n)+\gamma+1) r_i =
(\log_2(n)+\gamma+1) n = n \log_2(n) + \mathcal{O}(n). \qedhere\]
\end{proof}
Theorems~\ref{thm:fast-few-naïve}
and~\ref{thm:middle-few-naïve} provide us with
a simple framework for recovering well-known
results on the complexity of many algorithms.
By contrast, Theorem~\ref{thm:middle-few} provides
new complexity guarantees
on the number of element comparisons
performed by algorithms with the middle-growth property, provided that they use the galloping
sub-routine.
\begin{theorem}\label{thm:middle-few}
Let~$\mathcal{A}$ be a stable natural merge sort algorithm
with the middle-growth property. If~$\mathcal{A}$ uses the galloping
sub-routine for merging runs,
it requires~$\mathcal{O}(n + n\mathcal{H}^\ast)$ element comparisons
to sort arrays of length~$n$ and dual run-length entropy~$\mathcal{H}^\ast$.
\end{theorem}
\begin{proof}
All comparisons performed by the galloping sub-routine
are of the form~$A[i] \leqslant^?\! A[j]$,
where~$i$ and~$j$ are positions such that~$i < j$.
Thus, the behaviour of~$\mathcal{A}$, i.e., the element comparisons
and element moves it performs, is invariant
under lexicographic equivalence, as illustrated in
Figure~\ref{fig:dual-runs}.
Consequently, starting from an array~$A$ of length~$n$
with~$\sigma$ dual runs, we may construct an array~$B$ by
setting~$B[j] \eqdef i$ whenever~$A[j]$ belongs to the
dual run~$S_i$,
and we may now assume that~$A$ coincides with~$B$.
This assumption allows us to directly use
Proposition~\ref{pro:2}, whose presentation would have been
more complicated if we had referred to dual runs of an
underlying array instead of referring directly to distinct
values.
Let~$\beta > 1$ be the real number
mentioned in the definition of the statement
``$\mathcal{A}$ has the middle-growth property''.
Let~$A$ be an array of length~$n$ whose values are integers
from~$1$ to~$\sigma$,
let~$s_1,s_2,\ldots,s_\sigma$ be the lengths of its
dual runs, and let~$\mathcal{T}$ be the merge tree induced
by~$\mathcal{A}$ on~$A$.
The algorithm~$\mathcal{A}$ uses~$\mathcal{O}(n)$ element comparisons to
delimit the runs it will then merge and to make
them non-decreasing. We prove now
that merging these runs requires
only~$\mathcal{O}(n + n \mathcal{H}^\ast)$ comparisons.
For every run~$R$ in~$\mathcal{T}$ and every integer~$i \leqslant
\sigma$, let~$r_{\rightarrow i}$
be the number of elements of~$R$
with value~$i$.
In the galloping cost model,
merging two runs~$R$ and~$R'$ requires at most
$1 + \sum_{i=1}^\sigma \mathsf{cost}_{\mathbf{t}}^\ast(r_{\rightarrow i}) + \mathsf{cost}_{\mathbf{t}}^\ast(r'_{\rightarrow i})$
element comparisons.
Since less than~$n$ such merge operations are performed,
and since~$n = \sum_{i=1}^\sigma s_i$ and~$n \mathcal{H}^\ast = \sum_{i=1}^\sigma s_i \log(n / s_i)$,
it remains to show that
\[\sum_{R \in \mathcal{T}} \mathsf{cost}_{\mathbf{t}}^\ast(r_{\rightarrow i}) = \mathcal{O}(s_i + s_i \log(n / s_i))\]
for all~$i \leqslant \sigma$.
Then, since~$\mathsf{cost}_{\mathbf{t}}^\ast(m) \leqslant (\mathbf{t}+1) \mathsf{cost}_0^\ast(m)$
for all parameter values~$\mathbf{t} \geqslant 0$ and
all~$m \geqslant 0$, we assume without loss of
generality that~$\mathbf{t} = 0$.
Consider now some integer~$h \geqslant 0$,
and let~$\mathsf{C}_0(h) = \sum_{R \in \mathcal{R}_h} \mathsf{cost}_0^\ast(r_{\rightarrow i})$,
where~$\mathcal{R}_h$ denotes the set of runs at height~$h$
in~$\mathcal{T}$.
Since no run in~$\mathcal{R}_h$ descends
from another one, we already have~$\mathsf{C}_0(h) \leqslant
2 \sum_{R \in \mathcal{R}_h} r_{\rightarrow i} \leqslant 2 s_i$ and~$\sum_{R \in \mathcal{R}_h} r \leqslant n$.
Moreover, by definition of~$\beta$, each run~$R \in \mathcal{R}_h$
is of length~$r \geqslant \beta^h$, and thus~$|\mathcal{R}_h| \leqslant n / \beta^h$.
Let also~$f \colon x \mapsto 2 + 2 \log_2(x+1)$,~$g \colon x \mapsto x \, f(s_i / x)$
and~$\lambda = \lceil\log_\beta(n / s_i) \rceil$.
Both~$f$ and~$g$ are positive and concave on the interval~$(0,+\infty)$, and therefore also increasing on that interval.
It follows that, for all~$h \geqslant 0$,
\begin{align*}
\mathsf{C}_0(\lambda + h) & \leqslant
\sum_{R \in \mathcal{R}_{\lambda + h}} f(r_{\rightarrow i})
\leqslant
|\mathcal{R}_{\lambda + h}| \, f\big({\textstyle\sum_{R \in \mathcal{R}_{\lambda + h}} r_{\rightarrow i} / |\mathcal{R}_{\lambda + h}|}\big) \leqslant
g\big(|\mathcal{R}_{\lambda + h}|\big) \\
& \leqslant
g\big(n / \beta^{\lambda + h}\big) \leqslant
g\big(s_i \beta^{-h}\big) =
\big(2+2 \log_2(\beta^h+1)\big) s_i \beta^{-h} \\
& \leqslant \big(2 + 2 \log_2(2 \beta^h)\big) s_i \beta^{-h} =
\big(4 + 2 h \log_2(\beta)\big) s_i \beta^{-h}.
\end{align*}
Inequalities on the first line hold
by definition of~$\mathsf{cost}_0^\ast$,
because~$f$ is concave,
and because~$f$ is increasing;
inequalities on the second line hold
because~$g$ is increasing and
because~$|\mathcal{R}_h| \leqslant n / \beta^h$.
We conclude that
\begin{align*}
\sum_{R \in \mathcal{T}} \mathsf{cost}_0^\ast(r_{\rightarrow i})
& = \sum_{h \geqslant 0} \mathsf{C}_0(h)
= \sum_{h=0}^{\lambda-1} \mathsf{C}_0(h) + \sum_{h \geqslant 0}
\mathsf{C}_0(\lambda + h) \\
& \leqslant 2 \lambda s_i +
4 s_i \sum_{h \geqslant 0} \beta^{-h} +
2 \log_2(\beta) s_i \sum_{h \geqslant 0} h \beta^{-h} \\
& \leqslant
\mathcal{O}\big(s_i (1 + \log(n / s_i))\big) +
\mathcal{O}\big(s_i\big) +
\mathcal{O}\big(s_i\big) = \mathcal{O}\big(s_i+ s_i \log(n / s_i)\big). &&\qedhere
\end{align*}
\end{proof}
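The relabelling used at the beginning of the above proof can be made concrete. In the Python sketch below, we compute dual runs by scanning the elements in stable sorted order and starting a new dual run whenever the original position decreases; this decomposition rule is an assumption of the sketch, chosen to match the intended behaviour, rather than a definition taken from the text above.
\begin{verbatim}
def dual_run_labels(a):
    """Return the array B with B[j] = i when a[j] lies in the
    dual run S_i; a stable sort moves B exactly as it moves a."""
    order = sorted(range(len(a)), key=lambda j: (a[j], j))  # stable
    b = [0] * len(a)
    label = 1
    for t, j in enumerate(order):
        if t > 0 and j < order[t - 1]:
            label += 1          # position decreased: new dual run
        b[j] = label
    return b

assert dual_run_labels([2, 1, 2, 1]) == [2, 1, 2, 1]
\end{verbatim}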
\section{A few algorithms with the fast- and (tight) middle-growth properties}
\label{sec:pos-fast-growth}
In this section, we briefly present the algorithms
mentioned in Section~\ref{sec:intro} and prove
that each of them enjoys the fast-growth property
and/or the (tight) middle-growth property.
Before treating these algorithms one by one,
we first sum up our results.
\begin{theorem}\label{thm:fast}
The algorithms TimSort\xspace, \textalpha-MergeSort\xspace, PowerSort\xspace, PeekSort\xspace and adaptive ShiversSort\xspace
have the fast-growth property.
\end{theorem}
An immediate consequence of Theorems~\ref{thm:fast-few-naïve}
and~\ref{thm:middle-few} is that
these algorithms sort arrays of length~$n$ and
run-length entropy~$\mathcal{H}$ in time~$\mathcal{O}(n + n\mathcal{H})$,
which was already well-known,
and that, if used with the galloping merging sub-routine,
they only need~$\mathcal{O}(n + n \mathcal{H}^\ast)$ comparisons
to sort arrays of length~$n$
and dual run-length entropy~$\mathcal{H}^\ast$, which is a new result.
\begin{theorem}\label{thm:middle}
The algorithms NaturalMergeSort\xspace, ShiversSort\xspace and \textalpha-StackSort\xspace
have the middle-growth property.
\end{theorem}
Theorem~\ref{thm:middle-few} proves that,
if these three algorithms are used with the galloping
merging sub-routine,
they only need~$\mathcal{O}(n + n \mathcal{H}^\ast)$ comparisons
to sort arrays of length~$n$
and dual run-length entropy~$\mathcal{H}^\ast$.
By contrast, observe that they can be implemented
by using a stack, following TimSort\xspace's own implementation,
but where only the two top runs of the stack may be merged.
It is proved in~\cite{Ju20} that such algorithms may require~$\omega(n + n \mathcal{H})$ comparisons to sort
arrays of length~$n$ and run-length entropy~$\mathcal{H}$.
Hence, Theorem~\ref{thm:fast-few-naïve} shows that
these three algorithms do \emph{not} have the
fast-growth property.
\begin{theorem}\label{thm:tight-middle}
The algorithms NaturalMergeSort\xspace, ShiversSort\xspace and PowerSort\xspace
have the tight middle-growth property.
\end{theorem}
Theorem~\ref{thm:middle-few-naïve} proves that
these algorithms sort arrays of length~$n$
in time~$n \log_2(n) + \mathcal{O}(n)$, which was already
well-known.
In Section~\ref{sec:precise-bounds-PoS}, we will
further improve our upper bounds on the number of
comparisons that these algorithms require when
sorting arrays of length~$n$
and dual run-length entropy~$\mathcal{H}^\ast$.
Note that the algorithms \textalpha-StackSort\xspace, \textalpha-MergeSort\xspace and TimSort\xspace
may require more than~$n \log_2(n) + \mathcal{O}(n)$
comparisons to sort arrays of length~$n$,
which proves that they do not have the tight
middle-growth property.
In Section~\ref{sec:precise-bounds-PoS},
we will prove that adaptive ShiversSort\xspace and PeekSort\xspace also fail to have
this property, although
they still enjoy complexity upper bounds similar to
those of algorithms with the tight middle-growth
property.
\subsection{Algorithms with the fast-growth property}
\label{subsec:other-fast}
\subsubsection{PowerSort\xspace}
\label{subsubsec:PoS}
The algorithm PowerSort\xspace is best defined by introducing
the notion of \emph{power}
of a run endpoint or of a run, and then
characterising the merge trees that PowerSort\xspace induces.
\begin{definition}\label{def:PoS:power}
Let~$A$ be an array of length~$n$, whose run decomposition
consists of runs~$R_1,R_2,\ldots,R_\rho$,
ordered from left to right.
For all integers~$i \leqslant \rho$, let~$e_i = r_1+\ldots+r_i$. We also abusively set~$e_{-1} = -\infty$ and~$e_{\rho+1} = n$.
When~$0 \leqslant i \leqslant \rho$, we denote by~$\mathbf{I}(i)$ the half-open interval~$(e_{i-1}+e_i,e_i+e_{i+1}]$.
The \emph{power} of~$e_i$, which we denote by~$p_i$,
is then defined as the least integer~$p$ such that~$\mathbf{I}(i)$ contains an element of the set~$\{k n / 2^{p-1} \colon k \in \mathbb{Z}\}$.
Thus, we (abusively) have~$p_0 = -\infty$ and~$p_\rho = 0$.
Finally, let~$R_{i \ldots j}$
be a run obtained by merging consecutive runs~$R_i,R_{i+1},\ldots,R_j$.
The \emph{power} of the run~$R_{i \ldots j}$
is defined as~$\max\{p_{i-1},p_j\}$.
\end{definition}
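Definition~\ref{def:PoS:power} translates directly into integer arithmetic: an element~$kn/2^{p-1}$ lies in an interval~$(\mathit{lo},\mathit{hi}]$ if and only if a multiple of~$2n$ lies in~$(\mathit{lo}\cdot 2^p, \mathit{hi}\cdot 2^p]$. The Python sketch below, a helper of these notes rather than a fragment of PowerSort\xspace's actual implementation, computes the powers~$p_1,\ldots,p_\rho$ accordingly.
\begin{verbatim}
def endpoint_powers(run_lengths):
    """Powers p_1, ..., p_rho of the run endpoints."""
    n = sum(run_lengths)
    e = [0]
    for r in run_lengths:
        e.append(e[-1] + r)          # e_i = r_1 + ... + r_i
    e.append(n)                      # e_{rho+1} = n
    ps = []
    for i in range(1, len(run_lengths) + 1):
        lo, hi = e[i - 1] + e[i], e[i] + e[i + 1]
        p = 0
        # while no multiple of 2n lies in (lo * 2^p, hi * 2^p]: p += 1
        while (hi << p) // (2 * n) * (2 * n) <= (lo << p):
            p += 1
        ps.append(p)
    return ps

# Three runs of lengths 3, 1 and 2: p_1 = 1, p_2 = 2 and p_3 = 0.
assert endpoint_powers([3, 1, 2]) == [1, 2, 0]
\end{verbatim}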
\begin{lemma}\label{lem:unique-min-power}
Each non-empty sub-interval~$I$ of the set~$\{0,\ldots,\rho\}$ contains exactly one integer~$i$
such that~$p_i \leqslant p_j$ for all~$j \in I$.
\end{lemma}
\begin{proof}
Since~$I$ is finite and non-empty, such an integer exists; assume now that it is not unique.
Since~$e_0$ is the only endpoint with power~$-\infty$,
we know that~$0 \notin I$.
Then, let~$a$ and~$b$ be elements of~$I$ such that~$a < b$ and~$p_a = p_b \leqslant p_j$ for all~$j \in I$,
and let~$p = p_a = p_b$.
By definition of~$p_a$ and~$p_b$, and by minimality of~$p$, there exist odd integers~$k$ and~$\ell$ such that~$k n / 2^{p - 1} \in \mathbf{I}(a)$ and~$\ell n / 2^{p - 1} \in \mathbf{I}(b)$.
Since~$\ell \geqslant k+1$,
the fraction~$(k + 1) n / 2^{p-1}$ belongs to
some interval~$\mathbf{I}(j)$ such that~$a \leqslant j \leqslant b$.
But since~$k+1$ is even, we know that~$p_j < p$,
which is absurd.
Thus, our initial assumption is invalid,
which completes the proof.
\end{proof}
\begin{corollary}
\label{cor:construction}
Let~$R_1,\ldots,R_\rho$ be the run decomposition of an array~$A$. There is exactly one merge tree~$\mathcal{T}$ that can be induced on~$A$ and
in which every inner node has a smaller power than its
children. Furthermore, for every
run~$R_{i \ldots j}$ in~$\mathcal{T}$, we have~$\max\{p_{i-1},p_j\} < \min\{p_i,p_{i+1},\ldots,p_{j-1}\}$.
\end{corollary}
\begin{proof}
Given a merge tree~$\mathcal{T}$,
let us prove that the following statements are equivalent:
\begin{itemize}
\item[$\mathsf{S}_1$:]\label{item:unique:1}
each inner node of~$\mathcal{T}$ has a smaller power than its children;
\item[$\mathsf{S}_2$:]\label{item:unique:2}
each run~$R_{i \ldots j}$ that belongs to~$\mathcal{T}$ has a power that
is smaller than all of~$p_i,\ldots,p_{j-1}$;
\item[$\mathsf{S}_3$:]\label{item:unique:3}
if a run~$R_{i \ldots j}$ is an inner node of~$\mathcal{T}$,
its children are the two runs~$R_{i \ldots k}$ and~$R_{k+1 \ldots j}$ such that~$p_k = \min\{p_i,\ldots,p_{j-1}\}$.
\end{itemize}
First, if~$\mathsf{S}_1$ holds,
we prove~$\mathsf{S}_3$ by induction on
the height~$h$ of the run~$R_{i \ldots j}$.
Indeed, if the restriction of~$\mathsf{S}_3$ to runs of
height less than~$h$ holds,
let~$R_{i \ldots k}$ and~$R_{k+1 \ldots j}$ be the
children of a run~$R_{i \ldots j}$ of height~$h$.
If~$i < k$, the run~$R_{i \ldots k}$ has two children~$R_{i \ldots \ell}$ and~$R_{\ell+1 \ldots k}$ such that~$p_\ell = \min\{p_i,\ldots,p_{k-1}\}$,
and the powers of these runs, i.e.,~$\max\{p_{i-1},p_\ell\}$
and~$\max\{p_\ell,p_k\}$, are greater than the power of~$R_{i \ldots k}$, i.e.,~$\max\{p_{i-1},p_k\}$, which proves
that~$p_\ell > p_k$.
It follows that~$p_k = \min\{p_i,\ldots,p_k\}$, and one proves
similarly that~$p_k = \min\{p_k,\ldots,p_{j-1}\}$,
thereby showing that~$\mathsf{S}_3$ also holds for runs
of height~$h$.
Then, if~$\mathsf{S}_3$ holds,
we prove~$\mathsf{S}_2$ by
induction on the depth~$d$ of the run~$R_{i \ldots j}$.
Indeed, if the restriction of~$\mathsf{S}_2$ to runs of
depth less than~$d$ holds,
let~$R_{i \ldots k}$ and~$R_{k+1 \ldots j}$ be the
children of a run~$R_{i \ldots j}$ of depth~$d$.
Lemma~\ref{lem:unique-min-power} and~$\mathsf{S}_3$
prove that~$p_k$ is the unique smallest element of~$\{p_i,\ldots,p_{j-1}\}$, and the induction hypothesis proves
that~$\max\{p_{i-1},p_j\} < p_k$.
It follows that both powers~$\max\{p_{i-1},p_k\}$ and~$\max\{p_k,p_j\}$ are smaller than all of~$p_i,\ldots,p_{k-1},p_{k+1},\ldots,p_{j-1}$,
thereby showing that~$\mathsf{S}_2$ also holds for runs
of depth~$d$.
Finally, if~$\mathsf{S}_2$ holds, let~$R_{i \ldots j}$ be
an inner node of~$\mathcal{T}$, with children~$R_{i \ldots k}$ and~$R_{k+1 \ldots j}$.
Property~$\mathsf{S}_2$ ensures that~$\max\{p_{i-1},p_j\} < p_k$, and thus that~$\max\{p_{i-1},p_j\}$ is smaller than both~$\max\{p_{i-1},p_k\}$ and~$\max\{p_k,p_j\}$,
i.e., that~$R_{i \ldots j}$ has a smaller power than its
children, thereby proving~$\mathsf{S}_1$.
In particular, once the array~$A$ and its run decomposition~$R_1,\ldots,R_\rho$ are fixed,~$\mathsf{S}_3$ provides us with a
deterministic top-down construction of the unique merge tree~$\mathcal{T}$ induced on~$A$ and that satisfies~$\mathsf{S}_1$:
the root of~$\mathcal{T}$ must be the run~$R_{1 \ldots \rho}$ and,
provided that some run~$R_{i \ldots j}$ belongs to~$\mathcal{T}$, where~$i < j$, Lemma~\ref{lem:unique-min-power}
proves that the integer~$k$ mentioned in~$\mathsf{S}_3$
is unique, which means that~$\mathsf{S}_3$ unambiguously
describes the children of~$R_{i \ldots j}$ in the tree~$\mathcal{T}$.
This proves the first claim of
Corollary~\ref{cor:construction}, and the second claim of
Corollary~\ref{cor:construction}
follows from the equivalence between the
statements~$\mathsf{S}_1$ and~$\mathsf{S}_2$.
\end{proof}
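Statement~$\mathsf{S}_3$ also provides a short recursive construction of this unique tree. The Python sketch below builds on the function \texttt{endpoint\_powers} from the previous sketch; Lemma~\ref{lem:unique-min-power} guarantees that the minimising index is unique.
\begin{verbatim}
def powersort_tree(run_lengths):
    """Top-down construction of the PowerSort merge tree, following
    statement S3: split R_{i..j} at the k minimising p_k."""
    ps = endpoint_powers(run_lengths)   # ps[k-1] is the power p_k

    def build(i, j):                    # builds the node R_{i..j}
        if i == j:
            return i                    # leaves are run indices
        k = min(range(i, j), key=lambda t: ps[t - 1])
        return (build(i, k), build(k + 1, j))

    return build(1, len(run_lengths))

assert powersort_tree([3, 1, 2]) == (1, (2, 3))
\end{verbatim}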
This leads to the following characterisation of
the algorithm PowerSort\xspace, which is proved in~\cite[Lemma 4]{munro2018nearly}, and which
Corollary~\ref{cor:construction} allows us to
consider as an alternative
definition of PowerSort\xspace.
\begin{definition}\label{def:PoS}
In every merge tree that
PowerSort\xspace induces, inner nodes have a smaller power than their children.
\end{definition}
\begin{lemma}\label{lem:PoS:power}
Let~$\mathcal{T}$ be a merge tree induced by PowerSort\xspace,
let~$R$ be a run of~$\mathcal{T}$ with power~$p$,
and let~$\aR{2}$ be its grandparent.
We have~$2^{p-2} r < n < 2^p \ar{2}$.
\end{lemma}
\begin{proof}
Let~$R_{i \ldots j}$ be the run~$R$.
Without loss of generality, we assume that~$p = p_j$, the case~$p = p_{i-1}$ being entirely similar.
Corollary~\ref{cor:construction} states that
all of~$p_i,\ldots,p_{j-1}$ are larger than~$p$,
and therefore that~$p \leqslant \min\{p_i,\ldots,p_j\}$.
Thus, the union of intervals~$\mathbf{I}(i) \cup \ldots \cup \mathbf{I}(j) = (e_{i-1}+e_i,e_j+e_{j+1}]$ does not contain any element of the set~$\mathcal{S} = \{k n / 2^{p-2} \colon k \in \mathbb{Z}\}$.
Consequently, the bounds~$e_{i-1}+e_i$ and~$e_j+e_{j+1}$ are contained
between two consecutive elements of~$\mathcal{S}$, i.e.,
there exists an integer~$\ell$ such that
\[\ell n / 2^{p-2} \leqslant e_{i-1}+e_i \leqslant
e_j+e_{j+1} < (\ell+1) n / 2^{p-2},\]
and we conclude that
\[
r = e_j - e_{i-1} \leqslant (e_j+e_{j+1}) - (e_{i-1}+e_i)
< n / 2^{p-2}.\]
We prove now that~$n < 2^p \ar{2}$.
To that end, we assume that both~$R$ and~$\aR{1}$ are left children, the other possible cases being
entirely similar.
There exist integers~$u$ and~$v$ such that~$\aR{1} = R_{i \ldots u}$ and~$\aR{2} = R_{i \ldots v}$.
Hence,~$\max\{p_{i-1},p_u\} < \max\{p_{i-1},p_j\} = p$,
which shows that~$p_u < p_j = p$.
Thus, both intervals~$\mathbf{I}(j)$ and~$\mathbf{I}(u)$, which are subintervals
of~$(2e_{i-1},2e_v]$,
contain elements of the set~$\mathcal{S}' = \{k n / 2^{p-1} \colon k \in \mathbb{Z}\}$.
This means that there exist two integers~$k$ and~$\ell$ such
that~$2e_{i-1} < k n / 2^{p-1} < \ell n / 2^{p-1}
\leqslant 2 e_v$, from which we conclude that
\[
\ar{2} = e_v - e_{i-1} > (\ell-k) n / 2^p \geqslant
n / 2^p. \qedhere\]
\end{proof}
\begin{theorem}\label{thm:fast-growth-PoS}
The algorithm PowerSort\xspace has the fast-growth property.
\end{theorem}
\begin{proof}
Let~$\mathcal{T}$ be a merge tree induced by PowerSort\xspace.
Then, let~$R$ be a run in~$\mathcal{T}$, and let~$p$ and~$\ap{3}$ be the respective powers of the runs~$R$ and~$\aR{3}$.
Definition~\ref{def:PoS} ensures
that~$p \geqslant \ap{3}+3$, and therefore
Lemma~\ref{lem:PoS:power} proves that
\[2^{\ap{3}+1} r \leqslant 2^{p-2} r < n < 2^{\ap{3}} \ar{5}.\]
This means that~$\ar{5} \geqslant 2 r$, and therefore
that PowerSort\xspace has the fast-growth property.
\end{proof}
\subsubsection{PeekSort\xspace}
\label{subsubsec:PeS}
Like its sibling PowerSort\xspace, the algorithm PeekSort\xspace is best defined
by characterizing the merge trees it induces.
\begin{definition}\label{def:tree:PeS}
Let~$\mathcal{T}$ be the merge tree induced by PeekSort\xspace on an array~$A$.
The children of each internal node~$R_{i \ldots j}$ of~$\mathcal{T}$
are the runs~$R_{i \ldots k}$ and~$R_{k+1 \ldots j}$ for which the quantity
\[ |2 e_k - e_j - e_{i-1}|\]
is minimal. In case of equality, the integer~$k$ is chosen to
be as small as possible.
\end{definition}
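This characterisation can be turned into code using prefix sums only. The Python sketch below, a small helper of these notes, selects the split point of the run~$R_{i \ldots j}$; Python's \texttt{min} returns the first minimiser, which matches the tie-breaking rule of Definition~\ref{def:tree:PeS}.
\begin{verbatim}
def peeksort_split(e, i, j):
    """Split point of the run R_{i..j}: the k in {i, ..., j-1}
    minimising |2 e_k - e_j - e_{i-1}|, smallest k in case of ties.
    Here e[t] = r_1 + ... + r_t, with e[0] = 0."""
    return min(range(i, j), key=lambda k: abs(2 * e[k] - e[j] - e[i - 1]))

e = [0, 3, 4, 6]                      # runs of lengths 3, 1 and 2
assert peeksort_split(e, 1, 3) == 1   # |2*3 - 6 - 0| = 0 is minimal
\end{verbatim}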
\begin{proposition}\label{pro:fast-growth-PeS}
The algorithm PeekSort\xspace has the fast-growth property.
\end{proposition}
\begin{proof}
Let~$\mathcal{T}$ be a merge tree induced by PeekSort\xspace,
and let~$R$ be a run in~$\mathcal{T}$.
We prove that~$\ar{3} \geqslant 2 r$.
Indeed, let us assume that a
majority of the runs~$R = \aR{0}$,~$\aR{1}$ and~$\aR{2}$ are left runs.
The situation is entirely similar if a majority of these runs
are right runs.
Let~$i < j$ be the two smallest integers such that~$\aR{i}$ and~$\aR{j}$ are left runs, and let~$S$ and~$T$ be their right siblings,
as illustrated in Figure~\ref{fig:pro:fast-growth-PeS}.
We can write these runs as~$\aR{i} = R_{w+1 \ldots x}$,~$S = R_{x+1 \ldots y}$,~$\aR{j} = R_{v+1 \ldots y}$ and~$T = R_{y+1 \ldots z}$
for some integers~$v \leqslant w < x < y < z$.
Definition~\ref{def:tree:PeS} states that
\[|\ar{j} - t| = |2 e_y - e_z - e_v| \leqslant
|2 e_{y-1} - e_z - e_v| = |\ar{j} - t - 2 r_y|.\]
Thus,~$\ar{j} - t - 2 r_y$ is negative, i.e.,~$t \geqslant \ar{j} - 2 r_y$, and
\[\ar{3} \geqslant \ar{j+1} = \ar{j} + t \geqslant
2 \ar{j} - 2 r_y = 2 (e_{y-1} - e_v) \geqslant 2 (e_x - e_w) = 2 \ar{i} \geqslant 2 r. \qedhere\]
\end{proof}
\begin{figure}
\caption{Runs~$\aR{i}$,~$S$,~$\aR{j}$ and~$T$ used in the proof of Proposition~\ref{pro:fast-growth-PeS}.}
\label{fig:pro:fast-growth-PeS}
\end{figure}
\subsubsection{Adaptive ShiversSort\xspace}
\label{subsubsec:cASS}
The algorithm adaptive ShiversSort\xspace is presented in
Algorithm~\ref{alg:cASS}.
It is based on an \emph{ad hoc} tool,
which we call the \emph{level} of a run:
the level of a run~$R$ is defined as the
number~$\ell = \lfloor \log_2(r) \rfloor$.
In practice, and following our naming conventions,
we denote by~$\al{i}$
and~$\ell_i$ the respective levels
of the runs~$\aR{i}$ and~$R_i$.
\begin{algorithm}[h]
\begin{small}
\SetArgSty{texttt}
\DontPrintSemicolon
\Input{Array~$A$ to sort}
\Result{The array~$A$ is sorted into a single run.
That run remains on the
stack.}
\Note{We denote the height of the stack~$\mathcal{S}$ by~$h$, and its~$i$\textsuperscript{th} deepest run by~$R_i$. The length of~$R_i$ is denoted by~$r_i$, and we set~$\ell_i = \lfloor \log_2(r_i) \rfloor$. When two consecutive runs of~$\mathcal{S}$ are merged, they are replaced, in~$\mathcal{S}$, by the run resulting from the merge. \justifying}
\BlankLine~$\mathcal{S} \ensuremath{\leftarrow}~$ an empty stack\;
\While(\label{main-loop:cASS:start}){\textbf{true}\xspace}{
\If(\label{test:cASS:case-1})
{\textrm{$h \geqslant 3$ and~$\ell_{h-2}
\leqslant \max\{\ell_{h-1},\ell_h\}$}}
{merge the runs~$R_{h-2}$ and~$R_{h-1}$\label{alg:cASS:merge}}
\ElseIf{\textrm{the end of the array has not yet been reached}}
{find a new monotonic run~$R$, make it non-decreasing,
and push it onto~$\mathcal{S}$\label{cASS:case-push}}
\Else{break\label{algline:cASS:end_inner_loop}}
}
\While{$h \geqslant 2$\label{alg:cASS:trigger:collapse}}{
merge the runs~$R_{h-1}$ and~$R_h$
\label{alg:cASS:collapse}
}
\end{small}
\caption{adaptive ShiversSort\xspace\label{alg:cASS}}
\end{algorithm}
Observe that appending a fictitious run
of length~$2n$ to the array~$A$ and
stopping our sequence of merges just before
merging that fictitious run does not modify
the sequence of merges performed by the algorithm,
but allows us to assume that every merge was
performed in line~\ref{alg:cASS:merge}.
Therefore, we work below under that assumption.
Our proof is based on the following result,
which was stated and proved in~\cite[Lemma 7]{Ju20}. The inequality (ii) is to be considered
only when~$h \geqslant 4$.
\begin{lemma}\label{lemma:cASS:invariant}
Let~$\mathcal{S} = (R_1,R_2,\ldots,R_h)$ be a stack
obtained while executing adaptive ShiversSort\xspace.
We have (i)~$\ell_i \geqslant \ell_{i+1}+1$
whenever~$1 \leqslant i \leqslant h-3$ and
(ii)~$\ell_{h-3} \geqslant \ell_{h-1}+1$.
\end{lemma}
Then, Lemmas~\ref{lem:cASS::right} and~\ref{lem:cASS::left}
focus on inequalities involving the levels
of a given run~$R$ belonging to a merge tree
induced by adaptive ShiversSort\xspace, and of its ancestors.
In each case, we denote by~$\mathcal{S} = (R_1,R_2,\ldots,R_h)$
the stack at the moment where the run~$R$ is merged.
\begin{lemma}\label{lem:cASS::right}
If~$\aR{1}$ is a right run,~$\al{2} \geqslant \ell+1$.
\end{lemma}
\begin{proof}
The run~$R$ coincides with either~$R_{h-2}$ or~$R_{h-1}$, and~$\aR{1}$ is the parent of the runs~$R_{h-2}$ and~$R_{h-1}$. Hence,
the run~$R_{h-3}$ descends from the left sibling of~$\aR{1}$
and from~$\aR{2}$.
Thus, inequalities (i) and (ii) prove that~$\al{2} \geqslant \ell_{h-3}
\geqslant \max\{\ell_{h-2},\ell_{h-1}\}+1
\geqslant \ell+1$.
\end{proof}
\begin{lemma}\label{lem:cASS::left}
If~$R$ is a left run,~$\al{2} \geqslant \ell+1$.
\end{lemma}
\begin{proof}
Since~$R$ is a left run, it coincides with~$R_{h-2}$,
and~$\ell_{h-2} \leqslant \max\{\ell_{h-1},\ell_h\}$.
Then, if~$\aR{1}$ is a right run,
Lemma~\ref{lem:cASS::right} already proves that~$\al{2} \geqslant \ell+1$.
Otherwise,~$\aR{1}$ is a left run, and the run~$R_h$ descends from~$\aR{2}$, thereby proving that
\[\ar{2} \geqslant r + \max\{r_{h-1},r_h\}
\geqslant {2^\ell + 2^{\max\{\ell_{h-1},\ell_h\}}}
\geqslant 2^{\ell+1},\]
and thus that~$\al{2} \geqslant \ell+1$ too.
\end{proof}
\begin{proposition}\label{pro:fast-growth-cASS}
The algorithm adaptive ShiversSort\xspace has the fast-growth property.
\end{proposition}
\begin{proof}
Let~$\mathcal{T}$ be a merge tree induced by adaptive ShiversSort\xspace,
and let~$R$ be a run in~$\mathcal{T}$.
We will prove that~$\ar{6} \geqslant 2r$.
In order to do so,
we first prove that~$\al{3} \geqslant \ell+1$.
Indeed, Lemma~\ref{lem:cASS::right} shows that~$\al{3} \geqslant \al{2} \geqslant \ell+1$
if~$\aR{1}$ is a right run, and
Lemma~\ref{lem:cASS::left} shows that~$\al{3} \geqslant \al{1}+1 \geqslant \ell+1$
if~$\aR{1}$ is a left run.
We show similarly that~$\al{6} \geqslant \al{3}+1$,
and we conclude that~$\ar{6} \geqslant 2^{\al{6}} \geqslant 2^{\ell+2} \geqslant 2r$.
\end{proof}
\subsubsection{TimSort\xspace}
\label{subsubsec:TS}
The algorithm TimSort\xspace is presented in
Algorithm~\ref{alg:TS}.
\begin{algorithm}[h]
\begin{small}
\SetArgSty{texttt}
\DontPrintSemicolon
\Input{Array~$A$ to sort}
\Result{The array~$A$ is sorted into a single run.
That run remains on the
stack.}
\Note{We denote the height of the stack~$\mathcal{S}$ by~$h$, and its~$i$\textsuperscript{th} deepest run by~$R_i$. The length of~$R_i$ is denoted by~$r_i$. When two consecutive runs of~$\mathcal{S}$ are merged, they are replaced, in~$\mathcal{S}$, by the run resulting from the merge. \justifying}
\BlankLine~$\mathcal{S} \ensuremath{\leftarrow}~$ an empty stack\;
\While(\label{main-loop:start}){\textbf{true}\xspace}{
\If(\label{test:case-1})
{\textrm{$h \geqslant 3$ and~$r_{h-2} < r_h$}}
{merge the runs~$R_{h-2}$ and~$R_{h-1}$
\tcp*[f]{case \#1}\label{case-1}}
\ElseIf(\label{test:case-2})
{\textrm{$h \geqslant 2$ and~$r_{h-1} \leqslant r_h$}}
{merge the runs~$R_{h-1}$ and~$R_h$
\tcp*[f]{case \#2}\label{case-2}}
\ElseIf(\label{test:case-3})
{\textrm{$h \geqslant 3$ and~$r_{h-2} \leqslant r_{h-1} + r_h$}}
{merge the runs~$R_{h-1}$ and~$R_h$
\tcp*[f]{case \#3}\label{case-3}}
\ElseIf(\label{test:case-4})
{\textrm{$h \geqslant 4$ and~$r_{h-3} \leqslant r_{h-2} + r_{h-1}$}}
{merge the runs~$R_{h-1}$ and~$R_h$
\tcp*[f]{case \#4}\label{case-4}}
\ElseIf{\textrm{the end of the array has not yet been reached}}
{find a new monotonic run~$R$, make it non-decreasing,
and push it onto~$\mathcal{S}$\label{case-push}}
\Else{break\label{algline:end_inner_loop}}
}
\While{$h \geqslant 2$}{
merge the runs~$R_{h-1}$ and~$R_h$
\label{alg:collapse}
}
\end{small}
\caption{TimSort\xspace\label{alg:TS}}
\end{algorithm}
We say that a run~$R$ is
a \#1-, a \#2-, a \#3- or a \#4-run
if it is merged in line~\ref{case-1},~\ref{case-2},
~\ref{case-3} or~\ref{case-4},
respectively. We also say that~$R$ is a \#\rlap{\textsuperscript{\phantom{2}1}}\textsubscript{2\phantom{1}4}\xspace-run if it is a \#1-, a \#2- or a
\#4-run, and that~$R$ is a \#\rlap{\textsuperscript{\phantom{3}2}}\textsubscript{3\phantom{2}4}\xspace-run if it is a
\#2-, a \#3- or a \#4-run.
Like in Section~\ref{subsubsec:cASS},
appending a fictitious run of length~$2n$ that we
will avoid merging allows us to assume that
every run is merged in line~\ref{case-1},~\ref{case-2},
~\ref{case-3} or~\ref{case-4}, i.e.,
is a \#1-, a \#2-, a \#3- or a \#4-run.
Our proof is then based on the following result,
which extends~\cite[Lemma 5]{auger2018worst}
by adding the inequality (v).
The inequality (ii) is to be considered
only when~$h \geqslant 3$,
and the inequalities (iii) to (v) are to be
considered only when~$h \geqslant 4$.
\begin{lemma}\label{lemma:invariant}
Let~$\mathcal{S} = (R_1,R_2,\ldots,R_h)$ be a stack
obtained while executing TimSort\xspace.
We have (i)~$r_i > r_{i+1} + r_{i+2}$
whenever~$1 \leqslant i \leqslant h-4$,
(ii)~$3 r_{h-2} > r_{h-1}$,
(iii)~$r_{h-3} > r_{h-2}$,
(iv)~$r_{h-3} + r_{h-2} > r_{h-1}$ and
(v)~$\max\{r_{h-3}/2,4 r_h\} > r_{h-1}$.
\end{lemma}
\begin{proof}
Lemma 5 from~\cite{auger2018worst} already
proves the inequalities (i) to (iv).
Therefore, we prove, by a direct induction on the
number of (push or merge) operations
performed before obtaining the stack~$\mathcal{S}$,
that~$\mathcal{S}$ also satisfies (v).
When the algorithm starts, we have~$h \leqslant 4$,
and therefore there is nothing to prove in that
case.
Then, when a stack~$\mathcal{S} = (R_1,R_2,\ldots,R_h)$ obeying the inequalities (i) to (v)
is transformed into a stack~$\overline{\mathcal{S}} = (\overline{R}_1,\overline{R}_2,
\ldots,\overline{R}_{\overline{h}})$
\begin{itemize}
\item by inserting a run,~$\overline{h}
= h+1$ and~$\overline{r}_{\overline{h}-3}
= r_{h-2} > r_{h-1} + r_h > 2 r_h = 2 \overline{r}_{\overline{h}-1}$;
\item by merging the runs~$R_{h-1}$ and~$R_h$,
we have~$\overline{h} = h-1$ and
\[\overline{r}_{\overline{h}-3} = r_{h-4} >
r_{h-3} + r_{h-2} > 2 r_{h-2} = 2 \overline{r}_{\overline{h}-1};\]
\item by merging the runs~$R_{h-2}$ and~$R_{h-1}$,
case \#1 just occurred and~$\overline{h} = h-1$,
so that
\[4 \overline{r}_{\overline{h}} = 4 r_h > 4 r_{h-2}
> r_{h-2} + r_{h-1} = \overline{r}_{\overline{h}-1}.\]
\end{itemize}
In each case,~$\overline{\mathcal{S}}$ satisfies (v),
which completes the induction and the proof.
\end{proof}
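The invariant is also convenient for testing implementations. The Python sketch below, written for these notes, checks the inequalities (i) to (v) of Lemma~\ref{lemma:invariant} on a stack of run lengths, with (v) in the form stated above.
\begin{verbatim}
def check_timsort_invariant(rs):
    """Check (i)-(v) on rs = [r_1, ..., r_h], deepest run first."""
    h = len(rs)
    ok = all(rs[t] > rs[t + 1] + rs[t + 2] for t in range(h - 4))  # (i)
    if h >= 3:
        ok = ok and 3 * rs[-3] > rs[-2]                            # (ii)
    if h >= 4:
        ok = ok and rs[-4] > rs[-3]                                # (iii)
        ok = ok and rs[-4] + rs[-3] > rs[-2]                       # (iv)
        ok = ok and max(rs[-4] / 2, 4 * rs[-1]) > rs[-2]           # (v)
    return ok

assert check_timsort_invariant([100, 40, 20, 10, 5])
\end{verbatim}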
Then, Lemmas~\ref{lem::right} and~\ref{lem::left}
focus on inequalities involving the lengths
of a given run~$R$ belonging to a merge tree
induced by TimSort\xspace, and of its ancestors.
In each case, we denote by~$\mathcal{S} = (R_1,R_2,\ldots,R_h)$
the stack at the moment where the run~$R$ is merged.
\begin{lemma}\label{lem::right}
If~$\aR{1}$ is a right run,~$\ar{2} \geqslant 4r/3$.
\end{lemma}
\begin{proof}
Let~$S$ be the left sibling of the run~$\aR{1}$,
and let~$i$ be the integer such that~$R = R_i$.
If~$i = h-2$, (iii) shows that~$r_{i-1} > r_i$.
If~$i = h-1$, (ii) shows that~$r_{i-1} > r_i / 3$.
In both cases, the run~$R_{i-1}$
descends from~$\aR{2}$, and
thus~$\ar{2} \geqslant r + r_{i-1}
\geqslant 4 r / 3$.
Finally, if~$i = h$, the run~$R$ is a \#\rlap{\textsuperscript{\phantom{3}2}}\textsubscript{3\phantom{2}4}\xspace-right run,
which means both that~$r_{h-2} \geqslant r$ and that~$R_{h-2}$ descends from~$S$.
It follows that~$\ar{2} \geqslant r + r_{h-2} \geqslant 2 r$.
\end{proof}
\begin{lemma}\label{lem::left}
If~$R$ is a left run,~$\ar{2} \geqslant 5 r / 4$.
\end{lemma}
\begin{proof}
We treat four cases independently, depending on
whether~$R$ is a \#1-, a \#2-, a \#3- or a \#4-left run.
In each case, we assume that the run~$\aR{1}$ is a left run,
since Lemma~\ref{lem::right} already proves
that~$\ar{2} \geqslant 4r/3$
when~$\aR{1}$ is a right run.
\begin{itemize}
\item If~$R$ is a \#1-left run,
the run~$R = R_{h-2}$ is merged with~$R_{h-1}$ and~$r_{h-2} < r_h$. Since~$\aR{1}$ is a left run,~$R_h$ descends from~$\aR{2}$, and thus~$\ar{2} \geqslant r + r_h \geqslant 2r$.
\item If~$R$ is a \#2-left run, the run~$R = R_{h-1}$ is merged with~$R_h$ and~$r_{h-1} \leqslant r_h$. It follows, in that case,
that~$\ar{2} \geqslant \ar{1} = r + r_h \geqslant 2 r$.
\item If~$R$ is a \#3-left run, the run~$R = R_{h-1}$ is merged with~$R_h$, and~$r_{h-2} \leqslant r_{h-1} + r_h =
\ar{1}$.
Due to this inequality,
our next journey through the
loop of lines~\ref{main-loop:start}
to~\ref{algline:end_inner_loop} must trigger
another merge.
Since~$\aR{1}$ is a left run, that merge
must be a \#1-merge, which means that~$r_{h-3} < \ar{1}$.
Consequently, (v) proves that
\[\ar{1} \geqslant \max\{r_{h-3},r_{h-1}+r_h\} \geqslant
5 r_{h-1}/4 = 5 r/4.\]
\item We prove that~$R$ cannot be a \#4-left run.
Indeed, if~$R$ is a \#4-left run, the run~$R = R_{h-1}$ is merged with~$R_h$,
and we both have~$r_{h-2} > \ar{1}$ and~$r_{h-3} \leqslant r_{h-2} + r_{h-1} \leqslant
r_{h-2} + \ar{1}$.
Due to the latter inequality,
our next journey through the
loop of lines~\ref{main-loop:start}
to~\ref{algline:end_inner_loop} must trigger
another merge.
Since~$\ar{1} < r_{h-2} < r_{h-3}$, that new merge
cannot be a \#1-merge, and thus~$\aR{1}$ is a right run,
contradicting our initial assumption. \qedhere
\end{itemize}
\end{proof}
\begin{proposition}\label{pro:fast-growth-TS}
The algorithm TimSort\xspace has the fast-growth property.
\end{proposition}
\begin{proof}
Let~$\mathcal{T}$ be a merge tree induced by TimSort\xspace,
and let~$R$ be a run in~$\mathcal{T}$.
We will prove that~$\ar{3} \geqslant
5 r/4$.
Indeed, if~$\aR{1}$ is a right run,
Lemma~\ref{lem::right} proves that~$\ar{3} \geqslant \ar{2} \geqslant 4r/3$.
Otherwise,~$\aR{1}$ is a left run, and
Lemma~\ref{lem::left}
proves that~$\ar{3} \geqslant 5 \ar{1}/4 \geqslant 5r/4$.
\end{proof}
\subsubsection{\textalpha-MergeSort\xspace}
\label{subsubsec:aMS}
The algorithm \textalpha-MergeSort\xspace is
parametrised by a real number~$\alpha > 1$ and
is presented in Algorithm~\ref{alg:aMS}.
\begin{algorithm}[h]
\begin{small}
\SetArgSty{texttt}
\DontPrintSemicolon
\Input{Array~$A$ to sort, parameter~$\alpha > 1$}
\Result{The array~$A$ is sorted into a single run.
That run remains on the
stack.}
\Note{We denote the height of the stack~$\mathcal{S}$ by~$h$, and its~$i$\textsuperscript{th} deepest run by~$R_i$. The length of~$R_i$ is denoted by~$r_i$. When two consecutive runs of~$\mathcal{S}$ are merged, they are replaced, in~$\mathcal{S}$, by the run resulting from the merge. \justifying}
\BlankLine~$\mathcal{S} \ensuremath{\leftarrow}~$ an empty stack\;
\While(\label{aMS:main-loop:start}){\textbf{true}\xspace}{
\If(\label{test:aMS:case-1})
{\textrm{$h \geqslant 3$ and~$r_{h-2} < r_h$}}
{merge the runs~$R_{h-2}$ and~$R_{h-1}$
\tcp*[f]{case \#1}\label{aMS:case-1}}
\ElseIf(\label{test:aMS:case-2})
{\textrm{$h \geqslant 2$ and~$r_{h-1} < \alpha r_h$}}
{merge the runs~$R_{h-1}$ and~$R_h$
\tcp*[f]{case \#2}\label{aMS:case-2}}
\ElseIf(\label{test:aMS:case-3})
{\textrm{$h \geqslant 3$ and~$r_{h-2} < \alpha r_{h-1}$}}
{merge the runs~$R_{h-1}$ and~$R_h$
\tcp*[f]{case \#3}\label{aMS:case-3}}
\ElseIf{\textrm{the end of the array has not yet been reached}}
{find a new monotonic run~$R$, make it non-decreasing,
and push it onto~$\mathcal{S}$\label{aMS:case-push}}
\Else{break\label{algline:aMS:end_inner_loop}}
}
\While{$h \geqslant 2$}{
merge the runs~$R_{h-1}$ and~$R_h$
\label{alg:aMS:collapse}
}
\end{small}
\caption{\textalpha-MergeSort\xspace\label{alg:aMS}}
\end{algorithm}
Like in Section~\ref{subsubsec:TS},
we say that a run~$R$ is
a \#1-, a \#2- or a \#3-run
if it is merged in line~\ref{aMS:case-1},~\ref{aMS:case-2}
or~\ref{aMS:case-3}.
We also say that~$R$ is
a \#\rlap{\textsuperscript{1}}\textsubscript{2}\xspace-run if it is a \#1- or a \#2-run, and that~$R$ is a \#\rlap{\textsuperscript{2}}\textsubscript{3}\xspace-run if it is a \#2- or a \#3-run.
In addition, still like in Sections~\ref{subsubsec:cASS}
and~\ref{subsubsec:TS}, we safely assume that
each run is a \#1-, a \#2- or a \#3-run.
Our proof is then based on the following result,
which extends~\cite[Theorem 14]{BuKno18}
by adding the inequalities (ii) and (iii),
which are to be considered only when~$h \geqslant 3$.
\begin{lemma}\label{lemma:aMS:invariant}
Let~$\mathcal{S} = (R_1,R_2,\ldots,R_h)$ be a stack
obtained while executing \mbox{\textalpha-MergeSort\xspace}.
We have (i)~$r_i \geqslant \alpha r_{i+1}$
whenever~$1 \leqslant i \leqslant h-3$,
(ii)~$r_{h-2} \geqslant (\alpha-1) r_{h-1}$ and
(iii)~$\max\{r_{h-2}/\alpha,\alpha r_h/(\alpha-1)\} \geqslant r_{h-1}$.
\end{lemma}
\begin{proof}
Theorem 14 from~\cite{BuKno18} already proves the
inequality (i). Therefore, we prove,
by a direct induction on the number
of (push or merge) operations
performed before obtaining the stack~$\mathcal{S}$,
that~$\mathcal{S}$ satisfies (ii) and (iii).
When the algorithm starts, we have~$h \leqslant 2$,
and therefore there is nothing to prove in that
case.
Then, when a stack~$\mathcal{S} = (R_1,R_2,\ldots,R_h)$ obeying (i), (ii)
and (iii) is transformed into a stack~$\overline{\mathcal{S}} = (\overline{R}_1,\overline{R}_2,
\ldots,\overline{R}_{\overline{h}})$
\begin{itemize}
\item by inserting a run,~$\overline{h}
= h+1$ and~$\overline{r}_{\overline{h}-2}
= r_{h-1} \geqslant \alpha r_h = \alpha \overline{r}_{\overline{h}-1}$;
\item by merging the runs~$R_{h-1}$ and~$R_h$,
we have~$\overline{h} = h-1$ and~$\overline{r}_{\overline{h}-2}
= r_{h-3} \geqslant \alpha r_{h-2} = \alpha \overline{r}_{\overline{h}-1}$;
\item by merging the runs~$R_{h-2}$ and~$R_{h-1}$,
case \#1 just occurred and~$\overline{h} = h-1$,
so that
\[\min\{\overline{r}_{\overline{h}-2},\alpha\overline{r}_{\overline{h}}\} =
\min\{r_{h-3},\alpha r_h\} \geqslant \alpha r_{h-2}
\geqslant
(\alpha-1)(r_{h-2} + r_{h-1}) = (\alpha-1) \overline{r}_{\overline{h}-1}.\]
\end{itemize}
In each case,~$\overline{\mathcal{S}}$ satisfies (ii) and (iii),
which completes the induction and the proof.
\end{proof}
Lemmas~\ref{lem:aMS::right} and~\ref{lem:aMS::left}
focus on inequalities involving the lengths
of a given run~$R$ belonging to a merge tree
induced by \textalpha-MergeSort\xspace, and of its ancestors.
In each case, we denote by~$\mathcal{S} = (R_1,R_2,\ldots,R_h)$
the stack at the moment where the run~$R$ is merged.
In what follows, we also set~$\alpha^\star = \min\{\alpha,1+1/\alpha,1+(\alpha-1)/\alpha\}$.
\begin{lemma}\label{lem:aMS::right}
If~$\aR{1}$ is a right run,~$\ar{2} \geqslant \alpha^\star r$.
\end{lemma}
\begin{proof}
Let~$i$ be the integer such that~$R = R_i$.
If~$i = h-2$, (i) shows that~$r_{i-1} \geqslant \alpha r_i$.
If~$i = h-1$, (ii) shows that~$r_{i-1} \geqslant (\alpha-1) r_i$.
In both cases,~$R_{i-1}$
descends from~$\aR{2}$, and
thus~$\ar{2} \geqslant r + r_{i-1}
\geqslant \alpha r$.
Finally, if~$i = h$, the run~$R$ is a \#\rlap{\textsuperscript{2}}\textsubscript{3}\xspace-right run,
which means that~$r_{h-2} \geqslant r$ and that~$R_{h-2}$
descends from the left sibling of~$\aR{1}$.
It follows that~$\ar{2} \geqslant r + r_{h-2} \geqslant 2 r \geqslant
(1 + 1/\alpha) r$.
\end{proof}
\begin{lemma}\label{lem:aMS::left}
If~$R$ is a left run,~$\ar{2} \geqslant \alpha^\star r$.
\end{lemma}
\begin{proof}
We treat three cases independently, depending on
whether~$R$ is a \#1-, a \#2 or a \#3-left run.
In each case, we assume that~$\aR{1}$ is a left run,
since Lemma~\ref{lem:aMS::right} already proves that~$\ar{2} \geqslant \alpha^\star r$ when~$\aR{1}$ is a right
run.
\begin{itemize}
\item If~$R$ is a \#1-left run,
the run~$R = R_{h-2}$ is merged with~$R_{h-1}$ and~$r_{h-2} < r_h$. Since~$\aR{1}$ is a left run,~$R_h$ descends from~$\aR{2}$, and thus~$\ar{2} \geqslant r + r_h \geqslant 2r \geqslant
(1 + 1/\alpha) r$.
\item If~$R$ is a \#2-left run, the run~$R = R_{h-1}$ is merged with~$R_h$ and~$r < \alpha r_h$. It follows, in that case, that~$\ar{2} \geqslant \ar{1} =
r + r_h \geqslant (1 + 1/\alpha) r$.
\item If~$R$ is a \#3-left run, the run~$R = R_{h-1}$ is merged with~$R_h$ and~$r_{h-2} < \alpha r_{h-1}$.
Hence, (iii) proves that~$(\alpha-1) r \leqslant \alpha r_h$, so that~$\ar{2} \geqslant \ar{1} =
r + r_h \geqslant (1 + (\alpha-1)/\alpha) r$. \qedhere
\end{itemize}
\end{proof}
\begin{proposition}\label{pro:fast-growth-aMS}
The algorithm \textalpha-MergeSort\xspace has the fast-growth property.
\end{proposition}
\begin{proof}
Let~$\mathcal{T}$ be a merge tree induced by \textalpha-MergeSort\xspace,
and let~$R$ be a run in~$\mathcal{T}$.
We will prove that~$\ar{3} \geqslant
\alpha^\star r$.
Indeed, if~$\aR{1}$ is a right run,
Lemma~\ref{lem:aMS::right} proves that~$\ar{3} \geqslant \ar{2} \geqslant
\alpha^\star r$.
Otherwise,~$\aR{1}$ is a left run, and
Lemma~\ref{lem:aMS::left} proves that~$\ar{3} \geqslant \alpha^\star \ar{1}
\geqslant \alpha^\star r$.
\end{proof}
\subsection{Algorithms with the tight middle-growth property}
\label{subsec:other-tight}
\subsubsection{PowerSort\xspace}
\label{subsubsec:PoS:2}
\begin{proposition}
\label{pro:POS:tight}
The algorithm PowerSort\xspace has the tight middle-growth
property.
\end{proposition}
\begin{proof}
Let~$\mathcal{T}$ be a merge tree induced by PowerSort\xspace and
let~$R$ be a run in~$\mathcal{T}$ at depth at least~$h$.
We will prove that~$\ar{h} \geqslant 2^{h-4}$.
If~$h \leqslant 4$, the desired inequality
is immediate.
Then, if~$h \geqslant 5$,
let~$n$ be the length of the array on which~$\mathcal{T}$ is induced.
Let also~$p$ and~$\ap{h-2}$ be the
respective powers of the runs~$R$ and~$\aR{h-2}$.
Definition~\ref{def:PoS} and
Lemma~\ref{lem:PoS:power} prove that~$2^{\ap{h-2}+h-4} \leqslant
2^{p-2} \leqslant 2^{p-2} r < n <
2^{\ap{h-2}} \ar{h}$.
\end{proof}
\subsubsection{NaturalMergeSort\xspace}
\label{subsubsec:NMS}
The algorithm NaturalMergeSort\xspace is a plain binary merge
sort whose unit pieces of data
are runs instead of single elements.
Thus, we identify NaturalMergeSort\xspace with the fundamental
property that describes those merge trees it induces.
\begin{definition}\label{def:tree:NMS}
Let~$\mathcal{T}$ be a merge tree induced by NaturalMergeSort\xspace,
and let~$R$ and~$\overline{R}$ be two runs that are
siblings of each other in~$\mathcal{T}$.
Denoting by~$n$ and~$\overline{n}$
the respective numbers of leaves of~$\mathcal{T}$ that
descend from~$R$ and from~$\overline{R}$,
we have~$|n - \overline{n}| \leqslant 1$.
\end{definition}
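In other words, NaturalMergeSort\xspace splits the sequence of runs as evenly as possible. A minimal Python sketch of the corresponding tree, using run indices as leaves, reads as follows.
\begin{verbatim}
def natural_merge_sort_tree(i, j):
    """Merge tree of NaturalMergeSort over the runs R_i, ..., R_j:
    the two halves contain numbers of leaves differing by at most 1."""
    if i == j:
        return i
    mid = (i + j) // 2
    return (natural_merge_sort_tree(i, mid),
            natural_merge_sort_tree(mid + 1, j))

assert natural_merge_sort_tree(1, 4) == ((1, 2), (3, 4))
\end{verbatim}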
\begin{proposition}\label{pro:fast-growth-NMS}
The algorithm NaturalMergeSort\xspace has the tight
middle-growth property.
\end{proposition}
\begin{proof}
Let~$\mathcal{T}$ be a merge tree induced by NaturalMergeSort\xspace,
let~$R$ be a run in~$\mathcal{T}$, and let~$h$
be its height.
We will prove by induction on~$h$
that, if~$h \geqslant 1$, the run~$R$ is an ancestor
of at least~$2^{h-1}+1$ leaves of~$\mathcal{T}$,
thereby showing that~$r \geqslant 2^{h-1}$.
First, this is the case if~$h = 1$.
Then, if~$h \geqslant 2$, let~$R_1$ and~$R_2$ be the
two children of~$R$. One of them, say~$R_1$,
has height~$h-1$. Let us denote by~$n$,~$n_1$ and~$n_2$ the number of
leaves that descend from~$R$,~$R_1$ and~$R_2$, respectively.
The induction hypothesis shows that
\[n = n_1 + n_2 \geqslant 2 n_1-1 \geqslant
2 \times (2^{h-2}+1) - 1 = 2^{h-1}+1,\]
which completes the proof.
\end{proof}
\subsubsection{ShiversSort\xspace}
\label{subsubsec:ShS}
The algorithm ShiversSort\xspace is presented
in Algorithm~\ref{alg:ShS}.
Like adaptive ShiversSort\xspace, it relies on the notion of
\emph{level} of a run.
\begin{algorithm}[ht]
\begin{small}
\SetArgSty{texttt}
\DontPrintSemicolon
\Input{Array~$A$ to sort}
\Result{The array~$A$ is sorted into a single run.
That run remains on the
stack.}
\Note{We denote the height of the stack~$\mathcal{S}$ by~$h$, and its~$i$\textsuperscript{th} deepest run by~$R_i$. The length of~$R_i$ is denoted by~$r_i$, and we set~$\ell_i = \lfloor \log_2(r_i) \rfloor$. When two consecutive runs of~$\mathcal{S}$ are merged, they are replaced, in~$\mathcal{S}$, by the run resulting from the merge. \justifying}
\BlankLine~$\mathcal{S} \ensuremath{\leftarrow}~$ an empty stack\;
\While(\label{main-loop:ShS:start}){\textbf{true}\xspace}{
\If(\label{test:ShS:case-1})
{\textrm{$h \geqslant 2$ and~$\ell_{h-1}
\leqslant \ell_h$}}
{merge the runs~$R_{h-1}$ and~$R_h$\label{ShS:std:merge}}
\ElseIf{\textrm{the end of the array has not yet been reached}}
{find a new monotonic run~$R$, make it non-decreasing,
and push it onto~$\mathcal{S}$\label{ShS:case-push}}
\Else{break\label{algline:ShS:end_inner_loop}}
}
\While{$h \geqslant 2$}{
merge the runs~$R_{h-1}$ and~$R_h$
\label{alg:ShS:collapse}
}
\end{small}
\caption{ShiversSort\xspace\label{alg:ShS}}
\end{algorithm}
Our proof is based on the following result,
which appears in the proof
of~\cite[Theorem 11]{BuKno18}.
\begin{lemma}\label{lemma:ShS:invariant}
Let~$\mathcal{S} = (R_1,R_2,\ldots,R_h)$ be a stack
obtained while executing ShiversSort\xspace.
We have~$\ell_i \geqslant \ell_{i+1}+1$
whenever~$1 \leqslant i \leqslant h-2$.
\end{lemma}
Lemmas~\ref{lem:ShS:right}
and~\ref{lem:ShS:left-right}
focus on inequalities involving the levels
of a given run~$R$ belonging to a merge tree
induced by ShiversSort\xspace, and of its ancestors.
In each case, we denote by~$\mathcal{S} = (R_1,R_2,\ldots,R_h)$
the stack at the moment where the run~$R$ is merged.
However, and unlike
Sections~\ref{subsubsec:cASS}
to~\ref{subsubsec:aMS}, we cannot simulate
the merge operations that occur in
line~\ref{alg:ShS:collapse} as if they had
occurred in line~\ref{ShS:std:merge}.
Instead, we say that a run~$R$ is \emph{rightful}
if~$R$ and its ancestors are all right runs
(i.e., if~$R$ belongs to the rightmost branch
of the merge tree), and that~$R$ is
\emph{standard} otherwise.
\begin{lemma}\label{lem:ShS:right}
Let~$R$ be a run in~$\mathcal{T}$, and let~$k \geqslant 1$
be an integer. If each of the~$k$ runs~$\aR{0},\ldots,\aR{k-1}$ is a right
run,~$\al{k} \geqslant k-1$.
\end{lemma}
\begin{proof}
For all~$i \leqslant k$, let~$u(i)$ be the least
integer such that~$R_{u(i)}$
descends from~$\aR{i}$.
Since~$\aR{0},\ldots,\aR{k-1}$ are right runs,
we know that~$u(k) < u(k-1) < \ldots < u(0) = h$.
Thus, Lemma~\ref{lemma:ShS:invariant} proves that
\[\al{k} \geqslant \ell_{u(k)} \geqslant
\ell_{u(1)} + (k-1) \geqslant k-1. \qedhere\]
\end{proof}
\begin{lemma}\label{lem:ShS:left-right}
Let~$R$ be a run in~$\mathcal{T}$, and let~$k \geqslant 1$
be an integer. If~$R$ is a left run and
the~$k-1$ runs~$\aR{1},\ldots,\aR{k-1}$
are right runs, we have~$\al{k} \geqslant \ell+k-1$ if~$\aR{1}$ is
a rightful run, and~$\al{k} \geqslant \ell+k$
if~$\aR{1}$ is a standard run.
\end{lemma}
\begin{proof}
First, assume that~$k = 1$.
If~$\aR{1}$ is rightful, the desired inequality
is immediate.
If~$\aR{1}$ is standard, however,
the left run~$R = R_{h-1}$ was merged with
the run~$R_h$ because~$\ell \leqslant \ell_h$.
In that case, it follows that~$\ar{1} = r + r_h \geqslant 2^\ell + 2^{\ell_h}
\geqslant 2^\ell + 2^\ell = 2^{\ell+1}$,
i.e., that~$\al{1} \geqslant \ell+1$.
Assume now that~$k \geqslant 2$.
Note that~$\aR{k}$ is rightful if and only if~$\aR{1}$ is also rightful.
Then, for all~$i \leqslant k$, let~$u(i)$ be the least
integer such that the run~$R_{u(i)}$
descends from~$\aR{i}$.
Since~$\aR{1},\ldots,\aR{k-1}$ are right runs,
we know that
\[u(k) < u(k-1) < \ldots < u(1) = h-1.\]
In particular, let~$R'$ be the left sibling of~$\aR{k-1}$: this is an ancestor of~$R_{u(k)}$,
and the left child of~$\aR{k}$.
Consequently, Lemma~\ref{lemma:ShS:invariant} and
our study of the case~$k = 1$ conjointly prove that
\begin{itemize}
\item
$\al{k} \geqslant \ell' \geqslant \ell_{u(k)}
\geqslant \ell_{u(1)}+k-1 = \ell+k-1$ if~$\aR{1}$ and~$\aR{k}$ are rightful;
\item $\al{k} \geqslant \ell'+1 \geqslant \ell_{u(k)}+1
\geqslant \ell_{u(1)}+k = \ell+k$
if~$\aR{1}$ and~$\aR{k}$ are standard. \qedhere
\end{itemize}
\end{proof}
\begin{proposition}\label{pro:middle-growth-ShS}
The algorithm ShiversSort\xspace has the tight
middle-growth property.
\end{proposition}
\begin{proof}
Let~$\mathcal{T}$ be a merge tree induced by ShiversSort\xspace
and let~$R$ be a run in~$\mathcal{T}$ at depth at least~$h$. We will prove that~$\ar{h} \geqslant 2^{h-2}$.
Let~$a_1 < a_2 < \ldots < a_k$ be the
non-negative integers smaller than~$h$
for which~$\aR{a_i}$ is a left run. We also set~$a_{k+1} = h$.
Lemma~\ref{lem:ShS:right} proves that~$\al{a_1} \geqslant a_1-1$.
Then, for all~$i < k$, the run~$\aR{a_i+1}$
is standard, since it descends from the left run~$\aR{a_k}$, and thus
Lemma~\ref{lem:ShS:left-right} proves that~$\al{a_{i+1}} \geqslant \al{a_i} + a_{i+1} - a_i$.
Lemma~\ref{lem:ShS:left-right} also proves that~$\al{a_{k+1}} \geqslant \al{a_k} + a_{k+1} - a_k-1$.
It follows that~$\al{h} = \al{a_{k+1}} \geqslant
h-2$, and therefore that~$\ar{h} \geqslant 2^{\al{h}} \geqslant 2^{h-2}$.
\end{proof}
\subsection{Algorithms with the middle-growth property}
\label{subsec:other-middle}
\subsubsection{\textalpha-StackSort\xspace}
\label{subsubsec:aSS}
The algorithm \textalpha-StackSort\xspace, which
predated and inspired its variant
\textalpha-MergeSort\xspace, is presented in
Algorithm~\ref{alg:aSS}.
\begin{algorithm}[ht]
\begin{small}
\SetArgSty{texttt}
\DontPrintSemicolon
\Input{Array~$A$ to sort, parameter~$\alpha > 1$}
\Result{The array~$A$ is sorted into a single run.
That run remains on the
stack.}
\Note{We denote the height of the stack~$\mathcal{S}$ by~$h$, and its~$i$\textsuperscript{th} deepest run by~$R_i$. The length of~$R_i$ is denoted by~$r_i$. When two consecutive runs of~$\mathcal{S}$ are merged, they are replaced, in~$\mathcal{S}$,
by the run resulting from the merge.}
\BlankLine~$\mathcal{S} \ensuremath{\leftarrow}~$ an empty stack\;
\While(\label{aSS:main-loop:start}){\textbf{true}\xspace}{
\If{\textrm{$h \geqslant 2$ and
~$r_{h-1} \leqslant \alpha r_h$}}
{merge the runs~$R_{h-1}$ and~$R_h$}
\ElseIf{\textrm{the end of the array has not yet been reached}}
{find a new monotonic run~$R$, make it non-decreasing,
and push it onto~$\mathcal{S}$\label{aSS:case-push}}
\Else{break\label{algline:aSS:end_inner_loop}}
}
\While{$h \geqslant 2$}{
merge the runs~$R_{h-1}$ and~$R_h$
\label{alg:aSS:collapse}
}
\end{small}
\caption{\textalpha-StackSort\xspace\label{alg:aSS}}
\end{algorithm}
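To make the merge policy concrete, here is a minimal Python sketch
(ours, purely for illustration: only run lengths are tracked, and the
function name and default value of~$\alpha$ are arbitrary) that replays
the merges performed by \textalpha-StackSort\xspace on a given run
decomposition.
\begin{verbatim}
def alpha_stack_sort_merges(run_lengths, alpha=1.5):
    # Replay the merge policy of alpha-StackSort on run lengths only;
    # element comparisons and moves are abstracted away.
    stack, merges = [], []
    for r in run_lengths:          # runs are discovered left to right
        stack.append(r)            # push the new run onto S
        while len(stack) >= 2 and stack[-2] <= alpha * stack[-1]:
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)    # merge the runs R_{h-1} and R_h
            merges.append(a + b)
    while len(stack) >= 2:         # final collapse of the stack
        b, a = stack.pop(), stack.pop()
        stack.append(a + b)
        merges.append(a + b)
    return merges
\end{verbatim}
The returned list records the length of each run created by a merge,
so that its sum is precisely the merge cost of the induced merge tree.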
Our proof is based on the following result,
which appears in~\cite[Lemma 2]{AuNiPi15}.
\begin{lemma}\label{lemma:aSS:invariant}
Let~$\mathcal{S} = (R_1,R_2,\ldots,R_h)$ be a stack
obtained while executing \mbox{\textalpha-StackSort\xspace}.
We have~$r_i > \alpha r_{i+1}$
whenever~$1 \leqslant i \leqslant h-2$.
\end{lemma}
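In particular, since~$r_1 \leqslant n$ and~$r_{h-1} \geqslant 1$,
iterating this inequality yields~$n \geqslant r_1 > \alpha^{h-2} r_{h-1}
\geqslant \alpha^{h-2}$, so that the height of the stack never
exceeds~$\log_\alpha(n)+2$.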
Lemmas~\ref{lem:aSS::right} and~\ref{lem:aSS::left}
focus on inequalities involving the lengths
of a given run~$R$ belonging to a merge tree
induced by \textalpha-StackSort\xspace, and of its ancestors.
In each case, we denote by~$\mathcal{S} = (R_1,R_2,\ldots,R_h)$
the stack at the moment where the run~$R$ is merged.
Furthermore, like in
Section~\ref{subsubsec:ShS},
we say that a run~$R$ is \emph{rightful}
if~$R$ and its ancestors are all right runs,
and that~$R$ is
\emph{standard} otherwise.
\begin{lemma}\label{lem:aSS::right}
Let~$R$ be a run in~$\mathcal{T}$, and let~$k$ and~$m$
be two integers. If~$k$ of the~$m+1$ runs~$\aR{0}, \aR{1}, \ldots, \aR{m}$
are right runs, we have~$\ar{m+1} \geqslant \alpha^{k-1}$.
\end{lemma}
\begin{proof}
Let~$a_1 < a_2 < \ldots < a_k$ be the smallest
integers such that~$\aR{a_1},\aR{a_2},
\ldots,\aR{a_k}$ are right runs.
For all~$i \leqslant k-1$,
let~$\aS{i}$ be the left sibling of~$\aR{a_i}$,
and let~$R_{u(i)}$ be a leaf of~$\aS{i}$.
Since~$R_{u(i)}$ descends from~$\aR{a_{i+1}}$, we
know that~$u(i+1) < u(i)$, so
that~$u(k) \leqslant u(1) - (k-1)$.
Thus, Lemma~\ref{lemma:aSS:invariant} proves that~$\ar{m+1} \geqslant r_{u(k)} \geqslant
\alpha^{k-1} r_{u(1)} \geqslant \alpha^{k-1}$.
\end{proof}
\begin{lemma}\label{lem:aSS::left}
Let~$R$ be a run in~$\mathcal{T}$.
If~$R$ is a left run and if its
parent is a standard
run, we have~$\ar{1} \geqslant (1+1/\alpha)r$.
\end{lemma}
\begin{proof}
Since~$R$ is a standard left run, it coincides with~$R_{h-1}$ and
~$r_{h-1} \leqslant \alpha r_h$. It follows
immediately that~$\ar{1} = r_{h-1} + r_h \geqslant (1+1/\alpha) r$.
\end{proof}
\begin{proposition}\label{pro:middle-growth-aSS}
The algorithm \textalpha-StackSort\xspace has the middle-growth property.
\end{proposition}
\begin{proof}
Let~$\mathcal{T}$ be a merge tree induced by \textalpha-StackSort\xspace, let~$R$ be a run in~$\mathcal{T}$ at depth at least~$h$,
and let~$\beta = \min\{\alpha,1+1/\alpha\}^{1/4}$. We will prove that~$\ar{h} \geqslant \beta^h$.
Indeed, let~$k = \lceil h/2 \rceil$.
We distinguish two cases, which are not
mutually exclusive if $h$ is even.
\begin{itemize}
\item If at least~$k$ of the~$h$ runs~$R, \aR{1},\aR{2}, \ldots, \aR{h-1}$
are right runs,
Lemma~\ref{lem:aSS::right} shows that
\[\ar{h} \geqslant \alpha^{k-1}
\geqslant \beta^{4(k-1)}.\]
\item If at least~$k$ of the~$h$ runs~$R, \aR{1},\aR{2}, \ldots, \aR{h-1}$
are left runs,
let~$a_1 < a_2 < \ldots < a_k$ be the smallest
integers such that~$\aR{a_1},\aR{a_2}, \ldots,
\aR{a_k}$ are left runs.
When~$j < k$, the run~$\aR{a_j+1}$ is standard,
since it descends from the left run~$\aR{a_k}$.
Therefore, due to Lemma~\ref{lem:aSS::left},
an immediate induction on~$j$ shows that~$\ar{a_j+1} \geqslant (1+1/\alpha)^j$
for all~$j \leqslant k-1$.
It follows that
\[\ar{h} \geqslant \ar{a_{k-1}+1} \geqslant
(1+1/\alpha)^{k-1} \geqslant \beta^{4(k-1)}.\]
\end{itemize}
Consequently,~$\ar{h} \geqslant \beta^{4(k-1)} \geqslant \beta^{2h-4} \geqslant \beta^h$ whenever~$h \geqslant 4$,
whereas~$\ar{h} \geqslant 1 = \beta^h$
whenever~$h = 0$
and~$\ar{h} \geqslant 2 \geqslant 1+1/\alpha
\geqslant \beta^h$ whenever~$1 \leqslant h \leqslant 3$.
\end{proof} | 2,116 | 49,320 | en |
train | 0.10.14 | \section{Refined complexity bounds}
\label{sec:precise-bounds-PoS}
One weakness of Theorem~\ref{thm:middle-few} is that it cannot
help us to distinguish the complexity upper bounds of those
algorithms that have the middle-growth property, although
the constants hidden in the~$\mathcal{O}$ symbol could be dramatically
different. Below, we study these constants,
and focus on upper bounds of the type~$\mathsf{c} n \mathcal{H}^\ast + \mathcal{O}(n)$ or~$\mathsf{c} n (1 + o(1)) \mathcal{H}^\ast + \mathcal{O}(n)$.
Since sorting arrays of length~$n$, in general, requires
at least~$\log_2(n!) = n \log_2(n) + \mathcal{O}(n)$ comparisons,
and since~$\mathcal{H}^\ast \leqslant \log_2(n)$ for all arrays,
we already know that~$\mathsf{c} \geqslant 1$ for any such
constant~$\mathsf{c}$.
Below, we focus on finding matching upper bounds in two
regimes: first using a fixed parameter~$\mathbf{t}$,
thereby obtaining a constant~$\mathsf{c} > 1$,
and then letting~$\mathbf{t}$ depend on the lengths
of those runs that are being merged, in which case we
reach the constant~$\mathsf{c} = 1$.
Inspired by the success of
Theorem~\ref{thm:middle-few-naïve}, which states
that algorithms with
the tight middle-growth property sort arrays
of length~$n$ by using
only~$n \log_2(n) + \mathcal{O}(n)$ element
comparisons (i.e., achieve the constant~$\mathsf{c} = 1$),
we focus primarily on that property,
while not forgetting other algorithms
that also enjoy~$n \mathcal{H} + \mathcal{O}(n)$
or~$n \log_2(n) + \mathcal{O}(n)$ complexity upper
bounds despite not having the tight
middle-growth property.
\subsection{Fixed parameter t and tight middle-growth property}
\label{subsec:fixed-t}
\begin{theorem}\label{thm:PoS-constant}
Let~$\mathcal{A}$ be a stable natural merge sort algorithm
with the tight middle-growth property.
For each parameter~$\mathbf{t} \geqslant 0$,
if~$\mathcal{A}$ uses the~$\mathbf{t}$-galloping
sub-routine for merging runs,
it requires at most
\[(1 + 1 / (\mathbf{t}+3)) n \mathcal{H}^\ast + \log_2(\mathbf{t}+1) n +
\mathcal{O}(n)\] element comparisons
to sort arrays of length~$n$ and dual run-length entropy~$\mathcal{H}^\ast$.
\end{theorem}
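For instance, the parameter~$\mathbf{t} = 1$ already yields the upper
bound~$(1+1/4) n \mathcal{H}^\ast + n + \mathcal{O}(n)$, and larger values
of~$\mathbf{t}$ bring the leading coefficient arbitrarily close to~$1$,
at the price of the additive term~$\log_2(\mathbf{t}+1)n$.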
\begin{proof}
Let us follow a variant of the proof
of Theorem~\ref{thm:middle-few}.
Let~$\gamma$ be the integer mentioned
in the definition of the statement
``$\mathcal{A}$ has the tight middle-growth property'',
let~$\mathcal{T}$ be the merge tree induced by~$\mathcal{A}$
on an array~$A$ of length~$n$,
and let~$s_1,s_2,\ldots,s_\sigma$ be the lengths
of the dual runs of~$A$.
Like in the proof of Theorem~\ref{thm:middle-few},
we just need to prove that
\[ \sum_{R \in \mathcal{T}} \mathsf{cost}_{\mathbf{t}}^\ast(r_{\rightarrow i}) \leqslant
(1+1/(\mathbf{t}+3)) s_i \log_2(n / s_i) +
s_i \log_2(\mathbf{t}+1) +
\mathcal{O}(s_i)\]
for all~$i \leqslant \sigma$.
Let~$\mathcal{R}_h$ be the set of runs
at height~$h$ in~$\mathcal{T}$. By construction, no run
in~$\mathcal{R}_h$ descends from another one, which
proves that~$\sum_{R \in \mathcal{R}_h} r_{\rightarrow i} \leqslant s_i$ and that~$\sum_{R \in \mathcal{R}_h} r \leqslant n$.
Since each run~$R \in \mathcal{R}_h$
is of length~$r \geqslant 2^{h-\gamma}$,
it follows that~$|\mathcal{R}_h| \leqslant n / 2^{h-\gamma}$.
Then, consider the function
\[\mathsf{C}_{\mathbf{t}}(h) = \sum_{R \in\mathcal{R}_h}
\mathsf{cost}_{\mathbf{t}}^\ast(r_{\rightarrow i}).\]
We noted above that
\[ \mathsf{C}_{\mathbf{t}}(h) \leqslant
(1+1/(\mathbf{t}+3)) \sum_{R \in \mathcal{R}_h} r_{\rightarrow i} \leqslant (1+1/(\mathbf{t}+3))
s_i\]
for all~$h \geqslant 0$.
Let also~$f \colon x \mapsto \mathbf{t} + 2 + 2 \log_2(x+1)$,~$g \colon x \mapsto x \, f(s_i / x)$ and~$\mu = \lceil\log_2((\mathbf{t}+1) n / s_i)\rceil$.
Both functions~$f$ and~$g$ are positive, concave
and increasing on~$(0,+\infty)$, which shows that
\begin{align*}
\mathsf{C}_{\mathbf{t}}(\mu + \gamma + h) & \leqslant
\sum_{R \in \mathcal{R}_{\mu + \gamma+h}} f(r_{\rightarrow i})
\leqslant
|\mathcal{R}_{\mu + \gamma+h}| \, f\big({\textstyle\sum_{R \in \mathcal{R}_{\mu + \gamma+h}} r_{\rightarrow i} / |\mathcal{R}_{\mu + \gamma+h}|}\big) \leqslant
g\big(|\mathcal{R}_{\mu + \gamma+h}|\big) \\
& \leqslant
g\big(n / 2^{\mu + h}\big) \leqslant
g\big(2^{-h} s_i / (\mathbf{t}+1)\big) \\
& \leqslant
\big(\mathbf{t}+2+
2 \log_2\!\big( 2^h (\mathbf{t}+1) + 1 \big)\big)
2^{-h} s_i / (\mathbf{t}+1)
\\
& \leqslant \big(\mathbf{t} + 2 + 2(h+1)(\mathbf{t}+1)\big)2^{-h} s_i /(\mathbf{t}+1) \leqslant (4+2h) 2^{-h} s_i.
\end{align*}
We conclude that
\begin{align*}
\sum_{R \in \mathcal{T}} \mathsf{cost}_{\mathbf{t}}^\ast(r_{\rightarrow i})
& = \sum_{h \geqslant 0} \mathsf{C}_{\mathbf{t}}(h)
= \sum_{h=0}^{\mu+\gamma-1} \mathsf{C}_{\mathbf{t}}(h) + \sum_{h \geqslant 0}
\mathsf{C}_{\mathbf{t}}(\mu + \gamma+h) \\
& \leqslant (1+1/(\mathbf{t}+3)) (\mu+\gamma) s_i +
4 s_i \sum_{h \geqslant 0} 2^{-h} +
2 s_i \sum_{h \geqslant 0} h 2^{-h} \\
& \leqslant
(1+1/(\mathbf{t}+3))(\log_2(n / s_i) + \log_2(\mathbf{t}+1)) s_i + \mathcal{O}(s_i) \\
& \leqslant
(1+1/(\mathbf{t}+3)) \log_2(n / s_i) s_i +
\log_2(\mathbf{t}+1) s_i + \mathcal{O}(s_i). &&\qedhere
\end{align*}
\end{proof} | 1,976 | 49,320 | en |
train | 0.10.15 | \subsection{Parameter t with logarithmic growth and tight middle-growth property}
\label{subsec:variable-t}
Letting the parameter~$\mathbf{t}$ vary,
we minimise the upper bound provided by Theorem~\ref{thm:PoS-constant}
by choosing~$\mathbf{t} = \Theta(\mathcal{H}^\ast+1)$, in which case
this upper bound simply becomes~$n \mathcal{H}^\ast + \log_2(\mathcal{H}^\ast+1)n + \mathcal{O}(n)$.
However, computing~$\mathcal{H}^\ast$ before
starting the actual sorting process is not reasonable.
Instead, we update the parameter~$\mathbf{t}$
as follows, which will provide us with a
slightly larger upper bound.
\begin{definition}
\label{def:log-galloping}
We call \emph{logarithmic} galloping sub-routine
the merging sub-routine that, when merging adjacent runs
of lengths~$a$ and~$b$, performs the same comparisons and
element moves as the~$\mathbf{t}$-galloping sub-routine
for~$\mathbf{t} = \lceil \log_2(a+b) \rceil$.
\end{definition}
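For instance, when merging two adjacent runs of lengths~$a = 100$
and~$b = 28$, the logarithmic galloping sub-routine uses the
parameter~$\mathbf{t} = \lceil \log_2(128) \rceil = 7$; the parameter
thus grows with the runs being merged, without requiring any knowledge
of~$\mathcal{H}^\ast$ in advance.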
\begin{theorem}\label{thm:PoS-log}
Let~$\mathcal{A}$ be a stable natural merge sort algorithm
with the tight middle-growth property.
If~$\mathcal{A}$ uses the logarithmic galloping
sub-routine for merging runs,
it requires at most
\[n \mathcal{H}^\ast +
2 \log_2(\mathcal{H}^\ast+1) n + \mathcal{O}(n)\]
element comparisons
to sort arrays of length~$n$ and dual run-length entropy~$\mathcal{H}^\ast$.
\end{theorem}
\begin{proof}
Let us refine and adapt the proofs
of Theorems~\ref{thm:middle-few}
and~\ref{thm:PoS-constant}.
Let~$\gamma$ be the integer mentioned
in the definition of the statement
``$\mathcal{A}$ has the tight middle-growth property'',
let~$\mathcal{T}$ be the merge tree induced by~$\mathcal{A}$
on an array~$A$ of length~$n$,
and let~$s_1,s_2,\ldots,s_\sigma$ be the lengths
of the dual runs of~$A$.
Using a parameter~$\mathbf{t} = \lceil \log_2(r) \rceil$
to merge runs~$R'$ and~$R''$ into one run~$R$
requires at most
\[1 + \sum_{i=1}^\sigma
\mathsf{cost}_{\lceil \log_2(r) \rceil}^\ast(r'_{\rightarrow i}) +
\mathsf{cost}_{\lceil \log_2(r) \rceil}^\ast(r''_{\rightarrow i})\]
element comparisons.
Given that
\begin{align*}
\mathsf{cost}_{\lceil \log_2(r) \rceil}^\ast(r'_{\rightarrow i}) & \leqslant
\min\{(1+1/\log_2(r)) r'_{\rightarrow i}, \log_2(r)+3+2 \log_2(r'_{\rightarrow i}+1)\} \\
& \leqslant \min\{(1+1/\log_2(r)) r'_{\rightarrow i}, 3\log_2(r+1)+3\},
\end{align*}
and that~$r'_{\rightarrow i} + r''_{\rightarrow i}=r_{\rightarrow i}$,
this makes a total of at most
\[1 + \sum_{i=1}^\sigma \mathsf{cost}_{\log}^\ast(r,r_{\rightarrow i})\]
element comparisons, where~$\mathsf{cost}_{\log}^\ast(r,m) = \min\{(1+1/\log_2(r)) m,6\log_2(r+1)+6\}$.
Then, let~$\mathcal{T}^\ast$ denote the tree obtained
after removing the leaves of~$\mathcal{T}$.
We focus on proving that
\[\sum_{R \in \mathcal{T}^\ast}
\mathsf{cost}_{\log}^\ast(r,r_{\rightarrow i}) \leqslant
s_i \log_2(n/s_i) + 2 s_i \log_2(\log_2(2n/s_i)) +
\mathcal{O}(s_i)\]
for all~$i \leqslant \sigma$.
Indeed, finding the run decomposition~$R_1,R_2,\ldots,R_\rho$ of~$A$
requires~$n-1$ comparisons,
and~$\rho-1 \leqslant n-1$ merges
are then performed, which will make a total of
up to
\begin{align*}
2n+\sum_{R \in \mathcal{T}^\ast}\sum_{i=1}^\sigma
\mathsf{cost}_{\log}^\ast(r,r_{\rightarrow i})
& \leqslant
2 n + \sum_{i=1}^\sigma
s_i \log_2(n/s_i) + 2 s_i \log_2(\log_2(n/s_i)+1) +
\mathcal{O}(s_i) \\
& \leqslant n \mathcal{H}^\ast + 2 \log_2(\mathcal{H}^\ast+1) n + \mathcal{O}(n)
\end{align*}
comparisons, the latter inequality being due to
the concavity of the function~$x \mapsto \log_2(x+1)$.
Then, let~$\mathcal{R}_h$ be the set of runs
at height~$h$ in~$\mathcal{T}$, and let
\[\mathsf{C}_{\log}(h) = \sum_{R \in\mathcal{R}_h}
\mathsf{cost}_{\log}^\ast(r,r_{\rightarrow i}).\]
No run in~$\mathcal{R}_h$ descends
from another one, and each run~$R \in \mathcal{R}_h$ has length~$r \geqslant 2^{\max\{1,h-\gamma\}}$, which proves that~$|\mathcal{R}_h| \leqslant n / 2^{h-\gamma}$ and that
\[\mathsf{C}_{\log}(h) \leqslant
\sum_{R \in \mathcal{R}_h}
(1 + 1 / \log_2(r)) r_{\rightarrow i} \leqslant
\sum_{R \in \mathcal{R}_h}
(1 + 1 / \!\max\{1,h-\gamma\}) r_{\rightarrow i}
\leqslant
(1 + 1 / \!\max\{1,h-\gamma\}) s_i.\]
Let also~$f \colon x \mapsto 1 + \log_2(x+1)$,~$g \colon x \mapsto x \, f(n / x)$,~$z = n/s_i \geqslant 1$ and~$\nu = \lceil\log_2(z \log_2(2 z))\rceil + \gamma$.
Both functions~$f$ and~$g$ are concave,
positive and increasing on~$(0,+\infty)$,
which proves that
\begin{align*}
\mathsf{C}_{\log}(\nu + h) / 6 & \leqslant
\sum_{R \in \mathcal{R}_{\nu +h}}
f(r)
\leqslant
|\mathcal{R}_{\nu+h}| \, f\big({\textstyle\sum_{R \in \mathcal{R}_{\nu+h}} r / |\mathcal{R}_{\nu + h}|}\big) \leqslant
g(|\mathcal{R}_{\nu+h}|) \leqslant
g(n / 2^{\nu-\gamma+h}) \\
& \leqslant
g\big(2^{-h} s_i/\log_2(2z)\big) =
2^{-h} s_i f(2^h z \log_2(2z)) / \log_2(2z) \\
& \leqslant 2^{-h} s_i \big(1 + \log_2(2^h
(2z)^2)\big) / \log_2(2z) =
2^{-h} (h+1) s_i / \log_2(2z) +
2^{1-h} s_i \\
& \leqslant (h+3) 2^{-h} s_i,
\end{align*}
where the inequality between the second and
third line simply comes from the fact that
\[1 + 2^h z \log_2(2z) \leqslant
1 + 2^{h+1} z^2 \leqslant 2^h (1+2z^2) \leqslant
2^h \times (2z)^2\]
whenever~$h \geqslant 0$ and~$z \geqslant 1$.
It follows that
\[\sum_{h=1}^\gamma \mathsf{C}_{\log}(h) +
\sum_{h \geqslant 0} \mathsf{C}_{\log}(\nu + h)
\leqslant 2 \gamma s_i + 6 \sum_{h \geqslant 0}(h+3)2^{-h}s_i =
\mathcal{O}(s_i),\]
whereas
\[\sum_{h=1}^{\nu-\gamma-1} \mathsf{C}_{\log}(\gamma+h) \leqslant
\sum_{h=1}^{\nu-1} (1+1/h) s_i \leqslant ((\nu-1)+1+\ln(\nu-1)) s_i \leqslant (\nu+\ln(\nu)) s_i.\]
Thus, we conclude that
\begin{align*}
\sum_{R \in \mathcal{T}^\ast} \mathsf{cost}_{\log}^\ast(r,r_{\rightarrow i})
& = \sum_{h \geqslant 1} \mathsf{C}_{\log}(h) \leqslant
s_i(\nu + \ln(\nu)) + \mathcal{O}(s_i) \\
& \leqslant
s_i \log_2(z) + s_i \log_2(\log_2(2z)) + s_i \log_2(\log_2(2z^2)) + \mathcal{O}(s_i) \\
& \leqslant
s_i \log_2(z) + 2 s_i \log_2(\log_2(2z)) + \mathcal{O}(s_i). &&\qedhere
\end{align*}
\end{proof}
It is of course possible to approach
the~$n \mathcal{H}^\ast + \log_2(\mathcal{H}^\ast+1)n + \mathcal{O}(n)$
upper bound by improving our
update policy. For instance,
choosing~$\mathbf{t} =
\tau \lceil \log_2(a+b) \rceil$
for a given constant~$\tau \geqslant 1$ provides us with an
\[n \mathcal{H}^\ast + (1+1/\tau) \log_2(\mathcal{H}^\ast+1)n +
\log_2(\tau) n + \mathcal{O}(n)\]
upper bound, and choosing~$\mathbf{t} =
\lceil \log_2(a+b) \rceil \times
\lceil \log_2(\log_2(a+b))\rceil$ further improves that upper bound.
However, such improvements soon become negligible
in comparison with the overhead of having to
compute the value of~$\mathbf{t}$. | 2,747 | 49,320 | en |
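As an illustration, for an array with~$\mathcal{H}^\ast = 15$, the
second-order term of Theorem~\ref{thm:PoS-log} amounts
to~$2\log_2(16)n = 8n$, whereas choosing~$\tau = 2$ lowers it
to~$(1+1/2)\log_2(16)n + \log_2(2)n = 7n$.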
train | 0.10.16 | \subsection{Refined upper bounds for adaptive ShiversSort\xspace}
\label{subsec:cASS:precise}
\begin{proposition}
\label{pro:cASS:not-tight}
The algorithm adaptive ShiversSort\xspace does not have the
tight middle-growth property.
\end{proposition}
\begin{proof}
Let~$A_k$ be an array whose run decomposition
consists in runs of lengths~$1 \cdot 2 \cdot 1 \cdot 4 \cdot 1 \cdot 8 \cdot 1 \cdot 16 \cdot 1 \cdots 1 \cdot 2^k \cdot 1$
for some integer~$k \geqslant 0$.
When sorting the array~$A_k$, the algorithm
adaptive ShiversSort\xspace keeps merging the two leftmost
runs at its disposal.
The resulting tree, represented in
Figure~\ref{fig:22-cASS-tree},
has height~$h = 2k$ and its root
is a run of length~$r = 2^{k+1}+k-1 = o(2^h)$.
\end{proof}
\begin{figure}
\caption{Merge tree induced by adaptive ShiversSort\xspace on~$A_4$.
Each run is labelled by its length.\label{fig:22-cASS-tree}}
\end{figure}
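The following Python sketch (ours, purely as a sanity check) builds this
run decomposition and confirms the total length computed above.
\begin{verbatim}
def runs_A(k):
    # Run lengths 1, 2, 1, 4, 1, 8, ..., 1, 2^k, 1 of the array A_k.
    lengths = [1]
    for j in range(1, k + 1):
        lengths += [2 ** j, 1]
    return lengths

# Total length 2^(k+1) + k - 1, e.g. 35 when k = 4.
assert sum(runs_A(4)) == 2 ** 5 + 4 - 1
\end{verbatim}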
\begin{theorem}
\label{thm:cASS-log}
Theorems~\ref{thm:PoS-constant}
and~\ref{thm:PoS-log} remain valid if
we consider the algorithm
adaptive ShiversSort\xspace instead of
an algorithm with the tight middle-growth property.
\end{theorem}
\begin{proof}
Both proofs of Theorems~\ref{thm:PoS-constant}
and~\ref{thm:PoS-log} are based on the following
construction.
Given a constant~$\gamma$ and a merge tree~$\mathcal{T}$,
we partition the runs of~$\mathcal{T}$
into sets~$\mathcal{R}_h$ of pairwise incomparable runs,
such that each run~$R$ in each set~$\mathcal{R}_h$
has a length~$r \geqslant 2^{h-\gamma}$.
In practice, in both proofs, we define~$\mathcal{R}_h$ as
the set of runs with height~$h$.
Below, we adapt this approach.
Let~$\mathcal{T}$ be the merge tree induced by adaptive ShiversSort\xspace on an
array~$A$ of length~$n$
and dual runs~$S_1,S_2,\ldots,S_\sigma$.
Following the terminology of~\cite{Ju20},
we say that a run~$R$ in~$\mathcal{T}$ is
\emph{non-expanding} if it has the same level
as its parent~$\aR{1}$, i.e., if~$\ell = \al{1}$.
It is shown, in the proof
of~\cite[Theorem 3.2]{Ju20},
that the lengths of the non-expanding runs
sum up to an integer smaller than~$3n$.
Hence, we partition~$\mathcal{T}$ as follows.
First, we place all the non-expanding runs
into one set~$\mathcal{R}_{-1}$.
Then, for each~$\ell \geqslant 0$,
we define~$\mathcal{R}_\ell$ as the set of expanding runs
with level~$\ell$:
by construction, each run~$R$ in~$\mathcal{R}_\ell$
has a length~$r \geqslant 2^\ell$,
and the elements of~$\mathcal{R}_\ell$ are pairwise
incomparable.
Thus, if we use a constant parameter~$\mathbf{t}$,
we perform a total of at most~$2n + \sum_{h \geqslant -1} \mathsf{C}_{\mathbf{t}}^\ast(h)$
element comparisons, where
\[\mathsf{C}_{\mathbf{t}}^\ast(h) = \sum_{R \in \mathcal{R}_h}
\sum_{i=1}^\sigma
\mathsf{cost}_{\mathbf{t}}^\ast(r_{\rightarrow i}).\]
The proof of
Theorem~\ref{thm:PoS-constant} then shows that
\[\sum_{h \geqslant 0}
\mathsf{C}_{\mathbf{t}}^\ast(h) \leqslant (1+1/(\mathbf{t}+3)) n \mathcal{H}^\ast
+ \log_2(\mathbf{t}+1)n + \mathcal{O}(n),\]
whereas
\[\mathsf{C}_{\mathbf{t}}^\ast(-1) =
\sum_{R \in \mathcal{R}_{-1}}
\sum_{i=1}^\sigma
\mathsf{cost}_{\mathbf{t}}^\ast(r_{\rightarrow i})
\leqslant
\sum_{R \in \mathcal{R}_{-1}}
\sum_{i=1}^\sigma 2 r_{\rightarrow i} =
\sum_{R \in \mathcal{R}_{-1}} 2 r
\leqslant 6 n,\]
thereby proving that adaptive ShiversSort\xspace has the desired
complexity upper bound when using the~$\mathbf{t}$-galloping
sub-routine.
Similarly, we perform at most~$2n + \mathsf{C}_{\log}^\ast(-1) + \sum_{h \geqslant 1}
\mathsf{C}_{\log}^\ast(h)$ element comparisons
if we use the logarithmic
galloping sub-routine, where we set
\[\mathsf{C}_{\log}^\ast(h) = \sum_{R \in \mathcal{R}^\ast_h}
\sum_{i=1}^\sigma
\mathsf{cost}_{\log}^\ast(r,r_{\rightarrow i}),\]
and where~$\mathcal{R}^\ast_h$ denotes the set of runs in~$\mathcal{R}_h$
that are not leaves of~$\mathcal{T}$.
By following the proof of
Theorem~\ref{thm:PoS-log},
one concludes similarly that
\[\sum_{h \geqslant 1}
\mathsf{C}_{\log}^\ast(h) \leqslant n \mathcal{H}^\ast
+ 2 \log_2(\mathcal{H}^\ast+1)n + \mathcal{O}(n),\]
and observing that
\[\mathsf{C}_{\log}^\ast(-1) \leqslant \sum_{R \in \mathcal{R}^\ast_{-1}} (1+1/\log_2(r)) r \leqslant
\sum_{R \in \mathcal{R}^\ast_{-1}} 2 r \leqslant 6 n\]
completes the proof.
\end{proof} | 1,605 | 49,320 | en |
train | 0.10.17 | \subsection{Refined upper bounds for PeekSort\xspace}
\label{subsec:PeS:precise}
\begin{proposition}
\label{pro:PeS:not-tight}
The algorithm PeekSort\xspace does not have the
tight middle-growth property.
\end{proposition}
\begin{proof}
For all~$k \geqslant 0$,
let~$\mathcal{S}_k$ be the integer-valued
sequence defined inductively by~$\mathcal{S}_0 = 1$ and~$\mathcal{S}_{k+1} = \mathcal{S}_k \cdot
3^k \cdot \mathcal{S}_k$, where~$u \cdot v$ denotes the concatenation of two
sequences~$u$ and~$v$.
Then, let~$B_k$ be an array whose run decomposition
consists in runs whose lengths form the sequence~$\mathcal{S}_k$.
When sorting the array~$B_k$, the algorithm
PeekSort\xspace performs two merges per
sub-sequence~$\mathcal{S}_1 = 1 \cdot 1 \cdot 1$,
thereby obtaining a sequence
of run lengths that is~$3 \mathcal{S}_{k-1}$, i.e.,
the sequence~$\mathcal{S}_{k-1}$ whose elements were
tripled. It then works on that new
sequence, obtaining the sequences~$9 \mathcal{S}_{k-2},
\ldots,3^k \mathcal{S}_0$.
The resulting tree, represented in
Figure~\ref{fig:32-PeS-tree},
has height~$h = 2k$ and its root
is a run of length~$r = 3^k = o(2^h)$.
\end{proof}
\begin{figure}
\caption{Merge tree induced by PeekSort\xspace on~$B_3$.
Each run is labelled by its length.\label{fig:32-PeS-tree}}
\end{figure}
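Here again, a short Python sketch (ours, purely as a sanity check)
builds the sequence~$\mathcal{S}_k$ and confirms that the root run
of~$B_k$ has length~$3^k$.
\begin{verbatim}
def S(k):
    # S_0 = (1) and S_{j+1} = S_j . 3^j . S_j, where . is concatenation.
    s = [1]
    for j in range(k):
        s = s + [3 ** j] + s
    return s

# S_1 = (1, 1, 1), and B_3 has total length 3^3 = 27.
assert S(1) == [1, 1, 1] and sum(S(3)) == 27
\end{verbatim}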
The method of casting aside runs with a limited
total length, which we employed while adapting
the proofs of Theorems~\ref{thm:PoS-constant}
and~\ref{thm:PoS-log} to adaptive ShiversSort\xspace, does not work with
the algorithm PeekSort\xspace.
Instead, we rely on the approach employed by
Bayer~\cite{Ba75} to prove the~$n\mathcal{H}+2n$ upper
bound on the number of element comparisons and
moves when PeekSort\xspace uses the naïve merging
sub-routine. This approach is based on the
following result, which relies on the notions of
\emph{split runs} and \emph{growth rate}
of a run.
In what follows, let us recall some notations:
given an array~$A$ of length~$n$ whose run decomposition
consists of runs~$R_1,R_2,\ldots,R_\rho$, we set~$e_i = r_1+ \ldots + r_i$ for all integers~$i \leqslant \rho$,
and we denote by~$R_{i \ldots j}$ a run that results from
merging consecutive runs~$R_i,R_{i+1},\ldots,R_j$.
\begin{definition}
\label{def:split:runs}
Let~$\mathcal{T}$ be a merge tree, and let~$R$ and~$R'$
be the children of a run~$\overline{R}$.
The \emph{split run} of~$\overline{R}$ is
defined as the rightmost leaf of~$R$ if~$r \geqslant r'$, and as the leftmost leaf
of~$R'$ otherwise.
The length of that split run is then called the
\emph{split length} of~$\overline{R}$, and it is
denoted by~$\mathsf{sl}(\overline{R})$.
Finally, the quantity~$\log_2(\overline{r}) -
\max\{\log_2(r),\log_2(r')\}$ is called the \emph{growth rate}
of the run~$\overline{R}$, and is denoted by~$\mathsf{gr}(\overline{R})$.
\end{definition}
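For instance, if~$\overline{R}$ has children~$R$ and~$R'$ of
lengths~$r = 5$ and~$r' = 3$, then~$\overline{r} = 8$, the split run
of~$\overline{R}$ is the rightmost leaf of~$R$,
and~$\mathsf{gr}(\overline{R}) = \log_2(8) - \log_2(5) =
3 - \log_2(5) \approx 0.678$.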
\begin{lemma}
\label{lem:bayer}
Let~$\mathcal{T}$ be a merge tree induced by PeekSort\xspace.
We have~$\mathsf{gr}(\overline{R}) \overline{r} + 2 \mathsf{sl}(\overline{R})
\geqslant \overline{r}$ for each internal node~$\overline{R}$ of~$\mathcal{T}$.
\end{lemma}
\begin{proof}
Let~$R = R_{i \ldots k}$ and~$R' = R_{k+1 \ldots j}$
be the children of the run~$\overline{R}$.
We assume, without loss of generality, that~$r \geqslant r'$. The case~$r < r'$
is entirely symmetric.
Definition~\ref{def:tree:PeS} then states that
\[|r - r'| = |2 e_k - e_j - e_{i-1}| \leqslant
|2 e_{k-1} - e_j - e_{i-1}| = |r - r' - 2 r_k|,\]
which means that~$r- r' - 2 r_k$ is negative
and that~$r - r' \leqslant 2 r_k + r' - r$.
Then, consider the function~$f \colon t \mapsto
4t-3-\log_2(t)$. This function is non-negative
and increasing on the interval~$[1/2,+\infty)$.
Finally, let~$z = r / \overline{r}$, so that~$\mathsf{gr}(\overline{R}) =
-\log_2(z)$ and~$r' = (1-z) \overline{r}$. Since~$r \geqslant r'$, we have~$z \geqslant 1/2$,
and therefore
\[\mathsf{gr}(\overline{R}) \overline{r} + 2 \mathsf{sl}(\overline{R})
= 2 r_k - \log_2(z) \overline{r} \geqslant
2 (r - r') - \log_2(z) \overline{r} =
(f(z)+1) \overline{r} \geqslant \overline{r}. \qedhere\]
\end{proof}
\begin{theorem}
\label{thm:PeS-constant}
Theorem~\ref{thm:PoS-constant} remains valid if
we consider the algorithm
PeekSort\xspace instead of
an algorithm with the tight middle-growth property.
\end{theorem}
\begin{proof}
Let~$\mathcal{T}$ be the merge tree induced by PeekSort\xspace on an
array~$A$ of length~$n$,
runs~$R_1,R_2,\ldots,R_\rho$
and dual runs~$S_1,S_2,\ldots,S_\sigma$,
and let~$\mathcal{T}^\ast$ be the tree obtained by deleting
the~$\rho$ leaves~$R_i$ from~$\mathcal{T}$.
Using~$\mathbf{t}$-galloping
to merge runs~$R$ and~$R'$ into one run~$\overline{R}$
requires at most
\begin{align*}
\mathcal{C}_{\overline{R}} & = 1 + \sum_{i=1}^\sigma
\mathsf{cost}_{\mathbf{t}}^\ast(r_{\rightarrow i}) +
\mathsf{cost}_{\mathbf{t}}^\ast(r'_{\rightarrow i}) \\
& \leqslant 1 + \sum_{i=1}^\sigma
\min\{(1+1/(\mathbf{t}+3)) \overline{r}_{\rightarrow i},
2\mathbf{t} + 4 + 4 \log_2(\overline{r}_{\rightarrow i}+1)\}
\end{align*}
element comparisons. Since~$1 + 1/(\mathbf{t}+3) \leqslant 2$ and
\[2 \, \mathsf{sl}(\overline{R}) \geqslant (1 - \mathsf{gr}(\overline{R})) \overline{r}
= \sum_{i=1}^\sigma (1 - \mathsf{gr}(\overline{R})) \overline{r}_{\rightarrow i},\]
we even have
\[\mathcal{C}_{\overline{R}} \leqslant 1 + 4 \mathsf{sl}(\overline{R}) + \sum_{i=1}^\sigma
\mathsf{cost}_{\mathbf{t},\mathsf{gr}(\overline{R})}^\heartsuit(\overline{r}_{\rightarrow i}),\]
where we set~$\mathsf{cost}_{\mathbf{t},\gamma}^\heartsuit(m) =
\min\{(1+1/(\mathbf{t}+3)) \gamma m,2\mathbf{t}+4+4\log_2(m+1)\}$.
Then, note that each leaf run~$R_i$ is a split run
of at most two internal nodes of~$\mathcal{T}$: these are
the parents of the least ancestor of~$R_i$ that
is a left run
(this least ancestor can be~$R_i$ itself) and of the least ancestor of~$R_i$
that is a right run.
It follows that
\[\sum_{R \in \mathcal{T}^\ast} \mathsf{sl}(R) \leqslant 2 \sum_{i=1}^\rho r_i = 2 n.\]
Consequently, overall, and including the~$n-1$
comparisons needed to find the run decomposition
of~$A$, the algorithm PeekSort\xspace performs a total of
at most
\[n + \sum_{R \in \mathcal{T}^\ast} \mathcal{C}_{R}
\leqslant 10 n + \sum_{i=1}^\sigma
\sum_{R \in \mathcal{T}^\ast} \mathsf{cost}_{\mathbf{t},\mathsf{gr}(R)}^\heartsuit(r_{\rightarrow i})\]
element comparisons.
Now, given an integer~$i \leqslant \sigma$,
we say that a run~$R$ is \emph{small}
if it has a length~$r < (\mathbf{t}+1) n / s_i$, and
that~$R$ is \emph{large} otherwise.
Then, we denote by~$\mathcal{T}^{\mathsf{small}}$ the set of
small runs of~$\mathcal{T}$, and by~$\mathcal{T}^{\mathsf{large}}$
the set of large runs of~$\mathcal{T}$.
We will prove both inequalities
\begin{align}
\sum_{R \in \mathcal{T}^{\mathsf{small}}} \!\!\mathsf{cost}_{\mathbf{t},\mathsf{gr}(R)}^\heartsuit(r_{\rightarrow i}) \leqslant
(1+1/(\mathbf{t}+3)) \log_2((\mathbf{t}+1) n / s_i) s_i \\
\sum_{R \in \mathcal{T}^{\mathsf{large}}} \!\!\mathsf{cost}_{\mathbf{t},\mathsf{gr}(R)}^\heartsuit(r_{\rightarrow i}) = \mathcal{O}(s_i),
\end{align}
from which Theorem~\ref{thm:PeS-constant} will follow.
We first focus on proving the inequality (1).
For all~$j \leqslant \rho$, let~$R_j^\uparrow$
be the set of those strict ancestors of
the run~$R_j$ that are small, i.e., of the runs~$\aR{k}_j$ such
that~$1 \leqslant k \leqslant |R_j^\uparrow|$.
With these notations, it already follows that
\begin{align*}
\sum_{R \in \mathcal{T}^{\mathsf{small}}}
\mathsf{gr}(R) r_{\rightarrow i} & =
\sum_{R \in \mathcal{T}^{\mathsf{small}}}
\sum_{j=1}^\rho \mathbf{1}_{R \in R_j^\uparrow} \mathsf{gr}(R) (r_j)_{\rightarrow i} =
\sum_{j=1}^\rho \sum_{k=1}^{|R_j^\uparrow|} \mathsf{gr}(\aR{k}_j) (r_j)_{\rightarrow i} \\
& \leqslant
\sum_{j=1}^\rho \sum_{k=1}^{|R_j^\uparrow|}
\log_2(\ar{k}_j / \ar{k-1}_j) (r_j)_{\rightarrow i}
=
\sum_{j=1}^\rho \log_2(\ar{|R_j^\uparrow|}_j / r_j) (r_j)_{\rightarrow i} \\
& \leqslant \sum_{j=1}^\rho \log_2((\mathbf{t}+1) n/s_i) (r_j)_{\rightarrow i} = \log_2((\mathbf{t}+1) n/s_i) s_i,
\end{align*}
which proves that
\[\sum_{R \in \mathcal{T}^{\mathsf{small}}} \mathsf{cost}_{\mathbf{t},\mathsf{gr}(R)}^\heartsuit(r_{\rightarrow i}) \leqslant
(1+1/(\mathbf{t}+3)) \sum_{R \in \mathcal{T}^{\mathsf{small}}} \mathsf{gr}(R)
r_{\rightarrow i} \leqslant
(1+1/(\mathbf{t}+3)) \log_2((\mathbf{t}+1) n / s_i) s_i.\]
Finally, let
\[\mathsf{C}_{\mathbf{t}}^{\mathsf{large}}(h) = \sum_{R \in \mathcal{R}^{\mathsf{large}}_h}
\mathsf{cost}_{\mathbf{t},\mathsf{gr}(R)}^\heartsuit(r_{\rightarrow i}),\]
where~$\mathcal{R}^{\mathsf{large}}_h$ denotes the set
of runs with height~$h \geqslant 0$
in the sub-tree~$\mathcal{T}^{\mathsf{large}}$.
Since PeekSort\xspace has the fast-growth property, there is an integer~$\kappa$ such that~$\ar\kappa \geqslant 2 r$ for all runs~$R$ in~$\mathcal{T}$.
Thus, each run~$R$ in~$\mathcal{R}^{\mathsf{large}}_h$ is of length~$r \geqslant 2^{h/\kappa-1} (\mathbf{t}+1) n / s_i$, and~$|\mathcal{R}^{\mathsf{large}}_h| \leqslant 2^{1-h/\kappa} s_i / (\mathbf{t}+1)$.
Then, since the functions~$f \colon x \mapsto 2\mathbf{t}+4+4\log_2(x+1)$
and~$g \colon x \mapsto x f(s_i/x)$
are positive, increasing and
concave on~$(0,+\infty)$, we have
\begin{align*}
\mathsf{C}_{\mathbf{t}}^{\mathsf{large}}(h)
& \leqslant
\sum_{R \in \mathcal{R}^{\mathsf{large}}_h}f(r_{\rightarrow i})
\leqslant
|\mathcal{R}^{\mathsf{large}}_h| \, f\big({\textstyle\sum_{R \in \mathcal{R}^{\mathsf{large}}_h} r_{\rightarrow i} / |\mathcal{R}^{\mathsf{large}}_h|}\big) \leqslant
g(|\mathcal{R}^{\mathsf{large}}_h|) \leqslant
g\big(2^{1-h/\kappa}s_i / (\mathbf{t}+1)\big) \\
g(|\mathcal{R}^{\mathsf{large}}_h|) \leqslant
g\big(2^{1-h/\kappa}s_i / (\mathbf{t}+1)\big) \\
& \leqslant
2^{1-h/\kappa}s_i \big(2\mathbf{t}+4+4
\log_2(2^{h/\kappa-1} (\mathbf{t}+1) + 1)\big)/ (\mathbf{t}+1) \\
& \leqslant
2^{1-h/\kappa}s_i \big(2\mathbf{t}+4+4
(h + \mathbf{t} + 2)\big)/ (\mathbf{t}+1)
\leqslant
(12 + 4 h) 2^{1-h/\kappa}s_i
\end{align*}
for all~$h \geqslant 0$. This allows us to
conclude that
\[\sum_{R \in \mathcal{T}^{\mathsf{large}}} \mathsf{cost}_{\mathbf{t},\mathsf{gr}(R)}^\heartsuit(r_{\rightarrow i}) =
\sum_{h \geqslant 0} \mathsf{C}_{\mathbf{t}}^{\mathsf{large}}(h)
\leqslant \sum_{h \geqslant 0}(12+4h)2^{1-h/\kappa} s_i
= \mathcal{O}(s_i). \qedhere\]
\end{proof} | 3,954 | 49,320 | en |
train | 0.10.18 | \begin{theorem}
\label{thm:PeS-log}
Theorem~\ref{thm:PoS-log} remains valid if
we consider the algorithm
PeekSort\xspace instead of
an algorithm with the tight middle-growth property.
\end{theorem}
\begin{proof}
Let us reuse the notations we introduced while
proving Theorem~\ref{thm:PeS-constant}.
With these notations, the algorithm PeekSort\xspace
performs a total of at most
\[\mathcal{C} = 10n + \sum_{i=1}^\sigma \sum_{R \in \mathcal{T}^\ast}
\mathsf{cost}_{\lceil \log_2(r) \rceil,\mathsf{gr}(R)}^{\heartsuit}(r_{\rightarrow i})\]
element comparisons.
Let~$\theta = 2^{\ln(2)}$.
We say that an inner node~$R$ of~$\mathcal{T}$
is \emph{biased} if it has a child~$R'$ such that~$r \leqslant \theta r'$, i.e.,
if~$\mathsf{gr}(R) \leqslant \ln(2)$.
Let~$\mathcal{T}^{\mathsf{biased}}$ be the set of such runs.
If~$R$ is biased,
Lemma~\ref{lem:bayer} proves that
the split run of~$R$ is a run~$\overline{R}$ of length
\[\overline{r} = \mathsf{sl}(R) \geqslant (1 - \mathsf{gr}(R)) r / 2
\geqslant \ln(e/2) r / 2.\]
Since each run~$R_j$ is the split run of at most
two nodes~$R \in \mathcal{T}^\ast$, the lengths of the biased runs
sum up to at most~$4 n / \ln(e/2)$, and therefore
\[\sum_{i=1}^\sigma \sum_{R \in \mathcal{T}^{\mathsf{biased}}}
\mathsf{cost}_{\lceil \log_2(r) \rceil,\mathsf{gr}(R)}^{\heartsuit}(r_{\rightarrow i}) \leqslant
\sum_{i=1}^\sigma \sum_{R \in \mathcal{T}^{\mathsf{biased}}}
2 r_{\rightarrow i} =
2 \sum_{R \in \mathcal{T}^{\mathsf{biased}}} r = \mathcal{O}(n).\]
Then, given an integer~$i \leqslant \sigma$,
we set~$z = n / s_i \geqslant 1$ and~$\mu = \lceil \log_\theta(z \log_2(2 z)) \rceil$.
We also denote by~$\mathcal{R}_h$ the set of unbiased runs~$R$ such that~$\lfloor \log_\theta(r) \rfloor = h$.
Since these runs are unbiased, each set~$\mathcal{R}_h$
consists of pairwise incomparable runs.
Then, let
\[\mathsf{C}(h) = \sum_{R \in \mathcal{R}_h} \mathsf{gr}(R) r_{\rightarrow i}
\text{, }
\mathsf{D}(h) = \sum_{R \in \mathcal{R}_h}
r_{\rightarrow i} / \log_2(r)
\text{ and }
\mathsf{E}(h) = \sum_{R \in \mathcal{R}_h} (1+\log_2(r+1)).\]
By following the proof of
Theorem~\ref{thm:PeS-constant}, we first observe
that
\[\sum_{h=1}^{\mu-1} \mathsf{C}(h) \leqslant \log_2(\theta^{\mu}) s_i =
s_i \log_2(z) + s_i \log_2(\log_2(2z)) +
\mathcal{O}(s_i).\]
We also observe that
\[\sum_{h=1}^{\mu-1} \mathsf{D}(h) \leqslant \sum_{h=1}^{\mu-1} \sum_{R \in \mathcal{R}_h} \frac{r_{\rightarrow i}}{h \log_2(\theta)} \leqslant \sum_{h=1}^{\mu-1}
\frac{s_i}{h \ln(2)} \leqslant s_i (1 + \log_2(\mu)) =
s_i \log_2(\log_2(2z)) + \mathcal{O}(s_i).\]
Finally, consider the functions~$f \colon x \mapsto 1 + \log_2(x+1)$ and~$g \colon x \mapsto x \, f(n / x)$.
Both functions~$f$ and~$g$ are concave,
positive and increasing on~$(0,+\infty)$, which proves that
\begin{align*}
\mathsf{E}(\mu + h) & \leqslant
\sum_{R \in \mathcal{R}_{\mu +h}}
f(r)
\leqslant
|\mathcal{R}_{\mu+h}| \, f\big({\textstyle\sum_{R \in \mathcal{R}_{\mu+h}} r / |\mathcal{R}_{\mu + h}|}\big) \leqslant
g(|\mathcal{R}_{\mu+h}|) \leqslant
g(n / \theta^{\mu+h}) \\
& \leqslant
g\big(\theta^{-h} s_i/\log_2(2z)\big) =
\theta^{-h} s_i f(\theta^h z \log_2(2z)) / \log_2(2z) \\
& \leqslant \theta^{-h} s_i \big(1 + \log_2(\theta^h
(2z)^2)\big) / \log_2(2z) =
\theta^{-h} (h \ln(2)+1) s_i / \log_2(2z) +
2 \theta^{-h} s_i \\
& \leqslant (h+3) \theta^{-h} s_i,
\end{align*}
and therefore that
\[\sum_{h \geqslant \mu} \mathsf{E}(h) \leqslant
\sum_{h \geqslant 0} (h+3) \theta^{-h} s_i =
\mathcal{O}(s_i).\]
These three summations allow us to conclude that
\begin{align*}
\sum_{R \notin \mathcal{T}^{\mathsf{biased}}}
\mathsf{cost}_{\lceil \log_2(r) \rceil,\mathsf{gr}(R)}^\heartsuit(r_{\rightarrow i}) & =
\sum_{h \geqslant 1} \sum_{R \in \mathcal{R}_h}
\mathsf{cost}_{\lceil \log_2(r) \rceil,\mathsf{gr}(R)}^\heartsuit(r_{\rightarrow i}) \\
& \leqslant \sum_{h \geqslant 1} \sum_{R \in \mathcal{R}_h}
\min\{\mathsf{gr}(R) r_{\rightarrow i} + r_{\rightarrow i}/\log_2(r),6 (1+\log_2(r+1))\} \\
& \leqslant \sum_{h \geqslant 1} \min\{\mathsf{C}(h) + \mathsf{D}(h), 6 \mathsf{E}(h)\} \\
& \leqslant \sum_{h=1}^{\mu-1} (\mathsf{C}(h) + \mathsf{D}(h)) + 6 \sum_{h \geqslant \mu} \mathsf{E}(h) \\
& \leqslant s_i \log_2(n/s_i) +
2 s_i \log_2(\log_2(2n/s_i)) + \mathcal{O}(s_i),
\end{align*}
from which it follows that
\begin{align*}
\mathsf{C} & = \sum_{i=1}^\sigma \sum_{R \in \mathcal{T}^\ast}
\mathsf{cost}_{\lceil \log_2(r) \rceil,\mathsf{gr}(R)}^\heartsuit(r_{\rightarrow i}) + \mathcal{O}(n) \\
& = \sum_{i=1}^\sigma \sum_{R \in \mathcal{T}^{\mathsf{biased}}}
\mathsf{cost}_{\lceil \log_2(r) \rceil,\mathsf{gr}(R)}^\heartsuit(r_{\rightarrow i}) +
\sum_{i=1}^\sigma \sum_{R \notin \mathcal{T}^{\mathsf{biased}}}
\mathsf{cost}_{\lceil \log_2(r) \rceil,\mathsf{gr}(R)}^\heartsuit(r_{\rightarrow i}) + \mathcal{O}(n) \\
& \leqslant \sum_{i=1}^\sigma
\big(s_i \log_2(n/s_i) +
2 s_i \log_2(\log_2(2n/s_i)) + \mathcal{O}(s_i)\big) +
\mathcal{O}(n) \\
& \leqslant n \mathcal{H}^\ast + 2 \log_2(\mathcal{H}^\ast+1) n + \mathcal{O}(n). &&\qedhere
\end{align*}
\end{proof} | 2,224 | 49,320 | en |