\section{Introduction}
Cavity optomechanics (COM) \cite{Meystre2013,Metcalfe2014,Aspelmeyer2014} is playing an increasingly important role in making and steering on-chip devices, such as long-lived quantum memories \cite{Bagheri2011}, transducers \cite{Bagci2014,Tian2015,Lecocq2016}, motion sensors \cite{Arcizet2006,Srinivasan2011,yang2016}, and phonon lasers \cite{Vahala2009,Kent2010,Vahala2010,Mahboob2012,Kemiktarak2014,Painter2015,Xiao2017,Yang2018}. Phonon lasing, or coherent mechanical amplification, exhibits properties similar to those of an optical laser, such as a threshold, gain saturation, and linewidth narrowing in the lasing regime \cite{Wu2013,Jing2014,wang2014,Lv2017}, as demonstrated in experiments with trapped ions, nano-beams, superlattices, resonators, and electromechanical devices \cite{Vahala2009,Kent2010,Vahala2010,Mahboob2012,Kemiktarak2014,Painter2015,Xiao2017,Yang2018}. It provides coherent acoustic sources to drive phononic devices for practical applications in, e.g., audio filtering, acoustic imaging, and topological sound control \cite{imaging,filter,comb,topo,phononics}.
COM-based ultralow-threshold phonon lasers, featuring a continuously tunable gain spectrum to selectively amplify phonon modes from radio to microwave frequencies \cite{Vahala2010,Xiao2017,Yang2018}, provide a particularly attractive setting to explore quantum acoustic effects, such as two-mode mechanical correlations \cite{Kemiktarak2014} or phononic sub-Poissonian distributions \cite{Painter2015}.
In parallel, nonreciprocal optics \cite{Potton2014,Zoller2017} has emerged as an indispensable tool for a wide range of applications, such as invisibility cloaking, noise-free sensing, directional lasing, and one-way optical communications \cite{Lin2011,Feng2011,Scheucher2016,Miri2017,Sounas2017,Bahari2017}. Directional transmission of light has been achieved by using optical nonlinearities or dynamically modulated media \cite{bi2011,chang2014,shi2015,yang2014,Shen2016,Verhagen2016,Hua2016,Painter2017}. As a crucial element in signal readout and information processing, directional optical amplifiers (adding minimal noise at the output port) have also been proposed and studied, in microwave circuits \cite{qubit1,qubit2} and in a non-Hermitian time-Floquet device \cite{Fleury2018}. In a recent experiment, a reconfigurable optical device was demonstrated \cite{Dong2018}, with switchable functions as either a circulator or a directional amplifier \cite{Clerk-prx,pr-app}.
Directional amplification of microwave signals has also been experimentally demonstrated in a multi-mode COM system \cite{Sillanpaa}.
These abilities, allowing directional transmission and amplifications of optical signals \cite{Manipatruni2009,Bino2018,Yong2018,Bing2018,fan2018}, are fundamental for the emerging fields of chiral quantum optics and topological photonics \cite{Metelmann2017}.
As in optical systems \cite{qubit1,qubit2,Fleury2018,Dong2018,Clerk-prx,pr-app}, directional emissions and amplifications of phonons are particularly important in mechanical engineering \cite{li2011,Kim2015,Fleury2016,Alu2017,Poshakinskiy2017,Kim2016,Xu2016,Xu2017,thermal diode,Wang2018}, such as acoustic sensing or computing \cite{imaging,filter,comb,topo}. Here we propose a strategy to achieve a \textit{nonreciprocal mechanical amplifier} by coupling a COM resonator to a purely optical spinning resonator. We show that by exploiting the optical Sagnac effect \cite{Post1967,Chow1985,Ciminelli2017}, both the mechanical gain and the phonon-lasing threshold can be significantly altered. In particular, by driving the COM resonator from the left or the right side, coherent emission of phonons is enhanced or completely suppressed, enabling a highly-tunable nonreciprocal phonon laser. This provides a key element for applications of COM devices in e.g., chiral quantum acoustics or topological phononics \cite{comb,topo}.
\begin{figure}[ht]
\centering
\includegraphics[width=3.4in]{Fig1.eps}
\caption{(a) Schematic illustration of a nonreciprocal phonon laser composed of a spinning optical resonator coupled to a COM resonator. For the CCW rotation, driving the system from the left (with $\Delta_\mathrm{sag}>0)$ enhances the mechanical gain. The inset shows the equivalent two-level system model for phonon lasing where the transitions between supermodes are mediated by phonons. (b) Driving the system from the right (with $\Delta_\mathrm{sag}<0$) suppresses or even completely blocks the phonon-lasing process.}
\label{fig1}
\end{figure}
We note that very recently, by spinning a whispering-gallery-mode (WGM) optical resonator, nonreciprocal transmission with $99.6\%$ rectification was observed for photons, without any magnetic field or optical nonlinearity \cite{Carmon}. By spinning a shape-deformed resonator, purely optical effects such as mode coupling \cite{Ge2014} and broken chiral symmetry \cite{Sarma2015} have also been revealed. Such spinning devices can also be useful in nano-particle sensing \cite{Jing2018} or single-photon control \cite{Huang2018}. For sound, excellent isolation was demonstrated even earlier by using a circulating fluid \cite{Fleury2014}. In recent experiments, chiral mechanical cooling was also realized via nonreciprocal Brillouin scattering \cite{Kim2015,Kim2016,Xu2016,Xu2017}.
Here, as another necessary element, we show that nonreciprocal \textit{amplification} of phonons can be achieved by utilizing the optical Sagnac effect induced by a spinning resonator \cite{Carmon}. This opens up a new route to operating nonreciprocal COM devices for applications in, e.g., backaction-immune force sensing \cite{Vahala2009} and chiral acoustic engineering \cite{Zhang2018,Kuzyk}.
\section{Model and solutions}
We consider two coupled resonators, one of which is purely optical and the other supports a mechanical breathing mode (frequency $\omega_m$ and effective mass $m$) when pumped by light at frequency $\omega_L$ and amplitude $\varepsilon_L$.
Evanescent coupling between these resonators exists in the $1550\,\mathrm{nm}$ band, and the laser is coupled into and out of the resonator via a waveguide (see Fig.\,\ref{fig1}). The two resonators share the same resonance frequency $\omega_c$ for the stationary case. The decay rates of the COM resonator and the optical resonator are $\gamma_1$ and $\gamma_2$, respectively, which are related to the optical quality factors $Q_1$ and $Q_2$, i.e., $\gamma_{1,2}=\omega_c/Q_{1,2}$. This compound system, utilized to observe phonon lasing experimentally \cite{Vahala2010,Xiao2017}, can be further tuned by spinning the optical resonator with speed $\Omega$, due to the $\Omega$-dependent optical Sagnac effect. An immediate consequence of this effect is that for the counter-clockwise (CCW) rotation denoted by $\Omega>0$, \textit{phonon lasing can be enhanced or blocked} by driving the system from the left or the right (or equivalently, for the clockwise (CW) rotation denoted by $\Omega<0$, the phonon lasing is enhanced by driving the system from the right and it is blocked when the drive is from the left). This, as aforementioned, indicates a \textit{highly-tunable nonreciprocal phonon laser} by flexibly tuning the rotation speed and drive direction.
In a spinning resonator, the CW or CCW optical mode experiences different refractive indices \cite{Carmon}, i.e., $n_\pm=n[1\pm r_2\Omega(n^{-2}-1)/c]$, where $n$ and $r_2$ denote, respectively, the refractive index and the radius of the resonator, and $c$ is the speed of light in vacuum. As a result, the frequencies of the CW and CCW modes of the resonator experience Sagnac-Fizeau shifts \cite{Malykin,jing2017}. For light propagating in the same, or opposite, direction of the spinning resonator, we have $$\omega_c\rightarrow\omega_c\mp|\Delta_\mathrm{sag}|,$$ with
\begin{align}
\Delta_\mathrm{sag}=\pm\Omega\frac{nr_2\omega_c}{c}\left(1-\frac{1}{n^2}-\frac{\lambda}{n}\frac{dn}{d\lambda}\right),
\label{eq:sagnac}
\end{align}
where $\lambda=c/\omega_c$ is the optical wavelength. The dispersion term $(\lambda/n)(dn/d\lambda)$ denotes the Lorentz correction to the Fresnel-Fizeau drag coefficient, characterizing the relativistic origin of the Sagnac effect \cite{Malykin}. This term, which is relatively small in typical materials, can be safely ignored, as confirmed in a recent experiment \cite{Carmon}. In such a device, light is dragged by the spinning resonator, leading to nonreciprocal transmission for optical counter-propagating modes (see Ref.\,\cite{Carmon} for more details). Below we show that for our COM system, this leads to distinct changes of the radiation pressure on the mechanical mode, hence resulting in a \textit{nonreciprocal phonon laser}.
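As a quick numerical cross-check of Eq.\,(\ref{eq:sagnac}) (a sketch of our own, not part of the derivation; the function name and SI-unit conventions are assumptions, and the dispersion term is set to zero as in the experiment):

```python
import math

def sagnac_shift(omega_rot, n=1.48, r2=4.75e-3, wavelength=1550e-9,
                 dispersion=0.0):
    """Sagnac-Fizeau shift of Eq. (1), in rad/s.

    omega_rot  : rotation speed Omega (rad/s); the sign encodes the
                 drive direction relative to the CCW rotation.
    dispersion : the small Lorentz term (lambda/n)(dn/dlambda),
                 taken as zero here, as in the experiment.
    """
    c = 299792458.0                          # speed of light in vacuum (m/s)
    omega_c = 2 * math.pi * c / wavelength   # optical resonance frequency (rad/s)
    return omega_rot * n * r2 * omega_c / c * (1 - 1 / n**2 - dispersion)
```

Reversing the drive direction flips the sign of the shift while leaving its magnitude unchanged (the shift is linear and odd in $\Omega$), which is the origin of the nonreciprocity exploited in this work.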
As shown in Fig.\,\ref{fig1}, spinning the resonator along the CCW direction and driving the device from the left or the right, induces an optical red or blue shift, i.e., $\Delta_\mathrm{sag}>0$ or $\Delta_\mathrm{sag}<0$. The Hamiltonian of the system, in a frame rotating at the frequency $\omega_L$, can be written as ($\hbar=1$)
\begin{align}
H&=H_0+H_{\mathrm{int}}+H_{\mathrm{dr}},\nonumber\\
H_0&=-\Delta_L a_1^\dag a_1-(\Delta_L+\Delta_\mathrm{sag})a_2^\dag a_2+\omega_mb^\dag b,\nonumber\\
H_{\mathrm{int}}&=-\zeta x_0a_1^\dag a_1(b^\dag+b)+J(a^\dag_1a_2+a^\dag_2a_1),\nonumber\\
H_{\mathrm{dr}}&=i(\varepsilon_La_1^\dag-\varepsilon_L^*a_1).
\label{eq:Hamiltonian}
\end{align}
$H_0$ is the free Hamiltonian where the first and second terms describe the optical modes in the COM resonator and the spinning resonator, respectively; and the third term denotes the energy of the mechanical mode. $H_\mathrm{int}$ is the interaction Hamiltonian where the first term describes the coupling between the optical and mechanical mode in the optomechanical resonator; and the second term describes the coupling between the optical modes of the resonators. Note that $\Delta_L=\omega_L-\omega_c$ is the detuning between the drive laser and the resonance frequency of the resonator, $a_1$ and $a_2$ denote the optical annihilation operators of the resonators (coupled with the optical strength $J$), $x_0=\sqrt{\hbar/2m\omega_m}$, $b$ is the mechanical annihilation operator, $\zeta=\omega_c/r_1$ denotes the COM coupling strength, and $r_1$ is the radius of the COM resonator. Finally, $H_{\mathrm{dr}}$ denotes the drive which is fed into the coupled resonator system through the waveguide (see Fig.\,\ref{fig1}), with the driving amplitude
$$\varepsilon_L=\sqrt{2\gamma_1P_\mathrm{in}/\hbar\omega_L},$$
and the input power $P_\mathrm{in}$.
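For concreteness, the drive amplitude can be evaluated from the input power as follows (a minimal sketch; the constant and function names are our own):

```python
import math

HBAR = 1.054571817e-34  # reduced Planck constant (J*s)

def drive_amplitude(p_in, gamma1, omega_l):
    """Drive amplitude epsilon_L = sqrt(2*gamma1*P_in/(hbar*omega_L)).

    p_in    : input power P_in (W)
    gamma1  : decay rate of the COM resonator (rad/s)
    omega_l : drive laser frequency (rad/s)
    """
    return math.sqrt(2 * gamma1 * p_in / (HBAR * omega_l))
```

Note the square-root scaling: quadrupling the pump power doubles $\varepsilon_L$.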
The Heisenberg equations of motion of this compound system are then written as
\begin{align}
\dot{a}_1&=(i\Delta_L-\gamma_1)a_1+i\zeta x_0(b+b^\dag)a_1-iJa_2+\varepsilon_L,\nonumber\\
\dot{a}_2&=[i(\Delta_L+\Delta_{\mathrm{sag}})-\gamma_2]a_2-iJa_1,\nonumber\\
\dot{b}&=-(i\omega_m+\gamma_m)b+i\zeta x_0a^\dag_1a_1.
\label{eq:steady}
\end{align}
Here, $\gamma_m$ is the damping rate of the mechanical mode. We remark that, as already confirmed in the experiment on an optomechanical phonon laser \cite{Vahala2010}, for a strong pump field the quantum noise terms can be safely ignored if one is concerned only with mean-number behaviors (i.e., the threshold feature of the mechanical gain or the phonon amplification). Setting all the derivatives in Eq.\,(\ref{eq:steady}) to zero, the steady-state solutions of the system can be readily derived as
\begin{align}
a_{1,s}&=\frac{\varepsilon_L}{\gamma_1 -i(\Delta_L+\zeta x_s)+J^2/[\gamma_2-i(\Delta_L\pm|\Delta_{\mathrm{sag}}|)]},\nonumber\\
a_{2,s}&=\frac{Ja_{1,s}}{\Delta_L\pm|\Delta_{\mathrm{sag}}|+i\gamma_2},~~~b_s=\frac{\zeta x_0|a_{1,s}|^2}{\omega_m-i\gamma_m},
\end{align}
where $x_s=x_0(b_s+b^\ast_s)$ is the steady-state mechanical displacement. Combining these expressions gives the balance equation of the radiation and spring forces
$$
m(\omega_m^2+\gamma_m^2)x_s=\hbar\zeta|a_{1,s}|^2.
$$
The displacement $x_s$ is determined from the optical density $|a_{1,s}|^2$ inside the COM resonator, which clearly depends on $\Delta_\mathrm{sag}$ (see also Fig.\,\ref{fig2}; e.g., both $|a_{1,s}|^2$ and $x_s$ become significantly different for $\Delta_\mathrm{sag}>0$ or $\Delta_\mathrm{sag}<0$). Also, the ratio $\eta$ of the steady-state mechanical displacement $x_s$ with and without spinning the resonator is given by
\begin{align}
\eta_{>,<}\equiv\frac{x_s(\Delta_{\mathrm{sag}}>0,\,<0)}{x_s(\Omega=0)}.\nonumber
\end{align}
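Since $a_{1,s}$ depends on $x_s$ and vice versa, the steady state can be found by a simple fixed-point iteration (the iteration scheme below is our own numerical sketch, not part of the text; the correction $\zeta x_s$ is tiny compared with $\Delta_L$ for the parameters used here, so the iteration converges rapidly):

```python
import math

HBAR = 1.054571817e-34  # reduced Planck constant (J*s)

def steady_state(delta_l, delta_sag, eps_l, j, gamma1, gamma2,
                 zeta, m, omega_m, gamma_m, n_iter=200):
    """Self-consistent steady state of Eqs. (4): iterate between the
    intracavity field a_{1,s} and the mechanical displacement x_s.
    All rates/detunings in rad/s; returns (a_{1,s}, x_s in meters)."""
    x_s = 0.0
    a1 = 0.0
    for _ in range(n_iter):
        denom = (gamma1 - 1j * (delta_l + zeta * x_s)
                 + j**2 / (gamma2 - 1j * (delta_l + delta_sag)))
        a1 = eps_l / denom
        # force-balance relation m(w_m^2 + g_m^2) x_s = hbar*zeta*|a1|^2
        x_s = HBAR * zeta * abs(a1)**2 / (m * (omega_m**2 + gamma_m**2))
    return a1, x_s
```

The ratio $\eta$ then follows by dividing $x_s$ obtained at $\pm|\Delta_\mathrm{sag}|$ by its value at $\Omega=0$.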
In close analogy to an optical laser, a coherent emission of phonons can be achieved with compound resonators through inversion of the two optical supermodes \cite{Vahala2010,Xiao2017}. This leads to a \textit{phonon laser at the breathing mode} with frequency $\omega_m$, above the threshold power $P_\mathrm{th}\sim 7\,\mu\mathrm{W}$, according to Grudinin \textit{et al.} \cite{Vahala2010}.
By using the supermode operators $a_{\pm}=(a_1\pm a_2)/\sqrt{2}$, $H_0$ and $H_{\mathrm{dr}}$ in Eq.\,(\ref{eq:Hamiltonian}) can be written as
\begin{align}
\mathcal{H}_0&=\omega_+a^\dag_+a_++\omega_-a^\dag_-a_-+\omega_mb^\dag b,\nonumber\\
\mathcal{H}_{\mathrm{dr}}&=\frac{i}{\sqrt{2}}[\varepsilon_L(a^\dag_++a^\dag_-)-\mathrm{H.c.}],
\end{align}
with the supermode frequencies $$\omega_{\pm}=-\Delta_L-\frac{1}{2}\Delta_\mathrm{sag}\pm J.$$
Under the rotating-wave approximation \cite{Vahala2010,Jing2014}, the interaction Hamiltonian can be written as
\begin{align}
\mathcal{H}_{\mathrm{int}}=&-\frac{\zeta x_0}{2}(a^{\dagger}_{+}a_{-}b+b^{\dagger}a^{\dagger}_{-}a_{+})\nonumber\\
&+\frac{\Delta_{\mathrm{sag}}}{2}(a^{\dagger}_{+}a_{-}+a^{\dagger}_{-}a_{+}).
\label{eq:int}
\end{align}
Besides the first term which describes the absorption and emission of phonons (as in a conventional COM system) \cite{Vahala2010}, $\mathcal{H}_{\mathrm{int}}$ in Eq.\,(\ref{eq:int}) includes an additional $\Omega$-dependent term which implies that the \textit{coupling} between the optical supermodes depends on the Sagnac effect. The second term in Eq.\,(\ref{eq:int}) is the reason for the striking modifications in the phonon-lasing process, which is very different from the ordinary cases without the coupling of supermodes \cite{Vahala2010}.
We note that in general, the supermode operators $a_\pm = (a_1\pm a_2)/\sqrt{2}$ are defined for coupled cavities sharing the same resonant frequency. These operators can still be used here due to the fact that the Sagnac shift in our system is much smaller than the optical detuning and the optical coupling rate.
It is possible to introduce another transformation to diagonalize the two-mode system, such as that used in a recent work on phonon lasers \cite{Yang2018}. We have confirmed that, since the Sagnac shift is $\Delta_\mathrm{sag}/\omega_m\simeq 0.1$ for $\Omega=6$\,kHz, i.e., much smaller than $\Delta_L$ and $J$, this transformation safely reduces to the supermode operators used above.
In the supermode picture, we can define the \textit{ladder} operator and \textit{population inversion} operator of the optical supermodes as \cite{Vahala2010} $$p=a^\dag_- a_+,~~~\delta n=a^\dag_+ a_+-a^\dag_-a_-,$$ respectively. The equations of motion of the system then become
\begin{align}
\dot{b}=&-(\gamma_m+i\omega_m)b+\frac{i\zeta x_0}{2}p,\nonumber\\
\dot{p}=&-2(\gamma +iJ)p+\frac{i}{2}(\Delta_{\mathrm{sag}}-\zeta x_0b)\delta n \nonumber\\
&+\frac{1}{\sqrt{2}}(\varepsilon_L^*a_{+}+\varepsilon_La^{\dagger}_{-}),
\end{align}
with $\gamma=(\gamma_1+\gamma_2)/2$. By using the standard procedures (see Appendix B for more details), we can easily obtain the \textit{mechanical gain}, i.e., $G=G_0+\mathcal{G}$, where
\begin{align}
G_0=\frac{(\zeta x_0)^2\gamma\delta n}{2(2J-\omega_m)^2+8\gamma^2},
\label{eq:gain}
\end{align}
and
\begin{align}
\mathcal{G}=\frac{|\varepsilon_L|^2(\zeta x_0)^2(\omega_m-2J)(\Delta_\mathrm{sag}+2\Delta_L)\gamma}{4\left[ \beta^2+(2\Delta_L+\Delta_\mathrm{sag})^2\gamma^2\right]\left[(2J-\omega_m)^2+4\gamma^2\right]},
\end{align}
with
\begin{align}
\beta\simeq J^2+\gamma^2-\Delta_L^2+\frac{(\zeta x_0)^2n_b}{4}-\Delta_\mathrm{sag}\Delta_L,
\end{align}
in consideration of $\Delta_\mathrm{sag}\ll\Delta_L,J$ and $\zeta x_0/\Delta_L\ll1$. The \textit{population inversion} $\delta n$ can also be derived as
\begin{align}
\delta n
\simeq&\frac{2J|\varepsilon_L|^2}{\beta_0^2+4\gamma^2\Delta_L^2}\left(\Delta_L+\Delta_\mathrm{sag}\right),
\label{eq:inversion}
\end{align}
with $\beta_0=\beta(\Delta_\mathrm{sag}=0)$.
We have confirmed that the condition $\zeta x_0/\Delta_L\ll 1$ is valid for the range of parameters used in this work. However, in the numerical simulations, we have \textit{not} used this approximation and thus the presented results are valid for the general case. Different from the conventional phonon-laser system where both resonators are stationary \cite{Vahala2010}, besides the term $G_0$, we also have a new term $\mathcal{G}$ that depends on both $\Delta_\mathrm{sag}$ and $\Delta_L$. This indicates that by tuning $\Omega$ and $\Delta_L$ together, the \textit{mechanical gain $G$ could be made very different} for $\Delta_{\mathrm{sag}}>0$ or $\Delta_{\mathrm{sag}}<0$.
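The gain formulas above can be evaluated directly; the sketch below (our own, with the phonon number $n_b$ set to zero, as is appropriate near threshold) implements Eqs.\,(5)--(8):

```python
import math

def mechanical_gain(delta_l, delta_sag, eps_l, j, gamma, zeta_x0,
                    omega_m, n_b=0.0):
    """Mechanical gain G = G0 + curly-G of Eqs. (5)-(8); all rates and
    detunings in rad/s.  beta0 is beta evaluated at Delta_sag = 0."""
    beta = (j**2 + gamma**2 - delta_l**2
            + zeta_x0**2 * n_b / 4 - delta_sag * delta_l)
    beta0 = j**2 + gamma**2 - delta_l**2 + zeta_x0**2 * n_b / 4
    # population inversion, Eq. (8)
    dn = (2 * j * abs(eps_l)**2 * (delta_l + delta_sag)
          / (beta0**2 + 4 * gamma**2 * delta_l**2))
    g0 = zeta_x0**2 * gamma * dn / (2 * (2 * j - omega_m)**2 + 8 * gamma**2)
    gg = (abs(eps_l)**2 * zeta_x0**2 * (omega_m - 2 * j)
          * (delta_sag + 2 * delta_l) * gamma
          / (4 * (beta**2 + (2 * delta_l + delta_sag)**2 * gamma**2)
             * ((2 * j - omega_m)**2 + 4 * gamma**2)))
    return g0 + gg
```

Note that for $J/\omega_m=0.5$ the prefactor $(\omega_m-2J)$ makes $\mathcal{G}$ vanish, so in that special case the $\Delta_\mathrm{sag}$ dependence of $G$ enters entirely through the population inversion $\delta n$.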
We note that the non-negative mechanical gain $G$ decreases the effective damping rate of the mechanical mode $\gamma_\mathrm{eff}=\gamma_m-G$. Initially, this leads to heating of the mechanical oscillator, and parametric instabilities can occur for $\gamma_\mathrm{eff}<0$. In this situation, an initial fluctuation of the mechanical displacement can grow exponentially until the oscillation amplitude is saturated due to the nonlinear effects, which results in a steady-state regime with a fixed oscillation amplitude (i.e., the phonon-lasing regime) \cite{Vahala2010,Aspelmeyer2014}. In practice, the in-phase and quadrature components of the mechanical motion mode, as well as its power spectral density, can be experimentally measured, from which a transition from a thermal state below threshold to a coherent state above threshold can be demonstrated, as the linear gain is turned on and allowed to increase until the phonon laser reaches the steady state \cite{Painter2015}.
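The growth-then-saturation dynamics described above can be illustrated with a toy model (entirely our own illustration, not from the text; the cubic coefficient $\alpha$ is a hypothetical stand-in for the saturating nonlinearity): the amplitude obeys $\dot{b}=(G-\gamma_m)b-\alpha b^3$, growing exponentially above threshold and clamping at $\sqrt{(G-\gamma_m)/\alpha}$.

```python
def amplitude_trajectory(g, gamma_m, alpha=1e-4, b0=1e-3,
                         dt=1e-3, steps=200_000):
    """Toy saturation model (illustration only, not from the text):
        db/dt = (g - gamma_m) * b - alpha * b**3.
    For g > gamma_m a small fluctuation b0 grows exponentially and
    saturates at sqrt((g - gamma_m)/alpha); for g < gamma_m it decays.
    Integrated with a simple Euler scheme in dimensionless units."""
    b = b0
    for _ in range(steps):
        b += dt * ((g - gamma_m) * b - alpha * b**3)
    return b
```

With $g=2$, $\gamma_m=1$, and $\alpha=10^{-4}$ the trajectory saturates near $b=100$, mimicking the fixed-amplitude phonon-lasing regime.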
\section{Numerical results and discussions}
\begin{figure}[ht]
\centering
\includegraphics[width=3in]{Fig2.eps}
\caption{(a) The steady-state photon number $|a_{1,s}|^2$ as a function of the optical detuning $\Delta_L$. The dependence of $|a_{1,s}|^2$ on the spinning speed $\Omega$ is clearly seen. (b) The mechanical displacement amplification factor $\eta$ versus $\Delta_L$. Parameters are chosen as $\Omega=6$\,kHz, $J/\omega_m=0.5$, and $P_\mathrm{in}=10\,\mu$W.}
\label{fig2}
\end{figure}
Figure\,\ref{fig2}(a) shows the steady-state populations of intracavity photons as a function of the optical detuning. As in relevant experiments \cite{Vahala2010,Carmon,Righini2011}, the parameter values are taken as: $n=1.48$, $r_1=34.5\,\mu$m, $Q_1=9.7\times10^7$, $r_2=4.75\,$mm, $Q_2=3\times10^7$, $m=50$\,ng, $\gamma_m=0.24$\,MHz, $\omega_m=2\pi\times23.4$\,MHz, $\Omega=6\,$kHz, and thus $\Delta_\mathrm{sag}/\omega_m\sim 0.1$. It is seen that spinning the resonator \textit{increases} the intracavity photon number $|a_{1,s}|^2$ when $\Delta_{\mathrm{sag}}>0$ or \textit{decreases} it when $\Delta_{\mathrm{sag}}<0$, compared to the stationary resonator case ($\Omega=0$). This change in the intracavity photon number then modifies the radiation pressure. Thus, we \textit{can tune (increase or decrease) the strength of optomechanical interactions effectively by tuning the speed and direction of the rotation of the resonator}. Intuitively, this direction-dependent feature for the intracavity photon number (and also the resulting radiation pressure) can be well understood by the motion-induced different refractive indices for the counter-propagating modes, as demonstrated very recently in a spinning resonator \cite{Carmon} (see also similar phenomena in a moving optical lattice \cite{Ramezani2018} or in an acoustic device with a circulating fluid \cite{Fleury2014}).
\begin{figure}[h]
\centering
\includegraphics[width=3in]{Fig3.eps}
\caption{The scaled mechanical gain $G/\gamma_m$ as a function of the optical detuning $\Delta_L$. The phonon lasing can be generated (red curve) or prohibited (blue curve) for different rotation directions, indicating that the acoustic nonreciprocity can be achieved, as shown in (b). The thick points in the yellow vertical band correspond to two different driving directions at $\Delta_L/\omega_m=0.45$. The other parameters are the same as those in Fig.\,\ref{fig2}.}
\label{fig3}
\end{figure}
Figure\,\ref{fig2}(b) shows the mechanical displacement amplification factor $\eta$. Note that $x_s$ is enhanced in the red detuning regime for $\Delta_{\mathrm{sag}}>0$, or the blue detuning regime for $\Delta_{\mathrm{sag}}<0$, which is due to the enhanced COM interaction. The amplified displacement indicates an enhancement of the phonon generation.
\begin{figure}[ht]
\centering
\includegraphics[width=3in]{Fig4.eps}
\caption{(a) The stimulated emitted phonon number $N_b$ as a function of the pump power $P_{\mathrm{in}}$. The thick points correspond to the threshold power $P_\mathrm{th}$, which is determined by the threshold condition $G=\gamma_m$. (b) Dependence of the isolation parameter $\mathcal{R}$ on the optical detuning $\Delta_L$ and the rotation speed $\Omega$. The isolation parameter $\mathcal{R}$ can be maximized as $\mathcal{R}\sim10\,$dB for $\Delta_L/\omega_m=0.45$ and $\Omega=6\,$kHz [red point in Fig.\,\ref{fig3}(a)]. The parameters are $J/\omega_m=0.5$ in (a,b), and $P_\mathrm{in}=10\,\mu$W in (b).}
\label{fig4}
\end{figure}
We show in Fig.\,\ref{fig3} the mechanical gain $G$ as a function of the optical detuning $\Delta_L$, for different values of $\Omega$. In the stationary resonator case $(\Omega=0)$, the peak position of $G$ is always the same regardless of the direction of the driving light: we have $G>\gamma_m$ around $\Delta_L/\omega_m\sim 0.5$, corresponding to a conventional phonon laser in the blue-detuning regime \cite{Vahala2010}. In contrast, spinning the resonator leads to a $red$ or $blue$ shift also for the mechanical gain $G$, with $\Delta_{\mathrm{sag}}>0$ or $\Delta_{\mathrm{sag}}<0$, respectively. Due to these shifts, by tuning $\Delta_L$ (e.g., $\Delta_L/\omega_m\sim 0.45$ in the specific example of Fig.\,\ref{fig3}), the mechanical gain can be enhanced for $\Delta_{\mathrm{sag}}>0$, while significantly suppressed (i.e., $G<\gamma_m$) for $\Delta_{\mathrm{sag}}<0$.
The underlying physics can be explained as follows. Spinning the resonator results in opposite Sagnac shifts of the counter-propagating WGMs, leading to nonreciprocal light transmission \cite{Carmon}. For the $\Delta_{\mathrm{sag}}>0$ case, the driving light is resonantly coupled into the resonators, inducing an \textit{enhanced radiation pressure}, which corresponds to an enhanced population inversion [see Eq.\,(\ref{eq:inversion})]. As a result, the mechanical gain is enhanced.
For the $\Delta_{\mathrm{sag}}<0$ case, on the other hand, the driving light is largely transmitted past the resonators, inducing a weakened radiation pressure, so that the mechanical gain is nearly zero, i.e., no phonon lasing occurs.
Thus, our system provides a new route to control the behavior of phonon lasing.
Once the mechanical gain is obtained, the \textit{stimulated emitted phonon number} $N_b$ can be calculated, i.e.,
\begin{equation}
N_b=\mathrm{exp}[2(G-\gamma_m)/\gamma_m],
\end{equation}
which characterizes the performance of the phonon laser. Figure\,\ref{fig4}(a) shows $N_b$ with $\Delta_L/\omega_m=0.45$ and $\Omega=6$\,kHz [corresponding to the maximal value of the mechanical gain in Fig.\,\ref{fig3}(a)]. From the threshold condition for the phonon lasing $N_b=1$, i.e., $G=\gamma_m$, we can easily derive the threshold pump power \cite{Vahala2010}. For $J/\omega_m=0.5$, we substitute Eqs.\,(\ref{eq:gain}) and (\ref{eq:inversion}) into the threshold condition, and then obtain
\begin{align}
P_\mathrm{th}\approx\frac{2\hbar\gamma\gamma_m\omega_c [M+\gamma^2(2\Delta_L+\Delta_\mathrm{sag})^2]}{\gamma_1 J(\zeta x_0)^2(\Delta_L+\Delta_\mathrm{sag})},
\end{align}
with $$M=(J^2+\gamma^2-\Delta^2_L-\Delta_\mathrm{sag}\Delta_L)^2,$$
in which we have used $|b_s|^2\ll 1$ at the threshold. We can see that the Sagnac effect has a significant impact on the threshold.
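Equation (10) can be evaluated numerically; the sketch below (our own; parameter names are assumptions) returns the threshold pump power in watts:

```python
import math

HBAR = 1.054571817e-34  # reduced Planck constant (J*s)

def threshold_power(delta_l, delta_sag, j, gamma, gamma1, gamma_m,
                    zeta_x0, omega_c):
    """Threshold pump power P_th of Eq. (10), derived for J/omega_m = 0.5;
    all rates and detunings in rad/s, output in watts."""
    m_term = (j**2 + gamma**2 - delta_l**2 - delta_sag * delta_l)**2
    return (2 * HBAR * gamma * gamma_m * omega_c
            * (m_term + gamma**2 * (2 * delta_l + delta_sag)**2)
            / (gamma1 * j * zeta_x0**2 * (delta_l + delta_sag)))
```

With the parameters of Fig.\,\ref{fig4}(a) (at $\Delta_L/\omega_m=0.45$), a positive Sagnac shift lowers the threshold relative to the stationary case, while a negative shift raises it.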
For $\Delta_{\mathrm{sag}}>0$, the threshold power can be reduced to $6.03\,\mu$W, which is attributed to the enhancement of the mechanical gain. It is obvious that more phonons can be generated with larger pump powers. For $\Delta_{\mathrm{sag}}<0$, the mechanical gain is blocked at $\Delta_L/\omega_m=0.45$, so that a larger input power is needed to achieve the phonon laser.
We note that the threshold power is about $26.6\,\mu$W for a stationary system ($\Omega=0$). At the resonance point ($\Delta_L/\omega_m=0.5$), however, the threshold for phonon lasing is $6.53\,\mu$W, which approaches the threshold of about $7\,\mu$W reported in experiments \cite{Vahala2010}.
An \textit{optimal spinning speed} should be chosen to obtain a considerable phonon number. In order to clearly see the effect of the spinning speed on the stimulated emitted phonon number, we introduce the isolation parameter \cite{Hua2016,Alu2017}
\begin{align}
\mathcal{R}=10\log_{10}\frac{N_b(\Omega>0)}{N_b(\Omega<0)}.
\end{align}
\textit{Acoustic nonreciprocity} can be achieved for $\mathcal{R}\neq 0$ or $$N_b(\Delta_{\mathrm{sag}}>0)\neq N_b(\Delta_{\mathrm{sag}}<0),$$ i.e., the emitted phonon number differs when the spinning COM system is driven from different directions.
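The two figures of merit defined above reduce to two one-line functions (a sketch of our own; the function names are assumptions):

```python
import math

def stimulated_phonons(g, gamma_m):
    """Stimulated emitted phonon number N_b = exp[2(G - gamma_m)/gamma_m]."""
    return math.exp(2 * (g - gamma_m) / gamma_m)

def isolation_db(nb_forward, nb_backward):
    """Isolation parameter R = 10 log10[N_b(Omega>0)/N_b(Omega<0)], in dB."""
    return 10 * math.log10(nb_forward / nb_backward)
```

At threshold ($G=\gamma_m$) one has $N_b=1$, and $\mathcal{R}$ vanishes whenever the two driving directions yield equal phonon numbers.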
Figure\,\ref{fig4}(b) shows $\mathcal{R}$ versus the optical detuning and spinning speed. Nonreciprocity emerges for the two detuning regions around $\Delta_L/\omega_m\sim 0.45$ and $\Delta_L/\omega_m\sim 0.55$. For $\Delta_L/\omega_m=0.5$, we have $\mathcal{R}\sim 0$ implying a reciprocal system.
The nonreciprocity becomes pronounced for the spinning resonator, as a direct consequence of the difference between $\delta n(\Delta_{\mathrm{sag}}>0)$ and $\delta n(\Delta_{\mathrm{sag}}<0)$. Phonon lasing is favored in the $\Delta_{\mathrm{sag}}>0$ regime and is always suppressed for $\Delta_{\mathrm{sag}}<0$. This makes it convenient to switch the phonon lasing on or off simply by changing the driving direction.
We note that optical nonreciprocity has been demonstrated in a pure optical system by spinning the resonator \cite{Carmon}.
In our spinning COM device, nonreciprocal phonon lasing can be realized due to the optical Sagnac effect, which can change the radiation pressure in the COM devices.
\section{Summary}
In summary, we have studied theoretically the role of rotation in engineering a nonreciprocal phonon laser. We show that in our system, consisting of a COM resonator coupled to a spinning optical resonator \cite{Vahala2010,Carmon}, the optical Sagnac effect strongly modifies not only the intracavity optical intensities but also the mechanical gain and the phonon-lasing threshold. As a result, the threshold pump power can be reduced or raised, depending on whether the drive propagates in the same or the opposite direction as the spinning resonator. Our results, i.e., controlling the behavior of a phonon laser by using a spinning resonator, shed new light on engineering COM or other acoustic devices, such as COM transducers or motion sensors. In future work, we will study, e.g., quantum correlations of emitted phonons, for which quantum noise terms should be included, as well as a nonreciprocal phonon laser operating at an exceptional point \cite{Yang2018}.
\bigskip
\section*{ACKNOWLEDGMENTS}
We thank S. K. \"Ozdemir at Pennsylvania State University for helpful discussions.
Y.J. and H.J. are supported by NSFC (11474087, 11774086). S.M. and T.C. are supported by the Israeli Centers of Research Excellence Circle of Light and the Israeli Science Foundation.
F.N. is supported by the MURI Center for Dynamic Magneto-Optics via the Air Force Office of Scientific Research (FA9550-14-1-0040), Army Research Office (Grant No. 73315PH), Asian Office of Aerospace Research and Development (Grant No. FA2386-18-1-4045), Japan Science and Technology Agency (the ImPACT program and CREST Grant No. JPMJCR1676), Japan Society for the Promotion of Science (JSPS-RFBR Grant No. 17-52-50023), RIKEN-AIST Challenge Research Fund, and the John Templeton Foundation.
\section*{APPENDIX A: DERIVATION OF THE HAMILTONIAN}
We consider two coupled WGM resonators, as shown in Fig.\,\ref{fig1}. One resonator supports a mechanical mode with frequency $\omega_m$ and effective mass $m$, which is pumped by a driving field at frequency $\omega_L$. The other resonator is purely optical, which can spin. The optical modes in the COM and optical resonators are denoted as $a_1$ and $a_2$, respectively.
The two cavities share the same resonance frequency (denoted as $\omega_c$) for the stationary case.
The resonance frequency of the spinning optical resonator can be shifted, as a result of the Sagnac effect. Therefore, the free Hamiltonian describing the optical and mechanical modes can be written as ($\hbar=1$)
\begin{align}
H^{\prime}_0=\omega_c a_1^\dag a_1+(\omega_c-\Delta_\mathrm{sag})a_2^\dag a_2+\omega_mb^\dag b,
\end{align}
where $b$ is the mechanical annihilation operator, and $\Delta_\mathrm{sag}$ is the frequency shift induced by the Sagnac effect.
For the resonator spinning along the CCW direction, $\Delta_\mathrm{sag}>0$ or $\Delta_\mathrm{sag}<0$ corresponds to the driving field from the left or the right.
In this system, we consider the coupling between the optical and mechanical mode in the COM resonator, and the evanescent coupling between the two resonators. The interaction Hamiltonian can be written as
\begin{align}
H^{\prime}_{\mathrm{int}}=-\zeta x_0a_1^\dag a_1(b^\dag+b)+J(a^\dag_1a_2+a^\dag_2a_1),
\end{align}
where $\zeta=\omega_c/r_1$ denotes the COM coupling strength, $J$ is the optical coupling strength, and $x_0=\sqrt{\hbar/2m\omega_m}$.
The driving field is fed into the COM resonator through the waveguide. Then the driving Hamiltonian reads
\begin{align}
H^{\prime}_{\mathrm{dr}}=i(\varepsilon_L e^{-i\omega_L t} a_1^\dag-\varepsilon^\ast_Le^{i\omega_L t} a_1),
\end{align}
where $\varepsilon_L=\sqrt{2\gamma_1P_\mathrm{in}/\hbar\omega_L}$ is the driving amplitude with the input power $P_\mathrm{in}$ and the optical loss rate $\gamma_{1}$.
The total Hamiltonian of the system can be written as $$H^{\prime}=H^{\prime}_0+H^{\prime}_{\mathrm{int}}+H^{\prime}_{\mathrm{dr}}.$$
By using the unitary transformation $$U=e^{-i\omega_L t(a_{1}^{\dagger} a_{1}+a_{2}^{\dagger}a_{2})},$$
the Hamiltonian $H^{\prime}$ can be transformed into the rotating frame, i.e.,
$$H=U^{\dagger}H^{\prime}U-iU^{\dagger}\frac{\partial U}{\partial t}.$$
Then we have
\begin{align}
H&=H_0+H_{\mathrm{int}}+H_{\mathrm{dr}},\nonumber\\
H_0&=-\Delta_L a_1^\dag a_1-(\Delta_L+\Delta_\mathrm{sag})a_2^\dag a_2+\omega_mb^\dag b,\nonumber\\
H_{\mathrm{int}}&=-\zeta x_0a_1^\dag a_1(b^\dag+b)+J(a^\dag_1a_2+a^\dag_2a_1),\nonumber\\
H_{\mathrm{dr}}&=i(\varepsilon_La_1^\dag-\varepsilon_L^*a_1),
\label{Aeq:Ha}
\end{align}
where $\Delta_L=\omega_L-\omega_c$ is the detuning of the driving field. This Hamiltonian sets the stage for our calculations of the mechanical gain and the threshold power.
We then introduce the supermode operators $a_{\pm}=(a_1\pm a_2)/\sqrt{2}$, which satisfy the commutation relations
\begin{align}
[a_{+}, a_{+}^\dag]=[a_{-}, a_{-}^\dag]=1,~
[a_{+}, a_{-}^\dag]=0.\nonumber
\end{align}
$H_0$ and $H_{\mathrm{dr}}$ in Eq.\,(\ref{Aeq:Ha}) can be written as
\begin{align}
\mathcal{H}_0&=\omega_+a^\dag_+a_++\omega_-a^\dag_-a_-+\omega_mb^\dag b,\nonumber\\
\mathcal{H}_{\mathrm{dr}}&=\frac{i}{\sqrt{2}}[\varepsilon_L(a^\dag_++a^\dag_-)-\mathrm{H.c.}],
\end{align}
with the frequencies $\omega_{\pm}=-\Delta_L-\frac{1}{2}\Delta_\mathrm{sag}\pm J,$
and $H_{\mathrm{int}}$ can be transformed to
\begin{align}
\mathcal{H}_{\mathrm{int}}
=&\mathcal{H}_{\mathrm{int}}^{0}+\mathcal{H}_{\mathrm{int}}^1\nonumber\\
=&-\frac{\zeta
x_0}{2}[(a_{+}^{\dagger}a_++a_{-}^{\dagger}a_-)+(a_+^{\dagger}a_-+a_-^{\dagger}a_+)](b^{\dagger}+b)\nonumber\\
&+\frac{\Delta_\mathrm{sag}}{2}(a^\dag_+a_-+a^\dag_-a_+).
\end{align}
In the rotating frame with respect to $\mathcal{H}_0$, we have
\begin{align}
\mathcal{H}_{\mathrm{int}}^0=&-\frac{\zeta
x_0}{2}\Big[a_+^{\dagger}a_-b\mathrm{e}^{i(2J-\omega_m)t}+a_+a_-^{\dagger}b^{\dagger}\mathrm{e}^{-i(2J-\omega_m)t}\nonumber\\
&+a_+^{\dagger}a_-b^{\dagger}\mathrm{e}^{i(2J+\omega_m)t}+a_+a_-^{\dagger}b\mathrm{e}^{-i(2J+\omega_m)t}\nonumber\\
&+(a_+^{\dagger}a_++a_-^{\dagger}a_-)(b^{\dagger}\mathrm{e}^{i\omega_mt}+b\mathrm{e}^{-i\omega_mt})\Big].\nonumber
\end{align}
Under the rotating-wave approximation condition $$2J+\omega_m,\,\omega_m\gg|2J-\omega_m|,$$ the terms $a^\dag_{+}a_{-}b^\dag\mathrm{e}^{i(2J+\omega_m)t}$, $a_{+}a^\dag_{-}b\mathrm{e}^{-i(2J+\omega_m)t}$ and also $(a^\dag_{+}a_{+}+a^\dag_{-}a_{-})(b^\dag\mathrm{e}^{i\omega_mt}+b\mathrm{e}^{-i\omega_mt})$ can be omitted, in comparison with the near-resonance terms $a_+^{\dagger}a_-b\mathrm{e}^{i(2J-\omega_m)t}$ and $a_+a_-^{\dagger}b^{\dagger}\mathrm{e}^{-i(2J-\omega_m)t}$ \cite{Vahala2010}. Therefore, we have a simplified interaction Hamiltonian
\begin{align}
\mathcal{H}_{\mathrm{int}}=&-\frac{\zeta x_0}{2}(a^{\dagger}_{+}a_{-}b+b^{\dagger}a^{\dagger}_{-}a_{+})\nonumber\\
&+ \frac{\Delta_{\mathrm{sag}}}{2}(a^{\dagger}_{+}a_{-}+a^{\dagger}_{-}a_{+}).
\label{Aeq:int}
\end{align}
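The supermode algebra above can be checked quickly with commuting c-number amplitudes (a sketch of ours, not part of the derivation): substituting $a_{1,2}=(a_+\pm a_-)/\sqrt{2}$ into the quadratic form of $H_0$ and the $J$-coupling must reproduce $\omega_\pm$ and the $\Delta_\mathrm{sag}/2$ cross term.

```python
# Numerical check of the supermode transformation: with
# a1 = (a+ + a-)/sqrt(2), a2 = (a+ - a-)/sqrt(2), the quadratic form
# -Delta_L |a1|^2 - (Delta_L + Delta_sag)|a2|^2 + J(a1* a2 + a2* a1)
# equals w_+|a+|^2 + w_-|a-|^2 + (Delta_sag/2)(a+* a- + a-* a+),
# with w_pm = -Delta_L - Delta_sag/2 +/- J.  Operators are replaced by
# commuting complex amplitudes, which suffices for this identity.
import math

dL, dsag, J = 0.7, 0.13, 1.9        # arbitrary test values
ap, am = 1.2 - 0.4j, -0.3 + 2.1j    # arbitrary supermode amplitudes

a1 = (ap + am) / math.sqrt(2)
a2 = (ap - am) / math.sqrt(2)

lhs = (-dL * abs(a1)**2 - (dL + dsag) * abs(a2)**2
       + J * (a1.conjugate() * a2 + a2.conjugate() * a1))

wp = -dL - dsag / 2 + J
wmn = -dL - dsag / 2 - J
rhs = (wp * abs(ap)**2 + wmn * abs(am)**2
       + (dsag / 2) * (ap.conjugate() * am + am.conjugate() * ap))

print(abs(lhs - rhs))  # ~1e-15
```

The same substitution, applied to $-\zeta x_0 a_1^\dag a_1(b^\dag+b)$, produces the $\mathcal{H}_{\mathrm{int}}^0$ terms above with the $\zeta x_0/2$ prefactor.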
\section*{APPENDIX B: DERIVATION OF THE MECHANICAL GAIN}
In the supermode picture, the equations of motion of the system can be written as
\begin{align}
\dot{a}_+&=-(i\omega_++\gamma)a_++\frac{i}{2}(\zeta x_0b-\Delta_\mathrm{sag})a_-+\frac{\varepsilon_L}{\sqrt{2}},\nonumber\\
\dot{a}_-&=-(i\omega_-+\gamma)a_-+\frac{i}{2}(\zeta x_0b^\dag-\Delta_\mathrm{sag})a_++\frac{\varepsilon_L}{\sqrt{2}},\nonumber\\
\dot{b}&=-(i\omega_m+\gamma_m)b+\frac{i\zeta x_0}{2}a_+a^\dag_-.
\end{align}
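As an illustration (a toy sketch of ours, with assumed parameter values rather than the paper's), these equations can be integrated with an exponential-Euler step; with the drive off, the phonon amplitude simply decays at $\gamma_m$.

```python
# Toy integration of the equations of motion above with an exponential-Euler
# step: the linear decay/rotation of each mode is treated exactly and the
# weak bilinear coupling to first order in dt.  All parameter values are
# illustrative assumptions; with the drive off (epsL = 0) the phonon
# amplitude decays at gamma_m as expected.
import math, cmath

wm    = 2 * math.pi * 10e6    # mechanical frequency (rad/s)
J     = wm / 2                # optical coupling, tuned so that 2J = wm
gam   = 2 * math.pi * 1e6     # optical decay rate
gam_m = 2 * math.pi * 1e3     # mechanical decay rate
dL    = 0.0                   # laser detuning (off in this toy run)
dsag  = 0.0                   # Sagnac shift (stationary resonator here)
g0    = 2 * math.pi * 100.0   # zeta * x0, kept weak
epsL  = 0.0                   # drive amplitude (off)

wplus  = -dL - dsag / 2 + J
wminus = -dL - dsag / 2 - J

ap, am, b = 1.0 + 0j, 1.0 + 0j, 1.0 + 0j
dt, steps = 1e-9, 20000
fp = cmath.exp(-(1j * wplus + gam) * dt)
fm = cmath.exp(-(1j * wminus + gam) * dt)
fb = cmath.exp(-(1j * wm + gam_m) * dt)
for _ in range(steps):
    ap, am, b = (
        fp * ap + dt * (0.5j * (g0 * b - dsag) * am + epsL / math.sqrt(2)),
        fm * am + dt * (0.5j * (g0 * b.conjugate() - dsag) * ap + epsL / math.sqrt(2)),
        fb * b + dt * 0.5j * g0 * ap * am.conjugate(),
    )

print(abs(b))   # ~ exp(-gam_m * steps * dt) ≈ 0.88
```

The exponential factors keep the fast optical oscillations stable, which a plain Euler step at this time step would not.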
We can define the ladder operator and population inversion operator as
$$p=a_{-}^\dag a_{+},~~~\delta n=a_{+}^\dag a_{+}-a_{-}^\dag a_{-},$$
respectively. The equations of the system then read
\begin{align}
\dot{b}=&-(\gamma_m+i\omega_m)b+\frac{i\zeta x_0}{2}p,\nonumber\\
\dot{p}=&-2(\gamma +iJ)p+\frac{i}{2}(\Delta_{\mathrm{sag}}-\zeta x_0b)\delta n \nonumber\\
&+\frac{1}{\sqrt{2}}(\varepsilon_L^*a_{+}+\varepsilon_La^{\dagger}_{-}).
\label{dynamics}
\end{align}
By setting the time derivatives of $a_{\pm}$ and $p$ to zero (using $\gamma\gg\gamma_m$), we obtain the steady-state values of the system, i.e.,
\begin{align}
p&=\frac{\sqrt{2}(\varepsilon^*_La_++\varepsilon_La^\dagger_-)-i(\zeta x_0b-\Delta_\mathrm{sag})\delta n}{2i(2J-\omega_m)+4\gamma},\nonumber\\
a_+&=\frac{\varepsilon_L(2i\omega_-+2\gamma+i\zeta x_0b-i\Delta_\mathrm{sag})}{2\sqrt{2}[\beta-i(2\Delta_L+\Delta_\mathrm{sag})\gamma]},\nonumber\\
a_-&=\frac{\varepsilon_L(2i\omega_++2\gamma+i\zeta x_0b^\dag-i\Delta_\mathrm{sag})}{2\sqrt{2}[\beta-i(2\Delta_L+\Delta_\mathrm{sag})\gamma]},
\label{steaty}
\end{align}
with
\begin{align}
\beta=&\beta_0-\Delta_\mathrm{sag}\left[\Delta_L+\frac{\zeta x_0}{2}\mathrm{Re}(b)\right],\nonumber\\
\beta_0=&J^2+\gamma^2-\Delta_L^2+\frac{(\zeta x_0)^2n_b}{4},\nonumber
\end{align}
and the phonon number $n_b=b^\dag b$.
Substituting Eq.\,(\ref{steaty}) into the dynamical equation of $b$ in Eq.\,(\ref{dynamics}) results in
\begin{align}
\dot{b}&=(-i\omega_m-i\omega'+G-\gamma_m)b+D,
\end{align}
where
\begin{align}
\omega'=&\frac{(\zeta x_0)^2(2J-\omega_m)\delta n}{4(2J-\omega_m)^2+16\gamma^2}\nonumber\\
&+\frac{(\zeta x_0)^2|\varepsilon_L|^2\gamma^2(2\Delta_L+\Delta_\mathrm{sag})}{[2(2J-\omega_m)^2+8\gamma^2][\beta^2+(2\Delta_L+\Delta_\mathrm{sag})^2\gamma^2]},\nonumber\\
D=&\frac{\zeta x_0\Delta_\mathrm{sag}\delta n}{4i(2J-\omega_m)+8\gamma}\nonumber\\
&+\frac{i\zeta x_0\beta(\gamma-iJ)|\varepsilon_L|^2}{[2i(2J-\omega_m)+4\gamma][\beta^2+(2\Delta_L+\Delta_\mathrm{sag})^2\gamma^2]}\nonumber\\
&+\frac{i\zeta x_0\gamma|\varepsilon_L|^2(2\Delta_L+\Delta_\mathrm{sag})(\Delta_L+\Delta_\mathrm{sag})}{[2i(2J-\omega_m)+4\gamma][\beta^2+(2\Delta_L+\Delta_\mathrm{sag})^2\gamma^2]},\nonumber
\end{align}
and the mechanical gain is $G=G_0+\mathcal{G},$ with
\begin{align}
G_0&=\frac{(\zeta x_0)^2\gamma\delta n}{2(2J-\omega_m)^2+8\gamma^2},\\
\mathcal{G}&=\frac{|\varepsilon_L|^2(\zeta x_0)^2(\omega_m-2J)(\Delta_\mathrm{sag}+2\Delta_L)\gamma}{4\left[ \beta^2+(2\Delta_L+\Delta_\mathrm{sag})^2\gamma^2\right]\left[(2J-\omega_m)^2+4\gamma^2\right]},\nonumber
\end{align}
where $\delta n$ can be expressed as
\begin{align}
\delta n
=&\frac{|\varepsilon_L|^2\left[2J(\Delta_L+\Delta_{\mathrm{sag}})-\gamma\zeta x_0\mathrm{Im}(b)-J\zeta x_0\mathrm{Re}(b)\right]}{\beta^2+\gamma^2(2\Delta_L+\Delta_{\mathrm{sag}})^2}\nonumber.
\end{align}
Using $\Delta_\mathrm{sag}\ll\Delta_L,J$ and $\zeta x_0/\Delta_L\ll1$, we have
\begin{align}
\delta n\simeq&\frac{|\varepsilon_L|^2\left[2J(\Delta_L+\Delta_{\mathrm{sag}})-\gamma\zeta x_0\mathrm{Im}(b)-J\zeta x_0\mathrm{Re}(b)\right]}{\beta_0^2+4\gamma^2\Delta_L(\Delta_L-\Delta_\mathrm{sag})+\beta_0\Delta_{\mathrm{sag}}[\Delta_L+\zeta x_0\mathrm{Re}(b)]}\nonumber\\
\simeq&|\varepsilon_L|^2\cdot\frac{2J\Delta_L-\gamma\zeta x_0\mathrm{Im}(b)-J\zeta x_0\mathrm{Re}(b)-2J\Delta_{\mathrm{sag}}}{\beta_0^2+4\gamma^2\Delta_L^2}\nonumber\\
&\cdot\left[1-\frac{\beta_0\Delta_{\mathrm{sag}}(2\Delta_L+\zeta x_0\mathrm{Re}(b))-8\gamma^2\Delta_L\Delta_{\mathrm{sag}}}{\beta_0^2+4\gamma^2\Delta_L^2}\right]\nonumber\\
\simeq&\frac{|\varepsilon_L|^2[2J\Delta_L-J\zeta x_0\mathrm{Re}(b)-\gamma\zeta x_0\mathrm{Im}(b)]}{\beta_0^2+4\gamma^2\Delta_L^2}\nonumber\\
&\cdot\left(1+\frac{2\Delta_{\mathrm{sag}}\beta_0 \Delta_L}{\beta_0^2+4\gamma^2\Delta_L^2}\right)
+\frac{2\Delta_\mathrm{sag}J|\varepsilon_L|^2}{\beta_0^2+4\gamma^2\Delta_L^2}\nonumber\\
\simeq&\frac{2J|\varepsilon_L|^2}{\beta_0^2+4\gamma^2\Delta_L^2}(\Delta_L+\Delta_{\mathrm{sag}}), \nonumber
\end{align}
in which we have used $\beta\simeq \beta_0-\Delta_\mathrm{sag}\Delta_L$.
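Since both $G_0$ and $\mathcal{G}$ are linear in $|\varepsilon_L|^2$ and hence in the pump power $P_\mathrm{in}$, the threshold condition $G=\gamma_m$ fixes the threshold power from a single evaluation. The sketch below (all parameter values are our assumptions, chosen only for illustration) evaluates $G$ and the resulting threshold.

```python
# Illustrative evaluation of the mechanical gain G = G0 + curly-G and the
# resulting threshold pump power (G = gamma_m).  The n_b contribution to
# beta0 is dropped (n_b ~ 0 near threshold).  Parameter values are
# assumptions for this sketch, not the paper's.
import math

hbar    = 1.054571817e-34
wL      = 2 * math.pi * 193.4e12   # optical drive frequency (~1550 nm)
wm      = 2 * math.pi * 10e6       # mechanical frequency
J       = wm / 2                   # 2J = wm (supermode splitting matched)
gam     = 2 * math.pi * 1e6        # optical loss rate (gamma_1 = gamma)
gamma_m = 2 * math.pi * 5e3        # mechanical loss rate
dL      = J                        # drive resonant with the a_+ supermode
dsag    = 0.0                      # stationary resonator in this example
g0      = 2 * math.pi * 1e3        # zeta * x0

def gain(P_in):
    """Mechanical gain G = G0 + curly-G for pump power P_in (watts)."""
    eps2  = 2 * gam * P_in / (hbar * wL)          # |eps_L|^2
    beta0 = J**2 + gam**2 - dL**2                 # (zeta x0)^2 n_b / 4 dropped
    beta  = beta0 - dsag * dL
    dn = 2 * J * eps2 * (dL + dsag) / (beta0**2 + 4 * gam**2 * dL**2)
    G0 = g0**2 * gam * dn / (2 * (2 * J - wm)**2 + 8 * gam**2)
    Gc = (eps2 * g0**2 * (wm - 2 * J) * (dsag + 2 * dL) * gam
          / (4 * (beta**2 + (2 * dL + dsag)**2 * gam**2)
               * ((2 * J - wm)**2 + 4 * gam**2)))
    return G0 + Gc

P_th = gamma_m / gain(1.0)   # G is linear in P_in, so one evaluation suffices
print(P_th)                  # threshold pump power in watts
```

With a spinning resonator ($\Delta_\mathrm{sag}\neq0$), the same function gives direction-dependent gain for the two drive directions.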
\section*{APPENDIX C: EXPERIMENTAL FEASIBILITY OF THE SPINNING RESONATOR}
The resonator can be mounted on a turbine which spins it, as in a recent experiment \cite{Carmon}. There, a resonator of radius $r=4.75\,$mm was spun with a stable rotation axis at frequencies up to 3\,kHz. In our calculations, the rotation speed is chosen according to this experiment \cite{Carmon}.
For example, the Sagnac shift is $\Delta_\mathrm{sag}=14.6\,$MHz for $\Omega=6\,$kHz, leading to $\Delta_\mathrm{sag}/\omega_m\sim 0.1$.
By positioning the resonator near a single-mode fiber, light can be coupled into or out of the resonator evanescently through the tapered region. In such a device, aerodynamic processes lead to a stable resonator-fiber coupling, which can be understood as follows. A fast-spinning resonator drags air into the region between the cavity and the taper, forming a boundary layer of air. The air pressure exerted on the surface of the taper facing the resonator causes the taper to fly above the resonator at a height of several nanometers. If a perturbation pushes the taper away from this stable equilibrium height, it floats back to its original position \cite{Carmon}. This self-adjustment of the taper-resonator separation enables critical coupling of light into the cavity, whereby the counter-circulating light fields experience optical drags identical in size but opposite in sign. The experiment also confirms that the taper does not touch or stick to the rotating resonator even when pushed towards it, in contrast to the stationary case (where the taper can stick to the resonator through van der Waals forces and must be pulled back to break the connection). Other factors, including intermolecular forces, lubricant compressibility, tapered-fiber stiffness, and the wrap angle of the fiber, could in principle affect the resonator-waveguide coupling, but these were confirmed to be negligible in the experiment.
In our scheme, the spinning resonator is coupled to the stationary COM resonator instead of the fiber; a stable coupling between the two resonators can be achieved in the same way.
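For orientation, the magnitude of the Sagnac shift can be estimated from the standard Sagnac-Fizeau formula used in spinning-resonator work, $\Delta_\mathrm{sag}=nr\Omega\,(\omega_c/c)\left(1-1/n^2-\tfrac{\lambda}{n}\tfrac{dn}{d\lambda}\right)$. The sketch below uses assumed silica parameters and neglects the small dispersion term.

```python
# Rough estimate of the Sagnac-Fizeau shift for a spinning resonator.
# All values are assumptions chosen to be representative of the setup
# described above; the silica dispersion correction is dropped.
import math

c, n, r = 3.0e8, 1.44, 4.75e-3   # light speed, silica index, radius (m)
lam     = 1.55e-6                # drive wavelength (m)
Omega   = 6e3                    # rotation rate (rad/s)

omega_c = 2 * math.pi * c / lam
dsag = n * r * Omega * (omega_c / c) * (1 - 1 / n**2)
print(dsag / (2 * math.pi) / 1e6, "MHz")   # of order 10 MHz
```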
\section{Introduction}
Adversarial examples~\cite{goodfellow2015explaining, szegedy2013intriguing} are deceptive model inputs designed to mislead an ML system into making the wrong prediction. They expose regions of the input space that are outside the training data distribution where the model is unstable.
It is important to reveal such vulnerabilities and correct for them, especially for tasks such as fact checking (FC).
\begin{figure}[t]
\centering
\includegraphics[width=0.95 \columnwidth]{images/architecture_v4.pdf}
\caption{High level overview of our method. First, universal triggers are discovered for flipping a source to a target label (e.g. SUPPORTS $\rightarrow$ REFUTES). These triggers are then used to condition the GPT-2 language model to generate novel claims with the original label, including at least one of the found triggers.}
\label{fig:puc}
\end{figure}
In this paper, we explore the vulnerabilities of FC models trained on the FEVER dataset~\cite{thorne2018fever}, where the inference between a claim and evidence text is predicted. We particularly construct \textit{universal adversarial triggers}~\cite{wallace2019universal} -- single n-grams appended to the input text that can shift the prediction of a model from a source class to a target one. Such adversarial examples are of particular concern, as they can apply to a large number of input instances.
However, we find that the triggers also change the meaning of the claim such that the true label is in fact the target class. For example, when attacking a claim-evidence pair with a `SUPPORTS' label, a common unigram found to be a universal trigger when switching the label to `REFUTES' is `none'. Prepending this token to the claim drastically changes the meaning of the claim such that the new claim is in fact a valid
`REFUTES' claim as opposed to an adversarial `SUPPORTS' claim.
Furthermore, we find adversarial examples constructed in this way to be nonsensical, as a new token is simply being attached to an existing claim.
Our \textbf{contributions} are as follows. We \textit{preserve the meaning} of the source text and \textit{improve the semantic validity} of universal adversarial triggers to automatically construct more potent adversarial examples. This is accomplished via: 1) a \textit{novel extension to the HotFlip attack}~\cite{ebrahimi2018hotflip}, where we jointly minimize the target class loss of a FC model and the attacked class loss of a natural language inference model; 2) a \textit{conditional language model} trained using GPT-2~\cite{radford2019language}, which takes
trigger tokens and a piece of evidence, and generates a semantically coherent new claim containing at least one trigger.
The resulting triggers maintain potency against a FC model while preserving the original claim label. Moreover, the conditional language model produces semantically coherent adversarial examples containing triggers, on which a FC model performs 23.8\% worse than with the original FEVER claims. The code for the paper is publicly available.\footnote{https://github.com/copenlu/fever-adversarial-attacks}
\section{Related Work}
\subsection{Adversarial Examples}
Adversarial examples for
NLP systems can be constructed as automatically generated text~\cite{ren2019generating} or perturbations of existing input instances~\cite{jintextfool,ebrahimi2018hotflip}. For a
detailed literature overview, see~\citet{zhang2019adversarial}.
One potent type of adversarial techniques are universal adversarial attacks~\cite{gao2019universal, wallace2019universal} -- single perturbation changes that can be applied to a large number of input instances and that cause significant performance decreases of the model under attack.
\citet{wallace2019universal} find universal adversarial triggers that can change the prediction of the model using the HotFlip algorithm~\cite{ebrahimi2018hotflip}.
However, for NLI tasks, they also change the meaning of the instance they are appended to, and the prediction of the model remains correct. \citet{michel2019evaluation}
address this by exploring only perturbed instances in the neighborhood of the original one.
Their approach is for instance-dependent attacks, whereas we suggest finding \textit{universal} adversarial triggers that also preserve the original meaning of input instances.
Another approach is to apply rule-based perturbations to the input~\cite{ribeiro2018semantically} or to impose adversarial constraints on the produced perturbations~\cite{dia2019semantics}.
By contrast, we extend the HotFlip method by including an auxiliary Semantic Textual Similarity (STS) objective. We additionally use the extracted universal adversarial triggers to generate adversarial examples with low perplexity.
\subsection{Fact Checking}
Fact checking systems consist of components to identify check-worthy claims \cite{atanasova2018overview,hansen2019neural,wright2020fact}, retrieve and rank evidence documents \cite{conf/emnlp/0001R18,allein2020timeaware}, determine the relationship between claims and evidence documents \cite{bowman2015large,conf/emnlp/AugensteinRVB16,conf/naacl/BalyMGMMN18}, and finally predict the claims' veracity \cite{thorne2018fever,conf/emnlp2019/Augenstein}.
As this is a relatively involved task, models easily overfit to shallow textual patterns, motivating the use of adversarial examples to evaluate the limits of their performance.
\citet{thorne2019evaluating} are the first to propose hand-crafted adversarial attacks.
They follow up on this with the FEVER 2.0
task~\cite{thorne-etal-2019-fever2}, where participants design adversarial attacks for existing FC systems. The first two winning systems~\cite{niewinski-etal-2019-gem} produce claims requiring multi-hop reasoning, which has been shown to be challenging for fact checking models \cite{ostrowski2020multihop}. The other remaining system~\cite{kim-allan-2019-fever} generates adversarial attacks manually. We instead find universal adversarial attacks that can be applied to most existing inputs while markedly decreasing fact checking performance.
\citet{niewinski-etal-2019-gem} additionally feed a pre-trained GPT-2 model with the target label of the instance along with the text for conditional adversarial claim generation. Conditional language generation has also been employed by \citet{keskar2019ctrl} to control the style, content, and the task-specific behavior of a Transformer.
\section{Methods}
\subsection{Models}
We take a RoBERTa~\cite{liu2019roberta} model pretrained with a LM objective and fine-tune it to classify claim-evidence pairs from the FEVER dataset as SUPPORTS, REFUTES, and NOT ENOUGH INFO (NEI). The evidence used is the gold evidence, available for the SUPPORTS and REFUTES classes. For NEI claims, we use the system of \citet{malon2018team} to retrieve evidence sentences.
To measure the semantic similarity between the claim before and after prepending a trigger, we use a large RoBERTa model fine-tuned on the Semantic Textual Similarity Task.\footnote{https://huggingface.co/SparkBeyond/roberta-large-sts-b} For further details, we refer the reader to \S\ref{sec:appendixA}.
\subsection{Universal Adversarial Triggers Method}
The Universal Adversarial Triggers method is developed to find n-gram trigger tokens $\mathbf{t_{\alpha}}$, which, appended to the original input $x$, $f(x) = y$, cause the model to predict a target class $\widetilde{y}$ : $f(t_{\alpha}, x) = \widetilde{y}$. In our work, we generate unigram triggers, as generating longer triggers would require additional objectives to later produce well-formed adversarial claims. We start by initializing the triggers with the token `a'. Then, we update the embeddings of the initial trigger tokens $\mathbf{e}_{\alpha}$ with embeddings $\mathbf{e}_{w_i}$ of candidate adversarial trigger tokens $w_i$ that minimize the loss $\mathcal{L}$ for the target class $\widetilde{y}$. Following the HotFlip algorithm, we reduce the brute-force optimization problem using a first-order Taylor approximation around the initial trigger embeddings:
\begin{equation}
\underset{\mathbf{w}_{i} \in \mathcal{V}}{\arg \min }\left[\mathbf{e}_{w_i}-\mathbf{e}_{\alpha}\right]^{\top} \nabla_{\mathbf{e}_{\alpha}} \mathcal{L}
\end{equation}
where $\mathcal{V}$ is the vocabulary of the RoBERTa model and $\nabla_{\mathbf{e}_{\alpha}} \mathcal{L}$ is the average gradient of the task loss accumulated for all batches. This approximation allows for a $\mathcal{O}(|\mathcal{V}|)$ space complexity of the brute-force candidate trigger search.
While HotFlip
finds universal adversarial triggers that successfully fool the model for many instances, we find that the most potent triggers are often negation words, e.g., `not', `neither', `nowhere'. Such triggers change the meaning of the text, making the prediction of the target class correct. Ideally, adversarial triggers would preserve the original label of the claim. To this end, we propose to include an auxiliary STS model objective when searching for candidate triggers. The additional objective is used to minimize the loss $\mathcal{L'}$ for the maximum similarity score (5 out of 5) between the original claim and the claim with the prepended trigger. Thus, we arrive at the combined optimization problem:
\begin{equation}
\small
\underset{\mathbf{w}_{i} \in \mathcal{V}}{\arg \min }([\mathbf{e}_{w_i}-\mathbf{e}_{\alpha}]^{\top} \nabla_{\mathbf{e}_{\alpha}} \mathcal{L} + [\mathbf{o}_{w_i}-\mathbf{o}_{\alpha}]^{\top} \nabla_{\mathbf{o}_{\alpha}} \mathcal{L'})
\end{equation}
where $\mathbf{o}_w$ is the STS model embedding of word $w$. For the initial trigger token, we use ``[MASK]'' as STS selects candidates from the neighborhood of the initial token.
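A minimal sketch of the candidate scoring behind the combined objective above: each vocabulary embedding is scored against the accumulated gradients of both objectives, and the lowest-scoring candidate is kept. The tiny arrays here are stand-ins for the actual embedding matrices and batch-averaged gradients.

```python
# First-order (HotFlip-style) scoring of every vocabulary word as a
# replacement for the current trigger, under the FC objective plus the
# auxiliary STS objective.  E_* are (vocab x dim) embedding matrices,
# e_fc / o_sts the current trigger embeddings, g_* the loss gradients
# with respect to the trigger embedding.
import numpy as np

E_fc  = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0]])  # FC embeddings
E_sts = np.array([[0.0, 1.0], [1.0, 0.0], [0.0, -1.0]])  # STS embeddings
e_fc  = np.zeros(2)           # current trigger embedding (FC model)
o_sts = np.zeros(2)           # current trigger embedding (STS model)
g_fc  = np.array([1.0, 0.0])  # grad of FC loss w.r.t. trigger embedding
g_sts = np.array([0.0, 1.0])  # grad of STS loss w.r.t. trigger embedding

scores = (E_fc - e_fc) @ g_fc + (E_sts - o_sts) @ g_sts
best = int(np.argmin(scores))   # index of the best candidate trigger
print(best, scores[best])       # -> 2 -2.0
```

In practice the two models have different vocabular embedding spaces, so the candidate set is the intersection of their vocabularies.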
\subsection{Claim Generation}
\label{sec:claim_generation}
In addition to finding highly potent adversarial triggers, it is also of interest to generate coherent statements containing the triggers. To accomplish this, we use the HuggingFace implementation of the GPT-2 language model~\cite{radford2019language,Wolf2019HuggingFacesTS}, a large transformer-based language model trained on 40GB of text.
The objective is to generate a coherent claim which either entails, refutes, or is unrelated to a given piece of evidence, while also including trigger words.
The language model is first fine-tuned on the FEVER FC corpus with a specific input format. FEVER consists of claims and evidence with the labels \texttt{SUPPORTS}, \texttt{REFUTES}, or \texttt{NOT ENOUGH INFO} (NEI). We first concatenate evidence and claims with a special token.
Next, to encourage generation of claims with certain tokens, a sequence of tokens separated by commas is prepended to the input. For training, the sequence consists of a single token randomly selected from the original claim, and four random tokens from the vocabulary.
This encourages the model to only select the one token most likely to form a coherent and correct claim. The final input format is \texttt{[trigger tokens]}\textbar\textbar\texttt{[evidence]}\textbar\textbar\texttt{[claim]}.
Adversarial claims are then generated by providing an initial input of a series of five comma-separated trigger tokens plus evidence, and progressively generating the rest of the sequence. Subsequently, the set of generated claims is pruned to include only those which contain a trigger token and express the desired label. The latter is ensured by passing both evidence and claim through an external NLI model trained on SNLI \cite{bowman2015large}.
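The training-input construction described above can be sketched as follows (function and variable names are ours, not from the released code):

```python
# Build the conditional-LM training input
# "[trigger tokens]||[evidence]||[claim]", where the trigger slot holds
# one token sampled from the claim plus four random vocabulary tokens.
import random

def training_input(claim, evidence, vocab, rng):
    tokens = claim.split()
    prompt = [rng.choice(tokens)] + [rng.choice(vocab) for _ in range(4)]
    rng.shuffle(prompt)
    return ",".join(prompt) + "||" + evidence + "||" + claim

rng = random.Random(0)
example = training_input(
    "Cyprus is a major tourist destination.",
    "Cyprus is a major tourist destination in the Mediterranean.",
    ["foreign", "biggest", "every", "friends", "home"],
    rng,
)
print(example)
```

At generation time the trigger slot is instead filled with five discovered trigger tokens, and the claim part is left for the model to complete.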
\section{Results}
We present results for universal adversarial trigger generation and coherent claim generation.
Results are measured using the original FC model on claims with added triggers and generated claims (macro F1). We also measure how well the added triggers maintain the claim's original label (semantic similarity score), the perplexity (PPL) of the claims with prepended triggers, and the semantic quality of generated claims (manual annotation). PPL is measured with a pretrained RoBERTa LM.
\subsection{Adversarial Triggers}
Table~\ref{tab:eval} presents the results of applying universal adversarial triggers to claims from the source class.
The top-performing triggers for each direction are found in \S\ref{sec:appendixC}.
The adversarial method with a single FC objective successfully deteriorates model performance by a margin of 0.264 F1 score overall. The biggest performance decrease occurs when the adversarial triggers are constructed to flip the predicted class from SUPPORTS to REFUTES. We also find that 8 of the 18 top-3 triggers (three for each of the six directions) are negation words such as `nothing', `nobody', `neither', `nowhere' (see Table~\ref{tab:evalonetrig} in the appendix). The first of these triggers decreases the performance of the model to 0.014 in F1. While this is a significant performance drop, these triggers also flip the meaning of the text. The latter is again indicated by the decrease of the semantic similarity between the claim before and after prepending a trigger token, which is largest for the SUPPORTS to REFUTES direction. We hypothesise that the success of the best-performing triggers is partly due to the meaning of the text being flipped.
Including the auxiliary STS objective increases the similarity between the claim before and after prepending the trigger for five out of six directions. Moreover, we find that now only one of the 18 top-3 triggers is a negation word. Intuitively, these adversarial triggers are worse at fooling the FC model as they also have to preserve the label of the original claim. Notably, for the SUPPORTS to REFUTES direction the trigger performance is decreased by a margin of 0.642 compared to the single FC objective.
We conclude that including the STS objective for generating Universal Adversarial triggers helps to preserve semantic similarity with the original claim, but also makes it harder to both find triggers preserving the label of the claim while substantially decreasing the performance of the model.
\begin{table}[t]
\small
\centering
\begin{tabular}{l@{\hspace{1.2\tabcolsep}}l@{\hspace{1.2\tabcolsep}}l@{\hspace{1.2\tabcolsep}}l}
\toprule
\textbf{Class} & \textbf{F1} & \textbf{STS} & \textbf{PPL}\\ \midrule
\multicolumn{4}{c}{\bf No Triggers} \\
All & .866 & 5.139 & 11.92 ($\pm$45.92) \\
S & .938 & 5.130 & 12.22 ($\pm$40.34) \\
R & .846 & 5.139 & 12.14 ($\pm$37.70) \\
NEI & .817 & 5.147 & 14.29 ($\pm$84.45) \\
\midrule
\multicolumn{4}{c}{\bf FC Objective} \\
All & .602 ($\pm$.289) & 4.586 ($\pm$.328) & 12.96 ($\pm$55.37) \\
S$\rightarrow$R & .060 ($\pm$.034) & 4.270 ($\pm$.295) & 12.44 ($\pm$41.74) \\
S$\rightarrow$NEI & .611 ($\pm$.360) & 4.502 ($\pm$.473) & 12.75 ($\pm$40.50) \\
R$\rightarrow$S & .749 ($\pm$.027) & 4.738 ($\pm$.052) & 11.91 ($\pm$36.53) \\
R$\rightarrow$NEI & .715 ($\pm$.026) & 4.795 ($\pm$.094) & 11.77 ($\pm$36.98) \\
NEI$\rightarrow$R & .685 ($\pm$.030) & 4.378 ($\pm$.232) & 14.20 ($\pm$83.32) \\
NEI$\rightarrow$S & .793 ($\pm$.054) & 4.832 ($\pm$.146) & 14.72 ($\pm$93.15) \\
\midrule
\multicolumn{4}{c}{\bf FC+STS Objectives} \\
All & .763 ($\pm$.123) & 4.786 ($\pm$.156) & 12.97 ($\pm$58.30) \\
S$\rightarrow$R & .702 ($\pm$.237) & 4.629 ($\pm$.186) & 12.62 ($\pm$41.91) \\
S$\rightarrow$NEI & .717 ($\pm$.161) & 4.722 ($\pm$.152) & 12.41 ($\pm$39.66) \\
R$\rightarrow$S & .778 ($\pm$.010) & 4.814 ($\pm$.141) & 11.93 ($\pm$37.04) \\
R$\rightarrow$NEI & .779 ($\pm$.009) & 4.855 ($\pm$.098) & 12.20 ($\pm$37.67) \\
NEI$\rightarrow$R & .780 ($\pm$.078) & 4.894 ($\pm$.115) & 15.27 ($\pm$111.2) \\
NEI$\rightarrow$S & .821 ($\pm$.008) & 4.800 ($\pm$.085) & 13.42 ($\pm$82.30) \\
\bottomrule
\end{tabular}
\caption{Universal Adversarial Trigger method performance. Triggers are generated given claims from a source class to fool the classifier to predict a target class (column \textit{Class}, with SUPPORTS (S), REFUTES (R), NEI).
The results are averaged over the top 10 triggers.}
\label{tab:eval}
\end{table}
\begin{table*}[!ht]
\fontsize{9}{9}\selectfont
\renewcommand{\arraystretch}{1.4}
\centering
\begin{tabular}{p{6cm} P{4.2cm} p{4.6cm}}
\toprule
\textbf{Evidence} & \textbf{Triggers} & \textbf{Generated Claim}\\
\midrule
\multicolumn{3}{c}{\bf SUPPORTS Claims} \\
Since the 19th century, some Romani have also migrated to the Americas. & don,already,more,\textcolor{red}{during},home & Romani have moved to the Americas \textcolor{red}{during} the 19th century. \\
Cyprus is a major tourist destination in the Mediterranean. & foreign,biggest,\textcolor{red}{major},every, friends & Cyprus is a \textcolor{red}{major} tourist destination. \\
The first Nobel Prize in Chemistry was awarded in 1901 to Jacobus Henricus van't Hoff, of the Netherlands, ``for his discovery of the laws of chemical dynamics and osmotic pressure in solutions.'' & later,\textcolor{red}{already},quite,altern, whereas & Henricus Van't Hoff was \textcolor{red}{already} awarded the Nobel Prize. \\
\midrule
\multicolumn{3}{c}{\bf REFUTES Claims} \\
California Attorney General Kamala Harris defeated Sanchez , 61.6\% to 38.4\%. & phys,incarn,\textcolor{red}{not},occasionally,
something & Kamala Harris did \textcolor{red}{not} defeat Sanchez, 61.6\% to 38.4\%. \\
Uganda is in the African Great Lakes region. & unless,endorsed,picks,pref, \textcolor{red}{against} & Uganda is \textcolor{red}{against} the African Great Lakes region. \\
Times Higher Education World University Rankings is an annual publication of university rankings by Times Higher Education (THE) magazine. & interested,reward,visit, \textcolor{red}{consumer},conclusion & Times Higher Education World University Rankings is a \textcolor{red}{consumer} magazine. \\
\midrule
\multicolumn{3}{c}{\bf NOT ENOUGH INFO Claims} \\
The KGB was a military service and was governed by army laws and regulations, similar to the Soviet Army or MVD Internal Troops. & nowhere,\textcolor{red}{only},none,no,nothing & The KGB was \textcolor{red}{only} controlled by a military service. \\
The series revolves around Frank Castle, who uses lethal methods to fight crime as the vigilante ``the Punisher'', with Jon Bernthal reprising the role from Daredevil. & says,said,\textcolor{red}{take},say,is & \textcolor{red}{Take} Me High is about Frank Castle's use of lethal techniques to fight crime. \\
The Suite Life of Zack \& Cody is an American sitcom created by Danny Kallis and Jim Geoghan. & whilst,interest,applic,\textcolor{red}{someone}, nevertheless & The Suite Life of Zack \& Cody was created by \textcolor{red}{someone} who never had the chance to work in television. \\
\bottomrule
\end{tabular}
\caption{Examples of generated adversarial claims. These are all claims which the FC model incorrectly classified.}
\label{tab:generation_examples}
\end{table*}
\subsection{Generation}
We use the method described in \S\ref{sec:claim_generation} to generate 156 claims using triggers found with the additional STS objective, and 156 claims without. 52 claims are generated for each class (26 flipping to one class, 26 flipping to the other). A different GPT-2 model is trained to generate claims for each specific class, with triggers specific to attacking that class used as input. The generated claims are annotated manually (see \S\ref{app:B3} for the procedure). The overall average claim quality is 4.48, indicating that most generated statements are highly semantically coherent. The macro F1 of the generative model w.r.t. the intended label is 58.9 overall. For the model without the STS objective, the macro F1 is 56.6, and for the model with the STS objective, it is 60.7, meaning that using triggers found with the STS objective helps the generated claims to retain their intended label.
We measure the performance of the original FC model on generated claims (\autoref{tab:generation_eval}). We compare between using triggers that are generated with the STS objective (Ex2) and without (Ex1). In both cases, the adversarial claims effectively fool the FC model, which performs 38.4\% worse and 23.8\% worse on Ex1 and Ex2, respectively. Additionally, the overall sentence quality increases when the triggers are found with the STS objective (Ex2). The FC model's performance is higher on claims using triggers generated with the STS objective but still significantly worse than on the original claims. We provide examples of generated claims with their evidence in \autoref{tab:generation_examples}.
\begin{table}[t]
\fontsize{10}{10}\selectfont
\centering
\begin{tabular}{lccc}
\toprule
\textbf{Target} & \textbf{F1} & \textbf{Avg Quality} & \textbf{\# Examples}\\ \midrule
\multicolumn{4}{c}{\bf FC Objective} \\
Overall& 0.534& 4.33&156\\
SUPPORTS& 0.486& 4.79& 39\\
REFUTES& 0.494& 4.70&32\\
NEI& 0.621& 3.98 &85\\
\midrule
\multicolumn{4}{c}{\bf FC+STS Objectives} \\
Overall& 0.635& 4.63&156\\
SUPPORTS& 0.617& 4.77&67\\
REFUTES& 0.642& 4.68&28\\
NEI& 0.647& 4.44&61\\
\bottomrule
\end{tabular}
\caption{FC performance for generated claims.}
\label{tab:generation_eval}
\end{table}
Comparing FC performance with our generated claims vs.\ those from the development set of adversarial claims from the FEVER shared task, we see similar drops in performance (0.600 and 0.644 macro F1, respectively). While the adversarial triggers from FEVER cause a larger performance drop, they were manually selected to meet the label coherence and grammatical correctness requirements. Conversely, we automatically generate claims that meet these requirements.
\section{Conclusion}
We present a method for automatically generating highly potent, well-formed, label cohesive claims for FC.
We improve upon previous work on universal adversarial triggers by determining how to construct valid claims containing a trigger word.
Our method is fully automatic, whereas previous work on generating claims for fact checking is generally rule-based or requires manual intervention. As FC is only one test bed for adversarial attacks, it would be interesting to test this method on other NLP tasks requiring semantic understanding such as question answering
to better understand shortcomings of models.
\section*{Acknowledgements}
$\begin{array}{l}\includegraphics[width=1cm]{images/euflag2.png} \end{array}$ This project has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sk\l{}odowska-Curie grant agreement No 801199.
\clearpage
\section{Introduction}
A \emph{nut graph} is a graph on at least 2 vertices whose adjacency matrix has nullity 1 (i.e.\ rank $n-1$, where $n$ is the order of the graph) and whose non-trivial kernel vectors contain no zero entries.
The topic of nut graphs, introduced by Sciriha and Gutman in~\cite{gutman1996graphs,sciriha1998nut}, is one that emerged from pure mathematics (linear algebra and graph theory), but which turns out to have natural connections with chemical theory in
at least three distinct areas: electronic structure theory, the chemical reactivity of radicals and,
perhaps more surprisingly, the theory of molecular conduction. The applications have generated new mathematical questions, and these in turn have implications for the scope of the chemical applications. This will be discussed in
more detail in Section~\ref{subsect:nut_chem_props}.
In this connection, we note that a
\emph{chemical graph} is a connected graph with maximum degree at most 3. This definition is
motivated by the use of graph theory in chemistry to describe electronic structure of unsaturated
carbon networks (H\"uckel theory~\cite{Streitwieser1962}), where vertices represent carbon atoms with bonds to at most three carbon neighbours.
The smallest nut graphs have seven vertices;
there are three seven-vertex nuts and they are shown in Figure~\ref{fig:smallest_nuts}.
Nuts form a subset of {\it core} graphs~\cite{sciriha1998construction,sciriha2009maximal}: a core graph is singular, and every vertex has a non-zero entry in some eigenvector belonging to the nullspace. A useful property of both nut and core graphs is that deletion of any vertex reduces the nullity by one.
It is also useful to note that
nut graphs are non-bipartite and have no leaves~\cite{sciriha1998nut}.
The smallest {\it chemical} nut graph has nine vertices and is shown in
Figure~\ref{fig:smallest_chemnut}.
\begin{figure}[h!t]
\centering
\includegraphics[width=0.8\textwidth]{Nuts_7v.pdf}
\caption{The smallest nut graphs.}
\label{fig:smallest_nuts}
\end{figure}
\begin{figure}[h!t]
\centering
\includegraphics[width=0.35\textwidth]{NutChem_9v.pdf}
\caption{The smallest chemical nut graph.}
\label{fig:smallest_chemnut}
\end{figure}
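As a concrete illustration of the definition (this sketch is ours, in Python, and is not part of the generation software described later), the nut property of a small graph can be decided in exact rational arithmetic, and a brute force over all labelled graphs then reproduces the fact that the smallest nut graphs have seven vertices:

```python
from fractions import Fraction
from itertools import combinations

def nullspace(A):
    """Exact nullspace basis of a square rational matrix (Gauss-Jordan)."""
    n = len(A)
    M = [[Fraction(x) for x in row] for row in A]
    pivots = []
    r = 0
    for c in range(n):
        p = next((i for i in range(r, n) if M[i][c] != 0), None)
        if p is None:
            continue
        M[r], M[p] = M[p], M[r]
        piv = M[r][c]
        M[r] = [x / piv for x in M[r]]
        for i in range(n):
            if i != r and M[i][c] != 0:
                f = M[i][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        pivots.append(c)
        r += 1
    basis = []
    for free in (c for c in range(n) if c not in pivots):
        v = [Fraction(0)] * n
        v[free] = Fraction(1)
        for row, c in enumerate(pivots):
            v[c] = -M[row][free]
        basis.append(v)
    return basis

def is_nut(adj):
    """At least two vertices, nullity one, and a full kernel vector."""
    if len(adj) < 2:
        return False
    ker = nullspace(adj)
    return len(ker) == 1 and all(x != 0 for x in ker[0])

def count_labelled_nuts(n):
    """Brute force over all labelled graphs on n vertices."""
    edges = list(combinations(range(n), 2))
    count = 0
    for mask in range(1 << len(edges)):
        adj = [[0] * n for _ in range(n)]
        for k, (i, j) in enumerate(edges):
            if mask >> k & 1:
                adj[i][j] = adj[j][i] = 1
        count += is_nut(adj)
    return count

# no nut graph has fewer than seven vertices
assert count_labelled_nuts(5) == 0
```

The rational arithmetic makes the `is this eigenvalue zero?' question exact; the price, as discussed later in the text, is speed, which is why such a direct approach is only feasible for very small orders.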
There are various published constructions for expanding a nut graph, for example by adding duplicate vertices, or expanding edges to paths of appropriate parity, from which it is clear that arbitrarily large chemical nut graphs exist~\cite{Sciriha2008}.
In~\cite{fowler2014omni} Fowler et al.\ determined all nut graphs up to 10 vertices
and all chemical nut graphs up to 16 vertices. In~\cite{sciriha2007nonbonding} Sciriha and Fowler also determined all nut graphs among the cubic polyhedra up to 24 vertices.
They also determined all nut fullerenes up to 120 vertices and showed that there are no nut IPR fullerenes up to at least 150 vertices~\cite{sciriha2007nonbonding}. (A \textit{fullerene}~\cite{kroto_85} is a cubic polyhedron where all faces have size 5 or 6.)
In this article we present a specialised generation algorithm for nut graphs and using this algorithm we are able to expand significantly these lists of nut graphs.
The paper is organised as follows. In Section~\ref{section:generation_nuts} we present our generation algorithm for nut graphs. In Section~\ref{subsect:nut_counts} we present the complete lists of nut graphs which were generated by our implementation of this algorithm. Finally, in Section~\ref{subsect:nut_chem_props} we describe the results of our computations of chemically relevant properties on the lists of nut graphs.
\section{Generation of nut graphs}
\label{section:generation_nuts}
Several techniques can be used to determine whether a graph is a nut graph. A straightforward approach is to use the eigenvalues and eigenvectors of the corresponding adjacency matrix: the graph is a nut graph if and only if it has at least two vertices, exactly one eigenvalue equal to zero, and a corresponding eigenvector with no zero entries.
Although fast numerical algorithms for the determination of eigenvalues and eigenvectors of symmetric matrices exist, they are not ideal for our purposes. The floating point approximations used in computer implementations of these methods suffer from an inherent inaccuracy problem: it is never certain whether a result that `looks like' zero (perhaps because it coincides with zero up to 12 decimal places) corresponds to an actual zero, nor, conversely, whether a result that seems different from zero is in fact a true zero distorted by rounding errors.
For many problems of a numerical nature this is not an issue, because the value that is computed is a continuous function of the input values.
Unfortunately, the rank of a matrix (and hence the property of being a nut graph) does not belong to this category.
In our case it would help if we could determine in advance to what accuracy eigenvalues need to be computed, that is to say,
if it were possible to calculate a lower bound on the minimal non-zero eigenvalue of a graph in advance. We
are not aware of any theoretical results in this direction.
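A toy illustration of this pitfall (ours, using \textit{numpy}, and unrelated to the specific graphs discussed next): a symmetric matrix whose determinant is exactly $2^{-40}$, so that it is certainly non-singular, nevertheless has a numerically computed smallest eigenvalue of the order $10^{-13}$, which would slip under most `is it zero?' thresholds.

```python
from fractions import Fraction
import numpy as np

eps = 2.0 ** -40                       # exactly representable as a float
A = [[1.0, 1.0], [1.0, 1.0 + eps]]

# the exact determinant, in rational arithmetic, is 2^-40: not zero
exact_det = Fraction(1.0 + eps) - Fraction(1)
assert exact_det == Fraction(1, 2 ** 40)

# yet the numerically computed smallest eigenvalue is about 4.5e-13,
# i.e. it coincides with zero to 12 decimal places
smallest = min(abs(x) for x in np.linalg.eigvalsh(np.array(A)))
assert smallest < 1e-9
```

Here the tiny eigenvalue is genuine (roughly $\epsilon/2$), so no fixed numerical threshold can distinguish it from a true zero, which is exactly the difficulty described above.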
That inaccurate computations could indeed lead to false conclusions is illustrated by a literature example that is quite similar to the problem at hand: in a search for graphs with eigenvalue
$\sqrt{8}$
to test a conjecture made by Dias~\cite{dias1995structural},
examples of false-positive graphs were found~\cite{fowler2010counterexamples} at $n = 18$, i.e.\ graphs with a single eigenvalue not equal to $\sqrt{8}$ but coinciding with $\sqrt{8}$ to 10 places of decimals. The graph $G_{18}$ in Figure~\ref{fig:false_pos} has largest eigenvalue
$\lambda_1 = \sqrt8-\epsilon$, with $\epsilon \approx 5 \times 10\sp{-11}$.
This and other false positives in the search on the Dias conjecture~\cite{dias1995structural} were detected by the
informal method of checking a {\lq danger zone\rq} around the target eigenvalue, based on a cautious estimate of the numerical error of the eigenvalue routines (combined with the
knowledge that a true eigenvalue $\sqrt{8}$ would be paired with another at $-\sqrt{8}$).
\begin{figure}[h!t]
\centering
\includegraphics[width=0.6\textwidth]{OctoGraphs.pdf}
\caption{An $18$-vertex graph with Perron eigenvalue $\sqrt{8}-\epsilon$;
by taking the Cartesian product of this graph with the star on 9 vertices,
a graph with a full eigenvector at eigenvalue $-\epsilon$ is produced.
In this case, $\epsilon \approx 5 \times 10\sp{-11}$.}
\label{fig:false_pos}
\end{figure}
It is easy to use such an example to generate a graph with a full eigenvector that corresponds to a near-zero eigenvalue: simply take the Cartesian product of $G_{18}$ with $S_9$, the star on nine vertices, which has smallest eigenvalue $-\sqrt{8}$. The product $G_a \square G_b$
has eigenvalues $\lambda_a + \lambda_b$, where $\lambda_a$ and $\lambda_b$ run over the spectra of $G_a$ and $G_b$, respectively, and entries in non-degenerate eigenvectors of $G_a \square G_b$ are simple products of the eigenvector entries of the starting graphs. In the present case,
the $162$-vertex graph
$G_{18} \square S_9$ therefore has a non-degenerate eigenvalue
$-\epsilon$ formed from the sum of the Perron eigenvalue of $G_{18}$ and the anti-Perron $-\sqrt8$ eigenvalue of the star. As both eigenvectors are full, this vector is also full. ($G_{18} \square S_9$ also has $14$ true zero eigenvalues, arising from the nullities of $2$ and $7$ of $G_{18}$ and $S_9$, respectively).
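The eigenvalue-sum rule for Cartesian products is easy to verify on a toy case. The following sketch (our illustration, chosen with integer eigendata so that the check is exact) builds the product adjacency matrix directly from the definition and confirms that a Kronecker product of eigenvectors is an eigenvector of the product with the summed eigenvalue:

```python
def cartesian_adj(A, B):
    """Adjacency matrix of the Cartesian product: vertex (i, j) is joined
    to (i2, j) when i ~ i2 in G_a, and to (i, j2) when j ~ j2 in G_b."""
    na, nb = len(A), len(B)
    M = [[0] * (na * nb) for _ in range(na * nb)]
    for i in range(na):
        for j in range(nb):
            r = i * nb + j
            for i2 in range(na):
                if A[i][i2]:
                    M[r][i2 * nb + j] = 1
            for j2 in range(nb):
                if B[j][j2]:
                    M[r][i * nb + j2] = 1
    return M

def matvec(M, x):
    return [sum(a * b for a, b in zip(row, x)) for row in M]

# K3 has eigenvector (1, 1, 1) with eigenvalue 2; K2 has (1, -1) with -1.
# Their Kronecker product is an eigenvector of K3 [] K2 (the triangular
# prism) with eigenvalue 2 + (-1) = 1.
K3 = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]
K2 = [[0, 1], [1, 0]]
u, lam_u = [1, 1, 1], 2
v, lam_v = [1, -1], -1
w = [a * b for a in u for b in v]
assert matvec(cartesian_adj(K3, K2), w) == [(lam_u + lam_v) * x for x in w]
```

The same mechanism, applied to the Perron eigenvector of $G_{18}$ and the $-\sqrt 8$ eigenvector of $S_9$, yields the full eigenvector at $-\epsilon$ described above.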
A well-known and more formal method for countering the inaccuracy problem is to use multiprecision integer arithmetic. Indeed, there are various methods of checking the number of zero eigenvalues of a candidate graph, and of determining the relative values of the entries in an
eigenvector that corresponds to a unique zero,
using only integer arithmetic, see for instance Longuet-Higgins~\cite{longuet1950some}
elaborated by {\v Z}ivkovi{\' c}~\cite{zivkovic1972calculation}. These methods are easiest to implement for eigenvalues that are integers but the algorithms can be extended to algebraic integers in general, such as the eigenvalue $\sqrt 8$ mentioned above. (Note that eigenvalues of graphs are always algebraic integers.)
The main disadvantage of these algorithms is that they are much slower than the classical methods, because their speed depends on the size of the numbers involved, and these grow quickly with larger graph orders. Tests indicate that for the problem at hand the multiprecision method is slower by at least an order of magnitude than the generation algorithm which we eventually used (cf.\ Section~\ref{subsection:modulo_p}).
\subsection{Properties of nut graphs}
For the reasons mentioned above, we compute the rank of a matrix directly without previous computation of the eigenvalues. Our algorithm is based on the following properties of adjacency matrices of nut graphs.
\begin{theorem}
\label{theo:nut}
Consider a graph $\Gamma$ with adjacency matrix
\[
A=
\left(\!
\begin{array}{c|c}
B & b^T \\ \hline
b & 0 \\
\end{array}
\!\right).
\]
Then $\Gamma$ is a nut graph if and only if
\begin{enumerate}
\item $B$ is non-singular,
\item $bB^{-1}b^T = 0$, and
\item $bB^{-1}$ has no zero entries.
\end{enumerate}
\end{theorem}
\begin{proof}
First assume that $B$ is non-singular. We multiply $A$ on the left with a non-singular matrix, as follows
\[
\left(\!
\begin{array}{c|c}
B^{-1} & 0 \\ \hline
-bB^{-1} & 1 \\
\end{array}
\!\right)
\left(\!
\begin{array}{c|c}
B & b^T \\ \hline
b & 0 \\
\end{array}
\!\right)
=
\left(\!
\begin{array}{c|c}
1 & B^{-1}b^T \\ \hline
0 & -bB^{-1}b^T \\
\end{array}
\!\right)
\]
If $-bB^{-1}b^T \ne 0$ then the right hand matrix, and hence also $A$, has full rank. Otherwise, both matrices have rank $n-1$.
In the latter case, consider the vector $(bB^{-1}\mid -1)$. We have
\[(bB^{-1}\mid -1) \left(\!
\begin{array}{c|c}
B & b^T \\ \hline
b & 0 \\
\end{array}
\!\right)
=
(0 \mid bB^{-1}b^T) = (0\mid 0).\]
Hence $(bB^{-1}\mid -1)$ is a (non-trivial) kernel vector of $A$. If this vector contains no zero entries,
then $\Gamma$ is a nut graph.
Conversely, assume $\Gamma$ is a nut graph and consider a kernel vector of $A$. Because the last entry of this kernel vector is non-zero, we may always multiply the vector with a scalar to obtain a kernel vector of the form
$(x \mid -1)$. From $(x \mid -1)A = 0$ we find $xB = b$ and $xb^T = 0$.
If $B$ has an inverse, we find
$x = bB^{-1}$ and the theorem follows.
Otherwise, let $y$ denote a non-trivial kernel vector of $B$. Then $(y\mid 0) A = (0\mid yb^T) = (0\mid yBx^T) = (0\mid 0)$. Hence $(y\mid 0)$ is a kernel vector of $A$ with an entry equal to zero, and $\Gamma$ cannot be a nut graph.
\end{proof}
Alternative proofs of the properties in Theorem \ref{theo:nut}
can for instance be found in \cite{Sciriha2008}.
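For small cases, the three conditions of Theorem~\ref{theo:nut} can be applied directly in exact rational arithmetic. The following sketch (ours, not the generation program itself) places the distinguished vertex last, solves $xB = b$, and then checks conditions 2 and 3:

```python
from fractions import Fraction

def solve_symmetric(B, rhs):
    """Solve B x = rhs exactly over the rationals (Gauss-Jordan).
    Returns None when B is singular."""
    n = len(B)
    M = [[Fraction(B[i][j]) for j in range(n)] + [Fraction(rhs[i])]
         for i in range(n)]
    for c in range(n):
        p = next((r for r in range(c, n) if M[r][c] != 0), None)
        if p is None:
            return None                     # singular
        M[c], M[p] = M[p], M[c]
        piv = M[c][c]
        M[c] = [x / piv for x in M[c]]
        for r in range(n):
            if r != c and M[r][c] != 0:
                f = M[r][c]
                M[r] = [a - f * b for a, b in zip(M[r], M[c])]
    return [M[i][n] for i in range(n)]

def is_nut_blockwise(adj):
    """Nut test via the three theorem conditions, last vertex distinguished:
    (1) B non-singular, (2) b B^{-1} b^T = 0, (3) b B^{-1} full."""
    n = len(adj)
    if n < 2:
        return False
    B = [row[:-1] for row in adj[:-1]]
    b = adj[-1][:-1]
    x = solve_symmetric(B, b)        # B is symmetric, so x = b B^{-1}
    if x is None:
        return False                 # condition 1 fails
    if sum(xi * bi for xi, bi in zip(x, b)) != 0:
        return False                 # condition 2 fails
    return all(xi != 0 for xi in x)  # condition 3
```

Note that, because deleting any vertex of a nut graph reduces the nullity by one, the test is valid whichever vertex is placed last.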
\begin{lemma}
\label{lemma-adj}
A graph with adjacency matrix $A$ is a nut graph if and only if
$\det A = 0$ and $\mathop{\mathrm{adj}}\nolimits A$ has no zero entries.
\end{lemma}
\begin{proof}
A matrix $A$ has nullity 1 if and only if $\det A = 0$ and $\mathop{\mathrm{adj}}\nolimits A\ne 0$.
Since $A \mathop{\mathrm{adj}}\nolimits A = (\det A)1 = 0$, the columns of $\mathop{\mathrm{adj}}\nolimits A$ are eigenvectors
of $A$ (possibly 0). Since $A$ has rank $n-1$ every column of $\mathop{\mathrm{adj}}\nolimits A$
must be a multiple of a fixed non-trivial eigenvector $x^T$ of $A$.
Write $\alpha_i x$ for the
$i$th column of $\mathop{\mathrm{adj}}\nolimits A$. Then $y=(\alpha_1,\ldots,\alpha_n)$ is non-zero. Moreover, every row of $\mathop{\mathrm{adj}}\nolimits A$ is a multiple of $y$. In fact: $\mathop{\mathrm{adj}}\nolimits A = x^Ty$.
As $\mathop{\mathrm{adj}}\nolimits A$ is symmetric, the rows
of $\mathop{\mathrm{adj}}\nolimits A$ are eigenvectors of $A$ and hence so is $y$. If $\Gamma$ is not a nut graph, then $\alpha_i=0$ for at least one index $i$ and $\mathop{\mathrm{adj}}\nolimits A$ contains at least one zero column (and row). If $\Gamma$ is a nut graph,
then neither $x$ nor $y$ contains a zero entry, and hence all entries
of $\mathop{\mathrm{adj}}\nolimits A$ are non-zero.
\end{proof}
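Lemma~\ref{lemma-adj} likewise yields an integer-only nut test for small graphs. This sketch (ours; cofactor expansion is exponential, so it is for illustration only) also lets one confirm the identity $A \mathop{\mathrm{adj}}\nolimits A = (\det A)1$ used in the proof:

```python
def det_int(M):
    """Integer determinant by cofactor expansion along the first row
    (exponential, but adequate for the small illustrative cases here)."""
    n = len(M)
    if n == 1:
        return M[0][0]
    total = 0
    for j in range(n):
        if M[0][j]:
            minor = [row[:j] + row[j + 1:] for row in M[1:]]
            total += (-1) ** j * M[0][j] * det_int(minor)
    return total

def adjugate(M):
    """adj(M)[i][j] = (-1)^(i+j) det(M with row j and column i deleted)."""
    n = len(M)
    return [[(-1) ** (i + j) *
             det_int([row[:i] + row[i + 1:]
                      for k, row in enumerate(M) if k != j])
             for j in range(n)] for i in range(n)]

def is_nut_adjugate(A):
    """Nut test via the lemma: det A = 0 and adj A has no zero entries."""
    if len(A) < 2:
        return False
    if det_int(A) != 0:
        return False
    return all(all(e != 0 for e in row) for row in adjugate(A))
```

For example, the path $P_3$ has $\det A = 0$ but a zero entry in its adjugate, while $C_4$ (nullity 2) has adjugate equal to the zero matrix; both are correctly rejected.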
\subsection{Generation algorithm}
\label{subsection:generation_algorithm}
The algorithm that was developed here
for generating all nut graphs of a given order~$n$ is an adaptation of the general canonical construction path method for isomorph-free generation of graphs, pioneered by McKay~\cite{mckay_98}.
Essentially, the graphs of order $k$ are generated from the graphs of order $k-1$ by recursively adding a new vertex and connecting it in all possible ways to the vertices already generated, pruning isomorphic copies on the way.
To this basic algorithm we add two filtering/pruning steps:
\begin{enumerate}
\item Only those graphs of order $n-1$ are retained that are non-singular, cf.\ property 1 of Theorem~\ref{theo:nut}.
\item Only those graphs of order $n$ are retained that satisfy properties 2 and 3 of Theorem~\ref{theo:nut}.
\end{enumerate}
In addition, the inverse $A^{-1}$ of the adjacency matrix $A$ of a graph retained in step 1 above, is stored in memory so it can be reused for the second step. In general, a single graph of order $n-1$ gives rise to a large number (often hundreds or thousands) of graphs of order~$n$, so this turns out to be a very effective optimisation.
To check whether the adjacency matrix $A$ is non-singular and then compute its inverse, the standard Gauss-Jordan algorithm from linear algebra can be used, which essentially amounts to computing the echelon form of the augmented matrix $(A\mid 1)$ to yield $(1\mid A^{-1})$.
This algorithm has an asymptotic complexity of $O(n^3)$ under the assumption that all standard arithmetic operators take constant time. (In practice, however, this assumption is valid only with computer operations that use a fixed number of computer bits, in particular, not with multiprecision arithmetic).
Since, as we mentioned above, it is essential to avoid rounding errors, the algorithm cannot make use of (finite precision) floating point operations.
As the matrix $A$ has integral entries (in fact, it consists of only ones and zeros) we could instead use multiprecision rational arithmetic, i.e.\ work with exact fractions. Unfortunately, the numerators and denominators involved become very large very quickly, making this feasible only for graphs of small order. It is possible to adapt the algorithm so that
division can be avoided, i.e.\ work with multiprecision integers instead of rationals, but this does not improve the execution time significantly.
Instead we use a different approach, based on modular arithmetic, which turns out to be more efficient.
\subsection{Computations modulo $p$}
\label{subsection:modulo_p}
The main idea is to perform most of the work using arithmetic `modulo $p$' for some suitable primes $p$. Theorems \ref{theo-lift}--\ref{theo-lift-3} below show that we can then `lift' our results to the real numbers $\mathbf{R}$, provided we do this for a sufficient number of primes $p$. We can choose the primes to be quite large, as long as they do not surpass the size of a word for which a computer can perform fast division and multiplication (say, $p \approx 2^{31}$ for present-day computers).
Denote by $\Delta_n$ the maximum absolute value of the (real) determinant of a 0-1 matrix of size $n\times n$. We have
\begin{theorem}[Hadamard]
\[
\Delta_n \le 2\left(\frac{n+1}4\right)^{\displaystyle{\frac {n+1}2}}.
\]
\end{theorem}
Equality is possible, e.g. $\Delta_3=2$ is reached by the adjacency matrix $\left(\begin{smallmatrix}0&1&1\\1&0&1\\1&1&0\end{smallmatrix}\right)$ of the complete graph $K_3$.
For small values of $n$, exact values of $\Delta_n$ have been obtained~\cite{Orrick2005}:
\[
\begin{array}{c|*{20}c}
n & 2 & 3 & 4 & 5 & 6 & 7 & 8 \\
\hline
\Delta_n & 1 & 2 & 3 & 5 & 9 & 32 & 56 \\
\\
n & 9 & 10 & 11 & 12& 13 & 14& 15\\
\hline
\Delta_n & 144 & 320 & 1458 & 3645 &9477 & 25515& 131072\\
\\
n & 16 & 17 & 18 & 19 & 20 \\
\hline
\Delta_n& 327680 & 1114112 & 3411968 & 19531250 & 56640625 \\
\end{array}
\]
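Since the exponent $(n+1)/2$ is half-integral for even $n$, the bound is most conveniently verified in exact integer arithmetic by squaring both sides. A quick check against the table above (our illustration) confirms $\Delta_n \le 2((n+1)/4)^{(n+1)/2}$ for all listed $n$, with equality at $n = 3, 7, 11, 15, 19$:

```python
# Exact values of Delta_n (maximum determinant of an n x n 0-1 matrix),
# transcribed from the table above.
DELTA = {2: 1, 3: 2, 4: 3, 5: 5, 6: 9, 7: 32, 8: 56,
         9: 144, 10: 320, 11: 1458, 12: 3645, 13: 9477,
         14: 25515, 15: 131072, 16: 327680, 17: 1114112,
         18: 3411968, 19: 19531250, 20: 56640625}

def hadamard_bound_holds(n, delta):
    """Check delta <= 2*((n+1)/4)^((n+1)/2) in exact integer arithmetic.

    Both sides are squared to avoid the half-integral exponent:
    delta^2 * 4^(n+1) <= 4 * (n+1)^(n+1)."""
    return delta ** 2 * 4 ** (n + 1) <= 4 * (n + 1) ** (n + 1)

assert all(hadamard_bound_holds(n, d) for n, d in DELTA.items())
```

The equality cases all occur at $n \equiv 3 \pmod 4$ in this range; for instance $\Delta_{19} = 2\cdot 5^{10} = 19\,531\,250 = 2(20/4)^{10}$.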
Let $A$ be a matrix with integral entries. Let $p$ be a prime. We write $\det_{p} A$ (resp.\ $\mathrm{rank}\,_{p} A$, $\mathop{\mathrm{adj}}\nolimits_{p} A$)
for the determinant (resp.\ rank, adjugate) of $A$ over the finite field $\mathbf{F}_{p}$. Recall that $\mathrm{rank}\,_{p} A$ is also called the $p$-rank of $A$. We have the following properties:
\begin{theorem}
\label{theo-lift}
Let $A$ be a symmetric 0-1-matrix of size $n\times n$. Let $p_1, p_2, \ldots p_k$
be distinct primes such that $p_1p_2\cdots p_k > \Delta_n$.
Then $A$ is non-singular over $\mathbf{R}$ if and only if $A$ is non-singular over $\mathbf{F}_{p_i}$ for at least one $i$, $1\le i \le k$. Or equivalently: $\det A = 0$ if and only if $\det_{p_i} A = 0$
for all $i$, $1\le i \le k$.
\end{theorem}
\begin{proof}
Because all entries of $A$ are integers, we have $\det_{p_i} A = \det A\bmod {p_i}$. Note that $\det_{p_i} A = 0$ if and only if
$\det A$ is divisible by $p_i$.
Hence, if $\det A = 0$ then also $\det_{p_i} A = 0$, for all $i$. Conversely,
if $\det_{p_i} A = 0$ for all $i$, then $\det A$ must be divisible by all $p_i$, and hence by their product. Since $|\det A| \le \Delta_n < \prod p_i$, this
is only possible when $\det A = 0$.
\end{proof}
\begin{theorem}
\label{theo-lift-2}
Let $A$ be a symmetric 0-1-matrix of size $n\times n$. Let $p_1, p_2, \ldots p_k$
be distinct primes such that $p_1p_2\cdots p_k > \Delta_n$.
Then $\mathrm{rank}\, A =n-1$ if and only if
$\mathrm{rank}\,_{p_i} A \le n-1$ for all $i$, $1\le i \le k$, with equality in at least one case.
\end{theorem}
\begin{proof}
We obtain $\mathop{\mathrm{adj}}\nolimits_{p_i} A$ by reducing every entry
of $\mathop{\mathrm{adj}}\nolimits A$ modulo $p_i$. (Note that $\mathop{\mathrm{adj}}\nolimits A$ has integral entries.)
Each entry of $\mathop{\mathrm{adj}}\nolimits A$ is the determinant of a 0-1-matrix
of size $(n-1)\times (n-1)$
(viz.\ the corresponding cofactor). Hence if $\prod p_i > \Delta_{n} \ge \Delta_{n-1}$,
an entry which is zero in $\mathop{\mathrm{adj}}\nolimits_{p_i} A$ for all $i$, must correspond exactly to a zero entry in $\mathop{\mathrm{adj}}\nolimits A$.
If $A$ has rank $n-1$, then $\det A = 0$ and $\mathop{\mathrm{adj}}\nolimits A\ne 0$. Hence $\det_{p_i} A = 0$ and the rank of $A$ over $\mathbf{F}_{p_i}$ is at most $n-1$, for every $i$. Also $\mathop{\mathrm{adj}}\nolimits_{p_i} A\ne 0$ for at least one $i$ and hence $\mathrm{rank}\,_{p_i} A = n-1$ in that case. Conversely, if $\mathrm{rank}\,_{p_i} A > n-1$ for at least one $i$, then $\det A\ne 0$. Also if $\mathrm{rank}\,_{p_i} A < n-1$ for all $i$, then $\mathop{\mathrm{adj}}\nolimits_{p_i} A = 0$ for all $i$ and therefore $\mathop{\mathrm{adj}}\nolimits A = 0$, which implies $\mathrm{rank}\, A < n-1$.
\end{proof}
\begin{theorem}
\label{theo-lift-adj}
Let $\Gamma$ be a graph of order $n$ with adjacency matrix $A$ of rank $n-1$. Let $p_1, p_2, \ldots p_k$
be distinct primes such that $p_1p_2\cdots p_k > \Delta_n$. Then $\Gamma$ is a nut graph if and only if for each element position $(g,h), 1 \le g,h \le n$, there is at least one $i, 1\le i\le k$ such that $(\mathop{\mathrm{adj}}\nolimits_{p_i} A)_{g,h} \ne 0$.
\end{theorem}
\begin{proof}
The proof runs along the same lines as the proof of Theorems \ref{theo-lift} and
\ref{theo-lift-2}. By Lemma \ref{lemma-adj}, $\Gamma$ is a nut graph if and only if
$\mathop{\mathrm{adj}}\nolimits A$ contains no zero entries. An entry of $\mathop{\mathrm{adj}}\nolimits A$ is zero if and only if
the corresponding entry of $\mathop{\mathrm{adj}}\nolimits_{p_i} A$ is zero, for all $i, 1\le i\le k$.
\end{proof}
Theorems \ref{theo-lift} and \ref{theo-lift-2} provide fast ways to check whether the adjacency matrix of a graph has nullity 0 or 1, with a complexity of $O(n^3\log \Delta_n) \approx O(n^4\log n)$. Theorem \ref{theo-lift-adj} is less useful in the general case, because it involves computing the adjugate matrix. For small values of $n$ (such that $\Delta_n < 2^{32}$) we may however use the following
\begin{corollary}
\label{cor:mod_p}
Let $\Gamma$ be a graph of order $n$ with adjacency matrix $A$ of rank $n-1$. Let $p$ be prime, $p > \Delta_n$. Then $\Gamma$ is a nut graph if and only if it is a nut graph `modulo $p$'.
\end{corollary}
Note that the condition $p > \Delta_n$ cannot simply be waived. Indeed, in the course of our experiments we found seven (IPR fullerene) graphs with between 278
and 300 vertices that are nut graphs `modulo $p$' (with $p=2^{32}-5 = 4\,294\,967\,291$)
but that in reality turned out to be non-singular (with a determinant divisible by $p$).
For larger graphs we can use the following variant of Theorem~\ref{theo-lift-adj}:
\begin{theorem}
\label{theo-lift-3}
Let $\Gamma$ be a graph of order $n$ with adjacency matrix $A$ of rank $n-1$. Let $p_1, p_2, \ldots p_k$
be distinct primes such that $p_1p_2\cdots p_k > \Delta_n$ and such that $\mathrm{rank}\,_{p_i} A = n-1$. Then $\Gamma$ is a nut graph if and only if for each coordinate position $g$, $1 \le g \le n$, there is at least one $i, 1\le i\le k$ such that a corresponding (non-trivial) kernel vector $v_i$ of $A$, computed modulo $p_i$, has $(v_i)_g\ne 0$.
\end{theorem}
\begin{proof}
If $A$ has $p_i$-rank $n-1$, then a non-trivial kernel vector $v_i$ satisfies $\mathop{\mathrm{adj}}\nolimits_{p_i} A = \lambda_i v_i^T v_i$ for some scalar $\lambda_i\in\mathbf{F}_{p_i}$, $\lambda_i\ne 0$. (Cf.\ proof of Lemma \ref{lemma-adj}.) The theorem now follows from Theorem \ref{theo-lift-adj}.
\end{proof}
Note that this theorem requires the primes $p_1, p_2, \ldots$ to be chosen such that $\mathrm{rank}\,_{p_i} A = n-1$. Fortunately, such primes can always be found (and in fact, most primes will satisfy this property). Indeed, as $\mathrm{rank}\, A = n-1$, the only forbidden primes are those for which $\mathop{\mathrm{adj}}\nolimits_{p_i} A = 0$ (while $\mathop{\mathrm{adj}}\nolimits A \ne 0$), i.e., the primes that divide all entries of $\mathop{\mathrm{adj}}\nolimits A$ simultaneously. There is necessarily only a finite number of these, and
this number is typically zero.
The algorithm to check whether a given graph $\Gamma$ of order $n$ with adjacency matrix~$A$ is a nut graph hence runs as follows.
Perform the following for a (fixed) sequence of distinct primes $p_1,p_2,\ldots$ with $p_i\lessapprox 2^{31}$.
\begin{enumerate}
\item Compute $\mathrm{rank}\,_{p_i} A$ using the standard Gauss-Jordan algorithm over $\mathbf{F}_{p_i}$.
\item If $\mathrm{rank}\,_{p_i} A = n$, stop the algorithm. $\Gamma$ is not a nut.
\item If $\mathrm{rank}\,_{p_i} A < n-1$, discard $p_i$ and turn to the next prime in the list.
\item Otherwise, compute a non-trivial kernel vector $v_i$ for $A$ over $\mathbf{F}_{p_i}$. (Such a vector can easily be obtained from the reduced matrix resulting from step 1 above.)
\item Proceed to the next prime.
\end{enumerate}
These steps should be repeated until one of the following occurs:
\begin{enumerate}
\renewcommand{\theenumi}{\Alph{enumi}}
\item The product of the discarded primes exceeds $\Delta_n$. In this case $\mathrm{rank}\, A < n-1$ and $\Gamma$ is not a nut.
\item The product of the non-discarded primes exceeds $\Delta_n$.
\end{enumerate}
(The product of the primes need not be computed exactly. We may use an upper estimate for $\log_2 \Delta_n$ and divide this by 31 to obtain the required number of tries).
In case A, all primes that have been tried will also have been discarded. In case B, we still need to investigate the kernel vectors $v_i$ that were produced on the way. The graph $\Gamma$ will then be a nut if and only if there is no coordinate position where every vector $v_i$ is zero.
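A compact model of this scheme might look as follows (our sketch; the production code is the \textit{Nutgen} program described in the next section), with $\Delta_n$ passed in as the threshold:

```python
def rank_and_kernel_mod_p(A, p):
    """Gauss-Jordan over F_p.  Returns (rank, v), where v is a non-trivial
    kernel vector when the rank is exactly n - 1, and None otherwise."""
    n = len(A)
    M = [[x % p for x in row] for row in A]
    pivots = []
    r = 0
    for c in range(n):
        piv = next((i for i in range(r, n) if M[i][c]), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        inv = pow(M[r][c], -1, p)          # modular inverse (Python 3.8+)
        M[r] = [(inv * x) % p for x in M[r]]
        for i in range(n):
            if i != r and M[i][c]:
                f = M[i][c]
                M[i] = [(a - f * b) % p for a, b in zip(M[i], M[r])]
        pivots.append(c)
        r += 1
    if r != n - 1:
        return r, None
    free = next(c for c in range(n) if c not in pivots)
    v = [0] * n
    v[free] = 1
    for row, c in enumerate(pivots):
        v[c] = (-M[row][free]) % p
    return r, v

def is_nut_mod_p(A, primes, delta_n):
    """Steps 1-5 of the algorithm with stop conditions A and B."""
    n = len(A)
    prod_discarded = prod_kept = 1
    kernels = []
    for p in primes:
        rank, v = rank_and_kernel_mod_p(A, p)
        if rank == n:                      # step 2: non-singular
            return False
        if rank < n - 1:                   # step 3: discard this prime
            prod_discarded *= p
            if prod_discarded > delta_n:
                return False               # case A: rank A < n - 1
        else:                              # step 4: keep the kernel vector
            kernels.append(v)
            prod_kept *= p
            if prod_kept > delta_n:        # case B
                return all(any(vec[g] for vec in kernels) for g in range(n))
    raise ValueError("prime list exhausted before reaching Delta_n")
```

For $n \le 20$ the table of $\Delta_n$ above shows $\Delta_n < 2^{31}$, so a single prime just below $2^{31}$ already exceeds the threshold and the test reduces to Corollary~\ref{cor:mod_p}.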
\section{Testing and results}
\subsection{The numbers of nut graphs}
\label{subsect:nut_counts}
We implemented our generation algorithm for nut graphs described in Section~\ref{section:generation_nuts} in the programming language C and incorporated it in the program \textit{geng}~\cite{nauty-website, mckay_14} which takes care of the isomorphism rejection. Our implementation of this algorithm is called \textit{Nutgen}, and can be downloaded from~\cite{nutgen-site}.
In~\cite{fowler2014omni} all nut graphs up to 10 vertices were determined. Using \textit{Nutgen} we generated all non-isomorphic nut graphs up to 13 vertices and also went several steps further for nut graphs with a given lower bound on the girth. (The \textit{girth} is the length of the smallest cycle of a graph).
Table~\ref{table:number_of_nuts} shows the counts of the complete lists of nut graphs generated by our program.
Figure~\ref{fig:smallest_nuts_girth} shows drawings of the smallest nut graphs with respect to their girth.
\begin{table}
\centering
\footnotesize
\setlength{\tabcolsep}{4pt}
\begin{tabular}{|c || c | c | c | c | c | c | c | c |}
\hline
Order & Nut graphs & $g=3$ & $g = 4$ & $g = 5$ & $g = 6$ & $g = 7$ & $g = 8$ & $g \geq 9$\\
\hline
$0-6$ & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
7 & 3 & 3 & 0 & 0 & 0 & 0 & 0 & 0\\
8 & 13 & 13 & 0 & 0 & 0 & 0 & 0 & 0\\
9 & 560 & 560 & 0 & 0 & 0 & 0 & 0 & 0\\
10 & 12 551 & 12 551 & 0 & 0 & 0 & 0 & 0 & 0\\
11 & 2 060 490 & 2 060 474 & 14 & 2 & 0 & 0 & 0 & 0\\
12 & 208 147 869 & 208 147 847 & 20 & 2 & 0 & 0 & 0 & 0\\
13 & 96 477 266 994 & 96 477 263 085 & 3 889 & 20 & 0 & 0 & 0 & 0\\
14 & ? & ? & 18 994 & 35 & 0 & 0 & 0 & 0\\
15 & ? & ? & 3 640 637 & 1 021 & 5 & 1 & 0 & 0\\
16 & ? & ? & 48 037 856 & 2 410 & 5 & 0 & 0 & 0\\
17 & ? & ? & 10 722 380 269 & 88 818 & 154 & 1 & 0 & 0\\
18 & ? & ? & ? & 341 360 & 139 & 0 & 0 & 0\\
19 & ? & ? & ? & 14 155 634 & 6 109 & 36 & 0 & 1\\
20 & ? & ? & ? & 82 013 360 & 6 660 & 8 & 0 & 0\\
\hline
\end{tabular}
\caption{The numbers of nut graphs. Columns with a header of the form $g = k$
list the numbers of nut graphs with girth~$k$ at each order.}
\label{table:number_of_nuts}
\end{table}
\begin{figure}[h!t]
\centering
\includegraphics[width=0.9\textwidth]{Smallest_nuts_girth1.pdf}
\caption{Small nut graphs: (a) one of the 14 smallest nut graphs with girth 4 (with 11 vertices); (b) and (c) the two smallest nut graphs with girth 5 (with 11 vertices);
(d) shows one of the 6 smallest nut graphs with girth 6 (with 15 vertices); (e) the smallest nut graph with girth 7 (with 15 vertices).}
\label{fig:smallest_nuts_girth}
\end{figure}
In~\cite{fowler2014omni} all chemical nut graphs up to 16 vertices were determined.
Table~\ref{table:number_of_chemical_nuts} shows the counts of the complete lists of chemical nut graphs generated by our program and Figure~\ref{fig:smallest_chemical_nuts_girth} shows drawings of the smallest chemical nut graphs with respect to their girth.
The chemical relevance of girth is that rings of carbon atoms in networks are constrained by steric factors.
The ideal bond angle for unsaturated ($sp^2$ hybridised) carbon atoms is $120 \degree$. Departures from a ring size of six are
typically
punished with energy penalties that are especially severe for rings of size 3 and 4.
There are standard methods for comparing steric strain at an atom (vertex of the molecular graph)
(e.g.~\cite{haddon2001comment}),
which can be related to mathematical notions of combinatorial curvature for polyhedra~\cite{fowler2015distributed}.
\begin{table}
\centering
\small
\begin{tabular}{|c || c | c | c | c | c | c | c | c |}
\hline
Order & Nut graphs & $g=3$ & $g = 4$ & $g = 5$ & $g = 6$ & $g = 7$ & $g = 8$ & $g \geq 9$\\
\hline
$0-8$ & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
9 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0\\
10 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
11 & 8 & 7 & 1 & 0 & 0 & 0 & 0 & 0 \\
12 & 9 & 7 & 1 & 1 & 0 & 0 & 0 & 0\\
13 & 27 & 23 & 2 & 2 & 0 & 0 & 0 & 0\\
14 & 23 & 22 & 0 & 1 & 0 & 0 & 0 & 0\\
15 & 414 & 338 & 51 & 25 & 0 & 0 & 0 & 0\\
16 & 389 & 339 & 36 & 13 & 1 & 0 & 0 & 0\\
17 & 7 941 & 6 153 & 1 364 & 408 & 15 & 1 & 0 & 0\\
18 & 8 009 & 6 742 & 1 079 & 182 & 6 & 0 & 0 & 0\\
19 & 67 970 & 52 719 & 9 668 & 5 275 & 298 & 10 & 0 & 0\\
20 & 51 837 & 45 261 & 3 812 & 2 628 & 135 & 1 & 0 & 0\\
21 & 1 326 529 & 995 228 & 214 777 & 109 999 & 6 435 & 84 & 5 & 1\\
22 & 1 372 438 & 1 141 082 & 157 415 & 70 977 & 2 937 & 27 & 0 & 0\\
\hline
\end{tabular}
\caption{The numbers of chemical nut graphs.
Columns with a header of the form $g = k$
list the numbers of chemical nut graphs with girth~$k$ at each order.}
\label{table:number_of_chemical_nuts}
\end{table}
\begin{figure}[h!t]
\centering
\includegraphics[width=0.9\textwidth]{Smallest_chemical_nuts_girth1.pdf}
\caption{Figure~(a)-(e) show the smallest chemical nut graphs of girth 3, 4, 5, 6 and 7, respectively. They have 9, 11, 12, 16 and 17 vertices, respectively.}
\label{fig:smallest_chemical_nuts_girth}
\end{figure}
In~\cite{sciriha2007nonbonding} Sciriha and Fowler determined all cubic polyhedral nuts (i.e.\ cubic planar 3-connected graphs that are also nut
graphs) up to 24 vertices. Using the program \textit{plantri}~\cite{brinkmann_07} and our program to test if a graph is a nut graph as a filter, we determined all cubic polyhedral nuts up to 34 vertices. The counts of these graphs can be found in Table~\ref{table:number_of_cubic_polyhedra_nuts}.
Perhaps the most interesting feature of that table is the emergence at $n = 26$ of examples where the number of vertices is not divisible by $6$ (see discussion in~\cite{sciriha2007nonbonding}).
\begin{table}
\centering
\small
\begin{tabular}{| c | r | r |}
\hline
Order & cubic polyhedra & nut graphs\\
\hline
4 & 1 & 0 \\
6 & 1 & 0 \\
8 & 2 & 0 \\
10 & 5 & 0 \\
12 & 14 & 2 \\
14 & 50 & 0 \\
16 & 233 & 0 \\
18 & 1 249 & 285 \\
20 & 7 595 & 0 \\
22 & 49 566 & 0 \\
24 & 339 722 & 62 043 \\
26 & 2 406 841 & 4 \\
28 & 17 490 241 & 316 \\
30 & 129 664 753 & 16 892 864 \\
32 & 977 526 957 & 3 676 \\
34 & 7 475 907 149 & 447 790 \\
\hline
\end{tabular}
\caption{The numbers of cubic polyhedral nut graphs.}
\label{table:number_of_cubic_polyhedra_nuts}
\end{table}
In~\cite{sciriha2007nonbonding} Sciriha and Fowler also determined all nut fullerenes up to 120 vertices and showed that there are no {\it IPR} nut fullerenes up to at least 150 vertices.
Using the program \textit{buckygen}~\cite{BGM12, GM15}, we determined all nut fullerenes up to 250 vertices, and showed that there are no nut IPR fullerenes up to at least 320 vertices. The numbers of nut fullerenes up to 250 vertices can be found in Table~\ref{table:number_of_fullerene_nuts}. Only 173 out of the 21 627 759 707 fullerene isomers up to 250 vertices are nuts.
\begin{table}
\centering
\small
\begin{tabular}{|c | c |}
\hline
Order & Nut fullerenes\\
\hline
36 & 1 \\
42 & 1 \\
44 & 1 \\
48 & 2 \\
52 & 2 \\
60 & 6 \\
72 & 2 \\
82 & 1 \\
\hline
\end{tabular}\quad
\begin{tabular}{|c | c |}
\hline
Order & Nut fullerenes\\
\hline
84 & 8 \\
96 & 5 \\
108 & 7 \\
120 & 5 \\
132 & 14 \\
144 & 6 \\
156 & 11 \\
160 & 1 \\
\hline
\end{tabular}\quad
\begin{tabular}{|c | c |}
\hline
Order & Nut fullerenes\\
\hline
168 & 11 \\
180 & 16 \\
192 & 8 \\
204 & 19 \\
216 & 9 \\
228 & 21 \\
240 & 16 \\
& \\
\hline
\end{tabular}
\caption{The numbers of nut fullerenes up to 250 vertices.
For orders up to 250 where no count is listed,
the implication is that there is no nut fullerene of that order.}
\label{table:number_of_fullerene_nuts}
\end{table}
The nut graphs from Tables~\ref{table:number_of_nuts}-\ref{table:number_of_fullerene_nuts} can be
downloaded from the \textit{House of Graphs}~\cite{hog} at
\url{https://hog.grinvin.org/Nuts}.
As a partial check on the correctness of our implementations and results, we compared our lists of nut graphs to the known lists of nut graphs up to 10 vertices and of chemical nut graphs up to 16 vertices, which were determined in~\cite{fowler2014omni}. Furthermore, we also compared our results on cubic polyhedral nuts and fullerene nuts with the results from~\cite{sciriha2007nonbonding}.
In each case all results were in complete agreement.
\subsection{Chemical properties of nut graphs}
\label{subsect:nut_chem_props}
In this section we describe the result of our computations of chemically relevant properties on the complete lists of nut graphs determined in Section~\ref{subsect:nut_counts}.
\subsubsection{Nut graphs and non-bonding orbitals}
The chemical significance of nuts comes from the association of the zero eigenvalue of the adjacency matrix with
\textit{non-bonding} orbitals (NBO) in molecules and with the \textit{Fermi level} in materials.
A non-bonding orbital is balanced between stabilisation (bonding) and destabilisation (anti-bonding) of a molecule by the presence of an electron.
The Fermi level in a conductor corresponds to the energy
that separates occupied from empty bands of energy levels
at absolute zero.
The association between adjacency eigenvalues and eigenvectors and chemical concepts is straightforward:
eigenvalues correspond to orbital energies, and eigenvectors to molecular orbitals. Of particular importance are
the HOMO and LUMO ({\it highest occupied} and {\it lowest unoccupied} molecular orbitals, respectively).
In H{\"u}ckel theory of an all-carbon material, the Fermi level corresponds to the mean of HOMO and LUMO energies.
A neutral carbon network of $n$ centres has $n$ electrons distributed over the centres in delocalised
orbitals according to three rules: the Aufbau and Pauli principles and Hund's rule of maximum multiplicity. In short, electrons are assigned to eigenvectors in decreasing order of eigenvalue (Aufbau), with at most two electrons per eigenvector (Pauli) and, whenever a degeneracy (multiplicity) is encountered, electrons are spread out across eigenvectors/orbitals as far as possible, and with parallel spins as far as possible
(Hund's rule).
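The filling rules above can be sketched computationally. The routine below is a sketch of our own (not code from the paper); it returns the \emph{average} occupation number of each orbital, which is exactly the weight used in the H\"uckel charge-density sums described below.

```python
import numpy as np

def average_occupations(eigenvalues, n_electrons, tol=1e-8):
    """Assign electrons to orbitals in decreasing order of eigenvalue (Aufbau),
    at most two per orbital (Pauli), spreading electrons evenly over each
    degenerate shell (Hund); returns the average occupation of each orbital."""
    vals = np.sort(np.asarray(eigenvalues, dtype=float))[::-1]  # descending
    occ = np.zeros_like(vals)
    remaining, i = n_electrons, 0
    while i < len(vals) and remaining > 0:
        j = i
        while j < len(vals) and abs(vals[j] - vals[i]) < tol:
            j += 1                      # j - i = size of the degenerate shell
        put = min(remaining, 2 * (j - i))
        occ[i:j] = put / (j - i)        # average occupation within the shell
        remaining -= put
        i = j
    return vals, occ
```

For instance, four electrons in levels $1, 0.5, 0.5, 0$ give occupations $2, 1, 1, 0$: the degenerate pair is half-filled, with parallel spins in the physical picture.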
Each electron carries an up or down spin, with component along a fixed axis
$\pm {\frac{1}{2}}\hbar$,
where $\hbar$ is Planck's constant divided by $2\pi$; spin-up and spin-down possibilities are known as $\alpha$ and $\beta$, respectively.
The \textit{occupation number} of a given orbital/eigenvector is therefore 2, 1 or 0, accordingly as it contains spin-paired electrons, a single electron or no electrons.
The physical significance of occupation of an orbital is that the \textit{charge density} at each site is found in H\"uckel theory by summing the squares of eigenvector entries over each eigenspace, weighting the sum by the average occupation number of the space. Electrons with $\alpha$ and $\beta$ spin contribute equally to charge density.
Likewise, the \textit{spin density} is determined by a
calculation involving squared eigenvector entries, but with $\alpha$ and $\beta$ spins contributing with opposite sign.
Thus, spin density is calculated from the squared entries in the eigenvectors, summed over any partially occupied eigenspace, weighted by the fraction
$(n_\alpha - n_\beta) / (n_\alpha+n_\beta)$, where $n_{\alpha/\beta}$ is the number of electrons
of $\alpha/\beta$ spin in the eigenspace.
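These density formulas are easy to illustrate numerically. In the sketch below (our own; the $4$-regular circulant graph $C_8(1,2)$ used as a test case is our choice of nut graph, verified by the code itself, not an example from the text), a single $\alpha$ electron occupies a non-degenerate NBO, so $n_\alpha = 1$, $n_\beta = 0$, the weighting fraction is $1$, and the spin density at each site is just the squared entry of the normalised kernel eigenvector.

```python
import numpy as np

def circulant(n, jumps):
    # adjacency matrix of the circulant graph C_n(jumps)
    A = np.zeros((n, n))
    for i in range(n):
        for j in jumps:
            A[i, (i + j) % n] = A[i, (i - j) % n] = 1
    return A

def nbo_spin_density(A, tol=1e-8):
    """Spin density per site for a single (alpha) electron in a
    non-degenerate NBO: squared entries of the normalised kernel vector."""
    vals, vecs = np.linalg.eigh(A)
    k = int(np.argmin(np.abs(vals)))
    assert abs(vals[k]) < tol, "no zero eigenvalue"
    v = vecs[:, k]
    return v ** 2 / np.dot(v, v)
```

For $C_8(1,2)$ the kernel vector is proportional to $(1,-1,1,\dots,-1)$, so the spin density is $1/8$ at every site: complete delocalisation.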
Spin density has a particular chemical significance in that it indicates distribution of radical character. A radical is a molecule with unpaired electron spin distributed over its molecular framework. This has implications for reactivity and for physical measurements such as \textit{esr} (electron spin resonance) coupling constants~\cite{symons1978chemical}.
Radicals are typically reactive, and in the simplest picture, the most reactive sites within the radical will be those of highest spin density.
A nut graph has non-zero entries on all vertices in the non-trivial nullspace vector. Single occupation of the corresponding orbital therefore gives a distribution of spin density across the whole framework of unsaturated carbon atoms. Most radicals have a mixture of zero and non-zero spin densities across the framework. Nut graphs in this sense are the models for extreme delocalisation of spin density. As we will see below, chemical nut graphs in an electron configuration where the NBO is the singly occupied HOMO have at best a ratio of $4$ between highest and lowest spin densities. In a chemical non-nut graph these NBO spin densities are zero for some vertex or vertices.
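The defining conditions of a nut graph (nullity exactly one, and a kernel eigenvector with full support) are straightforward to test numerically. A minimal sketch of our own follows; the circulant $C_8(1,2)$, the $4$-cycle and the path $P_3$ used as test cases are our own illustrative choices, not examples from the text.

```python
import numpy as np

def circulant(n, jumps):
    # adjacency matrix of the circulant graph C_n(jumps)
    A = np.zeros((n, n))
    for i in range(n):
        for j in jumps:
            A[i, (i + j) % n] = A[i, (i - j) % n] = 1
    return A

def is_nut(A, tol=1e-8):
    """A graph is a nut graph iff its adjacency matrix has nullity exactly
    one and the kernel eigenvector has no zero entry."""
    vals, vecs = np.linalg.eigh(A)
    zero = np.abs(vals) < tol
    if zero.sum() != 1:
        return False
    kernel = vecs[:, zero].ravel()
    return bool(np.all(np.abs(kernel) > tol))
```

$C_8(1,2)$ passes: its unique zero eigenvalue (at $k = n/2$ in the circulant spectrum) has the full-support eigenvector $(1,-1,1,\dots,-1)$. By contrast, $C_4$ fails on nullity (two zero eigenvalues) and $P_3$ fails because its kernel vector $(1,0,-1)$ has a zero entry.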
The two parameters of chemical significance when considering a nut graph are
therefore
the \textit{spectral position} and the \textit{dispersion} of entries of
the non-trivial nullspace vector of the nut graph.
The position of the zero eigenvalue in the spectrum of the adjacency matrix is important, because occupation of the NBO will have most effect on the properties of the $\pi$-system if the NBO is either the HOMO or the LUMO, indicating interesting spin-distributions in systems with total charges near to neutrality. A useful indicator for a nut graph is therefore $\delta q$, defined as the charge required for half occupation of the zero eigenvalue vector. If the nut graph has $n_+$ strictly positive and $n_-$ strictly negative eigenvalues, simple counting gives
$$\delta q = n_- - n_+ = n - 2n_+ -1 = 2n_- - n +1 $$
for the charge in units of $e$, the proton charge.
For example, the $9$-vertex chemical nut graph with spectrum
$\theta_+,2,1,\phi\sp{-1},0,-1,\theta_-,-\phi,-2$,
where $\theta_\pm = (1\pm\sqrt{13})/2$, carries $9$ electrons
and hence is neutral at half-filling of the NBO.
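The charge $\delta q$ can be read off directly from the inertia of the adjacency matrix. The sketch below is our own; the test graph $C_8(1,2)$ is our own choice of nut graph, with $n_+ = 3$ and $n_- = 4$, hence $\delta q = 1$.

```python
import numpy as np

def circulant(n, jumps):
    # adjacency matrix of the circulant graph C_n(jumps)
    A = np.zeros((n, n))
    for i in range(n):
        for j in jumps:
            A[i, (i + j) % n] = A[i, (i - j) % n] = 1
    return A

def delta_q(A, tol=1e-8):
    """Charge (in units of e) required for half occupation of the NBO
    of a nullity-one n-centre system: delta_q = n_- - n_+."""
    vals = np.linalg.eigvalsh(A)
    n_plus = int((vals > tol).sum())
    n_minus = int((vals < -tol).sum())
    n = len(vals)
    dq = n_minus - n_plus
    # the two equivalent forms from the text, valid when the nullity is one
    assert dq == n - 2 * n_plus - 1 == 2 * n_minus - n + 1
    return dq
```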
The defining characteristic of nut graphs is that all vertices carry a non-zero entry in the unique nullspace eigenvector, and hence spin density is distributed across the whole framework. All chemical nut graphs will have non-zero spin density at all sites, if the NBO is half occupied. A simple indicator of the dispersion of the spin-density distribution is the ratio $r$ of magnitudes of largest and smallest entries in the nullspace eigenvector. The function $r^2$ gives the expected ratio of spin densities
at most- and least-spin-rich sites in the molecule. The following is straightforward to prove.
\begin{theorem}\label{thm:r_at_least_2}
A chemical nut graph has $r^2 \geq 4$.
\end{theorem}
\begin{proof}
A chemical graph has maximum degree at most $3$, but a nut graph has minimum degree at least $2$ and is not a cycle, so every chemical nut graph has a
vertex, say $v$, of degree $3$. Call the kernel-vector entries on the neighbours of $v$
$\{a, b, c\}$; choose a normalisation such that $a = 1$, $b=x$, $c = -1-x$ with $x > 0$.
Then, either $x \geq 1$ and $| c/a | = |1+x| \geq 2$,
or $0 < x < 1$ and $| c/b | = |(1+x)/x| >2$.
Hence, $r\sp2 \geq 4$ for any chemical nut graph.
\end{proof}
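The ratio $r$ itself is easy to compute. Note that the degree bound in Theorem~\ref{thm:r_at_least_2} is essential: the $4$-regular circulant nut graph $C_8(1,2)$ (our own example, not a chemical graph) attains $r = 1$, so $r^2 < 4$ becomes possible once vertices of degree greater than $3$ are allowed.

```python
import numpy as np

def circulant(n, jumps):
    # adjacency matrix of the circulant graph C_n(jumps)
    A = np.zeros((n, n))
    for i in range(n):
        for j in jumps:
            A[i, (i + j) % n] = A[i, (i - j) % n] = 1
    return A

def kernel_ratio(A):
    """r: ratio of largest to smallest magnitude of the entries of the
    (unique) kernel eigenvector of a nut graph."""
    vals, vecs = np.linalg.eigh(A)
    v = np.abs(vecs[:, int(np.argmin(np.abs(vals)))])
    return float(v.max() / v.min())
```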
A further aspect of nut graphs for a specific subset of chemical graphs is the classification of regular cubic nut
graphs. This is based on the distribution of entries in the unique nullspace eigenvector on the three neighbours of each vertex. A uniform nut graph has entries $a\{2, -1, -1\}$ around every vertex (where $a$ is a single scaling factor). A balanced graph has entries in ratio $\{2, -1, -1\}$, but with different scaling factors. All other cubic nut graphs are `just nuts'.
Dispersion is at a minimum for uniform nuts
($r^2 = 4$);
for a balanced nut $r^2$ is a power of $2$;
for a simple nut (a `just nut') it can be large.
Increase of $r$ corresponds to a reduction of the delocalisation of spin and charge densities for the orbital.
Nut graphs also figure in a different application of graph theory in chemistry: the source-and-sink-potential (SSP) model~\cite{goyer2007source,pickup2008analytical} of ballistic molecular conduction.
It has recently emerged that in this model
the transmission of an electron through a $\pi$ framework at the Fermi level is determined by nullities of four graphs~\cite{fowler2009selection, fowler2009conduction}: $G, G-\bar L, G-\bar R$ and $G-\bar L-\bar R$. Here $G$ is the molecular graph and $\bar L$ and $\bar R$ are the vertices of $G$ that are next to the leads in the molecular circuit.
In this model, nut graphs are \textit{strong omniconductors}, that is to say they conduct at the Fermi level irrespective of the choice of vertices $\bar L \neq \bar R$ or $\bar L = \bar R$~\cite{fowler2013omni}. These have a special significance in that connection of a strong omniconductor
molecule to two leads in any manner whatsoever leads to a conducting device, in the simple
(empty-molecule) SSP model.
It has been proved in~\cite{fowler2014omni} that nut graphs are exactly the strong omniconductors of nullity one.
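The nullities entering the SSP selection rules are plain matrix computations. The sketch below (our own; the test graph $C_8(1,2)$ is our own choice of nut graph) also illustrates the known characterisation that in a nut graph every vertex-deleted subgraph $G - v$ has nullity zero, i.e.\ every vertex is a core vertex.

```python
import numpy as np

def circulant(n, jumps):
    # adjacency matrix of the circulant graph C_n(jumps)
    A = np.zeros((n, n))
    for i in range(n):
        for j in jumps:
            A[i, (i + j) % n] = A[i, (i - j) % n] = 1
    return A

def nullity(A, tol=1e-8):
    # number of (numerically) zero adjacency eigenvalues
    return int((np.abs(np.linalg.eigvalsh(A)) < tol).sum())

def vertex_deleted(A, v):
    # adjacency matrix of G - v
    keep = [i for i in range(len(A)) if i != v]
    return A[np.ix_(keep, keep)]
```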
\subsubsection{Position in spectrum of the zero eigenvalue}
Tables~\ref{table:NBO_general_nuts}-\ref{table:NBO_cubic_polyhedra} show frequency tables of the position of the zero eigenvalue within the spectrum of the general nut graphs, chemical nut graphs and cubic polyhedral nut graphs, respectively.
To determine the data on the NBO in Tables~\ref{table:NBO_general_nuts}-\ref{table:NBO_cubic_polyhedra}, the eigenvalues $\lambda_1,\ldots,\lambda_n$
were sorted in descending order (i.e.\ $\lambda_1$ is the largest eigenvalue and $\lambda_n$ the smallest).
The tables report numbers of cases where the zero eigenvalue is at position
${\left\lceil\frac n2\right\rceil} +k$ for the nut graphs of order $n$. For the sets of graphs and ranges of $n$ considered here, $k$ falls between
$-3$ and $+3$ ($k = +3$ for some fullerenes).
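The spectral position and the offset $k$ reported in the tables can be computed as follows (a sketch of our own; the test case $C_8(1,2)$, with NBO exactly at position $\lceil n/2 \rceil$, i.e.\ $k = 0$, is our own choice of nut graph).

```python
import numpy as np
from math import ceil

def circulant(n, jumps):
    # adjacency matrix of the circulant graph C_n(jumps)
    A = np.zeros((n, n))
    for i in range(n):
        for j in jumps:
            A[i, (i + j) % n] = A[i, (i - j) % n] = 1
    return A

def nbo_position(A):
    """1-based position of the zero eigenvalue in the descending
    spectrum, together with the offset k from ceil(n/2)."""
    vals = np.sort(np.linalg.eigvalsh(A))[::-1]
    pos = int(np.argmin(np.abs(vals))) + 1
    return pos, pos - ceil(len(vals) / 2)
```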
The chemical implication is that there are typically
many molecules based on
chemical nut graphs where a radical with fully
delocalised spin density and small net charge
would be produced by half-occupation of the kernel eigenvector:
for odd $n$, these have the NBO at position $\left\lceil\frac n2\right\rceil$
and correspond to the neutral molecule;
for even $n$, these have the NBO at position
$\left\lceil\frac n2\right\rceil$ and charge $+1$ or at
$\left\lceil\frac n2\right\rceil+1$ and charge $-1$.
However, the tests also showed that for all nut fullerenes up to 250 vertices
the NBO is at position $\left\lceil\frac n2\right\rceil + 3$ (i.e.\ $k=3$), except for one fullerene on 42 vertices and
one on 60 vertices where the NBO is at position $\left\lceil\frac n2\right\rceil + 2$ ($k=2$), so the charges at which
nut fullerene radicals
could display this delocalisation are further from neutrality.
Amongst chemical graphs, fullerenes are atypical in that they tend to have $n_+ > n_-$, and hence occupation of the zero eigenvalue
often corresponds to a significant negative molecular charge.
\newcommand*{\displaystyle{\left\lceil\frac n2\right\rceil}}{\displaystyle{\left\lceil\frac n2\right\rceil}}
\begin{table}
\centering
\footnotesize
\begin{tabular}{|c || c | c | c | c | c | c || c |}
\hline
Order \rule[-2ex]{0pt}{5.5ex}& $\displaystyle{\left\lceil\frac n2\right\rceil} \!-\! 3$ & $\displaystyle{\left\lceil\frac n2\right\rceil}\!-\!2$ &$\displaystyle{\left\lceil\frac n2\right\rceil}\!-\!1$ & $\displaystyle{\left\lceil\frac n2\right\rceil}$ & $\displaystyle{\left\lceil\frac n2\right\rceil}\!+\! 1 $ & $\displaystyle{\left\lceil\frac n2\right\rceil}\!+\!2$ & Total\\
\hline
7 & & & & 3 & & & 3 \\
8 & & & & 13 & & & 13 \\
9 & & 1 & 65 & 494 & & & 560 \\
10 & & 4 & 295 & 12 169 & 83 & & 12 551 \\
11 & 14 & 2 597 & 316 473 & 1 741 400 & 6 & & 2 060 490 \\
12 & 55 & 29 313 & 8 879 721 & 196 259 526 & 2 979 253 & 1 & 208 147 869 \\
\hline
\end{tabular}
\caption{Frequency table of the position of the zero eigenvalue
within the spectrum of the nut graphs.
The columns with a header of the form $\lceil n/2 \rceil + k$ contain the numbers of nut graphs of order $n$ where the NBO is at position $\lceil n/2 \rceil + k$.}
\label{table:NBO_general_nuts}
\end{table}
\begin{table}
\centering
\small
\begin{tabular}{|c || c | c | c | c | c || c |}
\hline
Order\rule[-2ex]{0pt}{5.5ex} & $\displaystyle{\left\lceil\frac n2\right\rceil}-2$ &$\displaystyle{\left\lceil\frac n2\right\rceil}- 1$ & $\displaystyle{\left\lceil\frac n2\right\rceil}$ & $\displaystyle{\left\lceil\frac n2\right\rceil} + 1 $ & $\displaystyle{\left\lceil\frac n2\right\rceil} + 2$ & Total\\
\hline
9 & & & 1 & & & 1 \\
10 & & & & & & 0 \\
11 & & & 8 & & & 8 \\
12 & & & 6 & 3 & & 9 \\
13 & & & 27 & & & 27 \\
14 & & 1 & 21 & 1 & & 23 \\
15 & & 5 & 409 & & & 414 \\
16 & & 5 & 311 & 73 & & 389 \\
17 & 3 & 173 & 7 754 & 11 & & 7 941 \\
18 & 1 & 112 & 5 769 & 2 121 & 6 & 8 009 \\
19 & 22 & 1 140 & 66 766 & 42 & & 67 970 \\
20 & 9 & 761 & 42 203 & 8 859 & 5 & 51 837 \\
21 & 194 & 18 986 & 1 306 168 & 1 181 & & 1 326 529 \\
22 & 107 & 13 788 & 1 024 175 & 334 132 & 236 & 1 372 438 \\
\hline
\end{tabular}
\caption{Frequency table of the position of the zero eigenvalue
within the spectrum of the chemical nut graphs.
Column headings as in Table {\ref{table:NBO_general_nuts}}.}
\label{table:NBO_chemical_nuts}
\end{table}
\begin{table}
\centering
\small
\begin{tabular}{|c || c | c | c | c | c || c |}
\hline
Order\rule[-2ex]{0pt}{5.5ex} & $\displaystyle{\left\lceil\frac n2\right\rceil}-2$ &$\displaystyle{\left\lceil\frac n2\right\rceil}- 1$ & $\displaystyle{\left\lceil\frac n2\right\rceil}$ & $\displaystyle{\left\lceil\frac n2\right\rceil} + 1 $ & $\displaystyle{\left\lceil\frac n2\right\rceil} + 2$ & Total\\
\hline
12 & & & 2 & & & 2 \\
18 & & 7 & 262 & 16 & & 285 \\
24 & 4 & 3 022 & 54 699 & 4 317 & 1 & 62 043 \\
26 & & & 1 & 2 & 1 & 4 \\
28 & & 128 & 187 & 1 & & 316 \\
30 & 18 486 & 1 363 546 & 14 169 947 & 1 339 896 & 989 & 16 892 864 \\
32 & 78 & 442 & 860 & 2 150 & 146 & 3 676 \\
34 & 108 & 197 257 & 249 825 & 600 & & 447 790 \\
\hline
\end{tabular}
\caption{Frequency table of the position of the
zero eigenvalue within the spectrum of the cubic polyhedral nut graphs.
Column headings as in Table {\ref{table:NBO_general_nuts}}.}
\label{table:NBO_cubic_polyhedra}
\end{table}
\subsubsection{Ratio of the largest to smallest kernel eigenvector entry}
In this section we tabulate and discuss the ratio of the largest to smallest entry (in absolute value) in the eigenvector that
corresponds to the zero eigenvalue for nut graphs.
Since there are too many values to list the counts for each ratio and order, we only list the smallest ratio, largest ratio and the number of graphs with the smallest and largest ratio for each order. These results can be found in Tables~\ref{table:ratio_general}-\ref{table:ratio_cubic_polyhedra} for nut graphs, chemical nut graphs and cubic polyhedral nut graphs, respectively. We also found several nut graphs, chemical nut graphs and cubic polyhedral nut graphs for which the ratio is not an integer.
\begin{table}
\centering
\small
\begin{tabular}{|c || c | c | c | c |}
\hline
\multirow{2}{*}{Order} & \multirow{2}{*}{min $r$} & frequency & \multirow{2}{*}{max $r$} & frequency\\
& & min $r$ & & max $r$\\
\hline
7 & 1 & 3 & 1 & 3 \\
8 & 1 & 7 & 2 & 6 \\
9 & 1 & 83 & 4 & 4 \\
10 & 1 & 988 & 6 & 1 \\
11 & 1 & 34 910 & 12 & 9 \\
12 & 1 & 1 739 859 & 16 & 13 \\
\hline
\end{tabular}
\caption{Counts of the smallest ratio, largest ratio and the number of graphs with the smallest and largest ratio for nut graphs. (Here $r$ stands for the magnitude
of the ratio of largest to smallest entry in the eigenvector that corresponds with the zero eigenvalue).}
\label{table:ratio_general}
\end{table}
\begin{table}
\centering
\small
\begin{tabular}{|c || c | c | c | c |}
\hline
\multirow{2}{*}{Order} & \multirow{2}{*}{min $r$} & frequency & \multirow{2}{*}{max $r$} & frequency\\
& & min $r$ & & max $r$\\
\hline
9 & 2 & 1 & 2 & 1 \\
10 & - & - & - & - \\
11 & 2 & 6 & 4 & 1 \\
12 & 2 & 9 & 2 & 9 \\
13 & 2 & 7 & 4 & 8 \\
14 & 2 & 9 & 4 & 6 \\
15 & 2 & 80 & 8 & 2 \\
16 & 2 & 195 & 4 & 73 \\
17 & 2 & 1 284 & 10 & 12 \\
18 & 2 & 4 151 & 10 & 1 \\
19 & 2 & 1 822 & 15 & 5 \\
20 & 2 & 3 872 & 13 & 2 \\
21 & 2 & 32 278 & 22 & 7 \\
22 & 2 & 149 748 & 18 & 4 \\
\hline
\end{tabular}
\caption{Counts of the smallest ratio, largest ratio and the number of graphs with the smallest and largest ratio for chemical nut graphs. Conventions as in
Table~{\ref{table:ratio_general}}.}
\label{table:ratio_chemical}
\end{table}
An observation from Table~\ref{table:ratio_chemical} is that for every order $n$ in range for which there is a
chemical nut graph, the bound $r = 2$ is realised (recall from Theorem~\ref{thm:r_at_least_2} that chemical nut graphs have $r \ge 2$). We now show that this is always the case. In fact, the statistical evidence suggests that $r = 2$ is a common value.
\begin{theorem}
There is a chemical nut graph with $r = 2$ for every order $n \ge 9$ ($n \ne 10$).
\end{theorem}
\begin{proof}
An edge of a nut graph carries entries $a-b$ in the kernel eigenvector, and can be expanded by insertion of
a $P_4$ unit to give a kernel eigenvector of the $(n+4)$-vertex graph with entries
$a-b-{\overline a}-{\overline b}-a-b$, where $\overline a$ denotes $-a$; the expanded graph is still a nut, and as no new entry magnitudes have been
created, $r$ is conserved.
Hence, to guarantee existence of chemical nut graphs with $r = 2$ for all $n \ge 9$ ($n \ne 10$)
it is sufficient to have one such graph at each of $9, 11, 12, 14$, which is guaranteed by the data in
Table~\ref{table:ratio_chemical}.
\end{proof}
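The $P_4$ edge-expansion used in the proof is easy to implement and check numerically. In the sketch below (our own), the seed graph $C_8(1,2)$ is our own choice of nut graph, and the predicted kernel pattern $a-b-\overline a-\overline b-a-b$ along the expanded edge is verified directly.

```python
import numpy as np

def circulant(n, jumps):
    # adjacency matrix of the circulant graph C_n(jumps)
    A = np.zeros((n, n))
    for i in range(n):
        for j in jumps:
            A[i, (i + j) % n] = A[i, (i - j) % n] = 1
    return A

def expand_edge_p4(A, u, v):
    """Delete edge (u, v) and insert the path u - w1 - w2 - w3 - w4 - v.
    A kernel vector carrying (a, b) on (u, v) extends by the entries
    (b, -a, -b, a) on w1..w4."""
    n = len(A)
    B = np.zeros((n + 4, n + 4))
    B[:n, :n] = A
    B[u, v] = B[v, u] = 0           # delete the old edge
    chain = [u, n, n + 1, n + 2, n + 3, v]
    for s, t in zip(chain, chain[1:]):
        B[s, t] = B[t, s] = 1       # add the P4 path
    return B
```

Each inserted vertex has exactly two neighbours whose entries cancel, while the endpoint equations are unchanged, so the extended vector again lies in the kernel.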
\begin{table}
\centering
\small
\begin{tabular}{|c || c | c | c | c |}
\hline
\multirow{2}{*}{Order} & \multirow{2}{*}{min $r$} & frequency & \multirow{2}{*}{max $r$} & frequency\\
& & min $r$ & & max $r$\\
\hline
12 & 2 & 2 & 2 & 2 \\
18 & 2 & 235 & 7 & 2 \\
24 & 2 & 35 632 & 20 & 2 \\
26 & 4 & 2 & 12 & 1 \\
28 & 5 & 7 & 18 & 1 \\
30 & 2 & 6 535 314 & 52 & 1 \\
32 & 4 & 803 & 25 & 1 \\
34 & 4 & 860 & 49 & 2 \\
\hline
\end{tabular}
\caption{Counts of the smallest ratio, largest ratio and the number of graphs with the smallest and largest ratio for cubic polyhedral nut graphs. Conventions as in
Table~{\ref{table:ratio_general}}.}
\label{table:ratio_cubic_polyhedra}
\end{table}
The chemical nut graphs for which the NBO is at position $\lceil n/2 \rceil$ and which have the smallest ratio (i.e.\ 2) are of special chemical interest, since these will have the smoothest distribution of spin density in a molecular graph with an electron count close to neutrality when this eigenvector
is half occupied. These counts are listed in Table~\ref{table:NBO_n2_ratio2}.
Conversely, nut graphs with maximum $r$ are the nut graphs that are in a sense
as close as possible to losing their nut status.
\begin{table}
\centering
\small
\begin{tabular}{|c | c |}
\hline
Order & Counts\\
\hline
9 & 1 \\
10 & - \\
11 & 6 \\
12 & 6 \\
13 & 7 \\
14 & 7 \\
15 & 77 \\
\hline
\end{tabular}\quad
\begin{tabular}{|c | c |}
\hline
Order & Counts\\
\hline
16 & 142 \\
17 & 1 188 \\
18 & 2 753 \\
19 & 1 656 \\
20 & 2 773 \\
21 & 29 932 \\
22 & 98 087 \\
\hline
\end{tabular}
\caption{The number of chemical nut graphs for which the NBO is at position $\displaystyle{\left\lceil\frac n2\right\rceil}$
and the ratio $r$ of the largest to smallest entry in the eigenvector
which corresponds with the zero eigenvalue is minimum (i.e.\ $r=2$).}
\label{table:NBO_n2_ratio2}
\end{table}
\medskip
\noindent
\textit{Acknowledgements:}
Patrick W.\ Fowler is supported by the University of Sheffield and the Royal Society/Leverhulme Foundation. Jan Goedgebeur is supported by a Postdoctoral Fellowship of the Research Foundation Flanders (FWO).
Most computations for this work were carried out using the Stevin Supercomputer Infrastructure at Ghent University.
\section{\bf Introduction}
Cosection localization via holomorphic two-forms was introduced by Lee and Parker \cite{LP}
in symplectic geometry and by Kiem and J. Li \cite{KL1, KL2} in algebraic geometry.
It is a localization theorem on virtual cycles such as the virtual fundamental cycles
arising from Gromov-Witten theory. Using this technique, Kiem and J. Li \cite{KL1, KL2}
studied the Gromov-Witten theory of minimal surfaces of general type, and J. Li and
the second author \cite{LL} computed the quantum boundary operator for the Hilbert
schemes of points on surfaces. Cosection localization also played a pivotal role in
\cite{LQ2} in determining the structure of genus-$0$ extremal Gromov-Witten invariants
of these Hilbert schemes and in verifying
the Cohomological Crepant Resolution Conjecture for the Hilbert-Chow morphisms.
In this paper, we study the Gromov-Witten theory of the Hilbert schemes $\Xn$ of points
on smooth projective surfaces $X$ with positive geometric genus
$p_g = h^0(X, \mathcal O_X(K_X))$.
Let $\Mbar_{g, r}(\Xn, \beta)$ be the moduli space of stable maps $\mu$ from
genus-$g$ nodal curves $D$ with $r$-marked points and with $\mu_*[D] = \beta$ to $\Xn$.
Let $C$ be a smooth curve in $X$, and fix distinct points
$x_1, \ldots, x_{n-1} \in X-C$. Define
\begin{eqnarray*}
\beta_n &=& \left \{ \xi + x_2 + \ldots + x_{n-1} \in \Xn | \Supp(\xi)
= \{x_1\} \right \}, \\
\beta_C &=& \left \{ x + x_1 + \ldots + x_{n-1} \in \Xn | \, x \in C \right \}.
\end{eqnarray*}
By linearity, extend the notion $\beta_C$ to an arbitrary divisor $C$
(see \eqref{BetaCHei} for details).
Using the cosection localization technique, we obtain the following vanishing result.
\begin{theorem} \label{Intro-ThmVanish}
Let $X$ be a simply connected surface admitting a holomorphic differential two-form
with irreducible zero divisor. If $\beta \ne d_0 \beta_{K_X} - d \beta_n$ for
some integer $d$ and rational number $d_0 \ge 0$,
then all the Gromov-Witten invariants of $\Xn$ defined via the moduli space
$\Mbar_{g, r}(\Xn, \beta)$ vanish.
\end{theorem}
When $n=2$ and $X$ is further assumed to be a minimal surface of general type,
the possible non-vanishing Gromov-Witten invariants
$\langle \alpha_1, \ldots, \alpha_r \rangle_{g, \beta}^{\Xtwo}$ (see \eqref{def-GW}
for the precise definition) can be reduced to the $1$-point invariants
calculated in \cite{LQ1} and the following two types of invariants:
\begin{enumerate}
\item[{\rm (i)}] $\langle 1 \rangle_{1, d\beta_2}^{\Xtwo}$ with $d \ge 1$;
\item[{\rm (ii)}] $\langle 1 \rangle_{0, \,\, \beta_{K_X} - d\beta_2}^{\Xtwo}$ with $K_X^2 = 1$,
$1 \le p_g \le 2$ and $d \le 3$.
\end{enumerate}
These two types of Gromov-Witten invariants are investigated via detailed analyses
of the corresponding virtual fundamental cycles.
\begin{theorem} \label{Intro-theorem_ii}
Let $d \ge 1$. Let $X$ be a smooth projective surface. Then,
$$
\langle 1 \rangle_{1, d\beta_2}^{\Xtwo} = {K_X^2 \over 12d}.
$$
\end{theorem}
It follows that if $X$ is a simply connected surface admitting a holomorphic differential
two-form with irreducible zero divisor and satisfying $K_X^2 > 1$,
then all the Gromov-Witten invariants (without descendant insertions)
of $\Xtwo$ can be determined; moreover, the quantum cohomology of $\Xtwo$
coincides with its quantum corrected cohomology \cite{LQ1, LQ2}.
\begin{theorem} \label{Intro-theorem_iii}
Let $X$ be a simply connected minimal surface of general type with
$K_X^2 = 1$ and $1 \le p_g \le 2$ such that every member in $|K_X|$ is smooth. Then,
\begin{enumerate}
\item[{\rm (i)}] $\Mbar_{0, 0}(\Xtwo, \beta_{K_X} - 3\beta_2) \cong |K_X| \cong \Pee^{p_g-1}$;
\item[{\rm (ii)}] $\langle 1 \rangle_{0, \,\, \beta_{K_X} - 3 \beta_2}^{\Xtwo}
= (-1)^{\chi(\mathcal O_X)}$.
\end{enumerate}
\end{theorem}
We remark that our formula in Theorem~\ref{Intro-theorem_iii}~(ii) is consistent with
$$
\langle 1 \rangle_{K_X^2+1, \,\, K_X}^X = (-1)^{\chi(\mathcal O_X)}
$$
which is a well-known formula of Taubes \cite{Tau}
obtained via an interplay between Seiberg-Witten theory and Gromov-Witten theory.
This paper is organized as follows. In \S 2, we briefly review Gromov-Witten theory.
In \S 3, Theorem~\ref{Intro-ThmVanish} is proved. In \S 4, we compute some intersection
numbers on certain moduli spaces of genus-$1$ stable maps. In \S 5,
we study the homology classes of curves in Hilbert schemes of
points on surfaces. In \S 6, using the results from the previous two sections,
we verify Theorem~\ref{Intro-theorem_ii} and Theorem~\ref{Intro-theorem_iii}.
\medskip\noindent
{\bf Acknowledgment}: We thank Professors Dan Edidin and Jun Li for their valuable help.
The third author also thanks HKUST and Sun Yat-Sen University for their hospitality
and financial support during his visits in Summers 2013 and 2014.
\section{Stable maps and Gromov-Witten invariants}
\label{Stable}
In this section, we will briefly review the notions of stable maps
and Gromov-Witten invariants. We will also recall a result of Behrend from \cite{Beh}.
Let $Y$ be a smooth projective variety.
An $r$-pointed stable map to $Y$ consists of
a complete nodal curve $D$ with $r$ distinct ordered smooth points
$p_1, \ldots, p_r$ and a morphism $\mu: D \to Y$ such that
the data $(\mu, D, p_1, \ldots, p_r)$ has only finitely many automorphisms.
In this case, the stable map is denoted by
$[\mu: (D; p_1, \ldots, p_r) \to Y]$.
For a fixed homology class $\beta \in H_2(Y, \mathbb Z)$,
let $\overline {\frak M}_{g, r}(Y, \beta)$ be the coarse moduli space
parameterizing all the stable maps $[\mu: (D; p_1, \ldots, p_r) \to Y]$
such that $\mu_*[D] = \beta$ and the arithmetic genus of $D$ is $g$.
Then, we have the $i$-th evaluation map:
\begin{eqnarray}\label{evk}
{\rm ev}_i \colon \overline {\frak M}_{g, r}(Y, \beta) \to Y
\end{eqnarray}
defined by ${\rm ev}_i([\mu: (D; p_1, \ldots, p_r) \to Y]) =
\mu(p_i)$. It is known \cite{LT1, LT2, BF} that
the coarse moduli space $\overline {\frak M}_{g, r}(Y, \beta)$ is projective and
has a virtual fundamental class
$[\overline {\frak M}_{g, r}(Y, \beta)]^{\text{vir}} \in
A_{\frak d}(\overline {\frak M}_{g, r}(Y, \beta))$ where
\begin{eqnarray}\label{expected-dim}
\frak d = -(K_Y \cdot \beta) + (\dim (Y) - 3)(1-g) + r
\end{eqnarray}
is the expected complex dimension of
$\overline {\frak M}_{g, r}(Y, \beta)$,
and $A_{\frak d}(\overline {\frak M}_{g, r}(Y, \beta))$
is the Chow group of $\frak d$-dimensional cycles in
the moduli space $\overline {\frak M}_{g, r}(Y, \beta)$.
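A small helper (our own sketch, not code from the paper) makes the dimension count \eqref{expected-dim} concrete. As an aside, for $Y = \Xtwo$ (so $\dim Y = 4$) the Hilbert-Chow morphism is crepant, so $K_{\Xtwo} \cdot \beta_2 = 0$; with $g = 1$ and $r = 0$, as in Theorem~\ref{Intro-theorem_ii}, the expected dimension is $0$, consistent with the invariant there being a number.

```python
def expected_dimension(k_dot_beta, dim_y, genus, marked):
    """Expected complex dimension of the moduli space of stable maps:
    d = -(K_Y . beta) + (dim Y - 3)(1 - g) + r."""
    return -k_dot_beta + (dim_y - 3) * (1 - genus) + marked
```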
The Gromov-Witten invariants are defined by using
the virtual fundamental class
$[\overline {\frak M}_{g, r}(Y, \beta)]^{\text{vir}}$.
Recall that an element
$\alpha \in H^*(Y, \mathbb C) {\buildrel\text{def}\over=}
\bigoplus_{j=0}^{2 \dim_{\mathbb C}(Y)} H^j(Y, \mathbb C)$ is
{\it homogeneous} if $\alpha \in H^j(Y, \mathbb C)$ for some $j$;
in this case, we take $|\alpha| = j$.
Let $\alpha_1, \ldots, \alpha_r \in H^*(Y, \mathbb C)$
such that every $\alpha_i$ is homogeneous and
\begin{eqnarray}\label{homo-deg}
\sum_{i=1}^r |\alpha_i| = 2 {\frak d}.
\end{eqnarray}
Then, we have the $r$-point Gromov-Witten invariant defined by:
\begin{eqnarray}\label{def-GW}
\langle \alpha_1, \ldots, \alpha_r \rangle_{g, \beta}^Y \,\,
= \int_{[\overline {\frak M}_{g, r}(Y, \beta)]^{\text{vir}}}
{\rm ev}_1^*(\alpha_1) \cup \ldots \cup {\rm ev}_r^*(\alpha_r).
\end{eqnarray}
Next, we recall that {\it the excess dimension} is the difference between
the dimension of $\overline {\frak M}_{g, r}(Y, \beta)$ and
the expected dimension $\frak d$ in (\ref{expected-dim}).
Let $T_Y$ stand for the tangent sheaf of $Y$. For $0 \le i < r$, we shall use
\begin{eqnarray}\label{r-to-i}
f_{r, i}: \overline {\frak M}_{g, r}(Y, \beta) \to
\overline {\frak M}_{g, i}(Y, \beta)
\end{eqnarray}
to stand for the forgetful map
obtained by forgetting the last $(r-i)$ marked points
and contracting all the unstable components.
It is known that $f_{r, i}$ is flat when $\beta \ne 0$ and $0 \le i < r$.
The following can be found in \cite{Beh}.
\begin{proposition} \label{virtual-prop}
Let $\beta \in H_2(Y, \mathbb Z)$ and $\beta \ne 0$.
Let $e$ be the excess dimension of the moduli space $\overline {\frak M}_{g, r}(Y, \beta)$.
If $R^1(f_{r+1, r})_*({\rm ev}_{r+1})^*T_Y$ is a rank-$e$ locally free sheaf
over $\overline {\frak M}_{g, r}(Y, \beta)$, then $\overline {\frak M}_{g, r}(Y, \beta)$
is smooth (as a stack) of dimension
\begin{eqnarray} \label{virtual-prop.0}
\frak d + e = -(K_Y \cdot \beta) + (\dim (Y) - 3)(1-g) + r + e,
\end{eqnarray}
and $[\overline {\frak M}_{g, r}(Y, \beta)]^{\text{\rm vir}}
= c_e \big (R^1(f_{r+1, r})_*({\rm ev}_{r+1})^*T_Y \big )
\cap [\overline {\frak M}_{g, r}(Y, \beta)/\overline {\frak M}_{g, r}]$.
\end{proposition}
\section{\bf Vanishing of Gromov-Witten invariants}
\label{sect_Vanishing}
In this section, we will recall some basic notations regarding the Hilbert schemes of
points on surfaces, and prove Theorem~\ref{Intro-ThmVanish}.
Let $X$ be a smooth projective complex surface, and $\Xn$ be the Hilbert scheme of points in $X$.
An element in $\Xn$ is represented by a length-$n$ $0$-dimensional closed subscheme $\xi$ of $X$.
For $\xi \in \Xn$, let $I_{\xi}$ and $\mathcal O_\xi$ be the corresponding sheaf of ideals and
structure sheaf respectively. It is known from \cite{Fog1, Iar} that $\Xn$ is
a smooth irreducible variety of dimension $2n$.
The universal codimension-$2$ subscheme is
\beq \label{UnivZn}
\mathcal Z_n=\{(\xi, x) \in \Xn \times X \, |\, x\in \Supp{(\xi)}\}
\quad \subset \Xn \times X.
\eeq
The boundary of $\Xn$ is defined to be the subset
$$
B_n = \left \{ \xi \in \Xn|\, |\Supp{(\xi)}| < n \right \}.
$$
Let $C$ be a real-surface in $X$, and fix distinct points $x_1, \ldots, x_{n-1} \in X$
which are not contained in $C$. Define the subsets
\begin{eqnarray}
\beta_n &=& \left \{ \xi + x_2 + \ldots + x_{n-1} \in \Xn | \Supp(\xi) = \{x_1\} \right \},
\label{BetaN} \\
\beta_C &=& \left \{ x + x_1 + \ldots + x_{n-1} \in \Xn | \, x \in C \right \},
\label{BetaC} \\
D_C &=& \left \{ \xi \in \Xn | \, \Supp(\xi) \cap C \ne \emptyset \right \}.
\label{DC}
\end{eqnarray}
Note that $\beta_C$ (respectively, $D_C$) is a curve (respectively, a divisor) in $\Xn$ when
$C$ is a smooth algebraic curve in $X$. We extend the notions $\beta_C$ and $D_C$ to
all the divisors $C$ in $X$ by linearity. For a subset $Y \subset X$, define
$$
M_n(Y) = \{ \xi \in \Xn| \, \Supp(\xi) \text{ is a point in $Y$}\}.
$$
Nakajima \cite{Nak} and Grojnowski \cite {Gro} geometrically constructed a Heisenberg algebra
action on the cohomology of the Hilbert schemes $\Xn$. Denote the Heisenberg operators by
$\fa_m(\alpha)$ where $m \in \Z$ and $\alpha \in H^*(X, \C)$. Put
$$
\fock = \bigoplus_{n=0}^{+ \infty} H^*(\Xn, \C).
$$
Then the space $\fock$ is an irreducible representation of the Heisenberg algebra
generated by the operators $\fa_m(\alpha)$ with the highest weight vector being
$\vac = 1 \in H^*(X^{[0]}, \C) = \C$. It follows that the $n$-th component
$H^*(\Xn, \C)$ in $\fock$ is linearly spanned by the {\it Heisenberg monomial classes}:
$$
\mathfrak a_{-n_1}(\alpha_1) \cdots \mathfrak a_{-n_k}(\alpha_k) \vac
$$
where $k \ge 0$, $n_1, \ldots, n_k > 0$, and $n_1 + \ldots + n_k = n$. We have
\begin{eqnarray}
\beta_n &=& \fa_{-2}(x) \fa_{-1}(x)^{n-2} \vac, \label{BetaNHei} \\
\beta_C &=& \fa_{-1}(C) \fa_{-1}(x)^{n-1} \vac, \label{BetaCHei} \\
B_n &=& {1 \over (n-2)!} \fa_{-1}(1_X)^{n-2} \fa_{-2}(1_X) \vac, \label{BNHei} \\
D_C &=& {1 \over (n-1)!} \fa_{-1}(1_X)^{n-1} \fa_{-1}(C) \vac \label{DCHei}
\end{eqnarray}
where $x$ and $1_X$ denote the cohomology classes corresponding to a point $x \in X$ and
the surface $X$ respectively. By abusing notations, we also use $C$ to denote
the cohomology class corresponding to the real-surface $C$.
Assume that the surface $X$ admits a non-trivial holomorphic differential two-form
$\theta \in H^0(X, \Omega_X^2) = H^0(X, \mathcal O_X(K_X))$.
By the results of Beauville in \cite{Bea1, Bea2},
$\theta$ induces a holomorphic two-form $\theta^{[n]}$ of the Hilbert scheme $\Xn$
which can also be regarded as a map $\theta^{[n]}: T_\Xn \to \Omega_\Xn$.
For simplicity, put
$$
\Mbar = \Mbar_{g, r}(\Xn, \beta).
$$
Define the degeneracy locus $\Mbar(\theta)$ to be the subset of $\Mbar$ consisting of
all the stable maps $u: \Gamma \to \Xn$ such that the composite
\beq \label{thetaNull}
u^*(\theta^{[n]}) \circ du: \quad T_{\Gamma_{\rm reg}} \to
u^*T_{\Xn}|_{\Gamma_{\rm reg}} \to u^*\Omega_{\Xn}|_{\Gamma_{\rm reg}}
\eeq
is trivial over the regular locus $\Gamma_{\rm reg}$ of $\Gamma$.
By the results of Kiem-Li \cite{KL1, KL2}, $\theta^{[n]}$ defines
a regular cosection of the obstruction sheaf of $\Mbar$:
\beq \label{LL3.1}
\eta : \mathcal Ob_{\Mbar} \longrightarrow \mathcal O_{\Mbar}
\eeq
where $\mathcal Ob_{\Mbar}$ is the obstruction sheaf and $\mathcal O_{\Mbar}$ is
the structure sheaf of $\Mbar$. Moreover, the cosection $\eta$ is surjective away from
the degeneracy locus $\Mbar(\theta)$, and there exists a localized virtual cycle
$[\Mbar]^{\rm vir}_{\rm loc} \in A_*(\Mbar(\theta))$ such that
\beq \label{KL1.1}
[\Mbar]^{\rm vir} = \iota_*[\Mbar]^{\rm vir}_{\rm loc} \in A_*(\Mbar)
\eeq
where $\iota: \Mbar(\theta) \to \Mbar$ stands for the inclusion map.
\begin{lemma} \label{LmaNull}
Let $C_0$ be the zero divisor of $\theta$. Let $u: \Gamma \to \Xn$ be a stable map
in $\Mbar(\theta)$, and let $\Gamma_0$ be an irreducible component of $\Gamma$
with non-constant restriction $u|_{\Gamma_0}$. Then there exists $\xi_1 \in X^{[n_0]}$
for some $n_0$ such that $\Supp(\xi_1) \cap C_0 = \emptyset$ and
\beq \label{LmaNull.0}
u(\Gamma_0) \subset \xi_1 + \{\xi_2| \Supp(\xi_2) \subset C_0 \}.
\eeq
\end{lemma}
\begin{proof}
For notational convenience, we assume that $\Gamma = \Gamma_0$ is irreducible.
Then there exist a nonempty open subset $O \subset \Gamma$ and an integer
$n_0 \ge 0$ such that $O$ is smooth and for every element $p \in O$,
the image $u(p)$ is of the form
\beq \label{LmaNull.1}
u(p) = \xi_1(p) + \xi_2(p)
\eeq
where $\xi_1(p) \in X^{[n_0]}$ with $\Supp(\xi_1(p)) \cap C_0 = \emptyset$ and
$\Supp(\xi_2(p)) \subset C_0$. This induces a decomposition $u|_O = (u_1, u_2)$
where the morphisms $u_1: O \to X^{[n_0]}$ and $u_2: O \to X^{[n-n_0]}$ are
defined by sending $p \in O$ to $\xi_1(p)$ and $\xi_2(p)$ respectively.
Since \eqref{thetaNull} is trivial over the regular locus $\Gamma_{\rm reg}$ of $\Gamma$,
the composite
\beq \label{LmaNull.2}
u_1^*(\theta^{[n_0]}) \circ du_1: \quad T_O \to u_1^*T_{X^{[n_0]}}|_O
\to u_1^*\Omega_{X^{[n_0]}}|_O
\eeq
is trivial. Note that the holomorphic two-form $\theta^{[n_0]}$ on $X^{[n_0]}$ is
non-degenerate at $\xi_1(p), p \in O$ since $\Supp(\xi_1(p)) \cap C_0 = \emptyset$.
Thus, $du_1 = 0$ and $u_1$ is a constant morphism.
Setting $\xi_1 = \xi_1(p) = u_1(p), p \in O$ proves the lemma.
\end{proof}
In the rest of the paper, we will assume that $X$ is simply connected. Then,
\begin{eqnarray} \label{PicardH1=0}
{\rm Pic}(\Xn) \cong {\rm Pic}(X) \oplus \Z \cdot (B_n/2)
\end{eqnarray}
by \cite{Fog2}. Under this isomorphism, the divisor $D_C \in {\rm Pic}(\Xn)$
corresponds to $C \in {\rm Pic}(X)$. Let $\{\alpha_1, \ldots, \alpha_s\}$
be a linear basis of $H^2(X, \C)$.
Then,
\beq \label{BasisH^2}
\{D_{\alpha_1}, \ldots, D_{\alpha_s}, B_n\}
\eeq
is a linear basis of $H^2(\Xn, \C)$. Represent $\alpha_1, \ldots, \alpha_s$ by real surfaces
$C_1, \ldots, C_s \subset X$ respectively. Then a linear basis of $H_2(\Xn, \C)$ is given by
\beq \label{BasisH_2}
\{\beta_{C_1}, \ldots, \beta_{C_s}, \beta_n\}.
\eeq
\begin{lemma} \label{MThetaNonEmpty}
Let the surface $X$ be simply connected. Assume that the zero divisor $C_0$ of $\theta$
is irreducible. If the subset $\Mbar(\theta)$ of $\Mbar = \Mbar_{g, r}(\Xn, \beta)$ is nonempty,
then $\beta = d_0 \beta_{C_0} - d \beta_n$ for some integer $d$ and some rational number
$d_0 \ge 0$. Moreover, if $C_0$ is also reduced, then $d_0$ is a non-negative integer.
\end{lemma}
\begin{proof}
Let $u: \Gamma \to \Xn$ be a stable map in $\Mbar(\theta)$. Restricting $u$
to the irreducible components of $\Gamma$ if necessary, we may assume that
$\Gamma$ is irreducible. By Lemma~\ref{LmaNull}, there exists $\xi_1 \in X^{[n_0]}$
for some $n_0$ such that $\Supp(\xi_1) \cap C_0 = \emptyset$ and
\beq \label{MThetaNonEmpty.1}
u(\Gamma) \subset \xi_1 + \{\xi_2| \Supp(\xi_2) \subset C_0 \}.
\eeq
We may further assume that $n_0 = 0$ and $C_0$ is reduced. Then for every $p \in \Gamma$,
$$
\Supp(u(p)) \subset C_0.
$$
Let $C$ be a real surface in $X$. Assume that $C_0$ and $C$ intersect transversally
at $x_{1, 1}, \ldots, x_{1, s}, x_{2, 1}, \ldots, x_{2, t} \in (C_0)_{\rm reg}$ such that
each $x_{1, i}$ (respectively, $x_{2,i}$) contributes $1$ (respectively, $-1$)
to the intersection number $C_0 \cdot C$.
So $s - t = C_0 \cdot C$. Since $\Supp(u(p)) \subset C_0$ and $C_0$
is irreducible and reduced, there exists an integer $d_0'$ such that $d_0'$ is independent
of $C$ and that each $x_{1, i}$ (respectively, $x_{2,i}$) contributes $d_0'$
(respectively, $-d_0'$) to the intersection number $u(\Gamma) \cdot D_C$. Thus,
$$
u(\Gamma) \cdot D_C = sd_0' - td_0' = (s-t)d_0' = d_0' (C_0 \cdot C).
$$
In view of the bases \eqref{BasisH_2} and \eqref{BasisH^2},
$u(\Gamma) = d_0' \beta_{C_0} - d' \beta_n$ for some integer $d'$.
Choosing $C$ to be a very ample curve, we see that $d_0' \ge 0$.
Finally, since $\beta = \deg(u) \cdot u(\Gamma)$, we obtain
$\beta = d_0 \beta_{C_0} - d \beta_n$ for some integers $d_0 \ge 0$ and $d$.
\end{proof}
\begin{theorem} \label{ThmVanish}
Let $X$ be a simply connected surface admitting a holomorphic differential two-form
with irreducible zero divisor. If $\beta$ is not of the form $d_0 \beta_{K_X} - d \beta_n$
for an integer $d$ and a rational number $d_0 \ge 0$,
then all the Gromov-Witten invariants of $\Xn$ defined via the moduli space
$\Mbar_{g, r}(\Xn, \beta)$ vanish.
\end{theorem}
\begin{proof}
Let $\theta \in H^0(X, \Omega_X^2) = H^0(X, \mathcal O_X(K_X))$ be the holomorphic differential
two-form whose zero divisor $C_0$ is irreducible.
By Lemma~\ref{MThetaNonEmpty}, we have $\Mbar(\theta) = \emptyset$.
It follows from \eqref{KL1.1} that $[\Mbar_{g, r}(\Xn, \beta)]^{\rm vir} = 0$.
Therefore, all the Gromov-Witten invariants defined via the moduli space
$\Mbar_{g, r}(\Xn, \beta)$ vanish.
\end{proof}
\begin{remark} \label{RmkThmVanish}
From the proof of Lemma~\ref{MThetaNonEmpty}, we see that if $K_X = C_0 = mC_0'$
for some irreducible and reduced curve $C_0'$, then the rational number $d_0$
in Theorem~\ref{ThmVanish} is of the form $d_0'/m$ for some integer $d_0' \ge 0$.
\end{remark}
Recall that $K_\Xn = D_{K_X}$. Thus, if $\beta = d_0 \beta_{K_X} - d \beta_n$ for
some rational number $d_0 \ge 0$ and integer $d$, then the expected dimension of
$\Mbar_{g, r}(\Xn, \beta)$ is
\begin{eqnarray} \label{ExpDim}
\mathfrak d
&=&-K_\Xn \cdot \beta + (\dim \Xn - 3)(1-g)+ r \nonumber \\
&=&-d_0K_X^2 + (2n - 3)(1-g)+ r.
\end{eqnarray}
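The second line of \eqref{ExpDim} follows from the first by a routine expansion; as a sketch, assuming the standard pairings $D_{K_X} \cdot \beta_{K_X} = K_X^2$ and $D_{K_X} \cdot \beta_n = 0$ between the bases \eqref{BasisH^2} and \eqref{BasisH_2}:

```latex
% Expanding -K_{X^{[n]}} . beta for beta = d_0 beta_{K_X} - d beta_n,
% using K_{X^{[n]}} = D_{K_X}, D_{K_X} . beta_{K_X} = K_X^2, D_{K_X} . beta_n = 0:
\begin{align*}
-K_{X^{[n]}} \cdot \beta
  &= -D_{K_X} \cdot \bigl( d_0\,\beta_{K_X} - d\,\beta_n \bigr) \\
  &= -d_0 \bigl( D_{K_X} \cdot \beta_{K_X} \bigr)
     + d \bigl( D_{K_X} \cdot \beta_n \bigr)
   = -d_0 K_X^2 ,
\end{align*}
% while dim X^{[n]} = 2n gives the term (2n - 3)(1 - g).
```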
Our first corollary deals with the case when $X$ is an elliptic surface.
\begin{corollary} \label{EllipticCor}
Let $X$ be a simply connected (minimal) elliptic surface without multiple fibers
and with positive geometric genus.
Let $n \ge 2$ and $\beta \ne 0$. Then all the Gromov-Witten invariants without descendant
insertions defined via the moduli space $\Mbar_{g, r}(\Xn, \beta)$ vanish,
except possibly when $0 \le g \le 1$ and $\beta = d_0 \beta_{K_X} - d \beta_n$ for
some integer $d$ and rational number $d_0 \ge 0$.
\end{corollary}
\begin{proof}
Since $X$ is a simply connected elliptic surface without multiple fibers,
$K_X = (p_g-1)f$ where $p_g \ge 1$ is the geometric genus of $X$ and $f$ denotes
a smooth fiber of the elliptic fibration. By Theorem~\ref{ThmVanish},
it remains to consider the case when $\beta = d_0 \beta_{K_X} - d \beta_n$ for
some integer $d$ and rational number $d_0 \ge 0$. By \eqref{ExpDim} and $K_X^2 = 0$,
the expected dimension of the moduli space $\Mbar_{g, r}(\Xn, \beta)$ is equal to
$\mathfrak d = (2n - 3)(1-g)+ r$. By the Fundamental Class Axiom, all the Gromov-Witten
invariants without descendant insertions are equal to zero if $g \ge 2$.
\end{proof}
Our second corollary concentrates on the case when $X$ is of general type.
\begin{corollary} \label{GenTypeCor}
Let $X$ be a simply connected minimal surface of general type admitting a holomorphic
differential two-form with irreducible zero divisor.
Let $n \ge 2$ and $\beta \ne 0$.
Then all the Gromov-Witten invariants without descendant insertions defined via
$\Mbar_{g, r}(\Xn, \beta)$ vanish, except possibly in the following cases
\begin{enumerate}
\item[{\rm (i)}] $g = 0$ and $\beta = d \beta_n$ for some integer $d > 0$;
\item[{\rm (ii)}] $g = 1$ and $\beta = d \beta_n$ for some integer $d > 0$;
\item[{\rm (iii)}] $g = 0$ and $\beta = d_0 \beta_{K_X} - d \beta_n$ for some integer $d$ and rational number $d_0 > 0$.
\end{enumerate}
\end{corollary}
\begin{proof}
In view of Theorem~\ref{ThmVanish}, it remains to consider the case when
$\beta = d_0 \beta_{K_X} - d \beta_n$ for some integer $d$ and rational number $d_0 \ge 0$.
When $d_0 = 0$ and $\beta = d \beta_n$ with $d > 0$, we see from \eqref{ExpDim} that
the expected dimension of the moduli space $\Mbar_{g, r}(\Xn, \beta)$ is equal to
$$
\mathfrak d = (2n - 3)(1-g)+ r.
$$
If $g \ge 2$, then all the Gromov-Witten invariants without descendant insertions defined via
$\Mbar_{g, r}(\Xn, \beta)$ vanish by the Fundamental Class Axiom.
Next, assume that $d_0 > 0$. Since $K_X^2 \ge 1$, we see from \eqref{ExpDim} that
$$
\mathfrak d < (2n - 3)(1-g)+ r.
$$
By the Fundamental Class Axiom, all the Gromov-Witten invariants without descendant insertions
vanish except possibly in the case when $g = 0$.
\end{proof}
\section{\bf Intersection numbers on some moduli space of genus-$1$ stable maps}
\label{sect_ProjV}
In this section, we will compute certain intersection numbers on the moduli space of
genus-$1$ stable maps to $\Pee(V)$ where $V$ is a rank-$2$ vector bundle over
a smooth projective curve $C$. The results will be used in Subsection~\ref{subsect_n=2Case2}.
\begin{notation} \label{Notation}
Let $V$ be a rank-$2$ bundle over a smooth projective variety $B$.
\begin{enumerate}
\item[{\rm (i)}] $f$ denotes a fiber of the ruling $\pi: \Pee(V) \to B$ or
its cohomology class.
\item[{\rm (ii)}] $\mathcal H = (f_{1,0})_*\omega$ is the rank-$1$ Hodge bundle
over $\Mbar_{1, 0}(\Pee(V), df)$ where $\omega$ is the relative
dualizing sheaf for $f_{1,0}: \Mbar_{1, 1}(\Pee(V), df) \to \Mbar_{1, 0}(\Pee(V), df)$.
\item[{\rm (iii)}] $\lambda = c_1(\mathcal H)$.
\end{enumerate}
\end{notation}
Let $d \ge 1$. If $u = [\mu: D \to \Pee(V)] \in \Mbar_{1, 0}(\Pee(V), df)$,
then $\mu(D)$ is a fiber of the ruling $\pi: \Pee(V) \to B$. Therefore,
there exists a natural morphism
\beq \label{phiToB}
\phi: \quad \Mbar_{1, 0}(\Pee(V), df) \to B
\eeq
whose fiber over $b \in B$ is
$\Mbar_{1, 0}\big (\pi^{-1}(b), d[\pi^{-1}(b)] \big ) \cong \Mbar_{1, 0}(\Pee^1, d[\Pee^1])$.
So the moduli space $\Mbar_{1, 0}(\Pee(V), df)$ is smooth (as a stack) with dimension
$$
\dim \Mbar_{1,0}(\Pee^1, d[\Pee^1]) + \dim (B) = 2d + \dim (B).
$$
By \eqref{expected-dim}, the expected dimension of $\Mbar_{1, 0}(\Pee(V), df)$ is $2d$.
Since $d \ge 1$, the sheaf $R^1 (f_{1,0})_*{\rm ev}_1^*\mathcal O_{\Pee(V)}(-2)$ on
$\Mbar_{1, 0}(\Pee(V), df)$ is locally free of rank $2d$. In addition,
\beq \label{Mumford}
\lambda^2 = 0
\eeq
according to Mumford's theorem in \cite{Mum} regarding the Chern character of
the Hodge bundles and the proof of Proposition~1 in \cite{FP}.
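For completeness, \eqref{Mumford} can be read off directly from Mumford's relation $c(\mathbb E)\, c(\mathbb E^\vee) = 1$ for the Hodge bundle: in genus $1$ the Hodge bundle $\mathcal H$ has rank $1$ with $c_1(\mathcal H) = \lambda$, so

```latex
% Mumford's relation c(E) c(E^dual) = 1 for the rank-1 genus-1 Hodge bundle:
(1 + \lambda)(1 - \lambda) = 1
\qquad \Longrightarrow \qquad
\lambda^2 = 0 .
```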
\begin{lemma} \label{deformation}
Let $d \ge 1$. Let $V$ be a rank-$2$ bundle over $B_0 \times C$ where $B_0$ and $C$
are smooth projective curves. Let $V_b = V|_{\{b\} \times C}$ for $b \in B_0$. Then,
\begin{eqnarray} \label{deformation.0}
\int_{[\Mbar_{1, 0}(\Pee(V_b), df)]} \lambda \cdot
c_{2d} \big ( R^1 (f_{1,0})_*{\rm ev}_1^*\mathcal O_{\Pee(V_b)}(-2) \big )
\end{eqnarray}
is independent of the points $b \in B_0$.
\end{lemma}
\begin{proof}
This follows from the observation that \eqref{deformation.0} is equal to
$$
\int_{[\Mbar_{1, 0}(\Pee(V), df)]} \phi^*[\{b\} \times C] \cdot \lambda \cdot
c_{2d} \big ( R^1 (f_{1,0})_*{\rm ev}_1^*\mathcal O_{\Pee(V)}(-2) \big )
$$
where the morphism $\phi: \Mbar_{1, 0}(\Pee(V), df) \to B_0 \times C$ is from \eqref{phiToB}.
\end{proof}
Formula \eqref{0K3.01} below is probably well-known, but we could not find a reference.
\begin{lemma} \label{0K3}
Let $d$ be a positive integer. Then, we have
\begin{eqnarray}
\int_{[\Mbar_{1, 0}(\Pee^1, d[\Pee^1])]}
c_{2d} \big ( R^1 (f_{1,0})_*{\rm ev}_1^*\mathcal O_{\Pee^1}(-2) \big )
&=&0, \label{0K3.01} \\
\int_{[\Mbar_{1, 0}(\Pee^1, d[\Pee^1])]} \lambda \cdot
c_{2d-1} \big ( R^1 (f_{1,0})_*{\rm ev}_1^*\mathcal O_{\Pee^1}(-2) \big )
&=&-{1 \over 12d}. \label{0K3.02}
\end{eqnarray}
\end{lemma}
\begin{proof}
We begin with the proof of \eqref{0K3.01}.
Choose a K3 surface $S$ which contains a smooth rational curve $C$.
Then, $C^2 = -2$, $T_S|_C = \mathcal O_C(2) \oplus \mathcal O_C(-2)$,
and $dC$ is the only element in the complete linear system $|dC|$. So we have
\begin{eqnarray*}
& &\int_{[\Mbar_{1, 0}(\Pee^1, d[\Pee^1])]}
c_{2d} \big ( R^1 (f_{1,0})_*{\rm ev}_1^*\mathcal O_{\Pee^1}(-2) \big ) \\
&=&\int_{[\Mbar_{1, 0}(C, d[C])]}
c_{2d} \big ( R^1 (f_{1,0})_*{\rm ev}_1^*(T_S|_C) \big ) \\
&=&\int_{[\Mbar_{1, 0}(S, d[C])]}
c_{2d} \big ( R^1 (f_{1,0})_*{\rm ev}_1^*T_S \big ).
\end{eqnarray*}
Note that $R^1 (f_{1,0})_*{\rm ev}_1^*T_S$ is a rank-$2d$ bundle on the $2d$-dimensional
moduli space $\Mbar_{1, 0}(S, d[C])$ whose virtual dimension is $0$.
By Proposition~\ref{virtual-prop},
\begin{eqnarray*}
\int_{[\Mbar_{1, 0}(\Pee^1, d[\Pee^1])]}
c_{2d} \big ( R^1 (f_{1,0})_*{\rm ev}_1^*\mathcal O_{\Pee^1}(-2) \big )
= \deg \big ( [\Mbar_{1, 0}(S, d[C])]^\vir \big ).
\end{eqnarray*}
Since $[\Mbar_{g, r}(S, \beta)]^\vir = 0$ whenever $\beta \ne 0$, we obtain
$$
\int_{[\Mbar_{1, 0}(\Pee^1, d[\Pee^1])]}
c_{2d} \big ( R^1 (f_{1,0})_*{\rm ev}_1^*\mathcal O_{\Pee^1}(-2) \big ) = 0.
$$
To prove \eqref{0K3.02}, we apply $(f_{1,0})_*{\rm ev}_1^*$ to the exact sequence
$$
0 \to \mathcal O_{\Pee^1}(-2) \to \mathcal O_{\Pee^1}(-1)^{\oplus 2}
\to \mathcal O_{\Pee^1} \to 0.
$$
Since $(f_{1,0})_*{\rm ev}_1^*\mathcal O_{\Pee^1}
= \mathcal O_{\Mbar_{1, 0}(\Pee^1, d[\Pee^1])}$ and
$R^1 (f_{1,0})_*{\rm ev}_1^*\mathcal O_{\Pee^1} = \mathcal H^\vee$, we get
$$
0 \to \mathcal O_{\Mbar_{1, 0}(\Pee^1, d[\Pee^1])}
\to R^1 (f_{1,0})_*{\rm ev}_1^*\mathcal O_{\Pee^1}(-2)
\to R^1 (f_{1,0})_*{\rm ev}_1^*\mathcal O_{\Pee^1}(-1)^{\oplus 2} \to \mathcal H^\vee \to 0.
$$
Calculating the total Chern class and using \eqref{Mumford}, we see that
\begin{eqnarray} \label{0K3.1}
c\big ( R^1 (f_{1,0})_*{\rm ev}_1^*\mathcal O_{\Pee^1}(-2) \big )
&=&c\big ( R^1 (f_{1,0})_*{\rm ev}_1^*\mathcal O_{\Pee^1}(-1)^{\oplus 2}
\big )/c\big ( \mathcal H^\vee \big ) \nonumber \\
&=&c\big ( R^1 (f_{1,0})_*{\rm ev}_1^*\mathcal O_{\Pee^1}(-1)^{\oplus 2} \big )
\cdot (1 + \lambda).
\end{eqnarray}
Thus, the top Chern class $c_{2d} \big ( R^1 (f_{1,0})_*{\rm ev}_1^*\mathcal O_{\Pee^1}(-2) \big )$
is equal to
$$
c_{2d} \big ( R^1 (f_{1,0})_*{\rm ev}_1^*\mathcal O_{\Pee^1}(-1)^{\oplus 2} \big )
+ \lambda \cdot c_{2d-1} \big ( R^1 (f_{1,0})_*{\rm ev}_1^*\mathcal O_{\Pee^1}(-1)^{\oplus 2} \big ).
$$
By Proposition~2 in \cite{GP} and \eqref{0K3.01}, we conclude that
\begin{eqnarray} \label{0K3.2}
\int_{[\Mbar_{1, 0}(\Pee^1, d[\Pee^1])]} \lambda \cdot
c_{2d-1} \big ( R^1 (f_{1,0})_*{\rm ev}_1^*\mathcal O_{\Pee^1}(-1)^{\oplus 2} \big )
= -{1 \over 12d}.
\end{eqnarray}
By \eqref{0K3.1} again, $\lambda \cdot c_{2d-1} \big (
R^1 (f_{1,0})_*{\rm ev}_1^*\mathcal O_{\Pee^1}(-2) \big ) = \lambda \cdot
c_{2d-1} \big ( R^1 (f_{1,0})_*{\rm ev}_1^*\mathcal O_{\Pee^1}(-1)^{\oplus 2} \big )$.
Combining this with \eqref{0K3.2}, we obtain our formula \eqref{0K3.02}.
\end{proof}
\begin{lemma} \label{Hirzebruch}
Let $d \ge 1$ and let $V$ be a rank-$2$ bundle over $\Pee^1$. Then,
\begin{eqnarray} \label{Hirzebruch.0}
\int_{[\Mbar_{1, 0}(\Pee(V), df)]} \lambda \cdot
c_{2d} \big ( R^1 (f_{1,0})_*{\rm ev}_1^*\mathcal O_{\Pee(V)}(-2) \big ) = {\deg(V) \over 12d}.
\end{eqnarray}
\end{lemma}
\noindent
{\it Proof.}
First of all, assume that $\deg(V) = 2k$ for some integer $k$.
Then $V$ can be deformed to $\mathcal O_{\Pee^1}(k) \oplus \mathcal O_{\Pee^1}(k)
= \big ( \mathcal O_{\Pee^1} \oplus \mathcal O_{\Pee^1} \big ) \otimes
\mathcal O_{\Pee^1}(k)$. By Lemma~\ref{deformation},
\begin{eqnarray*}
& &\int_{[\Mbar_{1, 0}(\Pee(V), df)]} \lambda \cdot
c_{2d} \big ( R^1 (f_{1,0})_*{\rm ev}_1^*\mathcal O_{\Pee(V)}(-2) \big ) \\
&=&\int_{[\Mbar_{1, 0}(\Pee^1 \times \Pee^1, df)]} \lambda \cdot
c_{2d} \Big ( R^1 (f_{1,0})_*{\rm ev}_1^*\big (\mathcal O_{\Pee^1 \times \Pee^1}(-2)
\otimes \mathcal O_{\Pee^1 \times \Pee^1}(-2kf) \big )\Big ).
\end{eqnarray*}
Note that $\Mbar_{1, 0}(\Pee^1 \times \Pee^1, df) \cong \Pee^1 \times
\Mbar_{1, 0}(\Pee^1, d[\Pee^1])$. Thus, we obtain
\begin{eqnarray*}
& &\int_{[\Mbar_{1, 0}(\Pee(V), df)]} \lambda \cdot
c_{2d} \big ( R^1 (f_{1,0})_*{\rm ev}_1^*\mathcal O_{\Pee(V)}(-2) \big ) \\
&=&\int_{[\Pee^1 \times \Mbar_{1, 0}(\Pee^1, d[\Pee^1])]} \pi_2^*\lambda \cdot
c_{2d} \Big ( \pi_2^*\big (R^1 (f_{1,0})_*{\rm ev}_1^*\mathcal O_{\Pee^1}(-2) \big )
\otimes \pi_1^*\mathcal O_{\Pee^1}(-2k) \Big )
\end{eqnarray*}
where $\pi_1$ and $\pi_2$ are the two projections of $\Pee^1 \times \Mbar_{1, 0}(\Pee^1, d[\Pee^1])$.
Hence,
\begin{eqnarray}
& &\int_{[\Mbar_{1, 0}(\Pee(V), df)]} \lambda \cdot
c_{2d} \big ( R^1 (f_{1,0})_*{\rm ev}_1^*\mathcal O_{\Pee(V)}(-2) \big ) \nonumber \\
&=&\int_{[\Pee^1 \times \Mbar_{1, 0}(\Pee^1, d[\Pee^1])]} \pi_2^*\lambda \cdot
\pi_2^*c_{2d-1} \big (R^1 (f_{1,0})_*{\rm ev}_1^*\mathcal O_{\Pee^1}(-2) \big )
\cdot \pi_1^*c_1\big (\mathcal O_{\Pee^1}(-2k) \big ) \nonumber \\
&=&{\deg(V) \over 12d} \label{Hirzebruch.1}
\end{eqnarray}
where we have used formula \eqref{0K3.02} in the last step.
Next, assume that $\deg(V) = 2k + 1$ for some integer $k$. Then $V$ can be deformed to
$\big ( \mathcal O_{\Pee^1}(2) \oplus \mathcal O_{\Pee^1}(-1) \big ) \otimes
\mathcal O_{\Pee^1}(k)$. As in the previous paragraph, we have
\begin{eqnarray*}
& &\int_{[\Mbar_{1, 0}(\Pee(V), df)]} \lambda \cdot
c_{2d} \big ( R^1 (f_{1,0})_*{\rm ev}_1^*\mathcal O_{\Pee(V)}(-2) \big ) \\
&=&\int_{[\Mbar_{1, 0}(S, df)]} \lambda \cdot
c_{2d} \Big ( R^1 (f_{1,0})_*{\rm ev}_1^*\big (\mathcal O_S(-2)
\otimes \mathcal O_S(-2kf) \big )\Big ) \\
&=&\int_{[\Mbar_{1, 0}(S, df)]} \lambda \cdot
c_{2d} \big ( R^1 (f_{1,0})_*{\rm ev}_1^*\mathcal O_S(-2) \big ) + {2k \over 12d}
\end{eqnarray*}
where $S = \Pee \big ( \mathcal O_{\Pee^1}(2) \oplus \mathcal O_{\Pee^1}(-1) \big )$.
Let $\mathbb F_1$ be the blow-up of $\Pee^2$ at a point $p$, and let $\sigma$ be the exceptional curve.
Then, $T_{\mathbb F_1}|_\sigma \cong \mathcal O_{\Pee^1}(2) \oplus \mathcal O_{\Pee^1}(-1)$. So
\begin{eqnarray}
& &\int_{[\Mbar_{1, 0}(\Pee(V), df)]} \lambda \cdot
c_{2d} \big ( R^1 (f_{1,0})_*{\rm ev}_1^*\mathcal O_{\Pee(V)}(-2) \big ) \nonumber \\
&=&\int_{[\Mbar_{1, 0}(\Pee(T_{\mathbb F_1}|_\sigma), df)]} \lambda \cdot
c_{2d} \big ( R^1 (f_{1,0})_*{\rm ev}_1^*\mathcal O_{\Pee(T_{\mathbb F_1}|_\sigma)}(-2) \big )
+ {2k \over 12d} \nonumber \\
&=&\int_{[\Mbar_{1, 0}(\Pee(T_{\mathbb F_1}), df)]} \phi^*[\sigma] \cdot \lambda \cdot
c_{2d} \big ( R^1 (f_{1,0})_*{\rm ev}_1^*\mathcal O_{\Pee(T_{\mathbb F_1})}(-2) \big )
+ {2k \over 12d} \label{Hirzebruch.2}
\end{eqnarray}
where the morphism $\phi: \Mbar_{1, 0}\big (\Pee(T_{\mathbb F_1}), df \big ) \to \mathbb F_1$
is from \eqref{phiToB}. Let $f_0$ be a fiber of the ruling $\mathbb F_1 \to \Pee^1$,
and $C$ be a smooth conic in $\Pee^2$ such that $p \not \in C$.
We use $C$ to denote its strict transform in $\mathbb F_1$. Then, $[\sigma] = [C]/2 - [f_0]$.
By \eqref{Hirzebruch.2},
\begin{eqnarray*}
& &\int_{[\Mbar_{1, 0}(\Pee(V), df)]} \lambda \cdot
c_{2d} \big ( R^1 (f_{1,0})_*{\rm ev}_1^*\mathcal O_{\Pee(V)}(-2) \big ) \\
&=&{1 \over 2} \cdot \int_{[\Mbar_{1, 0}(\Pee(T_{\mathbb F_1}), df)]} \phi^*[C] \cdot
\lambda \cdot c_{2d} \big ( R^1 (f_{1,0})_*{\rm ev}_1^*\mathcal O_{\Pee(T_{\mathbb F_1})}(-2)
\big ) \\
& &- \int_{[\Mbar_{1, 0}(\Pee(T_{\mathbb F_1}), df)]} \phi^*[f_0] \cdot \lambda \cdot
c_{2d} \big ( R^1 (f_{1,0})_*{\rm ev}_1^*\mathcal O_{\Pee(T_{\mathbb F_1})}(-2) \big )
+ {2k \over 12d} \\
&=&{1 \over 2} \cdot \int_{[\Mbar_{1, 0}(\Pee(T_{\mathbb F_1}|_C), df)]}
\lambda \cdot c_{2d} \big ( R^1 (f_{1,0})_*{\rm ev}_1^*\mathcal O_{\Pee(T_{\mathbb F_1}|_C)}(-2)
\big ) \\
& &- \int_{[\Mbar_{1, 0}(\Pee(T_{\mathbb F_1}|_{f_0}), df)]} \lambda \cdot
c_{2d} \big ( R^1 (f_{1,0})_*{\rm ev}_1^*\mathcal O_{\Pee(T_{\mathbb F_1}|_{f_0})}(-2) \big )
+ {2k \over 12d}.
\end{eqnarray*}
Note that $\deg \big ( T_{\mathbb F_1}|_C \big ) = 6$ and
$\deg \big ( T_{\mathbb F_1}|_{f_0} \big ) = 2$. By \eqref{Hirzebruch.1},
\begin{equation}
\int_{[\Mbar_{1, 0}(\Pee(V), df)]} \lambda \cdot
c_{2d} \big ( R^1 (f_{1,0})_*{\rm ev}_1^*\mathcal O_{\Pee(V)}(-2) \big )
= {1 \over 2} \cdot {6 \over 12d} - {2 \over 12d} + {2k \over 12d}
= {\deg(V) \over 12d}. \tag*{$\qed$}
\end{equation}
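The class identity $[\sigma] = [C]/2 - [f_0]$ used above can be checked in the standard basis $\{h, e\}$ of $H_2(\mathbb F_1, \Z)$, where $h$ is the pullback of a line and $e = [\sigma]$ is the exceptional class, so that $[f_0] = h - e$ and $[C] = 2h$ (the conic avoids the blown-up point $p$, so its strict and total transforms agree):

```latex
% Check of [sigma] = [C]/2 - [f_0] in H_2(F_1, Z) with basis {h, e},
% where [f_0] = h - e and [C] = 2h:
\frac{[C]}{2} - [f_0] \;=\; \frac{2h}{2} - (h - e) \;=\; e \;=\; [\sigma] .
```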
\begin{proposition} \label{PropDegV}
Let $d$ be a positive integer. Assume that $V$ is a rank-$2$ vector bundle over
a smooth projective curve $C$. Then, we have
\begin{eqnarray} \label{PropDegV.0}
\int_{[\Mbar_{1, 0}(\Pee(V), df)]} \lambda \cdot
c_{2d} \big ( R^1 (f_{1,0})_*{\rm ev}_1^*\mathcal O_{\Pee(V)}(-2) \big ) = {\deg(V) \over 12d}.
\end{eqnarray}
\end{proposition}
\begin{proof}
It is well-known that there exist a rank-$2$ bundle $\mathcal V$ over $\Pee^1 \times C$
and two points $b_1, b_2 \in \Pee^1$ such that $\mathcal V|_{\{b_1\} \times C} = V$
and $\mathcal V|_{\{b_2\} \times C} = (\mathcal O_C \oplus M) \otimes N$
where $M$ and $N$ are line bundles on $C$ with $M$ being very ample.
As in the proof of Lemma~\ref{Hirzebruch}, we conclude from Lemma~\ref{deformation} that
\begin{eqnarray} \label{PropDegV.1}
& &\int_{[\Mbar_{1, 0}(\Pee(V), df)]} \lambda \cdot
c_{2d} \big ( R^1 (f_{1,0})_*{\rm ev}_1^*\mathcal O_{\Pee(V)}(-2) \big ) \nonumber \\
&=&\int_{[\Mbar_{1, 0}(\Pee(\mathcal O_C \oplus M), df)]} \lambda \cdot
c_{2d} \big ( R^1 (f_{1,0})_*{\rm ev}_1^*\mathcal O_{\Pee(\mathcal O_C \oplus M)}(-2) \big )
+ {2 \deg(N) \over 12d}. \qquad
\end{eqnarray}
Since $M$ is very ample, there exists a morphism $\alpha: C \to \Pee^1$ such that
$M = \alpha^*\mathcal O_{\Pee^1}(1)$. Then, $\mathcal O_C \oplus M
= \alpha^*\big (\mathcal O_{\Pee^1} \oplus \mathcal O_{\Pee^1}(1) \big )$.
This induces an isomorphism
$$
\Mbar_{1, 0}(\Pee(\mathcal O_C \oplus M), df) \cong C \times_{\Pee^1}
\Mbar_{1, 0}\big (\Pee(\mathcal O_{\Pee^1} \oplus \mathcal O_{\Pee^1}(1)), df \big ).
$$
Let $\W \alpha: \Mbar_{1, 0}(\Pee(\mathcal O_C \oplus M), df) \to
\Mbar_{1, 0}\big (\Pee (\mathcal O_{\Pee^1} \oplus \mathcal O_{\Pee^1}(1)), df \big )$
be the projection. Then
\begin{eqnarray*}
& &\int_{[\Mbar_{1, 0}(\Pee(\mathcal O_C \oplus M), df)]} \lambda \cdot
c_{2d} \big ( R^1 (f_{1,0})_*{\rm ev}_1^*\mathcal O_{\Pee(\mathcal O_C \oplus M)}(-2) \big ) \\
&=&\int_{[\Mbar_{1, 0}(\Pee(\mathcal O_C \oplus M), df)]} \W \alpha^*\lambda \cdot
c_{2d} \Big (\W \alpha^*\big ( R^1 (f_{1,0})_*{\rm ev}_1^*\mathcal O_{\Pee\big (
\mathcal O_{\Pee^1} \oplus \mathcal O_{\Pee^1}(1) \big )}(-2) \big ) \Big ) \\
&=&\deg(\W \alpha) \cdot \int_{[\Mbar_{1, 0}(\Pee(
\mathcal O_{\Pee^1} \oplus \mathcal O_{\Pee^1}(1)), df)]} \lambda \cdot
c_{2d} \big (R^1 (f_{1,0})_*{\rm ev}_1^*\mathcal O_{\Pee(
\mathcal O_{\Pee^1} \oplus \mathcal O_{\Pee^1}(1))}(-2) \big ).
\end{eqnarray*}
By Lemma~\ref{Hirzebruch} and noting that $\deg(\W \alpha) = \deg(\alpha) = \deg(M)$, we get
\begin{eqnarray} \label{PropDegV.2}
\int_{[\Mbar_{1, 0}(\Pee(\mathcal O_C \oplus M), df)]} \lambda \cdot
c_{2d} \big ( R^1 (f_{1,0})_*{\rm ev}_1^*\mathcal O_{\Pee(\mathcal O_C \oplus M)}(-2) \big )
= {\deg(M) \over 12d}.
\end{eqnarray}
Now our formula \eqref{PropDegV.0} follows immediately from \eqref{PropDegV.1}
and \eqref{PropDegV.2}.
\end{proof}
\section{\bf The homology classes of curves in Hilbert schemes}
\label{sect_homology}
This section contains some technical lemmas which will be used in
Subsection~\ref{subsect_n=2Case3}.
These lemmas deal with the homology classes of curves in Hilbert schemes.
\begin{lemma} \label{CurveSim}
Let $n \ge 2$ and $X$ be a simply connected surface. Let $\Gamma$ be
an irreducible curve in the Hilbert scheme $\Xn$. Then,
\begin{eqnarray} \label{CurveSim.01}
\Gamma \sim \beta_C + d \beta_n \quad \in H_2(\Xn, \C)
\end{eqnarray}
for some effective curve class $C$ (possibly zero) and some integer $d$.
\end{lemma}
\begin{proof}
Let $\pi_1, \pi_2$ be the two projections of $X^{[n]} \times X$,
and recall the universal codimension-$2$ subscheme ${\mathcal Z}_n$ from \eqref{UnivZn}.
Define
\beq \label{CurveSim.1}
{\mathcal Z}_\Gamma = \Gamma \times_{X^{[n]}} {\mathcal Z}_n.
\eeq
Let ${\w \pi}_1 =
\pi_1|_{{\mathcal Z}_\Gamma}: {\mathcal Z}_\Gamma \to \Gamma$ and ${\w \pi}_2 =
\pi_2|_{{\mathcal Z}_\Gamma}: {\mathcal Z}_\Gamma \to X$. Define
$$
C_\Gamma = \w \pi_2({\mathcal Z}_\Gamma) \subset X.
$$
Let $C_1, \ldots, C_t$ (possibly $t = 0$) be the irreducible components of $C_\Gamma$
such that $\dim C_i = 1$ for all $1 \le i \le t$. Let $m_1, \ldots, m_t$
be the degrees of the restrictions of $\pi_2|_{{\mathcal Z}_\Gamma}$ to the reduced
curves ${\big ( \w \pi_2^{-1}(C_1) \big )}_{\rm red}, \ldots,
{\big ( \w \pi_2^{-1}(C_t) \big )}_{\rm red}$ respectively.
Fix a very ample curve $H$ in $X$. Let $d_i = C_i \cdot H$ for $1 \le i \le t$.
Choose the curve $H$ such that the following conditions are satisfied:
\begin{enumerate}
\item[$\bullet$] for each $1 \le i \le t$, $H$ intersects $C_i$ transversally at
$d_i$ distinct smooth points
$$
{\w x}_{i, 1}, \ldots, {\w x}_{i, d_i};
$$
\item[$\bullet$] for $1 \le i \le t$ and $1 \le j \le d_i$,
${\w \pi}_2^{-1}({\w x}_{i, j})$ consists of distinct smooth points
$$
\big (\xi_{i, j; 1}, {\w x}_{i, j} \big ), \, \ldots, \,
\big (\xi_{i, j; m_i}, {\w x}_{i, j} \big ) \, \in \,
{\big ( {\w \pi}_2^{-1}(C_i) \big )}_{\rm red}
$$
at which the restriction of $\w \pi_1$ to ${\big ( {\w \pi}_2^{-1}(C_i)
\big )}_{\rm red}$ is also unramified.
\end{enumerate}
For $1 \le i \le t$, $1 \le j \le d_i$ and $1 \le k \le m_i$, let $\w m_{i,j,k}$
be the multiplicity of the unique irreducible component of ${\mathcal Z}_\Gamma$
containing the smooth point $\big (\xi_{i, j; k}, {\w x}_{i, j} \big )$.
Then the contribution of $\big (\xi_{i, j; k}, {\w x}_{i, j} \big )$ to
$\Gamma \cdot D_H$ is exactly $\w m_{i,j,k}$. Therefore,
\beq \label{CurveSim.2}
\Gamma \cdot D_H = \sum_{i=1}^t \sum_{j=1}^{d_i} \sum_{k=1}^{m_i} {\w m_{i,j,k}}.
\eeq
Note that $\sum_{k=1}^{m_i} {\w m_{i,j,k}}$ is independent of $j$ and $H$.
Put $\sum_{k=1}^{m_i} {\w m_{i,j,k}} = e_i$. By \eqref{CurveSim.2},
\begin{eqnarray} \label{CurveSim.3}
\Gamma \cdot D_H = \sum_{i=1}^t \sum_{j=1}^{d_i} e_i = \sum_{i=1}^t e_i d_i
= \sum_{i=1}^t e_i (C_i \cdot H) = \left ( \sum_{i=1}^t e_i C_i \right ) \cdot H.
\end{eqnarray}
Since $X$ is simply connected, ${\rm Pic}(\Xn) \cong {\rm Pic}(X) \oplus \Z \cdot (B_n/2)$
by \eqref{PicardH1=0}. By the duality between divisor classes and curve classes,
$\Gamma \sim \beta_C + d \beta_n$ for some integer $d$ and some class $C \in A_1(X)$.
Combining with \eqref{CurveSim.3}, we get
$$
C \cdot H = (\beta_C + d \beta_n) \cdot D_H = \Gamma \cdot D_H
= \left ( \sum_{i=1}^t e_i C_i \right ) \cdot H.
$$
So $C$ and $\sum_{i=1}^t e_i C_i$ are numerically equivalent divisors on the surface $X$.
Since $X$ is simply connected, we see that $C = \sum_{i=1}^t e_i C_i$ as divisor classes.
\end{proof}
Next, we study the homology classes of curves in
$C^{(n)} \subset \Xn$ where $C$ denotes a smooth curve in $X$.
The case when $g_C = 0$ has been settled in \cite{LQZ}. So we will assume $g_C \ge 1$.
We recall some standard facts about $C^{(n)}$ from \cite{ACGH, BT}.
For a fixed point $p \in C$, let $\Xi$ denote the divisor
$p + C^{(n-1)} \subset C^{(n)}$. Let
$$
{\rm AJ}: C^{(n)} \to {\rm Jac}_n(C)
$$
be the Abel-Jacobi map sending an element $\xi \in C^{(n)}$ to the corresponding
degree-$n$ divisor class in ${\rm Jac}_n(C)$. For an element $\delta \in {\rm Jac}_n(C)$,
the fiber ${\rm AJ}^{-1}(\delta)$ is the complete linear system $|\delta|$.
Let $\mathcal Z_n(C) \subset C^{(n)} \times C$ be the universal divisor,
and let $\w \pi_1, \w \pi_2$ be the two projections on $C^{(n)} \times C$.
By Lemma~2.5 on p.~340 of \cite{ACGH} and Proposition~2.1~(iv) of \cite{BT},
we have
\beq \label{ACGH1}
c_1\big ( \w \pi_{1*} \mathcal O_{\mathcal Z_n(C)} \big )
= (1 - g_C - n)\Xi + \Theta
\eeq
where $\Theta$ is the pull-back via AJ of a Theta divisor on ${\rm Jac}_n(C)$.
\begin{lemma} \label{HomlgCls}
Let $n \ge 2$ and $X$ be a simply connected surface. Let $C$ be a smooth curve in $X$,
and $\Gamma \subset C^{(n)}$ be a curve. Then,
\begin{eqnarray} \label{HomlgCls.01}
\Gamma \sim
(\Xi \cdot \Gamma) \beta_C + \big (-(n+ g_C - 1)(\Xi \cdot \Gamma) +
(\Theta \cdot \Gamma)\big ) \beta_n \quad \in H_2(\Xn, \C).
\end{eqnarray}
In addition, for every line
$\Gamma_0$ in a positive-dimensional fiber ${\rm AJ}^{-1}(\delta)$, we have
\begin{eqnarray} \label{HomlgCls.02}
\Gamma_0 \sim \beta_C - (n+ g_C - 1) \beta_n.
\end{eqnarray}
\end{lemma}
\begin{proof}
(i) Recall the universal codimension-$2$ subscheme $\mathcal Z_n \subset \Xn \times X$,
and let $\pi_1, \pi_2$ be the two projections on $\Xn \times X$. Then, we have
$$
\big ( \pi_{1*} \mathcal O_{\mathcal Z_n} \big )|_{C^{(n)}}
= \w \pi_{1*} \mathcal O_{\mathcal Z_n(C)}.
$$
It is well-known that $c_1\big ( \pi_{1*} \mathcal O_{\mathcal Z_n} \big ) = -B_n/2$.
Combining with \eqref{ACGH1}, we obtain
\begin{eqnarray} \label{HomlgCls.1}
(-B_n/2) \cdot \Gamma
&=&c_1\big ( \pi_{1*} \mathcal O_{\mathcal Z_n} \big )|_{C^{(n)}} \cdot \Gamma
= c_1\big ( \w \pi_{1*} \mathcal O_{\mathcal Z_n(C)} \big ) \cdot \Gamma
\nonumber \\
&=&(1 - g_C - n)(\Xi \cdot \Gamma) + (\Theta \cdot \Gamma).
\end{eqnarray}
Next, let $\alpha \in H^2(X, \C)$. Then, $D_\alpha|_{C^{(n)}} = (C \cdot \alpha) \Xi$.
Thus, we get
$$
D_\alpha \cdot \Gamma = (C \cdot \alpha) (\Xi \cdot \Gamma).
$$
By \eqref{BasisH^2} and \eqref{HomlgCls.1}, $\Gamma \sim
(\Xi \cdot \Gamma) \beta_C + \big ( (1 - g_C - n)(\Xi \cdot \Gamma) +
(\Theta \cdot \Gamma)\big ) \beta_n$.
(ii) Note that $\Theta \cdot \Gamma_0 = 0$. Also, it is known that
$\Xi|_{{\rm AJ}^{-1}(\delta)} = \mathcal O_{{\rm AJ}^{-1}(\delta)}(1)$.
So $\Xi \cdot \Gamma_0 = 1$. By (i), we see immediately that
$\Gamma_0 \sim \beta_C - (n+ g_C - 1) \beta_n$.
\end{proof}
\section{\bf Gromov-Witten invariants of the Hilbert scheme $\Xtwo$}
\label{sect_n=2}
This section studies the Gromov-Witten invariants of the Hilbert scheme $\Xtwo$.
Using the results from previous sections,
we will prove Theorems~\ref{Intro-theorem_ii} and \ref{Intro-theorem_iii}.
To begin with, we obtain the following from Corollary~\ref{GenTypeCor}.
\begin{proposition} \label{n=2Prop}
Let $X$ be a simply connected minimal surface of general type admitting a holomorphic
differential two-form with irreducible zero divisor. Let $\beta \ne 0$.
Then all the Gromov-Witten invariants without descendant insertions defined via
$\Mbar_{g, r}(\Xtwo, \beta)$ vanish, except possibly in the following cases
\begin{enumerate}
\item[{\rm (i)}] $g = 0$ and $\beta = d \beta_2$ for some integer $d > 0$;
\item[{\rm (ii)}] $g = 1$ and $\beta = d \beta_2$ for some integer $d > 0$;
\item[{\rm (iii)}] $K_X^2 = 1, g = 0$ and $\beta = \beta_{K_X} - d \beta_2$ for some integer $d$.
\end{enumerate}
\end{proposition}
\begin{proof}
Cases (i) and (ii) follow from Corollary~\ref{GenTypeCor}~(i) and (ii) respectively.
In the case of Corollary~\ref{GenTypeCor}~(iii), we have $g = 0$ and
$\beta = d_0 \beta_{K_X} - d \beta_2$ for some rational number $d_0 > 0$ and some integer $d$.
We see from \eqref{ExpDim} that the expected dimension of the moduli space
$\Mbar_{0, r}(\Xtwo, \beta)$ is equal to
$$
\mathfrak d = -d_0K_X^2 + 1+ r.
$$
Since $d_0K_X^2$ must be a positive integer, we conclude from the Fundamental Class Axiom
that all the Gromov-Witten invariants without descendant insertions defined via
$\Mbar_{0, r}(\Xtwo, \beta)$ vanish except possibly when $d_0 K_X^2 = 1$. Now write
$K_X = mC_0'$ where $C_0'$ is an irreducible and reduced curve, and $m \ge 1$ is an integer.
By Remark~\ref{RmkThmVanish}, $d_0 = d_0'/m$ for some integer $d_0' \ge 1$.
Therefore, we obtain
$$
1 = d_0 K_X^2 = d_0' m (C_0')^2.
$$
It follows that $d_0' = m = (C_0')^2 = 1$. Hence $K_X^2 = 1$ and $d_0 = 1$.
\end{proof}
Case~(i) in Proposition~\ref{n=2Prop} can be handled via
the Divisor Axiom of Gromov-Witten theory and the results in \cite{LQ1}.
By the Divisor Axiom, Case~(ii) in Proposition~\ref{n=2Prop} can be reduced to the invariant
\beq \label{n=2Case2}
\langle 1 \rangle_{1, d\beta_2}^{\Xtwo}.
\eeq
Similarly, Case~(iii) in Proposition~\ref{n=2Prop} can be reduced to the invariant
\beq \label{n=2Case3}
\langle 1 \rangle_{0, \,\, \beta_{K_X} - d\beta_2}^{\Xtwo}.
\eeq
The invariants \eqref{n=2Case2} and \eqref{n=2Case3} will be studied in the next two subsections.
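Schematically, the reductions above are repeated applications of the Divisor Axiom: for $\beta \ne 0$, each divisor insertion $D_i$ multiplies the invariant by $\beta \cdot D_i$, so every such invariant is a multiple of the corresponding zero-insertion invariant (and on dimension grounds only divisor insertions can occur in Cases~(ii) and (iii) of Proposition~\ref{n=2Prop}):

```latex
% Divisor Axiom: for divisor classes D_1, ..., D_r on X^{[2]} and beta != 0,
\bigl\langle D_1, \ldots, D_r \bigr\rangle_{g,\,\beta}^{X^{[2]}}
 \;=\; \Bigl( \prod_{i=1}^{r} (\beta \cdot D_i) \Bigr)\,
       \bigl\langle 1 \bigr\rangle_{g,\,\beta}^{X^{[2]}} .
```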
\subsection{\bf Calculation of \eqref{n=2Case2}}
\label{subsect_n=2Case2}
\par
$\,$
Let $d$ denote a positive integer. In this subsection, we will compute \eqref{n=2Case2} for
an arbitrary smooth projective surface $X$. For simplicity, we put
\beq \label{Mgrd}
\Mbar_{g, r, d} = \Mbar_{g, r}(\Xtwo, d\beta_2).
\eeq
\begin{lemma} \label{obsbundle1}
\begin{enumerate}
\item[{\rm (i)}] The obstruction sheaf $\mathcal Ob = R^1(f_{1,0})_*({\rm ev}_1^*T_\Xtwo)$
over the moduli space ${\Mbar}_{1, 0, d}$ is locally free of rank $2d+2$;
\item[{\rm (ii)}] $[{\Mbar}_{1, 0, d}]^\vir = c_{2d+2}(\mathcal Ob)
\cap [{\Mbar}_{1, 0, d}]$.
\end{enumerate}
\end{lemma}
\begin{proof}
(i) Recall the evaluation map ${\rm ev}_1: \Mbar_{1, 1, d} \to \Xtwo$ and
the forgetful map $f_{1, 0}: \Mbar_{1, 1, d} \to {\Mbar}_{1, 0, d}$ from Section~\ref{Stable}.
Let $u = [\mu: D \to \Xtwo] \in {\Mbar}_{1, 0, d}$. Then
$$
H^1 \big (f^{-1}_{1, 0}(u), ({\rm ev}_1^*T_{\Xtwo})|_{f^{-1}_{1, 0}(u)} \big )
\cong H^1(D, \mu^*T_{\Xtwo})
= H^1 \big (D, \mu^*(T_{\Xtwo}|_{\mu(D)}) \big ).
$$
Since $d \ge 1$, $\mu(D) = M_2(x) \cong \Pee^1$ for some point $x \in X$.
By the results in \cite{LQZ},
$$
T_{\Xtwo}|_{\mu(D)} = \mathcal O_{\mu(D)}(2) \oplus
\mathcal O_{\mu(D)}(-2) \oplus \mathcal O_{\mu(D)} \oplus \mathcal O_{\mu(D)}.
$$
Since the curve $D$ is of genus-$1$, $h^1 \big (D, \mu^*(T_{\Xtwo}|_{\mu(D)}) \big ) = 2d+2$.
It follows that the sheaf $R^1(f_{1,0})_*({\rm ev}_1^*T_\Xtwo)$ is locally free of rank $2d+2$.
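When the domain $D$ is irreducible, this count can be checked directly: the four summands above pull back under the degree-$d$ map $\mu$ to line bundles of degrees $2d$, $-2d$, $0$ and $0$ on the genus-$1$ curve $D$, and since $h^1(\mathcal O_D(2d)) = 0$ while $h^1(\mathcal O_D) = 1$, Riemann-Roch ($\chi(L) = \deg L$ in arithmetic genus $1$) yields
$$
h^1(D, \mu^*T_{\Xtwo}) = h^1 \big (\mathcal O_D(-2d) \big ) + 2\, h^1(\mathcal O_D) = 2d + 2.
$$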

(ii) First of all, note that there exists a natural morphism
$$\phi: {\Mbar}_{1, 0, d} \to X$$ sending an element
$u = [\mu: D \to \Xtwo] \in {\Mbar}_{1, 0, d}$ to $x \in X$ if $\mu(D) = M_2(x)$.
The fiber $\phi^{-1}(x)$ over $x \in X$ is $\Mbar_{1,0}(M_2(x), d[M_2(x)]) \cong
\Mbar_{1,0}(\Pee^1, d[\Pee^1])$. So the moduli space ${\Mbar}_{1, 0, d}$ is smooth
(as a stack) with dimension
$$
\dim {\Mbar}_{1, 0, d} = \dim \Mbar_{1,0}(\Pee^1, d[\Pee^1]) + 2 = 2d + 2.
$$
By \eqref{ExpDim}, the expected dimension of ${\Mbar}_{1, 0, d}$ is $0$. Thus,
the excess dimension of ${\Mbar}_{1, 0, d}$ is $2d+2$. By (i) and Proposition~\ref{virtual-prop},
$[{\Mbar}_{1, 0, d}]^\vir = c_{2d+2}(\mathcal Ob) \cap [{\Mbar}_{1, 0, d}]$.
\end{proof}
Via the inclusion map $B_2 \hookrightarrow \Xtwo$, the evaluation map
${\rm ev}_1: \Mbar_{1, 1, d} \to \Xtwo$ factors through
a morphism $\widetilde {\rm ev}_1: \Mbar_{1, 1, d} \to B_2$. Also, $B_2 \cong \Pee(T_X^\vee)$.
Let $\rho: B_2 \to X$ be the canonical projection.
Then, there exists a commutative diagram of morphisms:
\begin{eqnarray}\label{com-diagram2}
\begin{matrix}
\Mbar_{1, 1, d} & \overset{\widetilde {\rm ev}_1}\to &B_2\\
\quad \downarrow^{f_{1,0}}&&\downarrow^{\rho}\\
\Mbar_{1, 0, d} &\overset{\phi} \to&X.
\end{matrix}
\end{eqnarray}
\begin{lemma} \label{obsbundle2}
\begin{enumerate}
\item[{\rm (i)}] Let $\mathcal H$ be the Hodge bundle over $\Mbar_{1, 0, d}$. Then,
$$
R^1 (f_{1,0})_*\widetilde {\rm ev}_1^*T_{B_2} \cong \mathcal H^\vee \otimes \phi^*T_X;
$$
\item[{\rm (ii)}] There exists an exact sequence of locally free sheaves:
\begin{eqnarray} \label{obsbundle2.0}
0 \to R^1 (f_{1,0})_*\widetilde {\rm ev}_1^*T_{B_2} \to \mathcal Ob
\to R^1 (f_{1,0})_*\widetilde {\rm ev}_1^*\mathcal O_{B_2}(-2) \to 0.
\end{eqnarray}
\end{enumerate}
\end{lemma}
\begin{proof}
(i) Let $T_{B_2/X}$ be the relative tangent sheaf for the projection $\rho: B_2 \to X$.
Applying the functors $\widetilde {\rm ev}_1^*$ and $(f_{1,0})_*$ to the exact sequence
$$
0 \to T_{B_2/X} \to T_{B_2} \to \rho^*T_X \to 0
$$
of locally free sheaves, we obtain an exact sequence
\begin{eqnarray} \label{obsbundle2.1}
R^1 (f_{1,0})_*\widetilde {\rm ev}_1^*T_{B_2/X} \to R^1 (f_{1,0})_*\widetilde {\rm ev}_1^*T_{B_2}
\to R^1 (f_{1,0})_*\widetilde {\rm ev}_1^*(\rho^*T_X) \to 0,
\end{eqnarray}
where we have used $R^2(f_{1,0})_*\widetilde {\rm ev}_1^*T_{B_2/X} = 0$
since $f_{1,0}$ is of relative dimension $1$.
We claim that $R^1 (f_{1,0})_*\widetilde {\rm ev}_1^*T_{B_2/X} = 0$.
Indeed, let $u = [\mu: D \to \Xtwo] \in {\Mbar}_{1, 0, d}$, and assume that $\mu(D) = M_2(x)$.
Since $T_{B_2/X}|_{M_2(x)} = T_{M_2(x)} = \mathcal O_{M_2(x)}(2)$,
$$
H^1 \big (f^{-1}_{1, 0}(u), \widetilde {\rm ev}_1^*T_{B_2/X}|_{f^{-1}_{1, 0}(u)} \big )
\cong H^1(D, \mu^*\mathcal O_{M_2(x)}(2)) = 0.
$$
By \eqref{obsbundle2.1}, $R^1 (f_{1,0})_*\widetilde {\rm ev}_1^*T_{B_2}
\cong R^1 (f_{1,0})_*\widetilde {\rm ev}_1^*(\rho^*T_X)$.
Since $\rho \circ \widetilde {\rm ev}_1 = \phi \circ f_{1,0}$, we get
\begin{eqnarray*}
R^1 (f_{1,0})_*\widetilde {\rm ev}_1^*T_{B_2}
&\cong&R^1 (f_{1,0})_*\big ( f_{1,0}^*(\phi^*T_X) \big ) \\
&\cong&R^1 (f_{1,0})_*\mathcal O_{\Mbar_{1, 1, d}} \otimes \phi^*T_X \\
&\cong&\mathcal H^\vee \otimes \phi^*T_X.
\end{eqnarray*}

(ii) Since ${\rm ev}_1$ factors through $\widetilde {\rm ev}_1$,
we see from Lemma~\ref{obsbundle1}~(i) that
$$
\mathcal Ob = R^1(f_{1,0})_*({\rm ev}_1^*T_\Xtwo)
= R^1(f_{1,0})_*\big (\widetilde {\rm ev}_1^*(T_\Xtwo|_{B_2}) \big ).
$$
Since $B_2$ is a smooth divisor in $\Xtwo$ and $\mathcal O_{B_2}(B_2) = \mathcal O_{B_2}(-2)$, we have
$$
0 \to T_{B_2} \to T_\Xtwo|_{B_2} \to \mathcal O_{B_2}(-2) \to 0.
$$
Applying the functors $\widetilde {\rm ev}_1^*$ and $(f_{1,0})_*$, we obtain an exact sequence
\begin{eqnarray} \label{obsbundle2.2}
(f_{1,0})_*\widetilde {\rm ev}_1^*\mathcal O_{B_2}(-2) \to
R^1 (f_{1,0})_*\widetilde {\rm ev}_1^*T_{B_2} \to \mathcal Ob
\to R^1 (f_{1,0})_*\widetilde {\rm ev}_1^*\mathcal O_{B_2}(-2) \to 0.
\end{eqnarray}
We claim that $(f_{1,0})_*\widetilde {\rm ev}_1^*\mathcal O_{B_2}(-2) = 0$.
Indeed, let $u = [\mu: D \to \Xtwo] \in {\Mbar}_{1, 0, d}$, and assume that $\mu(D) = M_2(x)$.
Then, since $\mathcal O_{B_2}(-2)|_{M_2(x)} = \mathcal O_{M_2(x)}(-2)$,
$$
H^0 \big (f^{-1}_{1, 0}(u), \widetilde {\rm ev}_1^*\mathcal O_{B_2}(-2)|_{f^{-1}_{1, 0}(u)} \big )
\cong H^0(D, \mu^*\mathcal O_{M_2(x)}(-2)) = 0.
$$
So $(f_{1,0})_*\widetilde {\rm ev}_1^*\mathcal O_{B_2}(-2) = 0$, and
\eqref{obsbundle2.2} is simplified to the exact sequence \eqref{obsbundle2.0}.
\end{proof}
\begin{theorem} \label{theorem_ii}
Let $d \ge 1$. Let $X$ be a smooth projective surface. Then,
$$
\langle 1 \rangle_{1, d\beta_2}^{\Xtwo} = {K_X^2 \over 12d}.
$$
\end{theorem}
\begin{proof}
By Lemma~\ref{obsbundle1}~(ii), $\langle 1 \rangle_{1, d\beta_2}^{\Xtwo} =
\deg [{\Mbar}_{1, 0, d}]^\vir = \deg \big (c_{2d+2}(\mathcal Ob) \cap [{\Mbar}_{1, 0, d}] \big )$.
The Hodge bundle $\mathcal H$ and $R^1 (f_{1,0})_*\widetilde {\rm ev}_1^*\mathcal O_{B_2}(-2)$
on the moduli space $\Mbar_{1, 0, d}$ are of ranks $1$ and $2d$ respectively.
Therefore, by Lemma~\ref{obsbundle2} and \eqref{Mumford},
\begin{eqnarray*}
\langle 1 \rangle_{1, d\beta_2}^{\Xtwo}
&=&c_2 \big ( \mathcal H^\vee \otimes \phi^*T_X \big ) \, \cdot \, c_{2d} \big (
R^1 (f_{1,0})_*\widetilde {\rm ev}_1^*\mathcal O_{B_2}(-2) \big ) \nonumber \\
&=&\big ( \lambda^2 + \phi^*K_X \cdot \lambda + \phi^*c_2(T_X) \big ) \, \cdot \,
c_{2d} \big ( R^1 (f_{1,0})_*\widetilde {\rm ev}_1^*\mathcal O_{B_2}(-2) \big ) \\
&=&\big ( \phi^*K_X \cdot \lambda + \phi^*c_2(T_X) \big ) \, \cdot \,
c_{2d} \big ( R^1 (f_{1,0})_*\widetilde {\rm ev}_1^*\mathcal O_{B_2}(-2) \big ).
\end{eqnarray*}
Let $\chi(X)$ be the Euler characteristic of $X$. By \eqref{0K3.01}, we obtain
\begin{eqnarray*}
& &\phi^*c_2(T_X) \cdot c_{2d} \big (
R^1 (f_{1,0})_*\widetilde {\rm ev}_1^*\mathcal O_{B_2}(-2) \big ) \\
&=&\chi(X) \cdot \int_{[\Mbar_{1, 0}(\Pee^1, d[\Pee^1])]}
c_{2d} \big ( R^1 (f_{1,0})_*{\rm ev}_1^*\mathcal O_{\Pee^1}(-2) \big ) \\
&=&0.
\end{eqnarray*}
Hence, $\langle 1 \rangle_{1, d\beta_2}^{\Xtwo} = \phi^*K_X \cdot \lambda \cdot
c_{2d} \big ( R^1 (f_{1,0})_*\widetilde {\rm ev}_1^*\mathcal O_{B_2}(-2) \big )$.
Choose two smooth irreducible curves $C_1$ and $C_2$ satisfying $[C_1] - [C_2] = K_X$.
Since $B_2 \cong \Pee(T_X^\vee)$,
\begin{eqnarray*}
\langle 1 \rangle_{1, d\beta_2}^{\Xtwo}
&=&\phi^*([C_1] - [C_2]) \cdot \lambda \cdot
c_{2d} \big ( R^1 (f_{1,0})_*\widetilde {\rm ev}_1^*\mathcal O_{B_2}(-2) \big ) \\
&=&\int_{[\Mbar_{1, 0}(\Pee(T_X^\vee|_{C_1}), df)]} \lambda \cdot
c_{2d} \big ( R^1 (f_{1,0})_*{\rm ev}_1^*\mathcal O_{\Pee(T_X^\vee|_{C_1})}(-2) \big ) \\
& &- \int_{[\Mbar_{1, 0}(\Pee(T_X^\vee|_{C_2}), df)]} \lambda \cdot
c_{2d} \big ( R^1 (f_{1,0})_*{\rm ev}_1^*\mathcal O_{\Pee(T_X^\vee|_{C_2})}(-2) \big ) \\
&=&{\deg(T_X^\vee|_{C_1}) \over 12d} - {\deg(T_X^\vee|_{C_2}) \over 12d} \\
&=&{K_X^2 \over 12d}
\end{eqnarray*}
where we have used Proposition~\ref{PropDegV} in the third step.
\end{proof}
\subsection{\bf Calculation of \eqref{n=2Case3}}
\label{subsect_n=2Case3}
\par
$\,$
Let $X$ be a minimal surface of general type with
$K_X^2 = 1$ and $p_g \ge 1$. By Noether's inequality, $p_g \le 2$.
Thus, $p_g = 1$ or $2$. If $p_g = 2$, then we see from the proof
of Proposition~(8.1) in \cite{BPV} that
$|K_X|$ is a pencil without fixed part and with one base point,
and the general canonical curve is a genus-$2$ smooth irreducible curve.
If $p_g = 1$, then $|K_X|$ consists of a single element, which is
a connected curve of arithmetic genus $2$.
In this subsection, we will study \eqref{n=2Case3} by assuming
that $X$ is a simply connected minimal surface of general type with
$K_X^2 = 1$ and $1 \le p_g \le 2$, and that every member in $|K_X|$ is
a smooth irreducible curve (of genus-$2$).
Our first lemma asserts that the lower bound of $d$ for the class
$\beta_{K_X} + d \beta_2$ to be effective is equal to $-3$,
and classifies all the curves homologous to $\beta_{K_X} - 3 \beta_2$.
\begin{lemma} \label{LowerBd}
Let $X$ be a simply connected minimal surface of general type with
$K_X^2 = 1$ and $1 \le p_g \le 2$ such that every member in $|K_X|$ is
a smooth curve. Let $\Gamma$ be a curve in $\Xtwo$ such that
$\Gamma \sim \beta_{K_X} + d \beta_2$ for some integer $d$. Then,
\begin{enumerate}
\item[{\rm (i)}] $d \ge -3$.
\item[{\rm (ii)}] $d = -3$ if and only if
$\Gamma = g_2^1(C) \subset C^{(2)}$, where $C \in |K_X|$ and
$g_2^1(C)$ is the unique linear system of
dimension $1$ and degree $2$ on the genus-$2$ curve $C$.
\end{enumerate}
\end{lemma}
\begin{proof}
Let $\Gamma_1, \ldots, \Gamma_t$ be the irreducible components of $\Gamma$.
By Lemma~\ref{CurveSim}, there exists $t_1$ with $1 \le t_1 \le t$ such that
for $1 \le i \le t_1$, $\Gamma_i \sim \beta_{C_i} + d_i \beta_2$ for some curve $C_i$
and integer $d_i$, and that for $t_1 < j \le t$, $\Gamma_j \sim d_j \beta_2$ for
some integer $d_j > 0$. Then,
\begin{eqnarray} \label{LowerBd.1}
\beta_{K_X} + d \beta_2 \sim \Gamma = \sum_{i=1}^t \Gamma_i
\sim \sum_{i=1}^{t_1} (\beta_{C_i} + d_i \beta_2) + \sum_{j=t_1+1}^t (d_j \beta_2).
\end{eqnarray}
So $K_X \sim \sum_{i=1}^{t_1} C_i$. Since $X$ is simply connected,
$K_X = \sum_{i=1}^{t_1} C_i$ as divisors.
Since every member in $|K_X|$ is a smooth irreducible curve,
$t_1 = 1$ and $C_1 \in |K_X|$. By \eqref{LowerBd.1}, $d = d_1 + \sum_{j=2}^t d_j \ge d_1$.
To prove the lemma, it suffices to prove $d_1 \ge -3$, i.e., we will assume
in the rest of the proof that $\Gamma = \Gamma_1$ is irreducible.
We claim that there exists a non-empty open subset $U$ of $\Gamma$
such that every $\xi \in U$ consists of two distinct points in $X$.
Indeed, assume that this is not true. Then $\Gamma \subset B_2$.
Let $\alpha \in H^2(X, \C)$. By abusing notations, we also use $\alpha$
to denote a real surface in $X$ representing the cohomology class $\alpha$.
Note that $D_\alpha|_{B_2} = D_\alpha|_{M_2(X)} = 2M_2(\alpha)$. Thus, we obtain
$$
K_X \cdot \alpha = (\beta_{K_X} + d \beta_2) \cdot D_\alpha
= \Gamma \cdot D_\alpha = \Gamma \cdot D_\alpha|_{B_2} = 2 \Gamma \cdot M_2(\alpha).
$$
So $K_X \cdot \alpha$ is even for every $\alpha \in H^2(X, \C)$.
This contradicts $K_X^2 = 1$.
Next, we claim that either $\Gamma \sim \beta_{K_X}$,
or there exists an irreducible curve $C \subset X$ such that
$\Supp(\xi) \subset C$ for every $\xi \in \Gamma$.
Assume that there is no irreducible curve $C \subset X$ such that
$\Supp(\xi) \subset C$ for every $\xi \in \Gamma$. Then by the previous paragraph,
we conclude that there exist a non-empty open subset $U$ of $\Gamma$ and
two distinct irreducible curves $\W C_1, \W C_2 \subset X$ such that every $\xi \in U$
is of the form $x_1 + x_2$ with $x_1 \in \W C_1$, $x_2 \in \W C_2$ and $x_1 \ne x_2$.
This leads to two (possibly constant) rational maps
$f_1: \Gamma \to \W C_1$ and $f_2: \Gamma \to \W C_2$.
Choose the real surface $\alpha \subset X$ in the previous paragraph such that
$\alpha, \W C_1$ and $\W C_2$ are in general position. Then,
$$
K_X \cdot \alpha = \Gamma \cdot D_\alpha
= \deg(f_1) \W C_1 \cdot \alpha + \deg(f_2) \W C_2 \cdot \alpha.
$$
So $K_X = \deg(f_1) \W C_1 + \deg(f_2) \W C_2$ in $H^2(X, \C)$. Since $X$ is simply connected,
$K_X = \deg(f_1) \W C_1 + \deg(f_2) \W C_2$ as divisors.
Since every member in $|K_X|$ is a smooth irreducible curve and $K_X^2 = 1$,
we must have an equality of sets:
$$
\{ \deg(f_1), \deg(f_2) \} = \{1, 0\}.
$$
For simplicity,
let $\deg(f_1) = 1$ and $\deg(f_2) = 0$. Then, $x_2 \in \Supp(\xi)$ for every $\xi \in \Gamma$,
and $\W C_1 \in |K_X|$. By our assumption, $x_2 \not \in \W C_1$ (otherwise,
$\Supp(\xi) \subset \W C_1$ for every $\xi \in \Gamma$). Thus, $\Gamma = \W C_1 + x_2$.
So $\Gamma \sim \beta_{\W C_1} = \beta_{K_X}$.
Finally, we assume that there exists an irreducible curve $C \subset X$ such that
$\Supp(\xi) \subset C$ for every $\xi \in \Gamma$.
We claim that $C \in |K_X|$. Note that there exists a positive integer $s$
such that for a general point $x \in C$, there exist $s$ distinct elements
$\xi_1, \ldots, \xi_s \in \Gamma$ such that $x \in \Supp(\xi_i)$ for every $i$.
Choose the real surface $\alpha \subset X$ such that $\alpha$ and $C$ are
in general position. Then, we have
$$
K_X \cdot \alpha = s C \cdot \alpha.
$$
So $K_X = sC$ in $H^2(X, \C)$. Since $X$ is simply connected, $K_X = sC$ as divisors.
Since $K_X^2 = 1$, $s = 1$ and $C \in |K_X|$. Since $\Gamma \subset C^{(2)}$ and $g_C = 2$,
$$
\beta_{K_X} + d \beta_2 \sim \Gamma \sim
(\Xi \cdot \Gamma) \beta_{K_X} + \big (-3(\Xi \cdot \Gamma) +
(\Theta \cdot \Gamma)\big ) \beta_2
$$
by \eqref{HomlgCls.01}.
Thus, $\Xi \cdot \Gamma = 1$ and $d = -3 + (\Theta \cdot \Gamma)$.
Since $\Theta$ is a nef divisor of $C^{(2)}$, we have $d \ge -3$.
In addition, $d = -3$ if and only if $\Theta \cdot \Gamma = 0$.
Since $\Theta$ is the pull-back of a Theta divisor on ${\rm Jac}_2(C)$ via
the map ${\rm AJ}: C^{(2)} \to {\rm Jac}_2(C)$, $d = -3$ if and only if
$\Gamma$ is contracted to a point by ${\rm AJ}$.
Note that the only positive-dimensional fiber of the map ${\rm AJ}$ is $g_2^1(C) \cong \Pee^1$.
Hence $d = -3$ if and only if $\Gamma = g_2^1(C) \subset C^{(2)}$.
\end{proof}
Fix $C \in |K_X|$ and an isomorphism $\mu_0: \Pee^1 \to g_2^1(C)$.
Via the inclusion $g_2^1(C) \subset C^{(2)}\subset \Xtwo$,
regard $\mu_0$ as a morphism from $\Pee^1$ to $\Xtwo$. Then, we have
\beq \label{DefOfMu0}
[\mu_0: \Pee^1 \to \Xtwo] \in \Mbar_{0, 0}(\Xtwo, \beta_{K_X} - 3\beta_2).
\eeq
\begin{lemma} \label{mu0}
$h^1(\Pee^1, \mu_0^*T_{\Xtwo}) = p_g - 1$.
\end{lemma}
\par\noindent
{\it Proof.}
From the exact sequence $0 \to \mathcal O_X \to \mathcal O_X(K_X) \to
\mathcal O_C(K_X) \to 0$, we get
$$
0 \to H^0(X, \mathcal O_X) \to H^0(X, \mathcal O_X(K_X)) \to
H^0(C, \mathcal O_C(K_X)) \to H^1(X, \mathcal O_X).
$$
Since $X$ is a simply connected surface, $h^1(X, \mathcal O_X) = 0$. So
\beq \label{mu0.1}
h^0(C, \mathcal O_C(K_X)) = p_g - 1.
\eeq
Put $\Gamma = g_2^1(C) \subset C^{(2)} \subset \Xtwo$. Since the smooth rational curve $\Gamma$
is the only positive-dimensional fiber of ${\rm AJ}: C^{(2)} \to {\rm Jac}_2(C)$,
$\Gamma$ is a $(-1)$-curve contracted by ${\rm AJ}$.
So the normal bundle $N_{\Gamma \subset C^{(2)}}$ of $\Gamma$ in $C^{(2)}$ is given by:
\beq \label{mu0.2}
N_{\Gamma \subset C^{(2)}} = \mathcal O_\Gamma(-1).
\eeq
Since $T_\Gamma = \mathcal O_\Gamma(2)$, we see from
$0 \to T_\Gamma \to T_{C^{(2)}}|_\Gamma \to N_{\Gamma \subset C^{(2)}} \to 0$ that
\beq \label{mu0.2.1}
T_{C^{(2)}}|_\Gamma = \mathcal O_\Gamma(2) \oplus \mathcal O_\Gamma(-1).
\eeq
Since $K_{\Xtwo} \cdot \Gamma
= D_{K_X} \cdot (\beta_{K_X} - 3\beta_2) = K_X^2 = 1$, we have
$$
\deg N_{\Gamma \subset \Xtwo} = -K_{\Xtwo} \cdot \Gamma - \deg T_\Gamma = -3.
$$
So $\deg \big (N_{C^{(2)} \subset \Xtwo}|_\Gamma \big ) = -2$ in view of
\eqref{mu0.2} and the exact sequence
\beq \label{mu0.3}
0 \to N_{\Gamma \subset C^{(2)}} \to N_{\Gamma \subset \Xtwo} \to
N_{C^{(2)} \subset \Xtwo}|_\Gamma \to 0.
\eeq
We claim that $N_{C^{(2)} \subset \Xtwo}|_\Gamma = \mathcal O_\Gamma(-p_g) \oplus
\mathcal O_\Gamma(p_g-2)$. Since $N_{C^{(2)} \subset \Xtwo}|_\Gamma$ is
a degree-$(-2)$ rank-$2$ bundle on $\Gamma \cong \Pee^1$, it suffices to prove
\beq \label{mu0.4}
h^0 \big (\Gamma, N_{C^{(2)} \subset \Xtwo}|_\Gamma \big ) = p_g-1.
\eeq
It is known from \cite{AIK} that $N_{C^{(2)} \subset \Xtwo} = \pi_{1*}\pi_2^*\mathcal O_X(C)|_{C^{(2)}}
= \pi_{1*}\pi_2^*\mathcal O_X(K_X)|_{C^{(2)}}$
where $\pi_1: {\mathcal Z}_2 \to \Xtwo$ and $\pi_2: {\mathcal Z}_2 \to X$ are
the natural projections. Let ${\mathcal Z}_\Gamma = \pi_1^{-1}(\Gamma) \subset {\mathcal Z}_2$.
Note that $\pi_2 \big ( {\mathcal Z}_\Gamma \big ) = C$.
Put $\w \pi_1 = \pi_1|_{{\mathcal Z}_\Gamma}: {\mathcal Z}_\Gamma \to \Gamma$
and $\w \pi_2 = \pi_2|_{{\mathcal Z}_\Gamma}: {\mathcal Z}_\Gamma \to C$.
Then, $\w \pi_2$ is an isomorphism. Up to an isomorphism, $\w \pi_1$ is the double cover
$C \to \Pee^1$ corresponding to the linear system $g_2^1(C)$. We have
\beq \label{mu0.401}
N_{C^{(2)} \subset \Xtwo}|_\Gamma = \pi_{1*}\pi_2^*\mathcal O_X(K_X)|_\Gamma
= \w \pi_{1*}\w \pi_2^*\big ( \mathcal O_X(K_X)|_C \big )
= \w \pi_{1*}\w \pi_2^*\mathcal O_C(K_X).
\eeq
Thus $H^0 \big (\Gamma, N_{C^{(2)} \subset \Xtwo}|_\Gamma \big )
= H^0 \big (\Gamma, \w \pi_{1*}\w \pi_2^*\mathcal O_C(K_X) \big )
= H^0 \big ({\mathcal Z}_\Gamma, \w \pi_2^*\mathcal O_C(K_X) \big )$.
Since $\w \pi_2$ is an isomorphism, we conclude from \eqref{mu0.1} that
$$
h^0 \big (\Gamma, N_{C^{(2)} \subset \Xtwo}|_\Gamma \big )
= h^0(C, \mathcal O_C(K_X)) = p_g-1.
$$
This proves \eqref{mu0.4}. Therefore, we obtain
\beq \label{mu0.5}
N_{C^{(2)} \subset \Xtwo}|_\Gamma = \mathcal O_\Gamma(-p_g) \oplus \mathcal O_\Gamma(p_g - 2).
\eeq
Now the exact sequence \eqref{mu0.3} becomes
$$
0 \to \mathcal O_\Gamma(-1) \to N_{\Gamma \subset \Xtwo} \to
\mathcal O_\Gamma(-p_g) \oplus \mathcal O_\Gamma(p_g - 2) \to 0
$$
which splits since $1 \le p_g \le 2$.
So $N_{\Gamma \subset \Xtwo} = \mathcal O_\Gamma(-1) \oplus \mathcal O_\Gamma(-p_g)
\oplus \mathcal O_\Gamma(p_g - 2)$.
From $T_\Gamma = \mathcal O_\Gamma(2)$ and the exact sequence
$
0 \to T_\Gamma \to T_{\Xtwo}|_\Gamma \to N_{\Gamma \subset \Xtwo} \to 0,
$
we see that $T_{\Xtwo}|_\Gamma = \mathcal O_\Gamma(2) \oplus \mathcal O_\Gamma(-1)
\oplus \mathcal O_\Gamma(-p_g) \oplus \mathcal O_\Gamma(p_g - 2)$. Finally,
\begin{equation}
h^1(\Pee^1, \mu_0^*T_{\Xtwo}) = h^1 \big (\Pee^1, \mu_0^*(T_{\Xtwo}|_\Gamma) \big )
= p_g - 1.
\tag*{$\qed$}
\end{equation}
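For concreteness, the splitting obtained in the proof specializes as follows:
$$
T_{\Xtwo}|_\Gamma =
\begin{cases}
\mathcal O_\Gamma(2) \oplus \mathcal O_\Gamma(-1)^{\oplus 3}, & p_g = 1,\\
\mathcal O_\Gamma(2) \oplus \mathcal O_\Gamma(-1) \oplus \mathcal O_\Gamma(-2) \oplus \mathcal O_\Gamma, & p_g = 2,
\end{cases}
$$
so that $h^1(\Pee^1, \mu_0^*T_{\Xtwo})$ equals $0$ when $p_g = 1$ and $1$ when $p_g = 2$, in agreement with $p_g - 1$.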
\begin{theorem} \label{theorem_iii}
Let $X$ be a simply connected minimal surface of general type with
$K_X^2 = 1$ and $1 \le p_g \le 2$ such that every member in $|K_X|$ is smooth. Then,
\begin{enumerate}
\item[{\rm (i)}] $\Mbar_{0, 0}(\Xtwo, \beta_{K_X} - 3\beta_2) \cong |K_X| \cong \Pee^{p_g-1}$;
\item[{\rm (ii)}] $\langle 1 \rangle_{0, \,\, \beta_{K_X} - 3 \beta_2}^{\Xtwo}
= (-1)^{\chi(\mathcal O_X)}$.
\end{enumerate}
\end{theorem}
\par\noindent
{\it Proof.}
For simplicity, we denote $\Mbar_{0, 0}(\Xtwo, \beta_{K_X} - 3\beta_2)$ by $\Mbar$.

(i) Let $[\mu: D \to \Xtwo] \in \Mbar$. Put $\Gamma = \mu(D)$.
As in the first paragraph in the proof of Lemma~\ref{LowerBd}, let $\Gamma_1, \ldots, \Gamma_t$ be
the irreducible components of $\Gamma$. Let $m_i$ be the degree of the restriction
$\mu|_{\mu^{-1}(\Gamma_i)}: \mu^{-1}(\Gamma_i) \to \Gamma_i$. Then,
\beq \label{prop_iii.1}
\beta_{K_X} - 3 \beta_2 = \mu_*[D] = \sum_{i=1}^t m_i [\Gamma_i].
\eeq
By Lemma~\ref{CurveSim}, there exists some $t_1$ with $1 \le t_1 \le t$ such that
for $1 \le i \le t_1$, $\Gamma_i \sim \beta_{C_i} + d_i \beta_2$ for some curve $C_i$
and integer $d_i$, and that for $t_1 < j \le t$, $\Gamma_j \sim d_j \beta_2$ for
some integer $d_j > 0$. Combining with \eqref{prop_iii.1}, we obtain
$$
\beta_{K_X} - 3 \beta_2
= \sum_{i=1}^{t_1} m_i(\beta_{C_i} + d_i \beta_2) + \sum_{j=t_1+1}^t m_j(d_j \beta_2).
$$
So $K_X \sim \sum_{i=1}^{t_1} m_i C_i$. Since $X$ is simply connected,
$K_X = \sum_{i=1}^{t_1} m_i C_i$ as divisors.
Since every member in $|K_X|$ is smooth,
$t_1 = 1$, $m_1 = 1$ and $C_1 \in |K_X|$. By Lemma~\ref{LowerBd}, $t=1$ and
$\Gamma_1 = g_2^1(C_1)$. So $\mu(D) = g_2^1(C_1)$. Since $m_1 = 1$ and there are no marked points,
$D = \Pee^1$ and $\mu$ is an isomorphism from $D = \Pee^1$ to $g_2^1(C_1)$.
Therefore, $\Mbar = |K_X|$ as sets. Since the stable map $\mu: D \to \Xtwo$ has
the trivial automorphism group, $\Mbar$ is a fine moduli space.
Next, we construct a morphism $\psi: |K_X| \to \Mbar$. Let $\mathcal C \subset |K_X| \times X$
be the family of curves parametrized by $|K_X|$. Then we have the relative Hilbert schemes
$(\mathcal C/|K_X|)^{[2]} \subset (|K_X| \times X/|K_X|)^{[2]}$, i.e.,
$(\mathcal C/|K_X|)^{(2)} \subset |K_X| \times \Xtwo$. Let
$$
\W \Psi: (\mathcal C/|K_X|)^{(2)} \to \Xtwo
$$
be the composition of the inclusion $(\mathcal C/|K_X|)^{(2)} \subset |K_X| \times \Xtwo$
and the projection $|K_X| \times \Xtwo \to \Xtwo$. In the relative Jacobian
${\rm Jac}_2(\mathcal C/|K_X|)$, let $\Sigma$ be the section to the natural projection
${\rm Jac}_2(\mathcal C/|K_X|) \to |K_X|$ such that for $C \in |K_X|$, the point
$\Sigma(C) = K_C \in {\rm Jac}_2(C)$. Then the natural map $(\mathcal C/|K_X|)^{(2)}
\to {\rm Jac}_2(\mathcal C/|K_X|)$ is the blowing-up of ${\rm Jac}_2(\mathcal C/|K_X|)$
along $\Sigma$. Let $\mathcal E \subset (\mathcal C/|K_X|)^{(2)}$ be
the exceptional divisor of this blowing-up. Put
$$
\Psi = \W \Psi|_{\mathcal E}: \mathcal E \to \Xtwo.
$$
Then $\Psi$ is a family of stable maps in $\Mbar$ parametrized by $|K_X|$.
By the universal property of the moduli space $\Mbar$, $\Psi$ induces a morphism $\psi: |K_X| \to \Mbar$.
By the discussion in the previous paragraph, we conclude that $\psi$ is bijective.
By \eqref{ExpDim}, the expected dimension of $\Mbar$ is $0$.
Since $\dim \Mbar = \dim |K_X| = p_g-1$, the excess dimension of $\Mbar$ is equal to $p_g-1$.
By Lemma~\ref{mu0}, $R^1(f_{1,0})_*{\rm ev}_1^*T_{\Xtwo}$ is a rank-$(p_g-1)$ locally free sheaf,
where $f_{1,0}: \Mbar_{0, 1}(\Xtwo, \beta_{K_X} - 3\beta_2) \to \Mbar$ is the forgetful map and
${\rm ev}_1: \Mbar_{0, 1}(\Xtwo, \beta_{K_X} - 3\beta_2) \to \Xtwo$ is the evaluation map.
By Proposition~\ref{virtual-prop},
$\Mbar$ is smooth (as a scheme since it is a fine moduli space).
By Zariski's Main Theorem, the bijective morphism $\psi: |K_X| \to \Mbar$ is an isomorphism.
So $\Mbar \cong |K_X| \cong \Pee^{p_g-1}$. Note also from Proposition~\ref{virtual-prop} that
\beq \label{prop_iii.2}
[\Mbar]^{\text{\rm vir}} = c_{p_g-1} \big (R^1(f_{1,0})_*{\rm ev}_1^*T_{\Xtwo} \big ) \cap [\Mbar].
\eeq

(ii) Since $X$ is simply connected, we have $\chi(\mathcal O_X) = 1 + p_g$.
When $p_g = 1$, $\Mbar_{0, 0}(\Xtwo, \beta_{K_X} - 3\beta_2)$ is a smooth point;
so $\langle 1 \rangle_{0, \,\, \beta_{K_X} - 3 \beta_2}^{\Xtwo} = 1$, and our formula holds.
In the rest of the proof, let $p_g = 2$. We will prove that
$\langle 1 \rangle_{0, \,\, \beta_{K_X} - 3 \beta_2}^{\Xtwo} = -1$.
We adopt the notations from the proof of (i), and identify $\Mbar$ with $|K_X|$ via the isomorphism $\psi$.
Then we have identifications $\Mbar_{0, 1}(\Xtwo, \beta_{K_X} - 3\beta_2) = \mathcal E$
and ${\rm ev}_1 = \Psi$. Moreover, the forgetful map
$f_{1,0}: \Mbar_{0, 1}(\Xtwo, \beta_{K_X} - 3\beta_2) \to \Mbar$
is identified with the natural projection $f: \mathcal E \to |K_X|$.
In view of \eqref{prop_iii.2}, we have
\beq \label{prop_iii.3}
[\Mbar]^{\text{\rm vir}} = c_1 \big (R^1f_*\Psi^*T_{\Xtwo} \big ) \cap [\Mbar].
\eeq
To understand the line bundle $R^1f_*\Psi^*T_{\Xtwo}$, we take the exact sequence of
relative tangent bundles associated to the pair
$(\mathcal C/|K_X|)^{(2)} \subset |K_X| \times \Xtwo$:
$$
0 \to T_{(\mathcal C/|K_X|)^{(2)}/|K_X|} \to
T_{|K_X| \times \Xtwo/|K_X|}|_{(\mathcal C/|K_X|)^{(2)}} \to
N_{(\mathcal C/|K_X|)^{(2)} \subset |K_X| \times \Xtwo} \to 0.
$$
Note that $T_{|K_X| \times \Xtwo/|K_X|}|_{(\mathcal C/|K_X|)^{(2)}}
= \W \Psi^*T_\Xtwo$. In addition, we have
\beq \label{prop_iii.301}
N_{(\mathcal C/|K_X|)^{(2)} \subset |K_X| \times \Xtwo} =
\big ( p_{1*}p_2^*\mathcal O_{|K_X| \times X}(\mathcal C) \big )|_{(\mathcal C/|K_X|)^{(2)}}
\eeq
by \cite{AIK},
where $p_1: |K_X| \times \mathcal Z_2 \to |K_X| \times \Xtwo$ and
$p_2: |K_X| \times \mathcal Z_2 \to |K_X| \times X$ are the natural projections.
Therefore, the above exact sequence becomes
$$
0 \to T_{(\mathcal C/|K_X|)^{(2)}/|K_X|} \to \W \Psi^*T_\Xtwo \to
\big ( p_{1*}p_2^*\mathcal O_{|K_X| \times X}(\mathcal C) \big )|_{(\mathcal C/|K_X|)^{(2)}} \to 0.
$$
Restricting it to $\mathcal E \subset (\mathcal C/|K_X|)^{(2)}$, we obtain the exact sequence
\beq \label{prop_iii.4}
0 \to T_{(\mathcal C/|K_X|)^{(2)}/|K_X|}|_{\mathcal E} \to \Psi^*T_\Xtwo \to
\big ( p_{1*}p_2^*\mathcal O_{|K_X| \times X}(\mathcal C) \big )|_{\mathcal E} \to 0.
\eeq
Let $\mathcal Z_{\mathcal E} = p_1^{-1}(\mathcal E)$.
Then $p_2 \big ( \mathcal Z_{\mathcal E} \big ) = \mathcal C$.
Put $\w p_1 = p_1|_{\mathcal Z_{\mathcal E}}: \mathcal Z_{\mathcal E} \to \mathcal E$ and
$\w p_2 = p_2|_{\mathcal Z_{\mathcal E}}: \mathcal Z_{\mathcal E} \to \mathcal C$.
Then the exact sequence \eqref{prop_iii.4} can be rewritten as
\beq \label{prop_iii.5}
0 \to T_{(\mathcal C/|K_X|)^{(2)}/|K_X|}|_{\mathcal E} \to \Psi^*T_\Xtwo \to
\w p_{1*}\w p_2^*\mathcal O_{\mathcal C}(\mathcal C) \to 0.
\eeq
By \eqref{mu0.2.1}, $R^1f_*\big (T_{(\mathcal C/|K_X|)^{(2)}/|K_X|}|_{\mathcal E} \big ) = 0$.
Thus applying $f_*$ to \eqref{prop_iii.5} yields
$$
R^1f_*\Psi^*T_\Xtwo \cong
R^1f_*\big ( \w p_{1*}\w p_2^*\mathcal O_{\mathcal C}(\mathcal C) \big ).
$$
Note that $\w p_2: \mathcal Z_{\mathcal E} \to \mathcal C$ is an isomorphism.
Via this isomorphism, $\w p_1: \mathcal Z_{\mathcal E} \to \mathcal E$
is identified with the natural double cover $\w p: \mathcal C \to \mathcal E$. So
\beq \label{prop_iii.6}
R^1f_*\Psi^*T_\Xtwo \cong
R^1f_*\big ( \w p_*\mathcal O_{\mathcal C}(\mathcal C) \big ).
\eeq
Since $K_X^2 = 1$, the complete linear system $|K_X|$ has a unique base point, denoted by $x_0$.
The blowing-up morphism $\w q_2: \W X \to X$ of $X$ at $x_0$ resolves the rational map
$X \dasharrow |K_X|$, and leads to a morphism $\w q_1: \W X \to |K_X|$.
In addition, $\W X = \mathcal C$, the inclusion $\mathcal C \subset
|K_X| \times X$ is given by the map $(\w q_1, \w q_2): \W X \to |K_X| \times X$,
and there exists a commutative diagram of morphisms:
$$
\begin{array}{ccccc}
\W X = \mathcal C && \overset {\w p} \longrightarrow && \mathcal E \\
&\searrow^{\w q_1}&&\swarrow_f&\\
&&|K_X|&&
\end{array}
$$
From the exact sequence $0 \to T_{\mathcal C} \to T_{|K_X| \times X}|_{\mathcal C} \to
\mathcal O_{\mathcal C}(\mathcal C) \to 0$, we get
\beq \label{prop_iii.601}
\mathcal O_{\mathcal C}(\mathcal C) \cong \mathcal O_{\W X}(K_{\W X}) \otimes
\w q_1^*\mathcal O_{|K_X|}(2) \otimes \w q_2^*\mathcal O_X(-K_X)
\cong \mathcal O_{\W X}(E) \otimes \w q_1^*\mathcal O_{|K_X|}(2)
\eeq
where $E \subset \W X$ is the exceptional curve. Combining with \eqref{prop_iii.6}, we obtain
\beq \label{prop_iii.7}
R^1f_*\Psi^*T_\Xtwo \cong \mathcal O_{|K_X|}(2) \otimes
R^1f_*\big ( \w p_*\mathcal O_{\W X}(E) \big ).
\eeq
Next, we determine the rational ruled surface $\mathcal E$.
Let $\sigma = \w p(E)$. Since $E$ is a section to $\w q_1$, $\sigma$ is a section to $f$.
Let $C$ be a fiber of $\w q_1$. Then $\Gamma \overset {\rm def} = g_2^1(C)$ is
the fiber of $f$ over the point $\w q_1(C)$, and $K_{\W X} = \w q_2^*K_X + E = C + 2E$.
By the adjunction formula, $K_C = (K_{\W X} + C)|_C = 2(E|_C) = 2(E \cap C)$.
So the double cover $C \to \Gamma$ is ramified at the intersection point $E \cap C$.
It follows that the double cover $\w p$ is ramified along $E$. Thus $\w p^*\sigma = 2E$.
By the projection formula, $\sigma^2 = (\w p^*\sigma)^2/2 = 2E^2 = -2$. Thus,
$\mathcal E$ is the Hirzebruch surface $\mathbb F_2 =
{\mathbb P}\big ( \mathcal O_{|K_X|} \oplus \mathcal O_{|K_X|}(-2) \big )$
with $\mathcal O_{\mathcal E}(1) = \mathcal O_{\mathcal E}(\sigma)$.
It follows that $K_{\mathcal E} = -2\sigma - 4\Gamma$.
Let $B \subset \mathcal E$ be the branch locus of the double cover $\w p$,
and $\W B = \w p^{-1}(B)$. Then $B$ and $\W B$ are smooth, $\sigma \subset B$,
$E \subset \W B$, and $B = 2L$ for some divisor $L$ on $\mathcal E$. Also,
$$
C + 2E = K_{\W X} = \w p^*(K_{\mathcal E}) + \W B = \w p^*(-2\sigma - 4\Gamma) + \W B
= -4E - 4C + \W B.
$$
So $\W B = 6E + 5C$, and $B = \w p_*\W B = 6\sigma + 10 \Gamma$ (thus $B$ is the disjoint union
of $\sigma$ and a smooth curve in $|5\sigma + 10 \Gamma|$).
Since $\w p_*\mathcal O_{\W X} = \mathcal O_{\mathcal E} \oplus \mathcal O_{\mathcal E}(-L)$,
\begin{eqnarray} \label{prop_iii.8}
c_1\big ( \w p_*\mathcal O_{\W X}(E) \big )
&=&{\rm ch}_1\big ( \w p_!(\mathcal O_{\W X}(E)) \big )
= \big \{ \w p_*({\rm ch}( \mathcal O_{\W X}(E)) \cdot {\rm td}(T_{\w p})) \big \}_1
\nonumber \\
&=&\w p_*E + c_1\big ( \w p_*\mathcal O_{\W X} \big )
= \sigma + (-L) = -2\sigma - 5\Gamma
\end{eqnarray}
by the Grothendieck-Riemann-Roch Theorem, where $T_{\w p}$ is the relative tangent sheaf of $\w p$.
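The degree-one part of this Grothendieck-Riemann-Roch computation can be made explicit: since $\w p$ is finite, $\w p_! \mathcal O_{\W X}(E) = \w p_* \mathcal O_{\W X}(E)$, and the calculation only requires $\w p_* E = \sigma$ together with $c_1 \big ( \w p_*\mathcal O_{\W X} \big ) = -L$, where $L = \frac{1}{2}B = 3\sigma + 5\Gamma$; this gives
$$
c_1\big ( \w p_*\mathcal O_{\W X}(E) \big ) = \sigma - (3\sigma + 5\Gamma) = -2\sigma - 5\Gamma.
$$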
Since $h^0(\mathcal E, \w p_*\mathcal O_{\W X}(E)) = h^0(\W X, \mathcal O_{\W X}(E)) = 1$,
there exists an injection $\mathcal O_{\mathcal E} \to \w p_*\mathcal O_{\W X}(E)$
which in turn induces an injection $\mathcal O_{\mathcal E}(a\sigma) \to
\w p_*\mathcal O_{\W X}(E)$ with $a \ge 0$ and with torsion-free quotient
$\w p_*\mathcal O_{\W X}(E)/\mathcal O_{\mathcal E}(a\sigma)$. So we have an exact sequence
\beq \label{prop_iii.9}
0 \to \mathcal O_{\mathcal E}(a\sigma) \to \w p_*\mathcal O_{\W X}(E) \to
\mathcal O_{\mathcal E}((-a-2)\sigma - 5\Gamma) \otimes I_\eta \to 0
\eeq
where $\eta$ is a $0$-cycle on $\mathcal E$.
By \eqref{mu0.5}, \eqref{prop_iii.301} and \eqref{prop_iii.601}, we get
\begin{eqnarray*}
\mathcal O_\Gamma \oplus \mathcal O_\Gamma(-2)
&=&N_{C^{(2)} \subset \Xtwo}|_\Gamma
= \big ( N_{(\mathcal C/|K_X|)^{(2)} \subset |K_X| \times \Xtwo} \big )|_\Gamma \\
&=&\big ( p_{1*}p_2^*\mathcal O_{|K_X| \times X}(\mathcal C) \big )|_\Gamma
= \big ( \w p_{1*}\w p_2^*\mathcal O_{\mathcal C}(\mathcal C) \big )|_\Gamma \\
&=&\Big ( \w p_*\big ( \mathcal O_{\W X}(E) \otimes \w q_1^*\mathcal O_{|K_X|}(2)
\big ) \Big )|_\Gamma
= \big ( \w p_*\mathcal O_{\W X}(E) \big )|_\Gamma.
\end{eqnarray*}
Since this holds for every fiber $\Gamma$ of $f$, we conclude from \eqref{prop_iii.9} that
$a = 0$ and $\eta = \emptyset$. So the exact sequence \eqref{prop_iii.9} is simplified to
$$
0 \to \mathcal O_{\mathcal E} \to \w p_*\mathcal O_{\W X}(E) \to
\mathcal O_{\mathcal E}(-2\sigma - 5\Gamma) \to 0.
$$
Applying the functor $f_*$ to the above exact sequence yields
$$
R^1f_*\big ( \w p_*\mathcal O_{\W X}(E) \big )
\cong R^1f_*\mathcal O_{\mathcal E}(-2\sigma - 5\Gamma)
\cong \mathcal O_{|K_X|}(-5) \otimes R^1f_*\mathcal O_{\mathcal E}(-2\sigma).
$$
Since $\mathcal E = {\mathbb P}\big ( \mathcal O_{|K_X|} \oplus \mathcal O_{|K_X|}(-2) \big )$
with $\mathcal O_{\mathcal E}(1) = \mathcal O_{\mathcal E}(\sigma)$ and
$f_*\mathcal O_{\mathcal E} = \mathcal O_{|K_X|}$,
$$
R^1f_*\big ( \w p_*\mathcal O_{\W X}(E) \big )
\cong \mathcal O_{|K_X|}(-5) \otimes \Big ( (f_*\mathcal O_{\mathcal E})^\vee
\otimes \mathcal O_{|K_X|}(-2)^\vee \Big )
\cong \mathcal O_{|K_X|}(-3).
$$
By \eqref{prop_iii.7}, $R^1f_*\Psi^*T_\Xtwo \cong \mathcal O_{|K_X|}(-1)$.
Finally, by \eqref{prop_iii.3}, we obtain
\begin{equation}
\langle 1 \rangle_{0, \,\, \beta_{K_X} - 3\beta_2}^{\Xtwo}
= \deg \, [\Mbar]^{\text{\rm vir}} = \deg \, c_1 \big (R^1f_*\Psi^*T_\Xtwo \big ) = -1.
\tag*{$\qed$}
\end{equation}
\section{Introduction\label{sec:intro}}
The Kadison-Singer problem~\cite{KS59}, posed in 1959, asks whether every pure state on the (abelian) von Neumann algebra $\mathbb{D}$ of bounded diagonal operators on $\ell_2$ has a unique extension to a pure state on $B(\ell_2)$, the von Neumann algebra of all bounded linear operators on the Hilbert space $\ell_2$. The statement of the Kadison-Singer problem
arises from work on the foundations of quantum mechanics done by Dirac in the 1940s, and has been subsequently shown to be equivalent to numerous important problems in pure mathematics, applied mathematics, engineering and computer science~\cite{KS-detailed}.
Weaver~\cite{dm/Weaver04} shows that the Kadison-Singer problem
is equivalent to the following discrepancy question, which was originally posed as a conjecture.
\begin{conjecture}[The $\mathsf{KS}_2$ Conjecture]
There exist universal constants $\eta\geq 2$ and $\theta>0$ such that the following holds. Let $v_1,\ldots, v_m\in\mathbb{C}^d$ satisfy $\|v_i\|\leq 1$ for all $i\in[m]$, and suppose
$
\sum_{i=1}^m | \langle u, v_i \rangle |^2 = \eta
$
for every unit vector $u\in\mathbb{C}^d$. Then, there exists a partition $S_1, S_2$ of $[m]$ so that
\[
\sum_{i\in S_{j}} |\langle u, v_i \rangle|^2 \leq \eta -\theta,\]
for every unit vector $u\in\mathbb{C}^d$ and every $j\in\{1,2\}$.
\end{conjecture}
As a major breakthrough in mathematics, Marcus, Spielman and Srivastava~\cite{MSS} prove that the $\mathsf{KS}_2$ conjecture holds, and give an affirmative answer to the Kadison-Singer problem. Specifically, in this celebrated paper they show that, for any vectors $v_1,\ldots, v_m\in\mathbb{C}^d$
such that $\|v_i\|^2\leq\alpha$ for any $i\in [m]$ and $
\sum_{i=1}^m |\langle v_i, x\rangle|^2 =1$
for any $x\in\mathbb{C}^d$ with $\|x\|=1$, there is a partition $S_1, S_2$ of $[m]$ such that, for any $x\in\mathbb{C}^d$ with $\|x\|=1$ and
$j\in\{1,2\}$,
\[
\abs{\sum_{i \in S_j} |\langle v_i, x\rangle|^2 - \frac{1}{2}} \leq 3\cdot\sqrt{\alpha}.
\]
The proof of this result is based on studying interlacing families of polynomials~\cite{MSS13}. While analysing interlacing families of polynomials suffices to answer the $\mathsf{KS}_2$ conjecture and, as a consequence,
solve the Kadison-Singer problem, it is unclear if their existential proof on the partition guaranteed by the $\mathsf{KS}_2$ conjecture can be turned into an efficient algorithmic construction; designing efficient algorithms for the Kadison-Singer problem is listed as a natural open question in \cite{MSS}.
This question is particularly interesting in theoretical computer science, since it is directly linked to constructing unweighted spectral sparsifiers~\cite{BSS} and spectrally thin trees~\cite{AnariG14a}, among many other applications in approximation algorithms. However, there has been little work on the algorithmic Kadison-Singer problem,
and the complexity status of this problem is an important open question.
To address this question, we study the following $\mathsf{KS}_2$ problem with some constant $c\in\mathbb{R}^+$:
\begin{problem}[The \textsf{KS}$_2(c)$ problem]\label{pro:main}
Given vectors $v_1,\ldots, v_m\in\mathbb{R}^d$ such that $\|v_i\|^2\leq \alpha$ for any $i\in[m]$ and
$
\sum_{i=1}^m \langle v_i, x\rangle^2 =1$
for any $x\in\mathbb{R}^d$ with $\|x\|=1$, the {$\mathsf{KS}_2(c)$} problem asks to
\begin{itemize}
\item find some $S\subset [m]$, such that it holds for all $x \in \mathbb{R}^d$ with $\norm{x} = 1$ that
\begin{equation}\label{eq:KS_condition}
\abs{\sum_{i \in S} \langle v_i, x\rangle^2 - \frac{1}{2}} \leq c\cdot\sqrt{\alpha},
\end{equation}
\item or report `no' if no such $S$ exists.
\end{itemize}
\end{problem}
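Since condition~\eqref{eq:KS_condition} must hold for all unit vectors $x$, checking it for a given candidate set $S$ amounts to a spectral norm computation: the left-hand side, maximised over unit $x$, equals $\big\|\sum_{i\in S} v_i v_i^\transpose - \frac{1}{2}I\big\|$. The following Python sketch illustrates this check (the function names are ours, introduced only for illustration):

```python
import numpy as np

def ks_discrepancy(vectors, S):
    # max over unit x of |sum_{i in S} <v_i, x>^2 - 1/2|, which equals
    # the spectral norm of A_S - (1/2) I, where A_S = sum_{i in S} v_i v_i^T
    d = len(vectors[0])
    A = sum((np.outer(vectors[i], vectors[i]) for i in S),
            np.zeros((d, d)))
    return np.linalg.norm(A - 0.5 * np.eye(d), ord=2)

def satisfies_ks_condition(vectors, S, c, alpha):
    # checks condition (1) of the KS_2(c) problem for the candidate set S
    return ks_discrepancy(vectors, S) <= c * np.sqrt(alpha)
```

For instance, for the isotropic instance consisting of two copies each of $e_1/\sqrt{2}$ and $e_2/\sqrt{2}$ in $\mathbb{R}^2$, picking one copy of each achieves discrepancy $0$.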
Notice that the $\mathsf{KS}_2$ conjecture
is equivalent to the existence of some subset $S\subset[m]$ as stated in
Problem~\ref{pro:main} for some constant $c$. Here we choose to formulate the discrepancy of any set $S\subset [m]$ in \eqref{eq:KS_condition} as $c\cdot\sqrt{\alpha}$ for three reasons: first of all, Weaver~\cite{dm/Weaver04} shows that
the $O(\sqrt{\alpha})$ dependency
in \eqref{eq:KS_condition}
is tight, so this term is unavoidable when bounding the discrepancy; secondly, the $\mathsf{KS}_2$ conjecture shows that the existence of any universal constant $c$ in \eqref{eq:KS_condition} suffices to prove the Kadison-Singer conjecture, and it is proven in \cite{MSS} that the $\mathsf{KS}_2$ conjecture holds for $c=3$; however,
studying the tightness of this constant remains an interesting open question on its own~(Problem~8.1, \cite{KSconsequence}). Finally, as we will show shortly, the $\mathsf{KS}_2(c)$ problem belongs to different complexity classes with respect to different values of $c$, so introducing this parameter $c$ allows us to better understand the complexity of the algorithmic Kadison-Singer problem.
\subsection{Our Results}
Our first result is an algorithm called \textsc{Randomised-KS}$(\{v_i\}, c, \epsilon)$ for approximately solving the $\mathsf{KS}_2(c)$ problem for general values of $c$.
For any constant $c$, and any vectors $v_1,\ldots,v_m\in\mathbb{R}^d$ such that $\|v_i\|^2\leq \alpha$ for all $i\in[m]$, we show that
\begin{itemize}
\item if there exists an $S$ which satisfies \eqref{eq:KS_condition}, then with probability at least $(1-2/d)$ the algorithm returns a set $S'\subset \{v_i\}_{i=1}^m$ that satisfies
\begin{align}
(1-\epsilon)\Big(\frac 1 2 - c\sqrt \alpha\Big)\le \sum_{v\in S'} \inner{v}{x}^2\le (1+\epsilon)\Big(\frac 1 2 + c\sqrt \alpha\Big) \label{eq:KS_ep_condition}
\end{align}
for all unit vectors $x \in \mathbb{R}^d$, and
\item if no set exists which satisfies \eqref{eq:KS_ep_condition}, then with probability $1$ the algorithm returns `no'.
\end{itemize}
Our result is summarised as follows:
\begin{theorem} \label{thm:algorithm}
There is an algorithm, \textsc{Randomised-KS}$(\mathcal{I}, c, \epsilon)$, such that for any instance $\mathcal{I} \triangleq \{v_i\}_{i = 1}^m$ of the $\mathsf{KS}_2(c)$ problem with $v_i \in \mathbb{R}^d$ for $d\geq 3$, and for any $\epsilon \in (0, 1)$, the following holds:
\begin{itemize}
\item
if there exists a set $S \subset \mathcal{I}$ such that
\[
\left(\frac{1}{2} - c \sqrt{\alpha}\right) \leq \sum_{v \in S} \inner{v}{x}^2 \leq \left(\frac{1}{2} + c \sqrt{\alpha} \right)
\]
for all unit vectors $x \in \mathbb{R}^d$,
then with probability at least $(1 - 2/d)$,
the \textsc{Randomised-KS}$(\mathcal{I}, c, \epsilon)$ algorithm returns a subset $S' \subset \mathcal{I}$ which satisfies~\eqref{eq:KS_ep_condition} for all unit vectors $x \in \mathbb{R}^d$.
\item if there is no set $S \subset \mathcal{I}$ which satisfies \eqref{eq:KS_ep_condition},
then with probability $1$, the \textsc{Randomised-KS}$(\mathcal{I}, c, \epsilon)$ algorithm reports that no such set exists.
\end{itemize}
The algorithm has running time
\[
\bigo{\binom{m}{n}\cdot \mathrm{poly}(m, d)}~\mbox{ for }~n \triangleq \bigo{\frac{d}{\epsilon^2} \log(d) \max\left(\log\left(\frac{1}{c\sqrt{\alpha}}\right), \log\left(\frac{1}{(1/2) - c \sqrt{\alpha}}\right)\right)}.
\]
\end{theorem}
\begin{remark}
Since the most interesting instances of the
$\mathsf{KS}_2(c)$ problem are the cases in which $1/2 + c \sqrt{\alpha}$ is bounded away from $1$, we can assume that $c\sqrt{\alpha} \leq 1/2 - \sigma$ for some constant $\sigma>0$, which implies that
\[
n = \bigo{\frac{d}{\epsilon^2} \log(d) \log\left(\frac{1}{c\sqrt{\alpha}}\right)}.
\]
Combining this with $d=\sum_{i=1}^m \|v_i\|^2\leq \alpha m$, a constraint due to the isotropic nature of the input, shows that our algorithm runs in quasi-polynomial time in $m$.
\end{remark}
Compared with
the state-of-the-art that runs in
$d^{O(m^{1/3} \alpha^{-1/4})}$
time~\cite{conf/soda/AnariGSS18},
the most appealing aspect of
Theorem~\ref{thm:algorithm}
is that it shows that the $\mathsf{KS}_2(c)$ problem can be approximately solved in quasi-polynomial time when $d=O(\mathrm{poly} \log m)$. Moreover, for small values of $c$, where
a subset $S\subset [m]$ satisfying~\eqref{eq:KS_condition} is not guaranteed to exist, our algorithm, with the same time complexity, is still able to find an $S$ satisfying~\eqref{eq:KS_ep_condition} with high probability if one exists, or report `no' with probability $1$ otherwise. These two facts together show that both determining the existence of a valid subset $S$ and finding such an $S$ are computationally much easier in low dimensions, regardless of the range of $c$.
In addition, our result is much stronger than that of a random-sampling-based algorithm, which only works in the regime of $\alpha=O(1/\log d)$~\cite{tropp2012user}; our algorithm works even when there are vectors with much larger norms, e.g., $\alpha=\Theta(1)$.
On the other hand, as with
many optimisation problems
whose formulation involves the dimension of the input items~(e.g., multi-dimensional packing~\cite{siamcomp/ChekuriK04} and vector scheduling~\cite{algorithmica/BansalOVZ16}),
Theorem~\ref{thm:algorithm} indicates that the order of $d$ might play a significant role in the hardness of the $\mathsf{KS}_2(c)$ problem, and that the hard instances of the problem might lie in the regime of $m=O(d)$.
Inspired by this, we study the computational complexity of the $\mathsf{KS}_2(c)$ problem for general values of $d$, where the number of input vectors satisfies $m=O(d)$.
In order to study the `optimal' partitioning, for a given instance of the problem $\mathcal{I} = \{v_1, \ldots, v_m\}$, let
\[
\mathcal{W}(\mathcal{I}) \triangleq \min_{S \subset \mathcal{I}} \max_{\substack{x \in \mathbb{R}^d\\ \|x\| = 1}} \abs{\sum_{v \in S} \langle v, x \rangle^2 - \frac{1}{2}}.
\]
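For small instances, the quantity $\mathcal{W}(\mathcal{I})$ can be evaluated by brute force, since the inner maximisation over unit vectors is again a spectral norm. A minimal Python sketch (our own illustration, exponential in $m$ and intended only for intuition):

```python
import itertools
import numpy as np

def W(vectors):
    # brute-force W(I) = min over subsets S of || sum_{v in S} v v^T - (1/2) I ||;
    # the inner max over unit x equals the spectral norm of A_S - (1/2) I
    m, d = len(vectors), len(vectors[0])
    best = np.inf
    for r in range(m + 1):
        for S in itertools.combinations(range(m), r):
            A = sum((np.outer(vectors[i], vectors[i]) for i in S),
                    np.zeros((d, d)))
            best = min(best, np.linalg.norm(A - 0.5 * np.eye(d), ord=2))
    return best
```

For example, the instance $\{e_1, e_2\}\subset\mathbb{R}^2$ has $\mathcal{W}(\mathcal{I})=1/2$, attained by every subset.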
Then, we choose $c=1/(4\sqrt{2})$ and notice that, for vectors satisfying the conditions of the $\mathsf{KS}_2(c)$ problem, a subset $S$ satisfying \eqref{eq:KS_condition} for such $c$ may not exist. As our second result, we prove that, for any $c \leq 1/(4\sqrt{2})$,
distinguishing between instances for which $\mathcal{W}(\mathcal{I}) = 0$ and those for which $\mathcal{W}(\mathcal{I}) \geq c \cdot \sqrt{\alpha}$ is $\mathsf{NP}$-hard.
Our result is as follows:
\begin{theorem} \label{thm:hardness}
The $\mathsf{KS}_2\left(1/\left(4\sqrt{2}\right)\right)$ problem is $\mathsf{FNP}$-hard for general values of $d$.
Moreover, it is $\mathsf{NP}$-hard to distinguish instances of the $\mathsf{KS}_2(c)$ problem with $\mathcal{W}(\mathcal{I}) = 0$ from instances with $\mathcal{W}(\mathcal{I}) \geq \left( 1 / \left(4 \sqrt{2}\right) \right) \cdot \sqrt{\alpha}$.
\end{theorem}
\begin{remark}
It is important to note that, when $d$ is constant, the decision problem in Theorem~\ref{thm:hardness} can be solved in polynomial time.
For example, the $1$-dimensional problem is equivalent to the
$\mathsf{PARTITION}$ problem, in which we are given a set of real numbers $\mathcal{I} = \{x_1, \ldots, x_m\}$ such that $\sum_i x_i = 1$ and must determine whether there exists a subset $S \subset \mathcal{I}$ such that $\sum_{x \in S} x = 1/2$.
In this setting, \[
\mathcal{W}(\mathcal{I}) = \min_{S \subset \mathcal{I}} \abs{\left(\sum_{x \in S} x\right) - 1/2}.
\]
There is a well-known FPTAS for $\mathsf{PARTITION}$ which can distinguish between instances for which $\mathcal{W}(\mathcal{I}) = 0$ and those for which $\mathcal{W}(\mathcal{I}) \geq \epsilon$, for any $\epsilon > 0$.
An important consequence of Theorem~\ref{thm:hardness} is that there is no such FPTAS for the optimisation version of the $\mathsf{KS}_2(c)$ problem for general $d$.
\end{remark}
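To make the $1$-dimensional case concrete, the discrepancy $\mathcal{W}(\mathcal{I})$ of a $\mathsf{PARTITION}$ instance can be computed by a standard subset-sum dynamic program on rounded inputs, in the spirit of the FPTAS mentioned above (a sketch; the rounding parameter \texttt{scale} is our own choice):

```python
def partition_discrepancy(xs, scale=10**6):
    # min over subsets S of |sum(S) - 1/2|, computed exactly on inputs
    # rounded to multiples of 1/scale (the rounding step behind an FPTAS)
    ints = [round(x * scale) for x in xs]
    reachable = {0}                      # achievable subset sums
    for a in ints:
        reachable |= {s + a for s in reachable}
    half = sum(xs) / 2
    return min(abs(s / scale - half) for s in reachable)
```

For instance, $\{0.5, 0.25, 0.25\}$ has discrepancy $0$, whereas $\{0.6, 0.4\}$ has discrepancy $0.1$.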
Theorem~\ref{thm:hardness} shows that the
isotropic structure of the $\mathsf{KS}_2(c)$ instance is not sufficient to make finding a partition easy when compared with similar problems.
As such, the
design of a potential polynomial-time algorithm for the Kadison-Singer problem would need to take the range of $c$ into account,
and cannot solve the optimisation version of the $\mathsf{KS}_2(c)$ problem;
otherwise, one would end up solving an $\mathsf{NP}$-hard problem.
On the other side, since a valid subset of the input vectors for the $\mathsf{KS}_2(c)$ problem always exists for $c\geq 3$~\cite{MSS}, the decision version of $\mathsf{KS}_2(3)$ is (trivially) in $\mathsf{P}$.
Hence, the complexity of the $\mathsf{KS}_2(c)$ problem depends on the value of $c$.
From our point of view,
Theorem~\ref{thm:hardness} raises an interesting open question on whether one can precisely characterise
the computational complexity of the $\mathsf{KS}_2(c)$ problem with respect to $c$,
the understanding of which will greatly help with its algorithm design.
\subsection{Our Techniques}
In this subsection we sketch our main techniques used in proving Theorems~\ref{thm:algorithm} and \ref{thm:hardness}.
\paragraph{Proof Sketch of Theorem~\ref{thm:algorithm}.}
We start by sketching the ideas
behind our algorithmic result. First of all, it is easy to see that
we can solve the $\mathsf{KS}_2(c)$ problem for any $c\in\mathbb{R}^+$ in $O\left(2^m\cdot\mathrm{poly}(m,d)\right)$ time, since we only need to enumerate all the $2^m$ subsets $S\subseteq \mathcal{I}$ of the input set $\mathcal{I}$ and check whether each candidate set $S$ satisfies the condition \eqref{eq:KS_condition}. To express all the subsets of $\mathcal{I}$, we
inductively construct level sets $\left\{\mathcal{L}_i\right\}_{i=0}^m$ with $\mathcal{L}_i\subseteq 2^{\mathcal{I}}$ as follows:
\begin{itemize}
\item initially, level $i=0$ consists of a single set $\emptyset$, and we set $\mathcal{L}_0=\{\emptyset\}$;
\item based on $\mathcal{L}_{i-1}$ for any $1\leq i\leq m$, we define $\mathcal{L}_{i}$ by
\[
\mathcal{L}_{i} \triangleq \left\{ S, S\cup\{v_{i}\} : S\in \mathcal{L}_{i-1} \right\}.
\]
\end{itemize}
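As a plain-Python illustration of this inductive construction (without any pruning, and with vector $v_i$ identified by its index $i$):

```python
def build_level_sets(m):
    # naive inductive construction of the level sets L_0, ..., L_m over
    # indices 1..m; without pruning, |L_i| = 2^i and L_m lists all subsets
    levels = [[frozenset()]]                 # L_0 = { {} }
    for i in range(1, m + 1):
        levels.append([T for S in levels[-1]
                       for T in (S, S | {i})])
    return levels
```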
It is important to note that, although $|\mathcal{L}_i|$ could be as large as $2^m$, there are only $m+1$ such
level sets
$\mathcal{L}_i$, which
are constructed inductively in an \emph{online} manner, and it holds for any $S\subseteq \mathcal{I}$ that $S\in \mathcal{L}_m$.
The bottleneck for
improving the efficiency of this simple enumeration algorithm
is the number of sets in $\mathcal{L}_m$, which could be exponential in $m$. To overcome this bottleneck, we introduce the notion of \emph{spectral equivalence classes} to reduce $|\mathcal{L}_i|$ for any $i\in[m]$. Informally speaking, if there are different $S_1, S_2\in\mathcal{L}_i$ for any $i\in[m]$ such that\footnote{For any two matrices $A$ and $B$ of the same dimension, we write $A\preceq B$ if $B-A$ is positive semi-definite.}
\[
(1 - \epsilon) \sum_{j \in S_2} v_j v_j^\transpose \preceq \sum_{j \in S_1} v_j v_j^\transpose \preceq (1 + \epsilon) \sum_{j \in S_2} v_j v_j^\transpose
\]
for some small $\epsilon$, then we view $S_1$ and $S_2$ as ``spectrally equivalent'' to each other\footnote{Although this relationship is not symmetric, this informal definition is sufficient for the proof sketch and is not used directly in our analysis.}. It suffices to use one set to represent its entire spectral equivalence class; hence, we only need to store the subsets that are not spectrally equivalent to each other\footnote{The list of stored subsets can be thought of as an epsilon cover of all possible subsets.}.
Since there is a spectral sparsifier of any $S$ with $O(d\log (d)/\epsilon^2)$ vectors~\cite{cohen2016online,SpielmanS11}, we can reduce the total number of stored subsets~(i.e., the
number of spectral equivalence classes) in $\mathcal{L}_i$ for any $i\in[m]$ to at most $\binom{m}{n}$, where $n = \bigo{d \log(d) / \epsilon^2}$, which is no longer exponential in $m$.
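The informal notion of spectral equivalence can be tested directly: $S_1$ is spectrally equivalent to $S_2$ precisely when both Loewner-order inequalities hold. A small numpy sketch (the function names are ours):

```python
import numpy as np

def psd_leq(A, B, tol=1e-9):
    # A <= B in the Loewner order iff B - A is positive semi-definite
    return np.linalg.eigvalsh(B - A).min() >= -tol

def spectrally_equivalent(vectors, S1, S2, eps):
    # checks (1 - eps) A_{S2} <= A_{S1} <= (1 + eps) A_{S2}
    d = len(vectors[0])
    A1 = sum((np.outer(vectors[i], vectors[i]) for i in S1), np.zeros((d, d)))
    A2 = sum((np.outer(vectors[i], vectors[i]) for i in S2), np.zeros((d, d)))
    return psd_leq((1 - eps) * A2, A1) and psd_leq(A1, (1 + eps) * A2)
```

For example, two singleton sets containing identical vectors are spectrally equivalent for any $\epsilon>0$, while doubling a set is not, for small $\epsilon$.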
Turning this idea into an algorithm design, we need to be careful that the small approximation error introduced by every constructed spectral sparsifier does not compound as we construct sparsifiers from one level to another.
In order to avoid this, we employ the online vector sparsification algorithm presented in \cite{cohen2016online}. This allows us to construct sparsifiers in $\mathcal{L}_{i}$ from the ones in $\mathcal{L}_{i-1}$ and the vector $v_{i}$. In addition, the construction in each level preserves the same approximation error as the previous one.
We highlight that the design of our algorithm for solving the $\mathsf{KS}_2(c)$ problem is entirely different from the previous work, which is based on analysing the properties of interlacing polynomials~\cite{conf/soda/AnariGSS18}.
Moreover, one can view our use of online spectral sparsifiers in constructing spectral equivalence classes as an \emph{encoding} strategy to reduce the enumeration space of the $\mathsf{KS}_2(c)$ problem. From this aspect, our work sheds light on potential applications of other tools well-studied in algorithmic spectral graph theory and numerical linear algebra, such as sparsification and sketching.
\paragraph{Proof Sketch of Theorem~\ref{thm:hardness}.}
Our proof of the $\mathsf{FNP}$-hardness of the $\mathsf{KS}_2\left(1/\left(4\sqrt{2}\right)\right)$ problem is based on a reduction from the well-known \textsf{NAE-3SAT}\ problem~\cite{garey_computers_1979} to
a
decision version of the $\mathsf{KS}_2\left(1/\left(4\sqrt{2}\right)\right)$ problem,
which asks whether $\mathcal{W}(\mathcal{I}) = 0$ or $\mathcal{W}(\mathcal{I}) \geq \left(1 / \left(4 \sqrt{2}\right)\right) \sqrt{\alpha}$.
Our overall reduction consists of two steps: we first build a reduction from the \textsf{NAE-3SAT}\ problem to the so-called \textsf{NAE-3SAT-KS}\ problem, and then build a reduction from the \textsf{NAE-3SAT-KS}\ problem to the $\mathsf{KS}_2\left(1/\left(4\sqrt{2}\right)\right)$ problem.
To sketch the first reduction, we examine the so-called \textsf{NAE-3SAT-KS}\ problem, which can be viewed as a restricted version of the \textsf{NAE-3SAT}\ problem, and is used only as a tool to build the reduction from the \textsf{NAE-3SAT}\ problem to the $\mathsf{KS}_2\left(1/\left(4\sqrt{2}\right)\right)$ problem.
Informally, an instance of the \textsf{NAE-3SAT-KS}\ problem is a \textsf{3SAT} Boolean formula $\psi$ in which the number of occurrences of both $u$ and $\bar{u}$ for every variable $u$ is limited by some additional constraints, and any two clauses of $\psi$ share at most one literal; the \textsf{NAE-3SAT-KS}\ problem asks if there is a satisfying assignment for $\psi$ such that every clause of $\psi$ has at least one true literal and at least one false literal; we refer the reader to Problem~\ref{prob:satnew} in Section~\ref{sec:hardness} for the formal definition of the \textsf{NAE-3SAT-KS}\ problem. Based on a reduction from the \textsf{NAE-3SAT}\ problem, we show that the \textsf{NAE-3SAT-KS}\ problem is $\mathsf{NP}$-complete.
For the second and main reduction of our analysis, we build a reduction from the \textsf{NAE-3SAT-KS}\ problem to the $\mathsf{KS}_2\left(1/\left(4\sqrt{2}\right)\right)$ problem. Specifically, for an \textsf{NAE-3SAT-KS}\ instance $\psi$ of $n$ variables and $m$ clauses, we construct a set $A$ of $\Theta(n+m)$ vectors as
a $\mathsf{KS}_2\left(1/\left(4\sqrt{2}\right)\right)$ instance, and each
$v\in A$ has dimension $n+m$, such that the following properties hold:
\begin{itemize}
\item every vector $v$ has norm $\|v\|^2 \leq 1/4$ and $\sum_{v\in A} v v^\transpose =I$;
\item if $\psi$ is a satisfiable instance of \textsf{NAE-3SAT-KS}, then there is a subset $S\subset A$ such that \[\sum_{v\in S} v v^\transpose = (1/2)\cdot I;\]
\item if $\psi$ is not a satisfiable instance of \textsf{NAE-3SAT-KS}, then for any subset $S\subset A$ there is always some $y\in\mathbb{R}^{n+m}$ with $\|y\|=1$ such that
\[
\left|
\sum_{v\in S} \inner{v}{y}^2 - \frac{1}{2}
\right| \geq \frac{1}{8 \sqrt{2}}.
\]
\end{itemize}
The key to proving these properties is a careful construction of a $\mathsf{KS}$ instance $\mathcal{I}$ from any formula $\psi$, and an analysis of the properties of $\sum_{v\in S}v v^\transpose $ for any set $S \subseteq \mathcal{I}$ if $\psi$ is an unsatisfiable instance of \textsf{NAE-3SAT-KS}.
We think that such a reduction from any \textsf{SAT} instance to a \textsf{KS} instance is quite novel, and might be further employed to sharpen the constant $1/(4 \sqrt{2})$.
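For concreteness, the \textsf{NAE} condition that the reduction targets, namely that every clause has at least one true and at least one false literal, can be verified as follows (a sketch using the common convention that literal $k$ denotes variable $|k|$, negated when $k<0$):

```python
def is_nae_satisfying(clauses, assignment):
    # every clause must contain at least one true and at least one
    # false literal under the given variable assignment (dict var -> bool)
    for clause in clauses:
        values = [assignment[abs(k)] ^ (k < 0) for k in clause]
        if all(values) or not any(values):
            return False
    return True
```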
\subsection{Related Work}
There has been little work on the algorithmic Kadison-Singer problem.
Anari et al.~\cite{conf/soda/AnariGSS18} study the problem of approximating the largest root of a real rooted polynomial and its applications to interlacing families, which are the main tool developed in \cite{MSS} to prove the Kadison-Singer conjecture. They show that a valid partition promised by Weaver's $\mathsf{KS}_2$ conjecture can be found in $d^{O(m^{1/3} \alpha^{-1/4})}$ time, suggesting that, in contrast to the \textsf{SAT} problem under the strong exponential time hypothesis, exhaustive search of all possibilities is not required for the algorithmic Kadison-Singer problem.
Becchetti et al.~\cite{conf/soda/BecchettiCNPT20} study the algorithmic Kadison-Singer problem for graphs under some restricted conditions. Specifically, they show that, if $G=(V,E)$ is an $n$-vertex, $\Delta$-regular graph with
$\Delta=\Omega(n)$, and the second eigenvalue of the adjacency matrix of $G$ is at most a sufficiently small constant times $\Delta$, then an unweighted spectral sparsifier of $G$ can be constructed efficiently. Their algorithm is combinatorial and only works for graphs.
Weaver~\cite{weaver2} shows that the \textsf{BSS}-framework for constructing linear-sized spectral sparsifiers~\cite{BSS} can be adapted for the one-sided Kadison-Singer problem, where the term ``one-sided'' refers to the fact that the discrepancy of the algorithm's output can be only upper bounded.
Finally, independent of our work, Spielman and Zhang~\cite{SZ22} study the same complexity problem as ours. In contrast to our approach, their analysis starts with the $(3,2\mbox{-}2)$ Set Splitting problem, which is a variant of the $2$-$2$ Set Splitting problem. They prove that the $(3,2\mbox{-}2)$ Set Splitting problem remains \textsf{NP}-hard even if no pair of sets intersects in more than one variable. Applying this, they show that the $\mathsf{KS}_2(c)$ problem is $\mathsf{NP}$-hard for $c=1/4$. While their result is slightly tighter than ours with respect to the value of $c$, the conclusions of the two works are essentially the same.
\subsection{Notation \label{sec:pre}}
Let $[m]\triangleq \{1,\ldots, m\}$. For any integer $j$, we define the vector $\mathbf{1}_j$ by $\mathbf{1}_j(j)=1$ and $\mathbf{1}_j(i)=0$ for all $i\neq j$.
For any integer $d\geq 1$, let $\mathbf{0}_{d \times d}\in\mathbb{R}^{d\times d}$ be the matrix in which every entry is equal to $0$.
We call a symmetric matrix $A\in\mathbb{R}^{d\times d}$ positive semi-definite~(\textsf{PSD}) if $x^{\rot}Ax\geq 0$ holds for any $x\in\mathbb{R}^d$. For any two matrices $A$ and $B$, we write $A\preceq B$ if $B-A$ is \textsf{PSD}. The spectral norm of any matrix $A$ is denoted by $\|A\|$.
\section{Algorithm Based on Spectral Equivalence Classes \label{sec:algo}}
This section discusses in detail the construction of spectral equivalence classes, and its application in designing a randomised algorithm for the $\mathsf{KS}_2(c)$ problem. We analyse the presented algorithm, and prove Theorem~\ref{thm:algorithm}.
\subsection{Algorithm}
Our algorithm consists of $m$ iterations:
in iteration $i$, the algorithm constructs the set $\mathcal{L}_i$ of spectral equivalence classes for the subsets $S \subseteq \{v_1, \ldots, v_i\}$.
For each equivalence class, $\mathcal{L}_i$ contains a pair $(S, B)$ where $S \subseteq \{v_1, \ldots v_i\}$ is a representative set in the equivalence class and $B \in \mathbb{R}^{d \times d}$ is a spectral sparsifier representing the equivalence class.
Moreover, the algorithm
constructs the representations of spectral equivalence classes in iteration $i$ based on the ones maintained in iteration $i-1$. That is, instead of constructing all the subsets of $\{v_1, \ldots, v_{i}\}$ and grouping them into different spectral equivalence classes,
the algorithm directly constructs the representations of the spectral equivalence classes of $\{v_1, \ldots, v_{i}\}$ based on its constructed equivalence classes of $\{v_1, \ldots, v_{i-1}\}$.
This can be achieved by applying an online algorithm for constructing spectral sparsifiers, since,
if we assume that in iteration $i-1$ every subset $S \subseteq \{v_1, \ldots, v_{i-1}\}$ is spectrally equivalent to some $(S', B') \in \mathcal{L}_{i-1}$ maintained by the algorithm, then $S$ and $S \cup \{v_{i}\}$ are spectrally equivalent to $S'$ and $S' \cup \{v_{i}\}$, respectively, in iteration $i$ as well.
As such, in iteration $i$ we only need to ensure that the sets $S'$ and $S' \cup \{v_{i}\}$ are still represented by some sparsifiers in $\mathcal{L}_{i}$.
Based on this, we can view all the vectors $v_1,\ldots, v_m$ as arriving \emph{online} and, starting with the trivial spectral equivalence class defined by $\mathcal{L}_0 = \{(\emptyset, \mathbf{0}_{d \times d})\}$, the algorithm constructs the representations of spectral equivalence classes of $\{v_1, \ldots v_i\}$ in iteration $i$.
Our algorithm applies the online algorithm for constructing spectral sparsifiers~\cite{cohen2016online}~(Lines~\ref{algline:lcomp}-\ref{algline:lupdate2} of Algorithm~\ref{alg:sparsified-ks})
to construct the representations of spectral equivalence classes of $\{v_1, \ldots, v_{i}\}$ based on those of $\{v_1, \ldots, v_{i-1}\}$. Since any subset of $\{v_1, \ldots v_m\}$ is spectrally equivalent to some set of vectors with size
$n$ where $n$ is nearly linear in $d$~\cite{cohen2016online},
the number of spectral equivalence classes in any set $\mathcal{L}_i$ will be at most $\binom{m}{n}$.
See Figure~\ref{fig:sparsified_table} for an illustration of the construction of the sets $\mathcal{L}_i$ and Algorithm~\ref{alg:sparsified-ks} for the formal description of the algorithm.
\begin{figure}[ht]
\centering
\resizebox{12cm}{!}{%
\tikzfig{figures/tikzDiagrams/spectrallevelsetssmall}
}
\caption{The construction of the sets $\mathcal{L}_i$ in Algorithm~\ref{alg:sparsified-ks}.
Each $\mathcal{L}_{i-1}$ contains sparsifiers representing the spectral equivalence classes of the vectors $\{v_1, \ldots, v_{i-1}\}$.
Then, $\mathcal{L}_{i}$ contains either one or two ``children'' of each sparsifier in $\mathcal{L}_{i-1}$, where the second child is added with some small probability, which prevents $\cardinality{\mathcal{L}_m}$ from growing exponentially with $m$.
For a particular target subset $S \subseteq \{v_1, \ldots v_m\}$, there is some sequence of constructed sparsifiers which corresponds to the process of the online algorithm for constructing spectral sparsifiers~\cite{cohen2016online}, applied to $S$.}
\label{fig:sparsified_table}
\end{figure}
\begin{algorithm}[H] \label{alg:sparsified-ks}
\caption{\textsc{Randomised-KS}$(\mathcal{I} = \{v_i\}_{i = 1}^m, c, \epsilon)$, where $v_i \in \mathbb{R}^d$ and $\norm{v_i}^2 \leq \alpha$ }
$\mu \gets \epsilon / 6$ \\
$\lambda \gets \min\left(c \sqrt{\alpha}, 1/2 - c \sqrt{\alpha}\right)$ \\
$b \gets 8 \log(d) / \mu^2$ \\
$n \gets \bigo{d \log(d) \log(1/\lambda) / \mu^2}$\\
$\mathcal{L}_0 \gets \{ (\emptyset, \mathbf{0}_{d \times d}) \}$ \\ \label{algline:basecase}
\For{$i \gets 1~\mathrm{\ to\ }~m$}{
%
$\mathcal{L}_i \gets \emptyset$ \\
\For{$(S, B) \in \mathcal{L}_{i-1}$ and $B$ is constructed with at most $n$ vectors \label{algline:loop}}{
$S' \gets S \cup \{v_i\}$ \\
%
\If{$S'$ satisfies \eqref{eq:KS_ep_condition} \label{algline:if}}{
\Return $S'$
}
$p \gets \min\left(b \left(1 + \mu\right) v_i^\transpose \left(B + \lambda I \right)^{-1} v_i, 1 \right)$ \\ \label{algline:lcomp}
\eIf{$X \leq p$ where $X \sim \mathrm{Uniform}[0, 1]$ \label{algline:randif}}{
$B' \gets B + \frac{1}{p}v_i v_i^\transpose$ \\
%
$\mathcal{L}_i \gets \mathcal{L}_i \cup \{(S, B), (S', B')\}$ \\ \label{algline:lupdate}
}{
%
$\mathcal{L}_i \gets \mathcal{L}_i \cup \{(S', B)\}$ \\ \label{algline:lupdate2}
}
}
}
\Return \textsc{Failure}
\end{algorithm}
\begin{remark}
The if-condition on Line~\ref{algline:if} of Algorithm~\ref{alg:sparsified-ks} can be checked in polynomial time while introducing an arbitrarily small error, by constructing the matrix $\sum_{v\in S'} vv^{\rot}$ and computing its eigenvalues.
\end{remark}
\subsection{Analysis}
Since it holds for any vectors $v_1,\ldots, v_{\ell}$ with $v_i\in\mathbb{R}^d$ that
\[
\sum_{i=1}^{\ell} v_iv_i^{\rot} =
\begin{pmatrix}
\rule[.5ex]{2.5ex}{0.5pt}\ v_1\ \rule[.5ex]{2.5ex}{0.5pt}\\
\rule[.5ex]{2.5ex}{0.5pt}\ v_2\ \rule[.5ex]{2.5ex}{0.5pt}\\
\hspace{0.25cm}\vdots\hspace{0.25cm}\\
\rule[.5ex]{2.5ex}{0.5pt}\ v_{\ell}\ \rule[.5ex]{2.5ex}{0.5pt}
\end{pmatrix}^{\rot}
\begin{pmatrix}
\rule[.5ex]{2.5ex}{0.5pt}\ v_1\ \rule[.5ex]{2.5ex}{0.5pt}\\
\rule[.5ex]{2.5ex}{0.5pt}\ v_2\ \rule[.5ex]{2.5ex}{0.5pt}\\
\hspace{0.25cm}\vdots\hspace{0.25cm}\\
\rule[.5ex]{2.5ex}{0.5pt}\ v_{\ell}\ \rule[.5ex]{2.5ex}{0.5pt}
\end{pmatrix},
\]
sparsifying $\sum_{v\in S} v v^{\rot}$ for any $S \subseteq \mathcal{I}$ is equivalent to sparsifying the $|S|\times d$ matrix whose rows are defined by all the $v\in S$. Based on this, our proof uses the result from the online matrix sparsification algorithm~\cite{cohen2016online} as a black box. Specifically, we apply the following lemma in our analysis, which is a special case of Theorem~2.3 from~\cite{cohen2016online}. Notice that the algorithm described in Lemma~\ref{lem:cohenonline} below corresponds to the sampling scheme used in Algorithm~\ref{alg:sparsified-ks}.
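The identity above can be checked numerically; in numpy, with the rows of $V$ playing the role of $v_1,\ldots,v_\ell$:

```python
import numpy as np

rng = np.random.default_rng(0)
V = rng.standard_normal((5, 3))             # rows are v_1, ..., v_5
outer_sum = sum(np.outer(v, v) for v in V)  # sum_i v_i v_i^T
assert np.allclose(outer_sum, V.T @ V)      # equals V^T V, as displayed above
```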
\begin{lemma}[\cite{cohen2016online}, Theorem 2.3] \label{lem:cohenonline}
Let $S$ be a set of vectors $v_1, \ldots, v_m \in \mathbb{R}^d$, and let $A = \sum_{v \in S} v v^\transpose$. With $\mu, \delta \in [0, 1]$, $b \triangleq 8 \log(d) / \mu^2$ and $B_0 = \mathbf{0}_{d \times d}$, construct $B_i$ inductively for $i \in [m]$ such that
with probability
\[
p_i = \min\left(b (1 + \mu) v_i^\transpose \left(B_{i - 1} + \frac{\delta}{\mu} I\right)^{-1} v_i, 1\right),
\]
we have
\[
B_i = B_{i - 1} + \frac{1}{p_i} v_i v_i^\transpose,
\]
and with probability $1 - p_i$, we have $B_i = B_{i - 1}$.
Then, it holds with probability $(1 - 1/d)$ that
\[
(1 - \mu) A - \delta I \preceq B_m \preceq (1 + \mu) A + \delta I,
\]
and the number of vectors added to $B_m$ is $O\left(d\log d \log\left(\mu \|A\|^2/\delta \right)/\mu^2 \right )$.
\end{lemma}
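The sampling scheme of Lemma~\ref{lem:cohenonline} admits a direct transcription into Python (our own sketch; note that the lemma's guarantee is probabilistic, so a single run need not satisfy the stated sandwich bound):

```python
import numpy as np

def online_sparsify(vectors, mu, delta, rng=None):
    # process v_1, ..., v_m online: keep v_i with probability p_i,
    # reweighted by 1/p_i, exactly as in the scheme of the lemma
    rng = rng or np.random.default_rng()
    d = len(vectors[0])
    b = 8 * np.log(d) / mu**2
    B = np.zeros((d, d))
    kept = 0
    for v in vectors:
        score = v @ np.linalg.inv(B + (delta / mu) * np.eye(d)) @ v
        p = min(b * (1 + mu) * score, 1.0)
        if rng.uniform() <= p:
            B += np.outer(v, v) / p
            kept += 1
    return B, kept
```

For very small instances every $p_i$ evaluates to $1$, in which case $B_m$ equals $A$ exactly and no sparsification takes place; the compression only appears once $m$ is large relative to $d\log d/\mu^2$.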
Now, we analyse Algorithm~\ref{alg:sparsified-ks}.
We begin by showing that for each pair $(S, B)$ constructed by Algorithm~\ref{alg:sparsified-ks}, $B$ is a spectral sparsifier of $S$ with high probability.
\begin{lemma} \label{lem:algpairs}
Let $\mathcal{L}_i$ be the set constructed by Algorithm~\ref{alg:sparsified-ks} at iteration $i$.
Then, for any $(S^\star, B^\star) \in \mathcal{L}_i$, it holds with probability $(1 - 1/d)$ that
\[
(1 - \mu) A_{S^\star} - \delta I \preceq B^\star \preceq (1 + \mu) A_{S^\star} + \delta I
\]
where $A_{S^\star} = \sum_{v \in S^\star} v v^\transpose$, and the parameters are set in Algorithm~\ref{alg:sparsified-ks} to be $\mu = \epsilon / 6$ and $\delta = \mu \min(c \sqrt{\alpha}, 1/2 - c \sqrt{\alpha})$.
\end{lemma}
\begin{proof}
We will show that for any pair $(S^\star, B^\star)$ constructed by Algorithm~\ref{alg:sparsified-ks}, $B^\star$ is equivalent to the output of the algorithm described in Lemma~\ref{lem:cohenonline} when applied to $S^\star$.
We prove this by induction on $i$.
The base case $i = 0$ follows immediately from the initialisation of $\mathcal{L}_0 = \{(\emptyset, \mathbf{0}_{d \times d})\}$.
For the inductive step we show that the conclusion holds for every pair in $\mathcal{L}_{i}$, assuming it holds for every pair in $\mathcal{L}_{i-1}$.
For each pair $(S^\star, B^\star) \in \mathcal{L}_{i}$,
the proof proceeds by a case distinction.
\textbf{Case 1: $(S^\star, B^\star) \in \mathcal{L}_{i-1}$.}
This case corresponds to the pairs $(S, B)$ added on Line~\ref{algline:lupdate} of Algorithm~\ref{alg:sparsified-ks}.
Accordingly, by the inductive hypothesis, we have that $B^\star$ is equivalent to the output of the algorithm described in Lemma~\ref{lem:cohenonline} applied to $S^\star$.
\textbf{Case 2: $(S^\star, B^\star) \not \in \mathcal{L}_{i-1}$.}
This case covers the pairs involving $S'$ added on Lines~\ref{algline:lupdate} and \ref{algline:lupdate2} of Algorithm~\ref{alg:sparsified-ks}.
Let $(S, B_{i-1})$ be the pair in $\mathcal{L}_{i-1}$ from which $(S^\star, B^\star)$ is constructed.
Notice that $S^\star = S \cup \{v_{i}\}$.
Then, by the construction of Algorithm~\ref{alg:sparsified-ks}, with probability $p_{i}$, we have
\[
B^\star = B_{i-1} + \frac{1}{p_{i}} v_{i}{v_{i}}^\transpose
\]
and with probability $1 - p_{i}$, we have $B^\star = B_{i-1}$, where $p_{i}$ is the probability defined in Lemma~\ref{lem:cohenonline}.
As such, $B^\star$ is the result of applying an iteration of the algorithm defined in Lemma~\ref{lem:cohenonline}, for the new vector $v_{i}$.
This maintains that $B^\star$ is equivalent to the output of the Lemma~\ref{lem:cohenonline} algorithm applied to $S^\star$ and completes the inductive argument.
\end{proof}
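The sampling step shared by Algorithm~\ref{alg:sparsified-ks} and the Lemma~\ref{lem:cohenonline} algorithm — keep each arriving vector $v_i$ with probability $p_i$ and reweight by $1/p_i$ — can be sketched as follows. This is an illustrative Python/NumPy sketch only: the exact probability $p_i$ of Lemma~\ref{lem:cohenonline} is not restated in this section, so as an assumption we use a standard online ridge-leverage-score estimate in its place.

```python
import numpy as np

def online_bss_sketch(vectors, mu, delta, rng=None):
    """One-pass sketch of the sampling scheme used in the proofs above.

    Each arriving vector v is kept with probability p and reweighted by
    1/p, so the estimate B tracks A = sum of v v^T in expectation.
    Assumption: the unspecified probability p_i of Lemma `cohenonline`
    is stood in for by the online ridge-leverage-score estimate
    p = min(1, (1 + mu) * v^T (B + delta I)^{-1} v).
    """
    rng = np.random.default_rng(0) if rng is None else rng
    d = len(vectors[0])
    B = np.zeros((d, d))
    for v in vectors:
        v = np.asarray(v, dtype=float)
        # approximate online leverage score of v w.r.t. the current B
        score = v @ np.linalg.solve(B + delta * np.eye(d), v)
        p = min(1.0, (1 + mu) * score)
        if rng.random() < p:          # keep v with probability p
            B += np.outer(v, v) / p   # reweight so the estimate is unbiased
    return B
```

When every score exceeds $1$ (vectors that are long relative to $\delta$), every $p$ equals $1$ and $B$ reproduces $A$ exactly; the interesting regime is $p < 1$, where $B$ retains only a small number of reweighted vectors.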
We now show that \emph{any} set $S \subset \{v_1, \ldots, v_m\}$ is well approximated by one of the sparsifiers constructed in Algorithm~\ref{alg:sparsified-ks}.
\begin{lemma} \label{lem:ssparsifier}
Let $\mathcal{I} = \{v_i\}_{i = 1}^m$ be the input to Algorithm~\ref{alg:sparsified-ks}. Let $S \subseteq \mathcal{I}$ be any fixed set, and $A = \sum_{v \in S} v v^\transpose$.
Then, with
probability $(1 - 1/d)$,
there is a matrix $B$ constructed by Algorithm~\ref{alg:sparsified-ks} such that
\[
(1 - \mu) A - \delta I \preceq B \preceq (1 + \mu) A + \delta I,
\]
where $\mu = \epsilon / 6$ and $\delta = \mu \min(c \sqrt{\alpha}, 1/2 - c \sqrt{\alpha})$.
\end{lemma}
\begin{proof}
We prove that one of the matrices $B$ constructed by Algorithm~\ref{alg:sparsified-ks} is equivalent to the output of the algorithm defined in Lemma~\ref{lem:cohenonline} applied to the set $S$.
Although the matrices constructed in Algorithm~\ref{alg:sparsified-ks} are always part of a pair $(S', B)$, in this proof we consider only the matrices $B$, and ignore the sets $S'$ which are constructed alongside them.
We now inductively define a sequence $B_0, B_1, \ldots, B_{m}$, such that $B_i$ is a matrix constructed by the algorithm in iteration $i$ and $B_i \in \mathcal{L}_i$ corresponds to the output of the Lemma~\ref{lem:cohenonline} algorithm applied to $S \cap \{v_1, \ldots, v_i\}$.
Firstly, let $B_0 = \mathbf{0}_{d \times d}$, which is the initial condition for the algorithm in Lemma~\ref{lem:cohenonline} and is constructed by Algorithm~\ref{alg:sparsified-ks} on Line~\ref{algline:basecase}.
Then, for the inductive step, we assume that $B_{i-1}$ is the output of the Lemma~\ref{lem:cohenonline} algorithm applied to $S \cap \{v_1, \ldots, v_{i-1}\}$ and we define $B_i$ by case distinction.
\textbf{Case 1: $v_i \not\in S$.}
In this case, we set $B_i = B_{i-1}$, and notice that if $B_{i-1}$ is in the set $\mathcal{L}_{i-1}$ constructed by Algorithm~\ref{alg:sparsified-ks}, then $B_i$ must be in the set $\mathcal{L}_i$ since every matrix $B$ in $\mathcal{L}_{i-1}$ is included in $\mathcal{L}_i$ on either Line~\ref{algline:lupdate} or Line~\ref{algline:lupdate2}.
Since $S \cap \{v_1, \ldots, v_{i-1}\} = S \cap \{v_1, \ldots, v_i\}$, we have that $B_i$ is the output of the algorithm defined in Lemma~\ref{lem:cohenonline} applied to $S \cap \{v_1, \ldots, v_i\}$ by the inductive hypothesis.
\textbf{Case 2: $v_i \in S$.}
In this case, we set $B_i$ to be either $B_{i-1}$ or $B_{i-1} + (1/p) v_i v_i^\transpose$, according to the result of the condition on Line~\ref{algline:randif} of Algorithm~\ref{alg:sparsified-ks}.
Notice that, since the definition of $p$ in Algorithm~\ref{alg:sparsified-ks} is the same as the definition in Lemma~\ref{lem:cohenonline}, $B_i$ corresponds to the result of applying an iteration of the algorithm in Lemma~\ref{lem:cohenonline} with $B_{i-1}$ and $v_i$.
Therefore, by the induction hypothesis, $B_i$ is equivalent to the output of the Lemma~\ref{lem:cohenonline} algorithm applied to $S \cap \{v_1, \ldots, v_i\}$, which completes the inductive construction of $B_1, \ldots, B_m$.
Finally, since our defined $B_m$ corresponds to the output of the algorithm in Lemma~\ref{lem:cohenonline} applied to $S$, we can apply Lemma~\ref{lem:cohenonline} to $S$ and $B_m$ which completes the proof.
\end{proof}
Finally, to prove Theorem~\ref{thm:algorithm}, we need only apply Lemma~\ref{lem:ssparsifier}
for the target set $S \subset \mathcal{I}$, and Lemma~\ref{lem:algpairs} for one of the pairs $(S', B)$ constructed by the algorithm.
In particular, we do not need to take the union bound over all sparsifiers constructed by the algorithm; rather,
it is enough that an accurate sparsifier is constructed for one specific target set.
\begin{proof}[Proof of Theorem~\ref{thm:algorithm}]
We first look at the case in which there is some $S \subset \mathcal{I}$,
such that for
\[
A_S = \sum_{i \in S} v_i v_i^\transpose
\]
it holds that
\[
\left(\frac{1}{2} - c \sqrt{\alpha}\right) \leq x^\transpose A_S x \leq \left(\frac{1}{2} + c \sqrt{\alpha} \right),
\]
for all unit vectors $x \in \mathbb{R}^d$.
By Lemma~\ref{lem:ssparsifier}, with probability greater than or equal to $1-1/d$, there exists some pair $(S',B)\in \mathcal{L}_m$ such that
\begin{align}
(1 - \mu) A_{S} - \delta I \preceq B \preceq (1 + \mu) A_{S} + \delta I, \label{eqn:thm1proof2}
\end{align}
where $\mu = \epsilon / 6$ and $\delta = \mu \min(c \sqrt{\alpha}, 1/2 - c \sqrt{\alpha}) \leq \mu$.
By Lemma~\ref{lem:algpairs}, with probability $1 - 1/d$, we also have that
\[
(1 - \mu) A_{S'} - \delta I \preceq B \preceq (1 + \mu) A_{S'} + \delta I,
\]
where $S'$ is the set constructed alongside $B$.
Taking the union bound over these two events, with probability at least $1-2/d$, we have for any unit vector $x\in \mathbb{R}^d$, that
\begin{align*}
x^\transpose A_{S'} x & \le \frac{1+\mu}{1-\mu}\left(\frac{1}{2} + c \sqrt{\alpha}\right) + \frac{2 \delta}{1 - \mu} & & & x^\transpose A_{S'} x & \geq \frac{1-\mu}{1+\mu}\left(\frac{1}{2} - c \sqrt{\alpha}\right) - \frac{2 \delta}{1 + \mu}\\
& \le \frac{1+3\mu}{1-\mu}\left(\frac{1}{2} + c \sqrt{\alpha}\right) & \text{and} & & & \ge \frac{1-3\mu}{1+\mu}\left(\frac{1}{2} - c \sqrt{\alpha}\right)\\
& \le (1+\epsilon)\left(\frac{1}{2} + c \sqrt{\alpha}\right) & & & & \ge (1-\epsilon)\left(\frac{1}{2} - c \sqrt{\alpha}\right),
\end{align*}
where we use the definition of $\delta$ and the fact that $\epsilon = 6 \mu \leq 1$.
Therefore, the set $S'$ satisfies \eqref{eq:KS_ep_condition} and will be returned by Algorithm~\ref{alg:sparsified-ks}.
On the other side, notice that, by the condition on Line~\ref{algline:if} of Algorithm~\ref{alg:sparsified-ks}, any set returned by the algorithm must satisfy \eqref{eq:KS_ep_condition}. Therefore,
with probability $1$ the algorithm will correctly report that there is no set $S\subset \mathcal{I}$ satisfying \eqref{eq:KS_ep_condition} if it is the case.
Finally, we analyse the running time of the algorithm. By Lemma~\ref{lem:cohenonline}, it holds that $B$ is constructed from $O(n)$ vectors with probability at least $1-1/d$.
For this reason, on Line~\ref{algline:loop} of Algorithm~\ref{alg:sparsified-ks} we consider only
the sparsifiers of size $\bigo{n}$. The remaining part of the algorithm contributes only polynomial factors to its running time, so the total running time of the algorithm is
\[
\bigo{\binom{m}{n}\cdot \mathrm{poly}(m, d)}. \qedhere
\]
\end{proof}
\section{$\mathsf{FNP}$-Hardness of $\mathsf{KS}_2\left(1/(4\sqrt{2})\right)$ \label{sec:hardness}}
This section studies the computational complexity of the $\mathsf{KS}_2(c)$ problem. We prove that $\mathsf{KS}_2\left(1/(4\sqrt{2})\right)$ is $\mathsf{FNP}$-hard.
This section is organised as follows.
In Section~\ref{sec:tfnp} we introduce the $\mathsf{FNP}$ complexity class.
We formally define the \textsf{NAE-3SAT-KS}\ problem in Section~\ref{sec:sat_npc}, and prove that this problem is $\mathsf{NP}$-hard. In Section~\ref{sec:ksnpc}, we build a reduction from the \textsf{NAE-3SAT-KS}\ problem to the $\mathsf{KS}_2(1/(4\sqrt{2}))$ problem.
\subsection{The $\mathsf{FNP}$ Complexity Class \label{sec:tfnp}}
In contrast with the complexity classes $\mathsf{P}$ and $\mathsf{NP}$, the class $\mathsf{FNP}$ is used to study problems whose output is more complex than a simple ``yes'' or ``no''.
Formally, given a binary relation $R$ and an input $X$, the corresponding \emph{function problem} is to find $Y$ such that $R(X, Y)$ holds or report ``no'' if no such $Y$ exists.
For example, we can take $X$ to be an instance $\mathcal{I} = \{v_i\}_{i = 1}^m$ of the $\mathsf{KS}_2(c)$ problem, and $Y \subseteq \mathcal{I}$ to be a candidate solution.
Then, the relation $R_{\mathsf{KS}_2(c)}(\mathcal{I}, Y)$ holds if and only if $Y$ satisfies \eqref{eq:KS_condition}.
A binary relation $R$ is in the class $\mathsf{FNP}$
if and only if there is a deterministic polynomial-time algorithm which can determine whether $R(X, Y)$ holds for any given pair $(X, Y)$~\cite{rich2008automata}.
Notice that every function problem has a natural corresponding decision problem.
Specifically, given a binary relation $R$ and a value of $X$, the decision problem asks whether there exists some $Y$ such that $R(X, Y)$ holds.
A function problem $F$ is $\mathsf{FNP}$-hard if there is a polynomial-time reduction from all problems in $\mathsf{FNP}$ to $F$.
It is known that if the decision problem corresponding to $F$ is $\mathsf{NP}$-hard, then $F$ is $\mathsf{FNP}$-hard~\cite{rich2008automata}, and we will use this fact in our proof of Theorem~\ref{thm:hardness}.
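For $R_{\mathsf{KS}_2(c)}$ in particular, the required polynomial-time verifier simply checks the spectral condition \eqref{eq:KS_condition} for a candidate subset. A minimal Python/NumPy sketch, assuming the instance is isotropic and that the condition takes the operator-norm form used throughout this section:

```python
import numpy as np

def ks2_relation_holds(vectors, subset, c):
    """Polynomial-time verifier sketch for the relation R_{KS_2(c)}.

    Assumption: the instance is isotropic (the outer products of all
    vectors sum to the identity), and the condition checked is
    || sum_{i in subset} v_i v_i^T - I/2 || <= c * sqrt(alpha)
    in the spectral norm, where alpha bounds the squared vector norms.
    """
    d = len(vectors[0])
    alpha = max(float(np.dot(v, v)) for v in vectors)
    B = np.zeros((d, d))
    for i in subset:
        B += np.outer(vectors[i], vectors[i])
    deviation = np.linalg.norm(B - 0.5 * np.eye(d), ord=2)  # spectral norm
    return deviation <= c * np.sqrt(alpha)
```

Since eigenvalue computation is polynomial time, this places $\mathsf{KS}_2(c)$ inside $\mathsf{FNP}$.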
\subsection{$\mathsf{NP}$-Completeness of
\textsf{NAE-3SAT-KS}\ \label{sec:sat_npc}}
In this subsection we study the following \textsf{NAE-3SAT-KS}\ problem, and prove that it is $\mathsf{NP}$-complete. We remark that we restrict our attention to SAT instances of a specific form, since these instances will be employed to prove the $\mathsf{NP}$-hardness of the $\mathsf{KS}_2\left(1/(4\sqrt{2})\right)$ problem.
\begin{problem}[\textsf{NAE-3SAT-KS}] Given a 3SAT instance $\psi$ that consists of a collection $C$ of clauses over the set $U$ of variables such that
\begin{enumerate}
\item every clause $c\in C$ has $3$ literals,
\item for every $u\in U$, each of $u$ and $\bar{u}$ appears in at most $2$ clauses of $C$,
\item for every $u \in U$, at least one of $u$ or $\bar{u}$ appears in exactly $2$ clauses of $C$, and
\item any two clauses share at most one literal and no variable appears twice in the same clause,
\end{enumerate}
the \textsf{NAE-3SAT-KS}\ problem asks if there is a satisfying assignment for $\psi$ such that every clause of $\psi$ has at least one true literal and at least one false literal. \label{prob:satnew}
\end{problem}
Our reduction is from the following well-known $\mathsf{NP}$-complete problem.
\begin{problem}[\textsf{NAE-3SAT}, \cite{garey_computers_1979}]
Given a 3SAT instance $\psi$ that consists of a collection $C$ of clauses over the set $U$ of variables such that every clause $c\in C$ has $3$ literals, the \textsf{NAE-3SAT}\ problem asks if there is a satisfying assignment for $\psi$ such that every clause of $\psi$ has at least one true literal and at least one false literal.
\end{problem}
\begin{theorem} \label{thm:nae2lit}
The \textsf{NAE-3SAT-KS}\ problem is $\mathsf{NP}$-complete.
\end{theorem}
\begin{proof}
Given any \textsf{NAE-3SAT-KS}\ instance $\psi$ and an assignment to $\psi$'s variables, it is straightforward to check in polynomial time whether the assignment satisfies $\psi$ and every clause of $\psi$ has at least one true literal and at least one false literal. Hence, the \textsf{NAE-3SAT-KS}\ problem is in $\mathsf{NP}$.
To prove that the \textsf{NAE-3SAT-KS}\ problem is $\mathsf{NP}$-complete, we build a reduction from the \textsf{NAE-3SAT}\ problem to the \textsf{NAE-3SAT-KS}\ problem. Specifically, for any \textsf{NAE-3SAT}\ instance $(U,C)$, where $U$ is the set of variables and $C$ is a collection of clauses, we construct an \textsf{NAE-3SAT-KS}\ instance $(U',C')$ such that $(U,C)$ is satisfiable in \textsf{NAE-3SAT}\ if and only if $(U',C')$ is satisfiable in \textsf{NAE-3SAT-KS}. Our construction of $(U',C')$ is as follows. Initially, we set $U'=U$ and $C'=C$.
Then, for any variable $x$ which appears only once in $C$, we remove $x$ from $U'$ and the corresponding clause from $C'$: the clause can always be satisfied by setting $x$ appropriately, so removing it does not change the satisfiability of $(U', C')$.
Then, for every remaining variable $x$, we replace the instances of $x$ and $\bar{x}$ with new variables and add additional clauses to ensure that the satisfiability is unchanged.
Specifically, for each $x$ left in $U'$ let
\begin{itemize}
\item $n_1 = \cardinality{\{c \in C : x \in c\}}$
\item $n_2 = \cardinality{\{c \in C : \bar{x} \in c\}}$
\end{itemize}
and set $n = n_1 + n_2$.
Then, we introduce new variables $x_1, \ldots, x_n$ and replace the instances of $x$ in $C'$ with $x_1, \ldots, x_{n_1}$.
Similarly, we replace the instances of $\bar{x}$ with $\bar{x}_{n_1 + 1}, \ldots, \bar{x}_{n}$.
Now, in order to ensure that $(U', C')$ is satisfiable if and only if $(U, C)$ is satisfiable, we introduce new clauses to $C'$ which have the effect of constraining the variables $x_1, \ldots, x_n$ to have the same truth value in any satisfying assignment.
To achieve this, let $n' \geq n$ be an odd number, and we introduce additional new variables $y_1, \ldots, y_{n'}$ and clauses
\begin{equation} \label{eq:yvars}
\left(\bar{y}_{i} \lor \bar{y}_{i + 1} \lor y_{i + 2}\right)\quad\mbox{for any}\ i \in [1, n'],
\end{equation}
where the indices are taken modulo $n'$.
We will see that these clauses ensure that the $y_i$ variables must all have the same value in a satisfying assignment.
We see this by a simple case distinction.
\begin{itemize}
\item Case 1: $y_1 = y_2$ in a satisfying assignment. Then, by the first clause in \eqref{eq:yvars} it must be that $y_2 = y_3$ since there must be at least one true literal and one false literal in each satisfied clause. Proceeding inductively through the clauses in \eqref{eq:yvars}, we establish that $y_1 = y_2 = \ldots = y_{n'}$.
\item Case 2: $y_1 \neq y_2$ in a satisfying assignment. We will show that this leads to a contradiction. By the last clause in \eqref{eq:yvars}, $y_{n'} \neq y_1$ since there must be at least one true literal and one false literal. Again, we proceed inductively from the $(n'-1)$th clause in \eqref{eq:yvars} down to establish that $y_1 \neq y_2, y_2 \neq y_3, \ldots, y_{n'-1} \neq y_{n'}$. Hence the values alternate around the cycle, giving $y_1 = y_3 = \ldots = y_{n'}$ since $n'$ is odd, which contradicts the established fact that $y_1 \neq y_{n'}$.
\end{itemize}
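This case distinction can also be checked exhaustively for small $n'$. The following Python sketch (illustrative; `nae_ok` encodes the not-all-equal requirement on a clause) enumerates all assignments satisfying the cyclic clauses of \eqref{eq:yvars}:

```python
from itertools import product

def nae_ok(a, b, c):
    # a clause is NAE-satisfied iff it has at least one true
    # and at least one false literal
    return (a or b or c) and not (a and b and c)

def y_gadget_solutions(n):
    """All assignments of y_1..y_n satisfying the cyclic clauses
    (~y_i | ~y_{i+1} | y_{i+2}), indices taken mod n, under NAE
    semantics."""
    return [ys for ys in product([False, True], repeat=n)
            if all(nae_ok(not ys[i], not ys[(i + 1) % n], ys[(i + 2) % n])
                   for i in range(n))]
```

For odd $n$ the only surviving assignments are the two constant ones, matching the argument above; for even $n$ the alternating assignments also survive, which is exactly why $n'$ is required to be odd.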
As such, we can use the variables $y_1, \ldots, y_{n'}$ with the assumption that they have the same value in any satisfying assignment of $(U', C')$.
It remains to construct clauses to guarantee that the variables $x_1, \ldots, x_n$ have the same value in any satisfying assignment.
We add the clauses
\begin{equation} \label{eq:xvars}
\left(x_{i} \lor \bar{x}_{i + 1} \lor y_i\right)\quad\mbox{for any}\ i \in [1, n],
\end{equation}
where the indices are taken modulo $n$.
We will show that $x_1 = x_2 = \ldots = x_n$ in a satisfying assignment by case distinction.
\begin{itemize}
\item Case 1: $x_1 = y_i$ for all $i$. By the first clause in \eqref{eq:xvars}, it must be that $x_1 = x_2$ since we cannot have $x_1 = \bar{x}_2 = y_i$ in a satisfying assignment. Then, proceeding inductively using each clause in turn we establish that $x_1 = x_2 = \ldots = x_n$.
\item Case 2: $\bar{x}_1 = y_i$ for all $i$. By the last clause in \eqref{eq:xvars}, it must be that $\bar{x}_n = \bar{x}_1$ since we cannot have $x_n = \bar{x}_1 = y_n$ in a satisfying assignment. Then, proceeding inductively from the $(n-1)$th clause down, we establish that $\bar{x}_1 = \bar{x}_2 = \ldots = \bar{x}_n$.
\end{itemize}
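Assuming the $y$ variables have already been forced to a common value (as guaranteed by the previous gadget), the $x$ gadget admits the same kind of exhaustive check. In this illustrative Python sketch we fix every $y_i$ to one common Boolean value `y`:

```python
from itertools import product

def nae_ok(a, b, c):
    # NAE-satisfied: at least one true and at least one false literal
    return (a or b or c) and not (a and b and c)

def x_gadget_solutions(n, y):
    """All assignments of x_1..x_n satisfying the cyclic clauses
    (x_i | ~x_{i+1} | y_i), indices taken mod n, with every y_i fixed
    to the common value y."""
    return [xs for xs in product([False, True], repeat=n)
            if all(nae_ok(xs[i], not xs[(i + 1) % n], y) for i in range(n))]
```

For either value of `y`, only the two constant assignments survive, so $x_1 = x_2 = \ldots = x_n$ in any satisfying assignment, as claimed.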
Notice that by this construction, each literal $x_i$, $\bar{x}_i$, $y_i$, and $\bar{y}_i$ now appears at most twice in $C'$, no two clauses share more than one literal, and no variable appears twice in the same clause.
Additionally, every $x_i$ and $\bar{x}_i$ appears exactly once in the clauses added by \eqref{eq:xvars}.
Since the variable $x_i$ also appears exactly once in the clauses corresponding directly to $C$,
requirement (3) of the \textsf{NAE-3SAT-KS}\ problem is satisfied.
Moreover, we have that $(U', C')$ has a satisfying assignment if $(U, C)$ has a satisfying assignment; this follows by setting the values of $x_1, \ldots, x_n$ in $U'$ to the value of their corresponding $x \in U$.
On the other hand, any satisfying assignment of $(U', C')$ corresponds to a satisfying assignment of $(U, C)$, since we must have that $x_1 = \ldots = x_n$ and can set the value of $x \in U$ to be the same value to get a satisfying assignment of $(U, C)$.
Finally, notice that our new instance $(U',C')$ of \textsf{NAE-3SAT-KS}\ can be constructed in polynomial time in the size of the instance $(U,C)$ of \textsf{NAE-3SAT}.
This completes the proof.
\end{proof}
\subsection{$\mathsf{FNP}$--Hardness of $\mathsf{KS}_2\left(1/(4\sqrt{2})\right)$ \label{sec:ksnpc}}
We now show that the $\mathsf{KS}_2(c)$ problem is $\mathsf{FNP}$-hard for any $c\leq 1/(4\sqrt{2})$, i.e.,~Theorem~\ref{thm:hardness}.
At a high level, the proof is by reduction from the \textsf{NAE-3SAT-KS}\ problem.
Given an instance of the \textsf{NAE-3SAT-KS}\ problem, we will construct an instance $\mathcal{I}$ of $\mathsf{KS}_2(c)$ such that
\begin{itemize}
\item if the \textsf{NAE-3SAT-KS}\ instance is satisfiable, then there is a set $S \subset \mathcal{I}$ with $\sum_{v \in S} v v^\transpose = (1/2) \cdot I$, and
\item if the \textsf{NAE-3SAT-KS}\ instance is not satisfiable, then for all sets $S \subset \mathcal{I}$ we have
\[
\norm{\sum_{v \in S} v v^\transpose - \frac{1}{2} I} \geq \frac{1}{4 \sqrt{2}} \sqrt{\alpha}.
\]
\end{itemize}
This will establish that the $\mathsf{KS}_2\left(1/\left(4\sqrt{2}\right)\right)$ problem is $\mathsf{FNP}$-hard, and that it is $\mathsf{NP}$-hard to distinguish between instances of $\mathsf{KS}_2(c)$ with $\mathcal{W}(\mathcal{I}) = 0$ and those for which $\mathcal{W}(\mathcal{I}) \geq \left(1 / \left(4 \sqrt{2}\right)\right) \sqrt{\alpha}$.
\begin{proof}[Proof of Theorem~\ref{thm:hardness}]
We prove that $\mathsf{KS}_2(1/(4\sqrt{2}))$ is $\mathsf{NP}$-hard by a reduction from the \textsf{NAE-3SAT-KS}\ problem to the decision version of the $\mathsf{KS}_2\left(1/\left(4\sqrt{2}\right)\right)$ problem.
We are given an instance $(U, C)$ of the \textsf{NAE-3SAT-KS}\ problem, and construct an instance of $\mathsf{KS}_2(c)$.
Let us refer to
\begin{itemize}
\item the clauses in $C$ as $c_1, \ldots, c_m$;
\item the variables in $U$ as $x_1, \ldots, x_n$ and we sometimes write $x_i$ and $\bar{x}_i$ for the un-negated and negated literals.
\end{itemize}
Our constructed $\mathsf{KS}_2(c)$ instance has $\bigo{n + m}$ dimensions.
Specifically, there is one dimension for each clause in $C$ and one dimension for each variable in $U$ which appears both negated and un-negated in $C$.
We use
\begin{itemize}
\item $d^c_j$ to refer to the dimension corresponding to clause $c_j$, and
\item $d^x_j$ to refer to the dimension corresponding to variable $x_j$.
\end{itemize}
We add $\bigo{m + n}$ vectors to our $\mathsf{KS}_2(c)$ instance.
Conceptually, we add one vector for each clause and $4$ vectors for each literal.
We use
\begin{itemize}
\item $v^c_j$ to refer to the vector corresponding to clause $c_j$, and
\item $v^x_{j, 1}$ to $v^x_{j, 4}$ or $v^{\bar{x}}_{j, 1}$ to $v^{\bar{x}}_{j, 4}$ to refer to the vectors corresponding to the literal $x_j$ or $\bar{x}_j$.
\end{itemize}
For each clause $c_j$, we set $v^c_j(d^c_j) = 1/2$, and set the other entries of $v^c_j$ to be $0$.
Table~\ref{tab:2literalvectors} completes the definition of the vectors corresponding to literals.
For each literal, we define only the value on the dimensions corresponding to the variable and the clauses containing the literal; all other entries in the vector are $0$. Let $A$ be the set of vectors defined above. Notice that the squared norms of the vectors in $A$ are bounded above by $1 / 4$ and so $\alpha=1 / 4$ in the constructed \textsf{KS}$_2(c)$ instance.
\begin{table}[h]
\centering
\begin{tabular}{cccc}
\toprule
Vector & Value on $d^c_j$ & Value on $d^c_k$ & Value on $d^x_i$ \\
\midrule
$v^x_{i, 1}$ & $1/4$ & $1/4$ & $1 / \sqrt{8}$ \\
$v^x_{i, 2}$ & $1/4$ & $1/4$ & $- 1 / \sqrt{8}$ \\
$v^x_{i, 3}$ & $1/4$ & $- 1/4$ & $1 / \sqrt{8}$ \\
$v^x_{i, 4}$ & $1/4$ & $- 1/4$ & $- 1 / \sqrt{8}$ \\
\bottomrule
\end{tabular}
\caption{The construction of the vectors in the $\mathsf{KS}_2(c)$ instance for a literal $x_i$ which appears in clause $c_j$ and possibly also in clause $c_k$. If a literal appears in only one clause, $c_j$, we ignore the middle column corresponding to $c_k$; that is, the vectors corresponding to $x_i$ are non-zero only on dimensions $d^c_j$ and $d^x_i$.}
\label{tab:2literalvectors}
\end{table}
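The entries of Table~\ref{tab:2literalvectors} can be verified numerically: restricted to the three relevant coordinates $(d^c_j, d^c_k, d^x_i)$, the four vectors of a literal contribute $1/4$ to each clause dimension, $1/2$ to the variable dimension, and nothing off-diagonal. A quick illustrative check in Python/NumPy:

```python
import numpy as np

# The four vectors of a literal x_i appearing in clauses c_j and c_k,
# restricted to the coordinates (d^c_j, d^c_k, d^x_i), exactly as in
# Table 2 of the construction.
s = 1 / np.sqrt(8)
V = np.array([
    [1/4,  1/4,  s],
    [1/4,  1/4, -s],
    [1/4, -1/4,  s],
    [1/4, -1/4, -s],
])

# the literal's total contribution to B = sum of v v^T
G = sum(np.outer(v, v) for v in V)
```

Here `G` equals $\mathrm{diag}(1/4, 1/4, 1/2)$, and each vector has squared norm $1/4$, which is the diagonal bookkeeping used in the isotropy calculation.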
To complete the reduction, we will show the following:
\begin{enumerate}
\item It holds that
\[
\sum_{v \in A} v v^\transpose = I.
\]
\item If the original \textsf{NAE-3SAT-KS}\ instance has a satisfying assignment, then there is a set $S \subset A$ such that
\[
\sum_{v \in S} v v^\transpose = \frac{1}{2} I.
\]
\item Any set $S \subset A$ such that
\[
\norm{\sum_{v \in S} v v^\transpose - \frac{1}{2} I} < \frac{1}{8 \sqrt{2}} = \frac{1}{4\sqrt 2}\sqrt \alpha
\]
corresponds to a satisfying assignment of the original \textsf{NAE-3SAT-KS}\ instance.
\end{enumerate}
\paragraph{Vectors in $A$ are isotropic.}
Let us prove that
\[
\sum_{v \in A} v v^\transpose = I.
\]
Let $B = \sum_{v \in A} v v^\transpose$. Then, for any variable $x_i$, we have that
\begin{align*}
B(d^{x}_i, d^x_i) & = \sum_{v \in A} v(d^x_i)^2 \\
& = \sum_{j=1}^4 v^x_{i, j}(d^x_i)^2 + \sum_{j=1}^4 v^{\bar{x}}_{i, j}(d^x_i)^2 \\
& = 1.
\end{align*}
Additionally, for any clause $c_i$ we have that
\begin{align*}
B(d^c_i, d^c_i) & = \sum_{v \in A} v(d^c_i)^2 \\
& = v^c_i(d^c_i)^2 + \sum_{x_j \in c_i} \sum_{k=1}^4 v^x_{j, k}(d^c_i)^2 \\
& = \frac{1}{4} + 3 \cdot \frac{1}{4} \\
& = 1.
\end{align*}
This demonstrates that the diagonal entries of $B$ are all $1$.
We now see that the off-diagonal entries are all $0$.
First, notice that for any two dimensions relating to variables, $d^x_i$ and $d^x_j$, we have
\begin{align*}
B(d^x_i, d^x_j) & = \sum_{v \in A} v(d^x_i) v(d^x_j) = 0,
\end{align*}
since there is no vector in $A$ with a non-zero contribution to more than one dimension corresponding to a variable.
Now, let us consider two dimensions corresponding to different clauses $c_i$ and $c_j$.
We have
\begin{align*}
B(d^c_i, d^c_j) & = \sum_{v \in A} v(d^c_i) v(d^c_j) \\
& = \sum_{x_k \in c_i \cap c_j} \sum_{\ell = 1}^4 v^x_{k, \ell}(d^c_i) \cdot v^x_{k, \ell}(d^c_j) \\
& = \sum_{x_k \in c_i \cap c_j} \left( \frac{1}{16} + \frac{1}{16} - \frac{1}{16} - \frac{1}{16} \right) \\
& = 0,
\end{align*}
where we use the fact that $c_i$ and $c_j$ share at most one literal.
Finally, consider the case when one dimension corresponds to the clause $c_i$ and the other dimension corresponds to the variable $x_j$.
If the variable $x_j$ does not appear in $c_i$, then there are no vectors with a non-zero contribution to the two dimensions and so the entry is $0$.
Otherwise, we have
\begin{align*}
B(d^c_i, d^x_j) & = \sum_{v \in A} v(d^c_i) v(d^x_j) \\
& = \sum_{k = 1}^4 v^x_{j, k}(d^c_i) v^x_{j, k}(d^x_j) \\
& = \frac{1}{4 \sqrt{8}} + \frac{1}{4 \sqrt{8}} - \frac{1}{4 \sqrt{8}} - \frac{1}{4 \sqrt{8}} \\
& = 0,
\end{align*}
where we use the fact that no variable appears twice in the same clause.
This completes the proof that
\[
\sum_{v \in A}v v^\transpose = I.
\]
\paragraph{If the \textsf{NAE-3SAT-KS}\ instance is satisfiable, then there is a solution to $\mathsf{KS}_2(1/(4\sqrt{2}))$.}
Given a satisfying assignment to the \textsf{NAE-3SAT-KS}\ problem, let $T \subset U$ be the set of variables which are set to be \textsc{True} and let $F \subset U$ be the set of variables which are set to be \textsc{False}.
Recall that in a satisfying assignment, each clause in $C$ contains either $1$ or $2$ true literals.
Let $C' \subset C$ be the set of clauses with exactly $1$ true literal in the satisfying assignment.
Then, we define $S$ to be
\[
S \triangleq \{ v^x_{i, 1}, v^x_{i, 2}, v^x_{i, 3}, v^x_{i, 4} : x_i \in T \} \cup
\{ v^{\bar{x}}_{i, 1}, v^{\bar{x}}_{i, 2}, v^{\bar{x}}_{i, 3}, v^{\bar{x}}_{i, 4} : x_i \in F \} \cup
\{ v^c_i : c_i \in C' \},
\]
and we show that
\[
\sum_{v \in S} v v^\transpose = \frac{1}{2} I.
\]
Now, we can repeat the calculations of the previous paragraph, this time setting $B = \sum_{v \in S} v v^\transpose$ to show that $B = (1/2) I$.
Specifically, for any variable $x_i$, it holds that
\begin{align*}
B(d^x_i, d^x_i) = \frac{1}{2}
\end{align*}
since only the vectors corresponding to the negated \emph{or} un-negated variable are included.
For any clause $c_i \in C'$, we have
\begin{align*}
B(d^c_i, d^c_i) & = v^c_i(d^c_i)^2 + \sum_{k=1}^4 v^x_{j, k}(d^c_i)^2 \\
& = \frac{1}{4} + \frac{1}{4} = \frac{1}{2}
,\end{align*}
where $x_j$ is the literal which is set to be true in the clause $c_i$.
Similarly, for any clause in $c_i \in C \setminus C'$, we have
\begin{align*}
B(d^c_i, d^c_i) & = \sum_{k=1}^4 v^x_{j, k}(d^c_i)^2 + \sum_{k=1}^4 v^x_{\ell, k}(d^c_i)^2 \\
& = \frac{1}{4} + \frac{1}{4} = \frac{1}{2},
\end{align*}
where the literals $x_j$ and $x_{\ell}$ are set to be true in the clause $c_i$.
Then, notice that the calculations for the off-diagonal entries follow in the same way as before.
This completes the proof that a satisfying assignment for the \textsf{NAE-3SAT-KS}\ problem implies a solution to the $\mathsf{KS}_2(1/(4\sqrt{2}))$ problem.
\paragraph{If there is a solution to $\mathsf{KS}_2(c)$, then the \textsf{NAE-3SAT-KS}\ instance is satisfiable.}
We prove this by a contrapositive argument. That is, we show that for any set $S'$ which does not correspond to a satisfying assignment of the \textsf{NAE-3SAT-KS}\ problem, there must be some vector $y$ with $\norm{y} = 1$ such that
\begin{equation} \label{eq:epserror}
\abs{y^\transpose \left(\sum_{v \in S'} v v^\transpose\right) y - \frac{1}{2}} \geq \epsilon
\end{equation}
for $\epsilon = \frac{1}{8 \sqrt{2}}$.
Specifically, we will analyse three cases, and show that
\begin{enumerate}
\item if there is some variable $x_i$ such that $S'$ does not contain exactly $4$ of the vectors \[\{v^x_{i, 1}, v^x_{i, 2}, v^x_{i, 3}, v^x_{i, 4}, v^{\bar{x}}_{i, 1}, v^{\bar{x}}_{i, 2}, v^{\bar{x}}_{i, 3}, v^{\bar{x}}_{i, 4}\},\] then there is a vector $y$ satisfying \eqref{eq:epserror} for $\epsilon = 1/8$;
\item if Item~(1) does not apply, then if there is some literal $x_i$ such that $S'$ contains $1$, $2$, or $3$ of the vectors $\{v^x_{i, 1}, v^x_{i, 2}, v^x_{i, 3}, v^x_{i, 4}\}$, then there is a vector $y$ satisfying \eqref{eq:epserror} for $\epsilon = \frac{1}{8 \sqrt{2}}$;
\item if neither Item~(1) nor (2) applies, then if $S'$ does not correspond to a satisfying assignment of the original \textsf{NAE-3SAT-KS}\ instance, there must be a vector $y$ satisfying \eqref{eq:epserror} for $\epsilon = 1/4$.
\end{enumerate}
For the first case, suppose that there is some variable $x_i$ such that $S'$ does not contain exactly $4$ vectors corresponding to the variable $x_i$.
Let $k \neq 4$ be the number of such vectors, and let $y$ be the vector with all zeros except for $y(d^x_i) = 1$. Notice that
\begin{align*}
\abs{y^\transpose \left(\sum_{v \in S'} v v^\transpose\right) y - \frac{1}{2}}
= \abs{ \sum_{v \in S'} v(d^x_i)^2 - \frac{1}{2} }
= \abs{ \frac{k}{8} - \frac{1}{2} }
\geq \frac{1}{8}.
\end{align*}
For the second case, suppose that the set $S'$ contains $4$ vectors for each variable, but there is some literal $x_i$ such that $S'$ contains some but not all of the vectors corresponding to $x_i$.
By condition (3) of the \textsf{NAE-3SAT-KS}\ problem (Problem~\ref{prob:satnew}), we can assume that $x_i$ appears in two clauses $c_j$ and $c_k$; otherwise, this holds for $\bar{x}_i$, and $S'$ contains some, but not all, of the vectors corresponding to $\bar{x}_i$, since it contains exactly $4$ vectors corresponding to the variable $x_i$.
Now, we define
\[
B = \sum_{v \in S'} v v^\transpose
\]
and we consider the absolute values of certain off-diagonal entries in $B$, which are summarised in Table~\ref{tab:offdiags}.
\begin{table}[t]
\centering
\begin{tabular}{cccc}
\toprule
Vectors in $S'$ & $ |B(d^x_i, d^c_j)|$ & $\abs{B\left(d^x_i, d^c_k\right)}$ & $ |B(d^c_j, d^c_k)|$ \\
\midrule
One vector $v^x_{i, \ell}$ & $1 / \left(8 \sqrt{2}\right)$ & $1 / \left(8 \sqrt{2}\right)$ & $1 / 16$ \\
Vectors $v^x_{i, 1}$ and $v^x_{i, 2}$ & $0$ & $0$ & $1 / 8$ \\
Vectors $v^x_{i, 1}$ and $v^x_{i, 3}$ & $1 / \left(4 \sqrt{2}\right)$ & $0$ & $0$ \\
Vectors $v^x_{i, 1}$ and $v^x_{i, 4}$ & $0$ & $1 / \left( 4 \sqrt{2}\right)$ & $0$ \\
Vectors $v^x_{i, 2}$ and $v^x_{i, 3}$ & $0$ & $1 / \left(4 \sqrt{2}\right)$ & $0$ \\
Vectors $v^x_{i, 2}$ and $v^x_{i, 4}$ & $1 / \left(4 \sqrt{2}\right)$ & $0$ & $0$ \\
Vectors $v^x_{i, 3}$ and $v^x_{i, 4}$ & $0$ & $0$ & $1 / 8$ \\
Three vectors $v^x_{i, \ell}$ & $1 /\left(8 \sqrt{2}\right)$ & $1 /\left(8 \sqrt{2}\right)$ & $1 / 16$ \\
\bottomrule
\end{tabular}
\caption{The absolute values of certain off-diagonal entries in $B = \sum_{v \in S'} v v^\transpose$, depending on which vectors corresponding to the literal $x_i$ are included in $S'$. We assume that $x_i$ appears in the clauses $c_j$ and $c_k$.}
\label{tab:offdiags}
\end{table}
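Table~\ref{tab:offdiags} can be reproduced by enumerating every non-empty proper subset of the four vectors: in each case at least one of the three off-diagonal entries has magnitude at least $1/(8\sqrt{2})$, while the full set of four contributes no off-diagonal mass at all. An illustrative Python/NumPy sketch, with coordinates ordered $(d^c_j, d^c_k, d^x_i)$:

```python
import numpy as np
from itertools import combinations

# The four vectors of the literal x_i, restricted to (d^c_j, d^c_k, d^x_i).
s = 1 / np.sqrt(8)
V = np.array([
    [1/4,  1/4,  s],
    [1/4,  1/4, -s],
    [1/4, -1/4,  s],
    [1/4, -1/4, -s],
])

def max_offdiag(subset):
    """Largest |off-diagonal entry| of sum_{k in subset} v_k v_k^T."""
    B = sum(np.outer(V[k], V[k]) for k in subset)
    return max(abs(B[0, 1]), abs(B[0, 2]), abs(B[1, 2]))
```

Enumerating subsets of sizes $1$ to $3$ with `combinations` confirms the lower bound of $1/(8\sqrt{2})$ claimed in the table.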
Notice that, regardless of which vectors corresponding to $x_i$ are included, there are two indices $\hat{d}_1$ and $\hat{d}_2$ such that
\[
\abs{B(\hat{d}_1, \hat{d}_2)} \geq \frac{1}{8 \sqrt{2}}.
\]
Using the indices $\hat{d_1}$ and $\hat{d_2}$, define the unit vector
\begin{align*}
y = \twopartdefow
{\frac 1 {\sqrt 2}(\mathbf{1}_{\hat{d_1}}+\mathbf{1}_{\hat{d_2}})}
{\text{sgn}(B(\hat{d_1},\hat{d_1})+B(\hat{d_2},\hat{d_2})-1) = \text{sgn}(B(\hat{d_1},\hat{d_2}))}
{\frac 1 {\sqrt 2}(\mathbf{1}_{\hat{d_1}}-\mathbf{1}_{\hat{d_2}})}
\end{align*}
where $\text{sgn}(\cdot)$ is the sign function.
Then we have
\begin{align*}
\abs{y^{\transpose}By - \frac 1 2} &= \abs{\frac 1 2 \Big(B(\hat{d_1},\hat{d_1}) + B(\hat{d_2},\hat{d_2}) \pm B(\hat{d_1},\hat{d_2}) \pm B(\hat{d_2},\hat{d_1})\Big)-\frac 1 2}\\
& = \frac 1 2\abs{B(\hat{d_1},\hat{d_1}) + B(\hat{d_2},\hat{d_2}) -1 \pm 2B(\hat{d_1},\hat{d_2})}\\
&=\frac 1 2 \left(\abs{B(\hat{d_1},\hat{d_1})+B(\hat{d_2},\hat{d_2}) -1} + 2\abs{B(\hat{d_1},\hat{d_2})}\right)\\
& \geq \abs{B\left(\hat{d}_1, \hat{d}_2\right)} \\
& \geq \frac{1}{8 \sqrt{2}},
\end{align*}
where the third equality follows by the construction of $y$.
Finally, we consider the third case, in which there are $4$ vectors in $S'$ for each variable, and all $4$ vectors correspond to the same literal.
It is clear that such a set $S'$ corresponds unambiguously to an assignment for the original variables in the \textsf{NAE-3SAT-KS}\ instance: specifically, one can set a variable $x_i$ to be \textsc{True} if $S'$ contains $\{v^x_{i, 1}, v^x_{i, 2}, v^x_{i, 3}, v^x_{i, 4}\}$, and set $x_i$ to be \textsc{False} if $S'$ contains $\{v^{\bar{x}}_{i, 1}, v^{\bar{x}}_{i, 2}, v^{\bar{x}}_{i, 3}, v^{\bar{x}}_{i, 4}\}$.
Then, suppose that there is some clause $c_j \in C$ which is not satisfied by this assignment.
This implies that either all $12$ of the vectors corresponding to literals in $c_j$ are included in $S'$, or none of the vectors corresponding to literals in $c_j$ are included in $S'$.
In either case, we can set $y$ to be the indicator vector of the dimension $d^c_j$, and have that
\[
\abs{y^\transpose B y - \frac{1}{2}} = \abs{\sum_{v \in S'} v(d^c_j)^2 - \frac{1}{2}} \geq \frac{1}{4}
\]
since the literal vectors contribute either $0$ or $3/4$ to $\sum_{v \in S'} v(d^c_j)^2$, and including $v^c_j$ adds at most a further $1/4$, so the sum lies in $\{0, 1/4, 3/4, 1\}$.
This completes the reduction from the \textsf{NAE-3SAT-KS}\ problem to the decision version of the $\mathsf{KS}_2(c)$ problem for $c \leq 1 / (4 \sqrt{2})$ which implies that $\mathsf{KS}_2\left(1/\left(4\sqrt{2}\right)\right)$ is $\mathsf{FNP}$-hard.
Furthermore, notice that by the reduction in this proof,
\begin{itemize}
\item if the \textsf{NAE-3SAT-KS}\ instance is satisfiable, then the constructed instance $\mathcal{I}$ of the $\mathsf{KS}_2(c)$ problem satisfies $\mathcal{W}(\mathcal{I}) = 0$;
\item if the \textsf{NAE-3SAT-KS}\ instance is not satisfiable, then the constructed instance $\mathcal{I}$ of the $\mathsf{KS}_2(c)$ problem satisfies $\mathcal{W}(\mathcal{I}) \geq 1 / \left(4 \sqrt{2}\right) \cdot \sqrt{\alpha}$.
\end{itemize}
This shows that distinguishing between instances with $\mathcal{W}(\mathcal{I}) = 0$ and $\mathcal{W}(\mathcal{I}) \geq 1 / \left(4 \sqrt{2} \right) \cdot \sqrt{\alpha}$ is $\mathsf{NP}$-hard, and completes the proof.
\end{proof}
\section{Conclusion}
This paper studies the algorithms and complexity of the Kadison-Singer problem through the $\mathsf{KS}_2(c)$ problem, and presents two results.
On one side, we prove that the $\mathsf{KS}_2(c)$ problem for any $c\in\mathbb{R}^+$ can be solved in quasi-polynomial time when $d=O(\log m)$, which suggests that the problem is much easier to solve in low dimensions.
The key to our algorithm design is a novel application of online spectral
sparsification subroutines, with which we are able to efficiently construct representations of all spectral
equivalence classes over time and reduce the enumeration space of the candidate solutions.
We expect that our work could motivate more research on the applications of spectral sparsification and related problems in numerical linear algebra to the algorithmic Kadison-Singer problem.
On the other side, our \textsf{NP}-hardness result shows that the Kadison-Singer type problem in arbitrary dimensions can be as hard as solving the \textsf{SAT} problem, and that the $\mathsf{KS}_2(c)$ problem belongs to different complexity classes for different values of $c$. Hence, a more refined classification of its computational complexity would help us better understand the complexity of the algorithmic Kadison-Singer problem.
In our view, both directions left open by this paper are very interesting, and we leave them for future work.
\paragraph{Acknowledgement.} We would like to thank an anonymous reviewer for their detailed and valuable comments on an earlier version of our paper. These comments helped us significantly improve the presentation of the paper.
\bibliographystyle{alpha}
\section{Introduction}
\label{sec:intro}
\subsection{Summary}
\label{subsec:summary}
Fix some algebraically closed field $k$ of characteristic zero and let $X$ be a smooth projective curve over $k$. We recall that for $x\in X(k)$ and $f,g\in k(X)^\times$ \emph{the tame symbol of the functions $f$ and $g$ at $x$} is defined by the following formula:
$$(f,g)_x=(-1)^{\mathrm{ord}_x(f)\mathrm{ord}_x(g)}\left(\dfrac {g^{\mathrm{ord}_x(f)}}{f^{\mathrm{ord}_x(g)}}\right)(x).$$
We have interchanged $f$ and $g$ in this formula for consistency with the definitions given below.
The famous Weil reciprocity law states that for any $f,g\in k(X)^\times$ the following formula holds:
$$\prod\limits_{x\in X(k)}(f,g)_x=1.$$
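For instance (a check that is ours, not part of the original text), take $X=\mathbb P^1$ with coordinate $t$, $f=t$ and $g=t-1$; at infinity the local parameter is $1/t$, so $\mathrm{ord}_\infty(f)=\mathrm{ord}_\infty(g)=-1$:

```latex
\begin{align*}
(f,g)_0&=(-1)^{1\cdot 0}\,\frac{g^{1}}{f^{0}}(0)=(t-1)\big|_{t=0}=-1,\\
(f,g)_1&=(-1)^{0\cdot 1}\,\frac{g^{0}}{f^{1}}(1)=\frac 1t\Big|_{t=1}=1,\\
(f,g)_\infty&=(-1)^{(-1)(-1)}\,\frac{g^{-1}}{f^{-1}}(\infty)
  =-\frac{t}{t-1}\Big|_{t=\infty}=-1,
\end{align*}
% and the product (-1) \cdot 1 \cdot (-1) over all points of P^1 is indeed 1.
```

The only points contributing to the product are the zeros and poles of $f$ and $g$, so the law reduces to a finite check.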
The tame-symbol map induces a well-defined map $\Lambda^2 k(X)^\times\to k^\times$, which we denote by $\ts[2]_x$. In a similar way one can define the map $\Lambda^{3}k(X)^\times\to \Lambda^2 k^\times$, which we denote by $\ts_x$. Unlike in the previous case, the total residue map $$\sum\limits_{x\in X(k)}\ts_x\colon \Lambda^3 k(X)^\times\to \Lambda^2 k^\times$$ is not equal to zero. A. Suslin \cite{suslin1979reciprocity} proved that the image of this map is generated by the elements of the form $c\wedge (1-c)$, $c\in k^\times$. This result is called the \emph{Suslin reciprocity law}. Denote the element $c\wedge (1-c)$ by $\delta_2(c)$. There are many relations between elements of the form $\delta_2(c)$. The following proposition holds:
\begin{proposition}
For any $x,y \in k\backslash\{0,1\}, x\ne y$ the following formula holds:
\begin{equation*}
\delta_2(x)+\delta_2(y/x)+\delta_2\left((1-x)/(1-y)\right)=\delta_2(y)+\delta_2\left(\dfrac {1-x^{-1}}{1-y^{-1}}\right).
\end{equation*}
\end{proposition}
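The proposition is stated without proof; the following direct verification (our sketch) works modulo $2$-torsion, i.e. modulo wedges with $-1$, which vanish after tensoring with $\mathbb Q$. One uses the identities

```latex
\[
1-\frac{y}{x}=\frac{x-y}{x},\qquad
1-\frac{1-x}{1-y}=\frac{x-y}{1-y},\qquad
\frac{1-x^{-1}}{1-y^{-1}}=\frac{y(x-1)}{x(y-1)},\qquad
1-\frac{1-x^{-1}}{1-y^{-1}}=\frac{y-x}{x(y-1)}.
\]
% Expanding every term delta_2(c) = c \wedge (1 - c) by bilinearity of the wedge,
% both sides of the claimed identity reduce to the same expression
%   x\wedge(1-x) + y\wedge(x-y) - y\wedge x - x\wedge(x-y)
%     + (1-x)\wedge(x-y) - (1-x)\wedge(1-y) - (1-y)\wedge(x-y).
```

The cancellation uses only bilinearity, $a\wedge a=0$, and the fact that $a\wedge(-1)$ is $2$-torsion.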
This proposition motivates the following definition:
\begin{definition}
For a field $F$ denote by ${B_2}(F)$ the \emph{pre-Bloch} group of $F$. It is an abelian group generated by the elements $\{x\}_2, x\in \mathbb P^1(F)$ modulo the following relations:
\begin{equation*}
\begin{split}
\label{five-relation}
\{x\}_2-\{y\}_2+\{y/x\}_2-\left\{\dfrac {1-x^{-1}}{1-y^{-1}}\right\}_2+\left\{(1-x)/(1-y)\right\}_2=0,\\
\{0\}_2=\{1\}_2=\{\infty\}_2=0,
\end{split}
\end{equation*}
where $x,y\in F\backslash\{0,1\}, x\ne y$.
\end{definition}
Suslin reciprocity law implies that there is a map $\left(\Lambda^3 k(X)^\times\right)\otimes \mathbb Q \dashrightarrow B_2(k)\otimes \mathbb Q$ making the following diagram commutative:
\begin{equation*}\label{diagramm:srl_motivation}
\begin{tikzcd}[row sep=huge]
& {\left(\Lambda^3 k(X)^\times\right)\otimes \mathbb Q} \\
B_2(k)\otimes \mathbb Q & \left(\Lambda^2 k^\times\right)\otimes \mathbb Q.
\arrow["{\delta_{2}}", from=2-1, to=2-2]
\arrow["\sum\limits_{x\in X(k)}\ts_x",from=1-2, to=2-2]
\arrow[dashed,from=1-2, to=2-1]
\end{tikzcd}
\end{equation*}
This motivates the following definition:
\begin{definition}
\label{def:lifted_rec_map_summary}
Let $X$ be a smooth projective curve over $k$. A map $h\colon \left(\Lambda^3 k(X)^\times\right)\otimes \mathbb Q \to B_2(k)\otimes \mathbb Q$ is called \emph{a lifted reciprocity map} on the field $k(X)$ if the following statements are true:
\begin{enumerate}
\item We have $h(c\wedge f_1\wedge f_2)=0$ for any $c\in k^\times$ and $f_1,f_2\in k(X)^\times$.
\item For any $f,g\in k(X)^\times, f\ne 1$ we have the following identity in the group $B_2(k)\otimes \mathbb Q$:
$$h(f\wedge (1-f)\wedge g)=\sum\limits_{\substack{x\in X(k)}}\mathrm{ord}_x(g)\{f(x)\}_2.$$
\item For any $a\in \Lambda^3 k(X)^\times$ we have the following identity in the group $\left(\Lambda^2 k^\times\right)\otimes \mathbb Q$:
$$\delta_2(h(a))=\sum\limits_{x\in X(k)}\partial_x^{(3)}(a).$$
\end{enumerate}
\end{definition}
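As an illustration (our example, not from the original text): conditions (1) and (2) already determine $h$ on many elements. For $X=\mathbb P^1$ and $c\in k\backslash\{0,1\}$, taking $f=t$ and $g=t-c$ in condition (2) gives

```latex
\[
h\bigl(t\wedge(1-t)\wedge(t-c)\bigr)
=\sum_{x\in\mathbb P^1(k)}\mathrm{ord}_x(t-c)\,\{t(x)\}_2
=\{c\}_2-\{\infty\}_2
=\{c\}_2,
\]
% since t - c has a simple zero at t = c, a simple pole at infinity, and
% {\infty}_2 = 0 in B_2(k).
```

This already suggests why the lifted reciprocity map on $k(t)$ is unique: the conditions pin down $h$ on a generating set.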
It is easy to see that the existence of a lifted reciprocity map on the field $k(X)$ implies the Suslin reciprocity law for the curve $X$.
Now we can formulate our main result:
\begin{theorem}
\label{th:main_result_summary}
For any smooth projective curve $X$ over $k$ one can choose a lifted reciprocity map $\mathcal H_{k(X)}$ on the field $k(X)$ such that for any non-constant map $\varphi\colon X\to Y$ and any $f_1,f_2,f_3\in k(Y)^\times$ we have $\mathcal H_{k(Y)}(f_1\wedge f_2\wedge f_3)=(\deg \varphi)^{-1}\mathcal H_{k(X)}(\varphi^*(f_1)\wedge \varphi^*(f_2)\wedge \varphi^*(f_3))$. Such a collection of lifted reciprocity maps is unique.
\end{theorem}
This statement was conjectured by A. Goncharov in \cite{goncharov2005polylogarithms}, and was partially solved by D. Rudenko in \cite{rudenko_2021}. For more details see the next two sections.
\subsection{The organization of the paper} The paper is organized as follows. In Section \ref{sec:definitions}, we give some basic definitions, and in Section \ref{sec:main_results} we present our main results. Section \ref{sec:preliminary_results} has three subsections. In the first subsection, we prove some basic properties of lifted reciprocity maps. In the second subsection, we define strictly regular elements and prove for them some version of Parshin reciprocity law. In the third subsection, we prove some analog of Bass and Tate's exact sequence for Milnor $K$-theory.
Section \ref{sec:main_section} takes up most of this paper. In this section, using the results from the previous two sections, we construct the functorial norm map on lifted reciprocity maps. It has three subsections. In the first subsection, we define a system of lifted reciprocity maps. In the second subsection, we prove our key result stating that systems of lifted reciprocity maps on the field $F(t)$ are in natural bijection with lifted reciprocity maps on the field $F$. As an application of this result, in the third subsection, we construct the norm map on lifted reciprocity maps. Finally, in Section \ref{sec:proof_of_main_resulats} we prove our main results.
\begin{acknowledgements}
The author is grateful to A. Levin and D. Rudenko for setting the problem and for stimulating discussions. The author also thanks S. Gorchinskiy for his interest in this paper.
\end{acknowledgements}
\section{Definitions}
\label{sec:definitions}
\subsection{Truncated polylogarithmic complexes}
\label{subsec:pol_complexes}
\begin{definition}
For $n\geq 2$ define the following complex $\Gamma_2(F,n)$ placed in degrees $[1,2]$:
$${B_2}(F)\otimes \Lambda^{n-2}F^\times \xrightarrow{\delta_{n}} {\Lambda^n F^\times}.$$
The differential is defined by the formula $\delta_{n}(\{\xi_1\}_2\otimes \xi_3\wedge\dots\wedge \xi_n)=\xi_1\wedge (1-\xi_1)\wedge \xi_3\wedge \dots\wedge \xi_n$.
\end{definition}
Up to shift, these complexes coincide with the stupid truncation of the polylogarithmic complexes defined by A. Goncharov in \cite[Section 9]{goncharov1995geometry}, see also \cite{rudenko_2021}.
Let $(F,\nu)$ be a discrete valuation field. Denote $\mathcal O_\nu=\{x\in F\mid\nu(x)\geq 0\}$, $m_\nu=\{x\in F\mid\nu(x)>0\}$ and $\overline F_\nu =\mathcal O_\nu/m_\nu$. We recall that an element $a\in F^\times$ is called \emph{a uniformiser} if $\nu(a)=1$ and \emph{a unit} if $\nu(a)=0$. For $u\in \mathcal O_\nu$ denote by $\overline u$ its residue class in $\overline F_\nu$.
\begin{proposition}
Let $(F,\nu)$ be a discrete valuation field and $n\geq 3$. There is a unique morphism of complexes $\ts[n]_\nu\colon \Gamma_2(F,n)\to \Gamma_2(\overline F_\nu, n-1)$:
\[\begin{tikzcd}[row sep=huge]
{{B_2}(F)\otimes \Lambda^{n-2}F^\times} & {\Lambda^n F^\times} \\
{B_2}(\overline F_\nu)\otimes \Lambda^{n-3}\overline F_\nu^\times & \Lambda^{n-1}\overline F_\nu^\times,
\arrow["{\delta_{n}}", from=1-1, to=1-2]
\arrow["{\delta_{n-1}}", from=2-1, to=2-2]
\arrow["\ts[n]_\nu",from=1-2, to=2-2]
\arrow["\ts[n]_\nu",from=1-1, to=2-1]
\end{tikzcd}\]
satisfying the following conditions:
\begin{enumerate}
\item For any units $u_1,\dots, u_n$ we have $\ts[n]_\nu(u_1\wedge\dots \wedge u_n)=0$.
\item For any uniformiser $\pi$ and units $u_2,\dots, u_n\in F$ we have $\ts[n]_\nu(\pi\wedge u_2\wedge\dots \wedge u_n)=\overline {u_2}\wedge \dots \wedge\overline{u_n}$.
\item For any $a\in F\backslash \{0,1\}$ with $\nu(a)\ne 0$ and any $b\in \Lambda^{n-2}F^\times$ we have $\ts[n]_\nu(\{a\}_2\otimes b)=0$.
\item For any unit $u$ and $b\in \Lambda^{n-2}F^\times$ we have $\ts[n]_\nu(\{u\}_2\otimes b)=\{\overline u\}_2\otimes \ts[n-2]_\nu(b)$.
\end{enumerate}
\end{proposition}
The proof of this proposition can be found in \cite[Subsection 2.1]{rudenko_2021}, \cite[Section 14]{goncharov1995geometry}.
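To illustrate (this example is ours, not from the original text): take $F=k(t)$ and let $\nu$ be the valuation at $t=0$, so that $\overline F_\nu=k$. For $a,b\in k^\times$ the defining properties give:

```latex
\begin{align*}
\ts[3]_\nu\bigl(t\wedge (t+a)\wedge (t+b)\bigr) &= a\wedge b,
  && \text{by (2): $t$ is a uniformiser, $t+a$, $t+b$ are units;}\\
\ts[3]_\nu\bigl(\{t+a\}_2\otimes t\bigr) &= \{a\}_2,
  && \text{by (4), reading $\ts[1]_\nu$ as $\mathrm{ord}_\nu$;}\\
\ts[3]_\nu\bigl(\{t\}_2\otimes (t+a)\bigr) &= 0,
  && \text{by (3): $\nu(t)\ne 0$.}
\end{align*}
```

The second line matches condition (2) of Definition \ref{def:lifted_rec_map_summary}, where the residue of $\{f\}_2\otimes g$ at $x$ is $\mathrm{ord}_x(g)\{f(x)\}_2$.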
\subsection{The category $\mathbf{Fields}_d$}
\label{subsec:extensions_of_fields}
We recall that we have fixed some algebraically closed field $k$ of characteristic zero. Denote by $\mathbf{Fields}_d$ the category of finitely generated extensions of $k$ of transcendence degree $d$. Any morphism in this category is a finite extension. For $F\in \mathbf{Fields}_d$, denote by $\mathrm{dval}(F)$ the set of discrete valuations given by a Cartier divisor on some birational model of $F$. When $F\in\mathbf{Fields}_1$ this set is equal to the set of all $1$-dimensional valuations that are trivial on $k$. We denote this set simply by $\mathrm{val}(F)$.
Let $j\colon K\hookrightarrow F$ be an extension from $\mathbf{Fields}_d$ and $\nu\in \mathrm{dval}(K)$. Denote by $\text{ext}(\nu, F)$ the set of extensions of the valuation $\nu$ to $F$. Let $\nu'\in \text{ext}(\nu, F)$. Denote by $j_{\nu'|\nu}$ the natural embedding $\overline K_\nu\hookrightarrow \overline F_{\nu'}$. \emph{The inertia degree} $f_{\nu'|\nu}$ is defined as $\deg j_{\nu'|\nu}$. \emph{The ramification index} $e_{\nu'|\nu}$ is defined by the formula $\pi_K=u\pi_F^{e_{\nu'|\nu}}$, where $\pi_K, \pi_F$ are uniformisers of $K, F$ and $u$ is some unit. By \cite[Chapter II, \S 8]{neukirch2013algebraic} the set $\text{ext}(\nu, F)$ is finite and, moreover, the following formula holds:
\begin{equation}
\label{formula:degree_as_sum}
\sum\limits_{\nu'\in \text{ext}(\nu, F)}e_{\nu'|\nu}f_{\nu'|\nu}=[F:K].
\end{equation}
By a theorem of O. Zariski \cite[Chapter VI, \S 14, Theorem 31]{zariski2013commutative}, a discrete valuation on $F$ is divisorial if and only if the corresponding residue field is finitely generated and has transcendence degree $1$. This implies that for any $\nu'\in \text{ext}(\nu, F)$, we have $\nu'\in \mathrm{dval}(F)$.
For any $n\geq 0$ there is the natural map $j_*\colon \Lambda^n K^\times\to \Lambda^n F^\times$ given by the formula $j_*(a)=a$. It is easy to see that for any $\nu'\in \text{ext}(\nu, F)$ the following formula holds:
\begin{equation}\label{formula:functoriality_of_tame_symbol}
\ts[n]_{\nu'}j_*(a)=e_{\nu'|\nu}\cdot (j_{\nu'|\nu})_ *(\ts[n]_\nu(a)).
\end{equation}
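As an illustration of (\ref{formula:degree_as_sum}) and (\ref{formula:functoriality_of_tame_symbol}) (our example): let $j\colon K=k(t)\hookrightarrow F=k(s)$ with $t=s^2$, so $[F:K]=2$. Over the point $t=1$ there are two extensions ($s=\pm 1$), each with $e_{\nu'|\nu}=f_{\nu'|\nu}=1$, giving $1+1=2$; over $t=0$ there is a single extension ($s=0$) with $e_{\nu'|\nu}=2$, $f_{\nu'|\nu}=1$, giving $2=2$. For the latter valuation and a unit $u\in K$,

```latex
\[
\ts[2]_{\nu'}\bigl(j_*(t\wedge u)\bigr)
=\ts[2]_{\nu'}(s^{2}\wedge u)
=\overline{u}^{\,2}
=e_{\nu'|\nu}\cdot (j_{\nu'|\nu})_*\bigl(\ts[2]_{\nu}(t\wedge u)\bigr),
\]
% written multiplicatively: u-bar squared corresponds to 2 * u-bar in the additive
% notation used for groups tensored with Q, matching e = 2.
```

Since $k$ is algebraically closed, the inertia degrees are always $1$ for $F\in\mathbf{Fields}_1$, so (\ref{formula:degree_as_sum}) counts ramification only.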
\subsection{Lifted reciprocity maps}
\label{sec:lifted_reciprocity_maps}
The following definition is a reformulation of Definition \ref{def:lifted_rec_map_summary}.
\begin{definition}
\label{def:SRL}
Let $F\in \mathbf{Fields}_1$. \emph{A lifted reciprocity map} on the field $F$ is a map $h\colon \Lambda^{3} F^\times\to {B_2}(k)$ satisfying the following conditions:
\begin{enumerate}
\item The following diagram is commutative:
\begin{equation}
\label{diagram:srl_definition}
\begin{tikzcd}[row sep=huge]
{{B_2}(F)\otimes F^\times} & {\Lambda^{3} F^{\times}} \\
{{B_2}(k)} & {\Lambda^{2}(k^\times).}
\arrow["{\delta_{3}}", from=1-1, to=1-2]
\arrow["h"', dashed, from=1-2, to=2-1]
\arrow["{\delta_{2}}", from=2-1, to=2-2]
\arrow["{\sum\limits_{\nu\in \mathrm{val}(F)}\ts_{\nu}}", from=1-2, to=2-2]
\arrow["{\sum\limits_{\nu\in \mathrm{val}(F)}\ts_{\nu}}"', from=1-1, to=2-1]
\end{tikzcd}
\end{equation}
\item The map $h$ vanishes on the image of the multiplication map $ \Lambda^2 F^\times\otimes k^\times\to \Lambda^3 F^\times$.
\end{enumerate}
\end{definition}
The set of all lifted reciprocity maps carries the structure of an affine space over $\mathbb Q$, like any set of homotopies.
Denote by $\textbf{Set}$ the category of sets.
Define a contravariant functor $$\mathrm{LRM}\colon \mathbf{Fields}_1\to \textbf{Set}$$ as follows. For any $F\in \mathbf{Fields}_1$ the set $\mathrm{LRM}(F)$ is equal to the set of all lifted reciprocity maps on $F$. If $j\colon K\hookrightarrow F$ then $\mathrm{LRM}(j)(h_{F})$ is defined by the formula $h_K(a):=\dfrac 1{\deg j}h_{F}(j_*(a))$.
In Section \ref{sec:SRL}
we will show that in this way we indeed get a functor.
\subsection{Conventions}
\label{subsec:conventions}
Everywhere we work over $\mathbb Q$, so any abelian group is implicitly tensored with $\mathbb Q$. For example, when we write $\Lambda^2 k^\times$ this actually means $\left(\Lambda^2 k^\times\right)\otimes \mathbb Q$. All exterior powers and tensor products are over $\mathbb Q$.
If $C$ is a chain complex, denote by $C_d$ the elements lying in degree $d$. The symbol $\delta_n$ means the differential in the truncated polylogarithmic complex $\Gamma_2(F,n)$. Although it depends on the field $F$, we will omit the corresponding sign from the notation. In the same way, when $(F,\nu)$ is a discrete valuation field we denote the tame-symbol map $\Gamma_2(F,n)\to \Gamma_2(\overline F_\nu,n-1)$ by $\ts[n]_\nu$.
\section{Main results}
\label{sec:main_results}
The following theorem is a reformulation of Theorem \ref{th:main_result_summary} from the Summary. It is a solution of Conjecture 6.2 from \cite{goncharov2005polylogarithms}.
\begin{theorem}
\label{th:limit_of_functor}
On any field $F\in\mathbf{Fields}_1$ one can choose a lifted reciprocity map $\mathcal H_F$ such that for any embedding $j\colon F_1\to F_2$ we have $\mathrm{LRM}(j)(\mathcal H_{F_2})=\mathcal H_{F_1}$. Such a collection of lifted reciprocity maps is unique.
\end{theorem}
\begin{remark}
One of the main results of \cite{rudenko_2021} states that for any field $F\in \mathbf{Fields}_1$ there is a map $\Lambda^3 F^\times\to B_2(k)$ satisfying all but the second condition of Definition \ref{def:SRL}. However, it is not clear why this map can be chosen functorially. The functoriality is our new result.
\end{remark}
\begin{remark}
In \cite[Section 6]{goncharov2005polylogarithms}, A. Goncharov formulated his conjecture for some quotient $\mathcal B_2(k)$ of the group $B_2(k)$. In this setting he proved that for any elliptic curve $E$ over $k$ there is a $\mathcal B_2(k)$-valued lifted reciprocity map on the field $k(E)$. From the proof of Theorem \ref{th:isomorphism_of_functors} it is not difficult to show that his map coincides with $i\circ \mathcal H_{k(E)}$, where $i$ is the natural map $B_2(k)\to \mathcal B_2(k)$. Therefore, Theorem \ref{th:limit_of_functor} generalises A. Goncharov's construction to curves of arbitrary genus.
\end{remark}
\subsection{Chow dilogarithm}
The definition of the Chow dilogarithm can be found in Section 6 of \cite{goncharov2005polylogarithms}. This function associates to any smooth projective curve $X$ over $\mathbb C$ and three non-zero rational functions $f_1,f_2,f_3$ on $X$ a value $\mathcal P_2(X; f_1, f_2, f_3)\in \mathbb R$. The remark after Conjecture 6.2 in loc. cit. implies that Theorem \ref{th:limit_of_functor} has the following corollary:
\begin{corollary}
\label{cor:Chow_dilogarithm}
For any smooth projective curve $X$ over $\mathbb C$ and three non-zero rational functions $f_1,f_2,f_3$ on $X$ the following formula holds:
$$\mathcal P_2(X; f_1, f_2, f_3)=\widetilde {\mathcal L}_2(\mathcal H_{\mathbb C(X)}(f_1\wedge f_2\wedge f_3)).$$
Here $\widetilde {\mathcal L}_2\colon {B_2}(\mathbb C)\to \mathbb R$ is the map given on the generators $\{x\}_2$ by the formula $$\widetilde{\mathcal L}_2(\{x\}_2)=\mathcal L_2(x),$$ where $\mathcal L_2$ is the Bloch--Wigner dilogarithm.
\end{corollary}
\begin{remark}
This statement coincides with Corollary 1.5 from \cite{rudenko_2021}. However, the proof given in loc. cit. is not correct: it relies on the remark after Conjecture 6.2 from \cite{goncharov2005polylogarithms}, which uses the functorial property. So Corollary \ref{cor:Chow_dilogarithm} is new.
\end{remark}
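Concretely, since $\widetilde{\mathcal L}_2$ is well defined on $B_2(\mathbb C)$, the five-term relation defining $B_2$ becomes a functional equation for the Bloch--Wigner dilogarithm. The following numerical sanity check is our sketch, not part of the paper; the power-series evaluation of $\mathrm{Li}_2$ is only valid when all five arguments lie well inside the unit disk, which the chosen test points ensure.

```python
import cmath
import math

def li2(z, terms=400):
    """Dilogarithm Li_2(z) via its power series sum z^k / k^2; accurate for |z| < 0.9."""
    z = complex(z)
    return sum(z ** k / k ** 2 for k in range(1, terms + 1))

def bloch_wigner(z):
    """Bloch-Wigner dilogarithm D(z) = Im Li_2(z) + arg(1 - z) * log|z|."""
    z = complex(z)
    return li2(z).imag + cmath.phase(1 - z) * math.log(abs(z))

def five_term(x, y):
    """D applied to the five-term relation defining B_2; it should vanish."""
    args = [x, y, y / x, (1 - 1 / x) / (1 - 1 / y), (1 - x) / (1 - y)]
    signs = [1, -1, 1, -1, 1]
    return sum(s * bloch_wigner(a) for s, a in zip(signs, args))

print(abs(five_term(0.4 + 0.3j, 0.2 + 0.1j)))  # numerically ~ 0
```

Here `bloch_wigner` plays the role of $\widetilde{\mathcal L}_2$ on a single generator $\{z\}_2$, and `five_term` evaluates the defining relation of $B_2$ termwise.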
\subsection{Two-dimensional reciprocity law} From the proof of Theorem \ref{th:limit_of_functor} we get the following corollary:
\begin{corollary}
\label{cor:reciprocity_map_for_surfaces}
Let $L\in\mathbf{Fields}_2$. For any $b\in \Lambda^{4}L^\times$ and all but finitely many $\nu\in \mathrm{dval}(L)$ we have $\mathcal H_{\overline L_\nu}\ts[4]_\nu(b)=0$. Moreover, the following sum is equal to zero:
\begin{equation}
\label{formula:rec_map_surfaces}
\sum\limits_{\nu\in \mathrm{dval}(L)}\mathcal H_{\overline L_\nu}\ts[4]_\nu(b)=0.
\end{equation}
\end{corollary}
Applying $\widetilde{\mathcal L}_2$ to both sides of (\ref{formula:rec_map_surfaces}) and using Corollary \ref{cor:Chow_dilogarithm}, we recover the functional equation for the Chow dilogarithm proved by A. Goncharov in \cite[Section 1.4]{goncharov2005polylogarithms}; see also \cite{gil2015simplicial}. In fact, Corollary \ref{cor:reciprocity_map_for_surfaces} was our motivation behind the construction of the map $\mathcal H$.
\subsection{The norm map}
The proof of Theorem \ref{th:limit_of_functor} takes up most of this paper. While the proof of uniqueness follows from standard arguments, the proof of existence is non-trivial. On the field $k(t)$ there is a unique lifted reciprocity map, which we denote by $\mathcal H_{k(t)}$. To construct the lifted reciprocity map $\mathcal H_F$, for any embedding of fields $j\colon F_1\hookrightarrow F_2$, we define a canonical norm map $N_{F_2/F_1}\colon \mathrm{LRM}(F_1)\to \mathrm{LRM}(F_2)$. We prove the following theorem:
\begin{theorem}
\label{th:norm_map}
The map $N$ satisfies the following properties:
\begin{enumerate}
\item Let $j\colon F_1\hookrightarrow F_2$ be an embedding. We have $\mathrm{LRM}(j)\circ N_{F_2/F_1}=\mathrm{id}$.
\item If $F_1\subset F_2\subset F_3$ is a tower of extensions from $\mathbf{Fields}_1$ then $N_{F_3/F_1}=N_{F_3/F_2}\circ N_{F_2/F_1}$.
\item Let $F\in\mathbf{Fields}_1$. For any $a\in F\backslash k$ we have a finite extension $k(a)\subset F$. The element $\mathcal H_{F}:=N_{F/k(a)}(\mathcal H_{k(a)})\in \mathrm{LRM}(F)$ does not depend on $a$.
\end{enumerate}
\end{theorem}
Existence in Theorem \ref{th:limit_of_functor} follows immediately from the above theorem. The proof of Theorem \ref{th:norm_map} is similar to the construction of the norm map on Milnor $K$-theory \cite{MILNOR1969/70,bass1973milnor,suslin1979reciprocity,kato1980generalization}.
\section{The preliminary results}
\label{sec:preliminary_results}
\subsection{Lifted reciprocity maps}\label{sec:SRL}
\begin{proposition}\label{prop:SRL_is_functor}
$\mathrm{LRM}$ is indeed a functor.
\end{proposition}
\begin{proof}
If $j_1, j_2$ are some embeddings from $\mathbf{Fields}_1$ then the formula $\mathrm{LRM}(j_2\circ j_1)=\mathrm{LRM}(j_1)\circ \mathrm{LRM}(j_2)$ follows from the fact that the ramification index is multiplicative. So it is enough to show that for any embedding $j\colon K\hookrightarrow F$ and $h_F\in \mathrm{LRM}(F)$ the map $h_K:=\mathrm{LRM}(j)(h_F)\colon \Lambda^3 K^\times\to B_2(k)$ is a lifted reciprocity map on $K$.
The statement that $h_K$ is zero on the image of the map $\Lambda^2 K^\times\otimes k^\times\to \Lambda^3 K^\times$ is obvious. Let us prove that diagram (\ref{diagram:srl_definition}) is commutative.
For any $\nu\in \mathrm{val}(K)$ and any $\nu'\in \text{ext}(\nu, F)$ we have $f_{\nu'|\nu}=1$. Therefore, formula (\ref{formula:degree_as_sum}) becomes $\sum\limits_{\nu'\in \text{ext}(\nu, F)}e_{\nu'|\nu}=[F:K]$. Since in our case $\overline K_{\nu}\cong \overline F_{\nu'}\cong k$, the formula (\ref{formula:functoriality_of_tame_symbol}) takes the form $e_{\nu'|\nu}\ts_\nu(a)=\ts_{\nu'}j_*(a)$.
For any $a\in \Lambda^3 K^\times$, we have:
\begin{equation*}
\begin{split}
\delta_2(h_K(a))=\dfrac 1{[F:K] }\delta_2(h_F(j_*(a)))=\dfrac 1{[F:K]}\sum\limits_{\nu'\in \mathrm{val}(F)}\ts_{\nu'}j_*(a)=\\
=\dfrac 1{[F:K]}\sum\limits_{\nu\in \mathrm{val}(K)}\sum\limits_{\nu'\in \text{ext}(\nu, F)}\ts_{\nu'}j_*(a)=\\
=\dfrac 1{[F:K]}\sum\limits_{\nu\in \mathrm{val}(K)}\sum\limits_{\nu'\in \text{ext}(\nu, F)}e_{\nu'|\nu}\ts_\nu(a)=
\sum\limits_{\nu\in\mathrm{val}(K)}\ts_\nu(a).
\end{split}
\end{equation*}
Here in the fourth equality we have used the formula $\ts_{\nu'}(j_*(a))=e_{\nu'|\nu}\ts_\nu(a)$, and in the last equality we have used the formula $\sum\limits_{\nu'\in \text{ext}(\nu, F)}e_{\nu'|\nu}=[F:K]$. So the lower right triangle is commutative. The commutativity of the upper left triangle is proved similarly.
\end{proof}
\begin{proposition}
\label{prop:lifted_law_P^1}
On the field $k(t)$ there is a unique lifted reciprocity map. We will denote it by $\mathcal H_{k(t)}$.
\end{proposition}
\begin{proof}An elementary calculation shows that the group $\Lambda^3 k(t)^\times$ is generated by the image of the multiplication map $ k(t)^\times\otimes \Lambda^2 k^\times\to \Lambda^3k(t)^\times$ and by the image of $\delta_3$. Uniqueness follows from this statement.
Existence was proved in \cite[Theorem 6.5]{goncharov1995geometry}. We remark that although the proof of Proposition 6.6 from \cite{goncharov1995geometry} uses a rigidity argument, this proposition can be easily deduced from \cite{dupont1982generation}, where it was proved that ${B_2}(k(t))$ is generated by elements of the form $\left\{at+b\right\}_2$, $a,b\in k$.
\end{proof}
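To make the generation statement concrete (our sketch, written modulo $2$-torsion, which vanishes after tensoring with $\mathbb Q$): since $k$ is algebraically closed, $k(t)^\times$ is generated by $k^\times$ and the functions $t-a$, $a\in k$, and for distinct $a,b\in k$ one has

```latex
\[
\delta_2\!\left(\left\{\tfrac{t-a}{b-a}\right\}_2\right)
 = \frac{t-a}{b-a}\wedge\frac{b-t}{b-a}
 = (t-a)\wedge(t-b)-(t-a)\wedge(b-a)-(b-a)\wedge(t-b).
\]
% Hence (t-a)\wedge(t-b)\wedge g lies in the image of \delta_3 modulo terms with a
% constant entry; iterating the same identity once more reduces any product of
% three linear factors to the image of \delta_3 plus terms with two constant entries.
```

Wedging the displayed identity with an arbitrary $g\in k(t)^\times$ produces the $\delta_3$-terms used in the proof above.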
\subsection{Parshin reciprocity law}
\begin{definition}
Let $X$ be a smooth algebraic variety of dimension $n$ and $x\in X$. A Cartier divisor $D$ on $X$ is called \emph{supported on a simple normal crossing divisor} if there is some open affine neighborhood of the point $x$ such that $D$ is cut out by the function $\prod\limits_{i=1}^n x_i^{n_i}$, where $n_i\geq 0$ and $x_1,\dots,x_n$ is a regular system of parameters at $x$.
\end{definition}
We have the following statement \cite{kollar2009lectures}:
\begin{theorem}
\label{th:resolution_of_singularity}
Let $X$ be a variety over an algebraically closed field of characteristic zero and let $D$ be an effective Weil divisor on $X$. There is a birational morphism $f\colon \widetilde X\to X$ such that $\widetilde X$ is smooth and $f^{*}(D)$ is supported on a simple normal crossing divisor at all points of $\widetilde X$.
\end{theorem}
\begin{definition}
Let $\xi\in k(X)^\times$. Write $D_0(\xi),D_\infty(\xi)$ for the divisors of zeros and poles of $\xi$ and let $|\xi|=D_0(\xi)+D_\infty(\xi)$. An element of $\Gamma_2(k(X),4)_1$ (resp. $\Gamma_2(k(X),4)_2$) is called \emph{strictly regular} at $x\in X$ if it can be represented as a linear combination of elements of the form $\{\xi_1\}_2\otimes \xi_3\wedge \xi_4$ (resp. $\xi_1\wedge \xi_2\wedge \xi_3\wedge \xi_4$) such that all the divisors $|\xi_1|+|\xi_3|+|\xi_4|$ (resp. $|\xi_1|+|\xi_2|+|\xi_3|+|\xi_4|$) are supported on a simple normal crossing divisor at $x$.
\end{definition}
Theorem \ref{th:resolution_of_singularity} has the following corollary:
\begin{corollary}
\label{cor:existence_of_birational_morphism}
Let $S$ be a smooth surface and $j\in \{1,2\}$. For any element $a\in \Gamma_2(k(S),4)_j$ there is a birational morphism $p\colon \widetilde S\to S$ such that the element $p^*(a)$ is strictly regular at all points.
\end{corollary}
The following lemma characterises strictly regular elements:
\begin{lemma}
\label{lemma:characterisation_of_strictly_regular_elements}
Let $S$ be a smooth algebraic surface and $x\in S$.
\begin{enumerate}
\item The subgroup of strictly regular elements of $\Gamma_2(k(S),4)_1$ is generated by elements of the following form:
\begin{enumerate}
\item $\{\pi_1^n\pi_2^m \xi_1\}_2\otimes \pi_1\wedge \pi_2$.
\item $\{\pi_1^n\pi_2^m \xi_1\}_2\otimes \pi_1\wedge \xi_4$.
\item $\{\pi_1^n\pi_2^m \xi_1\}_2\otimes \xi_3\wedge\xi_4$.
\end{enumerate}
Here all the functions $\xi_i$ take non-zero values at $x$ and $\pi_1,\pi_2$ is a regular system of parameters at $x$.
\item The subgroup of strictly regular elements of $\Gamma_2(k(S),4)_2$ is generated by elements of the following form:
\begin{enumerate}
\item $\pi_1\wedge\pi_2\wedge \xi_3\wedge\xi_4$.
\item $\pi_1\wedge \xi_2\wedge \xi_3 \wedge \xi_{4}$.
\item $\xi_1\wedge \xi_2\wedge \xi_3\wedge \xi_4$.
\end{enumerate}
The functions $\xi_i,\pi_i$ satisfy the same conditions as in item (i).
\end{enumerate}
\end{lemma}
\begin{proof}
Follows from the fact that if $\pi_1,\pi_2$ is a regular system of parameters at $x$, then any function $f\in k(S)^\times$ whose divisor is supported on the corresponding simple normal crossing divisor at $x$ can be written in the form $\pi_1^{n_1}\pi_2^{n_2}\xi$, where $n_i\in\mathbb Z$ and $\xi$ is a regular function at $x$ such that $\xi(x)\ne 0$.
\end{proof}
The following result is a version of the classical Parshin reciprocity law for strictly regular elements (see \cite{parshin1975class,horozov2014reciprocity, osipov2011categorical}).
\begin{theorem}
\label{th:Parshin_reciprocity_law}
Let $S$ be a surface smooth at some point $x\in S$ and $j\in \{1,2\}$. For any strictly regular element $b$ of the group $\Gamma_2(k(S),4)_j$ at $x$ the following sum is equal to zero:
$$\sum\limits_{\substack{C\subset S\\ C\ni x}}\ts_{\nu_{x, C}}\ts[4]_{\nu_{C}}(b)=0. $$
Here the sum is taken over all irreducible curves $C\subset S$ containing the point $x$ and smooth at it, $\nu_C$ is the valuation corresponding to $C$ and $\nu_{x, C}$ is a valuation of the residue field $\overline{k(S)}_{\nu_C}\cong k(C)$ corresponding to $x\in C$.
\end{theorem}
\begin{proof}
It is enough to prove this theorem for any of the generators from Lemma \ref{lemma:characterisation_of_strictly_regular_elements}. We will only consider the most interesting case $(i), (a)$. We can assume that $S$ is a smooth surface, $x\in S$ and $\pi_1, \pi_2$ is a system of regular parameters at $x$. Passing to some open affine neighborhood of the point $x$, we can assume that the following conditions hold:
\begin{enumerate}
\item the function $\xi_1$ is invertible and regular,
\item the functions $\pi_i$ are regular,
\item for any $i \in\{1,2\}$ the divisor of the function $\pi_i$ is equal to some irreducible curve $C_i$ passing through $x$.
\end{enumerate}
In general, if $X$ is a subvariety of an algebraic variety $Y$ and $f$ is a regular function on $Y$, we denote its restriction to $X$ by $\left.f\right|_X$. Let $b=\{\pi_1^n\pi_2^m \xi_1\}_2\otimes \pi_1\wedge \pi_2$. Clearly, the only curves $C$ on $S$ satisfying $\ts[4]_{\nu_C}(b)\ne 0$ are $C_1$ and $C_2$. Consider the following cases:
\begin{description}
\item[Case $n,m\ne 0$:] In this case both of the tame-symbols $\ts[4]_{\nu_{C_1}}(b)$ and $\ts[4]_{\nu_{C_2}}(b)$ vanish and the statement is obvious.
\item [Case $n\ne 0, m=0$ or $m\ne 0, n=0$:] Consider, say, the first case. Obviously, $$\ts[4]_{\nu_{C_1}}(b)=0.$$ So it is enough to prove that $\ts_{\nu_{x,C_2}}\ts[4]_{\nu_{C_2}}(b)=0$. This follows from the formula $$\mathrm{ord}_x(\left.(\pi_1^n\xi_1)\right|_{C_2})=n\ne 0.$$
\item[Case $n=m=0$:] In this case the statement follows from the following formula:
$$\ts_{\nu_{x,C_1}}\ts[4]_{\nu_{C_1}}(b)=-\ts_{\nu_{x,C_2}}\ts[4]_{\nu_{C_2}}(b)=\{\xi_1(x)\}_2.$$
\end{description}
\end{proof}
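For the reader's convenience, here is the computation behind the last case (our expansion; $\overline{\,\cdot\,}$ denotes restriction to the corresponding curve). For $b=\{\xi_1\}_2\otimes\pi_1\wedge\pi_2$ the function $\pi_1$ is a uniformiser along $C_1$ while $\xi_1$ and $\pi_2$ are units, and $\overline{\pi_2}$ is a local parameter at $x$ on $C_1$; symmetrically for $C_2$:

```latex
\begin{align*}
\ts_{\nu_{x,C_1}}\ts[4]_{\nu_{C_1}}(b)
 &= \ts_{\nu_{x,C_1}}\bigl(\{\overline{\xi_1}\}_2\otimes\overline{\pi_2}\bigr)
  = \mathrm{ord}_x(\overline{\pi_2})\,\{\xi_1(x)\}_2=\{\xi_1(x)\}_2,\\
\ts_{\nu_{x,C_2}}\ts[4]_{\nu_{C_2}}(b)
 &= \ts_{\nu_{x,C_2}}\bigl(-\{\overline{\xi_1}\}_2\otimes\overline{\pi_1}\bigr)
  = -\{\xi_1(x)\}_2,
\end{align*}
% where the sign comes from \pi_1 \wedge \pi_2 = -\pi_2 \wedge \pi_1; the two
% contributions cancel, as claimed.
```

The first equality in each line uses property (4) of the tame-symbol morphism, the second uses property (2) applied on the curve.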
We have the following corollary:
\begin{corollary}
\label{cor:Parshin_sum_point_curve}
Let $L\in\mathbf{Fields}_2$ and $j\in\{1,2\}$. For any $b\in \Gamma_2(L,4)_j$ and all but finitely many $\mu\in \mathrm{dval}(L)$ the following sum is zero:
$$\sum\limits_{\mu'\in \mathrm{val}(\overline L_\mu)}\ts_{\mu'}\ts[4]_\mu(b)=0.$$
Moreover the following sum is zero:
$$\sum\limits_{\mu\in \mathrm{dval}(L)}\sum\limits_{\mu'\in \mathrm{val}(\overline L_\mu)}\ts_{\mu'}\ts[4]_\mu(b)=0.$$
This corollary can be interpreted as the statement that the composition of the vertical arrows in the following diagram is zero:
\begin{equation}
\begin{tikzcd}
{\Gamma_2(L,4)_1} & {\Gamma_2(L,4)_2} \\
{\bigoplus\limits_{\mu\in\mathrm{dval}(L)} \Gamma_2(\overline L_\mu,3)_1} & {\bigoplus\limits_{\mu\in\mathrm{dval}(L)} \Gamma_2(\overline L_\mu,3)_2} \\
{\Gamma_2(k,2)_1} & {\Gamma_2(k,2)_2.}
\arrow["{(\ts[4]_\mu)}", from=1-1, to=2-1]
\arrow["{\sum\ts_{\mu'}}", from=2-1, to=3-1]
\arrow["{\delta_{4}}"', from=1-1, to=1-2]
\arrow["{(\ts[4]_\mu)}", from=1-2, to=2-2]
\arrow["{\sum\ts_{\mu'}}", from=2-2, to=3-2]
\arrow["{\delta_2}"', from=3-1, to=3-2]
\arrow["{\delta_{3}}", from=2-1, to=2-2]
\end{tikzcd}
\end{equation}
\end{corollary}
\begin{proof}
Let $S$ be an algebraic surface with $k(S)\cong L$. Denote by $\mathrm{dval}(L)_S$ the subset of divisorial valuations coming from divisors on $S$. By Corollary \ref{cor:existence_of_birational_morphism}, we can choose $S$ in such a way that $b$ is strictly regular at all points of $S$. Theorem \ref{th:Parshin_reciprocity_law} implies the following formula:
$$\sum\limits_{\mu\in \mathrm{dval}(L)_S}\sum\limits_{\mu'\in \mathrm{val}(\overline L_\mu)}\ts_{\mu'}\ts[4]_\mu(b)=0.$$
It remains to prove that for any $\mu\in \mathrm{dval}(L)\backslash \mathrm{dval}(L)_S$ the following sum vanishes:
$$\sum\limits_{\mu'\in \mathrm{val}(\overline L_\mu)}\ts_{\mu'}\ts[4]_\mu(b)=0.$$
There is a birational morphism $p\colon \widetilde S\to S$ such that $\mu$ is given by a divisor on $\widetilde S$ contracted under $p$. The morphism $p$ is a sequence of blow-ups $p_m\circ\dots \circ p_1, p_i\colon S_i\to S_{i-1}, S_m=\widetilde S, S_0=S$. Let $D_i\subset S_i$ be the corresponding exceptional curve. Denote by $\mu_i$ the corresponding valuation. It is enough to show that for any $i$ the following formula holds:
$$\sum\limits_{\mu'\in \mathrm{val}(\overline L_{\mu_i})}\ts_{\mu'}\ts[4]_{\mu_i}(b)=0.$$
This formula follows from Theorem \ref{th:Parshin_reciprocity_law} for the element $b$ and the surfaces $S_i$ and $S_{i-1}$.
\end{proof}
\subsection{Results of D.Rudenko}
\label{sec:D_Rudenko_results}
The results of this section are contained, in a different form, in \cite{rudenko_2021}.
Let $F\in \mathbf{Fields}_1$. A valuation $\nu\in \mathrm{dval}(F(t))$ is called \emph{general} if it corresponds to some irreducible polynomial over $F$. The set of general valuations is in bijection with the set of all closed points of the affine line over $F$, which we denote by $\mathbb A_{F,(0)}^1$. A valuation is called \emph{special} if it is not general. Denote the set of general (resp. special) valuations by $\mathrm{dval}(F(t))_{gen}$ (resp. $\mathrm{dval}(F(t))_{sp}$). Denote by $\Gamma_2^{sp}(F(t),4)_{1}$ the subgroup of $\Gamma_2(F(t),4)_{1}$ lying in the kernel of all the maps $\ts[4]_\nu$, where $\nu\in \mathrm{dval}(F(t))_{gen}$.
The following theorem is the main result of this subsection:
\begin{theorem}
\label{th:main_exact_sequence}
The following sequence is exact:
\begin{equation}
\label{formula:main_exact_sequence_geometric}
\Lambda^{4} F^\times\oplus \Gamma_2^{sp}(F(t),4)_{1}\to\Lambda^{4} F(t)^\times\to\bigoplus\limits_{\nu\in \mathrm{dval}(F(t))_{gen}}\Lambda^{3} \overline{F(t)}_\nu^\times\to 0.
\end{equation}
Here the first component of the first map is the natural embedding, the second component is induced by $\delta_{4}$, and the second map is given by $(\ts[4]_\nu)_{\nu\in \mathrm{dval}(F(t))_{gen}}$.
\end{theorem}
For a point $p\in \mathbb A_{F,(0)}^1$ denote by $f_p$ the corresponding monic irreducible polynomial over $F$. By definition, $\deg p=\deg f_p$. Denote by $F(p)$ its residue field and by $\nu_p$ the corresponding valuation.
\begin{lemma}
\label{lemma:Res_is_surjective}
Let $m\geq 3$ be an integer. The following map is surjective:
$$\Gamma_2(F(t),m)\xrightarrow{\left(\ts[m]_{\nu_p}\right)} \bigoplus\limits_{p\in \mathbb A_{F,(0)}^1}\Gamma_2(F(p),m-1).$$
\end{lemma}
The proof of this lemma is entirely analogous to the proof of surjectivity in the Bass--Tate exact sequence for Milnor $K$-theory \cite{MILNOR1969/70,bass1973milnor}.
\begin{proof}
Denote by $\Gamma_2(F(t),m)_{\leq d}$ the subgroup of elements lying in the kernels of all the maps $\ts[m]_{\nu_p}$ with $\deg p>d$. It is enough to prove that for any $d\geq -1$ the following map is surjective:
$$\Gamma_2(F(t),m)_{\leq d}\to \bigoplus\limits_{\substack{p\in \mathbb A_{F,(0)}^1\\\deg p\leq d}}\Gamma_2(F(p),m-1).$$
The proof is by induction on $d$. The case $d=-1$ is trivial. Let us prove the inductive step. It is enough to show that for any $a\in \Gamma_2(F(p),m-1)_j$, $j\in\{1,2\}$, there is an element $\widetilde a\in \Gamma_2(F(t),m)_{j}$ with the following properties:
\begin{enumerate}
\item for any $p'\ne p$ with $\deg p'\geq\deg p$ we have $$\ts[m]_{\nu_{p'}}(\widetilde a)=0.$$
\item We have $\ts[m]_{\nu_p}(\widetilde a)=a$.
\end{enumerate}
For an element $\xi\in F(p)$ there is a unique polynomial $l_p(\xi)$ of degree $<\deg p$ such that the image of $l_p(\xi)$ under the natural projection $F[t]\to F[t]/(f_p)\cong F(p)$ is equal to $\xi$.
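To illustrate this definition, here is one possible example (the specific field and polynomial are chosen only for illustration): take $F=k(x)$ and $f_p=t^2-x$, so that $\deg p=2$ and $F(p)=k(x)(\sqrt x)$. Writing $\xi=c_0+c_1\sqrt x$ with $c_0,c_1\in k(x)$, we get
$$l_p(\xi)=c_0+c_1t,$$
the unique polynomial of degree $<2$ reducing to $\xi$ modulo $f_p$.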
The following formulas for $\widetilde a$ are taken from \cite[Section 5.2]{rudenko_2021}.
\begin{description}
\item[Case $j=1$] Choose a representation $a=\sum\limits_{\alpha}n_{\alpha}\cdot(\{\xi_1^{\alpha}\}_2\otimes \xi_3^{\alpha}\wedge \dots \wedge \xi_{m-1}^{\alpha}) $. Define the element $\widetilde a$ by the formula
$$\widetilde a=\sum\limits_{\alpha}n_{\alpha}\cdot (\{l_p(\xi_1^{\alpha})\}_2\otimes f_p\wedge l_p(\xi_3^{\alpha})\wedge \dots\wedge l_p(\xi_{m-1}^{\alpha})).$$
\item[Case $j=2$] Choose a representation $a=\sum\limits_{\alpha}n_{\alpha}\cdot(\xi_1^{\alpha}\wedge\dots\wedge \xi_{m-1}^{\alpha}) $. The element $\widetilde a$ is defined by the formula
$$\widetilde a=\sum\limits_{\alpha}n_{\alpha}\cdot (f_p\wedge l_p(\xi_1^{\alpha})\wedge\dots\wedge l_p(\xi_{m-1}^{\alpha})).$$
\end{description}
It is easy to see that these elements satisfy the conditions stated above.
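For instance, in the case $j=2$ the verification can be sketched as follows. The polynomial $f_p$ is a uniformizer for $\nu_p$, while each $l_p(\xi_i^{\alpha})$ is a unit at $\nu_p$ with residue $\xi_i^{\alpha}$, so
$$\ts[m]_{\nu_p}(\widetilde a)=\sum\limits_{\alpha}n_{\alpha}\cdot(\xi_1^{\alpha}\wedge\dots\wedge \xi_{m-1}^{\alpha})=a.$$
On the other hand, for $p'\ne p$ with $\deg p'\geq \deg p$ all the polynomials $f_p$, $l_p(\xi_i^{\alpha})$ are units at $\nu_{p'}$: the polynomials $l_p(\xi_i^{\alpha})$ are nonzero of degree $<\deg p\leq \deg p'$, and $f_p$ is irreducible and different from $f_{p'}$. Hence $\ts[m]_{\nu_{p'}}(\widetilde a)=0$.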
\end{proof}
\begin{proposition}
\label{prop:Rudenko_result}
The following sequence is exact for $j=2$ and exact in the third term for $j=1$:
\begin{equation}
\begin{split}
0\to H^j(\Gamma_2(F,4))\to H^j(\Gamma_2(F(t),4))\xrightarrow{(\ts[4]_{\nu_p})}\bigoplus\limits_{p\in \mathbb A_{F,(0)}^1}H^{j}(\Gamma_2(F(p),3))\to 0.
\end{split}
\end{equation}
\end{proposition}
\begin{proof}
The case $j=2$ was proven in \cite{MILNOR1969/70} (see also \cite{bass1973milnor}). The exactness in the last term for $j=1$ is a particular case of the main result of \cite{rudenko_2021}.
\end{proof}
\begin{proof}[The proof of Theorem \ref{th:main_exact_sequence}]We need to prove that the following sequence is exact:
\begin{equation}
\label{formula:exact_sequence_2}
\Lambda^{4} F^\times\oplus \Gamma_2^{sp}(F(t),4)_{1}\to\Lambda^{4} F(t)^\times\to\bigoplus\limits_{p\in\mathbb A_{F,(0)}^1}\Lambda^{3} F(p)^\times\to 0.
\end{equation}
Consider the following double complex:
\[\begin{tikzcd}
{\Gamma_2(F,4)_{1}} & {\Gamma_2(F,4)_{2}} \\
{\Gamma_2(F(t),4)_{1}} & {\Gamma_2(F(t),4)_{2}} \\
{\bigoplus\limits_{p\in\mathbb A_{F,(0)}^1}\Gamma_2(F(p),3)_{1}} & {\bigoplus\limits_{p\in\mathbb A_{F,(0)}^1}\Gamma_2(F(p),3)_{2}.}
\arrow["\delta_{4}",from=1-1, to=1-2]
\arrow[from=1-1, to=2-1]
\arrow["",from=2-1, to=3-1]
\arrow["\bigoplus \delta_{3}",from=3-1, to=3-2]
\arrow["\delta_{4}",from=2-1, to=2-2]
\arrow[from=1-2, to=2-2]
\arrow["",from=2-2, to=3-2]
\end{tikzcd}\]
Denote by $Tot$ the total complex, placed in degrees $[1,4]$. Using Proposition \ref{prop:Rudenko_result}, a spectral sequence argument shows that $Tot$ has no cohomology in degree $3$. It follows that the sequence (\ref{formula:exact_sequence_2}) is exact in the second term. The exactness in the third term follows from Lemma \ref{lemma:Res_is_surjective} for $m=4$.
\end{proof}
\section{The proof of Theorem \ref{th:norm_map}}
\label{sec:main_section}
\subsection{Systems of lifted reciprocity maps}
\begin{definition}
Let $L\in \mathbf{Fields}_2$. \emph{A pre-system of lifted reciprocity maps } $\sigma$ on $L$ is a choice of a lifted reciprocity map $\sigma_\nu$ on the field $\overline L_\nu$ for any $\nu\in \mathrm{dval}(L)$.
\end{definition}
We have the following lemma:
\begin{lemma}
\label{lemma:finitnes_of_sum}
Let $L\in\mathbf{Fields}_2$. For any $b\in \Lambda^{4} L^\times$ and all but finitely many $\nu\in \mathrm{dval}(L)$ the element $\ts[4]_\nu(b)$ belongs to the image of the map $\overline L_\nu^\times\otimes \Lambda^2 k^\times\to \Lambda^3 \overline L_\nu^\times$.
In particular, if $\sigma$ is a pre-system of lifted reciprocity maps on $L$, then for any $b\in \Lambda^{4}L^\times$ and all but finitely many $\nu\in \mathrm{dval}(L)$ we have $\sigma_{\nu}\ts[4]_{\nu}(b)=0$.
\end{lemma}
\begin{proof}
Choose a smooth proper algebraic surface $S$ such that $L=k(S)$ and $b$ is strictly regular at all points of $S$. Let $\nu\in \mathrm{dval}(L)\backslash \mathrm{dval}(L)_S$. We claim that $\ts[4]_\nu(b)$ lies in the subgroup indicated in the lemma. Let $p\colon \widetilde S\to S$ be a birational morphism such that $\nu$ corresponds to some irreducible divisor $D$ on $\widetilde S$. Obviously, $D$ is contracted under $p$; denote by $x\in S$ its image. We apply Lemma \ref{lemma:characterisation_of_strictly_regular_elements} to the surface $S$, the point $x$ and the element $b$. The restrictions of all the functions $p^*(\xi_i)$ to $D$ lie in $k^\times$. Now the statement follows from the definition of the tame symbol.
\end{proof}
It follows from the previous lemma that for any pre-system of lifted reciprocity maps on $L$ and $b\in \Lambda^{4} L^\times$ the following sum is well-defined:
$$\sum\limits_{\nu\in \mathrm{dval}(L)}\sigma_{\nu}\ts[4]_{\nu}(b).$$
\begin{definition}
A pre-system of lifted reciprocity maps is called a system of lifted reciprocity maps if this sum is zero for any $b\in \Lambda^{4} L^\times$.
\end{definition}
Define a functor $$\mathrm{SLRM}\colon \mathbf{Fields}_2\to \textbf{Set}.$$ On objects it sends a field to the set of all systems of lifted reciprocity maps on it. On morphisms, it is defined as follows. Assume that $j\colon L\hookrightarrow M$ is an embedding of fields and $\sigma$ is a system of lifted reciprocity maps on $M$. Define a system of lifted reciprocity maps $\mathrm{SLRM}(j)(\sigma)$ on $L$ by the following formula:
$$\mathrm{SLRM}(j)(\sigma)_\nu=\dfrac 1{[M:L]}\sum\limits_{\nu'\in \text{ext}(\nu, M)} e_{\nu'|\nu}f_{\nu'|\nu}\mathrm{LRM}(j_{\nu'|\nu})(\sigma_{\nu'}).$$
We recall that $e_{\nu'|\nu}, f_{\nu'|\nu}$ and $j_{\nu'|\nu}$ were defined in Subsection \ref{subsec:extensions_of_fields}. The proof of the fact that $\mathrm{SLRM}$ is indeed a functor is similar to the proof of Proposition \ref{prop:SRL_is_functor}.
Denote by $$Rf\colon \mathbf{Fields}_1\to \mathbf{Fields}_2$$ the functor given by the formula $F\mapsto F(t)$.
Define the natural transformation $res\colon \mathrm{SLRM} \circ Rf\to \mathrm{LRM}$ as follows. We need to define a map $res_F\colon \mathrm{SLRM}(F(t))\to \mathrm{LRM}(F)$ for any $F\in \mathbf{Fields}_1$. We set $res_F(\sigma)=\sigma_{\nu_{\infty}}$, where $\nu_\infty$ is the valuation corresponding to the point $\infty\in\mathbb P_F^1$. It is not difficult to show that $res$ is indeed a natural transformation.
Here is the main result of this subsection:
\begin{theorem}
\label{th:isomorphism_of_functors}
The natural transformation $res\colon \mathrm{SLRM} \circ Rf\to \mathrm{LRM}$ is an isomorphism of functors.
\end{theorem}
\subsection{Proof of Theorem \ref{th:isomorphism_of_functors}}
\begin{lemma}\label{lemma:surjectivity of res}
Let $F\in\mathbf{Fields}_1$. For any lifted reciprocity map $h$ on the field $F$ there is a system of lifted reciprocity maps $\sigma$ on the field $F(t)$ such that $res(\sigma)=h$.
\end{lemma}
Set $L=F(t)$. We need to show that for any lifted reciprocity map $h$ on $F$ there is a system of lifted reciprocity maps $\sigma$ on $L$ satisfying $res_F(\sigma)=h$. First, for any $\nu\in \mathrm{dval}(L)$ we construct a map $\sigma_\nu \colon \Lambda^3 \overline L_\nu^\times\to {B_2}(k)$. Then we will show that the maps constructed in this way form a system of lifted reciprocity maps.
Let $\nu$ be special. If $\nu=\nu_\infty$, define $\sigma_{\nu_\infty}=h$ (here we have used the identification of $\overline L_{\nu_\infty}$ with $F$). Otherwise $\overline{F(t)}_\nu\simeq k(t)$, and we define $\sigma_\nu$ to be the unique lifted reciprocity map from Proposition \ref{prop:lifted_law_P^1}.
We have defined $\sigma_\nu$ for any $\nu\in \mathrm{dval}(L)_{sp}$. Define the map $H\colon \Lambda^{4} L^\times\to {B_2}(k)$ by the following formula:
$$H(b)=-\sum\limits_{\nu\in \mathrm{dval}(L)_{sp}}\sigma_\nu\ts[4]_{\nu}(b).$$
This sum is well-defined by Lemma \ref{lemma:finitnes_of_sum}.
\begin{lemma}\label{lemma:H_is_zero_on}
The map $H$ is zero on the image of the map $$\Lambda^{4} F^\times\oplus \Gamma_2^{sp}(L,4)_{1}\to \Lambda^4 L^\times.$$
Here the first map is the natural embedding and the second map is induced by $\delta_4$.
\end{lemma}
\begin{proof}[Proof of Lemma \ref{lemma:H_is_zero_on}]
\begin{enumerate}
\item Direct computation shows that for any $a\in \Lambda^4 F^\times$ and any $\nu\in \mathrm{dval}(L)$ the element $\ts[4]_\nu(a)$ lies in the subgroup $\Lambda^3 k^\times\subset\Lambda^{3}\overline L_\nu^\times$. It follows that the map $H$ vanishes on the image of the group $\Lambda^{4}F^\times$.
\item Let us prove that $H$ is zero on the image of the group $\Gamma_2^{sp}(L,4)_{1}$. Consider the following commutative diagram:
\begin{equation}
\begin{tikzcd}
{\Gamma_2(L,4)_1} & {\Gamma_2(L,4)_2} \\
{\bigoplus\limits_{\nu\in\mathrm{dval}(L)_{sp}} \Gamma_2(\overline L_\nu,3)_1} & {\bigoplus\limits_{\nu\in\mathrm{dval}(L)_{sp}} \Gamma_2(\overline L_\nu,3)_2} \\
{\Gamma_2(k,2)_1} & {\Gamma_2(k,2)_2.}
\arrow["{(\ts[4]_\nu)}"', from=1-1, to=2-1]
\arrow["{\sum\ts_{\nu'}}"'
, from=2-1, to=3-1]
\arrow["{\delta_{4}}", from=1-1, to=1-2]
\arrow["{(\ts[4]_{\nu})}", from=1-2, to=2-2]
\arrow["{\sum\sigma_\nu}"', from=2-2, to=3-1]
\arrow["{\sum\ts_{\nu'}}", from=2-2, to=3-2]
\arrow["{\delta_2}", from=3-1, to=3-2]
\arrow["{\delta_{3}}", from=2-1, to=2-2]
\end{tikzcd}
\end{equation}
For any $b\in \Gamma_2^{sp}(L,4)_{1}$ we get
$$\sum\limits_{\nu\in \mathrm{dval}(L)_{sp}}\sigma_\nu\ts[4]_{\nu}\delta_{4}(b)=\sum\limits_{\nu\in \mathrm{dval}(L)_{sp}}\sum\limits_{\nu'\in \mathrm{val}(\overline L_\nu)}\ts_{\nu'}\ts[4]_{\nu}(b).$$
So by definition of $H$, we obtain:
\begin{equation}
\begin{split}
H(\delta_{4}(b))=-\sum\limits_{\nu\in \mathrm{dval}(L)_{sp}}\sigma_\nu\ts[4]_\nu\delta_{4}(b)=-\sum\limits_{\nu\in \mathrm{dval}(L)_{sp}}\sum\limits_{\nu'\in \mathrm{val}(\overline L_\nu)}\ts_{\nu'}\ts[4]_{\nu}(b)=\\
= -\sum\limits_{\nu\in \mathrm{dval}(L)}\sum\limits_{\nu'\in \mathrm{val}(\overline L_\nu)}\ts_{\nu'}\ts[4]_{\nu}(b)=0.
\end{split}
\end{equation}
Here the third equality holds because the element $b$ lies in the group $\Gamma_2^{sp}(L,4)_1$ and so has residues only at special valuations. The fourth equality follows from Corollary \ref{cor:Parshin_sum_point_curve}.
\end{enumerate}
\end{proof}
Therefore thanks to Theorem \ref{th:main_exact_sequence} we get a well-defined map $$\bigoplus\limits_{\nu\in \mathrm{dval}(L)_{gen}}\Lambda^{3}\overline L_\nu^\times \to {B_2}(k).$$ For any $\nu\in \mathrm{dval}(L)_{gen}$, define $\sigma_\nu$ to be the restriction of this map to the subgroup $\Lambda^{3}\overline L_\nu^\times$.
For $j\in \{1,2\}$, $\nu\in \mathrm{dval}(L)_{gen}$ and $a\in \Gamma_2(\overline L_\nu,3)_j$ denote by $\mathcal L(a)$ the set of elements $b\in \Gamma_2(L,4)_j$ such that $\ts[4]_\nu(b)=a$ and $\ts[4]_{\nu'}(b)=0$ for any $\nu'\in \mathrm{dval}(L)_{gen}$, $\nu'\ne\nu$. This set is non-empty by Lemma \ref{lemma:Res_is_surjective}.
\begin{lemma}\label{lemma:formula_for_sigma}
Let $\nu\in \mathrm{dval}(L)_{gen}$ and $a\in \Lambda^3 \overline L_\nu^\times$. For any $b\in\mathcal L(a)$ the element
$\sigma_\nu(a)$ is equal to $H(b)$.
\end{lemma}
\begin{proof}
Follows from the definition of $\sigma_\nu$.
\end{proof}
\begin{lemma}\label{lemma:double_sum}
Let $j\in \{1,2\}$, $\nu\in\mathrm{dval}(L)_{gen}$, $a\in \Gamma_2(\overline L_\nu,3)_j$ and $b\in \mathcal L(a)$. The following formula holds:
$$\sum\limits_{\mu'\in \mathrm{val}(\overline L_\nu)}\ts_{\mu'}(a)=-\sum\limits_{\mu\in \mathrm{dval}(L)_{sp}}\sum\limits_{\mu'\in \mathrm{val}(\overline L_{\mu})}\ts_{\mu'}\ts[4]_\mu(b).$$
\end{lemma}
\begin{proof}
Corollary \ref{cor:Parshin_sum_point_curve} implies the following formula:
$$\sum\limits_{\mu\in \mathrm{dval}(L)}\sum\limits_{\mu'\in \mathrm{val}(\overline L_\mu)}\ts_{\mu'}\ts[4]_\mu(b)=0.$$
From the other side:
\begin{equation*}
\begin{split}
\sum\limits_{\mu\in \mathrm{dval}(L)}\sum\limits_{\mu'\in \mathrm{val}(\overline L_\mu)}\ts_{\mu'}\ts[4]_\mu(b)=\\
=\sum\limits_{\mu'\in \mathrm{val}(\overline L_\nu)}\ts_{\mu'}(a)+\sum\limits_{\mu\in \mathrm{dval}(L)_{sp}}\sum\limits_{\mu'\in \mathrm{val}(\overline L_\mu)}\ts_{\mu'}\ts[4]_\mu(b).
\end{split}
\end{equation*}
The statement of the lemma follows.
\end{proof}
\begin{lemma}\label{lemma:sigma_is_SRL}
For any $\nu\in \mathrm{dval}(L)_{gen}$, the map $\sigma_\nu$ is a lifted reciprocity map.
\end{lemma}
\begin{proof}
Let us show that the following diagram is commutative:
\begin{equation*}
\begin{tikzcd}[row sep=huge]
{{B_2}(\overline L_\nu)\otimes \overline L_\nu^\times} & {\Lambda^{3} \overline L_\nu^{\times}} \\
{{B_2}(k)} & {\Lambda^{2}(k^\times).}
\arrow["{\delta_{3}}", from=1-1, to=1-2]
\arrow["\sigma_\nu"', dashed, from=1-2, to=2-1]
\arrow["{\delta_{2}}", from=2-1, to=2-2]
\arrow["{\sum\limits_{\mu'\in \mathrm{val}(\overline L_\nu)}\ts_{\mu'}}", from=1-2, to=2-2]
\arrow["{\sum\limits_{\mu'\in \mathrm{val}(\overline L_\nu)}\ts_{\mu'}}"', from=1-1, to=2-1]
\end{tikzcd}
\end{equation*}
\begin{description}
\item[The lower right triangle]
Let $a\in \Lambda^{3} \overline L_\nu^\times$. Choose some $b\in\mathcal L(a)$. We have:
\begin{align*}
&\sum\limits_{\mu'\in \mathrm{val}(\overline L_\nu)}\ts_{\mu'}(a)=-\sum\limits_{\mu\in \mathrm{dval}(L)_{sp}}\sum\limits_{\mu'\in \mathrm{val}(\overline L_\mu)}\ts_{\mu'}\ts[4]_\mu(b)\quad\text{(by Lemma \ref{lemma:double_sum})}
\\
&=-\sum\limits_{\mu\in \mathrm{dval}(L)_{sp}} \delta_2\sigma_{\mu}\ts[4]_\mu(b)\quad\text{(because $\sigma_\mu$ is a lifted reciprocity map)}\\&=
\delta_2H(b)\quad\text{(follows from the definition of the map $H$)}\\&=\delta_2\sigma_\nu(a)\quad\text{(by Lemma \ref{lemma:formula_for_sigma})}.
\end{align*}
\item[The upper left triangle] Let $a_1\in \Gamma_2(\overline L_\nu,3)_{1}$. Choose $b_1\in \mathcal L(a_1)$. We have:
\begin{align*}
&\sum\limits_{\mu'\in \mathrm{val}(\overline L_\nu)}\ts_{\mu'}(a_1)=-\sum\limits_{\mu\in \mathrm{dval}(L)_{sp}}\sum\limits_{\mu'\in \mathrm{val}(\overline L_{\mu})}\ts_{\mu'}\ts[4]_\mu(b_1)\quad\text{(by Lemma \ref{lemma:double_sum})}\\
& =-\sum\limits_{\mu\in \mathrm{dval}(L)_{sp}} \sigma_\mu\delta_{3}\ts[4]_\mu(b_1)\quad\text{(because $\sigma_\mu$ is a lifted reciprocity map)}\\
& =
-\sum\limits_{\mu\in \mathrm{dval}(L)_{sp}} \sigma_\mu\ts[4]_\mu\delta_{4}(b_1)\quad\text{(because $\ts[4]_\mu$ is a morphism of complexes)}\\&=H(\delta_4(b_1))\quad\text{(follows from the definition of the map $H$)}\\&=\sigma_{\nu}\delta_{3}(a_1)\quad\text{(by Lemma \ref{lemma:formula_for_sigma} and because $\delta_4(b_1)\in \mathcal L(\delta_3(a_1))$)}.
\end{align*}
\end{description}
To prove that $\sigma_\nu$ is a lifted reciprocity map it remains to show that it vanishes on elements of the form $a\wedge c$, $a\in \Lambda^2 \overline L_\nu^\times$, $c\in k^\times$. Lemma \ref{lemma:Res_is_surjective} for $m=3$ in degree $2$ implies that there is $b\in \Lambda^3 L^\times$ such that $\ts[3]_{\nu}(b)=a$ and $\ts[3]_{\nu'}(b)=0$ for any $\nu'\in \mathrm{dval}(L)_{gen}$, $\nu'\ne \nu$. It follows that $b\wedge c\in \mathcal L(a\wedge c)$. Now we have:
\begin{align*}
& \sigma_\nu(a\wedge c)=H(b\wedge c)\quad\text{(by Lemma \ref{lemma:formula_for_sigma} and because $b\wedge c\in \mathcal L(a\wedge c)$)} \\
& =-\sum\limits_{\mu\in\mathrm{dval}(L)_{sp}}\sigma_{\mu}\ts[4]_{\mu}(b\wedge c)\quad\text{(by the definition of $H$)}\\
& =-\sum\limits_{\mu\in\mathrm{dval}(L)_{sp}}\sigma_{\mu}(\ts[3]_{\mu}(b)\wedge c)\quad\text{(by the property of $\ts[4]_\mu$)}\\
& =0\quad\text{(because $\sigma_\mu$ is a lifted reciprocity map)}.
\end{align*}
So we have proved that $\sigma_\nu$ is a lifted reciprocity map.
\end{proof}
\begin{proof}[Proof of Lemma \ref{lemma:surjectivity of res}]
Let us prove that $\sigma$ is a system of lifted reciprocity maps. We need to show that for any $b\in \Lambda^4 L^\times$ the following formula holds:
\begin{equation}
\label{formula:proof_system_of_SRL}
\sum\limits_{\nu\in\mathrm{dval}(L)}\sigma_\nu\ts[4]_\nu (b)=0.
\end{equation}
By Theorem \ref{th:main_exact_sequence} the group $\Lambda^{4}L^\times$ is generated by the subsets $\mathcal L(a)$, $a\in \Lambda^{3}\overline L_\nu^\times$, $\nu\in \mathrm{dval}(L)_{gen}$. So we can assume that $b\in\mathcal L(a)$ for some $a\in \Lambda^{3}\overline L_\nu^\times$. In this case formula (\ref{formula:proof_system_of_SRL}) follows from the definition of $\sigma_\nu$.
\end{proof}
\begin{proof}[Proof of Theorem \ref{th:isomorphism_of_functors} ]
To prove that $res$ is an isomorphism of functors, we need to show that for any $F\in \mathbf{Fields}_1$ the map $$res_F\colon \mathrm{SLRM}(F(t))\to \mathrm{LRM}(F)$$ is a bijection. By Lemma \ref{lemma:surjectivity of res} this map is surjective. So we need to show that it is injective.
Assume that $h$ is a system of lifted reciprocity maps on $F$, and $\sigma,\sigma'$ be two systems of lifted reciprocity maps on $L$ satisfying $res(\sigma)=res(\sigma')=h$. We need to show that $\sigma=\sigma'$.
By definition $\sigma_{\nu_{\infty}}=\sigma'_{\nu_{\infty}}=h$. If $\nu$ is a special valuation different from $\nu_{\infty}$, then $\overline L_\nu\cong k(t)$, and so by Proposition \ref{prop:lifted_law_P^1} there is a unique lifted reciprocity map on $\overline L_\nu$. We conclude that $\sigma_\nu=\sigma'_\nu$ for any special $\nu$.
Let $\nu$ be an arbitrary element of $\mathrm{dval}(L)_{gen}$. It remains to show that for any $a\in \Lambda^3 \overline L_\nu^\times$ we have $\sigma_{\nu}(a)=\sigma'_{\nu}(a)$.
Choose some $b\in\mathcal L(a)$. By definition of $b$ and the fact that $\sigma$ is a system of lifted reciprocity maps we have:
$$\sigma_\nu(a)+\sum\limits_{\mu\in \mathrm{dval}(L)_{sp}}\sigma_\mu\ts[4]_\mu(b)=0.$$
So
$$\sigma_\nu(a)=-\sum\limits_{\mu\in \mathrm{dval}(L)_{sp}}\sigma_\mu\ts[4]_\mu(b).$$
In the same way
$$\sigma'_\nu(a)=-\sum\limits_{\mu\in \mathrm{dval}(L)_{sp}}\sigma'_\mu\ts[4]_\mu(b).$$
The right hand sides of the last two formulas coincide because $\sigma$ and $\sigma'$ coincide on special valuations. We conclude that $\sigma_\nu(a)=\sigma'_\nu(a)$ as well.
\end{proof}
\subsection{The proof of Theorem \ref{th:norm_map}}
In this subsection we will use Theorem \ref{th:isomorphism_of_functors} to construct the norm map on lifted reciprocity maps. We follow ideas from \cite[\S 1]{suslin1979reciprocity} (see also \cite{MILNOR1969/70, bass1973milnor, kato1980generalization}).
Since $res$ is an isomorphism of functors, it has an inverse. Denote it by $\mathcal N$. Let $F\in \mathbf{Fields}_1$ and $\nu\in \mathrm{dval}(F(t))$. Define the norm map $N_{\nu}\colon \mathrm{LRM}(F)\to \mathrm{LRM}(\overline{F(t)}_{\nu})$ by the formula $N_{\nu}(h)=\mathcal N_F(h)_{\nu}$.
Let $j\colon F\hookrightarrow K$ be an extension of fields from $\mathbf{Fields}_1$. Let $a$ be some generator of $K$ over $F$. Consider the map $F[t]\to K$ given by the formula $p(t)\mapsto p(a)$. The kernel of this map is generated by an irreducible polynomial $p_a$ over $F$. Denote by $\nu_a$ the corresponding valuation. The residue field $\overline{F(t)}_{\nu_{a}}$ is canonically isomorphic to $K$. So we get a map $N_{\nu_a}\colon \mathrm{LRM}(F)\to \mathrm{LRM}(K)$, which we denote by $N_{K/F, a}$. This map is called \emph{the norm map}. We will show that it does not depend on $a$.
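To make the construction concrete, here is one possible example of this setup (chosen only for illustration): take $F=k(x)$, $K=F(\sqrt x)$ and the generator $a=\sqrt x$. Then
$$p_a=t^2-x,$$
the valuation $\nu_a$ of $F(t)$ is the one given by the irreducible polynomial $t^2-x$, and $N_{K/F,a}=N_{\nu_a}$ sends a lifted reciprocity map on $k(x)$ to a lifted reciprocity map on $\overline{F(t)}_{\nu_a}\cong K$.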
As a corollary from Theorem \ref{th:isomorphism_of_functors} we get the following lemma:
\begin{lemma}
\label{lemma:functoriality_of_norms}
\begin{enumerate}
\item Let $j\colon F\hookrightarrow K$ be an extension from $\mathbf{Fields}_1$, $\nu\in \mathrm{dval}(F(t))$ and $n=[K:F]$. The following diagram is commutative:
\begin{equation}
\label{diagram:functoriality_of_norm_valuations}
\begin{tikzcd}[row sep=huge,column sep=huge]
{\mathrm{LRM}(F)} & {\mathrm{LRM}(\overline{F(t)}_\nu)} \\
{\mathrm{LRM}(K)} & {\bigoplus\limits_{\nu'\in \text{ext}(\nu, K(t))}\mathrm{LRM}(\overline{K(t)}_{\nu'}).}
\arrow["N_{\nu}",from=1-1, to=1-2]
\arrow["(N_{\nu'})",from=2-1, to=2-2]
\arrow["\mathrm{LRM}(j)",from=2-1, to=1-1]
\arrow["\sum\limits_{\nu'\in \text{ext}(\nu,K(t))} \dfrac{e_{\nu'|\nu}f_{\nu'|\nu}}n\mathrm{LRM}(j_{\nu'|\nu})"',from=2-2, to=1-2]
\end{tikzcd}
\end{equation}
\item Let $j\colon F_1\hookrightarrow K$ and $F_1\subset F_2$ be extensions and $F_2\otimes_{F_1} K=\bigoplus\limits_{i=1}^m F_{2,i}$. Denote by $j_i$ the natural embedding $F_2\hookrightarrow F_{2,i}$. Let $n=[K:F_1]$ and $n_i=[F_{2,i}:F_2]$. Let $a$ be a generator of $F_2$ over $F_1$ and denote by $a_i$ the corresponding generators of $F_{2,i}$ over $K$. The following diagram is commutative:
\begin{equation}
\label{diagram:functoriality_of_norm}
\begin{tikzcd}[row sep=huge,column sep=huge]
{\mathrm{LRM}(F_1)} & {\mathrm{LRM}(F_2)} \\
{\mathrm{LRM}(K)} & {\bigoplus\limits_{i=1}^m\mathrm{LRM}(F_{2,i}).}
\arrow["N_{F_2/F_1,a}",from=1-1, to=1-2]
\arrow["(N_{F_{2,i}/K,a_i})",from=2-1, to=2-2]
\arrow["\mathrm{LRM}(j)",from=2-1, to=1-1]
\arrow["\sum\limits_{i=1}^m \dfrac{n_i}{n}\mathrm{LRM}(j_i)"',from=2-2, to=1-2]
\end{tikzcd}
\end{equation}
\end{enumerate}
\end{lemma}
\begin{proof}
\begin{enumerate}
\item Denote by $j'$ the embedding $F(t)\hookrightarrow K(t)$. Since $\mathcal N$ is a natural transformation of functors, the following diagram is commutative:
\begin{equation*}
\begin{tikzcd}
{\mathrm{LRM}(F)} & {\mathrm{SLRM}(F(t))} \\
{\mathrm{LRM}(K)} & {\mathrm{SLRM}(K(t)).}
\arrow["{\mathrm{LRM}(j)}"', from=2-1, to=1-1]
\arrow["{\mathcal N_{K}}", from=2-1, to=2-2]
\arrow["{\mathcal N_{F}}", from=1-1, to=1-2]
\arrow["{\mathrm{SLRM}(j')}"', from=2-2, to=1-2]
\end{tikzcd}
\end{equation*}
So for any $h_K\in\mathrm{LRM}(K)$ and any $\nu\in\mathrm{dval}(F(t))$, we get the following identity: $$\mathcal N_{F}(\mathrm{LRM}(j)(h_K))_{\nu}=\mathrm{SLRM}(j')(\mathcal N_{K}(h_K))_{\nu}.$$
Writing this out, we get:
\begin{equation*}
\begin{split}
\mathcal N_{F}(\mathrm{LRM}(j)(h_K))_{\nu}=N_\nu(\mathrm{LRM}(j)(h_K)),\\
\mathrm{SLRM}(j')(\mathcal N_{K}(h_K))_{\nu}=\sum\limits_{\nu'\in \text{ext}(\nu, K(t))}\dfrac {e_{\nu'|\nu}f_{\nu'|\nu}}n\mathrm{LRM}(j_{\nu'|\nu})(\mathcal N_{K}(h_K)_{\nu'})=\\
=\sum\limits_{\nu'\in \text{ext}(\nu, K(t))}\dfrac {e_{\nu'|\nu}f_{\nu'|\nu}}n\mathrm{LRM}(j_{\nu'|\nu})(N_{\nu'}(h_K)).
\end{split}
\end{equation*}
Here, in the second formula we have used the definition of the functor $\mathrm{SLRM}$. The statement follows.
\item Let $p_a$ be the minimal polynomial of $a$ over $F_1$. We apply the previous statement to $F=F_1$, the field $K$ and the valuation $\nu$ corresponding to $p_a$. Let $p_a=\prod\limits_{i=1}^lp_{a,i}$ be the decomposition of $p_a$ into irreducible factors in $K(t)$. The set $\text{ext}(\nu, K(t))$ is in bijection with the irreducible factors of $p_a$ in $K(t)$. Denote by $\nu_i\in \text{ext}(\nu, K(t))$ the valuation corresponding to $p_{a,i}$. We have $F_{2,i}\cong \overline{K(t)}_{\nu_i}$, and the embeddings $j_i$ correspond to the embeddings $j_{\nu_i|\nu}$. Since the polynomial $p_a$ is separable, we have $e_{\nu_i|\nu}=1$, and so $e_{\nu_i|\nu}f_{\nu_i|\nu}=[F_{2,i}:F_2]$. So the diagram (\ref{diagram:functoriality_of_norm_valuations}) can be identified with (\ref{diagram:functoriality_of_norm}).
\end{enumerate}
\end{proof}
\begin{lemma}
\label{lemma:SRL_is_surjective}
Let $j\colon F_1\hookrightarrow F_2$ be an extension in $\mathbf{Fields}_1$ and let $a$ be a generator of $F_2$ over $F_1$. We have $\mathrm{LRM}(j)\circ N_{F_2/F_1, a}=id$. In particular, if $F_1=F_2$ then $N_{F_2/F_1, a}$ is the identity map, and for any $j\colon F_1\hookrightarrow F_2$ the map $\mathrm{LRM}(j)$ is surjective.
\end{lemma}
\begin{proof}
Let $h\in \mathrm{LRM}(F_1)$, $\sigma=\mathcal N_{F_1}(h)$ and $x\in \Lambda^{3} F_1^\times$. Consider the element $b=p_a\wedge x\in \Lambda^{4} F_1(t)^\times$, where $p_a$ is the minimal polynomial of $a$ over $F_1$. Since $\sigma$ is a system of lifted reciprocity maps, we have:
$$\sum\limits_{\nu\in \mathrm{dval}(F_1(t))_{gen}}\sigma_\nu\ts[4]_\nu(b)+\sum\limits_{\nu\in \mathrm{dval}(F_1(t))_{sp}}\sigma_\nu\ts[4]_\nu(b)=0.$$
We have $\ts[4]_{\nu_{p_a}}(b)=x$, and $\nu_{p_a}$ is the only general valuation satisfying $\ts[4]_\nu(b)\ne 0$. So by the definition of the norm map, the first term is equal to $N_{F_2/F_1,a}(h)(x)$. On the other hand, it is easy to see that there is only one special valuation $\nu$ such that $\sigma_\nu\ts[4]_\nu(b)\ne 0$, namely $\nu_{\infty}$. We have $\ts[4]_{\nu_\infty}(b)=-nx$. The statement follows.
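A sketch of the last residue computation, with $n=\deg p_a$: write $b=p_a\wedge x=t^n\wedge x+u\wedge x$, where $u=p_a/t^n$ is a unit at $\nu_\infty$ because $p_a$ is monic of degree $n$. All the entries of $x$ lie in $F_1^\times$ and hence are units at $\nu_\infty$ with residues given by $x$ itself, so the second summand has zero tame symbol and
$$\ts[4]_{\nu_\infty}(b)=\nu_\infty(t^n)\cdot x=-n\,x.$$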
\end{proof}
\begin{proposition}
\label{prop:norm_does_not_depend_on_a}
The map $N_{F_2/F_1,a}$ does not depend on $a$. Denote $N_{F_2/F_1,a}$ simply by $N_{F_2/F_1}$.
\end{proposition}
\begin{proof}
We need to show that $N_{F_2/F_1,a}$ does not depend on $a$. Let $j\colon F_1\hookrightarrow K$ be a field extension of $F_1$ satisfying $F_2\otimes_{F_1} K\cong K^{\oplus[F_2:F_1]}$. We apply the second statement of Lemma \ref{lemma:functoriality_of_norms}. By the definition of $K$, for any $i$ we have $F_{2,i}\cong K$. By Lemma \ref{lemma:SRL_is_surjective} the maps $N_{F_{2,i}/K, a_i}$ are the identity. We conclude that in the diagram from Lemma \ref{lemma:functoriality_of_norms} all the maps except possibly $N_{F_2/F_1,a}$ do not depend on $a$. So the map $N_{F_2/F_1, a}$ does not depend on $a$ on the image of $\mathrm{LRM}(j)$. By the previous lemma this image coincides with $\mathrm{LRM}(F_1)$.
\end{proof}
\begin{proposition}
\label{prop:tower}
If $F_1\subset F_2\subset F_3$ is a tower of extensions in $\mathbf{Fields}_1$ then $N_{F_3/F_1}=N_{F_3/F_2}\circ N_{F_2/F_1}$.
\end{proposition}
\begin{proof}
Choose a field extension $j\colon F_1\hookrightarrow K$. Write $F_2\otimes_{F_1}K\cong \bigoplus_{i=1}^{n_{2}} F_{2,i}$ and $F_{3}\otimes_{F_2} F_{2,i} \cong \bigoplus_{s=1}^{n_{3,i}}F_{3,i,s}$. By associativity of the tensor product we have $F_3\otimes_{F_1} K\cong \bigoplus\limits_{i,s}F_{3,i,s}$. Denote by $j_{i,s}$ the natural embeddings $F_3\hookrightarrow F_{3,i,s}$. Let $n_{i,s}=[F_{3,i,s}:F_3]$ and let $n$ be the degree of $F_3$ over $F_1$.
Repeated application of Lemma \ref{lemma:functoriality_of_norms} together with Proposition \ref{prop:norm_does_not_depend_on_a} shows that the following diagram is commutative:
\begin{equation*}
\begin{tikzcd}[row sep=huge,column sep=huge]
{\mathrm{LRM}(F_1)} & & {\mathrm{LRM}(F_3)} \\
{\mathrm{LRM}(K)} & & {\bigoplus\limits_{i,s}\mathrm{LRM}(F_{3,i,s}).}
\arrow["N_{F_3/F_2}\circ N_{F_2/F_1}",from=1-1, to=1-3]
\arrow["(N_{F_{3,i,s}/F_{2,i}}\circ N_{F_{2,i}/K})",from=2-1, to=2-3]
\arrow["\mathrm{LRM}(j)",from=2-1, to=1-1]
\arrow["\sum\limits_{i,s} \dfrac{n_{i,s}}{n}\mathrm{LRM}(j_{i,s})"',from=2-3, to=1-3]
\end{tikzcd}
\end{equation*}
Choose $K$ such that $F_3\otimes_{F_1}K\cong K^{\oplus [F_3:F_1]}$. It follows that $F_{3,i,s}\cong F_{2,i}\cong K$. So the bottom map in the above diagram is the identity. Let us compare this diagram with diagram (\ref{diagram:functoriality_of_norm}) with $F_2$ replaced by $F_3$. We see that the left, right and bottom maps are the same. Since $\mathrm{LRM}(j)$ is surjective, the statement of the proposition follows.
\end{proof}
\begin{proposition}
\label{prop:Ha=Hb}
For any $a,b\in F\backslash k$ we have $$N_{F/k(a)}(\mathcal H_{k(a)})=N_{F/k(b)}(\mathcal H_{k(b)}).$$ In particular, the element $\mathcal H_F:=N_{F/k(a)}(\mathcal H_{k(a)})$ does not depend on $a\in F\backslash k$.
\end{proposition}
\begin{proof} Let $a,b\in F\backslash k$ be two functions generating $F$ over $k$. We first prove the statement for pairs of functions satisfying this condition.
Let $p_b\in k(a)[t]$ be the minimal polynomial of $b$ over $k(a)$. Multiplying $p_b$ by some rational function of $a$, we can assume that $p_b$ lies in $k[a,t]$ and is irreducible as a polynomial in two variables. Denote this polynomial by $h(a,t)$. It is easy to see that the polynomial $h'(b,t)$ given by the formula $h'(b,t)=h(t,b)$ is a minimal polynomial of $a$ over $k(b)$.
Let $A=k[x][y]$ and $L=k(x)(y)$. If we consider the polynomials $h,h'$ as polynomials in $x,y$, they give two elements $\nu,\nu'\in \mathrm{dval} (L)$.
By Theorem \ref{th:isomorphism_of_functors} and Proposition \ref{prop:lifted_law_P^1} there is a unique system of lifted reciprocity maps on the field $L$. Denote it by $\sigma$. Denote by $\lambda$ the automorphism of $L$ interchanging $x$ and $y$. Since $\sigma$ is the unique system of lifted reciprocity maps on $L$, it is invariant under $\lambda$. Since $\lambda$ interchanges $\nu$ and $\nu'$, it induces a map $\overline{\lambda}\colon \overline L_\nu\to \overline L_{\nu'}$. Because $\sigma$ is invariant under $\lambda$, we have $\mathrm{LRM}(\overline \lambda)(\sigma_{\nu'})=\sigma_{\nu}$.
The map $A\to F$ given by the formula $x\mapsto a, y\mapsto b$ induces an isomorphism $\theta\colon \overline L_\nu\to F$. In the same way, the map $A\to F$ given by the formula $x\mapsto b, y\mapsto a$ induces an isomorphism $\theta'\colon \overline L_{\nu'}\to F$. Since $\theta=\theta'\circ \overline \lambda$, we have
\begin{equation*}
\begin{split}
\mathrm{LRM}(\theta^{-1})(\sigma_\nu)=\mathrm{LRM}(\overline\lambda^{-1}\circ \theta'^{-1})(\sigma_\nu)=\\
=\mathrm{LRM}(\theta'^{-1})\circ\mathrm{LRM}(\overline \lambda^{-1})(\sigma_\nu)=\mathrm{LRM}(\theta'^{-1})(\sigma_{\nu'}).
\end{split}
\end{equation*}
Here, in the last equality, we have used the formula $\mathrm{LRM}(\overline \lambda)(\sigma_{\nu'})=\sigma_{\nu}$. By definition $N_{F/k(a), b}=\mathrm{LRM}(\theta^{-1})(\sigma_\nu)$ and $N_{F/k(b), a}=\mathrm{LRM}(\theta'^{-1})(\sigma_{\nu'})$. So we have proved that $N_{F/k(a),b}(\mathcal H_{k(a)})=N_{F/k(b),a}(\mathcal H_{k(b)})$.
By Proposition \ref{prop:norm_does_not_depend_on_a} we have $$N_{F/k(a),b}=N_{F/k(a)},\quad N_{F/k(b),a}=N_{F/k(b)}.$$ So we have proved that for any $a,b\in F\backslash k$ generating $F$ over $k$ we have $$N_{F/k(a)}(\mathcal H_{k(a)})=N_{F/k(b)}(\mathcal H_{k(b)}).$$ Now the statement follows from the following fact: for any $a,b\in F\backslash k$, there is $c\in F\backslash k$ such that the pairs $(a,c),(b,c)$ generate $F$ over $k$.
\end{proof}
Theorem \ref{th:norm_map} follows directly from Lemma \ref{lemma:SRL_is_surjective}, Proposition \ref{prop:norm_does_not_depend_on_a}, Proposition \ref{prop:tower} and Proposition \ref{prop:Ha=Hb}.
\section{The proof of Theorem \ref{th:limit_of_functor} and Corollary \ref{cor:reciprocity_map_for_surfaces}}
\label{sec:proof_of_main_resulats}
\begin{proof}[Proof of Theorem \ref{th:limit_of_functor}]
\begin{description}
\item[Existence] Let $F\in \mathbf{Fields}_1$. Choose some embedding $$j\colon k(t)\hookrightarrow F.$$ Define the element $\mathcal H_F$ by the formula $\mathcal H_F:=N_{F/k(t)}(\mathcal H_{k(t)}).$ By the third statement of Theorem \ref{th:norm_map} this element does not depend on $j$.
We need to show that if $j'\colon F_1\to F_2$ is an embedding, then $$\mathrm{LRM}(j')(\mathcal H_{F_2})=\mathcal H_{F_1}.$$ It follows from the first and the second statement of the same theorem:
\begin{equation*}
\begin{split}
\mathrm{LRM}(j')(\mathcal H_{F_2})=\mathrm{LRM}(j')N_{F_2/k(t)}(\mathcal H_{k(t)})=\mathrm{LRM}(j')N_{F_2/F_1}N_{F_1/k(t)}(\mathcal H_{k(t)})=\\
=(\mathrm{LRM}(j')\circ N_{F_2/F_1})(N_{F_1/k(t)}\mathcal H_{k(t)})=\mathcal H_{F_1}.
\end{split}
\end{equation*}
\item[Uniqueness]
Let $\mathcal H_F, \mathcal H_F', F\in\mathbf{Fields}_1$ be two families of lifted reciprocity maps such that for any $j\colon F_1\hookrightarrow F_2$ we have $\mathrm{LRM}(j)(\mathcal H_{F_2})=\mathcal H_{F_1}$ and $\mathrm{LRM}(j)(\mathcal H'_{F_2})=\mathcal H'_{F_1}$. We need to show that $\mathcal H_F=\mathcal H_F'$ for any $F\in \mathbf{Fields}_1$. First of all, it is true when $F=k(t)$. Let $F$ be any field. There is a field $F'\in\mathbf{Fields}_1$ together with two embeddings $F\subset F', k(t) \subset F'$ such that $F'/k(t)$ is Galois. It is enough to prove the statement for $F'$. Denote by $G$ the Galois group of $F'$ over $k(t)$. Since $\mathcal H_{F'}$ and $\mathcal H_{F'}'$ are invariant under the group $G$, it is enough to prove that they are equal on the subgroup $\left(\Lambda^{3} F'^\times\right)^G$. We know that $\left(K_{3}^M(F')\right)^G=K_{3}^M(k(t))$. It follows that $\left(\Lambda^{3} F'^\times\right)^G$ is generated by the Steinberg elements and the elements coming from $k(t)$. On the Steinberg elements $\mathcal H_{F'}$ and $\mathcal H_{F'}'$ coincide because they are lifted reciprocity maps. On the elements coming from $k(t)$ they coincide because $\mathcal H_{k(t)}=\mathcal H_{k(t)}'$.
\end{description}
\end{proof}
Let $L\in \mathbf{Fields}_2$. Define the map $H_L\colon \Lambda^{4} L^\times\to {B_2}(k)$ by the formula:
$$H_L(b)=\sum\limits_{\nu\in \mathrm{dval}(L)}\mathcal H_{\overline L_\nu}\ts[4]_\nu(b).$$
This formula is well-defined by Lemma \ref{lemma:finitnes_of_sum}. The following lemma is a corollary of Theorem \ref{th:limit_of_functor}:
\begin{lemma}
\label{lemma:functoriality_for_H}
If $j\colon L\hookrightarrow M$ is an extension of fields in $\mathbf{Fields}_2$, then for any $b\in \Lambda^{4} L^\times$ we have $H_{L}(b)=\dfrac 1{[M:L]}H_{M}(j_*(b))$.
\end{lemma}
\begin{proof}
Let $\nu\in \mathrm{dval}(L)$. It is enough to show the following formula:
$$\mathcal H_{\overline L_\nu}\ts[4]_\nu(b)=\dfrac 1{[M:L]}\sum\limits_{\nu'\in \text{ext}(\nu,M)}\mathcal H_{\overline M_{\nu'}}\ts[4]_{\nu'}(j_*(b)). $$
We have: $$\ts[4]_{\nu'}(j_*(b))=e_{\nu'|\nu} \cdot j_{\nu'|\nu}(\ts[4]_\nu(b)).$$
By Theorem \ref{th:limit_of_functor}: $$\mathcal H_{\overline M_{\nu'}}j_{\nu'|\nu}\ts[4]_\nu(b)=f_{\nu'|\nu} \mathcal H_{\overline L_\nu}\ts[4]_\nu(b).$$
So we get:
$$\mathcal H_{\overline M_{\nu'}}\ts[4]_{\nu'}j_*(b)=e_{\nu'|\nu}f_{\nu'|\nu} \cdot\mathcal H_{\overline L_\nu}\ts[4]_\nu(b).$$
Using this formula, we obtain:
\begin{align*}
&\dfrac 1{[M:L]}\sum\limits_{\nu'\in \text{ext}(\nu, M)} \mathcal H_{\overline M_{\nu'}}\ts[4]_{\nu'}j_*(b)=\\
&=\dfrac 1{[M:L]}\sum\limits_{\nu'\in \text{ext}(\nu, M)}e_{\nu'|\nu}f_{\nu'|\nu} \mathcal H_{\overline L_\nu}\ts[4]_\nu(b)= \\&=\mathcal H_{\overline L_\nu}\ts[4]_\nu(b)\dfrac 1{[M:L]}\sum\limits_{\nu'\in \text{ext}(\nu,M)}e_{\nu'|\nu}f_{\nu'|\nu}=\mathcal H_{\overline L_\nu}\ts[4]_\nu(b).
\end{align*}
The last equality follows from the formula $\sum\limits_{\nu'\in \text{ext}(\nu, M)}e_{\nu'|\nu}f_{\nu'|\nu}=[M:L]$.
\end{proof}
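As an illustrative sanity check of the degree identity used in the last step (an example only, not needed for the argument; we assume $\operatorname{char} k\neq 2$): for $L=k(t)$ and $M=k(s)$ with $s^2=t$, so that $[M:L]=2$, the valuation $\operatorname{ord}_t$ has the unique extension $\operatorname{ord}_s$ with $e_{\nu'|\nu}=2$, $f_{\nu'|\nu}=1$, while $\operatorname{ord}_{t-1}$ has the two extensions $\operatorname{ord}_{s-1}$ and $\operatorname{ord}_{s+1}$, each with $e_{\nu'|\nu}=f_{\nu'|\nu}=1$; in both cases
$$\sum\limits_{\nu'\in \text{ext}(\nu, M)}e_{\nu'|\nu}f_{\nu'|\nu}=2=[M:L].$$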
\begin{proof}[Proof of Corollary \ref{cor:reciprocity_map_for_surfaces}]
We need to show that $H_L=0$ for any $L\in \mathbf{Fields}_2$. Let us prove that it is true when $L=k(x)(y)$. By Theorem \ref{th:isomorphism_of_functors} and Proposition \ref{prop:lifted_law_P^1}, in this case there is a unique system $\sigma$ of lifted reciprocity maps on $L$. It is enough to show that for any $\nu\in \mathrm{dval}(L)$ we have $\sigma_\nu=\mathcal H_{\overline L_\nu}$. When $\nu$ is special it is true by Proposition \ref{prop:lifted_law_P^1}. When $\nu$ is general it follows from the definition of $\mathcal H_{\overline L_\nu}$.
Let us prove the statement for an arbitrary $L$. There are finite extensions $$j\colon k(x)(y)\hookrightarrow L', j'\colon L\hookrightarrow L'$$ such that $j$ is a Galois extension. Lemma \ref{lemma:functoriality_for_H} shows that it is enough to prove the statement for $L'$. Denote the Galois group of $j$ by $G$. Since $H_{L'}$ is invariant under $G$, it is enough to prove that $H_{L'}$ is zero on the subgroup $\left(\Lambda^4 L'^\times\right)^G$. Since $\left(K_{4}^M(L')\right)^G=K_{4}^M(k(x)(y))$, $\left(\Lambda^{4} L'^\times\right)^G$ is generated by the Steinberg elements and by the elements coming from $k(x)(y)$. Vanishing of $H_{L'}$ on the elements coming from $k(x)(y)$ follows from Lemma \ref{lemma:functoriality_for_H} together with the formula $H_{k(x)(y)}=0$. Let us prove that $H_{L'}$ is zero on the Steinberg elements.
For any $b\in B_2(L',4)_1$ we have
\begin{equation*}
\begin{split}
H_{L'}(\delta_4(b))=\sum\limits_{\nu\in \mathrm{dval}(L')}\mathcal H_{\overline {L'}_\nu}\ts[4]_\nu\delta_4(b)=\sum\limits_{\nu\in \mathrm{dval}(L')}\mathcal H_{\overline {L'}_\nu}\delta_3\ts[4]_\nu(b)=\\=\sum\limits_{\nu\in \mathrm{dval}(L')}\sum\limits_{\nu'\in \mathrm{val}(\overline {L'}_\nu)}\ts_{\nu'}\ts[4]_{\nu}(b)=0.
\end{split}
\end{equation*}
Here the second equality holds because $\ts[4]_\nu$ is a morphism of complexes, the third because $\mathcal H_{\overline {L'}_\nu}$ is a lifted reciprocity map, and the fourth follows from Corollary \ref{cor:Parshin_sum_point_curve}.
\end{proof}
\section{Introduction}
\label{sec:intro}
Distributed sensor networks provide a platform for many military and civilian applications. Examples include surveillance, monitoring wildlife, or controlling the power grid. Sensor networks built for these applications are intended to solve various problems such as detecting, tracking, classifying, counting, and estimating. They are also required to adhere to a number of physical constraints including power, bandwidth, latency, and complexity. Much research has been reported on each of these topics over the past two decades. In the field of distributed estimation, for example, various estimation problems have been formulated and solved.
Many works choose either to optimize a distributed sensor network with respect to energy consumption during transmission \cite{Li09,Wu08} or to impose bandwidth constraints and thus focus on designing an optimal quantization strategy for the distributed network \cite{Ribeiro06}, \cite{Niu2006}. Few works involve both constraints (see \cite{Cui07} as an example).
Among research groups working on the problem of distributed estimation, a few deal with distributed estimation of a field (in general, a multidimensional function) \cite{Schabus11,Wang08,Nowak03}. Since in many real-world applications distributed estimation of a multidimensional function may provide additional information that aids in making a high-fidelity decision or in solving another inference problem, we contribute to this topic by formulating and solving the problem of parametric field estimation from sparse noisy sensor measurements. Distributed target localization is another active area of research \cite{Niu2006}. An iterative solution to this problem is a second contribution of our work.
In this paper, the problem of distributed estimation of a physical field from sensory data collected by a homogeneous sensor network is stated as a maximum likelihood estimation problem. The physical field is a deterministic function and has a known spatial distribution parameterized by a set of unknown parameters, such as the location of an object generating the field and the strength of the field in the region occupied by the sensors. Sensor observations are distorted by additive white Gaussian noise. Prior to transmission, each sensor quantizes its observation to $M$ levels. The quantized data are then communicated over parallel additive white Gaussian channels to the fusion center where the unknown parameters of the underlying physical field are estimated. An iterative expectation-maximization (EM) algorithm to estimate the unknown parameters is formulated, and a simplified numerical solution involving additional approximations is developed. The numerical examples illustrate the developed approach for the case of the field modeled as a Gaussian bell.
The remainder of the paper is organized as follows. Sec. \ref{sec:Problem Statement} formulates the problem. Sec. \ref{sec:Iterative Solution} develops an EM solution. Sec. \ref{sec:Numerical_Analysis} provides numerical performance evaluation. The summary of the developed results is provided in Sec. \ref{sec:Summary}.
\section{Problem Statement}
\label{sec:Problem Statement}
Consider a distributed network of homogeneous sensors monitoring the environment for the presence of a substance or an object. Assume that each substance or object is characterized by a location parameter and by a spatially distributed physical field generated by it. As an example, a ferromagnetic object can be viewed as a single or a collection of dipoles characterized by a magnetic field that they generate. This field can be sensed by a network of magnetometers placed in the vicinity of the object. Depending on the design of the magnetometers, they may take measurements of a directional complex valued magnetic field or of the magnitude of the field only. The field generated by a dipole decays as a function of the inverse cube of the distance to the dipole. The sensor network does not know a priori the location of the dipole as well as the type and size of the object. However, the type and size of an object can be associated with the strength of the magnetic field. Examples of other physical fields include (1) a radioactive field that can be modeled as a stationary spatially distributed Poisson field with a two-dimensional intensity function decaying according to the inverse-square law or (2) a distribution of pollution or chemical fumes that, if stationary, can often be modeled as a Gaussian bell.
Consider a network of $K$ sensors distributed over an area $A.$ The network is calibrated in the sense that the relative locations of the sensors are known. Sensors act independently of one another and take noisy measurements of a physical field $G(x,y).$ A sample of $G(x,y)$ at a location $(x_k,y_k)$ is denoted as $G_k=G(x_k,y_k).$ The parametric field $G(x,y)$ is characterized by $L$ unknown parameters $\mathbf{\theta}=[\theta_1,\ldots,\theta_L]^T.$ To emphasize this dependence we will use both $G_k$ and $G(x_k,y_k:\theta)$ throughout the text. The sensor noise, denoted by $W_k,$ $k=1,\ldots,K,$ is modeled as Gaussian with mean zero and known variance $\sigma^2;$ the noise samples are independent and identically distributed across sensors. Let $R_k,$ $k=1,\ldots,K,$ be the noisy samples of the field at the locations of the distributed sensors. Then $R_k$ is modeled as $R_k=G(x_k,y_k)+W_k.$
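As a concrete illustration of this measurement model, the sketch below generates noisy samples $R_k$ of a Gaussian-bell field (the specific field shape, parameter values, and function names are illustrative assumptions, anticipating the example of Sec. \ref{sec:Numerical_Analysis}):

```python
import numpy as np

def gaussian_bell(x, y, mu, xc, yc, var=4.0):
    # Illustrative parametric field G(x, y : theta): a Gaussian bell with
    # peak mu centered at (xc, yc); theta = [mu, xc, yc].
    return mu * np.exp(-((x - xc)**2 + (y - yc)**2) / (2.0 * var))

rng = np.random.default_rng(0)
K, sigma2 = 10, 0.25                                  # sensors, sensor-noise variance
xs, ys = rng.uniform(0, 8, K), rng.uniform(0, 8, K)   # known (calibrated) positions
G = gaussian_bell(xs, ys, mu=8.0, xc=4.0, yc=4.0)     # field samples G_k
R = G + rng.normal(0.0, np.sqrt(sigma2), K)           # noisy measurements R_k
```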
\begin{figure}[!t]
\centering
\includegraphics[width=3.5in]{block_diagram}
\vspace{-1.5cm}
\caption{Block-diagram of the distributed sensor network. }
\label{fig:block_diagram}
\end{figure}
Due to constraints imposed by practical technology, each sensor may be required to quantize its measurements prior to transmitting them to the fusion center (FC). Assume that a deterministic quantizer with $M$ quantization levels is employed. Let $\nu_1,\nu_2,\ldots,\nu_M$ be the known reproduction points of the quantizer, and denote by $q(R_k)=q_k$ the quantized version of the measurement taken by the $k$-th sensor. These data are modulated using a digital modulation scheme and then transmitted to the FC over noisy parallel channels. The noise in the channels, denoted by $\tilde{N}_k,$ $k=1,\dots,K,$ is due to quantization error and channel impairments. Denote by $m(\cdot)$ a modulation function and by $d(\cdot)$ a demodulation function, and let $Z_1,\ldots,Z_K$ be the noisy observations received by the FC. Then each $Z_k$ is given by $Z_k=d(m(q_k))+\tilde{N}_k, \ \ k=1,\ldots,K.$ In this work we assume that $m(\cdot)$ and $d(\cdot)$ are linear and that the demodulator recovers the quantized signal by using a soft thresholding rule. These assumptions allow $Z_k$ to be approximated by its asymptotic counterpart $Z_k=q_k+N_k,$ where $N_k$ is white Gaussian noise with variance $\eta^2.$
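A minimal sketch of this quantize-and-transmit stage (the uniform cell boundaries, midpoint reproduction points, and helper names are assumptions for illustration):

```python
import numpy as np

def uniform_quantizer(r, tau):
    # Map each measurement to the reproduction point nu_j (here: the cell
    # midpoint) of the quantization cell [tau_j, tau_{j+1}) that contains it.
    nu = 0.5 * (tau[:-1] + tau[1:])
    j = np.clip(np.searchsorted(tau, r) - 1, 0, len(nu) - 1)
    return nu[j]

rng = np.random.default_rng(1)
M, eta2 = 8, 0.25                             # quantization levels, channel-noise variance
tau = np.linspace(-1.0, 9.0, M + 1)           # illustrative cell boundaries tau_1..tau_{M+1}
R = rng.uniform(0.0, 8.0, 10)                 # sensor outputs R_k
q = uniform_quantizer(R, tau)                 # q_k = q(R_k)
Z = q + rng.normal(0.0, np.sqrt(eta2), 10)    # Z_k = q_k + N_k, as seen by the FC
```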
Given the noisy measurements and the relative location of the sensors in the network, the task of the FC is to estimate the vector parameter $\mathbf{\theta}.$ A block diagram of the distributed network used for estimation of parameters of a physical field is shown in Fig. \ref{fig:block_diagram}.
In this work we adopt a maximum likelihood (ML) estimation approach to solve the problem of distributed parameter estimation. The joint likelihood function of the independent quantized noisy measurements $Z_1, Z_2, \ldots, Z_K$ can be written as
\begin{eqnarray}
l(\mathbf{Z}) & = & \sum^{K}_{k =1} \log \left(\sum^{M}_{j =1}\ p_{k,j} \exp \left( - \frac{ \left( Z_k - \nu_j \right)^2 }{2\eta^2 } \right)\right),
\label{eq:incomplete_likelihood}
\end{eqnarray}
where $\mathbf{Z}$ is the vector of measurements $[Z_1,Z_2,\ldots,Z_K]^T,$ $p_{k,j}$ are the probabilities for the output of the sensor $k$ to be mapped to the $j$-th reproduction point during the encoding process
\[ p_{k,j} =\int_{\tau_j}^{\tau_{j+1}} \frac{1}{\sqrt{2\pi\sigma^2} } \exp \left( -\frac{ \left( t -G_k \right)^2 }{2\sigma^2 } \right)dt,\]
and $\tau_j,$ $\tau_{j+1},$ $j=1,\ldots,M,$ are the boundaries of the $j$-th quantization region. The ML solution $\mathbf{\hat{\theta}}$ is the solution that maximizes the expression (\ref{eq:incomplete_likelihood}). For the numerical example in Sec. \ref{sec:Numerical_Analysis}, the field is modeled as a Gaussian bell with three unknown parameters: the strength of the field $\mu$ and the location parameter $(x_c,y_c).$
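The log-likelihood (\ref{eq:incomplete_likelihood}) can be evaluated directly from these definitions; a sketch (the boundary values and input data are hypothetical, and $p_{k,j}$ is computed via the Gaussian CDF):

```python
from math import erf, exp, log, sqrt

def log_likelihood(Z, G, tau, sigma2, eta2):
    # l(Z) = sum_k log sum_j p_{k,j} exp(-(Z_k - nu_j)^2 / (2 eta^2)),
    # with p_{k,j} the Gaussian probability of the j-th quantization cell.
    Phi = lambda t: 0.5 * (1.0 + erf(t / sqrt(2.0)))   # standard normal CDF
    nu = [0.5 * (tau[j] + tau[j + 1]) for j in range(len(tau) - 1)]
    total = 0.0
    for zk, gk in zip(Z, G):
        p = [Phi((tau[j + 1] - gk) / sqrt(sigma2)) - Phi((tau[j] - gk) / sqrt(sigma2))
             for j in range(len(nu))]
        total += log(sum(pj * exp(-(zk - nj)**2 / (2.0 * eta2))
                         for pj, nj in zip(p, nu)))
    return total
```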
\section{Iterative Solution}
\label{sec:Iterative Solution}
Since the expression for the log-likelihood function (\ref{eq:incomplete_likelihood}) is highly nonlinear in the unknown parameters, we develop an iterative solution to the problem. We first formulate a set of Expectation-Maximization (EM) iterations \cite{Dempster77} and then apply Newton's method to solve for the unknown parameters.
\subsection{Expectation Maximization Solution}
We select the pairs of random variables $(R_k,N_k)$, $k=1,2, \ldots, K$ as complete data. The complete data log-likelihood, denote it by $l_{cd}(\cdot)$, is given by
\begin{multline}
l_{cd}(\mathbf{R},\mathbf{N})= - \frac{ 1 }{2\sigma^2 }\sum_{i=1}^{K}(R_i-G_i)^2 \\+{\text{terms that do not depend on }}\mathbf{\theta}.
\end{multline}
The measurements $Z_i,$ $i=1,\ldots,K,$ form the incomplete data. The mapping from the complete data space to the incomplete data space is given by $Z_k=q(R_k)+N_k$, where $q(\cdot)$ is a known quantization function.
Denote by $\hat{\mathbf{\theta}}^{(k)}$ an estimate of the vector $\mathbf{\theta}$ obtained at the $k$-th iteration. To update the estimates of parameters we alternate the expectation and maximization steps. During the expectation step, we evaluate the conditional expectation of the complete data log-likelihood:
\begin{eqnarray}
Q^{(k+1)}& = &E\left[ \left. - \frac{ 1 }{2\sigma^2 }\sum^{K}_{i =1}(R_i-G_i)^2 \right| {\mathbf{Z},\hat{\theta }^{(k)}}\right],
\label{eq:E-step}
\end{eqnarray}
where the expectation is with respect to the conditional probability density function of the complete data, given the incomplete data (measurements) and the estimates of the parameters at the $k$-th iteration. During the maximization step we maximize (\ref{eq:E-step}):
\begin{multline}
\frac{dQ^{(k+1)}}{d\theta_t} = E\left[ \left. \frac{ 1 }{\sigma^2 }\sum^{K}_{i =1}(R_i-G_i)\frac{dG_i}{d\theta_t} \right| {\mathbf{Z},\hat{\theta }^{(k)}}\right]\Big|_{\hat{\theta }^{(k+1)}}
=0,\\\text{ $ t$}=1,\ldots,L.
\label{eq:M-step}
\end{multline}
To find the conditional expectation, we note that the conditional probability density function (p.d.f.) of $Z_i,$ given $R_i,$ is Gaussian with mean $q(R_i)$ and variance $\eta^2,$ and the p.d.f. of $R_i$ is Gaussian with mean $G_i$ and variance $\sigma^2.$ We also note that at the $k$-th iteration the conditional p.d.f. of $R_i,$ given $Z_i,$ implicitly involves the estimates of the parameters obtained at the $k$-th iteration.
Denote by $G_i^{(k)}$ the estimate of the field $G(x,y)$ at the location $(x_i,y_i)$ with the vector of parameters $\mathbf{\theta}$ replaced by their estimates $\hat{\mathbf{\theta}}^{(k)}.$ Then
the final expression for the iterative evaluation of the unknown parameters can be written as
\begin{multline}
\sum^{K}_{i =1}\frac{dG_i^{(k+1)}}{d\theta_t}A(G_i^{(k)})-\sum^{K}_{i =1}G_i^{(k+1)}\frac{dG_i^{(k+1)}}{d\theta_t}B(G_i^{(k)})
=0, \\ t=1,\ldots,L,
\label{eq:structure}
\end{multline}
where
\begin{equation}
\begin{multline}
A(G_i^{(k)})=\sum_{j=1}^M \frac{\exp\left(-\frac{(z_i-\nu_j)^2}{2\eta^2}\right)}{f_{Z_i}^{(k)}(z_i)\sqrt{2 \pi \eta^2}} \left( \sqrt{\frac{\sigma^2}{2\pi}} e^{-\frac{(\tau_j-G_i^{(k)})^2}{2\sigma^2}} \right. \\
\left. - \sqrt{\frac{\sigma^2}{2\pi}} e^{-\frac{(\tau_{j+1}-G_i^{(k)})^2}{2\sigma^2}} + G_i^{(k)} \Delta Q^{(k)}(j,i)\right),
\end{multline}
\begin{equation}
B(G_i^{(k)})=\sum^{M}_{j =1}\frac{ \exp \left(-\frac{(z_i-\nu_j)^2 }{2\eta^2 }\right)}{f_{Z_i}^{(k)}(z_i)\sqrt{2\pi\eta^2}}\Delta Q^{(k)}(j,i),
\end{equation}
with $\Delta Q^{(k)}(j,i)= Q\left(\frac{\tau_j-G_i^{(k)}}{\sigma}\right)-Q\left(\frac{\tau_{j+1}-G_i^{(k)}}{\sigma}\right)$ and \[f_{Z_i}^{(k)}(z_i)=\int f^{(k)}(z_i|r)f^{(k)}(r)dr.\]
Here $Q(\cdot)$ denotes the Gaussian Q-function.
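These quantities translate directly into code; the sketch below (helper names are illustrative, and $f_{Z_i}^{(k)}(z_i)$ is treated as precomputed) evaluates $\Delta Q^{(k)}(j,i)$ and the sums $A(G_i^{(k)})$ and $B(G_i^{(k)})$ for a single sensor:

```python
from math import erfc, exp, pi, sqrt

def Qfun(t):
    # Gaussian Q-function: Q(t) = P[N(0,1) > t].
    return 0.5 * erfc(t / sqrt(2.0))

def delta_Q(j, G, tau, sigma):
    # Delta Q^{(k)}(j,i) = Q((tau_j - G)/sigma) - Q((tau_{j+1} - G)/sigma).
    return Qfun((tau[j] - G) / sigma) - Qfun((tau[j + 1] - G) / sigma)

def A_and_B(z, G, tau, nu, sigma, eta, f_z):
    # Accumulate A(G_i^{(k)}) and B(G_i^{(k)}) over the M quantization cells;
    # f_z stands for the (assumed precomputed) density f_{Z_i}^{(k)}(z_i).
    A = B = 0.0
    for j in range(len(nu)):
        w = exp(-(z - nu[j])**2 / (2.0 * eta**2)) / (f_z * sqrt(2.0 * pi * eta**2))
        dQ = delta_Q(j, G, tau, sigma)
        c = sqrt(sigma**2 / (2.0 * pi))
        A += w * (c * exp(-(tau[j] - G)**2 / (2.0 * sigma**2))
                  - c * exp(-(tau[j + 1] - G)**2 / (2.0 * sigma**2))
                  + G * dQ)
        B += w * dQ
    return A, B
```

When the field value sits midway between symmetric boundaries, the exponential differences cancel pairwise and $A$ reduces to $G_i^{(k)}B$, a convenient sanity check.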
\subsection{Linearization}
\label{sec:Linearization}
The equations (\ref{eq:structure}) are nonlinear in $\hat{\mathbf{\theta}}^{(k+1)}$ and have to be solved numerically at each EM iteration. To simplify the solution, we linearize the expression in (\ref{eq:structure}) by means of Newton's method. Denote by $\mathbf{F}(\theta^{(k+1)})$ the vector form of the left-hand side of (\ref{eq:structure}), which is a mapping from $\mathbf{\theta}^{(k+1)}$ to the range of $\mathbf{F}(\theta^{(k+1)})$. Let $J\left(\theta_n^{(k+1)}\right)$ be the Jacobian of the mapping, where the index $n$ indicates the iteration of the Newton's solution. The Newton iterate $\theta_{n+1}^{(k+1)}$ then solves the following linearized equation:
\begin{eqnarray}
J\left(\theta_n^{(k+1)}\right)\left(\theta_{n+1}^{(k+1)}-\theta_{n}^{(k+1)}\right)=-F \left( \theta_n^{(k+1)}\right).
\end{eqnarray}
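Each inner Newton step therefore amounts to solving one linear system. The generic sketch below (with a toy mapping in place of the actual $\mathbf{F}$, whose evaluation requires the EM quantities above) illustrates the update:

```python
import numpy as np

def newton_step(F, J, theta):
    # One Newton iteration: solve J(theta) d = -F(theta) and update theta.
    return theta + np.linalg.solve(J(theta), -F(theta))

# Toy mapping F(theta) = theta^2 - [4, 9], with diagonal Jacobian; root is [2, 3].
F = lambda t: t**2 - np.array([4.0, 9.0])
J = lambda t: np.diag(2.0 * t)
theta = np.array([1.0, 1.0])
for _ in range(20):
    theta = newton_step(F, J, theta)
```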
\section{Numerical Analysis}
\label{sec:Numerical_Analysis}
In this section, the performance of the distributed ML estimator in (\ref{eq:structure}) is demonstrated on simulated data. A distributed network of $K$ sensors is formed by positioning sensors at random over an area $A$ of size $8 \times 8.$ The location of each sensor is noted. A Gaussian field shown in Fig. \ref{fig:Gaussian_field} is sampled at the location of the $i$-th sensor, $i=1,\ldots,K$ and a sample of randomly generated Gaussian noise with mean zero and variance $\sigma^2$ is added to each field measurement. In our simulations, $K$ is varied from $5$ to $200$ and $\sigma^2$ is selected such that the total signal-to-noise ratio (SNR) of the local observations defined as
\begin{equation}
SNR_O = \frac{ {\int \int}_A G^2(x,y:\theta)dxdy}{A \sigma^2}
\end{equation}
is $15$ dB. Each sensor observation is quantized to one of $M$ levels using a uniform deterministic quantizer. We set the number of quantization levels to $M=8$ and the quantization step to $8.$ $K$ parallel white Gaussian noise channels add samples of noise with variance $\eta^2$ selected to set the total SNR during data transmission defined as
\begin{equation}
SNR_C = \frac{ {\int \int}_A E\left[q^2(R(x,y:\theta))\right]dxdy}{A \eta^2}
\label{eq:SNR_C}
\end{equation}
to $15$ dB, and the FC observes the noisy quantized samples of the field. The function $q(R(x,y:\theta))$ in (\ref{eq:SNR_C}) is a quantized version of $R(x,y:\theta)=G(x,y:\theta)+W.$
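Solving these SNR definitions for the noise variance gives the calibration used in the simulations. A sketch (the grid-based approximation of the area integral and the Gaussian-bell parameters of Sec. \ref{sec:Numerical_Analysis} are assumptions):

```python
import numpy as np

def noise_var_for_snr(mean_signal_power, snr_db):
    # Invert SNR = mean power / variance: sigma^2 = mean power / 10^(SNR_dB/10).
    return mean_signal_power / 10.0**(snr_db / 10.0)

# (1/A) * integral of G^2 over the 8x8 area, approximated on a grid:
x, y = np.meshgrid(np.linspace(0, 8, 200), np.linspace(0, 8, 200))
G2 = (8.0 * np.exp(-((x - 4.0)**2 + (y - 4.0)**2) / 8.0))**2
sigma2 = noise_var_for_snr(G2.mean(), snr_db=15.0)   # sensor-noise variance for SNR_O = 15 dB
```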
\begin{figure}[!t]
\centering
\includegraphics[width=0.9\linewidth]{gaussian_field}
\caption{Squared Gaussian field located at $(x_c,y_c)=(4,4)$ with peak parameter $4$ and variance $4.$ }
\label{fig:Gaussian_field}
\end{figure}
\begin{figure}[!t]
\centering
\includegraphics[width=0.9\linewidth]{square_difference3}
\caption{The squared difference between the original and reconstructed fields. }
\label{fig:Difference_field}
\end{figure}
First, we illustrate the convergence of the EM algorithm. The value of the ML estimate as a function of the iteration number is shown in Figs. \ref{fig:Peak_parameter}, \ref{fig:X_location} and \ref{fig:Y_location} for the peak value of the field, its $x$-location and its $y$-location, respectively. This illustration is based on a single realization of the distributed network with $K=10$ and $M=8.$ We can observe that with the initial values $9$ for the peak of the field, $3$ for the $x$-location and $3$ for the $y$-location, the algorithm takes about $600$ EM iterations to converge to the final values $7.90,$ $3.88,$ and $3.88,$ respectively. The true values of these parameters are $8,$ $4,$ and $4.$ The discrepancy between the vectors of estimated and true parameters is due to a low sensor density in the network, relatively rough quantization, and the distortions due to sensor and channel noise.
\begin{figure}[!t]
\centering
\includegraphics[width=1.9in]{Peak_parameter}
\caption{Illustration of the EM convergence for $M=8.$ Peak parameter.}
\label{fig:Peak_parameter}
\end{figure}
\begin{figure}[!t]
\centering
\includegraphics[width=1.9in]{X_location}
\caption{ Illustration of the EM convergence for $M=8.$ X-location.}
\label{fig:X_location}
\end{figure}
\begin{figure}[!t]
\centering
\includegraphics[width=1.9in]{Y_location}
\caption{ Illustration of the EM convergence for $M=8.$ Y-location.}
\label{fig:Y_location}
\end{figure}
The square distance per pixel between the original and reconstructed Gaussian fields is displayed in Fig. \ref{fig:Difference_field}.
To further analyze the estimation performance, we evaluate the mean square error (MSE) between the estimated and true location parameters. The MSE is evaluated numerically by means of 1000 Monte Carlo simulations. Each vector of estimated parameters is substituted back into the expression for the parametric field, and an integrated square error (ISE) between the true and estimated fields is evaluated. The ISE is defined as
\begin{equation}
ISE= \frac{{\int \int}_A |\hat{G}(x,y)-G(x,y)|^2 dxdy}{{\int \int}_A |G(x,y)|^2 dxdy}.
\end{equation}
The ISE statistically averaged over 1000 Monte Carlo simulations is an approximation to integrated mean square error (IMSE).
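On a spatial grid the ISE reduces to a ratio of two sums; a sketch (using the Gaussian-bell field and the converged estimates quoted above; the grid resolution is an assumption):

```python
import numpy as np

def ise(G_hat, G):
    # Normalized integrated square error between the reconstructed and true
    # fields, approximated on a common spatial grid (the grid-cell area
    # cancels in the ratio).
    return np.sum(np.abs(G_hat - G)**2) / np.sum(np.abs(G)**2)

x, y = np.meshgrid(np.linspace(0, 8, 100), np.linspace(0, 8, 100))
bell = lambda mu, xc, yc: mu * np.exp(-((x - xc)**2 + (y - yc)**2) / 8.0)
err = ise(bell(7.90, 3.88, 3.88), bell(8.0, 4.0, 4.0))  # estimates vs. true parameters
```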
\begin{figure}[!t]
\begin{center}
\includegraphics[width=0.9\linewidth]{MSE3}
\end{center}
\vspace{-0.5cm}
\caption{A box plot of the SE between the estimated and true location of the object displayed as a function of the number of sensors distributed over the area $A.$ The number of quantization levels is set to $M=8.$}
\label{fig:MSE_box_plot}
\end{figure}
\begin{figure}[!t]
\begin{center}
\includegraphics[width=0.9\linewidth]{IMSE2}
\end{center}
\vspace{-0.5cm}
\caption{Dependence of the simulated ISE on the number of sensors distributed over the area $A$ for the case of $M=8.$}
\label{fig:box_plot_quantized}
\end{figure}
The dependence of the MSE on the number of sensors, $K,$ in the distributed network for the case of $M=8$ quantization levels is shown in Fig. \ref{fig:MSE_box_plot}.
The dependence of the IMSE on the number of sensors (sensor density) in the distributed network for the same value of $M$ is displayed in Fig. \ref{fig:box_plot_quantized}. The number of sensors distributed over the area $A$ is varied from $5$ to $200$ with the step $5.$ Each box in Fig. \ref{fig:MSE_box_plot} and Fig. \ref{fig:box_plot_quantized} is generated using $1000$ Monte Carlo realizations of the network and EM runs. The central mark in each box is the median. The edges of the box present the $25$th and $75$th percentiles. The dashed vertical lines mark the data that extend beyond the two percentiles but are not considered outliers. The outliers are plotted individually and marked with a ``+'' sign. The percentage of outliers due to divergence of the EM algorithm is depicted in Fig. \ref{fig:P_outliers_MSE}. Note the large percentage of outliers for small values of $K$ ($K=10, 15, 20$). These correspond to cases in which one of the three parameters did not converge to its true value.
The results indicate that location estimation and field reconstruction of relatively good quality are possible with $M=8$ and the number of sensors equal to or exceeding $20.$
\begin{figure}[!t]
\begin{center}
\includegraphics[width=0.9\linewidth]{P_outliers_vs_threshold}
\end{center}
\vspace{-0.5cm}
\caption{Probability of outliers $P_{outliers}(\tau)=P[SE > \tau]$ (expressed in percents) as a function of $\tau.$ The plot is based on $1000$ Monte Carlo simulations.}
\label{fig:P_outliers_MSE}
\end{figure}
\begin{figure}[!t]
\begin{center}
\includegraphics[width=0.9\linewidth]{Probabilityofoutliers}
\end{center}
\vspace{-0.5cm}
\caption{Probability of outliers (expressed in percents) as a function of the threshold for different values of SNR in observation and transmission channels.}
\label{fig:P_outliers_SNR}
\end{figure}
Fig. \ref{fig:P_outliers_SNR} compares the percentage of outliers plotted as a function of a varying threshold for three different combinations of $SNR_O$ and $SNR_C.$ Note that for $M=8$ the effect of the SNR in the observation channel is more pronounced than that of the SNR in the transmission channel. The case of high $SNR_O=20$ dB and low $SNR_C=10$ dB is preferred by the estimator compared to the case of low $SNR_O=10$ dB and high $SNR_C=20$ dB.
\begin{figure}[!t]
\begin{center}
\includegraphics[width=0.9\linewidth]{SE_M16}
\end{center}
\caption{A box plot of the SE between the estimated and true location of the object displayed as a function of the number of sensors distributed over the area $A.$ The number of quantization levels is set to $M=16.$}
\label{fig:SE_M16}
\end{figure}
\begin{figure}[!t]
\begin{center}
\includegraphics[width=0.9\linewidth]{ISE_M16}
\end{center}
\caption{Dependence of the simulated ISE on the number of sensors distributed over the area $A.$ The number of quantization levels is set to $M=16.$}
\label{fig:ISE_M16}
\end{figure}
\begin{figure}[!t]
\begin{center}
\includegraphics[width=0.9\linewidth]{SE_M32}
\end{center}
\caption{A box plot of the SE between the estimated and true location of the object displayed as a function of the number of sensors distributed over the area $A.$ The number of quantization levels is set to $M=32.$}
\label{fig:SE_M32}
\end{figure}
\begin{figure}[!t]
\begin{center}
\includegraphics[width=0.9\linewidth]{ISE_M32}
\end{center}
\caption{Dependence of the simulated ISE on the number of sensors distributed over the area $A.$ The number of quantization levels is set to $M=32.$}
\label{fig:ISE_M32}
\end{figure}
A set of box plots showing the dependence of the SE and the ISE on the number of sensors distributed over the area $A$ for $M=16$ and $M=32$ has also been generated. The results are similar to those for the case of $M=8,$ with the difference that the number of outliers as a function of the threshold decays to zero faster (see Figs. \ref{fig:SE_M16}, \ref{fig:ISE_M16}, \ref{fig:SE_M32}, and \ref{fig:ISE_M32} for illustration).
\section{Summary}
\label{sec:Summary}
In this paper, a distributed ML estimation procedure for estimating a parametric physical field is formulated. An iterative linearized EM solution is presented and numerically evaluated. The model of the network assumed (1) independent Gaussian sensor and transmission noise; (2) quantization of sensory data prior to transmission; and (3) parametric function estimation at the FC. The stability of the EM algorithm has been evaluated for three different combinations of $SNR_O$ and $SNR_C.$ The results show that for a small number of quantization levels (when the quantization error is large) $SNR_O$ dominates $SNR_C$ in terms of its effect on the performance of the estimator. Also, when the sensor network is sparse ($K=10, 15, 20$), the EM algorithm produces a substantial number of outliers. Denser networks ($K>20$) are more stable in terms of reliable parameter estimation. A similar analysis has been performed for $M=16$ and $M=32.$
In the future, we plan to analyze the estimation abilities of the network at low SNR values and to develop a Cram\'er-Rao bound on the estimated parameters.
\appendices
\section{}
This section provides details leading to the equation (\ref{eq:structure}). Consider the $i$-th term under the sum in (\ref{eq:M-step}):
\[ E\left[ \left. (r_i-G_i)\frac{\partial G_i}{\partial \theta_t} \right| z_i,\hat{\theta}^{(k)} \right] \]
\[ = \int_{-\infty}^{+\infty} (r_i-G_i) \frac{\partial G_i}{\partial \theta_t} \frac{\exp\left( -\frac{(r_i-G_i^{(k)})^2}{2\sigma^2}\right)}{f_{Z_i}^{(k)}(z_i) \sqrt{2\pi\sigma^2}} \]
\[ \times \frac{\exp\left(-\frac{(z_i-q^{(k)}(r_i))^2}{2 \eta^2}\right)}{\sqrt{2\pi \eta^2}} dr_i \]
\[ = \sum_{j=1}^M \int_{\tau_j}^{\tau_{j+1}} (r_i-G_i) \frac{\partial G_i}{\partial \theta_t} \frac{\exp\left(-\frac{(r_i-G_i^{(k)})^2}{2\sigma^2}\right)}{f_{Z_i}^{(k)}(z_i) \sqrt{2\pi\sigma^2}} \]
\[ \times \frac{\exp\left(-\frac{(z_i-\nu_j)^2}{2 \eta^2}\right)}{\sqrt{2\pi \eta^2}} dr_i \]
\[ = \sum_{j=1}^M \frac{\exp\left(-\frac{(z_i-\nu_j)^2}{2 \eta^2}\right)}{f_{Z_i}^{(k)}(z_i) \sqrt{2\pi \eta^2}} \frac{\partial G_i}{\partial \theta_t} \]
\[ \times \int_{\tau_j}^{\tau_{j+1}} (r_i-G_i)\frac{\exp\left(-\frac{(r_i-G_i^{(k)})^2}{2\sigma^2}\right)}{\sqrt{2\pi\sigma^2}} dr_i.\]
Note that the difference $(r_i-G_i)$ in the last integral can be rewritten as $(r_i-G_i^{(k)}+G_i^{(k)}-G_i).$ Then
\[E\left[ \left. (r_i-G_i)\frac{\partial G_i}{\partial \theta_t} \right| z_i,\hat{\theta}^{(k)} \right] = \sum_{j=1}^M \frac{\exp\left(-\frac{(z_i-\nu_j)^2}{2\eta^2}\right)}{f_{Z_i}^{(k)}(z_i) \sqrt{2\pi\eta^2}} \frac{\partial G_i}{\partial \theta_t} \]
\[ \times \left\{ \frac{1}{\sqrt{2 \pi \sigma^2}} \int_{\tau_j}^{\tau_{j+1}} \exp \left( -\frac{(r_i-G_i^{(k)})^2}{2\sigma^2} \right) d \frac{(r_i-G^{(k)}_i)^2}{2} \right. \]
\[ \left. + (G^{(k)}_i-G_i) \frac{1}{\sqrt{2\pi \sigma^2}} \int_{\tau_j}^{\tau_{j+1}} \exp \left(-\frac{(r_i-G_i^{(k)})^2}{2 \sigma^2}\right) dr_i \right\}. \]
Replacing the last integral with a difference of two Q-functions $Q\left(\frac{\tau_j-G_i^{(k)}}{\sigma}\right)$ and $Q\left(\frac{\tau_{j+1}-G_i^{(k)}}{\sigma}\right)$ we obtain:
\[ \sum_{i=1}^K E\left[ \left. (r_i-G_i)\frac{\partial G_i}{\partial \theta_t} \right| z_i,\hat{\theta}^{(k)} \right] = \sum_{i=1}^K \sum_{j=1}^{M} \frac{\exp\left(-\frac{(z_i-\nu_j)^2}{2\eta^2}\right)}{f_{Z_i}^{(k)}(z_i) \sqrt{2\pi\eta^2}} \frac{\partial G_i}{\partial \theta_t} \]
\[\times \left\{ \frac{\sigma^2}{\sqrt{2 \pi \sigma^2}} \left\{ \exp \left( -\frac{(\tau_j-G_i^{(k)})^2}{2\sigma^2}\right) - \exp \left(-\frac{(\tau_{j+1}-G_i^{(k)})^2}{2\sigma^2}\right) \right\} \right. \]
\[ \left. + (G_i^{(k)}-G_i) \left\{ Q\left(\frac{\tau_j-G_i^{(k)}}{\sigma}\right) \right. \right. \]
\[ \left. \left. \left.- Q\left(\frac{\tau_{j+1}-G_i^{(k)}}{\sigma}\right) \right\} \right\} \right|_{G_i=G_i^{(k+1)}} = 0.\]
\ifCLASSOPTIONcaptionsoff
\newpage
\fi
\balance
\section{Introduction}
Quantum state estimation \cite{PR2004b,H1997,B-K2010} remains one of the hot topics in the field of quantum information processing. The hope to recover each element in the density matrix, however, is impeded by the exponential growth of the number of matrix elements with the number of qubits, and the concomitant exponential growth in time and memory required to compute and store the density matrix. The task can become intimidating when 14 qubits are involved \cite{MSBCNCHHHB2010}, and so efforts have been made to simplify quantum state tomography. One such effort focused on states that have high purity \cite{GLFBE2010} so that the size of the state space shrinks significantly (from ${\cal O}(D^2)$ to ${\cal O}(D)$ for a system described by a $D$ dimensional Hilbert space). Given that the measurement record is used to verify the assumptions made initially, this method avoids the trap of simplification through imposing {\em a priori} assumptions merely by {\em fiat}. Another recent effort \cite{CPFSGBL-CPL2010} in the same spirit considered multi-qubit states that are well represented by matrix product states \cite{KSZ1991,FNW1992,P-GVWC2007} (which require a number of parameters growing only polynomially with the number of qubits). Many states of interest, such as ground states of certain model Hamiltonians in condensed-matter physics, are of that form.
Crucially, the particular form of the state can be verified by the data.
Here we go one step further and allow, tentatively, {\em any} parametrized form for the density matrix of the quantum system to be tested, possibly containing just a few parameters. In fact, we may have several different tentative ideas of how our quantum state is best parameterized. The questions are then how the data reveal which of those descriptions work sufficiently well, and which description is best.
This idea corresponds to a well-developed field in statistics: model selection \cite{J1961b,BA2002b,Z2000}. All mathematical descriptions of reality are in fact models (and a quantum state, pure or mixed, is an excellent example of a model), and they can be evaluated by judging their performance relative to that of the true model (assuming it exists). In order to quantify this relative performance, we will make use of the Kullback-Leibler divergence (aka mutual information, aka cross entropy, aka relative entropy) \cite{KL1951}, which has the interpretation of the amount of information lost when a specific model is used rather than the true model.
Based on the minimization of the Kullback-Leibler divergence over different models, the Akaike Information Criterion (AIC) \cite{A1974} was developed as a ranking system so that models are evaluated with respect to each other, given measurement data. The only quantities appearing in the criterion are the maximum likelihood obtainable with a given model (i.e., the probability the observed data would occur according to the model, maximized over all model parameters), and the number of independent parameters of the model.
The minimization does not require any knowledge of the true model, only that the testing model is sufficiently close to the true model. The legitimate application of the AIC should, therefore, in principle be limited to ``good'' models, ones that include the true model (in our case, the exact quantum state that generated the data), at least to a very good approximation. However, this does not prevent one from resorting to the AIC for model evaluation when there is no such guarantee. In fact, Takeuchi studied the case where the true model does not belong to the model set and came up with a more general criterion, named the Takeuchi Information Criterion (TIC) \cite{T1976}. However, the term introduced by Takeuchi to counterbalance the bias of the maximum likelihood estimator used in the AIC requires estimating a $K\times K$ matrix ($K$ being the number of independent parameters used by a model) from the data, which, unfortunately, is prone to significant error. This reduces the overall charm and practical use of the TIC. Since in most cases the AIC is still a good approximation to the TIC \cite{BA2002b}, especially in the case of many data, we stick to the simpler and more robust criterion here.
Information criteria are designed to produce a relative (rather than absolute) ranking of models, so that fixing a reference model is convenient. Throughout this paper we choose the ``full-parameter model'' (FPM) as reference, that is, a model with just enough independent variables to fully parameterize the measurement on our quantum system. For tomographically complete measurements (discussed in detail in Sec.~\ref{sec_tomography}) the number of independent variables is given by the number of free parameters in the density matrix ($D^2-1$ for a $D$-dimensional Hilbert space). For tomographically incomplete measurements (see Sec.~\ref{sec_witness}), the number of independent variables of the FPM is smaller, and equals the number of independent observables. We will, in fact, not even need the explicit form of the FPM (which may be hard to construct for tomographically incomplete measurements), as its maximum possible likelihood can be easily upper-bounded.
We should note an important distinction between maximum likelihood estimation (MLE) \cite{BDPS1999}, a technique often used in quantum tomography, and the method of information criteria and model selection. MLE produces the state that fits the data best. Now the data inevitably contains (statistical) noise, and the MLE state predicts, incorrectly, that same noise to appear in future data. Information criteria, on the other hand, have been designed to find the model that best predicts future data, and tries, in particular, to avoid overfitting to the data, by limiting the number of model parameters. This is how a model with a few parameters can turn out to be the best predictive model, even if, obviously, the MLE state will fit the (past) data better.
We also note that information criteria have been applied mostly in areas of research outside of physics. This is simply due to the happy circumstance that in physics we tend to know what the ``true'' model underlying our observations is (or should be), whereas this is much less the case in other fields. Within physics, information criteria have been applied to astrophysics \cite{L2007}, where one indeed may not know the ``true'' model (yet), but also to the problem of entanglement estimation \cite{LvE2009}. In the latter case (and in quantum information theory in general) the problem is not that we do not know what the underlying model is, but that that model may contain far too many parameters. Hence the potential usefulness of information criteria. And as we recently discovered, the AIC has even been applied to quantum state estimation, not for the purpose of making it more efficient, but making it more accurate, by avoiding overfitting \cite{UNTMN2003}.
\section{The Akaike Information Criterion - A Schematic Derivation}
\label{sec_AIC}
Suppose we are interested in measuring certain variables, summarized as a vector $\mathbf{x}$, and their probability of occurrence as outcome of our measurement. We denote $f(\mathbf{x})$ as the probabilistic model that truthfully reflects reality (assuming for convenience that such a model exists) and $g(\mathbf{x}|\vec{\theta})$ as our (approximate) model characterized by one or more parameters, summarized as a vector $\vec{\theta}$. The models satisfy the normalization condition $\int {\rm d}\mathbf{x}f(\mathbf{x})=\int{\rm d}\mathbf{x}g(\mathbf{x}|\vec{\theta})=1$ for all $\vec{\theta}$. By definition, we say there is no information lost when $f(\mathbf{x})$ is used to describe reality. The amount of information lost when $g(\mathbf{x}|\vec{\theta})$ is used instead of the true model is defined to be the Kullback-Leibler divergence \cite{KL1951} between the model $g(\mathbf{x}|\vec{\theta})$ and the true model $f(\mathbf{x})$:
\begin{align}
I(f,g_{\vec{\theta}})=&\int {\rm d}\mathbf{x} f(\mathbf{x})\log(f(\mathbf{x}))\nonumber\\
&-\int {\rm d}\mathbf{x} f(\mathbf{x})\log(g(\mathbf{x}|\vec{\theta})).
\label{eq_Ifg_int}
\end{align}
Eq.~(\ref{eq_Ifg_int}) can be conveniently rewritten as
\begin{align}
I(f,g_{\vec{\theta}})=E_\mathbf{x}\left[\log(f(\mathbf{x}))\right]-E_\mathbf{x}\left[\log(g(\mathbf{x}|\vec{\theta}))\right],
\label{eq_Ifg_exp}
\end{align}
where $E_\mathbf{x}[\cdot]$ denotes the expectation value with respect to the true distribution $f(\mathbf{x})$.
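For discrete measurement outcomes the integrals above become sums, and the Kullback-Leibler divergence can be evaluated directly. Below is a minimal numerical sketch; the distributions $f$, $g_1$, $g_2$ are made-up illustrations, not data from this paper:

```python
import numpy as np

def kl_divergence(f, g):
    """Kullback-Leibler divergence I(f,g) = sum_x f(x) log(f(x)/g(x))
    for discrete distributions (in nats)."""
    f, g = np.asarray(f, float), np.asarray(g, float)
    mask = f > 0  # terms with f(x) = 0 contribute nothing
    return float(np.sum(f[mask] * np.log(f[mask] / g[mask])))

# A "true" model f and two candidate models: g1 (close to f), g2 (far):
f  = np.array([0.5, 0.3, 0.2])
g1 = np.array([0.45, 0.35, 0.20])
g2 = np.array([1/3, 1/3, 1/3])

loss1, loss2 = kl_divergence(f, g1), kl_divergence(f, g2)
```

The divergence vanishes only when $g=f$, and the candidate closer to the true distribution incurs the smaller information loss, which is exactly the quantity the AIC will estimate.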
We see that $\mathbf{x}$ is no longer a variable in the above estimator, as we integrated it out. The only variable that affects $I(f,g_{\vec{\theta}})$ is $\vec{\theta}$. Since the first term in Eq.~(\ref{eq_Ifg_int}) is irrelevant to the purpose of rank-ordering different models $g$ (not to mention that we cannot evaluate it when $f$ is not known), we only have to consider the second term. Suppose there exists $\vec{\theta}_0$ such that $g(\mathbf{x}|\vec{\theta}_0)=f(\mathbf{x})$ for every $\mathbf{x}$, that is, the true model is included in the model set. Note that for this to hold, $\vec{\theta}$ does not necessarily contain a number of parameters equal to the dimension of the system. To use a simpler notation without the integration over $\mathbf{x}$ we denote the second term in Eq.~(\ref{eq_Ifg_int}) (without the minus sign) as
\begin{align}
S(\vec{\theta}_0:\vec{\theta})=\int {\rm d}\mathbf{x} g(\mathbf{x}|\vec{\theta}_0)\log(g(\mathbf{x}|\vec{\theta})),
\label{eq_S_int}
\end{align}
where we have used $g(\mathbf{x}|\vec{\theta}_0)$ to represent the true model $f(\mathbf{x})$. The advantage of this estimator is that it can be approximated without knowing the true distribution $f(\mathbf{x})$. To do that we first consider the situation where $\vec{\theta}$ is close to $\vec{\theta}_0$. This assumption can be justified in the limit of large $N$, $N$ being the number of measurement records, since the model $\vec{\theta}$ ought to approach $\vec{\theta}_0$ asymptotically (assuming, for simplicity, $\vec{\theta}_0$ is unique). We know that $S(\vec{\theta}_0:\vec{\theta})$ must have a maximum when $\vec{\theta}=\vec{\theta}_0$, and we may then symbolically expand $S(\vec{\theta}_0:\vec{\theta})$ in the vicinity of $\vec{\theta}_0$ by
\begin{align}
S(\vec{\theta}_0:\vec{\theta})= S(\vec{\theta}_0:\vec{\theta}_0)&-\frac{1}{2}||\vec{\theta}-\vec{\theta}_0||_{\vec{\theta}_0}^2\nonumber\\
&+{\cal O}\left(||\vec{\theta}-\vec{\theta}_0||_{\vec{\theta}_0}^{3/2}\right),
\label{eq_S_theta0-theta}
\end{align}
where
\begin{align}
||\vec{\theta}-\vec{\theta}_0||_{\vec{\theta}_0}^2=\left(\vec{\theta}-\vec{\theta}_0\right)'\cdot\left.\frac{\partial^2S(\vec{\theta}_0:\vec{\theta})}{\partial\vec{\theta}^2}\right|_{\vec{\theta}=\vec{\theta}_0}\cdot\left(\vec{\theta}-\vec{\theta}_0\right)
\end{align}
denoting a \emph{squared length} derived from a metric defined at $\vec{\theta}_0$. It can be proved that, when $N$ is sufficiently large,
$||\vec{\theta}-\vec{\theta}_0||_{\vec{\theta}_0}^2$ approximately follows a $\chi_K^2$ distribution, with $K$ equal to the number of independent parameters used by the model $\vec{\theta}$. From the properties of the $\chi_K^2$ distribution, we know the average value of $||\vec{\theta}-\vec{\theta}_0||_{\vec{\theta}_0}^2$ will approach $K$.
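The statement that the squared length has average $K$ can be illustrated by sampling from a $\chi_K^2$ distribution; a small sketch (the value $K=5$ is an arbitrary illustration):

```python
import numpy as np

K = 5                           # number of independent model parameters
rng = np.random.default_rng(0)  # fixed seed for reproducibility
samples = rng.chisquare(df=K, size=200_000)
mean_sq_length = samples.mean() # approaches K for many samples
```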
The next step is to evaluate the estimator $S(\vec{\theta}_0:\vec{\theta}_0)$, where $\vec{\theta}_0$ is now considered a variable. Suppose we find the maximum likelihood estimate $\vec{\theta}_M$ from the measurement outcomes such that $S(\vec{\theta}_M:\vec{\theta}_M)$ is the maximum. Now $\vec{\theta}_M$ should also be close to the true model $\vec{\theta}_0$, when $N$ is sufficiently large. Therefore we can similarly expand $S(\vec{\theta}_0:\vec{\theta}_0)$ in the vicinity of $\vec{\theta}_M$ as
\begin{align}
S(\vec{\theta}_0:\vec{\theta}_0)= S(\vec{\theta}_M:\vec{\theta}_M)&-\frac{1}{2}||\vec{\theta}_0-\vec{\theta}_M||_{\vec{\theta}_M}^2\nonumber\\
&+{\cal O}\left(||\vec{\theta}_0-\vec{\theta}_M||_{\vec{\theta}_M}^{3/2}\right),
\label{eq_S_theta0-theta0}
\end{align}
where $||\cdot||_{\vec{\theta}_M}$ is a length defined as in Eq.~(\ref{eq_S_theta0-theta}), with the same statistical properties as $||\vec{\theta}-\vec{\theta}_0||_{\vec{\theta}_0}^2$, since $\vec{\theta}_0$ is related to $\vec{\theta}_M$ in the same way that $\vec{\theta}$ is related to $\vec{\theta}_0$, and $\vec{\theta}_M$ is very close to $\vec{\theta}_0$. Its average value therefore again approaches $K$, according to the $\chi_K^2$ distribution. Thus we are able to rewrite Eq.~(\ref{eq_S_int}) as
\begin{align}
S(\vec{\theta}_0:\vec{\theta})\approx S(\vec{\theta}_M:\vec{\theta}_M)-K.
\end{align}
We see that our target estimator $S(\vec{\theta}_0:\vec{\theta})$ is now evaluated from the MLE solution $\vec{\theta}_M$ alone (plus the number of parameters $K$ of the model), with no knowledge of what the true model $f$ is. The approximation that underlies this convenience consists of two parts: expanding $S(\vec{\theta}_0:\vec{\theta})$ around its maximum at $\vec{\theta}_0$, and estimating $S(\vec{\theta}_0:\vec{\theta}_0)$ from the data via its optimum $\vec{\theta}_M$. The deviations from the respective maxima are equal on average (each contributing $K/2$) and result simply in the appearance of the constant $K$.
We now denote $\L_M=S(\vec{\theta}_M:\vec{\theta}_M)$, the logarithm of the maximum likelihood obtainable by our model with respect to a given set of measurement records. The AIC is then defined by
\begin{align}
{\rm AIC}=-2\L_M+2K.\label{eq_AIC}
\end{align}
Apart from the conventional factor 2, and a constant independent of the model $\vec{\theta}$, AIC is an estimator of the quantity in Eq.~(\ref{eq_Ifg_int}) we originally considered, that is, the Kullback-Leibler divergence between a model that is used to describe the true model and the true model itself. Therefore a given model is considered better than another if it has a {\em lower} value of AIC.
Finally, when $N$ is not yet large enough for the asymptotic relations to hold to a very good approximation, one
can include a correction factor in the AIC that takes the deviation from the asymptotic values into account. The corrected AIC gives rise to a slightly different criterion \cite{Sug1978}:
\begin{align}
{\rm AICc}=-2\L_M+2K+\frac{2K(K+1)}{N-K-1}.\label{eq_AICc}
\end{align}
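Both criteria are straightforward to evaluate once the maximum log-likelihood of each model is known. A minimal sketch, with hypothetical log-likelihood values chosen only for illustration:

```python
import numpy as np

def aic(log_max_likelihood, K):
    """AIC = -2 L_M + 2 K; the lower value wins."""
    return -2.0 * log_max_likelihood + 2 * K

def aicc(log_max_likelihood, K, N):
    """Small-sample corrected AIC; converges to the AIC as N grows."""
    return aic(log_max_likelihood, K) + 2.0 * K * (K + 1) / (N - K - 1)

# Toy comparison: a 1-parameter model fitting slightly worse than a
# 255-parameter full-parameter model (log-likelihoods are made up):
N = 1000
aic_simple = aic(-1380.0, K=1)    # -2*(-1380) + 2    = 2762
aic_fpm    = aic(-1250.0, K=255)  # -2*(-1250) + 510  = 3010
best = 'simple' if aic_simple < aic_fpm else 'FPM'
```

Here the one-parameter model wins despite its lower likelihood, because the FPM pays the penalty $2\times 255$ for its many parameters; the AICc correction is positive and vanishes as $N\to\infty$.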
\section{Results}
\label{sec_results}
\subsection{Dicke states}
\label{sec_Dicke}
We will apply the AIC to measurements on a popular family of entangled states, the Dicke states of four qubits \cite{D1954,HHRBCCKRRSBGDB2005,KSTSW2007,PCDLvEK2009,CGPvEK2010}. We simulate two different experiments: one tomographically complete, the other measuring an entanglement witness. We include imperfections of a simple type, and we investigate how model selection, according to the AIC, would work. We consider cases where we happen to guess the correct model, as well as cases where our initial guess is, in fact, incorrect.
We consider the four-qubit Dicke states with one or two excitations $\ket{D_4^{1,2}}$ (with the state $\ket{1}$ representing an excitation):
\begin{subequations}
\begin{align}
\ket{D_4^1}=&\left(\ket{0001}+\ket{0010}+\ket{0100}+\ket{1000}\right)/2,\\
\ket{D_4^2}=&\left(\ket{0011}+\ket{0101}+\ket{0110}+\ket{1001}\right.\nonumber\\
&+\left.\ket{1010}+\ket{1100}\right)/\sqrt{6}.
\end{align}
\end{subequations}
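These states are easy to build numerically as vectors of $2^4$ amplitudes. A minimal sketch (taking qubit 0 as the leftmost tensor factor is our own indexing convention):

```python
import numpy as np
from itertools import combinations

def dicke(n_qubits, n_exc):
    """Equal superposition of all computational-basis states of n_qubits
    qubits with exactly n_exc qubits in state |1>."""
    psi = np.zeros(2 ** n_qubits)
    for ones in combinations(range(n_qubits), n_exc):
        index = sum(1 << (n_qubits - 1 - q) for q in ones)
        psi[index] = 1.0
    return psi / np.linalg.norm(psi)

d41 = dicke(4, 1)  # |D_4^1>: 4 terms, amplitude 1/2 each
d42 = dicke(4, 2)  # |D_4^2>: 6 terms, amplitude 1/sqrt(6) each
```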
For simplicity, let us suppose that white noise is the only random noise in the state generation,
and that it corresponds to mixing of the ideal state with the maximally mixed state of the entire space (instead of the subspace with exactly one or two excitations, which could be a reasonable choice, too, depending on the actual implementation of the Dicke states). We thus write the states under discussion as
\begin{align}
\rho^{1,2}(\alpha)=(1-\alpha)\ket{D_4^{1,2}}\bra{D_4^{1,2}}+\alpha\openone/D,
\end{align}
where $\openone/D$ is the maximally mixed state for dimension $D=2^4$, and $0\leq \alpha\leq 1$. We will fix the actual state generating our data to be
\begin{align}
\rho_{\rm actual}^{1,2}=\rho^{1,2}(\alpha=0.2).
\end{align}
This choice is such that the mixed state is entangled (as measured by our multi-qubit version of the negativity, see below), even though the entanglement witness whose measurement we consider later in Sec.~\ref{sec_witness}, just fails to detect it.
For our first model (to be tested by AIC) we wish to pick a one-parameter model (so, $K=1$) that also includes a wrong guess.
A straightforward model choice, denoted by $M_{1\phi}$, is
\begin{align}
M_{1\phi}: \rho_\phi^{1,2}(q)=(1-q)\ket{\Psi_{\rm target}^{1,2}(\phi)}&\nonumber\\
\bra{\Psi_{\rm target}^{1,2}(\phi)}+&q\openone/D.
\label{eq_M1_SICPOVM}
\end{align}
We refer to the pure states appearing here as the {\em target} states $\ket{\Psi_{\rm target}^{1,2}(\phi)}$, simulating the case where we (possibly incorrectly) think we would be creating a pure state of that form, if only the white noise were absent ($q=0$). The phase $\phi$ is included not as a (variable) parameter of the model but as an inadvertently mis-specified property. In this case, it stands for us being wrong about a single relative phase in one of the qubits in state $|1\rangle$.
Without loss of generality we assume the first qubit in our representation to carry the wrong phase, and we write
\begin{subequations}
\begin{align}
\ket{\Psi_{\rm target}^1(\phi)}=&\frac{1}{2}\left(\ket{0001}+\ket{0010}+\ket{0100}\right.\nonumber\\
&+\left.e^{i\phi}\ket{1000}\right),\label{eq_tilde1}\\
\ket{\Psi_{\rm target}^2(\phi)}=&\frac{1}{\sqrt{6}}\left[\ket{0011}+\ket{0101}+\ket{0110}\right.\nonumber\\
+&\left.e^{i\phi}\left(\ket{1001}+\ket{1010}+\ket{1100}\right)\right].\label{eq_tilde2}
\end{align}
\label{eq_tilde}
\end{subequations}
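The effect of the mis-specified phase can be quantified by the overlap of the target state with the true Dicke state; for one excitation, $|\langle D_4^1|\Psi_{\rm target}^1(\phi)\rangle|^2=|(3+e^{i\phi})/4|^2$, which drops from $1$ at $\phi=0$ to $1/4$ at $\phi=\pi$. A short sketch (the basis indexing convention is our own assumption):

```python
import numpy as np

def target_state_1(phi):
    """|Psi_target^1(phi)>: |D_4^1> with a phase error e^{i phi}
    on the |1000> component (basis index 8, qubit 0 leftmost)."""
    psi = np.zeros(16, dtype=complex)
    psi[[1, 2, 4]] = 0.5             # |0001>, |0010>, |0100>
    psi[8] = 0.5 * np.exp(1j * phi)  # e^{i phi} |1000>
    return psi

d41 = target_state_1(0.0)  # phi = 0 recovers |D_4^1>
overlap = lambda phi: abs(np.vdot(d41, target_state_1(phi))) ** 2
```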
Alternatively, if we {\em do} consider this a two-parameter model (changing $K=1$ to $K=2$), then $\phi$ is variable, and we would optimize over $\phi$. In our case, this optimum value should always be close to $\phi=0$.
\begin{figure}[t]
\begin{center}
\subfigure[$M_{1\phi}$.]
{\includegraphics[width=195pt]{D41}}
\\
\subfigure[$M_{2\phi}$.]
{\includegraphics[width=195pt]{D41_2bitflip_1e4}}
\end{center}
\caption{{\em How AIC ranks the one- and two-parameter models vs. the full-parameter model (FPM):} \\
Plot of the difference between AIC values of our models and the FPM, \textsl{i.e.}, $-\Delta{\rm AIC}={\rm AIC(FPM)}-{\rm AIC}(M_{1\phi})$ or $-\Delta{\rm AIC}={\rm AIC(FPM)}-{\rm AIC}(M_{2\phi})$, for various numbers of SIC-POVM measurements, $N$, with $\ket{\Psi_{\rm target}^1}$ as the target state, as functions of the angle $\phi$. The horizontal line demarcates $\Delta{\rm AIC}=0$: points above (below) that line correspond to cases where the model with fewer (more) parameters is preferred. The figures with $\ket{\Psi_{\rm target}^2}$ as the target state look very similar (see FIG.~\ref{fig_AIC_M1_1e4_SICPOVM} for an example of this similarity).}
\label{fig_AIC_M1M2_SICPOVM}
\end{figure}
\subsection{Tomographically complete measurement}
\label{sec_tomography}
We first consider a tomographically complete measurement, in which a so-called SIC-POVM (symmetric informationally complete positive operator-valued measure \cite{RK-BSC2004}) with 4 outcomes is applied to each qubit individually. We first test our one-parameter model, and compare it to the FPM, which contains 255 ($=4^4-1$) parameters, the number of parameters needed to fully describe a general state of 4 qubits. Using definition Eq.~(\ref{eq_AIC}) we have
\begin{align}
{\rm AIC}(M_{1\phi})=-2\L_M(M_{1\phi})+2,
\label{eq_AIC_M1_SICPOVM}
\end{align}
since $K=1$ for $M_{1\phi}$. For the FPM we have
\begin{align}
{\rm AIC(FPM)} =-2\L_M({\rm FPM})+2\times 255,
\label{eq_AIC_FPM_SICPOVM}
\end{align}
where $\L_M({\rm FPM})$ is the log of the maximum likelihood obtainable by the FPM. The latter can be bounded from above by noting that the best possible FPM would generate probabilities that exactly match the actual observed frequencies of all measurement outcomes. In the following we will always use that upper bound, rather than the actual maximum likelihood. Even though it is possible to find the maximum likelihood state in principle (and even in practice for small enough Hilbert spaces), we are only concerned with the FPM's ranking according to the AIC, which does not require its density matrix representation. For $M_{1\phi}$ to beat the FPM we require
\begin{align}
-\Delta{\rm AIC}:={\rm AIC}({\rm FPM})-{\rm AIC}(M_{1\phi})>0.
\end{align}
This is a sufficient but not necessary requirement, as we use the above-mentioned upper bound to the FPM likelihood.
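The upper bound on $\L_M({\rm FPM})$ used here is the log-likelihood of the observed frequencies themselves, $\sum_k n_k\log(n_k/N)$: no model can assign the data a higher probability than one that reproduces the empirical frequencies exactly. A minimal sketch with made-up outcome counts:

```python
import numpy as np

def fpm_loglike_upper_bound(counts):
    """Upper bound on the FPM's maximum log-likelihood: the best any
    model can do is predict probabilities equal to the observed
    frequencies, giving sum_k n_k log(n_k / N)."""
    n = np.asarray(counts, float)
    N = n.sum()
    nz = n > 0
    return float(np.sum(n[nz] * np.log(n[nz] / N)))

# Hypothetical counts for a measurement with 4 outcomes (N = 1000):
counts = [480, 260, 160, 100]
bound = fpm_loglike_upper_bound(counts)

# Any specific model's log-likelihood stays below the bound, e.g. the
# uniform model p_k = 1/4:
loglike_uniform = float(np.sum(np.asarray(counts) * np.log(np.full(4, 0.25))))
```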
\begin{figure}[t]
\begin{center}
\includegraphics[width=195pt]{N1e4}
\end{center}
\caption{{\em Comparing single- and double-excitation Dicke states}: The difference between AICs of $M_{1\phi}$ and the FPM, \textsl{i.e.}, $-\Delta{\rm AIC}={\rm AIC(FPM)}-{\rm AIC}(M_{1\phi})$ for both target states, when $N=10000$, as functions of $\phi$. The horizontal line demarcates $\Delta{\rm AIC}=0$. }
\label{fig_AIC_M1_1e4_SICPOVM}
\end{figure}
We plot the difference $\Delta{\rm AIC}$ between the two rankings in FIG.~\ref{fig_AIC_M1M2_SICPOVM}(a) for various values of the number of measurements, and for various values of the phase $\phi$. We observe the following: The simple model
is, correctly, judged better than the FPM when the phase $\phi$ is sufficiently small.
The more measurements one performs, the smaller $\phi$ has to be for AIC to still declare the model superior to the FPM (i.e., for the points to stay above the solid line, at $\Delta{\rm AIC}=0$).
Although the correction to the AIC mentioned in Eq.~(\ref{eq_AICc}) is not very small for the FPM for $N=1000$, applying that correction still does not shift the second and third point below zero: that is, $N=1000$ measurements is still not sufficient for the AICc to recognize that $\phi=\pi/4$ and $\phi=\pi/2$ are incorrect guesses. One can argue about the cause of this: it could be that $N$ is just too small for the derivation of the AIC (or even the AICc) to be correct. Or it could be that the AIC ranking is unreliable because the assumption that the true model is included in the model set is violated. Or it could be that, even with a perfectly valid criterion (perhaps the TIC), the statistical noise present in the data would still be too large.
If we consider the phase $\phi$ as a second (variable) parameter (thus creating a two-parameter model), then we can give FIG.~1 a different interpretation: we would pick $\phi=0$ as the best choice, and we would increase $K$ by 1. The latter correction is small on the scale of the plots, and so we find the two-parameter model to be superior to the one-parameter model for any nonzero plotted value of $\phi$, and to the FPM.
This is a good illustration of
the following rather obvious fact:
even if one has the impression that a particular property of one's quantum source is (or ought to be) known, it still might pay off to represent that property explicitly as a variable parameter (at the small cost of increasing $K$ by 1), and let the data determine its best value.
\subsection{Cross modeling}
Suppose one picked a one-parameter model with a wrong (nonzero) value of $\phi$, and the AIC has declared the model to be worse than the FPM. How can one improve the model in a systematic way when one lacks a good idea of which parameters to add (we assume we already incorporated all parameters deemed important {\em a priori})? Apart from taking more and different measurements, one could use a hint from the existing data.
One method making use of the data is to apply ``cross modeling,'' where half the data is used to construct a modification to the model, and the remaining half is used for model validation, again by evaluating AIC on just that part of the data. So suppose $N$ measurements generate a data sequence $\mathcal{F}=\{f_1,f_2,...,f_N\}$. One takes, \textsl{e.g.}, the first $N/2$ data points, $\{f_1,...,f_{N/2}\}$, as the training set, and acquires the MLE state $\rho_{\rm MLE}$, or a numerically feasible approximation thereof, with respect to the training set. We then create a model with two parameters like so:
\begin{align}
M_{2\phi}:\rho_\phi(\epsilon,q)=(1-\epsilon)\left[(1-q)\rho_{\rm MLE}\right.&\nonumber\\
\left.+q\ket{\Psi_{\rm target}^{1,2}(\phi)}\bra{\Psi_{\rm target}^{1,2}(\phi)}\right]+\epsilon\openone/D.&
\label{eq_M2_SICPOVM}
\end{align}
For practical reasons $\rho_{\rm MLE}$ does not need to be strictly the MLE state, in particular when the dimension of the full parameter space is large. One would only require it to explain $\{f_1,...,f_{N/2}\}$ well enough to make sure that part of the data is properly incorporated in the model. Thus, one could, for example, use one of the numerical shortcuts described in \cite{KJ2009}. The rest of the data $\{f_{N/2+1},...,f_N\}$ is used to evaluate $M_{2\phi}$ against the FPM.
We note the resemblance of this procedure with the method of ``cross-validation'' \cite{SEJS1986}. In cross-validation one tries to find out how well a {\em given} predictive model performs by partitioning the data set into training set and validation set (exactly the same idea as given above). One uses multiple different partitions, and the results are averaged and optimized over those partitions. It can be shown \cite{S1977} that under certain conditions cross-validation and the AIC are asymptotically equivalent in model selection. This virtually exempts one from having to check multiple partitions of the data set, by applying the AIC to the whole data set.
It is worth emphasizing that what we do here is different in two ways. First, our model is not fixed but modified, based on information obtained from one half of the data. Second, we partition the data set only once, and the reason is, that it would be cheating to calculate the (approximate) MLE state of the full set of data (or, similarly, check many partitions and average), and then consider the resulting MLE state a parameter-free model.
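The cross-modeling procedure can be sketched generically: fit a correction on the first half of the data, then evaluate the AIC of the resulting model on the second half only. The toy Bernoulli instance below is purely illustrative (not the quantum model of this paper):

```python
import numpy as np

def cross_model_aic(data, fit_model, loglike, K):
    """Cross modeling: fit a K-parameter model on the first half of the
    data and compute its AIC on the held-out second half only."""
    half = len(data) // 2
    train, validation = data[:half], data[half:]
    params = fit_model(train)
    return -2.0 * loglike(params, validation) + 2 * K

# Toy instance: binary outcomes, one-parameter Bernoulli model whose
# parameter is the MLE on the training half.
rng = np.random.default_rng(1)
data = rng.random(1000) < 0.3  # simulated outcomes, p_true = 0.3

fit = lambda train: train.mean()  # Bernoulli MLE on the training set
ll = lambda p, v: float(v.sum() * np.log(p) + (len(v) - v.sum()) * np.log(1 - p))

aic_value = cross_model_aic(data, fit, ll, K=1)
```

The same skeleton applies here with `fit_model` returning the (approximate) MLE state from the first half and `loglike` the quantum-model likelihood of the second half.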
FIG.~\ref{fig_AIC_M1M2_SICPOVM}(b) shows results for $M_{2\phi}$ and SIC-POVM measurements. When the number of measurements is $N=1000$, all $M_{2\phi}$ models
are considered better than the FPM, regardless of the phase error $\phi$ assumed for the target state. The reason is that around $\phi=\pi/2$ the approximate MLE state obtained from the first half of the data is able to ``predict'' the measurement outcomes (including their large amount of noise!) on the second half better than the 1-parameter model with the wrong phase.
On the other hand, when $N=10000$ the AIC recognizes only the simple models with small phase errors ($\phi=0, \pi/8, \pi/4$) as better than the FPM.
So neither the approximate MLE state nor the 1-parameter model with the wrong phase performs well. This indicates how many measurements are needed to predict a single phase to a given precision.
\begin{figure}[t]
\begin{center}
\includegraphics[width=195pt]{borges_angle_N.pdf}
\end{center}
\caption{{\em How the AIC ranks our one-parameter model vs. the FPM for an entanglement witness measurement}:
The difference between AICs of $M_{1\phi}$ with $\ket{\Psi_{\rm target}^2(\phi)}$ and the FPM, \textsl{i.e.}, $-\Delta{\rm AIC}={\rm AIC(FPM)}-{\rm AIC}(M_{1\phi})$, for different numbers of witness measurements, as functions of $\phi$. The horizontal line demarcates $\Delta{\rm AIC}=0$.}
\label{fig_AIC_M1_witness}
\end{figure}
\subsection{Witness measurement}
\label{sec_witness}
For states that are close to symmetric Dicke states $\ket{D_N^{N/2}}$, their entanglement can be verified by using measurements that require only two different local settings, \textsl{e.g.}, spins (or polarizations) either {\em all} in the $x$-direction or {\em all} in the $y$-direction. In particular, when $N=4$, an efficient witness is $W_{J_{xy}}=7/2+\sqrt{3}-J_x^2-J_y^2$ \cite{T2007}, where $J_{x,y}=\sum_j\sigma_{x,y}^{(j)}/2$, with $\sigma_{x,y}^{(j)}$ the Pauli matrices for the $j$-th subsystem. This witness detects (by having a negative expectation value) Dicke states with a white-noise background, \textsl{i.e.}, $\rho(\alpha)=(1-\alpha)\ket{D_4^2}\bra{D_4^2}+\alpha\openone/D$, whenever $0\leq\alpha<0.1920$.
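The threshold $\alpha<0.1920$ can be checked directly: for $\rho(\alpha)$ one finds $\langle W_{J_{xy}}\rangle=\sqrt{3}-5/2+4\alpha$, which changes sign at $\alpha=(5/2-\sqrt{3})/4\approx0.1920$. A numerical sketch of this check:

```python
import numpy as np
from functools import reduce
from itertools import combinations

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
id2 = np.eye(2)

def collective_spin(sigma, n=4):
    """J = sum_j sigma^(j) / 2 for n qubits."""
    J = np.zeros((2**n, 2**n), dtype=complex)
    for j in range(n):
        J += reduce(np.kron, [sigma if k == j else id2 for k in range(n)]) / 2
    return J

Jx, Jy = collective_spin(sx), collective_spin(sy)
W = (7/2 + np.sqrt(3)) * np.eye(16) - Jx @ Jx - Jy @ Jy

# |D_4^2> and the white-noise state rho(alpha):
psi = np.zeros(16)
for ones in combinations(range(4), 2):
    psi[sum(1 << (3 - q) for q in ones)] = 1 / np.sqrt(6)

def witness_value(alpha):
    rho = (1 - alpha) * np.outer(psi, psi) + alpha * np.eye(16) / 16
    return float(np.real(np.trace(W @ rho)))

# <W> crosses zero at alpha = (5/2 - sqrt(3))/4 ~ 0.1920:
threshold = (5/2 - np.sqrt(3)) / 4
```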
\begin{figure}[t]
\begin{center}
\includegraphics[width=195pt]{WJxy.pdf}
\caption{{\em Does the witness $W_{J_{xy}}$ detect entanglement if there is a phase error?}: Witness performance $\langle W_{J_{xy}}\rangle$ for different states (defined as in Eq.~(\ref{eq_M1_witness})) as a function of $\phi$. A negative expectation value detects entanglement.}
\label{fig_WJxy}
\end{center}
\end{figure}
So we suppose we perform $N/2$ measurements on all of the four spins in the $x$-direction simultaneously, and another $N/2$ similar measurements in the $y$-direction. Instead of calculating the witness $W_{J_{xy}}$ and ending up with one single value determining entanglement, we make use of the full record of all individual outcomes in order to evaluate (and then maximize) likelihoods. For example, for the measurement of all four spins in the $x$-direction simultaneously, we can count the number of times they are projected onto the $\ket{x+x+x+x+}$ state, the $\ket{x+x+x+x-}$ state, etc. For either the $x$- or the $y$-direction, the number of independent observables (i.e., the number of independent joint expectation values) is 15, which can be seen as follows: Any density matrix of $M$ qubits can be expressed in terms of the expectation values of $4^M$ tensor products of the 3 Pauli operators and the identity $\openone$, but the expectation value of the product of $M$ identities equals 1 for any density matrix, thus leaving $4^M-1$ independent parameters encoded in a general density matrix. From having measured just $\sigma_x$ on all $M$ qubits, we can evaluate the expectation values of all operators that are tensor products of $\sigma_x$ and the identity. There are $2^M$ such products, and subtracting the trivial expectation value for $\openone^{\otimes M}$ leaves $2^M-1$ independent expectation values.
This means it only takes $2\times 15=30$ independent parameters to form the FPM, and we have $K=30$. Similar to the tomographically complete case, we do not need the concrete form of the whole $225\,(=255-30)$-dimensional manifold of MLE states, nor do we need to explicitly parameterize the 30-parameter FPM states, as we can simply upper bound the maximum likelihood for this model, $\L_M({\rm FPM})$, by noting that the best one could possibly do is reproduce exactly the observed frequencies of all possible measurement outcomes.
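The counting argument can be made explicit by enumerating the tensor-product strings of $\sigma_x$ and $\openone$; a trivial sketch:

```python
from itertools import product

M = 4  # number of qubits

# All tensor products of sigma_x and the identity over M qubits:
strings = list(product(['I', 'X'], repeat=M))

# The all-identity string has trivial expectation value 1, so each
# measurement setting yields 2^M - 1 independent observables:
independent_per_setting = len(strings) - 1

# Two settings (all-x and all-y) give the FPM parameter count K:
K_fpm = 2 * independent_per_setting
```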
\begin{figure}[t]
\begin{center}
\subfigure[$N=100$]{\includegraphics[width=195pt]{neg_dist1e2_1}}
\subfigure[$N=1000$]{\includegraphics[width=195pt]{neg_dist1e3_1}}
\caption{{\em How one quantifies entanglement from a witness measurement}:
The posterior probability distributions of $\mathcal{N}_0$ for different numbers of witness measurements from model $M_1(q)$ with target state $\ket{\Psi_{\rm target}^{2(1)}}$, where $\phi=0, \pi/6, \pi/3$. $\mathcal{N}_0(\rho_{\rm actual})=0.4770$. The prior distribution is assumed to be uniform on [0,1] for both $\epsilon$ and $q$. The distributions of $\mathcal{N}_1$ and $\mathcal{N}_2$ are similar (up to a simple shift).}
\label{fig_negdist_M1_witness}
\end{center}
\end{figure}
\subsection{Estimating entanglement}
Our state $\rho_{\rm actual}=\rho(\alpha=0.2)$ is just not detected by the witness $W_{J_{xy}}$, but still contains a considerable amount of entanglement.
We choose to quantify this entanglement by means of three entanglement monotones (of which only two are independent), simply constructed from all bipartite negativities.
If the four parties are denoted $A$, $B$, $C$ and $D$, the generalized negativities \cite{VW2002,SG-A2008,YvE2011} are defined as
\begin{subequations}
\begin{align}
\mathcal{N}_1=&\left(\mathcal{N}_{AB-CD}\mathcal{N}_{AC-BD}\mathcal{N}_{AD-BC}\right)^{1/3},\\
\mathcal{N}_2=&\left(\mathcal{N}_{A-BCD}\mathcal{N}_{B-CDA}\mathcal{N}_{C-DAB}
\mathcal{N}_{D-ABC}\right)^{1/4},\\
\mathcal{N}_0=&\left(\mathcal{N}_1^3 \mathcal{N}_2^4\right)^{1/7},
\end{align}
\end{subequations}
where $\mathcal{N}_{AB-CD}$ denotes the negativity with respect to partition $AB$ against $CD$, etc. The main advantage of the generalized negativities is that they are all efficiently computable directly from the density matrix.
We have for our state
$ \mathcal{N}_1=0.6293$, $ \mathcal{N}_2=0.3875$, and $ \mathcal{N}_0=0.4770$.
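The bipartite negativities entering these monotones follow from the partial transpose. Below is a minimal sketch using the convention $\mathcal{N}=(||\rho^{T_A}||_1-1)/2$, checked against the pure state $\ket{D_4^1}$, whose $A$-$BCD$ Schmidt coefficients $1/2$ and $\sqrt{3}/2$ give $\mathcal{N}=((1/2+\sqrt{3}/2)^2-1)/2=\sqrt{3}/4$:

```python
import numpy as np

def negativity(rho, sub, n=4):
    """Bipartite negativity N = (||rho^{T_A}||_1 - 1)/2, with the partial
    transpose taken over the qubits listed in `sub`."""
    t = rho.reshape([2] * (2 * n))
    for q in sub:              # swap the ket and bra index of qubit q
        t = t.swapaxes(q, q + n)
    rho_pt = t.reshape(2**n, 2**n)
    # rho^{T_A} is Hermitian, so its trace norm is the sum of |eigenvalues|:
    trace_norm = np.abs(np.linalg.eigvalsh(rho_pt)).sum()
    return (trace_norm - 1) / 2

# Pure |D_4^1> (qubit 0 leftmost; indexing convention is our own):
psi = np.zeros(16)
psi[[1, 2, 4, 8]] = 0.5
rho = np.outer(psi, psi)
n_a_bcd = negativity(rho, sub=[0])  # expected: sqrt(3)/4 ~ 0.4330
```

The generalized negativities above are then geometric means of such bipartite values over all partitions.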
Similarly to the tomographically complete case, we first consider the following one-parameter model:
\begin{align}
M_{1\phi}:~\rho_\phi(q)=(1-q)\ket{\Psi_{\rm target}^2(\phi)}&\nonumber\\
\bra{\Psi_{\rm target}^2(\phi)}+q&\openone/D,\label{eq_M1_witness}
\end{align}
where $\ket{\Psi_{\rm target}^2(\phi)}$ is defined in Eq.~(\ref{eq_tilde2}). The AICs for $M_{1\phi}$ and the FPM are
\begin{align}
{\rm AIC}(M_{1\phi})=&-2\L_M(M_{1\phi})+2,\label{eq_AIC_M1_witness}\\
{\rm AIC}({\rm FPM})=&-2\L_M({\rm FPM})+2\times30.\label{eq_AIC_FPM_witness}
\end{align}
FIG.~\ref{fig_AIC_M1_witness} shows that, as before, the marks above the horizontal solid line correspond to models deemed better than FPM. Compared to the case of full tomography (FIG.~\ref{fig_AIC_M1M2_SICPOVM}(a)), here the value of ${\rm AIC}(M_{1\phi})$ is larger than ${\rm AIC}({\rm FPM})$ by a much smaller amount, even when the phase term is correct ($\phi=0$). The absolute value of the difference is not relevant, though, and what counts is its sign. The obvious reason for the smaller difference is that the number of independent parameters for the FPM has dropped from 255 to 30. In addition, the FPM in this case does not refer to a specific 30-parameter model. On the contrary, since the number of degrees of freedom of the quantum system is still 255, there is a whole subspace of states, spanning a number of degrees of freedom equal to 225 (=255-30), all satisfying the maximum likelihood condition.
The witness measurement is very sensitive to the phase error, even when the number of measurements is still small. When $N=1000$, the estimation of $\phi$ is within an error of $\pi/6$, as the second point plotted is already below the line $\Delta{\rm AIC}=0$.
In the case of FIG.~\ref{fig_AIC_M1M2_SICPOVM}(a), by contrast, this precision is only reached when $N=10000$.
An interesting comparison can be made between AIC and the entanglement-detecting nature of witness $W_{J_{xy}}$. FIG.~\ref{fig_WJxy} shows the performance of $\langle W_{J_{xy}}\rangle$ for the pure state $\ket{\Psi_{\rm target}^2(\phi)}$ ($\rho_\phi(q=0)$, solid curve) and the mixed state with 20\% of the identity mixed in ($\rho_\phi(q=0.2)$, dot-dashed curve). Even when the state is pure, $\langle W_{J_{xy}}\rangle$ will not be able to witness any entanglement if the phase error is larger than $\pi/3$, just about when AIC declares such a model deficient. Entanglement in the mixed state $\rho_\phi(q=0.2)$, of course, is never witnessed. This means $\langle W_{J_{xy}}\rangle$ is only an effective witness in the vicinity of $\ket{D_4^2}$, with limited tolerance of either white noise or phase noise in even just one of the four qubits. (Naturally, one could detect the entanglement in the pure state by appropriately rotating the axes in the spin measurement on the first qubit over an angle $\phi$.)
To test whether a few-parameter model correctly quantifies entanglement if that model is preferred over the FPM by AIC, we estimate a (posterior, Bayesian) probability distribution over the generalized negativities (defined above). We see that the first three curves in
FIG.~\ref{fig_negdist_M1_witness}(a) and the first two curves in FIG.~\ref{fig_negdist_M1_witness}(b), which correspond to the data points above the horizontal line in FIG.~\ref{fig_AIC_M1_witness}, all give consistent estimates of $\mathcal{N}_0$, compared to the actual value of $\mathcal{N}_0$ for the true state (and the same holds for $\mathcal{N}_{1,2}$ (not shown)). Conversely, the estimate cannot be trusted when AIC deems the simple model inferior to the FPM (of course, it may still happen to be a correct estimate, but one could not be sure).
This gives additional evidence for the success of AIC.
\subsection{Cross modeling for a witness measurement}
We now construct a two-parameter model $M_{2\phi}$ similar in spirit to that discussed for tomographically complete measurements: half the data [on which half the time $(\sigma_x)^{\otimes 4}$ is measured, and half the time $(\sigma_y)^{\otimes 4}$] are used to generate a better model, which is then tested on the other half of the data (also containing both types of measurements equally). We write
\begin{align}
\rho(\epsilon,q)=&(1-\epsilon)\left[(1-q)\rho_{\rm observation}\right.\nonumber\\
&\left.+q\ket{\Psi_{\rm target}^{2}(\phi)}\bra{\Psi_{\rm target}^{2}(\phi)}\right]+\epsilon\openone/D.\label{eq_M2_witness}
\end{align}
To find a $\rho_{\rm observation}$---there are many equivalent ones for predicting the outcomes of the witness measurements---we recall that a generic four-qubit state can be expressed as
\begin{align}
\rho=\sum_{jklm}c_{jklm}\sigma_j\otimes\sigma_k\otimes\sigma_l\otimes\sigma_m,
\end{align}
where $j,k,l,m=1,2,3,4$, and
$\sigma_{1,2,3}$ denote the Pauli matrices $\sigma_{x,y,z}$ and $\sigma_4=\openone$. The witness measures the coefficients $c_{jklm}$ where $j,k,l,m$ can be combinations of only 1 and 4 or combinations of only 2 and 4 (\textsl{e.g.}, $c_{1441}$ or $c_{4222}$). We label the $c_{jklm}$'s that can be recovered from the witness measurement as $c_{jklm}^w$ ($w$ as in \emph{witness}). We do not include in $c_{jklm}^w$ the coefficient $c_{4444}$, which always equals $1/16$ and thus does not depend on measurement outcomes. We define
\begin{align}
\rho_{\rm observation}=\sum_{jklm}c_{jklm}^w\sigma_j\otimes\sigma_k\otimes\sigma_l
\otimes\sigma_m+\openone/16.
\end{align}
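Constructing this pseudostate from raw data can be sketched as follows. This is our own illustrative implementation (function names and the record format are assumptions): each record is one run of $(\sigma_x)^{\otimes 4}$ or $(\sigma_y)^{\otimes 4}$, stored as four $\pm 1$ outcomes, and each $c^w_{jklm}$ is the empirical mean of the corresponding outcome product, divided by 16:

```python
import numpy as np
from itertools import product

PAULI = {1: np.array([[0, 1], [1, 0]], complex),    # sigma_x
         2: np.array([[0, -1j], [1j, 0]], complex), # sigma_y
         4: np.eye(2, dtype=complex)}               # identity

def kron_all(ops):
    out = np.array([[1]], dtype=complex)
    for op in ops:
        out = np.kron(out, op)
    return out

def rho_observation(x_records, y_records):
    """Pseudostate from records of (sigma_x)^{x4} and (sigma_y)^{x4}
    measurements; each record is a length-4 sequence of +/-1 outcomes.
    c^w_{jklm} = mean of the outcome product over the non-identity
    slots, divided by 16."""
    rho = np.eye(16, dtype=complex) / 16            # the fixed c_4444 term
    for basis, records in ((1, np.asarray(x_records)),
                           (2, np.asarray(y_records))):
        for idx in product((basis, 4), repeat=4):
            if idx == (4, 4, 4, 4):
                continue                             # already included above
            cols = [k for k in range(4) if idx[k] != 4]
            c = records[:, cols].prod(axis=1).mean() / 16
            rho += c * kron_all([PAULI[j] for j in idx])
    return rho
```

By construction the result has unit trace, is Hermitian, and reproduces the empirical expectation value of every measured collective Pauli operator, while all unmeasured ones vanish.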
\begin{figure}[t]
\begin{center}
\includegraphics[width=165pt]{unphysical_e_q.pdf}
\caption{{\em What fraction of the
model Eq. (\ref{eq_M2_witness}) describes physical states?}: The lower left part separated by the curves is where $\rho(\epsilon,q)$ of Eq. (\ref{eq_M2_witness}) is unphysical (and so is not actually included in the model), for different numbers of measurements $N$. }
\label{fig_unphysical_e_q}
\end{center}
\end{figure}
\begin{figure}[t]
\begin{center}
\includegraphics[width=195pt]{L_M_D42_UNPHYSICAL.pdf}
\caption{{\em How the AIC ranks our two-parameter model vs. the FPM, for a witness measurement}: The difference between AICs of $M_{2\phi}$ and the FPM, \textsl{i.e.}, $-\Delta{\rm AIC}={\rm AIC(FPM)}-{\rm AIC}(M_{2\phi})$, for different numbers of witness measurements as a function of $\phi$. The target state is $\ket{\Psi_{\rm target}^{2(1)}}$. The horizontal line demarcates $\Delta{\rm AIC}=0$.}
\label{fig_AIC_M2_witness}
\end{center}
\end{figure}
Note that $\rho_{\rm observation}$ can be considered as a trace-one \emph{pseudostate}, since it is not necessarily positive semi-definite. But the most attractive property of $\rho_{\rm observation}$ is that it preserves the measurement outcomes. It is in fact the unique \emph{pseudostate} that reproduces the exact frequencies of all measurement outcomes {\em and} that has vanishing expectation values for all other {\em un}performed collective Pauli measurements. As a component of $\rho(\epsilon,q)$, we allow $\rho_{\rm observation}$ to be unphysical, but we only keep those $\rho(\epsilon,q)$ that are positive semi-definite.
We checked numerically for what values of $\epsilon$ and $q$ the states end up being physical, and how this depends on the number of measurements performed.
Physical states are located in the upper right part of the square in FIG.~\ref{fig_unphysical_e_q}. That is, only if $\epsilon$ and/or $q$ are sufficiently large, so that a sufficiently large amount of $\ket{\Psi_{\rm target}^2}$ and/or $\openone/16$ has been mixed in, does $\rho(\epsilon,q)$ become physical. Depending on the number of measurements, the area of the upper right part is about 69\%-77\% of the whole square. The physical/unphysical boundary shifts closer to the origin as the number of measurements increases.
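The numerical check behind FIG.~\ref{fig_unphysical_e_q} amounts to scanning the unit square and testing positive semi-definiteness at each grid point. A minimal sketch (our names; grid resolution is an assumption):

```python
import numpy as np

def is_physical(rho, tol=1e-9):
    """A trace-one Hermitian matrix is a state iff its eigenvalues
    are all non-negative."""
    return np.linalg.eigvalsh(rho).min() >= -tol

def physical_fraction(rho_obs, psi_target, n_grid=50):
    """Fraction of the (epsilon, q) unit square where
    rho(eps, q) = (1-eps)[(1-q) rho_obs + q |psi><psi|] + eps I/D
    is positive semidefinite."""
    d = len(psi_target)
    proj = np.outer(psi_target, psi_target.conj())
    grid = np.linspace(0, 1, n_grid)
    hits = sum(is_physical((1 - e) * ((1 - q) * rho_obs + q * proj)
                           + e * np.eye(d) / d)
               for e in grid for q in grid)
    return hits / n_grid ** 2
```

When `rho_obs` is itself a physical state the whole square is physical; an unphysical pseudostate carves out an unphysical region near the origin, exactly the behavior shown in the figure.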
\begin{figure*}[t]
\begin{center}
\subfigure[$N=100, \ket{\Psi_{\rm target}^2(\phi)}$]{\includegraphics[width=162pt]{posteriorN2_D42_1e2_1_UNPHYSICAL.pdf}}
\subfigure[$N=1000, \ket{\Psi_{\rm target}^2(\phi)}$]{\includegraphics[width=162pt]{posteriorN2_D42_1e3_1_UNPHYSICAL.pdf}}
\subfigure[$N=10000, \ket{\Psi_{\rm target}^2(\phi)}$]{\includegraphics[width=162pt]{posteriorN2_D42_1e4_1_UNPHYSICAL.pdf}}
\caption{{\em Quantifying entanglement from a witness measurement}: The posterior distributions for $\mathcal{N}_2$, for different numbers of measurements, using model $M_{2\phi}$ with $\rho_{\rm observation}$ and target state $\ket{\Psi_{\rm target}^2}$, where $\phi=0, \pi/6, \pi/3$. $\mathcal{N}_2(\rho_{\rm actual})=0.3875$. The same prior is used as in FIG.~\ref{fig_negdist_M1_witness}. Whenever the AIC declares a model superior to the FPM, the estimated entanglement agrees, within error bars, with the actual value, but may be wrong otherwise. }
\label{fig_negdist_M2_witness}
\end{center}
\end{figure*}
\begin{figure}[t]
\begin{center}
\includegraphics[width=195pt]{M1vsM2}
\caption{{\em Comparing one- and two-parameter models directly}: The difference between AICs of $M_{2\phi}$ and $M_{1\phi}$ for 20 different sets of witness measurements ($N=50$) as functions of $\phi$. The target state is $\ket{\Psi_{\rm target}^{2}}$. The horizontal line demarcates $\Delta{\rm AIC}=0$. The dotted-dashed line is the average of all 20 points at each different $\phi$.}
\label{fig_AIC_M1vsM2_witness}
\end{center}
\end{figure}
We test the two-parameter model (the physical part of it), and show the results in FIG.~\ref{fig_AIC_M2_witness}. We find that for $N=100$ the AIC ranks $M_{2\phi}$ better than the FPM, even when the guess about $\phi$ is very imprecise: 100 witness measurements are, unsurprisingly, not enough for a correct reconstruction of the state. When $N=1000$, AIC only prefers the models with a value for $\phi$ within $\pi/6$ of the correct value. And when $N=10000$, the accepted values of $\phi$ are even closer to the true value.
The corresponding posterior distributions of negativities ${\cal N}_2$ are plotted in FIG.~\ref{fig_negdist_M2_witness} for the three better guesses, $\phi=0, \pi/6, \pi/3$.
When $N=100$ all three give decent predictions of $\mathcal{N}_2$ (and indeed, AIC ranks those models highly). For $N=1000$ and $N=10000$, we would only trust the estimates arising from the lower two values of $\phi$, or just the correct value of $\phi$, respectively. This trust is rewarded in FIG.~\ref{fig_negdist_M2_witness}(b) and FIG.~\ref{fig_negdist_M2_witness}(c), as those estimates are indeed correct, within the error bars. In addition, the untrusted estimate for $\phi=\pi/6$ for $N=10000$ still happens to be correct, too.
\subsection{Comparing one- and two-parameter models directly}
Finally, the AIC can compare the one- and two-parameter models $M_{1\phi}$ and $M_{2\phi}$ directly.
For that purpose one needs to use the {\em same} validation set of data, which implies that the two-parameter model needs {\em additional} data to generate $\rho_{{\rm observation}}$. Here we display results for just 50 witness measurements, and an additional set of 50 measurements for $M_{2\phi}$. FIG.~\ref{fig_AIC_M1vsM2_witness} shows that even such a small amount of additional data is useful: it suffices to detect a wrong single-qubit phase whenever the error is larger than $\pi/3$.
\section{Conclusions}
\label{sec_conclusion}
We applied information criteria, and the Akaike Information Criterion (AIC) developed in Ref.~\cite{A1974} in particular, to quantum state estimation. We showed it to be a powerful method, provided one has a reasonably good idea of what state one's quantum source actually generates.
For each given model, which may include several parameters describing error and noise, as well as some parameters---call them the ideal-state parameters--- describing the state one would like to generate in the ideal (noiseless and error-free) case, the AIC determines a ranking from the observed data. One can construct multiple models, for instance, models where some ideal-state parameters and some noise parameters are fixed (possibly determined by previous experiments in the same setup), with others still considered variable.
Crucially, the AIC also easily ranks the full-parameter model (FPM), which uses in principle all exponentially many parameters in the full density matrix, and which is, therefore, the model one would use in full-blown quantum state tomography. This ranking of the FPM can be accomplished without actually having to find the maximum-likelihood state (or its likelihood)---which quickly would run into insurmountable problems for many-qubit systems---by using a straightforward upper bound.
This way, observed data is used to justify {\em a posteriori} the use of the few-parameter models---namely, if the AIC ranks that model above the FPM---and thus our method is in the same spirit as several other recent proposals \cite{GLFBE2010,CPFSGBL-CPL2010} to simplify quantum tomography, by tentatively introducing certain assumptions on the quantum state generated, after which data is used to certify those assumptions (and if the certification fails, one at least knows the initial assumptions were incorrect).
We illustrated the method on (noisy and mis-specified) four-qubit members of the family of Dicke states, and demonstrated its effectiveness and efficiency. For instance, we showed that one can detect mis-specified ideal-state parameters and determine noise and error parameters. We also showed by example the successful application of the method to a specific and useful subtask, that of quantifying multi-qubit entanglement.
\section*{Acknowledgement}
This work was supported by NSF grant PHY-1004219.
\section{Introduction}
Lexicalized Tree Adjoining Grammars (LTAG) and Combinatory Categorial
Grammar (CCG) \cite{steedman:97} are known to be weakly
equivalent but not strongly equivalent. Coordination schema have a
natural description in CCG, while these schema have no natural
equivalent in a standard LTAG.
In \cite{Joshi91} it was shown that in principle it is possible to
construct a CCG-like account for coordination in the framework of
LTAGs, but there was no clear notion of what the derivation structure
would look like. In this paper, continuing the work of \cite{Joshi91},
we show that an account for coordination can be constructed using the
derivation structures in an LTAG.
Using the notions given in this paper we also discuss the construction
of practical parser for LTAGs that can handle coordination including
cases of non-constituent coordination. This approach has been
implemented in the XTAG system \cite{xtagrpt95} thus extending it to
handle coordination. This is the first full implementation of
coordination in the LTAG framework.
\section{LTAG}
An LTAG is a set of trees ({\em elementary trees}), each of which has at
least one terminal symbol on its frontier, called the {\em anchor}. Each node
in the tree has a unique address obtained by applying a Gorn tree
addressing scheme, shown in the tree {\em $\alpha$(cooked)}
(Fig.~\ref{fig:ltag}). Trees can be rewritten using {\em
substitution\/} and {\em adjunction}. A history of these operations
on elementary trees in the form of a {\em derivation tree} can be used
to reconstruct the derivation of a string recognized by an LTAG. In
Fig.~\ref{fig:ltag}, the tree {\em $\beta$(dried)} adjoins into {\em
$\alpha$(beans)}, and the trees {\em $\alpha$(John)} and {\em
$\alpha$(beans)} substitute into {\em $\alpha$(cooked)}, giving a
derivation tree for {\em John cooked dried beans}. Each node in the
derivation tree is the name of an elementary tree. The labels on the
edges denote the address in the parent node at which a substitution or
adjunction has occurred.
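The Gorn addressing scheme can be sketched concretely. In the following illustration (our own representation, not part of the formalism), a tree is a pair of a label and a list of children; the root receives address $0$ and the $i$-th child of a node at address $a$ receives address $a.i$:

```python
def gorn_addresses(tree, address=""):
    """Enumerate (address, label) pairs for a tree given as
    (label, [children]); the i-th child (1-based) of a node at
    address a gets address a.i."""
    label, children = tree
    yield (address or "0", label)
    for i, child in enumerate(children, start=1):
        child_addr = str(i) if not address else address + "." + str(i)
        yield from gorn_addresses(child, child_addr)

# A rough skeleton of alpha(cooked): S -> NP (1), VP (2);
# VP -> V (2.1) anchoring "cooked", NP (2.2).
cooked = ("S", [("NP", []),
                ("VP", [("V", [("cooked", [])]), ("NP", [])])])
```

Running `dict(gorn_addresses(cooked))` assigns the subject NP address $1$ and the object NP address $2.2$, matching the addresses used in the text.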
\begin{figure}[htbp]
\begin{center}
\leavevmode
\psfig{figure=ltag.ps,scale=65}
\end{center}
\caption{Example of an LTAG and an LTAG derivation}
\label{fig:ltag}
\end{figure}
\vspace{-0.12in}
\section{Trees as Structured Categories}
In \cite{Joshi91} elementary trees as well as derived trees in an LTAG
were considered as structured categories defined as a 3-tuple of an
elementary or derived tree, the string it spanned and the functional
type of the tree, e.g $\langle \sigma_1, l_1, \tau_1 \rangle$ in
Fig.~\ref{fig:eats-cookies}. Functional types for trees could be
thought of as defining un-Curried functions corresponding to the
Curried CCG counterpart. A functional type was given to sequences of
lexical items in trees even when they were not contiguous; i.e.
discontinuous constituents were also assigned types. They were,
however, barred from coordinating.
\begin{figure}[htbp]
\begin{center}
\leavevmode
\psfig{figure=eats-cookies.ps,scale=60}
\end{center}
\caption{Structured Category for {\em eats cookies}}
\label{fig:eats-cookies}
\end{figure}
\vspace{-0.12in}
Coordination of two structured categories $\sigma_1, \sigma_2$
succeeded if the lexical strings of both categories were contiguous,
the functional types were identical, and the least nodes dominating
the strings spanned by the component tree have the same label. For
example, in Fig.~\ref{fig:eats-and-drinks} the tree corresponding to
{\em eats cookies and drinks beer} would be obtained by:
\begin{enumerate}
\item equating the {\em NP} nodes%
\footnote{ This notion of sharing should not be confused with a
deletion type analysis of coordination. The scheme presented in
\cite{Joshi91} as well as the analysis presented in this paper are
not deletion analyses.}%
\ in $\sigma_1$ and $\sigma_2$, preserving the linear precedence of
the arguments.
\item coordinating the {\em VP} nodes, which are the least nodes
dominating the two contiguous strings.
\item collapsing the supertrees above the {\em VP} node.
\item selecting the leftmost {\em NP} as the lexical site for the
argument, since precedence with the verb is maintained by this
choice.
\end{enumerate}
\begin{figure}[htbp]
\begin{center}
\leavevmode
\psfig{figure=eats-and-drinks.ps,scale=65}
\end{center}
\caption{Coordination of {\em eats cookies and drinks beer}}
\label{fig:eats-and-drinks}
\end{figure}
\vspace{-0.12in}
The process of coordination built a new derived structure given
previously built pieces of derived structure (or perhaps elementary
structures). There is no clear notion of a derivation structure for
this process.
\section{Coordination in TAG}
An account for coordination in a standard LTAG cannot be given without
introducing a notion of sharing of arguments in the two lexically
anchored trees because of the notion of {\em locality} of arguments in
LTAG. In~\ref{ex:rnr} for instance, the NP {\em the beans} in the
``right node raising'' construction has to be shared by the two
elementary trees (anchored by {\em cooked} and {\em ate}
respectively).
\beginsentences
\item ((Harry cooked) and (Mary ate)) the beans
\label{ex:rnr}
\end{list}
We introduce a notation that will enable us to talk about this more
formally. In Fig.~\ref{fig:ltag} the notation $\downarrow$ denotes
that a node is a non-terminal and hence expects a substitution
operation to occur. The notation $*$ marks the foot node of an
auxiliary tree. Making this explicit we can view an elementary tree
as an ordered pair of the tree structure and an ordered
set%
\footnote{ The ordering is given by the fact that the elements of the
set are Gorn addresses.}%
\ of such nodes from its frontier%
\footnote{ We shall assume there are no adjunction constraints in
this paper. }%
, e.g. the tree for {\em cooked} will be represented as $\langle
\alpha(cooked), \{ 1, 2.2 \} \rangle$. Note that this representation
is not required by the LTAG formalism. The second projection of this
ordered pair is used here for ease of explication. Let the second
projection of the pair minus the foot nodes be the {\em substitution
set}. We will occasionally use the first projection of the
elementary tree to refer to the ordered pair.
{\em Setting up Contractions.} We introduce an operation called {\em
build-contraction} that takes an elementary tree, places a subset
from its second projection into a {\em contraction set} and assigns
the difference of the set in the second projection of the original
elementary tree and the contraction set to the second projection of
the new elementary tree. The contents of the contraction set of a tree
can be inferred from the contents of the set in the second projection
of the elementary tree. Hence, while we refer to the contraction set
of an elementary tree, it does not have to be stored along with its
representation.
Fig.~\ref{fig:contract-list} gives some examples; each node in the
contraction set is circled in the figure. In the tree $\langle
\alpha(cooked), \{ 1, 2.2 \} \rangle$ application of the operation on
the {\em NP} node at address $2.2$ gives us a tree with the
contraction set $\{ 2.2 \}$. The new tree is denoted by $\langle
\alpha(cooked)_{\{ 2.2 \}}, \{ 1 \} \rangle$, or $\alpha(cooked)_{\{
2.2 \}}$ for short. Placing the {\em NP} nodes at addresses $1$ and
$2.2$ of the tree $\alpha(cooked)$ into the contraction set gives us
$\alpha(cooked)_{\{ 1, 2.2 \}}$.
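On the pair representation just introduced, {\em build-contraction} is a simple set operation. The following is an illustrative sketch (our own encoding of the pairs, with tree names as strings and Gorn addresses as strings):

```python
def build_contraction(elementary, contraction):
    """build-contraction: move a subset of the tree's frontier sites
    into a contraction set.  `elementary` is (name, frontier_set);
    returns (name, contraction_set, remaining_frontier_set)."""
    name, frontier = elementary
    contraction = frozenset(contraction)
    if not contraction <= frozenset(frontier):
        raise ValueError("contraction set must come from the tree frontier")
    return (name, contraction, frozenset(frontier) - contraction)
```

For example, `build_contraction(("alpha(cooked)", {"1", "2.2"}), {"2.2"})` yields the tree $\alpha(cooked)_{\{2.2\}}$ with remaining frontier set $\{1\}$, as in Fig.~\ref{fig:contract-list}.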
\begin{figure}[htbp]
\begin{center}
\leavevmode
\psfig{figure=contract-list-small.ps,scale=65}
\end{center}
\caption{Building contraction sets}
\label{fig:contract-list}
\end{figure}
We assume that the anchor cannot be involved in a {\em
build-contraction}. This assumption needs to be revised when gapping
is considered in this framework~(\S\ref{sec:c-anchor}).
{\em The Coordination Schema.} We use the standard notion of
coordination shown in Fig.~\ref{fig:conj} which maps two constituents
of {\em like type}, but with different interpretations,
into a constituent of the same type%
\footnote{ In this paper, we do not consider coordination of unlike
categories, e.g. {\em Pat is a Republican and proud of
it}. \cite{tr:coord} discusses such cases, following \newcite{ja:92}.}%
.
\begin{figure}[htbp]
\begin{center}
\leavevmode
\psfig{figure=conj.ps,scale=65}
\end{center}
\caption{Coordination schema}
\label{fig:conj}
\end{figure}
\vspace{-0.12in}
We add a new rewriting operation to the LTAG formalism called {\em conjoin\/}%
\footnote{ Later we will discuss an alternative which replaces this
operation by the traditional operations of substitution and
adjunction. }%
. While substitution and adjunction take two trees to give a derived
tree, {\em conjoin\/} takes three trees and composes them to give a
derived tree. One of the trees is always the tree obtained by
specializing the schema in Fig.~\ref{fig:conj} for a particular
category%
\footnote{ The tree obtained will be a lexicalized tree, with the
lexical anchor as the conjunction: {\em and}, {\em but}, etc. }%
.
Informally, the conjoin operation works as follows: The two trees
being coordinated are substituted into the conjunction tree. This
notion of substitution differs from the traditional LTAG substitution
operation in the following way: in LTAG substitution, the root
node of the tree being substituted is always identified with the substitution
site. In the conjoin operation, however, the node substituting into the
conjunction tree is given by an algorithm, which we shall call {\em
FindRoot} that takes into account the contraction sets of the two
trees. {\em FindRoot} returns the lowest node that dominates all nodes
in the substitution set of the elementary tree%
\footnote{ This ensures the node picked by {\em FindRoot} always
dominates a contiguous string in a derivation. This captures the
string contiguity condition that was used in~\cite{Joshi91}. A
coordinated node will never dominate multiple foot nodes. Such a
case occurs, e.g., two auxiliary trees with substitution nodes at
the same tree address are coordinated with only the substitution
nodes in the contraction set. }%
, e.g. $FindRoot(\alpha(cooked)_{\{ 2.2 \}})$ will return the root
node, i.e. corresponding to the {\em S conj S} instantiation of the
coordination schema. $FindRoot(\alpha(cooked)_{\{ 1, 2.2 \}})$ will
return node address $2.1$, corresponding to the {\em V conj V}
instantiation.
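With Gorn addresses, the lowest node dominating a set of frontier addresses is simply their longest common address prefix. The sketch below (our own reading of {\em FindRoot}: following the examples in the text, we feed it the anchor address together with the remaining substitution sites, so that the chosen node dominates the whole contiguous conjunct string):

```python
def find_root(frontier_addresses):
    """Lowest node dominating all the given Gorn addresses: the
    longest common address prefix; "" denotes the root."""
    paths = [a.split(".") for a in frontier_addresses]
    prefix = []
    for parts in zip(*paths):
        if len(set(parts)) != 1:
            break
        prefix.append(parts[0])
    return ".".join(prefix)
```

For $\alpha(cooked)_{\{2.2\}}$ the non-contracted frontier is the subject NP at $1$ plus the anchor at $2.1$, and `find_root(["1", "2.1"])` returns the root (the {\em S conj S} instantiation); for $\alpha(cooked)_{\{1,2.2\}}$ only the anchor at $2.1$ remains, and `find_root(["2.1"])` returns $2.1$ (the {\em V conj V} instantiation).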
The conjoin operation then creates a {\em contraction\/} between nodes
in the contraction sets of the trees being coordinated. The term {\em
contraction\/} is taken from the graph-theoretic notion of edge
contraction. In a graph, when an edge joining two vertices is
contracted, the nodes are merged and the new vertex retains edges to
the union of the neighbors of the merged vertices%
\footnote{ Merging in the graph-theoretic definition of contraction
involves the identification of two previously distinct nodes. In
the process of contraction over nodes in elementary trees it is the
operation on that node (either substitution or adjunction) that is
identified. }%
. The conjoin operation supplies a new edge between each corresponding
node in the contraction set and then contracts that edge. As a
constraint on the application of the conjoin operation, the
contraction sets of the two trees must be identical.
Another way of viewing the conjoin operation is as the construction of
an auxiliary structure from an elementary tree. For example, from the
elementary tree $\langle \alpha(drinks), \{ 1, 2.2 \} \rangle$, the
conjoin operation would create the auxiliary structure $\langle
\beta(drinks)_{\{ 1 \}}, \{ 2.2 \} \rangle$ shown in
Fig.~\ref{fig:aux-conj}. The adjunction operation would now be
responsible for creating contractions between nodes in the contraction
sets of the two trees supplied to it. Such an approach is attractive
for two reasons. First, it uses only the traditional operations of
substitution and adjunction. Secondly, it treats {\em conj X} as a
kind of ``modifier'' on the left conjunct {\em X}. We do not choose
between the two representations but continue to view the conjoin
operation as a part of our formalism.
\begin{figure}[htbp]
\begin{center}
\leavevmode
\psfig{figure=aux-conj.ps,scale=65}
\end{center}
\caption{Coordination as adjunction.}
\label{fig:aux-conj}
\end{figure}
\vspace{-0.12in}
For example, applying {\em conjoin\/} to the trees {\em Conj(and)},
$\alpha(eats)_{\{1 \}}$ and $\alpha(drinks)_{\{1 \}}$ gives us the
derivation tree and derived structure for the constituent in
\ref{ex:vpc} shown in Fig.~\ref{fig:vpc}.
\beginsentences
\item \ldots eats cookies and drinks beer. \label{ex:vpc}
\end{list}
\begin{figure}[htbp]
\begin{center}
\leavevmode
\psfig{figure=vpc.ps,scale=65}
\end{center}
\caption{An example of the {\em conjoin\/} operation.}
\label{fig:vpc}
\end{figure}
In Fig.~\ref{fig:vpc} the nodes $\alpha(eats)_{\{1 \}}$ and
$\alpha(drinks)_{\{1 \}}$ signify an operation left incomplete at
address $1$.
{\em The Effects of Contraction.} One of the effects of contraction is
that the notion of a derivation tree for the LTAG formalism has to be
extended to an acyclic {\em derivation graph\/}%
\footnote{ We shall use the general notation {\em derivation
structure} to refer to both derivation trees and
derivation graphs. }%
. Simultaneous substitution or adjunction modifies a derivation tree
into a graph as can be seen in Fig.~\ref{fig:chapman}.
If a contracted node in a tree (after the conjoin operation) is a
substitution node, then the argument is recorded as a substitution
into the two elementary trees as for example in the
sentences~\ref{ex:chapman} and \ref{ex:keats-chapman}.
\beginsentences
\item Chapman eats cookies and drinks beer. \label{ex:chapman}
\item Keats steals and Chapman eats apples. \label{ex:keats-chapman}
\end{list}
Fig.~\ref{fig:chapman} contains the derivation and derived structures
for \ref{ex:chapman} and Fig.~\ref{fig:keats-chapman} for
\ref{ex:keats-chapman}. In Fig.~\ref{fig:keats-chapman} the derivation
graph for sentence~\ref{ex:keats-chapman} accounts for the
coordinations of the traditional nonconstituent ``Keats steals'' by
carrying out the coordination at the root, i.e. {\em S conj S}. No
constituent corresponding to ``Keats steals'' is created in the
process of coordination.
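The resulting derivation structure is naturally represented as a graph whose edges record which elementary tree was substituted or adjoined where. A minimal sketch (tree names and the conjunction-tree addresses are illustrative assumptions): a contracted argument is recorded under {\em both} parents, which is exactly what turns the derivation tree into a DAG:

```python
from collections import defaultdict

def derivation_graph(operations):
    """operations: (child_tree, parent_tree, Gorn address) triples;
    with a contraction the same child tree appears under several
    parents, so the result is a DAG rather than a tree."""
    g = defaultdict(list)
    for child, parent, addr in operations:
        g[child].append((parent, addr))
    return dict(g)

# Derivation of "Chapman eats cookies and drinks beer": the conjuncts
# substitute into the coordination tree, and the contracted subject is
# recorded as substituting into *both* elementary trees at address 1.
operations = [
    ("alpha(eats)",    "Conj(and)",     "1"),
    ("alpha(drinks)",  "Conj(and)",     "3"),
    ("alpha(cookies)", "alpha(eats)",   "2.2"),
    ("alpha(beer)",    "alpha(drinks)", "2.2"),
    ("alpha(Chapman)", "alpha(eats)",   "1"),
    ("alpha(Chapman)", "alpha(drinks)", "1"),
]
```

Here `alpha(Chapman)` has two parents, which is the graph-theoretic signature of the contraction in Fig.~\ref{fig:chapman}.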
\begin{figure}[htbp]
\begin{center}
\leavevmode
\psfig{figure=chapman.ps,scale=65}
\end{center}
\caption{Derivation for {\em Chapman eats cookies and drinks beer.}}
\label{fig:chapman}
\end{figure}
\begin{figure}[htbp]
\begin{center}
\leavevmode
\psfig{figure=keats-chapman-small.ps,scale=65}
\end{center}
\caption{Derivation for {\em Keats steals and Chapman eats apples.}}
\label{fig:keats-chapman}
\end{figure}
\vspace{-0.12in}
The derived structures in Figs.~\ref{fig:chapman}
and~\ref{fig:keats-chapman} are difficult to reconcile with traditional
notions of phrase structure%
\footnote{ \newcite{mccawley:82} raised the heterodox view that a
discontinuous constituent structure should be given for right node
raising cases, having the same notion of constituency as our
approach. However, no conditions on the construction of such a
structure were given. In fact, his mechanism also covered cases of
parenthetical placement, scrambling, relative
clause extraposition and heavy NP shift.}%
. However, the derivation structure gives us all the information
about dependency that we need about the constituents. The derivation
encodes exactly how particular elementary trees are put together.
Obtaining a tree structure from a derived structure built by the
conjoin operation is discussed in~\cite{tr:coord}.
Considerations of the locality of movement phenomena and its
representation in the LTAG formalism \cite{kj86} can also now explain
constraints on coordinate structure, such as across-the-board
exceptions to the well known coordinate structure constraint, see
Fig.~\ref{fig:atb}. Also in cases of unbounded right node raising such
as {\em Keats likes and Chapman thinks Mary likes beans}, {\em Chapman
thinks} simply adjoins into the right conjunct of the coordinate
structure%
\footnote{ A comparison of this paper's approach with the
derivational machinery in CCG and the devices of 3-D coordination is
done in \cite{tr:coord}.}%
.
\begin{figure}[htbp]
\begin{center}
\leavevmode
\psfig{figure=atb-new.ps,scale=70}
\end{center}
\caption{Derivation for {\em Who laughed and seemed to be happy?}}
\label{fig:atb}
\end{figure}
\section{Contractions on Anchors}
\label{sec:c-anchor}
An LTAG along with the operations of substitution and adjunction also
has the implicit operation of lexical insertion (represented as the
diamond mark in Fig.~\ref{fig:lexicalization}). Under this view, the
LTAG trees are taken to be templates. For example, the tree in
Fig.~\ref{fig:lexicalization} is now represented as $\langle
\alpha(eat), \{ 1, 2.1, 2.2 \} \rangle$.
\begin{figure}[htbp]
\begin{center}
\leavevmode
\psfig{figure=lexicalization.ps,scale=60}
\end{center}
\caption{Lexicalization in an LTAG.}
\label{fig:lexicalization}
\end{figure}
\vspace{-0.12in}
If we extend the notion of contraction in the conjoin operation
together with the operation of lexical insertion we have the following
observations: The two trees to be used by the conjoin operation are no
longer strictly lexicalized as the label associated with the diamond
mark is a preterminal. Previous uses of conjoin applied to two
distinct trees. If the lexicalization operation is to apply
simultaneously, the same anchor projects two elementary trees from the
lexicon. The process of contraction ensures that the anchor is placed
into a pair of LTAG tree templates with a single lexical insertion.
{\em Gapping.} Using this extension to {\em conjoin}, we can handle
sentences that have the ``gapping'' construction like
sentence~\ref{ex:gap}.
\beginsentences
\item John ate bananas and Bill strawberries. \label{ex:gap}
\end{list}
The conjoin operation applies to copies of the same elementary tree
when the lexical anchor is in the contraction set. For example, let
$\alpha(eats)$ be the tree selected by {\em eats}. The coordination
of $\alpha(eats)_{\{ 2.1 \}}$ with a copy of itself and the subsequent
derivation tree is depicted in Fig.~\ref{fig:gapping}%
\footnote{ In English, following \newcite{ross:70}, the anchor goes to
the left conjunct. }%
.
\begin{figure}[htbp]
\begin{center}
\leavevmode
\psfig{figure=gapping.ps,scale=65}
\end{center}
\caption{Handling the gapping construction using contractions.}
\label{fig:gapping}
\end{figure}
An extension of the approach here will be to permit the conjoin
operation to create contractions on {\em all} the nodes in contraction
sets that it dominates during a derivation, allowing us to recognize
cases of gapping such as: {\em John wants Penn to win and Bill,
Princeton.} and {\em John wants to try to see Mary and Bill, Susan.}
{\em Coordinating Ditransitive verbs.} In
sentence~\ref{ex:ditrans-conj} if we take the position that the string
{\em Mary a book} is not a constituent (i.e. {\em give} has a
structure as in Fig.~\ref{fig:ditrans}), then we can use the notion
of contraction over the anchor of a tree to derive the sentence
in~\ref{ex:ditrans-conj}. The structure we derive is shown in
Fig.~\ref{fig:ditrans-conj}.
\beginsentences
\item John gave Mary a book and Susan a flower. \label{ex:ditrans-conj}
\end{list}
\begin{figure}[htbp]
\begin{center}
\leavevmode
\psfig{figure=ditrans.ps,scale=60}
\end{center}
\caption{Tree for a ditransitive verb in LTAG.}
\label{fig:ditrans}
\end{figure}
\begin{figure}[htbp]
\begin{center}
\leavevmode
\psfig{figure=ditrans-conj.ps,scale=60}
\end{center}
\caption{Derived tree for {\em John gave Mary a book and Susan a flower}.}
\label{fig:ditrans-conj}
\end{figure}
\vspace{-0.12in}
{\em Interactions.} Permitting contractions on multiple substitution
and adjunction sites, along with contractions on the anchor, allows the
derivation of {\em sluicing} structures such as~\ref{ex:sluicing}
(where the conjunct {\em Bill too} can be interpreted as {\em [John
loves] Bill too} or as {\em Bill [loves Mary] too})%
\footnote{ Whether this should be derived syntactically is
controversial, for example, see~\cite{steedman90}.
}%
.
\beginsentences
\renewcommand{\thesentencesubctr}{(\smainform{sentencectr} John loves Mary and Bill too. \label{ex:sluicing}
\end{list}
\section{Parsing Issues}
This section discusses parsing issues that arise in the modified TAG
formalism that we have presented. We do not discuss general issues in
parsing TAGs, rather we give the appropriate modifications that are
needed to the existing Earley-type parsing algorithm for TAGs due to
\newcite{Schabes88a}.
The algorithm relies on a tree traversal that scans the input string
from left to right while recognizing the application of the conjoin
operation. The nodes in the elementary trees are visited in a top-down
left to right manner (Fig.~\ref{fig:tree-traversal}). Each dot in
Fig.~\ref{fig:tree-traversal} divides the tree into a left context and
a right context, enabling the algorithm to scan the elementary tree
while trying to recognize possible applications of the conjoin
operation.
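To make the traversal concrete, here is a minimal sketch (our own illustration, not the algorithm of Schabes 1988 itself) of the top-down left-to-right dotted traversal; the tree encoding and names are ours:

```python
# A toy sketch of the dotted traversal: every node is visited twice,
# with the dot first to its left (descending) and then to its right
# (ascending), scanning children left to right in between. Each dot
# position splits the tree into a left and a right context.

def dotted_traversal(tree):
    """Yield (label, side) dot positions for a (label, children) tree."""
    label, children = tree
    yield (label, "left")
    for child in children:
        yield from dotted_traversal(child)
    yield (label, "right")

# a toy elementary tree: S -> NP VP, VP -> V NP
toy = ("S", [("NP", []), ("VP", [("V", []), ("NP", [])])])
order = list(dotted_traversal(toy))
```

Each of the five nodes contributes exactly two dot positions, giving the left-to-right scan order the parser follows.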
\begin{figure}[htbp]
\begin{center}
\leavevmode
\psfig{figure=tree-traversal.ps,scale=65}
\caption{Example of a tree traversal}
\label{fig:tree-traversal}
\end{center}
\end{figure}
\vspace{-0.12in}
The derived structure corresponding to a coordination is a composite
structure built by applying the conjoin operation to two elementary
trees and an instantiation of the coordination schema. The algorithm
never builds derived structures. It builds the derivation by visiting
the appropriate nodes during its tree traversal in the following order
(see Fig.~\ref{fig:recognize-conj}).
\[ 1\ 2 \cdots 3\ 4 \cdots 5\ 6 \cdots 2'\ 7' \cdots 3'\ 4' \cdots 5'\ 6' \cdots 7\ 8 \]
The algorithm must also compute the correct span of the string for the
nodes that have been identified via a contraction.
Fig.~\ref{fig:recognize-conj} gives the possible scenarios for the
position of nodes that have been linked by a contraction. When foot
nodes undergo contraction, the algorithm has to ensure that both the
foot nodes share the subtree pushed under them, e.g. $9 \cdots 10$ and
$9' \cdots 10'$ in Fig.~\ref{fig:recognize-conj}(a). Similarly, when
substitution nodes undergo contraction, the algorithm has to ensure
that the tree recognized by predicting a substitution is shared by
the nodes, e.g. $11 \cdots 12$ and $11' \cdots 12'$ in
Figs.~\ref{fig:recognize-conj}(b) and~\ref{fig:recognize-conj}(c). The
traversals $9 \cdots 10$ should span the same length of the input as
$9' \cdots 10'$, similarly for $11 \cdots 12$ and $11' \cdots 12'$.
The various positions for such traversals are shown in
Fig.~\ref{fig:recognize-conj}. A derivation is valid if the input
string is accepted and each node in a contraction spans a valid
substring in the input. The complete and formal description of the
parsing algorithm is given in~\cite{tr:coord}.
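The span bookkeeping can be sketched as follows (our own illustration; the span representation is an assumption, not taken from the algorithm's formal description):

```python
# A sketch of the span check described above: each node identified by
# a contraction records the (start, end) span of input it covers, and
# a derivation is valid only if paired nodes cover matching material
# (the shared subtree pushed under contracted foot or substitution
# nodes).

def spans_compatible(tokens, span_a, span_b):
    """Contracted nodes must share their subtree's material."""
    (i, j), (k, l) = span_a, span_b
    return tokens[i:j] == tokens[k:l]

tokens = "John gave Mary a book and Susan a flower".split()
ok = spans_compatible(tokens, (1, 2), (1, 2))    # shared anchor "gave"
bad = spans_compatible(tokens, (1, 2), (5, 6))   # "gave" vs "and"
```

Checking identical substrings is a stronger condition than the equal-length requirement stated above; for the shared-subtree case the two coincide.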
\begin{figure}[htbp]
\begin{center}
\leavevmode \psfig{figure=recognize-conj-mod.ps,scale=80}
\caption{Moving the dot while recognizing a conjoin operation}
\label{fig:recognize-conj}
\end{center}
\end{figure}
\section{Conclusion}
We have shown that an account for coordination can be given in a LTAG
while maintaining the notion of a derivation structure which is
central to the LTAG approach. We showed that fixed constituency can be
maintained at the level of the elementary tree while accounting for
cases of non-constituent coordination and gapping. We discussed the
construction of a practical parser for LTAG that can handle these
cases of coordination.
\section{Introduction}
In this paper, we shall be concerned with instanton
effects in the gauge theory which, in the context of ${\cal N}=4$
supersymmetric $\text{SU}(N)$ gauge theory, have a very precise relation to
D-instanton effects in the string theory
\cite{BGKR,lett,MO-III,Green:rev} in perfect accord with the AdS/CFT
correspondence \cite{MAL,AGMOO:rev}.
This relation can be seen in the dynamics of D-instantons and D3-branes in type IIB string
theory. The world volume theory of $N$ D3-branes is precisely
the ${\cal N}=4$ supersymmetric $\text{U}(N)$ gauge theory, in the low energy
decoupling limit where all energy scales are much smaller than
$(\alpha')^{-1/2}$ \cite{MO-III}. Instanton solutions in the gauge theory are identified
as D-instantons (or D$(-1)$-branes) bound to the D3-branes
\cite{Witten:1996gx,Douglas:1995bn,Douglas:1996uz,MO-III}. The
D-instantons are described by a `world volume' $\text{U}(k)$ matrix model,
for $k$ instantons, whose degrees-of-freedom correspond to the ADHM construction of
instantons. This theory is the dimensional reduction of
six-dimensional ${\cal N}=(1,0)$ supersymmetric gauge theory and has $\text{U}(k)$
adjoint and fundamental degrees-of-freedom, the latter arising from
strings stretched between the D-instantons and D3-branes. In the
decoupling limit this ADHM matrix model is strongly coupled and only
certain terms in the action survive. The resulting partition function
is identical to the measure on the ADHM moduli space weighted by the
instanton action constructed by purely field theoretical methods in
\cite{meas1,meas2,MO-III}. Remarkably, auxiliary scalars introduced
to bi-linearize a certain four fermionic collective coordinate
interaction are interpreted as the freedom for the D-instantons to be
ejected from the D3-branes: they need not be bound!
At large $N$, the ADHM matrix model can be approximated.
The idea is to first integrate out the variables
corresponding to D$(-1)$-D3 strings; the resulting measure is then
amenable to a standard saddle-point analysis. From the point-of-view of the
D-instantons the D3-branes disappear to leave a description of
D-instantons in an $AdS_5\times S^5$ background. Instantons are
therefore a perfect way to see the AdS/CFT correspondence in action.
It may seem surprising that the gauge theory instanton calculation is
done at large $N$ with $g^2N\ll1$ and yet we find perfect agreement
with the supergravity regime with $g^2N\gg1$. Apparently there is a
non-renormalization theorem at work, which is natural from the
supergravity side of the description \cite{Gopakumar:1999xh}, but has yet to be proved
from the gauge theory side.
Since instantons prove such a powerful tool it is natural to study them
in other gauge theories which may or may not have a known dual
description. This has been undertaken in the following conformal field
theories:
(i) ${\cal N}=4$ supersymmetric gauge theory with gauge groups $\text{O}(N)$ and
$\text{Sp}(N)$ \cite{spo}. In these theories the dual is known to be the
type IIB superstring on $AdS_5\times{\mathbb R}P^5$
\cite{Kakushadze:1998tr,Kakushadze:1998tz}.
(ii) ${\cal N}=2$ supersymmetric $\text{SU}(N)$ gauge theory with $2N$ fundamental
hypermultiplets \cite{N=2}. In this case the dual string theory is not known.
(iii) `Orbifold' theories which correspond to certain projections of
the ${\cal N}=4$ supersymmetric $\text{SU}(N)$ theory \cite{orbi} by a finite
subgroup $\Gamma$ of the $\text{SU}(4)$ $R$-symmetry group of the theory.
The resulting theories have
either ${\cal N}=0,1$ or 2 supersymmetries. In this case the duals are
known to be type IIB superstring theory on $AdS_5\times S^5/\Gamma$ \cite{Kachru:1998ys}.
There is another class of conformal ${\cal N}=2$ theories based on the gauge
group $\text{Sp}(N)$.\footnote{For us, $\text{Sp}(N)$ has rank $N$, so $\text{Sp}(1)\equiv\text{SU}(2)$, although
it is sometimes denoted $\text{Sp}(2N)$ or even $\text{USp}(2N)$.} These
theories have a second-rank anti-symmetric tensor
and four fundamental hypermultiplets and are conformal
for any $N$. This class of theories is thought to be equivalent to
type IIB theory on $AdS_5\times S^5/{\mathbb Z}_2$, where the ${\mathbb Z}_2$ action
fixes an $S^3\subset S^5$, with four D7-branes wrapped around the $S^3$
and filling $AdS_5$ \cite{Fayyazuddin:1998fb,Aharony:1998xz}.
What is particularly interesting about this case
is that, in common with the ${\cal N}=4$ theory,
there are detailed predictions for instanton contributions to
particular correlation functions \cite{Gutperle:1999xu} which merit investigation
from the gauge theory side. This paper sets up the formalism which
allows such a comparison. Near the completion of this paper, there
appeared a paper by Gava et al.~\cite{Gava:1999ky} which, as well as
setting up the same formalism leading to the large-$N$ instanton
measure, performs this comparison and finds perfect agreement with the
AdS/CFT correspondence.
Consider the gauge theory in more detail. It is useful to use a mock
${\cal N}=4$ notation for the vector multiplet and anti-symmetric tensor
hypermultiplet. So the Weyl fermions are labelled $\lambda^A$,
$A=1,\ldots,4$, where $A=1,3$ correspond to the vector multiplet, and are
therefore adjoint-valued, while $A=2,4$ correspond to the
hypermultiplet. All fields can be thought of as $2N\times2N$
dimensional hermitian matrices subject to
\begin{equation}
X=\pm JX^TJ\ ,
\end{equation}
where $J$ is the usual symplectic matrix
\begin{equation}
J=\begin{pmatrix}0 & 1_{\scriptscriptstyle[N]\times[N]}\\ -1_{\scriptscriptstyle[N]\times[N]} & 0\end{pmatrix}\ ,
\end{equation}
and the upper (lower) sign corresponds to the vector multiplet (anti-symmetric
tensor hypermultiplet). With this restriction we see that the adjoint
representation of $\text{Sp}(N)$ has dimension $N(2N+1)$ while the
anti-symmetric tensor representation has dimension $N(2N-1)$.
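As a quick sanity check (ours, not part of the original analysis), the two dimensions can be recovered numerically by counting the $\pm1$ eigenspaces of the involution $X\mapsto JX^TJ$ on hermitian $2N\times2N$ matrices:

```python
import numpy as np

def sp_dimensions(N):
    """Dimensions of the two eigenspaces of X -> J X^T J on hermitian
    2N x 2N matrices: X = +J X^T J (adjoint of Sp(N)) and
    X = -J X^T J (anti-symmetric tensor representation)."""
    n = 2 * N
    J = np.block([[np.zeros((N, N)), np.eye(N)],
                  [-np.eye(N), np.zeros((N, N))]])
    T = lambda X: J @ X.T @ J       # an involution, since J^2 = -1
    trace = 0.0
    for i in range(n):              # orthogonal real basis of hermitians
        for j in range(i, n):
            if i == j:
                E = np.zeros((n, n), dtype=complex)
                E[i, i] = 1
                mats = [E]
            else:
                E = np.zeros((n, n), dtype=complex)
                E[i, j] = E[j, i] = 1
                F = np.zeros((n, n), dtype=complex)
                F[i, j], F[j, i] = 1j, -1j
                mats = [E, F]
            for B in mats:          # basis-by-basis trace of T
                trace += (np.trace(B @ T(B)).real
                          / np.trace(B @ B).real)
    plus = round((n * n + trace) / 2)
    minus = round((n * n - trace) / 2)
    return plus, minus
```

Since $T$ squares to the identity, the eigenspace dimensions follow from its trace, and they reproduce $N(2N+1)$ and $N(2N-1)$ for every $N$.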
In addition, we must add four fundamental hypermultiplets consisting
of eight ${\cal N}=1$ chiral superfields transforming with respect to an
$\text{O}(8)$ flavour symmetry.
The theory has vanishing beta function for any $N$; when $N=1$ only
the fundamental hypermultiplets remain and the theory is
equivalent to the conformal $\text{SU}(2)$ theory with four fundamental hypermultiplets.
\section{The Instanton Moduli Space and Measure}
The ADHM construction of instanton solutions \cite{ADHM} for gauge group $\text{Sp}(N)$
is described in \cite{Corrigan:1978ce}. However, rather than use the
language of quaternions, we find it more in keeping with the string
theory connection to think of $\text{Sp}(N)$ as embedded in $\text{SU}(2N)$, and
use the ADHM construction of the latter suitably restricted. For a
review of the ADHM construction for unitary groups see \cite{KMS,MO-III}.
The instanton solution at charge $k$ in $\text{SU}(2N)$ is
described by a $(2N+2k)\times2k$ dimensional matrix $a$, and its
conjugate, with the form
\begin{equation}
a=\begin{pmatrix} w_{\dot\alpha} \\ a'_{\alpha{\dot\alpha}} \end{pmatrix}\ ,\qquad
\bar a=\begin{pmatrix} \bar w^{\dot\alpha} & \bar a^{\prime{\dot\alpha}\alpha} \end{pmatrix}\ ,
\end{equation}
where $w_{\dot\alpha}$ is a (spacetime) Weyl-spinor-valued $2N\times k$ matrix and
$a'_{\alpha{\dot\alpha}}$ is a (spacetime) vector-valued $k\times k$
matrix. The conjugates are defined as
\begin{equation}
\bar w^{\dot\alpha}\equiv(w_{\dot\alpha})^\dagger\ ,\qquad \bar
a^{\prime{\dot\alpha}\alpha}\equiv(a'_{\alpha{\dot\alpha}})^\dagger\ .
\end{equation}
The explicit $\text{SO}(4)$ vector components of $a'$ are defined via
$a'_{\alpha{\dot\alpha}}=a'_n\sigma^n_{\alpha{\dot\alpha}}$ and the matrices $a'_n$ are
restricted to be hermitian: $(a'_n)^\dagger=a'_n$.
Up till now the discussion is valid for gauge group $\text{SU}(2N)$;
however, in order to describe gauge group $\text{Sp}(N)$ we need only
subject the variables to the additional conditions:
\begin{equation}
\bar w^{\dot\alpha}=
\epsilon^{{\dot\alpha}{\dot\beta}}(w_{\dot\beta})^TJ\ ,\qquad
(a'_{\alpha{\dot\alpha}})^T=a'_{\alpha{\dot\alpha}}\ .
\end{equation}
The remaining $2k(2N+k+1)$ variables are still an over-parametrization of
the instanton moduli space, which is obtained by a hyper-K\"ahler
quotient construction. One first imposes the ADHM constraints:
\begin{equation}
D^{\dot\alpha}_{\ {\dot\beta}}\equiv\bar w^{\dot\alpha} w_{\dot\beta}+\bar a^{\prime{\dot\alpha}\alpha}a'_{\alpha{\dot\beta}}=\lambda
\delta^{{\dot\alpha}}_{\ {\dot\beta}}\ ,
\label{badhm}\end{equation}
where $\lambda$ is an arbitrary constant. The ADHM moduli space is then
identified with the space of $a$'s subject to \eqref{badhm} modulo the
action of an $\text{O}(k)$ symmetry which acts on the instanton indices of
the variables as follows
\begin{equation}
w_{\dot\alpha}\rightarrow w_{\dot\alpha} U\ ,\qquad a'_{\alpha{\dot\alpha}}\rightarrow
Ua'_{\alpha{\dot\alpha}}U^T\ ,\quad U\in\text{O}(k).
\end{equation}
Since there are $3k(k-1)/2$ constraints \eqref{badhm} and $\text{O}(k)$ has
dimension $k(k-1)/2$ the dimension of the ADHM moduli space is $4k(N+1)$.
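The counting can be checked with elementary bookkeeping (our own, with the symmetry assignments read off from the reality conditions above):

```python
# Real-parameter count for the Sp(N) ADHM data: after the reality
# condition, w contributes 4Nk real parameters; each of the four
# hermitian, symmetric matrices a'_n contributes k(k+1)/2; the ADHM
# constraints remove 3k(k-1)/2 and the O(k) quotient a further
# k(k-1)/2, reproducing the dimension 4k(N+1) quoted in the text.

def adhm_dim(N, k):
    params = 4 * N * k + 4 * (k * (k + 1) // 2)
    constraints = 3 * (k * (k - 1) // 2)
    gauge = k * (k - 1) // 2
    return params - constraints - gauge
```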
The final piece of the story is the explicit construction of the
self-dual gauge field itself. To this end, we define the matrix
\begin{equation}
\Delta(x)=\begin{pmatrix}w_{\dot\alpha} \\ x_{\alpha{\dot\alpha}}1_{\scriptscriptstyle[k]\times[k]}+
a'_{\alpha{\dot\alpha}}\end{pmatrix}\ ,
\end{equation}
where $x_{\alpha{\dot\alpha}}$, or equivalently $x_n$, is a point in spacetime.
For generic $x$, the $(2N+2k)\times2N$ dimensional complex-valued matrix $U(x)$
is a basis for ${\rm ker}(\bar\Delta)$:
\begin{equation}
\bar\Delta U= 0 = \bar U\Delta\ ,
\label{uan}\end{equation}
where $U$ is orthonormalized according to
\begin{equation}
\bar U U = \ 1_{{\scriptscriptstyle[2N]}\times{\scriptscriptstyle [2N]}}\,.
\label{udef}\end{equation}
The self-dual gauge field is then simply
\begin{equation}v_n =
\bar U \partial_{n}U\ .
\label{vdef}\end{equation}
It is straightforward to show that $v_n$ is anti-hermitian and
satisfies $v_n=J(v_n)^TJ$ and so is valued in the Lie algebra of $\text{Sp}(N)$.
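A numerical sanity check (ours) of these ingredients for the simplest one-instanton data: we take $k=1$, $2N=2$, $a'=0$ and $w_{\dot\alpha}$ proportional to the identity (instanton size $\rho$), and verify that $\bar\Delta\Delta$ is proportional to $1_{[2]\times[2]}$ and that an orthonormal kernel basis $U$ of $\bar\Delta$ exists. The variable names and the choice $\sigma^n=(1,i\tau^c)$ are ours:

```python
import numpy as np

rho = 1.3
x = np.array([0.7, -0.2, 0.5, 1.1])        # a generic spacetime point
tau = [np.array([[0, 1], [1, 0]], dtype=complex),
       np.array([[0, -1j], [1j, 0]]),
       np.array([[1, 0], [0, -1]], dtype=complex)]
sigma = [np.eye(2, dtype=complex)] + [1j * t for t in tau]
xs = sum(x[n] * sigma[n] for n in range(4))  # x_{alpha dot-alpha}
w = rho * np.eye(2)                          # one-instanton data, a' = 0

Delta = np.vstack([w, xs])                   # (2N+2k) x 2k = 4 x 2
Dbar = Delta.conj().T
f_inv = Dbar @ Delta                         # should be prop. to 1_{2x2}

# orthonormal basis of ker(Dbar): columns of V with zero singular value
_, _, Vh = np.linalg.svd(Dbar)
U = Vh.conj().T[:, 2:]                       # 4 x 2, with Dbar @ U = 0
```

The proportionality constant comes out as $\rho^2+x\cdot x$, the familiar factorization underlying self-duality, and $U$ is automatically orthonormal since it consists of columns of a unitary matrix.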
The fermions in the background of the instanton lead to new Grassmann
collective coordinates. Associated to the fermions in the adjoint and
anti-symmetric tensor representations, $\lambda^A$, these collective
coordinates are described by the $(2N+2k)\times k$ matrices ${\cal M}^A$,
and their conjugates
\begin{equation}
{\cal M}^A=\begin{pmatrix} \mu^A \\ {\cal M}^{\prime A}_\alpha \end{pmatrix}\
,\qquad
\bar{\cal M}^A=\begin{pmatrix} \bar\mu^A & \bar{\cal M}^{\prime\alpha A} \end{pmatrix}\ ,
\end{equation}
where $\mu^A$ are $2N\times k$ matrices and ${\cal M}^{\prime A}_\alpha$ are Weyl-spinor-valued
$k\times k$ matrices. The conjugates are defined by
\begin{equation}
\bar\mu^A\equiv(\mu^A)^\dagger\ ,\qquad
\bar{\cal M}^{\prime\alpha A}\equiv({\cal M}^{\prime A}_\alpha)^\dagger\ ,
\end{equation}
and the constraint $\bar{\cal M}^{\prime A}_\alpha={\cal M}^{\prime A}_\alpha$ is
imposed.
Up till now the construction of the fermions is valid for the gauge
group $\text{SU}(2N)$. For $\text{Sp}(N)$ all we need do is impose the additional conditions
\begin{equation}
\bar\mu^A=(-1)^{A+1}(\mu^A)^TJ\ ,\qquad
\qquad({\cal M}^{\prime A}_\alpha)^T=(-1)^{A+1}{\cal M}^{\prime A}_\alpha\ .
\label{frest}\end{equation}
So, for instance, the matrices ${\cal M}^{\prime A}_\alpha$ are symmetric,
for the vector representation, and anti-symmetric, for the
anti-symmetric tensor representation. These symmetry properties can be
deduced from the ADHM tensor product construction in \cite{Corrigan:1979xi}.
Notice that the different
conditions on the collective coordinates break the $\text{SU}(4)$ symmetry
of the mock ${\cal N}=4$ notation to $\text{SU}(2)\times\text{U}(1)$, the true
$R$-symmetry of the ${\cal N}=2$ theory.
These fermionic collective coordinates are subject to analogues of the
ADHM constraints \eqref{badhm}:
\begin{equation}
\lambda^A_{\dot\alpha}\equiv\bar w_{\dot\alpha}\mu^A+\bar\mu^Aw_{\dot\alpha}+[{\cal M}^{\prime\alpha
A},a'_{\alpha{\dot\alpha}}]=0\ .
\label{fadhm}\end{equation}
One can check that there are consequently $2k(N+1)$ and $2k(N-1)$
fermionic collective coordinates corresponding to $\lambda^A$ for
the vector and anti-symmetric tensor representations, respectively.
This is in accord with the index
theorem that counts the zero eigenvalue solutions to the Dirac
equation in the background of the instanton. The quantities
\begin{equation}
\xi_\alpha^A=k^{-1}{\rm tr}_k\,{\cal M}^{\prime A}_\alpha\ ,
\qquad \bar\eta^{{\dot\alpha} A}=k^{-1}{\rm tr}_k\,\bar w^{\dot\alpha}\mu^A\ ,
\end{equation}
in the vector multiplet, $A=1,3$, are the collective coordinates
corresponding to the four supersymmetric and four
superconformal zero modes.
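These counts can be reproduced by elementary bookkeeping (ours; the split of constraint symmetries is our inference from the reality conditions \eqref{frest}):

```python
# Grassmann parameter count per fermion species lambda^A. After the
# reality conditions, mu^A carries 2Nk independent components; the
# k x k spinor matrix M'^A_alpha is symmetric for the vector multiplet
# (A = 1,3) and anti-symmetric for the anti-symmetric tensor (A = 2,4);
# the fermionic ADHM constraints remove an anti-symmetric (resp.
# symmetric) k x k matrix for each of the two dotted-spinor indices.

def fermion_modes(N, k, vector_multiplet):
    sym = k * (k + 1) // 2      # independent entries, symmetric k x k
    asym = k * (k - 1) // 2     # independent entries, anti-symmetric
    if vector_multiplet:
        return 2 * N * k + 2 * sym - 2 * asym    # = 2k(N+1)
    return 2 * N * k + 2 * asym - 2 * sym        # = 2k(N-1)
```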
Finally we have the fundamental hypermultiplets. They contribute
additional fermionic collective coordinates which can be described by
a $2N\times8$ dimensional matrix ${\cal K}$. These variables are not subject
to any additional ADHM-type constraints. There are no additional bosonic
collective coordinates associated to any of the scalar fields in the theory.
At lowest order in $g$, the instanton action, which is the pull-back
of the action to the instanton solution, is
\begin{equation}
S^k_{\text{inst}}={8\pi^2 k\over g^2}+S^k_{\text{quad}}\ ,
\end{equation}
where $S^k_{\text{quad}}$ is a certain four-fermion interaction which is
induced from the Yukawa interactions of the theory via tree-level
scalar exchange. This is where the ${\cal N}=4$ labelling comes in handy:
the interaction induced by the Yukawa couplings between the vector
multiplet and anti-symmetric tensor hypermultiplet
is precisely what would arise in the
${\cal N}=4$ theory but with the variables restricted as in
\eqref{frest}. In addition, there
is a coupling between a bi-linear in the vector fermionic variables and the
hypermultiplet fermionic variables as in \cite{Dorey:1996bf} arising
from the Yukawa couplings between the vector multiplet and fundamental hypermultiplets:
\begin{equation}
S^k_{\text{quad}}={\pi^2\over
2g^2}{\rm tr}_k\big[\epsilon_{ABCD}\bar{\cal M}^A{\cal M}^B\BL^{-1}\bar{\cal M}^C{\cal M}^D-
{\cal K}\K^T\BL^{-1}(\bar{\cal M}^1{\cal M}^3-\bar{\cal M}^3{\cal M}^1)\big]\ .
\label{qact}\end{equation}
Here, $\BL$ is the following operator on $k\times k$
matrices:
\begin{equation}
\BL\cdot\Omega=\tfrac12\{\Omega, W^0\}+[a'_n,[a'_n,\Omega]]\ ,
\end{equation}
where $W^0\equiv\bar w^{\dot\alpha} w_{\dot\alpha}$.
Notice that the fundamental fermionic collective coordinates are only
coupled to the adjoint fermionic collective coordinates since there
are no Yukawa couplings between the former and the anti-symmetric
fermionic collective coordinates.
Notice that the pattern of lifting of fermionic zero modes
in the fermion quadrilinear \eqref{qact}
leads to a selection rule on the insertions of fermion modes. Suppose the insertions
in a correlator involve $n_{\rm adj}$ adjoint, $n_{\rm ast}$
antisymmetric tensor and $n_{\rm f}$ fundamental fermionic collective
coordinates, then for a non-trivial instanton contribution we need
\begin{equation}
n_{\rm adj}=n_{\rm ast}+n_{\rm f}\ ,
\end{equation}
subject to $n_{\rm adj}\leq4k(N+1)$, $n_{\rm ast}\leq4k(N-1)$ and
$n_{\rm f}\leq8k$.
In particular, since the action does not lift the 8 exact supersymmetric and
superconformal zero-modes we have $n_{\rm adj}\geq8$.
Later we shall find it essential to bi-linearize this quadrilinear by
introducing bosonic variables $\chi_{AB}\equiv-\chi_{BA}$, six $k\times k$
matrices, subject to the reality condition
\begin{equation}
\bar\chi^{AB}\equiv(\chi_{AB})^\dagger=\tfrac12\epsilon^{ABCD}\chi_{CD}\ .
\end{equation}
We can write $\chi_{AB}$ as an explicit $\text{SO}(6)$ vector $\chi_a$
with components
\begin{equation}\begin{split}
&\chi_1=\sqrt2(\chi_{12}+\chi_{34})\ ,\quad\chi_2=i\sqrt2(\chi_{12}-\chi_{34})\
,\quad\chi_3=\sqrt2(\chi_{14}+\chi_{23})\ ,\\
&\chi_4=i\sqrt2(\chi_{14}-\chi_{23})\ ,\quad\chi_5=i\sqrt2(\chi_{13}-\chi_{24})\
,\quad\chi_6=\sqrt2(\chi_{13}+\chi_{24})\ ,
\end{split}\end{equation}
normalized so that $\chi_a\chi_a\equiv\epsilon^{ABCD}\chi_{AB}\chi_{CD}$.
Taking into account the following symmetry properties
\begin{equation}
(\bar{\cal M}^{[A}{\cal M}^{B]})^T=(-1)^{A+B+1}\bar{\cal M}^{[A}{\cal M}^{B]}\ ,\qquad
({\cal K}\K^T)^T=-{\cal K}\K^T\ ,
\end{equation}
the auxiliary variables $\chi_{AB}$ are subject in addition to
\begin{equation}
\chi_{AB}=(-1)^{A+B+1}(\chi_{AB})^T\ .
\end{equation}
So $\chi_a$, $a=1,\ldots,4$, are symmetric and $\chi_a$, $a=5,6$, are anti-symmetric.
These conditions obviously only respect a
$\text{SU}(2)\times\text{U}(1)$ subgroup of $\text{SU}(4)$, the $R$-symmetry
of the ${\cal N}=2$ theory. The transformation we want is then
\begin{equation}
e^{-S^k_{\text{quad}}}=(\text{det}_s\BL)^2(\text{det}_a\BL)
\int d\chi\,\exp\big[-\epsilon^{ABCD}{\rm tr}_k\,\chi_{AB}\BL\chi_{CD}
+4\pi ig^{-1}{\rm tr}_k\,\chi_{AB}\Lambda^{AB}\big]\ ,
\label{aact}\end{equation}
where $\text{det}_s\BL$ and $\text{det}_{a}\BL$ are the determinants
of $\BL$ evaluated on a
basis of symmetric and anti-symmetric matrices, respectively, and we
have defined the fermion bi-linear
\begin{equation}
\Lambda^{AB}={1\over2\sqrt2}\bar{\cal M}^{A}{\cal M}^{B}-
{1\over4\sqrt2}\delta^{A2}\delta^{B4}{\cal K}\K^T\ .
\end{equation}
The determinant factors in \eqref{aact} are conveniently cancelled by
factors that arise from integrating-out pseudo collective coordinates
corresponding to the scalars from the vector and anti-symmetric tensor
multiplets (see \cite{MO-III}).
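The transformation \eqref{aact} is the standard auxiliary-field (Hubbard--Stratonovich) trick. As a one-variable sketch (our simplification, with a single coupling $t$ standing in for $\BL$ and $\Lambda$ a fermion bilinear):
\begin{equation}
e^{\Lambda^2/4t}=\sqrt{\frac{t}{\pi}}\int_{-\infty}^{\infty}d\chi\,
e^{-t\chi^2+\chi\Lambda}\ .
\end{equation}
Because $\Lambda$ is a fermion bilinear, both sides truncate at finite order in the Grassmann variables; in the matrix case the Gaussian normalization produces precisely the determinant prefactors $(\text{det}_s\BL)^2(\text{det}_a\BL)$ appearing in \eqref{aact}.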
The measure on the ADHM moduli space can be deduced from \cite{meas1,meas2,MO-III}
and is simply the `flat' Cartesian measure for all the ADHM variables,
including the auxiliary variables $\chi$, with the ADHM constraints \eqref{badhm} and
\eqref{fadhm} imposed by explicit delta functions, weighted by the
exponential of the action on the right-hand side of \eqref{aact}.
Up to an overall normalization factor
\begin{equation}\begin{split}
\int d\mu^k_{\rm phys}\,e^{-S^k_{\rm inst}}&
={1\over\text{Vol}\,\text{O}(k)}\int da'\,dw\,d\chi\,d{\cal M}'\,d\mu\,d{\cal K}\,\\
&\times \delta(D^{\dot\alpha}_{\ {\dot\beta}})\,\delta(\lambda^A_{\dot\alpha})
\exp\big[-\epsilon^{ABCD}{\rm tr}_k\,\chi_{AB}\BL\chi_{CD}
+4\pi ig^{-1}{\rm tr}_k\,\chi_{AB}\Lambda^{AB}\big]\ ,
\label{meas}\end{split}\end{equation}
where the delta functions impose the
bosonic and fermionic ADHM constraints \eqref{badhm} and \eqref{fadhm}.
As in the ${\cal N}=4$ case, this measure has the natural interpretation in
terms of branes. First of all the ${\cal N}=2$ gauge theory can be described
by the low energy limit of a set of $N$ D3-branes moving tangent to
one of the four orientifold O7-planes \cite{Banks:1996nj,Douglas:1997js} in the type
IIB orientifold of Sen \cite{Sen:1996vd}. In this construction there are four D7-branes
on top of each of the O7-planes. The gauge theory on the world volume
of the D3-branes is precisely our ${\cal N}=2$
supersymmetric $\text{Sp}(N)$ gauge theory, with an anti-symmetric tensor
hypermultiplet and four fundamental hypermultiplets. The latter fields
arise from strings stretched between the D3- and the four D7-branes.
Now on top of this construction
we include $k$ D-instantons. The matrix theory on the `world volume'
of the D-instantons will reproduce the
ADHM construction and measure described above. In order to see this,
it is useful to start from D-instantons in flat ten-dimensional space
which are described by the dimensional reduction of ${\cal N}=1$
supersymmetric gauge theory in ten dimensions. The Lorentz group in
ten-dimensional Euclidean space is broken to
$H=\text{SO}(4)_1\times\text{SO}(4)_2\times\text{SO}(2)\subset\text{SO}(10)$, where the first
factor corresponds to the directions along the D3-brane world-volume,
the second factor to the directions in the D7/O7 world-volume
orthogonal to the D3-branes and finally the last factor corresponds to
the two directions orthogonal to the D7/O7 world volume.
The components of the ten-dimensional gauge field decompose as
\begin{equation}
{\bf10}\rightarrow({\bf4},{\bf1},{\bf1})^s\oplus({\bf1},{\bf4},{\bf1})^s
\oplus({\bf1},{\bf1},{\bf2})^a
\label{dvec}\end{equation}
corresponding in the ADHM construction
to $a'_n$, $\chi_a$, $a=1,\ldots,4$ and $\chi_a$,
$a=5,6$, respectively. The superscripts in \eqref{dvec} label the
projections on the $\text{U}(2N)$ valued variables
imposed in the orientifold background, where $s$ means symmetric and $a$
means anti-symmetric.
In order to describe the fermions we consider
the covering group of $H$, $\bar H=\text{SU}(2)_{L_1}\times\text{SU}(2)_{R_1}\times
\text{SU}(2)_{L_2}\times\text{SU}(2)_{R_2}\times\text{U}(1)$. The Majorana-Weyl fermion
of the ten-dimensional theory decomposes as
\begin{equation}
{\bf16}\rightarrow({\bf2},{\bf1},{\bf2},{\bf1})^s_1
\oplus({\bf2},{\bf1},{\bf1},{\bf2})^{a}_1\oplus({\bf1},{\bf2},{\bf2},{\bf1})^s_{-1}\oplus
({\bf1},{\bf2},{\bf1},{\bf2})^{a}_{-1}
\end{equation}
under $\bar H$. Again the superscripts indicate the projection imposed
by the orientifold background. The first two components are identified with the
ADHM variables ${\cal M}^{\prime A}_\alpha$, for $A=1,3$ and $2,4$, respectively,
and the latter two with
some additional variables $\lambda^{\dot\alpha}_A$ whose r\^ole will emerge
shortly. Notice that the subgroup $\text{SU}(2)_{L_2}\times\text{U}(1)\subset\bar
H$ is identified with the $R$-symmetry of the original ${\cal N}=2$ gauge theory.
The remaining ADHM variables correspond to the presence of the D3- and
D7-branes. For instance $w_{\dot\alpha}$ and $\mu^A$ are associated to strings stretched
between the D-instantons and the D3-branes and finally the variables ${\cal K}$ are
associated to strings stretched between the D-instantons and D7-branes
(there are no bosonic variables associated to these strings). We shall not
write down the full action for this theory, but it is a simple
generalization to include ${\cal K}$ of that for the ${\cal N}=4$ theory written
down in \cite{MO-III}. However, as explained in \cite{MO-III} it is useful to
introduce additional bosonic auxiliary $k\times k$ matrices $D_{\ {\dot\beta}}^{\dot\alpha}$
transforming in a triplet of $\text{SU}(2)_{R_1}$, and in the present
context anti-symmetric, whose significance will
emerge shortly. Now in the decoupling limit $\alpha'\rightarrow0$, the
coupling constant of the D-instanton `world volume' matrix theory
goes to infinity which effectively
removes certain terms from the action. It is straightforward to show
that the remaining action is precisely that on the right-hand side of
\eqref{meas} and the variables $D_{\ {\dot\beta}}^{\dot\alpha}$ and $\lambda^{\dot\alpha}_A$ act as Lagrange
multipliers that impose the bosonic and fermionic ADHM
constraints \eqref{badhm} and \eqref{fadhm}, respectively.
\section{The Large-$N$ Measure}
We now want to find an approximation of the ADHM measure valid
in the large-$N$ limit. The procedure is by now well documented
\cite{MO-III,N=2,orbi}.
First of all it is advantageous to write the measure in terms of a set of
gauge invariant variables defined in terms of a $2k\times 2k$
dimensional matrix
\begin{equation}
W^{\dot\alpha}_{\ {\dot\beta}}=\bar w^{\dot\alpha} w_{\dot\beta}\ .
\end{equation}
From this we define the following four $k\times k$ matrices
\begin{equation}
W^0=\bar w^{\dot\alpha} w_{\dot\alpha}\ ,\qquad W^c=(\tau^c)^{\dot\beta}_{\ {\dot\alpha}}\bar w^{\dot\alpha} w_{\dot\beta}\ .
\end{equation}
The matrix $W^0$ is symmetric while the three matrices
$W^c$ are anti-symmetric.
We now change variables from $w_{\dot\alpha}$ to the gauge invariant variables
above, together with the gauge transformations acting on the solution, which
parameterize a coset. Since the measure is used to integrate gauge
invariant quantities, the integral over the gauge coset simply yields a
volume factor. Up to numerical factors\footnote{For later use, the
numerical pre-factor goes like $N^{-2kN+k^2-k/2}$ at large $N$.}
and for $N\geq k$ we have
\begin{equation}
\int_{\text{coset}}dw\sim
\int(\text{det}_{2k}W)^{N-k^2/2+1/4}\,dW^0\prod_{c=1,2,3}
dW^c\ .
\label{wcv}\end{equation}
In terms of the gauge invariant coordinates the ADHM constraints
\eqref{badhm} are linear:
\begin{equation}
W^c=i\bar\eta^c_{nm}[a'_n,a'_m]\ ,
\end{equation}
where $\bar\eta^c_{nm}$ is a 't Hooft eta-symbol defined in
\cite{MO-III}, and hence can be trivially integrated out.
On the fermionic side we need to integrate out the superpartners of the
gauge degrees-of-freedom. To isolate these variables we expand
\begin{equation}
\mu^A=w_{{\dot\alpha}}\zeta^{{\dot\alpha} A}+w_{\dot\alpha}\sigma^{{\dot\alpha} A}+\nu^A\ ,
\label{E44}\end{equation}
where the $\nu^A$ modes---the ones we need to integrate out---form a
basis for the kernel of $\bar w^{\dot\alpha}$, i.e.~$\bar w^{\dot\alpha}\nu^A=0$.
The remaining variables are chosen to have the symmetry properties
\begin{equation}
(\zeta^{{\dot\alpha} A})^T=(-1)^{A+1}\zeta^{{\dot\alpha} A}\ ,\qquad
(\sigma^{{\dot\alpha} A})^T=(-1)^A\sigma^{{\dot\alpha} A}\ .
\end{equation}
Notice that the $\nu^A$ variables decouple from the fermionic
ADHM constraints \eqref{fadhm}. The change of variables from $\mu^A$ to
$\{\zeta^A,\sigma^A,\nu^A\}$ involves a Jacobian:
\begin{equation}
\int\, \prod_{A=1,2,3,4}d\mu^A=\int \left(\text{det}_{2k}W\right)^{-2k}\,\prod_{A=1,2,3,4}
d\zeta^A\,d\sigma^A\,d\nu^A\ .
\end{equation}
We can now integrate out the $\nu^A$ variables:
\begin{equation}
\int\, \prod_{A=1,2,3,4} d\nu^A\,
\exp\big[\sqrt8\pi ig^{-1}
{\rm tr}_k\,\chi_{AB}\bar\nu^A\nu^B\big]=2^{6k(N-k)}
(\pi/g)^{4k(N-k)}
\left({\rm det}_{4k}\chi\right)^{N-k}\ .
\label{E55}\end{equation}
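This is an application of the Grassmann Gaussian integral, quoted here for orientation: for independent Grassmann variables $\bar\nu_i,\nu_j$ and an $n\times n$ matrix $A$,
\begin{equation}
\int\prod_{i=1}^{n}d\bar\nu_i\,d\nu_i\,
e^{\bar\nu_iA_{ij}\nu_j}=\det A\ .
\end{equation}
Schematically, the reality conditions pair up the $2(N-k)$ gauge-index values carried by the $\nu^A$, giving one factor of $\text{det}_{4k}\chi$ per pair and hence the power $N-k$ in \eqref{E55}.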
Furthermore, the fermionic ADHM constraints \eqref{fadhm}
can then be used to integrate-out the variables $\sigma^{{\dot\alpha} A}$.
The resulting expression is rather cumbersome and is, in any case, not
needed at this stage. Fortunately, in the large $N$ limit it
simplifies considerably.
We now proceed to take a large-$N$ limit of the measure using a
steepest descent method. As explained
in \cite{MO-III} it is useful to scale $\chi_{AB}\rightarrow\sqrt
N\chi_{AB}$ because this exposes the three terms which contribute to the
large-$N$ `saddle-point' action:
\begin{equation}S_{\rm
eff}=-\log{\rm det}_{2k}W-\log{\rm det}_{4k}\chi+{\rm
tr}_k\,\chi_a\BL\chi_a\,.
\label{E57}\end{equation}
The first and second terms come from \eqref{wcv} and \eqref{E55} while the
final term comes from \eqref{meas}.
We can now perform a steepest descent approximation of the measure
which involves finding the minima of the effective action with
respect to the variables $\chi_{AB}$, $W^0$ and $a'_n$. The resulting
saddle-point equations are\footnote{As usual we shall frequently
swap between the two representations $\chi_a$ and $\chi_{AB}$ for $\text{SO}(6)$ vectors.}
\begin{equation}
\begin{split}
\epsilon^{ABCD}\left(\BL\cdot\chi_{AB}\right) \chi_{CE}\ &=\
\tfrac12\delta^D_E \,1_{\scriptscriptstyle [k]\times[k]}\ ,\\
\chi_a\chi_a\ &=\ \tfrac12(W^{-1})^0\ ,\\
[\chi_a,[\chi_a,a'_n]]\ &=\ i\bar\eta^c_{nm}[a'_m,(W^{-1})^c]\ ,
\label{E60}
\end{split}
\end{equation}
where we have introduced the matrices
\begin{equation}
(W^{-1})^0=
{\rm tr}_2\,W^{-1},\qquad
(W^{-1})^c=
{\rm tr}_2\,\tau^cW^{-1}\ .
\label{E60.1}\end{equation}
We note that the expression for the effective action \eqref{E57}
and the saddle point equations \eqref{E60}
look identical to those derived in \cite{MO-III}
for the ${\cal N}=4$ theory, however, the difference is the symmetry
properties of the variables $\{a'_n,W^0,\chi_a\}$ which will have to
be taken into account.
As in the ${\cal N}=4$ case we look for a solution with $W^c=0$,
$c=1,2,3$, which means that the instantons are embedded in mutually commuting
$\text{SU}(2)$ subgroups of the gauge group. In this case
equations \eqref{E60} are equivalent to
\begin{equation}
[a'_n,a'_m]=[a'_n,\chi_a]=[\chi_a,\chi_b]=0\ ,\quad
W^0=\tfrac12(\chi_a\chi_a)^{-1}\ .\label{echo}
\end{equation}
The final equation can be viewed as giving the value of $W^0$, whose
eigenvalues are the
instanton sizes at the saddle-point. Clearly $\chi_a\chi_a$ and
$W^0$ must be non-degenerate.
The solution space
to these equations (up to the auxiliary $\text{O}(k)$ symmetry)
is composed of distinct branches. On the first branch
\begin{equation}\begin{split}
a'_n&={\rm diag}\big(\,-X^1_n\,,\,\ldots\,,\,-X^k_n\,\big)\ ,
\\
\chi_a&=\begin{cases}{\rm
diag}\big(\rho_1^{-1}\hat\Theta_a^{1}\,,\,\ldots\,,\,\rho_k^{-1}\hat\Theta_a^k\big)
& a=1,\ldots,4\ ,\\ 0 & a=5,6\ .\end{cases}
\label{firb}
\end{split}\end{equation}
where $\hat\Theta_a$ is a unit four-vector, i.e.~$\hat\Theta_a\equiv0$ for
$a=5,6$. A second branch of solutions only exists for even instanton
number $2k$ and is, in block form,
\begin{equation}\begin{split}
a'_n&={\rm diag}\big(\,-X^1_n\,,\,\ldots\,,\,-X^k_n\,\big)
\otimes1_{\scriptscriptstyle[2]\times[2]}\ ,\\
\chi_a&=
{\rm diag}\big(\rho_1^{-1}\hat\Omega_a^{1}\,,\,\ldots\,,\,\rho_k^{-1}\hat\Omega_a^k\big)\otimes
\begin{cases}1_{\scriptscriptstyle[2]\times[2]} & a=1,\ldots,4\ ,\\ \tau^2 & a=5,6\ ,\end{cases}
\label{secb}
\end{split}\end{equation}
where $\hat\Omega_a$ is a unit six-vector. Notice that the sign of $\hat\Omega_a^i$,
$a=5,6$, can be reversed by the $\text{O}(2k)$ transformation $\tau^1$, so
physically $\hat\Omega_a^i$ is a coordinate on the orbifold $S^5/{\mathbb Z}_2$.
In these solutions $X_n^i$ are the
positions of the individual instantons in ${\mathbb R}^4$ and $\rho_i$
are their scale sizes.
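As a quick numerical sanity check (illustrative only, with randomly chosen moduli — not part of the derivation), one can verify that the first-branch matrices \eqref{firb} indeed satisfy the commutation relations and the $W^0$ equation of \eqref{echo}:

```python
# Verify that the diagonal first-branch matrices mutually commute and
# that W0 = (chi_a chi_a)^{-1} / 2 has the instanton sizes as eigenvalues.
import numpy as np

rng = np.random.default_rng(0)
k = 3                                   # instanton number (illustrative)
X = rng.normal(size=(4, k))             # positions X^i_n
rho = rng.uniform(0.5, 2.0, size=k)     # scale sizes rho_i
Theta = rng.normal(size=(4, k))
Theta /= np.linalg.norm(Theta, axis=0)  # unit four-vectors hat{Theta}^i_a

a_prime = [np.diag(-X[n]) for n in range(4)]
chi = [np.diag(Theta[a] / rho) for a in range(4)] + [np.zeros((k, k))] * 2

mats = a_prime + chi
comms = max(np.abs(A @ B - B @ A).max() for A in mats for B in mats)
assert comms < 1e-12                    # [a'_n,a'_m]=[a'_n,chi_a]=[chi_a,chi_b]=0

chi2 = sum(c @ c for c in chi)          # chi_a chi_a = diag(rho_i^{-2})
W0 = 0.5 * np.linalg.inv(chi2)
assert np.allclose(np.diag(W0), 0.5 * rho**2)   # eigenvalues set by rho_i
```

Since all matrices are simultaneously diagonal, the commutators vanish identically and $\chi_a\chi_a={\rm diag}(\rho_i^{-2})$ is manifestly non-degenerate, as required below \eqref{echo}.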
The first branch can be interpreted as parameterizing, via
$\{X^i_n,\rho_i,\hat\Theta_a^i\}$, the positions of
$k$ objects, the D-instantons, moving in $AdS_5\times S^3$,
where the $S^3$ is precisely the orbifold singularity of $S^5/{\mathbb
Z}_2$ of the dual type IIB superstring background. Since these
instantons are confined to the singularity they are
fractional \cite{Douglas:1996sw,Douglas:1997xg,Diaconescu:1998br}. The second branch
describes how the fractional D-instantons confined to the singularity on the first
branch can move off in pairs to explore the whole of $S^5/{\mathbb Z}_2$ parametrized
by $\hat\Omega_a^i$. Of course there are also mixed branches consisting of
fractional instantons on, and `normal' instantons off, the
singularity. However, we shall not need these more general
solutions in our analysis.
In principle, in order to perform a saddle-point analysis
we have to expand the effective action
around these general solutions to sufficient order
to ensure that the fluctuation integrals converge. In
general, because the Gaussian form has zeros whenever two D-instantons
coincide, one has to go to quartic order in the fluctuations.
Fortunately, as explained in \cite{MO-III},
we do not need to expand about these most general solutions to the
saddle-point equations to quartic order, since this is equivalent to expanding
to the same order around the most degenerate solution, where all the
D-instantons are at the same point.
The resulting quartic action has flat directions corresponding to the
relative positions of the D-instantons. However, when the fermionic
integrals are taken into account, the integrals over these relative
positions turn out to be convergent, and hence these degrees of freedom
should be viewed as fluctuations around, rather than facets of, the maximally
degenerate solution. The variables left un-integrated, since their
integrals are not convergent, are the centre-of-mass coordinates.
In the present situation, as in the orbifold
theories \cite{orbi}, there is an important further implication: the
number of centre-of-mass coordinates (corresponding to the
non-damped integrals) can depend on exactly what fermionic insertions are
made into the measure, since this affects the convergence properties of the
bosonic integrals. We will see that this is closely connected with the
different branches of the solution space.
For the first branch of solutions, the maximally degenerate solution
is \eqref{firb} with all the
instantons at the same point:
\begin{equation}\begin{split}
a'_n&=-X_n1_{\scriptscriptstyle[k]\times[k]}\ ,\\
\chi_a&=\begin{cases}
\rho^{-1}\hat\Theta_a1_{\scriptscriptstyle[k]\times[k]} & a=1,\ldots,4\ ,\\
0 & a=5,6\ .\end{cases}
\label{fmaxs}\end{split}
\end{equation}
For the second branch with instanton number $2k$, the maximally
degenerate solution is \eqref{secb} with all the instantons at the
same point:
\begin{equation}\begin{split}
a'_n&=-X_n1_{\scriptscriptstyle[2k]\times[2k]}\ ,\\
\chi_a&=\rho^{-1}\hat\Omega_a1_{\scriptscriptstyle[k]\times[k]}\otimes\begin{cases}
1_{\scriptscriptstyle[2]\times[2]} & a=1,\ldots,4\ ,\\
\tau^2 & a=5,6\ .\end{cases}
\label{smaxs}\end{split}
\end{equation}
In \eqref{fmaxs} $\hat\Theta_a$ parametrizes $S^3$, whereas in
\eqref{smaxs}, $\hat\Omega_a$ parametrizes $S^5/{\mathbb Z}_2$.
The expansion of the effective action to quartic order around these solutions
can be deduced from the analysis of \cite{MO-III}, although the first
case, describing the fractional D-instantons, is somewhat simpler and we
describe it first. After
integrating out the fluctuations in the variables $W^0$, which are lifted at Gaussian
order, as in \cite{MO-III}, the quartic fluctuations are governed by an
action which looks identical to that of ten-dimensional ${\cal N}=1$
supersymmetric $\text{U}(k)$ gauge theory dimensionally reduced to zero dimensions:
\begin{equation}
S_{\rm b}=-{1\over2}{\rm tr}_k\,\big[A_\mu,A_\nu\big]^2\ ,
\end{equation}
where the gauge field has components
\begin{equation}
A_\mu=N^{1/4}\big(\rho^{-1}a'_n,\rho\chi_a\big)\ .
\end{equation}
However, the components of the gauge field are subject to
\begin{equation}
(A_\mu)^T=A_\mu\ ,\quad\text{for}\ \mu=1,\ldots,8\ ,\quad
(A_\mu)^T=-A_\mu\ ,\quad\text{for}\ \mu=9,10\ .
\end{equation}
In other words, the matrix model only has $\text{O}(k)\subset\text{U}(k)$ symmetry.
The centre-of-mass parameters of the maximally degenerate solution,
$X_n$ and $\rho^{-1}\hat\Theta_a$, correspond to the trace parts
of $A_\mu$, $\mu=1,\ldots,8$, and these decouple from the action.
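These transpose constraints are easy to verify numerically. The following sketch (illustrative values, not part of the derivation) builds $A_\mu$ from the maximally degenerate second-branch configuration \eqref{smaxs} as a concrete example, and also checks the $\tau^1$ sign flip of the $a=5,6$ components noted earlier:

```python
# Check (A_mu)^T = +A_mu for mu=1..8 and (A_mu)^T = -A_mu for mu=9,10,
# using the block structure a'_n x 1, chi_{1..4} x 1, chi_{5,6} x tau^2.
import numpy as np

k, rho = 2, 1.3
tau1 = np.array([[0, 1], [1, 0]], dtype=complex)
tau2 = np.array([[0, -1j], [1j, 0]])
Omega = np.random.default_rng(1).normal(size=6)
Omega /= np.linalg.norm(Omega)          # unit six-vector hat{Omega}_a

I_k = np.eye(k)
X = np.random.default_rng(2).normal(size=4)
A = [np.kron(-X[n] * I_k, np.eye(2)) for n in range(4)]            # a'_n
A += [np.kron(Omega[a] / rho * I_k, np.eye(2)) for a in range(4)]  # chi_{1..4}
A += [np.kron(Omega[a] / rho * I_k, tau2) for a in (4, 5)]         # chi_{5,6}

assert all(np.allclose(M.T, M) for M in A[:8])      # symmetric components
assert all(np.allclose(M.T, -M) for M in A[8:])     # antisymmetric components

# The O(2k) element 1 x tau^1 reverses the sign of the a = 5,6 components:
T = np.kron(I_k, tau1)
assert all(np.allclose(T @ M @ T, -M) for M in A[8:])
```

The antisymmetry of the last two components follows from $(\tau^2)^T=-\tau^2$, and $\tau^1\tau^2\tau^1=-\tau^2$ gives the sign reversal used above to identify $S^5/{\mathbb Z}_2$.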
Now we turn to the fermionic sector, which is coupled to the
bosonic variables in \eqref{aact}. First of all we fulfill our
promise to deal with the fermionic ADHM constraints. To leading order
in $1/N$, these constraints read
\begin{equation}
2\rho^2\sigma_{\dot\alpha}^A=-\tfrac12[\delta W^0,\zeta_{\dot\alpha}^A]-[{\cal M}^{\prime\alpha
A},a'_{\alpha{\dot\alpha}}]\ .
\label{lnfadhm}\end{equation}
So the integrals over the $\sigma^{{\dot\alpha} A}$ variables soak up the
delta-functions imposing the fermionic ADHM constraints, as promised.
In \eqref{lnfadhm}, $\delta W^0$ are the fluctuations in $W^0$, all of which are
lifted at Gaussian order. Due to a cross term we can effectively replace
$\delta W^0$ with $-4\rho^3\hat\Theta\cdot\chi$ at leading order (see
\cite{MO-III}). Collecting all the leading order terms, the fermion couplings
are
\begin{equation}\begin{split}
S_{\rm f}=i\Big({8\pi^2 N\over g^2}\Big)^{1/2}
{\rm tr}_k\Big[&
-2\rho^2(\hat\Theta\cdot\chi)\hat\Theta_{AB}
\zeta^{{\dot\alpha} A}\zeta_{\dot\alpha}^B+\rho^{-1}\hat\Theta_{AB}\big[a'_{\alpha{\dot\alpha}},
{\cal M}^{\prime\alpha A}\big]\zeta^{{\dot\alpha} B}\\
&\qquad\qquad\qquad+\chi_{AB}\big(\rho^2
\zeta^{{\dot\alpha} A}\zeta_{\dot\alpha}^B+{\cal M}^{\prime
\alpha A}{\cal M}^{\prime B}_\alpha\big)-\tfrac12\chi_{24}{\cal K}\K^T
\Big] \ .
\label{fermc}\end{split}\end{equation}
The fermionic variables ${\cal M}^{\prime A}_\alpha$ and
$\zeta^{{\dot\alpha} A}$ can be amassed into two $\text{SO}(8)$ spinors $\Psi$ and
$\Phi$:
\begin{equation}\begin{split}
\Psi&=\sqrt{\pi\over2 g}N^{1/8}\big(\rho^{-1/2}{\cal M}^{\prime 1}_\alpha,
\rho^{-1/2}{\cal M}^{\prime
3}_\alpha,\rho^{1/2}\zeta^{\aD1},\rho^{1/2}\zeta^{\aD3}\big)\ ,\\
\Phi&=\sqrt{\pi\over2 g}N^{1/8}\big(\rho^{-1/2}{\cal M}^{\prime 2}_\alpha,
\rho^{-1/2}{\cal M}^{\prime
4}_\alpha,\rho^{1/2}\zeta^{\aD2},\rho^{1/2}\zeta^{\aD4}\big)\ ,
\end{split}\end{equation}
transforming as an ${\bf8}_s$ and ${\bf8}_{\bar s}$, respectively, and
as $k\times k$ matrices subject to
\begin{equation}
\Psi^T=\Psi\ ,\qquad\Phi^T=-\Phi\ .
\end{equation}
We also define rescaled fundamental collective coordinates
\begin{equation}
{\cal K}\rightarrow\sqrt{4g\over\pi}N^{-1/8}{\cal K}\ .
\end{equation}
In terms of the new variables, the fermion couplings can be written in
an elegant way as
\begin{equation}
S_{\rm f}=i\,{\rm
tr}_k\big(\Psi\Gamma_{\hat\mu}[A_{\hat\mu},\Phi]+\Psi[A_9-iA_{10},\Psi]
+\Phi[A_9+iA_{10},\Phi]-(A_9+iA_{10}){\cal K}\K^T\big)\ ,
\label{fc}\end{equation}
with the appropriate $\text{SO}(8)$ inner products between the spinors.
Here $\Gamma_{\hat\mu}$, $\hat\mu=1,\ldots,8$, is a representation of the
$D=8$ Clifford algebra which depends upon $\hat\Theta$.\footnote{The
construction of this Clifford algebra can be deduced from the similar
construction in $D=10$ for the ${\cal N}=4$ theory in \cite{MO-III}.}
The couplings \eqref{fc} do not involve the trace parts of
$\Psi$ corresponding to the eight supersymmetric and superconformal
zero-modes of the instanton, which we denoted previously as
$\xi^A_{\alpha}$ and $\bar\eta^{{\dot\alpha} A}$, for $A=1,3$.
Separating out the integrals over these variables and the bosonic
centre-of-mass variables, the measure at large $N$ has the
form\footnote{We have kept track of powers of $N$ and $g$ but not other
numerical factors.}
\begin{equation}
\int d\mu^ke^{-S^k_{\rm inst}}\Big\vert_{1^{\rm st}\text{ branch}}\
\underset{N\rightarrow\infty}=\
g^4Ne^{2\pi ik\tau}\int
{d^4X\,d\rho\over\rho^5}\,d^3\hat\Theta\,\prod_{A=1,3}d^2\xi^A\,d^2\bar\eta^A\cdot
{\cal Z}_{\text{O}(k)}\ ,
\label{lnm}\end{equation}
where ${\cal Z}_{\text{O}(k)}$ is the partition function of an $\text{O}(k)$
matrix model:
\begin{equation}
{\cal Z}_{\text{O}(k)}={1\over{\rm Vol}\,\text{O}(k)}\int d\hat A\,d\hat\Psi
\,d\Phi\,d{\cal K}\,e^{-S(A_\mu,\Psi,\Phi,{\cal K})}\ ,
\end{equation}
where the hat indicates the traceless parts and
$S(A_\mu,\Psi,\Phi,{\cal K})=S_{\rm b}+S_{\rm f}$. This matrix model is
identical to the one written down in \cite{Gutperle:1999xu} and describes the
dynamics of $k$ D-instantons in flat space. In other words, at large
$N$ the D3-branes have disappeared from the explicit description of
the D-instanton dynamics; their only effect is to change the
centre-of-mass measure from ${\mathbb R}^{10}$ to $AdS_5\times S^3$ and to supply the 8
supersymmetric and superconformal fermionic integrals which are
associated with the 8 symmetries broken by the instanton. One can show
that the integrals over all the bosonic variables in the definition
of ${\cal Z}_{\text{O}(k)}$ are actually convergent so that ${\cal
Z}_{\text{O}(k)}$ is some finite numerical factor (this is important for
getting the $N$ counting of the answer correct).
Now we turn to the expansion around the maximally degenerate solution
for the second branch of solutions \eqref{secb} with instanton number
$2k$. In this case the solution is not proportional to the identity
and many of the fluctuations, namely those which do not commute with
$\chi_a$ for $a=5,6$, are lifted at Gaussian order. In fact it is not difficult to
see that the resulting leading order term can be deduced from the
$\text{O}(2k)$ matrix model constructed above by expanding around
$\chi_a=\rho^{-1}\hat\Omega_a1_{\scriptscriptstyle[k]\times[k]}\otimes\tau^2$, $a=5,6$.
The solution only commutes with a subgroup $\text{U}(k)\subset\text{O}(2k)$, and
so the resulting model only has $\text{U}(k)$ symmetry. After a suitable
gauge fixing (described in the context of the orbifold models in
\cite{orbi}) and after integrating out the Gaussian
fluctuations, which include all the fundamental collective coordinates
${\cal K}$, one is left with the partition function of
a $\text{U}(k)$ matrix model identical to the one which
appears in the ${\cal N}=4$ calculation, namely
ten-dimensional ${\cal N}=1$ gauge theory dimensionally reduced to zero
dimensions. The $\text{U}(k)$ gauge field $A'_\mu$ is embedded in the $\text{O}(2k)$
model variables $A_\mu$ as
\begin{equation}
A_\mu=A'_\mu\otimes\begin{cases}1_{\scriptscriptstyle[2]\times[2]} & \mu=1,\ldots,8\
,\\ \tau^2 & \mu=9,10\ ,\end{cases}
\end{equation}
with a similar relation for the $\text{U}(k)$ fermions:
\begin{equation}
\Psi=\Psi'\otimes1_{\scriptscriptstyle[2]\times[2]}\ ,\qquad\Phi=\Phi'\otimes
\tau^2\ .
\end{equation}
The 16 fermions $\Psi'$ and $\Phi'$ combine into the 16-component
Majorana-Weyl fermion of the ten-dimensional theory.
The trace parts of the gauge field $A'_\mu$
separate out to give an integral over $AdS_5\times S^5/{\mathbb Z}_2$ along
with 16 fermionic integrals which include the 8 supersymmetric and superconformal
zero modes along with the 8 components of $\Phi$ proportional to
$1_{\scriptscriptstyle[k]\times[k]}\otimes\tau^2$, which, for uniformity of
notation, we denote $\xi_\alpha^A$
and $\bar\eta^{{\dot\alpha} A}$, for $A=2,4$. The final expression is
\begin{equation}
\int d\mu^{2k}_{\rm phys}e^{-S^{2k}_{\rm inst}}\Big\vert_{2^{\rm nd}
\text{ branch}}\ \underset{N\rightarrow\infty}=\
g^8\sqrt Ne^{4\pi
ik\tau}\int{d^4X\,d\rho\over\rho^5}\,d^5\hat\Omega\,\prod_{A=1,\ldots,4}\,
d^2\xi^A\,d^2\bar\eta^A\cdot{\cal Z}_{\text{SU}(k)}\ ,
\end{equation}
which is identical to the measure in the ${\cal N}=4$ case. Again, it is
important that the bosonic integrals in the definition of the partition
function ${\cal Z}_{\text{SU}(k)}$ are all convergent, so that this factor is
some overall numerical constant. Our analysis
makes it clear that the second branch is actually contained within the
more general first branch. The key point is that the convergence of
the integrals of the bosonic variables in the $\text{O}(2k)$ matrix model
depends on the `fermionic context', i.e.~on what fermion insertions
are made
into the partition function. The second branch corresponds to inserting
the 8 modes $\xi_\alpha^A$ and $\bar\eta^{{\dot\alpha} A}$, $A=2,4$, into the
$\text{O}(2k)$ partition function. The integrals over
the components of $A_\mu$, for $\mu=9,10$, proportional to
$1_{\scriptscriptstyle[k]\times[k]}\otimes\tau^2$, are not damped after taking
into account the now smaller number of fermionic integrals. These divergent integrals
need to be separated out as additional centre-of-mass coordinates,
leading to an integral over $S^5/{\mathbb Z}_2$ rather
than $S^3$.
\acknowledgments
I would like to thank Michael Gutperle for very helpful conversations.
Note: near the completion of this work
there appeared a paper by Gava et al.~\cite{Gava:1999ky}. That paper
reaches the same conclusions as the present work but goes much further and
provides a very detailed comparison between instanton contributions to
certain correlators in the gauge theory and the string theory. The
results are in perfect agreement with the AdS/CFT correspondence.
\section{Introduction}
\setcounter{equation}{0}
In quantum theory there inevitably exist vacuum fluctuations, which may induce novel effects that have no counterpart in classical physics. A typical example is the Casimir-Polder (CP) interaction \cite{CP}.
In general, fluctuating electromagnetic fields in vacuum induce instantaneous electric dipole moments in neutral atoms and molecules, which then interact with each other via the exchange of virtual photons.
Such effects have been widely applied in various areas, e.g., in material science, chemistry, and biology
(see Ref. \cite{Woods2016} for a recent review).
Moreover, the measurement of the atom-surface and surface-surface CP interactions has already been accomplished \cite{Sukenik1993,Chen2006,Hertlein2008,Bender2010}. However, the interatomic CP interaction is so weak that it has not yet been measured.
It has been shown that the interatomic CP interaction can be significantly modified by externally applied electromagnetic waves \cite{Thirunamachandran1980,Milonni1992,Milonni1996,Bradshaw2005,Andrews2006,Salam2006,Salam2007, Sherkunov2009}.
Compared to the interatomic CP interaction, which originates purely from electromagnetic vacuum fluctuations, the physical process responsible for the leading interatomic interaction in the presence of external electromagnetic waves is different. In the latter case, the externally applied electromagnetic waves induce dipole moments in the atoms, which are then correlated with each other through the electromagnetic vacuum fluctuations. That is, a real photon is scattered by a pair of atoms which are coupled via the exchange of a virtual photon, and the interaction energy is thus modified. Such a modified CP interaction, which is actually a fourth-order effect, has been shown to behave as $r^{-3}$ in the near regime and as $r^{-2}$ or $r^{-1}$ in the far regime, depending on the propagation direction of the external electromagnetic field with respect to the alignment of the atoms, where $r$ is the interatomic distance \cite{Thirunamachandran1980}. Moreover, this result reduces to the classical dipole-dipole interaction, which behaves as $r^{-3}$ in all distance regimes, when the frequency of the external electromagnetic field tends to zero. Naturally, one may wonder what the interatomic interaction will be in the presence of externally applied electrostatic fields.
First of all, external electrostatic fields will induce electric dipole moments in atoms and so
a classical dipole-dipole interaction will result. However, to higher orders, there exist quantum corrections that arise from modified vacuum field fluctuations due to the presence of the external fields.
In fact, the interaction between two atoms or molecules in the presence of an electrostatic field has been studied by Mackrodt \cite{Mackrodt1974}. It has been shown that, in the fourth order, the interatomic interaction contains two parts: One is the classical dipole-dipole interaction induced by the electrostatic field, which behaves as $r^{-3}$ in all distance regimes and can be obtained directly through the fourth-order perturbation theory with the assistance of 24 one-photon-exchange time-ordered diagrams, while the other is the well-known two-photon-exchange CP interaction induced purely by vacuum fluctuations.
Recently, this question was revisited by Fiscelli \textit{et al.} in Ref. \cite{Passante2020} in a different approach, wherein the authors first obtained the dressed atomic states in the electrostatic field and then calculated the second-order energy correction between dressed atoms due to the coupling with the fluctuating electromagnetic fields. They found that, in the fourth order, the interatomic interaction from one-photon exchange can generate a novel $r^{-4}$ term in the far zone.
That is, there are different answers to the question of whether the induced classical electrostatic dipole-dipole interaction can be corrected by electromagnetic vacuum fluctuations or, in other words, whether external-field-related quantum corrections to the classical electrostatic dipole-dipole interaction exist besides those originating purely from quantum vacuum fluctuations.
In this paper we consider, in the presence of an external electrostatic field, the interatomic interaction in the fourth, sixth and eighth orders within the dipole coupling approximation, based on the perturbation method utilized in Ref. \cite{Passante2020}.
First, we give a formal analysis of the interaction between two ground-state atoms in the presence of an electrostatic field.
As we will show in detail, in the fourth order the external electrostatic field merely induces a classical dipole-dipole interaction between the two atoms and does not generate an $r^{-4}$ term in the far zone, contrary to the recent claim in Ref. \cite{Passante2020}. In the sixth order, the external electrostatic field effectively modifies the atomic polarizability that enters the two-photon-exchange interatomic CP interaction, which can be considered as one kind of external-field-related quantum correction to the classical dipole-dipole interaction. In the eighth order, the external field enables an additional three-photon exchange, which does not occur between two ground-state atoms in vacuum; this is another kind of external-field-related quantum correction.
Second, as a specific example, we apply our analysis
to the case of two ground-state hydrogen atoms and show concretely what the electrostatic-field-related quantum corrections of the induced classical dipole-dipole interaction are and whether such corrections may be detectable in the foreseeable future.
Throughout this paper, the Einstein summation convention for repeated indices is assumed and the Latin indices run from $1$ to $3$.
\section{Interatomic interaction in the presence of an electrostatic field}
\label{CPEF}
We now consider the interatomic interaction between two neutral atoms ($A$ and $B$) in their ground states in the presence of an electrostatic field based on the perturbation method.
Since our concern is the interatomic interaction related to the electrostatic field, we first obtain the atomic states dressed by the external electrostatic field to the second order and then calculate the second-order, fourth-order, and sixth-order energy corrections of the dressed ground state, due to the interaction with the fluctuating transverse fields.
The total Hamiltonian of the system considered is given by
\begin{equation} \label{Hamiltonian}
H=H_F+H_S+V_S+H_I,
\end{equation}
where $H_{F}$ and $H_{S}$ are the Hamiltonians of the fluctuating vacuum electromagnetic fields and the atoms, respectively, and $V_S$ and $H_I$ are respectively the interaction Hamiltonians of the atoms with the external electric field and the fluctuating vacuum electromagnetic fields, which, in the multipolar coupling scheme and within the dipole approximation, take the form
\begin{eqnarray}
V_S &=& -d^A_i \varepsilon_{i}(\vec x_A)-d^B_i \varepsilon_{i}(\vec x_B),\\
H_I &=& -\epsilon_0^{-1}d^A_i \mathscr{E}_i(\vec x_A)-\epsilon_0^{-1}d^B_i \mathscr{E}_i(\vec x_B).
\end{eqnarray}
Here $d^{\sigma}_i$ is the component of the dipole moment of atom $\sigma$, $\epsilon_0$ is the vacuum permittivity, $\varepsilon_i(\vec x)$ represents the external electrostatic field, and $\mathscr{E}_i(\vec x)$ is the transverse displacement field, of which the explicit form is given by \cite{PT,Passante2018}
\begin{equation}\label{E-field}
\mathscr{E}_i(\vec x)=i\sum_{\vec k, \lambda}\sqrt{\frac{\hbar c k \epsilon_0}{2\mathcal{V}}} e^{\lambda}_i [a_{\lambda}(\vec k) e^{i \vec k \cdot \vec x}-a^{\dag}_{\lambda}(\vec k)e^{-i \vec k \cdot \vec x}],
\end{equation}
where $\mathcal{V}$ is the quantization volume, $a_{\lambda}(\vec k)$ and $a^{\dag}_{\lambda}(\vec k)$ are respectively the annihilation and creation operators of the electromagnetic vacuum field with wave vector $\vec k$ and polarization $\lambda$, and $e^{\lambda}_i$ are polarization unit vectors.
We label the unperturbed energy eigenvalues of the ground and excited states as $E_0$ and $E_{n}$ $( n=1,2,3, . . .)$, respectively, with the corresponding eigenstates being $|e_0\rangle$ and $|e_n\rangle$.
In the presence of an electrostatic field, the states of neutral atoms are modified due to the interaction between the induced permanent dipole moments and the electrostatic field. Formally, we denote the corrected atomic state by $|\hat e_{\gamma}\rangle$, which can be given by the second-order perturbation theory, i.e.,
\begin{eqnarray}\label{eq2}
\nonumber|\hat e_{\gamma} \rangle&=&\left(1-\sum_{\alpha} {'}\frac{\hat d^{\gamma \alpha}_i \hat d^{\alpha \gamma}_j} {2E_{\gamma\alpha}^2}\varepsilon_i \varepsilon_j \right)|e_{\gamma}\rangle -\sum_{\alpha}{'}\frac{\hat d^{\alpha \gamma}_i \varepsilon_i }{E_{\gamma\alpha}}|e_{\alpha}\rangle\\
&&+\sum_{\beta} {'}\left(\sum_{\alpha} {'}\frac{\hat d^{\beta\alpha}_i \hat d^{\alpha \gamma}_j } {E_{\gamma\alpha}E_{\gamma\beta}}-\frac{\hat d^{\beta \gamma}_i \hat d^{\gamma\gamma}_j }{E_{\gamma\beta}^2}\right)\varepsilon_i\varepsilon_j |e_{\beta}\rangle,
\end{eqnarray}
where $\alpha, \beta, \gamma=0, 1, 2, 3, . . .$,
$E_{\gamma\alpha}=E_{\gamma}-E_{\alpha}$, and $\hat d^{\alpha\beta}_i=\langle e_{\alpha}|d_i|e_{\beta}\rangle$.
Note here that a neutral atom becomes a polar one in the presence of the electrostatic field.
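Before proceeding, it is worth noting a property that will matter below: states corrected through second order in the perturbation, as in Eq. (\ref{eq2}), remain mutually orthogonal to that order. The following toy-model sketch (a generic five-level system with a random Hermitian perturbation, purely illustrative and not the atomic system itself) checks this numerically:

```python
# Second-order Rayleigh-Schrodinger states: the overlap of two distinct
# corrected states is O(lam^3), i.e. they are orthogonal through O(lam^2).
import numpy as np

rng = np.random.default_rng(3)
N, lam = 5, 1e-3
E = np.arange(1.0, N + 1.0)               # nondegenerate unperturbed spectrum
V = rng.normal(size=(N, N)); V = V + V.T  # Hermitian perturbation (~ -d*eps)

def corrected(g):
    """State |e_g> corrected to second order, as in Eq. (eq2)."""
    psi = np.zeros(N)
    others = [a for a in range(N) if a != g]
    for a in others:                              # first-order admixtures
        psi[a] += lam * V[a, g] / (E[g] - E[a])
    for b in others:                              # second-order admixtures
        s = sum(V[b, a] * V[a, g] / ((E[g] - E[a]) * (E[g] - E[b]))
                for a in others)
        psi[b] += lam**2 * (s - V[b, g] * V[g, g] / (E[g] - E[b])**2)
    psi[g] = 1 - 0.5 * lam**2 * sum((V[a, g] / (E[g] - E[a]))**2
                                    for a in others)
    return psi

overlap = corrected(0) @ corrected(1)
assert abs(overlap) < 1e-6   # residual overlap is O(lam^3)
```

The first-order and second-order contributions to the overlap cancel pairwise for a Hermitian perturbation, leaving only an $O(\lambda^3)$ residue; truncating one of the two states at a lower order destroys this cancellation.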
First, we calculate the interaction due to the exchange of a single virtual photon between the two atoms dressed by the electrostatic field, i.e., the fourth-order effect.
Let us note that the one-photon exchange is possible only for dressed atoms: the dressed atoms are in a superposition of the ground state and excited states [see Eq. (\ref{eq2})], while the undressed ones are in energy eigenstates, so the average value of the induced dipole moment is nonzero over dressed atomic states but vanishes over undressed atomic states.
Based on the second-order perturbation theory, we obtain \cite{PT}
\begin{equation}\label{E_AB perturbation}
\Delta E_{AB}=\sum_{I}\frac{\langle \psi|H_I|I\rangle\langle I|H_I|\psi\rangle}{E_{\psi} -E_{I}},
\end{equation}
where $|\psi\rangle=|\hat e^A_0\rangle |\hat e^B_0\rangle|0\rangle$ is the corrected initial state of the system, with $|0\rangle$ the electromagnetic vacuum state, $|I\rangle=|\hat e^A_0\rangle |\hat e^B_0\rangle |1_{\vec k, \lambda}\rangle$ is the virtual intermediate state, and $E_I$ is the energy of state $|I\rangle$. Then the interaction energy can be obtained as
\begin{equation}
\Delta E_{AB}=-\sum_{\vec k,\lambda}\frac{1}{\epsilon_0 \mathcal{V}} \hat d^A_i \hat d^B_j e^{\lambda}_i e^{\lambda}_j e^{i\vec k\cdot \vec r},
\end{equation}
where $\vec r=\vec x_A-\vec x_B$ and $\hat d^{\sigma}_i=\langle\hat e_0^{\sigma}|d^{\sigma}_i|\hat e_0^{\sigma}\rangle$ is the induced permanent dipole moment of atom $\sigma$. Replacing the summation by an integration, $\sum_{\vec k}\rightarrow \mathcal{V}/(2\pi)^3 \int k^2 dk \int d\Omega$, and performing the polarization summation $\sum_{\lambda}e^{\lambda}_ie^{\lambda}_j=\delta_{ij}-\hat k_i \hat k_j$, we obtain
\begin{equation}\label{classical dd}
\Delta E_{AB}=\frac{\hat d^A_i \hat d^B_j}{4\pi\epsilon_0 r^3}(\delta_{ij}-3\hat r_i \hat r_j),
\end{equation}
where $\hat r_i$ is the $i$th component of the unit vector $\vec r /r$. That is, in the fourth order, the electrostatic-field-induced interatomic interaction is just the $r^{-3}$ classical dipole-dipole interaction, which applies in all distance regimes. Notice that here the result is obtained by a second-order calculation for two atoms dressed by the electrostatic field to the second order, which agrees with an earlier result for two undressed atoms obtained by a fourth-order calculation \cite{Mackrodt1974}. Moreover, our result also agrees with the interatomic interaction in the presence of external electromagnetic waves \cite{Thirunamachandran1980,Milonni1996} when the wave frequency is so small that the field is effectively static, and with the electrostatic interaction between two atomic dipole moments induced by an external static field in a cavity \cite{Donaire3} when the thickness of the cavity tends to infinity. However, the above result does not agree with that in Ref. \cite{Passante2020}.
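Both ingredients of this result can be checked numerically. The sketch below (illustrative units with $\epsilon_0=1$) verifies the transverse polarization sum used above and the familiar classical limits of Eq. (\ref{classical dd}): side-by-side parallel dipoles repel, head-to-tail dipoles attract.

```python
# Check sum_lambda e_i e_j = delta_ij - khat_i khat_j, then evaluate the
# classical dipole-dipole energy of Eq. (classical dd) for two geometries.
import numpy as np

rng = np.random.default_rng(4)
k = rng.normal(size=3); khat = k / np.linalg.norm(k)
# two unit polarization vectors orthogonal to khat and to each other
e1 = np.cross(khat, [0.0, 0.0, 1.0]); e1 /= np.linalg.norm(e1)
e2 = np.cross(khat, e1)
pol_sum = np.outer(e1, e1) + np.outer(e2, e2)
assert np.allclose(pol_sum, np.eye(3) - np.outer(khat, khat))

def dd_energy(dA, dB, r_vec, eps0=1.0):
    """Classical dipole-dipole energy d_i d_j (delta_ij - 3 rhat_i rhat_j)."""
    r = np.linalg.norm(r_vec); rhat = r_vec / r
    T = np.eye(3) - 3 * np.outer(rhat, rhat)
    return dA @ T @ dB / (4 * np.pi * eps0 * r**3)

r_vec = np.array([0.0, 0.0, 2.0])
side = np.array([1.0, 0.0, 0.0])          # parallel dipoles, perpendicular to r
along = np.array([0.0, 0.0, 1.0])         # dipoles aligned along r
assert dd_energy(side, side, r_vec) > 0       # side-by-side: repulsive
assert dd_energy(along, along, r_vec) < 0     # head-to-tail: attractive
```

For $\hat r=\hat z$ the tensor $\delta_{ij}-3\hat r_i\hat r_j$ is ${\rm diag}(1,1,-2)$, giving $+d_Ad_B/4\pi\epsilon_0r^3$ and $-2d_Ad_B/4\pi\epsilon_0r^3$ for the two configurations, as expected classically.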
In Ref. \cite{Passante2020} the ground state was corrected to the second order while the excited states were corrected only to the zeroth order, so the corrected excited states used in Ref. \cite{Passante2020} were not orthogonal to the corrected ground state (through second order). An incorrect nonzero $r^{-4}$ term in the far regime was then obtained, since the authors chose the
excited states corrected to the zeroth order as the intermediate states. Actually, the contributing term does not involve the corrected excited states if the excited states are also corrected to the second order, because in this circumstance they are orthogonal to the
corrected ground state.
Second, we examine the interaction between the two dressed atoms due to two-photon exchange. For simplicity, we consider only the far-zone behavior of the dipole-dipole interaction potential. In the far zone, the computation involves only four time-ordered diagrams \cite{PT,Salam} and the interaction energy shift is formally given by
\begin{eqnarray}\label{Integral_2FZ}
\nonumber\Delta E_{AB}&=&-\sum_{\vec k, \vec k'}\left(\frac{\hbar c k}{2\epsilon_0 \mathcal{V}}\right) \left(\frac{\hbar c k'}{2\epsilon_0 \mathcal{V}}\right)\frac{e^{i (\vec k+\vec k')\cdot \vec r}}{\hbar c k+\hbar c k'} \\
&&\times(\delta_{il}-\hat k_i \hat k_l) (\delta_{jm}-\hat k_j' \hat k_m') \alpha^A_{ij}(0)\alpha^B_{lm}(0) .
\end{eqnarray}
Here $\alpha_{ij}(0)$ is the static polarizability, which is defined as
\begin{equation}\label{alpha}
\alpha_{ij}(0)=\sum_{s}\frac{1}{E_{s0}}(\hat d^{0s}_{i}\hat d^{s0}_j+\hat d^{0s}_{j}\hat d^{s0}_i),
\end{equation}
where the index $0$ represents the corrected ground state $|\hat e_0\rangle$ and the index $s$ represents the corrected excited state $|\hat e_s\rangle$. Replacing the summation in Eq. (\ref{Integral_2FZ}) by integration and performing the integral with the help of the integral representation
\begin{equation}
\frac{1}{k+k'}=r \int_{0}^{\infty} e^{-(k+k')r\eta}d\eta,
\end{equation}
we obtain
\begin{equation}\label{E_AB-2FZ}
\Delta E_{AB}=-\frac{\hbar c}{128\pi^3\epsilon_0^2r^7}M_{AB},
\end{equation}
where
\begin{eqnarray}\label{M_AB}
\nonumber M_{AB}&=&20\alpha^A_{33}(0)\alpha^B_{33}(0)+13\alpha^A_{11}(0)\alpha^B_{11}(0)+13\alpha^A_{22}(0)\alpha^B_{22}(0)\\
\nonumber&&+13\alpha^A_{12}(0)\alpha^B_{12}(0)+13\alpha^A_{21}(0)\alpha^B_{21}(0)-15\alpha^A_{13}(0)\alpha^B_{13}(0)\\
&&-15\alpha^A_{31}(0)\alpha^B_{31}(0)-15\alpha^A_{23}(0)\alpha^B_{23}(0)-15\alpha^A_{32}(0)\alpha^B_{32}(0).
\end{eqnarray}
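Two consistency checks on the above can be sketched numerically (illustrative, in arbitrary units): the integral representation of $1/(k+k')$, and the fact that for isotropic polarizabilities Eq. (\ref{M_AB}) reduces to $M_{AB}=46\,\alpha^A(0)\alpha^B(0)$, reproducing the standard far-zone CP coefficient $-23\hbar c\,\alpha^A(0)\alpha^B(0)/(64\pi^3\epsilon_0^2 r^7)$.

```python
# Check r * int_0^inf exp(-(k+k') r eta) d eta = 1/(k+k'), and the
# isotropic reduction of the angular coefficient M_AB.
import numpy as np
from scipy.integrate import quad

k, kp, r = 0.7, 1.9, 2.3
lhs = r * quad(lambda eta: np.exp(-(k + kp) * r * eta), 0, np.inf)[0]
assert np.isclose(lhs, 1.0 / (k + kp))

alpha_A, alpha_B = 1.5, 0.8                 # isotropic static polarizabilities
aA = alpha_A * np.eye(3); aB = alpha_B * np.eye(3)
# Eq. (M_AB) with paper indices 1,2,3 mapped to array indices 0,1,2:
M = (20*aA[2,2]*aB[2,2] + 13*aA[0,0]*aB[0,0] + 13*aA[1,1]*aB[1,1]
     + 13*aA[0,1]*aB[0,1] + 13*aA[1,0]*aB[1,0] - 15*aA[0,2]*aB[0,2]
     - 15*aA[2,0]*aB[2,0] - 15*aA[1,2]*aB[1,2] - 15*aA[2,1]*aB[2,1])
assert np.isclose(M, 46 * alpha_A * alpha_B)   # 46/128 = 23/64
```

With the off-diagonal elements vanishing, only the $20+13+13=46$ combination survives, so Eq. (\ref{E_AB-2FZ}) indeed recovers the familiar isotropic far-zone CP potential.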
The above result (\ref{E_AB-2FZ}) is formally the same as the interatomic CP interaction between undressed atoms, but the atomic polarizability is now modified by the applied electrostatic field, i.e., the dipole transition matrix elements in Eq. (\ref{alpha}) are evaluated between the dressed atomic states. That is, in the sixth order,
the external-field-related quantum correction to the classical dipole-dipole interaction takes the same form as the CP interaction but with the atomic polarizability effectively dressed by the external electrostatic field.
Note here that the above result (\ref{E_AB-2FZ}) is in fact the far-zone counterpart of the near-zone London interaction between a pair of undressed molecules in an electrostatic field studied in Ref. \cite{Mackrodt1974},
which is not the complete sixth-order potential. In a complete sixth-order calculation of the interaction energy shift between two undressed molecules \cite{Mackrodt1974}, there also exist terms corresponding to a vacuum-fluctuation-induced modification of the two-photon-exchange CP interaction, i.e., an effective dressing of the atomic polarizability by vacuum fluctuations.
We leave an order-of-magnitude estimation of such an effect to Sec. \ref{3}, since here our main concern is the interatomic interaction relevant to the external electrostatic field.
Third, we study the interaction due to the three-photon exchange between the two dressed atoms. This is particularly interesting since the three-photon exchange is not allowed in the absence of an external electrostatic field, for a reason we will explain later. A complete calculation of the three-photon-exchange interatomic interaction involves $360$ time-ordered diagrams.
However, in the far-zone limit, the leading term relates only to $12$ time-ordered diagrams \cite{Salam1997} and can be formally expressed as
\begin{eqnarray}\label{Integral_3FZ}
\nonumber\Delta E_{AB}=&-&\frac{1}{3}\sum_{\vec k, \vec k', \vec k''}
\left(\frac{\hbar c k}{2\epsilon_0 \mathcal{V}}\right)\left(\frac{\hbar c k'}{2\epsilon_0 \mathcal{V}}\right) \left(\frac{\hbar c k''}{2\epsilon_0 \mathcal{V}}\right)\\
\nonumber&\times& (\delta_{il}-\hat k_i \hat k_l)(\delta_{jm}-\hat k_j' \hat k_m')(\delta_{kn}-\hat k_k'' \hat k_n'')\\
&\times&\frac{e^{i(\vec k+\vec k'+\vec k'') \cdot \vec r}}{\hbar c(k+k'+k'')}\beta^A_{ijk}(0) \beta^B_{lmn}(0),
\end{eqnarray}
where $\beta_{ijk}(0)$ is the static first hyperpolarizability tensor, which is formally given by
\begin{eqnarray}\label{betaijk}
\nonumber\beta_{ijk}(0)&=&\sum_{t,s}\frac{1}{E_{t0}E_{s0}}(\hat d^{0 t}_i \hat d^{t s}_j \hat d^{s 0}_k +\hat d^{0 t}_i \hat d^{t s}_k \hat d^{s 0}_j+\hat d^{0 t}_j \hat d^{t s}_k \hat d^{s 0}_i \\
&&\quad +\hat d^{0 t}_j \hat d^{t s}_i \hat d^{s 0}_k+\hat d^{0 t}_k \hat d^{t s}_i \hat d^{s 0}_j+\hat d^{0 t}_k \hat d^{t s}_j \hat d^{s 0}_i),
\end{eqnarray}
where the index $0$ represents the corrected ground state $|\hat e_0\rangle$, while the indices $t$ and $s$ represent the corrected excited states $|\hat e_t\rangle$ and $|\hat e_s\rangle$, respectively.
Replacing the summation in Eq. (\ref{Integral_3FZ}) by a spherical coordinate integral and performing the integral with the help of the integral representation
\begin{equation}
\frac{1}{\omega+\omega'+\omega''}=r \int_{0}^{\infty} e^{-(\omega+\omega'+\omega'')r\eta}d\eta,
\end{equation}
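As a quick numerical sanity check (ours, not part of the paper), the integral representation above can be verified for arbitrary positive values of $\omega$, $\omega'$, $\omega''$, and $r$; the following standard-library Python sketch uses a simple midpoint rule.

```python
import math

def rep_integral(w_sum, r, n=200_000, eta_max=50.0):
    """Midpoint-rule evaluation of r * integral_0^inf exp(-w_sum * r * eta) d(eta)."""
    h = eta_max / n
    acc = sum(math.exp(-w_sum * r * (i + 0.5) * h) for i in range(n))
    return r * acc * h

# The representation should reproduce 1/(w + w' + w'') for arbitrary test values.
w_sum, r = 0.7 + 1.3 + 2.1, 0.5
approx = rep_integral(w_sum, r)
```

The prefactor $r$ cancels against the $r$ in the exponent, so the result is independent of $r$, as the identity requires.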
we obtain
\begin{equation}\label{E_AB-3FZ}
\Delta E_{AB}=\frac{\hbar^2 c^2}{2^{13}\pi^5 \epsilon_0^3 r^{11}}D_{AB},
\end{equation}
where
\begin{eqnarray}\label{D_AB}
\nonumber D_{AB}&=&-336\beta^A_{333}(0)\beta^B_{333}(0)+225\beta^A_{222}(0)\beta^B_{222}(0)\\
\nonumber&&+225\beta^A_{111}(0)\beta^B_{111}(0)+675\beta^A_{212}(0)\beta^B_{212}(0)\\
\nonumber&&+675\beta^A_{121}(0)\beta^B_{121}(0)-744\beta^A_{131}(0)\beta^B_{131}(0)\\
\nonumber&&-744\beta^A_{232}(0)\beta^B_{232}(0)+840\beta^A_{313}(0)\beta^B_{313}(0)\\
&&+840\beta^A_{323}(0)\beta^B_{323}(0)-1488\beta^A_{123}(0)\beta^B_{123}(0).
\end{eqnarray}
The above result (\ref{E_AB-3FZ}) shows that the interatomic interaction potential between the two dressed atoms due to the exchange of three virtual photons behaves as $r^{-11}$ in the far zone, which is another kind of quantum correction to the classical electrostatic dipole-dipole interaction.
In fact, the three-photon-exchange process is not possible for undressed atoms within the dipole coupling approximation, since the undressed atoms are in their energy eigenstates. According to the dipole transition rule, the hyperpolarizability (\ref{betaijk}) for undressed atoms must vanish, since the three dipole transition matrix elements in each product in Eq. (\ref{betaijk}) cannot all be nonzero at the same time. That is, after exchanging an odd number of virtual photons, the undressed atoms cannot return to the initial state as required by the perturbation approach, since the angular momentum quantum number must change according to the dipole transition rule. However, there is no such problem for dressed atoms, since the dressed atoms are in a superposition of the ground state and excited states [see Eq. (\ref{eq2})].
Therefore, the possible electrostatic-field-related quantum corrections to the induced classical dipole-dipole interaction between two ground-state atoms fall into two categories: (i) corrections arising from the exchange of an even number of virtual photons, in which the atomic polarizability is effectively modified by the external field, and (ii) corrections arising from the exchange of an odd number of virtual photons enabled by the electrostatic field, which cannot occur within the dipole coupling approximation in the absence of such a field.
Note that the atomic polarizability can also be modified by electromagnetic vacuum fluctuations \cite{Bullough1,Bullough2,Bullough3,Bullough4,Bullough5,Donaire1,Donaire2}. Since
our main concern here is the interatomic interaction relevant to the external electrostatic field,
an order-of-magnitude estimation of such an effect is left to Sec. \ref{3}.
\section{Interaction between ground-state hydrogen atoms in an electrostatic field}\label{3}
Now we take hydrogen atoms as an example to show the electrostatic-field-related quantum corrections of the induced classical dipole-dipole interaction.
The unperturbed ground state of the total system is
\begin{equation}
|\phi^A_{100}\rangle|\phi^B_{100}\rangle|0\rangle,
\end{equation}
where $|\phi_{nlm}\rangle$ is the atomic state with quantum numbers $n$, $l$, and $m$ and energy $E_n$, and $|0\rangle$ is the electromagnetic vacuum state.
For simplicity, we assume that the direction of the external electrostatic field is along the $z$ axis, i.e., $\varepsilon_i(\vec x_{\sigma})=(0, 0, \varepsilon_{\sigma})$, and consider the contributions from intermediate states with $n=2$ only. For atom $A$ (or $B$), the corrected ground state is obtained as
\begin{eqnarray}
\nonumber|\psi^A_1\rangle&=&\left(1-\gamma^2 \varepsilon_A^2 \right)|\phi_{100}\rangle-\sqrt{2}\gamma \varepsilon_A |\phi_{210}\rangle\\
&&-\frac{3^6}{2^6\sqrt{2}} \gamma^2 \varepsilon_A^2 |\phi_{200}\rangle,\label{HydrogenA-ground}
\end{eqnarray}
where $\gamma=2^9 q a_0/(3^6 E_1)$, with $q$ the electric charge and $a_0$ the Bohr radius, and $E_2=E_1/4$ has been used. Then the second-order ground state of the system (eigenstate of $H_F+H_S+V_S$) is given by
\begin{equation}\label{Hydrogen-ground}
|\psi\rangle=|\psi^A_1\rangle|\psi^B_1\rangle|0\rangle.
\end{equation}
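The coefficient $\gamma$ can be traced back to the well-known hydrogen matrix element $\langle\phi_{100}|z|\phi_{210}\rangle = 2^7\sqrt{2}\,a_0/3^5$, from which $\gamma = 2^9 q a_0/(3^6 E_1)$ follows via first-order perturbation theory with $E_2=E_1/4$. The following standard-library Python sketch (ours, not from the paper) reproduces this matrix element by direct integration of the unperturbed wave functions.

```python
import math

# <100|z|210> in units of a0:
#   angular part: int cos^2(theta) sin(theta) dtheta dphi = 4*pi/3,
#   radial part:  int_0^inf x^4 exp(-3x/2) dx with x = r/a0 (midpoint rule),
#   normalizations: 1/sqrt(pi) for psi_100 and 1/(4*sqrt(2*pi)) for psi_210.
n, xmax = 400_000, 80.0
h = xmax / n
radial = sum(((i + 0.5) * h) ** 4 * math.exp(-1.5 * (i + 0.5) * h)
             for i in range(n)) * h
z_01 = radial * (4 * math.pi / 3) / (math.sqrt(math.pi) * 4 * math.sqrt(2 * math.pi))
expected = 2**7 * math.sqrt(2) / 3**5  # = 128*sqrt(2)/243, approx 0.745
```

The radial integral evaluates analytically to $4!\,(2/3)^5 = 768/243$, which fixes the powers of $2$ and $3$ appearing in $\gamma$.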
To calculate the interaction energy in the presence of the external electrostatic field, we need to find all possible intermediate states.
For this purpose, we should first find the corrected excited states of the atoms in the presence of the electrostatic field to the second order.
For a hydrogen atom, the unperturbed fourfold-degenerate excited level ($n=2$) splits into three energy levels (one degenerate and two nondegenerate) in the presence of an external electrostatic field, i.e., the Stark effect.
Therefore, the second-order correction to the two nondegenerate levels can be approximately obtained by nondegenerate-state perturbation theory. For atom $A$ (or $B$), the corrected excited states are (considering only the contributions from $n=1$ and $n=2$)
\begin{eqnarray}
\nonumber |\psi^A_{21}\rangle&=&\frac{1}{\sqrt{2}}\left(1-\frac{1}{2}\gamma^2\varepsilon_A^2\right) \left(|\phi^A_{200}\rangle-|\phi^A_{210}\rangle\right)\\
&&-\left(\gamma\varepsilon_A- \frac{3^6}{2^7}\gamma^2\varepsilon_A^2 \right)|\phi^A_{100}\rangle, \label{Hydrogen-excited1}\\
\nonumber |\psi^A_{22}\rangle&=&\frac{1}{\sqrt{2}}\left(1-\frac{1}{2}\gamma^2\varepsilon_A^2\right) \left(|\phi^A_{200}\rangle+|\phi^A_{210}\rangle\right) \\
&&+\left(\gamma\varepsilon_A+ \frac{3^6}{2^7}\gamma^2\varepsilon_A^2 \right)|\phi^A_{100}\rangle.\label{Hydrogen-excited2}
\end{eqnarray}
Obviously, the corrected excited states (\ref{Hydrogen-excited1}) and (\ref{Hydrogen-excited2}) are orthogonal to the ground state (\ref{HydrogenA-ground}) up to second order (i.e., up to terms proportional to $q^2$), so both $\langle\psi^{\sigma}_{21}|\psi^{\sigma}_1\rangle$ and $\langle\psi^{\sigma}_{22}|\psi^{\sigma}_1\rangle$ can be taken as zero in the calculations. This is also required because the new eigenstates (ground or excited) of the atoms in the presence of an electrostatic field should still be orthogonal.
Now we show the electrostatic-field-related quantum corrections to the induced classical dipole-dipole interaction between two ground-state atoms.
First, we consider the correction related to the process of two-photon exchange between two dressed hydrogen atoms, wherein the atomic polarizability is effectively modified by the electrostatic field.
Although the interaction energy shift (\ref{E_AB-2FZ}) involves all components of the polarizability tensor, the only component with a nonvanishing field-induced modification is $\alpha_{33}(0)$, since the external electric field is assumed to be along the $z$ axis. According to Eqs. (\ref{HydrogenA-ground}), (\ref{Hydrogen-excited1}), and (\ref{Hydrogen-excited2}), it is easy to obtain
\begin{equation}\label{alpha33}
\alpha_{33}(0)=-\frac{2^{18}}{3^{11} E_1}q^2 a_0^2-\frac{2^{22}}{3^{11}E_1}q^2a_0^2\left(\frac{q a_0 \varepsilon}{E_1}\right)^2.
\end{equation}
The first term is the undressed polarizability and the second term is the polarizability modified by the electrostatic field $\varepsilon$.
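In atomic units ($q=a_0=1$, $E_1=-1/2$ Ha, so $E_2-E_1=3/8$ Ha), the first term of (\ref{alpha33}) can be cross-checked exactly against the textbook second-order polarizability built from the single matrix element $\langle 100|z|210\rangle = 2^7\sqrt{2}/3^5$; this exact-arithmetic sketch is ours, not part of the paper.

```python
from fractions import Fraction

# |<100|z|210>|^2 = (2^7*sqrt(2)/3^5)^2 = 2*(128/243)^2, exactly rational
z2 = 2 * Fraction(128, 243) ** 2
# Standard second-order static polarizability, n = 2 intermediate states only:
# alpha = 2 |<z>|^2 / (E2 - E1) with E2 - E1 = 3/8 Ha
alpha_n2 = 2 * z2 / Fraction(3, 8)
# The paper's leading term -2^18 q^2 a0^2 / (3^11 E1) at E1 = -1/2 Ha
alpha_paper = Fraction(2**19, 3**11)
```

Both expressions reduce to $524288/177147 \approx 2.96$ atomic units, confirming the powers $2^{18}/3^{11}$ in Eq. (\ref{alpha33}).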
The interaction potential relevant to the electrostatic field is then reflected in the first term in Eq. (\ref{M_AB}), i.e.,
\begin{equation}
\alpha^A_{33}(0)\alpha^B_{33}(0)=\frac{2^{36}}{3^{22}E_1^2}q^4 a_0^4+\frac{2^{40}}{3^{22}E_1^4}q^6 a_0^6(\varepsilon_A^2+\varepsilon_B^2) +\frac{2^{44}}{3^{22}E_1^6}q^8a_0^8\varepsilon_A^2\varepsilon_B^2.
\end{equation}
Thus, in the sixth order, the electrostatic-field-related quantum correction to the induced classical dipole-dipole interaction is obtained as
\begin{equation}\label{E^M}
\Delta E^{(6)}_{AB}=-5\frac{2^{35}\hbar c q^6 a_0^6}{3^{22}\pi^3\epsilon_0^2 E_1^4 r^7}(\varepsilon_A^2+\varepsilon_B^2).
\end{equation}
Now we numerically compare the external-field-related quantum correction with the unperturbed CP potential between two ground-state atoms due to the exchange of two virtual photons in vacuum~\cite{CP}.
Here we consider only the contributions from $n=1$ and $2$ and assume that the atoms are isotropically polarizable, i.e., $\alpha_{ij}=\alpha\, \delta_{ij}$. Then the CP potential takes the form
\begin{equation}
U_{AB}=-\frac{23\hbar c}{64\pi^3\epsilon_0^2 r^7}\alpha^A \alpha^B =-23\frac{2^{30}\hbar c q^4 a_0^4}{3^{22}\pi^3\epsilon_0^2 E_1^2 r^7},
\end{equation}
where $\alpha^{A (B)}$ is the scalar polarizability of atom $A$ or $B$ defined as
\begin{equation}
\alpha=\sum_{s}\frac{2}{3E_{s0}}\hat d_{i}^{0s} \hat d_{i}^{s0}.
\end{equation}
Taking $\varepsilon_A=\varepsilon_B\simeq10^8$ V/m, which is within the applicable range of the perturbation theory and can be reached in the laboratory \cite{Maron1986,Bailey1995}, and $r=10^{-6}$ m, which is larger than the transition wavelength ($\sim 10^{-8}$ m), the relative magnitude of the corrected term with respect to the unperturbed CP potential can be obtained as
\begin{equation}
\frac{\Delta E^{(6)}_{AB}}{U_{AB}}=\frac{160}{23E_1^2}q^2 a_0^2(\varepsilon_A^2 + \varepsilon_B^2)\simeq10^{-6}.
\end{equation}
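Plugging in SI values (our own numerical cross-check with rounded CODATA constants, not part of the paper) confirms the quoted order of magnitude:

```python
q, a0 = 1.602e-19, 5.292e-11   # elementary charge (C), Bohr radius (m)
E1 = 13.6 * 1.602e-19          # |ground-state energy| of hydrogen, in J
epsA = epsB = 1e8              # external field strengths (V/m), as in the text

# Ratio of the sixth-order correction to the unperturbed CP potential
ratio6 = 160.0 / 23.0 * (q * a0) ** 2 * (epsA**2 + epsB**2) / E1**2
# ratio6 comes out at a few times 10^-6, i.e. of order 10^-6 as quoted
```

The smallness is controlled by the square of the Stark-shift-to-binding-energy ratio $(q a_0 \varepsilon / E_1)^2 \sim 10^{-7}$.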
That is, the electrostatic-field-related quantum correction in the sixth order is much smaller than the unperturbed two-photon-exchange CP interaction.
This is expected because the former is a sixth-order effect while the latter is a fourth-order effect.
Second, we calculate the quantum correction to the classical electrostatic dipole-dipole interaction related to the process of three-photon exchange between two dressed hydrogen atoms.
According to Eqs. (\ref{HydrogenA-ground}), (\ref{Hydrogen-excited1}) and (\ref{Hydrogen-excited2}), it is easy to obtain the nonvanishing component of the static first hyperpolarizability, i.e.,
\begin{equation}
\beta_{333}(0)=\frac{2^{38}}{3^{22}E_1^3}q^4 a_0^4 \varepsilon.
\end{equation}
Then the interaction energy (\ref{E_AB-3FZ}) can be expressed as
\begin{equation}\label{E3_AB hydrogen}
\Delta E_{AB}^{(8)}=-7\frac{2^{67}\hbar^2 c^2 q^8 a_0^8 }{3^{43}\pi^5 \epsilon_0^3 E_1^6 r^{11}}\varepsilon_A \varepsilon_B.
\end{equation}
This term is absent without the electrostatic field since the exchange of three virtual photons between two ground-state hydrogen atoms is forbidden in the absence of an electrostatic field.
Taking $r=10^{-6}$ m and $\varepsilon_A=\varepsilon_B\simeq10^8$ V/m, the relative magnitude of this interaction energy with respect to the unperturbed two-photon-exchange CP potential is obtained as
\begin{equation}
\frac{\Delta E_{AB}^{(8)}}{U_{AB}}=\frac{7\times 2^{37}}{23\times3^{21}}\frac{\hbar c q^4 a_0^4} {\pi^2\epsilon_0 E_1^4 r^4}\varepsilon_A \varepsilon_B\simeq10^{-20}.
\end{equation}
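Again, the quoted order of magnitude can be reproduced numerically (our own cross-check with rounded SI constants, not part of the paper):

```python
import math

hbar_c = 1.0546e-34 * 2.9979e8   # hbar*c in J*m
q, a0 = 1.602e-19, 5.292e-11     # C, m
E1 = 13.6 * 1.602e-19            # J
eps0 = 8.854e-12                 # vacuum permittivity, F/m
epsA = epsB = 1e8                # V/m
r = 1e-6                         # interatomic distance, m

ratio8 = (7 * 2**37 / (23 * 3**21)) * hbar_c * (q * a0)**4 * epsA * epsB \
         / (math.pi**2 * eps0 * E1**4 * r**4)
# ratio8 comes out at a few times 10^-21, consistent with the quoted ~10^-20
```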
That is, the electrostatic-field-enabled three-photon-exchange interatomic interaction is much smaller than the unperturbed
two-photon-exchange CP interaction, which suggests that an experimental observation of this effect remains a distant prospect.
Finally, we note that, apart from the electrostatic-field-related quantum correction to the induced classical dipole-dipole interaction between two ground-state atoms considered here, there are other terms in the complete result of the interatomic interaction. We now make an order-of-magnitude estimation of these terms and compare them with the quantum corrections to the classical electrostatic dipole-dipole interaction given in Eqs. (\ref{E^M}) and (\ref{E3_AB hydrogen}).
(i) As mentioned in Sec. \ref{CPEF}, in the sixth order, the two-photon-exchange CP interaction can also be modified by vacuum fluctuations, which is relevant to the self-interaction processes and can be regarded as causing an effective modification of the denominators of the atomic polarizability due to the vacuum-induced energy shift \cite{Bullough1,Bullough2,Bullough3,Bullough4,Bullough5,Donaire1,Donaire2}.
As an order-of-magnitude estimation, we write the vacuum-modified atomic polarizability
in analogy to the electrostatic-field-modified polarizability (\ref{alpha33}) as
\begin{equation}
\alpha\sim-\frac{q^2 a_0^2}{E_1}-\frac{q^2 a_0^2}{E_1}\left(\frac{\hbar\delta\omega_L}{E_1}\right)^2.
\end{equation}
Here $\hbar\delta\omega_L$ is the vacuum-induced Lamb shift, which plays a role similar to that of the electrostatic-field-induced Stark shift ($\sim q a_0 \varepsilon$).
Then the leading modification of the two-photon-exchange CP potential induced by vacuum fluctuations (in the sixth order) can be approximated as
\begin{equation}\label{E^M_L}
\Delta E^{(vac)}_{AB}\sim-\frac{\hbar c q^4 a_0^4}{\epsilon_0^2 E_1^2 r^7} \left(\frac{\hbar\delta\omega_L}{E_1}\right)^2.
\end{equation}
The ratio of the result (\ref{E^M_L}) to the sixth-order quantum correction to the classical electrostatic dipole-dipole interaction (\ref{E^M}) can be written as
\begin{equation}
\frac{\Delta E^{(vac)}_{AB}}{\Delta E^{(6)}_{AB}}\sim\frac{\hbar^2 \delta\omega_L^2}{q^2 a_0^2 \varepsilon^2}.
\end{equation}
As a numerical estimation, we take the Lamb shift $\hbar\delta\omega_L$ to be of the order of $\sim10^{-6}$ eV, i.e., the energy difference between $2^{2}S_{1/2}$ and $2^{2}P_{1/2}$ of a hydrogen atom \cite{Lamb1947}. To achieve $\Delta E^{(6)}_{AB}\gg\Delta E^{(vac)}_{AB}$, it is required that
\begin{equation}
\varepsilon\gg\frac{\hbar \delta\omega_L}{q a_0}\simeq10^{4}~\text{V/m}.
\end{equation}
That is, when the external electrostatic field is much larger than $10^{4}~\text{V/m}$, the Stark shift is much larger than the Lamb shift and the electrostatic-field-related quantum correction (sixth order) to the classical electrostatic dipole-dipole interaction is much larger than the modification of the two-photon-exchange CP interaction by vacuum fluctuations (sixth order).
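The crossover field can be checked directly (our own numerical sketch, not part of the paper):

```python
q, a0 = 1.602e-19, 5.292e-11      # elementary charge (C), Bohr radius (m)
lamb_shift = 1e-6 * 1.602e-19     # Lamb shift ~1e-6 eV, converted to J

# Field at which the Stark shift q*a0*eps equals the Lamb shift
eps_min = lamb_shift / (q * a0)
# eps_min comes out near 2e4 V/m, consistent with the quoted 10^4 V/m
```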
(ii) As discussed in Sec. \ref{CPEF}, the electrostatic field enables a process of three-photon exchange for neutral atoms within the dipole coupling approximation.
However, when the quadrupole coupling is taken into account, three-photon-exchange processes may also occur in the absence of an external electrostatic field, involving one quadrupole and two dipole transitions; the corresponding interaction potential can be estimated by dimensional analysis as
\begin{equation}\label{E^{QD}}
\Delta E^{Q}_{AB}\sim-\frac{\hbar^2 c^2 q^6 a_0^8}{\epsilon_0^3 E_1^4 r^{13}}.
\end{equation}
Since it scales as $r^{-13}$, it is usually smaller than the electrostatic-field-enabled three-photon-exchange interatomic interaction $\Delta E^{(8)}_{AB}$ in the far region.
(iii) Another point of consideration is the higher-order far-zone expansion of the two-photon-exchange CP interaction. In fact, it is well known that the $r^{-7}$ CP potential is the leading term of the complete two-photon-exchange interatomic CP potential in the far zone, which takes the form \cite{CP}
\begin{equation}
E_{\text{CP}}=-\frac{\hbar c \alpha^A \alpha^B}{16\pi^3\epsilon_0^2 r^6}\int^{\infty}_{0}\frac{k_A^2 k_B^2 e^{-2u r}} {(k_A^2+u^2)(k_B^2+u^2)}(u^4 r^4+ 2u^3 r^3+ 5u^2 r^2+ 6u r+ 3) du,
\end{equation}
where $k_{A(B)}=\omega_{A(B)}/c$. For identical atoms ($k_A=k_B$), after some algebra, we obtain the far-zone expansion of the CP potential, i.e.,
\begin{equation}\label{E_CP}
E_{\text{CP}}=-\frac{23\hbar c \alpha^A \alpha^B}{64\pi^3\epsilon_0^2 r^7} \left(1-\frac{129}{23k_A^2 r^2} +\frac{1917}{23k_A^4r^4}- \cdots\right).
\end{equation}
This shows that there exist terms scaling as $r^{-9}$ and $r^{-11}$ in the far-zone expansion (labeled as $\Delta_1 E_{\text{CP}}$ and $\Delta_2 E_{\text{CP}}$, respectively). Since the electrostatic-field-enabled three-photon-exchange interatomic interaction $\Delta E^{(8)}_{AB}$ behaves as $r^{-11}$, it is much smaller than the $r^{-9}$ term in Eq. (\ref{E_CP}) in the far region, i.e., $\Delta E^{(8)}_{AB}\ll\Delta_1 E_{\text{CP}}$. Moreover, the relative order of magnitude between the $r^{-11}$ term (i.e., $\Delta_2 E_{\text{CP}}$) in Eq. (\ref{E_CP}) and $\Delta E^{(8)}_{AB}$ can be obtained as
\begin{equation}
\frac{\Delta_2 E_{\text{CP}}}{\Delta E_{AB}^{(8)}}\sim\frac{\epsilon_0 E_1^4 \lambda^4}{\hbar c q^4 a_0^4 \varepsilon^2},
\end{equation}
where $\lambda=k_A^{-1}$ is the transition wavelength.
A numerical estimation shows that the electrostatic-field-enabled three-photon-exchange interaction $\Delta E^{(8)}_{AB}$ is smaller than $\Delta_2 E_{\text{CP}}$ for realistic values of the electrostatic field. This is expected, since the former is an eighth-order effect while the latter is a fourth-order effect.
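The coefficients $-129/23$ and $+1917/23$ in the far-zone expansion (\ref{E_CP}) can be cross-checked numerically; the sketch below (ours, not from the paper, with $k_A=1$) evaluates the integral in the full CP potential by a midpoint rule and compares with the truncated series at $k_A r = 20$.

```python
import math

def cp_integral(r, n=200_000, umax=3.0):
    """Midpoint rule for int_0^inf e^{-2ur}(x^4+2x^3+5x^2+6x+3)/(1+u^2)^2 du,
    with x = u*r and k_A = 1, i.e. the u-integral in the full CP potential."""
    h = umax / n
    acc = 0.0
    for i in range(n):
        u = (i + 0.5) * h
        x = u * r
        acc += math.exp(-2.0 * x) * (x**4 + 2*x**3 + 5*x**2 + 6*x + 3) / (1 + u*u)**2
    return acc * h

r = 20.0
val = cp_integral(r)
# Three-term far-zone series: 23/(4r) * (1 - 129/(23 r^2) + 1917/(23 r^4))
series = 23.0 / (4 * r) * (1 - 129.0 / (23 * r**2) + 1917.0 / (23 * r**4))
```

The leading term $23/(4r)$ reproduces the $-23\hbar c\alpha^A\alpha^B/(64\pi^3\epsilon_0^2 r^7)$ prefactor after multiplying by $-\hbar c\alpha^A\alpha^B/(16\pi^3\epsilon_0^2 r^6)$.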
\section{Summary}
\label{sec_disc}
In this paper we investigated the interatomic interaction between two ground-state atoms in the presence of an external electrostatic field within the dipole coupling approximation. Based on the perturbation method,
we demonstrated that, in the fourth order, the external electrostatic field only induces a classical dipole-dipole interaction between the two atoms, in contrast to a recent claim in Ref. \cite{Passante2020}, while in higher orders, such a classical effect can be corrected by two kinds of mechanisms.
In the sixth order, the external field effectively modifies the atomic polarizability to give rise to a two-photon-exchange quantum correction, while in the eighth order, the external field enables an additional process of three-photon exchange which is not allowed in the absence of the external field, and this process generates an $r^{-11}$ term in the interaction potential in the far regime.
Numerical estimations showed that these external-field-related quantum corrections are much smaller than the two-photon-exchange Casimir-Polder interaction.
\begin{acknowledgments}
This work was supported in part by the NSFC under Grants No. 11690034, No. 11805063, and No. 12075084 and the Hunan Provincial Natural Science Foundation of China under Grant No. 2020JJ3026.
\end{acknowledgments}
\section{Introduction}
The high-luminosity upgrade of the LHC will lead to a substantial reduction of the statistical error in searches for physics beyond the standard model. In order to draw reliable conclusions from the experiments, the precision of theoretical predictions has to be improved accordingly. In particular, this concerns the effects of double parton scattering (DPS), which can be parameterized through so-called double parton distributions (DPDs). The corresponding contribution to the proton-proton scattering cross section involves the following integral over the transverse quark distance $\tvec{y}$ \cite{Diehl:2011yj}:
\begin{align}
\int \mathrm{d}^2\tvec{y} \;
F_{a_1 a_2}(x_1, x_2, \tvec{y}) \, F_{b_1 b_2}(x_1', x_2', \tvec{y}) \,,
\end{align}
with the DPD $F_{a_1 a_2}(x_1,x_2,\tvec{y})$, which depends on the momentum fractions $x_i$ of the two scattering quarks within one proton, as well as on the transverse distance $\tvec{y}$ between them. Like ordinary PDFs, DPDs are non-perturbative objects, whose determination from first principles is quite challenging.
Due to this lack of knowledge, DPDs are often assumed to factorize completely w.r.t.\ their arguments $x_i$ (momentum fractions) and $\tvec{y}$:
\begin{align}
\label{eq:dpd-pocket}
F_{a_1 a_2}(x_1, x_2, \tvec{y})
& \overset{?}{=} f_{a_1}(x_1)\, f_{a_2}(x_2) \, G(\tvec{y}) \,.
\end{align}
Here $G(\tvec{y})$ is assumed to be a universal (flavor-independent) function. This leads to the so-called pocket formula \cite{Bartalini:2011jp}:
\begin{align}
\sigma_{\mathrm{DPS},ij} = \frac{1}{C} \frac{\sigma_{\mathrm{SPS},i}\ \sigma_{\mathrm{SPS},j}}{\sigma_{\mathrm{eff}}} \,.
\label{eq:dpd-pocket-formula}
\end{align}
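As a minimal illustration of the pocket formula (all numbers below are hypothetical, not from this article; $C$ is the usual combinatorial factor and $\sigma_{\mathrm{eff}}$ a fitted effective cross section, typically of order $15$ mb):

```python
# Hypothetical single-parton-scattering cross sections for two hard processes i, j
sigma_sps_i = 1.2        # pb (assumed value)
sigma_sps_j = 0.8        # pb (assumed value)
sigma_eff = 15.0e9       # pb, i.e. ~15 mb -- a typical fitted value, assumed here
C = 1                    # combinatorial factor: 1 for distinct processes, 2 for identical ones

# Pocket-formula estimate of the double-parton-scattering cross section
sigma_dps = sigma_sps_i * sigma_sps_j / (C * sigma_eff)
```

The strong suppression $\sigma_{\mathrm{DPS}} \ll \sigma_{\mathrm{SPS}}$ reflects the large value of $\sigma_{\mathrm{eff}}$ relative to typical hard cross sections.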
Non-perturbative access to DPDs is provided by lattice simulations. In the present work we summarize our simulations of proton four-point functions, which can be related to DPD Mellin moments; here we restrict ourselves to the lowest moment and present the most important results. These include aspects regarding the validity of the aforementioned pocket formula and further factorization assumptions. For more details, we refer to \cite{Bali:2021gel}.
\section{Double parton distributions and two-current matrix elements}
For an unpolarized proton, double parton distributions can be defined as \cite{Diehl:2011yj}
\begin{align}
\label{eq:dpd-def}
F_{a_1 a_2}(x_1,x_2,\tvec{y})
= 2p^+ \int \mathrm{d} y^- \int \frac{\mathrm{d} z^-_1}{2\pi}\, \frac{\mathrm{d} z^-_2}{2\pi}\,
& e^{i\mskip 1.5mu ( x_1^{} z_1^- + x_2^{} z_2^-)\mskip 1.5mu p^+}
\nonumber \\
& \times
\sideset{}{^\prime}\sum_\lambda \bra{p,\lambda} \mathcal{O}_{a_1}(y,z_1)\, \mathcal{O}_{a_2}(0,z_2) \ket{p,\lambda}
\,,
\end{align}
where $\sideset{}{^\prime}\sum_\lambda$ indicates the average over the two helicity states. The definition involves the light-cone operators
\begin{align}
\label{eq:quark-ops}
\mathcal{O}_{a}(y,z)
&= \left.\bar{q}\left( y - {\textstyle\frac{1}{2}} z \right)\, \Gamma_{a} \, q\left( y + {\textstyle\frac{1}{2}} z \right)
\right|_{z^+ = y^+_{} = 0,\, \tvec{z} = \tvec{0}}\,,
\end{align}
where the Dirac matrix $\Gamma_a$ selects the quark polarization. We distinguish between the following three possibilities
\begin{align}
\label{eq:quark-proj}
\Gamma_q & = {\textstyle\frac{1}{2}} \gamma^+ \,, &
\Gamma_{\Delta q} &= {\textstyle\frac{1}{2}} \gamma^+\gamma_5 \,, &
\Gamma_{\delta q}^j = {\textstyle\frac{1}{2}} i \sigma^{j+}_{} \gamma_5 \quad (j=1,2) \,,
\end{align}
corresponding to unpolarized, longitudinally polarized and transversely polarized quarks, respectively. Due to rotational symmetry the DPDs can be decomposed in terms of rotationally invariant functions:
\begin{align} \label{eq:invar-dpds}
F_{q_1 q_2}(x_1,x_2, \tvec{y}) &= f_{q_1 q_2}(x_1,x_2, y^2) \,,
\nonumber \\
F_{\Delta q_1 \Delta q_2}(x_1,x_2, \tvec{y})
&= f_{\Delta q_1 \Delta q_2}(x_1,x_2, y^2) \,,
\nonumber \\
F_{\delta q_1 q_2}^{j_1}(x_1,x_2, \tvec{y})
&= \epsilon^{j_1 k} \tvec{y}^k\, m f_{\delta q_1 q_2}(x_1,x_2, y^2) \,,
\nonumber \\
F_{q_1 \delta q_2}^{j_2}(x_1,x_2, \tvec{y})
&= \epsilon^{j_2 k} \tvec{y}^k\, m f_{q_1 \delta q_2}(x_1,x_2, y^2) \,,
\nonumber \\
F_{\delta q_1 \delta q_2}^{j_1 j_2}(x_1,x_2, \tvec{y})
&= \delta^{j_1 j_2} f^{}_{\delta q_1 \delta q_2}(x_1,x_2, y^2)
\nonumber \\
&\quad + \bigl( 2 \tvec{y}^{j_1} \tvec{y}^{j_2}
- \delta^{j_1 j_2} \tvec{y}^2 \bigr)\mskip 1.5mu
m^2 f^{\mskip 1.5mu t}_{\delta q_1 \delta q_2}(x_1,x_2, y^2) \,.
\end{align}
Further symmetries are discussed in \cite{Diehl:2011yj,Bali:2021gel}.
In theoretical prescriptions, DPDs are often approximated in terms of impact parameter distributions $f_a(x,\tvec{b})$, i.e.\ a factorization of the form
\begin{align}
\label{eq:dpd-fact}
F_{a_1 a_2}(x_1, x_2, \tvec{y})
& \overset{?}{=} \int \mathrm{d}^2\tvec{b}\; f_{a_1}(x_1, \tvec{b} + \tvec{y})\,
f_{a_2}(x_2, \tvec{b}) \,.
\end{align}
Formally, this can be derived by inserting a complete set of eigenstates between the two light-cone operators in \eqref{eq:dpd-def} and assuming that only the proton states dominate. In this step any possible correlations between the two scattering quarks are neglected. Differences between the two sides of the equation thus indicate the strength of quark-quark correlations.
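A toy example (ours; Gaussian profiles are an assumption for illustration, not the paper's model) shows the structure of this convolution: for isotropic 2D Gaussian profiles of width $\sigma$, the $\tvec{b}$-integral yields a Gaussian in $\tvec{y}$ of width $\sqrt{2}\sigma$.

```python
import math

sigma = 0.6  # hypothetical transverse width (arbitrary units)

def f(b2):
    """Normalized isotropic 2D Gaussian impact-parameter profile, argument b^2."""
    return math.exp(-b2 / (2 * sigma**2)) / (2 * math.pi * sigma**2)

def F_conv(y, n=400, L=5.0):
    """Midpoint evaluation of int d^2b f((b+y)^2) f(b^2), with y along the x-axis."""
    h = 2 * L / n
    acc = 0.0
    for i in range(n):
        for j in range(n):
            bx, by = -L + (i + 0.5) * h, -L + (j + 0.5) * h
            acc += f((bx + y)**2 + by**2) * f(bx**2 + by**2)
    return acc * h * h

y = 1.0
val = F_conv(y)
# Analytic convolution of two Gaussians: width doubles in variance
expected = math.exp(-y * y / (4 * sigma**2)) / (4 * math.pi * sigma**2)
```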
As for ordinary parton distribution functions, obtaining information on the $x_i$-dependence requires the treatment of light-like distances between the quark fields, which cannot be realized on a Euclidean lattice. Hence, we consider Mellin moments, in which the corresponding degrees of freedom are integrated out. The first DPD Mellin moment is defined by:
\begin{align}
\label{eq:skewed-inv-mellin-mom-def}
I_{a_1 a_2}(y^2)
&= \int_{-1}^{1} \mathrm{d} x_1^{} \int_{-1}^{1} \mathrm{d} x_2^{} \;
f_{a_1 a_2}(x_1,x_2,y^2)
\,.
\end{align}
After the integration over the momentum fractions $x_i$, the light-cone operators appearing in \eqref{eq:dpd-def} become local operators. If the distance between the two operators is purely spatial, i.e.\ $y^0 = 0$, the corresponding matrix elements can be evaluated on the lattice. In general, we treat matrix elements of the following form:
\begin{align}
\label{eq:mat-els}
M^{\mu_1 \cdots \mu_2 \cdots}_{q_1 q_2, i_1 i_2}(p,y)
&:= \sideset{}{^\prime}\sum_\lambda \bra{p,\lambda} J^{\mu_1 \cdots}_{q_1, i_1}(y)\,
J^{\mu_2 \cdots}_{q_2, i_2}(0) \ket{p,\lambda} \,,
\end{align}
with the local currents
\begin{align}
\label{eq:local-ops}
J_{q, V}^\mu(y) &= \bar{q}(y) \mskip 1.5mu \gamma^\mu\mskip 1.5mu q(y) \,,
&
J_{q, A}^\mu(y) &= \bar{q}(y) \mskip 1.5mu \gamma^\mu \gamma_5\, q(y) \,,
&
J_{q, T}^{\mu\nu}(y) &= \bar{q}(y) \mskip 1.5mu \sigma^{\mu\nu} \mskip 1.5mu q(y) \,.
\end{align}
Exploiting Lorentz symmetry these two-current matrix elements can be decomposed in terms of a certain set of Lorentz invariant functions. For instance, the vector-vector matrix elements can be rewritten as:
\begin{align}
\label{eq:tensor-decomp}
M^{\{\mu\nu\}}_{q_1 q_2, V V}
- \tfrac{1}{4} g^{\mu\nu} g_{\alpha\beta}
M^{\alpha\beta}_{q_1 q_2, V V}
& = \left( 2p^\mu p^\nu - {\textstyle\frac{1}{2}} g^{\mu\nu} p^2 \right)\, A_{q_1 q_2}^{}
+ \left( 2y^{\{\mu p^\nu\}} - {\textstyle\frac{1}{2}} g^{\mu\nu} py \right)\, m^2\mskip 1.5mu B_{q_1 q_2}^{} \nonumber\\
&\quad + \left( 2y^\mu y^\nu - {\textstyle\frac{1}{2}} g^{\mu\nu} y^2 \right)\, m^4\mskip 1.5mu C_{q_1 q_2}^{} \,,
\end{align}
and similarly for all other current combinations. Notice that we use symmetrized and trace-subtracted versions of the matrix elements in order to reduce the number of independent invariant functions. At the twist-two level, a specific subset of invariant functions contributes: the so-called twist-two functions $A_{q_1 q_2}$, $A_{\Delta q_1 \Delta q_2}$, $A_{\delta q_1 q_2}$, $A_{q_1 \delta q_2}$, $A_{\delta q_1 \delta q_2}$ and $B_{\delta q_1 \delta q_2}$. It can be shown that these functions are directly related to the DPD Mellin moments \eqref{eq:skewed-inv-mellin-mom-def} by the following relations:
\begin{align}
\label{eq:skewed-mellin-inv-fct}
I_{a_1 a_2}(y^2)
&= \int_{-\infty}^{\infty} \mathrm{d}(py)\, A_{a_1 a_2}(py,y^2) \,,
\nonumber \\
I^t_{\delta q \delta q^\prime}(y^2)
&= \int_{-\infty}^{\infty} \mathrm{d}(py)\, B_{\delta q \delta q^\prime}(py,y^2) \,.
\end{align}
In this article we shall present lattice results on the twist-two functions, as well as on the DPD Mellin moments themselves.
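The trace subtraction in (\ref{eq:tensor-decomp}) can be verified numerically; this sketch (ours, not from the paper) checks with random four-vectors that each tensor structure on the r.h.s.\ is traceless w.r.t.\ $g_{\mu\nu}=\mathrm{diag}(1,-1,-1,-1)$, assuming the symmetrization convention $y^{\{\mu} p^{\nu\}} = (y^\mu p^\nu + y^\nu p^\mu)/2$.

```python
import random

random.seed(7)
g = [1.0, -1.0, -1.0, -1.0]                     # diagonal Minkowski metric
p = [random.uniform(-2, 2) for _ in range(4)]   # random four-vectors
y = [random.uniform(-2, 2) for _ in range(4)]

def dot(a, b):                                  # Minkowski product a.b
    return sum(g[m] * a[m] * b[m] for m in range(4))

def S(m, n):                                    # metric tensor g^{mu nu}
    return g[m] if m == n else 0.0

def trace(T):                                   # g_{mu nu} T^{mu nu}
    return sum(g[m] * T[m][m] for m in range(4))

p2, py, y2 = dot(p, p), dot(p, y), dot(y, y)
# The three trace-subtracted structures multiplying A, B, C in the decomposition
T_A = [[2*p[m]*p[n] - 0.5*S(m, n)*p2 for n in range(4)] for m in range(4)]
T_B = [[y[m]*p[n] + y[n]*p[m] - 0.5*S(m, n)*py for n in range(4)] for m in range(4)]
T_C = [[2*y[m]*y[n] - 0.5*S(m, n)*y2 for n in range(4)] for m in range(4)]
```

Since $g^{\mu\nu}g_{\mu\nu}=4$, each subtraction of $\tfrac{1}{2}g^{\mu\nu}$ times the corresponding invariant removes the trace exactly.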
\section{Lattice simulations}
The unpolarized two-current matrix elements \eqref{eq:mat-els} can be evaluated on the lattice through the so-called four-point function for a given momentum $\mvec{p}$:
\begin{align}
C^{ij,\mvec{p}}_{\mathrm{4pt}}(\mvec{y},t,\tau)
&:=
a^6 \sum_{\mvec{z}^\prime,\mvec{z}}
e^{-i\mvec{p}(\mvec{z}^\prime-\mvec{z})}\
\left\langle \operatorname{tr} \left\{
P_+ \mathcal{P}(\mvec{z}^\prime,t)\ J_i(\mvec{y},\tau)\
J_j(\mvec{0},\tau)\ \overline{\mathcal{P}}(\mvec{z},0)
\right\} \right\rangle\,,
\label{eq:4ptdef}
\end{align}
with the proton interpolator $\mathcal{P}$ and the parity projection operator $P_+$. On a Euclidean lattice this is possible if the distance between the two currents is purely spatial, i.e.\ $y^0 = 0$. A relation between four-point functions and two-current matrix elements is given by:
\begin{align}
2V \sqrt{m^2 + \mvec{p}^2}
\left.
\frac{C^{ij,\mvec{p}}_{\mathrm{4pt}}(\mvec{y},t,\tau)}
{C^{\mvec{p}}_\mathrm{2pt}(t)}
\right|_{0 \ll \tau \ll t} &=
\left.
\frac{
\sum_{\lambda\lambda^\prime}
\bar{u}^{\lambda^\prime}(p) P_+ u^{\lambda}(p)\
\bra{p,\lambda} J_i(y)\ J_j(0) \ket{p,\lambda^\prime}
}{
\sum_{\lambda}
\bar{u}^{\lambda}(p) P_+ u^{\lambda}(p)
}\right|_{y^0 = 0} \nonumber\\
&= \sideset{}{^\prime}\sum_\lambda \bra{p,\lambda} J_{i}(y)\,
J_{j}(0) \ket{p,\lambda} \,,
\label{eq:4pt-spin-sum}
\end{align}
where the limit on the l.h.s.\ ensures that excited states are suppressed. Evaluating the fermionic part of the four-point function \eqref{eq:4ptdef} leads to a sum of Wick contractions, which is specific to the current flavors. There are five kinds of Wick contractions, which we call $C_1$, $C_2$, $S_1$, $S_2$ and $D$. A graphical representation of the corresponding topologies in terms of quark lines is shown in figure\ \ref{fig:graphs}.
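The plateau condition $0 \ll \tau \ll t$ can be illustrated with a toy model (synthetic numbers of our own choosing, not lattice data): the ratio equals the ground-state matrix element up to excited-state contaminations that decay exponentially away from both ends of the time interval.

```python
import math

M_true = 0.12            # hypothetical ground-state matrix element (lattice units)
A_exc, dE = 0.05, 0.45   # hypothetical excited-state amplitude and energy gap
t = 12                   # source-sink separation in lattice units

def ratio(tau):
    """Toy model of 2*E_p*V * C_4pt/C_2pt: plateau value plus excited-state
    contamination entering from the source (tau = 0) and the sink (tau = t)."""
    return M_true + A_exc * (math.exp(-dE * tau) + math.exp(-dE * (t - tau)))

plateau = ratio(t // 2)  # midpoint value, closest to the true matrix element
```

In practice the matrix element is read off from the flat region around $\tau = t/2$, where both contamination terms are maximally suppressed.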
\begin{figure}
\begin{center}
\includegraphics[scale=0.82]{graphs/graphs.pdf}
\end{center}
\caption{Depiction of the five kinds of Wick contractions that contribute to the four-point function. For the graphs $C_1$, $C_2$ and $S_1$ the explicit contribution depends on the quark flavor of the insertion operator. Since we have the same particle at the source and the sink, $S_1$ depends only on one quark flavor. Moreover, if all considered quarks have the same mass (as is the case in our setup), $C_2$ depends only on the flavor of the propagators connected to the source or sink. In this graphic we also indicate the disconnected parts $G_{3\mathrm{pt},q}^i$ and $G_{2\mathrm{pt}}$, as well as the loops $L_1^i$ and $L_2^{ij}$. \label{fig:graphs}}
\end{figure}
The explicit contraction again depends on the involved quark flavor combinations, which are indicated by corresponding subscripts. For light quarks and flavor conserving currents, the corresponding matrix elements in terms of the Wick contractions are given by:
\begin{align}
\left. M_{ud, ij}(p,y)\right|_{y^0 = 0}
&=
C^{ij,\mvec{p}}_{1,uudd}(\mvec{y}) +
S^{ij,\mvec{p}}_{1,u}(\mvec{y}) +
S^{ji,\mvec{p}}_{1,d}(-\mvec{y}) +
D^{ij,\mvec{p}}(\mvec{y})\,,
\nonumber \\
\left. M_{uu, ij}(p,y)\right|_{y^0 = 0}
&=
C^{ij,\mvec{p}}_{1,uuuu}(\mvec{y}) +
C^{ij,\mvec{p}}_{2,u}(\mvec{y}) +
C^{ji,\mvec{p}}_{2,u}(-\mvec{y}) +
S^{ij,\mvec{p}}_{1,u}(\mvec{y}) +
S^{ji,\mvec{p}}_{1,u}(-\mvec{y})
\nonumber \\
&\quad +
S_{2}^{ij,\mvec{p}}(\mvec{y}) +
D^{ij,\mvec{p}}(\mvec{y})\,,
\nonumber \\
\left. M_{dd, ij}(p,y)\right|_{y^0 = 0}
&=
C^{ij,\mvec{p}}_{2,d}(\mvec{y}) +
C^{ji,\mvec{p}}_{2,d}(-\mvec{y}) +
S^{ij,\mvec{p}}_{1,d}(\mvec{y}) +
S^{ji,\mvec{p}}_{1,d}(-\mvec{y})
\nonumber \\
&\quad +
S_{2}^{ij,\mvec{p}}(\mvec{y}) +
D^{ij,\mvec{p}}(\mvec{y})\,,
\label{eq:phys_me_decomp}
\end{align}
where $C_{1,uudd}^{ij,\mvec{p}}(\mvec{y})$ denotes the ratio of the corresponding contraction and the two-point function in the limit given on the l.h.s.\ of \eqref{eq:4pt-spin-sum}. Since isospin symmetry is exact in our setup, the relations \eqref{eq:phys_me_decomp} can be translated to the neutron case by interchanging the roles of $u$ and $d$ quarks.
\begin{table}
\begin{center}
\begin{tabular}{ccccccccccc}
\hline
\hline
id & $\beta$ & $a[\mathrm{fm}]$ & $L^3 \times T$ & $\kappa_{l}$ & $\kappa_{s}$ & $m_{\pi,K}[\mathrm{MeV}]$ & $m_\pi L a$ & configs \\
\hline
H102 & $3.4$ & $0.0856$ & $32^3 \times 96$ & $0.136865$ & $0.136549339$ & $355$, $441$ & $4.9$ & $2037$ \\
\hline
\hline
\end{tabular}
\end{center}
\caption{Details on the CLS ensemble used for the calculation of the two-current matrix elements. Of the available configurations, 990 are used in our simulation.\label{tab:cls}}
\end{table}
The simulation of the four-point functions is performed on the CLS ensemble H102 \cite{Bruno:2014jqa}, where 990 configurations are used. The ensemble has $n_f = 2+1$ $\mathcal{O}(a)$-improved Wilson fermions and employs the Lüscher-Weisz gauge action with $\beta=3.4$. The pseudoscalar masses are $m_\pi = 355~\mathrm{MeV}$ and $m_K = 441~\mathrm{MeV}$, and the lattice extent is $32^3 \times 96$. More details are given in table\ \ref{tab:cls}. Each contraction is evaluated on boosted proton sources (momentum smearing) \cite{Bali:2016lva}, where we use APE-smeared gauge links \cite{Falcioni:1984ei}. The source is located at the time slice $t_{\mathrm{src}} = T/2 = 48a$ (open boundary conditions in time direction). The momentum smearing technique is again employed at the sink, which is located at the time slice $t_{\mathrm{src}}+t$. The source-sink separation $t$ is specific to the proton momentum: we use $t=12a$ for $\mvec{p} = \mvec{0}$ and $t=10a$ otherwise. We perform the calculation for six proton momenta up to $|\mvec{p}| \approx 1.57~\mathrm{GeV}$. The graphs $C_1$, $C_2$ and $S_1$ require the sequential source technique at the sink. Moreover, we use stochastic $Z_2\otimes Z_2$ noise vectors for the evaluation of various propagators. This applies to one propagator connecting one insertion and the sink in $C_1$, the propagator connecting the two insertions in $C_2$, and the propagator in the loop $L_1$, which appears in the graphs $S_1$ and $D$. For the latter there also exists a version where one of the loops is located at a point-like insertion (fixed position). If applicable, the stochastic propagators are improved by exploiting ultra-locality of the action (hopping parameter expansion) \cite{Bali:2009hu}. All of the applied techniques are described in detail in \cite{Bali:2021gel}.
\section{Results}
In the following, we discuss the results for the twist-two functions, which are obtained by solving the corresponding overdetermined systems of equations (e.g.\ \eqref{eq:tensor-decomp}) for $py=0$. We take into account only the data of the connected contractions $C_1$ and $C_2$, since those appear to be the cleanest. From fits of the data for $py\neq 0$ to a specific model, we are able to extrapolate the dependence of the twist-two functions on $py$, which enables us to perform the integral \eqref{eq:skewed-mellin-inv-fct}, such that we obtain a first lattice result for the DPD Mellin moments. Notice that this is not feasible for every channel, due to data quality. We refer to channels where a reliable extraction of the DPD Mellin moments is not possible as ``bad'' channels; these are not shown in our final results for the Mellin moments. For details on the model and the extrapolation, we refer to \cite{Bali:2021gel}.
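The extraction of the twist-two functions from such an overdetermined system amounts to a least-squares solve, which can be sketched as follows; the coefficient matrix and the "true" functions below are made-up stand-ins, not the actual tensor decomposition \eqref{eq:tensor-decomp}.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: n_eq measured matrix elements m_i are linear
# combinations of n_fct invariant (twist-two) functions f_j, with
# kinematic coefficients C_ij fixed by the tensor decomposition.
n_eq, n_fct = 12, 3
C = rng.standard_normal((n_eq, n_fct))
f_true = np.array([1.5, -0.7, 0.3])
m = C @ f_true + 0.01 * rng.standard_normal(n_eq)  # noisy "lattice data"

# Least-squares solution of the overdetermined system C f = m
f_fit, residuals, rank, sv = np.linalg.lstsq(C, m, rcond=None)
print(f_fit)  # close to f_true for small noise
```

In the real analysis the solve is done per momentum combination and the fit quality decides whether a channel is kept or labeled "bad".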
\begin{figure}
\begin{center}
\subfigure[{\parbox[t]{4cm}{polarization dependence, $ud$, \\ DPD Mellin moment \label{fig:mellin-polcomp-ud}}}]{
\includegraphics[scale=0.24,trim={0.5cm 1.2cm 0.5cm 2.8cm},clip]{plots/mellin_pol_comp-ud.pdf}} \hfill
\subfigure[{\parbox[t]{4cm}{polarization dependence, $ud$, \\ twist-two function at $py=0$ \label{fig:twist2-polcomp-ud}}}]{
\includegraphics[scale=0.24,trim={0.5cm 1.2cm 0.5cm 2.8cm},clip]{plots/chan_comp_phys-ud.pdf}} \\
\subfigure[{\parbox[t]{4cm}{polarization dependence, $uu$, \\ DPD Mellin moment \label{fig:mellin-polcomp-uu}}}]{
\includegraphics[scale=0.24,trim={0.5cm 1.2cm 0.5cm 2.8cm},clip]{plots/mellin_pol_comp-uu.pdf}} \hfill
\subfigure[{\parbox[t]{4cm}{polarization dependence, $uu$, \\ twist-two function at $py=0$ \label{fig:twist2-polcomp-uu}}}]{
\includegraphics[scale=0.24,trim={0.5cm 1.2cm 0.5cm 2.8cm},clip]{plots/chan_comp_phys-uu.pdf}} \\
\end{center}
\caption{Comparison between different combinations of quark polarizations. This is shown for the DPD Mellin moments in (a) for the flavor combination $ud$ and (c) for $uu$, as well as for the corresponding twist-two functions in (b) and (d). \label{fig:polcomp}}
\end{figure}
We first consider the effects of the quark polarization. The corresponding results are plotted in figure\ \ref{fig:polcomp}. For all quark flavor combinations we observe dominance of the results for two unpolarized quarks. Polarization effects are visible for $ud$ and $uu$, where in the latter case they are suppressed. For both flavor combinations, the largest polarized contribution is that for one quark being transversely polarized and the second one being unpolarized. These observations are similar for the twist-two functions (figures\ \ref{fig:twist2-polcomp-ud} and \ref{fig:twist2-polcomp-uu}) and for the DPD Mellin moments (figures\ \ref{fig:mellin-polcomp-ud} and \ref{fig:mellin-polcomp-uu}).
\begin{figure}
\begin{center}
\subfigure[flavor comparison, $I_{qq^\prime}(y^2)$ \label{fig:mellin-flavcomp-VV}]{
\includegraphics[scale=0.24,trim={0.5cm 1.2cm 0.5cm 2.8cm},clip]{plots/mellin_flav_comp-A_VV_log.pdf}} \hfill
\subfigure[flavor comparison, $I_{\delta qq^\prime}(y^2)$ \label{fig:mellin-flavcomp-TV}]{
\includegraphics[scale=0.24,trim={0.5cm 1.2cm 0.5cm 2.8cm},clip]{plots/mellin_flav_comp-A_TV_log.pdf}} \\
\end{center}
\caption{Dependence of the DPD Mellin moments on the quark flavor for two unpolarized quarks (a) and one transversely polarized quark (b). On the vertical axis we use a logarithmic scale. \label{fig:flavcomp}}
\end{figure}
The second important aspect to be studied is the dependence on the quark flavor. In this context, we consider the results for two unpolarized quarks, as well as the channels for one quark being transversely polarized and the other one being unpolarized. These are plotted in figures\ \ref{fig:mellin-flavcomp-VV} and \ref{fig:mellin-flavcomp-TV}, respectively. In both cases, a clear dependence on the flavor can be observed. In particular, the dependence on the quark distance $y$ differs between the different flavor combinations. This is in contrast to assumptions made in the derivation of the pocket formula \eqref{eq:dpd-pocket-formula}, where one requires a flavor-independent function $G(\tvec{y})$ parameterizing the dependence on the transverse quark distance (see \eqref{eq:dpd-pocket}).
\begin{figure}
\begin{center}
\subfigure[$I_{ud}$ vs $\int F_{u} F_{d}$ \label{fig:mellin-factcomp-ud}]{
\includegraphics[scale=0.24,trim={0.5cm 1.2cm 0.5cm 2.8cm},clip]{plots/I_VVud_p2433_fact_comp_2-11.pdf}} \hfill
\subfigure[$I_{uu}$ vs $\int F_{u} F_{u}$ \label{fig:mellin-factcomp-uu}]{
\includegraphics[scale=0.24,trim={0.5cm 1.2cm 0.5cm 2.8cm},clip]{plots/I_VVuu_p2433_fact_comp_2-11.pdf}} \\
\end{center}
\caption{The DPD Mellin moments $I_{ud}$ (a) and $I_{uu}$ (b). These are compared to the results obtained from the corresponding factorized expression \eqref{eq:fact-IVV}. The blue curve represents the contribution from the $F_1 F_1$ term. \label{fig:factcomp}}
\end{figure}
The last subject we want to address here is the strength of quark-quark correlations. These can be studied by factorizing DPDs in terms of impact parameter distributions, see \eqref{eq:dpd-fact}. At the level of Mellin moments, a factorized expression is given by an integral over a product of Dirac and Pauli form factors. Explicitly, we find:
\begin{align}
I_{qq^\prime}(-\tvec{y}^2) \stackrel{?}{=}
\int \frac{\mathrm{d} r}{2\pi}\
r J_0(ry)
\left[
F_1^q(-\tvec{r}^2)\ F_1^{q^\prime}(-\tvec{r}^2) +
\frac{\tvec{r}^2}{4m^2} F^q_2(-\tvec{r}^2)\ F_2^{q^\prime}(-\tvec{r}^2)
\right] \,,
\label{eq:fact-IVV}
\end{align}
where $F_1$ and $F_2$ denote the Dirac and Pauli form factors of the proton. In the present work we use the form factor data that were generated in the context of the simulation described in \cite{RQCD:2019jai}.
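To make \eqref{eq:fact-IVV} concrete, the following sketch evaluates the Bessel-weighted form factor integral numerically; the dipole parameterizations of $F_1$ and $F_2$ (and the simplified sign conventions for the spacelike argument) are illustrative assumptions only, standing in for the lattice form factor data of \cite{RQCD:2019jai}.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import j0

m_N = 0.94  # proton mass in GeV

# Hypothetical dipole form factors, functions of the spacelike r^2 in GeV^2;
# the real analysis uses measured lattice data instead.
def F1(r2):
    return 1.0 / (1.0 + r2 / 0.71) ** 2

def F2(r2):
    return 1.79 / (1.0 + r2 / 0.71) ** 2

def I_fact(y):
    """r.h.s. of the factorized expression: (1/2pi) * int_0^inf dr r J0(r y)
    [F1(r^2)^2 + r^2/(4 m^2) F2(r^2)^2], with y in GeV^-1 and r in GeV."""
    integrand = lambda r: r * j0(r * y) * (
        F1(r * r) ** 2 + r * r / (4.0 * m_N ** 2) * F2(r * r) ** 2)
    val, _ = quad(integrand, 0.0, 50.0, limit=200)
    return val / (2.0 * np.pi)

print(I_fact(0.2), I_fact(1.0))  # decreases with the quark distance y
```

The dipole falls off quickly enough that truncating the radial integral at 50 GeV is harmless in this sketch.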
In figure\ \ref{fig:factcomp} we compare our results for the DPD Mellin moment (green), representing the l.h.s.\ of \eqref{eq:fact-IVV}, with the corresponding result of the form factor integral on the r.h.s.\ of \eqref{eq:fact-IVV} (red). We also show the contribution of the $F_1 F_1$ term separately (blue). We observe for the two flavor combinations $ud$ and $uu$ that the factorized expression yields the correct order of magnitude. However, we find visible deviations. In the case of $ud$ and small quark distances $y$, the factorized result appears to be larger than our result obtained from the four-point data, whereas it is slightly smaller for large $y$. One might conclude that the two quarks would be closer together if they were uncorrelated. For $uu$ the factorized result appears to be larger over the entire $y$ range.
\section{Conclusions}
We evaluated four-point functions on the lattice in order to obtain two-current matrix elements of the proton. From these we extracted so-called twist-two functions, which are related to DPD Mellin moments. We presented results for both of these quantities. We can draw the following conclusions: There are significant polarization effects for the flavor combination $ud$, which are largest for one transversely polarized quark. The latter are also observable for $uu$, but polarization effects appear to be suppressed in this case. Moreover, we observed clear differences between the DPD Mellin moments for different quark flavors, which contradicts assumptions made in the derivation of the pocket formula. Our third observation is the presence of quark-quark correlations, which can be inferred from discrepancies between our result for the DPD Mellin moments and its factorized version (see \eqref{eq:fact-IVV}).
\section*{Acknowledgments}
I thank the RQCD collaboration, in particular A. Sch\"afer, G. S. Bali and B. Gl\"a\ss{}le, as well as M. Diehl for fruitful discussions. For providing the proton form factor data, I thank Thomas Wurm. Moreover, I acknowledge the CLS effort for generating the $n_f=2+1$ ensembles, one of which was used for this work. The simulations were performed on the SFB/TRR-55 QPACE 3 cluster.
\section{Introduction}
\label{sec:intro}
In the proposed Electron-Ion Collider (EIC), imaging Cherenkov detectors are the backbone of particle identification. When charged particles move through the dielectric medium of a Cherenkov detector at a speed greater than the phase velocity of light in that medium, they emit Cherenkov radiation in a characteristic conical shape.
All detector designs proposed for EIC have a dual radiator ring-imaging Cherenkov detector (dRICH) in the hadron direction, detection of internally reflected Cherenkov light (DIRC) in the barrel, and a modular-aerogel RICH (mRICH) in the electron direction (as displayed in Fig.~\ref{fig:pid_backbone}).\footnote{Cf. presentations by the proto-collaborations ATHENA, CORE and ECCE at the EICUG Summer 2021 \cite{eicug_summer_2021}.}
\begin{figure}[!htbp]
\centering
\includegraphics[width=.40\textwidth]{./figs/PID_backbone.png}
\includegraphics[width=.50\textwidth]{./figs/table_PID.png}
\caption{\label{fig:pid_backbone} (left) The imaging Cherenkov detectors for particle identification in a proposed EIC detector concept;
(right) the table with the momentum coverage for particle identification, taken from the EIC Yellow Report \cite{khalek2021science}.
}
\end{figure}
As discussed in \cite{joosten_simulations}, the simulation of Cherenkov detectors involves optical processes with many photons that need to be tracked through complex surfaces, making these detectors relatively slow (CPU intensive) to simulate with full simulations like Geant4 \cite{agostinelli2003geant4}:
(i) the mRICH, for example, has photons that originate in the aerogel and pass through a Fresnel lens made of many grooves, whose impact on the simulation performance is non-negligible (for more details, see, \textit{e.g.}, \cite{he2020development});
(ii) the dRICH uses an aerogel radiator and a large-volume heavy gas radiator: light needs to propagate to mirrors and eventually to light sensors, and nested ring patterns with noise are utilized for particle identification (more details can be found in \cite{cisbani2020ai});
(iii) the DIRC uses long quartz bars coupled to an expansion volume: it has similar challenges with a much more complex optics system resulting in more complex hit patterns (more details can be found in \cite{kalicy2018high}).
Following the taxonomy of Fig.~\ref{fig:AI}, Artificial Intelligence is already contributing to addressing challenges associated with computationally intensive simulations and complex pattern recognition problems, and in what follows we discuss how AI can play a role for the imaging Cherenkov detectors that will be built at the Electron-Ion Collider.
\begin{figure}[!htbp]
\centering
\includegraphics[width=.80\textwidth]{./figs/AI_taxonomy.png}
\caption{\label{fig:AI} Taxonomy followed at AI4EIC defining AI, and ML and DL as subsets of AI.
}
\end{figure}
In Sec. \ref{sec:ai} we will describe recent activities involving AI applications for imaging Cherenkov detectors that can be utilized at the EIC; in Sec.~\ref{sec:conclusions} we present our conclusions and perspectives.
\section{AI for Imaging Cherenkov Detectors: Recent Activities}\label{sec:ai}
An ongoing effort in the EIC community is providing a detector concept that meets the physics requirements described in the EIC Yellow Report \cite{khalek2021science}.
The proposed design may be further optimized after the detector proposal submission.
In Sec.~\ref{sec:dRICH_optimization} we will describe the opportunity of further optimizing the complex design of the dual RICH detector with AI.
As we mentioned, imaging Cherenkov detectors are characterized by computationally
intensive simulations as well as complex patterns for particle identification. The most complex ring topologies are those of the DIRC detector, and in Sec.~\ref{sec:DIRC_deeplearning} we will describe recent works based on deep learning for the DIRC, which in principle can be also extended to other imaging Cherenkov detectors.
\subsection{Detector design assisted by AI: the dRICH case}\label{sec:dRICH_optimization}
Detector optimization is an essential part of the R\&D and design process that involves mechanical design and budget to realize the best performance possible.
This process is anticipated to continue in the months following the detector proposal towards CD-2 and CD-3.
In general, a sequential AI-based strategy gathers the information associated with the proposed design point, \textit{i.e.}, some figures of merit that quantify the goodness of the design, and based on this information suggests which design parameters to query at the next iteration (cf.\ the workflow represented in Fig.~\ref{fig:design_AI}).
\begin{figure}[!]
\centering
\includegraphics[scale = 0.16]{figs/design_workflow.png}
\caption
Typical workflow of detector design assisted by AI: physics events are injected in a detector characterized by some given design parameters. Reconstructed events are analyzed and figures of merit are quantified and passed to some AI-based strategy, which in turn suggests the next design point to observe in this sequential approach; AI can also intervene in the simulation and reconstruction steps.
}
\label{fig:design_AI}
\end{figure}
This becomes particularly useful when dealing with computationally intensive simulations, complex designs characterized by large dimensionality, and noisy black-box objective functions.
The first parallelized, automated and self-consistent procedure for AI-optimized detector design was developed for the dRICH design at EIC by \cite{cisbani2020ai} leveraging Bayesian optimization (BO) \cite{snoek2012practical}:
the baseline design consisted of two radiators (aerogel and C$_2$F$_6$ gas) sharing the same outward-focusing spherical mirror and highly segmented photosensors ($\approx 3$\,mm$^2$ pixel size) located outside of the charged-particle acceptance. The work in \cite{cisbani2020ai} was initially developed for the JLEIC design \cite{jleic} before Brookhaven National Laboratory (BNL) was selected for building the EIC, but the same exercise can be repeated for the dRICH design of detector concepts proposed by the proto-collaborations.
\begin{figure}[!ht]
\centering
\includegraphics[scale=0.30, angle = 0]{./figs/det1_new.png}
\includegraphics[scale=0.32, angle = 0]{./figs/newf_new.png}
\caption{Geant4 based simulation of the dRICH \cite{cisbani2020ai} (left: entire detector; right: one of the six sectors). In transparent wired red is the aerogel radiator, in transparent wired green is the gas radiator volume; the mirror sectors are in gray and the photo-detector surfaces (spherical shape) of about 8500~cm$^2$ per sector in dark-yellow. A pion of momentum 10~GeV/c is simulated \cite{cisbani2020ai}.
\label{fig:dual1}
}
\end{figure}
The dRICH detector in the hadron endcap is essential for particle identification over a wide range of momenta, cf.\ the table in Fig.~\ref{fig:pid_backbone} (right).
In \cite{cisbani2020ai}, the important role played by certain parameters characterizing the design of the dRICH has been shown, particularly the mirror radius and its position, the location of the detecting tiles in each of the six modular petals of the dRICH, and the aerogel refractive index and thickness.
Results of the AI-based optimization are shown in Fig.~\ref{fig:drich_optimal}, which demonstrates the improvement in the $\pi$/K separation power.
Similarly the dRICH parameters can be fine-tuned in the designs proposed by the proto-collaborations ATHENA, CORE and ECCE, in each case considering the differences and constraints imposed by the global detector design.
\begin{figure}[!]
\centering
\includegraphics[width=.5\textwidth,origin=c,angle=0]{./figs/optimal_separation_alllog_new.pdf}
\caption{\label{fig:drich_optimal}
$\pi/K$ separation as number of $\sigma$, as a function of the charged particle momentum.
The plot shows the improvement in the separation power with the approach discussed in \cite{cisbani2020ai} compared to the legacy baseline design \cite{del2017design}. The curves are drawn with 68\% C.L. bands.
}
\end{figure}
Noticeably, EIC is already utilizing AI-supported optimization of the detector for the ongoing detector proposal. ECCE, for example, has built a multi-objective optimization for the design of the tracking system; more details can be found in \cite{phelps_ecce}.
\subsection{Deep learning for fast simulations and particle identification: the DIRC example}\label{sec:DIRC_deeplearning}
As already mentioned, the simulation of imaging Cherenkov detectors involves optical processes with many photons that need to be tracked through complex surfaces, making simulations computationally intensive.
In addition, detectors like the DIRC present complex hit patterns (in both the topology and the sparsity of the hits), which can make it difficult to extract information about the particle to be identified.
As discussed during the AI4EIC workshop, these features seem to offer a natural place for deep learning applications for fast simulations and reconstruction.
In what follows we will focus on the DIRC detector, for which there are already developed examples and ongoing activities using AI.
Fig.~\ref{fig:DIRC_EIC} displays an example of a hit pattern obtained with charged tracks of pions detected by the \textsc{GlueX} \ DIRC detector \cite{barbosa2017gluex}.
\begin{figure}[!]
\centering
\includegraphics[width=.75\textwidth,origin=c,angle=0]{./figs/GlueX_Hit.png}
\caption{\label{fig:DIRC_EIC}
Example of complexity of hit patterns for the \textsc{GlueX} \ DIRC detector \cite{ali2020gluex}. Hit pattern for a $\pi^+$ track: real data (top) and GEANT MC simulation (bottom).
}
\end{figure}
Generative Adversarial Networks (GANs) \cite{goodfellow2014generative} have been used to simulate the response of the \textsc{GlueX} \ DIRC in a work by \cite{derkach2019cherenkov} to bypass low-level details at the photon generation stage.
The architecture is trained to reproduce high-level features of the incident charged particles simulated with FastDIRC \cite{hardin2016fastdirc}, and allows for a dramatic increase of simulation speed.\footnote{The FastDIRC package \cite{hardin2016fastdirc} allows for fast simulation of the hit patterns as well as PID through a likelihood-based approach.}
GANs have been also recently used in LHCb for event generation and simulation of detector responses \cite{anderlini_LHCb_simulations}.
In fact, the increasing luminosities of future LHC runs will require an unprecedented amount of simulated events to be produced. The accurate simulation of Cherenkov detectors takes a sizeable fraction of CPU time, and as an alternative, high-level reconstructed observables can be generated with GANs to bypass low-level details. In \cite{maevskiy2020fast}, in particular, the fast simulation is trained using real data samples collected by LHCb during Run 2.
A novel architecture for Deeply learning the Reconstruction of Imaging CHerenkov (DeepRICH) detectors directly from low level features has been proposed in \cite{Fanelli_2020}.
A flowchart of DeepRICH is shown in Fig.~\ref{fig:DeepRICH}: it is a custom architecture consisting of Variational Autoencoders (VAE) \cite{kingma2019introduction} for reconstruction, and Convolutional Neural Networks (CNN) \cite{koushik2016understanding} combined with a Multilayer Perceptron (MLP) for particle identification.
\begin{figure}[!t]
\centering
\begin{tabular}[b]{cc}
\begin{tabular}[b]{c}
\begin{subfigure}[b]{0.3\columnwidth}
\includegraphics[width=\textwidth]{figs/pions_rec.pdf}
\caption{Example of hit points reconstructed by DeepRICH at 4 GeV/c, with an almost perfect overlap between the reconstructed and the injected hits of both pions. Image taken from \cite{fanelli2020machine}.}
\label{fig:A}
\end{subfigure}\\
\begin{subfigure}[b]{0.3\columnwidth}
\includegraphics[width=\textwidth]{figs/4_cnn_features.pdf}
\caption{Example of distributions in the latent space for pions and kaons at 4 GeV/c. Image taken from \cite{fanelli2020machine}.}
\label{fig:B}
\end{subfigure}
\end{tabular}
&
\begin{subfigure}[b]{0.45\columnwidth}
\includegraphics[width=\textwidth]{figs/deeprich_flowchart.pdf}
\caption{A flowchart of DeepRICH: the inputs are concatenated with the kinematics. VAE generates a set of vectors of latent variables, which are then used for both the classification of the particle and for the reconstruction of the hits. Image taken from \cite{Fanelli_2020}.}
\label{fig:C}
\end{subfigure}
\end{tabular}
\caption{The DeepRICH architecture \cite{Fanelli_2020}. (a) an example of reconstructed hit patterns by the VAE; (b) an example of $\pi/K$ distinguishing power in the latent space at 4 GeV/c; (c) the DeepRICH flowchart: the CNN/MLP is used for particle identification based on the latent space distributions from the VAE.
\label{fig:DeepRICH}}
\end{figure}
The classification is supervised and needs labeled data. In \cite{Fanelli_2020} studies have been performed utilizing samples produced with FastDIRC for the \textsc{GlueX} \ DIRC design.
In \cite{fanelli_ai4cherenkov}, two points were discussed: (a) the possibility of training on high-purity samples obtained directly from real data using specific topologies with, \textit{e.g.}, $\pi$, K and p; (b) a potential procedure for data augmentation at any given bin of the particle kinematics, consisting of sampling the expected hit pattern according to the expected photon yield distribution.\footnote{This works if, to good approximation, the hits forming the hit pattern can be considered independent from each other.}
The main features of DeepRICH can be summarized in the following points: (i) it is fast and provides accurate reconstruction;
(ii) it can be extended to multiple particle types (multi-class identification); (iii) it can be generalized to fast simulation, using the VAE as a generative model; (iv) it can utilize $(x,y,t)$ patterns if time is measured and can deal with different topologies and different detectors; (v) it deeply learns the detector response (high-purity samples of real data can be injected during the training process).
In terms of performance, it has been shown that the reconstruction efficiency is consistent with that of established methods such as FastDIRC \cite{hardin2016fastdirc}, with an area under the curve (AUC) close to that of FastDIRC across the entire phase-space that has been studied, namely AUC(DeepRICH) $\gtrsim$ 0.99 AUC (FastDIRC).
The main advantage is the effective inference time per particle, which on a Titan V GPU was found to be $\lesssim$ 1 $\mu$s when utilizing a batch of 10$^{4}$ particles.
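The AUC figure quoted above is the probability that the classifier ranks a true kaon above a true pion, and it can be computed directly from the Mann-Whitney $U$ statistic; the Gaussian score distributions below are toy stand-ins, not DeepRICH outputs.

```python
import numpy as np

rng = np.random.default_rng(3)

def auc(scores_pos, scores_neg):
    """ROC AUC via the Mann-Whitney U statistic: the probability that a
    randomly chosen positive scores higher than a randomly chosen negative."""
    scores = np.concatenate([scores_pos, scores_neg])
    ranks = np.argsort(np.argsort(scores)) + 1.0
    r_pos = ranks[: len(scores_pos)].sum()
    n_p, n_n = len(scores_pos), len(scores_neg)
    return (r_pos - n_p * (n_p + 1) / 2.0) / (n_p * n_n)

# Toy classifier scores for kaons (positives) and pions (negatives):
# well-separated score distributions give an AUC approaching 1.
kaons = rng.normal(2.0, 1.0, 5000)
pions = rng.normal(0.0, 1.0, 5000)
print(auc(kaons, pions))  # ~0.92 for this separation
```

Comparing such AUC values across the particle kinematics is how the DeepRICH-to-FastDIRC ratio quoted in the text is assembled.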
DeepRICH has been prototyped looking at pions and kaons at \textsc{GlueX} \ in a limited phase-space between 4 and 5 GeV and utilizing one fused silica bar.
Exciting work is planned to extend the kinematics of the particle and the number of bars, as well as to improve the training time with the possibility of distributed training.
Another interesting activity is that of utilizing architectures like DeepRICH as generative models for simulations of the hit patterns as a function of the kinematics.
Another potential application could be training DeepRICH using pure samples of identified particles from real data, which would allow the response of the Cherenkov detector to be deeply learned \cite{Fanelli_2020}.
\section{Conclusions and Perspectives}\label{sec:conclusions}
The last few years have been characterized by a groundswell of applications in nuclear and particle physics based on AI/ML both for fast simulations and for particle reconstruction and identification.
The particle identification at the Electron Ion Collider is based on imaging Cherenkov detectors, which typically entail computationally intensive simulations and challenging pattern recognition problems.
During the design phase, Artificial Intelligence can be utilized for optimizing the design of Cherenkov detectors and for reducing the computing budget necessary to explore a large number of design points.
We also discussed how simulation speed up can be obtained using generative models like GANs, and covered novel multi-purpose architectures which can in principle be utilized for fast simulation, reconstruction and identification of particles.
Architectures like DeepRICH can work for different imaging Cherenkov detectors and topologies of hit patterns, and lots of exciting activities are planned in the next few years to extend and characterize the performance of these applications.
\section{Introduction} \label{sec:intro}
Changes in the observed flux of a star over time, generally referred to as stellar photometric variability, can be attributed to a range of astrophysical processes. Stellar variability may be stochastic in nature, such as rapid changes in flux caused by flares, mass ejections, or novae, but changes in flux may also occur periodically. Periodic (or semi-periodic) variations in stellar flux have been linked to pulsations, rotation, eclipses, and other conditions or processes. The details of these variations can provide information about the dynamics, internal structure, and composition of stars, along with fundamental physical properties, such as mass, radius, and age \citep[see][and references therein]{Soderblom10, Chaplin13, Hekker17, Kochukhov21, Kurtz22}. Ages, for example, are typically the most difficult stellar property to measure, but gyrochronology has allowed stellar ages to be constrained from the rotation rates of Sun-like stars since they are known to lose angular momentum and spin down over time \citep{Irwin09, Meibom09, van_Saders13, Angus19}. The rate of angular momentum loss through the stellar wind is connected to a star's internal structure, such that a star's surface activity can also be used to probe the structure and dynamo processes inside stars \citep{Bohm-Vitense07, Ferreira_Lopes15, Han21}.
Characterizing the variability of stars is also important for understanding the nature of exoplanetary systems. Exoplanets are typically detected indirectly, such as through the transit of a planet in front of its host star or the radial velocity changes of a star due to the orbit of a planet. Therefore, the determination of the properties of exoplanets relies on our understanding of their host stars. Planetary radii, for example, are a critical measurement for distinguishing between terrestrial planets, giant planets, brown dwarfs, and stellar companions \citep{Kane12, Fulton17, Owen17}. However, uncertainties in stellar radii lead to corresponding uncertainties in planetary radii \citep{Seager07, Ciardi15, Hirsch17, Kane14, Kane18}. Active stars with large flux amplitude variations have also been shown to masquerade as planetary signatures in radial velocity measurements \citep[e.g.,][]{Henry02, Robertson14, Robertson14-1, Robertson15, Kane16, Hojjatpanah20, Prajwal22, Simpson22}, while at the same time also potentially hiding the presence of small planets in their system. Furthermore, stellar variability directly impacts the insolation flux received by exoplanets, with implications for atmospheric erosion \citep{Lammer03, Murray-Clay09, Owen13, Dong17, Kreidberg19, Sakuraba19, Kane20} and habitability \citep{Tarter07, Segura10, Dong18, Howard20b}. This is especially true for planets around M-dwarf stars, which are known to exhibit particularly strong outbursts of ultraviolet and X-ray radiation as part of their activity cycles \citep{Tarter07, Segura10, Jackman19, Gunther20, Howard20b}. High-energy particles and radiation expelled from the star as a result of its magnetic or chromospheric activity can be damaging to the atmospheres of close-in planets and any potential life on the planet \citep{Tarter07, Segura10, Dong18, Gunell18, Howard20b}. 
Addressing questions about how planetary atmospheres evolve in these systems requires understanding the influence of the host star, including its associated variability.
Wide-field variability surveys, by design, are able to constrain the fraction of variable stars across their entire field of view down to a certain brightness limit. This brute-force approach typically aims to identify rare transients, such as supernovae, transits of exoplanets in wide orbits, and variable stars along the instability strip, but also enables an understanding of variability across nearly the entire sky. Since a star's variability is linked to its evolutionary state, measuring stellar variability across the sky can be used to understand the evolution history of our galaxy. Extensive work has already been done to detect photometric variability across the sky through ground-based surveys, such as the All-Sky Automated Survey \citep[ASAS;][]{Pojmanski02}, the Optical Gravitational Lensing Experiment \citep[OGLE;][]{Udalski08}, the Kilodegree Extremely Little Telescope \citep[KELT;][]{Oelkers18}, the Asteroid Terrestrial-impact Last Alert System \citep[ATLAS;][]{Heinze18}, the All-Sky Automated Survey for Supernovae \citep[ASAS-SN;][]{Jayasinghe18}, and the Next-Generation Transit Survey \citep[NGTS;][]{Briegal22}. However, ground-based surveys are limited by seeing conditions, weather, and the inability to observe during the day. Furthermore, variability studies from space-based surveys, such as the \textit{Kepler}\ space telescope \citep{Borucki10} and \textit{Gaia} \citep{Gaia_Collaboration16}, have found that the measured fraction of variable stars across the sky has been limited by photometric precision rather than by astrophysical variability \citep{Ciardi11, Basri11, Gaia_Collaboration19, Briegal22}. The Transiting Exoplanet Survey Satellite (\textit{TESS}) used a tile-based strategy to gradually sample about 70\% of the sky during its first two years of operation \citep{Ricker14, Ricker15}, making it an ideal instrument for a comprehensive study of stellar variability within the galaxy.
Furthermore, the near-infrared bandpass utilized by \textit{TESS}\ allows for high-precision photometry of M dwarf stars in the Solar neighborhood.
In this paper, we present a stellar variability catalog based on our search for photometric variability on timescales shorter than 13\,days across nearly the entire sky using the 2-min cadence photometry obtained during the \textit{TESS}\ Prime Mission. In \autoref{sec:methods} we outline the preparation of the \textit{TESS}\ photometry, the Fourier analysis of the data, special considerations as to how the \textit{TESS}\ mission design affects our study of variability, and the methodology used to identify variable signatures that are astrophysical in nature---as opposed to being caused by spacecraft systematics. Our stellar variability catalog, where we prioritized having a high percentage of astrophysical variables (as opposed to systematic ones), is presented in \autoref{sec:results}. A review of the demographics and population statistics of the stars in the variability catalog is presented in \autoref{sec:discussion}, alongside a comparison to previous works and our planned follow-up investigations. Finally, we summarize this work in \autoref{sec:summary}.
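As an illustration of the kind of periodicity search just described, a Lomb-Scargle periodogram recovers the period of an unevenly sampled periodic signal; the light curve below is synthetic (a 3.2\,day modulation over one 27.4\,day sector), not actual \textit{TESS}\ data, and the catalog analysis itself may differ in detail.

```python
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(4)

# Synthetic single-sector light curve: uneven sampling over 27.4 days
# with a 3.2 day sinusoidal modulation plus white noise.
t = np.sort(rng.uniform(0.0, 27.4, 4000))      # days
p_true = 3.2
flux = 1.0 + 0.01 * np.sin(2.0 * np.pi * t / p_true) \
           + 0.002 * rng.standard_normal(t.size)

# Lomb-Scargle periodogram over periods from 0.1 to 13 days
# (lombscargle takes angular frequencies and a mean-subtracted signal)
periods = np.linspace(0.1, 13.0, 5000)
power = lombscargle(t, flux - flux.mean(), 2.0 * np.pi / periods)
p_best = periods[np.argmax(power)]
print(p_best)  # recovers a period close to 3.2 days
```

The 13\,day upper limit of the period grid mirrors the catalog's search range, set by the spacecraft's 13.7\,day orbit.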
\section{Data and Methods}\label{sec:data}
\subsection{TESS Photometry} \label{sec:methods}
Here we briefly summarize information about the \textit{TESS}\ mission that is relevant to this work, and refer the reader to \citet{Ricker14, Ricker15} for more detail. The \textit{TESS}\ spacecraft is in a 13.7\,day orbit around the Earth in a 2:1 resonance with the Moon and observes a $24^\circ\times96^\circ$ region of the sky using four optical CCD cameras that are sensitive to light in the range of 600--1000\,nm. Time-series photometry is obtained in 27.4\,day segments, known as sectors, after which the spacecraft repoints to observe a different segment of the sky. During the \textit{TESS}\ Prime Mission (2018~July~25--2020~July~04), $\sim$200,000 targets covering about 70\% of the sky were observed at 2-min cadence for at least the duration of one sector of observations. Targets located closer to the ecliptic poles were more likely to be observed in multiple sectors, at most being observed nearly continuously for 1\,year (13\,sectors). \autoref{fig:skyDist} shows the distribution of the \nSingle\ stars observed by a single \textit{TESS}\ sector compared to the \nMult\ stars observed in two or more sectors. Information about the \textit{TESS}\ targets is provided in the \textit{TESS}\ Input Catalog \citep[TICv8;][]{Stassun18,Stassun19}, which provides both observational and astrophysical information about each star and those in the surrounding star field. Since the \textit{TESS}\ pixels are 21\,arcsec and are subject to blending between neighboring stars, the TICv8 includes a crude calculation of the degree of flux contamination for each star, called the contamination ratio (\texttt{CONTRATIO}). 
The \textit{TESS}\ Input Catalog (\dataset[10.17909/fwdt-2x66]{http://dx.doi.org/10.17909/fwdt-2x66}), full-frame images (\dataset[10.17909/3y7c-wa45]{http://dx.doi.org/10.17909/3y7c-wa45}), time-series photometry (\dataset[10.17909/t9-nmc8-f686]{http://dx.doi.org/10.17909/t9-nmc8-f686}) are publicly available through the Mikulski Archive for Space Telescopes\footnote{\url{https://archive.stsci.edu/}} (MAST).
\begin{figure*}
\plottwo{singleTargets_RA_DEC.png}{multiTargets_RA_DEC.png}
\caption{Distribution in equatorial coordinates of stars observed at 2-min cadence by \textit{TESS}\ during its first two years of operations: the \textit{TESS}\ Prime Mission. \textit{Left:} Stars that were observed in only one sector of \textit{TESS}\ observations (27.4\,days). \textit{Right:} Stars that were observed in two or more sectors of \textit{TESS}\ observations ($>$54.8\,days).}
\label{fig:skyDist}
\end{figure*}
While the \textit{TESS}\ Prime Mission (Sectors 1--26) observed a total of \nAll\ unique stars ($\sim$20,000\,stars observed per sector), we restrict our periodicity search to the \nSamp\ stars that are brighter than $T=14$ and are not severely blended with neighboring stars (\texttt{CONTRATIO} $<$ 0.2). We also only examine stars with physical parameters listed in the TIC, which does not include white dwarfs and many subdwarfs. We use the 2-min cadence time-series photometry that was processed by the Science Processing Operations Center (SPOC) pipeline \citep{Jenkins16}, known as the Presearch Data Conditioning Simple Aperture Photometry \citep[PDCSAP;][]{Stumpe12, Stumpe14, Smith12} light curves. The PDCSAP light curves are based on an optimal aperture extraction of the raw photometry that was detrended using co-trending basis vectors in order to correct for common instrument systematics. We apply additional post-processing to the light curves, including removing data that are flagged as being anomalous in quality, trimming any known and candidate planetary transit events, and removing outliers that are $>$5$\sigma$ from the RMS of the light curve. Transit events are removed using the reported orbital period and a transit duration that is 10\% longer than that reported in the \textit{TESS}\ Object of Interest (TOI) catalog \citep{Guerrero21}. In addition to removing data flagged by the pipeline as anomalous in quality, we remove observations between 1347.0--1349.8\,TJD during which significant flux scatter is observed in most Sector\,1 light curves due to a known spacecraft stability issue that is discussed in the \textit{TESS}\ data release notes. This section of Sector\,1 was the only set of data that we removed due to pervasive problems across many light curves.
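The post-processing steps above can be sketched as follows (a minimal illustration, not the SPOC or paper implementation; the helper names are hypothetical, and computing the RMS about the median flux is our assumption):

```python
import numpy as np

def in_transit(time, period, t0, duration):
    """Boolean mask of cadences inside a window 10% longer than the
    reported TOI transit duration (all arguments in days)."""
    phase = ((time - t0 + 0.5 * period) % period) - 0.5 * period
    return np.abs(phase) < 0.55 * duration  # half-width of 1.1 * duration

def clean_light_curve(time, flux, quality, transit=None, clip=5.0):
    """Drop quality-flagged cadences, trim transits, and reject outliers
    more than `clip`-sigma from the RMS of the light curve."""
    keep = quality == 0                      # anomalous-quality cadences
    if transit is not None:
        keep &= ~transit                     # known/candidate transits
    time, flux = time[keep], flux[keep]
    resid = flux - np.nanmedian(flux)
    rms = np.sqrt(np.nanmean(resid ** 2))    # RMS about the median flux
    ok = np.abs(resid) < clip * rms
    return time[ok], flux[ok]
```

The same mask logic would also drop the 1347.0--1349.8\,TJD window in Sector\,1 by treating it as an additional boolean cut.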
\subsection{Periodicity Search Tools} \label{sec:periodogram}
Stellar variability manifests in many forms, and can be periodic, semiperiodic, or stochastic. In this paper, we mostly concern ourselves with continuous periodic sinusoidal-like photometric variability. Such behavior is associated with stellar pulsations, rotational modulations due to starspots, and ellipsoidal variations in stellar binaries and can be identified with the Lomb-Scargle (LS) periodogram \citep{Lomb76, Scargle82}.
We compute the fast LS periodogram using the \texttt{astropy.timeseries.LombScargle} function and sampling at equally spaced frequencies that are 1.35\,min$^{-1}$ apart, deliberately slightly oversampling the time series for maximal frequency resolution. The highest power peak in the periodogram is used to determine the most significant periodic signature of the light curve. The frequency, uncertainty, and significance (i.e., normalized power) of the most significant periodic signature are measured using a quadratic function fit to the highest power peak in the LS periodogram. The inverse of the frequency with the highest periodogram power is taken to represent the stellar variability period ($P=1/f$). A sinusoidal function is fit to the light curve and defined by
\begin{equation}
F(t) = A\cos\left[{\frac{2\pi}{P}(t-t_{0})}\right]
\mathrm{,}
\label{eq:sinewave}
\end{equation}
where $A$ is the amplitude of the sine wave, $t_0$ is the time of the first flux maximum (such that $t-t_0$ is the time elapsed since that maximum), and $P$ is the period with the highest power from the LS periodogram. The reduced chi-squared, $\chi^2_\nu$, is used to determine the goodness-of-fit of the sinusoidal function.
In order to distinguish continuous periodic variability from more punctuated periodic variability, such as planetary transits and stellar eclipses, we also utilize the auto-correlation function (ACF). The ACF requires data that are equally spaced in time; therefore, we place the observations on a uniform time grid and set the fluxes of any missing cadences to zero. The ACF is measured by correlating the gridded data with itself and then smoothing the result using a Gaussian kernel. The lag time of the ACF that has the highest correlation (i.e., strongest peak) is fit with a quadratic function to identify the variability period, its uncertainty, and its significance (i.e., normalized power). To determine the goodness-of-fit for the ACF, the light curve is binned into 100 equally spaced points in phase space and a cubic interpolation of the binned points is used to represent the ACF model when calculating the $\chi_\nu^2$ of the ACF model compared to the light curve data.
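A minimal sketch of the ACF computation, assuming \texttt{numpy}/\texttt{scipy}; the smoothing width and peak-selection details here are illustrative assumptions, not the paper's exact choices:

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d
from scipy.signal import find_peaks

def acf_period(time, flux, cadence, smooth_sigma=5):
    """Zero-filled, Gaussian-smoothed autocorrelation of a light curve;
    returns the lag of the strongest nonzero-lag peak and its correlation."""
    # place fluxes on a uniform grid; gaps are filled with zeros
    grid = np.round((time - time[0]) / cadence).astype(int)
    y = np.zeros(grid[-1] + 1)
    y[grid] = flux - np.mean(flux)
    # autocorrelation at positive lags, normalized to unity at lag zero
    acf = np.correlate(y, y, mode="full")[y.size - 1:]
    acf = gaussian_filter1d(acf / acf[0], sigma=smooth_sigma)
    # strongest local maximum away from lag zero gives the period
    peaks, _ = find_peaks(acf)
    best = peaks[np.argmax(acf[peaks])]
    return best * cadence, acf[best]
```

In the paper the peak is further refined with a quadratic fit, analogous to the periodogram case.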
\subsection{Considerations of the \textit{TESS}\ Photometry} \label{sec:consideration}
There are several considerations that affect our periodogram search owing to the nature of the \textit{TESS}\ photometry. In particular, continuous segments of observations take place over a single spacecraft orbit, the placement of each target on the CCD cameras changes between \textit{TESS}\ sectors, and the spacecraft performs periodic adjustments in pointing that can introduce systematic noise into the photometry.
Continuous \textit{TESS}\ observations occur during the majority of each spacecraft orbit, but there is a $\sim$1\,day pause in observations when the spacecraft approaches perigee in order to downlink the data. Otherwise, observing conditions remain consistent over the course of two spacecraft orbits, which makes up the duration of a \textit{TESS}\ sector ($\sim$27\,days). Every \textit{TESS}\ target is observed nearly continuously for at least one sector, such that we can search for repeatable signatures in the \textit{TESS}\ light curves up to $\sim$13\,days. However, periodic signatures that are measured to be closer to the 13-day limit are subject to greater uncertainties than those on shorter timescales.
There are \nMultSamp\ stars in our sample that have been observed in more than one sector of \textit{TESS}\ photometry. Due to the observational design of the \textit{TESS}\ mission, each \textit{TESS}\ sector observes a different region of the sky such that any given star is never observed on the same part of the detector between sectors. The differing positions of the star on the detector introduce systematics into the time-series photometry that are not adequately removed with a simple normalization offset. Therefore, we choose to defer an investigation of long-period variability ($>$13\,days) to a future work (Fetherolf et al. in prep.). In this work, we instead use these stars to assess the reliability of detecting stars that are astrophysically variable---as opposed to varying due to spacecraft systematics---in our catalog by ensuring that the same periodic signal for a given star can be recovered in different sectors (see \autoref{sec:reliability}).
The spacecraft reaction wheels have their speeds reset (also known as ``momentum dumps'') as frequently as every few days, which can cause periodic changes in the light curve that can masquerade as stellar variability. These systematic variations are not consistent between individual sectors (becoming less frequent over the mission duration), but they can occasionally be detected in the light curve at high Lomb-Scargle power. \autoref{fig:density} shows how the variability periods detected from the single sinusoidal model at 1--13\,days are distributed in normalized power and period space for stars observed during \textit{TESS}\ Sectors 5 and 16. There are several regions in the normalized power versus period space that contain a higher density of stars; if these periodicities were astrophysical in nature, they would align between different \textit{TESS}\ sectors. In the interest of developing a stellar variability catalog of high confidence, we elect to remove any stars that fall in high-density regions in power versus period space in each \textit{TESS}\ sector at the cost of removing a few real variable stars from the catalog (see \autoref{sec:search}).
\begin{figure*}
\plottwo{sec5_1peak_all_period_power.png}{sec16_1peak_all_period_power.png}
\caption{Normalized LS power versus the most significant photometric periodicity detected from the light curves for stars that were observed during \textit{TESS}\ Sectors 5 (\textit{left}) and 16 (\textit{right}). Higher density regions indicate periodic light curve modulations that are primarily caused by instrument systematics.}
\label{fig:density}
\end{figure*}
\subsection{Variability Search} \label{sec:search}
\begin{figure*}
\plotone{flowchart_v2.pdf}
\caption{A flowchart that summarizes our variability search algorithm, with the direction of flow being indicated by black (always), green (only when True), or red (only when False) arrows. Starting from the stars observed during the \textit{TESS}\ Prime Mission, we first remove stars that are faint ($T_{\mathrm{mag}}>14$\,mag) or subject to significant blending (\texttt{CONTRATIO} $>$ 0.2). We then search for periodic signatures at 0.01--1.5 and 1--13\,days in each individual sector of \textit{TESS}\ photometry. Several tests are performed to determine the significance of variability that is best fit with a single sinusoidal model (see \autoref{sec:search}). Stars that make it into one of the ``Variable'' categories are included in Tables~\ref{tab:1peak}--\ref{tab:ACF}.}
\label{fig:flowchart}
\end{figure*}
We perform a periodogram search on \nSamp\ unique stars, which comprises approximately half a million light curves that were obtained in \textit{TESS}\ Sectors 1--26. Our periodogram search includes two distinct period ranges at 0.01--1.5\,days and 1--13\,days. \autoref{fig:flowchart} shows the overall flow of our periodogram search, where stars that we identify as significantly variable are best characterized by either a single-sinusoidal function (1-sine Variable), a double-sinusoidal function (2-sine Variable), or the ACF (ACF Variable).
Our default model for the light curve is a single sinusoidal fit, but a double-sinusoidal function fit is also performed when the second-highest peak in the LS periodogram has a normalized power greater than 0.1. The double-sinusoidal fit is accepted as a better model than the single-sinusoidal fit to the light curve variations if there is at least a 25\% improvement in the $\chi_\nu^2$. The ACF fit is performed when both the single- and double-sinusoidal models are poorly fit ($\chi_\nu^2>100$) to the light curve. The ACF is accepted as a better model if the strongest correlation value is greater than 0.5 and the $\chi_\nu^2$ improves compared to the sinusoidal models. A star that is best fit with either a double-sinusoidal model or the ACF is considered significantly variable by default, the purity of which is further explored in \autoref{sec:reliability}. The values for the normalized power cutoffs and $\chi_\nu^2$ improvement were determined through visual inspection of the light curve fits for a preliminary sample of stars.
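The thresholds above can be condensed into a small decision function (a sketch; the signature and return labels are hypothetical, with the cutoff values taken from the text):

```python
def choose_model(power2, chi2_1sine, chi2_2sine, chi2_acf, acf_corr):
    """Model selection: a second LS peak with normalized power > 0.1
    enables the 2-sine fit, kept on a >=25% chi^2_nu improvement; the
    ACF is tried when both sinusoid fits are poor (chi^2_nu > 100) and
    kept if its strongest correlation exceeds 0.5 and its chi^2_nu
    improves on the sinusoidal models."""
    best, chi2_best = "1-sine", chi2_1sine
    if power2 > 0.1 and chi2_2sine < 0.75 * chi2_1sine:
        best, chi2_best = "2-sine", chi2_2sine
    if chi2_1sine > 100 and chi2_2sine > 100:
        if acf_corr > 0.5 and chi2_acf < chi2_best:
            best = "ACF"
    return best
```

In practice the 2-sine and ACF fits would only be computed when their entry conditions are met, as in \autoref{fig:flowchart}.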
If a star passes the initial periodogram search but is not identified as significantly variable with a double-sinusoidal function or the ACF, then it is subject to further vetting after assuming the single-sinusoidal fit to be the best model. First, short-period variability is identified separately from the LS periodogram search (0.01--1.5\,days) if the normalized power is $>$0.005 at a periodicity of $<$1.1\,days. Several vetting steps are then applied to remove systematic or long-period ($>$13\,days) variability from the periodogram search at 1--13\,days, with the first being a minimum threshold LS normalized power of 0.001. To retain high purity in the variability catalog, we remove stars that fall in high-density regions in power-period space (see \autoref{sec:consideration} and \autoref{fig:density}) with variability likely due to spacecraft systematics. High-density regions are determined by binning the log power and log period into 50 equally spaced bins (power $=$ 0.001--1.0; $P=$ 1--13\,days), and stars whose variability properties fall within bins that contain five or more total stars are considered to be spuriously variable---although we reiterate that some real variable stars will be removed during this step. Stars that are variable on timescales longer than that which can be properly constrained by the \textit{TESS}\ single-sector photometry ($\gg$13\,days), or are indistinguishable from uncorrected systematics on these timescales, are identified via two methods and removed from our variability catalog: 1) Stars that exhibit a maximum power in their LS periodogram at the upper limit of the period range searched (i.e., 13\,days) are assumed to exhibit variability at $\gtrsim$13\,days, which is beyond our sensitivity limit for the variability period. 2) We perform a linear fit to each light curve; if the $\chi_\nu^2$ of the linear fit is better than that of the sinusoidal or ACF models, we consider the star to have a long-term trend.
Overall, stars that exhibit single-sinusoidal variability are considered significantly variable and, thus, are retained in our variability catalog if they are not caught by the filters for being spuriously variable and if they do not exhibit long-term trends.
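The high-density filtering step can be sketched with a 2D histogram (a minimal illustration; the helper name and edge handling are our assumptions, with the bin counts and ranges taken from the text):

```python
import numpy as np

def density_flag(powers, periods, nbins=50, min_count=5):
    """Flag stars falling in overdense bins of log(power)-log(period)
    space (power = 0.001--1.0; P = 1--13 days), per TESS sector."""
    H, pw_edges, per_edges = np.histogram2d(
        np.log10(powers), np.log10(periods), bins=nbins,
        range=[[np.log10(1e-3), 0.0], [0.0, np.log10(13.0)]])
    # map each star back to its bin and test the bin's occupancy
    i = np.clip(np.digitize(np.log10(powers), pw_edges) - 1, 0, nbins - 1)
    j = np.clip(np.digitize(np.log10(periods), per_edges) - 1, 0, nbins - 1)
    return H[i, j] >= min_count
```

Stars flagged by this cut are treated as spuriously variable, accepting that a few real variables in crowded bins are lost.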
Stars that were observed in multiple \textit{TESS}\ sectors may be detected as significantly variable in more than one sector. However, the variability period may not match between all sectors. It may be that the detected periodicity is half or twice that of another sector, the photometric variability for a given sector may be dominated by periodic momentum dumps of the spacecraft, or the star may be exhibiting variability that changes in period or amplitude over time. For the variability catalog reported in this work, we provide only the results from a single sector of \textit{TESS}\ photometry. The \textit{TESS}\ sector from which we report the periodogram results for a given star is based on first prioritizing any variability that is best fit by an ACF or double-sinusoidal model, then choosing the periodogram result with the highest normalized power.
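The sector-selection priority can be written compactly (a sketch; the per-sector `results` structure and helper name are hypothetical):

```python
def select_sector(results):
    """Pick the sector to report: prefer ACF or double-sinusoidal
    detections, then take the result with the highest normalized power.
    `results` is a list of per-sector dicts with 'model' and 'power' keys."""
    rank = {"ACF": 2, "2-sine": 2, "1-sine": 1}
    return max(results, key=lambda r: (rank[r["model"]], r["power"]))
```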
\begin{deluxetable*}{lcrcrrrrrccc}
\tablecaption{Light Curve Fitting and Stellar Properties of Single-Sinusoidal Variables\label{tab:1peak}}
\tablewidth{700pt}
\tabletypesize{\scriptsize}
\tablehead{
\colhead{TIC} & \colhead{Sector} &
\colhead{$P_\mathrm{var}$} &
\colhead{power} & \colhead{$A$} &
\colhead{$T_0$} & \colhead{$\chi^2_\nu$} &
\colhead{RMS} & \colhead{$T_\mathrm{mag}$\tablenotemark{a}} &
\colhead{$T_\mathrm{eff}$\tablenotemark{a}} & \colhead{$R_*$\tablenotemark{a}} &
\colhead{$L_*$\tablenotemark{b}} \\ [-0.3cm]
\colhead{~} & \colhead{~} & \colhead{(days)} & \colhead{~} &
\colhead{(ppm)} & \colhead{(BTJD)} & \colhead{~} & \colhead{(ppm)} &
\colhead{~} & \colhead{(K)} & \colhead{(\mbox{R$_{\odot}$})} & \colhead{(\mbox{L$_{\odot}$})}
}
\startdata
355235442 & 9 & $1.225 \pm 0.024$ & 0.999 & 43096 & 1543.778 & 5.65 & 30751 & 10.15 & 5266 & 1.02 & 0.72 \\
361948797 & 16 & $9.485 \pm 1.361$ & 0.999 & 52113 & 1743.029 & 6.89 & 37109 & 8.77 & 14618 & --- & --- \\
285651536 & 12 & $12.365 \pm 2.394$ & 0.998 & 22679 & 1625.519 & 6.07 & 16823 & 7.85 & --- & --- & --- \\
29953651 & 9 & $1.472 \pm 0.033$ & 0.995 & 20919 & 1544.337 & 11.92 & 14845 & 8.10 & --- & --- & --- \\
441807438 & 21 & $1.135 \pm 0.018$ & 0.995 & 33715 & 1871.116 & 19.43 & 23917 & 8.81 & 5181 & 1.22 & 0.97 \\
127256815 & 7 & $1.313 \pm 0.027$ & 0.994 & 26086 & 1491.655 & 9.46 & 18674 & 8.95 & 13833 & --- & --- \\
47985275 & 6 & $1.111 \pm 0.021$ & 0.993 & 16295 & 1465.957 & 4.15 & 11605 & 9.37 & 12081 & --- & --- \\
303860976 & 11 & $1.275 \pm 0.025$ & 0.993 & 6953 & 1596.913 & 2.62 & 4993 & 7.97 & 10237 & 3.83 & 144.53 \\
117765777 & 6 & $1.101 \pm 0.021$ & 0.992 & 31558 & 1465.491 & 78.11 & 22488 & 7.91 & 11345 & 3.00 & 134.18 \\
401481773 & 10 & $8.580 \pm 1.209$ & 0.992 & 24077 & 1577.481 & 9.97 & 17740 & 8.63 & --- & --- & --- \\
158797702 & 26 & $1.419 \pm 0.031$ & 0.991 & 11566 & 2010.424 & 5.78 & 8231 & 8.02 & 6445 & 4.44 & 30.54 \\
453442366 & 11 & $7.694 \pm 0.913$ & 0.991 & 28422 & 1597.729 & 1.67 & 20602 & 10.68 & 3891 & 0.57 & 0.07 \\
128379228 & 16 & $1.431 \pm 0.032$ & 0.990 & 32838 & 1738.988 & 24.70 & 23465 & 8.70 & 10821 & 3.08 & 117.11 \\
255635577 & 6 & $0.067 \pm 0.000$ & 0.990 & 13035 & 1465.223 & 9.45 & 9265 & 7.72 & 7213 & 1.62 & 6.41 \\
369718807 & 12 & $11.915 \pm 2.074$ & 0.990 & 12433 & 1631.364 & 86.54 & 8949 & 5.26 & 3783 & 72.52 & 967.09 \\
26543048 & 15 & $1.003 \pm 0.015$ & 0.989 & 25304 & 1712.223 & 18.02 & 18005 & 8.09 & 6164 & 4.24 & 23.30 \\
444975216 & 20 & $10.028 \pm 1.296$ & 0.989 & 8071 & 1845.705 & 1.83 & 5461 & 8.07 & 8348 & 2.36 & 24.23 \\
449671660 & 12 & $1.384 \pm 0.028$ & 0.989 & 46663 & 1625.001 & 2.08 & 33352 & 11.67 & 4232 & 0.78 & 0.17 \\
367092843 & 24 & $1.006 \pm 0.015$ & 0.989 & 55091 & 1956.161 & 3.11 & 39150 & 11.10 & 4705 & 0.79 & 0.27 \\
432191995 & 9 & $1.451 \pm 0.032$ & 0.988 & 11900 & 1543.253 & 5.66 & 8460 & 8.43 & 6867 & 3.49 & 24.28
\enddata
\tablecomments{The 20 stars with the highest LS normalized power are listed here as a reference, but the complete table is available in the machine-readable format.}
\tablenotetext{a}{Taken from the TICv8 catalog \citep{Stassun18,Stassun19}.}
\tablenotetext{b}{Calculated from $T_\mathrm{eff}$ and $R_*$.}
\end{deluxetable*}
\begin{deluxetable*}{lcrcrrrrrccc}
\tablecaption{Light Curve Fitting and Stellar Properties of Double-Sinusoidal Variables\label{tab:2peak}}
\tablewidth{700pt}
\tabletypesize{\scriptsize}
\tablehead{
\colhead{TIC} & \colhead{Sector} &
\colhead{$P_\mathrm{var}$} &
\colhead{power} & \colhead{$A$} &
\colhead{$T_0$} & \colhead{$\chi^2_\nu$} &
\colhead{RMS} & \colhead{$T_\mathrm{mag}$\tablenotemark{a}} &
\colhead{$T_\mathrm{eff}$\tablenotemark{a}} & \colhead{$R_*$\tablenotemark{a}} &
\colhead{$L_*$\tablenotemark{b}} \\ [-0.3cm]
\colhead{~} & \colhead{~} & \colhead{(days)} & \colhead{~} &
\colhead{(ppm)} & \colhead{(BTJD)} & \colhead{~} & \colhead{(ppm)} &
\colhead{~} & \colhead{(K)} & \colhead{(\mbox{R$_{\odot}$})} & \colhead{(\mbox{L$_{\odot}$})}
}
\startdata
56980355 & 8 & $4.490 \pm 0.281$ & 0.984 & 35938 & 1520.424 & 16.25 & 25262 & 9.08 & 9684 & 4.95 & 193.51 \\
& & $6.045 \pm 0.487$ & 0.316 & 3387 & 1518.653 & & & & & & \\
140797524 & 8 & $9.576 \pm 1.287$ & 0.979 & 87883 & 1525.411 & 904.94 & 61352 & 7.24 & 4691 & 8.67 & 32.69 \\
& & $6.216 \pm 0.516$ & 0.332 & 7859 & 1519.354 & & & & & & \\
455000299 & 11 & $7.911 \pm 0.899$ & 0.977 & 30308 & 1596.905 & 7.18 & 21797 & 10.09 & 4397 & 1.18 & 0.47 \\
& & $5.220 \pm 0.398$ & 0.417 & 3457 & 1598.956 & & & & & & \\
33298241 & 7 & $11.784 \pm 2.262$ & 0.968 & 47926 & 1499.506 & 86.60 & 34356 & 8.88 & 10636 & 4.02 & 185.68 \\
& & $6.944 \pm 0.546$ & 0.143 & 4868 & 1491.729 & & & & & & \\
291366757 & 12 & $8.328 \pm 1.009$ & 0.965 & 30518 & 1632.860 & 178.56 & 21593 & 7.18 & 8231 & 3.56 & 52.26 \\
& & $5.580 \pm 0.379$ & 0.210 & 3800 & 1628.350 & & & & & & \\
255585244 & 7 & $11.819 \pm 2.360$ & 0.965 & 36199 & 1502.757 & 5.06 & 25681 & 9.67 & --- & --- & --- \\
& & $6.940 \pm 0.550$ & 0.210 & 5036 & 1492.010 & & & & & & \\
202414513 & 17 & $12.515 \pm 2.167$ & 0.964 & 16514 & 1769.312 & 9.51 & 10897 & 8.53 & 3550 & 95.10 & 1289.69 \\
& & $6.958 \pm 0.656$ & 0.449 & 1417 & 1771.497 & & & & & & \\
258908663 & 11 & $6.147 \pm 0.560$ & 0.957 & 52026 & 1600.852 & 6.22 & 36753 & 11.62 & 3731 & 0.88 & 0.13 \\
& & $10.151 \pm 1.458$ & 0.383 & 11196 & 1600.146 & & & & & & \\
57065604 & 9 & $11.338 \pm 2.502$ & 0.953 & 57539 & 1550.353 & 547.86 & 41710 & 7.99 & 11179 & 2.52 & 88.87 \\
& & $6.693 \pm 0.524$ & 0.273 & 7994 & 1549.570 & & & & & & \\
280161519 & 14 & $12.859 \pm 2.680$ & 0.951 & 12086 & 1689.719 & 14.75 & 8704 & 8.12 & 3672 & 59.03 & 568.85 \\
& & $7.703 \pm 0.572$ & 0.154 & 1533 & 1687.899 & & & & & & \\
360000758 & 11 & $7.697 \pm 0.876$ & 0.949 & 29091 & 1603.245 & 5.13 & 20399 & 10.70 & --- & --- & --- \\
& & $5.293 \pm 0.353$ & 0.347 & 5749 & 1597.015 & & & & & & \\
245171169 & 11 & $8.320 \pm 1.020$ & 0.948 & 27211 & 1600.872 & 96.54 & 19893 & 8.27 & 5117 & 4.44 & 12.13 \\
& & $5.438 \pm 0.430$ & 0.448 & 5275 & 1597.047 & & & & & & \\
168448855 & 9 & $11.956 \pm 2.447$ & 0.944 & 14505 & 1545.440 & 5.61 & 10293 & 9.72 & 4840 & 0.88 & 0.38 \\
& & $6.835 \pm 0.585$ & 0.279 & 2112 & 1547.284 & & & & & & \\
300449121 & 2 & $8.815 \pm 1.040$ & 0.944 & 10886 & 1358.375 & 4.32 & 7602 & 9.36 & 6163 & 1.05 & 1.43 \\
& & $5.828 \pm 0.444$ & 0.321 & 1896 & 1358.011 & & & & & & \\
381949031 & 11 & $10.887 \pm 2.522$ & 0.941 & 11370 & 1607.081 & 153.12 & 8528 & 5.85 & 3801 & 75.06 & 1055.79 \\
& & $6.572 \pm 0.493$ & 0.253 & 1689 & 1602.841 & & & & & & \\
55745232 & 11 & $7.270 \pm 0.888$ & 0.937 & 22617 & 1601.638 & 5.63 & 17176 & 9.68 & --- & --- & --- \\
& & $4.969 \pm 0.322$ & 0.548 & 5831 & 1597.578 & & & & & & \\
233605660 & 21 & $8.841 \pm 1.198$ & 0.936 & 18857 & 1875.201 & 0.98 & 13637 & 11.61 & 4786 & 0.76 & 0.27 \\
& & $4.155 \pm 0.215$ & 0.125 & 3296 & 1874.036 & & & & & & \\
41030783 & 11 & $7.199 \pm 0.868$ & 0.934 & 10904 & 1598.240 & 4.20 & 8297 & 9.72 & 5538 & 1.19 & 1.20 \\
& & $4.973 \pm 0.318$ & 0.423 & 2372 & 1599.084 & & & & & & \\
382418027 & 2 & $11.362 \pm 1.776$ & 0.934 & 10294 & 1357.492 & 2.93 & 7450 & 10.00 & 5464 & 0.85 & 0.58 \\
& & $7.123 \pm 0.549$ & 0.170 & 1630 & 1357.786 & & & & & & \\
149176305 & 9 & $8.718 \pm 1.171$ & 0.932 & 9719 & 1544.627 & 85.89 & 6973 & 6.30 & 5486 & 0.83 & 0.55 \\
& & $3.954 \pm 0.217$ & 0.126 & 1822 & 1545.480 & & & & & &
\enddata
\tablecomments{The 20 stars with the highest LS normalized power are listed here as a reference, but the complete table is available in the machine-readable format.}
\tablenotetext{a}{Taken from the TICv8 catalog \citep{Stassun18,Stassun19}.}
\tablenotetext{b}{Calculated from $T_\mathrm{eff}$ and $R_*$.}
\end{deluxetable*}
\begin{deluxetable*}{lcrcrrrrrccc}
\tablecaption{Light Curve Fitting and Stellar Properties of ACF Variables\label{tab:ACF}}
\tablewidth{700pt}
\tabletypesize{\scriptsize}
\tablehead{
\colhead{TIC} & \colhead{Sector} &
\colhead{$P_\mathrm{var}$} &
\colhead{correlation} & \colhead{$A$} &
\colhead{$T_0$} & \colhead{$\chi^2_\nu$} &
\colhead{RMS} & \colhead{$T_\mathrm{mag}$\tablenotemark{a}} &
\colhead{$T_\mathrm{eff}$\tablenotemark{a}} & \colhead{$R_*$\tablenotemark{a}} &
\colhead{$L_*$\tablenotemark{b}} \\ [-0.3cm]
\colhead{~} & \colhead{~} & \colhead{(days)} & \colhead{~} &
\colhead{(ppm)} & \colhead{(BTJD)} & \colhead{~} & \colhead{(ppm)} &
\colhead{~} & \colhead{(K)} & \colhead{(\mbox{R$_{\odot}$})} & \colhead{(\mbox{L$_{\odot}$})}
}
\startdata
451949522 & 10 & $0.454 \pm 0.002$ & 0.937 & 105168 & 1569.842 & 5.28 & 36430 & 8.94 & 4962 & 1.77 & 1.70 \\
162432383 & 10 & $0.324 \pm 0.002$ & 0.935 & 12114 & 1569.524 & 70.84 & 4800 & 8.07 & 7308 & 1.38 & 4.86 \\
41170667 & 9 & $0.487 \pm 0.002$ & 0.932 & 200327 & 1543.618 & 14.06 & 69559 & 10.30 & 6853 & 3.51 & 24.41 \\
24662304 & 6 & $0.545 \pm 0.007$ & 0.932 & 13671 & 1465.473 & 829.76 & 5922 & 6.29 & --- & --- & --- \\
285413207 & 7 & $0.638 \pm 0.003$ & 0.931 & 44123 & 1492.088 & 5.75 & 16092 & 9.96 & 6254 & 1.15 & 1.81 \\
43216747 & 22 & $0.438 \pm 0.002$ & 0.930 & 99613 & 1899.351 & 5.43 & 35717 & 10.35 & 5083 & 1.10 & 0.73 \\
132764448 & 7 & $0.418 \pm 0.001$ & 0.930 & 29187 & 1491.636 & 3.41 & 10413 & 9.34 & 4612 & 0.96 & 0.38 \\
157212164 & 7 & $0.572 \pm 0.002$ & 0.929 & 93374 & 1491.733 & 6.98 & 32852 & 10.51 & 5647 & 0.89 & 0.73 \\
424721218 & 20 & $0.388 \pm 0.001$ & 0.929 & 334262 & 1842.409 & 2.55 & 119979 & 11.64 & 7165 & 5.05 & 60.44 \\
140132301 & 9 & $0.329 \pm 0.001$ & 0.928 & 201079 & 1543.415 & 69.62 & 71698 & 9.32 & --- & --- & --- \\
149248196 & 7 & $0.515 \pm 0.002$ & 0.928 & 216510 & 1491.831 & 1121.62 & 74942 & 6.11 & 5128 & --- & --- \\
140084919 & 9 & $0.422 \pm 0.001$ & 0.928 & 56044 & 1543.510 & 3.97 & 19864 & 9.04 & --- & --- & --- \\
115419820 & 24 & $0.439 \pm 0.001$ & 0.927 & 299383 & 1955.875 & 98.54 & 99015 & 9.18 & 7084 & 2.88 & 18.79 \\
458404466 & 10 & $0.648 \pm 0.004$ & 0.927 & 129691 & 1569.612 & 12.43 & 45873 & 6.80 & --- & --- & --- \\
316333039 & 22 & $0.413 \pm 0.002$ & 0.927 & 242883 & 1899.580 & 417.29 & 89688 & 7.99 & 6618 & 3.21 & 17.70 \\
232634683 & 20 & $0.637 \pm 0.003$ & 0.927 & 64219 & 1842.830 & 5.43 & 22260 & 10.05 & 4620 & 0.82 & 0.28 \\
150166721 & 9 & $0.486 \pm 0.001$ & 0.927 & 298494 & 1543.315 & 13.60 & 93872 & 9.07 & 6549 & 4.47 & 32.94 \\
314459000 & 15 & $0.429 \pm 0.001$ & 0.926 & 357776 & 1711.345 & 84.01 & 128233 & 9.42 & 6309 & 3.13 & 13.98 \\
461498341 & 20 & $0.563 \pm 0.002$ & 0.926 & 109183 & 1842.447 & 65.72 & 39085 & 8.77 & 5016 & 1.00 & 0.57 \\
149539114 & 8 & $0.430 \pm 0.004$ & 0.926 & 94670 & 1517.393 & 12.17 & 33901 & 10.61 & 5367 & 1.05 & 0.83
\enddata
\tablecomments{The 20 stars with the highest correlation value are listed here as a reference, but the complete table is available in the machine-readable format.}
\tablenotetext{a}{Taken from the TICv8 catalog \citep{Stassun18,Stassun19}.}
\tablenotetext{b}{Calculated from $T_\mathrm{eff}$ and $R_*$.}
\end{deluxetable*}
\section{Results}\label{sec:results}
\subsection{Variability Catalog} \label{sec:catalog}
Tables~\ref{tab:1peak}--\ref{tab:ACF} list the \nSine\ stars that are significantly variable when described by a simple sinusoidal function, \nDsine\ stars that exhibit double-sinusoidal variability, and \nACF\ stars that are otherwise strictly periodic and best characterized with the ACF, for a total of \nTotal\ unique stars that are included in the variability catalog as a whole. The variability period ($P_\mathrm{var}$) and normalized power (or correlation) are determined from the LS periodogram (or ACF). The amplitude ($A$) and time of maximum flux ($T_0$) of the light curve modulations---and the corresponding reduced chi-squared statistic ($\chi_\nu^2$)---are measured from the best-fit single- or double-sinusoidal function. The \textit{TESS}\ magnitude ($T_\mathrm{mag}$), stellar effective temperature ($T_\mathrm{eff}$), and stellar radius ($R_*$) are taken from the TICv8 catalog \citep{Stassun18, Stassun19} when available, and we calculate stellar luminosities ($L_*$) using the reported radii and temperatures. \autoref{tab:1peak}, in particular, includes variability that was detected with at least 0.001 normalized power ($\sim$70,000 stars), but we hold the highest confidence in the variability of stars with $>$0.1 normalized power ($\sim$20,000 stars). The figures and results that follow only include variable stars that were detected with at least 0.1 normalized power (\nTotalSig\ stars). Tables~\ref{tab:1peak}--\ref{tab:ACF} are available in the machine-readable format and figures showing the light curve, LS periodogram, and phase-folded light curve (e.g., see Figures~\ref{fig:ex_1peak}--\ref{fig:ex_ACF}) for each star in the variability catalog are available as a High-Level Science Product (HLSP) on MAST.
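The tabulated luminosities follow from the Stefan--Boltzmann relation applied to the TICv8 radii and temperatures; a one-line sketch (the solar effective temperature of 5772\,K is our assumption, and it reproduces the tabulated values to within rounding):

```python
def luminosity(radius_rsun, teff_k, teff_sun=5772.0):
    """L/Lsun = (R/Rsun)^2 * (Teff/Teff_sun)^4 from TICv8 R_* and T_eff."""
    return radius_rsun ** 2 * (teff_k / teff_sun) ** 4
```

For example, the first row of \autoref{tab:1peak} (TIC~355235442: $R_*=1.02$\,\mbox{R$_{\odot}$}, $T_\mathrm{eff}=5266$\,K) yields $L_*\approx0.72$\,\mbox{L$_{\odot}$}, matching the listed value.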
Our default model assumes sinusoidal variability, which will generally encapsulate rotational variability. \autoref{fig:ex_1peak} shows three examples of periodic variability that are best characterized by a single-sinusoidal function. Light curve modulations that are characterized by a single periodic signal can represent a broad range of stellar activity, including rotational variations caused by starspot activity, a dominant pulsation, or overcontact binaries. The examples in \autoref{fig:ex_1peak} showcase rotational and/or pulsational modulations from an F star that is hotter than the Kraft break (top row; TIC~9971569), an M dwarf star that has not spun down over time due to its fully convective interior (center row; TIC~220523369), and a giant star exhibiting more quasi-periodic rotation (bottom row; TIC~9742362).
\begin{figure*}
\epsscale{1.1}
\plotone{ex_1peak_aboveKraft.png}
\plotone{ex_1peak_Mdwarf.png}
\plotone{ex_1peak_giant.png}
\caption{Single sector light curve (\textit{left column}), periodogram (\textit{center column}), and phase-folded light curve (\textit{right column}) for 3 examples of stars that are best characterized by a single-sinusoidal function. The black points in the right panels show the medians of 100 bins for the phase curve. The red curve represents the best-fit sinusoidal function and the blue triangles denote spacecraft momentum dump timings. The examples shown include a quickly rotating F star (\textit{top row}), an M dwarf with a fully convective interior (\textit{center row}), and a red giant star (\textit{bottom row}).}
\label{fig:ex_1peak}
\end{figure*}
Some stars exhibit multiple periodicities in their light curves, thus being better characterized by a double-sinusoidal function. Examples of more complex light curves that are best characterized by a double-sinusoidal function are shown in \autoref{fig:ex_2peak}. Multi-periodic variations in stars could be attributed to pulsations or differential stellar rotation. The light curves shown in \autoref{fig:ex_2peak} show examples of a $\delta$~Scuti pulsator (top row; TIC~70657495), a rotating F star that is cooler than the Kraft break (center row; TIC~468838146), and a giant star with two close periodicities (bottom row; TIC~71374409).
\begin{figure*}
\epsscale{1.1}
\plotone{ex_2peak_dScuti.png}
\plotone{ex_2peak_belowKraft.png}
\plotone{ex_2peak_giant.png}
\caption{Same as \autoref{fig:ex_1peak}, except for stars that are best characterized by a double-sinusoidal function. The right panel shows the phase-folded light curve for the two periodicities that are identified. The examples shown include a $\delta$~Scuti pulsator (\textit{top row}), an F star with a harmonic rotation (\textit{center row}), and a giant star (\textit{bottom row}).}
\label{fig:ex_2peak}
\end{figure*}
Finally, light curves that are strictly periodic but not necessarily sinusoidal in shape are best characterized using the ACF; examples are shown in \autoref{fig:ex_ACF}. These light curves typically represent short-period ($<$11\,days) eclipsing binary systems with V-shaped eclipses, but they can also include strictly periodic non-sinusoidal rotational variability. The examples shown in \autoref{fig:ex_ACF} represent an RR~Lyrae variable star (top row; TIC~393702163), an eclipsing binary with a V-shaped eclipse and strong Doppler boosting variations (center row; TIC~279254042), and a flat-bottomed eclipsing binary with strong ellipsoidal modulations (bottom row; TIC~450089997).
\begin{figure*}
\epsscale{1.1}
\plotone{ex_ACF_RRLyrae.png}
\plotone{ex_ACF_BEB.png}
\plotone{ex_ACF_SDEB.png}
\caption{Same as \autoref{fig:ex_1peak}, except for stars that are best characterized by an ACF. The center panel shows the correlation over the range of searched variability periods. The red curve represents an interpolation between each median binned point in the phase curve. The examples shown include an RR~Lyrae variable star (\textit{top row}), an eclipsing binary with a V-shaped eclipse (\textit{center row}), and an eclipsing binary with ellipsoidal modulations (\textit{bottom row}).}
\label{fig:ex_ACF}
\end{figure*}
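The ACF-based characterization described above can be sketched in a few lines. The following is a hypothetical, numpy-only illustration of the general technique (mean-subtracted autocorrelation, with the period taken as the lag of the first local maximum beyond zero lag); the function name and the \texttt{min\_height} acceptance threshold are our assumptions and are not taken from the actual pipeline.

```python
import numpy as np

def acf_period(flux, dt, min_height=0.2):
    """Estimate a variability period as the lag of the first local
    maximum of the autocorrelation function (ACF) of an evenly
    sampled light curve.

    flux       : light-curve fluxes (evenly sampled)
    dt         : sampling interval in days
    min_height : minimum normalized ACF value accepted as a peak
                 (assumed threshold, not from the paper)
    """
    f = np.asarray(flux, dtype=float)
    f = f - f.mean()
    n = len(f)
    acf = np.correlate(f, f, mode="full")[n - 1:]
    acf /= acf[0]  # normalize so the zero-lag value is 1
    # the first local maximum beyond zero lag is taken as the period
    for k in range(1, n - 1):
        if acf[k] > acf[k - 1] and acf[k] > acf[k + 1] and acf[k] > min_height:
            return k * dt
    return None  # no significant periodicity found

# Example: a strictly periodic but non-sinusoidal (eclipse-like) signal
t = np.arange(0.0, 13.0, 0.01)             # one 13-day sector
phase = (t % 2.5) / 2.5                    # true period of 2.5 days
flux = np.where(phase < 0.1, 0.9, 1.0)     # flat-bottomed periodic dips
period = acf_period(flux, dt=0.01)         # recovers the 2.5-day period
```

Note that for an eclipsing binary with primary and secondary eclipses of similar depth, the first ACF peak recovers the eclipse recurrence interval, which is half the orbital period.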
\subsection{Catalog Reliability}\label{sec:reliability}
We can characterize the reliability of our catalog by examining stars observed in more than one sector. We compare the periodogram search results for the stars across individual sectors to see whether the periodicity properties are consistent between sectors. As stated in \autoref{sec:search}, we select a single \textit{TESS}\ sector for reporting the variability analysis results in our catalog (Tables~\ref{tab:1peak}--\ref{tab:ACF}) that is based on the periodicity with the highest normalized power. This selected sector is used as the reference when comparing variability analyses from multiple \textit{TESS}\ sectors.
There are \nSineMult\ stars that are best characterized by our default model (single-sinusoidal function) and were observed in multiple \textit{TESS}\ sectors. For 14,415 stars, we find that the periodicity measured from the selected sector agrees to within 10\% with one of the following: 1) the periodicity detected from any other individual sector of photometry; 2) twice or half the periodicity detected from other sectors of photometry; 3) the average periodicity measured from each of the individual sectors; or 4) the average periodicity after removing the largest outlier when the star was observed in three or more \textit{TESS}\ sectors. Stars whose periodic signatures do not agree between sectors may still be astrophysically variable, but could be affected by differential spot rotation or poor photometric detrending in one of the sectors. Overall, there are 4,789\ stars in our single-sinusoidal variability catalog that were observed in multiple sectors for which we cannot confirm their astrophysical nature, suggesting up to $\sim$25\% contamination of the single-sinusoidal variability catalog (\autoref{tab:1peak}) by non-variable stars. Although the catalog lists all stars with periodicities that have $>$0.001 normalized power, we hold higher confidence that the stars with $>$0.1 normalized power are intrinsically variable, since only 7\% of those stars are unconfirmed from their multiple sectors of photometry.
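The four agreement criteria enumerated above can be expressed compactly. The sketch below is a hypothetical re-implementation for illustration: the function name, the choice of reference value for the 10\% relative tolerance, and the definition of "largest outlier" are our assumptions, not details taken from the pipeline.

```python
import numpy as np

def periods_agree(p_sel, p_others, tol=0.10):
    """Check whether the period from the selected sector (p_sel) is
    consistent with the periods measured in the other sectors,
    following the four criteria described in the text."""
    p_others = np.asarray(p_others, dtype=float)
    close = lambda a, b: abs(a - b) <= tol * b  # 10% relative tolerance
    # 1) matches any individual sector's period
    if any(close(p_sel, p) for p in p_others):
        return True
    # 2) matches twice or half another sector's period (harmonic confusion)
    if any(close(p_sel, 2.0 * p) or close(p_sel, 0.5 * p) for p in p_others):
        return True
    # 3) matches the average period over the individual sectors
    if close(p_sel, p_others.mean()):
        return True
    # 4) matches the average after dropping the largest outlier
    #    (outlier defined here as farthest from the median; >= 3 sectors)
    if len(p_others) >= 3:
        dev = np.abs(p_others - np.median(p_others))
        trimmed = np.delete(p_others, np.argmax(dev))
        if close(p_sel, trimmed.mean()):
            return True
    return False
```

For example, a 2.0-day selected period agrees with another sector's 4.1-day period through the half-period criterion, but a star measured at 9 and 10 days in its other sectors would remain unconfirmed.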
There are \nDsineMult\ stars that are observed in multiple \textit{TESS}\ sectors and best characterized by a double-sinusoidal function. We find that, for all but 29 stars, at least one of the two periodicities identified from the selected sector variability analysis is within 10\% of one of the two periodicities identified in other sectors characterized by a double-sinusoidal function. There are only 5 stars whose periodicities are inconsistent with other sectors by more than 15\%, but after visual inspection of their light curves we find that they are all consistent with being intrinsically variable in nature, with the inconsistent periodicities between \textit{TESS}\ sectors attributable to differential rotation (i.e., several close significant periodicities). Overall, we infer with high confidence that \textit{all stars identified as exhibiting a double-sinusoidal signature in their light curve are truly variable in nature}---even when identified using only a single sector of \textit{TESS}\ photometry (see \autoref{tab:2peak}).
There are \nACFmult\ stars that are observed in multiple \textit{TESS}\ sectors and are periodic, but not necessarily sinusoidal, in nature, and are thus best characterized by an ACF. Stars that are best characterized by an ACF in at least one \textit{TESS}\ sector are typically consistent within 10\% with the ACF periodicities available in other sectors. There are 7 stars with inconsistent results, but upon visual examination we find that they could all equally well be described by a double-sinusoidal function, with several significant periodicities that are not necessarily harmonics of each other. Given that these few exceptions are visually confirmed as being significantly variable in nature, we infer that \textit{all stars with light curves that are best fit with an ACF}---including those that were observed in only a single sector of \textit{TESS}\ photometry---are confirmed as being truly variable in nature with high confidence (see \autoref{tab:ACF}).
\section{Discussion}\label{sec:discussion}
\subsection{Demographics of Variable Stars} \label{sec:stats}
The analysis presented here makes use of the \textit{TESS}\ time-series photometry and the estimated stellar properties from the TICv8. The photometric analysis only considers periodic variability in the \textit{TESS}\ bandpass that can be detected using the Lomb-Scargle periodogram and the auto-correlation function. These diagnostics are not sufficient for a {\it detailed} demographic study linking the morphology of periodic variability to the specific mechanisms of stellar structure and evolution. They do not incorporate many other ways to characterize stellar variability, such as stochastic (nonperiodic) variability, chromatic changes, spectroscopic variability, or other measures of changing flux. Nevertheless, we can extract information from this analysis that tells us about broad classes of variability. We first consider the distribution of variability across the Hertzsprung-Russell diagram\footnote{White dwarf stars were intentionally excluded from the Candidate Target List \citep[CTL;][]{Stassun18, Stassun19} during the \textit{TESS}\ Prime Mission, such that they do not have 2-min cadence photometry available for the variability analyses presented in this work. A cut in luminosity and radius was also applied to the CTL in order to remove most red giant stars, with some degree of contamination expected when including all bright stars with $T_{\mathrm{mag}}<6$\,mag.} (HRD), and we then explore the relationship between periodicity and stellar luminosity.
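As a concrete illustration of the normalized-power statistic used throughout this work, the following numpy-only sketch implements a Lomb-Scargle periodogram normalized by the total variance, so that a pure sinusoid peaks near 1. This is a generic textbook form of the periodogram; we do not claim it reproduces the pipeline's exact implementation or normalization convention.

```python
import numpy as np

def lomb_scargle(t, y, freqs):
    """Lomb-Scargle periodogram normalized by the total variance,
    so the peak power of a noiseless sinusoid approaches 1."""
    y = y - np.mean(y)
    norm = np.sum(y ** 2)
    power = np.empty(len(freqs))
    for i, f in enumerate(freqs):
        w = 2.0 * np.pi * f
        # the phase offset tau makes the sine and cosine terms orthogonal
        tau = np.arctan2(np.sum(np.sin(2.0 * w * t)),
                         np.sum(np.cos(2.0 * w * t))) / (2.0 * w)
        c = np.cos(w * (t - tau))
        s = np.sin(w * (t - tau))
        power[i] = (np.dot(y, c) ** 2 / np.dot(c, c)
                    + np.dot(y, s) ** 2 / np.dot(s, s)) / norm
    return power

# Example: recover a 3-day periodicity from a 13-day light curve
t = np.linspace(0.0, 13.0, 1000)
y = np.sin(2.0 * np.pi * t / 3.0) + 0.05 * np.cos(11.0 * t)  # signal + contaminant
freqs = np.linspace(1.0 / 13.0, 5.0, 4000)  # periods of ~0.2--13 days
power = lomb_scargle(t, y, freqs)
best_period = 1.0 / freqs[np.argmax(power)]  # ~3 days
```

With this normalization the power at the true period is close to 1 for a nearly pure sinusoid, consistent with the 0.001 and 0.1 normalized-power thresholds discussed in \autoref{sec:reliability}.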
\begin{figure*}
\epsscale{1.1}
\plotone{HRdiagram_all.png}
\caption{Hertzsprung-Russell diagram of stars that are identified as significantly variable, and thus are included in the variability catalog (Tables~\ref{tab:1peak}--\ref{tab:ACF}). The points are colored by the measured variability period. Luminosities are calculated from the effective temperatures and stellar radii available in the TICv8 catalog \citep{Stassun18,Stassun19}.}
\label{fig:HRdiag4}
\end{figure*}
\autoref{fig:HRdiag4} shows the stars in the variability catalog on the HRD colored by their measured variability periods. The figure includes the subset of variable stars that were best characterized by a single-sinusoidal model (\autoref{tab:1peak}), double-sinusoidal model (\autoref{tab:2peak}), or ACF (\autoref{tab:ACF}). The location of stars on the HRD is significantly correlated with their measured variability period, which supports the expectation that the variability in these stars is primarily driven by starspot rotation and pulsations. We can describe the mechanisms for variability by connecting the different regions on the HRD with the variability periods measured from this work and examples of their observed light curves (e.g., Figures~\ref{fig:ex_1peak}--\ref{fig:ex_ACF}).
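The luminosities used throughout these diagrams are computed from the TICv8 effective temperatures and stellar radii, presumably via the Stefan-Boltzmann law. In solar units this is a one-line calculation; the nominal solar effective temperature of 5772\,K used below is our assumption (the IAU nominal value), not a quantity stated in this work.

```python
def luminosity_lsun(radius_rsun, teff_k, teff_sun=5772.0):
    """L/Lsun = (R/Rsun)^2 * (Teff/Teff_sun)^4 (Stefan-Boltzmann law).
    teff_sun defaults to the IAU nominal solar effective temperature."""
    return radius_rsun ** 2 * (teff_k / teff_sun) ** 4

# A Sun-like star recovers 1 Lsun; a cool giant (R = 10 Rsun,
# Teff = 4800 K) lands near 48 Lsun, on the giant branch of the HRD.
```

This places each cataloged star on the vertical axis of \autoref{fig:HRdiag4} directly from the two TICv8 parameters.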
The stars with the longest variability periods are on the red giant branch and single FGK dwarf stars, which tend to have slower rotation periods as they age \citep{Irwin09, Meibom09, van_Saders13, Angus19}. Low-mass M dwarf stars ($<3500$\,K), on the other hand, have fully convective interiors such that they do not spin down as quickly over time and maintain generally short rotation periods \citep[$<$1\,day;][]{Chabrier97, Wright11, Wright18, Astudillo-Defru17, Newton17}. The shortest variability periods are found in OBAF oscillating stars, which experience multiple modes of pulsations that reveal information about their internal structures \citep{Gaia_Collaboration22, Kurtz22}. A few stars in this sample also outline the binary sequence \citep[e.g.,][]{Gaia_Collaboration18, Gaia_Collaboration19}, which is offset from the main sequence towards higher luminosities and was typically better characterized by an ACF with shorter variability periods. For stars characterized by an ACF, the main sequence tends to be dominated by eclipsing binaries, overcontact binaries, other tidally locked binaries, and fast rotating young main sequence stars, with just a few stars dominated by longer period starspot rotational modulations on the lower luminosity edge of the main sequence. The red giant branch includes a mix of intrinsic variables and eclipsing binaries. There are also a few variable stars identified along the instability strip---including those that are consistent with being RR Lyrae or Cepheid variables.
Finally, we highlight the existence of what appears to be a prominent sub-subgiant population above the binary main-sequence and below the sub-giant branch. Sub-subgiant (SSG) stars have been recognized as likely representing unusual stellar evolution pathways ever since their initial detection as anomalies in the color-magnitude diagrams (CMDs) of some open clusters \citep[see, e.g.,][and references therein]{Mathieu03}. Subsequent studies of SSGs in clusters have proffered several possible interpretations for these systems: mass transfer in a binary system, collision of two main sequence stars, mass loss of subgiant envelopes through dynamical encounters, and reduced luminosity due to the strong surface coverage of magnetic starspots \citep[see, e.g.,][]{Leiner17}. Some recent works have concluded that mass transfer and dynamical formation pathways are disfavored based on the small numbers of SSGs in open clusters, preferring instead the strong starspot interpretation \citep[e.g.,][]{Gosnell22}. However, attempts to identify and characterize the broader SSG population in the field have only very recently begun \citep{Leiner22}. Thus, the large population of apparent SSGs in the field identified in \autoref{fig:HRdiag4} could be an opportunity to make substantial new progress in understanding these enigmatic systems.
Figures~\ref{fig:per-lum-type}--\ref{fig:per-lum-amp} show the period-luminosity relationship for the stars that are included in the stellar variability catalog. \autoref{fig:per-lum-type}, in particular, highlights which stars are best characterized by a single-sinusoidal model (black points), double-sinusoidal model (blue points), or ACF (orange points). \autoref{fig:per-lum-teff} is colored by the effective temperature and includes labels for several known astrophysical features, and \autoref{fig:per-lum-amp} is colored by the measured variability amplitude from their light curves to emphasize how the strength of the photometric variations can be used to differentiate between different types of stellar variability.
The distribution of variable stars in the period--luminosity diagram is similar between the stars that were best characterized with a single- or double-sinusoidal function. The OBAF main sequence stars show a positive correlation between their luminosities and short-period oscillations \citep[$\lesssim$0.2\,days;][]{Gaia_Collaboration22, Kurtz22}, and we attribute the bimodal clustering of this group to ambiguous identification between the first two harmonics of oscillations that exhibit similar strength in LS normalized power. The discontinuity at $\sim$4\,\mbox{L$_{\odot}$}\ luminosity is attributed to the Kraft break \citep[$\sim$6200\,K;][]{Kraft67}, where cooler stars tend to rotate more slowly as magnetized stellar winds remove angular momentum from the star. There is also a positive linear relationship between luminosity and variability period for stars on the evolved giant branch ($\gtrsim$100\,\mbox{L$_{\odot}$}), which is consistent with asteroseismic studies of red giants \citep{Huber11, Mosser12}. The primary difference between the stars best characterized with a single-sinusoidal versus double-sinusoidal function can be seen in the giant stars, where multiple periodicities (i.e., 2-Sine) shorter than $\sim$1.5\,days are not observed. The transition from single-sinusoidal to double-sinusoidal variability towards higher luminosity giant stars is in contrast with previous studies that have found instead that higher luminosity giants tend to pulsate with fewer distinct modes than low-luminosity giant stars \citep{Dennis14, Yu20}. However, this discrepancy may be caused by a detection bias in our catalog since lower luminosity giants tend to have smaller oscillation amplitudes.
\begin{figure*}
\epsscale{1.1}
\plotone{Pvar-Lum_all_type.png}
\caption{Calculated stellar luminosities versus the measured variability periods of stars that are identified as significantly variable, and thus are included in the variability catalog (Tables~\ref{tab:1peak}--\ref{tab:ACF}). The points are colored by whether their light curves were best characterized by a single-sinusoidal function (black points), double-sinusoidal function (blue points), or ACF (orange points). Luminosities are calculated from the effective temperatures and stellar radii available in the TICv8 catalog \citep{Stassun18,Stassun19}.}
\label{fig:per-lum-type}
\end{figure*}
\begin{figure*}
\epsscale{1.1}
\plotone{Pvar-Lum_all_Teff.png}
\caption{Same as \autoref{fig:per-lum-type}, but the points are colored by their effective temperatures. Several known astrophysical relationships are highlighted and labeled with faded ellipses.}
\label{fig:per-lum-teff}
\end{figure*}
\begin{figure*}
\epsscale{1.1}
\plotone{Pvar-Lum_all_amp.png}
\caption{Same as \autoref{fig:per-lum-type}, but the points are colored by the variability amplitude measured from their representative light curves.}
\label{fig:per-lum-amp}
\end{figure*}
The period-luminosity relationship for stars characterized with an ACF (orange points in \autoref{fig:per-lum-type}) is vastly different when compared to the stars with variations that are more sinusoidal in nature (black and blue points). Stars with higher amplitude variations (redder points in \autoref{fig:per-lum-amp}) tend to be eclipsing or overcontact binaries. In particular, the period-luminosity relationship for overcontact binaries can be seen twice, where the shorter period relationship is half the binary orbital period. The linear relationship at higher luminosities ($\gtrsim$5\,\mbox{L$_{\odot}$}) with periods ranging between $\sim$0.01--0.2\,days represents the period-luminosity relationship for $\delta$ Scuti stars \citep{King91, Antoci19, Ziaali19, Barac22, Kurtz22}. There is also a separation in luminosity ($\sim$4\,\mbox{L$_{\odot}$}) for stars with lower amplitude variations that can be attributed to the Kraft break.
The left panels of \autoref{fig:Pvar_hist} show histograms of the variability periods measured for all dwarf and giant stars in the variability catalog, where dwarf and giant stars are identified using the \texttt{LUMCLASS} flag in the TICv8 catalog \citep{Stassun18, Stassun19}. There are significantly fewer dwarf stars that exhibit variations on timescales of 1.5--2\,days, which is not necessarily related to the Kraft break since these rotation periods are also uncommon for M dwarf stars (see \autoref{fig:per-lum-teff}). If the dearth of variability at 1.5--2\,days were caused by a systematic effect, then the drop in occurrence would be observable in both the dwarf and giant stars. On the contrary, the drop in variable stars at 1.5--2\,days only occurs in dwarf stars, even when only the subset of stars that were best fit by a single-sinusoidal function is considered (right panels; \autoref{sec:consideration}). \citet{Kounkel22} suggested that FGK-type stars with periods shorter than $\sim$2\,days and M-type stars with periods shorter than $\sim$1.5\,days are predominantly binaries that are possibly undergoing tidal interactions. Furthermore, the lack of $\sim$2\,day periodicities spans over 4 orders of magnitude in luminosity, including M dwarfs, which suggests that it could be related to binarity rather than pulsations. However, we leave an in-depth investigation into the binary occurrence rate in FGKM stars near variability periods of 1.5--2\,days to a future work.
In the right panels of \autoref{fig:Pvar_hist} we observe fewer variable stars at 5--6\,days for both dwarfs and giants, which is in contrast to \citet{Holcomb22}, where an overdensity of rotational variables was identified at $\sim$5\,days. However, they suspected that their observed overdensity of variable stars at $\sim$5\,days could be related to uncorrected systematics in the \textit{TESS}\ PDCSAP photometry. This interpretation is consistent with our removal of variable stars from the variability catalog that were best characterized by a single-sinusoidal function but could be attributed to spacecraft systematics---including some stars that are truly variable at 5--6\,days (see \autoref{sec:consideration}).
\begin{figure*}
\epsscale{1.1}
\plotone{Pvar_hist.png}
\caption{Histograms of the measured variability periods for all (\textit{left panels}) dwarf (\textit{top panels}) and giant (\textit{bottom panels}) stars and the subset of stars best characterized by a single-sinusoidal function (\textit{right panels}). Both dwarf and giant stars characterized by a single-sinusoidal function have fewer variables at $\sim$5--6\,days due to the removal of true intrinsic variables when accounting for \textit{TESS}\ systematics (see \autoref{sec:consideration}). However, only dwarf stars show fewer variable stars at $\sim$1.5--2\,days.}
\label{fig:Pvar_hist}
\end{figure*}
\subsection{Comparisons to Similar Work}
Projects similar to this paper have been undertaken using photometric data from other sky surveys, as described in \autoref{sec:intro}. The observational and methodological heterogeneity of these surveys makes it hard to directly compare their bulk variability results. However, we can qualitatively compare the results from this study to those of two of the most similar surveys: the exploration of variable stars across the HRD using \textit{Gaia} data in \citet{Gaia_Collaboration19}, and the search for periodic stellar variability in data from the ground-based NGTS survey in \citet{Briegal22}. The observational properties of both surveys have some overlap with the stellar population observed by \textit{TESS}, and we intend to do direct star-by-star comparisons with those results in future work (see discussion in \autoref{sec:future}).
In \citet{Gaia_Collaboration19}, the authors examine the position of variable star classes on the HRD, separated by the various intrinsic and extrinsic types of known stellar variability. They compile a large set of known variable stars from various published sources and locate them on the HRD using \textit{Gaia} colors and magnitudes, and also conduct a limited exploration of stellar variables using \textit{Gaia} photometry, considering only the amplitude (the interquartile range) of the \textit{Gaia} $G$-band light curves. They identify regions of the HRD with a greater incidence of variability (see their Fig. 8), and also differences in typical variability amplitude across the HRD (their Fig. 9). \citet{Briegal22} conduct a general search for periodic variability in their lightcurves, which cover a set of fields around the sky, each observed for an average of 141 nights across an average time baseline of 218 days. Their variability search method shares some broad similarities to ours, while also being careful to remove the systematic trends that arise in wide-field ground based photometric observing. They also plot their variability detections across a CMD using \textit{Gaia} magnitudes.
We can examine some of the variability distributions across the HRD between the studies. This comparison is only qualitative and approximate, since the \textit{Gaia} and NGTS analyses use CMDs whereas we are plotting physical properties in an HRD. Also, the selection of targets for \textit{TESS}\ 2-min cadence observations was motivated by the search for transiting planets and not a broad sample of stars in a magnitude-limited way, which will certainly lead to selection biases in the numerical results. \textit{Gaia} observes deeper than \textit{TESS}\ to fainter stars, and over a longer time frame (22 months), with fewer individual epochs of observation (of order 20 or more observations).\footnote{Partly because of that, we do not see the white dwarf sequence in the \textit{TESS}\ data (along with the fact that the TICv8 stellar parameters were not designed to include white dwarfs).} NGTS observes a more similar set of stars in apparent magnitude, and over a time baseline more similar to \textit{Gaia}, but with a number of epochs more similar to \textit{TESS}. On the other hand, \textit{TESS}\ light curves have much greater photometric precision and duty cycles than either of the other surveys. Nevertheless, we can still qualitatively compare our overall distributions to those in these other two papers.
We plot the fraction of stars that we identify as variable across the HRD in \autoref{fig:varFrac}. Similar to Fig. 8 from \citet{Gaia_Collaboration19} and Figs. 3a and 7 in \citet{Briegal22}, we show higher rates of variability---especially with short periods---for the hot stars of the upper main sequence, and also along the top edge of the lower main sequence, which includes the binary sequence. The increased variability fraction for hotter stars on the upper main sequence is consistent with observations of the $\delta$ Scuti instability strip \citep{Murphy19}. We do not recover the high rates of variability at the upper part of the red giant branch due to the limited time span of the single-sector \textit{TESS}\ observations in this analysis. We do recover high rates of variability on the lower part of the giant branch, but the comparison of an HRD with a CMD, and the varying timescales of variability probed by the different surveys, make that point difficult to verify.
\begin{figure*}
\epsscale{1.1}
\plottwo{fracVariables_median.png}{fracVariables_median_powerCut_0p1.png}
\caption{The fraction of stars that are determined to be significantly variable across the HRD, with normalized power cutoffs of 0.001 (\textit{left}) and 0.1 (\textit{right}). The gray bins indicate areas of low completeness ($<$10 stars). Note the increased fractions of variable stars at the upper edge of the binary main-sequence as well as among the sub-subgiant population.}
\label{fig:varFrac}
\end{figure*}
\begin{figure*}
\epsscale{1.1}
\plotone{HRdiagram_amplitude.png}
\caption{Same as \autoref{fig:HRdiag4}, but the points are colored by the measured variability amplitude from the light curves. Note the gradient of increasing variability amplitude with increased displacement above the main-sequence; these are evidently binary stars on the binary main-sequence. Interestingly, the sub-subgiant population exhibits a heterogeneity of variability amplitudes.}
\label{fig:HRdiag5}
\end{figure*}
We can look at the distribution of variability amplitude between \autoref{fig:HRdiag5} and Fig. 9 in \citet{Gaia_Collaboration19}. We show the same increase in typical variability amplitude for stars on the upper edge of the main sequence, around the binary sequence. We see a gap in variability between the main sequence and the upper end of the giant branch, although the correspondence between the gaps in the two plots around the subgiant and lower giant branch regimes is not well defined. Above the part of the main sequence where the subgiant branch emerges, we find a number of very high-amplitude variables, which turn out to be classical pulsators in the instability strip. We do not see the high amplitudes among the red giant stars in the \textit{TESS}\ data due to the limited time frame of the current analysis, although once we incorporate multi-sector \textit{TESS}\ data in future work the red giant high-amplitude, long-period variability should become more apparent.
While \citet{Gaia_Collaboration19} did not include an investigation of periodic variability, \citet{Briegal22} does compare the periodicity of their variables to the colors of the stars in a manner similar to our analysis throughout \autoref{sec:stats}. We make a direct comparison to the period-color distribution of Fig. 8 from \citet{Briegal22} in \autoref{fig:NGTScomp}, where we show our measured variability periods versus \textit{Gaia} $G_{BP}-G_{RP}$ color, and color the points by the amplitude of their variability. We see a similar distribution to the figure in \citet{Briegal22}, with a gradual trend towards longer variability periods for redder stars. Also seen at periodicities between 0.1 and 0.5 days are two populations of variables with opposite trends with color, which we visually confirm as representing overcontact binaries. In general, the upper population represents those stars identified at the correct orbital period, while the lower population represents those identified at half the true period, as noted also in \citet{Briegal22}.
\begin{figure*}
\epsscale{1.1}
\plotone{NGTS_comp.png}
\caption{Variability period versus \textit{Gaia} $G_{BP}-G_{RP}$ color, with the points colored by the measured variability amplitude from the light curves.}
\label{fig:NGTScomp}
\end{figure*}
\subsection{Future Directions} \label{sec:future}
The work presented in this paper is only an initial step in detecting and analyzing stellar variability using \textit{TESS}\ data. There are numerous ways to expand upon this work. A natural next step that we are already working on is to extend the periodicity search beyond 13 days by stitching \textit{TESS}\ observations across multiple sectors from the Simple Aperture Photometry, which will be especially powerful for examining stars in the \textit{TESS}\ CVZs. Furthermore, there are now multiple pipelines extracting light curves of stars not selected for 2-min cadence observation. These include light curves from QLP \citep{Huang20}, TESS-SPOC \citep{Caldwell20}, DIA \citep{Oelkers18}, and DIAMANTE \citep{Montalto20}, along with customized extraction tools for individual targets such as \texttt{lightkurve} \citep{Lightkurve_Collaboration18}, \texttt{eleanor} \citep{Feinstein19}, and \texttt{giants}
\citep{Saunders22}. The availability of truly complete stellar samples available from the full frame images would allow future population analysis of variability unaffected by the target selection process for the 2-min cadence targets\footnote{To be fair, even the FFI data will be constrained by magnitude limits of the \textit{TESS}\ aperture, and blending due to the \textit{TESS}\ pixel size.}.
An additional extension of this work that we are working on is to combine the \textit{TESS}\ light curves with photometry from other wide-field surveys. Ground-based surveys such as ASAS \citep{Pojmanski02}, HATNet \citep{Hartman04}, KELT \citep{Oelkers18}, SuperWASP \citep{Pollacco06}, ASAS-SN \citep{Jayasinghe18}, and ZTF \citep{Yao19} cover large fractions of the sky with broadband photometry (typically in a single bandpass) over long periods of time. While the data from these surveys do not reach the photometric precision or duty cycle of \textit{TESS}, they do cover much longer time frames. Combining \textit{TESS}\ photometry with ground-based archival data will allow us to search for longer-term variations or changes in variability.
We also intend to cross-match the cataloged variables from this project with those from other variability searches. Although there have been a number of lists of photometric variables published by other projects (e.g. the superWASP and ASAS-SN catalogs), the heterogeneity of photometric surveys has generally made cross-survey combination of variable lists difficult to conduct and incomplete. That tends to be due to differing observing cadences, time baselines, duty cycles, passbands, magnitude ranges, photometric precision, angular resolution and other observational parameters. Nevertheless, the sky coverage, photometric precision, and duty cycle of \textit{TESS}\ will make this data set a more natural base catalog of (periodic) variability from which to build a true all-sky variability catalog, at least for short-period variability of bright stars.
Two recent catalogs in particular will be worth detailed comparisons. A recent catalog from \citet{Prsa22} provides a list of 4,584 eclipsing binary stars detected in the 2-min Prime Mission \textit{TESS}\ data. While we have detected a number of EB-like signatures in our catalog (see discussion in \autoref{sec:catalog} and example EB light curves), the LS and ACF tools that we have employed in this work are not optimal for detection of many EBs, especially the punctuated signals of well-detached EBs. In an upcoming work, we will explore the overlap between the EBs detected in this work and the set discovered in \citet{Prsa22}, both to identify any EBs missed by either work and to identify which ranges of physical and observational EB parameters are well probed by the methods in this work. We similarly intend to make a direct comparison with the catalog of rotational variability detected in \textit{TESS}\ FFI light curves by \citet{Kounkel22}.
Lastly, there are a number of efforts underway to classify photometric variables with much finer physical and observational categories than has been done in this work. Notable examples are the OGLE project \citep{Udalski08}, the \textit{Gaia} DR2 variability search \citep{Gaia_Collaboration19}, specific types of variable stars observed with \textit{TESS}\ \citep[e.g.,][]{Antoci19, Howard20a, Hon21, Avallone22, Barac22, Holcomb22, Prsa22, Kurtz22, Saunders22}, and many others. The \textit{TESS}\ Asteroseismic Science Operations Center (TASOC) is engaged in much more detailed and individualized analysis of certain types of stellar variability \citep{Audenaert21}. Additionally, both supervised and unsupervised machine learning classification tools have been deployed to automatically classify variable stars \citep[e.g.,][]{Blomme10, Audenaert21, Barbara22, Claytor22}. We hope that the more limited variability characteristics and broad quantitative parameters we have presented here, along with the open description of our selection and threshold procedures, can be useful for such projects going forward.
\section{Summary} \label{sec:summary}
We used \textit{TESS}\ Prime Mission 2-min cadence photometry to search for stellar variability in $\sim$200,000\,stars. Using a LS periodogram and ACF, we searched for photometric periodic variability on timescales up to 13\,days in the light curves observed from each \textit{TESS}\ sector. In Tables~\ref{tab:1peak}--\ref{tab:ACF}, we present our variability catalog, which includes $\sim$40,000 stars that are considered astrophysically variable to high confidence. In \autoref{sec:stats}, we identify several trends of variability on the HRD and period--luminosity diagram. We find that the photometric variability period and amplitude of variability can be used as significant metrics for classifying different types of variability across the HRD. Furthermore, the distribution of stars in period--luminosity space reveals important information about different astrophysical mechanisms driving stellar variability. After careful consideration of variations caused by \textit{TESS}\ spacecraft systematics, we also identify a dearth of dwarf star variability around 1.5--2\,days (see \autoref{fig:Pvar_hist}) where the astrophysical mechanism is not currently well-understood.
Overall, this work presents a broad overview of stellar variability observed across nearly the entire sky, but presents many opportunities for future investigations into stellar astrophysics and how stellar variability may affect exoplanetary systems. Stellar variability has already been shown to be a source for false positive exoplanet detections \citep[e.g.,][]{Henry02, Robertson14, Robertson14-1, Robertson15, Kane16, Hojjatpanah20, Prajwal22, Simpson22}, but avoiding active stars in exoplanet studies has also led to observational biases against planets around stars of certain spectral types and stages of evolution that tend to be particularly active (Simpson et al., in prep.). Since all stars are variable at some point in their lifetime, understanding how stellar activity affects the measurements of exoplanet properties is a critical step towards understanding how star-planet systems---and especially planetary atmospheres---evolve over time.
\begin{acknowledgements}
The authors thank Guillermo Torres for their helpful conversations and insight into the interpretations of this work. The authors acknowledge support from NASA grants 80NSSC18K0544 and 80NSSC18K0445, funded through the Exoplanet Research Program (XRP), and NASA grant 80NSSC20K0447 funded through the ADAP program. T.F. acknowledges support from the University of California President's Postdoctoral Fellowship Program. D.H. acknowledges support from the Alfred P. Sloan Foundation and the National Aeronautics and Space Administration (80NSSC21K0652). We acknowledge the use of public \textit{TESS}\ data from pipelines at the \textit{TESS}\ Science Office and at the \textit{TESS}\ Science Processing Operations Center. Resources supporting this work were provided by the NASA High-End Computing (HEC) Program through the NASA Advanced Supercomputing (NAS) Division at Ames Research Center for the production of the SPOC data products. All of the data presented in this paper were obtained from the Mikulski Archive for Space Telescopes (MAST). STScI is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS5-26555. Support for MAST for non-HST data is provided by the NASA Office of Space Science via grant NNX13AC07G and by other grants and contracts. This paper includes data collected with the \textit{TESS}\ mission, obtained from the MAST data archive at the Space Telescope Science Institute (STScI). Funding for the \textit{TESS}\ mission is provided by the NASA Explorer Program. This research made use of Lightkurve, a Python package for \textit{Kepler}\ and \textit{TESS}\ data analysis \citep{Lightkurve_Collaboration18}.
\end{acknowledgements}
\facilities{TESS}
\software{Astropy \citep{Astropy_Collaboration13, Astropy_Collaboration18},
Astroquery \citep{Ginsburg19},
Lightkurve \citep{Lightkurve_Collaboration18},
Matplotlib \citep{Hunter07},
NumPy \citep{Harris20},
SciPy \citep{Virtanen20}
}
\section{Introduction}\label{introd}
The weak Galerkin finite element method on triangulated meshes was proposed by J. Wang et al.; see \cite{JW13}. Since then, the method has found applications in multiple areas; see \cite{Lin13, Lin14, Guan18}. A weak Galerkin mixed finite element method for second order elliptic problems was successfully applied to polytopal meshes in \cite{Wang14}. The method was further
developed in \cite{Lin15}, where it was no longer a mixed method and provided an elegant and reliable way to solve second order elliptic problems. However, in \cite{Wang14} and \cite{Lin15}, the shape regularity assumptions require the length of each edge or the area of each face to be proportional to the diameter of the polygonal or polyhedral element of the partition.
To gain more flexibility in generating polytopal meshes, in this paper we present new shape regularity assumptions and additional analysis that extend the weak Galerkin finite element method to polytopal meshes with arbitrarily small edges or faces.
We define the $L_2$ norm as $\|\cdot\|_{L_2(\Omega)},$ the inner product as $(\cdot,\cdot)_{\Omega}$, and the vector-valued space $H({\rm div};\Omega)$ as
$$
H({\rm div};\Omega) = \left\{ {\vec v } : {\vec v } \in [L_2(\Omega)]^n, \nabla\cdot {\vec v } \in L_2(\Omega)\right\}.
$$
The crucial part of the weak Galerkin finite element method for second order problems is the definition of a weak gradient operator and its approximation. Suppose we have a polygonal or polyhedral domain $D \subset \mathbb{R}^n, (n=2,3),$ with interior $D_0$ and boundary $\partial D$. A ``weak function'' on $D$ is a pair $v=(v_0,v_b)$ with $v_0\in L_2(D_0)$ and $v_b\in L_2(\partial D)$; here $v_b$ is not necessarily the trace of $v_0$ on $\partial D.$ We then denote the space $W(D)$ as
\begin{equation}\label{wd}
W(D) := \left\{v = (v_0,v_b): v_0\in L_2(D_0), v_b\in L_2(\partial D)\right\}.
\end{equation}
For any $v\in W(D)$, the weak gradient of $v$ is defined as a linear functional $\nabla_{w}v$
in the dual space of $H({\rm div};D)$ whose action on $ {\vec q } \in H({\rm div};D)$ is
\begin{equation}\label{wg1}
(\nabla_{w}v, {\vec q } \ )_D := -\int_D v_0\nabla\cdot {\vec q } \ {\rm d}x
+
\int_{\partial D} v_b {\vec q } \cdot {\vec n } \ {\rm d}S,
\end{equation}
where $ {\vec n } $ is the outward normal direction to $\partial D.$ By the trace theorem, the definition of $\nabla_{w} v$ is well-posed, and $\nabla_{w} v = \nabla v$ if $v\in H^1(D).$
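Indeed, if $v\in H^1(D)$ with $v_0 = v$ on $D_0$ and $v_b = v|_{\partial D}$ its trace, then integration by parts gives, for any $ {\vec q } \in H({\rm div};D)$,
$$
-\int_D v\,\nabla\cdot {\vec q } \ {\rm d}x
+
\int_{\partial D} v\, {\vec q } \cdot {\vec n } \ {\rm d}S
=
\int_D \nabla v\cdot {\vec q } \ {\rm d}x,
$$
so the right-hand side of \eqref{wg1} equals $(\nabla v, {\vec q } \ )_D$.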
The discrete weak gradient operator is defined with a polynomial subspace of $H({\rm div};D)$.
For any integer $k\geq 0,$ let $\mathbb{P}_k(D)$ be the space of polynomials on $D$ with degree no more than $k$, and let $[\mathbb{P}_k(D)]^n$ be the corresponding vector-valued space. Then the discrete weak gradient $\nabla_{w,k,D} v$ of $v\in W(D)$ is defined as the solution of the following equation
\begin{equation}\label{wg2}
(\nabla_{w,k,D}v, {\vec q } _k\ )_D = -\int_D v_0\nabla\cdot {\vec q } _k\ {\rm d}x
+
\int_{\partial D} v_b {\vec q } _k\cdot {\vec n } \ {\rm d}S,\quad \forall {\vec q } _k\in [\mathbb{P}_k(D)]^n,
\end{equation}
where $\nabla_{w,k,D} v \in [\mathbb{P}_k(D)]^n$; this definition is also well-posed.
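As a simple illustration, for $k=0$ the test space $[\mathbb{P}_0(D)]^n$ consists of constant vectors, so $\nabla\cdot {\vec q } _0 = 0$ and \eqref{wg2} reduces to
$$
\nabla_{w,0,D}\, v = \frac{1}{|D|}\int_{\partial D} v_b\, {\vec n } \ {\rm d}S,
$$
where $|D|$ denotes the area or volume of $D$; the lowest-order discrete weak gradient thus depends only on the boundary component $v_b$.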
The paper is organized as follows. With the techniques from \cite{Lin15, Brenner17, Guan184, Brenner17-2}, several useful lemmas are proved in Sections 2 to 4. In Section 2, the new shape regularity assumptions are given and the $L_2$ projection operators are defined. In Section 3, we define the weak norm and the discrete weak Galerkin finite element space, and obtain the error estimates for the $L_2$ projection operators. In Section 4, the weak Galerkin finite element method is applied to Poisson's equation, and the $H^1$ and $L_2$ error estimates are proved to be optimal. In Section 5, we draw some conclusions.
\section{Shape Regularity}
\begin{figure}[H]
\begin{center}
\begin{tikzpicture}[scale = 1.6]
\coordinate (A) at (0.2,0.2);
\coordinate (B) at (3,0);
\coordinate (C) at (2.2,1.5);
\coordinate (D) at (2,3);
\coordinate (E) at (-0.2,1.5);
\coordinate (F) at (0,1.2);
\coordinate (O) at ($1/6*(A)+1/6*(B)+1/6*(C)+1/6*(D)+1/6*(E)+1/6*(F)$);
\draw (A)
--(B)
--(C)
--(D)
--(E)
--(F);
\draw (F)--(A);
\draw[style=dashed](O) circle (0.7);
\fill [black] (O) circle (1pt);
\draw[style=dashed](O)--(A);
\draw[style=dashed](O)--(B);
\draw[style=dashed](O)--(C);
\draw[style=dashed](O)--(D);
\draw[style=dashed](O)--(E);
\draw[style=dashed](O)--(F);
\end{tikzpicture}
\end{center}\caption{A star shaped sub-domain $D$}\label{fg1}
\end{figure}
Let $\mathcal{T}_h$ be a partition of the domain $\Omega$ consisting of polygons in two dimensional space or polyhedrons
in three dimensional space. Denote by $\mathcal{E}_h$ the set of all edges or flat faces in $\mathcal{T}_h$, and let $\mathcal{E}_h^0 = \mathcal{E}_h\setminus \partial \Omega$ be
the set of all interior edges or flat faces. For every element $D \in \mathcal{T}_h$ , we denote by
$|D|$ the area or volume of $D$ and by $h_D$ its diameter. Similarly, we denote by $|e|$ the
length or area of $e$ and by $h_e$ the diameter of the edge or flat face $e\in \mathcal{E}_h$. We also set
as usual the mesh size of $\mathcal{T}_h$ by
$$h = \max\limits_{D \in \mathcal{T}_h} h_D.$$
All the elements of $\mathcal{T}_h$ are assumed to be closed and simply connected polygons
or polyhedrons; see Figure \ref{fg1} for an example in two dimensional space. We need some shape regularity assumptions for the partition $\mathcal{T}_h$
described as below.
Here the shape regularity assumptions are the same as in \cite{Brenner17-2}. Let ${D}$ be a polygon or polyhedron with diameter $h_D$. Assume that
\begin{equation}\label{assume1}
{D}\ {\rm is\ star\ shaped\ with\ respect\ to\ a\ disc/ball\ } \mathfrak{B}_D\subset D {\rm \ with\ radius\ =\ } \rho_D h_D,\ 0<\rho_D<1.
\end{equation}
Then we denote by $\tilde{\mathfrak{B}}_D$ the disc/ball concentric with $\mathfrak{B}_D$ whose radius is $h_D$. It is clear that
\begin{equation}\label{assume2}
\mathfrak{B}_D\subset D\subset \tilde{\mathfrak{B}}_D.
\end{equation}
We will use the notation $A \apprle B$ to represent the inequality $A \leq C\,B$, where $C$ is a generic positive constant.
The notation $A \approx B$ is equivalent to $A \apprle B$ and $A \apprge B$.
Figure \ref{fg1} is an example of a domain $D$ satisfying the shape regularity assumptions. Based on the shape regularity assumptions \eqref{assume1} and \eqref{assume2}, we have the following lemmas, in which the hidden constants depend only on $\rho_D$ unless otherwise stated.
\begin{lemma}\label{bramble} (Bramble--Hilbert estimates \cite{Bramble70}.) Conditions \eqref{assume1}--\eqref{assume2} imply that we have the following estimates:
\begin{equation}
\inf\limits_{q\in\mathbb{P}_l} |\xi - q|_{H^m(D)} \apprle h^{l+1-m}
|\xi|_{ H^{l+1}(D)}, \
\forall \xi\in H^{l+1}(D),\ l = 0,\cdots, k,\ {\rm and}\ 0\leq m \leq l.
\end{equation}
\end{lemma}
Details can be found in \cite{Brenner07}, Lemma 4.3.8.
\subsection{A Lipschitz Isomorphism between $D$ and $\mathfrak{B}_D$}
With the star-shaped assumption \eqref{assume1}, there exists a Lipschitz isomorphism
$\Phi: \mathfrak{B}_D\rightarrow D$ such that both $|\Phi|_{W^{1,\infty}(\mathfrak{B}_D)}$ and $|\Phi^{-1}|_{W^{1,\infty}(D)}$ are bounded by a constant that depends only on $\rho_D$ (see \cite{V11}, Section 1.1.8).
It then follows that
\begin{equation}\label{Dh}
|D|\approx h_D^n {\ \rm and \ } |\partial D|\approx h_D^{n-1},\ n=2,3,
\end{equation}
where $|D|$ is the area of $D$ ($n=2$) or the volume of $D$ ($n=3$), and $|\partial D|$ is the arclength of $\partial D$ ($n=2$) or the surface area of $\partial D$ ($n=3$). Moreover, from Theorem 4.1 in \cite{J87}, we have
\begin{eqnarray}
\|\xi\|_{L_2(\partial D)} &\approx& \|\xi\circ\Phi\|_{L_2(\partial \mathfrak{B}_D)},\quad \forall \xi\in L_2(\partial D), \label{iso1} \\
\|\xi\|_{L_2( D)} &\approx& \|\xi\circ\Phi\|_{L_2( \mathfrak{B}_D)},\quad \forall \xi\in L_2( D), \label{iso2} \\
|\xi|_{H^1(D)} &\approx& |\xi\circ\Phi|_{H^1( \mathfrak{B}_D)},\quad \forall \xi\in H^1(D). \label{iso3}
\end{eqnarray}
As in \cite{Brenner17-2}, from \eqref{Dh}, \eqref{iso1}--\eqref{iso3} and the standard (scaled) trace inequalities for $H^1(\mathfrak{B}_D)$, we have
\begin{lemma}\label{trace}
(Trace inequality; see (2.18) in \cite{Brenner17-2}.)
Let $\mathcal{T}_h$ be a partition of the domain $\Omega$ into polygons ($n=2$) or polyhedrons ($n=3$). Assume that $D\in \mathcal{T}_h$ satisfies the assumptions \eqref{assume1} and \eqref{assume2} as specified above. Then we have
$$
h_D^{-1}\|\xi\|_{L_2(\partial D)}^2 \apprle h_D^{-2}\|\xi\|_{L_2(D)}^2 + |\xi |_{H^1(D)}^2 ,
$$
for any $\xi \in H^1(D ).$
\end{lemma}
\subsection{$L_2$ Projection Operators}
For each element $D \in \mathcal{T}_h$, denote by $Q_{k,D}^0$ the $L_2$ projection from $L_2 (D )$ onto $\mathbb{P}_k (D ).$
Analogously, for each edge or flat face $e \in \mathcal{E}_h$, let $Q_{k,D}^b$ be the $L_2$ projection operator
from $L_2 (e)$ onto $\mathbb{P}_k (e)$.
We define a projection operator $Q_h$ as follows
\begin{equation}\label{l2proj}
Q_h v|_D := (Q_{k,D}^0 v_0 , Q_{k,D}^b v_b),\
\forall v \in W(D).
\end{equation}
Denote by $\mathbb{Q}_{k-1,D}$ the $L_2$ projection from $[L_2(D)]^n$ onto the local discrete
gradient space $[\mathbb{P}_{k-1} (D)]^n$.
With these definitions, we also have the following lemmas.
\begin{lemma}\label{l_discrete}
For any $ {\vec q } \in [\mathbb{P}_k(D)]^n$, we have
$$
h_D\| {\vec q } \|^2_{L_{2}(\partial D)}+h_D^{2}\|\nabla\cdot {\vec q } \|^2_{L_2(D)}
\apprle
\| {\vec q } \|^2_{L_2(D)},
$$
the hidden constant only depends on $\rho_D$ and $k$.
\end{lemma}
\begin{proof}
Suppose $n=2$, and $ {\vec q } = (q_1,q_2)$, then by Lemma 2.3 in \cite{Brenner17-2}, we have
$$
h_D\|q_i\|^2_{L_{2}(\partial D)}+h_D^{2} | q_i|^2_{H^1(D)}
\apprle
\|q_i\|^2_{L_2(D)},\ i = 1,2,
$$
so that
$$
h_D\| {\vec q } \|^2_{L_{2}(\partial D)}+h_D^{2}\|\nabla\cdot {\vec q } \|^2_{L_2(D)}
\apprle
\| {\vec q } \|^2_{L_2(D)}.
$$
For $n=3,$ the proof is similar.
\end{proof}
\begin{lemma}\label{l4}
Lemma 3.9 \cite{Brenner17-2}. Assume that $D$ satisfies all the assumptions
as specified above. Then, we have
$$
|Q_{k,D}^0 \xi|_{H^1(D)} \apprle | \xi |_{H^1(D)},\ \forall \xi\in H^1(D),
$$
the hidden constant only depends on $\rho_D$ and $k$.
\end{lemma}
\begin{lemma}\label{leq}
Lemma 5.1 in \cite{Lin15}. Let $Q_h$ be the projection operator defined as in \eqref{l2proj}. Then, on each
element $D \in \mathcal{T}_h$, we have
$$\nabla_{w,k-1,D}(Q_h \xi) = \mathbb{Q}_{k-1,D} (\nabla\xi),\
\forall \xi\in H^1(D).$$
\end{lemma}
The following lemma provides some estimates for the projection operators $Q_h$ and
$\mathbb{Q}_{k-1,D}.$
\begin{lemma}\label{ler}
Let $D$ satisfy the shape regularity
assumptions as given above. Then for $\xi\in H^{k+1}(D)$, we have
\begin{equation}\label{l2Q1}
\|\xi-Q_{k,D}^0 \xi\|_{L_2(D)}^2+
h_D^2 |\xi-Q_{k,D}^0 \xi |_{H^1(D)}^2
\apprle h^{2(k+1)}\|\xi\|_{H^{k+1}(D)}^2,
\end{equation}
\begin{equation}\label{l2Q2}
\|
\nabla\xi - \mathbb{Q}_{k-1,D}\nabla\xi
\|_{L_2(D)}^2
+
h_D^2|\nabla\xi - \mathbb{Q}_{k-1,D}\nabla\xi|_{H^1(D)}^2
\apprle h^{2k}\|\xi\|_{H^{k+1}(D)}^2,
\end{equation}
the hidden constant only depends on $\rho_D$ and $k$.
\end{lemma}
\begin{proof}
For the $L_2$ projection $Q_{k,D}^0$ in (\ref{l2Q1}), with Lemma \ref{bramble}, we have
$$
\|\xi-Q_{k,D}^0 \xi\|_{L_2(D)}^2
\apprle h^{2(k+1)}\|\xi\|_{H^{k+1}(D)}^2.
$$
Let $p$ be any polynomial of degree at most $k$. With Lemma \ref{bramble} and Lemma \ref{l4}, we have
\begin{eqnarray*}
| \xi-Q_{k,D}^0 \xi |_{H^1(D)}
&\leq&
| \xi- p |_{H^1(D)}+ | p-Q_{k,D}^0\xi |_{H^1(D)}\\
&\leq&
| \xi- p |_{H^1(D)}+ | Q_{k,D}^0 (p-\xi) |_{H^1(D)}\\
&\apprle&
| \xi- p |_{H^1(D)}\\
&\apprle&
h^{k}\|\xi\|_{H^{k+1}(D)}.
\end{eqnarray*}
For the $L_2$ projection $\mathbb{Q}_{k-1,D}$ in \eqref{l2Q2}, write $\nabla\xi = (\xi_x,\xi_y)$ for $n=2$; with Lemma \ref{bramble}, we have
\begin{eqnarray*}
\|
\nabla\xi - \mathbb{Q}_{k-1,D}\nabla\xi
\|_{L_2(D)}
&\apprle&
\|
\xi_x - Q_{k-1,D}^0\xi_x
\|_{L_2(D)}+\|
\xi_y - Q_{k-1,D}^0\xi_y
\|_{L_2(D)}
\\
&\apprle& h^{k}\|\xi\|_{H^{k+1}(D)}.
\end{eqnarray*}
Then we consider the second term in \eqref{l2Q2} with Lemma \ref{bramble} and Lemma \ref{l4},
\begin{eqnarray*}
|\nabla\xi - \mathbb{Q}_{k-1,D}\nabla\xi|_{H^1(D)}^2
&=&
|\xi_x - Q_{k-1,D}^0\xi_x|_{H^1(D)}^2
+
|\xi_y - Q_{k-1,D}^0\xi_y|_{H^1(D)}^2\\
&\apprle&
h^{2(k-1)}\|\xi\|_{H^{k+1}(D)}^2.
\end{eqnarray*}
The case $n=3$ is similar, so \eqref{l2Q1} and \eqref{l2Q2} are proved.
\end{proof}
\section{The Weak Galerkin Finite Element Scheme}
Suppose
$\Omega$ is a bounded convex polygonal or polyhedral domain in $\mathbb{R}^n, (n=2,3)$,
$\mathcal{T}_h$ is a shape regular partition of $\Omega$. On each $D\in \mathcal{T}_h,$ we have $W(D)$ defined in \eqref{wd}. Then let $W$ be the weak function space on $\mathcal{T}_h$ given by
$$
W := \prod_{D\in \mathcal{T}_h} W(D).
$$
As in Section 4.2 of \cite{Lin15}, we denote by $V$ the subspace of $W$ consisting of weak functions that are single-valued on interior edges or faces: for each interior edge or face $e \in \mathcal{E}_h^0$, there are elements $D_1$ and $D_2$ with $e\subset \partial D_1\cap \partial D_2$, and any $v\in V$ with restrictions $v_i\in W(D_i), i=1,2,$ satisfies
$$
v_1|_e = v_2|_e.
$$
Then the weak norm of $v\in V$ is defined as
\begin{equation}\label{w1n}
| v |^2_{k-1,w} = \sum\limits_{D\in\mathcal{T}_h}
\int_{D}\nabla_{w,k-1,D} v\cdot\nabla_{w,k-1,D} v\ {\rm d} x
+
h_D^{-1}
\langle
v_0-v_b, v_0-v_b
\rangle_{\partial D},
\end{equation}
where $k\geq 1$ is an integer.
Let $\mathbb{P}_k(D_0)$ be the set of polynomials on $D_0$ with degree no more than $k$, and $\mathbb{P}_k(e)$ the set of polynomials with degree no more than $k$ on each edge or face $e\in \mathcal{E}_h$. Then the weak finite element space is given by
\begin{equation}\label{Sjl}
V_h:= \{v: v|_{D_0}\in \mathbb{P}_k(D_0)\ \forall D\in \mathcal{T}_h \ {\rm and}\ v|_{e}\in \mathbb{P}_k(e) \ \forall e\in \mathcal{E}_h \}.
\end{equation}
Denote by $V^0_h$ the subspace of $V_h$ with vanishing boundary values on $\partial\Omega$:
\begin{equation}\label{Sjl0}
V^0_h := \{v: v\in V_h \ {\rm and}\ v|_{\partial\Omega} =0 \}.
\end{equation}
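As an illustration of \eqref{Sjl}, take $n=2$ and $k=1$: on each polygon $D$, a function $v\in V_h$ consists of a linear polynomial $v_0$ on $D_0$ together with a linear polynomial $v_b$ on each edge of $\partial D$, chosen independently of $v_0$. For a polygon with $m$ edges this gives
$$
\dim \mathbb{P}_1(D_0) + \sum_{e\subset\partial D}\dim \mathbb{P}_1(e) = 3 + 2m
$$
local degrees of freedom, where the unknowns on an interior edge are shared by the two adjacent elements.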
\begin{lemma}\label{lem_h1}
Assume that $\mathcal{T}_h$ is shape regular. Then for any $w \in H^{k+1} (\Omega)$ and
$v = (v_0, v_b) \in V_h$, we have
\begin{equation}\label{e-trace1}
\left|
\sum\limits_{D\in \mathcal{T}_h} h_D^{-1}
\langle
Q_{k,D}^0 w-Q_{k,D}^b w ,
v_0-v_b
\rangle_{\partial D}
\right|\apprle h^k\|w\|_{H^{k+1}(\Omega)}|v|_{k-1,w},
\end{equation}
\begin{equation}\label{e-trace2}
\left|
\sum\limits_{D\in \mathcal{T}_h}
\langle
(\nabla w - \mathbb{Q}_{k-1,D} \nabla w)\cdot {\vec n } ,
v_0-v_b
\rangle_{\partial D}
\right|\apprle h^k\|w\|_{H^{k+1}(\Omega)}|v|_{k-1,w},
\end{equation}
where $k\geq 1$ and the hidden constant only depends on $\rho_D$ and $k$.
\end{lemma}
\begin{proof}
The proof is similar to that of Lemma 5.3 in \cite{Lin15}. For completeness, we give it here.
To get \eqref{e-trace1}, we have
\begin{eqnarray*}
\left|
\sum\limits_{D\in \mathcal{T}_h} h_D^{-1}
\langle
Q_{k,D}^0 w-Q_{k,D}^b w ,
v_0-v_b
\rangle_{\partial D}
\right|
&=&
\left|
\sum\limits_{D\in \mathcal{T}_h} h_D^{-1}
\langle
Q_{k,D}^0 w- w ,
v_0-v_b
\rangle_{\partial D}
\right|
\\
&\apprle&
\left(
\sum\limits_{D\in \mathcal{T}_h} h_D^{-1}
\| Q_{k,D}^0 w- w \|_{L_2(\partial D)}^2
\right)^{1/2}
\left(
\sum\limits_{D\in \mathcal{T}_h} h_D^{-1}
\| v_0-v_b \|_{L_2(\partial D)}^2
\right)^{1/2},
\end{eqnarray*}
with Lemma \ref{trace} and Lemma \ref{ler}, \eqref{e-trace1} is obtained.
To get \eqref{e-trace2}, we have
\begin{eqnarray*}
\left|
\sum\limits_{D\in \mathcal{T}_h}
\langle
(\nabla w - \mathbb{Q}_{k-1,D} \nabla w)\cdot {\vec n } ,
v_0-v_b
\rangle_{\partial D}
\right|
\apprle
\left(
\sum\limits_{D\in \mathcal{T}_h} h_D
\|\nabla w - \mathbb{Q}_{k-1,D} \nabla w\|_{L_2(\partial D)}^2
\right)^{1/2}
\left(
\sum\limits_{D\in \mathcal{T}_h} h_D^{-1}
\| v_0-v_b \|_{L_2(\partial D)}^2
\right)^{1/2},
\end{eqnarray*}
with Lemma \ref{trace} and Lemma \ref{ler}, \eqref{e-trace2} is obtained.
\end{proof}
\section{The Weak Galerkin Finite Element Method for Poisson's Equation}
Let $\Omega$ be a bounded convex polygonal or polyhedral domain in $\mathbb{R}^n, (n=2,3),$ and let $f\in L_2(\Omega)$. Poisson's equation is
\begin{equation}\label{poisson-equation}
\begin{cases}
\ -\Delta u &= f, \\
\ u|_{\partial \Omega} &= 0.
\end{cases}
\end{equation}
For any $v\in V_h,$ the weak gradient of $v$ is defined on each element $D$ by \eqref{wg2}.
For any $u,v\in V_h$, the bilinear form is defined as
\begin{equation}\label{bhr}
a_{h}(u,v) = \sum\limits_{D\in\mathcal{T}_h}\int_{D}\nabla_{w,k-1,D} u\cdot\nabla_{w,k-1,D} v\ {\rm d} x.
\end{equation}
The stabilization term is:
\begin{equation}\label{stable}
s_{h}(u,v) = \sum\limits_{D\in\mathcal{T}_h}
h_D^{-1}
\langle
u_0-u_b, v_0-v_b
\rangle_{\partial D}.
\end{equation}
A numerical solution for \eqref{poisson-equation} can be obtained by seeking $u_h =(u_0,u_b)\in V^0_h$ such that
\begin{equation}\label{num-poisson}
a_s(u_h,v_h) = a_h(u_h,v_h)+s_h(u_h,v_h) = (f,v_0)_{\Omega}, \ \forall v_h=(v_0,v_b)\in V_h^0.
\end{equation}
As in Lemma 7.1 in \cite{Lin15}, we have the discrete Poincar\'{e} inequality:
\begin{lemma}\label{poincare}
Suppose the partition $\mathcal{T}_h$ is shape regular. Then we have
$$
\|v_0\|_{L_2(\Omega)} \apprle |v|_{k-1,w}, \ \forall v=(v_0,v_b)\in V_h^0,
$$
the hidden constant only depends on $\rho_D$ and $k$.
\end{lemma}
Also, the existence and uniqueness of the solution of \eqref{num-poisson} follow from the same proof as Lemma 7.2 in \cite{Lin15}.
For the error analysis, we need another lemma.
\begin{lemma}\label{K-bound}
Assume that $D$ satisfies all the assumptions as specified above. Then, we have
\begin{equation}
\|\nabla v_0\|^2_{L_2(D)}
\apprle
\|\nabla_{w,k-1,D} v\|^2_{L_2(D)}+
h_D^{-1}\|v_b-v_0\|^2_{L_2(\partial D)},\ \forall v\in V_h|_D,
\end{equation}
the hidden constant only depends on $\rho_D$ and $k$.
\end{lemma}
\begin{proof}
Suppose on $D$ we have $v=(v_0,v_b)$ and $ {\vec q } \in [\mathbb{P}_{k-1}(D)]^n$; then by the definition of $\nabla_{w,k-1,D}$, we have
\begin{eqnarray*}
(\nabla_{w,k-1,D} v, {\vec q } )_D
&=&
-(v_0,\nabla\cdot {\vec q } )_D +\langle v_b, {\vec q } \cdot {\vec n } \rangle_{\partial D}\\
&=&
(\nabla v_0, {\vec q } )_D +\langle v_b-v_0, {\vec q } \cdot {\vec n } \rangle_{\partial D}
\end{eqnarray*}
so that
$$
(\nabla_{w,k-1,D} v-\nabla v_0, {\vec q } )_D = \langle v_b-v_0, {\vec q } \cdot {\vec n } \rangle_{\partial D}.
$$
Let $ {\vec q } = \nabla_{w,k-1,D} v-\nabla v_0$; then
$$
(\nabla_{w,k-1,D} v-\nabla v_0,\nabla_{w,k-1,D} v-\nabla v_0 )_D = \langle v_b-v_0,(\nabla_{w,k-1,D} v-\nabla v_0)\cdot {\vec n } \rangle_{\partial D}.
$$
By the discrete inequalities of polynomials, Lemma \ref{l_discrete}, we have
$$
\|\nabla_{w,k-1,D} v-\nabla v_0\|_{L_2(D)}\apprle
h_D^{-1/2}\|v_b-v_0\|_{L_2(\partial D)},
$$
so that
\begin{equation*}
\|\nabla v_0\|^2_{L_2(D)}
\apprle
\|\nabla_{w,k-1,D} v\|^2_{L_2(D)}+
h_D^{-1}\|v_b-v_0\|^2_{L_2(\partial D)}.
\end{equation*}
\end{proof}
\subsection{Error Analysis}
Let $u$ be the solution of \eqref{poisson-equation} and $v\in V_h^0$. It follows from the definition of weak gradient \eqref{wg2}, and the integration by parts that
\begin{eqnarray}
(\nabla_{w,k-1,D} Q_hu,\nabla_{w,k-1,D}v)_D
=
(\nabla u,\nabla v_0)_D
-
\langle v_0-v_b,(\mathbb{Q}_{k-1,D}\nabla u)\cdot {\vec n } \rangle_{\partial D}. \label{wI_h}
\end{eqnarray}
Then, multiplying \eqref{poisson-equation} by $v_0$ for $v=(v_0,v_b)\in V_h^0$, we have
\begin{equation}\label{wu}
\sum\limits_{D\in\mathcal{T}_h} (\nabla u,\nabla v_0)_D
=(f,v_0)+\sum\limits_{D\in\mathcal{T}_h}\langle v_0-v_b , \nabla u\cdot {\vec n } \rangle_{\partial D}.
\end{equation}
Combining \eqref{wI_h} and \eqref{wu}, we have
\begin{equation}\label{w-error1}
\sum\limits_{D\in\mathcal{T}_h}
(\nabla_{w,k-1,D} Q_hu,\nabla_{w,k-1,D}v)_D
=
(f,v_0)_{\Omega}
+
\sum\limits_{D\in\mathcal{T}_h}
\langle
v_0-v_b ,
(\nabla u - \mathbb{Q}_{k-1,D} \nabla u)\cdot {\vec n }
\rangle_{\partial D}.
\end{equation}
Adding $s_h({Q}_h u, v)$ to both sides of \eqref{w-error1} gives
\begin{equation}\label{w-error2}
a_s(Q_h u,v)
=
(f,v_0)_{\Omega}
+
\sum\limits_{D\in\mathcal{T}_h}
\langle
v_0-v_b ,
(\nabla u - \mathbb{Q}_{k-1,D} \nabla u)\cdot {\vec n }
\rangle_{\partial D} +s_h(Q_hu,v).
\end{equation}
Subtracting \eqref{num-poisson} from \eqref{w-error2}, we have the error equation
\begin{equation}\label{w-error3}
a_s(e_h,v)
=
\sum\limits_{D\in\mathcal{T}_h}
\langle
v_0-v_b ,
(\nabla u - \mathbb{Q}_{k-1,D} \nabla u)\cdot {\vec n }
\rangle_{\partial D} +s_h(Q_hu,v),
\end{equation}
where
$$
e_h|_D = (e_0,e_b)|_D:= (Q_{k,D}^0 u -u_0, Q_{k,D}^b u -u_b)|_D= (Q_hu-u_h)|_{D}
$$
which is the error between the weak Galerkin finite element solution and the $L_2$ projection of the exact solution.
Then we define a norm $\|\cdot\|_h$ as
$$
\| {\vec v } \|_h^2 := \sum\limits_{D\in\mathcal{T}_h} \| {\vec v } \|^2_{L_2(D)}, \ \forall {\vec v } \in [L_2(\Omega)]^n,
$$
and
$$
(\nabla_{w,k-1} u_h)|_D = \nabla_{w,k-1,D} (u_h|_D);\quad (\mathbb{Q}_{k-1}(\nabla u))|_D = \mathbb{Q}_{k-1,D} (\nabla u|_D).
$$
\begin{theorem}\label{th1}
Let $u_h \in V_h$ be the weak Galerkin finite element solution of the
problem \eqref{poisson-equation}. Assume that the exact solution is regular enough that
$u \in H^{k+1} (\Omega)$. Then we have
\begin{eqnarray}
\|\nabla u - \nabla_{w,k-1} u_h\|_h
&\apprle& h^k\|u\|_{H^{k+1}(\Omega)}, \label{we1} \\
\|\nabla u - \nabla u_0\|_h
&\apprle&
h^k\|u\|_{H^{k+1}(\Omega)}, \label{we2}
\end{eqnarray}
the hidden constant only depends on $\rho_D$ and $k$.
\end{theorem}
\begin{proof}
Let $v = e_h$ in \eqref{w-error3}; we have
\begin{equation}
|e_h|_{k-1,w}^2
=
\sum\limits_{D\in\mathcal{T}_h}
\langle
e_0-e_b ,
(\nabla u - \mathbb{Q}_{k-1,D} \nabla u)\cdot {\vec n }
\rangle_{\partial D} +s_h(Q_hu,e_h).
\end{equation}
It then follows from Lemma \ref{lem_h1} that
\begin{equation}\label{error1}
|e_h|_{k-1,w}^2 \apprle h^k\|u\|_{H^{k+1}(\Omega)}|e_h|_{k-1,w}.
\end{equation}
Based on \eqref{error1}, firstly, we prove \eqref{we1},
\begin{eqnarray*}
\|\nabla u - \nabla_{w,k-1} u_h\|_h
&\leq&
\|\nabla u - \mathbb{Q}_{k-1}(\nabla u)\|_h
+\|\mathbb{Q}_{k-1}(\nabla u) - \nabla_{w,k-1} u_h\|_h,
\end{eqnarray*}
with Lemma \ref{leq} and Lemma \ref{ler}, we have
$$
\|\mathbb{Q}_{k-1}(\nabla u) - \nabla_{w,k-1} u_h\|_h
=
\|\nabla_{w,k-1} ( {Q}_{h} u - u_h) \|_h\leq |e_h|_{k-1,w}
$$
so that we have \eqref{we1}.
Secondly, with Lemma \ref{K-bound}, we have
\begin{eqnarray*}
\sum_{D\in\mathcal{T}_h}\|\nabla (Q^0_{k,D} u -u_h|_{D_0})\|^2_{L_2(D)}
&=&
\sum\limits_{D\in\mathcal{T}_h} \|\nabla e_0\|^2_{L_2(D)}\\
&\apprle&
\sum\limits_{D\in\mathcal{T}_h} \|\nabla_{w,k-1} e_h\|^2_{L_2(D)} +h_D^{-1}\|e_b-e_0\|^2_{L_2(\partial D)}\\
&\apprle&
|u_h-Q_h u|_{k-1,w}^2
\end{eqnarray*}
which means
$$
\sum_{D\in\mathcal{T}_h}\|\nabla (Q^0_{k,D} u -u_h|_{D_0})\|^2_{L_2(D)} \apprle h^{2k}\|u\|^2_{H^{k+1}(\Omega)}.
$$
Also by
$$
\sum_{D\in\mathcal{T}_h}\|\nabla (Q^0_{k,D} u -u)\|^2_{L_2(D)} \apprle h^{2k}\|u\|^2_{H^{k+1}(\Omega)},
$$
with Lemma \ref{ler}, we have \eqref{we2}
$$
\|\nabla u - \nabla u_0\|_h \apprle h^k\|u\|_{H^{k+1}(\Omega)}.
$$
\end{proof}
With Lemma \ref{poincare}, the estimate \eqref{l2e} below can be proved similarly to Theorem 8.2 in \cite{Lin15}. We also prove an estimate for the edge-based error
$$
\|u-u_h\|_{\mathcal{E}_h}^2 := \sum\limits_{e\in\mathcal{E}_h} h_e\int_e |u-u_b|^2 \,{\rm d} s,
$$
where $u$ is the exact solution and $u_h$ is the numerical solution of \eqref{poisson-equation}.
\begin{theorem}\label{t2}
Let $u_h \in V_h$ be the weak Galerkin finite element solution of the
problem \eqref{poisson-equation}. Assume that the exact solution
$u \in H^{k+1} (\Omega)$. Then we have
\begin{eqnarray}
\|u - u_0\|_{L_2(\Omega)}
&\apprle& h^{k+1}\|u\|_{H^{k+1}(\Omega)}, \label{l2e} \\
\|u - u_h\|_{\mathcal{E}_h}
&\apprle& h^{k+1}\|u\|_{H^{k+1}(\Omega)}, \label{l2eb}
\end{eqnarray}
the hidden constant only depends on $\rho_D$ and $k$.
\end{theorem}
\begin{proof}
Firstly, to prove \eqref{l2e}, we consider the dual problem of seeking $\phi\in H_0^1(\Omega)$ such that
$
-\Delta \phi = e_0.
$
By the convexity of $\Omega$, we have the regularity estimate $\|\phi\|_{H^2(\Omega)}\apprle \|e_0\|_{L_2(\Omega)}$.
Then we have
\begin{equation}\label{l2e1}
\|e_0\|_{L_2(\Omega)}^2 =
\sum\limits_{D\in\mathcal{T}_h}(\nabla\phi,\nabla e_0)_D
-
\sum\limits_{D\in\mathcal{T}_h}
\langle
\nabla\phi\cdot {\vec n } ,
e_0-e_b
\rangle_{\partial D}.
\end{equation}
Taking $u=\phi$ and $v=e_h$ in \eqref{wI_h}, we have
\begin{equation}\label{l2e2}
(\nabla_{w,k-1,D} Q_h \phi,\nabla_{w,k-1,D} e_h)_D
=
(\nabla \phi,\nabla e_0)_D
-
\langle e_0-e_b,(\mathbb{Q}_{k-1,D}\nabla \phi)\cdot {\vec n } \rangle_{\partial D}.
\end{equation}
Combining \eqref{l2e1} and \eqref{l2e2}, we have
\begin{equation}\label{l2e3}
\|e_0\|_{L_2(\Omega)}^2 =
(\nabla_{w,k-1} Q_h \phi,\nabla_{w,k-1} e_h)_\Omega
+
\sum\limits_{D\in\mathcal{T}_h}
\langle
(\mathbb{Q}_{k-1,D}\nabla \phi-\nabla\phi)\cdot {\vec n } ,
e_0-e_b
\rangle_{\partial D}.
\end{equation}
Then let $v = Q_h \phi$ in \eqref{w-error3}, so that
\begin{equation}\label{l2e4}
(\nabla_{w,k-1} Q_h \phi,\nabla_{w,k-1} e_h)_\Omega
=
\sum\limits_{D\in\mathcal{T}_h}
\langle
Q^0_{k,D}\phi- Q^b_{k,D}\phi ,
(\nabla u - \mathbb{Q}_{k-1,D} \nabla u)\cdot {\vec n }
\rangle_{\partial D} +s_h(Q_hu,Q_h \phi) -s_h(e_h,Q_h \phi).
\end{equation}
Plugging \eqref{l2e4} into \eqref{l2e3}, we get the following equation
\begin{eqnarray}\label{l2e5}
\|e_0\|_{L_2(\Omega)}^2
&=&
\sum\limits_{D\in\mathcal{T}_h}
\langle
Q^0_{k,D}\phi- Q^b_{k,D}\phi ,
(\nabla u - \mathbb{Q}_{k-1,D} \nabla u)\cdot {\vec n }
\rangle_{\partial D}\nonumber \\
&&+
\sum\limits_{D\in\mathcal{T}_h}
\langle
(\mathbb{Q}_{k-1,D}\nabla \phi-\nabla\phi)\cdot {\vec n } ,
e_0-e_b
\rangle_{\partial D}
+
s_h(Q_hu,Q_h \phi) -s_h(e_h,Q_h \phi).
\end{eqnarray}
By Lemma \ref{ler}, Lemma \ref{lem_h1}, Lemma \ref{poincare} and \eqref{error1}, we can estimate the right-hand side terms of \eqref{l2e5} in the same way as in \cite{Lin15}.
Then we have
\begin{eqnarray*}
\|Q_k^0u - u_0\|_{L_2(\Omega)}
\apprle h^{k+1}\|u\|_{H^{k+1}(\Omega)},
\end{eqnarray*}
with
$$
\|Q_k^0u - u\|_{L_2(\Omega)}
\apprle h^{k+1}\|u\|_{H^{k+1}(\Omega)},
$$
the error estimate \eqref{l2e} is obtained.
Next, to prove \eqref{l2eb}, we begin with the edge-based error
\begin{eqnarray*}
\|u-u_h\|_{\mathcal{E}_h}^2 &:=& \sum\limits_{e\in\mathcal{E}_h} h_e\int_e |u-u_b|^2 \rm{d} s\\
&\apprle&
\sum\limits_{D\in\mathcal{T}_h} h_D\int_{\partial D} |u-u_b|^2 \rm{d} s\\
&\apprle&
\sum\limits_{D\in\mathcal{T}_h} h_D
\left(
\langle u-u_0, u-u_0 \rangle_{\partial D}
+2\langle u-u_0, u_0-u_b \rangle_{\partial D}
+ \langle u_0-u_b, u_0-u_b \rangle_{\partial D}
\right)\\
&\apprle&
\sum\limits_{D\in\mathcal{T}_h} h_D
\left(
\|u-u_0 \|^2_{L_2(\partial D)}
+ \|u_0-u_b\|^2_{L_2(\partial D)}
\right).
\end{eqnarray*}
By Lemma \ref{trace}, \eqref{we2}, \eqref{l2e}, we have
\begin{eqnarray*}
h_D
\|u-u_0 \|^2_{L_2(\partial D)}
&\apprle&
\|u-u_0\|_{L_2(D)}^2 + h_D^2|u-u_0 |_{H^1(D)}^2,
\end{eqnarray*}
so that
$$
\sum\limits_{D\in\mathcal{T}_h} h_D
\|u-u_0 \|^2_{L_2(\partial D)} \apprle h^{2(k+1)}\|u\|^2_{H^{k+1}(\Omega)}.
$$
Then, by Lemma \ref{trace}, \eqref{l2Q1}, \eqref{w1n}, \eqref{error1}, we have
\begin{eqnarray*}
h_D
\|u_0-u_b\|^2_{L_2(\partial D)}
&\apprle&
h_D\|u_0-Q_{k,D}^0u -(u_b-Q_{k,D}^b u)\|^2_{L_2(\partial D)}
+
h_D\|Q_{k,D}^0u-Q_{k,D}^b u\|^2_{L_2(\partial D)} \\
&\apprle&
h_D\|e_0 -e_b\|^2_{L_2(\partial D)}
+
h_D\|Q_{k,D}^0u - u\|^2_{L_2(\partial D)},
\end{eqnarray*}
so that
$$
\sum\limits_{D\in\mathcal{T}_h} h_D
\|u_0-u_b\|^2_{L_2(\partial D)} \apprle h^{2(k+1)}\|u\|^2_{H^{k+1}(\Omega)}.
$$
And \eqref{l2eb} is proved.
\end{proof}
\section{Conclusions}
The shape regularity assumptions here differ from the assumptions in \cite{Wang14} and \cite{Lin15}. To obtain similar results, new lemmas (Lemma \ref{bramble} through Lemma \ref{lem_h1}) are required, which are valid under the new shape regularity assumptions. We also provide the element-based error estimate \eqref{we2} in Theorem \ref{th1} and the edge-based error estimate \eqref{l2eb} in Theorem \ref{t2}, which make it easier to compare the numerical solutions with the exact ones.
\section{Introduction}
Heegaard Floer homology, due to Ozsv{\'a}th--Szab{\'o} \cite{HFOrig}, is a powerful set of invariants for low-dimensional manifolds; it is of modern interest in topology, representation theory, and physics. Heegaard Floer homology encodes a symplectic approach to Seiberg--Witten theory, which is closely related to categorified Witten--Reshetikhin--Turaev invariants associated to the Lie superalgebra $\gl(1|1)$. Indeed, Heegaard Floer homology contains a homological knot invariant (knot Floer homology or $\HFK$, \cite{HFKOrig,RasmussenThesis}) that categorifies the quantum $\gl(1|1)$ knot invariant (the Alexander polynomial), analogously to how Khovanov homology categorifies the Jones polynomial although the definitions of $\HFK$ and Khovanov homology are quite different.
Ozsv{\'a}th--Szab{\'o}'s theory of bordered $\HFK$ \cite{OSzNew,OSzNewer,OSzHolo,OSzLinks}, based on the ideas of bordered Floer homology \cite{LOTBorderedOrig}, is a major advance in the efficient computation of $\HFK$ (see \cite{HFKCalc}) as well as its algebraic structure and relationship to $\gl(1|1)$ categorification \cite{ManionDecat,ManionKS,LaudaManion,Hypertoric1}. If $V$ is the vector representation of $U_q(\gl(1|1))$, Ozsv{\'a}th--Szab{\'o} define algebras categorifying arbitrary mixed tensor products of $V$ and its dual $V^*$. They also define bimodules categorifying the intertwining maps associated to arbitrary tangles, recovering $\HFK$ for closed knots. Basic idempotents of Ozsv{\'a}th--Szab{\'o}'s algebras naturally correspond to canonical or crystal basis elements of the underlying representations; in particular, they do not correspond to elements of the standard tensor-product basis. Ellis--Petkova--V{\'e}rtesi have a related theory with similar properties \cite{PetkovaVertesi,EPV}, called tangle Floer homology; Tian \cite{TianUT} and Sartori \cite{SartoriCat} have categorifications of the $U_q(\gl(1|1))$ representation $V^{\otimes n}$ but do not recover $\HFK$ from their constructions.\footnote{For bordered $\HFK$, Tian's theory, and Sartori's theory, the quantum group that acts on $V^{\otimes n}$ is not $U_q(\gl(1|1))$ itself but rather a modified version. In Ellis--Petkova--V{\'e}rtesi's theory, $U_q(\gl(1|1))$ itself acts, but on a representation that is slightly different from $V^{\otimes n}$.}
In general, Heegaard Floer homology associates chain complexes to Heegaard diagrams. Bordered Floer homology decomposes such diagrams along a set of non-intersecting cuts; to each cut one assigns an algebra, and to the pieces one assigns bimodules, recovering the chain complex of the original diagram after tensoring together the bimodules for local pieces over the boundary algebras. When applied to Heegaard diagrams arising from knot projections, the Heegaard diagram cuts often arise from cuts in the knot projection as in Figure~\ref{fig:OneCut}.
\begin{figure}
\includegraphics[scale=0.8]{one_cut.eps}
\caption{Knot diagram cuts typically analyzed using bordered Floer homology.}
\label{fig:OneCut}
\end{figure}
Extending one level down, cornered Floer homology \cite{DM,DLM} decomposes a Heegaard diagram along two transverse cuts. In \cite{ManionRouquier}, Rapha{\"e}l Rouquier and the author reformulated and extended the cornered-Floer algebra gluing operation in terms of a tensor product construction $\ootimes$ for 2-representations of a monoidal dg category $\U$ introduced by Khovanov \cite{KhovOneHalf}. When applied to Heegaard diagrams arising from knot projections, one would like cornered Floer homology to recover the algebra for $n$ tangle endpoints (say positively oriented) as a tensor power $\ootimes^n$ of the algebra for a single positive endpoint, and one would like to recover the bordered-Floer bimodule for (e.g.) a crossing between strands $i,i+1$ out of $n$ strands from a simpler bimodule for a truly-local crossing with only two strands (see Figure~\ref{fig:TwoCuts}).
\begin{figure}
\includegraphics[scale=0.8]{two_cuts.eps}
\caption{Type of knot-diagram gluing where cornered Floer homology should be relevant.}
\label{fig:TwoCuts}
\end{figure}
The algebras appearing in Ozsv{\'a}th--Szab{\'o}'s bordered HFK cannot arise as tensor powers $\ootimes^n$; indeed, the basic idempotents of a tensor power $\ootimes^n$ would correspond to standard tensor-product basis elements of tensor-product representations, not canonical or crystal basis elements. For similar reasons, Petkova--V{\'e}rtesi's tangle Floer algebras \cite{PetkovaVertesi,EPV} cannot arise as tensor powers $\ootimes^n$. The algebras appearing in Tian \cite{TianUT} are closely related to tensor power algebras, but there are technical difficulties when trying to define bimodules over these algebras using Heegaard diagrams.
The above three categorifications of $V^{\otimes n}$ are closely related to the bordered-Floer ``strands algebras'' $\A(\Zc)$ for (generalized) arc diagrams\footnote{i.e.\ chord diagrams} $\Zc$ (Sartori's categorification is more algebraic but is closely related \cite{LaudaManion} to Ozsv{\'a}th--Szab{\'o}'s bordered $\HFK$). Specifically, the diagrams $\Zc$ in question appear as three of the four vertices of a square of diagrams; see Figure~\ref{fig:FourSquare}. The horizontal edges of the square are diagrammatic equivalences (sequences of arcslides) giving derived equivalences for the corresponding algebras $\A(\Zc)$. The vertical edges of the square are the operation $\Zc \leftrightarrow \Zc_*$ from \cite{LOTMorphism}, which should, by analogy with the results of that paper, give Koszul dualities for the corresponding algebras $\A(\Zc)$.
\begin{figure}
\includegraphics[scale=0.7]{foursquareV7.eps}
\caption{Square of four chord diagrams (for $n=4$).}
\label{fig:FourSquare}
\end{figure}
The diagram $\Zc$ in the top left corner of Figure~\ref{fig:FourSquare} is the one remaining diagram in this square that has not yet been used to categorify $V^{\otimes n}$ or define bordered-Floer bimodules. Like the algebra for the diagram related to Tian's theory (bottom left corner), the algebra $\A(\Zc)$ for the top-left diagram $\Zc$ arises as a tensor power $\ootimes^n$, and the technical difficulties related to Heegaard diagram bimodules for the algebras on the bottom side of the diagram should not arise for the algebras on the top side.\footnote{Petkova--V{\'e}rtesi avoid these issues by modifying the diagrams, helping explain why they categorify a different representation than $V^{\otimes n}$; a similar modification to Tian's diagrams would be more drastic.}
In this paper we initiate the study of categorifications of $\gl(1|1)$ representations and bordered theories for $\HFK$ based on the diagram $\Zc$ in the top-left corner of Figure~\ref{fig:FourSquare}. In order to work with bigraded algebras and bimodules, we define a differential bigraded 2-category $\U^-$ (based on $\U$) that categorifies the idempotented form $\Udot_q(\gl(1|1)^-)$ of the negative half of $U_q(\gl(1|1))$, as well as a bigraded version of the tensor product $\ootimes$ from \cite{ManionRouquier}.
\begin{theorem}[cf. Sections~\ref{sec:BigradedTensor} and~\ref{sec:BigradedTensorDecat}]\label{thm:IntroCat}
The bigraded tensor product operation for 2-representations of $\U^-$ is well-defined and categorifies\footnote{See Section~\ref{sec:SplitGG} for the notion of categorification we use here.} the usual tensor product of representations of $\Udot_q(\gl(1|1)^-)$.
\end{theorem}
We then define bigraded algebras $\A_K$ for $K \geq 1$, equipped with 2-actions of $\U^-$, that categorify the representations $\wedge_q^K V$ of $\Udot_q(\gl(1|1)^-)$. The algebra $\A_1$ agrees with the strands algebra of the $n=1$ diagram in the top left corner of Figure~\ref{fig:FourSquare}; the other algebras $\A_K$ are new but closely related to $\A_1$. Taking bigraded tensor products, we get the following corollary.
\begin{corollary}[cf. Corollary~\ref{cor:GeneralKDecat}]
The algebra
\[
\A_{K_1,\ldots,K_n} \coloneqq \A_{K_1} \ootimes \cdots \ootimes \A_{K_n},
\]
with its 2-action of $\U^-$, categorifies the representation $\wedge_q^{K_1} V \otimes \cdots \otimes \wedge_q^{K_n} V$ with its action of $\Udot_q(\gl(1|1)^-)$.
\end{corollary}
While analogous categorification results often require detailed computations to show that the action of the quantum group on the decategorification is correct, here we get the result immediately from properties of the bigraded tensor product. The results of \cite{ManionRouquier} imply that $\A_{1,\ldots,1}$ agrees with the strands algebra of the general diagram in the top-left corner of Figure~\ref{fig:FourSquare}; the bigraded tensor product gives this algebra a bigrading.
We then proceed to define the basic ``truly local'' pieces appearing in gluings like the one shown in Figure~\ref{fig:TwoCuts}, beginning with bimodules for trivalent vertices. To these vertices we will assign AD bimodules, a type of $A_{\infty}$ bimodule appearing in bordered Floer homology, although we will typically work with their DA bimodule duals when doing computations.
\begin{theorem}[cf. Sections~\ref{sec:EasyVertex}, \ref{sec:HardVertex}]
The trivalent vertex bimodules of Sections~\ref{sec:EasyVertex} and \ref{sec:HardVertex}, over the algebras $\A_{1,1}$ and $\A_2$, are well-defined AD bimodules and categorify the appropriate $U_q(\gl(1|1))$-intertwining maps $V^{\otimes 2} \leftrightarrow \wedge_q^2 V$ arising from skew Howe duality (see Appendix~\ref{sec:Uqgl11Review}).
\end{theorem}
\begin{corollary}
The singular crossing AD bimodule of Section~\ref{sec:SingularCrossing}, over the algebra $\A_{1,1}$, is well-defined and categorifies the $U_q(\gl(1|1))$-intertwining map $V^{\otimes 2} \to V^{\otimes 2}$ for a singular crossing.
\end{corollary}
Both trivalent vertex bimodules are directly motivated by domains for holomorphic curve counts in the two Heegaard diagrams on the left of Figure~\ref{fig:IntroHDs}; see Figures~\ref{fig:EasyVertexDomains},~\ref{fig:HardVertexDomains}. As the rest of Figure~\ref{fig:IntroHDs} illustrates, these diagrams glue to a diagram that agrees (after Heegaard moves) with a reasonable adaptation of the Ozsv{\'a}th--Stipsicz--Szab{\'o} diagram \cite{OSSz} for a singular crossing; see also Figure~\ref{fig:OSSzDiag}. The presence of $O_1$ and $O_2$ basepoints in one of the trivalent-vertex diagrams leads to an infinitely-generated bimodule; we simplify to a homotopy equivalent but finitely generated version and work primarily with this simplification.
\begin{figure}
\includegraphics[scale=0.8]{intro_HDs.eps}
\caption{Heegaard diagrams for trivalent vertices being glued to form a Heegaard diagram for a singular crossing (after a $\beta$ handleslide and a destabilization).}
\label{fig:IntroHDs}
\end{figure}
\begin{figure}
\includegraphics[scale=0.8]{cobordisms.eps}
\caption{Bordered sutured cobordisms represented by the Heegaard diagrams from Figure~\ref{fig:IntroHDs}.}
\label{fig:Cobordisms}
\end{figure}
\begin{remark}
The new algebra $\A_2$ categorifying $\wedge_q^2 V$, with its extra polynomial generator as compared to $\A_1$, is exactly what is needed to permit the recovery of the singular-crossing bimodule from the trivalent-vertex pieces. The algebras $\A_K$ are defined by analogy to $\A_2$.
\end{remark}
\begin{remark}\label{rem:WeightedSutures}
By the rules for assigning 3d cobordisms to Heegaard diagrams in bordered sutured Floer homology \cite{Zarev}, the trivalent-vertex Heegaard diagrams in Figure~\ref{fig:IntroHDs} should represent bordered sutured cobordisms like the ones shown in Figure~\ref{fig:Cobordisms} (see \cite[Figure 2(b)]{Zarev} for a figure drawn with similar visual conventions). Each 3d cobordism is the complement of (an open tubular neighborhood of) the corresponding trivalent vertex, viewed as a web in $D^2 \times [0,1]$, with a bordered sutured structure on the boundary.
While the literature does not seem to give a sutured interpretation for the doubled $X$ basepoint in the top-left Heegaard diagram of Figure~\ref{fig:IntroHDs}, it is reasonable to suppose that an $X$ or $O$ basepoint appearing $m$ times in some region of a Heegaard diagram should give rise to a ``suture with weight $m$,'' and that sutured and bordered sutured Floer homology can be generalized to accommodate such weighted sutures. In Figure~\ref{fig:Cobordisms}, we indicate weight-two sutures with green circles that are thicker than the ones for weight-one sutures.
\end{remark}
We equip the trivalent vertex bimodules (and thus their compositions, like singular crossing bimodules) with the structure of 1-morphisms between 2-representations of $\U^-$, for a definition of 1-morphism adapted to the setting of AD and DA bimodules as discussed in Section~\ref{sec:2RepMorphisms}.
Passing to 2-morphisms, we further upgrade these trivalent vertex bimodules to form the 1-morphism part of a functor from $\dot{\U}_q(\gl(2))$, the 2-category categorifying the idempotented form $\Udot_q(\gl(2))$ (see \cite{KLIII, Rouquier2KacMoody}), to the 2-representation bicategory $2\Rep(\U^-)$ of $\U^-$ as defined in Section~\ref{sec:2RepMorphisms}.\footnote{This appearance of $\gl(2)$ acting on representations of $\gl(1|1)$ is an instance of skew Howe duality; see Remark~\ref{rem:IntroSkewHowe} for further discussion.}
\begin{theorem}[cf. Theorem~\ref{thm:SkewHowe2Action}]\label{thm:IntroSkewHowe}
The 2-morphisms in $2\Rep(\U^-)$ associated to generating 2-morphisms in $\dot{\U}_q(\gl(2))$ are well-defined and satisfy the defining relations for 2-morphisms in $\dot{\U}_q(\gl(2))$, so they give a functor from $\dot{\U}_q(\gl(2))$ to $2\Rep(\U^-)$ (descending to the ``Schur quotient'' $\Sc(2,2)$ of $\dot{\U}_q(\gl(2))$ from Mackaay--Sto{\v s}i{\' c}--Vaz \cite{MSVSchur}).
\end{theorem}
\begin{remark}
In particular, our trivalent vertex bimodules are biadjoint to each other; this property follows from the relations in $\dot{\U}_q(\gl(2))$.
\end{remark}
We get the following corollary from \cite{MSVSchur}.
\begin{corollary}[cf. Corollary~\ref{cor:SoergelFunctor}]\label{cor:IntroSoergel}
We have a functor from the Soergel category $\SC'_1$ (viewed as a 2-category with 1 object; see Section~\ref{sec:Soergel} for notation) to $2\Rep(\U^-)$.
\end{corollary}
Further, by Elias--Krasner \cite{EliasKrasner} we also get a functor from the 2-strand braid cobordism category.
\begin{corollary}[cf. Corollary~\ref{cor:BrCob}]\label{cor:IntroBraidCob}
We have a functor from the 2-strand braid cobordism category $\BrCob(2)$ (viewed as a 2-category with 1 object) to a bicategory $\ADBimod$ of dg categories, certain AD bimodules, and homotopy classes of closed AD bimodule morphisms.
\end{corollary}
Due to algebraic subtleties in taking mapping cones on 2-morphisms, we do not automatically get a functor from $\BrCob(2)$ to $2\Rep(\U^-)$. However, at least at the level of 1-morphisms, we can lift from $\BrCob(2) \to \ADBimod$ to $\BrCob(2) \to 2\Rep(\U^-)$ as discussed in Remark~\ref{rem:UpgradingBrCobFunctor}. In particular, for truly local positive and negative crossings, we have 1-morphisms between 2-representations of $\U^-$, and for 2-strand braid cobordisms we have bimodule maps (presumably 2-morphisms) satisfying the movie moves. Functoriality for 4d cobordisms, like braid cobordisms, is an important \textit{a priori} part of Heegaard Floer theory, but to our knowledge braid cobordism maps have not yet been defined in any local, bordered-Floer-based approach to $\HFK$.
Although they are defined over different algebras, it is natural to ask how our positive and negative crossing bimodules relate to Ozsv{\'a}th--Szab{\'o}'s bordered $\HFK$ bimodules for positive and negative crossings. To address this question, we define explicit bimodules for the derived equivalence from the top edge of Figure~\ref{fig:FourSquare} in the $n=2$ case, based on certain ``change-of-basis'' Heegaard diagrams.
\begin{theorem}[cf. Section~\ref{sec:ChangeOfBasis}]\label{thm:IntroChangeOfBasis}
Our change-of-basis bimodules are well-defined, homotopy inverses to each other, and categorify the change-of-basis matrices between the tensor-product basis and the canonical basis (see Appendix~\ref{sec:NonstandardBases}) for $V^{\otimes 2}$.
\end{theorem}
Tensoring our nonsingular crossing bimodules on either side with change-of-basis bimodules, we can compare with Ozsv{\'a}th--Szab{\'o}'s nonsingular crossing bimodules.
\begin{theorem}[cf. Section~\ref{sec:OSzRelationship}]\label{thm:IntroOSz}
After categorified change of basis, our nonsingular crossing bimodules are homotopy equivalent to Ozsv{\'a}th--Szab{\'o}'s.
\end{theorem}
\begin{figure}
\includegraphics[scale=0.7]{OSz_full_local_Z.eps}
\caption{Diagram $\Zc$ needed for Ozsv{\'a}th--Szab{\'o}'s minimal local crossing bimodules.}
\label{fig:OSzFullLocalZ}
\end{figure}
\begin{remark}
The Ozsv{\'a}th--Szab{\'o} bimodules appearing in Theorem~\ref{thm:IntroOSz} are simpler than the minimal bimodules Ozsv{\'a}th--Szab{\'o} need to encode all the relevant local holomorphic-curve data in their theory. We work with Ozsv{\'a}th--Szab{\'o} bimodules defined over an algebra $\A_{1,1}^{\can}$ (quasi-isomorphic to the strands algebra of the $n=2$ top-right corner of Figure~\ref{fig:FourSquare} by \cite{LP,MMW1,MMW2}; our notation indicates the relationship between this algebra and the canonical basis for $V^{\otimes 2}$). However, the minimal local bimodules in Ozsv{\'a}th--Szab{\'o}'s theory are defined over a larger algebra $\B(2)$ quasi-isomorphic to $\A(\Zc)$ for $\Zc$ the diagram shown in Figure~\ref{fig:OSzFullLocalZ}. While $\A_{1,1}^{\can}$ has four basic idempotents (corresponding to four basis elements of $V^{\otimes 2}$), the algebra $\B(2)$ has eight basic idempotents.
Even though simpler Ozsv{\'a}th--Szab{\'o} bimodules over the smaller algebra $\A_{1,1}^{\can}$ appear in Theorem~\ref{thm:IntroOSz}, we expect that the corresponding bimodules over $\A_{1,1}$ (with their 1-morphism structure) contain all the local data needed to build $n$-strand crossing bimodules as in Figure~\ref{fig:TwoCuts}. This is one advantage of the algebras $\A_{1,\ldots,1}$ and the tensor product construction; just as standard tensor-product basis elements are easier to glue together than canonical basis elements are, the extension process that builds $n$-strand bimodules out of our 2-strand bimodules should be simpler (in terms of numbers of idempotents, although perhaps not in terms of holomorphic geometry) and more algebraically structured than in Ozsv{\'a}th--Szab{\'o}'s theory.
\end{remark}
\begin{remark}
The Heegaard diagrams appearing in this paper can be readily drawn and composed in the plane (although as in Figure~\ref{fig:IntroHDs}, the compositions often produce tubes which can be removed after destabilization). Thus, one can think of the theory developed here as an instance of ``bordered knot Floer homology done using planar diagrams'' in a broad sense. However, ``the planar diagram'' also has a more specific meaning in knot Floer homology, referring to a specific diagram introduced in \cite{OSzCube,ManolescuCube}. The diagrams considered here do not look exactly like this specific planar diagram, although they are related; for a proposed approach to defining a bordered HFK theory based on the planar diagram of \cite{OSzCube,ManolescuCube} using the $n$-strand Ozsv{\'a}th--Szab{\'o} algebras $\A_{1,\ldots,1}^{\can}$, see \cite{ManionDiagrams}.
\end{remark}
\begin{remark}
Various bimodules below will be equipped with the structure of 1-morphisms of 2-representations. For bimodules arising from Heegaard diagrams, the 1-morphism structure should also arise from counts of holomorphic disks whose domains have multiplicity at corners of the Heegaard diagram as in \cite{DM,DLM}. We hope to return to this point in a future paper. It would also be desirable to have Heegaard diagram interpretations for the bimodule maps we define in Section~\ref{sec:SkewHowe2Action} below; perhaps these maps arise from Heegaard diagram representations of 4d cobordisms between web complements given by foam complements with certain sutured structure.
\end{remark}
\begin{remark}\label{rem:COB1Mor}
The change-of-basis bimodules in Section~\ref{sec:ChangeOfBasis} can be upgraded to 1-morphisms of 2-representations, and one can deduce 1-morphism structure for the Ozsv{\'a}th--Szab{\'o} positive and negative crossing bimodules considered in Section~\ref{sec:OSzRelationship}. Diagrammatically, though, one can see that the 1-morphism structure on our change-of-basis bimodules will not be enough to define change-of-basis bimodules between $\A_{1,\ldots,1}$ and $\A_{1,\ldots,1}^{\can}$ for an arbitrary number $n$ of strands; a more global definition will be required. For Ozsv{\'a}th--Szab{\'o}'s bimodules, the 1-morphism structure is most interesting for their full local bimodules over the algebra $\B(2)$, where it encodes a compatibility between the summands of their bimodules that morally underlies their extensions as in Figure~\ref{fig:TwoCuts}; this is another point to which we hope to return in a future paper.
\end{remark}
\begin{remark}
One would like to extend $\ootimes$ to a monoidal structure on $2\Rep(\U^-)$ (or a related 2-category) and then upgrade to a braided monoidal structure. Unlike with Ozsv{\'a}th--Szab{\'o}'s bimodules, the bimodules for positive and negative crossings defined here should be instances of this higher braiding.
\end{remark}
\begin{remark}\label{rem:IntroSkewHowe}
Our constructions give a categorification of the ``skew Howe'' representation $\wedge_q^2 (\C_q^{1|1} \otimes \C_q^2)$ with its commuting actions of $U_q(\gl(1|1)^-)$ and $U_q(\gl(2))$; see \cite{QS, TVW, LQR}. Both actions are categorified here, and the 1-morphism structure on our trivalent vertex bimodules encodes the commutativity. It would be desirable to extend to a categorification of $\wedge_q^K(\C_q^{1|1} \otimes \C_q^n)$ for arbitrary $n$ and $K$.
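Concretely, in the case at hand the $\gl(2)$-weight decomposition of the skew Howe representation reads
\[
\wedge_q^2\big(\C_q^{1|1} \otimes \C_q^2\big) \cong \wedge_q^2 V \oplus \big(V \otimes V\big) \oplus \wedge_q^2 V,
\]
with the outer summands categorified by $\A_2$ and the middle summand by $\A_{1,1}$; the generators $E$ and $F$ of $U_q(\gl(2))$ move between adjacent weight spaces, and the trivalent vertex bimodules above categorify these maps.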
\end{remark}
\begin{remark}
To recover $\HFK$ from the constructions of this paper, one would first want to extend $\ootimes$ from objects to at least (DA or AD bimodule) 1-morphisms in $2\Rep(\U^-)$, allowing one to perform gluings as in Figure~\ref{fig:TwoCuts}. One would then want to extend to mixed orientations (both $V$ and $V^*$) and define bimodules for local maximum and minimum points. Invariance of tangle invariants under Reidemeister moves, and recovery of $\HFK$, would follow if one could also define change-of-basis bimodules between $\A_{1,\ldots,1}$ and $\A_{1,\ldots,1}^{\can}$ as mentioned in Remark~\ref{rem:COB1Mor} and establish a general relationship with Ozsv{\'a}th--Szab{\'o}'s theory. It would also be desirable to derive the Reidemeister invariance as a consequence of an action of the $n$-strand braid cobordism category coming from an action of $\dot{\U}_q(\gl(n))$, as part of a categorification of $\wedge_q^K(\C_q^{1|1} \otimes \C_q^n)$ for all $n,K$.
\end{remark}
\subsection*{Organization}
In Section~\ref{sec:BorderedAlg}, we review what we need of the algebra of bordered Floer homology. We also introduce a matrix-based notation to facilitate computations with the $A_{\infty}$ bimodules and morphisms that will appear frequently below.
In Section~\ref{sec:2Reps}, we define a bigraded tensor product operation and discuss its decategorification, proving Theorem~\ref{thm:IntroCat}. We also define 1-morphisms and 2-morphisms of 2-representations of the dg category $\U^-$ introduced in this section.
In Section~\ref{sec:OurExamples}, we define the algebras $\A_K$ and study their tensor products, including their 2-representation structure. For $\A_{1,\ldots,1}$, we discuss how the 2-representation structure relates to strands pictures and to Heegaard diagrams.
In Section~\ref{sec:EasyVertex}, we define a 1-morphism of 2-representations for one type of trivalent vertex, and in Section~\ref{sec:HardVertex} we do the same for the other type. Compositions of these bimodules are analyzed in Section~\ref{sec:SimpleWebs}.
In Section~\ref{sec:SkewHowe2Action}, we extend to an action of 2-morphisms in $\dot{\U}_q(\gl(2))$, proving Theorem~\ref{thm:IntroSkewHowe}. In Section~\ref{sec:SoergelBraidCob} we deduce Corollaries~\ref{cor:IntroSoergel} and \ref{cor:IntroBraidCob}, while also extending our functor to the Soergel category $\SC'_1$.
In Section~\ref{sec:PosNegCrossings}, we simplify the resulting bimodules for positive and negative crossings and relate them to bimodules motivated by holomorphic curve counts in Heegaard diagrams. In Section~\ref{sec:ChangeOfBasis}, we define change-of-basis bimodules between our algebras and the analogous algebras in Ozsv{\'a}th--Szab{\'o}'s bordered $\HFK$, proving Theorem~\ref{thm:IntroChangeOfBasis}. In Section~\ref{sec:OSzRelationship} we use the change-of-basis bimodules to prove Theorem~\ref{thm:IntroOSz}.
Finally, in Appendix~\ref{sec:Uqgl11Review}, we review what we need of the representation theory of $U_q(\gl(1|1))$.
\subsection*{Acknowledgments}
The author would like to thank Aaron Lauda, Ciprian Manolescu, Aaron Mazel--Gee, Peter Ozsv{\'a}th, Ina Petkova, Rapha{\"e}l Rouquier, and Zolt{\'a}n Szab{\'o} for useful conversations. The author was partially supported by NSF grant DMS-1902092 and Army Research Office grant W911NF2010075.
\section{Bordered algebra and matrix notation}\label{sec:BorderedAlg}
In this section we review relevant aspects of the algebra of bordered Floer homology and introduce a convenient matrix-based notation for DA bimodules. Let $k$ be a field of characteristic $2$; we will occasionally work over more general $k$ (e.g.\ polynomial rings over fields of characteristic $2$), but what we say here generalizes to such settings without issue.
All categories, algebras, modules, etc. discussed below are assumed to be $k$-linear. We will also assume that these algebras and modules come equipped with a bigrading by $\Z^2$, consisting of a $\Z$-grading $\deg^q$ (the quantum grading) with respect to which algebra multiplication maps, module action maps, and differentials are degree $0$, as well as a $\Z$-grading $\deg^h$ (the homological grading) with respect to which differentials are degree $1$. For a bigraded vector space $W$, we write $q^i W [j]$ for $W$ in which all quantum degrees have been shifted up by $i$ and all homological degrees have been shifted down by $j$.
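Explicitly, if $w \in W$ is homogeneous, then in $q^i W[j]$ the same element has
\[
\deg^q_{q^i W[j]}(w) = \deg^q_W(w) + i, \qquad \deg^h_{q^i W[j]}(w) = \deg^h_W(w) - j,
\]
so that, for example, a map $W \to q^i W'[j]$ of bidegree zero is the same as a map $W \to W'$ of bidegree $(-i,j)$.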
\begin{remark}
The definitions below also make sense in the ungraded setting, where the degree shifts in the formulas below should be ignored, as well as in the more general setting of gradings by nonabelian groups $G$, as is common in bordered Floer homology (see \cite{LOTBimodules} for the definitions in the $G$-graded case).
\end{remark}
\subsection{DA bimodules}
If $A$ is a dg category, we write $I$ for the non-full subcategory of $A$ having the same objects as $A$ but having only identity morphisms. We think of $A$ as being a (possibly non-unital) dg algebra equipped with a collection of orthogonal idempotents given by identity morphisms on the objects of $A$. Correspondingly, we will call $I$ the \emph{idempotent ring} or \emph{ring of idempotents} of $A$, even though $I$ is a category. In all examples considered in this paper, $A$ (and thus $I$) will have finitely many objects.
By convention, we assume all dg categories $A$ come equipped with an augmentation functor $\epsilon\co A \to I$ restricting to the identity on $I \subset A$; we write $A_+$ for the kernel of $\epsilon$.
\begin{definition}\label{def:DABimod}
Let $A, A'$ be dg categories with rings of idempotents $I, I'$. A DA bimodule over $(A',A)$ is a pair $(M,\{\delta^1_i: i \geq 1\})$ where $M$ is a bigraded $(I',I)$-bimodule (i.e. a $k$-linear functor from $I' \otimes I^{\opp}$ to bigraded $k$-vector spaces) and, for $i \geq 1$,
\[
\delta^1_i\co M \otimes_{I} (A[1])^{\otimes(i-1)} \to A'[1] \otimes_{I'} M
\]
is an $(I',I)$-bilinear map of bidegree zero satisfying the DA bimodule relations
\begin{align*}
&\sum_{j=1}^i(\mu' \otimes \id) \circ (\id \otimes \delta^1_{i-j+1}) \circ (\delta^1_j \otimes \id) \\
&+ \sum_{j=1}^{i-1} \delta^1_i \circ (\id \otimes \partial_j) \\
&+ \sum_{j=1}^{i-2} \delta^1_{i-1} \circ (\id \otimes \mu_{j,j+1}) \\
&+ (\partial' \otimes \id) \circ \delta^1_i \\
&= 0,
\end{align*}
where $\mu', \partial'$ denote the multiplication and differential on $A'$, $\partial_j$ denotes the differential on the $j^{th}$ factor of $A^{\otimes(i-1)}$, and $\mu_{j,j+1}$ denotes the multiplication on the $j^{th}$ and $(j+1)^{st}$ factors of $A^{\otimes(i-1)}$.
\end{definition}
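For orientation, consider the first two relations. When $i = 1$ the second and third sums are empty, and the relation reads
\[
(\mu' \otimes \id) \circ (\id \otimes \delta^1_1) \circ \delta^1_1 + (\partial' \otimes \id) \circ \delta^1_1 = 0;
\]
together with the dg category axioms for $A'$, this says that the induced endomorphism $\partial' \otimes \id + (\mu' \otimes \id) \circ (\id \otimes \delta^1_1)$ of $A' \otimes_{I'} M$ squares to zero, i.e.\ $\delta^1_1$ induces a differential on $A' \otimes_{I'} M$. When $i = 2$ the relation reads
\[
(\mu' \otimes \id) \circ (\id \otimes \delta^1_2) \circ (\delta^1_1 \otimes \id) + (\mu' \otimes \id) \circ (\id \otimes \delta^1_1) \circ \delta^1_2 + \delta^1_2 \circ (\id \otimes \partial_1) + (\partial' \otimes \id) \circ \delta^1_2 = 0,
\]
a Leibniz-type compatibility between $\delta^1_2$ and the differentials on $M$, $A$, and $A'$.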
\begin{example}
The identity DA bimodule $\mathbb{I}_A$ over $A$ has $\mathbb{I}_A(S',S) = k$ if $S = S'$ and $\mathbb{I}_A(S',S) = 0$ otherwise, so $\mathbb{I}_A$ is the identity bimodule over the idempotent ring $I$. We set $\delta^1_2$ to be the identity map (endofunctor) on $A[1]$; we set $\delta^1_i = 0$ for $i \neq 2$. One can check that the DA bimodule relations are satisfied.
\end{example}
\begin{remark}
A DA bimodule $M$ has an underlying $A_{\infty}$ bimodule $A' \otimes_{I'} M$. As a left $A_{\infty}$ module, $A' \otimes_{I'} M$ has no higher actions; it is a left dg module (even projective as a non-differential module). However, the right action of $A$ on $A' \otimes_{I'} M$ may have higher $A_{\infty}$ terms. We have a natural identification of $A \otimes_{I} \mathbb{I}_A$ with the ordinary identity bimodule over $A$.
\end{remark}
Let $\delta^1 \coloneqq \sum_i \delta^1_i$, a map from $M \otimes_{I} T^*(A[1])$ to $A'[1] \otimes_{I'} M$ of bidegree zero. Define
\[
\delta^j_i\co M \otimes_{I} (A[1])^{\otimes(i-1)} \to (A'[1])^{\otimes j} \otimes_{I'} M
\]
by
\[
\delta^j_i(-,a_1,\ldots,a_{i-1}) = \sum_{i-1 = (i_1 - 1) + \cdots + (i_j - 1)} (\id \otimes \delta^1_{i_j}(-,\ldots,a_{i-1})) \circ \cdots \circ \delta^1_{i_1}(-,a_1,\ldots,a_{i_1 - 1}),
\]
where there are $j$ factors in the composition.
\begin{definition}
We say that a DA bimodule $M$ is:
\begin{itemize}
\item left bounded if for each $x \in M$ and each $i \geq 1$, there exists $n$ such that $\delta^j_i(x,-,\ldots,-)$ vanishes on $(A[1])^{\otimes(i-1)}$ for all $j > n$;
\item finitely generated if $M$ is finite-dimensional over $k$;
\item strictly unital if, for all $x \in M(S',S)$, we have
\begin{itemize}
\item $\delta^1_2(x,\id_{S}) = \id_{S'} \otimes x$
\item $\delta^1_i(x,\ldots,\id_{S''},\ldots) = 0$ for $i > 2$ and any $S''$.
\end{itemize}
\end{itemize}
\end{definition}
\subsection{Matrix notation}
We specify a DA bimodule $M$ (finitely generated at first) by giving a ``primary matrix'' and a ``secondary matrix.'' Let $A$ and $A'$ be dg categories with finitely many objects; we start with a primary matrix whose columns are indexed by objects $S$ of $A$, whose rows are indexed by objects $S'$ of $A'$, and whose entries are sets (finite sets at first). If the primary matrix has block form, we will often give each block separately. From the primary matrix, we can define $M$ as an $(I',I)$-bimodule; we take the set in row $S'$, column $S$ as a basis for $M(S',S)$.
We also assume we are given a secondary matrix with both rows and columns indexed by entries of the primary matrix. The entries of the secondary matrix should be sums (potentially infinite) of expressions of the form $a' \otimes (a_1, \ldots, a_{i-1})$, where $a' \in A'$ and the $(a_1, \ldots, a_{i-1})$ are distinct sequences of elements of some chosen homogeneous basis for (the morphism spaces in) $A$ such that:
\begin{itemize}
\item identity morphisms are basis elements;
\item all other basis elements are in $A_+$.
\end{itemize}
We require that no $a_j$ is an identity morphism in any such sequence $(a_1, \ldots, a_{i-1})$. By convention, we write $0$ for the empty sum, and when $i=1$ we write $a'$ as shorthand for $a' \otimes ()$. As with the primary matrix, we will often give secondary matrices block-by-block. There are many examples below; see, for instance, Section~\ref{sec:SingularCrossing}.
\begin{definition}
Given this data, if $(a_1,\ldots,a_{i-1})$ is a sequence of elements in the chosen basis for $A$ (possibly empty but none an identity morphism) and $x$ is a basis element of $M(S',S)$, we take $\delta^1_i(x \otimes a_1 \otimes \cdots \otimes a_{i-1})$ to be the sum, over all rows $y$ of the secondary matrix, of the terms $a' \otimes y$ such that $a' \otimes (a_1, \ldots, a_{i-1})$ appears in the secondary-matrix entry in row $y$ and column $x$ (the result is zero if there are no such terms). For any identity morphism $\id_{S}$ in $A$ and any basis element $x$ of $M(S',S)$, we set $\delta^1_2(x,\id_{S}) = \id_{S'} \otimes x$. We extend $\delta^1_i$ linearly over $k$.
\end{definition}
A main advantage of this matrix-based notation is that it allows computations involving DA bimodules to be formulated using familiar linear-algebraic operations. For example, we describe a procedure for checking that a DA bimodule defined by primary and secondary matrices is well-defined.
\begin{procedure}\label{proc:DAWellDef}
To check that the DA bimodule relations hold for $M$ defined by primary and secondary matrices, the first step is to multiply the secondary matrix by itself. When multiplying (a term of) an entry $a' \otimes (a_1, \ldots, a_{i-1})$ in the right matrix factor with (a term of) an entry $b' \otimes (b_1, \ldots, b_{j-1})$ in the left matrix factor, the result is $a'b' \otimes (a_1, \ldots, a_{i-1}, b_1,\ldots,b_{j-1})$.
Next, one considers a ``multiplication matrix'' derived from the secondary matrix as follows: whenever an algebra input $a_j$ in a term $a' \otimes (a_1,\ldots,a_{i-1})$ of the secondary matrix (row $y$, column $x$) is a nonzero term of the basis expansion of $b_1 \cdot b_2$ for two basis elements $b_1, b_2$ of $A$ (with coefficient $c \in k$), add a term $ca' \otimes (a_1,\ldots,a_{j-1},b_1,b_2,a_{j+1},\ldots,a_{i-1})$ to the multiplication matrix in row $y$ and column $x$.
Finally, one considers two ``differential matrices'': the first is obtained from the secondary matrix by replacing each term $a' \otimes (a_1,\ldots,a_{i-1})$ with $\partial'(a') \otimes (a_1,\ldots,a_{i-1})$ in each entry. The second is obtained as follows: whenever an algebra input $a_j$ in a term $a' \otimes (a_1,\ldots,a_{i-1})$ of the secondary matrix (row $y$, column $x$) is a nonzero term of the basis expansion of $\partial(b)$ for some basis element $b$ of $A$ (with coefficient $c \in k$), add a term $ca' \otimes (a_1,\ldots,a_{j-1},b,a_{j+1},\ldots,a_{i-1})$ to the second differential matrix in row $y$ and column $x$.
The DA bimodule relations amount to checking that the sum of the squared secondary matrix, the multiplication matrix, and the two differential matrices is zero.
\end{procedure}
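As a hypothetical illustration of the multiplication step (the entries below are chosen only for illustration and are not drawn from any particular bimodule), suppose the secondary matrix is
\[
D = \begin{pmatrix} 0 & a' \otimes (a_1) \\ b' \otimes (b_1, b_2) & 0 \end{pmatrix}.
\]
Multiplying $D$ by itself according to the rule above, the entry of the right matrix factor contributes its algebra inputs first, so
\[
D^2 = \begin{pmatrix} b'a' \otimes (b_1, b_2, a_1) & 0 \\ 0 & a'b' \otimes (a_1, b_1, b_2) \end{pmatrix}.
\]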
\begin{remark}
A priori, one would want to add terms $\id_{S'} \otimes \id_{S}$ to the diagonal entries of the secondary matrix before performing the above operations; one can check that the above procedure (without $\id_{S'} \otimes \id_{S}$ terms) suffices to check the DA bimodule relations, and that DA bimodules arising from primary and secondary matrices are strictly unital.
\end{remark}
\begin{remark}
With appropriate care, one can apply the same ideas to infinitely-generated DA bimodules whose generators come in regular families. We will see several examples below; in the secondary matrix, even if there are infinitely many rows, one requires that for each column and each given input sequence, only finitely many entries are nonzero.
\end{remark}
\begin{remark}
Suppose that $M$ is a DA bimodule over $(A',A)$ with $\delta^1_i = 0$ for $i > 2$. It follows that $A' \otimes_{I'} M$ is an ordinary dg bimodule, with differential given by $\delta^1_1$ and right action of $A$ given by $\delta^1_2$ (the left action of $A'$ comes from multiplication in $A'$). For such bimodules, it will often be convenient to omit the full secondary matrix in favor of matrices for the differential and for the right action of each multiplicative generator of $A$; this type of description can be considerably simpler.
Assume for simplicity that $A$ and $A'$ have no differential (this will be true for nearly all the examples in this paper). To check that such a bimodule $M$ is well-defined, it suffices to check that:
\begin{itemize}
\item the matrix for the differential on $M$ squares to zero;
\item the matrices for the right action commute with the matrix for the differential;
\item whatever relations are satisfied by the multiplicative generators in the algebra are satisfied for the corresponding right-action matrices.
\end{itemize}
\end{remark}
\subsection{Morphisms}
\begin{definition}
Let $A, A'$ be dg categories with rings of idempotents $I, I'$, and let $M, N$ be DA bimodules over $(A',A)$. A DA bimodule morphism $f\co M \to N$ is a collection $\{f_i : i \geq 1\}$ where
\[
f_i\co M \otimes_{I} (A_+[1])^{\otimes(i-1)} \to A' \otimes_{I'} N
\]
are $(I',I)$-bilinear maps (if all have the same bidegree, then $f$ is said to be homogeneous of this bidegree). Such morphisms form a chain complex whose differential is defined by
\begin{align*}
d(f)_i &= \sum_{j=1}^i (\mu' \otimes \id) \circ (\id \otimes \delta^1_{i-j+1}) \circ f_j \\
&+ \sum_{j=1}^i (\mu' \otimes \id) \circ (\id \otimes f_{i-j+1}) \circ \delta^1_j \\
&+ \sum_{j=1}^{i-1} f_i \circ (\id \otimes \partial_j) \\
&+ \sum_{j=1}^{i-2} f_{i-1} \circ (\id \otimes \mu_{j,j+1}) \\
&+ (\partial' \otimes \id) \circ f_i.
\end{align*}
We say that $f$ is strict if $f_i = 0$ for $i > 1$.
\end{definition}
\begin{remark}
A morphism of DA bimodules $f\co M \to N$ gives a morphism of $A_{\infty}$ bimodules $A' \otimes_{I'} M \to A' \otimes_{I'} N$, compatibly with the differential on morphisms.
\end{remark}
If $M$ and $N$ are defined by specifying primary and secondary matrices, then we can also specify a morphism $f\co M \to N$ by giving a matrix. The columns of the matrix for $f$ are indexed by entries $x$ of the primary matrix for $M$, and the rows are indexed by entries $y$ of the primary matrix for $N$. The entry of the matrix for $f$ in row $y$ and column $x$ should be a sum of expressions like $a' \otimes (a_1, \ldots, a_{i-1})$ where $a' \in A'$ and $(a_1, \ldots, a_{i-1})$ are distinct sequences of elements of a chosen basis for $A$ as above. We require that no $a_j$ is an identity morphism in any such sequence (so all $a_j$ are in $A_+$), we write $0$ for the empty sum, and when $i=1$ we write $a'$ as shorthand for $a' \otimes ()$.
\begin{definition}
Given this data, if $(a_1,\ldots,a_{i-1})$ is a sequence of elements in the chosen basis for $A$ and $x$ is a basis element of $M(S',S)$, we take $f_i(x \otimes a_1 \otimes \cdots \otimes a_{i-1})$ to be the sum, over all rows $y$ in column $x$ of the matrix for $f$, of $a' \otimes y$ if $a' \otimes (a_1, \ldots, a_{i-1})$ is a term of the matrix entry for $f$ in row $y$ and column $x$ and zero otherwise. We extend $f_i$ linearly over $k$.
\end{definition}
To compose DA bimodule morphisms $f$ and $g$ given by matrices, one multiplies the matrices the same way one does when computing the squared secondary matrix in Procedure~\ref{proc:DAWellDef}.
\begin{procedure}
If $f\co M \to N$ is a morphism of DA bimodules and $M, N, f$ are given in matrix notation, a matrix for $d(f)$ can be obtained as the sum of the following matrices:
\begin{itemize}
\item Two matrices obtained by multiplying the matrix for $f$ with the secondary matrix for $M$ and with the secondary matrix for $N$ (in the orders that make sense),
\item One ``multiplication matrix'' and two ``differential matrices'' obtained from the matrix for $f$ as in Procedure~\ref{proc:DAWellDef}.
\end{itemize}
\end{procedure}
For a given $(A',A)$, the DA bimodules over $(A',A)$ and chain complexes of DA bimodule morphisms form a dg category; isomorphism and homotopy equivalence of DA bimodules are defined in terms of this category. A closed morphism of DA bimodules $f\co M \to N$ is called a quasi-isomorphism if its strict part $f_1$ induces an isomorphism on the homology of the underlying left dg modules $A' \otimes_{I'} M$ and $A' \otimes_{I'} N$.
\begin{remark}
In fact, every quasi-isomorphism of DA bimodules is a homotopy equivalence (see \cite[Corollary 2.4.4]{LOTBimodules}); this is a general feature of $A_{\infty}$ morphisms.
\end{remark}
\subsection{Box tensor products}\label{sec:BoxTensor}
If $M$ and $N$ are two DA bimodules over $(A'',A')$ and $(A',A)$ respectively, one can define a DA bimodule $M \boxtimes N$ as in \cite[Section 2.3.2]{LOTBimodules} (given suitable finiteness conditions). If both $M$ and $N$ are left bounded, then $M \boxtimes N$ is well-defined and left bounded by \cite[Proposition 2.3.10(1), Remark 2.3.12]{LOTBimodules}.
If $M$ and $N$ are given by primary and secondary matrices, then we can describe $M \boxtimes N$ similarly.
\begin{procedure}\label{proc:BoxTensorBimods}
The primary matrix for $M \boxtimes N$ is obtained by multiplying the primary matrices for $M$ and $N$. Each entry of the primary matrices is a set; to multiply two entries, take their Cartesian product, and to sum the products of entries contributing to a given entry of the resulting matrix, take the disjoint union.
The secondary matrix for $M \boxtimes N$ is obtained as follows: let $(x,y)$ and $(x',y')$ be entries of the primary matrix for $M \boxtimes N$. For every choice of:
\begin{itemize}
\item $i \geq 1$
\item basis elements $y = y_1, y_2, \ldots, y_i = y'$ of $N$
\item term $a'' \otimes (a'_1,\ldots,a'_{i-1})$ in row $x'$ and column $x$ of the secondary matrix of $M$
\item terms $b'_j \otimes (a_{j,1},\ldots,a_{j,i_j-1})$ of the secondary matrix of $N$ in row $y_{j+1}$ and column $y_j$ for $1 \leq j \leq i-1$, where $c_j a'_j$ is a nonzero term in the basis expansion of $b'_j$ (for some $c_j \in k$),
\end{itemize}
add the entry
\[
c_1 \cdots c_{i-1} a'' \otimes (a_{1,1},\ldots,a_{1,i_1-1}, \ldots, a_{i-1,1}, \ldots, a_{i-1,i_{i-1}-1})
\]
to row $(x',y')$ and column $(x,y)$ of the secondary matrix for $M \boxtimes N$.
\end{procedure}
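As a small hypothetical illustration of the primary matrix step (the generators below do not come from any particular pair of bimodules), suppose the primary matrix of $M$ is the row vector $\begin{pmatrix} \{x_1\} & \{x_2, x_3\} \end{pmatrix}$ and the primary matrix of $N$ is the column vector $\begin{pmatrix} \{y_1\} \\ \{y_2\} \end{pmatrix}$. The primary matrix of $M \boxtimes N$ is then the $1 \times 1$ matrix
\[
\begin{pmatrix} \left( \{x_1\} \times \{y_1\} \right) \sqcup \left( \{x_2,x_3\} \times \{y_2\} \right) \end{pmatrix} = \begin{pmatrix} \{(x_1,y_1),\, (x_2,y_2),\, (x_3,y_2)\} \end{pmatrix}.
\]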
By \cite[Lemma 2.3.14(2)]{LOTBimodules}, box tensor products of DA bimodules are associative up to canonical isomorphism. Also, there are canonical isomorphisms between $M$ and the box tensor product of $M$ with an identity DA bimodule on either side.
\subsection{Box tensor products of morphisms}\label{sec:BoxTensorOfMorphisms}
Let $M_1, M_2$ be DA bimodules over $(A'',A')$ and let $N_1, N_2$ be DA bimodules over $(A',A)$. Let $f\co M_1 \to M_2$ and $g\co N_1 \to N_2$ be morphisms. As discussed in \cite[Section 2.3.2]{LOTBimodules}, the box tensor product $f \boxtimes g$ is only defined up to homotopy. However, expressions like $f \boxtimes \id$ and $\id \boxtimes g$ are unambiguously defined and we can compute them in matrix notation.
\begin{procedure}\label{proc:BoxTensorMorphisms}
Suppose that $N_1 = N_2 = N$ and we have a morphism $f\co M_1 \to M_2$. The matrix for $f \boxtimes \id_N$ has columns indexed by primary matrix entries for $M_1 \boxtimes N$ and rows indexed by primary matrix entries for $M_2 \boxtimes N$. For each entry $a'' \otimes (a'_1,\ldots,a'_{i-1})$ of the matrix for $f$ (in column $x_1$ and row $x_2$), consider the $(i-1)^{st}$ power of the secondary matrix for $N$, defined by concatenating both input sequences and output terms when multiplying individual entries. If $(x_1,y_1)$ and $(x_2,y_2)$ are entries of the primary matrices of $M_1 \boxtimes N$ and $M_2 \boxtimes N$ respectively, then for each entry of the $(i-1)^{st}$ power matrix of $N$ in column $y_1$ and row $y_2$ (say with output sequence $(b'_1, \ldots, b'_{i-1})$ and input sequence $\vec{S}$), let $c_j \in k$ be the coefficient on $a'_j$ in the basis expansion of $b'_j$ for $1 \leq j \leq i-1$. Let $c = c_1 \cdots c_{i-1}$ and add $ca'' \otimes \vec{S}$ to the matrix for $f \boxtimes \id_N$ in column $(x_1,y_1)$ and row $(x_2,y_2)$.
Now suppose that $M_1 = M_2 = M$ and we have a morphism $g\co N_1 \to N_2$. The matrix for $\id_M \boxtimes g$ is defined as above, except the matrix for $f$ above is replaced by the secondary matrix for $M$ with the terms $\id_{S''} \otimes \id_{S'}$ included. The $(i-1)^{st}$ power matrix above is replaced, for $i \geq 2$, by the sum of all ways of multiplying one instance of the matrix for $g$ with $i-2$ instances of the secondary matrices for $N_1$ or $N_2$ as appropriate, again concatenating both input sequences and output terms when multiplying.
\end{procedure}
Note that $i=1$ terms in the secondary matrix for $M$ do not contribute to the matrix for $\id_M \boxtimes g$.
\subsection{Simplifying DA bimodules}\label{sec:PrelimSimplifying}
Suppose that a DA bimodule $M$ is given by primary and secondary matrices, and that some entry of the secondary matrix (say in column $x$ and row $y$) is an identity morphism of $A'$ (i.e. $\id_{S'} = \id_{S'} \otimes ()$). We can simplify $M$ to obtain a homotopy equivalent DA bimodule $\widetilde{M}$ having two fewer basis elements than $M$. We describe $\widetilde{M}$ in matrix notation as follows.
\begin{procedure}
The primary matrix of $\widetilde{M}$ is the primary matrix of $M$ with the elements $x$ and $y$ removed from their corresponding entries. The secondary matrix of $\widetilde{M}$ is obtained as follows:
\begin{itemize}
\item Start by deleting the row and column corresponding to $x$, as well as the row and column corresponding to $y$, from the secondary matrix of $M$.
\item Now, for each term $a' \otimes (a_1,\ldots,a_{i-1})$ of an entry of the original secondary matrix of $M$ in row $y$ (and some column $x'$) other than the $\id_{S'}$ we are canceling, and each such term $b' \otimes (b_1,\ldots,b_{j-1})$ in column $x$ (and some row $y'$) other than the $\id_{S'}$ we are canceling, add the term $a'b' \otimes (a_1,\ldots,a_{i-1},b_1,\ldots,b_{j-1})$ to the secondary matrix of $\widetilde{M}$ in column $x'$ and row $y'$.
\end{itemize}
\end{procedure}
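As a hypothetical illustration with three generators $x$, $y$, and $z$ (not taken from any particular bimodule), suppose the only nonzero terms of the secondary matrix of $M$ are $\id_{S'}$ in row $y$ and column $x$, a term $a' \otimes (a_1)$ in row $y$ and column $z$, and a term $b' \otimes (b_1)$ in row $z$ and column $x$. Canceling the pair $(x,y)$ leaves $\widetilde{M}$ with the single generator $z$, and the zig-zag through the canceled pair contributes the term
\[
a'b' \otimes (a_1, b_1)
\]
to row $z$ and column $z$ of the secondary matrix of $\widetilde{M}$.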
If we can cancel $x$ and $y$ from $M$ as above, we will call $(x,y)$ a canceling pair in $M$. If there are multiple disjoint canceling pairs in $M$, we can cancel them all at once, potentially picking up more general terms than the ones above; we can even do this when $M$ has an infinite set of disjoint canceling pairs, assuming all resulting matrix entries and sums are finite.
\subsection{Duals and AD bimodules}\label{sec:DualsAD}
One can modify Definition~\ref{def:DABimod} by interchanging left and right everywhere; one obtains the definition of an AD bimodule. We can specify AD bimodules $N$ over $(A',A)$ using matrix notation, just as for DA bimodules; the difference is that the entries of a secondary matrix specifying an AD bimodule are sums of terms of the form $(a'_1, \ldots, a'_{i-1}) \otimes a$. Right boundedness, finite generation, and strict unitality for AD bimodules are defined like left boundedness, finite generation, and strict unitality for DA bimodules. Morphisms, including isomorphisms, homotopy equivalences, and quasi-isomorphisms, are also defined similarly, and every quasi-isomorphism of AD bimodules is a homotopy equivalence.
The box tensor product of AD bimodules is also defined analogously to the DA bimodule case, and is associative up to canonical isomorphism. There are identity AD bimodules as in the DA case, and the box tensor product of an AD bimodule $N$ with an identity AD bimodule on either side is canonically isomorphic to $N$.
If $M$ is a finitely generated DA bimodule over $(A',A)$, the opposite AD bimodule to $M$ (over $(A,A')$) is defined in \cite[Definition 2.2.53]{LOTBimodules}. Similarly, a finitely generated AD bimodule $N$ over $(A',A)$ has an opposite DA bimodule over $(A,A')$. The opposite of $M$ corresponds to the bimodule $\Hom_{A'}(A' \otimes_{I'} M, A')$ with an induced left $A_{\infty}$ action of $A$, so we will write the opposite AD bimodule to $M$ as ${^{\vee}}M$ and call it the left dual of $M$. Similarly, the opposite of $N$ corresponds to $\Hom_{A}(N \otimes_{I} A, A)$; we will write the opposite of $N$ as $N^{\vee}$ and call it the right dual of $N$.
Duality preserves finite generation and strict unitality; it sends left bounded DA bimodules to right bounded AD bimodules and vice-versa. It reverses both $q$-degree and homological degree; we have ${^{\vee}}(q^i M [j]) = q^{-i} ({^{\vee}}M) [-j]$. The opposite of an identity DA bimodule is an identity AD bimodule and vice-versa.
Duality is compatible with box tensor products after reversing the order: we have natural identifications ${^{\vee}}(M_1 \boxtimes M_2) \cong {^{\vee}}M_2 \boxtimes {^{\vee}}M_1$ and $(N_1 \boxtimes N_2)^{\vee} \cong N_2^{\vee} \boxtimes N_1^{\vee}$.
A morphism of finitely generated DA bimodules $f\co M \to M'$ gives a dual morphism ${^{\vee}}f\co {^{\vee}}M' \to {^{\vee}}M$, such that ${^{\vee}}(g \circ f) = {^{\vee}}f \circ {^{\vee}}g$ and ${^{\vee}}(d(f)) = d({^{\vee}}f)$. Similarly, we can dualize morphisms of finitely generated AD bimodules to get morphisms of DA bimodules.
\begin{remark}
Since duality reverses bidegrees on DA and AD bimodules while simultaneously reversing the direction of morphisms, the dual ${^{\vee}}f$ of a DA bimodule morphism $f$ has the same bidegree as $f$, and similarly for duals of AD bimodule morphisms.
\end{remark}
Concretely, duality acts on bimodules presented in matrix notation as follows.
\begin{procedure}
For a finitely generated DA bimodule $M$ given by a primary matrix and secondary matrix, form a primary matrix for ${^{\vee}}M$ by taking the transpose of the primary matrix for $M$. Form a secondary matrix for ${^{\vee}}M$ by
\begin{itemize}
\item taking the transpose of the secondary matrix for $M$;
\item changing all entries $a' \otimes (a_1, \ldots, a_{i-1})$ in this secondary matrix to $(a_1, \ldots, a_{i-1}) \otimes a'$.
\end{itemize}
\end{procedure}
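For a hypothetical example (with entries chosen only for illustration), if the secondary matrix of $M$ is
\[
\begin{pmatrix} 0 & a' \otimes (a_1, a_2) \\ 0 & 0 \end{pmatrix},
\]
then the secondary matrix of ${^{\vee}}M$ is the transpose with reordered entries,
\[
\begin{pmatrix} 0 & 0 \\ (a_1, a_2) \otimes a' & 0 \end{pmatrix}.
\]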
\subsection{Split Grothendieck groups}\label{sec:SplitGG}
In this section we will require our dg categories $A$ to have finitely many objects for simplicity. While the Grothendieck groups we consider below can be defined over $\Z[q,q^{-1}]$, we will pass to $\C_q \coloneqq \C(q)$ since we will be relating these Grothendieck groups to representations of $U_q(\gl(1|1))$.
\begin{definition}
Let $A$ be a dg category with finitely many objects; recall that we assume $A$ is equipped with a functor $\epsilon\co A \to I$ restricting to the identity on $I \subset A$. Let $G_0(A)$ be the split Grothendieck group of the closure of $A$ under direct sums and degree shifts (both quantum and homological), tensored over $\Z[q,q^{-1}]$ with $\C_q$. In other words, $G_0(A)$ is the $\C_q$-vector space generated by isomorphism classes of objects $[S]$ of this closure of $A$ under the relations
\[
[S_1 \oplus S_2] = [S_1] + [S_2], \,\,\, [qS] = q[S], \,\,\, [S[1]] = -[S].
\]
It follows from the existence of $\epsilon$ that $G_0(A)$ has a basis given by the objects of $A$ ($\epsilon$ ensures that no two distinct objects of $A$ can be isomorphic).
\end{definition}
Our convention will be to use AD bimodules to induce maps on split Grothendieck groups as follows. Suppose that $A$ and $A'$ each have finitely many objects.
\begin{definition}
Let $N$ be a finitely generated AD bimodule over $(A',A)$. Define a $\C_q$-linear map $[N]\co G_0(A) \to G_0(A')$ by declaring that, for basis vectors $[S]$ of $G_0(A)$ and $[S']$ of $G_0(A')$ coming from objects $S$ of $A$ and $S'$ of $A'$, the matrix entry of $[N]$ in row $[S']$ and column $[S]$ is the $q$-graded Euler characteristic of the finite-dimensional bigraded $k$-module $N(S',S)$.
\end{definition}
Heuristically, with this convention one thinks of applying the functor $N \boxtimes -$ to simple $A$-modules and expanding the result in terms of simple $A'$-modules, but we will stick with the above elementary definition in this paper.
We can also decategorify DA bimodules; let $K_0(A)$ be the $\C_q$-vector space defined in the same way as $G_0(A)$, except that we call the generators $[P]$ instead of $[S]$.
\begin{definition}
Let $M$ be a finitely generated DA bimodule over $(A',A)$. Define a $\C_q$-linear map $[M]\co K_0(A) \to K_0(A')$ by declaring that, for basis vectors $[P]$ of $K_0(A)$ and $[P']$ of $K_0(A')$ coming from objects $P$ of $A$ and $P'$ of $A'$, the matrix entry of $[M]$ in row $[P']$ and column $[P]$ is the $q$-graded Euler characteristic of the finite-dimensional bigraded $k$-module $M(P',P)$.
\end{definition}
With $K_0$ one thinks of applying the functor $M \boxtimes -$ to indecomposable projective $A$-modules and expanding the result in terms of indecomposable projective $A'$-modules.
In terms of matrix notation, we have the following decategorification procedure.
\begin{procedure}
To decategorify a finitely generated AD or DA bimodule given in matrix notation, one discards the secondary matrix and replaces sets in the entries of the primary matrix with sums of $(-1)^j q^i$ for each element with $\deg^q = i$ and $\deg^h = j$.
If $M$ is a finitely generated DA bimodule given in matrix notation, then to decategorify ${^{\vee}}M$, one discards the secondary matrix, replaces sets in the entries of the primary matrix of $M$ with sums of $(-1)^j q^{-i}$ for each element with $\deg^q = i$ and $\deg^h = j$, and takes the transpose.
\end{procedure}
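For instance, if an entry of the primary matrix of a finitely generated AD bimodule is a hypothetical set $\{x_1, x_2\}$ with $\deg^q(x_1) = 1$, $\deg^h(x_1) = 0$ and $\deg^q(x_2) = 3$, $\deg^h(x_2) = 1$, then the corresponding entry of the matrix for the decategorified map is $q - q^3$.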
\section{2-representations and morphisms}\label{sec:2Reps}
\subsection{Preliminaries}
Let $\U$ denote the differential monoidal category of \cite[Section 4.1.1]{ManionRouquier}. We first define a variant $\U^-$ of $\U$. Write $\varepsilon_1 = (1,0)$ and $\varepsilon_2 = (0,1)$ for the standard basis vectors of $\Z^2$ and let $\alpha = \varepsilon_1 - \varepsilon_2 = (1,-1) \in \Z^2$.
\begin{definition}
Let $\U^-$ be the strict dg 2-category defined as follows:
\begin{itemize}
\item The objects are lattice points $\omega = \omega_1 \varepsilon_1 + \omega_2 \varepsilon_2 \in \Z^2$.
\item The 1-morphisms are generated under composition by $f_{\omega}\co \omega \to \omega - \alpha$ for all $\omega \in \Z^2$.
\item The 2-morphisms are generated under horizontal and vertical composition by endomorphisms $\tau_{\omega}$ of $f_{\omega - \alpha} f_{\omega}$ for $\omega \in \Z^2$, subject to
\[
\tau_{\omega}^2 = 0, \,\,\, f_{\omega - 2\alpha} \tau_{\omega} \circ \tau_{\omega - \alpha} f_{\omega} \circ f_{\omega - 2\alpha} \tau_{\omega} = \tau_{\omega - \alpha} f_{\omega} \circ f_{\omega - 2\alpha} \tau_{\omega} \circ \tau_{\omega - \alpha} f_{\omega}, \,\,\, d(\tau_{\omega}) = \id_{f_{\omega - \alpha} f_{\omega}}.
\]
We set $\deg^q(\tau_{\omega}) = 0$ and $\deg^h(\tau_{\omega}) = -1$.
\end{itemize}
\end{definition}
We let $H_n$ denote the dg 2-endomorphism algebra of the 1-morphism $f_{\omega-(n-1)\alpha} \circ \cdots \circ f_{\omega}$ of $\U^-$ for any $\omega \in \Z^2$ (the dg algebra $H_n$ does not depend on the choice of $\omega$).
A bimodule 2-representation of $\U^-$ is a functor from $\U^-$ into (dg categories, dg bimodules, bimodule maps); it amounts to dg categories $A_{\omega}$, dg bimodules ${_{\omega - \alpha}}F_{\omega}$, and bimodule endomorphisms $\tau_{\omega}$ of ${_{\omega - 2\alpha}} F_{\omega - \alpha} F_{\omega}$ satisfying the above relations. By forgetting the bigrading and weight decomposition, a bimodule 2-representation of $\U^-$ gives a bimodule 2-representation of $\U$.
\begin{definition}
We say a bimodule 2-representation of $\U^-$ is right finite if each bimodule ${_{\omega - \alpha}}F_{\omega}$ is given by $N_{\omega} \otimes_{I_{\omega}} A_{\omega}$ for some AD bimodule $N_{\omega}$ over $(A_{\omega - \alpha}, A_{\omega})$ with vanishing higher action terms $\delta^1_i$ for $i > 2$; in this case, we write ${_{\omega}}F^{\vee}_{\omega - \alpha}$ for the right dual of ${_{\omega - \alpha}}F_{\omega}$ (equal to $A_{\omega} \otimes_{I_{\omega}} N_{\omega}^{\vee}$ where $N_{\omega}^{\vee}$ is defined as in Section~\ref{sec:DualsAD}).
\end{definition}
When working with right finite 2-representations, we will often take the AD bimodules representing the dg bimodules ${_{\omega - \alpha}} F_{\omega}$ to be part of the data.
\subsection{Bigraded tensor product}\label{sec:BigradedTensor}
If $(A_1, F_1, \tau_1)$ and $(A_2, F_2, \tau_2)$ are right finite bimodule 2-representations of $\U^-$, we can take the tensor product $\ootimes$ of the underlying bimodule 2-representations of $\U$ to get another bimodule 2-representation of $\U$. We will call this tensor product 2-representation $\left( A_1 \ootimes A_2, X, \tau \right)$. We now define a bimodule 2-representation $\left( (A_1 \ootimes A_2)_\omega, \, {_{\omega - \alpha}}F_{\omega}, \, \tau_{\omega} \right)$ of $\U^-$ lifting $\left( A_1 \ootimes A_2, X, \tau \right)$.
\begin{definition}
We first define a dg category $(A_1 \ootimes A_2)_{\omega}$ for each $\omega \in \Z^2$. Recall that the objects of $A_1 \ootimes A_2$ are the same as the objects of $A_1 \otimes A_2$, namely pairs $S = (S_1, S_2)$ where $S_i$ is an object of $A_i$.
\begin{itemize}
\item For $\omega \in \Z^2$, we let $(A_1 \ootimes A_2)_{\omega}$ be the full differential subcategory of $A_1 \ootimes A_2$ on objects $(S_1,S_2)$ such that, for some $\omega', \omega'' \in \Z^2$, $S_1$ is an object of $(A_1)_{\omega'}$, $S_2$ is an object of $(A_2)_{\omega''}$, and we have $\omega = \omega' + \omega''$. Note that $A_1 \ootimes A_2 = \oplus_{\omega \in \Z^2} (A_1 \ootimes A_2)_{\omega}$ as differential categories.
\item For objects $S = (S_1,S_2)$ and $T = (T_1,T_2)$ of $(A_1 \ootimes A_2)_{\omega}$ where $S_1$ is an object of $(A_1)_{\omega'}$, $S_2$ is an object of $(A_2)_{\omega''}$, and $\omega = \omega' + \omega''$, we lift $\Hom_{A_1 \ootimes A_2}(S,T)$ to a bigraded chain complex by setting
\[
\Hom_{(A_1 \ootimes A_2)_{\omega}}(S,T) = \bigoplus_{i \geq 0} \left({_{\omega'+i\alpha}}(\widetilde{F}_1^{\vee})_{\omega'}^i(T_1, S_1)\right) \otimes_{H_i} \Big({_{\omega''-i\alpha}}(F_2)^i_{\omega''} (T_2, S_2)\Big)
\]
where
\[
{_{\omega' + \alpha}}(\widetilde{F}_1^{\vee})_{\omega'} \coloneqq q^{\omega'_1 + \omega'_2} {_{\omega' + \alpha}}(F_1^{\vee})_{\omega'} [\omega'_2 - 1].
\]
\end{itemize}
By construction, composition in $(A_1 \ootimes A_2)_{\omega}$ has degree zero, so $(A_1 \ootimes A_2)_{\omega}$ is a dg category.
\end{definition}
Note that for an object $S$ of $(A_1 \ootimes A_2)_{\omega}$, the left differential $A_1 \ootimes A_2$-module $X(-,S)$ vanishes on $T$ unless $T$ is an object of $(A_1 \ootimes A_2)_{\omega - \alpha}$. Thus, $X = \oplus_{\omega \in \Z^2} ({_{\omega - \alpha}}F_{\omega})$ as differential bimodules over $A_1 \ootimes A_2$, where for $\omega \in \Z^2$, ${_{\omega - \alpha}}F_{\omega}$ is the differential bimodule over $((A_1 \ootimes A_2)_{\omega - \alpha}, (A_1 \ootimes A_2)_{\omega})$ defined by ${_{\omega - \alpha}}F_{\omega}(T,S) = X(T,S)$ for objects $S$ of $(A_1 \ootimes A_2)_{\omega}$ and $T$ of $(A_1 \ootimes A_2)_{\omega - \alpha}$.
\begin{definition}
For $\omega \in \Z^2$, we give the differential bimodule ${_{\omega - \alpha}}F_{\omega}$ a bigrading as follows. For objects $S = (S_1, S_2)$ of $(A_1 \ootimes A_2)_{\omega}$ and $T = (T_1, T_2)$ of $(A_1 \ootimes A_2)_{\omega - \alpha}$ such that $T_1$ is an object of $(A_1)_{\omega'}$, $T_2$ is an object of $(A_2)_{\omega'' - \alpha}$, and $\omega = \omega' + \omega''$, the map
\[
u(T,S)\co \Big((A_1 \otimes X_2) \otimes_{A_1 \otimes A_2} (A_1 \ootimes A_2)\Big)(T,S) \to \Big((X_1 \otimes A_2) \otimes_{A_1 \otimes A_2} (A_1 \ootimes A_2)\Big)(T,S)
\]
(where $u$ is adjoint to the multiplication map as in \cite[Section 5.3.3]{ManionRouquier}; see also \cite[Section 5.3.4]{ManionRouquier}) has degree zero when viewed as a map
\begin{align*}
u(T,S)\co & \,\,\, q^{\omega'_1 + \omega'_2} \Big( ((A_1)_{\omega'} \otimes ({_{\omega'' - \alpha}}(F_2)_{\omega''})) \otimes_{(A_1)_{\omega'} \otimes (A_2)_{\omega''}} \left(A_1 \ootimes A_2\right)_{\omega}\Big) (T,S) [\omega'_2 - 1] \\
&\to \Big(({_{\omega'}}(F_1)_{\omega'+\alpha} \otimes (A_2)_{\omega'' - \alpha}) \otimes_{(A_1)_{\omega' + \alpha} \otimes (A_2)_{\omega'' - \alpha}} \left(A_1 \ootimes A_2\right)_{\omega} \Big)(T,S).
\end{align*}
We let ${_{\omega - \alpha}}F_{\omega}(T,S)$ be the mapping cone of the above degree-zero map $u(T,S)$; we get a dg bimodule ${_{\omega - \alpha}}F_{\omega}$ over $((A_1 \ootimes A_2)_{\omega - \alpha}, (A_1 \ootimes A_2)_{\omega})$ such that, as a non-differential bimodule, ${_{\omega - \alpha}}F_{\omega}(T,S)$ is a direct sum of
\[
q^{\omega'_1 + \omega'_2} \Big( ((A_1)_{\omega'} \otimes ({_{\omega'' - \alpha}}(F_2)_{\omega''})) \otimes_{(A_1)_{\omega'} \otimes (A_2)_{\omega''}} \left(A_1 \ootimes A_2\right)_{\omega}\Big) (T,S) [\omega'_2]
\]
and
\[
\Big(({_{\omega'}}(F_1)_{\omega'+\alpha} \otimes (A_2)_{\omega'' - \alpha}) \otimes_{(A_1)_{\omega' + \alpha} \otimes (A_2)_{\omega'' - \alpha}} \left(A_1 \ootimes A_2\right)_{\omega} \Big)(T,S).
\]
\end{definition}
\begin{proposition}
The map defining the left action of $(A_1 \ootimes A_2)_{\omega - \alpha}$ on ${_{\omega - \alpha}}F_{\omega}$ (i.e. the map $w$ from \cite[Section 5.3.3]{ManionRouquier}; see also the $2 \times 2$ matrices in \cite[Remark 5.3.5]{ManionRouquier}) has degree zero.
\end{proposition}
\begin{proof}
The maps
\begin{align*}
&q^{2 \omega'_1 + 2\omega'_2} \Big( \Big( {_{\omega'+\alpha}}(F_1^{\vee})_{\omega'} \otimes (A_2)_{\omega''-2\alpha} \Big) \otimes_{(A_1)_{\omega'} \otimes (A_2)_{\omega''-2\alpha}} \Big( (A_1)_{\omega'} \otimes {_{\omega''-2\alpha}}(F_2)_{\omega''-\alpha} \Big) \\
& \quad \quad \otimes_{(A_1)_{\omega'} \otimes (A_2)_{\omega''-\alpha}} \Big( (A_1)_{\omega'} \otimes {_{\omega''-\alpha}}(F_2)_{\omega''} \Big) \\
& \quad \quad \otimes_{(A_1)_{\omega'} \otimes (A_2)_{\omega''}} (A_1 \ootimes A_2)_{\omega} \Big) (T,S) [2 \omega'_2 - 1] \\
&\xrightarrow{\id \otimes \tau_2 \otimes \id} q^{2 \omega'_1 + 2\omega'_2} \Big( \Big( {_{\omega'+\alpha}}(F_1^{\vee})_{\omega'} \otimes (A_2)_{\omega''-2\alpha} \Big) \otimes_{(A_1)_{\omega'} \otimes (A_2)_{\omega''-2\alpha}} \Big( (A_1)_{\omega'} \otimes {_{\omega''-2\alpha}}(F_2)_{\omega''-\alpha} \Big) \\
& \quad \quad \otimes_{(A_1)_{\omega'} \otimes (A_2)_{\omega''-\alpha}} \Big( (A_1)_{\omega'} \otimes {_{\omega''-\alpha}}(F_2)_{\omega''} \Big) \\
& \quad \quad \otimes_{(A_1)_{\omega'} \otimes (A_2)_{\omega''}} (A_1 \ootimes A_2)_{\omega} \Big) (T,S) [2 \omega'_2 - 2] \\
&\xrightarrow{\lambda \otimes \id} q^{2\omega'_1 + 2\omega'_2} \Big( \Big( (A_1)_{\omega'+\alpha} \otimes {_{\omega''-2\alpha}}(F_2)_{\omega''-\alpha} \Big) \otimes_{(A_1)_{\omega'+\alpha} \otimes (A_2)_{\omega''-\alpha}} \Big( {_{\omega' + \alpha}}(F_1^{\vee})_{\omega'} \otimes (A_2)_{\omega''-\alpha} \Big) \\
& \quad \quad \otimes_{(A_1)_{\omega'} \otimes (A_2)_{\omega''-\alpha}} \Big( (A_1)_{\omega'} \otimes {_{\omega''-\alpha}}(F_2)_{\omega''} \Big) \\
& \quad \quad \otimes_{(A_1)_{\omega'} \otimes (A_2)_{\omega''}} (A_1 \ootimes A_2)_{\omega} \Big) (T,S) [2 \omega'_2 - 2] \\
&\xrightarrow{\id \otimes \mult} q^{\omega'_1 + \omega'_2} \Big( \Big( (A_1)_{\omega'+\alpha} \otimes {_{\omega''-2\alpha}}(F_2)_{\omega''-\alpha} \Big) \otimes_{(A_1)_{\omega'+\alpha} \otimes (A_2)_{\omega''-\alpha}} (A_1 \ootimes A_2)_{\omega} \Big) (T,S) [\omega'_2 - 1] \\
\end{align*}
each have degree zero, where
\begin{align*}
\lambda\co &\Big( {_{\omega'+\alpha}}(F_1^{\vee})_{\omega'} \otimes (A_2)_{\omega''-2\alpha} \Big) \otimes_{(A_1)_{\omega'} \otimes (A_2)_{\omega''-2\alpha}} \Big( (A_1)_{\omega'} \otimes {_{\omega''-2\alpha}}(F_2)_{\omega''-\alpha} \Big) \\
& \to \Big( (A_1)_{\omega'+\alpha} \otimes {_{\omega''-2\alpha}}(F_2)_{\omega''-\alpha} \Big) \otimes_{(A_1)_{\omega'+\alpha} \otimes (A_2)_{\omega''-\alpha}} \Big( {_{\omega' + \alpha}}(F_1^{\vee})_{\omega'} \otimes (A_2)_{\omega''-\alpha} \Big)
\end{align*}
is defined in \cite[equation (5.3.1)]{ManionRouquier} and is equal to the usual swap isomorphism by the definitions in \cite[Section 5.3.4]{ManionRouquier}. Thus, the composite of these maps (the top-left entry of the $2 \times 2$ matrix defining the left action of $(A_1 \ootimes A_2)_{\omega}$ on ${_{\omega-\alpha}}F_{\omega}$) has degree zero. Similarly, the maps
\begin{align*}
&q^{\omega'_1 + \omega'_2} \Big( \Big( {_{\omega'}}(F_1^{\vee})_{\omega' - \alpha} \otimes (A_2)_{\omega''-\alpha} \Big) \otimes_{(A_1)_{\omega' - \alpha} \otimes (A_2)_{\omega''-\alpha}} \Big( (A_1)_{\omega' - \alpha} \otimes {_{\omega''-\alpha}}(F_2)_{\omega''} \Big) \\
& \quad \quad \otimes_{(A_1)_{\omega' - \alpha} \otimes (A_2)_{\omega''}} \Big( {_{\omega' - \alpha}}(F_1)_{\omega'} \otimes (A_2)_{\omega''} \Big) \\
& \quad \quad \otimes_{(A_1)_{\omega'} \otimes (A_2)_{\omega''}} (A_1 \ootimes A_2)_{\omega} \Big) (T,S) [\omega'_2] \\
&\xrightarrow{\id \otimes \sigma} q^{\omega'_1 + \omega'_2} \Big( \Big( {_{\omega'}}(F_1^{\vee})_{\omega' - \alpha} \otimes (A_2)_{\omega''-\alpha} \Big) \otimes_{(A_1)_{\omega' - \alpha} \otimes (A_2)_{\omega''-\alpha}} \Big( {_{\omega' - \alpha}}(F_1)_{\omega'} \otimes (A_2)_{\omega''-\alpha} \Big) \\
& \quad \quad \otimes_{(A_1)_{\omega'} \otimes (A_2)_{\omega''-\alpha}} \Big( (A_1)_{\omega'} \otimes {_{\omega''-\alpha}}(F_2)_{\omega''} \Big) \\
& \quad \quad \otimes_{(A_1)_{\omega'} \otimes (A_2)_{\omega''}} (A_1 \ootimes A_2)_{\omega} \Big) (T,S) [\omega'_2] \\
&\xrightarrow{\varepsilon \otimes \id} q^{\omega'_1 + \omega'_2} \Big( \Big( (A_1)_{\omega'} \otimes {_{\omega''-\alpha}}(F_2)_{\omega''} \Big) \otimes_{(A_1)_{\omega'} \otimes (A_2)_{\omega''}} (A_1 \ootimes A_2)_{\omega} \Big) (T,S) [\omega'_2] \\
\end{align*}
each have degree zero where $\sigma$ is the swap isomorphism as in \cite[Section 5.3.4]{ManionRouquier} and $\varepsilon$ is the counit of the adjunction $F_1^{\vee} \dashv F_1$. Finally, the maps
\begin{align*}
&q^{\omega'_1 + \omega'_2} \Big( \Big( {_{\omega'}}(F_1^{\vee})_{\omega' - \alpha} \otimes (A_2)_{\omega''-\alpha} \Big) \otimes_{(A_1)_{\omega' - \alpha} \otimes (A_2)_{\omega''-\alpha}} \Big( (A_1)_{\omega' - \alpha} \otimes {_{\omega''-\alpha}}(F_2)_{\omega''} \Big) \\
& \quad \quad \otimes_{(A_1)_{\omega' - \alpha} \otimes (A_2)_{\omega''}} \Big( {_{\omega' - \alpha}}(F_1)_{\omega'} \otimes (A_2)_{\omega''} \Big) \\
& \quad \quad \otimes_{(A_1)_{\omega'} \otimes (A_2)_{\omega''}} (A_1 \ootimes A_2)_{\omega} \Big) (T,S) [\omega'_2] \\
&\xrightarrow{\id \otimes \sigma} q^{\omega'_1 + \omega'_2} \Big( \Big( {_{\omega'}}(F_1^{\vee})_{\omega' - \alpha} \otimes (A_2)_{\omega''-\alpha} \Big) \otimes_{(A_1)_{\omega' - \alpha} \otimes (A_2)_{\omega''-\alpha}} \Big( {_{\omega' - \alpha}}(F_1)_{\omega'} \otimes (A_2)_{\omega''-\alpha} \Big) \\
& \quad \quad \otimes_{(A_1)_{\omega'} \otimes (A_2)_{\omega''-\alpha}} \Big( (A_1)_{\omega'} \otimes {_{\omega''-\alpha}}(F_2)_{\omega''} \Big) \\
& \quad \quad \otimes_{(A_1)_{\omega'} \otimes (A_2)_{\omega''}} (A_1 \ootimes A_2)_{\omega} \Big) (T,S) [\omega'_2] \\
&\xrightarrow{\rho \otimes \id} q^{\omega'_1 + \omega'_2} \Big( \Big( {_{\omega'}}(F_1)_{\omega' + \alpha} \otimes (A_2)_{\omega''-\alpha} \Big) \otimes_{(A_1)_{\omega' + \alpha} \otimes (A_2)_{\omega''-\alpha}} \Big( {_{\omega' + \alpha}}(F_1^{\vee})_{\omega'} \otimes (A_2)_{\omega''-\alpha} \Big) \\
& \quad \quad \otimes_{(A_1)_{\omega'} \otimes (A_2)_{\omega''-\alpha}} \Big( (A_1)_{\omega'} \otimes {_{\omega''-\alpha}}(F_2)_{\omega''} \Big) \\
& \quad \quad \otimes_{(A_1)_{\omega'} \otimes (A_2)_{\omega''}} (A_1 \ootimes A_2)_{\omega} \Big) (T,S) [\omega'_2 - 1] \\
&\xrightarrow{\id \otimes \mult} \Big( \Big( {_{\omega'}}(F_1)_{\omega' + \alpha} \otimes (A_2)_{\omega'' - \alpha} \Big) \otimes_{(A_1)_{\omega' + \alpha} \otimes (A_2)_{\omega'' - \alpha}} (A_1 \ootimes A_2)_{\omega} \Big) (T,S)
\end{align*}
each have degree zero, where $\rho$ is defined in \cite[equation (4.4.2)]{ManionRouquier} (note that $\rho$ has the same degree as $\tau$).
\end{proof}
It follows that ${_{\omega-\alpha}}F_{\omega}$ is a dg bimodule over $((A_1 \ootimes A_2)_{\omega-\alpha}, (A_1 \ootimes A_2)_{\omega})$, lifting the underlying differential bimodule ${_{\omega-\alpha}}F_{\omega}$ over the same pair of categories.
\begin{proposition}
The map defining the endomorphism $\tau$ of $X^2$ (see \cite[equation (5.3.4)]{ManionRouquier}) restricts to an endomorphism $\tau_{\omega}$ of the dg bimodule ${_{\omega - 2\alpha}}F^2_{\omega}$ with $\deg^q(\tau_{\omega}) = 0$ and $\deg^h(\tau_{\omega}) = -1$.
\end{proposition}
\begin{proof}
For the top-left and bottom-right entries of the $4 \times 4$ matrix defining $\tau$, this is clear. For the entry in row 2, column 3, the map
\begin{align*}
&q^{\omega'_1 + \omega'_2} \Big( \Big( {_{\omega'- \alpha}}(F_1)_{\omega'} \otimes (A_2)_{\omega'' - \alpha} \Big) \otimes_{(A_1)_{\omega'} \otimes (A_2)_{\omega''-\alpha}} \Big( (A_1)_{\omega'} \otimes {_{\omega''- \alpha}}(F_2)_{\omega''} \Big) \\
& \quad \quad \otimes_{(A_1)_{\omega'} \otimes (A_2)_{\omega''}} (A_1 \ootimes A_2)_{\omega} \Big) (T,S)[\omega'_2] \\
&\xrightarrow{\sigma^{-1}} \Big( \Big( (A_1)_{\omega'-\alpha} \otimes {_{\omega''-\alpha}}(F_2)_{\omega''} \Big) \otimes_{(A_1)_{\omega'-\alpha} \otimes (A_2)_{\omega''}} \Big( {_{\omega'-\alpha}}(F_1)_{\omega'} \otimes (A_2)_{\omega''} \Big) \\
& \quad \quad \otimes_{(A_1)_{\omega'} \otimes (A_2)_{\omega''}} (A_1 \ootimes A_2)_{\omega} \Big) (T,S)[\omega'_2]
\end{align*}
has bidegree zero, so
\begin{align*}
&q^{\omega'_1 + \omega'_2} \Big( \Big( {_{\omega'- \alpha}}(F_1)_{\omega'} \otimes (A_2)_{\omega'' - \alpha} \Big) \otimes_{(A_1)_{\omega'} \otimes (A_2)_{\omega''-\alpha}} \Big( (A_1)_{\omega'} \otimes {_{\omega''- \alpha}}(F_2)_{\omega''} \Big) \\
& \quad \quad \otimes_{(A_1)_{\omega'} \otimes (A_2)_{\omega''}} (A_1 \ootimes A_2)_{\omega} \Big) (T,S)[\omega'_2] \\
&\xrightarrow{\sigma^{-1}} \Big( \Big( (A_1)_{\omega'-\alpha} \otimes {_{\omega''-\alpha}}(F_2)_{\omega''} \Big) \otimes_{(A_1)_{\omega'-\alpha} \otimes (A_2)_{\omega''}} \Big( {_{\omega'-\alpha}}(F_1)_{\omega'} \otimes (A_2)_{\omega''} \Big) \\
& \quad \quad \otimes_{(A_1)_{\omega'} \otimes (A_2)_{\omega''}} (A_1 \ootimes A_2)_{\omega} \Big) (T,S)[\omega'_2 + 1]
\end{align*}
has $\deg^q = 0$ and $\deg^h = -1$.
\end{proof}
\subsection{Decategorification}\label{sec:BigradedTensorDecat}
Below we will consider right finite bimodule 2-representations $(A,F,\tau)$ of $\U^-$ where the dg category $A$ has only finitely many objects and where each bimodule ${_{\omega - \alpha}}F_{\omega}$ comes with the data of an AD bimodule $N_{\omega}$ (without higher terms $\delta^1_i$ for $i > 2$) satisfying ${_{\omega - \alpha}}F_{\omega} = N_{\omega} \otimes_{I_{\omega}} A_{\omega}$.
From Section~\ref{sec:SplitGG} we have a map $[{_{\omega - \alpha}}F_{\omega}]\co G_0(A_{\omega}) \to G_0(A_{\omega - \alpha})$; summing over all $\omega$, we get an endomorphism $[F]$ of $G_0(A)$. If $(A,F,\tau)$ is a right finite bimodule 2-representation of $\U^-$, then $(G_0(A), [F])$ is a representation of $U_q(\gl(1|1)^-)$ (see Appendix~\ref{sec:Uqgl11Review}) equipped with a decomposition into $\gl(1|1)$ weight spaces for $\omega \in \Z^2$. The weight space decomposition determines the structure of a super or $\Z_2$-graded $\C_q$-vector space on $G_0(A)$; for an object $S$ of $A_{\omega}$ for $\omega \in \Z^2$, we declare $[S]$ to be even if $\omega_2$ is even and odd if $\omega_2$ is odd. The weight space decomposition also determines an action on $G_0(A)$ of the Hopf subalgebra of $U_q(\gl(1|1))$ generated by $F$, $q^{H_1}$, and $q^{H_2}$, with $q^{H_i} [S] \coloneqq q^{\omega_i} [S]$ if $S$ is an object of $A_{\omega}$; as in Appendix~\ref{sec:Uqgl11Review}, we can equivalently view this data as an action of $\Udot_q(\gl(1|1)^-)$ on $G_0(A)$.
\begin{proposition}
If $(A_1, F_1, \tau_1)$ and $(A_2, F_2, \tau_2)$ are right finite bimodule 2-representations of $\U^-$, we have a natural identification
\[
G_0(A_1 \ootimes A_2) \cong G_0(A_1) \otimes G_0(A_2)
\]
as modules over $\Udot_q(\gl(1|1)^-)$.
\end{proposition}
\begin{proof}
It is clear that $G_0(A_1 \ootimes A_2)$ and $G_0(A_1) \otimes G_0(A_2)$ agree as super $\C_q$-vector spaces and that the actions of $q^{H_1}$ and $q^{H_2}$ agree; we will show that $[F] = [F_1] \otimes 1 + q^{H_1 + H_2} \otimes [F_2]$ as endomorphisms of this super vector space, where $[F]$ is the map on $G_0$ induced by the dg bimodule over $A_1 \ootimes A_2$ defined by the tensor product construction.
First, write $F_1 = F'_1 \otimes_{I_1} A_1$ and $F_2 = F'_2 \otimes_{I_2} A_2$. We have
\begin{align*}
& (F_1 \otimes A_2) \otimes_{A_1 \otimes A_2} (A_1 \ootimes A_2) \\
&= (F'_1 \otimes I_2) \otimes_{I_1 \otimes I_2} (A_1 \otimes A_2) \otimes_{A_1 \otimes A_2} (A_1 \ootimes A_2) \\
&= (F'_1 \otimes I_2) \otimes_{I_1 \otimes I_2} (A_1 \ootimes A_2)
\end{align*}
and
\begin{align*}
& (A_1 \otimes F_2) \otimes_{A_1 \otimes A_2} (A_1 \ootimes A_2) \\
&= (I_1 \otimes F_2) \otimes_{I_1 \otimes I_2} (A_1 \otimes A_2) \otimes_{A_1 \otimes A_2} (A_1 \ootimes A_2) \\
&= (I_1 \otimes F_2) \otimes_{I_1 \otimes I_2} (A_1 \ootimes A_2);
\end{align*}
note that $I_1 \otimes I_2$ is the non-full subcategory of $A_1 \ootimes A_2$ containing all objects but only identity morphisms.
Recall that $F = \oplus_{\omega \in \Z^2} \left( {_{\omega - \alpha}}F_{\omega} \right)$; for $\omega \in \Z^2$ we have
\[
{_{\omega - \alpha}}F_{\omega} = {_{\omega - \alpha}}(F')_{\omega} \otimes_{I_1 \otimes I_2} (A_1 \ootimes A_2)
\]
as $(I_1 \otimes I_2, I_1 \otimes I_2)$-bimodules, where
\[
{_{\omega - \alpha}}(F')_{\omega} = \bigoplus_{\omega' + \omega'' = \omega} \bigg( \Big(q^{\omega'_1 + \omega'_2} ((I_1)_{\omega'} \otimes {_{\omega''- \alpha}}(F'_2)_{\omega''}) [\omega'_2] \Big) \oplus \Big({_{\omega' - \alpha}}(F'_1)_{\omega'} \otimes (I_2)_{\omega''}\Big) \bigg)
\]
(ignoring the differential). For objects $S = (S_1, S_2)$ and $T = (T_1, T_2)$ of $(I_1 \otimes I_2)_{\omega}$ with $S_1 \in (I_1)_{\omega'}$, $S_2 \in (I_2)_{\omega''}$, and $\omega = \omega' + \omega''$, we have
\begin{align*}
{_{\omega - \alpha}}(F')_{\omega}(T,S) &= q^{\omega'_1 + \omega'_2} \Big( (I_1)_{\omega'}(T_1, S_1) \otimes {_{\omega'' - \alpha}}(F'_2)_{\omega''} (T_2, S_2) \Big) [\omega'_2] \\
& \oplus \Big( {_{\omega' - \alpha}}(F'_1)_{\omega'}(T_1, S_1) \otimes (I_2)_{\omega''}(T_2,S_2) \Big).
\end{align*}
Taking $\chi_q$, we find that the entry of $[F]$ in row $[T]$ and column $[S]$ equals
\[
(-1)^{\omega'_2} q^{\omega'_1 + \omega'_2} \delta_{T_1 = S_1} \chi_q( {_{\omega''-\alpha}}(F'_2)_{\omega''} (T_2, S_2) ) + \delta_{T_2 = S_2} \chi_q ( {_{\omega'-\alpha}}(F'_1)_{\omega'} (T_1, S_1) ).
\]
This expression is also the coefficient on $[T]$ of $([F_1] \otimes 1 + q^{H_1 + H_2} \otimes [F_2])([S])$; indeed, $q^{H_1 + H_2}$ acts on $[S_1]$ to give $q^{\omega'_1 + \omega'_2} [S_1]$, while commuting $[F_2]$ past $[S_1]$ picks up a factor of $(-1)^{\omega'_2}$ since $[F_2]$ is odd and $[S_1]$ has parity given by that of $\omega'_2$.
\end{proof}
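\begin{example}
To illustrate the formula $[F] = [F_1] \otimes 1 + q^{H_1 + H_2} \otimes [F_2]$, suppose $G_0(A_1)$ and $G_0(A_2)$ are both identified with the vector representation $V$, with even basis element $\ket{0}$ of weight $\varepsilon_1$ and odd basis element $\ket{1}$ of weight $\varepsilon_2$, normalized (one choice of convention; cf. Appendix~\ref{sec:Uqgl11Review}) so that $F\ket{0} = \ket{1}$ and $F\ket{1} = 0$. Since $q^{H_1 + H_2}$ acts by $q$ on both $\ket{0}$ and $\ket{1}$, and commuting $F$ past the odd element $\ket{1}$ contributes the sign $(-1)^{\omega'_2}$ from the proof above, we get
\begin{align*}
F(\ket{0} \otimes \ket{0}) &= \ket{1} \otimes \ket{0} + q \, \ket{0} \otimes \ket{1}, & F(\ket{0} \otimes \ket{1}) &= \ket{1} \otimes \ket{1}, \\
F(\ket{1} \otimes \ket{0}) &= -q \, \ket{1} \otimes \ket{1}, & F(\ket{1} \otimes \ket{1}) &= 0.
\end{align*}
\end{example}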
\subsection{Morphisms}\label{sec:2RepMorphisms}
\subsubsection{Dg bimodule 1-morphisms of 2-representations}
Following \cite{ManionRouquier} and adding bigradings, dg bimodule 1-morphisms between bimodule 2-representations of $\U^-$ can be defined as follows.
\begin{definition}[Section 5.1.1 of \cite{ManionRouquier}]
Given bimodule 2-representations $(A,F,\tau)$ and $(A',F',\tau')$ of $\U^-$, a dg bimodule 1-morphism from $(A,F,\tau)$ to $(A',F',\tau')$ is, for each $\omega \in \Z^2$, a dg $(A'_{\omega},A_{\omega})$-bimodule $P_{\omega}$ equipped with a closed degree-zero isomorphism of $(A'_{\omega-\alpha},A_{\omega})$-bimodules
\[
\varphi_{\omega}\co P_{\omega - \alpha} \otimes_{A_{\omega - \alpha}} {_{\omega - \alpha}}F_{\omega} \xrightarrow{\cong} {_{\omega - \alpha}}F'_{\omega} \otimes_{A'_{\omega}} P_{\omega}
\]
such that
\begin{align*}
&(\tau'_{\omega} \otimes \id_{P_{\omega}}) \circ (\id_{{_{\omega - 2\alpha}}F'_{\omega - \alpha}} \otimes \varphi_{\omega}) \circ (\varphi_{\omega - \alpha} \otimes \id_{{_{\omega - \alpha}}F_{\omega}}) \\
&= (\id_{{_{\omega - 2\alpha}}F'_{\omega - \alpha}} \otimes \varphi_{\omega}) \circ (\varphi_{\omega - \alpha} \otimes \id_{{_{\omega - \alpha}}F_{\omega}}) \circ (\id_{P_{\omega - 2\alpha}} \otimes \tau_{\omega})
\end{align*}
as maps from ${_{\omega - 2\alpha}}(PF^2)_{\omega}$ to ${_{\omega - 2\alpha}}((F')^2 P)_{\omega}$.
\end{definition}
If $(A,F,\tau)$ and $(A',F',\tau')$ are right finite, we can equivalently define a 1-morphism from $(A,F,\tau)$ to $(A',F',\tau')$ using a closed isomorphism from ${_{\omega + \alpha}}(F')^{\vee}_{\omega} \otimes_{A'_{\omega}} P_{\omega}$ to $P_{\omega + \alpha} \otimes_{A_{\omega + \alpha}} {_{\omega + \alpha}}F^{\vee}_{\omega}$; we can also reverse the direction and define 1-morphisms in terms of
\[
\varphi_{\omega}\co P_{\omega + \alpha} \otimes_{A_{\omega + \alpha}} {_{\omega + \alpha}}F^{\vee}_{\omega} \xrightarrow{\cong} {_{\omega + \alpha}}(F')^{\vee}_{\omega} \otimes_{A'_{\omega}} P_{\omega}
\]
satisfying
\begin{align*}
&(\tau'_{\omega} \otimes \id_{P_{\omega}}) \circ (\id_{{_{\omega + 2\alpha}}(F')^{\vee}_{\omega + \alpha}} \otimes \varphi_{\omega}) \circ (\varphi_{\omega + \alpha} \otimes \id_{{_{\omega + \alpha}}F^{\vee}_{\omega}}) \\
&= (\id_{{_{\omega + 2\alpha}}(F')^{\vee}_{\omega + \alpha}} \otimes \varphi_{\omega}) \circ (\varphi_{\omega + \alpha} \otimes \id_{{_{\omega + \alpha}}F^{\vee}_{\omega}}) \circ (\id_{P_{\omega + 2\alpha}} \otimes \tau_{\omega}).
\end{align*}
\subsubsection{AD and DA bimodule 1-morphisms of 2-representations}
We relax some equalities and isomorphisms to homotopies and homotopy equivalences here, as required to accommodate the examples we will consider below. If $(A,F,\tau)$ is a bimodule 2-representation of $\U^-$, we say that $(A,F,\tau)$ is right bounded if $F = N \otimes_I A$ for some right bounded AD bimodule $N$ (not to be confused with right finiteness for $F$, which asks for finite generation of $N$).
\begin{remark}
Below we will identify right finite dg bimodules $F$ with their underlying finitely generated AD bimodules, rather than maintaining a notational distinction as above.
\end{remark}
\begin{definition}\label{def:AD1Mor}
Assume that $(A,F,\tau)$, $(A',F',\tau')$ are right finite, right bounded bimodule 2-representations of $\U^-$. An AD bimodule 1-morphism of 2-representations $(P,\varphi)$ from $(A,F,\tau)$ to $(A',F',\tau')$ consists of, for $\omega \in \Z^2$, finitely generated right bounded AD bimodules $P_{\omega}$ over $(A'_{\omega},A_{\omega})$ together with homotopy equivalences of AD bimodules
\[
\varphi_{\omega}\co P_{\omega - \alpha} \boxtimes_{A_{\omega - \alpha}} {_{\omega - \alpha}}F_{\omega} \xrightarrow{\sim} {_{\omega - \alpha}}F'_{\omega} \boxtimes_{A'_{\omega}} P_{\omega}
\]
satisfying
\begin{align*}
&(\tau'_{\omega} \boxtimes \id_{P_{\omega}}) \circ (\id_{{_{\omega - 2\alpha}}F'_{\omega - \alpha}} \boxtimes \varphi_{\omega}) \circ (\varphi_{\omega - \alpha} \boxtimes \id_{{_{\omega - \alpha}}F_{\omega}}) \\
&= (\id_{{_{\omega - 2\alpha}}F'_{\omega - \alpha}} \boxtimes \varphi_{\omega}) \circ (\varphi_{\omega - \alpha} \boxtimes \id_{{_{\omega - \alpha}}F_{\omega}}) \circ (\id_{P_{\omega - 2\alpha}} \boxtimes \tau_{\omega}).
\end{align*}
\end{definition}
We have a similar definition using DA bimodules.
\begin{definition}\label{def:DA1Mor}
Assume that $(A,F,\tau)$, $(A',F',\tau')$ are right finite, right bounded bimodule 2-representations of $\U^-$. A DA bimodule 1-morphism of 2-representations $(P,\varphi)$ from $(A,F,\tau)$ to $(A',F',\tau')$ consists of, for $\omega \in \Z^2$, finitely generated left bounded DA bimodules $P_{\omega}$ over $(A'_{\omega},A_{\omega})$ together with $A_{\infty}$ homotopy equivalences
\[
\varphi_{\omega}\co P_{\omega + \alpha} \boxtimes_{A_{\omega + \alpha}} {_{\omega + \alpha}}F^{\vee}_{\omega} \xrightarrow{\sim} {_{\omega + \alpha}}(F')^{\vee}_{\omega} \boxtimes_{A'_{\omega}} P_{\omega}
\]
satisfying
\begin{align*}
&(\tau'_{\omega} \boxtimes \id_{P_{\omega}}) \circ (\id_{{_{\omega + 2\alpha}}(F')^{\vee}_{\omega + \alpha}} \boxtimes \varphi_{\omega}) \circ (\varphi_{\omega + \alpha} \boxtimes \id_{{_{\omega + \alpha}}F^{\vee}_{\omega}}) \\
&= (\id_{{_{\omega + 2\alpha}}(F')^{\vee}_{\omega + \alpha}} \boxtimes \varphi_{\omega}) \circ (\varphi_{\omega + \alpha} \boxtimes \id_{{_{\omega + \alpha}}F^{\vee}_{\omega}}) \circ (\id_{P_{\omega + 2\alpha}} \boxtimes \tau_{\omega}).
\end{align*}
\end{definition}
\begin{remark}
We require finite generation of $P$ in Definition~\ref{def:AD1Mor} for duality $P \leftrightarrow P^{\vee}$ to be well-behaved, and similarly in Definition~\ref{def:DA1Mor}, but the above definitions make sense without these finiteness assumptions. Some DA and AD bimodules arising from Heegaard diagrams with basepoints in this paper are not finitely generated; however, when discussing 1-morphism structure, we will work primarily with homotopy-equivalent versions that are finitely generated.
\end{remark}
If $(P,\varphi)$ is a DA bimodule 1-morphism of 2-representations from $(A,F,\tau)$ to $(A',F',\tau')$, then for each $\omega \in \Z^2$ we have a finitely generated right bounded AD bimodule ${^{\vee}}P_{\omega}$. Dualizing the map $\varphi_{\omega}$, we get
\[
\varphi'_{\omega + \alpha}\co {^{\vee}}P_{\omega} \boxtimes_{A'_{\omega}} {_{\omega}}F'_{\omega + \alpha} \xrightarrow{\sim} {_{\omega}}F_{\omega + \alpha} \boxtimes_{A_{\omega + \alpha}} {^{\vee}}P_{\omega + \alpha}
\]
giving us an AD bimodule 1-morphism $({^{\vee}}P, \varphi')$ from $(A',F',\tau')$ to $(A,F,\tau)$. Similarly, we can dualize AD bimodule 1-morphisms to get DA bimodule 1-morphisms.
\begin{example}
Given $(A,F,\tau)$, we upgrade the identity AD and DA bimodules over $A$ to identity 1-morphisms of 2-representations by taking $\varphi_{\omega} = \id$ for all $\omega$.
\end{example}
Suppose that $(A,F,\tau) \xrightarrow{(P,\varphi)} (A',F',\tau') \xrightarrow{(P',\varphi')} (A'',F'',\tau'')$ are AD bimodule 1-morphisms. Define their composition to be $P' \boxtimes P$ equipped with the maps
\[
(\varphi'_{\omega} \boxtimes \id_{P_{\omega}}) \circ (\id_{P'_{\omega - \alpha}} \boxtimes \varphi_{\omega})
\]
for $\omega \in \Z^2$; one can check that the square for compatibility with $\tau$ commutes. Similarly, the composition of DA bimodule 1-morphisms $(A,F,\tau) \xrightarrow{(P,\varphi)} (A',F',\tau') \xrightarrow{(P',\varphi')} (A'',F'',\tau'')$ is defined to be $P' \boxtimes P$ equipped with the maps
\[
(\varphi'_{\omega} \boxtimes \id_{P_{\omega}}) \circ (\id_{P'_{\omega + \alpha}} \boxtimes \varphi_{\omega})
\]
for $\omega \in \Z^2$. The dual of a composition of DA bimodule 1-morphisms can be naturally identified with the composition of duals (as AD bimodule 1-morphisms) after reversing the order, and vice-versa.
\subsubsection{2-morphisms}
\begin{definition}
Assume that $(A,F,\tau)$, $(A',F',\tau')$ are right finite, right bounded bimodule 2-representations of $\U^-$; let $(P,\varphi), (P',\varphi')$ be AD bimodule 1-morphisms of 2-representations from $(A,F,\tau)$ to $(A',F',\tau')$. A 2-morphism from $P$ to $P'$ consists of, for $\omega \in \Z^2$, AD bimodule morphisms $f_{\omega}\co P_{\omega} \to P'_{\omega}$ such that
\[
(\id_{{_{\omega - \alpha}}F'_{\omega}} \boxtimes f_{\omega}) \circ \varphi_{\omega} \sim \varphi'_{\omega} \circ (f_{\omega - \alpha} \boxtimes \id_{{_{\omega - \alpha}}F_{\omega}}),
\]
where $\sim$ denotes homotopy of AD bimodule morphisms.
\end{definition}
Given $(A,F,\tau)$ and $(A',F',\tau')$, the AD bimodule 1-morphisms from $(A,F,\tau)$ to $(A',F',\tau')$ and the 2-morphisms between them form a dg category; homotopy of 2-morphisms is defined in terms of this dg category, as are homotopy equivalence and isomorphism of 1-morphisms. Compositions $\boxtimes$ of AD bimodule 1-morphisms are associative up to canonical isomorphism of 1-morphisms, and the composition of an AD bimodule 1-morphism $(P,\varphi)$ with the identity 1-morphism on either side is canonically isomorphic (as 1-morphisms) to $(P,\varphi)$.
\begin{definition}
The $k$-linear $\Z^2$-graded bicategory $2\Rep(\U^-)^{*,*}$ is defined as follows.
\begin{itemize}
\item Objects of $2\Rep(\U^-)^{*,*}$ are right finite, right bounded bimodule 2-representations $(A,F,\tau)$ of $\U^-$.
\item For two objects $(A,F,\tau)$ and $(A',F',\tau')$ of $2\Rep(\U^-)^{*,*}$, the $k$-linear $\Z^2$-graded category
\[
\Hom_{2\Rep(\U^-)^{*,*}}((A,F,\tau),(A',F',\tau'))
\]
is the bigraded homotopy category $H^{*,*}$ of the dg category with
\begin{itemize}
\item objects: finitely generated right bounded AD bimodule 1-morphisms
\item morphisms: complexes of 2-morphisms between AD bimodule 1-morphisms.
\end{itemize}
\item Identity 1-morphisms are identity AD bimodules.
\item Composition functors are defined as above on objects and send $(f,g)$ to the homotopy class $[f \boxtimes g]$; by \cite[Lemma 2.3.3]{LOTBimodules}, this homotopy class is well-defined, and one can check that $[f \boxtimes g]$ is the homotopy class of a 2-morphism.
\item Unitors and associators are the canonical isomorphisms from the paragraph above this definition.
\end{itemize}
\end{definition}
The superscript $*,*$ is just a reminder that there is a bigrading. Well-definedness of $2\Rep(\U^-)^{*,*}$ follows from the results of \cite[Section 2.3.2]{LOTBimodules}, especially \cite[Corollary 2.3.5]{LOTBimodules}.
All of the above has an analogue for DA bimodule 1-morphisms. In particular, we can define 2-morphisms of DA bimodules as follows.
\begin{definition}
Assume that $(A,F,\tau)$, $(A',F',\tau')$ are right finite, right bounded bimodule 2-representations of $\U^-$; let $(P,\varphi), (P',\varphi')$ be DA bimodule 1-morphisms of 2-representations from $(A,F,\tau)$ to $(A',F',\tau')$. A 2-morphism from $P$ to $P'$ consists of, for $\omega \in \Z^2$, DA bimodule morphisms $f_{\omega}\co P_{\omega} \to P'_{\omega}$ such that
\[
(\id_{{_{\omega + \alpha}}(F')^{\vee}_{\omega}} \boxtimes f_{\omega}) \circ \varphi_{\omega} \sim \varphi'_{\omega} \circ (f_{\omega + \alpha} \boxtimes \id_{{_{\omega + \alpha}}F^{\vee}_{\omega}}).
\]
\end{definition}
Duality gives an equivalence (reversing the direction of both 1-morphisms and 2-morphisms while preserving bidegrees of 2-morphisms) between $2\Rep(\U^-)^{*,*}$ and the analogous bicategory defined using DA 1-morphisms rather than AD 1-morphisms.
\begin{remark}\label{rem:Strong2Mor}
A limitation of the above definitions is that one cannot naturally define the structure of a 1-morphism on the mapping cone of a 2-morphism. To enable the definition of mapping cones, we could (say in the DA bimodule setting) define a strong 2-morphism from $(P,\varphi)$ to $(P',\varphi')$ to be the data of:
\begin{itemize}
\item For $\omega \in \Z^2$, DA bimodule morphisms $f_{\omega}\co P_{\omega} \to P'_{\omega}$;
\item For $\omega \in \Z^2$, DA bimodule morphisms $h_{\omega}\co P_{\omega + \alpha} \boxtimes {_{\omega + \alpha}}F^{\vee}_{\omega} \to {_{\omega + \alpha}}(F')^{\vee}_{\omega} \boxtimes P'_{\omega}$ satisfying
\[
d(h) = (\id_{{_{\omega + \alpha}}(F')^{\vee}_{\omega}} \boxtimes f_{\omega}) \circ \varphi_{\omega} - \varphi'_{\omega} \circ (f_{\omega + \alpha} \boxtimes \id_{{_{\omega + \alpha}}F^{\vee}_{\omega}})
\]
and
\begin{align*}
&\left(\tau'_{\omega} \boxtimes \id_{P'_{\omega}}\right) \circ \bigg( \left( \id_{{_{\omega + 2\alpha}}(F')^{\vee}_{\omega + \alpha}} \boxtimes h_{\omega} \right) \circ \left(\varphi_{\omega + \alpha} \boxtimes \id_{{_{\omega + \alpha}}F^{\vee}_{\omega}} \right) \\
&\qquad + \left( \id_{{_{\omega + 2\alpha}}(F')^{\vee}_{\omega+\alpha}} \boxtimes \varphi'_{\omega} \right) \circ \left( h_{\omega + \alpha} \boxtimes \id_{{_{\omega + \alpha}}F^{\vee}_{\omega}} \right) \bigg) \\
&= \bigg( \left( \id_{{_{\omega + 2\alpha}}(F')^{\vee}_{\omega + \alpha}} \boxtimes h_{\omega} \right) \circ \left(\varphi_{\omega + \alpha} \boxtimes \id_{{_{\omega + \alpha}}F^{\vee}_{\omega}} \right) \\
&\qquad + \left( \id_{{_{\omega + 2\alpha}}(F')^{\vee}_{\omega+\alpha}} \boxtimes \varphi'_{\omega} \right) \circ \left( h_{\omega + \alpha} \boxtimes \id_{{_{\omega + \alpha}}F^{\vee}_{\omega}} \right) \bigg) \circ \left( \id_{P_{\omega + 2\alpha}} \boxtimes \tau_{\omega} \right).
\end{align*}
\end{itemize}
One can check that if $(f,h)$ is a strong 2-morphism from $(P,\varphi)$ to $(P',\varphi')$, then the matrix $\begin{bmatrix} \varphi & 0 \\ h & \varphi' \end{bmatrix}$ defines a valid 1-morphism structure for the mapping cone of $f$.
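Concretely (as a sketch, suppressing the homological shift on the $P$ summand), if we write the underlying DA bimodule of the cone as $\mathrm{Cone}(f)_{\omega} = P_{\omega} \oplus P'_{\omega}$, then the matrix above denotes the map
\[
\begin{bmatrix} \varphi_{\omega} & 0 \\ h_{\omega} & \varphi'_{\omega} \end{bmatrix} \co \mathrm{Cone}(f)_{\omega + \alpha} \boxtimes_{A_{\omega + \alpha}} {_{\omega + \alpha}}F^{\vee}_{\omega} \to {_{\omega + \alpha}}(F')^{\vee}_{\omega} \boxtimes_{A'_{\omega}} \mathrm{Cone}(f)_{\omega};
\]
the first condition on $h$ ensures that this map is closed with respect to the cone differentials, and the second ensures compatibility with $\tau$ and $\tau'$.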
\end{remark}
\section{A family of 2-representations}\label{sec:OurExamples}
\subsection{Definitions}
\subsubsection{Fundamental examples}\label{sec:Fundamental2RepExamples}
\begin{definition}
For $K \geq 1$ we define an $\F_2$-linear right finite, right bounded bimodule 2-representation $(\A_K, F, \tau)$ of $\U^-$ by:
\begin{itemize}
\item $(\A_K)_{\varepsilon_1 + (K-1)\varepsilon_2} = \F_2[e_1,\ldots,e_{K-1}]$ where $\deg^q(e_i) = -2i$ and $\deg^h(e_i) = 2i$, with no differential;
\item $(\A_K)_{K \varepsilon_2} = \F_2[e_1, \ldots, e_K]$ where $\deg^q(e_i) = -2i$ and $\deg^h(e_i) = 2i$, with no differential;
\item ${_{K\varepsilon_2}}F_{\varepsilon_1 + (K-1)\varepsilon_2} = \F_2[e_1,\ldots,e_{K-1}]$ as a bimodule over $\F_2[e_1,\ldots,e_{K-1}]$, where the left action of $e_K$ is zero;
\item all other summands of $F$ are zero, and $\tau = 0$.
\end{itemize}
\end{definition}
We choose monomials in the variables $e_i$ as our preferred homogeneous basis for morphism spaces in $\A_K$; we define an augmentation functor $\epsilon$ from $\A_K$ to its idempotent ring by sending all $e_i$ variables to zero.
\begin{example}
If $K = 1$, then $\A_K = \A_1$ is the right finite bimodule 2-representation of $\U^-$ with:
\begin{itemize}
\item $(\A_1)_{\varepsilon_1} = \F_2$;
\item $(\A_1)_{\varepsilon_2} = \F_2[U]$ where $\deg^q(U) = -2$ and $\deg^h(U) = 2$, with no differential;
\item ${_{\varepsilon_2}}F_{\varepsilon_1} = \F_2$, where the left action of $U$ is zero;
\item all other summands of $F$ are zero, and $\tau = 0$.
\end{itemize}
\end{example}
\begin{example}
If $K = 2$, then $\A_K = \A_2$ is the right finite bimodule 2-representation of $\U^-$ with:
\begin{itemize}
\item $(\A_2)_{\varepsilon_1 + \varepsilon_2} = \F_2[e_1]$ where $\deg^q(e_1) = -2$ and $\deg^h(e_1) = 2$;
\item $(\A_2)_{2\varepsilon_2} = \F_2[e_1, e_2]$ where $\deg^q(e_i) = -2i$ and $\deg^h(e_i) = 2i$, with no differential;
\item ${_{2\varepsilon_2}}F_{\varepsilon_1 + \varepsilon_2} = \F_2[e_1]$, where the left action of $e_2$ is zero;
\item all other summands of $F$ are zero, and $\tau = 0$.
\end{itemize}
\end{example}
We describe the DA bimodule $F^{\vee} = {_{\varepsilon_1 + (K-1)\varepsilon_2}}(F^{\vee})_{K\varepsilon_2}$ over $\A_K$ in matrix notation. We will refer to the unique object of $(\A_K)_{\varepsilon_1 + (K-1)\varepsilon_2}$ (respectively $(\A_K)_{K\varepsilon_2}$) as $\u$ for ``unoccupied'' (respectively $\o$ for ``occupied''). This language of occupancy is motivated by how the algebras relate to the strands algebra construction (see Section~\ref{sec:ExamplesStrandsInterp}) and, correspondingly, holomorphic disk counts in Heegaard diagrams (see Section~\ref{sec:ExamplesHDInterp}).
\begin{example}
The primary matrix for the bimodule $F^{\vee}$ over $\A_K$ is
\[
\kbordermatrix{
& \u & \o \\
\u & & \xi_{\u,\o} \\
\o & &
};
\]
the one generator $\xi_{\u,\o}$ has $q$-degree and homological degree both equal to zero. The matrix for the right action of $e_i$ is
\[
\kbordermatrix{
& \xi_{\u,\o} \\
\xi_{\u,\o} & e_i
}
\]
for $1 \leq i \leq K-1$; the matrix for the right action of $e_K$ is zero.
\end{example}
By construction, $(G_0(\A_K), [F])$ can be identified with $\wedge_q^K V$ where $V$ is the vector representation of $U_q(\gl(1|1))$ (restricted to a representation of the subalgebra generated by $F$, $q^{H_1}$, and $q^{H_2}$, or alternatively a representation of $\Udot_q(\gl(1|1)^-)$); see Appendix~\ref{sec:Uqgl11Review}. The basis elements $[S_{\u}]$ and $[S_{\o}]$ of $G_0(\A_K)$ get identified with the basis elements $\ket{0} \wedge (\ket{1})^{\wedge(K-1)}$ and $(\ket{1})^{\wedge K}$ of $\wedge_q^K V$ respectively.
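\begin{example}
Concretely, ${_{K\varepsilon_2}}F_{\varepsilon_1 + (K-1)\varepsilon_2} = \F_2[e_1,\ldots,e_{K-1}]$ is free of rank one as a right module over $(\A_K)_{\varepsilon_1 + (K-1)\varepsilon_2}$, with generator in bidegree zero, and all other summands of $F$ are zero. It follows that
\[
[F][S_{\u}] = [S_{\o}], \qquad [F][S_{\o}] = 0,
\]
which matches the action of $F$ on $\wedge_q^K V$ sending $\ket{0} \wedge (\ket{1})^{\wedge(K-1)}$ to $(\ket{1})^{\wedge K}$ and killing $(\ket{1})^{\wedge K}$ (with this normalization of $F$ on $\wedge_q^K V$; cf. Appendix~\ref{sec:Uqgl11Review}).
\end{example}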
\subsubsection{Tensor products}
Write
\[
\A_{K_1,\ldots,K_n} \coloneqq \A_{K_1} \ootimes \cdots \ootimes \A_{K_n}
\]
as 2-representations of $\U^-$.
\begin{corollary}\label{cor:GeneralKDecat}
We have an identification
\[
G_0\left(\A_{K_1, \ldots, K_n}\right) \cong \wedge_q^{K_1} V \otimes \cdots \otimes \wedge_q^{K_n} V
\]
as representations of $\Udot_q(\gl(1|1)^-)$.
\end{corollary}
In particular, $G_0 \left( \A_{1,\ldots,1} \right)$ is identified with $V^{\otimes n}$. We will often write objects of $\A_{K_1,\ldots,K_n}$ as sequences of letters $\u$ (``unoccupied'') and $\o$ (``occupied'').
\begin{example}
As a dg category, the 2-representation $\A_{K_1,K_2}$ of $\U^-$ can be described as follows:
\begin{itemize}
\item $(\A_{K_1,K_2})_{2\varepsilon_1 + (K_1+K_2-2)\varepsilon_2} = \F_2$ with one object $\uu$.
\item $(\A_{K_1,K_2})_{\varepsilon_1 + (K_1+K_2-1)\varepsilon_2}$ has two objects $\ou$ and $\uo$ with morphisms generated by endomorphisms
\[
e_{1,1}, \ldots, e_{1,K_1}, e_{2,1}, \ldots, e_{2,K_2-1}
\]
of $\ou$, endomorphisms
\[
e_{1,1}, \ldots, e_{1,K_1-1}, e_{2,1}, \ldots, e_{2,K_2}
\]
of $\uo$, and a morphism $\lambda$ from $\ou$ to $\uo$, subject to the relations
\begin{itemize}
\item $e_{j,i} g = g e_{j,i}$ for $j=1,2$, $1 \leq i \leq K_j - 1$, and all morphisms $g$,
\item $e_{2,K_2} \lambda = 0$ and $\lambda e_{1,K_1} = 0$.
\end{itemize}
We have $\deg^q(e_{j,i}) = -2i$, $\deg^h(e_{j,i}) = 2i$, $\deg^q(\lambda) = K_1$, and $\deg^h(\lambda) = 1-K_1$.
\item $(\A_{K_1,K_2})_{(K_1+K_2)\varepsilon_2} = \F_2[e_{1,1}, \ldots, e_{1,K_1}, e_{2,1}, \ldots, e_{2,K_2}]$ with one object $\oo$; we have
\[
\deg^q(e_{j,i}) = -2i, \qquad \deg^h(e_{j,i}) = 2i.
\]
\end{itemize}
\end{example}
\begin{example}
The DA bimodule $F^{\vee}$ over $\A_{K_1,K_2}$ can be described in matrix notation as follows. The primary matrix for ${_{2\varepsilon_1 + (K_1+K_2-2)\varepsilon_2}}(F^{\vee})_{\varepsilon_1 + (K_1+K_2-1)\varepsilon_2}$ is
\[
\kbordermatrix{
& \ou & \uo \\
\uu & \xi_{\uu,\ou} & \xi_{\uu,\uo}
};
\]
we have $\deg^q(\xi_{\uu,\ou}) = \deg^h(\xi_{\uu,\ou}) = 0$ as well as $\deg^q(\xi_{\uu,\uo}) = -K_1$ and $\deg^h(\xi_{\uu,\uo}) = K_1 - 1$.
The matrix for the right action of $\lambda$ is
\[
\kbordermatrix{
& \xi_{\uu,\ou} & \xi_{\uu,\uo} \\
\xi_{\uu,\ou} & 0 & 1 \\
\xi_{\uu,\uo} & 0 & 0
}.
\]
For $j = 1,2$ and $1 \leq i \leq K_j - 1$, the right action by $e_{j,i}$ has matrix
\[
\kbordermatrix{
& \xi_{\uu,\ou} & \xi_{\uu,\uo} \\
\xi_{\uu,\ou} & e_{j,i} & 0 \\
\xi_{\uu,\uo} & 0 & e_{j,i}
}.
\]
The differential and the right action of $e_{1,K_1}$ and $e_{2,K_2}$ are zero.
The primary matrix for ${_{\varepsilon_1 + (K_1+K_2-1)\varepsilon_2}}(F^{\vee})_{(K_1+K_2)\varepsilon_2}$ is
\[
\kbordermatrix{
& \oo \\
\ou & \xi_{\ou,\oo} \\
\uo & \xi_{\uo,\oo} \\
};
\]
we have $\deg^q(\xi_{\ou,\oo}) = -K_1$, $\deg^h(\xi_{\ou,\oo}) = K_1$, $\deg^q(\xi_{\uo,\oo}) = 0$, and $\deg^h(\xi_{\uo,\oo}) = 0$.
The differential has matrix
\[
\kbordermatrix{
& \xi_{\ou,\oo} & \xi_{\uo,\oo} \\
\xi_{\ou,\oo} & 0 & \lambda \\
\xi_{\uo,\oo} & 0 & 0
},
\]
the right action by $e_{1,K_1}$ has matrix
\[
\kbordermatrix{
& \xi_{\ou,\oo} & \xi_{\uo,\oo} \\
\xi_{\ou,\oo} & e_{1,K_1} & 0 \\
\xi_{\uo,\oo} & 0 & 0
},
\]
the right action by $e_{2,K_2}$ has matrix
\[
\kbordermatrix{
& \xi_{\ou,\oo} & \xi_{\uo,\oo} \\
\xi_{\ou,\oo} & 0 & 0 \\
\xi_{\uo,\oo} & 0 & e_{2,K_2}
},
\]
and the right action by $e_{j,i}$ for $j = 1,2$, $1 \leq i \leq K_j - 1$ has matrix
\[
\kbordermatrix{
& \xi_{\ou,\oo} & \xi_{\uo,\oo} \\
\xi_{\ou,\oo} & e_{j,i} & 0 \\
\xi_{\uo,\oo} & 0 & e_{j,i}
}.
\]
\end{example}
\begin{example}
The endomorphism $\tau$ of the bimodule ${_{2\varepsilon_1 + (K_1+K_2-2)\varepsilon_2}}(F^{\vee})^2_{(K_1+K_2)\varepsilon_2}$ has matrix
\[
\kbordermatrix{
&\xi_{\uu,\ou} \xi_{\ou,\oo} & \xi_{\uu,\uo} \xi_{\uo,\oo} \\
\xi_{\uu,\ou} \xi_{\ou,\oo} & 0 & 0 \\
\xi_{\uu,\uo} \xi_{\uo,\oo} & 1 & 0
}.
\]
\end{example}
\begin{figure}
\includegraphics[scale=0.4]{A11Quiver_V3.eps}
\caption{The dg category $\A_{1,1}$.}
\label{fig:A11Quiver}
\end{figure}
For convenience, we specialize to the case $K_1 = K_2 = 1$, which will be especially important below. In this case we write $U_1$ rather than $e_{1,1}$ and $U_2$ rather than $e_{2,1}$.
\begin{example}
The dg category $\A_{1,1}$ is shown in Figure~\ref{fig:A11Quiver}; arrows point from source to target. The differential on $\A_{1,1}$ is zero; the gradings are given by
\begin{itemize}
\item $\deg^q(U_1) = -2$, $\deg^h(U_1) = 2$,
\item $\deg^q(\lambda) = 1$, $\deg^h(\lambda) = 0$,
\item $\deg^q(U_2) = -2$, $\deg^h(U_2) = 2$.
\end{itemize}
The primary matrix for ${_{2\varepsilon_1}}(F^{\vee})_{\varepsilon_1 + \varepsilon_2}$ is
\[
\kbordermatrix{
& \ou & \uo \\
\uu & \xi_{\uu,\ou} & \xi_{\uu,\uo}
};
\]
the generators $\xi_{\uu,\ou}$ and $\xi_{\uu,\uo}$ have $q$-degree $0$ and $-1$ respectively, and both generators have homological degree $0$. The matrix for the right action of $\lambda$ is
\[
\kbordermatrix{
& \xi_{\uu,\ou} & \xi_{\uu,\uo} \\
\xi_{\uu,\ou} & 0 & 1 \\
\xi_{\uu,\uo} & 0 & 0
};
\]
the differential and the right action of $U_1$ and $U_2$ are zero (so that the above matrix is equivalently the secondary matrix for ${_{2\varepsilon_1}}(F^{\vee})_{\varepsilon_1 + \varepsilon_2}$).
The primary matrix for ${_{\varepsilon_1 + \varepsilon_2}}(F^{\vee})_{2\varepsilon_2}$ is
\[
\kbordermatrix{
& \oo \\
\ou & \xi_{\ou,\oo} \\
\uo & \xi_{\uo,\oo} \\
};
\]
the generator $\xi_{\ou,\oo}$ has $q$-degree $-1$ and homological degree $1$, while $\xi_{\uo,\oo}$ has $q$-degree $0$ and homological degree $0$. The differential has matrix
\[
\kbordermatrix{
& \xi_{\ou,\oo} & \xi_{\uo,\oo} \\
\xi_{\ou,\oo} & 0 & \lambda \\
\xi_{\uo,\oo} & 0 & 0
},
\]
the right action by $U_1$ has matrix
\[
\kbordermatrix{
& \xi_{\ou,\oo} & \xi_{\uo,\oo} \\
\xi_{\ou,\oo} & U_1 & 0 \\
\xi_{\uo,\oo} & 0 & 0
},
\]
and the right action by $U_2$ has matrix
\[
\kbordermatrix{
& \xi_{\ou,\oo} & \xi_{\uo,\oo} \\
\xi_{\ou,\oo} & 0 & 0 \\
\xi_{\uo,\oo} & 0 & U_2
}.
\]
Equivalently, the secondary matrix for ${_{\varepsilon_1 + \varepsilon_2}}(F^{\vee})_{2\varepsilon_2}$ is
\[
\kbordermatrix{
& \xi_{\ou,\oo} & \xi_{\uo,\oo} \\
\xi_{\ou,\oo} & U_1^{k+1} \otimes U_1^{k+1} & \lambda \\
\xi_{\uo,\oo} & 0 & U_2^{k+1} \otimes U_2^{k+1}
}
\]
(where $k$ runs over all nonnegative integers in each entry with a $k$ index).
The endomorphism $\tau$ of ${_{2\varepsilon_1}}(F^{\vee})^2_{2\varepsilon_2}$ has matrix
\[
\kbordermatrix{
&\xi_{\uu,\ou} \xi_{\ou,\oo} & \xi_{\uu,\uo} \xi_{\uo,\oo} \\
\xi_{\uu,\ou} \xi_{\ou,\oo} & 0 & 0 \\
\xi_{\uu,\uo} \xi_{\uo,\oo} & 1 & 0
}.
\]
\end{example}
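\begin{example}
As a consistency check (with the sign and normalization conventions of Appendix~\ref{sec:Uqgl11Review}), the formula for the entries of $[F]$ from Section~\ref{sec:BigradedTensorDecat} gives, on $G_0(\A_{1,1})$,
\begin{align*}
[F][\uu] &= [\ou] + q[\uo], & [F][\uo] &= [\oo], \\
[F][\ou] &= -q[\oo], & [F][\oo] &= 0;
\end{align*}
for instance, the coefficient $-q$ on $[F][\ou]$ comes from the factor $(-1)^{\omega'_2} q^{\omega'_1 + \omega'_2}$ with $\omega' = \varepsilon_2$. Under the identification $G_0(\A_{1,1}) \cong V \otimes V$ of Corollary~\ref{cor:GeneralKDecat}, this is the action $[F] = [F_1] \otimes 1 + q^{H_1 + H_2} \otimes [F_2]$.
\end{example}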
\subsection{Strands interpretation}\label{sec:ExamplesStrandsInterp}
\subsubsection{Strands algebras and chord diagrams}
\begin{definition}[cf. Section 7.2.4 of \cite{ManionRouquier}, Definition 2.1.1 of \cite{Zarev}] A chord diagram\footnote{Terminology following e.g. \cite{ACPRS}.} $\Zc = (\Zc, \mathbf{a}, \mu)$ is a finite collection $\Zc$ of oriented circles and intervals, equipped with a two-to-one matching $\mu$ on a finite subset $\mathbf{a}$ of points in the interior of $\Zc$.
\end{definition}
Zarev allows only intervals in $\Zc$, not circles. We will draw the intervals and circles in black, indicating the matching $\mu$ by red arcs whose endpoints are the matched pairs of points of $\mathbf{a}$. Figures~\ref{fig:Z1} and \ref{fig:Zstn} show some examples of chord diagrams.
Differential (ungraded) strands categories for general chord diagrams $\Zc$ are defined in \cite{ManionRouquier}.
\begin{itemize}
\item Let $N = |\mathbf{a}|/2$.
\item Let $\mathbf{a}/\sim$ be the set of two-element equivalence classes of points of $\mathbf{a}$ determined by the matching.
\item Let $Z$ be the singular curve associated to $\Zc$ in \cite[Section 7.2.4]{ManionRouquier} (we can view $\mathbf{a}/\sim$ as a subset of $Z$).
\item Let $\Sc(Z)$ be the strands category of $Z$ defined in \cite[Definition 7.4.22]{ManionRouquier}.
\end{itemize}
\begin{definition}
Let $\A(\Zc) = \Sc_{\mathbf{a}/\sim}(Z)$, the full differential subcategory of $\Sc(Z)$ on objects contained in the finite set $\mathbf{a}/\sim$. We can write
\[
\A(\Zc) \coloneqq \bigoplus_{k=0}^N \A(\Zc,k)
\]
where $\A(\Zc,k)$ is the full subcategory of $\A(\Zc)$ on objects $S$ with $|S|=k$.
\end{definition}
Furthermore, \cite{ManionRouquier} defines (ungraded) bimodule 2-representations $(\A(\Zc), L_{\xi_+}, \tau)$ and $(\A(\Zc), R_{\xi_-}, \tau)$ of $\U$ for each chord diagram $\Zc$ equipped with a chosen interval component as follows. By \cite[Section 8.1.6]{ManionRouquier}, the chosen interval component of $\Zc$ gives injective morphisms of curves $\xi_+\co \R_{>0} \to Z$ and $\xi_-\co \R_{<0} \to Z$ which are outgoing and incoming for $Z$ respectively (as defined in \cite[Section 8.1.1]{ManionRouquier}).
By \cite[Proposition 8.1.3]{ManionRouquier}, taking $M = \mathbf{a} / \sim$, we get a left finite 2-representation $(\A(\Zc), L_{\xi_+}, \tau)$ of $\U$. Its left dual $(\A(\Zc), {^{\vee}}L_{\xi_+}, \tau)$ is thus a right finite 2-representation of $\U$ which agrees with $(\A(\Zc), R_{\xi_-}, \tau)$ by \cite[Proposition 8.1.15]{ManionRouquier}. It follows from \cite[Proposition 8.1.10]{ManionRouquier} (resp. \cite[Section 8.1.5]{ManionRouquier}) that $L_{\xi_+}$ is left bounded (resp. $R_{\xi_-}$ is right bounded).
\subsubsection{Fundamental examples}
\begin{figure}
\includegraphics[scale=0.6]{Z1.eps}
\caption{The chord diagram $\Zc_1$.}
\label{fig:Z1}
\end{figure}
\begin{example}
The differential category $\A(\Zc_1)$ agrees with $\A_1$ (ignoring gradings), where $\Zc_1$ is shown in Figure~\ref{fig:Z1}; we have $\A(\Zc_1,0) = (\A_1)_{\varepsilon_1}$ and $\A(\Zc_1,1) = (\A_1)_{\varepsilon_2}$.
\end{example}
Basis elements of $\A(\Zc_1)$ over $\F_2$ can be represented by strands pictures in the disjoint union of a rectangle and a cylinder; the element $U^i$ of $\A(\Zc_1,1) = \F_2[U]$ wraps around the cylinder $i$ times, while $1 \in \A(\Zc_1,1)$ is a pair of dotted lines corresponding to the two matched points in the chord diagram $\Zc_1$ (see Figure~\ref{fig:LStrandsExamples}).
\begin{figure}
\includegraphics[scale=0.6]{L_strands_examples.eps}
\caption{Elements $1$, $U$, and $U^2$ of $\A(\Zc_1,1) = \F_2[U]$.}
\label{fig:LStrandsExamples}
\end{figure}
The bimodule $R_{\xi_-}$ over $\A(\Zc_1)$ can be viewed as a bimodule over $(\A(\Zc_1,1), \A(\Zc_1,0))$ with a single basis element over $\F_2$, shown on the left of Figure~\ref{fig:LStrandsBimodExamples}. The left action of $U \in \A(\Zc_1,1) = \F_2[U]$ on this basis element is zero. We can thus identify $R_{\xi_-}$ with the bimodule $F = {_{\varepsilon_2}}F_{\varepsilon_1}$ over $\A_1$ (ignoring the gradings on the latter bimodule).
Similarly, the only nonzero summand of the bimodule $L_{\xi_+}$ over $\A(\Zc_1)$ is a bimodule over $(\A(\Zc_1,0), \A(\Zc_1,1))$ with a single basis element over $\F_2$, shown on the right of Figure~\ref{fig:LStrandsBimodExamples}. The right action of $U \in \A(\Zc_1,1) = \F_2[U]$ on this basis element is zero. We can thus identify $L_{\xi_+}$ with the bimodule $F^{\vee} = {_{\varepsilon_1}}F^{\vee}_{\varepsilon_2}$ over $\A_1$ (ignoring gradings again).
\begin{figure}
\includegraphics[scale=0.6]{L_strands_bimod_examples.eps}
\caption{The unique basis elements of the bimodules $F = R_{\xi_-}$ and $F^{\vee} = L_{\xi_+}$ over $\A_1$.}
\label{fig:LStrandsBimodExamples}
\end{figure}
\begin{corollary}
The 2-representation of $\U$ underlying the 2-representation $(\A_1,F,0)$ of $\U^-$ agrees with $(\A(\Zc_1), R_{\xi_-}, 0)$, and its dual agrees with $(\A(\Zc_1), L_{\xi_+}, 0)$.
\end{corollary}
\subsubsection{Tensor products}
\begin{figure}
\includegraphics[scale=0.6]{Zstn.eps}
\caption{The chord diagram $\Zc_{1,\ldots,1}^{\st}$.}
\label{fig:Zstn}
\end{figure}
The following proposition follows from \cite[Theorem 8.3.1]{ManionRouquier}.
\begin{proposition}
We have a canonical isomorphism
\[
\A_{1,\ldots,1} \cong \A(\Zc^{\st}_{1,\ldots,1})
\]
as 2-representations of $\U$, where $\Zc^{\st}_{1,\ldots,1}$ is the chord diagram shown in Figure~\ref{fig:Zstn}.
\end{proposition}
The dg category $\A(\Zc^{\st}_{1,\ldots,1})$ comes from a pointed strands category as in \cite[Definition 7.4.22]{ManionRouquier}, so it comes with a preferred choice of basis (basis elements correspond to strands pictures as in Figure~\ref{fig:LStrandsExamples}). We define an augmentation functor $\epsilon$ from $\A(\Zc^{\st}_{1,\ldots,1})$ to its idempotent ring by sending all non-identity basis elements to zero; on $\A(\Zc^{\st}_1) = \A_1$, this basis and augmentation agree with the ones defined in Section~\ref{sec:Fundamental2RepExamples}.
\begin{figure}
\includegraphics[scale=0.6]{LL_strands_examples.eps}
\caption{Generators $U_1, \lambda$, and $U_2$ of $\A(\Zc_{1,1}^{\st},1)$.}
\label{fig:LLStrandsExamples}
\end{figure}
Figure~\ref{fig:LLStrandsExamples} shows the generators $U_1$, $\lambda$, and $U_2$ of $(\A_{1,1})_{\varepsilon_1 + \varepsilon_2} \cong \A(\Zc_{1,1}^{\st},1)$. Similarly, Figure~\ref{fig:LLStrandsBimodExamples} shows the two generators $\xi_{\uu,\uo}$ and $\xi_{\uu,\ou}$ of ${_{2\varepsilon_1}}F^{\vee}_{\varepsilon_1 + \varepsilon_2}$, and Figure~\ref{fig:LLMoreStrandsBimodExamples} shows the two generators $\xi_{\ou,\oo}$ and $\xi_{\uo,\oo}$ of ${_{\varepsilon_1 + \varepsilon_2}}F^{\vee}_{2\varepsilon_2}$.
\begin{figure}
\includegraphics[scale=0.6]{LL_strands_bimod_examples.eps}
\caption{Generators $\xi_{\uu,\uo}$ and $\xi_{\uu,\ou}$ of the bimodule ${_{2\varepsilon_1}}F^{\vee}_{\varepsilon_1 + \varepsilon_2}$ as a left module.}
\label{fig:LLStrandsBimodExamples}
\end{figure}
\begin{figure}
\includegraphics[scale=0.6]{LL_more_strands_bimod_examples.eps}
\caption{Generators $\xi_{\ou,\oo}$ and $\xi_{\uo,\oo}$ of the bimodule ${_{\varepsilon_1+\varepsilon_2}}F^{\vee}_{2\varepsilon_2}$ as a left module.}
\label{fig:LLMoreStrandsBimodExamples}
\end{figure}
\subsection{Heegaard diagram interpretation}\label{sec:ExamplesHDInterp}
By ideas resembling those found in \cite{EPV,Hypertoric1}, the strands (or tensor-product) bimodule $F^{\vee}$ over $\A_{1,\ldots,1}$ should also have a Heegaard diagram interpretation (the same should be true for $F$, but for simplicity we will work with DA bimodules when possible). Heegaard Floer bimodules based on holomorphic curve counts have not yet been defined for any general family of Heegaard diagrams that includes the diagrams in question, so we will use these diagrams only as heuristic motivation for the above definitions; despite this, we will speak as if the holomorphic-curve bimodules are well-defined and the computations below are accurate. We require sets of intersection points in the Heegaard diagram of Figure~\ref{fig:EDual2Strands} to contain exactly one point on the blue curve, and we do not count holomorphic curves with nonzero multiplicity on the left and right vertical portions of the boundary of the diagram.
\begin{figure}
\includegraphics[scale=0.4]{e_dual_2strands_V2.eps}
\caption{Heegaard diagram for the bimodule $F^{\vee}$ over $\A_{1,\ldots,1}$.}
\label{fig:EDual2Strands}
\end{figure}
In particular, the DA bimodule $F^{\vee}$ over $\A_1$ comes from the Heegaard diagram in Figure~\ref{fig:EDual1Strand}. Figure~\ref{fig:EDual1StrandGen} shows the one generator $\xi_{\u,\o}$ of the bimodule as a set of intersection points in the Heegaard diagram from Figure~\ref{fig:EDual1Strand}. Figure~\ref{fig:EDual2StrandsGens} shows generators for the top summand ${_{2\varepsilon_1}}(F^{\vee})_{\varepsilon_1 + \varepsilon_2}$ of $F^{\vee}$ as sets of intersection points, together with the domain giving rise to the right action of $\lambda$. Figure~\ref{fig:EDual2StrandsGensLower} shows the generators for the bottom summand ${_{\varepsilon_1 + \varepsilon_2}}(F^{\vee})_{2\varepsilon_2}$ of $F^{\vee}$, and Figure~\ref{fig:EDual2StrandsDomainsLower} shows the domains for the secondary matrix of ${_{\varepsilon_1 + \varepsilon_2}}(F^{\vee})_{2\varepsilon_2}$ (the entries $U_i^{k+1} \otimes U_i^{k+1}$ correspond to domains of multiplicity $k+1$, which are shown with darker shading in Figure~\ref{fig:EDual2StrandsDomainsLower}).
\begin{figure}
\includegraphics[scale=0.3]{e_dual_1strand.eps}
\caption{Heegaard diagram for the bimodule $F^{\vee}$ over $\A_1$.}
\label{fig:EDual1Strand}
\end{figure}
\begin{figure}
\includegraphics[scale=0.65]{e_dual_1strand_gens_V2.eps}
\caption{Basis element $\xi_{\u,\o}$ of $F^{\vee}$ in terms of the Heegaard diagram.}
\label{fig:EDual1StrandGen}
\end{figure}
\begin{figure}
\includegraphics[scale=0.6]{e_dual_2strands_gens_V2.eps}
\caption{Generators for ${_{2\varepsilon_1}}F^{\vee}_{\varepsilon_1 + \varepsilon_2}$ and domain for the right action of $\lambda$.}
\label{fig:EDual2StrandsGens}
\end{figure}
\begin{figure}
\includegraphics[scale=0.6]{e_dual_2strands_gens_lower_V2.eps}
\caption{Generators for ${_{\varepsilon_1 + \varepsilon_2}}F^{\vee}_{2\varepsilon_2}$.}
\label{fig:EDual2StrandsGensLower}
\end{figure}
\begin{figure}
\includegraphics[scale=0.6]{e_dual_2strands_domains_lower_V2.eps}
\caption{Domains for ${_{\varepsilon_1 + \varepsilon_2}}F^{\vee}_{2\varepsilon_2}$.}
\label{fig:EDual2StrandsDomainsLower}
\end{figure}
\begin{convention}
We will always draw domains in the presence of a starting set of intersection points, drawn with solid dots, and an ending set of intersection points, drawn with open dots. Orientation conventions are fixed by requiring that in a simple bigon with positive multiplicity, a positively oriented path around the boundary starting and ending at the solid dot should traverse an $\alpha$ curve (red) before reaching the open dot and a $\beta$ curve afterward; this is the usual convention in Heegaard Floer homology.
\end{convention}
\section{Bimodule for the \texorpdfstring{$\lambda$}{lambda}-shaped trivalent vertex}\label{sec:EasyVertex}
\subsection{Definition of the bimodule}
\begin{figure}
\includegraphics[scale=0.4]{easy_vertex_v3.eps}
\caption{Heegaard diagram for the ``$\lambda$-shaped'' trivalent vertex.}
\label{fig:EasyVertex}
\end{figure}
In this section we will define a bimodule for one of the two types of trivalent vertex, based on the Heegaard diagram in Figure~\ref{fig:EasyVertex}.
At the decategorified level, one can view maps for trivalent vertices as arising from skew Howe duality. The morphism $1_{2,0} E_{\gl(2)} 1_{1,1}$ of the idempotented quantum group $\Udot_q(\gl(2))$ induces a $U_q(\gl(1|1))$-linear map $V^{\otimes 2} \to \wedge_q^2 V$; see Appendix~\ref{sec:SingularNonsingular}. The morphism $1_{0,2}F_{\gl(2)}1_{1,1}$ of $\Udot_q(\gl(2))$ induces the same map $V^{\otimes 2} \to \wedge_q^2 V$. This map respects the $\gl(1|1)$ weight spaces of its domain and codomain; we will refer to it simply as $1_{2,0} E_{\gl(2)} 1_{1,1}$ (or $1_{0,2}F_{\gl(2)}1_{1,1}$).
The nonzero $\gl(1|1)$ weight spaces of $\wedge_q^2 V$ are $\varepsilon_1 + \varepsilon_2$ and $2\varepsilon_2$, while the nonzero weight spaces of $V^{\otimes 2}$ are $2\varepsilon_1$, $\varepsilon_1 + \varepsilon_2$, and $2\varepsilon_2$. We will refer to these weight spaces as the ``upper,'' ``middle,'' and ``lower'' weight spaces respectively. The codomain $\wedge_q^2 V$ of the map $1_{2,0} E_{\gl(2)} 1_{1,1}$ is zero in the upper weight space, so $1_{2,0} E_{\gl(2)} 1_{1,1}$ is the direct sum of maps on the middle and lower weight spaces.
We will define DA bimodules $\Lambda_{\midd}$ and $\Lambda_{\low}$ whose left duals categorify $1_{2,0} E_{\gl(2)} 1_{1,1}$ acting on these middle and lower weight spaces. The bimodules are obtained by heuristically counting holomorphic disks using the Heegaard diagram shown in Figure~\ref{fig:EasyVertex}; conjecturally, these disk counts are accurate once the analytic theory has been defined.
Neither of these bimodules has higher $A_{\infty}$ actions; the bimodule in the middle weight space is a dg bimodule with a nonzero differential, while in the lower weight space it is an ordinary bimodule. We will set $\Lambda \coloneqq \Lambda_{\midd} \oplus \Lambda_{\low}$. We define $\Lambda_{\upp}$ to be zero; more generally, we set $\Lambda_{a\varepsilon_1 + b\varepsilon_2} = 0$ unless $(a,b)$ equals $(1,1)$ or $(0,2)$.
\begin{definition}
The dg bimodule $\Lambda_{\midd}$ over $((\A_{1,1})_{\varepsilon_1+\varepsilon_2}, (\A_2)_{\varepsilon_1+\varepsilon_2})$ has primary matrix
\[
\kbordermatrix{
& \u \\
\ou & X \\
\uo & Y
}.
\]
We set $\deg^q(X) = -1$, $\deg^h(X) = 1$, $\deg^q(Y) = 0$, and $\deg^h(Y) = 0$. The differential is given by the matrix
\[
\kbordermatrix{
& X & Y \\
X & 0 & \lambda \\
Y & 0 & 0
},
\]
and the right action of $e_1$ is given by the matrix
\[
\kbordermatrix{
& X & Y \\
X & U_1 & 0 \\
Y & 0 & U_2
}.
\]
Since the above matrices commute and are compatible with the gradings, this dg bimodule is well-defined.
\end{definition}
\begin{definition}
The (ordinary) bimodule $\Lambda_{\low}$ over $((\A_{1,1})_{2\varepsilon_2}, (\A_2)_{2\varepsilon_2})$ has primary matrix
\[
\kbordermatrix{
& \o \\
\oo & Z
}.
\]
We set $\deg^q(Z) = \deg^h(Z) = 0$.
The matrix for right action of $e_1$ is
\[
\kbordermatrix{
& Z \\
Z & U_1 + U_2
}
\]
and the matrix for right action of $e_2$ is
\[
\kbordermatrix{
& Z \\
Z & U_1 U_2
}.
\]
The matrices for the actions of $e_1$ and $e_2$ commute and are compatible with the gradings, so the bimodule is well-defined.
\end{definition}
\begin{figure}
\includegraphics[scale=0.6]{easy_vertex_gens.eps}
\caption{Generators of $\Lambda$ in terms of intersection points.}
\label{fig:EasyVertexGens}
\end{figure}
\begin{figure}
\includegraphics[scale=0.6]{easy_vertex_domain_V3.eps}
\caption{Domains giving rise to the differential and right action of $e_2^k$ on $\Lambda$.}
\label{fig:EasyVertexDomains}
\end{figure}
Figure~\ref{fig:EasyVertexGens} shows the correspondence between generators of $\Lambda$ and sets of intersection points in the Heegaard diagram of Figure~\ref{fig:EasyVertex}. Figure~\ref{fig:EasyVertexDomains} shows the domains in the Heegaard diagram corresponding to the differential on $\Lambda_{\midd}$ and the right action of $e_2$ on $\Lambda_{\low}$. The given formula for the right action of $e_1$ is necessary for the theory to work, but it does not appear to come from the Heegaard diagram, at least in currently-known versions of Heegaard Floer homology.
\subsection{Decategorification of \texorpdfstring{$\Lambda$}{Lambda}}
\begin{proposition}
The DA bimodule $\Lambda$ categorifies the map from $K_0(\A_2)$ to $K_0(\A_{1,1})$ with matrix
\[
\kbordermatrix{
& [P_{\u}] & [P_{\o}] \\
{[P_{\uu}]} & 0 & 0 \\
{[P_{\ou}]} & -q^{-1} & 0 \\
{[P_{\uo}]} & 1 & 0 \\
{[P_{\oo}]} & 0 & 1 \\
}.
\]
Equivalently, the AD bimodule ${^{\vee}}\Lambda$ categorifies the map from $G_0(\A_{1,1})$ to $G_0(\A_2)$ with matrix
\[
\kbordermatrix{
& {[S_{\uu}]} & {[S_{\ou}]} & {[S_{\uo}]} & {[S_{\oo}]} \\
{[S_{\u}]} & 0 & -q & 1 & 0 \\
{[S_{\o}]} & 0 & 0 & 0 & 1
}.
\]
This latter map can be identified with $1_{2,0} E_{\gl(2)} 1_{1,1}$ (or equivalently $1_{0,2} F_{\gl(2)} 1_{1,1}$) mapping from $V^{\otimes 2}$ to $\wedge_q^2 V$ as in Appendix~\ref{sec:AppendixSkewHoweMaps}.
\end{proposition}
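As a quick consistency check on the two matrices in this proposition (illustrative only, and not part of the argument), note that the second matrix is the transpose of the first with $q$ replaced by $q^{-1}$, as expected from the pairing between $K_0$ and $G_0$. The following self-contained Python sketch verifies this relation at a few sample values of $q$; the function names are ad hoc.

```python
from fractions import Fraction

def K0(q):
    # Matrix of the map K_0(A_2) -> K_0(A_{1,1}) categorified by Lambda;
    # rows indexed by [P_uu], [P_ou], [P_uo], [P_oo], columns by [P_u], [P_o].
    return [[0, 0], [-1 / q, 0], [1, 0], [0, 1]]

def G0(q):
    # Matrix of the dual map G_0(A_{1,1}) -> G_0(A_2); rows [S_u], [S_o],
    # columns [S_uu], [S_ou], [S_uo], [S_oo].
    return [[0, -q, 1, 0], [0, 0, 0, 1]]

def transpose(m):
    return [list(col) for col in zip(*m)]

# G0(q) should equal the transpose of K0 evaluated at q^{-1}; since all
# entries are Laurent polynomials of degree at most 1, a few sample values
# of q suffice to distinguish them.
for q in (Fraction(2), Fraction(3, 5), Fraction(-7, 2)):
    assert G0(q) == transpose(K0(1 / q))
```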
\subsection{2-representation morphism structure}\label{sec:Lambda2RepMorphism}
We equip $\Lambda = \Lambda_{\midd} \oplus \Lambda_{\low}$ with the structure of a DA bimodule $1$-morphism of $2$-representations of $\U^-$. We need a homotopy equivalence
\[
\varphi\co \Lambda_{\midd} \boxtimes F^{\vee} \to F^{\vee} \boxtimes \Lambda_{\low}
\]
as well as a homotopy equivalence
\[
\varphi\co 0 \to F^{\vee} \boxtimes \Lambda_{\midd}
\]
(i.e. we need $F^{\vee} \boxtimes \Lambda_{\midd}$ to be contractible).
The following three propositions follow from Procedure~\ref{proc:BoxTensorBimods}.
\begin{proposition}\label{prop:EDualAfterMiddleEasy}
The dg bimodule $F^{\vee} \boxtimes \Lambda_{\midd}$ has primary matrix
\[
\kbordermatrix{
& \u \\
\uu & \xi_{\uu,\ou} X \quad \xi_{\uu,\uo} Y
}
\]
and differential
\[
\kbordermatrix{
& \xi_{\uu,\ou} X & \xi_{\uu,\uo} Y \\
\xi_{\uu,\ou} X & 0 & 1 \\
\xi_{\uu,\uo} Y & 0 & 0
}.
\]
The right action of $e_1$ on $F^{\vee} \boxtimes \Lambda_{\midd}$ is zero.
\end{proposition}
\begin{proposition}\label{prop:EDualAfterLowerEasy}
The dg bimodule $F^{\vee} \boxtimes \Lambda_{\low}$ has primary matrix
\[
\kbordermatrix{
& \o \\
\ou & \xi_{\ou,\oo} Z \\
\uo & \xi_{\uo,\oo} Z
}
\]
and differential
\[
\kbordermatrix{
& \xi_{\ou,\oo} Z & \xi_{\uo,\oo} Z \\
\xi_{\ou,\oo} Z & 0 & \lambda \\
\xi_{\uo,\oo} Z & 0 & 0
}.
\]
The matrix for right action of $e_1$ is
\[
\kbordermatrix{
& \xi_{\ou,\oo} Z & \xi_{\uo,\oo} Z \\
\xi_{\ou,\oo} Z & U_1 & 0 \\
\xi_{\uo,\oo} Z & 0 & U_2
};
\]
the matrix for right action of $e_2$ is zero.
\end{proposition}
\begin{proposition}\label{prop:EDualBeforeMiddleEasy}
The dg bimodule $\Lambda_{\midd} \boxtimes F^{\vee}$ has primary matrix
\[
\kbordermatrix{
& \o \\
\ou & X \xi_{\u,\o} \\
\uo & Y \xi_{\u,\o}
}
\]
and differential
\[
\kbordermatrix{
& X \xi_{\u,\o} & Y \xi_{\u,\o} \\
X \xi_{\u,\o} & 0 & \lambda \\
Y \xi_{\u,\o} & 0 & 0
}.
\]
The matrix for the right action of $e_1$ is
\[
\kbordermatrix{
& X \xi_{\u,\o} & Y \xi_{\u,\o} \\
X \xi_{\u,\o} & U_1 & 0 \\
Y \xi_{\u,\o} & 0 & U_2
};
\]
the matrix for the right action of $e_2$ is zero.
\end{proposition}
\begin{definition}\label{def:YEasy2RepAlpha}
We will write $\Lambda$ for the 1-morphism between 2-representations of $\U^-$ given by $(\Lambda, \varphi)$, where
\[
\varphi\co \Lambda \boxtimes F^{\vee} \to F^{\vee} \boxtimes \Lambda
\]
is zero as a map from $0$ to $F^{\vee} \boxtimes \Lambda_{\midd}$ and is given by the matrix
\[
\kbordermatrix{
& X \xi_{\u,\o} & Y \xi_{\u,\o} \\
\xi_{\ou,\oo} Z & 1 & 0 \\
\xi_{\uo,\oo} Z & 0 & 1
}
\]
as a map from $\Lambda_{\midd} \boxtimes F^{\vee}$ to $F^{\vee} \boxtimes \Lambda_{\low}$. By Propositions~\ref{prop:EDualAfterLowerEasy} and \ref{prop:EDualBeforeMiddleEasy}, $\varphi$ is a closed morphism of dg bimodules. Note that $\varphi$ is not an isomorphism, but it is a homotopy equivalence by Proposition~\ref{prop:EDualAfterMiddleEasy}.
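Concretely, $\varphi$ has the identity matrix with respect to the bases above, and the matrices in Propositions~\ref{prop:EDualAfterLowerEasy} and \ref{prop:EDualBeforeMiddleEasy} for the differentials and right $e_1$-actions of the two bimodules coincide; for example,
\[
\partial\big(\varphi(Y \xi_{\u,\o})\big) = \partial(\xi_{\uo,\oo} Z) = \lambda\, \xi_{\ou,\oo} Z = \varphi\big(\lambda\, X \xi_{\u,\o}\big) = \varphi\big(\partial(Y \xi_{\u,\o})\big),
\]
and similarly for the right actions of $e_1$.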
\end{definition}
Because the square
\[
\xymatrix{
0 = \Lambda_{\upp} \boxtimes (F^{\vee})^{\boxtimes 2} \ar[rr]^{\varphi \boxtimes \id_{F^{\vee}}} \ar[d]_{\id_{\Lambda_{\upp}} \boxtimes \tau} && F^{\vee} \boxtimes \Lambda_{\midd} \boxtimes F^{\vee} \ar[rr]^{\id_{F^{\vee}} \boxtimes \varphi} && (F^{\vee})^{\boxtimes 2} \boxtimes \Lambda_{\low} \ar[d]^{\tau \boxtimes \id_{\Lambda_{\low}}} \\
0 = \Lambda_{\upp} \boxtimes (F^{\vee})^{\boxtimes 2} \ar[rr]^{\varphi \boxtimes \id_{F^{\vee}}} && F^{\vee} \boxtimes \Lambda_{\midd} \boxtimes F^{\vee} \ar[rr]^{\id_{F^{\vee}} \boxtimes \varphi} && (F^{\vee})^{\boxtimes 2} \boxtimes \Lambda_{\low}
}
\]
automatically commutes ($\Lambda_{\upp} = 0$), $(\Lambda,\varphi)$ is a valid $1$-morphism.
\section{Bimodule for the Y-shaped trivalent vertex}\label{sec:HardVertex}
Now we define a bimodule for the other type of trivalent vertex, based on the Heegaard diagram in Figure~\ref{fig:HardVertex} and categorifying the map $1_{1,1} F_{\gl(2)} 1_{2,0} = 1_{1,1} E_{\gl(2)} 1_{0,2}$ from $\wedge_q^2 V$ to $V^{\otimes 2}$ arising from skew Howe duality (see Appendix~\ref{sec:AppendixSkewHoweMaps}). As in Section~\ref{sec:EasyVertex}, the bimodule is defined to be zero in the upper weight space $2\varepsilon_1$ (see the beginning of Section~\ref{sec:EasyVertex}); we describe the middle ($\varepsilon_1 + \varepsilon_2$) and bottom ($2\varepsilon_2$) weight spaces below.
\begin{figure}
\includegraphics[scale=0.4]{hard_vertex_V3.eps}
\caption{Heegaard diagram for the ``Y-shaped'' trivalent vertex.}
\label{fig:HardVertex}
\end{figure}
\subsection{Middle weight space, fully unsimplified version}
We first define a subsidiary DA bimodule $Y'_{\midd}$, after introducing a useful convention for secondary matrices.
\begin{convention}
Any unspecified indices appearing in exponents of secondary matrix entries, for example the indices $k$ and $l$ below, are assumed to range over all nonnegative values. For example, the top-left entry of the secondary matrix in Definition~\ref{def:YPrimeHard} is the infinite sum $w_1 \otimes U_1 + w_1^2 \otimes U_1^2 + \cdots$. We start the indexing at $k+1$ rather than $k$ because we do not include $\id \otimes \id$ terms in secondary matrices. Sometimes, to simplify notation, we will index a secondary matrix entry in such a way that $\id \otimes \id$ is a term, but such $\id \otimes \id$ terms should be implicitly omitted.
\end{convention}
\begin{definition}\label{def:YPrimeHard} The DA bimodule $Y'_{\midd}$ over
\[
((\A_2)_{\varepsilon_1 + \varepsilon_2} \otimes \F_2[w_1,w_2], (\A_{1,1})_{\varepsilon_1 + \varepsilon_2} \otimes \F_2[w_1, w_2])
\]
has primary matrix
\[
\kbordermatrix{
& \ou & \uo \\
\u & A \quad A' & B \quad B'
}
\]
and secondary matrix
\[
\kbordermatrix{
& A & A' & B & B' \\
A & w_1^{k+1} \otimes U_1^{k+1} & 0 & w_2^k \otimes (U_2^{k+1},\lambda) & w_1^l w_2^k \otimes (U_2^{k+1}, \lambda, U_1^{l+1}) \\
A' & w_2 & w_1^{k+1} \otimes U_1^{k+1} & 1 \otimes \lambda & w_1^k \otimes (\lambda, U_1^{k+1}) \\
B & 0 & 0 & w_2^{k+1} \otimes U_2^{k+1} & 0 \\
B' & 0 & 0 & w_1 & w_2^{k+1} \otimes U_2^{k+1}
}.
\]
The structure maps are also $w_1$- and $w_2$-equivariant, in the sense that each entry of the above matrix representing a $\delta^1_i$ term for $i > 1$ gives rise to further entries in which both the input and the output generators are multiplied by $w_1^p w_2^q$, for all $p, q \geq 0$ (the exponents of $w_1$ and of $w_2$ on the output are the sums of the corresponding exponents on the input). Equivalently, we could treat $\F_2[w_1,w_2]$ rather than $\F_2$ as the ground ring, in which case these extra entries are not required.
We define gradings as follows:
\begin{itemize}
\item $\deg^q(A) = -2$, $\deg^h(A) = 2$
\item $\deg^q(A') = 0$, $\deg^h(A') = 1$
\item $\deg^q(B) = -1$, $\deg^h(B) = 1$
\item $\deg^q(B') = 1$, $\deg^h(B') = 0$
\item $\deg^q(w_1) = -2$, $\deg^h(w_1) = 2$
\item $\deg^q(w_2) = -2$, $\deg^h(w_2) = 2$.
\end{itemize}
\end{definition}
\begin{figure}
\includegraphics[scale=0.6]{hard_vertex_gens_V2.eps}
\caption{Generators of $Y'_{\midd}$ in terms of intersection points.}
\label{fig:HardVertexGens}
\end{figure}
The correspondence between sets of intersection points in the Heegaard diagram of Figure~\ref{fig:HardVertex} and generators of $Y'_{\midd}$ (primary matrix entries) is shown in Figure~\ref{fig:HardVertexGens}. Not all sets of intersection points in the diagram appear in Figure~\ref{fig:HardVertexGens}; the rest contribute to the summand $Y'_{\low}$ of $Y'$ in the lower weight space and appear in Figure~\ref{fig:HardVertexGensLower}. The domains giving rise to secondary matrix entries are shown in Figure~\ref{fig:HardVertexDomains}.
\begin{figure}
\includegraphics[scale=0.6]{hard_vertex_domains_V5.eps}
\caption{Domains giving rise to the secondary matrix for $Y'_{\midd}$.}
\label{fig:HardVertexDomains}
\end{figure}
\begin{proposition}\label{prop:HardFullyUnsimplifiedMiddleWellDefined}
The DA bimodule $Y'_{\midd}$ is well-defined.
\end{proposition}
\begin{proof}
Treating $\F_2[w_1,w_2]$ as the ground ring, the squared secondary matrix is
\[
\resizebox{\textwidth}{!}{
\kbordermatrix{
& A & A' & B & B' \\
A & w_1^{k+1} \otimes U_1^{k+1} & 0 & w_2^k \otimes (U_2^{k+1},\lambda) & w_1^l w_2^k \otimes (U_2^{k+1}, \lambda, U_1^{l+1}) \\
A' & w_2 & w_1^{k+1} \otimes U_1^{k+1} & 1 \otimes \lambda & w_1^k \otimes (\lambda, U_1^{k+1}) \\
B & 0 & 0 & w_2^{k+1} \otimes U_2^{k+1} & 0 \\
B' & 0 & 0 & w_1 & w_2^{k+1} \otimes U_2^{k+1}
}
\kbordermatrix{
& A & A' & B & B' \\
A & w_1^{k+1} \otimes U_1^{k+1} & 0 & w_2^k \otimes (U_2^{k+1},\lambda) & w_1^l w_2^k \otimes (U_2^{k+1}, \lambda, U_1^{l+1}) \\
A' & w_2 & w_1^{k+1} \otimes U_1^{k+1} & 1 \otimes \lambda & w_1^k \otimes (\lambda, U_1^{k+1}) \\
B & 0 & 0 & w_2^{k+1} \otimes U_2^{k+1} & 0 \\
B' & 0 & 0 & w_1 & w_2^{k+1} \otimes U_2^{k+1}
}
}
\]
which equals
\[
\resizebox{\textwidth}{!}{
\kbordermatrix{
& A & A' & B & B' \\
A & w_1^{k+p+2} \otimes (U_1^{k+1}, U_1^{p+1}) & 0 & w_2^{k+p+1} \otimes (U_2^{k+1},U_2^{p+1},\lambda) & \begin{matrix} w_1^{k+p+1} w_2^q \otimes (U_2^{q+1}, \lambda, U_1^{k+1}, U_1^{p+1}) \\ + w_1^k w_2^{l+p+1} \otimes (U_2^{l+1}, U_2^{p+1}, \lambda, U_1^{k+1}) \end{matrix} \\
A' & 0 & w_1^{k+p+2} \otimes (U_1^{k+1}, U_1^{p+1}) & 0 & w_1^{k+p+1} \otimes (\lambda, U_1^{k+1}, U_1^{p+1}) \\
B & 0 & 0 & w_2^{k+p+2} \otimes (U_2^{k+1},U_2^{p+1}) & 0 \\
B' & 0 & 0 & 0 & w_2^{k+p+2} \otimes (U_2^{k+1},U_2^{p+1})
}.
}
\]
This matrix is also the matrix of multiplication terms for $Y'_{\midd}$, so $Y'_{\midd}$ satisfies the DA bimodule structure relations.
\end{proof}
From $Y'_{\midd}$, we can get an infinitely generated DA bimodule over $((\A_2)_{\varepsilon_1 + \varepsilon_2}, (\A_{1,1})_{\varepsilon_1 + \varepsilon_2})$ by restricting both the left and right actions on $Y'_{\midd}$ via the inclusions of $\A_2$ and $\A_{1,1}$ into $\A_2 \otimes \F_2[w_1,w_2]$ and $\A_{1,1} \otimes \F_2[w_1, w_2]$. The primary matrix of the result is
\[
\kbordermatrix{
& \ou & \uo \\
\u & w_1^k w_2^l A \quad w_1^k w_2^l A' & w_1^k w_2^l B \quad w_1^k w_2^l B'
}
\]
(letting $k$ and $l$ range over all nonnegative integers). The secondary matrix is
\[
\resizebox{\textwidth}{!}{
\kbordermatrix{
& w_1^k w_2^l A & w_1^k w_2^l A' & w_1^k w_2^l B & w_1^k w_2^l B' \\
A & w_1^{k+p+1} w_2^l \otimes U_1^{p+1} & 0 & w_1^k w_2^{l+p} \otimes (U_2^{p+1},\lambda) & w_1^{k+p} w_2^{l+q} \otimes (U_2^{q+1}, \lambda, U_1^{p+1}) \\
A' & w_1^k w_2^{l+1} & w_1^{k+p+1} w_2^l \otimes U_1^{p+1} & w_1^k w_2^l \otimes \lambda & w_1^{k+p} w_2^l \otimes (\lambda, U_1^{p+1}) \\
B & 0 & 0 & w_1^k w_2^{l+p+1} \otimes U_2^{p+1} & 0 \\
B' & 0 & 0 & w_1^{k+1} w_2^l & w_1^k w_2^{l+p+1} \otimes U_2^{p+1}
};
}
\]
multiplication by $w_1$ or $w_2$ on the ``output side'' of a secondary matrix entry should not be viewed as multiplication by an algebra element, but rather as specifying in which row of the (infinite) secondary matrix the entry belongs.
Motivated by the two-term complexes appearing in \cite[proof of Theorem 4.1]{OSzCube} (see also e.g. \cite[Theorem 2.3]{ManolescuCube}), the DA bimodule $Y_{\fu,\midd}$ is defined to be
\[
\xymatrix{
q^{-2} Y'_{\midd}[-1] \ar@/^1.5pc/[rr]^{\Xi} & \oplus & Y'_{\midd}
}
\]
where $\Xi$ is the DA bimodule endomorphism of $Y'_{\midd}$ given by
\[
\kbordermatrix{
& A & A' & B & B' \\
A & w_1 + w_2 + e_1 & 0 & 0 & 0 \\
A' & 0 & w_1 + w_2 + e_1 & 0 & 0 \\
B & 0 & 0 & w_1 + w_2 + e_1 & 0 \\
B' & 0 & 0 & 0 & w_1 + w_2 + e_1
}.
\]
\begin{proposition}
The DA bimodule endomorphism $\Xi$ is closed.
\end{proposition}
\begin{proof}
The product of the matrix for the endomorphism with the secondary matrix for $Y'_{\midd}$, in either order, equals the secondary matrix for $Y'_{\midd}$ with the output of each entry multiplied by $w_1 + w_2 + e_1$.
\end{proof}
As with $Y'_{\midd}$, we can view $Y_{\fu,\midd}$ as an infinitely generated DA bimodule over
\[
((\A_2)_{\varepsilon_1+\varepsilon_2}, (\A_{1,1})_{\varepsilon_1 + \varepsilon_2}).
\]
We will write the generators of the two summands of $Y_{\fu,\midd}$ as
\[
\{A_1, A'_1, B_1, B'_1\}, \quad \{A_2, A'_2, B_2, B'_2\}
\]
respectively, where $\Xi$ maps the ``$2$'' summand to the ``$1$'' summand.
\begin{proposition}\label{prop:HardFullyUnsimplifiedMiddleHomology}
As a left dg module over $(\A_2)_{\varepsilon_1 + \varepsilon_2} = \F_2[e_1]$, the homology of $Y_{\fu,\midd}$ is a free module of rank two generated by $A'_1 = w_1^0 w_2^0 A'_1$ and $B'_1$.
\end{proposition}
\begin{proof}
As a non-differential left module over $\F_2[e_1]$, $Y_{\fu,\midd}$ is free with one generator for each pair of a monomial $w_1^k w_2^l$ ($k,l \geq 0$) and an element of $\{A_i,A'_i,B_i,B'_i : i = 1,2\}$. The kernel of the differential on $Y_{\fu,\midd}$ consists of the summands corresponding to $w_1^k w_2^l A'_1$ and $w_1^k w_2^l B'_1$. In the homology of $Y_{\fu,\midd}$, we have $w_2 = w_1 + e_1$ (equivalently, $w_1 = w_2 + e_1$), so the homology is generated by elements $w_1^k A'_1$ and $w_2^k B'_1$. Furthermore, $w_1^{k+1} A'_1 = e_1 w_1^k A'_1$ and $w_2^{k+1} B'_1 = e_1 w_2^k B'_1$ modulo the image of the differential, so the homology is generated by the two elements $A'_1$ and $B'_1$. The image of the differential intersected with the $\F_2[e_1]$-submodule generated by $A'_1$ and $B'_1$ is trivial, proving the proposition.
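Explicitly, the first of these relations can be seen as follows: the secondary matrix for $Y'_{\midd}$ and the definition of the mapping cone give $\partial(w_1^k A_1) = w_1^k w_2 A'_1$ and $\partial(w_1^k A'_2) = (w_1 + w_2 + e_1)\, w_1^k A'_1$, so that over $\F_2$
\[
w_1^{k+1} A'_1 + e_1 w_1^k A'_1 = (w_1 + e_1)\, w_1^k A'_1 = \partial\big(w_1^k A'_2\big) + \partial\big(w_1^k A_1\big).
\]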
\end{proof}
\subsection{Middle weight space, half-simplified version}
We now simplify $Y_{\fu,\midd}$, resulting in a half-simplified version $Y_{\hs,\midd}$. The equivariance over $\F_2[w_1,w_2]$ will go away, and we will treat $Y_{\hs,\midd}$ as an infinitely generated DA bimodule over $((\A_2)_{\varepsilon_1 + \varepsilon_2}, (\A_{1,1})_{\varepsilon_1 + \varepsilon_2})$. We also streamline notation by relabeling $A'_1, B'_1$ as $A',B'$.
\begin{definition}
The DA bimodule $Y_{\hs,\midd}$ has primary matrix
\[
\kbordermatrix{
& \ou & \uo \\
\u & w_1^k A \quad w_1^k A' & w_2^k B \quad w_2^k B'
}
\]
(where $k$ ranges over nonnegative integers) and secondary matrix
\[
\resizebox{\textwidth}{!}{
\kbordermatrix{
& w_1^k A & w_1^k A' & w_2^k B & w_2^k B' \\
A & w_1^{k+p+1} \otimes U_1^{p+1} & 0 & (w_2 + e_1)^k w_2^p \otimes (U_2^{p+1},\lambda) & (w_2 + e_1)^{k+q} w_2^p \otimes (U_2^{p+1}, \lambda, U_1^{q+1}) \\
A' & w_1^k(w_1 + e_1) & w_1^{k+p+1} \otimes U_1^{p+1} & (w_2 + e_1)^k \otimes \lambda & (w_2 + e_1)^{k+p} \otimes (\lambda, U_1^{p+1}) \\
B & 0 & 0 & w_2^{k+p+1} \otimes U_2^{p+1} & 0 \\
B' & 0 & 0 & w_2^k(w_2 + e_1) & w_2^{k+p+1} \otimes U_2^{p+1}
}.
}
\]
The $q$- and $h$-degrees of the generators are the same as they were in $Y_{\fu,\midd}$.
\end{definition}
One can check that $Y_{\hs,\midd}$ can be obtained from $Y_{\fu,\midd}$ by the simplification procedure from Section~\ref{sec:PrelimSimplifying}, so $Y_{\hs,\midd}$ is well-defined and homotopy equivalent to $Y_{\fu,\midd}$. We get the following corollary from Proposition~\ref{prop:HardFullyUnsimplifiedMiddleHomology}.
\begin{corollary}\label{prop:HardHalfSimplifiedMiddleHomology}
As a left dg module over $\F_2[e_1]$, the homology of $Y_{\hs,\midd}$ is a free module of rank two generated by $A' = w_1^0 A'$ and $B' = w_2^0 B'$.
\end{corollary}
\subsection{Middle weight space, fully simplified version}\label{sec:HardVertexMiddleWSSimpl}
We now simplify the bimodule $Y_{\hs,\midd}$ in the middle weight space even further, giving a finitely generated DA bimodule $Y_{\fs,\midd}$ over $((\A_2)_{\varepsilon_1 + \varepsilon_2}, (\A_{1,1})_{\varepsilon_1 + \varepsilon_2})$. The primary and secondary matrices of the result $Y_{\fs,\midd}$ are described below. One can check that these matrices arise from the simplification procedure of Section~\ref{sec:PrelimSimplifying}; note that we are canceling infinitely many disjoint pairs $(w_1^k A, w_1^{k+1} A')$ and $(w_2^k B, w_2^{k+1} B')$.
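At the level of finite-dimensional complexes over $\F_2$, a single such cancellation is the standard ``Gaussian elimination'' lemma: if $\partial x = y + (\text{other terms})$, one deletes $x$ and $y$ and adds, for each remaining pair of generators, the composite arrow through the canceled pair. The Python sketch below is a toy model of one cancellation for an ordinary finite complex only (the procedure of Section~\ref{sec:PrelimSimplifying} also adjusts the higher DA actions); the differential is recorded as a matrix over $\F_2$ whose rows index outputs and whose columns index inputs.

```python
def cancel(D, x, y):
    """Cancel the pair (x, y) in a differential D over F_2, where y appears
    in d(x) with coefficient 1 (i.e. D[y][x] == 1).  The reduced differential
    on the remaining generators is D'[i][j] = D[i][j] + D[i][x] * D[y][j]."""
    assert D[y][x] == 1
    keep = [i for i in range(len(D)) if i not in (x, y)]
    return [[(D[i][j] + D[i][x] * D[y][j]) % 2 for j in keep] for i in keep]

# Toy example: generators (x, y, a, b) = (0, 1, 2, 3) with
# d(x) = y + a and d(b) = y; one checks directly that D squares to zero.
D = [
    [0, 0, 0, 0],
    [1, 0, 0, 1],
    [1, 0, 0, 0],
    [0, 0, 0, 0],
]

# Canceling (x, y) creates the zig-zag arrow d(b) = a on the remaining
# generators (a, b), a complex with the same (trivial) homology.
Dred = cancel(D, 0, 1)
assert Dred == [[0, 1], [0, 0]]
```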
\begin{definition}
The DA bimodule $Y_{\fs,\midd}$ has primary matrix
\[
\kbordermatrix{
& \ou & \uo \\
\u & A' & B'
}
\]
and secondary matrix
\[
\kbordermatrix{
& A' & B' \\
A' & e_1^{k+1} \otimes U_1^{k+1} & \begin{matrix} e_1^k \otimes (\lambda, U_1^{k+1}) \\+ e_1^k \otimes (U_2^{k+1},\lambda)\end{matrix} \\
B' & 0 & e_1^{k+1} \otimes U_2^{k+1}
}.
\]
As before, we have $\deg^q(A') = 0$, $\deg^h(A') = 1$, $\deg^q(B') = 1$, and $\deg^h(B') = 0$.
\end{definition}
\subsection{Lower weight space}
Next we define the bimodule $Y'_{\low}$ in the lower weight space. Unlike with $Y'_{\midd}$ in the middle weight space, $Y'_{\low}$ will be an ordinary dg bimodule with no higher $A_{\infty}$ actions.
\begin{definition}
The DA bimodule $Y'_{\low}$ over
\[
((\A_2)_{2\varepsilon_2} \otimes \F_2[w_1,w_2], (\A_{1,1})_{2\varepsilon_2} \otimes \F_2[w_1,w_2])
\]
has primary matrix
\[
\kbordermatrix{
& \oo \\
\o & C \quad C'
}.
\]
We set
\begin{itemize}
\item $\deg^q(C) = -3$, $\deg^h(C) = 3$,
\item $\deg^q(C') = 1$, $\deg^h(C') = 0$.
\end{itemize}
The secondary matrix is
\[
\kbordermatrix{
& C & C' \\
C & w_1^k w_2^l \otimes U_1^k U_2^l & 0 \\
C' & w_1 w_2 + e_2 & w_1^k w_2^l \otimes U_1^k U_2^l
}
\]
(recall that we implicitly exclude $\id \otimes \id$ if it appears in a secondary matrix entry). Equivalently, the differential has matrix
\[
\kbordermatrix{
& C & C' \\
C & 0 & 0 \\
C' & w_1 w_2 + e_2 & 0
}
\]
and right multiplication by $U_i$ has matrix
\[
\kbordermatrix{
& C & C' \\
C & w_i & 0 \\
C' & 0 & w_i
}
\]
for $i \in \{1,2\}$. The maps given by the matrix entries are assumed to be $w_1$- and $w_2$-equivariant as in Definition~\ref{def:YPrimeHard}; equivalently, right multiplication by $w_i$ has the same matrix as right multiplication by $U_i$.
\end{definition}
\begin{proposition}
The DA bimodule $Y'_{\low}$ is well-defined.
\end{proposition}
\begin{proof}
The above matrices for the differential and the action of algebra generators commute with each other. One can check that these matrices together amount to the same structure as the given secondary matrix.
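Concretely, writing $D$ for the matrix of the differential and $M_i$ for the matrix of right multiplication by $U_i$, this check amounts to
\[
D^2 = 0, \qquad D M_i = M_i D = \kbordermatrix{
& C & C' \\
C & 0 & 0 \\
C' & w_i(w_1 w_2 + e_2) & 0
}, \qquad M_1 M_2 = M_2 M_1,
\]
since $M_i$ is the scalar matrix $w_i \cdot I$.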
\end{proof}
\begin{figure}
\includegraphics[scale=0.6]{hard_vertex_gens_lower_V2.eps}
\caption{Generators of $Y'_{\low}$ in terms of intersection points.}
\label{fig:HardVertexGensLower}
\end{figure}
\begin{figure}
\includegraphics[scale=0.6]{hard_vertex_domains_lower_V3.eps}
\caption{Domains for the secondary matrix of $Y'_{\low}$.}
\label{fig:HardVertexDomainsLower}
\end{figure}
Figure~\ref{fig:HardVertexGensLower} shows the generators $C$ and $C'$ as intersection points in the Heegaard diagram of Figure~\ref{fig:HardVertex}; Figure~\ref{fig:HardVertexDomainsLower} shows the domains giving rise to the secondary matrix.
The bimodule $Y_{\fu,\low}$ is defined to be
\[
\xymatrix{
q^{-2} Y'_{\low}[-1] \ar@/^1.5pc/[rr]^{\Xi} & \oplus & Y'_{\low}
}
\]
where $\Xi$ is the endomorphism of $Y'_{\low}$ with matrix
\[
\kbordermatrix{
& C & C' \\
C & w_1 + w_2 + e_1 & 0 \\
C' & 0 & w_1 + w_2 + e_1
}.
\]
Thus, the secondary matrix of $Y_{\fu,\low}$ can be written as
\[
\kbordermatrix{
& C_2 & C'_2 & C_1 & C'_1 \\
C_2 & w_1^k w_2^l \otimes U_1^k U_2^l & 0 & 0 & 0 \\
C'_2 & w_1 w_2 + e_2 & w_1^k w_2^l \otimes U_1^k U_2^l & 0 & 0 \\
C_1 & w_1 + w_2 + e_1 & 0 & w_1^k w_2^l \otimes U_1^k U_2^l & 0 \\
C'_1 & 0 & w_1 + w_2 + e_1 & w_1 w_2 + e_2 & w_1^k w_2^l \otimes U_1^k U_2^l \\
}.
\]
Equivalently, the differential has matrix
\[
\kbordermatrix{
& C_2 & C'_2 & C_1 & C'_1 \\
C_2 & 0 & 0 & 0 & 0 \\
C'_2 & w_1 w_2 + e_2 & 0 & 0 & 0 \\
C_1 & w_1 + w_2 + e_1 & 0 & 0 & 0 \\
C'_1 & 0 & w_1 + w_2 + e_1 & w_1 w_2 + e_2 & 0 \\
}
\]
and right multiplication by $U_i$ or $w_i$ has matrix
\[
\kbordermatrix{
& C_2 & C'_2 & C_1 & C'_1 \\
C_2 & w_i & 0 & 0 & 0 \\
C'_2 & 0 & w_i & 0 & 0 \\
C_1 & 0 & 0 & w_i & 0 \\
C'_1 & 0 & 0 & 0 & w_i \\
}
\]
for $i \in \{1,2\}$. As with $Y'_{\midd}$ and $Y_{\fu,\midd}$, we can view $Y'_{\low}$ and $Y_{\fu,\low}$ as infinitely generated DA bimodules over $((\A_2)_{2\varepsilon_2}, (\A_{1,1})_{2\varepsilon_2})$. We will take this perspective and write the generators of $Y_{\fu,\low}$ as
\[
\{C_1, C'_1, C_2, C'_2\}
\]
times monomials in $w_1$ and $w_2$.
\begin{proposition}
As a left dg module over $(\A_2)_{2\varepsilon_2} = \F_2[e_1,e_2]$, $H_*(Y_{\fu,\low})$ is free of rank two. One can take $\{w_1 C'_1, C'_1\}$ to generate the homology.
\end{proposition}
\begin{proof}
The kernel of the differential is spanned by the elements $w_1^k w_2^l C'_1$. In homology, we have the relation $w_2 C'_1 = (w_1 + e_1)C'_1$, so the homology is generated by elements $w_1^k C'_1$. We also have the relation $(w_1^2 + e_1 w_1)C'_1 = w_1 w_2 C'_1 = e_2 C'_1$, so the homology is generated by the two elements $C'_1$ and $w_1 C'_1$. The submodule generated by these two elements has zero intersection with the image of the differential, so we can identify the homology with this submodule.
\end{proof}
The left submodule of $Y_{\fu,\low}$ generated by $\{w_1 C'_1, C'_1 \}$ is not closed under right multiplication. However, the left submodule generated by the remaining generators is closed under right multiplication, and we can view $H_*(Y_{\fu,\low})$ as the quotient of $Y_{\fu,\low}$ by this sub-bimodule. It follows that $Y_{\fu,\low}$ is a formal dg bimodule, since the quotient map is a quasi-isomorphism to its homology.
\begin{proposition}\label{prop:ActionsOnHomologyLower}
The right action of $U_1$ on $H_*(Y_{\fu,\low})$ is given by
\[
\kbordermatrix{
& w_1 C'_1 & C'_1 \\
w_1 C'_1 & e_1 & 1 \\
C'_1 & e_2 & 0
},
\]
and the right action of $U_2$ is given by
\[
\kbordermatrix{
& w_1 C'_1 & C'_1 \\
w_1 C'_1 & 0 & 1 \\
C'_1 & e_2 & e_1
}.
\]
\end{proposition}
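As a quick consistency check (not needed for the argument): the two matrices above commute, and over $\F_2$ their sum and product are
\[
\kbordermatrix{
& w_1 C'_1 & C'_1 \\
w_1 C'_1 & e_1 & 0 \\
C'_1 & 0 & e_1
}
\quad\text{and}\quad
\kbordermatrix{
& w_1 C'_1 & C'_1 \\
w_1 C'_1 & e_2 & 0 \\
C'_1 & 0 & e_2
},
\]
respectively, matching the relations $(w_1 + w_2)C'_1 = e_1 C'_1$ and $w_1 w_2 C'_1 = e_2 C'_1$ from the proof above (recall that right multiplication by $U_i$ agrees with right multiplication by $w_i$).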
We can use the above matrices to define the fully simplified bimodule $Y_{\fs,\low}$; we will not need a half-simplified version. As above, we streamline notation by relabeling $C'_1$ as $C'$.
\begin{definition}
The (ordinary) bimodule $Y_{\fs,\low}$ over $((\A_2)_{2\varepsilon_2}, (\A_{1,1})_{2\varepsilon_2})$ has primary matrix
\[
\kbordermatrix{
& \oo \\
\o & w_1 C' \quad C'
}
\]
and right actions of $U_1$ and $U_2$ given by the matrices in Proposition~\ref{prop:ActionsOnHomologyLower}. We have
\[
\deg^q(w_1 C') = -1, \quad \deg^h(w_1 C') = 2, \quad \deg^q(C') = 1, \quad \deg^h(C') = 0.
\]
\end{definition}
\subsection{Decategorification of \texorpdfstring{$Y$}{Y}}
\begin{proposition}
The AD bimodule ${^{\vee}}Y$ categorifies the map from $G_0(\A_2)$ to $G_0(\A_{1,1})$ with matrix
\[
\kbordermatrix{
& {[S_{\u}]} & {[S_{\o}]} \\
{[S_{\uu}]} & 0 & 0 \\
{[S_{\ou}]} & -1 & 0 \\
{[S_{\uo}]} & q^{-1} & 0 \\
{[S_{\oo}]} & 0 & q + q^{-1}
}.
\]
This map can be identified with $1_{1,1} F_{\gl(2)} 1_{2,0}$ (or equivalently with $1_{1,1} E_{\gl(2)} 1_{0,2}$) mapping from $\wedge_q^2 V$ to $V^{\otimes 2}$.
\end{proposition}
\subsection{2-representation morphism structure}
We will give
\[
Y_{\fs} \coloneqq Y_{\fs,\midd} \oplus Y_{\fs,\low}
\]
the structure of a $1$-morphism of $2$-representations of $\U^-$ by defining an isomorphism
\[
\varphi \co Y_{\fs} \boxtimes F^{\vee} \to F^{\vee} \boxtimes Y_{\fs}.
\]
The two bimodules in question are only nonzero as bimodules over $((\A_2)_{\varepsilon_1 + \varepsilon_2}, (\A_{1,1})_{2\varepsilon_2})$, so we need to give an isomorphism
\[
\varphi \co Y_{\fs,\midd} \boxtimes F^{\vee} \to F^{\vee} \boxtimes Y_{\fs,\low}.
\]
Using the matrix-based formulas for the box tensor product of DA bimodules in Section~\ref{sec:BoxTensor}, we can describe both $Y_{\fs,\midd} \boxtimes F^{\vee}$ and $F^{\vee} \boxtimes Y_{\fs,\low}$.
\begin{proposition}
The (a priori DA) bimodule $Y_{\fs,\midd} \boxtimes F^{\vee}$ has primary matrix
\[
\kbordermatrix{
& \oo \\
\u & A' \xi_{\ou,\oo} \quad B' \xi_{\uo,\oo}
}
\]
and secondary matrix
\[
\kbordermatrix{
& A' \xi_{\ou,\oo} & B' \xi_{\uo,\oo} \\
A' \xi_{\ou,\oo} & e_1^{k+1} \otimes U_1^{k+1} & \begin{matrix} e_1^k \otimes U_1^{k+1} \\+ e_1^k \otimes U_2^{k+1} \end{matrix} \\
B' \xi_{\uo,\oo} & 0 & e_1^{k+1} \otimes U_2^{k+1}
}.
\]
\end{proposition}
This bimodule has no higher $A_{\infty}$ actions and no differential. The right action of $U_1$ is given by the matrix
\[
\kbordermatrix{
& A' \xi_{\ou,\oo} & B' \xi_{\uo,\oo} \\
A' \xi_{\ou,\oo} & e_1 & 1 \\
B' \xi_{\uo,\oo} & 0 & 0
},
\]
and the right action of $U_2$ is given by the matrix
\[
\kbordermatrix{
& A' \xi_{\ou,\oo} & B' \xi_{\uo,\oo} \\
A' \xi_{\ou,\oo} & 0 & 1 \\
B' \xi_{\uo,\oo} & 0 & e_1
}.
\]
\begin{proposition}
The DA (or ordinary) bimodule $F^{\vee} \boxtimes Y_{\fs,\low}$ has primary matrix
\[
\kbordermatrix{
& \oo \\
\u & \xi_{\u,\o} (w_1 C') \quad \xi_{\u,\o} C'
}.
\]
The differential is zero; the matrix for the right action of $U_1$ is
\[
\kbordermatrix{
& \xi_{\u,\o} w_1 C' & \xi_{\u,\o} C' \\
\xi_{\u,\o} w_1 C' & e_1 & 1 \\
\xi_{\u,\o} C' & 0 & 0
},
\]
and the matrix for the right action of $U_2$ is
\[
\kbordermatrix{
& \xi_{\u,\o} w_1 C' & \xi_{\u,\o} C' \\
\xi_{\u,\o} w_1 C' & 0 & 1 \\
\xi_{\u,\o} C' & 0 & e_1
}.
\]
\end{proposition}
\begin{definition}\label{def:YHardFS2RepAlpha}
We will write $Y_{\fs}$ for the 1-morphism between 2-representations of $\U^-$ given by $(Y_{\fs}, \varphi)$, where
\[
\varphi\co Y_{\fs} \boxtimes F^{\vee} \to F^{\vee} \boxtimes Y_{\fs}
\]
is given by the matrix
\[
\kbordermatrix{
& A' \xi_{\ou,\oo} & B' \xi_{\uo,\oo} \\
\xi_{\u,\o} w_1 C' & 1 & 0 \\
\xi_{\u,\o} C' & 0 & 1
}
\]
as a map from $Y_{\fs,\midd} \boxtimes F^{\vee}$ to $F^{\vee} \boxtimes Y_{\fs,\low}$ ($\varphi$ is zero as a map between the other summands).
\end{definition}
As in Section~\ref{sec:Lambda2RepMorphism}, the square
\[
\xymatrix{
0 = Y_{\upp,\fs} \boxtimes (F^{\vee})^{\boxtimes 2} \ar[rr]^{\varphi \boxtimes \id_{F^{\vee}}} \ar[d]_{\id_{Y_{\upp,\fs}} \boxtimes \tau} && F^{\vee} \boxtimes Y_{\midd,\fs} \boxtimes F^{\vee} \ar[rr]^{\id_{F^{\vee}} \boxtimes \varphi} && (F^{\vee})^{\boxtimes 2} \boxtimes Y_{\low,\fs} \ar[d]^{\tau \boxtimes \id_{Y_{\low,\fs}}} \\
0 = Y_{\upp,\fs} \boxtimes (F^{\vee})^{\boxtimes 2} \ar[rr]^{\varphi \boxtimes \id_{F^{\vee}}} && F^{\vee} \boxtimes Y_{\midd,\fs} \boxtimes F^{\vee} \ar[rr]^{\id_{F^{\vee}} \boxtimes \varphi} && (F^{\vee})^{\boxtimes 2} \boxtimes Y_{\low,\fs}
}
\]
automatically commutes ($Y_{\upp,\fs} = 0$), so $(Y_{\fs},\varphi)$ is a valid $1$-morphism.
\section{Bimodules for compositions of two trivalent vertices}\label{sec:SimpleWebs}
\subsection{The relevant Heegaard diagrams}
Figure~\ref{fig:SingularCrossing} shows two webs, a ``singular crossing'' composed of two trivalent vertices and the other possible composition of these vertices, together with two Heegaard diagrams. These diagrams are obtained from the two possible ways of gluing the diagram of Figure~\ref{fig:HardVertex} to the diagram of Figure~\ref{fig:EasyVertex} vertically, after some handleslides and destabilizations. In this section, we take tensor products of the bimodules for the hard and easy trivalent vertices in both directions to get bimodules for the singular crossing and the other composite web.
\begin{figure}
\includegraphics[scale=0.4]{singular_crossing_V5.eps}
\caption{Heegaard diagrams for compositions of two trivalent vertices}
\label{fig:SingularCrossing}
\end{figure}
\begin{remark}
Like the diagram of Figure~\ref{fig:HardVertex}, the diagrams in Figure~\ref{fig:SingularCrossing} have regions with multiple $X$ or $O$ basepoints, which are typically not allowed for the multi-pointed diagrams used in knot and link Floer homology (see e.g. \cite{HFLOrig}). The diagram for a singular crossing, however, is more-or-less standard; Figure~\ref{fig:OSSzDiag} shows a local version of the Heegaard diagram for a singular crossing defined in \cite{OSSz} (see also \cite{ManionDiagrams, ManionSingular}). It is reasonable to suppose that the two diagrams of Figure~\ref{fig:SingularCrossing} represent the complements of their corresponding webs in $D^2 \times I$ with certain sutured structures on the boundary (see Figure~\ref{fig:Cobordisms} and Remark~\ref{rem:WeightedSutures} in the introduction), although it would be good to have a general theory of such generalized diagrams and the sutured 3-manifolds or cobordisms they represent.
\end{remark}
\begin{remark}
Bimodules constructed literally from the Heegaard diagrams in Figure~\ref{fig:SingularCrossing} would be more like the fully unsimplified bimodules considered in Section~\ref{sec:HardVertex}, due to the presence of $O$ basepoints in the diagram. We will focus on the fully simplified bimodules instead.
\end{remark}
\begin{figure}
\includegraphics[scale=0.6]{ossz_diagram_BL_V2.eps}
\caption{Ozsv{\'a}th--Stipsicz--Szab{\'o} diagram for a singular crossing (the leftmost two figures are related by handleslides of $\beta$ circles).}
\label{fig:OSSzDiag}
\end{figure}
\subsection{Singular crossing}\label{sec:SingularCrossing}
\begin{definition}
Let
\[
X_{\midd} \coloneqq \Lambda_{\midd} \boxtimes Y_{\fs,\midd}
\]
and
\[
X_{\low} \coloneqq \Lambda_{\low} \boxtimes Y_{\fs,\low}.
\]
\end{definition}
\begin{proposition}
The DA bimodule $X_{\midd}$ has primary matrix
\[
\kbordermatrix{
& \ou & \uo \\
\ou & XA' & XB'\\
\uo & YA' & YB'
}
\]
and secondary matrix
\[
\kbordermatrix{
& XA' & YA' & XB' & YB' \\
XA' & U_1^{k+1} \otimes U_1^{k+1} & \lambda & \begin{matrix} U_1^k \otimes (\lambda, U_1^{k+1}) \\+ U_1^k \otimes (U_2^{k+1}, \lambda)\end{matrix} & 0 \\
YA' & 0 & U_2^{k+1} \otimes U_1^{k+1} & 0 & \begin{matrix} U_2^k \otimes (U_2^{k+1}, \lambda) \\+ U_2^k \otimes (\lambda, U_1^{k+1})\end{matrix} \\
XB' & 0 & 0 & U_1^{k+1} \otimes U_2^{k+1} & \lambda \\
YB' & 0 & 0 & 0 & U_2^{k+1} \otimes U_2^{k+1}
}.
\]
We have
\begin{itemize}
\item $\deg^q(XA') = -1$, $\deg^h(XA') = 2$,
\item $\deg^q(YA') = 0$, $\deg^h(YA') = 1$,
\item $\deg^q(XB') = 0$, $\deg^h(XB') = 1$,
\item $\deg^q(YB') = 1$, $\deg^h(YB') = 0$.
\end{itemize}
\end{proposition}
\begin{proposition}
The (ordinary) bimodule $X_{\low}$ has primary matrix
\[
\kbordermatrix{
& \oo \\
\oo & Z(w_1 C') \quad ZC' \\
}.
\]
The right action of $U_1$ is given by the matrix
\[
\kbordermatrix{
& Z(w_1 C') & ZC' \\
Z(w_1 C') & U_1 + U_2 & 1 \\
ZC' & U_1 U_2 & 0
}
\]
and the right action of $U_2$ is given by the matrix
\[
\kbordermatrix{
& Z(w_1 C') & ZC' \\
Z(w_1 C') & 0 & 1 \\
ZC' & U_1 U_2 & U_1 + U_2
}.
\]
We have
\begin{itemize}
\item $\deg^q(Z (w_1 C')) = -1$, $\deg^h(Z (w_1 C')) = 2$,
\item $\deg^q(ZC') = 1$, $\deg^h(ZC') = 0$.
\end{itemize}
\end{proposition}
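Note that the matrices in this proposition are obtained from those of Proposition~\ref{prop:ActionsOnHomologyLower} by substituting $U_1 + U_2$ for $e_1$ and $U_1 U_2$ for $e_2$. In particular, as a quick consistency check, writing $M_{U_i}$ for the matrix of the right action of $U_i$, over $\F_2$ we have
\[
M_{U_1} + M_{U_2} = (U_1 + U_2) \cdot I, \qquad M_{U_1} M_{U_2} = M_{U_2} M_{U_1} = (U_1 U_2) \cdot I,
\]
so the two right actions commute.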
Both propositions can be proven by applying the definitions of Section~\ref{sec:BoxTensor}.
\begin{corollary}
The DA bimodule $X$ categorifies the map $K_0(\A_{1,1}) \to K_0(\A_{1,1})$ with matrix
\[
\kbordermatrix{
& [P_{\uu}] & [P_{\ou}] & [P_{\uo}] & [P_{\oo}] \\
{[P_{\uu}]} & 0 & 0 & 0 & 0 \\
{[P_{\ou}]} & 0 & q^{-1} & -1 & 0 \\
{[P_{\uo}]} & 0 & -1 & q & 0 \\
{[P_{\oo}]} & 0 & 0 & 0 & q + q^{-1}\\
}.
\]
Equivalently, the left dual ${^{\vee}}X$ of $X$ categorifies the map $G_0(\A_{1,1}) \to G_0(\A_{1,1})$ with matrix
\[
\kbordermatrix{
& {[S_{\uu}]} & {[S_{\ou}]} & {[S_{\uo}]} & {[S_{\oo}]} \\
{[S_{\uu}]} & 0 & 0 & 0 & 0 \\
{[S_{\ou}]} & 0 & q & -1 & 0 \\
{[S_{\uo}]} & 0 & -1 & q^{-1} & 0 \\
{[S_{\oo}]} & 0 & 0 & 0 & q + q^{-1} \\
}.
\]
Since the decategorification of a box tensor product is the product of decategorifications (and box tensor products are compatible with opposite bimodules), this endomorphism agrees with $1_{1,1} F_{\gl(2)} 1_{2,0} E_{\gl(2)} 1_{1,1}$ (or equivalently $1_{1,1} E_{\gl(2)} 1_{0,2} F_{\gl(2)} 1_{1,1}$) acting on $V^{\otimes 2}$, a relation that can also be checked directly (see Appendix~\ref{sec:SingularNonsingular}).
\end{corollary}
We now consider the 1-morphism structure on $X$.
\begin{definition}
We write $X = (X,\varphi)$ for the $1$-morphism of $2$-representations from $\A_{1,1}$ to itself given by the composition of the $1$-morphisms
\[
Y_{\fs}\co \A_{1,1} \to \A_2
\]
and
\[
\Lambda \co \A_2 \to \A_{1,1}.
\]
\end{definition}
The map $\varphi$ is the composition of $\id_{\Lambda} \boxtimes \varphi_{Y}$ and $\varphi_{\Lambda} \boxtimes \id_{Y_{\fs}}$ from Definitions~\ref{def:YHardFS2RepAlpha} and \ref{def:YEasy2RepAlpha} under the natural identification of $(\Lambda \boxtimes F^{\vee}) \boxtimes Y_{\fs}$ with $\Lambda \boxtimes (F^{\vee} \boxtimes Y_{\fs})$. The map $\id_{\Lambda} \boxtimes \varphi_{Y}$ has matrix
\[
\kbordermatrix{
& XA' \xi_{\ou,\oo} & YA' \xi_{\ou,\oo} & XB' \xi_{\uo,\oo} & YB' \xi_{\uo,\oo} \\
X \xi_{\u,\o} (w_1 C') & 1 & 0 & 0 & 0\\
Y \xi_{\u,\o} (w_1 C') & 0 & 1 & 0 & 0 \\
X \xi_{\u,\o} C' & 0 & 0 & 1 & 0 \\
Y \xi_{\u,\o} C' & 0 & 0 & 0 & 1
}
\]
and the map $\varphi_{\Lambda} \boxtimes \id_{Y_{\fs}}$ has matrix
\[
\kbordermatrix{
& X\xi_{\u,\o} (w_1 C') & Y\xi_{\u,\o} (w_1 C') & X \xi_{\u,\o} C' & Y \xi_{\u,\o} C' \\
\xi_{\ou,\oo} Z(w_1 C') & 1 & 0 & 0 & 0\\
\xi_{\uo,\oo} Z(w_1 C') & 0 & 1 & 0 & 0 \\
\xi_{\ou,\oo} ZC' & 0 & 0 & 1 & 0 \\
\xi_{\uo,\oo} ZC' & 0 & 0 & 0 & 1
}.
\]
Thus, the map $\varphi$ for $X$ is given explicitly by the matrix
\[
\kbordermatrix{
& XA' \xi_{\ou,\oo} & YA' \xi_{\ou,\oo} & XB' \xi_{\uo,\oo} & YB' \xi_{\uo,\oo} \\
\xi_{\ou,\oo} Z(w_1 C') & 1 & 0 & 0 & 0\\
\xi_{\uo,\oo} Z(w_1 C') & 0 & 1 & 0 & 0 \\
\xi_{\ou,\oo} ZC' & 0 & 0 & 1 & 0 \\
\xi_{\uo,\oo} ZC' & 0 & 0 & 0 & 1
}.
\]
It follows from well-definedness of composition of $1$-morphisms between $2$-representations that the square
\[
\xymatrix{
X_{\upp} \boxtimes (F^{\vee})^{\boxtimes 2} \ar[rr]^{\varphi \boxtimes \id_{F^{\vee}}} \ar[d]_{\id_{X_{\upp}} \boxtimes \tau} && F^{\vee} \boxtimes X_{\midd} \boxtimes F^{\vee} \ar[rr]^{\id_{F^{\vee}} \boxtimes \varphi} && (F^{\vee})^{\boxtimes 2} \boxtimes X_{\low} \ar[d]^{\tau \boxtimes \id_{X_{\low}}} \\
X_{\upp} \boxtimes (F^{\vee})^{\boxtimes 2} \ar[rr]^{\varphi \boxtimes \id_{F^{\vee}}} && F^{\vee} \boxtimes X_{\midd} \boxtimes F^{\vee} \ar[rr]^{\id_{F^{\vee}} \boxtimes \varphi} && (F^{\vee})^{\boxtimes 2} \boxtimes X_{\low}
}
\]
commutes; alternatively, commutativity is immediate because $X_{\upp} = 0$.
\subsection{The other composition}\label{sec:Bubble}
\begin{definition}
Let
\[
\Phi_{\midd} \coloneqq Y_{\fs,\midd} \boxtimes \Lambda_{\midd}
\]
and
\[
\Phi_{\low} \coloneqq Y_{\fs,\low} \boxtimes \Lambda_{\low}.
\]
\end{definition}
\begin{proposition}
The (ordinary) bimodule $\Phi_{\midd}$ has primary matrix
\[
\kbordermatrix{ & \u \\
\u & A'X \quad B'Y
}
\]
and right action of $e_1$ given by
\[
\kbordermatrix{ & A'X & B'Y \\
A'X & e_1 & 0 \\
B'Y & 0 & e_1
}.
\]
We have $\deg^q(A'X) = -1$, $\deg^h(A'X) = 2$, $\deg^q(B'Y) = 1$, and $\deg^h(B'Y) = 0$.
\end{proposition}
\begin{proposition}
The (ordinary) bimodule $\Phi_{\low}$ has primary matrix
\[
\kbordermatrix{ & \o \\
\o & (w_1 C') Z \quad C' Z
},
\]
right action of $e_1$ given by
\[
\kbordermatrix{
& (w_1 C') Z & C' Z \\
(w_1 C') Z & e_1 & 0 \\
C' Z & 0 & e_1
},
\]
and right action of $e_2$ given by
\[
\kbordermatrix{
& (w_1 C') Z & C' Z \\
(w_1 C') Z & e_2 & 0 \\
C' Z & 0 & e_2
}.
\]
We have $\deg^q((w_1 C') Z) = -1$, $\deg^h((w_1 C') Z) = 2$, $\deg^q(C' Z) = 1$, and $\deg^h(C' Z) = 0$.
\end{proposition}
\begin{proof}
These propositions follow from Section~\ref{sec:BoxTensor}. For example, for the right action of $e_2$ on $\Phi_{\low}$, note that
\[
\kbordermatrix{
& (w_1 C') Z & C' Z \\
(w_1 C') Z & e_1 & 1 \\
C' Z & e_2 & 0
}
\kbordermatrix{
& (w_1 C') Z & C' Z \\
(w_1 C') Z & 0 & 1 \\
C' Z & e_2 & e_1
}
\]
equals
\[
\kbordermatrix{
& (w_1 C') Z & C' Z \\
(w_1 C') Z & e_2 & 0 \\
C' Z & 0 & e_2
}.
\]
\end{proof}
\begin{corollary}
The DA bimodule $\Phi$ categorifies the map from $K_0(\A_2)$ to $K_0(\A_2)$ with matrix
\[
\kbordermatrix{
& [P_{\u}] & [P_{\o}] \\
{[P_{\u}]} & q + q^{-1} & 0 \\
{[P_{\o}]} & 0 & q + q^{-1} \\
};
\]
equivalently, ${^{\vee}}\Phi$ categorifies the map from $G_0(\A_2)$ to $G_0(\A_2)$ with matrix
\[
\kbordermatrix{
& {[S_{\u}]} & {[S_{\o}]} \\
{[S_{\u}]} & q + q^{-1} & 0 \\
{[S_{\o}]} & 0 & q + q^{-1} \\
}.
\]
Since the decategorification of a box tensor product is the product of decategorifications, this endomorphism is equal to $1_{2,0} E_{\gl(2)} 1_{1,1} F_{\gl(2)} 1_{2,0}$ (or to $1_{0,2} F_{\gl(2)} 1_{1,1} E_{\gl(2)} 1_{0,2}$) acting on $\wedge_q^2 V$, a relation that can also be checked directly.
\end{corollary}
\begin{definition}
We write $\Phi = (\Phi,\varphi)$ for the $1$-morphism of $2$-representations from $\A_2$ to itself given by the composition of the $1$-morphisms
\[
\Lambda \co \A_2 \to \A_{1,1}
\]
and
\[
Y_{\fs}\co \A_{1,1} \to \A_2.
\]
\end{definition}
The map $\varphi$ is the composition of $\id_{Y_{\fs}} \boxtimes \varphi_{\Lambda}$ and $\varphi_{Y} \boxtimes \id_{\Lambda}$ from Definitions~\ref{def:YEasy2RepAlpha} and \ref{def:YHardFS2RepAlpha} under the natural identification of $(Y_{\fs} \boxtimes F^{\vee}) \boxtimes \Lambda$ with $Y_{\fs} \boxtimes (F^{\vee} \boxtimes \Lambda)$. The map $\id_{Y_{\fs}} \boxtimes \varphi_{\Lambda}$ has matrix
\[
\kbordermatrix{
& A'X\xi_{\u,\o} & B'Y \xi_{\u,\o} \\
A'\xi_{\ou,\oo} Z & 1 & 0 \\
B'\xi_{\uo,\oo} Z & 0 & 1
}
\]
and the map $\varphi_{Y} \boxtimes \id_{\Lambda}$ has matrix
\[
\kbordermatrix{
& A' \xi_{\ou,\oo} Z & B' \xi_{\uo,\oo} Z \\
\xi_{\u,\o} (w_1 C')Z & 1 & 0 \\
\xi_{\u,\o} C' Z & 0 & 1
}.
\]
Thus, the map $\varphi$ for $\Phi$ is given explicitly by the matrix
\[
\kbordermatrix{
& A'X\xi_{\u,\o} & B'Y\xi_{\u,\o} \\
\xi_{\u,\o} (w_1 C')Z & 1 & 0 \\
\xi_{\u,\o} C'Z & 0 & 1
}.
\]
\section{A skew Howe 2-action}\label{sec:SkewHowe2Action}
\subsection{A categorified quantum group}\label{sec:CatQGDef}
We recall a graded 2-category $\Sc(2,2)^*$ defined in Mackaay--Sto{\v{s}}i\'c--Vaz \cite{MSVSchur} as a quotient of the categorified quantum group $\dot{\U}_q(\gl(2))^*$ (the details of which are reviewed in \cite{MSVSchur}). While $\Q$ coefficients are assumed in \cite{MSVSchur}, the definitions of $\dot{\U}_q(\gl(2))^*$ and $\Sc(2,2)^*$ make sense over $\Z$; we will work with their reductions over $\F_2$.
\begin{remark}
The $*$ notation indicates that the morphism spaces consist of arbitrary-degree morphisms, not necessarily degree zero. In contrast with \cite{MSVSchur}, we will not take the objects of $\dot{\U}_q(\gl(2))^*$ or $\Sc(2,2)^*$ to be closed under grading shifts, although if one preferred, one could include them without issue.
\end{remark}
\begin{definition}
Let the graded $\F_2$-linear 2-category $\Sc(2,2)^*$ be the quotient of $\dot{\U}_q(\gl(2))^*$ (with no grading shifts on objects as remarked above) by the ideal generated by the identity 2-morphism on $\One_{\lambda}$ for all $\lambda \in \Z^2$ except for $\lambda \in \{(2,0), (1,1), (0,2)\}$.
\end{definition}
\begin{remark}
The $\gl(2)$ weights $\lambda$ appearing here should not be confused with $\gl(1|1)$ weights $\omega$.
\end{remark}
We will write $\Sc(2,2)^{\ungr}$ for $\Sc(2,2)^*$ with its grading ignored; we will be most interested in a bigraded lift $\Sc(2,2)^{*,*}$ of $\Sc(2,2)^{\ungr}$. Concretely, we can take $\Sc(2,2)^{*,*}$ to have objects $\One_{2,0}$, $\One_{1,1}$, and $\One_{0,2}$. A generating set of 1-morphisms is given by
\begin{itemize}
\item $\One_{1,1} \xleftarrow{\One_{1,1} \E \One_{0,2}} \One_{0,2}$,
\item $\One_{2,0} \xleftarrow{\One_{2,0} \E \One_{1,1}} \One_{1,1}$,
\item $\One_{1,1} \xleftarrow{\One_{1,1} \Fc \One_{2,0}} \One_{2,0}$,
\item $\One_{0,2} \xleftarrow{\One_{0,2} \Fc \One_{1,1}} \One_{1,1}$.
\end{itemize}
We will draw 1-morphisms as sequences of vertical strings with the regions between them labeled by $\gl(2)$ weights $\lambda \in \Z^2$; $\E$ morphisms will point upwards and $\Fc$ morphisms will point downwards.
The 2-morphisms in $\Sc(2,2)^{*,*}$ are generated by the following string diagrams, read from bottom to top:
\begin{itemize}
\item \includegraphics[scale=0.4]{dot1.eps} with $\deg^q = -2$ and $\deg^h = 2$
\item \includegraphics[scale=0.4]{dot2.eps} with $\deg^q = -2$ and $\deg^h = 2$
\item \includegraphics[scale=0.4]{dot3.eps} with $\deg^q = -2$ and $\deg^h = 2$
\item \includegraphics[scale=0.4]{dot4.eps} with $\deg^q = -2$ and $\deg^h = 2$
\item \includegraphics[scale=0.4]{crossing1.eps} with $\deg^q = 2$ and $\deg^h = -2$
\item \includegraphics[scale=0.4]{crossing2.eps} with $\deg^q = 2$ and $\deg^h = -2$
\item \includegraphics[scale=0.4]{cup1.eps} with $\deg^q = -1$ and $\deg^h = 0$
\item \includegraphics[scale=0.4]{cup2.eps} with $\deg^q = 1$ and $\deg^h = -2$
\item \includegraphics[scale=0.4]{cup3.eps} with $\deg^q = 1$ and $\deg^h = -2$
\item \includegraphics[scale=0.4]{cup4.eps} with $\deg^q = -1$ and $\deg^h = 0$
\item \includegraphics[scale=0.4]{cap1.eps} with $\deg^q = -1$ and $\deg^h = 2$
\item \includegraphics[scale=0.4]{cap2.eps} with $\deg^q = 1$ and $\deg^h = 0$
\item \includegraphics[scale=0.4]{cap3.eps} with $\deg^q = 1$ and $\deg^h = 0$
\item \includegraphics[scale=0.4]{cap4.eps} with $\deg^q = -1$ and $\deg^h = 2$
\end{itemize}
The relations imposed on the 2-morphisms are:
\begin{enumerate}
\item\label{it:Biadjointness1} \includegraphics[scale=0.4]{biadjointness1.eps}
\item\label{it:Biadjointness2} \includegraphics[scale=0.4]{biadjointness2.eps}
\item\label{it:Biadjointness3} \includegraphics[scale=0.4]{biadjointness3.eps}
\item\label{it:Biadjointness4} \includegraphics[scale=0.4]{biadjointness4.eps}
\item\label{it:DotCyclic1} \includegraphics[scale=0.4]{dot_cyclic1.eps}
\item\label{it:DotCyclic2} \includegraphics[scale=0.4]{dot_cyclic2.eps}
\item\label{it:CrossingCyclic} \includegraphics[scale=0.64]{crossing_cyclic.eps}
\item\label{it:NegativeBubble1} \includegraphics[scale=0.4]{negative_bubble1.eps}
\item\label{it:NegativeBubble2} \includegraphics[scale=0.4]{negative_bubble2.eps}
\item\label{it:Deg0Bubble1} \includegraphics[scale=0.4]{deg0_bubble1.eps}
\item\label{it:Deg0Bubble2} \includegraphics[scale=0.4]{deg0_bubble2.eps}
\item\label{it:ExtendedSl2_1} \includegraphics[scale=0.4]{extendedsl2_1.eps}
\item\label{it:ExtendedSl2_2} \includegraphics[scale=0.4]{extendedsl2_2.eps}
\item\label{it:ExtendedSl2_3} \includegraphics[scale=0.4]{extendedsl2_3.eps}
\item\label{it:ExtendedSl2_4} \includegraphics[scale=0.4]{extendedsl2_4.eps}
\item\label{it:NilHecke1} \includegraphics[scale=0.4]{nilhecke1.eps}
\item\label{it:NilHecke2} \includegraphics[scale=0.4]{nilhecke2.eps}
\end{enumerate}
(we include signs in various places, although they are unnecessary over $\F_2$). In items \eqref{it:ExtendedSl2_1} and \eqref{it:ExtendedSl2_3}, the sideways crossings are defined by
\begin{center}
\includegraphics[scale=0.4]{sideways_crossings1.eps} which has $\deg^q = 0$ and $\deg^h = 0$
\end{center}
\noindent and
\begin{center}
\includegraphics[scale=0.4]{sideways_crossings2.eps} which has $\deg^q = 0$ and $\deg^h = 0$.
\end{center}
We will define a functor from $\Sc(2,2)^{*,*}$ into $2\Rep(\U^-)^{*,*}$.
On objects, it sends
\begin{itemize}
\item $\One_{2,0} \mapsto \A_2$
\item $\One_{1,1} \mapsto \A_{1,1}$
\item $\One_{0,2} \mapsto \A_2$
\end{itemize}
where we view $\A_2$ and $\A_{1,1}$ as 2-representations $(A,F,\tau)$ of $\U^-$.
\subsection{Bimodules for 1-morphisms}
We send
\begin{itemize}
\item $\One_{1,1} \E \One_{0,2} \mapsto {^{\vee}}Y$,
\item $\One_{2,0} \E \One_{1,1} \mapsto {^{\vee}}\Lambda$,
\item $\One_{1,1} \Fc \One_{2,0} \mapsto {^{\vee}}Y$,
\item $\One_{0,2} \Fc \One_{1,1} \mapsto {^{\vee}}\Lambda$,
\end{itemize}
where we view ${^{\vee}}Y$ and ${^{\vee}}\Lambda$ as AD bimodule 1-morphisms of 2-representations of $\U^-$.
\subsection{Bimodule maps for 2-morphisms}\label{sec:BimodMapsFor2Mors}
\subsubsection{Dots}
\begin{itemize}
\item For the 2-morphism \includegraphics[scale=0.4]{dot1.eps} in $\Sc(2,2)^{*,*}$, we define an endomorphism of ${^{\vee}}Y$ as the dual of the endomorphism $\delta^{\up}_{0,2}$ of $Y$ with matrix
\[
\kbordermatrix{
& A' & B' \\
A' & e_1 & 1 \otimes \lambda \\
B' & 0 & 0
}
\]
on the summand $Y_{\midd}$ and matrix
\[
\kbordermatrix{
& w_1 C' & C' \\
w_1 C' & e_1 & 1\\
C' & e_2 & 0
}
\]
on the summand $Y_{\low}$.
\item For the 2-morphism \includegraphics[scale=0.4]{dot2.eps} in $\Sc(2,2)^{*,*}$, we define an endomorphism of ${^{\vee}}\Lambda$ as the dual of the endomorphism $\delta^{\up}_{1,1}$ of $\Lambda$ with matrix
\[
\kbordermatrix{
& X & Y \\
X & 0 & 0 \\
Y & 0 & U_2
}
\]
on the summand $\Lambda_{\midd}$ and matrix
\[
\kbordermatrix{
& Z \\
Z & U_2 \\
}
\]
on the summand $\Lambda_{\low}$.
\item For the 2-morphism \includegraphics[scale=0.4]{dot3.eps} in $\Sc(2,2)^{*,*}$, we define an endomorphism of ${^{\vee}}Y$ as the dual of the endomorphism $\delta^{\down}_{2,0}$ of $Y$ with matrix
\[
\kbordermatrix{
& A' & B' \\
A' & 0 & 1 \otimes \lambda \\
B' & 0 & e_1
}
\]
on the summand $Y_{\midd}$ and
\[
\kbordermatrix{
& w_1 C' & C' \\
w_1 C' & 0 & 1 \\
C' & e_2 & e_1
}
\]
on the summand $Y_{\low}$.
\item For the 2-morphism \includegraphics[scale=0.4]{dot4.eps} in $\Sc(2,2)^{*,*}$, we define an endomorphism of ${^{\vee}}\Lambda$ as the dual of the endomorphism $\delta^{\down}_{1,1}$ of $\Lambda$ with matrix
\[
\kbordermatrix{
& X & Y \\
X & U_1 & 0 \\
Y & 0 & 0
}
\]
on the summand $\Lambda_{\midd}$ and matrix
\[
\kbordermatrix{
& Z \\
Z & U_1 \\
}
\]
on the summand $\Lambda_{\low}$.
\end{itemize}
\begin{proposition}
The four maps defined above are 2-morphisms.
\end{proposition}
\begin{proof}
\begin{itemize}
\item To see that $\delta^{\up}_{0,2}$ is a 2-morphism, we want the square
\[
\xymatrix{
YF^{\vee} \ar[d]_-{\delta^{\up}_{0,2} F^{\vee}} \ar[rr]^-{\varphi_Y} && F^{\vee} Y \ar[d]^-{F^{\vee} \delta^{\up}_{0,2}} \\
YF^{\vee} \ar[rr]_-{\varphi_Y} && F^{\vee} Y
}
\]
to commute up to homotopy. In the only nonzero case, the top and bottom edges of the square are
\[
\kbordermatrix{
& A' \xi_{\ou,\oo} & B' \xi_{\uo,\oo} \\
\xi_{\u,\o} w_1 C' & 1 & 0 \\
\xi_{\u,\o} C' & 0 & 1
};
\]
the left edge is
\[
\kbordermatrix{
& A' \xi_{\ou,\oo} & B' \xi_{\uo,\oo} \\
A' \xi_{\ou,\oo} & e_1 & 1 \\
B' \xi_{\uo,\oo} & 0 & 0
}
\]
and the right edge is
\[
\kbordermatrix{
& \xi_{\u,\o} w_1 C' & \xi_{\u,\o} C' \\
\xi_{\u,\o} w_1 C' & e_1 & 1 \\
\xi_{\u,\o} C' & 0 & 0
}.
\]
Thus, the square commutes.
\item To see that $\delta^{\up}_{1,1}$ is a 2-morphism, we want the square
\[
\xymatrix{
\Lambda F^{\vee} \ar[d]_-{\delta^{\up}_{1,1} F^{\vee}} \ar[rr]^-{\varphi_{\Lambda}} && F^{\vee} \Lambda \ar[d]^-{F^{\vee} \delta^{\up}_{1,1}} \\
\Lambda F^{\vee} \ar[rr]_-{\varphi_{\Lambda}} && F^{\vee} \Lambda
}
\]
to commute up to homotopy. Recall that the top and bottom edges of the square are
\[
\kbordermatrix{
& X \xi_{\o,\u} & Y \xi_{\o,\u} \\
\xi_{\ou,\oo} Z & 1 & 0 \\
\xi_{\uo,\oo} Z & 0 & 1
};
\]
the left edge is
\[
\kbordermatrix{
& X \xi_{\o,\u} & Y \xi_{\o,\u} \\
X \xi_{\o,\u} & 0 & 0 \\
Y \xi_{\o,\u} & 0 & U_2
}
\]
and the right edge is
\[
\kbordermatrix{
& \xi_{\ou,\oo} Z & \xi_{\uo,\oo} Z \\
\xi_{\ou,\oo} Z & 0 & 0 \\
\xi_{\uo,\oo} Z & 0 & U_2
}.
\]
Thus, the square commutes.
\item For $\delta^{\down}_{2,0}$, one can check similarly that the square
\[
\xymatrix{
YF^{\vee} \ar[d]_-{\delta_{2,0} F^{\vee}} \ar[rr]^-{\varphi_Y} && F^{\vee} Y \ar[d]^-{F^{\vee} \delta_{2,0}} \\
YF^{\vee} \ar[rr]_-{\varphi_Y} && F^{\vee} Y
}
\]
commutes, so $\delta^{\down}_{2,0}$ is a 2-morphism.
\item For $\delta^{\down}_{1,1}$, one can check similarly that the square
\[
\xymatrix{
\Lambda F^{\vee} \ar[d]_-{\delta^{\down}_{1,1} F^{\vee}} \ar[rr]^-{\varphi_{\Lambda}} && F^{\vee} \Lambda \ar[d]^-{F^{\vee} \delta^{\down}_{1,1}} \\
\Lambda F^{\vee} \ar[rr]_-{\varphi_{\Lambda}} && F^{\vee} \Lambda
}
\]
commutes, so $\delta^{\down}_{1,1}$ is a 2-morphism.
\end{itemize}
\end{proof}
\begin{remark}
The map $\delta^{\up}_{0,2}$ can be thought of as ``right multiplication by $U_1$'' on $Y$; more precisely, to get the above matrices, one sums all ways of inserting $U_1$ as an algebra input in some slot for the right $A_{\infty}$ action on $Y$. The other maps defined above have similar interpretations:
\begin{itemize}
\item $\delta^{\up}_{1,1}$ comes from left multiplication by $U_2$.
\item $\delta^{\down}_{2,0}$ comes from right multiplication by $U_2$.
\item $\delta^{\down}_{1,1}$ comes from left multiplication by $U_1$.
\end{itemize}
\end{remark}
\subsubsection{Crossings}
For the 2-morphism \includegraphics[scale=0.4]{crossing1.eps} in $\Sc(2,2)^{*,*}$, we define an endomorphism of ${^{\vee}}\Phi$ as the dual of the endomorphism $\chi$ of $\Phi$ with matrix
\[
\kbordermatrix{
& A'X & B'Y \\
A'X & 0 & 0 \\
B'Y & 1 & 0
}
\]
in the middle summand and
\[
\kbordermatrix{
& w_1 C' Z & C' Z \\
w_1 C' Z & 0 & 0 \\
C' Z & 1 & 0
}
\]
in the lower summand; it is clear that $\chi$ is a 2-morphism.
To the 2-morphism \includegraphics[scale=0.4]{crossing2.eps} in $\Sc(2,2)^{*,*}$, we assign the same endomorphism of ${^{\vee}}\Phi$ as for \includegraphics[scale=0.4]{crossing1.eps} (namely the dual of $\chi$).
\subsubsection{Cups and caps}
\begin{itemize}
\item For the 2-morphism \includegraphics[scale=0.4]{cup1.eps} in $\Sc(2,2)^{*,*}$, we define a morphism from $\id$ to ${^{\vee}}X$ as the dual of the morphism
\[
\varepsilon'\co X \to \id
\]
with matrix
\[
\kbordermatrix{
& XA' & YA' & XB' & YB' \\
\Ib_{\ou} & U_1 & 0 & 1 \otimes \lambda & 0 \\
\Ib_{\uo} & 0 & 0 & 0 & 1
}
\]
in the upper summand and
\[
\kbordermatrix{
& Z(w_1 C') & ZC' \\
\Ib_{\oo} & U_1 & 1
}
\]
in the lower summand.
\item For the 2-morphism \includegraphics[scale=0.4]{cup2.eps} in $\Sc(2,2)^{*,*}$, we define a morphism from $\id$ to ${^{\vee}}\Phi$ as the dual of the morphism
\[
\varepsilon\co \Phi \to \id
\]
with matrix
\[
\kbordermatrix{
& A'X & B'Y \\
\Ib_{\u} & 1 & 0
}
\]
in the upper summand and
\[
\kbordermatrix{
& w_1 C' Z & C'Z \\
\Ib_{\o} & 1 & 0
}
\]
in the lower summand.
\item The 2-morphism \includegraphics[scale=0.4]{cup3.eps} gets assigned the dual of $\varepsilon\co \Phi \to \id$.
\item The 2-morphism \includegraphics[scale=0.4]{cup4.eps} gets assigned the dual of $\varepsilon'\co X \to \id$.
\item For the 2-morphism \includegraphics[scale=0.4]{cap1.eps} in $\Sc(2,2)^{*,*}$, we define a morphism from $X^{\vee}$ to $\id$ as the dual of the morphism
\[
\eta\co \id \to X
\]
with matrix
\[
\kbordermatrix{
& \Ib_{\ou} & \Ib_{\uo} \\
XA' & 1 & 0 \\
YA' & 0 & 1 \otimes \lambda \\
XB' & 0 & 0 \\
YB' & 0 & U_2
}
\]
in the upper summand and
\[
\kbordermatrix{
& \Ib_{\oo} \\
Z(w_1 C') & 1 \\
ZC' & U_2
}
\]
in the lower summand.
\item For the 2-morphism \includegraphics[scale=0.4]{cap2.eps} in $\Sc(2,2)^{*,*}$, we define a morphism from ${^{\vee}}\Phi$ to $\id$ as the dual of the morphism
\[
\eta'\co \id \to \Phi
\]
with matrix
\[
\kbordermatrix{
& \Ib_{\u} \\
A'X & 0 \\
B'Y & 1
}
\]
in the upper summand and
\[
\kbordermatrix{
& \Ib_{\o} \\
(w_1 C')Z & 0 \\
C' Z & 1
}
\]
in the lower summand.
\item The 2-morphism \includegraphics[scale=0.4]{cap3.eps} gets assigned the dual of $\eta'\co \id \to \Phi$.
\item The 2-morphism \includegraphics[scale=0.4]{cap4.eps} gets assigned the dual of $\eta\co \id \to X$.
\end{itemize}
We first check that $\eta$ is a closed morphism of DA bimodules.
\begin{proposition}
The map $\eta$ is a closed morphism of DA bimodules in the upper summand.
\end{proposition}
\begin{proof}
Since the only $A_{\infty}$ input to $\eta$ is $\lambda$, which cannot be factored, we need only check that the matrix for $\eta$ intertwines the secondary matrices for the DA bimodules $X_{\midd}$ and $(\mathbb{I}_{\A_{1,1}})_{\midd}$. The product
\[
\kbordermatrix{
& XA' & YA' & XB' & YB' \\
XA' & U_1^{k+1} \otimes U_1^{k+1} & \lambda & \begin{matrix} U_1^k \otimes (\lambda, U_1^{k+1}) \\+ U_1^k \otimes (U_2^{k+1}, \lambda)\end{matrix} & 0 \\
YA' & 0 & U_2^{k+1} \otimes U_1^{k+1} & 0 & \begin{matrix} U_2^k \otimes (U_2^{k+1}, \lambda) \\+ U_2^k \otimes (\lambda, U_1^{k+1})\end{matrix} \\
XB' & 0 & 0 & U_1^{k+1} \otimes U_2^{k+1} & \lambda \\
YB' & 0 & 0 & 0 & U_2^{k+1} \otimes U_2^{k+1}
}
\cdot \kbordermatrix{
& \Ib_{\ou} & \Ib_{\uo} \\
XA' & 1 & 0 \\
YA' & 0 & 1 \otimes \lambda \\
XB' & 0 & 0 \\
YB' & 0 & U_2
}
\]
equals
\[
\kbordermatrix{
& \Ib_{\ou} & \Ib_{\uo} \\
XA' & U_1^{k+1} \otimes U_1^{k+1} & \lambda \otimes \lambda \\
YA' & 0 & U_2^{k+1} \otimes (U_2^{k+1}, \lambda) \\
XB' & 0 & 0 \\
YB' & 0 & U_2^{k+2} \otimes U_2^{k+1}
}.
\]
The product
\[
\kbordermatrix{
& \Ib_{\ou} & \Ib_{\uo} \\
XA' & 1 & 0 \\
YA' & 0 & 1 \otimes \lambda \\
XB' & 0 & 0 \\
YB' & 0 & U_2
}
\cdot \kbordermatrix{
& \Ib_{\ou} & \Ib_{\uo} \\
\Ib_{\ou} & U_1^{k+1} \otimes U_1^{k+1} & \lambda \otimes \lambda \\
\Ib_{\uo} & 0 & U_2^{k+1} \otimes U_2^{k+1}
}
\]
gives the same result.
\end{proof}
\begin{proposition}
The map $\eta$ is a valid homomorphism of (ordinary) bimodules, and thus a closed morphism of DA bimodules, in the lower summand.
\end{proposition}
\begin{proof}
To see that $\eta$ respects the right action of $U_1$, note that
\[
\kbordermatrix{
& Z(w_1 C') & ZC' \\
Z(w_1 C') & U_1 + U_2 & 1 \\
ZC' & U_1 U_2 & 0
}
\kbordermatrix{
& \Ib_{\oo} \\
Z(w_1 C') & 1 \\
ZC' & U_2
}
\]
equals
\[
\kbordermatrix{
& \Ib_{\oo} \\
Z(w_1 C') & U_1 \\
ZC' & U_1 U_2
},
\]
which also equals
\[
\kbordermatrix{
& \Ib_{\oo} \\
Z(w_1 C') & 1 \\
ZC' & U_2
}
\kbordermatrix{
& \Ib_{\oo} \\
\Ib_{\oo} & U_1
}.
\]
Thus, $\eta$ respects the right action of $U_1$.
To see that $\eta$ respects the right action of $U_2$, note that
\[
\kbordermatrix{
& Z(w_1 C') & ZC' \\
Z(w_1 C') & 0 & 1 \\
ZC' & U_1 U_2 & U_1 + U_2
}
\kbordermatrix{
& \Ib_{\oo} \\
Z(w_1 C') & 1 \\
ZC' & U_2
}
\]
equals
\[
\kbordermatrix{
& \Ib_{\oo} \\
Z(w_1 C') & U_2 \\
ZC' & U_2^2
},
\]
which also equals
\[
\kbordermatrix{
& \Ib_{\oo} \\
Z(w_1 C') & 1 \\
ZC' & U_2
}
\kbordermatrix{
& \Ib_{\oo} \\
\Ib_{\oo} & U_2
}.
\]
Thus, $\eta$ is a bimodule homomorphism.
\end{proof}
The analogous result for $\varepsilon$ is clear.
\begin{remark}\label{rem:UsingSymmetry}
The matrices for $\eta'$ and $\varepsilon'$ are obtained from the matrices for $\varepsilon$ and $\eta$ respectively by:
\begin{enumerate}
\item\label{it:Symm1} transposing the matrices ``along their anti-diagonals,'' while preserving (not reversing) the ordering of basis elements;
\item\label{it:Symm2} exchanging $U_1$ and $U_2$ (as well as $w_1$ and $w_2$) everywhere they appear;
\item\label{it:Symm3} reversing the order of all higher sequences of $A_{\infty}$ inputs
\end{enumerate}
(the last item does not arise for the maps under consideration, and neither do $w_i$ elements). Closedness of these morphisms follows from closedness of $\varepsilon$ and $\eta$ because the secondary matrices for each summand of $\Lambda$ and $Y_{\fs}$ are symmetric under the simultaneous application of the above items.
\end{remark}
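To illustrate the recipe with the matrices already given above: anti-transposing the upper-summand matrix
\[
\kbordermatrix{
& \Ib_{\ou} & \Ib_{\uo} \\
XA' & 1 & 0 \\
YA' & 0 & 1 \otimes \lambda \\
XB' & 0 & 0 \\
YB' & 0 & U_2
}
\]
for $\eta$, while preserving the ordering of basis elements, gives a matrix with rows $(U_2, 0, 1 \otimes \lambda, 0)$ and $(0, 0, 0, 1)$; exchanging $U_1$ and $U_2$ then recovers
\[
\kbordermatrix{
& XA' & YA' & XB' & YB' \\
\Ib_{\ou} & U_1 & 0 & 1 \otimes \lambda & 0 \\
\Ib_{\uo} & 0 & 0 & 0 & 1
},
\]
the upper-summand matrix for $\varepsilon'$.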
We now check that the maps for cups and caps are 2-morphisms.
\begin{proposition}\label{prop:FirstEtaIs2Mor}
The map $\varepsilon$ is a $2$-morphism.
\end{proposition}
\begin{proof}
For $\varepsilon$ to be a 2-morphism, the square
\[
\xymatrix{
\Phi_{\midd} \boxtimes F^{\vee} \ar[d]_{\alpha} \ar[rr]^{\varepsilon \boxtimes \id_{F^{\vee}}} && F^{\vee} \ar[d]^{\id} \\
F^{\vee} \boxtimes \Phi_{\low} \ar[rr]_{\id_{F^{\vee}} \boxtimes \varepsilon} && F^{\vee}
}
\]
should commute, at least up to homotopy.
The top edge of the square for $\varepsilon$ has matrix
\[
\kbordermatrix{
& A'X \xi_{\u,\o} & B'Y\xi_{\u,\o} \\
\xi_{\u,\o} & 1 & 0
},
\]
and the bottom edge of this square has matrix
\[
\kbordermatrix{
& \xi_{\u,\o} (w_1 C')Z & \xi_{\u,\o} C' Z \\
\xi_{\u,\o} & 1 & 0
}.
\]
As computed in Section~\ref{sec:Bubble}, the left edge of the square has matrix
\[
\kbordermatrix{
& A'X \xi_{\u,\o} & B'Y\xi_{\u,\o} \\
\xi_{\u,\o} (w_1 C')Z & 1 & 0 \\
\xi_{\u,\o} C' Z & 0 & 1
}.
\]
Thus, the square for $\varepsilon$ commutes.
\end{proof}
For $\eta$ to be a $2$-morphism, the squares
\[
\xymatrix{
F^{\vee} \ar[d]_{\id} \ar[rr]^-{\eta \boxtimes \id_{F^{\vee}}} && X_{\midd} \boxtimes F^{\vee} \ar[d]^{\varphi} \\
F^{\vee} \ar[rr]_-{\id_{F^{\vee}} \boxtimes \eta} && F^{\vee} \boxtimes X_{\low}
}
\]
and
\[
\xymatrix{
F^{\vee} \ar[d]_{\id} \ar[rr]^-{\eta \boxtimes \id_{F^{\vee}}} && X_{\upp} \boxtimes F^{\vee} = 0 \ar[d]^{\varphi} \\
F^{\vee} \ar[rr]_-{\id_{F^{\vee}} \boxtimes \eta} && F^{\vee} \boxtimes X_{\midd}
}
\]
should commute, at least up to homotopy.
\begin{proposition}\label{prop:FirstEpsIs2MorSquare1}
The first square for $\eta$ commutes.
\end{proposition}
\begin{proof}
The matrix for the top edge of the first square for $\eta$ is
\[
\kbordermatrix{
& \xi_{\ou,\oo} & \xi_{\uo,\oo} \\
XA'\xi_{\ou,\oo} & 1 & 0 \\
YA'\xi_{\ou,\oo} & 0 & 1 \\
XB'\xi_{\uo,\oo} & 0 & 0 \\
YB'\xi_{\uo,\oo} & 0 & U_2
},
\]
and the matrix for the bottom edge of the square is
\[
\kbordermatrix{
& \xi_{\ou,\oo} & \xi_{\uo,\oo} \\
\xi_{\ou,\oo} Z(w_1 C') & 1 & 0 \\
\xi_{\uo,\oo} Z(w_1 C') & 0 & 1 \\
\xi_{\ou,\oo} ZC' & 0 & 0 \\
\xi_{\uo,\oo} ZC' & 0 & U_2
}.
\]
As computed in Section~\ref{sec:SingularCrossing}, the matrix for the right edge of the square is
\[
\kbordermatrix{
& XA' \xi_{\ou,\oo} & YA' \xi_{\ou,\oo} & XB' \xi_{\uo,\oo} & YB' \xi_{\uo,\oo} \\
\xi_{\ou,\oo} Z(w_1 C') & 1 & 0 & 0 & 0\\
\xi_{\uo,\oo} Z(w_1 C') & 0 & 1 & 0 & 0 \\
\xi_{\ou,\oo} ZC' & 0 & 0 & 1 & 0 \\
\xi_{\uo,\oo} ZC' & 0 & 0 & 0 & 1
},
\]
so the first square for $\eta$ commutes.
\end{proof}
\begin{proposition}\label{prop:FirstEpsIs2MorSquare2}
The second square for $\eta$ commutes up to homotopy.
\end{proposition}
\begin{proof}
The map $\id_{F^{\vee}} \boxtimes \eta$ on the bottom edge has matrix
\[
\kbordermatrix{
& \xi_{\uu,\ou} & \xi_{\uu,\uo} \\
\xi_{\uu,\ou} XA' & 1 & 0 \\
\xi_{\uu,\uo} YA' & 0 & 1 \otimes \lambda \\
\xi_{\uu,\ou} XB' & 0 & 0 \\
\xi_{\uu,\uo} YB' & 0 & 0
}.
\]
The secondary matrix for the bimodule $F^{\vee} \boxtimes X_{\midd}$ is
\[
\kbordermatrix{
& \xi_{\uu,\ou} XA' & \xi_{\uu,\uo} YA' & \xi_{\uu,\ou} XB' & \xi_{\uu,\uo} YB' \\
\xi_{\uu,\ou} XA' & 0 & 1 & \begin{matrix} 1 \otimes (\lambda, U_1) \\+ 1 \otimes (U_2, \lambda)\end{matrix} & 0 \\
\xi_{\uu,\uo} YA' & 0 & 0 & 0 & \begin{matrix} U_2^k \otimes (U_2^{k+1}, \lambda) \\+ U_2^k \otimes (\lambda, U_1^{k+1})\end{matrix} \\
\xi_{\uu,\ou} XB' & 0 & 0 & 0 & 1 \\
\xi_{\uu,\uo} YB' & 0 & 0 & 0 & 0
};
\]
recall that the secondary matrix for the relevant summand of $F^{\vee}$ is
\[
\kbordermatrix{
& \xi_{\uu,\ou} & \xi_{\uu,\uo} \\
\xi_{\uu,\ou} & 0 & 1 \otimes \lambda \\
\xi_{\uu,\uo} & 0 & 0
}.
\]
If we let $h$ have matrix
\[
\kbordermatrix{
& \xi_{\uu,\ou} & \xi_{\uu,\uo} \\
\xi_{\uu,\ou} XA' & 0 & 0 \\
\xi_{\uu,\uo} YA' & 1 & 0 \\
\xi_{\uu,\ou} XB' & 0 & 0 \\
\xi_{\uu,\uo} YB' & 0 & 0
},
\]
then $h$ is a null-homotopy of $\id_{F^{\vee}} \boxtimes \eta$.
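Indeed, writing $M$ for the secondary matrix of $F^{\vee} \boxtimes X_{\midd}$ and $N$ for the secondary matrix of the relevant summand of $F^{\vee}$, the only nonzero contributions to $Mh + hN$ come from the entry $1$ of $M$ in position $(\xi_{\uu,\ou} XA', \xi_{\uu,\uo} YA')$ and the entry $1 \otimes \lambda$ of $N$ in position $(\xi_{\uu,\ou}, \xi_{\uu,\uo})$, so
\[
Mh + hN =
\kbordermatrix{
& \xi_{\uu,\ou} & \xi_{\uu,\uo} \\
\xi_{\uu,\ou} XA' & 1 & 0 \\
\xi_{\uu,\uo} YA' & 0 & 1 \otimes \lambda \\
\xi_{\uu,\ou} XB' & 0 & 0 \\
\xi_{\uu,\uo} YB' & 0 & 0
},
\]
which is exactly the matrix for $\id_{F^{\vee}} \boxtimes \eta$.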
\end{proof}
\begin{corollary}\label{cor:FirstEpsIs2Mor}
The map $\eta$ is a $2$-morphism.
\end{corollary}
\begin{remark}\label{rem:EtaStrong}
One can check that the homotopy $h$ in the proof of Proposition~\ref{prop:FirstEpsIs2MorSquare2} makes $\eta$ into a strong 2-morphism as in Remark~\ref{rem:Strong2Mor}.
\end{remark}
\begin{proposition}
The map $\eta'$ is a $2$-morphism.
\end{proposition}
\begin{proof}
For $\eta'$ to be a $2$-morphism, the square
\[
\xymatrix{
F^{\vee} \ar[d]_{\id} \ar[rr]^{\eta' \boxtimes \id_{F^{\vee}}} && \Phi_{\midd} \boxtimes F^{\vee} \ar[d]^{\alpha} \\
F^{\vee} \ar[rr]_{\id_{F^{\vee}} \boxtimes \eta'} && F^{\vee} \boxtimes \Phi_{\low}
}
\]
should commute, at least up to homotopy. Commutativity follows from the proof of Proposition~\ref{prop:FirstEtaIs2Mor} by applying the modifications of items \eqref{it:Symm1}--\eqref{it:Symm3} in Remark~\ref{rem:UsingSymmetry} to all matrices in the proof.
\end{proof}
\begin{proposition}
The map $\varepsilon'$ is a $2$-morphism.
\end{proposition}
\begin{proof}
For $\varepsilon'$ to be a 2-morphism, the square
\[
\xymatrix{
X_{\midd} \boxtimes F^{\vee} \ar[d]_{\alpha} \ar[rr]^-{\varepsilon' \boxtimes \id_{F^{\vee}}} && F^{\vee} \ar[d]^{\id} \\
F^{\vee} \boxtimes X_{\low} \ar[rr]_-{\id_{F^{\vee}} \boxtimes \varepsilon'} && F^{\vee}
}
\]
should commute, at least up to homotopy; the square
\[
\xymatrix{
0 = X_{\upp} \boxtimes F^{\vee} \ar[d]_{\alpha} \ar[rr]^-{\varepsilon' \boxtimes \id_{F^{\vee}}} && F^{\vee} \ar[d]^{\id} \\
F^{\vee} \boxtimes X_{\midd} \ar[rr]_-{\id_{F^{\vee}} \boxtimes \varepsilon'} && F^{\vee}
}
\]
automatically commutes. Commutativity of the first square follows from the proof of Proposition~\ref{prop:FirstEpsIs2MorSquare1} by applying the modifications of items \eqref{it:Symm1}--\eqref{it:Symm3} in Remark~\ref{rem:UsingSymmetry} to all matrices in the proof.
\end{proof}
\subsection{Checking the relations}
We begin by gathering some preliminary computations which can be obtained using Procedure~\ref{proc:BoxTensorMorphisms}.
\begin{proposition}\label{prop:HelperMorphisms}
The following string diagram 2-morphisms get assigned the indicated algebraic 2-morphisms.
\begin{enumerate}
\item\label{it:IdEta} The 2-morphism \includegraphics[scale=0.4]{helper_morphism5.eps} gets assigned the dual of the morphism from $Y$ to $Y\Lambda Y$ with matrix
\[
\kbordermatrix{
& A' & B' \\
A'XA' & 1 & 0 \\
B'YA' & 0 & 1 \otimes \lambda \\
A'XB' & 0 & 1 \\
B'YB' & 0 & e_1
}
\]
in the middle summand and
\[
\kbordermatrix{
& w_1 C' & C' \\
(w_1 C')Z(w_1 C') & 1 & 0 \\
C' Z (w_1 C') & 0 & 1 \\
(w_1 C')Z C' & 0 & 1 \\
C' Z C' & e_2 & e_1
}
\]
in the lower summand.
\item\label{it:EpsId} The 2-morphism \includegraphics[scale=0.4]{helper_morphism6.eps} gets assigned the dual of the morphism from $Y \Lambda Y$ to $Y$ with matrix
\[
\kbordermatrix{
& A'XA' & B'YA' & A'XB' & B'YB' \\
A' & 1 & 0 & 0 & 0 \\
B' & 0 & 0 & 1 & 0
}
\]
in the middle summand and
\[
\kbordermatrix{
& (w_1 C')Z(w_1 C') & C' Z (w_1 C') & (w_1 C')Z C' & C' Z C' \\
w_1 C' & 1 & 0 & 0 & 0 \\
C' & 0 & 0 & 1 & 0
}
\]
in the lower summand.
\item\label{it:Eta'Id} The 2-morphism \includegraphics[scale=0.4]{helper_morphism7.eps} gets assigned the dual of the morphism from $Y$ to $Y\Lambda Y$ with matrix
\[
\kbordermatrix{
& A' & B' \\
A'XA' & 0 & 0 \\
B'YA' & 1 & 0 \\
A'XB' & 0 & 0 \\
B'YB' & 0 & 1
}
\]
in the middle summand and
\[
\kbordermatrix{
& w_1 C' & C' \\
(w_1 C')Z(w_1 C') & 0 & 0 \\
C' Z (w_1 C') & 1 & 0 \\
(w_1 C')Z C' & 0 & 0 \\
C' Z C' & 0 & 1
}
\]
in the lower summand.
\item\label{it:IdEps'} The 2-morphism \includegraphics[scale=0.4]{helper_morphism8.eps} gets assigned the dual of the morphism from $Y \Lambda Y$ to $Y$ with matrix
\[
\kbordermatrix{
& A'XA' & B'YA' & A'XB' & B'YB' \\
A' & e_1 & 1 & 1 \otimes \lambda & 0 \\
B' & 0 & 0 & 0 & 1
}
\]
in the middle summand and
\[
\kbordermatrix{
& (w_1 C')Z(w_1 C') & C' Z (w_1 C') & (w_1 C')Z C' & C' Z C' \\
w_1 C' & e_1 & 1 & 1 & 0\\
C' & e_2 & 0 & 0 & 1
}
\]
in the lower summand.
\item\label{it:IdEta'} The 2-morphism \includegraphics[scale=0.4]{helper_morphism15.eps} gets assigned the dual of the morphism from $\Lambda$ to $\Lambda Y \Lambda$ with matrix
\[
\kbordermatrix{
& X & Y \\
X A' X & 0 & 0 \\
Y A' X & 0 & 0 \\
X B' Y & 1 & 0 \\
Y B' Y & 0 & 1
}
\]
in the middle summand and
\[
\kbordermatrix{
& Z \\
Z(w_1 C')Z & 0 \\
ZC' Z & 1
}
\]
in the lower summand.
\item\label{it:Eps'Id} The 2-morphism \includegraphics[scale=0.4]{helper_morphism16.eps} gets assigned the dual of the morphism from $\Lambda Y \Lambda$ to $\Lambda$ with matrix
\[
\kbordermatrix{
& X A' X & Y A' X & X B' Y & Y B' Y \\
X & U_1 & 0 & 1 & 0 \\
Y & 0 & 0 & 0 & 1
}
\]
in the middle summand and
\[
\kbordermatrix{
& Z(w_1 C')Z & ZC' Z \\
Z & U_1 & 1
}
\]
in the lower summand.
\item\label{it:EtaId} The 2-morphism \includegraphics[scale=0.4]{helper_morphism13.eps} gets assigned the dual of the morphism from $\Lambda$ to $\Lambda Y \Lambda$ with matrix
\[
\kbordermatrix{
& X & Y \\
X A' X & 1 & 0 \\
Y A' X & 0 & 1 \\
X B' Y & 0 & 0 \\
Y B' Y & 0 & U_2
}
\]
in the middle summand and
\[
\kbordermatrix{
& Z \\
Z(w_1 C')Z & 1 \\
ZC' Z & U_2
}
\]
in the lower summand.
\item\label{it:IdEps} The 2-morphism \includegraphics[scale=0.4]{helper_morphism14.eps} gets assigned the dual of the morphism from $\Lambda Y \Lambda$ to $\Lambda$ with matrix
\[
\kbordermatrix{
& X A' X & Y A' X & X B' Y & Y B' Y \\
X & 1 & 0 & 0 & 0 \\
Y & 0 & 1 & 0 & 0
}
\]
in the middle summand and
\[
\kbordermatrix{
& Z(w_1 C')Z & ZC' Z \\
Z & 1 & 0
}
\]
in the lower summand.
\item\label{it:IdEta'Again} The 2-morphism \includegraphics[scale=0.4]{helper_morphism9.eps} gets assigned the same morphism as in item \eqref{it:IdEta'}.
\item\label{it:Eps'IdAgain} The 2-morphism \includegraphics[scale=0.4]{helper_morphism10.eps} gets assigned the same morphism as in item \eqref{it:Eps'Id}.
\item\label{it:EtaIdAgain} The 2-morphism \includegraphics[scale=0.4]{helper_morphism11.eps} gets assigned the same morphism as in item \eqref{it:EtaId}.
\item\label{it:IdEpsAgain} The 2-morphism \includegraphics[scale=0.4]{helper_morphism12.eps} gets assigned the same morphism as in item \eqref{it:IdEps}.
\item\label{it:IdEtaAgain} The 2-morphism \includegraphics[scale=0.4]{helper_morphism1.eps} gets assigned the same morphism as in item \eqref{it:IdEta}.
\item\label{it:EpsIdAgain} The 2-morphism \includegraphics[scale=0.4]{helper_morphism2.eps} gets assigned the same morphism as in item \eqref{it:EpsId}.
\item\label{it:Eta'IdAgain} The 2-morphism \includegraphics[scale=0.4]{helper_morphism17.eps} gets assigned the same morphism as in item \eqref{it:Eta'Id}.
\item\label{it:IdEps'Again} The 2-morphism \includegraphics[scale=0.4]{helper_morphism18.eps} gets assigned the same morphism as in item \eqref{it:IdEps'}.
\item\label{it:UpDotId} The 2-morphism \includegraphics[scale=0.4]{helper_morphism45.eps} gets assigned the dual of the morphism from $Y \Lambda$ to $Y \Lambda$ with matrix
\[
\kbordermatrix{
& A'X & B'Y \\
A'X & e_1 & 1 \\
B'Y & 0 & 0
}
\]
in the middle summand and
\[
\kbordermatrix{
& (w_1 C') Z & C' Z \\
(w_1 C') Z & e_1 & 1 \\
C' Z & e_2 & 0
}
\]
in the lower summand.
\item\label{it:Extra} The 2-morphism \includegraphics[scale=0.4]{extra.eps} gets assigned the dual of the morphism from $\Lambda Y \Lambda$ to $\Lambda Y \Lambda$ with matrix
\[
\kbordermatrix{
& XA'X & YA'X & XB'Y & YB'Y \\
XA'X & U_1 & 0 & 1 & 0 \\
YA'X & 0 & U_2 & 0 & 1 \\
XB'Y & 0 & 0 & 0 & 0 \\
YB'Y & 0 & 0 & 0 & 0
}
\]
in the middle summand and
\[
\kbordermatrix{
& Z(w_1 C')Z & ZC' Z \\
Z(w_1 C')Z & U_1 + U_2 & 1 \\
ZC' Z & U_1 U_2 & 0
}
\]
in the lower summand.
\item\label{it:IdUpDot} The 2-morphism \includegraphics[scale=0.4]{helper_morphism3.eps} gets assigned the dual of the morphism from $Y\Lambda$ to $Y\Lambda$ with matrix
\[
\kbordermatrix{
& A'X & B'Y \\
A'X & 0 & 1 \\
B'Y & 0 & e_1
}
\]
in the middle summand and
\[
\kbordermatrix{
& (w_1 C')Z & C' Z \\
(w_1 C')Z & 0 & 1 \\
C' Z & e_2 & e_1
}
\]
in the lower summand.
\item\label{it:IdUpDotId} The 2-morphism \includegraphics[scale=0.4]{helper_morphism4.eps} gets assigned the dual of the morphism from $Y \Lambda Y$ to $Y \Lambda Y$ with matrix
\[
\kbordermatrix{
& A'XA' & B'YA' & A'XB' & B'YB' \\
A'XA' & 0 & 1 & 0 & 0 \\
B'YA' & 0 & e_1 & 0 & 0 \\
A'XB' & 0 & 0 & 0 & 1 \\
B'YB' & 0 & 0 & 0 & e_1
}
\]
in the middle summand and
\[
\kbordermatrix{
& (w_1 C')Z(w_1 C') & C' Z (w_1 C') & (w_1 C')Z C' & C' Z C' \\
(w_1 C')Z(w_1 C') & 0 & 1 & 0 & 0 \\
C' Z (w_1 C') & e_2 & e_1 & 0 & 0 \\
(w_1 C')Z C' & 0 & 0 & 0 & 1 \\
C' Z C' & 0 & 0 & e_2 & e_1
}
\]
in the lower summand.
\item\label{it:DownDownCap} The 2-morphism \includegraphics[scale=0.4]{helper_morphism20.eps} gets assigned the dual of the morphism from $Y \Lambda$ to $Y \Lambda Y \Lambda$ with matrix
\[
\kbordermatrix{
& A'X & B'Y \\
A'XA'X & 0 & 0 \\
B'YA'X & 1 & 0 \\
A'XB'Y & 0 & 0 \\
B'YB'Y & 0 & 1
}
\]
in the middle summand and
\[
\kbordermatrix{
& (w_1 C')Z & C' Z \\
(w_1 C') Z (w_1 C') Z & 0 & 0 \\
C' Z (w_1 C') Z & 1 & 0 \\
(w_1 C') Z C' Z & 0 & 0 \\
C' Z C' Z & 0 & 1
}
\]
in the lower summand.
\item The 2-morphism \includegraphics[scale=0.4]{helper_morphism21.eps} gets assigned the same morphism as in item \eqref{it:IdEta}.
\item\label{it:IdEtaId} The 2-morphism \includegraphics[scale=0.4]{helper_morphism22.eps} gets assigned the dual of the morphism from $Y \Lambda$ to $Y \Lambda Y \Lambda$ with matrix
\[
\kbordermatrix{
& A'X & B'Y \\
A'XA'X & 1 & 0 \\
B'YA'X & 0 & 1 \\
A'XB'Y & 0 & 1 \\
B'YB'Y & 0 & e_1
}
\]
in the middle summand and
\[
\kbordermatrix{
& (w_1 C')Z & C' Z \\
(w_1 C') Z (w_1 C') Z & 1 & 0 \\
C' Z (w_1 C') Z & 0 & 1 \\
(w_1 C') Z C' Z & 0 & 1 \\
C' Z C' Z & e_2 & e_1
}
\]
in the lower summand.
\item\label{it:ComplicatedCap} The 2-morphism \includegraphics[scale=0.4]{helper_morphism24.eps} gets assigned the dual of the morphism from $Y \Lambda Y \Lambda$ to $Y \Lambda Y \Lambda Y \Lambda$ with matrix
\[
\kbordermatrix{
& A'XA'X & B'YA'X & A'XB'Y & B'YB'Y \\
A'XA'XA'X & 1 & 0 & 0 & 0 \\
B'YA'XA'X & 0 & 1 & 0 & 0 \\
A'XB'YA'X & 0 & 1 & 0 & 0 \\
B'YB'YA'X & 0 & e_1 & 0 & 0 \\
A'XA'XB'Y & 0 & 0 & 1 & 0 \\
B'YA'XB'Y & 0 & 0 & 0 & 1 \\
A'XB'YB'Y & 0 & 0 & 0 & 1 \\
B'YB'YB'Y & 0 & 0 & 0 & e_1
}
\]
in the middle summand and
\[
\kbordermatrix{
& (w_1 C') Z (w_1 C') Z & C' Z (w_1 C') Z & (w_1 C') Z C' Z & C' Z C' Z\\
(w_1 C') Z (w_1 C') Z (w_1 C') Z & 1 & 0 & 0 & 0 \\
C' Z (w_1 C') Z (w_1 C') Z & 0 & 1 & 0 & 0 \\
(w_1 C') Z C' Z (w_1 C') Z & 0 & 1 & 0 & 0 \\
C' Z C' Z (w_1 C') Z & e_2 & e_1 & 0 & 0 \\
(w_1 C') Z (w_1 C') Z C' Z & 0 & 0 & 1 & 0 \\
C' Z (w_1 C') Z C' Z & 0 & 0 & 0 & 1\\
(w_1 C') Z C' Z C' Z & 0 & 0 & 0 & 1 \\
C' Z C' Z C' Z & 0 & 0 & e_2 & e_1
}
\]
in the lower summand.
\item The 2-morphism \includegraphics[scale=0.4]{helper_morphism26.eps} gets assigned the dual of the morphism from $Y \Lambda Y \Lambda$ to $Y \Lambda Y \Lambda$ with matrix
\[
\kbordermatrix{
& A'XA'X & B'YA'X & A'XB'Y & B'YB'Y \\
A'XA'X & 0 & 0 & 0 & 0 \\
B'YA'X & 0 & 0 & 0 & 0 \\
A'XB'Y & 1 & 0 & 0 & 0 \\
B'YB'Y & 0 & 1 & 0 & 0
}
\]
in the middle summand and
\[
\kbordermatrix{
& (w_1 C') Z (w_1 C') Z & C' Z (w_1 C') Z & (w_1 C') Z C' Z & C' Z C' Z\\
(w_1 C') Z (w_1 C') Z & 0 & 0 & 0 & 0 \\
C' Z (w_1 C') Z & 0 & 0 & 0 & 0 \\
(w_1 C') Z C' Z & 1 & 0 & 0 & 0 \\
C' Z C' Z & 0 & 1 & 0 & 0
}
\]
in the lower summand.
\item\label{it:BigCrossing} The 2-morphism \includegraphics[scale=0.4]{helper_morphism28.eps} gets assigned the dual of the morphism from $Y \Lambda Y \Lambda Y \Lambda$ to $Y \Lambda Y \Lambda Y \Lambda$ with matrix
\[
\resizebox{\textwidth}{!}{
\kbordermatrix{
& A'XA'XA'X & B'YA'XA'X & A'XB'YA'X & B'YB'YA'X & A'XA'XB'Y & B'YA'XB'Y & A'XB'YB'Y & B'YB'YB'Y \\
A'XA'XA'X & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
B'YA'XA'X & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
A'XB'YA'X & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
B'YB'YA'X & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\
A'XA'XB'Y & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
B'YA'XB'Y & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
A'XB'YB'Y & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\
B'YB'YB'Y & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0
}
}
\]
in the middle summand and
\[
\resizebox{\textwidth}{!}{
\kbordermatrix{
& (w_1 C') Z (w_1 C') Z (w_1 C') Z & C' Z (w_1 C') Z (w_1 C') Z & (w_1 C') Z C' Z (w_1 C') Z & C' Z C' Z (w_1 C') Z & (w_1 C') Z (w_1 C') Z C' Z & C' Z (w_1 C') Z C' Z & (w_1 C') Z C' Z C' Z & C' Z C' Z C' Z \\
(w_1 C') Z (w_1 C') Z (w_1 C') Z & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
C' Z (w_1 C') Z (w_1 C') Z & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
(w_1 C') Z C' Z (w_1 C') Z & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
C' Z C' Z (w_1 C') Z & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\
(w_1 C') Z (w_1 C') Z C' Z & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
C' Z (w_1 C') Z C' Z & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
(w_1 C') Z C' Z C' Z & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\
C' Z C' Z C' Z & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0
}
}
\]
in the lower summand.
\item The 2-morphism \includegraphics[scale=0.4]{helper_morphism29.eps} gets assigned the same morphism as in item \eqref{it:Eps'Id}.
\item\label{it:IdEps'Id} The 2-morphism \includegraphics[scale=0.4]{helper_morphism30.eps} gets assigned the dual of the morphism from $Y \Lambda Y \Lambda$ to $Y \Lambda$ with matrix
\[
\kbordermatrix{
& A'XA'X & B'YA'X & A'XB'Y & B'YB'Y \\
A'X & e_1 & 1 & 1 & 0\\
B'Y & 0 & 0 & 0 & 1
}
\]
in the middle summand and
\[
\kbordermatrix{
& (w_1 C') Z (w_1 C') Z & C' Z (w_1 C') Z & (w_1 C') Z C' Z & C' Z C' Z\\
(w_1 C') Z & e_1 & 1 & 1 & 0 \\
C' Z & e_2 & 0 & 0 & 1
}
\]
in the lower summand.
\item\label{it:ComplicatedCup} The 2-morphism \includegraphics[scale=0.4]{helper_morphism32.eps} gets assigned the dual of the morphism from $Y \Lambda Y \Lambda Y \Lambda$ to $Y \Lambda Y \Lambda$ with matrix
\[
\resizebox{\textwidth}{!}{
\kbordermatrix{
& A'XA'XA'X & B'YA'XA'X & A'XB'YA'X & B'YB'YA'X & A'XA'XB'Y & B'YA'XB'Y & A'XB'YB'Y & B'YB'YB'Y \\
A'XA'X & e_1 & 0 & 1 & 0 & 1 & 0 & 0 & 0 \\
B'YA'X & 0 & e_1 & 0 & 1 & 0 & 1 & 0 & 0 \\
A'XB'Y & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \\
B'YB'Y & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1
}
}
\]
in the middle summand and
\[
\resizebox{\textwidth}{!}{
\kbordermatrix{
& (w_1 C') Z (w_1 C') Z (w_1 C') Z & C' Z (w_1 C') Z (w_1 C') Z & (w_1 C') Z C' Z (w_1 C') Z & C' Z C' Z (w_1 C') Z & (w_1 C') Z (w_1 C') Z C' Z & C' Z (w_1 C') Z C' Z & (w_1 C') Z C' Z C' Z & C' Z C' Z C' Z \\
(w_1 C') Z (w_1 C') Z & e_1 & 0 & 1 & 0 & 1 & 0 & 0 & 0 \\
C' Z (w_1 C') Z & 0 & e_1 & 0 & 1 & 0 & 1 & 0 & 0 \\
(w_1 C') Z C' Z & e_2 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \\
C' Z C' Z & 0 & e_2 & 0 & 0 & 0 & 0 & 0 & 1
}
}
\]
in the lower summand.
\item\label{it:CupDownDown} The 2-morphism \includegraphics[scale=0.4]{helper_morphism34.eps} gets assigned the dual of the morphism from $Y \Lambda Y \Lambda$ to $Y \Lambda$ with matrix
\[
\kbordermatrix{
& A'XA'X & B'YA'X & A'XB'Y & B'YB'Y \\
A'X & 1 & 0 & 0 & 0 \\
B'Y & 0 & 1 & 0 & 0
}
\]
in the middle summand and
\[
\kbordermatrix{
& (w_1 C') Z (w_1 C') Z & C' Z (w_1 C') Z & (w_1 C') Z C' Z & C' Z C' Z\\
(w_1 C') Z & 1 & 0 & 0 & 0 \\
C' Z & 0 & 1 & 0 & 0
}
\]
in the lower summand.
\item\label{it:CapDownDown} The 2-morphism \includegraphics[scale=0.4]{helper_morphism35.eps} gets assigned the dual of the morphism from $Y \Lambda$ to $Y \Lambda Y \Lambda$ with matrix
\[
\kbordermatrix{
& A'X & B'Y \\
A'XA'X & 0 & 0 \\
B'YA'X & 0 & 0 \\
A'XB'Y & 1 & 0 \\
B'YB'Y & 0 & 1
}
\]
in the middle summand and
\[
\kbordermatrix{
& (w_1 C')Z & C' Z \\
(w_1 C') Z (w_1 C') Z & 0 & 0 \\
C' Z (w_1 C') Z & 0 & 0 \\
(w_1 C') Z C' Z & 1 & 0 \\
C' Z C' Z & 0 & 1
}
\]
in the lower summand.
\item The 2-morphism \includegraphics[scale=0.4]{helper_morphism36.eps} gets assigned the same morphism as in item \eqref{it:EtaId}.
\item The 2-morphism \includegraphics[scale=0.4]{helper_morphism37.eps} gets assigned the same morphism as in item \eqref{it:IdEtaId}.
\item\label{it:ComplicatedCap2} The 2-morphism \includegraphics[scale=0.4]{helper_morphism39.eps} gets assigned the dual of the morphism from $Y \Lambda Y \Lambda$ to $Y \Lambda Y \Lambda Y \Lambda$ with matrix
\[
\kbordermatrix{
& A'XA'X & B'YA'X & A'XB'Y & B'YB'Y \\
A'XA'XA'X & 1 & 0 & 0 & 0 \\
B'YA'XA'X & 0 & 1 & 0 & 0 \\
A'XB'YA'X & 0 & 0 & 1 & 0 \\
B'YB'YA'X & 0 & 0 & 0 & 1 \\
A'XA'XB'Y & 0 & 0 & 1 & 0 \\
B'YA'XB'Y & 0 & 0 & 0 & 1 \\
A'XB'YB'Y & 0 & 0 & e_1 & 0 \\
B'YB'YB'Y & 0 & 0 & 0 & e_1
}
\]
in the middle summand and
\[
\kbordermatrix{
& (w_1 C') Z (w_1 C') Z & C' Z (w_1 C') Z & (w_1 C') Z C' Z & C' Z C' Z\\
(w_1 C') Z (w_1 C') Z (w_1 C') Z & 1 & 0 & 0 & 0 \\
C' Z (w_1 C') Z (w_1 C') Z & 0 & 1 & 0 & 0 \\
(w_1 C') Z C' Z (w_1 C') Z & 0 & 0 & 1 & 0 \\
C' Z C' Z (w_1 C') Z & 0 & 0 & 0 & 1 \\
(w_1 C') Z (w_1 C') Z C' Z & 0 & 0 & 1 & 0 \\
C' Z (w_1 C') Z C' Z & 0 & 0 & 0 & 1 \\
(w_1 C') Z C' Z C' Z & e_2 & 0 & e_1 & 0 \\
C' Z C' Z C' Z & 0 & e_2 & 0 & e_1
}
\]
in the lower summand.
\item The 2-morphism \includegraphics[scale=0.4]{helper_morphism40.eps} gets assigned the same morphism as in item \eqref{it:IdEps'}.
\item The 2-morphism \includegraphics[scale=0.4]{helper_morphism41.eps} gets assigned the same morphism as in item \eqref{it:IdEps'Id}.
\item\label{it:ComplicatedCup2} The 2-morphism \includegraphics[scale=0.4]{helper_morphism43.eps} gets assigned the dual of the morphism from $Y \Lambda Y \Lambda Y \Lambda$ to $Y \Lambda Y \Lambda$ with matrix
\[
\resizebox{\textwidth}{!}{
\kbordermatrix{
& A'XA'XA'X & B'YA'XA'X & A'XB'YA'X & B'YB'YA'X & A'XA'XB'Y & B'YA'XB'Y & A'XB'YB'Y & B'YB'YB'Y \\
A'XA'X & e_1 & 1 & 1 & 0 & 0 & 0 & 0 & 0 \\
B'YA'X & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\
A'XB'Y & 0 & 0 & 0 & 0 & e_1 & 1 & 1 & 0 \\
B'YB'Y & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1
}
}
\]
in the middle summand and
\[
\resizebox{\textwidth}{!}{
\kbordermatrix{
& (w_1 C') Z (w_1 C') Z (w_1 C') Z & C' Z (w_1 C') Z (w_1 C') Z & (w_1 C') Z C' Z (w_1 C') Z & C' Z C' Z (w_1 C') Z & (w_1 C') Z (w_1 C') Z C' Z & C' Z (w_1 C') Z C' Z & (w_1 C') Z C' Z C' Z & C' Z C' Z C' Z \\
(w_1 C') Z (w_1 C') Z & e_1 & 1 & 1 & 0 & 0 & 0 & 0 & 0 \\
C' Z (w_1 C') Z & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\
(w_1 C') Z C' Z & e_2 & 0 & 0 & 0 & e_1 & 1 & 1 & 0 \\
C' Z C' Z & 0 & e_2 & 0 & 0 & 0 & 0 & 0 & 1
}
}
\]
in the lower summand.
\item\label{it:DownDownCup} The 2-morphism \includegraphics[scale=0.4]{helper_morphism44.eps} gets assigned the dual of the morphism from $Y \Lambda Y \Lambda$ to $Y \Lambda$ with matrix
\[
\kbordermatrix{
& A'XA'X & B'YA'X & A'XB'Y & B'YB'Y \\
A'X & 1 & 0 & 0 & 0 \\
B'Y & 0 & 0 & 1 & 0
}
\]
in the middle summand and
\[
\kbordermatrix{
& (w_1 C') Z (w_1 C') Z & C' Z (w_1 C') Z & (w_1 C') Z C' Z & C' Z C' Z\\
(w_1 C') Z & 1 & 0 & 0 & 0 \\
C' Z & 0 & 0 & 1 & 0
}
\]
in the lower summand.
\item\label{it:EtaIdId} The 2-morphism \includegraphics[scale=0.4]{helper_morphism46.eps} gets assigned the dual of the morphism from $\Lambda Y$ to $\Lambda Y \Lambda Y$ with matrix
\[
\kbordermatrix{
& XA' & YA' & XB' & YB' \\
XA'XA' & 1 & 0 & 0 & 0 \\
YA'XA' & 0 & 1 & 0 & 0 \\
XB'YA' & 0 & 0 & 0 & 0 \\
YB'YA' & 0 & U_2 & 0 & 0 \\
XA'XB' & 0 & 0 & 1 & 0 \\
YA'XB' & 0 & 0 & 0 & 1 \\
XB'YB' & 0 & 0 & 0 & 0 \\
YB'YB' & 0 & 0 & 0 & U_2
}
\]
in the middle summand and
\[
\kbordermatrix{
& Z (w_1 C') & Z C' \\
Z (w_1 C') Z (w_1 C') & 1 & 0 \\
Z C' Z (w_1 C') & U_2 & 0 \\
Z (w_1 C') Z C' & 0 & 1 \\
Z C' Z C' & 0 & U_2
}
\]
in the lower summand.
\item\label{it:IdCrossId} The 2-morphism \includegraphics[scale=0.4]{helper_morphism47.eps} gets assigned the dual of the morphism from $\Lambda Y \Lambda Y$ to $\Lambda Y \Lambda Y$ with matrix
\[
\kbordermatrix{
& XA'XA' & YA'XA' & XB'YA' & YB'YA' & XA'XB' & YA'XB' & XB'YB' & YB'YB' \\
XA'XA' & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
YA'XA' & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
XB'YA' & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
YB'YA' & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\
XA'XB' & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
YA'XB' & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
XB'YB' & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\
YB'YB' & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0
}
\]
in the middle summand and
\[
\kbordermatrix{
& Z (w_1 C') Z (w_1 C') & Z C' Z (w_1 C') & Z (w_1 C') Z C' & Z C' Z C' \\
Z (w_1 C') Z (w_1 C') & 0 & 0 & 0 & 0 \\
Z C' Z (w_1 C') & 1 & 0 & 0 & 0 \\
Z (w_1 C') Z C' & 0 & 0 & 0 & 0 \\
Z C' Z C' & 0 & 0 & 1 & 0
}
\]
in the lower summand.
\item\label{it:IdIdEps'} The 2-morphism \includegraphics[scale=0.4]{helper_morphism48.eps} gets assigned the dual of the morphism from $\Lambda Y \Lambda Y$ to $\Lambda Y$ with matrix
\[
\kbordermatrix{
& XA'XA' & YA'XA' & XB'YA' & YB'YA' & XA'XB' & YA'XB' & XB'YB' & YB'YB' \\
XA' & U_1 & 0 & 1 & 0 & 1 \otimes \lambda & 0 & 0 & 0 \\
YA' & 0 & U_2 & 0 & 1 & 0 & 1 \otimes \lambda & 0 & 0 \\
XB' & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \\
YB' & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1
}
\]
in the middle summand and
\[
\kbordermatrix{
& Z (w_1 C') Z (w_1 C') & Z C' Z (w_1 C') & Z (w_1 C') Z C' & Z C' Z C' \\
Z (w_1 C') & U_1 + U_2 & 1 & 1 & 0 \\
Z C' & U_1 U_2 & 0 & 0 & 1
}
\]
in the lower summand.
\item\label{it:IdIdEta} The 2-morphism \includegraphics[scale=0.4]{helper_morphism49.eps} gets assigned the dual of the morphism from $\Lambda Y$ to $\Lambda Y \Lambda Y$ with matrix
\[
\kbordermatrix{
& XA' & YA' & XB' & YB' \\
XA'XA' & 1 & 0 & 0 & 0 \\
YA'XA' & 0 & 1 & 0 & 0 \\
XB'YA' & 0 & 0 & 1 \otimes \lambda & 0 \\
YB'YA' & 0 & 0 & 0 & 1 \otimes \lambda \\
XA'XB' & 0 & 0 & 1 & 0 \\
YA'XB' & 0 & 0 & 0 & 1 \\
XB'YB' & 0 & 0 & U_1 & 0 \\
YB'YB' & 0 & 0 & 0 & U_2
}
\]
in the middle summand and
\[
\kbordermatrix{
& Z (w_1 C') & Z C' \\
Z (w_1 C') Z (w_1 C') & 1 & 0 \\
Z C' Z (w_1 C') & 0 & 1 \\
Z (w_1 C') Z C' & 0 & 1 \\
Z C' Z C' & U_1 U_2 & U_1 + U_2
}
\]
in the lower summand.
\item\label{it:IdDownCrossId} The 2-morphism \includegraphics[scale=0.4]{helper_morphism51.eps} gets assigned the same morphism as in item \eqref{it:IdCrossId}.
\item\label{it:Eps'IdId} The 2-morphism \includegraphics[scale=0.4]{helper_morphism52.eps} gets assigned the dual of the morphism from $\Lambda Y \Lambda Y$ to $\Lambda Y$ with matrix
\[
\kbordermatrix{
& XA'XA' & YA'XA' & XB'YA' & YB'YA' & XA'XB' & YA'XB' & XB'YB' & YB'YB' \\
XA' & U_1 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\
YA' & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\
XB' & 0 & 0 & 0 & 0 & U_1 & 0 & 1 & 0 \\
YB' & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1
}
\]
in the middle summand and
\[
\kbordermatrix{
& Z (w_1 C') Z (w_1 C') & Z C' Z (w_1 C') & Z (w_1 C') Z C' & Z C' Z C' \\
Z (w_1 C') & U_1 & 1 & 0 & 0 \\
Z C' & 0 & 0 & U_1 & 1
}
\]
in the lower summand.
\item\label{it:DotForNilHecke1} The 2-morphism \includegraphics[scale=0.4]{helper_morphism57.eps} gets assigned the same morphism as in item \eqref{it:IdUpDot}.
\item\label{it:DotForNilHecke2} The 2-morphism \includegraphics[scale=0.4]{helper_morphism58.eps} gets assigned the same morphism as in item \eqref{it:UpDotId}.
\end{enumerate}
\end{proposition}
\begin{lemma}
The maps for both types of sideways crossings \includegraphics[scale=0.4]{sideways_simple1.eps} and \includegraphics[scale=0.4]{sideways_simple2.eps} are the identity endomorphism of ${^{\vee}}X$.
\end{lemma}
\begin{proof}
The map for \includegraphics[scale=0.4]{sideways_simple1.eps} is \eqref{it:IdIdEps'}\eqref{it:IdCrossId}\eqref{it:EtaIdId}, which equals
\[
\resizebox{\textwidth}{!}{
\kbordermatrix{
& XA'XA' & YA'XA' & XB'YA' & YB'YA' & XA'XB' & YA'XB' & XB'YB' & YB'YB' \\
XA' & U_1 & 0 & 1 & 0 & 1 \otimes \lambda & 0 & 0 & 0 \\
YA' & 0 & U_2 & 0 & 1 & 0 & 1 \otimes \lambda & 0 & 0 \\
XB' & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \\
YB' & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1
}
\kbordermatrix{
& XA'XA' & YA'XA' & XB'YA' & YB'YA' & XA'XB' & YA'XB' & XB'YB' & YB'YB' \\
XA'XA' & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
YA'XA' & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
XB'YA' & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
YB'YA' & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\
XA'XB' & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
YA'XB' & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
XB'YB' & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\
YB'YB' & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0
}
\kbordermatrix{
& XA' & YA' & XB' & YB' \\
XA'XA' & 1 & 0 & 0 & 0 \\
YA'XA' & 0 & 1 & 0 & 0 \\
XB'YA' & 0 & 0 & 0 & 0 \\
YB'YA' & 0 & U_2 & 0 & 0 \\
XA'XB' & 0 & 0 & 1 & 0 \\
YA'XB' & 0 & 0 & 0 & 1 \\
XB'YB' & 0 & 0 & 0 & 0 \\
YB'YB' & 0 & 0 & 0 & U_2
}
}
\]
in the middle summand and
\[
\resizebox{\textwidth}{!}{
\kbordermatrix{
& Z (w_1 C') Z (w_1 C') & Z C' Z (w_1 C') & Z (w_1 C') Z C' & Z C' Z C' \\
Z (w_1 C') & U_1 + U_2 & 1 & 1 & 0 \\
Z C' & U_1 U_2 & 0 & 0 & 1
}
\kbordermatrix{
& Z (w_1 C') Z (w_1 C') & Z C' Z (w_1 C') & Z (w_1 C') Z C' & Z C' Z C' \\
Z (w_1 C') Z (w_1 C') & 0 & 0 & 0 & 0 \\
Z C' Z (w_1 C') & 1 & 0 & 0 & 0 \\
Z (w_1 C') Z C' & 0 & 0 & 0 & 0 \\
Z C' Z C' & 0 & 0 & 1 & 0
}
\kbordermatrix{
& Z (w_1 C') & Z C' \\
Z (w_1 C') Z (w_1 C') & 1 & 0 \\
Z C' Z (w_1 C') & U_2 & 0 \\
Z (w_1 C') Z C' & 0 & 1 \\
Z C' Z C' & 0 & U_2
}
}
\]
in the lower summand.
The map for \includegraphics[scale=0.4]{sideways_simple2.eps} is \eqref{it:Eps'IdId}\eqref{it:IdDownCrossId}\eqref{it:IdIdEta} or equivalently \eqref{it:Eps'IdId}\eqref{it:IdCrossId}\eqref{it:IdIdEta}, which equals
\[
\resizebox{\textwidth}{!}{
\kbordermatrix{
& XA'XA' & YA'XA' & XB'YA' & YB'YA' & XA'XB' & YA'XB' & XB'YB' & YB'YB' \\
XA' & U_1 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\
YA' & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\
XB' & 0 & 0 & 0 & 0 & U_1 & 0 & 1 & 0 \\
YB' & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1
}
\kbordermatrix{
& XA'XA' & YA'XA' & XB'YA' & YB'YA' & XA'XB' & YA'XB' & XB'YB' & YB'YB' \\
XA'XA' & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
YA'XA' & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
XB'YA' & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
YB'YA' & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\
XA'XB' & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
YA'XB' & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
XB'YB' & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\
YB'YB' & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0
}
\kbordermatrix{
& XA' & YA' & XB' & YB' \\
XA'XA' & 1 & 0 & 0 & 0 \\
YA'XA' & 0 & 1 & 0 & 0 \\
XB'YA' & 0 & 0 & 1 \otimes \lambda & 0 \\
YB'YA' & 0 & 0 & 0 & 1 \otimes \lambda \\
XA'XB' & 0 & 0 & 1 & 0 \\
YA'XB' & 0 & 0 & 0 & 1 \\
XB'YB' & 0 & 0 & U_1 & 0 \\
YB'YB' & 0 & 0 & 0 & U_2
}
}
\]
in the middle summand and
\[
\resizebox{\textwidth}{!}{
\kbordermatrix{
& Z (w_1 C') Z (w_1 C') & Z C' Z (w_1 C') & Z (w_1 C') Z C' & Z C' Z C' \\
Z (w_1 C') & U_1 & 1 & 0 & 0 \\
Z C' & 0 & 0 & U_1 & 1
}
\kbordermatrix{
& Z (w_1 C') Z (w_1 C') & Z C' Z (w_1 C') & Z (w_1 C') Z C' & Z C' Z C' \\
Z (w_1 C') Z (w_1 C') & 0 & 0 & 0 & 0 \\
Z C' Z (w_1 C') & 1 & 0 & 0 & 0 \\
Z (w_1 C') Z C' & 0 & 0 & 0 & 0 \\
Z C' Z C' & 0 & 0 & 1 & 0
}
\kbordermatrix{
& Z (w_1 C') & Z C' \\
Z (w_1 C') Z (w_1 C') & 1 & 0 \\
Z C' Z (w_1 C') & 0 & 1 \\
Z (w_1 C') Z C' & 0 & 1 \\
Z C' Z C' & U_1 U_2 & U_1 + U_2
}
}
\]
in the lower summand. In both cases, multiplying out the three factors in each summand yields the identity matrix, proving the lemma.
\end{proof}
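The lower-summand products in the above proof are small enough to verify mechanically. The following is a minimal sympy sketch (our own check, not part of the construction), treating $U_1$ and $U_2$ as formal commuting variables; no $\F_2$ reduction is needed here since the products come out to the identity on the nose.

```python
import sympy as sp

U1, U2 = sp.symbols('U1 U2')

# Lower-summand factors for the first sideways crossing:
# (IdIdEps') then (IdCrossId) then (EtaIdId), as displayed in the proof.
A = sp.Matrix([[U1 + U2, 1, 1, 0],
               [U1 * U2, 0, 0, 1]])
B = sp.Matrix([[0, 0, 0, 0],
               [1, 0, 0, 0],
               [0, 0, 0, 0],
               [0, 0, 1, 0]])
C = sp.Matrix([[1, 0],
               [U2, 0],
               [0, 1],
               [0, U2]])
first = (A * B * C).expand()

# Lower-summand factors for the second sideways crossing:
# (Eps'IdId) then (IdCrossId) then (IdIdEta).
D = sp.Matrix([[U1, 1, 0, 0],
               [0, 0, U1, 1]])
E = sp.Matrix([[1, 0],
               [0, 1],
               [0, 1],
               [U1 * U2, U1 + U2]])
second = (D * B * E).expand()

print(first == sp.eye(2), second == sp.eye(2))  # both products are the identity
```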
\begin{theorem}\label{thm:SkewHowe2Action}
The relations in $\Sc(2,2)^{*,*}$ hold for the maps defined above, so we have a functor of bicategories from $\Sc(2,2)^{*,*}$ to $2\Rep(\U^-)^{*,*}$.
\end{theorem}
\begin{proof}
We need to check the relations \eqref{it:Biadjointness1}--\eqref{it:NilHecke2} of Section~\ref{sec:CatQGDef}.
\begin{itemize}
\item For relation \eqref{it:Biadjointness1}, we want \eqref{it:EpsId}\eqref{it:IdEta} $= \id =$ \eqref{it:IdEps'}\eqref{it:Eta'Id} in the labeling of Proposition~\ref{prop:HelperMorphisms}. Indeed, the matrix products
\[
\kbordermatrix{
& A'XA' & B'YA' & A'XB' & B'YB' \\
A' & 1 & 0 & 0 & 0 \\
B' & 0 & 0 & 1 & 0
}
\kbordermatrix{
& A' & B' \\
A'XA' & 1 & 0 \\
B'YA' & 0 & 1 \otimes \lambda \\
A'XB' & 0 & 1 \\
B'YB' & 0 & e_1
}
\]
and
\[
\kbordermatrix{
& (w_1 C')Z(w_1 C') & C' Z (w_1 C') & (w_1 C')Z C' & C' Z C' \\
w_1 C' & 1 & 0 & 0 & 0 \\
C' & 0 & 0 & 1 & 0
}
\kbordermatrix{
& w_1 C' & C' \\
(w_1 C')Z(w_1 C') & 1 & 0 \\
C' Z (w_1 C') & 0 & 1 \\
(w_1 C')Z C' & 0 & 1 \\
C' Z C' & e_2 & e_1
}
\]
are both the identity, so \eqref{it:EpsId}\eqref{it:IdEta} $= \id$, and the products
\[
\kbordermatrix{
& A'XA' & B'YA' & A'XB' & B'YB' \\
A' & e_1 & 1 & 1 \otimes \lambda & 0 \\
B' & 0 & 0 & 0 & 1
}
\kbordermatrix{
& A' & B' \\
A'XA' & 0 & 0 \\
B'YA' & 1 & 0 \\
A'XB' & 0 & 0 \\
B'YB' & 0 & 1
}
\]
and
\[
\kbordermatrix{
& (w_1 C')Z(w_1 C') & C' Z (w_1 C') & (w_1 C')Z C' & C' Z C' \\
w_1 C' & e_1 & 1 & 1 & 0\\
C' & e_2 & 0 & 0 & 1
}
\kbordermatrix{
& w_1 C' & C' \\
(w_1 C')Z(w_1 C') & 0 & 0 \\
C' Z (w_1 C') & 1 & 0 \\
(w_1 C')Z C' & 0 & 0 \\
C' Z C' & 0 & 1
}
\]
are both the identity, so $\id =$ \eqref{it:IdEps'}\eqref{it:Eta'Id}.
\item For relation \eqref{it:Biadjointness2}, we want \eqref{it:Eps'Id}\eqref{it:IdEta'} $=\id=$ \eqref{it:IdEps}\eqref{it:EtaId}. Indeed, the matrix products
\[
\kbordermatrix{
& X A' X & Y A' X & X B' Y & Y B' Y \\
X & U_1 & 0 & 1 & 0 \\
Y & 0 & 0 & 0 & 1
}
\kbordermatrix{
& X & Y \\
X A' X & 0 & 0 \\
Y A' X & 0 & 0 \\
X B' Y & 1 & 0 \\
Y B' Y & 0 & 1
}
\]
and
\[
\kbordermatrix{
& Z(w_1 C')Z & ZC' Z \\
Z & U_1 & 1
}
\kbordermatrix{
& Z \\
Z(w_1 C')Z & 0 \\
ZC' Z & 1
}
\]
are both the identity, so \eqref{it:Eps'Id}\eqref{it:IdEta'} $=\id$, and the matrix products
\[
\kbordermatrix{
& X A' X & Y A' X & X B' Y & Y B' Y \\
X & 1 & 0 & 0 & 0 \\
Y & 0 & 1 & 0 & 0
}
\kbordermatrix{
& X & Y \\
X A' X & 1 & 0 \\
Y A' X & 0 & 1 \\
X B' Y & 0 & 0 \\
Y B' Y & 0 & U_2
}
\]
and
\[
\kbordermatrix{
& Z(w_1 C')Z & ZC' Z \\
Z & 1 & 0
}
\kbordermatrix{
& Z \\
Z(w_1 C')Z & 1 \\
ZC' Z & U_2
}
\]
are both the identity, so $\id=$ \eqref{it:IdEps}\eqref{it:EtaId}.
\item For relation \eqref{it:Biadjointness3}, we want \eqref{it:Eps'IdAgain}\eqref{it:IdEta'Again} $=\id=$ \eqref{it:IdEpsAgain}\eqref{it:EtaIdAgain}; this amounts to \eqref{it:Eps'Id}\eqref{it:IdEta'} $=\id=$ \eqref{it:IdEps}\eqref{it:EtaId} which was proved above.
\item For relation \eqref{it:Biadjointness4}, we want \eqref{it:EpsIdAgain}\eqref{it:IdEtaAgain} $=\id=$ \eqref{it:IdEps'Again}\eqref{it:Eta'IdAgain}; this amounts to \eqref{it:EpsId}\eqref{it:IdEta} $=\id=$ \eqref{it:IdEps'}\eqref{it:Eta'Id} which was proved above.
\item For relation \eqref{it:DotCyclic1}, we want \eqref{it:Eps'IdAgain}\eqref{it:Extra}\eqref{it:IdEta'Again} $=\delta^{\down}_{1,1}=$ \eqref{it:IdEpsAgain}\eqref{it:Extra}\eqref{it:EtaIdAgain}; this amounts to \eqref{it:Eps'Id}\eqref{it:Extra}\eqref{it:IdEta'} $=\delta^{\down}_{1,1}=$ \eqref{it:IdEps}\eqref{it:Extra}\eqref{it:EtaId}. Indeed, the matrix products
\[
\kbordermatrix{
& X A' X & Y A' X & X B' Y & Y B' Y \\
X & U_1 & 0 & 1 & 0 \\
Y & 0 & 0 & 0 & 1
}
\kbordermatrix{
& X A' X & Y A' X & X B' Y & Y B' Y \\
X A' X & U_1 & 0 & 1 & 0 \\
Y A' X & 0 & U_2 & 0 & 1 \\
X B' Y & 0 & 0 & 0 & 0 \\
Y B' Y & 0 & 0 & 0 & 0
}
\kbordermatrix{
& X & Y \\
X A' X & 0 & 0 \\
Y A' X & 0 & 0 \\
X B' Y & 1 & 0 \\
Y B' Y & 0 & 1
}
\]
and
\[
\kbordermatrix{
& Z(w_1 C')Z & ZC' Z \\
Z & U_1 & 1
}
\kbordermatrix{
& Z(w_1 C')Z & ZC' Z \\
Z(w_1 C')Z & U_1 + U_2 & 1 \\
ZC' Z & U_1 U_2 & 0
}
\kbordermatrix{
& Z \\
Z(w_1 C')Z & 0 \\
ZC' Z & 1
}
\]
are equal to the two summands of $\delta^{\down}_{1,1}$, while the matrix products
\[
\kbordermatrix{
& X A' X & Y A' X & X B' Y & Y B' Y \\
X & 1 & 0 & 0 & 0 \\
Y & 0 & 1 & 0 & 0
}
\kbordermatrix{
& X A' X & Y A' X & X B' Y & Y B' Y \\
X A' X & U_1 & 0 & 1 & 0 \\
Y A' X & 0 & U_2 & 0 & 1 \\
X B' Y & 0 & 0 & 0 & 0 \\
Y B' Y & 0 & 0 & 0 & 0
}
\kbordermatrix{
& X & Y \\
X A' X & 1 & 0 \\
Y A' X & 0 & 1 \\
X B' Y & 0 & 0 \\
Y B' Y & 0 & U_2
}
\]
and
\[
\kbordermatrix{
& Z(w_1 C')Z & ZC' Z \\
Z & 1 & 0
}
\kbordermatrix{
& Z(w_1 C')Z & ZC' Z \\
Z(w_1 C')Z & U_1 + U_2 & 1 \\
ZC' Z & U_1 U_2 & 0
}
\kbordermatrix{
& Z \\
Z(w_1 C')Z & 1 \\
ZC' Z & U_2
}
\]
are also equal to the two summands of $\delta^{\down}_{1,1}$.
\item For relation \eqref{it:DotCyclic2}, we want \eqref{it:EpsIdAgain}\eqref{it:IdUpDotId}\eqref{it:IdEtaAgain} $=\delta^{\down}_{2,0}=$ \eqref{it:IdEps'Again}\eqref{it:IdUpDotId}\eqref{it:Eta'IdAgain}, or equivalently \eqref{it:EpsId}\eqref{it:IdUpDotId}\eqref{it:IdEta} $=\delta^{\down}_{2,0}=$ \eqref{it:IdEps'}\eqref{it:IdUpDotId}\eqref{it:Eta'Id}. Indeed, the matrix products
\[
\kbordermatrix{
& A'XA' & B'YA' & A'XB' & B'YB' \\
A' & 1 & 0 & 0 & 0 \\
B' & 0 & 0 & 1 & 0
}
\kbordermatrix{
& A'XA' & B'YA' & A'XB' & B'YB' \\
A'XA' & 0 & 1 & 0 & 0 \\
B'YA' & 0 & e_1 & 0 & 0 \\
A'XB' & 0 & 0 & 0 & 1 \\
B'YB' & 0 & 0 & 0 & e_1
}
\kbordermatrix{
& A' & B' \\
A'XA' & 1 & 0 \\
B'YA' & 0 & 1 \otimes \lambda \\
A'XB' & 0 & 1 \\
B'YB' & 0 & e_1
}
\]
and
\[
\resizebox{\textwidth}{!}{
\kbordermatrix{
& (w_1 C')Z(w_1 C') & C' Z (w_1 C') & (w_1 C')Z C' & C' Z C' \\
w_1 C' & 1 & 0 & 0 & 0 \\
C' & 0 & 0 & 1 & 0
}
\kbordermatrix{
& (w_1 C')Z(w_1 C') & C' Z (w_1 C') & (w_1 C')Z C' & C' Z C' \\
(w_1 C')Z(w_1 C') & 0 & 1 & 0 & 0 \\
C' Z (w_1 C') & e_2 & e_1 & 0 & 0 \\
(w_1 C')Z C' & 0 & 0 & 0 & 1 \\
C' Z C' & 0 & 0 & e_2 & e_1
}
\kbordermatrix{
& w_1 C' & C' \\
(w_1 C')Z(w_1 C') & 1 & 0 \\
C' Z (w_1 C') & 0 & 1 \\
(w_1 C')Z C' & 0 & 1 \\
C' Z C' & e_2 & e_1
}
}
\]
are equal to the two summands of $\delta^{\down}_{2,0}$, while the matrix products
\[
\kbordermatrix{
& A'XA' & B'YA' & A'XB' & B'YB' \\
A' & e_1 & 1 & 1 \otimes \lambda & 0 \\
B' & 0 & 0 & 0 & 1
}
\kbordermatrix{
& A'XA' & B'YA' & A'XB' & B'YB' \\
A'XA' & 0 & 1 & 0 & 0 \\
B'YA' & 0 & e_1 & 0 & 0 \\
A'XB' & 0 & 0 & 0 & 1 \\
B'YB' & 0 & 0 & 0 & e_1
}
\kbordermatrix{
& A' & B' \\
A'XA' & 0 & 0 \\
B'YA' & 1 & 0 \\
A'XB' & 0 & 0 \\
B'YB' & 0 & 1
}
\]
and
\[
\resizebox{\textwidth}{!}{
\kbordermatrix{
& (w_1 C')Z(w_1 C') & C' Z (w_1 C') & (w_1 C')Z C' & C' Z C' \\
w_1 C' & e_1 & 1 & 1 & 0\\
C' & e_2 & 0 & 0 & 1
}
\kbordermatrix{
& (w_1 C')Z(w_1 C') & C' Z (w_1 C') & (w_1 C')Z C' & C' Z C' \\
(w_1 C')Z(w_1 C') & 0 & 1 & 0 & 0 \\
C' Z (w_1 C') & e_2 & e_1 & 0 & 0 \\
(w_1 C')Z C' & 0 & 0 & 0 & 1 \\
C' Z C' & 0 & 0 & e_2 & e_1
}
\kbordermatrix{
& w_1 C' & C' \\
(w_1 C')Z(w_1 C') & 0 & 0 \\
C' Z (w_1 C') & 1 & 0 \\
(w_1 C')Z C' & 0 & 0 \\
C' Z C' & 0 & 1
}
}
\]
are also equal to the two summands of $\delta^{\down}_{2,0}$.
\item For relation \eqref{it:CrossingCyclic}, we want \eqref{it:CupDownDown}\eqref{it:ComplicatedCup}\eqref{it:BigCrossing}\eqref{it:ComplicatedCap}\eqref{it:DownDownCap} $=\chi=$ \eqref{it:DownDownCup}\eqref{it:ComplicatedCup2}\eqref{it:BigCrossing}\eqref{it:ComplicatedCap2}\eqref{it:CapDownDown}. Indeed, the matrix products
\[
\resizebox{\textwidth}{!}{
\kbordermatrix{
& A'XA'X & B'YA'X & A'XB'Y & B'YB'Y \\
A'X & 1 & 0 & 0 & 0 \\
B'Y & 0 & 1 & 0 & 0
}
\kbordermatrix{
& A'XA'XA'X & B'YA'XA'X & A'XB'YA'X & B'YB'YA'X & A'XA'XB'Y & B'YA'XB'Y & A'XB'YB'Y & B'YB'YB'Y \\
A'XA'X & e_1 & 0 & 1 & 0 & 1 & 0 & 0 & 0 \\
B'YA'X & 0 & e_1 & 0 & 1 & 0 & 1 & 0 & 0 \\
A'XB'Y & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \\
B'YB'Y & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1
}
\kbordermatrix{
& A'XA'XA'X & B'YA'XA'X & A'XB'YA'X & B'YB'YA'X & A'XA'XB'Y & B'YA'XB'Y & A'XB'YB'Y & B'YB'YB'Y \\
A'XA'XA'X & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
B'YA'XA'X & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
A'XB'YA'X & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
B'YB'YA'X & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\
A'XA'XB'Y & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
B'YA'XB'Y & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
A'XB'YB'Y & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\
B'YB'YB'Y & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0
}
\kbordermatrix{
& A'XA'X & B'YA'X & A'XB'Y & B'YB'Y \\
A'XA'XA'X & 1 & 0 & 0 & 0 \\
B'YA'XA'X & 0 & 1 & 0 & 0 \\
A'XB'YA'X & 0 & 1 & 0 & 0 \\
B'YB'YA'X & 0 & e_1 & 0 & 0 \\
A'XA'XB'Y & 0 & 0 & 1 & 0 \\
B'YA'XB'Y & 0 & 0 & 0 & 1 \\
A'XB'YB'Y & 0 & 0 & 0 & 1 \\
B'YB'YB'Y & 0 & 0 & 0 & e_1
}
\kbordermatrix{
& A'X & B'Y \\
A'XA'X & 0 & 0 \\
B'YA'X & 1 & 0 \\
A'XB'Y & 0 & 0 \\
B'YB'Y & 0 & 1
}
}
\]
and
\[
\resizebox{\textwidth}{!}{
\kbordermatrix{
& (w_1 C') Z (w_1 C') Z & C' Z (w_1 C') Z & (w_1 C') Z C' Z & C' Z C' Z\\
(w_1 C') Z & 1 & 0 & 0 & 0 \\
C' Z & 0 & 1 & 0 & 0
}
\kbordermatrix{
& (w_1 C') Z (w_1 C') Z (w_1 C') Z & C' Z (w_1 C') Z (w_1 C') Z & (w_1 C') Z C' Z (w_1 C') Z & C' Z C' Z (w_1 C') Z & (w_1 C') Z (w_1 C') Z C' Z & C' Z (w_1 C') Z C' Z & (w_1 C') Z C' Z C' Z & C' Z C' Z C' Z \\
(w_1 C') Z (w_1 C') Z & e_1 & 0 & 1 & 0 & 1 & 0 & 0 & 0 \\
C' Z (w_1 C') Z & 0 & e_1 & 0 & 1 & 0 & 1 & 0 & 0 \\
(w_1 C') Z C' Z & e_2 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \\
C' Z C' Z & 0 & e_2 & 0 & 0 & 0 & 0 & 0 & 1
}
\kbordermatrix{
& (w_1 C') Z (w_1 C') Z (w_1 C') Z & C' Z (w_1 C') Z (w_1 C') Z & (w_1 C') Z C' Z (w_1 C') Z & C' Z C' Z (w_1 C') Z & (w_1 C') Z (w_1 C') Z C' Z & C' Z (w_1 C') Z C' Z & (w_1 C') Z C' Z C' Z & C' Z C' Z C' Z \\
(w_1 C') Z (w_1 C') Z (w_1 C') Z & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
C' Z (w_1 C') Z (w_1 C') Z & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
(w_1 C') Z C' Z (w_1 C') Z & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
C' Z C' Z (w_1 C') Z & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\
(w_1 C') Z (w_1 C') Z C' Z & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
C' Z (w_1 C') Z C' Z & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
(w_1 C') Z C' Z C' Z & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\
C' Z C' Z C' Z & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0
}
\kbordermatrix{
& (w_1 C') Z (w_1 C') Z & C' Z (w_1 C') Z & (w_1 C') Z C' Z & C' Z C' Z\\
(w_1 C') Z (w_1 C') Z (w_1 C') Z & 1 & 0 & 0 & 0 \\
C' Z (w_1 C') Z (w_1 C') Z & 0 & 1 & 0 & 0 \\
(w_1 C') Z C' Z (w_1 C') Z & 0 & 1 & 0 & 0 \\
C' Z C' Z (w_1 C') Z & e_2 & e_1 & 0 & 0 \\
(w_1 C') Z (w_1 C') Z C' Z & 0 & 0 & 1 & 0 \\
C' Z (w_1 C') Z C' Z & 0 & 0 & 0 & 1\\
(w_1 C') Z C' Z C' Z & 0 & 0 & 0 & 1 \\
C' Z C' Z C' Z & 0 & 0 & e_2 & e_1
}
\kbordermatrix{
& (w_1 C')Z & C' Z \\
(w_1 C') Z (w_1 C') Z & 0 & 0 \\
C' Z (w_1 C') Z & 1 & 0 \\
(w_1 C') Z C' Z & 0 & 0 \\
C' Z C' Z & 0 & 1
}
}
\]
equal the two summands of $\chi$, as do the matrix products
\[
\resizebox{\textwidth}{!}{
\kbordermatrix{
& A'XA'X & B'YA'X & A'XB'Y & B'YB'Y \\
A'X & 1 & 0 & 0 & 0 \\
B'Y & 0 & 0 & 1 & 0
}
\kbordermatrix{
& A'XA'XA'X & B'YA'XA'X & A'XB'YA'X & B'YB'YA'X & A'XA'XB'Y & B'YA'XB'Y & A'XB'YB'Y & B'YB'YB'Y \\
A'XA'X & e_1 & 1 & 1 & 0 & 0 & 0 & 0 & 0 \\
B'YA'X & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\
A'XB'Y & 0 & 0 & 0 & 0 & e_1 & 1 & 1 & 0 \\
B'YB'Y & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1
}
\kbordermatrix{
& A'XA'XA'X & B'YA'XA'X & A'XB'YA'X & B'YB'YA'X & A'XA'XB'Y & B'YA'XB'Y & A'XB'YB'Y & B'YB'YB'Y \\
A'XA'XA'X & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
B'YA'XA'X & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
A'XB'YA'X & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
B'YB'YA'X & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\
A'XA'XB'Y & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
B'YA'XB'Y & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
A'XB'YB'Y & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\
B'YB'YB'Y & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0
}
\kbordermatrix{
& A'XA'X & B'YA'X & A'XB'Y & B'YB'Y \\
A'XA'XA'X & 1 & 0 & 0 & 0 \\
B'YA'XA'X & 0 & 1 & 0 & 0 \\
A'XB'YA'X & 0 & 0 & 1 & 0 \\
B'YB'YA'X & 0 & 0 & 0 & 1 \\
A'XA'XB'Y & 0 & 0 & 1 & 0 \\
B'YA'XB'Y & 0 & 0 & 0 & 1 \\
A'XB'YB'Y & 0 & 0 & e_1 & 0 \\
B'YB'YB'Y & 0 & 0 & 0 & e_1
}
\kbordermatrix{
& A'X & B'Y \\
A'XA'X & 0 & 0 \\
B'YA'X & 0 & 0 \\
A'XB'Y & 1 & 0 \\
B'YB'Y & 0 & 1
}
}
\]
and
\[
\resizebox{\textwidth}{!}{
\kbordermatrix{
& (w_1 C') Z (w_1 C') Z & C' Z (w_1 C') Z & (w_1 C') Z C' Z & C' Z C' Z\\
(w_1 C') Z & 1 & 0 & 0 & 0 \\
C' Z & 0 & 0 & 1 & 0
}
\kbordermatrix{
& (w_1 C') Z (w_1 C') Z (w_1 C') Z & C' Z (w_1 C') Z (w_1 C') Z & (w_1 C') Z C' Z (w_1 C') Z & C' Z C' Z (w_1 C') Z & (w_1 C') Z (w_1 C') Z C' Z & C' Z (w_1 C') Z C' Z & (w_1 C') Z C' Z C' Z & C' Z C' Z C' Z \\
(w_1 C') Z (w_1 C') Z & e_1 & 1 & 1 & 0 & 0 & 0 & 0 & 0 \\
C' Z (w_1 C') Z & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\
(w_1 C') Z C' Z & e_2 & 0 & 0 & 0 & e_1 & 1 & 1 & 0 \\
C' Z C' Z & 0 & e_2 & 0 & 0 & 0 & 0 & 0 & 1
}
\kbordermatrix{
& (w_1 C') Z (w_1 C') Z (w_1 C') Z & C' Z (w_1 C') Z (w_1 C') Z & (w_1 C') Z C' Z (w_1 C') Z & C' Z C' Z (w_1 C') Z & (w_1 C') Z (w_1 C') Z C' Z & C' Z (w_1 C') Z C' Z & (w_1 C') Z C' Z C' Z & C' Z C' Z C' Z \\
(w_1 C') Z (w_1 C') Z (w_1 C') Z & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
C' Z (w_1 C') Z (w_1 C') Z & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
(w_1 C') Z C' Z (w_1 C') Z & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
C' Z C' Z (w_1 C') Z & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\
(w_1 C') Z (w_1 C') Z C' Z & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
C' Z (w_1 C') Z C' Z & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
(w_1 C') Z C' Z C' Z & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\
C' Z C' Z C' Z & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0
}
\kbordermatrix{
& (w_1 C') Z (w_1 C') Z & C' Z (w_1 C') Z & (w_1 C') Z C' Z & C' Z C' Z\\
(w_1 C') Z (w_1 C') Z (w_1 C') Z & 1 & 0 & 0 & 0 \\
C' Z (w_1 C') Z (w_1 C') Z & 0 & 1 & 0 & 0 \\
(w_1 C') Z C' Z (w_1 C') Z & 0 & 0 & 1 & 0 \\
C' Z C' Z (w_1 C') Z & 0 & 0 & 0 & 1 \\
(w_1 C') Z (w_1 C') Z C' Z & 0 & 0 & 1 & 0 \\
C' Z (w_1 C') Z C' Z & 0 & 0 & 0 & 1 \\
(w_1 C') Z C' Z C' Z & e_2 & 0 & e_1 & 0 \\
C' Z C' Z C' Z & 0 & e_2 & 0 & e_1
}
\kbordermatrix{
& (w_1 C')Z & C' Z \\
(w_1 C') Z (w_1 C') Z & 0 & 0 \\
C' Z (w_1 C') Z & 0 & 0 \\
(w_1 C') Z C' Z & 1 & 0 \\
C' Z C' Z & 0 & 1
}.
}
\]
\item For relation \eqref{it:NegativeBubble1}, we want $\varepsilon \eta' = 0$; indeed, the matrix products
\[
\kbordermatrix{
& A'X & B'Y \\
\Ib_{\u} & 1 & 0
}
\kbordermatrix{
& \Ib_{\u} \\
A'X & 0 \\
B'Y & 1
}
\]
and
\[
\kbordermatrix{
& w_1 C' Z & C' Z \\
\Ib_{\o} & 1 & 0
}
\kbordermatrix{
& \Ib_{\o} \\
w_1 C' Z & 0 \\
C' Z & 1
}
\]
are both zero.
\item For relation \eqref{it:NegativeBubble2}, we also want $\varepsilon \eta' = 0$; this was just shown.
\item For relation \eqref{it:Deg0Bubble1}, we want $\varepsilon$\eqref{it:IdUpDot}$\eta' = \id$; indeed, the matrix products
\[
\kbordermatrix{
& A'X & B'Y \\
\Ib_{\u} & 1 & 0
}
\kbordermatrix{
& A'X & B'Y \\
A'X & 0 & 1 \\
B'Y & 0 & e_1
}
\kbordermatrix{
& \Ib_{\u} \\
A'X & 0 \\
B'Y & 1
}
\]
and
\[
\kbordermatrix{
& w_1 C' Z & C' Z \\
\Ib_{\o} & 1 & 0
}
\kbordermatrix{
& (w_1 C')Z & C' Z \\
(w_1 C')Z & 0 & 1 \\
C' Z & e_2 & e_1
}
\kbordermatrix{
& \Ib_{\o} \\
w_1 C' Z & 0 \\
C' Z & 1
}
\]
are both the identity.
\item For relation \eqref{it:Deg0Bubble2}, we want $\varepsilon$\eqref{it:UpDotId}$\eta' = \id$; indeed, the matrix products
\[
\kbordermatrix{
& A'X & B'Y \\
\Ib_{\u} & 1 & 0
}
\kbordermatrix{
& A'X & B'Y \\
A'X & e_1 & 1 \\
B'Y & 0 & 0
}
\kbordermatrix{
& \Ib_{\u} \\
A'X & 0 \\
B'Y & 1
}
\]
and
\[
\kbordermatrix{
& w_1 C' Z & C' Z \\
\Ib_{\o} & 1 & 0
}
\kbordermatrix{
& (w_1 C') Z & C' Z \\
(w_1 C') Z & e_1 & 1 \\
C' Z & e_2 & 0
}
\kbordermatrix{
& \Ib_{\o} \\
w_1 C' Z & 0 \\
C' Z & 1
}
\]
are both the identity.
\item Relation \eqref{it:ExtendedSl2_1} holds because both sideways crossings give identity morphisms.
\item For relation \eqref{it:ExtendedSl2_2}, the twice-dotted bubble \includegraphics[scale=0.4]{double_bubble1.eps} gets assigned $\varepsilon$\eqref{it:IdUpDot}\eqref{it:IdUpDot}$\eta'$ which has matrix
\[
\kbordermatrix{
& A'X & B'Y \\
\Ib_{\u} & 1 & 0
}
\kbordermatrix{
& A'X & B'Y \\
A'X & 0 & 1 \\
B'Y & 0 & e_1
}
\kbordermatrix{
& A'X & B'Y \\
A'X & 0 & 1 \\
B'Y & 0 & e_1
}
\kbordermatrix{
& \Ib_{\u} \\
A'X & 0 \\
B'Y & 1
}
\]
in the middle summand and
\[
\kbordermatrix{
& w_1 C' Z & C' Z \\
\Ib_{\o} & 1 & 0
}
\kbordermatrix{
& (w_1 C')Z & C' Z \\
(w_1 C')Z & 0 & 1 \\
C' Z & e_2 & e_1
}
\kbordermatrix{
& (w_1 C')Z & C' Z \\
(w_1 C')Z & 0 & 1 \\
C' Z & e_2 & e_1
}
\kbordermatrix{
& \Ib_{\o} \\
w_1 C' Z & 0 \\
C' Z & 1
}
\]
in the lower summand, amounting to multiplication by $e_1$ in both cases. Thus, we want to show that $\id = \eta' \varepsilon$\eqref{it:IdUpDot} $+$ \eqref{it:IdUpDot}$\eta' \varepsilon + e_1 \eta' \varepsilon$. Indeed, $\eta' \varepsilon$\eqref{it:IdUpDot} has matrix
\[
\kbordermatrix{
& \Ib_{\u} \\
A'X & 0 \\
B'Y & 1
}
\kbordermatrix{
& A'X & B'Y \\
\Ib_{\u} & 1 & 0
}
\kbordermatrix{
& A'X & B'Y \\
A'X & 0 & 1 \\
B'Y & 0 & e_1
}
\]
in the middle summand and
\[
\kbordermatrix{
& \Ib_{\o} \\
w_1 C' Z & 0 \\
C' Z & 1
}
\kbordermatrix{
& w_1 C' Z & C' Z \\
\Ib_{\o} & 1 & 0
}
\kbordermatrix{
& (w_1 C')Z & C' Z \\
(w_1 C')Z & 0 & 1 \\
C' Z & e_2 & e_1
}
\]
in the lower summand, \eqref{it:IdUpDot}$\eta' \varepsilon$ has matrix
\[
\kbordermatrix{
& A'X & B'Y \\
A'X & 0 & 1 \\
B'Y & 0 & e_1
}
\kbordermatrix{
& \Ib_{\u} \\
A'X & 0 \\
B'Y & 1
}
\kbordermatrix{
& A'X & B'Y \\
\Ib_{\u} & 1 & 0
}
\]
in the middle summand and
\[
\kbordermatrix{
& (w_1 C')Z & C' Z \\
(w_1 C')Z & 0 & 1 \\
C' Z & e_2 & e_1
}
\kbordermatrix{
& \Ib_{\o} \\
w_1 C' Z & 0 \\
C' Z & 1
}
\kbordermatrix{
& w_1 C' Z & C' Z \\
\Ib_{\o} & 1 & 0
}
\]
in the lower summand, and $e_1 \eta' \varepsilon$ has matrix
\[
e_1 \kbordermatrix{
& \Ib_{\u} \\
A'X & 0 \\
B'Y & 1
}
\kbordermatrix{
& A'X & B'Y \\
\Ib_{\u} & 1 & 0
}
\]
in the middle summand and
\[
e_1 \kbordermatrix{
& \Ib_{\o} \\
w_1 C' Z & 0 \\
C' Z & 1
}
\kbordermatrix{
& w_1 C' Z & C' Z \\
\Ib_{\o} & 1 & 0
}
\]
in the lower summand. The matrices in each summand add up to the identity.
\item Relation \eqref{it:ExtendedSl2_3} holds because both sideways crossings give identity morphisms.
\item For relation \eqref{it:ExtendedSl2_4}, the twice-dotted bubble \includegraphics[scale=0.4]{double_bubble2.eps} gets assigned $\varepsilon$\eqref{it:UpDotId}\eqref{it:UpDotId}$\eta'$ which has matrix
\[
\kbordermatrix{
& A'X & B'Y \\
\Ib_{\u} & 1 & 0
}
\kbordermatrix{
& A'X & B'Y \\
A'X & e_1 & 1 \\
B'Y & 0 & 0
}
\kbordermatrix{
& A'X & B'Y \\
A'X & e_1 & 1 \\
B'Y & 0 & 0
}
\kbordermatrix{
& \Ib_{\u} \\
A'X & 0 \\
B'Y & 1
}
\]
in the middle summand and
\[
\kbordermatrix{
& w_1 C' Z & C' Z \\
\Ib_{\o} & 1 & 0
}
\kbordermatrix{
& (w_1 C') Z & C' Z \\
(w_1 C') Z & e_1 & 1 \\
C' Z & e_2 & 0
}
\kbordermatrix{
& (w_1 C') Z & C' Z \\
(w_1 C') Z & e_1 & 1 \\
C' Z & e_2 & 0
}
\kbordermatrix{
& \Ib_{\o} \\
w_1 C' Z & 0 \\
C' Z & 1
}
\]
in the lower summand, amounting to multiplication by $e_1$ in both cases. Thus, we want to show that $\id = \eta' \varepsilon$\eqref{it:UpDotId} $+$ \eqref{it:UpDotId}$\eta' \varepsilon + e_1 \eta' \varepsilon$. Indeed, $\eta' \varepsilon$\eqref{it:UpDotId} has matrix
\[
\kbordermatrix{
& \Ib_{\u} \\
A'X & 0 \\
B'Y & 1
}
\kbordermatrix{
& A'X & B'Y \\
\Ib_{\u} & 1 & 0
}
\kbordermatrix{
& A'X & B'Y \\
A'X & e_1 & 1 \\
B'Y & 0 & 0
}
\]
in the middle summand and
\[
\kbordermatrix{
& \Ib_{\o} \\
w_1 C' Z & 0 \\
C' Z & 1
}
\kbordermatrix{
& w_1 C' Z & C' Z \\
\Ib_{\o} & 1 & 0
}
\kbordermatrix{
& (w_1 C')Z & C' Z \\
(w_1 C')Z & e_1 & 1 \\
C' Z & e_2 & 0
}
\]
in the lower summand, \eqref{it:UpDotId}$\eta' \varepsilon$ has matrix
\[
\kbordermatrix{
& A'X & B'Y \\
A'X & e_1 & 1 \\
B'Y & 0 & 0
}
\kbordermatrix{
& \Ib_{\u} \\
A'X & 0 \\
B'Y & 1
}
\kbordermatrix{
& A'X & B'Y \\
\Ib_{\u} & 1 & 0
}
\]
in the middle summand and
\[
\kbordermatrix{
& (w_1 C')Z & C' Z \\
(w_1 C')Z & e_1 & 1 \\
C' Z & e_2 & 0
}
\kbordermatrix{
& \Ib_{\o} \\
w_1 C' Z & 0 \\
C' Z & 1
}
\kbordermatrix{
& w_1 C' Z & C' Z \\
\Ib_{\o} & 1 & 0
}
\]
in the lower summand, and $e_1 \eta' \varepsilon$ has the same matrices as given above. The matrices in each summand add up to the identity.
\item For relation \eqref{it:NilHecke1}, the matrix products
\[
\kbordermatrix{
& A'X & B'Y \\
A'X & 0 & 0 \\
B'Y & 1 & 0
}
\kbordermatrix{
& A'X & B'Y \\
A'X & 0 & 0 \\
B'Y & 1 & 0
}
\]
and
\[
\kbordermatrix{
& w_1 C' Z & C' Z \\
w_1 C' Z & 0 & 0 \\
C' Z & 1 & 0
}
\kbordermatrix{
& w_1 C' Z & C' Z \\
w_1 C' Z & 0 & 0 \\
C' Z & 1 & 0
}
\]
are both zero.
\item For relation \eqref{it:NilHecke2}, we want to show that $\id =$ \eqref{it:DotForNilHecke1}$\chi + \chi$\eqref{it:DotForNilHecke2} $=$ \eqref{it:DotForNilHecke2}$\chi + \chi$\eqref{it:DotForNilHecke1}. Equivalently, we want to show that $\id =$ \eqref{it:IdUpDot}$\chi + \chi$\eqref{it:UpDotId} $=$ \eqref{it:UpDotId}$\chi + \chi$\eqref{it:IdUpDot}. Indeed, \eqref{it:IdUpDot}$\chi$ has matrix
\[
\kbordermatrix{
& A'X & B'Y \\
A'X & 0 & 1 \\
B'Y & 0 & e_1
}
\kbordermatrix{
& A'X & B'Y \\
A'X & 0 & 0 \\
B'Y & 1 & 0
}
\]
in the middle summand and
\[
\kbordermatrix{
& (w_1 C')Z & C' Z \\
(w_1 C')Z & 0 & 1 \\
C' Z & e_2 & e_1
}
\kbordermatrix{
& w_1 C' Z & C' Z \\
w_1 C' Z & 0 & 0 \\
C' Z & 1 & 0
}
\]
in the lower summand, while $\chi$\eqref{it:UpDotId} has matrix
\[
\kbordermatrix{
& A'X & B'Y \\
A'X & 0 & 0 \\
B'Y & 1 & 0
}
\kbordermatrix{
& A'X & B'Y \\
A'X & e_1 & 1 \\
B'Y & 0 & 0
}
\]
in the middle summand and
\[
\kbordermatrix{
& w_1 C' Z & C' Z \\
w_1 C' Z & 0 & 0 \\
C' Z & 1 & 0
}
\kbordermatrix{
& (w_1 C') Z & C' Z \\
(w_1 C') Z & e_1 & 1 \\
C' Z & e_2 & 0
}
\]
in the lower summand. Similarly, \eqref{it:UpDotId}$\chi$ has matrix
\[
\kbordermatrix{
& A'X & B'Y \\
A'X & e_1 & 1 \\
B'Y & 0 & 0
}
\kbordermatrix{
& A'X & B'Y \\
A'X & 0 & 0 \\
B'Y & 1 & 0
}
\]
in the middle summand and
\[
\kbordermatrix{
& (w_1 C') Z & C' Z \\
(w_1 C') Z & e_1 & 1 \\
C' Z & e_2 & 0
}
\kbordermatrix{
& w_1 C' Z & C' Z \\
w_1 C' Z & 0 & 0 \\
C' Z & 1 & 0
}
\]
in the lower summand, while $\chi$\eqref{it:IdUpDot} has matrix
\[
\kbordermatrix{
& A'X & B'Y \\
A'X & 0 & 0 \\
B'Y & 1 & 0
}
\kbordermatrix{
& A'X & B'Y \\
A'X & 0 & 1 \\
B'Y & 0 & e_1
}
\]
in the middle summand and
\[
\kbordermatrix{
& w_1 C' Z & C' Z \\
w_1 C' Z & 0 & 0 \\
C' Z & 1 & 0
}
\kbordermatrix{
& (w_1 C')Z & C' Z \\
(w_1 C')Z & 0 & 1 \\
C' Z & e_2 & e_1
}
\]
in the lower summand. In each summand, the two relevant products sum to the identity over $\F_2$, as required.
\end{itemize}
\end{proof}
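Several of the bullet points above reduce to identities between $2 \times 2$ matrices over $\F_2[e_1, e_2]$. The following sympy sketch (our own sanity check; the variable names and the helper `zero_mod2` are ours) verifies the nilHecke relations for the lower-summand matrices displayed in the proof, reducing coefficients mod $2$.

```python
import sympy as sp

e1, e2 = sp.symbols('e1 e2')

def zero_mod2(M):
    # True if every entry of M has all coefficients even,
    # i.e. the matrix vanishes over F_2[e1, e2].
    return all(sp.Poly(sp.expand(x), e1, e2, modulus=2).is_zero
               for x in M)

# Lower-summand matrices: the crossing chi and the two dotted crossings.
chi = sp.Matrix([[0, 0], [1, 0]])
dot_right = sp.Matrix([[0, 1], [e2, e1]])  # image of the right-dotted strand
dot_left = sp.Matrix([[e1, 1], [e2, 0]])   # image of the left-dotted strand

# NilHecke relation 1: chi composed with itself is zero.
assert zero_mod2(chi * chi)

# NilHecke relation 2: the identity equals either sum of dotted crossings.
I = sp.eye(2)
assert zero_mod2(dot_right * chi + chi * dot_left - I)
assert zero_mod2(dot_left * chi + chi * dot_right - I)
print("nilHecke relations hold over F_2")
```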
\section{The Soergel category and braid cobordisms}\label{sec:SoergelBraidCob}
\subsection{The Soergel category}\label{sec:Soergel}
Following Elias--Khovanov \cite{EliasKhovanov}, let $\SC_1^{\ungr}$ be the monoidal category denoted $\SC_1(\{1\})$ in Elias--Krasner \cite{EliasKrasner} (equivalently, $\SC_1(2)$ in Mackaay--Sto{\v{s}}i{\'c}--Vaz \cite{MSVSchur}) with its grading ignored. The morphisms in $\SC_1^{\ungr}$ are generated by pictures
\begin{center}
\noindent \includegraphics[scale=0.4]{soergel_gens.eps}
\end{center}
\noindent modulo certain relations that we will not need to consider explicitly. Composition comes from vertical stacking of pictures, and the monoidal structure comes from horizontal stacking.
\begin{remark}
As Elias--Krasner remark in \cite[Section 5.2]{EliasKrasner}, $\SC_1^{\ungr}$ and its graded versions make sense over $\Z$; we will work with their reductions over $\F_2$.
\end{remark}
Similarly, let $(\SC'_1)^{\ungr}$ be the monoidal category denoted $\SC'_1(\{1\})$ in \cite{EliasKrasner} (equivalently, $\SC'_1(2)$ in \cite{MSVSchur}) with its grading ignored. Compared with $\SC_1^{\ungr}$, the monoidal category $(\SC'_1)^{\ungr}$ has the same objects and additional generating morphisms $U_1$ and $U_2$ from the monoidal unit object to itself (represented graphically by boxes labeled $1$ or $2$). The additional relations beyond those of $\SC_1^{\ungr}$ are:
\begin{enumerate}
\item\label{it:SCPrimeRel1} \includegraphics[scale=0.4]{SCprimeRel1.eps}
\item\label{it:SCPrimeRel2} \includegraphics[scale=0.4]{SCprimeRel2.eps}
\item\label{it:SCPrimeRel3} \includegraphics[scale=0.4]{SCprimeRel3.eps}
\end{enumerate}
\begin{remark}
In characteristic two, the categorification relationship between Hecke algebras and these Soergel categories does not necessarily hold; also, the relationship between $\SC'_1$ and $\SC_1$ is more complicated than in characteristic zero. See \cite[Section 5.2]{EliasKrasner} for a more detailed discussion.
\end{remark}
We define a bigraded version $(\SC'_1)^{*,*}$ of $(\SC'_1)^{\ungr}$ by setting
\begin{itemize}
\item for \includegraphics[scale=0.4]{lp_down.eps}: $\deg^q = -1$, $\deg^h = 2$; \quad for \includegraphics[scale=0.4]{lp_up.eps}: $\deg^q = -1$, $\deg^h = 0$
\item for \includegraphics[scale=0.4]{soergelLambda.eps}: $\deg^q = 1$, $\deg^h = 0$; \quad for \includegraphics[scale=0.4]{soergelY.eps}: $\deg^q = 1$, $\deg^h = -2$
\item for \includegraphics[scale=0.4]{soergelDot1.eps} and \includegraphics[scale=0.4]{soergelDot2.eps}: $\deg^q = -2$, $\deg^h = 2$.
\end{itemize}
We get a bigraded version $\SC_1^{*,*}$ of $\SC_1^{\ungr}$. The single grading on $\SC_1$ from \cite{EliasKhovanov,EliasKrasner,MSVSchur} is the negative of our $q$-grading.
By \cite{MSVSchur} we have, in this case and ignoring gradings, a functor from $\SC_1^{\ungr}$ (viewed as a 2-category with one object) to the ungraded version $\Sc(2,2)^{\ungr}$ of the 2-category $\Sc(2,2)^{*,*}$ from Section~\ref{sec:CatQGDef} above. Note that while $\Q$ coefficients are assumed in \cite{MSVSchur}, they are not needed for the definition of this functor, which makes sense over $\Z$ and thus over $\F_2$.
\begin{proposition}
Mackaay--Sto{\v{s}}i{\'c}--Vaz's functor respects the bigradings we define here on $\SC_1^{*,*}$ and $\Sc(2,2)^{*,*}$.
\end{proposition}
\begin{proof}
One can check that the bidegree of each generating 2-morphism of $\SC_1^{*,*}$ agrees with the bidegree of its image in $\Sc(2,2)^{*,*}$.
\end{proof}
From Theorem~\ref{thm:SkewHowe2Action} we get the following corollary.
\begin{corollary}\label{cor:SoergelFunctor}
The above constructions give a functor of bicategories
\[
\SC_1^{*,*} \to 2\Rep(\U^-)^{*,*}.
\]
\end{corollary}
We can extend this functor to the larger domain $(\SC'_1)^{*,*}$. For the 2-morphism \includegraphics[scale=0.4]{soergelDot1.eps} of $(\SC'_1)^{*,*}$, we define an endomorphism of ${^{\vee}}\id_{\A_{1,1}} = \id_{\A_{1,1}}$ to be the dual of multiplication by $U_1$ (which is itself just multiplication by $U_1$), a 2-endomorphism of the identity 1-morphism on $(\A_{1,1},F,\tau)$. Similarly, to the 2-morphism \includegraphics[scale=0.4]{soergelDot2.eps}, we associate multiplication by $U_2$, which is also a 2-endomorphism of the identity 1-morphism. These associations preserve bidegree.
\begin{theorem}
The relations in $(\SC'_1)^{*,*}$ hold for the maps defined above.
\end{theorem}
\begin{proof}
We need to check the relations \eqref{it:SCPrimeRel1}--\eqref{it:SCPrimeRel3} in the definition of $(\SC'_1)^{*,*}$.
\begin{itemize}
\item For relation \eqref{it:SCPrimeRel1}, the 2-morphism \includegraphics[scale=0.4]{lp_updown.eps} of $(\SC'_1)^{*,*}$ gets sent to the 2-morphism \includegraphics[scale=0.4]{lp_updown_image.eps} of $\Sc(2,2)^{*,*}$. In turn, this 2-morphism gets sent to the dual of $\varepsilon' \eta$, and the matrix of $\varepsilon' \eta$ is
\[
\kbordermatrix{
& XA' & YA' & XB' & YB' \\
\Ib_{\ou} & U_1 & 0 & 1 \otimes \lambda & 0 \\
\Ib_{\uo} & 0 & 0 & 0 & 1
}
\kbordermatrix{
& \Ib_{\ou} & \Ib_{\uo} \\
XA' & 1 & 0 \\
YA' & 0 & 1 \otimes \lambda \\
XB' & 0 & 0 \\
YB' & 0 & U_2
}
= \kbordermatrix{
& \Ib_{\ou} & \Ib_{\uo} \\
\Ib_{\ou} & U_1 & 0 \\
\Ib_{\uo} & 0 & U_2
}
\]
in the middle weight space and
\[
\kbordermatrix{
& Z (w_1 C') & Z C' \\
\Ib_{\oo} & U_1 & 1
}
\kbordermatrix{
& \Ib_{\oo} \\
Z (w_1 C') & 1 \\
Z C' & U_2
}
=
\kbordermatrix{
& \Ib_{\oo} \\
\Ib_{\oo} & U_1 + U_2
}
\]
in the lower weight space.
\item For relation \eqref{it:SCPrimeRel2}, the sum of \includegraphics[scale=0.4]{soergelDot1.eps} and \includegraphics[scale=0.4]{soergelDot2.eps} gets sent to multiplication by $U_1 + U_2$ which is central in $\A_{1,1}$ (note that $U_2 \lambda = \lambda U_1 = 0$ in the middle summand of $\A_{1,1}$).
\item For relation \eqref{it:SCPrimeRel3}, the product of \includegraphics[scale=0.4]{soergelDot1.eps} and \includegraphics[scale=0.4]{soergelDot2.eps} gets sent to multiplication by $U_1 U_2$ which is central in $\A_{1,1}$.
\end{itemize}
\end{proof}
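The two matrix computations in the verification of relation \eqref{it:SCPrimeRel1} can be reproduced mechanically. The following sketch is an illustration, not part of the paper's formalism; commutative sympy symbols suffice here because every $\lambda$-entry in the products above is multiplied by a zero entry, and we write `lam` for both $\lambda$ and $1 \otimes \lambda$.

```python
# Illustrative check of the two matrix products used for relation (1).
# U1, U2, lam are commutative stand-ins for U_1, U_2, and lambda; this is
# safe here because every lam term below is multiplied by a zero entry.
import sympy as sp

U1, U2, lam = sp.symbols("U1 U2 lam")

# Middle weight space: rows Ib_ou, Ib_uo; columns XA', YA', XB', YB'.
A = sp.Matrix([[U1, 0, lam, 0],
               [0,  0, 0,   1]])
# Rows XA', YA', XB', YB'; columns Ib_ou, Ib_uo.
B = sp.Matrix([[1, 0],
               [0, lam],
               [0, 0],
               [0, U2]])
assert A * B == sp.Matrix([[U1, 0], [0, U2]])   # diag(U_1, U_2)

# Lower weight space: row Ib_oo against column Ib_oo.
C = sp.Matrix([[U1, 1]])
D = sp.Matrix([[1], [U2]])
assert C * D == sp.Matrix([[U1 + U2]])          # U_1 + U_2
```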
\subsection{Braid cobordisms}\label{sec:BraidCob}
Let $\BrCob(2)$ denote the monoidal category of two-strand braid cobordisms; objects are diagrams for two-strand braids, and morphisms are generated (over $\F_2$ for us) by movies modulo relations from movie moves. See Elias--Krasner \cite{EliasKrasner} for details. We define a bigraded version of $\BrCob(2)$, denoted $\BrCob(2)^{*,*}$, by assigning the following bidegrees to two-strand movie generators, indexed as in \cite[Section 3]{EliasKrasner}:
\begin{itemize}
\item Birth and death generator 1 (birth of a negative crossing): $\deg^q = -1$, $\deg^h = 0$
\item Birth and death generator 2 (death of a negative crossing): $\deg^q = 1$, $\deg^h = 0$
\item Birth and death generator 3 (birth of a positive crossing): $\deg^q = 1$, $\deg^h = 0$
\item Birth and death generator 4 (death of a positive crossing): $\deg^q = -1$, $\deg^h = 0$
\item Reidemeister 2 generator 1a: $\deg^q = 0$, $\deg^h = 0$
\item Reidemeister 2 generator 1b: $\deg^q = 0$, $\deg^h = 0$
\item Reidemeister 2 generator 2a: $\deg^q = 0$, $\deg^h = 0$
\item Reidemeister 2 generator 2b: $\deg^q = 0$, $\deg^h = 0$.
\end{itemize}
The constructions of \cite{EliasKrasner} give, in this case and ignoring gradings, a monoidal functor from $\BrCob(2)$ to the homotopy category of ungraded complexes in $\SC_1^{\ungr}$. Taking our bigradings into account (and reversing the role of positive and negative crossings in \cite{EliasKrasner} to match our conventions), we define a monoidal functor from $\BrCob(2)^{*,*}$ to the homotopy category of bigraded complexes in $\SC_1^{*,*}$ (where differentials preserve $\deg^q$ and increase $\deg^h$ by one). On objects, we send
\[
\includegraphics[scale=0.4]{pos_cr.eps} \qquad \qquad \mapsto \qquad \qquad
\xymatrix{
\Bigg| [-1] \ar@/^1.5pc/[rr]^{\includegraphics[scale=0.4]{lp_down.eps}} & \oplus & q \bullet
}
\]
and
\[
\includegraphics[scale=0.4]{neg_cr.eps} \qquad \qquad \mapsto \qquad \qquad
\xymatrix{
q^{-1} \bullet \ar@/^1.5pc/[rr]^{\includegraphics[scale=0.4]{lp_up.eps}} & \oplus & \Bigg| [-1]
}
\]
lifting the ungraded-complexes version of Elias--Krasner's functor after interchanging positive and negative crossings (we follow the notation of \cite{EliasKrasner} where $\Bigg|$ and $\bullet$ are the objects of $\SC_1$ corresponding to the nonnegative integers $1$ and $0$ respectively; we indicate shifts in $\deg^q$ by powers of $q$ and use $[1]$ for a downward shift by one in $\deg^h$).
On morphisms, the functor is defined by requiring that it lift the ungraded Elias--Krasner functor after interchanging positive and negative crossings. Specifically:
\begin{itemize}
\item For the birth of a negative crossing, we associate Elias--Krasner's chain map for the birth of a positive crossing, and vice versa.
\item For the death of a negative crossing, we associate Elias--Krasner's chain map for the death of a positive crossing, and vice versa.
\item For the Reidemeister 2 generator 1a, we associate Elias--Krasner's chain map for the Reidemeister 2 generator 2a, and vice versa.
\item For the Reidemeister 2 generator 1b, we associate Elias--Krasner's chain map for the Reidemeister 2 generator 2b, and vice versa.
\end{itemize}
\begin{proposition}
The above monoidal functor respects the bigradings on $\BrCob(2)^{*,*}$ and the homotopy category of bigraded complexes in $\SC_1^{*,*}$.
\end{proposition}
\begin{proof}
One can check that the bidegree of each generating morphism of $\BrCob(2)^{*,*}$ is the same as the bidegree of its image in the homotopy category of complexes in $\SC_1^{*,*}$.
\end{proof}
As discussed in Remark~\ref{rem:Strong2Mor}, we do not build enough data into 2-morphisms for the mapping cone on a 2-morphism to carry 1-morphism structure, so we do not have a functor from the homotopy category of bigraded complexes in $\SC_1^{*,*}$ (viewed as a bicategory with one object) into $2\Rep(\U^-)^{*,*}$. However, if we let $\ADBimod^{*,*}$ be the $\Z^2$-graded bicategory whose
\begin{itemize}
\item objects are dg categories,
\item 1-morphisms are finitely generated right bounded AD bimodules,
\item 2-morphisms are homotopy classes of closed AD bimodule morphisms (of arbitrary bidegree),
\end{itemize}
then the functor of bicategories $\SC_1^{*,*} \to 2\Rep(\U^-)^{*,*} \xrightarrow{\forget} \ADBimod^{*,*}$ extends naturally to a functor from the homotopy category of complexes in $\SC_1^{*,*}$ (1-morphisms are complexes of 1-morphisms in $\SC_1^{*,*}$ etc.) into $\ADBimod^{*,*}$.
\begin{corollary}\label{cor:BrCob}
The above constructions give a functor of bicategories
\[
\BrCob(2)^{*,*} \to \ADBimod^{*,*}.
\]
\end{corollary}
\begin{remark}\label{rem:UpgradingBrCobFunctor}
The above functor sends a positive crossing to the mapping cone of $\eta$ and a negative crossing to the mapping cone of $\varepsilon'$; the map $\eta$ was given the structure of a strong 2-morphism in Proposition~\ref{prop:FirstEpsIs2MorSquare2} (see Remark~\ref{rem:EtaStrong}) while $\varepsilon'$ is a strong 2-morphism with $h=0$. Thus, by Remark~\ref{rem:Strong2Mor}, we can in fact define 1-morphism structure on the AD bimodules (and their dual DA bimodules) we associate to positive and negative crossings. Using this 1-morphism structure, one could check that the AD bimodule maps we assign to the generating braid cobordisms are 2-morphisms, giving us a functor
\[
\BrCob(2)^{*,*} \to 2\Rep(\U^-)^{*,*}.
\]
\end{remark}
\section{Bimodules for positive and negative crossings}\label{sec:PosNegCrossings}
In this section, we will give a matrix description of (the duals of) the positive and negative crossing bimodules arising from Sections \ref{sec:SkewHowe2Action} and \ref{sec:SoergelBraidCob}, and relate them to bimodules coming from Heegaard diagrams.
\subsection{Mapping cone bimodule for positive crossing}\label{sec:MappingConePos}
We write $P' = P'_{\upp} \oplus P'_{\midd} \oplus P'_{\low}$ for the 1-morphism of 2-representations of $\U^-$ associated to a positive crossing by Sections \ref{sec:SkewHowe2Action} and \ref{sec:SoergelBraidCob}. Below we will discuss a simplified version $P''$ of $P'$.
\subsubsection{Upper weight space, positive crossing}
We have $P'_{\upp} = P''_{\upp} = q^{-1} \F_2$ (in homological degree zero) as a bimodule over the upper summand $(\A_{1,1})_{2\varepsilon_1} \cong \F_2$ of $\A_{1,1}$.
\subsubsection{Middle weight space, positive crossing}
The DA bimodule
\[
P'_{\midd} \coloneqq \xymatrix{
q^{-1} \mathbb{I}_{(\A_{1,1})_{\varepsilon_1 + \varepsilon_2}} \ar@/^1.5pc/[rr]^{\eta} & \oplus & X_{\midd}[1]
}
\]
over the middle summand $(\A_{1,1})_{\varepsilon_1 + \varepsilon_2}$ of $\A_{1,1}$, where $\eta$ is defined in Section~\ref{sec:BimodMapsFor2Mors}, has primary matrix
\[
\kbordermatrix{
& \ou & \uo \\
\ou & XA' \quad \Ib_{\ou} & XB'\\
\uo & YA' & YB' \quad \Ib_{\uo}
}
\]
and secondary matrix
\[
\resizebox{\textwidth}{!}{
\kbordermatrix{
& XA' & YA' & XB' & YB' & \Ib_{\ou} & \Ib_{\uo} \\
XA' & U_1^{k+1} \otimes U_1^{k+1} & \lambda & \begin{matrix} U_1^k \otimes (\lambda,U_1^{k+1}) \\+ U_1^k \otimes (U_2^{k+1},\lambda)\end{matrix} & 0 & 1 & 0 \\
YA' & 0 & U_2^{k+1} \otimes U_1^{k+1} & 0 & \begin{matrix} U_2^k \otimes (U_2^{k+1},\lambda) \\+ U_2^k \otimes (\lambda,U_1^{k+1})\end{matrix} & 0 & 1 \otimes \lambda \\
XB' & 0 & 0 & U_1^{k+1} \otimes U_2^{k+1} & \lambda & 0 & 0 \\
YB' & 0 & 0 & 0 & U_2^{k+1} \otimes U_2^{k+1} & 0 & U_2 \\
\Ib_{\ou} & 0 & 0 & 0 & 0 & U_1^{k+1} \otimes U_1^{k+1} & \lambda \otimes \lambda \\
\Ib_{\uo} & 0 & 0 & 0 & 0 & 0 & U_2^{k+1} \otimes U_2^{k+1}
}.
}
\]
Simplifying $P'_{\midd}$ using Procedure~\ref{sec:PrelimSimplifying}, we get a DA bimodule $P''_{\midd}$ with primary matrix
\[
\kbordermatrix{
& \ou & \uo \\
\ou & & XB'\\
\uo & YA' & YB' \quad \Ib_{\uo}
}
\]
and secondary matrix
\[
\kbordermatrix{
& YA' & XB' & YB' & \Ib_{\uo} \\
YA' & U_2^{k+1} \otimes U_1^{k+1} & 0 & \begin{matrix} U_2^k \otimes (U_2^{k+1},\lambda) \\+ U_2^k \otimes (\lambda,U_1^{k+1})\end{matrix} & 1 \otimes \lambda \\
XB' & 0 & U_1^{k+1} \otimes U_2^{k+1} & \lambda & 0 \\
YB' & 0 & 0 & U_2^{k+1} \otimes U_2^{k+1} & U_2 \\
\Ib_{\uo} & 0 & 0 & 0 & U_2^{k+1} \otimes U_2^{k+1}
}.
\]
The generators have degrees
\begin{itemize}
\item $\deg^q(YA') = 0$, $\deg^h(YA') = 0$,
\item $\deg^q(XB') = 0$, $\deg^h(XB') = 0$,
\item $\deg^q(YB') = 1$, $\deg^h(YB') = -1$,
\item $\deg^q(\Ib_{\uo}) = -1$, $\deg^h(\Ib_{\uo}) = 0$.
\end{itemize}
\subsubsection{Lower weight space, positive crossing}
The dg bimodule
\[
P'_{\low} \coloneqq \xymatrix{
q^{-1} \mathbb{I}_{(\A_{1,1})_{2\varepsilon_2}} \ar@/^1.5pc/[rr]^{\eta} & \oplus & X_{\low}[1]
}
\]
over the lower summand $(\A_{1,1})_{2\varepsilon_2}$ of $\A_{1,1}$, where $\eta$ is defined in Section~\ref{sec:BimodMapsFor2Mors}, has primary matrix
\[
\kbordermatrix{
& \oo \\
\oo & Z(w_1 C') \quad ZC' \quad \Ib_{\oo}
}.
\]
The differential has matrix
\[
\kbordermatrix{
& Z(w_1 C') & ZC' & \Ib_{\oo} \\
Z(w_1 C') & 0 & 0 & 1 \\
ZC' & 0 & 0 & U_2 \\
\Ib_{\oo} & 0 & 0 & 0
},
\]
the right action of $U_1$ has matrix
\[
\kbordermatrix{
& Z (w_1 C') & ZC' & \Ib_{\oo} \\
Z(w_1 C') & U_1 + U_2 & 1 & 0 \\
ZC' & U_1 U_2 & 0 & 0 \\
\Ib_{\oo} & 0 & 0 & U_1
},
\]
and the right action of $U_2$ has matrix
\[
\kbordermatrix{
& Z (w_1 C') & ZC' & \Ib_{\oo} \\
Z(w_1 C') & 0 & 1 & 0 \\
ZC' & U_1 U_2 & U_1 + U_2 & 0 \\
\Ib_{\oo} & 0 & 0 & U_2
}.
\]
\begin{proposition}\label{prop:SimplifyingPosCrossingLower}
The dg bimodule $P'_{\low}$ is quasi-isomorphic (or homotopy equivalent as DA bimodules) to the ordinary bimodule $P''_{\low}$ with primary matrix
\[
\kbordermatrix{
& \oo \\
\oo & ZC'
}
\]
and secondary matrix
\[
\kbordermatrix{
& ZC' \\
ZC' & U_1^l U_2^k \otimes U_1^k U_2^l
}.
\]
\end{proposition}
\begin{proof}
One can check that the map with matrix
\[
\kbordermatrix{
& Z(w_1 C') & ZC' & \Ib_{\oo} \\
Z C' & U_2 & 1 & 0
}
\]
is a homomorphism of dg bimodules. This map sends the generator $ZC'$ of the homology of the mapping cone (free on this one generator as a left module over $\F_2[U_1,U_2]$) to the single generator of the bimodule in the statement of the proposition (also free as a left module over $\F_2[U_1,U_2]$ on its one generator). Thus, the map is a quasi-isomorphism.
\end{proof}
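The intertwining properties of this quasi-isomorphism can also be checked mechanically. The sympy sketch below is illustrative rather than part of the proof: it assumes that composing the map's matrix with the structure matrices above is ordinary matrix multiplication with coefficients reduced mod $2$, and that, per the secondary matrix of $P''_{\low}$, the right actions of $U_1$ and $U_2$ on the generator $ZC'$ are left multiplication by $U_2$ and $U_1$ respectively.

```python
# Illustrative mod-2 check that the matrix [U2, 1, 0] intertwines the
# differential and the right U_1-, U_2-actions on P'_low with those on P''_low.
import sympy as sp

U1, U2 = sp.symbols("U1 U2")

def zero_mod2(M):
    # True if every entry vanishes after reducing coefficients mod 2
    return all(sp.Poly(sp.expand(e), U1, U2, modulus=2).is_zero for e in M)

# Columns (and rows below) are ordered Z(w_1 C'), Z C', Ib_oo.
F = sp.Matrix([[U2, 1, 0]])

d   = sp.Matrix([[0, 0, 1], [0, 0, U2], [0, 0, 0]])           # differential
mU1 = sp.Matrix([[U1 + U2, 1, 0], [U1*U2, 0, 0], [0, 0, U1]]) # right U_1 action
mU2 = sp.Matrix([[0, 1, 0], [U1*U2, U1 + U2, 0], [0, 0, U2]]) # right U_2 action

assert zero_mod2(F * d)             # F kills the differential (target has none)
assert zero_mod2(F * mU1 - U2 * F)  # right U_1 acts on Z C' as U_2
assert zero_mod2(F * mU2 - U1 * F)  # right U_2 acts on Z C' as U_1
```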
The generator $ZC'$ of $P''_{\low}$ has degrees $\deg^q(Z C') = 1$ and $\deg^h(Z C') = -1$.
\subsection{Decategorification of the bimodule for a positive crossing}
Let $P'' = P''_{\upp} \oplus P''_{\midd} \oplus P''_{\low}$, a DA bimodule over $(\A_{1,1},\A_{1,1})$.
\begin{proposition}
The DA bimodule $P''$ categorifies the map from $K_0(\A_{1,1})$ to $K_0(\A_{1,1})$ with matrix
\[
\kbordermatrix{
& {[P_{\uu}]} & {[P_{\ou}]} & {[P_{\uo}]} & {[P_{\oo}]} \\
{[P_{\uu}]} & q^{-1} & 0 & 0 & 0 \\
{[P_{\ou}]} & 0 & 0 & 1 & 0 \\
{[P_{\uo}]} & 0 & 1 & q^{-1} - q & 0\\
{[P_{\oo}]} & 0 & 0 & 0 & -q
}.
\]
Equivalently, the AD bimodule ${^{\vee}}P''$ categorifies the map from $G_0(\A_{1,1})$ to $G_0(\A_{1,1})$ with matrix
\[
\kbordermatrix{
& {[S_{\uu}]} & {[S_{\ou}]} & {[S_{\uo}]} & {[S_{\oo}]} \\
{[S_{\uu}]} & q & 0 & 0 & 0 \\
{[S_{\ou}]} & 0 & 0 & 1 & 0 \\
{[S_{\uo}]} & 0 & 1 & q - q^{-1} & 0\\
{[S_{\oo}]} & 0 & 0 & 0 & -q^{-1}
}.
\]
\end{proposition}
This latter map can be identified with the braiding acting on $V^{\otimes 2}$ as in Appendix~\ref{sec:SingularNonsingular}.
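Consistent with this identification, the matrix above satisfies the quadratic skein-type relation $(X - q^{-1})(X + q) = 0$, so its eigenvalues are $q^{-1}$ and $-q$, as one expects for the braiding in these conventions. A quick sympy verification (illustrative only; the name `Ppp` for the matrix of $[P'']$ is ours):

```python
# Illustrative check: the K_0 matrix of P'' satisfies (X - 1/q)(X + q) = 0.
import sympy as sp

q = sp.symbols("q")
Ppp = sp.Matrix([
    [1/q, 0, 0,       0],
    [0,   0, 1,       0],
    [0,   1, 1/q - q, 0],
    [0,   0, 0,      -q],
])
lhs = ((Ppp - sp.eye(4)/q) * (Ppp + q*sp.eye(4))).expand()
assert lhs == sp.zeros(4, 4)   # eigenvalues are 1/q and -q
```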
\subsection{Mapping cone bimodule for negative crossing}\label{sec:MappingConeNeg}
Similarly to the previous section, we write $N' = N'_{\upp} \oplus N'_{\midd} \oplus N'_{\low}$ for the 1-morphism of 2-representations of $\U^-$ associated to a negative crossing by Sections \ref{sec:SkewHowe2Action} and \ref{sec:SoergelBraidCob}, and we will discuss a simplified version $N''$ of $N'$.
\subsubsection{Upper weight space, negative crossing}
We have $N'_{\upp} = N''_{\upp} = q^{1} \F_2$ (in homological degree zero) as a bimodule over the upper summand $(\A_{1,1})_{2\varepsilon_1} \cong \F_2$ of $\A_{1,1}$.
\subsubsection{Middle weight space, negative crossing}
The DA bimodule
\[
N'_{\midd} \coloneqq \xymatrix{
X_{\midd}[1] \ar@/^1.5pc/[rr]^{\varepsilon'} & \oplus & q \mathbb{I}_{(\A_{1,1})_{\varepsilon_1 + \varepsilon_2}}
}
\]
over the middle summand $(\A_{1,1})_{\varepsilon_1 + \varepsilon_2}$ of $\A_{1,1}$, where $\varepsilon'$ is defined in Section~\ref{sec:BimodMapsFor2Mors}, has primary matrix
\[
\kbordermatrix{
& \ou & \uo \\
\ou & \Ib_{\ou} \quad XA' & XB' \\
\uo & YA' & \Ib_{\uo} \quad YB'
}
\]
and secondary matrix
\[
\resizebox{\textwidth}{!}{
\kbordermatrix{
& \Ib_{\ou} & \Ib_{\uo} & XA' & YA' & XB' & YB' \\
\Ib_{\ou} & U_1^{k+1} \otimes U_1^{k+1} & \lambda \otimes \lambda & U_1 & 0 & 1 \otimes \lambda & 0 \\
\Ib_{\uo} & 0 & U_2^{k+1} \otimes U_2^{k+1} & 0 & 0 & 0 & 1 \\
XA' & 0 & 0 & U_1^{k+1} \otimes U_1^{k+1} & \lambda & \begin{matrix} U_1^k \otimes (\lambda,U_1^{k+1}) \\+ U_1^k \otimes (U_2^{k+1},\lambda)\end{matrix} & 0 \\
YA' & 0 & 0 & 0 & U_2^{k+1} \otimes U_1^{k+1} & 0 & \begin{matrix} U_2^k \otimes (U_2^{k+1},\lambda) \\+ U_2^k \otimes (\lambda, U_1^{k+1})\end{matrix} \\
XB' & 0 & 0 & 0 & 0 & U_1^{k+1} \otimes U_2^{k+1} & \lambda \\
YB' & 0 & 0 & 0 & 0 & 0 & U_2^{k+1} \otimes U_2^{k+1}
}.
}
\]
Simplifying $N'_{\midd}$, we get a DA bimodule $N''_{\midd}$ with primary matrix
\[
\kbordermatrix{
& \ou & \uo \\
\ou & \Ib_{\ou} \quad XA' & XB' \\
\uo & YA' &
}
\]
and secondary matrix
\[
\kbordermatrix{
& \Ib_{\ou} & XA' & YA' & XB' \\
\Ib_{\ou} & U_1^{k+1} \otimes U_1^{k+1} & U_1 & 0 & 1 \otimes \lambda \\
XA' & 0 & U_1^{k+1} \otimes U_1^{k+1} & \lambda & \begin{matrix} U_1^k \otimes (\lambda,U_1^{k+1}) \\+ U_1^k \otimes (U_2^{k+1},\lambda)\end{matrix} \\
YA' & 0 & 0 & U_2^{k+1} \otimes U_1^{k+1} & 0 \\
XB' & 0 & 0 & 0 & U_1^{k+1} \otimes U_2^{k+1} \\
}.
\]
The generators have degrees
\begin{itemize}
\item $\deg^q(\Ib_{\ou}) = 1$, $\deg^h(\Ib_{\ou}) = 0$.
\item $\deg^q(XA') = -1$, $\deg^h(XA') = 1$,
\item $\deg^q(YA') = 0$, $\deg^h(YA') = 0$,
\item $\deg^q(XB') = 0$, $\deg^h(XB') = 0$.
\end{itemize}
\subsubsection{Lower weight space, negative crossing}
The DA bimodule
\[
N'_{\low} \coloneqq \xymatrix{
X_{\low}[1] \ar@/^1.5pc/[rr]^{\varepsilon'} & \oplus & q\mathbb{I}_{(\A_{1,1})_{2\varepsilon_2}}
}
\]
over the lower summand $(\A_{1,1})_{2\varepsilon_2}$ of $\A_{1,1}$, where $\varepsilon'$ is defined in Section~\ref{sec:BimodMapsFor2Mors}, has primary matrix
\[
\kbordermatrix{
& \oo \\
\oo & \Ib_{\oo} \quad Z (w_1 C') \quad ZC' \\
}.
\]
The differential has matrix
\[
\kbordermatrix{
& \Ib_{\oo} & Z (w_1 C') & ZC' \\
\Ib_{\oo} & 0 & U_1 & 1 \\
Z (w_1 C') & 0 & 0 & 0 \\
ZC' & 0 & 0 & 0
},
\]
the right action of $U_1$ has matrix
\[
\kbordermatrix{
& \Ib_{\oo} & Z (w_1 C') & ZC' \\
\Ib_{\oo} & U_1 & 0 & 0 \\
Z (w_1 C') & 0 & U_1 + U_2 & 1 \\
ZC' & 0 & U_1 U_2 & 0
},
\]
and the right action of $U_2$ has matrix
\[
\kbordermatrix{
& \Ib_{\oo} & Z (w_1 C') & ZC' \\
\Ib_{\oo} & U_2 & 0 & 0 \\
Z (w_1 C') & 0 & 0 & 1 \\
ZC' & 0 & U_1 U_2 & U_1 + U_2
}.
\]
\begin{proposition}
The dg bimodule $N'_{\low}$ is quasi-isomorphic (or homotopy equivalent as DA bimodules) to the ordinary bimodule $N''_{\low}$ with primary matrix
\[
\kbordermatrix{
& \oo \\
\oo & Z(w_1 C')
}
\]
and secondary matrix
\[
\kbordermatrix{
& Z(w_1 C') \\
Z(w_1 C') & U_1^l U_2^k \otimes U_1^k U_2^l
}.
\]
\end{proposition}
\begin{proof}
As in Proposition~\ref{prop:SimplifyingPosCrossingLower}, one can check that the map with matrix
\[
\kbordermatrix{
& Z(w_1 C') \\
\Ib_{\oo} & 0 \\
Z(w_1 C') & 1 \\
ZC' & U_1
}
\]
is a quasi-isomorphism of dg bimodules.
\end{proof}
The generator of $N''_{\low}$ has degrees $\deg^q(Z(w_1 C')) = -1$ and $\deg^h(Z (w_1 C')) = 1$.
\subsection{Decategorification of the bimodule for a negative crossing}
Let $N'' = N''_{\upp} \oplus N''_{\midd} \oplus N''_{\low}$, a DA bimodule over $(\A_{1,1},\A_{1,1})$.
\begin{proposition}
The DA bimodule $N''$ categorifies the map from $K_0(\A_{1,1})$ to $K_0(\A_{1,1})$ with matrix
\[
\kbordermatrix{
& {[P_{\uu}]} & {[P_{\ou}]} & {[P_{\uo}]} & {[P_{\oo}]} \\
{[P_{\uu}]} & q & 0 & 0 & 0 \\
{[P_{\ou}]} & 0 & q-q^{-1} & 1 & 0 \\
{[P_{\uo}]} & 0 & 1 & 0 & 0 \\
{[P_{\oo}]} & 0 & 0 & 0 & -q^{-1}
}.
\]
Equivalently, the AD bimodule ${^{\vee}}N''$ categorifies the map from $G_0(\A_{1,1})$ to $G_0(\A_{1,1})$ with matrix
\[
\kbordermatrix{
& {[S_{\uu}]} & {[S_{\ou}]} & {[S_{\uo}]} & {[S_{\oo}]} \\
{[S_{\uu}]} & q^{-1} & 0 & 0 & 0 \\
{[S_{\ou}]} & 0 & q^{-1}-q & 1 & 0 \\
{[S_{\uo}]} & 0 & 1 & 0 & 0 \\
{[S_{\oo}]} & 0 & 0 & 0 & -q
}.
\]
\end{proposition}
This latter map can be identified with the inverse of the braiding acting on $V^{\otimes 2}$ as in Appendix~\ref{sec:SingularNonsingular}.
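Since $P''$ and $N''$ should categorify the braiding and its inverse, the two $K_0$ matrices above ought to be mutually inverse, and indeed they are. An illustrative sympy check (the names `Ppp`, `Npp` are ours):

```python
# Illustrative check that the K_0 matrices of P'' and N'' are mutually inverse.
import sympy as sp

q = sp.symbols("q")
Ppp = sp.Matrix([[1/q, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 1, 1/q - q, 0],
                 [0, 0, 0, -q]])
Npp = sp.Matrix([[q, 0, 0, 0],
                 [0, q - 1/q, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, -1/q]])
assert (Ppp * Npp).expand() == sp.eye(4)
assert (Npp * Ppp).expand() == sp.eye(4)
```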
\subsection{Bimodules from Heegaard diagrams}\label{sec:PositiveCrossingHD}
\begin{figure}
\includegraphics[scale=0.4]{nonsingular_crossing_V3.eps}
\caption{Heegaard diagrams for positive and negative crossings.}
\label{fig:NonsingularCrossing}
\end{figure}
We now consider slightly different DA bimodules, motivated directly by holomorphic disk counts in the Heegaard diagrams of Figure~\ref{fig:NonsingularCrossing}.
\begin{definition}
Let $P$ be the DA bimodule over $(\A_{1,1},\A_{1,1})$ with primary matrix
\[
\kbordermatrix{
& \uu & \ou & \uo & \oo \\
\uu & I & & & \\
\ou & & & K & \\
\uo & & J & L \, M \\
\oo & & & & N
}
\]
(treating all three of its summands together). We set
\begin{itemize}
\item $\deg^q(I) = -1$, $\deg^h(I) = 0$,
\item $\deg^q(J) = 0$, $\deg^h(J) = 0$,
\item $\deg^q(K) = 0$, $\deg^h(K) = 0$,
\item $\deg^q(L) = 1$, $\deg^h(L) = -1$,
\item $\deg^q(M) = -1$, $\deg^h(M) = 0$,
\item $\deg^q(N) = 1$, $\deg^h(N) = -1$.
\end{itemize}
The secondary matrix of $P$ is given as follows.
\begin{itemize}
\item The secondary matrix in the top summand (with generator $I$) is zero (which, by convention, corresponds to the identity bimodule over $\F_2$).
\item The secondary matrix in the middle summand is
\[
\kbordermatrix{
& J & K & L & M \\
J & U_2^{k+1} \otimes U_1^{k+1} & 0 & U_2^k \otimes (\lambda, U_1^{k+1}) & 1 \otimes \lambda \\
K & 0 & U_1^{k+1} \otimes U_2^{k+1} & \lambda & 0 \\
L & 0 & 0 & 0 & U_2 \\
M & 0 & 0 & 0 & 0
}.
\]
\item The secondary matrix in the bottom summand is
\[
\kbordermatrix{
& N \\
N & U_1^l U_2^k \otimes U_1^k U_2^l
}.
\]
\end{itemize}
\end{definition}
\begin{figure}
\includegraphics[scale=0.6]{positive_crossing_gens_V3.eps}
\caption{Generators of the positive crossing bimodule $P$ in terms of intersection points in the Heegaard diagram of Figure~\ref{fig:NonsingularCrossing}.}
\label{fig:PositiveCrossingGens}
\end{figure}
\begin{figure}
\includegraphics[scale=0.6]{positive_crossing_domains_V2.eps}
\caption{Domains giving rise to secondary matrix entries for the positive crossing bimodule $P$ (middle summand).}
\label{fig:PositiveCrossingDomains}
\end{figure}
Figure~\ref{fig:PositiveCrossingGens} shows the generators of $P$ in terms of sets of intersection points in the Heegaard diagram of Figure~\ref{fig:NonsingularCrossing}. Figure~\ref{fig:PositiveCrossingDomains} shows the domains in this Heegaard diagram that give rise to nonzero entries in the middle summand of the secondary matrix of $P$ (the other two summands are simpler and are not shown).
\begin{proposition}
The DA bimodule $P$ for a positive crossing is isomorphic to the simplified mapping cone bimodule $P''$ from Section~\ref{sec:MappingConePos}.
\end{proposition}
\begin{proof}
The top and bottom summands of the two bimodules in question are identical under the identification of $I$ with $\Ib_{\uu}$ and $N$ with $ZC'$ (note that in the mapping cone bimodule, the degree of $\Ib_{\uu}$ is $q^{-1} h^0$ and the degree of $Z C'$ is $q^{1} h^{-1}$). In the middle weight space, an isomorphism from $P$ to the mapping cone bimodule is given by the matrix
\[
\kbordermatrix{
& J & K & L & M \\
YA' & 1 & 0 & 0 & 0 \\
XB' & 0 & 1 & 0 & 0 \\
YB' & 0 & 0 & 1 & 0 \\
\Ib_{\uo} & 0 & 0 & U_2^k \otimes U_2^{k+1} & 1
}.
\]
One can check that this matrix represents a closed grading-preserving morphism of DA bimodules that is invertible with inverse
\[
\kbordermatrix{
& YA' & XB' & YB' & \Ib_{\uo} \\
J & 1 & 0 & 0 & 0 \\
K & 0 & 1 & 0 & 0 \\
L & 0 & 0 & 1 & 0 \\
M & 0 & 0 & U_2^k \otimes U_2^{k+1} & 1
};
\]
recall that the degrees of $YA'$, $XB'$, $YB'$, and $\Ib_{\uo}$ are $q^0 h^0$, $q^0 h^0$, $q^{1} h^{-1}$, and $q^{-1} h^0$ respectively.
\end{proof}
\begin{definition}
Let $N$ be the DA bimodule over $(\A_{1,1},\A_{1,1})$ with primary matrix
\[
\kbordermatrix{
& \uu & \ou & \uo & \oo \\
\uu & I' & & & \\
\ou & & J' \, K'& M' & \\
\uo & & L' & &\\
\oo & & & & N'
}
\]
(treating all three of its summands together). We set
\begin{itemize}
\item $\deg^q(I') = 1$, $\deg^h(I') = 0$,
\item $\deg^q(J') = 1$, $\deg^h(J') = 0$,
\item $\deg^q(K') = -1$, $\deg^h(K') = 1$,
\item $\deg^q(L') = 0$, $\deg^h(L') = 0$,
\item $\deg^q(M') = 0$, $\deg^h(M') = 0$,
\item $\deg^q(N') = -1$, $\deg^h(N') = 1$.
\end{itemize}
The secondary matrix of $N$ is given as follows.
\begin{itemize}
\item The secondary matrix in the top summand (with generator $I'$) is zero.
\item The secondary matrix in the middle summand is
\[
\kbordermatrix{
& J' & K' & L' & M' \\
J' & 0 & U_1 & 0 & 1 \otimes \lambda \\
K' & 0 & 0 & \lambda & U_1^k \otimes (U_2^{k+1},\lambda) \\
L' & 0 & 0 & U_2^{k+1} \otimes U_1^{k+1} & 0 \\
M' & 0 & 0 & 0 & U_1^{k+1} \otimes U_2^{k+1}
}.
\]
\item The secondary matrix in the bottom summand is
\[
\kbordermatrix{
& N' \\
N' & U_1^l U_2^k \otimes U_1^k U_2^l
}.
\]
\end{itemize}
\end{definition}
\begin{figure}
\includegraphics[scale=0.6]{negative_crossing_gens_V2.eps}
\caption{Generators of the negative crossing bimodule $N$ in terms of intersection points in the right Heegaard diagram of Figure~\ref{fig:NonsingularCrossing}.}
\label{fig:NegativeCrossingGens}
\end{figure}
\begin{figure}
\includegraphics[scale=0.6]{negative_crossing_domains_V2.eps}
\caption{Domains giving rise to secondary matrix entries for the negative crossing bimodule $N$ (middle summand).}
\label{fig:NegativeCrossingDomains}
\end{figure}
Figure~\ref{fig:NegativeCrossingGens} shows the generators of $N$ in terms of sets of intersection points in the right Heegaard diagram of Figure~\ref{fig:NonsingularCrossing}. Figure~\ref{fig:NegativeCrossingDomains} shows the domains in this Heegaard diagram that give rise to nonzero entries in the middle summand of the secondary matrix of $N$ (the other two summands are simpler and are not shown).
\begin{proposition}
The DA bimodule $N$ for a negative crossing is isomorphic to the simplified mapping cone bimodule $N''$ from Section~\ref{sec:MappingConeNeg}.
\end{proposition}
\begin{proof}
The top and bottom summands of the two bimodules in question are identical under the identification of $I'$ with $\Ib_{\uu}$ and $N'$ with $Z (w_1 C')$ (note that in the mapping cone bimodule, the degree of $\Ib_{\uu}$ is $q^{1} h^0$ and the degree of $Z (w_1 C')$ is $q^{-1} h^{1}$). In the middle weight space, an isomorphism from $N$ to the mapping cone bimodule is given by the matrix
\[
\kbordermatrix{
& J' & K' & L' & M' \\
\Ib_{\ou} & 1 & 0 & 0 & 0 \\
XA' & U_1^k \otimes U_1^{k+1} & 1 & 0 & 0 \\
YA' & 0 & 0 & 1 & 0 \\
XB' & 0 & 0 & 0 & 1
}.
\]
One can check that this matrix represents a closed grading-preserving morphism of DA bimodules that is invertible with inverse
\[
\kbordermatrix{
& \Ib_{\ou} & XA' & YA' & XB' \\
J' & 1 & 0 & 0 & 0 \\
K' & U_1^k \otimes U_1^{k+1} & 1 & 0 & 0 \\
L' & 0 & 0 & 1 & 0 \\
M' & 0 & 0 & 0 & 1
};
\]
recall that the degrees of $\Ib_{\ou}$, $XA'$, $YA'$, and $XB'$ are $q^{1}h^0$, $q^{-1} h^{1}$, $q^0 h^0$, and $q^0 h^0$ respectively.
\end{proof}
\section{Change-of-basis bimodules}\label{sec:ChangeOfBasis}
\subsection{Ozsv{\'a}th--Szab{\'o} canonical basis algebras}
Here we define the Ozsv{\'a}th--Szab{\'o} algebra $\A_{1,1}^{\can}$.
\begin{figure}
\includegraphics[scale=0.4]{A11CanQuiver.eps}
\caption{The dg category $\A^{\can}_{1,1}$.}
\label{fig:A11CanQuiver}
\end{figure}
\begin{definition}
The dg category $\A^{\can}_{1,1}$ is shown in Figure~\ref{fig:A11CanQuiver}; arrows point from source to target. The differential on $\A^{\can}_{1,1}$ is zero; the gradings are given by
\begin{itemize}
\item $\deg^q(R_1) = -1$, $\deg^h(R_1) = 1$,
\item $\deg^q(L_1) = -1$, $\deg^h(L_1) = 1$,
\item $\deg^q(U_i) = -2$, $\deg^h(U_i) = 2$ for $i \in \{1,2\}$.
\end{itemize}
\end{definition}
\subsection{Bimodules}
We would like to relate the positive and negative crossing bimodules $P$ and $N$ of Section~\ref{sec:PositiveCrossingHD} to the bimodules for positive and negative crossings defined by Ozsv{\'a}th--Szab{\'o} in \cite{OSzNew}. To do so, we will first define categorified change-of-basis bimodules between the algebras $\A_{1,1}$ and $\A_{1,1}^{\can}$.
\begin{figure}
\includegraphics[scale=0.4]{standard_to_canonical_BL.eps}
\caption{Heegaard diagram for changing from the canonical basis to the standard basis.}
\label{fig:StandardToCanonical}
\end{figure}
\begin{figure}
\includegraphics[scale=0.4]{canonical_to_standard_BL.eps}
\caption{Heegaard diagram for changing from the standard basis to the canonical basis.}
\label{fig:CanonicalToStandard}
\end{figure}
\begin{definition}\label{def:FirstCOBBimodule}
The change-of-basis DA bimodule over $(\A_{1,1}^{\can},\A_{1,1})$ has primary matrix
\[
\kbordermatrix{
& \uu & \ou & \uo & \oo \\
\varnothing & \kappa_1 & & & \\
A & & \kappa_2 & \kappa_3 & \\
B & & & \kappa_4 & \\
AB & & & & \kappa_5 \\
};
\]
we set
\begin{itemize}
\item $\deg^q(\kappa_1) = 0$, $\deg^h(\kappa_1) = 0$,
\item $\deg^q(\kappa_2) = 0$, $\deg^h(\kappa_2) = 0$,
\item $\deg^q(\kappa_3) = -1$, $\deg^h(\kappa_3) = 0$,
\item $\deg^q(\kappa_4) = 0$, $\deg^h(\kappa_4) = 0$,
\item $\deg^q(\kappa_5) = 0$, $\deg^h(\kappa_5) = 0$.
\end{itemize}
This change-of-basis bimodule has secondary matrix
\[
\kbordermatrix{
& \kappa_2 & \kappa_3 & \kappa_4 \\
\kappa_2 & U_1^{k+1} \otimes U_1^{k+1} & 1 \otimes \lambda & L_1 U_1^k \otimes (\lambda, U_1^{k+1}) \\
\kappa_3 & 0 & 0 & 0 \\
\kappa_4 & 0 & R_1 & U_2^{k+1} \otimes U_2^{k+1}
}
\]
in the middle summand and secondary matrix
\[
\kbordermatrix{
& \kappa_5 \\
\kappa_5 & U_1^k U_2^l \otimes U_1^k U_2^l
}
\]
in the bottom summand.
\end{definition}
\begin{figure}
\includegraphics[scale=0.6]{kappa_gens_BL_V2.eps}
\caption{Generators of one change-of-basis bimodule in terms of intersection points.}
\label{fig:KappaGens}
\end{figure}
\begin{figure}
\includegraphics[scale=0.6]{kappa_domains_BL_V2.eps}
\caption{Domains for one change-of-basis bimodule (middle summand).}
\label{fig:KappaDomains}
\end{figure}
\begin{definition}\label{def:SecondCOBBimodule}
The change-of-basis DA bimodule over $(\A_{1,1},\A_{1,1}^{\can})$ has primary matrix
\[
\kbordermatrix{
& \varnothing & A & B & AB\\
\uu & \lambda_1 & & & \\
\ou & & \lambda_2 & \lambda_3 & \\
\uo & & & \lambda_4 & \\
\oo & & & & \lambda_5
};
\]
we set
\begin{itemize}
\item $\deg^q(\lambda_1) = 0$, $\deg^h(\lambda_1) = 0$,
\item $\deg^q(\lambda_2) = 0$, $\deg^h(\lambda_2) = 0$,
\item $\deg^q(\lambda_3) = -1$, $\deg^h(\lambda_3) = 1$,
\item $\deg^q(\lambda_4) = 0$, $\deg^h(\lambda_4) = 0$,
\item $\deg^q(\lambda_5) = 0$, $\deg^h(\lambda_5) = 0$.
\end{itemize}
This change-of-basis bimodule has secondary matrix
\[
\kbordermatrix{
& \lambda_2 & \lambda_3 & \lambda_4 \\
\lambda_2 & U_1^{k+1} \otimes U_1^{k+1} & U_1^{k+1} \otimes L_1 U_1^k & 0 \\
\lambda_3 & U_1^k \otimes R_1 U_1^k & U_1^{k+1} \otimes U_1^{k+1} & \lambda \\
\lambda_4 & 0 & 0 & U_2^{k+1} \otimes U_2^{k+1}
}
\]
in the middle summand, and secondary matrix
\[
\kbordermatrix{
& \lambda_5 \\
\lambda_5 & U_1^k U_2^l \otimes U_1^k U_2^l
}
\]
in the bottom summand.
\end{definition}
\begin{figure}
\includegraphics[scale=0.6]{lambda_gens_BL_V2.eps}
\caption{Generators of the other change-of-basis bimodule in terms of intersection points.}
\label{fig:LambdaGens}
\end{figure}
\begin{figure}
\includegraphics[scale=0.6]{lambda_domains_BL_V2.eps}
\caption{Domains for the other change-of-basis bimodule (middle summand).}
\label{fig:LambdaDomains}
\end{figure}
One can check that these bimodules are well-defined and that their box tensor product either way is homotopy equivalent to the appropriate identity bimodule.
\begin{proposition}
The change-of-basis bimodule from Definition~\ref{def:FirstCOBBimodule} categorifies the map from $K_0(\A_{1,1})$ to $K_0(\A_{1,1}^{\can})$ with matrix
\[
\kbordermatrix{
& {[P_{\uu}]} & {[P_{\ou}]} & {[P_{\uo}]} & {[P_{\oo}]} \\
{[P_{\varnothing}]} & 1 & 0 & 0 & 0 \\
{[P_{A}]} & 0 & 1 & q^{-1} & 0 \\
{[P_{B}]} & 0 & 0 & 1 & 0 \\
{[P_{AB}]} & 0 & 0 & 0 & 1
};
\]
equivalently, its left dual categorifies the map from $G_0(\A_{1,1}^{\can})$ to $G_0(\A_{1,1})$ with matrix
\[
\kbordermatrix{
& {[S_{\varnothing}]} & {[S_{A}]} & {[S_{B}]} & {[S_{AB}]} \\
{[S_{\uu}]} & 1 & 0 & 0 & 0 \\
{[S_{\ou}]} & 0 & 1 & 0 & 0 \\
{[S_{\uo}]} & 0 & q & 1 & 0 \\
{[S_{\oo}]} & 0 & 0 & 0 & 1
}.
\]
Under our identifications, this latter map agrees with the change-of-basis matrix from the canonical basis (introduced in Section~\ref{sec:NonstandardBases}) to the standard basis of $V^{\otimes 2}$.
\end{proposition}
\begin{proposition}
The change-of-basis bimodule from Definition~\ref{def:SecondCOBBimodule} categorifies the map from $K_0(\A_{1,1}^{\can})$ to $K_0(\A_{1,1})$ with matrix
\[
\kbordermatrix{
& {[P_{\varnothing}]} & {[P_{A}]} & {[P_{B}]} & {[P_{AB}]} \\
{[P_{\uu}]} & 1 & 0 & 0 & 0 \\
{[P_{\ou}]} & 0 & 1 & -q^{-1} & 0 \\
{[P_{\uo}]} & 0 & 0 & 1 & 0 \\
{[P_{\oo}]} & 0 & 0 & 0 & 1
};
\]
equivalently, its left dual categorifies the map from $G_0(\A_{1,1})$ to $G_0(\A_{1,1}^{\can})$ with matrix
\[
\kbordermatrix{
& {[S_{\uu}]} & {[S_{\ou}]} & {[S_{\uo}]} & {[S_{\oo}]} \\
{[S_{\varnothing}]} & 1 & 0 & 0 & 0 \\
{[S_{A}]} & 0 & 1 & 0 & 0 \\
{[S_{B}]} & 0 & -q & 1 & 0 \\
{[S_{AB}]} & 0 & 0 & 0 & 1
}.
\]
Under our identifications, this latter map agrees with the change-of-basis matrix from the standard basis to the canonical basis of $V^{\otimes 2}$.
\end{proposition}
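At the decategorified level, the fact that the box tensor products of the two change-of-basis bimodules are homotopy equivalent to identity bimodules says that the two $K_0$ matrices above are mutually inverse. An illustrative sympy check (the names `C1`, `C2` are ours):

```python
# Illustrative check that the two change-of-basis K_0 matrices are inverse.
import sympy as sp

q = sp.symbols("q")
C1 = sp.Matrix([[1, 0, 0, 0],      # K_0 matrix from Definition def:FirstCOBBimodule
                [0, 1, 1/q, 0],
                [0, 0, 1, 0],
                [0, 0, 0, 1]])
C2 = sp.Matrix([[1, 0, 0, 0],      # K_0 matrix from Definition def:SecondCOBBimodule
                [0, 1, -1/q, 0],
                [0, 0, 1, 0],
                [0, 0, 0, 1]])
assert (C1 * C2).expand() == sp.eye(4)
assert (C2 * C1).expand() == sp.eye(4)
```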
\section{Relationship with \texorpdfstring{Ozsv{\'a}th--Szab{\'o}'s}{Ozsvath-Szabo's} Kauffman-states functor}\label{sec:OSzRelationship}
\subsection{Positive crossing}
We now review Ozsv{\'a}th--Szab{\'o}'s bimodule for a positive crossing between two strands, with both strands oriented upwards, in our notation.
\begin{proposition}
Ozsv{\'a}th--Szab{\'o}'s DA bimodule $P_{\OSz}$ over $\A_{1,1}^{\can}$ for a positive crossing has primary matrix
\[
\kbordermatrix{ & \varnothing & A & B & AB \\
\varnothing & S_{\varnothing} & & & \\
A & & S_A & & \\
B & & W & N_B & \\
AB & & & & N_{AB}
}.
\]
In the top weight space, the secondary matrix is zero and the bimodule is $\F_2$ as a bimodule over itself (generated by $S_{\varnothing}$). The secondary matrix in the middle weight space is
\[
\kbordermatrix{
& S_A & W & N_B \\
S_A & 0 & 0 & L_1 U_1^k \otimes (U_2^{k+1}, L_1) \\
W & R_1 & U_2^{k+1} \otimes U_1^{k+1} & U_2^k \otimes L_1 U_1^k \\
N_B & 0 & U_2^{k+1} \otimes R_1 U_1^k & \begin{matrix} U_2^{k+1} \otimes U_1^{k+1} \\+ U_1^{k+1} \otimes U_2^{k+1} \end{matrix}
}
\]
and the secondary matrix in the bottom weight space is
\[
\kbordermatrix{
& N_{AB} \\
N_{AB} & U_1^l U_2^k \otimes U_1^k U_2^l
}.
\]
\end{proposition}
The degrees of the generators depend only on their type $\in \{N,E,S,W\}$ and are given by
\begin{itemize}
\item $\deg^q(N) = 1$, $\deg^h(N) = -1$,
\item $\deg^q(E) = 0$, $\deg^h(E) = 0$,
\item $\deg^q(S) = -1$, $\deg^h(S) = 0$,
\item $\deg^q(W) = 0$, $\deg^h(W) = 0$.
\end{itemize}
(For the bimodule under consideration, there are no generators of type $E$.)
\begin{remark}
Actually, Ozsv{\'a}th--Szab{\'o} assign this bimodule to a negative crossing; to match the conventions used here, one should exchange positive and negative crossings in \cite{OSzNew}.
\end{remark}
\begin{remark} In \cite{OSzNew}, Ozsv{\'a}th--Szab{\'o} use dg algebras with extra $C_i$ generators when strands are oriented upwards. These generators help upward-oriented strands interact with downward-oriented strands, so that bimodules for maximum and minimum points can be defined. In \cite{OSzNewer} and \cite{OSzHolo}, these generators no longer appear except in descriptions of Koszul duals of Ozsv{\'a}th--Szab{\'o}'s algebras; the interaction between upward-pointing and downward-pointing strands is mediated differently. In any case, the orientations of the strands impact the grading of the algebras, and we use the grading that corresponds (in Ozsv{\'a}th--Szab{\'o}'s conventions) to upward-pointing strands. Since we do not consider duals or downward-pointing strands in this paper, we do not need to consider the modifications (e.g. curvature in \cite{OSzHolo}) that Ozsv{\'a}th--Szab{\'o} use when dealing with mixed orientations.
\end{remark}
\begin{figure}
\includegraphics[scale=0.6]{alex_maslov_gr_V2.eps}
\caption{Quantum and homological degrees of generators of type $N$, $E$, $S$, and $W$.}
\label{fig:AlexMaslov}
\end{figure}
\begin{remark}
Generators of Ozsv{\'a}th--Szab{\'o}'s bimodules correspond to certain ``partial Kauffman states,'' a local version of the Kauffman states defined in \cite{FKT}. At each crossing in a tangle diagram, a partial Kauffman state has a type $\in \{N,E,S,W\}$ depending on which of the four corners adjacent to the crossing is chosen for the state. One can compare Figure~\ref{fig:AlexMaslov} to \cite[Figure 2]{OSzNew}; our quantum degrees are $-2$ times Ozsv{\'a}th--Szab{\'o}'s Alexander degrees (the minus sign is due to the reversal of positive and negative crossings while the $2$ is due to the usual relation $q^2 = t$ between the parameter $t$ of the Alexander polynomial $\Delta_K(t)$ and the quantum parameter $q$), and our homological degrees agree with Ozsv{\'a}th--Szab{\'o}'s. Note that the agreement of homological degrees results from a product of two minus signs, one due to the reversal of positive and negative crossings with respect to \cite{OSzNew} and the other because we use $+1$ differentials while Ozsv{\'a}th--Szab{\'o} use $-1$ differentials.
\end{remark}
\begin{figure}
\includegraphics[scale=0.4]{osz_diagram_BL_V4.eps}
\caption{Heegaard diagram for Ozsv{\'a}th--Szab{\'o}'s bimodules over $\A_{1,1}^{\can}$ for positive and negative crossings (in our conventions).}
\label{fig:OSzDiagramRightside}
\end{figure}
It follows from \cite{OSzHolo} that the bimodule $P_{\OSz}$ can be obtained by counting holomorphic disks in the Heegaard diagram on the left of Figure~\ref{fig:OSzDiagramRightside}; see Figures \ref{fig:OSzNegativeGens}, \ref{fig:OSzNegativeDomains}.
\begin{figure}
\includegraphics[scale=0.6]{osz_negative_gens.eps}
\caption{Generators of the Ozsv{\'a}th--Szab{\'o} positive crossing bimodule $P_{\OSz}$ in terms of intersection points in the positive-crossing diagram of Figure~\ref{fig:OSzDiagramRightside}.}
\label{fig:OSzNegativeGens}
\end{figure}
\begin{figure}
\includegraphics[scale=0.6]{osz_negative_domains.eps}
\caption{Domains giving rise to secondary matrix entries for the Ozsv{\'a}th--Szab{\'o} positive crossing bimodule $P_{\OSz}$ (middle summand).}
\label{fig:OSzNegativeDomains}
\end{figure}
To relate $P_{\OSz}$ with our bimodule $P$, we first change basis on the left of $P_{\OSz}$ to get a DA bimodule over $(\A_{1,1},\A_{1,1}^{\can})$. In other words, we take the box tensor product of $P_{\OSz}$ with the DA bimodule over $(\A_{1,1},\A_{1,1}^{\can})$ from Definition~\ref{def:SecondCOBBimodule}. The result has primary matrix
\[
\kbordermatrix{
& \varnothing & A & B & AB \\
\uu & \lambda_{1} S_{\varnothing} & & & \\
\ou & & \lambda_2 S_A \quad \lambda_3 W & \lambda_3 N_B & \\
\uo & & \lambda_4 W & \lambda_4 N_B & \\
\oo & & & & \lambda_5 N_{AB}
},
\]
secondary matrix
\[
\kbordermatrix{
&\lambda_2 S_A & \lambda_3 W & \lambda_4 W & \lambda_3 N_B & \lambda_4 N_B \\
\lambda_2 S_A & 0 & 0 & 0 & U_1^{k+1} \otimes (U_2^{k+1}, L_1) & 0 \\
\lambda_3 W & 1 & 0 & \lambda & 1 \otimes L_1 & 0 \\
\lambda_4 W & 0 & 0 & U_2^{k+1} \otimes U_1^{k+1} & 0 & U_2^{k} \otimes L_1 U_1^k \\
\lambda_3 N_B & 0 & 0 & 0 & U_1^{k+1} \otimes U_2^{k+1} & \lambda \\
\lambda_4 N_B & 0 & 0 & U_2^{k+1} \otimes R_1 U_1^k & 0 & U_2^{k+1} \otimes U_1^{k+1}
}
\]
in the middle summand, and secondary matrix
\[
\kbordermatrix{
& \lambda_5 N_{AB} \\
\lambda_5 N_{AB} & U_1^l U_2^k \otimes U_1^k U_2^l
}
\]
in the lower summand. Simplifying the above bimodule, we get primary matrix
\[
\kbordermatrix{
& \varnothing & A & B & AB \\
\uu & \lambda_{1} S_{\varnothing} & & & \\
\ou & & & \lambda_3 N_B & \\
\uo & & \lambda_4 W & \lambda_4 N_B & \\
\oo & & & & \lambda_5 N_{AB}
}
\]
and secondary matrix
\[
\kbordermatrix{
& \lambda_4 W & \lambda_3 N_B & \lambda_4 N_B \\
\lambda_4 W & U_2^{k+1} \otimes U_1^{k+1} & 0 & U_2^k \otimes L_1 U_1^k \\
\lambda_3 N_B & 0 & U_1^{k+1} \otimes U_2^{k+1} & \lambda \\
\lambda_4 N_B & U_2^{k+1} \otimes R_1 U_1^k & 0 & U_2^{k+1} \otimes U_1^{k+1}
}
\]
in the middle summand.
Now we change basis on the right as well, by taking a further box tensor product with the DA bimodule over $(\A_{1,1}^{\can},\A_{1,1})$ from Definition~\ref{def:FirstCOBBimodule}. The result has primary matrix
\[
\kbordermatrix{
& \uu & \ou & \uo & \oo \\
\uu & \lambda_{1} S_{\varnothing} \kappa_1 & & & \\
\ou & & & \lambda_3 N_B \kappa_4 & \\
\uo & & \lambda_4 W \kappa_2 & \lambda_4 N_B \kappa_4 \quad \lambda_4 W \kappa_3 & \\
\oo & & & & \lambda_5 N_{AB} \kappa_5
},
\]
secondary matrix
\[
\kbordermatrix{
& \lambda_4 W \kappa_2 & \lambda_3 N_B \kappa_4 & \lambda_4 N_B \kappa_4 & \lambda_4 W \kappa_3\\
\lambda_4 W \kappa_2 & U_2^{k+1} \otimes U_1^{k+1} & 0 & U_2^k \otimes (\lambda, U_1^{k+1}) & 1 \otimes \lambda \\
\lambda_3 N_B \kappa_4 & 0 & U_1^{k+1} \otimes U_2^{k+1} & \lambda & 0 \\
\lambda_4 N_B \kappa_4 & 0 & 0 & 0 & U_2 \\
\lambda_4 W \kappa_3 & 0 & 0 & 0 & 0 \\
}
\]
in the middle summand, and secondary matrix
\[
\kbordermatrix{
& \lambda_5 N_{AB} \kappa_5 \\
\lambda_5 N_{AB} \kappa_5 & U_1^l U_2^k \otimes U_1^k U_2^l
}
\]
in the lower summand. This bimodule agrees with the positive-crossing bimodule from Section~\ref{sec:PositiveCrossingHD} under the grading-preserving identification
\begin{align*}
& \lambda_1 S_{\varnothing} \kappa_1 \leftrightarrow I \\
& \lambda_4 W \kappa_2 \leftrightarrow J \\
& \lambda_3 N_B \kappa_4 \leftrightarrow K \\
& \lambda_4 N_B \kappa_4 \leftrightarrow L \\
& \lambda_4 W \kappa_3 \leftrightarrow M \\
& \lambda_5 N_{AB} \kappa_5 \leftrightarrow N.
\end{align*}
\subsection{Negative crossing}
\begin{proposition}
Ozsv{\'a}th--Szab{\'o}'s DA bimodule $N_{\OSz}$ over $\A_{1,1}^{\can}$ for a negative crossing (in Ozsv{\'a}th--Szab{\'o}'s papers, a positive crossing as remarked above) has primary matrix
\[
\kbordermatrix{ & \varnothing & A & B & AB \\
\varnothing & S'_{\varnothing} & & & \\
A & & S'_A & & \\
B & & W' & N'_B & \\
AB & & & & N'_{AB}
}.
\]
In the top weight space, the secondary matrix is zero and the bimodule is $\F_2$ as a bimodule over itself (generated by $S'_{\varnothing}$). The secondary matrix in the middle weight space is
\[
\kbordermatrix{
& S'_A & W' & N'_B \\
S'_A & 0 & L_1 & 0 \\
W' & 0 & U_2^{k+1} \otimes U_1^{k+1} & U_2^{k+1} \otimes L_1 U_1^k \\
N'_B & R_1 U_1^k \otimes (R_1, U_2^{k+1}) & U_2^k \otimes R_1 U_1^k & \begin{matrix} U_2^{k+1} \otimes U_1^{k+1} \\+ U_1^{k+1} \otimes U_2^{k+1} \end{matrix}
}
\]
and the secondary matrix in the bottom weight space is
\[
\kbordermatrix{
& N'_{AB} \\
N'_{AB} & U_1^l U_2^k \otimes U_1^k U_2^l
}.
\]
\end{proposition}
The degrees of the generators depend only on their type $\in \{N',E',S',W'\}$ and are given by
\begin{itemize}
\item $\deg^q(N') = -1$, $\deg^h(N') = 1$,
\item $\deg^q(E') = 0$, $\deg^h(E') = 0$,
\item $\deg^q(S') = 1$, $\deg^h(S') = 0$,
\item $\deg^q(W') = 0$, $\deg^h(W') = 0$
\end{itemize}
as described in the bottom row of Figure~\ref{fig:AlexMaslov} (again, there are no generators of type $E'$). By \cite{OSzHolo}, the bimodule $N_{\OSz}$ can be obtained by counting holomorphic disks in the negative-crossing Heegaard diagram of Figure~\ref{fig:OSzDiagramRightside}.
\begin{figure}
\includegraphics[scale=0.6]{osz_positive_gens.eps}
\caption{Generators of the Ozsv{\'a}th--Szab{\'o} negative crossing bimodule $N_{\OSz}$ in terms of intersection points in the negative-crossing diagram of Figure~\ref{fig:OSzDiagramRightside}.}
\label{fig:OSzPositiveGens}
\end{figure}
\begin{figure}
\includegraphics[scale=0.6]{osz_positive_domains.eps}
\caption{Domains giving rise to secondary matrix entries for the Ozsv{\'a}th--Szab{\'o} negative crossing bimodule $N_{\OSz}$ (middle summand).}
\label{fig:OSzPositiveDomains}
\end{figure}
To relate $N_{\OSz}$ with our bimodule $N$, we first change basis on the right of $N_{\OSz}$ to get a DA bimodule over $(\A_{1,1}^{\can},\A_{1,1})$. In other words, we take the box tensor product of $N_{\OSz}$ with the DA bimodule over $(\A_{1,1}^{\can},\A_{1,1})$ from Definition~\ref{def:FirstCOBBimodule}. The result has primary matrix
\[
\kbordermatrix{
& \uu & \ou & \uo & \oo \\
\varnothing & S'_{\varnothing} \kappa_1 & & & \\
A & & S'_A \kappa_2 & S'_A \kappa_3 & \\
B & & W' \kappa_2 & W' \kappa_3 \quad N'_B \kappa_4 & \\
AB & & & & N'_{AB} \kappa_5
},
\]
secondary matrix
\[
\kbordermatrix{
&S'_A \kappa_2 & W' \kappa_2 & S'_A \kappa_3 & W' \kappa_3 & N'_B \kappa_4 \\
S'_A \kappa_2 & 0 & L_1 & 1 \otimes \lambda & 0 & 0 \\
W' \kappa_2 & 0 & U_2^{k+1} \otimes U_1^{k+1} & 0 & 1 \otimes \lambda & U_2^{k+1} \otimes (\lambda, U_1^{k+1}) \\
S'_A \kappa_3 & 0 & 0 & 0 & L_1 & 0 \\
W' \kappa_3 & 0 & 0 & 0 & 0 & 0 \\
N'_B \kappa_4 & 0 & 0 & R_1 U_1^k \otimes U_2^{k+1} & 1 & U_1^{k+1} \otimes U_2^{k+1}
}
\]
in the middle summand, and secondary matrix
\[
\kbordermatrix{
& N'_{AB} \kappa_5 \\
N'_{AB} \kappa_5 & U_1^l U_2^k \otimes U_1^k U_2^l
}
\]
in the lower summand. Simplifying the above bimodule, we get primary matrix
\[
\kbordermatrix{
& \uu & \ou & \uo & \oo \\
\varnothing & S'_{\varnothing} \kappa_1 & & & \\
A & & S'_A \kappa_2 & S'_A \kappa_3 & \\
B & & W' \kappa_2 & & \\
AB & & & & N'_{AB} \kappa_5
},
\]
and secondary matrix
\[
\kbordermatrix{
&S'_A \kappa_2 & W' \kappa_2 & S'_A \kappa_3 \\
S'_A \kappa_2 & 0 & L_1 & 1 \otimes \lambda \\
W' \kappa_2 & 0 & U_2^{k+1} \otimes U_1^{k+1} & R_1 U_1^k \otimes (U_2^{k+1}, \lambda) \\
S'_A \kappa_3 & 0 & 0 & U_1^{k+1} \otimes U_2^{k+1}
}
\]
in the middle summand.
Now we change basis on the left as well, by taking a further box tensor product with the DA bimodule over $(\A_{1,1},\A_{1,1}^{\can})$ from Definition~\ref{def:SecondCOBBimodule}. The result has primary matrix
\[
\kbordermatrix{
& \uu & \ou & \uo & \oo \\
\uu & \lambda_{1} S'_{\varnothing} \kappa_1 & & & \\
\ou & & \lambda_2 S'_A \kappa_2 \quad \lambda_3 W' \kappa_2 & \lambda_2 S'_A \kappa_3 & \\
\uo & & \lambda_4 W' \kappa_2 & & \\
\oo & & & & \lambda_5 N'_{AB} \kappa_5
},
\]
secondary matrix
\[
\kbordermatrix{
& \lambda_2 S'_A \kappa_2 & \lambda_3 W' \kappa_2 & \lambda_4 W' \kappa_2 & \lambda_2 S'_A \kappa_3 \\
\lambda_2 S'_A \kappa_2 & 0 & U_1 & 0 & 1 \otimes \lambda \\
\lambda_3 W' \kappa_2 & 0 & 0 & \lambda & U_1^k \otimes (U_2^{k+1},\lambda) \\
\lambda_4 W' \kappa_2 & 0 & 0 & U_2^{k+1} \otimes U_1^{k+1} & 0 \\
\lambda_2 S'_A \kappa_3 & 0 & 0 & 0 & U_1^{k+1} \otimes U_2^{k+1} \\
}
\]
in the middle summand, and secondary matrix
\[
\kbordermatrix{
& \lambda_5 N'_{AB} \kappa_5 \\
\lambda_5 N'_{AB} \kappa_5 & U_1^l U_2^k \otimes U_1^k U_2^l
}
\]
in the lower summand. This bimodule agrees with the negative-crossing bimodule from Section~\ref{sec:PositiveCrossingHD} under the grading-preserving identification
\begin{align*}
& \lambda_1 S'_{\varnothing} \kappa_1 \leftrightarrow I' \\
& \lambda_2 S'_A \kappa_2 \leftrightarrow J' \\
& \lambda_3 W' \kappa_2 \leftrightarrow K' \\
& \lambda_4 W' \kappa_2 \leftrightarrow L' \\
& \lambda_2 S'_A \kappa_3 \leftrightarrow M' \\
& \lambda_5 N'_{AB} \kappa_5 \leftrightarrow N'.
\end{align*}
\section{Introduction}
In the past two decades, T-duality \cite{giveon, lectdual} has been highly successful in helping increase our understanding of string theory. In the 1990s, S, T and U-dualities were used to relate the five different string theories. The existence of these dualities was crucial in the conjecture that the five string theories are different limits of a theory that is the strong coupling limit of type IIA string theory, M-theory \cite{mtheoryht, mtheoryw}. T-duality has also provided insights into the study of D-branes, which would perhaps be inaccessible otherwise. For example, Myers \cite{myerseffect} used consistency of the world-volume action for the D-brane with T-duality to find the correct coupling of background Ramond-Ramond fields to D-branes, which was used to discover the Myers effect. Furthermore, another important application of T-duality has been to use it to generate solutions in supergravity \cite{Tsolngen}.
In Buscher's formulation of T-duality \cite{buscher1, *buscher2}, a shift symmetry of a target space coordinate, which corresponds to an isometry in the target space of the sigma-model, is used to make a field redefinition in the sigma model. The new sigma model is classically of the same form as the original sigma model except for the sigma model couplings, i.e.\ the metric and the 2-form field, which are different. The two sigma models are equivalent quantum-mechanically if the dilaton also transforms. This shows that the string theories described by the two sigma models with different couplings, which correspond to different backgrounds for the string theories, are equivalent. The transformed background fields are related to the original fields by the Killing vector that corresponds to the isometry.
Recently, this idea has been generalised to the case where the target space has a fermionic isometry, or supersymmetry, as opposed to an isometry, to find a duality of tree-level type II string theory, fermionic T-duality \cite{fermdual, fermdual2}. Under this duality the background Ramond-Ramond fields and the dilaton transform and the metric and the NSNS 2-form field are invariant. Analogously to T-duality, the transformation of the background supergravity fields are given by the Killing spinors corresponding to the supersymmetry in superspace.
Given the success of T-duality, we expect that fermionic T-duality will also make important contributions to our understanding of string theory. In fact, fermionic T-duality was introduced to explain the dual superconformal symmetry of planar scattering amplitudes in $\mathcal{N}=4$ super Yang-Mills theory \cite{fermdual, fermdual2}, which has no obvious origin in the weak coupling computations of these amplitudes in which this symmetry was found \cite{dualsuperconf1, *dualsuperconf2}. There have also been other studies of fermionic T-duality \cite{adam, fre, hao, bb, sfetsos}.
However, in contrast to T-duality, fermionic T-duality generically transforms a real supergravity background into a complex supergravity background. An ordinary T-duality along a timelike direction can then be applied to get back a real background. This means that the application of fermionic T-duality as a solution generating mechanism, one of the key applications of T-duality, is limited to supergravity solutions with a timelike Killing vector.
In this paper, we consider fermionic T-duality from the spacetime viewpoint, rather than the worldsheet perspective in which it was found. We consider a general ansatz for the transformation of the Ramond-Ramond fields and the dilaton involving Killing spinors in both type IIA and type IIB supergravity. We then systematically impose that the supergravity equations are invariant under this transformation, i.e.\ we impose that the transformed fields are solutions of the supergravity equations. We find that the symmetry includes fermionic T-duality, as it must, but it also admits real transformations of the supergravity fields.
The structure of the paper is as follows. In section \ref{review}, we review Buscher's T-duality and show how a symmetry in the target space allows a field redefinition in the worldsheet theory that gives rise to a duality. We also review the transformation rules of fermionic T-duality \cite{fermdual} in this section. Then, in section \ref{symmIIA}, we set our conventions by stating the type IIA supergravity Lagrangian and equations, and we construct the symmetry for type IIA supergravity. In section \ref{symmIIB}, we construct an analogous symmetry for type IIB supergravity. Finally, in section \ref{com}, we make some comments and outline future work.
\section{Review of T-duality and fermionic T-duality}
\label{review}
Buscher \cite{buscher1, *buscher2} showed that T-duality in curved backgrounds arises as a symmetry of the sigma-model. Consider the bosonic string sigma-model
\begin{equation}
S= \frac{1}{4 \pi \alpha'} \int \d ^2 {\sigma} \left[ \sqrt{h} h^{ \alpha \beta} g_{a b} \partial_{\alpha} X^{a}\partial_{\beta} X^{b}+ \epsilon^{\alpha \beta} B_{a b} \partial_{\alpha} X^{a}\partial_{\beta} X^{b} + \alpha' \sqrt{h} R^{(2)} \phi (X) \right]. \notag
\end{equation}
The field $X^{a}$ is the position of the point $(\sigma^{1},\sigma^{2})$ on the worldsheet in spacetime; $g_{a b}$ is the metric on the target space; $B_{a b}$ is the antisymmetric gauge potential; $\phi$ is the dilaton and $R^{(2)}$ is the curvature of the worldsheet metric $h.$ Imposing conformal invariance in the quantum theory gives the equations of motion for the background fields \cite{stringback}. In 26 dimensions, the equations of motion for the metric, two-form field and dilaton are
\begin{gather}
R_{a b}-\frac{1}{4} H_{a}^{\;\; c d} H_{b c d} +2 \nabla_{a} \nabla_{b} \phi=0, \notag \\
\nabla_{c} H^{c}_{\;\;a b} -2 \left(\partial_{c} \phi \right) H^{c}_{\;\;ab}=0, \notag \\
4 \left(\partial \phi \right)^2 -4 \Box \phi -R + \frac{1}{12} H^2=0,
\label{betaeqns}
\end{gather}
respectively. The tensor $R_{a b}$ is the Ricci tensor associated to the metric on the target space and $H_{a b c} = 3 \partial_{[a} B_{b c]}.$
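Written out, the antisymmetrization convention used here gives the cyclic component form
\[
H_{abc} = \partial_{a} B_{bc} + \partial_{b} B_{ca} + \partial_{c} B_{ab},
\]
which is totally antisymmetric by virtue of $B_{ab} = -B_{ba}.$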
If there is a Killing vector in the target space, $k,$ then we can choose a coordinate system---we will let $X^{a}$ be such a coordinate system---in which $k=\partial/\partial{X^0}.$ In this coordinate system, the metric, two-form field and the dilaton are independent of the $X^0$ coordinate. We can then write $$ \partial_{\alpha} X^0 =V_{\alpha}, $$ in the action, but we must impose the constraint that $V_{\alpha}$ is exact. For Euclidean worldsheets of spherical topology we can impose this constraint using a Lagrange multiplier term $$ \epsilon^{\alpha \beta} \hat{X}^{0} \partial_{\alpha} V_{\beta}.$$
So we can write the action as
\begin{multline}
S=\frac{1}{4 \pi \alpha'} \int \d ^2 {\sigma} \left( \sqrt{h} h^{ \alpha \beta} \left[ g_{0 0} V_{\alpha} V_{\beta} + 2 g_{0 i} V_{\alpha} \partial_{\beta} X^{i} + g_{i j} \partial_{\alpha} X^{i} \partial_{\beta} X^{j} \right] \right. \\ \left. + \epsilon^{\alpha \beta} \left[2 B_{0 i} V_{\alpha} \partial_{\beta} X^{i} +B_{i j} \partial_{\alpha}X^{i} \partial_{\beta} X^{j} \right] + 2 \epsilon^{\alpha \beta} \hat{X}^{0} \partial_{\alpha} V_{\beta} + \alpha' \sqrt{h} R^{(2)} \phi (X) \right), \notag
\end{multline}
where $a=(0,i).$
The equation of motion for $\hat{X}^{0}$ gives that $V$ is closed, which for a spherical worldsheet implies that $V$ is exact, so we get back the original theory. The $V$ equation of motion is $$ V_{\alpha} =-\frac{1}{g_{00}} \left[ g_{0i} \partial_{\alpha} X^i +\frac{\epsilon_{\alpha}^{\;\; \beta}}{\sqrt{h}}\left( B_{0i} \partial_{\beta} X^i + \partial_{\beta} \hat{X}^0 \right) \right].$$ Integrating the action over $V$ we get the dual action that has the same form as the original action except that the metric and the two-form field are now
\begin{gather}
\tilde{g}_{00} = \frac{1}{g_{00}}, \notag \\
\tilde{g}_{0i} = \frac{B_{0i}}{g_{00}}, \qquad \tilde{B}_{0i} = \frac{g_{0i}}{g_{00}}, \notag \\
\tilde{g}_{ij} = g_{ij}-\frac{g_{0i}g_{0j}-B_{0i}B_{0j}}{g_{00}}, \notag \\
\tilde{B}_{ij} = B_{ij}-\frac{g_{0i}B_{0j}-B_{0i}g_{0j}}{g_{00}}.
\label{eqn:tdual}
\end{gather}
We would like to impose the condition that the T-dual theory is also conformally invariant. This can be imposed at
one-loop using either the results of reference \cite{stringback}, i.e.\ equations \eqref{betaeqns}, or by considering the change in the measure of the path-integral. Either method suggests that the dilaton is shifted \cite{buscher1,*buscher2} to
\begin{equation}
\tilde{\phi}=\phi-\frac{1}{2} \log{g_{00}}.
\end{equation}
Note that if we take the special case of toroidal compactification on a flat background, then we get the well-known result that the radius of the compactification circle is inverted and the string coupling constant, which is the exponential of the expectation value of the dilaton, is modified by a factor of $\sqrt{\alpha'}/R.$
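As a quick check of this statement (a sketch, assuming the standard identification $g_{00}=R^{2}/\alpha'$ for a compactification circle of radius $R$), the first rule in \eqref{eqn:tdual} gives
\[
\tilde{g}_{00}=\frac{1}{g_{00}}=\frac{\alpha'}{R^{2}}=\frac{\tilde{R}^{2}}{\alpha'}
\quad\Longrightarrow\quad
\tilde{R}=\frac{\alpha'}{R},
\]
while the dilaton shift gives $\tilde{\phi}=\phi-\tfrac{1}{2}\log \left(R^{2}/\alpha' \right),$ i.e.\ the string coupling is rescaled as $\tilde{g}_{s}=g_{s}\sqrt{\alpha'}/R.$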
The argument given here is valid only for spherical worldsheets, hence the duality has only been proved to first order in string perturbation theory. By gauging the isometry the duality can be extended to higher genus worldsheets, but in this case the isometry orbits must be compact, or in other words the shift symmetry has to be along a compact coordinate \cite{rocek}.
Recently, Berkovits and Maldacena \cite{fermdual} have generalised Buscher's formulation of T-duality to the case where the worldsheet action is invariant under constant shifts of spacetime fermionic coordinates $\theta^{J},$ $J=1,\dots n.$ They show that under this duality the metric and the NS-NS 2-form potential do not change, and they give the transformation of the Ramond-Ramond fields in terms of the bispinor field strength.
In type IIA string theory the bispinor field strength is $$ F= \frac{1}{2} F^{(2)}_{\;\;\;\; a_1 a_2} \gamma^{a_1 a_2} + \frac{1}{4!} F^{(4)}_{\;\;\;\; a_1 \dots a_4} \gamma^{a_1 \dots a_4} \gamma_{11},$$ where the 2-form $F^{(2)}$ and 4-form $F^{(4)}$ are the RR field strengths. In our notation, $\gamma$-matrices are 32 by 32 matrix representations of the ten-dimensional Clifford algebra; two Majorana-Weyl spinors describing $\mathcal{N}=2A$ supersymmetry are combined into a single Majorana-Dirac spinor. In type IIB string theory the Ramond-Ramond field strengths are the 1-form $F^{(1)},$ 3-form $F^{(3)}$ and the self-dual 5-form $F^{(5)},$ and we define the R-R bispinor field strength by $$ F= F^{(1)}_{\;\;\;\;a} \gamma^a \sigma^1 + \frac{1}{3!} F^{(3)}_{\;\;\;\;a_1\dots a_3} \gamma^{a_1 \dots a_3} \left(i \sigma^2 \right) + \frac{1}{2 \cdot 5!} F^{(5)}_{\;\;\;\;a_1\dots a_5} \gamma^{a_1 \dots a_5} \sigma^1.$$ In type IIB theory, two Majorana-Weyl spinors, $\varepsilon$ and $\hat{\varepsilon},$ with the same chirality are combined into an $SO(2)$ vector
$$\epsilon=\begin{pmatrix}
\varepsilon \\
\hat{\varepsilon}
\end{pmatrix},$$
which are rotated amongst each other by acting with $\textup{e}^{i \sigma^{2} \theta},$ and on which Pauli matrices act in the obvious way.
Fermionic T-duality transforms the bispinor field strength in the following way:
\begin{equation}
\textup{e}^{\phi'} F'= \textup{e}^{\phi} F \pm 32 \sum_{I,J=1}^{N} \left(\varepsilon_{I} \otimes \hat{\varepsilon}_{J}\right) M_{IJ},
\label{eqn:BMFtrans}
\end{equation}
where we take $+$ in type IIB theory and $-$ in type IIA string theory. We will always combine two type IIA Weyl spinors into a Dirac spinor and type IIB spinors into an $SO(2)$ vector. However, in equation \eqref{eqn:BMFtrans}, and only there, the spinors are Weyl Killing spinors.
Furthermore, under fermionic T-duality, the dilaton is transformed to
\begin{equation}
\phi'=\phi + \frac{1}{2} \sum_{I=1}^{n}\left(\log \left(\frac{1}{2} M^{-1}\right) \right)_{II},
\label{eqn:BMdiltrans}
\end{equation}
where $M^{-1}$ satisfies $$ \partial_{a} \left(M^{-1}\right)_{IJ} = 2 \bar{\epsilon}_{I} \gamma_{a} \gamma_{11} \epsilon_{J}$$ for type IIA theory, and $$ \partial_{a} \left(M^{-1}\right)_{IJ} = 2 \bar{\epsilon}_{I} \gamma_{a} \sigma^{3} \epsilon_{J}$$ for type IIB theory---of course $\epsilon$ has a different meaning for each theory, as described above. The spinors, $\epsilon_{J},$ are Killing spinors corresponding to the constant shift symmetry of the fermionic coordinate $\theta^{J}$ in superspace $$ \theta_{I} \rightarrow \theta_{I} + \rho_{I},$$ where $\rho_{I}$ is a Grassmann-valued constant. Since this symmetry is abelian, $\{\epsilon_{I} Q_{I},\epsilon_{J} Q_{J}\}=0,$ where there is no summation over $I$ and $J.$ However, from the supersymmetry algebra $$\{\epsilon_{I} Q_{I},\epsilon_{J} Q_{J}\} = \bar{\epsilon}_{I} \gamma^{a} \epsilon_{J} P_{a},$$
where $P$ is the generator for translations. Therefore,
\begin{equation}
\bar{\epsilon}_{I} \gamma_{a} \epsilon_{J}=0,
\label{eqn:spincon}
\end{equation}
for all $I,J=1,\dots, n\in \mathbb{Z}^{+}.$ This condition can only be satisfied non-trivially for complex $\epsilon,$ so, in general, the transformation sends real supergravity backgrounds into complex backgrounds.
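To see why only complex spinors can satisfy \eqref{eqn:spincon} non-trivially, consider the $a=0$ component with $I=J$ for a commuting Majorana spinor; in conventions where $\bar{\epsilon}=\epsilon^{\dagger}\gamma^{0}$ and $(\gamma^{0})^{2}=-1$ (a sketch, under this assumption about the signature conventions),
\[
\bar{\epsilon}\,\gamma^{0}\epsilon=\epsilon^{\dagger}\left(\gamma^{0}\right)^{2}\epsilon=-\,\epsilon^{\dagger}\epsilon,
\]
which vanishes for real $\epsilon$ only if $\epsilon=0.$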
Fermionic T-duality preserves the number of supersymmetries, which is necessary in order for it to be a duality of string theory. Explicitly, the Killing spinors in the T-dual theory are
\begin{equation}
\epsilon'_{I} = M_{IJ} \epsilon_{J}.
\label{newkillingspinors}
\end{equation}
\section{Type IIA supergravity symmetry}
\label{symmIIA}
In our conventions, summarised in appendix \ref{conv}, the type IIA supergravity action is
\begin{equation}
S= \frac{1}{2 \kappa^2} \int \d^{10} x \sqrt{g} \left\{ \textup{e}^{-2 \phi} \left[ R + 4 (\partial \phi)^2 - \frac{1}{12} H^2 \right] - \frac{1}{2} \left[ \frac{1}{2} F^{{(2)}^2} + \frac{1}{4!} F^{{(4)}^2 }\right] -\frac{1}{144} \frac{1}{\sqrt{g}} \epsilon \, \partial C^{(3)} \partial C^{(3)} B \right\}.
\end{equation}
The first square bracket is the action for the NSNS fields, the metric $g,$ the 2-form field $B,$ and the dilaton. The second set of terms constitute the action for the RR fields, the 1-form potential $C^{(1)},$ and the 3-form potential $C^{(3)}.$ The last term is the Chern-Simons term. The field strengths, $H, F^{(2)},F^{(4)}$ are defined by
\begin{align*}
H &= \d B, \\
F^{(2)} &= \d C^{(1)}, \\
F^{(4)} &= \d C^{(3)} - H \wedge C^{(1)}.
\end{align*}
The Bianchi identities for the field strengths are
\begin{align}
&\d H =0, \label{bianchiH} \\
&\d F^{(2)}=0, \label{bianchi2}\\
&\d F^{(4)}-H \wedge F^{(2)}=0.
\label{bianchi4}
\end{align}
The equations of motion are
\begin{align}
& \d \left(\textup{e}^{-2 \phi} \star H \right) + F^{(2)} \wedge \star F^{(4)} - \frac{1}{2} F^{(4)} \wedge F^{(4)} =0, \label{eomHA} \\
& \d \star F^{(2)} + H \wedge \star F^{(4)} =0, \\
& \d \star F^{(4)} - H \wedge F^{(4)} =0.
\end{align}
The Einstein equation is
\begin{multline}
R_{ab}= - \frac{1}{4} g_{ab} \Box \phi + \frac{1}{2} g_{ab} \left(\partial \phi \right)^{2} - 2 \nabla_{a} \nabla_{b} \phi + \frac{1}{4} \left( H_{acd}H_{b}^{\;cd} - \frac{1}{12} g_{ab} H^{2} \right)\\
+\frac{1}{2} \textup{e}^{2 \phi} \left( F^{(2)}_{\;\;\;\;ac} F^{(2)\;c}_{\;\;\;\;b} -\frac{1}{16} g_{ab} F^{(2)^{2}} \right) +\frac{1}{12} \textup{e}^{2 \phi} \left(F^{(4)}_{\;\;\;\;acde} F^{(4)\;cde}_{\;\;\;\;b} -\frac{3}{32} g_{ab} F^{(4)^{2}} \right),
\label{einstein}
\end{multline}
and the dilaton equation of motion is
\begin{equation}
R + 4 \Box \phi - 4 \left(\partial \phi \right)^{2} - \frac{1}{12} H^{2}=0.
\label{dileom}
\end{equation}
We can check the consistency of these equations by showing that the contracted Bianchi identity holds. Indeed, using the Bianchi identities and equations of motion for the field strengths and the Einstein equation,
$$ \nabla^{a}(R_{ab}-\frac{1}{2} g_{ab}R) = - \partial_{b} \phi \left(R + 4 \Box \phi - 4 \left(\partial \phi \right)^{2} - \frac{1}{12} H^{2} \right),$$
which vanishes by the dilaton equation of motion.
The Killing spinor equations from the variations of the gravitino and dilatino are
\begin{equation}
\nabla_{a} \epsilon - \frac{1}{8} H_{abc} \gamma^{bc} \gamma_{11} \epsilon - \frac{1}{16} \textup{e}^{\phi} F^{(2)}_{\;\;\;\;bc}\gamma^{bc} \gamma_{a} \gamma_{11} \epsilon + \frac{1}{192} \textup{e}^{\phi} F^{(4)}_{\;\;\;\;bcde} \gamma^{bcde}\gamma_{a} \epsilon=0,
\label{eqn:ksegrav}
\end{equation}
\begin{equation}
\left( \gamma^{a} \partial_{a} \phi - \frac{1}{12} H_{abc} \gamma^{abc} \gamma_{11} - \frac{3}{8} \textup{e}^{\phi} F^{(2)}_{\;\;\;\;ab} \gamma^{ab} \gamma_{11} + \frac{1}{96} \textup{e}^{\phi} F^{(4)} _{\;\;\;\; abcd} \gamma^{abcd} \right) \epsilon=0,
\label{eqn:ksedil}
\end{equation}
respectively.
We will consider a transformation in the RR fields only. We also allow the dilaton to transform because at the quantum level this restores the conformal invariance of the string sigma model. For a string theory to be quantum mechanically consistent, it can be shown that the background fields must satisfy the supergravity equations of motion by imposing the vanishing of the beta-function \cite{stringback, betaGS}, or imposing $\kappa-$invariance of the Green-Schwarz action \cite{kappa, kappainvII}. The other NSNS fields, the metric and the 3-form field strength $H,$ are invariant under the transformation.
We consider the most general ansatz for the transformation of the fields:
\begin{align}
\textup{e}^{\phi} F^{(2)}_{\;\;\;\;ab} &\rightarrow \textup{e}^{\phi'} F'^{(2)}_{\;\;\;\;ab} = \textup{e}^{\phi} F^{(2)}_{\;\;\;\;ab} + \bar{\epsilon}_{I} \gamma_{ab} (S_1 + S_2 \gamma_{11} ) \eta_{J} M_{IJ}, \notag \\
\textup{e}^{\phi} F^{(4)}_{\;\;\;\;abcd} &\rightarrow \textup{e}^{\phi'} F'^{(4)}_{\;\;\;\;abcd} = \textup{e}^{\phi} F^{(4)}_{\;\;\;\;abcd} + \bar{\epsilon}_{I} \gamma_{abcd} (S_3 + S_4 \gamma_{11} ) \eta_{J} M_{IJ}, \label{fieldtrans}
\end{align}
where $\phi', M_{IJ}, S_{1} , \dots, S_{4}$ are arbitrary functions, and spinors $\epsilon_{I}, \eta_I,$ $I=1, \dots, n,$ satisfy the gravitino and dilatino Killing spinor equations. Both of the RR field strengths transform with the same spinors and functions, for if they transformed with different spinors and functions, then, for example, upon requiring that the Bianchi identity for the transformed 4-form field strength, equation \eqref{bianchi4}, holds, they would be identified. We will identify them from the onset in the interests of clarity and terseness. Furthermore, from equation \eqref{eomHA}, $\phi'$ can be identified as the transformed dilaton, which we will write as $$ \phi' = \phi +X,$$ where $X$ is an unknown function.
Let us consider each Bianchi identity and equation of motion in turn. First, consider the transformed Bianchi identity for the RR 2-form, equation \eqref{bianchi2}. Using the gravitino Killing spinor equation
\begin{align}
\nabla_{[a} F'^{(2)}_{\;\;\;\;bc]} &= \nabla_{[a} \left( \textup{e}^{-X} F^{(2)}_{\;\;\;\;bc]}+\textup{e}^{-(\phi+X)} \bar{\epsilon}_{I} \gamma_{bc]} \left(S_1 + S_2 \gamma_{11} \right) \eta_{J} M_{IJ} \right) \notag \\
&= -\textup{e}^{-X} \left( F^{(2)}_{\;\;\;\;[bc}+\textup{e}^{-\phi} \bar{\epsilon}_{I} \gamma_{[bc} \left(S_1 + S_2 \gamma_{11} \right) \eta_{J} M_{IJ} \right) \partial_{a]} X\notag \\
& \qquad - \textup{e}^{-(\phi+X)} \bar{\epsilon}_{I} \gamma_{[bc} \left(S_1 + S_2 \gamma_{11} \right) \eta_{J} M_{IJ}\partial_{a]} \phi \notag \\
& \qquad \quad + \textup{e}^{-X} \epsilon_{I}^{\alpha} \eta_{J}^{\beta} \left( \frac{1}{4} \textup{e}^{-\phi} H_{de[a} \left( S_1 \left(\gamma_{bc]} \gamma^{de} \gamma_{11} \right)_{(\alpha \beta)} +S_2 \left( \gamma_{bc]} \gamma^{de} \right)_{[\alpha \beta]} \right) \right. \notag \\
& \qquad \qquad + \frac{1}{8} F^{(2)}_{\;\;\;\;de} \left( S_1 \left(\gamma_{[bc} \gamma^{de} \gamma_{a]} \gamma_{11} \right)_{(\alpha \beta)} - S_2 \left( \gamma_{[bc} \gamma^{de} \gamma_{a]} \right)_{[\alpha \beta]} \right) \notag\\
& \qquad \qquad \quad \biggl. - \frac{1}{96} F^{(4)}_{\;\;\;\;defg} \left( S_1 \left(\gamma_{[bc} \gamma^{defg} \gamma_{a]} \gamma_{11} \right)_{(\alpha \beta)} - S_2 \left( \gamma_{[bc} \gamma^{defg} \gamma_{a]} \right)_{[\alpha \beta]} \right) \biggr) M_{IJ} \notag \\
& \qquad \qquad \qquad + \textup{e}^{-(\phi+X)} \bar{\epsilon}_{I} \gamma_{[bc} \partial_{a]} \left[ \left(S_1 + S_2 \gamma_{11} \right) M_{IJ} \right] \eta_{J}.
\label{eqn:bian2}
\end{align}
The Greek indices, $\alpha, \beta =1, \dots, 32,$ are spinor indices.
Now we can use the dilatino Killing spinor equation to express the term involving the derivative of the dilaton in the expression above in terms of the field strengths. As $\epsilon_{I}$ and $\eta_{J}$ satisfy the dilatino Killing spinor equation, \eqref{eqn:ksedil},
\begin{equation}
\epsilon_{I}^{\alpha} \gamma_{abc} \left( \gamma^{d} \partial_{d} \phi - \frac{1}{12} H_{def} \gamma^{def} \gamma_{11} - \frac{3 }{8} \textup{e}^{\phi} F^{(2)}_{\;\;\;\;de} \gamma^{de} \gamma_{11} + \frac{1}{96} \textup{e}^{\phi} F^{(4)}_{\;\;\;\;defg} \gamma^{defg} \right)^{(\alpha \beta)} \eta_{J}^{\beta}=0.
\label{eqn:dileqnb2}
\end{equation}
Using the identity\footnote{This identity can be proved by induction on $m$ with $n=1,$ and then by induction on $n.$} \begin{equation}
\gamma_{a_1 \dots a_m} \gamma^{b_1 \dots b_n} = \sum_{k=0}^{\textup{min}(m,n)} c_{mn}^k \gamma_{[a_1 \dots a_{m-k}}^{\qquad \quad \;\; [b_1 \dots b_{n-k}} \delta_{a_{m-k+1}}^{b_{n-k+1}} \dots \delta_{a_m]}^{b_n]},
\label{gammaidentity}
\end{equation}
where $$ c_{mn}^k=(-1)^{kn+\frac{1}{2} k(k+1)} \frac{m! n!}{k!(m-k)!(n-k)!},$$
and \begin{align}
\left( \gamma^{a_1 \dots a_n} \right)^{ \alpha \beta} &= (-1)^{\frac{1}{2}n(n+1)+1} \left( \gamma^{a_1 \dots a_n} \right)^{ \beta \alpha }, \notag \\
\left( \gamma^{a_1 \dots a_n} \gamma_{11} \right)^{ \alpha \beta} &= (-1)^{\frac{1}{2}n(n+3)} \left( \gamma^{a_1 \dots a_n} \gamma_{11} \right)^{ \beta \alpha },
\label{eqn:symgamma}
\end{align}
equation \eqref{eqn:dileqnb2} implies that
\begin{equation*}
\begin{split}
& \bar{\epsilon}_{I} \gamma_{[bc} \eta_{J} \partial_{a]} \phi - \frac{1}{12} H_{de[a} \bar{\epsilon}_{I} \left( 3 \gamma_{bc]}^{\;\;\;\;de} - 2 \delta_{b}^{d} \delta_{c]}^{e} \right) \gamma_{11} \eta_{J} \\
& \qquad \qquad - \frac{1}{8} \textup{e}^{\phi} F^{(2)}_{\;\;\;\;de} \bar{\epsilon}_{I} \left( \gamma_{abc}^{\;\;\;\;\;de} - 6 \gamma_{[a} \delta_{b}^{d} \delta_{c]}^{e} \right) \gamma_{11} \eta_{J} + \frac{1}{24} \textup{e}^{\phi} F^{(4)}_{\;\;\;\;defg} \bar{\epsilon}_{I} \left(\gamma_{[ab}^{\;\;\;\;def} \delta_{c]}^{g} -2 \gamma^{d} \delta_{a}^{e} \delta_{b}^{f} \delta_{c}^{g} \right) \eta_{J}=0.
\end{split}
\end{equation*}
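As an illustrative aside (not part of the derivation), the coefficients $c_{mn}^k$ in the identity \eqref{gammaidentity} can be evaluated directly; a minimal sketch in Python, for the two low-rank products used repeatedly above:

```python
from math import factorial

def c(m, n, k):
    """Coefficient c_{mn}^k in the expansion of gamma_{a1..am} gamma^{b1..bn}."""
    sign = (-1) ** (k * n + k * (k + 1) // 2)
    return sign * factorial(m) * factorial(n) // (
        factorial(k) * factorial(m - k) * factorial(n - k)
    )

# gamma_a gamma^b = gamma_a^b + delta_a^b:
print([c(1, 1, k) for k in range(2)])   # -> [1, 1]
# gamma_{abc} gamma^{de} (m = 3, n = 2):
print([c(3, 2, k) for k in range(3)])   # -> [1, -6, -6]
```

The $m=1,$ $n=1$ case reproduces the familiar Clifford relation $\gamma_{a} \gamma^{b} = \gamma_{a}^{\;\;b} + \delta_{a}^{b}.$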
Similarly,
\begin{equation*}
\begin{split}
& \bar{\epsilon}_{I} \gamma_{[bc} \gamma_{11} \eta_{J} \partial_{a]} \phi - \frac{1}{12} H_{de[a} \bar{\epsilon}_{I} \left( 3 \gamma_{bc]}^{\;\;\;\;de} - 2 \delta_{b}^{d} \delta_{c]}^{e} \right) \eta_{J} - \frac{3}{4} \textup{e}^{\phi} F^{(2)}_{\;\;\;\;de} \bar{\epsilon}_{I} \gamma_{[bc}^{\;\;\;\;d} \delta_{a]}^{e} \eta_{J} \\
& \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad - \frac{1}{288} \textup{e}^{\phi} F^{(4)}_{\;\;\;\;defg} \bar{\epsilon}_{I} \left(\gamma_{abc}^{\;\;\;\;\;defg} -36 \gamma_{[b}^{\;\;d e} \delta_{c}^{f} \delta_{a]}^{g} \right) \gamma_{11} \eta_{J}=0.
\end{split}
\end{equation*}
Substituting the two equations above in equation \eqref{eqn:bian2} and using the gamma matrix identities \eqref{gammaidentity} and \eqref{eqn:symgamma}, equation \eqref{eqn:bian2} becomes
\begin{align}
&\nabla_{[a} F'^{(2)}_{\;\;\;\;bc]} \notag \\
= &-\textup{e}^{-X} \left( F^{(2)}_{\;\;\;\;[bc}+\textup{e}^{-\phi} \bar{\epsilon}_{I} \gamma_{[bc} \left(S_1 + S_2 \gamma_{11} \right) \eta_{J} M_{IJ} \right) \partial_{a]} X + \textup{e}^{-(\phi+X)} \bar{\epsilon}_{I} \gamma_{[bc} \partial_{a]} \left[ \left(S_1 + S_2 \gamma_{11} \right) M_{IJ} \right] \eta_{J} \notag \\
& \quad - \frac{1}{3} \textup{e}^{-(\phi+X)} H_{abc} \bar{\epsilon}_{I} \left( S_1 \gamma_{11}+S_2 \right) \eta_{J} M_{IJ} + \frac{1}{2} \textup{e}^{-X} F^{(2)}_{\;\;\;\;de} \bar{\epsilon}_{I} \left( 2 S_1 \gamma_{[a} \delta_{b}^{d} \delta_{c]}^{e} \gamma_{11} - S_2 \gamma_{[bc}^{\;\;\;\;d} \delta_{a]}^{e} \right) \eta_{J} M_{IJ}\notag \\
& \qquad \quad + \frac{1}{144} \textup{e}^{-X} F^{(4)}_{\;\;\;\;defg} \bar{\epsilon}_{I} \left(48 S_1 \gamma^{d} \delta_{a}^{e} \delta_{b}^{f} \delta_{c}^{g} + S_2 \left( \gamma_{abc}^{\;\;\;\;\;defg} + 36 \gamma_{[a}^{\;\;d e} \delta_{b}^{f} \delta_{c]}^{g} \right) \right) \eta_{J} M_{IJ}. \notag
\label{eqn:bian22}
\end{align}
The expression above must vanish for the transformed Bianchi identity to be satisfied. Since we are considering generic supergravity solutions, by looking at the terms proportional to the RR 2-form field strength we conclude that \begin{equation} \partial_{a} X= S_1 \bar{\epsilon}_{I} \gamma_{a} \gamma_{11} \eta_{J} M_{IJ} \quad \textup{and} \quad S_2 \bar{\epsilon}_{I} \gamma_{abc} \eta_{J} M_{IJ} =0. \label{eqn:bian2con2} \end{equation}
From the terms proportional to the NSNS field strength we get that \begin{equation} \bar{\epsilon}_{I} \left( S_1 \gamma_{11} +S_2 \right) \eta_{J} M_{IJ} =0, \label{eqn:bian2conH} \end{equation} and from the terms involving the RR 4-form field strength we get \begin{equation} S_1 \bar{\epsilon}_{I} \gamma_{a} \eta_{J} M_{IJ}=0 \quad \textup{and} \quad S_2 \bar{\epsilon}_{I} \gamma_{abc} \gamma_{11} \eta_{J} M_{IJ} =0, \label{eqn:bian2con4} \end{equation} using $S_2 \bar{\epsilon}_{I} \gamma_{abc} \eta_{J} M_{IJ} =0,$ from equation \eqref{eqn:bian2con2}. Finally from the remaining terms we have that \begin{equation} \bar{\epsilon}_{I} \gamma_{[bc}\Bigl( \partial_{a]} X \left(S_1 + S_2 \gamma_{11} \right) M_{IJ} - \partial_{a]} \bigl[ \left(S_1 + S_2 \gamma_{11} \right) M_{IJ} \bigr] \Bigr) \eta_{J}=0. \label{eqn:bian2condm} \end{equation}
Using similar techniques to those used above, we can also show that
\begin{align}
\nabla_{[a} F'^{(4)}_{\;\;\;\;bcde]} -2 H_{[abc} F'^{(2)}_{\;\;\;\;de]} &= -\textup{e}^{-X} \left( F^{(4)}_{\;\;\;\;[bcde}+\textup{e}^{-\phi} \bar{\epsilon}_{I} \gamma_{[bcde} \left(S_3 + S_4 \gamma_{11} \right) \eta_{J} M_{IJ} \right) \partial_{a]} X\notag \\
& \;\; - 2 \textup{e}^{-(\phi+X)} H_{[abc} \bar{\epsilon}_{I} \gamma_{de]} \left( \left( S_2+ S_3 \right) \gamma_{11} + \left( S_1 + S_4 \right) \right) \eta_{J} M_{IJ} \notag \\
& \quad + \frac{1}{20} \textup{e}^{-X} F^{(2)}_{\;\;\;\;fg} \bar{\epsilon}_{I} S_3 \left(\gamma_{abcde}^{\quad \;\;\; fg} - 20 \gamma_{[abc} \delta_{d}^{f} \delta_{e]}^{g} \right) \gamma_{11} \eta_{J} M_{IJ} \notag\\
& \;\;\;\quad + \frac{1}{120} \textup{e}^{-X} F^{(4)}_{\;\;\;\;fghi} \bar{\epsilon}_{I} \left(10 S_3 \left( \gamma_{[bcde}^{\quad \;\;fgh} \delta_{a]}^{i} + 12 \gamma_{[ab}^{\;\;\;\;f} \delta_{c}^{g} \delta_{d}^{h} \delta_{e]}^{i} \right) \right. \notag \\
& \qquad \qquad \qquad \qquad \qquad \left. + S_4 \left( \gamma_{abcde}^{\quad \;\;\; fghi} - 120 \gamma_{[a} \delta_{b}^{f} \delta_{c}^{g} \delta_{d}^{h} \delta_{e]}^{i} \right) \gamma_{11} \right) \eta_{J} M_{IJ} \notag \\
& \qquad \qquad + \textup{e}^{-(\phi+X)} \bar{\epsilon}_{I} \gamma_{[bcde} \partial_{a]} \left[ \left(S_3 + S_4 \gamma_{11} \right) M_{IJ} \right] \eta_{J},
\label{eqn:bian4}
\end{align}
\begin{align}
& \nabla^{b} F'^{(2)}_{\;\;\;\;ba} - \frac{1}{6} H^{bcd} F'^{(4)}_{\;\;\;\;abcd} \notag \\
= &-\textup{e}^{-X} \left( F^{(2)}_{\;\;\;\;ba}+\textup{e}^{-\phi} \bar{\epsilon}_{I} \gamma_{ba} \left(S_1 + S_2 \gamma_{11} \right) \eta_{J} M_{IJ} \right) \partial^{b} X + \textup{e}^{-(\phi+X)} \bar{\epsilon}_{I} \gamma_{ba} \partial^{b} \left[ \left(S_1 + S_2 \gamma_{11} \right) M_{IJ} \right] \eta_{J}\notag \\
& \qquad \quad - \frac{1}{6} \textup{e}^{-(\phi+X)} H_{bcd} \bar{\epsilon}_{I}\gamma_{a}^{\;\;bcd} \left( (S_1 + S_4) \gamma_{11}+ (S_2 + S_3) \right) \eta_{J} M_{IJ} \notag \\
& \qquad\qquad \quad + \frac{1}{4} \textup{e}^{-X} F^{(2)}_{\;\;\;\;bc} \bar{\epsilon}_{I} \left( 4 S_1 \gamma^{b} \delta_{a}^{c} \gamma_{11} + S_2 \gamma_{a}^{\;\;bc} \right) \eta_{J} M_{IJ} - \frac{1}{6} \textup{e}^{-X} F^{(4)}_{\;\;\;\;bcda} \bar{\epsilon}_{I} S_2 \gamma^{bcd} \eta_{J} M_{IJ}
\label{eom2}
\end{align}
and
\begin{align}
& \nabla^{d} F'^{(4)}_{\;\;\;\;dabc} - \frac{1}{144} \epsilon_{a b c d_1 \dots d_7} H^{d_1 d_2 d_3} F'^{(4) d_4 \dots d_7} \notag \\
= &-\textup{e}^{-X} \left( F^{(4)}_{\;\;\;\;dabc}+\textup{e}^{-\phi} \bar{\epsilon}_{I} \gamma_{dabc} \left(S_3 + S_4 \gamma_{11} \right) \eta_{J} M_{IJ} \right) \partial^{d} X \notag \\
& \quad + \textup{e}^{-(\phi+X)} \bar{\epsilon}_{I} \gamma_{dabc} \partial^{d} \left[ \left(S_3 + S_4 \gamma_{11} \right) M_{IJ} \right] \eta_{J} + \frac{3}{2} \textup{e}^{-X} F^{(2)}_{\;\;\;\;d[c} \bar{\epsilon}_{I} \left( S_3 \gamma_{ab]}^{\quad d} \gamma_{11} - 2 S_4 \gamma_{a} \delta_{b]}^{d} \right) \eta_{J} M_{IJ} \notag\\
& \qquad - \frac{1}{48} \textup{e}^{-X} F^{(4)}_{\;\;\;\;defg} \bar{\epsilon}_{I} \left( S_3 \left( \gamma_{abc}^{\quad \;defg} + 36 \gamma_{[a}^{\;\;\;de} \delta_{b}^{f} \delta_{c]}^{g} \right) + 48 S_4 \gamma^{d} \delta_{a}^{e} \delta_{b}^{f} \delta_{c}^{g} \gamma_{11} \right) \eta_{J} M_{IJ}.
\label{eom4}
\end{align}
For the transformed equation of motion for the RR 4-form field to hold, from equation \eqref{eom4}, the following must be satisfied: \begin{gather}
\partial_{a} X= -S_4 \bar{\epsilon}_{I} \gamma_{a} \gamma_{11} \eta_{J} M_{IJ}, \label{eom4condX} \\
S_4 \bar{\epsilon}_{I} \gamma_{a} \eta_{J} M_{IJ} =0, \label{eom4conds41gamma} \\
S_3 \bar{\epsilon}_{I} \gamma_{abc} \eta_{J} M_{IJ} =0, \qquad S_3 \bar{\epsilon}_{I} \gamma_{abc} \gamma_{11} \eta_{J} M_{IJ} =0, \label{eom4conds3gamma} \\
\bar{\epsilon}_{I} \gamma_{abcd}\Bigl( \partial^{d} X \left(S_3 + S_4 \gamma_{11} \right) M_{IJ} - \partial^{d} \bigl[ \left(S_3 + S_4 \gamma_{11} \right) M_{IJ} \bigr] \Bigr) \eta_{J}=0. \label{eom4con}
\end{gather}
Note that if, for example, $$ \bar{\epsilon}_{I} \gamma_{abc} \gamma_{11} \eta_{J} M_{IJ} =0, $$ then $$\bar{\epsilon}_{I} \gamma_{a_1 \dots a_7} \eta_{J} M_{IJ} =0,$$ for
\begin{equation}
\gamma_{a_1 \dots a_m}= -\frac{(-1)^{\frac{1}{2}(10-m)(10-m+1)}}{(10-m)!} \epsilon_{a_1 \dots a_m b_1 \dots b_{10-m}} \gamma^{b_1 \dots b_{10-m}} \gamma_{11},
\label{eqn:gammadual}
\end{equation}
which is proved in Appendix \ref{gammadual}.
Now, if equations \eqref{eqn:bian2con2}, (\ref{eom4condX}--\ref{eom4conds3gamma}) hold, then the expressions in equations \eqref{eqn:bian4} and \eqref{eom2} vanish only if
\begin{gather}
(S_1 +S_4) \bar{\epsilon}_{I} \gamma_{ab} \eta_{J} M_{IJ}+(S_2 +S_3) \bar{\epsilon}_{I} \gamma_{ab} \gamma_{11} \eta_{J} M_{IJ}=0, \\
\bar{\epsilon}_{I} \gamma_{[bcde}\Bigl( \partial_{a]} X \left(S_3 + S_4 \gamma_{11} \right) M_{IJ} - \partial_{a]} \bigl[ \left(S_3 + S_4 \gamma_{11} \right) M_{IJ} \bigr] \Bigr) \eta_{J}=0
\label{eqn:bian4con2}
\end{gather}
and
\begin{gather}
(S_1 +S_4) \bar{\epsilon}_{I} \gamma_{abcd} \gamma_{11} \eta_{J} M_{IJ}+(S_2 +S_3) \bar{\epsilon}_{I} \gamma_{abcd} \eta_{J} M_{IJ}=0, \\
\bar{\epsilon}_{I} \gamma_{ab}\Bigl( \partial^{b} X \left(S_1 + S_2 \gamma_{11} \right) M_{IJ} - \partial^{b} \bigl[ \left(S_1 + S_2 \gamma_{11} \right) M_{IJ} \bigr] \Bigr) \eta_{J}=0, \label{eom2con}
\end{gather}
respectively.
In summary, the Killing spinors and functions that describe the transformation must satisfy
\begin{gather}
\partial_{a} X= S_1 \bar{\epsilon}_{I} \gamma_{a} \gamma_{11} \eta_{J} M_{IJ}, \qquad (S_1 +S_4) \bar{\epsilon}_{I} \gamma_{a} \gamma_{11} \eta_{J} M_{IJ}=0, \notag \\
\bar{\epsilon}_{I} \left( S_1 \gamma_{11} +S_2 \right) \eta_{J} M_{IJ} =0, \notag \\
S_{1} \bar{\epsilon}_{I} \gamma_{a} \eta_{J} M_{IJ}=0, \qquad S_{4} \bar{\epsilon}_{I} \gamma_{a} \eta_{J} M_{IJ}=0, \notag \\
S_2 \bar{\epsilon}_{I} \gamma_{abc} \eta_{J} M_{IJ} =0, \qquad S_2 \bar{\epsilon}_{I} \gamma_{abc} \gamma_{11} \eta_{J} M_{IJ} =0, \notag \\
S_3 \bar{\epsilon}_{I} \gamma_{abc} \eta_{J} M_{IJ} =0, \qquad S_3 \bar{\epsilon}_{I} \gamma_{abc} \gamma_{11} \eta_{J} M_{IJ} =0, \notag \\
(S_1 +S_4) \bar{\epsilon}_{I} \gamma_{ab} \eta_{J} M_{IJ}+ (S_2 +S_3) \bar{\epsilon}_{I} \gamma_{ab} \gamma_{11} \eta_{J} M_{IJ}=0, \notag \\
(S_1 +S_4) \bar{\epsilon}_{I} \gamma_{abcd} \gamma_{11} \eta_{J} M_{IJ}+ (S_2 +S_3) \bar{\epsilon}_{I} \gamma_{abcd} \eta_{J} M_{IJ}=0, \notag \\
\bar{\epsilon}_{I} \gamma_{[bc}\Bigl( \partial_{a]} X \left(S_1 + S_2 \gamma_{11} \right) M_{IJ} - \partial_{a]} \bigl[ \left(S_1 + S_2 \gamma_{11} \right) M_{IJ} \bigr] \Bigr) \eta_{J}=0,\notag \\
\bar{\epsilon}_{I} \gamma_{ab}\Bigl( \partial^{b} X \left(S_1 + S_2 \gamma_{11} \right) M_{IJ} - \partial^{b} \bigl[ \left(S_1 + S_2 \gamma_{11} \right) M_{IJ} \bigr] \Bigr) \eta_{J}=0, \notag \\
\bar{\epsilon}_{I} \gamma_{[bcde}\Bigl( \partial_{a]} X \left(S_3 + S_4 \gamma_{11} \right) M_{IJ} - \partial_{a]} \bigl[ \left(S_3 + S_4 \gamma_{11} \right) M_{IJ} \bigr] \Bigr) \eta_{J}=0, \notag \\
\bar{\epsilon}_{I} \gamma_{abcd}\Bigl( \partial^{d} X \left(S_3 + S_4 \gamma_{11} \right) M_{IJ} - \partial^{d} \bigl[ \left(S_3 + S_4 \gamma_{11} \right) M_{IJ} \bigr] \Bigr) \eta_{J}=0
\label{RRcon}
\end{gather}
in order for the Bianchi identities and equations of motion for the transformed RR fields to be satisfied.
Let us consider the NSNS 3-form equations. The Bianchi identity for the NSNS 3-form field is invariant under the transformation, so we do not need to consider it. However, using the Killing spinor equations and equations \eqref{RRcon}, the equation of motion for the NSNS 3-form, \eqref{eomHA}, reduces to
\begin{align}
& \textup{e}^{-(\phi + 2 X)} F_{cd} \bar{\epsilon}_{I} \left( \left(3 S_{3} -S_{4} \gamma_{11} \right)\delta_{a}^{c} \delta_{b}^{d} - S_{3} \gamma_{ab}^{\;\;\;cd}\right) \eta_{J} M_{IJ} \notag \\
& \qquad - \textup{e}^{-(\phi + 2 X)}S_{3} F_{abcd} \bar{\epsilon}_{I} \gamma^{cd} \gamma_{11} \eta_{J} M_{IJ} - 4 \textup{e}^{-2(\phi + X)} \bar{\epsilon}_{I} \gamma_{[a} \eta_{J} \partial_{b]} \left(S_4 M_{IJ}\right) \notag \\
& \qquad\qquad + \frac{1}{2} \textup{e}^{-2(\phi + X)} \left( \bar{\epsilon}_{I} \gamma_{abcd} \left(S_3 + S_4 \gamma_{11} \right) \eta_{J} \right) \left( \bar{\epsilon}_{K} \gamma^{cd} \left(S_1 + S_2 \gamma_{11}\right) \eta_{L} \right) M_{IJ} M_{KL} \notag \\
& \qquad \qquad \qquad \;\; - \frac{1}{48} \textup{e}^{-2(\phi + X)} \left( \bar{\epsilon}_{I} \gamma_{cdef} \left(S_3 + S_4 \gamma_{11} \right) \eta_{J} \right) ( \bar{\epsilon}_{K} \gamma_{ab}^{\;\;\;cdef} \left(S_3 \gamma_{11} + S_4 \right) \eta_{L} ) M_{IJ} M_{KL}=0.
\label{eqn:transHeqnA}
\end{align}
The supergravity fields we are considering are generic, so, in particular, from the term proportional to the RR 2-form field strength, we must have that
$$S_{3} \bar{\epsilon}_{I} \gamma_{abcd} \eta_{J} M_{IJ}=0,$$
which implies that $S_{3} =0,$ for this is precisely the combination that enters in the transformation of the 4-form RR field strength. Furthermore, since $S_3=0,$ from $$(S_1 +S_4) \bar{\epsilon}_{I} \gamma_{ab} \eta_{J} M_{IJ}+ (S_2 +S_3) \bar{\epsilon}_{I} \gamma_{ab} \gamma_{11} \eta_{J} M_{IJ}=0$$ we get that $$ S_2 \bar{\epsilon}_{I} \gamma_{ab} \gamma_{11} \eta_{J} M_{IJ} \propto \bar{\epsilon}_{I} \gamma_{ab} \eta_{J} M_{IJ},$$ hence without loss of generality we can set $S_2=0.$
Since $S_2$ and $S_3$ vanish, at least one of $$\bar{\epsilon}_{I} \gamma_{ab} \eta_{J} M_{IJ}, \qquad \bar{\epsilon}_{I} \gamma_{abcd} \gamma_{11} \eta_{J} M_{IJ}$$ must be non-zero in order for the transformation to be non-trivial. Therefore, using the equations
\begin{gather*}
(S_1 +S_4) \bar{\epsilon}_{I} \gamma_{ab} \eta_{J} M_{IJ}=0,\qquad (S_1 +S_4) \bar{\epsilon}_{I} \gamma_{abcd} \gamma_{11} \eta_{J} M_{IJ}=0,
\end{gather*}
from the set of equations \eqref{RRcon} with $S_2=S_3=0,$ we deduce that $$S_4=-S_1.$$ Without loss of generality we can let $S_1=1.$
Furthermore, from the first term in equation \eqref{eqn:transHeqnA}, the spinors must satisfy
\begin{equation}
\bar{\epsilon}_{I}\gamma_{11} \eta_{J} M_{IJ}=0.
\label{eqn:gamma11con}
\end{equation}
The last two terms in equation \eqref{eqn:transHeqnA} are quartic in spinors and they can be simplified using Fierz identities.
The Fierz identity for commuting spinors $\lambda, \chi, \psi, \varphi$ in $d$ dimensions is
\[
\left( \bar{\lambda} M \chi \right) \left( \bar{\psi} N \varphi \right) = 2^{-[d/2]} \sum_{I} \left( \bar{\lambda} M \mathcal{O}^I N \varphi \right) \left( \bar{\psi} \mathcal{O}_I \chi \right),
\]
where $M,N$ are arbitrary combinations of gamma matrices, $$\{\mathcal{O}_I\}= \{ \mathbb{I}, \gamma_a, i \gamma_{ab}, i \gamma_{abc}, \gamma_{abcd}, \dots\}$$ forms a basis for the $2^{[d/2]} \times 2^{[d/2]} $ matrices, and $$\{\mathcal{O}^I\}= \{ \mathbb{I}, \gamma^a, i \gamma^{ab}, i \gamma^{abc}, \gamma^{abcd}, \dots\}$$ is the dual basis.
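As a consistency check on the counting (illustrative only), in $d=10$ the antisymmetrized products $\gamma^{a_1 \dots a_k},$ $k=0, \dots, 10,$ number $\sum_{k} \binom{10}{k} = 2^{10} = 1024 = 32^{2},$ which is exactly the dimension of the space of $2^{[d/2]} \times 2^{[d/2]}$ matrices and is the origin of the prefactor $2^{-[d/2]}$:

```python
from math import comb

d = 10
dim_spinor = 2 ** (d // 2)                       # 2^{[d/2]} = 32
n_basis = sum(comb(d, k) for k in range(d + 1))  # number of gamma^{(k)} matrices
print(dim_spinor, n_basis)                       # -> 32 1024
assert n_basis == dim_spinor ** 2                # basis spans all 32 x 32 matrices
```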
Using Fierz identities, equation \eqref{eqn:transHeqnA} with
\begin{gather*}
S_1=-S_4=1, \quad S_2=S_3=0, \\
\bar{\epsilon}_{I}\gamma_{11} \eta_{J} M_{IJ}=0
\end{gather*}
becomes
\begin{align*}
& 4 \bar{\epsilon}_{I} \gamma_{[a} \eta_{J} \partial_{b]} M_{IJ} - \biggl( 16 \left( \bar{\epsilon}_{I} \gamma_{[a} \gamma_{11} \eta_{J} \right) \left( \bar{\epsilon}_{K} \gamma_{b]} \eta_{L} \right) - \left( \bar{\epsilon}_{I} \gamma_{ab} \eta_{J} \right) \left( \bar{\epsilon}_{K} \gamma_{11} \eta_{L} \right) + \left( \bar{\epsilon}_{I} \gamma_{ab} \gamma_{11} \eta_{J} \right) \left( \bar{\epsilon}_{K} \eta_{L} \right)\biggr. \notag \\
& \qquad \qquad\quad +\frac{1}{2} \left( \bar{\epsilon}_{I} \gamma_{abcd} \eta_{J} \right) \left( \bar{\epsilon}_{K} \gamma^{cd} \gamma_{11} \eta_{L} \right)+\frac{1}{48} \left( \bar{\epsilon}_{I} \gamma_{abcdef} \gamma_{11} \eta_{J} \right) \left( \bar{\epsilon}_{K} \gamma^{cdef} \eta_{L} \right) \biggr) M_{IJ} M_{KL}=0.
\end{align*}
We can use equation \eqref{eqn:gamma11con} again to simplify the above equation to
\begin{align}
& 4 \bar{\epsilon}_{I} \gamma_{[a} \eta_{J} \partial_{b]} M_{IJ} - \biggl( 16 \left( \bar{\epsilon}_{I} \gamma_{[a} \gamma_{11} \eta_{J} \right) \left( \bar{\epsilon}_{K} \gamma_{b]} \eta_{L} \right) + \left( \bar{\epsilon}_{I} \gamma_{ab} \gamma_{11} \eta_{J} \right) \left( \bar{\epsilon}_{K} \eta_{L} \right)\biggr. \notag \\
& \qquad \qquad \qquad \left. +\frac{1}{2} \left( \bar{\epsilon}_{I} \gamma_{abcd} \eta_{J} \right) \left( \bar{\epsilon}_{K} \gamma^{cd} \gamma_{11} \eta_{L} \right) +\frac{1}{48} \left( \bar{\epsilon}_{I} \gamma_{abcdef} \gamma_{11} \eta_{J} \right) \left( \bar{\epsilon}_{K} \gamma^{cdef} \eta_{L} \right) \right) M_{IJ} M_{KL}=0.
\label{Hcon}
\end{align}
So far, with only the dilaton and Einstein equations left to consider, we have the following conditions on the Killing spinors and the functions in the transformation of the fields:
\begin{gather}
S_1=-S_4=1, \label{eqn:S11con} \\
S_3=S_2=0, \label{eqn:S20con} \\
\partial_{a} X= \bar{\epsilon}_{I} \gamma_{a} \gamma_{11} \eta_{J} M_{IJ}, \label{eqn:dXcon} \\
\bar{\epsilon}_{I} \gamma_{11} \eta_{J} M_{IJ} =0, \label{eqn:gamma11con2} \\
\bar{\epsilon}_{I} \gamma_{a} \eta_{J} M_{IJ}=0, \label{eqn:gamma10con} \\
\bar{\epsilon}_{I} \gamma_{[bc} \eta_{J} \left( \partial_{a]} X M_{IJ} - \partial_{a]} M_{IJ} \right) =0, \label{eqn:quarb2con} \\
\bar{\epsilon}_{I} \gamma_{ab} \eta_{J} \left( \partial^{b} X M_{IJ} - \partial^{b} M_{IJ} \right)=0, \label{eqn:quare2con} \\
\bar{\epsilon}_{I} \gamma_{[bcde} \gamma_{11} \eta_{J} \Bigl( \partial_{a]} X M_{IJ} - \partial_{a]} M_{IJ} \Bigr) =0, \label{eqn:quarb4con} \\
\bar{\epsilon}_{I} \gamma_{abcd} \gamma_{11} \eta_{J} \left( \partial^{d} X M_{IJ} - \partial^{d} M_{IJ} \right)=0 \label{eqn:quare4con}
\end{gather}
and equation \eqref{Hcon}.
The dilaton equation for the transformed fields is
\begin{align*}
R + 4 \Box \phi' - 4 \left(\partial \phi' \right)^{2} - \frac{1}{12} H^{2}=0,
\end{align*}
which using the dilaton equation for the original fields implies that
\begin{equation}
\Box X= 2 \partial_{a} \phi \partial^{a} X + \partial_{a} X \partial^{a} X.
\label{dilcon}
\end{equation}
Using equation \eqref{eqn:dXcon} and the Killing spinor equation from the variation of the gravitino,
\begin{align*}
\Box X &= \nabla_{a} \left( \bar{\epsilon}_{I} \gamma^{a} \gamma_{11} \eta_{J} M_{IJ} \right) \\
&= - \frac{3}{4} \textup{e}^{\phi} F_{bc} \bar{\epsilon}_{I} \gamma^{bc} \eta_{J} M_{IJ} + \frac{1}{48} \textup{e}^{\phi} F_{bcde} \bar{\epsilon}_{I} \gamma^{bcde} \gamma_{11} \eta_{J} M_{IJ} + \bar{\epsilon}_{I} \gamma^{a} \gamma_{11} \eta_{J} \partial_{a} M_{IJ}.
\end{align*}
However, since $\epsilon_{I}$ and $\eta_{I},$ for all $I= 1, \dots, n,$ satisfy the dilatino Killing spinor equation, \eqref{eqn:ksedil},
\begin{equation*}
\begin{split} &\epsilon_{I}^{\alpha} \left( \gamma_{11} \left( \gamma^{a} \partial_{a} \phi - \frac{1}{12} H_{abc} \gamma^{abc} \gamma_{11} - \frac{3}{8} \textup{e}^{\phi} F^{(2)}_{\;\;\;\;ab} \gamma^{ab} \gamma_{11} + \frac{1}{96} \textup{e}^{\phi} F^{(4)} _{\;\;\;\; abcd} \gamma^{abcd} \right) \right)_{(\alpha \beta)} \eta_{J}^{\beta} =0,
\end{split}
\end{equation*}
hence, using equation \eqref{eqn:symgamma},
$$- \frac{3}{4} \textup{e}^{\phi} F_{bc} \bar{\epsilon}_{I} \gamma^{bc} \eta_{J} M_{IJ} + \frac{1}{48} \textup{e}^{\phi} F_{bcde} \bar{\epsilon}_{I} \gamma^{bcde} \gamma_{11} \eta_{J} M_{IJ} = 2 \bar{\epsilon}_{I} \gamma^{a} \gamma_{11} \eta_{J} \partial_{a} \phi M_{IJ}.$$
Therefore,
\begin{align*}
\Box X &= 2 \bar{\epsilon}_{I} \gamma^{a} \gamma_{11} \eta_{J} \partial_{a} \phi M_{IJ} + \bar{\epsilon}_{I} \gamma^{a} \gamma_{11} \eta_{J} \partial_{a} M_{IJ}.
\end{align*}
So, from equation \eqref{dilcon}, the transformed dilaton equation is satisfied if
\begin{equation}
\bar{\epsilon}_{I} \gamma^{a} \gamma_{11} \eta_{J} \left( \bar{\epsilon}_{K} \gamma_{a} \gamma_{11} \eta_{L} M_{IJ} M_{KL} - \partial_{a} M_{IJ} \right) =0.
\label{dilconquar}
\end{equation}
Finally, we have to find conditions on the spinors and functions in the transformation in order for the Einstein equation to be satisfied for the transformed fields. Using equations \eqref{eqn:S11con}, \eqref{eqn:S20con} and \eqref{dilcon} and the Einstein equation for the original fields, Einstein's equation becomes
\begin{align}
&\frac{1}{4} g_{ab} \Box X -2 \nabla_{a} \nabla_{b} X + \textup{e}^{\phi} \left( F^{(2)\;\;\,c}_{\;\;\;\;(a} \bar{\epsilon}_{I} \gamma_{b)c} \eta_{J} -\frac{1}{16} g_{ab} F^{(2)}_{\;\;\;\;cd} \bar{\epsilon}_{I} \gamma^{cd} \eta_{J} \right) M_{IJ} - \frac{1}{6} \textup{e}^{\phi} \left( F^{(4)\;\;\,cde}_{\;\;\;\;(a} \bar{\epsilon}_{I} \gamma_{b)cde} \gamma_{11} \eta_{J} \right. \notag \\
& \; \left.- \frac{3}{32} g_{ab} F^{(4)}_{\;\;\;\;cdef} \bar{\epsilon}_{I} \gamma^{cdef} \gamma_{11} \eta_{J} \right) M_{IJ} + \frac{1}{2} \left( \left(\bar{\epsilon}_{I} \gamma_{ac} \eta_{J} \right) \left( \bar{\epsilon}_{K} \gamma_{b}^{\;\;c} \eta_{L} \right) -\frac{1}{16} g_{ab} \left(\bar{\epsilon}_{I} \gamma_{cd} \eta_{J} \right) \left(\bar{\epsilon}_{K} \gamma^{cd} \eta_{L} \right) \right) M_{IJ} M_{KL}\notag \\
& \qquad + \frac{1}{12} \bigg( \left(\bar{\epsilon}_{I} \gamma_{acde} \gamma_{11} \eta_{J} \right) \left( \bar{\epsilon}_{K} \gamma_{b}^{\;\;cde} \gamma_{11} \eta_{L} \right) -\frac{3}{32} g_{ab} \left(\bar{\epsilon}_{I} \gamma_{cdef} \gamma_{11} \eta_{J} \right) \left(\bar{\epsilon}_{K} \gamma^{cdef} \gamma_{11} \eta_{L} \right) \bigg) M_{IJ} M_{KL} =0.
\label{eqn:einst}
\end{align}
Now, consider
\begin{equation*}
\begin{split} &\bar{\epsilon}_{I} \left[ \gamma_{(a|} \gamma_{11} \left( \nabla_{|b)} - \frac{1}{8} H_{|b)cd} \gamma^{cd} \gamma_{11} - \frac{1}{16} \textup{e}^{\phi} F^{(2)}_{\;\;\;\;cd} \gamma^{cd} \gamma_{|b)} \gamma_{11} + \frac{1}{192} \textup{e}^{\phi} F^{(4)} _{\;\;\;\;cdef} \gamma^{cdef} \gamma_{|b)} \right) \right] \eta_{J} =0.
\end{split}
\end{equation*}
Adding this to the same expression, but with $\epsilon_{I}$ and $\eta_{J}$ interchanged, we get
\begin{equation*}
\begin{split}
& \bar{\epsilon}_{I} \gamma_{(a} \gamma_{11} \nabla_{b)} \eta_{J} + \bar{\eta}_{J} \gamma_{(a} \gamma_{11} \nabla_{b)} \epsilon_{I} + \frac{1}{8} \textup{e}^{\phi} F^{(2)}_{\;\;\;\;cd} \bar{\epsilon}_{I} \left( 4 \gamma_{(a}^{\;\;c} \delta_{b)}^{d} + g_{ab} \gamma^{cd} \right) \eta_{J} \\
& \qquad \qquad\qquad\qquad \qquad\qquad\quad\qquad \qquad \qquad- \frac{1}{96} \textup{e}^{\phi} F^{(4)}_{\;\;\;\;cdef} \bar{\epsilon}_{I} \left( 8 \gamma_{(a}^{\;\;cde} \delta_{b)}^{f} + g_{ab} \gamma^{cdef} \right) \gamma_{11} \eta_{J} =0,
\end{split}
\end{equation*}
using equations \eqref{gammaidentity} and \eqref{eqn:symgamma}. This equation, together with its contraction over the two free indices, can be used to reduce equation \eqref{eqn:einst} to
\begin{align}
&\frac{1}{4} g_{ab} \Box X -2 \nabla_{a} \nabla_{b} X + 2 \nabla_{(a} \left( \bar{\epsilon}_{I} \gamma_{b)} \gamma_{11} \eta_{J} \right) M_{IJ} - \frac{1}{4} g_{ab} \nabla_{c} \left( \bar{\epsilon}_{I} \gamma^{c} \gamma_{11} \eta_{J} \right) M_{IJ} \notag \\
& \quad + \frac{1}{2} \left( \left(\bar{\epsilon}_{I} \gamma_{ac} \eta_{J} \right) \left( \bar{\epsilon}_{K} \gamma_{b}^{\;\;c} \eta_{L} \right) -\frac{1}{16} g_{ab} \left(\bar{\epsilon}_{I} \gamma_{cd} \eta_{J} \right) \left(\bar{\epsilon}_{K} \gamma^{cd} \eta_{L} \right) \right) M_{IJ} M_{KL} \notag \\
& \qquad + \frac{1}{12} \bigg( \left(\bar{\epsilon}_{I} \gamma_{acde} \gamma_{11} \eta_{J} \right) \left( \bar{\epsilon}_{K} \gamma_{b}^{\;\;cde} \gamma_{11} \eta_{L} \right) -\frac{3}{32} g_{ab} \left(\bar{\epsilon}_{I} \gamma_{cdef} \gamma_{11} \eta_{J} \right) \left(\bar{\epsilon}_{K} \gamma^{cdef} \gamma_{11} \eta_{L} \right) \biggr) M_{IJ} M_{KL} =0.
\label{eqn:einst2}
\end{align}
Using equation \eqref{eqn:dXcon} to simplify the terms on the first line, Fierz identities and equations (\ref{eqn:gamma11con2}) and (\ref{eqn:gamma10con}), it can be shown that equation \eqref{eqn:einst2} is
\begin{align}
& - 2 \bar{\epsilon}_{I} \gamma_{(a} \gamma_{11} \eta_{J} \left( \partial_{b)} M_{IJ} + 2 \bar{\epsilon}_{K} \gamma_{b)} \gamma_{11} \eta_{L} M_{IL} M_{KJ} \right) + \frac{1}{4} g_{ab} \bar{\epsilon}_{I} \gamma^{c} \gamma_{11} \eta_{J} \left( \partial_{c} M_{IJ} + 2 \bar{\epsilon}_{K} \gamma_{c} \gamma_{11} \eta_{L} M_{IL} M_{KJ} \right)\notag\\
& \qquad +\biggl( 4 \left( \bar{\epsilon}_{I} \gamma_{a} \eta_{L} \right) \left(\bar{\epsilon}_{K} \gamma_{b} \eta_{J} \right) + \frac{1}{2} \left( \bar{\epsilon}_{I} \gamma_{a}^{\;\;c} \gamma_{11} \eta_{J} \right) \left(\bar{\epsilon}_{K} \gamma_{bc} \gamma_{11} \eta_{L} \right) + \frac{1}{12} \left( \bar{\epsilon}_{I} \gamma_{a}^{\;\;cde} \eta_{J} \right) \left(\bar{\epsilon}_{K} \gamma_{bcde} \eta_{L} \right)\notag \\
& \qquad \qquad + \frac{1}{16} g_{ab} \left( \bar{\epsilon}_{I} \eta_{J} \right) \left(\bar{\epsilon}_{K} \eta_{L} \right) - \frac{1}{2} g_{ab} \left( \bar{\epsilon}_{I} \gamma_{c} \eta_{L} \right) \left(\bar{\epsilon}_{K} \gamma^{c} \eta_{J} \right) - \frac{1}{32} g_{ab} \left( \bar{\epsilon}_{I} \gamma_{cd} \gamma_{11} \eta_{J} \right) \left(\bar{\epsilon}_{K} \gamma^{cd} \gamma_{11} \eta_{L} \right)\notag\\
& \qquad \qquad \qquad \qquad \qquad \qquad \qquad \Bigl. -\frac{1}{128} g_{ab} \left( \bar{\epsilon}_{I} \gamma_{cdef} \eta_{J} \right) \left(\bar{\epsilon}_{K} \gamma^{cdef} \eta_{L} \right) \biggr) M_{IJ} M_{KL}=0.
\label{eqn:quarEcon}
\end{align}
This is a quartic condition on the spinors. Moreover, from equations \eqref{Hcon}, (\ref{eqn:quarb2con}--\ref{eqn:quare4con}) and equation \eqref{dilconquar} we also have that the spinors must satisfy
\begin{align}
& 4\bar{\epsilon}_{I} \gamma_{[a} \eta_{J} \left( \partial_{b]} M_{IJ} - 4 \bar{\epsilon}_{K} \gamma_{b]} \gamma_{11} \eta_{L} M_{IL} M_{KJ} \right) - \biggl( \left( \bar{\epsilon}_{I} \gamma_{ab} \gamma_{11} \eta_{J} \right) \left( \bar{\epsilon}_{K} \eta_{L} \right) \biggr. \notag \\
& \qquad \quad\qquad \left. +\frac{1}{2} \left( \bar{\epsilon}_{I} \gamma_{abcd} \eta_{J} \right) \left( \bar{\epsilon}_{K} \gamma^{cd} \gamma_{11} \eta_{L} \right) +\frac{1}{48} \left( \bar{\epsilon}_{I} \gamma_{abcdef} \gamma_{11} \eta_{J} \right) \left( \bar{\epsilon}_{K} \gamma^{cdef} \eta_{L} \right) \right) M_{IJ} M_{KL}=0,
\label{Hcon2}
\end{align}
\begin{align}
& \bar{\epsilon}_{I} \gamma_{[ab} \eta_{J} \left( \partial_{c]} M_{IJ} + 2 \bar{\epsilon}_{K} \gamma_{c]} \gamma_{11} \eta_{L} M_{IL} M_{KJ} \right) + \biggl( \frac{1}{2} \left( \bar{\epsilon}_{I} \gamma_{[ab}^{\;\;\;\;d} \eta_{J} \right) \left( \bar{\epsilon}_{K} \gamma_{c]d} \gamma_{11} \eta_{L} \right)\notag \\
& \qquad \quad + \frac{1}{6} \left( \bar{\epsilon}_{I} \gamma_{abc} \gamma_{11} \eta_{J} \right) \left( \bar{\epsilon}_{K} \eta_{L} \right) - \frac{1}{4} \left( \bar{\epsilon}_{I} \gamma_{[ab}^{\;\;\;\;de} \eta_{J} \right) \left( \bar{\epsilon}_{K} \gamma_{c]de} \gamma_{11} \eta_{L} \right) + \frac{1}{3} \left( \bar{\epsilon}_{I} \gamma_{abcd} \gamma_{11} \eta_{J} \right) \left( \bar{\epsilon}_{K} \gamma^{d} \eta_{L} \right)\notag\\
& \qquad \qquad \qquad +\frac{2}{3} \left( \bar{\epsilon}_{I} \gamma_{abcd} \gamma_{11} \eta_{L} \right) \left( \bar{\epsilon}_{K} \gamma^{d} \eta_{J} \right) \left. - \frac{1}{36} \left( \bar{\epsilon}_{I} \gamma_{abcdef} \gamma_{11} \eta_{J} \right) \left( \bar{\epsilon}_{K} \gamma^{def} \eta_{L} \right) \right) M_{IJ} M_{KL} =0, \label{eqn:quarb2con2}
\end{align}
\begin{align}
& \bar{\epsilon}_{I} \gamma_{ab} \eta_{J} \left( \partial^{b} M_{IJ} + 2 \bar{\epsilon}_{K} \gamma^{b} \gamma_{11} \eta_{L} M_{IL} M_{KJ} \right) + \biggl( \left( \bar{\epsilon}_{I} \gamma_{a} \eta_{J} \right) \left( \bar{\epsilon}_{K} \gamma_{11} \eta_{L} \right) + 2 \left( \bar{\epsilon}_{I} \gamma_{a} \eta_{L} \right) \left( \bar{\epsilon}_{K} \gamma_{11} \eta_{J} \right)\notag \\
& \qquad \qquad\qquad \qquad -\frac{3}{4} \left( \bar{\epsilon}_{I} \gamma_{abc} \eta_{J} \right) \left( \bar{\epsilon}_{K} \gamma^{bc} \gamma_{11} \eta_{L} \right) \left. - \frac{1}{12} \left( \bar{\epsilon}_{I} \gamma_{abcd} \eta_{J} \right) \left( \bar{\epsilon}_{K} \gamma^{bcd} \gamma_{11} \eta_{L} \right) \right) M_{IJ} M_{KL} =0, \label{eqn:quare2con2}
\end{align}
\begin{align}
& \bar{\epsilon}_{I} \gamma_{[abcd} \gamma_{11} \eta_{J} \left( \partial_{e]} M_{IJ} + 2 \bar{\epsilon}_{K} \gamma_{e]} \gamma_{11} \eta_{L} M_{IL} M_{KJ} \right)+ \biggl( \left( \bar{\epsilon}_{I} \gamma_{[abc} \gamma_{11} \eta_{J} \right) \left( \bar{\epsilon}_{K} \gamma_{de]} \gamma_{11} \eta_{L} \right) \notag \\
& \quad + \left( \bar{\epsilon}_{I} \gamma_{[abc}^{\;\;\;\;\;\;f} \eta_{J} \right) \left( \bar{\epsilon}_{K} \gamma_{de]f} \eta_{L} \right) + \frac{1}{5} \left( \bar{\epsilon}_{I} \gamma_{abcdef} \eta_{J} \right) \left( \bar{\epsilon}_{K} \gamma^{f} \eta_{L} \right) + \frac{2}{5} \left( \bar{\epsilon}_{I} \gamma_{abcdef} \eta_{L} \right) \left( \bar{\epsilon}_{K} \gamma^{f} \eta_{J} \right)\notag\\
& \qquad \quad\left. -\frac{1}{4} \left( \bar{\epsilon}_{I} \gamma_{[abcd}^{\quad \;\;\; fg} \gamma_{11} \eta_{J} \right) \left( \bar{\epsilon}_{K} \gamma_{e]fg} \gamma_{11} \eta_{L} \right) + \frac{1}{20} \left( \bar{\epsilon}_{I} \gamma_{abcdefg} \gamma_{11} \eta_{J} \right) \left( \bar{\epsilon}_{K} \gamma^{fg} \gamma_{11} \eta_{L} \right) \right) M_{IJ} M_{KL} =0,
\label{eqn:quarb4con2}
\end{align}
\begin{align}
& \bar{\epsilon}_{I} \gamma_{abcd} \gamma_{11} \eta_{J} \left( \partial^{d} M_{IJ} + 2 \bar{\epsilon}_{K} \gamma^{d} \gamma_{11} \eta_{L} M_{IL} M_{KJ} \right) + \biggl(3 \left( \bar{\epsilon}_{I} \gamma_{[ab} \eta_{J} \right) \left( \bar{\epsilon}_{K} \gamma_{c]} \eta_{L} \right) \notag \\
& \qquad + 6 \left( \bar{\epsilon}_{I} \gamma_{[ab} \eta_{L} \right) \left( \bar{\epsilon}_{K} \gamma_{c]} \eta_{J} \right) + \frac{1}{2} \left( \bar{\epsilon}_{I} \gamma_{abc} \eta_{J} \right) \left( \bar{\epsilon}_{K} \eta_{L} \right) + \frac{3}{2} \left( \bar{\epsilon}_{I} \gamma_{[ab}^{\;\;\;\;\;d} \gamma_{11} \eta_{J} \right) \left( \bar{\epsilon}_{K} \gamma_{c]d} \gamma_{11} \eta_{L} \right) \notag\\
& \qquad \qquad \left. - \frac{3}{4} \left( \bar{\epsilon}_{I} \gamma_{[ab}^{\;\;\;\;de} \eta_{J} \right) \left( \bar{\epsilon}_{K} \gamma_{c]de} \eta_{L} \right) - \frac{1}{12} \left( \bar{\epsilon}_{I} \gamma_{abcdef} \gamma_{11} \eta_{J} \right) \left( \bar{\epsilon}_{K} \gamma^{def} \gamma_{11} \eta_{L} \right) \right) M_{IJ} M_{KL} =0,
\label{eqn:quare4con2}
\end{align}
\begin{align}
& \bar{\epsilon}_{I} \gamma^{a} \gamma_{11} \eta_{J} \left( \partial_{a} M_{IJ} + 2 \bar{\epsilon}_{K} \gamma_{a} \gamma_{11} \eta_{L} M_{IL} M_{KJ} \right) + \biggl( \left( \bar{\epsilon}_{I} \gamma^{a} \eta_{J} \right) \left( \bar{\epsilon}_{K} \gamma_{a} \eta_{L} \right) + 2 \left( \bar{\epsilon}_{I} \gamma^{a} \eta_{L} \right) \left( \bar{\epsilon}_{K} \gamma_{a} \eta_{J} \right) \notag \\
& \qquad \qquad\qquad \left. -\frac{1}{12} \left( \bar{\epsilon}_{I} \gamma^{abc} \eta_{J} \right) \left( \bar{\epsilon}_{K} \gamma_{abc} \eta_{L} \right) - \frac{1}{12} \left( \bar{\epsilon}_{I} \gamma^{abc} \gamma_{11} \eta_{J} \right) \left( \bar{\epsilon}_{K} \gamma_{abc} \gamma_{11} \eta_{L} \right) \right) M_{IJ} M_{KL} =0,
\label{dilconquar2}
\end{align}
respectively, where Fierz identities have been used to rewrite equations (\ref{eqn:quarb2con}--\ref{eqn:quare4con}) and equation \eqref{dilconquar}.
The only set of quadratic constraints on the spinors that we have found that solves equations (\ref{eqn:quarEcon}--\ref{dilconquar2}) is
\begin{gather*}
\bar{\epsilon}_{I} \gamma_{a} \eta_{J}=0, \quad \bar{\epsilon}_{I} \eta_{J} M_{IJ} =0, \quad \bar{\epsilon}_{I} \gamma_{ab} \gamma_{11} \eta_{J} M_{IJ}=0, \\
\bar{\epsilon}_{I} \gamma_{abc} \eta_{J} M_{IJ}=0, \quad \bar{\epsilon}_{I} \gamma_{abc} \gamma_{11} \eta_{J} M_{IJ}=0, \quad \bar{\epsilon}_{I} \gamma_{cdef} \eta_{J} M_{IJ}=0, \\
\partial_{a} M_{IJ} = - 2 \bar{\epsilon}_{K} \gamma_{a} \gamma_{11} \eta_{L} M_{IL} M_{KJ}.
\end{gather*}
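As a consistency check, these constraints solve, for example, the dilaton condition \eqref{dilconquar2} term by term: the first bracket vanishes by the condition on $\partial_{a} M_{IJ},$ the quartic terms with a single gamma matrix vanish since $\bar{\epsilon}_{I} \gamma_{a} \eta_{J}=0,$ and the remaining quartic terms factorize after contracting with $M_{IJ} M_{KL},$ e.g.
\begin{equation*}
\left( \bar{\epsilon}_{I} \gamma^{abc} \eta_{J} \right) \left( \bar{\epsilon}_{K} \gamma_{abc} \eta_{L} \right) M_{IJ} M_{KL} = \left( \bar{\epsilon}_{I} \gamma^{abc} \eta_{J} M_{IJ} \right) \left( \bar{\epsilon}_{K} \gamma_{abc} \eta_{L} M_{KL} \right) =0.
\end{equation*}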
We have shown that the type IIA supergravity equations admit a symmetry described by the following transformations of the dilaton and RR field strengths
\begin{align}
\phi \rightarrow \phi' &= \phi + X,\notag \\
\textup{e}^{\phi} F^{(2)}_{\;\;\;\;ab} \rightarrow \textup{e}^{\phi'} F'^{(2)}_{\;\;\;\;ab} &= \textup{e}^{\phi} F^{(2)}_{\;\;\;\;ab} + \bar{\epsilon}_{I} \gamma_{ab} \eta_{J} M_{IJ}, \notag \\
\textup{e}^{\phi} F^{(4)}_{\;\;\;\;abcd} \rightarrow \textup{e}^{\phi'} F'^{(4)}_{\;\;\;\;abcd} &= \textup{e}^{\phi} F^{(4)}_{\;\;\;\;abcd} - \bar{\epsilon}_{I} \gamma_{abcd} \gamma_{11} \eta_{J} M_{IJ},
\label{eqn:IIAsymm}
\end{align}
where the Killing spinors must satisfy
\begin{gather}
\bar{\epsilon}_{I} \gamma_{a} \eta_{J}=0, \label{eqn2:gamma10con} \\
\bar{\epsilon}_{I} \gamma_{11} \eta_{J} M_{IJ} =0, \\
\bar{\epsilon}_{I} \eta_{J} M_{IJ} =0, \quad \bar{\epsilon}_{I} \gamma_{ab} \gamma_{11} \eta_{J} M_{IJ}=0, \label{con2A1}\\ \bar{\epsilon}_{I} \gamma_{abc} \eta_{J} M_{IJ}=0, \quad \bar{\epsilon}_{I} \gamma_{abc} \gamma_{11} \eta_{J} M_{IJ}=0, \quad \bar{\epsilon}_{I} \gamma_{cdef} \eta_{J} M_{IJ}=0, \label{con2A2}\\
\partial_{a} X= \bar{\epsilon}_{I} \gamma_{a} \gamma_{11} \eta_{J} M_{IJ}, \label{eqn:Xdef} \\
\partial_{a} M_{IJ} = - 2 \bar{\epsilon}_{K} \gamma_{a} \gamma_{11} \eta_{L} M_{IL} M_{KJ}. \label{eqn:Mdef}
\end{gather}
Equation \eqref{eqn:Mdef} is equivalent to
\begin{equation}
\partial_{a} (M^{-1})_{IJ} = 2 \bar{\epsilon}_{J} \gamma_{a} \gamma_{11} \eta_{I},
\label{eqn:Minvdef}
\end{equation}
and equation \eqref{eqn:Xdef} can be solved to find $X$ up to a constant of integration:
\begin{equation}
X = \frac{1}{2} \sum_{I=1}^{n}\left(\log M^{-1} \right)_{II}.
\end{equation}
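Both statements can be verified directly: the matrix identity $\partial_{a} (M^{-1})_{IJ} = -(M^{-1})_{IK}\, \partial_{a} M_{KL} \,(M^{-1})_{LJ}$ together with equation \eqref{eqn:Mdef} gives
\begin{equation*}
\partial_{a} (M^{-1})_{IJ} = 2 (M^{-1})_{IK} \left( \bar{\epsilon}_{P} \gamma_{a} \gamma_{11} \eta_{Q} \right) M_{KQ} M_{PL} (M^{-1})_{LJ} = 2 \bar{\epsilon}_{J} \gamma_{a} \gamma_{11} \eta_{I},
\end{equation*}
while the trace identity $\partial_{a} \operatorname{tr} \log M^{-1} = \operatorname{tr} \left( M\, \partial_{a} M^{-1} \right)$ gives
\begin{equation*}
\partial_{a} X = \frac{1}{2} M_{IJ}\, \partial_{a} (M^{-1})_{JI} = \bar{\epsilon}_{I} \gamma_{a} \gamma_{11} \eta_{J} M_{IJ},
\end{equation*}
in agreement with equation \eqref{eqn:Xdef}.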
The integrability conditions arising from equations \eqref{eqn:Xdef} and \eqref{eqn:Minvdef} are trivial because
\begin{align*}
\nabla_{[a} \nabla_{b]} X = \frac{1}{2} H_{abc} \bar{\epsilon}_{I} \gamma^{c} \eta_{J} M_{IJ} \quad \textup{and} \quad
\nabla_{[a} \nabla_{b]} (M^{-1})_{IJ}= H_{abc} \bar{\epsilon}_{J} \gamma^{c} \eta_{I},
\end{align*}
which vanish by equation \eqref{eqn2:gamma10con}.
In the transformations given by Berkovits and Maldacena the spinors $\epsilon_{I}$ and $\eta_{I}$ are identified. This is sufficient for
\begin{gather*}
\bar{\epsilon}_{I} \eta_{J} M_{IJ} =0, \quad \bar{\epsilon}_{I} \gamma_{ab} \gamma_{11} \eta_{J} M_{IJ}=0 ,\\ \bar{\epsilon}_{I} \gamma_{abc} \eta_{J} M_{IJ}=0, \quad \bar{\epsilon}_{I} \gamma_{abc} \gamma_{11} \eta_{J} M_{IJ}=0, \quad \bar{\epsilon}_{I} \gamma_{cdef} \eta_{J} M_{IJ}=0.
\end{gather*}
When $\epsilon_{I}$ and $\eta_{I}$ are identified, only the symmetric part of $M_{IJ}$ contributes to the transformations of the fields, so without loss of generality we can take $M_{IJ}$ to be symmetric in $I$ and $J,$ as a consequence of which the above equations are satisfied. If we identify $\epsilon_{I}$ and $\eta_{I},$ then we recover the transformations of Berkovits and Maldacena, but with an extra condition on the spinors, namely that $$ \bar{\epsilon}_{I} \gamma_{11} \epsilon_{J} M_{IJ} =0.$$
When $n=1,$ we can explicitly show that the solution to \begin{gather*} \bar{\epsilon} \eta =0, \quad \bar{\epsilon} \gamma_{ab} \gamma_{11} \eta=0 , \\ \quad \bar{\epsilon} \gamma_{abc} \eta=0, \quad \bar{\epsilon} \gamma_{abc} \gamma_{11} \eta=0, \quad \bar{\epsilon} \gamma_{cdef} \eta=0 \end{gather*} is $$\epsilon \propto \eta.$$ However, when $n>1,$ these conditions do not reduce to the transformation rules of fermionic T-duality.
\section{Type IIB supergravity symmetry}
\label{symmIIB}
The type IIB supergravity action is \begin{multline}
S= \frac{1}{2 \kappa^2} \int \d^{10} x \sqrt{g} \left\{ \textup{e}^{-2 \phi} \left[ R + 4 (\partial \phi)^2 - \frac{1}{12} H^2 \right] \right.\\
\left.- \frac{1}{2} \left[ F^{{(1)}^2} + \frac{1}{3!} F^{{(3)}^2} + \frac{1}{2\cdot 5!} F^{{(5)}^2} \right] - \frac{1}{192} \frac{1}{\sqrt{g}} \epsilon \, C^{(4)} \partial B \partial C^{(2)} \right\}.
\end{multline}
In type IIB supergravity the RR fields are the scalar $C^{(0)},$ the 2-form $C^{(2)}$ and the 4-form $C^{(4)}.$
In terms of potentials $B,C^{(0)},C^{(2)}$ and $C^{(4)},$ the field strengths are defined to be
\begin{align*}
H = \, & \d B, \quad F^{(1)} = \d C^{(0)}, \quad F^{(3)} = \d C^{(2)} - H C^{(0)}, \\
&F^{(5)} = \d C^{(4)} - \frac{1}{2} C^{(2)} \wedge H +\frac{1}{2} B \wedge \d C^{(2)}.
\end{align*}
The 5-form field strength is constrained to be self-dual.
The Bianchi identities for the field strengths are
\begin{align}
&\d H =0, \label{bianchiHB} \\
&\d F^{(1)}=0,
\label{bianchi1B} \\
&\d F^{(3)}- H \wedge F^{(1)}=0,
\label{bianchi3B} \\
&\d F^{(5)}- H \wedge F^{(3)}=0.
\label{bianchi5B}
\end{align}
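These identities follow directly from the definitions of the field strengths; for example,
\begin{align*}
\d F^{(3)} &= -\d C^{(0)} \wedge H = H \wedge F^{(1)}, \\
\d F^{(5)} &= -\frac{1}{2} \d C^{(2)} \wedge H + \frac{1}{2} H \wedge \d C^{(2)} = H \wedge \d C^{(2)} = H \wedge F^{(3)},
\end{align*}
where we have used $\d H=0,$ $H \wedge H=0$ and the anticommutation of odd-degree forms.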
The equations of motion are
\begin{align}
& \d \left(\textup{e}^{-2 \phi} \star H \right) - F^{(1)} \wedge \star F^{(3)} - F^{(3)} \wedge F^{(5)} =0, \\
& \d \star F^{(1)} + H \wedge \star F^{(3)} =0, \\
& \d \star F^{(3)} + H \wedge F^{(5)} =0.
\end{align}
The equation of motion for the 5-form field strength, $F^{(5)},$ is equivalent to the Bianchi identity for the 5-form, equation \eqref{bianchi5B}, since $F^{(5)}$ is self-dual.
Moreover, the Einstein equation is
\begin{multline}
R_{ab}= -\frac{1}{4} g_{ab} \Box \phi + \frac{1}{2} g_{ab} \left(\partial \phi \right)^{2} - 2 \nabla_{a} \nabla_{b} \phi
+ \frac{1}{4} \left( H_{acd}H_{b}^{\;cd} - \frac{1}{12} g_{ab} H^{2} \right) \\
+ \frac{1}{2} \textup{e}^{2 \phi} F^{(1)}_{\;\;\;\;a} F^{(1)}_{\;\;\;\;b} + \frac{1}{4} \textup{e}^{2 \phi} \left( F^{(3)}_{\;\;\;\;acd} F^{(3)\;cd}_{\;\;\;\;b} -\frac{1}{12} g_{ab} F^{(3)^{2}} \right) + \frac{1}{96} \textup{e}^{2 \phi} F^{(5)}_{\;\;\;\;acdef} F^{(5)\;cdef}_{\;\;\;\;b},
\label{einsteinB}
\end{multline}
noting that $F^{(5)^2}$ vanishes because the 5-form field is self-dual. Finally, the dilaton equation of motion is the same as the type IIA supergravity dilaton equation of motion, equation \eqref{dileom}. Also, the twice-contracted Bianchi identity is again satisfied using the equations of motion for the fields.
The Killing spinor equations from the variation of the gravitino and dilatino are
\begin{equation}
\nabla_{a} \epsilon - \frac{1}{8} H_{abc} \gamma^{bc} \sigma^{3} \epsilon - \frac{1}{8} \textup{e}^{\phi} \biggl( F^{(1)}_{\;\;\;\;b}\gamma^{b} \gamma_{a} \left( i \sigma^{2} \right) \epsilon + \frac{1}{3!} F^{(3)}_{\;\;\;\;bcd}\gamma^{bcd} \gamma_{a} \sigma^{1} \epsilon + \frac{1}{2\cdot 5!} F^{(5)}_{\;\;\;\;bcdef} \gamma^{bcdef} \gamma_{a} \left( i \sigma^{2} \right) \epsilon \biggr) = 0,
\label{eqn:ksegravB}
\end{equation}
\begin{equation}
\left( \gamma^{a} \partial_{a} \phi - \frac{1}{12} H_{abc} \gamma^{abc} \sigma^{3} + \textup{e}^{\phi} F^{(1)}_{\;\;\;\;a} \gamma^{a} \left( i \sigma^{2} \right) + \frac{1}{12} \textup{e}^{\phi} F^{(3)} _{\;\;\;\; abc} \gamma^{abc} \sigma^{1} \right) \epsilon=0,
\label{eqn:ksedilB}
\end{equation}
respectively.
We now consider the most general transformation of the RR field strengths and we will also allow the dilaton to transform:
\begin{align*}
\textup{e}^{\phi} F^{(1)}_{\;\;\;\;a} &\rightarrow \textup{e}^{\phi'} F'^{(1)}_{\;\;\;\;a}= \textup{e}^{\phi} F^{(1)}_{\;\;\;\;a} + \bar{\epsilon}_{I} \gamma_{a} S^{(1)} \eta_J M_{IJ}, \\
\textup{e}^{\phi} F^{(3)}_{\;\;\;\;abc} &\rightarrow \textup{e}^{\phi'} F'^{(3)}_{\;\;\;\;abc}= \textup{e}^{\phi} F^{(3)}_{\;\;\;\;abc} + \bar{\epsilon}_{I} \gamma_{abc} S^{(2)} \eta_J M_{IJ}, \\
\textup{e}^{\phi} F^{(5)}_{\;\;\;\;abcde} &\rightarrow \textup{e}^{\phi'} F'^{(5)}_{\;\;\;\;abcde}= \textup{e}^{\phi} F^{(5)}_{\;\;\;\;abcde} + \bar{\epsilon}_{I} \gamma_{abcde} S^{(3)} \eta_J M_{IJ},
\end{align*}
where $M_{IJ}$ is an arbitrary function; the spinors $\epsilon_I, \eta _I$ satisfy the gravitino and dilatino Killing spinor equations; $$S^{(1,2,3)}= \sum_{\mu} S_{\mu}^{(1,2,3)} \sigma^{ \star \mu},$$ and $\sigma^{\star \mu}=\left( \mathbb{I}, \sigma^{1}, i \sigma^{2},\sigma^{3} \right)^{\mu}.$ The field $\phi'$ is an arbitrary field, which is identified with the transformed dilaton upon considering the NSNS 3-form equation of motion with the transformed fields. We let $$ \phi'= \phi + X,$$ where $X$ is an arbitrary function.
Note that the RR fields need not \emph{a priori} transform with the same spinors and coefficients. However, as in section \ref{symmIIA}, if we let them transform with different spinors and coefficients, then we will find from equation (\ref{bianchi3B}), for example, that the spinors and functions have to be identified.
As in section \ref{symmIIA}, we let the NSNS fields $g$ and $H$ be invariant under the transformation.
It is important that the 5-form field strength remains self-dual after the transformation. The Hodge dual of $\delta F^{(5)}$ is
\begin{align*}
\star \left( \bar{\epsilon}_{I} \gamma_{a_1 \dots a_5} S^{(3)} \eta_J M_{IJ} \d x^{a_1} \wedge \dots \wedge \d x^{a_5} \right) &= \bar{\epsilon}_{I} \left( \frac{1}{5!} \epsilon_{a_1 \dots a_5 b_1 \dots b_5 }\gamma^{b_1 \dots b_5} \right) S^{(3)} \eta_J M_{IJ} \d x^{a_1} \wedge \dots \wedge \d x^{a_5} \\
&= \bar{\epsilon}_{I} \left( \gamma_{a_1 \dots a_5} \gamma_{11} \right) S^{(3)} \eta_J M_{IJ} \d x^{a_1} \wedge \dots \wedge \d x^{a_5},
\end{align*}
by identity \eqref{eqn:gammadual}. Hence if we let $\gamma_{11} \eta_J=\eta_J$ then the transformed 5-form field strength is self-dual. Recall that in type IIB supergravity all the Killing spinors have the same chirality, hence $\gamma_{11} \epsilon_I=\epsilon_I.$
We will now find the constraints that the various functions and the Killing spinors must satisfy so that the transformed fields satisfy the Bianchi identities and the equations of motion. First, let us consider the Bianchi identities. Using the gravitino Killing spinor equation, the Bianchi identity for the transformed RR 1-form field strength is
\begin{align*}
\nabla_{[a} F'^{(1)}_{\;\;\;\;b]} &=\nabla_{[a} \left( \textup{e}^{-X} F^{(1)}_{\;\;\;\;b]} + \textup{e}^{-(\phi+X)} \bar{\epsilon}_{I} \gamma_{b]} S^{(1)} \eta_J M_{IJ}\right) \\
&= - \textup{e}^{-X} \partial_{[a} X \left(F^{(1)}_{\;\;\;\;b]} + \textup{e}^{-\phi} \bar{\epsilon}_I \gamma_{b]} S^{(1)} \eta_{J} M_{IJ} \right) - \textup{e}^{-(\phi+X)} \partial_{[a} \phi \bar{\epsilon}_I \gamma_{b]} S^{(1)} \eta_J M_{IJ} \\
& \qquad + \textup{e}^{-X}M_{IJ} \epsilon_{I}^{\alpha} \left( \frac{1}{8} \textup{e}^{- \phi} H_{cd[a} \left( \left(\gamma_{b]}^{\;\;cd} \right)_{\alpha \beta} \left( S^{(1)} \sigma^{3}\right) + \left(\gamma_{b]}^{\;\;cd} \right)_{\beta \alpha} \left(\sigma^{3} S^{(1)} \right)\right) \right. \\
& \qquad \quad + \frac{1}{8} F^{(1)}_{\;\;c} \left( \left(\gamma_{[b} \gamma^{c} \gamma_{a]} \right)_{\alpha \beta} \left(i S^{(1)} \sigma^{2} \right) - \left(\gamma_{[b} \gamma^{c} \gamma_{a]} \right)_{\beta \alpha} \left(i \sigma^{2} S^{(1)} \right) \right) \\
& \qquad \quad + \frac{1}{8\cdot 3!} F^{(3)}_{\;\;cde} \left( \left(\gamma_{[b} \gamma^{cde} \gamma_{a]} \right)_{\alpha \beta} \left( S^{(1)} \sigma^{1} \right) + \left(\gamma_{[b} \gamma^{cde} \gamma_{a]} \right)_{\beta \alpha} \left( \sigma^{1} S^{(1)} \right) \right) \\
& \qquad \quad \left. + \frac{1}{16\cdot 5!} F^{(5)}_{\;\;cdefg} \left( \left(\gamma_{[b} \gamma^{cdefg} \gamma_{a]} \right)_{\alpha \beta} \left(i S^{(1)} \sigma^{2} \right) - \left(\gamma_{[b} \gamma^{cdefg} \gamma_{a]} \right)_{\beta \alpha} \left(i \sigma^{2} S^{(1)} \right) \right) \right) \eta_{J}^{\beta} \\
& \qquad \qquad + \textup{e}^{-(\phi+X)} \bar{\epsilon}_I \gamma_{[b} \partial_{a]} S^{(1)} \eta_{J} M_{IJ} +\textup{e}^{-(\phi+X)} \bar{\epsilon}_I \gamma_{[b} S^{(1)} \eta_{J} \partial_{a]} M_{IJ}=0.
\end{align*}
Now, from the dilatino Killing spinor equation
$$ \bar{\epsilon}_{I} \gamma_{ba} S^{(1)} \left( \gamma^{c} \partial_{c} \phi - \frac{1}{12} H_{cde} \gamma^{cde} \sigma^{3} + \textup{e}^{\phi} F^{(1)}_{\;\;\;\;c} \gamma^{c} (i \sigma^2) + \frac{1}{12} \textup{e}^{\phi} F^{(3)}_{\;\;\;\;cde} \gamma^{cde} \sigma^1 \right) \eta_{J}=0.$$
Adding the above equation to
$$ \bar{\eta}_{J} \gamma_{ba} S^{(1) \textup{t}} \left( \gamma^{c} \partial_{c} \phi - \frac{1}{12} H_{cde} \gamma^{cde} \sigma^{3} + \textup{e}^{\phi} F^{(1)}_{\;\;\;\;c} \gamma^{c} (i \sigma^2) + \frac{1}{12} \textup{e}^{\phi} F^{(3)}_{\;\;\;\;cde} \gamma^{cde} \sigma^1 \right) \epsilon_{I}=0,$$
where $S^{\textup{t}}$ is the transpose of $S,$ and using the first identity in the set of equations \eqref{eqn:symgamma} we can show that
\begin{align*}
\bar{\epsilon}_{I} \partial_{[a} \phi \gamma_{b]} S^{(1)} \eta_{J} = \epsilon_{I}^{\alpha} &\left( \frac{1}{48} H_{cde} \left( \left(\gamma_{ba}\gamma^{cde} \right)_{\alpha \beta} \left( S^{(1)} \sigma^{3}\right) + \left(\gamma_{ba}\gamma^{cde} \right)_{\beta \alpha} \left(\sigma^{3} S^{(1)} \right) \right) \right. \\
& \qquad - \frac{1}{4}\textup{e}^{\phi} F^{(1)}_{\;\;c} \left( \left(\gamma_{ba} \gamma^{c}\right)_{\alpha \beta} \left( S^{(1)} i\sigma^{2} \right) - \left(\gamma_{ba} \gamma^{c} \right)_{\beta \alpha} \left(i \sigma^{2} S^{(1)} \right) \right) \\
& \qquad \qquad \left. - \frac{1}{48} \textup{e}^{\phi} F^{(3)}_{\;\;cde} \left( \left(\gamma_{ba} \gamma^{cde} \right)_{\alpha \beta} \left( S^{(1)} \sigma^{1} \right) + \left(\gamma_{ba} \gamma^{cde}\right)_{\beta \alpha} \left( \sigma^{1} S^{(1)} \right) \right) \right) \eta_J.
\end{align*}
Therefore, using the above equation and performing some gamma matrix manipulations, using equations \eqref{gammaidentity} and \eqref{eqn:symgamma}, we can show that
\begin{align*}
\nabla_{[a} F'^{(1)}_{\;\;\;\;b]} & = \textup{e}^{-(\phi+X)} \bar{\epsilon}_I \gamma_{[b} \Bigl( \partial_{a]} \left(S^{(1)} M_{IJ} \right) - \partial_{a]} X S^{(1)} M_{IJ} \Bigr) \eta_{J} - \textup{e}^{-X} \partial_{[a} X F^{(1)}_{\;\;b]} \\
& \quad - \frac{1}{24} \textup{e}^{-(\phi+X)} H_{cde} \bar{\epsilon}_{I} \left(\gamma_{ba}^{\;\;\;cde} + \gamma^{c} \delta^{d}_{b} \delta^{e}_{a} \right) \left( S^{(1)}_{0} \sigma^{3} + S^{(1)}_{3} \mathbb{I} \right) \eta_{J} M_{IJ} \\
& \qquad + \frac{1}{4} \textup{e}^{-X} F^{(1)}_{\;\;c} \bar{\epsilon}_{I} \left( \gamma_{ba}^{\;\;\;c} \left(S^{(1)}_{0} i \sigma^{2} - S^{(1)}_{2} \mathbb{I} \right) - 4 \gamma_{[b} \delta^{c}_{a]} \left(S^{(1)}_{1} \sigma^{3} - S^{(1)}_{3} \sigma^{1} \right) \right) \eta_{J} M_{IJ}\\
& \qquad\quad + \frac{1}{4} \textup{e}^{-X} F^{(3)}_{\;\;cd[a} \bar{\epsilon}_{I} \left( \gamma_{b]}^{\;\;cd} \left(S^{(1)}_{2} \sigma^{3} + S^{(1)}_{3} i \sigma^{2} \right)- 2 \gamma^{c} \delta^{d}_{b]} \left(S^{(1)}_{0} \sigma^{1} + S^{(1)}_{1} \mathbb{I} \right) \right) \eta_{J} M_{IJ} \\
& \qquad\qquad - \frac{1}{24} \textup{e}^{-X} F^{(5)}_{\;\;bacde} \bar{\epsilon}_{I} \gamma^{cde} \left(S^{(1)}_{0} i \sigma^{2} - S^{(1)}_{2} \mathbb{I} \right) \eta_{J} M_{IJ}.
\end{align*}
This expression must vanish if the transformed RR 1-form field strength is to satisfy the Bianchi identity. We are considering generic supergravity fields, so the expression vanishes only if
\begin{gather}
\partial_{a} X = \bar{\epsilon}_{I} \gamma_{a} \left(S^{(1)}_{1} \sigma^{3} - S^{(1)}_{3} \sigma^{1} \right) \eta_{J} M_{IJ}, \label{eqn:bian1condX} \\
\bar{\epsilon}_{I} \gamma_{abcde} \left( S^{(1)}_{0} \sigma^{3} + S^{(1)}_{3} \mathbb{I} \right) \eta_{J} M_{IJ}=0, \\
\bar{\epsilon}_{I} \gamma_{a} \left( S^{(1)}_{0} \sigma^{3} + S^{(1)}_{3} \mathbb{I} \right) \eta_{J} M_{IJ}=0, \\
\bar{\epsilon}_{I} \gamma_{abc}\left(S^{(1)}_{0} i \sigma^{2} - S^{(1)}_{2} \mathbb{I} \right) \eta_{J} M_{IJ}=0, \\
\bar{\epsilon}_{I} \gamma_{abc} \left(S^{(1)}_{2} \sigma^{3} + S^{(1)}_{3} i \sigma^{2} \right) \eta_{J} M_{IJ}=0, \\
\bar{\epsilon}_{I} \gamma_{a} \left(S^{(1)}_{0} \sigma^{1} + S^{(1)}_{1} \mathbb{I} \right) \eta_{J} M_{IJ}=0, \\
\bar{\epsilon}_I \gamma_{[b} \Bigl( \partial_{a]} \left(S^{(1)} M_{IJ} \right) - \partial_{a]} X S^{(1)} M_{IJ} \Bigr) \eta_{J}=0.
\end{gather}
We can also show that
\begin{align*}
&\nabla_{[a} F'^{(3)}_{\;\;\;\;bcd]} + F'^{(1)}_{\;\;\;\;[a} H_{bcd]} \\
= &\textup{e}^{-(\phi+X)} \bar{\epsilon}_I \gamma_{[bcd} \Bigl( \partial_{a]} \left(S^{(2)} M_{IJ} \right) - \partial_{a]} X S^{(2)} M_{IJ} \Bigr) \eta_{J} - \textup{e}^{-X} \partial_{[a} X F^{(3)}_{\;\;bcd]} \\
& \quad - \frac{1}{48} \textup{e}^{-(\phi+X)} H_{efg} \bar{\epsilon}_{I} \biggl( \left(\gamma_{bcda}^{\quad\;\; efg} + 36 \gamma_{[bc}^{\;\;\;\;e} \delta_{d}^{f} \delta_{a]}^{g} \right) \left( S^{(2)}_{0} \sigma^{3} + S^{(2)}_{3} \mathbb{I} \right) \\
& \qquad\qquad\qquad + 48 \gamma_{[b} \delta_{c}^{e} \delta^{f}_{d} \delta^{g}_{a]} \left( S^{(1)}_{0} \mathbb{I} + \left(S^{(1)}_{1} - S^{(2)}_{2} \right) \sigma^{1} + \left(S^{(1)}_{2} - S^{(2)}_{1} \right) i \sigma^{2} + S^{(1)}_{3} \sigma^{3} \right) \biggr) \eta_{J} M_{IJ} \\
& \qquad + \frac{1}{2} \textup{e}^{-X} F^{(1)}_{\;\;[a} \bar{\epsilon}_{I} \gamma_{bcd]} \left(S^{(2)}_{3} \sigma^{1} - S^{(2)}_{1} \sigma^{3} \right) \eta_{J} M_{IJ}\\
& \qquad\quad - \frac{1}{48} \textup{e}^{-X} F^{(3)}_{\;\;efg} \bar{\epsilon}_{I} \biggl( \left(\gamma_{bcda}^{\quad\;\; efg} + 36 \gamma_{[bc}^{\;\;\;\;e} \delta_{d}^{f} \delta_{a]}^{g} \right) \left( S^{(2)}_{0} \sigma^{1} + S^{(2)}_{1} \mathbb{I} \right) \\
& \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad + 48 \gamma_{[b} \delta_{c}^{e} \delta^{f}_{d} \delta^{g}_{a]} \left( S^{(2)}_{2} \sigma^{3} + S^{(2)}_{3} i \sigma^{2} \right) \biggr) \eta_{J} M_{IJ} \\
& \qquad\qquad - \frac{1}{4} \textup{e}^{-X} F^{(5)}_{\;\;ef[bcd} \bar{\epsilon}_{I} \left( \gamma^{e} \delta_{a]}^{f} \left(S^{(2)}_{0} i \sigma^{2} - S^{(2)}_{2} \mathbb{I} \right) + \gamma_{a]}^{\;\;ef} \left(S^{(2)}_{1} \sigma^{3} - S^{(2)}_{3} \sigma^{1} \right)\right) \eta_{J} M_{IJ},
\end{align*}
\begin{align*}
&\nabla_{[a} F'^{(5)}_{\;\;\;\;bcdef]} + \frac{10}{3} F'^{(3)}_{\;\;\;\;[abc} H_{def]} \\
= &\textup{e}^{-(\phi+X)} \bar{\epsilon}_I \gamma_{[bcdef} \Bigl( \partial_{a]} \left(S^{(3)} M_{IJ} \right) - \partial_{a]} X S^{(3)} M_{IJ} \Bigr) \eta_{J} - \textup{e}^{-X} \partial_{[a} X F^{(5)}_{\;\;bcdef]} \\
& \quad - \frac{1}{72} \textup{e}^{-(\phi+X)} H_{ghi} \bar{\epsilon}_{I} \biggl( \left(\gamma_{bcdefa}^{\qquad \; ghi} + 90 \gamma_{[bcde}^{\quad\;\;\; g} \delta_{f}^{h} \delta_{a]}^{i} \right) \left( S^{(3)}_{0} \sigma^{3} + S^{(3)}_{3} \mathbb{I} \right) \\
& \qquad\qquad\quad + 240 \gamma_{[bcd} \delta_{e}^{g} \delta^{h}_{f} \delta^{i}_{a]} \left( S^{(2)}_{0} \mathbb{I} + \left(S^{(2)}_{1} - S^{(3)}_{2} \right) \sigma^{1} + \left(S^{(2)}_{2} - S^{(3)}_{1} \right) i \sigma^{2} + S^{(2)}_{3} \sigma^{3} \right) \biggr) \eta_{J} M_{IJ}\\
& \qquad - \frac{1}{12} \textup{e}^{-X} F^{(1)}_{\;\;g} \bar{\epsilon}_{I} \gamma_{bcdefa}^{\qquad \; g} \left(S^{(3)}_{0} i \sigma^{2} - S^{(3)}_{2} \mathbb{I} \right) \eta_{J} M_{IJ}\\
& \qquad\quad - \frac{1}{72} \textup{e}^{-X} F^{(3)}_{\;\;ghi} \bar{\epsilon}_{I} \biggl(\gamma_{bcdefa}^{\qquad \; ghi} \left( S^{(3)}_{0} \sigma^{1} + S^{(3)}_{1} \mathbb{I} \right) \\
& \qquad\qquad\qquad\qquad\qquad + \left(9 \gamma_{[bcdef}^{\qquad gh} \delta_{a]}^{i} + 60 \gamma_{[bcd} \delta_{e}^{g} \delta^{h}_{f} \delta^{i}_{a]} \right) \left( S^{(3)}_{2} \sigma^{3} + S^{(3)}_{3} i \sigma^{2} \right) \biggr) \eta_{J} M_{IJ} \\
& \qquad\qquad + \frac{1}{4} \textup{e}^{-X} F^{(5)}_{\;\;g[defa} \bar{\epsilon}_{I} \left(5 \gamma_{bc]}^{\;\;\; g} \left(S^{(3)}_{0} i \sigma^{2} - S^{(3)}_{2} \mathbb{I} \right) - 4 \gamma_{b} \delta_{c]}^{g} \left(S^{(3)}_{1} \sigma^{3} - S^{(3)}_{3} \sigma^{1} \right)\right) \eta_{J} M_{IJ}.
\end{align*}
Both of the above expressions must vanish for the Bianchi identities for $F'^{(3)}$ and $F'^{(5)},$ respectively, to hold. So we have that
\begin{gather}
\partial_{a} X = \bar{\epsilon}_{I} \gamma_{a} \left(S^{(2)}_{2} \sigma^{3} + S^{(2)}_{3} i \sigma^{2} \right) \eta_{J} M_{IJ}, \label{eqn:bian3condX}\\
\bar{\epsilon}_{I} \gamma_{abc} \left( S^{(2)}_{0} \sigma^{3} + S^{(2)}_{3} \mathbb{I} \right) \eta_{J} M_{IJ}=0, \label{eqn:bian3conG3S0S3} \\
\bar{\epsilon}_{I} \gamma_{a} \left( S^{(1)}_{0} \mathbb{I} + \left(S^{(1)}_{1} - S^{(2)}_{2} \right) \sigma^{1}
+ \left(S^{(1)}_{2} - S^{(2)}_{1} \right) i \sigma^{2} + S^{(1)}_{3} \sigma^{3} \right) \eta_{J} M_{IJ}=0, \label{eqn:bian3conG1Sall}\\
\bar{\epsilon}_{I} \gamma_{abc} \left(S^{(2)}_{3} \sigma^{1} - S^{(2)}_{1} \sigma^{3} \right)\eta_{J} M_{IJ}=0, \label{eqn:bian3conG3S1S3}\\
\bar{\epsilon}_{I} \gamma_{abc} \left( S^{(2)}_{0} \sigma^{1} + S^{(2)}_{1} \mathbb{I} \right) \eta_{J} M_{IJ}=0, \\
\bar{\epsilon}_{I} \gamma_{a} \left(S^{(2)}_{0} i \sigma^{2} - S^{(2)}_{2} \mathbb{I} \right) \eta_{J} M_{IJ}=0, \\
\bar{\epsilon}_I \gamma_{[bcd} \Bigl( \partial_{a]} \left(S^{(2)} M_{IJ} \right) - \partial_{a]} X S^{(2)} M_{IJ} \Bigr) \eta_{J}=0,
\end{gather}
and
\begin{gather}
\partial_{a} X = \bar{\epsilon}_{I} \gamma_{a} \left(S^{(3)}_{1} \sigma^{3} - S^{(3)}_{3} \sigma^{1} \right) \eta_{J} M_{IJ}, \\
\bar{\epsilon}_{I} \gamma_{a} \left( S^{(3)}_{0} \sigma^{3} + S^{(3)}_{3} \mathbb{I} \right) \eta_{J} M_{IJ}=0, \\
\bar{\epsilon}_{I} \gamma_{abcde} \left( S^{(3)}_{0} \sigma^{3} + S^{(3)}_{3} \mathbb{I} \right) \eta_{J} M_{IJ}=0, \\
\bar{\epsilon}_{I} \gamma_{abc} \left( S^{(2)}_{0} \mathbb{I} + \left(S^{(2)}_{1} - S^{(3)}_{2} \right) \sigma^{1} + \left(S^{(2)}_{2} - S^{(3)}_{1} \right) i \sigma^{2} + S^{(2)}_{3} \sigma^{3} \right) \eta_{J} M_{IJ}=0, \\
\bar{\epsilon}_{I} \gamma_{abc}\left(S^{(3)}_{0} i \sigma^{2} - S^{(3)}_{2} \mathbb{I} \right) \eta_{J} M_{IJ}=0, \\
\bar{\epsilon}_{I} \gamma_{a} \left(S^{(3)}_{0} \sigma^{1} + S^{(3)}_{1} \mathbb{I} \right) \eta_{J} M_{IJ}=0, \\
\bar{\epsilon}_{I} \gamma_{abc} \left(S^{(3)}_{2} \sigma^{3} + S^{(3)}_{3} i \sigma^{2} \right) \eta_{J} M_{IJ}=0, \\
\bar{\epsilon}_{I} \gamma_{abc} \left(S^{(3)}_{0} i \sigma^{2} - S^{(3)}_{2} \mathbb{I} \right) \eta_{J} M_{IJ}=0, \\
\bar{\epsilon}_I \gamma_{[bcdef} \Bigl( \partial_{a]} \left(S^{(3)} M_{IJ} \right) - \partial_{a]} X S^{(3)} M_{IJ} \Bigr) \eta_{J}=0, \label{eqn:bian5conquart}
\end{gather}
respectively.
The NSNS 3-form field strength does not change, so the Bianchi identity for the 3-form field is the same as before.
Now, we assume that equations (\ref{eqn:bian1condX}--\ref{eqn:bian5conquart}) hold and consider the equations of motion. As before, using the Killing spinor equations for $\epsilon_{I}$ and $\eta_{J},$ the equation of motion for the transformed RR 1-form field strength can be simplified to
\begin{align*}
&\nabla^{a} F'^{(1)}_{\;\;\;\;a} + \frac{1}{6} H_{abc} F'^{(3) abc}\\
= & \textup{e}^{-(\phi+X)} \bar{\epsilon}_{I} \gamma^{a} \Bigl(\partial_{a} \left(S^{(1)} M_{IJ}\right) - \partial_{a} X S^{(1)} M_{IJ} \Bigr) \eta_{J} \\
& \qquad + \frac{1}{6} \textup{e}^{-(\phi+X)} H_{abc} \bar{\epsilon}_{I} \gamma^{abc} \left( S^{(2)}_{0} \mathbb{I}+ \left(S^{(2)}_{1} - S^{(1)}_{2} \right) \sigma^{1} + \left(S^{(2)}_{2} - S^{(1)}_{1} \right) i \sigma^{2} + S^{(2)}_{3} \sigma^{3} \right) \eta_{J} M_{IJ}=0.
\end{align*}
So, we get the following conditions:
\begin{gather}
\bar{\epsilon}_{I} \gamma_{abc} \left( S^{(2)}_{0} \mathbb{I} + \left(S^{(2)}_{1} - S^{(1)}_{2} \right) \sigma^{1} + \left(S^{(2)}_{2} - S^{(1)}_{1} \right) i \sigma^{2} + S^{(2)}_{3} \sigma^{3} \right) \eta_{J} M_{IJ}=0, \\
\bar{\epsilon}_{I} \gamma^{a} \Bigl(\partial_{a} \left(S^{(1)} M_{IJ}\right) - \partial_{a} X S^{(1)} M_{IJ} \Bigr) \eta_{J}=0.
\label{conE1}
\end{gather}
Similarly, the equation of motion for the transformed 3-form field strength becomes
\begin{align*}
&\nabla^{a} F'^{(3)}_{\;\;\;\;abc} + \frac{1}{6} H^{def} F'^{(5)}_{\quad\; bcdef}\\
= & \textup{e}^{-(\phi+X)} \bar{\epsilon}_{I} \gamma_{abc} \Bigl(\partial^{a} \left(S^{(2)} M_{IJ}\right) - \partial^{a} X S^{(2)} M_{IJ} \Bigr) \eta_{J} \\
& \quad + \frac{1}{6} \textup{e}^{-(\phi+X)} H^{def} \bar{\epsilon}_{I} \gamma_{bcdef} \left( S^{(3)}_{0} \mathbb{I} + \left(S^{(3)}_{1} - S^{(2)}_{2} \right) \sigma^{1} + \left(S^{(3)}_{2} - S^{(2)}_{1} \right) i \sigma^{2} + S^{(3)}_{3} \sigma^{3} \right) \eta_{J} M_{IJ}=0.
\end{align*}
Hence we need to impose
\begin{gather}
\bar{\epsilon}_{I} \gamma_{abcde} \left( S^{(3)}_{0} \mathbb{I} + \left(S^{(3)}_{1} - S^{(2)}_{2} \right) \sigma^{1} + \left(S^{(3)}_{2} - S^{(2)}_{1} \right) i \sigma^{2} + S^{(3)}_{3} \sigma^{3} \right) \eta_{J} M_{IJ}=0, \label{eqn:eom3conG5} \\
\bar{\epsilon}_{I} \gamma_{abc} \Bigl(\partial^{a} \left(S^{(2)} M_{IJ}\right) - \partial^{a} X S^{(2)} M_{IJ} \Bigr) \eta_{J}=0.
\label{conE3}
\end{gather}
We also need to show that the transformed fields satisfy the equation of motion for the NSNS 3-form:
\begin{align}
&\nabla^{a} \left( \textup{e}^{-2\phi'} H_{abc} \right) - F'^{(1)a} F'^{(3)}_{\;\;abc} - \frac{1}{6} F'^{(3)def} F'^{(5)}_{\;\;bcdef} \notag \\
= & -2 \textup{e}^{-2(\phi +X)} \partial^{a} X H_{abc} - \textup{e}^{-(\phi + 2 X)} M_{IJ} \left( F^{(1)a} \bar{\epsilon}_{I} \gamma_{abc} S^{(2)} \eta_{J} + F^{(3)}_{\;\;\;abc} \bar{\epsilon}_{I} \gamma^{a} S^{(1)} \eta_{J} \right.\notag \\
& \qquad \quad\left. + \frac{1}{6} F^{(3)def} \bar{\epsilon}_{I} \gamma_{bcdef} S^{(3)} \eta_{J} +\frac{1}{6} F^{(5)}_{\;\;bcdef} \bar{\epsilon}_{I} \gamma^{def} S^{(2)} \eta_{J} \right) \notag \\
& \qquad \qquad \qquad- \textup{e}^{-2(\phi + X)} \left( \bar{\epsilon}_{I} \gamma^{a} S^{(1)} \eta_{J} \right) \left( \bar{\epsilon}_{K} \gamma_{abc} S^{(2)} \eta_{L} \right) M_{IJ} M_{KL} \notag \\
& \qquad \qquad \qquad\qquad\qquad- \frac{1}{6} \textup{e}^{-2(\phi + X)} \left( \bar{\epsilon}_{I} \gamma^{def} S^{(2)} \eta_{J} \right) \left( \bar{\epsilon}_{K} \gamma_{bcdef} S^{(3)} \eta_{L} \right) M_{IJ} M_{KL}=0,
\label{eqn:transHeqnB}
\end{align}
where the NSNS 3-form equation of motion with the original supergravity fields has been used in the first equality.
Using the gravitino Killing spinor equation and the self-duality of the 5-form RR field strength we can show that
\begin{align}
F^{(1)a} \bar{\epsilon}_{I} \gamma_{abc} S^{(2)}_{2} i\sigma^{2} \eta_{J} M_{IJ} = & 4 \nabla_{[b} \left( \bar{\epsilon}_{I} \gamma_{c]} \eta_{J} \right) S^{(2)}_{2} M_{IJ} - 2 H_{abc} \bar{\epsilon}_{I} \gamma^{a} S^{(2)}_{2} \sigma^{3} \eta_{J} M_{IJ} \notag \\
& \qquad - \frac{1}{6} \textup{e}^{\phi} F^{(3)}_{\;\;\;def} \bar{\epsilon}_{I} \left( \gamma_{bc}^{\;\;\;def} + \gamma^{d} \delta_{b}^{e} \delta_{c}^{f} \right) S^{(2)}_{2} \sigma^{1} \eta_{J} M_{IJ} \notag \\
& \qquad \qquad\qquad - \frac{1}{6} \textup{e}^{\phi} F^{(5)}_{\;\;\;bcdef} \bar{\epsilon}_{I} \gamma^{def} S^{(2)}_{2} i \sigma^{2} \eta_{J} M_{IJ},
\end{align}
and using the dilatino Killing spinor equation and equations (\ref{eqn:bian3conG3S0S3}--\ref{eqn:bian3conG3S1S3}) we get that
\begin{align}
&F^{(1)a} \bar{\epsilon}_{I} \gamma_{abc} \left( S^{(2)}_{0} \mathbb{I} + S^{(2)}_{1} \sigma^{1} + S^{(2)}_{3} \sigma^{3} \right) \eta_{J} M_{IJ} \notag \\
= &- 2 \partial_{[b} \phi \bar{\epsilon}_{I} \gamma_{c]} S^{(2)}_{0} i \sigma^{2} \eta_{J} M_{IJ} + \frac{1}{12} H_{def} \bar{\epsilon}_{I} \left( \gamma_{bc}^{\;\;\;def} - 6 \gamma^{d} \delta_{b}^{e} \delta_{c}^{f} \right) S^{(2)}_{3} i \sigma^{2} \eta_{J} M_{IJ} \notag \\
& \qquad\qquad\qquad\qquad\qquad\quad\qquad + \frac{1}{12} \textup{e}^{\phi} F^{(3)}_{\;\;\;def} \bar{\epsilon}_{I} \left( \gamma_{bc}^{\;\;\;def} - 6 \gamma^{d} \delta_{b}^{e} \delta_{c}^{f} \right) S^{(2)}_{1} i \sigma^{2} \eta_{J} M_{IJ}=0.
\end{align}
Substituting the above equations into equation \eqref{eqn:transHeqnB}, and using equations \eqref{eqn:bian3condX}, \eqref{eqn:bian3conG1Sall} and \eqref{eqn:eom3conG5}, the NSNS 3-form equation of motion becomes
\begin{align}
& -4 \nabla_{[b} \left( \bar{\epsilon}_{I} \gamma_{c]} \eta_{J} \right) S^{(2)}_{2} M_{IJ} + 2 \partial_{[b} \phi \bar{\epsilon}_{I} \gamma_{c]} S^{(2)}_{0} i \sigma^{2} \eta_{J} M_{IJ} - \frac{1}{12} H_{def} \bar{\epsilon}_{I} \left( \gamma_{bc}^{\;\;\;def} + 18 \gamma^{d} \delta_{b}^{e} \delta_{c}^{f} \right) S^{(2)}_{3} i \sigma^{2} \eta_{J} M_{IJ}\notag \\
& \; - \frac{1}{4} \textup{e}^{\phi} F^{(3)}_{\;\;\;def} \bar{\epsilon}_{I} \left( \gamma_{bc}^{\;\;\;def} + 2 \gamma^{d} \delta_{b}^{e} \delta_{c}^{f} \right) S^{(2)}_{1} i \sigma^{2} \eta_{J} M_{IJ} - \frac{1}{6} \textup{e}^{\phi} F^{(5)}_{\;\;\;bcdef} \bar{\epsilon}_{I} \gamma^{def} \left( S^{(2)}_{0} \mathbb{I} + S^{(2)}_{1} \sigma^{1} + S^{(2)}_{3} \sigma^{3} \right) \eta_{J} M_{IJ}\notag \\
& \;\;\; - \left( \bar{\epsilon}_{I} \gamma^{a} S^{(1)} \eta_{J} \right) \left( \bar{\epsilon}_{K} \gamma_{abc} S^{(2)} \eta_{L} \right) M_{IJ} M_{KL}- \frac{1}{6} \left( \bar{\epsilon}_{I} \gamma^{def} S^{(2)} \eta_{J} \right) \left( \bar{\epsilon}_{K} \gamma_{bcdef} S^{(3)} \eta_{L} \right) M_{IJ} M_{KL}=0.
\label{eqn:transHeqn2B}
\end{align}
The supergravity fields are generic, so the terms proportional to each supergravity field in the above expression must vanish separately. In particular, consider the terms proportional to the RR 5-form field strength: the spinor bilinear that would have to vanish is exactly the one that generates the transformation of the RR 3-form field strength, so instead the coefficients themselves must vanish, $$S^{(2)}_{0} = S^{(2)}_{1} = S^{(2)}_{3}=0,$$ and without loss of generality we can set $$S^{(2)}_2=1.$$
Similarly, by using the Killing spinor equations to substitute in for $$ F^{(3)}_{\;\;\;abc} \bar{\epsilon}_{I} \gamma^{a} S^{(1)} \eta_{J} \quad \textup{and} \quad F^{(3)}_{\;\;\;def} \bar{\epsilon}_{I} \gamma_{bc}^{\;\;\;def} S^{(3)} \eta_{J} $$ in equation \eqref{eqn:transHeqnB}, instead of $F^{(1)}_{\;\;\;a} \bar{\epsilon}_{I} \gamma^{a}_{\;\,bc} S^{(2)} \eta_{J}$, we can show that $$ S^{(1)}= \sigma^{1} \quad \textup{and} \quad S^{(3)}= \sigma^{1}.$$
Letting $$ S^{(1)}= S^{(3)}= \sigma^{1} \quad \textup{and} \quad S^{(2)}= i \sigma^{2},$$ conditions (\ref{eqn:bian1condX}--\ref{conE3}) become
\begin{gather}
\partial_{a} X = \bar{\epsilon}_{I} \gamma_{a} \sigma^{3} \eta_{J} M_{IJ}, \label{eqn:condXB} \\
\bar{\epsilon}_{I} \gamma_{a}\eta_{J} M_{IJ}=0, \label{eqn:congamma1} \\
\bar{\epsilon}_I \gamma_{[b} \sigma^{1} \eta_{J} \left( \partial_{a]} M_{IJ} - \partial_{a]} X M_{IJ} \right) =0, \label{eqn:B1conquart} \\
\bar{\epsilon}_I \gamma_{[bcd} i \sigma^{2} \eta_{J} \left( \partial_{a]} M_{IJ} - \partial_{a]} X M_{IJ} \right) =0, \\
\bar{\epsilon}_I \gamma_{[bcdef} \sigma^{1} \eta_{J} \left( \partial_{a]} M_{IJ} - \partial_{a]} X M_{IJ} \right) =0, \\
\bar{\epsilon}_{I} \gamma^{a} \sigma^{1} \eta_{J} \left(\partial_{a} M_{IJ} - \partial_{a} X M_{IJ} \right) =0, \\
\bar{\epsilon}_{I} \gamma_{abc} i \sigma^{2} \eta_{J} \left(\partial^{a} M_{IJ} - \partial^{a} X M_{IJ} \right) =0. \label{eqn:E3conquart}
\end{gather}
The NSNS 3-form field strength equation of motion, equation \eqref{eqn:transHeqn2B}, becomes
\begin{align*}
& 4 \nabla_{[b} \left( \bar{\epsilon}_{I} \gamma_{c]} \eta_{J} \right) M_{IJ} + \left( \bar{\epsilon}_{I} \gamma^{a} \sigma^{1} \eta_{J} \right) \left( \bar{\epsilon}_{K} \gamma_{abc} i \sigma^{2} \eta_{L} \right) M_{IJ} M_{KL} \notag \\
& \qquad \qquad \qquad\qquad\qquad\qquad\qquad\qquad+ \frac{1}{6} \left( \bar{\epsilon}_{I} \gamma^{def} i \sigma^{2} \eta_{J} \right) \left( \bar{\epsilon}_{K} \gamma_{bcdef} \sigma^{1} \eta_{L} \right) M_{IJ} M_{KL}=0,
\end{align*}
and using equation \eqref{eqn:congamma1} this reduces to
\begin{align}
& 4 \bar{\epsilon}_{I} \gamma_{[b} \eta_{J} \partial_{c]} M_{IJ} + \left( \left( \bar{\epsilon}_{I} \gamma^{a} \sigma^{1} \eta_{J} \right) \left( \bar{\epsilon}_{K} \gamma_{abc} i \sigma^{2} \eta_{L} \right)+ \frac{1}{6} \left( \bar{\epsilon}_{I} \gamma^{def} i \sigma^{2} \eta_{J} \right) \left( \bar{\epsilon}_{K} \gamma_{bcdef} \sigma^{1} \eta_{L} \right) \right)M_{IJ} M_{KL}=0. \notag \\
& \qquad \qquad \qquad\qquad\quad\;\;
\label{eqn:transHeqn3B}
\end{align}
Fierz identities can be used to simplify the terms that are quartic in spinors. Just as the tensor product of two combinations of gamma matrices, $M$ and $N,$ can be expanded in the basis $\{\mathcal{O}_I\}= \{ \mathbb{I}, \gamma_a, i \gamma_{ab}, i \gamma_{abc}, \gamma_{abcd}, \dots\},$
\[
M^{\alpha}_{\;\; \beta} N^{\gamma}_{\;\; \delta} = 2^{-[d/2]} \sum_{I} \left( M \mathcal{O}^I N \right)^{\alpha}_{\;\; \delta} \mathcal{O}^{\; \gamma}_{I \;\; \beta},
\]
we can expand the tensor product of $2 \times 2$ matrices, $\Sigma$ and $\Xi$, in the basis $\sigma^{\mu}=(\mathbb{I}, \sigma^{1}, \sigma^{2}, \sigma^{3}),$
\[
\Sigma_{AB} \Xi_{CD} = 2^{-1} \sum_{\mu} \left( \Sigma \sigma^{\mu} \Xi \right)_{AD} \sigma^{\mu}_{CB},
\]
where uppercase Latin letters are $SO(2)$ vector indices. Hence, for type IIB theory spinors, the Fierz identity is
\[
\left( \bar{\lambda} M \Sigma \chi \right) \left( \bar{\psi} N \Xi \varphi \right) = \frac{1}{64} \sum_{I, \mu} \bar{\lambda} \left( M \mathcal{O}^I N\right) \left( \Sigma \sigma^{\mu} \Xi \right) \varphi \bar{\psi} \left( \mathcal{O}_I \sigma^{\mu} \right) \chi.
\]
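The $2\times2$ expansion above rests on the Pauli completeness relation $\sum_{\mu} \sigma^{\mu}_{AB}\, \sigma^{\mu}_{CD} = 2\, \delta_{AD}\, \delta_{CB}$. As a quick numerical sanity check, outside the derivation proper (the variable names below are ours), one can verify the $2\times2$ Fierz expansion for random complex matrices:

```python
import numpy as np

# Pauli basis sigma^mu = (I, sigma^1, sigma^2, sigma^3)
sigma = [np.eye(2, dtype=complex),
         np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]

rng = np.random.default_rng(0)
Sig = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
Xi = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))

# Sigma_AB Xi_CD = (1/2) sum_mu (Sigma sigma^mu Xi)_AD sigma^mu_CB
lhs = np.einsum('AB,CD->ABCD', Sig, Xi)
rhs = 0.5 * sum(np.einsum('AD,CB->ABCD', Sig @ s @ Xi, s) for s in sigma)
assert np.allclose(lhs, rhs)
```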
Using the Fierz identity multiple times, we can show that
\begin{align*}
& \left( \bar{\epsilon}_{I} \gamma^{a} \sigma^1 \eta_{J} \right) \left(\bar{\epsilon}_{K} \gamma_{abc} i \sigma^2 \eta_{L} \right) + \frac{1}{6} \left(\bar{\epsilon}_{I} \gamma^{a_1 \dots a_3} i \sigma^2 \eta_{J} \right) \left(\bar{\epsilon}_{K} \gamma_{a_1 \dots a_3 bc} \sigma^1 \eta_{L} \right) \\
= & -16 \left( \bar{\epsilon}_{I} \gamma_{[b} \sigma^3 \eta_{L} \right) \left(\bar{\epsilon}_{K} \gamma_{c]} \eta_{J} \right) + \left( \bar{\epsilon}_{I} \gamma^{a} i \sigma^2 \eta_{J} \right) \left(\bar{\epsilon}_{K} \gamma_{abc} \sigma^1 \eta_{L} \right)+ \frac{1}{6} \left( \bar{\epsilon}_{I} \gamma^{a_1 \dots a_3} \sigma^1 \eta_{J} \right) \left(\bar{\epsilon}_{K} \gamma_{a_1 \dots a_3 bc} i \sigma^2 \eta_{L} \right).
\end{align*}
So from equation \eqref{eqn:transHeqn3B}, the NSNS 3-form equation of motion is satisfied if
\begin{align}
& 4 \bar{\epsilon}_{I} \gamma_{[b} \eta_{J} \left( \partial_{c]} M_{IJ} + 4 \bar{\epsilon}_{K} \gamma_{c]} \sigma^3 \eta_{L} M_{KJ} M_{IL} \right) + \left(\bar{\epsilon}_{I} \gamma^{a} i \sigma^2 \eta_{J} \right) \left(\bar{\epsilon}_{K} \gamma_{abc} \sigma^1 \eta_{L} \right) M_{IJ} M_{KL}\notag \\
& \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad + \frac{1}{6} \left( \bar{\epsilon}_{I} \gamma^{a_1 \dots a_3} \sigma^1 \eta_{J} \right) \left(\bar{\epsilon}_{K} \gamma_{a_1 \dots a_3 bc} i \sigma^2 \eta_{L} \right) M_{IJ} M_{KL} =0.
\label{conEHB}
\end{align}
The dilaton equation, \eqref{dileom}, for the transformed fields simply reduces to
\begin{equation}
\Box X -2 \partial_{a} \phi \partial^{a} X - (\partial X)^{2} =0.
\label{condilB}
\end{equation}
Using $ \partial_{a} X = \bar{\epsilon}_{I} \gamma_{a} \sigma^{3} \eta_{J} M_{IJ},$ and the Killing spinor equations, the above equation reduces to
\begin{equation*}
\bar{\epsilon}_{I} \gamma^{a} \sigma^3 \eta_{J} \left(\partial_{a} M_{IJ} - \bar{\epsilon}_{K} \gamma_{a} \sigma^3 \eta_{L} M_{IJ} M_{KL} \right)=0,
\end{equation*}
which using Fierz identities becomes
\begin{align}
&\bar{\epsilon}_{I} \gamma^{a} \sigma^3 \eta_{J} \left(\partial_{a} M_{IJ} + 2 \bar{\epsilon}_{K} \gamma_{a} \sigma^3 \eta_{L} M_{IL} M_{KJ} \right) + \biggl( \left( \bar{\epsilon}_{I} \gamma_{a} \eta_{J} \right) \left( \bar{\epsilon}_{K} \gamma^{a} \eta_{L} \right) + 2 \left( \bar{\epsilon}_{I} \gamma_{a} \eta_{L} \right) \left( \bar{\epsilon}_{K} \gamma^{a} \eta_{J} \right)\notag \\
& \qquad \qquad\qquad \qquad -\frac{1}{12} \left( \bar{\epsilon}_{I} \gamma_{abc} \eta_{J} \right) \left( \bar{\epsilon}_{K} \gamma^{abc} \eta_{L} \right) -\frac{1}{12} \left( \bar{\epsilon}_{I} \gamma_{abc} \sigma^{3} \eta_{J} \right) \left( \bar{\epsilon}_{K} \gamma^{abc} \sigma^{3} \eta_{L} \right) \biggr) M_{IJ} M_{KL} =0.
\label{dilred}
\end{align}
This must hold in order for the dilaton equation to be satisfied for the transformed fields.
Now, let us consider the Einstein equation. We can use the gravitino Killing spinor equation and the constraint from the dilaton equation of motion, equation \eqref{condilB}, to show that the Einstein equation reduces to
\begin{align*}
& 2 \nabla_{(a} \left( \bar{\epsilon}_{I} \gamma_{b)} \sigma^{3} \eta_{J} M_{IJ} - \partial_{b)} X \right) - \frac{1}{4} g_{ab} \nabla_{c} \left( \bar{\epsilon}_{I} \gamma^{c} \sigma^{3} \eta_{J} M_{IJ} - \partial^{c} X \right) - 2 \bar{\epsilon}_{I} \gamma_{(a} \sigma^{3} \eta_{J} \partial_{b)} M_{IJ}\\
& \quad + \frac{1}{4} g_{ab} \bar{\epsilon}_{I} \gamma^{c} \sigma^{3} \eta_{J} \partial_{c} M_{IJ}+\frac{1}{96} \biggl( 48 \left( \bar{\epsilon}_{I} \gamma_{a} \sigma^{1} \eta_{J} \right) \left(\bar{\epsilon}_{K} \gamma_{b} \sigma^{1} \eta_{L} \right) + 24 \left( \bar{\epsilon}_{I} \gamma_{a}^{\;\;cd} i \sigma^{2} \eta_{J} \right) \left(\bar{\epsilon}_{K} \gamma_{bcd} i \sigma^{2} \eta_{L} \right)\\
& \qquad \qquad - 2 g_{ab} \left( \bar{\epsilon}_{I} \gamma_{cde} i \sigma^{2} \eta_{J} \right) \left(\bar{\epsilon}_{K} \gamma^{cde} i \sigma^{2} \eta_{L} \right) + \left( \bar{\epsilon}_{I} \gamma_{a}^{\;\;cdef} \sigma^{1} \eta_{J} \right) \left(\bar{\epsilon}_{K} \gamma_{bcdef} \sigma^{1} \eta_{L} \right) \biggr) M_{IJ} M_{KL}=0.
\end{align*}
The first two terms vanish because of equation \eqref{eqn:condXB}, and we can use Fierz identities to rewrite the terms that are quartic in spinors. Upon doing so, Einstein's equation becomes
\begin{align}
& - 2 \bar{\epsilon}_{I} \gamma_{(a} \sigma^{3} \eta_{J} \left( \partial_{b)} M_{IJ} + 2 \bar{\epsilon}_{K} \gamma_{b)} \sigma^3 \eta_{L} M_{IL} M_{KJ} \right) + \frac{1}{4} g_{ab} \bar{\epsilon}_{I} \gamma^{c} \sigma^{3} \eta_{J} \left( \partial_{c} M_{IJ} +2 \bar{\epsilon}_{K} \gamma_{c} \sigma^3 \eta_{L} M_{IL} M_{KJ} \right)\notag \\
& \qquad \qquad+\frac{1}{96} \biggl(384 \left( \bar{\epsilon}_{I} \gamma_{a} \eta_{L} \right) \left(\bar{\epsilon}_{K} \gamma_{b} \eta_{J} \right) + 48 \left( \bar{\epsilon}_{I} \gamma_{a} i \sigma^{2} \eta_{J} \right) \left(\bar{\epsilon}_{K} \gamma_{b} i \sigma^{2} \eta_{L} \right) \notag\\
& \qquad \qquad\qquad\quad + 24 \left( \bar{\epsilon}_{I} \gamma_{a}^{\;\;cd} \sigma^{1} \eta_{J} \right) \left(\bar{\epsilon}_{K} \gamma_{bcd} \sigma^{1} \eta_{L} \right)+ \left( \bar{\epsilon}_{I} \gamma_{a}^{\;\;cdef} i \sigma^{2} \eta_{J} \right) \left(\bar{\epsilon}_{K} \gamma_{bcdef} i \sigma^{2} \eta_{L} \right) \notag\\
& \qquad \qquad\qquad\qquad \quad- 48 g_{ab} \left( \bar{\epsilon}_{I} \gamma_{c} \eta_{J} \right) \left(\bar{\epsilon}_{K} \gamma^{c} \eta_{L} \right) - 2 g_{ab} \left( \bar{\epsilon}_{I} \gamma_{cde} \sigma^{1} \eta_{J} \right) \left(\bar{\epsilon}_{K} \gamma^{cde} \sigma^{1} \eta_{L} \right) \biggr) M_{IJ} M_{KL}=0.
\label{Einscon}
\end{align}
So, the transformed fields satisfy the type IIB supergravity equations if equations (\ref{eqn:condXB}--\ref{eqn:E3conquart}), \eqref{conEHB}, \eqref{dilred} and \eqref{Einscon} are satisfied. Using Fierz identities, equations (\ref{eqn:B1conquart}--\ref{eqn:E3conquart}) are equivalent to
\begin{align}
&\bar{\epsilon}_{I} \gamma_{[a} \sigma^{1} \eta_{J} \left( \partial_{b]} M_{IJ} + 2 \bar{\epsilon}_{K} \gamma_{b]} \sigma^3 \eta_{L} M_{IL} M_{KJ} \right) - \biggl( \frac{1}{4} \left( \bar{\epsilon}_{I} \gamma_{abc} \eta_{J} \right) \left(\bar{\epsilon}_{K} \gamma^{c} i \sigma^{2} \eta_{L} \right) \notag \\
& \qquad\quad+ \frac{1}{2} \left( \bar{\epsilon}_{I} \gamma_{abc} i \sigma^{2} \eta_{J} \right) \left(\bar{\epsilon}_{K} \gamma^{c} \eta_{L} \right) + \left( \bar{\epsilon}_{I} \gamma_{abc} i \sigma^{2} \eta_{L} \right) \left(\bar{\epsilon}_{K} \gamma^{c} \eta_{J} \right)- \frac{1}{4} \left( \bar{\epsilon}_{I} \gamma_{[a}^{\;\;\;cd} \sigma^{3} \eta_{J} \right) \left(\bar{\epsilon}_{K} \gamma_{b]cd} \sigma^{1} \eta_{L} \right)\notag \\
& \qquad\qquad\qquad\qquad\qquad\qquad -\frac{1}{24} \left( \bar{\epsilon}_{I} \gamma_{abcde} i \sigma^{2} \eta_{J} \right) \left(\bar{\epsilon}_{K} \gamma^{cde} \eta_{L} \right) \biggr) M_{IJ} M_{KL}=0,
\label{eqn:B1conquart2}
\end{align}
\begin{align}
&\bar{\epsilon}_{I} \gamma_{[abc} i \sigma^{2} \eta_{J} \left( \partial_{d]} M_{IJ} + 2 \bar{\epsilon}_{K} \gamma_{d]} \sigma^3 \eta_{L} M_{IL} M_{KJ} \right) - \biggl( \frac{3}{4} \left( \bar{\epsilon}_{I} \gamma_{[ab}^{\;\;\;\;\, e} \sigma^{1} \eta_{J} \right) \left(\bar{\epsilon}_{K} \gamma_{cd]e} \eta_{L} \right)\notag \\
& \qquad \quad + \frac{1}{2} \left( \bar{\epsilon}_{I} \gamma_{[abc} \sigma^{3} \eta_{J} \right) \left(\bar{\epsilon}_{K} \gamma_{d]} i \sigma^{2} \eta_{L} \right) + \frac{1}{4} \left( \bar{\epsilon}_{I} \gamma_{abcde} \sigma^{1} \eta_{J} \right) \left(\bar{\epsilon}_{K} \gamma^{e} \eta_{L} \right) + \frac{1}{2} \left( \bar{\epsilon}_{I} \gamma_{abcde} \sigma^{1} \eta_{L} \right) \left(\bar{\epsilon}_{K} \gamma^{e} \eta_{J} \right)\notag\\
& \qquad \qquad\qquad + \frac{1}{4} \left( \bar{\epsilon}_{I} \gamma_{[abc}^{\quad \;\, e f} i \sigma^{2} \eta_{J} \right) \left(\bar{\epsilon}_{K} \gamma_{d]ef} \sigma^{3} \eta_{L} \right)+ \frac{1}{48} \left( \bar{\epsilon}_{I} \gamma_{abcdefg}\eta_{J} \right) \left(\bar{\epsilon}_{K} \gamma^{efg} \sigma^{1} \eta_{L} \right) \biggr) M_{IJ} M_{KL}=0,
\label{eqn:B3conquart2}
\end{align}
\begin{align}
&\bar{\epsilon}_{I} \gamma_{[abcde} \sigma^{1} \eta_{J} \left( \partial_{f]} M_{IJ} + 2 \bar{\epsilon}_{K} \gamma_{f]} \sigma^3 \eta_{L} M_{IL} M_{KJ} \right) - \biggl( \frac{5}{3} \left( \bar{\epsilon}_{I} \gamma_{[abc} \sigma^{3} \eta_{J} \right) \left(\bar{\epsilon}_{K} \gamma_{def]} \sigma^{1} \eta_{L} \right)\notag \\
& \qquad \quad + \frac{5}{4} \left( \bar{\epsilon}_{I} \gamma_{[abcd}^{\quad \;\;\; g} i \sigma^{2} \eta_{J} \right) \left(\bar{\epsilon}_{K} \gamma_{ef]g} \eta_{L} \right) - \frac{1}{12} \left( \bar{\epsilon}_{I} \gamma_{abcdefg} \eta_{J} \right) \left(\bar{\epsilon}_{K} \gamma^{g} i \sigma^{2} \eta_{L} \right) \notag\\
& \qquad \qquad \quad \;\;\;+ \frac{1}{4} \left( \bar{\epsilon}_{I} \gamma_{[abcde}^{\qquad g h} \sigma^{1} \eta_{J} \right) \left(\bar{\epsilon}_{K} \gamma_{f]g h} \sigma^{3} \eta_{L} \right) + \frac{1}{6} \left( \bar{\epsilon}_{I} \gamma_{abcdefg} i \sigma^{2} \eta_{J} \right) \left(\bar{\epsilon}_{K} \gamma^{g} \eta_{L} \right)\notag\\
& \qquad \qquad\qquad \qquad \quad+ \frac{1}{3} \left( \bar{\epsilon}_{I} \gamma_{abcdefg} i \sigma^{2} \eta_{L} \right) \left(\bar{\epsilon}_{K} \gamma^{g} \eta_{J} \right) \biggr) M_{IJ} M_{KL}=0,
\label{eqn:B5conquart2}
\end{align}
\begin{align}
&\bar{\epsilon}_{I} \gamma^{a} \sigma^{1} \eta_{J} \left( \partial_{a} M_{IJ} + 2 \bar{\epsilon}_{K} \gamma_{a} \sigma^3 \eta_{L} M_{IL} M_{KJ} \right) - \frac{1}{12} \left( \bar{\epsilon}_{I} \gamma_{abc} \sigma^{1} \eta_{J} \right) \left(\bar{\epsilon}_{K} \gamma^{abc} \sigma^{3} \eta_{L} \right) M_{IJ} M_{KL}=0,
\label{eqn:E1conquart2}
\end{align}
\begin{align}
&\bar{\epsilon}_{I} \gamma_{abc} i \sigma^{2} \eta_{J} \left( \partial^{c} M_{IJ} + 2 \bar{\epsilon}_{K} \gamma^{c} \sigma^3 \eta_{L} M_{IL} M_{KJ} \right) + \biggl( 2 \left( \bar{\epsilon}_{I} \gamma_{[a} \eta_{J} \right) \left(\bar{\epsilon}_{K} \gamma_{b]} \sigma^{1} \eta_{L} \right) \notag \\
& \qquad \quad+ 4 \left( \bar{\epsilon}_{I} \gamma_{[a} \eta_{L} \right) \left(\bar{\epsilon}_{K} \gamma_{b]} \sigma^{1} \eta_{J} \right)- \frac{1}{2} \left( \bar{\epsilon}_{I} \gamma_{[a}^{\;\;\;cd} \eta_{J} \right) \left(\bar{\epsilon}_{K} \gamma_{b]cd} \sigma^{1} \eta_{L} \right) + \frac{1}{2} \left( \bar{\epsilon}_{I} \gamma_{abc} \sigma^{3} \eta_{J} \right) \left(\bar{\epsilon}_{K} \gamma^{c} i \sigma^{2} \eta_{L} \right)\notag \\
& \qquad \qquad\qquad\qquad\qquad\qquad - \frac{1}{12} \left( \bar{\epsilon}_{I} \gamma_{abcde} i \sigma^{2} \eta_{J} \right) \left(\bar{\epsilon}_{K} \gamma^{cde} \sigma^{3} \eta_{L} \right) \biggr) M_{IJ} M_{KL}=0,
\label{eqn:E3conquart2}
\end{align}
respectively.
A solution to equations \eqref{conEHB}, (\ref{dilred}--\ref{eqn:E3conquart2}) is
\begin{gather*}
\bar{\epsilon}_{I} \gamma_{a} \eta_{J} =0, \quad \bar{\epsilon}_{I} \gamma_{a} i \sigma^{2} \eta_{J} M_{IJ} =0, \\
\bar{\epsilon}_{I} \gamma_{abc} \eta_{J} M_{IJ} =0, \quad \bar{\epsilon}_{I} \gamma_{abc} \sigma^{1} \eta_{J} M_{IJ} =0, \\
\bar{\epsilon}_{I} \gamma_{abc} \sigma^{3} \eta_{J} M_{IJ} =0, \quad \bar{\epsilon}_{I} \gamma_{abcde} i \sigma^{2} \eta_{J} M_{IJ} =0, \\
\partial_{a} M_{IJ} = -2 \bar{\epsilon}_{K} \gamma_{a} \sigma^3 \eta_{L} M_{IL} M_{KJ}.
\end{gather*}
Therefore, the type IIB supergravity symmetry is described by the following transformations of the RR fields and dilaton
\begin{align}
\phi \rightarrow \phi' &= \phi + X,\notag \\
\textup{e}^{\phi} F^{(1)}_{\;\;\;\;a} \rightarrow \textup{e}^{\phi'} F'^{(1)}_{\;\;\;\;a} &= \textup{e}^{\phi} F^{(1)}_{\;\;\;\;a} + \bar{\epsilon}_{I} \gamma_{a} \sigma^{1} \eta_{J} M_{IJ}, \notag \\
\textup{e}^{\phi} F^{(3)}_{\;\;\;\;abc} \rightarrow \textup{e}^{\phi'} F'^{(3)}_{\;\;\;\;abc} &= \textup{e}^{\phi} F^{(3)}_{\;\;\;\;abc} + \bar{\epsilon}_{I} \gamma_{abc} i \sigma^{2} \eta_{J} M_{IJ}, \notag \\
\textup{e}^{\phi} F^{(5)}_{\;\;\;\;abcde} \rightarrow \textup{e}^{\phi'} F'^{(5)}_{\;\;\;\;abcde} &= \textup{e}^{\phi} F^{(5)}_{\;\;\;\;abcde} + \bar{\epsilon}_{I} \gamma_{abcde} \sigma^{1} \eta_{J} M_{IJ},
\label{eqn:IIBsymm}
\end{align}
where
\begin{gather}
\gamma_{11} \epsilon_{I} = \epsilon_{I}, \quad \gamma_{11} \eta_{I} = \eta_{I}, \\
\bar{\epsilon}_{I} \gamma_{a} \eta_{J} =0, \label{eqn2:congamma1} \\
\bar{\epsilon}_{I} \gamma_{a} i \sigma^{2} \eta_{J} M_{IJ} =0,\quad \bar{\epsilon}_{I} \gamma_{abc} \eta_{J} M_{IJ} =0, \quad \bar{\epsilon}_{I} \gamma_{abc} \sigma^{1} \eta_{J} M_{IJ} =0, \label{con2B} \\
\bar{\epsilon}_{I} \gamma_{abc} \sigma^{3} \eta_{J} M_{IJ} =0, \quad \bar{\epsilon}_{I} \gamma_{abcde} i \sigma^{2} \eta_{J} M_{IJ} =0, \label{con3B} \\
\partial_{a} X = \bar{\epsilon}_{I} \gamma_{a} \sigma^{3} \eta_{J} M_{IJ}, \label{XdefB} \\
\partial_{a} M_{IJ} = -2 \bar{\epsilon}_{K} \gamma_{a} \sigma^3 \eta_{L} M_{IL} M_{KJ}.
\label{eqn:MdefB}
\end{gather}
Equation \eqref{eqn:MdefB} is equivalent to
\begin{equation}
\partial_{a} (M^{-1})_{IJ} = 2 \bar{\epsilon}_{J} \gamma_{a} \sigma^3 \eta_{I},
\end{equation}
and up to a constant of integration
\begin{equation}
X = \frac{1}{2} \sum_{I=1}^{n}\left(\log M^{-1} \right)_{II}.
\end{equation}
The integrability conditions for equations \eqref{XdefB} and \eqref{eqn:MdefB} are satisfied by equation \eqref{eqn2:congamma1}.
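Tracing equation \eqref{eqn:MdefB} against $M$ gives $\partial_{a} X = \tfrac{1}{2}\, \textup{tr}\left( M\, \partial_{a} M^{-1} \right) = \tfrac{1}{2}\, \partial_{a}\, \textup{tr} \log M^{-1}$, which is Jacobi's formula. The following numerical sketch checks this for a generic positive-definite one-parameter family standing in for $M^{-1}$ (an illustrative assumption; the actual $M^{-1}$ is built from the spinor bilinears above):

```python
import numpy as np

def X_of(Minv):
    # X = (1/2) tr log M^{-1}, via eigenvalues of a positive-definite M^{-1}
    return 0.5 * np.sum(np.log(np.linalg.eigvalsh(Minv)))

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))
Minv = lambda t: A @ A.T + 10 * np.eye(3) + t * (B + B.T)  # pos.-def. family

t, h = 0.2, 1e-6
numeric = (X_of(Minv(t + h)) - X_of(Minv(t - h))) / (2 * h)
# Jacobi's formula: dX/dt = (1/2) tr(M dM^{-1}/dt), with M = (M^{-1})^{-1}
analytic = 0.5 * np.trace(np.linalg.inv(Minv(t)) @ (B + B.T))
assert abs(numeric - analytic) < 1e-6
```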
If $\epsilon_{I} = \eta_{I}$ then the equations in the lines labelled by \eqref{con2B} and \eqref{con3B} are satisfied, and the transformations are precisely the transformations found by Berkovits and Maldacena, equations \eqref{eqn:BMFtrans} and \eqref{eqn:BMdiltrans} in section \ref{review}. Furthermore, when $n=1$ these equations can be explicitly solved to show that $\epsilon \propto \eta.$ When $n>1$ this is no longer true, and the conditions can be satisfied without identifying $\epsilon_{I}$ and $\eta_{I}.$
Note that, since in our transformations it is not necessary to identify $\epsilon_{I}$ and $\eta_{I},$ we can solve $$\bar{\epsilon}_{I} \gamma_{a} \eta_{J} =0$$ for real spinors.
\section{Comments}
\label{com}
In both type IIA and type IIB supergravity we have found a larger class of transformations that include the transformations of Berkovits and Maldacena \cite{fermdual}. In both cases, when $n=1,$ these transformations are precisely the transformations found by Berkovits and Maldacena. However, for $n>1$ $$\epsilon_{I} \propto \eta_{I}$$ is sufficient but no longer necessary for the conditions given by equations (\ref{con2A1}, \ref{con2A2}) and (\ref{con2B}, \ref{con3B}) in the analysis for type IIA and type IIB supergravity, respectively, to be satisfied. Indeed, in both cases, we have found spinors $\epsilon_{I} \neq \eta_{I},$ where $I=1,2,$ for which $M_{IJ}$ is antisymmetric in its $I,J$ indices and the above-mentioned conditions hold.
In the transformations of fermionic T-duality, the spinors were complexified in order to find non-trivial solutions to \begin{equation*}
\bar{\epsilon}_{I} \gamma_{a} \epsilon_{J}=0.
\end{equation*}
Note that, in the transformations that we have constructed $\eta_{I}$ does not necessarily have to be identified with $\epsilon_{I}$ when $n>1.$ Therefore,
\begin{equation*}
\bar{\epsilon}_{I} \gamma_{a} \eta_{J}=0
\end{equation*}
can be solved for real spinors, keeping the transformation real.
Furthermore, when the two sets of Killing spinors $\epsilon_I$ and $\eta_I$ are identified, the supersymmetry of the transformed supergravity solution is the same as that of the original solution. In fact, the Killing spinors in the new background can be written explicitly in terms of the Killing spinors of the original background \cite{fermdual}, equation \eqref{newkillingspinors}. This must be true because fermionic T-duality is a duality of string theory, so the transformation must preserve supersymmetry. However, for our transformation it is not clear whether supersymmetry is preserved. If it is not, then the transformation could be a useful tool for generating backgrounds with lower supersymmetry.
In general, however, the conditions given in equations (\ref{eqn2:gamma10con}--\ref{con2A2}) and (\ref{eqn2:congamma1}--\ref{con3B}) are difficult to solve explicitly. If this symmetry, and indeed fermionic T-duality, is to be a more practical solution-generating mechanism then a new technique must be found to solve these constraints.
The original motivation for fermionic T-duality was to understand the dual superconformal invariance found in maximally supersymmetric Yang-Mills theory. Similarly, it is hoped that the dual superconformal symmetry of ABJM \cite{bargheer, huang} can be understood using fermionic T-duality in type IIA theory. The string theory dual to ABJM theory \cite{ABJM} is type IIA string theory on AdS$_4 \times \mathbb{CP}^3,$ and there has been work on trying to understand the self-duality of the AdS$_4 \times \mathbb{CP}^3$ background under a combination of T-duality and fermionic T-duality \cite{adam, grassi, ado2}. In \cite{ado2}, fermionic T-duality transformations of the partially $\kappa$-gauge fixed Green-Schwarz action are considered and found to be singular. However, the partially $\kappa$-gauge fixed action for the AdS$_4 \times \mathbb{CP}^3$ sigma model is not well-defined for all string configurations, and it is not clear in \cite{ado2} whether the singularity arises for this reason. The transformation rules for the type IIA supergravity fields derived in this paper can be used to perform the transformation from the target space point of view. Indeed, this has recently been done by Bakhmatov \cite{bakhmatov}, with results consistent with the singularity found in \cite{ado2}. Since in \cite{bakhmatov} the transformation is done solely in supergravity, this suggests that the singularity found in \cite{ado2} does not have its source in the sigma model. It is an intriguing problem to find out the origin of this singularity at the supergravity level.
Finally, in the transformation rules for type IIA supergravity besides the conditions which have analogues in the type IIB supergravity transformations we also found that
\begin{equation*}
\bar{\epsilon}_{I} \gamma_{11} \eta_{J} M_{IJ}=0
\end{equation*}
must hold. This condition may be physically interpreted as maintaining a zero Romans mass \cite{massiveIIA}, for the Romans mass can be thought of as a constant 0-form field strength \cite{D8}. This suggests that the fermionic symmetry that we have constructed for type IIA supergravity can be extended to massive type IIA supergravity. We will report on this problem in a future paper.
Another problem that we would like to address in the future is the complexity of the fermionic T-duality transformations, for in string theory the transformations cannot be made real. Understanding the physical interpretation of the complexity may reveal important, hitherto unknown, aspects of string theory.
\section*{Acknowledgements}
We would like to thank D. Berman, J. Gutowski, A. Maharana, J. Santos and M. Wolf for discussions. HG is supported by the STFC.
\section{Introduction}
One of the major components of the physics program at the LHC
will be a detailed study of the properties of the top quark.
In this talk, we report on recent theoretical work on how to
observe spin correlations in the production and decay of top
quark pairs via gluon fusion\cite{ref:MPlhctop}.
The top quark is unique among the quarks in the Standard Model
in that its decay lifetime is so short that it is predicted to
decay before spin decorrelation takes place\cite{ref:Bigi}.
Thus, top quarks produced in a particular spin state will pass
information about that state on to their decay products,
and an investigation of the angular correlations among the decay
products in select $t\bar{t}$ events can
shed light on the details of the production and decay mechanisms.
The mere observation of $t\bar{t}$ spin correlations would provide
an upper limit on the top quark lifetime; conversely, the observation
of a lack of significant $t\bar{t}$ spin correlations would indicate
the participation of the top quark in some sort of new strong
interaction that causes spin decorrelation to occur more quickly than
predicted by the Standard Model considerations of Ref.~\cite{ref:Bigi}.
Thus, in the near-term, establishing the presence or absence of
these correlations is a worthwhile physics goal. In the longer term,
a detailed study of as many aspects of the correlations as possible
will provide a series of additional wide-ranging tests of the
Standard Model.
The outline of
this talk is as follows. We will begin
with a detailed examination of the spin structure of the main
top pair production mechanisms at the Tevatron
($q\bar{q}\rightarrow t\bar{t}$) and LHC ($gg\rightarrow t \bar{t}$),
with a particular emphasis on elucidating the spin state(s) of the
final $t\bar{t}$ pairs. In Sec.~\ref{sec:decays} we will examine the
angular distributions associated with the decay of polarized top
quarks. Combining the production properties with the decays will
lead us to the strategies for observing the spin correlations.
These strategies will be different for the Tevatron (Sec.~\ref{sec:FNAL})
and LHC (Sec.~\ref{sec:LHC}).
In Sec.~\ref{sec:future} we will briefly consider
some of the physics that could
be accomplished with ${\cal O}(10^6)$ $t\bar{t}$ pairs at our
disposal. Finally, Sec.~\ref{sec:conclusions} contains a summary
and our conclusions. Additional details
may be found in Ref.~\cite{ref:MPlhctop}.
\section{Detailed spin structure in top quark pair production}
The main two top quark pair production mechanisms at hadron colliders
are $q\bar{q}\rightarrow t\bar{t}$, which dominates
at the Fermilab Tevatron, and $gg\rightarrow t\bar{t}$, which
dominates at the LHC. In this section, we examine the spin
structure of each of these processes in some detail.
The process $q\bar{q}\rightarrow t\bar{t}$ proceeds through an
$s$-channel gluon; in order to couple to this gluon, the incoming
$q\bar{q}$ pair must possess opposite helicities. The initial state
is thus either $q_R \bar{q}_L$ or $q_L \bar{q}_R$. We will
focus our discussion on $q_R \bar{q}_L$: results for $q_L \bar{q}_R$
may be generated by flipping all of the spins in the initial and
final states.
The top quark pairs produced from the $q_R \bar{q}_L$ initial state are well-described
by the off-diagonal spin basis\cite{ref:OD}
(see Fig.~\ref{fig:qqspins}):
at leading
order the top quarks have opposite spins ($t_\uparrow\bar{t}_\downarrow$ or
$t_\downarrow\bar{t}_\uparrow$) 100\% of the time using this
spin quantization axis.
At threshold, the off-diagonal basis coincides with
the beamline basis\cite{ref:MP1}, the appropriate
choice of quantization axis in the $\beta\rightarrow0$
limit\cite{ref:thinkshop}. At the other extreme ($\beta\rightarrow1$),
the off-diagonal basis coincides with the helicity basis.
Between these two extremes, the
off-diagonal basis smoothly interpolates
from the beamline basis at small $\beta$ to the
helicity basis in the ultra-relativistic
limit while maintaining a
spin state consisting of 100\% opposite spin
$t\bar{t}$ pairs.
\noindent
\begin{figure}[hbt]
\hfil\includegraphics[height=1.6truein,keepaspectratio=true]{Fig01.eps}
\caption[]{The spin configurations for the process
$q_R \bar{q}_L \rightarrow t\bar{t}$ are best described by the
off-diagonal basis. The angle between the top quark momentum and
the spin vector in the zero momentum frame
is given by $\tan\Omega = (1-\beta^2)\tan\theta$
where $\beta$ is the speed of the top quark in the ZMF.
The ratio of $t_{\uparrow} \bar{t}_{\downarrow}$ to $t_{\downarrow} \bar{t}_{\uparrow}$
production is
$(1+\sqrt{1-\beta^2\sin^2\theta})^2:(1-\sqrt{1-\beta^2\sin^2\theta})^2$.
The results for the $q_L \bar{q}_R$ initial state may be obtained
by reversing all of the spins in the diagram. The same spin
structure also applies to $gg\rightarrow t\bar{t}$ with
opposite helicity gluons.}
\label{fig:qqspins}
\end{figure}
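The geometry in Fig.~\ref{fig:qqspins} is straightforward to evaluate. The following sketch (the function names are ours) encodes the caption's formulas and checks the two limits quoted in the text: $\Omega \rightarrow \theta$ at threshold (beamline basis) and $\Omega \rightarrow 0$ as $\beta \rightarrow 1$ (helicity basis):

```python
import numpy as np

def omega(beta, theta):
    # Off-diagonal-basis spin-axis angle in the ZMF:
    # tan(Omega) = (1 - beta^2) tan(theta)
    return np.arctan((1 - beta**2) * np.tan(theta))

def updown_ratio(beta, theta):
    # N(t_up tbar_down) : N(t_down tbar_up) for q_R qbar_L -> t tbar
    r = np.sqrt(1 - beta**2 * np.sin(theta)**2)
    return (1 + r)**2 / (1 - r)**2

theta = np.pi / 3
assert np.isclose(omega(0.0, theta), theta)   # threshold: beamline basis
assert omega(0.999, theta) < 1e-2             # ultra-relativistic: helicity basis
assert updown_ratio(0.6, theta) > 1           # up-down configuration dominates
```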
\noindent
The gluon fusion top pair production mechanism possesses a
very rich spin structure: the spin state of the $t\bar{t}$ is
different for different gluon helicities.
When the initial gluons have opposite
helicity, the $t\bar{t}$
spin state is the same as for $q\bar{q}$ production
(Fig.~\ref{fig:qqspins}).
Thus, the off-diagonal basis provides the description with
the maximum $t\bar{t}$ spin correlation in this case.
Opposite helicity gluon pairs form the
dominant contribution to $gg\rightarrow t\bar{t}$
when $\beta\gamma\sin\theta > 1$,
or, equivalently, when $\beta^2 > 1/(2-\cos^2\theta)$.
On the other hand, when $\beta\gamma\sin\theta <1$, like-helicity
gluon pairs form the dominant contribution.
In this case, it turns out that the helicity basis provides the
best description of the $t\bar{t}$ spin correlations,
no matter
what the value of $\beta$.
Like-helicity gluons produce like-helicity $t\bar{t}$ pairs.
The ratio of $t_R\bar{t}_R$ to $t_L\bar{t}_L$ production from
a $g_R g_R$ initial state is given by $(1+\beta)^2:(1-\beta)^2$.
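The boundary between the two regimes and the like-helicity production ratio can be made concrete in a short sketch (the function names are ours); it checks the quoted equivalence $\beta\gamma\sin\theta = 1 \Leftrightarrow \beta^2 = 1/(2-\cos^2\theta)$:

```python
import numpy as np

def opposite_helicity_dominates(beta, theta):
    # Opposite-helicity gluon pairs dominate when beta*gamma*sin(theta) > 1
    gamma = 1.0 / np.sqrt(1 - beta**2)
    return beta * gamma * np.sin(theta) > 1

def rr_to_ll_ratio(beta):
    # t_R tbar_R : t_L tbar_L production from a g_R g_R initial state
    return (1 + beta)**2 / (1 - beta)**2

theta = np.pi / 2
beta_star = np.sqrt(1 / (2 - np.cos(theta)**2))  # quoted boundary in beta
assert opposite_helicity_dominates(beta_star + 1e-6, theta)
assert not opposite_helicity_dominates(beta_star - 1e-6, theta)
assert rr_to_ll_ratio(0.5) == 9.0  # (1.5)^2 / (0.5)^2
```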
\section{Decay of polarized top quarks}\label{sec:decays}
We now consider the angular distributions associated
with the decay of spin up top quarks. We define the decay angles
in the top quark rest frame;
we will denote the angle between the $i$th particle and the top
quark spin direction by $\theta_i$. Then, the decay angular
distribution takes the simple form\cite{ref:LOalpha}
\begin{equation}
{{1}\over{\Gamma}}
{{\drm\Gamma}\over{\drm(\cos\theta_i)}}
= {{1}\over{2}}( 1 + \alpha_i \cos\theta_i).
\end{equation}
The analyzing power $\alpha_i$ depends on which decay product we
consider; the values of $\alpha_i$ at next-to-leading
order\cite{ref:NLOalpha}
are collected in Table~\ref{tab:alphas}.
\begin{table}
\caption{Analyzing powers $\alpha$ for both semi-leptonic
and hadronic top quark decays at next-to-leading order.
The coefficients for $u,d,s,c$ and $b$ are for partons: the values
for jets differ slightly at NLO (see Ref.~\cite{ref:NLOalpha}).}
\label{tab:alphas}
\begin{narrowtabular}{2cm}{ccrc}
\hline
& Decay product & $\alpha\quad$ & \\
\hline
& $\ell^{+}$ & $0.998$ & \\
& $\bar{d}, \bar{s}$ & $0.966$ & \\
& $\nu$ & $-0.314$ & \\
& $u, c$ & $-0.317$ & \\
& $b$ & $-0.393$ & \\
\hline
\end{narrowtabular}
\end{table}
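The single-particle distribution can be sampled by inverting its CDF, $F(c) = \tfrac{1}{2}(c+1) + \tfrac{\alpha}{4}(c^2-1)$, which gives $c = \bigl(-1 + \sqrt{(1-\alpha)^2 + 4\alpha u}\bigr)/\alpha$ for uniform $u$. The sketch below (our own illustration, not part of the analyses cited here) checks the implied mean $\langle \cos\theta_i \rangle = \alpha_i/3$ for the charged-lepton analyzing power of Table~\ref{tab:alphas}:

```python
import numpy as np

def sample_costheta(alpha, n, rng):
    # Inverse-CDF sampling of (1/2)(1 + alpha*cos(theta)); implies E[cos] = alpha/3
    u = rng.random(n)
    return (-1 + np.sqrt((1 - alpha)**2 + 4 * alpha * u)) / alpha

rng = np.random.default_rng(2)
alpha_lep = 0.998  # charged-lepton analyzing power
c = sample_costheta(alpha_lep, 200_000, rng)
assert np.all((c > -1.001) & (c < 1.001))
assert abs(c.mean() - alpha_lep / 3) < 5e-3
```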
The normalized distribution
for the complete $2\rightarrow6$ production and decay process
is
\begin{equation}
{ {1}\over{\sigma} }
{
{\drm^2\sigma}
\over
{ \drm(\cos\theta_i)
\drm(\cos\bar\theta_{\bar\imath})}
}
= {{1}\over{4}}
\Bigl[
1 + {{ N_\parallel - N_\times }\over{N_\parallel+N_\times}}
\alpha_i \bar\alpha_{\bar\imath}
\cos\theta_i \cos\bar\theta_{\bar\imath}
\Bigr].
\label{eq:doublediff}
\end{equation}
Eq.~(\ref{eq:doublediff}) clearly displays the dependence on
production ($N_\parallel$ is the number of events with
like-spin $t\bar{t}$ pairs; $N_{\times}$ is the number of
events with opposite-spin $t\bar{t}$ pairs) and decay
($\theta_i,\alpha_i$ refer to the $t$ side of the event;
$\bar\theta_{\bar\imath},\bar\alpha_{\bar\imath}$ refer to
the $\bar{t}$ side of the event; the angles are measured
in the respective rest frames of the $t$ and $\bar{t}$\thinspace).
The next-to-leading-order corrections to this distribution
have been presented in Refs.~\cite{ref:NLOdouble1,ref:NLOdouble2};
these corrections will play an important
role in future precision measurements
of the correlations.
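A useful consistency check of Eq.~(\ref{eq:doublediff}) is the correlation it implies, $\langle \cos\theta_i \cos\bar\theta_{\bar\imath} \rangle = \kappa\, \alpha_i \bar\alpha_{\bar\imath}/9$, where $\kappa = (N_\parallel - N_\times)/(N_\parallel + N_\times)$. The sketch below (with illustrative values of $\kappa$ and the analyzing powers) verifies this by midpoint integration:

```python
import numpy as np

kappa, a, abar = 0.5, 0.998, -0.314  # illustrative asymmetry; lepton/neutrino alphas
h = 1e-3
c = np.arange(-1 + h / 2, 1, h)      # midpoint grid on [-1, 1]
C, Cbar = np.meshgrid(c, c)
w = 0.25 * (1 + kappa * a * abar * C * Cbar)  # Eq. (doublediff) joint density
mean = np.sum(C * Cbar * w) * h * h           # <cos(theta) cos(thetabar)>
assert abs(mean - kappa * a * abar / 9) < 1e-5
```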
\section{Tevatron strategy for observing spin correlations}\label{sec:FNAL}
At the Fermilab Tevatron, the strategy for observing the spin
correlations in $t\bar{t}$ production and decay
consists of reconstructing
the double decay angular distribution
Eq.~(\ref{eq:doublediff}) from the data.
An examination of Eq.~(\ref{eq:doublediff}) indicates how to maximize
the size of the correlations.
First, one should
choose a spin quantization axis that maximizes the asymmetry
$(N_{\parallel}-N_{\times})/(N_{\parallel}+N_{\times})$. Since the
dominant process at the Tevatron is $q\bar{q}\rightarrow t \bar{t}$,
this means using the off-diagonal basis, although in the long
term it will be worthwhile to do the measurement using the helicity
and beamline bases as well. Measurements with multiple spin bases
probe different aspects of the $q\bar{q}\rightarrow t\bar{t}$ matrix
element and provide a cross-check on the analysis. Furthermore, the
sources of systematic uncertainty are likely to vary greatly
for different spin bases.
Second, one should choose $t$ and $\bar{t}$ decay products
with large analyzing powers (\idest\ the charged lepton or $d$-type
quark). This consideration suggests looking in the dilepton
or lepton+jets modes. The dilepton mode has the advantage of the
maximum possible analyzing power, but suffers from a small branching
ratio and the complications associated with the two (unobserved)
neutrinos in the final state. The lepton+jets mode couples higher
statistics with a much better ability to reconstruct the $t$ and
$\bar{t}$ rest frames, but has a lower analyzing power since the
jet associated with the
$d$-type quark must be selected probabilistically.
Eventually, given sufficient statistics, it will be
useful to measure the correlations for as many different combinations
of decay products as possible.
Although this is a very difficult analysis, both CDF
and D0 have reported initial attempts to observe
these correlations
despite the limited data set currently available.
At present, with only about half of the approximately 7~fb$^{-1}$
delivered by the Tevatron analyzed so far, no unambiguous ($>3\sigma$)
sign of the correlations has been seen; however, both CDF and D0
plan on updates as well as a combined result
in the near future\cite{ref:headproc}.
\section{LHC strategy for observing spin correlations}\label{sec:LHC}
The dominant top quark pair production mechanism at the LHC is
gluon fusion. The detailed study of this process using an arbitrary
spin axis performed in Ref.~\cite{ref:MPlhctop} leads us to the
following two conclusions:
When $\beta\gamma\sin\theta < 1$, the best
spin basis is the one which maximizes the like-spin fraction
(like-helicity gluons dominate in this region).
On the other hand, when $\beta\gamma\sin\theta>1$, the best
spin basis is the one which maximizes the opposite-spin fraction
(opposite-helicity gluons dominate in this region).
Because the LHC will be a copious source of $t\bar{t}$ pairs
(about $10^6$ $t\bar{t}$ pairs per fb$^{-1}$ at full beam
energy; even at reduced energy there will be $10^4$ to $10^5$
pairs per fb$^{-1}$), there will be room to implement significant
cuts on the data before statistical uncertainties become
comparable to the systematic uncertainties. Thus, we will focus
on the $\beta\gamma\sin\theta < 1$ region. In this region,
like-helicity gluons producing like-helicity $t\bar{t}$ pairs dominate.
Since $\beta$ is restricted to fairly moderate values in most of this
region, the spin correlations will not be masked by large boosts, and
one could, in principle, employ the same analysis as has been used
at the Tevatron. Another option exists, however.
In order to motivate this alternative, we will examine the ratio
\begin{equation}
{\cal S} \equiv
{
{(\vert{\cal A}\vert^2_{RR} + \vert{\cal A}\vert^2_{LL})_{\rm corr}}
\over
{(\vert{\cal A}\vert^2_{RR} + \vert{\cal A}\vert^2_{LL})_{\rm uncorr}}
}
\end{equation}
which compares the sum of the squares of the like-helicity amplitudes
for the fully-correlated $gg\rightarrow t\bar{t}$ matrix
elements to the same sum for matrix elements calculated with
a toy model employing top quark decays which are spherical
in their rest frames (but with all other decays -- \idest\
the $W$'s -- fully correlated). In terms of the cosines of the
angles between various pairs of particles in the zero momentum frame
(ZMF) of the $t\bar{t}$ pair and the ZMF speed of the top quarks,
\begin{equation}
{\cal S} =
\Biggl[ {{1-\beta^2}\over{1+\beta^2}} \Biggr]
\Biggl[
{
{ (1+\beta^2)
+(1-\beta^2)c_{\bar{e}\mu}
-2\beta^2 c_{t\bar{e}} c_{\bar{t}\mu} }
\over
{ (1-\beta c_{t\bar{e}})
(1-\beta c_{\bar{t}\mu}) }
\Biggr],
\end{equation}
which reduces to $1+c_{\bar{e}\mu}$ for $\beta\rightarrow 0$.
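The small-$\beta$ behavior quoted above is easy to check numerically; the sketch below evaluates ${\cal S}$ for arbitrary (invented) direction cosines and confirms that it collapses to $1+c_{\bar{e}\mu}$ as $\beta\rightarrow 0$.

```python
def S(beta, c_te, c_tbarmu, c_emu):
    """Ratio of correlated to spherical-decay like-helicity |A|^2 sums,
    in terms of ZMF direction cosines and the top-quark speed beta."""
    pre = (1 - beta**2) / (1 + beta**2)
    num = (1 + beta**2) + (1 - beta**2) * c_emu - 2 * beta**2 * c_te * c_tbarmu
    den = (1 - beta * c_te) * (1 - beta * c_tbarmu)
    return pre * num / den

# beta -> 0 limit: S -> 1 + c_emu, independent of the other two angles
c_emu = 0.3
print(abs(S(1e-6, 0.5, -0.4, c_emu) - (1 + c_emu)) < 1e-5)  # -> True
```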
So, at smallish values of $\beta$, the difference between the
correlated and uncorrelated versions of the matrix elements is
sensitive to the angle between the two charged leptons, suggesting
that we examine $\Delta\phi$, $\Delta\eta$, and $\Delta R$ for
these two particles. Of these three distributions, $\Delta\phi$
turns out to be the most sensitive to the presence or absence of
$t\bar{t}$ spin correlations. This distribution also enjoys the
advantage of being invariant under longitudinal boosts; once
the event sample has been selected, this
measurement can be made in the laboratory frame without the
complications associated with the reconstruction of and boost
to some special frame of reference\footnote{Unfortunately, the
$\Delta\phi$ distribution in $q\bar{q}\rightarrow t\bar{t}$ is
insensitive to the presence or absence of spin correlations. Thus,
this observable is not particularly interesting at the Tevatron.}.
In Fig.~\ref{fig:dphi}, we present a comparison of the correlated
and uncorrelated distributions in $\Delta\phi$ for the two
leptons in a sample of dilepton events. In the plot on the left,
the events were required to have a $t\bar{t}$ invariant mass
of less than 400 GeV to ensure that they sample
the like-helicity-gluon-dominated $\beta\gamma\sin\theta<1$
region of phase space. Clearly this distribution has significant
sensitivity to the presence or absence of spin correlations.
Unfortunately, the pair of neutrinos in these events makes a
full reconstruction of the event
impossible\footnote{Although there are a total of 8 constraint equations
for the 8 unknown components of the $\nu$ and $\bar\nu$ 4-momenta, two
of these constraints are quadratic, leading to up to 4 distinct
solutions for each of the two possible pairings of
$W$-bosons (leptons) with $b$ jets. Thus, there are up to 8 distinct
values of $m_{t\bar{t}}$ generated by the reconstruction program.},
and we must look for
an alternate means of selecting the $\beta\gamma\sin\theta<1$ region.
In the plot on the right, the event selection is based on the
value of the (na{\"\i}ve) unweighted average of the
various $m_{t\bar{t}}$ values produced by the neutrino reconstruction
routine.
Cutting on this quantity has the unfortunate side-effect of producing
a systematic depletion of events near $\Delta\phi=\pi$; however,
this depletion affects both models in a similar fashion and
significant discriminating power remains. What we have here is a
proof-of-concept: to do the actual measurement it will be necessary
to understand the detector systematics and NLO corrections to
the $\Delta\phi$ distribution very well. Left open are the questions of
whether a better substitute for the true value of $m_{t\bar{t}}$
exists as well as the optimal maximum value of $m_{t\bar{t}}$ to use
in selecting the data.
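The event selection just described can be sketched as follows; the solution lists are invented, the 400~GeV threshold mirrors the text, and the function names are ours.

```python
def naive_mtt_average(mtt_solutions):
    """Unweighted mean of the (up to 8) reconstructed m_ttbar values
    returned by the dilepton neutrino-reconstruction routine."""
    return sum(mtt_solutions) / len(mtt_solutions)

def select_low_mass(events, max_mtt=400.0):
    """Keep events whose naive average m_ttbar is below max_mtt (GeV),
    enriching the like-helicity-gluon beta*gamma*sin(theta) < 1 region."""
    return [ev for ev in events if naive_mtt_average(ev) < max_mtt]

# hypothetical events, each a list of reconstructed m_ttbar solutions (GeV)
events = [[350.0, 380.0, 410.0], [520.0, 560.0], [360.0, 370.0]]
print(len(select_low_mass(events)))  # -> 2
```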
\noindent
\begin{figure}[hbt]
\hfil\includegraphics[height=1.9truein,keepaspectratio=true]{Fig02.eps}
\caption[]{The differential distribution in $\Delta\phi$,
$(1/\sigma_{{}_T}) \drm\sigma/\drm(\Delta\phi)$.
The solid curve is for the fully correlated case whereas the
dashed curve assumes that the top quarks decay spherically in their
respective rest frames. In the plot on the left, a cut restricting
the (true) invariant mass of the $t\bar{t}$ pairs to a maximum of
400~GeV has been applied to the distributions; in the plot on the
right, the cut restricts the (na{\"\i}ve) unweighted average
of all of the reconstructed values of $m_{t\bar{t}}$ for each event
to a maximum of 400~GeV.}
\label{fig:dphi}
\end{figure}
We now turn to the lepton+jets mode. This channel has the advantage
of larger statistics than the dilepton mode, and only a single,
over-constrained neutrino to be reconstructed. Thus, it is
possible to do a good job of locating the ZMF and calculating
$m_{t\bar{t}}$. This mode has one disadvantage, however: it is not
possible to know with 100\% certainty which of the two $W$-decay jets
corresponds to the $d$-type quark. Thus, it is necessary to use
our best informed guess to select this jet on a probabilistic basis.
The authors of Ref.~\cite{ref:MP1} suggest using the jet which has the
smallest spatial separation from the $b$ jet in the $W$ rest frame;
this is equivalent to selecting the jet with the lowest energy in the
$t$ rest frame\cite{ref:denergy}. Although this is the correct
choice only about 60\% of the time, it is good enough to provide
an analyzing power of 0.4734 to 0.4736 at NLO\cite{ref:NLOalpha},
depending on what jet reconstruction algorithm is used.
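The probabilistic $d$-jet choice described above reduces to a simple comparison once the two $W$-decay jet energies have been boosted to the top rest frame; the sketch below (with invented energies and our own function name) encodes that rule.

```python
def pick_d_jet_candidate(jet1_E_top_frame, jet2_E_top_frame):
    """Choose the W-decay jet with the lower energy in the top-quark
    rest frame (equivalent to the jet spatially closest to the b jet
    in the W rest frame); correct only ~60% of the time, but enough
    for an analyzing power of about 0.47."""
    return 0 if jet1_E_top_frame < jet2_E_top_frame else 1

# hypothetical jet energies in the top rest frame (GeV)
print(pick_d_jet_candidate(45.0, 62.0))  # -> 0
```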
Because the ZMF is well-determined in lepton+jets mode, it is possible
to examine the actual opening angle between the lepton and the $d$-jet
candidate (Fig.~\ref{fig:cth}).
\noindent
\begin{figure}[bt]
\hfil
\includegraphics[height=1.6truein,keepaspectratio=true]{Fig03.eps}
\hfill
\caption[]{The differential distribution in $\cos\theta$,
$(1/\sigma_{{}_T}) \drm\sigma/\drm(\cos\theta)$, where $\theta$
is the ZMF angle between the charged lepton and the $d$-quark jet
(defined to be the jet which is spatially closest to the $b$-tagged
jet in the $W$ rest frame; this is also the jet with the lowest energy
in the top quark rest frame). The solid curve is for the fully-correlated
case whereas the dashed curve assumes that the top quarks decay
spherically in their respective rest frames. A cut restricting the
invariant mass of the $t\bar{t}$ pairs to a maximum of 400~GeV
has been applied to these distributions.
}
\label{fig:cth}
\end{figure}
Taking half of the area between
the correlated and uncorrelated curves to be a measure of the
potential sensitivity of this observable, we note that this half-area
is equal to 0.07 for the $\cos\theta_{\bar{e}d}$ distribution,
as compared to a value of 0.11 for the $\Delta\phi_{\bar{e}\mu}$
distribution. However, this somewhat reduced sensitivity could
easily be offset by the higher statistics of the lepton+jets mode.
Furthermore, the systematics of this measurement are certainly very
different than those associated with the $\Delta\phi$ distribution;
thus, further investigation of this variable by the experimental
collaborations is warranted.
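The ``half of the area between the curves'' sensitivity measure used above can be sketched for two unit-normalized binned distributions; the bin contents below are invented for illustration.

```python
def half_area_between(dist_a, dist_b, bin_width):
    """Half the integrated absolute difference between two
    unit-normalized binned distributions; 0 means identical shapes."""
    return 0.5 * bin_width * sum(abs(a - b) for a, b in zip(dist_a, dist_b))

# two toy 4-bin shapes, each normalized so sum(bins) * bin_width = 1
w = 0.25
corr   = [1.6, 1.2, 0.8, 0.4]
uncorr = [1.0, 1.0, 1.0, 1.0]
print(abs(half_area_between(corr, uncorr, w) - 0.2) < 1e-12)  # -> True
```

Values of 0.07 ($\cos\theta_{\bar{e}d}$) versus 0.11 ($\Delta\phi_{\bar{e}\mu}$) can then be compared on an equal footing.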
Before closing this section on the LHC, we present
Fig.~\ref{fig:energydep}, which summarizes the effect of
running the LHC at reduced energy on the ability to observe
$t\bar{t}$ spin correlations. The conclusion to be drawn from
the three plots in Fig.~\ref{fig:energydep} is that the biggest
penalty of running at reduced energy comes from the much
smaller $t\bar{t}$ production cross section: the size of the
spin correlations is essentially unchanged as one reduces
$\sqrt{s}$ from 14~TeV to 7~TeV. Thus, the prospects of observing
these correlations with the data collected during the early stages
of running are good.
\noindent
\begin{figure}[hbt]
\hfil\includegraphics[height=1.6truein,keepaspectratio=true]{Fig04.eps}
\caption[]{Effects of varying the machine center-of-mass energy
$\sqrt{s}$. (a) Total leading order cross section for
$pp\rightarrow{t\bar{t}}$. These values should be multiplied by
the branching fraction to dileptons (4.6\%) or lepton-plus-jets
(29\%), as appropriate. We include only the $e$ and $\mu$ channels.
(b) Fraction of dilepton and lepton plus jets events with
$m_{t\bar{t}}<400$~GeV. For dilepton events (crosses) we employ
the unweighted average of the up to 8 solutions for the $t\bar{t}$
invariant mass. For lepton+jets events (diamonds) the true value
of $m_{t\bar{t}}$ may be reconstructed and used in event selection.
(c) Half of the area between the appropriate unit-normalized angular
distributions for the fully correlated and spherical cases. For
leptons+jets events (crosses), we use the distribution in $\cos\theta$,
where $\theta$ is the angle between the charged lepton and the $d$-jet
candidate in the zero momentum frame of the event. For dilepton
events (diamonds), we use the azimuthal opening angle $\Delta\phi$
between the two charged leptons.
}
\label{fig:energydep}
\end{figure}
\section{The future}\label{sec:future}
Before concluding, let us take a moment or two to comment on the
sorts of precision tests of the Standard Model
that could be done with the few million $t\bar{t}$
pairs that will ultimately be available at the LHC, beginning
with the couplings between gluons and the top quark.
All three of the diagrams for $gg\rightarrow t\bar{t}$ influence
the spin correlations;
these diagrams are related in a
very specific manner so as to satisfy SU(3) gauge invariance.
Thus, a test of the spin correlations can be viewed as a test of
QCD gauge invariance.
The top quark decay time scale is expected to be much shorter
than the spin decorrelation time scale. This fact
offers us the opportunity to perform a direct test
of the predicted $V-A$ structure of the $Wtb$ vertex; the top is
the only quark for which such a test is possible.
By measuring the size of the correlations for different combinations
of decay products, it will be possible, with sufficient data, to
perform measurements of the analyzing powers of all of the
top quark decay products listed in
Table~\ref{tab:alphas}\footnote{Since the neutrino is over-constrained
in lepton+jets mode, a measurement of $\alpha_\nu$ may be possible, and
a detailed feasibility study should be performed to investigate this
possibility.}.
In addition, one ought to look for
non-Standard Model couplings and/or couplings to new particles, such as
Kaluza-Klein gluons, ``extra'' Higgs bosons, or ``extra'' $Z$-bosons.
Should a new particle that couples strongly to the top
quark be observed as
a resonance in the $t\bar{t}$ channel, then the spin correlations
of the $t\bar{t}$ pairs produced in the resonance region of phase
space would bear the imprint of the spin and parity of the
new particle and serve as a useful diagnostic in identifying
the nature of the new physics.
\section{Summary and conclusions}\label{sec:conclusions}
In summary, we have seen that the Tevatron and LHC probe two different
aspects of spin correlations in top quark pair production and decay.
At the Tevatron, the production cross section is dominated by
$q\bar{q}\rightarrow t \bar{t}$. For this process, the off-diagonal
spin basis provides the largest signal of spin correlations.
The strategy for observing the correlations at the Tevatron involves
extracting a joint decay distribution utilizing decay angles
in the $t$ and $\bar{t}$ rest frames. At present, this challenging
measurement is limited by low statistics.
At the LHC, the production cross section is dominated by
$gg\rightarrow t \bar{t}$.
At high values of the $t\bar{t}$ invariant mass,
opposite-helicity gluons producing opposite-spin $t\bar{t}$ pairs dominate
the cross section; the correlations involved are the same as
for $q\bar{q}\rightarrow t\bar{t}$. At low values of
the $t\bar{t}$ invariant mass, like-helicity gluons dominate the
cross section; these gluons primarily
produce like-helicity $t\bar{t}$ pairs. By cutting on the value
of $m_{t\bar{t}}$, it is possible to enhance the contributions from
either like or opposite helicity gluon pairs. In addition to a
repeat of a Tevatron-style analysis, promising observables for
observing spin correlations include the azimuthal opening angle
between the two leptons in dilepton events and the cosine of the
angle between the lepton and $d$-jet candidate in the
ZMF in lepton+jets mode. With millions of $t\bar{t}$ pairs
on the horizon, precision (\%-level) measurements of the various
correlation parameters should be possible during the next decade.
\acknowledgments
The author would like to thank Stephen Parke for providing
his insight and
unique perspective on many of the topics associated with this work.
Funding to present this talk in Bruges was provided by
the Office of the Director of Academic Affairs
at Penn State Mont Alto,
the Penn State Mont Alto Faculty Affairs Committee Professional
Development Fund,
and the
Eberly College of Science Professional Development Fund.
\section{Introduction}
\label{INT}
One-dimensional (1D) interacting systems are characterized by a breakdown of the basic Fermi liquid quasiparticle picture.
Indeed, no quasiparticles with the same quantum numbers as the electrons exist when the motion is restricted to a single
spatial dimension. Rather, in a 1D lattice, correlated electrons split into basic fractionalized charge-only and spin-only
particles \cite{Blumenstein_11,Voit_95}. Hence the removal or addition of electrons generates an energy continuum of
excitations described by these exotic fractionalized particles, which are not adiabatically connected to free electrons
and must therefore be described using a different language.
These models share common low-energy properties associated with the universal class of the Tomonaga-Luttinger liquid (TLL) \cite{Blumenstein_11,Voit_95,Tomonaga_50,Luttinger_63}. To access their high-energy dynamical correlation functions
beyond the low-energy TLL limit, approaches such as the pseudofermion dynamical theory (PDT) \cite{Carmelo_05}
or the mobile quantum impurity model (MQIM) \cite{Imambekov_09,Imambekov_12} must be used. Those approaches
incorporate nonlinearities in the dispersion relations of the fractionalized particles.
An important low-energy TLL property of 1D correlated electronic metallic systems is the universal power-law scaling of the
spectral intensity $I (\omega,T)$ such that $I (0,T) \propto T^{\alpha}$ and $I (\omega,0)\propto\vert\omega\vert^{\alpha}$.
Here the exponent $\alpha$ controls the suppression of the density of states (SDS) and $\omega$ is a small excitation
energy near the ground-state level. The value of the SDS exponent $\alpha = (1-K_c)^2/(4K_c)$ is determined by that of the
TLL charge parameter $K_c$ \cite{Blumenstein_11,Voit_95,Schulz_90}. Importantly, this exponent provides useful
information about the range of the underlying electron interactions.
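The relation between the TLL charge parameter and the SDS exponent quoted above is straightforward to evaluate; as a sanity check, the sketch below reproduces the integrable-model bound $\alpha = 1/8$ at $K_c = 1/2$ and the value $\alpha \approx 0.60$ listed for purple bronze in Table \ref{table1}.

```python
def sds_exponent(Kc):
    """SDS exponent alpha = (1 - Kc)^2 / (4 Kc) controlling the
    power-law suppression I(omega, 0) ~ |omega|^alpha."""
    return (1 - Kc) ** 2 / (4 * Kc)

print(sds_exponent(0.5))              # -> 0.125  (the 1/8 bound)
print(round(sds_exponent(0.24), 2))   # -> 0.6    (purple bronze row)
```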
In the case of integrable 1D models solvable by the Bethe ansatz \cite{Bethe_31} (such as the 1D Hubbard model
\cite{Lieb_68,Martins_98}), the PDT and MQIM describe the same mechanisms and lead to the same expressions for
the dynamical correlation functions \cite{Carmelo_18}. The advantage of the MQIM is that it also applies to non-integrable
systems \cite{Imambekov_12}. The exponents characterizing the singularities in these systems differ significantly from
the predictions of the linear TLL theory, except in the low-energy limit where the latter is valid.
For integrable 1D lattice electronic models with only onsite repulsion (such as the Hubbard model), the TLL charge
parameter $K_c $ is larger than $1/2$ and thus the SDS exponent $\alpha = (1-K_c)^2/(4K_c)$ is smaller than $1/8$.
In non-integrable systems an SDS exponent $\alpha$ larger than $1/8$ stems from finite-range interactions \cite{Schulz_90}.
In fact, as shown in Table \ref{table1}, for the metallic states of both 1D and
quasi-1D electronic systems, the SDS exponent $\alpha$ frequently has experimental values in the range $0.5-0.8$
\cite{Blumenstein_11,Voit_95,Schulz_90,Claessen_02,Kim_06,Schoenhammer_93,Ma_17,Ohtsubo_15,Ohtsubo_17}.
In actual materials, a finite effective range interaction \cite{Bethe_49,Blatt_49,Joachain_75,Preston_75,Flambaum_99}
generally results from screened long-range Coulomb interactions with potentials
vanishing as an inverse power of the separation with an exponent larger than one.
In general, such finite-range interactions in 1D lattice systems represent a complex
and unsolved quantum problem involving non-perturbative microscopic electronic processes.
Indeed, as originally formulated, the MQIM does not apply to lattice electronic systems with finite-range
interactions whose screened Coulomb potentials vanish as an inverse power of the electron distance.
Recently, the MQIM has been extended to a class of electronic systems with effective interaction ranges of
about one lattice spacing, compatible with the high-energy one-electron spectral properties observed in twin grain
boundaries of molybdenum diselenide MoSe$_2$ \cite{Ma_17,Cadez_19}. This has been achieved by suitable
renormalization of the phase shifts of the charge fractionalized particles. That theoretical scheme, called here
``MQIM-LO'', accounts for the effects of only the {\it leading order} (LO) in the effective range expansion
\cite{Bethe_49,Blatt_49} of such phase shifts.
In this paper we consider a bismuth-induced anisotropic structure on indium antimonide which we henceforth call
Bi/InSb(001) \cite{Ohtsubo_15}. Experimentally, strong evidence has been found that Bi/InSb(001) exhibits 1D
physics \cite{Ohtsubo_15,Ohtsubo_17}. However, a detailed understanding of the exotic one-electron spectral
properties revealed by its angle resolved photo-emission spectroscopy (ARPES) \cite{Ohtsubo_15,Ohtsubo_17}
at energy scales beyond the TLL has remained elusive. In particular, the predictions of the MQIM-LO for the
location in the $(k,\omega)$ plane of the experimentally observed high-energy peaks in the ARPES momentum
distribution curves (MDC) and energy distribution curves (EDC) of Bi/InSb(001) do not lead to the same quantitative
agreement as for the ARPES in the MoSe$_2$ line defects \cite{Ma_17,Cadez_19}. This raises the important
question of what additional effects must be included to obtain agreement with the experimental data.
In this paper, we answer this question by extending the MQIM-LO to a larger class of 1D lattice electronic systems
with finite-range interactions by accounting for higher-order terms in the effective range expansion
\cite{Bethe_49,Blatt_49,Landau_65,Kermode_90,Burke_11} of the phase shifts of the fractionalized charged
particles. While the corresponding {\it higher order} ``MQIM-HO'' corresponds in general to a complicated,
non-perturbative many-electron problem, we find, unexpectedly, that the interactions of the fractionalized
charged particles with the charge mobile quantum impurity occur in the unitary limit of (minus) infinite scattering length \cite{Newton_82,Zwerger_12,Horikoshi_17}. In that limit, the separation between the interacting charged
particles (the inverse density) is much greater than the range of the interactions, and the calculations simplify
considerably.
The unitary limit plays an important role in many physical systems, including the dilute neutron
matter in the shells of neutron stars \cite{Dean_03} and atomic scattering in systems of trapped cold atoms
\cite{Zwerger_12,Horikoshi_17}. Our discovery of its relevance in a condensed matter system
reveals new physics.
\begin{widetext}
\begin{table}
\begin{center}
\begin{tabular}{|c||c|c|c|c|}
\hline
System & Parameter $K_c$ & SDS exponent $\alpha$ & Technique & Source \\
\hline
(TMTSF)$_2$X, X=PF$_6$, AsF$_6$, ClO$_4$ & $0.23$ & $0.64$ & Optical conductivity & from SI of Ref. \onlinecite{Blumenstein_11} \\
\hline
Carbon Nanotubes & $0.28$ & $0.46$ & Photoemission & from SI of Ref. \onlinecite{Blumenstein_11} \\
\hline
Purple Bronze Li$_{0.9}$Mo$_6$O$_{17}$ & $0.24$ & $0.60$ & ARPES and tunneling spectroscopy & from SI of Ref. \onlinecite{Blumenstein_11} \\
\hline
1D Gated Semiconductors & $0.26-0.28$ & $0.46-0.53\approx 0.5$ & Transport conductivity & from SI of Ref. \onlinecite{Blumenstein_11} \\
\hline
MoSe$_2$ 1D line defects & $0.20-0.22$ & $0.70-0.80$ & ARPES & from Ref. \onlinecite{Ma_17} \\
\hline
Bi/InSb(001) & $0.22-0.24$ & $0.60-0.70$ & ARPES & from Ref. \onlinecite{Ohtsubo_15} \\
\hline
\end{tabular}
\caption{Experimental TLL charge parameter $K_c={\tilde{\xi}}_c^2/2$ and
related SDS exponent $\alpha = (1-K_c)^2/(4K_c)$ (SI stands for Supplementary Information.)}
\label{table1}
\end{center}
\end{table}
\end{widetext}
The results of the MQIM-HO are consistent with the expectation that the microscopic mechanisms behind the
one-electron spectral properties of Bi/InSb(001) include finite-range interactions. Indeed, accounting for the effective
range of the corresponding interactions \cite{Joachain_75,Preston_75,Flambaum_99} leads to theoretical
predictions that quantitatively agree with both (i) the experimental value of the SDS exponent ($\alpha\in [0.6-0.7]$) in
Bi/InSb(001) observed in $I (\omega,0)\propto\vert\omega\vert^{\alpha}$ and (ii) the location in the $(k,\omega)$ plane
of the experimentally observed high-energy peaks in the ARPES MDC and EDC.
Since Bi/InSb(001) is a complex system and the MQIM-HO predictions are limited to the properties (i) and (ii),
in the discussion section of this paper we consider other possible effects beyond the present theoretical framework
that might contribute to the microscopic mechanisms determining spectral properties of Bi/InSb(001).
In this paper we employ units of $\hbar =1$ and $k_B =1$. In Sec. \ref{model} we introduce the theoretical scheme
used in our studies. The effective-range expansion of the phase shift associated with the interactions of the charge
fractionalized particles and charge hole mobile impurity, the corresponding unitary limit, and the scattering lengths
are all issues we address in Sec. \ref{EREUL}. In Sec. \ref{ER} the effective range expression is derived and
expressed in terms of the ratio of the renormalized and bare scattering lengths. In Sec. \ref{ARPES} we show how
our approach predicts the location in the $(k,\omega)$ plane of the experimentally observed high-energy Bi/InSb(001)
ARPES MDC and EDC peaks. In Sec. \ref{DISCONCL}, we discuss our results and experimental properties outside
the present theoretical framework, mention open questions on the Bi/InSb(001) spectral properties, and offer concluding
remarks.
\section{The model}
\label{model}
The 1D model Hamiltonian associated with the MQIM-HO for electronic density $n_e\in ]0,1[$ is given by,
\begin{eqnarray}
{\hat{H}} & = & t\,\hat{T} + \hat{V}\hspace{0.20cm}{\rm where}
\nonumber \\
\hat{T} & = & - \sum_{\sigma=\uparrow,\downarrow }\sum_{j=1}^{L}\left(c_{j,\sigma}^{\dag}\,
c_{j+1,\sigma} + c_{j+1,\sigma}^{\dag}\,c_{j,\sigma}\right)
\nonumber \\
\hat{V} & = & \sum_{r=0}^{L/2-1}V_e (r)
\sum_{\sigma=\uparrow,\downarrow}\sum_{\sigma'=\uparrow,\downarrow}\sum_{j=1}^{L}\hat{\rho}_{j,\sigma}\hat{\rho}_{j+r,\sigma'} \, .
\label{equ1}
\end{eqnarray}
Here $\hat{\rho}_{j,\sigma} = \left(c_{j,\sigma}^{\dag}\,c_{j,\sigma} - {1\over 2}\right)$,
$V_e (0) = U/2$, $V_e (r) = U\,F_e (r)/r$ for $r>0$, and $F_e (r)$ is a continuous
screening function such that $F_e (r)\leq 1/4$, which at large $r$ vanishes as some inverse
power of $r$ whose exponent is larger than one, so that $\lim_{r\rightarrow\infty}F_e (r)=0$.
We use a representation of the fractionalized $c$ (charge) and $s$ (spin) particles
that also naturally emerges in the MQIM-LO \cite{Ma_17}. For simplicity, in general in this
paper they are called $c$ particles and $s$ particles, respectively. They occupy a $c$ band and
an $s$ band whose momentum values $q_j$ and $q_j'$, respectively, are such that
$q_{j+1}-q_j = 2\pi/L$ and $q_{j+1}'-q_j' = 2\pi/L$. In the thermodynamic limit one often
uses a continuum representation in terms of corresponding $c$ band momentum
variables $q$ and $s$ band momentum variables $q'$ with ground-state occupancies
$q \in [-2k_F,2k_F]$ and $q' \in [-k_F,k_F]$, respectively, where $2k_F=\pi n_e$.
The energy dispersions for $c$ and $s$ particles, ${\tilde{\varepsilon}}_c (q)$ and ${\tilde{\varepsilon}}_s (q')$,
respectively, are defined for these momentum intervals
in Eqs. (\ref{equA2}) and (\ref{equA4})-(\ref{equA10}) of Appendix \ref{APA}.
Most of the weight of the one-electron spectral function is generated by transitions to excited states
involving creation of one hole in the $c$ band, one hole in the $s$ band, plus
low-energy particle-hole processes in such bands. Processes where
both holes are created away from the $c$ band and $s$ band Fermi points $\pm 2k_F$ and $\pm k_F$, respectively,
contribute to the spectral-function continuum. Processes where the $c$ band hole is created at momentum values
spanning its band interval $q\in ]-2k_F,2k_F[$ and the $s$ hole (spinon)
is created near one of its Fermi points $\pm k_F$ contribute to the $c$ and $c'$ branch lines whose spectra
run from $k\in ]-k_F,k_F[$ and $k \in ]-3k_F,3k_F[$, respectively. Since in such processes
the $c$ band hole is created away from the $c$ band Fermi points, we call it
a {\it $c$ (charge) hole mobile impurity}. Finally, processes where the $s$ band hole is created at momentum
values in the interval $q'\in ]-k_F,k_F[$ and the $c$ hole (holon)
is created near one of its Fermi points $\pm 2k_F$ contribute to the $s$ branch line whose spectrum
runs from $k\in ]-k_F,k_F[$. In the case of these processes it is the $s$ band hole that is created away from the
corresponding $s$ band Fermi points. Hence we call it {\it $s$ (spin) hole mobile impurity}.
See a sketch of such spectra in Fig. \ref{figure1}. In the remainder of this paper the charge (and spin) hole
mobile impurity is merely called $c$ (and $s$) impurity.
\begin{figure}
\begin{center}
\centerline{\includegraphics[width=8.0cm]{Sketch_csLines.pdf}}
\caption{Sketch of the $s$ (spin) and $c$ and $c'$ (charge) branch lines in the one-electron removal spectral function of the lattice
electronic correlated models discussed in this paper. The soft grey region refers to the small spectral-weight distribution
continuum whereas the darker grey regions below the branch lines typically display more weight. In the actual spectral function,
Eq. (\ref{equ2}), this applies to $k$ subdomains for which the exponents that control the line shape near those lines are negative.
The lack of spectral weight in some of the figure $(k,\omega)$-plane regions is imposed by kinematical constraints.}
\label{figure1}
\end{center}
\end{figure}
The one-electron operators matrix elements between energy eigenstates
in the expressions for the spectral function involve phase shifts and the charge parameter ${\tilde{\xi}}_c=\sqrt{2K_c}$
whose value is determined by them. Its range for the present lattice systems is ${\tilde{\xi}}_c=\sqrt{2K_c}\in ]1/2,\xi_c]$,
where the bare parameter $\xi_c \in ]1,\sqrt{2}[$ defined by Eq. (\ref{equA16}) of Appendix \ref{APA} refers to the
1D Hubbard model. Note that the model in Eq. (\ref{equ1})
becomes the 1D Hubbard model at the bare charge parameter value, ${\tilde{\xi}}_c =\xi_c$. In this limit,
the SDS exponent reads $\alpha_0 = (2 - \xi_c^2)^2/(8\xi_c^2)\in [0,1/8]$ with
$\alpha_0 =0$ and $\alpha_0 =1/8$ for $u\rightarrow 0$ and $u\rightarrow\infty$, respectively.
For $n_e\in ]0,1[$ there is a $\xi_c\rightarrow {\tilde{\xi}}_c$ transformation
\cite{Ma_17} for each fixed value of $\xi_c$ and ${\tilde{\xi}}_c<\xi_c$ such that
$\xi_c \in ]1,\sqrt{2}[$ and ${\tilde{\xi}}_c\in ]1/2,1[\,;]1,\xi_c]$. This maps the
1D Hubbard model onto the model, Eq. (\ref{equ1}), upon gently turning on $F_e (r)$. Consistent with this result,
$\lim_{{\tilde{\xi}}_c\rightarrow\xi_c}F_e (r)\rightarrow 0$ for $r\in [0,\infty]$.
For ${\tilde{\xi}}_c<\xi_c$ the corresponding SDS exponent intervals are
$\alpha = (2 - {\tilde{\xi}}_c^2)^2/(8{\tilde{\xi}}_c^2)\in [\alpha_0,1/8[\,;]1/8,49/32[$.
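The exponent range quoted above (with $K_c = {\tilde{\xi}}_c^2/2$) can be verified exactly with rational arithmetic; the sketch below uses nothing beyond the formulas in the text.

```python
from fractions import Fraction

def sds_exponent_xi(xi):
    """alpha = (2 - xi^2)^2 / (8 xi^2) as a function of the
    renormalized charge parameter xi = tilde{xi}_c."""
    return (2 - xi**2) ** 2 / (8 * xi**2)

# endpoint tilde{xi}_c = 1/2 reproduces the upper bound 49/32 in the text
print(sds_exponent_xi(Fraction(1, 2)) == Fraction(49, 32))  # -> True
# tilde{xi}_c = 1 gives the crossover value alpha = 1/8
print(sds_exponent_xi(Fraction(1, 1)) == Fraction(1, 8))    # -> True
```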
The phase shifts in the one-electron matrix elements play a major role in our study
by appearing explicitly in the expressions of the momentum-dependent exponents of the
one-electron removal spectral function. These phase shifts are $2\pi{\tilde{\Phi}}_{c,s}(\pm 2k_F,q')$ and
$2\pi{\tilde{\Phi}}_{c,c}(\pm 2k_F,q)$. Specifically, $-2\pi{\tilde{\Phi}}_{c,s}(\pm 2k_F,q')$ and
$-2\pi{\tilde{\Phi}}_{c,c}(\pm 2k_F,q)$ are the phase shifts, respectively, imposed on a $c$ particle of $c$ band
momentum $\pm 2k_F$ by a $s$ and $c$ impurity created at momentum $q' \in [-k_F,k_F]$ and
$q \in [-2k_F,2k_F]$. (Their explicit expressions are given below.)
The charge parameter ${\tilde{\xi}}_c$ is given by a superposition of charge-charge phase shifts,
\begin{eqnarray}
{\tilde{\xi}}_c = 1 + \lim_{q\rightarrow 2k_F}\{{\tilde{\Phi}}_{c,c}(+2k_F,q)+{\tilde{\Phi}}_{c,c}(-2k_F,q)\} \, .
\nonumber
\end{eqnarray}
The expressions for the exponents of spectral functions also involve the phase shifts
$2\pi{\tilde{\Phi}}_{s,c}(\pm k_F,q) = \mp {\pi\over\sqrt{2}}$ and
$2\pi{\tilde{\Phi}}_{s,s} (\pm k_F,q') = \pm (\sqrt{2}-1)(\sqrt{2} + (-1)^{\delta_{q',\pm k_F}}){\pi\over\sqrt{2}}$
induced on a $s$ particle of $s$ band momentum $\pm k_F$ by a
$c$ and $s$ impurity created at momentum $q \in [-2k_F,2k_F]$ and $q' \in [-k_F,k_F]$,
respectively. Their simple expressions are invariant under the
$\xi_c\rightarrow {\tilde{\xi}}_c$ transformation and, due to the spin $SU(2)$ symmetry, are interaction, density, and
momentum independent. (Except for $(-1)^{\delta_{q',\pm k_F}}$ in the
$2\pi{\tilde{\Phi}}_{s,s} (\pm k_F,q')$ expression at $q'=\pm k_F$.)
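The $SU(2)$-fixed constants above simplify away from the $s$ band Fermi points, since $(\sqrt{2}-1)(\sqrt{2}+1)=1$; the numeric sketch below (our own function name) checks that $2\pi{\tilde{\Phi}}_{s,s}$ then has the same magnitude $\pi/\sqrt{2}$ as $2\pi{\tilde{\Phi}}_{s,c}$.

```python
import math

SQ2 = math.sqrt(2.0)

def phase_shift_ss(sign, at_fermi_point):
    """2*pi*Phi_{s,s}(+/- k_F, q') =
    +/- (sqrt2 - 1)(sqrt2 + (-1)^delta) * pi / sqrt2,
    with delta = 1 only when q' sits exactly at the matching Fermi point."""
    parity = -1.0 if at_fermi_point else 1.0
    return sign * (SQ2 - 1.0) * (SQ2 + parity) * math.pi / SQ2

# away from q' = +/- k_F the prefactor collapses: (sqrt2-1)(sqrt2+1) = 1
print(abs(phase_shift_ss(+1, False) - math.pi / SQ2) < 1e-12)  # -> True
```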
For small energy deviations $({\tilde{\omega}}_{\beta} (k)-\omega)>0$ and $({\tilde{\omega}}_{s} (k)-\omega)>0$
near the $\beta =c,c'$ branch lines and $s$ branch line, the spectral function behaves as,
\begin{eqnarray}
{\tilde{B}} (k,\omega) & \approx & \sum_{\iota=\pm 1}C_{\beta,\iota}
{\rm Im}\left\{\left({(\iota)\over{\tilde{\omega}}_{\beta} (k)-\omega - {i\over 2\tau_{\beta} (k)}}\right)^{-{\tilde{\zeta}}_{\beta} (k)}\right\}
\nonumber \\
{\tilde{B}} (k,\omega) & = & C_{s} ({\tilde{\omega}}_{s} (k)-\omega)^{{\tilde{\zeta}}_{s} (k)} \, ,
\label{equ2}
\end{eqnarray}
respectively. Here $C_{\beta,\iota}$ and $C_{s}$ are constants that depend on $n_e$, $u=U/4t$, and ${\tilde{\xi}}_c$
for energy and momentum values corresponding to the small energy deviations $({\tilde{\omega}}_{\beta} (k)-\omega)>0$
and $({\tilde{\omega}}_{s} (k)-\omega)>0$, respectively, where $\omega<0$ refers to high energies beyond those of the TLL.
The upper bounds of the constants $C_{c,\iota}$, $C_{c',\iota}$, and $C_{s}$ in Eq. (\ref{equ2})
are known from matrix elements and sum rules for spectral weights, but their precise values remain in general
an unsolved problem. The expressions for the $\gamma =c,c',s$ spectra
${\tilde{\omega}}_{\gamma} (k)$ and exponents ${\tilde{\zeta}}_{\gamma} (k)$ are given
in Eqs. (\ref{equA1}) and (\ref{equA3}) of Appendix \ref{APA}, respectively. As
discussed in Appendix \ref{RTLL}, the MQIM-HO also applies to the
low-energy TLL limit in which such exponents have different expressions. For the present high-energy regime,
they have the same expressions as for the MQIM-LO except that
the phase shift $2\pi{\tilde{\Phi}}_{c,c}(\pm 2k_F,q)$ appearing in the spectral-function
exponents ${\tilde{\zeta}}_{c} (k)$ and ${\tilde{\zeta}}_{c'} (k)$ has additional MQIM-HO terms.
That the $s$ branch line coincides with the edge of the support for the spectral function
ensures that near it the line shape is power-law like, as given in Eq. (\ref{equ2}).
For the $c,c'$ branch lines, which run within the spectral weight continuum,
the $\beta =c,c'$ lifetime $\tau_{\beta} (k)$ in Eq. (\ref{equ2}) is very large for the interval ${\tilde{\xi}}_c \in ]1,\xi_c[$,
so that the expression given in that equation is {\it nearly}
power-law like, ${\tilde{B}} (k,\omega) \propto \left({\tilde{\omega}}_{\beta} (k)-\omega\right)^{{\tilde{\zeta}}_{\beta} (k)}$.
The finite-range interaction effects increase upon decreasing ${\tilde{\xi}}_c$ in the
interval ${\tilde{\xi}}_c\in [{\tilde{\xi}}_c^{\oslash},1[$ where ${\tilde{\xi}}_c^{\oslash} = 1/\xi_c$.
In it the corresponding $c$ impurity relaxation
processes associated with large lifetimes $\tau_{c} (k)$ and $\tau_{c'} (k)$ in Eq. (\ref{equ2})
for the $k$ intervals for which ${\tilde{\zeta}}_{c} (k)<0$ and ${\tilde{\zeta}}_{c'} (k)<0$, respectively,
start transforming the power-law singularities into broadened peaks with small widths. Such effects become more pronounced
upon further decreasing ${\tilde{\xi}}_c$ into the interval ${\tilde{\xi}}_c\in ]1/2,{\tilde{\xi}}_c^{\oslash}]$.
As discussed in more detail below in Sec. \ref{Relaxation}, for $k$ ranges for which the
exponents ${\tilde{\zeta}}_{c} (k)$ and ${\tilde{\zeta}}_{c'} (k)$ become positive upon
decreasing ${\tilde{\xi}}_c$, the relaxation processes wash out the peaks entirely.
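The interplay of power law and lifetime broadening in Eq. (\ref{equ2}) can be illustrated numerically. In the minimal sketch below the unknown constants $C_{\beta,\iota}$ are set to $1$ and the negative exponent value is illustrative (both are assumptions, as is the principal branch for the complex power); for $\tau_{\beta}\rightarrow\infty$ the magnitude of the branch-line expression scales as $({\tilde{\omega}}_{\beta} (k)-\omega)^{{\tilde{\zeta}}_{\beta} (k)}$, i.e. it is nearly power-law like:

```python
from math import sin, pi, isclose

def branch_line(delta_omega, zeta, tau):
    """Im sum_iota (iota/(delta_omega - i/(2 tau)))**(-zeta), with C_{beta,iota} = 1."""
    z_width = complex(delta_omega, -1.0 / (2.0 * tau))
    return sum(((iota / z_width) ** (-zeta)).imag for iota in (+1, -1))

zeta, tau = -0.4, 1.0e9   # illustrative negative exponent and long lifetime
# Magnitude follows |B| ~ sin(-pi*zeta) * delta_omega**zeta over decades of delta_omega:
ratios = [abs(branch_line(d, zeta, tau)) / d ** zeta for d in (1e-3, 1e-2, 1e-1)]
```

For a finite $\tau_{\beta}$ the singularity is instead smoothed on the energy scale $1/2\tau_{\beta}$, which is the broadening mechanism referred to in the text.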
\section{The effective-range expansion and the unitary limit}
\label{EREUL}
\subsection{The effective-range expansion}
\label{ERAE}
As we shall establish in detail below, the finite-range electron interactions have their strongest effects in
the charge-charge interaction channel. In contrast, for the charge-spin channel,
the renormalization factor of the phase shift,
\begin{equation}
2\pi{\tilde{\Phi}}_{c,s} (\pm 2k_F,q')={{\tilde{\xi}}_c\over\xi_c}\,2\pi\Phi_{c,s} (\pm 2k_F,q') \, ,
\label{equ3}
\end{equation}
remains that of the MQIM-LO.
For small relative momentum $k_r = q \mp 2k_F$ of the $c$ impurity
of momentum $q$ and $c$ particle of momentum $\pm 2k_F$ the phase shift
${\tilde{\Phi}}_c (k_r) = -2\pi{\tilde{\Phi}}_{c,c} (\pm 2k_F,\pm 2k_F + k_r)$
associated with the charge-charge channel obeys an effective range expansion,
\begin{eqnarray}
\cot ({\tilde{\Phi}}_c (k_r)) = {-1\over {\tilde{a}}\,k_r} + {1\over 2}\,R_{\rm eff}\,k_r
- P_{\rm eff}\,R_{\rm eff}^3\,k_r^3 + {\cal{O}} (k_r^5) \, .
\label{equ4}
\end{eqnarray}
This equation is the same as for three-dimensional (3D) s-wave scattering problems if $k_r$
is replaced by $\vert k_r\vert$ \cite{Bethe_49,Blatt_49}. The first and second terms involve the
scattering length ${\tilde{a}}$ and effective range $R_{\rm eff}$, respectively. The third and higher
terms are negligible and involve the shape parameters
\cite{Bethe_49,Blatt_49,Burke_11,Kermode_90,Landau_65}.
One finds that in the bare charge parameter limit, ${\tilde{\xi}}_c = \xi_c$, the effective range expansion reads
$\cot (\Phi_c (k_r)) = -1/(a\,k_r)$ where $\Phi_c (k_r) = -2\pi\Phi_{c,c} (\pm 2k_F,\pm 2k_F + k_r)$, $2\pi\Phi_{c,c} (\pm 2k_F,q)$ is the bare
phase shift defined in Eqs. (\ref{equA11})-(\ref{equA15}) of Appendix \ref{APA}, and
$a=\lim_{{\tilde{\xi}}_c\rightarrow\xi_c}{\tilde{a}}$ is the bare scattering length.
Due to the 1D charge-spin separation at all MQIM energy scales,
the repulsive electronic potential $V_e (r)$ gives rise to an attractive potential $V_c (x)$ associated with the
interaction of the $c$ particle and $c$ impurity at a distance $x$. To go beyond the MQIM-LO, we
must explicitly account for the general properties of $V_c (x)$ whose form is determined by that of $V_e (r)$.
The corresponding relation between the electron and $c$ particle representations is discussed in Appendix \ref{RECP}. The
attractive potential $V_c (x)$ is negative for $x>x_0$ where $x_0$ is a non-universal distance
that either vanishes or is much smaller than
the lattice spacing $a_0$. Moreover, for the present class of systems $V_c (x)$ vanishes for {\it large $x$} as,
\begin{eqnarray}
V_c^{\rm asy} (x) & = & - {\gamma_c\over x^l} = - {C_c\over (x/2r_l)^l}\hspace{0.2cm}{\rm where}
\nonumber \\
C_c & = & {1\over (2r_l)^2\mu}\hspace{0.2cm}{\rm and}\hspace{0.2cm}\gamma_c = {(2r_l)^{l-2}\over \mu} \, .
\label{equ5}
\end{eqnarray}
Here $\mu$ is a non-universal reduced mass,
$l$ is an integer determined by the large-$r$ behavior of $V_e (r)$,
and $2r_l$ is a length scale whose $l$ dependence for ${\tilde{\xi}}_c <1$ is given below in Sec. \ref{ER}.
(At $l=6$, it equals twice the van der Waals length.)
Since $V_c (x)$ has asymptotic behavior $1/x^l$, the scattering length, effective range, and shape parameter terms
in Eq. (\ref{equ4}) only converge if $l > 5$, $l > 7$, and $l > 9$, respectively \cite{Burke_11}.
We shall find that agreement with the experimental results is achieved provided that
the effective range studied in Sec. \ref{ER} is finite, which requires $l > 5$ in Eq. (\ref{equ5}).
Similarly to the potentials considered in Refs. \onlinecite{Flambaum_99} and \onlinecite{Gribakin_93},
the class of potentials with the large-distance behavior of Eq. (\ref{equ5}) and whose depth
is larger than the scattering energy of the corresponding interactions considered here is such that the
positive ``momentum'' $\sqrt{2\mu (-V_c (x))}$ obeys a sum rule of general form,
\begin{eqnarray}
& & \int_{x_0}^{\infty}dx\sqrt{2\mu (-V_c (x))} = \Phi + {\theta_c\pi\over 2(l-2)}\hspace{0.20cm}{\rm where}
\nonumber \\
& & \tan (\Phi) = - {\Delta a\over{\tilde{a}}}\cot \left({\pi\over l-2}\right)\hspace{0.20cm}{\rm and}\hspace{0.20cm}{\rm thus}
\nonumber \\
&& {a\over {\tilde{a}}} = 1 - \tan\left({\pi\over l-2}\right)\tan(\Phi) \, .
\label{equ6}
\end{eqnarray}
Here $\Delta a/{\tilde{a}}$, where $\Delta a = a - {\tilde{a}}$,
is a relative fluctuation involving two uniquely defined yet non-universal scattering lengths,
$a$ and ${\tilde{a}}$. As justified in Sec. \ref{ER}, in the present unitary-limit case discussed in
Sec. \ref{UL}, they are the bare and renormalized scattering lengths, respectively,
defined in that section. The physically important renormalized charge parameter range is
${\tilde{\xi}}_c\in ]1/2,1[$ for which $\alpha >1/8$. The term $\theta_c\pi/[2(l-2)]$ in Eq. (\ref{equ6}) refers to a
potential boundary condition\cite{Flambaum_99,Gribakin_93} with $\theta_c = 1$ for that interval.
(In that regime, the expressions in Eq. (\ref{equ6}) are similar to those in
Eqs. (4) and (6) of Ref. \onlinecite{Flambaum_99} with $a$, ${\tilde{a}}$, $l$, and $\Phi$ corresponding
to $a$, ${\bar{a}}$, $n$, and $\Phi - \pi/[2(n-2)]$, respectively.)
For the interval ${\tilde{\xi}}_c\in ]1,\xi_c[$ for which $\alpha <1/8$, the function
$\theta_c =\sqrt{(\xi_c^4 - {\tilde{\xi}}_c^4)/(\xi_c^4 - 1)}$ merely ensures that the sum rule
in Eq. (\ref{equ6}) continuously vanishes as ${\tilde{\xi}}_c\rightarrow\xi_c$.
Our choice of potentials with large-$x$ behavior given in Eq. (\ref{equ5}) is such that the
sum rule, Eq. (\ref{equ6}), is obeyed, yet for small $x$ the form of $V_c (x)$ is not
universal and is determined by the specific small-$r$ form of $V_e (r)$ itself. The
zero-energy phase $\Phi$ in Eq. (\ref{equ6}) whose physics is further clarified below
can be expressed as,
\begin{equation}
\Phi = \int_{x_0}^{x_2}dx\sqrt{2\mu (-V_c (x))}
\hspace{0.20cm}{\rm where}\hspace{0.20cm} x_2 = 2r_l\left({4\sqrt{2}\over\pi\theta_c}\right)^{2\over l-2} \, .
\label{equ7}
\end{equation}
Indeed, $V_c (x)= V_c^{\rm asy} (x)$ for $x>x_2$ and
$\int_{x_2}^{\infty}dx\sqrt{2\mu (-V_c^{\rm asy} (x))} = {\theta_c\pi\over 2(l-2)}$.
Here $x_2 \approx 2r_l$ for ${\tilde{\xi}}_c\in ]1/2,1[$
with the ratio $x_2/2r_l$ decreasing from $1.342$ at $l=6$ to $1$ at $l=\infty$.
For ${\tilde{\xi}}_c\in ]1,\xi_c]$ it is an increasing function of ${\tilde{\xi}}_c$
such that $\lim_{{\tilde{\xi}}_c\rightarrow\xi_c}x_2/2r_l = \infty$ for $l$ finite.
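The quoted ratio values follow directly from the $x_2$ expression of Eq. (\ref{equ7}) at $\theta_c=1$; a quick numerical check (written by us, not part of the derivation):

```python
from math import pi, sqrt

def x2_over_2rl(l, theta_c=1.0):
    """x_2/(2 r_l) = (4*sqrt(2)/(pi*theta_c))**(2/(l-2)), from Eq. (7)."""
    return (4.0 * sqrt(2.0) / (pi * theta_c)) ** (2.0 / (l - 2))

ratios = {l: x2_over_2rl(l) for l in range(6, 13)}
```

The ratio indeed starts at $1.342$ for $l=6$ and decreases monotonically toward $1$ as $l\rightarrow\infty$.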
The universal form of the spectral function near the singularities, Eq. (\ref{equ2}),
is determined by the large $x$ behavior of $V_c (x)$, Eq. (\ref{equ5}), and sum rules,
Eqs. (\ref{equ6}) and (\ref{equ7}). In the spectral-weight continuum, its form is not universal, as it depends on the
specific small $x$ form of $V_c (x)$ determined by $V_e (r)$.
The length scale $2r_l$ in Eq. (\ref{equ5}) is found below in Sec. \ref{ER} to obey the
inequality $\sqrt{2}\,(2r_l)^{l-2\over 2}=\sqrt{2\mu\gamma_c}\gg 1$ in units of $a_0 =1$.
($\sqrt{2\mu\gamma_c}$ in such units corresponds to the important parameter
$\gamma = \sqrt{2\,\mu\,\alpha}/\hbar$ of Ref. \onlinecite{Flambaum_99} in units of Bohr
radius $a_0 = 0.529177$\,\textrm{\AA} with $\mu$ and $\alpha$ corresponding to $\mu$ and $\gamma_c$,
respectively.) This inequality justifies why $V_c (x)= V_c^{\rm asy} (x)$ for $x>x_2$ and implies that
$\int_{x_0}^{x_2}dx\sqrt{2\mu (-V_c (x))}$ in Eq. (\ref{equ7})
takes large values for ${\tilde{\xi}}_c\in ]1/2,1[$. This is consistent with the above
mentioned requirement of the scattering energy of the residual interactions
of the $c$ particles and $c$ impurity being
smaller than the depth $-V_c (x_1)$ of the potential $V_c (x)$ well, which, since
$\int_{x_0}^{\infty}dx\sqrt{2\mu (-V_c (x))}/\pi\gg 1$, must be large.
Here $x_1$ is a small non-universal potential-dependent $x$ value such that $x_0<x_1<a_0$ at
which $\partial V_c (x)/\partial x =0$ and $-V_c (x)$ reaches its maximum value.
\subsection{The unitary limit and the scattering lengths}
\label{UL}
As confirmed below in Sec. \ref{ER}, the expression for the phase shift in the thermodynamic limit,
\begin{equation}
-2\pi{\tilde{\Phi}}_{c,c} (\pm 2k_F, \pm 2k_F + k_r)\vert_{k_r = \mp {2\pi\over L}}
=\mp {({\tilde{\xi}}_c -1)^2\pi\over {\tilde{\xi}}_c} \, ,
\label{equ8}
\end{equation}
for $\lim_{k_r\rightarrow0}{\tilde{\Phi}}_c (k_r)$
remains the same as for the MQIM-LO. Its use along with that of
$\mp (\xi_c -1)^2\pi/\xi_c$ for the bare phase shift
$\lim_{k_r\rightarrow0}\Phi_c (k_r)$
in the leading term of the corresponding effective-range expansions gives
the scattering lengths. In the thermodynamic limit they read,
\begin{eqnarray}
\tilde{a} & = & - {L\over 2\pi}\tan \left({({\tilde{\xi}}_c -1)^2\pi\over {\tilde{\xi}}_c}\right)
\rightarrow - \infty
\hspace{0.20cm}{\rm for}\hspace{0.20cm}{\tilde{\xi}}_c\neq 1\hspace{0.20cm}{\rm and}
\nonumber \\
a & = & - {L\over 2\pi}\tan \left({(\xi_c -1)^2\pi\over \xi_c}\right) \rightarrow - \infty
\hspace{0.20cm}{\rm for}\hspace{0.20cm}\xi_c\neq 1 \, ,
\label{equ9}
\end{eqnarray}
respectively. This is known as the unitary limit \cite{Zwerger_12,Horikoshi_17}.
The validity of the MQIM-HO refers to this limit, which occurs provided that $\xi_c\neq 1$, ${\tilde{\xi}}_c\neq 1$,
and as confirmed below that ${\tilde{\xi}}_c>1/2$. The
dependence of the bare charge parameter $\xi_c = \xi_c (n_e,u)$ on the density $n_e$ and $u=U/4t$
is defined by Eq. (\ref{equA16}) of Appendix \ref{APA}. It is such that $\xi_c=\sqrt{2}$ for $u\rightarrow 0$ and
$\xi_c=1$ for $u\rightarrow\infty$ for $n_e\in ]0,1[$
and $\xi_c=1$ for $u>0$ and $\xi_c=\sqrt{2}$ at $u=0$ for both $n_e\rightarrow 0$ and
$n_e\rightarrow 1$. This implies that $a=-\infty$ provided that the relative momentum obeys the inequality
$\vert k_r\vert\ll {\tan (\pi\,n_e)\over 4u}$. This excludes electronic densities very near $n_e=0$ and $n_e=1$
for all $u$ values and excludes large $u$ values for the remaining electronic densities.
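The unitary-limit behavior of Eq. (\ref{equ9}) is straightforward to verify numerically: both scattering lengths grow linearly in magnitude with $L$, so they diverge in the thermodynamic limit, while their ratio is $L$ independent. The parameter values in this sketch are illustrative:

```python
from math import pi, tan, isclose

def scatt_length(L, xi):
    """a or a~ from Eq. (9): -(L/(2 pi)) tan(pi (xi - 1)**2 / xi)."""
    return -(L / (2.0 * pi)) * tan(pi * (xi - 1.0) ** 2 / xi)

xi_c, xi_t = 1.242, 0.805          # illustrative bare and renormalized charge parameters
lengths = [scatt_length(L, xi_t) for L in (1e2, 1e4, 1e6)]
ratio_small = scatt_length(1e2, xi_t) / scatt_length(1e2, xi_c)
ratio_large = scatt_length(1e8, xi_t) / scatt_length(1e8, xi_c)
```

The finiteness of ${\tilde{a}}/a$ despite $a^{-1}={\tilde{a}}^{-1}=0$ is what underlies the renormalization discussed in the remainder of this section.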
The phase shifts ${\tilde{\Phi}}_c = -2\pi{\tilde{\Phi}}_{c,c} (\pm 2k_F,q)$
are incurred by $c$ particles whose band momenta lie in the two small intervals $[-2k_F,-2k_F+k_{Fc}^0]$ and
$[2k_F-k_{Fc}^0,2k_F]$ near the $c$ band Fermi points $-2k_F$ and $2k_F$, through their interactions with the $c$ impurity
created at momentum $q\in [-2k_F+k_{Fc}^0,2k_F-k_{Fc}^0[$.
As discussed in Appendix \ref{RTLL}, the creation of an impurity in the
$c$ band intervals $q\in [-2k_F,-2k_F+k_{Fc}^0]$ and $q\in [2k_F-k_{Fc}^0,2k_F]$
refers to the low-energy TLL regime. The impurity velocity
becomes that of the low-energy particle-hole excitations near $-2k_F$
and $2k_F$, respectively. In this regime, the physics is different, as the
impurity loses its identity, since it cannot be distinguished from the $c$ band holes
(TLL holons) in such excitations.
The small momentum $k_{Fc}^0$ can be written as $k_{Fc}^0 = \pi n_{Fc}^0$.
The unitary limit refers to the corresponding low-density $n_{Fc}^0$ of $c$ particle scatterers with phase shift
${\tilde{\Phi}}_c = -2\pi{\tilde{\Phi}}_{c,c} (\pm 2k_F,q)$
and $c$ band momentum values $[-2k_F,-2k_F+k_{Fc}^0]$ and $[2k_F-k_{Fc}^0,2k_F]$ near
$-2k_F$ and $2k_F$, respectively. These scatterers, together with the single $c$ impurity,
constitute the usual dilute quantum liquid of the unitary limit, whose density is thus $n_{Fc}^0$.
The momentum $k_{Fc}^0$ is such that $k_{Fc}^0\,\vert{\tilde{a}}\vert = {1\over 2}N_{Fc}^0\tan([({\tilde{\xi}}_c -1)^2/{\tilde{\xi}}_c]\pi)$
and $k_{Fc}^0\,\vert a\vert = {1\over 2}N_{Fc}^0\tan([(\xi_c -1)^2/\xi_c]\pi)$ for ${\tilde{\xi}}_c=\xi_c$.
Here $N_{Fc}^0$ is the number of $c$ particle scatterers in $n_{Fc}^0=N_{Fc}^0/L$.
In the thermodynamic limit one has that $n_{Fc}^0$ is very small or even such that
$\lim_{L\rightarrow\infty}n_{Fc}^0\rightarrow 0$. Consistent with this result, the following relations of the
usual dilute-quantum-liquid unitary limit
\cite{Zwerger_12}, $R_{\rm eff}\ll 1/k_{Fc}^0\ll\vert{\tilde{a}}\vert$ and
$0\ll 1/k_{Fc}^0\ll \vert a\vert$ hold. The effective range $R_{\rm eff}$ derived
in Sec. \ref{ER} is such that $R_{\rm eff}\rightarrow\infty$ as ${\tilde{\xi}}_c\rightarrow 1/2$. The unitary
limit requirement that $R_{\rm eff}\ll 1/k_{Fc}^0$ in the thermodynamic limit is the reason that
the value ${\tilde{\xi}}_c =1/2$ is excluded from the regime in which the MQIM-HO is valid.
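The relation between $k_{Fc}^0$, $N_{Fc}^0$, and the renormalized scattering length is a direct consequence of $k_{Fc}^0=\pi N_{Fc}^0/L$ and Eq. (\ref{equ9}); a short consistency check with illustrative parameter values:

```python
from math import pi, tan, isclose

def dilute_relation(N, L, xi_t):
    """Return (k_Fc0 * |a~|, (N/2) tan(pi (xi_t-1)**2/xi_t)), with a~ from Eq. (9)."""
    t = tan(pi * (xi_t - 1.0) ** 2 / xi_t)
    a_t = -(L / (2.0 * pi)) * t      # renormalized scattering length
    k_fc0 = pi * N / L               # k_Fc0 = pi n_Fc0 with n_Fc0 = N/L
    return k_fc0 * abs(a_t), 0.5 * N * t

lhs, rhs = dilute_relation(N=10, L=1.0e6, xi_t=0.805)
```

The product $k_{Fc}^0\vert{\tilde{a}}\vert$ stays finite even though $n_{Fc}^0=N_{Fc}^0/L\rightarrow 0$ and $\vert{\tilde{a}}\vert\rightarrow\infty$ separately, as required in the dilute-liquid unitary limit.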
\begin{figure}
\begin{center}
\centerline{\includegraphics[width=8.0cm]{tan_phi.pdf}}
\caption{$\tan (\Phi) = - (\Delta a/{\tilde{a}})\cot (\pi/[l-2])$, Eqs. (\ref{equ6}) and (\ref{equ11}), as a function of
the renormalized charge parameter ${\tilde{\xi}}_c$ for the electronic density $n_e = 0.176$,
interaction $u=U/4t=0.30$, and integer quantum numbers $l=6-12$ used in Sec. \ref{ARPES} for Bi/InSb(001).
For ${\tilde{\xi}}_c\rightarrow 1/2$ and at ${\tilde{\xi}}_c={\tilde{\xi}}_c^{\oslash}=1/\xi_c=0.805$,
$\tan (\Phi)$ reads $\cot (\pi/(l-2))$ and $0$, respectively, and at both ${\tilde{\xi}}_c={\tilde{\xi}}_c^{\ominus-}=0.857$ and
${\tilde{\xi}}_c={\tilde{\xi}}_c^{\ominus+}=1.166$ it is given by $-\cot (\pi/(l-2))$. The MQIM-HO is not valid
near ${\tilde{\xi}}_c= 1$ at which $\Delta a/{\tilde{a}}=\infty$ and the corresponding scattering problem does not
refer to the unitary limit. The finite-range effects are more pronounced
for ${\tilde{\xi}}_c \in ]1/2,{\tilde{\xi}}_c^{\oslash}[$ when $\Delta a/{\tilde{a}}<0$ and $\tan (\Phi)>0$.}
\label{figure2}
\end{center}
\end{figure}
Importantly, although both $a^{-1}=0$ and ${\tilde{a}}^{-1}=0$, the ratio $\tilde{a}/a$ is finite.
Since below in Sec. \ref{ER} we confirm that $a$ and ${\tilde{a}}$ are in Eq. (\ref{equ6})
the scattering lengths given by Eq. (\ref{equ9}), the value of ${\tilde{\xi}}_c$ in their ratio $\tilde{a}/a$
expression is found to be controlled by the potential $V_c (x)$ through $\tan (\Phi)$ in the
sum rules provided in Eqs. (\ref{equ6}) and (\ref{equ7}) as,
\begin{equation}
{{\tilde{a}}\over a} = {\tan (\pi({\tilde{\xi}}_c -1)^2/{\tilde{\xi}}_c)\over\tan (\pi(\xi_c -1)^2/\xi_c)}
= {1\over 1 - \tan\left({\pi\over l-2}\right)\tan(\Phi)} \, .
\label{equ10}
\end{equation}
The first expression on the right-hand side of this equation is specific to the present 1D quantum problem and
follows directly from Eq. (\ref{equ9}). Hence in the present case $\tan (\Phi)$ in Eq. (\ref{equ6}) can
be expressed as,
\begin{equation}
\tan (\Phi) = - {\sin\left({(\xi_c - {\tilde{\xi}}_c)(\xi_c{\tilde{\xi}}_c-1)\pi\over \xi_c{\tilde{\xi}}_c}\right)
\cot\left({\pi\over l-2}\right)
\over \sin\left({({\tilde{\xi}}_c -1)^2\over {\tilde{\xi}}_c}\pi\right)
\cos\left({(\xi_c -1)^2\over\xi_c}\pi\right)} \, .
\label{equ11}
\end{equation}
One finds from Eq. (\ref{equ10}) that the effects of the finite-range interactions, which are controlled by the relative fluctuation $\Delta a/{\tilde{a}}$
in $\tan (\Phi)=- {\Delta a\over{\tilde{a}}}\cot \left({\pi\over l-2}\right)$, Eq. (\ref{equ6}),
are stronger for ${\tilde{\xi}}_c\in ]1/2,{\tilde{\xi}}_c^{\oslash}]= ]1/2,1/\xi_c]$ when
$\Delta a/{\tilde{a}}<0$, ${\tilde{a}}/a>1$, and $\tan (\Phi)>0$ in Fig. \ref{figure2}.
Upon increasing ${\tilde{\xi}}_c$ within the intervals ${\tilde{\xi}}_c \in ]1/2,1[$ and ${\tilde{\xi}}_c \in ]1,\xi_c]$, the
relative fluctuation increases from $\Delta a/{\tilde{a}}=-1$ for
${\tilde{\xi}}_c\rightarrow 1/2$ to $\Delta a/{\tilde{a}}=\infty$ for ${\tilde{\xi}}_c\rightarrow 1$, crossing $0$ and $1$ at
${\tilde{\xi}}_c={\tilde{\xi}}_c^{\oslash}=1/\xi_c$ and ${\tilde{\xi}}_c={\tilde{\xi}}_c^{\ominus-}$,
respectively. Upon further increasing ${\tilde{\xi}}_c$, the ratio decreases from $\Delta a/{\tilde{a}}=\infty$ to $\Delta a/{\tilde{a}}=0$ at
${\tilde{\xi}}_c=\xi_c$, crossing $1$ at ${\tilde{\xi}}_c={\tilde{\xi}}_c^{\ominus+}$. Here ${\tilde{\xi}}_c^{\ominus -}\in ]0.778,1[$
and ${\tilde{\xi}}_c^{\ominus +}\in ]1,1.284[$ are given by Eqs. (\ref{equ12}) and (\ref{equ13})
with $\eta_c (\xi_c,\Phi,l) = 1 + {1\over 2\pi}\arctan \left({\vert a\vert\,\pi\over L}\right)$ where $a$ is the bare scattering length
in Eq. (\ref{equ9}). For the electronic density $n_e = 0.176$ and interaction $u=U/4t=0.30$ (the values used in Sec. \ref{ARPES}
for Bi/InSb(001)), ${\tilde{\xi}}_c^{\oslash}=1/\xi_c = 0.805$, ${\tilde{\xi}}_c^{\ominus-} = 0.857$, and ${\tilde{\xi}}_c^{\ominus+} = 1.166$.
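These quoted values can be reproduced numerically. The sketch below assumes $\xi_c = 1/0.805$ (inferred from the quoted ${\tilde{\xi}}_c^{\oslash}=1/\xi_c=0.805$; our own parametrization) and checks that $\tan(\Phi)$ of Eq. (\ref{equ11}) vanishes at ${\tilde{\xi}}_c=1/\xi_c$, that Eqs. (\ref{equ10}) and (\ref{equ11}) are mutually consistent, and that the $\Delta a/{\tilde{a}}=1$ crossings land at $0.857$ and $1.166$:

```python
from math import pi, tan, sin, cos, atan, sqrt, isclose

xi_c = 1.0 / 0.805    # inferred from the quoted 1/xi_c = 0.805

def tan_phi(xt, l, xi=xi_c):
    """tan(Phi) from Eq. (11); cot(pi/(l-2)) written as 1/tan(pi/(l-2))."""
    num = sin((xi - xt) * (xi * xt - 1.0) * pi / (xi * xt)) / tan(pi / (l - 2))
    den = sin(pi * (xt - 1.0) ** 2 / xt) * cos(pi * (xi - 1.0) ** 2 / xi)
    return -num / den

def a_ratio(xt, xi=xi_c):
    """First expression of Eq. (10): a~/a."""
    return tan(pi * (xt - 1.0) ** 2 / xt) / tan(pi * (xi - 1.0) ** 2 / xi)

# Delta a/a~ = 1 means a~/a = 1/2; solving (x-1)**2/x = arctan(tan_B/2)/pi
# reduces to the quadratic x**2 - (2 + c) x + 1 = 0:
c = atan(tan(pi * (xi_c - 1.0) ** 2 / xi_c) / 2.0) / pi
x_minus = (2.0 + c - sqrt((2.0 + c) ** 2 - 4.0)) / 2.0
x_plus = (2.0 + c + sqrt((2.0 + c) ** 2 - 4.0)) / 2.0
```

Since the two crossings solve the same quadratic, they are reciprocal to each other, ${\tilde{\xi}}_c^{\ominus-}\,{\tilde{\xi}}_c^{\ominus+}=1$.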
The renormalized charge parameter intervals ${\tilde{\xi}}_c \in ]1/2,1[$ for which $\alpha >1/8$ and ${\tilde{\xi}}_c \in ]1,\xi_c]$
for which $\alpha <1/8$ refer to two qualitatively different problems.
Importantly, the ${\tilde{\xi}}_c$ value in the $\xi_c\rightarrow {\tilde{\xi}}_c$ transformation
is uniquely defined for {\it each} of these two intervals solely by the bare charge parameter $\xi_c = \xi_c (n_e,u)$,
Eq. (\ref{equA16}) of Appendix \ref{APA}, the integer quantum number $l$ in the potential $V_c (x)$ large-$x$ expression,
Eq. (\ref{equ5}), and its sum rule zero-energy phase $\Phi $, Eq. (\ref{equ7}), as follows:
\begin{eqnarray}
{\tilde{\xi}}_c & = & \eta_c (\xi_c,\Phi,l) \left(1 - \sqrt{1 - {1\over \eta_c^2 (\xi_c,\Phi,l)}}\right) \in ]1/2,1[
\nonumber \\
& = & \eta_c (\xi_c,\Phi,l)\left(1 + \sqrt{1 - {1\over \eta_c^2 (\xi_c,\Phi,l)}}\right) \in ]1,\xi_c]
\label{equ12}
\end{eqnarray}
where,
\begin{equation}
\eta_c (\xi_c,\Phi,l) = 1 + {1\over 2\pi}\arctan\left({\tan \left({(\xi_c -1)^2\pi\over \xi_c}\right)\over
1 - \tan \left({\pi\over l-2}\right) \tan (\Phi) }\right) \, .
\label{equ13}
\end{equation}
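Both ${\tilde{\xi}}_c$ branches in Eq. (\ref{equ12}) are roots of the same quadratic, $x^2 - 2\eta_c\,x + 1 = 0$, so they are reciprocal to each other and share the combination $({\tilde{\xi}}_c-1)^2/{\tilde{\xi}}_c = 2(\eta_c - 1)$ that controls the phase shift of Eq. (\ref{equ8}). A minimal check with an illustrative $\eta_c$:

```python
from math import sqrt, isclose

def xi_branches(eta):
    """Two solutions of Eq. (12): eta (1 -+ sqrt(1 - 1/eta**2)), roots of x**2 - 2 eta x + 1."""
    root = sqrt(1.0 - 1.0 / eta ** 2)
    return eta * (1.0 - root), eta * (1.0 + root)

eta = 1.02            # illustrative value (eta_c >= 1)
x_lo, x_hi = xi_branches(eta)
```

This makes explicit why the two intervals ${\tilde{\xi}}_c\in ]1/2,1[$ and ${\tilde{\xi}}_c\in ]1,\xi_c]$ are selected by the same $\eta_c (\xi_c,\Phi,l)$.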
\section{The effective range}
\label{ER}
\subsection{The effective-range general problem and cancellation of its unwanted terms}
\label{ERGP}
The MQIM-HO accounts for the higher terms in the effective range expansion, Eq. (\ref{equ4}),
so that as anticipated the phase shift $2\pi{\tilde{\Phi}}_{c,c} (\pm 2k_F,q)$ acquires an additional term,
$2\pi{\tilde{\Phi}}_{c,c}^{R_{\rm eff}} (k_r)$, relative to the MQIM-LO,
\begin{eqnarray}
&& 2\pi{\tilde{\Phi}}_{c,c} (\pm 2k_F,q) = 2\pi{\tilde{\Phi}}_{c,c}^{{\tilde{a}}} (\pm 2k_F,q) + 2\pi{\tilde{\Phi}}_{c,c}^{R_{\rm eff}} (k_r)
\nonumber \\
&& 2\pi{\tilde{\Phi}}_{c,c}^{{\tilde{a}}} (\pm 2k_F,q) = {\xi_c\over {\tilde{\xi}}_c}{({\tilde{\xi}}_c -1)^2\over (\xi_c -1)^2}\,2\pi\Phi_{c,c} (\pm 2k_F,q)
\nonumber \\
& & \hspace{2.6cm} = {\arctan\left({{\tilde{a}}\over L}\,2\pi\right)\over\arctan\left({a\over L}\,2\pi\right)}\,2\pi\Phi_{c,c} (\pm 2k_F,q)
\nonumber \\
& & 2\pi{\tilde{\Phi}}_{c,c}^{R_{\rm eff}} (k_r) =
\nonumber \\
&& \arctan\left({1\over 2}R_{\rm eff}\,k_r\sin^2 \left({({\tilde{\xi}}_c -1)^2\over {\tilde{\xi}}_c}\pi\right) + P_c (k_r)\right) .
\label{equ14}
\end{eqnarray}
The second expression given above for the phase shift $2\pi{\tilde{\Phi}}_{c,c}^{{\tilde{a}}} (\pm 2k_F,q)$
reveals that its renormalization is controlled by the scattering lengths associated with the leading term in the effective
range expansion. The ${\tilde{\xi}}_c =\xi_c$ bare phase shift $2\pi\Phi_{c,c} (\pm 2k_F,q)$ in that
expression is defined in Eqs. (\ref{equA11})-(\ref{equA15}) of Appendix \ref{APA}.
Furthermore, the function $P_c (k_r)$ in the expression of $2\pi{\tilde{\Phi}}_{c,c}^{R_{\rm eff}} (k_r)$
vanishes for $l<8$ and is such that its use in the term on the left-hand side of Eq. (\ref{equ4}),
$\cot ({\tilde{\Phi}}_c (k_r))=\cot(-2\pi{\tilde{\Phi}}_{c,c}^{{\tilde{a}}} (\pm 2k_F,q) - 2\pi{\tilde{\Phi}}_{c,c}^{R_{\rm eff}} (k_r))$,
gives rise to all the shape parameter terms in the expansion, Eq. (\ref{equ4}), beyond
the two leading terms, ${-1\over {\tilde{a}}\,k_r} + {1\over 2}\,R_{\rm eff}\,k_r$.
Fortunately, in the unitary limit all properties characterized by these higher-order terms
become irrelevant for $l>7$ as well. Hence $2\pi{\tilde{\Phi}}_{c,c}^{R_{\rm eff}} (k_r)$ is given by
$\arctan\left({1\over 2}R_{\rm eff}\,k_r\sin^2 ([({\tilde{\xi}}_c -1)^2/{\tilde{\xi}}_c]\pi)\right)$, which
gives $\cot ({\tilde{\Phi}}_c (k_r))={-1\over {\tilde{a}}\,k_r} + {1\over 2}\,R_{\rm eff}\,k_r$
at small $k_r$. (That $2\pi{\tilde{\Phi}}_{c,c}^{R_{\rm eff}} (\mp 2\pi/L)$ vanishes
in the thermodynamic limit confirms that at $k_r = \mp 2\pi/L$
the phase shift $2\pi{\tilde{\Phi}}_{c,c} (\pm 2k_F,\pm 2k_F + k_r)$
has the same value $\pm ({\tilde{\xi}}_c -1)^2\pi/{\tilde{\xi}}_c$,
Eq. (\ref{equ8}), as for the MQIM-LO.)
Both the unitary limit and the fact that for ${\tilde{\xi}}_c\in ]1/2,1[$ the scattering energy of the residual interactions
of the $c$ particles and $c$ impurity are much smaller than the depth $-V_c (x_1)$
of the potential $V_c (x)$ will play important roles in the following derivations of
the effective range $R_{\rm eff}$ in the expression of $2\pi{\tilde{\Phi}}_{c,c}^{R_{\rm eff}} (k_r)$,
Eq.(\ref{equ14}).
First, note that the phase shift term $-2\pi{\tilde{\Phi}}_{c,c}^{{\tilde{a}}} (\pm 2k_F,\pm 2k_F + k_r)$
(see Eq. (\ref{equ14})) of ${\tilde{\Phi}}_c (k_r) = -2\pi{\tilde{\Phi}}_{c,c} (\pm 2k_F,\pm 2k_F + k_r)$ in the
effective range expansion, Eq. (\ref{equ4}), contributes only to the leading term in that
expansion, ${-1\over {\tilde{a}}\,k_r}$. Thus it does not contribute to the effective range
$R_{\rm eff}$. Indeed, that phase shift term reads $\mp ({\tilde{\xi}}_c -1)^2\pi/{\tilde{\xi}}_c$,
Eq. (\ref{equ8}), at $k_r=\mp 2\pi/L$ whereas it vanishes at $k_r =0$, so that in the thermodynamic
limit the derivative $-2\pi\partial {\tilde{\Phi}}_{c,c}^{{\tilde{a}}} (\pm 2k_F,\pm 2k_F + k_r)/\partial k_r\vert_{k_r =0}$
is ill defined.
For a potential with large-$x$ behavior, $- C_c/(x/2r_l)^l$,
Eq. (\ref{equ5}), the effective range $R_{\rm eff}$ in the phase shift term
$2\pi{\tilde{\Phi}}_{c,c}^{R_{\rm eff}} (k_r)$ of Eq. (\ref{equ14}) follows from
standard scattering-theory methods, and becomes \cite{Joachain_75,Preston_75,Flambaum_99}
\begin{eqnarray}
R_{\rm eff} = 2\int_0^{\infty} dx\left((\psi_c^0 (x))^2 - (\psi_c (x))^2\right) \, .
\label{equ15}
\end{eqnarray}
This integral converges provided that $l>5$.
In the bare limit, ${\tilde{\xi}}_c =\xi_c$, the boundary condition is $V_c (x)=0$ for all $x$,
which corresponds to the wave function $\psi_c^0 (x)$ in Eq. (\ref{equ15}).
It is the zero-energy solution of the Schr\"odinger equation for the free motion,
\begin{eqnarray}
- {1\over 2\mu}{d^2\psi_c^0 (x)\over dx^2} = 0 \, .
\nonumber
\end{eqnarray}
Here $\mu$ is the reduced mass of the $c$ particle and $c$ impurity.
The function $\psi_c^0 (x)$ then has the form $\psi_c^0 (x) = 1 - x/a$ for all $x\in [0,\infty]$.
In contrast, the wave function $\psi_c (x)$ in Eq. (\ref{equ15})
is associated with the potential $V_c (x)$ induced by the potential $V_e (r)$ in Eq. (\ref{equ1}),
which describes the interaction of the $c$ particle and the $c$ impurity.
That wave function is thus the solution of a corresponding
Schr\"odinger equation at zero energy,
\begin{eqnarray}
- {1\over 2\mu}{d^2\psi_c (x)\over dx^2} + V_c (x)\,\psi_c (x) = 0 \, ,
\label{equ16}
\end{eqnarray}
with the boundary condition $\psi_c (0) = 0$. It is normalized at $x\rightarrow\infty$
as $\psi_c (x) = \psi_c^0 (x) = 1 - x/a$.
The charge parameter interval ${\tilde{\xi}}_c\in ]1,\xi_c]$ for which $\alpha <1/8$ that corresponds
to the second ${\tilde{\xi}}_c$ expression in Eq. (\ref{equ12}) is of little interest for our studies,
as similar $\alpha$ values are reachable by the 1D Hubbard model.
Two boundary conditions that must be obeyed by $R_{\rm eff}$ in that parameter interval are
$\lim_{{\tilde{\xi}}_c\rightarrow \xi_c} R_{\rm eff}=0$ and $\lim_{{\tilde{\xi}}_c\rightarrow 1} R_{\rm eff}= a_0$.
They are satisfied by the following {\it phenomenological} effective range expression,
\begin{equation}
R_{\rm eff} = a_0\left(1 - {{\tilde{a}}\over a}\right) < a_0 \, .
\label{equ17}
\end{equation}
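The two boundary conditions can be checked against Eq. (\ref{equ17}) by taking the ratio ${\tilde{a}}/a$ from the $L$-independent quotient of the unitary-limit expressions in Eq. (\ref{equ9}); the $\xi_c$ value below is illustrative and $a_0=1$:

```python
from math import pi, tan, isclose

a0, xi_c = 1.0, 1.242     # lattice spacing and an illustrative bare charge parameter

def r_eff(xt, xi=xi_c):
    """Eq. (17), with a~/a taken from the L-independent ratio of Eq. (9)."""
    ratio = tan(pi * (xt - 1.0) ** 2 / xt) / tan(pi * (xi - 1.0) ** 2 / xi)
    return a0 * (1.0 - ratio)
```

As required, $R_{\rm eff}$ vanishes at ${\tilde{\xi}}_c=\xi_c$, tends to $a_0$ as ${\tilde{\xi}}_c\rightarrow 1$, and stays between these bounds inside the interval.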
In the case of the interval ${\tilde{\xi}}_c\in ]1/2,1[$ for which $\alpha >1/8$
that corresponds to the first ${\tilde{\xi}}_c$ expression in Eq. (\ref{equ12}), the
explicit derivation of the integral in the effective range expression, Eq. (\ref{equ15}), simplifies
because the inequalities $\sqrt{2\mu\gamma_c}\gg 1$ in units of $a_0 =1$ and $\Phi/\pi\gg 1$ are found to
apply, as reported in Sec. \ref{EREUL}. This ensures that for ${\tilde{\xi}}_c<1$
the scattering energy of the residual interactions of the $c$ particles and
$c$ impurity is much smaller than the depth $-V_c (x_1)$ of the potential $V_c (x)$.
The following analysis applies to general scattering lengths $a$, finite or infinite,
and potentials with these general properties. They imply that the large-$x$ function
$\psi_c (x)$, which obeys the Schr\"odinger equation,
\begin{eqnarray}
{d^2\psi_c (x)\over dx^2} + {2(2r_l)^{l-2}\over x^l}\psi_c (x) = 0 \, ,
\nonumber
\end{eqnarray}
whose attractive potential is the large-distance asymptotic
form $V_c^{\rm asy} (x) = - C_c/(x/2r_l)^l$, Eq. (\ref{equ5}), has the general form,
\begin{eqnarray}
\psi_c (x) = \sqrt{x}\,\left(B_1\,\phi_{1\over l-2}(x)
- B_2\,{\phi_{{-1\over l-2}}(x)\over\cos\left({\pi\over l-2}\right)}\right) \, .
\label{equ18}
\end{eqnarray}
This expression can be used for all $x\in [0,\infty]$
{\it provided} that $V_c (x)$ at small distances $x\approx x_1$ where
it is deep is replaced by a suitable energy-independent boundary condition.
This is also valid for 3D s-wave scattering problems
whose potentials have the above general properties and whose scattering
lengths are parametrically large \cite{Flambaum_99}.
In Eq. (\ref{equ18}), $B_1$ and $B_2$ are $x$-independent constants, and
$\phi_{1\over l-2} (x) = J_{1\over l-2} (y)$ and
$\phi_{{-1\over l-2}} (x) = J_{{-1\over l-2}} (y)$ where
$J_{1\over l-2} (y)$ and $J_{{-1\over l-2}} (y)$ are
Bessel functions of argument,
\begin{eqnarray}
y = {2\sqrt{2}\over (l-2)(x/2r_l)^{l-2\over 2}} \, .
\nonumber
\end{eqnarray}
From the use in Eq. (\ref{equ18}) of the asymptotic behavior, $J_{\nu} (y) \approx y^{\nu}/(2^{\nu}\Gamma (1+\nu))$, of
the Bessel functions for $x\gg 1$ and thus $y\ll 1$
one finds that the normalization at $x\rightarrow\infty$
as $\psi_c (x) = \psi_c^0 (x) = 1 - x/a$ requires that,
\begin{equation}
B_1 = {1\over\sqrt{2r_l}}\left({l-2\over\sqrt{2}}\right)^{1\over l-2}
\Gamma \left({l-1\over l-2}\right) \, ,
\label{equ19}
\end{equation}
and
\begin{eqnarray}
B_2 & = & B_2^0 = {\bar{a}\over a}\,B_1\hspace{0.20cm}{\rm where}\hspace{0.20cm}{\rm the}
\hspace{0.20cm}{\rm length}\hspace{0.20cm}\bar{a}\hspace{0.20cm}{\rm reads}
\nonumber \\
\bar{a} & = & 2r_l\,\left({\sqrt{2}\over l-2}\right)^{2\over l-2}
{\Gamma \left({l-3\over l-2}\right) \over\Gamma \left({l-1\over l-2}\right)}\cos \left({\pi\over l-2}\right) \, .
\label{equ20}
\end{eqnarray}
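The normalization behind Eqs. (\ref{equ19}) and (\ref{equ20}) can be verified numerically: with these $B_1$ and $B_2$, the small-argument behavior of the Bessel functions turns Eq. (\ref{equ18}) into $\psi_c (x) = 1 - x/a$ at large $x$. The sketch below uses a truncated Bessel series (adequate at the small arguments $y$ that occur at large $x$) and illustrative $l$, $a$, and $2r_l = 1$:

```python
from math import gamma, sqrt, cos, pi, isclose

def bessel_j(nu, y, terms=25):
    """J_nu(y) = sum_k (-1)**k (y/2)**(2k+nu)/(k! Gamma(k+nu+1)); accurate for small y."""
    total, k_fact = 0.0, 1.0
    for k in range(terms):
        if k > 0:
            k_fact *= k
        total += (-1.0) ** k * (y / 2.0) ** (2 * k + nu) / (k_fact * gamma(k + nu + 1.0))
    return total

def psi_c(x, l, a, two_rl=1.0):
    """Zero-energy wave function of Eq. (18) with B_1, B_2 from Eqs. (19)-(20)."""
    nu = 1.0 / (l - 2)
    b1 = ((l - 2) / sqrt(2.0)) ** nu * gamma((l - 1.0) / (l - 2.0)) / sqrt(two_rl)
    abar = (two_rl * (sqrt(2.0) / (l - 2)) ** (2.0 * nu) * cos(pi / (l - 2))
            * gamma((l - 3.0) / (l - 2.0)) / gamma((l - 1.0) / (l - 2.0)))
    b2 = (abar / a) * b1
    y = 2.0 * sqrt(2.0) / ((l - 2) * (x / two_rl) ** ((l - 2) / 2.0))
    return sqrt(x) * (b1 * bessel_j(nu, y) - b2 * bessel_j(-nu, y) / cos(pi / (l - 2)))
```

The $a\rightarrow\pm\infty$ limit, where $B_2\rightarrow 0$ and $\psi_c (x)\rightarrow 1$ at large $x$, is the unitary-limit case relevant here.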
It is convenient to write the integrand in Eq. (\ref{equ15}) as
$(\psi_c^0 (x))^2 - \psi_c^2 (x) = g_c^{\rm virtual} (x) + g_c (x)$ where,
\begin{eqnarray}
g_c^{\rm virtual} (x) & = & (\psi_c^0 (x))^2 - f_c (x) \hspace{0.20cm}{\rm and}
\nonumber \\
\psi_c (x) & = & \sqrt{f_c (x) - g_c (x)} \, ,
\label{equ21}
\end{eqnarray}
and the functions $f_c (x)$ and $g_c (x)$ are given by,
\begin{eqnarray}
& & f_c (x) = (2r_l)^2{d\over dx}\{\left({x\over 2r_l}\right)^2[B_1^2\,\phi_{1\over l-2}^2 (x)
- {B_2\over\cos\left({\pi\over l-2}\right)}
\nonumber \\
& & \times \{B_1\,\phi_{{1\over l-2}}(x)
- {B_2\over 3\cos\left({\pi\over l-2}\right)}\,\phi_{{-1\over l-2}}(x)\}\,\phi_{{-1\over l-2}}(x)]\}
\label{equ22}
\end{eqnarray}
and
\begin{eqnarray}
& & g_c (x) = \left({x\over 2r_l}\right)^{-{(l-2)\over 2}+1}4\sqrt{2}\, r_l
\{B_1^2\,\phi_{1\over l-2}(x)\,\phi_{{1\over l-2}+1}(x)
\nonumber \\
& - & {B_1\,B_2\over 2\cos\left({\pi\over l-2}\right)} [\phi_{1\over l-2}(x)\,\phi_{{-1\over l-2}+1}(x)
+ \phi_{{-1\over l-2}}(x)\,\phi_{{1\over l-2}+1}(x)]
\nonumber \\
& + & {B_2^2\over 3\cos^2\left({\pi\over l-2}\right)}\,\phi_{{-1\over l-2}}(x)\,\phi_{{-1\over l-2}+1}(x)\} \, ,
\label{equ23}
\end{eqnarray}
respectively.
The separation in Eq. (\ref{equ21}) is convenient because
the divergences all appear in the functions $(\psi_c^0 (x))^2$ and $f_c (x)$.
That $B_1$ and $B_2$ have the expressions given in Eqs. (\ref{equ19})
and (\ref{equ20}), respectively, ensures that both $2\int_0^{\infty} dx\,(\psi_c^0 (x))^2$ and
$2\int_0^{\infty} dx\,f_c (x)$ read $\lim_{x\rightarrow\infty} 2\left(x - {x^2\over a} + {1\over 3}{x^3\over a^2}\right)$
and thus the divergences from $(\psi_c^0 (x))^2$ and $f_c (x)$ exactly cancel each other
under the integration of Eq. (\ref{equ15}). Hence $R_{\rm eff}$ can be expressed as,
\begin{eqnarray}
R_{\rm eff} = 2\int_0^{\infty} dx\,g_c (x) \, .
\label{equ24}
\end{eqnarray}
\subsection{The energy-independent boundary condition}
\label{EIBC}
The only role of $f_c (x)$, Eq. (\ref{equ22}), is to cancel $(\psi_c^0 (x))^2$
within $g_c^{\rm virtual} (x)$, Eq. (\ref{equ21}), under the integration in Eq. (\ref{equ15}).
In the general expression for the effective range given in that equation,
$\psi_c (x)$ is the solution of Eq. (\ref{equ16}) with
the actual potential $V_c (x)$ defined in its full domain, $x\in [0,\infty]$.
The alternative use of Eq. (\ref{equ24}), which was derived
by using the large-$x$ expression of $\psi_c (x)$, Eq. (\ref{equ18}), over the whole domain
$x\in [0,\infty]$, also leads to the effective range, Eq. (\ref{equ15}). This applies provided that $V_c (x)$ is replaced at small
distances near $x=x_1$, where it is deep, by the energy-independent boundary condition defined below.
It accounts for the effects from $V_c (x)$ for small distances.
In the unitary limit the inverse scattering length, $a^{-1}=0$, which appears in the $B_2$
expression, Eq. (\ref{equ20}), lies at the boundary between negative $a^{-1}<0$ and positive $a^{-1}>0$ values and could
refer to either $a = -\infty$ or $a = \infty$. Hence in that limit there is little difference between
the repulsive and attractive scattering cases. As discussed in Ref. \onlinecite{Castin_12}
for the case of two particles with an s-wave interaction, the scattering lengths in the attractive $a=-\infty$
and repulsive $a=\infty$ cases of the unitary limit merely refer to different states of the {\it same} $a^{-1}=0$
scattering problem.
For a potential $V (r)$ with a {\it finite} scattering length $a$ and having the general properties
reported above, at small distances $r$ where the potential is deep it can be replaced by
an energy-independent boundary condition such that the ratio
$B_2/B_1 = \bar{a}/a$ reads $\left[1 - \tan\left({\pi\over l-2}\right)\tan ({\bar{\Phi}})\right]^{-1}$ where
${\bar{\Phi}} = \int_{r_0}^{\infty}dr\sqrt{2\mu (-V (r))} - \pi/[2(l-2)]$. Here
${\bar{\Phi}}\gg\pi$ is a potential dependent zero-energy phase, $V (r_0)=0$, and $V (r)<0$ for $r>r_0$.
Moreover, $\tan ({\bar{\Phi}}) = - {\Delta a\over {\bar{a}}}\cos \left({\pi\over l-2}\right)$ where $\Delta a = a - {\bar{a}}$,
$\Delta a/{\bar{a}}$ is a relative fluctuation, and ${\bar{a}}$ given in Eq. (\ref{equ20}) is a mean scattering length determined by the
asymptotic behavior $\propto 1/r^l$ of the potential $V (r)$ through the integer $l>5$ and
the length scale $2r_l$. For instance, in terms of the constants $A=B_1-B_2$ and $B=-B_2\tan\left({\pi\over l-2}\right)$
of the scattering problem studied in Ref. \onlinecite{Flambaum_99},
the ratio $\bar{a}/a = B_2/B_1$ on the left-hand side of the above boundary condition reads
$\left[1 - (A/B)\tan\left({\pi\over n-2}\right)\right]^{-1}$ where $n=l$.
For the present range ${\tilde{\xi}}_c\in ]1/2,1[$, the length scale $2r_l$, whose expression is given below, is finite
in the unitary limit, and thus the related length scale ${\bar{a}}$ in Eq. (\ref{equ20}) is also finite.
It would then follow both that $\bar{a}/a = 0$ and that the constant $B_2 = B_2^0$, Eq. (\ref{equ20}), vanishes. This result is
clearly incorrect. The reason is that we have not yet accounted for the behavior of $V_c (x)$ at small
distances through a suitable energy-independent boundary condition. In the case of the unitary limit,
this boundary condition renders both $\bar{a}/a$ and $B_2 = B_2^0$ in Eq. (\ref{equ20}) finite.
Specifically, the scattering length $a$ is suitably mapped under it into a finite
scattering length $a_{\rm f}= a\,{\bar{a}\over {\tilde{a}}} $, so that $B_2^0$ is mapped
onto the following corresponding finite constant $B_2$,
\begin{eqnarray}
B_2 & = & {\bar{a}\over a_{\rm f}}\,B_1 = {{\tilde{a}}\over a}\,B_1
\hspace{0.20cm}{\rm where}
\nonumber \\
a_{\rm f} & = & a\,{\bar{a}\over {\tilde{a}}} = \bar{a}\left[1 - \tan\left({\pi\over l-2}\right)\tan ({\bar{\Phi}}_{\rm f})\right] \, .
\label{equ25}
\end{eqnarray}
Here $\tan ({\bar{\Phi}}_{\rm f})=\tan (\Phi)$ yet
${\bar{\Phi}}_{\rm f}\gg \pi$ may be different from $\Phi\gg \pi$ in Eq. (\ref{equ10}). Indeed, the relation
$\tan ({\bar{\Phi}}_{\rm f})=\tan (\Phi)$ is insensitive to such phase differences. In the unitary limit the boundary
condition is thus equivalent to a transformation $a\rightarrow a_{\rm f}$ such that $\tan ({\bar{\Phi}}_{\rm f})=\tan (\Phi)$.
The energy-independent boundary condition in Eq. (\ref{equ25}) is in terms of the finite scattering length $a_{\rm f}$
such that $B_2/B_1 = \bar{a}/a_{\rm f}$ is given by $\left[1 - \tan\left({\pi\over l-2}\right)\tan ({\bar{\Phi}}_{\rm f})\right]^{-1}$,
similarly to scattering problems of the same universality class whose scattering lengths are
parametrically large \cite{Flambaum_99,Gribakin_93}. The positivity of $a_{\rm f} = a\,{\bar{a}\over {\tilde{a}}}$
often occurs for potentials that are attractive at large distances \cite{Flambaum_99}. If $a_{\rm f}$ were negative,
$\bar{a}/a_{\rm f} = - \tilde{a}/a$,
then $\tan ({\bar{\Phi}}_{\rm f})$ would be given by $2\cot\left({\pi\over l-2}\right)-\tan (\Phi)$, which would violate
both the requirements that $\tan ({\bar{\Phi}}_{\rm f})=\tan (\Phi)$ and that $\tan ({\bar{\Phi}}_{\rm f})=0$ in the
bare limit, ${\tilde{\xi}}_c =\xi_c$.
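As a minimal numerical sketch of the boundary condition, Eq. (\ref{equ25}), the following Python snippet (the function names are illustrative, not from the derivation) evaluates $a_{\rm f}/\bar{a}$ and the ratio $B_2/B_1 = \bar{a}/a_{\rm f}$ for a given integer $l$ and zero-energy phase ${\bar{\Phi}}_{\rm f}$; in the bare limit, $\tan ({\bar{\Phi}}_{\rm f})=0$, both ratios equal one.

```python
from math import tan, pi

def af_over_abar(l, phi_f):
    # a_f / abar = 1 - tan(pi/(l-2)) * tan(phi_f), Eq. (25)
    return 1.0 - tan(pi / (l - 2)) * tan(phi_f)

def B2_over_B1(l, phi_f):
    # B2 / B1 = abar / a_f under the energy-independent boundary condition
    return 1.0 / af_over_abar(l, phi_f)
```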
Importantly, the cancellation $(\psi_c^0 (x))^2 - f_c (x)=0$ is {\it independent} of the value
of the scattering length in the expressions for $\psi_c^0 (x)$ and $f_c (x)$. Hence all results associated with Eqs. (\ref{equ15})-(\ref{equ24})
remain the same, with $a$ replaced by $a_{\rm f}$. This includes the effective range
$R_{\rm eff}$, Eq. (\ref{equ24}), remaining determined only by $g_c (x)$.
The main property of the transformation $a\rightarrow a_{\rm f}$ is the corresponding exact equality of the ratios,
$\bar{a}/a_{\rm f} = \tilde{a}/a$. This actually justifies why the scattering lengths $a$ and $\tilde{a}$,
Eqs. (\ref{equ9}), can be used in $\tan (\Phi)$ in Eq. (\ref{equ6}). That transformation is also the mechanism
through which the renormalized scattering length ${\tilde{a}}$ emerges in $\psi_c (x)$.
Hence similarly to finite-$a$ scattering problems of the same universality class, as for instance
those studied in Refs. \onlinecite{Flambaum_99} and \onlinecite{Kishi_17}, the relations
of general form, Eq. (\ref{equ6}), apply. In the present unitary limit the scattering length ratio $\tilde{a}/a$
in them equals the ratio $\bar{a}/a_{\rm f}$ also given by Eq. (\ref{equ10}). The sum rule
$\Phi = \int_{x_0}^{x_2}dx\sqrt{2\mu (-V_c (x))}$, Eq. (\ref{equ7}), encodes the effects from $V_c (x)$ at
small distances near $x=x_1$, referring to the interval $x\in [x_0,x_2]$ around $x_1$.
\subsection{The effective range dependence on the scattering length finite ratio $\tilde{a}/a$}
\label{EISL}
The use of the function $g_c (x)$, Eq. (\ref{equ23}), with the constants $B_1$ and $B_2$ given
in Eqs. (\ref{equ19}) and (\ref{equ25}), respectively, in Eq. (\ref{equ24}) leads for ${\tilde{\xi}}_c\in ]1/2,1[$ to,
\begin{eqnarray}
& & R_{\rm eff} = 2\sqrt{2}\,\Gamma^2 \left({l-1\over l-2}\right)\,\left({l-2\over\sqrt{2}}\right)^{2\over l-2}
\nonumber \\
& \times & \{\int_0^{\infty} dx\, \left({x\over 2r_l}\right)^{-{l-2\over 2}+1}\,\phi_{1\over l-2}(x)\,\phi_{{-1\over l-2}+1}(x)
\nonumber \\
& - & \left({{\tilde{a}}\over a}\right)\int_0^{\infty} dx\,\left({x\over 2r_l}\right)^{-{l-2\over 2}+1}
\nonumber \\
& \times & {\phi_{1\over l-2}(x)\,\phi_{{-1\over l-2}+1}(x)
+ \phi_{{-1\over l-2}}(x)\,\phi_{{1\over l-2}+1}(x)\over 2\cos \left({\pi\over l-2}\right)}
\nonumber \\
& + & \left({{\tilde{a}}\over a}\right)^2\int_0^{\infty} dx\,\left({x\over 2r_l}\right)^{-{l-2\over 2}+1}
{\phi_{{-1\over l-2}}(x)\,\phi_{{-1\over l-2}+1}(x)
\over 3\cos^2 \left({\pi\over l-2}\right)}\}
\nonumber
\end{eqnarray}
After performing the integrations, one finally reaches the following expression
valid for the present charge parameter interval ${\tilde{\xi}}_c \in ]1/2,1[$ and $\alpha >1/8$,
\begin{equation}
R_{\rm eff} = a_0\left(1 - c_1\,\left({{\tilde{a}}\over a}\right) + c_2\,\left({{\tilde{a}}\over a}\right)^2\right) \, .
\label{equ26}
\end{equation}
That the coefficient,
\begin{equation}
a_0 = 2r_l\left({2\over 3\pi}{\left({2\over (l-2)^2}\right)^{1\over l-2}\over \sin\left({\pi\over l-2}\right)}\right)
{\Gamma \left({1\over l-2}\right)\Gamma \left({4\over l-2}\right)
\over \Gamma^2 \left({2\over l-2}\right)\Gamma \left({3\over l-2}\right)} \, ,
\label{equ27}
\end{equation}
is identified with the lattice spacing follows from imposing that $R_{\rm eff}$ has the same value for
${\tilde{\xi}}_c\rightarrow 1^-$ and ${\tilde{\xi}}_c\rightarrow 1^+$. The coefficients $c_1$ and $c_2$ in
Eq. (\ref{equ26}) can be expressed in terms of the usual $\Gamma$ function and are given by,
\begin{eqnarray}
c_1 & = & {2\over \cos\left({\pi\over l-2}\right)} {\Gamma \left({2\over l-2}\right)\Gamma \left({l-4\over l-2}\right)
\over \Gamma \left({1\over l-2}\right)\Gamma \left({l-3\over l-2}\right)}\hspace{0.20cm}{\rm and}
\nonumber \\
c_2 & = & {3\,(l+1)\over (l-1)\cos^2\left({\pi\over l-2}\right)}{\Gamma \left({3\over l-2}\right)\Gamma \left(-{l+1\over l-2}\right)
\over \Gamma \left({1\over l-2}\right)\Gamma \left(-{l-1\over l-2}\right)} \, ,
\label{equ28}
\end{eqnarray}
respectively. They decrease from $c_1=c_2=2$ at $l=6$ to $c_1=1$ and $c_2=1/3$
for $l\rightarrow\infty$.
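The quoted limiting values of $c_1$ and $c_2$, and the resulting $R_{\rm eff}$ of Eq. (\ref{equ26}), can be checked numerically. The short Python sketch below (illustrative only; the function names are ours) uses the standard-library Gamma function; the denominator of $c_2$ is written with $\Gamma\left({1\over l-2}\right)$, the choice that reproduces both $c_1=c_2=2$ at $l=6$ and the quoted $l\rightarrow\infty$ limits.

```python
from math import gamma, cos, pi

def c1(l):
    # c1 of Eq. (28): equals 2 at l = 6 and decreases to 1 as l -> infinity
    n = l - 2
    return (2 / cos(pi / n)) * gamma(2 / n) * gamma((l - 4) / n) \
        / (gamma(1 / n) * gamma((l - 3) / n))

def c2(l):
    # c2 of Eq. (28), with Gamma(1/(l-2)) in the denominator: this choice
    # gives c2 = 2 at l = 6 and c2 -> 1/3 as l -> infinity
    n = l - 2
    return (3 * (l + 1) / ((l - 1) * cos(pi / n) ** 2)) \
        * gamma(3 / n) * gamma(-(l + 1) / n) \
        / (gamma(1 / n) * gamma(-(l - 1) / n))

def R_eff(l, ratio, a0=1.0):
    # effective range, Eq. (26), in units of the lattice spacing a0,
    # as a function of the scattering-length ratio  tilde{a}/a
    return a0 * (1 - c1(l) * ratio + c2(l) * ratio ** 2)
```

For instance, at $l=6$ and ${\tilde{a}}/a = 1$ this gives $R_{\rm eff} = a_0 (1 - 2 + 2) = a_0$.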
The effective range $R_{\rm eff}$, Eq. (\ref{equ26}), appears in the expression
of the spectral function exponents ${\tilde{\zeta}}_c (k)$ and ${\tilde{\zeta}}_{c'} (k)$,
Eq. (\ref{equA3}) of Appendix \ref{APA}, through the expression for the phase shift $2\pi{\tilde{\Phi}}_{c,c} (\pm 2k_F,q)$,
Eq. (\ref{equ14}). $R_{\rm eff}=\infty$ for ${\tilde{\xi}}_c\rightarrow 1/2$ is excluded, as it is outside
the range of validity of the unitary limit. The $R_{\rm eff}$ values found below
in Sec. \ref{ARPES} for Bi/InSb(001) are given in Table \ref{table5}. They obey the unitary limit
inequality, $R_{\rm eff}\ll 1/k_{Fc}^0$.
The effective range, Eq. (\ref{equ26}), can alternatively be expressed in terms of the ratio $\bar{a}/a_{\rm f}$
involving the finite scattering lengths $\bar{a}\propto 2r_l$ and $a_{\rm f}$ defined by Eqs. (\ref{equ20}) and (\ref{equ25}),
respectively.
The expression for the lattice spacing $a_0$, Eq. (\ref{equ27}), contains important physical information: its
inverse gives the following expression, valid for ${\tilde{\xi}}_c \in ]1/2,1[$, for the length scale $2r_l$ in the
potential $V_c^{\rm asy} (x)$ expression, Eq. (\ref{equ5}), and the related length scale $x_2$, Eq. (\ref{equ7}),
\begin{equation}
2r_l = {3\pi a_0\over 2}\sin\left({\pi\over l-2}\right)\left({l-2\over\sqrt{2}}\right)^{2\over l-2}
{\Gamma^2 \left({2\over l-2}\right)\Gamma \left({3\over l-2}\right)\over
\Gamma \left({1\over l-2}\right)\Gamma \left({4\over l-2}\right)} \, .
\label{equ29}
\end{equation}
Here $2r_l$ is given by $5.95047\,a_0$ at $l=6$, reaches a maximum $6.48960\,a_0$ at $l=10$, and decreases to
$4.93480\,a_0$ as $l\rightarrow\infty$, so that $\sqrt{2}\,(2r_l)^{l-2\over 2}=\sqrt{2\mu\gamma_c}\gg 1$
in units of $a_0 =1$ as given in Table \ref{table5}. Thus $\Phi/\pi\gg 1$ for $l=6-12$.
As in the case of 3D s-wave atomic scattering problems \cite{Flambaum_99,Gribakin_93}, this shows
that for ${\tilde{\xi}}_c\in ]1/2,1[$ the scattering energy of the interactions of the $c$ particles and
$c$ impurity is indeed much smaller than the depth $-V_c (x_1)$ of the potential $V_c (x)$ well.
This confirms the consistency of the derivation of the effective range for ${\tilde{\xi}}_c \in ]1/2,1[$ that
assumed the validity of such properties.
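The quoted values of $2r_l$ can be reproduced directly from Eq. (\ref{equ29}); a brief Python check, in units of $a_0 = 1$ (illustrative only):

```python
from math import gamma, sin, pi, sqrt

def two_r_l(l, a0=1.0):
    # 2 r_l from Eq. (29), in units of the lattice spacing a0
    n = l - 2
    return (3 * pi * a0 / 2) * sin(pi / n) * (n / sqrt(2)) ** (2 / n) \
        * gamma(2 / n) ** 2 * gamma(3 / n) / (gamma(1 / n) * gamma(4 / n))
```

This reproduces $2r_l = 5.95047\,a_0$ at $l=6$, the maximum $6.48960\,a_0$ at $l=10$, and the limit $\pi^2 a_0/2 = 4.93480\,a_0$ as $l\rightarrow\infty$.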
The length scales involved in the MQIM-HO description are explicitly defined in Table \ref{table2}.
\begin{widetext}
\begin{table}
\begin{center}
\begin{tabular}{|c||c|}
\hline
Length scale & Description \\
\hline
$2r_l$ & length scale in the large-$x$ potential decay with exponent $l>5$, $V_c^{\rm asy} (x)\propto (x/2r_l)^{-l}$,
Eqs. (\ref{equ5}) and (\ref{equ29}) \\
\hline
$a_0$ & lattice spacing related to $2r_l$ (twice the van der Waals length at $l=6$) as given in Eq. (\ref{equ27})\\
\hline
$a$ and $\tilde{a}$ & scattering lengths at $\xi_c$ and ${\tilde{\xi}}_c$, respectively, within the $\xi_c\rightarrow{\tilde{\xi}}_c$
transformation, Eqs. (\ref{equ9}) and (\ref{equ10}) \\
\hline
$R_{\rm eff}$ & effective range $R_{\rm eff} = a_0\left(1 - c_1\,\left({{\tilde{a}}\over a}\right) + c_2\,\left({{\tilde{a}}\over a}\right)^2\right)$
for the interval ${\tilde{\xi}}_c<1$ of physical interest, Eqs. (\ref{equ26}) and (\ref{equ28})\\
\hline
\end{tabular}
\caption{Length scales involved in the MQIM-HO description}
\label{table2}
\end{center}
\end{table}
\end{widetext}
\section{ARPES in Bi/InSb(001)}
\label{ARPES}
\subsection{Brief information on the sample preparation and ARPES experiments}
\label{PREP}
Concerning the preparation of the Bi/InSb(001) surface, an InSb(001) substrate was cleaned by
repeated cycles of Ar sputtering and annealing up to $680$ K. Bi was evaporated onto it up to nominally
$3$ monolayers (ML), where one ML is defined as the atom density of the bulk-truncated substrate. Then, the
substrate was flash-annealed up to $680$ K for $\sim 10$ seconds. The resulting surface showed a
($1\times 3$) low-energy electron diffraction pattern.
Although the Bi/InSb(001) surface state is formed by evaporating Bi on the InSb substrate, In and Sb
are also found at the surface in addition to Bi, displaced from their bulk positions by the Bi evaporation.
Hence Bi, In, and Sb can all be significant sources of the surface electronic states. Detailed information
of the characterization of the Bi/InSb(001) surface sample is provided in Ref. \onlinecite{Ohtsubo_15}.
ARPES measurements were performed at $\hbar\omega=15$ eV and taken at $8$ K in the
CASSIOP\'EE beamline of SOLEIL synchrotron. The photoelectron kinetic energy at $E_F$ and the
overall energy resolution of the ARPES setup were calibrated by the Fermi edge of the photoelectron
spectra from Mo foils attached to the sample. The energy resolution was $\sim$20 meV. The ARPES
taken at $8$ K is shown in Fig. \ref{figure3}.
The theoretical predictions reported in this paper refer to (i) the $(k,\omega)$-plane location of the
high-energy Bi/InSb(001) MDC and EDC ARPES peaks and (ii) the value of the power-law SDS
exponent $\alpha$ associated with the angle integration used to detect the low-energy suppression
of the photoelectron intensity, which was performed at $k_y = 0.2\,\textrm{\AA}^{-1}$, near the
boundary of the ($1 \times 3$) surface Brillouin zone ($0.23\,\textrm{\AA}^{-1}$).
\subsection{Criteria for agreement between ARPES and the present theory}
\label{Criteria}
Refs. \onlinecite{Ohtsubo_15} and \onlinecite{Ohtsubo_17} found strong experimental evidence that Bi/InSb(001)
at $y$ momentum component $k_y = 0.2\,\textrm{\AA}^{-1}$ and temperature $8$ K displays 1D physics with
an SDS exponent that for small $\vert\omega\vert < 0.10$ eV has values in the interval $\alpha\in [0.6,0.7]$.
As discussed and justified below in Sec. \ref{OTHER}, the one-electron spectral properties of Bi/InSb(001)
are expected to be controlled mainly by the interplay of one dimensionality and finite-range electron interactions,
despite a likely small level of disorder. Consistent with an SDS exponent $\alpha$ larger than $1/8$ stemming
from finite-range interactions \cite{Schulz_90}, here we use the MQIM-HO
to predict one-electron spectral properties of Bi/InSb(001).
As discussed in Sec. \ref{OTHER}, Bi/InSb(001) is a complex system and some of its experimental properties
beyond those studied here may involve microscopic processes other than those described by the MQIM-HO
and the Hamiltonian, Eq. (\ref{equ1}). This includes coupling to two-dimensional (2D) physics if
$k_y = 0.2\,\textrm{\AA}^{-1}$ is smoothly changed to $k_ y =0$.
As reported in Sec. \ref{ERAE}, the MQIM-HO can describe both the low-energy TLL regime and
the spectral function, Eq. (\ref{equ2}), at high energies near the $(k,\omega)$-plane singularities.
At and in the vicinity of those singularities, the renormalization from its bare ${\tilde{\xi}}_c = \xi_c$ form
is determined by the large $x$ behavior of $V_c (x)$, Eq. (\ref{equ5}), and its sum rules, Eqs. (\ref{equ6})
and (\ref{equ7}), which refer to a high energy regime that goes well beyond the TLL limit.
Hence we can predict two properties of the one-electron spectral function: (i) the location in the
$(k,\omega)$ plane of the experimentally observed high-energy peaks in the ARPES MDC and EDC and
(ii) the value of the low-energy SDS exponent $\alpha$. Our $T=0$ theoretical results describe the former
high-energy experimental data taken at $8$ K for which the smearing of the spectral function singularities
by thermal fluctuations is negligible. The quantitative agreement with the corresponding experimental data
taken at fixed momentum $k_y = 0.2\,\textrm{\AA}^{-1}$ reached below provides further evidence of 1D
physics and electron finite-range interactions in Bi/InSb(001).
\begin{widetext}
\begin{table}
\begin{center}
\begin{tabular}{|c||c|}
\hline
Agreement & Description \\
\hline
First type & overall $(k,\omega)$-plane shapes of the theoretical
branch-line spectra, Eq. (\ref{equA1}), versus ARPES experimental spectra \\
\hline
Second type & $(k,\omega)$-plane location of the singularities corresponding
to negative exponents, Eq. (\ref{equA3}), versus ARPES peaks \\
\hline
Third type & SDS exponent $\alpha$ from the dependence of the exponents, Eq. (\ref{equA3}),
on ${\tilde{\xi}}_c$ versus its low-$\omega$ experimental value\\
\hline
\end{tabular}
\caption{Types of agreement between theory and experiments}
\label{table3}
\end{center}
\end{table}
\end{widetext}
A first type of agreement of the theoretical branch-line energy spectra with the $(k,\omega)$-plane
shape of the ARPES image spectra must be reached for well-defined fixed values of electronic
density $n_e$ and interaction $u=U/4t$. Through Eq. (\ref{equA16}) of Appendix \ref{APA}, these
uniquely determine the value of the bare charge parameter $\xi_c = \xi_c (u,n_e)$ to be used in the
$\xi_c\rightarrow {\tilde{\xi}}_c$ transformations suited to Bi/InSb(001). In addition, that
first type of agreement also determines the value of the transfer integral $t$.
The experimental values of the lattice spacing $a_0$ and of the momentum width of the spectra at $\omega =0$
provide the Fermi momentum $k_F = (\pi/2a_0)\,n_e$ and thus the electronic density $n_e$. At the density $n_e$,
the ratio ${\tilde{W}}_s/{\tilde{W}}_c$ of the experimental energy bandwidths
${\tilde{W}}_s \equiv \vert{\tilde{\omega}}_{s} (0)\vert$ of the $s$ branch line spectrum and
${\tilde{W}}_c \equiv \vert{\tilde{\omega}}_c (0)\vert = \vert{\tilde{\omega}}_{c'} (0)\vert$ of the $c$ and $c'$
branch line spectra at momentum $k=0$ uniquely determines $u=U/4t$. (See such energy bandwidths in the
sketch of the theoretical spin $s$ and charge $c$ and $c'$ branch lines in Fig. \ref{figure1}.) Finally, the
experimental values of ${\tilde{W}}_s$ and ${\tilde{W}}_c$ determine the value of the transfer integral $t$.
As discussed below in Sec. \ref{BiARPES}, from the available experimental data it is not possible to trace the
energy dispersion of the $s$ branch line. However, combining the experimental data on the EDC with kinematic
constraints of the MDC provides information about the most probable value of the energy at which its bottom is
located, which equals the branch line energy bandwidth ${\tilde{W}}_s$.
\begin{figure}[!htb]
\begin{center}
\centerline{\includegraphics[width=8.7cm]{BiInSb_ARPES_u03.pdf}}
\caption{(a) Raw Bi/InSb(001) ARPES data for $\hbar\omega = 15$ eV. (b) An ARPES
EDC at $k = 0$ \AA$^{-1}$. (c) ARPES MDC at $\omega = -0.05$ eV. (d) ARPES EDCs
from $k = -0.16$ \AA$^{-1}$ (bottom) to $+0.16$ \AA$^{-1}$ (top). The thick line is the normal-emission spectrum ($k = 0$ \AA$^{-1}$). (e) and (f)
Second-derivative ARPES images. Differentiation was performed along momentum in (e) and along energy in (f). Circles and error
bars in (e) indicate the MDC peak positions. Solid and dashed lines overlaid in (e) are the theoretical $s$ (red), $c$
(light blue), and $c'$ (black) branch lines for $u = U/4t = 0.30$, $t = 1.22$ eV, and electronic density $n_e = 0.176$. Only
for the solid-line $k$ ranges in (e), for which the exponents are negative in
Fig. \ref{figure4} and in Figs. \ref{figure5} and \ref{figure6} of Appendix \ref{APA}, can they be seen in the ARPES image.
(ARPES from the same experimental data as in Refs. \onlinecite{Ohtsubo_15} and \onlinecite{Ohtsubo_17}.)}
\label{figure3}
\end{center}
\end{figure}
A second type of agreement is between the momentum interval and corresponding energy interval
for which the exponents ${\tilde{\zeta}}_c (k)$, ${\tilde{\zeta}}_{c'} (k)$, and ${\tilde{\zeta}}_{s} (k)$, Eq. (\ref{equA3}) of Appendix \ref{APA},
are negative and the $(k,\omega)$-plane location of the experimentally observed high-energy ARPES MDC and EDC
peaks. That agreement must be reached at the fixed $u$ and $n_e$ values and corresponding bare charge parameter
$\xi_c = \xi_c (u,n_e)$ value obtained from the first type of agreement. This second type of agreement is reached at some values
of the integer quantum number $l>5$ in the large-$x$ potential $V_c (x)$ expression, Eq. (\ref{equ5}),
and of the renormalized charge parameter ${\tilde{\xi}}_c$ (and thus of $\tan (\Phi)$, see Eqs. (\ref{equ12}) and (\ref{equ13})).
For the theoretically predicted high-energy ARPES peaks located on the $s$ branch line, there is only limited
experimental information. Hence we start by finding the ${\tilde{\xi}}_c$ and $l>5$ values
at which the second type of agreement is reached concerning
the momentum intervals where the exponents ${\tilde{\zeta}}_c (k)$ and ${\tilde{\zeta}}_{c'} (k)$ are negative
and the corresponding $(k,\omega)$-plane location of the experimentally observed high-energy
ARPES MDC and EDC peaks. Fortunately, it turns out that these
${\tilde{\xi}}_c$ values lead to a prediction for the $(k,\omega)$-plane location of the
high-energy ARPES peaks associated with the $s$ branch line that is consistent with the available experimental EDC data.
This second type of agreement is reached for specific ${\tilde{\xi}}_c$ values. This then provides a prediction for the SDS exponent
$\alpha = (2-{\tilde{\xi}}_c^2)^2/(8{\tilde{\xi}}_c^2)$ obtained from a different {\it low-energy} experiment
that detects the suppression of the photoelectron intensity. That the SDS exponent $\alpha$ determined by
the ${\tilde{\xi}}_c$ values for which the second type of agreement is reached
is also that measured within the low-energy angle integrated photoemission intensity then becomes the
required third type of agreement.
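The mapping between the renormalized charge parameter and the SDS exponent used throughout, $\alpha = (2-{\tilde{\xi}}_c^2)^2/(8{\tilde{\xi}}_c^2)$, is simple enough to evaluate directly; a short Python sketch (the function name is ours):

```python
def sds_exponent(xi_c):
    # SDS exponent alpha = (2 - xi_c^2)^2 / (8 xi_c^2)
    return (2 - xi_c ** 2) ** 2 / (8 * xi_c ** 2)
```

For the bare value $\xi_c = 1.242$ this gives $\alpha_0 \approx 0.017$, well below the $\alpha = 1/8$ threshold, whereas ${\tilde{\xi}}_c = 1/\xi_c = 0.805$ already lies above it.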
In the Lehmann representation of the spectral function, the first and second types of agreement correspond
to the energy spectra and the overlaps of the one-electron matrix elements, respectively.
The exponents in Eq. (\ref{equA3}) of Appendix \ref{APA} involved in the second type of agreement
depend {\it both} on ${\tilde{\xi}}_c$ and momentum-dependent phase shifts
${\tilde{\Phi}}_{c,c}(\pm 2k_F,q)$ and ${\tilde{\Phi}}_{c,s}(\pm 2k_F,q')$.
There is no apparent direct relation between the high-energy ARPES MDC peaks and the low-energy SDS.
That the MQIM-HO describes the main microscopic mechanisms behind the specific one-electron
spectral properties of Bi/InSb(001) then requires that the third type of agreement is
fulfilled.
The three types of agreement between theory and experiment are explicitly described in Table \ref{table3}.
\subsection{Searching for agreement between theory and experiments}
\label{BiARPES}
\subsubsection{First type of agreement}
\label{BiARPES1}
The MDC spectral shape plotted in Fig. \ref{figure3}(c) displays two peaks centered at well defined
Fermi momentum values $-k_F = -0.06$\,$\textrm{\AA}^{-1}$ and $k_F = 0.06$\,$\textrm{\AA}^{-1}$, respectively.
Furthermore, the experimental circles (with error bars) in Fig. \ref{figure3}(e) clearly indicate that the MDC peaks are
located on two lines that in the limit of zero energy start at such two Fermi momenta.
Since the experimental data lead to $\pi/a_0\approx 0.68\,\textrm{\AA}^{-1}$, one finds from
$k_F = (\pi/2a_0)\,n_e\approx 0.06\,\textrm{\AA}^{-1}$ a small electronic density, $n_e \approx 0.176$.
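This density estimate is a one-line piece of arithmetic; written out explicitly (experimental inputs as quoted above, variable names ours):

```python
pi_over_a0 = 0.68  # experimental zone-boundary momentum pi/a_0, in 1/Angstrom
k_F = 0.06         # experimental Fermi momentum, in 1/Angstrom

# k_F = (pi / 2 a_0) * n_e  =>  n_e = 2 k_F / (pi / a_0)
n_e = 2 * k_F / pi_over_a0
```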
The experimental value of the $c$ branch line energy bandwidth
${\tilde{W}}_c$ is directly extracted from analysis of the experimental MDC data provided in Fig. \ref{figure3}(e).
From analysis of the EDCs in Fig. \ref{figure3}(d) alone one finds
that there is an uncertainty of $0.05\pm 0.05$ eV concerning the energy at which the bottom of the $s$ branch line
is located. It is clear that in this energy region there is a hump that cannot be explained by assuming only the single
peak at $0.25$ eV, which refers to the bottom of the $c$ branch line.
The zero-energy level of the theoretically predicted downward-convex parabolic-like dispersion of the
$s$ branch line plotted in Fig. \ref{figure3}(e) (see also the sketch depicted in Fig. \ref{figure1}) refers to the
Fermi level. Hence the $s$ branch line energy bandwidth ${\tilde{W}}_s$ equals the energy at which its bottom is located.
While the energy range uncertainty of that bottom energy is experimentally rather wide,
one can lessen it by combining the experimental ARPES MDC intensity distribution
shown in Fig. \ref{figure3}(c) with its kinematical constraints, which follow
from the finite-energy bandwidth of the theoretical $s$ branch line. One then finds that
the most probable value of the $s$ branch line bottom energy and thus
of ${\tilde{W}}_s$ is between $0.05$ and $0.10$ eV.
The maximum momentum width of the ARPES MDC intensity distribution shown in Fig. \ref{figure3}(c) for energy
$\vert\omega\vert = 0.05$ eV allowed by such kinematic constraints involves the superposition of two maximum
momentum widths $\Delta k$, centered at $-k_F$ and $k_F$, respectively. Within the MQIM-HO, these kinematical
constraints explain the lack of spectral weight in well-defined $(k,\omega)$-plane regions shown in Fig. \ref{figure1}.
Fortunately, the lines that limit such regions without spectral weight only involve the $s$ band dispersion spectrum.
In the case of the spectral weight centered at $-k_F$ and $k_F$, respectively, such kinematical constraints imply
that for each energy value $\vert\omega\vert = -\omega$ the corresponding maximum
momentum width reads,
\begin{eqnarray}
\Delta k & = & 2(k_F - k) \hspace{0.20cm}{\rm for}\hspace{0.20cm}
\vert\omega\vert = \vert{\tilde{\omega}}_{s} (k)\vert \hspace{0.20cm}{\rm and}\hspace{0.20cm}k \in ]0,k_F[
\nonumber \\
& = & 2(k_F + k) \hspace{0.20cm}{\rm for}\hspace{0.20cm}
\vert\omega\vert = \vert{\tilde{\omega}}_{s} (k)\vert \hspace{0.20cm}{\rm and}\hspace{0.20cm}k \in ]-k_F,0[
\nonumber \\
{\rm no} & & {\rm constraints}\hspace{0.20cm}{\rm for}\hspace{0.20cm}
\vert\omega\vert > {\tilde{W}}_s\hspace{0.20cm}{\rm and}\hspace{0.20cm}
\vert k\vert > k_F
\label{equ30}
\end{eqnarray}
where ${\tilde{W}}_s = \vert{\tilde{\omega}}_{s} (0)\vert$ and the $s$ band dispersion spectrum ${\tilde{\omega}}_{s} (k)$
is given in Eq. (\ref{equA1}) of Appendix \ref{APA}.
For $\vert\omega\vert\ll {\tilde{W}}_s$ the kinematical constraints, Eq. (\ref{equ30}), are those of a TLL,
$\Delta k = 2\vert\omega\vert/v_s (k_F)$, consistent with $v_s (k_F) ={\rm min}(v_s (k_F),v_c (2k_F))$ \cite{Orgad_01}.
However, for energy $\vert\omega\vert = -\omega$ larger than the $s$ branch
line energy bandwidth ${\tilde{W}}_s = \vert{\tilde{\omega}}_{s} (0)\vert$, which is that at which the $s$ branch line bottom is located
in the experimental data, there are {\it no} kinematical constraints.
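The kinematic constraints, Eq. (\ref{equ30}), can be sketched as a small numerical routine. The cosine band below is only an illustrative stand-in for the actual $s$ band dispersion ${\tilde{\omega}}_{s} (k)$ of Eq. (\ref{equA1}); the function name and the bisection inversion are ours.

```python
from math import cos, pi

def max_mdc_width(abs_omega, k_F, W_s):
    """Maximum MDC momentum width Delta k at energy |omega|, Eq. (30),
    for the spectral weight centered at +k_F or -k_F.  Returns None for
    |omega| >= W_s, where no kinematic constraint applies.  An
    illustrative band |omega_s(k)| = W_s cos(pi k / 2 k_F) stands in
    for the exact s-band dispersion of Eq. (A1)."""
    if abs_omega >= W_s:
        return None
    s_disp = lambda k: W_s * cos(pi * k / (2 * k_F))
    # invert |omega_s(k)| = |omega| for k in ]0, k_F[ by bisection
    lo, hi = 0.0, k_F
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if s_disp(mid) > abs_omega else (lo, mid)
    return 2 * (k_F - 0.5 * (lo + hi))  # Delta k = 2 (k_F - k)
```

In the TLL regime $\vert\omega\vert\ll {\tilde{W}}_s$ the width shrinks toward $\Delta k = 2\vert\omega\vert/v_s (k_F)$, while for $\vert\omega\vert \geq {\tilde{W}}_s$ the routine signals the absence of constraints.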
The absolute value of the derivative with respect to $k$ of the ARPES MDC intensity plotted in Fig. \ref{figure3}(c)
increases in the $\vert k\vert$ interval $\vert k\vert\in [k_F,k_{\rm MDC}]$ and decreases for
$\vert k\vert >k_{\rm MDC}$. Theoretically, the ARPES MDC intensity should be symmetrical
around $k=0$. Its actual experimental shape then introduces a small uncertainty in the value
of $k_{\rm MDC}$. The relatively large intensity in the tails located at the momentum region
$\vert k\vert > k_{\rm MDC}$ is explained by the larger uncertainty in the $s$ branch line bottom energy
${\tilde{W}}_s$. Indeed, the ARPES MDC under consideration
refers to an energy $\vert\omega\vert = 0.05$ eV within that uncertainty. And, as given in
Eq. (\ref{equ30}), there are {\it no} kinematic constraints for $\vert\omega\vert > {\tilde{W}}_s$.
One can then identify the most probable value of ${\tilde{W}}_s$ within its uncertainty interval as that for which
at the energy $\vert\omega\vert = 0.05$ eV the kinematic constraints would limit the ARPES MDC
intensity to momentum values within the interval $\vert k\vert \leq k_{\rm MDC}$. The corresponding momenta $k=\pm k_{\rm MDC}$
are the inflection points at which the derivative of the variation of the ARPES MDC intensity with respect to
$k$ changes sign in Fig. \ref{figure3}(c). The momentum width associated with $\vert k\vert \leq k_{\rm MDC}$
is thus that of the ARPES MDC shown in that figure if one excludes the tails.
The corresponding maximum momentum width $\Delta k$, Eq. (\ref{equ30}), of the two overlapping spectral weights
centered at $k_F$ and $-k_F$, respectively, that at $\vert\omega\vert = 0.05$ eV would lead to the
kinematic constraint $\Delta k = 2(k_{\rm MDC} - k_F)$, so that $\pm (k_F + \Delta k/2)
= \pm k_{\rm MDC}$. According to the kinematic constraints in Eq. (\ref{equ30}), this is fulfilled when at
$k =\pm (k_F - \Delta k/2)=\pm (2k_F - k_{\rm MDC})$ the $s$ branch line energy spectrum reads
$\vert{\tilde{\omega}}_{s} (k)\vert = - {\tilde{\omega}}_{s} (k) = 0.05$ eV.
Accounting for the combined $k_{\rm MDC}$ and ${\tilde{W}}_s$ uncertainties, the
most probable value of the energy bandwidth ${\tilde{W}}_s$ is larger
than $0.05$ eV and smaller than $0.10$ eV, as that of the theoretical $s$ branch line plotted in Fig. \ref{figure3}(e).
At electronic density $n_e = 0.176$ the best second type of agreement between theory and experiments
discussed in the following is reached within that combined uncertainty by the $u=U/4t$ and $t$ values that
are associated with the energy bandwidth ${\tilde{W}}_s$ of such a theoretical $s$ branch line. They read $u= 0.30$ and $t = 1.22$ eV,
as determined from the corresponding ratio ${\tilde{W}}_s/{\tilde{W}}_c$ and experimental ${\tilde{W}}_c$ value in Fig. \ref{figure3}(e).
Hence within the MQIM-HO the first type of agreement with the ARPES spectra is reached by choosing these
parameter values for the electronic density $n_e = 0.176$.
\subsubsection{Second type of agreement}
\label{BiARPES2}
The second type of agreement involves the theoretical $\gamma = c,c',s$ exponents ${\tilde{\zeta}}_{\gamma} (k)$,
Eq. (\ref{equA3}) of Appendix \ref{APA}. They are plotted for $u=0.30$ and $n_e=0.176$ as a function of the
momentum $k$ in Fig. \ref{figure4}(a) for $l=6$ and in Fig. \ref{figure4}(b) for $l=12$. In Appendix \ref{APA}, they
are plotted as a function of $k$ for several additional values of $l$.
\begin{figure}[!htb]
\begin{center}
\subfigure{\includegraphics[width=8.25cm]{BiInSbl6u03.pdf}}
\subfigure{\includegraphics[width=8.25cm]{BiInSbl12u03.pdf}}
\caption{The exponents, Eq. (\ref{equA3}) of Appendix \ref{APA}, in the
spectral function, Eq. (\ref{equ2}), that control the line shape
near the theoretical $c$, $c'$, and $s$ branch lines in Fig. \ref{figure3}(e), respectively,
associated with the experimentally observed high-energy Bi/InSb(001) ARPES MDC and EDC
peaks \cite{Ohtsubo_15,Ohtsubo_17}.
They are here plotted as a function of the momentum $k$ within the
MQIM-HO for (a) $l=6$ and (b) $l=12$ at $u=0.30$, $n_e=0.176$, and
several ${\tilde{\xi}}_c$ and $\alpha = (2-{\tilde{\xi}}_c^2)^2/(8{\tilde{\xi}}_c^2)$ values. The black solid lines
refer to the bare limit, ${\tilde{\xi}}_c =\xi_c$ ($\xi_c = 1.242$ and $\alpha_0= 0.017$).
The black dashed and the dashed-dotted lines correspond to $\alpha<1/8$ and $\alpha>1/8$ values, respectively.
Moreover, ${\tilde{\xi}}_c=0.805$, $0.857$, and $1.166$ refer to ${\tilde{\xi}}_c^{\oslash}=1/\xi_c$,
${\tilde{\xi}}_c^{\ominus-}$, and ${\tilde{\xi}}_c^{\ominus+}$, respectively. The ${\tilde{\xi}}_c$
values of the lines whose negative exponents ranges agree with the experimentally observed high-energy ARPES $(k,\omega)$-plane
MDC and EDC peaks in Fig. \ref{figure3}(e) are those for which the $c'$ branch-line exponent crosses zero
between $k/\pi =0$ and $k/\pi \approx 0.07$.
For $l=6$ and $l=12$ this refers to the small ${\tilde{\xi}}_c$ subintervals
$\alpha = 0.610-0.633$ and $\alpha = 0.674-0.700$, respectively. (Such
limiting values are given in Tables \ref{table4} and \ref{table5} for all $l=6-12$
integers.)}
\label{figure4}
\end{center}
\end{figure}
The different curves in each figure are associated with different values of the charge parameter ${\tilde{\xi}}_c$
and thus of the SDS exponent $\alpha = (2-{\tilde{\xi}}_c^2)^2/(8{\tilde{\xi}}_c^2)$ and effective range $R_{\rm eff}$.
The black solid lines refer to the bare charge parameter limit, ${\tilde{\xi}}_c=\xi_c=1.242$. The values
${\tilde{\xi}}_c=0.805$, $0.857$, and $1.166$ correspond to ${\tilde{\xi}}_c^{\oslash}=1/\xi_c$,
${\tilde{\xi}}_c^{\ominus-}$, and ${\tilde{\xi}}_c^{\ominus+}$, respectively.
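The mapping between the charge parameter ${\tilde{\xi}}_c$ and the SDS exponent $\alpha$ quoted above is simple enough to cross-check numerically. The following Python sketch (an illustration added here, not part of the paper's own numerics; the input values are taken from the text and Table \ref{table4}) verifies $\alpha_0 \approx 0.017$, ${\tilde{\xi}}_c^{\oslash} = 1/\xi_c \approx 0.805$, and the $l=6$ table entries:

```python
# Numerical cross-check of alpha = (2 - xi^2)^2 / (8 xi^2) for values quoted in the text.
def sds_exponent(xi):
    """SDS exponent as a function of the (renormalized) charge parameter xi."""
    return (2.0 - xi**2) ** 2 / (8.0 * xi**2)

# Bare limit for n_e = 0.176, u = 0.30: xi_c = 1.242 gives alpha_0 ~ 0.017.
assert abs(sds_exponent(1.242) - 0.017) < 1e-3
# 1/xi_c reproduces the quoted value 0.805 of the special point xi_c^{oslash}.
assert abs(1.0 / 1.242 - 0.805) < 1e-3
# Table IV, l = 6 entries: xi = 0.682 -> alpha ~ 0.633 and xi = 0.690 -> alpha ~ 0.610.
assert abs(sds_exponent(0.682) - 0.633) < 1e-3
assert abs(sds_exponent(0.690) - 0.610) < 1e-3
```

Note that, since $\alpha$ decreases monotonically with ${\tilde{\xi}}_c$ in this range, the smallest ${\tilde{\xi}}_c$ of each pair corresponds to the largest $\alpha$, as stated in the Table \ref{table5} caption.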
As justified in Sec. \ref{Criteria}, we start by finding the ${\tilde{\xi}}_c$ and $l>5$ values at which the second type of agreement
is reached. It refers to the momentum intervals (and corresponding energy ranges) at which the exponents
${\tilde{\zeta}}_c (k)$ and ${\tilde{\zeta}}_{c'} (k)$ are negative. Those are required to agree with
the corresponding $(k,\omega)$-plane location of the experimentally observed high-energy ARPES MDC and EDC peaks
in Figs. \ref{figure3}(e) and (f), respectively. This reveals that the integers $l>5$ and the values of the charge
parameter ${\tilde{\xi}}_c$ and corresponding SDS exponent
$\alpha = (2-{\tilde{\xi}}_c^2)^2/(8{\tilde{\xi}}_c^2)$ for which agreement is reached are those for which the
exponents ${\tilde{\zeta}}_c (k)$ and ${\tilde{\zeta}}_{c'} (k)$, Eq. (\ref{equA3}) of Appendix \ref{APA},
in the spectral-function expression near the $c$ and $c'$ branch lines, Eq. (\ref{equ2}), are negative for
$k \in [-2k_F + k_{Fc}^{\rm ex}, 2k_F - k_{Fc}^{\rm ex}]$ and $k \in [- k_{c'}^{\rm ex}, k_{c'}^{\rm ex}]$, respectively.
In the case of the exponent ${\tilde{\zeta}}_c (k)$, the momentum $k_{Fc}^{\rm ex}$ appearing in the interval
$k \in [-2k_F + k_{Fc}^{\rm ex}, 2k_F - k_{Fc}^{\rm ex}]$ is such that $k_{Fc}^{\rm ex}/k_F$ is vanishing or very small
in the thermodynamic limit. It is the experimental value of the small theoretical
$c$ band momentum $k_{Fc}^0 = \pi n_{Fc}^0$ associated with the low density $n_{Fc}^0$ of $c$ particle
scatterers near the $c$ band Fermi points $-2k_F$ and $2k_F$ considered in Sec. \ref{UL}.
Concerning the momentum interval $k \in [- k_{c'}^{\rm ex}, k_{c'}^{\rm ex}]$ for which the
exponent ${\tilde{\zeta}}_{c'} (k)$ must be negative, there is a small uncertainty in
the value of $k_{c'}^{\rm ex}$. It is such that $k_{c'}^{\rm ex}\in [0,\delta k_0]$ where
$2\delta k_0 \approx 0.10\,\textrm{\AA}^{-1}$ in Fig. \ref{figure3}(e) is the momentum width of
the ARPES image crossed by the $c'$ branch line.
This small uncertainty, which in the units used in the figures corresponds to $\delta k_0\in [0,0.07\pi]$,
implies corresponding small uncertainties in the ${\tilde{\xi}}_c$ and $\alpha$ values at which for
each $l$ agreement with the experimentally observed high-energy ARPES MDC and EDC
peaks is reached. The corresponding two limiting values of such ${\tilde{\xi}}_c$ and $\alpha$ uncertainties
at which the exponent ${\tilde{\zeta}}_{c'} (k)$ in Fig. \ref{figure4} and in Figs. \ref{figure5} and \ref{figure6} of Appendix \ref{APA}
crosses zero at $k\approx 0$ and $k\approx 0.07\pi$, respectively, are given in Table \ref{table4} for each integer $l=6-12$.
Given the direct relation between the $c$ and $c'$ branch-line spectra, the condition $\delta k_0\in [0,0.07\pi]$ ensures that
the exponent ${\tilde{\zeta}}_c (k)$ is indeed negative for $k$ intervals $k \in [-2k_F + k_{Fc}^{\rm ex}, 2k_F - k_{Fc}^{\rm ex}]$
where $k_{Fc}^{\rm ex}/k_F\ll 1$, as also required for the second type of agreement to be reached.
Hence regarding the $c$ and $c'$ branch lines, agreement between theory and experiments is reached by the
${\tilde{\xi}}_c$ and $l>5$ values that in Fig. \ref{figure4} and in Figs. \ref{figure5} and \ref{figure6} of Appendix \ref{APA}
correspond to the $c'$ branch line exponent curves crossing zero between $k\approx 0$ and $k \approx \delta k_0\approx 0.07\pi$.
(In such figures only the two corresponding $c'$ branch line exponent curves crossing zero at $k\approx 0$ and
$k \approx \delta k_0\approx 0.07\pi$, respectively, are plotted.)
\begin{table}
\begin{tabular}{|c||c|c||c|c|}
\hline
$l$ & ${\tilde{\xi}}_c$ & $\alpha$ & ${\tilde{\xi}}_c$ & $\alpha$ \\
\hline
& (${\tilde{\zeta}}_{c'} (0)=0$) & (${\tilde{\zeta}}_{c'} (0)=0$) & (${\tilde{\zeta}}_{c'} ({7\pi\over 100})=0$) & (${\tilde{\zeta}}_{c'} ({7\pi\over 100})=0$) \\
\hline
\hline
$6$ & $0.682$ & $0.633$ & $0.690$ & $0.610$ \\
\hline
$7$ & $0.673$ & $0.662$ & $0.680$ & $0.640$ \\
\hline
$8$ & $0.667$ & $0.680$ & $0.673$ & $0.660$ \\
\hline
$9$ & $0.663$ & $0.691$ & $0.672$ & $0.665$ \\
\hline
$10$ & $0.662$ & $0.695$ & $0.67$ & $0.670$ \\
\hline
$11$ & $0.661$ & $0.699$ & $0.669$ & $0.672$ \\
\hline
$12$ & $0.661$ & $0.700$ & $0.669$ & $0.674$ \\
\hline
\end{tabular}
\caption{The two values of the charge parameter ${\tilde{\xi}}_c$ and corresponding
SDS exponent $\alpha = (2-{\tilde{\xi}}_c^2)^2/(8{\tilde{\xi}}_c^2)$ that at each integer $l=6-12$
are those at which the exponent ${\tilde{\zeta}}_{c'} (k)$ plotted as a function of $k$ in
Fig. \ref{figure4}(a) for $l=6$ and in Fig. \ref{figure4}(b) for $l=12$ crosses zero at $k\approx 0$ and $k\approx 0.07\pi$, respectively.
The same applies to the exponent ${\tilde{\zeta}}_{c'} (k)$ plotted for $l=7-11$
in Figs. \ref{figure5}(a) and (b) and \ref{figure6}(a)-(c) of Appendix \ref{APA}.}
\label{table4}
\end{table}
The theoretical $s$ branch line exponent ${\tilde{\zeta}}_{s} (k)$, Eq. (\ref{equA3}) of Appendix \ref{APA}, does not
depend on the integer quantum number $l>5$. For the ${\tilde{\xi}}_c$ values for which the $c'$ branch line exponent
curves cross zero between $k\approx 0$ and $k \approx \delta k_0\approx 0.07\pi$ in Fig. \ref{figure4} and in
Figs. \ref{figure5} and \ref{figure6} of Appendix \ref{APA}, the exponent ${\tilde{\zeta}}_s (k)$ is negative in corresponding
intervals $k \in ]-k_F + k_{Fs}^{*}, k_F - k_{Fs}^{*}[$ and thus positive for $\vert k\vert \in ](k_F - k_{Fs}^{*}),k_F]$.
Here $k_{Fs}^{*}$ is a function of $n_e$, $u$, and ${\tilde{\xi}}_c$ and $k = \pm (k_F - k_{Fs}^{*})$ are the two
momentum values at which ${\tilde{\zeta}}_{s} (k)$ vanishes.
The predicted location at $k \in ]-k_F + k_{Fs}^{*}, k_F - k_{Fs}^{*}[$ of the ARPES MDC peaks associated with the
$s$ branch line cannot be confirmed from the available experimental data. Indeed, as mentioned in Sec. \ref{Criteria},
it is not possible to extract from such data the dispersion of that line. However, the corresponding energy intervals
$\vert\omega\vert\in [\vert{\tilde{\omega}}_{s} (k_F - k_{Fs}^{*})\vert,{\tilde{W}}_s]$ are consistent with the available
experimental data from the EDCs in Fig. \ref{figure3}(d). Here $\vert\omega\vert ={\tilde{W}}_s = \vert{\tilde{\omega}}_{s} (0)\vert$
is the bottom of the $s$ branch line energy, as estimated in Sec. \ref{BiARPES1} from the interplay of the kinematical constraints,
Eq. (\ref{equ30}), and the ARPES MDC shown in Fig. \ref{figure3}(c) for $\vert\omega\vert = 0.05$ eV.
\subsubsection{Third type of agreement}
\label{BiARPES3}
From the above results we see that for $l = 6-12$ agreement with the experimentally observed high-energy
ARPES MDC and EDC peaks in Figs. \ref{figure3}(e) and (f) is reached by the exponent curves referring
to ${\tilde{\xi}}_c$ and $\alpha$ values belonging to the small intervals reported in Table \ref{table5}. The overlap
of the subintervals obtained for each $l=6-12$ given in that table then leads to the
theoretical predictions ${\tilde{\xi}}_c \in [0.66,0.69]$ and $\alpha \in [0.610-0.700]$.
Table \ref{table5} also provides the corresponding intervals of the effective range $R_{\rm eff}$ in units
of the lattice spacing that refer to the first and second types of agreement. The effective range dependence on
the bare charge parameter $\xi_c = \xi_c (n_e,u)$, renormalized charge parameter ${\tilde{\xi}}_c$, and integer
quantum number $l>5$ values at which such agreements have been reached is defined by combining
Eqs. (\ref{equ10}) and (\ref{equ26}). That table also provides the values of the length scale $2r_l$ in the same
units whose dependence on $l$ is given in Eq. (\ref{equ29}). Upon increasing $l$ from $l=6$ to $l=12$, the
effective range $R_c^{\rm eff}$ values for which there is agreement with the experiments decrease from
$R_c^{\rm eff}\approx 5r_l$ to $R_c^{\rm eff}\approx r_l$.
According to the analysis of Sec. \ref{BiARPES2},
agreement with the experimentally observed high-energy $(k,\omega)$-plane ARPES MDC and EDC
peaks distribution has been reached for the SDS exponent range $\alpha \in [0.610-0.700]$.
The third type of agreement between theory and experiments defined in Sec. \ref{BiARPES2} is
reached provided that such a predicted SDS exponent range agrees with the $\alpha$ values measured
within the low-energy angle integrated photoemission
intensity. An experimental value $\alpha = 0.65\pm 0.05$ of the SDS exponent was found for $\vert\omega\vert < 0.1$ eV
in Ref. \onlinecite{Ohtsubo_15}.
The remarkable quantitative agreement of the MQIM-HO predictions within the third criterion
reported in Sec. \ref{BiARPES2} provides evidence of finite-range interactions playing an active role in the Bi/InSb(001) spectral
properties and confirms the 1D character of its metallic states also found in Ref. \onlinecite{Ohtsubo_15}.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|}
\hline
$l$ & ${\tilde{\xi}}_c$ & $\alpha$ & $R_{\rm eff}/a_0$ & $2r_l/a_0$ & $\sqrt{2\mu\gamma_c}$ \\
\hline
$6$ & $0.68-0.69$ & $0.610-0.633$ & $14.4-17.0$ & $6.0$ & $50.1$ \\
\hline
$7$ & $0.67-0.68$ & $0.640-0.662$ & $6.9-8.1$ & 6.3 & $140.5$ \\
\hline
$8$ & $0.67$ & $0.660-0.680$ & $5.0-5.8$ & $6.4$ & $377.2$ \\
\hline
$9$ & $0.66-0.67$ & $0.665-0.691$ & $4.0-4.8$ & 6.5 & $983.3$ \\
\hline
$10$ & $0.66-0.67$ & $0.670-0.695$ & $3.4-4.2$ & $6.5$ & $2.51\times 10^3$ \\
\hline
$11$ & $0.66-0.67$ & $0.672-0.699$ & $3.1-3.8$ & 6.5 & $6.29\times 10^3$ \\
\hline
$12$ & $0.66-0.67$ & $0.674-0.700$ & $2.9-3.5$ & $6.4$ & $1.56\times 10^4$ \\
\hline
\end{tabular}
\caption{The renormalized charge parameter ${\tilde{\xi}}_c$, SDS exponent $\alpha$, and
effective range $R_c^{\rm eff}$ intervals for which there is agreement between the
$(k,\omega)$-plane regions where the theoretical branch lines display
singularities and the corresponding experimentally observed high-energy Bi/InSb(001)
ARPES MDC and EDC peaks in Figs. \ref{figure3}(e) and \ref{figure3}(f) for
$n_e = 0.176$, $u=0.30$, and $l=6-12$. As given in Table \ref{table4}, for each integer $l$ the
smallest and largest ${\tilde{\xi}}_c$ value refers to the largest and smallest corresponding
$\alpha = (2-{\tilde{\xi}}_c^2)^2/(8{\tilde{\xi}}_c^2)$ value, respectively.
(The $\alpha$ values were derived using more digits in the ${\tilde{\xi}}_c$ values than given in the table.)
The values in units of $a_0 =1$ of the length scale $2r_l$, Eq. (\ref{equ29}), and related parameter
$\sqrt{2\mu\gamma_c} =\sqrt{2}\,(2r_l)^{l-2\over 2}$ are also provided.}
\label{table5}
\end{table}
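As a consistency check of Table \ref{table5} (again an illustration added here, not taken from the paper's numerics), the relation $\sqrt{2\mu\gamma_c} = \sqrt{2}\,(2r_l)^{(l-2)/2}$ given in the caption can be evaluated directly from the tabulated $2r_l$ values. Since the table quotes $2r_l$ to only two digits while the $\sqrt{2\mu\gamma_c}$ entries were computed with more digits, agreement is expected only at the few-percent level:

```python
import math

def sqrt_2mu_gamma_c(two_r_l, l):
    """sqrt(2*mu*gamma_c) = sqrt(2) * (2 r_l)^{(l-2)/2}, as in the Table V caption."""
    return math.sqrt(2.0) * two_r_l ** ((l - 2) / 2.0)

# l: (2 r_l in units of a_0, tabulated sqrt(2*mu*gamma_c)) from Table V.
rows = {6: (6.0, 50.1), 7: (6.3, 140.5), 8: (6.4, 377.2), 9: (6.5, 983.3),
        10: (6.5, 2.51e3), 11: (6.5, 6.29e3), 12: (6.4, 1.56e4)}
for l, (two_r_l, tabulated) in rows.items():
    # few-percent agreement, limited by the two-digit 2 r_l entries
    assert abs(sqrt_2mu_gamma_c(two_r_l, l) / tabulated - 1.0) < 0.04
```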
\subsection{Interplay of relaxation processes with the momentum dependence of the exponents}
\label{Relaxation}
Here we discuss the physical mechanisms within the MQIM-HO that
underlie the dependence of the exponents ${\tilde{\zeta}}_c (k)$, ${\tilde{\zeta}}_{c'} (k)$, and
${\tilde{\zeta}}_{s} (k)$ on the charge parameter ${\tilde{\xi}}_c$. These exponents are plotted in Fig. \ref{figure4}
and in Figs. \ref{figure5} and \ref{figure6} of Appendix \ref{APA}.
In the bare charge parameter limit, ${\tilde{\xi}}_c=\xi_c$, the exponents being negative or positive just
refers to a different type of power-law behavior near the corresponding charge and spin branch lines.
For ${\tilde{\xi}}_c < \xi_c$, this applies only to the spin $s$ branch line. It coincides with the edge of support
of the one-electron removal spectral function that separates $(k,\omega)$-plane regions without
and with finite spectral weight. Hence conservation laws impose that, near that line, the spectral function
remains of power-law form, Eq. (\ref{equ2}), for both intervals ${\tilde{\xi}}_c \in ]1/2,1[$ and
${\tilde{\xi}}_c \in]1,\xi_c]$. As confirmed from an analysis of the $s$ branch-line exponents
plotted in Fig. \ref{figure4} and in Figs. \ref{figure5} and \ref{figure6} of Appendix \ref{APA}, the effect of
decreasing the charge parameter ${\tilde{\xi}}_c$ from its initial bare value $\xi_c$ (and thus
increasing the SDS exponent $\alpha = (2-{\tilde{\xi}}_c^2)^2/(8{\tilde{\xi}}_c^2)$ from
$\alpha_0 = (2 - \xi_c^2)^2/(8\xi_c^2)\in [0,1/8]$) is merely to increase the spin branch line exponent
${\tilde{\zeta}}_{s} (k)$. Except for two regions near $-k_F$ and $k_F$ corresponding to
$\vert k\vert \in [(k_F - k_{Fs}^{*}),k_F]$, that exponent remains negative, so that the singularities
prevail. In the complementary small momentum regions near $\pm k_F$ defined by
$\vert k\vert \in [(k_F - k_{Fs}^{*}),k_F]$ where the exponent is positive, the line shape remains of
power-law type.
Analysis of the $c$ and $c'$ branch-line exponent curves plotted in the same figures reveals that the
situation is different for the one-electron removal spectral function in the vicinity of the charge $c$ and $c'$
branch lines, Eq. (\ref{equ2}). These are located in the continuum of the one-electron spectral function. The
physics is, however, different for the subintervals ${\tilde{\xi}}_c \in ]1,\xi_c]$ and ${\tilde{\xi}}_c \in ]1/2,1[$,
respectively.
Smoothly decreasing ${\tilde{\xi}}_c$ from its initial bare value $\xi_c$ to ${\tilde{\xi}}_c\rightarrow 1$
produces effects quite similar to those of increasing $U$ within the 1D Hubbard model to
$U\rightarrow\infty$ \cite{Carmelo_18}. Indeed, these changes render ${\tilde{\zeta}}_c (k)$ and
${\tilde{\zeta}}_{c'} (k)$ more negative and lead to an increase of the width of the $k$ intervals in which
they are negative. Within the interval ${\tilde{\xi}}_c\in ]1,\xi_c]$, a large number of the ${\tilde{\xi}}_c=\xi_c$
conservation laws behind the factorization of the scattering $S$ matrix into two-particle
scattering processes survive; these tend to prevent the $c$ impurity from undergoing relaxation processes.
Hence the lifetimes $\tau_{c} (k)$ and $\tau_{c'} (k)$ in Eq. (\ref{equ2}) are very large for the $k$ intervals for which
the corresponding branch line exponents are negative, so that the expression given in the equation
for the spectral function near the $\beta = c,c'$ branch lines is nearly power-law like,
${\tilde{B}} (k,\omega)\propto \left({\tilde{\omega}}_{\beta} (k)-\omega\right)^{{\tilde{\zeta}}_{\beta} (k)}$.
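The role of the sign of the exponent can be made concrete with a minimal numerical sketch (illustrative only; units and prefactors are arbitrary and not taken from the paper): a negative ${\tilde{\zeta}}_{\beta}(k)$ gives a weight that grows without bound as the branch line is approached, while a positive exponent suppresses it there.

```python
# Illustrative power-law behavior near a branch line (arbitrary units):
# weight ~ (omega_beta - omega)^zeta for energies just below the branch line.
def branch_line_weight(delta_omega, zeta):
    """Spectral weight at energy distance delta_omega > 0 from the branch line."""
    return delta_omega ** zeta

near, far = 1e-4, 1e-1
# zeta < 0: singular enhancement upon approaching the line.
assert branch_line_weight(near, -0.5) > branch_line_weight(far, -0.5)
# zeta > 0: the weight instead vanishes at the line.
assert branch_line_weight(near, +0.5) < branch_line_weight(far, +0.5)
```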
The effects of the finite-range interactions increase upon decreasing
${\tilde{\xi}}_c$ within the interval ${\tilde{\xi}}_c \in [{\tilde{\xi}}_c^{\oslash},1[$ where
${\tilde{\xi}}_c^{\oslash}=1/\xi_c = 0.805$ for $n_e = 0.176$ and $u=0.30$.
Indeed, smoothly decreasing ${\tilde{\xi}}_c$ within that interval tends to remove an increasing number of
conservation laws, which strengthens the effects of the impurity relaxation processes.
Such effects become more pronounced when $\Delta a/{\tilde{a}}\in ]-1,0[$ and $\tan(\Phi)>0$,
upon further decreasing ${\tilde{\xi}}_c$ within the interval ${\tilde{\xi}}_c \in]1/2,{\tilde{\xi}}_c^{\oslash}]$.
In the $k$ intervals for which the $\beta = c,c'$ branch line exponents ${\tilde{\zeta}}_{\beta} (k)$
remain negative, the lifetimes $\tau_{\beta} (k)$ in Eq. (\ref{equ2}) remain large and the
$c$ impurity relaxation processes only slightly broaden the spectral-function power-law singularities,
as given in Eq. (\ref{equ2}). For the complementary $k$ ranges for which such exponents become positive
upon decreasing ${\tilde{\xi}}_c$ and thus increasing $\alpha$, the high-energy singularities are rather washed
out by the relaxation processes.
As confirmed by analysis of the curves plotted in Fig. \ref{figure4} and in Figs. \ref{figure5} and \ref{figure6} of
Appendix \ref{APA}, starting at $\vert k\vert = 3k_F-k_{Fc}^0$ and downwards, smoothly decreasing
${\tilde{\xi}}_c$ from ${\tilde{\xi}}_c^{\oslash}$ first gradually enhances the $k$ domains where ${\tilde{\zeta}}_{c'} (k)$
is positive. Further decreasing ${\tilde{\xi}}_c$ after the $c'$ branch line singularities are fully washed out leads
to the emergence of a $c$ branch line $k$ domain, starting at $\vert k\vert=0$ and growing upwards, in which that line's
singularities are finally fully washed out up to $\vert k\vert = k_F-k_{Fc}^0$ below a smaller ${\tilde{\xi}}_c$ value.
\section{Discussion and concluding remarks}
\label{DISCONCL}
\subsection{Discussion of other effects and properties outside the range of the MQIM-HO}
\label{OTHER}
As reported in Sec. \ref{PREP}, the ARPES data were taken at $8$ K and the angle integrations to detect
the suppression of the photoelectron intensity were performed at $k_y = 0.2\,\textrm{\AA}^{-1}$, near the
boundary of the ($1 \times 3$) surface Brillouin zone ($0.23\,\textrm{\AA}^{-1}$).
As shown in Fig. 2(b) of Ref. \onlinecite{Ohtsubo_15}, at $k_y = 0.2\,\textrm{\AA}^{-1}$ there is an energy
gap between the spectral features studied in this paper within a 1D theoretical framework and a bulk valence
band. Due to that energy gap, the coupling between the two problems is negligible, which justifies that the
system studied here corresponds to 1D physics.
Smoothly changing $k_y$ from $k_y = 0.2\,\textrm{\AA}^{-1}$ to $k_y=0$ corresponds to smoothly turning
on the coupling to the 2D physics. As shown in Fig. 2(a) of Ref. \onlinecite{Ohtsubo_15}, at $k_y = 0$ the
energy gap between the spectral features studied in this paper and that bulk valence band has been closed.
The study of the microscopic mechanisms involved in the physics associated with turning on the coupling to
the 2D physics by smoothly changing $k_y$ from $k_y = 0.2\,\textrm{\AA}^{-1}$ to $k_y=0$ is an interesting
problem that deserves further investigation.
Another interesting open problem refers to theoretical prediction of the MDC for extended momentum intervals and
of the EDC for corresponding energy ranges. The universal form of the spectral function near the singularities, Eq. (\ref{equ2}),
is determined by the large $x$ behavior of the potential $V_c (x)$, Eq. (\ref{equ5}), which follows from that of the
potential $V_e (r)$ in Eq. (\ref{equ1}), and potential sum rules, Eqs. (\ref{equ6}) and (\ref{equ7}). As reported in
Eqs. (\ref{equ12}) and (\ref{equ13}), the value of the renormalized charge parameter ${\tilde{\xi}}_c$ behind the
renormalization of the phase shifts in the exponents of that spectral function expression, Eq. (\ref{equ2}), is
indeed controlled by the value of the initial bare charge parameter $\xi_c = \xi_c (n_e,u)$, the integer quantum
number $l>5$ associated with the potential $V_c (x)$ large-$x$ behavior, Eq. (\ref{equ5}), and the zero-energy
phase $\Phi$ determined by the potential sum rules, Eqs. (\ref{equ6}) and (\ref{equ7}). Plotting an MDC for extended
momentum intervals and an EDC for corresponding energy ranges is a problem that involves non-universal properties
of the one-electron removal spectral function. This would require additional information about that function in $(k,\omega)$-plane
regions where it is determined by the detailed non-universal dependence on $r$ of the specific electronic potential
$V_e (r)$ {\it suitable} for Bi/InSb(001).
Another interesting issue refers to the validity of the MQIM-HO to describe the Bi/InSb(001) one-electron spectral
properties. The question is whether the interplay of one dimensionality and electron finite-range interactions
is indeed the main microscopic mechanism behind such properties. As in all lattice electronic condensed matter systems,
it is to be expected that both disorder effects and electron-electron interaction effects are present to some degree in the Bi/InSb(001)
physics. However, we can provide evidence that the interplay of the latter effects with the one dimensionality of the
Bi/InSb(001) metallic states is the dominant contribution to the one-electron removal spectral properties.
The first strong evidence that this is so is the experimentally observed universal power-law scaling of the
spectral intensity $I (\omega,T)$. (Here $\omega =0$ refers to the Fermi-level energy.) For instance, at
$\omega =0$ and finite $T$ and at $T=0$ and low $\omega$ it was found in Ref. \onlinecite{Ohtsubo_15} to
have the following TLL behaviors for Bi/InSb(001),
\begin{equation}
I (0,T) \propto T^{\alpha} \hspace{0.30cm}{\rm and}\hspace{0.30cm}
I (\omega,0) \propto \vert\omega\vert^{\alpha} \, ,
\label{equ31}
\end{equation}
respectively, where $\alpha$ is the SDS exponent.
If there were important effects from disorder, its interplay with electron-electron interactions
would rather give rise in the case of 1D and quasi-1D systems to a spectral intensity $I (\omega,T)$ with
the following behaviors \cite{Rollbuhler_01,Mishchenko_01,Bartoscha_02},
\begin{eqnarray}
I (0,T) & \propto & e^{-\sqrt{C_0^2\over 16\pi D_0 T}} \hspace{0.20cm}{\rm and}
\nonumber \\
I (\omega,0) & \propto & \vert\omega\vert^{1/2} {\sqrt{32\pi D_0}\over C_0}\,e^{-{C_0^2\over 32\pi D_0 \vert\omega\vert}} \,,
\label{equ32}
\end{eqnarray}
for $\omega \ll C_0^2/(32\pi D_0)$. Here $D_0\propto l$ is the {\it bare} diffusion coefficient and $C_0$ is a constant that
depends on the effective electron-electron interaction and electronic density.
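The qualitative difference between the two scaling forms is easy to exhibit numerically. In the sketch below (dimensionless units with $C_0 = D_0 = 1$, an assumption made purely for illustration), the disorder form of Eq. (\ref{equ32}) is suppressed faster than any power as $T \to 0$, so its ratio to the TLL power law of Eq. (\ref{equ31}) with $\alpha = 0.65$ vanishes in that limit:

```python
import math

ALPHA = 0.65  # SDS exponent reported for Bi/InSb(001)

def i_tll(T):
    """TLL scaling I(0,T) ~ T^alpha, Eq. (31); overall constant set to 1."""
    return T ** ALPHA

def i_disorder(T, c0=1.0, d0=1.0):
    """Disorder-dominated form I(0,T) ~ exp(-sqrt(C0^2/(16 pi D0 T))), Eq. (32)."""
    return math.exp(-math.sqrt(c0**2 / (16.0 * math.pi * d0 * T)))

# At low T the disorder form is exponentially suppressed relative to the power law.
ratios = [i_disorder(T) / i_tll(T) for T in (1e-5, 1e-4, 1e-3)]
assert ratios == sorted(ratios)  # the ratio grows monotonically with T in this range
assert ratios[0] < 1e-10         # and is negligible at the lowest T
```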
The behaviors in Eq. (\ref{equ32}) are qualitatively different from those reported in Eq. (\ref{equ31}), which are those
experimentally observed in Bi/InSb(001). This holds especially for $I (0,T)$, in which case disorder effects {\it cannot}
generate such a temperature power-law scaling. Also the experimentally found behavior
$I (\omega,0) \propto \vert\omega\vert^{\alpha}$ disagrees with that implied by Eq. (\ref{equ32}).
Further, in the limit of low $\omega$ and $T$, the MQIM-HO describes the corresponding TLL limit in
which the universal power-law scaling of the spectral intensity $I (\omega,T)$ has the behaviors
reported in Eq. (\ref{equ31}). Theoretically, the value of the SDS exponent $\alpha$
depends on those of the electronic density $n_e$, the interaction $u=U/4t$, and the renormalized
charge parameter ${\tilde{\xi}}_c$. Within the MQIM-HO phase shifts constraints, its values span the
intervals $\alpha \in [\alpha_0,1/8[\,;]1/8,49/32[$.
The theoretically predicted $\alpha$ value has been determined from agreement of the $T=0$ one-electron
spectral function, Eq. (\ref{equ2}), with the ARPES peaks location in the $(k,\omega)$-plane. The quantitative agreement
then reached refers to the experimental value $\alpha =0.65\pm 0.05$ obtained for $I (\omega,0) \propto \vert\omega\vert^{\alpha}$
at $\vert\omega\vert<0.1$ eV in Ref. \onlinecite{Ohtsubo_15}. For low temperatures and $\omega =0$ the MQIM-HO also
leads to the $I (0,T)$ behavior given in Eq. (\ref{equ31}).
Finally, despite bismuth (Bi), indium (In), and antimony (Sb) being heavy elements, the present 1D surface metallic states do not show any
detectable spin-orbit coupling effects nor any related Rashba-split bands. In this regard, it is very important to distinguish the system studied in this paper, with 1-2 monatomic layers of Bi thickness, whose ARPES data were first reported in Ref. \onlinecite{Ohtsubo_15}, from the system with a similar chemical name studied in
Ref. \onlinecite{Kishi_17}, which refers to 5-20 monatomic layers of Bi thickness. One expects, and indeed observes, significant qualitative differences between these two systems.
\subsection{Concluding remarks}
\label{CONCL}
In this paper we have discussed an extension of the MQIM-LO used in the theoretical
studies of the ARPES in the line defects of MoSe$_2$ \cite{Ma_17}. This MQIM-type approach
\cite{Imambekov_09,Imambekov_12} accounts only for the renormalization of the leading term in
the effective range expansion of the charge-charge phase shift, Eq. (\ref{equ4}). As shown in
Ref. \onlinecite{Cadez_19}, this is a good approximation if the effective range of the interactions
of the $c$ particles and the $c$ impurity is about one lattice spacing.
The MQIM-HO developed in this paper accounts for the renormalization of the higher terms
in the effective range expansion of the charge-charge phase shift, Eq. (\ref{equ4}).
It applies to a class of 1D lattice electronic systems described by the Hamiltonian, Eq. (\ref{equ1}),
which has longer range interactions. The quantum problem described by that Hamiltonian
is very involved in terms of many-electron interactions. However, we found that a key simplification is
the unitary limit associated with the scattering of the fractionalized charged particles by the $c$ impurity.
We have shown that a theory based on the MQIM-HO with finite-range interactions, Eq. (\ref{equ1}), applies
to the study of {\it some} of the one-electron spectral properties of Bi/InSb(001) measured at $y$
momentum component $k_y = 0.2\,\textrm{\AA}^{-1}$ and temperature $8$ K.
Consistent with the relation of the electron and $c$ particle representations
discussed in Appendix \ref{RECP}, the form of the attractive potential $V_c (x)$ associated with the
interaction of the $c$ particles and the $c$ impurity at a distance $x$ is determined by that of
the electronic potential $V_e (r)$ in Eq. (\ref{equ1}). The universal behavior of the spectral function near the singularities
given in Eq. (\ref{equ2}), whose $(k,\omega)$-plane location refers to that of the experimentally observed high-energy
ARPES peaks, is determined by the large $x$ behavior of $V_c (x)$ shown in Eq. (\ref{equ5}) and sum rules,
Eqs. (\ref{equ6}) and (\ref{equ7}). Otherwise the spectral function expression in the continuum is not universal.
Despite the limited available experimental information about the ARPES peaks located on the spin branch line,
we have shown that all the three criteria associated with the different types of agreement between theory and
experiments considered in Sec. \ref{Criteria} are satisfied. This provides further evidence to that given in Ref.
\onlinecite{Ohtsubo_15} for the interplay of one dimensionality and finite-range interactions playing an important
role in the one-electron spectral properties of the metallic states in Bi/InSb(001).
\acknowledgements
J. M. P. C. acknowledges the late Adilet Imambekov for discussions that were helpful in producing
the theoretical part of the paper. Y. O. and S.-i. K. thank Patrick Le F\`evre, Fran\c{c}ois Bertran, and
Amina Taleb-Ibrahimi for illuminating discussions on obtaining and analyzing the ARPES dataset. J. M. P. C. would like to
thank Boston University's Condensed Matter Theory Visitors Program for support and the hospitality of MIT.
J. M. P. C. and T. \v{C}. acknowledge the support from Funda\c{c}\~ao para a Ci\^encia e
Tecnologia (FCT) through the Grants UID/FIS/04650/2013,
PTDC/FIS-MAC/29291/2017, and SFRH/BSAB/142925/2018, NSAF U1530401 and computational resources
from Computational Science Research Center (CSRC) (Beijing), and the National Science Foundation of
China (NSFC) Grant 11650110443.
\section{Introduction}
\label{Introduction}
\noindent In recent years there has been considerable progress in perturbative String Theory (including D-brane systems). Achievements have been made in S-matrix calculations involving Ramond-Ramond massless states and open superstrings \cite{Ehsan1, Garousi1, Ehsan2, Ehsan3, Garousi2, Ehsan4}\footnote{There have been various recent results about mixed open/closed superstring computations, to all $\alpha'$ order, by E. Hatefi \cite{Ehsan5}.}, the closed and the open sector of the superstring low energy effective lagrangian \cite{Garousi3, Garousi4, Barreiro0}, higher loop calculations of closed and open superstring scattering \cite{Gomez1, Green0, Mafra0}, the Mellin correspondence between supergravity and superstring amplitudes \cite{Stieberger1, Stieberger2} and a deeper understanding of the $\alpha'$ expansion of tree level open and closed superstrings \cite{Schlotterer1, Broedel2, Broedel1, Stieberger4}, among other things. \\
\noindent One of the very important results that has been acheived is Mafra-Schlotterer-Stieberger (MSS) formula for the (tree level) $N$-point scattering of nonabelian massless open superstrings \cite{Mafra1}:
\begin{eqnarray}
A(1, \ldots , N) & = & \sum_{\sigma_N \varepsilon S_{N-3}} F^{\{\sigma_N\}}(\alpha') \ A_{SYM}(1,\{ 2_{\sigma}, 3_{\sigma}, \ldots, (N-2)_{\sigma} \}, N-1, N) \ ,
\label{MSS}
\end{eqnarray}
where $A(1, \ldots , N)$ is the subamplitude and
\begin{eqnarray}
{\cal A}_N & = & i (2 \pi)^{10} \delta^{(10)} (k_1 + \ldots +k_N ) \ \biggl[ \ \mbox{tr}(\lambda^{a_1} \lambda^{a_2} \ldots \lambda^{a_N}) \ A(1, 2, \ldots, N) +
\left( \begin{array}{c}
\mbox{non-cyclic} \\
\mbox{permutations}
\end{array}
\right) \ \biggr] \ \ \ \
\label{N-point}
\end{eqnarray}
is the complete $N$-point open superstring (tree level) scattering amplitude (where $N \geq 3$).\\
\noindent In (\ref{MSS}) $\sigma_N=\{ 2_{\sigma}, 3_{\sigma}, \ldots, (N-2)_{\sigma} \}$ denotes a permutation of the indices $\{ 2,3, \ldots, (N-2) \}$ and the $F^{\{\sigma_N\}}(\alpha')$'s are the momentum factors which contain the $\alpha'$ information of the scattering amplitude\footnote{Besides $\alpha'$, the $F^{\{\sigma_N\}}(\alpha')$'s depend on the $k_i \cdot k_j$ scalar products, which can be written in terms of the independent Mandelstam variables of this $N$-point process.} (see eqs. (\ref{MSS2}) and (\ref{MSS3})). There are $(N-3)!$ terms in the sum in (\ref{MSS}). \\
\noindent Formula (\ref{MSS}) considers all possible scattering processes involving external gauge bosons and their fermionic massless superpartners. In it, $A_{SYM}$ denotes the tree level scattering subamplitude of this process in $D=10$ Super Yang-Mills theory and $A$ (on the left hand-side) is the corresponding scattering subamplitude in Open Superstring Theory. In (\ref{N-point}) the $\lambda^a$'s are the gauge group matrices in the adjoint representation (see eq. (\ref{adjointm})). \\
\noindent Formula (\ref{MSS}) has the merit of clarifying that the kinematics of the open superstring $N$-point amplitude, at any $\alpha'$ order, is governed by the Super Yang-Mills kinematics, that is, from the low energy theory. It also has the virtue of identifying an explicit general integral formula, valid for any $N \geq 3$, for all the $F^{\{\sigma_N\}}(\alpha')$ momentum factors (see eq.(\ref{MSS2})).\\
\noindent During the past and the present year quite nontrivial results have been obtained for the $\alpha'$ expansion of the $F^{\{\sigma_N\}}(\alpha')$ momentum factors in eq.(\ref{MSS}). These factors are disk worldsheet integrals which can be evaluated, but only in terms of non-elementary functions, so, besides the cases of $N=3$ and $N=4$, their $\alpha'$ expansion is nontrivial to compute\footnote{They are $(N-3)$-fold multiple integrals (see eq.(\ref{MSS2})). In the case of the 5-point amplitude one finds the $_{3}F_{2}$ Hypergeometric function (see, for example, \cite{Kitazawa1} and \cite{Brandt1}); in the case of $6$-point amplitudes one finds a double series of $_{4}F_{3}$ Hypergeometric functions (see, for example, \cite{Oprisa1} and \cite{Hemily1}); and for $N \geq 7$ one has to deal with even more complicated expressions. In all these cases the coefficients of the $\alpha'$ expansion are given in terms of Harmonic (or Euler-Zagier) sums and/or Polylogarithmic integrals: all of these can nowadays be calculated (see, for example, refs. \cite{Vermaseren1} and \cite{Remiddi1}), but the calculations required to find them grow enormously with the $\alpha'$ order.}. \\
\noindent Besides improving the $\alpha'$ order of past expansions \cite{Paiva1, Barreiro1, Oprisa1} by various orders \cite{Boels1, Schlotterer1, Broedel2}, a conjecture has been established for the general form of the $\alpha'$ expansion of the $F^{\{\sigma_N\}}(\alpha')$'s (for arbitrary $N$ and arbitrarily high order in $\alpha'$ \cite{Schlotterer1})\footnote{See Appendix \ref{N5 and higher} for this explicit form.}$^{,}$\footnote{The mentioned conjecture of ref. \cite{Schlotterer1} has been checked in \cite{Broedel2} up to ${\alpha'}^{21}$ order for $N=5$, up to ${\alpha'}^{9}$ order for $N=6$ and up to ${\alpha'}^{7}$ order for $N=7$.} and, also, a recursive formula (in $N$) for these $\alpha'$ expansions has been proved in \cite{Broedel1} (by means of a generalization of the work in ref. \cite{Drummond1})\footnote{The mathematical framework used in refs. \cite{Drummond1, Broedel2} is related to, but not exactly the same as, that of ref. \cite{Schlotterer1}.}.\\
\noindent All these results have as a common starting point the MSS formula, given in eq.(\ref{MSS}). This formula was derived using the Pure Spinor formalism \cite{Berkovits1}, which is manifestly supersymmetric right from the beginning. Formula (\ref{MSS}) is the final (and simple) result of an elaborate study involving pure spinor superspace and its cohomology structure \cite{Mafra2}, first applied in the calculation of Super Yang-Mills amplitudes and afterwards extended to the corresponding calculations in Open Superstring Theory \cite{Mafra1}. \\
\noindent The purpose of the present work is two-fold. On one side, we show that it is possible to arrive at MSS's formula in (\ref{MSS}) working only in the Ramond-Neveu-Schwarz (RNS) formalism \cite{Friedan1}\footnote{The subtlety is that we do not deal with fermion vertex operators at all. We only work with the $N$-point gauge boson amplitude, $A_b(1, \ldots, N)$, given in eq.(\ref{ANfermionic}), which comes from only gauge boson vertex operators in the RNS formalism \cite{Schwarz1}. See section \ref{For gauge bosons and massless fermions} for more details about this.}. For the moment, we only have a proof for $3 \leq N \leq 7$ within this approach, but we think that a deeper understanding of our procedure can, eventually, lead to the proof for arbitrary $N$. On the other side, we shed light on how the $\alpha'$ expansion of the $F^{\{\sigma_N\}}(\alpha')$ momentum factors in (\ref{MSS}) can be obtained, order by order in $\alpha'$, not by explicitly computing the coefficients of the expansion (which are given in terms of multiple zeta values (MZV's) \cite{Brown1, Mafra3, Boels1, Schlotterer1}), but by demanding tree level unitarity of the amplitudes and (presumably) using only the $\alpha'$ expansion of the $5$-point amplitude\footnote{This statement seems to be in contradiction with the final part of the abstract of this work, where we said that we would only use the $\alpha'$ information from the $4$-point amplitude.\\
The clarification is that, for the calculation purposes of this work, in which at most we have obtained ${\alpha'}^6$ order results, it is enough to use the $4$-point amplitude information. As explained in section \ref{Using}, the $5$-point amplitude information becomes important from ${\alpha'}^8$ order onwards and, as argued in that section, we claim that the $5$-point amplitude alone is enough to find all the $\alpha'$ terms of the $F^{\{\sigma_N\}}(\alpha')$ momentum factors.}.\\
\noindent The basic tool on which our findings are supported is the `revisited S-matrix approach', introduced by us last year \cite{Barreiro0}. This method was initially proposed as an efficient tool to determine, order by order in $\alpha'$, the bosonic part of the open superstring low energy effective lagrangian (OSLEEL) but, as we will see in the present work, it has a direct counterpart in the determination of the scattering amplitudes of the theory, allowing us to arrive at (\ref{MSS}) and also at the $\alpha'$ expanded version of it. This method is intrinsically kinematic and supersymmetric, although it is not {\it manifestly} supersymmetric. It deals, first, with the pure external gauge boson interactions and only at the end does it incorporate the interactions between external gauge bosons and their fermionic superpartners.\\
\noindent The kinematics is present right from the beginning, since the main statement of the method has to do with the kinematical structure of the $N$-point amplitudes of gauge bosons: in Open Superstring Theory they should not contain $(\zeta \cdot k)^N$ terms, at any $\alpha'$ order \cite{Barreiro0}. Supersymmetry is also present right from the beginning, since (we believe that) it is the reason for the absence of those kinematical terms in the amplitudes\footnote{It is well known that the $(\zeta \cdot k)^N$ terms are indeed present in the case of $3$ and $4$-point amplitudes of massless states in Open Bosonic String Theory \cite{Schwarz1, Kawai1} and, from the general integral formula for the $N$-point amplitude \cite{Schwarz1}, it is also believed that they are present in this general case. Therefore, it is quite reasonable to conjecture that the absence of the $(\zeta \cdot k)^N$ terms, in the case of supersymmetric open strings, is a consequence of Spacetime Supersymmetry.}.\\
\noindent The structure of this work is as follows. In section \ref{Brief review} we give a brief review of the revisited S-matrix method. There we explain why we claim that demanding the absence of $(\zeta \cdot k)^N$ terms in the $N$-point amplitude of gauge bosons and, based on the conjecture of ref. \cite{Schlotterer1} and the main result of ref.\cite{Broedel1}, using only the $\alpha'$ expansion of the $5$-point amplitude, is enough information to find the complete (bosonic part of the) OSLEEL. We then claim that, by similar arguments, there should be a direct analog of this situation from the perspective of the scattering amplitudes, that is, at a given order in $\alpha'$, knowing the $5$-point amplitude is enough information to know any higher $N$-point amplitude\footnote{See footnote $8$ in the previous page.}.\\
\noindent We begin the elaboration of these ideas in section \ref{Kinematical} by examining the space of $N$-point kinematic expressions which are on-shell gauge invariant and which do not contain $(\zeta \cdot k)^N$ terms. We find that this space is $(N-3)!$ dimensional (at least for $3 \leq N \leq 7$). We then check that a BCJ basis of Yang-Mills subamplitudes (see eq.(\ref{basis})) \cite{Bern1} can indeed be chosen as a basis for this space. In light of this important kinematical result, the determination of the explicit expressions of the open superstring subamplitudes (for gauge bosons) and of the BCJ relations themselves becomes simply a linear algebra problem: we know a vector of the space (that is, a given subamplitude for which we want an expression) and all that is left to do is to find the components of this vector with respect to the basis of the vector space. We do this calculation in section \ref{Closed} for the open superstring subamplitudes (for gauge bosons), arriving precisely at the bosonic part of (\ref{MSS}), and in Appendix \ref{BCJ} for the BCJ relations themselves (arriving at the same result as refs. \cite{Bjerrum1, Stieberger3}). Then, in the last part of section \ref{Closed}, we briefly explain why, once we have found the gauge boson amplitudes in this manifestly gauge invariant way, the corresponding amplitudes involving fermions are immediate, thus leading to the MSS result in eq. (\ref{MSS}) (this time considering there all possible scattering processes involving external gauge bosons and their supersymmetric partners).\\
\noindent In section \ref{Finding the} we apply our revisited S-matrix method to find the $\alpha'$ expansion of the $F^{\{\sigma_N\}}(\alpha')$ momentum factors of (\ref{MSS}), in the cases $N=5,6,7$\footnote{Because of computer limitations, we have done the explicit calculations only up to ${\alpha'}^6$ order in the first two cases and up to ${\alpha'}^4$ order in the last case. But it is clear that the method can be used to obtain higher order $\alpha'$ contributions. See more details in section \ref{Finding the}.}. In order to achieve this, besides the requirements of the revisited S-matrix method, we demand cyclicity and tree level unitarity to be obeyed by the subamplitudes.\\
\noindent We end in section \ref{Summary} by summarizing our results and conclusions.\\
\noindent Throughout this work all scattering amplitudes are tree level ones.\\
\noindent Since, at some points, we have needed to deal with huge calculations and formulas, we have kept only the simplest ones in the main body of this work and have left the more extensive ones for the appendices\footnote{Moreover, there are some extremely long expressions that we have preferred not to include in the text of this work, but only to attach as `txt' files in the version that we have submitted to the hep-th arXiv.}. The latter usually do not offer any new conceptual insight, but they have played an important role in checking our main statements.
\section{Review of the revisited S-matrix method}
\label{Brief review}
\subsection{Finding terms of the OSLEEL in an efficient way}
\label{Finding terms}
\noindent Let ${\cal L}_{\mbox{eff}}$ be the general low energy effective lagrangian (LEEL) for nonabelian gauge bosons in (either bosonic or supersymmetric) Open String Theory. It has the following form:
\begin{eqnarray}
{\cal L}_{\mbox{eff}} & = & \frac{1}{g^2} \ \mbox{tr} \biggl[ \ -\frac{1}{4} F^2 + (2 \alpha') F^3 + (2 \alpha')^2 F^4 + (2 \alpha')^3 ( F^5 + D^2 F^4) +
\nonumber \\
&& \hphantom{ \frac{1}{g^2} \ \mbox{tr} \biggl[ \ \ } (2 \alpha')^4 ( F^6 + D^2 F^5 + D^4 F^4) + {\cal O}\big( (2 \alpha')^5 \big) \ \biggr] \ .
\label{Leff}
\end{eqnarray}
Each of the $F^n$ and the $D^{2p}F^q$ terms in (\ref{Leff}) is an abbreviation of the sum of different contractions of Lorentz indexes for that sort of term. For example, ``$F^4$'' really denotes an abbreviation of $ ( b_{1}F^{\mu\lambda}F^{\nu}_{\ \lambda}F_{\mu}^{\ \rho}F_{\nu\rho} +b_{2}F^{\mu}_{\ \lambda}F_{\nu}^{\ \lambda}F^{\nu\rho}F_{\mu\rho} +b_{3}F^{\mu\nu}F_{\mu\nu}F^{\lambda\rho}F_{\lambda\rho}
+b_{4}F^{\mu\nu}F^{\lambda\rho}F_{\mu\nu}F_{\lambda\rho} )$ \cite{Tseytlin1}, where $\{ b_1, b_2, b_3, b_4\}$ are the coefficients to be determined. \\
\noindent In the second column of the table in (\ref{table2})\footnote{This table has been taken from eq.(3.4) of ref. \cite{Barreiro0}.} we have written the number of coefficients that the general LEEL contains at the first few orders in ${\alpha'}$\footnote{The terms which are being taken into account in the LEEL are only the ones which remain invariant under field redefinitions.}. These coefficients are the ones that the conventional S-matrix approach usually finds by computing the open string $N$-point amplitudes (from $N=4$ up to $N=p+2$, at least) at ${\alpha'}^p$ order.
\begin{eqnarray}
\begin{tabular}{c|c|c}
$p$ & Dimension of the general basis & Dimension of the constrained basis\\
& at order ${\alpha'}^p$ & at order ${\alpha'}^p$ \\
\hline
1 & 1 & 0 \\
2 & 4 & 1 \\
3 & 13 & 1 \\
4 & 96 & 0 \\
\vdots & \vdots & \vdots
\label{table2}
\end{tabular}
\end{eqnarray}
\noindent In the third column of table (\ref{table2}) we have written the number of coefficients (which is extremely small!) that the revisited S-matrix approach really needs to find in order to determine the OSLEEL at a given $\alpha'$ order. The reason for the smallness of these numbers (in relation to the corresponding ones in the second column) is that in the revisited S-matrix method (only applicable to the case of the supersymmetric string) the $N$-point amplitudes satisfy the constraint \cite{Barreiro0}
\begin{eqnarray}
\mbox{absence of} \ (\zeta \cdot k)^N \ \mbox{terms} \ .
\label{absence}
\end{eqnarray}
This constraint implies further linear restrictions that the $b_j$ coefficients of the LEEL in (\ref{Leff}) should satisfy (see section 4 of \cite{Barreiro0} for more details about these restrictions). These restrictions are so strong that only a small number of coefficients remain free: this number is precisely the `dimension of the constrained basis' appearing in the third column of table (\ref{table2}). The same type of constraints had correctly been found about ten years earlier by Koerber and Sevrin, using the method of BPS configurations (which is not directly a String Theory one) \cite{Koerber1}. In \cite{Barreiro0} we pointed out that the (probable) reason for the constraint in (\ref{absence}) is Spacetime Supersymmetry. \\
\noindent Due to the highly constrained form that the OSLEEL adopts after demanding the requirement in eq. (\ref{absence}), in order to determine its ${\alpha'}^p$ order terms a $(p+2)$-point amplitude calculation in Open Superstring Theory is no longer needed (as it is in the conventional S-matrix method): much lower $N$-point amplitudes (expanded at ${\alpha'}^p$ order) are expected to be enough for this purpose. In fact, in \cite{Barreiro0} we saw that the $\alpha'$ expansion of the $4$-point momentum factor (given in eq.(\ref{expansionGamma})) is enough to determine the OSLEEL explicitly, at least up to ${\cal O}({\alpha'}^4)$ terms.\\
\subsection{Using the $4$ and the $5$-point $\alpha'$ information to obtain $N$-point information}
\label{Using}
\noindent So, a main idea that arises naturally from the revisited S-matrix method is that the $\alpha'$ expansion of the $4$-point momentum factor (whose coefficients are completely known in terms of integer zeta values, at any $\alpha'$ order, see eq. (\ref{formula1})) is enough information to obtain completely the $\alpha'$ expansion of higher $N$-point amplitudes, at least up to a certain order in $\alpha'$. For example, from the calculations that we did in \cite{Barreiro0} it is clear that the $5$ and the $6$-point amplitudes (and $any$ higher $N$-point amplitude) can be completely determined at least up to ${\alpha'}^4$ order, because we found the OSLEEL explicitly up to that order, bypassing $5$ and $6$-point worldsheet integral calculations. \\
\noindent In fact, in \cite{Barreiro0} we raised the possibility that the OSLEEL could be determined at any $\alpha'$ order by means of the revisited S-matrix method plus the known $\alpha'$ expansion of the $4$-point momentum factor (see eqs. (\ref{formula1}) and (\ref{expansionGamma})), but this can hardly happen since there are higher order coefficients given in terms of multiple zeta values (MZV's)\footnote{See Appendix \ref{Multiple} for an extremely short review on MZV's.}, like $\zeta(3,5)$, $\zeta(3,7)$, $\zeta(3,3,5)$, etc., which already show up in the $\alpha'$ expansion of the $5$-point amplitude (at ${\alpha'}^8$, ${\alpha'}^{10}$ and ${\alpha'}^{11}$ order, respectively \cite{Schlotterer1, Boels1})\footnote{We thank Rutger Boels for calling our attention to this point.}. These coefficients are not expected to be given by linear combinations, with rational coefficients, of products of $\zeta(n)$'s\footnote{In this work we will refer to these peculiar MZV's as $non \ trivial$ MZV's, in opposition to the $trivial$ ones, which are known to be given as rational linear combinations of products of $\zeta(n)$'s \cite{Blumlein1}. The $non \ trivial$ MZV's that we will be referring to are $only$ the ones that appear in the MZV basis of this last reference. A few examples of $trivial$ MZV's can be found in formulas (\ref{zeta12}), (\ref{zeta22}) and (\ref{zeta14}) of Appendix \ref{Momentum factors}.}\cite{Brown0}, so they are not present in the $\alpha'$ expansion of the $4$-point amplitude.\\
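A small consistency check of the orders just quoted: these $\alpha'$ expansions are uniformly transcendental, so the coefficient of ${\alpha'}^w$ carries MZV's of weight $w$, and a weight-$w$ MZV can first show up at order ${\alpha'}^w$. The following minimal Python sketch (the dictionary below merely restates the orders quoted above) makes this explicit:

```python
def mzv_weight(indices):
    """Weight of zeta(n1, ..., nk) is n1 + ... + nk."""
    return sum(indices)

# Non-trivial MZVs quoted in the text, with the alpha' order at which
# each first enters the 5-point expansion:
first_alpha_order = {(3, 5): 8, (3, 7): 10, (3, 3, 5): 11}

# Uniform transcendentality: first appearance matches the weight.
for indices, order in first_alpha_order.items():
    assert mzv_weight(indices) == order
```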
\noindent Since ${\alpha'}^8$ is the first order at which these $non \ trivial$ MZV's arise, we do not have a proof, but we believe that up to ${\alpha'}^7$ order any $N$-point amplitude can be found by means of the revisited S-matrix method plus only the known $\alpha'$ expansion of the $4$-point momentum factor, $F^{\{2\}}$, (see eqs. (\ref{A1234-3}), (\ref{formula1}) and (\ref{expansionGamma})). \\
\noindent From ${\alpha'}^8$ order onwards we expect the $4$-point amplitude to give only partial (but still important) information for the determination of the OSLEEL terms\footnote{Curiously enough, from eqs.(\ref{F})-(\ref{Qns}) of Appendix \ref{Momentum factors} we see that at ${\alpha'}^9$ order the $5$-point amplitude does not contain any $non \ trivial$ MZV's and, therefore, we suspect that the OSLEEL can eventually be completely determined at this $\alpha'$ order by using only $4$-point amplitude information.}. For example, the ${\alpha'}^p D^{2p-4}F^4$ terms of the OSLEEL (for $p=2, 3, 4, \ldots $) are still going to be $completely$ determined by the $4$-point amplitude \cite{DeRoo1,Chandia1}. \\
\noindent One might ask if new MZV's, besides the ones that already appear in the $5$-point amplitude, will eventually appear at some (high enough) $\alpha'$ order. This would be a signal that, from this $\alpha'$ order onwards, $\alpha'$ information from a $6$-point (or eventually higher $N$-point) amplitude would be required in order to determine those $\alpha'$ terms of the OSLEEL. But this does not seem to be the case. A remarkable observation was made in ref.\cite{Schlotterer1}, claiming that the coefficients of the $\alpha'$ expansion of the $F^{\{\sigma_N\}}(\alpha')$ momentum factors, for $N>5$, are always the same as the ones that appear for $N=5$: what changes (from one $N$ to another) is only the kinematic polynomial which multiplies each coefficient (see Appendix \ref{N5 and higher} for more details)\footnote{For increasing $N$, the number of independent Mandelstam variables grows and, therefore, the kinematic polynomial, which depends on them, gets bigger.}. This conjecture is also consistent with the recent discovery that the $F^{\{\sigma_N\}}(\alpha')$'s can be obtained iteratively (in $N$), to any order in $\alpha'$, from one and the same Drinfeld associator (which is a generating series for the MZV's) \cite{Broedel1}. So, the MZV's that already appear in the $\alpha'$ expansion of the $5$-point amplitude will be the same ones that appear for higher $N$-point amplitudes\footnote{Shortly speaking, the reason why not all the MZV's of the $\alpha'$ expansion of the $5$-point function appear in the $4$-point case (see eqs.(\ref{formula1}) and (\ref{expansionGamma})) is that the kinematic polynomial that would multiply them is $zero$ in the case of $N=4$. See refs. \cite{Schlotterer1} and \cite{Broedel1} for more details of this explanation.}.\\
So we are left with the conjecture that our revisited S-matrix method, plus the $\alpha'$ expansion of the $5$-point amplitude, is enough information to find $all$ the $\alpha'$ corrections coming from Open Superstring Theory to the Yang-Mills lagrangian\footnote{Even the ${\alpha'}^p D^{2p-4}F^4$ terms (for $any$ $p=2, 3, 4, \ldots $), that were found in \cite{Chandia1} using the $4$-point amplitude, could in principle be determined using $only$ the $5$-point amplitude $\alpha'$ expansion.}.\\
\noindent In spite of this conjecture, for the calculations that we do in this work, which at most go to ${\alpha'}^6$ order (see section \ref{Finding the}), we will still use only the known $4$-point $\alpha'$ expansion (see Appendix \ref{Gamma factor}).
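For concreteness, the $4$-point momentum factor referred to above is of Euler Beta type, $\Gamma(1-s)\Gamma(1-t)/\Gamma(1-s-t)$ in suitably normalized Mandelstam variables (the normalization used here is an assumption of this sketch; eqs. (\ref{formula1}) and (\ref{expansionGamma}) may differ by signs and factors of $2\alpha'$), and the coefficients of its expansion involve only ordinary $\zeta(n)$'s. A hedged, standard-library Python sketch checking the exponentiated form of this expansion numerically:

```python
import math

def zeta(n, K=1000):
    """Riemann zeta(n) for integer n >= 2: direct sum plus an
    Euler-Maclaurin tail estimate (accurate to ~1e-13 here)."""
    head = sum(k ** (-n) for k in range(1, K + 1))
    tail = K ** (1 - n) / (n - 1) - K ** (-n) / 2 + n * K ** (-n - 1) / 12
    return head + tail

def momentum_factor(s, t):
    """Euler Beta-type 4-point momentum factor (assumed normalization:
    dimensionless Mandelstam variables s, t)."""
    return math.gamma(1 - s) * math.gamma(1 - t) / math.gamma(1 - s - t)

def momentum_factor_expansion(s, t, nmax=14):
    """exp( sum_{n>=2} zeta(n)/n * (s^n + t^n - (s+t)^n) ).
    The Euler-gamma terms of log Gamma(1-x) cancel between the
    numerator and denominator, leaving only zeta values."""
    log_f = sum(zeta(n) / n * (s ** n + t ** n - (s + t) ** n)
                for n in range(2, nmax + 1))
    return math.exp(log_f)
```

For small $s,t$ the truncated expansion agrees with the exact Gamma-function expression to roughly $10^{-12}$, illustrating that only integer zeta values arise at $4$ points.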
\section{Basis for open superstring and Yang-Mills subamplitudes}
\label{Kinematical}
In this section we prove that, from the perspective of their kinematical structure, open superstring and Yang-Mills (tree level) subamplitudes belong to the same space of kinematical $N$-point expressions (where $N \ge 3$) and that a possible basis for this space is given by the $(N-3)!$ Yang-Mills subamplitudes\footnote{The arguments that we present in this work have only been proved for $N=3,4,5,6,7$, but we suspect that they can be generalized to arbitrary $N$.}
\begin{eqnarray}
{\cal B}_N & = & \Big \{ \ A_{YM}(1,\{ 2_{\sigma}, 3_{\sigma}, \ldots, (N-2)_{\sigma} \}, N-1, N) \ , \ \ \ \sigma \ \varepsilon \ S_{N-3} \ \Big \} \ ,
\label{basis}
\end{eqnarray}
where $\{ 2_{\sigma}, 3_{\sigma}, \ldots, (N-2)_{\sigma} \}$ denotes a $\sigma$ permutation of the indexes $\{ 2,3, \ldots, (N-2) \}$\footnote{The kinematical proof that we present does not depend on the spacetime dimension D, as also happens with the BCJ relations \cite{Bern1}, for example. In spite of this, evidence has been found in ref. \cite{Nastase1} that the basis of this space might indeed depend on D.\\
It will turn out that this subtlety can be kept aside from the kinematical analysis that we do. The implications of our results will still be valid for $any$ D, as happens with the BCJ relations.\\
See the introductory paragraph in Appendix \ref{Linear} for a few more details about this.}. \\
In order to achieve this, in subsection \ref{Some general} we review some important facts about the structure of gauge boson scattering subamplitudes. Then, based on this background, in the following subsections we argue, case by case (from $N=3$ up to $N=7$), that the set of subamplitudes in (\ref{basis}) is indeed a basis for the corresponding space\footnote{The details of the computations that support our claim, when $N=5,6,7$, are given in Appendix \ref{N-point basis}.}.
\subsection{Some general facts about the structure of scattering amplitudes of gauge bosons}
\label{Some general}
If we compare open superstring and Yang-Mills (tree level) $N$-point subamplitudes for gauge bosons, both theories being treated in the Lorentz gauge, from the point of view of the structure of their kinematical terms they have in common the constraint in eq. (\ref{absence}), namely, the absence of $(\zeta \cdot k)^N$ terms. In the first case this has recently been emphasized in \cite{Barreiro0}, together with the strong implications that it has for determining the bosonic terms of the low energy effective lagrangian of the theory in a very simplified way; in the second case, the claim in (\ref{absence}) can easily be confirmed by considering the Feynman rules in the construction of tree level scattering amplitudes.\\
So, let us consider the space of all scalar $N$-point kinematical expressions constructed with the polarizations $\zeta_i$ and the momenta $k_i$ of $N$ external gauge bosons in a nonabelian theory (like Open Superstring Theory or Yang-Mills theory, for example). The momenta and polarizations should satisfy:
\begin{eqnarray}
\label{momentum}
\mbox{Momentum conservation:} & \ \ & k_1^{\mu} + k_2^{\mu} + \ldots + k_N^{\mu} = 0 \ .\\
\label{mass-shell}
\mbox{Mass-shell condition:} & \ \ & k_1^2 = k_2^2 = \ldots = k_N^2 = 0 \ . \\
\label{transversality}
\mbox{Transversality (Lorentz gauge) condition:} & \ \ & \zeta_i \cdot k_i = 0 \ , (i = 1, \ldots , N) \ .
\end{eqnarray}
Let us denote this space by ${\cal V}_N$. We further restrict ${\cal V}_N$ such that its elements $T(1,2,\ldots, N)$ obey the following conditions: \\
\begin{eqnarray}
\begin{array}{rl}
\mbox{1.} & \mbox{They are multilinear in the polarizations $\zeta_i$ ($i=1,2, \ldots, N$).} \\
\mbox{2.} & \mbox{They do not contain $(\zeta \cdot k)^N$ terms.} \\
\mbox{3.} & \mbox{(On-shell) Gauge invariance: whenever any $\zeta_i \rightarrow k_i$ ($i=1,2, \ldots, N$) then $T(1,2,\ldots, N)$} \\
& \mbox{becomes $zero$.} \\
\end{array}
\label{requirements}
\end{eqnarray}
\noindent These three requirements are simply properties of tree level gauge boson subamplitudes in Open Superstring and Yang-Mills theories, and we want the elements of ${\cal V}_N$ to satisfy them as well. Since these elements, the $T(1,2,\ldots, N)$'s, are Lorentz scalars, they can only be constructed from linear combinations of \footnote{Some care must be taken with the expression ``linear combination'' because, as we will see immediately after (\ref{general-form}), the coefficients that multiply the terms appearing in it will, in general, depend on the momenta $k^{\mu}_i$.}
\begin{eqnarray}
(\zeta \cdot \zeta)^1 (\zeta \cdot k)^{N-2} , (\zeta \cdot \zeta)^2 (\zeta \cdot k)^{N-4}, \ldots , (\zeta \cdot \zeta)^{[N/2]} (\zeta \cdot k)^{N - 2 [N/2]} \ ,
\label{general-form}
\end{eqnarray}
where $[p]$ denotes the integer part of $p$ and $N \ge 3$. \\
Besides the terms excluded in (\ref{absence}) and the ones that we have mentioned in (\ref{general-form}), there are no further possibilities of kinematical terms that can be constructed from the polarizations $\zeta_i$ and the momenta $k_i$ of the external gauge bosons. The only place where some extra dependence on the momenta could be considered is in the scalar coefficients which, in $T(1,2,\ldots, N)$, multiply the kinematical terms in (\ref{general-form}): these coefficients are allowed to be given in terms of the $k_i \cdot k_j$ factors (or equivalently, in terms of the Mandelstam variables) and, possibly, in terms of a length scale (for example $\sqrt{\alpha'}$, in the case of String Theory). \\
For example, in the case of $N=3$, the Yang-Mills subamplitude contains only $ (\zeta \cdot \zeta)^1 (\zeta \cdot k)^1$ terms. It is given by \cite{Schwarz1}
\begin{eqnarray}
A_{YM}(1,2,3) & = & 2 g \ \big[ \ (\zeta_1 \cdot k_2)(\zeta_2
\cdot \zeta_3) + (\zeta_2 \cdot k_3)(\zeta_3 \cdot \zeta_1) + (\zeta_3
\cdot k_1)(\zeta_1 \cdot \zeta_2) \ \big] \ .
\label{A3YM}
\end{eqnarray}
It is easy to see that it is an element of ${\cal V}_3$. In subsection \ref{N3} we will prove that the only elements of ${\cal V}_3$ are multiples of $A_{YM}(1,2,3)$ (see eq.(\ref{3point3})), so the fact that $A_{YM}(1,2,3) \ \varepsilon \ {\cal V}_3$ will turn out to be immediate. \\
In order to count all the possible independent kinematical terms that can be taken from the list in (\ref{general-form}) to construct the expression of an element $T(1,2,\ldots , N) \ \varepsilon \ {\cal V}_N$, respecting the kinematic conditions in (\ref{momentum}), (\ref{mass-shell}) and (\ref{transversality}) and also the requirement in (\ref{absence}), we first need to analyze the $(\zeta \cdot k)$ terms. In principle, for each $i$ there are $N$ possible $(\zeta_i \cdot k_j)$ terms (because $j$ runs from $1$ to $N$). But the transversality condition (\ref{transversality}), together with momentum conservation (\ref{momentum}), implies that for each $i$ ($=1, 2, \ldots, N$) we have:
\begin{eqnarray}
\sum_{j \neq i}^N (\zeta_i \cdot k_j) = 0 \ .
\label{restrict1}
\end{eqnarray}
So, at the end, for each $i$ there are only $(N-2)$ independent $(\zeta_i \cdot k_j)$ terms.\\
With this information we are in a position to count the different independent terms, specified by the structure in eq. (\ref{general-form}), that in principle are allowed to appear in $T(1,2, \ldots, N)$. This leads us to the following table for the number of independent allowed kinematical terms of $T(1,2, \ldots, N)$:
\begin{eqnarray}
\begin{tabular}{c|c|c}
\hline
$N$ & Element of ${\cal V}_N$ & Number of independent allowed kinematical terms \\
\hline
3 & $T(1,2,3)$ & 3 $ (\zeta \cdot \zeta)^1 (\zeta \cdot k)^1$ terms \\
4 & $T(1,2,3,4)$ & 24 $ (\zeta \cdot \zeta)^1 (\zeta \cdot k)^2 $ , 3 $(\zeta \cdot \zeta)^2$ terms \\
5 & $T(1,2,3,4,5)$ & 270 $ (\zeta \cdot \zeta)^1 (\zeta \cdot k)^3 $ , 45 $ (\zeta \cdot \zeta)^2 (\zeta \cdot k)^1$ terms \\
6 & $T(1,2,3,4,5,6)$ & 3840 $ (\zeta \cdot \zeta)^1 (\zeta \cdot k)^4$ , 720 $ (\zeta \cdot \zeta)^2 (\zeta \cdot k)^2$ , 15 $(\zeta \cdot \zeta)^3$ terms \\
7 & $T(1,2,3,4,5,6,7)$ & 65625 $ (\zeta \cdot \zeta)^1(\zeta \cdot k)^5$ , 13125 $ (\zeta \cdot \zeta)^2 (\zeta \cdot k)^3$ , 525 $ (\zeta \cdot \zeta)^3 (\zeta \cdot k)^1$ terms
\label{table1} \\
\hline
\end{tabular}
\end{eqnarray}
\vspace{0.5cm}
\noindent It is not difficult to prove that the number of independent $(\zeta \cdot \zeta)^j (\zeta \cdot k)^{N-2j} $ terms (where $j= 1, 2, \ldots, [N/2]$), for an arbitrary $N$, is given by
\begin{eqnarray}
d_{N,j} & = & \frac{N (N-1) (N-2) \ldots (N-(2j-1))}{2^j \ j!} \ (N-2)^{N-2j} \ .
\label{number}
\end{eqnarray}
This is the formula that has been used to compute the corresponding number of kinematical terms in table (\ref{table1})\footnote{The number of independent kinematical terms mentioned in (\ref{table1}) should not be confused with the number of independent \underline{coefficients} that will appear multiplying each of those terms. It might even happen that some of these coefficients will become $zero$ after demanding the requirement of on-shell gauge invariance and, consequently, the corresponding kinematical terms referred to in table (\ref{table1}) will not be present in the final expression of $T(1, \ldots, N)$.}. We have only specified in this table the cases $N=3,4,5,6,7$, because these are the ones that we will consider in the present work. \\
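Formula (\ref{number}) is straightforward to evaluate; the short Python sketch below reproduces every entry of table (\ref{table1}).

```python
from math import factorial, prod

def d(N, j):
    """Number of independent (zeta.zeta)^j (zeta.k)^(N-2j) terms,
    eq. (number): the falling product N(N-1)...(N-(2j-1)) has 2j
    factors; the division by 2^j j! is always exact."""
    falling = prod(N - m for m in range(2 * j))
    return falling // (2 ** j * factorial(j)) * (N - 2) ** (N - 2 * j)
```

For instance, `d(7, 1)` gives 65625 and `d(6, 3)` gives 15, matching the last rows of the table.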
In the next subsections, when writing all the allowed independent kinematical terms in $T(1,\ldots, N)$, in particular when choosing the $(N-2)$ independent $(\zeta_i \cdot k_j)$ terms, our choice will be the following list of $N (N-2)$ terms:
\begin{eqnarray}
\begin{array}{cccc}
\Big \{ (\zeta_1 \cdot k_2) \ , & (\zeta_1 \cdot k_3) \ , & \ldots \ , & (\zeta_1 \cdot k_{N-1}) \ , \hphantom{ \ \Big \} }\\
\hphantom{ \Big \{ } (\zeta_2 \cdot k_1) \ , & (\zeta_2 \cdot k_3) \ , & \ldots \ , & (\zeta_2 \cdot k_{N-1}) \ , \hphantom{ \ \Big \} } \\
\hphantom{ \Big \{ } \vdots & \vdots & \vdots & \vdots \\
\hphantom{ \Big \{ } (\zeta_{N-1} \cdot k_1) \ , & (\zeta_{N-1} \cdot k_2) \ , & \ldots \ , & (\zeta_{N-1} \cdot k_{N-2}) \ , \hphantom{ \ \Big \} } \\
\hphantom{ \Big \{ } (\zeta_{N} \cdot k_1) \ , & (\zeta_{N} \cdot k_2) \ , & \ldots \ , & (\zeta_{N} \cdot k_{N-2}) \ \Big \} \ ,
\end{array}
\label{zetakterms}
\end{eqnarray}
that is, for each $i$ ($=1, 2, \ldots, N-1$) we have used the restriction in (\ref{restrict1}) to eliminate $(\zeta_i \cdot k_N)$ in terms of the remaining $(N-2)$ $(\zeta_i \cdot k_j)$ terms ($j=1, 2, \ldots, N-1$, with $j \neq i$) and, in the last line of (\ref{zetakterms}), we have eliminated $(\zeta_N \cdot k_{N-1})$ in terms of the remaining $(\zeta_N \cdot k_j)$ terms ($j=1, 2, \ldots, N-2$).\\
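The elimination just described is easy to automate; a small Python sketch enumerates the labels $(i,j)$ of the chosen list (\ref{zetakterms}) and confirms that there are $N(N-2)$ of them.

```python
def zeta_k_labels(N):
    """Labels (i, j) of the independent (zeta_i . k_j) factors kept in
    the chosen list: for each i < N drop j = i (transversality) and
    j = N (eliminated via the sum rule); for i = N drop j = N and
    j = N - 1."""
    labels = []
    for i in range(1, N + 1):
        dropped = {i, N} if i < N else {N, N - 1}
        labels += [(i, j) for j in range(1, N + 1) if j not in dropped]
    return labels
```

For $N=3$ this returns `[(1, 2), (2, 1), (3, 1)]`, precisely the three factors appearing in eq. (\ref{3point}) below.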
\subsection{Finding a basis for ${\cal V}_N$, when $3 \leq N \leq 7$}
\label{Finding a}
\noindent In this subsection we explicitly present the derivation of a basis for ${\cal V}_N$ only in the cases $N=3$ and $N=4$, which involve simple calculations and illustrate the procedure used to arrive at that basis. For $N=5,6,7$ we just mention the final result and leave the details of the main calculations to Appendix \ref{N-point basis}.\\
\subsubsection{Case of $N=3$}
\label{N3}
It is well known that in the case of $3$-point amplitudes of massless states, momentum conservation and the mass-shell condition (\ref{mass-shell}) imply that the momenta obey
the relations
\begin{eqnarray}
k_i \cdot k_j & = & 0 \ (i,j = 1,2,3) \ .
\label{kikj0}
\end{eqnarray}
According to the table in (\ref{table1}), $T(1,2,3)$ has only three independent $(\zeta \cdot \zeta)^1 (\zeta \cdot k)^1$ terms, which, following our prescription in (\ref{zetakterms}), leads to:
\begin{eqnarray}
T(1,2,3) & = & \lambda_1 (\zeta_1 \cdot k_2)(\zeta_2 \cdot \zeta_3) + \lambda_2 (\zeta_2 \cdot k_1)(\zeta_3 \cdot \zeta_1)
+ \lambda_3 (\zeta_3 \cdot k_1)(\zeta_1 \cdot \zeta_2) \ .
\label{3point}
\end{eqnarray}
Demanding on-shell gauge invariance for gauge boson 1, $T(1,2,3)$ should become $0$ when $\zeta_1 \rightarrow k_1$. Using the mass-shell condition (\ref{mass-shell}) for $k_1$ and the condition in (\ref{kikj0}), we find
\begin{eqnarray}
T(1,2,3)\Big|_{\zeta_1 = k_1} = 0 \nonumber \\
\Rightarrow \lambda_2 (\zeta_2 \cdot k_1)(\zeta_3 \cdot k_1) + \lambda_3 (\zeta_3 \cdot k_1)(\zeta_2 \cdot k_1) = 0 \nonumber \\
\Rightarrow (\lambda_2 + \lambda_3) (\zeta_2 \cdot k_1)(\zeta_3 \cdot k_1) = 0 \nonumber \\
\Rightarrow \lambda_2 = -\lambda_3 \ .
\label{3-point-on-shell-1}
\end{eqnarray}
Similarly, demanding on-shell gauge invariance for gauge bosons 2 and 3 we arrive, respectively, at the conditions
\begin{eqnarray}
\label{3-point-on-shell-2}
\lambda_3 = \lambda_1 \ , \\
\label{3-point-on-shell-3}
\lambda_1 = -\lambda_2 \ .
\end{eqnarray}
To arrive at (\ref{3-point-on-shell-2}) and (\ref{3-point-on-shell-3}) we have needed to use, in accordance with eq. (\ref{zetakterms}), that $(\zeta_3 \cdot k_2) = - (\zeta_3 \cdot k_1)$, $(\zeta_1 \cdot k_3) = - (\zeta_1 \cdot k_2)$ and $(\zeta_2 \cdot k_3) = - (\zeta_2 \cdot k_1)$.\\
The three conditions $\{ (\ref{3-point-on-shell-1}), (\ref{3-point-on-shell-2}), (\ref{3-point-on-shell-3}) \}$ are linearly dependent. From them we conclude that $\lambda_1 = - \lambda_2 = \lambda_3 $ and, for convenience, we choose this common value to be $2 g \lambda$. So, finally, (\ref{3point}) can be written as\footnote{In eq.(\ref{3point2}) we have substituted back the relation $(\zeta_2 \cdot k_1) = - (\zeta_2 \cdot k_3)$ in order to obtain the familiar expression of $A_{YM}(1,2,3)$.}
\begin{eqnarray}
T(1,2,3) & = & \lambda \cdot 2 g \Big[ (\zeta_1 \cdot k_2)(\zeta_2 \cdot \zeta_3) + (\zeta_2 \cdot k_3)(\zeta_3 \cdot \zeta_1)
+ (\zeta_3 \cdot k_1)(\zeta_1 \cdot \zeta_2) \Big] \ ,
\label{3point2}
\end{eqnarray}
where $\lambda$ is an arbitrary dimensionless factor and $g$ is the open string coupling constant (which agrees with the one from the Yang-Mills Lagrangian). \\
Eq. (\ref{3point2}) can equivalently be written as
\begin{eqnarray}
T(1,2,3) & = & \lambda \cdot A_{YM}(1,2,3) \ ,
\label{3point3}
\end{eqnarray}
where $A_{YM}(1,2,3)$ has been given in eq. (\ref{A3YM}).\\
So ${\cal B}_3= \{ A_{YM}(1,2,3) \}$ is a basis for ${\cal V}_3$ and the dimension of ${\cal V}_3$ is 1 ($dim({\cal V}_3)=1$). \\
In this case, the constant $\lambda$ cannot have any momentum dependence due to the conditions (\ref{kikj0}), so it can only be a numerical constant. \\
\noindent Two trivial, but immediate, applications of (\ref{3point3}) are the following. The first is the case of $T(1,2,3)$ being any of the Yang-Mills 3-point subamplitudes, for which this equation becomes the cyclic or the reflection property (or a combination of both), with $\lambda=1$ or $\lambda=-1$. The second is the case of $T(1,2,3)$ being the open superstring subamplitude, $A(1,2,3)$, for which eq.(\ref{3point3}) implies that $A(1,2,3)$ receives no $\alpha'$ corrections and that $\lambda=1$.
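\noindent The substitution rules used in this derivation can be checked numerically by assigning sample values to the independent invariants of (\ref{zetakterms}). The sketch below (plain Python; the variable names and sample values are our own, not from the text) verifies that, with $\lambda_1 = -\lambda_2 = \lambda_3$, the expression (\ref{3point}) vanishes under each of the three substitutions $\zeta_i \rightarrow k_i$:

```python
# sample numerical values for the independent 3-point invariants (our choices);
# on shell k_i.k_j = 0 (eq. (kikj0)), and momentum conservation gives
# (z3.k2) = -(z3.k1), (z1.k3) = -(z1.k2), (z2.k3) = -(z2.k1)
z1k2, z2k1, z3k1 = 1.1, 0.9, 1.3     # the (zeta_i . k_j) invariants
z1z2, z2z3, z3z1 = 0.7, 1.2, 0.5     # the (zeta_i . zeta_j) invariants

def T(z1k2, z2k1, z3k1, z1z2, z2z3, z3z1, l1=2.0, l2=-2.0, l3=2.0):
    # eq. (3point) with lambda_1 = -lambda_2 = lambda_3 = 2 (i.e. g*lambda = 1)
    return l1*z1k2*z2z3 + l2*z2k1*z3z1 + l3*z3k1*z1z2

generic = T(z1k2, z2k1, z3k1, z1z2, z2z3, z3z1)
assert abs(generic) > 1e-6                       # generically nonvanishing

# zeta_1 -> k_1: (z1.k2) -> k1.k2 = 0, (z1.z2) -> (z2.k1), (z3.z1) -> (z3.k1)
g1 = T(0.0, z2k1, z3k1, z2k1, z2z3, z3k1)
# zeta_2 -> k_2: (z2.k1) -> 0, (z1.z2) -> (z1.k2), (z2.z3) -> (z3.k2) = -(z3.k1)
g2 = T(z1k2, 0.0, z3k1, z1k2, -z3k1, z3z1)
# zeta_3 -> k_3: (z3.k1) -> 0, (z2.z3) -> -(z2.k1), (z3.z1) -> -(z1.k2)
g3 = T(z1k2, z2k1, 0.0, z1z2, -z2k1, -z1k2)

assert max(abs(g1), abs(g2), abs(g3)) < 1e-12    # on-shell gauge invariance
```

Each of $g_1$, $g_2$, $g_3$ vanishes identically in the invariants, not just for these sample values, which is exactly the content of eqs. (\ref{3-point-on-shell-1})--(\ref{3-point-on-shell-3}).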
\subsubsection{Case of $N=4$}
\label{N4}
In this case, according to the table in (\ref{table1}), the open superstring 4-point subamplitude has a kinematical expression which consists of twenty-four $(\zeta \cdot \zeta)^1 (\zeta \cdot k)^{2}$ and three $(\zeta \cdot \zeta)^2$ terms, all of them independent. Following the prescription in (\ref{zetakterms}), this leads to:
\begin{eqnarray}
T(1,2,3,4) & = & \lambda_1 (\zeta_1 \cdot \zeta_2) (\zeta_3 \cdot k_1) (\zeta_4 \cdot k_1) + \lambda_2 (\zeta_1 \cdot \zeta_2) (\zeta_3 \cdot k_1) (\zeta_4 \cdot k_2) + \lambda_3 (\zeta_1 \cdot \zeta_2) (\zeta_3 \cdot k_2) (\zeta_4 \cdot k_1) + \nonumber \\
&& \lambda_4 (\zeta_1 \cdot \zeta_2) (\zeta_3 \cdot k_2) (\zeta_4 \cdot k_2) + \lambda_5 (\zeta_1 \cdot \zeta_3) (\zeta_2 \cdot k_1) (\zeta_4 \cdot k_1) + \lambda_6 (\zeta_1 \cdot \zeta_3) (\zeta_2 \cdot k_1) (\zeta_4 \cdot k_2) + \nonumber \\
&& \lambda_7 (\zeta_1 \cdot \zeta_3) (\zeta_2 \cdot k_3) (\zeta_4 \cdot k_1) + \lambda_8 (\zeta_1 \cdot \zeta_3) (\zeta_2 \cdot k_3) (\zeta_4 \cdot k_2) + \lambda_9 (\zeta_1 \cdot \zeta_4) (\zeta_2 \cdot k_1) (\zeta_3 \cdot k_1) + \nonumber \\
&& \lambda_{10} (\zeta_1 \cdot \zeta_4) (\zeta_2 \cdot k_1) (\zeta_3 \cdot k_2) + \lambda_{11} (\zeta_1 \cdot \zeta_4) (\zeta_2 \cdot k_3) (\zeta_3 \cdot k_1) + \lambda_{12} (\zeta_1 \cdot \zeta_4) (\zeta_2 \cdot k_3) (\zeta_3 \cdot k_2) + \nonumber \\
&& \lambda_{13} (\zeta_2 \cdot \zeta_3) (\zeta_1 \cdot k_2) (\zeta_4 \cdot k_1) + \lambda_{14} (\zeta_2 \cdot \zeta_3) (\zeta_1 \cdot k_2) (\zeta_4 \cdot k_2) + \lambda_{15} (\zeta_2 \cdot \zeta_3) (\zeta_1 \cdot k_3) (\zeta_4 \cdot k_1) + \nonumber \\
&& \lambda_{16} (\zeta_2 \cdot \zeta_3) (\zeta_1 \cdot k_3) (\zeta_4 \cdot k_2) + \lambda_{17} (\zeta_2 \cdot \zeta_4) (\zeta_1 \cdot k_2) (\zeta_3 \cdot k_1) + \lambda_{18} (\zeta_2 \cdot \zeta_4) (\zeta_1 \cdot k_2) (\zeta_3 \cdot k_2) + \nonumber \\
&& \lambda_{19} (\zeta_2 \cdot \zeta_4) (\zeta_1 \cdot k_3) (\zeta_3 \cdot k_1) + \lambda_{20} (\zeta_2 \cdot \zeta_4) (\zeta_1 \cdot k_3) (\zeta_3 \cdot k_2) + \lambda_{21} (\zeta_3 \cdot \zeta_4) (\zeta_1 \cdot k_2) (\zeta_2 \cdot k_1) + \nonumber \\
&& \lambda_{22} (\zeta_3 \cdot \zeta_4) (\zeta_1 \cdot k_2) (\zeta_2 \cdot k_3) + \lambda_{23} (\zeta_3 \cdot \zeta_4) (\zeta_1 \cdot k_3) (\zeta_2 \cdot k_1) + \lambda_{24} (\zeta_3 \cdot \zeta_4) (\zeta_1 \cdot k_3) (\zeta_2 \cdot k_3) + \nonumber \\
&& \rho_{1} (\zeta_1 \cdot \zeta_2) (\zeta_3 \cdot \zeta_4) + \rho_{2} (\zeta_1 \cdot \zeta_3) (\zeta_2 \cdot \zeta_4) + \rho_{3} (\zeta_1 \cdot \zeta_4) (\zeta_2 \cdot \zeta_3) \ .
\label{4point}
\end{eqnarray}
In Appendix \ref{Calculations} we find that, demanding on-shell gauge invariance of the subamplitude when $\zeta_1 \rightarrow k_1$, we arrive at the following thirteen linearly independent relations:
\begin{eqnarray}
\lambda_4 = 0 \ , \ \lambda_8=0 \ , \ \lambda_{12}=0 \ , \
\lambda_3 + \lambda_{10}=0 \ ,\ \lambda_{7} +\lambda_{11}=0 \ , \ \lambda_{1} +\lambda_{5}+\lambda_{9}=0 \ ,\ \lambda_{6} +\lambda_{2}=0 \ , \nonumber \\
\ 2 \rho_{1} +\lambda_{21}s-\lambda_{23}(s+t)=0 \ , \
2 \rho_{2} +\lambda_{17}s-\lambda_{19}(s+t)=0 \ , \ 2 \rho_{3} +\lambda_{13}s-\lambda_{15}(s+t)=0 \ , \ \nonumber \\
\lambda_{14}s - \lambda_{16} (s+t) =0 \ , \ \lambda_{18} s - \lambda_{20}(s+t) =0 \ , \ \lambda_{22} s - \lambda_{24}(s+t) =0 \ , \ \hspace{1cm}
\label{4point1}
\end{eqnarray}
where
\begin{eqnarray}
s = (k_1 + k_2)^2 = 2 k_1 \cdot k_2 \ \ \ \mbox{and} \ \ \ t = (k_1 + k_4)^2 = 2 k_1 \cdot k_4
\label{Mandelstam4}
\end{eqnarray}
are two of the three Mandelstam variables that appear in the 4-point scattering\footnote{The other one is $u=(k_1 +k_3)^2 = 2 k_1 \cdot k_3 = -s -t$.}$^{,}$\footnote{Notice that our convention for the $4$-point Mandelstam variables has a different sign than the common one in widely cited references, like \cite{Schwarz1}, \cite{Green1} and \cite{Polchinski1}. We have followed this convention in order to be compatible with the one that we use for higher $N$-point Mandelstam variables. See Appendix \ref{Mandelstam variables}.}.\\
The set of equations in (\ref{4point1}) is the 4-point analog to eq.(\ref{3-point-on-shell-1}), found for the 3-point subamplitude when
$\zeta_1 \rightarrow k_1$.\\
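\noindent That the thirteen relations in (\ref{4point1}) are indeed linearly independent can be confirmed directly: viewing them as a linear system in the $27$ unknowns $(\lambda_1, \ldots, \lambda_{24}, \rho_1, \rho_2, \rho_3)$, the coefficient matrix has rank $13$ for generic $s$ and $t$. A sketch of this check (plain Python over exact rationals; the sample values of $s$ and $t$ and all names are our own):

```python
from fractions import Fraction

s, t = Fraction(2), Fraction(3)     # generic sample values of the Mandelstam variables

# the 27 unknowns are ordered as (lambda_1, ..., lambda_24, rho_1, rho_2, rho_3)
def row(**coeffs):
    r = [Fraction(0)]*27
    for name, c in coeffs.items():
        idx = int(name[1:]) - 1 if name[0] == 'l' else 24 + int(name[1:]) - 1
        r[idx] = Fraction(c)
    return r

# the thirteen relations of eq. (4point1)
rows = [
    row(l4=1), row(l8=1), row(l12=1),
    row(l3=1, l10=1), row(l7=1, l11=1), row(l1=1, l5=1, l9=1), row(l6=1, l2=1),
    row(r1=2, l21=s, l23=-(s + t)),
    row(r2=2, l17=s, l19=-(s + t)),
    row(r3=2, l13=s, l15=-(s + t)),
    row(l14=s, l16=-(s + t)),
    row(l18=s, l20=-(s + t)),
    row(l22=s, l24=-(s + t)),
]

def rank(matrix):
    # exact Gaussian elimination over the rationals
    m = [r[:] for r in matrix]
    rk, col = 0, 0
    while rk < len(m) and col < 27:
        piv = next((i for i in range(rk, len(m)) if m[i][col] != 0), None)
        if piv is None:
            col += 1
            continue
        m[rk], m[piv] = m[piv], m[rk]
        for i in range(len(m)):
            if i != rk and m[i][col] != 0:
                f = m[i][col]/m[rk][col]
                m[i] = [x - f*y for x, y in zip(m[i], m[rk])]
        rk += 1
        col += 1
    return rk

assert rank(rows) == 13    # the thirteen relations are linearly independent
```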
In the same way, demanding on-shell gauge invariance of $T(1,2,3,4)$ when $\zeta_2 \rightarrow k_2$, $\zeta_3 \rightarrow k_3$ and $\zeta_4 \rightarrow k_4$, we arrive at a set of thirteen linearly independent equations in each case. The explicit expression of these additional equations and the details of their solution can be found in Appendix \ref{Calculations}.\\
The important point is that, in the whole set of fifty-two equations that come from demanding on-shell gauge invariance (eqs. (\ref{4point1}), (\ref{4point4}), (\ref{4point5}) and (\ref{4point6})), only half of them are linearly independent as a whole. The solution of this system is given in Appendix \ref{Calculations}, in eq. (\ref{4pointsolution}): it consists of 7 null coefficients ($\lambda_1$, $\lambda_4$, $\lambda_8$, $\lambda_{12}$, $\lambda_{15}$, $\lambda_{19}$ and $\lambda_{21}$) and 20 coefficients which are given in terms of the Mandelstam variables $s$ and $t$ and a $unique$ arbitrary parameter, which, for convenience, we have chosen to be $\lambda_{24}$, written as $4 g^2 \lambda^{\{2\}} /t$ (where $\lambda^{\{2\}}$ is arbitrary). After substituting this solution in (\ref{4point}) we finally arrive at\footnote{Substituting eqs. (\ref{4point1}), (\ref{4point4}), (\ref{4point5}) and (\ref{4point6}), together with $\lambda_{24}=4 g^2 \lambda^{\{2\}} /t$, in eq. (\ref{4point}), leads to an expression which is only on-shell equivalent to the one in eq.(\ref{A1234-1}): it is necessary to use eqs. (\ref{u}) and (\ref{additional}) in order to check the equivalence between both formulas.}
\begin{eqnarray}
T(1,2,3,4) & = & \lambda^{\{2\}} \cdot 8 g^2 \frac{1}{st} \Biggl\{ - \frac{1}{4} \Bigl[ ts(\zeta_1 \cdot \zeta_3)(\zeta_2 \cdot
\zeta_4) + su(\zeta_2 \cdot \zeta_3)(\zeta_1 \cdot \zeta_4) +
ut(\zeta_1 \cdot \zeta_2)(\zeta_3 \cdot \zeta_4) \Bigr]
\nonumber
\\&&\hphantom{ \lambda \cdot 8 g^2 \frac{1}{st} \Biggl\{ } - \frac{1}{2} s \Bigl[ (\zeta_1 \cdot k_4)(\zeta_3 \cdot
k_2)(\zeta_2 \cdot \zeta_4) + (\zeta_2 \cdot k_3)(\zeta_4 \cdot
k_1)(\zeta_1 \cdot \zeta_3) +
\nonumber \\&&
\hphantom{ \lambda \cdot 8 g^2 \frac{1}{st} \Biggl\{ + \frac{1}{2} s \Bigl[}
+(\zeta_1 \cdot k_3)(\zeta_4 \cdot
k_2)(\zeta_2 \cdot \zeta_3)
+ (\zeta_2 \cdot k_4)(\zeta_3 \cdot k_1)(\zeta_1 \cdot \zeta_4)
\Bigr]
\nonumber\\ &&\hphantom{ \lambda \cdot 8 g^2 \frac{1}{st} \Biggl\{ } - \frac{1}{2} t \Bigl[ (\zeta_2 \cdot k_1)(\zeta_4 \cdot
k_3)(\zeta_3 \cdot \zeta_1) + (\zeta_3 \cdot k_4)(\zeta_1 \cdot
k_2)(\zeta_2 \cdot \zeta_4) +
\nonumber \\&& \hphantom{ \lambda \cdot 8 g^2 \frac{1}{st} \Biggl\{ + \frac{1}{2} s \Bigl[}
+ (\zeta_2 \cdot k_4)(\zeta_1 \cdot k_3)(\zeta_3 \cdot \zeta_4) +
(\zeta_3 \cdot k_1)(\zeta_4 \cdot k_2)(\zeta_2 \cdot \zeta_1)
\Bigr]
\nonumber\\ && \hphantom{ \lambda \cdot 8 g^2 \frac{1}{st} \Biggl\{ }
-\frac{1}{2} u \Bigl[ (\zeta_1 \cdot k_2)(\zeta_4 \cdot
k_3)(\zeta_3 \cdot \zeta_2) +
(\zeta_3
\cdot k_4)(\zeta_2 \cdot k_1)(\zeta_1 \cdot \zeta_4) +
\nonumber \\&& \hphantom{ \lambda \cdot 8 g^2 \frac{1}{st} \Biggl\{ + \frac{1}{2} s \Bigl[}
+ (\zeta_1 \cdot k_4)(\zeta_2 \cdot k_3)(\zeta_3 \cdot \zeta_4) +
(\zeta_3 \cdot k_2)(\zeta_4 \cdot k_1)(\zeta_1 \cdot \zeta_2)
\Bigr] \ \Biggr\} \ ,
\label{A1234-1}
\end{eqnarray}
or equivalently,
\begin{eqnarray}
T(1,2,3,4) & = & \lambda^{\{2\}} \cdot A_{YM}(1,\{2\},3,4) \ ,
\label{A1234-2}
\end{eqnarray}
where $A_{YM}(1,\{2\},3,4)=A_{YM}(1,2,3,4)$ is the familiar Yang-Mills 4-point subamplitude\footnote{This explicit expression for $A_{YM}(1,2,3,4)$ can be found in many places in the literature, for example, in section 3 of \cite{Brandt1}. But there are some sign differences due to the different convention that we have used for the $4$-point Mandelstam variables in eq.(\ref{Mandelstam4}).}$^{,}$\footnote{We have used the curly brackets, `$\{ \}$', in the index $2$, just as a reminder of the rule, mentioned in eq.(\ref{basis}), to choose the Yang-Mills subamplitudes of ${\cal B}_N$.}.\\
So ${\cal B}_4= \{ A_{YM}(1,2,3,4) \}$ is a basis for ${\cal V}_4$ and $dim({\cal V}_4)=1$. \\
In (\ref{A1234-2}), $\lambda^{\{2\}}$ is a dimensionless factor which may depend on the dimensionless variables $\alpha' s$ and $\alpha' t$, where, at this point, $\alpha'$ could be any squared length scale (which in the case of String Theory would be the fundamental string constant). $\lambda^{\{2\}}$ corresponds to what is usually called the ``momentum factor''.
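\noindent The on-shell gauge invariance that fixed all the coefficients in (\ref{A1234-1}) can also be checked numerically on explicit massless kinematics. The sketch below (plain Python; the kinematical configuration, the auxiliary vector $q$, the metric signature $(+,-,-,-)$ and all names are our own choices) builds $2 \rightarrow 2$ massless momenta with $k_1+k_2+k_3+k_4=0$ and transverse polarizations, and verifies that the expression in (\ref{A1234-1}) (with $\lambda^{\{2\}} = g = 1$) vanishes when $\zeta_1 \rightarrow k_1$:

```python
import math

def dot(a, b):
    # Minkowski product in signature (+,-,-,-) (an arbitrary choice for this check)
    return a[0]*b[0] - a[1]*b[1] - a[2]*b[2] - a[3]*b[3]

# massless 2 -> 2 kinematics, all momenta incoming: k1 + k2 + k3 + k4 = 0
th = 0.7
k1 = ( 1.0, 0.0, 0.0,  1.0)
k2 = ( 1.0, 0.0, 0.0, -1.0)
k3 = (-1.0, -math.sin(th), 0.0, -math.cos(th))
k4 = (-1.0,  math.sin(th), 0.0,  math.cos(th))

s, t, u = 2*dot(k1, k2), 2*dot(k1, k4), 2*dot(k1, k3)
assert abs(s + t + u) < 1e-12                     # u = -s - t
assert all(abs(dot(k, k)) < 1e-12 for k in (k1, k2, k3, k4))

def transverse(r, k, q=(1.0, 0.3, 0.4, 0.5)):
    # project an arbitrary vector r onto a zeta with zeta.k = 0 (q is auxiliary)
    c = dot(r, k)/dot(q, k)
    return tuple(r[i] - c*q[i] for i in range(4))

z1 = transverse(( 0.3,  0.1, -0.7,  0.2), k1)
z2 = transverse((-0.4,  0.8,  0.1,  0.6), k2)
z3 = transverse(( 0.9, -0.2,  0.5, -0.3), k3)
z4 = transverse(( 0.2,  0.6, -0.1, -0.8), k4)

def T(z1, z2, z3, z4):
    # the expression of eq. (A1234-1), with lambda^{2} = g = 1
    return (8.0/(s*t))*(
        -0.25*(t*s*dot(z1,z3)*dot(z2,z4) + s*u*dot(z2,z3)*dot(z1,z4)
               + u*t*dot(z1,z2)*dot(z3,z4))
        - 0.5*s*(dot(z1,k4)*dot(z3,k2)*dot(z2,z4) + dot(z2,k3)*dot(z4,k1)*dot(z1,z3)
                 + dot(z1,k3)*dot(z4,k2)*dot(z2,z3) + dot(z2,k4)*dot(z3,k1)*dot(z1,z4))
        - 0.5*t*(dot(z2,k1)*dot(z4,k3)*dot(z3,z1) + dot(z3,k4)*dot(z1,k2)*dot(z2,z4)
                 + dot(z2,k4)*dot(z1,k3)*dot(z3,z4) + dot(z3,k1)*dot(z4,k2)*dot(z2,z1))
        - 0.5*u*(dot(z1,k2)*dot(z4,k3)*dot(z3,z2) + dot(z3,k4)*dot(z2,k1)*dot(z1,z4)
                 + dot(z1,k4)*dot(z2,k3)*dot(z3,z4) + dot(z3,k2)*dot(z4,k1)*dot(z1,z2)))

generic = T(z1, z2, z3, z4)
ward = T(k1, z2, z3, z4)   # zeta_1 -> k_1 (trivially transverse, since k1.k1 = 0)
assert abs(ward) < 1e-9 * max(1.0, abs(generic))  # on-shell gauge invariance
```

The cancellation uses only the mass-shell conditions, momentum conservation and $\zeta_i \cdot k_i = 0$, so it holds for any choice of angle, reference vector and polarizations.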
\subsubsection{Case of $N=5, 6, 7$}
\label{N567}
The procedure for $N>4$ is exactly the same one presented in subsections \ref{N3} and \ref{N4}, but for the corresponding $N$-point subamplitudes, whose general kinematical structure we have mentioned in table (\ref{table1}). Since for these cases the calculations are much more involved than the ones that we have already done for $N=4$, we leave their details for Appendix \ref{N-point basis}. In all these cases it is possible to arrive at an $(N-3)!$ dimensional basis for ${\cal V_N}$ ($dim({\cal V}_N)=(N-3)!$) and it is always possible to choose this basis in terms of only Yang-Mills subamplitudes, in particular the one indicated at the beginning of this section, in eq. (\ref{basis}). That set of subamplitudes contains precisely $(N-3)!$ elements. It constitutes one of the possible bases that have appeared in the literature for the space of Yang-Mills subamplitudes \cite{Bern1, Bjerrum1, Stieberger1} and also for the space of open superstring subamplitudes \cite{Mafra1}, which is what this whole section aims to prove. \\
In Appendix \ref{N-point basis} we see that in the cases for $N>4$ our calculations have not led directly to the basis in (\ref{basis}), but only after having verified that the dimension of ${\cal V_N}$ is $(N-3)!$ and that the set of amplitudes mentioned in (\ref{basis}) is linearly independent\footnote{Our proposal of the basis in eq. (\ref{basis}) is based on the previously known results of the BCJ relations for $N=5,6,7$ \cite{Bern1, Bjerrum1, Stieberger1}. Except for the case of $N=4$, without these previously known results it would have been quite difficult to guess this basis and, as seen in Appendix \ref{N-point basis}, we would have had only $(N-3)!$ known (and very long) $N$-point kinematical expressions for which we would have no interpretation at all.}.\\
So, summarizing, with the calculations and proofs that we have done in Appendix \ref{N-point basis} we can write an arbitrary element of ${\cal V}_5$, ${\cal V}_6$ and ${\cal V}_7$, respectively, as\footnote{In eqs. (\ref{5point3}), (\ref{6point}) and (\ref{7point}) we are following the same sort of notation that the authors of \cite{Mafra1} used to write the linear combinations of the subamplitudes, that is, the momentum factors are labelled by a superscript denoting the corresponding $\sigma_N$ permutation.}
\begin{eqnarray}
\label{5point3}
T(1,2,3,4,5) & = &\lambda^{\{23\}} A_{YM}(1,\{2,3\},4,5) + \lambda^{\{32\}} A_{YM}(1,\{3,2\},4,5) \ ,
\end{eqnarray}
\begin{multline}
T(1,2,3,4,5,6) = \\
\begin{split}
= \ & \lambda^{\{234\}} A_{YM}(1,\{2,3,4\},5,6) + \lambda^{\{324\}} A_{YM}(1,\{3,2,4\},5,6) + \lambda^{\{243\}} A_{YM}(1,\{2,4,3\},5,6) + \\
& \lambda^{\{342\}} A_{YM}(1,\{3,4,2\},5,6) + \lambda^{\{423\}} A_{YM}(1,\{4,2,3\},5,6) + \lambda^{\{432\}} A_{YM}(1,\{4,3,2\},5,6)
\end{split}
\label{6point}
\end{multline}
and
\begin{eqnarray}
\label{7point}
T(1,2,3,4,5,6,7) & = & \lambda^{\{2345\}} A_{YM}(1,\{2,3,4,5\},6,7) + \lambda^{\{2354\}} A_{YM}(1,\{2,3,5,4\},6,7) + \\
& &\lambda^{\{2435\}} A_{YM}(1,\{2,4,3,5\},6,7) + \ldots + \lambda^{\{5432\}} A_{YM}(1,\{5,4,3,2\},6,7)
\ , \nonumber
\end{eqnarray}
\noindent where the $\lambda^{\{\sigma_N\}}$'s are the corresponding momentum factors, which may depend on the dimensionless Mandelstam variables ($\alpha' s_i$ and $\alpha' t_j$) of each $N$-point scattering process of gauge bosons.\\
An independent set of Mandelstam variables for $N=5,6,7$ is given in Appendix \ref{N-point basis}. \\
\noindent In the right-hand side of (\ref{7point}) there are $4!=24$ terms being summed, in accordance with eq.(\ref{basis}) for $N=7$. \\
The same as in eq. (\ref{A1234-2}), in eqs. (\ref{5point3}), (\ref{6point}) and (\ref{7point}) we have inserted curly brackets, `$\{ \}$', just as a reminder of the rule, mentioned in eq.(\ref{basis}), to choose the Yang-Mills subamplitudes of ${\cal B}_5$, ${\cal B}_6$ and ${\cal B}_7$, respectively.\\
\subsection{Finding the components of an element of ${\cal V}_N$ with respect to the basis ${\cal B}_N$}
\label{Finding the components}
\noindent Once we have accepted that the set ${\cal B}_N$ is indeed a basis for ${\cal V}_N$, at least for $3 \leq N \leq 7$, the next step consists in finding the components of an element of ${\cal V}_N$ with respect to ${\cal B}_N$. We will present here a procedure which we expect to be valid for any $N \geq 3$ (even for $N > 7$)\footnote{At least in this subsection we will work with the hypothesis that ${\cal B}_N$, as specified in (\ref{basis}), is a basis of ${\cal V}_N$ for $any$ $N \geq 3$.}.\\
\noindent Let $T(1, \ldots, N) \in {\cal V}_N$. To find the components of $T(1, \ldots, N)$ with respect to the basis ${\cal B}_N$ we have to find the momentum factors, $\lambda^{\{\sigma_N\}}$'s, such that
\begin{eqnarray}
T(1, \ldots, N) & = & \sum_{ \sigma_N \in S_{N-3} } \lambda^{\{\sigma_N\}} \ A_{YM}(1,\{ 2_{\sigma}, 3_{\sigma}, \ldots, (N-2)_{\sigma} \}, N-1, N) \ ,
\label{T}
\end{eqnarray}
where $\sigma_N=\{ 2_{\sigma}, 3_{\sigma}, \ldots, (N-2)_{\sigma} \}$ denotes the same permutation of indices $\{ 2,3, \ldots, (N-2) \}$ that we referred to in eq.(\ref{basis}).\\
\noindent The natural (but tedious!) way to find the $\lambda^{\{\sigma_N\}}$'s is by writing down the expression of $T(1, \ldots, N)$, and of each Yang-Mills subamplitude in ${\cal B}_N$, in terms of the kinematical terms listed in eq. (\ref{general-form}), with the convention (\ref{zetakterms}) for the $(\zeta \cdot k)$ terms, for example. Then, a linear system for the $\lambda^{\{\sigma_N\}}$'s arises from demanding that the coefficient of each
$(\zeta \cdot \zeta)^j (\zeta \cdot k)^{N-2j}$ term (where $j=1, \ldots, [N/2]$), in both sides of (\ref{T}), is the same. Since this linear system is overdetermined, it is not necessary to consider $all$ the equations of it in order to find the $(N-3)!$ $\lambda^{\{\sigma_N\}}$'s. \\
\noindent For small values of $N$ it is easy to see that considering in (\ref{T}) the kinematical terms with the smallest number of $(\zeta \cdot k)$ factors (that is, the ones for $j=[N/2]$) provides enough information to find the $\lambda^{\{\sigma_N\}}$'s. Consider, for example, the cases of $N=3,4,5$. From the table in eq.(\ref{table1}) we see that the $3$-point amplitude contains $3$ $(\zeta \cdot \zeta)^1 (\zeta \cdot k)^{1}$ terms, the $4$-point amplitude contains $3$ $(\zeta \cdot \zeta)^2$ terms and the $5$-point amplitude contains $45$ $(\zeta \cdot \zeta)^2 (\zeta \cdot k)^{1}$ terms. It is clear, then, that these particular kinematical terms provide enough information in (\ref{T}) to find the $\lambda^{\{\sigma_N\}}$'s, since for $N=3$ and $N=4$ there is only one component and for $N=5$ there are two components to determine.\\
\noindent But, for sufficiently large $N$\footnote{Consider $N \geq 12$, for example.}, the number of kinematical terms of the type mentioned above (the ones for $j=[N/2]$) is less than $(N-3)!$, so considering only those terms will not provide enough information to find $all$ the $\lambda^{\{\sigma_N\}}$'s.\\
\noindent Since, at this point, we are concerned with a strategy to find the components of $T(1, \ldots, N)$ for an arbitrary $N$, our proposal will be to consider the kinematical terms for $j=1$, namely, the $(\zeta \cdot \zeta)^1 (\zeta \cdot k)^{N-2}$ terms. Those are the ones, among all the possible types of kinematical terms in (\ref{general-form}), which appear in the greatest number in the most general expression for $T(1, \ldots, N)$\footnote{It is not difficult to see, by considering formula (\ref{number}), that the number of independent $(\zeta \cdot \zeta)^1 (\zeta \cdot k)^{N-2}$ terms is greater than $(N-3)!$.}.\\
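\noindent The counting behind these statements is easy to reproduce: choosing which $2j$ polarizations are contracted among themselves gives $\binom{N}{2j}(2j-1)!!$ pairings, and each of the remaining $N-2j$ polarizations can be contracted with $N-2$ independent momenta, according to (\ref{zetakterms}). The following sketch (plain Python; the counting rule as written here is our reconstruction, consistent with all the counts quoted in this subsection) checks those counts and the two claims about $(N-3)!$:

```python
from math import comb, factorial

def double_factorial(n):
    # (2j-1)!! = 1*3*5*...*(2j-1)
    out = 1
    while n > 1:
        out *= n
        n -= 2
    return out

def n_terms(N, j):
    # choose the 2j polarizations contracted among themselves and pair them up,
    # then contract each remaining polarization with one of the N-2 independent momenta
    return comb(N, 2*j) * double_factorial(2*j - 1) * (N - 2)**(N - 2*j)

# counts quoted in the text for the (zeta.zeta)^j (zeta.k)^{N-2j} terms
assert n_terms(3, 1) == 3     # three (z.z)(z.k) terms for N = 3
assert n_terms(4, 1) == 24    # twenty-four (z.z)(z.k)^2 terms for N = 4
assert n_terms(4, 2) == 3     # three (z.z)^2 terms for N = 4
assert n_terms(5, 2) == 45    # forty-five (z.z)^2 (z.k) terms for N = 5

# the j = 1 count always exceeds (N-3)!, while for N >= 12 the j = [N/2] count does not
assert all(n_terms(N, 1) > factorial(N - 3) for N in range(4, 16))
assert n_terms(12, 6) < factorial(12 - 3)
```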
\noindent So, in the next two sections, when we look for the momentum factors that are present in the BCJ relations and in the open superstring formula, we will consider the $(\zeta \cdot \zeta)^1 (\zeta \cdot k)^{N-2}$ terms in our kinematical analysis.
\section{Closed expression for the $N$-point disk amplitude using the RNS formalism}
\label{Closed}
\noindent An immediate and natural application of the analysis done in subsection \ref{Finding the components} is to find the momentum factors in the case that $T(1, \ldots, N)$ is {\it any} of the possible Yang-Mills or open superstring $N$-point subamplitudes, because they are gauge invariant and the $(\zeta \cdot k)^N$ terms are absent in them\footnote{In the case of Yang-Mills subamplitudes, it was already mentioned in the beginning of subsection \ref{Some general} that they do not contain $(\zeta \cdot k)^N$ terms.}. In the first case the result becomes one of the BCJ relations and in the second case it becomes a closed formula for the open superstring subamplitudes. We do the exercise for the BCJ relations in Appendix \ref{BCJ} and in this section we do the corresponding calculation for the open superstring, arriving at MSS's result, in eq.(\ref{MSS}), by means of the RNS formalism (for $3 \leq N \leq 7$). \\
\noindent We will first derive the MSS formula in the case of pure gauge boson scattering in subsection \ref{For gauge bosons only} and then, in subsection \ref{For gauge bosons and massless fermions}, we will quickly see that, once the closed formula has been found for gauge bosons, there is no need to deal with fermion vertex operators (at least for tree level amplitudes) in order to find the scattering amplitudes involving fermions.\\
\subsection{For gauge bosons only}
\label{For gauge bosons only}
\noindent Using the RNS formalism, it is known that the $N$-point subamplitude for gauge bosons in Open Superstring Theory is given by the following integral formula \cite{Green1}\footnote{In eq.(\ref{ANfermionic}) we have used an index {\it b} in the $N$-point subamplitude as a reminder that it corresponds to the scattering process involving only bosons.}:
\begin{eqnarray}
A_b(1, 2, \ldots,N) & = & 2 \frac{g^{N-2}}{(2 \alpha')^2}
(x_{N-1}-x_1)(x_{N}-x_1) \
\int_0^{x_{N-1}} dx_{N-2} \int_0^{x_{N-2}} dx_{N-3} \ \dots \int_0^{x_3} dx_2 \ \times \nonumber \\
&& {} \times \int d \theta_1 \ldots d \theta_{N-2} \prod_{p<q}^N (
x_q - x_p - \theta_q \theta_p )^{2 \alpha' k_p \cdot k_q}\times \int d \phi_1 \ldots d \phi_N
\nonumber \\
& &
\times \ \mbox{exp} \left( \sum_{i \neq j}^N
\frac{ (2 \alpha')^1 (\theta_j-\theta_i) \phi_j (\zeta_j \cdot k_i) -1/2 \ (2 \alpha')^1 \phi_j \phi_i (\zeta_j \cdot \zeta_i) }{x_j-x_i-\theta_j \theta_i} \right) \ .
\label{ANfermionic}
\end{eqnarray}
\noindent In this formula, the $\theta_i$'s and the $\phi_j$'s are Grassmann variables, while the $x_k$'s are common real variables, where $x_1 < x_2 < \ldots < x_N$. Briefly speaking, the result in eq.(\ref{ANfermionic}) has been obtained by averaging, over the ground state of the theory, the product of $N$ gauge boson vertex operators, localized at positions $x_1$, $x_2$, $\ldots$, $x_N$. The residual symmetry of the integrand can be gauge fixed, for example, by demanding $\{ x_1=0$, $x_{N-1}=1$, $x_N = + \infty\}$ and $\{ \theta_{N-1} =\theta_N=0 \}$ \footnote{In (\ref{ANfermionic}) we have already kept $x_1$, $x_{N-1}$, $x_N$, $\theta_{N-1}$ and $\theta_N$ fixed, but we have not yet chosen the peculiar values mentioned in the text. See section 7.3 of ref. \cite{Green1}, for more details.}.\\
\noindent In this section we will prove that, in the case of $T(1, \ldots, N)=A_b(1, 2, \ldots,N)$, given in eq.(\ref{ANfermionic}), the momentum factors of eq.(\ref{T}) are given precisely by the ones appearing in the MSS formula, eq.(\ref{MSS}). The integral formula that Mafra, Schlotterer and Stieberger find for them is \cite{Mafra1}
\begin{eqnarray}
F^{\{23 \ldots N-2\}}(\alpha') & = & \int_0^1 dx_{N-2} \int_0^{x_{N-2}} dx_{N-3} \ \ldots \int_0^{x_3} dx_{2} \ \biggl( \prod_{i>j \geq 1}^{N-1} (x_i - x_j)^{2 \alpha' k_i \cdot k_j} \biggr) \times \nonumber \\
&& \hphantom{ \int_0^1 dx_{N-2} \int_0^{x_{N-2}} dx_{N-3} \ \ldots \int_0^{x_3} \ \ } \times \biggl\{ \ \prod_{p=2}^{N-2} \ \sum_{q=1}^{p-1} \Big(\frac{ \ 2 \alpha' k_p \cdot k_q \ }{x_p - x_q}\Big) \ \biggr\} \ ,
\label{MSS2}
\end{eqnarray}
where $x_1=0$ and $x_{N-1}=1$\footnote{In the expression in (\ref{MSS2}) $x_N$ has already been taken to $+\infty$: that is why it does not appear in it.}.\\
\noindent Formula (\ref{MSS2}) is the one for the momentum factor which in eq.(\ref{MSS}) multiplies the subamplitude $A_{YM}(1, \{ 2, 3, \ldots, N-2 \}, N-1, N)$. The MSS prescription for the remaining $F^{\{ \sigma_N \}}(\alpha')$ momentum factors consists in interchanging the indices $\{ 2, 3, \ldots, N-2 \}$, according to the $\sigma_N$ permutation, {\it only} in the curly brackets of the right hand-side of eq.(\ref{MSS2}). This interchange of indices should be done in {\it both} the $k_j$ momenta and the $x_j$ variables inside the curly brackets. This will become more clear in the case-by-case study that we will consider in the following subsections.\\
\noindent Before going into the derivation of the momentum factors we have two remarks:
\begin{enumerate}
\item As they stand, formulas (\ref{ANfermionic}) and (\ref{MSS2}) are not applicable to the case of $N=3$. Since it is very well known that in this case $A_b(1,2,3)=A_{YM}(1,2,3)$, the $3$-point momentum factor is simply {\it defined} as being $1$.
\item There is an equivalent expression that Mafra, Schlotterer and Stieberger give for the momentum factor in eq.(\ref{MSS2}), for $N \geq 5$, which is obtained by integrating it by parts \cite{Mafra3}:
\begin{eqnarray}
F^{\{23 \ldots N-2\}}(\alpha') & = & \int_0^1 dx_{N-2} \int_0^{x_{N-2}} dx_{N-3} \ \ldots \int_0^{x_3} dx_{2} \ \biggl( \prod_{i>j \geq 1}^{N-1} (x_i - x_j)^{2 \alpha' k_i \cdot k_j} \biggr) \times \nonumber \\
&& \times \biggl\{ \ \prod_{p=2}^{[N/2]} \ \sum_{q=1}^{p-1} \Big(\frac{ \ 2 \alpha' k_p \cdot k_q \ }{x_p - x_q}\Big) \ \biggr\} \
\biggl\{ \ \prod_{p=[N/2]+1}^{N-2} \ \sum_{q=p+1}^{N-1} \Big(\frac{ \ 2 \alpha' k_p \cdot k_q \ }{x_p - x_q}\Big) \ \biggr\} \ , \ \ \ \
\label{MSS3}
\end{eqnarray}
\end{enumerate}
\noindent In the following subsections we will see that we reproduce formula (\ref{MSS2}) in the case of $N=4$ and formula (\ref{MSS3}) in the cases of $N=5,6,7$.\\
\noindent In the case of $N=4$ we will present the derivation with enough details to see the consistency of the relations that arise when we consider the $(\zeta \cdot \zeta)^1 (\zeta \cdot k)^{N-2}$ terms of the subamplitudes. For $N = 5, 6, 7$ the procedure will be the same one, but we will not present so many details because the open superstring and the Yang-Mills subamplitudes become too large. For $N=5$ we will explain how to arrive at the known expression of the two momentum factors (eq.(\ref{MSS3})), together with some evidence of the self-consistency of the calculations, and for $N=6,7$ we will just explain how to arrive at the corresponding momentum factors.
\subsubsection{Case of $N=4$}
\label{N4-3}
\noindent In this case the relation (\ref{A1234-2}) guarantees that, choosing $T(1,2,3,4)=A_b(1,2,3,4)$, it is possible to write down
\begin{eqnarray}
A_b(1,2,3,4) & = & F^{\{2\}}(\alpha') \ A_{YM}(1, 2,3,4) \ ,
\label{A1234-3}
\end{eqnarray}
for a certain momentum factor $F^{\{2\}}(\alpha')$ that we want to find.\\
\noindent $A_{YM}(1, 2,3,4)$ has already been given in (\ref{A1234-1}) and the expression for $A_b(1,2,3,4)$ is not difficult to obtain from (\ref{ANfermionic}) for $N=4$ (after expanding the exponential and integrating over the two Grassmann $\theta_i$'s and the four Grassmann $\phi_j$'s). \\
\noindent Following the procedure proposed in subsection \ref{Finding the components}, after writing all $(\zeta \cdot k)$ terms in the basis given in (\ref{zetakterms}) and equating the $(\zeta \cdot \zeta)^1 (\zeta \cdot k)^2$ terms of both sides of (\ref{A1234-3}), we arrive at
\begin{multline}
2 g^2 \ \biggl\{ \ (\zeta_1 \cdot \zeta_2) \Big [ \ - (4 \alpha') I^{[4]}_1 (\zeta_3 \cdot k_1) (\zeta_4 \cdot k_2) + (4 \alpha') I^{[4]}_3 (\zeta_3 \cdot k_2) (\zeta_4 \cdot k_1) \Big] \ + \hspace{4cm} \\
(\zeta_1 \cdot \zeta_3) \Big [ (4 \alpha') I^{[4]}_1 (\zeta_2 \cdot k_1) (\zeta_4 \cdot k_1) + (4 \alpha') I^{[4]}_1 (\zeta_2 \cdot k_1) (\zeta_4 \cdot k_2) - (4 \alpha') I^{[4]}_2 (\zeta_2 \cdot k_3) (\zeta_4 \cdot k_1) \ \Big] \ + \\
(\zeta_1 \cdot \zeta_4) \Big [ \ - (4 \alpha') I^{[4]}_1 (\zeta_2 \cdot k_1) (\zeta_3 \cdot k_1) - (4 \alpha') I^{[4]}_3 (\zeta_2 \cdot k_1) (\zeta_3 \cdot k_2) + (4 \alpha') I^{[4]}_2 (\zeta_2 \cdot k_3) (\zeta_3 \cdot k_1) \ \Big] \ + \\
(\zeta_2 \cdot \zeta_3) \Big [ - (4 \alpha') I^{[4]}_3 (\zeta_1 \cdot k_2) (\zeta_4 \cdot k_1) - (4 \alpha') I^{[4]}_3 (\zeta_1 \cdot k_2) (\zeta_4 \cdot k_2) - (4 \alpha') I^{[4]}_2 (\zeta_1 \cdot k_3) (\zeta_4 \cdot k_2) \ \Big] \ + \\
(\zeta_2 \cdot \zeta_4) \Big [ \ (4 \alpha') I^{[4]}_1 (\zeta_1 \cdot k_2) (\zeta_3 \cdot k_1) + (4 \alpha') I^{[4]}_3 (\zeta_1 \cdot k_2) (\zeta_3 \cdot k_2) + (4 \alpha') I^{[4]}_2 (\zeta_1 \cdot k_3) (\zeta_3 \cdot k_2) \ \Big] \ + \\
(\zeta_3 \cdot \zeta_4) \Big [ - (4 \alpha') I^{[4]}_3 (\zeta_1 \cdot k_2) (\zeta_2 \cdot k_3) + (4 \alpha') I^{[4]}_1 (\zeta_1 \cdot k_3) (\zeta_2 \cdot k_1) - (4 \alpha') I^{[4]}_2 (\zeta_1 \cdot k_3) (\zeta_2 \cdot k_3) \ \Big] \ \biggr\} = \\
\begin{split}
&= 2 g^2 \ F^{\{2\}}(\alpha') \ \biggl\{ \ \ (\zeta_1 \cdot \zeta_2) \Big [ - \frac{4}{s} (\zeta_3 \cdot k_1) (\zeta_4 \cdot k_2) + \Big( \frac{4}{s} + \frac{4}{t} \Big) (\zeta_3 \cdot k_2) (\zeta_4 \cdot k_1) \ \Big] \ + \\
&(\zeta_1 \cdot \zeta_3) \Big [ \ \frac{4}{s} (\zeta_2 \cdot k_1) (\zeta_4 \cdot k_1) + \frac{4}{s} (\zeta_2 \cdot k_1) (\zeta_4 \cdot k_2) - \frac{4}{t} (\zeta_2 \cdot k_3) (\zeta_4 \cdot k_1) \ \Big] \ + \\
& (\zeta_1 \cdot \zeta_4) \Big [ - \frac{4}{s} (\zeta_2 \cdot k_1) (\zeta_3 \cdot k_1) - \Big( \frac{4}{s} + \frac{4}{t} \Big) (\zeta_2 \cdot k_1) (\zeta_3 \cdot k_2) + \frac{4}{t} (\zeta_2 \cdot k_3) (\zeta_3 \cdot k_1) \ \Big] \ + \\
& (\zeta_2 \cdot \zeta_3) \Big [ - \Big( \frac{4}{s} + \frac{4}{t} \Big) (\zeta_1 \cdot k_2) (\zeta_4 \cdot k_1) - \Big( \frac{4}{s} + \frac{4}{t} \Big) (\zeta_1 \cdot k_2) (\zeta_4 \cdot k_2) - \frac{4}{t} (\zeta_1 \cdot k_3) (\zeta_4 \cdot k_2) \ \Big] \ + \\
&(\zeta_2 \cdot \zeta_4) \Big [ \ \frac{4}{s} (\zeta_1 \cdot k_2) (\zeta_3 \cdot k_1) + \Big( \frac{4}{s} + \frac{4}{t} \Big) (\zeta_1 \cdot k_2) (\zeta_3 \cdot k_2) + \frac{4}{t} (\zeta_1 \cdot k_3) (\zeta_3 \cdot k_2) \ \Big] \ + \\
& (\zeta_3 \cdot \zeta_4) \Big [ - \Big( \frac{4}{s} + \frac{4}{t} \Big) (\zeta_1 \cdot k_2) (\zeta_2 \cdot k_3) + \frac{4}{s} (\zeta_1 \cdot k_3) (\zeta_2 \cdot k_1) - \frac{4}{t} (\zeta_1 \cdot k_3) (\zeta_2 \cdot k_3) \ \Big] \ \biggr\} \ ,
\end{split}
\label{F2-0}
\end{multline}
\noindent where\footnote{In (\ref{I2}) we have introduced the superscript `[4]' as a reminder that the integrals that have appeared are the ones for the $4$-point case.}
\begin{eqnarray}
I^{[4]}_1 = \int_0^1 dx_2 \ {x_2}^{\alpha' s -1} (1-x_2)^{\alpha' t} \ , \ \ \
I^{[4]}_2 = \int_0^1 dx_2 \ {x_2}^{\alpha' s } (1-x_2)^{\alpha' t -1} \ , \nonumber \\
I^{[4]}_3 = \int_0^1 dx_2 \ {x_2}^{\alpha' s -1} (1-x_2)^{\alpha' t -1} \ . \hspace{2cm}
\label{I2}
\end{eqnarray}
\noindent In (\ref{F2-0}), when we substituted $A_{YM}(1,2,3,4)$ from eq.(\ref{A1234-1}), we have used that $u=-s-t$.\\
\noindent Comparing the coefficient of the $(\zeta_1 \cdot \zeta_2)(\zeta_3 \cdot k_1)(\zeta_4 \cdot k_2 )$ term in both sides of (\ref{F2-0}) we find that
\begin{eqnarray}
F^{\{2\}}(\alpha') & = & \alpha' s \ I^{[4]}_1 \ ,
\label{F2-1}
\end{eqnarray}
or equivalently,
\begin{eqnarray}
\label{F2-2}
F^{\{2\}}(\alpha') & = & 2 \alpha' k_2 \cdot k_1 \int_0^1 dx_2 \ x_2^{2 \alpha' k_2 \cdot k_1-1} (1-x_2)^{2 \alpha' k_3 \cdot k_2} \\
\label{F2-3}
& = & \int_0^1 dx_{2} \ \biggl( \prod_{i>j \geq 1}^{3} (x_i - x_j)^{2 \alpha' k_i \cdot k_j} \biggr) \frac{ \ 2 \alpha' k_2 \cdot k_1 \ }{x_2 - x_1} \ ,
\end{eqnarray}
where $x_1=0$ and $x_3=1$.\\
\noindent Formula (\ref{F2-3}) corresponds precisely to formula (\ref{MSS2}) for the case of $N=4$, as we had anticipated.\\
\noindent Notice that comparing other $(\zeta \cdot \zeta)^1 (\zeta \cdot k)^2$ terms in eq.(\ref{F2-0}) we may arrive at different expressions for $F^{\{2\}}(\alpha')$. For example, if we compare the coefficient of $(\zeta_1 \cdot \zeta_3)(\zeta_2 \cdot k_3)(\zeta_4 \cdot k_1 )$ and the coefficient of $(\zeta_2 \cdot \zeta_3)(\zeta_1 \cdot k_2)(\zeta_4 \cdot k_1 )$, in both sides of (\ref{F2-0}), we find, respectively, that
\begin{eqnarray}
F^{\{2\}}(\alpha') = \alpha' t \ I^{[4]}_2 \ , \ \ F^{\{2\}}(\alpha') = \alpha' \frac{s \ t}{s+t} \ I^{[4]}_3 \ .
\label{F2-4}
\end{eqnarray}
Integrating by parts in the definition of $I^{[4]}_2$ (and considering the analytic continuation in $\alpha' s$ and $\alpha' t$ of the three integrals in eq.(\ref{I2})), we have that
\begin{eqnarray}
I^{[4]}_2 & = & \frac{s}{t} \ I^{[4]}_1 \ .
\label{I2-2}
\end{eqnarray}
Also, multiplying the identity $1/ \{ x_2 (1-x_2) \} = 1/x_2 + 1/(1-x_2)$ by the term $x_2^{\alpha' s} (1-x_2)^{\alpha' t}$ and integrating in the interval $[0,1]$, this identity becomes
\begin{eqnarray}
I^{[4]}_3 & = & I^{[4]}_1 + I^{[4]}_2 \ .
\label{I2-3}
\end{eqnarray}
Using relations (\ref{I2-2}) and (\ref{I2-3}) it is easy to see that the two alternative expressions for $F^{\{2\}}(\alpha')$, given in eq.(\ref{F2-4}), are equivalent to the one in (\ref{F2-1}) (and, therefore, equivalent to the one in (\ref{F2-3})). \\
\noindent One might even ask what would have happened if we had included the three $(\zeta \cdot \zeta)^2$ terms in both sides of (\ref{F2-0}) and compared their corresponding coefficients. The answer is that we would have found two of the three integral representations that we already have for $F^{\{2\}}(\alpha')$, plus a new one:
\begin{eqnarray}
F^{\{2\}}(\alpha') & = & \frac{s \ (\alpha' s-1)}{s+t} \ \int_0^1 dx_2 \ {x_2}^{\alpha' s -2} (1-x_2)^{\alpha' t} \ .
\label{F2-5}
\end{eqnarray}
It can easily be verified that this expression can be obtained by integrating by parts in $I^{[4]}_3$, in the second equation of (\ref{F2-4}).
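This equivalence, too, reduces to Gamma-function identities and can be checked numerically. The sketch below assumes the Veneziano form $F^{\{2\}}(\alpha')=\Gamma(1+\alpha's)\Gamma(1+\alpha't)/\Gamma(1+\alpha's+\alpha't)$ and treats the analytically continued integral in (\ref{F2-5}) as $B(\alpha's-1,\alpha't+1)$; all numeric values are illustrative:

```python
from math import gamma

def beta(a, b):
    # Euler Beta function (analytically continued via Gamma functions)
    return gamma(a) * gamma(b) / gamma(a + b)

ap = 1.0                  # alpha' (illustrative)
s, t = 0.31, 0.57         # illustrative Mandelstam values
a, b = ap * s, ap * t

# Veneziano-type 4-point momentum factor
F4 = gamma(1 + a) * gamma(1 + b) / gamma(1 + a + b)

# eq. (F2-5): s (alpha' s - 1)/(s + t) times the integral of x^(alpha' s - 2) (1-x)^(alpha' t),
# whose analytic continuation is B(alpha' s - 1, alpha' t + 1)
F25 = s * (a - 1) / (s + t) * beta(a - 1, b + 1)
assert abs(F25 - F4) < 1e-12
```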
\subsubsection{Case of $N=5$}
\label{N4-5}
\noindent In this case the relation (\ref{5point3}) guarantees that choosing $T(1,2,3,4,5)=A_b(1,2,3,4,5)$ it is possible to write down
\begin{eqnarray}
A_b(1,2,3,4,5) & = & F^{\{23\}}(\alpha') \ A_{YM}(1,2,3,4,5) + F^{\{32\}}(\alpha') \ A_{YM}(1,3,2,4,5) \ .
\label{A12345-3}
\end{eqnarray}
\noindent Following the procedure proposed in subsection \ref{Finding the components}, in eq.(\ref{A12345-3}) we consider only the $(\zeta \cdot \zeta)^1 (\zeta \cdot k)^3$ terms,
\begin{multline}
A_b(1,2,3,4,5)\Big|_{(\zeta \cdot \zeta)^1 (\zeta \cdot k)^3} = \\
\begin{split}
&= F^{\{23\}}(\alpha') \ A_{YM}(1,2,3,4,5)\Big|_{(\zeta \cdot \zeta)^1 (\zeta \cdot k)^3} + F^{\{32\}}(\alpha') \ A_{YM}(1,3,2,4,5)\Big|_{(\zeta \cdot \zeta)^1 (\zeta \cdot k)^3} \ ,
\end{split}
\label{A12345-4}
\end{multline}
where we are supposed to write all $(\zeta \cdot k)$ terms in the basis given in (\ref{zetakterms}) for $N=5$.\\
On the one hand, in section $5$ of ref.\cite{Brandt1} it has been explained in detail how the $x_5 \rightarrow + \infty$ limit is taken and how the Grassmann integrations are done in eq.(\ref{ANfermionic}), in the case of $N=5$\footnote{There are three $\theta_i$'s and five $\phi_j$'s in this case.}, so all the terms of the left-hand side of eq.(\ref{A12345-4}) are known.\\
\noindent On the other hand, we give the expression for all the $(\zeta \cdot \zeta)^1 (\zeta \cdot k)^3$ terms of $A_{YM}(1,2,3,4,5)$ (and, therefore, also the corresponding terms of $A_{YM}(1,3,2,4,5)$) in eq.(\ref{A5}), so the right-hand side of eq.(\ref{A12345-4}) is also completely known, except for $ F^{\{23\}}(\alpha')$ and $ F^{\{32\}}(\alpha')$.\\
\noindent If we examine the coefficient of $(\zeta_1 \cdot \zeta_2)(\zeta_3 \cdot k_4)(\zeta_4 \cdot k_1)(\zeta_5 \cdot k_2)$ in both sides of eq.(\ref{A12345-4}) we find that\footnote{In our $N=5$ calculations we have substituted the $k_i \cdot k_j$ invariants in terms of the independent Mandelstam variables that we have chosen. See eqs.(\ref{Mandelstam5}) and (\ref{Mandelstam5-2}).}
\begin{multline}
\frac{8}{s_1 s_3} \ F^{\{23\}}(\alpha') = \\
\begin{split}
= 8 {\alpha'}^2 \int_0^1 dx_3 \int_0^{x_3} dx_2 \ {x_2}^{\alpha' s_1-1} {x_3}^{\alpha' ( s_4-s_1-s_2 )} {(x_3-x_2)}^{\alpha' s_2} {(1-x_2)}^{\alpha' ( s_5-s_2-s_3 )} {(1-x_3)}^{\alpha' s_3-1} \ .
\end{split}
\label{F23-1}
\end{multline}
\noindent This equation leads directly to an expression for $F^{\{23\}}(\alpha')$, which might be written as\footnote{In (\ref{F23-2}) we have substituted back the explicit expressions for the $N=5$ Mandelstam variables.}
\begin{eqnarray}
F^{\{23\}}(\alpha') &=& \int_0^1 dx_{3} \int_0^{x_3} dx_{2} \ \biggl( \prod_{i>j \geq 1}^{4} (x_i - x_j)^{2 \alpha' k_i \cdot k_j} \biggr) \frac{ \ 2 \alpha' k_2 \cdot k_1 \ }{x_2 - x_1} \cdot \frac{ \ 2 \alpha' k_4 \cdot k_3 \ }{x_4 - x_3} \ ,
\label{F23-2}
\end{eqnarray}
where $x_1=0$ and $x_4=1$.\\
\noindent The fact that in eq.(\ref{F23-1}) $F^{\{32\}}(\alpha')$ is not present means that the $(\zeta_1 \cdot \zeta_2)(\zeta_3 \cdot k_4)(\zeta_4 \cdot k_1)(\zeta_5 \cdot k_2)$ term is present in $A_b(1,2,3,4,5)$ and $A_{YM}(1,2,3,4,5)$, but it is not present in $A_{YM}(1,3,2,4,5)$. \\
\noindent Similarly, if we examine the coefficient of $(\zeta_1 \cdot \zeta_3)(\zeta_2 \cdot k_4)(\zeta_4 \cdot k_1)(\zeta_5 \cdot k_3)$ in both sides of eq.(\ref{A12345-4}) we find that
\begin{multline}
-\frac{8}{ (s_1+s_2-s_4 ) ( s_5-s_2-s_3 ) } \ F^{\{32\}}(\alpha') = \\
\begin{split}
= 8 {\alpha'}^2 \int_0^1 dx_3 \int_0^{x_3} dx_2 \ {x_2}^{\alpha' s_1} {x_3}^{\alpha' ( s_4-s_1-s_2 )-1} {(x_3-x_2)}^{\alpha' s_2} {(1-x_2)}^{\alpha' ( s_5-s_2-s_3 )-1} {(1-x_3)}^{\alpha' s_3} \ ,
\end{split}
\label{F32-1}
\end{multline}
from which we can arrive at
\begin{eqnarray}
F^{\{32\}}(\alpha') &=& \int_0^1 dx_{3} \int_0^{x_3} dx_{2} \ \biggl( \prod_{i>j \geq 1}^{4} (x_i - x_j)^{2 \alpha' k_i \cdot k_j} \biggr) \frac{ \ 2 \alpha' k_3 \cdot k_1 \ }{x_3 - x_1} \cdot \frac{ \ 2 \alpha' k_4 \cdot k_2 \ }{x_4 - x_2} \ .
\label{F32-2}
\end{eqnarray}
Formulas (\ref{F23-2}) and (\ref{F32-2}) correspond to the expressions of the two momentum factors that appear in the case of $N=5$. These two formulas agree completely with the expected expression for $F^{\{23\}}(\alpha')$ and $F^{\{32\}}(\alpha')$, according to formula (\ref{MSS3}). \\
\noindent The two specific kinematical structures that allowed us to arrive directly at the known expressions of $ F^{\{23\}}(\alpha')$ and $ F^{\{32\}}(\alpha')$ are not the only ones that lead to those expressions. For example, we have checked that examining the coefficient of $(\zeta_1 \cdot \zeta_4)(\zeta_2 \cdot k_1)(\zeta_3 \cdot k_4)(\zeta_5 \cdot k_1)$ and $(\zeta_1 \cdot \zeta_4)(\zeta_2 \cdot k_1)(\zeta_3 \cdot k_4)(\zeta_5 \cdot k_2)$, in both sides of (\ref{A12345-4}), also leads to eq.(\ref{F23-1}). We have also checked that examining the coefficient of $(\zeta_1 \cdot \zeta_4)(\zeta_2 \cdot k_4)(\zeta_3 \cdot k_1)(\zeta_5 \cdot k_1)$ and $(\zeta_1 \cdot \zeta_4)(\zeta_2 \cdot k_4)(\zeta_3 \cdot k_1)(\zeta_5 \cdot k_3)$, in both sides of (\ref{A12345-4}), also leads to eq.(\ref{F32-1}). \\
\noindent For the sake of completeness we will examine in eq.(\ref{A12345-4}) two other kinematical structures, namely, $(\zeta_1 \cdot \zeta_2)(\zeta_3 \cdot k_1)(\zeta_4 \cdot k_2)(\zeta_5 \cdot k_1)$ and $(\zeta_1 \cdot \zeta_2)(\zeta_3 \cdot k_4)(\zeta_4 \cdot k_2)(\zeta_5 \cdot k_1)$. This leads, respectively, to the following two relations that $F^{\{23\}}(\alpha')$ and $F^{\{32\}}(\alpha')$ should satisfy:
\begin{eqnarray}
\frac{2}{s_1} \ F^{\{23\}}(\alpha') \ + \ \frac{2 \ (s_2 + s_3 - s_4 -s_5)}{ ( s_5-s_2-s_3 ) (s_1+s_2-s_4 ) } \ F^{\{32\}}(\alpha') \ & = & \ \ \ 2 \ {\alpha'}^2 \ s_4 \ I^{[5]}_1 \ , \nonumber \\
- \frac{2 \ ( s_1 + s_5 )}{s_1 s_3} \ F^{\{23\}}(\alpha') \ + \ \hspace{2.1cm} \frac{2 }{ ( s_2+s_3-s_5 ) } \ F^{\{32\}}(\alpha') \ & = & - 2 \ {\alpha'}^2 \ s_5 \ I^{[5]}_2 \ ,
\label{system2}
\end{eqnarray}
where
\begin{eqnarray}
I^{[5]}_1 & = & \int_0^1 dx_3 \int_0^{x_3} dx_2 \ {x_2}^{\alpha' s_1-1} {x_3}^{\alpha' ( s_4-s_1-s_2 )} {(x_3-x_2)}^{\alpha' s_2} {(1-x_2)}^{\alpha' ( s_5-s_2-s_3 )-1} {(1-x_3)}^{\alpha' s_3 - 1} \ , \nonumber \\
&& \\
I^{[5]}_2 & = & \int_0^1 dx_3 \int_0^{x_3} dx_2 \ {x_2}^{\alpha' s_1 -1} {x_3}^{\alpha' ( s_4-s_1-s_2 )-1} {(x_3-x_2)}^{\alpha' s_2} {(1-x_2)}^{\alpha' ( s_5-s_2-s_3 )-1} {(1-x_3)}^{\alpha' s_3} \ . \nonumber \\
&&
\label{I5}
\end{eqnarray}
\noindent Solving the linear system in (\ref{system2}) leads to integral expressions for $F^{\{23\}}(\alpha')$ and $F^{\{32\}}(\alpha')$ which apparently differ from the ones found in (\ref{F23-1}) and (\ref{F32-1}), respectively. But this is exactly the same situation that we analyzed in eqs.(\ref{F2-4}) and (\ref{F2-5}), for the case of $N=4$: there are integration by parts and partial fraction relations that allow us to write the same momentum factor in apparently different integral representations. Since the expressions found for the momentum factors this time are double integrals, there is a much richer variety of relations that can be found for them, and it is not always immediate to prove the equivalence between these different integral representations. In refs. \cite{Machado1} and \cite{Barreiro1} it was explained how, using these relations, $A_b(1,2,3,4,5)$ could be written for the first time in a simple form, as a sum of two contributions (in direct analogy to eq.(\ref{A12345-3})).\\
\subsubsection{Case of $N=6$ and $N=7$}
\label{N67-3}
\noindent In the case of $N=6$ and $N=7$ we will no longer refer to the alternative integral expressions that show up for the momentum factors (and which can always be proved by combining integration by parts and partial fraction techniques). These expressions will simply have to be equivalent to the ones we are looking for, because of the consistency of our method. \\
\noindent In these cases, relations (\ref{6point}) and (\ref{7point}) guarantee that, choosing $T(1,2,3,4,5,6)=A_b(1,2,3,4,5,6)$ and $T(1,2,3,4,5,6,7)=A_b(1,2,3,4,5,6,7)$, respectively, it is possible to write down relations similar to the ones in those equations, but relabelling the momentum factors as
\begin{eqnarray}
\lambda^{\{ \sigma_6\}} \rightarrow F^{\{ \sigma_6\}}(\alpha') \ , \ \ \ \ \ \lambda^{\{ \sigma_7\}} \rightarrow F^{\{ \sigma_7\}}(\alpha') \ ,
\label{lambdas}
\end{eqnarray}
\noindent where $\sigma_6$ and $\sigma_7$ are all possible $S_{N-3}$ permutations (for $N=6$ and $N=7$) of indices $\{2, 3, 4 \}$ and
$\{2, 3, 4, 5 \}$, respectively.\\
\noindent The final result is that we succeed in arriving at the six and twenty-four momentum factors of formula (\ref{MSS3}) for $N=6$ and $N=7$, respectively, that is:
\begin{eqnarray}
F^{\{234\}}(\alpha') &=& \int_0^1 dx_{4} \int_0^{x_4} dx_{3} \int_0^{x_3} dx_{2} \ \biggl( \prod_{i>j \geq 1}^{5} (x_i - x_j)^{2 \alpha' k_i \cdot k_j} \biggr)
\biggl\{ \frac{ \ 2 \alpha' k_2 \cdot k_1 \ }{x_2 - x_1} \cdot \frac{ \ 2 \alpha' k_5 \cdot k_4 \ }{x_5 - x_4} \times \nonumber \\
&&\hspace{7cm} \Big( \frac{ \ 2 \alpha' k_3 \cdot k_1 \ }{x_3 - x_1} + \frac{ \ 2 \alpha' k_3 \cdot k_2 \ }{x_3 - x_2} \Big) \biggr\} \hspace{1cm}
\label{G1}
\end{eqnarray}
and
\begin{eqnarray}
F^{\{2345\}}(\alpha') &=& \int_0^1 dx_5 \int_0^{x_5} dx_{4} \int_0^{x_4} dx_{3} \int_0^{x_3} dx_{2} \ \biggl( \prod_{i>j \geq 1}^{6} (x_i - x_j)^{2 \alpha' k_i \cdot k_j} \biggr) \ \biggl\{ \frac{ \ 2 \alpha' k_2 \cdot k_1 \ }{x_2 - x_1} \times \nonumber \\
&&\hphantom{ \int_0^1 } \frac{ \ 2 \alpha' k_6 \cdot k_5 \ }{x_6 - x_5} \cdot \Big( \frac{ \ 2 \alpha' k_3 \cdot k_1 \ }{x_3 - x_1} + \frac{ \ 2 \alpha' k_3 \cdot k_2 \ }{x_3 - x_2} \Big)
\Big( \frac{ \ 2 \alpha' k_5 \cdot k_4 \ }{x_5 - x_4} + \frac{ \ 2 \alpha' k_6 \cdot k_4 \ }{x_6 - x_4} \Big) \biggr\} \ . \hspace{0.8cm}
\label{H1}
\end{eqnarray}
\noindent In eq.(\ref{G1}) it is understood that $\{x_1=0, x_5=1\}$, while in eq.(\ref{H1}) it is understood that $\{x_1=0, x_6=1\}$.\\
As we mentioned after eq.(\ref{MSS2}), the MSS prescription for finding the remaining momentum factors consists in simply doing a permutation of indices {\it inside} the curly brackets of (\ref{G1}) and (\ref{H1}), according to the $\sigma_N$ permutation that is being considered (where $N=6$ and $N=7$, respectively). For example, doing $2 \leftrightarrow 3$ in (\ref{G1}) and (\ref{H1}) we arrive, respectively, at
\begin{eqnarray}
F^{\{324\}}(\alpha') &=& \int_0^1 dx_{4} \int_0^{x_4} dx_{3} \int_0^{x_3} dx_{2} \ \biggl( \prod_{i>j \geq 1}^{5} (x_i - x_j)^{2 \alpha' k_i \cdot k_j} \biggr) \biggl\{ \ \frac{ \ 2 \alpha' k_3 \cdot k_1 \ }{x_3 - x_1} \cdot \frac{ \ 2 \alpha' k_5 \cdot k_4 \ }{x_5 - x_4} \times \nonumber \\
&&\hspace{7cm} \Big( \frac{ \ 2 \alpha' k_2 \cdot k_1 \ }{x_2 - x_1} + \frac{ \ 2 \alpha' k_2 \cdot k_3 \ }{x_2 - x_3} \Big) \ \biggr\} \hspace{1cm}
\label{G2}
\end{eqnarray}
and
\begin{eqnarray}
F^{\{3245\}}(\alpha') &=& \int_0^1 dx_5 \int_0^{x_5} dx_{4} \int_0^{x_4} dx_{3} \int_0^{x_3} dx_{2} \ \biggl( \prod_{i>j \geq 1}^{6} (x_i - x_j)^{2 \alpha' k_i \cdot k_j} \biggr) \ \biggl\{ \frac{ \ 2 \alpha' k_3 \cdot k_1 \ }{x_3 - x_1} \times \nonumber \\
&&\hphantom{ \int_0^1 } \frac{ \ 2 \alpha' k_6 \cdot k_5 \ }{x_6 - x_5} \cdot \Big( \frac{ \ 2 \alpha' k_2 \cdot k_1 \ }{x_2 - x_1} + \frac{ \ 2 \alpha' k_2 \cdot k_3 \ }{x_2 - x_3} \Big)
\Big( \frac{ \ 2 \alpha' k_5 \cdot k_4 \ }{x_5 - x_4} + \frac{ \ 2 \alpha' k_6 \cdot k_4 \ }{x_6 - x_4} \Big) \biggr\} \ . \hspace{0.8cm}
\label{H7}
\end{eqnarray}
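The relabelling rule is purely mechanical, so it can be verified at the level of the integrands. The sketch below uses stand-in numeric values for the $x_i$ and for the symmetric products $2\alpha' k_i\cdot k_j$ (all illustrative); only the bracket structure of eqs.(\ref{G1}) and (\ref{G2}) is being tested:

```python
import random

x = {1: 0.0, 2: 0.2, 3: 0.5, 4: 0.8, 5: 1.0}   # stand-in integration variables
random.seed(7)
t = {}                                          # symmetric stand-ins for 2 alpha' k_i.k_j
for i in range(1, 6):
    for j in range(i + 1, 6):
        t[(i, j)] = t[(j, i)] = random.uniform(-1.0, 1.0)

def f(i, j):
    # one factor 2 alpha' k_i.k_j / (x_i - x_j)
    return t[(i, j)] / (x[i] - x[j])

def bracket(p2, p3, p4):
    # curly bracket of eq. (G1) with the labels {2,3,4} replaced by (p2,p3,p4);
    # legs 1 and 5 are fixed by the sigma_N permutations
    return f(p2, 1) * f(5, p4) * (f(p3, 1) + f(p3, p2))

# swapping 2 <-> 3 inside the bracket of (G1) ...
from_rule = bracket(3, 2, 4)
# ... reproduces the bracket of eq. (G2), written out literally
literal = f(3, 1) * f(5, 4) * (f(2, 1) + f(2, 3))
assert abs(from_rule - literal) < 1e-12
```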
In Appendix \ref{Linear system} we specify the kinematical terms that allow us to write a linear system of equations which has a $unique$ solution for the momentum factors, given precisely by the expressions in eqs.(\ref{G1}) and (\ref{H1}) (and, for the remaining momentum factors, by means of a $\sigma_N$ permutation of the corresponding indices). We also specify, in that appendix, the corresponding linear system of equations for the $F^{\{ \sigma_6\}}(\alpha')$'s and for the $F^{\{ \sigma_7\}}(\alpha')$'s.
\subsection{For gauge bosons and massless fermions}
\label{For gauge bosons and massless fermions}
\noindent In this section we briefly explain how to find the amplitudes involving fermions directly from the corresponding amplitude involving only gauge bosons. From the point of view of vertex operators, in the RNS formalism, this is a non trivial thing to do, since it is known that difficulties arise when calculating amplitudes of $n$ gauge bosons and $2m$ massless fermions in the case of $m \geq 3$ \footnote{The difficulties have to do with the fact that the total super ghost charge of the involved vertex operators must be $-2$, while fermion vertex operators are generally known in the $+1/2$ and $-1/2$ picture (therefore, four fermions give $4 \times (-1/2) = -2$ and this case is fine). We thank E. Hatefi for clarifying this issue to us. \\
\noindent In spite of this difficulty, in ref. \cite{Kostelecky1} the {\it pure} fermion $6$-point amplitude was obtained using the RNS formalism.}(where $n= 1, 2, \ldots $) \cite{Friedan1}.\\
\noindent The shortcut to find the amplitudes involving (any number of even) fermions is that we do not need to compute them from the beginning, by considering correlation functions of vertex operators: the closed formula for the $N$-point amplitude of gauge bosons and D=10 Supersymmetry are enough ingredients to find the amplitudes involving fermions. We already saw this in section {\it 2.3} of ref.\cite{Barreiro2}, in the case of the open superstring $5$-point amplitude, where the ansatz that we proposed for the fermion amplitudes consisted in simply changing the original gauge boson subamplitudes of the $2$-dimensional basis ($A_{YM}(1,2,3,4,5)$ and $A_{F^4}(1,2,3,4,5)$, in the case of ref.\cite{Barreiro2}) by the corresponding expression of their superpartner subamplitudes.\\
\noindent In the $N$-point case we do exactly the same thing: we change the
gauge boson subamplitudes of the basis (in this case given by the Yang-Mills subamplitudes in (\ref{basis})) by their superpartner subamplitudes. The resulting formula is precisely the one obtained by Mafra, Schlotterer and Stieberger \cite{Mafra1} (see eq. (\ref{MSS})).\\
\noindent Besides the expression in (\ref{MSS}) there is no other possibility for the scattering amplitudes involving fermions since, as analyzed in \cite{Green1}, the global supersymmetry requirement is sufficient to determine the structure of the boson/fermion interactions $uniquely$\footnote{This analysis can be found, for example, in sections 5.3.1 and 7.4.1 of ref. \cite{Green1}. We thank N. Berkovits for suggesting part of the specific sections of this book to us.}. In formula (\ref{MSS}) this means that the summed variation of all possible boson/fermion subamplitudes, under the supersymmetry transformations,
\begin{eqnarray}
\delta \zeta^{\mu}_j = \frac{i}{2} ( \bar{\epsilon} \gamma^{\mu} u_j ) \ , \ \ \
\delta u_j = \frac{i}{2} ( \gamma_{\mu \nu} \epsilon ) \zeta^{\mu}_j k^{\nu}_j \ , \ \ \
\delta \bar{u}_j = \frac{i}{2} ( \bar{\epsilon} \gamma_{\mu \nu} ) \zeta^{\mu}_j k^{\nu}_j \ , \ \ \ (j=1, \ldots, N)
\label{susytransf}
\end{eqnarray}
is $zero$, after using the on-shell and the physical state conditions, together with momentum conservation\footnote{The formulas in (\ref{susytransf}) are nothing else than the momentum space version of the supersymmetric transformations of the fields $A^a_{\mu}$ and $\psi^a$ of D=10 Super Yang-Mills theory.}.\\
\section{Finding the $\alpha'$ expansion of the momentum factors}
\label{Finding the}
\noindent It was mentioned in ref. \cite{Mafra3} that the factorization properties of open superstring subamplitudes could be used as a complementary approach to determine the ${\alpha'}$ expansion of the momentum factors $F^{\{\sigma_N\}}(\alpha')$\footnote{See, for example, Appendix C of this reference.}. We agree completely with this observation and, indeed, we will use this sort of factorization property in this section as part of the tools to find the ${\alpha'}$ expansion of the momentum factors (see eqs.(\ref{unitarity2}) and (\ref{collinear})). Our remark at this point is that there is a more general form of the factorization (or tree-level unitarity) relations than the one considered in ref. \cite{Mafra3}, which includes as a particular case the collinear limit considered there (see eq.(\ref{collinear})). This form, together with the cyclic property of the subamplitude, allows us to obtain all the $\alpha'$ terms of the momentum factors, up to quite high ${\alpha'}$ order, by just using the very well known $4$-point (Veneziano-type) momentum factor. \\
This is a very non trivial result that will allow us to bypass $\alpha'$ expansions of worldsheet integrals for $N=5$, $N=6$ and $N=7$, at least up to ${\alpha'}^6$ order in the first two cases and ${\alpha'}^4$ order in the last one\footnote{Our limitation in going to higher orders in $\alpha'$, by just considering the expansion of the $4$-point Gamma factor, is purely a computational one, since the expressions become extremely large as the $\alpha'$ order grows. As we explained in subsection \ref{Using}, we believe that using the 4-point amplitude $\alpha'$ expansion we are able to obtain completely the $\alpha'$ expansion of any $N$-point momentum factor up to ${\alpha'}^7$ order.}, in the same spirit in which the revisited S-matrix method succeeds in finding the corresponding $\alpha'$ terms of the OSLEEL which would commonly be determined by open superstring $N$-point amplitudes (where $N>4$) \cite{Barreiro0}.\\
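For reference, the $4$-point Gamma factor that feeds this procedure has a standard closed-form $\alpha'$ expansion, $\ln F = \sum_{n\geq 2} \frac{(-1)^n\zeta(n)}{n}\left[(\alpha's)^n+(\alpha't)^n-(\alpha's+\alpha't)^n\right]$, which follows from the Taylor series of $\ln\Gamma(1+x)$. A numeric sketch of this expansion (the truncation order, the sample values, and the simple $\zeta(n)$ estimator are all illustrative):

```python
from math import gamma, log

def zeta(n, K=1000):
    # zeta(n) for n >= 2: partial sum plus Euler-Maclaurin tail estimate
    return sum(k ** -n for k in range(1, K)) + K ** (1 - n) / (n - 1) + 0.5 * K ** -n

a, b = 0.1, 0.15      # alpha'*s and alpha'*t (small, illustrative)

# exact log of the Veneziano-type factor Gamma(1+a)Gamma(1+b)/Gamma(1+a+b)
exact = log(gamma(1 + a) * gamma(1 + b) / gamma(1 + a + b))

# zeta-value expansion, truncated
series = sum((-1) ** n * zeta(n) / n * (a ** n + b ** n - (a + b) ** n)
             for n in range(2, 16))
assert abs(exact - series) < 1e-9
```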
\vspace{0.5cm}
\subsection{Tree level unitarity of the amplitudes}
\label{Using tree}
\noindent In the case of gauge bosons, the tree level unitarity relations state that the $N$-point subamplitude obeys\footnote{In eq.(\ref{unitarity1}) and the remaining ones of this subsection that contain expressions involving the subamplitudes, we will not use the index `{\it b}' on them because their dependence on the polarization vectors $\zeta_i$ has been made explicit and, therefore, it is clear that they are gauge boson subamplitudes.\\
\noindent As a general rule, we will only use the `{\it b}' index in the subamplitude variable $A$ when its dependence on the polarizations has not been made explicit.}\cite{Polchinski1}
\begin{eqnarray}
{\cal A}(\zeta_1, k_1;\ldots ; \zeta_N, k_N) \sim i \int \frac{d^D k}{(2 \pi)^D}
\frac{{\cal A}^{\mu } (\zeta _{1},k_{1};\ldots ;\zeta _{m-1},k_{m-1};k) \ {\cal A}_{\mu}(-k;\zeta _{m},k_{m};\ldots ;\zeta _{N},k_{N})}{ -k^2 } \ \ \ \ \
\label{unitarity1}
\end{eqnarray}
\noindent where ${\cal A}^{\mu }$ and ${\cal A}_{\mu }$ are, respectively, $m$ and $(N+2-m)$-point subamplitudes and where the pole that ${\cal A}(\zeta_1, k_1;\ldots ; \zeta_N, k_N)$ has in the Mandelstam variable $(k_{1}+k_{2}+\cdots +k_{m-1})^{2}$ is being taken to {\it zero}:
\begin{eqnarray}
(k_{1}+k_{2}+\ldots +k_{m-1})^{2} \rightarrow 0 \ .
\label{zero}
\end{eqnarray}
\noindent All ${\cal A}$ subamplitudes in (\ref{unitarity1}) include the `$i \ \delta$' factor that takes into account momentum conservation (see eq.(\ref{N-point})) and $D$ ($=10$) is the spacetime dimension.\\
\noindent Eq.(\ref{unitarity1}) is applicable for $N \geq 4$ and $m=3, \ldots, N-1$, so there are $(N-3)$ unitarity relations for Mandelstam variables like $(k_{1}+k_{2}+\cdots +k_{m-1})^{2}$. \\
\noindent The remaining unitarity relations arise when the other Mandelstam variables,
\begin{eqnarray}
(k_{2}+k_{3}+\ldots +k_{m-1})^{2}, (k_{3}+k_{4}+\ldots +k_{m-1})^{2}, \ldots , (k_{m-2}+k_{m-1})^{2} \ ,
\label{}
\end{eqnarray}
are individually taken to zero. There are $N(N-3)/2$ independent Mandelstam variables and that is the total number of independent unitarity relations.\\
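The count follows from simple linear algebra: there are $\binom{N}{2}$ products $k_i\cdot k_j$ among the massless momenta, and momentum conservation imposes the $N$ constraints $k_i\cdot(k_1+\cdots+k_N)=0$, leaving $\binom{N}{2}-N = N(N-3)/2$ independent invariants. A one-line check of this arithmetic:

```python
from math import comb

# C(N,2) dot products k_i.k_j minus N conservation constraints k_i.(k_1+...+k_N) = 0
for N in range(4, 11):
    assert comb(N, 2) - N == N * (N - 3) // 2
```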
\noindent Substituting the explicit dependence of each ${\cal A}$ subamplitude on the `$i \ \delta$' factor in (\ref{unitarity1}) and calculating the $D$-dimensional integral leads to
\begin{eqnarray}
A(\zeta_1, k_1;\ldots ; \zeta_N, k_N) \sim
\frac{A^{\mu } (\zeta _{1},k_{1};\ldots ;\zeta _{m-1},k_{m-1};k) \ A_{\mu}(-k;\zeta _{m},k_{m};\ldots ;\zeta _{N},k_{N}) }
{ (k_{1}+k_{2}+\cdots +k_{m-1})^{2} } \ ,
\label{unitarity2}
\end{eqnarray}
\noindent where, now, the $A$, $A^{\mu}$ and $A_{\mu}$ subamplitudes no longer include the $\delta$ dependence and the momentum conservation on each of them must be implicitly assumed. These subamplitudes are the ones that we have been dealing with throughout this work.\\
\noindent $A^{\mu}$ and $A_{\mu}$ are subamplitudes in which the corresponding polarization vector has been taken away:
\begin{eqnarray}
\label{Asuper}
A^{\mu }(\zeta _{1},k_{1};\ldots ;\zeta _{m-1},k_{m-1};k) & = & \frac{\partial }{\partial \zeta_{\mu}} A(\zeta _{1},k_{1};\ldots ;\zeta _{m-1},k_{m-1}; \zeta, k) \ , \\
\label{Asub}
A_{\mu}(-k;\zeta _{m},k_{m};\ldots ;\zeta _{N},k_{N}) & = & \frac{\partial }{\partial \zeta^{\mu}} A(\zeta, -k;\zeta _{m},k_{m};\ldots ;\zeta _{N},k_{N}) \ ,
\end{eqnarray}
where
\begin{eqnarray}
k^{\mu} = - ( {k}^{\mu}_1 + \ldots + {k}^{\mu}_{m-1} ) = {k}^{\mu}_m + \ldots + {k}^{\mu}_{N} \
\label{kmu1}
\end{eqnarray}
and where the asymptotic mass-shell condition (\ref{zero}) is being taken into account.\\
\noindent Formula (\ref{unitarity2}) states that the residue that the $N$-point subamplitude has at $(k_{1}+k_{2}+\cdots +k_{m-1})^{2} = 0$ is given by the product of two lower-point subamplitudes (an $m$-point and an $(N+2-m)$-point one). It is valid at any $\alpha'$ order. In particular, if $\alpha' \rightarrow 0$ it means that Yang-Mills subamplitudes also satisfy it.\\
\noindent When eq.(\ref{unitarity2}) is considered for the particular case of $m=3$, it becomes
\begin{eqnarray}
A(\zeta_1, k_1;\ldots ; \zeta_N, k_N) \sim
\frac{A^{\mu }(\zeta _{1},k_{1};\zeta_2, k_2 ;k) \ A_{\mu}(-k;\zeta _{3},k_{3};\ldots ;\zeta _{N},k_{N}) }
{ (k_{1}+k_{2})^{2} } \ ,
\label{unitarity3}
\end{eqnarray}
or equivalently
\begin{eqnarray}
A( \zeta_1, k_1;\ldots ; \zeta_N, k_N ) &\sim &
\frac{1}{2(k_{1}\cdot k_{2})}\ \ V^{\mu }_{(12)} \ \frac{\partial }{\partial \zeta
^{\mu }}A(\zeta ,k_{1}+k_{2};\zeta_3,k_{3};\ldots;\zeta_{N},k_{N})\ ,
\label{collinear}
\end{eqnarray}
where
\begin{eqnarray}
V^{\mu }_{(12)}=-g \ \Big[ \ (\zeta _{1}\cdot \zeta _{2}) \ (k_{1}-k_{2})^{\mu }-2(\zeta
_{2}\cdot k_{1}) \ \zeta^{\mu }_{1}+2(\zeta _{1}\cdot k_{2}) \ \zeta^{\mu }_{2} \ \Big] \
\label{V12}
\end{eqnarray}
\noindent is the (contracted) Yang-Mills vertex.\\
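A quick consistency check on (\ref{V12}): contracting $V^{\mu}_{(12)}$ with $k^{\mu}=k_1^{\mu}+k_2^{\mu}$ gives {\it zero} once $k_1^2=k_2^2=0$ and $\zeta_i\cdot k_i=0$ are used. A numeric sketch with explicit (and purely illustrative) $D=4$ light-like momenta and transverse polarizations, taking $g=1$:

```python
def dot(u, v):
    # Minkowski product with signature (+,-,-,-)
    return u[0] * v[0] - sum(ui * vi for ui, vi in zip(u[1:], v[1:]))

g = 1.0
k1 = (1.0, 1.0, 0.0, 0.0)    # light-like: k1.k1 = 0
k2 = (3.0, 0.0, 3.0, 0.0)    # light-like: k2.k2 = 0
z1 = (0.0, 0.0, 1.0, 0.0)    # transverse: zeta_1 . k1 = 0
z2 = (1.0, 0.0, 1.0, 1.0)    # transverse: zeta_2 . k2 = 0

# contracted Yang-Mills vertex, eq. (V12)
V = tuple(-g * (dot(z1, z2) * (k1[m] - k2[m])
                - 2 * dot(z2, k1) * z1[m]
                + 2 * dot(z1, k2) * z2[m]) for m in range(4))

k = tuple(k1[m] + k2[m] for m in range(4))
assert abs(dot(z1, k1)) < 1e-12 and abs(dot(z2, k2)) < 1e-12
assert abs(dot(k, V)) < 1e-12    # transversality of the vertex
```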
\noindent For $m=3$, condition (\ref{zero}) implies that $(k_1+k_2)^2 = 2 k_1 \cdot k_2 $ goes to {\it zero}:
\begin{eqnarray}
k_1 \cdot k_2 \rightarrow 0 \ .
\label{parallel}
\end{eqnarray}
\noindent If $k_1$ and $k_2$ are non zero light-like vectors, then condition (\ref{parallel}) implies that these momenta are parallel ($k_1 || k_2$) and, therefore, the unitarity relation (\ref{collinear}) is nothing else than the collinear version of the factorization property of gauge boson subamplitudes \cite{Mangano1}. \\
\noindent In the case of arbitrary $m$ ($=3, \ldots , N-1$), if $\{k_1, \ldots, k_{m-1} \}$ are non zero physical (Minkowski) momenta, then condition (\ref{zero}) implies that {\it all} of them are parallel ($k_1 \ || \ k_2 \ || \ldots || \ k_{m-1}$)\footnote{Due to global momentum conservation, it turns out that {\it all} $k_i$ become parallel, for $i=1, \ldots, N$, not only the first $(m-1)$ momenta.}. Consequently, all Mandelstam variables should tend to zero (because if the momenta tend to be parallel then $k_i \cdot k_j \rightarrow 0$). This is what happens for the Mandelstam variables in the physical region.\\
\noindent So, the word of caution when considering eq.(\ref{unitarity2}) subject to condition (\ref{zero}) is that we are considering there the subamplitude expressions in which the Mandelstam variables have been analytically continued in the complex plane. The Mandelstam variables are then generally out of the physical region and, therefore, condition (\ref{zero}) does not have any implication for any of the other Mandelstam variables: they remain independent in spite of one of them (namely, $(k_{1}+k_{2}+\cdots +k_{m-1})^{2}$) tending to zero. This fact will be implicit in the calculations that we will do in subsections \ref{Case of the 5-point}, \ref{Case of the 6-point} and \ref{Case of the 7-point}. In these subsections we will use formula (\ref{collinear}) and its generalization, eq.(\ref{unitarity2}), to find $\alpha'$ terms of the $N$-point amplitude for $N=5,6,7$, by just using the well known open superstring $4$-point $\alpha'$ expansion (see eqs.(\ref{formula1}) and (\ref{expansionGamma})).\\
\subsection{Analyticity of the momentum factors}
\label{Analyticity}
\noindent Before going into the details of the $\alpha'$ calculations in the $N$-point amplitudes (where $N=5,6,7$) an important remark is in order.\\
\noindent From the analysis that we did in section \ref{Closed} we have that the $N$-point subamplitude of gauge bosons in Open Superstring Theory is given by
\begin{eqnarray}
A_b(1, \ldots, N) & = & \sum_{ \sigma_N \ \varepsilon \ S_{N-3} } F^{\{\sigma_N\}}(\alpha') \ A_{YM}(1,\{ 2_{\sigma}, 3_{\sigma}, \ldots, (N-2)_{\sigma} \}, N-1, N) \ ,
\label{Ab}
\end{eqnarray}
where the $F^{\{\sigma_N\}}(\alpha')$'s are given by the integral expression (\ref{MSS2}) (or equivalently, if $N \geq 5$, by eq.(\ref{MSS3})).\\
\noindent In any of the two expressions for the momentum factors we see that their $\alpha'$ dependence is completely enclosed in the dimensionless Mandelstam variables, $\alpha' s_i$ and $\alpha' t_j$. So, on the one hand, if these variables are analytically continued in the complex plane, then the $F^{\{\sigma_N\}}(\alpha')$'s will have a well defined Laurent expansion with respect to $\alpha' s_i=\alpha' t_j=0$.\\
\noindent On the other hand, it is well known that the low energy limit of the gauge boson (tree level) subamplitudes in Open Superstring Theory gives the corresponding Yang-Mills subamplitudes:
\begin{eqnarray}
\lim_{\alpha' \rightarrow 0} A_b(1, \ldots, N) & = & A_{YM}(1, \ldots, N) \ .
\label{low}
\end{eqnarray}
In eq.(\ref{Ab}) this last condition implies that
\begin{eqnarray}
\lim_{\alpha' \rightarrow 0} F^{\{\sigma_N\}}(\alpha') & = & \left\{ \begin{array}{ccc}
1 & \mbox{if} & \sigma_N = \{2, 3, \ldots, N-2 \} \\
0 & \mbox{if} & \sigma_N \neq \{2, 3, \ldots, N-2 \}
\end{array} \right.
\label{Fs}
\end{eqnarray}
where $\{2, 3, \ldots, N-2 \}$ is the identity permutation of those indices.\\
\noindent The result in (\ref{Fs}), together with the behaviour of the $F^{\{\sigma_N\}}(\alpha')$'s under an analytical continuation of the dimensionless Mandelstam variables, mentioned before, implies that the momentum factors are analytic functions at the origin of the complex plane ($\alpha' s_i=\alpha' t_j=0$). This implies that the $F^{\{\sigma_N\}}(\alpha')$'s have a well defined $\alpha'$ expansion as a (Maclaurin) power series. \\
\noindent This is a quite non trivial result since it is well known, by explicit computations, that the $\alpha'$ expansions of many of the individual multiple integrals which arise in the expanded version of $A_b(1, \ldots, N)$ (see eq.(\ref{ANfermionic})) do indeed contain some negative powers of $\alpha'$ \footnote{See, for example, Appendix {\it A.3} of \cite{Brandt1} for the case of $N=5$, and refs.\cite{Oprisa1, Hemily1} for the case of $N=6$.}. In fact, the explicit expression for the $F^{\{\sigma_N\}}(\alpha')$'s, either in (\ref{MSS2}) or in (\ref{MSS3}), is given by an $(N-3)$-fold multiple integral multiplied by ${\alpha'}^{N-3}$, where $(N-3)$ is a positive integer. So, at least the first of those multiple integrals (the one corresponding to $\sigma_N = \{2, 3, \ldots, N-2\}$) does have an $\alpha'$ expansion which begins as ${\alpha'}^{-(N-3)}$. \\
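For $N=4$ this cancellation can be made explicit. Assuming the Beta form of the $4$-point integral, $\int_0^1 x^{\alpha's-1}(1-x)^{\alpha't}\,dx = B(\alpha's,\alpha't+1)$ diverges as $(\alpha's)^{-1}$ when $\alpha'\rightarrow 0$, but the prefactor $\alpha's$ turns the product into $\Gamma(1+\alpha's)\Gamma(1+\alpha't)/\Gamma(1+\alpha's+\alpha't)\rightarrow 1$. A numeric sketch (values illustrative):

```python
from math import gamma

def beta(a, b):
    return gamma(a) * gamma(b) / gamma(a + b)

s, t = 0.31, 0.57                 # illustrative Mandelstam values
for ap in (1e-2, 1e-4, 1e-6):     # alpha' -> 0
    a, b = ap * s, ap * t
    integral = beta(a, b + 1)     # the bare integral blows up like 1/(alpha' s) ...
    F = ap * s * integral         # ... but the alpha'^(N-3) prefactor keeps F finite
    assert integral > 0.5 / (ap * s)
    assert abs(F - 1.0) < 1e-3    # F -> 1, the Yang-Mills limit of eq. (Fs)
```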
\noindent The analytic behaviour of the $F^{\{\sigma_N\}}(\alpha')$'s, when $\alpha' \rightarrow 0$, will be an important fact which is behind the $\alpha'$ expansions which we will obtain in the next three subsections.\\
\subsection{Case of the 5-point momentum factors}
\label{Case of the 5-point}
\noindent In the $N=5$ case, we will give the details of how using tree level unitarity of the subamplitudes we can arrive at the ${\alpha'}^2$ order terms of $F^{\{23\}}(\alpha')$ and $F^{\{32\}}(\alpha')$. The higher order terms of these two momentum factors will be afterwards written as a consequence of repeating the procedure for the corresponding $\alpha'$ order.\\
\noindent It will turn out to be convenient to introduce here the notation
\begin{eqnarray}
\alpha_{ij} & = & 2 \ k_i \cdot k_j \ ,
\label{alphaij}
\end{eqnarray}
where $k_i$ and $k_j$ are, respectively, the $i$-th and the $j$-th external leg momentum (and $i < j$). This notation will also be useful in the case of $N=6$ (subsection \ref{Case of the 6-point}) and the case of $N=7$ (subsection \ref{Case of the 7-point}).\\
\noindent Now, in order to find the $\alpha'$ expansion of $F^{\{23\}}(\alpha')$ and $F^{\{32\}}(\alpha')$ within our approach, it will be convenient to write, once again, eq.(\ref{A12345-3}) in the present subsection:
\begin{eqnarray}
A_b(1,2,3,4,5) & = & F^{\{23\}}(\alpha') \ A_{YM}(1,2,3,4,5) + F^{\{32\}}(\alpha') \ A_{YM}(1,3,2,4,5) \ .
\label{A12345-7}
\end{eqnarray}
\noindent We begin by looking for the poles of all three subamplitudes involved in eq.(\ref{A12345-7}). In figure \ref{feynmandiag} we have drawn the two types of Feynman diagrams that contribute to the Yang-Mills $5$-point amplitude. Examining all possible diagrams of these types we can infer that $A_{YM}(1,2,3,4,5)$ and $A_{YM}(1,3,2,4,5)$ have first and second order poles in the $\alpha _{ij}$ variables (one and two propagators, respectively), which we list in table (\ref{poles5p}).\\
\begin{figure}[th]
\centerline{\includegraphics*[scale=0.1,angle=0]{diagramas.jpg}}
\caption{Two types of Feynman diagrams contribute to the YM 5-point amplitude. Permutations of the legs of these diagrams should also be considered in order to account for the full amplitude.}
\label{feynmandiag}
\end{figure}
\begin{equation}
\begin{tabular}{ccc}
\hline
& first order & second order \\ \hline
$A_{YM}(1,2,3,4,5)$ & $\alpha _{12}$, $\alpha _{23}$, & $\alpha _{12}\alpha
_{34}$, $\alpha _{23}\alpha _{45}$, \\
& $\alpha _{34}$, $\alpha _{45}$, $\alpha _{15}$ & $\alpha _{34}\alpha _{15}$%
, $\alpha _{45}\alpha _{12}$, $\alpha _{15}\alpha _{23}$ \\ \hline
$A_{YM}(1,3,2,4,5)$ & $\alpha _{13}$, ~$\alpha _{23}$, & $\alpha _{13}\alpha
_{24}$, $\alpha _{23}\alpha _{45}$, \\
& $\alpha _{24}$, $\alpha _{45}$, $\alpha _{15}$ & $\alpha _{24}\alpha _{15}$%
, $\alpha _{45}\alpha _{13}$, $\alpha _{15}\alpha _{23}$ \\ \hline
&& \\
\end{tabular}
\label{poles5p}
\end{equation}
\noindent On the other hand, in the left side of equation (\ref{A12345-7}), the
5-point superstring subamplitude, $A_b(1,2,3,4,5)$, presents only first order poles in
the same variables that $A_{YM}(1,2,3,4,5)$ does, namely,
$\{ \alpha _{12}$,~$\alpha _{23}$,~$\alpha _{34}$,~$\alpha _{45}, ~\alpha_{15}\}$ \footnote{Since $A_b(1,2,3,4,5)$ agrees with $A_{YM}(1,2,3,4,5)$ when $\alpha' \rightarrow 0$, and $A_{YM}(1,2,3,4,5)$ does have second order poles, the statement that $A_b(1,2,3,4,5)$ has only first order poles is only valid perturbatively, for any {\it non zero} order in $\alpha'$.}. We will refer to these poles as the {\it physical} poles.\\
\noindent For reasons that will become clear in the next lines, it will be convenient to write $F^{\{23\}}(\alpha')$ and $F^{\{32\}}(\alpha')$ in terms of the simple poles that $A_{YM}(1,2,3,4,5)$ and $A_{YM}(1,3,2,4,5)$ have, respectively:
\begin{eqnarray}
\label{defF23}
F^{\{23\}}(\alpha') & = & F^{\{23\}}[\ \alpha_{12}, \alpha_{23}, \alpha_{34}, \alpha_{45}, \alpha_{15}; \ \alpha' ] \ , \\
\label{defF32}
F^{\{32\}}(\alpha') & = & F^{\{32\}}[\ \alpha_{13}, \alpha_{23}, \alpha_{24}, \alpha_{45}, \alpha_{15}; \ \alpha' ] \ .
\end{eqnarray}
\noindent Now, since the momentum factors have a well defined $\alpha'$ power series (as seen in the previous subsection), we may write that\footnote{In eqs.(\ref{expF23-1}) and (\ref{expF32-1}) we are already using the fact that the $F^{\{ \sigma_5\}}(\alpha')$'s satisfy the low energy requirement in eq.(\ref{Fs}).}
\begin{eqnarray}
\label{expF23-1}
F^{\{23\}}(\alpha')& = & 1 + \Big( a_1 \ \alpha_{12} + a_2 \ \alpha_{23} + a_3 \ \alpha_{34} + a_4 \ \alpha_{45} + a_5 \ \alpha_{15} \Big) \alpha' + \Big( b_1 \ \alpha_{12} \alpha_{23} + b_2 \ \alpha_{12} \alpha_{34} + \nonumber \\
&& \ b_3 \ \alpha_{12} \alpha_{45} + b_4 \ \alpha_{12} \alpha_{15} + b_5 \ \alpha_{23} \alpha_{34} + b_6 \ \alpha_{23} \alpha_{45} + b_7 \ \alpha_{23} \alpha_{15} + b_8 \ \alpha_{34} \alpha_{45} + \nonumber \\
&& \ b_9 \ \alpha_{34} \alpha_{15} + b_{10} \ \alpha_{45} \alpha_{15}
+ b_{11} \ \alpha_{12}^2 + b_{12} \ \alpha_{23}^2 + b_{13} \ \alpha_{34}^2 + b_{14} \ \alpha_{45}^2 + b_{15} \ \alpha_{15}^2 \Big) {\alpha'}^2+ \nonumber \\
&& \ {\cal O}({\alpha'}^3) \ , \\
\label{expF32-1}
F^{\{32\}}(\alpha')& = & \Big( c_1 \ \alpha_{13} + c_2 \ \alpha_{23} + c_3 \ \alpha_{24} + c_4 \ \alpha_{45} + c_5 \ \alpha_{15} \Big) \alpha' + \Big( d_1 \ \alpha_{13} \alpha_{23} + d_2 \ \alpha_{13} \alpha_{24} + \nonumber \\
&& \ d_3 \ \alpha_{13} \alpha_{45} + d_4 \ \alpha_{13} \alpha_{15} + d_5 \ \alpha_{23} \alpha_{24} + d_6 \ \alpha_{23} \alpha_{45} + d_7 \ \alpha_{23} \alpha_{15} + d_8 \ \alpha_{24} \alpha_{45} + \nonumber \\
&& \ d_9 \ \alpha_{24} \alpha_{15} + d_{10} \ \alpha_{45} \alpha_{15} + d_{11} \ \alpha_{13}^2 + d_{12} \ \alpha_{23}^2 + d_{13} \ \alpha_{24}^2 + d_{14} \ \alpha_{45}^2 + d_{15} \ \alpha_{15}^2 \Big) {\alpha'}^2+ \nonumber \\
&& \ {\cal O}({\alpha'}^3) \ .
\end{eqnarray}
\noindent We will find the value of all the $a_i$'s, $b_j$'s, $c_k$'s and $d_l$'s that participate in these two expansions. We will do this by demanding three requirements:
\begin{enumerate}
\item Absence of {\it unphysical} poles.
\item Tree level unitarity.
\item Cyclic invariance.
\end{enumerate}
All these requirements will be demanded at every {\it non zero} order in $\alpha'$ of $A_b(1,2,3,4,5)$.\\
\noindent \underline{Step 1}: Absence of {\it unphysical} poles in $A_b(1,2,3,4,5)$\\
\noindent According to eq.(\ref{A12345-7}), the only unphysical poles that could arise in $A_b(1,2,3,4,5)$ are the ones that come from $A_{YM}(1,3,2,4,5)$, namely, $\alpha_{13}$ and $\alpha_{24}$. These always appear as {\it simple} poles in $A_{YM}(1,3,2,4,5)$, even though they also participate in the second order poles of this subamplitude (see table(\ref{poles5p})). \\
\noindent The only possible way to cancel, in the right hand-side of eq.(\ref{A12345-7}), the independent simple poles that $A_{YM}(1,3,2,4,5)$ has at $\alpha_{13}=0$ and $\alpha_{24}=0$, is to demand that $F^{\{32\}}(\alpha')$ can be written with a common $\alpha_{13} \alpha_{24}$ factor:
\begin{eqnarray}
F^{\{32\}}(\alpha') & = & \alpha_{13} \alpha_{24} \ {\alpha'}^2 \ f^{\{32\}}[\ \alpha_{13}, \alpha_{23}, \alpha_{24}, \alpha_{45}, \alpha_{15}; \ \alpha' ] \ ,
\label{factor1}
\end{eqnarray}
where $f^{\{32\}}[\ \alpha_{13}, \alpha_{23}, \alpha_{24}, \alpha_{45}, \alpha_{15}; \ \alpha' ]$ is analytic in all its five dimensionless $\alpha' \alpha_{ij}$ variables.\\
\noindent Comparing eqs.(\ref{factor1}) and (\ref{expF32-1}) we see that the only possibility is that
\begin{eqnarray}
c_i = 0 \ \ (i=1, \ldots, 5) \ \ \ \mbox{and} \ \ \ d_j = 0 \ \ (j=1, \ldots, 15; \ j \neq 2) \ ,
\label{nullcoeffs}
\end{eqnarray}
so (\ref{expF32-1}) automatically reduces to
\begin{eqnarray}
F^{\{32\}}(\alpha')& = & d_2 \ \alpha_{13} \alpha_{24} \ {\alpha'}^2 + {\cal O}({\alpha'}^3) \ .
\label{simpleF32-1}
\end{eqnarray}
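\noindent The counting behind eqs.(\ref{nullcoeffs}) and (\ref{simpleF32-1}) can be verified mechanically: demanding that the generic quadratic ansatz (\ref{expF32-1}) vanishes identically both at $\alpha_{13}=0$ and at $\alpha_{24}=0$ (the condition for extracting a common $\alpha_{13}\alpha_{24}$ factor) kills every coefficient except the one multiplying $\alpha_{13}\alpha_{24}$. A minimal sympy sketch (variable and coefficient names are ours, not the paper's):

```python
import sympy as sp

a13, a23, a24, a45, a15 = sp.symbols('a13 a23 a24 a45 a15')
v = [a13, a23, a24, a45, a15]

# generic ansatz for F^{32} through O(alpha'^2): all monomials of degree 1 and 2
monos = list(v) + [v[i] * v[j] for i in range(5) for j in range(i, 5)]
coeffs = sp.symbols('c0:20')
F32 = sum(c * m for c, m in zip(coeffs, monos))

# an overall alpha_13*alpha_24 factor exists iff F^{32} vanishes identically
# both on alpha_13 = 0 and on alpha_24 = 0
eqs = []
for restricted in (F32.subs(a13, 0), F32.subs(a24, 0)):
    eqs += sp.Poly(restricted, *v).coeffs()
sol = sp.solve(eqs, coeffs, dict=True)[0]

survivors = [m for c, m in zip(coeffs, monos) if sol.get(c, c) != 0]
print(survivors)  # only the alpha_13*alpha_24 monomial of eq. (simpleF32-1) survives
```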
\vspace{0.5cm}
\noindent \underline{Step 2}: Unitarity relation for $A_b(1,2,3,4,5)$\\
\noindent In Appendix \ref{Unitarity-N5} we prove that demanding unitarity of $A_b(1,2,3,4,5)$ with respect to its $\alpha_{12}$ pole implies that
the $5$-point momentum factor, $F^{\{23\}}(\alpha')$, is related to the $4$-point momentum factor, $F^{\{2\}}(\alpha')$, by the following relation:
\begin{eqnarray}
F^{\{23\}}[ \ 0, \alpha_{23}, \alpha_{34}, \alpha_{45}, \alpha_{15} ; \alpha ^{\prime } ]=F^{\{2\}}[ \alpha_{45}, \alpha_{34}; \alpha ^{\prime }] \ ,
\label{factorization5p1}
\end{eqnarray}
where we have introduced the notation
\begin{eqnarray}
\label{not-F2a}
F^{\{2\}}(\alpha') & = & F^{\{2\}}[ \alpha_{12}, \alpha_{23}; \alpha ^{\prime }] \ , \\
\label{not-F23}
F^{\{23\}}(\alpha') & = & F^{\{23\}}[ \alpha_{12}, \alpha_{23}, \alpha_{34}, \alpha_{45}, \alpha_{15} ; \alpha ^{\prime } ] \ .
\end{eqnarray}
\noindent Notice that the two Mandelstam variables in which $F^{\{2\}}$ is being evaluated in (\ref{factorization5p1}) are not the same two that are implicit in the notation (\ref{not-F2a})\footnote{The notation introduced in eq.(\ref{not-F2a}) is not accidental: $\alpha_{12}$ and $\alpha_{23}$ are the two Mandelstam variables in which the $4$-point amplitudes in eq.(\ref{A1234-3}) have poles.}.\\
\noindent Relation (\ref{factorization5p1}) is an extremely strong constraint since, in the left hand-side, $F^{\{23\}}$ is being evaluated at arguments which do not appear in the right hand-side of this relation (namely, $\alpha_{23}$ and $\alpha_{15}$). Here, we will just look for the implications that relation (\ref{factorization5p1}) has for the determination of the coefficients in the $\alpha'$ expansion of $F^{\{23\}}(\alpha')$, only up to ${\alpha'}^2$ order. In fact, after substituting (\ref{expF23-1}) and (\ref{expansionGamma}) in (\ref{factorization5p1}), we can conclude that
\begin{eqnarray}
a_i = 0 \ \ (i=2,3,4,5) \ , \ \ \ b_5 = b_6 = b_7 = 0 \ , \ \ \ b_8 = -\frac{\pi^2}{6} = -\zeta(2) \ , \ \ \
b_j = 0 \ \ (j=9, \ldots, 15) \hspace{0.7cm}
\label{abs}
\end{eqnarray}
and, therefore, the $\alpha'$ expansion of $F^{\{23\}}(\alpha')$ begins as
\begin{eqnarray}
\label{expF23-2}
F^{\{23\}}(\alpha')& = & 1 + \big( a_1 \ \alpha_{12} \big) \alpha' + \nonumber \\
&& \big( b_1 \ \alpha_{12} \alpha_{23} + b_2 \ \alpha_{12} \alpha_{34} + b_3 \ \alpha_{12} \alpha_{45} + b_4 \ \alpha_{12} \alpha_{15} -\zeta(2) \ \alpha_{34} \alpha_{45} \big) {\alpha'}^2+ \ {\cal O}({\alpha'}^3) \ . \hspace{1cm}
\end{eqnarray}
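\noindent The value $b_8=-\zeta(2)$ in (\ref{abs}) can be traced to the right hand-side of (\ref{factorization5p1}): assuming the standard Gamma-function (Veneziano) form for the $4$-point factor, $F^{\{2\}}[s,t]=\Gamma(1+\alpha's)\,\Gamma(1+\alpha't)/\Gamma(1+\alpha'(s+t))$, its expansion begins as $1-\zeta(2)\,{\alpha'}^2 s t$, and with $\alpha_{12}=0$ the only quadratic monomial in (\ref{expF23-1}) that can match it is $b_8\,\alpha_{34}\alpha_{45}$. A short sympy sketch of this expansion (the Gamma-ratio form and the symbol names are our assumptions here):

```python
import sympy as sp

ap, s, t = sp.symbols('ap s t')  # ap plays the role of alpha'

# assumed Gamma-function (Veneziano) form of the 4-point factor F^{2}[s, t]
logF2 = (sp.loggamma(1 + ap*s) + sp.loggamma(1 + ap*t)
         - sp.loggamma(1 + ap*(s + t)))
# exp(logF2) = 1 + logF2 + O(ap^4), since logF2 starts at O(ap^2)
F2 = sp.expand(1 + sp.series(logF2, ap, 0, 3).removeO())

print(F2)  # 1 - pi**2*ap**2*s*t/6, i.e. F^{2} = 1 - zeta(2) alpha'^2 s t + ...
assert sp.expand(F2 - (1 - sp.zeta(2) * ap**2 * s * t)) == 0
```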
\vspace{0.5cm}
\noindent \underline{Step 3}: Cyclic invariance of $A_b(1,2,3,4,5)$\\
\noindent In Appendix \ref{Cyclicity-N5} we prove that demanding cyclic invariance of $A_b(1,2,3,4,5)$ in (\ref{A12345-7}) implies that
\begin{eqnarray}
\label{F23cycl}
F^{\{23\}}(\alpha ^{\prime }) &=&F_{cycl}^{\{23\}}(\alpha ^{\prime })+\frac{%
\alpha _{13}+\alpha _{23}}{\alpha _{35}}F_{cycl}^{\{32\}}(\alpha ^{\prime })
\ , \\
\label{F32cycl}
F^{\{32\}}(\alpha ^{\prime }) &=&\frac{\alpha _{13}}{\alpha _{35}}%
F_{cycl}^{\{32\}}(\alpha ^{\prime }) \ ,
\end{eqnarray}%
\noindent where $F_{cycl}^{\{23\}}(\alpha')$ and $F_{cycl}^{\{32\}}(\alpha')$ denote
doing $\{k_1 \rightarrow k_2$, $k_2 \rightarrow k_3$, $\ldots$, $k_5 \rightarrow k_1\}$ in
$F^{\{23\}}(\alpha')$ and $F^{\{32\}}(\alpha')$, respectively.\\
\noindent The $N=5$ BCJ relations (which we have completely found in Appendix \ref{N5-2}) have played an important role in the intermediate steps to arrive at eqs.(\ref{F23cycl}) and (\ref{F32cycl}) (see Appendix \ref{Cyclicity-N5} for more details).\\
\noindent In light of the result in (\ref{expF23-2}), after doing the calculations, (\ref{F23cycl}) implies that\footnote{At ${\alpha'}^2$ order, relation (\ref{F32cycl}) does not give any new information: it is simply a consistency condition.}
\begin{eqnarray}
a_1 = 0 \ , \ \ \ b_1 = b_3 = 0 \ , \ \ \ b_2 = - b_4 = d_2 = \zeta(2) \ ,
\end{eqnarray}
so (\ref{expF23-2}) and (\ref{simpleF32-1}) finally become, respectively,\\
\begin{eqnarray}
F^{\{23\}}(\alpha')& = & 1 + \big( \alpha_{12} \alpha_{34} - \ \alpha_{12} \alpha_{15} - \ \alpha_{34} \alpha_{45} \big) \ \zeta(2) \ {\alpha'}^2+ \ {\cal O}({\alpha'}^3) \ , \\
F^{\{32\}}(\alpha')& = & \zeta(2) \ \alpha_{13} \alpha_{24} \ {\alpha'}^2 + {\cal O}({\alpha'}^3) \ .
\end{eqnarray}
We have successfully executed steps $1$, $2$ and $3$, together with the corresponding $N=6$ calculations (see subsection \ref{Case of the 6-point}), up to ${\alpha'}^6$ order, finding {\it all} the coefficients \footnote{We have also done the ${\alpha'}^7$ and higher order calculations, but some undetermined coefficients arise.\\
This is not in conflict with the revisited S-matrix method, since this method only guarantees that, at a given ${\alpha'}^p$ order calculation, the $(\zeta \cdot k)^{p+2}$ terms should be absent in the $(p+2)$-point amplitude. For $p=7$ this would require a $9$-point calculation, which we have not done in this work.}. The explicit result up to ${\alpha'}^4$ terms is the following:
\begin{eqnarray}
F^{\{23\}}(\alpha ^{\prime }) &=&1+{\alpha ^{\prime }}^{2}\zeta (2)\left(
\alpha _{12}\alpha _{34}-\alpha _{34}\alpha _{45}-\alpha _{12}\alpha
_{15}\right) + \notag \\
&&{\alpha ^{\prime }}^{3}\text{$\zeta (3)$}\left( -\alpha _{12}^{2}\alpha
_{34}-2\alpha _{12}\alpha _{23}\alpha _{34}-\alpha _{12}\alpha
_{34}^{2}+\alpha _{34}^{2}\alpha _{45}+\alpha _{34}\alpha _{45}^{2}+\alpha
_{12}^{2}\alpha _{15}+\alpha _{12}\alpha _{15}^{2}\right) + \notag \\
&&\frac{{\alpha ^{\prime }}^{4}}{10}\zeta ^{2}(2)\left( 4\alpha
_{12}^{3}\alpha _{34}+5\alpha _{12}^{2}\alpha _{23}\alpha _{34}+5\alpha
_{12}\alpha _{23}^{2}\alpha _{34}+4\alpha _{12}^{2}\alpha _{34}^{2}+5\alpha
_{12}\alpha _{23}\alpha _{34}^{2}+4\alpha _{12}\alpha _{34}^{3} + \right.
\notag \\
&&7\alpha _{12}\alpha _{23}\alpha _{34}\alpha _{45}-3\alpha _{12}\alpha
_{34}^{2}\alpha _{45}-4\alpha _{34}^{3}\alpha _{45}-\alpha _{34}^{2}\alpha
_{45}^{2}-4\alpha _{34}\alpha _{45}^{3}-4\alpha _{12}^{3}\alpha _{15} - \notag
\\
&&\left. 3\alpha _{12}^{2}\alpha _{34}\alpha _{15}+7\alpha _{12}\alpha
_{23}\alpha _{34}\alpha _{15}+3\alpha _{12}\alpha _{34}\alpha _{45}\alpha
_{15}-\alpha _{12}^{2}\alpha _{15}^{2}-4\alpha _{12}\alpha _{15}^{3}\right)+\mathcal{O}({\alpha'}^{5}) \ ,
\end{eqnarray}
\begin{eqnarray}
F^{\{32\}}(\alpha ^{\prime }) &=&{\alpha ^{\prime }}^{2}\zeta (2)\alpha
_{13}\alpha _{24} +{\alpha ^{\prime }}^{3}\text{$\zeta (3)$}\left( \alpha _{13}^{2}\alpha
_{24}+\alpha _{13}\alpha _{23}\alpha _{24}+\alpha _{13}\alpha
_{24}^{2}-2\alpha _{13}\alpha _{24}\alpha _{45}-2\alpha _{13}\alpha
_{24}\alpha _{15}\right) + \notag \\
&&\frac{{\alpha ^{\prime }}^{4}}{10}\zeta ^{2}(2)\left( 4\alpha
_{13}^{3}\alpha _{24}+11\alpha _{13}^{2}\alpha _{23}\alpha _{24}+14\alpha
_{13}\alpha _{23}^{2}\alpha _{24}+4\alpha _{13}^{2}\alpha _{24}^{2}+11\alpha
_{13}\alpha _{23}\alpha _{24}^{2} + \right. \notag \\
&&~~~4\alpha _{13}\alpha _{24}^{3}-12\alpha _{13}^{2}\alpha _{24}\alpha
_{45}-12\alpha _{13}\alpha _{23}\alpha _{24}\alpha _{45}-5\alpha _{13}\alpha
_{24}^{2}\alpha _{45}+12\alpha _{13}\alpha _{24}\alpha _{45}^{2} - \notag \\
&&\left. 5\alpha _{13}^{2}\alpha _{24}\alpha _{15}-12\alpha _{13}\alpha
_{23}\alpha _{24}\alpha _{15}-12\alpha _{13}\alpha _{24}^{2}\alpha
_{15}+7\alpha _{13}\alpha _{24}\alpha _{45}\alpha _{15}+12\alpha _{13}\alpha
_{24}\alpha _{15}^{2}\right) \notag \\
&&+\mathcal{O}({\alpha'}^{5}) \ .
\end{eqnarray}
\noindent The corresponding full expressions, up to ${\alpha'}^6$ terms, are given in the text files that we have submitted, attached to this work, to the hep-th arXiv preprint server.\\
\noindent We have confirmed that our results are in perfect agreement with the ones found previously in \cite{Barreiro1, Boels1, Broedel3}.
\vspace{0.5cm}
\subsection{Case of the 6-point momentum factors}
\label{Case of the 6-point}
\noindent Besides the notation introduced in eq.(\ref{alphaij}), here it
will be convenient to introduce the notation
\begin{eqnarray}
{\beta}_{ijk} & = & \alpha_{ij} + \alpha_{ik} + \alpha_{jk} \ ,
\label{tijk}
\end{eqnarray}
where $i < j < k$.\newline
\noindent The notation in eqs.(\ref{alphaij}) and (\ref{tijk}) will also be
valid for the $N=7$ case (subsection \ref{Case of the 7-point}).\\
\noindent So, we will repeat the procedure that we did in subsection \ref{Case of the 5-point} for $N=5$. For a general $N$ the procedure consists of the following four stages: \\
\\
\noindent {\bf i)} \ \ Identify the poles of each of the $(N-3)!$ Yang-Mills subamplitudes, \\
\hphantom{ \bf i) } $A_{YM}(1,\{ 2_{\sigma}, 3_{\sigma}, \ldots, (N-2)_{\sigma} \}, N-1, N)$, where $\sigma_{N} \in S_{N-3}$.\\
\noindent {\bf ii)} \ Define each $F^{\{ \sigma_N\}}(\alpha')$ momentum factor as a function of the $N(N-3)/2$ Mandelstam variables which were found in stage {\bf i)}.\\
\noindent {\bf iii)} Write the $\alpha'$ expansion of each $F^{\{ \sigma_N\}}(\alpha')$ up to the desired $\alpha'$ order, in terms of unknown coefficients. \\
\noindent {\bf iv)} Determine the coefficients of the previous $\alpha'$ expansions by following the three steps specified in subsection \ref{Case of the 5-point}, namely, demand absence of {\it unphysical} poles, tree level unitarity and cyclic invariance. \\
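\noindent Stage {\bf ii)} uses the standard count of $N(N-3)/2$ independent Mandelstam invariants for $N$ massless legs. A one-line check of the values relevant to this work ($2$, $5$, $9$ and $14$ variables for the $4$-, $5$-, $6$- and $7$-point momentum factors; the helper name is ours):

```python
def n_mandelstam(N):
    # number of independent Mandelstam invariants for N massless external legs
    return N * (N - 3) // 2

# F^{2} depends on 2 invariants, F^{sigma_5} on 5, F^{sigma_6} on 9, F^{sigma_7} on 14
print([n_mandelstam(N) for N in (4, 5, 6, 7)])  # [2, 5, 9, 14]
```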
\noindent Let us do this procedure for the case of $N=6$.\\
\noindent \underline{Stage {\bf i)}:}\\
\noindent In this case there are first, second and third order poles, as we can see in figure \ref{feynmandiag2}.
\begin{figure}[th]
\centerline{\includegraphics*[scale=0.1,angle=0]{diagramas2.jpg}}
\caption{Three types of Feynman diagrams contribute to the YM 6-point amplitude. Permutations of the legs of these diagrams should also be considered in order to account for the full amplitude.}
\label{feynmandiag2}
\end{figure}
\noindent From the Feynman diagrams in figure \ref{feynmandiag2}, and considering
cyclic permutations of the external momenta, we obtain the poles of $A_{YM}(1,2,3,4,5,6)$, shown in table (\ref{poles6p}),
\begin{equation}
\begin{tabular}{cccc}
\hline
& first & second & third \\
& order & order & order \\ \hline
& & $\alpha _{12}\alpha _{45}$, $\alpha _{23}\alpha _{56}$, $\alpha
_{34}\alpha _{16}$, & $\alpha _{12}{\beta}_{123}\alpha _{45}$, $\alpha _{23}{%
\beta}_{234}\alpha _{56}$, \\
& ${\beta}_{123}$ & $\alpha _{12}{\beta}_{345}$, $\alpha _{23}{\beta}_{123}$%
, $\alpha _{34}{\beta}_{234}$, & $\alpha _{12}{\beta}_{345}\alpha _{45}$, $%
\alpha _{23}{\beta}_{123}\alpha _{56}$, \\
$A_{YM}(1,2,3,4,5,6)$ & ${\beta}_{234}$ & $\alpha _{45}{\beta}_{345}$, $%
\alpha _{56}{\beta}_{123}$, $\alpha _{16}{\beta}_{234}$, & $\alpha _{12}{%
\beta}_{345}\alpha _{34}$, $\alpha _{23}{\beta}_{123}\alpha _{45}$, \\
& ${\beta}_{345}$ & $\alpha _{12}{\beta}_{123}$, $\alpha _{23}{\beta}_{234}$%
, $\alpha _{34}{\beta}_{345}$, & $\alpha _{12}\alpha _{34}\alpha _{56}$, $%
\alpha _{23}\alpha _{45}\alpha _{16}$ \\
& & $\alpha _{45}{\beta}_{123}$, $\alpha _{56}{\beta}_{234}$, $\alpha _{16}{%
\beta}_{345}$ & $\alpha _{34}{\beta}_{345}\alpha _{16}$, $\alpha _{34}{\beta}%
_{234}\alpha _{16}$, \\
& & & $\alpha _{34}{\beta}_{234}\alpha _{56}$ \\ \hline
\end{tabular}
\label{poles6p}
\end{equation}%
\noindent where we are using the notation of eqs.(\ref{alphaij}) and (\ref{tijk}).\\
\noindent The poles for the remaining five Yang-Mills subamplitudes can be obtained by
permutations of the indices $\{2,3,4\}$. \\
\noindent \underline{Stage {\bf ii)}:}\\
\noindent So, we will work with the six momentum factors as functions of nine independent Mandelstam variables, according to
\begin{eqnarray}
\label{F234-1}
F^{\{234\}}(\alpha') = F^{\{234\}}[ \alpha_{12}, \alpha_{23}, \alpha_{34}, \alpha_{45}, \alpha_{56}, \alpha_{16}; \beta_{123}, \beta_{234}, \beta_{345} ; \alpha' \ ] \ , \\
\label{F324-1}
F^{\{324\}}(\alpha') = F^{\{324\}}[ \alpha_{13}, \alpha_{23}, \alpha_{24}, \alpha_{45}, \alpha_{56}, \alpha_{16}; \beta_{123}, \beta_{234}, \beta_{245} ; \alpha' \ ] \ , \\
\label{F243-1}
F^{\{243\}}(\alpha') = F^{\{243\}}[ \alpha_{12}, \alpha_{24}, \alpha_{34}, \alpha_{35}, \alpha_{56}, \alpha_{16}; \beta_{124}, \beta_{234}, \beta_{345} ; \alpha' \ ] \ , \\
\label{F342-1}
F^{\{342\}}(\alpha') = F^{\{342\}}[ \alpha_{13}, \alpha_{34}, \alpha_{23}, \alpha_{25}, \alpha_{56}, \alpha_{16}; \beta_{134}, \beta_{234}, \beta_{245} ; \alpha' \ ] \ , \\
\label{F423-1}
F^{\{423\}}(\alpha') = F^{\{423\}}[ \alpha_{14}, \alpha_{24}, \alpha_{23}, \alpha_{35}, \alpha_{56}, \alpha_{16}; \beta_{124}, \beta_{234}, \beta_{235} ; \alpha' \ ] \ , \\
\label{F432-1}
F^{\{432\}}(\alpha') = F^{\{432\}}[ \alpha_{14}, \alpha_{34}, \alpha_{23}, \alpha_{25}, \alpha_{56}, \alpha_{16}; \beta_{134}, \beta_{234}, \beta_{235} ; \alpha' \ ] \ .
\end{eqnarray}
\noindent These are the $N=6$ generalization of formulas (\ref{defF23}) and (\ref{defF32}) (seen for the case of $N=5$).\\
\noindent \underline{Stage {\bf iii)}:}\\
\noindent The general form of the $\alpha'$ power series for the momentum factors is the following:
\begin{equation}
\label{expF234}
F^{\{234\}}(\alpha ^{\prime })=1+\sum_{N=1}^{\infty }{\alpha ^{\prime }}^{N}
\mathop{{\sum}^{\prime }}_{n_{1},\ldots ,n_{9}=0}^{N}
A_{n_{1},n_{2},\ldots ,n_{9}}^{(N)}\ \alpha _{12}^{n_{1}} \ \alpha _{23}^{n_{2}} \ \alpha _{34}^{n_{3}} \ \alpha _{45}^{n_{4}} \ \alpha _{56}^{n_{5}} \ \alpha _{16}^{n_{6}} \ {\beta}_{123}^{n_{7}} \ {\beta}_{234}^{n_{8}} \ {\beta}_{345}^{n_{9}} \ ,
\end{equation}%
\begin{equation}
\label{expF324}
F^{\{324\}}(\alpha ^{\prime })=\sum_{N=1}^{\infty }{\alpha ^{\prime }}^{N}
\mathop{{\sum}^{\prime }}_{n_{1},\ldots ,n_{9}=0}^{N}
B_{n_{1},n_{2},\ldots ,n_{9}}^{(N)}\ \alpha _{13}^{n_{1}} \ \alpha _{23}^{n_{2}} \ \alpha _{24}^{n_{3}} \ \alpha _{45}^{n_{4}} \ \alpha _{56}^{n_{5}} \ \alpha _{16}^{n_{6}} \ {\beta}_{123}^{n_{7}} \ {\beta}_{234}^{n_{8}} \ {\beta}_{245}^{n_{9}} \ ,
\end{equation}%
\begin{equation}
\label{expF243}
F^{\{243\}}(\alpha ^{\prime })=\sum_{N=1}^{\infty }{\alpha ^{\prime }}^{N}
\mathop{{\sum}^{\prime }}_{n_{1},\ldots ,n_{9}=0}^{N}
C_{n_{1},n_{2},\ldots ,n_{9}}^{(N)}\ \alpha _{12}^{n_{1}} \ \alpha _{24}^{n_{2}} \ \alpha _{34}^{n_{3}} \ \alpha _{35}^{n_{4}} \ \alpha _{56}^{n_{5}} \ \alpha _{16}^{n_{6}} \ {\beta}_{124}^{n_{7}} \ {\beta}_{234}^{n_{8}} \ {\beta}_{345}^{n_{9}} \ ,
\end{equation}%
\begin{equation}
\label{expF342}
F^{\{342\}}(\alpha ^{\prime })=\sum_{N=1}^{\infty }{\alpha ^{\prime }}^{N}
\mathop{{\sum}^{\prime }}_{n_{1},\ldots ,n_{9}=0}^{N}
D_{n_{1},n_{2},\ldots ,n_{9}}^{(N)}\ \alpha _{13}^{n_{1}} \ \alpha _{34}^{n_{2}} \ \alpha _{23}^{n_{3}} \ \alpha _{25}^{n_{4}} \ \alpha _{56}^{n_{5}} \ \alpha _{16}^{n_{6}} \ {\beta}_{134}^{n_{7}} \ {\beta}_{234}^{n_{8}} \ {\beta}_{245}^{n_{9}} \ ,
\end{equation}%
\begin{equation}
\label{expF423}
F^{\{423\}}(\alpha ^{\prime })=\sum_{N=1}^{\infty }{\alpha ^{\prime }}^{N}
\mathop{{\sum}^{\prime }}_{n_{1},\ldots ,n_{9}=0}^{N}
E_{n_{1},n_{2},\ldots ,n_{9}}^{(N)}\ \alpha _{14}^{n_{1}} \ \alpha _{24}^{n_{2}} \ \alpha _{23}^{n_{3}} \ \alpha _{35}^{n_{4}} \ \alpha _{56}^{n_{5}} \ \alpha _{16}^{n_{6}} \ {\beta}_{124}^{n_{7}} \ {\beta}_{234}^{n_{8}} \ {\beta}_{235}^{n_{9}} \ ,
\end{equation}%
\begin{equation}
\label{expF432}
F^{\{432\}}(\alpha ^{\prime })=\sum_{N=1}^{\infty }{\alpha ^{\prime }}^{N}
\mathop{{\sum}^{\prime }}_{n_{1},\ldots ,n_{9}=0}^{N}
F_{n_{1},n_{2},\ldots ,n_{9}}^{(N)}\ \alpha _{14}^{n_{1}} \ \alpha _{34}^{n_{2}} \ \alpha _{23}^{n_{3}} \ \alpha _{25}^{n_{4}} \ \alpha _{56}^{n_{5}} \ \alpha _{16}^{n_{6}} \ {\beta}_{134}^{n_{7}} \ {\beta}_{234}^{n_{8}} \ {\beta}_{235}^{n_{9}} \ .
\end{equation}%
\noindent In all expansions, from (\ref{expF234}) to (\ref{expF432}), the prime on the internal sum means that the set of indices
$\{ n_1$, $n_2$, $n_3$, $n_4$, $n_5$, $n_6$, $n_7$, $n_8$, $n_9 \}$ obeys the constraint
\begin{eqnarray}
n_{1}+n_{2}+n_{3}+n_{4}+n_{5}+n_{6}+n_{7}+n_{8}+n_9=N \ .
\label{constraint-N}
\end{eqnarray}
\noindent In these expansions we have already used the condition of low energy behaviour, eq.(\ref{Fs}), for the momentum factors: that is why the $\alpha'$ series of $F^{\{234\}}(\alpha')$ begins with `$1$' and the five others begin with ${\cal O}({\alpha'}^1)$ order contributions\footnote{Once the unitarity and the cyclicity relations have been demanded, it will naturally happen that the ${\alpha'}^1$ order coefficients of all expansions are zero, but as a matter of principle here we are just proposing their general $\alpha'$ expansions in such a way that they respect the low energy requirement in (\ref{Fs}) and the analyticity of the momentum factors.}.
\vspace{0.5cm}
\noindent \underline{Stage {\bf iv)}:}\\
\noindent Here we apply the three steps that will allow us to obtain the coefficients of the $\alpha'$ expansions in eqs.(\ref{expF234})-(\ref{expF432}). We will do our calculations up to ${\alpha'}^6$ order terms.\\
\noindent \underline{Step 1}: Absence of {\it unphysical} poles in $A_b(1,2,3,4,5,6)$\\
\noindent Each of the $\alpha'$ expansions initially has 9 coefficients at ${\alpha'}^1$ order,
45 coefficients at ${\alpha'}^2$ order, 165 coefficients at ${\alpha'}^3$ order, 495 coefficients at ${\alpha'}^4$ order, 1287 coefficients at ${\alpha'}^5$ order and 3003 coefficients at ${\alpha'}^6$ order\footnote{In this case, the general formula for the number of coefficients at ${\alpha'}^N$ order is $(N+8)!/(N! \ 8!)$.}. \\
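\noindent These counts follow from the constraint (\ref{constraint-N}): the monomials at ${\alpha'}^N$ order are the weak compositions of $N$ into $9$ parts, of which there are $(N+8)!/(N!\, 8!)$. A short check (brute-force enumeration kept to small $N$ for speed; the function name is ours):

```python
from itertools import product
from math import comb

def n_coeffs(N):
    # closed-form count of monomials obeying n1 + ... + n9 = N
    return comb(N + 8, 8)  # = (N+8)!/(N! 8!)

# brute-force enumeration of the index tuples agrees for small N
for N in (1, 2, 3):
    tuples = [t for t in product(range(N + 1), repeat=9) if sum(t) == N]
    assert len(tuples) == n_coeffs(N)

print([n_coeffs(N) for N in range(1, 7)])  # [9, 45, 165, 495, 1287, 3003]
```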
\noindent Demanding absence of unphysical poles in $A_b(1,2,3,4,5,6)$ implies that there should be cancellations between the unphysical simple poles which come in the five Yang-Mills subamplitudes which are different from $A_{YM}(1,2,3,4,5,6)$ (which has only physical poles). After this procedure has been done, the number of independent coefficients is considerably reduced, according to table (\ref{tableaux}).\\
\noindent There is also the subtlety that $F^{\{234\}}(\alpha')$ and the remaining five momentum factors should be such that they cancel the third order poles that come in $A_{YM}(1,2,3,4,5,6)$. These poles are not present in $A_b(1,2,3,4,5,6)$ (at non zero $\alpha'$ order). \\
\noindent In the case of the ${\alpha'}^1$ order terms, the requirement of absence of unphysical poles is strong enough to forbid them: the coefficients of those terms are all zero, and that is why the first line in table (\ref{tableaux}) contains only zeroes. \\
\noindent We will not write down here the partial result that we have obtained for the expansions in eqs.(\ref{expF234})-(\ref{expF432}), as we did at the end of Step 1 in subsection \ref{Case of the 5-point}. Instead, for each of the six $F^{\{ \sigma_6\}}(\alpha')$'s we have presented two entries at each order in $\alpha'$, in table (\ref{tableaux}). The first entry is the number of undetermined coefficients after the present step (Step 1) and the second one is the number of undetermined coefficients after Step 2. We see there that, before demanding cyclic symmetry (in Step 3), a large number of coefficients is still undetermined. \\
\noindent \underline{Step 2}: Unitarity relations for $A_b(1,2,3,4,5,6)$\\
\noindent Since in the $N=6$ case there are two types of simple poles ($\alpha_{ij}$ and $\beta_{ijk}$, see table (\ref{poles6p})), demanding unitarity of $A_b(1,2,3,4,5,6)$ with respect to each of them will lead to independent unitarity relations. The remaining unitarity relations will be a consequence of the previous ones once cyclic symmetry has been taken into account (in Step 3).\\
\noindent In Appendix \ref{Unitarity-N6} we prove that demanding unitarity of $A_b(1,2,3,4,5,6)$ with respect to its $\alpha_{12}$ pole implies that the $6$-point momentum factors, $F^{\{234\}}(\alpha')$ and $F^{\{243\}}(\alpha')$, are related to the $5$-point momentum factors, $F^{\{23\}}(\alpha')$ and $F^{\{32\}}(\alpha')$, by means of the following two relations:
\begin{eqnarray}
\label{fact-F234}
F^{\{234\}}[ \ 0 ,\alpha _{23},\alpha _{34},\alpha_{45},\alpha _{56},\alpha _{16},{\beta }_{123},{\beta }_{234},{\beta }_{345};\alpha ^{\prime }]&=& F^{\{23\}}[ {\beta }_{123},\alpha _{34},\alpha _{45},\alpha _{56},{\beta }_{345};\alpha^{\prime }] \ , \hspace{1.0cm}\\
\label{fact-F243}
F^{\{243\}}[ \ 0 , \alpha_{24}, \alpha_{34}, \alpha_{35}, \alpha_{56}, \alpha_{16}; \beta_{124}, \beta_{234}, \beta_{345} ; \alpha' ]&=&F^{\{32\}}[ {\beta }_{124},\alpha _{34},\alpha _{35},\alpha _{56},{\beta }_{345};\alpha^{\prime }] \ .
\end{eqnarray}
\noindent In analogy to relation (\ref{factorization5p1}), found after demanding unitarity for the $5$-point amplitude, relations (\ref{fact-F234}) and (\ref{fact-F243}) are strong constraints for the coefficients of the $\alpha'$ expansions of $F^{\{234\}}$ and $F^{\{243\}}$ since there are four arguments (like $\alpha_{23}$, $\alpha_{16}$, $\beta_{123}$ and $\beta_{234}$, in the first case) which are not present in the right hand-side of the corresponding relation.\\
\noindent Also, demanding unitarity of $A_b(1,2,3,4,5,6)$ with respect to its $\beta_{123}$ pole implies that the $6$-point momentum factors,
$F^{\{234\}}(\alpha')$ and $F^{\{324\}}(\alpha')$, are related to the $4$-point momentum factor, $F^{\{2\}}(\alpha')$, by means of the relation:\\
\begin{eqnarray}
F^{\{234\}}[\alpha _{12},\alpha _{23},\alpha _{34},\alpha_{45},\alpha _{56},\alpha _{16}, \ 0 ,{\beta }_{234},{\beta }_{345};\alpha ^{\prime }] -\frac{\alpha _{12}}{\alpha _{12}+\alpha _{23}} \times \hspace{4.8cm} \nonumber \\
F^{\{324\}}[ - \alpha _{12} - \alpha_{23} ,\alpha _{23}, \beta_{234} - \alpha_{23} -\alpha_{34} ,\alpha _{45},\alpha
_{56},\alpha _{16}, \ 0 ,{\beta }_{234}, -\alpha_{23}+\alpha_{45} +\alpha_{16} - \beta_{345} ;\alpha
^{\prime }]= \notag \\
\hspace{6cm}=F^{\{2\}}[\alpha _{12},\alpha _{23};\alpha ^{\prime
}] \ F^{\{2\}}[\alpha _{56},\alpha _{45};\alpha ^{\prime }] \ . \nonumber \\
\label{fact-mixed}
\end{eqnarray}
\noindent In this case there are even stronger constraints\footnote{But, in contrast to (\ref{fact-F234}) and (\ref{fact-F243}), there is only one unitarity relation now.} because, besides the fact that there are four arguments which are not present in the right hand-side of (\ref{fact-mixed}), in the left hand-side a miraculous cancellation of the $(\alpha_{12}+\alpha_{23})$ denominator should happen, since in the right hand-side there is only a product of two $\alpha'$ power series (which involve no denominators at all).\\
\noindent The curious non zero arguments in which $F^{\{324\}}$ is being evaluated, in the left hand-side of (\ref{fact-mixed}), are simply the original ones (see eq.(\ref{F324-1})), but written in terms of the basis of Mandelstam variables used for $F^{\{234\}}(\alpha')$ (see eq.(\ref{F234-1})), with $\beta_{123}=0$.\\
\noindent The three relations that we have written in eqs.(\ref{fact-F234}), (\ref{fact-F243}) and (\ref{fact-mixed}) are conditions for only three of the six momentum factors ($F^{\{234\}}$, $F^{\{243\}}$ and $F^{\{324\}}$). For these momentum factors the number of their undetermined coefficients has been reduced as a consequence of Step 2. This is illustrated in table (\ref{tableaux}), in the second entry which we have presented for each $F^{\{ \sigma_6\}}(\alpha')$ at a given order in $\alpha'$\footnote{This entry corresponds to the number of coefficients which are still undetermined after Step 2, at a given $\alpha'$ order.}. \\
\noindent For the remaining momentum factors, the number of undetermined coefficients has not changed from Step 1 to Step 2. \\
\noindent After demanding cyclic invariance the coefficients of all six momentum factors will be related and it will be possible to find them all, at least up to ${\alpha'}^6$ order.\\
\begin{equation}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|}
\hline
Order & \multicolumn{2}{|c|}{$F^{\{234\}}(\alpha ^{\prime })$} &
\multicolumn{2}{|c|}{$F^{\{324\}}(\alpha ^{\prime })$} &
\multicolumn{2}{|c|}{$F^{\{243\}}(\alpha ^{\prime })$} &
\multicolumn{2}{|c|}{$F^{\{342\}}(\alpha ^{\prime })$} &
\multicolumn{2}{|c|}{$F^{\{423\}}(\alpha ^{\prime })$} &
\multicolumn{2}{|c|}{$F^{\{432\}}(\alpha ^{\prime })$} \\ \hline
${\alpha'}^1$ & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ \hline
${\alpha'}^2$ & 44 & 4 & 3 & 3 & 1 & 2 & 3 & 3 & 3 & 3 & 3 & 3 \\ \hline
${\alpha'}^3$ & 164 & 24 & 25 & 10 & 25 & 17 & 25 & 25 & 25 & 25 & 25 & 25 \\ \hline
${\alpha'}^4$ & 494 & 109 & 177 & 53 & 177 & 81 & 177 & 177 & 177 & 177 & 177 & 177 \\
\hline
${\alpha'}^5$ & 1286 & 371 & 405 & 201 & 405 & 287 & 405 & 405 & 405 & 405 & 405 & 405
\\ \hline
${\alpha'}^6$ & 3002 & 1039 & 1155 & 615 & 1155 & 842 & 1155 & 1155 & 1155 & 1155 & 1155
& 1155 \\ \hline
\end{tabular}
\label{tableaux}
\end{equation}
\vspace{0.5cm}
\noindent \underline{Step 3}: Cyclic invariance of $A_b(1,2,3,4,5,6)$\\
\noindent In Appendix \ref{Cyclicity-N6} we prove that demanding cyclic invariance for $A_b(1,2,3,4,5,6)$, in the closed formula that we have for it,
\begin{multline}
A_b(1,2,3,4,5,6) = \\
\begin{split}
& F^{\{234\}}(\alpha') A_{YM}(1,2,3,4,5,6) + F^{\{324\}}(\alpha') A_{YM}(1,3,2,4,5,6) + F^{\{243\}}(\alpha') A_{YM}(1,2,4,3,5,6) + \\
& F^{\{342\}}(\alpha') A_{YM}(1,3,4,2,5,6) + F^{\{423\}}(\alpha') A_{YM}(1,4,2,3,5,6) + F^{\{432\}}(\alpha') A_{YM}(1,4,3,2,5,6) \ ,
\end{split}
\label{6point-aux}
\end{multline}
implies the following six relations for the momentum factors:
\begin{eqnarray}
F^{\{234\}}(\alpha')&=&F_{cycl}^{\{234\}}(\alpha^\prime)+ \frac{\alpha _{56}-{\beta}_{123}}{{\beta}_{123}-\alpha_{45}-\alpha_{56}} F_{cycl}^{\{243\}}(\alpha^\prime)+
\frac{ \alpha _{12}-{\beta}_{123} }{ {\beta}_{123}+{\beta}_{345}-\alpha _{12}-\alpha _{45} } F_{cycl}^{\{342\}}(\alpha^\prime) \nonumber \\
&&+\frac{\text{$F_{cycl}^{\{432\}}(\alpha^\prime)$}}{\left( {\beta}_{123}+{\beta}_{345}-\alpha
_{12}-\alpha _{45}\right) \left( -{\beta}_{345}+\alpha _{12}+\alpha _{34}-\alpha
_{56}\right) } \times \nonumber \\
&&\lbrack \alpha _{12}({\beta}_{123}+{\beta}_{345}-\alpha _{12}-\alpha _{34}+\alpha
_{56}-\alpha _{45})+{\beta}_{123}(\alpha _{34}+\alpha _{45}-{\beta}_{345}-\alpha _{56})] + \nonumber \\
&&\frac{\text{$F_{cycl}^{\{423\}}(\alpha^\prime)$}}{\left( {\beta}_{345}-\alpha
_{12}-\alpha _{34}+\alpha _{56}\right) \left( -{\beta}_{123}+\alpha _{45}+\alpha
_{56}\right) }\alpha _{45}(\alpha _{12}-{\beta}_{123}) \ ,
\label{F234-expansion}
\end{eqnarray}
\begin{multline}
\begin{split}
F^{\{324\}}(\alpha')=&\frac{\text{$F_{cycl}^{\{342\}}(\alpha^\prime)$}}{{\beta}_{123}+{\beta}_{345}-\alpha _{12}-\alpha _{45}}%
(\alpha _{12}+\alpha _{23}-{\beta}_{123}) \\
&+\frac{\text{$F_{cycl}^{\{432\}}(\alpha^\prime)$}}{\left( {\beta}_{123}+{\beta}_{345}-\alpha
_{12}-\alpha _{45}\right) \left( -{\beta}_{345}+\alpha _{12}+\alpha _{34}-\alpha
_{56}\right) } \\
&\times \lbrack \alpha _{12}({\beta}_{123}+{\beta}_{345}-\alpha _{12}-\alpha _{23}-\alpha
_{34}+\alpha _{56}-\alpha _{45}) \\
&+{\beta}_{123}(\alpha _{34}+\alpha
_{45}-{\beta}_{345}-\alpha _{56})+\alpha _{23}({\beta}_{345}-\alpha _{34}+\alpha
_{56}-\alpha _{45})] \\
&+\frac{\text{$F_{cycl}^{\{423\}}(\alpha^\prime)$}}{\left( {\beta}_{345}-\alpha
_{12}-\alpha _{34}+\alpha _{56}\right) \left( -{\beta}_{123}+\alpha _{45}+\alpha
_{56}\right) }\alpha _{45}(\alpha _{12}+\alpha _{23}-{\beta}_{123}) \ ,
\end{split}
\end{multline}
\begin{multline}
\begin{split}
F^{\{243\}}(\alpha')=&\text{$F_{cycl}^{\{324\}}(\alpha^\prime)$}+\frac{\text{$F_{cycl}^{\{243\}}(\alpha^\prime)$}}{%
{\beta}_{123}-\alpha _{45}-\alpha _{56}}(\alpha _{56}-\alpha _{34}-{\beta}_{123}) \\
&+\frac{%
\text{$F_{cycl}^{\{342\}}(\alpha^\prime)$}}{{\beta}_{123}+{\beta}_{345}-\alpha _{12}-\alpha _{45}}%
(\alpha _{12}-\alpha _{34}-{\beta}_{123}) + \\
&\frac{\text{$F_{cycl}^{\{432\}}(\alpha^\prime)$}}{\left( {\beta}_{123}+{\beta}_{345}-\alpha
_{12}-\alpha _{45}\right) \left( {\beta}_{345}-\alpha _{12}-\alpha _{34}+\alpha
_{56}\right) } \times \\
&\lbrack \alpha _{34}(-{\beta}_{123}+{\beta}_{345}-\alpha _{34}-\alpha
_{45}+\alpha _{56})+({\beta}_{123}-\alpha _{56})({\beta}_{345}-\alpha _{45})] + \\
&\frac{\text{$F_{cycl}^{\{423\}}(\alpha^\prime)$}}{\left( -{\beta}_{345}+\alpha _{12}+\alpha
_{34}-\alpha _{56}\right) \left( -{\beta}_{123}+\alpha _{45}+\alpha _{56}\right) }
\times \\
&\lbrack \alpha _{56}({\beta}_{123}+\alpha _{12}+\alpha _{34}-\alpha
_{45}-\alpha _{56})+({\beta}_{123}+\alpha _{34})(\alpha _{45}-\alpha
_{12})] \ ,
\end{split}
\end{multline}
\begin{multline}
\begin{split}
F^{\{342\}}(\alpha^\prime) = &\frac{\text{$F_{cycl}^{\{432\}}(\alpha^\prime)$}}{\left( {\beta}_{123}+{\beta}_{345}-\alpha
_{12}-\alpha _{45}\right) \left( -{\beta}_{345}+\alpha _{12}+\alpha _{34}-\alpha
_{56}\right) } \\
&\times \lbrack \alpha _{12}({\beta}_{123}-{\beta}_{234}+{\beta}_{345}-\alpha _{12}-\alpha
_{45}+\alpha _{56}) \\
&+{\beta}_{123}({\beta}_{234}-{\beta}_{345}-\alpha _{23}+\alpha _{45}-\alpha
_{56})+\alpha _{23}({\beta}_{345}-{\beta}_{234}+\alpha _{23}-\alpha _{45}+\alpha _{56})]
\\
&+\frac{\text{$F_{cycl}^{\{423\}}(\alpha^\prime)$}}{\left( {\beta}_{345}-\alpha _{12}-\alpha
_{34}+\alpha _{56}\right) \left( -{\beta}_{123}+\alpha _{45}+\alpha _{56}\right) }
\\
&\times \lbrack \alpha _{23}({\beta}_{123}+{\beta}_{234}-\alpha _{12}-\alpha _{23}-\alpha
_{34}+\alpha _{45})+({\beta}_{123}-\alpha _{12})(\alpha _{34}-{\beta}_{234}-\alpha
_{45})] \ ,
\end{split}
\end{multline}
\begin{multline}
\begin{split}
F^{\{423\}}(\alpha^\prime)=&\frac{\text{$F_{cycl}^{\{243\}}(\alpha^\prime)$}}{{\beta}_{123}-\alpha _{45}-\alpha _{56}}(\alpha
_{23}+\alpha _{56}-{\beta}_{123}-{\beta}_{234}) \\
&+\frac{\text{$F_{cycl}^{\{432\}}(\alpha^\prime)$}}{\left( {\beta}_{123}+{\beta}_{345}-\alpha
_{12}-\alpha _{45}\right) \left( {\beta}_{345}-\alpha _{12}-\alpha _{34}+\alpha
_{56}\right) } \\
&\times \lbrack {\beta}_{345}({\beta}_{123}+{\beta}_{234}-\alpha _{23}-\alpha _{56})+(\alpha
_{34}+\alpha _{45})(\alpha _{23}+\alpha _{56}-{\beta}_{123}-{\beta}_{234})] \\
&+\frac{\text{$F_{cycl}^{\{423\}}(\alpha^\prime)$}}{\left( {\beta}_{345}-\alpha _{12}-\alpha
_{34}+\alpha _{56}\right) \left( -{\beta}_{123}+\alpha _{45}+\alpha _{56}\right) } \\
&\times \lbrack \alpha _{56}(\alpha _{23}+\alpha _{45}-{\beta}_{123}-{\beta}_{234}-\alpha
_{12}+\alpha _{56})+(\alpha _{12}-\alpha _{45})({\beta}_{123}+{\beta}_{234}-\alpha
_{23})] \ ,
\end{split}
\end{multline}
\begin{multline}
\begin{split}
F^{\{432\}}(\alpha^\prime)=&\frac{\text{$F_{cycl}^{\{432\}}(\alpha^\prime)$}}{\left( {\beta}_{123}+{\beta}_{345}-\alpha
_{12}-\alpha _{45}\right) \left( {\beta}_{345}-\alpha _{12}-\alpha _{34}+\alpha
_{56}\right) } \times \\
&\lbrack \alpha _{23}({\beta}_{123}+{\beta}_{234}-{\beta}_{345}-\alpha _{23}+\alpha
_{34}+\alpha _{45}-\alpha _{56})+(\alpha _{56}-{\beta}_{234}-{\beta}_{123})(\alpha
_{34}-{\beta}_{345}+\alpha _{45})] \\
&+\frac{\text{$F_{cycl}^{\{423\}}(\alpha^\prime)$}}{\left( {\beta}_{123}-\alpha _{45}-\alpha
_{56}\right) \left( {\beta}_{345}-\alpha _{12}-\alpha _{34}+\alpha _{56}\right) }
\times \lbrack \alpha _{23}(\alpha _{12}-{\beta}_{123}-{\beta}_{234}+\alpha _{23}-\alpha
_{45}) + \\
&\alpha _{56}(\alpha _{12}+{\beta}_{123}+{\beta}_{234}-\alpha _{45}-\alpha
_{56})+({\beta}_{123}+{\beta}_{234})(\alpha _{45}-\alpha _{12})] \ ,
\end{split}
\label{F432-expansion}
\end{multline}
\noindent where each $F_{cycl}^{\{ \sigma_6 \}}(\alpha')$ denotes the result of applying the cyclic relabeling $\{k_1 \rightarrow k_2$, $k_2 \rightarrow k_3$, $\ldots$, $k_6 \rightarrow k_1\}$ to the corresponding momentum factor $F^{\{ \sigma_6 \}}(\alpha')$.\\
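\noindent As an illustration, under this relabeling the $N=6$ Mandelstam variables transform as, for example,
\begin{equation*}
\alpha_{12} \rightarrow \alpha_{23}\, , \quad \alpha_{45} \rightarrow \alpha_{56}\, , \quad \alpha_{56} \rightarrow \alpha_{16}\, , \quad {\beta}_{123} \rightarrow {\beta}_{234}\, , \quad {\beta}_{345} \rightarrow {\beta}_{456} = {\beta}_{123} \ ,
\end{equation*}
where the last equality follows from momentum conservation for six massless on-shell momenta, so that each $F_{cycl}^{\{ \sigma_6 \}}(\alpha')$ is again a function of the same set of $N=6$ Mandelstam variables.\\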
\noindent The $N=6$ BCJ relations (which we have found in full in Appendix \ref{N6-2}) have played an important role in the intermediate steps leading to eqs.(\ref{F234-expansion})-(\ref{F432-expansion}) (see Appendix \ref{Cyclicity-N6} for more details).\\
\noindent So, summarizing, we have successfully executed steps $1$, $2$ and $3$, together with the corresponding $N=5$ calculations (see subsection \ref{Case of the 5-point}), up to ${\alpha'}^6$ order, finding {\it all} the coefficients. The explicit result up to ${\alpha'}^3$ terms is the following:
\begin{eqnarray}
F^{\{234\}}(\alpha ^{\prime }) &=&1+\text{$\alpha $}^{\prime 2}\zeta
(2)\left( -{\beta }_{123}{\beta }_{345}+{\beta }_{345}\alpha _{12}+{\beta }%
_{123}\alpha _{45}-\alpha _{45}\alpha _{56}-\alpha _{12}\alpha_{16}\right) +
\notag \\
&&\text{$\alpha ^{\prime }$}^{3}\text{$\zeta (3)\left( {\beta }_{123}^{2}{%
\beta }_{345}+{\beta }_{123}{\beta }_{345}^{2}-{\beta }_{345}^{2}\alpha
_{12}-{\beta }_{345}\alpha _{12}^{2}-2{\beta }_{345}\alpha _{12}\alpha
_{23}\right. $} - \notag \\
&&{\beta }_{123}^{2}\alpha _{45}-2{\beta }_{234}\alpha _{12}\alpha
_{45}+2\alpha _{12}\alpha _{23}\alpha _{45}-2{\beta }_{123}\alpha
_{34}\alpha _{45} + \notag \\
&&\left. 2 \alpha _{12}\alpha _{34}\alpha _{45}-{\beta }_{123}\alpha
_{45}^{2}+\alpha _{45}^{2}\alpha _{56}+\alpha _{45}\alpha _{56}^{2}+\alpha
_{12}^{2}\alpha _{16}+\alpha _{12}\alpha _{16}^{2}\right) + {\cal O}({\alpha'}^4) \ , \hspace{0.5cm}
\end{eqnarray}
\begin{eqnarray}
F^{\{324\}}(\alpha ^{\prime }) &=&{\alpha ^{\prime }}^{2}\zeta (2)\left( {%
\beta }_{245}\alpha _{13}-\alpha _{13}\alpha _{45}\right) + \notag \\
&&{\alpha ^{\prime }}^{3}\text{$\zeta (3)$}\alpha _{13}\left[ {\beta }%
_{245}^{2}-2{\beta }_{123}\left( {\beta }_{245}-\alpha _{45}\right) +\alpha
_{45}\left( -\alpha _{13}-\alpha _{23}+2\alpha _{24}+\alpha _{45}+2\alpha
_{16}\right) \right. + \notag \\
&&\left. {\beta }_{245}\left( \alpha _{13}+\alpha _{23}-2\left( \alpha
_{45}+\alpha _{16}\right) \right) \right] + {\cal O}({\alpha'}^4) \ ,
\end{eqnarray}
\begin{eqnarray}
F^{\{243\}}(\alpha ^{\prime }) &=&{\alpha ^{\prime }}^{2}\zeta (2)\left( {%
\beta}_{124}\alpha _{35}-\alpha _{12}\alpha _{35}\right) + \notag \\
&&{\alpha ^{\prime }}^{3}\text{$\zeta (3)$}\alpha _{35}\left[ {\beta}%
_{124}^{2}+{\beta}_{124}\left( -2{\beta}_{345}-2\alpha _{12}+\alpha
_{34}+\alpha _{35}-2\alpha _{56}\right) \right. + \notag \\
&&\left. \alpha _{12}\left( 2{\beta}_{345}+\alpha _{12}+2\alpha
_{24}-\alpha _{34}-\alpha _{35}+2\alpha _{56}\right) \right] + {\cal O}({\alpha'}^4) \ ,
\end{eqnarray}
\begin{eqnarray}
F^{\{342\}}(\alpha ^{\prime }) &=&{\alpha ^{\prime }}^{2}\text{$\zeta $}(2)\alpha _{13}\alpha _{25} -\notag \\
&&{\alpha ^{\prime }}^{3}\text{$\zeta $}(3)\alpha _{13}\alpha _{25}\left(
-2{\beta}_{134}-{\beta}_{234}+\alpha _{13}-\alpha _{25}+\alpha _{34}+2\alpha
_{56}+2\alpha _{16}\right) + {\cal O}({\alpha'}^4) \ , \hspace{0.7cm}
\end{eqnarray}
\begin{eqnarray}
F^{\{423\}}(\alpha ^{\prime }) &=&{\alpha ^{\prime }}^{2}\zeta (2)\alpha
_{14}\alpha _{35} + \notag \\
&&{\alpha ^{\prime }}^{3}\text{$\zeta (3)$}\alpha _{14}\alpha _{35}\left( {%
\beta}_{234}+2{\beta}_{235}+\alpha _{14}-\alpha _{23}-\alpha _{35}-2\alpha
_{56}-2\alpha _{16}\right) + {\cal O}({\alpha'}^4) \ , \hspace{0.7cm}
\end{eqnarray}
\begin{eqnarray}
F^{\{432\}}(\alpha ^{\prime }) &=&-{\alpha ^{\prime }}^{2}\zeta (2)\alpha
_{14}\alpha _{25} -{\alpha ^{\prime }}^{3}\text{$\zeta (3)$}\alpha _{14}\alpha _{25}\left( {%
\beta}_{234}+\alpha _{14}+\alpha _{25}-2\alpha _{56}-2\alpha _{16}\right) + {\cal O}({\alpha'}^4) \ . \hspace{0.8cm}
\end{eqnarray}
\noindent The corresponding full expressions up to ${\alpha'}^6$ terms are given in the text files that we have submitted, attached to this work, to the hep-th arXiv preprint database.\\
\noindent We have confirmed that our results are in perfect agreement with the ones found previously in \cite{Mafra3, Broedel3}.
\vspace{0.5cm}
\subsection{Case of the 7-point momentum factors}
\vspace{0.5cm}
\label{Case of the 7-point}
\noindent Carrying out the procedure for the $N=7$ case is straightforward (although very tedious) once it has been done for $N=5$ and $N=6$. We will only mention here a few details.\\
\noindent First, with respect to the poles of $A_{YM}(1,2,3,4,5,6,7)$ (and of the other twenty-three subamplitudes of the $7$-point basis): these subamplitudes have second, third and fourth order poles. They come from the Feynman diagrams in figure \ref{feynmandiag3}. The poles can occur in any of the fourteen Mandelstam variables that we have specified in eq.(\ref{Mandelstam7}): seven $\alpha_{ij}$'s and seven $\beta_{ijk}$'s.\\
\begin{figure}[th]
\centerline{\includegraphics*[scale=0.1,angle=0]{diagramas3.jpg}}
\caption{Three kinds of Feynman diagrams used to calculate the YM subamplitude
$A_{YM}(1,2,3,4,5,6,7)$. }
\label{feynmandiag3}
\end{figure}
\noindent We have presented the poles, in abbreviated form, in table (\ref{poles7p}):
\begin{equation}
\begin{tabular}{cccc}
\hline
& second order & third order & fourth order \\
& (28 terms) & (84 terms) & (42 terms) \\ \hline
& ${\beta}_{167}{\beta}_{234},$ & ${\beta}_{127}{\beta}_{345}\alpha _{12},$
& ${\beta}_{167}{\beta}_{234}\alpha _{17}\alpha _{23}$, \\
$A_{YM}(1,2,3,4,5,6,7)$ & ${\beta}_{345}\alpha _{12},$ & ${\beta}_{167}{%
\beta}_{234}\alpha _{17},$ & ${\beta}_{167}{\beta}_{234}\alpha _{17}\alpha
_{34},$ \\
& $\vdots $ & $\vdots $ & $\vdots $ \\
& ${\beta}_{567}\alpha _{34}$ & ${\beta}_{234}{\beta}_{567}\alpha _{23}$ & ${%
\beta}_{167}{\beta}_{345}\alpha _{17}\alpha _{34}$ \\ \hline
\end{tabular}
\label{poles7p}
\end{equation}%
\noindent So, $F^{\{2345\}}(\alpha')$ will be naturally defined in terms of the $N=7$ Mandelstam variables that we have defined in (\ref{Mandelstam7}):
\begin{equation}
F^{\{2345\}}(\alpha') = F^{\{2345\}}\left[
\begin{array}{ccccccc}
\scriptstyle \alpha_{12}, & \scriptstyle\alpha _{23}, & \scriptstyle\alpha _{34}, & %
\scriptstyle\alpha _{45}, & \scriptstyle\alpha _{56}, & \scriptstyle\alpha
_{67}, & \scriptstyle\alpha _{17} \\
\scriptstyle {\beta}_{123}, & \scriptstyle {\beta}_{234}, & \scriptstyle {\beta}_{345}, & %
\scriptstyle {\beta}_{456}, & \scriptstyle {\beta}_{567}, & \scriptstyle {\beta}_{167}, & %
\scriptstyle {\beta}_{127}%
\end{array}%
;\alpha ^{\prime }\right] \ .
\label{canonic}
\end{equation}%
\noindent The Mandelstam variables for the remaining twenty-three $F^{\{ \sigma_7 \}}(\alpha')$ momentum factors are obtained from the ones appearing in eq.(\ref{canonic}) by applying to them the corresponding $\sigma_7$ permutation of the indices $\{2, 3, 4, 5\}$.\\
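\noindent For instance, for $\sigma_7=\{3245\}$ (which simply exchanges the labels $2 \leftrightarrow 3$), the substitution acts on the variables of eq.(\ref{canonic}) as
\begin{equation*}
\alpha_{12} \rightarrow \alpha_{13}\, , \quad {\beta}_{127} \rightarrow {\beta}_{137}\, , \quad {\beta}_{345} \rightarrow {\beta}_{245}\, ,
\end{equation*}
while variables not involving the labels $2$ and $3$ (such as $\alpha_{45}$ or ${\beta}_{456}$) are left unchanged; this is precisely the origin of the variables appearing in eq.(\ref{expF3245}) below.\\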
\noindent A computational detail that could be of interest to the reader is that finding the coefficients of the $\alpha'$ expansions along the same lines presented in subsection \ref{Case of the 6-point} (in four stages; in particular, demanding cyclic symmetry after unitarity has been demanded in one physical pole) becomes a heavy task compared with solving for the coefficients by demanding unitarity in all poles. These two methods happen to agree at low orders in $\alpha'$ (but not necessarily at high orders\footnote{We have verified this computationally.}). The advantage of the second method lies in the fact that each of its equations contains only a few momentum factors (and therefore a low number of unknowns), while in the first method each equation contains many more momentum factors. \\
\noindent In Appendix \ref{Unitarity-N7} we have written the momentum factor relations that arise when unitarity of the amplitudes is demanded with respect to their $\alpha_{12}$ and their $\beta_{123}$ poles. Our complete result is that, when any ${\alpha}_{ij} \rightarrow 0$ there arise six relations and when any ${\beta}_{ijk} \rightarrow 0$ there arise only two relations. The first type of relations involves $7$-point with $6$-point momentum factors while the second type involves $7$-point with $4$-point and $5$-point momentum factors (see Appendix \ref{Unitarity-N7} for the detailed relations).\\
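\noindent Schematically, these relations originate in the factorization of the amplitude on its poles. For example, in the $\alpha_{12} \rightarrow 0$ limit one has, as a sketch (suppressing the sum over intermediate states and overall normalizations),
\begin{equation*}
\lim_{\alpha_{12} \rightarrow 0} \alpha_{12} \, A(1,2,3,4,5,6,7) \ \sim \ A(1,2,P) \times A(-P,3,4,5,6,7) \ ,
\end{equation*}
which relates $7$-point momentum factors to $6$-point ones, while the ${\beta}_{123} \rightarrow 0$ limit factorizes the amplitude into a $4$-point and a $5$-point subamplitude, in accordance with the two types of relations just described.\\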
\noindent We have found the $\alpha'$ expansions up to ${\alpha'}^4$ order\footnote{As mentioned before, the reason for not going to higher orders is a purely computational one: we have had memory difficulties dealing with the $N=7$ ${\alpha'}^5$ calculations on our computers. In spite of this complication, as we mentioned in subsection \ref{Using}, we believe that, using only the $4$-point amplitude information, our method is in principle able to achieve ${\alpha'}^7$ order calculations, for any number of legs.}. In the following we list these expansions only up to ${\alpha'}^3$ order\footnote{There are specific momentum factors which, as remarked in \cite{Mafra3}, have an $\alpha'$ expansion beginning at ${\alpha'}^4$ order. For these cases we have written the ${\alpha'}^4$ terms.}.
\begin{eqnarray}
F^{\{2345\}}(\alpha ^{\prime }) &=&1-{\alpha ^{\prime }}^{2}\zeta (2)\left( {%
\beta}_{123}\left( {\beta}_{127}-{\beta}_{456}\right) +{\beta}_{456}{\beta}%
_{567}-{\beta}_{127}\alpha _{12}+\alpha _{12}\alpha _{17}-{\beta}%
_{567}\alpha _{56}+\alpha _{56}\alpha _{67}\right) \notag \\
&&+{\alpha ^{\prime }}^{3}\text{$\zeta (3)\left[ {\beta}_{123}^{2}\left( {%
\beta}_{127}-{\beta}_{456}\right) +{\beta}_{456}^{2}{\beta}_{567}-{\beta}%
_{127}^{2}\alpha _{12}-{\beta}_{127}\alpha _{12}^{2}+\alpha _{12}^{2}\alpha
_{17}+\alpha _{12}\alpha _{17}^{2}\right. $} \notag \\
&&-2{\beta}_{127}\alpha _{12}\alpha _{23}+{\beta}_{456}\left( {\beta}%
_{567}^{2}+2\alpha _{12}\left( -{\beta}_{234}+\alpha _{23}+\alpha
_{34}\right) \right) -{\beta}_{567}^{2}\alpha _{56}-2{\beta}_{167}\alpha
_{12}\alpha _{56} \notag \\
&&+2{\beta}_{234}\alpha _{12}\alpha _{56}+2{\beta}_{345}\alpha _{12}\alpha
_{56}-2\alpha _{12}\alpha _{34}\alpha _{56}-2{\beta}_{567}\alpha _{45}\alpha
_{56}-{\beta}_{567}\alpha _{56}^{2} \notag \\
&&+\left. {\beta}_{123}\left( {\beta}_{127}^{2}-{\beta}_{456}^{2}-2{\beta}%
_{456}\alpha _{34}+2\left( -{\beta}_{345}+\alpha _{34}+\alpha _{45}\right)
\alpha _{56}\right) +\alpha _{56}^{2}\alpha _{67}+\alpha _{56}\alpha
_{67}^{2}\right] \nonumber \\
&& +{\cal O}({\alpha'}^4) \ ,
\label{expF2345}
\end{eqnarray}%
\begin{eqnarray}
F^{\{2354\}}(\alpha ^{\prime }) &=&{\alpha ^{\prime }}^{2}\zeta (2)\left( -{%
\beta}_{123}+{\beta}_{467}\right) \alpha _{46} - \notag \\
&&{\alpha ^{\prime }}^{3}\text{$\zeta (3)$}\alpha _{46}\left[ -{\beta}%
_{123}^{2}+2{\beta}_{456}{\beta}_{467}-{\beta}_{467}^{2}-2{\beta}%
_{235}\alpha _{12}+2\alpha _{12}\alpha _{23}+2\alpha _{12}\alpha _{35} - \right.
\notag \\
&&\left. {\beta}_{467}\alpha _{45}-{\beta}_{467}\alpha _{46}+{\beta}%
_{123}\left( -2{\beta}_{456}+2{\beta}_{467}-2\alpha _{35}+\alpha
_{45}+\alpha _{46}-2\alpha _{67}\right) + \right . \nonumber \\
&& \left. 2{\beta}_{467}\alpha _{67}\right] +{\cal O}({\alpha'}^4) \ ,
\label{expF2354}
\end{eqnarray}
\begin{eqnarray}
F^{\{2435\}}(\alpha ^{\prime }) &=&{\alpha ^{\prime }}^{2}\zeta (2)\left( {%
\beta}_{124}-\alpha _{12}\right) \left( {\beta}_{356}-\alpha _{56}\right)
\notag \\
&&+{\alpha ^{\prime }}^{3}\text{$\zeta (3)\left[ {\beta}_{124}^{2}\left( {%
\beta}_{356}-\alpha _{56}\right) +{\beta}_{124}\left( {\beta}_{356}^{2}+{%
\beta}_{356}\left( -2{\beta}_{567}-2\alpha _{12}+\alpha _{34}-2\alpha
_{56}\right) \right. \right. $} \notag \\
&&\left. -2{\beta}_{127}\left( {\beta}_{356}-\alpha _{56}\right) +\alpha
_{56}\left( 2{\beta}_{567}+2\alpha _{12}-\alpha _{34}+2\alpha _{35}+\alpha
_{56}\right) \right) \notag \\
&&+\alpha _{12}\left( -{\beta}_{356}^{2}+2{\beta}_{127}\left( {\beta}%
_{356}-\alpha _{56}\right) -\alpha _{56}\left( 2{\beta}_{567}+\alpha
_{12}+2\alpha _{24}-\alpha _{34}+2\alpha _{35}+\alpha _{56}\right) \right)
\notag \\
&&\left. +\alpha _{12}{\beta}_{356}\left( 2{\beta}_{567}+\alpha _{12}+2\alpha
_{24}-\alpha _{34}+2\alpha _{56}\right) \right] +\mathcal{O}(\alpha ^{\prime }{}^{4}) \ ,
\label{expF2435}
\end{eqnarray}%
\begin{eqnarray}
F^{\{2453\}}(\alpha ^{\prime }) &=&{\alpha ^{\prime }}^{2}\zeta (2)\left( {%
\beta}_{346}-\alpha _{34}-\alpha _{46}\right) \left( -{\beta}_{345}-{\beta}%
_{467}+\alpha _{35}+\alpha _{67}\right) \notag \\
&&-{\alpha ^{\prime }}^{3}\text{$\zeta (3)$}\left[ 2\alpha _{12}\left( {\beta%
}_{346}-\alpha _{34}-\alpha _{46}\right) \left( -{\beta}_{167}+{\beta}_{235}-%
{\beta}_{467}+\alpha _{67}\right) \right. \notag \\
&&+\left. \left( {\beta}_{346}-\alpha _{34}-\alpha _{46}\right) \left( {\beta%
}_{345}+{\beta}_{467}-\alpha _{35}-\alpha _{67}\right) \left( 2{\beta}%
_{125}-2{\beta}_{127}+{\beta}_{345}+{\beta}_{346}\right. \right. \notag \\
&&\left. \left. -{\beta}_{467}-2\alpha _{34}-\alpha _{46}-\alpha
_{67}\right) \right] +\mathcal{O}(\alpha ^{\prime }{}^{4}) \ ,
\label{expF2453}
\end{eqnarray}%
\begin{eqnarray}
F^{\{2534\}}(\alpha ^{\prime }) &=&-{\alpha ^{\prime }}^{2}\zeta (2)\left( {%
\beta}_{127}-{\beta}_{345}-{\beta}_{356}+\alpha _{35}\right) \left( {\beta}%
_{124}-{\beta}_{367}+\alpha _{45}\right) \notag \\
&&+{\alpha ^{\prime }}^{3}\text{$\zeta (3)$}\left( {\beta}_{127}-{\beta}%
_{345}-{\beta}_{356}+\alpha _{35}\right) \left[ {\beta}_{124}^{2}+{\beta}%
_{345}{\beta}_{367}-{\beta}_{356}{\beta}_{367}+{\beta}_{367}^{2}+2{\beta}%
_{245}\alpha _{12}\right. \notag \\
&&-2\alpha _{12}\alpha _{24}+2{\beta}_{367}\alpha _{36}-{\beta}_{345}\alpha
_{45}+{\beta}_{356}\alpha _{45}-3{\beta}_{367}\alpha _{45}-2\alpha
_{12}\alpha _{45}-2\alpha _{36}\alpha _{45} \notag \\
&&+2\alpha _{45}^{2}+{\beta}_{127}\left( -{\beta}_{367}+\alpha _{45}\right)
-2{\beta}_{367}\alpha _{67}+2\alpha _{45}\alpha _{67} \notag \\
&&+\left. {\beta}_{124}\left( {\beta}_{127}-{\beta}_{345}+{\beta}_{356}-2{%
\beta}_{367}-2\alpha _{36}+3\alpha _{45}+2\alpha _{67}\right) \right] +%
\mathcal{O}(\alpha ^{\prime }{}^{4}) \ ,
\label{expF2534}
\end{eqnarray}%
\begin{eqnarray}
F^{\{2543\}}(\alpha ^{\prime }) &=&-{\alpha ^{\prime }}^{2}\zeta (2)\left( {%
\beta}_{125}-\alpha _{12}\right) \alpha _{36} \notag \\
&&-{\alpha ^{\prime }}^{3}\text{$\zeta (3)$}\alpha _{36}\left[ {\beta}%
_{125}^{2}+{\beta}_{125}\left( -2{\beta}_{127}+{\beta}_{345}-2\alpha
_{12}+\alpha _{36}-2\alpha _{67}\right) \right. \notag \\
&&\left. +\alpha _{12}\left( 2{\beta}_{127}-{\beta}_{345}+\alpha
_{12}+2\alpha _{25}-\alpha _{36}+2\alpha _{67}\right) \right] +\mathcal{O}%
(\alpha ^{\prime }{}^{4}) \ ,
\label{expF2543}
\end{eqnarray}%
\begin{eqnarray}
F^{\{3245\}}(\alpha ^{\prime }) &=&{\alpha ^{\prime }}^{2}\zeta (2)\left( {%
\beta}_{137}-{\beta}_{456}\right) \alpha _{13} \notag \\
&&+{\alpha ^{\prime }}^{3}\text{$\zeta (3)$}\alpha _{13}\left[ {\beta}%
_{137}^{2}-2{\beta}_{123}\left( {\beta}_{137}-{\beta}_{456}\right) +{\beta}%
_{456}^{2}-{\beta}_{456}\alpha _{13}+2{\beta}_{456}\alpha _{17}-{\beta}%
_{456}\alpha _{23}\right. \notag \\
&&\left. +{\beta}_{137}\left( -2{\beta}_{456}+\alpha _{13}-2\alpha
_{17}+\alpha _{23}\right) +2{\beta}_{456}\alpha _{24}+2{\beta}_{245}\alpha
_{56}-2\alpha _{24}\alpha _{56}-2\alpha _{45}\alpha _{56}\right] \notag \\
&&+\mathcal{O}(\alpha ^{\prime }{}^{4}) \ ,
\label{expF3245}
\end{eqnarray}%
\begin{equation}
F^{\{3254\}}(\alpha ^{\prime })=-2{\alpha ^{\prime }}^{3}\text{$\zeta (3)$}%
\alpha _{13}\alpha _{25}\alpha _{46}+\mathcal{O}(\alpha ^{\prime }{}^{4}) \ ,
\label{expF3254}
\end{equation}
\begin{eqnarray}
F^{\{3425\}}(\alpha ^{\prime }) &=&{\alpha ^{\prime }}^{2}\zeta (2)\left( -{%
\beta}_{137}-{\beta}_{234}+\alpha _{17}+\alpha _{24}\right) \left( {\beta}%
_{134}-\alpha _{13}-\alpha _{34}\right) \notag \\
&&-{\alpha ^{\prime }}^{3}\text{$\zeta (3)$}\left( {\beta}_{134}-\alpha
_{13}-\alpha _{34}\right) \left[ -{\beta}_{137}^{2}+{\beta}_{234}^{2}+2{\beta%
}_{234}{\beta}_{256}-2{\beta}_{234}{\beta}_{567}-{\beta}_{234}\alpha
_{13}\right. \notag \\
&&-2{\beta}_{234}\alpha _{17}-2{\beta}_{256}\alpha _{17}+2{\beta}%
_{567}\alpha _{17}+\alpha _{13}\alpha _{17}+\alpha _{17}^{2}+{\beta}%
_{134}\left( {\beta}_{137}+{\beta}_{234}-\alpha _{17}-\alpha _{24}\right)
\notag \\
&&-{\beta}_{234}\alpha _{24}-2{\beta}_{256}\alpha _{24}+2{\beta}_{567}\alpha
_{24}+\alpha _{13}\alpha _{24}+\alpha _{17}\alpha _{24}-2{\beta}_{234}\alpha
_{34} \notag \\
&&+2\alpha _{17}\alpha _{34}+2\alpha _{24}\alpha _{34}+{\beta}_{137}\left( 2{%
\beta}_{256}-2{\beta}_{567}-\alpha _{13}+\alpha _{24}-2\alpha _{34}-2\alpha
_{56}\right) \notag \\
&&\left. -2{\beta}_{167}\alpha _{56}+2{\beta}_{245}\alpha _{56}+2\alpha _{17}\alpha
_{56}\right] +\mathcal{O}(\alpha ^{\prime }{}^{4}) \ ,
\label{expF3425}
\end{eqnarray}%
\begin{eqnarray}
F^{\{3452\}}(\alpha ^{\prime }) &=&{\alpha ^{\prime }}^{2}\zeta (2)\left( {%
\beta}_{137}-{\beta}_{245}-{\beta}_{256}+\alpha _{25}\right) \left( -{\beta}%
_{134}+{\beta}_{267}-{\beta}_{345}+\alpha _{34}\right) \notag \\
&&+{\alpha ^{\prime }}^{3}\text{$\zeta (3)$}\left( {\beta}_{137}-{\beta}%
_{245}-{\beta}_{256}+\alpha _{25}\right) \left( {\beta}_{134}-{\beta}_{267}+{%
\beta}_{345}-\alpha _{34}\right) \notag \\
&&\times \left( {\beta}_{134}+{\beta}_{137}-{\beta}_{245}+{\beta}_{256}-{%
\beta}_{267}+2{\beta}_{345}-2\alpha _{26}-2\alpha _{34}+2\alpha _{67}\right)
\notag \\
&&+\mathcal{O}(\alpha ^{\prime }{}^{4}) \ ,
\end{eqnarray}%
\begin{eqnarray}
F^{\{3524\}}(\alpha ^{\prime }) &=&{\alpha ^{\prime }}^{2}\zeta (2)\alpha
_{14}\alpha _{36} \notag \\
&&-{\alpha ^{\prime }}^{3}\text{$\zeta (3)$}\alpha _{14}\alpha _{36}\left( -2%
{\beta}_{145}-{\beta}_{167}-2{\beta}_{236}+\alpha _{14}+2\alpha _{17}+\alpha
_{23}+\alpha _{36}+\alpha _{45}+2\alpha _{67}\right) \notag \\
&&+\mathcal{O}(\alpha ^{\prime }{}^{4}) \ ,
\label{expF3524}
\end{eqnarray}%
\begin{eqnarray}
F^{\{3542\}}(\alpha ^{\prime }) &=&{\alpha ^{\prime }}^{2}\zeta (2)\left( {%
\beta}_{137}+{\beta}_{167}-{\beta}_{245}-\alpha _{17}\right) \left( {\beta}%
_{135}-\alpha _{13}-\alpha _{35}\right) \notag \\
&&-{\alpha ^{\prime }}^{3}\text{$\zeta (3)$}\left( {\beta}_{137}+{\beta}%
_{167}-{\beta}_{245}-\alpha _{17}\right) \left( {\beta}_{135}-\alpha
_{13}-\alpha _{35}\right) \notag \\
&&\times \left( -{\beta}_{135}+{\beta}_{137}-{\beta}_{167}+{\beta}%
_{345}+\alpha _{13}+\alpha _{17}-\alpha _{24}-2\alpha _{26}+\alpha
_{35}-\alpha _{45}+2\alpha _{67}\right) \notag \\
&&+\mathcal{O}(\alpha ^{\prime }{}^{4}) \ ,
\label{expF3542}
\end{eqnarray}
\begin{eqnarray}
F^{\{4235\}}(\alpha ^{\prime }) &=&{\alpha ^{\prime }}^{2}\zeta (2)\left( {%
\beta}_{147}-{\beta}_{356}-\alpha _{23}\right) \left( -{\beta}_{124}-{\beta}%
_{234}+{\beta}_{567}+\alpha _{24}\right) \notag \\
&&+{\alpha ^{\prime }}^{3}\text{$\zeta (3)$}\left( {\beta}_{124}+{\beta}%
_{234}-{\beta}_{567}-\alpha _{24}\right) \left[ -{\beta}_{147}^{2}+{\beta}%
_{234}{\beta}_{356}-{\beta}_{356}^{2}-{\beta}_{356}{\beta}_{567}\right.
\notag \\
&&+2{\beta}_{356}\alpha _{14}-2{\beta}_{356}\alpha _{17}+{\beta}_{124}\left(
{\beta}_{147}-{\beta}_{356}-\alpha _{23}\right) +{\beta}_{234}\alpha _{23}-3{%
\beta}_{356}\alpha _{23} \notag \\
&&-{\beta}_{567}\alpha _{23}+2\alpha _{14}\alpha _{23}-2\alpha _{17}\alpha
_{23}-2\alpha _{23}^{2}-2{\beta}_{235}\alpha _{56}+2\alpha _{23}\alpha
_{56}+2\alpha _{35}\alpha _{56} \notag \\
&&\left. +{\beta}_{147}\left( -{\beta}_{234}+2{\beta}_{356}+{\beta}%
_{567}-2\alpha _{14}+2\alpha _{17}+3\alpha _{23}\right) \right] +\mathcal{O}%
(\alpha ^{\prime }{}^{4}) \ ,
\label{expF4235}
\end{eqnarray}%
\begin{eqnarray}
F^{\{4253\}}(\alpha ^{\prime }) &=&-\frac{1}{10}{\alpha ^{\prime }}^{4}\zeta
^{2}(2)\left( {\beta}_{157}-{\beta}_{234}-{\beta}_{346}+\alpha _{34}\right)
\left( -{\beta}_{167}+{\beta}_{245}-{\beta}_{367}+\alpha _{67}\right) \notag
\\
&&\left[ 7\alpha _{15}\alpha _{24}+17\alpha _{24}\left( {\beta}_{167}-{\beta}%
_{234}-{\beta}_{245}+\alpha _{24}\right) -3\alpha _{15}\left( {\beta}%
_{346}-\alpha _{34}-\alpha _{36}\right) \right. \notag \\
&&\left. -10\left( {\beta}_{167}-{\beta}_{234}-{\beta}_{245}+\alpha
_{24}\right) \left( {\beta}_{346}-\alpha _{34}-\alpha _{36}\right) \right] +%
\mathcal{O}(\alpha ^{\prime }{}^{5}) \ ,
\label{expF4253}
\end{eqnarray}%
\begin{eqnarray}
F^{\{4325\}}(\alpha ^{\prime }) &=&-{\alpha ^{\prime }}^{2}\zeta (2)\alpha
_{14}\left( {\beta}_{256}-\alpha _{56}\right) \notag \\
&&-{\alpha ^{\prime }}^{3}\text{$\zeta (3)$}\alpha _{14}\left[ {\beta}%
_{256}^{2}+{\beta}_{234}\left( {\beta}_{256}-\alpha _{56}\right) +\alpha
_{56}\left( 2{\beta}_{567}-\alpha _{14}+2\alpha _{17}+2\alpha _{25}+\alpha
_{56}\right) \right. \notag \\
&&\left. +{\beta}_{256}\left( -2{\beta}_{567}+\alpha _{14}-2\left( \alpha
_{17}+\alpha _{56}\right) \right) \right] +\mathcal{O}(\alpha ^{\prime
}{}^{4}) \ ,
\label{expF4325}
\end{eqnarray}%
\begin{eqnarray}
F^{\{4352\}}(\alpha ^{\prime }) &=&\frac{1}{10}{\alpha ^{\prime }}^{4}\zeta
^{2}(2)\left( -{\beta}_{147}-{\beta}_{167}+{\beta}_{235}+\alpha _{17}\right)
\left( -{\beta}_{167}+{\beta}_{245}-{\beta}_{367}+\alpha _{67}\right) \notag
\\
&&\left[ 10\alpha _{24}\left( -{\beta}_{124}-{\beta}_{245}+{\beta}%
_{367}+\alpha _{24}\right) +27\alpha _{24}\alpha _{35}\right. \notag \\
&&+3\left( -{\beta}_{124}-{\beta}_{245}+{\beta}_{367}+\alpha _{24}\right)
\left( {\beta}_{147}-{\beta}_{235}-{\beta}_{356}+\alpha _{35}\right) \notag
\\
&&\left. +10\alpha _{35}\left( {\beta}_{147}-{\beta}_{235}-{\beta}%
_{356}+\alpha _{35}\right) \right] +\mathcal{O}(\alpha ^{\prime }{}^{5}) \ ,
\label{expF4352}
\end{eqnarray}%
\begin{eqnarray}
F^{\{4523\}}(\alpha ^{\prime }) &=&{\alpha ^{\prime }}^{2}\zeta (2)\left( {%
\beta}_{137}+{\beta}_{167}-{\beta}_{245}-\alpha _{17}\right) \left( {\beta}%
_{167}-{\beta}_{235}+{\beta}_{467}-\alpha _{67}\right) \notag \\
&&-{\alpha ^{\prime }}^{3}\text{$\zeta (3)$}\left( {\beta}_{137}+{\beta}%
_{167}-{\beta}_{245}-\alpha _{17}\right) \left( {\beta}_{167}-{\beta}_{235}+{%
\beta}_{467}-\alpha _{67}\right) \notag \\
&&\times \left[ -2{\beta}_{135}+{\beta}_{137}+{\beta}_{167}-2{\beta}_{235}-2{%
\beta}_{245}-2{\beta}_{246}+{\beta}_{467}\right. \notag \\
&&\left. +2\alpha _{13}+\alpha _{17}+3\alpha _{24}-\alpha _{25}+3\alpha
_{35}+2\alpha _{46}+\alpha _{67}\right] +\mathcal{O}(\alpha ^{\prime }{}^{4}) \ ,
\label{expF4523}
\end{eqnarray}%
\begin{eqnarray}
F^{\{4532\}}(\alpha ^{\prime }) &=&-{\alpha ^{\prime }}^{2}\zeta (2)\left( {%
\beta}_{236}-\alpha _{23}-\alpha _{26}\right) \left( {\beta}_{145}-\alpha
_{14}-\alpha _{45}\right) \notag \\
&&-{\alpha ^{\prime }}^{3}\text{$\zeta (3)$}\left( {\beta}_{236}-\alpha
_{23}-\alpha _{26}\right) \left( {\beta}_{145}-\alpha _{14}-\alpha
_{45}\right) \notag \\
&&\times \left( {\beta}_{145}+{\beta}_{167}+{\beta}_{236}-\alpha
_{14}-2\alpha _{17}+\alpha _{26}-\alpha _{45}-2\alpha _{67}\right) +\mathcal{%
O}(\alpha ^{\prime }{}^{4}) \ ,
\label{expF4532}
\end{eqnarray}%
\begin{eqnarray}
F^{\{5234\}}(\alpha ^{\prime }) &=&{\alpha ^{\prime }}^{2}\zeta (2)\left( -{%
\beta}_{125}-{\beta}_{235}+{\beta}_{467}+\alpha _{25}\right) \left( {\beta}%
_{157}-{\beta}_{234}-{\beta}_{346}+\alpha _{34}\right) \notag \\
&&+{\alpha ^{\prime }}^{3}\text{$\zeta (3)$}\left( {\beta}_{125}+{\beta}%
_{235}-{\beta}_{467}-\alpha _{25}\right) \left( {\beta}_{157}-{\beta}_{234}-{%
\beta}_{346}+\alpha _{34}\right) \notag \\
&&\times \left( {\beta}_{125}-{\beta}_{157}+2{\beta}_{234}-{\beta}_{235}+{%
\beta}_{346}+{\beta}_{467}-2\alpha _{15}+2\alpha _{17}-2\alpha _{34}\right)
\notag \\
&&+\mathcal{O}(\alpha ^{\prime }{}^{4}) \ ,
\label{expF5234}
\end{eqnarray}%
\begin{eqnarray}
F^{\{5243\}}(\alpha ^{\prime }) &=&\frac{1}{10}{\alpha ^{\prime }}^{4}\zeta
^{2}(2)\left( -{\beta}_{147}-{\beta}_{167}+{\beta}_{235}+\alpha _{17}\right)
\left( -{\beta}_{134}+{\beta}_{267}-{\beta}_{345}+\alpha _{34}\right) \notag
\\
&&\times \left[ 3\alpha _{26}\left( {\beta}_{134}-\alpha _{14}-\alpha
_{34}\right) -7\alpha _{26}\alpha _{35}\right. \notag \\
&&+\left. 10\left( {\beta}_{134}-\alpha _{14}-\alpha _{34}\right) \left( {%
\beta}_{167}-{\beta}_{235}-{\beta}_{345}+\alpha _{35}\right) \right. \nonumber \\
&& \left. -17\alpha_{35}\left( {\beta}_{167}-{\beta}_{235}-{\beta}_{345}+\alpha _{35}\right) %
\right] +\mathcal{O}(\alpha ^{\prime }{}^{5}) \ ,
\label{expF5243}
\end{eqnarray}%
\begin{eqnarray}
F^{\{5324\}}(\alpha ^{\prime }) &=&{\alpha ^{\prime }}^{2}\zeta (2)\left( {%
\beta}_{246}-\alpha _{24}-\alpha _{46}\right) \left( {\beta}_{167}-{\beta}%
_{235}+{\beta}_{467}-\alpha _{67}\right) \notag \\
&&+{\alpha ^{\prime }}^{3}\text{$\zeta (3)$}\left( {\beta}_{167}-{\beta}%
_{234}+{\beta}_{246}-{\beta}_{467}+2\alpha _{15}-2\alpha _{17}+\alpha
_{23}-\alpha _{24}+\alpha _{35}-\alpha _{46}-\alpha _{67}\right) \notag \\
&&\times \left( {\beta}_{246}-\alpha _{24}-\alpha _{46}\right) \left( {\beta}%
_{167}-{\beta}_{235}+{\beta}_{467}-\alpha _{67}\right) +\mathcal{O}(\alpha
^{\prime }{}^{4}) \ ,
\label{expF5324}
\end{eqnarray}%
\begin{eqnarray}
F^{\{5342\}}(\alpha ^{\prime }) &=&\frac{1}{10}{\alpha ^{\prime }}^{4}\text{$%
\zeta ^{2}(2)$}\alpha _{15}\alpha _{26}\left[ -3{\beta}_{246}\alpha
_{15}+10\alpha _{15}\alpha _{24}+{\beta}_{135}\left( 3{\beta}_{246}-10\alpha
_{24}-3\alpha _{26}\right) \right. \notag \\
&&\left. +3\alpha _{15}\alpha _{26}-10{\beta}_{246}\alpha _{35}+27\alpha
_{24}\alpha _{35}+10\alpha _{26}\alpha _{35}\right] +\mathcal{O}(\alpha
^{\prime }{}^{5}) \ ,
\label{expF5342}
\end{eqnarray}%
\begin{eqnarray}
F^{\{5423\}}(\alpha ^{\prime }) &=&-{\alpha ^{\prime }}^{2}\zeta (2)\left( {%
\beta}_{236}-\alpha _{23}-\alpha _{36}\right) \left( {\beta}_{145}-\alpha
_{15}-\alpha _{45}\right) \notag \\
&&-{\alpha ^{\prime }}^{3}\text{$\zeta (3)$}\left( {\beta}_{236}-\alpha
_{23}-\alpha _{36}\right) \left( {\beta}_{145}-\alpha _{15}-\alpha
_{45}\right) \notag \\
&&\times \left( {\beta}_{145}+{\beta}_{167}+{\beta}_{236}+\alpha
_{15}-2\alpha _{17}-\alpha _{23}-\alpha _{36}-2\alpha _{67}\right) +\mathcal{%
O}(\alpha ^{\prime }{}^{4}) \ ,
\label{expF5423}
\end{eqnarray}%
\begin{eqnarray}
F^{\{5432\}}(\alpha ^{\prime }) &=&{\alpha ^{\prime }}^{2}\zeta (2)\alpha
_{15}\alpha _{26} +{\alpha ^{\prime }}^{3}\text{$\zeta (3)$}\alpha _{15}\alpha _{26}\left( {%
\beta}_{167}+\alpha _{15}-2\alpha _{17}+\alpha _{26}-2\alpha _{67}\right) +%
\mathcal{O}(\alpha ^{\prime }{}^{4}) \ . \hspace{0.8cm}
\label{expF5432}
\end{eqnarray}
\noindent The corresponding expressions of the ${\alpha'}^4$ terms for the remaining momentum factors are included in the text files that we have submitted, attached to this work, to the hep-th arXiv preprint database.\\
\noindent We have confirmed that our results are in perfect agreement with the ones found previously in \cite{Broedel3}.
\vspace{0.5cm}
\section{Summary and conclusions}
\label{Summary}
\noindent We have successfully derived a closed formula for the tree level $N$-point amplitude of open massless superstrings (for $3 \leq N \leq 7$). This is the same formula first found by Mafra, Schlotterer and Stieberger in \cite{Mafra1}, using the Pure Spinor formalism \cite{Berkovits1}.\\
\noindent Our approach has consisted in working only within the RNS formalism, so spacetime supersymmetry has been present throughout, but in a {\it non-manifest} manner. First, it has been present implicitly in the computation of the $N$-point gauge boson amplitude in closed form (see eq.(\ref{Ab})) because, in order to arrive at it, the condition of absence of $(\zeta \cdot k)^N$ terms\footnote{This is the main observation of our revisited S-matrix method \cite{Barreiro0}.} has been used. And second, it has been used to find {\it uniquely} the amplitudes involving fermions, once the $N$-point formula for gauge bosons, eq.(\ref{Ab}), has been found. \\
\noindent We believe that a deeper understanding of our procedure could eventually lead to the MSS formula in eq.(\ref{MSS}), for arbitrary $N$.\\
\noindent The kinematic analysis that we have required to arrive at the MSS formula naturally leads us to a space of $N$-point gauge boson subamplitudes which is $(N-3)!$-dimensional (at least for $3 \leq N \leq 7$)\footnote{In the main body of this work we have referred to this space as ${\cal V}_N$. See section \ref{Kinematical} for more details.}. It is at this point that the basis of Yang-Mills subamplitudes first proposed by Bern, Carrasco and Johansson \cite{Bern1} plays an important role in our procedure. Once this basis has been identified as a basis for ${\cal V}_N$, the MSS formula and the explicit BCJ relations themselves become linear algebra problems in which the components of a certain vector, with respect to a given basis, are to be found. We have done these calculations in section \ref{Closed} and Appendix \ref{BCJ}, respectively. \\
\noindent Following the same spirit as the revisited S-matrix approach \cite{Barreiro0}, we have found $\alpha'$ correction terms to the open superstring $N$-point amplitudes (for $N=5,6,7$) by using only the well known $4$-point amplitude $\alpha'$ expansion (see eqs.(\ref{formula1}) and (\ref{expansionGamma})) and demanding cyclic symmetry and tree-level unitarity of the scattering amplitudes. We have carried these calculations up to ${\alpha'}^6$ order. It is quite remarkable that, for the calculations proposed in this work, we have not needed to compute any coefficient as a numerical series or any integral involving polylogarithms.\\
\noindent We expect that, within our approach, only from ${\alpha'}^8$ order onwards\footnote{With the remarkable exception of the ${\alpha'}^9$ order terms, as mentioned in subsection \ref{Using}.} will the $\alpha'$ expansion of the $5$-point amplitude be required to obtain some of the coefficients of the series\footnote{These coefficients would be the ones which depend on the nontrivial MZV's that we referred to in subsection \ref{Using}.}.\\
\noindent In the following table we summarize the parallel between the steps of the revisited S-matrix method in the determination of the OSLEEL and the corresponding scattering amplitude $\alpha'$ calculations.
\begin{eqnarray}
\begin{tabular}{|c|c|c|}
\hline
Step of the revisited & Open superstring & Open superstring \\
S-matrix method & low energy effective lagrangian & scattering amplitude \\
at ${\alpha'}^p$ order & at ${\alpha'}^p$ order & at ${\alpha'}^p$ order\\
\hline
\hline
$Step \ I:$ & & \\
Requirement of absence & & Finds that the $N$-point \\
of $(\zeta \cdot k)^N$ terms in the & Reduces the general basis & amplitude can be written \\
$N$-point amplitude & to a constrained basis of terms & in terms of the ${\cal B}_N$ basis \\
$(N=4, \ldots, p+2)$ & & (see eq.(\ref{basis})) \\
\hline
$Step \ II:$ & & \\
Use of $n$-point amplitudes & & \\
information ($n \ll p+2$) & Determines $all$ the coefficients & Determines $all$ the \\
(presumably, only $n=4$ & of the constrained basis & momentum factors \\
and $n=5$) & & \\
\hline
\end{tabular}
\label{table3}
\end{eqnarray}
\vspace{0.5cm}
\noindent In a forthcoming work we will use the revisited S-matrix method to compute the ${\alpha'}^5$ order terms of the OSLEEL (in analogy to the calculations that we did in \cite{Barreiro0}) and also to compute those terms in the $N=7$ scattering amplitude \cite{Barreiro3}.
\section*{Acknowledgements}
We would like to thank N. Berkovits, R. Boels, E. Hatefi and S. Stieberger for useful e-mail correspondence at different stages in the development of this work. This work has been partially supported by the Brazilian agency FAPEMIG (research projects PPM-00617-11 and APQ-00140-12).
\section{INTRODUCTION}
Transverse coherent oscillations of bunches induced by a fast kicker magnet
are routinely used in synchrotrons or storage rings to measure, for example, the tune or other
ring parameters, see e.g.\ \cite{jones06}.
The transverse offset of a bunch, averaged over the bunch length, can be
recorded every single turn. The spectrum is then
concentrated around the base-band $Q_{f0} f_0$,
where $Q_{f0}$ is the fractional part of the betatron tune $Q_0$
and $f_0$ is the revolution frequency.
This diagnostic is usually used for time-resolved and
very accurate measurements of the tune $Q_{f0}$.
Transverse bunch decoherence is the process of
a turn-by-turn reduction of the total bunch offset signal
after an initial bunch displacement.
In a linear focusing lattice
the bunch decoherence is a manifestation of the lattice chromaticity $\xi$;
the synchrotron dynamics also plays an important role,
causing the signal to recohere exactly after one synchrotron period.
Other damping mechanisms, such as those due to lattice nonlinearities,
additionally damp the transverse oscillations.
Transverse decoherence is often used as a machine diagnostics tool.
Undesired transverse bunch oscillations can also appear
after the bunch-to-bucket transfer between synchrotrons.
In order to use transverse decoherence as a diagnostics tool for intense bunches of arbitrary length,
and also to control undesired oscillations of such bunches, it is important to understand
decoherence in the presence of transverse space charge and nonlinear synchrotron oscillations.
We demonstrate that the decoherence signal
can be explained in terms of the transverse head-tail bunch mode spectrum.
For finite chromaticity, the $k>0$ head-tail modes also contribute
to the coherent bunch spectrum.
The shift of the head-tail mode frequencies due to space charge and wall currents can be well
explained in terms of the analytical expressions for an airbag bunch distribution
\cite{blask98,boine2009}.
The head-tail mode frequencies are also modified
by changes in the individual particle synchrotron frequency.
In long bunches one has to account for the spread of the synchrotron frequencies.
Both transverse space charge and nonlinear synchrotron oscillations
are important for understanding the decoherence signals and transverse spectra.
We demonstrate that, once the spectrum- and decoherence modifications are understood,
they can be used to extract useful information about the bunches.
In this work we describe measurements of transverse bunch spectra
and decoherence signals obtained in the heavy-ion synchrotron SIS18 at GSI Darmstadt.
The observed modification of the head-tail spectrum and of the decoherence signal
caused by transverse space charge and nonlinear synchrotron oscillations
are explained in terms of our theoretical approach.
This approach is based on an expansion of an analytical theory for
head-tail modes in combination with particle tracking simulations.
In Sec. \ref{sec:theory} we use theoretical and numerical approaches to
analyse the effects of space charge and nonlinear synchrotron motion on the transverse spectra
and on the bunch decoherence signal.
We show that a simple model for the head-tail mode frequencies
with fitting parameters can be used
to explain the numerically obtained spectrum modifications
as well as the bunch decoherence as a function of the chromaticity.
In Sec.\,III the results of measurements performed at the SIS18 synchrotron are presented.
The space charge tune shifts determined from the transverse spectra are summarized,
the role of nonlinear synchrotron motion is demonstrated and transverse bunch
decoherence signals measured for different bunch conditions
are presented and explained. The work is concluded in Sec.\,IV.
\section{THEORY AND NUMERICAL SIMULATIONS}\label{sec:theory}
The Fourier transformation of the transverse
bunch signal provides peaks at frequencies
which represent the bunch eigenmodes, also called head-tail modes.
For short, low intensity bunches (the synchrotron frequency $f_s=Q_s f_0$ does
not depend on the amplitude, no collective effects)
the transverse spectrum has peaks at
$\Delta Q = Q-Q_{f0} = 0$ for $k=0$, $\Delta Q = \pm Q_s$ for $k = \pm 1$,
$\Delta Q = \pm 2 Q_s$ for $k = \pm 2$, and so forth.
Collective effects, like transverse space charge or
ring impedances change the bunch eigenfrequencies
and thus shift the peaks in the transverse spectrum.
Transverse space-charge effects are described
by the characteristic tune shift,
\begin{eqnarray}
\Delta Q_{\rm sc} =
\frac{\lambda_0 r_p R}{\gamma^3 \beta^2 \varepsilon_\perp} \ ,
\label{eq09}
\end{eqnarray}
where $R$ is the ring radius,
$\beta$ and $\gamma$ are the relativistic parameters,
$r_p=q_{\rm ion}^2/(4 \pi \epsilon_0 m c^2)$ is the classical particle radius,
$\lambda_0$ is the peak line density (at the bunch center),
and $\varepsilon_\perp$ is the transverse total emittance.
This tune shift corresponds to a round cross-section
with a transverse K-V distribution
and is defined as the modulus of the negative shift.
For an rms-equivalent bunch with a Gaussian transverse profile,
i.e.\ with a transverse rms emittance $\varepsilon_x=\varepsilon_\perp/4$,
the maximum space-charge tune shift is twice this value,
$\Delta Q_{\rm sc}^{\rm max} = 2 \Delta Q_{\rm sc}$.
In the case of an elliptic transverse cross-section
with the rms emittances $\varepsilon_y, \varepsilon_x$,
the parameter $\varepsilon_\perp$ in Eq.\,(\ref{eq09})
should be replaced by
\begin{eqnarray}
\varepsilon_\perp =
2 \Bigl( \varepsilon_y + \sqrt{\varepsilon_y\varepsilon_x
\frac{Q_{0y}}{Q_{0x}} } \Bigr) \ ,
\label{eq10}
\end{eqnarray}
given here for the vertical ($y$) plane; the horizontal plane follows correspondingly.
The parameter quantifying the effect of space charge in a bunch
is defined as the ratio of the characteristic space-charge tune shift Eq.\,(\ref{eq09})
to the small-amplitude synchrotron tune,
\begin{eqnarray}
q = \frac{\Delta Q_{\rm sc}}{Q_{s0}} \ .
\label{eq11}
\end{eqnarray}
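As a numerical companion to Eqs.\,(\ref{eq09})--(\ref{eq11}), the tune-shift definitions can be evaluated directly; the sketch below simply transcribes the formulas, and any concrete numbers fed into it are placeholders rather than measured values.

```python
import math

def delta_q_sc(lambda0, r_p, R, gamma, beta, eps_perp):
    # Characteristic space-charge tune shift, Eq. (eq09): round K-V beam,
    # defined as the modulus of the (negative) shift.
    return lambda0 * r_p * R / (gamma**3 * beta**2 * eps_perp)

def eps_perp_elliptic(eps_y, eps_x, Q0y, Q0x):
    # Effective emittance for an elliptic cross-section, Eq. (eq10),
    # written here for the vertical plane.
    return 2.0 * (eps_y + math.sqrt(eps_y * eps_x * Q0y / Q0x))

def q_parameter(dq_sc, Qs0):
    # Space-charge parameter q, Eq. (eq11).
    return dq_sc / Qs0
```

For a round beam with equal emittances and tunes, Eq.\,(\ref{eq10}) reduces to $\varepsilon_\perp = 4\varepsilon_y$, consistent with the rms-equivalence remark above.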
\subsection{LONGITUDINAL DIPOLE FREQUENCY}
An important parameter for head-tail oscillations
in long bunches is the effective synchrotron frequency,
which differs from the small-amplitude synchrotron frequency
of a short bunch. We will show
that in long bunches the longitudinal dipole frequency $Q_{\rm dip} f_0$ can be chosen
as a substitute for the small-amplitude incoherent synchrotron frequency $Q_{s0} f_0$.
The longitudinal dipole frequency can be accurately measured from the bunch signal,
as we will show in the experimental part of this paper.
The frequency of small-amplitude dipole oscillations
can be calculated as \cite{boine_rf2005}
\begin{eqnarray}
\frac{Q_{\rm dip}^2}{Q_{s0}^2} =
2 \int_0^{\tau_{\rm max}} \frac{V_{\rm rf}}{V_0} \lambda^\prime (\tau) d \tau \ ,
\label{eq01}
\end{eqnarray}
where the rf voltage form is $V_{\rm rf} = V_0 \sin (\tau)$,
$\tau$ is the rf phase in radians and $\lambda(\tau)$ is the normalized line density.
The small-amplitude bare synchrotron tune is given by
\begin{eqnarray}
Q_{s0}^2 =
\frac{q_{\rm ion} V_0 h |\eta|}{2 \pi m \gamma \beta^2 c^2} \ ,
\label{eq07}
\end{eqnarray}
where $\eta$ is the machine slip factor
and $h$ is the rf harmonic number.
The dependence of $Q_{\rm dip}$ on
the rms bunch length $\sigma_z$
for a Gaussian bunch is shown (red curve) in Fig.\,\ref{fg05}.
The bunch length $\sigma_z$ is expressed in radians
of the rf bucket, i.e.\ $\sigma_z = L_{\rm rms} h/R$,
where $L_{\rm rms}$ is the rms bunch length in meters.
For a parabolic longitudinal distribution (or elliptic bunch)
with the total half-length $\tau_p = \sqrt{5} \sigma_z$ one obtains
the analytic expression \cite{boine_rf2005},
\begin{eqnarray}
\frac{Q_{\rm dip}^2}{Q_{s0}^2} =
\frac{2 \tau_p - \sin (2 \tau_p)}{4 \sin (\tau_p) -
4 \tau_p \cos (\tau_p)} \ ,
\label{eq02}
\end{eqnarray}
which can be approximated in the case of a short bunch as
\begin{eqnarray}
\frac{Q_{\rm dip}}{Q_{s0}} =
\sqrt{1 - \frac{\sigma_z^2}{2}} \ .
\label{eq03}
\end{eqnarray}
From the comparison in Fig.\,\ref{fg05} it follows that
for short bunches with $\sigma_z \lesssim 0.6$
the approximation Eq.\,(\ref{eq03}) is sufficient.
For long bunches with $\sigma_z \gtrsim 1$ the
dipole frequencies for Gaussian and parabolic bunches start to differ.
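The three dipole-tune expressions can be compared numerically. For an untruncated Gaussian bunch in a sinusoidal bucket, integrating Eq.\,(\ref{eq01}) by parts gives $(Q_{\rm dip}/Q_{s0})^2 = \langle \cos\tau \rangle = e^{-\sigma_z^2/2}$; this closed form, which neglects the bucket truncation of the Gaussian tails, is used below as a stand-in for the numerically integrated red curve of Fig.\,\ref{fg05}.

```python
import math

def qdip_gauss(sigma_z):
    # Untruncated Gaussian bunch, sinusoidal rf:
    # (Q_dip/Q_s0)^2 = <cos tau> = exp(-sigma_z**2 / 2).
    return math.exp(-sigma_z**2 / 4.0)

def qdip_parabolic(sigma_z):
    # Parabolic (elliptic) bunch, Eq. (eq02), with tau_p = sqrt(5)*sigma_z.
    tp = math.sqrt(5.0) * sigma_z
    num = 2.0 * tp - math.sin(2.0 * tp)
    den = 4.0 * math.sin(tp) - 4.0 * tp * math.cos(tp)
    return math.sqrt(num / den)

def qdip_short(sigma_z):
    # Short-bunch approximation, Eq. (eq03).
    return math.sqrt(1.0 - sigma_z**2 / 2.0)
```

At $\sigma_z=0.3$ the three expressions agree to better than $0.5\%$, while at $\sigma_z \gtrsim 1$ the Gaussian and parabolic results visibly separate, as stated above.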
\begin{figure}[h!]
\includegraphics*[width=.5\linewidth]{mfig01}
\caption{\label{fg05}
The longitudinal dipole oscillation frequency
as a function of the rms bunch length.
The red curve is obtained using Eq.\,(\ref{eq01}) for a Gaussian bunch,
the blue dashed curve is given by Eq.\,(\ref{eq02})
and the black chain curve is given by Eq.\,(\ref{eq03}).
}
\end{figure}
\subsection{SPECTRUM OF A LONG BUNCH WITH SPACE CHARGE}
We use particle tracking simulations \cite{boine2006, rumolo2002}
in order to investigate the combined effect of space charge and
nonlinear synchrotron motion on transverse head-tail oscillations.
The numerical codes have been validated \cite{kornilov-icap09} using
analytic results \cite{blask98}. For the transverse space charge force,
a frozen electric field model is used,
i.e.\ a fixed potential configuration which follows
the center of mass for each bunch slice.
This approach is justified in the rigid-slice regime
and can be considered as a reasonable approach for
moderate and strong space charge \cite{burov-lebed2009, burov2009}.
A round transverse cross-section and a Gaussian transverse beam profile
were used in the simulations in this work.
Figure\,\ref{fg01} demonstrates the differences in the
transverse mode frequencies for bunches of different lengths,
with all other parameters kept identical, including
the space charge parameter $q=8$.
\begin{figure}[h!]
\includegraphics*[width=.5\linewidth]{mfig02}
\caption{\label{fg01}
Example transverse spectra
of long bunches from particle tracking simulations,
with space charge and nonlinear synchrotron motion taken into account.
Bunches with two different rms lengths $\sigma_z$ are assumed,
the space charge parameter $q=8$ and
the bare synchrotron tune $Q_{s0}$ are kept constant.
The spectra clearly show the head-tail modes
$k=0$, $k=1$ and $k=2$.
The tune shift $\Delta Q$ is related to the
bare betatron tune as $\Delta Q = Q-Q_{f0}$.
}
\end{figure}
The three lowest-order modes can be seen very well, the modes of the longer bunch
are shifted closer to the bare betatron tune than those of the shorter bunch.
In order to describe the bunch spectrum for
arbitrary bunch length and space charge strength,
simulation scans over different parameters have been performed.
Our simulation results suggest that the airbag bunch model \cite{blask98}
can be applied to the head-tail modes in a long Gaussian bunch,
\begin{eqnarray}
\frac{\Delta Q_k}{Q_{s0}} =
-\frac{q}{2} \pm \sqrt{\frac{q^2}{4} + k^2 q_*^2} \ ,
\label{eq05}
\end{eqnarray}
where $q_*=Q_{s*}/Q_{s0}$ is a characteristic parameter depending on the bunch length
and on the nonlinear synchrotron oscillations. In our case $q_*$ is used as a fitting parameter.
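A minimal sketch of the positive branch of Eq.\,(\ref{eq05}):

```python
import math

def head_tail_shift(k, q, q_star):
    # Delta Q_k / Q_s0 from the airbag model, Eq. (eq05), '+' branch
    # (the weakly damped positive modes).
    return -q / 2.0 + math.sqrt(q**2 / 4.0 + (k * q_star)**2)
```

For $q=8$ and $q_*=1$ this gives $\Delta Q/Q_{s0} \approx 0.12$ for $k=1$ and $\approx 0.47$ for $k=2$; in the strong space charge limit the shifts approach $k^2 q_*^2/q$.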
Keeping the space charge parameter constant,
the bunch length has been varied and the resulting
eigenfrequencies analyzed, see Fig.\,\ref{fg04}
for a scan with $q=8$.
We observe substantial changes in the bunch mode frequencies
with increasing bunch length.
The parameter $q_*$ has been obtained from these simulation scans.
Figure\,\ref{fg02} shows a comparison between simulation results
and the model Eq.\,(\ref{eq05}) for a fixed bunch length
and for different space charge parameters.
The plot demonstrates that the model Eq.\,(\ref{eq05}) is fairly
accurate over the parameter range of interest.
As we additionally show in Fig.\,\ref{fg02},
there is a small difference between transverse Gaussian bunch profiles
(with nonlinear transverse space charge)
and transverse K-V distributions (with linear space charge).
In our simulations we use the more realistic
Gaussian profile.
\begin{figure}[h!]
\includegraphics*[width=.5\linewidth]{mfig03}
\caption{\label{fg04}
Results of a simulation scan (circles) over the rms bunch length
for a bunch with space charge parameter $q=8$.
Red corresponds to the $k=1$ head-tail mode and
blue to $k=2$ modes.
For comparison, the chain curves show an estimation
using Eq.\,(\ref{eq05}) with $q_*=Q_{\rm dip}/Q_{s0}$.
}
\end{figure}
\begin{figure}[h!]
\includegraphics*[width=.5\linewidth]{mfig04}
\caption{\label{fg02}
Results of a simulation scan over the space charge parameter
for a bunch with the rms length $\sigma_z=1.06$\,rad.
The crosses show the eigenfrequencies of the modes
$k=1$ and $k=2$ for bunches with a transverse K-V distribution,
while the circles are for bunches with a Gaussian transverse
profile.
The chain curves are given by Eq.\,(\ref{eq05})
with the coefficients $q_*=0.95$ for $k=1$
and $q_*=0.83$ for $k=2$,
which corresponds to the results summarized in Fig.\,\ref{fg03}.}
\end{figure}
The chain curves in Fig.\,\ref{fg04} show that it would
not be correct to use the longitudinal dipole tune $Q_{\rm dip}$
for the parameter $Q_{s*}$.
An interesting observation is that
the dependence of the mode frequencies on
the bunch length is similar to that of $Q_{\rm dip}$,
although slightly different.
Also, the scale factor between $Q_{\rm dip}$ and the actual $\Delta Q$
is quite different for $k=1$ and $k=2$.
The bare synchrotron tune, which would correspond to $q_*=1$, is not
an adequate choice either: $\Delta Q$ would then be constant as the
bunch length changes, equal to the value
of the chain curve at small $\sigma_z$.
Simulation results for practical usage are presented in Fig.\,\ref{fg03}.
These $q_*$ values can be included in Eq.\,(\ref{eq05})
in order to estimate the space charge tune shift
of the bunch eigenfrequencies for a given bunch length.
The chain line demonstrates again the difference between
$Q_{s*}$ which describes the tune shift and the longitudinal dipole
frequency.
\begin{figure}[h!]
\includegraphics*[width=.5\linewidth]{mfig05}
\caption{\label{fg03}
Summary of the simulation scans for the effect of the bunch length
on the eigenfrequencies of the head-tail modes $k=1$ and $k=2$
with space charge.
For comparison, the chain curve shows the longitudinal
dipole frequency from Eq.\,(\ref{eq01}) for a Gaussian bunch.
}
\end{figure}
\subsection{TRANSVERSE DECOHERENCE}
\subsubsection{LINEAR DECOHERENCE}
First, we discuss the linear transverse decoherence
due to chromaticity,
i.e.\ the case where the only source of a tune shift is the linear
dependence of the betatron tune on the momentum deviation,
$\Delta Q_\xi / Q = \xi \Delta p / p$.
As a result of an initial transverse displacement
$\overline{x}(\tau)=A_0$,
a bunch oscillates in the corresponding plane (here $x$).
As we consider the linear case, all the particles
have the identical synchrotron frequency $Q_s f_0$.
The betatron phase shift related to the
bare tune $Q_0$ has a harmonic dependency along a synchrotron period.
Hence, after one synchrotron oscillation,
the betatron phase shift is exactly compensated
and the transverse amplitude is equal to the
initial displacement $A_0$.
Assuming the Gaussian momentum distribution,
the amplitude of the bunch offset evolves
with the turn number $N$ as \cite{meller87}
\begin{eqnarray}
A(N) = A_0 \exp \Biggl\{ -2
\Bigl( \frac{\xi Q_0 \delta_p}{Q_s}
\sin (\pi Q_s N)
\Bigr)^2 \Biggr\} \ ,
\label{eq04}
\end{eqnarray}
where
$\delta_p$ is the normalized rms momentum spread.
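Equation\,(\ref{eq04}) is straightforward to evaluate; the sketch below reproduces the envelope of the dashed curves in Fig.\,\ref{fg08}.

```python
import math

def offset_envelope(N, A0, xiQ0, delta_p, Qs):
    # Amplitude of the bunch offset after a rigid kick in a linear
    # lattice, Eq. (eq04); xiQ0 is the product xi*Q_0.
    chrom = xiQ0 * delta_p / Qs * math.sin(math.pi * Qs * N)
    return A0 * math.exp(-2.0 * chrom**2)
```

The envelope returns to $A_0$ whenever $\sin(\pi Q_s N)=0$, i.e.\ after every synchrotron period, and reaches its minimum at the half period.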
Figure\,\ref{fg08} shows an example for bunch decoherence
after a rigid kick.
It demonstrates that a higher chromaticity
provides a faster decoherence, and that after one synchrotron period
$N_{\rm s}=1/Q_{\rm s}$ the initial offset amplitude reappears,
which is called recoherence.
\begin{figure}[h!]
\includegraphics*[width=.6\linewidth]{mfig06}
\caption{\label{fg08}
A particle tracking simulation for a Gaussian bunch
after an offset kick $\overline{x}(\tau)={\rm const}$
without space charge and for a linear rf bucket, $Q_s=Q_{s0}=0.01$.
The full lines show the time evolution of the bunch offset
for the chromaticities
$\xi Q_0=-4.3$ (blue) and $\xi Q_0=-12.5$ (red),
the dashed lines are analytical results and are given by Eq.\,(\ref{eq04}).
}
\end{figure}
\subsubsection{DECOHERENCE WITH SPACE CHARGE}
Transverse space charge causes a betatron frequency shift,
which depends on the particle transverse amplitude
and on the longitudinal particle position in the bunch.
The decoherence behaviour is thus very different from
linear decoherence at low bunch intensities, Eq.\,(\ref{eq04}).
Figure\,\ref{fg06} shows examples of the bunch oscillations
after a rigid kick for three different values
of the space-charge parameter.
The chromaticity corresponds to $\chi_b=4.5$,
where $\chi_b = Q_0 \xi L_b / (\eta R)$ is
the chromatic phase shift over the bunch length;
the rms bunch length is $\sigma_z=1.06$\,rad.
We observe periodic recoherence
with periodicities of 770 turns ($q=7$, top), 1270 turns ($q=12$, middle)
and 1640 turns ($q=16$, bottom), while the low-intensity recoherence
would have a periodicity of 100 turns for the same parameters.
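The quoted periodicities are just the beat periods of the two surviving modes; a quick check using the $\Delta Q$ values of Fig.\,\ref{fg06}:

```python
def recoherence_period(dQ):
    # Beat period (in turns) between the k=0 and k=1 modes:
    # N_rec = 1 / (Q_{k=1} - Q_{k=0}).
    return 1.0 / dQ

Qs0 = 0.01
# Delta Q_{k=1} / Q_s0 = 0.13, 0.079, 0.061 for q = 7, 12, 16.
periods = [recoherence_period(f * Qs0) for f in (0.13, 0.079, 0.061)]
```

This gives approximately 770, 1270 and 1640 turns for $q=7$, $12$ and $16$, compared with the low-intensity value $1/Q_{s0}=100$ turns.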
\begin{figure}[h!]
\includegraphics*[width=.6\linewidth]{mfig07}
\caption{\label{fg06}
Transverse bunch decoherence from
particle tracking simulations for a Gaussian bunch
after a rigid kick $\overline{x} (\tau) = {\rm const}$
for different space charge parameters.
Top plot: $q=7$, middle plot: $q=12$
and for the bottom plot $q=16$.
The bare synchrotron tune is $Q_{s 0} = 0.01$,
i.e.\ the low-intensity recoherence has a periodicity of 100\,turns.
After the transition period of higher-order mode damping,
the periodicity always corresponds
to the frequency difference
$\Delta Q = Q_{k=1} - Q_{k=0}$.
Top plot: $\Delta Q_{k=1} = 0.13 \ Q_{s0}$ (periodicity 770 turns),
middle plot: $\Delta Q_{k=1} = 0.079 \ Q_{s0}$ (periodicity 1270 turns),
and for the bottom plot
$\Delta Q_{k=1} = 0.061 \ Q_{s0}$ (periodicity 1640 turns).
}
\end{figure}
The key to understanding the decoherence
of a bunch with transverse space charge is the representation
of the initial kick as a superposition of the
bunch head-tail eigenmodes,
\begin{eqnarray}
A_0 = \sum_k a_k \exp
\Bigl(-i \frac{\chi_b \tau}{\tau_b} + i \phi_k \Bigr)
\overline{x}_k (\tau) \ ,
\label{eq08}
\end{eqnarray}
where we have extracted the chromatic phase shift along the bunch
with the corresponding phase $\phi_k$ for each eigenfunction.
The second key is the fact that the different eigenmodes
are subject to Landau damping, but with different intensity thresholds and damping rates.
Landau damping due to the space charge tune spread along the bunch
\cite{burov2009, balb2009, kornilov_prstab10} is the most important
mechanism in the beam parameter regime considered in the simulations of this work.
In the presence of space charge, especially the negative
and the high-$k$ eigenmodes present in the initial kick Eq.\,(\ref{eq08})
are quickly suppressed, so that after a transition period
a mixture of the surviving eigenmodes continues to oscillate.
In Ref.\,\cite{kornilov_hb10d} we have discussed in detail the case
$q=1$, where all the head-tail modes $k \geq 1$ are strongly
suppressed by Landau damping such that only the mode $k=0$ remains.
For stronger space charge, as in Fig.\,\ref{fg06},
the modes $k \geq 2$ are damped and the resulting oscillation
is the mixture of the $k=0$ and $k=1$ modes.
The recoherence periodicity seen in Fig.\,\ref{fg06} corresponds exactly
to the frequency difference between these two modes,
as is the case for wave beating.
In a real machine there are often nonlinear damping mechanisms which would further suppress
the $k=0$ and $k=1$ modes, but in the simulation
we only have the space charge induced Landau damping which is zero for the $k=0$ mode
and is very weak for the $k=1$ mode at these $q$ parameters.
It is obvious, and can be seen from Eq.\,(\ref{eq08}),
that the composition of the eigenmodes after a rigid kick depends on the chromaticity.
This is also demonstrated in Fig.\,\ref{fg07} which shows
a comparison of the bunch decoherence for three different chromaticities.
The bunch parameters correspond to Fig.\,\ref{fg06}, the
space charge parameter is chosen as $q=7$.
We see that the periodicity of 770 turns does not change; it
corresponds to the frequency difference $\Delta Q = Q_{k=1} - Q_{k=0}=0.13\, Q_{s0}$.
The reason for the different oscillation amplitudes in Fig.\,\ref{fg07}
is the increasing contribution of the higher-order modes $k \ge 2$
with growing $\xi$ in the eigenmode mixture of the initial rigid bunch offset (see Eq.\,(\ref{eq08})).
Recall that these modes are quickly suppressed for the parameters of this bunch, so that
the resulting recoherence is a beating of the remaining $k=0$ and $k=1$ modes.
\begin{figure}[h!]
\includegraphics*[width=.6\linewidth]{mfig08}
\caption{\label{fg07}
Transverse bunch decoherence for a bunch with space charge parameter $q=7$
from particle tracking simulation for different chromaticities.
The black curve:
$\chi_b=3$,
the red curve:
$\chi_b=4.5$
and for the blue curve
$\chi_b=6$.
The recoherence results from a mixture of the $k=0$ mode
and $k=1$ mode, $\Delta Q_{k=1} = 0.13 \ Q_{s0}$ (periodicity 770 turns).
}
\end{figure}
The airbag \cite{blask98} eigenmodes $\overline{x}_k (\tau)=
A \cos (k \pi \tau / \tau_b)$
can be taken as a reasonable approximation \cite{kornilov_prstab10}
of the eigenfunctions in a Gaussian bunch.
The rigid offset decomposition Eq.\,(\ref{eq08}) can then be solved
and the resulting mode coefficients are
$a_0=(2 / \chi_b) \sin (\chi_b/2)$,
$a_1=[4 \chi_b/ |\chi_b^2 - \pi^2|] \cos (\chi_b/2)$,
$a_2=[4 \chi_b/ |\chi_b^2 - 4 \pi^2|] \sin (\chi_b/2)$.
The negative modes have the same coefficients
but can be disregarded in the case of a bunch with space charge \cite{burov2009, kornilov_prstab10},
because of their large damping rates.
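Evaluating these coefficients numerically (a sketch; absolute values are taken, as plotted in Fig.\,\ref{fg09}):

```python
import math

def airbag_kick_amplitudes(chi_b):
    # |a_k| for k = 0, 1, 2: decomposition of a rigid offset
    # x(tau) = const into the positive airbag eigenmodes.
    a0 = abs(2.0 / chi_b * math.sin(chi_b / 2.0))
    a1 = abs(4.0 * chi_b / (chi_b**2 - math.pi**2) * math.cos(chi_b / 2.0))
    a2 = abs(4.0 * chi_b / (chi_b**2 - 4.0 * math.pi**2) * math.sin(chi_b / 2.0))
    return a0, a1, a2
```

For $\chi_b \to 0$ the kick is a pure $k=0$ mode ($a_0 \to 1$, $a_1, a_2 \to 0$), while between $\chi_b=3$ and $\chi_b=6$ the relative $k=2$ amplitude grows, in line with Fig.\,\ref{fg09}.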
These coefficients are plotted in Fig.\,\ref{fg09},
where we see that for the chromaticity range of interest
the relative part of the $k=2$ mode increases with growing $\chi_b$.
The higher-order modes follow this trend.
The contribution of the $k=0$ and $k=1$ modes thus decreases
as we also can observe in the simulations, see Fig.\,\ref{fg07}.
Perfect agreement with the coefficients in Fig.\,\ref{fg09}
cannot be expected, since the analytical model is for an
airbag \cite{blask98} bunch.
\begin{figure}[h!]
\includegraphics*[width=.6\linewidth]{mfig09}
\caption{\label{fg09}
Relative amplitudes of the airbag-bunch eigenmodes
for the rigid offset $\overline{x} (\tau)={\rm const}$
as a function of the chromatic phase shift
$\chi_b=Q_0 \xi L_b / (\eta R)$.
}
\end{figure}
\clearpage
\section{MEASUREMENTS}
Transverse decoherence experiments have been performed
in the heavy ion synchrotron SIS18 \cite{sis18} at GSI Darmstadt.
Bunches of Ar$^{18+}_{40}$ ions were stored at an energy of 100\,MeV/u
and kicked transversely with a kick duration of one turn.
The rf harmonic number was $h=4$ and all four bunches
generally behaved identically.
The Beam Position Monitors (BPMs) provide a higher-quality signal in the vertical plane
than in the horizontal one due to a smaller plate gap;
we therefore use the vertical BPM signals in the results presented here.
The vertical bare tune was around $Q_0=4.31$, although
it could vary for different intensities and machine parameters.
SIS18 general parameters are: $R=34.492$\,m, $\gamma_t=5.45$,
$\xi \approx -1.4$.
As in the theory section, we first discuss
the longitudinal dipole frequency.
Figure\,\ref{fg20} demonstrates the bunch spectrum
obtained from the sum BPM signal.
The satellites of the central frequency are well resolved
and the peaks are equidistant, which provides the longitudinal dipole frequency.
The dipole frequency determined
in this way is $Q_{\rm dip}=2.5\times10^{-3}$,
the peak rf voltage was $V_0=9$\,kV here.
The bare synchrotron tune
can also be accurately determined using Eq.\,(\ref{eq07}) and it is
$Q_{s0}=3.24\times10^{-3}$ in this case.
Note the large difference between the bare synchrotron
frequency and the dipole frequency.
Using the curves from Fig.\,\ref{fg05} we can obtain
the rms bunch length
$\sigma_z=1.0$\,rad,
which is a typical length in the experiments at SIS18.
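This inference can be reproduced with the closed form obtained by integrating Eq.\,(\ref{eq01}) by parts for an untruncated Gaussian bunch, $(Q_{\rm dip}/Q_{s0})^2 = e^{-\sigma_z^2/2}$; since the bucket truncation is neglected, the result is only an approximation to the Gaussian curve of Fig.\,\ref{fg05}.

```python
import math

def sigma_z_from_ratio(ratio):
    # Invert Q_dip/Q_s0 = exp(-sigma_z**2 / 4) for the rms bunch
    # length (in radians of the rf bucket).
    return 2.0 * math.sqrt(math.log(1.0 / ratio))

# Measured values quoted in the text: Q_dip = 2.5e-3, Q_s0 = 3.24e-3.
sigma_z = sigma_z_from_ratio(2.5e-3 / 3.24e-3)
```

With the measured values this gives $\sigma_z \approx 1.02$\,rad, consistent with the quoted $\sigma_z=1.0$\,rad.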
\begin{figure}[h!]
\includegraphics*[width=.7\linewidth]{mfig10}
\caption{\label{fg20}
An example for a longitudinal
spectrum from BPM bunch measurements at SIS18.
The spectrum corresponds to the 22nd harmonic
at the frequency of 13.073\,MHz,
$\Delta Q = (f - n f_0)/f_0$.
}
\end{figure}
The first example of the decoherence measurements
is presented in Fig.\,\ref{fg21} and Fig.\,\ref{fg22}.
Figure\,\ref{fg21} shows the turn-by-turn transverse
bunch offset after the kick.
Figure\,\ref{fg22} demonstrates the spectrum of these
bunch oscillations, the frequency on the horizontal axis
is normalized by the bare synchrotron tune.
The red line shows the spectrum of the whole bunch
and exhibits mainly the peaks of two modes, which we identify
as the $k=0$ mode and the $k=2$ mode.
If we calculate a Fourier transform for the bunch head,
its spectrum (the blue line) clearly reveals other peaks,
so that we can identify five head-tail modes, see Fig.\,\ref{fg22}.
The spectrum is very different from the case without collective effects:
the lines are not equidistant and the negative modes ($k<0$) are suppressed.
That the mode tune shifts are consistent with the space-charge model
can be verified by calculating the space charge parameter,
\begin{eqnarray}
q = \frac{k^2 q_*^2 - ( \Delta Q_k / Q_{s0} )^2 }{
\Delta Q_k / Q_{s0} } \ ,
\label{eq06}
\end{eqnarray}
which corresponds to the model Eq.\,(\ref{eq05}).
The synchrotron oscillation parameter $q_*$
for the modes $k=1$ and $k=2$ is obtained from
the results given in Fig.\,\ref{fg03};
$\Delta Q_k$ is the tune shift of the bunch mode in the measured spectrum.
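As a worked example of Eq.\,(\ref{eq06}), consider the $k=2$ line of the bunch in Figs.\,\ref{fg21} and \ref{fg22}, with $\Delta Q_{k=2}=1.35\times10^{-3}$ and $Q_{s0}=4.04\times10^{-3}$; the value $q_*\approx 0.95$ used below is an assumed read-off from Fig.\,\ref{fg03}, not a quoted number.

```python
def q_from_mode(dQk_over_Qs0, k, q_star):
    # Space-charge parameter from a measured head-tail tune shift, Eq. (eq06).
    x = dQk_over_Qs0
    return ((k * q_star)**2 - x**2) / x

# q_star = 0.95 is an assumed read-off from Fig. fg03 for this bunch length.
q_est = q_from_mode(1.35e-3 / 4.04e-3, 2, 0.95)
```

This yields $q \approx 10$, matching the blue circles in Fig.\,\ref{fg30}.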
Here and for the examples to follow we summarize the space charge parameters $q$
obtained from the different eigenfrequencies of the spectra in Fig.\,\ref{fg30}.
The relevant bunch parameters are summarized in Table\,1.
The values for the modes from Fig.\,\ref{fg22}
are shown in Fig.\,\ref{fg30} with the blue circles, $q \approx 10$.
Since this was a rather short bunch, $\sigma_z=0.66$,
the $q_*$ parameter was close to 1.0
and it was thus possible
to estimate the space charge parameter for the $k=3$ mode as well.
\begin{figure}[h!]
\includegraphics*[width=.7\linewidth]{mfig11}
\caption{\label{fg21}
Time evolution of the bunch offset in the vertical plane
at SIS18 after a transverse kick.
The recoherence periodicity corresponds to the mix of the dominating head-tail modes
$k=0$ and $k=2$ with $\Delta Q_{k=2}=1.35\times10^{-3}$,
giving the periodicity of 740\,turns.
}
\end{figure}
\begin{figure}[h!]
\includegraphics*[width=.7\linewidth]{mfig12}
\caption{\label{fg22}
Transverse coherent spectrum
for the bunch from Fig.\,\ref{fg21}, $Q_{s0}=4.04\times10^{-3}$.
The red spectrum is obtained from a frequency analysis
of the complete bunch offset,
while the blue spectrum is a result of a frequency analysis
for the bunch head.
In the complete-bunch spectrum the mode $k=2$ dominates,
while the bunch-head spectrum reveals the odd modes
$k=1$ and $k=3$, but also the mode $k=4$.
The frequencies of the head-tail modes
provide the space charge parameter $q \approx 10$,
see blue circles in Fig.\,\ref{fg30}.
}
\end{figure}
\begin{table}[h!]
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
signals & symbols in Fig.\,\ref{fg30} & $\sigma_z$ & $\chi_b$ &
$Q_{\rm dip}, 10^{-3}$ & $Q_{s0}, 10^{-3}$ & $q$ \\
\hline
Figs.\,\ref{fg21},\,\ref{fg22} & blue circles &
\ \ 0.66 \ \ & \ $\approx$ 3 \ & 3.63 & 4.0 & $\approx$ 10 \\
\hline
Figs.\,\ref{fg23},\,\ref{fg24} & red squares &
1.15 & $\approx$ 5 & 2.35 & 3.24 & $\approx$ 9 \\
\hline
Figs.\,\ref{fg25},\,\ref{fg26} & black crosses &
1.2 & $\approx$ 5 & 2.28 & 3.24 & $\approx$ 4.5 \\
\hline
Figs.\,\ref{fg28},\,\ref{fg29} & cyan triangles &
1.0 & $\approx$ 0 & 2.5 & 3.24 & $\approx$ 4 \\
\hline
\end{tabular}
\end{center}
\caption{Bunch parameters for the signals shown in this paper
and the space-charge parameter $q$ obtained from the transverse spectra.
The $q$--values from the different head-tail modes for every bunch
are summarized in Fig.\,\ref{fg30}.
In the first case the rf voltage was $V_0=14$\,kV,
in the last three cases $V_0=9$\,kV.
}
\end{table}
\begin{figure}[h!]
\includegraphics*[width=.5\linewidth]{mfig13}
\caption{\label{fg30}
Summary for the space charge parameter determined
from the coherent head-tail spectra of different Ar$^{18+}_{40}$ bunched beams
in the SIS18 synchrotron.
The method is given by Eq.\,(\ref{eq06}),
with the coefficients $q_*$ corresponding to Fig.\,\ref{fg03}.
The spectra are shown in Fig.\,\ref{fg22} (blue circles),
Fig.\,\ref{fg24} (red squares),
Fig.\,\ref{fg26} (black crosses) and
in Fig.\,\ref{fg29} (cyan triangles).
}
\end{figure}
Figure\,\ref{fg30} demonstrates a certain consistency
between the space-charge parameter values obtained from different head-tail modes,
which, however, cannot be expected to be perfect.
The model Eq.\,(\ref{eq05}) is based on the airbag \cite{blask98} bunch,
which is a reasonable, but still approximate,
description of a Gaussian bunch \cite{kornilov_prstab10}.
The bunch spectra are also weakly affected by the facility impedances,
image charges and nonlinear field components neglected in our analysis.
Finally, our simulations assume Gaussian bunch profiles
in both the transverse and the longitudinal plane.
This is a good, but not exact, description
of the bunches in the machine experiments.
The space-charge parameter $q=\Delta Q_{\rm sc} / Q_{s0}$ can be additionally estimated
using Eq.\,(\ref{eq09}) and the measured bunch parameters.
The particle number and the bunch length could be measured
with a reasonable accuracy.
The transverse beam radius,
which enters the space-charge tune shift quadratically
($\varepsilon_y = a_y^2 Q_{0y}/R$, where $a_y$ is the vertical rms radius)
and is thus especially important,
could not be determined with satisfactory precision,
as was also the case in the previous coasting-beam measurements \cite{paret2010}
at SIS18.
As an example, here we provide
an estimation for the bunch presented in Figs.\,\ref{fg21},\,\ref{fg22}.
The transverse rms emittances were $\epsilon_y=6.2$\,mm\,mrad
and $\epsilon_x=8.4$\,mm\,mrad,
and the number of ions per bunch was $5.1\times 10^9$.
Using these parameters, the bunch length, and the bare synchrotron tune
(see Table\,1), we obtain from Eqs.\,(\ref{eq09}) and (\ref{eq10}) $q_{\rm est} \approx 7$.
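The quoted relation between the rms emittance and the beam radius, $\varepsilon_y = a_y^2 Q_{0y}/R$, can be inverted to estimate the rms radius that enters the space-charge tune shift quadratically. A minimal sketch in Python; the machine radius and bare tune below are assumed placeholder values for SIS18, not numbers taken from this paper:

```python
import math

# Relation quoted in the text: eps_y = a_y**2 * Q_0y / R,
# i.e. the rms beam radius enters the space-charge tune shift squared.
# R and Q_0y are ASSUMED placeholder values for SIS18, not from this paper.
R = 216.72 / (2 * math.pi)   # machine radius [m] from the SIS18 circumference (assumed)
Q_0y = 3.3                   # vertical bare tune (assumed working point)

eps_y = 6.2e-6               # measured vertical rms emittance [m rad], from the text
a_y = math.sqrt(eps_y * R / Q_0y)
print(f"vertical rms beam radius a_y = {a_y * 1e3:.1f} mm")
```

This illustrates why the radius measurement dominates the uncertainty of $q_{\rm est}$: a relative error in $a_y$ enters the tune shift twice.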
In this work we do not claim perfect agreement
of the $q$--values obtained from the transverse spectra
with the $q$--estimates provided by the bunch parameters and Eq.\,(\ref{eq09}),
mainly due to the uncertainty of the transverse beam size measurements
at SIS18.
In the next example we show a longer bunch, $\sigma_z=1.15$,
due to a lower rf voltage, see Table\,1.
The transverse bunch oscillations after the kick are shown in Fig.\,\ref{fg23}
and the corresponding spectrum is shown in Fig.\,\ref{fg24}.
In comparison to the previous example (Figs.\,\ref{fg21},\,\ref{fg22}),
the bunch here is longer, but the particle number is higher and
the synchrotron tune is larger, thus the space charge parameter
is similar, $q \approx 9$.
As we can see in Fig.\,\ref{fg24}, the spectrum is dominated
by two modes, the $k=0$ mode at the bare tune,
and another one at $\Delta Q=0.91\times10^{-3}$,
which gives the periodicity of the bunch recoherence,
see Fig.\,\ref{fg23}.
The mode $k=1$ is suppressed, as in the previous example,
so the second dominating mode is expected to be $k=2$ again.
This can be verified as follows.
Plotting the bunch vertical traces and subtracting
the total bunch offset, thus reducing the contribution
of the $k=0$ mode, we observe a clear two-node structure
of the $k=2$ mode, see Fig.\,\ref{fg27}.
The frequencies of the further peaks in Fig.\,\ref{fg24} correspond
rather well to the space-charge model with $q \approx 9$.
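One recoherence cycle corresponds to the two dominating head-tail modes slipping by one full oscillation relative to each other, so the period in turns is simply the inverse of their tune separation, $N_{\rm turns} = 1/\Delta Q$. A minimal numerical check in Python, using the tune separations quoted in the figure captions of this paper:

```python
# Recoherence period (in turns) from the tune separation of the
# two dominating head-tail modes: N_turns = 1 / dQ.
def recoherence_turns(dQ):
    return 1.0 / dQ

# Tune separations and quoted periods from the figure captions
for dQ, quoted in [(0.91e-3, 1100), (0.68e-3, 1470), (1.86e-3, 540)]:
    print(f"dQ = {dQ:.2e} -> {recoherence_turns(dQ):.0f} turns (quoted: {quoted})")
```

All three cases reproduce the quoted periodicities to within a few turns.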
\begin{figure}[h!]
\includegraphics*[width=.7\linewidth]{mfig14}
\caption{\label{fg23}
Time evolution of the bunch offset in the vertical plane
at SIS18 after a transverse kick.
The recoherence periodicity corresponds to the mix of the dominating head-tail modes
$k=0$ and $k=2$ with $\Delta Q_{k=2}=0.91\times10^{-3}$,
giving the periodicity of 1100\,turns.
}
\end{figure}
\begin{figure}[h!]
\includegraphics*[width=.7\linewidth]{mfig15}
\caption{\label{fg24}
Transverse coherent spectrum
for the bunch from Fig.\,\ref{fg23}, $Q_{s0}=3.24\times10^{-3}$.
Head-tail modes up to $k=5$ are well seen,
except for the modes $k=1$.
The frequencies of the head-tail modes
provide the space charge parameter $q \approx 9$,
see red squares in Fig.\,\ref{fg30}.
}
\end{figure}
\begin{figure}[h!]
\includegraphics*[width=.7\linewidth]{mfig16}
\caption{\label{fg27}
Traces of the transverse bunch signal for
100 consecutive turns for the bunch from Fig.\,\ref{fg23} and
Fig.\,\ref{fg24}.
This result confirms that the mode $k=2$ dominates
during the bunch decoherence.
}
\end{figure}
In the next example we demonstrate a bunch decoherence
dominated by a mixture of the $k=0$ mode with the $k=1$ mode;
the bunch oscillations are shown in Fig.\,\ref{fg25},
the spectrum is shown in Fig.\,\ref{fg26}.
The horizontal chromaticity was partly compensated,
to half of its natural value;
the associated nonlinearities probably contributed to
the formation of the longer bunch
and to a stronger damping of the $k=2$ mode.
The recoherence is thus considerably slower, with a period of nearly 1500 turns
(see Fig.\,\ref{fg25}),
which is given by the frequency of the $k=1$ mode,
in good agreement with the bunch spectrum, Fig.\,\ref{fg26}.
Another striking feature of this spectrum
is the clear presence of the $k=-1$ mode,
with its frequency shifted strongly downwards,
in very good agreement with the space-charge model,
see black crosses in Fig.\,\ref{fg30}.
The appearance of the $k=-1$ mode was probably made possible by
the rather moderate space charge, $q \approx 4.5$, in this case.
\begin{figure}[h!]
\includegraphics*[width=.7\linewidth]{mfig17}
\caption{\label{fg25}
Time evolution of the bunch offset in the vertical plane
at SIS18 after a transverse kick.
The recoherence periodicity corresponds to the mix of the dominating head-tail modes
$k=0$ and $k=1$ with $\Delta Q_{k=1}=0.68\times10^{-3}$,
giving the periodicity of 1470\,turns.
}
\end{figure}
\begin{figure}[h!]
\includegraphics*[width=.7\linewidth]{mfig18}
\caption{\label{fg26}
Transverse coherent spectrum
for the bunch from Fig.\,\ref{fg25}, $Q_{s0}=3.24\times10^{-3}$.
The mode $k=1$ dominates,
the spectrum shows clearly the mode $k=-1$,
with the eigenfrequency corresponding well to the model Eq.\,(\ref{eq05}).
The frequencies of the head-tail modes
provide the space charge parameter $q \approx 4.5$,
see black crosses in Fig.\,\ref{fg30}.
}
\end{figure}
The transverse decoherence observed in the case presented
in Figs.\,\ref{fg28},\,\ref{fg29}, is very different
from the third example, Fig.\,\ref{fg25},
although the space-charge parameter is similar, $q \approx 4$,
as well as the bunch length, see Table\,1.
We see that the recoherence period is considerably shorter,
due to the dominance of the $k=2$ mode,
as confirmed by the bunch spectrum, see the red line
in Fig.\,\ref{fg29}.
More remarkably, the bunch decoherence in Fig.\,\ref{fg28}
shows a much weaker amplitude drop between the recoherence peaks.
The reason is the vertical chromaticity,
which was compensated to nearly zero according to the set machine parameters.
This is predicted by the linear theory Eq.\,(\ref{eq04}),
also shown in Fig.\,\ref{fg08}.
According to the mode-mixture interpretation,
at a small chromaticity the contribution of the $k=0$ mode is very large,
see Fig.\,\ref{fg09}.
The spectrum from the measurements in Fig.\,\ref{fg29} confirms this.
The relatively small contribution of the $k=2$ mode sets
the periodicity of the weak recoherence.
For the determination of the space-charge parameter,
the eigenfrequency of the $k=1$ mode is needed,
which can be obtained from a frequency analysis of the
bunch-head oscillations, see the blue line in Fig.\,\ref{fg29},
and the resulting $q$--values in Fig.\,\ref{fg30} (cyan triangles).
\begin{figure}[h!]
\includegraphics*[width=.7\linewidth]{mfig19}
\caption{\label{fg28}
Time evolution of the bunch offset in the vertical plane
at SIS18 after a transverse kick.
The vertical chromaticity was compensated for this beam to $\xi_y \approx 0$.
The recoherence periodicity corresponds to the mix of the dominating head-tail modes
$k=0$ and $k=2$ with $\Delta Q_{k=2}=1.86\times10^{-3}$,
giving the periodicity of 540\,turns.
}
\end{figure}
\begin{figure}[h!]
\includegraphics*[width=.7\linewidth]{mfig20}
\caption{\label{fg29}
Transverse coherent spectrum
for the bunch from Fig.\,\ref{fg28}, $Q_{s0}=3.24\times10^{-3}$.
The red spectrum is obtained from a frequency analysis
of the complete bunch offset,
while the blue spectrum is a result of a frequency analysis
for the bunch head.
The mode $k=0$ strongly dominates;
in the complete-bunch spectrum the mode $k=2$ is the next strongest,
and the bunch-head spectrum reveals the odd modes
$k=1$ and $k=3$.
The frequencies of the head-tail modes
provide the space charge parameter $q \approx 4$,
see cyan triangles in Fig.\,\ref{fg30}.
}
\end{figure}
\section{CONCLUSIONS}
The transverse decoherence and coherent eigenspectra
in long bunches with space charge have been studied using measurements at the SIS18 heavy-ion synchrotron and
particle tracking simulations.
A model Eq.\,(\ref{eq05}) for the combined effect of space charge and nonlinear synchrotron oscillations
has been formulated, with the fitting parameter $q_*$ obtained from
the particle tracking simulations for the low-order head-tail modes.
The space-charge parameter $q=\Delta Q_{\rm sc}/Q_{s0}$
of the bunch can be determined for every head-tail mode
from the corresponding frequency shift $\Delta Q_k$, see Eq.\,(\ref{eq06}),
according to the given bunch length.
The transverse decoherence in bunches with space charge has been observed experimentally
and quantitatively explained using simulations and analytic models.
An initial rigid bunch offset can be decomposed into
head-tail modes. The chromaticity determines
the contribution of the different head-tail modes.
Using the airbag \cite{blask98} eigenmodes as an approximation for
the bunch head-tail modes, the relative amplitudes
can be found analytically, see Fig.\,\ref{fg07}.
Different head-tail modes also experience different Landau damping rates.
After a transition period the bunch oscillation
is a combination of the remaining modes.
For example, it can be a mix of the $k=0$ mode and $k=1$ mode.
The periodicity of the bunch recoherence then corresponds
to the frequency difference between these two modes.
Our simulation examples demonstrate this explanation of the bunch decoherence
for different space-charge parameters and for different chromaticities,
see Figs.\,\ref{fg06},\,\ref{fg07}.
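For the airbag bunch model, the relative amplitudes of the head-tail modes excited by a rigid offset are Bessel functions of the head-tail phase, $|A_k| \propto |J_k(\chi_b)|$ — a standard airbag-model result, used here only as a sketch of the mode-mixture argument, not as this paper's exact Fig.\,\ref{fg07} calculation. It is consistent with the observations above: for $\chi_b \approx 3$ the $k=2$ term is the largest, while for $\chi_b \approx 0$ only $k=0$ survives. A stdlib-only Python sketch:

```python
import math

def bessel_j(k, x, n=2000):
    # Numerical Bessel J_k(x) from its integral representation
    # J_k(x) = (1/pi) * integral_0^pi cos(k*t - x*sin(t)) dt  (trapezoid rule)
    f = lambda t: math.cos(k * t - x * math.sin(t))
    h = math.pi / n
    s = 0.5 * (f(0.0) + f(math.pi)) + sum(f(i * h) for i in range(1, n))
    return s * h / math.pi

def mode_amplitudes(chi_b, kmax=4):
    # Relative head-tail mode amplitudes for a rigid offset (airbag model sketch)
    return {k: abs(bessel_j(k, chi_b)) for k in range(kmax + 1)}

print(mode_amplitudes(3.0))  # chi_b ~ 3 (first example): k = 2 is the largest
print(mode_amplitudes(0.0))  # chi_b ~ 0 (fourth example): only k = 0 survives
```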
In the simulations the dominating Landau damping mechanism
is due to the variation of the space charge tune shift along the bunch
\cite{burov2009, balb2009, kornilov_prstab10}.
Experimental observations of the transverse bunch decoherence with space charge in the SIS18
heavy-ion synchrotron at GSI are presented.
The space charge parameter $q$ has been determined from the
bunch spectra for different head-tail modes,
summarized in Fig.\,\ref{fg30}.
With increasing bunch length we observe that nonlinear synchrotron oscillations
modify the head-tail mode frequencies.
The bunch decoherence always corresponded to the mix of
the dominating modes, in our case the $k=0$ and $k=1$ modes
or the $k=0$ and $k=2$ modes.
Compared to the simulations, it is more difficult to predict which modes
are suppressed faster in a real machine,
due to additional damping mechanisms.
In the experiment the oscillations are further damped after the transition period,
possibly due to nonlinear magnet field errors.
The periodicity of the recoherence was
exactly confirmed by the mode frequencies from the spectra.
A direct comparison of the first two examples
(Figs.\,\ref{fg21},\,\ref{fg22} vs.\ Figs.\,\ref{fg23},\,\ref{fg24})
demonstrates the role of the bunch length.
A comparison of the fourth example (Figs.\,\ref{fg28},\,\ref{fg29})
with the others demonstrates the role of the chromaticity:
at a nearly zero chromaticity the mode $k=0$
alone dominates the bunch decoherence.
The third example (Figs.\,\ref{fg25},\,\ref{fg26})
shows a decoherence with a pronounced flat region
between the recoherence peaks, corresponding to the
mix of the $k=0$ and $k=1$ modes.
The results of this work apply to the evolution of a possible transverse
injection offset during bunch-to-bucket
transfer from one ring to another.
Transverse coherent spectra can be used not only
to measure the betatron tune; the head-tail mode frequencies
can also be used to extract useful information about long intense bunches.
The understanding of the decoherence with space charge
and rf nonlinearity
has direct consequences for the chromaticity
measurements in intense bunches, which should be
analysed in detail in a future work.
\section{Introduction}
Blazars are a class of active galactic nuclei (AGN) and are the primary sources of extragalactic $\gamma$-rays~\citep{Romero:2016hjn}. The spectral energy distribution (SED) of blazars is characterized by two non-thermal peaks~\citep{Abdo_2010}. The first peak, in the infrared to X-ray energy region, is from the synchrotron photons emitted by relativistic electrons in the magnetic field of the blazar jet. The second peak, in the X-ray to $\gamma$-ray energy regime, is generated either from the Synchrotron Self-Compton (SSC) scattering of high-energy electrons off the self-produced low-energy synchrotron seed photons in the propagating jet~\citep{Maraschi:1992iz,Murase_2012,Gao:2012sq} or from the scattering of the relativistic electrons off the photons from external sources such as the accretion disk and/or the broad-line regions~\citep{Sikora:1994zb,Blazejowski:2000ck}. In both scenarios, the low-energy photons are boosted in energy by colliding with the high-energy electrons. Alternatively, the second peak can also be attributed to different combinations of leptonic and hadronic processes taking place in the blazar jet and in the surrounding environment. In the blazar family, the BL Lac objects are generally classified according to the position of the synchrotron peak frequency, and are referred to as low-energy peaked blazars (LBLs, $\nu_{peak} < 10^{14}\, \mathrm{Hz}$), intermediate-energy peaked blazars (IBLs, $10^{14}\, \mathrm{Hz} < \nu_{peak} < 10^{15}\, \mathrm{Hz}$), high-energy peaked blazars (HBLs, $10^{15}\, \mathrm{Hz} < \nu_{peak} < 10^{17}\, \mathrm{Hz}$~\citep{1995ApJ...444..567P, Abdo_2010, Boettcher:2013wxa}) and extreme high-energy peaked blazars (EHBLs, $\nu_{peak} > 10^{17}\, \mathrm{Hz}$~\citep{Costamante:2001pu}). The maximum energy of the second peak of the EHBLs can reach up to a few TeV.
However, due to the low flux output from the entire SED, EHBLs are difficult to detect with the current generation of the Imaging Atmospheric Cerenkov Telescopes (IACTs). Thus, the number of detected blazars of this type is small. Amongst all the EHBLs observed so far, 1ES 0229+200 is prototypical, with the synchrotron peak at $3.5\times 10^{19}$ Hz and the SSC peak at $1.5\times 10^{27}\, \mathrm{Hz}\, (\sim 6.2$ TeV). At least two different temporal behaviors have been observed among the EHBLs. Some of them constantly exhibit extreme properties and variability timescales ranging from months to years, with the possibility of small-amplitude variability~\citep{Aharonian:2007nq,Costamante:2017xqg}. Examples of these sources are 1ES 0229+200, 1ES 0347-232, RGB J0710+591, and 1ES 1101-232~\citep{Aharonian:2007nq,Costamante:2017xqg}. Others are temporary members of the EHBL family, such as Markarian 421 (Mrk 421), Markarian 501 (Mrk 501) and 1ES 1959+650~\citep{refId0,Foffano:2019itc}, which are well-known HBLs that sometimes show extreme behavior. It is believed that the EHBL class might be a complex population of sources having different spectral behaviors at VHE ($>\, 100$ GeV) $\gamma$-rays, with different subclasses within it~\citep{Foffano:2019itc}. In fact, we have recently shown that a temporary EHBL may comprise two different subclasses~\citep{Sahu:2020tko, Sahu:2021wue, Sahu:2020kce, 10.1093/mnras/stac2093}.
The VHE $\gamma$-rays observed by the Cerenkov telescopes from the extragalactic sources are attenuated by the extragalactic background light (EBL) through $e^+e^-$--pair production~\citep{1992ApJ...390L..49S,doi:10.1126/science.1227160} and through interactions with the background radiation at various electromagnetic frequencies. As such, the detection of high-energy neutrinos and photons from sources at cosmological distances (redshift $z > 0.1$) requires a detailed account of the photon background along the line-of-sight of the source. Several EBL models~\citep{Franceschini:2008tp,Dominguez:2010bv,10.1111/j.1365-2966.2012.20841.x} have been developed to study the resulting attenuation at different redshifts.
The shift in the second peak and the hard VHE spectra of the EHBLs are difficult to account for in the framework of the one-zone leptonic SSC model, as this model predicts a soft spectrum in the Klein-Nishina regime. Proposed solutions to overcome these problems require large values of the minimum electron Lorentz factor and of the bulk Lorentz factor~\citep{refId0,refId01}, as well as a small magnetic field. In an alternative scenario, it is assumed that the VHE photons are produced in intergalactic space, rather than in the jet. The escaping ultra-high-energy (UHE) protons from the jet travel far from the source before interacting with the cosmic microwave background (CMB) photons and/or the EBL to produce $\gamma$-rays. These $\gamma$-rays are thus produced closer to the Earth and hence travel a shorter distance than $\gamma$-rays produced in the jet. As a result, the $\gamma$-rays produced from the interaction of UHE protons with the CMB or EBL suffer less absorption when arriving at the Earth~\citep{2010APh....33...81E,2011ApJ...731...51E,2013ApJ...771L..32T}. Many alternative models, such as the two-zone leptonic model, the spine-layer structured jet model, the inverse Compton (IC) scattering of high-energy electrons off the cosmic microwave background photons, and hybrids of leptonic and hadronic models, have also been proposed to explain these spectra~\citep{B_ttcher_2008,refId0,10.1093/mnras/stz2725}.
Previously, we explained the VHE flaring events from many HBLs and EHBLs using the photohadronic model~\citep{Sahu_2019,10.1093/mnras/staa023}. There it was shown that the spectral index $\delta$ lies in the range $2.5 \leq \delta \le 3.0$ and that the photohadronic fit cannot differentiate between an HBL and an EHBL. In the present work, we again use the photohadronic model to fit the VHE spectrum of PGC 2402248 and compare our results with the fits of the other aforementioned models. We conclude that our fits are comparable with, and in most cases better than, those of the other models.
\section{Flaring of PGC 2402248}
The MAGIC collaboration undertook an observational program to search for new EHBLs. In this program it selected the object PGC 2402248 (also known as 2WHSP J073326.7+515354) from the 2WHSP catalogue~\citep{Chang:2017} on the basis of its high synchrotron peak frequency $\nu^{peak}_{syn} =10^{17.9}\, \mathrm{Hz}$. The MAGIC telescopes observed the source from January 23 to April 19, 2018 (MJD 58141-58227) for 25 nights for a total of 23.4 h. On 19th April, for the first time, it detected TeV $\gamma$-rays from the blazar PGC 2402248. During this period, simultaneous multiwavelength observations were also carried out by the KVA and the {\it Swift}-UVOT in the optical and the UV band, in the X-ray band by the {\it Swift}-XRT, and in the $\gamma$-ray band by the {\it Fermi}-LAT~\citep{10.1093/mnras/stz2725}. Observations were also carried out by the optical telescope Gran Telescopio Canarias (GTC) to estimate the redshift as it was unknown at the time. The new redshift measurement reported by the GTC is $z=0.065$~\citep{becerra2018}. The observed broad-band SED of PGC 2402248 was studied by including the $\gamma$-ray archival data collected during more than 10 years (from 4th August 2008 to 24th June 2019) by {\it Fermi}-LAT to compare the flux variability and to construct the multiwavelength SED.
The synchrotron peak frequency was estimated from the {\it Swift}-XRT data simultaneous to the MAGIC observations and from the non-simultaneous 105-month archival data from {\it Swift}-BAT. The newly estimated $\nu^{peak}_{syn}=10^{17.8\pm 0.3}\, \mathrm{Hz}$ was compatible with the one reported in the 2WHSP catalogue. The estimates of $\nu^{peak}_{syn}$ during different observation periods confirm that PGC 2402248 is a stationary EHBL. During the MAGIC observation period, the simultaneous observations showed no significant variability, except for a moderate variability in the {\it Swift}-UVOT/XRT data. Fitting the spectrum of the long-term observations may average out short-term variability. Although the overall lack of variability in the observed SED during the MAGIC observations shows that the source was in a stable state, short-term flaring cannot be excluded.
\section{Photohadronic model}
The photohadronic model is based on the assumption of a double jet structure along a common axis~\citep{Sahu:2019lwj,Sahu_2019}. During the VHE flaring, a compact and confined smaller jet, of size $R'_f$, is formed within the bigger jet of size $R'_b$ ($R'_b > R'_f$, with the primed symbols indicating quantities in the jet comoving frame). The photon density in the inner jet region $n'_{\gamma,f}$ is much higher than the photon density in the outer jet region $n'_{\gamma}$ ($n'_{\gamma,f} \gg n'_{\gamma}$). The photohadronic model relies on the standard interpretation of the first two peaks of the SED, i.e., the first peak is due to the synchrotron radiation of the relativistic electrons in the jet environment and the second peak is from the SSC process. Although the inner jet moves slightly faster than the outer jet, so that their bulk Lorentz factors satisfy $\Gamma_{in} \,> \Gamma_{ext}$, for simplicity we assume $\Gamma_{ext}\simeq \Gamma_{in}\equiv \Gamma$.
Protons are accelerated to very high energies in the inner jet region and their differential spectrum is a power-law of the form, $dN_p/dE_p \propto E^{-\alpha}_p$~\citep{Gupta_2008}, where $E_p$ is the proton energy and the spectral index $\alpha \ge 2$. In the inner jet region, the dominant process through which protons interact with the seed photons is $p+\gamma \rightarrow \Delta^+$, followed by
\begin{equation}
\Delta^+\rightarrow
\left\{
\begin{array}{l l}
p\,\pi^0, & \quad \text {fraction~ 2/3}\\
n\,\pi^+ ,
& \quad \text {fraction~ 1/3}\\
\end{array} \right. .
\label{decaymode}
\end{equation}
The production of $\Delta$-resonance has a cross section $\sigma_{\Delta}\sim 5\times 10^{-28}\,{\rm cm}^2$. Although the direct single pion production and the multi-pion production processes contribute, they are less efficient in the energy range under consideration here \citep{1999PASA...16..160M,2018MNRAS.481..666O}. We neglect such contributions in the present work. The produced charged and neutral pions decay through $\pi^+\rightarrow e^+{\nu}_e\nu_{\mu}{\bar\nu}_{\mu}$ and $\pi^0\rightarrow\gamma\gamma$ respectively. In the present scenario, the $\gamma$-rays produced from the neutral pion decay are the observed VHE $\gamma$-rays on Earth.
From the $\pi^0$ decay, the observed VHE $\gamma$-ray energy $E_{\gamma}$ and the seed photon energy $\epsilon_{\gamma}$ satisfy the following condition~\citep{Sahu:2019lwj,Sahu_2019},
\begin{equation}
E_{\gamma} \epsilon_\gamma \simeq\frac{0.032\ \Gamma{\mathcal D}}{(1+z)^{2}}\ \mathrm{GeV^2},
\label{eq:kingamma}
\end{equation}
where ${\cal D}$ is the Doppler factor. The observed $\gamma$-ray energy $E_{\gamma}$ and the proton energy $E_p$ are related through
\begin{equation}
E_p=\frac{10\,\Gamma}{\cal D} E_{\gamma}.
\label{eq:epeg}
\end{equation}
For blazars $\Gamma\simeq {\cal D}$, which gives $E_p=10\, E_{\gamma}$. For most of the VHE flaring events from the HBLs, the value of $\Gamma$ (or ${\cal D})$ is such that the seed photon energy $\epsilon_{\gamma}$ always lies in the low energy tail region of the SSC band.
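The two kinematic relations above can be evaluated directly. In this Python sketch the bulk Lorentz factor is a hypothetical example value (no fitted $\Gamma$ is quoted in this section), while the redshift is that of PGC 2402248 quoted earlier:

```python
# Photohadronic kinematics: E_gamma * eps_gamma ~= 0.032 * Gamma * D / (1+z)^2 GeV^2
# and E_p = 10 * (Gamma / D) * E_gamma, with D = Gamma assumed for blazars.
def seed_photon_energy_GeV(E_gamma_GeV, Gamma, z):
    return 0.032 * Gamma**2 / ((1.0 + z)**2 * E_gamma_GeV)

def proton_energy_GeV(E_gamma_GeV):
    return 10.0 * E_gamma_GeV  # E_p = 10 E_gamma for Gamma ~= D

z = 0.065          # redshift of PGC 2402248 (from the text)
Gamma = 20.0       # hypothetical bulk Lorentz factor, for illustration only
E_gamma = 1000.0   # a 1 TeV observed photon, in GeV

eps = seed_photon_energy_GeV(E_gamma, Gamma, z)
print(f"eps_gamma ~ {eps * 1e3:.1f} MeV, E_p = {proton_energy_GeV(E_gamma) / 1e3:.0f} TeV")
```

With these illustrative numbers the seed photon lands at tens of MeV, i.e., in the low-energy tail of the SSC band, as stated in the text.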
The efficiency of the $\Delta$-resonance production in the inner jet region depends on the optical depth $\tau_{p\gamma}$ and is given by
\begin{equation}
\tau_{p\gamma}=n'_{\gamma, f} \sigma_{\Delta} R'_f,
\label{optdepth}
\end{equation}
By assuming that the Eddington luminosity $L_{Edd}$ is shared equally by the jet and the counter jet during a flaring event, the luminosity $L'_{jet}$ for a seed photon of energy $\epsilon'_{\gamma}$ satisfies $L'_{jet} \ll {L_{Edd}}/{2}$ and this gives
\begin{equation}
\tau_{p\gamma} \ll \frac{L_{Edd}}{8\pi}\, \frac{\sigma_{\Delta}}{R'_f\,\epsilon'_{\gamma}}.
\label{optdepth2}
\end{equation}
As the inner jet region is hidden, there is no direct way to determine the photon density there. Due to the adiabatic expansion of the inner jet into the outer jet, the photon density will be reduced to $n'_{\gamma}$ and the efficiency of the $p\gamma\rightarrow\Delta$ process will be suppressed, leading to $\tau_{p\gamma}\ll 1$. This is the reason why, in a single-jet scenario, the efficiency of the $\Delta$-resonance production is suppressed and super-Eddington luminosity in protons is needed to explain the observed VHE $\gamma$-ray flux in the hadronic model. However, the additional compact inner jet we consider here avoids this excess energy budget in protons~\citep{Sahu:2016bdu}. From the SED we can calculate the photon density in the outer jet region $n'_{\gamma}$. To keep matters simple, we assume a scaling behavior of the photon densities in the inner and the outer jets as~\citep{Sahu:2019lwj,Sahu_2019}
\begin{equation}
\frac{n'_{\gamma,f}(\epsilon_{\gamma,1})}{n'_{\gamma,f}(\epsilon_{\gamma,2})} \simeq\frac{n'_{\gamma}(\epsilon_{\gamma,1})}{n'_{\gamma}(\epsilon_{\gamma,2})}.
\label{eq:scaling}
\end{equation}
\noindent
From this, we deduce that the ratio of the photon densities at two different background energies $\epsilon_{\gamma,1}$ and $\epsilon_{\gamma,2}$ in the inner (flaring) and the outer (non-flaring) jet regions is almost the same. The photon density in the outer region can be calculated in terms of the SSC photon energy $\epsilon_{\gamma}$ and its corresponding flux, $\Phi_{SSC}(\epsilon_{\gamma})$, as
\begin{equation}
n'_{\gamma}(\epsilon_{\gamma})=\eta \left ( \frac{d_L}{R'_b} \right )^2 \frac{1}{(1+z)}\frac{\Phi_{SSC}(\epsilon_{\gamma})}{{\cal D}^{2+\kappa}\, \epsilon_{\gamma}},
\label{photondensity}
\end{equation}
where $\eta$ is the efficiency of the SSC process; in this work we consider 100\% efficiency by taking $\eta=1$. The luminosity distance to the source is $d_L$, and $\kappa=0\,(1)$ corresponds to a continuous (discrete) blazar jet. Using the above equation we can express $n'_{\gamma,f}$ in terms of $\Phi_{SSC}$. In many previous studies we have shown that $\Phi_{SSC}$ in the low-energy tail region is a perfect power-law for HBLs, and that in many EHBLs with the synchrotron peak position always above $10^{17}$ Hz and variability timescales ranging from months to years (referred to as stationary EHBLs) it can be expressed as $\Phi_{SSC}\propto \epsilon^{\beta}_{\gamma}\propto E_{\gamma}^{-\beta}$ with $\beta \, > 0$~\citep{Sahu:2019lwj,Sahu_2019}. The observed VHE $\gamma$-ray flux $F_{\gamma}$ depends on the seed photon density $n'_{\gamma,f}$ and on the high-energy proton flux $F_p\equiv E^2_p\,dN/dE_p$. Expressing $n'_{\gamma,f}$ in terms of $\Phi_{SSC}$ implies $F_{\gamma}\propto n'_{\gamma,f}\propto E^{-\beta+1}_{\gamma}$. Similarly, from $F_{\gamma}\propto F_{p}$, we deduce $F_{\gamma}\propto E^{-\alpha+2}_{\gamma}$. The VHE $\gamma$-ray flux is attenuated by the EBL by a factor $e^{-\tau_{\gamma\gamma}}$, where $\tau_{\gamma\gamma}$ is the optical depth for the pair production process $\gamma\gamma\rightarrow e^+e^-$ and depends on $z$ and $E_{\gamma}$. Including the EBL correction, the observed VHE flux on Earth is
\begin{equation}
F_{\gamma}(E_{\gamma})=F_0 \left ( \frac{E_\gamma}{TeV} \right )^{-\delta+3}\,e^{-\tau_{\gamma\gamma}}=F_{\gamma, in}(E_{\gamma})\, e^{-\tau_{\gamma\gamma}},
\label{eq:fluxgeneral}
\end{equation}
where the spectral index $\delta=\alpha+\beta$ should be in the range $2.5 \le \delta \le 3.0$~\citep{Sahu:2019lwj,Sahu_2019}, $F_0$ is the normalization constant fixed from the observed VHE spectrum, and $F_{\gamma,in}$ is the intrinsic VHE flux.
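Eq.~(\ref{eq:fluxgeneral}) can be sketched numerically; the parameter values below are the best-fit values reported later in this paper, while the constant $\tau$ argument is only a placeholder for a real EBL optical depth curve (which must come from an EBL model such as that of Franceschini et al.):

```python
import math

# Observed VHE flux: F = F0 * (E/TeV)^(3 - delta) * exp(-tau_gg).
# F0 and delta defaults are this paper's best-fit values; the tau argument
# is an ILLUSTRATIVE placeholder, not a real EBL optical depth curve.
def observed_flux(E_TeV, F0=0.7e-12, delta=3.0, tau=0.0):
    return F0 * E_TeV**(3.0 - delta) * math.exp(-tau)

# With delta = 3 the intrinsic spectrum is flat in E^2 dN/dE:
print(observed_flux(0.5), observed_flux(2.0))   # equal when tau = 0
print(observed_flux(2.0, tau=1.2))              # an EBL-attenuated point
```

This makes explicit why a $\delta=3.0$ fit corresponds to a flat intrinsic spectrum: the energy dependence of the observed flux then comes entirely from the EBL attenuation.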
In the photohadronic process each pion carries about 20\% of the proton energy, and each decay product of the $\pi^+$ carries approximately 25\% of the pion energy. This gives the neutrino energy $E_{\nu}=E_{\gamma}/2$, and the neutrino flux $F_{\nu}$ can be calculated from $F_{\gamma}$ as~\citep{PhysRevD.87.103015}
\begin{equation}
F_{\nu}= \frac{3}{8} \,F_{\gamma}.
\label{eq:nuflux}
\end{equation}
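The energy sharing above translates into a one-line conversion from the observed $\gamma$-ray flux to the expected neutrino counterpart; a minimal sketch:

```python
# Neutrino counterpart of the observed VHE gamma-ray flux in the
# photohadronic model: E_nu = E_gamma / 2 and F_nu = (3/8) * F_gamma,
# from the energy sharing in Delta-resonance decay (each pion carries
# ~20% of E_p; each pi+ decay product carries ~25% of the pion energy).
def neutrino_flux(F_gamma):
    return 0.375 * F_gamma        # 3/8

def neutrino_energy_TeV(E_gamma_TeV):
    return 0.5 * E_gamma_TeV

F_gamma = 0.7e-12  # erg cm^-2 s^-1, the best-fit F0 of this paper
print(neutrino_flux(F_gamma), neutrino_energy_TeV(1.0))
```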
It is important to note that the photohadronic process works well for $E_\gamma \gtrsim 100$ GeV. However, below this energy the leptonic processes such as the electron synchrotron mechanism and the SSC process have the dominant contribution to the multiwavelength SED. As our main motivation is to interpret the VHE spectrum of the flaring event, we favor the photohadronic model for this analysis.
\section{Results and analysis}
Simultaneous multiwavelength observations of the blazar PGC 2402248 during the MAGIC observation period from January 23 to April 19, 2018 and the 10-year archival data collected by the {\it Fermi}-LAT telescope helped to construct the broadband SED of the object with a high degree of precision in \citet{10.1093/mnras/stz2725}. In this reference, the X-ray data show that the synchrotron peak frequency is in the EHBL region. Also, the observed VHE spectrum reconstructed in the energy range 0.1 TeV to 8 TeV can be described well by a power-law of the form $dN/dE_{\gamma}=f_0 (E_{\gamma}/200\ \mathrm{GeV})^{-\lambda}$ with $f_0=(1.95\pm 0.10_{stat})\times 10^{-11}\,\mathrm {ph\, cm^{-2}\, s^{-1}\, TeV^{-1}}$ and $\lambda=2.41\pm 0.17_{stat}$, and the intrinsic spectrum is also fitted with a power-law which is almost flat in this energy regime, as shown in Fig.~2 of the same reference. Also in~\citet{10.1093/mnras/stz2725}, the multiwavelength SED is fitted using the one-zone SSC model~\citep{1992ApJ...397L...5M, Tavecchio:1998xw}, the 1D conical jet model~\citep{2015ApJ...808L..18A, 2018ApJ...861...31A}, the spine-layer model~\citep{ ref}, and the proton-synchrotron model~\citep{10.1093/mnras/stu2691}. As concluded there, extreme physical parameters would be required for three of the four SED modelling scenarios to provide compatible models for the SED. Out of these four models, the spine-layer model fits better with theoretical predictions and provides a reasonable framework to explain the broad-band SED of PGC 2402248.
In previous studies, it was shown that the photohadronic model is successful in explaining the VHE spectra of many HBLs and EHBLs~\citep{Sahu_2019,10.1093/mnras/staa023}. It was found that the VHE spectra of the EHBLs 1ES 0229+200, 1ES 0347-232, and several other blazars are fitted very well by taking the spectral index $\delta$ in the range $2.5\le \delta \le 3.0$. In view of the success of the photohadronic model, it is natural to extend it to the study of the VHE spectrum of the EHBL PGC 2402248.
We fit the VHE spectrum observed by the MAGIC telescopes from the EHBL PGC 2402248 using the photohadronic model and the EBL model of~\citet{Franceschini:2008tp}. The free parameters of the photohadronic model are the spectral index $\delta$ and the normalization constant $F_0$, which are varied jointly to find the best fit to the spectrum. The best fit is obtained for $F_0=0.7\times 10^{-12}\, \mathrm{erg\, cm^{-2}\, s^{-1}}$ and $\delta=3.0$, with $\chi^2_{min}=1.52$.
As defined by the error ellipse, $\chi^2_{min}+2.3$, shown in Fig.~\ref{fig:figure1}, statistical errors are obtained by varying one parameter while the other is frozen at its optimum value and they represent the $68.27\%$ confidence intervals for the joint estimation of the two parameters. The statistical errors obtained are $F_0=(0.7^{+0.33}_{-0.32})\times 10^{-12}\, \mathrm{erg\, cm^{-2}\, s^{-1}}$ and $\delta=3.0^{+0.40}_{-0.31}$. In Fig.~\ref{fig:figure2}, we show the best fit to the VHE spectrum. The shaded region corresponds to the $1\sigma\ (68.27\%)$ confidence level region and is obtained by varying $F_0$ and $\delta$ to their individual $68.27\%$ confidence intervals as defined by the ellipse $\chi^2_{min}+1$, see Fig.~\ref{fig:figure1}.
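The confidence-region construction described above can be sketched as a grid scan of $\chi^2(F_0,\delta)$, marking $\Delta\chi^2 = 2.3$ for the joint $68.27\%$ region of two parameters (the $\Delta\chi^2 = 1$ contour would give the individual intervals). The data points below are synthetic placeholders, not the MAGIC measurements:

```python
import numpy as np

# Grid scan of chi^2(F0, delta) for a power-law-times-attenuation model,
# as a sketch of the error-ellipse construction. The "data" are SYNTHETIC
# placeholders (toy EBL optical depth), not the MAGIC spectral points.
rng = np.random.default_rng(1)
E = np.array([0.15, 0.3, 0.6, 1.2, 2.5])     # photon energies [TeV]
tau = 0.25 * E                               # toy EBL optical depth (placeholder)

def model(F0, delta):
    return F0 * E**(3.0 - delta) * np.exp(-tau)

y = model(0.7, 3.0) * (1 + 0.1 * rng.standard_normal(E.size))  # toy data
sig = 0.1 * y                                                  # toy errors

F0g, dg = np.meshgrid(np.linspace(0.3, 1.2, 200), np.linspace(2.5, 3.5, 200))
chi2 = np.zeros_like(F0g)
for i in range(E.size):
    chi2 += ((y[i] - F0g * E[i]**(3.0 - dg) * np.exp(-tau[i])) / sig[i])**2

dchi2 = chi2 - chi2.min()
joint_region = dchi2 <= 2.3   # 68.27% joint (F0, delta) confidence region
print("grid points inside the joint 68.27% region:", int(joint_region.sum()))
```

The tangents of the $\Delta\chi^2 = 2.3$ contour in the two parameter directions then give the quoted joint-estimation error bars.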
\begin{figure*}
\includegraphics[width=5.5in]{Figure1.pdf}
\caption{\label{fig:figure1}Error ellipses at $\chi^2_{min}+1$ and $\chi^2_{min}+2.3$. The contour at $\chi^2_{min}+2.3$ corresponds to a coverage probability of $68.27\%$ for joint estimation of $F_0$ and $\delta$. The statistical error bars of $F_0$ and $\delta$ are obtained by varying one parameter while the other is frozen at its optimum value. The contour at $\chi^2_{min}+1$ corresponds to a coverage probability of $68.27\%$ for individual estimation of $F_0$ and $\delta$. The individual $68.27\%$ confidence intervals of $F_0$ and $\delta$ (used to build the $1\sigma$ confidence level region of Fig.~\ref{fig:figure2}) are determined by the horizontal and the vertical tangents to the ellipse, respectively.}
\end{figure*}
\begin{figure*}
\includegraphics[width=5.5in]{Figure2.pdf}
\caption{\label{fig:figure2}The MAGIC observation of the VHE spectrum of April 19, 2018 from the EHBL PGC 2402248 is fitted with the photohadronic model using the EBL model of~\citet{Franceschini:2008tp}. The blue shaded region corresponds to the $1\sigma\ (68.27\%)$ confidence level region, obtained by varying $F_0$ and $\delta$ over their individual $68.27\%$ confidence intervals (see~Fig.~\ref{fig:figure1}). The dashed curve is the intrinsic flux.}
\end{figure*}
\begin{figure*}
\includegraphics[width=5.5in]{Figure4.pdf}
\caption{\label{fig:figure4}The observed VHE spectrum of the EHBL PGC 2402248 is fitted with the photohadronic model by taking into account three different EBL models, by~\citet{Franceschini:2008tp},~\citet{10.1111/j.1365-2966.2012.20841.x}, and~\citet{Dominguez:2010bv}. Although there is a minor difference for $E_{\gamma} > 1$ TeV, all of them are compatible with each other. The dashed curves correspond to the intrinsic fluxes.}
\end{figure*}
We have also used the EBL models of~\citet{Dominguez:2010bv} and~\citet{10.1111/j.1365-2966.2012.20841.x} to fit the VHE spectrum within the photohadronic model and compared the results. They are shown in Fig.~\ref{fig:figure4}. All these models fit the observed data very well with $\delta=3.0$ and with almost the same value of $F_0$. The comparison shows that above 1 TeV there is a small difference among the fits; however, all these EBL models are compatible with each other. For the rest of our analysis we will use the EBL model of~\citet{Franceschini:2008tp} for comparison purposes, since the other EBL models give similar results.
The observed VHE spectrum of the EHBL PGC 2402248, being substantially flat, implies that its flux should increase up to several TeV with a hard spectral index~\citep{Costamante:2017xqg}. However, the photohadronic model fits the VHE spectrum very well with $\delta=3.0$, which corresponds to a low emission state and a soft spectrum~\citep{Sahu_2019}. The intrinsic spectrum is constant, $F_{\gamma}=F_0$, and is shown as a dashed line in Fig.~\ref{fig:figure2} and Fig.~\ref{fig:figure4}. The spectral index of the differential proton spectrum used here is $\alpha=2$, which corresponds to $\beta=1.0$, implying $\Phi_{SSC}\propto E^{-1}_{\gamma}$.
\begin{table}
\centering
\caption{
\label{tab:tab1}In Fig.~\ref{fig:figure5} (VHE spectrum) and in Fig.~\ref{fig:figure6} the broad-band SED of PGC 2402248 is fitted with various models: the one-zone SSC (SSC1), the 1D conical jet model (1D SSC), the spine-layer model (SL), the proton-synchrotron model (PS), and the photohadronic model (PH). The parameters used in these models, the bulk Lorentz factor ($\Gamma$), the blob radius ($R'_b$, in units of $10^{16}$ cm~\citep{10.1093/mnras/stz2725}), and the magnetic field ($B'$, in G), are summarized below.
}
\begin{tabular}{llll}
\hline
Model & $\Gamma$ & $R^{\prime}_b$ & $B^{\prime}$\\
\hline
SSC1 & 30 & 1 & 0.01\\
1D SSC & 30 & 2.1 & 0.005\\
SL & 30, 5 & 3, 3.5 & 0.02, 0.1\\
PS & 30 & 0.1--14.6 & 1.2--46.8\\
PH & $\le 34$ & 1 & $\sim 10^{-4.3}$\\
\hline
\end{tabular}
\end{table}
\begin{figure*}
\includegraphics[width=5.5in]{Figure5.pdf}
\caption{\label{fig:figure5}The VHE spectrum of PGC 2402248 is fitted with the leptonic models (one-zone SSC, 1D conical jet, and spine-layer), the proton-synchrotron model, and the photohadronic model. The shaded region corresponds to the $1\sigma\ (68.27\%)$ confidence level region for the photohadronic model, where the EBL model of~\citet{Franceschini:2008tp} is used for the EBL correction.}
\end{figure*}
In Ref.~\citet{10.1093/mnras/stz2725}, the multiwavelength SED of PGC 2402248 is fitted using the leptonic and the proton-synchrotron models. The one-zone SSC model, the 1D conical jet model, and the spine-layer model (the leptonic models) fit the observed VHE spectrum well, as shown in Fig.~\ref{fig:figure5}; their behavior is similar in the low- and high-energy limits. However, the proton-synchrotron model does not fit the spectrum well and differs markedly from the other fits: its flux falls faster and earlier than in the rest of the models. Although the different leptonic models all fit the spectrum well, among them the spine-layer model provides the best fit, with an estimated $\chi^2_{min}=2.16$ in the VHE region. We compare the photohadronic fit with these models in Fig.~\ref{fig:figure5}. In the photohadronic model, the best fit to the spectrum gives $\chi^2_{min}=1.52$; thus, the $\chi^2_{min}$ comparison of the photohadronic and spine-layer models shows that the photohadronic fit is as good as or better than the leptonic fits. We also observe a slight dip in the spectrum around 1 to 2 TeV in the photohadronic model, which is not so obvious in the leptonic scenarios; this is due to a slight dip in the EBL contribution around this region. The multiwavelength SED with the simultaneous data and the archival data from different observations, along with the VHE fits by the leptonic models, the proton-synchrotron model, and the photohadronic model, is shown in Fig.~\ref{fig:figure6}.
\begin{figure*}
\includegraphics[width=5.5in]{Figure6.pdf}
\caption{\label{fig:figure6}The multiwavelength SED of PGC 2402248 is constructed using the simultaneous multiwavelength observations and the archival data collected by the {\it Fermi}-LAT. The SED is fitted using the leptonic models and the proton-synchrotron model. The photohadronic fit is also shown for comparison.}
\end{figure*}
The important parameters used in all these models are summarized in Table~\ref{tab:tab1}. By using Eq.~(\ref{eq:epeg}), the $\gamma$-rays in the energy range $0.1 \, \mathrm{TeV} \le E_{\gamma} \le 8\, \mathrm{TeV}$ correspond to Fermi-accelerated proton energies in the range $1 \,\mathrm{TeV} \le E_p \le 80 \, \mathrm{TeV}$. In the inner jet region of radius $R'_f\sim 5\times 10^{15}\, \mathrm{cm}$, the magnetic field needed to accelerate protons to energies $E_p\sim 80$ TeV can be estimated from the relation $eB'=E_p/ R'_f$, which gives $B'\sim 0.5\times 10^{-4}\, \mathrm{G}$. It can be seen from Fig.~\ref{fig:figure6} that the low-energy tail of the SSC band starts around $\epsilon_{\gamma}\simeq 10^{21}\, \mathrm{Hz}$ ($\sim 4.1$ MeV). By using Eq.~(\ref{eq:kingamma}), we can estimate the bulk Lorentz factor $\Gamma$ by assuming that protons with energy $E_p\simeq 80$ TeV interact with SSC seed photons of energy $\epsilon_{\gamma}\simeq 4.1$ MeV to produce $E_{\gamma}\simeq 8$ TeV. This gives the maximum value $\Gamma \simeq 34$. To fit the broad-band SED of PGC 2402248, the leptonic models take $\Gamma=30$~\citep{10.1093/mnras/stz2725}; our estimate is thus consistent with the leptonic model value. Using the velocity dispersion measurement, the derived central black hole mass is $M_{BH}\simeq (4.8 \pm 0.9)\times 10^8\, M_{\odot}$~\citep{10.1093/mnras/staa1144}, which corresponds to an Eddington luminosity $L_{Edd}\sim (4.9 - 7.2)\times 10^{46}\, \mathrm{erg\, s^{-1}}$. The integrated VHE flux in the energy range $0.13\, \mathrm{TeV} \lesssim E_{\gamma} \lesssim 5.3\, \mathrm{TeV}$ gives $F_{\gamma}\sim 4.3\times 10^{-12}\,\mathrm{erg \,cm^{-2}\, s^{-1}}$ and the VHE $\gamma$-ray luminosity is $L_{\gamma}\sim 4.8\times 10^{43}\,\mathrm{erg\, s^{-1}}$.
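The order-of-magnitude estimates above can be reproduced with a few lines of arithmetic. This sketch assumes Gaussian (cgs) units and the standard Eddington coefficient $1.26\times 10^{38}\,\mathrm{erg\,s^{-1}}$ per solar mass, neither of which is stated explicitly in the text:

```python
# Back-of-envelope checks of the numbers quoted in the text (Gaussian/cgs units assumed).
eV_to_erg = 1.602e-12
e_esu = 4.803e-10          # elementary charge [esu]
Rf = 5e15                  # inner-jet radius [cm]

# Photohadronic kinematics: E_p ~ 10 E_gamma, so 0.1-8 TeV photons
# correspond to 1-80 TeV protons.
Ep_max_erg = 80e12 * eV_to_erg

# Magnetic field confining 80 TeV protons: e B' = E_p / R'_f
B_prime = Ep_max_erg / (e_esu * Rf)       # ~0.5e-4 G

# Eddington luminosity for M_BH = (4.8 +/- 0.9) x 10^8 M_sun
L_edd = lambda M8: 1.26e38 * M8 * 1e8     # erg/s, M8 in units of 10^8 M_sun
L_lo, L_hi = L_edd(3.9), L_edd(5.7)       # ~(4.9 - 7.2) x 10^46 erg/s

print(f"B' ~ {B_prime:.1e} G")
print(f"L_Edd ~ ({L_lo:.1e} - {L_hi:.1e}) erg/s")
```

Both results agree with the values quoted in the text ($B'\sim 0.5\times 10^{-4}$ G and $L_{Edd}\sim(4.9-7.2)\times 10^{46}\,\mathrm{erg\,s^{-1}}$).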
In order to estimate $\tau_{p\gamma}$ and the photon density in the inner jet region, we take the inner jet radius $R'_f\sim 5\times 10^{15}\, \mathrm{cm}$ and the outer jet radius $R'_b\sim 10^{16}\, \mathrm{cm}$~\citep{10.1093/mnras/stz2725}. For a moderate efficiency of $\Delta$-resonance production we expect $\tau_{p\gamma} < 1$, which gives $n'_{\gamma,f} < 4\times 10^{11}\, \mathrm{cm^{-3}}$. We can also constrain $\tau_{p\gamma}$ from the requirement that the Fermi-accelerated proton luminosity (which produces the VHE $\gamma$-rays), $L_p=7.5\,\tau_{p\gamma}^{-1} L_{\gamma}$, should always satisfy $L_p < L_{Edd}/2$. Taking $L_{Edd}\sim 6.0\times 10^{46}\, \mathrm{erg\, s^{-1}}$, we obtain $\tau_{p\gamma} > 0.012$. For $\tau_{p\gamma}\sim 0.05$, the SSC photon density in the inner jet region is $n'_{\gamma,f}\sim 2.0\times 10^{10}\, \mathrm{cm^{-3}}$ and the proton luminosity is $L_p\sim 7.2\times 10^{45}\, \mathrm{erg\, s^{-1}}$.
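The bounds on $\tau_{p\gamma}$ can be checked numerically. The relation $\tau_{p\gamma}\simeq n'_{\gamma,f}\,\sigma_{\Delta}\,R'_f$, with a peak $\Delta$-resonance cross section $\sigma_{\Delta}\sim 5\times 10^{-28}\,\mathrm{cm^2}$, is an assumption introduced here to reproduce the quoted photon densities; it is not spelled out in the excerpt:

```python
# Consistency checks on the optical-depth bounds quoted in the text.
L_gamma = 4.8e43     # VHE gamma-ray luminosity [erg/s]
L_edd   = 6.0e46     # adopted Eddington luminosity [erg/s]
Rf      = 5e15       # inner-jet radius [cm]
sigma_delta = 5e-28  # assumed peak Delta-resonance cross section [cm^2]

# L_p = 7.5 tau^-1 L_gamma < L_Edd / 2  =>  tau > 15 L_gamma / L_Edd
tau_min = 15.0 * L_gamma / L_edd          # ~0.012

# For tau = 0.05: photon density n' = tau / (sigma R'_f) and proton luminosity
tau = 0.05
n_gamma = tau / (sigma_delta * Rf)        # ~2e10 cm^-3
L_p = 7.5 * L_gamma / tau                 # ~7.2e45 erg/s

print(f"tau_min ~ {tau_min:.3f}, n'_gamma ~ {n_gamma:.1e} cm^-3, L_p ~ {L_p:.1e} erg/s")
```

All three numbers match the values quoted in the text.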
In the photohadronic scenario, as previously discussed, the neutrinos produced from charged pion decay have energy $E_{\nu}=0.5\, E_{\gamma}$. For the VHE flare of PGC 2402248 on April 19, 2018, the maximum observed $\gamma$-ray energy was $E_{\gamma}=8$ TeV, which corresponds to $E_{\nu}=4$ TeV. For a neutrino detector like IceCube, this neutrino energy is too low to be observed. Additionally, in the VHE $\gamma$-ray regime the source is in a low emission state, with integrated flux $F_{\gamma}\sim 4.3\times 10^{-12}\,\mathrm{erg \,cm^{-2}\, s^{-1}}$. Using Eq.~(\ref{eq:nuflux}), we determine the neutrino flux to be $F_{\nu}\sim 1.6\times 10^{-12}\,\mathrm{erg \,cm^{-2}\, s^{-1}}$, which is also low. Such low-energy neutrinos and such a low neutrino flux thus pose a challenging detection task for IceCube.
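The neutrino numbers follow from $E_\nu = 0.5\,E_\gamma$ and the flux relation of Eq.~(\ref{eq:nuflux}). Since that equation is not reproduced in this excerpt, the ratio $F_\nu \simeq (3/8)\,F_\gamma$ used below is inferred from the quoted values and should be treated as an assumption:

```python
# Neutrino energy and flux estimates from the photohadronic relations
# E_nu = 0.5 E_gamma and (assumed here) F_nu ~ (3/8) F_gamma.
E_gamma_max = 8.0                  # TeV
F_gamma = 4.3e-12                  # integrated VHE flux [erg cm^-2 s^-1]

E_nu = 0.5 * E_gamma_max           # 4 TeV
F_nu = (3.0 / 8.0) * F_gamma       # ~1.6e-12 erg cm^-2 s^-1

print(f"E_nu = {E_nu:.0f} TeV, F_nu ~ {F_nu:.1e} erg cm^-2 s^-1")
```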
\section{Discussion and Conclusions}
The MAGIC telescopes observed multi-TeV $\gamma$-rays from the blazar PGC 2402248 for the first time on April 19, 2018. Simultaneously, the source was also observed in a wide range of frequency bands. The synchrotron peak frequency observed during the January--April 2018 period, together with the 10 years of archival {\it Swift}-XRT data, establishes that PGC 2402248 is a persistently extreme HBL. The observed VHE spectrum of the source is flat compared to those of several other EHBLs. The broadband SED of PGC 2402248 has been fitted using several leptonic models and the proton-synchrotron model, and the leptonic models are observed to fit the observed VHE spectrum well. In previous studies, we have established the success of the photohadronic model in explaining the VHE spectra of several HBLs and EHBLs. For this reason, the VHE spectrum of the EHBL PGC 2402248 is also studied here in the context of the photohadronic model, using different EBL models. It is observed that the flat VHE spectrum can be fitted very well with the spectral index $\delta=3.0$, which corresponds to a low emission state. The bulk Lorentz factor estimated in this model is consistent with the values used in the leptonic models. We have also compared the photohadronic fit with the other leptonic and hadronic fits, and conclude that the photohadronic fit is as good as or better than the other models.
It is to be noted that although the photohadronic model has explained the enigmatic VHE spectra of many HBLs and EHBLs very well, the population of EHBLs detected so far is small, and their hard VHE spectra pose a challenge to the leptonic model interpretation. The interpretation of multi-TeV flaring events in the context of hadronic models offers the opportunity to look for high-energy neutrinos with the IceCube neutrino observatory. With the present understanding of EHBLs and their classification, the possibility of further subclasses of EHBLs cannot be ruled out. For these reasons, it is necessary to undertake future observational programs to look for more EHBLs, with simultaneous and/or quasi-simultaneous observations at several wavelengths, to elucidate their nature and to test the validity of different emission models.
\section*{Acknowledgements}
B. M-C and G. S-C would like to thank CONACyT (México) for partial support. The work of S.S. is partially supported by DGAPA-UNAM (México) Project No. IN103522. Partial support from CSU-Long Beach is gratefully acknowledged.
\section*{Data Availability}
No new data were generated or analysed in support of this research.
\bibliographystyle{mnras}
\section{Introduction}
In general, the study of a high-energy multi-particle system and its
quantum dynamics involves three essential aspects:
first, the aspect of space-time, geometry and
the structure of the vacuum; second, the quantum
field aspect of the particle excitations; and third, the statistical
aspect of their interactions. These three elements are generally
interconnected in a non-trivial way by their overall dynamical
dependence.
Therefore, in order to formulate a quantum description
of the complex non-equilibrium dynamics, one needs to find a
quantum-statistical and kinetic formulation of QCD that unifies
the three aspects self-consistently.
The main tools to achieve this are:
(i) the {\it closed-time-path} (CTP) formalism
\cite{schwinger,chou} (for treating initial
value problems of irreversible systems), and (ii)
{\it transport theory} based on Wigner function techniques
\cite{degroot}
(for a kinetic description of inhomogeneous non-equilibrium systems).
The common feature of high-energy particle collisions is that they allow
a distinction between
a short-distance quantum field theoretical scale and a
larger distance statistical-kinetic scale, which is essentially
an effect of ultra-relativistic kinematics.
This advantageous property facilitates
the passage from {\it exact QCD field theory}
of coherent non-abelian gauge fields to
an {\it approximate quantum kinetic theory} of an ensemble of incoherent
gluons.
When described in a reference frame, in which the particles move close
to the speed of light,
the effects of time dilation and Lorentz contraction
separate the intrinsic quantum motion of the individual
particles from the statistical correlations among them.
On the one hand,
the quantum dynamics is
determined by the self-interactions of the bare quanta,
which dress them into quasi-particles with a substructure of quantum fluctuations.
This requires a fully quantum theoretical analysis including renormalization.
On the other hand, the kinetic dynamics
can be well described statistical-mechanically
by the motion of the quasi-particles, that is,
by binary interactions between these quasi-particles, and
by the possible
presence of a coherent mean color field that may be induced by the
collective motion of the partons.
Such a distinct description of quantum and kinetic dynamics
is possible, because the quantum
fluctuations are highly concentrated around the light cone,
occurring at very short distances, and decouple to very
good approximation from the kinetic evolution
which is dictated by comparably large space-time scales.
As mentioned,
the natural two-scale separation is just the consequence of time dilation
and Lorentz contraction, and is true for any lightcone
dominated process. In fact, at asymptotic energies the
quantum fluctuations are exactly localized on the lightcone, and
so the decoupling becomes perfect.
This observation is the key to formulate a quantum kinetic description
in terms of particle phase-space densities, involving
a simultaneous specification of momentum space and space-time,
because at sufficiently high energy,
the momentum scale $\Delta p$ of the individual particles' quantum fluctuations
and the scale $\Delta r$ of space-time variations of the system of particles
satisfy $\Delta p \,\Delta r \gg 1$, consistent with the uncertainty principle.
In what follows, I am guided by
the recent paper \cite{ms39}
and the related literature discussed therein, as well as by preliminary results
of work in progress \cite{ms42}.
For the sake of lucidity, I will henceforth confine myself to
pure Yang-Mills theory, i.e.
consider gluons only and ignore the quark degrees of freedom; the latter are
straightforward to include.
\section{Non-equilibrium techniques for QCD}
\subsection{Basics of the closed-time-path formalism}
As proclaimed, the goal is to describe the time evolution of
a non-equilibrium quantum system consisting of an initial ensemble
of high-energy gluons at starting time $t_0$.
In this context, the starting point of non-equilibrium
field theory is to write down the CTP {\it in-in amplitude} $Z_P$
for the evolution of
the initial quantum state $\vert in\rangle$ forward in time
into the remote future, in the presence of
a medium which is described by the density matrix. The amplitude
$Z_P$ is formally given by \cite{chou}:
\begin{equation}
Z_P[{\cal J},\hat{\rho}]
\;=\;
\langle \; in \;\vert \;in\;\rangle _{{\cal J}}
\,\;=\,\;
\mbox{Tr}\left\{
U^\dagger(t_0 ,t)\, U(t,t_0)\, \hat{\rho}(t_0)\right\}
_{{\cal J}}
\;,
\label{Z1}
\end{equation}
where
${\cal J} = ({\cal J}^+,{\cal J}^-)$ is an external source with components on the $+$ and
$-$ time branch.
The initial-state density matrix is denoted $\hat{\rho}(t_0)$,
$U$ is the time evolution operator, and $T$ ($T^\dagger$)
denotes the time (anti-time) ordering.
Within the CTP formalism the amplitude $Z_P$ can be evaluated by time integration
over the {\it closed-time-path} $P$ in the complex $t$-plane.
This closed-time path
extends from $t=t_0$ to $t=t_\infty$ in the remote future
along the positive ($+$) branch and back to $t=t_0$ along the negative ($-$) branch.
where any point on the $+$ branch is understood at an earlier instant
than any point on the $-$ branch.
With $Z_P$ defined on this closed-time-path,
one may then, as in standard field theory, derive from it
the Green functions and their equations of motion. The differences
between the CTP and the standard field theory, which are briefly
summarized below, arise then solely from
the different time contour.
\smallskip
The interpretation of this formal apparatus for the
evolution along the closed-time path $P$ is rather simple:
If the initial state is the vacuum itself, that is,
the absence of a medium generated
by other particles, then the density matrix $\hat{\rho}$ is diagonal and
in (\ref{Z1}) one has $\vert in\rangle \rightarrow \vert 0 \rangle$.
In this case the evolution along the $+$ branch is identical to the anti-time ordered
evolution along the $-$ branch (modulo an irrelevant phase),
and space-time points on different branches cannot cross-talk.
In the presence of a medium however, the density matrix contains off-diagonal elements,
and there are statistical correlations between the quantum system and the
medium particles (e.g. scatterings) that lead to correlations between space-time
points on the $+$ branch with space-time points on the $-$ branch.
Hence, when addressing the evolution of a multi-particle system, both the
deterministic self-interaction of the quanta, i.e.
the time (anti-time) ordered evolution along the $+$ ($-$) branch,
{\it and} the statistical mutual interaction with each other,
i.e. the cross-talk between $+$ and $-$ branch,
must be included in a self-consistent manner.
The CTP method achieves this through the time integration along the contour $P$.
Although for physical observables the time values are on the $+$ branch,
both $+$ and $-$ branches will come into play at intermediate steps in
a self-consistent calculation.
\smallskip
The convenient feature of this Green function formalism on the closed-time path
is that it is formally completely analogous to standard quantum field theory,
except for the fact that the fields have contributions from both
time branches, and the path-integral representation of the {\it in-in} amplitude
(\ref{Z1}) contains, as usual, the classical action $I[{\cal A}]$ and
source terms ${\cal J}\circ{\cal A}$, but now for both time branches,
\newpage
\begin{eqnarray}
Z_P[{\cal J}^+,{\cal J}^-,\hat{\rho}]
&=&
\int \,{\cal D} {\cal A}^+{\cal D} {\cal A}^-
\;\, \exp \left[i \left( \frac{}{} I[{\cal A}^+] \,+\, {\cal J}^+\cdot{\cal A}^+
\right)
\right.
\label{Z14a}
\\
& &
\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;
\left.
\;-\;
i \left( \frac{}{} I^\ast[{\cal A}^-] \,+\, {\cal J}^-\cdot{\cal A}^-
\right)
\right]
\;{\cal M}[\hat{\rho}]
\nonumber
\;.
\end{eqnarray}
From this path-integral representation one obtains
the $n$-point Green functions $G^{(n)}(x_1,\ldots, x_n)$, which
are now
$$
G^{\alpha_1\alpha_2 \ldots\,\alpha_n}(x_1,\ldots, x_n)
\;\;, \;\;\;\;\;\;\;\;\;\;\;\;
\alpha_i \;= \pm
\;,
$$
depending on whether the space-time points
$x_i$ lie on the $+$ or $-$ time branch, and
it is possible to construct
a perturbative expansion of the non-equilibrium Green functions in terms
of modified Feynman rules (as compared to standard field theory):
\begin{description}
\item[(i)]
The number of elementary vertices is doubled,
because each propagator line of a Feynman diagram can be
any of the four components of the Green functions.
The interaction vertices in which
all the fields are on the $+$ branch are the usual ones,
while the vertices in which the fields are on
the $-$ branch have the opposite sign.
On the other hand, combinatoric factors, rules for loop integrals, etc.,
remain exactly the same as in usual field theory.
\smallskip
\item[(ii)]
All local 1-point functions, such as the
gauge-field or the color current, are `vectors' with 2 components,
\begin{equation}
{\cal A}(x) \;\equiv\;
\left( \begin{array}{c}
{\cal A}^{+} \\ {\cal A}^{-}
\end{array}\right)
\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;
{\cal J}(x) \;\equiv\;
\left( \begin{array}{c}
{\cal J}^{+} \\ {\cal J}^{-}
\end{array}\right)
\end{equation}
Similarly, all 2-point functions, as
the gluon propagator $\Delta_{\mu\nu}$ and the polarization tensor
$\Pi_{\mu\nu}$, are 2$\times$2 matrices with components
\begin{equation}
\Delta(x_1,x_2) \;\equiv\;
\left(\begin{array}{cc}
\Delta^{++} & \;\Delta^{+-} \\
\Delta^{-+} & \;\Delta^{--}
\end{array} \right)
\;\;\;\;\;\;\;\;
\Pi(x_1,x_2) \;\equiv\;
\left(\begin{array}{cc}
\Pi^{++} & \;\Pi^{+-} \\
\Pi^{-+} & \;\Pi^{--}
\end{array} \right)
\;.
\nonumber
\end{equation}
Explicitly, the components of the propagator are
\begin{eqnarray}
\Delta_{\mu\nu}^{F}(x,y)\;&\equiv&
\Delta_{\mu\nu}^{++}(x,y)\;=\;
-i\,
\langle \;T\, {\cal A}_\mu^+(x)\, {\cal A}_\nu^+(y) \;\rangle
\nonumber \\
\Delta_{\mu\nu}^{<}(x,y) \;&\equiv&
\Delta_{\mu\nu}^{+-}(x,y)\;=\;
-i\,
\langle \; {\cal A}_\nu^+(y)\, {\cal A}_\mu^-(x) \;\rangle
\nonumber \\
\Delta_{\mu\nu}^{>}(x,y)\;&\equiv&
\Delta_{\mu\nu}^{-+}(x,y)
\;=\;
-i\,
\langle \; {\cal A}_\mu^-(x)\, {\cal A}_\nu^+(y) \;\rangle
\nonumber\\
\Delta_{\mu\nu}^{\overline{F}}(x,y) \;& \equiv&
\Delta_{\mu\nu}^{--}(x,y)\;=\;
-i\,
\langle \;\overline{T}\, {\cal A}_\mu^-(x)\, {\cal A}_\nu^-(y) \;\rangle
\label{D22}
\;,
\end{eqnarray}
where
$\Delta^F$ is the usual time-ordered Feynman propagator, $\Delta^{\overline{F}}$
is the corresponding anti-time-ordered propagator, and $\Delta^>$ ($\Delta^<$) is
the unordered correlation function for $x_0 > y_0$ ($x_0 < y_0$).
In compact notation,
\begin{equation}
\Delta_{\mu\nu}(x,y)\;=\;
-i\,\langle \,T_P\, {\cal A}_\mu(x)\, {\cal A}_\nu(y) \,\rangle
\; ,
\end{equation}
where
the generalized time-ordering operator $T_P$ is defined as
\begin{equation}
T_P\,A(x)B(y)
\;:=\;
\theta_P (x_0,y_0)\, A(x) B(y) \;+ \;\theta_P(y_0,x_0) \,B(y) A(x)
\label{gto1}
\;,
\end{equation}
with the $\theta_P$-function defined as
\begin{equation}
\theta_P(x_0,y_0)\;=\;
\left\{
\begin{array}{ll}
1 & \;\; \mbox {if $x_0$ {\it succeeds} $y_0$ on the contour $P$}
\\
0 & \;\; \mbox{if $x_0$ {\it precedes} $y_0$ on the contour $P$}
\end{array}
\right.
\;.
\label{gto2}
\end{equation}
Higher order products $A(x)B(y)C(z)\ldots$ are ordered analogously.
Finally, for later use, let me also introduce
the generalized $\delta_P$-function defined on the closed-time path $P$:
\begin{equation}
\delta^4_P(x,y) \;\,:=\;\,
\left\{
\begin{array}{ll}
+\delta^4(x-y) & \;\; \mbox {if} \; x_0 \;\mbox{and} \;y_0 \; \mbox{from positive
branch} \\
-\delta^4(x-y) & \;\; \mbox {if} \; x_0 \;\mbox{and} \;y_0 \; \mbox{from negative
branch} \\
0 & \;\; \mbox {otherwise}
\end{array}
\right.
\;.
\label{gto3}
\end{equation}
\end{description}
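The contour ordering $\theta_P$ of Eq.~(\ref{gto2}) can be made concrete with a small sketch (illustrative only; the representation of contour times as (branch, $t$) pairs and the function name are conventions chosen here, not taken from the text):

```python
# Minimal sketch of the contour ordering theta_P on the closed-time path P:
# contour times are labelled (branch, t) with branch = +1 (forward, "+")
# or -1 (backward, "-").  Every point on the + branch precedes every point
# on the - branch; ordinary time order applies on the + branch, while the
# order is reversed on the - branch.

def succeeds(x, y):
    """theta_P(x0, y0): 1 if x succeeds y on the contour P, else 0."""
    bx, tx = x
    by, ty = y
    if bx != by:
        return 1 if bx == -1 else 0   # any - point succeeds any + point
    if bx == +1:
        return 1 if tx > ty else 0    # ordinary order on the + branch
    return 1 if tx < ty else 0        # reversed order on the - branch

# T_P A(x)B(y) = theta_P(x0,y0) A(x)B(y) + theta_P(y0,x0) B(y)A(x)
assert succeeds((+1, 5.0), (+1, 2.0)) == 1   # later + time succeeds
assert succeeds((-1, 2.0), (-1, 5.0)) == 1   # earlier - time succeeds
assert succeeds((-1, 1.0), (+1, 9.0)) == 1   # - branch succeeds + branch
```

The same bookkeeping underlies the sign structure of $\delta^4_P$ in Eq.~(\ref{gto3}).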
Henceforth
I will not explicitly label the $+$, $-$ components,
unless it is necessary. Instead a compressed notation
is used, in which it is understood that 1-point functions
such as ${\cal A}(x)$ or ${\cal J}(x)$, and 2-point functions
such as $\Delta_{\mu\nu}(x,y)$
or $\Pi_{\mu\nu}(x,y)$, receive contributions from both the $+$ and $-$ time
branches.
\subsection{The generating functional for the non-equilibrium Green functions}
The amplitude $Z_P$ introduced in (\ref{Z1}) admits a path-integral representation which
gives the {\it generating functional for the CTP Green functions}
defined on the closed-time path $P$ \cite{ms42}:
\begin{equation}
Z_P[{\cal J},\hat{\rho}] \;=\;
{\cal N}\;\int \,{\cal D}{\cal A} \; \mbox{det}{\cal F}\;\delta \left( f[{\cal A}]\right)
\;\,
\exp \left\{i \left( \frac{}{} I \left[ {\cal A}, {\cal J}\right]
\right)\right\}
\;\; {\cal M}(\hat{\rho})
\label{Z2}
\;,
\end{equation}
where ${\cal A} = ({\cal A}^+,{\cal A}^-)$ and
${\cal J} = ({\cal J}^+,{\cal J}^-)$
have two components, living on the $+$ and $-$ time branches.
\medskip
\noindent
The structure of the functional $Z_P$ in (\ref{Z2})
is the following:
\begin{description}
\item[(i)]
The functional integral (with normalization ${\cal N}$)
is over all gauge field configurations
with measure ${\cal D}{\cal A} \equiv \prod_{\mu,a} {\cal D}{\cal A}_\mu^a$,
subject to the condition of gauge fixing, here for the
{\it class of non-covariant gauges} defined by
\begin{equation}
f^a[{\cal A}] \;:=\; \hat{n}\cdot {\cal A}^a(x) \;-\; B^a(x)
\;\;\;\;\;\;\;\;\;\;\;
\Longrightarrow
\;\;\;\;\;\;\;\;\;\;
\langle \;\hat{n}^\mu\,{\cal A}_\mu^a(x)\;\rangle \;=\;0
\label{gauge1}
\;,
\end{equation}
where
$\hat{n}^\mu\,\equiv \,\frac{n^\mu}{\sqrt{\vert n^2\vert }}$ and
$n^\mu$ is a constant 4-vector, being either space-like ($n^2 < 0$),
time-like ($n^2 > 0$), or light-like ($n^2=0$).
With this choice of gauge class the {\it local gauge constraint}
on the fields ${\cal A}^a_\mu(x)$ in
the path-integral (\ref{Z2}) becomes,
\begin{eqnarray}
\mbox{det}{\cal F}\;\delta \left( \hat{n}\cdot {\cal A}^a-B^a\right)
&=&
\mbox{const}\,\times\,
\exp\left\{- \frac{i}{2\alpha}
\,\int_P d^4x \,\left[\hat{n}\cdot {\cal A}^a(x)\right]^2
\right\}
\nonumber \\
&\equiv&
I_{GF}\left[\hat{n}\cdot{\cal A}\right]
\;,
\label{gauge2}
\end{eqnarray}
where $\mbox{det}{\cal F}$ is the Faddeev-Popov determinant (which in the case
of the non-covariant gauges turns out to be a constant factor),
and where $\delta (\hat{n}\cdot{\cal A}) \equiv \prod_{a}\delta (\hat{n}\cdot{\cal A}^a)$.
The right-hand side translates this constraint into
the {\it gauge-fixing} functional $I_{GF}$.
The particular choice of
the vector $\hat{n}^\mu$ and of
the real-valued parameter $\alpha$ is dictated by
the physics or by computational convenience, and distinguishes
the different gauges within the class
of non-covariant gauges \cite{gaugereview,gaugebook}:
\begin{eqnarray}
\mbox{homogeneous axial gauge}: \;& & \;\;\;\;\; n^2 \;< \; 0 \;\;\;\;\;\;\alpha \;=\;0
\nonumber \\
\mbox{inhomogeneous axial gauge}: \;& & \;\;\;\;\; n^2 \;< \; 0 \;\;\;\;\;\;\alpha \;=\;1
\nonumber \\
\mbox{temporal axial gauge}: \;& & \;\;\;\;\; n^2 \;> \; 0 \;\;\;\;\;\;\alpha \;=\;0
\nonumber \\
\mbox{lightcone gauge}: \;& & \;\;\;\;\; n^2 \;= \; 0 \;\;\;\;\;\;\alpha \;=\;0
\label{gauge1a}
\;.
\end{eqnarray}
\item[(ii)]
The exponential $I$ is the {\it effective classical action}
with respect to both the $+$ and the $-$ time contour,
$
I\left[ {\cal A}, {\cal J}\right]
\equiv I\left[ {\cal A}^+, {\cal J}^+\right]
\;-\;
I^\ast\left[ {\cal A}^-, {\cal J}^-\right]
$,
including
the usual Yang-Mills action $I_{YM}=\int d^4x {\cal L}_{YM}$, plus the source
${\cal J}$ coupled to the gauge field ${\cal A}$:
\begin{eqnarray}
I\left[ {\cal A}, {\cal J}\right]
& =&
-\frac{1}{4} \int_P d^4x \,{\cal F}_{\mu\nu}^a(x) {\cal F}^{\mu\nu, \,a}(x)
\;+\;
\int_P d^4x \,{\cal J}_{\mu}^a(x) {\cal A}^{\mu, \,a}(x)
\nonumber \\
& &\nonumber \\
&\equiv&
\;\;\;\;\;\;\;\;
I_{YM}\left[{\cal A}\right] \;\;\;+\;\;\; {\cal J}\circ {\cal A}
\label{I1}
\;,
\end{eqnarray}
where
$
{\cal F}_{\mu\nu}^a=
\partial^x_\mu {\cal A}_{\nu}^a - \partial^x_\nu {\cal A}_{\mu}^a
+ g\, f^{abc} \,{\cal A}_\mu^b {\cal A}_\nu^c
$.
\item[(iii)]
The form of the initial state at $t=t_0$, as
described by the density matrix $\hat{\rho}$, is
embodied in the functional
${\cal M}(\hat{\rho})$, which is the density-matrix element of the gauge fields
at initial time $t_0$,
\begin{equation}
{\cal M}(\hat{\rho})
\;=\;
\langle \,{\cal A}^+ (t_0) \vert \,{\hat \rho}
\,\vert\,{\cal A}^-(t_0)\,\rangle
\;\equiv\;\,
\exp\left( i\; {\cal K}[{\cal A}] \right)
\label{K}
\;,
\end{equation}
where ${\cal A}^\pm$ refers to the $+$ and $-$ time branch {\it at} $t_0$,
respectively.
The functional ${\cal K}$ may be expanded in a series of
non-local kernels corresponding to multi-point correlations concentrated
at $t=t_0$,
\begin{eqnarray}
{\cal K}[{\cal A}]
&=&
{\cal K}^{(0)} \;+\;
\int_P d^4x \;{\cal K}^{(1)\;a}_{\;\;\;\,\mu}(x) \;{\cal A}^{\mu, \,a}(x)
\nonumber \\
& &
\;\;\;\;\;\;\;
\;+\;
\frac{1}{2}
\int_P d^4xd^4y \;{\cal K}^{(2) \;ab}_{\;\;\;\,\mu\nu}(x,y)\;{\cal A}^{\mu, \,a} (x)
\,{\cal A}^{\nu, \,b}(y)
\,\ldots
\label{Kexpansion}
\;.
\end{eqnarray}
Clearly, the sequence of kernels ${\cal K}^{(n)}$ contains as much information
as the original density matrix.
In the special case that $\hat{\rho}$ is diagonal,
the kernels ${\cal K}^{(n)}=0$ for all $n$, and
the usual `vacuum field theory' is recovered.
\end{description}
\medskip
The path-integral representation (\ref{Z2}) can be rewritten in a
form more convenient for the following:
First, the gauge-fixing functional $I_{GF}[\hat{n}\cdot{\cal A}]$ is implemented by
using (\ref{gauge2}).
Second, the series representation (\ref{Kexpansion}) is inserted into
the initial state functional ${\cal M}(\hat{\rho})$.
Third,
${\cal K}^{(0)}$ is absorbed in the overall normalization ${\cal N}$ of $Z_P$
(henceforth set to unity),
and the external source ${\cal J}$ in the 1-point kernel ${\cal K}^{(1)}$:
\begin{equation}
{\cal K}^{(0)}\;:=\; i\, \ln {\cal N}
\;,\;\;\;\;\;\;\;\;\;\;\;\;\;\;
{\cal K}^{(1)} \;:=\; {\cal K}^{(1)}\;+\; {\cal J}
\label{NJ}
\;.
\end{equation}
Then (\ref{Z2}) becomes,
\begin{equation}
Z_P[{\cal J},\hat{\rho}]
\;\;\Longrightarrow\;\;
Z_P[{\cal K}] \;=\;
\int \,{\cal D}{\cal A} \;
\exp \left\{i \left( \frac{}{} I \left[ {\cal A}, {\cal K}\right]
\right)\right\}
\label{Z3}
\;,
\end{equation}
where, instead of (\ref{I1}),
\begin{equation}
I \left[ {\cal A}, {\cal K}\right]
\;\equiv\;
I_{YM}\left[{\cal A}\right]
\;+\;
I_{GF}\left[\hat{n}\cdot{\cal A}\right]
\;+\; {\cal K}^{(1)}\circ {\cal A} \;+\;
\frac{1}{2}\,{\cal K}^{(2)}\circ \left({\cal A}\,{\cal A}\right)
\;+\;
\frac{1}{6}\,{\cal K}^{(3)}\circ \left({\cal A}\,{\cal A} \,{\cal A}\right)
\;+\;\ldots
\label{I2}
\;.
\end{equation}
\medskip
\section{Separating soft and hard dynamics and the equations of motion}
The first step in the strategy is a separation of
soft and hard physics in the
path-integral formalism, with Green functions of
both the soft and hard quanta evaluated in the presence of a soft classical field
that is induced by, and feeds back on, the quantum dynamics.
The basic idea is to split up the gauge field
${\cal A}_\mu$ appearing in the classical action
$I_{YM}\left[{\cal A}\right]$ into a soft (long-range) part $A_\mu$
and a hard (short-range) quantum field $a_\mu$:
\begin{eqnarray}
{\cal A}_\mu^a(x) &=&
\int \frac{d^4k}{(2\pi)^4}\,e^{+i k \cdot x}
{\cal A}_\mu^a(k) \;\, \theta(\mu - k^0)
\label{Aa}
\\
& &
\;\;\;\;\;\;\;\;\;
\;\,+\;\,
\int \frac{d^4k}{(2\pi)^4}\,e^{+i k \cdot x}
{\cal A}_\mu^a(k) \,\; \theta(k^0 -\mu)
\;\;\equiv\;\;
A_\mu^a(x) \;+\; a_\mu^a(x)
\;.
\nonumber
\end{eqnarray}
This is the formal definition of the terms `soft' and `hard'.
The soft and hard physics are separated by a
(at this point arbitrary) space-time scale $\lambda \equiv 1/\mu$,
so that one may associate
the soft field $A_\mu$ being responsible for long range color collective effects,
and the hard field $a_\mu$ embodying the short-range quantum dynamics.
Consequently, the field strength tensor receives a soft part,
a hard part, and a mixed contribution,
\begin{equation}
{\cal F}_{\mu\nu}^{a}(x) \;\equiv\;
\left(\frac{}{}
F_{\mu\nu}^{a}[A] \;+\; f_{\mu\nu}^{a}[a] \;+\; \phi_{\mu\nu}^{a}[A,a]
\right) (x)
\label{Ffphi}
\;.
\end{equation}
Now comes the physics input. Consider the following
{\it physics scenario}: The initial state is
a (dilute) ensemble of hard gluons of very small spatial extent
$\ll \lambda$, corresponding to transverse momenta $k_\perp^2 \gg \mu^2$.
By definition of $\lambda$, or $\mu$, the short-range character of
these quantum fluctuations implies that the expectation value $\langle a_\mu\rangle$
vanishes at all times. However, the long-range correlations of the eventually
populated soft modes with very small momenta $k_\perp^2\ll\mu^2$ may lead
to a collective mean field with non-vanishing $\langle A\rangle$.
Accordingly, the following condition
on the expectation values of the fields is imposed:
\begin{equation}
\langle \; A_\mu^a(x)\;\rangle \;
\left\{
\begin{array}{ll}
\;=\;0 \;\;\;\mbox{for} \;t\,\le\,t_0 \\
\;\neq\;0 \;\;\;\mbox{for} \;t\,>\,t_0
\end{array}
\right.
\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;
\langle \; a_\mu^a(x)\;\rangle \;\stackrel{!}{=}\; 0
\;\;\;\mbox{for all} \;t
\;.
\label{MFconstraint}
\end{equation}
Furthermore, for simplicity the quantum fluctuations of the soft field are ignored, assuming
any multi-point correlations of soft fields to be small,
\begin{equation}
\langle \; A_{\mu_1}^{a_1}(x_1)\; \ldots A_{\mu_n}^{a_n}(x_n) \;\rangle
\ll\;\langle \; A_{\mu_1}^{a_1}(x_1)\;\rangle
\;\ldots \;\langle \; A_{\mu_n}^{a_n}(x_n)\;\rangle
\;,
\end{equation}
i.e., one takes $A_\mu$ as a non-propagating, non-fluctuating classical field.
When quantizing this decomposed theory by writing down the
appropriate {\it in-in}-amplitude $Z_P$, one must be
consistent with the gauge field decomposition (\ref{Aa}) into soft and
hard components and with the classical character of the former.
In addition, the initial-state kernels are taken to satisfy $M^{(1)}_{\mu} = 0$ and $M^{(2)}_{\mu\nu}\ge 0$. That is, I restrict
in the following to a class of non-equilibrium initial states of
Gaussian form (i.e. quadratic in the $a_\mu$ fields) and
do not consider possible linear force terms.
Substituting the soft-hard mode decomposition (\ref{Aa})
with the condition (\ref{MFconstraint}) into (\ref{Z3}),
the functional integral of the {\it in-in} amplitude (\ref{Z3}) becomes:
\begin{equation}
Z_P[ {\cal K}] \;=\;
\int \,{\cal D} A \,{\cal D} a \;
\exp \left\{i \left( \frac{}{} I \left[ A\right]
\;+\; I \left[ a\right] \;+\; I \left[ A, a\right]
\right)\right\}
\label{Z4}
\;,
\end{equation}
with a soft, hard, and mixed contribution, respectively \cite{ms42}.
Introducing the {\it connected} generating functional for the
{\it connected} Green functions, denoted by ${\cal G}^{(n)}$,
\begin{equation}
W_P\left[{\cal K}\right] \;\;=\;\;
- i \,\ln\, Z_P\left[ {\cal K}\right]
\label{W0}
\;,
\end{equation}
from which one obtains
the {\it connected} Green functions ${\cal G}^{(n)}$
by functional differentiation,
in terms of mixed products of $a_\mu$ and $A_\mu$ fields
\begin{equation}
(-i)\, {\cal G}_{\;\;\;\;\mu_1\ldots \mu_n}^{(n)\;a_1\ldots a_n}(x_1,\ldots, x_n)
\;\equiv\;
\left.
\frac{\delta}{i \,\delta{\cal K}^{(n)}}
W_P[{\cal K}]\right|_{{\cal K}=0}
\label{Green2}
\;,
\end{equation}
where the superscript $(c)$ indicates the `connected parts'.
Specifically, one finds
\begin{eqnarray}
{\cal G}_{\;\;\;\;\mu}^{(1)\;a}(x)
&=&
\langle \;A_\mu^a(x) \;\ \rangle_P^{(c)}
\;\;\equiv\;\; \overline{A}_\mu^a(x)
\nonumber \\
{\cal G}_{\;\;\;\;\mu\nu}^{(2)\;ab}(x,y)
&=&
\langle \;a_\mu^a(x) a_\nu^b(y)\;\rangle_P^{(c)}
\;\;\equiv\;\;
i \widehat{\Delta}_{\mu\nu}^{ab}(x,y)
\;.
\label{Green3}
\end{eqnarray}
These relations
define the soft mean field $\overline{A}$ and the hard propagators
$\widehat{\Delta}$.
\medskip
The equations of motions for $\overline{A}$ and $\widehat{\Delta}$ follow
now as in usual field theory by functional differentiation of the
effective action,
\begin{equation}
\Gamma_P\left[\overline{A}, \widehat{\Delta}\right]
\;=\;
W_P\left[{\cal K}\right]
\;-\;{\cal K}^{(1)}\circ \overline{A}
\;-\;\frac{1}{2}{\cal K}^{(2)}\circ
\left( i\widehat{\Delta}+ \overline{A}^2\right)
\;.
\end{equation}
Note that the main approximation at this point
is the truncation of the infinite hierarchy of
equations for the $n$-point Green functions of the exact theory,
to the 1-point function (the soft mean field $\overline{A}(x)$) and the
2-point function (the hard propagator $\widehat{\Delta}(x,y)$),
with all higher-point functions being combinations of these and connected
by the 3-gluon and 4-gluon vertices.
\subsection{Yang-Mills equation for the `soft' mean field}
The equation of motion for the soft field $\overline{A}_\mu^a(x)$,
is given by
$
\delta \Gamma_P
/
\delta \overline{A}
=
-{\cal K}^{(1)}
- {\cal K}^{(2)} \circ\overline{A}
$,
from which one obtains,
upon taking into account the initial condition
${\cal K}^{(1)} = 0$, the {\it Yang-Mills equation for} $\overline{A}$:
\begin{equation}
\left[\frac{}{}
\overline{D}^{\lambda,\;ab} , \; \overline{F}_{\lambda \mu}^{b}
\right](x)
\;=\;
- \,\widehat{j}_{\mu}^{a}(x)
\;-\; \int_P d^4y \,{\cal K}_{\;\;\;\mu\lambda}^{(2)\,ab}(x,y)
\,\overline{A}^{\lambda,\,b}(y)
\label{YME2}
\;,
\end{equation}
where $[\overline{D}, \overline{F}] =\overline{D}\, \overline{F} -
\overline{F}\, \overline{D}$
with the covariant derivative defined as
$\overline{D}^\lambda \equiv D^\lambda[\overline{A}] =
\partial_x^\lambda - ig \overline{A}^\lambda$,
and $\overline{F}_{\lambda \mu}\equiv F_{\lambda \mu}[\overline{A}]
= \left[ \overline{D}_{\lambda}\,,\, \overline{D}_{\mu}\right]/(-ig)$.
The left hand side of (\ref{YME2}) may be written as
\begin{equation}
\left[\frac{}{}
\overline{D}^{\lambda,\;ab} , \; \overline{F}_{\lambda \mu}^{b}
\right](x)
\;=\;
{\cal D}_{(0)\;\mu\lambda}^{-1\;\;ab} \;\overline{A}^{\lambda,\,b}(x)
\;\;+\;\;\overline{\Xi}_{\mu}^{a}(x)
\end{equation}
\begin{equation}
{\cal D}_{(0)\;\mu\lambda}^{-1\;\;ab}
\;\equiv\;\delta^{ab} \,\left( g_{\mu\lambda} \partial_x^{2} -
\partial^x_\mu\partial^x_\lambda
- \hat{n}_\mu\hat{n}_\lambda \right)
\;,
\label{DF1}
\end{equation}
where, upon taking into account the gauge constraint (\ref{gauge1}),
the term $-\hat{n}_\mu\hat{n}_\lambda \overline{A}^\lambda$
does not contribute, because
$0=\langle \hat{n}\cdot A\rangle = \hat{n}^\nu \overline{A}_\nu$,
and where
\begin{equation}
\overline{\Xi}_{\mu}^{a}(x)
\;\;=\;\;
\overline{\Xi}_{(1)\;\mu}^{a}(x)\;\;+\;\; \overline{\Xi}_{(2)\;\mu}^{a}(x)
\label{DF2}
\end{equation}
\begin{eqnarray}
\overline{\Xi}_{(1)\;\mu}^{a}(x)
&= &
\;-\; \frac{g}{2} \,
\int_P\prod_{i=1}^{2} d^4x_i \;
V_{(0)\;\mu\nu\lambda}^{\;\;\;\;abc}(x,x_1,x_2)
\;\overline{A}^{\nu,\,b}(x_1) \overline{A}^{\lambda,\,c}(x_2)
\label{DF3} \\
\overline{\Xi}_{(2)\;\mu}^{a}(x)
&= &
\;+\;
\frac{i\,g^2}{6} \,
\int_P\prod_{i=1}^{3} d^4x_i \;
W_{(0)\;\mu\nu\lambda\sigma}^{\;\;\;\;abcd}(x,x_1,x_2,x_3)
\nonumber \\
& & \;\;\;\;\;\;\;\;\;\;\;\;\;\;
\;\;\;\;\;\;\;\;\;\;\;
\times
\;\overline{A}^{\nu,\,b}(x_1) \overline{A}^{\lambda,\,c}(x_2)\overline{A}^{\sigma,\,d}(x_3)
\label{DF4}
\;.
\end{eqnarray}
On the right hand side of (\ref{YME2}), the current $\widehat{j}$ is the
{\it induced current} due to the `hard' quantum dynamics
in the presence of the `soft' field $\overline{A}$:
\begin{equation}
\widehat{j}_{\mu}^{a}(x)
\;\;=\;\;
\widehat{j}_{(1)\;\mu}^{a}(x) \;\;+\;\;\widehat{j}_{(2)\;\mu}^{a}(x)
\;\;+\;\;\widehat{j}_{(3)\;\mu}^{a}(x)
\label{J}
\end{equation}
\begin{eqnarray}
\widehat{j}_{(1)\;\mu}^{a}(x)
& = &
-\; \frac{i\,g}{2} \,
\int_P\prod_{i=1}^{2} d^4x_i \;
V_{(0)\;\mu\nu\lambda}^{\;\;\;\;abc}(x,x_1,x_2)
\;\widehat{\Delta}^{\nu\lambda,\,bc}(x_1,x_2)
\label{J1}
\\
\widehat{j}_{(2)\;\mu}^{a}(x)
&= &
-\;
\frac{g^2}{2} \,
\; \int_P\prod_{i=1}^{3} d^4x_i
\;\;W_{(0)\;\mu\nu\lambda\sigma}^{\;\;\;\;abcd}(x,x_1,x_2,x_3)\;
\nonumber \\
& & \;\;\;\;\;\;\;\;\;\;\;\;\;\;
\;\;\;\;\;\;\;\;\;\;\;
\times
\;\overline{A}^{\nu,\,b}(x_1)
\;\widehat{\Delta}^{\lambda\sigma,\,cd}(x_2,x_3)
\label{J2}
\\
\widehat{j}_{(3)\;\mu}^{a}(x)
&= &
-\;
\frac{i g^3}{6} \,
\; \int_P\prod_{i=1}^{3} d^4x_i d^4y_i
\;\;W_{(0)\;\mu\nu\lambda\sigma}^{\;\;\;\;abcd}(x,x_1,x_2,x_3)\;
\widehat{\Delta}^{\nu\nu',\,bb'}(x_1,y_1) \;
\nonumber \\
& &
\times \;
\widehat{\Delta}^{\lambda\lambda',\,cc'}(x_2,y_2) \;
\widehat{\Delta}^{\sigma\sigma',\,dd'}(x_3,y_3) \;
\;V_{(0)\;\nu'\lambda'\sigma'}^{\;\;\;\;\,b'c'd'}(y_1,y_2,y_3)
\label{J3}
\;.
\end{eqnarray}
Finally, the second term on the right side of (\ref{YME2}) is the
initial state contribution to the current, which vanishes for
$t = x^0 > t_0$.
\smallskip
Notice that the function $\overline{\Xi}$ on the left hand side of (\ref{YME2})
contains the non-linear self-coupling of the soft field $\overline{A}$
alone, whereas the induced current $\widehat{j}$ on the right hand side
is determined by the hard propagator $\widehat{\Delta}$,
thereby generating the soft field.
\subsection{Dyson-Schwinger equation for the `hard' Green function}
The equation of motion for the `hard' propagator,
$\widehat{\Delta}_{\mu\nu}^{ab}(x,y)$, is
$
\delta \Gamma_P
/
\delta \widehat{\Delta}
=
{\cal K}^{(2)}/(2i)
$,
from which one finds after incorporating the initial condition ${\cal K}^{(1)}=0$,
the {\it Dyson-Schwinger equation for} $\widehat{\Delta}$:
\begin{equation}
\left[\frac{}{}
\left(\widehat{\Delta}_{\mu\nu}^{ab}\right)^{-1} \;-\;
\left(\Delta_{(0)\,\mu\nu}^{\;\;ab}\right)^{-1} \;+\;
\overline{\Pi}_{\mu\nu}^{ab} \;+\;\widehat{\Pi}_{\mu\nu}^{ab}
\,\; \right](x,y)
\;=\;
{\cal K}^{(2)\;ab}_{\;\;\;\;\mu\nu}(x,y)
\label{DSE2}
\;,
\end{equation}
where
$\widehat{\Delta} \equiv \widehat{\Delta}_{[\overline{A}]}$
is the {\it fully dressed propagator} of
the `hard' quantum fluctuations in the presence of the `soft' mean field,
whereas $\Delta_{(0)}$ is the {\it free propagator}.
The polarization tensor $\Pi$ has been decomposed into two parts,
a mean-field part and a quantum fluctuation part.
The {\it mean-field polarization tensor} $\overline{\Pi}$ incorporates
the {\it local} interaction between the `hard' quanta and the `soft' mean field,
\begin{equation}
\overline{\Pi}_{\mu\nu}^{ab}(x,y) \;\;=\;\;
\overline{\Pi}_{(1)\;\mu\nu}^{\;\;\;\;ab}(x,y) \;\;+\;\;
\overline{\Pi}_{(2)\;\mu\nu}^{\;\;\;\;ab}(x,y)
\label{PiMF}
\end{equation}
\begin{eqnarray}
\overline{\Pi}_{(1)\;\mu\nu}^{\;\;\;\;ab}(x,y)
&=&
\frac{i g}{2}
\;\delta_P^4(x,y)\;
\, \int_P d^4z \,V_{(0)\;\mu\nu\lambda}^{abc}(x,y,z)
\,\overline{A}^{\lambda ,\,c}(z)
\label{PiMF1}
\\
\overline{\Pi}_{(2)\;\mu\nu}^{\;\;\;\;ab}(x,y)
&=&
\frac{g^2}{6}
\;\delta_P^4(x,y)\;
\, \int_P d^4z d^4 w \,W_{(0)\;\mu\nu\lambda\sigma}^{abcd}(x,y,z,w)
\nonumber \\
& & \;\;\;\;\;\;\;\;\;\;\;\;\;\;
\;\;\;\;\;\;\;\;\;\;\;
\times
\,\overline{A}^{\lambda ,\,c}(z) \,\overline{A}^{\sigma ,\,d}(w)
\label{PiMF2}
\;.
\end{eqnarray}
plus terms of order $g^3 \overline{A}^3$, which one may safely ignore
within the present approximation scheme.
The {\it fluctuation polarization tensor} $\widehat{\Pi}$ contains
the quantum self-interaction among the `hard' quanta in the presence of
$\overline{A}$, and is given by the
variation of the 2-loop part $\Gamma_P^{(2)}$
of the effective action,
$2i\,\delta \Gamma_P^{(2)} / \delta\widehat{\Delta}$,
\begin{equation}
\widehat{\Pi}^{ab}_{\mu\nu}(x,y)
\;\;=\;\;
\left(\frac{}{}
\widehat{\Pi}_{(1)}
\;\;+\;\;
\widehat{\Pi}_{(2)}
\;\;+\;\;
\widehat{\Pi}_{(3)}
\;\;+\;\;
\widehat{\Pi}_{(4)}
\right)^{ab}_{\mu\nu}(x,y)
\;,
\label{PiQU}
\end{equation}
\begin{eqnarray}
\widehat{\Pi}^{\;\;\;\;ab}_{(1)\;\mu\nu}(x,y)
&=&
-\, \frac{g^2}{2}
\; \int_P d^4x_1 d^4y_1
\;\;W_{(0)\;\mu\nu\lambda\sigma}^{\;\;\;\;abcd}(x,y,x_1,y_1)\;
\widehat{\Delta}^{\lambda\sigma\,,cd}(y_1,x_1) \;\;\;\;\;\;\;\;
\\
\widehat{\Pi}^{\;\;\;\;ab}_{(2)\;\mu\nu}(x,y)
&=&
-\,\frac{i\,g^2}{2}
\; \int_P\prod_{i=1}^{2} d^4x_id^4y_i
\;\;V_{(0)\;\mu\lambda\sigma}^{\;\;\;\;acd}(x,x_1,x_2)\;\;
\nonumber \\
& &
\times
\;\widehat{\Delta}^{\lambda\lambda',\,cc'}(x_1,y_1)\;
\widehat{\Delta}^{\sigma\sigma',\,dd'}(x_2,y_2)\;
\widehat{V}_{\sigma'\lambda'\nu}^{d'c'b}(y_2,y_1,y)
\label{Pib}
\\
\widehat{\Pi}^{\;\;\;\;ab}_{(3)\;\mu\nu}(x,y)
&=&
-\;\frac{g^4}{6}
\; \int_P\prod_{i=1}^{3} d^4x_id^4y_i
\;\;W_{(0)\;\mu\lambda\sigma\tau}^{\;\;\;\;acde}(x,x_1,x_2,x_3)\;\;
\nonumber \\
& &
\times
\;
\widehat{\Delta}^{\lambda\lambda',\,cc'}(x_1,y_1)\;
\widehat{\Delta}^{\sigma\sigma',\,dd'}(x_2,y_2)\;
\nonumber \\
& &
\times
\;
\widehat{\Delta}^{\tau\tau',\,ee'}(x_3,y_3)\;
\widehat{W}_{\tau'\sigma'\lambda'\nu}^{e'd'c'b}(y_3,y_2,y_1,y)
\label{Pic}
\\
\widehat{\Pi}^{\;\;\;\;ab}_{(4)\;\mu\nu}(x,y)
&=&
-\;\frac{i\,g^4}{24}
\;\int_P\prod_{i=1}^{2} d^4x_id^4y_id^4z_i
\;\;W_{(0)\;\mu\lambda\sigma\tau}^{\;\;\;\;acde}(x,x_1,x_2,x_3)\;\;
\nonumber \\
& &
\times
\;
\widehat{\Delta}^{\sigma\rho',\,df'}(x_2,z_2)\;
\widehat{\Delta}^{\tau\rho'',\,ef''}(x_3,z_3)\;
\;
\widehat{V}_{\rho''\rho'\rho}^{f''f'f}(z_3,z_2,z_1) \;\;
\nonumber \\
& &
\times
\;
\widehat{\Delta}^{\rho\lambda',\,fc'}(z_1,y_1)\;
\widehat{\Delta}^{\lambda\sigma',\,cd'}(x_1,y_2)\;
\;\;\widehat{V}_{\lambda'\sigma'\nu}^{c'd'b}(y_1,y_2,y)
\label{Pid}
\;.
\end{eqnarray}
Note that the usual Dyson-Schwinger equation
in {\it vacuum} is contained in (\ref{DSE2})--(\ref{Pid}) as the special case
when the mean field vanishes, $\overline{A}(x)= 0$,
and initial state correlations are absent, ${\cal K}^{(2)}(x,y)=0$. In this case,
the propagator becomes the usual vacuum propagator,
since the mean-field contribution $\overline{\Pi}$ is identically zero,
and the quantum part $\widehat{\Pi}$ reduces to the vacuum contribution.
\section{Transition to quantum kinetics}
\label{sec:section4}
The equations of motion (\ref{YME2}) and (\ref{DSE2}) are
non-linear integro-differential equations and clearly not solvable in
all their generality. To make progress, one must be more specific
and employ now the details of the proclaimed physics scenario, described
above.
\subsection{Quantum and kinetic space-time regimes}
The key assumption is the separability of hard and soft dynamics in terms
of the space-time scale $r(\mu)\propto 1/\mu\approx 1$ $fm$.
This implies that one
may characterize the dynamical evolution of the gluon system
by a short-range {\it quantum scale} $r_{qua}\ll r(\mu)$, and a comparably
long-range {\it kinetic scale}
$r_{kin}\,\lower3pt\hbox{$\buildrel > \over\sim$}\,r(\mu)$.
Low-momentum collective excitations
that may develop at the particular momentum scale $g\mu$ are
thus well separated from the typical hard gluon momenta of the order $\mu$,
if $g\ll 1$. Therefore, collectivity can arise, because the wavelength of the
soft oscillations $\sim 1/g\mu$ is much larger than the typical
extension of the hard quantum fluctuations $\sim 1/\mu$.
Notice that this separation of scales is not an academic
construction, but rather is a general property of quantum field theory.
A simple example is a freely propagating electron:
In this case, the quantum scale is given by its Compton wavelength
$\sim 1/m_e$ in the rest frame of the charge, and measures the size of
the radiative vacuum polarization cloud around the bare charge.
The kinetic scale, on the other hand, is determined by the mean-free-path
of the charge, which is infinite in vacuum, and in medium is
inversely proportional to the local density times the interaction cross-section,
$\sim 1/(n_g \,\sigma_{int})$.
Adapting this notion
to the present case of gluon dynamics, let me define
$r_{qua}$ and $r_{kin}$ as follows:
\begin{description}
\item[quantum scale {\boldmath $r_{qua}$}:]
Measures the spatial extension of quantum fluctuations associated with virtual
and real radiative emission and re-absorption
off a given hard gluon, described by the hard propagator $\widehat{\Delta}$.
It can thus be interpreted as
its Compton wavelength, corresponding
to the typical transverse extension of the fluctuations and
thus inversely proportional to the average transverse momentum,
\begin{equation}
r_{qua} \;\,\equiv\;\,\widehat{\lambda}
\;\;\simeq\;\;\frac{1}{\langle \;k_\perp\;\rangle}
\;,
\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;
\langle \;k_\perp\;\rangle \;\ge \;\mu
\;,
\label{rqua}
\end{equation}
where the second relation is imposed by means of the definition (\ref{Aa})
of hard and soft modes.
Note that $\widehat{\lambda}$ is a space-time dependent quantity, because
the magnitude of $\langle k_\perp \rangle$ is determined by both the
radiative self-interactions of the hard gluons and their interactions
with the soft field.
\item[kinetic scale {\boldmath $r_{kin}$}:]
Measures the range of the long-wavelength correlations, described by
the soft mean-field $\overline{A}$, and may be parametrized in terms of the
average momentum of soft modes $\langle q \rangle$, such that
\begin{equation}
r_{kin} \;\,\equiv\;\,\overline{\lambda}
\;\;\simeq\;\;\frac{1}{\langle \;q_\perp\;\rangle}
\;,
\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;
\langle \;q_\perp\;\rangle \; \,\lower3pt\hbox{$\buildrel < \over\sim$}\,
\;g\,\mu
\;,
\label{rkin}
\end{equation}
where $\overline{\lambda}$ may vary from one space-time point to
another, because the population of soft modes $\overline{A}(q)$ is determined
locally by the hard current $\widehat{j}$ with dominant contribution
from gluons with transverse momentum $\simeq \mu$.
\end{description}
The above classification of quantum- (kinetic-) scales
specifies in space-time the relevant regime for the hard (soft)
dynamics, and the separability of the two scales
$r_{qua}$ and $r_{kin}$
imposes the following condition on the relation between space-time and
momentum:
\begin{equation}
\widehat{\lambda} \;\;\ll\;\;\overline{\lambda}
\;,\;\;\;\;\;\;\;\;\;\;\;\;\;
\mbox{or}
\;\;\;\;\;\;\;\;\;\;\;\;\;\;
\langle\; k_\perp \;\rangle
\; \;\;\approx
\;\mu\;\;\gg\;\; g\,\mu \;\;
\approx \;\;\;
\langle\; q_\perp \;\rangle
\label{kqcond}
\;.
\end{equation}
The physical interpretation of (\ref{kqcond})
is simple:
At short distances $r_{qua} \ll 1/(g \mu)$ a hard gluon can be considered
as an {\it incoherent quantum} which emits and partly reabsorbs other
hard gluons corresponding to the combination of real
bremsstrahlung and virtual radiative fluctuations.
Only a hard probe with a short wavelength $\le r_{qua}$ can resolve
this quantum dynamics.
On the other hand,
at larger distances $r_{kin} \approx 1/(g \mu)$, a gluon appears
as a {\it coherent quasi-particle}, that is, as an extended
object with a changing transverse size corresponding
to the extent of its intrinsic quantum fluctuations. This dynamical
substructure is however not resolvable by long-wavelength modes
$\ge r_{kin}$ of the soft field $\overline{A}$.
\medskip
Accordingly, one may classify the quantum and kinetic regimes, respectively,
by associating with two distinct space-time points $x^\mu$ and $y^\mu$ the
following characteristic scales:
\begin{eqnarray}
s^\mu
\;&\equiv&
\;\;
x^\mu\;-\;y^\mu \;\;
\sim
\;\widehat{\lambda} \;=\;\frac{1}{\mu}
\;,\;\;\;\;\;\;\;
\partial_s^\mu
\;=\;\frac{1}{2}\;\left(\partial_x^\mu\;-\;\partial_y^\mu \right)
\;\;\sim\;\;\mu
\nonumber \\
r^\mu
\;&\equiv&
\frac{1}{2}\;\left(x^\mu\;+\;y^\mu\right)
\;\;\sim
\;\overline{\lambda} \;=\;\frac{1}{g\mu}
\;,\;\;\;\;\;\;\;
\partial_r^\mu
\;=\; \;\;\;\;\partial_x^\mu\;+\;\partial_y^\mu
\;\;\;\;\sim\;\;\;g\,\mu
\label{sr}
\;.
\end{eqnarray}
The {\it kinetic scale} is therefore $g^2\mu^2$:
The effect of the soft field modes of $\overline{A}$
on the hard quanta involves the coupling $g \overline{A}$
to the hard propagator
and is of the order of the soft wavelength $\overline{\lambda} = 1/(g\mu)$,
so that one may characterize the soft field strength by
\begin{equation}
g \overline{A}_\mu(r) \;\;\sim \;\; g \mu
\;,\;\;\;\;\;\;\;\;\;\;\;\;
g \overline{F}_{\mu\nu}(r) \;\;\sim \;\;g^2\,\mu^2
\label{AF2}
\;,
\end{equation}
plus corrections of order $g^2\mu^2$ and $g^3\mu^3$, respectively,
which are assumed to be small.
The {\it quantum scale} on the other hand is $\mu^2$, because
\begin{equation}
\widehat{\Delta}_{\mu\nu}^{-1}
\;\;\sim k_\perp^2 \;
\,\lower3pt\hbox{$\buildrel > \over\sim$}\,
\;\mu^2
\;\;\gg\;\;g^2\mu^2 \;\; \sim \;\; g\,\overline{F}_{\mu\nu}
\;,
\end{equation}
and one expects that
the short-distance fluctuations corresponding to
emission and reabsorption of gluons with momenta $k_\perp \ge \mu$,
are little affected by the long-range, soft mean field, because the
color force $\sim g\overline {F}$ acting on a gluon with momentum
$k_\perp \sim \mu$ produces only a very small change in its momentum.
\smallskip
Concerning the Yang-Mills equation (\ref{YME2}),
one finds then immediately from the above scale relations that
both the derivative terms $\partial^2\overline{A}$ and the
self-coupling terms $\overline{\Xi}$ are of the same order
and need to be included consistently in order to
preserve the gauge symmetry when performing a perturbative
analysis.
Of course, if the field is weak, $\overline{F}_{\mu\nu}\ll g \mu^2$,
the nonlinear effects contained in the function $\overline{\Xi}$
of (\ref{YME2}) are subdominant, so that in leading order of $g$,
the color fields would then behave like abelian fields.
\subsection{The kinetic approximation}
The realization of the two space-time scales, short-distance quantum and
quasi-classical kinetic, allows one to reformulate the quantum field theoretical
problem as a relativistic many-body problem within kinetic theory.
The key element is to establish the connection between the preceding
quantum-theoretical description in terms of Green functions
and a probabilistic kinetic description in terms
of so-called Wigner functions \cite{wigner}.
Whereas the 2-point functions, such as the propagator or the polarization tensor,
depend on two separate space-time points $x$ and $y$,
their Wigner transform
utilizes a mixed space-time/momentum representation, which is
particularly convenient
for implementing the assumption of well separated quantum and
kinetic scales, i.e., that the long-wavelength
field $\overline{A}$ is slowly varying in space-time on the scale
of short-range quantum fluctuations.
Moreover, the trace of the Wigner transformed propagator is the quantum analogue
of the single particle phase-space distribution of gluons, and
therefore provides the basic quantity to make the connection with
kinetic theory of multi-particle dynamics.
\smallskip
In terms of the center-of-mass coordinate,
$r = \frac{1}{2}(x+y)$, and relative coordinate $s= x-y$,
of two space-time points $x$ and $y$, eq. (\ref{sr}),
one can express any 2-point function
${\cal G}(x,y)$, such as $\widehat{\Delta},\Pi$, in terms of these coordinates,
\begin{equation}
{\cal G}_{\mu\nu}^{ab}(x,y)\; =\; {\cal G}_{\mu\nu}^{ab}\left(r+\frac{s}{2}, r-\frac{s}{2}\right)
\;\;\equiv\;\; {\cal G}_{\mu\nu}^{ab}\left( r,s\right)
\;.
\end{equation}
The {\it Wigner transform} ${\cal G}(r,k)$
is then defined as the Fourier transform with respect to the relative
coordinate $s$, being the canonical conjugate to the momentum $k$.
In general, the necessary preservation of local gauge symmetry
leads to additional constraints, but for the specific choice
of gauge (\ref{gauge1}), the Wigner transform is simply
\begin{equation}
{\cal G}(r,s) \;=\;
\int \frac{d^4k}{(2\pi)^4} \, e^{-i\,k\,\cdot\, s}\;\,
{\cal G}\left( r,k\right)
\;\;,\;\;\;\;\;\;\;\;\;\;\;\;\;
{\cal G}\left( r,k\right) \;=\;
\int d^4 s \, e^{i\,k\,\cdot\, s}\;\, {\cal G}\left( r,s\right)
\;.
\label{W}
\end{equation}
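As a concrete illustration of (\ref{W}), the following one-dimensional numerical sketch (the grid and all names are hypothetical choices, not from the text) evaluates the Wigner transform as a discrete Fourier transform over the relative coordinate $s$, and checks that for a translation-invariant two-point function, ${\cal G}(r,s)=f(s)$, the transform carries no $r$ dependence:

```python
import numpy as np

def wigner(G_rs, ds):
    """G_rs[i_r, i_s]: two-point function sampled on a (r, s) grid.
    Returns its Fourier transform over the relative-coordinate axis s."""
    return np.fft.fft(G_rs, axis=1) * ds

n, ds = 128, 0.2
s = (np.arange(n) - n // 2) * ds
# translation-invariant example: a Gaussian in the relative coordinate,
# the same for every value of the centre coordinate r
G_rs = np.exp(-s**2)[None, :] * np.ones((n, 1))
W = wigner(G_rs, ds)

# r-independence of the Wigner transform: every row of W is identical
assert np.allclose(W, W[0])
```

In a non-equilibrium situation the rows would differ slowly with $r$, which is precisely what the gradient expansion below exploits.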
The Wigner representation (\ref{W}) will facilitate a systematic identification
of the dominant contributions of the soft field $\overline{A}$ to the hard
propagator $\widehat{\Delta}$:
First one expands both $\overline{A}$ and $\widehat{\Delta}_{[\overline{A}]}$
in terms of the long-range variation with the kinetic scale $r$ (gradient expansion),
then one makes an additional expansion in powers of the soft field $\overline{A}$
and of the induced perturbations
$\widehat{\Delta}_{[\overline{A}]} \sim g\widehat{\Delta}_{[\overline{0}]}$.
On this basis, one isolates and
keeps consistently terms up to order $g^2\mu^2\, \widehat{\Delta}_{[\overline{0}]}$.
\medskip
To proceed, recall that
the coordinate $r^\mu$ describes
the kinetic space-time dependence $O(\Delta r_{kin})$,
whereas $s^\mu$ measures the quantum space-time distance $O(\Delta r_{qua})$.
In translationally invariant situations, e.g.
in vacuum or thermal equilibrium, ${\cal G}(r,s)$ is independent of $r^\mu$ and
sharply peaked about $s^\mu =0$. Here the range of the variation is fixed
by $\widehat{\lambda} = 1/\mu$, eq. (\ref{rqua}), corresponding to
the confinement length $\approx 0.3$ $fm$ in the case of vacuum,
or to the thermal wavelength $\approx 1/T$ in equilibrium.
On the other hand, in the presence of a slowly varying
soft field $\overline{A}$ with a wavelength $\overline{\lambda} = 1/(g\mu)$,
eq. (\ref{rkin}),
the $s^\mu$ dependence is little affected, while the acquired $r^\mu$ dependence
will have a long-wavelength variation.
This suggests therefore to neglect the derivatives of ${\cal G}(r,k)$ with respect to
$r^\mu$ of order $g\mu$, relative to those with respect to
$s^\mu$ of order $\mu$.
Hence one can perform an expansion of the soft field and the hard propagator and
polarization tensor in terms of gradients, and keep only terms up to first order,
i.e.,
\begin{eqnarray}
\overline{A}_\mu(x)
&=&
\overline{A}_\mu\left(r+\frac{s}{2}\right) \;\simeq \;
\overline{A}_\mu(r)+
\frac{s}{2}\cdot \partial_r\overline{A}_\mu(r)
\nonumber \\
\overline{A}_\mu(y)
&=&
\overline{A}_\mu\left(r-\frac{s}{2}\right) \;\simeq \;
\overline{A}_\mu(r)-
\frac{s}{2}\cdot \partial_r\overline{A}_\mu(r)
\nonumber \\
{\Delta}_{(0)\;\mu\nu}\left(x,y\right)
&=&
{\Delta}_{(0)\;\mu\nu}\left(0,s\right)
\nonumber \\
\widehat{\Delta}_{\mu\nu}\left(x,y\right)
\;\;\;
&=&
\widehat{\Delta}_{\mu\nu}\left(r,s\right) \;\simeq \;
\widehat{\Delta}_{\mu\nu}\left(0,s\right) \;+ \;
s\,\cdot\, \partial_r\, \widehat{\Delta}_{\mu\nu}\left(r,s\right)
\nonumber \\
\overline{\Pi}_{\mu\nu}(x,x)
\;\;\;
&=&
\overline{\Pi}_{\mu\nu}\left(r\right)
+\frac{s}{2}\cdot \partial_r\overline{\Pi}_{\mu\nu}(r)
\nonumber \\
\widehat{\Pi}_{\mu\nu}\left(x,y\right)
\;\;\;
&=&
\widehat{\Pi}_{\mu\nu}\left(r,s\right) \;\simeq \;
\widehat{\Pi}_{\mu\nu}\left(0,s\right) \;+ \;
s\,\cdot\, \partial_r\, \widehat{\Pi}_{\mu\nu}\left(r,s\right)
\label{gradexp3}
\;,
\end{eqnarray}
and furthermore,
in order to isolate the leading effects of the soft mean field $\overline{A}$
on the hard quantum propagator $\widehat{\Delta}$, one separates
the mean field contribution from the quantum contribution by writing
\begin{equation}
\widehat{\Delta} (r,k)\;\;\equiv\;\;
\widehat{\Delta}_{[\overline{A}]}(r,k) \;\,=\;\,
\widehat{\Delta}_{[\overline{0}]}(k) \;\,+\;\,
\delta\widehat{\Delta}_{[\overline{A}]}(r,k)
\label{Dsep}
\end{equation}
with a translation-invariant vacuum quantum contribution and a
$r$-dependent mean field part, respectively,
\begin{equation}
\widehat{\Delta}_{[\overline{0}]}^{-1}
\;=\; \left. \widehat{\Delta}^{-1} \right|_{\overline{A}=0}
\;=\;
\Delta_{(0)}^{-1} \;-\; \left.\widehat{\Pi} \right|_{\overline{A}=0}
\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;
\delta\widehat{\Delta}_{[\overline{A}]}^{-1} \;=\;
\overline{\Delta}^{-1} \;-\; \Delta_{(0)}^{-1}
\;=\; -\; \overline{\Pi}
\label{DA}
\;,
\end{equation}
where ${\Delta}_{(0)}$ denotes the {\it free}
propagator, and
$\overline{\Delta}$ the mean-field propagator, that is, the
free propagator in the presence of the mean field, but in the absence
of quantum fluctuations.
Given the ansatz (\ref{Dsep}), with the
feedback of the induced soft field to the hard propagator
being contained in $\delta\widehat{\Delta}_{[\overline{A}]}$,
one can expand the latter in powers of the soft field coupling $g \overline{A}$, and
anticipate that it is {\it at most $g$ times}
the vacuum piece $\widehat{\Delta}_{[\overline{0}]}$,
that is,
\begin{equation}
\delta\widehat{\Delta}_{[\overline{A}]}(r,k) \;=\;
\sum_{n=1}^{\infty} \frac{1}{n!}\,\left( g\overline{A}(r)\cdot\partial_k\right)^n
\widehat{\Delta}_{[\overline{0}]}(k)
\;\;\simeq\;\;
g\overline{A}(r)\cdot\partial_k
\widehat{\Delta}_{[\overline{0}]} (k)
\label{DA2}
\;,
\end{equation}
and, to the same order of approximation,
$
\partial_r^\mu\delta\widehat{\Delta}_{[\overline{A}]\,\mu\nu}(r,k)
\;\simeq\;
g(\partial_r^\mu\overline{A}^\lambda)\,\partial^k_\lambda
\widehat{\Delta}_{[\overline{0}]\;\mu\nu}
$.
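The series in (\ref{DA2}) is a Taylor resummation of a momentum shift, $\widehat{\Delta}_{[\overline{0}]}(k+g\overline{A})$, so keeping only the first-order term is accurate up to corrections of order $(g\overline{A})^2$. A scalar toy model (all numbers and the model propagator $1/(k^2+m^2)$ are illustrative assumptions, not the actual gluon propagator) makes this explicit:

```python
import numpy as np

# scalar stand-in for the hard propagator and its momentum derivative
g, A, m = 0.1, 0.5, 1.0
D0 = lambda k: 1.0 / (k**2 + m**2)
dD0 = lambda k: -2.0 * k / (k**2 + m**2) ** 2

k = 2.0
shifted = D0(k + g * A)                 # resummed series: shifted momentum
first_order = D0(k) + g * A * dD0(k)    # truncation kept in the text

# the first-order truncation error is O((g*A)^2)
assert abs(shifted - first_order) < (g * A) ** 2
```

This is the scalar analogue of trading the canonical momentum $k_\mu$ for the kinetic momentum $K_\mu = k_\mu - g\overline{A}_\mu$ below.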
Inserting now into eqs. (\ref{YME2}) and (\ref{DSE2})
the decomposition (\ref{Dsep}) with the approximation (\ref{DA2}),
and keeping consistently all terms up to order
$g^2\mu^2 \widehat{\Delta}_{[\overline{0}]}$, one arrives
(after quite some journey \cite{ms42}) at
a set of equations that can be compactly expressed in terms
of the {\it kinetic} momentum $K_\mu$ rather than the {\it canonical}
momentum $k_\mu$ (as always in the context of interactions with a
gauge field). For the
class of gauges (\ref{gauge1a}), this amounts to the
replacements
\begin{equation}
k_\mu \;\longrightarrow \;K_\mu \;=\; k_\mu \;-\;g \overline{A}_\mu(r)
\;\;,\;\;\;\;\;\;\;\;\;\;
\partial^r_\mu \;\longrightarrow \;
\overline{D}_\mu^r \;=\; \partial^r_\mu \;-\;
g \partial^r_\mu \overline{A}^\nu(r)\,\partial_\nu^k
\;.
\label{kincan}
\end{equation}
Within the present approximation scheme,
one then has $K^2\widehat{\Delta} \gg \overline{D}_r^2 \widehat{\Delta}$.
The result of this procedure is:
\begin{eqnarray}
&&
\;\;
\left[\frac{}{}
\overline{D}_r^{\lambda\;,ab} , \; \overline{F}_{\lambda \mu}^b
\right](r)
\;\;=\;\;
-\;\widehat{j}_\mu(r)
\nonumber\\
& & \;\;\;
\;\;=\;\;
- g
\;\int \frac{d^4K}{(2\pi)^4}
\;
\mbox{Tr}\left\{
\;
- K_\mu \, \widehat{\Delta}_{[\overline{A}] \;\nu}^{\;\;\;\;\nu}(r,K)
\;+\;
\widehat{\Delta}_{[\overline{A}] \;\mu}^{\;\;\;\;\nu}(r,K) \,K_\nu
\right\}
\label{YME5}
\;\;\;
\\ & & \nonumber
\\
& &
\;\;
\left\{
\; K^2\,
,
\; \widehat{\Delta}^{\mu\nu}_{[\overline{0}]}
\right\}
(K)
\;\;=\;\;
\,d^{\mu\nu}(K)\,\,
\;+\;
\frac{1}{2} \;
\left\{\widehat{\Pi}^\mu_{[\overline{0}]\;\sigma}(K)\,,\,
\widehat{\Delta}^{\sigma\nu}_{[\overline{0}]}(K) \right\}
\label{R2}
\\ & & \nonumber
\\
& &
\;\;
\left[
K\cdot\overline{D}_r , \; \widehat{\Delta}^{\mu\nu}_{[\overline{A}]}
\;\right]
(r,K)
\;\;=\;\;
-\;\frac{i}{2} \;\left[\overline{\Pi}^\mu_{\sigma}(r,K)\,,\,
\widehat{\Delta}^{\sigma\nu}_{[\overline{0}]}(K) \right]
\nonumber \\
& &
\;\;\;\;\;\;\;
\;\;\;\;\;\;\;\;\;\;\;\;
\;\;\;\;\;\;\;\;\;\;\;\;
\;\;\;\;\;\;\;\;\;\;\;\;
\;-\;
\frac{i}{2} \;\left[\widehat{\Pi}^\mu_{[\overline{0}]\;\sigma}(K)\,,\,
\widehat{\Delta}^{\sigma\nu}_{[\overline{0}]}(K) \right]
\label{T2}
\;.
\end{eqnarray}
\bigskip
One sees that the original
Dyson-Schwinger equation reduces in the kinetic approximation to
a coupled set of algebraic equations. Recall that
(\ref{R2}) and (\ref{T2})
are still $2\times 2$ matrix equations which mix the four different
components of $\widehat{\Delta} = (\widehat{\Delta}^F,\widehat{\Delta}^>,
\widehat{\Delta}^<,\widehat{\Delta}^{\overline{F}})$ and of
$\widehat{\Pi} = (\widehat{\Pi}^F,\widehat{\Pi}^>,
\widehat{\Pi}^<,\widehat{\Pi}^{\overline{F}})$.
For the following it is more convenient to
employ instead
an equivalent set of independent functions, namely,
the {\it retarded} and {\it advanced functions} $\widehat{\Delta}^{ret}$,
$\widehat{\Delta}^{adv}$, plus
the {\it correlation function} $\widehat{\Delta}^{cor}$,
and analogously for $\widehat{\Pi}$.
This latter set is more directly connected with
physical, observable quantities, and is commonly referred to as
{\it physical representation} \cite{chou}:
\begin{equation}
\widehat{\Delta}^{ret} \;=\; \widehat{\Delta}^F \;-\; \widehat{\Delta}^>
\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;
\widehat{\Delta}^{adv} \;=\; \widehat{\Delta}^F \;-\; \widehat{\Delta}^<
\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;
\widehat{\Delta}^{cor} \;=\; \widehat{\Delta}^> \;+\; \widehat{\Delta}^<
\label{retadv1}
\end{equation}
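For orientation, note that the relations (\ref{retadv1}) can be inverted directly, so that no information is lost in passing to the physical representation:
\begin{equation}
\widehat{\Delta}^{ret} \;-\; \widehat{\Delta}^{adv} \;=\;
\widehat{\Delta}^{<} \;-\; \widehat{\Delta}^{>}
\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;
\widehat{\Delta}^{\gtrless} \;=\; \frac{1}{2}\,
\left(\widehat{\Delta}^{cor} \;\mp\; \widehat{\Delta}^{ret}
\;\pm\; \widehat{\Delta}^{adv}\right)
\;.
\end{equation}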
Similarly, for the polarization tensor the retarded, advanced
and correlation functions are defined as
(note the subtle difference to (\ref{retadv1})):
\begin{equation}
\widehat{\Pi}^{ret} \;=\; \widehat{\Pi}^F \;+\; \widehat{\Pi}^<
\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;
\widehat{\Pi}^{adv} \;=\; \widehat{\Pi}^F \;+\; \widehat{\Pi}^>
\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;
\widehat{\Pi}^{cor} \;=\; -\widehat{\Pi}^> \;-\; \widehat{\Pi}^<
\label{retadv2}
\end{equation}
Loosely speaking, the retarded and advanced functions characterize
the intrinsic quantum nature of
a `dressed' gluon, describing its substructural state of
emitted and reabsorbed gluons, whereas the correlation function describes
the kinetic correlations among different such `dressed' gluons.
The great advantage \cite {chou} of this physical representation is that
in general the dependence
on the phase-space occupation of gluon states (the local density)
is essentially carried by the correlation functions $\widehat{\Delta}^{>}$,
$\widehat{\Delta}^<$,
whereas the dependence of the retarded and advanced functions,
$\widehat{\Delta}^{ret}$, $\widehat{\Delta}^{adv}$,
on the local density is weak.
More precisely,
the retarded and advanced propagators
and the imaginary parts of the self-energies embody the
renormalization effects and dissipative quantum dynamics that is associated
with short-distance emission and absorption of quantum fluctuations,
whereas the correlation function contains both the effect of interactions
with the soft mean field and of statistical binary scatterings among the hard
gluons.
In going over to the physical representation, one arrives at the
set of master equations:
The Yang-Mills equation (\ref{YME5}) reads
\begin{eqnarray}
&&
\;\;
\left[\frac{}{}
\overline{D}_r^{\lambda} , \; \overline{F}_{\lambda \mu}
\right]
\;\;=\;\;
- g
\;\int \frac{d^4K}{(2\pi)^2}
\;
\mbox{Tr}\left\{
\;
\left(
- K_\mu \,\widehat{\Delta}_{[\overline{A}]\;\nu}^{cor\;\nu}
\;+\;
\widehat{\Delta}_{[\overline{A}]\mu}^{cor\;\nu} \,K_\nu
\right)
\right\}
\label{YME6}
\;\;\;\;
\end{eqnarray}
and
the renormalization (\ref{R2}) and transport equations (\ref{T2}),
become \cite{ms39}
\begin{eqnarray}
& &
\left\{
\;K^2\,,
\; \widehat{\Delta}^{ret}_{[\overline{0}]}-\widehat{\Delta}^{adv}_{[\overline{0}]}
\right\}_{\mu\nu}
\;=\;
-\frac{1}{2}
\left\{{\cal M}^2\, , \, \mbox{Im} \widehat{\Delta}_{[\overline{0}]}
\right\}_{\mu\nu}
\;-\;
\frac{1}{2}
\left\{ \Gamma\, , \, \mbox{Re}\widehat{\Delta}_{[\overline{0}]}
\right\}_{\mu\nu}
\label{X1}
\;\;\;\;\;\;
\\
& \nonumber \\
& &
\left[
K\cdot \overline{D}_r \, ,\, \widehat{\Delta}^{cor}_{[\overline{A}]}
\right]_{\mu\nu}
\;=\;
+\frac{i}{2}
\left[ {\Pi}^{cor}\, , \, \mbox{Re} \widehat{\Delta}_{[\overline{0}]}
\right]_{\mu\nu}
\;-\;
\frac{1}{4}
\left\{ {\Pi}^{cor}\, , \, \mbox{Im}\widehat{\Delta}_{[\overline{0}]}
\right\}_{\mu\nu}
\nonumber \\
& &
\;\;\;\;\;\;\;\;\;\;\;\;\;\;
\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;
\;+\;
\frac{i}{2}
\left[ {\cal M}^2\, , \, \widehat{\Delta}^{cor}_{[\overline{0}]}
\right]_{\mu\nu}
\;-\;
\frac{1}{4}
\left\{ \Gamma\, , \, \widehat{\Delta}^{cor}_{[\overline{0}]}
\right\}_{\mu\nu}
\;,
\label{X2}
\end{eqnarray}
where $\,\Pi \,=\,\overline{\Pi}\,+\,\widehat{\Pi}$,
and the real and imaginary components of the
polarization tensor are denoted by
\begin{equation}
{\cal M}^2_{\mu\nu} \;\equiv\;
\mbox{Re} {\Pi}_{\mu\nu}\;=\; \frac{1}{2} \,\left( {\Pi}^{ret}+{\Pi}^{adv}\right)_{\mu\nu}
\;\;\;\;\;\;
\Gamma_{\mu\nu} \;\equiv\;
\mbox{Im} {\Pi}_{\mu\nu}\;=\; i \, \left( {\Pi}^{ret}-{\Pi}^{adv}\right)_{\mu\nu}
\end{equation}
Note also that the real and imaginary components of the
hard propagator are given by the sum and difference of the retarded
and advanced contributions, respectively,
\begin{equation}
\mbox{Re} \widehat{\Delta}_{\mu\nu}\;=\;
\frac{1}{2}\, \left( \widehat{\Delta}^{ret}+\widehat{\Delta}^{adv}\right)_{\mu\nu}
\;\;\;\;\;\;\;\;\;\;\;\;
\mbox{Im} \widehat{\Delta}_{\mu\nu}\;=\; i \,
\left( \widehat{\Delta}^{ret}-\widehat{\Delta}^{adv}\right)_{\mu\nu}
\;.
\end{equation}
The physical significance of (\ref{X1}) and (\ref{X2}) is
the following:
Eq. (\ref{X1}) determines the state of a dressed parton with
respect to its virtual fluctuations and real emission (absorption) processes,
corresponding to the real and imaginary parts of the retarded and advanced
self-energies.
Eq. (\ref{X2}), on the other hand, characterizes the correlations
among different dressed parton states,
and the self-energies appear here in two distinct ways.
The first two terms on the right hand side account for scatterings between quasi-particle
states, i.e. dressed partons, whereas the last two terms incorporate the renormalization effects
which result from the fact that the dressed partons between collisions do not behave as
free particles, but change their dynamical structure due to virtual
fluctuations, as well as real emission and absorption of quanta.
For this reason ${\Pi}^{ret,adv}$ is called the {\it radiative} self-energy, and ${\Pi}^{cor}$
is termed the {\it collisional} self-energy.
It is well known \cite{chou}, that the imaginary parts of
the retarded and advanced Green functions and self-energies are
just the spectral density $\rho = \mbox{Im}\widehat{\Delta}$,
giving the probability for finding an intermediate
multi-particle state in the dressed parton, and the decay width $\Gamma$,
describing the dissipation of the dressed parton, respectively.
The formal solution of (\ref{X1}) and (\ref{X2}) for
the spectral density ${\rho}$ is
\begin{equation}
{\rho}(r,k)\;=\;
\frac{\Gamma}{\left(k^2 \,-\,{\cal M}^2\right)^2\,+\,(\Gamma/2)^2}
\;\,\equiv\;\,
\,{\rho}_{{\cal M}^2}
\;+\; {\rho}_{\Gamma}
\;,
\label{X4}
\end{equation}
describing the particle density in terms of the finite width $\Gamma$
and the dynamical `mass term' ${\cal M}^2$ (both of which vanish in the
`free-field' case, $\Gamma = {\cal M}^2 = 0$, corresponding to
an on-shell, classically stable particle).
On the right hand side of (\ref{X4}), the second form
exhibits the physical meaning more suggestively in
terms of the `wavefunction'-renormalization
(${\rho}_{{\cal M}^2}={\rho}_{\Gamma =0}$) due to
virtual fluctuations, and
the dissipative parts (${\rho}_{\Gamma}=\rho_{{\cal M}^2=0}$)
due to real emission (absorption) processes.
\section{Outlook}
What remains to be done is to solve the set of equations (\ref{YME6})-(\ref{X2})
which is the hardest part. For the case of
$\overline{A}_\mu = 0 = \overline{F}_{\mu\nu}$, this was discussed in
Ref. \cite{ms39}. For the present general case, the coupling between
hard gluons ($\widehat{\Delta}$) and the soft field ($\overline{A}$), complicates
things considerably.
A possible iterative scheme of solution, which is currently under
study \cite{ms42}, may be as follows:
\begin{description}
\item[a)]
Specify initial condition in terms of a phase-space density
of hard gluons at time $t=t_0$.
This initial gluon distribution determines
$\widehat{\Delta}_{[\overline{0}]}(t=t_0,\vec{r}, k)$.
\item[b)]
Solve the renormalization equation (\ref{R2}) with $\overline{A}(t_0,\vec{r}) = 0$,
i.e. just as in the case of vacuum \cite{ms39}, except that now
$K = k-g\overline{A}$ contains the soft field.
Substitute the resulting form of
$\widehat{\Delta}^{ret}_{[\overline{0}]}$ and $\widehat{\Delta}^{adv}_{[\overline{0}]}$ into the transport equation (\ref{T2}) to get the solution
for $\widehat{\Delta}^{cor}_{[\overline{0}]}$.
\item[c)]
Insert
$\widehat{\Delta}^{cor}_{[\overline{0}]}$ into the right hand side of the Yang-Mills
equation (\ref{YME6}), and solve for $\overline{A}$.
\item[d)]
Repeat from a) but now include the finite contribution from the
coupling between
$\widehat{\Delta}_{[\overline{0}]}$ and $\overline{A}$.
\end{description}
\section{References}
\section{Introduction}
\label{sec:intro}
The purpose of this work is to develop a machine learning (ML) based model that can predict whether a coronal mass ejection (CME) will be geoeffective, using only numerical solar parameters as input.
Coronal mass ejections are solar eruptive events whose magnetized plasma can, directly or indirectly, under certain circumstances, reach Earth and cause geomagnetic storms (GSs), i.e., be geoeffective. These storms are perturbations of the Earth's magnetic field that can damage electrical systems and power grids, cause power outages, navigation errors, and radio signal perturbations, and expose astronauts to dangerous radiation during space missions. Given these potential impacts, predicting the occurrence of such storms is paramount for safeguarding human technology
\citep{swesolar2006, sweearth2007, NAP13060, vourlidas2019, swesolar2021}. Moreover,
CMEs are the most geoeffective solar phenomena \citep{schwenn2005,gopalswamy2007,vourlidas2019}, being associated with more intense GSs than other, more frequent GS sources, such as high-speed streams (HSSs) and corotating interaction regions (CIRs) \citep{kilpuaetall2017}.
The intensity of the storms can be measured by various geomagnetic indices such as Ap, Kp, AE, PC or Dst \citep[see][and references therein]{lockwood2013LRSP}. Herein, we have chosen to use the values of the Dst index \citep{sugiura1964} to establish whether the magnetic field perturbations do, in fact, manifest as storms. This is an index that is calculated using four geomagnetic stations situated at low latitudes. Depending on the value of this index, it can be established whether these perturbations are associated with geomagnetic storms or not. In terms of storm intensity, one of the most popular classifications that takes into consideration the minimum value of the Dst index is that of \cite{gonzalezetal1994}. Thus, a minor storm has -30 nT $\geqslant$ Dst$_{min}$ $>$ -50 nT, a moderate storm has -50 nT $\geqslant$ Dst$_{min}$ $>$ -100 nT, while intense storms are considered to have Dst$_{min}$ $\leqslant$ -100 nT. Consequently, we only considered GSs defined by a Dst value $\leqslant$ -30 nT that were associated with ICMEs driven by CMEs.
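As a minimal illustration (function names are ours, not from any catalog), the storm thresholds above map onto a few lines of code:

```python
# Illustrative sketch of the Gonzalez et al. (1994) Dst thresholds used here.
# Boundary conventions follow the text:
#   minor:    -30 nT >= Dst_min > -50 nT
#   moderate: -50 nT >= Dst_min > -100 nT
#   intense:  Dst_min <= -100 nT
def storm_class(dst_min_nT: float) -> str:
    if dst_min_nT <= -100:
        return "intense"
    if dst_min_nT <= -50:
        return "moderate"
    if dst_min_nT <= -30:
        return "minor"
    return "no storm"

# Geoeffectiveness criterion adopted in this work: Dst <= -30 nT.
def is_geoeffective(dst_min_nT: float) -> bool:
    return dst_min_nT <= -30.0
```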
Literature available prediction models can be divided by the methods used and final purpose.
Methods based on physical processes are used to model the propagation of the CME through the interplanetary space \citep[][]{roussevlugaz07} or the interaction of two different CMEs travelling towards Earth \citep[][]{lugazetal2005, mancvo08}.
Methods based on physical processes with the purpose of predicting the arrival time of the CME at Earth are preferred for operational use, e.g., the WSA-ENLIL Cone model
being currently used by the US National Ocean and Atmospheric Administration (NOAA) for providing warnings of potential geomagnetic storms caused by Earth-directed CMEs and solar wind structures \citep[][]{Parsons2011}.
ML methods, such as neural networks, are an emerging research area for this subject \citep[][]{vourlidas2019}. \cite{10.1093/mnras/stv2782} used a neural network and a dataset of 153 CMEs for predicting their transit times.
\cite{wangetal2019convolution} proposed an approach based on convolutional neural networks for predicting CMEs' arrival times. They analyzed transit times from the Sun to Earth of 223 geoeffective CME events observed between 1996 to 2018 and obtained a 12.4 hours mean absolute error (MAE). \cite{DBLP:journals/remotesensing/FuZYFLM21} used a deep learning framework with image data as inputs, for the same purpose, obtaining a MAE = 5.8 hours, as well as a 27\% F1 score and 75.1\% accuracy for the geoeffectiveness prediction.
There are non-linear logistic models that can predict the association of CMEs with intense or superintense GSs \citep[][]{srivastava2005} - where events that generated a superintense GS are considered positive, while those that generated an intense one are considered negative. \cite{srivastava2005} predicted 78\% geomagnetic storms from the training set, and 85\% from the validation one, using a dataset containing 64 CMEs observed between 1996 and 2002.
\cite{besliuionescuetal2019, bdimm2021} applied the same technique, using only solar parameters, but defined negative events as CMEs that were not associated with any GS (i.e., Dst $>$ -30 nT).
Another type of model employed for predicting geoeffectiveness is the support vector machine (SVM). \cite{2012JKAS...45...31C} described its utility for predicting the geoeffectiveness of halo CMEs (i.e., those with angular width $>$ 120°). The dataset used for the study contained 733 events, observed between January 1996 and April 2010. The authors used the grid search method to experiment with various combinations of kernel functions and hyperparameter values, eventually obtaining 66\% accuracy, 76\% true positive rate (TPR), 49\% true negative rate (TNR), and 72\% false alarm rate (FAR).
In this study, we used data covering 19 years (1996-2014) to train and test our models, taking into consideration all storms, regardless of their intensity (i.e., including minor ones). Despite evidence that interplanetary parameters influence the geoeffectiveness of CMEs \citep[e.g.][]{akasofu81,srivastava2004,yermolaevetal2005, besliuionescuetal2019}, we decided to not utilize such data, as it would limit the warning time from the order of days to that of hours, at best. Such models can then be used as early warning modules for more complex setups, involving forward modelling of solar eruptions and interplanetary propagation and interactions.
We employ a number of different supervised ML techniques for binary classification, as described in section \ref{sec:methods}, using only solar parameters as inputs. Moreover, in response to the challenges posed by the dataset (as discussed in section \ref{sec:data}), we artificially generate new virtual geoeffective CME data samples, which is an innovative aspect for this specific problem. In addition to this, we introduce the notion of uncertainty regarding the lack of geoeffective potential for the CMEs prior to a geomagnetic storm with no certain cause associated, by using sample weights.
\section{Data and interpretation}
\label{sec:data}
\subsection{The dataset}
The dataset utilized for this research contains aggregated data from 3 different sources: the Solar and Heliospheric Observatory \citep[SOHO; ][]{domingo1995} Large Angle and Spectrometric Coronagraph \citep[LASCO;][]{lasco1995} catalog, provided by the 'Coordinated Data Analysis Workshops (CDAW) Data Center'\footnote{\url{https://cdaw.gsfc.nasa.gov/CME\_list/}, \citep[][]{2009EM&P..104..295G}}, for the CMEs' attributes, the Solar H-alpha Flare Index (FI) \citep[][]{1987Ap&SS.135..201A} values made available by courtesy of Kandilli Solar Observatory and NOAA's 'National Geophysical Data Center (NGDC)' \footnote{\url{https://www.ngdc.noaa.gov/stp/space-weather/solar-data/solar-features/solar-flares/index/flare-index/}}, as well as the catalog compiled by \cite{richardsoncane2010} for the correlation between the LASCO events and the Dst values of the associated geomagnetic storms, as can be seen in Table \ref{tab:dataset}.
\begin{deluxetable}{lllllllllll}[!t]
\label{tab:dataset}
\tablehead{ \colhead{CME} & \colhead{CPA} & \colhead{AW} & \colhead{LS} & \colhead{SOSFHI} & \colhead{SOSFHF} & \colhead{SOS20RS} & \colhead{ACC} & \colhead{MPA} & \colhead{DST} & \colhead{FI}}
\tablecaption{A sample of the aggregated dataset used for this work.}
\startdata
1997-01-20 09:31:00 & 281 & 72 & 175.0 & 237.0 & 115.0 & 0.0 & -3.3 & 285 & 0 & 0.470\\
2014-04-01 14:00:00 & 242 & 43 & 409.0 & 468.0 & 352.0 & 125.0 & -8.3 & 237 & 0 & 1.210\\
2006-07-09 20:06:00 & 54 & 19 & 142.0 & 85.0 & 199.0 & 649.0 & 17.4 & 46 & 0 & 0.000\\
2010-12-21 02:48:00 & 48 & 114 & 369.0 & 300.0 & 439.0 & 449.0 & 4.6 & 62 & 0 & 0.000\\
2012-05-21 09:12:00 & 208 & 41 & 645.0 & 407.0 & 882.0 & 830.0 & 22.0 & 222 & 0 & 0.560\\
\enddata
\tablecomments{Table 1 is published in its entirety in the machine-readable format. A portion is shown here for guidance regarding its form and content.}
\end{deluxetable}
We primarily use a cleaned selection of CMEs in the LASCO CDAW catalog and then use the \citet{richardsoncane2010} catalog to identify geoeffective events. If a CME is not directly connected to a \citet{richardsoncane2010} Dst event, we assume it to be non-geoeffective. The implications of this are presented below.
Since its launch in 1995, LASCO has been producing synoptic observations with a cadence of 12 minutes per coronagraph, to the present day. Two coronagraphs, C2 and C3, observe the corona between heights of 2-6 and 3.7-32 solar radii, respectively. The catalog contains data for all the CMEs manually identified since 1996. There are 9 numerical parameters available in the catalog for each event, whose details (including computational ones) can be found in the LASCO documentation. Out of these attributes, the mass and kinetic energy have not been included in our selected dataset. This is due to the fact that the number of events for which these parameters' values were not computed amounts to approximately one third of the total number of CMEs in the catalog. As such, the information provided by these two parameters proved to be insufficient for this research. All other attributes available in the catalog (i.e., Central Position Angle - CPA, Angular Width - AW, Measurement Position Angle - MPA, Linear Speed - LS, 2$^{nd}$ order Speed at final height, 2$^{nd}$ order Speed at 20 Rs, Acceleration - Acc) were taken into consideration. In order to obtain a "clean" dataset, all rows from the catalog (i.e., events) containing missing values in any of the columns (e.g. 2000/03/05, 12:54:05) have been removed. This amounts to 461 events, out of which 108 miss all speed-related values, together with Acc, and 353 have non-null LSs but are missing the other 3 parameters' values. Analyzing the LASCO catalog, it can be noted that CMEs missing linear speeds were considered very poor events, unable to be properly measured. After accounting for all of the above, we report that one geoeffective event (2005/05/13 17:12, Dst = -247 nT) was lost as a result of the dataset cleaning.
Since predicting the geoeffectiveness is fundamentally a binary classification task, an output label is necessary. Therefore, we defined the labels as follows:
\begin{itemize}
\setlength\itemsep{-0.5em}
\item 1 (i.e., positive event; geoeffective CME) for CMEs associated with Dst $\leqslant$ -30 nT;
\item 0 (i.e., negative event; CME that was not geoeffective) for CMEs associated with Dst $>$ -30 nT;
\end{itemize}
The events labeled 1 are identified using the \cite{richardsoncane2010} catalog, based on their association with a Dst value $\leqslant$ -30 nT (172 out of 24403 events) where an ICME had an associated CDAW event. All other CMEs, either not present in the catalog or correlated with a Dst value $>$ -30 nT, were labeled 0 (24231 out of 24403 events). The distribution of the Dst values in the dataset can be observed in the left and right panels of Figure \ref{fig:histogram}.
\begin{figure}[!h]
\centering
\begin{minipage}[b]{0.45\textwidth}
\includegraphics[height=0.23\textheight]{images/dst_histplot_ylog.png}
\end{minipage}
\begin{minipage}[b]{0.45\textwidth}
\includegraphics[height=0.23\textheight]{images/geoeffective_dst_histplot.png}
\end{minipage}
\caption{Left: Histogram (logscale) of all Dst values in the dataset. Right: Histogram of the Dst values $\leqslant$ -30 nT}
\label{fig:histogram}
\end{figure}
In addition to the above-mentioned parameters, we use the FI as input for our models, similar to \cite{uwamahoro2012}'s approach concerning the use of neural networks for predicting the geoeffectiveness of halo CMEs. The index's purpose is to quantify the daily flare activity. We have utilized the FI based on findings of \citet[][]{2004} where the association of some CMEs (i.e., fast, full-halo events originating from low and middle latitudes, close to central meridian) with flares is a significant driver for intense geomagnetic storms. Importantly, the FI does not address individual events, but daily averages of flaring activity. Thus, this parameter does not represent a one to one flare to CME association.
On this topic, we also mention the study of \citet{yashiroetal2006} that focused on the individual associations between CMEs and flares. The authors provide a maintained list that is available online\footnote{\url{https://cdaw.gsfc.nasa.gov/pub/yashiro/flare_cme/fclist_pub.txt}}. We have cross-correlated this list with our subset of geoeffective CME's and found that $<$40\% of combined CDAW and \citet{richardsoncane2010} events could be associated with a flare. Due to the extreme class imbalance issues discussed below in sec. \ref{sec:challenges}, we considered that removing more than half of our already small geoeffective subset would result in weak model performances. A manual association attempt on the remaining events could not be confidently established by us. Thus, we did not utilize any individual CME to flare association throughout this work.
Due to the FI only being made available for the 1976-2014 timeframe, we needed to also restrict the data from the LASCO catalog up until 2014 only. Therefore, the collected data covers the period between 1996 and 2014. After the removal of samples containing empty cells, as mentioned above, and restricting the period from 1996 to 2014, our final dataset contains 24403 CMEs, out of which 172 are geoeffective.
Using the de facto approach, we manually selected the independent features to be used as inputs. Therefore, the features used for all models were CPA, AW, LS, Acc, FI, unless specified otherwise. However, we also experimented with an automatic feature selection by reducing the total number of features using Principal Component Analysis \citep[PCA,][]{jolliffe2005principal}. PCA works by redefining data attributes by selecting the first k components in descending order of their variances (average of the squared differences from the mean). The feature redefinition refers to creating the principal components, which are linear combinations of the initial variables. From a visual perspective, the principal components can be interpreted as the directions associated with the biggest variances (i.e., most dispersed values) in a multidimensional scatterplot. These directions are, in fact, eigenvectors, while the eigenvalues signify how important the directions are, or, in other words, how much of the most significant information regarding any data point they represent. Choosing the highest variance values when lowering the number of attributes is, therefore, important in order to keep as much information from the original data as possible.
The PCA implementation utilized (see section \ref{sec:methods}) has the option of automatically choosing the number of principal components by using Minka's Maximum Likelihood Estimates method \citep[MLE,][]{Minka2000AutomaticCO}. Using this, we obtained a dataset with only one less feature. Therefore, we also chose to use k = 5 (the number of independent variables in the dataset) for the number of components to be returned. It is important to note that the new features obtained as a result of using this method are harder to interpret.
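A sketch of this selection step with scikit-learn, on synthetic stand-in data (the actual inputs are the five CME features):

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))          # stand-in for the 5 CME input features

# Automatic choice of the number of components via Minka's MLE ...
pca_mle = PCA(n_components="mle").fit(X)

# ... or keep all k = 5 principal components, as also tested in the text.
pca_full = PCA(n_components=5).fit(X)
X_pc = pca_full.transform(X)           # data re-expressed in the PC basis
```

The components are returned in descending order of explained variance, matching the description above.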
In order to improve both the models' performances and the training times, a feature scaling preprocessing step was used. Both standard scaling (eq. \ref{eq:std_scaling}) and l2 normalization (i.e., scaling samples to have unit norm) were tested. However, standard scaling led to the best results in all cases.
\begin{equation}
\label{eq:std_scaling}
z = \frac{x - \mu}{\sigma},
\textnormal{where $\mu$ = mean of training samples and $\sigma$ = standard deviation}\
\end{equation}
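Both preprocessing variants amount to a single call each in scikit-learn; a minimal sketch on toy numbers:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler, normalize

X_train = np.array([[100.0, 5.0],
                    [300.0, 15.0],
                    [500.0, 25.0]])

# Standard scaling: z = (x - mu) / sigma per feature, as in the equation above.
scaler = StandardScaler().fit(X_train)
Z = scaler.transform(X_train)

# l2 normalization: each sample (row) is rescaled to unit Euclidean norm.
Z_l2 = normalize(X_train, norm="l2")
```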
\subsection{Challenges in data interpretation}\label{sec:challenges}
One of the most salient observations regarding the data, which is also the main obstacle for creating high-performance, reliable models (i.e. binary classifiers in this case) is the extreme class imbalance, as only 0.71\% of the events in the dataset are geoeffective. ML models are sensitive to skewed distributions \citep[][]{2016, article, inbook}, resulting in a tendency to more frequently predict the label of the majority class. In consequence, models will commonly appear to yield high proportions of correctly labeled samples for imbalanced classification problems. In other words, they will have high accuracy values. However, this would be a misleadingly optimistic performance evaluation, since this could translate into simply outputting the label with the most occurrences, instead of actually distinguishing between classes. This discussion is extended in section \ref{sec:methods}.
In this particular case, a dummy, untrained predictor outputting only the label of the majority class would have 99.29\% accuracy, corresponding to the proportion of 99.29\% of events in the dataset that were not geoeffective.
However, the model would identify 0 positive events, failing to achieve its primary goal of forecasting the geoeffective CMEs, despite the high accuracy value. We include the accuracy metrics for methodological completeness reasons, but we stress that this metric in particular is of less importance when interpreting model performance.
Therefore, since the minority class is our main focus, due to the potential negative effects of the events belonging to it, the imbalance issue needs to be addressed.
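The baseline figure can be reproduced directly from the class counts quoted above:

```python
# Majority-class ("always predict 0") baseline on the dataset counts.
n_total, n_geo = 24403, 172
n_neg = n_total - n_geo

accuracy = n_neg / n_total        # fraction correct when always predicting 0
recall_geo = 0 / n_geo            # but no geoeffective event is ever found

print(f"baseline accuracy = {accuracy:.4f}")  # ~0.9930, yet recall is 0
```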
Our attempts at overcoming this challenge include using class weights \citep[][]{krawczyk2016learning} and creating artificial data samples for the minority class using the Synthetic Minority Oversampling TEchnique \citep[SMOTE,][]{2002, 2018art}.
In order to better understand the necessity for class weights, a few elementary notes on the principles behind the models described in this paper are required. A model learns to map inputs to outputs (i.e., to predict/output values close to the real ones), through training. Generally, training refers to making adjustments to several parameters, according to the error between the real output and the predicted one, in order to minimize it.
Given that, as previously stated, the geoeffective CMEs are of more concern, information about their importance needs to be embedded in the model, which can be achieved with the help of weights. The weights are used together with the error between the real output and the predicted one for determining the next adjustments to the parameters. For this specific case, the desired outcome is a more significant change of the model's parameters as a consequence of the model predicting 0 instead of 1. In other words, a model should be penalized more severely when misclassifying a more important (here, geoeffective) event.
For this research, we considered the importance of an event to be the weight of the class it belongs to ($weight_{c}$ for class c), which is inversely proportional to the frequency of the class' appearances in the dataset (eq. \ref{eq:weight}).
\begin{equation}
\label{eq:weight}
weight_c = \frac{\textnormal{total number of samples}}{\textnormal{number of classes * number of samples belonging to class c}}
\end{equation}
Thus, the weights associated with the samples are 69.72 for the events belonging to the minority class and 0.5 for those in the majority class.
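The inverse-frequency weight of eq. (\ref{eq:weight}) is a one-liner; a sketch on illustrative toy counts (the weights quoted above follow from the paper's own sample counts):

```python
# Inverse-frequency class weight, as in the equation above.
def class_weight(n_total: int, n_classes: int, n_in_class: int) -> float:
    return n_total / (n_classes * n_in_class)

# Toy counts: 1000 samples, 10 geoeffective, 990 not geoeffective.
w_pos = class_weight(1000, 2, 10)    # rare class -> large weight
w_neg = class_weight(1000, 2, 990)   # common class -> weight near 0.5
```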
The other technique for overcoming the class imbalance is oversampling. This refers to the creation of additional samples, most commonly belonging to an underrepresented class. One way of achieving this is \emph{random oversampling}, meaning creating multiple copies of some of the samples from the minority class, selected at random. However, this is similar to using higher weights for those samples, by feeding them more times to the model. Therefore, we chose to use the Synthetic Minority Oversampling Technique (SMOTE) to create new samples altogether, as described in section \ref{sec:class_imbalance}.
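In practice one would use an existing SMOTE implementation, but the core interpolation idea can be sketched in a few lines of NumPy (a self-contained illustration, not the library code):

```python
import numpy as np

def smote_sketch(X_min, n_new, k=5, seed=0):
    """Synthesize n_new minority samples: pick a minority sample, pick one of
    its k nearest minority neighbours, and interpolate between the two."""
    rng = np.random.default_rng(seed)
    X_min = np.asarray(X_min, dtype=float)
    n = len(X_min)
    k = min(k, n - 1)
    # pairwise distances within the minority class (self-distance excluded)
    dists = np.linalg.norm(X_min[:, None, :] - X_min[None, :, :], axis=-1)
    np.fill_diagonal(dists, np.inf)
    neighbours = np.argsort(dists, axis=1)[:, :k]
    synthetic = np.empty((n_new, X_min.shape[1]))
    for i in range(n_new):
        a = rng.integers(n)                 # a minority sample ...
        b = neighbours[a, rng.integers(k)]  # ... and one of its neighbours
        lam = rng.random()                  # random point on the segment
        synthetic[i] = X_min[a] + lam * (X_min[b] - X_min[a])
    return synthetic

X_min = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
X_syn = smote_sketch(X_min, n_new=10, k=2)
```

Each synthetic point lies on a segment between two real minority samples, which is what distinguishes SMOTE from simple random duplication.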
Another observed issue is the high similarity degree between samples from different classes, known as \emph{class overlap}. Visually, this problem manifests as two or more points from different classes overlaying each other. This can be visualized in the figures below, representing only the negative events (fig. \ref{fig:UMAP} left), as well as both negative and positive ones (fig. \ref{fig:UMAP} right) in the dataset, respectively. In Figure \ref{fig:UMAP} right, it can be observed that there are points belonging to the positive class overlapping negative ones. In other words, some samples have approximately equal probabilities of belonging to different classes (in this case 0 and 1), turning this prediction into a particularly challenging task.
\begin{figure}[!h]
\centering
\begin{minipage}[b]{0.45\textwidth}
\centering
\includegraphics[height=0.17\textheight]{images/UMAP_full_ineffective.png}
\end{minipage}
\begin{minipage}[b]{0.45\textwidth}
\centering
\includegraphics[height=0.17\textheight]{images/UMAP_full_geoeffective.png}
\end{minipage}
\caption{Left: UMAP of the negative events only. Right: UMAP of both negative and positive events}
\label{fig:UMAP}
\end{figure}
The 2D plots presented in fig. \ref{fig:UMAP} left and right were obtained with the help of the dimensionality reduction technique called Uniform Manifold Approximation and Projection \citep[UMAP,][]{mcinnes2020umap}. Dimensionality reduction refers to data transformations meant to reduce the number of attributes used to describe the essential information embedded in the data. The use of this technique enabled the 2D display of the data, which provides visual insight into it. While, arguably, the plots are oversimplified, overlapping is apparent by data analysis, which shows substantial similarities in the feature values of the two classes. \cite{VUTTIPITTAYAMONGKOL2021106631} show that class overlap is a highly overlooked issue which negatively impacts classifications, amplifying the negative effects of class imbalance. Due to the overlap, the models tend to misclassify positive instances in the proximity of the class boundaries, which will be discussed in section \ref{sec:conclusion}.
We hypothesize that one of the causes behind the class overlap issue is the limited number of independent features used for distinguishing between positive and negative events. We consider the location of the origin of the CME and its direction \citep[][]{2005} as examples of such features, that are not adequately contained in any of the catalogs we used.
Additionally, information that is also likely to reduce the class overlap is the CMEs' association with flares, as studied by \cite{yashiroetal2006}.
Another potential issue is the presumed incompleteness of the known associations between geomagnetic storms and the causal CMEs identified by \citet{richardsoncane2010}. In other words, not all the disturbances identified by \citet{richardsoncane2010} are correlated with a specific LASCO identified event, which allows for presumptions regarding their cause. One reason behind this issue is the fact that, due to the variance of travel times, it is not always possible to determine which, out of several different CMEs, was the unquestionable cause of a specific disturbance. This means that, given that we consider all CMEs not found in the catalog compiled by \cite{richardsoncane2010} to not be geoeffective, we could be training the models with inaccurate information (i.e. events labeled as 0 due to lack of certainty regarding their exact cause that are, in fact, geoeffective).
In the \cite{richardsoncane2010} catalog, there are 227 events with no associated LASCO CME cause having occurred until 2015, out of which 131 have Dst $\leqslant$ -30.
Following the approach described by \cite{uwamahoro2012}, we consider all CMEs from 15 hours and up to 5 days prior to any of the 131 geoeffective disturbances identified by \cite{richardsoncane2010} before 2015 that have no association with a LASCO event to be \emph{possibly} geoeffective. This means that neither the label 0, nor 1 is unequivocal for these events.
We experimented with capturing such information about potentially geoeffective events by using example weights. For this investigation, the samples were not assigned weights based only on the class they belonged to, as previously described. All potentially geoeffective events, as defined above, were assigned half the weight of an event that was not geoeffective. This way, if the model predicted a potentially geoeffective event as geoeffective, it would not be penalized as highly as it would be for misclassifying an event that has an unambiguous label.
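The weighting scheme above can be sketched as follows. This is a minimal example on hypothetical labels, not the actual catalog data, and the class-balanced baseline weights are our assumption:

```python
import numpy as np

# Hypothetical labels: 1 = geoeffective, 0 = not geoeffective.
y = np.array([0, 0, 0, 0, 0, 0, 0, 0, 1, 1])
# Mask marking negatives that occurred 15 h to 5 days before an
# unexplained Richardson & Cane disturbance ("possibly" geoeffective).
possibly_geoeffective = np.array(
    [False, True, False, False, True, False, False, False, False, False]
)

# Class-balanced baseline: each class contributes equally overall.
n = len(y)
w_neg = n / (2 * np.sum(y == 0))
w_pos = n / (2 * np.sum(y == 1))
sample_weight = np.where(y == 1, w_pos, w_neg)

# Ambiguous negatives count half as much as an unambiguous negative, so
# predicting them as geoeffective is penalized less during training.
sample_weight[possibly_geoeffective] *= 0.5

print(sample_weight)
```

The resulting array can be passed to any scikit-learn estimator that accepts a `sample_weight` argument in `fit`.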
\section{Machine Learning methods, their applications and results}
\label{sec:methods}
For this research, we studied and compared the following binary classifiers from the perspective of predicting CMEs' geoeffectiveness: logistic regression, K-Nearest Neighbors, Support Vector Machines, feed forward artificial neural networks and ensemble models. For the implementations of the various models and algorithms (e.g., preprocessing, grid search, cross validation, PCA), the scikit-learn library \citep[][]{scikit-learn} was used, unless stated otherwise. In addition to this, the artificial neural networks were created with the help of TensorFlow 2 \citep[][]{tensorflow2015-whitepaper} and Keras \citep[][]{chollet2015keras}.
In this section, the working principles of the models employed will be described, together with our actual experiments and the hyperparameter options we explored. Hyperparameters are model parameters that are not learned and need to be set beforehand. Finding their optimal values is one of the most important aspects of creating a model and represents a process called hyperparameter tuning, which generally involves testing various inputs. For this study, with the exception of the neural networks, this process was performed using the grid search approach. Grid search involves specifying the values to be tested for each desired hyperparameter, exhaustively trying all the possible combinations and comparing the yielded performances in order to find the optimal set of values.
The significance of each hyperparameter we tuned, together with the values used, will be detailed for each model.
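As a minimal sketch of the grid search procedure, on synthetic imbalanced data and with an illustrative logistic regression grid (not one of the exact grids detailed later):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

# Toy imbalanced dataset standing in for the CME features.
X, y = make_classification(n_samples=400, weights=[0.9, 0.1], random_state=0)

# Grid search fits one model per parameter combination and compares
# them by cross-validated score; here the grid has one hyperparameter.
param_grid = {"C": [0.01, 0.1, 1.0, 10.0]}
grid = GridSearchCV(
    LogisticRegression(class_weight="balanced", max_iter=1000),
    param_grid, scoring="recall", cv=5)
grid.fit(X, y)
print(grid.best_params_)
```

After fitting, `grid.best_estimator_` is the model retrained on all the data with the winning combination.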
When evaluating the performance of our models, we analyzed and compared a number of performance metrics (recall, precision, F1 score, accuracy and specificity), as detailed in the following paragraphs. For a better understanding of the terms used for defining the metrics and their evaluation, the essential notations used throughout this paper are the following:
\begin{itemize}
\setlength\itemsep{-0.5em}
\item TP - True Positive (event correctly predicted to be geoeffective)
\item TN - True Negative (event correctly predicted as not geoeffective)
\item FP - False Positive (false alarm)
\item FN - False Negative (undetected geoeffective event)
\end{itemize}
We considered the recall (eq. \ref{eq:recall}) - also known as sensitivity - to be the main performance indicator, given the high cost associated with not anticipating a potential storm (e.g., without any warning, electrical/satellite/communication systems would not be safeguarded). The recall value should be interpreted as the percentage of the known geoeffective events that were correctly identified by the model.
\begin{equation}
\label{eq:recall}
recall = \frac{TP}{TP + FN}
\end{equation}
Recall's complementary performance metric is the precision (eq. \ref{eq:precision}), which indicates how many of the events predicted to be geoeffective \emph{actually} were geoeffective.
\begin{equation}
\label{eq:precision}
precision = \frac{TP}{TP + FP} = 1 - \textnormal{False Alarm Ratio}
\end{equation}
Despite the fact that the recall and precision can each be improved at the expense of the other, it is decidedly desirable for both their values to be as high as possible. A good indicator of their balance is given by the F1 score (eq. \ref{eq:f1score}):
\begin{equation}
\label{eq:f1score}
F1\ score = \frac{2 \cdot precision \cdot recall}{precision + recall}
\end{equation}
Additionally, we also computed the accuracy (eq. \ref{eq:accuracy}), which is equal to the proportion of correct predictions. We reiterate that the accuracy is not a reliable metric by itself in the case of binary classification on imbalanced data.
\begin{equation}
\label{eq:accuracy}
accuracy = \frac{TP + TN}{TP + TN + FP + FN}
\end{equation}
Finally, the specificity (eq. \ref{eq:specificity}) refers to the percentage of correctly labeled negative events.
\begin{equation}
\label{eq:specificity}
specificity = \frac{TN}{TN + FP}
\end{equation}
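All five metrics follow directly from the four confusion-matrix counts defined above; a short sketch with illustrative counts (not results from our experiments):

```python
def classification_metrics(tp, tn, fp, fn):
    """Compute the five metrics from confusion-matrix counts."""
    recall = tp / (tp + fn)                  # sensitivity
    precision = tp / (tp + fp)               # 1 - false alarm ratio
    f1 = 2 * precision * recall / (precision + recall)
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    specificity = tn / (tn + fp)
    return recall, precision, f1, accuracy, specificity

# Illustrative counts for an imbalanced problem: note the high
# accuracy despite the modest precision.
recall, precision, f1, accuracy, specificity = classification_metrics(
    tp=30, tn=900, fp=60, fn=10
)
print(round(recall, 3), round(precision, 3), round(f1, 3))
```

The example illustrates why accuracy alone is misleading here: with 96\% of samples negative, a model can score high accuracy while missing many positives.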
For testing the performances of our models, we used stratified k-fold cross validation \citep[][]{Refaeilzadeh2016}, with the default value k = 5. This means the data is shuffled and split into 5 subsets, each preserving the approximate proportion of positive to negative samples in the original dataset. The models are trained on 4 out of 5 subsets and tested on the remaining one. The process is repeated until the model has been trained and tested on all subsets (i.e., 5 times). The performance values are the average over all tests. Therefore, this evaluation method is expected to yield less biased results, given that the models are not tested only on a fraction of the data, which could have various particularities, making it easier or harder to predict on. However, in order to obtain a visual representation of the predictions with the help of UMAP (similar to those in Figure \ref{fig:UMAP} left and right), a single test set had to be chosen. For this, we used 20\% of the data, split in a stratified manner, as further described in section \ref{sec:ensemble}.
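A minimal sketch of this evaluation scheme, using scikit-learn's StratifiedKFold on synthetic imbalanced data (the estimator and scoring choice here are illustrative):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score

# Synthetic stand-in for the imbalanced CME dataset.
X, y = make_classification(n_samples=500, weights=[0.95, 0.05], random_state=0)

# Each of the 5 folds keeps roughly the same positive/negative ratio.
skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(
    LogisticRegression(class_weight="balanced", max_iter=1000),
    X, y, cv=skf, scoring="recall")

# The reported performance is the average over the 5 held-out folds.
print(scores.mean())
```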
In the following subsections, the methods and models used for the experiments are detailed.
\subsection{Logistic Regression}
\label{sec:lin_reg}
The logistic regression is a statistical model used to obtain a dependent variable (the label) based on one or more independent ones (the attributes). The predicted value can be interpreted as the probability
of the event belonging to the positive class (e.g.: a value of 0.2 represents a 20\% chance that the real label is 1, leading to the choice of the predicted label to be 0). This is expressed in eq. \ref{eq:probability}, where X represents all the examples, while y is one real target value and $\hat{y}$ is a predicted one:
\begin{equation} \label{eq:probability}
\hat{y} = P(y = 1|X), \;\;\; where
\end{equation}
\begin{equation} \label{eq:logReg}
\hat{y} = \sigma(W^{T}X + b), \;\;\; and
\end{equation}
\begin{equation} \label{eq:sigma}
\sigma(z) = \frac{1}{1 + e^{-z}}
\end{equation}
Generally, in the case of normal distributions, the threshold used for discriminating between the 2 classes is 0.5, but, in practice, the threshold may depend on the context of the problem and, implicitly, the class imbalance \citep[][]{2018book}. The adjustment of the threshold is known as \emph{threshold moving}.
Since the distribution of the classes in the dataset is not normal, we expected the separation between the classes using the threshold associated with normal distributions to be susceptible to improvement. Given that there is no a priori knowledge of a cost associated with misclassifications, we tested values between 0.5 and 0.9 for the threshold, through cross validation. Similarly, \cite{DBLP:journals/remotesensing/FuZYFLM21} analyzed the influence of 20 different thresholds between 0 and 1, observing performance improvements when choosing a value higher than 0.5.
We discovered empirically that for this model, the threshold leading to the highest F1 score (therefore the best balance between recall and precision) is 0.9. The performances obtained with all other tested thresholds are summarized in Table \ref{tab:linreg_thresholds}.
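Threshold moving does not require retraining: the predicted probabilities are simply compared against the new cutoff. A sketch on synthetic data:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in for the imbalanced CME dataset.
X, y = make_classification(n_samples=500, weights=[0.95, 0.05], random_state=0)

model = LogisticRegression(class_weight="balanced", max_iter=1000).fit(X, y)

# Default decision rule: label 1 when P(y=1|x) >= 0.5.
proba = model.predict_proba(X)[:, 1]
y_default = (proba >= 0.5).astype(int)
# Threshold moving: demand much higher confidence before raising an alarm.
y_moved = (proba >= 0.9).astype(int)

# A stricter threshold can only reduce the number of predicted positives,
# trading recall for precision.
print(y_moved.sum() <= y_default.sum())
```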
\begin{table}[h]
\centering
\caption{Performances of the logistic regression model with class-balanced weights with various thresholds, on the entire dataset, after 5-fold cross-validation}
\begin{tabular*}{\textwidth}{l@{\extracolsep{\fill}}ccccc}
\hline
LR Model Threshold & 0.5 & 0.6 & 0.7 & 0.8 & 0.9\\
\hline
Recall & 89.24\% & 86.63\% & 82.00\% & 79.21\% & 73.24\%\\
Precision & 9.79\% & 11.39\% & 12.98\% & 14.59\% & 16.29\%\\
F1 score & 17.64\% & 20.13\% & 22.39\% & 24.56\% & 26.63\%\\
Accuracy & 94.11\% & 95.13\% & 95.96\% & 96.61\% & 97.13\%\\
Specificity & 94.13\% & 95.14\% & 96.05\% & 96.73\% & 97.28\%\\
\hline
\end{tabular*}
\label{tab:linreg_thresholds}
\vspace{-4mm}
\end{table}
\begin{table}[h!]
\centering
\setlength{\tabcolsep}{-0.25pt}
\caption{Performances of logistic regression models with threshold = 0.9, on the entire dataset, after 5-fold cross-validation}
\begin{tabular*}{\textwidth}{l@{\extracolsep{\fill}}cccc}
\hline
LR Model & Balanced weights & Uncertainty-based weights & With PCA, 5 components\\
\hline
Recall & 73.24\% & 72.65\% & 72.65\%\\
Precision & 16.29\% & 15.98\% & 16.60\%\\
F1 score & 26.63\% & 26.18\% & 27.02\%\\
Accuracy & 97.13\% & 97.11\% & 97.23\%\\
Specificity & 97.28\% & 97.28\% & 97.40\%\\
\hline
\end{tabular*}
\label{tab:linreg_cv}
\vspace{-4mm}
\end{table}
Using example weights based on our certainty regarding whether the events were geoeffective or not did not lead to remarkable performance changes, while training the model using 5 principal components instead of the manually selected independent features led to a precision increase, as shown in Table \ref{tab:linreg_cv}.
\subsection{K-Nearest Neighbors}
\label{sec:knn}
K-Nearest Neighbors (KNN) is a similarity-based algorithm, where similarity is expressed in terms of a generalized concept of distance. Various ways of computing the distance can be used, e.g., Manhattan (eq. \ref{eq:manhattan}), Euclidean (eq. \ref{eq:euclidean}), Minkowski (eq. \ref{eq:minkowski}). KNN models assign the label of an event based on the label of the majority of the k most similar examples, where k is predefined (not to be confused with the number of folds used for cross-validation, which is chosen independently). This is done by following the steps below:
\begin{enumerate}
\setlength\itemsep{-0.5em}
\item Compute the distance between the point (example) whose label needs to be predicted and all the labeled points in the dataset.
\item Choose the first k smallest distances (i.e., the k-nearest neighbors).
\item Assign the label of the majority of the k-nearest neighbors to the unlabeled point.
\end{enumerate}
Simply using the predominant label in a group of k neighbors to decide upon the label of a new sample is known as using uniform weights. However, for extremely imbalanced datasets, such as the one used for this research, there is a high chance the majority of a sample's neighbors belong to the majority class. This means that regardless of how similar a sample might be to another one belonging to the minority class, the low number of such examples might not be enough to form a majority in many cases, resulting in numerous misclassifications. One way of tackling this issue is by using distance-based weights when determining the label. In other words, the smaller the distance between a sample and its neighbor (i.e. the more similar they are), the more the label of the latter will weigh when deciding what the prediction should be. Therefore, the role of distance-based weights is to ensure that minority samples have the potential to counterbalance the high number of majority samples.
For this study, we have experimented with various values for k, in order to find an optimal one, bearing in mind that too small a value could make the model sensitive to noise, while too big a value eventually leads to no performance improvements, at the cost of computation time. At the same time, we have also attempted to use different weights and distances.
The values tested using grid search were:
\begin{itemize}
\setlength\itemsep{-0.5em}
\item k = 3, 5, 7;
\item weight: uniform and distance-based;
\item distance: Manhattan (eq. \ref{eq:manhattan}), Euclidean (eq. \ref{eq:euclidean}), Minkowski (eq. \ref{eq:minkowski}) with p = 3;
\end{itemize}
\begin{equation}
\label{eq:manhattan}
d(x, y) = \sum_{i}{|x_{i} - y_{i}|}
\end{equation}
\begin{equation}
\label{eq:euclidean}
d(x, y) = \sqrt{\sum_{i}{(x_{i} - y_{i})^{2}}}
\end{equation}
\begin{equation}
\label{eq:minkowski}
d(x, y, p) = \left(\sum_{i}{|x_{i} - y_{i}|^{p}}\right)^{\frac{1}{p}}
\end{equation}
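The three distances reduce to a single parametrized formula: Minkowski with p = 1 is the Manhattan distance and with p = 2 the Euclidean one, which the following sketch verifies numerically on two illustrative points:

```python
import numpy as np

def minkowski(x, y, p):
    """Minkowski distance; p=1 gives Manhattan, p=2 gives Euclidean."""
    return np.sum(np.abs(x - y) ** p) ** (1 / p)

x = np.array([1.0, 2.0, 3.0])
y = np.array([4.0, 0.0, 3.0])

manhattan = np.sum(np.abs(x - y))          # |1-4| + |2-0| + |3-3| = 5
euclidean = np.sqrt(np.sum((x - y) ** 2))  # sqrt(9 + 4 + 0) = sqrt(13)
print(minkowski(x, y, 1) == manhattan,
      np.isclose(minkowski(x, y, 2), euclidean))
```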
The performances of the best KNN models are summarized in Table \ref{tab:knn_cv}. Our grid-search experiments showed the optimal number of neighbors in this case to be 7 when using uniform weights and 5 when using distance-based weights. The KNN models attained the highest precision values out of all tested models, while, in turn, also having the lowest recall values. The best precision values were obtained using uniform weights and the Minkowski distance, with p = 3. Simply switching to distance-based weights, with the same hyperparameters, led to no recall improvement, accompanied by an $\sim$10\% precision decrease, which could indicate the need to better signal large differences between the feature values of various examples. As indicated by the grid search, the Manhattan distance is a more suitable distance metric in that case. Nonetheless, the discrepancy between the recall increase and the precision decrease when using distance-based weights, as opposed to uniform weights, emphasizes the models' difficulty in overcoming the class imbalance and class overlap. In terms of similarity, additional information could have a great impact on distinguishing between the events that are geoeffective and those that are not.
\begin{table}[!ht]
\centering
\caption{Performances of KNN models on the entire dataset, after 5-fold cross-validation}
\begin{tabular}{lccc}
\hline
KNN Model&\makecell[l]{\\7 neighbours\\Minkowski distance\\uniform weights} & \makecell[l]{\\5 neighbours\\Manhattan distance\\distance-based weights} & \makecell[l]{PCA, 5 components\\ 7 neighbours \\Minkowski distance\\ uniform weights} \\
\hline
Recall & 4.03\% & 7.51\% & 2.89\%\\
Precision & 67.99\% & 22.94\% & 60.00\%\\
F1 score & 7.49\% & 11.17\% & 5.49\%\\
Accuracy & 99.29\% & 99.19\% & 99.30\%\\
Specificity & 99.97\% & 99.85\% & 99.98\%\\
\hline
\end{tabular}
\label{tab:knn_cv}
\vspace{-4mm}
\end{table}
After 5-fold stratified cross-validated predictions, the KNN model with uniform weights correctly identified 3 geoeffective CMEs, all being full halo events (namely, the CMEs from 2000-07-14 10:54, 2002-08-16 12:30, 2004-07-25 14:54).
The model also predicted 12 false positives, all having the CPA and AW equal to 360 (i.e., full halos). An example of such an event is the CME from 2012-03-07 01:30. The false alarm's attributes are presented in Table \ref{tab:neighbors}, together with those of its nearest neighbors, listed from closest to farthest. It can be observed that, out of the 7 events considered to be most similar to it, 5 were geoeffective, which, consequently, led to the model predicting the event to be geoeffective as well. The model's high number of false positives, compared to its number of true positives, is due to the high number of geoeffective CMEs having AW = 360 in our dataset (113 out of 172), in addition to the even higher number of CMEs having AW = 360 that were not geoeffective (537) and suggests that there is additional information needed to better differentiate between the full halo CMEs that are geoeffective and those that are not.
\begin{table}[!ht]
\centering
\caption{Example of a false alarm detected by the KNN model with uniform weights, together with its nearest neighbors}
\begin{tabular*}{\textwidth}{c@{\extracolsep{\fill}}ccccc}
\hline
CME & LS & Acc & FI & Dst & Geoeffective\\
\hline
2012-03-07 01:30 & 1825 & -160.9 & 36.29 & 0 & 0\\
\hline
2003-10-29 20:54 & 2029 & -146.5 & 31.88 & -383 & 1\\
2005-09-10 21:52 & 1893 & -171.7 & 43.74 & 0 & 0\\
2005-01-17 09:30 & 2094 & -118.8 & 33.60 & -80 & 1\\
2000-07-14 10:54 & 1674 & -96.1 & 25.15 & -301 & 1\\
2002-08-16 12:30 & 1585 & -67.1 & 29.66 & -106 & 1\\
2003-11-02 09:30 & 2036 & -64.2 & 30.47 & 0 & 0\\
2006-12-13 02:54 & 1774 & -61.4 & 47.55 & -162 & 1\\
\hline
\end{tabular*}
\label{tab:neighbors}
\vspace{-4mm}
\end{table}
\subsection{Support Vector Machines}
The Support Vector Machines \citep[SVM,][]{1995} aim to find the optimal hyperplane delimiting the points belonging to 2 classes (while the method can actually be used for multiple classes, this is outside the scope of this work).
The hyperplane represents an ensemble of points in an n-1 dimensional vector space, where n is the number of attributes used (e.g., for bidimensional datasets, i.e., having only 2 attributes, the hyperplane is, in fact, a straight line; for 3D data, the classes will be separated by a 2D plane, etc.). The essence of this algorithm lies in the way the optimal hyperplane is found, which is by ensuring that the margin between the hyperplane and the closest points (which are called the support vectors) is maximal.
In the case of binary classification with linearly separable classes, the decision function is:
\begin{equation} \label{eq:svmLine}
W^{T}X + b = 0
\end{equation}
Thus, the predicted labels are as follows:
\begin{equation} \label{eq:svmPredictions}
\hat{y} = \begin{cases}
0, & \text{if W$^{T}$X + b $<$ 0}\\
1, & \text{if W$^{T}$X + b $\geq$ 0}
\end{cases}
\end{equation}
Ensuring that the margin is maximal is equivalent to finding the W that maximizes the value of \(\frac{2}{||W||}\), i.e., minimizes \(||W||\).
\begin{figure}
\centering
\caption{Illustration of the principles behind the SVM approach}
\label{fig:SVM}
\input figures/svmtikz.tex
\end{figure}
However, not all datasets are initially linearly separable (e.g., Figure \ref{fig:SVM} and Figure \ref{fig:SVM2} left). Nevertheless, it is generally considered that there does exist a higher dimensional feature space in which the data can be linearly separated \citep[Cover's Theorem,][]{4038449, haykin2009neural} (fig. \ref{fig:SVM2} right). Therefore, mapping the initial values to this new space is required.
\begin{figure}[!ht]
\centering
\begin{minipage}{0.45\textwidth}
\centering
\input figures/nonseparabletikz.tex
\end{minipage}
\begin{minipage}{0.45\textwidth}
\centering
\input figures/separabletikz.tex
\end{minipage}
\caption{Left: data that is not linearly separable. Right: hyperplane separating data in a 3D space}
\label{fig:SVM2}
\end{figure}
Since this transformation can be computationally demanding, the \emph{kernel trick} \citep[][]{1000150} is used. The kernel is a function whose inputs are vectors (in the original space) returning their dot product in the feature space, instead of explicitly computing the new coordinates. Thus, a kernel function k is defined as:
\begin{equation} \label{eq:kernel}
k(a, b) = \langle{\phi(a), \phi(b)}\rangle
\end{equation}
for a, b \(\in\) X, where \(\phi : X \rightarrow \Re^n\).
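For instance, for 2D inputs and the degree-2 polynomial kernel with $\gamma = 1$ and r = 0, the explicit feature map is $\phi(x) = (x_{1}^{2}, \sqrt{2}x_{1}x_{2}, x_{2}^{2})$, and the kernel trick can be checked numerically:

```python
import numpy as np

def phi(x):
    """Explicit degree-2 feature map for 2D inputs."""
    return np.array([x[0] ** 2, np.sqrt(2) * x[0] * x[1], x[1] ** 2])

def k(a, b):
    """Degree-2 polynomial kernel (gamma=1, r=0): no explicit mapping."""
    return np.dot(a, b) ** 2

a, b = np.array([1.0, 2.0]), np.array([3.0, -1.0])

# The kernel computes the dot product in the feature space directly.
print(np.isclose(k(a, b), np.dot(phi(a), phi(b))))
```

The kernel evaluates a single dot product in the original space, whereas the explicit mapping would grow combinatorially with the degree and the input dimension.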
For SVMs, the kernel function is a hyperparameter itself. The ones we have experimented with are: linear (eq. \ref{eq:linKernel}, also known as a "non-kernel", since it does not project the data onto a higher dimension), polynomial (eq. \ref{eq:polyKernel}), Radial Basis Function (RBF) (eq. \ref{eq:rbfKernel}) and sigmoid (eq. \ref{eq:sigmoidKernel}).
\begin{equation} \label{eq:linKernel}
k(a, b) = a \cdot b + c \textnormal{, where c = 1 for these experiments}
\end{equation}
\begin{equation} \label{eq:polyKernel}
k(a, b) = (\gamma\langle{a, b}\rangle + r)^d
\end{equation}
For the polynomial kernel, the tested degrees were 2, 3, 4 and 5.
For all our experiments, the value of \emph{r}, which is also referred to as the \emph{0 coefficient} was 0.
\begin{equation} \label{eq:rbfKernel}
k(a, b) = \exp(-\gamma||a - b||^2)
\end{equation}
For the RBF kernel, we experimented with $\gamma \in \{10^{p_1}, 2^{p_2}\}$, where $p_1 \in \mathbb{Z} \cap [-2, 2]$ and $p_2 \in \{\pm15, \pm11, \pm9, \pm5, \pm3\}$. This hyperparameter controls the region of influence of a single example, where a lower value allows for a larger area (i.e., more influence).
\begin{equation} \label{eq:sigmoidKernel}
k(a, b) = \tanh(\gamma\langle{a, b}\rangle + r)
\end{equation}
For the sigmoid kernel, $\gamma$ had the default value set for the scikit-learn implementation, equal to $(n \cdot \sigma^{2}(X))^{-1}$, where \emph{n} is the number of attributes and $\sigma^{2}(X)$ denotes the variance of X, while, as previously mentioned, \emph{r} was equal to 0.
The other hyperparameter that was tuned as part of our experiments was the regularization parameter C, whose role is to trade off between a smooth decision surface and a low number of misclassifications. Thus, the lower its value, the smoother the decision surface; the higher its value, the fewer the incorrect predictions on the training data. The domain of C values explored was $\{10^{p_1}, 2^{p_2}\}$, where $p_1 \in \mathbb{Z} \cap [-2, 2]$ and $p_2 \in \{\pm15, \pm11, \pm9, \pm5, \pm3\}$, similar to \cite{2012JKAS...45...31C}. Nonetheless, it should be noted that however appealing a perfect classification is, this most commonly points to the issue of overfitting \citep[][]{doi:10.1021/ci0342472}, which manifests as the model having poor generalization skill, despite excellent training performance. We ensured the models learned to generalize well by examining the train and test errors at cross-validation and ensuring there were no high discrepancies between them.
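A condensed sketch of the RBF-kernel grid search (synthetic data; only a subset of the stated C and $\gamma$ domains, to keep the run short):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# Synthetic stand-in for the imbalanced CME dataset.
X, y = make_classification(n_samples=300, weights=[0.9, 0.1], random_state=0)

# Subsets of the C and gamma domains described in the text.
Cs = [10.0 ** p for p in range(-2, 3)] + [2.0 ** p for p in (-3, 3)]
gammas = [2.0 ** p for p in (-15, -11, -9, -5, -3)]

grid = GridSearchCV(
    SVC(kernel="rbf", class_weight="balanced"),
    param_grid={"C": Cs, "gamma": gammas},
    scoring="f1",
    cv=3,
)
grid.fit(X, y)
print(grid.best_params_["C"], grid.best_params_["gamma"])
```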
Performance wise, out of all the SVM models applied on original data only, with balanced weights, the best results were obtained with the polynomial kernel, according to the F1 score, as shown in Table \ref{tab:svm_comparison_cv}.
\begin{table}[!ht]
\caption{Performances of various SVM configurations on the entire dataset, with balanced weights, after 5-fold cross-validation. The 5 columns represent the following configurations: (1) - linear kernel; (2) - polynomial kernel, degree=2, C = 1, $\gamma$ = 1/(no. features $\cdot$ Var(X)); (3) - RBF kernel, C = $2^{-3}$, $\gamma$ = $2^{-15}$; (4) - RBF kernel, with uncertainty, C = $2^{-5}$, $\gamma$ = $2^{-11}$; (5) - linear kernel, with PCA, 5 components}
\begin{tabular*}{\textwidth}{l@{\extracolsep{\fill}}ccccc}
\hline
SVM Model & (1) & (2) & (3) & (4) & (5) \\
\hline
Recall & 90.16\% & 81.44\% & 71.69\% & 76.36\% & 89.51\%\\
Precision & 8.92\% & 14.38\% & 14.50\% & 14.16\% & 8.73\%\\
F1 score & 16.24\% & 24.43\% & 24.11\% & 23.89\% & 15.91\%\\
Accuracy & 93.40\% & 96.40\% & 96.79\% & 96.55\% & 93.32\%\\
Specificity & 93.42\% & 96.51\% & 96.51\% & 100.00\% & 93.35\%\\
\hline
\end{tabular*}
\label{tab:svm_comparison_cv}
\vspace{-4mm}
\end{table}
Despite the different performance values, some false positives and false negatives are common to all 3 models. There are 18 geoeffective CMEs that all these SVM models fail to identify, e.g., the event on 2005-06-09 14:36:05, actually labeled as "Very Poor Event" in the LASCO catalog, associated with a Dst = -106 nT, as well as 627 CMEs that the models incorrectly classify as geoeffective. Out of these 627 examples, 513 have the Central Position Angle = Angular Width = 360 (halo events), e.g., the event on 2013-12-07 07:36:05.
The experiment regarding the use of special weights for what we suspected could be potentially geoeffective events increased the recall value of the model having an RBF kernel by $\sim$5\%, at the cost of a $<$1\% precision drop, while for the models with different kernels, the recall value dropped. The use of PCA lowered both the recall and the precision values, however, by $<$1\% each. These performances are also summarized in Table \ref{tab:svm_comparison_cv}, columns (4) and (5).
\subsection{Artificial Neural Networks}
Feed-forward, fully connected artificial neural networks \citep[][]{Rosenblatt1958ThePA, 1986} are computational systems that map inputs to one or more outputs, having the following layered structure (fig. \ref{fig:ann} left), where each layer is fully connected to the previous one:
\begin{itemize}
\setlength\itemsep{-0.5em}
\item 1 input layer, representing the input variables and the bias
\item 1 or more hidden layers, each with a previously defined number of neurons
\item 1 output layer, representing the prediction
\end{itemize}
The numbers of both layers and neurons are hyperparameters that will be tuned.
The working principle of such a network is based on the neurons' way of working, which is similar to a linear regression, as shown in figure \ref{fig:ann} right. However, the complexity of the model is increased with the addition of the activation function.
\begin{figure}[!ht]
\centering
\begin{minipage}{0.45\textwidth}
\centering
\input figures/anntikz.tex
\end{minipage}
\begin{minipage}{0.45\textwidth}
\centering
\input figures/neurontikz.tex
\end{minipage}
\caption{Left: the architecture of a feed-forward artificial neural network. Right: an artificial neural network neuron}
\label{fig:ann}
\end{figure}
Therefore, the output of the \emph{i$^{th}$} neuron in the hidden layer \emph{l} is expressed as:
\begin{equation}
\label{neuronOutput}
y_i^{[l]} = \varphi(W_i^{[l]T}X + b_i^{[l]})
\end{equation}
where $\varphi$ is the activation function and X are the outputs of all the neurons from layer \emph{l - 1}.
Neural networks are characterized by multiple hyperparameters, which will be discussed in the following paragraph. In terms of activation function, the sigmoid was used for the last layer, since the result is mapped to a number between 0 and 1, which can be interpreted as a probability for binary classification. For all other neurons, the Rectified Linear Unit \citep[ReLU,][]{inproceedings} (eq. \ref{eq:relu}) was chosen, being an adequate common activation function \citep[][]{nwankpa2018activation}.
\begin{equation}
\label{eq:relu}
ReLU(x) = \max(0, x)
\end{equation}
The optimization algorithm used was Adam \citep[][]{kingma2017adam}, which has an adaptive learning rate whose default initial value is 1e-3 (for the TensorFlow and Keras implementations). Although this algorithm is considered to need little, if any, manual learning rate adjustment, we also tested the following values: 1e-1, 1e-2, 5e-2, 5e-3. The other specific parameters were unmodified.
The loss function of choice was binary cross-entropy. The best batch size proved to be 32, out of the set of values we experimented with, i.e., 16, 32, 64, 128. The optimal number of epochs varied based on the number of neurons and learning rate. We employed the early stopping technique \citep[][]{1998} for finding the epoch when the loss value no longer decreases.
We trained and evaluated networks with 1 and 2 hidden layers, each with up to 10 neurons. Higher values proved to lead to either overfitting or no performance improvements, in which case less complex models that are easier to interpret are favoured. For these hyperparameter tuning experiments, the KerasTuner \citep[][]{omalley2019kerastuner} framework was used.
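A minimal Keras sketch of one of the architectures above (1 hidden layer with 3 ReLU neurons, a sigmoid output, Adam at 1e-3, binary cross-entropy, batch size 32, and early stopping); the synthetic data and the illustrative class weights are our assumptions, not the paper's exact setup:

```python
import tensorflow as tf
from sklearn.datasets import make_classification

# Synthetic stand-in for the imbalanced CME dataset.
X, y = make_classification(n_samples=400, weights=[0.9, 0.1], random_state=0)

# One hidden ReLU layer; the sigmoid output is read as P(y=1|x).
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(X.shape[1],)),
    tf.keras.layers.Dense(3, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
              loss="binary_crossentropy")

# Early stopping: halt when the validation loss no longer decreases.
early = tf.keras.callbacks.EarlyStopping(patience=3,
                                         restore_best_weights=True)
model.fit(X, y, batch_size=32, epochs=25, validation_split=0.2,
          class_weight={0: 1.0, 1: 9.0},  # illustrative 9:1 reweighting
          callbacks=[early], verbose=0)

proba = model.predict(X, verbose=0).ravel()
print(proba.min() >= 0.0 and proba.max() <= 1.0)
```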
\begin{table}[!ht]
\centering
\caption{Performances of ANN models for various experiments, on the entire dataset, after 5-fold cross-validation. The headers represent the following 1 hidden layered architectures:\\
(1) - 3 neurons, learning rate = 1e-3, batch size = 32, 25 epochs, balanced weights;\\
(2) - 7 neurons, learning rate = 1e-3, batch size = 32, 23 epochs, balanced weights;\\
(3) - 9 neurons, learning rate = 1e-3, batch size = 32, 30 epochs, with sample weights for uncertainty;\\
(4) - 3 neurons, learning rate = 1e-3, batch size = 32, 23 epochs, after using PCA with 5 components, balanced weights;}
\begin{tabular*}{\textwidth}{l@{\extracolsep{\fill}}cccc}
\hline
ANN Model & (1) & (2) & (3) & (4)\\
\hline
Recall & 92.40\% & 91.11\% & 90.34\% & 89.35\%\\
Precision & 7.48\% & 7.82\% & 8.13\% & 5.34\%\\
F1 score & 13.84\% & 14.41\% & 14.81\% & 10.05\%\\
Accuracy & 91.88\% & 92.37\% & 92.64\% & 88.03\%\\
Specificity & 92.92\% & 91.86\% & 92.52\% & 89.25\%\\
\hline
\end{tabular*}
\label{tab:ann_cv}
\vspace{-4mm}
\end{table}
What stood out for the ANN experiments was the similarity between the results from various experiments and architectures (including an additional hidden layer). A possible explanation for this may be the lack of essential data for distinguishing between the CMEs that are geoeffective and those that are not (e.g., interplanetary parameters) that such a model type could effectively use for better predictions.
\subsection{Ensemble models}
\label{sec:ensemble}
Ensemble models \citep[][]{book} harness the power of two or more different models combined, in an attempt to achieve better prediction results than the individual ones. The results yielded by this approach can typically be interpreted as more confident ones, being obtained by combining the prediction capabilities of more than one metaphorical expert (i.e., model).
For this research, a stacked classifier was employed. Stacked classifiers use the predictions of the models aggregated in the ensemble as inputs for an additional meta-learner. This learner is trained to find an optimal manner of combining the previous predictions in order to obtain a final label that is as accurate as possible.
We decided to build an ensemble model with the aim of improving the precision, without significantly decreasing the recall value. Therefore, naturally, the ensemble needed to include both a model with high precision and one with high sensitivity. We decided on using the KNN model (with uniform weights, 7 neighbours and Manhattan distance), given that it has the highest precision. Additionally, a logistic regression model with balanced class weights and 0.9 threshold was used, given both its predictive performances and simplicity (e.g., compared to the more complex neural network). For the meta-learner, we chose a logistic regression model, which is also the scikit-learn default option for such classifiers, with the same class cutoff.
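A sketch of this ensemble with scikit-learn's StackingClassifier, on synthetic data; the 0.9 threshold moving applied to the logistic regressions is omitted here for brevity:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Synthetic stand-in for the imbalanced CME dataset.
X, y = make_classification(n_samples=500, weights=[0.95, 0.05], random_state=0)
# Stratified 80/20 split, preserving the 0-1 label ratio in both subsets.
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

# High-precision KNN + high-recall logistic regression, combined by a
# logistic-regression meta-learner (the scikit-learn default choice).
stack = StackingClassifier(
    estimators=[
        ("knn", KNeighborsClassifier(n_neighbors=7, metric="manhattan")),
        ("logreg", LogisticRegression(class_weight="balanced", max_iter=1000)),
    ],
    final_estimator=LogisticRegression(),
)
stack.fit(X_tr, y_tr)
print(stack.predict(X_te).shape)
```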
The recall values obtained for all models employed in this study generally varied between 70\% and 91\%, with precision values roughly fluctuating between 7\% and 16\%, with the exception of the KNN models, whose precision could reach 67.9\%, for a recall of 4\%. As previously mentioned, while a recall value as high as possible is desirable, we decided to choose the best model according to the highest F1 score value, and subsequently, the best precision value among models with a recall value of at least 75\%. Therefore, we consider this stacked classifier to be the best performing model. In order to obtain a visual representation of the correctness of the predictions, the dataset was randomly shuffled and split into 2 subsets, while preserving the 0-1 label ratio of the original dataset for all the subsets. 20\% of the data (4881 samples, out of which 33 positive ones) was set aside for final tests, while the remaining 80\% account for the \emph{traindev} (used for training and development) data. The model was trained on the traindev data and tested on the test set. The resulting 4 types of binary prediction categories, as summarized in Sec. \ref{sec:methods}, are showcased in Figure \ref{fig:UMAPbest}.
\begin{figure}[!ht]
\centering
\caption{The predictions of the ensemble model, on the test set (20\% of the data), colored by their correctness}
\label{fig:UMAPbest}
\includegraphics[width=\textwidth]{images/UMAP_test_ensemble.png}
\end{figure}
This model correctly classifies 4719 out of 4849 negative samples, represented by the blue dots in Figure \ref{fig:UMAPbest}. The 7 false negatives (red dots) are presented in Table \ref{tab:ensemble_fn}.
However, out of these 7 false negative events, 5 are questionable events, having either poor/very poor CME data or an uncertain CME association.
\begin{table}[!ht]
\centering
\caption{Example false negatives predicted by the ensemble model on the test set}
\begin{tabular*}{\textwidth}{c@{\extracolsep{\fill}}cccccc}
\hline
CME & CPA & AW & LS & Acc & FI & Dst\\
\hline
1997-10-06 15:28 & 139 & 174 & 293 & 15.9 & 0.08 & -130\\
2000-05-13 12:26 & 166 & 182 & 666 & 10.8 & 2.09 & -92\\
2001-09-20 19:31 & 306 & 207 & 446 & -3.4 & 3.08 & -73\\
2001-09-27 04:54 & 226 & 182 & 509 & 36.6 & 5.25 & -66\\
2004-07-22 07:31 & 66 & 151 & 700 & 4.8 & 13.59 & -136\\
2005-06-09 14:36 & 260 & 151 & 377 & -3.7 & 0.56 & -106\\
2011-10-02 02:00 & 167 & 103 & 259 & 0.6 & 7.85 & -43\\
\hline
\end{tabular*}
\label{tab:ensemble_fn}
\vspace{-4mm}
\end{table}
In addition to this, the model predicted 130 false alarms, out of which 92 were full halos, having CPA = AW = 360. Comparing Figure \ref{fig:UMAPbest} above with Figure \ref{fig:UMAP} (left and right), while bearing in mind that the former depicts only 20\% of the data (i.e., the test set) whereas the latter two represent the entire dataset, the pattern followed by the samples incorrectly classified as geoeffective can be easily observed. It is, therefore, more apparent how the similarities between some geoeffective samples and others pose a substantial challenge for the models.
The stacked classifier qualifies as our best model, given its high recall and F1 score values, as noted in Table \ref{tab:ensemble_cv} (the second and third columns). This model has been employed for additional experiments, described below, that are summarized in the fourth and fifth columns.
\begin{table}[h]
\centering
\caption{Performance of the stacked classifier on the full set with 5-fold cross validation, test, and subsampled data sets}
\begin{tabular}{lcccc}
\hline
Stacking classifier &
Entire dataset &
Test dataset &
\makecell[l]{Samples with\\AW$ \geqslant$ 60} &
\makecell[l]{Samples with\\AW $\geqslant$ 120}\\
\hline
Recall & 80.28\% & 78.99\% & ~~~~~78.59\% & ~~~~~73.59\% \\
Precision & 15.19\% & 14.82\% & ~~~~~17.12\% & ~~~~~17.61\%\\
F1 score & 25.54\% & 24.96\% & ~~~~~27.95\% & ~~~~~28.40\%\\
Accuracy & 96.67\% & 96.65\% & ~~~~~90.69\% & ~~~~~74.27\%\\
Specificity & 96.79\% & 96.78\% & ~~~~~90.90\% & ~~~~~74.42\%\\
\hline
\end{tabular}
\label{tab:ensemble_cv}
\vspace{-4mm}
\end{table}
\newpage
\subsection{Addressing the class imbalance}
\label{sec:class_imbalance}
We reiterate that the class imbalance represents one of the main challenges of this task. In section \ref{sec:challenges}, we discussed minimizing its negative effects by using class weights, by undersampling (decreasing the number of samples from the majority class), and by oversampling (increasing the number of samples from the minority class).
We perform two additional experiments to attempt to improve the prediction metrics. Firstly, we attempted to artificially increase the number of geoeffective events using SMOTE, with the implementation provided by the \emph{imbalanced-learn} library \citep[][]{JMLR:v18:16-365}.
This technique relies on the principles of K-Nearest Neighbors (as detailed in section \ref{sec:knn}). A sample is considered to be a point in an n-dimensional space, where n is equal to the number of features. In order to generate a new sample, an existing point from the minority class is chosen randomly, together with k of the most similar points from the same class (i.e. its nearest neighbors). Out of the k neighbors, one is randomly selected and a new point is generated at a random position on the line segment determined by the original point and its neighbor. Thus, a new sample is created, its feature values being the coordinates of the newly generated point.
The sampling strategy was set to 0.1, meaning that new minority samples were created so as to obtain a 1:10 minority-to-majority ratio (i.e., roughly 1 geoeffective sample for every 10 non-geoeffective ones). In order to further reduce the imbalance to a 1:5 ratio using a reasonable amount of artificial data only, random samples from the majority class were eliminated (random undersampling with a 0.2 sampling strategy).
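The generation step described above can be hand-rolled in a few lines. The sketch below uses only NumPy and scikit-learn (imbalanced-learn provides `SMOTE` directly); the array sizes, k = 5 neighbours, and the helper name `smote_samples` are illustrative assumptions, not the paper's implementation:

```python
# Minimal SMOTE-style synthetic-sample generation, as described in the text:
# pick a random minority point, pick one of its k nearest minority neighbours,
# and interpolate a new point at a random position on the segment between them.
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
X_min = rng.normal(size=(50, 6))  # minority-class samples (illustrative)

def smote_samples(X_min, n_new, k=5, rng=rng):
    """Generate n_new synthetic minority samples, SMOTE-style (a sketch)."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X_min)
    # Column 0 of kneighbors() is each point itself, so drop it.
    neigh_idx = nn.kneighbors(X_min, return_distance=False)[:, 1:]
    new_points = np.empty((n_new, X_min.shape[1]))
    for i in range(n_new):
        j = rng.integers(len(X_min))            # random minority point
        nb = X_min[rng.choice(neigh_idx[j])]    # one of its k neighbours
        gap = rng.random()                      # random position on segment
        new_points[i] = X_min[j] + gap * (nb - X_min[j])
    return new_points

synthetic = smote_samples(X_min, n_new=445)  # e.g. grow 50 positives to 495
```

Because every synthetic point is a convex combination of two existing minority points, it stays inside the minority class's coordinate-wise bounds, which is also why SMOTE cannot account for nearby majority-class points (a limitation discussed below).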
For this experiment, the traindev set was further split into 2 stratified subsets, hereafter called train (80\%) and dev (20\%). SMOTE was only applied on the train set. The newly obtained set was then concatenated with the dev set, containing original data only, and shuffled. The resulting set was used for cross-validated experiments. We tested the models both on the entire dataset, containing all original and artificially generated samples, through 5-fold cross validation, and on the test set containing original data only. The results are summarized in Table \ref{tab:smote_test}.
\begin{table}[ht!]
\centering
\caption{Performances of the models trained on the SMOTE augmented dataset, on the test set}
\begin{tabular*}{\textwidth}{l@{\extracolsep{\fill}}cccc}
\hline
Model & \makecell[l]{Logistic regression\\threshold = 0.9\\balanced weights} & \makecell[l]{KNN, 3 neighbours\\Manhattan distance\\distance-based weights} & \makecell[l]{SVM\\Linear kernel\\balanced weights} & ANN \\
\hline
Recall & 75.00\% & 43.75\% & 93.75\% & 71.87\%\\
Precision & 17.77\% & 20.00\% & 10.20\% & 2.24\%\\
F1 score & 28.74\% & 27.45\% & 18.40\% & 4.35\%\\
Accuracy & 97.56\% & 98.48\% & 94.55\% & 79.28\%\\
Specificity & 97.71\% & 98.84\% & 94.55\% & 79.33\%\\
\hline
\end{tabular*}
\label{tab:smote_test}
\vspace{-4mm}
\end{table}
Both the logistic regression model and the KNN model exceed all F1 scores obtained using original data only.
While we were able to build KNN and SVM models with higher precision values trained on original data only, their recall values in Table \ref{tab:smote_test} are higher than any obtained before. The SVM model, for example, only misses 2 geoeffective events out of the 32 present in the test set: the events on 2005-06-09 14:36 and 2011-10-02 02:00 (with Dst values of -106 and -43 nT, respectively).
For the ANN, we used the same architecture as model 1 (i.e., 1 hidden layer, 3 neurons, learning rate = 1e-3, batch size = 32, 25 epochs, balanced weights). This was the only model with lower performance. Nevertheless, it is important to bear in mind that the artificially generated samples could exhibit some particularities. One substantial disadvantage of SMOTE is that it does not consider the position of the newly generated points relative to existing points from the other class: a new point could have the exact same attribute values as a point from a different class, yet still carry a different label. This issue is all the more relevant given that there are very few geoeffective examples to begin with and that, apart from the use of SMOTE, class overlap is, as previously stated, a challenge for this project in itself.
Whilst we acknowledge the shortcomings of this technique and do not rule out the influence of other factors, these results indicate that having more positive samples to train on could improve the performance.
To pursue this, we present the second experiment, in which we eliminate small-width CMEs, shown in the literature to be less likely to be heading towards the Earth \citep[][]{schw06,zhang2007,Chi2016}. We set two threshold levels: (i) selecting CMEs with AW $\geqslant$ 60, limiting our dataset to 7373 samples, where the ratio of geoeffective samples rose from 0.8\% to 2.27\%; (ii) selecting CMEs with AW $\geqslant$ 120, analogous to \cite{2012JKAS...45...31C}, resulting in a subset of 2360 samples, out of which 6.90\% are geoeffective. We trained and tested the stacked classifier on these two subsampled datasets. While the precision and F1 score values increased (at the expense of the recall value), their improvement was not significant, as shown in Table \ref{tab:ensemble_cv}.
As mentioned in section \ref{sec:lin_reg}, the choice of the threshold used for delimiting the two classes can influence the performance, as can be seen in Table \ref{tab:linreg_thresholds}. The results in Table \ref{tab:ensemble_cv} were obtained using 0.7 and 0.5, respectively, as thresholds for the logistic regressors used as part of the ensembles. These values were chosen empirically.
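Applying a custom class-delimiting threshold to a probabilistic classifier can be sketched as below (assuming scikit-learn; the data and the helper name `predict_with_threshold` are illustrative):

```python
# Sketch of thresholding a logistic regression's probabilities, as done for
# the 0.9 / 0.7 / 0.5 cutoffs discussed in the text. Data is synthetic.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=400, weights=[0.9], random_state=1)
clf = LogisticRegression(class_weight="balanced", max_iter=1000).fit(X, y)

def predict_with_threshold(model, X, threshold=0.9):
    """Label a sample positive only if P(class=1) >= threshold.

    Raising the threshold trades recall for precision: fewer samples are
    flagged, but those that are flagged are more likely true positives.
    """
    return (model.predict_proba(X)[:, 1] >= threshold).astype(int)

strict = predict_with_threshold(clf, X, 0.9)
default = predict_with_threshold(clf, X, 0.5)
assert strict.sum() <= default.sum()  # a stricter cutoff flags fewer positives
```

The final assertion holds by construction: every sample passing the 0.9 cutoff also passes the 0.5 one, which is the mechanism behind the precision gains reported in Table \ref{tab:linreg_thresholds}.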
The model still yielded 106 false alarms (130 for the full set), out of which 91 (92 for the full set) were full halos on the first subset, and 95 false alarms, out of which 85 were full halos, on the second subset, as seen in Figure \ref{fig:UMAPsubsampled} and Table \ref{tab:ensemble_cv}. This indicates that the inconspicuous differences between full halo CMEs that are geoeffective and those that are not remained an issue even for less imbalanced sets. In conclusion, we could not clearly establish that the low number of geoeffective samples is the most influential factor that negatively impacts the model's prediction capabilities.
\begin{figure}[!h]
\centering
\begin{minipage}[b]{0.45\textwidth}
\centering
\includegraphics[height=0.17\textheight]{images/UMAP_subsampled_60.png}
\end{minipage}
\begin{minipage}[b]{0.45\textwidth}
\centering
\includegraphics[height=0.17\textheight]{images/UMAP_subsampled_120.png}
\end{minipage}
\caption{Left: Predictions on the 1st subset (AW $\geqslant$ 60). Right: Predictions on the 2nd subset (AW $\geqslant$ 120)}
\label{fig:UMAPsubsampled}
\end{figure}
\newpage
\section{Conclusions and discussion}
\label{sec:conclusion}
The purpose of this study was to explore the usability of various ML methods for computing the probability that a given CME will be associated with a GS. For this investigation, we also assessed and addressed a variety of context specific issues, such as class imbalance and class overlap. We consider that, in the case of geoeffectiveness prediction, the sensitivity is the most important performance indicator, because of the high cost associated with a false negative in the eventuality of a powerful storm. Therefore, we believe that the recall values we obtained (e.g., 80.28\% for the stacked classifier) prove the practical utility of our models, given that only solar parameters have been used, which ensures extended warning times.
Apart from the numerical values of the performance metrics, a series of significant observations should be noted. One of the most important remarks that emerged from this study is the association of the majority of the false alarms reported by our models with full halo CMEs. This is in agreement with the current understanding that these ejections are the ones with the highest probability of being geoeffective, as well as being associated with the most powerful storms \citep[][]{2007JGRA..112.6112G}. Therefore, while the precision values are low, they are justified by the high rate of geomagnetic storms caused by full halo and/or fast CMEs (e.g. 113 out of 172 geoeffective CMEs had an angular width of 360, while 537 CMEs with an angular width of 360 were not geoeffective). One notable remark is that halo CMEs are observed equally well originating on both the frontside and the backside of the Sun. While backside CMEs do not reach Earth and, hence, are not geoeffective, the observations and catalogs used for this study do not discern between the two classes. Therefore, we did not include such a separation in the model setups. An additional consideration is that threshold adjustments were the most effective technique for increasing precision values. Furthermore, using the original values of the independent features led to better results than using the values obtained after lowering the number of dimensions with PCA, indicating that the original CME attributes used are appropriate inputs for such studies. Lastly, most models trained on SMOTE-augmented data exceeded the average performance values obtained with real data only, on the test set.
The present study adds a number of original contributions to the growing body of research on the application of ML in the field of heliophysics. To the best of our knowledge, no previous studies focused on this topic created artificial data samples using SMOTE, introduced mechanisms for dealing with potentially geoeffective events or employed UMAP for data visualization.
Our experiments use a considerably more extensive dataset than the ones used by \cite{srivastava2005} (64 geoeffective CMEs) and \cite{10.3389/fspas.2021.672203} (2796 CMEs, 32 geoeffective ones), who employ logistic regression models for predictions.
Our results are also comparable with those obtained by \cite{2012JKAS...45...31C}. We used a more extensive dataset, i.e., not limited to halo CMEs, and incorporated minor CMEs with -50 nT $<$ Dst $\leqslant$ -30 nT. Their SVM model (RBF kernel, C = $2^{-5}$, $\gamma$ = $2^{-15}$) obtained 76\% sensitivity, together with 28\% precision, based on a narrower definition for geoeffectiveness (where a geoeffective event is defined as being associated with Dst $<$ -50 nT in that work), therefore not including weak storms. Most of our models, including the stacked classifier, compare favourably to this in terms of the proportion of geoeffective CMEs identified, while in turn obtaining a lower precision, which might be explained by our inclusion of both non-halo CMEs and weak storms in our dataset.
Similarly, \cite{uwamahoro2012} only include GSs with Dst $\leqslant$ -50 nT and halo CMEs in their study on using neural networks for predicting geoeffectiveness. Their results (86\% sensitivity, 81.4\% precision) are obtained with a 3-layer network that takes interplanetary data as inputs, in addition to solar ones. While these values appear promising, the use of interplanetary parameters substantially limits the warning time to less than a day, given that this data is collected closer to Earth. This is the reason why the authors also emphasize the importance of prediction models that only use solar parameters, which are accessible earlier and can lead to up to 3-4 days of warning time. Such a model could be used for filtering out most CMEs with high chances of not being geoeffective, therefore creating warnings for potentially geoeffective ones days in advance. Their actual geoeffectiveness could then be estimated with more complex methods and higher precision as more data is collected.
\cite{DBLP:journals/remotesensing/FuZYFLM21} obtained a similar F1 score (27\%), with a lower accuracy value (75\%) using deep learning with only solar images as inputs. Although the main scope of their work is different (i.e., predicting the arrival time) and the methods cannot be directly compared, the difference between accuracy values is notable. Our future goals consist of assessing the feasibility of augmenting our method with image based ML methods and results, with the hope of enhancing the results presented here.
We consider our results to be promising in terms of recall values, in spite of the limitations imposed by the nature of the data. This research shows that, using only the independent CME solar attributes available online, easily interpretable models such as logistic regressions can correctly identify over 75\% of the geoeffective CMEs. In fact, the increased complexity of a model (e.g., the neural network) did not translate, in this case, into substantially increased prediction performance. Our best model is the stacking classifier with 2 estimators: a logistic regression with a 0.9 threshold for discerning between classes and a KNN model with uniform weights, 7 neighbours and Manhattan distance, tied together by a meta-learner in the form of an additional logistic regression with balanced weights. We considered this to be the best performing model based on having the highest value for the recall and F1 score pair, thus having the best recall-precision balance (based on the F1 score) while also identifying most of the geoeffective events (based on recall). This model correctly identified 80.28\% of the geoeffective events, with a precision of 15.19\%, based on 5-fold cross-validation on the entire dataset, as shown in Table \ref{tab:ensemble_cv}.
We believe the results obtained in this study might be enhanced by including comprehensive flare associations. While such associations were not included in this work, this could be a fruitful direction for further exploration. An example of such an association is the flare to CME association list of \citet{yashiroetal2006}. This, however, only includes flares of M1.0 class, which would severely limit our already small sample of geoeffective events. Thus, it may be significant to expand the \citet{yashiroetal2006} list to include smaller C, or even B flares that might be linked to geoeffective CMEs from filament eruptions. We believe a comprehensive set of associations covering the majority of the geoeffective events studied here would sort out additional uncertainty factors, such as the back-halo CMEs, and might significantly improve the model metrics described in this work.
Lastly, we mention that the underlying challenges associated with this task, namely the extreme class imbalance, the low number of variables available at the moment and the issue of class overlap, remain significant obstacles for this type of predictions. This exploratory study presents encouraging results using numerical data only. We are further exploring the extension of our research to include solar image data, as well as more complex learning frameworks.
\begin{acknowledgments}
The authors thank Dr. Sarah Gibson for the initial review of this work. We are grateful for the suggestions offered by the anonymous reviewer, which have significantly enhanced this work.
A.R.P. was funded by the High Altitude Observatory of the National Center for Atmospheric Research, facilities that are sponsored by the National Science Foundation under cooperative agreement No. 1852977.
We acknowledge the use of the SOHO/LASCO CME catalog. This CME catalog is generated and maintained at the CDAW Data Center by NASA and The Catholic University of America in cooperation with the Naval Research Laboratory. SOHO is a project of international cooperation between ESA and NASA. Flare Index Data used in this study were calculated by T.Atac and A.Ozguc from Bogazici University Kandilli Observatory, Istanbul, Turkey and made available through the NOAA National Geophysical Data Center (NGDC).
\end{acknowledgments}
\section{Introduction}
\label{sec_intro}
\noindent
In the last two decades, evidence of CO depletion has been widely found in numerous sources (e.g. \citealt{Kramer99, Caselli99, Bergin02, Fontani12, Wiles16}), with important consequences for the chemistry, such as an increase in the fraction of deuterated molecules (\citealt{Bacmann03}).
Cold and dense regions within molecular clouds with temperatures, $T$, $< 20$~K (e.g. \citealt{Caselli08}) and number densities, $n$(H$_2$), $>$ a few $\times$ 10$^{4}$ cm$^{-3}$ (e.g. \citealt{Crapsi05}; see also \citealt{BerginTafalla07} and references therein), provide the ideal conditions that favour depletion of heavy elements onto the surface of dust grains. Here, chemical species can participate not only in gas-phase chemistry, but also in surface reactions once frozen onto the dust grains. This chemical dichotomy can lead to the formation of complex molecules, modifying the chemical composition over a significant volume of a cloud (e.g. \citealt{Herbst&vanDishoeck09}).\\
\indent How much of the CO is depleted onto the surface of dust grains is usually characterized by the depletion factor (e.g. \citealt{Caselli99,Fontani12}), defined as the ratio between the ``expected'' CO abundance with respect to H$_2$~($\chi^E_{\rm CO}$) and the observed one ($\chi^O_{\rm CO}$):
\begin{equation}\label{eq:depletionfactor}
f_D=\frac{\chi^E_{\rm CO}}{\chi^O_{\rm CO}}.
\end{equation}
\noindent
The main CO isotopologue (i.e. \ce{^{12}C^{16}O}) is virtually always optically thick, and thus its intensity is not proportional to the CO column density.
One of the less abundant CO isotopologues (i.e. C$^{18}$O, as in this work) can be used to obtain a much more accurate estimate of $f_D$.\\
\indent In general, in dense and cold environments, such as massive clumps in an early stage of evolution, the estimate of the depletion degree provides important information such as the increase of the efficiency of chemical reactions that occur on the dust grain surface, favoured by the high concentration of the depleted chemical species.\\
High-mass star forming regions are potentially more affected by large-scale depletion: the high volume densities of H$_2$, both in the clumps and in the surrounding intra-clump regions, make these sources more prone to high levels of molecular depletion (e.g. \citealt{Giannetti14}), because the timescale after which depletion becomes important (see, e.g. \citealt{Caselli99,Caselli02}) decreases with increasing volume density. In fact, the depletion timescale ($\tau_{dep}$) is inversely proportional to the absorption rate, $\kappa_{abs} = \sigma\langle v \rangle n_g S$ [s$^{-1}$], of the freezing-out species, where $\sigma$ is the dust-grain cross-section, $\langle v \rangle$ is the mean velocity of the Maxwellian distribution of gaseous particles, $n_g$ is the dust-grain number density, and $S$ is the sticking coefficient (see also \citealt{BerginTafalla07} for more details).\\
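As an illustration of this scaling, the freeze-out timescale implied by $\tau_{dep} \propto \kappa_{abs}^{-1}$ can be sketched numerically. The grain radius, grain material density, dust-to-gas mass ratio and sticking coefficient used below are generic textbook assumptions, not values derived in this work:

```python
import numpy as np

# Physical constants (cgs)
K_B = 1.380649e-16           # Boltzmann constant [erg/K]
M_CO = 28.0 * 1.6605e-24     # CO molecular mass [g]

def depletion_timescale(n_h2, T=15.0, a_grain=1e-5, dust_to_gas=0.01,
                        rho_grain=3.0, m_gas=2.8 * 1.6605e-24, S=1.0):
    """Freeze-out timescale tau_dep = 1 / (sigma <v> n_g S) in seconds.

    n_h2        : H2 number density [cm^-3]
    a_grain     : assumed grain radius [cm] (0.1 micron)
    dust_to_gas : assumed dust-to-gas mass ratio
    rho_grain   : assumed grain material density [g cm^-3]
    """
    sigma = np.pi * a_grain**2                         # grain cross-section [cm^2]
    v_mean = np.sqrt(8.0 * K_B * T / (np.pi * M_CO))   # mean CO thermal speed [cm/s]
    m_grain = (4.0 / 3.0) * np.pi * a_grain**3 * rho_grain
    n_g = n_h2 * m_gas * dust_to_gas / m_grain         # grain number density [cm^-3]
    return 1.0 / (sigma * v_mean * n_g * S)
```

Because $n_g$ scales linearly with $n({\rm H}_2)$, the timescale falls inversely with volume density, which is the point made above: at a few $\times 10^4$ cm$^{-3}$ this sketch gives $\tau_{dep}$ of order $10^5$ yr.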
\indent In different samples of young high-mass star-forming regions, the observed depletion factors vary between $\approx1$, in the case of complete absence of depletion, and a few tens (e.g. in \citealt{Thomas08}, \citealt{Fontani12} and \citealt{Feng16}). The largest values reported are those in \cite{Fontani12}, where $f_D$ can reach values of up to 50-80, a factor $\sim 10$ larger than those observed in samples of low-mass clouds (e.g. \citealt{Bacmann02, Bacmann03, Ceccarelli07, Christie12}). \cite{Giannetti14} noted that the method used by \cite{Fontani12} to calculate N(H$_2$) yielded values $\sim 2.7$ times larger than those obtained with their own method, mainly due to the different dust absorption coefficient, $\kappa$, assumed (i.e. an absorption coefficient at 870 $\mu$m, $\kappa_{870 \mu\rm{m}} = 1.8$ cm$^2$ g$^{-1}$, assumed by \citealt{Giannetti14}, while $\kappa_{870 \mu\rm{m}}\sim 0.77$ cm$^2$ g$^{-1}$ was assumed by \citealt{Fontani12} and derived here following \citealt{Beuther05} and \citealt{Hildebrand83}).\\
\indent Rescaling the depletion factors of these sources under the same assumptions, we note that the typical values of $f_D$ vary from a minimum of 1 to a maximum of $\sim$10, except for some particular cases. However, there are studies, which were made with much higher resolution, that reported even larger depletion factors: this is the case for the high-mass core in the IRDC G28.34+0.06 where, at a spatial resolution of $\sim 10^{-2}$ pc (using a source distance of 4.2 kpc; \citealt{Urquhart18}), $f_D$ reaches values of $10^2 - 10^3$, the highest values of $f_D$ found to date (\citealt{Zhang09}).\\
\indent Observing depletion factors ranging from 1 to 10 means that along the line of sight there will be regions in which the CO is completely in the gas phase (i.e. not depleted), and other regions in which the depletion factor reaches values larger than 10. Knowledge of the size of the region within which most of the CO is frozen onto dust grains (the depletion radius) provides information on the approximate spatial scales on which different chemical processes operate in high-mass star forming regions. At these scales, for example, it is reasonable to think that the derivation of H$_2$~column densities from CO lines (see e.g. \citealt{Bolatto13} and references therein) and studies of the gas dynamics using carbon monoxide isotopologue lines are strongly affected by the depletion process, as not all of the CO present is directly observable.
Furthermore, the disappearance of CO from the gas phase also favours the deuteration process (e.g. \citealt{Roberts00, Bacmann03}). This is because the following reaction becomes increasingly inefficient as the amount of frozen CO increases:
\begin{equation}\label{reaction1}
{\rm H^+_3 + CO \rightarrow HCO^+ + H_2} .
\end{equation}
\noindent
In a high CO-depletion regime, ${\rm H_3^+}$ is destroyed less efficiently by reaction~(\ref{reaction1}), remaining available for other reactions. Deuterium enrichment in the gas phase is enhanced by the formation of ${\rm H_2D^+}$ through the reaction:
\begin{equation}\label{reaction2}
{\rm H_3^+ + HD \rightleftharpoons H_2D^+ + H_2 + \Delta E}\,,
\end{equation}
\noindent
which proceeds from left to right unless there is a substantial fraction of ortho-H$_2$~(e.g. \citealt{Gerlich02}), the only case in which reaction (\ref{reaction2}) is endothermic and naturally proceeds in the direction in which the observed deuterated fraction decreases. For this reason, the higher the degree of CO depletion, the more efficient the formation of ${\rm H_2D^+}$ by reaction~(\ref{reaction2}), due to the greater amount of ${\rm H_3^+}$ available.\\
\indent Single cell and one-dimensional numerical calculations (e.g. \citealt{Sipila13, Kong15}) are fast enough to include detailed treatment of the depletion process together with large comprehensive networks to follow the chemistry evolution in these environments.\\
\indent However, when we move to more accurate three-dimensional simulations, strong assumptions are needed to limit the computational time and allow the study of the dynamical effects on a smaller set of chemical reactions. \cite{Koertgen17, Koertgen18}, for example, analysed the dependence of the deuteration process on the dynamical evolution, exploring how the chemical initial conditions influence the process. They performed 3D magnetohydrodynamical (MHD) simulations fully coupled, for the first time, with a chemical network that describes deuterium chemistry under the assumption of complete depletion (i.e. $f_D = \infty$), on a spatial scale of $\sim 0.1$ pc from the collapse centre. However, the real extent of the depletion radius is still unknown, and this uncertainty might alter the theoretical models that describe the chemical evolution of star-forming regions.\\
\indent One way to shed light on these issues is to map high-mass star-forming regions on both large and small scales, with the goal of determining the depletion factor fluctuations over a broad range of densities and temperatures.\\
\indent In this paper, we selected the nearest and most massive filament found in the $870 \mu$m {\it APEX Telescope Large AREA Survey of the Galaxy} (ATLASGAL) (see \citealt{Schuller09} and \citealt{Li16}), the IRDC G351.77-0.51, presented in Sect.~\ref{sec:datapresentation}. In Sect.~\ref{sec:results}, we present the dust temperature map obtained by fitting the Spectral Energy Distribution (SED) using continuum observations of the {\it Herschel} Infrared Galactic Plane Survey (Hi-GAL, \citealt{Molinari10}). In addition, we compute the column density maps of H$_2$\:and C$^{18}$O. Combining the H$_2$\:and C$^{18}$O\:column density maps allows us to produce the large-scale depletion map of the entire source. We evaluate variations of depletion over the whole structure with the aim to understand which scales are affected by depletion the most. By assuming a volume density distribution profile for the H$_2$\: and a step-function to describe the abundance profile of C$^{18}$O\:with respect to H$_2$, we estimate the size of the depletion radius (Sect.~\ref{discussion}). In Sect.~\ref{caveats} we finally explore how much the estimates of the depletion radius are affected by the model's assumptions.
\begin{figure}
\centering
\includegraphics[width=1\columnwidth]{G351_870_panel.png}
\caption{{\it LArge APEX BOlometer CAmera} (LABOCA) map of the 870$\mu$m dust continuum emission from the IRDC G351.77-0.51. The white contours are from 5$\sigma$ (0.2 Jy beam$^{-1}$) to 2 Jy beam$^{-1}$ while the black contours are from 2.4 Jy beam$^{-1}$ to 4 Jy beam$^{-1}$ in steps of 10$\sigma$. The white labels indicate some of the dust clumps identified by \citet{Leurini11}. Orange and yellow boxes (i.e., regions C5, C7 and F1) are linked to the models discussed in Sect.~\ref{discussion}.}
\label{fig:leurini}
\end{figure}
\section{SOURCE AND DATA: IRDC G351.77-0.51}\label{sec:datapresentation}
\noindent
In Fig.~\ref{fig:leurini} we show a continuum image at 870 $\mu$m of G351.77-0.51 from the ATLASGAL survey (\citealt{Li16}), in which the clumps (labels 1, 2, 3, 5 and 7) appear well-pronounced along the filamentary structure of the star-forming region. Among the twelve original clumps defined in \citet{Leurini11}, we selected only the five clumps for which data of both C$^{17}$O and C$^{18}$O~are available (see Sect. \ref{opacity}).\\
\indent G351.77-0.51 is the most massive filament in the ATLASGAL survey within 1 kpc from us.
\citet{Leurini19} estimate M$\sim$ 2600 M$_\odot$ (twice the value listed by \citealt{Li16}, who used different values for dust temperature and opacity).
Following the evolutionary sequence of massive clumps defined by \cite{Konig17} - originally outlined by \cite{Giannetti14, Csengeri16} - the main clump of G351.77-0.51 was classified as an infrared-bright (IRB) source\footnote{This class of object shows a flux larger than 2.6 Jy at 21-24 $\mu$m and no compact radio emission at 4-8 GHz within 10'' of the ATLASGAL peak.}, and revealed hot-core features (see \citealt{Giannetti17_june}).
\cite{Leurini11_bis, Leurini19} studied the velocity field of the molecular gas component. They discussed the velocity gradients by considering whether they might be due to rotation, or outflow(s) around clump-1, or indicative of multiple velocity components detected in several C$^{18}$O\:spectra.\\
\indent In this paper we adopt the same nomenclature as \cite{Leurini19} to identify the structures that compose the complex network of filaments of G351.77-0.51: below, we will refer to the central filament as ``main body'' or ``main ridge''. It is clearly visible in Fig.~\ref{fig:leurini} as the elongated structure that harbours the five clumps, identified by white labels. The gas that constitutes the main body is cold and chemically young: \cite{Giannetti19} find high abundances of o-H$_2$D$^+$ and N$_2$D$^+$ and an age $\lesssim 10^5$ yr for clump-7 (see that paper for more details). The northern part of the main body appears in absorption against the mid-IR background of our Galaxy up to 70 $\mu$m and in emission in the (sub)millimetre range (\citealt{Faundez04}). The analysis of the J$=$2$\rightarrow$1 C$^{18}$O~molecular line (see Sect.~\ref{C18O}) confirmed the presence of less dense molecular sub-filaments, linked to the main body: we refer to them as ``branches'', following the naming convention of \cite{Schisano14}.
\begin{figure*}
\centering
{\includegraphics[width=.44\textwidth]{Tdust2.png}}
{\includegraphics[width=.44\textwidth]{NH2_CORR.png}}
\caption{(a) Dust temperature map from \citet{Leurini19}, generated by a pixel-by-pixel SED-fitting of the 160-500 $\mu$m continuum fluxes of the Hi-Gal Survey. The white dashed lines mark the region observed in C$^{18}$O~with APEX (Sect.~\ref{c18o_observations}). Cyan contours are defined on the integrated intensity map of the APEX C$^{18}$O\:J$=2\rightarrow 1$ line in Fig.~\ref{fig:OPACITY} (a) at 3, 9, 27 and 81$\sigma$ contours, where 3$\sigma =$ 0.9 K km s$^{-1}$; (b) H$_2$\:column density map from the pixel-by-pixel SED-fitting from \citet{Leurini19}, scaled to a gas mass-to-dust ratio equal to 120 (see text). Red contours are the same of the LABOCA map of the 870$\mu$m dust continuum emission shown in Fig.~\ref{fig:leurini}. In both panels, we masked the saturated hot-core region in {\it Herschel} data (white circle).}
\label{fig:Tmaps}
\end{figure*}
\subsection{C$^{18}$O\:map}\label{c18o_observations}
The C$^{18}$O\:J$=$2$\rightarrow$1 observations were carried out with the {\it Atacama Pathfinder Experiment} 12-meter submillimeter telescope (APEX) between 2014 August and November. The observations were centred at 218.5 GHz, with a velocity resolution of 0.1 km~s$^{-1}$. The whole map covers an approximate total area of 234 ${\rm arcmin}^2$. We refer to \cite{Leurini19} for a more detailed description of the dataset.\\
The root mean square (rms) noise is not uniform across the map, owing to the different exposure times, and was therefore calculated iteratively in each pixel. A first estimate of the rms noise was obtained from the unmasked spectra; then, any emission higher than 3$\sigma$ was masked and the rms was recomputed. The cycle continues until the difference between the rms noise of two consecutive iterations (i.e. $\sigma_{i+1} -\sigma_{i}$) falls below 10$^{-4}$ K. We estimate a typical final noise of 0.30 K (ranging between 0.15 and 0.45 K) per velocity channel.
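A minimal sketch of this iterative rms estimate on a single spectrum, assuming (as one plausible reading of the procedure) that channels are clipped symmetrically at 3$\sigma$:

```python
import numpy as np

def iterative_rms(spectrum, clip=3.0, tol=1e-4, max_iter=100):
    """Per-pixel rms: mask channels above clip*sigma and iterate until
    the rms changes by less than `tol` (here, in K)."""
    data = np.asarray(spectrum, dtype=float)
    sigma = data.std()                    # first estimate from unmasked spectrum
    for _ in range(max_iter):
        masked = data[np.abs(data) < clip * sigma]  # drop channels with emission
        new_sigma = masked.std()
        if abs(new_sigma - sigma) < tol:  # convergence criterion
            break
        sigma = new_sigma
    return sigma
```

On a spectrum containing a line, the clipping removes the line channels after the first iterations, so the converged value approaches the noise level of the line-free channels.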
\section{Results}\label{sec:results}
\subsection{Temperature and H$_2$~column density maps}\label{temperature_maps}
Calculating C$^{18}$O\:column densities requires the gas excitation temperature, $T_{\rm ex}^{\rm C^{18}O}$. We derived $T_{\rm ex}^{\rm C^{18}O}$ from the dust temperature, $T_{\rm dust}$, map, which can be obtained from the {\it Herschel} data through pixel-by-pixel SED fits. This allows one to determine also the H$_2$\:column density map from the emission between 160 and 500 $\mu$m.\\
\indent In this work we adopt the dust temperature map presented in \cite{Leurini19}. To estimate the dust temperature of the whole filamentary structure of G351.77-0.51, these authors used the {\it Herschel} (\citealt{Pilbratt10}) Infrared Galactic Plane Survey (Hi-GAL, \citealt{Molinari10}) images at 500, 350 and 250 $\mu$m from SPIRE (\citealt{Griffin10}) and at 160 $\mu$m from PACS (\citealt{Poglitsch10}).
The authors adopted a model of two emission components (Schisano et al., subm.), splitting the fluxes observed by {\it Herschel} in each pixel into the filament and background contributions, as discussed by \cite{Peretto10}. Then, they fitted the filament emission pixel-by-pixel with a grey body model, deriving the dust temperature and the H$_2$~column density.
They assumed a dust opacity law $\kappa_0(\nu/\nu_0)^{\beta}$ with $\beta=2$, $\kappa_0=0.1$ cm$^2$ g$^{-1}$ at $\nu_0=1250$ GHz (\citealt{Hildebrand83}). This prescription assumes a gas-to-dust ratio equal to 100.
A detailed description of the procedure is given in \cite{Leurini19}.\\
\indent Their maps, shown in Fig.~\ref{fig:Tmaps} have a resolution of $36''$, i.e. the coarsest resolution in {\it Herschel} bands, that is comparable to the resolution of our C$^{18}$O\:data ($29''$). In most of the map, the dust temperature (panel (a) of Fig.~\ref{fig:Tmaps}) ranges between 10 and 30 K, while along the main body and in the region to the south of clump-3 in Fig.~\ref{fig:leurini} typically $T_{dust}\lesssim$ 15 K are found.\\
\indent $T_{\rm ex}^{\rm C^{18}O}$ was derived from $T_{\rm dust}$ by applying the empirical relation defined by \cite{Giannetti17_june}, who suggest $T_{\rm ex}^{\rm C^{18}O}=$ 1.46$\times T_{\rm dust} -12.9$ with an intrinsic scatter of 6.7 K. However, this relation is only valid for dust temperatures between $\sim 10$ and $\sim 45$ K, while at low $T_{\rm dust}$ it may underestimate $T_{\rm ex}^{\rm C^{18}O}$. Our data are concentrated in this low-temperature regime, where $T_{\rm dust}\lesssim$ 9 K would be translated into negative $T_{\rm ex}^{\rm C^{18}O}$ by the relation mentioned earlier. To limit this issue, we therefore decided to use the flattest curve allowed by the 2$\sigma$ uncertainties of the original fit, $T_{\rm ex}^{\rm C^{18}O}= T_{\rm dust} -2.6$, imposing a lower limit of $T_{\rm ex}=$ 10.8 K, equal to the energy separation between the levels of the J$=$2$\rightarrow$1 transition of the C$^{18}$O~line.\\
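The adopted conversion can be written compactly as follows; the flattened relation and the 10.8 K floor are those quoted above:

```python
import numpy as np

T_EX_FLOOR = 10.8  # K, level separation of the C18O J=2->1 transition

def tex_from_tdust(t_dust):
    """Flattened empirical relation adopted in the text:
    Tex = Tdust - 2.6 K, with a lower limit of 10.8 K."""
    t_dust = np.asarray(t_dust, dtype=float)
    return np.maximum(t_dust - 2.6, T_EX_FLOOR)
```

Applied to the dust temperature map, this yields the $T_{\rm ex}^{\rm C^{18}O}$ values used in the column density calculation; any pixel colder than 13.4 K in dust temperature is assigned the floor value.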
$T_{\rm ex}^{\rm C^{18}O}$ typically ranges between $\sim$ 11 and 28 K, with peaks of up to 40 K. Most of the pixels along the main ridge show a temperature of 12-13 K. At these temperatures, CO is efficiently removed from the gas phase and frozen onto dust grains for densities exceeding a few $\times$ 10$^4$~cm$^{-3}$ (see \citealt{Giannetti14}). This suggests that the depletion factor in these regions will have the highest values of the entire IRDC (as confirmed by the analysis of the depletion map - Sect.~\ref{observeddepletion}).\\
\begin{figure*}
\centering
{\includegraphics[width=1\textwidth]{panelsNtN_2.png}}
\caption{(a) Integrated intensity distribution of the C$^{18}$O\:J=2$\rightarrow$1 molecular line transition detected with {\it APEX} in units of K km s$^{-1}$. (b) Map of the opacity from the N(H$_2$) map, following the procedure described in Sect.~\ref{opacity}. The final $\tau$-N(H$_2$) relation used is: log$_{10}$($\tau_{\rm C^{18}O}$)$=$2.5$~$log$_{10}$(N(H$_2$))-57.4. (c) C$^{18}$O\:column density map obtained from the integrated intensity distribution, shown in Fig.~\ref{fig:OPACITY} (a), using eq.~(\ref{eq:nc18o}). The red and cyan contours are the 3, 9, 27 and 81$\sigma$ contours of the integrated flux density distribution in panel (a), where 3$\sigma =$ 0.9 K km s$^{-1}$.}
\label{fig:OPACITY}
\end{figure*}
\indent We modified the H$_2$~column density map of \cite{Leurini19} by using a different value of the gas mass-to-dust ratio, $\gamma$, equal to 120, as derived by \cite{Giannetti17_oct} assuming a galactocentric distance $D_{GL} = 7.4$ kpc (\citealt{Leurini19}). The resulting H$_2$~column density map is shown in Fig.~\ref{fig:Tmaps} ($b$). The saturated region in the {\it Herschel} data, corresponding to the hot-core clump, was masked out and excluded from our analysis. In the coldest regions of Fig.~\ref{fig:Tmaps} (a), the molecular hydrogen reaches a column density of 2.4 $\times$ 10$^{23}$ cm$^{-2}$, decreasing to 4.2 $\times$ 10$^{21}$ cm$^{-2}$ in the regions in which the dust is warmer. Fig.~\ref{fig:Tmaps} (b) shows that the high-column-density material (i.e. $\gtrsim 1 \times 10^{23}$ cm$^{-2}$) is distributed in a single filamentary feature and in clump-like structures, the so-called ``main body'' of \cite{Leurini19}.
\subsection{Column densities of C$^{18}$O~}\label{C18O}
Under the assumption of local thermal equilibrium (LTE), we derived the C$^{18}$O\:column density, N(C$^{18}$O), from the integrated line intensity of the (2-1) transition. Following \cite{Kramer91}, the general formula for ${\rm N}({\rm C^{18}O})$ is:
\begin{multline}\label{eq:nc18o}
{\rm N}({\rm C^{18}O}) = \frac{C_\tau}{\eta_c} \frac{3h}{8\pi^3\mu^2} \frac{Z}{2} e^{\frac{E_{low}}{k_B T_{ex}}} \left[ 1- e^{-\frac{h\nu_{J,J-1}}{k_B T_{ex}}} \right]^{-1} \\
\times [J(T_{ex}, \nu_{\rm C^{18}O}) - J(T_{bg}, \nu_{\rm C^{18}O})]^{-1} \int T_{MB}\: d\upsilon \\
= C_\tau\:f(T_{ex}) \int T_{MB}\: d\upsilon,\\
\end{multline}
\noindent
where $C_\tau$ is the optical depth correction factor defined as $\tau/[1 - {\rm exp}(-\tau)]$, with $\tau$ the optical depth of the J$=$2$\rightarrow$1 transition of C$^{18}$O (see Sect.~\ref{opacity}); $h$ and $k_B$ are the Planck and Boltzmann constants, respectively; $\eta_c$ is the {\it filling factor}; $\mu$ is the dipole moment of the molecule; $Z$ is the partition function; $E_{low}$ is the energy of the lower level of the transition and $\nu_{J,J-1}$ is the frequency of the $J\rightarrow J-1$ transition of the considered molecule (in this case, the C$^{18}$O~J$=$2$\rightarrow$1, equal to $\sim$219.5 GHz); $J(T,\nu)= (h\nu/k_B)\,[\exp(h\nu/k_B T)-1]^{-1}$; $T_{bg}\cong2.7$ K is the background temperature; $T_{MB}$ is the main beam temperature, and its integral over the velocity range covered by the C$^{18}$O\:line is shown in Fig.~\ref{fig:OPACITY} ($a$). In the last line of eq.~(\ref{eq:nc18o}), $f(T_{ex})$ incorporates all the constants and the terms depending on $T_{ex}$. We further considered possible saturation effects of the continuum {\it Herschel} maps in the hot-core region (i.e. clump-1 in Fig.~\ref{fig:leurini}).
In the following paragraphs, we discuss the steps that allowed us to derive the $C_\tau$ map (Fig.~\ref{fig:OPACITY}$b$) and the final C$^{18}$O\:column density map (Fig.~\ref{fig:OPACITY}$c$), necessary to produce the depletion map discussed in Sect.~\ref{observeddepletion}.
\subsubsection{Opacity correction}\label{opacity}
We estimate the optical depth of C$^{18}$O~J$=$2$\rightarrow$1 in the clumps by means of the detection equation (see \citealt{Hofner00} for more details). We use the C$^{18}$O\:and C$^{17}$O J$=$2$\rightarrow$1 APEX observations presented in \cite{Leurini11}, assuming the relative abundance, $\phi$, equal to 4 as found by \cite{Wouterloot08}\footnote{Assuming a galactocentric distance of the source equal to D$_{GL}=$7.4 kpc as in \cite{Leurini19}.}.
Both transitions were observed in seven single-pointing observations, centred at the coordinates of the clumps. We did not consider the data of clumps 9 and 10 (defined in \citealt{Leurini11}), because they are not part of the source, nor the data of the hot-core (i.e. clump-1), due to the saturation of the continuum observations (especially at 250 $\mu$m).\\
\indent The ratio between the peaks of the two CO isotopologues line intensities in each clump is then equal to:
\begin{multline}\label{eq:ratio}
R_{{\rm C^{18}O},{\rm C^{17}O}}=\frac{T_{MB,{\rm C^{18}O}}}{T_{MB,{\rm C^{17}O}}}= \\
\frac{\eta_{\rm C^{18}O}[J(T_{{\rm ex},{\rm C^{18}O}},\nu_{\rm C^{18}O})-J(T_{bg}, \nu_{\rm C^{18}O})](1- e^{-\tau_{\rm C^{18}O}})}{\eta_{\rm C^{17}O}[J(T_{{\rm ex},{\rm C^{17}O}},\nu_{\rm C^{17}O})-J(T_{bg}, \nu_{\rm C^{17}O})](1- e^{-\tau_{\rm C^{17}O}})},
\end{multline}
\noindent
where the considered transition of C$^{18}$O\:and C$^{17}$O is the J$=$2$\rightarrow$1, at frequencies $\nu_{\rm C^{18}O}=219.5$ GHz and $\nu_{\rm C^{17}O}=224.7$ GHz, respectively; $\tau$ is the optical depth of the two considered CO isotopologues, with $\tau_{\rm C^{18}O}=\phi~\tau_{\rm C^{17}O}$.\\
\indent In eq.~(\ref{eq:ratio}) we assumed that the J$=$2$\rightarrow$1 transitions of the two CO isotopologues are excited under the same conditions (i.e. $T_{{\rm ex},{\rm C^{17}O}} = T_{{\rm ex},{\rm C^{18}O}}$; e.g. \citealt{Martin&Barrett78,Ladd04}) and trace the same volume of gas. As a consequence, the {\it filling factor}, $\eta$, is the same for both transitions and therefore cancels out of eq.~(\ref{eq:ratio}).\\
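Under these assumptions, and if one further neglects the small difference between the $J(T,\nu)$ terms at 219.5 and 224.7 GHz (a simplification made here for illustration, not part of the original analysis), eq.~(\ref{eq:ratio}) reduces to $R \simeq (1-e^{-\tau_{\rm C^{18}O}})/(1-e^{-\tau_{\rm C^{18}O}/\phi})$ with $\phi = 4$, which can be inverted numerically:

```python
import math

PHI = 4.0  # assumed C18O/C17O abundance ratio (Wouterloot et al. 2008)

def ratio_model(tau18):
    """Peak-intensity ratio for equal Tex, equal filling factor and
    nearly equal J(T, nu) terms; decreases from PHI (thin) to 1 (thick)."""
    return (1.0 - math.exp(-tau18)) / (1.0 - math.exp(-tau18 / PHI))

def tau_c18o(R, lo=1e-6, hi=50.0, tol=1e-8):
    """Invert the ratio for tau(C18O) by bisection; R must lie in (1, PHI)."""
    if not 1.0 < R < PHI:
        raise ValueError("ratio must lie between 1 and PHI")
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if ratio_model(mid) > R:   # ratio too high -> line still too thin
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

A lower observed ratio corresponds to a more opaque C$^{18}$O\:line, as expected from the detection-equation method.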
\indent We fitted a linear relation between $log_{10}$($\tau$) and $log_{10}$(N(H$_2$))\footnote{We used a $\tau$-N(H$_2$) relation and not a $\tau$-N(C$^{18}$O) one because N(H$_2$) is not affected by opacity (i.e. $\kappa_\nu$ correction already applied in Sect.~\ref{temperature_maps}) and the first relation has less scatter than the second.}, and then computed $C_\tau$ to correct N(C$^{18}$O) following the schematic diagram in Fig.~\ref{fig:scheme}.
The best linear fit was computed with a least-squares regression by extracting the C$^{17}$O and C$^{18}$O~peak fluxes 10$^4$ times (step 2 in Fig.~\ref{fig:scheme}). The basic assumption - see step 1a - was that the rms ($\varepsilon$) is normally distributed\footnote{This step was performed using the {\sf numpy.random.normal} function of NumPy (\citealt{Numpy06}) v1.14.}. For each clump, the $\tau$ value was calculated following eq.~(\ref{eq:ratio}), including the contribution of the rms and assuming a 10\% calibration uncertainty ($\sigma$), summed in quadrature with $\varepsilon$ (i.e. $T_{MB,{\rm C^{18}O}} = T^{\rm C^{18}O}_{MB} + \sqrt{\varepsilon^2 + \sigma^2}$, and similarly for $T_{MB,{\rm C^{17}O}}$). We note that, when applying this method to select different values of $T_{MB}$ in the C$^{18}$O\:and C$^{17}$O spectra, the errors do not change significantly if the distributions are built from 10$^{3}$ elements or more. Applying this procedure to each clump - steps 1(a-d) - we derived the linear fit $log_{10}(\tau)-log_{10}[{\rm N(H_2)}]$. We repeated this procedure $10^4$ times and generated a cube of CO column density maps, one for each solution $log_{10}(\tau)-log_{10}[{\rm N(H_2)}]$ found. At the end of this procedure, a distribution of $10^{4}$ values of N(C$^{18}$O) was associated with each pixel, and used to compute the average value of N(C$^{18}$O) and its relative error bar in each pixel. We finally imposed an upper limit for the correction at $\tau=2$. This value would only be achieved in the saturated hot-core region, and thus no pixel included in our analysis is affected by this condition. We then generated the opacity map from the N(H$_2$) map. The results of this procedure are shown in Fig.~\ref{fig:OPACITY} ($b$), together with the C$^{18}$O\:integrated intensity map - Fig.~\ref{fig:OPACITY} ($a$). We note that the estimated $\tau$ values range between $\sim 0.1$ and $\sim 1.9$ along the main body of G351.77-0.51.\\
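The Monte Carlo machinery described above can be sketched as follows. The per-clump column densities, optical depths and fractional errors in this snippet are hypothetical placeholders used only to illustrate the repeated-perturbation fit, not the measured values:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical per-clump values (illustration only, not the measurements):
# H2 column densities [cm^-2] and central C18O optical depths.
n_h2 = np.array([8e22, 1.2e23, 6e22, 1.5e23])
tau = np.array([0.6, 1.1, 0.4, 1.6])
tau_err = 0.15 * tau   # stand-in for rms + 10% calibration, in quadrature

n_draws = 10_000
slopes = np.empty(n_draws)
intercepts = np.empty(n_draws)
x = np.log10(n_h2)
for i in range(n_draws):
    # Perturb the taus with Gaussian noise, then refit the linear relation
    tau_i = np.clip(rng.normal(tau, tau_err), 1e-3, None)
    slopes[i], intercepts[i] = np.polyfit(x, np.log10(tau_i), 1)

# Mean relation over the 10^4 realisations, with its scatter
m, q = slopes.mean(), intercepts.mean()
m_err, q_err = slopes.std(), intercepts.std()
```

Each realisation yields one $log_{10}(\tau)-log_{10}[{\rm N(H_2)}]$ solution; collecting all of them gives the per-pixel distributions of N(C$^{18}$O) from which mean values and error bars are derived.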
\begin{figure}
\centering
\includegraphics[width=1\columnwidth]{diagram.png}
\caption{Schematic diagram showing how the correction $C_\tau$ was calculated starting from the {\rm C$^{18}$O} and {\rm C$^{17}$O} single pointing observation of clumps 2, 3, 5 and 7 in \citet{Leurini11}.}
\label{fig:scheme}
\end{figure}
\indent The final C$^{18}$O\:column density map was derived by including the opacity correction using the final best-fit $log_{10}(\tau)-log_{10}[{\rm N(H_2)}]$ relation - i.e. log$_{10}$($\tau_{\rm C^{18}O}$) $=$ 2.5$\,$log$_{10}$(N(H$_2$))$-$57.4. The opacity-corrected column density map of C$^{18}$O\:is shown in Fig.~\ref{fig:OPACITY} (c).
Our correction increased the C$^{18}$O\:column densities by up to a factor of 2.3, while they remain almost constant in the branches. Over the whole structure, the column densities of C$^{18}$O\:range between 1 and 6$\times$10$^{16}$~cm$^{-2}$.\\
To summarize, the C$^{18}$O\:column density map was generated under the assumption of LTE following eq.~(\ref{eq:nc18o}). Possible saturation in the continuum data and opacity effects have been considered. The column density map was then used to evaluate the depletion map, as discussed in the next section.
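For reference, the opacity correction implied by the best-fit relation quoted above, with the $\tau=2$ cap, can be sketched as:

```python
import numpy as np

def c_tau_from_nh2(n_h2, tau_max=2.0):
    """Opacity correction factor C_tau = tau / (1 - exp(-tau)), with tau from
    the best-fit relation log10(tau) = 2.5 log10(N(H2)) - 57.4, capped at 2."""
    tau = 10.0 ** (2.5 * np.log10(n_h2) - 57.4)
    tau = np.minimum(tau, tau_max)
    return tau / (1.0 - np.exp(-tau))
```

At the cap, $C_\tau = 2/(1-e^{-2}) \approx 2.3$, consistent with the maximum correction factor quoted above.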
\begin{figure*}
\centering
\includegraphics[width=1.8\columnwidth]{fD.png}
\caption{Depletion factor ($f_D$) map obtained by taking the ratio between the expected and observed N(C$^{18}$O). We assumed a canonical abundance of 2.1$\times$10$^{-7}$ of N(C$^{18}$O) relative to N(H$_2$). The cyan contours are the same as in Fig.~\ref{fig:OPACITY}, while the magenta ones are defined at $f_D = $ 1.5, 2, 3, 4 and 5. Red stars in clump-5 and -7 indicate locations of the clumps reported in \citet{Leurini19}, while the one in clump-3 is set on the coordinates of the candidate HII region reported in \citet{Anderson14}.}
\label{fig:depletion_map}
\end{figure*}
\section{DISCUSSION}\label{discussion}
\subsection{The large-scale depletion map in G351.77-0.51}\label{observeddepletion}
\begin{figure}
\centering
\includegraphics[width=1.15\columnwidth]{sp_T-H2-fD.png}
\caption{Pixel-by-pixel scatter plot of the whole structure detected in Fig. \ref{fig:depletion_map}. Circles represent the pixels of the main body, while the squares indicate those of the branches. The red-dashed line represents the 10.8 K lower limit imposed on the $T_{\rm ex}^{\rm C^{18}O}$ (as discussed in Sect.~\ref{temperature_maps}) that corresponds to $T_{\rm dust}\sim 13.4$ K.}
\label{fig:scatter}
\end{figure}
The final CO-depletion factor map - Fig.~\ref{fig:depletion_map} - was generated by taking the ratio between the expected and the observed C$^{18}$O\:column densities, using an abundance of C$^{18}$O\:with respect to H$_2$\:(i.e. $\chi^E_{\rm C^{18}O}$) equal to $2.1\times 10^{-7}$ (see \citealt{Giannetti17_oct}).\\
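The map construction reduces to a per-pixel ratio following eq.~(\ref{eq:depletionfactor}); a minimal sketch using the abundance quoted above:

```python
import numpy as np

CHI_E = 2.1e-7  # adopted C18O abundance relative to H2 (Giannetti et al. 2017)

def depletion_factor(n_h2, n_c18o):
    """f_D = expected / observed C18O column density (eq. 1),
    with the expected column given by CHI_E * N(H2)."""
    return CHI_E * np.asarray(n_h2) / np.asarray(n_c18o)
```

Applied to the two column density maps (on a common grid, with the saturated hot-core pixels masked), this yields the $f_D$ map of Fig.~\ref{fig:depletion_map}.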
\indent As visible in Fig. \ref{fig:depletion_map}, in almost half of the map the depletion factor is $<$1.5, highlighting the absence of strong CO depletion around the core and along the branches in the southern directions with respect to the main ridge. This effect can have two different causes for the region of the central clump and for the branches, respectively: in the first case (i.e. the surroundings of clump-1 in Fig. \ref{fig:leurini}), the absence of depletion can be linked to the intense star formation activity, demonstrated in previous papers (e.g. \citealt{Leurini11}, \citealt{Konig17}, \citealt{Giannetti17_june} and \citealt{Leurini19}). The increase in temperature induced by the forming stars is able to completely desorb the ice mantles around dust grains in which the CO molecules are locked. This effect lowers the observed depletion factor, until it reaches unity. Following eq.~(2) in \cite{Schuller09}, we computed N(H$_2$) in the hot-core (saturated) region from the ATLASGAL peak flux at 870 $\mu$m, using $\kappa_{870 \mu{\rm m}} =$ 0.76 $\mathrm{cm^2\, g^{-1}}$ consistent with the dust opacity law discussed in Sect.~\ref{temperature_maps}. To obtain a depletion of 1 here, consistent with the surroundings, the dust temperature should be 80 K.\\
\indent To evaluate the effects of N(H$_2$) and T$_{dust}$ on $f_D$, Fig.~\ref{fig:scatter} was obtained by the pixel-by-pixel combination of Figures~\ref{fig:Tmaps} (b), \ref{fig:Tmaps} (a) and \ref{fig:depletion_map}. Each point represents a pixel shared between the three maps, where dots and squares are used to distinguish the pixels of the main body from those of the branches, respectively.\\
\indent Instead, in the branches (square markers in Fig. \ref{fig:scatter}) we notice that depletion reaches values $\sim$3.5. This result suggests that even in these structures the depletion process can start to occur. On the other hand, where $f_D$ is close to 1, the lower density disfavours a high degree of depletion (e.g. see \citealt{Caselli99}).
Furthermore, we should consider that the observed depletion factor is averaged along the line-of-sight and in the beam.\\
\indent Along the main ridge and in the surroundings of clump-3 (Fig.~\ref{fig:leurini}) the depletion factor ranges between 1.5 and 6, reaching its maximum in clump-3. Both regions appear in absorption at 8 $\mu$m in the \cite{Leurini11} maps, showing H$_2$~column densities of a few 10$^{22}$ cm$^{-2}$. In particular, it may appear counter-intuitive that we observe the highest depletion factor of the whole structure in clump-3, as an HII region has been identified in Wide-field Infrared Survey Explorer (WISE) data at a distance of only $\approx 8''$ from the center of clump-3 (\citealt{Anderson14}). For this reason, clump-3 should show depletion conditions similar to those in clump-1. However, such a high depletion factor suggests dense and cold gas close to the HII region contained in the clump. Within the cloud, the degree of depletion could be maintained if self-shielding attenuated the effects of the radiation field of the HII region.
These ideas are supported by the analysis of the 8 and 24 $\mu$m maps shown in \cite{Leurini19}. At the location of clump-3, a region slightly offset from the bright spot associated with the HII region is clearly visible in absorption at both wavelengths.\\
\indent In clumps 5 and 7 (i.e. regions C5 and C7 in Fig.~\ref{fig:leurini}), along the main ridge of G351.77-0.51, the average depletion factors are f$_{D, C7}=3.4^{+0.4}_{-0.5}$ (in clump-7; peaking at 4.5) and f$_{D, C5}=3.1^{+0.5}_{-0.6}$ (in clump-5; peaking at 4.3), respectively (see Sect.~\ref{Rfd_estime} for more details about error estimation). Compared to the average values of the samples mentioned in Sect.~\ref{sec_intro}, our values are slightly lower.
However, we should consider that the observed depletion factors are affected by many factors such as the opacity correction applied and the C$^{18}$O/H$_2$~abundance ratio, which can vary up to a factor of 2.5 (e.g. \citealt{Ripple13}).\\
\indent Along the main ridge the depletion factors reach values of $\sim$ 6.
This is comparable to what is found in \cite{Hernandez11}, where the authors studied depletion factor variations along the filamentary structure of IRDC G035.30-00.33 with IRAM 30-m telescope observations.
Along the filament, they estimate a depletion factor of up to 5. Of course, the considerations made earlier also hold in this case, but if we take into account the different opacity corrections applied and the C$^{18}$O/H$_2$~relative abundances assumed, we note that the differences between our results and those of \cite{Hernandez11} are not larger than $\Delta f_D \sim$ 1.
To summarise, the final depletion map of G351.77-0.51 - Fig.~\ref{fig:depletion_map} - shows widespread CO-depletion in the main body, as well as at various locations in the branches. Comparing our results with those of \cite{Hernandez11} in G035.30-00.33, we note that in both cases the phenomenon of depletion affects not only the densest regions of the clumps, but also the filamentary structures that surround them. This result suggests that CO-depletion in high-mass star forming regions affects both small and large scales.
\subsection{Depletion modeling}\label{model}
High densities and low gas temperatures favour CO-depletion. Given this, it is reasonable to expect that the C$^{18}$O/H$_2$~abundance is not constant within a cloud: it varies as a function of location, following the volume density and gas temperature profiles.\\
\indent The regions in which the depletion degree is higher are those where the dust surface chemistry becomes more efficient, due to the high concentration of frozen chemical species on the dust surfaces. To understand how the efficiency of the various types of chemical reactions changes, we need to understand how the depletion degree varies within dark molecular clouds.\\
\indent In order to reproduce the average depletion factor observed in G351.77-0.51 we built a simple 1D model describing the distributions of C$^{18}$O\:and H$_2$.
We focus our attention on the main ridge, identifying three distinct regions: clumps 5 and 7, and the filamentary region between them (i.e. regions C5, C7 and F1 in Fig.~\ref{fig:leurini}, respectively).\\
\indent The model assumes that both profiles have radial symmetry with respect to the center of the ridge, i.e. the ``spine''. $R_{flat}$ is the distance from the spine within which the density profiles remain roughly flat (i.e. $n({\rm H}_2, R_{flat})=0.5\,n({\rm H}_{2,spine})$ if $p=2$). We normalize the volume density profiles of the model with respect to R$_{flat}$, while $n({\rm H}_{2,spine})$ is the central volume density of H$_2$.\\
\indent Our H$_2$~volume density profile is described by:
\begin{equation}\label{eq:nH2model}
n({\rm H}_2) = n( {\rm H}_{2,spine}) \left[1+\left(\frac{R}{R_{flat}} \right)^{\alpha} \right]^{-p/2},
\end{equation}
\noindent
up to a maximum distance R$_{max}\sim 0.2$ pc, two times larger than the filament width estimated in \cite{Leurini19}, which encompasses the entire filamentary structure.\\
\indent For the clumps (i.e. regions C5 and C7), we assume $p=2$ and eq.~(\ref{eq:nH2model}) takes the functional form described by \cite{Tafalla02}. In this case, the free parameters of the model are $\alpha$, $n({\rm H}_{2,spine})$ and R$_{flat}$.
Starting from R$_{flat}$, both the distributions of C$^{18}$O~and H$_2$~scale as a power-law with the same index: $\alpha_{C5}=1.9$ and $\alpha_{C7}=1.8$, for clump-5 and clump-7, respectively. The solutions for $\alpha$ were found by exploring the parameter space defined by the results of \cite{Beuther02}, i.e. 1.1$<\alpha<$2.1. The best-fitting models return values of $n({\rm H}_{2,spine})$ and R$_{flat}$ equal to 8$\times 10^6$ cm$^{-3}$ and 5.5$\times$10$^{-3}$ pc in the case of clump-5, and 5.8$\times 10^7$ cm$^{-3}$ and 2$\times$10$^{-3}$ pc for clump-7.\\
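As an illustration, the radial profile of eq.~(\ref{eq:nH2model}) can be sketched in a few lines of Python; the parameter values below are the best-fitting ones quoted above for clump-5, and the function is a direct transcription of the equation:

```python
def n_H2(R, n_spine, R_flat, alpha, p):
    """Radial H2 volume density profile (cm^-3) of the model above.

    R and R_flat share the same units (pc here); n_spine is the
    central density n(H2,spine) in cm^-3.
    """
    return n_spine * (1.0 + (R / R_flat) ** alpha) ** (-p / 2.0)

# Best-fitting values quoted in the text for clump-5 (p = 2 for clumps)
n_spine, R_flat, alpha = 8e6, 5.5e-3, 1.9

# At R = R_flat the profile drops to 2^(-p/2) of the central value,
# i.e. exactly one half for p = 2, independently of alpha
print(n_H2(R_flat, n_spine, R_flat, alpha, p=2) / n_spine)  # 0.5
```

This also makes explicit that the flattening condition $n({\rm H}_2, R_{flat})=0.5\,n({\rm H}_{2,spine})$ holds for $p=2$ regardless of the value of $\alpha$.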
\indent In the case of the main body (i.e. region F1), $\alpha = 2$ and the volume density profile of H$_2$\:is described by a Plummer-like profile (see \citealt{Plummer11} for a detailed discussion). For this model, the free parameters are $p$ and $n({\rm H}_{2,spine})$, while R$_{flat}$ corresponds to the thermal Jeans length, $\lambda_{J}$, for an isothermal filament in hydrostatic equilibrium, calculated at R$=0$:
\begin{equation}\label{eq:lambdaJ}
\lambda_{J}=\frac{c^2_s}{G~\mu~m_H~{\rm N}({\rm H}_2)_{R=0}},
\end{equation}
\noindent
where $c_s$ is the isothermal sound speed for the mean temperature in region F1 (equal to 13.8 K), $G$ is the gravitational constant in units of [cm$^3$ g$^{-1}$ s$^{-2}$], $\mu$ is the mean molecular weight of interstellar gas ($\mu=2.3$), $m_H$ is the hydrogen mass in grams and N$({\rm H}_2)_{R=0}$ is the averaged H$_2$~column density calculated at R$=0$. Applying eq.~(\ref{eq:lambdaJ}) to region F1, we obtained R$_{flat} \sim 0.008$ pc, while the $p$ index was found to be equal to $1.9$ after exploring the range of values reported in \cite{Arzoumanian11}. We note that an R$_{flat}$ variation of $\sim 30\%$ does not significantly change the N(H$_2$) profile.\\
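For reference, eq.~(\ref{eq:lambdaJ}) can be evaluated numerically as below, with $c_s^2 = k_B T/(\mu m_H)$ for an isothermal gas. The central column density N$({\rm H}_2)_{R=0}$ is not quoted in the text; the value used here ($8\times 10^{22}$ cm$^{-2}$) is purely illustrative, chosen so that the result lands near the quoted R$_{flat}\sim 0.008$ pc:

```python
# Physical constants in cgs units
k_B = 1.380649e-16   # Boltzmann constant, erg / K
G   = 6.674e-8       # gravitational constant, cm^3 g^-1 s^-2
m_H = 1.6726e-24     # hydrogen mass, g
pc  = 3.0857e18      # parsec, cm

def jeans_length_pc(T, N_H2, mu=2.3):
    """lambda_J = c_s^2 / (G mu m_H N(H2)), returned in pc.

    T: gas temperature in K; N_H2: central H2 column density in cm^-2.
    """
    cs2 = k_B * T / (mu * m_H)           # isothermal sound speed squared
    return cs2 / (G * mu * m_H * N_H2) / pc

# T = 13.8 K is the mean temperature quoted for region F1;
# N_H2 = 8e22 cm^-2 is an ASSUMED illustrative value
print(jeans_length_pc(13.8, 8e22))  # ~0.008 pc
```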
\indent To simulate depletion effects, we introduced the depletion radius, $R_{dep}$, that is the distance from the spine at which the degree of depletion drastically changes, following the equations:
\begin{equation}\label{eq:stepfunction}
\left.
\begin{aligned}
& R \leqslant R_{dep} \qquad f_D = (10, \infty); \qquad \chi_{\rm C^{18}O} = \frac{\chi^E_{\rm C^{18}O}}{f_{D}}\\
& R>R_{dep} \qquad f_D = 1; \qquad \chi_{\rm C^{18}O} = \chi^E_{\rm C^{18}O}
\end{aligned}
\right\},
\end{equation}
\noindent
where the canonical abundance for C$^{18}$O\:($\chi^E_{\rm C^{18}O}= 2.1 \times 10^{-7}$) was set by using the findings of \cite{Giannetti17_oct} and references therein. The lower limit of $f_D=10$ for $R<R_{dep}$ is motivated by the observational constraints for high-mass clumps, which in most cases show depletion values between 1 and 10 (e.g. \citealt{Thomas08, Fontani12, Giannetti14}, as discussed in Sect.~\ref{sec_intro}). Instead, the choice of extending the model until reaching the theoretical condition of full depletion (i.e., $f_D=\infty$) is made to study the effect that such a drastic variation of $f_D$ has on the size of $R_{dep}$.\\
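A minimal sketch of the step-function abundance of eq.~(\ref{eq:stepfunction}), with the canonical abundance and the two limiting depletion factors used in the text:

```python
def chi_c18o(R, R_dep, f_D=10.0, chi_E=2.1e-7):
    """Step-function C18O/H2 abundance profile.

    Inside R_dep the canonical abundance chi_E is reduced by the
    depletion factor f_D (f_D = float('inf') gives total depletion);
    outside R_dep, f_D = 1 and the canonical abundance is recovered.
    """
    return chi_E / f_D if R <= R_dep else chi_E

print(chi_c18o(0.05, R_dep=0.10))                    # one tenth of chi_E
print(chi_c18o(0.05, R_dep=0.10, f_D=float("inf")))  # 0.0 (full depletion)
print(chi_c18o(0.20, R_dep=0.10))                    # canonical abundance
```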
\indent In Fig.~\ref{fig:model} we present a sketch of the model: the profile for $n$(H$_2$) is indicated in blue, that for $\chi_{\rm C^{18}O}$ is shown in green (bottom panel), while profiles for $n$(C$^{18}$O) appear in orange and red (two values of $f_D$ are considered). Radial profiles are then convolved with the {\it Herschel} beam at $\lambda= 500~\mu$m (i.e. FWHM$= 36\arcsec$). For each profile we calculate the set of values that best represent the observed data in each considered region (i.e. C5, C7 and F1 of Fig.~\ref{fig:leurini}): we generate a set of profiles of $n$(H$_2$) by exploring the parameter space defined by the free parameters of each modeled region, and we select the curve that maximizes the total probability of the fit (i.e. P$_{\rm TOT}={\prod_{i} p(r_i)}$, where $p(r_i)$ are the probability density values corresponding to the error bars in Fig.~\ref{fig:clumpsProfiles}, and calculated at the various distances - $r_i$ - from the spine). The same was done for the profile of $n$(C$^{18}$O), selecting the best profile from a set of synthetic profiles generated by assuming different sizes of $R_{dep}$.\\
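The profile-selection step described above can be sketched as follows, assuming Gaussian probability densities with widths set by the error bars; the convolution with the {\it Herschel} beam, applied in the actual fit, is omitted here for brevity, and the function names are our own:

```python
import numpy as np

def best_profile(candidates, r_obs, y_obs, y_err):
    """Select the candidate profile maximizing P_TOT = prod_i p(r_i).

    candidates: list of callables y(r) (e.g. profiles generated over a
    grid of n_spine, R_flat, alpha values); r_obs, y_obs, y_err: data.
    The log of P_TOT is maximized for numerical stability.
    """
    y_obs, y_err = np.asarray(y_obs), np.asarray(y_err)

    def log_ptot(model):
        resid = (np.array([model(r) for r in r_obs]) - y_obs) / y_err
        # log of a product of Gaussian probability densities
        return float(np.sum(-0.5 * resid**2 - np.log(y_err * np.sqrt(2 * np.pi))))

    return max(candidates, key=log_ptot)
```

Maximizing the product of per-point probability densities with Gaussian errors is equivalent to minimizing the usual $\chi^2$, so this selection is a standard least-squares grid search.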
In the next section, we present the results of the model applied to the clumps and the filament region, providing an estimate for both $R_{dep}$ and $n$(H$_2$) calculated at $R_{dep}$.
\begin{figure}
\centering
\includegraphics[width=1\columnwidth]{modello_split.png}
\caption{Schematic representation of the model applied to simulate the depletion effect inside/outside the depletion radius. In blue we show the number density profile of H$_2$; in green the step-function reproducing the variation of the degree of depletion inside/outside $R_{dep}$. The C$^{18}$O profiles are obtained by the product of the other two.}
\label{fig:model}
\end{figure}
\begin{figure*}\label{profiles_clumps}
\centering
{\includegraphics[width=.322\textwidth]{5_nH2_profilesZOOMED.png}}
{\includegraphics[width=.322\textwidth]{5_nCO_profiles.png}}
{\includegraphics[width=.322\textwidth]{5_depletion_corrected.png}}\\
{\includegraphics[width=.322\textwidth]{7_nH2_profilesZOOMED.png}}
{\includegraphics[width=.322\textwidth]{7_nCO_profiles.png}}
{\includegraphics[width=.322\textwidth]{7_depletion_corrected.png}}\\
{\includegraphics[width=.322\textwidth]{f_nH2_profilesZOOMED.png}}
{\includegraphics[width=.322\textwidth]{f_nCO_profiles.png}}
{\includegraphics[width=.322\textwidth]{f_depletion_corrected.png}}
\caption{From top: column density profiles of H$_2$\:and C$^{18}$O\:in panels (a) and (b), respectively, while panel (c) shows the los- and beam-averaged depletion factor profile from region C5 in Fig.~\ref{fig:leurini}. Panels (d), (e) and (f) are the same as (a), (b) and (c), respectively, for region C7, while panels (g), (h) and (i) refer to region F1. Blue profiles in panels (a), (d) and (g) are the synthetic column density profiles obtained by the integration of the number density distributions; solid-blue lines in panels (b), (e) and (h) are the same for C$^{18}$O\:assuming $\chi_{\rm C^{18}O} = 2.1 \times 10^{-8}$ for R$<R_{dep}$, and the estimated position of $R_{dep}$ corresponds to the cusp. Orange profiles in panels (a), (b), (d), (e), (g) and (h) are the result of convolving the blue profiles with the {\it Herschel} beam at 500$\mu$m, while black-dotted lines in panels (b), (e) and (h) indicate the convolved undepleted profile of N(C$^{18}$O) assuming a constant $\chi_{\rm C^{18}O} = 2.1 \times 10^{-7}$. Green and magenta profiles in panels (c), (f) and (i) show the depletion factor profiles obtained by assuming different abundances for R$<R_{dep}$ (see eq.~\ref{eq:stepfunction}). Red points are the observed values for each quantity plotted. The yellow-shaded area shows the dimensions of the {\it Herschel} HWHM at $\lambda= 500~\mu$m (i.e. FWHM$= 36\arcsec$; 0.17 pc at the source distance). All profiles are plotted as a function of the radial distance from the spine.}
\label{fig:clumpsProfiles}
\end{figure*}
\begin{figure}
\centering
\includegraphics[width=1\columnwidth]{fD_Rdep.png}
\caption{$f_D$ variation as a function of $R_{dep}$ for the best-fitting model applied to clump-5. Full lines represent the $f_D$-$R_{dep}$ relation assuming $f_D=$ 10 for R$<R_{dep}$, while dash-dotted lines are those for $f_D=\infty$. The blue profile in Fig.~\ref{fig:clumpsProfiles} (a) was convolved at different angular resolutions in order to predict how the same relation changes under different observational conditions. Colours correspond to the different FWHMs used for the convolution.}
\label{fig:fDRdep_relation}
\end{figure}
\subsubsection{Estimate of $R_{dep}$}\label{Rfd_estime}
\noindent
In Fig.~\ref{fig:clumpsProfiles} ($a$) and ($b$) we show the best-fitting mean-radial column density profiles of N(H$_2$) and N(C$^{18}$O) of clump-5, respectively. In blue, we show the column density profiles before the convolution, while in orange the convolved profiles. The black-dashed lines in the central panels are the convolved undepleted profiles of N(C$^{18}$O), assuming a constant $\chi_{\rm C^{18}O} = 2.1 \times 10^{-7}$ for all R. The observed data are plotted as red points. In Fig.~\ref{fig:clumpsProfiles} ($c$) we report (in green) the depletion factor as a function of the distance from the projected centre of clump-5, assuming an abundance of C$^{18}$O\:for R$<R_{dep}$ equal to 2.1$\times 10^{-8}$, while in magenta the same profile for $\chi_{\rm C^{18}O} = 0$. Figures~\ref{fig:clumpsProfiles} ($d, e, f$) are the same for clump-7. In both cases, we estimate $R_{dep}$ by varying the abundance $\chi_{\rm C^{18}O}$ for $R<R_{dep}$ considering the two limiting cases, i.e. $f_D =$ 10 and $f_D = \infty$.\\
\indent All error bars have been defined taking the range between the 5th and 95th percentiles. In the case of N(H$_2$), the uncertainties are generated by computing the pixel-to-pixel standard deviation of the H$_2$~column density along the direction of the spine, and assuming a Gaussian distribution of this quantity around the mean value of N(H$_2$). The errors associated with N(C$^{18}$O) are computed by propagating the uncertainty on the line integrated intensity and on the optical depth correction (as discussed in Sect.~\ref{opacity}), using a Monte Carlo approach. Finally, the uncertainties on $f_D$ are estimated by computing the abundance, randomly sampling the probability distributions of N(H$_2$) and N(C$^{18}$O).\\
\indent In clump-5, the estimated $R_{dep}$ ranges between a minimum of $\sim$ 0.07 pc and a maximum of $\sim$ 0.10 pc. In clump-7 instead, which shows a larger average depletion factor compared to clump-5, $R_{dep}$ ranges between $\sim$ 0.07 and 0.12 pc (for $f_D=\infty$ and $f_D=$10, respectively). Comparing Fig.~\ref{fig:clumpsProfiles} (c) and (f), both synthetic profiles reproduce the observed ones well up to a distance of $\sim$0.15 pc.
In the case of the total depletion regime (i.e. $f_D=\infty$), at $R_{dep}$ we estimate volume densities $n$(H$_2$)$ = 5.7 \times 10^4$ cm$^{-3}$ and $5.3 \times 10^4$ cm$^{-3}$, for clump-5 and 7, respectively. Our results are comparable with those found in low-mass prestellar cores (i.e. \citealt{Caselli02, Ford-Shirley11}) and in high-mass clumps in early stages of evolution (i.e. \citealt{Giannetti14}). Furthermore, it is important to note that both values should be considered as upper limits due to the approximation in eq.~(\ref{eq:stepfunction}) for all the models discussed. For the case in which $f_D = 10$, in clump-5 at a distance equal to $R_{dep} = 0.10$ pc, the model provides a volume density of $n$(H$_2$)$ = 3.1 \times 10^4$ cm$^{-3}$. Similarly, in the case of clump-7, $R_{dep} = 0.12$ pc is reached at a density of $n$(H$_2$)$ = 2.2 \times 10^4$ cm$^{-3}$.\\
\indent In Fig.~\ref{fig:clumpsProfiles} ($g$) and ($h$) we show the mean-radial column density profiles of N(H$_2$) and N(C$^{18}$O), respectively, in the region between the two clumps (i.e. F1 region in Fig.~\ref{fig:leurini}), while the depletion profile is shown in the panel ($i$). The results are in agreement with those estimated for clumps 5 and 7, showing a 0.08 pc $<R_{dep}<$ 0.15 pc (comparable with the typical filament width $\sim$ 0.1 pc; e.g. \citealt{Arzoumanian11}), which corresponds to densities of $4.2 \times 10^4$ cm$^{-3}$ $>$ $n$(H$_2$) $>$ 2.3 $\times 10^4$ cm$^{-3}$ for $f_D=\infty$ and 10 respectively. The summary of the $R_{dep}$ estimates is shown in Tab.~\ref{tab:Rdep} (Sect.\ref{caveats}).\\
\indent Once the model presented in Sect.~\ref{model} has been calibrated with the data, it allows one to estimate how the size of $R_{dep}$ can vary according to the obtained $f_D$. An example of these predictions is shown in Fig.~\ref{fig:fDRdep_relation}. Starting from the best-fitting synthetic profile obtained for clump-5 (i.e. the blue profile in Fig.~\ref{fig:clumpsProfiles}a), we generated a family of N(C$^{18}$O) profiles, convolved with different beams (i.e. FWHM between 10 and $40''$). The value of $f_D$ reported in Fig.~\ref{fig:fDRdep_relation} is calculated at $R=$ 0 according to eq.~(\ref{eq:depletionfactor}), by varying the dimensions of $R_{dep}$ from 0 up to the maximum size of the model radius for both $f_D$ = (10, $\infty$) at $R<R_{dep}$ (i.e. full and dash-dotted lines, respectively). We stress that this type of prediction is strongly dependent on the model assumptions (i.e. the assumed C$^{18}$O/H$_2$~abundance) and valid only for sources for which clump-5 is representative.\\
Nevertheless, a variation in $R_{dep}$ is notable, and it changes with the value of $f_D$ assumed for $R<R_{dep}$: at FWHM $=$ 10'', for example, a factor of 1.5 in $f_D$ (from 4 to 6) corresponds to a factor of 1.6 in the $R_{dep}$ size if we assume $f_D=\infty$ for $R<R_{dep}$, while for $f_D=$ 10 it corresponds to a factor of 2.2. These variations are slightly reduced as the FWHM increases: a factor of 1.4 and 1.6 for $f_D$ = ($\infty$, 10) at $R<R_{dep}$, respectively, for FWHM $=$ 40''.
\subsubsection{Comparison with 3D models}\label{Rfd_in_C1C2}
As a first step, we compared the estimated values of $R_{dep}$ with the results obtained by \cite{Koertgen17}. These authors imposed a condition of total depletion on the whole collapsing region of their 3D MHD simulations, to model the deuteration process in prestellar cores. The initial core radius range is 0.08 $<R_c<$ 0.17 pc, depending on the simulation setup. This assumption is necessary to reduce the computational costs of the 3D-simulations but, as discussed in Sect.~\ref{sec_intro}, the outcome can strongly deviate from reality when we move to larger scales.\\
\indent For G351.77-0.51, the largest estimated depletion radius is $\lesssim 0.15$ pc, comparable with the initial core size assumed by \cite{Koertgen17}.\\
\indent We note also that, imposing the condition of total depletion on a scale of 0.17 pc, \cite{Koertgen17} find a deuterium fraction of H$_2$D$^+$ reaching values of 10$^{-3}$ (100 times the canonical values reported in \citealt{Oliveira03}) on scales comparable with our size of $R_{dep}$, after $\sim$ 4$\times$10$^4$ yr (see their Fig. 3). This result is in agreement with the ages suggested by the depletion timescale estimated in our models following \cite{Caselli99}:
\begin{equation}\label{depletion_time}
\tau_{dep} \sim \frac{10^9}{S~n({\rm H}_2)}\,\,\, \mathrm{yr},
\end{equation}
\noindent
assuming the estimated volume densities at $R_{dep}$ and a sticking coefficient S$=$1.\\
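Eq.~(\ref{depletion_time}) is straightforward to evaluate; with the volume densities at $R_{dep}$ estimated above for the $f_D=\infty$ case, it reproduces timescales of order $10^4$ yr:

```python
def tau_dep_yr(n_H2, S=1.0):
    """Depletion timescale tau_dep ~ 1e9 / (S n(H2)), in yr.

    n_H2 is the H2 volume density in cm^-3 and S the sticking
    coefficient (S = 1 assumed in the text).
    """
    return 1e9 / (S * n_H2)

# Densities at R_dep estimated for the f_D = infinity case
print(round(tau_dep_yr(5.7e4)))  # clump-5: ~1.8e4 yr
print(round(tau_dep_yr(5.3e4)))  # clump-7: ~1.9e4 yr
```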
\indent A similar picture is reported by \cite{Koertgen18}, who show that the observed high deuterium fraction in prestellar cores can be readily reproduced in simulations of turbulent magnetized filaments. After $\sim$10$^4$ yr, the central region of the filament shows a deuterium fraction 100 times higher than the canonical value, and has a radius of $\sim$0.1~pc (see their Fig.~1). The dimensions of the regions showing a high deuterium fraction can be compared with our estimates of $R_{dep}$ since the two processes are connected, as shown in Sect.~\ref{sec_intro}.
The comparison of our findings with both small- and large-scale simulations suggests that the total-depletion assumption could provide reliable results within reasonable computational times.
\section{The influence of a different abundance profile on $R_{dep}$}\label{caveats}
\begin{figure}
\centering
\includegraphics[width=1\columnwidth]{Xi_prof_doble2.png}
\caption{Sketch of the abundance profiles assumed to describe the $\chi_{\rm C^{18}O}$ variation in our tests. The orange-dashed profile shows the step function described in Sect.~\ref{model}. In this plot, only the position of the step-function discontinuity has a physical meaning, equal to the estimated size of $R_{dep}$ in the C5 model with $\chi_{\rm C^{18}O}=0$ for R$<R_{dep}$; the green-dotted profile describes a more gradual variation of $\chi_{\rm C^{18}O}$ for R$<R_{drop}$, as a linear profile between the canonical C$^{18}$O/H$_2$~abundance and the value of $\chi_{\rm C^{18}O}$ at the center of the model; the blue profile reproduces the trend $\chi_{\rm C^{18}O} \propto \exp(-t_{age}/\tau_{dep})$, assuming only CO adsorption as the dominant process (see Sect.~\ref{caveats}).}
\label{fig:XIprof}
\end{figure}
It is important to stress that the model discussed in this paper presents a number of limitations:
\begin{itemize}
\item A first caveat is represented by how we obtain the final values from observations (i.e. red dots in Fig.~\ref{fig:clumpsProfiles}), with which our models were then optimized. The main assumption we made was the canonical C$^{18}$O/H$_2$\:abundance, assumed equal to $2.1\times 10^{-7}$, which can actually vary by up to a factor of 2.5 (Sect.~\ref{observeddepletion}). While aware of these limitations, we note that the results shown in the final $f_D$ map of G351.77-0.51 (i.e. Fig.~\ref{fig:depletion_map}) are in good agreement with what was found by \cite{Hernandez11, Hernandez12} in the IRDC G035.30-00.33, the only other case in which a filamentary-scale depletion factor map has been shown to date;
\item A second limitation comes from the H$_2$~volume density profile we assumed in Sect.~\ref{model} to reproduce the data, which ignores all the sub-structure of clumps. Although this assumption is strong, it seems reasonable because the best fitted N(H$_2$) profiles shown in Fig.~\ref{fig:clumpsProfiles} - panels (a), (d) and (g) - are in agreement with the data points;
\item The third and strongest assumption we made is the shape of the $\chi_{\rm C^{18}O}$ profile to describe the C$^{18}$O~variation with respect to H$_2$~within the main ridge of G351.77-0.51. In Fig.~\ref{fig:XIprof} the step-function profile assumed in Sect.~\ref{model} is shown as an orange-dashed line. All the results of Figures~\ref{fig:clumpsProfiles} and \ref{fig:fDRdep_relation}, and discussed in Sections~\ref{Rfd_estime} and \ref{Rfd_in_C1C2}, assume this abundance profile.\\
\indent We therefore examined how much the estimates of the $R_{dep}$ size, reported in the previous paragraphs, depend on the abundance profile assumed. For these tests, we studied the three $\chi_{\rm C^{18}O}$ profiles shown in Fig.~\ref{fig:XIprof}.
In addition to the already discussed step function, we tested two profiles describing a smoother variation of $\chi_{\rm C^{18}O}$ as a function of R. In the first test, the C$^{18}$O/H$_2$~abundance was assumed to be constant up to a certain distance, called $R_{drop}$, at which the abundance starts to decrease linearly (the green-dotted profile in Fig.~\ref{fig:XIprof}). In the second test, we considered the physical case in which $\chi_{\rm C^{18}O} \propto \exp(-t_{age}/\tau_{dep})$, where $t_{age}$ is the age of the source, as expected when integrating the evolution of $n($C$^{18}$O$)$ over time, $dn$(C$^{18}$O)$/dt$, considering only depletion (i.e. $dn$(C$^{18}$O)$/dt = -\kappa_{ads} n$(C$^{18}$O), and ignoring desorption). The latter case is shown as a blue profile in Fig.~\ref{fig:XIprof}, which reaches zero in the innermost regions of the model (i.e., in this example, for R $\lesssim 0.02$ pc), due to the high volume densities of H$_2$~(see eq.~\ref{depletion_time}).\\
\indent Since we no longer have a strong discontinuity in the new abundance profiles, we have to redefine the concept of $R_{dep}$. In the new tests, $R_{dep}$ corresponds to the distance at which $\chi_{\rm C^{18}O} = 2.1 \times 10^{-8}$ (i.e. the distance at which 90\% of the CO molecules are depleted); the new estimates are shown in Tab.~\ref{tab:Rdep}.\\
\indent We note that the exponential profile describes the physically more realistic case, whereas the other two are simpler approximations, easier to model. Therefore, we compare the results of the exponential case with those described by the other two.\\
\indent As visible in Tab.~\ref{tab:Rdep}, the new $R_{dep}$ estimated from the model with the exponential profile are within a factor of about 2-3 of those found from the semi-linear and the step-function models, respectively. These results suggest that, although the assumption of a step-function to describe the $\chi_{\rm C^{18}O}$ profile may seem too simplistic, it actually succeeds reasonably well in reproducing the results predicted by a physically more realistic profile. Furthermore, the step-function and semi-linear profiles represent two extreme (and opposite) cases of a drastic and a constant change in the $\chi_{\rm C^{18}O}$ profile, respectively, while the exponential profile can be interpreted as a physically more realistic situation. Because of this, even if the relative 3$\sigma$ level errors on $R_{dep}$, calculated from probability distributions, are always less than 15\% of the values reported in Tab.~\ref{tab:Rdep}, the uncertainty of a factor of 2-3 on the estimates of $R_{dep}$ from the exponential profile seems more realistic.
\begin{table}
\centering
\renewcommand\arraystretch{1.2}
\caption{Summary of the estimated $R_{dep}$ assuming the profiles in Fig.~\ref{fig:XIprof} and discussed in Sect.~\ref{caveats}. For all the estimates of $R_{dep}$ shown in the table, we estimate the relative 3$\sigma$ level error to be less than 15\%.}\label{tab:Rdep}
\begin{tabular}{c|p{1cm}|p{1cm}|p{1cm}}
\hline
\hline
{\bf Regions} & {\bf C5} & {\bf C7} & {\bf F1} \\
{\it Profiles} &{\bf [pc]} &{\bf [pc]} &{\bf [pc]} \\
\hline
{\it Step-function} & $0.10$ & $0.12$ & $0.15$ \\
($f_D = 10;$ R$<R_{dep}$) & & & \\
{\it Step-function} & $0.07$ & $0.07$ & $0.08$ \\
($f_D = \infty;$ R$<R_{dep}$) & & & \\
{\it Exponential} & $0.04$ & $0.04$ & $0.05$ \\
{\it Semi-linear} & $0.02$ & $0.02$ & $0.03$ \\
\hline
\hline
\end{tabular}
\end{table}
\indent Finally, it is worth discussing the predictions of the depletion timescales in the three models. In clump-5, for example, the $R_{dep}$ of the model with the semi-linear profile provides $n({\rm H}_2) = 5.5 \times 10^{5}$ cm$^{-3}$. With this result, we estimated $\tau_{dep} = 1.8 \times 10^{3}$ yr, smaller by a factor of about 10-20 than those obtained using the model with the step-function profile, for which $\tau_{dep} = 1.8 \times 10^{4}$ yr ($f_D=\infty$ for R$<R_{dep}$) or $3.2 \times 10^{4}$ yr ($f_D=10$), respectively. This large discrepancy can be explained once again by noting that these tests represent the two extreme cases with respect to the exponential profile and therefore define the lower and upper limits for the estimate of $\tau_{dep}$.\\
For the exponential model, the best-fit $t_{age}$ is $2.1 \times 10^{4}$ yr, while at $R_{dep}$, $n({\rm H}_2) = 1.7 \times 10^5$ cm$^{-3}$, which corresponds to $\tau_{dep} = 5.8\times 10^{3}$ yr. As mentioned above, for this model only the adsorption process of CO is included, neglecting desorption at T $\gtrsim 20$ K. Nevertheless, the estimated $t_{age}$ and the corresponding lower limit of $\tau_{dep}$ suggest that the contribution of the desorption process is negligible at these scales. Furthermore, both ages seem in good agreement with the values of $\tau_{dep}$ provided by the step-function profiles, suggesting again that this is a reasonable approximation to describe the profile of $\chi_{\rm C^{18}O}$.
\end{itemize}
\section{Conclusions}\label{conclusions}
\noindent
In order to add a new piece of information to the intriguing puzzle of the depletion phenomenon and its consequences on the chemical evolution in star-forming regions, in the first part of this work we presented and discussed the depletion factor derived for the whole observed structure of G351.77-0.51. In the second part, we reported the results of the estimated depletion radius, as obtained from a simple toy-model described in Sect.~\ref{model}, testing different hypotheses (Sect.~\ref{caveats}).\\
\indent G351.77-0.51, with its proximity and its filamentary structure, seems to be the perfect candidate for this type of study.
We use {\it Herschel} and LABOCA continuum data together with APEX $J=2-1$ line observations of C$^{18}$O\:to derive the $f_D$ map (Fig.~\ref{fig:depletion_map}).
Along the main body and close to clump-3 the observed depletion factor reaches values as large as $\sim$ 6, whereas in lower-density, higher-temperature structures $f_D$ is close to 1, as expected.\\
\indent We constructed a simple model that could reproduce the observed $f_D$ in two clumps along the main ridge and along the filament itself.\\
\indent As results of this work, we conclude the following:
\begin{enumerate}
\item In many regions of the spine and the branches of G351.77-0.51, the depletion factor reaches values larger than $\sim$2.5. This highlights that even in the less prominent structures, such as the branches, the depletion of CO can start to occur, altering the chemistry of the interstellar medium and making it more difficult to study the gas dynamics and to estimate the mass of cold molecular gas (using CO);
\item We find that CO-depletion in high-mass star forming regions affects not only the densest regions of the clumps, but also the filamentary structures that surround them;
\item The model assumed to estimate the size of the depletion radius suggests that it ranges between 0.15 and 0.08 pc, when changing the depletion degree from 10 to the full depletion state. These estimates agree with the full depletion conditions used in the 3D-simulations of \cite{Koertgen17} and \cite{Koertgen18}. This result highlights that such assumptions are not so far from the limits constrained by the observations. We also show that under certain assumptions, it is possible to estimate the size of the depleted region from $f_D$ (see Fig.~\ref{fig:fDRdep_relation});
\item We verified that by changing the shape of the profile assumed to describe the C$^{18}$O/H$_2$\:variations inside the filament and clumps, the estimates of the size of $R_{dep}$ do not change more than a factor of 3.
This difference was interpreted as the final uncertainty associated with the new estimates of $R_{dep}$ from the physically more realistic case of the exponential profile, where $R_{dep} \sim 0.04-0.05$ pc;
\item At $R=R_{dep}$ the model shows number densities of H$_2$~between 0.2 and 5.5$\times 10^{5}$ cm$^{-3}$. Following \cite{Caselli99}, we estimated the characteristic depletion timescales for clump-5, clump-7 and the filament region between them (i.e. $\tau_{dep}\sim 5$ and $0.2 \times 10^{4}$ yr, respectively). It is interesting to note that at similar ages, in \cite{Koertgen17, Koertgen18} the simulated deuterium degree also suggests a regime of high depletion on scales that are consistent with the $R_{dep}$ estimated by our model.
\end{enumerate}
\noindent
Observations at higher resolutions (even in the case of G351.77-0.51) are necessary to put stronger constraints on the models.
Nevertheless, the fact that our results are comparable with the modeling presented by \cite{Koertgen17, Koertgen18} suggests that even if on large scales it is necessary to include a detailed description of the depletion processes, on sub-parsec scales we can exploit the conditions of total depletion used in the simulations. This assumption does not seem to strongly alter the expected results, but has the enormous advantage of considerably decreasing the computational costs of the 3D-simulations. It would however be necessary in the future to explore the effect of properly modelling the depletion in 3D-simulations, to confirm the findings reported in this work.
\section*{Acknowledgments}
{\rm The authors wish to thank an anonymous referee for their comments and suggestions, which have helped clarify many aspects of this work, and P. Caselli for fruitful scientific discussions and feedback.\\
This paper is based on data acquired with the Atacama Pathfinder EXperiment (APEX). APEX is a collaboration between the Max Planck Institute for Radioastronomy, the European Southern Observatory, and the Onsala Space Observatory. This work was also partly supported by INAF through the grant Fondi mainstream ``{\it Heritage of the current revolution in star formation: the Star-forming filamentary Structures in our Galaxy''}. This research made use of Astropy, a community-developed core Python package for Astronomy (\citealt{Astropy13, Astropy18}; see also \url{http://www.astropy.org}), of NASA's Astrophysics Data System Bibliographic Services (ADS), of Matplotlib (\citealt{Matplotlib07}) and Pandas (\citealt{Pandas10}). GS acknowledges R. Pascale for feedback on Python improvements. SB acknowledges funds through BASAL Centro de Astrofisica y Tecnologias Afines (CATA) AFB-17002, Fondecyt Iniciaci\'on (project code 11170268), and PCI Redes Internacionales para Investigadores(as) en Etapa Inicial (project number REDI170093).}
\bibliographystyle{mnras}
\section{Introduction}
\label{sec:introduction}
Computer Graphics (CG) has evolved in recent years. This evolution helped in the development of more realistic CG characters in terms of their looks and behaviors, bordering on a high level of human likeness and making the audience feel more comfortable with them~\cite{katsyri2015review}.
According to the Uncanny Valley (UV) theory~\cite{mori2012uncanny},
the more realistic an artificial human is, the more likely it is to cause discomfort to the observer. According to~\cite{katsyri2015review}, this feeling of strangeness
is related to the perception of human likeness, being an identification issue, i.e., humans perceive realistic characteristics in the virtual humans. Following this line, one of these characteristics is gender. According to~\cite{scott2007gender, wallach2010gender}, the term gender is related to social and cultural constructions
about ``appropriate roles'' for women and men.
\cite{draude2011intermediaries} claimed that simulated human likeness (in terms of visuals and behaviors) can draw on human self-reliability, turning something abstract into something comfortable. With that, Draude made a provocation, which raised the hypothesis that most artificial beings
can be assigned a female gender to ensure that the audience feels comfortable. In this sense, in the work of~\cite{araujo2021analysis},
results showed that women can feel more comfortable with realistic female characters than realistic male characters, while men feel comfortable similarly about realistic female and male characters.
Furthermore, following the concept of charismatic leadership~\cite{weber1947legitimate, spencer1970weber, west1980charisma} and relating it to the charisma presented by characters created using CG from different media, the results of Araujo et al. showed that both women and men perceived realistic female characters as more charismatic than realistic male characters. In this case,
the characters were visibly male and female, but would these results be similar if the virtual human had no gender identification?
Still in relation to gender categorization,~\cite{draude2011intermediaries} made an analogy, stating that Computer Science has the potential to deconstruct the gender binary.
What about virtual humans? Is there this potential for gender deconstruction in a genderless virtual human?
According to~\cite{condry1976sex}, children often look for an answer about how they should behave by looking at the ways adults react to them.
Hypothetically, the chances of observing stereotypes in young children are lower.
The study with babies is relevant because they are accepted as naturally ``genderless'' in terms of their face and body models, so the bias can be measured with less interference.
There are several games in which the player can choose the design of a character\footnote{https://twinfinite.net/2018/08/games-nailed-character-creation-systems/}\footnote{https://starloopstudios.com/character-design-in-video-games-types-classes-and-characteristics/}.
In such games, users can create and customize their avatars; however, results showed that although some users tried to model a character with
no defined gender,
the created character was mostly perceived as male. The preference to categorize a non-gendered character as a male character can be a problem of gender bias~\cite{nass2000machines}.
In the movie industry, specifically when we talk about cartoons, female characters are usually more stereotyped~\cite{garza2019emotional}, both in terms of visuals\footnote{https://thedissolve.com/news/655-disney-animator-says-women-are-hard-to-draw-becaus/} and movement.
But are these exaggerations really necessary for the recognition of a female character?
To answer some of these questions,
this work replicates and expands the work of~\cite{condry1976sex}, in which the authors measured the perception of men and women about a video containing a real baby reacting to different stimuli. The authors separated the participants into two groups, one receiving the video of the baby with a male name, and the other receiving the video of the baby with a female name.
The results showed that women
perceived more emotions in the baby with a female name, while men
perceived more emotions in the baby with a male name. The research also showed
that people's perception is influenced by gender characteristics, even if that characteristic is just the gender categorization or the name.
We replicate Condry and Condry's experiment using a virtual baby, and also expand the experiment with a third group of people who receive the unnamed baby, in addition to the other two groups that receive the baby with female and male names.
Therefore, to help answer our questions, the following hypotheses were formulated and are discussed in this text.
\begin{itemize}
\item $H0_{1}$ defining that women and men perceive emotions similarly in female, male and unnamed animated virtual babies;
\item $H0_{2}$ defining that the unnamed baby is recognized as genderless; and
\item $H0_{3}$ defining that women and men perceive comfort and charisma similarly in the baby with a male name and the unnamed baby, and differently in relation to the baby with a female name (following the results of the work of~\cite{araujo2021analysis}).
\end{itemize}
The main contributions of this paper are the investigation of human bias in attributing gender to virtual humans, and its impact on the participants' perceived emotions, comfort, and charisma. We use virtual babies because they are easily perceived as genderless~\cite{condry1976sex} and, at the same time, there is literature on gender bias observed in real life that can be used for comparison.
\section{Related Work}
\label{sec:relatedWork}
According to~\cite{scott2007gender, wallach2010gender}, gender is a social and cultural construction
built on stereotypes. For example, a pink shirt is assigned to a girl, while a blue shirt is assigned to a boy. So, a woman who self-identifies
as female builds her feminine self on a social standard of femininity. In this sense, in the work of~\cite{will1976maternal}, the authors carried out a perceptual experiment with two groups of mothers, one group receiving a male baby with a male name and wearing an outfit considered male, and the other group receiving the same baby with a female name and dressed in clothing considered feminine. In one of the results, the group that received the baby with a female name offered it a doll more often than a train.
In two works by~\cite{zibrek2015exploring, zibrek2013evaluating}, the authors investigated the perceived gender of virtual humans in relation to different emotional feelings, and measured the effect on people's perception. The results indicated that participants classify the gender according to the emotion shown by the character.
In this case, it was more evident when the motion had a stereotyped gender movement than when the character walked based on real human motion. In the work of~\cite{bailey2017gender}, the authors assessed gender differences in the perception of avatars, and the results indicated that gender is important in the perception of emotions. Similarly, in the work of~\cite{nag2020gender}, the authors conducted a study to assess people's perception of male, female, and androgynous virtual humans. The authors evaluated stereotyped assumptions of gender traits and roles in virtual humans. Results showed that gender stereotypes in the virtual humans were not perceived, and the androgynous virtual humans were perceived as a middle ground between gender stereotypes. It is important to emphasize that this work used other information to present the gender, such as hair and neck size. On the other hand, the work by~\cite{garza2019emotional} presented a methodology to perform content analysis on the representation of characters in children's 3D animation movies. The authors noted that female characters and their emotional expressions are still developed to fit into patterns of social stereotypes, which are ``easier'' to introduce into 3D animated children's movies.
\cite{seyama2007uncanny} investigated UV by measuring human perceptions of facial images with a high level of realism. In addition, the authors evaluated whether there was a correlation between the gender classification of virtual humans and UV, but there were no significant results. In the work by~\cite{mcdonnell2012render}, the authors evaluated whether perception towards virtual humans can be affected by rendering styles of virtual humans. One of the results showed that people had more correct answers when they saw female virtual humans than male ones.
Human perception is important for the design of virtual humans~\cite{zell2019perception}. Charisma belongs to another universe of perceptual features; for example,~\cite{weber1947legitimate} identifies charisma as a type of leadership that one entity can exercise over another in terms of worship.
This worship is a kind of human trait that can be observed in artificial beings, as shown in the work of~\cite{macdorman2005mortality} and the work of~\cite{rosenthal2015individuals}. According to~\cite{west1980charisma}, it is a trait that can be more easily perceived in works of fiction. According to the work of~\cite{goethals2014kings} and the work of~\cite{awamleh1999perceptions}, the appearance and posture of a charismatic entity can have an emotional impact on the viewer.
In the work of~\cite{groves2005gender}, the results showed that women are considered more charismatic than men. In~\cite{araujo2021perceived, araujo2021analysis}, the results showed that CG characters are considered more charismatic in videos (animations) than in still images. The results also showed that, for both women and men, adult female characters can be considered more charismatic than male characters. In addition, men perceived more charisma in unrealistic than in realistic characters.
\section{Proposed Methodology}
\label{sec:gender_bias_methodology}
The proposed methodology is presented in two sections: in Section~\ref{sec:gender_bias_stimuli}, we present the creation of the stimuli used in this work, and in Section~\ref{sec:gender_bias_questionnaire} we detail the applied questionnaire.
\subsection{Creation of Stimuli}
\label{sec:gender_bias_stimuli}
First, explaining Condry and Condry's experiment: the authors presented to participants a video of a real 9-month-old baby sitting in a baby chair facing a mirror (with a camera mounted behind the mirror) and reacting to four stimuli (a teddy bear, a jack-in-the-box, a doll, and a buzzer). The baby wore neutral clothes and no accessories to avoid gender stereotypes. Then, the authors separated the participants into two groups, one receiving the baby with a female name, and the other receiving the baby with a male name. The authors asked the participants to use predefined scales to assess the baby's emotional and semantic levels in relation to the four stimuli presented. The main goal of the authors was to know whether the baby's assigned gender
influenced the participants' perception of emotional and semantic levels.
In the present work, we used a 3D model of a baby purchased on a website\footnote{https://www.cgtrader.com/3d-models/character/child/game-ready-baby} to replicate the original work. The model has animations of crawling, walking, and playing with a ball.
In the original work, the authors reported that the baby reacted positively (smiled, laughed, reached out) to the teddy bear and the doll, and reacted negatively (turned away, stared, cried) to the other two objects. As facial
reactions
can evoke strangeness in some participants,
we decided to avoid explicit facial animations and to focus on body movement and animations. Our goal here is to avoid any interference that could affect human perception.
Three stimulus objects were used for the virtual baby to interact with.
First, the animation of the baby playing with a ball, which in our hypothesis would be perceived as a positive reaction from the baby.
Second, the hypothetically negative reaction was created using the same object as in the original work, i.e., a virtual model of a jack-in-the-box\footnote{https://www.blendswap.com/blend/27680}, which contains an animation of ``Jack'' jumping out of the box and back into it. The crawl animation was used for the virtual baby to reach the jack-in-the-box, and a simple facial animation (mouth opened in surprise)
was created (using the facial blendshapes of the baby).
Regarding the object intended to cause a hypothetically neutral reaction, a 3D model of a colored unicorn\footnote{https://free3d.com/3d-model/unicorn-doll-772526.html} was used, and the virtual baby's reaction was to crawl in the opposite direction to the unicorn, that is, without interest in the object.
The objects used in this work can be seen in Figure~\ref{fig:baby_environment}.
\begin{figure*}[ht]
\centering
\subfigure[fig:baby_environment][Ball]{\includegraphics[width=0.32\textwidth]{samples/BallEnvironment.png}}
\subfigure[fig:baby_environment][Unicorn]{\includegraphics[width=0.32\textwidth]{samples/UnicornEnvironment.png}}
\subfigure[fig:baby_environment][Jack-in-the-box]{\includegraphics[width=0.32\textwidth]{samples/JackInTheBoxEnvironment.png}}
\caption{Environment and three objects that the baby interacts with: (a) the baby plays with the ball; (b) the baby crawls in a direction opposite to the unicorn; (c) the baby has a negative emotion seeing Jack jump out of the box (Jack-in-the-box).}
\label{fig:baby_environment}
\end{figure*}
Three stimulus videos were created, one for each object, to present to the participants. For all videos, a virtual scenario was created to give context to the scene.
The set contains a room with a table, a sofa, an armchair, a curtain, an open window showing the sky with clouds, and a wooden floor (all of these objects were obtained from the internet). The scenario can be seen in
Figure~\ref{fig:baby_environment}. The camera always remains in the same position, pointed at the window. At the beginning of the videos, the virtual baby always starts facing the camera and the objects, with its back to the window. Also, the baby never leaves the camera view.
The videos last between $6$ and $19$ seconds and were uploaded to YouTube to be added to the questionnaire.
\subsection{Questionnaire}
\label{sec:gender_bias_questionnaire}
The questionnaire was also based on the work of Condry and Condry. First, the participants were presented with the consent form approved by the Ethics Committee of our University\footnote{Information about the Ethics committee and the project name were omitted for blind review}.
Thus, participants were informed of potential risks.
In addition, the questionnaire was created on the Qualtrics platform\footnote{{https://www.qualtrics.com}},
distributed on social networks, and
the participants were volunteers. Participants were also asked about their demographics information: age, educational level, gender, and familiarity with CG (games, movies, simulations, etc).
After this, three versions of the questionnaire were created: one containing the baby with a female name, another containing the baby with a male name, and finally one containing the unnamed baby. When a participant used the link to answer the test, we randomly selected one of the questionnaires.
The names were presented in an introductory text block, which introduced the baby with its name and gender (no name and no gender in the unnamed group) and stated that the baby was 9 months old. While in the work of Condry and Condry there were two groups,
we decided to also test the possibility of an unnamed baby,
in order
to assess whether people perceive that an unnamed baby does not have a defined gender.
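As a minimal illustration of the random assignment described above, selecting one of the three questionnaire versions per participant can be sketched as follows (the condition labels are ours; the actual survey ran on the Qualtrics platform):

```python
# Sketch (not the authors' code) of uniformly random assignment of a
# participant to one of the three questionnaire conditions.
import random

CONDITIONS = ["female_name", "male_name", "unnamed"]

def assign_condition(rng=random):
    """Pick one of the three questionnaire variants uniformly at random."""
    return rng.choice(CONDITIONS)

# Reproducible example for a single participant.
condition = assign_condition(random.Random(42))
```

Uniform assignment keeps the three groups roughly balanced as the number of respondents grows.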
As in the original study, this present work also used emotional rating scales.
For each video, the participants were instructed to rate, on 11-point Likert scales, the pleasure, anger, and fear felt by the baby. If the participant perceived that an emotion did not appear in the video, then she/he was instructed to score the lowest value on the scale (0 = ``Absence of Emotion''), and the highest value if the emotion appeared as much as possible (10 = ``High Intensity''). This step aims to test the $H0_{1}$ hypothesis (``defining that women and men perceive emotions similarly in female, male and unnamed animated virtual babies''), and it provides a 2 (Women and Men responses) x 3 (Ball, Jack-in-the-box and Unicorn) x 3 (Pleasure, Anger, Fear) x 3 (Female name, Male name, and Unnamed) design structure.
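The 2 x 3 x 3 x 3 design structure described above can be sketched by enumerating one long-format record per design cell (the factor names below are ours, and the ratings are placeholders, not collected data):

```python
# Illustrative enumeration of the factorial design cells; each cell would
# hold the 0-10 Likert ratings collected for that combination of factors.
from itertools import product

participant_genders = ["woman", "man"]                 # 2 levels
videos = ["ball", "jack_in_the_box", "unicorn"]        # 3 levels
emotions = ["pleasure", "anger", "fear"]               # 3 levels
name_groups = ["female_name", "male_name", "unnamed"]  # 3 levels

rows = [
    {"participant_gender": g, "video": v, "emotion": e,
     "name_group": n, "rating": None}  # placeholder for a 0-10 score
    for g, v, e, n in product(participant_genders, videos, emotions, name_groups)
]

assert len(rows) == 2 * 3 * 3 * 3  # 54 design cells
```

A long-format table like this feeds directly into group-wise non-parametric tests.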
To investigate $H0_{3}$ (``defining that women and men perceive comfort and charisma similarly in the baby with a male name and the unnamed baby, and differently in relation to the baby with a female name''), we followed the same question structure as the emotional scales using 11-point Likert scales, but containing only two questions (we call them perceptual scales), one about perceived comfort and the other about perceived charisma. The comfort question was ``How comfortable are you with the virtual baby? For example, did you feel any discomfort (strangeness) while watching the video? (0 = Totally Uncomfortable, and 10 = Totally Comfortable)''.
Regarding perceived charisma,
the question given to participants was ``What is the level of charisma displayed by the virtual baby? (0 = Not at all Charismatic, and 10 = Very Charismatic)''.
The comfort and charisma questions were inspired by the literature~\cite{mori2012uncanny, araujo2021perceived, araujo2021analysis, katsyri2015review}. This step had a 2 (Women and Men responses) x 3 (Ball, Jack-in-the-box and Unicorn) x 2 (Comfort and Charisma) x 3 (Female name, Male name, and Unnamed) design structure.
Finally, three questions were asked of the participants, also based on the original work: \textit{i)} ``Did the baby have a name? If so, what was the name?'', a question with an open text answer; \textit{ii)} ``How old was the baby?'', with nine
possible answers, ``Less than 7 months old'' being the first one, ``More than 12 months old'' the penultimate one, and ``I don't know how to inform'' the last one; and finally \textit{iii)} ``What was the baby's gender?'', with ``Female'', ``Male'', and ``I don't know'' as possible answers. However, only the question in item \textit{iii)} was part of our focus in this work, as it aimed to test hypothesis $H0_{2}$ (``defining that the unnamed baby is recognized as genderless'').
This step had a 2 (Women and Men responses) x 3 (Ball, Jack-in-the-box and Unicorn) x 3 (questions about name, age, and gender) x 3 (Female name, Male name, and Unnamed) design structure.
\section{Results}
\label{sec:gender_bias_results}
This section presents the analysis and results of the questionnaire responses described in Section~\ref{sec:gender_bias_methodology}. In total, the questionnaire was answered by 168 volunteers through social networks, but only 148 responded completely: 79 women, 66 men, and three people who chose another option\footnote{We removed these three participants because this group was very small.}.
Regarding age, of the 145 remaining participants, 105 were younger than 36 years old.
Regarding education,
105 people had completed undergraduate studies.
In addition, 130 responded that they were familiar with CG. Regarding the statistical analyses, we used the following tests: \textit{Kruskal-Wallis} (with \textit{Bonferroni} correction as a post hoc test), \textit{Mann-Whitney}, and \textit{Chi-Square}. These methods were used through the Statsmodels\footnote{{https://www.statsmodels.org/dev/index.html}} and SciPy\footnote{{https://docs.scipy.org/doc/scipy/index.html}} libraries of the Python language.
In all analyses, we used a significance level of $.05$. This section is divided as follows: \textit{i)} Section~\ref{sec:analysis_emotional_perception} presents the results regarding the responses to the emotional scales presented in the questionnaire; \textit{ii)} Section~\ref{sec:analysis_gender_categorization} discusses the participants' answers regarding the perceived virtual human gender;
and finally \textit{iii)} Section~\ref{sec:analysis_comfort_charisma} presents results about the perceived comfort and charisma.
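The statistical pipeline named above can be sketched as follows (a hedged illustration, not the authors' code; the rating lists are hypothetical 0--10 Likert scores for the three name groups, and the contingency counts are invented):

```python
# Sketch of the Kruskal-Wallis omnibus test, Bonferroni-corrected
# Mann-Whitney post hoc comparisons, and a Chi-Square test, using SciPy.
from scipy.stats import kruskal, mannwhitneyu, chi2_contingency

female_name = [7, 6, 8, 5, 9, 6, 7]  # hypothetical ratings per group
male_name   = [5, 4, 6, 5, 3, 6, 4]
unnamed     = [4, 3, 5, 4, 2, 5, 3]

# Omnibus non-parametric comparison across the three groups.
kw_stat, kw_p = kruskal(female_name, male_name, unnamed)

# Post hoc: pairwise Mann-Whitney tests with a Bonferroni correction
# (statsmodels' multipletests(..., method="bonferroni") is equivalent).
pairs = [(female_name, male_name), (female_name, unnamed), (male_name, unnamed)]
raw_p = [mannwhitneyu(a, b).pvalue for a, b in pairs]
adj_p = [min(1.0, p * len(raw_p)) for p in raw_p]

# Chi-Square on categorization counts (rows: women/men; cols: right/wrong).
chi2, chi_p, dof, expected = chi2_contingency([[74, 5], [63, 3]])
```

With a significance level of $.05$, a corrected `adj_p` below that threshold would mark a pairwise difference as significant.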
\subsection{Analysis of Emotional Perception}
\label{sec:analysis_emotional_perception}
Explaining the results related to hypothesis $H0_{1}$, Table~\ref{tab:gender_bias_survey} presents the averages of the emotional rating scales, where the rows present the values referring to the separate videos (Ball, Unicorn, and Jack-in-the-box) and
to all of them together (All = Ball+Unicorn+Jack-in-the-box). The columns represent the separate emotional scales (Pleasure, Anger, and Fear), and all of them together (Emotions = Pleasure+Anger+Fear). The first four data columns refer to the answers of the participants who received the virtual baby with the female name, followed by the
columns with the results regarding the answers of the participants who received the virtual baby with the male name, and then the
columns referring to the unnamed baby. In addition, the last columns
present the results of the statistical analysis performed using the Kruskal-Wallis test (non-parametric), which compares the responses of the three groups (values referring to the female name vs. the male name vs. the unnamed baby). Therefore,
the first rows refer to women's answers, the middle five rows refer to men's answers, and the last rows refer to the \textit{p}-values of the \textit{Mann-Whitney} test comparing the data of women and men. Recall that the mean values can vary between 0 and 10, the range of the 11-point Likert scales used in the questions.
\begin{table*}[ht]
\centering
\caption{Table of results referring to the averages of the emotional rating scales. The first four data columns correspond to the responses of the group of participants that received the questionnaire containing the baby with a female name, the next four columns correspond to the responses of the group that received the questionnaire containing the baby with a male name, and the next four columns refer to the answers on the questionnaire that had the unnamed baby. In addition to the ratings of the three videos (Ball, Unicorn, Jack* = Jack-in-the-box), All* refers to the rating of the three videos together, or to the rating of all emotions together (Pleasure+Anger+Fear). Regarding the last three columns, the \textit{p}-values referring to the \textit{Kruskal-Wallis} test are presented, comparing the response values of the three groups, that is, the group that received the baby with a female name versus the group that received the baby with a male name versus the group that received the unnamed baby. In addition, the first five lines refer to women's data, the middle five lines refer to men's data, and the last five lines present the \textit{p}-values referring to the \textit{Mann-Whitney} test of the comparisons between the values of the lines of the women's data versus the values of the lines of the men's data. In bold we highlight the \textit{p}-values smaller than or equal to $.05$. }
\label{tab:gender_bias_survey}
\begin{adjustbox}{max width=0.99\linewidth}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|}
\hline
\multicolumn{19}{|c|}{\textbf{Women}}\\
\hline
& \multicolumn{4}{c|}{\textbf{Female Name}} & & \multicolumn{4}{c|}{\textbf{Male Name}}& & \multicolumn{4}{c|}{\textbf{Unnamed}}&&\multicolumn{3}{c|}{\textbf{\textit{Kruskal-Wallis} test}} \\
\hline
\textbf{Stimuli} & \textbf{Pleasure} & \textbf{Anger} & \textbf{Fear} & \textbf{All*} & &\textbf{Pleasure} & \textbf{Anger} & \textbf{Fear} & \textbf{All*}&&\textbf{Pleasure} & \textbf{Anger} & \textbf{Fear} & \textbf{All*}&&\textbf{Emotion} & \textbf{Stats} & \textbf{\textit{p}-value} \\
\hline
\textbf{Ball} & $6.96$ & $1.72$ & $1.24$ & - & - & $5.66$ & $1.04$ & $0.91$ & - & - & $4.80$ & $0.90$ & $0.50$ & - & - & \textbf{Pleasure} & $10.19$ & $\mathbf{.006}$ \\
\hline
\textbf{Unicorn} & $2.64$ & $0.72$ & $1.24$ & - & - & $3.16$ & $0.41$ & $1.12$ & - & - & $2.33$ & $0.63$ & $1.16$ & - & - & \textbf{Anger} & $1.91$ & $.38$\\
\hline
\textbf{Jack*} & $5.96$ & $0.52$ & $2.48$ & - & - & $4.37$ & $0.54$ & $2.12$ & - & - & $3.40$ & $1.00$& $2.66$ & - & - & \textbf{Fear} & $0.77$ & $.67$\\
\hline
\textbf{All*} & $5.18$ & $0.98$ & $1.65$ & $2.60$ & - & $4.40$ & $0.66$ & $1.38$ & $2.15$ & - & $3.51$ & $0.84$& $1.44$ & $1.93$ & - & \textbf{All*} & $7.00$ & $\mathbf{.03}$ \\
\hline
\multicolumn{19}{|c|}{\textbf{Men}}\\
\hline
& \multicolumn{4}{c|}{\textbf{Female Name}} & & \multicolumn{4}{c|}{\textbf{Male Name}}& & \multicolumn{4}{c|}{\textbf{Unnamed}}&&\multicolumn{3}{c|}{\textbf{\textit{Kruskal-Wallis} test}} \\
\hline
\textbf{Stimuli} & \textbf{Pleasure} & \textbf{Anger} & \textbf{Fear} & \textbf{All*} & &\textbf{Pleasure} & \textbf{Anger} & \textbf{Fear} & \textbf{All*}&&\textbf{Pleasure} & \textbf{Anger} & \textbf{Fear} & \textbf{All*}&&\textbf{Emotion} & \textbf{Stats} & \textbf{\textit{p}-value} \\
\hline
\textbf{Ball} & $6.08$ & $0.56$ & $0.60$ & - & - & $6.00$ & $1.95$ & $0.86$ & - & - & $6.55$ & $0.75$ & $3.05$ & - & - & \textbf{Pleasure} & $1.88$ & $.38$ \\
\hline
\textbf{Unicorn} & $2.64$ & $0.40$ & $0.40$ & - & - & $3.26$ & $0.56$ & $1.17$ & - & - & $2.70$ & $0.75$ & $1.95$ & - & - & \textbf{Anger} & $3.54$ & $.16$\\
\hline
\textbf{Jack*} & $4.32$ & $0.68$ & $1.96$ & - & - & $5.56$ & $0.60$ & $3.52$ & - & - & $6.10$ & $1.04$& $0.90$ & - & - & \textbf{Fear} & $6.57$ & $\mathbf{.01}$\\
\hline
\textbf{All*} & $4.34$ & $0.54$ & $0.98$ & $1.96$ & - & $4.94$ & $1.04$ & $1.85$ & $2.61$ & - & $5.11$ & $0.88$& $2.03$ & $2.67$ & - & \textbf{All*} & $7.96$ & $\mathbf{.01}$ \\
\hline
\multicolumn{19}{|c|}{\textbf{Men vs. Women - \textit{Mann Whitney} test}}\\
\hline
& \multicolumn{4}{c|}{\textbf{Female Name}} & & \multicolumn{4}{c|}{\textbf{Male Name}}& & \multicolumn{4}{c|}{\textbf{Unnamed}}&&\multicolumn{3}{c|}{\textbf{}} \\
\hline
\multirow{2}{*}{\textbf{Stimuli}} & \textbf{Pleasure} & \textbf{Anger} & \textbf{Fear} & \textbf{All*} & &\textbf{Pleasure} & \textbf{Anger} & \textbf{Fear} & \textbf{All*}&&\textbf{Pleasure} & \textbf{Anger} & \textbf{Fear} & \textbf{All*}&& & & \\
& \textbf{\textit{p}-value} & \textbf{\textit{p}-value} & \textbf{\textit{p}-value} & \textbf{\textit{p}-value} & &\textbf{\textit{p}-value} & \textbf{\textit{p}-value} & \textbf{\textit{p}-value} & \textbf{\textit{p}-value}&&\textbf{\textit{p}-value} & \textbf{\textit{p}-value} & \textbf{\textit{p}-value} & \textbf{\textit{p}-value}&&&& \\
\hline
\textbf{Ball} & $.31$ & $\mathbf{.008}$ & $.21$ & - & - & $.36$ & $.25$ & $.29$ & - & - & $\mathbf{.02}$ & $.33$ & $.15$ & - & - & - & - & - \\
\hline
\textbf{Unicorn} & $.31$ & $\mathbf{.05}$ & $\mathbf{.01}$ & - & - & $.34$ & $.33$ & $.14$ & - & - & $.21$ & $.41$ & $.11$ & - & - & - & - & -\\
\hline
\textbf{Jack*} & $\mathbf{.04}$ & $.33$ & $.45$ & - & - & $.14$ & $.35$ & $\mathbf{.04}$ & - & - & $\mathbf{.005}$ & $.42$& $.45$ & - & - & - & - & -\\
\hline
\textbf{All*} & $.07$ & $\mathbf{.01}$ & $.08$ & $\mathbf{.006}$ & - & $.17$ & $.20$ & $\mathbf{.03}$ & $\mathbf{.01}$ & - & $\mathbf{.002}$ & $.48$& $.12$ & $\mathbf{.01}$ & - & - & - & -\\
\hline
\end{tabular}
\end{adjustbox}
\end{table*}
\textbf{With regard to the women population}, the data in Table~\ref{tab:gender_bias_survey} show that, in general, emotional perception was higher for the baby with a female name than for the other groups in most cases.
In particular, applying the \textit{Kruskal-Wallis} test to the global average values,
we can see that there were significantly different values for pleasure and in the global analysis (highlighted in bold). Furthermore, using a \textit{Bonferroni correction} as a \textit{post hoc} test, in both cases,
the women's perception of emotion ($.03$) and pleasure ($.004$) differed most in the comparison between the group that received the female name and the group that received the unnamed baby.
Still regarding the answers from women, when we analyzed the stimulus videos separately\footnote{These data are not in Table~\ref{tab:gender_bias_survey} due to lack of space.}, we only found significant \textit{p}-values in the ball ($.02$) and Jack-in-the-box ($.03$) stimuli, in the comparisons of the three groups' perceptions of pleasure.
Therefore, \textbf{we can say that women perceived the baby with a female name as more emotional, in general, and as feeling more pleasure than the baby with a male name and the unnamed baby.}
\textbf{With regard to the results of men}, the opposite happened, that is, in most cases the mean values were lower for participants who received the virtual baby with a female name. In the statistical comparison, the \textit{p}-values were significant for fear and for emotions in general. Regarding the \textit{post hoc} test, both in the perception of all emotions ($.01$) and in the perception of fear ($.03$), the most different groups were those that received the female and male names.
Analyzing the videos separately, we only found a significant result ($.02$) in the perception of fear in the unicorn stimulus, and the biggest perceptual difference was between the group that received the female name and the group that received the unnamed baby ($.03$). Therefore, \textbf{we can say that men perceived both the baby with a male name and the unnamed baby as more emotional and as feeling more fear than the baby with a female name. In general, these cases were more evident when we compared men's perception of the babies with male and female names.} Bear in mind that for both women and men the baby was the same; only the name was changed.
To measure the \textbf{difference between the perceptions of women and men}, we performed a \textit{Mann-Whitney} test between the values. Regarding emotions in general, we found significant values in all three cases, that is, in the comparison between women and men in the three groups. Interpreting these three values and looking at the averages of general emotions in Table~\ref{tab:gender_bias_survey},
we can say that \textbf{women perceived more emotion in the baby with a female name than men, and men perceived more emotion in the babies with a male name and unnamed than women}.
\subsection{Analysis of Gender Categorization in Virtual Humans}
\label{sec:analysis_gender_categorization}
Regarding $H0_{2}$ (defining that the unnamed baby is recognized as genderless),
in both cases in which the baby was assigned a gender (female or male name), only five women and three men answered wrongly. However, when the baby was not assigned a gender,
18 women and 11 men responded that the baby was male. We used the \textit{Chi-Square} test to compare the correct answers between the three groups, that is,
``female'' for the group that received the female name, ``male'' for the group that received the male name, and ``I don't know'' for the group without a name.
Comparing women's and men's correct answers, we found no significant results, that is, \textbf{women and men were similarly correct when the babies were assigned a gender (female and male names), and were wrong in a similar way when the baby was not assigned a gender (unnamed baby).} \textbf{This is an indication that even if the baby
does not have visual gender stereotypes, people will still point out that it is male.}
\subsection{Perceived Comfort and Charisma}
\label{sec:analysis_comfort_charisma}
Explaining the results related to hypothesis $H0_{3}$ (defining that women and men perceive comfort and charisma similarly in the baby with a male name and the unnamed baby, and differently in relation to the baby with a female name~\footnote{The goal was to compare with the results provided in~\cite{araujo2021analysis}.}), Table~\ref{tab:perceptual_scales} presents the averages of the perceptual scales and the statistical analysis. The columns present the perceived comfort and charisma.
\begin{table*}[ht]
\centering
\caption{Table of results referring to the averages of the perceptual scales (Comfort and Charisma).
Regarding the last three columns, the \textit{p}-values referring to the \textit{Kruskal-Wallis} test are presented, comparing the response values of the three groups.
In addition, the last five lines present the \textit{p}-values referring to the \textit{Mann-Whitney} test of the comparisons between the values of the lines of the women's data versus the values of the lines of the men's data. In bold, \textit{p}-values smaller than or equal to $.05$. }
\label{tab:perceptual_scales}
\begin{adjustbox}{max width=0.99\linewidth}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|}
\hline
\multicolumn{13}{|c|}{\textbf{Women}}\\
\hline
& \multicolumn{2}{c|}{\textbf{Female Name}} & & \multicolumn{2}{c|}{\textbf{Male Name}}& & \multicolumn{2}{c|}{\textbf{Unnamed}}&&\multicolumn{3}{c|}{\textbf{\textit{Kruskal-Wallis} test}} \\
\hline
\textbf{Stimuli} & \textbf{Comfort} & \textbf{Charisma} & & \textbf{Comfort} & \textbf{Charisma} & & \textbf{Comfort} & \textbf{Charisma} & & \textbf{Perceived} & \textbf{Stats} & \textbf{\textit{p}-value} \\
\hline
\textbf{Ball} & $7.40$ & $6.60$ & - & $6.50$ & $5.87$ & - & $6.70$ & $4.76$ & - & \textbf{Comfort} & $5.06$ & $.07$ \\
\hline
\textbf{Unicorn} & $7.20$ & $6.04$ & - & $5.41$ & $4.79$ & - & $6.36$ & $4.56$ & - & \textbf{Charisma} & $13.49$ & $\mathbf{.001}$\\
\hline
\textbf{Jack*} & $7.08$ & $6.68$ & - & $5.58$ & $5.12$ & - & $6.53$ & $4.63$ & - & - & - & -\\
\hline
\textbf{All*} & $7.22$ & $6.44$ & - & $5.83$ & $5.26$ & - & $6.53$ & $4.65$ & - & - & - & -\\
\hline
\multicolumn{13}{|c|}{\textbf{Men}}\\
\hline
& \multicolumn{2}{c|}{\textbf{Female Name}} & & \multicolumn{2}{c|}{\textbf{Male Name}}& & \multicolumn{2}{c|}{\textbf{Unnamed}}&&\multicolumn{3}{c|}{\textbf{\textit{Kruskal-Wallis} test}} \\
\hline
\textbf{Stimuli} & \textbf{Comfort} & \textbf{Charisma} & & \textbf{Comfort} & \textbf{Charisma} & & \textbf{Comfort} & \textbf{Charisma} & & \textbf{Perceived} & \textbf{Stats} & \textbf{\textit{p}-value} \\
\hline
\textbf{Ball} & $6.92$ & $6.36$ & - & $7.43$ & $6.60$ & - & $6.85$ & $5.80$ & - & \textbf{Comfort} & $3.94$ & $.13$ \\
\hline
\textbf{Unicorn} & $6.20$ & $4.56$ & - & $7.30$ & $6.30$ & - & $6.60$ & $5.80$ & - & \textbf{Charisma} & $3.59$ & $.16$\\
\hline
\textbf{Jack*} & $5.72$ & $5.52$ & - & $6.78$ & $6.21$ & - & $6.15$ & $6.20$ & - & - & - & -\\
\hline
\textbf{All*} & $6.28$ & $5.48$ & - & $7.17$ & $6.37$ & - & $6.53$ & $5.93$ & - & - & - & -\\
\hline
\multicolumn{13}{|c|}{\textbf{Men vs. Women - \textit{Mann Whitney} test}}\\
\hline
& \multicolumn{2}{c|}{\textbf{Female Name}} & & \multicolumn{2}{c|}{\textbf{Male Name}}& & \multicolumn{2}{c|}{\textbf{Unnamed}}&&\multicolumn{3}{c|}{\textbf{}} \\
\hline
\multirow{2}{*}{\textbf{Stimuli}} & \textbf{Comfort} & \textbf{Charisma} & & \textbf{Comfort} & \textbf{Charisma} & & \textbf{Comfort} & \textbf{Charisma} & & & & \\
& \textbf{\textit{p}-value} & \textbf{\textit{p}-value} & & \textbf{\textit{p}-value} & \textbf{\textit{p}-value} & & \textbf{\textit{p}-value} & \textbf{\textit{p}-value} &&&& \\
\hline
\textbf{Ball} & $.23$ & $.41$ & - & $.27$ & $.20$ & - & $.46$ & $.11$ & - & - & - & - \\
\hline
\textbf{Unicorn} & $.09$ & $\mathbf{.04}$ & - & $\mathbf{.02}$ & $\mathbf{.03}$ & - & $.47$ & $\mathbf{.05}$ & - & - & - & - \\
\hline
\textbf{Jack*} & $\mathbf{.02}$ & $.07$ & - & $.17$ & $.12$ & - & $.24$ & $\mathbf{.01}$ & - & - & - & - \\
\hline
\textbf{All*} & $\mathbf{.009}$ & $\mathbf{.02}$ & - & $\mathbf{.02}$ & $\mathbf{.01}$ & - & $.36$ & $\mathbf{.001}$ & - & - & - & - \\
\hline
\end{tabular}
\end{adjustbox}
\end{table*}
Looking at the averages of women's responses with respect to perceived comfort, women felt more comfortable when observing a baby with a female name than when observing a baby with a male name or an unnamed baby. Among the three groups, perceived comfort values were lowest when women observed the baby with a male name. However, we did not find significant results, that is, \textbf{there was no gender effect in relation to the comfort perceived by women.} Regarding perceived charisma, women rated the baby with a female name as more charismatic than the baby with a male name and the unnamed baby. In this case, the unnamed baby was always rated as the least charismatic. Overall, we found a significant difference (gender effect), which was greatest in the comparison between the female name and the unnamed baby (\textit{p}-value = $.0008$).
Therefore, \textbf{in general, we can say that women rated the baby with a female name as more charismatic, especially when compared to the unnamed baby.}
Regarding the comfort perceived by men, in almost all cases men felt more comfortable with the baby with a male name, while the baby with a female name had the lowest comfort values. However, we also did not find significant results, that is, \textbf{there was also no gender effect in relation to the comfort perceived by men.} Regarding perceived charisma, men also rated the baby with a male name as the most charismatic, with the baby with a female name receiving most of the lowest values. Overall, we also did not find a significant result. Therefore, we can say that, \textbf{in general, as with perceived comfort, there was no gender effect in relation to perceived charisma.}
Comparing women and men with regard to the group that received the female name, we can notice that in all cases the average values of comfort and charisma were higher for women than for men. Conversely, for the male name, the mean values of both perceived comfort and charisma were higher for men than for women. Statistically, these differences were significant.
With respect to the unnamed baby group, the perceived comfort values were mostly higher for men than for women, but the values were very close, and therefore the results were not significantly different. For perceived charisma, by contrast, all mean values were higher for men than for women.
\textbf{Therefore, comparing women and men,
we can say that women felt more comfortable and perceived more charisma with a baby with female name, while men had the same result in relation to the male-named baby.}
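The group comparisons above follow a standard nonparametric pipeline: a \textit{Kruskal-Wallis} omnibus test across the three naming conditions, followed by pairwise \textit{Mann-Whitney} tests between women's and men's ratings. A minimal sketch of this pipeline using SciPy is shown below; the rating vectors are illustrative placeholders, not the study's data.

```python
# Sketch of the statistical analysis described above (SciPy).
# The Likert-style charisma ratings below are ILLUSTRATIVE placeholders,
# not the study's data: one list per participant group.
from scipy.stats import kruskal, mannwhitneyu

women_female  = [7, 6, 7, 8, 6, 7, 5, 7]   # baby with female name
women_male    = [5, 4, 6, 5, 5, 4, 6, 5]   # baby with male name
women_unnamed = [4, 5, 4, 5, 3, 5, 4, 6]   # unnamed baby

# Kruskal-Wallis: do the three groups differ? (omnibus test)
h_stat, p_kw = kruskal(women_female, women_male, women_unnamed)

# Mann-Whitney: women vs. men for one condition (men's data also made up)
men_female = [6, 5, 7, 6, 5, 6, 5, 6]
u_stat, p_mw = mannwhitneyu(women_female, men_female,
                            alternative="two-sided")

print(f"Kruskal-Wallis H={h_stat:.2f}, p={p_kw:.3f}")
print(f"Mann-Whitney  U={u_stat:.1f}, p={p_mw:.3f}")
```

A \textit{p}-value at or below $.05$ from either test would be marked in bold in Table~\ref{tab:perceptual_scales}.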
\section{Discussion}
\label{sec:discussion}
This section aims to discuss the results presented in Section~\ref{sec:gender_bias_results}.
The Hypothesis $H0_{1}$ ("defining that women and men perceive emotions similarly in female, male and unnamed animated virtual babies") was refuted in our study.
In fact, our results indicate that assigning the baby a gender through a gendered name can impact the way people perceive emotions, even if that baby does not have behavioral gender stereotypes. Furthermore, our results are in agreement with the work of~\cite{condry1976sex}, who studied gender bias in real life. It is interesting to note that we apparently also maintain our bias in virtual environments.
This can be related to gender identification~\cite{scott2007gender}: for example, women identify more with female virtual humans. It could also mean that the industry does not need to worry about creating stereotyped animated virtual humans to convey gender.
Our research indicates that merely informing the baby's name was enough to create a gender perception that impacted the participants' emotional answers. This is an indication that it is possible to deconstruct gender stereotypes through virtual humans, as mentioned by~\cite{draude2011intermediaries}. Certainly, more research with adult virtual humans is needed; however, genderless aspects should be provided and verified in order to really present a genderless character (avoiding signs like hair, neck size, mouth color, etc.).
In relation to the $H0_{2}$ hypothesis ("defining that the unnamed baby is recognized as genderless"), most participants who received the unnamed baby defined that the virtual baby was male. These results are also in accordance with the work of Condry and Condry, as the unnamed baby was assessed as male, that is, there is a gender bias, both in the real life experiment and in the virtual experiment.
Regarding hypothesis $H0_{3}$ ("defining that women and men perceive comfort and charisma similarly in baby with male name and unnamed baby, and different in relation to baby with female name"), our results were compared with the work by~\cite{araujo2021analysis}. It is important to highlight that the work of Araujo analyzed the comfort and charisma perceived by women and men regarding characters from different media (mostly adults) created using CG. However, even with this difference, our results were similar to those of Araujo, that is, women felt more comfortable with a realistic female virtual human than with a male virtual human, and men felt more comfortable with a realistic male virtual human than with a female virtual human. As in our results, Araujo's work did not find a significant result for animations, but the authors did find significant results for static images, comparable to what was done in the present work. Regarding perceived charisma, in our work the effect was similar to perceived comfort, that is, women rated the female virtual baby as the most charismatic, and men rated the male virtual baby as the most charismatic. This was more evident for women, as the result was significant. In the work of Araujo and his colleagues, for both women and men, realistic female characters were considered more charismatic than realistic male characters. In addition, the results are also in accordance with the work of~\cite{groves2005gender}, who studied the perception of charisma in real life and observed that women can be considered more charismatic than men.
Our work shows that, as in "real life", gender identification seems to impact the users' perception.
In the case of the work of~\cite{nag2020gender}, men and women participants had similar successes and errors in relation to the three characters tested (female, male and androgynous), but most guesses were correct, since the results showed that the designed characters were indeed perceived consistently with their intended appearance. In our case, most participants (men and women) were more often wrong than correct when trying to guess the gender of the unnamed baby, in relation to the other groups.
While we only informed (textually) the gender of a virtual human without visual identifications, that is, our virtual human was always the same, in~\cite{nag2020gender} the authors used visual identifications to differentiate the virtual humans. This shows that visual cues impact the participants' answers by creating gender determination, as expected. On the other hand, the errors obtained in our work when guessing the gender of the unnamed baby seem consistent with a gender bias in the participants' answers.
Therefore, those who develop virtual humans for the public (from small to large industries) may consider that they do not necessarily need stereotyped animations to represent gender. For example, knowing the likely human gender bias, the CG community can revisit the stereotypical visual aspects and propose different ways of modeling and animating virtual humans, perhaps deconstructing the binary gender, or reinforcing it if desired. Remembering that the most important thing is that people of any gender identify with and have good experiences with virtual humans.
\section{Final Considerations}
\label{sec:finalConsiderations}
This work evaluated whether the perception of women and men is influenced by virtual humans assigned gender. We raised four questions:
\textit{i)} "How do people perceive genderless characters?", \textit{ii)} "Does this change with respect to participant gender?", and \textit{iii)} "Do we have a bias to perceive gender?". Firstly, to answer these three questions, we raised hypotheses $H0_{1}$ and $H0_{2}$, and recreated the work of~\cite{condry1976sex} using a virtual baby. In our results, women perceived more emotions in a baby with a female name, and men perceived more emotions in babies with a male name and unnamed babies (refuting $H0_{1}$). In this case, the baby was always the same, that is, only the name and the textually defined gender changed. Furthermore, the group of participants who received the unnamed baby defined that this baby was male, even though the baby did not have gender stereotypes (also refuting $H0_{2}$).
We also studied the question \textit{iv)} "Can gender categorization influence the perception of comfort and charisma of an animated virtual human?".
Our results indicate that women felt more comfortable and perceived more charisma in the virtual baby with a female name, and men in relation to the baby with a male name. In addition, men rated the unnamed baby as more charismatic than women did. These findings refute $H0_{3}$.
From our point of view, this article simultaneously brings contributions to the fields of psychology and computing. For the field of psychology, the article advances by showing how the process of gender attribution, already classically studied in real humans, also happens through subtle cues (such as a name) in virtual humans. Therefore, we have increased the level of evidence for this type of perceptual bias. In the field of computer science, we show that it is not necessary to spend resources animating gender from a behavioral point of view if gender can be suggested by other 'cheaper' features, such as a mere name.
This work has some limitations. Firstly, we only presented three animations, hypothesizing that the videos did not present any gender indication. Another hypothesis is that the virtual baby is genderless. Both of these hypotheses are based on the literature~\cite{condry1976sex}, which we extended to virtual humans.
We also did not test human perception with respect to interactions with the virtual babies, but only with videos. Finally, our results indicate that human perception of genderless virtual humans, even of a baby, already shows human bias in gender detection and attribution, just as in real life. This may impact some decisions taken by the CG community, for instance revisiting the stereotyped constructions used to model and animate virtual humans with visual attributes (such as colors, clothes, animations, and hair) that define a specific gender, or not. In future work, we intend to include other features to study perception in relation to virtual babies, but also with adult virtual humans.
\section{ACKNOWLEDGMENT}
The authors would like to thank CNPq and CAPES for partially funding this work.
\bibliographystyle{IEEEtran}
\section{Introduction}
Knowledge of the properties of QCD at large baryon density is needed to interpret the results of heavy ion collisions experiments.
In particular, this is the case at the future experiments of NICA (JINR, Dubna) and FAIR (Darmstadt, Germany), which are designed to study the region of high baryon density. Input from the theory side is hence urgently needed. An understanding of the properties of matter in the corresponding region of the QCD phase diagram is also extremely important in astrophysics, for example, for a correct description of the fusion of neutron stars.
In general, lattice QCD is a powerful tool to study the nonperturbative properties of strongly interacting matter from first principles.
By virtue of lattice simulations vital insight into QCD at finite temperature~\cite{Borsanyi:2016bzg}, nonzero magnetic field~\cite{DElia:2013xrj},
isospin chemical potential~\cite{Brandt:2017oyy, Scior:2017cne} and chiral chemical potential~\cite{Braguta:2015zta, Braguta:2015owi, Braguta:2016aov} has been obtained.
An interesting area of finite temperature lattice simulations is the study of
the interaction between a quark-antiquark pair and the interaction of the
pair with the QCD medium (see e.g.~\cite{Kaczmarek:2002mc, Kaczmarek:2004gv, Kaczmarek:2005ui, Kaczmarek:2005gi,Maezawa:2010vj,Maezawa:2011aa}).
The in-medium properties of QCD are prominently encoded in the correlation function of Polyakov loops (i.e. Polyakov lines with maximum temporal extent). As the Polyakov loop constitutes a quasi order parameter of the strong interactions, its correlator is greatly affected by the phase transitions which take place in QCD. One pertinent example is the confinement/deconfinement phase
transition which considerably modifies the value of the correlator.
Furthermore, the Polyakov loop correlator is directly related to the free energy of the in-medium quark-antiquark pair. In the confinement phase the extracted free energy is known to be a linearly increasing function at intermediate distances. At zero temperature and distances $\sim 1.2~$fm the free energy asymptotes to a plateau due to the string breaking phenomenon. In the deconfinement phase, on the other hand, the free energy also flattens off at large distances, the reason being a screening of the interactions between the quark and antiquark by the liberated colored medium degrees of freedom. The question of whether or how the screening properties of QCD may be captured by an equally simple Debye screening formula, in analogy with the Abelian theory, is an ongoing field of research. The properties of the correlation function of Polyakov loops at finite temperature in QCD have been thoroughly studied in lattice simulations~\cite{Kaczmarek:2002mc, Kaczmarek:2004gv, Kaczmarek:2005ui, Kaczmarek:2005gi,Maezawa:2010vj,Maezawa:2011aa}. More recently the Polyakov loop correlator on the lattice has been compared to effective field theory predictions, both in a perturbative setting in pNRQCD, as well as in perturbatively matched EQCD~\cite{Bazavov:2018wmo}. For analytic studies of the Polyakov loop see e.g.~\cite{Fischer:2013eca,Agasian:2017tag}.
While it is an interesting proposition to carry out similar studies of the Polyakov loop at finite baryon density in QCD, the usual methods of lattice QCD unfortunately break down because of the so-called sign problem. Instead, approaches such as the Taylor expansion or analytical continuation (both in quark chemical potential) allow one to obtain useful results at small values of the chemical potential (see e.g.~\cite{Bazavov:2017dus,Gunther:2016vcp,DElia:2016jqh}).
There have been many analytical attempts to study the properties of dense matter (see e.g.~\cite{Buballa:2003qv, Alford:2007xm, Andreichikov:2017ncy}). However, most of them are not based on first principles and it is not clear how to reliably estimate the systematic uncertainty of the different models.
Instead of pursuing the question of finite density physics in QCD directly, we here turn to the study of theories, which are similar to QCD but are not plagued by the sign problem. We believe that in particular the study of dense two-color QCD~\cite{Kogut:1999iv,Kogut:2000ek} allows us to learn about the properties of regular QCD at nonzero chemical potential. Other candidate theories not further pursued here are e.g. QCD at nonzero isospin chemical potential~\cite{Son:2000xc, Janssen:2015lda}. Of course we cannot expect to obtain quantitative predictions from such a strategy, while vital qualitative insight may be gained.
Two-color QCD at finite chemical potential has been studied quite intensively with lattice simulations before, see, e.g.~\cite{Hands:1999md,Muroya:2002ry,Kogut:2002cm, Cotter:2012mb, Braguta:2016cpw, Holicki:2017psk} and references therein.
Mostly these papers aim at the study of the phase diagram of two-color QCD in the region of small and moderate baryon densities.
The phase structure of two-color QCD at large baryon densities was studied in our previous paper~\cite{Bornyakov:2017txe}, where lattice simulations were carried out at a relatively small lattice spacing $a=0.044$ fm. Compared to previous works, this allows us to extend the range of accessible values of the baryon density, up to quark chemical potential $\mu>2000~$MeV, avoiding strong lattice artifacts. The main result of the paper~\cite{Bornyakov:2017txe} is the observation of the confinement/deconfinement transition at finite density and low temperature.
In view of this finding we are interested in studying the properties of the Polyakov loop correlation function in cold dense quark matter and to shed light on how they are affected by the confinement/deconfinement transition, which takes place at finite density. This is the central question addressed in this paper.
The manuscript is organized as follows. In the next section~\ref{sec:simdet} we describe our lattice set-up. In section~\ref{sec:phasediag} we give an overview of the present status of the cold dense two-color QCD phase diagram.
Section~\ref{sec:grandpot} is devoted to the calculation of the renormalized Polyakov loop correlation functions, as well as of the
color-averaged, color-singlet and color-triplet grand potentials. In addition, in the same section we determine the renormalized Polyakov loop and the grand potential of a single quark/antiquark.
Using the color singlet grand potential, the quark number and internal energy induced by a static quark-antiquark pair are obtained in section~\ref{sec:quarknum}. We consider the string breaking phenomenon in dense quark matter in section~\ref{sec:stringbreak} and Debye screening in section~\ref{sec:DebyeScreen}. In the last section~\ref{sec:conslusion} we discuss our results and conclude.
\section{Simulation details}
\label{sec:simdet}
In our lattice study we used the tree level improved Symanzik gauge action~\cite{Weisz:1982zw, Curci:1983an}. For the fermionic degrees of freedom we used staggered fermions with an action of the form
\begin{equation}
\label{eq:S_F}
S_F = \sum_{x, y} \bar \psi_x M(\mu, m)_{x, y} \psi_y + \frac{\lambda}{2} \sum_{x} \left( \psi_x^T \tau_2 \psi_x + \bar \psi_x \tau_2 \bar \psi_x^T \right)
\end{equation}
with
\begin{equation}
\label{eq:Dirac_operator}
M(\mu,m)_{xy} = ma\delta_{xy} + \frac{1}{2}\sum_{\nu = 1}^4 \eta_{\nu}(x)\Bigl[ U_{x, \nu}\delta_{x + h_{\nu}, y}e^{\mu a\delta_{\nu, 4}} - U^\dagger_{x - h_{\nu}, \nu}\delta_{x - h_{\nu}, y}e^{- \mu a\delta_{\nu, 4}} \Bigr]\,,
\end{equation}
where $\bar \psi$, $\psi$ are staggered fermion fields, $a$ is the lattice spacing, $m$ is the bare quark mass, and $\eta_{\nu}(x)$ are the standard staggered phase factors: $\eta_1(x) = 1,\, \eta_\nu(x) = (-1)^{x_1 + ...+ x_{\nu-1}},~\nu=2,3,4$.
The chemical potential $\mu$ is introduced into the Dirac operator ~\eqref{eq:Dirac_operator} through the multiplication of the links along
and opposite to the temporal direction by factors $e^{\pm \mu a}$ respectively.
This way of introducing
the chemical potential makes it possible to avoid additional divergences and to reproduce well known
continuum results~\cite{Hasenfratz:1983ba}.
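The modification of the temporal links by the factors $e^{\pm \mu a}$ in Eq.~(\ref{eq:Dirac_operator}) can be made concrete in a toy numpy sketch (not the production simulation code; the helper \texttt{random\_su2} is our own construction):

```python
# Minimal sketch of how the quark chemical potential enters Eq. (2):
# only temporal links are modified, U_{x,4} -> e^{+mu a} U_{x,4} in the
# forward hopping and U^dag_{x-4,4} -> e^{-mu a} U^dag in the backward one.
import numpy as np

def random_su2(rng):
    """Random SU(2) matrix from a unit quaternion a0 + i a.sigma."""
    a = rng.normal(size=4)
    a /= np.linalg.norm(a)
    return np.array([[ a[0] + 1j * a[3],  a[2] + 1j * a[1]],
                     [-a[2] + 1j * a[1],  a[0] - 1j * a[3]]])

rng = np.random.default_rng(0)
U_t = random_su2(rng)           # temporal link U_{x,4}
mu_a = 0.3                      # chemical potential in lattice units

forward  = np.exp(+mu_a) * U_t           # enters the e^{+mu a} hopping term
backward = np.exp(-mu_a) * U_t.conj().T  # enters the e^{-mu a} hopping term

# The modified links are no longer unitary individually, but a closed
# temporal hop forward and back picks up compensating factors
# e^{+mu a} e^{-mu a} = 1, which is why this prescription avoids the
# additional divergences mentioned above.
print(np.allclose(forward @ backward, np.eye(2)))
```

Only Polyakov lines winding around the temporal boundary retain a net factor $e^{\pm \mu a N_\tau} = e^{\pm \mu/T}$, which is how the chemical potential couples to the quark number.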
In addition to the standard staggered fermion action we add
a diquark source term~\cite{Hands:1999md}
to equation (\ref{eq:S_F}). The diquark source term explicitly violates $U_V(1)$ and allows one to observe diquark
condensation even on finite lattices, because this term effectively chooses one vacuum from the family of $U_V(1)$-symmetric vacua.
Typically one carries out numerical simulations at a few nonzero values of the parameter $\lambda$ and then extrapolates to zero $\lambda$.
Notice, however, that this paper is aimed at studying the region of large baryon density where lattice simulations are numerically very expensive. For this reason, in this paper we have chosen a different strategy. Most of our lattice simulations are conducted at a single fixed value $\lambda=0.00075$.
In order to check
the $\lambda$-dependence of our results for chemical potentials $a\mu=0.0,\,0.1,\,0.2,\,0.3,\,0.4$ we carry out additional lattice simulations
for values of $\lambda=0.0005,\,0.001$.
Integrating out the fermion fields, the partition function for the theory with the action $S=S_G+S_F$
can be written in the form
\begin{eqnarray}
Z = \int DU e^{-S_G} \cdot Pf \begin{pmatrix} \lambda \tau_2 & M \\ -M^T & \lambda \tau_2 \end{pmatrix} = \int DU e^{-S_G} \cdot {\bigl ( \det (M^\dagger M + \lambda^2) \bigr )}^{\frac 1 2},
\label{z1}
\end{eqnarray}
which corresponds to $N_f=4$ dynamical fermions in the continuum limit. Note that the Pfaffian $Pf$ is strictly positive, such that one can use
Hybrid Monte-Carlo methods to study this system.
First lattice studies of the theory with partition function (\ref{z1}) have been carried
out in the following papers~\cite{Kogut:2001na, Kogut:2001if, Kogut:2002cm}. In
the present study we are going to investigate instead a theory with the partition function
\begin{eqnarray}
Z=\int DU e^{-S_G} \cdot {\bigl ( \det (M^\dagger M + \lambda^2) \bigr )}^{\frac 1 4},
\label{z2}
\end{eqnarray}
which corresponds to $N_f=2$ dynamical fermions in the continuum limit.
It is known that the symmetries of the staggered fermion action are different from those of two-color QCD
with fundamental quarks~\cite{Hands:1999md}. In particular, the symmetry breaking pattern of QC$_2$D
with fundamental quarks is SU(2$N_f$) $\to$ Sp(2$N_f$), whereas for staggered quarks it is SU(2$N_f$) $\to$ O(2$N_f$).
Notice, however, that in the naive continuum limit for the staggered action with the diquark source term, the action factorizes into two copies of $N_f=2$ fundamental fermions\cite{Braguta:2016cpw}.
In addition, for sufficiently small lattice spacing $a$ the $\beta$-function of the theory (\ref{z2}) measured in \cite{Braguta:2016cpw} corresponds
to the $\beta$-function of QC$_2$D with two fundamental flavors. For these reasons we believe, that
the partition function (\ref{z2}) after the rooting procedure corresponds to QC$_2$D with $N_f = 2$ fundamental fermions with a correct
continuum chiral symmetry breaking pattern.
The results presented in this paper have been obtained in lattice simulations performed on a $32^4$ lattice for a set of the chemical potentials in the region $a\mu \in (0, 0.5)$.
At zero density we performed scale setting using the QCD Sommer scale $r_0=0.468(4) \mathrm{~fm}$ ~\cite{Bazavov:2011nk}.
In this case the string tension associated to $\mu_q=0$ amounts to $\sqrt{\sigma_0}=476(5) \mathrm{~MeV}$ at $a = 0.044 \mathrm{~fm}$.
Numerical simulations in the region of large baryon density require
considerable computer resources. For this reason, for the present paper
we performed our study at a pion mass of $m_{\pi}=740(40) \mathrm{~MeV}$, where the cost is manageable. We will preferentially choose a smaller pion mass in future simulations.
To calculate Wilson loops we have employed the following smearing scheme: one step of HYP smearing~\cite{Hasenfratz:2001hp} for temporal links with the smearing parameters of the HYP2 parameter set~\cite{DellaMorte:2005nwx}, followed by 100 steps of APE smearing~\cite{Albanese:1987ds} (for spatial links only) with a smearing parameter $\alpha_{APE} = 2/3$. We found that the HYP2 parameter set provides a better signal-to-noise ratio for Wilson loops and correlators of the Polyakov loops than the HYP1 set. For the calculation of the Polyakov loop and of the color-averaged~(\ref{eq:Omega_av}), color-singlet~(\ref{eq:Omega_1}) and color-triplet~(\ref{eq:Omega_3}) correlators, one step of HYP2 smearing for temporal links was performed; in the case of the singlet and triplet correlators, the Coulomb gauge was fixed first (without residual gauge fixing). The formulas used in the calculation are
\begin{eqnarray}
\label{eq:Omega_av}
\text{exp}\left[- \frac{\Omega_{\bar q q}(r, \mu)}{T}\right] &=& \frac{1}{4} \left\langle \text{Tr} L(\vec{r}) \text{Tr} L^\dagger(0) \right\rangle\,, \\
\label{eq:Omega_1}
\text{exp}\left[- \frac{\Omega_1(r, \mu)}{T}\right] &=& \frac{1}{2} \left\langle \text{Tr} L(\vec{r}) L^\dagger(0) \right\rangle\,, \\
\label{eq:Omega_3}
\text{exp}\left[- \frac{\Omega_3(r, \mu)}{T}\right] &=& \frac{1}{3} \left\langle \text{Tr} L(\vec{r}) \text{Tr} L^\dagger(0) \right\rangle - \frac{1}{6} \left\langle \text{Tr} L(\vec{r}) L^\dagger(0) \right\rangle\,.
\end{eqnarray}
The reason why we performed smearing is that, due to the large time extension ($L_t = 32$) correlators of the Polyakov loops by default are very noisy. Thus one has to introduce some smoothing technique to extract the signal. An analogous smearing scheme was applied in the papers~\cite{Bonati:2017uvz,Bonati:2014ksa}.
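The measurement of the correlators~(\ref{eq:Omega_av})--(\ref{eq:Omega_3}) can be sketched on a single random SU(2) configuration (no smearing, no gauge fixing, one spatial dimension for brevity; this is a toy illustration of the color algebra, not the analysis code):

```python
# Toy evaluation of the color-averaged, singlet and triplet correlator
# integrands on one random SU(2) "configuration".
import numpy as np

def random_su2(rng):
    """Random SU(2) matrix from a unit quaternion."""
    a = rng.normal(size=4)
    a /= np.linalg.norm(a)
    return np.array([[ a[0] + 1j * a[3],  a[2] + 1j * a[1]],
                     [-a[2] + 1j * a[1],  a[0] - 1j * a[3]]])

rng = np.random.default_rng(1)
Ns, Nt = 4, 8
# Temporal links U_4(x, tau) on a one-dimensional spatial slice.
U = [[random_su2(rng) for _ in range(Nt)] for _ in range(Ns)]

def polyakov_line(x):
    """Untraced Polyakov line L_x = prod_tau U_4(x, tau)."""
    L = np.eye(2, dtype=complex)
    for tau in range(Nt):
        L = L @ U[x][tau]
    return L

Lx, Ly = polyakov_line(0), polyakov_line(2)
t1 = np.trace(Lx) * np.trace(Ly.conj().T)   # Tr L(x) Tr L^dag(y)
t2 = np.trace(Lx @ Ly.conj().T)             # Tr [L(x) L^dag(y)]

avg     = t1 / 4            # color-averaged integrand
singlet = t2 / 2            # color-singlet integrand
triplet = t1 / 3 - t2 / 6   # color-triplet integrand

# Consistency of the SU(2) color decomposition 2 x 2bar = 1 + 3:
# averaged = (1/4) singlet + (3/4) triplet.
assert np.isclose(avg, singlet / 4 + 3 * triplet / 4)
```

The assertion verifies the decomposition $e^{-\Omega_{\bar q q}/T} = \frac{1}{4} e^{-\Omega_1/T} + \frac{3}{4} e^{-\Omega_3/T}$ implied by the three definitions.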
\section{The phase diagram of dense two-color QCD at low temperatures}
\label{sec:phasediag}
Let us explore the tentative phase structure of two-color QCD as basis for our study of interquark interactions. Based on symmetry arguments it is possible to build a chiral perturbation theory (ChPT) for sufficiently small chemical potential~\cite{Kogut:1999iv, Son:2000xc, Kogut:2000ek}.
This ChPT can be used to predict the phase transitions at sufficiently small values of
the chemical potential. In particular, it was predicted that for small
values of chemical potential ($\mu<m_{\pi}/2$) the system is in the hadronic phase.
In this phase the system exhibits confinement and chiral symmetry is broken.
\begin{figure}[hb]
\centering
\includegraphics[scale=0.8]{./figs/phase_diagram.pdf}
\caption{Schematic phase diagram of dense two-color QCD at low temperatures.}
\label{fig:phase_diagram}
\end{figure}
At $\mu=m_{\pi}/2=370(20)$~MeV~($a\mu\simeq0.08$ in lattice units) there is a second order phase transition to a
phase where scalar diquarks form a Bose-Einstein condensate (BEC phase).
The order parameter of this transition is the diquark condensate $\langle q^T \bigl [ (C\gamma_5)\times \tau_2 \times \sigma_2 \bigr ] q \rangle$,
where $C\gamma_5$ is the matrix which acts on Dirac indices and $\tau_2, \sigma_2$ are Pauli matrices which act on flavor and color indices of the quark field $q$.
In the massless limit there is no chiral symmetry breaking, if the diquarks are condensed.
However, for massive quarks the chiral condensate is not zero. Instead it is proportional
to the quark mass and
decreases with increasing chemical potential. Let us note that dense QC$_2$D
in the hadronic phase and
the BEC phase was intensively studied within lattice simulations in a series of papers~\cite{Hands:1999md, Kogut:2001na, Kogut:2001if, Boz:2015ppa, Braguta:2016cpw, Holicki:2017psk}
where reasonable agreement with ChPT was observed.
In ChPT the interactions between the different degrees of freedom are accounted for by perturbation theory, so they are assumed to be weak, and in addition the baryon density in the region of applicability is small. Together with the fact that in two-color QCD the diquarks are baryons, this implies that the system is similar to a dilute baryon gas for $\mu$ above $m_{\pi}/2$ but below the values corresponding to large density.
Increasing the baryon density further, we proceed to dense matter, where the interactions between baryons cannot be accounted for within perturbation theory. This transition can be seen through the deviation of various observables from the predictions of ChPT. In our paper~\cite{Braguta:2016cpw}
this deviation was observed in the diquark condensate, the chiral condensate and the baryon density.
At sufficiently large baryon density ($\mu \sim 1000~\mbox{MeV},~a\mu \sim 0.22$) some observables of the system under study can be described using Bardeen-Cooper-Schrieffer theory (BCS phase)\footnote{The properties of the BCS phase
will be considered in a forthcoming study of ours.}.
In particular, the baryon density is well described by the density of noninteracting
fermions which occupy a Fermi sphere of radius $r_F=\mu$. In addition, the diquark
condensate, which plays the role of a condensate of Cooper pairs, is proportional to
the Fermi surface. In lattice simulations the BCS phase was observed in the papers~\cite{Hands:2006ve, Hands:2010gd, Cotter:2012mb, Braguta:2016cpw}, and we found that the transition from the BEC to the BCS phase is smooth~\cite{Braguta:2016cpw}.
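The statement that the baryon density is well described by noninteracting quarks filling a Fermi sphere of radius $p_F=\mu$ admits a quick back-of-envelope estimate (the degeneracy $g = 2_{\rm spin}\times 2_{\rm color}\times 2_{\rm flavor} = 8$ and the numbers below are our own illustrative assumption, not a lattice result):

```python
# Free-quark estimate of the number density in the BCS regime:
# n_q = g * p_F^3 / (6 pi^2) with p_F = mu and degeneracy g.
import math

def free_quark_density(mu_mev, g=8):
    """Quark number density in MeV^3 for chemical potential mu (MeV).

    g = 8 assumes 2 spins x 2 colors x 2 flavors (two-color, N_f = 2).
    """
    return g * mu_mev**3 / (6 * math.pi**2)

hbar_c = 197.327                              # MeV fm
n = free_quark_density(1000.0) / hbar_c**3    # convert MeV^3 -> fm^-3
print(f"n_q ~ {n:.1f} quarks/fm^3 at mu = 1000 MeV")
```

At $\mu = 1000$~MeV this gives a quark density of order $10$ per fm$^3$, i.e. a genuinely dense medium compared to nuclear matter.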
In addition to the transition to the BCS phase at $\mu \sim 1000$~MeV ($a\mu \sim 0.22$) there is the confinement/deconfinement transition in dense two-color QCD~\cite{Bornyakov:2017txe}. This transition manifests itself in a rise of the Polyakov loop and vanishing of the string tension.
It was also found that after deconfinement is achieved, the spatial string tension $\sigma_s$ decreases monotonically and finally vanishes at $\mu_q \geq 2000$ MeV ($a\mu \geq 0.45$). It should be noted that the results of this study suggest that the confinement/deconfinement transition is rather smooth.
The Polyakov loop results do not allow us to locate the transition region from the confinement to the deconfinement phase.
For this reason we consider here the transition region to be around $\mu = 1000$~MeV. This value was found in our previous study~\cite{Bornyakov:2017txe},
where it was determined by the condition that the string tension, extracted from the Wilson loops, becomes zero within
the uncertainty of the calculation. Thus throughout the paper we use the term "the confinement/deconfinement transition region" in the sense of vanishing of the string tension extracted from the Wilson loops.
\section{The grand potential of a static quark-antiquark pair in dense quark matter}
\label{sec:grandpot}
In this section we study the grand potential $\Omega_{\bar q q}(r, \mu)$ of a static quark-antiquark pair separated by a distance $r$ in the dense medium. It can be represented in terms of the correlator of Polyakov loops
\begin{eqnarray}
\frac{\Omega_{\bar q q}(r, \mu)}{T} = - \log \langle \tilde \operatorname{Tr} L_{\vec{x}} \tilde \operatorname{Tr} L_{\vec{y}}^{\dagger}\rangle + c(\mu),\; r = |\vec{x} - \vec{y}|,
\label{omega_r_mu}
\end{eqnarray}
where $\tilde \operatorname{Tr} = \frac{1}{2} \operatorname{Tr}$ and the Polyakov loop is given as the trace of the ordered product of gauge links in the temporal direction, $L_{\vec{x}} = \prod\limits_{\tau = 0}^{N_{\tau}-1} U_{0}(\vec{x}, \tau).$
The quantity $c(\mu)$ denotes a divergent renormalization constant, which is related to the self-energy of a quark or antiquark source. In the limit $r \to \infty$, the correlation between the Polyakov lines becomes negligible and the grand potential $\Omega_{\infty}(\mu)$ is given by the squared expectation value of the volume-averaged Polyakov loop, $\langle L \rangle = \langle N_s^{-3} \sum_{\vec{x}} \tilde \operatorname{Tr} L_{\vec x}\rangle$:
\begin{eqnarray}
\frac{\Omega_{\infty}(\mu)}{T} = \frac{1}{T}\lim\limits_{r \to \infty}\Omega_{\bar q q}(r, \mu) = -\log |\langle L \rangle|^2 + c(\mu).
\label{omega_r}
\end{eqnarray}
To find the grand potentials from formulae (\ref{omega_r_mu}) and (\ref{omega_r}) one has to determine the renormalization constant $c(\mu)$.
In pure gauge theory the expectation value of the Polyakov line, which is defined as
\begin{eqnarray}
L^{ren}(\mu)=\exp {(-\Omega_{\infty}(\mu)/2T)},
\label{Polyakov_line_ren}
\end{eqnarray}
is the order parameter of the confinement/deconfinement transition.
In particular, $L^{ren}(\mu)$ vanishes in the confined phase, whereas it is non-zero in the deconfined phase.
After inclusion of dynamical quarks in the simulations, the expectation value of the Polyakov line is no longer an order parameter.
However, one can interpret $\Omega_{\infty}(\mu)/2$ as the grand potential of one quark or one antiquark in dense quark matter.
Thus one may expect that $\Omega_{\infty}(\mu)$ is much larger in the confined phase than in the deconfined phase.
Below we will also need the color-singlet grand potential $\Omega_1(r,\mu)$, which is defined as
\begin{eqnarray}
\frac{\Omega_1(r, \mu)}{T} = - \log\langle \tilde \operatorname{Tr} (L_{\vec x} L_{\vec y}^{\dagger})\rangle + c^\prime(\mu).
\end{eqnarray}
Notice that contrary to the color averaged grand potential $\Omega_{\bar q q}(r,\mu)$,
the singlet one $\Omega_1(r,\mu)$ is not gauge invariant. So, in order
to calculate $\Omega_1(r,\mu)$ we have to fix the gauge, and we conventionally choose the Coulomb gauge.
Both the color averaged and the color singlet grand potentials are calculated up to renormalization constants. Now let us define the relative normalization of these observables. It is clear that at sufficiently large spatial separation between the quarks the relative orientation of the charges in color space is not important due to screening. For this reason the authors of~\cite{Kaczmarek:2002mc, Kaczmarek:2004gv, Kaczmarek:2005ui} chose a relative normalization of the color averaged and color singlet free energies such that they are identical at large distances. In this paper we use the same relative normalization between $\Omega_{\bar q q}(r, \mu)$ and $\Omega_1(r, \mu)$.
For two colors, the color averaged grand potential $\Omega_{\bar q q}(r, \mu)$ can be represented~\cite{Nadkarni:1986cz} through the
color singlet $\Omega_1(r, \mu)$ and the color triplet grand potential $\Omega_3(r, \mu)$ as
\begin{eqnarray}
\exp \left(-\frac{\Omega_{\bar q q}(r, \mu)}{T}\right) = \frac{1}{4} \exp \left(-\frac{\Omega_1(r, \mu)}{T}\right) + \frac{3}{4} \exp \left(-\frac{\Omega_3(r, \mu)}{T}\right).
\label{eq:exponents}
\end{eqnarray}
Let us consider short distances ($r \mu \ll 1$), i.e. distances where Debye screening can be neglected.
In this limit the running of the coupling constant is determined by the scale $\sim 1/r$, and the influence
of the chemical potential on the running coupling can be neglected. The perturbative one-gluon exchange
expression for the color singlet and the color triplet grand potentials at short distances can be written as
\begin{eqnarray}
\Omega_1(r, \mu ) = -3 \Omega_3(r, \mu) + \mathcal{O}(g^4) = -\frac{g^2(r)}{8 \pi r} + \mathcal{O}(g^4).
\label{eq:relation}
\end{eqnarray}
We have already discussed the relative normalization between the grand potentials $\Omega_{\bar q q}(r, \mu)$ and $\Omega_1(r, \mu)$. Thus, to renormalize the grand potentials it is sufficient to renormalize one of them. Let us consider $\Omega_1(r, \mu)$.
To do this we use the procedure proposed in~\cite{Kaczmarek:2002mc, Kaczmarek:2004gv, Kaczmarek:2005ui},
adapted here to the calculation at finite density. The grand potential at finite temperature and chemical
potential is defined as
\begin{eqnarray}
\Omega_1(r, T, \mu) = U_1(r, T, \mu) - T S_1(r, T, \mu)-\mu N_1(r, T, \mu),
\label{grand_p}
\end{eqnarray}
where $U_1(r, T, \mu)$ is the internal energy, $S_1(r, T, \mu)$ is the entropy and $N_1(r, T, \mu)$ is the quark number density of a static color singlet quark-antiquark pair. From the above discussion it is clear that at short distances $r$ the grand potential
depends neither on the chemical potential $\mu$ nor on the temperature $T$. This implies that at short distances
the entropy $S_1 = - \partial \Omega_1 / \partial T$ and the quark number density $N_1 = - \partial \Omega_1 / \partial \mu$ vanish.
It is also clear that at short distances the internal energy equals the interaction potential of a quark-antiquark pair at zero temperature and density.
So, at short distances the grand potential $\Omega_1(r, T, \mu)$ coincides with the zero temperature and density
potential $V(r)$, which is calculated in Appendix A. Similarly to papers~\cite{Kaczmarek:2002mc, Kaczmarek:2004gv, Kaczmarek:2005ui},
we fix the renormalization constant $c'(\mu)$ by matching $\Omega_1(r, \mu)$ at short distances to the
short distance behavior of the interaction potential $V(r)$.
The renormalization of the grand potential $\Omega_{\bar q q}(r, \mu)$ can be fixed by matching at large distances $r$, where the color averaged and the color singlet grand potentials are expected to be identical.
Evidently, this procedure removes the divergent self-energy contributions and uniquely fixes the renormalization constants
$c(\mu)$ and $c'(\mu)$.
\begin{figure}[t]
\begin{minipage}[t]{0.48\textwidth}
\centering
\includegraphics[width=1.00\textwidth]{./figs/F1_data.pdf}
\caption{The color singlet grand potential as a function of distance for a few values of the chemical potential under study. The black curve is the potential of a static quark-antiquark pair at zero density and temperature. Note the absence of a Coulombic small distance regime, due to smearing.}
\label{fig:F1data}
\end{minipage}
\hfill
\begin{minipage}[t]{0.48\textwidth}
\centering
\includegraphics[width=1.00\textwidth]{./figs/F_data.pdf}
\caption{The grand potential $\Omega_{\bar q q}$ as a function of distance for a few values of the chemical potential under study. The black curve is the potential of the static quark-antiquark pair at zero density and temperature. Note the absence of a Coulombic small distance regime, due to smearing.}
\label{fig:Fdata}
\end{minipage}
\end{figure}
Having gone through the renormalization procedure we are ready to present the results of the calculation of the renormalized
grand potentials $\Omega_{\bar q q}(r, \mu)$, $\Omega_1(r, \mu)$, $\Omega_3(r, \mu)$.
In figure~\ref{fig:F1data} we plot the renormalized $\Omega_1(r,\mu)$ as a function of distance for different values of the chemical potential.
In figure~\ref{fig:Fdata} we plot the grand potential
$\Omega_{\bar q q}(r, \mu)$.
To compare $\Omega_{\bar q q}(r, \mu)$, $\Omega_1(r, \mu)$ and $\Omega_3(r, \mu)$ directly,
in figure~\ref{fig:F1F3F} we show these potentials for the values $\mu=671$~MeV
and $\mu=1790$~MeV.
The grand potential of a single quark/antiquark in quark matter and the Polyakov loop are important quantities in two-color QCD.
After renormalization these observables can be extracted from the Polyakov loop correlator at large distances.
In the calculation we take $\Omega_1(\infty, \mu)=\Omega_1(L_s/2, \mu)$ and calculate the renormalized Polyakov loop
by applying formula (\ref{Polyakov_line_ren}). Notice that for this calculation it is important that
the grand potential extracted from the Polyakov loop correlator reaches a plateau value.
In the confined phase, the plateau in the grand potential is due to string breaking, which takes place
at large distance. Due to the relatively small spatial lattice size we can observe the string breaking
only for sufficiently large chemical potential ($\mu > 440$~MeV). For this reason
the calculation of the $\Omega_1(\infty, \mu)$ and $L^{ren}(\mu)$ based on the renormalized
correlator of the Polyakov loops can be carried out for $\mu > 440$~MeV. The results for $\Omega_1(\infty, \mu)$ and $L^{ren}(\mu)$
are shown in figure~\ref{fig:F1_inf} and in figure~\ref{fig:F1_inf_exp} by red triangles.
\begin{figure}[t]
\centering
\includegraphics[width=0.75\textwidth]{./figs/F1_F3_F.pdf}
\caption{The singlet, triplet and the color averaged grand potentials for two values of the chemical potential: $\mu=671$~MeV and $\mu=1790$~MeV.}
\label{fig:F1F3F}
\end{figure}
The renormalization of $\Omega_1(\infty, \mu)$ and the Polyakov line can alternatively be carried out through
the direct measurement of the latter on the lattice. In this case it is possible to find both observables
for all values of the chemical potential under study. To calculate the Polyakov loop
we apply one step of HYP smearing. The Polyakov loop is renormalized according to
\begin{eqnarray}
L^{ren}(\mu)=L^{bare}(\mu) \frac {L^{ren}(\mu=1030~\mbox{MeV})} {L^{bare}(\mu=1030~\mbox{MeV})},
\label{Lren}
\end{eqnarray}
where $L^{ren}(\mu=1030~\mbox{MeV})$
is the Polyakov loop measured in the previous approach\footnote{The point $\mu=1030~$MeV was chosen since here we have large statistics and, therefore, rather good accuracy in the calculation of $L^{ren}(\mu=1030~\mbox{MeV})$.},
and $L^{bare}(\mu)$ is the bare Polyakov loop measured on the lattice. Similarly to the renormalization of the
correlators of the Polyakov loops, the approach based on (\ref{Lren}) removes the ultraviolet divergence and
uniquely fixes the renormalization.
Having calculated the renormalized Polyakov line, we can find $\Omega_1(\infty, \mu)$ using formula (\ref{Polyakov_line_ren}).
The results for $\Omega_1(\infty, \mu)$ and $L^{ren}(\mu)$
are shown in figure~\ref{fig:F1_inf} and in figure~\ref{fig:F1_inf_exp} by blue circles.
From these figures one sees that both approaches to the calculation of $\Omega_1(\infty, \mu)$ and $L^{ren}(\mu)$
agree with each other.
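The two renormalization steps, eqs. (\ref{Lren}) and (\ref{Polyakov_line_ren}), amount to a simple ratio and a logarithm. A minimal sketch with placeholder numbers (the temperature and loop values below are illustrative, not our lattice data):

```python
import numpy as np

T = 45.0  # temperature in MeV -- a placeholder value, not our lattice temperature

def renormalize_polyakov(L_bare, L_bare_ref, L_ren_ref):
    """Multiplicative renormalization by a reference point, as in eq. (Lren)."""
    return L_bare * (L_ren_ref / L_bare_ref)

def omega1_inf(L_ren):
    """Invert eq. (Polyakov_line_ren): Omega_1(infty) = -2 T log L_ren."""
    return -2.0 * T * np.log(L_ren)

# Placeholder numbers for a bare loop and the mu = 1030 MeV reference point.
L_ren = renormalize_polyakov(L_bare=0.08, L_bare_ref=0.10, L_ren_ref=0.50)
Omega = omega1_inf(L_ren)   # in MeV
```

The same multiplicative constant cancels in the ratio for every $\mu$, which is why a single reference point fixes the renormalization uniquely.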
Here a few comments are in order: the measurements of the Polyakov loop correlation functions
in this section were carried out at $\lambda=0.00075$.
As was discussed above, in order to check the $\lambda$-dependence of our results we carried out a similar study
at $a\mu=0.0, 0.1, 0.2, 0.3, 0.4$ and $\lambda=0.0005, 0.001$. We found that, with the
exception of the chemical potential $a\mu=0.4$,
the results obtained with different values of the $\lambda$ parameter agree with each other within the uncertainty of the calculation.
For the chemical potential $a\mu=0.4$ the results obtained at different $\lambda$ deviate from each other by around $2\sigma$.
From this fact we conclude that the $\lambda$-dependence of our results is weak.
\begin{figure}[t]
\begin{minipage}[t]{0.48\textwidth}
\centering
\includegraphics[scale=0.49]{./figs/Finf_data.pdf}
\caption{The renormalized color singlet grand potential $\Omega_1(\infty, \mu)$ as a function of $\mu$. The red triangles correspond to $\Omega_1(\infty, \mu)$ extracted from the renormalized correlators of Polyakov loops. The blue circles correspond to $\Omega_1(\infty, \mu)$ extracted from the average Polyakov loops measured on the lattice and renormalized according to formula (\ref{Lren}).}
\label{fig:F1_inf}
\end{minipage}
\hfill
\begin{minipage}[t]{0.48\textwidth}
\centering
\includegraphics[scale=0.49]{./figs/Fexpinf_data.pdf}
\caption{The renormalized Polyakov loop $L^{ren}(\mu)$ as a function of $\mu$. The red triangles correspond to $L^{ren}(\mu)$ extracted from the renormalized correlators of Polyakov loops. The blue circles correspond to the average Polyakov loops measured on the lattice and renormalized according to formula (\ref{Lren}).}
\label{fig:F1_inf_exp}
\end{minipage}
\end{figure}
The confinement/deconfinement transition at finite temperature manifests itself in an increasing value of the Polyakov loop and its rise may become quite rapid in the transition region~\cite{Kaczmarek:2005ui}. A similar
behaviour can be seen from figure~\ref{fig:F1_inf_exp}, where the confinement/deconfinement transition is observed through the rise of the Polyakov line.
At the same time, from this figure we do not see any specific region of the chemical potential where the rise of the Polyakov line
is dramatically different compared to other regions.
This observation corroborates our previous finding that the finite density confinement/deconfinement transition
in two-color QCD is rather smooth.
\footnote{Our data do not allow us to confirm or exclude the existence of an inflection point at some value of the chemical
potential, similar to the inflection point in the temperature dependence of the Polyakov
loop. The search for a possible inflection point in dense matter requires better accuracy and
more data points than are available in this study.}
Let us now turn to the region $\mu>2000~$MeV. In this region the Polyakov loop reaches a maximum and then drops, while the grand potential reaches a minimum and then rises. Below it will be shown that the region $\mu>2000~$MeV differs from the region $\mu < 2000~$MeV not only in the Polyakov loop
but also in the grand potential. Derived observables, such as the screening length $R_{sc}$, the Debye mass and the effective coupling constant, also show a distinctive behavior there.
At the moment we do not fully understand the physics responsible for this behavior. One possibility is that the value of the chemical potential
$\mu \sim 2000~$MeV is special, since the spatial string tension is nonzero in the region $\mu<2000~$MeV whereas
it vanishes for $\mu > 2000~$MeV. This might imply that the point $\mu \sim 2000~$MeV separates
systems with and without magnetic screening. However, a definite answer to this hypothesis requires further study.
From figure~\ref{fig:F1_inf} one sees that the potential $\Omega_{\bar q q}(r,\mu)$ changes its sign at $\mu \sim 1300~$MeV, which at first sight may be unexpected. However, let us recall that $\Omega_{\bar q q}(r,\mu)$ is not the grand potential of the whole system.
Rather, it is the difference between the grand potential of dense quark matter with a static quark-antiquark pair
and that of dense quark matter without the pair. Thus, in our context $\Omega_{\bar q q}(r,\mu)$ is the
additional grand potential due to the introduction of the quark-antiquark pair into quark matter. Now figure~\ref{fig:F1_inf}
can be interpreted as follows: introducing a static quark-antiquark pair increases the grand potential of the system
for $\mu<1300~$MeV and decreases the grand potential of the system for $\mu>1300~$MeV.
An explanation of this fact will be presented in the next section.
The authors of~\cite{Cotter:2012mb} studied QC$_2$D with $N_f=2$ quarks in lattice simulations with dynamical Wilson fermions.
In particular, they measured the Polyakov loop as a function of the chemical potential and observed the following behavior:
it remains zero up to $a\mu \sim 0.75$ and then quickly rises. In the region $a \mu>1$, due to the saturation of the quark degrees of
freedom, quarks are no longer dynamical; the theory becomes quenched QCD and exhibits confinement, i.e. the Polyakov loop goes to zero for $a \mu>1$.
A further measurement of the string tension carried out in~\cite{Boz:2013rca} did not confirm the presence of a confinement/deconfinement transition,
nor the decrease of the string tension with the chemical potential. Although the behavior of the Polyakov line in figure~\ref{fig:F1_inf_exp}
seems similar to that obtained in~\cite{Cotter:2012mb}, we believe that this behavior can be explained as a lattice artifact of Wilson fermions.
In order to explain the results of~\cite{Cotter:2012mb}, let us recall that with Wilson fermions one has one light quark and
15 heavy quark species with masses $\sim 1/a$. If the chemical potential is $a\mu \sim 1$, the heavy quarks are no longer suppressed and
additional color degrees of freedom are released into the system under study. We believe that this mechanism is responsible for the
rise of the Polyakov line observed in~\cite{Cotter:2012mb}. Notice that this mechanism does not work for the staggered quarks used in
our paper, since there are no heavy species. In addition, we observe a considerable rise of the Polyakov loop already at $a\mu \sim 0.2$.
The decrease of the Polyakov line in figure~\ref{fig:F1_inf_exp} hence cannot be attributed to saturation, since
it starts at $a\mu \sim 0.4$, which is rather far away from the saturation regime of staggered fermions~\cite{Braguta:2016cpw}.
\section{The quark number and internal energy induced by a static quark-antiquark pair}
\label{sec:quarknum}
The authors of~\cite{Kaczmarek:2005gi} calculated the free energy of a
static quark-antiquark pair in QCD at finite temperature.
In addition, they calculated the entropy of the QCD medium in the presence of a static quark-antiquark pair. In dense quark matter
the entropy also contributes to the grand potential (\ref{grand_p}); however,
this contribution is not important at low temperature. What becomes important in dense quark matter is
the quark number $N(r)$ induced by the static quark-antiquark pair. This quantity can be
calculated as follows
\begin{eqnarray}
N(r, \mu)=- \frac {\partial \Omega(r,\mu)} {\partial \mu}.
\end{eqnarray}
Notice that $N(r, \mu)$ is the quark number which arises in the system due to the introduction of the quark-antiquark pair into the dense matter. That is, it is the difference of the quark number with and without the static quark-antiquark pair.
\begin{figure}[t]
\begin{minipage}[t]{0.48\textwidth}
\centering
\includegraphics[scale=0.5]{./figs/N_1_data.pdf}
\caption{The quark number $N_1(r,\mu)$ induced by a static quark-antiquark pair as a function of distance for a few values of the chemical potential.}
\label{fig:N_1}
\end{minipage}
\hfill
\begin{minipage}[t]{0.48\textwidth}
\centering
\includegraphics[scale=0.5]{./figs/N_1_inf_data.pdf}
\caption{The quark number $N_1(r,\mu)$ at large distance as a function of chemical potential.}
\label{fig:N1_inf}
\end{minipage}
\end{figure}
In this paper we calculate $N_1(r, \mu)$ for the color singlet grand potential. Similar
observables can be calculated for the color averaged and color triplet grand potentials.
To find $N_1(r, \mu)$ we take the derivative of the grand potential with respect to the chemical potential through a finite difference approximation.
The results of the calculation of $N_1(r, \mu)$ for a few values of the chemical potential are shown in figure~\ref{fig:N_1}.
We found that the smaller the chemical potential, the larger the uncertainty of the calculation of $N_1(r, \mu)$.
For this reason we show $N_1(r, \mu)$ only for sufficiently large chemical potentials.
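The finite difference evaluation of $N_1 = -\partial \Omega_1/\partial\mu$ can be sketched as follows; the quadratic test function is purely illustrative and chosen because its derivative is known exactly:

```python
import numpy as np

def quark_number(mu_grid, omega_grid):
    """N_1(mu) = -dOmega_1/dmu via second-order finite differences
    (central differences in the interior, one-sided at the grid edges)."""
    return -np.gradient(omega_grid, mu_grid, edge_order=2)

# Toy check on a function with a known derivative: Omega = -mu^2  =>  N = 2 mu.
mu = np.linspace(500.0, 2000.0, 16)       # MeV, illustrative grid
N = quark_number(mu, -mu**2)
```

For second-order differences the quadratic test case is reproduced exactly; on real lattice data the grid spacing in $\mu$ and the statistical errors of $\Omega_1$ control the accuracy.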
From figure~\ref{fig:N_1} one sees that $N_1(r, \mu)$ rises from zero at short distances to some plateau value $N_1(\infty, \mu)$. The plateau value is an important observable, since it is proportional to the derivative of the Polyakov loop of a single quark/antiquark
in the dense medium with respect to the chemical potential. One might expect that at the critical chemical potential, where the confinement/deconfinement
phase transition takes place, the Polyakov loop has an inflection point. For this reason the confinement/deconfinement
phase transition might manifest itself as a maximum of $N_1(\infty, \mu)$. For the same reason the authors of~\cite{Kaczmarek:2005gi} observed a maximum of the entropy at the critical temperature. Our results for
$N_1(\infty, \mu)$ are shown in figure~\ref{fig:N1_inf}. Unfortunately, due to the large uncertainties of the calculation, we are unable to locate this maximum.
\begin{figure}[t]
\centering
\includegraphics[scale=0.7]{./figs/U_1_data.pdf}
\caption{The internal energy of a static quark-antiquark pair as a function of the distance for a few values of the chemical potential under study. The black curve is the potential of a static quark-antiquark pair at zero density and temperature.}
\label{fig:U_1}
\end{figure}
If we neglect the entropy contribution to the grand potential, which is small in cold dense matter, one can also calculate the internal
energy $U(r,\mu)$ using the formula
\begin{eqnarray}
U(r,\mu)=\Omega(r,\mu) + \mu N(r,\mu).
\end{eqnarray}
In this paper we calculate $U_1(r, \mu)$ for the color singlet grand potential; the result is shown in figure~\ref{fig:U_1}.
\section{String breaking in dense quark matter}
\label{sec:stringbreak}
Let us consider again figure~\ref{fig:F1data} and figure~\ref{fig:Fdata}. We have already mentioned
that in dense QC$_2$D the confinement/deconfinement transition takes place at $\mu\sim 1000~$MeV. Despite
this fact, from figure~\ref{fig:Fdata} we see that $\Omega_{\bar q q}(r,\mu)$ reaches a plateau already at $\mu=447~$MeV.
This happens because of the string breaking phenomenon, which for $\mu=447~$MeV takes place at $r\sim 0.5~$fm.
Of course string breaking occurs also for smaller chemical potentials, but we do not observe it, as it takes place beyond our spatial lattice size. From figure~\ref{fig:F1data} one also sees that the larger
the chemical potential, the smaller the distance at which the string breaking takes place. Let us consider this phenomenon more quantitatively
\footnote{At sufficiently large values of the chemical potential the screening properties of
the dense medium are described by the Debye screening phenomenon rather than by the string breaking.
In this section we consider the chemical potential values corresponding to nonzero string tension extracted from the Wilson loops \cite{Bornyakov:2017txe}.}.
\begin{figure}[t]
\centering
\includegraphics[scale=0.725]{./figs/r_screening_extended.pdf}
\caption{The screening length calculated from equation (\ref{r_screening}) as a function of the chemical potential. Black dashed lines represent the root mean squared radii $\sqrt{\langle r^2\rangle}$ of charmonia calculated in Appendix B. The blue dashed line is the description of the screening length $R_{sc}$ by the Debye screening formula (\ref{rsc_debye}).}
\label{fig:r_screening}
\end{figure}
To study the string breaking phenomenon we introduce the screening length $R_{sc}$ which can be calculated
from the solution of the equation~\cite{Kaczmarek:2002mc}
\begin{eqnarray}
V_{\mu=0}(R_{sc})=\Omega_{\bar q q} (\infty,\mu),
\label{r_screening}
\end{eqnarray}
where $V_{\mu=0}(r)$ is the static potential at zero density. For $\Omega_{\bar q q} (\infty,\mu)$
we take the grand potential calculated from the renormalized Polyakov loop measured on the lattice (see figure~\ref{fig:F1_inf}).
The results of this calculation are shown in figure~\ref{fig:r_screening}. This plot tells us that the larger the chemical
potential the smaller the string breaking distance.
In order to understand this behaviour, let us recall that in three-color QCD the string breaking phenomenon
can be explained by the possibility to break the string between static quarks by a quark-antiquark pair created
from the vacuum. If the length of the string is larger than a critical one, it becomes energetically favorable
to break the string and form two heavy-light mesons instead of increasing the length of the string.
In dense two-color QCD, in addition to the possibility to break the string by a quark-antiquark pair, it becomes
possible to break the string by two quarks. As a result, after the string has been broken, one ends up with a heavy-light meson
and a heavy-light diquark. Due to confinement, the two quarks have to be extracted from some hadron.
The two-color baryon -- the scalar diquark -- is a good candidate for such a hadron. Indeed, at nonzero $\mu$ the scalar
diquark is the lightest state in the system. At $\mu>m_{\pi}/2$ the scalar diquarks condense,
so the two quarks can be extracted from the diquark condensate, which does not require any energy. This picture is supported at large $\mu$
in the BCS phase, where one has a Fermi sphere with radius $\mu$. Evidently, one cannot break the string
by taking two quarks from deep inside the Fermi sphere: in that case the quarks which break the string would have to move, due to the interactions, from one point inside the Fermi sphere to some other point inside the Fermi sphere, but all such points are occupied.
So, the only possibility to break the string is to take two quarks close to the Fermi surface.
In the confined phase, quarks on the Fermi surface are condensed as diquarks. Thus we again
confirm the picture that the two quarks which break the string between a quark-antiquark pair
can be taken from the available diquarks.
Further, let us consider the following model: if one diquark penetrates inside the string,
it breaks the string with some probability $\omega$. For a diquark density $n$, string
length $R$ and transverse area $S$, the number of diquarks inside the string is $n \times R \times S$.
If the string breaking events are independent, the total probability to break the string is $P \simeq \omega \times n \times R \times S$.
The condition for string breaking is $P \sim 1$, from which we conclude that $R_{sc} \sim 1/n$.
Finally, the larger the chemical potential, the larger the condensate and the density of diquarks, which explains
why $R_{sc}$ decreases with $\mu$.
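The scaling $R_{sc} \sim 1/n$ of this independent-breaking model can be illustrated in a few lines of code (all numbers below are arbitrary illustrative units, not fitted values):

```python
# Independent-breaking model: P ~ omega * n * R * S, the string breaks when P ~ 1.
def screening_length(w, n, S):
    """R_sc obtained by solving w * n * R * S = 1 for R."""
    return 1.0 / (w * n * S)

# Arbitrary illustrative units: doubling the diquark density halves R_sc.
w, S = 0.1, 0.5          # breaking probability per diquark, string cross-section
r1 = screening_length(w, n=1.0, S=S)
r2 = screening_length(w, n=2.0, S=S)
```

The inverse proportionality $R_{sc} \propto 1/n$ is the only model prediction used in the text; the prefactor $1/(\omega S)$ is not determined here.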
If one increases the chemical potential, then at some density $R_{sc}$ becomes so small
that the string cannot be created: at the instant of creation it is immediately broken by the
two-color baryons -- diquarks. This is our hypothesis of the deconfinement mechanism in two-color
dense quark matter. It is not clear how to find unambiguously the distance at which the
string ceases to be stable. From the interaction potential at zero density (see figure~\ref{fig:F1data}) we estimate
that this happens in the region $r\in (0.2,0.3)~$fm. Using figure~\ref{fig:r_screening} one can
infer that the interactions in this interval are screened for chemical potentials $\mu \in (900, 1300)~$MeV, which agrees with the position of the confinement/deconfinement transition. We believe that
this fact supports our hypothesis.
For a chemical potential larger than $\mu\sim 1300~$MeV ($a\mu\sim 0.3$), $R_{sc}$ is smaller
than $0.2~$fm. At such small distances the entropy $S_1 = - \partial \Omega_1 / \partial T$ and the quark number density $N_1 = - \partial \Omega_1 / \partial \mu$
are small, and the grand potential is mainly determined by the interaction potential at zero temperature and density.
At the same time, the renormalized interaction potential (see figure~\ref{fig:F1data}) is negative at distances $r<0.2~$fm.
For this reason the grand potential of one quark in dense quark matter becomes negative, as observed in the previous section.
In addition to $R_{sc}$, in figure~\ref{fig:r_screening} we plot the mean radii of the heavy quarkonia $J/\Psi, \chi_c, \psi'$,
which were estimated in Appendix B within a simple potential model. It is clear that if the screening length is close to the radius of a heavy quarkonium,
this state is considerably modified by dense quark matter. From figure~\ref{fig:r_screening} one sees that
the heaviest state, the $\psi'$, due to its rather large radius, should be considerably modified at nonzero density already before
the transition to the BEC phase. The $\chi_c$ meson will instead be modified in the BEC phase. Finally,
we predict that the $J/\Psi$ meson will be modified in dense quark matter but below the deconfinement transition. Notice, however,
that if the radius of a charmonium equals $R_{sc}$ at some density $n_0$, the dissociation of this charmonium
takes place at densities larger than $n_0$.
A more detailed study of quarkonium dissociation in two-color dense quark matter will be presented in future work. In particular, the question of the presence of an imaginary part in the interquark potential at finite density, which may further destabilize the bound states, will be carefully investigated.
\section{Debye screening in dense quark matter}
\label{sec:DebyeScreen}
In the region $\mu>900~$MeV the system under study transitions from the confined to the deconfined phase. In the deconfined phase the contribution of the string is markedly reduced and one may attempt to describe $R_{sc}$ in a dense quark-gluon plasma via an analogy with the Abelian theory, i.e. purely Coulombic Debye screening.
\begin{figure}[b]
\centering
\includegraphics[scale=0.65]{./figs/F1_F1inf_r.pdf}
\caption{The expression $(\Omega_1(\infty, \mu) - \Omega_1(r, \mu)) r$ on a logarithmic scale as a function of distance for various $\mu$.}
\label{fig:omegalog}
\end{figure}
The scale of the Debye screening in perturbation theory is set by the Debye mass, which at one-loop order (for $N_c=2$) reads
\begin{eqnarray}
m_D^2(\mu) = \frac {4} {\pi} \alpha_s(\mu) \mu^2\,.
\label{md}
\end{eqnarray}
To describe our results for $R_{sc}$ it is reasonable to assume that the screening length is inversely proportional to $m_D(\mu)$.
For this reason we fit our data by the formula
\begin{eqnarray}
R_{sc}=\frac 1 {A m_D(\mu)}\,,
\label{rsc_debye}
\end{eqnarray}
where $A$ is a numerical factor. We fit our data in the region $\mu \in (900, 1800)~$MeV and use a two-loop approximation for the running coupling constant
$\alpha_s(\mu)$ (see formula (\ref{g_two_loops}) with $N_f=N_c=2$). The fit describes our data well ($\chi^2/dof\simeq 0.8$) and the best fit parameters are
$A=1.4 \pm 0.4$, $\Lambda = 140 \pm 80~$MeV. In the region $\mu>1800~$MeV the data cannot be described by formula (\ref{rsc_debye}).
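As an illustration of the fit (\ref{rsc_debye}), the sketch below recovers the factor $A$ from synthetic data by linear least squares; for simplicity it uses a fixed $\alpha_s = 1$ instead of the two-loop running coupling used in the actual fit, and the "data" are generated rather than measured:

```python
import numpy as np

HBARC = 197.327  # MeV * fm, converts 1/MeV to fm

def m_D(mu, alpha_s=1.0):
    """One-loop Debye mass for N_c = 2, eq. (md): m_D^2 = (4/pi) alpha_s mu^2."""
    return np.sqrt(4.0 / np.pi * alpha_s) * mu

def fit_A(mu, r_sc):
    """Least-squares estimate of A in R_sc = 1/(A m_D); the model is linear in 1/A."""
    x = HBARC / m_D(mu)                       # model prediction for A = 1
    return 1.0 / (np.dot(x, r_sc) / np.dot(x, x))

# Synthetic "data" generated with A = 1.4 (fixed alpha_s = 1 for simplicity).
mu = np.linspace(900.0, 1800.0, 10)           # MeV, the fit window used in the text
data = HBARC / (1.4 * m_D(mu))                # fm
A = fit_A(mu, data)
```

Because $R_{sc}$ is linear in $1/A$, the least-squares solution is available in closed form; the actual analysis additionally floats $\Lambda$ through the running coupling.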
\begin{figure}[b]
\begin{minipage}[t]{0.48\textwidth}
\centering
\includegraphics[scale=0.5]{./figs/screening_data.pdf}
\caption{The ratio $m_D/\mu$ as a function of the chemical potential calculated from the fit of lattice data by formula (\ref{omega_large_d}).}
\label{fig:mD}
\end{minipage}
\hfill
\begin{minipage}[t]{0.48\textwidth}
\centering
\includegraphics[scale=0.5]{./figs/alphas_data.pdf}
\caption{The strong coupling constant $\alpha_s$ as a function of the chemical potential calculated from the fit of lattice data by formula (\ref{omega_large_d}).}
\label{fig:alphas}
\end{minipage}
\end{figure}
Now let us study how the Debye screening phenomenon manifests itself in the large distance behavior ($r\mu \gg 1$) of the grand potential.
In this case the dominant scale is the chemical potential, i.e. the running coupling constant depends only on $\mu$: $g(r,\mu)=g(\mu)$. For sufficiently large density one can apply perturbation theory to calculate the grand potentials. Perturbatively, the grand potential $\Omega_{\bar q q}(r, \mu)$
is determined by two-gluon exchange and is a rapidly decreasing function of the distance. Contrary to
$\Omega_{\bar q q}(r, \mu)$, the color singlet grand potential $\Omega_1(r,\mu)$ is determined by one-gluon exchange.
In this paper we consider only $\Omega_1(r,\mu)$, whose leading order contribution has the form
\begin{eqnarray}
\Omega_1(r,\mu)=\Omega_1(\infty,\mu)-\frac 3 4 \frac {\alpha_s(\mu)} r e^{-m_D r}\,,
\label{omega_large_d}
\end{eqnarray}
where $m_D$ is the Debye mass given by expression (\ref{md}). It tells us that, due to Debye screening, at sufficiently large distances
the expression $(\Omega_1(\infty, \mu) - \Omega_1(r, \mu)) r$ decreases exponentially with distance.
We plot $(\Omega_1(\infty, \mu) - \Omega_1(r, \mu)) r$ on a logarithmic scale in figure~\ref{fig:omegalog}. In this
figure the exponential decrease at large distances is visible starting from $\mu=850~$MeV, which confirms the Debye screening phenomenon
in deconfined dense quark matter. The deviation from a purely Coulombic Debye-like behavior at intermediate distances may be related to remnants of the string, which is not perfectly screened.
Next we fit our data in the deconfinement phase for $\Omega_1(r, \mu)$ at sufficiently large $r$ by formula (\ref{omega_large_d}).
The results for $m_D/\mu$ and $\alpha_s(\mu)$ as a function of the chemical potential are shown in figure~\ref{fig:mD} and figure~\ref{fig:alphas}.
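The extraction of $m_D$ and $\alpha_s$ from formula (\ref{omega_large_d}) reduces to a straight-line fit, since $\ln[(\Omega_1(\infty,\mu)-\Omega_1(r,\mu))\,r] = \ln(3\alpha_s/4) - m_D r$. A minimal sketch with synthetic data (the values are illustrative, not lattice data):

```python
import math

# Formula (omega_large_d) implies
#   (Omega_1(inf) - Omega_1(r)) * r = (3/4) * alpha_s * exp(-m_D * r),
# so its logarithm is linear in r, with slope -m_D and intercept
# log(3 * alpha_s / 4).

def fit_line(xs, ys):
    """Ordinary least squares for y = a + b * x; returns (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

# Synthetic "lattice" points generated from the model itself with
# alpha_s = 1.0 and m_D = 1000 MeV (illustrative values).
alpha_true, mD_true = 1.0, 1000.0                  # m_D in MeV
rs = [0.002 + 0.0005 * i for i in range(10)]       # r in MeV^-1
ys = [math.log(0.75 * alpha_true) - mD_true * r for r in rs]

a, b = fit_line(rs, ys)
mD_fit = -b                            # slope gives the Debye mass
alpha_fit = 4.0 * math.exp(a) / 3.0    # intercept gives the coupling
```

On exact model data the fit recovers the generating values; on real data the slope of the log-scale plot in figure~\ref{fig:omegalog} plays the same role.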
From figure~\ref{fig:mD} it is seen that the Debye mass scales with the chemical potential as $m_D \sim \mu$. Due to large uncertainties of the calculation, we are not able to resolve the running of the coupling constant $\alpha_s$ with $\mu$, as shown in figure~\ref{fig:alphas}. The running coupling
is constant within the uncertainty of the calculation up to $\mu<1800~$MeV, and it starts to drop in the region $\mu>1800~$MeV.
From figure~\ref{fig:alphas} one also sees that the coupling constant is of order unity, $\alpha_s \sim 1$.
This means that for all densities the system under study is strongly correlated. In addition, one can
expect that the one-loop formula for the Debye mass (\ref{md}) is considerably modified by higher order radiative corrections. A large coupling constant is also obtained in the deconfined phase at finite temperature and zero density (see e.g. \cite{Kaczmarek:2004gv, Kaczmarek:2005ui}).
Despite the large coupling constant, it turns out that the one-loop formula (\ref{md}) works quite well.
To show this, we compute the ratio $m_D/(\mu \sqrt {\alpha_s})$, which at leading order equals
$\sqrt {4/\pi}$. In figure~\ref{fig:ratio} we plot this ratio and find that, within the uncertainties of the calculation, formula (\ref{md}) describes well the values of $m_D$ and $\alpha_s$ extracted from the
color singlet grand potential.
\begin{figure}[h!]
\centering
\includegraphics[scale=0.65]{./figs/ratio_data.pdf}
\caption{The expression $m_D(\mu) / (\mu \sqrt{\alpha_s(\mu)} )$ as a function of the chemical potential. $m_D(\mu)$ and $\alpha_s(\mu)$ are extracted from the fit of the $\Omega_1(r, \mu)$ data by formula (\ref{omega_large_d}). At leading order the ratio equals the constant $\sqrt{4 / \pi}$, shown as a black horizontal line.}
\label{fig:ratio}
\end{figure}
\section{Conclusion and discussion}
\label{sec:conslusion}
In this paper we continued our study of two-color QCD
at finite density and low temperature based on lattice simulations.
Our simulations were performed on 32$^4$ lattices with rooted staggered fermions at a relatively small lattice spacing $a=0.044$~fm, which allowed us to study two-color QCD at very large baryon densities (up to quark chemical potential $\mu>2000~$MeV) while avoiding strong lattice artifacts.
The aim of the present paper was the study of the interaction between a static quark-antiquark pair in two-color dense quark matter. To this end we performed computations of the Polyakov loop correlation functions
and calculated the color averaged, color singlet and color triplet grand potentials.
To appropriately handle the divergent self-energy contribution to the Polyakov loop correlation functions, we carried out renormalization by matching the color singlet grand potential to the static interaction
potential of a quark-antiquark pair at short distances. Having determined the renormalized grand potentials, we calculated
the renormalized grand potential of a single quark/antiquark and the average Polyakov loop.
In addition, we calculated the quark number induced by a static quark-antiquark pair and its internal energy.
The confinement/deconfinement transition at finite density manifests itself in an increasing value of the Polyakov loop. Contrary to the finite temperature case, the finite density transition does not show a region of rapid rise of the Polyakov loop. For this reason we conclude that the transition from confinement to deconfinement at finite density is smooth.
In addition we calculated the screening length $R_{sc}$ which is defined as
\begin{eqnarray}
V_{\mu=0}(R_{sc})=\Omega(\infty, \mu),
\end{eqnarray}
where $V_{\mu=0}(r)$ is the static potential at zero density and $\Omega(\infty, \mu)$ is the
grand potential of a static quark-antiquark pair at infinite distance.
In the confined phase, the screening length is determined by the string breaking length, whereas
in the deconfined phase $R_{sc}$ is determined by the Debye screening phenomenon.
The result of the calculation of the screening length shows that, consistent with intuition, the larger the chemical potential, the smaller the string breaking distance $R_{sc}$.
We believe that the decrease of the string breaking distance with density can be attributed to a further string breaking mechanism in dense matter. In dense two-color QCD, in addition to the possibility
to break the string by a quark-antiquark pair, it becomes possible to break the string by two quarks
which can be extracted from a two-color baryon -- the scalar diquark. Notice also that
it does not cost any energy to remove the scalar diquark from the condensate and break the string.
As a result of this phenomenon,
after the string breaking one ends up with one heavy-light meson and one heavy-light diquark.
Lattice studies show \cite{Braguta:2016cpw} that in the region $\mu>m_{\pi}/2$ the scalar diquark condensate increases with the chemical potential, i.e. it becomes easier to find two quarks and to break the string.
If one increases the chemical potential, then at some density $R_{sc}$ becomes so small
that the string cannot be formed at all: once created, it is immediately broken by the
two-color baryons -- the scalar diquarks. This is our hypothesis for the deconfinement mechanism in two-color dense quark matter.
The behavior of the string breaking distance in dense matter and the deconfinement mechanism
are not specific to two-color QCD. We believe that a similar process can be realized in SU(3) QCD, with the difference that one has to replace the two-quark baryon in SU(2) by the three-quark baryon in SU(3).
In particular, one can expect that the screening length, which has the same definition as in two-color
QCD, is a decreasing function of the chemical potential; in turn, the larger the density, the smaller the string breaking distance. For three colors this behavior can be explained as follows: at nonzero chemical potential one has a nonzero baryon density in the system. Baryons
which form this density can break the string, splitting it into one quark and one diquark.
Notice that one does not need additional energy to create the baryon, since the baryons are already present due to the nonzero chemical potential. After the string breaking
one has one heavy-light meson and one heavy-light baryon. Finally, the larger
the chemical potential, the larger the number of baryons which can break the string,
i.e. the string breaking distance is a decreasing function of the chemical potential.
Notice that in three-color QCD one might also have a similar mechanism of deconfinement
at finite density, as we proposed above for two colors. In particular, deconfinement
takes place at the density at which the $R_{sc}$ is so small that the string
cannot be created.
In the previous section we considered the large distance behavior of the color singlet grand
potential in the deconfined phase. In analogy with Debye screening in the Abelian theory and using leading order perturbation theory, we attempt to quantitatively describe the observed behavior and find good agreement with the lattice data.
We calculated the Debye mass and the coupling
constant for various chemical potentials. The coupling constant extracted in this way takes on values $\alpha_{s} \sim 1$, which tells us that despite the large baryon density, the system remains strongly coupled. It was also found that despite the large coupling
constant the one-loop formula for the Debye mass works well at large distances within the uncertainty
of the calculation.
In this paper we found that the region $\mu<2000~$MeV physically differs from the region $\mu>2000~$MeV.
This manifests itself in different behavior of the following observables: the Polyakov line, the grand potential, the screening length $R_{sc}$, the Debye mass and effective coupling constant.
While we do not yet fully understand the physics responsible for this behavior, one possibility is that the value of the chemical potential $\mu \sim 2000~$MeV is exceptional, since it divides the region $\mu<2000~$MeV with
a spatial string tension from that at $\mu > 2000~$MeV where it vanishes. This may imply that the point $\mu \sim 2000~$MeV separates
systems with and without magnetic screening. Further study in this direction is required.
Finally, we discuss lattice artifacts which result from the saturation effect.
It is known that at large values of the chemical potential $a\mu \sim 1$ a saturation effect starts
to be seen. The essence of this effect is
that all lattice sites are filled with fermionic degrees of freedom, and it is not possible to put
more fermions on the lattice (``Pauli blocking''). It is known that the saturation effect is
accompanied by the decoupling of the gluons from fermions. Thus, effectively due to saturation,
the system becomes simply gluodynamics, which is confined at low temperatures. From this
consideration it is clear that in order to study the properties of quark matter at
large baryon density one should have sufficiently small lattice spacing such that the properties
are not spoiled by this kind of artificial confinement at large values of the chemical potential.
We believe that because of the saturation effect the deconfinement in dense SU(2)
matter has not been seen before.
The results of the study presented in this paper are obtained for chemical potentials $\mu < 2200~$MeV ($a\mu \leq 0.5$).
We believe that our results are not spoiled by saturation for the following reasons.
First, for $\mu > 2000~$MeV (up to $\mu \sim 2500~$MeV~\cite{Bornyakov:2017txe})
the spatial string tension vanishes. Second, we do not see a corresponding rise
of the timelike string tension.
Moreover, the static potential for $\mu > 2000~$MeV (up to $\mu \sim 2500~$MeV~\cite{Bornyakov:2017txe})
is well described by the Debye screening potential.
So, the properties of the system in the range $\mu > 2000~$MeV are
very different from those of plain gluodynamics at small temperatures.
Notice also that in our previous study of dense two-color QCD~\cite{Braguta:2016cpw} we found that
the onset of saturation effects is seen at $a\mu \sim 0.7-0.8$. This was deduced from the decrease of the diquark
condensate for $a\mu>0.7$ (while it rises
with $\mu$ for $a\mu < 0.7$). In the continuum, the rise of the diquark condensate
is related to the growth of the
Fermi surface. The decrease of the diquark condensate on the lattice
is evidently related to the onset of the saturation effect, which can be seen as follows.
Due to the finite number of fermion states in the lattice Brillouin
zone, there is a chemical potential beyond which
a further increase of the chemical potential does not enlarge the
Fermi surface on the lattice. Notice that at this value of the chemical potential
not all fermion states on the lattice are filled yet, and full saturation takes place at larger
values of the chemical potential.
Finally, the deviation of the lattice-measured baryon density from the
baryon density calculated for free fermions
is 10\% for $a\mu=0.45$ ($\mu \approx 2000~$MeV) and 20\% for
$a\mu=0.50$ ($\mu \approx 2250~$MeV). We argue that such a deviation,
even if it could be attributed to saturation, cannot lead
to a considerable modification of the physics. Notice also that such a deviation
may also be explained by mechanisms other than saturation, e.g. the
finite lattice spacing, which is present for any $a\mu$.
Taking into account all of the above, we believe that in the region considered in this paper ($a\mu<0.5$)
our results are not spoiled by eventual saturation effects. Notice that a strict proof of this statement requires additional lattice simulations at smaller lattice spacings, which are planned for the future.
\section{Introduction}
The last decade has witnessed
an explosion of data traffic over the communication network
attributed to
the rapidly growing cloud computing and pervasive mobile devices.
This trend is expected to continue for the foreseeable future with
a whole new generation of applications
including 4K/8K UHD video,
hologram,
interactive mobile gaming,
tactile Internet, virtual/augmented reality (VR/AR),
mission-critical communication, smart homes,
and a variety of IoT applications \cite{mchi16}.
As the cloud infrastructure and number of devices will continue to expand at an accelerated rate, a tremendous burden will be put on the network.
Thus,
it is imperative
for network operators to develop innovative solutions to meet the soaring traffic demand and accommodate diverse requirements of various services and use cases in
the next generation communication network.
Thanks to economies of scale and supercomputing capabilities, cloud computing
will likely continue to play a prominent role in the future computing landscape.
However, cloud data centers (DC) are often geographically distant from the end-user, which induces
enormous network traffic, along with significant communication delay and jitter.
Hence, despite the immense power and potential,
cloud computing alone is
facing growing limitations in satisfying the stringent requirements
in terms of latency, reliability, security, mobility, and
localization
of many new systems and applications
(e.g.,
embedded artificial intelligence,
manufacture automation,
5G wireless systems)
\cite{mchi16}.
To this end, edge computing (EC) \cite{msat17}, also known as fog computing (FC) \cite{mchi16},
has emerged as a new computing paradigm that complements the cloud to enable the implementation of
innovative services right at the network edge.
EC forms a virtualized platform that
distributes computing, storage,
control, and networking services closer to end-users to smarten the edge network.
The size of an edge node (EN) is flexible, ranging from smartphones, PCs,
smart access points (AP), and base stations (BS) to edge clouds \cite{wshi16}.
For example, a smartphone is the edge between wearable devices and the cloud, a home gateway is the edge between smart appliances and the cloud, and a cloudlet, a telecom central office, or a micro DC
is the edge between mobile devices and the cloud core network.
Indeed, the distributed EC infrastructure
encompasses any computing, storage, and networking nodes along the path between end devices and cloud DCs, not just exclusively nodes located at the customer edge \cite{wshi16}.
By providing elastic resources and intelligence at the edge of the network,
EC offers many remarkable capabilities, including local data processing and analytics, distributed caching,
location awareness, resource pooling and scaling, enhanced privacy and security, and
reliable connectivity.
These capabilities combined with the shorter communication distance allow operators to efficiently handle both downstream and upstream data between the cloud and the customer edge, which translates into drastic
network traffic reduction
and significant
user experience improvement.
For instance,
with edge caching, location-awareness, and real-time data processing and analysis, service providers can not only serve user content requests locally
but also adaptively optimize video coding and resolution according to the user device information
and the varying wireless channel conditions.
Also, it is envisioned that
most of data produced by IoT sensors will be processed at the edge and only important information and metadata will be sent to the cloud for further analytics.
Additionally, EC is the key enabler for ultra-reliable low-latency applications such as AR/VR, cognitive assistance, autonomous driving, industrial automation, remote robotics,
and healthcare.
A myriad of benefits and other use cases (e.g., computation offloading, caching, advertising, smart homes/grids/cities) of EC
can be found in \cite{wshi16,msat17,mchi16}.
Today, EC is still in the developing stages
and presents many new challenges,
such as network architecture design, programming models and abstracts, IoT support, service placement, resource provisioning and management, security and privacy, incentive design,
and reliability and scalability of edge devices
\cite{wshi16,msat17,mchi16}.
To unlock the huge potential of this new technology, it requires
significant collaborative efforts between various entities in the ecosystem.
%
In this work, we focus on the EC resource
allocation problem.
Unlike cloud computing, where computational capacity of large DCs is virtually unlimited and network delay is
high,
EC is characterized by relatively low network latency but considerable processing delay due to the limited computing power of ENs. Also, there are a massive number of distributed computing nodes compared to a small number of large DCs.
Moreover, ENs may come with different sizes (e.g., number of computing units) and configurations (e.g., computing speed)
ranging from a smartphone to an edge cloud with tens/hundreds of servers. These nodes are dispersed in numerous locations
with varying network and service delay towards end-users.
On the other hand, different services
may have
different requirements and
properties.
Some services can only be handled by
ENs satisfying
certain criteria.
Additionally, different services
may be given different priorities.
While every service not only wants to obtain
as much
resource as possible but also prefers to be served by its closest ENs with
low response time,
the capacities of ENs are limited.
Also, due to the diverse preferences of the services towards the ENs, some nodes can be under-demanded while other nodes are over-demanded.
Thus, a fundamental problem is:
\textit{given a set of geographically distributed heterogeneous ENs, how can we efficiently allocate their limited computing resources to competing services with different desires and characteristics,
considering service priority and fairness?}
This work introduces
a novel market-based solution framework which aims not only to maximize the resource utilization of the ENs but also to make every service happy with the allocation decision.
The basic idea behind our approach is to assign different prices to resources of different ENs. In particular, highly
sought-after resources are priced high while prices of under-demanded resources are low. We assume that each service has a certain budget for resource procurement. The budget can be virtual or real money. Indeed, budget is used to capture service priority/differentiation. It can also be interpreted as the market power of each service.
Given the resource prices, each service buys the favorite resource bundle that it can afford.
When all the resources are fully allocated, the resulting prices and allocation form a \textit{market equilibrium} (ME).
If there is
only one EN, an ME can be found easily by adjusting the
price gradually until demand equals supply or locating the
intersection of the demand and supply curves. However,
when there are multiple heterogeneous ENs and multiple
services with diverse objectives and different buying power,
the problem becomes challenging.
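The single-EN case mentioned above can be sketched concretely. With linear utilities, every service with positive value for the lone EN spends its entire budget there, so aggregate demand at price $p$ is $\sum_i B_i / p$ units; bisection on $p$ then reproduces the "adjust the price until demand equals supply" procedure (a toy illustration with made-up numbers):

```python
# Toy single-good market: demand(p) = total budget / price, supply = C.
# Bisection raises the price while the good is over-demanded and lowers
# it while under-demanded, converging to the clearing price sum(B) / C.
def clearing_price(budgets, C, lo=1e-6, hi=1e6, iters=200):
    demand = lambda price: sum(budgets) / price
    for _ in range(iters):
        mid = (lo + hi) / 2.0
        if demand(mid) > C:
            lo = mid          # over-demanded: raise the price
        else:
            hi = mid          # under-demanded: lower the price
    return (lo + hi) / 2.0

p = clearing_price([2.0, 1.0, 3.0], C=4.0)   # clearing price 6/4 = 1.5
```

With multiple heterogeneous ENs no such one-dimensional search exists, since a price change at one EN shifts demand at all others.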
We consider two distinct market models in this work.
In the first
model, the money does not have intrinsic value to the
services. Given resource prices, each service aims to maximize its
revenue from the allocated resources, without caring about
how much it has to pay as long as the total payment does not
exceed its budget.
This model arises in many real-world scenarios.
For example, in 5G networks, the Mobile Edge Computing (MEC) servers of a Telco are shared among different network slices, each of which runs a separate service (e.g., voice, video streaming, AR/VR, connected vehicles, sensing) and serves a group of customers who pay for the service. The Telco can allot different budgets to the slices depending on their importance and/or potential revenue generation (e.g., the total fee paid by the users/subscribers of each slice).
Similarly, an application provider (e.g., Uber, Pokemon Go)
or a sensor network may own a number of ENs in a city and need to allocate the edge resources to handle requests of different groups of users/sensors. The budget can be decided based on criteria such as the populations of users/sensors in different areas and/or payment levels (subscription fees) of different groups of users.
Another example is that
a university (or other organizations) can grant different virtual budgets to different departments or research labs so that they can fairly share the edge servers on the campus.
The first model may also emerge in the setting of cloud federation at the edge where several companies (i.e., services) pool their resources together and each of them
contributes a fixed portion of resource of every EN.
Here, the budgets are proportional to the initial contributions of the companies. Instead of resource pooling, these companies may agree upfront on
their individual budgets, and then
buy/rent a given set of ENs together.
In these scenarios, it is important to consider both fairness and efficiency. Thus,
conventional schemes such as social welfare maximization, maxmin fairness, and auction models may not be suitable. In particular, a welfare
maximization allocation often gives most of the resources to users who have high marginal utilities while
users with low marginal utilities receive a very small amount of resources, or even nothing. Similarly, in auction models, the set of losers are not allocated any resource. Hence, these solutions can be unfair to some users. On the other
hand, a maxmin fairness solution often allocates too many resources to users with low marginal utilities; hence, it
may not be efficient.
To strike a balance between fairness and efficiency, we advocate the General Equilibrium Theory \cite{karr54}, with a specific focus on the Fisher market model \cite{wbra00}, as an effective solution concept for this problem.
Specifically, the first model can be cast as a Fisher market in which services act as buyers and ENs act as different goods in the market.
For the linear additive utility function
as considered in this work, given resource prices,
a service may have an infinite set of optimal resource bundles,
which renders difficulty in designing distributed algorithms.
We suggest several methods to overcome this challenge.
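The source of this non-uniqueness is easy to state for linear utilities: given prices, a service optimally spends its whole budget only on ENs with the maximum value-per-price ratio $u_{ij}/p_j$ ("maximum bang per buck"), and any split of the budget among tied ENs is optimal. A small hypothetical example:

```python
# For linear utilities, an optimal bundle at prices p spends the whole
# budget on goods maximizing u[j] / p[j]; whenever several goods tie,
# every split of the budget among them is optimal, so the demand
# correspondence is set-valued rather than a single bundle.
def max_bang_per_buck(u, p, tol=1e-12):
    ratios = [uj / pj for uj, pj in zip(u, p)]
    best = max(ratios)
    return [j for j, r in enumerate(ratios) if r > best - tol], best

tied, best = max_bang_per_buck(u=[4.0, 2.0, 3.0], p=[2.0, 1.0, 3.0])
# ENs 0 and 1 tie at u/p = 2.0: a service with budget B may buy
# (B/p[0], 0), (0, B/p[1]), or anything in between -- all give utility 2B.
```

Because a distributed algorithm cannot rely on each service reporting a unique demand, tie-breaking has to be handled explicitly, which is precisely the difficulty addressed by the methods discussed above.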
Moreover, we show that the obtained allocation is Pareto-optimal, which means there is no other allocation that would make some service better off without making someone else worse off \cite{eco}. In other words, there is no strictly ``better'' allocation. Thus, a Pareto-optimal allocation is efficient.
We furthermore link the ME to the fair division literature \cite{hmou04} and prove that the
allocation satisfies remarkable fairness properties including envy-freeness,
sharing-incentive, and proportionality, which provides strong incentives for the services to participate in the proposed scheme.
Indeed, these properties were rarely investigated explicitly in the ME literature.
\textit{Envy-freeness} means that every service prefers its allocation to the allocation of any other service.
In an envy-free allocation, every service feels that its share is at least as good as the share of any other service, and thus no service feels envy.
\textit{Sharing-incentive} is another well-known fairness concept. It ensures that services get better utilities than what they would get in the \textit{proportional sharing} scheme that gives each service an amount of resource from every EN proportional to its budget.
Note that proportional sharing is an intuitive way to share resources fairly in terms of quantity.
For the federation setting, sharing-incentive implies that every service gets better off by pooling their resources (or money) together.
Finally, it is natural for a service to expect to obtain a utility of at least $b/B$ of the maximum utility that it can achieve by getting all the resources, where $b$ is the payment of the service and $B$ is the total payment of all the services.
The \textit{proportionality} property guarantees that the utility of every service at the ME is at least proportional to its payment/budget. Thus, it makes every service feel fair in terms of the achieved utility.
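These fairness properties can be verified numerically on a toy instance. The sketch below computes an approximate ME of a small linear Fisher market via proportional response dynamics (an iterative method known to converge to the ME for linear utilities; any ME computation, such as solving the EG program, could be used instead) and then checks envy-freeness and proportionality. The two-service, two-EN instance and all numbers are made up for illustration.

```python
# Proportional response dynamics for a linear Fisher market: each
# service re-splits its budget across ENs in proportion to the utility
# u[i][j] * x[i][j] earned from each EN; prices are the total spending
# on an EN and allocations are spending divided by price.
def fisher_equilibrium(u, B, iters=2000):
    n, m = len(u), len(u[0])
    spend = [[B[i] / m] * m for i in range(n)]     # uniform start
    for _ in range(iters):
        p = [sum(spend[i][j] for i in range(n)) for j in range(m)]
        x = [[spend[i][j] / p[j] for j in range(m)] for i in range(n)]
        for i in range(n):
            ui = sum(u[i][j] * x[i][j] for j in range(m))
            spend[i] = [B[i] * u[i][j] * x[i][j] / ui for j in range(m)]
    return x, p

u = [[10.0, 2.0], [3.0, 6.0]]      # service i's value for EN j
B = [1.0, 1.0]                     # equal budgets for simplicity
x, p = fisher_equilibrium(u, B)

def util(i, bundle):
    return sum(u[i][j] * bundle[j] for j in range(2))

for i in range(2):
    # Envy-freeness: no service prefers the other's bundle.
    assert all(util(i, x[i]) >= util(i, x[k]) - 1e-6 for k in range(2))
    # Proportionality: at least B[i]/sum(B) of the all-resources utility.
    assert util(i, x[i]) >= B[i] / sum(B) * util(i, [1.0, 1.0]) - 1e-6
```

With unequal budgets the appropriate notion is the budget-scaled variant of envy-freeness, which reduces to the check above when all budgets are equal.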
In the second model, the money does have intrinsic value to the services. The services not only want to maximize their revenues but also want to minimize their payments. In particular,
each service aims to maximize the sum of its remaining budget (i.e., surplus) and the revenue from the procured resources, which is equivalent to maximizing the net profit (i.e., revenue minus cost).
This model is prevalent in practice.
For example, several service providers (SP), each of which has a certain budget, may compete
for the available resources of an edge infrastructure provider (e.g., a Telco, a broker).
The SPs only pay for their allocated resources and can take back their remaining budgets.
Obviously, a SP will only buy a computing unit if the potential gain from that unit outweighs the cost.
It is natural for the SPs to maximize their net profits in this case.
The traditional Fisher market model does not capture this setting since the utility functions of the services depend on the resource prices.
It is worth mentioning that, conventionally, the optimal dual variables associated with the supply demand constraints (i.e., the capacity constraints of the ENs) are often interpreted as the resource prices \cite{boyd} and common approaches such as network utility maximization (NUM) \cite{dpal06} can be used to compute an ME. However, these approaches do not work for our models that take budget into consideration.
Indeed, the main difficulty in computing an ME in both models stems from the budget constraints which contain both the dual variables (i.e., prices) and primal variables (i.e., allocation). In the second model, the prices also appear in the objective functions of the services. Therefore, the ME computation problem becomes challenging.
Note that the pair of equilibrium prices and equilibrium allocation has to not only clear the market but also simultaneously maximize the utility of every service (as elaborated later in Section 4).
Fortunately, for a wide class of utility functions, the ME in the first model can be found by solving a simple Eisenberg-Gale (EG) convex program \cite{EG,EG1,AGT}. However, the EG program does not capture the ME in the second model.
Interestingly, by reverse-engineering the structure of the primal and dual programs in the first model,
we can rigorously construct a novel convex optimization problem whose solution is an ME of the second model.
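For reference, for budgets $B_i$ and linear utilities $u_i(x_i)=\sum_j u_{ij}x_{ij}$ with unit-capacity goods, the EG program mentioned above takes the standard form \cite{EG,AGT}
\begin{eqnarray}
\max_{x \geq 0} \; \sum_i B_i \log \Big(\sum_j u_{ij} x_{ij}\Big) \quad \textrm{s.t.} \quad \sum_i x_{ij} \leq 1, \;\; \forall j,
\end{eqnarray}
whose optimal solution gives the equilibrium allocation, while the optimal dual variables of the capacity constraints give the equilibrium prices.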
Our main contributions include:
\begin{itemize}
\item \textit{Modeling}. We formulate a new market-based EC resource allocation framework
and advocate the General Equilibrium theory as an effective solution method for the proposed problem.
\item \textit{Centralized solution}. The unique ME in the first model can be determined by the EG program.
We also prove
some salient fairness features of the ME.
\item \textit{Decentralized algorithms}. We introduce several distributed algorithms that
efficiently overcome the difficulty raised by the non-unique demand functions of the services and converge to the ME.
\item \textit{Extended Fisher market.}
We systematically derive a new convex optimization problem whose optimal solution is an exact ME in the extended Fisher market model where buyers value the money.
\item \textit{Performance Evaluation.} Simulations are conducted to illustrate the efficacy of the proposed techniques.
\end{itemize}
The rest of the report is organized as follows. Section \ref{related} describes related work. The system model and problem formulation are given in Section \ref{model} and Section \ref{formu1}, respectively.
The centralized solution using the EG program is analyzed in Section \ref{EG}. Then, we introduce several distributed algorithms in Section \ref{dist}. The market model in which buyers aim to maximize their net profits is studied in Section \ref{formu2}.
Simulation results are shown in Section \ref{sim} followed by conclusions and discussion of future work in Section \ref{con}.
\vspace{-0.1in}
\section{Related Work}
\label{related}
The potential benefits and many technical aspects of EC have been studied extensively in the recent literature.
First, the hybrid edge/fog-cloud system can be leveraged to improve the performance of emerging applications such as cloud gaming and healthcare \cite{lgu17,ylin17}.
A. Mukherjee {\em et al.} \cite{amuk17} present a power and latency aware cloudlet selection strategy for computation offloading in a multi-cloudlet environment.
The tradeoff between power consumption and service delay in a fog-cloud system is investigated in \cite{rden16} where the authors formulate a workload allocation problem to minimize the
system energy cost
under latency constraints.
%
A latency aware workload offloading scheme in a cloudlet network is formulated in \cite{xsun17} to minimize the average
response time for mobile users.
In \cite{mjia17}, M. Jia {\em et al.} explore the joint optimization of cloudlet placement and user-to-cloudlet assignment to minimize service latency while considering load balancing.
A unified service placement and request dispatching framework is presented in \cite{lyan16} to evaluate the tradeoffs between the user access delay and service cost.
%
Stackelberg game and matching theory are employed in \cite{hzha17} to study the
joint optimization among
data service operators (DSO), data service subscribers (DSS), and a set of ENs in a three-tier edge network where the DSOs can obtain computing resources from different ENs to serve their DSSs.
Another major line of research has recently focused on the
joint allocation of communication and computational resources for task offloading in
the Mobile Edge Computing (MEC) environment \cite{ssar15,xche154,xlyu17}.
MEC allows mobile devices to offload computational tasks to resource-rich servers located near or at cellular BSs, which could potentially reduce the devices' energy consumption and task execution delay.
However, these benefits could be jeopardized if multiple
users
offload their tasks to MEC servers simultaneously. In this case, a user may not only suffer severe interference but also receive a very small amount of EC resource,
which would consequently reduce data rate, increase transmission delay, and cause high task execution time on the servers.
Hence, offloading decision, allocation and scheduling of radio resources, and computational resources should be jointly considered in an integrated framework.
Instead of optimizing the overall system performance from a single network operator's point of view,
we study the EC resource allocation problem from the
game theory and market design perspectives \cite{AGT}.
Specifically, we exploit the General Equilibrium \cite{karr54}, a Nobel prize-winning theory, to construct an efficient market-based resource allocation framework.
Although this concept was proposed more than 100 years ago \cite{wbra00},
it was not until 1954 that the existence of an ME was proved
under mild conditions in the seminal work of Arrow and Debreu \cite{karr54}.
However, their existence proof, based on a fixed-point theorem, is non-constructive and does not
give an algorithm to compute an equilibrium
\cite{AGT}.
Recently, theoretical computer scientists have expressed great interests in understanding algorithmic aspects of the General Equilibrium concept. Various efficient algorithms and complexity analysis for ME computation have been accomplished over the past decade \cite{ndev08,vvaz11,nche16,AGT,xche17,jgag15}.
Note that although the existence result has long been established, there is still no general technique for computing an ME.
Our proposed models are inspired by the Fisher market \cite{wbra00} which is a special case of the exchange market model in the General Equilibrium theory.
An \textit{exchange market} model
consists of a set of economic agents trading different types of divisible goods. Each agent has an initial endowment of goods
and a utility function
representing her preferences for the different bundles of goods. Given the goods' prices,
every agent sells her initial endowment and then uses the revenue to buy the best bundle of goods she can afford \cite{AGT,karr54}.
The goal of the market is to find equilibrium prices and allocations
that maximize every agent's utility
subject to her budget constraint while the market clears.
In the Fisher market model, every agent comes to the market with an initial endowment of money only and wants to buy goods available in the market.
We cast the EC resource allocation problem as a Fisher market. We not only show appealing fairness properties of the equilibrium allocation, but also introduce efficient distributed algorithms to find an ME.
More importantly, we systematically devise a new and simple convex program to capture the market in which money has intrinsic value to the buyers, which is beyond the scope of
the Fisher and exchange market models.
Note that
there is a rich literature on cloud resource allocation and pricing \cite{nluo17}. In \cite{hxu13,atoo15}, the authors propose
different profit maximization frameworks for cloud providers.
References \cite{lmas152,mhad17,ipet15} study how to efficiently share resource and profit among cloud providers in a cloud federation. Several resource procurement mechanisms are introduced in \cite{apra14} to assist a cloud user to select suitable cloud vendors in a multi-cloud market.
In \cite{dard13}, the interaction between a cloud provider and multiple services is modeled as a generalized Nash game. This model is extended to a multi-cloud multi-service environment in \cite{dard17}. A single-cloud multi-service resource provision and pricing problem with flat, on-demand, and on-spot VM instances is formulated in \cite{vcar18} as a Stackelberg game, which not only maximizes the revenue of the cloud provider but also minimizes costs of the services.
Auction theory has been widely used to study cloud resource allocation \cite{lmas151,schi17,xwan15}.
A typical system consists of one or several clouds and multiple users. First, the users submit bids, which include their desired resource bundles in terms of VM types and quantities as well as the prices that they are willing to pay, to an auctioneer. Then, the auctioneer solves a winner determination problem to identify the accepted bids. Finally, the auctioneer calculates the payment that each winner needs to pay to ensure truthfulness. In auctions, the common objectives are to maximize the social welfare or the profit of the cloud provider.
However, only winners receive cloud resources. Furthermore, most existing auction models do not consider elastic user demands; for example, previous works often assume that cloud users are single-minded, i.e., each user is interested in a specific bundle only and has zero value for other bundles.
Different from the existing works on cloud economics and resource allocation in general, our design objective is to find a fair and efficient way to allocate resources from multiple nodes (e.g., ENs) to budget-constrained agents (i.e., services), which makes every agent happy with her resource allotment and ensures high edge resource utilization.
The proposed model also captures practical aspects, for example, a service request can be served at different ENs and service demands can be defined flexibly rather than fixed bundles as in auction models.
\section{System Model}
\label{model}
An EC environment is depicted in Fig.~\ref{fig:sys}. Besides local execution and remote processing at cloud DCs, data and requests from end-devices (e.g., smartphones, set-top-boxes, sensors) can be handled by the EC platform.
Note that some data and computation need to remain local to preserve data privacy.
A request typically first goes to a Point of Aggregation (PoA) (e.g., switches/routers, BSs, APs), then it will be routed to an EN for processing.
Indeed, enterprises, factories, organizations (e.g., hospitals, universities, museums), commercial buildings (shopping malls, hotels, airports), and other third parties (e.g., sensor networks) can also outsource their services and computation to the intelligent edge network. Furthermore, service/content/application providers like Google, Netflix, and Facebook
can proactively install their content and services onto ENs to better serve their customers.
In the EC environment, various sources (e.g., smartphones, PCs, servers in a lab, underutilized small/medium
data centers in schools/hospitals/malls/enterprises, BSs,
telecom central offices) can act as ENs.
We consider a system encompassing
various services and a set of geographically distributed ENs with different configurations and limited computing capacities. Each service has a budget for resource procurement
and wants to offload as many requests as possible to the edge network. The value of an EN to a service is measured in terms of the maximum revenue that it can generate by using the EN's resource.
An EN may have different values to different services.
Since some ENs (e.g., ones with powerful servers)
can be over-demanded while others are under-demanded, it is desirable to harmonize the interests of the services so that each service is happy with its allotment while ensuring high resource utilization.
An intuitive
solution is to assign prices to ENs and let each service choose its favorite resource bundle.
We assume that there is a platform
lying between the services and the ENs.
Based on the information collected from the ENs (e.g., computing capacity) and the services (e.g., budgets, preferences), the platform computes an ME solution including resource prices and allocation, which not only maximizes the satisfaction of every service but also fully allocates the ENs' resources.
%
In the first model, each service seeks solely to maximize its revenue under the budget constraint, without concern for any money surplus left after purchasing resources.
This can be the case where the services and ENs belong to the same entity, and each service is assigned a virtual budget representing the service's priority.
For instance, a Telco can give different budgets to different network slices, each of which runs a service (e.g., voice, video streaming, AR/VR, connected vehicles).
%
In the second model, the remaining money does have intrinsic value to the services. In this case, each service aims to maximize the sum of its remaining budget and the revenue from the procured resources. For example, this can be the case where services and ENs are owned by different entities, and each service provider (SP) (e.g., Google, Facebook, enterprises)
has a certain budget for leasing resources from an infrastructure provider (e.g., a Telco).
For simplicity, we assume that the values of ENs to the services are fixed.
Our model can be extended to capture time-varying valuation in a multi-period model by considering each pair of an EN and a time slot as an independent EN.
\section{Problem Formulation}
\label{formu1}
\subsection{EC Resource Allocation Problem}
Let $\mathcal{M}$ and $\mathcal{N}$ be the sets of ENs and services, and let $M$ and $N$ be the numbers of ENs and services, respectively.
Denote $i$ as the service index and $j$ as the EN index. We assume that each EN $j$ has
$c_j$ homogeneous computing units (e.g., servers)
\cite{hzha17}. If an EN has several types of computing units,
we can always divide the EN into several clusters, each of which contains only homogeneous units. Then, each cluster can be considered as a separate EN. While the computing units in each EN are homogeneous, different ENs can have different types of computing units.
Let $x_{i,j}$ be the number of computing units of EN $j$ allocated to service $i$. The vector of resources allocated to service $i$ is $x_i = \big( x_{i,1}, x_{i,2}, \ldots, x_{i,M} \big)$. Finally, define $B_i$ as the budget of service $i$.
Our goal is to compute an ME including an equilibrium price vector $p = (p_1, p_2, \ldots, p_M)$, where $p_j$ is the price of EN $j$,
and a resource allocation matrix $\mathcal{X}$,
in which the element at the $i$th row and $j$th column is $x_{i,j}$. The utility $U_i(x_i,p)$ of service $i$ is defined as a function of the amount of resources $x_i$ that it receives and the resource prices $p$.
The capacity constraints of the ENs give: $\sum_{i=1}^N x_{i,j} \leq c_j, ~\forall j \in \mathcal{M} $.
Without loss of generality, we normalize the capacity of every EN to be 1 (i.e., $c_j = 1, ~\forall j$) and scale
related parameters (e.g., price, resource allocation) accordingly.
This normalization is just to simplify expressions and equations. Hence, we have:
$\sum_{i=1}^N x_{i,j} \leq 1, ~ \forall j; ~~ x_{i,j} \geq 0,~~ \forall i,~j.$
Each service is a player in our market game. Given a price vector $p$, service $i$ aims to maximize its utility $U_i(x_i,p)$ subject to the budget constraint $\sum_j x_{i,j} p_j \leq B_i$.
\begin{definition}
\label{MEdef}
An ME solution ($p^*$,$X^*$) needs to satisfy the following two conditions:
\end{definition}
\begin{itemize}
\item \textit{Condition 1}: Given the equilibrium resource price vector $p^* = (p_1^*, p_2^*, \ldots, p_M^*)$, every service $i$ receives its optimal resource bundle $x_i^*$, i.e., we have
\begin{eqnarray}
x_i^* = (x_{i,1}^*, \ldots, x_{i,M}^*) \in \operatornamewithlimits{argmax}_{x_i \geq 0; \sum_j p_j^* x_{i,j} \leq B_i} U_i(x_i,p^*)
\end{eqnarray}
\item \textit{Condition 2}: All the resources are fully allocated, i.e., we have: $\sum_{i} x_{i,j} = 1, ~~\forall j$.
\end{itemize}
The first condition can be interpreted as the \textit{user satisfaction condition} while the second condition is often called the \textit{market clearing condition} in Economics \cite{eco}. The first condition ensures that the equilibrium allocation $x_{i}^*$ maximizes the utility of service $i$ at the equilibrium prices $p^*$ considering the user budget constraint. The second condition maximizes the resource utilization of the ENs. It also means the ENs' resources are fully sold in the market, which consequently maximizes the profit of every EN since the equilibrium prices are non-negative. The services are players competing for the limited EC resources, while the platform tries to satisfy the market clearing condition. Prices are used to coordinate the market.
Let $u_i(x_i)$ be the gain/profit/revenue that service $i$ can achieve from the procured resources. We consider two models. In the first model (basic model), every service $i$ wants to maximize $U_i(x_i,p) = u_i(x_i)$ and does not care about how much it has to pay as long as the total payment is under its budget.
Here, the utility of a service is its revenue.
In the second model, instead of revenue, the services aim to maximize their net profits (i.e., revenue minus cost).
The service utility in this model is $U_i(x_i,p) = u_i(x_i) - \sum_j p_j x_{i,j},~\forall i$.
We focus on the first model throughout the report. The second model is examined in Section \ref{formu2}.
\vspace{-0.1in}
\subsection{Service Utility Model}
In practice, the services may use different criteria to define $u_i(x_i)$.
Our framework takes $u_i(x_i)$ as an input to compute an ME solution. How each service evaluates the ENs is not the focus of this work. While the proposed model is generic, we consider linear functions for ease of exposition.
Extensions to more general functions will be discussed throughout the work. Let $a_{i,j}$ be the gain of service $i$ from one unit of resource of EN $j$. Then, we have: $u_i(x_i) = \sum_j a_{i,j} x_{i,j}, ~\forall i$.
In the following, we present an example of how $a_{i,j}$ can be computed.
We consider only delay-sensitive services, which are also the main target applications of EC. For simplicity, we assume that the transmission bandwidth is sufficiently large and the data size of a request is small (e.g., Apple Siri, Google Voice Search, Google Maps, AR, and Translation).
Hence, the data transmission delay (i.e., size/bandwidth) is assumed to be negligible and we consider only propagation delay and processing delay \cite{zliu15,dard13}.
The total delay of a request of service $i$ from the time a user sends the request to the time she receives a response includes the round-trip delay $d_i^{\sf UE-PoA}$ between the user and a PoA of the service, the round-trip network delay
$d_{i,j}^{\sf n}$ between the PoA and an EN $j$ hosting the service, and the processing delay $d_{i,j}^{\sf p}$ at the EN. Note that an EN can be located in the same place as a PoA (e.g., a BS).
In reality, $d_i^{\sf UE-PoA}$ is quite small,
and we assume it is fixed, as in \cite{xsun17}.
In other words, we study the system only \textit{from the aggregation level to the EC platform}.
For simplicity, we assume that each service is located at one PoA (e.g., an IoT gateway, a BS, a building). If a service has several PoAs, we take the sum over all of its PoAs to obtain the total number of requests
of the service
handled by the EC platform.
Denoting $T_i^{\sf max}$ as the maximum tolerable delay of service $i$,
we have
\begin{eqnarray}
d_{i,j}^{\sf p} + d_{i,j}^{\sf n} \leq T_i^{\sf max}, \quad \forall i,~j.
\end{eqnarray}
Obviously, the maximum number of requests $\lambda_{i,j}^{\sf max}$ that EN $j$ can process is zero if $d_{i,j}^{\sf n} \geq T_i^{\sf max}$.
We model the processing delay at ENs using the widely used M/G/1 queues and assume that the workload is evenly shared among computing units \cite{ltan17,hzha17,dard13,zliu15}.
The average response time $d_{i,j}^{\sf p}$ of EN $j$ for processing service $i$ can be computed as follows:
\begin{eqnarray}
\label{queuemodel}
d_{i,j}^{\sf p} = \frac{1}{ \mu_{i,j} - \frac{\lambda_{i,j}}{x_{i,j}} }, \quad \forall i,~j
\end{eqnarray}
where $\mu_{i,j}$ is the service rate of one computing unit of EN $j$ for handling service $i$,
and $\lambda_{i,j}$ is the request arrival rate (i.e., number of requests per time unit) of service $i$ to EN $j$. For queue stability, we have $\frac{\lambda_{i,j}}{x_{i,j}} < ~\mu_{i,j},~ \forall i,~j.$
Otherwise, the queuing delay will be infinite as requests accumulate.
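To make the queueing model concrete, the response time in (\ref{queuemodel}) and its stability condition can be sketched as a small routine; the numerical values below are hypothetical.

```python
def processing_delay(mu, lam, x):
    """Average response time 1/(mu - lam/x) of an EN cluster.

    mu : service rate of one computing unit (requests per unit time)
    lam: request arrival rate routed to this EN
    x  : number of computing units allocated
    """
    if lam / x >= mu:  # queue-stability condition violated
        raise ValueError("unstable queue: lam/x must be < mu")
    return 1.0 / (mu - lam / x)

# Hypothetical numbers: mu = 10 req/s per unit, lam = 5 req/s, x = 1 unit
print(processing_delay(10.0, 5.0, 1.0))  # 0.2
```

As the arrival rate per computing unit approaches $\mu_{i,j}$, the returned delay grows without bound, which is exactly why the stability condition must hold.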
From (\ref{queuemodel}), we have
\begin{eqnarray}
\frac{1}{ \mu_{i,j} - \frac{\lambda_{i,j}}{x_{i,j}} } \leq T_i^{\sf max} - d_{i,j}^{\sf n} \\ \nonumber
\Rightarrow \lambda_{i,j} \leq x_{i,j} \Big( \mu_{i,j} - \frac{1}{T_i^{\sf max} - d_{i,j}^{\sf n}} \Big) .
\end{eqnarray}
Therefore, if $d_{i,j}^{\sf n} < T_i^{\sf max}$, the maximum number of requests that service $i$ can process at EN $j$ is
\begin{eqnarray}
\lambda_{i,j}^{\sf max} &=& \max \Big\{ x_{i,j} \Big( \mu_{i,j} - \frac{1}{T_i^{\sf max} - d_{i,j}^{\sf n}}\Big),~0 \Big\}\\ \nonumber
&=& x_{i,j} q_{i,j} , \quad \forall i,~j
\end{eqnarray}
where $q_{i,j} = \max \Big\{ \Big( \mu_{i,j} - \frac{1}{T_i^{\sf max} - d_{i,j}^{\sf n}}\Big),~0 \Big\}$.
Define a successful request as one whose total delay is smaller than or equal to the maximum delay tolerance.
Let $r_i$ be the benefit of successfully serving one request of service $i$ \cite{hzha17}. Then, given $x_{i,j}$ computing units, the revenue of service $i$ is
\begin{eqnarray}
\label{a_uti}
u_{i,j}(x_{i,j}) = r_i q_{i,j} x_{i,j} = a_{i,j} x_{i,j}, \quad \forall i,~j
\end{eqnarray}
with $a_{i,j} = r_i q_{i,j}$.
Thus, we have
\begin{eqnarray}
\label{utifunc}
u_i(x_i) = \sum_{j = 1}^M u_{i,j}(x_{i,j}) = \sum_{j=1}^M a_{i,j} x_{i,j}, \quad \forall i
\end{eqnarray}
in which $a_{i,j}$ can be computed beforehand. Note that we implicitly assume the request pool of a service is unlimited. We will discuss later how some assumptions can be relaxed.
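Collecting the derivation above, the per-unit value $a_{i,j} = r_i q_{i,j}$ can be computed beforehand from the delay and rate parameters. A minimal sketch (all numerical values below are hypothetical):

```python
def unit_value(r, mu, T_max, d_net):
    """Value a = r*q of one computing unit of an EN to a service.

    r    : benefit per successfully served request
    mu   : service rate of one computing unit (requests per unit time)
    T_max: maximum tolerable delay of the service
    d_net: round-trip network delay between the PoA and the EN
    """
    if d_net >= T_max:  # requests routed here can never meet the deadline
        return 0.0
    # q = max requests per time unit one computing unit can serve on time
    q = max(mu - 1.0 / (T_max - d_net), 0.0)
    return r * q

# Hypothetical service: r = 3 per request, mu = 10 req/s,
# T_max = 0.2 s, round-trip network delay 0.05 s
print(unit_value(3.0, 10.0, 0.2, 0.05))
```

An EN whose network delay exceeds the service's deadline contributes zero value, so such ENs simply drop out of the service's utility function.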
\begin{definition}
\label{homo}
A function $u(.)$ is \textit{homogeneous of degree} $d$, where $d$ is a constant, if $u(\alpha x) = \alpha^d u(x),~\forall ~\alpha > 0$ \cite{AGT}.
\end{definition}
From (\ref{utifunc}), it is easy to verify that
$u_i(x_i)$ is a linear function that is homogeneous of degree 1.
\textbf{Remark:} The value of an EN to a service can be defined flexibly. For example, a service may give higher values to ENs in a populated area or ENs with high reliability. A suitable weight can be added to $a_{i,j}$.
In the proposed model, each service informs the platform of its budget and how much it values different ENs. Based on this information, the platform computes a suitable resource allocation satisfying the given design objectives. How each service utilizes its allocated resources in the operation stage is not the focus of this work.
The key concern of our work is how to harmonize the interests of different services that may have different preferences towards the ENs. Also, we consider only delay-sensitive services to illustrate one way to model the service utility function.
It can be justified by the fact that non-delay-sensitive services can be handled effectively by cloud DCs and the precious edge resources can be reserved for important low-latency services. Nevertheless, our model is generic enough to handle other service types as long as we can define the utility of a service as a suitable function of its allocated EC resources.
Finally, although we consider computing resources only, the proposed framework can apply to a system in which each service evaluates an EN based on a combination of different resource types of the EN, such as computing, storage, and bandwidth.
\vspace{-.02in}
\section{Centralized Solution}
\label{EG}
In the first model, each service $i$ aims to maximize $U_i(x_i,p) = u_i(x_i) = \sum_j a_{i,j}x_{i,j}$ subject to the budget constraint $\sum_j p_j x_{i,j} \leq B_i$, $\forall i$.
If $p$ is a price vector, the ratio $a_{i,j}/p_j$ is defined as the \textit{bang-per-buck} of EN $j$ to service $i$, which indicates the utility gained by service $i$ through one unit of money spent on EN $j$ (assuming 0/0 = 0). The \textit{maximum bang-per-buck (MBB)} of service $i$ over the set of ENs is $\alpha_i = \max_j \{a_{i,j}/p_j\}$ \cite{ndev08}.
The demand set $D_i(p)$ of service $i$ includes all ENs giving it the MBB value, i.e.,
$D_i(p) = \{ j : a_{i,j}/p_j = \alpha_i \},~\forall i$.
Intuitively, to maximize its utility, each service will spend full budget to buy resources from only ENs giving it the MBB. Therefore, a pair $(X, p)$ is an ME if: i) given prices $p$, service $i$ will exhaust its budget to buy resources only from ENs in $D_i(p)$; and ii) the market clears at prices $p$.
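The two equilibrium conditions can be checked mechanically once the market data are given. The sketch below verifies them for a hypothetical two-service, two-EN market whose symmetric equilibrium is easy to guess; the values $a_{i,j}$ and budgets are illustrative only.

```python
def is_equilibrium(a, B, p, x, tol=1e-9):
    """Check the two ME conditions for linear utilities.

    a[i][j]: value of one unit of EN j to service i
    B[i]   : budget of service i;  p[j]: price of EN j
    x[i][j]: allocation (EN capacities normalized to 1)
    """
    N, M = len(a), len(p)
    for i in range(N):
        spend = sum(x[i][j] * p[j] for j in range(M))
        if abs(spend - B[i]) > tol:      # budget must be exhausted
            return False
        alpha = max(a[i][j] / p[j] for j in range(M))  # MBB ratio
        for j in range(M):               # buy only from MBB ENs
            if x[i][j] > tol and a[i][j] / p[j] < alpha - tol:
                return False
    # market clearing: every EN is fully allocated
    return all(abs(sum(x[i][j] for i in range(N)) - 1.0) <= tol
               for j in range(M))

a = [[2.0, 1.0], [1.0, 2.0]]   # hypothetical unit values
B = [1.0, 1.0]
print(is_equilibrium(a, B, p=[1.0, 1.0], x=[[1.0, 0.0], [0.0, 1.0]]))  # True
```

In this symmetric instance each service spends its whole budget on the EN it values most, prices equalize at 1, and both capacity constraints bind, so both ME conditions hold.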
In the following, we will show that the ME in the first model can be inferred from the optimal solution of a convex optimization problem. Also, we will describe some properties of the equilibrium.
Specifically, for the case of buyers with linear utilities, the ME
can be found by solving the Eisenberg--Gale (EG) convex program given below \cite{EG,AGT}:
\begin{eqnarray}
\label{EGprogram1}
\vspace{-0.1in}
\underset{\mathcal{X},u}{\text{maximize}} ~\sum_{i=1}^N B_i \ln u_i
\end{eqnarray}
subject to
\vspace{-0.3in}
\begin{eqnarray}
\label{EQ11}
u_i =\sum_{j=1}^M a_{i,j} x_{i,j}, \quad \forall i \\
\vspace{-0.5in}
\label{EQ12}
\sum_{i=1}^N x_{i,j} \leq 1, \quad \forall j \\
\label{EQ13}
x_{i,j} \geq 0, \quad \forall i,~j.
\end{eqnarray}
This problem always has an interior feasible solution by simply setting $x_{i,j} = \epsilon > 0$, for all $i$ and $j$, where $\epsilon$ is sufficiently small such that all constraints (\ref{EQ12})-(\ref{EQ13}) are satisfied with strict inequality.
Hence, Slater's condition holds and the Karush--Kuhn--Tucker (KKT) conditions are necessary and sufficient for optimality \cite{boyd}.
Denote $\eta_i$, $p_j$, and $\nu_{i,j}$ as the dual variables associated with constraints (\ref{EQ11}), (\ref{EQ12}), and (\ref{EQ13}), respectively. We have the Lagrangian
\begin{eqnarray}
L(u,X,\eta,p,\nu) = \sum_i B_i \ln u_i + \sum_j p_j ( 1 - \sum_i x_{i,j}) \\ \nonumber
+ \sum_i \eta_i \Big( \sum_j a_{i,j} x_{i,j} - u_i \Big) + \sum_i \sum_j \nu_{i,j} x_{i,j}.
\end{eqnarray}
\vspace{-0.1in}
The KKT conditions give
\begin{eqnarray}
\label{kkt1}
\frac{\partial L}{\partial u_i} = \frac{B_i}{u_i} -\eta_i = 0,~\forall i \\
\vspace{-0.1in}
\label{kkt2}
\frac{\partial L}{\partial x_{i,j}} = B_i \frac{a_{i,j}}{u_i} - p_j + \nu_{i,j} = 0, \quad \forall i,~j \\
\vspace{-0.2in}
\label{kkt3}
u_i =\sum_j a_{i,j} x_{i,j} ,~ \forall i; ~p_j (1 - \sum_i x_{i,j}) = 0, \quad \forall j \\
\label{kkt4}
\nu_{i,j} x_{i,j} = 0, ~ \forall i,~j;~ p_j \geq 0, ~ \forall j; ~ \nu_{i,j} \geq 0, ~ \forall i,~j.
\end{eqnarray}
We can infer the following
\begin{eqnarray}
\label{con1}
\forall i,j: \frac{a_{i,j}}{p_j} \leq \frac{u_i}{B_i}\\
\label{con2}
\forall i,j: \text{if} ~x_{i,j} > 0 \Rightarrow \nu_{i,j} = 0 \Rightarrow \frac{u_i}{B_i} = \frac{a_{i,j}}{p_j}\\
\label{con3}
\forall j: p_j > 0 \Rightarrow \sum_i {x_{i,j}} = 1; ~ \sum_i {x_{i,j}} < 1 \Rightarrow p_j = 0.
\end{eqnarray}
The dual variable $p_j$ in the EG program can be interpreted as the price of EN $j$. Hence, conditions (\ref{con1}) and (\ref{con2}) imply that $x_{i,j} > 0 $ if and only if $j \in D_i(p)$, i.e., each service buys resources only from ENs giving it the MBB. This also maximizes $u_i(x_i)$.
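To illustrate, the EG program can be solved numerically and the prices recovered from the KKT relation $p_j = B_i a_{i,j}/u_i$, which holds with equality for any purchaser $i$ of EN $j$. The sketch below uses plain projected gradient ascent on a hypothetical two-service, two-EN market; it is an illustration, not the solver one would use in practice.

```python
import numpy as np

def solve_eg(a, B, steps=20000, lr=0.005):
    """Projected gradient ascent on the EG objective sum_i B_i*ln(u_i)
    subject to sum_i x_ij <= 1 and x >= 0 (capacities normalized)."""
    a, B = np.asarray(a, float), np.asarray(B, float)
    N, M = a.shape
    x = np.full((N, M), 1.0 / N)            # interior starting point
    for _ in range(steps):
        u = (a * x).sum(axis=1)             # u_i = sum_j a_ij x_ij
        x = x + lr * (B / u)[:, None] * a   # gradient: B_i a_ij / u_i
        for j in range(M):                  # project each EN's column
            x[:, j] = proj_capped_simplex(x[:, j])
    u = (a * x).sum(axis=1)
    # prices via KKT: p_j = max_i B_i a_ij / u_i (equality for purchasers)
    p = ((B / u)[:, None] * a).max(axis=0)
    return x, p

def proj_capped_simplex(v):
    """Euclidean projection onto {w >= 0, sum(w) <= 1}."""
    w = np.maximum(v, 0.0)
    if w.sum() <= 1.0:
        return w
    u = np.sort(v)[::-1]                    # otherwise project onto the simplex
    css = np.cumsum(u) - 1.0
    rho = np.nonzero(u * np.arange(1, len(u) + 1) > css)[0][-1]
    return np.maximum(v - css[rho] / (rho + 1.0), 0.0)

x, p = solve_eg([[2.0, 1.0], [1.0, 2.0]], [1.0, 1.0])
print(np.round(x, 2), np.round(p, 2))
```

For this symmetric instance the iterate converges to the allocation in which each service takes its preferred EN entirely, and the recovered prices equal each service's budget per unit, consistent with the budget-exhaustion property shown next.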
The following theorem captures the relationship between the EG program and the ME solution as well as some properties of the equilibrium.
\begin{theorem}
\label{theorem1}
The optimal solution to the EG convex program (\ref{EGprogram1})-(\ref{EQ13}) is an ME. Specifically, the Lagrangian dual variables corresponding to the ENs' capacity constraints (\ref{EQ12}) are the equilibrium prices.
At the equilibrium, the resource allocation not only maximizes the utility but also exhausts the budget of every service.
Furthermore, each service purchases resources only from ENs giving its MBB.
Additionally, the optimal utilities of the services as well as equilibrium prices are unique.
\end{theorem}
\textbf{Proof}: Let $X^*$ and $u_i^*$ be the optimal solution to the EG program. Then, $X^*$ and $u_i^*$ need to satisfy the KKT conditions (\ref{kkt1})-(\ref{con3}). Denote $\eta^*$, $p^*$, and $\nu^*$ as the optimal dual variables.
From (\ref{kkt2}), we have
\begin{eqnarray}
\label{budeq1}
B_i \frac{a_{i,j}}{u_i^*} = p_j^* - \nu_{i,j}^* , \quad \forall i,~j.
\end{eqnarray}
Multiplying both sides of (\ref{budeq1}) by $x_{i,j}^*$ and adding the resulting equalities, we get
\begin{eqnarray}
\label{budeq2}
\frac{B_i}{u_i^*} \sum_j a_{i,j} x_{i,j}^* = \sum_j (p_j^* - \nu_{i,j}^*) x_{i,j}^* , \quad \forall i.
\end{eqnarray}
Since $\nu_{i,j}^* x_{i,j}^* = 0, ~\forall i,~j$, and $u_i^* = \sum_j a_{i,j} x_{i,j}^*, ~\forall i$, equation (\ref{budeq2}) implies $\sum_j p_j^* x_{i,j}^* = B_i, ~\forall i$. Thus, the optimal solution to the EG program (\ref{EGprogram1})-(\ref{EQ13}) fully exhausts the budget of every service. Furthermore, as shown above, at optimality, each service buys resources only from ENs giving its MBB value. In other words, the optimal solution to the EG program maximizes the utility of every service subject to the budget constraint because every service uses all of its money to purchase its MBB resources. This can be inferred from (\ref{con1}) and (\ref{con2}).
We now
consider the market clearing condition. From (\ref{con3}), we can observe that resources of ENs with positive price $p_j$ are fully allocated. For ENs with zero prices, their resources can be allocated arbitrarily without affecting the optimal utility of service since the price is zero \cite{AGT}. Thus, the market clears. Since ($X^*$,~ $p^*$) satisfies both conditions of an ME, the optimal solution to the EG program is an ME.
Finally, since the objective function (\ref{EGprogram1}) is strictly concave in $u_i$ for all $i$, the optimal utilities are unique.
The uniqueness of equilibrium prices can be inferred from (\ref{con2}).
\qed
From (\ref{budeq1}), if $p_j^* = 0$, then $\nu_{i,j}^* = 0$ and $a_{i,j} = 0, ~\forall i$, which means an EN has a price of zero only when it is not wanted by any service. We can remove such an EN from our system. In the following, we consider only the case where $p_j > 0, ~\forall j$.
Also, it can be shown that Theorem \ref{theorem1} applies not only to linear utilities
but also to a wider class of homogeneous concave utility functions \cite{EG1}. Please refer to \textbf{Appendix D} for more details.
Next, we study the properties of the equilibrium allocation. First, from (\ref{EGprogram1})-(\ref{EQ13}), it can easily be verified that the equilibrium allocation is scale-free: it does not matter whether service $i$ reports $a_i = (a_{i,1},\ldots,a_{i,M})$ or $e_i a_i$ for some constant $e_i > 0$; the allocation it receives is the same.
Also, if a service divides its budget into two parts and acts as two different services with the same original utility function, then the total allocation it obtains from the new ME is equal to the original equilibrium allocation.
Furthermore, the equilibrium allocation is not only Pareto-optimal but also possesses many appealing fairness properties such as envy-freeness, sharing incentive, and proportionality.
An allocation is \textit{Pareto-optimal} if there is no other allocation that would make some service better off without making someone else worse off \cite{eco}, which means there is no strictly ``better'' allocation. Hence, a Pareto-optimal allocation is efficient and non-wasteful because the remaining resources (if any) cannot improve utility of any service.
\textit{Envy-freeness} means that every service prefers its allocation to the allocation of any other service. When the services have equal budgets, an envy-free allocation $\mathcal{X}$ implies $u_i(x_i) \geq u_i(x_{i'})$ for all $i$ and $i' \in \mathcal{N}$ \cite{hmou04}.
In an envy-free allocation, every service feels that her share is at least as good as the share of any other service, and thus no service feels envy.
Since the budgets can be different, we need to extend the classical definition of envy-freeness.
An allocation $\mathcal{X}$ is envy-free if $ u_i(\frac{x_i}{B_i}) \geq u_i ( \frac{x_{i'}}{B_{i'}}),~\forall i,~i' \in \mathcal{N}$.
Let $\hat{x}$ be the allocation where each service receives
resource from every EN proportional to its budget, i.e., $\hat{x}_{i,j} = \frac{B_i}{\sum_{i'} B_{i'}}, ~\forall i,~j$.
\textit{Sharing-incentive} property implies $u_i(x_i) \geq u_i(\hat{x}_i),~\forall i.$
Indeed, $\hat{x}$ is an intuitive resource-fair allocation that allocates resources from every EN to each service proportional to the service budget. We can also understand that each service $i$ contributes an amount of $\hat{x}_{i,j}$ to EN $j$ in a resource pool consisting of the ENs.
Sharing-incentive ensures that every service prefers the equilibrium allocation to its initial resource contribution to the pool. This can be interpreted as resource-fairness.
Finally, if $u_i(x_i) \geq \frac{B_i}{\sum_{i'} B_{i'}} u_i (C)$, for all $i$, in which $u_i(C)$ is the utility of service $i$ when it receives all the resources from the market (i.e., $C = (1,...,1), C \in \mathcal{R}^M$), we say that the allocation $\mathcal{X}$ satisfies the \textit{proportionality} property.
Indeed, $u_i(C)$ is the maximum utility that every service $i$ can achieve from the EC resource pool. The proportionality property guarantees that the utility of every service at the ME is at least proportional to its payment/budget. Thus, this property can be interpreted as utility-fairness.
Obviously, these fairness properties encourage services to participate in the proposed resource allocation scheme.
\begin{theorem}
\label{theorem2}
At equilibrium, the allocation is Pareto-optimal and envy-free. It also satisfies the sharing-incentive and proportionality properties.
\end{theorem}
\textbf{Proof:}
Since at the equilibrium every service exhausts its budget and receives its favorite resource bundle, it does not envy any other service. Hence, the equilibrium allocation is envy-free. The Pareto-optimality follows directly from the first welfare theorem in Economics \cite{eco,AGT}. Indeed, Pareto-optimality can also be inferred from the Nash Bargaining concept \cite{jnas50}. In particular, the problem (\ref{EGprogram1})-(\ref{EQ13}) has an objective in the form of a Nash social welfare function over a closed, compact, and convex feasible region. Thus, it enjoys all the compelling properties of a Nash Bargaining solution, such as Pareto efficiency and scale-invariance. For linear utilities, we can prove the properties above directly
as follows.
- \textit{Pareto Optimality:}
We show this by contradiction. Assume allocation $X^*$ is not Pareto-optimal. Then, there exists an allocation $X'$ such that $u_i(x'_i) \geq u_i(x_i^*)$ for all $i$, and $u_i(x'_i) > u_i(x_i^*)$ for some $i$. Note that $u_i(x_i) = \sum_j a_{i,j} x_{i,j}$. Consider any feasible allocation $X'$.
Recall the MBB of buyer $i$ is $\alpha_i = \max_j \frac{a_{i,j}}{p_j}$. We have
\begin{eqnarray}
\label{PO1}
\sum_j x'_{i,j} p_j \geq \sum_j x'_{i,j} \frac{a_{i,j} }{\alpha_i} \geq \sum_j x_{i,j}^* a_{i,j} \frac{1}{\alpha_i} = \sum_j x_{i,j}^* p_j .
\end{eqnarray}
\vspace{-0.1in}
The second inequality is due to
$u_i(x'_i) \geq u_i(x_i^*),~\forall i$.
Thus
\begin{eqnarray}
\label{PO2}
\sum_j x'_{i,j} p_j \geq B_i, \quad \forall i.
\end{eqnarray}
Since $u_i(x'_i) > u_i(x_i^*)$ for some $i$, the second inequality in (\ref{PO1}) is strict for that $i$, i.e.,
$\sum_j x'_{i,j} p_j > B_i$ for some $i$.
Adding both sides of (\ref{PO2}) over all buyers renders
\begin{eqnarray}
\label{PO3}
\sum_i B_i < \sum_i \sum_j x'_{i,j} p_j = \sum_j p_j \sum_i x'_{i,j} \leq \sum_j p_j
\end{eqnarray}
because $\sum_i x'_{i,j} \leq 1, ~ \forall j$ (i.e., the capacity constraints of ENs).
However, (\ref{PO3}) means the total price of all the ENs is greater than the total budget of all buyers, which cannot occur because every buyer exhausts her budget at equilibrium and the market clears, i.e., $\sum_j p_j = \sum_i B_i$. Thus, the equilibrium allocation $X^*$ is Pareto-optimal.
- \textit{Envy-freeness}: To prove that $X^*$ is envy-free,
we need to show: $B_{i'} u_i(x_i^*) \geq B_i u_i(x_{i'}^*), ~\forall i,~i' \in \mathcal{N}$. Let $b_{i,j}$ be the total money that service $i$ spends on EN $j$.
We have
\begin{eqnarray}
\label{envy}
B_{i'} u_i(x_i^*) &=& B_{i'} \sum_j a_{i,j} x_{i,j}^* = B_{i'} \sum_j a_{i,j} \frac{b_{i,j}^*}{p_j} \\ \nonumber
&=& B_{i'} \sum_j \frac{a_{i,j}}{p_j} b_{i,j}^* = B_{i'} \alpha_i \sum_j b_{i,j}^* \\ \nonumber
&=& B_{i'} \alpha_i B_i = B_i \alpha_i \sum_j b_{i',j}^* \\ \nonumber
\vspace{-0.1in}
&\geq& B_i \sum_j \frac{a_{i,j}}{p_j} b_{i',j}^* = B_i \sum_j a_{i,j} \frac{b_{i',j}^*}{p_j}\\ \nonumber
&=& B_i \sum_j a_{i,j} x_{i',j}^* = B_i u_i(x_{i'}^*), ~\forall i,~i'.
\end{eqnarray}
Note that the equalities in the second line of (\ref{envy}) follow from the fact that each buyer only buys resources from ENs in its demand set $D_i$, where $\frac{a_{i,j}}{p_j} = \alpha_i$, while the inequality in the fourth line holds because $\alpha_i \geq \frac{a_{i,j}}{p_j},~\forall i,~j$.
- \textit{Proportionality:} From Theorem \ref{theorem1}, $\sum_i x_{i,j}^* = 1,~\forall j$. Thus, by the linearity of the utilities and the envy-freeness property, we have
\begin{eqnarray}
u_i(C) = u_i \big(\sum_{i'} x_{i'}^* \big)
= u_i \big(x_i^* \big) + \sum_{i' \neq i} u_i \big( x_{i'}^* \big) \\ \nonumber
\leq u_i \big(x_i^* \big) + \sum_{i' \neq i} \frac{B_{i'}}{B_i} u_i \big(x_i^* \big) = \frac{\sum_{i'} B_{i'}}{B_i} u_i \big(x_i^* \big).
\end{eqnarray}
Hence, $u_i(x_i^*) \geq \frac{B_i}{\sum_{i'} B_{i'}} u_i (C), \forall i$.
- \textit{Sharing-incentive:} At the ME $(X^*, p^*)$, no service spends more than its budget. We have
\begin{eqnarray}
\sum_i \sum_j x_{i,j}^* p_j^* \leq \sum_i B_i \Rightarrow \sum_j p_j^* \sum_i x_{i,j}^* \leq \sum_i B_i.
\end{eqnarray}
Since $\sum_i x_{i,j}^* = 1,~\forall j$ (Theorem \ref{theorem1}), this gives $\sum_j p_j^* \leq \sum_i B_i$. Consequently, the proportional-share bundle $\hat{x}_i$ costs service $i$: $\sum_j \hat{x}_{i,j} p_j^* = \sum_j \frac{B_i}{\sum_{i'} B_{i'}} p_j^* \leq B_i, ~\forall i.$ So, service $i$ can afford to buy bundle $\hat{x}_i$ at prices $p^*$.
However, out of all feasible bundles that are affordable to service $i$, its favorite one is $x_i^*$.
It means $u_i(x_i^*) \geq u_i(\hat{x}_i), ~\forall i.$
\qedb
\vspace{-0.1in}
\section{Decentralized Solution}
\label{dist}
A common approach to implementing a distributed algorithm is to let the platform iteratively compute the prices of the ENs and broadcast the updated prices to the services. Each service then finds its optimal demand bundle and sends the updated demand to the platform.
This price-based strategy can be implemented in a tatonnement style or by using the dual decomposition method \cite{dpal06}.
Unfortunately, with linear utilities the optimal demand bundle may not be unique, because multiple ENs may give the same MBB to a buyer.
Hence, the algorithm cannot terminate without aggregate demand coordination from the platform.
Consider an example with two services and three ENs.
The system parameters are: $B_1$ = \$1, $B_2$ = \$4, $a_1 = (1, 10, 4)$, and $a_2 = (4, 8, 8)$.
Fig.~\ref{fig:fail1} presents the ME obtained from the centralized EG program. The value associated with each edge between a service and an EN indicates the amount of resource that the service buys from that EN. For example, in Fig.~\ref{fig:fail1}, we have $x_{1,1} = 0$, $x_{1,2} = 0.5$, and $x_{1,3} = 0$. The equilibrium price vector is $p = (1, 2, 2)$, and the demand sets
are $D_1 = \{2\}$ and $D_2 = \{1, 2, 3\}$. Given the equilibrium prices, the set of optimal (i.e., utility-maximizing) resource bundles of service 2 is infinite.
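The demand sets in this example can be verified in a few lines of code (a minimal sketch, not the paper's implementation; the data are the example's budgets, valuations, and equilibrium prices):

```python
# Minimal sketch: checking the demand sets of the two-service, three-EN example.
B = [1.0, 4.0]                      # budgets B_1, B_2
a = [[1.0, 10.0, 4.0],              # valuations a_{i,j}
     [4.0, 8.0, 8.0]]
p = [1.0, 2.0, 2.0]                 # equilibrium prices from the EG program

def demand_set(i, eps=1e-9):
    # ENs attaining buyer i's maximum bang-per-buck alpha_i = max_j a_{i,j}/p_j
    ratios = [a[i][j] / p[j] for j in range(len(p))]
    alpha = max(ratios)
    return {j + 1 for j, r in enumerate(ratios) if r > alpha - eps}

print(demand_set(0))  # {2}: service 1's optimal bundle is unique
print(demand_set(1))  # {1, 2, 3}: service 2 can split its budget arbitrarily
```

Service 1's bang-per-buck ratios are $(1, 5, 2)$, so its demand set is the singleton $\{2\}$, while service 2's ratios are $(4, 4, 4)$, which is exactly the source of the non-uniqueness.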
Hence, even if a distributed algorithm reaches the exact equilibrium prices at some iteration, it may not stop,
since the total demand reported by the buyers may not equal the total supply. For instance, in Fig.~\ref{fig:fail2}, although the platform announces the exact equilibrium prices, service 2 may choose to buy all of its resources from EN2 and EN3.
Then, the algorithm may never terminate. In the following, we present two distributed algorithms
for finding the ME.
\begin{figure}[ht]
\centering
\subfigure[With coordination]{
\includegraphics[width=0.22\textwidth,height=0.10\textheight]{fail1}
\label{fig:fail1}
} \hspace*{-0.5em}
\subfigure[Without coordination]{
\includegraphics[width=0.22\textwidth,height=0.10\textheight]{fail2}
\label{fig:fail2}
}
\caption{Market equilibrium with linear utilities}
\end{figure}
\vspace{-0.18in}
\subsection{Dual Decomposition with Function Approximation}
\label{dualapp}
Using Lagrangian relaxation \cite{boyd,dpal06}, we can decompose the EG convex program into
sub-problems, each of which can be solved by a service.
We observe that the EG program (\ref{EGprogram1})-(\ref{EQ13}) can be written equivalently as follows.
\begin{eqnarray}
\label{EGprogram11}
\underset{\mathcal{X}}{\text{maximize}} && \sum_{i=1}^N B_i \ln u_i(x_i) \\ \nonumber
\label{EQ18}
\text{subject to} ~ && \sum_{i=1}^N x_{i,j} \leq 1, ~ \forall j; \quad x_{i,j} \geq 0, ~ \forall i,~j.
\end{eqnarray}
Relaxing the coupling constraints, the partial Lagrangian is
\begin{eqnarray}
L(X,p) &=& \sum_i B_i \ln u_i(x_i) + \sum_j p_j \big( 1 - \sum_i x_{i,j} \big) \\ \nonumber
&=& \sum_i \Big( B_i \ln u_i(x_i) - \sum_j p_j x_{i,j} \Big) + \sum_j p_j.
\end{eqnarray}
Thus, given a price vector $p$, each service solves
\begin{eqnarray}
\label{subprob}
\underset{x_i \geq 0}{\text{maximize}} ~ B_i \ln u_i(x_i) - \sum_j p_j x_{i,j}.
\end{eqnarray}
To overcome the difficulty raised by the non-uniqueness of the optimal demand of the services with linear utilities, we propose to approximate the linear utility function by a Constant Elasticity of Substitution (CES) function, which is widely used in Economics and Computer Science \cite{eco,AGT}.
A CES function has the following form: $u_i^{\sf CES} (x_i) = \Big( \sum_{j=1}^M (a_{i,j} x_{i,j} )^\rho \Big)^{\frac{1}{\rho}}, ~ \rho < 1,~ \rho \neq 0.$
%
Indeed, the linear utility function is a special case of the CES family as $\rho \rightarrow 1$. We can approximate the original linear utility function by a CES function with $\rho = 1 - \epsilon$, where $\epsilon$ is arbitrarily small. As $\epsilon \rightarrow 0$, $u_i^{\sf CES} \rightarrow u_i$.
Clearly,
a CES function with $\rho < 1$ is strictly concave and homogeneous \cite{eco}. Hence, the EG program and Theorem \ref{theorem1} also apply to CES functions \cite{EG1,AGT}. Additionally, maximizing the CES function above is equivalent to maximizing $\sum_j (a_{i,j} x_{i,j} )^\rho$.
Since a CES function is strictly concave, the optimal demand bundle of a service is unique.
Consider the following optimization problem
\begin{eqnarray}
\label{prob1}
\underset{x_{i} \geq 0}{\text{maximize}} ~u_i(x_i) \quad
\text{subject to} ~ \sum_j p_j x_{i,j} \leq B_i.
\end{eqnarray}
\begin{proposition} Given a positive price vector $p$ and the CES approximation, each service $i$ can solve either Problem (\ref{subprob}) or Problem (\ref{prob1}). Both problems have the same closed-form solution:
\begin{eqnarray}
\label{cesdemand}
x_{i,j} = \Big( \frac{a_{i,j}^{\rho}}{p_j}\Big)^{\frac{1}{1-\rho}} \frac{B_i}{ \sum_{j'=1}^M \Big( \frac{a_{i,j'}}{p_{j'}} \Big)^{\frac{\rho}{1-\rho}}}.
\end{eqnarray}
\end{proposition}
\textbf{Proof:} Refer to \textbf{Appendix A}.\qedb
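As a quick numerical sanity check of the closed form (a sketch, not the paper's code; service 1 of the earlier example is used with an illustrative $\rho = 0.99$), the resulting demand exhausts the budget exactly and concentrates on the MBB EN as $\rho \rightarrow 1$:

```python
# Sketch: evaluating the closed-form CES demand of the Proposition.
def ces_demand(B_i, a_i, p, rho):
    s = sum((a_i[j] / p[j]) ** (rho / (1 - rho)) for j in range(len(p)))
    return [(a_i[j] ** rho / p[j]) ** (1 / (1 - rho)) * B_i / s
            for j in range(len(p))]

p = [1.0, 2.0, 2.0]                     # equilibrium prices of the example
x = ces_demand(1.0, [1.0, 10.0, 4.0], p, rho=0.99)
spent = sum(x[j] * p[j] for j in range(3))
# x concentrates on EN2 (service 1's MBB EN), so x[1] is close to B_1/p_2 = 0.5
```

Budget exhaustion, $\sum_j p_j x_{i,j} = B_i$, holds identically by the algebra behind the closed form, which is what makes the bundle unique for any positive prices.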
Thus, based on the dual decomposition method, where each service solves the sub-problem (\ref{subprob}), we obtain the following distributed algorithm with CES function approximation (\textbf{Algorithm 1}).
With a sufficiently small step size, it is guaranteed to terminate and converge to an (approximately) globally optimal solution \cite{boyd,dpal06}. Our simulation results confirm that \textbf{Algorithm 1} produces a solution arbitrarily close to the optimal one obtained from the centralized EG program.
\begin{algorithm}[H]
\footnotesize
\caption{\textsc{Function Approximation Algorithm}}
\label{CESalg}
\begin{algorithmic}[1]
\STATE Initialization: iteration $t = 0$; set the initial prices of the ENs $p(0) = p_0$, and set the step size $\alpha(0)$ and the tolerance $\gamma$ to be small.
\REPEAT
\STATE At iteration $t$, the platform broadcasts the prices $p(t)$ to the buyers.
\STATE Each buyer computes its optimal demand $x_i(t)$
using (\ref{cesdemand}) and sends it to the platform.
\STATE The platform updates
the prices \\
$p_j(t+1) = \max \Big\{ p_j(t) + \alpha(t) \big( \sum_{i=1}^N x_{i,j}(t) - 1 \big), 0 \Big\}, \quad \forall j$
\UNTIL {$\big| p_j(t+1) - p_j(t) \big| < \gamma, ~\forall j$, or a maximum number of iterations is reached.}
\STATE Output: equilibrium prices $p^*$ and optimal allocation $X^*$.
\end{algorithmic}
\end{algorithm}
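A minimal sketch of \textbf{Algorithm 1} on the running two-service, three-EN example; the values of $\rho$, the step size, the small price floor, and the iteration budget are illustrative choices, and the update raises the price of over-demanded ENs (the subgradient direction implied by the Lagrangian above):

```python
# Sketch of Algorithm 1 on the running example; rho, alpha, the price floor,
# and the iteration budget are illustrative choices.
B = [1.0, 4.0]
a = [[1.0, 10.0, 4.0], [4.0, 8.0, 8.0]]
rho, alpha = 0.9, 0.005

def ces_demand(B_i, a_i, p):
    # closed-form CES demand at prices p
    s = sum((a_i[j] / p[j]) ** (rho / (1 - rho)) for j in range(3))
    return [(a_i[j] ** rho / p[j]) ** (1 / (1 - rho)) * B_i / s for j in range(3)]

p = [1.0, 1.0, 1.0]                       # initial prices p_0
for t in range(100000):
    x = [ces_demand(B[i], a[i], p) for i in range(2)]
    # raise the price of over-demanded ENs, lower under-demanded ones;
    # the small floor keeps the demand formula well defined
    p = [max(p[j] + alpha * (x[0][j] + x[1][j] - 1.0), 1e-3) for j in range(3)]

x = [ces_demand(B[i], a[i], p) for i in range(2)]
excess = max(abs(x[0][j] + x[1][j] - 1.0) for j in range(3))  # ~ market clearing
```

At termination the aggregate demand of every EN is close to its (unit) capacity, and each service still spends exactly its budget, so the total price approaches the total budget.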
\subsection{Proportional Response Dynamics Strategy}
\label{propdyn}
In this section, we present the Proportional Response Dynamics (\textbf{PropDyn}) algorithm proposed by the P2P community.
This distributed algorithm is very simple to implement and has been proved to converge to an ME \cite{fwu07}.
Basically, in every iteration $t$, each service updates its bids
in proportion to the utilities it received in the previous iteration.
Specifically, $b_{i,j}(t) = B_i \frac{u_{i,j}(t-1)}{u_i(t-1)}, ~ \forall i,~ j,~ t.$
Since the ENs' capacities are normalized, the price of an EN equals the total bid sent to it, i.e., $p_j(t) = \sum_{i} b_{i,j}(t)$. By bidding $b_{i,j}(t-1)$ to EN $j$,
service $i$ obtains the amount of resource $x_{i,j}(t-1) = b_{i,j}(t-1)/p_j(t-1)$ and gains utility $u_{i,j}(t-1) = a_{i,j} x_{i,j}(t-1)$.
Finally, $u_i(t-1) = \sum_j u_{i,j}(t-1)$ is the total utility of service $i$
at iteration $t-1$.
The salient feature of this algorithm is that it can be implemented efficiently in a distributed manner: each EN only needs to know the total bid it receives to compute its price, while each buyer only needs its own
information and the utilities it achieved in the previous iteration to compute its new bids.
The algorithm terminates when the price deviation of every EN is
sufficiently small \cite{fwu07}. The major difference between this algorithm and traditional distributed algorithms is that, in each iteration, every service computes its new bids as described above instead of its optimal demand bundle.
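The update rule above can be sketched on the earlier two-service, three-EN example (the uniform initial bids and the iteration count are illustrative choices; convergence of these dynamics to the ME is the result of \cite{fwu07}):

```python
# Sketch of PropDyn on the earlier example; uniform initial bids and the
# iteration count are illustrative choices.
B = [1.0, 4.0]
a = [[1.0, 10.0, 4.0], [4.0, 8.0, 8.0]]
b = [[B[i] / 3.0] * 3 for i in range(2)]       # uniform initial bids

for t in range(5000):
    p = [b[0][j] + b[1][j] for j in range(3)]  # p_j = total bid on EN j
    # utility collected from EN j under the proportional allocation b_{i,j}/p_j
    u = [[a[i][j] * b[i][j] / p[j] for j in range(3)] for i in range(2)]
    # proportional response: next bids proportional to realized utilities
    b = [[B[i] * u[i][j] / sum(u[i]) for j in range(3)] for i in range(2)]

p = [b[0][j] + b[1][j] for j in range(3)]      # approaches the ME prices (1, 2, 2)
```

Note that each service's bids sum to its budget at every iteration by construction, so the dynamics needs no explicit budget bookkeeping.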
\begin{algorithm}[H]
\footnotesize
\caption{\textsc{Best Response Dynamics Algorithm \cite{mfel09}}}
\label{BRalg}
\begin{algorithmic}[1]
\STATE Sort the ENs in decreasing order of $\frac{a_{i,j}}{b_{-i,j}}$. \\
Output a sorted list $L_i = \{i_1,~i_2,\ldots,~i_M\}. $
\STATE Find the largest $k$ such that ~\\ ~\\
$ \frac{\sqrt{a_{i,i_k} b_{-i,i_k}}} {\sum_{j=1}^k \sqrt{a_{i,i_j} b_{-i,i_j}}} \Big(B_i + \sum_{j=1}^k b_{-i,i_j} \Big) - b_{-i,i_k} \geq 0$ ~\\ ~\\
\STATE Set $b_{i_l} = 0 ~\text{for} ~l > k$, and for $1 \leq l \leq k$, set ~\\ ~\\
$b_{i_l} = \frac{\sqrt{a_{i,i_l} b_{-i,i_l}} }{\sum_{j=1}^k \sqrt{a_{i,i_j} b_{-i,i_j}} } \Big(B_i + \sum_{j=1}^k b_{-i,i_j} \Big)- b_{-i,i_l}$ ~\\
\end{algorithmic}
\end{algorithm}
To illustrate the effectiveness of the \textbf{PropDyn} mechanism as well as the ME concept, we compare it with the \textit{Proportional Sharing Best Response} (\textbf{PropBR}) mechanism proposed in \cite{mfel09}, which aims to find a Nash Equilibrium (NE). In a non-cooperative game, an NE is a stable state of the system in which no player can gain by a unilateral change of strategy while the strategies of the others are fixed \cite{AGT}.
Both \cite{fwu07} and \cite{mfel09} study a proportional sharing system where the resource of every node is shared proportionally to the services according to their bids. Specifically, we have $x_{i,j} = \frac{b_{i,j}}{b_{i,j} + b_{-i,j}},~ \forall i,~j$, where $b_{-i,j}$ is the total bid of all the services except $i$.
In both mechanisms, the actions of the
services are the bids ($b_{i,j}$)
submitted to the ENs.
However, instead of updating its bids following the \textbf{PropDyn} rule, each service in the \textbf{PropBR} mechanism selfishly maximizes its utility given the strategies taken by the other services \cite{mfel09}.
\textbf{Algorithm 2} is the BR algorithm that buyer $i$ executes given the total bids $b_{-i,j}$ of the other buyers.
The whole algorithm is implemented in rounds. In each round, each buyer in turn runs \textbf{Algorithm 2} and submits its updated bid vector $b_i$
to the platform. The platform then broadcasts the new bids to all buyers in the system. A round completes when all buyers have updated their bids. Obviously, whenever this BR dynamics converges, it converges to an NE. As mentioned in \cite{mfel09}, the algorithm normally converges after a few rounds.
Interestingly, our simulations show that buyers do not gain significantly by playing BR. Indeed, most buyers achieve lower utilities under the \textbf{PropBR} scheme than under the \textbf{PropDyn} scheme.
Furthermore, to play BR dynamics, each buyer has to know the
total bids of the others and the actual capacity of every EN \cite{mfel09}, whereas in \textbf{PropDyn} buyers only need their own information.
Therefore, in a proportional sharing system, buyers may have little incentive to play BR.
\section{Net Profit Maximization }
\label{formu2}
Different from the \textbf{basic model}, in the second model the services try to optimize their net profits (i.e., revenue minus cost) instead of their revenues. Specifically, the net profit of service $i$ is $v_i(x_i) = \sum_j (a_{i,j} - p_j) x_{i,j},~\forall i.$
Given prices $p$, the objective of service $i$ is to maximize $U_i(x_i,p) = v_i(x_i)$ subject to: $\sum_j x_{i,j} p_j \leq B_i,~\forall i$ and $x_{i,j} \geq 0,~ \forall i,~j$.
Indeed, maximizing the net profit $v_i(x_i)$ is equivalent to maximizing $\sum_j (a_{i,j} - p_j) x_{i,j} + B_i = \sum_j a_{i,j} x_{i,j} + s_i $, where $s_i = B_i - \sum_j p_j x_{i,j}$ is the surplus money of service $i$ after purchasing $x_i$.
Inspired by the EG program for the \textbf{basic model}, we would like to construct a similar convex program to capture the ME in this new model.
Note that without the budget consideration, this game-theoretic problem can be solved efficiently by writing down a social welfare maximization problem (i.e., maximizing the sum of the utilities of all services) and then using the dual decomposition method \cite{dpal06} to decompose it into sub-problems, each solved by one service. Each sub-problem is exactly a net profit maximization problem of a service. Unfortunately, this strategy fails when we consider budgets, since the coupling budget constraints prevent the social welfare maximization problem from being decomposed.
Our derivation of the new convex optimization problem is based on reverse-engineering the \textbf{basic model}.
\begin{proposition}
The equilibrium prices in the \textbf{basic model} can be found by solving the following convex problem.
\begin{eqnarray}
\label{dual1}
\underset{p,\eta}{\text{minimize}} ~&&\sum_{j=1}^M p_j - \sum_{i=1}^N B_i \ln (\eta_i) \\ \nonumber
\label{dual1EQ1}
\text{subject to} ~ &&p_j \geq a_{i,j} \eta_i, ~\forall i,~j;
~ p_j \geq 0,~\forall j.
\end{eqnarray}
\end{proposition}
\textbf{Proof:}
We can obtain this convex problem by using the Lagrangian and the Fenchel conjugate function \cite{boyd} to construct the dual of the original EG program. Indeed, $\eta_i$ and $p_j$ are the dual variables associated with (\ref{EQ11}) and (\ref{EQ12}), respectively. See \textbf{Appendix B} for the full proof. \qedb
Clearly, to maximize $v_i(x_i) = \sum_j (a_{i,j} - p_j) x_{i,j}$, service $i$ will never buy resources from EN $j$ if $a_{i,j} < p_j$. In other words, service $i$ only buys resources from ENs in the set $A_i = \big\{j: \frac{p_j}{a_{i,j}} \leq 1\big\}.$
From (\ref{dual1}), we have $\eta_i \leq \frac{p_j}{a_{i,j}}, ~\forall i,~j$.
From these observations, we conjecture that the following program captures the equilibrium prices in our second market model (i.e., net profit maximization).
\begin{eqnarray}
\label{dual2}
\underset{p,\eta}{\text{minimize}}~~ \sum_{j=1}^M p_j - \sum_{i=1}^N B_i \ln (\eta_i)
\end{eqnarray}
\vspace{-0.1in}
subject to
\begin{eqnarray}
\label{dual2EQ1}
p_j \geq a_{i,j} \eta_i, ~\forall i,~j;~ \eta_i \leq 1,~ \forall i;~\eta_i \geq 0, ~\forall i;~p_j \geq 0, ~\forall j. \nonumber
\end{eqnarray}
\vspace{-0.2in}
\begin{theorem} The solution of the following convex program is exactly an ME of the new market model.
\begin{eqnarray}
\label{EGprogram2}
\underset{\mathcal{X},u,s}{ \text{maximize}} ~ &&\sum_{i=1}^N \Big( B_i \ln u_i - s_i \Big) \\ \nonumber
\label{EQ21}
\text{subject to} && u_i \leq \sum_{j=1}^M a_{i,j} x_{i,j} + s_i, ~\forall i \\ \nonumber
&& \sum_{i=1}^N x_{i,j} \leq 1, ~\forall j; \quad x_{i,j} \geq 0, ~\forall i,~j;~ s_i \geq 0, ~\forall i.
\end{eqnarray}
At the equilibrium, the sum of the money spent and the surplus money of every service equals its budget.
Additionally, the optimal utility of every service is unique and greater than or equal to its budget. Any buyer with surplus money has utility exactly equal to its budget.
\end{theorem}
\textbf{Proof:}
See \textbf{Appendix C}. \qedb
The convex problem (\ref{EGprogram2}) is indeed the dual program of problem (\ref{dual2}).
We can interpret problem (\ref{EGprogram2}) as follows. First, the utility of a service is the sum of its revenue and its surplus money.
The first part of the objective function is the budget-weighted sum of the logarithmic utilities of the services, similar to that of the EG program. However, since the surplus money does not contribute to (i.e., is not visible in) the market, we subtract this amount from the aggregate objective function. Finally, similar to the EG program, although the budget constraints are not included in (\ref{EGprogram2}), the optimal solution satisfies them. It is worth noting that, somewhat surprisingly, although our reverse-engineering approach is specialized to linear revenue functions, the convex program (\ref{EGprogram2}) also works for the wider class of homogeneous concave revenue functions.
Interested readers can find more details in Appendix E.
\section{Numerical Results}
\label{sim}
\subsection{Simulation Settings}
We consider a square area with dimensions of 10~km $\times$ 10~km. The locations of the ENs and services are generated randomly within this area. We generate a total of 100 ENs and 1000 locations, and assume that each service is located at one location. For clarity of analysis, in the \textbf{base case} we consider a small system with 8 ENs and 4 services (i.e., $M = 8$ and $N = 4$), selected randomly from the set of 100 ENs and 1000 services. The network delay between a service and an EN is assumed to be proportional to the distance between them. The maximum tolerable delay of the services follows a uniform distribution over the interval [15,~25]. The service rate $\mu_{i,j}$ is generated randomly from 80 to 240 requests per time unit. The service price ranges from 2 to 3 per 100{,}000 requests. The number of computing units in the ENs ranges from 10 to 20. From these parameters, we compute the $a_{i,j}$
of the services as in (\ref{a_uti}).
The net profit maximization model is considered in Section \ref{sim2}.
In the \textbf{base case}, we assume that the services have equal budgets.
Fig.~\ref{fig:value} depicts the valuations of the ENs to the buyers in the \textbf{base case}. The \textbf{base case} is used in all simulations unless mentioned otherwise.
\begin{figure}[ht!]
\centering
\includegraphics[width=0.35\textwidth,height=0.10\textheight]{plot_bar_individualvaluationN1-4}
\caption{Valuations of ENs to the buyers}
\label{fig:value} \vspace{-0.2in}
\end{figure}
\subsection{Performance Comparison}
In the first model, captured by the EG program, the absolute value of the budget affects the equilibrium prices only by a scaling factor (e.g., all prices double when the budget of every service is doubled) and does not affect the allocation or the utilities of the services.
The budget is normalized such that the total budget of all services is one.
The prices act only as a means to allocate resources.
\begin{figure}[ht]
\subfigure[Same budget]{
\includegraphics[width=0.245\textwidth,height=0.10\textheight]{plot_compare_utility_maxSW_maxmin_B_1111}
\label{fig:compa1}
} \hspace*{-1.9em}
\subfigure[Budget ratio (4/4/1/4)]{
\includegraphics[width=0.245\textwidth,height=0.10\textheight]{plot_compare_utility_maxSW_maxmin_B_4414}
\label{fig:compa2}
}\vspace{-0.2cm}
\caption{Performance comparison}
\end{figure}
We consider five schemes: the proposed \textit{ME}, proportional sharing (\textit{Prop.}), social welfare maximization with equal weights (\textit{SW1}), social welfare maximization with different weights (\textit{SW2}), and maxmin fairness (\textit{maxmin}). In the proportional sharing scheme,
each buyer $i$ receives a $\frac{B_i}{\sum_i B_i}$ fraction of the resource of every EN.
In the social welfare maximization schemes, budget is not considered, and the objective is to maximize $\sum_i w_i u_i(x_i)$ subject to the capacity constraints of the ENs. $w_i$ is the weighting factor of service $i$. In \textit{SW1}, all weights are equal. In \textit{SW2}, the weight of each service is its budget. Finally, without budget consideration, the \textit{maxmin} scheme aims to maximize $\min_i u_i(x_i)$ under ENs' capacity constraints.
\begin{figure}[ht]
\subfigure[Same budget]{
\includegraphics[width=0.245\textwidth,height=0.10\textheight]{R1fig5a}
\label{fig:compa3}
} \hspace*{-1.9em}
\subfigure[Budget ratio (4/4/1/4)]{
\includegraphics[width=0.245\textwidth,height=0.10\textheight]{R1Fig5b}
\label{fig:compa4}
}\vspace{-0.2cm}
\caption{Utility efficiency comparison (N = 4)}
\end{figure}
\vspace{-0.2in}
\begin{figure}[ht]
\subfigure[Same budget]{
\includegraphics[width=0.245\textwidth,height=0.10\textheight]{R1fig6a}
\label{fig:compa5}
} \hspace*{-1.9em}
\subfigure[Budget ratio (4/4/1/4)]{
\includegraphics[width=0.245\textwidth,height=0.10\textheight]{R1Fig6b}
\label{fig:compa6}
}\vspace{-0.2cm}
\caption{Envy-freeness comparison (N = 4)}
\end{figure}
\begin{figure*}[ht]
\subfigure[ME scheme]{
\includegraphics[width=0.35\linewidth,height=0.11\textheight]{plot_proportionality_same_budget}
\label{fig:pr1}
} \hspace*{-1.9em}
\subfigure[SW scheme]{
\includegraphics[width=0.35\linewidth,height=0.11\textheight]{plot_proportionality_same_budget_SW}
\label{fig:pr2}
} \hspace*{-1.9em}
\subfigure[MM scheme]{
\includegraphics[width=0.35\linewidth,height=0.11\textheight]{plot_proportionality_same_budget_MM}
\label{fig:pr3}
}
\caption{Proportionality ratio comparison (N = 4, same budget)}
\end{figure*}
\vspace{-0.1in}
\begin{figure*}[ht]
\subfigure[Budget ratio 0.67/1.33/1/1]{
\includegraphics[width=0.35\linewidth,height=0.10\textheight]{plot_budget_ratio_rhalf_M8N4_bar_allocation_stacked}
\label{fig:rhalfbar}
} \hspace*{-1.9em}
\subfigure[Budget ratio 1/1/1/1]{
\includegraphics[width=0.35\linewidth,height=0.10\textheight]{plot_budget_ratio_r1_M8N4_bar_allocation_stacked}
\label{fig:r1bar}
} \hspace*{-1.9em}
\subfigure[Budget ratio 1.33/0.67/1/1]{
\includegraphics[width=0.35\linewidth,height=0.10\textheight]{plot_budget_ratio_r2_M8N4_bar_allocation_stacked}
\label{fig:r2bar}
}
\caption{Impact of budget ratio on the equilibrium allocation}
\end{figure*}
\vspace{-0.1in}
Figs.~\ref{fig:compa1}--\ref{fig:compa6} compare the performance of these schemes under both equal-budget and different-budget settings. We can observe that the ME scheme balances the tradeoff between system efficiency and fairness well.
First, the \textit{ME} scheme considerably outperforms the \textit{Prop.} scheme, which confirms the sharing-incentive property of the ME solution. The \textit{maxmin} scheme produces a fair allocation among the buyers, but the total utility of the buyers is much lower than in the other schemes. Noticeably, although their total utility is largest, both \textit{SW1} and
\textit{SW2} produce undesirable allocations, since some buyers (e.g., buyer 1) are allocated nothing and have zero utility under these schemes.
%
Figs.~\ref{fig:compa5} and \ref{fig:compa6} compare the envy-freeness indices ($EF = \min_{i,j} \frac{u_i(x_i)/B_i}{u_i(x_j)/B_j}$ \cite{mfel09}) of the different schemes. An allocation is envy-free if its EF equals one. We can observe that our proposed ME scheme significantly outperforms the social welfare maximization and \textit{maxmin}
schemes in terms of envy-free fairness.
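The EF index can be computed directly from the cross-valuations of the final bundles; a minimal sketch (the helper name and the toy inputs are illustrative, not from the paper's code):

```python
# Sketch of the EF index; u_matrix[i][j] stands for u_i(x_j), i.e., buyer i's
# utility for buyer j's bundle, and B is the budget vector.
def ef_index(u_matrix, B):
    N = len(B)
    return min(
        (u_matrix[i][i] / B[i]) / (u_matrix[i][j] / B[j])
        for i in range(N) for j in range(N)
        if u_matrix[i][j] > 0
    )

# Two equal-budget buyers, each holding the bundle it values more: envy-free.
print(ef_index([[2.0, 1.0], [1.0, 2.0]], [1.0, 1.0]))  # 1.0
# Swapped bundles: each buyer values the other's bundle twice as much.
print(ef_index([[1.0, 2.0], [2.0, 1.0]], [1.0, 1.0]))  # 0.5
```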
Finally, Figs.~\ref{fig:pr1}--\ref{fig:pr3} depict the proportionality ratio (PR) of the different schemes. When there are 4 buyers with equal budgets, every buyer naturally expects to obtain a PR of at least 1/4. It can be observed that our proposed ME scheme satisfies the proportionality property, as shown in Theorem 5.2.
\subsection{Sensitivity Analysis}
First, we examine the impact of budget on the equilibrium allocation by varying the budget ratio among the buyers.
Figs.~\ref{fig:rhalfbar}--\ref{fig:r2bar} show the equilibrium allocations
as we vary the budget ratio between services 1 and 2.
We observe that buyer 1 is allocated more resources as her budget increases, which also increases her utility, while
the allocation and utility of buyer 2 decrease as her budget decreases.
Fig.~\ref{fig:r12uti}, where $r$ is the budget ratio between services 1 and 2, further supports this observation.
Hence, we can conclude that the proposed algorithm is \textit{effective in capturing service priority}, expressed through budgets, in the allocation decision.
\begin{figure}[ht]
\centering
\includegraphics[width=0.35\textwidth,height=0.12\textheight]{plot_utilities_vary_budget_ratio_r12} \vspace{-0.2cm}
\caption{Impact of budget ratio on the buyers' utilities}
\label{fig:r12uti}\vspace{-0.2cm}
\end{figure}
Fig.~\ref{fig:r12price} shows the dependence of the equilibrium prices on the budget ratio of the buyers. For example, since only EN2, EN7, and EN8 can satisfy the delay requirement of service 1 as seen in Fig.~\ref{fig:value}, the prices of EN7 and EN8 change considerably as budget of service 1 varies. Also, because EN5 and EN6 are less valuable to the buyers, their equilibrium prices are significantly lower than the prices of other ENs while the prices of EN2 and EN8 are highest because they have high values to all the buyers. These observations imply the proposed method is \textit{effective in pricing}.
\begin{figure}[ht]
\centering
\includegraphics[width=0.35\textwidth,height=0.10\textheight]{plot_price_vary_r12_bar} \vspace{-0.2cm}
\caption{Impact of budget ratio on the equilibrium prices}
\label{fig:r12price}
\end{figure}
In Fig.~\ref{fig:demand}, we perturb the equilibrium prices by small amounts to obtain different price vectors $P1, P2, P3, P4$ such that the demand of each buyer at these prices is unique.
Price vector $P1$ ($P2$) is obtained by increasing (decreasing) every price in the equilibrium price vector of the \textbf{base case} by 0.01. Price vector $P3$ ($P4$) is generated by adding a random number between 0 and 0.0005 (0.0003) to every price. We can observe that, even under such small price variations, the market does not clear: some ENs are over-demanded while others are under-demanded. This shows that \textit{proper equilibrium pricing is important to clear the market}.
\begin{figure}[ht!]
\centering
\includegraphics[width=0.35\textwidth,height=0.10\textheight]{plot_optimalEQprices} \vspace{-0.2cm}
\caption{Impact of resource prices on total demand}
\label{fig:demand}
\end{figure}
The impact of the number of players (i.e., the numbers of ENs and services) on the ME is illustrated in Figs.~\ref{fig:varyNu} and \ref{fig:varyMu}. The buyers have the same budget in this case. We show the utilities of buyers 1,~2,~3, and~4 in these figures. As expected, as the number of buyers increases, the utility of each individual buyer decreases, since the same set of ENs has to be shared among more services. On the other hand, the service utility increases significantly as the number of ENs increases.
\vspace{-0.1in}
\begin{figure}[ht]
\subfigure[M = 8, varying N]{
\includegraphics[width=0.245\textwidth,height=0.10\textheight]{R1Fig11a}
\label{fig:varyNu}
} \hspace*{-1.9em}
\subfigure[N = 4, varying M]{
\includegraphics[width=0.245\textwidth,height=0.10\textheight]{R1Fig11b}
\label{fig:varyMu}
}\vspace{-0.2cm}
\caption{Impact of M and N on the players' utilities}
\end{figure}
\vspace{-0.2cm}
\subsection{Analysis of Distributed Algorithms}
\subsubsection{Proportional Dynamics Allocation}
The proportional dynamics mechanism (\textbf{PropDyn}) has low complexity and can be implemented in a distributed manner.
The convergence properties of this algorithm in the base case with 8 ENs and 4 services are shown in Figs.~\ref{fig:PDptrace} and \ref{fig:PDbtrace}. As we can see, the prices and the bids converge after a few tens of iterations. The running time of the algorithm is on the order of milliseconds.
Figs.~\ref{con1}--\ref{con4} show the convergence properties of \textbf{PropDyn} as we vary the numbers of ENs and services. The stopping condition is defined according to the relative change of the prices between iterations (i.e., the absolute price deviation divided by the price in the previous iteration). In other words, the \textbf{PropDyn} algorithm terminates at iteration $t+1$ if $\max_j \big(|p_j(t+1) - p_j(t)|/p_j(t) \big)$ is smaller than a tolerance threshold. Figs.~\ref{con1}--\ref{con4} plot the number of iterations of \textbf{PropDyn} for different numbers of services ($N$) and ENs ($M$) and different tolerance thresholds; their goal is to give a rough idea of the number of iterations required.
For each of these figures, we generated 50 different datasets and took the average number of iterations over the 50 runs.
Theoretical results on the convergence rate of \textbf{PropDyn} can be found in \cite{fwu07,lzha09,lzha11}. Specifically, the number of iterations depends on the number of ENs, the tolerance threshold, and the system parameters $a_{i,j}$, which are indirectly related to the number of services.
\begin{figure}[ht]
\subfigure[Price trace]{
\includegraphics[width=0.245\textwidth,height=0.10\textheight]{plot_proportional_price_trace}
\label{fig:PDptrace}
} \hspace*{-1.9em}
\subfigure[Buyer 1's bids]{
\includegraphics[width=0.245\textwidth,height=0.10\textheight]{plot_proportional_bid_trace}
\label{fig:PDbtrace}
}\vspace{-0.2cm}
\caption{Convergence of EN prices and bids}
\end{figure}
\begin{figure}[ht]
\centering
\subfigure[Tolerance = $10^{-4}$]
{
\includegraphics[width=0.245\textwidth,height=0.10\textheight]{R1PropDyn10e4_K50_relative1}
\label{con1}
}\hspace*{-1.9em}
\subfigure[Tolerance = $10^{-5}$]{
\includegraphics[width=0.245\textwidth,height=0.10\textheight]{R1PropDyn10e5K50_1}
\label{con2}
}\vspace{-0.2cm}
\subfigure[Tolerance = $10^{-4}$]
{
\includegraphics[width=0.245\textwidth,height=0.10\textheight]{R1PropDyn10e4_K50_relative}
\label{con3}
} \hspace*{-1.9em}
\subfigure[Tolerance = $10^{-5}$]{
\includegraphics[width=0.245\textwidth,height=0.10\textheight]{R1PropDyn10e5K50}
\label{con4}
}\vspace{-0.2cm}
\caption{Number of iterations of PropDyn}
\end{figure}
Figs.~\ref{fig:PDite}, \ref{fig:ave1}, and \ref{fig:ave2} compare the buyers' utilities under the \textbf{PropDyn} and \textbf{PropBR} schemes. We select a particular instance with 10 buyers and 20 ENs from the generated system data. Note that we have run simulations with numerous instances and obtained similar trends. The utility of each buyer in this particular instance is presented in Fig.~\ref{fig:PDite}. As we can see, the utilities of most buyers are higher under the PropDyn scheme than under the PropBR scheme. In Fig.~\ref{fig:ave1}, we add a random variable to each $a_{i,j}$, run both schemes 100 times, and take the average results. In Fig.~\ref{fig:ave2}, we generate each $a_{i,j}$ randomly between 0.01 and 0.09. As we can observe, the buyers' utilities tend to be higher under PropDyn than under PropBR. Furthermore, PropBR requires the buyers to know more system information
to play their BR actions in each round.
The numerical results show that playing the PropBR scheme brings almost no benefit to the buyers (no utility gain in most cases). Hence, we can infer that \textit{the buyers should simply follow the PropDyn scheme and obtain an ME allocation.}
\begin{figure*}[ht]
\subfigure[Utility in one instance]{
\includegraphics[width=0.35\linewidth,height=0.10\textheight]{Plot_bar_Prop_Dyn_BR_compare_uti}
\label{fig:PDite}
} \hspace*{-1.9em}
\subfigure[Average utility in case 1]{
\includegraphics[width=0.35\linewidth,height=0.10\textheight]{Plot_bar_Prop_Dyn_BR_compare_uti_100runs1}
\label{fig:ave1}
} \hspace*{-1.9em}
\subfigure[Average utility in case 2]{
\includegraphics[width=0.35\linewidth,height=0.10\textheight]{Plot_bar_Prop_Dyn_BR_compare_uti_100runs}
\label{fig:ave2}
}
\caption{Utility comparison between PropBR and PropDyn}
\end{figure*}
\subsubsection{Function Approximation Algorithm}
We now report the convergence properties and performance of the CES approximation scheme. Thanks to the closed-form expression of the optimal demand, the algorithm runs very fast even with a large number of iterations. As expected, the number of iterations depends strongly on the step size and the initial prices. The convergence of EN6's price ($p_6$) is shown in Fig.~\ref{fig:cesite}. The number of iterations decreases as the initial prices get closer to the final ME prices, which are unknown in advance. The number of iterations also decreases as the step size increases, but the step size $\gamma$ cannot be increased too much without jeopardizing convergence. Fig.~\ref{fig:cesprice} presents the price traces of different ENs until convergence with step size $\gamma = 0.001$ and initial price $p_0 = 0.2$.
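The price-adjustment dynamics discussed above can be sketched as a simple tâtonnement loop. The demand oracle `demand_fn`, the capacity vector, and the stopping rule below are illustrative assumptions for exposition, not the paper's exact algorithm:

```python
def tatonnement(demand_fn, capacity, p0, gamma, tol=1e-5, max_iter=100000):
    """Illustrative price-adjustment iteration: raise the price of an
    over-demanded EN and lower it otherwise, until excess demand is small.
    demand_fn(p) returns the aggregate demand for each EN at prices p."""
    p = [p0] * len(capacity)
    for _ in range(max_iter):
        d = demand_fn(p)
        excess = [d[j] - capacity[j] for j in range(len(p))]
        if max(abs(e) for e in excess) < tol:
            break
        # A larger step gamma converges in fewer iterations but risks
        # overshooting, matching the trade-off observed in the figures.
        p = [max(p[j] + gamma * excess[j], 1e-9) for j in range(len(p))]
    return p
```

With a single EN of capacity 2 and a buyer spending a unit budget (demand $1/p$), the loop settles near the market-clearing price $p = 0.5$.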
\begin{figure}[ht]
\subfigure[Number of iterations ($p_6$)]{
\includegraphics[width=0.245\textwidth,height=0.10\textheight]{plot_concave_appro_iterations}
\label{fig:cesite}
} \hspace*{-1.9em}
\subfigure[Price convergence]{
\includegraphics[width=0.245\textwidth,height=0.10\textheight]{plot_concave_appro_price_converge_step0002_tol5}
\label{fig:cesprice}
}\vspace{-0.2cm}
\caption{Convergence of CES approximation}
\end{figure}
In Fig.~\ref{fig:cesu}, we study the performance of the approximation scheme by comparing the utilities of the buyers under the centralized convex program (EG), the CES approximation utility (CES), and the approximate linear utility (\textbf{Approx.}). In the \textit{Approx.} scheme, the utility of buyer $i$ is $x_{i,j} a_{i,j}$, where $x_{i,j}$ is the solution of the optimization problem with CES approximation utilities. As we can observe, the utility values are very similar, which confirms that the proposed approximation scheme performs well. In this figure, we set $\rho$ to 0.99. Finally, the equilibrium prices with different values of $\rho$ are shown in Fig.~\ref{fig:cesp}. It is easy to see that the prices are almost equal for different values of $\rho$.
\begin{figure}[ht]
\subfigure[Utility comparison]{
\includegraphics[width=0.245\textwidth,height=0.10\textheight]{plot_concave_uti}
\label{fig:cesu}
} \hspace*{-1.9em}
\subfigure[Varying $\rho$]{
\includegraphics[width=0.245\textwidth,height=0.10\textheight]{plot_concave_vary_rho_price}
\label{fig:cesp}
}\vspace{-0.2cm}
\caption{CES approximation utility comparison}
\end{figure}
\begin{figure*}[ht]
\subfigure[Price]{
\includegraphics[width=0.35\textwidth,height=0.10\textheight]{plot_quasi_price_vary_budget_scale}
\label{fig:quasiprice}
} \hspace*{-1.9em}
\subfigure[Utility]{
\includegraphics[width=0.35\textwidth,height=0.10\textheight]{plot_quasi_uti_vary_budget_scale}
\label{fig:quasiuti}
} \hspace*{-1.9em}
\subfigure[Surplus Ratio (surplus/budget)]{
\includegraphics[width=0.35\textwidth,height=0.10\textheight]{plot_quasi_surplus_ratio_vary_budget_scale1}
\label{fig:quasisurplus}
}\vspace{-0.1cm}
\caption{ME in the net profit maximization model }
\end{figure*}
\subsection{Net Profit Maximization Model}
\label{sim2}
We now evaluate the second model, in which the services aim to maximize their net profits. We use the same system setting with 4 services and 8 ENs in the \textbf{base case} as before. From the objective function of the buyer, we know that a buyer will buy resources from an EN only when the price of the EN is less than or equal to its utility gain from the EN.
In the revenue maximization case, as in the \textbf{basic model}, the equilibrium prices increase linearly at the same rate as the budget. However, as can be seen in Fig.~\ref{fig:quasiprice}, this property does not hold in the net profit maximization model. The budget scale is the factor by which we multiply the original budget. The figure shows that the equilibrium prices increase and then saturate once the budgets exceed certain values. At these (saturated) prices, buying resources from the ENs or not does not change the utility of a buyer (i.e., $p_j = a_{i,j}$). When the budget is large enough, the utilities of the buyers become equal to their budgets, meaning that procuring resources does not bring any additional benefit to the buyers. These results are shown in Figs.~\ref{fig:quasiuti} and~\ref{fig:quasisurplus}.
Figs.~\ref{fig:quasip} and~\ref{fig:quasiu} present the equilibrium prices and optimal utilities, respectively, as the budget varies.
\textbf{Rev.max} corresponds to the first model (i.e., revenue maximization) with scale equal to 1. As we can observe, for the same budget (i.e., scale = 1), the equilibrium prices in the second model are smaller than those in the first model because, in the net profit maximization model, a service only buys resources from an EN that gives it a positive gain.
Also, the service utilities at the equilibrium in the second model are greater than those in the first model, due to the lower equilibrium prices and the fact that the budget surplus is counted in the second model. Finally, in the second model, the equilibrium prices and optimal utilities increase as the budget increases. As explained above, the equilibrium prices in the second model saturate at certain points. Hence, they increase very little as the budget scale grows from 1 to 1.5.
\begin{figure}[ht]
\centering
\subfigure[Prices]{
\includegraphics[width=0.245\textwidth,height=0.10\textheight]{plot_bar_quasi_price_vary_budget_scale1}
\label{fig:quasip}
} \hspace*{-1.9em}
\subfigure[Utilities]{
\includegraphics[width=0.245\textwidth,height=0.10\textheight]{plot_bar_quasi_utilities}
\label{fig:quasiu}
}\vspace{-0.2cm}
\caption{Impact of budget on equilibrium prices and utilities}
\end{figure}
\section{Conclusion and Future Works}
\label{con}
In this work, we considered resource allocation for an EC system consisting of geographically distributed heterogeneous ENs with different configurations and a collection of services with different desires and buying power.
Our main contribution is to propose the well-known concept of General Equilibrium in Economics as an effective solution for the underlying EC resource allocation problem. The proposed solution produces an ME that is not only Pareto-efficient but also possesses many attractive fairness properties. The potential of this approach extends well beyond EC applications.
For example, it can be used to share storage space in edge caches among different service providers. We can also utilize the proposed framework to share resources (e.g., communication, wireless channels) among different users or groups of users (instead of services and service providers).
Furthermore, the proposed model can be extended to the multi-resource scenario where each buyer needs a combination of different resource types (e.g., storage, bandwidth, and compute) to run its service. We will formally report these cases (e.g., network slicing, NFV chaining applications) in our future work.
The proposed framework could serve as a first step toward understanding new business models and unlocking the enormous potential of the future EC ecosystem. There are several future research directions. For example, we will investigate the ME concept in the case where several edge networks cooperate with each other to form an edge/fog federation. Investigating the impact of strategic behavior on the efficiency of the ME is another interesting topic. Note that N.~Chen {\em et al.}~\cite{nche16} have shown that the gains of buyers from strategic behavior in Fisher markets are small. Additionally, in this work, we implicitly assume that the demand of every service is unlimited. It can be verified that we can add maximum-number-of-requests constraints to the EG program to capture the limited demand case, and the solution of this modified problem is indeed an ME. However, although the optimal utilities of the services in this case are unique, there can be an infinite number of equilibrium prices. We are investigating this problem in our ongoing work. Also, integrating the operation cost of ENs into the proposed ME framework
is a subject of our future work. Finally, how to compute market equilibria with more complex utility functions that capture practical aspects such as task moving expenses among ENs and data privacy is an interesting future research direction.
%
\bibliographystyle{IEEEtran}
\section{Introduction}
Co-location mining aims to discover patterns of spatial features often located close to each other, i.e. in geographic proximity. An example of a pattern is a co-location of symbiotic species of plants and animals depending on ecological conditions. Figure~\ref{fig_sample} illustrates a sample spatial dataset with point features. As it can be observed, instances of feature ``$+$" are often located close to instances of ``$\circ$". Similarly, objects of feature ``$\star$" are seen close to instances of ``$\triangledown$". The main purpose of co-location mining is to come up with a set of hypotheses based on data features and statistics that are potentially useful to domain experts, so that they can uncover possible patterns that are hidden in the data sets. The discovery of spatial co-location patterns may lead to useful knowledge in various applications. For instance, one might be interested in an animal species that lives close to certain types of landmarks such as rivers, meadows, forests, etc. Another example includes the co-location patterns detected between crime incidents and locations of various businesses, which can be useful for criminologists. Some of the application domains for co-location mining are biology, urban studies, health sciences, earth and atmospheric sciences, etc. Even though this task seems to be similar to association rule mining (ARM) which is used in knowledge discovery, the use and adaptation of ARM techniques are not trivial due to the fact that features are embedded into a geographic space and there is no clear notion of transactions. ARM consists of discovering rules that express associations between items in a database of transactions. The associations are based on a notion of frequency bounded by a given threshold, such as the minimum frequency.
\begin{figure}[!t]
\centering
\includegraphics[width=3in, height = 2.5in]{Fig1.eps}
\caption{A sample dataset with point spatial features. Instances of feature sets $\{+, \circ\}$ and $\{\star, \triangledown\}$ are often located close to each other}
\label{fig_sample}
\end{figure}
An application which motivates the work of this paper is the detection of possible spatial associations between different chemicals and cases of childhood cancer. Cancer, a multifactorial class of diseases characterized by uncontrolled growth of abnormal cells, their invasion into other tissues, and metastasis, is one of the leading causes of death among adults in both the developed and the developing world~\cite{armstrong1975environmental,boffetta2003contribution}. Although some individuals are genetically predisposed to cancer, most cases of cancer are suspected to be linked to environmental factors such as air pollutants, radiation, various infections, tobacco, and alcohol. However, the causes of childhood cancer are difficult to determine, partially because children's cancer cases are relatively rare and the levels of exposure to environmental factors are difficult to evaluate. Our collaborative research efforts with the Faculty of Medicine at the University of Alberta attempt to identify associations between cancer cases and known chemical emissions from industry. Some of these chemicals are proven carcinogens, while others are not known to cause cancer on their own. It is yet to be discovered whether certain combinations of chemicals can be associated with higher rates of cancer.
Moreover, even if chemicals in such combinations with a potential threat are not emitted by the same industry, atmospheric conditions can contribute to the mixture. Given these concerns, we deploy our model to detect co-location patterns on a spatial dataset which contains information on chemical emission points, amounts of released chemicals and the locations of childhood cancer cases if they were first diagnosed in the two Canadian provinces, Alberta and Manitoba, where the data are collected from. Figure~\ref{fig_map} displays a part of the dataset with rectangles representing pollutant emission points, triangles for cancer cases, and polygons for urban municipalities. To mine such a rich dataset, first we build a modeling framework which handles the data to represent the real world conditions as accurate as possible while taking various factors which affect distribution of chemicals into account. This underlying modeling framework helps the co-location mining algorithms we develop to detect more accurate patterns. While we are not intending to find causalities, the goal of our study is to identify potentially interesting spatial associations in order to state hypotheses and further investigate relationships between childhood cancer and specific combinations of chemicals.
Most of the existing approaches to the co-location mining problem~\cite{Shekhar2001,Huang2006,Yoo2006,Xiong2004} deploy a framework which requires a user-defined minimum prevalence threshold. Without prior knowledge it could be difficult to choose a proper threshold. Furthermore, spatial features often have various frequencies in datasets, and one global threshold might lead to the omission of some co-location patterns and rules with rare events or the detection of meaningless patterns. Another limitation of most of the existing algorithms is that they work with point spatial features and a single neighborhood distance threshold, whereas in reality there are datasets which, in addition to point instances, also have lines and polygons (e.g., a road network map). Moreover, the information in some datasets can be uncertain: the presence of a feature in the region could depend on different factors, thus associating it with an existential probability. For example, a pollutant released from a facility distributes according to the climatic factors in its area, and the probability of detecting the chemical in a region close to the emission point is higher than in remote regions.
In this paper, we address these limitations in existing approaches by proposing a new framework which combines ideas from co-location mining, frequent pattern mining and association rule mining. Our proposed framework uses statistical tests to determine the significance of co-location patterns and rules. A co-location pattern or rule is considered as significant if it has a surprisingly high level of prevalence in comparison with randomized datasets which are generated under the null hypothesis that the spatial features are independent from each other. The uncertainty of the information is modeled as a dependent relationship based on the distance from the spatial object. We also paid attention to the computational complexity of the proposed algorithms. In this paper, we discuss some of the filtering techniques we used to increase the efficiency of the algorithms.
In one of our previous papers~\cite{adilmagambetov2013discovering}, we outlined some of the core ideas of the framework proposed in this paper and provided preliminary results. In this paper, we extend that work in a few different respects. First, we provide a concrete description of our complete co-location mining framework by properly defining the Grid Transactionization algorithm and including it in the workflow of the mining process. Furthermore, we include details on how to work with uncertainty in spatial datasets. We also tested our proposed framework on a completely new, real dataset for the province of Manitoba, Canada, which allowed us to analyze the robustness and effectiveness of our approach while also helping to derive new co-location rules. We also include results from further experiments conducted on synthetic datasets to help validate the results. Results from another variation of our proposed framework, which ignores the uncertainty attribute of the spatial dataset, are also included; the results from this ``certain method" assist in further exploring the extensibility of the proposed framework. We also implemented a baseline co-location rule miner and compared the rules it mined with the rules mined by our approach. This has provided more evidence that our algorithm is not only capable of deriving highly prevalent rules but can also detect statistically significant rules that may not be highly prevalent. Furthermore, we provide a closer analysis of the computational complexity of the proposed algorithms, as well as of the choice of a proper grid granularity measure.
\begin{figure}[!t]
\centering
\includegraphics[width=3in]{Fig2.eps}
\caption{Part of the dataset from the province of Alberta: rectangles - pollutant points, triangles - cancer cases, polygons - urban municipalities}
\label{fig_map}
\end{figure}
The remainder of the paper is organized as follows. An overview of the related work is given in Section 2. The proposed framework and its outline are described in Section 3. Section 4 describes the challenges and the modeling framework used to mine the co-location patterns between pollutants and childhood cancer cases. The experiments are presented in Section 5, followed by conclusions in Section 6.
\section{Related Work}
In recent years spatial data mining has gained significant attention due to the abundance of data with spatial and temporal attributes. In particular spatial data mining is the process of extracting interesting and useful patterns in geographic datasets. The technological advances in data storage and the widespread use of GPS technologies, remote sensing devices, and location-based services have created large amounts of spatial data. The spatial data processing and analysis is useful in a wide range of applications such as population analysis, social sciences, environmental sciences, and business applications.
In contrast to classical data mining, spatial data mining has some specific features. In classical data mining, it is assumed that data objects are independent from each other, whereas in spatial datasets, objects situated close to each other tend to be more similar and share more characteristics than objects located at a great distance. This spatial autocorrelation is an important feature of spatial datasets; it can be observed, for instance, in the gradual change of temperature or precipitation levels across a region. One of the major difficulties in dealing with spatial data is the relatively higher complexity of its data object types and their relations. For instance, spatial databases contain not only points but also lines and polygons. Furthermore, the relationships between objects are implicit, such as intersection, containment, and enclosure.
Various types of methods and approaches are used to analyze such spatial datasets. Insights from traditional data mining techniques have been useful in developing these methods. Some of the tasks in spatial data analysis include spatial clustering, co-location mining, spatial trend detection, outlier detection, and spatial classification~\cite{Ester2001,Shekhar2010}. We discuss some of the significant previous works in this context under two major categories: 1) Co-location Mining, as a category of spatial data mining techniques, and 2) Frequent Pattern Mining, as a general data mining technique which can provide insights into co-location pattern detection.
\subsection{Co-location Mining}
Co-location mining is one of the tasks of spatial data mining that can be divided into two classes of methods: spatial statistics approaches, and spatial data mining approaches.
\subsubsection{Spatial Statistics Approaches}
Spatial Statistics Approaches deploy statistical techniques such as cross K-functions with Monte-Carlo simulations~\cite{Cressie1991}, mean nearest-neighbor distance, and spatial regression models~\cite{Chou1997} to evaluate and find co-location patterns between two features. Disadvantages of these approaches are expensive computation time and the difficulty in applying them to patterns consisting of more than two spatial features.
\subsubsection{Spatial Data Mining Approaches}
Spatial Data Mining Approaches could be categorized into transaction-based methods (i.e. works with transactions), and spatial join-based methods (i.e. use spatial joins of instance tables or feature layers).
Transaction-based approaches work by creating transactions over the space and using association rule mining like algorithms on these transactions~\cite{Agrawal1994,Koperski1995,Morimoto2001}. One of these methods, the reference-centric model~\cite{Koperski1995}, creates transactions around a reference feature specified by the user. Each set of spatial features that form neighbourhood relationships with an instance of the reference feature is considered as a transaction. However, not all applications have a clearly defined reference feature. For example, in urban studies, features could be schools, fire stations, hospitals, etc., and there is no single specific feature of interest. Another approach, the window-centric model~\cite{Shekhar2001}, divides the space into cells and considers the instances in each cell as a transaction. The model can consider all possible windows as transactions or use spatially disjoint cells. However, a major drawback of the model is that some instance sets are divided by the boundaries of cells, so some of the spatial relationship information is lost. On the other hand, maximal cliques (i.e., maximal sets of instances which are pair-wise neighbors) in spatial data have also been proposed as transactions~\cite{Naymat2008,Kim2011}. However, this approach does not preserve the information regarding the relative distances of the objects in cliques as long as they are considered neighbours.
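A minimal sketch of the window-centric transactionization described above, assuming point instances are given as `(feature, x, y)` tuples and the space is partitioned into disjoint square cells of side `cell_size`:

```python
from collections import defaultdict

def grid_transactions(instances, cell_size):
    """Window-centric transactionization with disjoint cells: each grid
    cell becomes one transaction holding the features of the instances
    that fall inside it."""
    cells = defaultdict(set)
    for feature, x, y in instances:
        cell = (int(x // cell_size), int(y // cell_size))
        cells[cell].add(feature)
    return list(cells.values())
```

Note the drawback mentioned above is visible here: two neighboring instances that straddle a cell boundary end up in different transactions, so their neighbourhood relationship is lost.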
Spatial join-based approaches work with spatial data directly. They include cluster-and-overlay methods and instance-join methods. In the cluster-and-overlay approach, clustering is used to mine associations. For example, concentrations of objects in layers are found in order to search for possible causal features~\cite{Estivill-Castro1998}. In another work~\cite{Estivill-Castro2001}, layers of points and area data are constructed for each spatial feature based on clusters of data instances or boundaries of those clusters. The latter work proposes two algorithmic approaches for cluster association rule mining: a vertical-view approach and a horizontal-view approach. In the former, after the clusters in the layers are found, the layers are segmented into a finite number of cells. Then, a relational table is constructed where an element is equal to one if the corresponding cell satisfies an event in a layer, and zero otherwise. Afterwards, an association rule mining algorithm is applied to that table. The second approach evaluates intersections of clustered layers. A clustered spatial association rule defined in that work is of the form $X\rightarrow Y(CS\%,CC\%)$, where $X$ and $Y$ are sets of layers, $CS\%$ is the clustered support (i.e., the ratio of the area that satisfies both $X$ and $Y$ to the total area of the study region), and $CC\%$ is the clustered confidence (i.e., the percentage of cluster areas of $X$ that intersect with clusters of $Y$). However, these approaches might be highly sensitive to the choice of clustering method. In addition, this cluster-based approach assumes that features are clustered, even though spatial features may not form explicit clusters. The other spatial join-based approach, the instance-join algorithm, is similar to classical association rule mining.
One of the first proposed co-location pattern mining frameworks of this type~\cite{Shekhar2001,Huang2004} is based on the neighbourhood relations and the participation index concepts.
Following the previous cluster-overlay methods to detect co-location patterns~\cite{Estivill-Castro2001}, another approach~\cite{Huang2006dens} uses clustering to detect co-location patterns as follows. For two spatial features $f_{1}$ and $f_{2}$, if the density of objects of the feature $f_{1}$ in proximity of objects of the feature $f_{2}$ is higher than the overall density of objects of $f_{1}$, then the feature $f_{1}$ is considered to be co-located with the feature $f_{2}$, i.e., their objects tend to be situated close to each other. Similar to some of the previous approaches, this algorithm is also limited by the assumption that spatial instances of a feature are situated close to each other and form clusters, which may not be the case in some real-world applications.
The basic concepts in the co-location mining frameworks are analogous to the concepts in traditional association rule mining. As an input, the framework takes a set of spatial features and a set of instances, where each instance is a vector that contains information on the instance ID, the feature type of the instance, and the location of the instance. As an output, the method returns a set of co-location rules, where a co-location rule is of the form $C_{1}\rightarrow C_{2}(PI,cp)$, where $C_{1}$ and $C_{2}$ are co-location patterns, $PI$ is the prevalence measure (i.e. the participation index), and $cp$ is the conditional probability. The participation index $PI(C)$ of a co-location pattern $C$ is defined as:
\begin{equation}
PI(C) = min_{f_{i}\in C}\{pr(C,f_{i})\}
\end{equation}
where $pr\{C,f_{i}\}$ is the participation ratio of a feature $f_{i}$ in a co-location $C$ and it is computed as:
\begin{equation}
pr(C,f_{i}) = \frac{\textnormal{\# distinct instances of } f_{i} \textnormal{ in instances of } C}{\textnormal{\# instances of } f_{i}}
\end{equation}
A co-location pattern is considered prevalent, or interesting, if for each feature of the pattern, at least $PI\%$ instances of that feature form a clique with the instances of all the other features of the pattern according to the neighbourhood relationship. Similar to association rule mining, only frequent $(k-1)$ patterns are used for the $k$ candidate generation process. A co-location rule $C_{1}\rightarrow C_{2}$ is considered prevalent if its conditional probability is higher than a threshold. The conditional probability $cp(C_{1}\rightarrow C_{2})$ is defined as:
\begin{equation}
cp(C_{1}\rightarrow C_{2}) = \frac{\textnormal{\# distinct instances of }C_{1} \textnormal{ in instances of }C_{1} \rightarrow C_{2}}{\textnormal{\# instances of }C_{1}}
\end{equation}
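The definitions of the participation ratio and participation index above translate directly into code. The data layout below (one dict per clique instance of the pattern, mapping each feature to the ID of the participating object) is an assumed representation for illustration:

```python
def participation_index(pattern_instances, feature_counts):
    """Compute PI(C) = min over features f of pr(C, f), where
    pr(C, f) = |distinct instances of f in instances of C| / |instances of f|.
    pattern_instances: list of {feature: instance_id} dicts, one per
    clique instance of the co-location pattern C.
    feature_counts: total number of instances of each feature."""
    distinct = {}
    for inst in pattern_instances:
        for feature, obj_id in inst.items():
            distinct.setdefault(feature, set()).add(obj_id)
    ratios = {f: len(ids) / feature_counts[f] for f, ids in distinct.items()}
    return min(ratios.values()), ratios
```

For example, if instance 1 of feature A pairs with instances 10 and 11 of feature B, and the dataset holds 2 A's and 4 B's, then pr(C, A) = 1/2, pr(C, B) = 2/4, and PI(C) = 0.5.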
In the approach mentioned above, it is assumed that spatial features occur with similar levels of frequency. Therefore, if a dataset contains rare spatial features, co-locations involving these rare events will be pruned by a prevalence threshold because more frequent features dominate rare ones, and no pattern with a rare event can become prevalent. For example, a rare disease will not be captured in co-location patterns due to the fact that its causes are more frequent in the database. To solve this limitation, Huang et al.~\cite{Huang2006} continue their previous work by introducing an algorithm that finds co-location patterns with rare features. Instead of the participation index threshold, the authors propose to use the maximal participation ratio threshold. Briefly, a co-location pattern is considered prevalent if $maxPR\%$ instances of at least one of the features in the pattern are co-located with instances of all the other features, where $maxPR$ is the maximal participation ratio:
\begin{equation}
maxPR(C) = max_{f_{i}\in C}\{pr(C,f_{i})\}
\end{equation}
However, it is not well explained how the algorithm deals with noisy features, which they identify as features that have a relatively insignificant number of instances compared to the other features, with those instances concentrated in only a few co-location patterns. In this case, it is highly probable that every co-location pattern containing these features will be considered prevalent because of their high participation ratio, irrespective of their insignificant representation in the whole dataset. Both previously mentioned methods use computationally expensive instance joins to identify instances of co-location patterns, and their running time grows quickly as the number of instances and the sizes of candidate patterns increase.
Yoo et al.~\cite{Yoo2004} propose a partial-join approach for mining co-location patterns. A study space is partitioned into square cells with the side length equal to a neighbourhood distance threshold. A set of spatial instances in a cell form a clique. Join operations are required to identify neighbourhood relationships divided by boundaries of cells. Even though this approach reduces the computation time, it still requires a large amount of spatial joins.
The joinless algorithm~\cite{Yoo2006} is a follow-up work to the partial-join approach. It further decreases computation time of constructing neighbourhood relationships. The main idea is to find star neighbourhoods instead of calculating pairwise distances between all the instances in a dataset. The neighbourhood relationship is materialized in the form of a table where for each instance, all its neighbors are listed. Then, in order to ensure that pattern instances form cliques, an instance-lookup scheme is used to filter co-location instances. In addition, three filtering steps are used to find a set of prevalent co-location patterns. The authors prove that their algorithm finds a complete and correct set of co-location patterns and rules. The experiments on synthetic and real datasets show that the joinless approach has better performance in terms of the running time than the join-based algorithm.
Based on their work, Xiao et al.~\cite{Xiao2008} improve the running time by dividing spatial objects into partitions and detecting neighboring instances in dense regions first. The algorithm finds instances in dense regions and maintains an upper bound on a prevalence measure for a candidate pattern. If the upper bound becomes less than a threshold, the method decides that it is a false candidate and stops identifying its instances in less dense regions.
Several other studies have extended the basic co-location mining framework to work with more complex spatial objects. For example, Xiong et al.~\cite{Xiong2004} propose a framework for detecting patterns in datasets with extended objects. Extended objects are objects that are not limited to spatial points but also include lines and polygons. This framework also uses the notion of buffers, which can be defined as zones of specified distances created around spatial objects. The size and the shape of these buffer zones might depend on the types of the spatial objects. In the proposed model, candidate patterns are pruned by a coverage ratio threshold. In other words, if an area covered by the features of a candidate pattern is greater than a predefined threshold, this pattern is considered prevalent. In order to minimize the usage of geographic information systems (GIS) overlay methods, a coarse-level mining step is used. At this level, minimum buffer bounding boxes of spatial objects are considered by the algorithm instead of true buffer shapes. Then, patterns that have coarse level coverage ratio higher than the threshold are evaluated using actual buffers. Compared to previous models, this approach takes into account the shapes of spatial objects and their distribution in space rather than using a single neighbourhood distance for varying types of features. Expensive GIS overlays are used in this method and a filtering technique is proposed in order to improve its performance.
The approaches mentioned above rely on thresholds on interestingness measures: a low threshold may report meaningless patterns as significant, while a high threshold may prune interesting rare patterns. Instead of a threshold-based approach, Barua and Sander~\cite{Barua2011} use a statistical test to mine frequent co-location patterns. The participation index of a pattern in the observed data is calculated as in previous studies. Then, for each co-location pattern, the authors compute the probability $p$ of seeing the same or a greater value of the prevalence measure under a null-hypothesis model. A co-location is considered significant if $p\leq \alpha$, where $\alpha$ is the level of significance. However, statistical significance is not a monotonic property and cannot be used to prune insignificant co-location rules the way apriori-like algorithms do. They therefore limit the size of the co-location patterns/rules and test each possible candidate to see whether it passes the statistical test. To generate more general co-location rules without a rule-size constraint, Li et al.~\cite{Li2014discovering} propose a new co-location pattern mining framework that exploits properties of statistical significance. Because the resulting co-location rules are hard to evaluate even for domain experts, they also propose using a classifier to help evaluate them.
\subsection{Frequent Pattern Mining}
The co-location mining problem is similar to the canonical data mining problem: association rule mining. The most classical example of association rule mining is discovering sets of goods that are often bought together. The concepts of association rule mining and co-location mining are compared in Table~\ref{table_ARM}.
\begin{table}[!t]
\caption{A comparison of association rule mining and co-location mining~\cite{Huang2006}.}
\label{table_ARM}
\centering
\begin{tabular}{|l|l|}
\hline
{\bf Association rule mining} & {\bf Co-location mining}\\
\hline
Item & Spatial feature\\
\hline
Itemset & Spatial feature set\\
\hline
Frequent pattern & Co-location pattern\\
\hline
Support \& Confidence & Interestingness measures\\
\hline
Transactional database & Spatial database\\
\hline
\end{tabular}
\end{table}
Considering the similarity between frequent pattern mining and co-location pattern mining, we discuss the classical frequent pattern mining problem in this subsection. The concepts of frequent pattern and association rule mining were first introduced by Agrawal et al.~\cite{Agrawal1993}, and various approaches to these problems have been proposed over the past two decades. Apriori~\cite{Agrawal1994} is the first and one of the best-known algorithms for frequent itemset mining. It is designed to work on transactional data and consists of a bottom-up candidate generation process in which $k$-size candidate itemsets are generated from frequent $(k-1)$-itemsets and tested against the database to obtain frequent $k$-itemsets. This process is repeated until no more candidate patterns can be generated. The correctness of the algorithm is based on the downward closure, or apriori, property, which states that if an itemset is frequent, then all its subsets are also frequent; conversely, an itemset cannot be frequent if one of its subsets is infrequent.
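To make this candidate generation process concrete, here is a minimal Python sketch of Apriori; the transactions are toy data chosen for illustration, not from the paper:

```python
from itertools import combinations

def apriori(transactions, minsup):
    """Bottom-up Apriori: generate (k+1)-size candidates from frequent
    k-itemsets, prune by downward closure, and test against the data."""
    items = {i for t in transactions for i in t}

    def sup(itemset):
        # support = fraction of transactions containing the itemset
        return sum(1 for t in transactions if itemset <= t) / len(transactions)

    levels = [{frozenset([i]) for i in items if sup(frozenset([i])) >= minsup}]
    k = 1
    while levels[-1]:
        prev = levels[-1]
        # join step: unions of frequent k-itemsets that have size k+1
        cands = {a | b for a in prev for b in prev if len(a | b) == k + 1}
        # prune step: every k-subset must itself be frequent
        cands = {c for c in cands
                 if all(frozenset(s) in prev for s in combinations(c, k))}
        levels.append({c for c in cands if sup(c) >= minsup})
        k += 1
    return set().union(*levels[:-1]) if levels[:-1] else set()

ts = [{"a", "b", "c"}, {"a", "b"}, {"a", "c"}, {"b", "c"}]
print(apriori(ts, minsup=0.5))  # the six frequent 1- and 2-itemsets
```

Here $\{a,b,c\}$ survives the downward-closure prune (all its 2-subsets are frequent) but is rejected by the support test, since only one of four transactions contains it.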
The association rule mining problem is defined as follows. Let $I = \{i_1, i_2, ..., i_m\}$ be a set of $m$ items and $T = \{t_1, t_2, ..., t_n\}$ be a set of $n$ transactions, where each transaction $t$ is a subset of $I$. For an itemset $X \subseteq I$, the support of $X$ is defined as the fraction of transactions in $T$ that contain $X$. An itemset is considered frequent if its support is higher than a user-specified minimum support threshold. An association rule is a rule of the form $X \to Y$, where $X \subseteq I$, $Y \subseteq I$, and $X \cap Y = \emptyset$. The confidence of a rule $X \to Y$ is the support of $X \cup Y$ divided by the support of $X$:
\begin{equation}
conf(X \to Y)= \frac{sup(X \cup Y)}{sup(X)}.
\end{equation}
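The support and confidence definitions above can be sketched as follows (toy transactions assumed for illustration):

```python
def support(X, transactions):
    """sup(X): fraction of transactions that contain every item of X."""
    return sum(1 for t in transactions if X <= t) / len(transactions)

def confidence(X, Y, transactions):
    """conf(X -> Y) = sup(X u Y) / sup(X)."""
    return support(X | Y, transactions) / support(X, transactions)

transactions = [{"a", "b", "c"}, {"a", "b"}, {"a", "c"}, {"b", "c"}]
print(support({"a", "b"}, transactions))       # 2 of 4 transactions -> 0.5
print(confidence({"a"}, {"b"}, transactions))  # 0.5 / 0.75 = 2/3
```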
\subsubsection{Frequent Pattern Mining with Uncertain Data}
The algorithms and approaches mentioned above are designed to work with data in which the presence of items in transactions is certain. For example, market basket datasets are certain and precise: we know for sure which items were purchased. However, in some applications data may be incomplete or contain errors. For instance, sensor readings might include erroneous data due to various internal and external factors such as sensor failures or extreme weather conditions. Uncertainty can be expressed in terms of existential probabilities: each item of a transaction is accompanied by the probability of its existence in that transaction. An example of such a probabilistic transactional dataset is shown in Table~\ref{table_transactions}.
\begin{table}[!t]
\caption{An example of a probabilistic transactional dataset.}
\label{table_transactions}
\centering
\begin{tabular}{|c|l|}
\hline
ID & Transaction\\
\hline
1 & A(0.7); B(1.0); C(0.2)\\
\hline
2 & A(0.9); D(0.5); E(0.4); F(0.8)\\
\hline
3 & B(0.3); D(1.0); G(0.7)\\
\hline
4 & A(0.1); B(0.6); C(0.7); E(0.2); G(0.4)\\
\hline
5 & C(0.5); D(0.2); E(0.8)\\
\hline
6 & B(0.6); C(0.3); E(1.0); F(0.4)\\
\hline
\end{tabular}
\end{table}
Most studies use the notion of expected support~\cite{Chui2007,Chui2008} to mine frequent patterns from uncertain databases. The expected support $E(s(I))$ of an itemset $I$ is defined as the sum of the probabilities of the presence of $I$ in each of the transactions of the database, where the probability $p(I, T)$ of the presence of $I$ in a transaction $T$ is the product of the existential probabilities of the items of $I$ in $T$. An itemset is considered significant if its expected support exceeds a $minsup$ (i.e. minimum support) threshold.
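Using the transactions of Table~\ref{table_transactions}, the expected support can be computed as in the following minimal sketch of the definition (not an optimized implementation):

```python
def expected_support(itemset, uncertain_db):
    """Sum over transactions of the product of the item probabilities;
    a transaction contributes 0 if it misses any item of the itemset."""
    total = 0.0
    for t in uncertain_db:  # t maps item -> existential probability
        if all(i in t for i in itemset):
            p = 1.0
            for i in itemset:
                p *= t[i]
            total += p
    return total

# The six transactions of Table 2
db = [
    {"A": 0.7, "B": 1.0, "C": 0.2},
    {"A": 0.9, "D": 0.5, "E": 0.4, "F": 0.8},
    {"B": 0.3, "D": 1.0, "G": 0.7},
    {"A": 0.1, "B": 0.6, "C": 0.7, "E": 0.2, "G": 0.4},
    {"C": 0.5, "D": 0.2, "E": 0.8},
    {"B": 0.6, "C": 0.3, "E": 1.0, "F": 0.4},
]
# Only transactions 1 and 4 contain both A and B: 0.7*1.0 + 0.1*0.6 = 0.76
print(expected_support({"A", "B"}, db))
```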
Several approaches to frequent pattern mining problem with uncertain data have been studied by Aggarwal et al.~\cite{Aggarwal2009}. These approaches are extended from existing classical frequent itemset mining methods and can be divided into two categories: candidate generate-and-test algorithms (an extension of the Apriori algorithm) and pattern growth algorithms (extensions of FP-growth~\cite{Han2004} and H-Mine~\cite{Pei2001}). According to this study, while FP-growth is efficient and scalable in the deterministic case, its extension to the uncertain case behaves differently due to the challenges associated with uncertain data. UH-Mine, an extension of H-Mine, is reported to provide the best trade-offs in terms of running time and memory usage.
Bernecker et al.~\cite{Bernecker2009} propose a probabilistic frequent itemset mining (PFIM) framework based on the possible-worlds model. Instead of the expected support, PFIM uses the frequentness probability as the significance measure. Using a dynamic-programming computation, the algorithm is reported to run in $O(|T| \cdot minsup)$ time, where $|T|$ is the number of transactions and $minsup$ is a user-defined threshold; without this dynamic computation, the approach runs in exponential time. However, the algorithm requires the $minsup$ threshold to be defined, and it is nontrivial to apply the statistical test to the frequentness probability.
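The $O(|T| \cdot minsup)$ dynamic program can be sketched as follows; this is our reconstruction of the idea, not the authors' implementation. The support of an itemset follows a Poisson-binomial distribution over the per-transaction presence probabilities, and the count can be capped at $minsup$:

```python
def frequentness_probability(probs, minsup):
    """P(support >= minsup), given the probability that each transaction
    contains the itemset, via an O(|T| * minsup) dynamic program."""
    # dp[j] = P(exactly j supporting transactions so far) for j < minsup;
    # dp[minsup] is an absorbing state accumulating P(at least minsup).
    dp = [0.0] * (minsup + 1)
    dp[0] = 1.0
    for p in probs:
        for j in range(minsup, 0, -1):  # descending: dp[j-1] is still "old"
            if j == minsup:
                dp[j] = dp[j] + dp[j - 1] * p
            else:
                dp[j] = dp[j] * (1 - p) + dp[j - 1] * p
        dp[0] *= (1 - p)
    return dp[minsup]

# Two transactions, each containing the itemset with probability 0.5:
# P(support >= 1) = 1 - 0.5*0.5 = 0.75
print(frequentness_probability([0.5, 0.5], 1))
```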
\section{Proposed Algorithm}
As we discussed in the previous section, various approaches to the co-location mining problem have been proposed during the past decade. Most of them focus on improving the performance of existing frameworks that had some limitations. Several studies have addressed some of these issues one at a time, but none address all of them simultaneously. As a result, these algorithms cannot be used in some real-world applications, such as our motivating question of whether co-locations of cancer cases and sets of released chemicals exist. We discuss these issues and limitations under three categories as follows.
First, the usage of a prevalence measure threshold for the detection of interesting co-location patterns and rules is a main limiting factor in many co-location mining algorithms. In spatial datasets, features usually have varying numbers of instances; they can be extremely rare or present in abundance. Therefore, a single threshold on the participation index (or any other significance measure) cannot capture all meaningful patterns, while other patterns may be reported as significant even if their relation is caused by autocorrelation or other factors. In addition, most existing algorithms use a candidate generation process which forms $(k+1)$-size candidate patterns or rules only from significant $k$-size patterns. However, a set of features can be interesting even if some of its subsets are not significant. For example, two chemicals may not be correlated with a particular disease on their own, but could be correlated in combination. In our approach we do not follow such a candidate generation process. Instead, we use a statistical test to replace the prevalence measure threshold and to identify statistically significant co-location patterns from a given set of candidate patterns. Barua and Sander~\cite{Barua2011} first proposed the usage of statistical tests to find significant co-location patterns. In such a statistical framework, a pattern is considered significant if, under the null hypothesis that there is no spatial dependency among its features, the probability of seeing in $N$ artificial datasets a value of the prevalence measure similar to or greater than the one observed in the original dataset is less than $\alpha$ (the significance level). In this approach, each candidate pattern is evaluated separately for its statistical significance rather than being judged by a single prevalence threshold.
Second, most existing co-location mining approaches use a single distance threshold to identify neighbourhood relationships among spatial objects. However, in some applications this oversimplifies the real situation. For instance, in zoological research, various species have different habitat ranges: birds (especially birds of prey) may interact with other species at large distances, while subterranean animals are limited in their movements. Therefore, the usage of a single distance threshold can lead to inaccurate results. Furthermore, most existing approaches are designed to work with point datasets, whereas spatial datasets may also contain other types of objects such as lines (e.g., roads and communication networks) and polygons (e.g., polluted regions, or areas that had no precipitation for some period or were exposed to other climatic factors). Even though some frameworks for extended spatial objects~\cite{Xiong2004} deal with lines and polygons, they also use a global threshold on the prevalence measure. Furthermore, such frameworks cannot deal with uncertainty in the data (as explained in the following paragraph).
Third, in some applications the information in datasets can be uncertain; data may be incomplete or contain errors. For example, the distribution of a chemical released from a chimney over a polluted region is not uniform: areas closer to the emission point are generally exposed to higher pollution levels than places far from it. Another example is climatic data collected by sensors, whose readings may contain errors. Uncertainty can be expressed in terms of existential probabilities, where each item of a transaction is accompanied by the probability of its existence in this transaction. Although uncertainty has been studied for the frequent itemset mining problem, to the best of our knowledge no such work exists for spatial data and the co-location pattern mining problem.
In this paper, we propose a new framework that addresses the aforementioned limitations. It uses a grid-based ``transactionization'' approach (i.e. creating transactions from a dataset), and a statistical test is performed on the derived set of transactions to obtain significant co-location rules or patterns. In the following subsections we explain the methods and algorithms of our framework.
\subsection{Algorithm Design}
\label{ad}
The objective of this work is to detect significant patterns in a given spatial dataset, i.e. patterns whose prevalence measure value is higher than expected. Such spatial datasets may contain points as well as extended spatial objects such as lines or polygons. We use buffers, which are zones of specified distances created around spatial objects, to define the area affected by the instances in the dataset. For example, a buffer defined around a chemical emission point represents the area polluted by the released chemical. The size and the shape of these buffers may depend on the types of the spatial instances and on various application-specific factors. In addition, the likelihood of the presence of the corresponding feature in the zone covered by a spatial object and its buffer is not uniform and may depend on factors such as the distance from the object. Considering these factors, we propose a new transaction-based co-location pattern mining algorithm that is suitable for extended spatial objects and uncertain data.
\subsubsection{Grid-based Transactionization}
Recall that previous transaction-based methods have some limitations. A window-centric model cuts off neighbourhood relations of instances that are located close to each other but in different partitions. A reference-centric model may count spatial instances more than once, and it is nontrivial to generalize this approach to applications with no reference feature, as in the case of our motivating application. Due to these limitations of previous transactionization models, we propose a new grid-based transactionization method. The procedure is outlined in Algorithm~\ref{alg_transaction}: \textit{GetTransactions(S)}.
Given a spatial dataset \textit{S}, Algorithm~\ref{alg_transaction} first obtains the set of grid points by imposing a grid of suitable granularity over the geographic space covering the spatial points in \textit{S}. Each point in this grid can be seen as a representation of a respective part of the corresponding geographic space. Once the grid points are obtained, Algorithm~\ref{alg_transaction} constructs buffer zones around the spatial objects in \textit{S}. The dataset of our motivating application contains two types of spatial objects: 1) childhood cancer cases and 2) chemical emission points, and buffer zones are defined differently for each. A childhood cancer case object represents the location of a patient diagnosed with cancer; for such objects we define a fixed buffer, a circular region of fixed radius around the source point, which models the mobility range of the patient. When defining buffer zones for chemical emission points, on the other hand, we consider factors such as the amount of chemicals emitted, wind speed, and wind direction. This is further discussed in Section~\ref{mf}: Modelling Framework, where we explain how the original circular buffer region around a chemical emission point, sized by the emission quantity, is morphed into an elliptical region that more realistically represents chemical dispersion under factors such as wind. In the next step of the algorithm, the constructed grid is imposed over the dataset \textit{S}. Figure~\ref{fig_grid_a} illustrates an example dataset with buffers around spatial point instances, and a grid is laid over it in Figure~\ref{fig_grid_b}. Similarly, buffers can also be created around linear and polygonal spatial objects. In a two-dimensional space, the grid points form a square regular grid; due to the spheroid shape of the Earth, a grid used for real-world applications becomes irregular.
However, with a careful choice of grid granularity, this should not considerably affect the accuracy of the results.
\begin{figure*}[!t]
\centering
\begin{minipage}{1.3in}
\subfigure[A sample spatial dataset with point feature instances and their buffers]
{\includegraphics[width=1.3in]{Fig3_a.eps}
\label{fig_grid_a}}
\end{minipage}\hspace{0.1in}
\begin{minipage}{1.3in}
\subfigure[A grid imposed over the space]
{\includegraphics[width=1.3in]{Fig3_b.eps}
\label{fig_grid_b}}
\end{minipage}\hspace{0.1in}
\begin{minipage}{1.3in}
\subfigure[Grid points which intersect with buffers are used to create transactions]
{\includegraphics[width=1.3in]{Fig3_c.eps}
\label{fig_grid_c}}
\end{minipage}
\centering
\caption{Transactionization step}
\label{fig_grid}
\end{figure*}
A grid point may intersect with one or several spatial objects and their buffers. A transaction is defined as the set of features corresponding to these objects; hence each grid point is a potential source of a transaction. Suppose a sensor capable of detecting various features is placed at each grid point; the set of features detected by each sensor can then be seen as a transaction. However, sensor readings are not fully reliable: they are uncertain and can be affected by extreme environmental conditions, the sensors' hardware, durability, and other factors. For example, a sensor may detect a pollutant only if a certain amount of it is present in the sensor's environment. In addition, the likelihood of the presence of a feature in the region covered by an object and its buffer is not uniform. Alternatively, since we work with estimates rather than with actual sensors and sensor-collected data, our model can use the notion of concentration of features. While the fading concentration is not a probability, it can be used to show the feasibility of our model under uncertainty. Intuitively, a feature is more likely to be detected in buffer parts that are closer to the feature point than in parts farther away from it. Furthermore, spatial datasets can be noisy and contain errors; the locations of instances and their presence can be uncertain.
To take these uncertainties into account, the probability of a feature being present in a transaction is also stored when computing the transaction associated with each grid point in steps six and seven of Algorithm~\ref{alg_transaction}. One way to model uncertainty when transforming a spatial dataset into a set of transactions is to use the distance from a spatial object to a grid point (our method of estimating the existential probability of a feature is explained in the following section). For example, the grid point $gp_2$ in Figure~\ref{fig_grid_c} is located closer to the point $A_1$ than the point $gp_1$ is, so we can assume that $p(A,gp_2)>p(A,gp_1)$. An example transaction set is shown in Table~\ref{table_transactions}. When a grid point intersects with several instances of the same feature or their buffers, the highest existential probability is taken as the probability of detecting this feature at the given grid point. Moreover, the granularity of the grid, i.e. the distance between grid points, should be chosen carefully for each application and may depend on the average size of the region covered by a spatial object and its buffer. A large distance between grid points may reduce the accuracy of the results because small feature regions and their overlaps may receive a different number of intersecting grid points depending on how the grid is imposed. On the other hand, when the distance between grid points is too small, the number of derived transactions increases, and the subsequent computation of the significance of candidate co-location patterns or rules may become prohibitively expensive, especially when the number of candidates is large.
\begin{algorithm}[!t]
\caption{{\it GetTransactions(S)}: Transactionization step.}
\label{alg_transaction}
\begin{algorithmic}[1]
\STATE $T = \emptyset$: set of transactions
\STATE $G$: set of grid points
\STATE Build buffer zones around spatial objects of $S$
\STATE Impose a grid $G$ over the dataset $S$
\FORALL {point $g \in G$}
\STATE $t$ = get the set of features whose instances contain $g$, with the corresponding existential probabilities
\STATE $T = T \cup t$
\ENDFOR
\RETURN $T$
\end{algorithmic}
\end{algorithm}
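A minimal planar sketch of the transactionization step for point objects with circular buffers follows; the linearly decaying probability model is an illustrative assumption of ours, not the model used later in the paper:

```python
import math

def get_transactions(objects, buffer_radius, step):
    """Sketch of GetTransactions(S): impose a grid of spacing `step`,
    and for each grid point collect the features whose buffers cover it.
    `objects` is a list of (feature, x, y); the existential probability
    is assumed to decay linearly with distance (illustrative choice)."""
    xs = [x for _, x, _ in objects]
    ys = [y for _, _, y in objects]
    transactions = []
    gy = min(ys) - buffer_radius
    while gy <= max(ys) + buffer_radius:
        gx = min(xs) - buffer_radius
        while gx <= max(xs) + buffer_radius:
            t = {}
            for feature, x, y in objects:
                d = math.hypot(gx - x, gy - y)
                if d <= buffer_radius:
                    p = 1.0 - d / buffer_radius
                    # keep the highest probability per feature
                    t[feature] = max(p, t.get(feature, 0.0))
            if t:  # grid points outside every buffer yield no transaction
                transactions.append(t)
            gx += step
        gy += step
    return transactions

objects = [("A", 0.0, 0.0), ("B", 1.0, 0.0)]
ts = get_transactions(objects, buffer_radius=1.5, step=0.5)
print(len(ts))  # number of grid points intersecting at least one buffer
```

Grid points lying in the overlap of the two buffers (e.g. between $A$ and $B$) produce transactions containing both features, which is what makes co-locations visible to the subsequent mining step.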
\subsubsection{Co-location Pattern/Rule Mining Algorithm}
Once the transactions are derived using Algorithm~\ref{alg_transaction}, we perform co-location pattern mining using Algorithm~\ref{alg_main}. As the first step after the transactionization, given the set of derived transactions $T$ and the set of spatial features $F$, Algorithm~\ref{alg_main} computes a prevalence measure value (i.e. the level of interestingness) for each candidate co-location pattern or rule. Depending on the objective, this prevalence measure can differ. In some applications, experts look for sets of features that are often co-located with each other, but not necessarily in a cause-effect relationship. In this case, which is analogous to frequent pattern mining, the expected support $ExpSup(P)$ can be used to define the level of interestingness of a pattern $P$. In frequent pattern mining with certain data, the support of a pattern is counted deterministically as the number of transactions containing all features of the pattern. With uncertain data, however, transactions are probabilistic, and therefore the support is computed as an expected value, defined as follows.
\begin{definition}
The expected support $ExpSup(P)$ of a pattern $P$ is defined as the sum of probabilities of the presence of $P$ in each of the transactions $t$ in the uncertain database:
\begin{equation}ExpSup(P)=\sum \limits_{t \in T} p(P,t).\end{equation}
\end{definition}
The probability of the presence of a pattern $P$ in a transaction $t$ is computed as follows:
\begin{definition}
The probability $p(P,t)$ of the pattern $P$ occurring in a transaction $t$ is the product of corresponding feature instance probabilities:
\begin{equation}p(P,t)= \prod \limits_{f \in P}p(f,t).\end{equation}
\end{definition}
For some other applications, researchers intend to discover a predefined set of rules. For example, for a dataset of disease outbreaks and possible cause factors, a typical co-location rule is of the form $C \to D$, where $C$ is the set of cause features and $D$ is the disease feature. For this scenario, the expected confidence $ExpConf(X \to Y)$ can be used as the prevalence measure of a co-location rule $X \to Y$, where $X \subseteq F$, $Y \subseteq F$, and $X \cap Y = \emptyset$. The definition of $ExpConf(X \to Y)$ is as follows.
\begin{definition}
The expected confidence $ExpConf(X \to Y)$ of a rule $X \to Y$ is defined as:
\begin{equation}ExpConf(X \to Y)=\frac{ExpSup(X \cup Y)}{ExpSup(X)}.\end{equation}
\end{definition}
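The two prevalence measures above can be sketched over the derived uncertain transactions as follows (the transactions here are hypothetical):

```python
def exp_sup(pattern, transactions):
    """ExpSup(P): sum over transactions of the product of the
    existential probabilities of the pattern's features."""
    total = 0.0
    for t in transactions:  # t maps feature -> existential probability
        p = 1.0
        for f in pattern:
            p *= t.get(f, 0.0)  # a missing feature zeroes the product
        total += p
    return total

def exp_conf(X, Y, transactions):
    """ExpConf(X -> Y) = ExpSup(X u Y) / ExpSup(X)."""
    return exp_sup(X | Y, transactions) / exp_sup(X, transactions)

ts = [{"C": 0.8, "D": 0.9}, {"C": 0.5}, {"C": 0.4, "D": 1.0}]
print(exp_conf({"C"}, {"D"}, ts))  # (0.8*0.9 + 0.4*1.0) / (0.8+0.5+0.4)
```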
Hence, depending on whether patterns or rules are to be mined, our algorithm uses either $ExpSup(P)$ or $ExpConf(X \to Y)$ as the prevalence measure. Algorithm~\ref{alg_main} shows the pseudocode of our approach for the case where co-location patterns are mined and the expected support is used as the prevalence measure.
In the steps of Algorithm~\ref{alg_main} discussed above, we build buffers around each instance, perform grid transactionization to obtain a set of transactions from the spatial dataset, and compute a prevalence measure value for each candidate co-location pattern or rule using the derived transactions. Our goal now is to discover the set of significant patterns and rules. As discussed above, a single threshold on the prevalence measure may lead to the discovery of spurious patterns and the omission of interesting ones. Moreover, only co-location patterns or rules with surprising levels of the prevalence measure should be considered significant; that is, it should be unlikely, at a predefined significance level, that the instances of their features are located close to each other only by chance.
Thus, in order to rule out the possibility that instances of features are co-located only by chance, we perform randomization tests in which the instances of the features are treated as independent. We utilize a statistical test to estimate the likelihood of seeing the same, or a greater, level of the prevalence measure under the null hypothesis that the features of a pattern or rule are spatially independent of each other. The definition of significance is as follows:
\begin{definition}
A co-location pattern $P$ is considered significant at level $\alpha$ if the probability $p$ of detecting an expected support higher than or equal to $ExpSup_{obs}(P)$ (i.e. the expected support in the original dataset) in a dataset complying with the null hypothesis is not greater than $\alpha$.
\end{definition}
The same logic can be applied to a case when significant co-location rules are mined:
\begin{definition}
A co-location rule $R$ is considered significant at level $\alpha$ if the probability $p$ of detecting an expected confidence higher than or equal to $ExpConf_{obs}(R)$ (i.e. the expected confidence in the original dataset) in a dataset complying with the null hypothesis is not greater than $\alpha$.
\end{definition}
Let us suppose that we are mining co-location rules; in this case, Algorithm~\ref{alg_main} uses the expected confidence $ExpConf$ as the prevalence measure. Let $ExpConf_{obs}(X \to Y)$ denote the expected confidence of a co-location rule $X \to Y$ in the real dataset, and $ExpConf_{rand}(X \to Y)$ its expected confidence in a randomized dataset generated under the null hypothesis. The expected confidence of the co-location rule is calculated in each of $R$ randomized datasets in order to estimate the probability $p$. Given the number of simulations $R$, the value of $p$ is computed as:
\begin{equation}\label{eq_pi_value}p=\frac{R_{\ge ExpConf_{obs}}+1}{R+1},\end{equation}
where $R_{\ge ExpConf_{obs}}$ is the number of simulations in which $ExpConf_{rand}(X \to Y) \ge ExpConf_{obs}(X \to Y)$. The observed dataset itself is counted in both the numerator and the denominator.
If the $p$-value is less than or equal to a predefined level of significance $\alpha$, the null hypothesis is rejected: it is unlikely that the features of the rule are spatially independent, i.e. that they are situated close to each other only by chance. Hence, the co-location rule $X \to Y$ is considered significant at level $\alpha$. The same procedure applies to the detection of co-location patterns; the only difference is that the expected support $ExpSup$ is used as the prevalence measure instead of the expected confidence.
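The $p$-value computation of Eq.~(\ref{eq_pi_value}) can be sketched as follows; the prevalence values below are hypothetical:

```python
import random

def p_value(obs_value, rand_values):
    """p = (R_{>= obs} + 1) / (R + 1), with the observed dataset
    counted in both the numerator and the denominator."""
    r_ge = sum(1 for v in rand_values if v >= obs_value)
    return (r_ge + 1) / (len(rand_values) + 1)

# Hypothetical simulation: the observed expected confidence (0.8) is
# matched or exceeded in 2 of 99 randomized datasets.
random.seed(0)
rand = [random.uniform(0.0, 0.5) for _ in range(97)] + [0.9, 0.95]
p = p_value(0.8, rand)
print(p, p <= 0.05)  # (2 + 1) / (99 + 1) = 0.03 -> significant
```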
In an incremental manner, Algorithm~\ref{alg_main} computes the $p$-value described above for each candidate pattern or rule, as depicted in the iterative loop from steps 10 to 22. To estimate this probability $p$, a set of randomized datasets ($RD[1...R]$) is generated under the null hypothesis. Each randomized dataset has the same number of instances of each feature as the original dataset. In addition, the distribution of the instances of each feature in a randomized dataset should be similar to its distribution in the original data. For instance, disease cases should be placed within populated areas; randomly placing disease cases over the entire study region can lead to invalid results, especially when most of the region is unpopulated. Another example can be found in biology: some animal species have specific habitat requirements, such as proximity to water reservoirs or the presence of certain types of vegetation. Such observations need to be taken into account when generating randomized datasets. When computing the $p$-values and identifying significant rules or patterns, rather than going through the full iteration, we perform candidate filtering to reduce the computation time. In the following discussion we explain how this computation of $p$-values and the detection of significant rules work with randomized datasets and the candidate filtering technique.
\begin{algorithm}[!t]
\caption{Mining significant co-location patterns.}
\label{alg_main}
\begin{algorithmic}[1]
\REQUIRE Spatial dataset $D$.\\Level of significance $\alpha$.\\Number of simulation runs $R$.\\Set of randomized spatial datasets $RD[1..R]$.
\ENSURE Set of significant co-location patterns $P$
\STATE $T$: set of transactions
\STATE $CP$: set of candidate patterns
\STATE $T=GetTransactions(D)$
\FORALL {$cp \in CP$}
\STATE $cp.ExpSup_{obs} = Compute ExpSup(cp,T)$
\IF {$cp.ExpSup_{obs} == 0$}
\STATE $CP = CP \backslash cp$
\ENDIF
\ENDFOR
\FOR {$i = 1 \to R$}
\STATE $T=GetTransactions(RD_i)$
\FORALL {$cp \in CP$}
\STATE $cp.ExpSup_{sim}[i] = Compute ExpSup(cp,T)$
\IF {$cp.ExpSup_{sim}[i] \ge cp.ExpSup_{obs}$}
\STATE $cp.R_{\ge ExpSup_{obs}} = cp.R_{\ge ExpSup_{obs}}+1$
\STATE $cp.\alpha = \frac {cp.R_{\ge ExpSup_{obs}}+1}{R+1}$
\IF {$cp.\alpha > \alpha$}
\STATE $CP = CP \backslash cp$
\ENDIF
\ENDIF
\ENDFOR
\ENDFOR
\STATE $P = CP$
\RETURN $P$
\end{algorithmic}
\end{algorithm}
The calculation of the $p$-value is repeated for all candidate co-location patterns or rules. The number of candidates grows exponentially with the number of spatial features in the dataset. In addition, the accuracy of the $p$-value depends on the number of simulation runs: the more randomized datasets are checked, the more accurate the results. These two factors may lead to an enormous amount of computation. However, the support of a co-location decreases as the size of a candidate pattern or rule increases, because fewer transactions contain all of its features. Therefore, researchers may put a threshold on the support or on the maximal size of a candidate in order to analyze only the patterns and rules that are backed by a meaningful number of transactions; this is one important constraint we use when applying the algorithm to real datasets. In addition, we use the following filtering techniques to exclude candidate patterns or rules that are definitely not significant from the $p$-value computation in Algorithm~\ref{alg_main}.
\begin{enumerate}
\item After the calculation of the prevalence measure for the candidate patterns in the real spatial dataset, some patterns may have a prevalence measure value of zero, meaning that their feature combinations do not occur in the dataset. Such patterns obviously cannot be statistically significant, and they are excluded from the set of candidate patterns (see steps 6-8 in Algorithm~\ref{alg_main}). In other applications, a different, low-valued threshold on the prevalence measure can be used in order to obtain significant patterns and rules with a certain minimum level of interestingness; in this case, candidate patterns and rules with a prevalence value lower than this threshold can also be pruned and excluded from further computations.
\item During the calculation of the $p$-value for candidate patterns whose observed prevalence is higher than zero (i.e. in the main iterative loop, steps 10-22 of Algorithm~\ref{alg_main}), some candidates may reach a $p$-value that already exceeds the level $\alpha$. For example, assume that the number of simulation runs is 99 and $\alpha=0.05$ (the most commonly used significance level). If after ten simulation runs the prevalence measure of a pattern $P$ is greater than or equal to the observed prevalence in 5 randomized datasets, pattern $P$ has already surpassed the threshold ($(5+1)/(99+1) > 0.05$). Therefore, it cannot be significant and can be excluded from the remaining 89 checks (see steps 16-19 in Algorithm~\ref{alg_main}), which greatly reduces the computation time. With this filter, after the last simulation run the remaining set of candidates contains only significant patterns or rules, allowing Algorithm~\ref{alg_main} to return this set as the set of statistically significant patterns.
\end{enumerate}
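The second filter can be sketched as follows; the data layout and function names are ours for illustration, not the paper's implementation:

```python
def significant_rules(observed_prev, randomize, num_runs=99, alpha=0.05):
    """Monte Carlo significance test with early pruning: a candidate is
    dropped as soon as its p-value can no longer stay below alpha."""
    # exceed[c] counts randomized datasets whose prevalence >= observed
    exceed = {c: 0 for c in observed_prev}
    # a candidate is prunable once (exceed + 1) / (num_runs + 1) > alpha
    max_exceed = int(alpha * (num_runs + 1)) - 1
    candidates = set(observed_prev)
    for _ in range(num_runs):
        random_prev = randomize()  # prevalence in one randomized dataset
        for c in list(candidates):
            if random_prev.get(c, 0.0) >= observed_prev[c]:
                exceed[c] += 1
                if exceed[c] > max_exceed:  # p-value already exceeds alpha
                    candidates.discard(c)
    return candidates  # only significant candidates remain after the last run
```

With 99 runs and $\alpha=0.05$, a candidate is dropped as soon as 5 randomized datasets match or exceed its observed prevalence, exactly as in the example above.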
\subsection{Computational Complexity}
\label{cc}
The computational time complexity of the proposed algorithm is important for understanding its effectiveness, efficiency, and practicality. Let us first analyze the time cost of Algorithm~\ref{alg_transaction}. A closer look reveals four major operations that influence the computational cost of the GetTransactions(S) procedure (i.e. Algorithm~\ref{alg_transaction}). Let us define \textit{n} as the number of chemical emission points and \textit{m} as the number of cancer cases.
\begin{enumerate}
\item Obtain the set of grid points G (step 2 of Algorithm~\ref{alg_transaction}): This operation first selects the maximum/minimum latitudes and longitudes from the recorded cancer cases and chemical emission points, which costs O(n+m). Then, given the granularity of the grid (the distance between two grid points), the operation computes the grid points by iterating from the maximum to the minimum latitude and, in each of these iterations, from the maximum to the minimum longitude. This gives a computational complexity of $O(k_{1}k_{2})$, where $k_1$ is the number of divisions from the maximum to the minimum latitude and $k_2$ is the number of divisions from the maximum to the minimum longitude. Let us also introduce $k_3 = k_1k_2$, the number of grid points.
\item Build the buffer zones around the spatial objects (step 3 of Algorithm~\ref{alg_transaction}): This operation defines a buffer, a circular or elliptical area around a particular spatial point or object. Constructing one buffer takes a constant time $k_4$, so the cost incurred by this operation is O($k_4$(n+m)).
\item Impose the grid G over the dataset S (step 4 of Algorithm~\ref{alg_transaction}): This operation requires joining each of the (n+m) input records with each of the grid points, which leads to a run-time complexity of O($k_3$(n+m)).
\item Compute the transactions (steps 5-8 of Algorithm~\ref{alg_transaction}): This operation groups the temporary transactions obtained by joining input data records with the grid points in the previous step, which costs O($k_3$(n+m)). Then each of the groups is processed to compute the count of the feature instances or to identify the maximum existential probability of a particular feature, which incurs a constant cost \textit{c} on average, giving a run-time complexity of O(c$k_3$).
\end{enumerate}
Based on the above, the overall run-time complexity of the GetTransactions(S) algorithm can be represented as O($n+m+k_3$)+O($k_4$(n+m))+O(2$k_3$(n+m))+O(c$k_3$). If we fix all inputs except the original dataset of chemical emission points and cancer cases, this simplifies to O(n+m) for Algorithm~\ref{alg_transaction}. However, the grid granularity plays a major role in determining the constants $k_1, k_2$ and $k_3$, which can grow to the order of hundreds of thousands. The other constants, $k_4$ and c, can safely be considered insignificant.
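For concreteness, the grid construction in step 1 can be sketched as follows; the coordinate handling is simplified and the function name and flat-grid assumption are ours:

```python
def build_grid(points, step):
    """Generate grid points covering the bounding box of the input
    (cancer cases and emission points), spaced `step` apart."""
    lats = [p[0] for p in points]
    lons = [p[1] for p in points]
    min_lat, max_lat = min(lats), max(lats)  # O(n+m) scan of the input
    min_lon, max_lon = min(lons), max(lons)
    grid = []
    lat = min_lat
    while lat <= max_lat:          # k1 latitude divisions
        lon = min_lon
        while lon <= max_lon:      # k2 longitude divisions per latitude
            grid.append((lat, lon))  # k3 = k1 * k2 grid points in total
            lon += step
        lat += step
    return grid
```

The nested loops make the dependence on grid granularity explicit: halving `step` roughly quadruples $k_3$ and hence the cost of every later join against the grid.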
\par
When analyzing the computational complexity of Algorithm~\ref{alg_main}, three major operations can be recognized. Their computational complexity determines the overall time cost of Algorithm~\ref{alg_main}.
\begin{enumerate}
\item GetTransactions(D) (i.e. Algorithm~\ref{alg_transaction}): The computational complexity of this method is in the order of O(n+m), as discussed previously.
\item Compute the expected support of the candidate patterns: Part of this procedure is the construction of the candidate pattern set. Based on the number of features and the maximal itemset size, this has a computational complexity in the order of O($|f|^d$), where $|f|$ is the size of the feature set $F$ and $d$ is the size of the largest itemset. However, since the number of features and the size of the largest itemset can be considered fixed constants, in practice this term does not dominate the time cost. Computing the expected support, on the other hand, can cost O($k_3|f|^d$).
\item Execute simulation runs and compute the expected support on randomized datasets: This has a run-time complexity in the order of O(R$k_3|f|^d$), where R is the number of randomized datasets; the costs of comparing the expected supports in the actual and randomized datasets and of computing and comparing the $p$-values are constant and insignificant.
\end{enumerate}
Based on this, the overall run-time complexity of Algorithm~\ref{alg_main} can be represented as $O(n+m)$+O($k_3|f|^d$)+O(R$k_3|f|^d$). This emphasizes that the size of the original input plays a minor role in the overall computational complexity, whereas the number of randomized datasets (simulation runs) R, the number of grid points $k_3$ (i.e., indirectly the grid granularity), and the largest itemset size $d$ play the major role. However, since each randomization or simulation is done independently, Algorithm~\ref{alg_main} can be executed in an ``embarrassingly parallel" fashion on $R$ parallel nodes.
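A minimal sketch of this parallel scheme, with one callable per simulation run; a thread pool stands in for the $R$ parallel nodes, and the simulation body is a placeholder, not the paper's implementation:

```python
import random
from concurrent.futures import ThreadPoolExecutor

def prevalence_in_randomized(seed):
    """Placeholder for one simulation run: randomize the dataset and
    return the prevalence of every candidate in it (illustrative)."""
    rng = random.Random(seed)
    return {"pattern": rng.random()}

def parallel_p_value(observed, num_runs=99, workers=4):
    """Estimate the p-value of one candidate from num_runs independent
    randomized datasets, evaluated in parallel."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        prevalences = list(pool.map(prevalence_in_randomized, range(num_runs)))
    exceed = sum(1 for p in prevalences if p["pattern"] >= observed)
    return (exceed + 1) / (num_runs + 1)
```

In practice process pools or cluster nodes would replace the thread pool, since each run is CPU-bound; the structure of the computation is unchanged.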
\subsection{Advantages of the Proposed Algorithm}
As we have shown in our previous discussions, by combining techniques of co-location mining and frequent pattern mining, we address the limitations in previous co-location mining approaches. Using our framework has the following advantages over others:
\begin{itemize}
\item The main advantage of our framework is that the only parameter it requires is the $p$-value (i.e., the level of significance); it does not need thresholds on prevalence measures. The statistical test replaces the use of one global threshold on the prevalence measure of candidate co-location patterns or rules, and only meaningful patterns are reported as significant: those whose prevalence measure is higher than the value expected under the null hypothesis that the features of a pattern are independent of each other. Sometimes researchers do not need patterns or rules with very low support values even if they are significant; in this case a threshold on support can be used. However, it should be set relatively low in comparison with other approaches, so that it does not exclude meaningful patterns or rules.
\item While the neighbourhood distance threshold used in many co-location algorithms is set to one value for all spatial features, our model can deal with varying buffer sizes. A buffer size may depend on the type of a feature or on attributes of individual spatial objects. Thus, the algorithm can be used for applications where features differ in their effect on the environment around them, e.g., plant and animal species. Moreover, the model can be further extended to take into account not only point instances but also other types of spatial objects, such as lines and polygons, which are present in many spatial datasets.
\begin{figure}[!t]
\centering
\includegraphics[width=3.5in]{Fig4.eps}
\caption{Neighboring objects $A_1 - B_1$ and $A_2 - B_2$. In the transactionization step, the intersection of $A_1$ and $B_1$ receives more transactions (black dots) than the pair $A_2$ and $B_2$}
\label{fig_prox_remote}
\end{figure}
\item In most previous algorithms, two or more objects are considered to have a neighbourhood relationship if they are located at a distance no greater than a distance threshold. However, these approaches ignore spatial context: how close or far the objects actually are from each other. Figure~\ref{fig_prox_remote} illustrates two pairs of neighboring spatial objects with corresponding buffer zones. Both pairs, $A_{1} - B_{1}$ and $A_{2} - B_{2}$, are neighbors and are treated alike by most co-location approaches, even though $A_1$ and $B_1$ are closer to each other than $A_2$ is to $B_2$. Being located at a closer distance, the instances of the former pair are more likely to be related than the instances of the latter pair. By using buffer zones around spatial instances and transactions created from grid points, our algorithm ensures that the spatial location of objects is not ignored: the pair $A_1 - B_1$ gets more transactions (shown as black dots in Figure~\ref{fig_prox_remote}) than the second pair, so the real situation is represented more accurately. Consider another example. Let spatial points $A_1$, $B_1$ and $C_1$ be pairwise neighbors (Figure~\ref{fig_intersect_a}). Previous algorithms consider them to form a clique. However, as can be seen in Figure~\ref{fig_intersect_b}, with certain buffer sizes the actual intersection area of the three buffers can be relatively small. Furthermore, there may be no intersection of the three objects at all, even though they form pairwise neighbourhood relationships, as illustrated in Figure~\ref{fig_intersect_c}. Our buffer-based framework is able to distinguish these cases: a varying number of transactions is derived from intersecting regions of multiple objects depending on the distances between them and their buffer sizes.
\begin{figure*}[!t]
\centering
\begin{minipage}{1.3in}
\subfigure[$A_1$, $B_1$ and $C_1$ are pairwise neighbors]
{\includegraphics[width=1.3in]{Fig5_a.eps}
\label{fig_intersect_a}}
\end{minipage}\hspace{0.2in}
\begin{minipage}{1.3in}
\subfigure[An intersection of $A_1$, $B_1$ and $C_1$]
{\includegraphics[width=1.3in]{Fig5_b.eps}
\label{fig_intersect_b}}
\end{minipage}\hspace{0.2in}
\begin{minipage}{1.3in}
\subfigure[No intersection of $A_1$, $B_1$ and $C_1$]
{\includegraphics[width=1.3in]{Fig5_c.eps}
\label{fig_intersect_c}}
\end{minipage}
\centering
\caption{Intersection of neighboring objects}
\label{fig_intersect}
\end{figure*}
\item Similar to classical frequent pattern mining applications where data can be certain (deterministic) or uncertain (probabilistic), spatial datasets can also exhibit uncertainty of feature existence in space. In other words, a probability of detecting an existence of a feature in a region closer to an observation point is higher than in regions situated farther from it. By taking into account uncertainty and including it in our framework, we believe that our model increases the accuracy of the results.
\end{itemize}
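The distance effect of Figure~\ref{fig_prox_remote} can be reproduced with a toy sketch: two circular buffers share more grid points (and hence generate more joint transactions) when their centers are closer. The coordinates, buffer size, and grid step below are ours, chosen only for illustration:

```python
from math import hypot

def shared_grid_points(center_a, center_b, buffer_r, grid_step=0.1):
    """Count grid points lying inside both circular buffers: these are the
    points that generate transactions containing both features."""
    (ax, ay), (bx, by) = center_a, center_b
    count = 0
    x = min(ax, bx) - buffer_r
    while x <= max(ax, bx) + buffer_r:
        y = min(ay, by) - buffer_r
        while y <= max(ay, by) + buffer_r:
            if hypot(x - ax, y - ay) <= buffer_r and hypot(x - bx, y - by) <= buffer_r:
                count += 1
            y += grid_step
        x += grid_step
    return count

# The closer pair shares more transactions than the distant pair.
near = shared_grid_points((0.0, 0.0), (0.5, 0.0), buffer_r=1.0)
far = shared_grid_points((0.0, 0.0), (1.5, 0.0), buffer_r=1.0)
```

Both pairs are "neighbors" under a distance threshold of 2, yet the counts differ, which is precisely the distinction the transactionization step preserves.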
\section{Modeling Framework}
\label{mf}
A modeling framework that is used to handle and analyze data is an important part of any practical research. In theoretical studies it can be simplified in order to generalize a task and define algorithms applicable to a wide range of applications and domains. However, the use of overly general approaches and algorithms may produce misleading or even wrong results. For example, a neighbourhood distance threshold is an important measure of the interaction and relationship between features, yet a single distance threshold cannot accurately capture all possible relationships among them. In biology, various animal species have different home ranges, the areas where they search for food: rodents may require little space, while birds forage over wider regions. Another example comes from urban studies. Consider two points of interest, for example a shopping mall and a grocery store, situated at a distance exceeding some threshold; if they are connected by a high-quality road, they are more likely to be co-located than two points positioned seemingly close to each other but separated by obstacles. Most application domains, if not all, have their own nuances of this kind that must be taken into account when performing analysis or mining tasks in order to obtain the most accurate and significant results.
The motivating application of this paper, detecting co-location patterns/rules of pollutant emission points and childhood cancer cases, has unique difficulties and challenges which need to be addressed. For example, the distribution of a pollutant in the region surrounding its emission point is not uniform, and it can depend on several factors such as the pollutant type, the released amount, weather conditions (e.g., wind and precipitation), topography, etc. In addition, various chemicals have different levels of harmfulness and toxicity, and the concentration of some pollutants might be inversely proportional to the distance from the emitting point. These are only a few examples of possible issues in our motivating application, and addressing them is important for the accuracy of our algorithm; in particular, when defining buffers for spatial data points, a model that can handle the issues mentioned above is essential. In the following we discuss how our modeling framework tackles some of these problems, namely released amounts, wind speed and direction, and the uncertainty in the existence of chemicals. Certainly, we do not aim to reproduce complicated air-pollution dispersion models, which require considering many other variables and parameters. Instead, our model provides a simple framework that attempts to simulate real-world conditions while operating with the available data.
\subsection{Pollutant Amounts}
The dataset on pollutants contains data on the estimated yearly releases of chemicals by industry, according to Canada's National Pollutant Release Inventory~\cite{npri}. For our research we use the average amount of chemicals released by a facility in a given year. The average amounts vary widely: the minimum and maximum average yearly releases in the dataset are 0.001 grams and 80,000 tons, respectively. Figure~\ref{fig_framework}~(a) displays an example dataset containing cancer points (feature $A$) and chemical points (features $B$ and $C$). In Figure~\ref{fig_framework}~(b), the buffer zone around each pollutant point is based on the amount released at that location. For example, instance $C_1$ has a larger affected zone than instance $C_3$, which has a smaller amount of emission. The buffer zones of cancer points are not changed.
For simplicity, we take the maximal dispersion distance to be the natural logarithm of the release amount. This function gives a smooth curve that does not grow as fast as linear or root functions, which yield very large distances for heavier releases. Even though this technique oversimplifies the real-world dispersion of pollutants, it helps make the results more precise. Other functions can be used to calculate the maximal dispersion distance; they can depend on the type of a pollutant (a heavier chemical settles faster and at a shorter distance from a chimney) or on the height of a chimney. An additional point that could be considered in future work is that the area very close to a chimney does not get polluted, and the higher the chimney, the bigger that region.
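A minimal sketch of this choice; the unit handling and the clamping of very small releases to a zero radius are our assumptions, not specified in the text:

```python
from math import log

def buffer_radius(release_amount):
    """Maximal dispersion distance as the natural logarithm of the
    yearly release amount; sub-unit releases are clamped to zero."""
    return max(0.0, log(release_amount))

# A release 1000x heavier only extends the buffer additively (by ln 1000):
r_small = buffer_radius(80.0)      # roughly 4.4
r_large = buffer_radius(80_000.0)  # roughly 11.3
```

This additive growth is exactly the damping property the text describes: heavy emitters get larger buffers, but not proportionally larger ones.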
\begin{figure*}[!t]
\centering
\includegraphics[width=4.5in]{Fig6.eps}
\caption{Modeling framework usage examples: (a) an example spatial dataset (A - cancer, B and C - pollutants); (b) buffer sizes vary depending on the pollutant release amount; (c) buffer shapes change with the wind direction and speed (shown by arrows)}
\label{fig_framework}
\end{figure*}
\subsection{Wind Speed and Direction}
Weather conditions and topographical features may affect the distribution of chemicals in the air; examples of such factors are prevailing winds, precipitation, relative humidity, mountains, hills, etc. As a first step, this part of the modeling framework includes the wind speed and the prevailing wind direction at source points as model variables.
Regarding the wind speed and direction, two situations are possible. First, the region where a facility is located may be windless throughout the year. In this case, a pollutant is assumed to disperse in a circular region around the source point, with the circle's radius derived from the released amount as discussed in the previous subsection. The second situation is more frequent: there is a nonzero wind speed with a prevailing wind direction. In this case we presume that the original distribution circle is morphed into a more ellipse-like region. Figure~\ref{fig_framework}(c) illustrates elliptical buffer regions; their forms depend on the wind speed and its most frequent direction. Our calculations of the characteristics of the ellipse are based on the works by Getis and Jackson~\cite{Getis1971} and Reggente and Lilienthal~\cite{Reggente2009}. The major axis of the ellipse is in the direction of the prevailing wind. We assume that the area polluted by a chemical when wind is present is the same as when there is no wind; therefore, the area of the ellipse is kept equal to the area of the original circle. The source point can be placed on the major axis between the center and the upwind focus; in our model we locate it in the middle of the segment between these two points. Figure~\ref{fig_circle_ellipse} shows an example of this buffer transformation: the original circular buffer zone around emission point $P$ is changed to an ellipse, and $P$ is shifted away from the center of its buffer toward the upwind focus.
\begin{figure}[!t]
\centering
\includegraphics[width=4in,height = 1.1in]{Fig7.eps}
\caption{A buffer circle around emission point $P$ is morphed into an ellipse}
\label{fig_circle_ellipse}
\end{figure}
Obviously, wind with a higher speed distributes chemicals to greater distances. Therefore, we need to include the wind speed value in the computations. The lengths of the major semi-axis $a$ and minor semi-axis $b$ are dependent on the wind speed and derived from the equations:
\begin{equation}a= r+\gamma |\vec v|,\end{equation}
\begin{equation}b= \frac{r^2}{a},\end{equation}
where $r$ is the radius of the original circle, $\vec v$ is the wind speed, and $\gamma$ is the stretching coefficient.
The larger the value of the stretching coefficient, the longer the ellipse's major axis. In this work it is fixed at 0.3, but it could be varied and take a different value for each pollutant. The calculation of the length of the semi-minor axis $b$ follows our assumption that the area of the ellipse is equal to the area of the original circle.
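The two equations above preserve the circle's area, since $\pi a b = \pi (r+\gamma|\vec v|)\,r^2/(r+\gamma|\vec v|) = \pi r^2$. A quick numeric sketch (symbol names follow the text):

```python
def ellipse_axes(r, wind_speed, gamma=0.3):
    """Semi-axes of the wind-stretched buffer: a grows with wind speed,
    b shrinks so the ellipse keeps the area of the original circle."""
    a = r + gamma * wind_speed  # semi-major axis, along the prevailing wind
    b = r ** 2 / a              # semi-minor axis
    return a, b

a, b = ellipse_axes(r=2.0, wind_speed=10.0)  # a = 5.0, b = 0.8
```

Here a circle of radius 2 stretched by a wind of speed 10 becomes a 5-by-0.8 ellipse, and the product $a \cdot b = r^2 = 4$ confirms the area is unchanged.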
We thus improve our model by using elliptical buffer zones, which depend on the average wind speed and the most frequent wind direction, instead of circular buffers. However, this is still a simplified model. Other factors affecting the distribution of chemicals in air might be taken into account in future research to simulate real processes more accurately; for example, multiple alternating seasonal wind directions would morph the buffer into a more complex shape.
\subsection{Wind Stations and Data Interpolation}
In order to get values of the wind speed and prevailing wind direction, an interpolation of wind fields between weather stations is used. The data of monitoring stations in Alberta comes from two sources. First, the data from 18 stations is obtained from Environment Canada~\cite{env_canada} which provides climate normals that are based on climate stations with at least 15 years of data between 1971 and 2000. The most frequent wind direction is a direction (out of possible eight directions) with the highest average occurrence count. Second, the data from 156 stations is derived from AgroClimatic Information Service (ACIS)~\cite{acis}. The locations of stations in Alberta are displayed in Figure~\ref{fig_wind}. The data provided by ACIS is daily from 2005 to 2011. In order to make the data consistent, the average wind speed and the most frequent wind direction are calculated using a method similar to the one used by Environment Canada~\cite{env_canada}. The average wind speed is simply the average value of this parameter for all available days. The wind direction is rounded to eight points of the compass. A direction with the highest count of daily observations is assigned as the most prevailing wind direction.
Unlike Alberta, the data for monitoring stations in Manitoba comes from a single source, 20 stations of Environment Canada~\cite{env_canada}, which also provides climate normals for the years 1971 to 2000. The locations of the stations in Manitoba are displayed in Figure~\ref{fig_wind_manitoba}.
The climate normals from the two sources in Alberta and the one source in Manitoba are used to build interpolations in the ArcGIS tool~\cite{arcgis}. However, ArcGIS is restricted to linear surface interpolations, and the wind direction is a nonlinear attribute. In linear systems (e.g., the number of sunny days or days with precipitation) there is only one path from one value to another; for example, to move the temperature from 37$^{\circ}$C to 40$^{\circ}$C, the only path is to increase it linearly. Nonlinear systems, on the other hand, may have several paths: one can move from 90$^{\circ}$ to 270$^{\circ}$ either clockwise through 180$^{\circ}$ or counter-clockwise through 0$^{\circ}$, and both paths are valid. Therefore, linear interpolation leads to wrong results when applied directly to nonlinear systems.
\begin{figure*}[!t]
\centering
\begin{minipage}{2.0in}
\subfigure[18 stations of Environment Canada]
{\includegraphics[width=2.0in,height = 2.6in]{Fig8_a.eps}
\label{fig_wind_a}}
\end{minipage}\hspace{0.3in}
\begin{minipage}{2.0in}
\subfigure[156 stations of ACIS]
{\includegraphics[width=2.0in,height = 2.6in]{Fig8_b.eps}
\label{fig_wind_b}}
\end{minipage}
\centering
\caption{The monitoring stations in Alberta}
\label{fig_wind}
\end{figure*}
\begin{figure}[!t]
\centering
\includegraphics[width=3in, height = 3in]{Fig9.eps}
\caption{The monitoring stations in Manitoba}
\label{fig_wind_manitoba}
\end{figure}
Interpolation of wind fields requires a technique that considers the nonlinear nature of the wind direction attribute. A transformation is done according to the work by Williams~\cite{Williams1999}. The wind speed and wind direction from each monitoring station are represented as a vector with magnitude $S$ (wind speed) and direction $\theta$ (wind direction). The vector is decomposed into the axial components $X$ (northern wind) and $Y$ (eastern wind):
\begin{equation}X=S \sin \theta,\end{equation}
\begin{equation}Y=S \cos \theta.\end{equation}
Based on these two components, two ArcGIS surface interpolations are created using the spline method. As a result we get two grids: the northern wind $X'$ and the eastern wind $Y'$. The magnitude of the vector, the wind speed $S'$, is computed as:
\begin{equation}S'=\sqrt{X'^2+Y'^2}.\end{equation}
The calculation of the wind direction angle $\theta'$ is more complicated. From geometry, the wind direction is calculated as $\theta' = \tan^{-1}(Y'/X')$. However, the inverse tangent is defined only for values between -90$^{\circ}$ and 90$^{\circ}$, which covers only half of our domain. Therefore, each of the four quadrants of the domain (shown in Figure~\ref{fig_quadrants}) requires its own formula~\cite{Williams1999}:
\begin{equation}Quad~I:\theta' = \tan^{-1}(X'/Y'),\end{equation}
\begin{equation}Quad~II:\theta' = \tan^{-1}(Y'/X')+90^{\circ},\end{equation}
\begin{equation}Quad~III:\theta' = \tan^{-1}(X'/Y')+180^{\circ},\end{equation}
\begin{equation}Quad~IV:\theta' = \tan^{-1}(Y'/X')+270^{\circ}.\end{equation}
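The decomposition and quadrant-aware recomposition can be sketched as follows; this is our simplification, in which `atan2` collapses the four quadrant formulas above into a single call rather than reproducing them case by case:

```python
from math import sin, cos, atan2, radians, degrees

def decompose(speed, direction_deg):
    """Split a wind vector into the components X = S sin(theta) and
    Y = S cos(theta), as in the equations above."""
    theta = radians(direction_deg)
    return speed * sin(theta), speed * cos(theta)

def recompose(x, y):
    """Recover speed and direction from interpolated components; atan2
    handles all four quadrants in one call, yielding angles in [0, 360)."""
    speed = (x ** 2 + y ** 2) ** 0.5
    direction = degrees(atan2(x, y)) % 360.0
    return speed, direction

# Round trip through a quadrant-III direction:
s, d = recompose(*decompose(5.0, 225.0))  # s = 5.0, d = 225.0
```

Interpolating the components and only then recomposing is what makes the spline surfaces valid for the nonlinear direction attribute.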
\begin{figure}[!t]
\centering
\includegraphics[width=2.5in, height = 2.5in]{Fig10.eps}
\caption{Four quadrants defined by the signs of values of $X'$ (northern wind) and $Y'$ (eastern wind)}
\label{fig_quadrants}
\end{figure}
As a result we get interpolated values of wind speed and wind direction for each point of the studied space.
\subsection{Data Uncertainty}
The dispersion of a pollutant within its distribution region is not uniform; intuitively, the concentration near a chimney is higher than at the border of the dispersion region. Furthermore, pollutants are subject to decay and deposition processes. In other words, people living near an emitting facility are more likely to be exposed to high levels of pollutants than people who live kilometers away from it. Therefore, the presence of a chemical at a given point is uncertain, and the probability of detecting it decreases with the distance from the point to the emission source. For example, in Figure~\ref{fig_grid_c} the probability of detecting $A$ at point $gp_1$ is lower than at point $gp_2$.
Various functions can be used to model how the pollutant presence probability at a given point depends on the distance to the emitting facility. For instance, with a categorical function (Figure~\ref{fig_uncertain_func_a}), we assign probabilities according to distance ranges, e.g., 1.0 for 0-2 km from the facility, 0.75 for 2-4 km, 0.50 for 4-6 km, etc. Another example is a linear function (Figure~\ref{fig_uncertain_func_b}), which can be represented as $1 - x'/x$, where $x'$ is the distance from a given point to the facility and $x$ is the maximal distance to which the pollutant distributes. In this work we use a third example, the curve function (Figure~\ref{fig_uncertain_func_c}), derived from the cosine: $p = \frac{\cos{\pi x}}{2}+0.5$, where $x$ here denotes the distance normalized by the maximal dispersion distance. With this function the probability decreases slowly at first, then declines almost linearly, and finally slows down again near the buffer edge. We believe that the curve function models real-life pollutant behavior more accurately than the other two options.
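The three candidate fall-off functions can be sketched as follows; the categorical bins follow the example in the text, while the continuation of the bins below 0 and the normalization of distance to $[0,1]$ are our conventions:

```python
from math import cos, pi

def categorical_km(d):
    """Step function from the text's example: 1.0 for 0-2 km,
    0.75 for 2-4 km, 0.50 for 4-6 km, and so on (floored at 0)."""
    return max(0.0, 1.0 - 0.25 * int(d // 2))

def linear(x):
    """p = 1 - x'/x over the normalized distance x in [0, 1]."""
    return 1.0 - x

def curve(x):
    """Cosine fall-off used in this work: slow near the source, roughly
    linear in the middle, slow again near the buffer edge."""
    return cos(pi * x) / 2 + 0.5
```

All three give certainty at the source; the linear and curve functions reach zero exactly at the buffer boundary, and the curve crosses the linear function at the midpoint.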
\begin{figure*}[!t]
\centering
\begin{minipage}{2.0in}
\subfigure[Categorical function.]
{\includegraphics[width=2.0in]{Fig11_a.eps}
\label{fig_uncertain_func_a}}
\end{minipage}\hspace{0.3in}
\begin{minipage}{2.0in}
\subfigure[Linear function.]
{\includegraphics[width=2.0in]{Fig11_b.eps}
\label{fig_uncertain_func_b}}
\end{minipage}\hspace{0.3in}
\begin{minipage}{2.0in}
\subfigure[Curve function ($\frac{\cos{\pi x}}{2}+0.5$).]
{\includegraphics[width=2.0in]{Fig11_c.eps}
\label{fig_uncertain_func_c}}
\end{minipage}
\centering
\caption{Examples of functions that can be used to represent the dependency of the pollutant presence probability on the distance to the source point}
\label{fig_uncertain_func}
\end{figure*}
These three examples are only some of the possible curves that can be used to model pollutant distribution within buffer zones. Other functions could be used to improve the accuracy of the results, and they could depend on the type of chemical; for example, denser chemicals may settle in a region closer to the emitting facility, with only small amounts reaching medium and far distances.
In the case of datasets which contain other types of spatial objects (i.e., lines and polygons), grid points intersecting a line or located inside a polygon are assigned a feature presence probability of one for the corresponding feature; see point $gp_3$ in Figure~\ref{fig_uncertainty} for an example. The uncertainty for grid points positioned in buffer zones, on the other hand, depends on the shortest distance from the point to the line or polygon. Points $gp_1$ and $gp_2$ in Figure~\ref{fig_uncertainty} are located in the buffer zones of line $L$ and polygon $P$, respectively, and the existential probabilities at these points are computed using the shortest distances to the respective spatial objects.
\begin{figure}[!t]
\centering
\includegraphics[width=4.5in]{Fig12.eps}
\caption{Defining a distance to an object in datasets with polygons and lines}
\label{fig_uncertainty}
\end{figure}
\section{Experimental Evaluation}
We evaluate our method on both real and synthetic datasets to measure its performance and effectiveness. In the following, we describe our datasets, present the results, and analyze them with respect to aspects such as grid granularity and computational complexity. We also compare our approach with a baseline approach to assess how effective it is in identifying not only highly prevalent rules but also statistically significant ones.
\subsection{Real Data}
We conduct experiments on two real datasets which contain data on pollutant emission points and childhood cancer cases in the provinces of Alberta and Manitoba, Canada. The sources of the databases are the National Pollutant Release Inventory (NPRI; the data is publicly available)~\cite{npri} and the provincial cancer registries. The NPRI, established in 1992, is the national Pollutant Release and Transfer Register (PRTR) of Canada. The information on pollutants is taken for the period between 2002 and 2007 and contains the type of a chemical, the location of release, and the average amount released per year. In order to get reliable results, chemicals that had been emitted from fewer than three facilities are excluded from the dataset. There are 47 different chemicals and 1,442 pollutant emission points in Alberta, and 26 different chemicals and 545 pollutant emission points in Manitoba; several chemicals might be released from the same location. The numbers of cancer points (the centroids of the postal code regions where children lived when cancer was first diagnosed) are 1,254 and 520 in Alberta and Manitoba, respectively.
Environmental pollutants are suspected to be one of the causes of cancer in children. However, other factors could lead to this disease (genetic susceptibility, parental exposure to chemicals or radiation, parental medical conditions, etc.). Considering this fact, we attempt to find ``correlations" rather than ``causalities". The results are currently under careful evaluation by the domain experts in our interdisciplinary team. It suffices to mention, however, that some surprising rules were discovered, indicating significant associations between childhood cancers and groups of chemicals not categorized individually as carcinogens. Additionally, we identified rules co-locating cancer cases with a pair of chemicals, one a known carcinogen and the other with no carcinogenic properties. Since the carcinogen alone did not correlate with cancer, the occurrence of such pairs of chemicals suggests a ``catalyzing" effect by the non-carcinogen.
We are interested in co-location rules of the form $Pol \to Cancer$, where $Pol$ is a set of pollutant features and $Cancer$ is a cancer feature. The expected confidence is used as the prevalence measure. The distance between points in the grid is 1 km; the effect of changing the grid granularity is also evaluated. The number of simulations (randomized datasets) for the statistical test is set to 99, so that with the observed data the denominator in Equation~(\ref{eq_pi_value}) is 100. The level of significance $\alpha$ is set to 0.05. The size of the antecedent of candidate rules is up to three; larger candidates have low support values because the average number of features in a transaction in the experiment is at most 3.
The randomized datasets that are used in the statistical test are generated as follows. Pollutant emitting facilities are not located randomly: they are usually close to regions with high population density and absent elsewhere (e.g., in protected areas). Based on this observation, we do not randomize pollutant points all over the region, but instead keep the locations of facilities and randomize the pollutants within these positions. Among the 1,254 cancer points in Alberta, 1,134 were located within ``urban'' municipalities (cities, towns, villages, etc.) and the rest were diagnosed in ``rural'' areas. For the cancer cases in Manitoba, 400 out of 520 were located in urban areas, while the remaining 120 were in rural areas. In order to keep the randomized cancer occurrence rate close to the real-world rate, we keep the number of cancer feature instances positioned in ``urban''/``rural'' regions the same as in the real dataset. The number of random cancer cases placed within each ``urban'' municipality is directly proportional to the number of children counted in the 2006 census~\cite{census}. The remaining cancer cases are randomly placed in the rural regions of the maps of Alberta and Manitoba. The detailed information on pollutants and cancers in Alberta and Manitoba is displayed in Table~\ref{table_information}.
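The randomization procedure just described can be sketched as follows. This is a simplified, hypothetical illustration (the function and parameter names, and the \texttt{rural\_sampler} callback, are assumptions, not the authors' code): pollutant labels are shuffled over the fixed facility locations, and the urban/rural split of cancer cases is preserved, with urban cases allocated proportionally to census child counts.

```python
import random

def randomize_dataset(facility_locations, pollutant_labels,
                      urban_child_counts, n_urban, n_rural, rural_sampler):
    """One randomized dataset: pollutant labels are shuffled over the
    fixed facility locations; the urban/rural split of cancer cases is
    kept as in the real data, with urban cases allocated proportionally
    to per-municipality child counts."""
    labels = pollutant_labels[:]
    random.shuffle(labels)
    pollutants = list(zip(facility_locations, labels))

    municipalities = list(urban_child_counts)
    weights = [urban_child_counts[m] for m in municipalities]
    cancers = [('urban', random.choices(municipalities, weights=weights)[0])
               for _ in range(n_urban)]
    cancers += [('rural', rural_sampler()) for _ in range(n_rural)]
    return pollutants, cancers
```

Repeating this 99 times yields the randomized datasets against which the observed prevalence values are compared.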
\begin{table}[!t]
\caption{Detailed pollutants and cancers information in Alberta and Manitoba}
\label{table_information}
\centering
\begin{tabular}{|c|c|c|c|c|}\hline
Dataset & $\#$Pollutants & $\#$Cancers & $\#$Cancers (urban) & $\#$Cancers (rural)\\\hline
Alberta & 1,455 & 1,254 & 1,120 & 134 \\ \hline
Manitoba & 545 & 520 & 400 & 120 \\ \hline
\end{tabular}
\end{table}
\subsubsection{Comparison with Certain Data Method}
We compare the results of our uncertain data method (UM) with the results of a method using certain deterministic data (CM), where existential probabilities are not stored as a part of the transactional database. As an interestingness measure in the CM we use the confidence $Conf(Pol \to Cancer)$, which is the fraction of transactions containing all features in $Pol$ that also include the cancer feature:
\begin{equation}Conf(Pol \to Cancer)=\frac{Sup(Pol \cup Cancer)}{Sup(Pol)}.\end{equation}
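A minimal sketch of how this confidence can be computed from the derived transactions is given below (illustrative code with hypothetical names; the actual implementation may differ):

```python
def support(transactions, itemset):
    """Number of transactions containing every feature in the itemset."""
    items = set(itemset)
    return sum(1 for t in transactions if items <= set(t))

def confidence(transactions, pol, cancer='Cancer'):
    """Conf(Pol -> Cancer) = Sup(Pol + Cancer) / Sup(Pol)."""
    sup_pol = support(transactions, pol)
    if sup_pol == 0:
        return 0.0
    return support(transactions, list(pol) + [cancer]) / sup_pol

transactions = [['A', 'B', 'Cancer'], ['A', 'B'], ['A', 'Cancer'], ['C']]
print(confidence(transactions, ['A', 'B']))  # 0.5
```

In the UM, the same ratio is computed with expected supports over the existential probabilities instead of plain transaction counts.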
In Alberta, the number of significant co-location rules detected by UM and CM together is 496; of these, 204 rules are found by both methods, 278 rules are identified only by UM, and 14 rules only by CM. In Manitoba, the numbers of significant co-location rules detected by UM and CM are 362 and 263, respectively, and the two methods have 232 common rules. From these two datasets, we can see that the UM method covers most of the significant co-location rules detected by the CM method. Moreover, UM captures more statistically significant rules that cannot be mined by CM otherwise. The difference in the results can be explained by the fact that our approach deals with probabilities of feature presence in transactions rather than with deterministic values. It considers not only the presence of a feature in a transaction but also the distances from grid points to pollutant features and cancer cases. Grid points that are situated closer to spatial instances are given more weight than points located relatively farther away.
Some of the co-location rules discovered by the uncertain method have a low level of $ExpSup(Pol)$ or $ExpSup(Pol \cup Cancer)$. For example, 348 out of 482 rules in the Alberta dataset and 193 out of 362 rules in the Manitoba dataset have $ExpSup(Pol \cup Cancer)$ less than 1. This indicates either a low number of supporting transactions or relatively long distances from the grid points. Although these rules have a low $p$-value ($\le 0.05$), i.e., their expected confidence level is higher than in most randomized datasets, domain experts might not be interested in them. In that case, a threshold on the expected support might be introduced to the model for the detection of significant co-location rules. This threshold should not be set too high, so that the algorithm does not miss interesting co-location rules or patterns with rare features.
\subsubsection{Comparison with Support/Confidence based Method}
Comparing our approach with other methods existing in the literature is not straightforward, mainly because most of the existing methods expect boolean spatial features, are not capable of handling uncertainty, require reference features to be defined, or rely on neighborhood relationships based on a single threshold value. Our proposed approach attempts to address these limitations, and the way we model spatial data and features to represent real-world conditions is hard to capture with these existing models. Hence, we implement a na{\"i}ve co-location mining method that is based on the traditional support/confidence framework but still takes advantage of our underlying data modelling framework, namely the grid-based transactionization and the uncertainty information of features.
To compare the effectiveness and robustness of our approach, the implemented na{\"i}ve approach acts as a baseline method which obtains a set of significant rules based on the traditional support/confidence framework. This na{\"i}ve method uses our Algorithm~\ref{alg_transaction} to compute the transactions, and then computes the prevalence measure for each candidate pattern as in Algorithm~\ref{alg_main} (see steps 4-9). Since we are interested in mining association rules, we use $ExpConf(X \to Y)$ as the prevalence measure. Once the expected confidence is computed for all candidate patterns, we calculate the average expected confidence over all candidate rules and use it as a cut-off threshold to prune the candidates. In the case of the Certain Method, this pruning step removed 12,239 rules from a candidate set of 17,343, leaving 5,104 rules as important; the expected confidence threshold was $62\%$. In the case of the Uncertain Method, the pruning step removed 12,326 rules from a candidate set of 17,343, leaving 5,017 rules as important; the expected confidence threshold was $46\%$.
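The average-confidence cut-off used by this baseline can be sketched as follows (illustrative only; names are hypothetical):

```python
def prune_by_average(rule_confidences):
    """Prune candidate rules whose (expected) confidence is below the
    average over all candidates, which serves as the cut-off threshold."""
    cutoff = sum(rule_confidences.values()) / len(rule_confidences)
    kept = {r: c for r, c in rule_confidences.items() if c >= cutoff}
    return kept, cutoff

candidates = {'r1': 0.9, 'r2': 0.1, 'r3': 0.5}
kept, cutoff = prune_by_average(candidates)  # cutoff 0.5, keeps r1 and r3
```

The surviving rules are then passed to the statistical significance test.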
We tested the statistical significance of these rules using the statistical tests explained in Section~\ref{ad} (see steps 10-22 in Algorithm~\ref{alg_main}) and found that 467 of the rules obtained by both the Certain and Uncertain methods are statistically significant.
In the previous discussion we explained that CM and UM together identified 496 rules as statistically significant. We observed that all 467 rules detected by the expected confidence based na{\"i}ve approach implemented above are also obtained by our approach. In addition, our approach was able to identify 29 more rules as statistically significant. This result emphasizes the key advantage of our approach: it is not only capable of detecting highly prevalent rules, but also of detecting rules which may not be highly prevalent but are statistically significant.
\subsubsection{Effect of Randomization}
As mentioned above, in the randomized datasets we take an intuitively reasonable strategy by randomizing both the pollutant emitting facilities and the cancer cases. We randomize the pollutant emitting facilities in the areas with a high population density, and randomize the cancer cases according to the population distribution in ``urban'' and ``rural'' areas. Moreover, the number of cancer cases in ``urban''/``rural'' regions is kept the same as in the real dataset. We compare this randomization strategy with two other randomization methods: randomizing the pollutants only and randomizing the cancer cases only.
First, in the Alberta dataset, as discussed above, when we randomize both pollutants and cancers we get 482 significant co-location rules with UM. We then fix the cancer cases as in the real dataset and only randomize the pollutant emitting facilities among the high-density population regions. Up to 710 rules are reported under this setting. However, when we fix the pollutant facilities and only randomize the cancer cases, only 127 rules are detected.
In the Manitoba dataset, we find 362 significant rules with both pollutant facilities and cancer cases randomized. 243 and 314 significant rules are reported when we only randomize the pollutant facilities and only randomize the cancer cases, respectively.
All three randomization strategies ensure that, in the randomized datasets, the pollutant emitting facilities and the cancer cases are independent of each other. Which randomization strategy is best is still under investigation; all the patterns and rules discovered with the different strategies are being manually evaluated by domain experts, and this detailed assessment will provide insights into the best choice. In our current experiments, we take the first strategy, randomizing both the pollutant facilities and the cancer cases, which seems reasonable because it fairly randomizes all types of spatial data points. In the future, based on the suggestions from our domain experts, we might choose a different randomization strategy.
\subsubsection{Effect of the Grid Granularity}
As already mentioned, the granularity of the grid (the distance between grid points, which determines the number of points per unit of space) is crucial for the accuracy of the results. A large distance between grid points may lead to the omission of some regions of the space, especially when the average buffer distance is short. On the other hand, when the distance between points is too small, more transactions are derived by the algorithm. Decreasing the distance by a factor of two increases the transaction set size approximately four times, and therefore more computation needs to be done during the statistical test step. The grid resolution should thus be set depending on the average buffer size.
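The roughly quadratic growth of the transaction set with the grid resolution can be seen from a short sketch counting grid points over a rectangular study region (illustrative only; the real grids are laid over irregular province maps):

```python
def grid_points(width_km, height_km, spacing_km):
    """Number of grid points covering a rectangular region at a given
    spacing; halving the spacing roughly quadruples this count."""
    nx = int(width_km // spacing_km) + 1
    ny = int(height_km // spacing_km) + 1
    return nx * ny

for g in (2.0, 1.0, 0.5):
    print(g, grid_points(100, 100, g))  # 2601, 10201, 40401
```

Since each grid point generates at most one transaction, the transaction set, and with it the cost of every simulation run, grows at the same rate.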
For the Alberta dataset, in addition to the grid with a distance of 1 km between its points, we conduct two experiments with 2 km and 0.5 km grids. As mentioned above, the algorithm reports 482 significant co-location rules with the 1 km grid. With the 2 km granularity, 547 rules are detected, of which 335 are present in both the 1 km and 2 km result sets, and 212 are unique to the 2 km grid. The difference means that a 2 km distance between grid points is too long for our dataset, where the average buffer size is 7.3 km; its accuracy is comparatively low due to the smaller number of transactions, which is not sufficient to capture intersections of instance buffers accurately. The 0.5 km granularity grid reports 472 co-location rules as significant. Of these, 426 are found with both the 1 km and 0.5 km grids, and 46 rules are identified only by the 0.5 km grid. As we can see, the difference between the 0.5 km and 1 km result sets is smaller than that between the 1 km and 2 km grids. As the distance between points in a grid decreases, the accuracy of the results improves.
We also conduct three sets of experiments with different grid granularities, 0.5 km, 1 km and 2 km, on the Manitoba dataset. With the 1 km grid granularity, we detect 362 significant co-location rules. When the grid distance is increased to 2 km, 280 rules are reported, and a large portion of them (271 rules) also appear in the 1 km result set. 364 significant co-location rules are found with the 0.5 km grid granularity; among these, 356 rules are found with both the 1 km and 0.5 km granularities. As can be observed, the difference between the 1 km and 0.5 km grid distances is very small, and the 1 km grid covers most of the significant rules detected with both the 2 km and 0.5 km grids. Therefore, in our experiments we choose the 1 km grid distance without much loss of accuracy or efficiency. The chemicals emitted by industry in Alberta and Manitoba are not the same; indeed, they overlap only slightly. Therefore, we did not compare the sets of significant co-location rules discovered in the two datasets.
A closer analysis of the rules obtained by the CM method of our algorithm on the Alberta dataset under the 0.5, 1 and 2 km grid granularities provides some interesting insights on choosing a better granularity measure. We conduct this analysis focusing on two aspects of the performance: accuracy, in terms of the $p$-value, and efficiency, in terms of the execution time. Table~\ref{gridgrantime} shows how the number of transactions, and with it the execution time, grows as the grid granularity is decreased. Clearly, a small grid granularity leads to a huge number of transactions and thus a longer execution time.
\begin{table}[t!]
\centering
\caption{Execution time, number of transactions and number of rules with respect to different grid granularity measures}
\label{gridgrantime}
\begin{tabular}{|l|l|l|l|}
\hline
Grid Granularity (km) & \begin{tabular}[c]{@{}l@{}}Number of\\ Transactions\end{tabular} & \begin{tabular}[c]{@{}l@{}}Execution\\ Time\end{tabular} & Rules \\ \hline
0.5 & 443,653 & 286 min 27 s & 199 \\ \hline
1 & 129,151 & 110 min 33 s & 218 \\ \hline
2 & 32,255 & 57 min 7 s & 273 \\ \hline
\end{tabular}
\end{table}
On the other hand, as depicted in Figure~\ref{fig_venn}, the rule set detected with the 1 km grid granularity shares all of its rules except one with the rule sets detected under the 2 km and 0.5 km granularities. Of these, the rules shared with the 0.5 km rule set have the lowest average p-value (0.03), suggesting that they comply least with the null hypothesis. This indicates that the rules commonly detected under both the 1 km and 0.5 km granularities are better than any other rule set. Figure~\ref{fig_venn} shows that the rule set detected under the 1 km granularity captures most of the rules with better p-values from both the 2 km and 0.5 km rule sets. This suggests that 1 km is a good choice of grid granularity, because it captures most of the rules with better p-values while maintaining an efficient program execution. It can also be noticed that the three rule sets from the different grid granularity measures share 126 rules. This common rule set is consistent irrespective of the grid granularity and shows better p-values, and hence might be valuable for further studies.
\begin{figure}[!t]
\centering
\includegraphics[scale = 0.6]{Fig13.eps}
\caption{Number of rules detected under various grid granularity measures (i.e. 0.5, 1 and 2 km) with the CM method in the Alberta dataset and the average p-value of those rules}
\label{fig_venn}
\end{figure}
\subsubsection{Effect of The Filtering Technique}
In the Alberta dataset, the number of candidate co-location rules in the experiment (i.e., not yet determined as significant) is 17,343 (co-locations with an antecedent size of up to three). With the na\"{\i}ve approach, all candidates are checked in each of the 99 simulation runs, which results in a large amount of computation. After the exclusion of rules with zero-level confidence, 10,125 candidates remain, which is still a large set. Figure~\ref{alberta_filtering} shows that the usage of the second filtering method (the exclusion of candidates whose $p$-value passed $\alpha$ during the evaluation of the randomized datasets) considerably reduces the amount of computation. In the first simulation run the confidence value is computed for 10,125 rules, while 3,098 candidate rules are checked in the 20th simulation, and only 482 candidates are evaluated in the last run.
In the Manitoba dataset, the total number of candidate co-location rules with up to three pollutants is 2,951. Only 665 rules are left if we exclude the zero-level confidence co-location rules, which means that this na\"{\i}ve exclusion already prunes a large number of unnecessary rules. Figure~\ref{manitoba_filtering} shows the effect of the second filtering method. In the first simulation run 665 rules are checked; the number of candidate rules drops to 408 by the 10th simulation (a reduction of roughly one third), and in the last simulation only 362 candidates are checked.
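The second filtering technique can be sketched as follows. This is a simplified illustration under the assumption that a candidate is dropped as soon as its final $p$-value is guaranteed to exceed $\alpha$; \texttt{simulate\_once} is a hypothetical callback returning the prevalence of each still-active candidate in one randomized dataset:

```python
def simulate_with_filtering(observed, simulate_once, runs=99, alpha=0.05):
    """Evaluate candidate rules over randomized runs, dropping a
    candidate once its p-value can no longer fall below alpha, so the
    remaining simulations skip it."""
    exceed = {c: 0 for c in observed}   # runs with prevalence >= observed
    active = set(observed)
    for _ in range(runs):
        sim = simulate_once(active)     # prevalence of each active rule
        for c in list(active):
            if sim[c] >= observed[c]:
                exceed[c] += 1
                # lower bound on the final p-value already exceeds alpha
                if (exceed[c] + 1) / (runs + 1) > alpha:
                    active.discard(c)
    return {c: (exceed[c] + 1) / (runs + 1) for c in active}
```

Dropped candidates need no further prevalence computations, which is what shrinks the per-run workload from 10,125 rules down to 482 in the Alberta experiment.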
\begin{figure*}[!t]
\centering
\begin{minipage}{2.4in}
\subfigure[Filtering in Alberta dataset]
{\includegraphics[width=2.4in]{Fig14_a.eps}
\label{alberta_filtering}}
\end{minipage}\hspace{-0.2in}
\begin{minipage}{2.4in}
\subfigure[Filtering in Manitoba dataset]
{\includegraphics[width=2.4in]{Fig14_b.eps}
\label{manitoba_filtering}}
\end{minipage}
\centering
\caption{The number of candidate rules evaluated in each simulation run with filtering technique}
\label{filtering}
\end{figure*}
\subsubsection{Effect of the Number of Simulation Runs on the Run Time}
As explained in Section~\ref{cc}, Algorithm~\ref{alg_main} has a run time complexity of $O(n+m)+O(k_3|f|^d)+O(Rk_3|f|^d)$, where $n$ is the number of chemical emission points, $m$ is the number of cancer cases, $R$ is the number of simulation runs, $k_3$ is the number of grid points, $|f|$ is the size of the feature set $f$ and $d$ is the size of the largest item set. Accordingly, if all the variables except $R$ are fixed and considered constants, the algorithm should have a linear time complexity with respect to the number of simulation runs $R$. Figure~\ref{fig:compcomp1} and Figure~\ref{fig:compcomp2} present the run times collected for both the certain and uncertain methods on the Alberta dataset. A simple regression analysis reveals that the best line fit for the run time of the CM version of Algorithm~\ref{alg_main} is $y=\frac{3}{8}x+\frac{105}{2}$, as depicted in Figure~\ref{fig:compcomp1}. A similar regression analysis reveals that the best line fit for the run time of the UM version of Algorithm~\ref{alg_main} is $y=0.87x+240$, as depicted in Figure~\ref{fig:compcomp2}. This verifies the result of the previous analysis in Section~\ref{cc}: the run time has a linear complexity with respect to the number of simulation runs.
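Such a fit can be obtained with an ordinary least-squares sketch like the one below (illustrative only; the data points here are generated from the reported CM line rather than measured run times):

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit of y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

# Points generated from the reported CM fit y = (3/8)x + 105/2:
xs = [10, 20, 40, 60, 80, 99]
ys = [3 / 8 * x + 105 / 2 for x in xs]
a, b = fit_line(xs, ys)   # recovers a = 0.375, b = 52.5
```

A slope close to the per-run cost and an intercept close to the fixed set-up cost are exactly what the $O(n+m)+O(k_3|f|^d)+O(Rk_3|f|^d)$ analysis predicts.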
\begin{figure*}[!t]
\centering
\begin{minipage}{2.4in}
\label{fig:compcomp}
\subfigure[Certain Method]
{\includegraphics[width=2.4in]{Fig15_a.eps}
\label{fig:compcomp1}}
\end{minipage}\hspace{-0.2in}
\begin{minipage}{2.4in}
\subfigure[Uncertain Method]
{\includegraphics[width=2.4in]{Fig15_b.eps}
\label{fig:compcomp2}}
\end{minipage}
\centering
\caption{Plots illustrating the execution time relative to the number of simulation runs}
\label{fig:exectime}
\end{figure*}
\subsection{Synthetic Data}
We conduct experiments on synthetic datasets to demonstrate that our framework can discover a correct set of co-location rules. In addition, we show that our transaction-based method takes into account spatial context and information.
\subsubsection{Discovery of Co-Location Rules}
In order to evaluate our algorithm on synthetic data, we generate a dataset that emulates the real-world information. Similar to the real dataset, it contains point features that appear in the antecedent part of co-location rules (features $C$) and a disease feature $D$ as a consequent. The study region is a 100$\times$100 unit square. The buffer size is 1 unit. We simulate 7 chemicals by using $C=\{C_{1},\ldots,C_{7}\}$, and $D$ is simulated as the cancer. The features $C_1$ and $C_2$ have 20 instances each, and they are associated with each other. The features $C_3$ and $C_4$ have 30 points each; 20 of them are associated with each other, while the remaining 10 instances are placed randomly. These two pairs represent co-located chemicals. The disease feature $D$ is positively associated with the sets $C_1 \cup C_2$ and $C_3 \cup C_4$, and with 30 out of 40 instances of feature $C_5$. It has no association with the feature $C_6$ (30 instances), and is negatively correlated with $C_7$ (30 points), so that no pair of instances of $D$ and $C_7$ are neighbors. In addition, there are 30 disease cases spread randomly. We look for co-location rules of the form $C \to D$. In the 99 randomized datasets, all eight features are distributed randomly with no association (neither positive nor negative) with each other.
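The placement of associated instances can be sketched as follows (a hypothetical simplification: each associated instance of the second feature is placed within one buffer size of an instance of the first, so that their buffers are guaranteed to overlap):

```python
import random

def place_associated(n, region=100.0, max_offset=1.0):
    """n instance pairs of two associated features: the second instance
    is placed within max_offset (one buffer size) of the first, so the
    two buffers always intersect."""
    pairs = []
    for _ in range(n):
        x, y = random.uniform(0, region), random.uniform(0, region)
        dx = random.uniform(-max_offset, max_offset)
        dy = random.uniform(-max_offset, max_offset)
        pairs.append(((x, y), (x + dx, y + dy)))
    return pairs
```

Unassociated instances are simply drawn uniformly over the study region, and a negative association (as with $C_7$) can be enforced by rejecting draws that land within the neighbourhood of any $D$ instance.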
The significant co-location rules with $p$-value $\le 0.05$ are shown in Table~\ref{table_synt}. As expected, rules 5, 6, and 11 are reported as significant because their $C$ features have a strong correlation with feature $D$. Rules 1-4 are also detected because either all ($C_1$, $C_2$) or some ($C_3$, $C_4$) of their instances are associated with $D$. Rules with features $C_6$ and $C_7$ are not reported because of their zero and negative association~\cite{Antonie2014negative,Li2015associative} with the disease feature. The remaining co-location rules (7-10, 12-14) are detected due to their random correlation with features associated with feature $D$. However, they all have very low $ExpSup(C+D)$ values and can be pruned if a threshold on $ExpSup$ is used, as discussed in the experiments with the real data.
\begin{table}[!t]
\renewcommand{\arraystretch}{1.3}
\caption{Co-location rules detected in synthetic data. $ExpSup$ is the value of the expected support of patterns of the form $C+D$, where $C$ is the set of cause features and $D$ is the disease feature}
\label{table_synt}
\centering
\begin{tabular}{|r|l|r|r|}
\hline
N & Co-location Rule & $ExpSup$ & $ExpConf$\\
\hline
1 & $C_1 \to D$ & 763.1 & 0.41\\
\hline
2 & $C_2 \to D$ & 765.8 & 0.42\\
\hline
3 & $C_3 \to D$ & 717.1 & 0.26\\
\hline
4 & $C_4 \to D$ & 807.8 & 0.30\\
\hline
5 & $C_5 \to D$ & 1,256.6 & 0.34\\
\hline
6 & $C_1 + C_2 \to D$ & 432.8 & 0.50\\
\hline
7 & $C_1 + C_4 \to D$ & $1.0 \cdot 10^{-3}$ & 0.82\\
\hline
8 & $C_1 + C_5 \to D$ & 10.6 & 0.49\\
\hline
9 & $C_2 + C_4 \to D$ & 0.4 & 0.49\\
\hline
10 & $C_2 + C_5 \to D$ & 14.4 & 0.44\\
\hline
11 & $C_3 + C_4 \to D$ & 390.5 & 0.53\\
\hline
12 & $ C_5 + C_6 \to D$ & 2.8 & 0.08\\
\hline
13 & $ C_1 + C_2 + C_4 \to D$ & $4.8 \cdot 10^{-4}$ & 0.83\\
\hline
14 & $ C_1 + C_2 + C_5 \to D$ & 7.9 & 0.51\\
\hline
\end{tabular}
\end{table}
The experiment on synthetic data shows that our approach finds co-location rules in which features in the antecedent part are co-located with the feature in the consequent part. A threshold with a relatively low value can help to exclude rules with noise features.
\subsection{Distance between Features}
In this experiment we evaluate the effect of the average distance between features on the expected support. Recall that one of the advantages of our algorithm is that it takes into account the distance between spatial objects, so two objects located close to each other are represented in more transactions than a pair of objects situated farther apart (Figure~\ref{fig_prox_remote}). Let us consider two scenarios: 1) objects belonging to two distinct features are located on average very close to each other, and 2) they are situated at the farthest possible distance at which they are still considered to have a neighbourhood relationship. Most previous approaches assign the same prevalence measure value in both cases as long as a neighbourhood relationship is kept. Obviously, this is not accurate; the prevalence measure should be higher in the first situation. With our approach, in contrast, in the first case the features are included in more transactions and with higher existential probabilities, which leads to a higher prevalence measure than in the second case.
\begin{table}[!t]
\renewcommand{\arraystretch}{1.3}
\caption{The average expected support for ranges of an average distance between two spatial features}
\label{table_synt_remote}
\centering
\begin{tabular}{|r|l|r|}
\hline
N & Range & Average $ExpSup$\\
\hline
1 & [0.0, 0.2) & 1,558.9\\
\hline
2 & [0.2, 0.4) & 1,355.3\\
\hline
3 & [0.4, 0.6) & 1,017.1\\
\hline
4 & [0.6, 0.8) & 649.9\\
\hline
5 & [0.8, 1.0) & 353.3\\
\hline
6 & [1.0, 1.2) & 155.9\\
\hline
7 & [1.2, 1.4) & 52.8\\
\hline
8 & [1.4, 1.6) & 16.6\\
\hline
9 & [1.6, 1.8) & 8.3\\
\hline
10 & [1.8, 2.0) & 5.7\\
\hline
\end{tabular}
\end{table}
For this experiment we create synthetic datasets with two spatial features $f_1$ and $f_2$. The study region is a 100$\times$100 unit square. The buffer size is 1 unit. In each dataset, the features have 30 instances each. We randomly place the instances of feature $f_1$ in the study region. One instance of feature $f_2$ is placed at a varying distance $d$ from an instance of $f_1$. The distance $d$ between instances of the two features is taken randomly from ten ranges \{[0.0, 0.2), [0.2, 0.4), ..., [1.8, 2.0)\} (given in units). The first range [0.0, 0.2) is for the scenario when the features are located very close to each other on average. The last range [1.8, 2.0) simulates a situation when the intersection of each pair of instance buffers is very small. The expected support of the pattern $(f_1 \cup f_2)$ is calculated and averaged over 100 synthetic datasets for each of the ten ranges.
The results are presented in Table~\ref{table_synt_remote}. As can be observed, the expected support rapidly decreases with the increase in the average distance between instances of features $f_1$ and $f_2$. As expected, the range [0.0, 0.2) gets the highest value of the expected support, and the range [1.8, 2.0) has the lowest prevalence value. While a pattern with these features would be assigned the same prevalence measure value in all ten synthetic datasets by most previous algorithms, our transaction-based approach takes into account the actual spatial information and the relative proximity or remoteness of the features from each other.
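The monotone decrease in Table~\ref{table_synt_remote} mirrors the shrinking overlap of two circular buffers as the distance between their centres grows: for two equal circles of radius $r$ whose centres are $d$ apart, the intersection (lens) area is $2r^2\cos^{-1}\!\big(\frac{d}{2r}\big)-\frac{d}{2}\sqrt{4r^2-d^2}$. The following sketch (illustrative only, not part of our algorithm) evaluates this quantity for unit buffers:

```python
import math

def buffer_overlap(d, r=1.0):
    """Area of intersection of two circular buffers of radius r whose
    centres are d apart (0 once the buffers no longer intersect)."""
    if d >= 2 * r:
        return 0.0
    return (2 * r * r * math.acos(d / (2 * r))
            - (d / 2) * math.sqrt(4 * r * r - d * d))

for d in (0.1, 0.5, 1.0, 1.5, 1.9):
    print(f"{d:.1f}  {buffer_overlap(d):.3f}")
```

The overlap, and hence the number of grid points that fall into both buffers, decays steadily with $d$, which is exactly the behaviour the averaged expected support exhibits across the ten distance ranges.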
\section{Conclusion}
Co-location pattern and rule mining is one of the tasks of spatial data mining. Discovery of co-location patterns and rules can be useful in many projects and applications and may lead to the discovery of new knowledge in various domains. In this paper we propose a new co-location mining framework which combines classical co-location mining, and uncertain frequent pattern and association rule mining.
The approach was motivated by a real-world application: detecting possible associations between pollutant emission points and childhood cancer cases. We take into account some of the limitations that can prevent previously proposed approaches from being used in some real-world applications and domains. Our novel transactionization method converts spatial data into a set of transactions by imposing a regular grid over a given map. Each grid point can be seen as a representation of the study region; the features of objects whose buffers contain a grid point form a transaction. In addition, our approach takes the uncertainty of the data into account by storing feature existence probabilities in transactions in order to reflect real scenarios. The probability of a feature's presence in a transaction depends on the distance from the feature instance to the respective grid point. The use of user-defined thresholds on prevalence measures, as in previous algorithms, is replaced by a statistical test which identifies significant co-location patterns and rules that are unlikely to occur only by chance. To decrease computation, filtering techniques are presented which prune candidate patterns and rules that are clearly not significant.
The experiments on two real datasets and on synthetic data show that our approach finds significant co-location patterns and rules. We deploy three different randomization strategies to find significant co-location rules, and the effect of the grid granularity is also evaluated. We also compare the results of our method with a baseline method, which reveals that our method is not only capable of detecting highly prevalent rules, but also of detecting rules which may not be highly prevalent but are statistically significant. The dependence of a prevalence measure value on the average distance between feature instances is shown. The usage of transactions preserves the spatial context and information, such as the relative locations of instance objects and the distances between them. The consideration of feature presence probabilities helps to distinguish the cases when feature instances are situated at different distances from grid transaction points. We also demonstrate that the difference in the results obtained by our uncertain data model and the certain data method can be explained and justified.
The motivating application of this paper has its unique challenges. We examine several factors which affect the dispersion of pollutants in the air. In order to model the chemical distribution more accurately, we use buffer zones whose sizes differ depending on the released amounts. Circular buffers are transformed into elliptical shapes to take into account the wind speed and direction at the locations of the emitting facilities. Finally, we model the uncertainty of a pollutant's presence at transaction points. In addition to pollution, other factors can also cause cancer in children. In this paper we do not intend to find true causalities, but attempt to identify possible associations between pollutants and childhood cancer. The results derived by our algorithm can be useful for domain experts and help in further analysis of pollutant-cancer relationships. The algorithms we propose can also be used to discover co-locations between other diseases and other factors.
\begin{acknowledgements}
This material is supported by Canadian Institutes of Health Research, under grant number REF16629.
\end{acknowledgements}
\section{Introduction}\label{sec:intro}
\subsection{Reaction-diffusion equations}
In the past few decades, the interest in reaction-diffusion equations and systems has increased, both due to their prevalence in many chemical and physical instances, and their mathematical beauty and intricacy.
In general, reaction-diffusion equations describe the behaviour of a physical or chemical system where two fundamental processes are competing against each other: the diffusion of matter in the system, and the chemical/physical reactions that govern it.
Many times in the context of reversible chemical processes one encounters the situation where $n$ different chemical compounds, $\br{\mathcal{U}_i}_{i=1,\dots,n}$, diffuse in a certain medium, while still obeying the reaction
\begin{equation}\nonumber
\alpha_1 \mathcal{U}_1+\dots+\alpha_n\mathcal{U}_n \underset{k_f}{\overset{k_b}{\leftrightarrows}} \beta_1 \mathcal{U}_1+\dots+\beta_n\mathcal{U}_n
\end{equation}
with stoichiometric coefficients $\alpha_i, \beta_i \in \{0\}\cup [1,\infty)$ for $i=1,\ldots, n$, where $k_f>0, k_b>0$ are the reaction rate constants. We would like to mention that it is common in chemistry to only consider stoichiometric coefficients that are natural numbers. By applying {\it mass action kinetics}, the above system is modelled by the following (reaction-diffusion) system for the concentration functions, $u_i(x,t)$, of the compound $\mathcal{U}_i$, which are non-negative functions defined in the domain of the problem, $\Omega \subset \mathbb{R}^N$, and which satisfy a homogeneous Neumann boundary condition on the boundary of that domain:
\begin{equation}\label{eq:general_chemical_rd}
\begin{split}
\partial_t u_i-d_i &\Delta u_i= \pa{\beta_i-\alpha_i}\pa{k_f \prod_{i=1}^n u_i^{\alpha_i}-k_b \prod_{i=1}^n u_i^{\beta_i}},\quad i=1,\dots,n,\; x\in \Omega,\; t>0
\end{split}
\end{equation}
\begin{equation}\label{eq:general_chemical_neumann}
\nabla u_i \cdot \nu = 0,\quad x\in \partial \Omega,\; t>0,
\end{equation}
where $\br{d_i}_{i=1,\dots,n}\geq 0$ are the diffusion coefficients, and $\nu(x)$ is the outward normal vector to $\partial \Omega$ at a given point $x\in \partial \Omega$.
\medskip
Clearly, the difficulty in analysing this system increases greatly with $n$ and with the degree of the polynomial that appears on the right hand side of \eqref{eq:general_chemical_rd}. It is not even clear if there is a global solution to this system, although, as most of the chemical reactions we see in real life do not end up in an explosion, we have a strong intuition that this is indeed the case (see \cite{PS00} for an ``artificial chemical reaction system'' in which the solution blows up in $L^\infty$ norm in finite time despite the dissipation of total mass). The issues one must face become more complicated when possible degeneracies in the diffusion of some of the compounds are considered. Such situations have been reported in several works, such as those modelling optical sensors or biosensors (see e.g. \cite{BIK09,JJMS96,MGM98}). However, one can intuitively guess that due to the interactions between the concentrations, governed by the chemical reactions, an effect of \textit{indirect diffusion} will be observed, i.e. a concentration that does not have self diffusion will nonetheless exhibit diffusion-like behaviour. Showing that this intuition is indeed correct, at least as it manifests itself in the question of convergence to equilibrium, is the goal of the present work.
\subsection{The setting of the problem}
In our work we will focus on a specific type of reaction-diffusion system of the form \eqref{eq:general_chemical_rd} with $n=4$ and
$$\alpha_1=\alpha_2=\beta_3=\beta_4=1,\quad \alpha_3=\alpha_4=\beta_1=\beta_2=0,$$
meaning that the compounds $\mathcal{U}_1$ and $\mathcal{U}_2$ completely change to $\mathcal{U}_3$ and $\mathcal{U}_4$, and vice versa:
\begin{equation}\label{binary_reaction}
\mathcal{U}_1+\mathcal{U}_2 \leftrightarrows \mathcal{U}_3 + \mathcal{U}_4.
\end{equation}
As our focus in this work is on the indirect diffusion effect, we will assume for simplicity that the reaction rate constants are both one, i.e. $k_f = k_b = 1$. This simplification should have no real impact on our results. In this case, the reaction-diffusion system with {\it full diffusion}, i.e. $d_i > 0$ for $i=1,\ldots, 4$, reads as
\begin{equation}\label{eq:sys_full}
\begin{cases}
\partial_t u_1 - d_1\Delta u_1 &= -u_1u_2 + u_3u_4, \quad x\in\Omega, \; t>0,\\
\partial_t u_2 - d_2\Delta u_2 &= -u_1u_2 + u_3u_4, \quad x\in\Omega, \; t>0,\\
\partial_t u_3 - d_3\Delta u_3 &= +u_1u_2 - u_3u_4, \quad x\in\Omega, \; t>0,\\
\partial_t u_4 - d_4\Delta u_4 &= +u_1u_2 - u_3u_4, \quad x\in\Omega, \; t>0,\\
\end{cases}
\end{equation}
together with the homogeneous Neumann boundary conditions
\begin{equation}\label{sys_bc}
\nabla u_i \cdot \nu = 0, \quad \text{ for all } \quad i=1,\ldots,4\quad\text{ and } x\in\partial\Omega, \; t>0,
\end{equation}
and non-negative initial data
\begin{equation}\label{sys_id}
u_i(x,0) = u_{i,0}(x) \geq 0 \quad \text{ for all } \quad i=1,\ldots, 4, \; x\in\Omega.
\end{equation}
The key feature of the system we will focus on is the fact that one of the compounds, say $\mathcal{U}_4$ without loss of generality, has no self diffusion. That means that we shall study the well-posedness and large time behaviour of the following system:
\begin{equation}\label{eq:sys}
\begin{cases}
\partial_t u_1 - d_1\Delta u_1 &= -u_1u_2 + u_3u_4, \quad x\in\Omega, \; t>0,\\
\partial_t u_2 - d_2\Delta u_2 &= -u_1u_2 + u_3u_4, \quad x\in\Omega, \; t>0,\\
\partial_t u_3 - d_3\Delta u_3 &= +u_1u_2 - u_3u_4, \quad x\in\Omega, \; t>0,\\
\partial_t u_4 &= +u_1u_2 - u_3u_4, \quad x\in\Omega, \; t>0,\\
\end{cases}
\end{equation}
subject to homogeneous Neumann boundary conditions \eqref{sys_bc} \emph{only for $i=1,2,3$} and initial data \eqref{sys_id}. Thanks to the homogeneous Neumann boundary conditions, one can observe that \eqref{eq:sys_full} (and also \eqref{eq:sys}) possesses, at least formally, the following conservation laws
\begin{equation}\label{eq:mass_conservation}
\int_{\Omega}(u_i(x,t) + u_j(x,t))dx = M_{ij}:= \int_{\Omega}(u_{i,0}(x) + u_{j,0}(x))dx, \quad \forall \; t\geq 0,
\end{equation}
for $i=1,2$, $j=3,4$. This means that no single mass is conserved, but some combinations of them are. This phenomenon is not surprising, as the chemical reactions in the system move one compound to another. We also remark that exactly three of the four laws in \eqref{eq:mass_conservation} are linearly independent, and that these laws allow us to identify a unique positive equilibrium which balances the reaction \eqref{binary_reaction}. Intuitively speaking, we have two competing long term behaviours of \eqref{eq:sys_full}, corresponding to two processes:
\begin{itemize}
\item The diffusion attempts to distribute the compound evenly on $\Omega$, i.e. it tries to push $\br{u_i(x,t)}_{i=1,\dots,4}$ to become constants;
\item The chemical reaction, on the other hand, tries to push the compound to the point where the reaction is balanced, i.e
$$u_1(x,t)u_2(x,t)=u_3(x,t)u_4(x,t).$$
\end{itemize}
Only the combination of these two processes can drive the system \eqref{eq:sys_full} towards chemical equilibrium. Together with the aforementioned conservation laws, we expect that an equilibrium, if it exists, will be of the form $\bm{u}_\infty = (u_{i,\infty})_{i=1,\ldots, 4} \in (0,\infty)^4$, such that:
\begin{equation}\label{equi_sys}
\begin{cases}
u_{1,\infty}u_{2,\infty} = u_{3,\infty} u_{4,\infty},\\
u_{1,\infty} + u_{3,\infty} = |\Omega|^{-1} M_{13},\\
u_{1,\infty} + u_{4,\infty} = |\Omega|^{-1}M_{14},\\
u_{2,\infty} + u_{3,\infty} = |\Omega|^{-1}M_{23},
\end{cases}
\end{equation}
where the values $M_{ij}$ are defined in \eqref{eq:mass_conservation}.
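In fact, since three of the conservation laws are independent, \eqref{equi_sys} can be solved explicitly: writing $m_{ij} = M_{ij}/|\Omega|$ and eliminating $u_{2,\infty}, u_{3,\infty}, u_{4,\infty}$, the reaction balance reduces to a linear equation, giving $u_{1,\infty} = \frac{m_{13}m_{14}}{m_{14}+m_{23}}$. A purely illustrative numerical sketch (the sample masses below are arbitrary, not taken from the text):

```python
import math

# Arbitrary sample values of m_ij = M_ij / |Omega| (hypothetical, for illustration).
m13, m14, m23 = 2.0, 3.0, 1.5

# Closed form obtained by eliminating u2, u3, u4 from the equilibrium system:
u1 = m13 * m14 / (m14 + m23)
u3 = m13 - u1          # from u1 + u3 = m13
u4 = m14 - u1          # from u1 + u4 = m14
u2 = m23 - u3          # from u2 + u3 = m23

assert math.isclose(u1 * u2, u3 * u4)   # reaction balance u1*u2 = u3*u4
assert min(u1, u2, u3, u4) > 0          # positivity of the equilibrium
```

Positivity of all four components holds as long as the initial averages of $u_2$ and $u_4$ are not both zero, since $m_{14}+m_{23}-m_{13} = \overline{u_{2,0}}+\overline{u_{4,0}}$.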
\medskip
Despite a large amount of work dealing with the large time behaviour of systems with {\it full diffusion} \eqref{eq:sys_full} (or of larger generality), to our knowledge none so far has dealt with degenerate systems of the form \eqref{eq:sys}. The main aims of this paper are to show global existence of strong solutions to these degenerate reaction-diffusion systems, and to find a quantitative exponential convergence to equilibrium.
\subsection{The current state of the art}
The study of reaction-diffusion systems is a classical topic (see, for instance, \cite{Rot84}), yet it still poses many challenging open questions. One example, concerning systems of the form \eqref{eq:general_chemical_rd}, is global well-posedness when the non-linearities have super-quadratic growth. We refer the interested reader to the extensive review \cite{Pie10}.
The case of a quadratic system with {\it full diffusion}, \eqref{eq:sys_full}, has been investigated extensively: in \cite{GV10}, it was shown that when $N\leq 2$, \eqref{eq:sys_full} has a unique global strong solution. Still in two dimensions, this strong solution was later shown to be uniformly bounded in time in \cite{CDF14,PSY19}. In higher dimensions, i.e. $N\geq 3$, \cite{DFPV07} showed the global existence of a weak solution, but its uniqueness remains open. In the 2014 work \cite{CDF14}, it was shown that global strong solutions for $N\geq 3$ can be obtained under the condition that the diffusion coefficients are ``close enough''. Recently, in three parallel studies, \cite{CGV19}, \cite{Sou18} and \cite{FMT19}, the system \eqref{eq:sys_full} was shown to have a unique global strong solution in all dimensions.
The investigation of the long time behaviour of these systems has also borne fruit. The trend to equilibrium for \eqref{eq:sys_full} was first studied in one dimension in \cite{DF08} using the so-called \textit{entropy method} (which we will explain shortly). In higher dimensions, the work \cite{CDF14} has shown that under certain conditions (such as the ``closeness'' of the diffusion coefficients) one can again recover some relaxation information. Most notably, in the recent work \cite{FMT19}, the authors have shown that the global strong solution, which they proved to exist, converges exponentially fast to equilibrium in $L^\infty(\Omega)$-norm. We would also like to mention the work \cite{GZ10}, where the author treated \eqref{binary_reaction} with a probabilistic approach (rather than an entropic methodology). Due to the global existence and bounds found in \cite{FMT19}, the question of the large time behaviour of \eqref{eq:sys_full} in all dimensions is now settled. This can be summed up in the following theorem.
\begin{theorem}\label{thm:convergence_no_degen}\cite{FMT19}
Assume that $\Omega\subset\mathbb R^N$, with $N\geq 1$, is a bounded domain with boundary, $\partial \Omega$, that is $C^{2+\epsilon}$ for some $\epsilon>0$, with $\Omega$ lying locally only on one side of $\partial\Omega$. Assume in addition that the diffusion coefficients, $d_i$, $i=1,\ldots, 4$ in \eqref{eq:sys_full} are positive. Then for any bounded, non-negative initial data, $\bm{u}_0 = (u_{i,0})_{i=1,\ldots, 4}$, there exists a unique global strong solution to \eqref{eq:sys_full} which converges exponentially to the equilibrium $\bm{u}_\infty$ defined in \eqref{equi_sys}, in $L^\infty(\Omega)$-norm, i.e.
\begin{equation*}
\sum_{i=1}^4 \norm{u_i(t)-u_{i,\infty}}_{L^\infty\pa{\Omega}} \leq K_1 e^{-\lambda_1 t},\quad t\geq 0,
\end{equation*}
where $K_1,\lambda_1 > 0$ are given explicitly, and depend only on $\Omega$, $\br{d_i}_{i=1,\dots, 4}$, the initial masses $M_{ij}$, and $\br{\norm{u_{i,0}}_{L^\infty(\Omega)}}_{i=1,\dots, 4}$.
\end{theorem}
The interested reader can find the most recent advances in the study of the trend to equilibrium in general chemical reaction-diffusion systems (with full diffusion) in \cite{MHM14,FT17,DFT17,FT18,Mie16,MM18}.
\medskip
Needless to say, the analysis of the degenerate system \eqref{eq:sys} is much less satisfactory. The first work studying the global existence of \eqref{eq:sys} is \cite{DFPV07}, in which the existence of a global weak solution was shown in all dimensions. Global strong solutions in one and two dimensions were then shown in \cite{DF15} and \cite{CDF14} respectively. In higher dimensions, global strong solutions were also obtained in \cite{DF15} under the additional condition that the diffusion coefficients are close to each other.
The convergence to equilibrium for the degenerate system \eqref{eq:sys} is, on the other hand, completely open, as far as we know.
As the large time behaviour of the reversible reaction \eqref{binary_reaction} is the combination of diffusion and chemical reactions, the presence of \textit{diffusion in all species} plays an important role in the commonly used entropy method, with which a convergence to equilibrium is sought. When some diffusion is missing, things become much more involved. In some preliminary works which treated special systems of two species, see, for instance, \cite{DF07} or \cite{FLT18}, the authors utilised a phenomenon, which they called the {\it indirect diffusion effect}, to get convergence to equilibrium using the entropy method. Roughly speaking, the idea behind this phenomenon is that a combination of reversible reactions and diffusion of the diffusive species leads to some ``diffusion effect'' for the non-diffusive species. While this intuition is natural, showing it quantitatively in the form of a functional inequality, which is the heart of the entropy method, turns out to be quite complicated in most cases (for the simple linear case, we refer the reader to \cite{FPT17}). It is worth mentioning, as it will play a role in our investigation as well, that in both \cite{DF07} and \cite{FLT18}, uniform-in-time bounds on the $L^\infty(\Omega)$-norm of the solutions are essential.
In this paper, we push forward the theory of indirect diffusion effects by investigating the degenerate system \eqref{eq:sys}.
We study the well-posedness of the system and find suitable estimates for the solution which, while weaker than a uniform $L^\infty\pa{\Omega}$ bound, will suffice for our application of the entropy method. Remarkably, once these estimates are found, and the entropy method is successfully applied to obtain convergence to equilibrium, we are able to use our explicit convergence to revisit our estimates on the solutions to \eqref{eq:sys} and improve them up to a uniform $L^\infty(\Omega)$ bound, which will then improve the convergence rate of the entropy method, and yield an exponential convergence.
\subsection{Main results and key ideas}
The main results of our work concern the existence, uniqueness and long time behaviour of solutions to our problems. In particular, one can ask what type of solutions one can achieve. As our goal here is not only to deal with the system of PDEs, but also to show its intimate connection to the notions of entropy and indirect diffusion, we have elected to consider only classical solutions - that is, solutions that are at least $C^{2,1}$ ($C^2$ in space and $C^1$ in time), and which solve the differential equations, as well as satisfy the boundary conditions, pointwise (in the classical sense). Many of our proofs can be extended to weaker notions of solutions, yet we leave this discussion out of our present work.\\
In addition, to simplify some of the writing, we decided to employ a couple of notions:
\begin{itemize}
\item We will often refer to conditions \eqref{sys_bc} \textit{only for $i=1,2,3$} as \textit{the compatibility conditions} of our system of equations. Note that without diffusion, the equation for $u_4$ acts as an ODE, which is why we require no boundary conditions.
\item We will use the notion of \textit{smoothness} to indicate that our initial data and the domain itself guarantee the existence of a local classical solution. Following theorems from \cite{Ama85} (as shall be seen in the proofs), this will be valid when the initial data is in $C^2\pa{\overline{\Omega}}$ and the boundary of $\Omega$, $\partial \Omega$, is $C^{2+\epsilon}$ for some $\epsilon>0$, with $\Omega$ lying locally only on one side of $\partial \Omega$.
\end{itemize}
An important consequence of having classical solutions is that the formal mass conservation equations, given by \eqref{eq:mass_conservation}, are no longer formal but exact, as one can easily differentiate the appropriate integrals. \\
The main results of this paper are the following two theorems.
\begin{theorem}[$N=1,2$]\label{thm:main}
Let $N=1,2$ and let $\Omega$ be a bounded domain of $\mathbb{R}^{N}$ with smooth boundary, $\partial \Omega$. Then for any smooth, non-negative initial data $\br{u_{i,0}(x)}_{i=1,\dots,4}$ that satisfies the compatibility condition, \eqref{eq:sys} has a unique, classical, global non-negative, bounded solution $\br{u_i}_{i=1,\ldots, 4}$ which satisfies the exponential convergence to equilibrium
\begin{equation*}
\sum_{i=1}^4\norm{u_i(t) - u_{i,\infty}}_{L^\infty(\Omega)} \leq K_2e^{-\lambda_2 t} \quad \text{ for all } \quad t\geq 0,
\end{equation*}
where $K_2>0$ and $\lambda_2>0$ are {\normalfont explicit} constants depending only on $N$, $\Omega$, the initial masses $M_{ij}$, and the diffusion coefficients $d_1, d_2$ and $d_3$. Here $\bm{u}_\infty$ is the unique positive equilibrium determined by the equilibrium system \eqref{equi_sys}.
\end{theorem}
\begin{theorem}[$N\geq 3$]\label{thm:main-3D}
Let $N\geq 3$ and let $\Omega$ be a bounded domain of $\mathbb{R}^N$ with smooth boundary, $\partial \Omega$. There exists an explicit $\delta > 0$ such that if
\begin{equation}\label{quasi-uniform_theorem}
\frac{|d_i-d_3|}{d_i+d_3} < \delta \quad \text{ for } \quad i = 1 \quad \text{ or } \quad i=2,
\end{equation}
then for any smooth, non-negative initial data $\br{u_{i,0}(x)}_{i=1,\dots,4}$ that satisfies the compatibility condition, \eqref{eq:sys} has a unique, classical, global non-negative, bounded solution $\br{u_i}_{i=1,\ldots, 4}$ which satisfies the exponential convergence to equilibrium
\begin{equation*}
\sum_{i=1}^4\norm{u_i(t) - u_{i,\infty}}_{L^\infty(\Omega)} \leq K_3e^{-\lambda_3 t} \quad \text{ for all } \quad t\geq 0,
\end{equation*}
where $K_3>0$ and $\lambda_3>0$ are {\normalfont explicit} constants depending only on $\delta$, $N$, $\Omega$, the initial masses $M_{ij}$, and the diffusion coefficients $d_1, d_2$ and $d_3$. Here $\bm{u}_\infty$ is the unique positive equilibrium determined by the equilibrium system \eqref{equi_sys}.
\end{theorem}
\begin{remark}
Condition \eqref{quasi-uniform_theorem} means that either $d_1$ and $d_3$ or $d_2$ and $d_3$ are ``close'' to each other. As was mentioned, the constant $\delta$ can in fact be quantified, and chosen to be
\begin{equation*}
\delta = \frac{1}{C_{\mathrm{mr},p_0'}},
\end{equation*}
where $p_0' > 1$ is such that $p_0 = \frac{p_0'}{p_0' - 1} > \frac{N+2}{2}$, and where $C_{\mathrm{mr},p_0'}$ is the optimal constant in the maximal regularity for a linear parabolic equation (see Lemma \ref{lem_mr}). Note that in fact we have here a family of potential constants, corresponding to $p_0'$ and $C_{\mathrm{mr},p_0'}$, and we only need to find one such constant for which condition \eqref{quasi-uniform_theorem} is satisfied.
\end{remark}
\begin{remark}\label{rem:mass_bounds}
As was mentioned at the beginning of the subsection, since we have classical solutions, \eqref{eq:mass_conservation} is valid. Tying this together with the non-negativity of the solutions leads to the following set of bounds
\begin{equation}\label{eq:mass_bounds}
\norm{u_i(t)}_{L^1\pa{\Omega}} \leq M_{ij},\quad\quad i=1,2,\;j=3,4.
\end{equation}
We shall use this fact in some of our proofs.
\end{remark}
Let us briefly sketch the ideas we use to prove Theorems \ref{thm:main} and \ref{thm:main-3D}.
The global existence of bounded solutions to \eqref{eq:sys} will be shown by duality arguments. More precisely, when $N=1,2$ or when $N\geq 3$ and the required conditions are satisfied, a duality estimate will show that $u_i \in L^p(\Omega\times(0,T))$ for $i=1,2,3$, and some $p > \frac{N+2}{2}$. \\
These estimates are enough to begin a bootstrap process that will end up with $u_i\in L^\infty(\Omega\times(0,T))$ for $i=1,2,3$ and $T>0$ fixed, which in turn will imply that $u_4 \in L^\infty(\Omega\times(0,T))$.
These results are not surprising, as a similar methodology was used in \cite{CDF14} and \cite{DF15} in the case of one and two dimensions.\\
We would like to emphasise that the tools required to achieve the above change drastically between $N=1,2$ and $N\geq 3$ (see for instance Lemma \ref{PW} and Proposition \ref{Global3D}), which is the reason behind the separation of these cases in our main theorems.
\medskip
To consider convergence to the equilibrium given by the system \eqref{equi_sys}, we need to find a ``distance'' under which the solution converges as time goes to infinity. This aforementioned ``distance'' need not be a norm or a metric, but a non-negative functional that behaves well under the flow of the equations and is zero \emph{only} when the solution is at equilibrium. This situation is very common in many physically/chemically motivated systems. In practice, what we do is look for a functional for the equation, $H(\bm{u}|\bm{u}_\infty)$, which we call \emph{the entropy} or \emph{the relative entropy}, such that
\begin{itemize}
\item $H(\bm{u}|\bm{u}_\infty)\geq 0$, and $H(\bm{u}|\bm{u}_\infty)=0$ with $\bm{u}$ satisfying all the conservation laws if and only if $\bm u=\bm u_\infty$.
\item For any solution $\bm u(t)$ to the equation/s, $\frac{d}{dt}H(\bm{u}(t)|\bm{u}_\infty) \leq 0$ (i.e. $H$ is a Lyapunov functional for the system).
\end{itemize}
To find the rate of convergence for the entropy we employ the so-called \emph{entropy method}. Defining the entropy production of $H$ by the functional that formally satisfies
$$D(\bm u(t))=-\frac{d}{dt}H(\bm u(t)|\bm u_\infty)$$
we forget that $D$ is connected to $H$ via the system, and attempt to find a \emph{functional inequality} that connects $H$ and $D$, of the form
\begin{equation}\label{eede}
D(\bm u) \geq \Psi(H(\bm u|\bm u_\infty))
\end{equation}
for a suitable function $\Psi$. Once such an inequality has been obtained, we recall that under the flow of the equations, $D$ is minus the derivative of $H$, and the above functional inequality becomes a differential inequality which, under some conditions on $\Psi$, yields a quantitative rate of convergence to equilibrium. The case $\Psi(y)=Cy$, which one hopes to get in many physically relevant equations, yields an exponential rate of convergence. An entropy that seems to be appropriate in numerous settings is the so-called \emph{Boltzmann entropy}:
$$H(\bm u|\bm u_\infty)= \int_{\Omega} \phi\pa{\frac{\bm u}{\bm u_\infty}}\bm u_\infty dx,$$
where $\phi(x)=x\log x -x +1$. Unsurprisingly, this entropy will be our choice for the entropy of the system \eqref{eq:sys}. More precisely, the relative entropy for \eqref{eq:sys} (and also for \eqref{eq:sys_full}) is
\begin{equation*}
H(\bm u|\bm u_\infty) = \sum_{i=1}^4\int_{\Omega}\pa{u_i\log\pa{\frac{u_i}{u_{i,\infty}}} - u_i + u_{i,\infty}}dx.
\end{equation*}
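The non-negativity of this entropy is inherited pointwise from the elementary fact that $\phi(z)=z\log z - z + 1 \geq 0$, with equality only at $z=1$. A quick numerical sanity check of this fact (purely illustrative, with arbitrary sample points):

```python
import math

def phi(z):
    # Integrand of the Boltzmann relative entropy, z standing for u_i/u_{i,infty};
    # phi(0) = 1 by continuity, since z*log(z) -> 0 as z -> 0+.
    return z * math.log(z) - z + 1.0 if z > 0 else 1.0

samples = [1e-8, 0.1, 0.5, 1.0, 2.0, 10.0, 1e4]
assert all(phi(z) >= 0.0 for z in samples)
assert phi(1.0) == 0.0  # the unique zero of phi
```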
A simple calculation (formal in general, but exact when the functions are smooth enough) shows that the entropy production under \eqref{eq:sys} is given by
\begin{equation*}
D(\bm u) = \sum_{i=1}^3d_i\int_{\Omega}\frac{|\nabla u_i|^2}{u_i}dx + \int_{\Omega}(u_1u_2 - u_3u_4)\log\pa{\frac{u_1u_2}{u_3u_4}}dx
\end{equation*}
whereas under \eqref{eq:sys_full} it is given by
\begin{equation*}
D^{\rm{full}}(\bm u) = \sum_{i=1}^4d_i\int_{\Omega}\frac{|\nabla u_i|^2}{u_i}dx + \int_{\Omega}(u_1u_2 - u_3u_4)\log\pa{\frac{u_1u_2}{u_3u_4}}dx.
\end{equation*}
The first sum in $D(\bm{u})$ (or in $D^{\rm{full}}(\bm{u})$) corresponds to the diffusion of the system, while the second term corresponds to the reversible reaction \eqref{binary_reaction}. If all diffusion coefficients are present (i.e. we deal with \eqref{eq:sys_full}), it can be shown that
\begin{equation*}
D^{\rm{full}}(\bm u) \geq \lambda H(\bm u|\bm u_\infty),
\end{equation*}
for some $\lambda>0$, which gives an exponential convergence (see e.g. \cite{FT17} or \cite{DFT17}). The degenerate case, however, is more delicate.
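Before turning to the degenerate case, let us record how an inequality of this form produces the exponential rate; this is the standard Gr\"onwall argument:

```latex
% Along the flow, D^{full} is minus the time derivative of H, so the
% entropy--entropy production inequality becomes a differential inequality:
\frac{d}{dt}H(\bm u(t)|\bm u_\infty) = -D^{\rm{full}}(\bm u(t))
  \leq -\lambda H(\bm u(t)|\bm u_\infty),
\qquad\text{whence}\qquad
H(\bm u(t)|\bm u_\infty) \leq e^{-\lambda t}\,H(\bm u_0|\bm u_\infty).
```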
A particular case of only two substances was considered in \cite{DF08} and \cite{FLT18}, where the authors assumed a reversible reaction of the form $\alpha S_1 \leftrightarrows \beta S_2$, $\alpha, \beta \geq 1$, with either $S_1$ or $S_2$ lacking self diffusion. To employ the entropy method, these works utilised the so-called {\it indirect diffusion effect}. Roughly speaking, a combination of diffusion from the diffusive species and the reversible reaction leads to some diffusion effect on the non-diffusive species. To successfully show this phenomenon, the proofs in \cite{DF08} and \cite{FLT18} essentially used a uniform-in-time $L^\infty(\Omega)$ bound on the solution. Mimicking this strategy, one would like to show for the system considered in this paper that
\begin{equation}\label{IDE}
D(\bm{u}) \geq \beta\norm{\sqrt{u_4} - \overline{\sqrt{u_4}}}_{L^2(\Omega)}^2,
\end{equation}
assuming that the solution is uniformly bounded in time in $L^\infty(\Omega)$-norm, where $\overline{\sqrt{u_4}}$ is the spatial average of $\sqrt{u_4}$. It is worth noting that when the diffusion of $u_4$ is present, the Poincar\'e-Wirtinger inequality implies that
$$D^{\rm{full}}(\bm u) \geq d_4\int_{\Omega}\frac{|\nabla u_4|^2}{u_4}dx = 4d_4\norm{\nabla \sqrt{u_4}}_{L^2(\Omega)}^2 \geq 4d_4C_{\Omega}\norm{\sqrt{u_4} - \overline{\sqrt{u_4}}}_{L^2(\Omega)}^2.$$ Therefore, \eqref{IDE} can be seen as some ``diffusion behaviour'' of $u_4$. \\
Unfortunately, as the system \eqref{eq:sys} has degenerate diffusion, it is unclear how to a-priori obtain uniform-in-time $L^\infty(\Omega)$ bounds. Nevertheless, we will be able to show that
\begin{equation}\label{u3Linf}
\sup_{t\geq 0}\norm{u_3(t)}_{L^\infty(\Omega)} < +\infty
\end{equation}
and
\begin{equation}\label{u124Lq}
\norm{u_i(t)}_{L^q(\Omega)} \leq C(1+t)^\alpha \quad \text{ for } \quad i=1,2,4,
\end{equation}
with some $C>0$, $0<\alpha<1$ and $q$, which is dimension dependent.
This will motivate us to (successfully) seek a modified version of \eqref{IDE}, in which the constant $\beta$ is replaced by a function that depends on the $L^q(\Omega)$ norm of the solution, i.e.
\begin{equation}\label{indirect_ineq_modified}
D(\bm u) \geq \beta\pa{\br{\norm{u_i}_{L^q(\Omega)}}_{i=1,\dots,4}}\norm{\sqrt{u_4} - \overline{\sqrt{u_4}}}_{L^2(\Omega)}^2.
\end{equation}
Finding such an inequality, and with the assistance of the Poincar\'e-Wirtinger inequality, we will be able to show that along the flow of \eqref{eq:sys} we have that
\begin{equation*}
D(\bm{u}(t)) \geq \lambda(t)H(\bm{u}(t)|\bm{u}_\infty)
\end{equation*}
for an explicit non-negative function $\lambda: [0,\infty) \to [0,\infty)$ which satisfies $\lambda(t) \geq C_{\varepsilon}(1+t)^{-(\alpha+\varepsilon)}$ for any $0<\varepsilon<1-\alpha$. This will result in a {\it stretched exponential} convergence for the relative entropy, i.e.
\begin{equation*}
H(\bm{u}(t)|\bm{u}_\infty) \leq H(\bm{u}_0|\bm{u}_\infty)e^{-C_{1,\varepsilon}t^{1-\alpha-\varepsilon}}.
\end{equation*}
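To illustrate the mechanism behind this rate (with arbitrary toy constants, not those of the paper): a Gr\"onwall inequality with an algebraically decaying rate $\lambda(t)=c(1+t)^{-\alpha}$ integrates to exactly such a stretched exponential.

```python
import math

# Toy constants (hypothetical): rate lambda(t) = c*(1+t)^(-alpha) in dH/dt = -lambda(t)*H.
c, alpha, H0 = 1.0, 0.5, 2.0

def H_exact(t):
    # Closed form: H(t) = H0 * exp(-c * ((1+t)^(1-alpha) - 1) / (1-alpha)),
    # i.e. stretched exponential decay like exp(-C * t^(1-alpha)).
    return H0 * math.exp(-c * ((1.0 + t) ** (1.0 - alpha) - 1.0) / (1.0 - alpha))

# Forward-Euler integration of the ODE, to check it against the closed form.
h, t, H = 1e-4, 0.0, H0
while t < 5.0:
    H -= h * c * (1.0 + t) ** (-alpha) * H
    t += h

assert abs(H - H_exact(5.0)) < 1e-3
```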
While one could stop at this point, the well-known Csisz\'ar-Kullback-Pinsker inequality (see Lemma \ref{CKP}) implies that the stretched exponential convergence of the relative entropy passes to the $L^1(\Omega)$ norm of the solution. This fact allows us to interpolate between stretched exponential convergence in $L^1(\Omega)$ norm, algebraic growth in $L^\infty(\Omega)$ norm, and the smoothing effect of the heat operator, to obtain stretched exponential convergence in $L^\infty(\Omega)$ norm for $u_1, u_2, u_3$, which will then be transferred to $u_4$ as well. With these newly found bounds we will be able to return to \eqref{indirect_ineq_modified} and obtain an inequality of the form \eqref{IDE}. At this point, arguments similar to those of \cite{DF08} and \cite{FLT18} will yield the desired exponential convergence.
We find this interaction between a-priori bounds and entropic convergence both beautiful and extremely revealing.
\medskip
\noindent{\bf Structure of the paper}.
As the study of well-posedness and a-priori bounds of \eqref{eq:sys} and the entropy method are, at least at the first iteration, disjoint, we begin by exploring the entropy method and the manifestation of the indirect diffusion phenomenon in it in Section \ref{sec:entropy}.
In Section \ref{sec:bounds} we delve into our reaction-diffusion system and first show the well-posedness of \eqref{eq:sys}, followed by the estimates that are needed to apply the functional inequality from the previous section. Besides their usage in our current study of the long time behaviour of the solutions, we find these estimates to be interesting in their own right. In Section \ref{sec:proof}, we explore the intimate interaction between the entropy and the previously found norm bounds on the solution to significantly improve our initial estimates, and achieve our main theorems. Lastly, in Section \ref{sec:remarks} we discuss what we believe will be a natural path to continue this line of research, and potential extensions to our work.
\medskip
\noindent{\bf Notation}. In this paper, we regularly use the following notation.
\begin{itemize}
\item For any $0\leq \tau < T$,
\begin{equation*}
\Omega_{\tau,T} = \Omega\times(\tau,T), \quad \norm{\cdot}_{L^p(\Omega\times(\tau,T))} = \norm{\cdot}_{L^p(\Omega_{\tau,T})}.
\end{equation*}
When $\tau = 0$, we write $\Omega_T$ instead of $\Omega_{0,T}$ for simplicity.
\item For any function $f: \Omega \to \mathbb R$ we denote the spatial average of $f$ by
\begin{equation*}
\overline{f} = \frac{1}{|\Omega|}\int_{\Omega}f(x)dx.
\end{equation*}
\item $C_T$ {\it always} denotes a general constant which depends {\it at most algebraically} on the time horizon $T>0$.
\item We often write $C(\alpha, \beta, \gamma, \ldots)$ for a generic constant that depends on the arguments $\alpha, \beta, \gamma$, etc, but {\it not on} the time horizon $T>0$.
\end{itemize}
\section{Indirect diffusion effect and the entropy method}\label{sec:entropy}
This section is divided into two parts: in subsection \ref{FI} we prove the functional inequality that quantifies the indirect diffusion effect mentioned before, and in subsection \ref{sec:convergence} we show how one can utilise this inequality to obtain convergence to equilibrium for our system \eqref{eq:sys}, under assumptions on the growth in time of relevant norms.
\subsection{The entropic inequalities}\label{FI}
In this subsection we will focus solely on the functional inequality between the entropy (defined in Section \ref{sec:intro})
\begin{equation}\nonumber
H\pa{\bm{u}|\bm{u}_{\infty}}=\sum_{i=1}^4 \int_{\Omega}\left(u_i\log\pa{\frac{u_i}{u_{i,\infty}}} - u_i + u_{i,\infty}\right)dx,
\end{equation}
and its production
\begin{equation}\label{eq:entropy_production}
D\pa{\bm{u}}=\sum_{i=1}^3 \int_{\Omega} d_i \frac{\abs{\nabla u_i}^2}{u_i}dx+ \int_{\Omega}\pa{u_1u_2-u_3u_4}\log \pa{\frac{u_1u_2}{u_3u_4}}dx.
\end{equation}
The first term in this expression is the well-known \emph{Fisher information}, which is commonly associated with the entropy method and the Boltzmann entropy, while the second term is connected to the chemical reaction and is always non-negative. This second term will be invaluable in obtaining the ``indirect diffusion'' effect we have discussed in Section \ref{sec:intro}.
Our main theorem for this section is the following:
\begin{theorem}\label{thm:entropy_method}
Let $\{u_i\}_{i=1,\ldots, 4}$ be non-negative functions on $\Omega \subset \mathbb R^N$, and assume that
\begin{equation}\label{eq:masses}
\int_{\Omega}(u_{i}(x) + u_{j}(x))dx = M_{ij}=\abs{\Omega}\pa{u_{i,\infty}+u_{j,\infty}} \quad\text{ for } \quad i\in \{1, 2\} \quad \text{and} \quad j\in \{3,4\}.
\end{equation}
Then
\begin{equation}\label{eq:entropy_method}
\frac{H(\bm{u}|\bm{u}_\infty)}{D(\bm{u})} \leq K_1\pa{1+\max_{i=1,\ldots, 4}\{\log(\|u_i\|_{L^\infty(\Omega)} + 1)\}}\left(1+\max_{i=1,4}\|u_i\|_{L^q(\Omega)}\right)
\end{equation}
where
\begin{equation*}
q = \begin{cases}
\frac N2 &\text{ when } N\geq 3,\\
1+\gamma \text{ for an arbitrary fixed } \gamma > 0 &\text{ when } N = 2,\\
1 &\text{ when } N = 1,
\end{cases}
\end{equation*}
and the constant $K_1$ depends only on the domain $\Omega$, the dimension $N$, the diffusion coefficients $d_1, d_2, d_3$, the initial masses $M_{13}, M_{23}, M_{14}$, and in the case where $N=2$, it also depends on $\gamma>0$. In \eqref{eq:entropy_method}, $D(\bm{u})$ is understood to be $+\infty$ when $\sqrt{u_i}$ is not in $H^1(\Omega)$ for some $i=1,2,3$.
\end{theorem}
The proof of this theorem is involved and is therefore divided into several lemmas. In what follows we will denote $U_i=\sqrt{u_i}$, $i=1,\ldots, 4$, and notice that
$$u_i\in L^1\pa{\Omega}\;\Rightarrow\; U_i\in L^2\pa{\Omega},$$
$$\nabla U_i = \frac{\nabla u_i}{2\sqrt{u_i}}\quad\text{and as such}\quad \frac{\abs{\nabla u_i}^2}{u_i}=4\abs{\nabla U_i}^2.$$
\medskip
One ingredient to prove Theorem \ref{thm:entropy_method} is the following Poincar\'e-Wirtinger inequality:
\begin{lemma}(Poincar\'e-Wirtinger-type inequality)\label{PW}
There exists a constant $0<C_{\Omega,q}<\infty$ depending on the domain $\Omega$ and $q$ such that
\begin{equation}\label{eq:PW}
\|\nabla f\|_{L^2(\Omega)} \geq C_{\Omega,q}\|f - \overline{f}\|_{L^q(\Omega)}
\end{equation}
for all $f\in H^1(\Omega)$, where
\begin{equation*}
q = \begin{cases}
\frac{2N}{N-2} &\text{ for } N\geq 3,\\
\in [1,\infty) \text{ arbitrary } &\text{ for } N = 2,\\
\infty &\text{ for } N = 1.
\end{cases}
\end{equation*}
Naturally, when $q = \infty$, the constant $C_{\Omega,q}$ depends only on $\Omega$.
\end{lemma}
\begin{proof}
If $N\geq 3$ inequality \eqref{eq:PW} is the result of the Sobolev embedding theorem, $H^1\pa{\Omega}\subset L^{\frac{2N}{N-2}}\pa{\Omega}$, and the Poincar\'e-Wirtinger inequality
$$\norm{f - \overline{f}}_{L^2\pa{\Omega}} \leq \mathcal{C}_{\Omega,2}\norm{\nabla f}_{L^2\pa{\Omega}}$$
(see, for instance, \cite{Evans}). Indeed, one finds that
$$\|f - \overline{f}\|_{L^{\frac{2N}{N-2}}(\Omega)}\leq S_{\Omega,N}\|f - \overline{f}\|_{H^1(\Omega)}$$
$$= S_{\Omega,N}\pa{\|f - \overline{f}\|_{L^2(\Omega)}+\|\nabla f\|_{L^2(\Omega)}}\leq S_{\Omega,N}\pa{1+\mathcal{C}_{\Omega,2}}\|\nabla f\|_{L^2(\Omega)}.$$
The case $N=2$ is a bit more delicate, as it is the critical case for the Sobolev embedding. However, as $\Omega$ is bounded we have that $H^1\pa{\Omega}\subset W^{1,\eta}\pa{\Omega}$ for any $1\leq \eta<2$, and as such the choice $\eta=\frac{2q}{q+2}$, together with the Sobolev embedding and the Poincar\'e inequality, yields the desired result.\\
Lastly, the case $N=1$ follows from the Sobolev embedding $H^1\pa{\Omega}\subset L^\infty\pa{\Omega}$, and the Poincar\'e inequality again.
\end{proof}
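As a purely numerical illustration of the classical ($q=2$) inequality underlying this proof, one can check $\|f'\|_{L^2(0,1)} \geq \pi\,\|f-\overline{f}\|_{L^2(0,1)}$ (with $\pi$ the sharp Neumann Poincar\'e constant on the unit interval) on a sample Neumann-compatible function; the function below is an arbitrary choice:

```python
import math

# Midpoint-rule check of ||f'||_{L^2(0,1)} >= pi * ||f - mean(f)||_{L^2(0,1)}
# for f(x) = cos(pi*x) + 0.3*cos(2*pi*x), which satisfies f'(0) = f'(1) = 0.
n = 10000
xs = [(i + 0.5) / n for i in range(n)]
f  = [math.cos(math.pi * x) + 0.3 * math.cos(2 * math.pi * x) for x in xs]
df = [-math.pi * math.sin(math.pi * x)
      - 0.6 * math.pi * math.sin(2 * math.pi * x) for x in xs]

mean = sum(f) / n
osc  = math.sqrt(sum((v - mean) ** 2 for v in f) / n)   # ||f - mean||_{L^2}
grad = math.sqrt(sum(v ** 2 for v in df) / n)           # ||f'||_{L^2}
assert grad >= math.pi * osc
```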
\begin{lemma}\label{lem:first_D_estimation}
Let $\br{u_i}_{i=1,\dots,4}$ be non-negative functions on $\Omega$. Then
\begin{equation}\label{eq:first_D_estimation}
D\pa{\bm{u}} \geq \widetilde{D}(\bm{U}):= 4\pa{\sum_{i=1}^3 d_i\norm{\nabla U_i}_{L^2\pa{\Omega}}^2+\norm{U_1U_2-U_3U_4}_{L^2\pa{\Omega}}^2}.
\end{equation}
Note that $\widetilde{D}(\bm U)$, and consequently $D(\bm u)$, is understood to be $+\infty$ when $U_i$ is not in $H^1(\Omega)$ for some $i=1,2,3$.
\end{lemma}
\begin{proof}
Using the inequality $\pa{x-y}\log\pa{\frac{x}{y}}\geq 4 \pa{\sqrt{x}-\sqrt{y}}^2 $ we see that
$$\int_{\Omega}\pa{u_1(x)u_2(x)-u_3(x)u_4(x)}\log \pa{\frac{u_1(x)u_2(x)}{u_3(x)u_4(x)}}dx$$
$$ \geq 4\int_{\Omega}\pa{\sqrt{u_1(x)u_2(x)}-\sqrt{u_3(x)u_4(x)}}^2dx=4\norm{U_1U_2-U_3U_4}_{L^2\pa{\Omega}}^2. $$
The connection between the $L^2$ norm of $\nabla U_i$ and the Fisher information of $u_i$ completes the proof.
\end{proof}
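The elementary inequality $\pa{x-y}\log\pa{\frac{x}{y}}\geq 4\pa{\sqrt{x}-\sqrt{y}}^2$ used in the proof can be corroborated numerically. The following Python sketch (illustrative only; the sampling range is an arbitrary choice) tests it pointwise on random positive pairs:

```python
import math
import random

random.seed(0)

# Check the pointwise inequality (x - y) * log(x/y) >= 4 * (sqrt(x) - sqrt(y))^2
# on random positive pairs; it underlies the lower bound on the entropy production.
for _ in range(100_000):
    x = random.uniform(1e-6, 50.0)
    y = random.uniform(1e-6, 50.0)
    lhs = (x - y) * math.log(x / y)
    rhs = 4.0 * (math.sqrt(x) - math.sqrt(y)) ** 2
    assert lhs >= rhs - 1e-9, (x, y)

print("inequality verified")
```

Substituting $x=a^2$, $y=b^2$ reduces the inequality to $(a+b)\log(a/b)\geq 2(a-b)$ for $a,b>0$, which is a standard consequence of the concavity of the logarithm.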
The next series of lemmas expresses the ``indirect diffusion'' phenomenon mentioned above.
\begin{lemma}[$N\geq 3$]\label{lem:second_D_estimation_N>2}
Let $\br{u_i}_{i=1,\dots,4}$ be bounded non-negative functions on a bounded domain $\Omega\subset \mathbb{R}^N$, with $N\geq 3$. Then there exists an explicit $K_2>0$, depending only on $\Omega$, $\br{d_i}_{i=1,2,3}$, $M_{13},M_{23},M_{14}$, such that
\begin{equation}\label{eq:second_D_estimation_N>2}
\begin{gathered}
\widetilde{D}\pa{\bm{U}}
\geq \frac{K_2}{1 +\max_{i=1,4}\pa{\norm{u_i}_{L^{\frac N2}\pa{\Omega}}}}\norm{U_4-\overline{U_4}}^2_{L^2\pa{\Omega}}
\end{gathered}
\end{equation}
where $\widetilde D(\bm{U})$ is defined in \eqref{eq:first_D_estimation}. Note that $\widetilde{D}(\bm{U})$ is understood to be $+\infty$ if $U_i$ is not in $H^1(\Omega)$ for some $i=1,2,3$.
\end{lemma}
\begin{proof}
By applying the Poincar\'e-Wirtinger inequality from Lemma \ref{PW}, we have
\begin{equation}\label{first_estimate}
\begin{aligned}
\widetilde D(\bm{U})&\geq 4\sum_{i=1}^3 d_i\norm{\nabla U_i}_{L^2\pa{\Omega}}^2+4\norm{U_1U_2-U_3U_4}_{L^2(\Omega)}^2\\
&\geq 4C_{\Omega}^2\sum_{i=1}^3d_i\|U_i - \overline{U_i}\|_{L^{\frac{2N}{N-2}}(\Omega)}^2 + 4\|U_1U_2 - U_3U_4\|_{L^2(\Omega)}^2\\
&\geq 4\min_{i=1,2,3}\{C_\Omega^2d_i,1\}\widehat{D}(\bm{U})
\end{aligned}
\end{equation}
where
\begin{equation*}
\widehat{D}(\bm{U}) = \sum_{i=1}^3\|U_i - \overline{U_i}\|_{L^{\frac{2N}{N-2}}(\Omega)}^2 + \|U_1U_2 - U_3U_4\|_{L^2(\Omega)}^2.
\end{equation*}
We consider the zero average functions
$$\delta_i(x)=U_i(x)-\overline{U_i},\quad\quad i=1,\dots,4,$$
with whom we can rewrite
$$
\widehat{D}(\bm{U}) = \sum_{i=1}^3\|\delta_i\|_{L^{\frac{2N}{N-2}}(\Omega)}^2 + \|U_1U_2 - U_3U_4\|_{L^2(\Omega)}^2.
$$
From this point onwards we estimate the above expressions with respect to the ``smallness'' of $\delta_i$, and analyse the possible cases. We start with a few simple estimates on $\br{\delta_i}_{i=1,\dots,4}$. From \eqref{eq:masses}, and the non-negativity of all $\br{u_i}_{i=1,\dots,4}$, we see that for $M=\max\br{M_{13},M_{23},M_{14}}$
\begin{equation*}
\|U_i\|_{L^2(\Omega)}^2 = \int_{\Omega}u_i\,dx \leq M \quad \text{ for all } \quad i=1,\ldots, 4.
\end{equation*}
Since $$\overline{U^2_i}=\frac{\|U_i\|_{L^2(\Omega)}^2}{\abs{\Omega}}$$ we see that for all $i=1,\ldots, 4$,
$$\overline{U^2_i}\leq \frac{M}{\abs{\Omega}},$$
and
$$\norm{\delta_i}^2_{L^2\pa{\Omega}}=|\Omega|(\overline{U_i^2}-\overline{U_i}^2) \leq |\Omega|\overline{U_i^2} \leq M.$$
Similarly, as
$$|\Omega|\overline{U_i}^2=|\Omega|\overline{U_i^2}-\norm{\delta_i}^2_{L^2\pa{\Omega}} \leq M$$
we find that
$$\overline{U_i}^2 \leq \frac{M}{\abs{\Omega}}.$$
Next, for a given $\epsilon>0$, to be chosen shortly (see \eqref{epsilon}), we consider the following two cases:
\medskip
\noindent\textbf{Case I: some $\delta_i$ is large.} If
$$\max_{i=1,2,3}\norm{\delta_i}^2_{L^2\pa{\Omega}} \geq \epsilon,$$
then
\begin{equation}\label{second_estimate}
\begin{aligned}
\widehat{D}\pa{\bm{U}} &\geq \max_{i=1,2,3}\norm{\delta_i}^2_{L^{\frac{2N}{N-2}}\pa{\Omega}} \geq |\Omega|^{-2/N}\max_{i=1,2,3}\|\delta_i\|_{L^2(\Omega)}^2\\
&\geq |\Omega|^{-2/N}\epsilon \geq \frac{\epsilon \abs{\Omega}^{-2/N}}{M} \norm{\delta_4}^2_{L^2\pa{\Omega}}=\frac{ \epsilon \abs{\Omega}^{-2/N}}{M} \norm{U_4-\overline{U_4}}^2_{L^2\pa{\Omega}}.
\end{aligned}
\end{equation}
\medskip
\noindent\textbf{Case II: all $\delta_i$ are small.} If
$$\max_{i=1,2,3}\norm{\delta_i}^2_{L^2\pa{\Omega}} \leq \epsilon,$$
then
\begin{equation}\label{eq:crucial_estimate_D_part_I}
\begin{aligned}
\norm{U_1U_2-U_3U_4}_{L^2\pa{\Omega}}^2&=\norm{\pa{\overline{U_1}+\delta_1}\pa{\overline{U_2}+\delta_2}-\pa{\overline{U_3}+\delta_3}U_4}_{L^2\pa{\Omega}}^2\\
&\geq \frac{\norm{\overline{U_1}\;\overline{U_2}-\overline{U_3}U_4}_{L^2\pa{\Omega}}^2}{2}-\norm{\delta_1\overline{U_2}+\delta_2\overline{U_1}+\delta_1\delta_2 -\delta_3 U_4}^2_{L^2\pa{\Omega}}
\end{aligned}
\end{equation}
where we have used the inequality $\pa{a-b}^2 \geq \frac{a^2}{2}-b^2$. Using the facts that $\pa{a+b+c+d}^2 \leq 4\pa{a^2+b^2+c^2+d^2}$ and $\overline{U_i}^2\leq M/\abs{\Omega}$, we find that
\begin{equation}\label{eq:crucial_estimate_D_part_II}
\begin{aligned}
&\norm{\delta_1\overline{U_2}+\delta_2\overline{U_1}+\delta_1\delta_2 -\delta_3 U_4}^2_{L^2\pa{\Omega}}\\
&\leq \frac{4M}{\abs{\Omega}} \pa{\norm{\delta_1}^2_{L^2\pa{\Omega}}+\norm{\delta_2}^2_{L^2\pa{\Omega}}}
+4\int_{\Omega}|\delta_1\delta_2|^2dx
+4\norm{U_4^2\delta_3^2}_{L^1\pa{\Omega}}\\
&\leq 4M\abs{\Omega}^{2/N-1} \pa{\norm{\delta_1}^2_{L^{\frac{2N}{N-2}}\pa{\Omega}}+\norm{\delta_2}^2_{L^{\frac{2N}{N-2}}\pa{\Omega}}}
+4\int_{\Omega}|\delta_1(x)\delta_2(x)|^2dx
+4\norm{U_4^2\delta_3^2}_{L^1\pa{\Omega}}.
\end{aligned}
\end{equation}
Using H\"older's inequality we see that
\begin{equation}\label{change_1}
\begin{aligned}
\int_{\Omega}|\delta_1(x)\delta_2(x)|^2dx &= \int_{\Omega}|U_1(x) - \overline{U_1}|^2|\delta_2(x)|^2dx \leq\|U_1 - \overline{U_1}\|_{L^{N}(\Omega)}^2 \|\delta_2\|_{L^{\frac{2N}{N-2}}(\Omega)}^2\\
&\leq 2\left(\|U_1\|_{L^N(\Omega)}^2 + |\Omega|^{\frac{2}{N}}\overline{U_1}^2\right) \|\delta_2\|_{L^{\frac{2N}{N-2}}(\Omega)}^2\\
&\leq 2\left(\|u_1\|_{L^{\frac N2}(\Omega)} + M\abs{\Omega}^{\frac{2}{N}-1}\right)\|\delta_2\|_{L^{\frac{2N}{N-2}}(\Omega)}^2.
\end{aligned}
\end{equation}
Similarly,
\begin{equation}
\label{change_2}
\norm{U_4^2\delta_3^2}_{L^1\pa{\Omega}} \leq \norm{U_4}^2_{L^{N}\pa{\Omega}} \norm{\delta_3}^2_{L^{\frac{2N}{N-2}}\pa{\Omega}} = \norm{u_4}_{L^{\frac{N}{2}}\pa{\Omega}}\norm{\delta_3}^2_{L^{\frac{2N}{N-2}}\pa{\Omega}}.
\end{equation}
Combining \eqref{eq:crucial_estimate_D_part_I}, \eqref{eq:crucial_estimate_D_part_II}, \eqref{change_1} and \eqref{change_2}, we see that
\begin{equation}\label{eq:crucial_estimate_D_part_III}
\begin{aligned}
&\norm{U_1U_2-U_3U_4}_{L^2\pa{\Omega}}^2\geq \frac{\norm{\overline{U_1}\;\overline{U_2}-\overline{U_3}U_4}_{L^2\pa{\Omega}}^2}{2}-4M\abs{\Omega}^{2/N-1} \norm{\delta_1}^2_{L^{\frac{2N}{N-2}}\pa{\Omega}}\\
&-4\pa{3M\abs{\Omega}^{2/N-1} +2\norm{u_1}_{L^{\frac{N}{2}}\pa{\Omega}}}\norm{\delta_2}^2_{L^{\frac{2N}{N-2}}\pa{\Omega}}-4\norm{u_4}_{L^{\frac{N}{2}}\pa{\Omega}}\norm{\delta_3}^2_{L^{\frac{2N}{N-2}}\pa{\Omega}}
\end{aligned}
\end{equation}
With this at hand, we see that for any $\eta\in (0,1]$
\begin{equation*}
\begin{aligned}
\widehat{D}\pa{\bm{U}}&\geq \sum_{i=1}^3 \norm{\delta_i}_{L^{\frac{2N}{N-2}}\pa{\Omega}}^2+\eta\norm{U_1U_2-U_3U_4}_{L^2\pa{\Omega}}^2\\
&\geq \left(1-4\eta M |\Omega|^{\frac 2N - 1}\right)\|\delta_1\|_{L^{\frac{2N}{N-2}}(\Omega)}^2\\
&\quad + \left(1 - 4\eta\left(3M|\Omega|^{\frac 2N - 1}+2\|u_1\|_{L^{\frac N2}(\Omega)} \right)\right)\|\delta_2\|_{L^{\frac{2N}{N-2}}(\Omega)}^2\\
&\quad + \left(1-4\eta\|u_4\|_{L^{\frac N2}(\Omega)} \right)\|\delta_3\|_{L^{\frac{2N}{N-2}}(\Omega)}^2 + \frac{\eta}{2}\|\overline{U_1}\,\overline{U_2} - \overline{U_3}U_4\|_{L^2(\Omega)}^2.
\end{aligned}
\end{equation*}
Thus, choosing
\begin{equation}\label{eta}
\eta=\frac{1}{4\left(3M|\Omega|^{\frac 2N -1} + 2\max_{i=1,4}\|u_i\|_{L^{\frac N2}(\Omega)} \right)+1}<1
\end{equation}
yields
\begin{equation}\label{third_estimate}
\widehat{D}\pa{\bm{U}}\geq \frac{\eta}{2}\norm{\overline{U_1}\;\overline{U_2}-\overline{U_3}U_4}_{L^2\pa{\Omega}}^2.
\end{equation}
We now estimate the right hand side of \eqref{third_estimate}. This involves two sub-cases, regarding the size of $\overline{U_3}$.
\medskip
\textbf{Sub-case A: $\overline{U_3}$ is large}. Assume that $\overline{U_3} \geq \sqrt{\epsilon}$. Then, motivated by the connection $u_{4,\infty} = \frac{u_{1,\infty}u_{2,\infty}}{u_{3,\infty}}$, we write
$$U_4(x)=\frac{\overline{U_1}\;\overline{U_2}}{\overline{U_3}}\pa{1+\xi(x)} \quad \text{ for } \quad x\in\Omega.$$
As such
$$\norm{\overline{U_1}\;\overline{U_2}-\overline{U_3}U_4}_{L^2\pa{\Omega}}^2=\pa{\overline{U_1}\;\overline{U_2}}^2 \norm{\xi}^2_{L^2\pa{\Omega}}=\pa{\overline{U_1}\;\overline{U_2}}^2 \overline{\xi^2}|\Omega|.$$
On the other hand
$$\norm{U_4-\overline{U_4}}^2_{L^2\pa{\Omega}}= \frac{\pa{\overline{U_1}\;\overline{U_2}}^2}{\overline{U_3}^2}\norm{\xi-\overline{\xi}}^2_{L^2\pa{\Omega}}=\frac{\pa{\overline{U_1}\;\overline{U_2}}^2}{\overline{U_3}^2} \pa{\overline{\xi^2}-\overline{\xi}^2}|\Omega|.$$
Thus
\begin{equation}\label{fourth_estimate}
\norm{\overline{U_1}\;\overline{U_2}-\overline{U_3}U_4}_{L^2\pa{\Omega}}^2 \geq \epsilon\frac{\pa{\overline{U_1}\;\overline{U_2}}^2}{\overline{U_3}^2} |\Omega| \overline{\xi^2} \geq \epsilon\norm{U_4-\overline{U_4}}^2_{L^2\pa{\Omega}}.
\end{equation}
\medskip
\textbf{Sub-case B: $\overline{U_3}$ is small.} Assume now that $\overline{U_3} \leq \sqrt{\epsilon}$. In this case since
$$\norm{\delta_3}^2_{L^2\pa{\Omega}}= |\Omega|(\overline{U_3^2}-\overline{U_3}^2),$$
and we are in Case II, we see that $\overline{U_3^2} \leq \overline{U_3}^2 + |\Omega|^{-1}\|\delta_3\|_{L^2(\Omega)}^2 \leq \epsilon(1+|\Omega|^{-1})$. Thus
$$\overline{U_1}^2 = \overline{U_1^2} -|\Omega|^{-1} \norm{\delta_1}^2_{L^2\pa{\Omega}}=\overline{U_1^2}+\overline{U_3^2} -\overline{U_3^2}- |\Omega|^{-1}\norm{\delta_1}^2_{L^2\pa{\Omega}} \geq \frac{M_{13}}{|\Omega|} - \epsilon(1+2|\Omega|^{-1}). $$
Similarly, $\overline{U_2}^2\geq \frac{M_{23}}{|\Omega|} - \epsilon(1+2|\Omega|^{-1})$. With that, and using again the facts that $\pa{a-b}^2 \geq \frac{a^2}{2}-b^2$ and that $\norm{U_i}_{L^2\pa{\Omega}}^2=\abs{\Omega}\overline{U_i^2}$, we have
\begin{equation*}
\begin{aligned}
\norm{\overline{U_1}\;\overline{U_2}-\overline{U_3}U_4}_{L^2\pa{\Omega}}^2 &\geq \frac{\abs{\Omega}}{2}\pa{\overline{U_1}^2\overline{U_2}^2 - 2 \overline{U_3}^2 \overline{U_4^2}}\\
&\geq \frac{\abs{\Omega}}{2}\pa{\pa{\frac{M_{13}}{\abs{\Omega}} - \epsilon(1+2|\Omega|^{-1})}\pa{\frac{M_{23}}{\abs{\Omega}} - \epsilon(1+2|\Omega|^{-1})}-2M\epsilon}\\
&\geq \frac{\abs{\Omega}}{2}\pa{\pa{\frac{\min\{M_{13}, M_{23} \}}{\abs{\Omega}} - \epsilon(1+2|\Omega|^{-1})}^2-2M\epsilon}.
\end{aligned}
\end{equation*}
Therefore, if we choose
\begin{equation}\label{epsilon}
\varepsilon = \min\left\{\frac{\min\{M_{13},M_{23} \}^2}{16 M|\Omega|^2}, \frac{\min\{M_{13},M_{23} \}}{2\pa{2+|\Omega|}}
\right\}
\end{equation}
then
\begin{equation*}
\norm{\overline{U_1}\,\overline{U_2} - \overline{U_3}U_4}^2_{L^2(\Omega)} \geq \frac{|\Omega|}{2}\frac{\min\{M_{13},M_{23} \}^2}{8|\Omega|^2} = \frac{\min\{M_{13},M_{23} \}^2}{16|\Omega|}.
\end{equation*}
Much like in Case I, this \emph{uniform bound} and the fact that $\norm{U_4 - \overline{U_4}}_{L^2(\Omega)}^2 \leq M$ imply that
\begin{equation}\label{fifth_estimate}
\norm{\overline{U_1}\;\overline{U_2}-\overline{U_3}U_4}_{L^2\pa{\Omega}}^2 \geq \frac{\min\{M_{13},M_{23} \}^2}{16M\abs{\Omega}}\norm{U_4-\overline{U_4}}^2_{L^2\pa{\Omega}}.
\end{equation}
It is important to note that this is \emph{the only case/sub-case} where an explicit $\epsilon$ was needed. All other cases are valid for \emph{any} $\epsilon>0$.
\medskip
Combining \eqref{first_estimate}, \eqref{second_estimate}, \eqref{third_estimate}, \eqref{fourth_estimate} and \eqref{fifth_estimate} we have
\begin{equation*}
\begin{gathered}
\widetilde D(\bm{U}) \geq 4\min_{i=1,2,3}\{C_{\Omega}^2d_i, 1\}\min\left\{\frac{\varepsilon|\Omega|^{-\frac 2N}}{M}, \frac{\varepsilon \eta}{2},\frac{\eta}{2}\frac{\min\{M_{13}, M_{23} \}^2 }{16M|\Omega|} \right\}\norm{U_4 - \overline{U_4}}_{L^2(\Omega)}^2\\
\geq 4\min_{i=1,2,3}\{C_{\Omega}^2d_i,1\}\eta \min\left\{\frac{\varepsilon|\Omega|^{-\frac 2N}}{M}, \frac{\varepsilon }{2}, \frac{\min\{M_{13}, M_{23} \}^2 }{32M|\Omega|} \right\}\norm{U_4 - \overline{U_4}}_{L^2(\Omega)}^2
\end{gathered}
\end{equation*}
where $\varepsilon$ is given in \eqref{epsilon} and $\eta$ in \eqref{eta}. Therefore, the desired estimate \eqref{eq:second_D_estimation_N>2} follows immediately with a suitable explicit constant $K_2>0$.
\end{proof}
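The only delicate arithmetic in the proof is the choice of $\epsilon$ in \eqref{epsilon}. As a sanity check, the following Python sketch (illustrative only; the sampled ranges for $|\Omega|$ and the masses are arbitrary assumptions) verifies the Sub-case B inequality $\pa{m/|\Omega|-\epsilon(1+2|\Omega|^{-1})}^2-2M\epsilon\geq m^2/(8|\Omega|^2)$, with $m=\min\{M_{13},M_{23}\}$:

```python
import random

random.seed(2)

# Illustrative check that the choice of epsilon in (eq. "epsilon") indeed gives,
# in Sub-case B,
#   (m/|O| - eps*(1 + 2/|O|))**2 - 2*M*eps >= m**2 / (8*|O|**2),
# where m = min(M13, M23) and M = max(M13, M23, M14).
for _ in range(1000):
    omega = random.uniform(0.1, 10.0)                 # plays the role of |Omega|
    m13 = random.uniform(0.1, 5.0)
    m23 = random.uniform(0.1, 5.0)
    m14 = random.uniform(0.1, 5.0)
    m = min(m13, m23)
    big_m = max(m13, m23, m14)
    eps = min(m ** 2 / (16 * big_m * omega ** 2), m / (2 * (2 + omega)))
    lhs = (m / omega - eps * (1 + 2 / omega)) ** 2 - 2 * big_m * eps
    assert lhs >= m ** 2 / (8 * omega ** 2) - 1e-12

print("epsilon choice verified")
```

Indeed, the second term in the minimum forces $\epsilon(1+2|\Omega|^{-1})\leq m/(2|\Omega|)$, while the first forces $2M\epsilon\leq m^2/(8|\Omega|^2)$, which together give the claim.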
\begin{lemma}[$N=2$]\label{lem:second_D_estimation_N=2}
Let $\br{u_i}_{i=1,\dots,4}$ be non-negative functions on a bounded domain $\Omega\subset \mathbb{R}^2$. Then for any $\gamma>0$, there exists an explicit $K_{3,\gamma}>0$, depending only on $\gamma$, $\Omega$, $\br{d_i}_{i=1,2,3}$, $M$ (defined in Lemma \ref{lem:second_D_estimation_N>2}) such that
\begin{equation}\label{eq:second_D_estimation_N=2}
\begin{gathered}
\widetilde{D}(\bm{U})
\geq \frac{K_{3,\gamma}}{1 +\max_{i=1,4}\pa{\norm{u_i}_{L^{1+\gamma}\pa{\Omega}}}}\norm{U_4-\overline{U_4}}^2_{L^2\pa{\Omega}}
\end{gathered}
\end{equation}
where $\widetilde{D}(\bm{U})$ is defined in \eqref{eq:first_D_estimation}. Note that $\widetilde{D}(\bm{U})$ is understood to be $+\infty$ if $U_i$ is not in $H^1(\Omega)$ for some $i=1,2,3$.
\end{lemma}
\begin{proof}
The proof of this lemma is almost identical to that of Lemma \ref{lem:second_D_estimation_N>2}, with a few small changes, which we will clarify here.
\medskip
Using the two dimensional version of the Poincar\'e-Wirtinger inequality from Lemma \ref{PW}, we conclude that for any $\gamma>0$ and $i=1,2, 3$,
\begin{equation*}
\norm{\nabla U_i}_{L^2(\Omega)}^2 \geq C_{\Omega,\gamma}^2\|U_i - \overline{U_i}\|_{L^{\frac{2(1+\gamma)}{\gamma}}(\Omega)}^2,
\end{equation*}
which implies that $\widehat{D}(\bm{U})$ from the previous proof can be changed to
$$
\widehat{D}(\bm{U}) = \sum_{i=1}^3\|\delta_i\|_{L^{\frac{2\pa{1+\gamma}}{\gamma}}(\Omega)}^2 + \|U_1U_2 - U_3U_4\|_{L^2(\Omega)}^2,
$$
and we obtain
\begin{equation}\nonumber
\begin{aligned}
\widetilde D(\bm{U})\geq 4\min_{i=1,2,3}\{C_{\Omega,\gamma}^2d_i,1\}\widehat{D}(\bm{U}).
\end{aligned}
\end{equation}
Next, since
\begin{equation*}
\norm{\delta_i}_{L^{\frac{2(1+\gamma)}{\gamma}}(\Omega)}^2 \geq |\Omega|^{-\frac{1}{1+\gamma}}\norm{\delta_i}_{L^2(\Omega)}^2,
\end{equation*}
we see that estimate \eqref{second_estimate} in Case I, i.e.
$\max_{i=1,2,3}\norm{\delta_i}^2_{L^2\pa{\Omega}} \geq \epsilon,$
changes to
\begin{equation}\nonumber
\begin{aligned}
\widehat{D}\pa{\bm{U}} \geq \frac{ \epsilon \abs{\Omega}^{-\frac{1}{1+\gamma}}}{M} \norm{U_4-\overline{U_4}}^2_{L^2\pa{\Omega}}.
\end{aligned}
\end{equation}
Turning our attention to \eqref{eq:crucial_estimate_D_part_II}, \eqref{change_1} and \eqref{change_2}, we see that
\begin{equation}\nonumber
\begin{aligned}
&\norm{\delta_1\overline{U_2}}^2_{L^2\pa{\Omega}} \leq \overline{U_2}^2\abs{\Omega}^{\frac{1}{1+\gamma}}\norm{\delta_1}_{L^{\frac{2(1+\gamma)}{\gamma}}(\Omega)}^2 \leq \abs{\Omega}^{-\frac{\gamma}{1+\gamma}}M\norm{\delta_1}_{L^{\frac{2(1+\gamma)}{\gamma}}(\Omega)}^2
\end{aligned}
\end{equation}
and similarly
\begin{equation}\nonumber
\begin{aligned}
&\norm{\delta_2\overline{U_1}}^2_{L^2\pa{\Omega}}\leq \abs{\Omega}^{-\frac{\gamma}{1+\gamma}}M\norm{\delta_2}_{L^{\frac{2(1+\gamma)}{\gamma}}(\Omega)}^2.
\end{aligned}
\end{equation}
Also,
\begin{equation*}
\begin{aligned}
\int_{\Omega}|\delta_1(x)\delta_2(x)|^2dx &\leq \norm{U_1 - \overline{U_1}}_{L^{2(1+\gamma)}(\Omega)}^2\norm{\delta_2}_{L^{\frac{2(1+\gamma)}{\gamma}}(\Omega)}^2\\
&\leq 2\left(\norm{U_1}_{L^{2(1+\gamma)}(\Omega)}^2 + \abs{\Omega}^{\frac{1}{1+\gamma}}\overline{U_1}^2\right)\norm{\delta_2}_{L^{\frac{2(1+\gamma)}{\gamma}}(\Omega)}^2\\
&\leq 2\left(\norm{u_1}_{L^{1+\gamma}(\Omega)}+\abs{\Omega}^{-\frac{\gamma}{1+\gamma}}M \right)\norm{\delta_2}_{L^{\frac{2(1+\gamma)}{\gamma}}(\Omega)}^2
\end{aligned}
\end{equation*}
and
\begin{equation*}
\norm{U_4^2\delta_3^2}_{L^1(\Omega)} \leq \norm{U_4}_{L^{2(1+\gamma)}(\Omega)}^2\norm{\delta_3}_{L^{\frac{2(1+\gamma)}{\gamma}}(\Omega)}^2 = \|u_4\|_{L^{1+\gamma}(\Omega)}\norm{\delta_3}_{L^{\frac{2(1+\gamma)}{\gamma}}(\Omega)}^2.
\end{equation*}
From this point onwards the proof is identical, with $\frac{N}{2}$ replaced by $1+\gamma$, and as such we omit it.
\end{proof}
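The modification in this proof rests on the H\"older bound $\norm{f}_{L^2(\Omega)}^2\leq |\Omega|^{\frac{1}{1+\gamma}}\norm{f}_{L^{\frac{2(1+\gamma)}{\gamma}}(\Omega)}^2$. The following Python sketch (illustrative only; the domain length and sampling are arbitrary choices) checks its discrete analogue on Riemann sums:

```python
import random

random.seed(3)

# Numerical illustration of the Holder bound used above:
#   ||f||_{L^2}^2 <= |Omega|^{1/(1+gamma)} * ||f||_{L^{2(1+gamma)/gamma}}^2
# on Omega = (0, L), approximating integrals by Riemann sums (for which
# Holder's inequality holds exactly, weight by weight).
L_omega = 2.0
n = 1000
h = L_omega / n

for _ in range(50):
    gamma = random.uniform(0.1, 5.0)
    p = 2 * (1 + gamma) / gamma
    f = [random.uniform(0.0, 3.0) for _ in range(n)]   # random grid "function"
    l2_sq = sum(v * v for v in f) * h
    lp_sq = (sum(v ** p for v in f) * h) ** (2 / p)
    assert l2_sq <= L_omega ** (1 / (1 + gamma)) * lp_sq + 1e-9

print("Holder interpolation verified")
```

The exponents match since $\frac{1+\gamma}{\gamma}$ and $1+\gamma$ are conjugate, giving $\int f^2 \leq \pa{\int f^{\frac{2(1+\gamma)}{\gamma}}}^{\frac{\gamma}{1+\gamma}}|\Omega|^{\frac{1}{1+\gamma}}$.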
\begin{lemma}[$N=1$]\label{N=1}
Let $\br{u_i}_{i=1,\dots,4}$ be non-negative functions on a bounded, open interval $\Omega\subset \mathbb{R}$. Then there exists an explicit $K_{4}>0$, depending only on $\Omega$, $\br{d_i}_{i=1,2,3}$, $M$ (defined in Lemma \ref{lem:second_D_estimation_N>2}) such that
\begin{equation}\label{IDE-1D}
\widetilde{D}(\bm{U})
\geq K_4\norm{U_4-\overline{U_4}}^2_{L^2\pa{\Omega}}
\end{equation}
where $\widetilde{D}(\bm{U})$ is defined in \eqref{eq:first_D_estimation}.
Note that $\widetilde{D}(\bm{U})$ is understood to be $+\infty$ if $U_i$ is not in $H^1(\Omega)$ for some $i=1,2,3$.
\end{lemma}
\begin{proof}
Much like the proof of Lemma \ref{lem:second_D_estimation_N=2}, the proof of this lemma is based on that of Lemma \ref{lem:second_D_estimation_N>2}, with appropriate modifications due to the improved Sobolev embedding behind the Poincar\'e-Wirtinger inequality. In the one dimensional setting, this is the embedding $H^1(\Omega)\hookrightarrow L^\infty(\Omega)$.\\
Again, we only show the required changes in our appropriate estimates. As was mentioned, the Poincar\'e-Wirtinger inequality in $N=1$ reads as
\begin{equation*}
\norm{\frac{d}{dx}U_i}_{L^2(\Omega)}^2 \geq C_{\Omega}^2\norm{U_i - \overline{U_i}}_{L^\infty(\Omega)}^2,
\end{equation*}
giving us the modified functional
$$
\widehat{D}(\bm{U}) = \sum_{i=1}^3\|\delta_i\|_{L^{\infty}(\Omega)}^2 + \|U_1U_2 - U_3U_4\|_{L^2(\Omega)}^2.
$$
Since
\begin{equation*}
\norm{\delta_i}_{L^\infty(\Omega)}^2 \geq |\Omega|^{-1}\norm{\delta_i}_{L^2(\Omega)}^2,
\end{equation*}
Case I is again secured. The estimates \eqref{eq:crucial_estimate_D_part_II}, \eqref{change_1} and \eqref{change_2} become
\begin{equation}\nonumber
\begin{aligned}
&\norm{\delta_1\overline{U_2}}^2_{L^2\pa{\Omega}} \leq M\norm{\delta_1}_{L^{\infty}(\Omega)}^2 \\
&\norm{\delta_2\overline{U_1}}^2_{L^2\pa{\Omega}} \leq M\norm{\delta_2}_{L^{\infty}(\Omega)}^2
\end{aligned}
\end{equation}
and
\begin{equation*}
\begin{gathered}
\int_{\Omega}|\delta_1(x)\delta_2(x)|^2dx \leq \norm{U_1 - \overline{U_1}}_{L^2(\Omega)}^2\norm{\delta_2}_{L^\infty(\Omega)}^2\\
\leq 2\left(\|u_1\|_{L^1(\Omega)} + \abs{\Omega}\overline{U_1}^2 \right)\norm{\delta_2}_{L^\infty(\Omega)}^2 \leq 4M\norm{\delta_2}_{L^\infty(\Omega)}^2.
\end{gathered}
\end{equation*}
Lastly
\begin{equation*}
\norm{U_4^2\delta_3^2}_{L^1(\Omega)} \leq \norm{U_4}_{L^2(\Omega)}^2\norm{\delta_3}_{L^\infty(\Omega)}^2 \leq M\norm{\delta_3}_{L^\infty(\Omega)}^2.
\end{equation*}
From this point onwards the proof is again exactly as in Lemma \ref{lem:second_D_estimation_N>2}, though there is \emph{no longer any dependency on $\br{u_i}_{i=1,\dots,4}$}, which is why $K_4$ is a fixed constant that depends only on $\Omega$, $\br{d_i}_{i=1,2,3}$ and the masses $M_{13},M_{23},M_{14}$.
\end{proof}
We now have the tools to prove our desired functional inequality.
\begin{proof}[Proof of Theorem \ref{thm:entropy_method}]
We start by considering the function
$$\Phi(x,y)=\frac{x\log\pa{\frac{x}{y}}-x+y}{\pa{\sqrt{x}-\sqrt{y}}^2}=\frac{\phi\pa{\frac{x}{y}}}{\pa{\sqrt{\frac{x}{y}}-1}^2},$$
where $\phi(z) = z\log z - z + 1$.
It is simple to see that $\Phi$ is continuous on any subset of $\pa{\mathbb{R}_+\cup\br{0}}\times\mathbb{R}_+$. Moreover, as
$$\lim_{s\rightarrow\infty}\frac{\phi(s)}{s\log s}=1,\quad \lim_{s\rightarrow 0^+}\phi(s)=1$$
we see that we can find a uniform constant, $C>0$, such that $\Phi(x,y) \leq C\max\pa{1,\log \pa{\frac{x}{y}}}$.\\
Thus, we have that
\begin{equation}\label{eq:first_entropy_estimation}
\begin{aligned}
H\pa{\bm{u}|\bm{u_\infty}}&=\sum_{i=1}^4\int_{\Omega}\phi\pa{\frac{u_i(x)}{u_{i,\infty}}}u_{i,\infty}dx\\
&=\sum_{i=1}^4\int_{\Omega}\Phi\pa{u_i(x),u_{i,\infty}} \pa{U_i(x)-U_{i,\infty}}^2 dx\\
&\leq C\max_{i=1,\dots,4}\pa{1,\log \pa{{\norm{u_i}_{L^\infty\pa{\Omega}}+ 1}}+\abs{\log u_{i,\infty}}}\sum_{i=1}^4 \norm{U_i-U_{i,\infty}}^2_{L^2\pa{\Omega}}.
\end{aligned}
\end{equation}
Using $(a+b+c)^2 \leq 3(a^2+b^2+c^2)$, we have
\begin{align*}
\norm{U_i-U_{i,\infty}}^2_{L^2\pa{\Omega}} &\leq 3\pa{\norm{U_i-\overline{U_i}}^2_{L^2\pa{\Omega}}+\norm{\overline{U_i} -\sqrt{\overline{U_i^2}} }_{L^2(\Omega)}^2 + \norm{\sqrt{\overline{U_i^2}}-U_{i,\infty}}^2_{L^2\pa{\Omega}}}\\
&\leq 6\pa{\norm{U_i-\overline{U_i}}^2_{L^2\pa{\Omega}} + |\Omega|\abs{\sqrt{\overline{U_i^2}}-U_{i,\infty}}^2}
\end{align*}
where at the last step we used the fact that
\begin{equation*}
\abs{\sqrt{\overline{U_i^2}}-\overline{U_i}}^2=\overline{U_i^2}+\overline{U_i}^2 - 2 \sqrt{\overline{U_i^2}}\overline{U_i} \underset{\overline{U_i} \leq \sqrt{\overline{U_i^2}}}{\leq} \overline{U_i^2}-\overline{U_i}^2 =\frac{\norm{U_i-\overline{U_i}}^2_{L^2\pa{\Omega}}}{\abs{\Omega}}
\end{equation*}
Therefore,
\begin{equation}\label{estimate_H}
H(\bm{u}|\bm{u}_\infty) \leq C\pa{1+\max_{i=1,\ldots, 4}\log\pa{\norm{u_i}_{L^\infty(\Omega)}+1}}\!\pa{\sum_{i=1}^4\norm{U_i-\overline{U_i}}^2_{L^2\pa{\Omega}} + \sum_{i=1}^4\left|\sqrt{\overline{U_i^2}}-U_{i,\infty}\right|^2}.
\end{equation}
From Lemmas \ref{lem:second_D_estimation_N>2}, \ref{lem:second_D_estimation_N=2} and \ref{N=1}, we know that we can find a constant $K_5$ such that
\begin{equation*}
D(\bm{u}) \geq \frac{K_5}{1+\max_{i=1,\ldots, 4}\pa{\norm{u_i}_{L^q(\Omega)}} }\norm{U_4 - \overline{U_4}}_{L^2(\Omega)}^2
\end{equation*}
where $q = \frac N2$ when $N\geq 3$, $q = 1+\gamma$ for an arbitrary $\gamma > 0$ when $N=2$, and
\begin{equation*}
D(\bm{u}) \geq K_5\norm{U_4 - \overline{U_4}}_{L^2(\Omega)}^2,
\end{equation*}
when $N=1$. Using the above with the definition of $D(\bm{u})$, Lemma \ref{lem:first_D_estimation} and the Poincar\'e inequality
$$\norm{U_i - \overline{U_i}}_{L^2(\Omega)}^2 \leq C_{\Omega}\norm{\nabla U_i}_{L^2(\Omega)}^2,$$
we see that we can find an appropriate constant $K_6$ that depends only on $\Omega$ and the diffusion coefficients, such that
\begin{equation}\label{first_D}
\begin{gathered}
\frac 12 D(\bm{u})= \frac 14 D(\bm{u})+\frac 14 D(\bm{u}) \geq C_{\Omega,\br{d_i}_{i=1,\dots,3}} \sum_{i=1}^3\norm{U_i - \overline{U_i}}_{L^2(\Omega)}^2\\
+\frac{K_5}{4\pa{1+\max_{i=1,\ldots, 4}\pa{\norm{u_i}_{L^q(\Omega)}}} }\norm{U_4 - \overline{U_4}}_{L^2(\Omega)}^2 \\
\geq \frac{K_6}{1+\max_{i=1,\ldots, 4}\pa{\norm{u_i}_{L^q(\Omega)}} }\sum_{i=1}^4\norm{U_i - \overline{U_i}}_{L^2(\Omega)}^2,
\end{gathered}
\end{equation}
and
\begin{equation}\label{second_D}
\frac 12 D(\bm{u}) \geq \frac{K_6}{1+\max_{i=1,\ldots, 4}\pa{\norm{u_i}_{L^q(\Omega)}}}\rpa{\sum_{i=1}^4\norm{U_i - \overline{U_i}}_{L^2(\Omega)}^2 + \norm{U_1U_2 - U_3U_4}_{L^2(\Omega)}^2}.
\end{equation}
Using the estimate
\begin{equation}\label{third_D}
\sum_{i=1}^4\norm{U_i - \overline{U_i}}_{L^2(\Omega)}^2 + \norm{U_1U_2 - U_3U_4}_{L^2(\Omega)}^2 \geq K_7\sum_{i=1}^4\abs{\sqrt{\overline{U_i^2}} - U_{i,\infty}}^2,
\end{equation}
which was shown in greater generality in \cite[estimate (61) in Section 4]{FLT19}, we can now combine \eqref{estimate_H}, \eqref{first_D}, \eqref{second_D} and \eqref{third_D} to conclude \eqref{eq:entropy_method} and finish the proof of Theorem \ref{thm:entropy_method}.
\end{proof}
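The uniform constant $C$ bounding $\Phi$ is only asserted to exist in the proof above. The Python sketch below (illustrative; the threshold $3$ and the sampled range are empirical assumptions, not claims from the text) tabulates $\Phi(s)/\max(1,\log s)$ over a one-dimensional grid, using that $\Phi(x,y)$ depends only on $s=x/y$:

```python
import math

# Sanity check that  Phi(s) = phi(s) / (sqrt(s) - 1)^2,  phi(s) = s*log(s) - s + 1,
# satisfies Phi(s) <= C * max(1, log(s)) with a uniform C; C = 3 suffices on the
# sampled range (the text only asserts existence of some C > 0).
def phi(s):
    return s * math.log(s) - s + 1.0

def big_phi(s):
    return phi(s) / (math.sqrt(s) - 1.0) ** 2

worst = 0.0
s = 1e-6
while s <= 50.0:
    if abs(s - 1.0) > 1e-3:            # skip the removable singularity at s = 1
        ratio = big_phi(s) / max(1.0, math.log(s))
        worst = max(worst, ratio)
    s += 1e-3

assert worst <= 3.0
print(f"largest sampled ratio: {worst:.3f}")
```

Note that $\Phi$ extends continuously across $s=1$ with value $2$ (expand $\phi(1+u)\approx u^2/2$ and $(\sqrt{1+u}-1)^2\approx u^2/4$), so skipping a small window there loses nothing.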
\subsection{Conditional convergence to equilibrium}\label{sec:convergence}
The previous subsection was dedicated to finding the functional inequality that governs the entropy method applied to our system of equations. In this short subsection we show how this inequality yields a quantitative decay to zero of the relative entropy, under the assumption that the solutions to \eqref{eq:sys} exhibit a certain growth in the relevant norms appearing in Theorem \ref{thm:entropy_method}. Establishing such estimates will be the focus of the next section, where we study the global well-posedness of \eqref{eq:sys}. We have elected to keep this short subsection relatively general, both to emphasise the generality of the method and to allow future improvements of the bounds for \eqref{eq:sys} to have an immediate consequence for the question of convergence to equilibrium.
\medskip
Our only theorem for this subsection is
\begin{theorem}\label{thm:convergence_general}
Let $\Omega$ be a bounded domain of $\mathbb{R}^N$, $N\geq 1$, with smooth boundary, $\partial \Omega$, and let $\br{u_i(t)}_{i=1,\dots,4}$ be a classical solution for the system of equations \eqref{eq:sys}. Assume that there exist constants $\mu \geq 0$, $0\leq \alpha<1$, and $C_{\mu,\alpha}>0$ such that
\begin{equation}\label{eq:convergence_bound_condition}
\sup_{t\geq 0}\pa{\max_{i=1,\dots,4}(1+t)^{-\mu }\norm{u_i(t)}_{L^\infty\pa{\Omega}} + \max_{i=1,4}(1+t)^{-\alpha}\norm{u_i(t)}_{L^q(\Omega)}} \leq C_{\mu,\alpha},
\end{equation}
where
\begin{equation*}
q = \begin{cases}
\frac N2 &\text{ when } N\geq 3,\\
1+\gamma \text{ for some } \gamma > 0 &\text{ when } N = 2,\\
1 &\text{ when } N = 1.
\end{cases}
\end{equation*}
Then for any $\varepsilon>0$ with $\alpha+\varepsilon<1$ we have
\begin{equation*}
H(\bm{u}(t)|\bm{u}_\infty) \leq H(\bm{u}_0| \bm{u}_\infty)e^{-C_{\alpha, \varepsilon}\pa{\pa{1+t}^{1-\alpha-\varepsilon}-1}} \quad \text{ for all } \quad t\geq 0.
\end{equation*}
Moreover, if $\mu = 0$, which is equivalent to
\begin{equation}\label{uniform_Linf_bound}
\sup_{i=1,\ldots, 4}\sup_{t\geq 0} \norm{u_i(t)}_{L^\infty(\Omega)} < +\infty,
\end{equation}
we have the full exponential convergence
\begin{equation*}
H(\bm{u}(t)|\bm{u}_\infty) \leq Ce^{-\lambda_0 t} \quad \text{ for all } \quad t\geq 0,
\end{equation*}
for some $\lambda_0 > 0$.
In both cases the constants that govern the convergence are explicit, and depend only on the domain $\Omega$, the dimension $N$, the diffusion coefficients $d_1, d_2, d_3$, the initial masses $M_{13}, M_{23}, M_{14}$, and in the case where $N=2$, also on $\gamma>0$.
\end{theorem}
\begin{proof}
From \eqref{eq:convergence_bound_condition} we have
\begin{equation*}
\norm{u_i(t)}_{L^\infty(\Omega)} \leq C_{\mu,\alpha}(1+t)^{\mu} \quad \text{ for all } i=1,\ldots, 4,
\end{equation*}
and
\begin{equation*}
\norm{u_i(t)}_{L^q(\Omega)} \leq C_{\mu,\alpha}(1+t)^{\alpha} \quad \text{ for } \quad i=1,4.
\end{equation*}
This, together with \eqref{eq:entropy_method} from Theorem \ref{thm:entropy_method}, imply that
\begin{equation*}
\frac{H(\bm{u}(t)|{\bm{u}_\infty})}{D(\bm{u}(t))} \leq K_1\log\pa{C_{\mu,\alpha}(1+t)^\mu + 1}\pa{1+C_{\mu,\alpha}(1+t)^{\alpha}} \leq K\pa{1+t}^{\alpha+\varepsilon}
\end{equation*}
for some $K$ depending on $K_1$, $C_{\mu,\alpha}$, $\mu$, and $\varepsilon>0$. As $K_1$ depends only on parameters of the problem, and conserved quantities, we conclude that $K$ is a constant that is independent of time, and as such
$$-\frac{d}{dt}H\pa{\bm{u}(t)|\bm{u}_\infty}=D\pa{\bm{u}(t)} \geq \frac{H\pa{\bm{u}(t)|\bm{u}_\infty}}{K\pa{1+t}^{\alpha+\varepsilon}}.$$
Thus,
$$H\pa{\bm{u}(t)|\bm{u}_{\infty}} \leq H\pa{\bm{u}_0|\bm{u}_{\infty}}e^{-\frac{\pa{1+t}^{1-\alpha-\varepsilon}-1}{K\pa{1-\alpha-\varepsilon}}}, $$
which is the desired estimate.
\medskip
In the case where $\mu = 0$ we have for any $1\leq p \leq \infty$,
\begin{equation*}
\sup_{i=1,\ldots, 4}\sup_{t\geq 0}\norm{u_i(t)}_{L^p(\Omega)} \leq C,
\end{equation*}
since the domain is bounded and as such
$$\norm{u}_{L^p\pa{\Omega}}\leq \abs{\Omega}^{\frac{1}{p}}\norm{u}_{L^\infty\pa{\Omega}}.$$
Using Theorem \ref{thm:entropy_method} we conclude that
\begin{equation*}
\frac{H(\bm{u}(t)|\bm{u}_\infty)}{D(\bm{u}(t))} \leq \frac{1}{\lambda_0}
\end{equation*}
for some $\lambda_0>0$, which leads to
\begin{equation*}
\frac{d}{dt}H(\bm{u}(t)|\bm{u}_\infty) = -D(\bm{u}(t)) \leq -\lambda_0 H(\bm{u}(t)|\bm{u}_\infty)
\end{equation*}
and consequently
\begin{equation*}
H(\bm{u}(t)|\bm{u}_\infty) \leq e^{-\lambda_0t}H(\bm{u}_0|\bm{u}_\infty).
\end{equation*}
\end{proof}
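The Gronwall step in the proof can be illustrated numerically. The sketch below (with assumed sample values $K=2$ and $\alpha+\varepsilon=1/2$, which are not taken from the text) integrates $h'(t)=-h(t)/\pa{K(1+t)^{\alpha+\varepsilon}}$ by explicit Euler and compares against the closed-form stretched exponential obtained by direct integration:

```python
import math

# Illustration (sample values K = 2, alpha + eps = 0.5) of the Gronwall step:
# integrating  h'(t) = -h(t) / (K (1+t)^(alpha+eps))  gives
#   h(t) = h(0) * exp(-((1+t)^(1-alpha-eps) - 1) / (K (1-alpha-eps))).
K = 2.0
beta = 0.5              # plays the role of alpha + eps, must be < 1
h0 = 1.0

def closed_form(t):
    return h0 * math.exp(-((1.0 + t) ** (1.0 - beta) - 1.0) / (K * (1.0 - beta)))

# explicit Euler with a small step, as an independent check of the formula
h, t, dt = h0, 0.0, 1e-4
while t < 10.0:
    h += dt * (-h / (K * (1.0 + t) ** beta))
    t += dt

assert abs(h - closed_form(10.0)) < 1e-3
print(f"numeric h(10) = {h:.6f}, closed form = {closed_form(10.0):.6f}")
```

Since $\alpha+\varepsilon<1$, the exponent $(1+t)^{1-\alpha-\varepsilon}$ diverges, so the relative entropy still tends to zero, only at a stretched-exponential rate rather than a purely exponential one.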
\section{Global existence of solutions}\label{sec:bounds}
Now that we have concluded our investigation of the entropy method, we turn our attention to the systematic study of \eqref{eq:sys}: the existence of global bounded solutions to it, and time estimates on appropriately relevant norms.
Let us consider our system of equations more closely: the first three equations, which describe the evolution of $\br{u_i}_{i=1,\dots,3}$, include diffusion. As such, if the reaction terms remain under control, one expects the solutions of these equations to remain bounded uniformly in time. The last equation, for $u_4$, is more of an ODE than a full-blown PDE in nature, and cannot provide more regularity on the solution than the initial datum gives. Moreover, the non-negativity of the solutions implies
$$\partial_t u_4(x,t) \leq u_1(x,t) u_2(x,t) \leq \norm{u_1(t)}_{L^\infty\pa{\Omega}}\norm{u_2(t)}_{L^\infty\pa{\Omega}},$$
which tells us that, even when $u_1$ and $u_2$ are uniformly bounded in time, $u_4$ might grow linearly with $t$.
The above considerations are, unfortunately, more of an intuition than an established fact. Since a potential growth of $u_4$ can considerably affect $\br{u_i}_{i=1,\dots,3}$, we cannot achieve the desired uniform $L^\infty(\Omega)$ bound on them so readily. Interestingly, we \emph{are able} to achieve such a uniform bound on $u_3$, and in addition we can estimate the growth in time of the $L^p(\Omega)$ norm of $u_1$ and $u_2$, for any $1<p<\infty$. These results are expressed in the next two propositions, which are the main results of this section.
\begin{proposition}[$N=1,2$]\label{Global1D}
Consider a bounded domain $\Omega\subset \mathbb{R}^N$, $N = 1,2$ with smooth boundary, $\partial \Omega$. Then, for any non-negative and smooth initial data $\bm{u}_0$ that satisfies the compatibility condition, \eqref{eq:sys} has a unique classical global solution which satisfies
\begin{equation}\label{eq:bound_u_3}
\sup_{t\geq 0}\|u_3(t)\|_{L^\infty(\Omega)} \leq C < +\infty,
\end{equation}
and
\begin{equation}\label{eq:bound_u124}
\norm{u_1(t)}_{L^\infty(\Omega)} + \norm{u_2(t)}_{L^\infty(\Omega)} + \norm{u_4(t)}_{L^\infty(\Omega)} \leq C(1+t)^{\mu}
\end{equation}
for some {\normalfont explicit} $C>0$ and $\mu \geq 0$. Moreover, for any $0<\varepsilon<\mu$, there exist {\normalfont explicit} $\gamma>0$ and $C_{\gamma,\varepsilon}>0$ such that
\begin{equation}\label{eq:bound_u_1_u_2}
\norm{u_1(t)}_{L^{1+\gamma}(\Omega)} + \norm{u_2(t)}_{L^{1+\gamma}(\Omega)} + \norm{u_4(t)}_{L^{1+\gamma}(\Omega)} \leq C_{\gamma,\varepsilon}(1+t)^{\varepsilon}
\end{equation}
for all $t\geq 0$. All constants here depend only on the parameters of the problem and the initial datum.
\end{proposition}
\begin{proposition}[$N\geq 3$]\label{Global3D}
Consider a bounded domain $\Omega\subset \mathbb{R}^N$, $N \geq 3$ with smooth boundary, $\partial \Omega$. Assume moreover that
\begin{equation}\label{quasi-uniform}
\frac{|d_i - d_3|}{d_i+d_3} < \frac{1}{C_{\mathrm{mr},p_0'}}
\end{equation}
with $i=1$ or $i=2$, and $p_0' > 1$ such that
\begin{equation*}
p_0 = \frac{p_0'}{p_0' - 1} > \frac{N+2}{2},
\end{equation*}
where $C_{\mathrm{mr},p_0'}$ is a fixed constant that relates to the maximal regularity property of the heat equation (see Lemma \ref{lem_mr}). Then, for any non-negative and smooth initial data $\bm{u}_0$ that satisfies the compatibility condition, \eqref{eq:sys} has a unique classical global solution which satisfies
\begin{equation}\label{3D_u3}
\sup_{t\geq 0}\norm{u_3(t)}_{L^\infty(\Omega)} \leq C <+\infty,
\end{equation}
and, there exist {\normalfont explicit} constants $\mu \geq 0$ and $0\leq\alpha < 1$ such that
\begin{equation}\label{3D_u124_Linfty}
\norm{u_1(t)}_{L^{\infty}(\Omega)} + \norm{u_2(t)}_{L^{\infty}(\Omega)} + \norm{u_4(t)}_{L^{\infty}(\Omega)} \leq C(1+t)^\mu,
\end{equation}
\begin{equation}\label{3D_u124}
\norm{u_1(t)}_{L^{\frac N2}(\Omega)} + \norm{u_2(t)}_{L^{\frac N2}(\Omega)} + \norm{u_4(t)}_{L^{\frac N2}(\Omega)} \leq C(1+t)^\alpha \quad \text{ for all } \quad t\geq 0.
\end{equation}
All constants here depend only on the parameters of the problem and the initial datum.
\end{proposition}
The rest of this section is organised as follows: in Subsection \ref{auxiliary} we provide some auxiliary results which we will use in the subsequent subsections. Subsections \ref{3D_proof} and \ref{12D_proof} are devoted to proving Propositions \ref{Global3D} and \ref{Global1D} respectively.
\medskip
Finally, we recall that we {\it always} denote by $C_T$ a generic constant depending {\it at most algebraically} on the time horizon $T>0$, which can be different from line to line (or even in the same line).
\subsection{Auxiliary results}\label{auxiliary}
We emphasise that all results of this subsection are valid for all $N\geq 1$.
\begin{lemma}\label{lem:polynomial}
Let $d>0$, $T>0$ and $f\in L^p(\Omega_{T})$ for $1<p<\infty$. Assume, in addition, that $\norm{f}_{L^p(\Omega_{T})} \leq C_{T}$ and let $u$ be the solution to the heat equation
\begin{equation}\label{heat_equation}
\begin{cases}
u_t(x,t) - d\Delta u(x,t) = f, &(x,t)\in \Omega_{T},\\
\nabla u(x,t) \cdot \nu(x) = 0, &(x,t)\in \partial\Omega\times(0,T),\\
u(x,0) = u_0(x), &x\in\Omega,
\end{cases}
\end{equation}
with initial datum $u_0\in L^\infty(\Omega)$.
\begin{itemize}
\item[(i)] If $p< (N+2)/2$, then
\begin{equation*}
\norm{u}_{L^s(\Omega_{T})} \leq C_{T} \quad \text{ for all } \quad s < \frac{(N+2)p}{N+2-2p}.
\end{equation*}
\item[(ii)] If $p = (N+2)/2$, then
\begin{equation*}
\norm{u}_{L^s(\Omega_{T})} \leq C_{T} \quad \text{ for all } \quad s<+\infty.
\end{equation*}
\item[(iii)] If $p > (N+2)/2$, then
\begin{equation*}
\norm{u}_{L^\infty(\Omega_{T})} \leq C_{T}.
\end{equation*}
\end{itemize}
\end{lemma}
\begin{proof}
The proofs of parts (i) and (ii) can be found in \cite[Lemma 3.3]{CDF14}, while part (iii) is proved in \cite[Lemma 4.6]{Tan18}.
\end{proof}
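To illustrate the integrability gain quantified in Lemma \ref{lem:polynomial}, the following short Python sketch (purely illustrative and not part of the argument; the function name and sample values are ours) encodes the three cases:

```python
def heat_smoothing_exponent(N: int, p: float) -> float:
    """Supremum of the exponents s for which an L^p(Omega_T) source yields
    a solution in L^s(Omega_T); float('inf') encodes the cases
    p >= (N+2)/2, where every finite s (or even s = infinity) is allowed."""
    critical = (N + 2) / 2
    if p < critical:
        return (N + 2) * p / (N + 2 - 2 * p)  # case (i): finite gain
    return float('inf')                       # cases (ii) and (iii)

# In dimension N = 3, an L^2 source gives u in L^s for every s < 10.
print(heat_smoothing_exponent(3, 2.0))
```

For instance, in dimension $N=3$ a source in $L^2$ yields a solution in $L^s$ for every $s<10$, while any $p\geq 5/2$ already gives every finite exponent.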
The following lemma is in the same spirit as Lemma \ref{lem:polynomial}, but holds on any time interval $(\tau, T)$, which will be useful in studying the uniform boundedness of solutions.
\begin{lemma}\label{cor:embedding_for_N}
Let $d>0$, $0\leq \tau < T$ and $0\leq \theta \in L^p(\Omega_{\tau,T})$ for some $1<p<\infty$. Let $\psi$ be the solution to
\begin{equation}\label{heat_equation_bw}
\begin{cases}
\partial_t \psi(x,t) + d\Delta \psi(x,t) = -\theta(x,t), &(x,t)\in \Omega_{\tau, T},\\
\nabla \psi(x,t) \cdot \nu(x) = 0, &(x,t)\in \partial\Omega\times(\tau,T),\\
\psi(x,T) = 0, &x\in\Omega.
\end{cases}
\end{equation}
Then
\begin{equation*}
\psi(x,t)\geq 0 \quad \text{ a.e. in } \quad \Omega_{\tau,T}.
\end{equation*}
Moreover, we have the following estimates.
\begin{itemize}
\item[(i)] If $p< (N+2)/2$, then
\begin{equation}\label{item:less}
\norm{\psi}_{L^s(\Omega_{\tau,T})} \leq C(T-\tau,\Omega,d,p,s)\norm{\theta}_{L^p(\Omega_{\tau,T})} \quad \text{ for all } \quad s < \frac{(N+2)p}{N+2-2p}.
\end{equation}
\item[(ii)] If $p = (N+2)/2$, then
\begin{equation}\label{item:equal_N}
\norm{\psi}_{L^s(\Omega_{\tau,T})} \leq C(T-\tau,\Omega,d,p,s)\norm{\theta}_{L^p(\Omega_{\tau,T})} \quad \text{ for all } \quad s<+\infty.
\end{equation}
\item[(iii)]If $p > (N+2)/2$, then
\begin{equation}\label{item:greater_N}
\norm{\psi}_{L^\infty(\Omega_{\tau,T})} \leq C(T-\tau,\Omega,d,p)\norm{\theta}_{L^p(\Omega_{\tau,T})}.
\end{equation}
\end{itemize}
\begin{proof}
At first glance, equation \eqref{heat_equation_bw} looks like a backward heat equation. However, defining $\widetilde{\psi}(x,t) = \psi(x,T+\tau-t)$, we see that $\widetilde{\psi}$ solves the usual heat equation
\begin{equation*}
\begin{cases}
\partial_t \widetilde{\psi}(x,t) - d\Delta\widetilde{\psi}(x,t) = \widetilde{\theta}(x,t), &(x,t)\in \Omega_{\tau,T},\\
\nabla \widetilde{\psi}(x,t)\cdot \nu(x) = 0, &(x,t)\in \partial\Omega\times (\tau,T),\\
\widetilde{\psi}(x,\tau) = 0, &x\in\Omega,
\end{cases}
\end{equation*}
where $\widetilde{\theta}(x,t)=\theta(x,T+\tau-t)$. Since $\theta \geq 0$, the non-negativity of $\widetilde{\psi}$, and hence of $\psi$, follows from the maximum principle. By the maximal regularity principle for the heat equation, see \cite{LSU68}, we have
\begin{equation*}
\norm{\widetilde{\psi}}_{W_{p,\Omega_{\tau,T}}^{(2,1)}} \leq C(T-\tau,\Omega,d,p)\norm{\widetilde{\theta}}_{L^p(\Omega_{\tau,T})},
\end{equation*}
where $W_{p,\Omega_{\tau,T}}^{(2,1)}$ is the Banach space equipped with the norm
\begin{equation*}
\norm{\xi}_{W_{p,\Omega_{\tau,T}}^{(2,1)}} = \sum_{2r+|\beta| \leq 2}\norm{\partial_t^{r}\partial_{x}^{\beta}\xi}_{L^p(\Omega_{\tau,T})}.
\end{equation*}
Using the embedding
\begin{equation*}
W_{p,\Omega_{\tau,T}}^{(2,1)} \hookrightarrow L^s(\Omega_{\tau,T})
\end{equation*}
from \cite{LSU68}, where
$$1\leq s < \frac{(N+2)p}{N+2-2p} \quad\text{if} \quad p<\frac{N+2}{2},$$
$$ 1\leq s < \infty\quad\text{ arbitrary if}\quad p = \frac{N+2}{2},$$
and
$$s=\infty \quad \text{if}\quad p>\frac{N+2}{2},$$
we obtain the desired estimates for $\psi$.
\end{proof}
\end{lemma}
The following lemmas are necessary for the improved duality method.
\begin{lemma}\label{lem_mr}
Let $0\leq \tau < T$ and $\theta \in L^p(\Omega_{\tau,T})$ for some $1<p<\infty$. Let $\psi$ be the solution to the equation
\begin{equation}\label{normalized_diffusion}
\begin{cases}
\partial_t \psi(x,t) + \Delta \psi(x,t)= -\theta(x,t), &(x,t)\in\Omega_{\tau,T},\\
\nabla\psi(x,t) \cdot \nu(x) = 0, &(x,t)\in \partial\Omega\times(\tau,T),\\
\psi(x,T) = 0, &x\in\Omega.
\end{cases}
\end{equation}
Then there exists an optimal constant $C_{\mathrm{mr},p}$ depending only on $p$, the domain $\Omega$, and the dimension $N$ such that the following \emph{maximal regularity} holds
\begin{equation}\label{Cmr}
\norm{\Delta \psi}_{L^p(\Omega_{\tau,T})} \leq C_{\mathrm{mr},p}\norm{\theta}_{L^p(\Omega_{\tau,T})}.
\end{equation}
\end{lemma}
\begin{proof}
The estimate for the forward heat equation can be found in \cite[Theorem 1]{Lam87}; reversing time as in the proof of Lemma \ref{cor:embedding_for_N} (i.e. defining $\widetilde{\psi}$), the result follows.
\end{proof}
\begin{lemma}\label{lem_mr_rescaled}
Let $d>0$, $0\leq \tau < T$ and $\theta \in L^p(\Omega_{\tau,T})$ for some $1<p<\infty$. Let $\psi$ be the solution to \eqref{heat_equation_bw}. Then
\begin{equation}\label{b5}
\norm{\Delta\psi}_{L^p(\Omega_{\tau,T})} \leq \frac{C_{\mathrm{mr},p}}{d}\norm{\theta}_{L^p(\Omega_{\tau,T})}
\end{equation}
where $C_{\mathrm{mr},p}$ is the constant in \eqref{Cmr}.
\end{lemma}
\begin{proof}
Defining $\widehat{\psi}(x,t) = \psi(x,t/d)$, we see that $\widehat{\psi}$ solves
\begin{equation*}
\begin{cases}
\partial_t\widehat{\psi}(x,t) + \Delta \widehat{\psi}(x,t) = -\frac 1d\widehat{\theta}(x,t), &(x,t)\in \Omega_{d\tau, dT},\\
\nabla \widehat{\psi}(x,t) \cdot \nu (x)= 0, &(x,t)\in \partial \Omega\times (d\tau, dT),\\
\widehat{\psi}(x,dT) = 0, &x\in\Omega,
\end{cases}
\end{equation*}
where $\widehat{\theta}(x,t) = \theta(x,t/d)$. From Lemma \ref{lem_mr}, we have
\begin{equation*}
\|\Delta\widehat{\psi}\|_{L^p(\Omega_{d\tau, dT})} \leq \frac{C_{\rm{mr},p}}{d}\|\widehat{\theta}\|_{L^p(\Omega_{d\tau, dT})}
\end{equation*}
or equivalently
\begin{equation*}
\int_{d\tau}^{dT}\int_{\Omega}|\Delta\widehat{\psi}(x,t)|^pdxdt \leq \pa{\frac{C_{\rm{mr},p}}{d}}^p\int_{d\tau}^{dT}\int_{\Omega}|\widehat{\theta}(x,t)|^pdxdt.
\end{equation*}
Making the change of variable $s = t/d$ yields
\begin{equation*}
d\|\Delta \psi\|_{L^p(\Omega_{\tau,T})}^p \leq \frac{C_{\rm{mr},p}^p}{d^{p-1}}\|\theta\|_{L^p(\Omega_{\tau,T})}^p,
\end{equation*}
which is \eqref{b5}.
\end{proof}
\begin{lemma}\label{elementary}
Let $\{a_n\}_{n\in \mathbb N}$ be a non-negative sequence. Define $\mathfrak N = \{n\in \mathbb N: \; a_{n-1} \leq a_n \}$. If there exists a constant $C$ independent of $n$ such that
\begin{equation*}
a_n \leq C \quad \text{ for all } \quad n \in \mathfrak N,
\end{equation*}
then
\begin{equation*}
a_n \leq \max\{a_0; C\} \quad \text{ for all } \quad n \in \mathbb N.
\end{equation*}
\end{lemma}
\begin{proof}
The proof is straightforward: given $n\in\mathbb N$, either $n\in\mathfrak N$, so that $a_n\leq C$, or one can descend along the strictly decreasing values $a_n < a_{n-1} < \cdots$ until reaching either the index $0$ or an index in $\mathfrak N$, so that $a_n \leq \max\{a_0; C\}$. We state the lemma explicitly as we will use it several times.
\end{proof}
\begin{lemma}[A Gronwall-type inequality]\label{Gronwall}
Assume that $y(t)$ is a non-negative function that satisfies
\begin{equation*}
y'(t) \leq \beta(t) + \kappa y(t)^{1-r}, \quad \forall t> 0,
\end{equation*}
for $\kappa>0$, $r\in (0,1)$, and a non-negative function $\beta(t)$. Then
\begin{equation*}
y(t) \leq C\rpa{y(0) + \int_0^t\beta(s)ds + t^{\frac 1r}}, \quad \forall t>0,
\end{equation*}
where the constant $C$ depends only on $\kappa$ and $r$.
\end{lemma}
\begin{proof}
Using Young's inequality, we find that for \emph{any} given $\delta>0$,
\begin{equation*}
y^{1-r} \leq \delta y + r\pa{\frac{1-r}{\delta}}^{\frac{1-r}{r}}.
\end{equation*}
It then follows that
\begin{equation*}
y' \leq \kappa\delta y + \beta(t) + \kappa r\pa{\frac{1-r}{\delta}}^{\frac{1-r}{r}} =: \kappa\delta y + \gamma_{\delta}(t).
\end{equation*}
Gronwall's inequality then implies that
\begin{equation*}
y(t) \leq y(0)e^{\kappa\delta t} + \int_0^t\gamma_{\delta}(s)e^{\kappa\delta(t-s)}ds.
\end{equation*}
Choosing $\delta = (\kappa t)^{-1}$ we obtain
\begin{align*}
y(t) &\leq ey(0) + e\int_0^t\pa{\beta(s) + \kappa^{\frac 1r}r\pa{1-r}^{\frac{1-r}{r}}t^{\frac{1-r}{r}}}ds\\
&\leq ey(0) + e\int_0^t\beta(s)ds + e\kappa^{\frac 1r}r(1-r)^{\frac{1-r}{r}}t^{\frac 1r}\\
&\leq \max\br{e,e\kappa^{\frac 1r}r(1-r)^{\frac{1-r}{r}}}\pa{y(0) + \int_0^t\beta(s)ds + t^{\frac 1r}}
\end{align*}
which finishes the proof.
\end{proof}
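As a quick numerical sanity check of the lemma (not used anywhere in the analysis), one can integrate the worst case $y'=\kappa y^{1-r}$, i.e. $\beta\equiv 0$, with forward Euler and verify the stated bound; the parameter values below are arbitrary:

```python
import math

def gronwall_bound_holds(y0: float, kappa: float, r: float, T: float,
                         steps: int = 100_000) -> bool:
    """Forward-Euler integration of y' = kappa * y**(1 - r) (beta = 0),
    checked against y(T) <= C * (y0 + T**(1/r)) with the constant
    C = max(e, e * kappa**(1/r) * r * (1 - r)**((1 - r)/r)) from the proof."""
    dt = T / steps
    y = y0
    for _ in range(steps):
        y += dt * kappa * y ** (1.0 - r)
    C = max(math.e,
            math.e * kappa ** (1.0 / r) * r * (1.0 - r) ** ((1.0 - r) / r))
    return y <= C * (y0 + T ** (1.0 / r))
```

For instance, with $\kappa=1$, $r=1/2$ and $y(0)=1$ the exact solution is $y(t)=(1+t/2)^2$, which sits comfortably below the bound $e\,(1+t^2)$.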
\subsection{Proof of Proposition \ref{Global3D}}\label{3D_proof}
The proof of Proposition \ref{Global3D} is quite involved. We shall show the existence and uniqueness of a classical solution, together with \eqref{3D_u124_Linfty}, in Proposition \ref{global3D}. Estimate \eqref{3D_u3} will be shown in Lemma \ref{lem:L_infty_bound_on_u_3}, and Lemma \ref{3D_u4LN2} will prove \eqref{3D_u124}.\\
The first step in achieving a proof is the following improved duality lemma.
\begin{lemma}\label{lem_b1}
Assume that condition \eqref{quasi-uniform} is satisfied. Then, any classical solution to \eqref{eq:sys} on $\Omega_{T}$ satisfies
\begin{equation}\label{u123_Lp}
\|u_i\|_{L^{p_0}(\Omega_T)} \leq C_T \quad \text{ for } \quad i=1,2,3,
\end{equation}
where $p_0 = \frac{p_0'}{p_0'-1}$ with $p_0'$ as in \eqref{quasi-uniform}, and the constant $C_T$ depends on the initial data $\br{u_{i,0}}_{i=1,\dots,3}$ and grows at most algebraically in $T$.
\end{lemma}
\begin{proof}
Without loss of generality, we assume that \eqref{quasi-uniform} holds for $i=1$. Fix $0\leq \theta\in L^{p_0'}(\Omega_T)$ and let $\psi$ be the solution to \eqref{heat_equation_bw} with $\tau = 0$, and $d = \frac{d_1+d_3}{2}$. Thanks to Lemma \ref{lem_mr_rescaled} we know that
\begin{equation}\label{2star}
\|\Delta \psi\|_{L^{p_0'}(\Omega_T)} \leq \frac{2C_{\mathrm{mr},p_0'}}{d_1+d_3}\norm{\theta}_{L^{p_0'}(\Omega_T)}.
\end{equation}
Using integration by parts together with the fact that
\begin{equation}\label{eq:u_1+u_3}
\partial_t \pa{u_1(x,t)+u_3(x,t) }= d_1\Delta u_1(x,t)+d_3\Delta u_3(x,t)
\end{equation}
we find that
\begin{equation}\label{b1}
\begin{aligned}
&\int_0^T\int_{\Omega}(u_1(x,t)+u_3(x,t))\theta(x,t) dxdt\\
&= \int_0^T\int_{\Omega}(u_1(x,t)+u_3(x,t))(-\partial_t \psi(x,t) - d\Delta \psi(x,t))dxdt\\
&= \int_{\Omega}(u_{1,0}(x)+u_{3,0}(x))\psi(x,0)dx + (d_1 - d)\int_0^T\int_{\Omega}u_1(x,t)\Delta \psi(x,t) dxdt\\
&+ (d_3 - d)\int_0^T\int_{\Omega} u_3(x,t)\Delta \psi(x,t) dxdt \leq \norm{u_{1,0}+u_{3,0}}_{L^{p_0}(\Omega)}\norm{\psi(0)}_{L^{p_0'}(\Omega)} \\&+ \frac{|d_1 - d_3|}{2}\norm{u_1 + u_3}_{L^{p_0}(\Omega_T)}\norm{\Delta \psi}_{L^{p_0'}(\Omega_T)}\\
&\leq \norm{u_{1,0}+u_{3,0}}_{L^{p_0}(\Omega)}\norm{\psi(0)}_{L^{p_0'}(\Omega)} + \frac{|d_1 - d_3|}{d_1+d_3}C_{\rm{mr}, p_0'}\norm{u_1 + u_3}_{L^{p_0}(\Omega_T)}\norm{\theta}_{L^{p_0'}(\Omega_T)}
\end{aligned}
\end{equation}
where the last inequality follows from \eqref{2star}. Since $\psi$ also satisfies
\begin{equation*}
\norm{\partial_t \psi}_{L^{p_0'}(\Omega_T)} \leq \frac{d_1+d_3}{2}\norm{\Delta \psi}_{L^{p_0'}(\Omega_T)} + \norm{\theta}_{L^{p_0'}(\Omega_T)} \leq \pa{C_{\rm{mr},p_0'} + 1}\norm{\theta}_{L^{p_0'}(\Omega_T)},
\end{equation*}
where we used \eqref{2star} again, we find that
\begin{equation*}
\norm{\psi(0)}_{L^{p_0'}(\Omega)}^{p_0'} = \int_{\Omega}\abs{\int_0^T\partial_t\psi(x,s) ds}^{p_0'}dx \leq T^{\frac{p_0^\prime}{p_0}}\norm{\partial_t\psi}_{L^{p_0'}(\Omega_T)}^{p_0'},
\end{equation*}
and as such
\begin{equation*}
\norm{\psi(0)}_{L^{p_0'}(\Omega)} \leq T^{1/p_0}\pa{C_{\rm{mr}, p_0'} + 1}\norm{\theta}_{L^{p_0'}(\Omega_T)}.
\end{equation*}
Inserting this into \eqref{b1}, and using duality we obtain
\begin{align*}
\norm{u_1+u_3}_{L^{p_0}(\Omega_T)} \leq \norm{u_{1,0}+u_{3,0}}_{L^{p_0}(\Omega)}\pa{C_{\rm{mr}, p_0'} + 1}T^{1/p_0}\\
+ \frac{|d_1 - d_3|}{d_1+d_3}C_{\rm{mr}, p_0'}\norm{u_1+u_3}_{L^{p_0}(\Omega_T)}.
\end{align*}
Thus, since \eqref{quasi-uniform} is valid, we have that
\begin{equation*}
\norm{u_1+u_3}_{L^{p_0}(\Omega_T)} \leq \frac{C_{\rm{mr}, p_0'} + 1}{1-\frac{|d_1 - d_3|}{d_1+d_3}C_{\rm{mr}, p_0'}}\norm{u_{1,0}+u_{3,0}}_{L^{p_0}(\Omega)}T^{1/p_0}=:C_T.
\end{equation*}
Since $u_1$ and $u_3$ are non-negative, this estimate also holds for $u_1$ and $u_3$ individually. \\
To deal with $u_2$ we remember that \eqref{eq:sys} implies that
\begin{equation*}
\partial_t(u_2 + u_3)(x,t) - \Delta(d_2u_2 + d_3u_3)(x,t) = 0
\end{equation*}
and therefore, using \cite[Lemma 33.3, page 291]{QS07}, we find that
\begin{equation*}
\norm{u_2}_{L^{p_0}(\Omega_T)} \leq C\pa{\norm{u_{2,0}+u_{3,0}}_{L^{p_0}(\Omega)} + \norm{u_3}_{L^{p_0}(\Omega_T)}} \leq C_T,
\end{equation*}
which completes the proof.
\end{proof}
\begin{proposition}\label{global3D}
Assume the quasi-uniform condition \eqref{quasi-uniform} for the diffusion coefficients holds. Given any smooth, bounded, and non-negative initial data that satisfies the compatibility condition, \eqref{eq:sys} admits a unique classical global, bounded solution which obeys the estimate \eqref{3D_u124_Linfty}.
\end{proposition}
\begin{proof}
To begin with, we notice that since the non-linearities on the right hand side of \eqref{eq:sys} are locally Lipschitz, the local existence of a classical bounded solution on a maximal interval $(0,T_{\max})$ is a classical result (see \cite{Ama85}). Furthermore, as the non-linearities are quasi-positive, that is $f_i(\bm{u})\geq 0$ whenever $\bm{u} \in \mathbb R_+^4$ and $u_i = 0$, we know that if the initial data is non-negative, then the solution, as long as it exists, must also be non-negative (see e.g. \cite{Pie10}). Moreover,
\begin{equation}\label{global_criterion}
\text{ if } \; \sup_{i=1,\ldots, 4}\limsup_{t\to T_{\max}^-}\|u_i(t)\|_{L^\infty(\Omega)}<+\infty,
\end{equation}
then $T_{\max}=\infty$, and the solution is in fact global.
From Lemma \ref{lem_b1} we have that for all $0<T<T_{\max}$,
\begin{equation*}
\norm{u_i}_{L^{p_0}(\Omega_T)} \leq C_T \quad \text{ for } \quad i=1,2,3
\end{equation*}
where $C_T$ \textit{grows at most algebraically} in $T$, and $p_0 = \frac{p_0'}{p_0'-1} > \frac{N+2}{2}$. We use that together with our system of PDEs to bootstrap our norm estimate until we reach $L^\infty$.
The non-negativity of the solution implies that
\begin{equation*}
\partial_t u_3(x,t) -d_3\Delta u_3(x,t) \leq u_1(x,t)u_2(x,t).
\end{equation*}
Moreover, since $u_1u_2 \in L^{\frac{p_0}{2}}(\Omega_T)$ we can use the comparison principle for the heat equation (see, for instance, \cite[Theorem 2.2.1]{Pao12}) and the maximal regularity from Lemma \ref{lem:polynomial} to conclude that
\begin{equation}\label{b2}
\|u_3\|_{L^{p_1-}(\Omega_T)} \leq C_{T,p_1-}
\end{equation}
with $p_1 = \frac{(N+2)\frac{p_0}{2}}{N+2-p_0}$ if $N+2 > p_0$ and $p_1 = +\infty$ otherwise, where the notation above means that for any $s<p_1\leq +\infty$
$$\norm{u_3}_{L^{s}(\Omega_T)} \leq C_{T,s}.$$
Using the fact that
\begin{equation*}
\partial_t(u_1 + u_3)(x,t)- \Delta(d_1u_1 + d_3u_3) (x,t)= 0, \quad \quad \partial_t(u_2 + u_3)(x,t) - \Delta(d_2u_2 + d_3u_3)(x,t) = 0
\end{equation*}
and applying \cite[Lemma 33.3]{QS07} again, we conclude, due to \eqref{b2}, that
\begin{equation*}
\norm{u_1}_{L^{p_1-}(\Omega_T)} \leq C_{T,p_1-} \quad \text{ and } \quad \norm{u_2}_{L^{p_1-}(\Omega_T)} \leq C_{T,p_1-}.
\end{equation*}
If $p_1=+\infty$, we stop here. Otherwise, we repeat this process and construct a sequence $\{p_n\}_{n\geq 0}$ such that
$$p_{n+1}=\begin{cases} \frac{(N+2)\frac{p_n}{2}}{N+2-p_n} & N+2 > p_n\\
+\infty & \text{otherwise},
\end{cases}$$
and
\begin{equation*}
\norm{u_i}_{L^{p_n-}(\Omega_T)} \leq C_T \quad \text{ for } \quad i=1,2,3 \quad \text{ and } \quad n\ge 1.
\end{equation*}
We claim that since $p_0>\frac{N+2}{2}$, there must exist $n_0\geq 1$ such that $p_{n_0} \geq N+2$. Indeed, assume that $p_n < N+2$ for all $n\geq 1$. Then by definition, we have that
\begin{equation*}
\frac{p_{n+1}}{p_n} = \frac{N+2}{2(N+2-p_n)}.
\end{equation*}
which we claim is strictly greater than $1$. Since $p_0 > \frac{N+2}{2}$ we have that
$$\frac{p_1}{p_0}=\frac{N+2}{2(N+2-p_0)} > \frac{N+2}{2(N+2-\frac{N+2}{2})}=1.$$
Continuing by induction we assume that $\frac{p_{n}}{p_{n-1}}>1$ and conclude that
$$\frac{p_{n+1}}{p_n}=\frac{N+2}{2(N+2-p_n)} > \frac{N+2}{2(N+2-p_{n-1})}=\frac{p_n}{p_{n-1}}>1.$$
Thus, $\br{p_n}_{n\in\mathbb{N}}$ is increasing and satisfies
$$p_0 < p_n < N+2,\quad \forall n\geq 1.$$
This implies that $\br{p_n}_{n\in\mathbb{N}}$ converges to a finite limit $p>p_0$, that must satisfy
$$1=\frac{N+2}{2\pa{N+2-p}},$$
which is, of course, impossible. We have thus found $n_0 \geq 1$ such that $p_{n_0} \geq N+2$. Applying Lemma \ref{lem:polynomial} one more time to
\begin{equation}\label{eq:u3_eq_for_bootstrap}
\partial_t u_3 (x,t)- d_3\Delta u_3 (x,t)\leq u_1(x,t)u_2(x,t),
\end{equation}
now for $u_1u_2 \in L^{\frac{p_{n_0}}{2}}(\Omega_T)$ and $\frac{p_{n_0}}{2} \geq \frac{N+2}{2}$, we find that
\begin{equation*}
\norm{u_3}_{L^q(\Omega_T)} \leq C_{q,T} \quad \text{ for all } \quad 1\leq q < +\infty.
\end{equation*}
Consequently, as before,
\begin{equation*}
\norm{u_1}_{L^q(\Omega_T)} + \norm{u_2}_{L^q(\Omega_T)} \leq C_{q,T} \quad \text{ for all } \quad 1\leq q < +\infty.
\end{equation*}
Looking at \eqref{eq:u3_eq_for_bootstrap}, with the above knowledge, we use the comparison principle and Lemma \ref{lem:polynomial} again, this time with any $q_0>N+2$, to conclude that
\begin{equation}\label{eq:u3_infty}
\norm{u_3}_{L^\infty(\Omega_T)} \leq C_{T,q_0}.
\end{equation}
To obtain boundedness of $u_1, u_2$ and $u_4$ we first observe that
\begin{equation}\label{b3}
u_4(x,t) \leq u_{4,0}(x) + \int_0^tu_1(x,s)u_2(x,s)ds,
\end{equation}
which implies for any $1\leq q <\infty$,
\begin{equation*}
\begin{gathered}
\norm{u_4}_{L^q(\Omega_T)} \leq T^{\frac 1q}\sup_{t\in (0,T)}\norm{u_4(t)}_{L^q(\Omega)} \leq T^{\frac 1q}\pa{\norm{u_{4,0}}_{L^q(\Omega)} + T^{\frac{1}{q^\prime}}\norm{u_1u_2}_{L^q(\Omega_T)}}\\
\leq T^{\frac 1q}\pa{\norm{u_{4,0}}_{L^q(\Omega)} + T^{\frac{1}{q^\prime}}\norm{u_1}_{L^{2q}(\Omega_T)}\norm{u_2}_{L^{2q}(\Omega_T)}}\leq C_{T,q}.
\end{gathered}
\end{equation*}
Together with \eqref{eq:u3_infty} we find that $u_3u_4 \in L^q(\Omega_T)$ for any $1\leq q<\infty$, with a time bound that grows at most algebraically in $T$ (and depends on $q$). Since
$$\partial_t u_1 (x,t)- d_1\Delta u_1 (x,t)\leq u_3(x,t)u_4(x,t), $$
$$\partial_t u_2 (x,t)- d_2\Delta u_2(x,t) \leq u_3(x,t)u_4(x,t) $$
we use the comparison principle and Lemma \ref{lem:polynomial} to see that
\begin{equation*}
\norm{u_1}_{L^\infty(\Omega_T)} \leq C_{T}
\end{equation*}
and
\begin{equation*}
\norm{u_2}_{L^\infty(\Omega_T)} \leq C_{T}.
\end{equation*}
Lastly, the above, together with \eqref{b3}, imply that
\begin{equation*}
\norm{u_4}_{L^\infty(\Omega_T)} \leq C_T,
\end{equation*}
and we can conclude that \eqref{global_criterion} is satisfied, completing the proof.
\end{proof}
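The exponent bootstrap in the proof above is explicit enough to reproduce numerically; the following sketch (with $N$ and $p_0$ chosen arbitrarily, subject to $p_0>(N+2)/2$) iterates the recursion defining $\br{p_n}$ until an exponent of at least $N+2$ is reached:

```python
def bootstrap_exponents(N: int, p0: float, max_iter: int = 100) -> list:
    """Iterate p_{n+1} = (N+2) * (p_n/2) / (N+2 - p_n), stopping once
    p_n >= N + 2 (at which point the next exponent is +infinity)."""
    assert p0 > (N + 2) / 2, "the proof requires p_0 > (N+2)/2"
    ps = [p0]
    for _ in range(max_iter):
        p = ps[-1]
        if p >= N + 2:
            break
        ps.append((N + 2) * (p / 2) / (N + 2 - p))
    return ps

# For N = 3 and p_0 = 2.6, four iterations suffice to exceed N + 2 = 5.
print(bootstrap_exponents(3, 2.6))
```

The strict monotonicity of the computed sequence mirrors the induction argument showing $p_{n+1}/p_n>1$.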
\begin{remark}\label{rem:existence_sol_no_dim}
A careful look at the proofs of Lemma \ref{lem_b1} and Proposition \ref{global3D} shows that we have not used any condition on the dimension $N$. Indeed, as long as condition \eqref{quasi-uniform} is satisfied, with $C_{\mathrm{mr},p_0^\prime}$ the appropriate constant from Lemma \ref{lem_mr} (even for $N=1,2$), we can conclude the existence of a global classical solution, which is non-negative when the initial data is non-negative. \\
An immediate consequence of this, which we will use in what follows, is that Remark \ref{rem:mass_bounds} is valid, and we also obtain estimates \eqref{eq:mass_bounds}, i.e.
$$\norm{u_i(t)}_{L^1\pa{\Omega}} \leq M_{ij},\quad\quad i=1,2,\;j=3,4.$$
\end{remark}
To fully show Proposition \ref{Global3D}, we now turn our attention to prove the norm estimates \eqref{3D_u3} and \eqref{3D_u124}. The key idea we will employ to show these estimates is to study the system \eqref{eq:sys} on cylinders $\Omega_{\tau,\tau+1}$ with $\tau \in \mathbb{R}_{+}$, and to find corresponding estimates {\it uniformly in $\tau$}.\\
In what follows we will always assume that $\br{u_i}_{i=1,\dots,4}$ are classical solutions to our system of equations, and will write our lemmas more succinctly.
\begin{lemma}\label{lem:some_L_p_estimaes_for_123}
Assume that \eqref{quasi-uniform} holds for $i_0\in \{1,2\}$. Then
\begin{equation}\label{eq:some_L_p_estimates_for_123}
\sup_{\tau \geq 0}\pa{\|u_{i_0}\|_{L^{p_0}(\Omega_{{\tau,\tau+1}})}+\|u_3\|_{L^{p_0}(\Omega_{{\tau,\tau+1}})}} \leq C_0,
\end{equation}
where the constant $C_0$ depends only on $p_0^\prime$, the diffusion coefficients $d_1$, $d_2$, $d_3$, the domain $\Omega$, the initial masses $M_{ij}$, and $\br{\norm{u_i}_{L^{\infty}(\Omega_{0,1})}}_{i=1,2,3}$.
\end{lemma}
\begin{proof}
Without loss of generality, we assume that \eqref{quasi-uniform} holds for $i_0=1$. For any $0\leq \theta \in L^{p_0'}\pa{\Omega_{{\tau,\tau+2}}}$ we set $\psi_{\theta}$ to be the solution of \eqref{heat_equation_bw} with final time $T:= \tau + 2$, diffusion coefficient $d=\frac{d_1+d_3}{2}$ and inhomogeneous source $-\theta$. From Lemma \ref{lem_mr_rescaled} we know that
\begin{equation}\label{eq:est_for_lapl}
\|\Delta \psi_{\theta}\|_{L^{p_0'}(\Omega_{\tau,\tau+2})} \leq \frac{2C_{\rm{mr},p_0'}}{d_1+d_3}\|\theta\|_{L^{p_0'}(\Omega_{\tau,\tau+2})}.
\end{equation}
Lemma \ref{cor:embedding_for_N}, and the fact that
$$p_0^\prime = \frac{p_0}{p_0-1} < \frac{N+2}{N} \underset{N\geq 3}{<}\frac{N+2}{2}$$
assures us that for any $q<\frac{(N+2)p_0'}{N+2-2p_0'}$
\begin{equation}\label{eq:some_L_p_estimates_proof_I}
\|\psi_{\theta}\|_{L^{q}(\Omega_{\tau,\tau+2})} \leq C(p_0',d,\Omega)\|\theta\|_{L^{p_0'}(\Omega_{\tau,\tau+2})},
\end{equation}
where the constant $C(p_0', d, \Omega)$ in the above is independent of $\tau$. Let $\phi(s)$ be a $C^\infty\pa{\mathbb{R}}$ function such that $\phi(0)=0$, $0\leq\phi\leq 1$ and $\phi\vert_{[1,\infty)}\equiv 1$. Defining the shifted function $\varphi_{\tau}(s)=\phi\pa{s-\tau}$, so that $\varphi_\tau(\tau)=0$, we notice that
\begin{equation}\nonumber
\begin{gathered}
\partial_t(\varphi_\tau(t) u_1(x,t)) - d_1\Delta(\varphi_\tau(t) u_1(x,t)) = u_1(x,t)\varphi^\prime_\tau(t) + \varphi_\tau(t)\pa{\partial_t u_1(x,t)-d_1\Delta u_1(x,t)}\\
=u_1(x,t)\varphi^\prime_\tau(t) + \varphi_\tau(t) f_1\pa{\bm{u}(x,t)}.
\end{gathered}
\end{equation}
\end{equation}
Using the fact that $\psi_{\theta}(x,\tau+2)=0$ and $\varphi_\tau(\tau)u_1(x,\tau)=0$, we see by integration by parts that
\begin{align*}
&\int_\tau^{\tau+2}\int_{\Omega}(\varphi_\tau(t) u_1(x,t))\theta(x,t) dxdt\\
&=-\int_\tau^{\tau+2}\int_{\Omega}(\varphi_\tau(t) u_1(x,t))(\partial_t \psi_{\theta}(x,t) + \pa{d_1+d-d_1}\Delta \psi_{\theta}(x,t))dxdt\\
&=\int_\tau^{\tau+2}\int_{\Omega}\psi_{\theta}(x,t)\pa{\partial_t(\varphi_\tau(t) u_1(x,t)) - d_1\Delta(\varphi_\tau(t) u_1(x,t))}dxdt \\
&\quad + (d_1 - d)\int_\tau^{\tau+2}\int_{\Omega}(\varphi_\tau(t) u_1(x,t))\Delta \psi_{\theta}(x,t) dxdt\\
&= \int_\tau^{\tau+2}\int_{\Omega}\psi_{\theta}(x,t)\pa{\varphi_\tau^\prime (t) u_1(x,t) + \varphi_\tau(t) f_1(\bm{u}(x,t))}dxdt\\
&\quad + (d_1 - d)\int_\tau^{\tau+2}\int_{\Omega}(\varphi_\tau(t) u_1(x,t))\Delta \psi_{\theta}(x,t) dxdt.
\end{align*}
Similarly,
\begin{align*}
&\int_\tau^{\tau+2}\int_{\Omega}(\varphi_\tau(t) u_3(x,t))\theta(x,t) dxdt\\
&= \int_\tau^{\tau+2}\int_{\Omega}\psi_{\theta}(x,t)\pa{\varphi_\tau^\prime (t) u_3(x,t) + \varphi_\tau(t) \underbrace{f_3(\bm{u}(x,t))}_{-f_1\pa{\bm{u}(x,t)}}}dxdt\\
& \quad + (d_3 - d)\int_\tau^{\tau+2}\int_{\Omega}\pa{\varphi_\tau(t) u_3(x,t)}\Delta \psi_{\theta}(x,t) dxdt.
\end{align*}
Summing these two equalities, and using the non-negativity of the solutions together with the choice $d=\frac{d_1+d_3}{2}$, yields
\begin{equation}\label{eq:some_L_p_estimates_proof_II}
\begin{aligned}
&\int_{\tau}^{\tau+2}\int_{\Omega}(\varphi_\tau(t)(u_1(x,t)+u_3(x,t)))\theta(x,t) dxdt\\
&\leq \int_{\tau}^{\tau+2}\int_{\Omega}\psi_{\theta}(x,t) \varphi_\tau^\prime (t)(u_1(x,t) + u_3(x,t))dxdt\\
&+ \frac{\abs{d_1-d_3}}{2}\int_{\tau}^{\tau+2}\int_{\Omega}\varphi_\tau(t)(u_1(x,t) + u_3(x,t))\abs{\Delta \psi_{\theta}(x,t)}dxdt=\mathcal{I}+\mathcal{II}.
\end{aligned}
\end{equation}
Using H\"older's inequality together with \eqref{eq:est_for_lapl} we find that
\begin{equation}\label{eq:some_L_p_estimates_proof_III}
\begin{aligned}
\mathcal{II} &\leq \frac{|d_1-d_3|}{2}\|\Delta \psi_{\theta}\|_{L^{p_0'}(\Omega_{\tau,\tau+2})}\|\varphi_\tau(u_1 + u_3)\|_{L^{p_0}(\Omega_{\tau,\tau+2})}\\
&\leq \frac{|d_1-d_3|}{d_1+d_3}C_{\rm{mr},p_0'}\| \theta\|_{L^{p_0'}(\Omega_{\tau,\tau+2})}\|\varphi_\tau(u_1 + u_3)\|_{L^{p_0}(\Omega_{\tau,\tau+2})}
\end{aligned}
\end{equation}
To estimate $\mathcal{I}$, we pick some $1<q^\prime<p_0$ such that its H\"older conjugate, $q = \frac{q'}{q'-1}$, satisfies
\begin{equation}\nonumber
q < \frac{(N+2)p_0'}{N+2-2p_0'}.
\end{equation}
This is possible since $q^\prime < p_0$ if and only if $p_0'<q$, and as $p_0^\prime<\frac{(N+2)p_0'}{N+2-2p_0'}$, we can easily choose $q$ between them. Using Lemma \ref{cor:embedding_for_N} again, we find that
\begin{equation}\nonumber
\|\psi_{\theta}\|_{L^{q}(\Omega_{\tau,\tau+2})} \leq C\pa{p_0',d,\Omega}\|\theta\|_{L^{p_0'}(\Omega_{\tau,\tau+2})},
\end{equation}
and as such, using the fact that $\varphi^{\prime}_{\tau}\vert_{\rpa{\tau+1,\tau+2}}=0$ and $|\varphi_\tau'| \leq C$,
\begin{equation}\label{eq:some_L_p_estimates_proof_IV}
\begin{gathered}
\abs{\mathcal{I}}\leq C\pa{p_0',d,\Omega} \|\psi_\theta\|_{L^{q}(\Omega_{\tau,\tau+2})}\|u_1+u_3\|_{L^{q'}(\Omega_{\tau,\tau+1})}\\
\leq C\pa{p_0',d,\Omega}\|\theta\|_{L^{p_0'}(\Omega_{\tau,\tau+2})}\|u_1+u_3\|_{L^{q'}(\Omega_{\tau,\tau+1})}
\end{gathered}
\end{equation}
Since $1< q^\prime < p_0$, we know that for $0<\alpha<1$ such that
$$\frac{1}{q^\prime} = \frac{\alpha}{1} + \frac{1-\alpha}{p_0}$$
one has
\begin{equation}\nonumber
\|f\|_{L^{q^\prime}(\Omega)} \leq \|f\|_{L^1(\Omega)}^{\alpha}\|f\|_{L^{p_0}(\Omega)}^{1-\alpha},
\end{equation}
which, together with \eqref{eq:some_L_p_estimates_proof_IV}, yield the estimate
\begin{equation}\label{eq:some_L_p_estimates_proof_V}
\begin{gathered}
\abs{\mathcal{I}}\leq C\pa{p_0',d,\Omega}\|\theta\|_{L^{p_0'}(\Omega_{\tau,\tau+2})}M_{13}^\alpha\|u_1+u_3\|^{1-\alpha}_{L^{p_0}(\Omega_{\tau,\tau+1})},
\end{gathered}
\end{equation}
where we have used the non-negativity of the solutions, and the mass conservation property again. By inserting \eqref{eq:some_L_p_estimates_proof_III} and \eqref{eq:some_L_p_estimates_proof_V} into \eqref{eq:some_L_p_estimates_proof_II} we obtain
\begin{equation}\nonumber
\begin{aligned}
&\int_{\tau}^{\tau+2}\int_{\Omega}(\varphi_\tau(t)(u_1(x,t)+u_3(x,t)))\theta(x,t) dxdt \\
&\leq \|\theta\|_{L^{p_0'}(\Omega_{\tau,\tau+2})}\left(C\|u_1+u_3\|_{L^{p_0}(\Omega_{\tau,\tau+1})}^{1-\alpha} +\frac{|d_1-d_3|}{d_1+d_3}C_{\rm{mr},p_0'}\|\varphi_\tau(u_1+u_3)\|_{L^{p_0}(\Omega_{\tau,\tau+2})}\right)
\end{aligned}
\end{equation}
with $C>0$ depending only on $p_0'$, $d$, $\Omega$ and $M_{13}$. As $\theta$ was arbitrary, we conclude that
\begin{equation}\nonumber
\|\varphi_\tau (u_1+u_3)\|_{L^{p_0}(\Omega_{\tau,\tau+2})} \leq C\|u_1+u_3\|_{L^{p_0}(\Omega_{\tau,\tau+1})}^{1-\alpha} + \frac{|d_1-d_3|}{d_1+d_3}C_{\rm{mr},p_0'}\|\varphi_\tau(u_1+u_3)\|_{L^{p_0}(\Omega_{\tau,\tau+2})},
\end{equation}
which translates to
\begin{equation}\label{eq:some_L_p_estimates_proof_VI}
\|\varphi_\tau (u_1+u_3)\|_{L^{p_0}(\Omega_{\tau,\tau+2})} \leq \frac{C}{1-\frac{|d_1-d_3|}{d_1+d_3}C_{\rm{mr},p_0'}}\|u_1+u_3\|_{L^{p_0}(\Omega_{\tau,\tau+1})}^{1-\alpha}
\end{equation}
thanks to \eqref{quasi-uniform}.
\medskip
To show the uniform in $\tau$ boundedness of $\norm{u_1+u_3}_{L^{p_0}(\Omega_{\tau,\tau+1})}$ it will be enough to show the boundedness of the sequence
$$a_n=\norm{u_1+u_3}_{L^{p_0}(\Omega_{n,n+1})}$$
where $n\in \mathbb{N} \cup\br{0}$, as
$$\norm{u_1+u_3}_{L^{p_0}(\Omega_{\tau,\tau+1})}\leq a_{\rpa{\tau}}+a_{\rpa{\tau}+1}.$$
We start by noticing that \eqref{eq:some_L_p_estimates_proof_VI}, the fact that $\varphi_\tau\vert_{\rpa{\tau+1,\tau+2}}\equiv 1$, and the fact that $\varphi_\tau,u_1$ and $u_3$ are non-negative, imply that
\begin{equation}\nonumber
a_{n+1} \leq \|\varphi_n (u_1+u_3)\|_{L^{p_0}(\Omega_{n,n+2})} \leq \frac{C}{1-\frac{|d_1 - d_3|}{d_1+d_3}C_{\rm{mr}, p_0'}}\|u_1+u_3\|_{L^{p_0}(\Omega_{n,n+1})}^{1-\alpha}=\mathcal{C} a_n^{1-\alpha}.
\end{equation}
Thus if $a_n \leq a_{n+1}$ we must have that
$$a_n \leq \mathcal{C}a_n^{1-\alpha},$$
resulting in $a_n \leq \mathcal{C}^{\frac{1}{\alpha}}.$ From this we infer that
$$a_{n+1} \leq \mathcal{C}\pa{\mathcal{C}}^{\frac{1-\alpha}{\alpha}}.$$
At this point we recall Lemma \ref{elementary}, and conclude our claim. Thus, there exists a constant $C_0$ that depends only on $p_0^\prime$, $d_1$, $d_3$, $\Omega$ and $M_{13}$ such that
$$\sup_{\tau \geq 0} \norm{u_1+u_3}_{L^{p_0}(\Omega_{\tau,\tau+1})} \leq C_0.$$
This together with the non-negativity of $\br{u_i}_{i=1,3}$ finishes the proof of Lemma \ref{lem:some_L_p_estimaes_for_123} for $u_1$ and $u_3$.
\end{proof}
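The boundedness mechanism for the sequence $\br{a_n}$ used above is easy to check numerically: below is a minimal sketch (with arbitrarily chosen constants) of the worst-case recursion $a_{n+1}=\mathcal{C}a_n^{1-\alpha}$, whose orbit never exceeds $\max\{a_0;\mathcal{C}^{1/\alpha}\}$, in line with Lemma \ref{elementary}:

```python
def recursion_sup(a0: float, C: float, alpha: float, n_max: int = 200) -> float:
    """Supremum of the orbit of a_{n+1} = C * a_n**(1 - alpha); the orbit
    converges monotonically towards the fixed point C**(1/alpha), so the
    supremum is bounded by max(a0, C**(1/alpha))."""
    a, sup = a0, a0
    for _ in range(n_max):
        a = C * a ** (1.0 - alpha)
        sup = max(sup, a)
    return sup
```

Starting below the fixed point the orbit increases towards $\mathcal{C}^{1/\alpha}$; starting above it the orbit decreases, so the supremum is $a_0$ itself.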
\begin{remark}\label{rem:dimension_usage}
It is worthwhile to note that in the above proof the only point where we have used the fact that $N\geq 3$ is in using \eqref{item:less} from Lemma \ref{cor:embedding_for_N} for $p_0^\prime $ which was less than $\frac{N+2}{2}$. In dimensions $1$ and $2$, there is a chance that $p^\prime_0$ will not satisfy this condition, but according to \eqref{item:equal_N} and \eqref{item:greater_N} from the same lemma, we can still get inequality \eqref{eq:some_L_p_estimates_proof_I} {\it for any $q<+\infty$}. As such, the result remains valid when $N=1,2$, as long as condition \eqref{quasi-uniform} is satisfied.
\end{remark}
Now that we have an initial $L^{p_0}\pa{\Omega_{\tau,\tau+1}}$ bound on $u_{i_0}$ and $u_3$, with $i_0=1$ or $i_0=2$ depending on which index satisfies \eqref{quasi-uniform}, we continue to explore the interconnections of the solutions. In particular, we show that any $L^q\pa{\Omega_{\tau,\tau+1}}$ estimate for $u_3$ that is uniform in $\tau$ can be transferred to $u_1$ and $u_2$. This is, in a sense, a generalisation of \cite[Lemma 33.3]{QS07} in which we exploit the uniform $L^1(\Omega)$ bounds.
\begin{lemma}\label{lem:second_duality}
If $\sup_{\tau\in \mathbb N}\|u_3\|_{L^p(\Omega_{\tau,\tau+1})} \leq C_p$ for some $1<p<\infty$, then
\begin{equation}\nonumber
\sup_{\tau\in \mathbb N}\|u_i\|_{L^p(\Omega_{\tau,\tau+1})} \leq \mathcal{C}_p \quad \text{ for } \quad i=1,2,
\end{equation}
where $\mathcal{C}_p$ is a constant that depends only on $C_p$, $p$, $N$, $\Omega$, $d_1$, $d_3$, $M_{ij}$ and $\br{\norm{u_i}_{L^p(\Omega_{{0,1}})}}_{i=1,2}$.
\end{lemma}
\begin{proof}
The proof follows similar ideas to the proof of Lemma \ref{lem:some_L_p_estimaes_for_123}. Once again, we choose a function $\varphi_\tau(s)=\phi(s-\tau)$, where $\phi$ is a $C^\infty$ function such that $\phi(0)=0$, $0\leq \phi \leq 1$, and $\phi \vert_{[1,\infty)}\equiv 1$. Let $\psi_{\theta}$ be the solution to \eqref{heat_equation_bw}, with $d=d_1$ and $\theta\in L^{p^\prime}\pa{\Omega_{\tau,\tau+2}}$, where $p^\prime$ is the H\"older conjugate of $p$. Then, using integration by parts, the fact that $\varphi_\tau(\tau)=0$, $\psi_\theta\pa{x,\tau+2}=0$ and the equations for $u_1$ and $u_3$ from \eqref{eq:sys}, we find that
\begin{align*}
&\int_{\tau}^{\tau+2}\int_{\Omega}(\varphi_\tau(t) u_1(x,t))\theta(x,t) dxdt = -\int_{\tau}^{\tau+2}\int_{\Omega}(\varphi_\tau(t) u_1(x,t))(\partial_t\psi_\theta(x,t) + d_1\Delta \psi_\theta(x,t))dxdt\\
&= \int_{\tau}^{\tau+2}\int_{\Omega}\psi_{\theta}(x,t)(\partial_t(\varphi_\tau(t) u_1(x,t)) - d_1\Delta(\varphi_\tau(t) u_1(x,t)))dxdt\\
&=\int_{\tau}^{\tau+2}\int_{\Omega}\psi_{\theta}(x,t)(\varphi_\tau^\prime(t) u_1(x,t) +\phi_\tau(t)\pa{ u_3(x,t)u_4(x,t)-u_1(x,t)u_2(x,t)})dxdt\\
&=\int_{\tau}^{\tau+2}\int_{\Omega}\psi_{\theta}(x,t)(\varphi_\tau^\prime(t) u_1(x,t) -\phi_\tau(t)\pa{\partial_t u_3(x,t) - d_3\Delta u_3(x,t)})dxdt\\
&= \int_{\tau}^{\tau+2}\int_{\Omega}\psi_{\theta}(x,t)(\varphi_\tau^\prime(t) \pa{u_1(x,t)+u_3(x,t)} - \partial_t(\varphi_\tau(t) u_3(x,t)) + d_3\Delta (\varphi_\tau(t) u_3(x,t)))dxdt\\
&= \int_{\tau}^{\tau+2}\int_{\Omega}\psi_\theta(x,t) \varphi_\tau^\prime(t) (u_1(x,t) + u_3(x,t))dxdt \\
&\;\;\;\; + \int_{\tau}^{\tau+2}\int_{\Omega}\phi_\tau(t)u_3(x,t)(\underbrace{\partial_t \psi_\theta(x,t) + d_3\Delta \psi_\theta(x,t)}_{\pa{d_3-d_1}\Delta \psi_\theta(x,t) -\theta(x,t)})dxdt\\
&= \int_{\tau}^{\tau+2}\int_{\Omega}\psi_\theta(x,t) \varphi_\tau^\prime(t) (u_1(x,t) + u_3(x,t))dxdt - \int_{\tau}^{\tau+2}\int_{\Omega}\phi_\tau(t)u_3(x,t)\theta(x,t) dxdt\\
&\quad + (d_3 - d_1)\int_{\tau}^{\tau+2}\int_{\Omega}\phi_{\tau}(t)u_3(x,t)\Delta \psi_\theta(x,t) dxdt= \mathcal{I}+\mathcal{II}+\mathcal{III}.
\end{align*}
Using H\"older's inequality, and Lemma \ref{lem_mr_rescaled} we find that
\begin{equation}\label{eq:II_estimation}
\abs{\mathcal{II}}\leq \|\phi_\tau u_3\|_{L^p(\Omega_{\tau,\tau+2})}\|\theta\|_{L^{p'}(\Omega_{\tau,\tau+2})}\leq \| u_3\|_{L^p(\Omega_{\tau,\tau+2})}\|\theta\|_{L^{p'}(\Omega_{\tau,\tau+2})}
\end{equation}
and
\begin{equation}\label{eq:III_estimation}
\begin{split}
\abs{\mathcal{III}} \leq C&\|\phi_\tau u_3\|_{L^p(\Omega_{\tau,\tau+2})}\|\Delta \psi_{\theta}\|_{L^{p'}(\Omega_{\tau,\tau+2})}\\
&\leq C\|u_3\|_{L^p(\Omega_{\tau,\tau+2})}\|\theta\|_{L^{p'}(\Omega_{\tau,\tau+2})},
\end{split}
\end{equation}
where the constant $C$ depends only on $p$, $d_1$, $d_3$, $\Omega$ and $N$. To estimate $\mathcal{I}$ we choose $q < p$ such that its H\"older conjugate, $q^\prime$, satisfies
$$q^\prime < \begin{cases} \frac{(N+2)p'}{N+2-2p'} &\text{ if } \quad p^\prime < \frac{N+2}{2} \\ p^\prime &\text{ if } \quad p^\prime \geq \frac{N+2}{2} \end{cases},$$
and with the help of Lemma \ref{cor:embedding_for_N} we conclude that
\begin{equation}\nonumber
\|\psi_{\theta}\|_{L^{q^\prime}(\Omega_{\tau,\tau+2})} \leq C\|\theta\|_{L^{p'}(\Omega_{\tau,\tau+2})}.
\end{equation}
From the above we see that
\begin{align*}
\abs{\mathcal{I}} &\leq C\|\psi_\theta\|_{L^{q'}(\Omega_{\tau,\tau+1})}\pa{\|u_1\|_{L^q(\Omega_{\tau,\tau+1})} + \|u_3\|_{L^q(\Omega_{\tau,\tau+1})}}\\
&\underset{p>q}{\leq} C\|\theta\|_{L^{p'}(\Omega_{\tau,\tau+2})}\pa{\|u_1\|_{L^q(\Omega_{\tau,\tau+1})} + \|u_3\|_{L^p(\Omega_{\tau,\tau+1})}},
\end{align*}
where we have used the fact that $\phi_\tau^\prime \vert_{[\tau+1,\infty)}=0$. Much like in the proof of the previous lemma, since $1<q<p$ we can find $0<\beta<1$ such that the interpolation inequality
$$\|u_1\|_{L^q(\Omega_{\tau,\tau+1})} \leq \|u_1\|_{L^1(\Omega_{\tau,\tau+1})}^{\beta}\|u_1\|_{L^p(\Omega_{\tau,\tau+1})}^{1-\beta} \leq M_{13}^\beta \|u_1\|_{L^p(\Omega_{\tau,\tau+1})}^{1-\beta}$$
is valid. Thus, we find that
\begin{equation}\label{eq:I_estimation}
\abs{\mathcal{I}} \leq C_1\|\theta\|_{L^{p'}(\Omega_{\tau,\tau+2})}\left(\|u_1\|_{L^p(\Omega_{\tau,\tau+1})}^{1-\beta} + \|u_3\|_{L^p(\Omega_{\tau,\tau+2})}\right),
\end{equation}
with $C_1$ depending on $M_{13}$, as well as the other parameters of the problem. Combining \eqref{eq:II_estimation}, \eqref{eq:III_estimation} and \eqref{eq:I_estimation} we obtain
\begin{equation}\nonumber
\int_{\tau}^{\tau+2}\int_{\Omega}(\phi_\tau(t) u_1(x,t))\theta(x,t) dxdt \leq C_1\|\theta\|_{L^{p'}(\Omega_{\tau,\tau+2})}\pa{\|u_1\|_{L^p(\Omega_{\tau,\tau+1})}^{1-\beta} + \|u_3\|_{L^p(\Omega_{\tau,\tau+2})}},
\end{equation}
which implies by duality that
\begin{equation}\label{eq:norm_of_phi_u_1}
\|\phi_\tau u_1\|_{L^p(\Omega_{\tau,\tau+2})} \leq C_1\pa{\|u_1\|_{L^p(\Omega_{\tau,\tau+1})}^{1-\beta} + \|u_3\|_{L^p(\Omega_{\tau,\tau+2})}}.
\end{equation}
Again, like in the proof of the previous lemma, it is enough to find a bound for the sequence
$$a_n=\|u_1\|_{L^p(\Omega_{n,n+1})}$$
for all $n\in \mathbb{N}\cup\br{0}$ for which $a_{n+1}\geq a_n$.
Using the fact that $\phi_{\tau}\vert_{[\tau+1,\infty)}\equiv 1$ and the non-negativity of $u_1$ we see that \eqref{eq:norm_of_phi_u_1} implies that
\begin{equation}\nonumber
a_{n+1} \leq \|\phi_n u_1\|_{L^p(\Omega_{n,n+2})} \leq C_1\pa{a_n^{1-\beta} + 2\sup_{\tau\in\mathbb{N}}\|u_3\|_{L^p(\Omega_{\tau,\tau+1})}}.
\end{equation}
If, in addition, $a_{n+1}\geq a_n$ then since for any $0<\beta<1$ and $a,b>0$
$$ab^{1-\beta} \leq \beta a^{\frac{1}{\beta}}+\pa{1-\beta}b$$
we see that
\begin{equation}\nonumber
a_{n+1} \leq C_1 a_{n+1}^{1-\beta} + 2C_1C_p \leq \pa{1-\beta}a_{n+1}+\pa{\beta C_1^{\frac{1}{\beta}}+2C_1C_p}
\end{equation}
from which the independent in $n$ bound
$$a_{n+1}\leq \frac{\beta C_1^{\frac{1}{\beta}}+2C_1C_p}{\beta}$$
arises. The proof is thus complete for $u_1$, and the exact same considerations show the result for $u_2$.
\end{proof}
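\begin{remark}
For the reader's convenience, we note that the elementary inequality $ab^{1-\beta}\leq \beta a^{\frac{1}{\beta}}+\pa{1-\beta}b$, which was used at the end of the above proof, is nothing more than Young's inequality
$$xy \leq \frac{x^{p}}{p}+\frac{y^{p^\prime}}{p^\prime},\qquad \frac{1}{p}+\frac{1}{p^\prime}=1,\quad x,y\geq 0,$$
applied with $x=a$, $y=b^{1-\beta}$, $p=\frac{1}{\beta}$ and $p^\prime=\frac{1}{1-\beta}$.
\end{remark}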
\begin{remark}\label{rem:not_working_for_infinity}
In general, Lemma \ref{lem:second_duality} is not applicable for $p=\infty$, as our embedding theorems do not always cover this case.
\end{remark}
\begin{lemma}\label{lem:L_infty_bound_on_u_3}
Assume that condition \eqref{quasi-uniform} holds. Then
\begin{equation}\label{eq:L_infty_bound_on_u_3}
\sup_{t\geq 0}\|u_3(t)\|_{L^\infty(\Omega)}<+\infty.
\end{equation}
Moreover, for any $1\leq p <\infty$, there exists $C_p>0$, depending only on the parameters of the problem and not on $\tau$, such that
\begin{equation}\label{eq:Lp_bounds_on_u_1_u_2_u_3}
\max_{i=1,2,3}\sup_{\tau\geq 0}\|u_i\|_{L^p(\Omega_{\tau,\tau+1})} \leq C_p.
\end{equation}
\end{lemma}
\begin{proof}
Using the time cut-off function $\phi_\tau$, defined in the proof of Lemma \ref{lem:some_L_p_estimaes_for_123}, and the non-negativity of the solutions, we see that
\begin{equation}\label{eq_u3}
\begin{aligned}
\partial_t(\phi_\tau(t) u_3(x,t)) &- d_3\Delta(\phi_\tau(t) u_3(x,t)) \\
&=\phi^\prime_{\tau}(t)u_3(x,t)+\phi_\tau(t)\pa{u_1(x,t)u_2(x,t)-u_3(x,t)u_4(x,t)}\\
&\leq \phi_\tau^\prime(t) u_3(x,t) + \phi_\tau(t) u_1(x,t)u_2(x,t)=:g(x,t),
\end{aligned}
\end{equation}
and
$$\nabla (\varphi_\tau(t) u_3(x,t))\cdot \nu(x)\vert_{x\in \partial \Omega} = 0, \quad \varphi_\tau(\tau) u_3(x,\tau) = 0.$$
This implies that $\phi_\tau(t)u_3(x,t)$ is a sub-solution to the heat equation
\begin{equation}\label{eq:sub_sol_heat}
\begin{cases}
\partial_t \psi(x,t) - d_3\Delta \psi(x,t) = g(x,t), &(x,t)\in \Omega_{\tau,\tau+2},\\
\nabla \psi(x,t) \cdot \nu(x) = 0, &(x,t)\in \partial\Omega\times \pa{\tau,\tau+2},\\
\psi(x,\tau) = 0, &x\in\Omega.
\end{cases}
\end{equation}
Lemma \ref{lem:some_L_p_estimaes_for_123} assures us that there exist $C$, depending only on the parameters of the problem and not on $\tau$, and $p_0>2$ such that
\begin{equation}\nonumber
\begin{gathered}
\sup_{\tau \geq 0}\|g\|_{L^{\frac{p_0}{2}}(\Omega_{\tau,\tau+2})} \leq C\sup_{\tau \geq 0}\norm{u_3}_{L^{\frac{p_0}{2}}(\Omega_{\tau,\tau+2})}+\sup_{\tau \geq 0}\norm{u_1u_2}_{L^{\frac{p_0}{2}}(\Omega_{\tau,\tau+2})}\\
\leq C\abs{\Omega}^{\frac{1}{p_0}}\sup_{\tau \geq 0}\norm{u_3}_{L^{p_0}(\Omega_{\tau,\tau+2})}+\sup_{\tau \geq 0}\norm{u_1}_{L^{p_0}(\Omega_{\tau,\tau+2})}\norm{u_2}_{L^{p_0}(\Omega_{\tau,\tau+2})}\leq C.
\end{gathered}
\end{equation}
The comparison principle for the heat equation assures us that, since $\phi_\tau u_3$ is a sub-solution to \eqref{eq:sub_sol_heat},
\begin{equation*}
\phi_\tau(t) u_3(x,t) \leq \psi(x,t) \quad \text{for all } \quad (x,t)\in\Omega_{\tau,\tau+2}.
\end{equation*}
From Lemma \ref{cor:embedding_for_N}, we conclude that if $p_0<{N+2}$ then for any $\eta>0$ such that
$$p_1-\eta=\frac{(N+2)\frac{p_0}{2}}{N+2-2\frac{p_0}{2}}-\eta >1$$
we have that $\psi\in L^{p_1-\eta}\pa{\Omega_{\tau,\tau+2}}$, and consequently $\phi_\tau u_3\in L^{p_1-\eta}\pa{\Omega_{\tau,\tau+2}}$. It is important to note that the embedding constant \emph{is independent of $\tau$}. Due to the fact that $\phi_{\tau}\vert_{[\tau+1,\infty)}\equiv 1$ we can conclude that
\begin{equation}\nonumber
\sup_{\tau \geq 0}\norm{u_3}_{L^{p_1-\eta}(\Omega_{{\tau+1,\tau+2}})} < \infty,
\end{equation}
and since $u_3\in L^\infty\pa{\Omega_{T}}$ for any $T>0$, we have that
$$\sup_{\tau\in \mathbb{N}}\norm{u_3}_{L^{p_1-\eta}(\Omega_{\tau,\tau+1})}=\max\pa{\norm{u_3}_{L^{p_1-\eta}(\Omega_{1})},\sup_{\tau \geq 0}\norm{u_3}_{L^{p_1-\eta}(\Omega_{\tau+1,\tau+2})} }<\infty.$$
From Lemma \ref{lem:second_duality} we know that
\begin{equation}
\sup_{\tau\in \mathbb{N}}\norm{u_i}_{L^{p_1-\eta}(\Omega_{\tau,\tau+1})}<\infty,\quad i=1,2
\end{equation}
which in turn implies that $g\in L^{\frac{p_1-\eta}{2}}\pa{\Omega_{\tau,\tau+2}}$ with a uniform in $\tau$ bound. Repeating this procedure, we define $p_{k+1}=\frac{(N+2)\frac{p_k}{2}}{N+2-p_k}$ when $p_k<N+2$, and find for any sufficiently small $\eta$
$$\sup_{\tau\in \mathbb{N}}\norm{u_3}_{L^{p_{k+1}-\eta}(\Omega_{\tau,\tau+1})}<\infty.$$
Similarly to the proof of Proposition \ref{global3D}, since $p_0>\frac{N+2}{2}$, there exists $k_0\in \mathbb N$ such that
\begin{equation*}
p_{k_0} \geq N+2.
\end{equation*}
If the inequality is strict, then by choosing $\eta>0$ small enough for which $p_{k_0}-\eta>N+2$, we will find that
$$\sup_{\tau\in \mathbb{N}}\norm{u_3}_{L^{p_{k_0}-\eta}(\Omega_{\tau,\tau+1})}<\infty,$$
and the same bound will transfer to $u_1$ and $u_2$, thanks to Lemma \ref{lem:second_duality}. This will imply that $u_1u_2 \in L^{\frac{p_{k_0}-\eta}{2}}(\Omega_{\tau,\tau+2})$ with $\frac{p_{k_0}-\eta}{2}>\frac{N+2}{2}$, and according to Lemma \ref{cor:embedding_for_N} we find that
$$\norm{u_3}_{L^\infty(\Omega_{\tau,\tau+1})}<C,$$
for $C$ independent of $\tau$. As $u_3$ is continuous in its variables
$$\sup_{\tau\leq t\leq \tau+1}\norm{u_3(t)}_{L^\infty(\Omega)}\leq \norm{u_3}_{L^\infty(\Omega_{\tau,\tau+1})}$$
and \eqref{eq:L_infty_bound_on_u_3} is shown. The desired estimates \eqref{eq:Lp_bounds_on_u_1_u_2_u_3} for $u_1$ and $u_2$ follow again from Lemma \ref{lem:second_duality}.\\
We are only left to show that we can find $p_{k_0}$ as above. Since $p_{k+1}=\frac{(N+2)\frac{p_k}{2}}{N+2-p_k}$ implies
$$p_{k}=\frac{2(N+2)p_{k+1}}{N+2+2p_{k+1}}$$
we see that the case where $p_{k_0}=N+2$ {\it can come from only one choice of $p_0>\frac{N+2}{2}$.} Since all inequalities will remain the same if we replace $p_0$ with $\widetilde{p_0}< p_0$ such that $\widetilde{p_0} > \frac{N+2}{2}$ (as the domain is bounded), choosing $\widetilde{p_0}$ close enough to $p_0$ such that
$$\widetilde{p_0} > \frac{2(N+2)p_{0}}{N+2+2p_{0}}$$
assures us that the sequence we construct using $\widetilde{p_0}$ will not pass through $p_0$, and as such the $\widetilde{p_{k_0}}$ we find from this process will satisfy $\widetilde{p_{k_0}}>N+2$. The proof is thus complete.
\end{proof}
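\begin{remark}
To illustrate the bootstrapping procedure in the above proof, consider the case $N=3$, where condition \eqref{quasi-uniform} requires $p_0>\frac{N+2}{2}=\frac{5}{2}$. Taking, for instance, $p_0=3$, the recursion $p_{k+1}=\frac{(N+2)\frac{p_k}{2}}{N+2-p_k}$ gives
$$p_1=\frac{5\cdot \frac{3}{2}}{5-3}=\frac{15}{4},\qquad p_2=\frac{5\cdot \frac{15}{8}}{5-\frac{15}{4}}=\frac{15}{2}>N+2=5,$$
so that $k_0=2$ iterations already suffice to reach the regime where the $L^\infty$ bound on $u_3$ follows.
\end{remark}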
The penultimate component to prove Proposition \ref{Global3D} is the following lemma:
\begin{lemma}\label{3D_u4LN2}
Assume that condition \eqref{quasi-uniform} holds. Then there exists $\alpha \in (0,1)$ such that
\begin{equation*}
\norm{u_1(t)}_{L^{\frac N2}(\Omega)} + \norm{u_2(t)}_{L^{\frac N2}(\Omega)} + \norm{u_4(t)}_{L^{\frac N2}(\Omega)} \leq C(1+t)^\alpha \quad \text{ for all } \quad t\geq 0.
\end{equation*}
\end{lemma}
\begin{proof}
We start by noticing that, due to Lemma \ref{lem:L_infty_bound_on_u_3}, for any $1<p<+\infty$ we can find a constant $C_p$ such that
$$\sup_{\tau \geq 0}\norm{u_i}_{L^p\pa{\Omega_{\tau,\tau+1}}}\leq C_p,\quad i=1,2.$$
As such, for any $t>0$
\begin{equation}\label{eq:L_p_estimation_on_omega_t_12}
\norm{u_i}^p_{L^p\pa{\Omega_{t}}}\leq \sum_{n=0}^{\rpa{t}+1}\norm{u_i}^p_{L^p\pa{\Omega_{n,n+1}}} \leq C^p_{p}\pa{2+\rpa{t}}\leq 3C^p_p\pa{1+t},\quad i=1,2.
\end{equation}
Next, using the equation for $u_4$ and the non-negativity of the solutions, we see that
$$\partial_t u_4(x,t) \leq u_1(x,t)u_2(x,t)$$
from which, together with Young's inequality, we conclude that for any $1<q<+\infty$ and an arbitrary $\varepsilon\in(0,1)$
\begin{equation*}
\begin{gathered}
\partial_t\int_{\Omega}u_4(x,t)^qdx \leq q\int_{\Omega}u_4(x,t)^{q-1}u_1(x,t)u_2(x,t)dx \leq \frac{q(q-1)}{q-\varepsilon}\int_{\Omega}u_4(x,t)^{q-\varepsilon}dx\\
+ \frac{q(1-\varepsilon)}{q-\varepsilon}\int_{\Omega}(u_1(x,t)u_2(x,t))^{\frac{q-\varepsilon}{1-\varepsilon}}dx
\leq \mathcal{C}_{q,\varepsilon,\Omega}\left(\int_{\Omega}u_4(x,t)^qdx \right)^{1-\frac{\varepsilon}{q}} \\
+ \frac{q(1-\varepsilon)}{2(q-\varepsilon)}\pa{\norm{u_1(t)}_{L^{\frac{2(q-\varepsilon)}{1-\varepsilon}}\pa{\Omega}}^{\frac{2(q-\varepsilon)}{1-\varepsilon}}+\norm{u_2(t)}_{L^{\frac{2(q-\varepsilon)}{1-\varepsilon}}\pa{\Omega}}^{\frac{2(q-\varepsilon)}{1-\varepsilon}}}.
\end{gathered}
\end{equation*}
Applying Lemma \ref{Gronwall} with $y(t) = \norm{u_4(t)}_{L^q(\Omega)}^q$ and $r = \varepsilon/q$, and using \eqref{eq:L_p_estimation_on_omega_t_12}, we find that
\begin{equation}\label{eq:u4_Lq_bound_time}
\begin{gathered}
\norm{u_4(t)}_{L^q(\Omega)}^q \leq C_{q,\varepsilon}\pa{\norm{u_{4,0}}_{L^q(\Omega)}^q + \norm{u_1(t)}_{L^{\frac{2(q-\varepsilon)}{1-\varepsilon}}\pa{\Omega_{t}}}^{\frac{2(q-\varepsilon)}{1-\varepsilon}}+\norm{u_2(t)}_{L^{\frac{2(q-\varepsilon)}{1-\varepsilon}}\pa{\Omega_{t}}}^{\frac{2(q-\varepsilon)}{1-\varepsilon}}+ t^{\frac{q}{\varepsilon}}}\\
\leq \mathcal{C}_{q,\varepsilon}\pa{\norm{u_{4,0}}_{L^q(\Omega)}^q + \pa{1+t}+ t^{\frac{q}{\varepsilon}}}.
\end{gathered}
\end{equation}
Thus, we conclude that
\begin{equation*}
\begin{gathered}
\norm{u_4(t)}_{L^q(\Omega)} \leq C_{q,\varepsilon}\pa{\norm{u_{4,0}}_{L^q(\Omega)} + (1+t)^\frac{1}{q}+ t^{\frac{1}{\varepsilon}} }\\
\leq C_{q,\varepsilon}\pa{1+\norm{u_{4,0}}_{L^\infty(\Omega)}\abs{\Omega}^{\frac{1}{q}} }\pa{ (1+t)^\frac{1}{q}+ t^{\frac{1}{\varepsilon}} }
\end{gathered}
\end{equation*}
for any $1<q<+\infty$ and $\varepsilon\in (0,1)$. Since
$$\frac{1}{q}=\frac{\theta}{4q}+\frac{1-\theta}{1}$$
for $\theta=\frac{4q-4}{4q-1}$, we have that the interpolation inequality
$$\norm{u(t)}_{L^{q}(\Omega)}\leq \norm{u(t)}_{L^{4q}(\Omega)}^{\frac{4q-4}{4q-1}}\norm{u(t)}_{L^1(\Omega)}^{\frac{3}{4q-1}}$$
is valid for any $q> 1$. As such
$$\norm{u_4(t)}_{L^{q}(\Omega)}\leq C_{q,\varepsilon}\pa{M_{2,4}}^{\frac{3}{4q-1}}\pa{1+\norm{u_{4,0}}_{L^\infty(\Omega)}\abs{\Omega}^{\frac{1}{4q}} }^{\frac{4q-4}{4q-1}}\pa{ (1+t)^{\frac{1}{4q}}+ t^{\frac{1}{\varepsilon}} }^{\frac{4q-4}{4q-1}}.$$
For any $\eta<1$ such that
$$\frac{4q-4}{4q-1}<\eta$$
we see that since $\frac{1}{4q}\frac{4q-4}{4q-1} < \frac{1}{4q}$, choosing $\varepsilon=\frac{4q-4}{\eta(4q-1)}$ yields the bound
\begin{equation}\label{eq:bound_u4_typ}
\norm{u_4(t)}_{L^{q}(\Omega)}\leq C_{q,\alpha,u_{4,0},M_{2,4}}\pa{1+t}^{\alpha},
\end{equation}
with $\alpha=\max\pa{\frac{1}{4q},\eta}<1$. In particular, one can always choose
$$\eta= \frac{4q-1}{4q+2}$$
and in the case $q=\frac{N}{2}$ with $N\geq 3$, this translates into
$$\alpha = \max\pa{\frac{1}{2N},\frac{2N-1}{2N+2}}=\frac{2N-1}{2N+2}.$$
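Let us record, for completeness, that the choice $\eta=\frac{4q-1}{4q+2}$ made above is indeed admissible: since
$$\pa{4q-1}^2-\pa{4q-4}\pa{4q+2}=\pa{16q^2-8q+1}-\pa{16q^2-8q-8}=9>0,$$
we have $\frac{4q-4}{4q-1}<\frac{4q-1}{4q+2}<1$ for any $q>1$.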
We turn our attention to $u_1$ and $u_2$. We will focus only on $u_1$, as it and $u_2$ solve the same type of equation, and as such the desired result, once proven, will also hold for $u_2$.\\
We remind ourselves that $u_1$ solves the equation
$$\partial_{t}u_1(x,t)-d_1\Delta u_1(x,t)=u_3(x,t)u_4(x,t)-u_1(x,t)u_2(x,t),$$
and we mimic the steps of Lemma \ref{lem:some_L_p_estimaes_for_123}. We start by choosing a function $\phi_\tau$, which is a shift by $\tau$ of a $C^\infty$ function such that $\phi(0)=0$, $0\leq \phi \leq 1$, and $\phi \vert_{[1,\infty)}\equiv 1$.
Due to the non-negativity of the solutions to \eqref{eq:sys} we have that
$$\partial_{t}\pa{\phi_\tau(t)u_1(x,t)} - d_1\Delta\pa{\phi_\tau(t)u_1(x,t)} \leq \phi^\prime_\tau(t) u_1(x,t)+ \phi_{\tau}(t)u_3(x,t)u_4(x,t)\in L^{q}\pa{\Omega_{\tau,T}}$$
for any $1<q<\infty$ and $0\leq \tau<T$. We can also choose $\phi$ to be increasing, so that the right hand side of the above inequality is a non-negative function. Thus, using the comparison principle for the heat equation, together with the same elements of the proof of Lemma \ref{cor:embedding_for_N} that relied on the maximal regularity of the heat equation, and noticing that $\phi_{\tau}u_1 \vert_{t=\tau}=0$, we find that for any $n\in\mathbb{N}$
$$\sup_{n+1\leq t \leq n+2}\norm{u_1(t)}_{L^\infty\pa{\Omega}}\leq \sup_{n\leq t \leq n+2}\norm{\phi_{n}(t)u_1(t)}_{L^\infty\pa{\Omega}}\leq \norm{\phi_n u_1}_{L^{\infty}\pa{\Omega_{n,n+2}}}$$
$$\leq C_{q_0,\Omega}\pa{\norm{\phi_n^\prime u_1}_{L^{q_0}\pa{\Omega_{n,n+2}}}+\norm{u_3u_4}_{L^{q_0}\pa{\Omega_{n,n+2}}}}$$
where $q_0>\frac{N+2}{2}$ is fixed. Lemma \ref{lem:L_infty_bound_on_u_3} assures us that
$$\sup_{n\in\mathbb{N}}\norm{\phi^\prime_n u_1}_{L^{q_0}\pa{\Omega_{n,n+2}}} \leq C\pa{\sup_{n\in\mathbb{N}}\norm{ u_1}_{L^{q_0}\pa{\Omega_{n,n+1}}}+\sup_{n\in\mathbb{N}}\norm{ u_1}_{L^{q_0}\pa{\Omega_{n+1,n+2}}}}\leq C_{q_0},$$
and due to \eqref{eq:bound_u4_typ} we see that
$$\norm{u_3u_4}_{L^{q_0}\pa{\Omega_{n,n+2}}}\leq C_{q_0,\alpha}C_{q_0,\alpha,u_{4,0},M_{2,4}}\sup_{t\geq 0}\norm{u_3(t)}_{L^\infty\pa{\Omega}}\pa{\int_{n}^{n+2} \pa{1+s}^{q_0\alpha} ds}^{\frac{1}{q_0}}.$$
Thus, since $\int_n^{n+2}(1+s)^{q_0\alpha}ds\leq 2(3+n)^{q_0\alpha}$,
$$\sup_{n+1\leq t \leq n+2}\norm{u_1(t)}_{L^\infty\pa{\Omega}}\leq C_{q_0,\Omega}\pa{C_{q_0}+C_{q_0,\alpha,u_{4,0},M_{2,4}}\sup_{t\geq 0}\norm{u_3(t)}_{L^\infty\pa{\Omega}}\pa{3+n}^{\alpha}}$$
$$\leq C_{q_0,\alpha,u_{4,0},M_{2,4},\sup_{t\geq 0}\norm{u_3(t)}_{L^\infty\pa{\Omega}}}\pa{2+t}^{\alpha}.$$
As $n\in\mathbb{N}$ was arbitrary and
$$\sup_{0\leq t \leq 1}\norm{u_1(t)}_{L^\infty\pa{\Omega}} =\norm{u_1}_{L^\infty\pa{\Omega_{1}}}\leq \norm{u_1}_{L^\infty\pa{\Omega_{1}}}\pa{1+t}^\alpha$$
we find that
\begin{equation}\label{eq:u1_infty_bound}
\norm{u_1(t)}_{L^\infty\pa{\Omega}} \leq C_{q_0,\alpha,\br{u_{i,0}}_{i=1,\dots,4},M_{2,4}}\pa{1+t}^\alpha,
\end{equation}
where the constant depends on $\norm{u_1}_{L^\infty\pa{\Omega_{1}}}$, which by itself depends on the initial data. Any $L^q\pa{\Omega}$ bound follows from the above. As $\alpha<1$, the proof is complete.
\end{proof}
\begin{remark}\label{rem:N_independent}
It is worth mentioning that while Lemma \ref{3D_u4LN2} considered $L^{\frac{N}{2}}\pa{\Omega}$, we have actually managed to prove that for any $q>1$, we can find $0<\alpha<1$ such that
\begin{equation}\nonumber
\norm{u_4(t)}_{L^{q}(\Omega)}\leq C_{q,\alpha,u_{4,0},M_{2,4}}\pa{1+t}^{\alpha},
\end{equation}
and
\begin{equation}\nonumber
\norm{u_i(t)}_{L^\infty\pa{\Omega}} \leq C_{\alpha,\br{u_{i,0}}_{i=1,\dots,4},M_{2,4}}\pa{1+t}^\alpha,\quad i=1,2.
\end{equation}
Moreover, the dimension $N$ played no role!
\end{remark}
Collecting all the tools that we have developed, we can finally prove Proposition \ref{Global3D}.
\begin{proof}[Proof of Proposition \ref{Global3D} ]
This follows from Proposition \ref{global3D} (where \eqref{3D_u124_Linfty} has also been proved), Lemma \ref{lem:L_infty_bound_on_u_3} and Lemma \ref{3D_u4LN2}.
\end{proof}
\subsection{Proof of Proposition \ref{Global1D}}\label{12D_proof}
Looking at the main ingredients we used to prove Proposition \ref{Global3D} (Proposition \ref{global3D}, Lemma \ref{lem:L_infty_bound_on_u_3} and Lemma \ref{3D_u4LN2}), we notice that there is no real dimensional dependency in the proof of any of them. The only dimensional dependency may come from condition \eqref{quasi-uniform}. We also note that the only difference between the conclusions of Propositions \ref{Global1D} and \ref{Global3D} is the $L^p\pa{\Omega}$ norm estimates, namely \eqref{eq:bound_u_1_u_2} and \eqref{3D_u124}. However, it is simple to see that condition \eqref{eq:bound_u_1_u_2} in Proposition \ref{Global1D} is an immediate consequence of \eqref{eq:bound_u124} and the mass conservation. Indeed, for $i\in \{1,\ldots, 4\}$,
$$\norm{u_i(t)}_{L^{1+\gamma}\pa{\Omega}} \leq \norm{u_i(t)}_{L^\infty\pa{\Omega}}^{\frac{\gamma}{1+\gamma}}\norm{u_i(t)}^{\frac{1}{1+\gamma}}_{L^{1}\pa{\Omega}}$$
$$\leq C^{\frac{\gamma}{1+\gamma}}\pa{M_{1,3}+M_{2,4}}^{\frac{1}{1+\gamma}}\pa{1+t}^{\frac{\gamma\mu}{1+\gamma}}.$$
Thus, choosing $\gamma=\frac{\epsilon}{\mu-\epsilon}$ gives the desired estimate. As \eqref{eq:bound_u124} follows from Proposition \ref{global3D}, we only truly need it and Lemma \ref{lem:L_infty_bound_on_u_3} to conclude Proposition \ref{Global1D}.
From the above discussion we conclude that in order to prove Proposition \ref{Global1D} it is enough to show that when $N=1,2$, condition \eqref{quasi-uniform} is \emph{always satisfied}.
\begin{lemma}\label{improved_duality_12D}
For any $\omega_1, \omega_2>0$, there exists an exponent $1<p_*<2$ such that
\begin{equation}\label{b4}
\frac{\abs{\omega_1-\omega_2}}{\omega_1+\omega_2}C_{\rm{mr},p_*}<1.
\end{equation}
In particular, condition \eqref{quasi-uniform} is always satisfied.
\end{lemma}
\begin{proof}
First, we show that
\begin{equation}\label{Cmr2}
C_{\rm{mr},2} \leq 1.
\end{equation}
Indeed, by multiplying \eqref{normalized_diffusion} with $\Delta \psi$ and integrating on $\Omega_{\tau,T}$, we find that
\begin{equation*}
\begin{gathered}
\frac{1}{2}\int_{\Omega}\abs{\nabla \psi(x,\tau)}^2dx+ \norm{\Delta \psi}_{L^2(\Omega_{\tau,T})}^2 = -\int_\tau^T\int_{\Omega}\theta(x,t) \Delta \psi(x,t) dxdt\\
\leq \frac 12 \norm{\Delta \psi}_{L^2(\Omega_{\tau,T})}^2 + \frac 12 \norm{\theta}^2_{L^2(\Omega_{\tau,T})},
\end{gathered}
\end{equation*}
where we have used the fact that
$$\int_{\Omega_{\tau,T}}\partial_t\psi(x,t) \Delta \psi(x,t)dxdt \underset{\text{Neumann condition}}{=}-\int_{\Omega_{\tau,T}}\nabla\pa{\partial_t\psi(x,t)}\cdot \nabla\psi(x,t)dxdt$$
$$=-\frac{1}{2}\int_{\Omega_{\tau,T}}\partial_t\abs{\nabla \psi(x,t)}^2dxdt\underset{\psi(x,T)=0}{=}\frac{1}{2}\int_{\Omega}\abs{\nabla \psi(x,\tau)}^2dx.$$
Thus,
\begin{equation*}
\norm{\Delta \psi}_{L^2(\Omega_{\tau,T})} \leq \norm{\theta}_{L^2(\Omega_{\tau,T})},
\end{equation*}
which proves \eqref{Cmr2}. It is then obvious that for $\omega_1,\omega_2>0$
\begin{equation*}
\frac{|\omega_1 - \omega_2|}{\omega_1 +\omega_2}C_{\rm{mr}, 2} < 1.
\end{equation*}
To conclude our lemma, it will be sufficient to show that
\begin{equation*}
C_{\rm{mr},2}^- := \liminf_{\eta \to 0^+}C_{\rm{mr},2-\eta} \leq C_{\rm{mr},2}.
\end{equation*}
To see that, we follow the idea in \cite[Remark 4]{PSU17}. Let $2_\eta$ satisfy
\begin{equation*}
\frac{1}{2_\eta} = \frac 12 \rpa{\frac 12 + \frac{1}{2-\eta}} \quad \text{ or equivalently } \quad 2_\eta = 2 - \frac{2\eta}{4-\eta}.
\end{equation*}
Applying the Riesz-Thorin interpolation theorem (see, for instance, \cite[Chapter 2]{Lun18}), we find that
\begin{equation*}
C_{\rm{mr},2_\eta} \leq C_{\rm{mr},2}^{\frac 12}C_{\rm{mr},2-\eta}^{\frac 12}.
\end{equation*}
This implies that
\begin{equation*}
C_{\rm{mr},2}^{-} \leq C_{\rm{mr},2}^{\frac 12}\pa{C_{\rm{mr},2}^{-}}^{\frac 12}
\end{equation*}
and therefore
\begin{equation*}
C_{\rm{mr},2}^{-} \leq C_{\rm{mr},2}.
\end{equation*}
We are only left with showing that the above implies that condition \eqref{quasi-uniform} is always satisfied. Given any diffusion coefficients, let $p_0^\prime$ be the exponent $p_\ast<2$ that we found, which can depend on the coefficients. As
$$p_0=\frac{p_0^\prime}{p_0^\prime-1}=\frac{p_\ast}{p_\ast-1}=1+\frac{1}{p_\ast-1}>2$$
and $\frac{N+2}{2} \leq 2$ when $N=1,2$, the proof is complete.
\end{proof}
\section{The interaction between the entropy and the norm bounds: Proof of the main theorems}\label{sec:proof}
Until this point we have considered two aspects of our system of equations, \eqref{eq:sys}:
\begin{itemize}
\item The entropy of the system, and the functional inequality that governs its dissipation under the flow of our system.
\item The well-posedness of the solution to the equation, and time dependent estimates on the growth of norms of interest.
\end{itemize}
Using the second point above will allow us to show convergence to equilibrium in the entropic sense. However, the entropy itself controls the $L^1(\Omega)$-norm, a fact that will allow us to \emph{revisit and improve} our norm estimates, which in turn will give better entropic convergence, and so on and so forth. This idea of bootstrapping ourselves, using not only the equations themselves but also their interplay with the entropy, is the key to proving our main theorems.
We start with this ``entropic control'', which is known as a Csisz\'ar-Kullback-Pinsker-type inequality; see \cite[Lemma 2.3]{FT17}.
\begin{lemma}\label{CKP}
There exists a constant $C_{CKP}>0$ such that, for any measurable non-negative functions $\bm{u} = (u_i)_{i=1,\ldots, 4}$ which satisfy
\begin{equation*}
\int_{\Omega}(u_i(x)+u_{j}(x))dx = |\Omega|(u_{i,\infty}+u_{j,\infty}) \quad \text{ for } \quad i\in \{1,2\}, \; j\in \{3,4\},
\end{equation*}
we have
\begin{equation*}
H(\bm{u}|\bm{u}_\infty) \geq C_{CKP}\sum_{i=1}^4\norm{u_i - u_{i,\infty}}_{L^1(\Omega)}^2.
\end{equation*}
\end{lemma}
The Csisz\'ar-Kullback-Pinsker inequality will allow us to pass the stretched exponential convergence of the entropy, which is guaranteed due to our estimates, to a stretched exponential convergence to equilibrium for \eqref{eq:sys} in $L^1(\Omega)$-norm, and consequently {\it in any} $L^p(\Omega)$ norm.
\begin{proposition}\label{stretched_exp}
Consider the system of equations \eqref{eq:sys}, where $\Omega\subset \mathbb{R}^N$, with $N\geq 3$, is a bounded domain with smooth boundary. Assume in addition that condition \eqref{quasi-uniform_theorem} is satisfied. Then for any $1<p<\infty$, there exist $C_{p,\br{u_{i,0}}_{i=1,\dots,4}},c_p>0$ and $\varepsilon_p>0$ such that
\begin{equation}\label{eq:stretched_Lp}
\sum_{i=1}^4\norm{u_i(t) - u_{i,\infty}}_{L^p(\Omega)} \leq C_{p,\br{u_{i,0}}_{i=1,\dots,4}} e^{-c_p(1+t)^{\varepsilon_p}} \quad \text{ for all } \quad t\geq 0.
\end{equation}
The above remains valid when $N=1,2$, without any conditions on the diffusion coefficients.
\end{proposition}
\begin{proof}
When $N\geq 3$, using Proposition \ref{Global3D}, we conclude that the estimates \eqref{3D_u3}, \eqref{3D_u124_Linfty} and \eqref{3D_u124} are valid for some $0<\alpha<1$ and $\mu>0$. Invoking Theorem \ref{thm:convergence_general}, we find that
\begin{equation*}
H(\bm{u}(t)|\bm{u}_\infty) \leq H(\bm{u}_0|\bm{u}_\infty)e^{-C(\alpha,\varepsilon)(1+t)^{1-\alpha-\varepsilon}}
\end{equation*}
for any $0<\varepsilon<1-\alpha$ and some $C(\alpha,\varepsilon)>0$. Applying the Csisz\'ar-Kullback-Pinsker inequality we have that
\begin{equation}\label{eq:L1_entropy_bound}
\sum_{i=1}^4\norm{u_i(t) - u_{i,\infty}}_{L^1(\Omega)} \leq C_{CKP}^{-1}H(\bm{u}_0|\bm{u}_\infty)e^{-C(\alpha,\varepsilon)(1+t)^{1-\alpha-\varepsilon}} \quad \text{ for all } \quad t\geq 0.
\end{equation}
Using the fact that
$$\norm{u}_{L^p\pa{\Omega}} \leq \norm{u}_{L^\infty\pa{\Omega}}^{1-\frac{1}{p}}\norm{u}^{\frac{1}{p}}_{L^1\pa{\Omega}}$$
together with \eqref{3D_u3}, \eqref{3D_u124_Linfty} and \eqref{eq:L1_entropy_bound} we have that for any $1<p<+\infty$
\begin{align*}
\sum_{i=1}^4\norm{u_i(t) - u_{i,\infty}}_{L^p(\Omega)}
\leq &C_{p,\br{u_{i,0}}_{i=1,\dots,4}}(1+t)^{\mu\left(1 - \frac 1p \right)}e^{-\frac{C(\alpha,\varepsilon)}{p}(1+t)^{1-\alpha-\varepsilon}}\\
&\leq \mathcal{C}_{p,\br{u_{i,0}}_{i=1,\dots,4},\mu,\alpha,\varepsilon}e^{-\frac{C(\alpha,\varepsilon)}{p}(1+t)^{\varepsilon_p}},
\end{align*}
for any fixed $0<\varepsilon_p < 1-\alpha-\varepsilon$. This concludes the proof when $N\geq 3$. The cases $N=1,2$ follow from Proposition \ref{Global1D}, which has no conditions on the diffusion coefficients, and the exact same method.
\end{proof}
The above proposition, and the structure of \eqref{eq:sys}, will allow us to boost ourselves even further and obtain a stretched exponential convergence to equilibrium in $L^\infty(\Omega)$ norm for $u_1, u_2$ and $u_3$.
A key observation to achieve this is that
\begin{equation}\label{eq:f_estimation}
\begin{aligned}
f(\bm{u})&=u_1u_2-u_3u_4\\
&=\pa{u_1-u_{1,\infty}}u_2 + u_{1,\infty}\pa{u_2-u_{2,\infty}}
-\pa{u_3-u_{3,\infty}}u_4 - u_{3,\infty}\pa{u_4-u_{4,\infty}},
\end{aligned}
\end{equation}
where we used the fact that $u_{1,\infty}u_{2,\infty}=u_{3,\infty}u_{4,\infty}$.
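Indeed, expanding the first two terms on the right hand side gives
$$\pa{u_1-u_{1,\infty}}u_2 + u_{1,\infty}\pa{u_2-u_{2,\infty}}=u_1u_2-u_{1,\infty}u_{2,\infty},$$
and similarly the last two terms sum to $u_3u_4-u_{3,\infty}u_{4,\infty}$, so that the right hand side of \eqref{eq:f_estimation} equals $u_1u_2-u_3u_4+\pa{u_{3,\infty}u_{4,\infty}-u_{1,\infty}u_{2,\infty}}=u_1u_2-u_3u_4$.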
\begin{proposition}\label{stretched_exp_1}
Under the same conditions of Proposition \ref{stretched_exp} there exist $C,c>0$ and $\varepsilon_0>0$ such that
\begin{equation}\label{eq:stretched_exp_1}
\norm{u_i(t) - u_{i,\infty}}_{L^\infty(\Omega)} \leq Ce^{-c(1+t)^{\varepsilon_0}} \quad \text{ for } \quad i=1,2,3.
\end{equation}
\end{proposition}
\begin{proof}
For $i=1,2,3$, we denote by $S_i(t) = e^{d_i\Delta t}$ the heat semigroup generated by the operator $d_i\Delta$ with homogeneous Neumann boundary condition. We will use the classical estimate
\begin{equation}\label{semigroup}
\norm{S_i(t)u_0}_{L^\infty(\Omega)} \leq C_{i}t^{-\frac{N}{2p}}\norm{u_0}_{L^p(\Omega)} \quad \text{ for all } \quad t>0,
\end{equation}
which can be found in e.g. \cite[Eq. (1.15), page 274]{Tay13}. Denoting $f_1=-f$, with $f$ defined in \eqref{eq:f_estimation}, we see that
\begin{equation*}
\partial_t(u_1 - u_{1,\infty})(x,t) - d_1\Delta(u_1 - u_{1,\infty})(x,t) = f_1(\bm{u}(x,t))
\end{equation*}
and as such, using Duhamel's formula on $[t,t+1]$, we see that
\begin{equation*}
\begin{aligned}
u_1(x,t+1) - u_{1,\infty} &= S_1(1)(u_1(x,t) - u_{1,\infty}) + \int_t^{t+1}S_1\pa{t+1-s}f_1(\bm{u}(x,s))ds\\
&=S_1(1)(u_1(x,t) - u_{1,\infty}) + \int_0^{1}S_1\pa{1-s}f_1(\bm{u}(x,t+s))ds.
\end{aligned}
\end{equation*}
Using \eqref{eq:stretched_Lp} and \eqref{semigroup} we find that
\begin{equation}\label{b6}
\begin{aligned}
&\norm{u_1(t+1) - u_{1,\infty}}_{L^\infty(\Omega)}\\
&\leq C_{1}\norm{u_1(t) - u_{1,\infty}}_{L^p(\Omega)} + \int_0^1\norm{S_1\pa{1-s}f_1(\bm{u}(t+s))}_{L^\infty(\Omega)}ds\\
&\leq C_{1,p}e^{-c_p(1+t)^{\varepsilon_p}} + C_{1}\int_0^1(1-s)^{-\frac{N}{2p}}\norm{f_1(\bm{u}(t+s))}_{L^p(\Omega)}ds.
\end{aligned}
\end{equation}
Equality \eqref{eq:f_estimation}, together with \eqref{eq:stretched_Lp} and the $L^\infty$ bounds on $\br{u_i}_{i=1,\dots,4}$, shows that, up to a multiplicative constant depending only on $p$,
\begin{align*}
&\norm{f_1(\bm{u}(t+s)) }_{L^p(\Omega)}^p\leq \norm{u_2(t+s)}_{L^\infty(\Omega)}^p\norm{u_1(t+s) - u_{1,\infty}}_{L^p(\Omega)}^p + u_{1,\infty}^p\norm{u_2(t+s) - u_{2,\infty}}_{L^p(\Omega)}^p\\
& + \norm{u_4(t+s)}_{L^\infty(\Omega)}^p\norm{u_3(t+s) - u_{3,\infty}}_{L^p(\Omega)}^p + u_{3,\infty}^p\norm{u_4(t+s) - u_{4,\infty}}_{L^p(\Omega)}^p\\
& \quad \leq C_{p}(1+t+s)^{\mu p}e^{-c_p(1+t+s)^{\varepsilon_p}}.
\end{align*}
Inserting this into \eqref{b6} yields
\begin{align*}
&\norm{u_1(t+1) - u_{1,\infty}}_{L^\infty(\Omega)} \leq C_{i,p}e^{-c_p(1+t)^{\varepsilon_p}} + C_p\int_0^1(1-s)^{-\frac{N}{2p}}(1+t+s)^{\mu}e^{-c_p(1+t+s)^{\varepsilon_p}}ds\\
&\leq Ce^{-C(1+t)^{\varepsilon_p}}\rpa{1+(2+t)^{\mu}\int_0^1(1-s)^{-\frac{N}{2p}}ds}.
\end{align*}
Since $\int_0^1(1-s)^{-\frac{N}{2p}}ds < +\infty$ for any $p > \frac{N}{2}$, we conclude that choosing $p_0>\frac{N}{2}$ yields
\begin{equation*}
\norm{u_1(t+1) - u_{1,\infty}}_{L^\infty(\Omega)} \leq C_{p_0,\br{u_i}_{i=1,\dots,4}}e^{-c_{p_0}(1+t)^{\varepsilon_{p_0}}}\pa{1+(2+t)^{\mu}} \leq C_{p_0,\br{u_i}_{i=1,\dots,4}}e^{-c_{p_0}(1+t)^{\varepsilon_0}}
\end{equation*}
for any $0<\varepsilon_0 < \varepsilon_{p_0}$. The same argument is valid for $u_2$ and $u_3$ (for $u_3$ we use $f$ instead of $f_1$), which concludes the proof of the proposition.
\end{proof}
\begin{corollary}\label{cor:extra_bondedness}
Under the assumption of Proposition \ref{stretched_exp} we have that
$$\sup_{t\geq 0}\norm{u_i}_{L^\infty\pa{\Omega}} < +\infty,\quad i=1,2,3.$$
\end{corollary}
\begin{proof}
This follows immediately from \eqref{eq:stretched_exp_1} (though we already knew this for $u_3$).
\end{proof}
Before proving our main theorems we turn our attention to the only missing function: $u_4$.
\begin{proposition}\label{Linf_u4}
Under the assumption of Proposition \ref{stretched_exp} we have that
\begin{equation*}
\sup_{t\geq 0}\norm{u_4(t)}_{L^\infty(\Omega)} <+\infty.
\end{equation*}
\end{proposition}
\begin{proof}
From Proposition \ref{stretched_exp_1} we have
\begin{equation*}
\norm{u_3(t) - u_{3,\infty}}_{L^\infty(\Omega)} \leq Ce^{-ct^{\varepsilon_0}}
\end{equation*}
for some fixed constants $C,c,\varepsilon_0>0$. We can find $T_1>0$ large enough such that
$$Ce^{-ct^{\varepsilon_0}} \leq \frac 12u_{3,\infty}$$
for all $t\geq T_1$, for instance
$$T_1 = \rpa{\frac 1c\ln(2C/u_{3,\infty})}^{1/\varepsilon_0}.$$
Clearly, for all $t\geq T_1$ we have that $u_3(x,t) \geq \frac 12u_{3,\infty}$, and as such
\begin{equation*}
\partial_t u_4(x,t) + \frac 12u_{3,\infty}u_4(x,t) \leq \partial_t u_4(x,t) + u_3(x,t)u_4(x,t) = u_1(x,t)u_2(x,t).
\end{equation*}
Gronwall's lemma, with $\lambda:= \frac 12 u_{3,\infty}$, assures us that
\begin{align*}
u_4(x,t) &\leq e^{-\lambda (t-T_1)}u_4(x,T_1) + \int_{T_1}^te^{-\lambda(t-s)}u_1(x,s)u_2(x,s)ds\\
&\leq e^{-\lambda (t-T_1)}u_4(x,T_1) + \pa{\sup_{t\geq 0}\norm{u_1(t)}_{L^\infty\pa{\Omega}}}\pa{\sup_{t\geq 0}\norm{u_2(t)}_{L^\infty\pa{\Omega}}}\int_{T_1}^te^{-\lambda (t-s)}ds\\
&\leq e^{-\lambda (t-T_1)}u_4(x,T_1) + \frac{\pa{\sup_{t\geq 0}\norm{u_1(t)}_{L^\infty\pa{\Omega}}}\pa{\sup_{t\geq 0}\norm{u_2(t)}_{L^\infty\pa{\Omega}}}}{\lambda}.
\end{align*}
The non-negativity of $u_4$ implies that
$$\sup_{t\geq 0}\norm{u_4(t)}_{L^\infty\pa{\Omega}} \leq \sup_{0\leq t\leq T_1}\norm{u_4(t)}_{L^\infty\pa{\Omega}}+\frac{\pa{\sup_{t\geq 0}\norm{u_1(t)}_{L^\infty\pa{\Omega}}}\pa{\sup_{t\geq 0}\norm{u_2(t)}_{L^\infty\pa{\Omega}}}}{\lambda}$$
and since
$$\sup_{0\leq t\leq T_1}\norm{u_4(t)}_{L^\infty\pa{\Omega}}<C_{u_{4,0}}\pa{1+T_1}^\mu<+\infty,$$
the proof is complete.
\end{proof}
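The Gronwall step above admits a quick numerical sanity check. The following sketch (with purely illustrative values: $\lambda$ standing in for $\frac 12 u_{3,\infty}$, a constant $M$ standing in for the product of the suprema of $\norm{u_1}_{L^\infty}$ and $\norm{u_2}_{L^\infty}$, and an arbitrary forcing term bounded by $M$; none of these come from the actual system) integrates $u' = -\lambda u + g(t)$ by an explicit Euler scheme and verifies the bound $u(t) \leq e^{-\lambda(t-T_1)}u(T_1) + M/\lambda$:

```python
import math

# Illustrative check of the Gronwall step: if u' + lam*u <= M on [T1, T],
# then u(t) <= e^{-lam (t - T1)} u(T1) + M / lam.  All values are hypothetical.
lam = 0.5          # plays the role of (1/2) u_{3,infty}
M = 2.0            # plays the role of sup||u_1||_inf * sup||u_2||_inf
T1, T, dt = 1.0, 20.0, 1e-4

u = uT1 = 5.0      # arbitrary value of u(T1)
t = T1
while t < T:
    forcing = M * abs(math.sin(3 * t))      # any forcing bounded by M
    u += dt * (-lam * u + forcing)          # explicit Euler step
    t += dt
    assert u <= math.exp(-lam * (t - T1)) * uT1 + M / lam + 1e-6
```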
With these tools at hand, we are now ready to prove Theorems \ref{thm:main} and \ref{thm:main-3D}.
\begin{proof}[Proof of Theorems \ref{thm:main} and \ref{thm:main-3D}]
The existence of a classical global solution and its non-negativity are covered in Propositions \ref{Global1D} and \ref{Global3D}. We now turn our attention to the $L^\infty\pa{\Omega}$ convergence.\\
Corollary \ref{cor:extra_bondedness} and Proposition \ref{Linf_u4}, together with Theorem \ref{thm:convergence_general} yield the exponential entropic convergence
\begin{equation*}
H(\bm{u}(t)|\bm{u}_\infty) \leq Ce^{-\lambda_0t}
\end{equation*}
for some $C, \lambda_0>0$. Using the Csisz\'ar-Kullback-Pinsker inequality \eqref{CKP} again leads to
\begin{equation}\label{exp_L1}
\sum_{i=1}^4\norm{u_i(t) - u_{i,\infty}}_{L^1(\Omega)}^2 \leq Ce^{-\lambda_0t},
\end{equation}
and consequently, for any $1<p<\infty$
\begin{equation}\label{exp_Lp}
\norm{u_i(t)-u_{i,\infty}}_{L^p(\Omega)} \leq \norm{u_i(t)-u_{i,\infty}}_{L^{\infty}(\Omega)}^{1-\frac 1p}\norm{u_i(t)-u_{i,\infty}}_{L^1(\Omega)}^{\frac 1p} \leq Ce^{-\frac{\lambda_0}{2p}t}
\end{equation}
thanks to the uniform in time bounds on the $L^\infty\pa{\Omega}$ norms of all functions involved. To finally obtain the exponential convergence in $L^\infty(\Omega)$ norm for $\br{u_i}_{i=1,2,3}$ we use arguments similar to those in the proof of Proposition \ref{stretched_exp_1}.
The function $f$, defined in \eqref{eq:f_estimation}, now has the bound
$$\norm{f(t)}_{L^p\pa{\Omega}}\leq \mathcal{C}e^{-\frac{\lambda_0}{2p}t}$$
and as such \eqref{b6} can be replaced with
\begin{align*}
&\norm{u_i(t+1) - u_{i,\infty}}_{L^\infty(\Omega)}
\leq Ce^{-\frac{\lambda_0}{2p}t}\rpa{1+\int_0^1(1-s)^{-\frac{N}{2p}}ds}\leq Ce^{-\frac{\lambda_0}{2p}t},\quad i=1,2,3
\end{align*}
if we choose $p>\frac N2$.
It remains to deal with $u_4$. With $f$ as in \eqref{eq:f_estimation} we see that
$$\partial_t\pa{u_4-u_{4,\infty}}(x,t) = f(\bm{u}(x,t)),$$
which we can rewrite as
$$\partial_t\pa{u_4-u_{4,\infty}}(x,t)+ u_{3,\infty}\pa{u_4(x,t)-u_{4,\infty}} = \pa{u_1(x,t)-u_{1,\infty}}u_2(x,t)$$
$$ + u_{1,\infty}\pa{u_2(x,t)-u_{2,\infty}}
-\pa{u_3(x,t)-u_{3,\infty}}u_4(x,t). $$
Integrating the above we find that
$$u_4(x,t)-u_{4,\infty} = e^{-u_{3,\infty}t}\pa{u_{4,0}(x)-u_{4,\infty}}$$
$$+\int_{0}^t e^{-u_{3,\infty}\pa{t-s}}\pa{\pa{u_1(x,s)-u_{1,\infty}}u_2(x,s)+ u_{1,\infty}\pa{u_2(x,s)-u_{2,\infty}}
-\pa{u_3(x,s)-u_{3,\infty}}u_4(x,s)}ds,$$
which implies that
$$\norm{u_4(t)-u_{4,\infty}}_{L^\infty\pa{\Omega}} \leq e^{-u_{3,\infty}t}\norm{u_{4,0}-u_{4,\infty}}_{L^\infty\pa{\Omega}}+Ce^{-u_{3,\infty}t}\int_{0}^t e^{\pa{u_{3,\infty}-\lambda}s}ds$$
where $\lambda>0$ is such that
$$\norm{u_i(t)-u_{i,\infty}}_{L^\infty\pa{\Omega}} \leq \mathcal{C}e^{-\lambda t},\quad i=1,2,3$$
and where we have used the uniform in time $L^\infty\pa{\Omega}$ bounds of $\br{u_i}_{i=1,\dots,4}$. Thus
$$\norm{u_4(t)-u_{4,\infty}}_{L^\infty\pa{\Omega}}\leq e^{-u_{3,\infty}t}\norm{u_{4,0}-u_{4,\infty}}_{L^\infty\pa{\Omega}}+C\begin{cases}\frac{e^{-\lambda t}-e^{-u_{3,\infty}t}}{u_{3,\infty}-\lambda} & \lambda\not=u_{3,\infty}, \\
te^{-u_{3,\infty}t} & \lambda=u_{3,\infty},\end{cases}$$
which shows that for any $\lambda_1 < \min\pa{\lambda, u_{3,\infty}}$, we have that
$$\norm{u_4(t)-u_{4,\infty}}_{L^\infty\pa{\Omega}}\leq C_{\br{u_{i,0}}_{i=1,\dots,4},\lambda_1}e^{-\lambda_1 t}.$$
This concludes the proof.
\end{proof}
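The elementary integral used in the last step, $e^{-at}\int_0^t e^{(a-\lambda)s}\,ds$ with $a$ playing the role of $u_{3,\infty}$, can be double-checked numerically against the closed form appearing in the case distinction above; the following sketch uses arbitrary illustrative values of $a$ and $\lambda$:

```python
import math

def damped_integral(a, lam, t, steps=200000):
    """Numerically evaluate e^{-a t} * integral_0^t e^{(a - lam) s} ds (midpoint rule)."""
    h = t / steps
    total = sum(math.exp((a - lam) * (k + 0.5) * h) for k in range(steps))
    return math.exp(-a * t) * total * h

a, lam, t = 1.3, 0.4, 2.0          # illustrative values with lam != a
closed = (math.exp(-lam * t) - math.exp(-a * t)) / (a - lam)
assert abs(damped_integral(a, lam, t) - closed) < 1e-6

# The lam == a case degenerates to t * e^{-a t}:
assert abs(damped_integral(a, a, t) - t * math.exp(-a * t)) < 1e-6
```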
\section{Final Remarks}\label{sec:remarks}
We feel that our work is just the beginning of further investigation of systems of the form \eqref{eq:sys_full} and \eqref{eq:sys}, especially in light of the indirect diffusion effect we have shown, and the important interplay between the entropy of the system and the properties of its solutions.
\medskip
There are many additional interesting open problems, which we hope to address in the near future. A few examples are:
\begin{itemize}
\item \textit{Is it possible to obtain global bounded solutions in dimensions higher than $2$ without additional assumptions on the diffusion coefficients?} A positive answer to this question would potentially extend Theorem \ref{thm:main-3D}, and perhaps allow us to apply similar techniques to the more complicated systems \eqref{eq:general_chemical_rd}.
\item \textit{How does the indirect diffusion effect work for more general systems?} For example, in systems of the form \eqref{eq:general_chemical_rd} with one or more species without diffusion. We believe that our approach in this paper can be reused once enough estimates on the solutions are available.
\item \textit{Can one obtain an indirect diffusion effect of the form \eqref{IDE} in our setting, where the constant $\beta$ only depends on quantities that are conserved?} While it is interesting, and enlightening, to see how the interaction between the entropic trend to equilibrium and bounds on the solutions lead to showing that \eqref{IDE} is valid for a constant $\beta$ that depends on the $L^\infty\pa{\Omega}-$norm, finding a direct method to show \eqref{IDE} with the aforementioned $\beta$ gives hope to extending our results to more complicated systems.
\end{itemize}
\medskip
\par{\bf Acknowledgements:} The third author is supported by the International Research Training Group IGDK 1754 ``Optimization and Numerical Analysis for Partial Differential Equations with Nonsmooth Structures'', funded by the German Research Council (DFG) project number 188264188/GRK1754 and the Austrian Science Fund (FWF) under
grant number W 1244-N18. This work is partially supported by NAWI Graz.
The first and third authors gratefully acknowledge the support of the Hausdorff Research Institute for Mathematics (Bonn), through the Junior Trimester Program on Kinetic Theory.
\section{Introduction}
\subsection{Random access codes}
In general \emph{random access code} (or simply RAC) stands for ``encoding a long message into fewer bits with the ability to recover (decode) any one of the initial bits (with some probability of success)''. A random access code can be characterized by the symbol ``\racm{n}{m}'' meaning that $n$ bits are encoded into $m$ and any one of the initial bits can be recovered with probability at least $p$. We require that $p > 1/2$ since $p = 1/2$ can be achieved by guessing. In this paper we consider only the case when $m = 1$. So we have the following problem:
\begin{problem}[Classical]
There are two parties---Alice and Bob. Alice is asked to encode some classical \mbox{$n$-bit} string into $1$ bit and send this bit to Bob. We want Bob to be able to recover any one of the $n$ initial bits with high success probability.
\end{problem}
Note that Alice does not know in advance which bit Bob will need to recover, so she cannot send only that bit. If they share a quantum channel then we have the quantum version of the previous problem:
\begin{problem}[Quantum]
Alice must encode her classical \mbox{$n$-bit} message into $1$ \emph{qubit} (quantum bit) and send it to Bob. He performs some measurement on the received qubit to extract the required bit (the measurement that is used depends on which bit is needed).
\end{problem}
Both problems look similar; however, the quantum version has an important feature. In the classical case the fact that Bob can recover any one of the initial bits implies that he can actually recover all of them---each with high probability of success. Surprisingly, in the quantum case this is not true: after the first measurement the state of the qubit is disturbed, and further attempts to extract more information can fail.
\subsection{History and applications} \label{sect:History}
As noted in \cite{Galvao,Severini}, the idea behind \emph{quantum random access codes} or QRACs is very old (by quantum information standards). It first appeared in a paper by Stephen Wiesner \cite{Wiesner} published in 1983 and was called \textit{conjugate coding}. Later these codes were \mbox{re-discovered} by Ambainis et al. in \cite{DenseCoding1,DenseCoding2}. They show that there exists a \racx{2}{0.85} QRAC and mention its immediate generalization to a \racx{3}{0.79} QRAC due to Chuang (see also \cite{No41} and \cite{Severini} for more details). However, Hayashi et al. \cite{No41} show that it is impossible to construct a \racp{4} QRAC with $p>1/2$. We will discuss these results further in Sect.~\ref{sect:KnownQRACs}.
There has also been work on \racm{n}{m} codes with $m>1$, see \cite{DenseCoding1,DenseCoding2,Nayak}. Ambainis et al. \cite{DenseCoding1} show that if a \racm{n}{m} QRAC with $p>1/2$ exists, then $m = \Omega(n / \log n)$, which was later improved by Nayak \cite{Nayak,DenseCoding2} to $m \geq (1-H(p)) n$, where $H(p) = -p \log p - (1-p) \log (1-p)$ is the \emph{binary entropy function}. Other generalizations include: considering \mbox{$d$-valued} bits instead of qubits \cite{Galvao,Severini} and recovering several rather than a single bit \cite{Hypercontractive}.
Originally quantum random access codes were studied in the context of quantum finite automata \cite{DenseCoding1,DenseCoding2,Nayak}. However, they also have applications in quantum communication complexity \cite{Galvao,Klauck,Aaronson,Gavinsky}, in particular for \emph{network coding} \cite{No41,NetworkCoding} and \emph{locally decodable codes} \cite{Kerenidis,KerenidisDeWolf,Wehner,Hypercontractive}. Recently results on quantum random access codes have been applied for quantum state learning \cite{AaronsonLearnability}.
Experimental feasibility of QRACs and their relation to \emph{contextuality} and \emph{non-locality} has been discussed in \cite[Chapter 7]{Galvao}. Recently a similar protocol called \emph{parity-oblivious multiplexing} has been considered in \cite{Spekkens}. It has an additional cryptographic constraint that Alice is not allowed to transmit any information about the parity of the input string. In addition, \cite{Spekkens} also discusses the first experimental demonstration of \rac{2} and \rac{3} QRACs.
We want to emphasize the setting in which the impossibility of \racp{4} QRAC with $p>1/2$ was proved in \cite{No41}: Alice is allowed to perform a locally randomized encoding of the given string into a one-qubit state and Bob is allowed to perform different \emph{positive operator-valued measure} (POVM) measurements to recover different bits. This is the most general setting when information is encoded into a one-qubit state and both parties are allowed to use randomized strategies, but only have access to local coins. However, we can consider an even more general setting---when both parties share a common coin. This means that Alice and Bob are allowed to cooperate by using some shared source of randomness to agree on which strategy to use. We will refer to this source as a \emph{shared random string} or \emph{shared randomness} (SR). Note that shared randomness is a more powerful resource than local randomness, since parts of the shared random string can be exclusively used only by Alice or Bob to simulate local coins. It turns out that in this new setting \racp{4} QRAC is possible with $p>1/2$. In fact, \racp{n} QRACs with $p > 1/2$ can be constructed for all $n \geq 1$ (see Sect.~\ref{sect:LowerBounds}).
\subsection{Outline of results}
In Sect.~\ref{sect:ClassicalRACs} we study classical \rac{n} random access codes with shared randomness. In Sect.~\ref{sect:Yao} we introduce Yao's principle that is useful for understanding both classical and quantum codes. A classical code that is optimal for all $n$ is presented in Sect.~\ref{sect:OptimalClassical} and the asymptotic behavior of its success probability is considered in Sect.~\ref{sect:ClassicalBound}.
In Sect.~\ref{sect:QuantumRACs} we study quantum random access codes with shared randomness. In Sect.~\ref{sect:KnownQRACs} we discuss what is known in the case when shared randomness is not allowed, i.e., \rac{2} and \rac{3} QRACs and the impossibility of \rac{4} QRAC. In Sect.~\ref{sect:UpperBound} we give an upper bound of success probability of QRACs with SR and generalize it in Sect.~\ref{sect:GeneralUpperBound} for POVM measurements. In Sect.~\ref{sect:LowerBounds} we give two constructions of \racp{n} QRAC with SR and $p > 1/2$ for all $n \geq 2$ that provide a lower bound for success probability.
In Sect.~\ref{sect:Constructions} we try to find optimal QRACs with SR for several small values of $n$. In particular, in Sect.~\ref{sect:Numerical} we discuss QRACs obtained by numerical optimization, and in Sect.~\ref{sect:Symmetric} we consider symmetric constructions.
Finally, we conclude in Sect.~\ref{sect:Conclusion} with a summary of the obtained results (Sect. \ref{sect:Summary}), a list of open problems (Sect.~\ref{sect:OpenProblems}) and possible generalizations (Sect.~\ref{sect:Generalizations}).
\section{Classical random access codes} \label{sect:ClassicalRACs}
\subsection{Types of classical encoding-decoding strategies}
As a synonym for random access code we will use the term \emph{strategy} to refer to the joint \emph{encoding-decoding scheme} used by Alice and Bob. Two measures of how good the strategy is will be used: the \emph{worst case success probability} and the \emph{average success probability}. Both probabilities must be calculated over all possible pairs $(x,i)$ where $x\in\set{0,1}^n$ is the input and $i\in\set{1,\dotsc,n}$ indicates which bit must be recovered. We are interested in the worst case success probability, but in our case according to Yao's principle (introduced in Sect.~\ref{sect:Yao}) the average success probability can be used to estimate it.
Depending on the computational model considered, different types of strategies are allowed. The simplest type corresponds to Alice and Bob acting deterministically and independently.
\begin{definition}
A \emph{pure classical \rac{n} encoding-decoding strategy} is an ordered tuple $(E,D_1,\dotsc,D_n)$ that consists of an \emph{encoding function} $E: \set{0,1}^n \mapsto \set{0,1}$ and $n$ \emph{decoding functions} $D_i: \set{0,1} \mapsto \set{0,1}$.
\end{definition}
These limited strategies yield RACs with poor performance. This is because Bob can recover all bits correctly for no more than two input strings, since he receives either $0$ or $1$ and acts deterministically in each case. For all other strings at least one bit will definitely be recovered incorrectly, therefore the worst case success probability is $0$. If we allow Alice and Bob to act probabilistically but without cooperation, then we get mixed strategies.
\begin{definition}
A \emph{mixed classical \rac{n} encoding-decoding strategy} is an ordered tuple $(P_E,P_{D_1},\dotsc,P_{D_n})$ of probability distributions. $P_E$ is a distribution over encoding functions and $P_{D_i}$ over decoding functions.
\end{definition}
It is obvious that in this setting the worst case probability is at least $1/2$. This is obtained by guessing---we output either $0$ or $1$ with probability $1/2$ regardless of the input. Formally this means that for each $i$, $P_{D_i}$ is a uniform distribution over two constant decoding functions $0$ and $1$. It has been shown that in this setting for \rac{2} case one cannot do better than guessing, i.e., there is no \racp{2} RAC with worst case success probability $p > 1/2$ \cite{DenseCoding1,DenseCoding2}.
However, we can allow cooperation between Alice and Bob---they can use a shared random string to agree on some joint strategy.
\begin{definition}
A \emph{classical \rac{n} encoding-decoding strategy with shared randomness} is a probability distribution over pure classical strategies.
\end{definition}
Note that this is the most general randomized setting, since both randomized cooperation and local randomization are possible. This is demonstrated in the following example.
\begin{example}
Consider the following strategy: randomly agree on $i\in\set{1,\dotsc,n}$ and send the \mbox{$i$th} bit; if the \mbox{$i$th} bit is requested, output the received bit, otherwise guess. This strategy can formally be specified as follows: uniformly choose a pure strategy from the set
\begin{equation}
\bigcup_{i \in \set{1,\dotsc,n}}
\bigl\{
(e_i, c_1, \dotsc, c_{i-1}, d, g_1, \dotsc, g_{n-i}) \mid
c \in \set{d_0,d_1}^{i-1},
g \in \set{d_0,d_1}^{n-i}
\bigr\},
\end{equation}
where the encoding function $e_i$ is given by $e_i(x) = x_i$ and decoding functions $d_0$, $d_1$, and $d$ are given by $d_0(b) = 0$, $d_1(b) = 1$, and $d(b) = b$, where $b$ is the received bit. The total amount of required randomness is $n - 1 + \log n$ bits, because one out of $n \cdot 2^{n-1}$ pure strategies must be selected. Note that only $\log n$ of these bits must be shared among Alice and Bob, so that they can agree on the value of $i$. The remaining $n - 1$ random bits are needed only by Bob for choosing random decoding functions $c \in \set{d_0,d_1}^{i-1}$ and $g \in \set{d_0,d_1}^{n-i}$.
\end{example}
Note that the amount of randomness used in the above example can be reduced. Since only one bit must be recovered, there is no need to choose each of the decoding functions independently. Thus Bob needs only one random bit that he will output whenever some bit other than the \mbox{$i$th} bit is requested. This is illustrated in the next example.
\begin{example}
Alice and Bob uniformly sample a pure strategy from the following set:
\begin{equation}
\bigl\{
(e_i, \underbrace{c, \dotsc, c}_{i-1}, d, \underbrace{c, \dotsc, c}_{n-i}) \mid
1 \leq i \leq n,
c \in \set{d_0,d_1}
\bigr\}.
\end{equation}
This requires $\log n$ random bits to be shared among Alice and Bob and $1$ private random bit for Bob, i.e., $1 + \log n$ random bits in total.
\end{example}
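The success probability of this ``send one random bit'' strategy is easy to compute exactly: the requested bit coincides with the transmitted one with probability $1/n$, and otherwise Bob's random bit is correct with probability $1/2$, giving $\frac 1n + \frac{n-1}{n}\cdot\frac 12 = \frac 12 + \frac 1{2n}$. A small enumeration sketch (illustrative only, not part of the formal development) confirming this:

```python
from fractions import Fraction
from itertools import product

def random_bit_strategy_success(n):
    """Average success probability of: share a random i, Alice sends x_i,
    Bob outputs it if bit i is requested and a uniform random bit otherwise."""
    total = Fraction(0)
    for x in product((0, 1), repeat=n):          # all inputs, uniform
        for req in range(n):                     # requested bit
            for i in range(n):                   # shared random choice of sent bit
                if req == i:
                    total += Fraction(1)         # x_i is forwarded correctly
                else:
                    total += Fraction(1, 2)      # Bob guesses
    return total / (2**n * n * n)

for n in range(1, 7):
    assert random_bit_strategy_success(n) == Fraction(1, 2) + Fraction(1, 2 * n)
```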
We are interested in classical strategies with SR, because they provide a classical analogue of QRACs with SR. However, in this setting finding the optimal strategy seems to be hard, therefore we will turn to Yao's principle for help.
\subsection{Yao's principle} \label{sect:Yao}
When dealing with randomized algorithms, it is hard to draw general conclusions (like proving optimality of a certain randomized algorithm) because the possible algorithms may form a continuum. In such situations it is very helpful to apply Yao's principle \cite{Yao}. This allows us to shift the randomness in the algorithm to the input and consider only deterministic algorithms.
Let $\rand$ be a classical strategy with SR. One can think of it as a stochastic process consisting of applying the encoding map $E$ to the input $x$, followed by applying the decoding map $D_i$ to the \mbox{$i$th} bit. Both of these maps depend on the value of the shared random string. The result of $\rand$ is $\rand(x,i) = D_i(E(x))$, which is a stochastic variable over the set $\set{0,1}$. Let $\Pr[\rand(x,i) = x_i]$ denote the probability that the stochastic variable $\rand(x,i)$ takes value $x_i$. Then the worst case success probability of the optimal classical strategy with SR is given by
\begin{equation}
\max_{\rand} \min_{x,i} \Pr[\rand(x,i) = x_i].
\label{eq:YaoRandomized}
\end{equation}
Let $\mu$ be some distribution over the input set $\set{0,1}^n \times \set{1,\dotsc,n}$ and let ${\textstyle\Pr_{\mu}[\pure(x,i) = x_i]}$ denote the expected success probability of a pure (deterministic) strategy $\pure$. If the ``hardest'' input distribution is chosen as $\mu$, then the expected success probability of the best pure strategy for this distribution is
\begin{equation}
\min_{\mu} \max_{\pure} {\textstyle\Pr_{\mu}[\pure(x,i) = x_i]}.
\label{eq:YaoPure}
\end{equation}
\emph{Yao's principle} states that the quantities given in (\ref{eq:YaoRandomized}) and (\ref{eq:YaoPure}) are equal \cite{Yao}:
\begin{equation}
\max_{\rand} \min_{x,i} \Pr[\rand(x,i) = x_i] =
\min_{\mu} \max_{\pure} {\textstyle\Pr_{\mu}[\pure(x,i) = x_i]}.
\label{eq:Yao}
\end{equation}
Thus Yao's principle provides us with an upper bound for the worst case probability (\ref{eq:YaoRandomized}). All we have to do is to choose an arbitrary input distribution $\mu_0$ and find the best pure strategy $\pure_0$ for it. Then according to Yao's principle we have
\begin{equation}
{\textstyle\Pr_{\mu_0}[\pure_0(x,i) = x_i]} \geq \max_{\rand} \min_{x,i} \Pr[\rand(x,i) = x_i],
\label{eq:YaoBound}
\end{equation}
with equality if and only if $\mu_0$ is the ``hardest'' distribution. It turns out that for random access codes the uniform distribution $\uniform$ is the ``hardest''. To prove it, we must first consider the randomization lemma.
\begin{lemma}
$\forall \pure \exists \rand: \min_{x,i} \Pr[\rand(x,i) = x_i] = {\textstyle\Pr_{\uniform}[\pure(x,i) = x_i]},$ where $\uniform$ is the uniform distribution. In other words: the worst case success probability of $\rand$ is the same as the average case success probability of $\pure$ with uniformly distributed input.
\label{lem:Randomization}
\end{lemma}
\begin{proof}
This can be achieved by randomizing the input with the help of the shared random string. Alice's input can be randomized by \mbox{XOR-ing} it with an \mbox{$n$-bit} random string $r$. But Bob's input can be randomized by adding (modulo $n$) a random number $d\in\set{0,\dotsc,n-1}$ to it (assume for now that bits are numbered from $0$ to $n-1$). To obtain a consistent strategy, these actions must be identically performed on both sides, thus a shared random string of $n + \log n$ bits\footnote{We will not worry about how Bob obtains a uniformly distributed $d$ from a string of random bits when $n \neq 2^k$.} is required. Assume that $E$ and $D_i$ are the encoding and decoding functions of the pure strategy $\pure$; then the new strategy $\rand$ is
\begin{align}
E'(x) &= E(\mathrm{Shift}_d(x \oplus r)), \\
D'_i(b) &= D_{i + d \bmod n}(b) \oplus r_{i},
\end{align}
where $\mathrm{Shift}_d(s)$ substitutes $s_{i + d \bmod n}$ by $s_i$ in string $s$. Due to input randomization, this strategy has the same success probability for all inputs $(x,i)$, namely
\begin{equation}
\Pr[\rand(x,i) = x_i] \
= \sum_{y\in\set{0,1}^n} \sum_{j=0}^{n-1} \frac{1}{2^n \cdot n} \Pr[\pure(y,j) = y_j]
= {\textstyle\Pr_{\uniform}[\pure(y,j) = y_j]},
\end{equation}
coinciding with the average success probability of the pure strategy $\pure$.
\end{proof}
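The equalization achieved in Lemma \ref{lem:Randomization} can be verified by direct enumeration for a small example: starting from the pure majority strategy for $n = 3$, averaging over all $2^3 \cdot 3$ values of $(r, d)$ makes the success probability of every input $(x,i)$ equal to the pure strategy's average. A sketch of such a check (illustrative only, not part of the proof):

```python
from itertools import product

n = 3
def majority(x):            # pure encoding: majority of bits (|x| >= n/2 -> 1)
    return 1 if sum(x) >= n / 2 else 0

def randomized_success(x, i):
    """Average success of the input-randomized strategy over all (r, d)."""
    wins = 0
    for r in product((0, 1), repeat=n):
        for d in range(n):
            y = [0] * n
            for j in range(n):
                y[(j + d) % n] = x[j] ^ r[j]     # y = Shift_d(x XOR r)
            b = majority(tuple(y))               # Alice's single transmitted bit
            wins += (b ^ r[i]) == x[i]           # Bob outputs b XOR r_i
    return wins / (2**n * n)

pure_average = sum(majority(x) == x[i]
                   for x in product((0, 1), repeat=n)
                   for i in range(n)) / (2**n * n)

for x in product((0, 1), repeat=n):
    for i in range(n):
        assert randomized_success(x, i) == pure_average   # every input equally hard
```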
Now we will show that inequality (\ref{eq:YaoBound}) becomes an equality when $\mu_0 = \uniform$, meaning that the uniform distribution $\uniform$ is the ``hardest''.
\begin{lemma}
The minimum of (\ref{eq:YaoPure}) is reached at the uniform distribution $\uniform$, i.e.,
\begin{equation}
\min_{\mu} \max_{\pure} {\textstyle\Pr_{\mu}[\pure(x,i) = x_i]} =
\max_{\pure} {\textstyle\Pr_{\uniform}[\pure(x,i) = x_i]}.
\label{eq:HardestDistribution}
\end{equation}
\end{lemma}
\begin{proof}
From the previous Lemma we know that there exists a strategy with SR $\rand_0$ such that
\begin{equation}
\min_{x,i} \Pr[\rand_0(x,i) = x_i] = \max_{\pure} {\textstyle\Pr_{\uniform}[\pure(x,i) = x_i]}
\end{equation}
($\rand_0$ is obtained from the best pure strategy by prepending it with input randomization). However, among all strategies with SR there might be one that is better than $\rand_0$, thus
\begin{equation}
\max_{\rand} \min_{x,i} \Pr[\rand(x,i) = x_i] \geq \max_{\pure} {\textstyle\Pr_{\uniform}[\pure(x,i) = x_i]}.
\label{eq:FromRandomization}
\end{equation}
But if we put $\mu_0=\uniform$ into inequality (\ref{eq:YaoBound}), we obtain
\begin{equation}
\max_{\pure} {\textstyle\Pr_{\uniform}[\pure(x,i) = x_i]} \geq \max_{\rand} \min_{x,i} \Pr[\rand(x,i) = x_i],
\end{equation}
which is the same as (\ref{eq:FromRandomization}), but with reversed sign. This means that both sides are actually equal:
\begin{equation}
\max_{\pure} {\textstyle\Pr_{\uniform}[\pure(x,i) = x_i]} = \max_{\rand} \min_{x,i} \Pr[\rand(x,i) = x_i].
\label{eq:TightBound}
\end{equation}
Applying Yao's principle to the right hand side of (\ref{eq:TightBound}) we obtain the desired equation (\ref{eq:HardestDistribution}).
\end{proof}
\begin{theorem}
For any pure strategy $\pure$
\begin{equation}
{\textstyle\Pr_{\uniform}[\pure(x,i) = x_i]} \leq \max_{\rand} \min_{x,i} \Pr[\rand(x,i) = x_i],
\end{equation}
with equality if and only if $\pure$ is optimal for the uniform distribution $\uniform$.
\label{thm:SRtoPure}
\end{theorem}
\begin{proof}
To obtain the required inequality, do not maximize the left hand side of equation (\ref{eq:TightBound}), but put an arbitrary $\pure$. It is obvious that we will obtain equality if and only if $\pure$ is optimal.
\end{proof}
This theorem has important consequences---it allows us to consider pure strategies with uniformly distributed input rather than strategies with SR. If we manage to find the optimal pure strategy, then we can also construct an optimal strategy with SR using input randomization\footnote{If the encoding function depends only on the Hamming weight of the input string $x$ (e.g., majority function) and the decoding function does not depend on $i$, there is no need to randomize over $i$, so $n$ instead of $n + \log n$ shared random bits are enough.}. If the pure strategy is not optimal, then we get a lower bound for the strategy with SR.
\subsection{Classical \rac{n} RAC}
Before considering \rac{n} QRACs with shared randomness, we will find an optimal classical \rac{n} RAC with shared randomness and derive bounds for it.
\subsubsection{Optimal strategy} \label{sect:OptimalClassical}
According to Theorem~\ref{thm:SRtoPure} we can consider only pure strategies. As a pure strategy is deterministic, for each input it gives either a correct or a wrong answer. To maximize the average success probability we must find a pure strategy that gives the correct answer for as many of the $n \cdot 2^n$ inputs as possible---such a strategy we will call an \emph{optimal pure strategy}.
Let us first consider the problem of finding an optimal decoding strategy, when the encoding strategy is fixed. An encoding function $E: \set{0,1}^n \mapsto \set{0,1}$ divides the set of all strings into two parts:
\begin{equation}
\begin{aligned}
X_0 &= \set{x\in\set{0,1}^n \mid E(x) = 0}, \\
X_1 &= \set{x\in\set{0,1}^n \mid E(x) = 1}.
\end{aligned}
\end{equation}
If Bob receives bit $b$, he knows that the initial string was definitely from the set $X_b$, but there is no way for him to tell exactly which string it was. However, if he must recover only the \mbox{$i$th} bit, he can check whether there are more zeros or ones among the \mbox{$i$th} bits of strings from set $X_b$. More formally, we can introduce the symbol $N_i^b(k)$ that denotes the number of strings from set $X_b$ that have the bit $k$ in \mbox{$i$th} position:
\begin{equation}
N_i^b(k) = \abs{\set{x \in X_b \mid x_i = k}}.
\end{equation}
Therefore the optimal decoding strategy $D_i: \set{0,1} \mapsto \set{0,1}$ for the \mbox{$i$th} bit is
\begin{equation}
D_i(b) =
\begin{cases}
0& \text{if $N_i^b(0) \geq N_i^b(1)$,}\\
1& \text{otherwise}.
\end{cases}
\label{eq:OptimalClassicalDecoding}
\end{equation}
Of course, if $N_i^b(0) = N_i^b(1)$, Bob can output $1$ as well. For pure strategies there are only $4$ possible decoding functions for each bit: $0$, $1$, $b$, or $\textstyle\mathop{\mathrm{NOT}} b$. But this is still quite a lot, so we will consider the following two lemmas. The first lemma will rule out the \emph{constant decoding functions} $0$ and $1$.
\begin{lemma}
For any $n$ there exists an optimal pure classical \rac{n} RAC that does not use constant decoding functions $0$ and $1$ for any bits.
\label{lm:NoConst}
\end{lemma}
\begin{proof}
We will show that if there exists an optimal strategy that contains constant decoding functions for some bits, then there also exists an optimal strategy that does not. Let us assume that there is an optimal strategy with constant decoding function $0$ for the \mbox{$i$th} bit (the same argument goes through for $1$ as well). Then according to equation (\ref{eq:OptimalClassicalDecoding}) we have $N_i^0(0) \geq N_i^0(1)$ and $N_i^1(0) \geq N_i^1(1)$. Note that $N_i^0(0) + N_i^1(0) = N_i^0(1) + N_i^1(1) = 2^{n-1}$, because $x_i=0$ in exactly half of all $2^n$ strings. This means that actually $N_i^0(0) = N_i^0(1)$ and $N_i^1(0) = N_i^1(1)$. If we take a look at (\ref{eq:OptimalClassicalDecoding}) again, we see that in such a situation any decoding strategy is optimal and we can use any non-constant strategy instead.
\end{proof}
\begin{lemma}
For any $n$ there exists an optimal pure classical \rac{n} RAC that does not use decoding function $\textstyle\mathop{\mathrm{NOT}} b$ for any bits.
\label{lm:NoNot}
\end{lemma}
\begin{proof}
We will show that for each pure strategy $\pure$ that uses negation as the decoding function for the \mbox{$i$th} bit, there exists a pure strategy $\pure'$ with the same average case success probability that does not. If $\pure$ consists of encoding function $E$ and decoding functions $D_j$, then $\pure'$ can be obtained from $\pure$ by inverting the \mbox{$i$th} bit before encoding and after decoding:
\begin{align}
E'(x) &= E(\textstyle\mathop{\mathrm{NOT}}_i x), \\
D_j'(b) &=
\begin{cases}
\textstyle\mathop{\mathrm{NOT}} D_j(b)& \text{if $j=i$,} \\
D_j(b)& \text{otherwise,}
\end{cases}
\end{align}
where $\textstyle\mathop{\mathrm{NOT}}_i$ inverts the \mbox{$i$th} bit of string. It is obvious that $\pure$ and $\pure'$ have the same average success probabilities, because if $\pure$ gives the correct answer for input $(x,i)$ then $\pure'$ gives the correct answer for input $(\textstyle\mathop{\mathrm{NOT}}_i x, i)$. The same holds for wrong answers.
\end{proof}
\begin{theorem}
The pure classical \rac{n} RAC with identity decoding functions and majority encoding function is optimal.
\label{thm:OptimalClassical}
\end{theorem}
\begin{proof}
According to Lemma~\ref{lm:NoConst} and Lemma~\ref{lm:NoNot}, there exists an optimal pure classical \rac{n} RAC with identity decoding function for all bits. Now we must consider the other part---finding an optimal encoding given a particular (identity) decoding function. It is obvious that in our case optimal encoding must return the majority of bits:
\begin{equation}
E'(x) =
\begin{cases}
0& \text{if $\abs{x} < n/2$,} \\
1& \text{otherwise,}
\end{cases}
\end{equation}
where $\abs{x}$ is the Hamming weight of string $x$ (the number of ones in it).
\end{proof}
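For small $n$ the optimality of this strategy can also be confirmed by exhaustive search: enumerate all $2^{2^n}$ encoding functions together with all $4^n$ tuples of decoding functions and compare the best average success count with that of majority encoding and identity decoding. An illustrative brute-force check for $n = 3$, where the optimum is $18$ correct answers out of $n\cdot 2^n = 24$, i.e., $p = 3/4$:

```python
from itertools import product

n = 3
inputs = list(product((0, 1), repeat=n))
# The four possible decoding functions of the received bit b: 0, 1, b, NOT b.
decoders = [lambda b: 0, lambda b: 1, lambda b: b, lambda b: 1 - b]

best = 0
for enc in product((0, 1), repeat=2**n):            # all encoding functions
    for dec in product(decoders, repeat=n):         # one decoder per bit position
        correct = sum(dec[i](enc[k]) == x[i]
                      for k, x in enumerate(inputs)
                      for i in range(n))
        best = max(best, correct)

# Majority encoding with identity decoding, for comparison:
majority_correct = sum((1 if sum(x) >= n / 2 else 0) == x[i]
                       for x in inputs for i in range(n))
assert best == majority_correct == 18                # 18/24 = 3/4
```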
\subsubsection{Asymptotic bounds} \label{sect:ClassicalBound}
Let us find the exact value of the average success probability for the optimal pure RAC suggested in Theorem~\ref{thm:OptimalClassical}. We will separately consider the even and odd cases.
In the odd case ($n=2m+1$) the average success probability is given by
\begin{equation}
p(2m+1) = \frac{1}{(2m+1) \cdot 2^{2m+1}}
\Biggl( 2 \sum_{i=m+1}^{2m+1} i \binom{2m+1}{i} \Biggr),
\label{eq:Odd}
\end{equation}
where the factor $2$ stands for either zeros or ones being the majority, and $\binom{2m+1}{i}$ stands for the number of strings where the given symbol dominates and appears exactly $i$ times.
In the even case ($n=2m$) there are many strings with equally many zeros and ones. These strings are bad, because with majority encoding and identity decoding it is not possible to give the correct answer for more than half of the bits. The corresponding average success probability is given by
\begin{equation}
p(2m) = \frac{1}{2m \cdot 2^{2m}}
\Biggl( 2 \sum_{i=m+1}^{2m} i \binom{2m}{i} + m \binom{2m}{m} \Biggr),
\label{eq:Even}
\end{equation}
where the last term stands for the bad strings.
In Appendix~\ref{app:MagicFormulas} we give a combinatorial interpretation of the sums in (\ref{eq:Odd}) and (\ref{eq:Even}). Equations (\ref{eq:Magic1}) and (\ref{eq:Magic2}) derived in Appendix~\ref{app:MagicFormulas} can be used to simplify $p(2m+1)$ and $p(2m)$, respectively. It turns out that both probabilities are equal:
\begin{equation}
p(2m) = p(2m+1) = \frac{1}{2} + \frac{1}{2^{2m+1}} \binom{2m}{m}.
\label{eq:Exact}
\end{equation}
These two expressions can be combined as follows:
\begin{equation}
p(n) = \frac{1}{2} + \frac{1}{2^n} \binom{n-1}{\floor{\frac{n-1}{2}}}.
\end{equation}
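As a quick consistency check (ours, not part of the derivation), the counting formulas (\ref{eq:Odd}) and (\ref{eq:Even}) can be evaluated directly and compared with the combined closed form:

```python
from math import comb

def p_sum(n):
    # average success probability via the counting formulas;
    # for odd n = 2m+1 and even n = 2m alike, m = n // 2
    m = n // 2
    s = 2 * sum(i * comb(n, i) for i in range(m + 1, n + 1))
    if n % 2 == 0:
        s += m * comb(n, m)  # the "bad" balanced strings
    return s / (n * 2 ** n)

def p_closed(n):
    # combined closed form: 1/2 + binom(n-1, floor((n-1)/2)) / 2^n
    return 0.5 + comb(n - 1, (n - 1) // 2) / 2 ** n

for n in range(1, 16):
    assert abs(p_sum(n) - p_closed(n)) < 1e-12
```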
\image{0.9}{PlotClassical2}{Success probability for optimal pure classical \rac{n} RAC}{Exact probability of success $p(n)$ for optimal pure classical \rac{n} RAC (black dots) according to (\ref{eq:Exact}) and its approximate value (dashed line) according to (\ref{eq:Approx}). Dotted lines show upper and lower bounds of $p(n)$ for odd and even $n$ according to inequalities (\ref{eq:ApproxOdd}) and (\ref{eq:ApproxEven}).}{fig:Classical}
We can apply Stirling's approximation \cite{Stirling} $m! \approx \left(\frac{m}{e}\right)^m \sqrt{2 \pi m}$ to (\ref{eq:Exact}) and obtain
\begin{equation}
p(2m) = p(2m+1) \approx \frac{1}{2} + \frac{1}{2 \sqrt{\pi m}}.
\label{eq:ApproxEvenOdd}
\end{equation}
If we put $m \approx \frac{n}{2}$, then (\ref{eq:ApproxEvenOdd}) turns into
\begin{equation}
p(n) \approx \frac{1}{2} + \frac{1}{\sqrt{2 \pi n}}.
\label{eq:Approx}
\end{equation}
We see that the value of (\ref{eq:Approx}) approaches $1/2$ as $n$ increases. Thus the obtained codes are not very good for large $n$, since $p = 1/2$ can be obtained by guessing. We will observe a similar (but slightly better) behavior also in the quantum case. The exact probability (\ref{eq:Exact}) and its approximation (\ref{eq:Approx}) are shown in Fig.~\ref{fig:Classical}.
For the odd and even cases, asymptotic upper and lower bounds on $p(n)$ can be obtained using the following inequality \cite{Stirling}:
\begin{equation}
\sqrt{2 \pi n} \left(\frac{n}{e}\right)^n e^\frac{1}{12n+1} < n! <
\sqrt{2 \pi n} \left(\frac{n}{e}\right)^n e^\frac{1}{12n}.
\end{equation}
For the odd case we have
\begin{equation}
\frac{\exp\left(\frac{1}{12n-11}-\frac{2}{6n-6}\right)}{\sqrt{2 \pi (n-1)}}
< p(n) - \frac{1}{2} <
\frac{\exp\left(\frac{1}{12n-12}-\frac{2}{6n-5}\right)}{\sqrt{2 \pi (n-1)}},
\label{eq:ApproxOdd}
\end{equation}
while for the even case
\begin{equation}
\frac{\exp\left(\frac{1}{12n}-\frac{2}{6n+1}\right)}{\sqrt{2 \pi n}}
< p(n) - \frac{1}{2} <
\frac{\exp\left(\frac{1}{12n+1}-\frac{2}{6n}\right)}{\sqrt{2 \pi n}}.
\label{eq:ApproxEven}
\end{equation}
All four bounds are shown in Fig.~\ref{fig:Classical}.
\section{Quantum random access codes} \label{sect:QuantumRACs}
\subsection{Visualizing a qubit}
When dealing with quantum random access codes (at least in the qubit case), it is helpful to visualize them. We provide two ways of doing so.
\subsubsection{Bloch sphere representation} \label{sect:BlochSphere}
A \emph{pure qubit state} is a column vector $\ket{\psi} \in \mathbb{C}^2$. It can be expressed as a linear combination $\ket{\psi} = \alpha \ket{0} + \beta \ket{1}$, where $\ket{0} = \smx{1\\0}$ and $\ket{1} = \smx{0\\1}$. The coefficients $\alpha, \beta \in \mathbb{C}$ must satisfy $|\alpha|^2 + |\beta|^2 = 1$. Since the physical state is not affected by the \emph{phase factor} (i.e., $\ket{\psi}$ and $e^{i\phi}\ket{\psi}$ are the same state for any $\phi\in\mathbb{R}$), without loss of generality one can write
\begin{equation}
\ket{\psi} =
\mx{
\cos{\frac{\theta}{2}} \\
e^{i\varphi}\sin{\frac{\theta}{2}}
},
\label{eq:Psi}
\end{equation}
where $0\leq\theta\leq\pi$ and $0\leq\varphi<2\pi$ (the factor $1/2$ for $\theta$ in (\ref{eq:Psi}) is chosen so that these ranges resemble the ones for \emph{spherical coordinates} in $\mathbb{R}^3$).
For almost all states $\ket{\psi}$ there is a unique way to assign the parameters $\theta$ and $\varphi$. The only exceptions are the states $\ket{0}$ and $\ket{1}$, which correspond to $\theta=0$ and $\theta=\pi$, respectively. In both cases $\varphi$ does not affect the physical state. Note that the spherical coordinates with \emph{latitude} $\theta$ and \emph{longitude} $\varphi$ have the same property, namely, the longitude is not defined at the poles. This suggests that the state space of a single qubit is topologically a sphere.
Indeed, there is a \mbox{one-to-one} correspondence between pure qubit states and the points on a unit sphere in $\mathbb{R}^3$. This is called the \emph{Bloch sphere representation} of a qubit state. The \emph{Bloch vector} for state (\ref{eq:Psi}) is $\vc{r} = (x,y,z)$, where the coordinates (see Fig.~\ref{fig:UnitVector}) are given by
\begin{equation}
\left\{
\begin{aligned}
x &= \sin{\theta}\cos{\varphi}, \\
y &= \sin{\theta}\sin{\varphi}, \\
z &= \cos{\theta}.
\end{aligned}
\right.
\label{eq:UnitVector}
\end{equation}
Given the Bloch vector $\vc{r} = (x,y,z)$, the coefficients of the corresponding state $\ket{\psi} = \alpha \ket{0} + \beta \ket{1}$ can be found as follows \cite[p.~102]{PrinciplesOfQ}:
\begin{equation}
\alpha = \sqrt{\frac{z+1}{2}}, \quad \beta = \frac{x+iy}{\sqrt{2(z+1)}}
\label{eq:AlphaBeta}
\end{equation}
with the convention that $(0,0,-1)$ corresponds to $\alpha=0$ and $\beta=1$.
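The conversions (\ref{eq:UnitVector}) and (\ref{eq:AlphaBeta}) are easy to exercise numerically. The following sketch (our illustration, assuming NumPy) round-trips a state through its Bloch vector:

```python
import numpy as np

def state_from_angles(theta, phi):
    # |psi> = cos(theta/2)|0> + e^{i phi} sin(theta/2)|1>
    return np.array([np.cos(theta / 2), np.exp(1j * phi) * np.sin(theta / 2)])

def bloch_from_angles(theta, phi):
    return np.array([np.sin(theta) * np.cos(phi),
                     np.sin(theta) * np.sin(phi),
                     np.cos(theta)])

def state_from_bloch(r):
    x, y, z = r
    if np.isclose(z, -1.0):
        return np.array([0.0, 1.0])  # convention for the South pole
    alpha = np.sqrt((z + 1) / 2)
    beta = (x + 1j * y) / np.sqrt(2 * (z + 1))
    return np.array([alpha, beta])

theta, phi = 1.1, 2.3
psi = state_from_angles(theta, phi)
r = bloch_from_angles(theta, phi)
assert np.isclose(np.linalg.norm(r), 1.0)       # r lies on the Bloch sphere
assert np.allclose(state_from_bloch(r), psi)    # round trip recovers the state
```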
\doubleimage
{UnitVector}{Angles $\theta$ and $\varphi$ of the Bloch vector}{Angles $\theta$ and $\varphi$ of the Bloch vector corresponding to state $\ket{\psi}$.}{fig:UnitVector}
{BlochSphere}{Geometric interpretation of orthogonal measurement}{Geometric interpretation of orthogonal measurement.}{fig:BlochSphere}
The \emph{density matrix} of a pure state $\ket{\psi}$ is defined as $\rho = \ket{\psi}\!\bra{\psi}$. For the state $\ket{\psi}$ in (\ref{eq:Psi}) we have
\begin{equation}
\rho
= \frac{1}{2} \mx{
1 + \cos{\theta} & e^{-i \varphi} \sin{\theta} \\
e^{ i \varphi} \sin{\theta} & 1 - \cos{\theta}}
= \frac{1}{2} \left( I + x \sigma_x + y \sigma_y + z \sigma_z \right),
\label{eq:LongRho}
\end{equation}
where $(x,y,z)$ are the coordinates of the Bloch vector $\vc{r}$ given in (\ref{eq:UnitVector}) and
\begin{equation}
I = \mx{1& 0\\0& 1}, \quad
\sigma_x = \mx{0& 1\\1& 0}, \quad
\sigma_y = \mx{0&-i\\i& 0}, \quad
\sigma_z = \mx{1& 0\\0&-1}
\label{eq:Pauli}
\end{equation}
are called \emph{Pauli matrices}. We can write (\ref{eq:LongRho}) more concisely as
\begin{equation}
\rho = \frac{1}{2} \left( I + \vc{r} \cdot \vc{\sigma} \right)
\label{eq:Rho}
\end{equation}
where $\vc{r} = (x, y, z)$ and $\vc{\sigma} = (\sigma_x, \sigma_y, \sigma_z)$.
If $\vc{r}_1$ and $\vc{r}_2$ are the Bloch vectors of two pure states $\ket{\psi_1}$ and $\ket{\psi_2}$, then
\begin{equation}
\abs{\braket{\psi_1}{\psi_2}}^2
= \tr (\rho_1 \rho_2)
= \frac{1}{2} (1 + \vc{r}_1 \cdot \vc{r}_2).
\label{eq:ScalarProduct}
\end{equation}
This relates the inner product in $\mathbb{C}^2$ to the one in $\mathbb{R}^3$. Since $\vc{r}_1$ and $\vc{r}_2$ are unit vectors, $\vc{r}_1 \cdot \vc{r}_2 = \cos \alpha$, where $\alpha$ is the angle between $\vc{r}_1$ and $\vc{r}_2$.
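Relation (\ref{eq:ScalarProduct}) can likewise be spot-checked on random states (a sketch of ours, assuming NumPy):

```python
import numpy as np

def state(theta, phi):
    return np.array([np.cos(theta / 2), np.exp(1j * phi) * np.sin(theta / 2)])

def bloch(theta, phi):
    return np.array([np.sin(theta) * np.cos(phi),
                     np.sin(theta) * np.sin(phi),
                     np.cos(theta)])

rng = np.random.default_rng(0)
for _ in range(100):
    t1, t2 = rng.uniform(0, np.pi, 2)
    p1, p2 = rng.uniform(0, 2 * np.pi, 2)
    lhs = abs(np.vdot(state(t1, p1), state(t2, p2))) ** 2
    rhs = 0.5 * (1 + bloch(t1, p1) @ bloch(t2, p2))
    assert np.isclose(lhs, rhs)  # |<psi1|psi2>|^2 = (1 + r1.r2)/2
```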
An \emph{orthogonal measurement} $M$ on a qubit can be specified by a set of two orthonormal states: $M = \set{\ket{\psi_0}, \ket{\psi_1}}$. Orthonormality means that \mbox{$\braket{\psi_i}{\psi_j} = \delta_{ij}$}. If we measure a qubit that is in state $\ket{\psi}$ with measurement $M$ then the \emph{outcome} will be either $0$ or $1$ and the state will ``collapse'' to $\ket{\psi_0}$ or $\ket{\psi_1}$ with probabilities $\abs{\braket{\psi_0}{\psi}}^2$ and $\abs{\braket{\psi_1}{\psi}}^2$, respectively. Observe that for orthogonal states equation (\ref{eq:ScalarProduct}) implies $\vc{r}_1 \cdot \vc{r}_2 = -1$, therefore they correspond to antipodal points on the Bloch sphere. If we denote the angle between the Bloch vectors of $\ket{\psi}$ and $\ket{\psi_0}$ by $\alpha$, then according to (\ref{eq:ScalarProduct}) the probabilities of the outcomes are
\begin{equation}
\left\{
\begin{aligned}
p_0 &= \frac{1}{2}(1 + \cos{\alpha}), \\
p_1 &= \frac{1}{2}(1 - \cos{\alpha}).
\end{aligned}
\right.
\label{eq:Projections}
\end{equation}
There is a nice geometrical interpretation of these probabilities. If we project the Bloch vector corresponding to $\ket{\psi}$ onto the axis spanned by the Bloch vectors of $\ket{\psi_0}$ and $\ket{\psi_1}$ (see Fig.~\ref{fig:BlochSphere}), then $p_0 = d_1 / 2$ and $p_1 = d_0 / 2$ (note the different indices), where $d_0$ is the distance between the projection and $\ket{\psi_0}$, and $d_1$ is the distance between the projection and $\ket{\psi_1}$. Observe that vectors on the upper hemisphere are more likely to collapse to $\ket{\psi_0}$, while those on the lower hemisphere are more likely to collapse to $\ket{\psi_1}$. On the equator both probabilities are equal to $\frac{1}{2}$.
\subsubsection{Unit disk representation} \label{sect:UnitDisk}
\image{0.9}{Beta}{Hadamard transformation in the unit disk representation}{Curves of constant $\theta$ and $\varphi$ before (on the left) and after the Hadamard transformation (on the right). Initially the curves of constant $\theta$ are concentric circles, but after the transformation they appear as deformed circles around both poles. The curves of constant $\varphi$ transform from radial rays to ``field lines'' connecting both poles. The image on the left appears to have only the North pole $\ket{0}$, since the Bloch sphere is punctured at the South pole $\ket{1}$ which must be identified with the boundary of the unit disk. The ``left pole'' and ``right pole'' in the image on the right correspond to the states $\ket{1}$ and $\ket{0}$, respectively.}{fig:Beta}
There is another way of visualizing a qubit. Unlike the Bloch sphere representation, this representation, to the best of our knowledge, has not appeared elsewhere. The idea is to use a single complex number to specify a pure qubit state $\ket{\psi} = \smx{\alpha \\ \beta} \in \mathbb{C}^2$. This is possible since $\ket{\psi}$ can be written in the form (\ref{eq:Psi}), which is completely determined by its second component
\begin{equation}
\beta = e^{i\varphi}\sin{\frac{\theta}{2}}.
\end{equation}
The first component is just $\sqrt{1 - \abs{\beta}^2} = \alpha$. As $\abs{\beta} \leq 1$, the set of all possible qubit states can be identified with a unit disk in the complex plane (the polar coordinates assigned to $\ket{\psi}$ are $(r,\varphi)$, where $r = \sin{\frac{\theta}{2}}$). The origin $\beta = 0$ corresponds to $\ket{\psi}=\ket{0}$, and all points on the unit circle $\abs{\beta}=1$ are identified with $\ket{\psi}=\ket{1}$, since $e^{i \varphi} \ket{1}$ corresponds to the same quantum state for all $\varphi \in \mathbb{R}$.
The relation between the unit disk representation and the Bloch sphere representation can be visualized as follows:
\begin{itemize}
\item the unit disk is obtained by puncturing the Bloch sphere at its South pole and flattening it,
\item the Bloch sphere is obtained by gluing together the boundary of the unit disk.
\end{itemize}
It is much harder to visualize how a unitary transformation acts in the unit disk representation. Let us consider a simple example.
\begin{example}
Let us consider the action of the \emph{Hadamard gate} $H=\frac{1}{\sqrt{2}}\smx{1&1\\1&-1}$ in the unit disk representation. Note that $H^2 = I$ thus $H$ is an involution (self-inverse). It acts on the standard basis states as follows:
\begin{align}
H \ket{0} &= \tfrac{1}{\sqrt{2}} \ket{0} + \tfrac{1}{\sqrt{2}} \ket{1} = \ket{+}, \label{eq:H0} \\
H \ket{1} &= \tfrac{1}{\sqrt{2}} \ket{0} - \tfrac{1}{\sqrt{2}} \ket{1} = \ket{-}. \label{eq:H1}
\end{align}
The way $H$ transforms the curves of constant $\theta$ and $\varphi$ is shown in Fig.~\ref{fig:Beta}. From equation (\ref{eq:H0}) we see that the origin $\beta = 0$ corresponding to $\ket{0}$ is mapped to the ``right pole'' $\beta = \frac{1}{\sqrt{2}}$ corresponding to $\ket{+}$ (and vice versa). Recall that all points on the boundary of the unit disk in Fig.~\ref{fig:Beta} (on the left) are identified with $\ket{1}$. Thus equation (\ref{eq:H1}) tells us that the unit circle $\abs{\beta} = 1$ is mapped to the ``left pole'' $\beta = -\frac{1}{\sqrt{2}}$ in Fig.~\ref{fig:Beta} (on the right) corresponding to $\ket{-}$ (and vice versa). This means that $\ket{-}$ is mapped to the boundary of the unit disk in Fig.~\ref{fig:Beta} (on the right).
\end{example}
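The example can be verified numerically (our sketch, assuming NumPy); the canonical $\beta$ is obtained by stripping the global phase so that the first component is real and nonnegative:

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

def beta_repr(psi):
    # canonical second component: remove the global phase so that alpha >= 0
    alpha, beta = psi
    if abs(alpha) > 1e-12:
        beta = beta * np.exp(-1j * np.angle(alpha))
    return beta

ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])

assert np.allclose(H @ H, np.eye(2))                      # H is an involution
assert np.isclose(beta_repr(H @ ket0), 1 / np.sqrt(2))    # |0> -> "right pole" |+>
assert np.isclose(beta_repr(H @ ket1), -1 / np.sqrt(2))   # |1> -> "left pole" |->
```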
Since we use only one complex number $\beta$ to represent a quantum state, a finite set of quantum states $\set{\beta_1, \beta_2, \dotsc, \beta_n}$ can be represented by a polynomial
\begin{equation}
c \, (\beta - \beta_1) (\beta - \beta_2) \dotsb (\beta - \beta_n)
\end{equation}
whose roots are $\beta_i$ (here $c \neq 0$ is arbitrary). We will use this representation in Sects.~\ref{sect:KnownQRACs} and \ref{sect:Numerical} to describe the qubit states whose Bloch vectors are the vertices of certain polyhedra. It is surprising that for those states the values of $c$ can be chosen so that the resulting polynomials have integer coefficients.
\subsection{Types of quantum encoding-decoding strategies} \label{sect:TypesOfQRACs}
Let us now consider the quantum analogue of a pure strategy.
\begin{definition}
A \emph{pure quantum \rac{n} encoding-decoding strategy} is an ordered tuple $(E,M_1,\dotsc,M_n)$ that consists of encoding function $E: \set{0,1}^n \mapsto \mathbb{C}^2$ and $n$ orthogonal measurements: $M_i = \set{\ket{\psi^i_0}, \ket{\psi^i_1}}$.
\end{definition}
If Alice encodes the string $x$ with function $E$, she obtains a pure qubit state $\ket{\psi} = E(x)$. When Bob receives $\ket{\psi}$ and is asked to recover the \mbox{$i$th} bit of $x$, he performs the measurement $M_i$. The probability that Bob recovers $x_i$ correctly is equal to
\begin{equation}
p(x,i) = \abs{\braket{\psi^i_{x_i}\big}{\psi}}^2.
\label{eq:pxi}
\end{equation}
As in the classical setting, we can allow Alice and Bob to have probabilistic quantum strategies without cooperation. Though we will not need it, mixed quantum strategies can be defined in complete analogy with mixed classical strategies.
\begin{definition}
A \emph{mixed quantum \rac{n} encoding-decoding strategy} is an ordered tuple $(P_E,P_{M_1},\dotsc,P_{M_n})$ of probability distributions. $P_E$ is a distribution over encoding functions $E$ and $P_{M_i}$ are probability distributions over orthogonal measurements of a qubit.
\end{definition}
The main objects of our research are quantum strategies with cooperation, i.e., with shared randomness. They are defined in complete analogy with the classical ones.
\begin{definition}
A \emph{quantum \rac{n} encoding-decoding strategy with shared randomness} is a probability distribution over pure quantum strategies.
\end{definition}
We would like to point out two very important things about quantum strategies with shared randomness. The first thing is that all statements about classical strategies with SR in Sect.~\ref{sect:Yao} are valid for quantum strategies as well (the only difference is that \emph{``pure strategy''} now means \emph{``pure quantum strategy''} instead of \emph{``pure classical strategy''} and \emph{``strategy with SR''} means \emph{``quantum strategy with SR''} instead of \emph{``classical strategy with SR''}). The most important consequence of this observation is that Theorem~\ref{thm:SRtoPure} is valid also for quantum strategies with SR. This means that the same technique of obtaining the upper bound can be used in the quantum case, i.e., we can consider the average success probability of a pure quantum strategy instead of the worst case success probability of the quantum strategy with SR.
The second thing is that the quantum strategy with SR is the most powerful quantum encoding-decoding strategy when both kinds of classical randomness (local and shared) are allowed. However, it is not the most general strategy, since it cannot be used to simulate certain classical strategies, e.g., the ones with fixed output. Nevertheless, it turns out that the ability to simulate such strategies does not give any advantage (see Sect.~\ref{sect:GeneralUpperBound} and Appendix~\ref{app:POVMs}).
\subsection{Known quantum RACs} \label{sect:KnownQRACs}
In \cite{DenseCoding1,DenseCoding2} it has been shown that for \rac{2} classical RACs in the mixed setting the decoding party cannot do better than guessing, i.e., the worst case success probability cannot exceed $1/2$. However, if quantum states can be transmitted, there are pure quantum \rac{2} and \rac{3} schemes with worst case success probability strictly greater than $1/2$ \cite{DenseCoding1,DenseCoding2}. This clearly indicates the advantages of quantum RACs. On the other hand, a quantum \rac{4} scheme cannot exist \cite{No41}. We will review these results in the next three sections.
\doubleimage
{KnownQRAC2}{Bloch sphere representation of \rac{2} QRAC}{Bloch sphere representation of encoding for \rac{2} quantum random access code.}{fig:KnownQRAC2}
{KnownQRAC3}{Bloch sphere representation of \rac{3} QRAC}{Bloch sphere representation of encoding for \rac{3} quantum random access code.}{fig:KnownQRAC3}
\subsubsection{The \rac{2} QRAC} \label{sect:KnownQRAC2}
The \rac{2} QRAC is described in \cite{DenseCoding1,DenseCoding2,No41}. The main idea is to use two mutually orthogonal pairs of antipodal Bloch vectors for measurement bases. For example, let $M_1$ and $M_2$ be the measurements along the $x$ and $y$ axes, respectively. The corresponding Bloch vectors are $\vc{v}_1=(\pm 1,0,0)$ and $\vc{v}_2=(0,\pm 1,0)$. The measurement bases are
\begin{align}
M_1&=\set{\frac{1}{\sqrt{2}}\mx{1\\1},\frac{1}{\sqrt{2}}\mx{1\\-1}}, \label{eq:M1} \\
M_2&=\set{\frac{1}{\sqrt{2}}\mx{1\\i},\frac{1}{\sqrt{2}}\mx{1\\-i}}. \label{eq:M2}
\end{align}
The planes orthogonal to the $x$ and $y$ axes cut the Bloch sphere into four parts. Note that in each part only one definite string can be encoded (otherwise the worst case success probability would be less than $\frac{1}{2}$). According to (\ref{eq:Projections}), all encoding points must be as far from both planes as possible in order to maximize the worst case success probability (recall the geometrical interpretation of the measurement shown in Fig.~\ref{fig:BlochSphere}). In our case the best encoding states are the vertices of a square $\frac{1}{\sqrt{2}}(\pm 1,\pm 1,0)$ inscribed in the unit circle on the $xy$ plane (see Fig.~\ref{fig:KnownQRAC2}). Given a string $x = x_1 x_2$, the Bloch vector of the encoding state can be found as follows:
\begin{equation}
\vc{r}(x) = \frac{1}{\sqrt{2}}
\mx{
(-1)^{x_1} \\
(-1)^{x_2} \\
0
}.
\end{equation}
The corresponding encoding function is
\begin{equation}
E(x_1,x_2) = \frac{1}{\sqrt{2}} \ket{0} + \frac{(-1)^{x_1}+i(-1)^{x_2}}{2} \ket{1}.
\label{eq:QRAC2Encoding}
\end{equation}
The success probability is the same for all input strings and all bits to be recovered:
\begin{equation}
p = \frac{1}{2}\left(1+\cos{\frac{\pi}{4}}\right) = \frac{1}{2}+\frac{1}{2\sqrt{2}} \approx \p{2}.
\label{eq:p2}
\end{equation}
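One can confirm the uniform success probability (\ref{eq:p2}) by evaluating (\ref{eq:pxi}) for every input (our sketch, assuming NumPy):

```python
import numpy as np

# measurement bases along the x and y axes, as in (M1) and (M2)
M = [
    [np.array([1, 1]) / np.sqrt(2), np.array([1, -1]) / np.sqrt(2)],
    [np.array([1, 1j]) / np.sqrt(2), np.array([1, -1j]) / np.sqrt(2)],
]

def encode(x1, x2):
    # the encoding function (QRAC2Encoding)
    return np.array([1 / np.sqrt(2), ((-1) ** x1 + 1j * (-1) ** x2) / 2])

# success probability |<psi^i_{x_i}|psi>|^2 for every string x and bit i
probs = [abs(np.vdot(M[i][x[i]], encode(*x))) ** 2
         for x in np.ndindex(2, 2) for i in range(2)]

assert np.allclose(probs, 0.5 + 1 / (2 * np.sqrt(2)))  # the same in all cases
```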
\subsubsection{The \rac{3} QRAC} \label{sect:KnownQRAC3}
It is not hard to generalize the \rac{2} QRAC to a \rac{3} code---just take three mutually orthogonal pairs of antipodal Bloch vectors, i.e., the vertices of an \emph{octahedron} \cite{No41,Severini}. The third pair is $\vc{v}_3=(0,0,\pm 1)$ and the corresponding measurement basis is
\begin{equation}
M_3=\set{\mx{1\\0},\mx{0\\1}}.
\label{eq:M3}
\end{equation}
In this case we have three orthogonal planes that cut the sphere into eight parts and only one string can be encoded into each part. In this case the optimal encoding states correspond to the vertices of a cube $\frac{1}{\sqrt{3}}(\pm 1,\pm 1,\pm 1)$ inscribed in the Bloch sphere (see Fig.~\ref{fig:KnownQRAC3}). The Bloch vector of the encoding state of string $x = x_1 x_2 x_3$ is
\begin{equation}
\vc{r}(x) = \frac{1}{\sqrt{3}}
\mx{
(-1)^{x_1} \\
(-1)^{x_2} \\
(-1)^{x_3}
}.
\end{equation}
The corresponding encoding function is $E(x_1,x_2,x_3)=\alpha\ket{0}+\beta\ket{1}$ with coefficients $\alpha$ and $\beta$ explicitly given by
\begin{equation}
\left\{
\begin{aligned}
\alpha &= \sqrt{\frac{1}{2}+\frac{(-1)^{x_3}}{2\sqrt{3}}}, \\
\beta &= \frac{(-1)^{x_1}+i(-1)^{x_2}}{\sqrt{6+2\sqrt{3}(-1)^{x_3}}}.
\end{aligned}
\right.
\end{equation}
In fact, the coefficients $\beta$ are exactly the eight roots of the polynomial\footnote{The unit disk representation of a quantum state and the representation of a finite set of quantum states using a polynomial was discussed in Sect.~\ref{sect:UnitDisk}.}
\begin{equation}
36 \beta^8 + 24 \beta^4 + 1.
\end{equation}
This code also has the same success probability in all cases:
\begin{equation}
p = \frac{1}{2}+\frac{1}{2\sqrt{3}} \approx \p{3}.
\label{eq:p3}
\end{equation}
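The same check works for the \rac{3} code; the sketch below (ours, assuming NumPy) also confirms that each encoding coefficient $\beta$ is a root of $36\beta^8 + 24\beta^4 + 1$:

```python
import numpy as np

def encode(x1, x2, x3):
    # cube-vertex encoding of the 3->1 QRAC
    s3 = np.sqrt(3)
    alpha = np.sqrt(0.5 + (-1) ** x3 / (2 * s3))
    beta = ((-1) ** x1 + 1j * (-1) ** x2) / np.sqrt(6 + 2 * s3 * (-1) ** x3)
    return np.array([alpha, beta])

# measurement bases along the x, y and z axes
M = [
    [np.array([1, 1]) / np.sqrt(2), np.array([1, -1]) / np.sqrt(2)],
    [np.array([1, 1j]) / np.sqrt(2), np.array([1, -1j]) / np.sqrt(2)],
    [np.array([1.0, 0.0]), np.array([0.0, 1.0])],
]

target = 0.5 + 1 / (2 * np.sqrt(3))
for x in np.ndindex(2, 2, 2):
    psi = encode(*x)
    # beta is a root of the degree-8 polynomial
    assert abs(36 * psi[1] ** 8 + 24 * psi[1] ** 4 + 1) < 1e-9
    for i in range(3):
        assert np.isclose(abs(np.vdot(M[i][x[i]], psi)) ** 2, target)
```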
\subsubsection{Impossibility of the \rac{4} QRAC} \label{sect:noQRAC4}
Hayashi et al. \cite{No41} have shown that the \rac{2} and \rac{3} codes discussed above cannot be generalized to $4$ (and hence more) encoded bits. The reason is simple---it is not possible to cut the Bloch sphere into $16$ parts with $4$ great circles (see the proof below). Thus the number of strings exceeds the number of parts, hence at least two strings must be encoded in the same part. This makes the worst case success probability drop below $\frac{1}{2}$.
\image{0.6}{Gnomonic}{Gnomonic projection}{Gnomonic projection transforms great circles to lines and vice versa.}{fig:Gnomonic}
Let us consider how many parts can be obtained by cutting a sphere with $4$ great circles. Without loss of generality we can assume that the first great circle coincides with the equator. We use the \emph{gnomonic projection} (from the center of the sphere) to project the remaining three circles to a plane tangent to the South pole. Note that great circles are transformed into lines and vice versa (see Fig.~\ref{fig:Gnomonic}), thus we will obtain three lines. Also note that each region in the plane corresponds to two (diametrically opposite) regions on the sphere. It is simple to verify that three lines cannot cut the plane into more than $7$ parts (see Fig.~\ref{fig:CuttingPlane}). Thus the sphere cannot be cut into more than $14$ parts with four great circles.\footnote{In general, if we have $n$ great circles on the sphere, the maximal number of parts we can obtain is twice what we can obtain by cutting the plane with $n-1$ lines. If each line we draw intersects all previous lines and no three lines intersect at the same point, the sphere is cut into $n(n-1)+2$ parts after the inverse gnomonic projection.} An example achieving the upper bound is shown in Fig.~\ref{fig:CuttingSphere} (see also Figs.~\ref{fig:QRAC4[sym]} and \ref{fig:Tetrahedron}). Using essentially the same argument for generalized Bloch vectors, Hayashi et al. \cite{No41} show that \racm{2^{2m}}{m} QRACs with $p>1/2$ do not exist for any $m \geq 1$. The generalized Bloch vector will be briefly introduced in Sect.~\ref{sect:Generalizations}.
\doubleimage
{CuttingPlane}{Cutting the plane with $3$ lines into $7$ parts}{Cutting the plane with $3$ lines into $7$ parts.}{fig:CuttingPlane}
{CuttingSphere}{Cutting the sphere with $4$ great circles into $14$ parts}{Cutting the sphere with $4$ great circles into $14$ parts (seven diametrically opposite parts are equal).}{fig:CuttingSphere}
\subsection{Optimal encoding for given decoding strategy} \label{sect:OptimalEncoding}
We just reviewed the known results on pure \rac{n} quantum random access codes. From now on we will consider QRACs with shared randomness. In this section we will show how to find the optimal encoding strategy for a given decoding strategy. More precisely, we will show that the measurement directions of a QRAC with SR determine the corresponding optimal encoding states in a simple way.
An orthogonal measurement for the \mbox{$i$th} bit is specified by antipodal points on the Bloch sphere: $M_i = \set{\vc{v}_i, -\vc{v}_i}$. Let $\vc{r}_x$ be the Bloch vector that corresponds to the quantum state in which string $x \in \set{0,1}^n$ is encoded. According to equations in (\ref{eq:Projections}) the success probability for input $(x,i)$ is
\begin{equation}
p(x,i) = \frac{1}{2} \bigl(1 + (-1)^{x_i} \vc{v}_i \cdot \vc{r}_x \bigr)
\end{equation}
and the average success probability is given by
\begin{equation}
\begin{split}
p &= \frac{1}{2^n \cdot n} \sum_{x \in \set{0,1}^n} \sum_{i=1}^n
\frac{1}{2} \bigl(1 + (-1)^{x_i} \vc{v}_i \cdot \vc{r}_x \bigr) \\
&= \frac{1}{2} \Biggl(1 + \frac{1}{2^n \cdot n}
\underbrace{\sum_{x \in \set{0,1}^n} \vc{r}_x \cdot \sum_{i=1}^n (-1)^{x_i} \vc{v}_i}_{S_{\vc{v},\vc{r}}} \Biggr).
\end{split}
\label{eq:AverageProbability}
\end{equation}
In order to maximize the probability $p$, we only need to maximize $S_{\vc{v},\vc{r}}$ in equation (\ref{eq:AverageProbability}) over all possible measurements $\vc{v}_i$ and encodings $\vc{r}_x$ (in total $n + 2^n$ unit vectors in $\mathbb{R}^3$). We will denote the maximum of $S_{\vc{v},\vc{r}}$ by $S(n)$:
\begin{equation}
S(n) \
= \max_{\vis, \rxs} S_{\vc{v},\vc{r}} \
= \ \max_\vis \sum_{x \in \set{0,1}^n}
\max_{\vc{r}_x} \ \vc{r}_x \cdot \sum_{i=1}^n (-1)^{x_i} \vc{v}_i.
\label{eq:rxvx}
\end{equation}
If we define
\begin{equation}
\vc{v}_x = \sum_{i=1}^n (-1)^{x_i} \vc{v}_i,
\label{eq:vx}
\end{equation}
then it is obvious that the scalar product $\vc{r}_x \cdot \vc{v}_x$ in (\ref{eq:rxvx}) will be maximized when $\vc{r}_x$ is chosen along the same direction as $\vc{v}_x$, i.e. $\vc{r}_x = \vc{v}_x / \norm{\vc{v}_x}$ when $\norm{\vc{v}_x} \neq 0$. In this case we have $\vc{r}_x \cdot \vc{v}_x = \norm{\vc{v}_x}$ and
\begin{equation}
S(n) = \max_\vis \sum_{x \in \set{0,1}^n} \norm{\sum_{i=1}^n (-1)^{x_i} \vc{v}_i}.
\label{eq:Sn}
\end{equation}
Therefore we only need to maximize over all possible measurements succinctly represented by $n$ unit vectors $\vc{v}_i \in \mathbb{R}^3$, because the optimal encoding is already determined by measurements (see Sect.~\ref{sect:Numerical} for some numerical results obtained in this way). When the value of $S(n)$ is found, then according to (\ref{eq:AverageProbability}) the corresponding probability is
\begin{equation}
p(n) = \frac{1}{2} \left(1 + \frac{S(n)}{2^n \cdot n}\right).
\label{eq:ProbabilitySn}
\end{equation}
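Equations (\ref{eq:Sn}) and (\ref{eq:ProbabilitySn}) are straightforward to evaluate for any fixed set of measurement directions. In the sketch below (ours, assuming NumPy), the two choices of directions reproduce the classical optimum and the known \rac{3} QRAC:

```python
import numpy as np
from itertools import product

def S_v(vs):
    # the inner sum of (Sn) for fixed measurement directions vs,
    # with the optimal encoding r_x = v_x / ||v_x|| already substituted
    return sum(np.linalg.norm(sum(a * v for a, v in zip(signs, vs)))
               for signs in product((1, -1), repeat=len(vs)))

def p_from_S(n, s):
    # eq. (ProbabilitySn)
    return 0.5 * (1 + s / (2 ** n * n))

vs_line = [np.array([0.0, 0.0, 1.0])] * 3      # all measurements along one axis
vs_orth = [np.eye(3)[i] for i in range(3)]     # mutually orthogonal axes

p_line = p_from_S(3, S_v(vs_line))
p_orth = p_from_S(3, S_v(vs_orth))
assert np.isclose(p_line, 0.75)                        # classical majority RAC
assert np.isclose(p_orth, 0.5 + 1 / (2 * np.sqrt(3)))  # the 3->1 QRAC
```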
We can observe a connection between quantum and classical RACs with SR. Assume that Marge and Homer\footnote{In this scenario it is more convenient to replace Alice and Bob with Marge and Homer from \textit{The Simpsons}.} have to implement an \rac{n} QRAC with SR and are deciding what strategies to use---Homer is responsible for choosing the measurements, but Marge has to choose how to encode the input string. Once they have decided, they have to follow the agreement and cannot cheat. Unfortunately, Homer is foolish and proposes to measure all bits in the same basis. Luckily, Marge is clever enough to choose the optimal encoding for Homer's measurements. According to the discussion above, she has to use the majority encoding function. Thus the obtained QRAC is as good as the optimal classical RAC discussed in Sect.~\ref{sect:OptimalClassical}, Theorem~\ref{thm:OptimalClassical}.
It looks plausible that using the same measurement for all bits is the worst decoding strategy. However, we have not proved this, so we leave it as a conjecture:
\begin{conjecture}
For any choice of measurements there is an encoding such that the resulting \rac{n} quantum RAC with SR is at least as good as the optimal \rac{n} classical one.
\end{conjecture}
\subsection{Relation to a random walk in \texorpdfstring{$\mathbb{R}^3$}{R3}} \label{sect:RandomWalk}
QRACs with shared randomness are related to random walks in $\mathbb{R}^3$. This relation can be seen by suitably interpreting equations (\ref{eq:Sn}) and (\ref{eq:ProbabilitySn}). Let us consider an \rac{n} QRAC with SR whose measurement directions are given by unit vectors $\set{\vc{v}_i}$ and let us assume that the corresponding optimal encoding for these measurements is used as described in the previous section. Then we can write the success probability $p(\vc{v}_1, \dotsc, \vc{v}_n)$ of this QRAC in the following suggestive form:
\begin{equation}
p(\vc{v}_1, \dotsc, \vc{v}_n) = \frac{1}{2} \Bigl( 1 + \frac{1}{n} \; d(\vc{v}_1, \dotsc, \vc{v}_n) \Bigr),
\label{eq:pv}
\end{equation}
where
\begin{equation}
d(\vc{v}_1, \dotsc, \vc{v}_n) = \frac{1}{2^n} \sum_{a \in \set{1,-1}^n} \norm{\sum_{i=1}^n a_i \vc{v}_i}
\label{eq:dv}
\end{equation}
is the average distance traveled by a random walk whose \mbox{$i$th} step is $\vc{v}_i$ or $-\vc{v}_i$, each with probability $1/2$. For example, $\vc{v}_1 = \vc{v}_2 = \dotsb = \vc{v}_n$ corresponds to a random walk on a line and $d(\vc{v}_1, \dotsc, \vc{v}_n)$ is the average distance traveled after $n$ steps of this walk. Recall from the previous section that this choice of $\set{\vc{v}_i}$ corresponds to the optimal classical RAC and we conjecture that this is the worst possible choice. Similarly, if we choose roughly one third of vectors $\set{\vc{v}_i}$ along each coordinate axis, we obtain a random walk in a cubic lattice and $d(\vc{v}_1, \dotsc, \vc{v}_n)$ is the average distance traveled when roughly $n/3$ steps are performed along each coordinate axis (see Sect.~\ref{sect:Lower2}).
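The identification of $d$ with an average walk distance can be tested against the classical bound: combining (\ref{eq:pv}) with (\ref{eq:Exact}) gives, for a walk on a line, $d = n\binom{n-1}{\floor{(n-1)/2}}/2^{n-1}$. A sketch (ours, assuming NumPy):

```python
import numpy as np
from math import comb
from itertools import product

def d(vs):
    # average distance traveled by the walk with steps ±v_i, eq. (dv)
    return sum(np.linalg.norm(sum(a * v for a, v in zip(signs, vs)))
               for signs in product((1, -1), repeat=len(vs))) / 2 ** len(vs)

for n in range(1, 10):
    vs = [np.array([0.0, 0.0, 1.0])] * n            # walk on a line
    expected = n * comb(n - 1, (n - 1) // 2) / 2 ** (n - 1)
    assert np.isclose(d(vs), expected)
```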
In Sect.~\ref{sect:LowerBounds} we will use this relation between random access codes and random walks to prove a lower bound on the success probability of \rac{n} QRACs with SR.
\subsection{Upper bound} \label{sect:UpperBound}
In this section we will derive an upper bound for $S(n)$. For this purpose we rewrite equation (\ref{eq:Sn}) in the following form:
\begin{equation}
S(n) = \max_\vis S_{\vc{v}}
\label{eq:MaxSv}
\end{equation}
where
\begin{equation}
S_{\vc{v}} \ = \sum_{a \in \set{1,-1}^n} \norm{\sum_{i=1}^n a_i \vc{v}_i}
\label{eq:Sv}
\end{equation}
(for convenience we take the sum over the set $\set{1,-1}^n$ instead of $\set{0,1}^n$).
\begin{lemma}
For any unit vectors $\vc{v}_1,\dotsc,\vc{v}_n$ we have
\begin{equation}
\sum_{a_1, \dotsc, a_n \in \set{1,-1}} \norm{a_1 \vc{v}_1 + \dotsb + a_n \vc{v}_n}^2 = n \cdot 2^n.
\label{eq:NormSquareSum}
\end{equation}
\label{lm:NormSquares}
\end{lemma}
\begin{proof}
For $n = 1$ we have
\begin{equation}
\sum_{a_1 \in \set{1,-1}} \norm{a_1 \vc{v}_1}^2 = \norm{\vc{v}_1}^2 + \norm{-\vc{v}_1}^2 = 2.
\end{equation}
Let us assume that equation (\ref{eq:NormSquareSum}) holds for $n = k$. Then for $n = k + 1$ we have
\begin{equation}
\sum_{a_1, \dotsc, a_{k}, a_{k+1} \in \set{1,-1}}
\norm{a_1 \vc{v}_1 + \dotsb + a_k \vc{v}_k + a_{k+1} \vc{v}_{k+1}}^2.
\end{equation}
If we write out the sum over $a_{k+1}$ explicitly, we obtain
\begin{equation}
\sum_{a_1, \dotsc, a_{k} \in \set{1,-1}}
\left(
\norm{a_1 \vc{v}_1 + \dotsb + a_k \vc{v}_k + \vc{v}_{k+1}}^2 +
\norm{a_1 \vc{v}_1 + \dotsb + a_k \vc{v}_k - \vc{v}_{k+1}}^2
\right).
\end{equation}
We can use the parallelogram identity
\begin{equation}
\norm{\vc{u}_1 + \vc{u}_2}^2 + \norm{\vc{u}_1 - \vc{u}_2}^2 = 2 \left(\norm{\vc{u}_1}^2 + \norm{\vc{u}_2}^2\right),
\end{equation}
which holds for any two vectors $\vc{u}_1$ and $\vc{u}_2$, to simplify the sum as follows:
\begin{equation}
\sum_{a_1, \dotsc, a_{k} \in \set{1,-1}}
2 \left( \norm{a_1 \vc{v}_1 + \dotsb + a_k \vc{v}_k}^2 + \norm{\vc{v}_{k+1}}^2 \right).
\label{eq:AlmostDone}
\end{equation}
We know that $\vc{v}_{k+1}$ is a unit vector and we have assumed that (\ref{eq:NormSquareSum}) holds for $n = k$; therefore (\ref{eq:AlmostDone}) simplifies to $2 \left( k \cdot 2^k + 2^k \right) = (k + 1) \cdot 2^{k + 1}$.
\end{proof}
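For concreteness, the identity of Lemma~\ref{lm:NormSquares} can also be checked numerically for random unit vectors. The following standalone Python sketch (helper names are ours, not part of the proof) does so:

```python
import itertools
import math
import random

def random_unit_vector(rng):
    # a normalized Gaussian triple is uniformly distributed on the unit sphere
    while True:
        v = [rng.gauss(0, 1) for _ in range(3)]
        r = math.sqrt(sum(c * c for c in v))
        if r > 1e-12:
            return [c / r for c in v]

def sum_of_squared_norms(vectors):
    """Sum over all sign patterns a in {1,-1}^n of ||a_1 v_1 + ... + a_n v_n||^2."""
    n = len(vectors)
    total = 0.0
    for a in itertools.product((1, -1), repeat=n):
        s = [sum(a[i] * vectors[i][k] for i in range(n)) for k in range(3)]
        total += sum(c * c for c in s)
    return total

rng = random.Random(0)
n = 4
vs = [random_unit_vector(rng) for _ in range(n)]
print(abs(sum_of_squared_norms(vs) - n * 2**n) < 1e-9)  # True, by the lemma
```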
We will use the previous lemma to obtain an upper bound for $S_{\vc{v}}^2$ defined in (\ref{eq:Sv}). According to (\ref{eq:MaxSv}) this will give us an upper bound for $S(n)$ as well.
\begin{lemma}
For any set of unit vectors $\set{\vc{v}_i}_{i=1}^n$, the inequality $S_{\vc{v}} \leq \sqrt{n} \cdot 2^n$ holds.
\label{lm:Sv}
\end{lemma}
\begin{proof}
We can interpret the sum in equation (\ref{eq:Sv}) as an inner product between the vector of norms and the all-ones vector $(1, \dotsc, 1) \in \mathbb{R}^{2^n}$. Then the Cauchy-Schwarz inequality $\vc{x} \cdot \vc{y} \leq \norm{\vc{x}} \norm{\vc{y}}$ says that
\begin{equation}
S_{\vc{v}} \leq \sqrt{2^n} \; \sqrt{\sum_{a \in \set{1,-1}^n} \norm{\sum_{i=1}^n a_i \vc{v}_i}^2}
= \sqrt{2^n} \; \sqrt{n \cdot 2^n} = \sqrt{n} \cdot 2^n,
\end{equation}
where Lemma~\ref{lm:NormSquares} was used to obtain the first equality.
\end{proof}
\begin{theorem}
For any \racp{n} QRAC with shared randomness, $p \leq \frac{1}{2} + \frac{1}{2 \sqrt{n}}$.
\label{thm:UpperBound}
\end{theorem}
\begin{proof}
From Lemma~\ref{lm:Sv} we have $S_{\vc{v}} \leq \sqrt{n} \cdot 2^n$. From equation (\ref{eq:MaxSv}) we see that the same upper bound applies to $S(n)$. Putting this into (\ref{eq:ProbabilitySn}) we get
\begin{equation*}
p \leq \frac{1}{2} + \frac{1}{2\sqrt{n}}. \qedhere
\end{equation*}
\end{proof}
In particular, this means that the known \rac{2} and \rac{3} QRACs discussed in Sect.~\ref{sect:KnownQRACs} cannot be improved even if shared randomness is allowed.
The intuition behind this upper bound is as follows. If instead of $\mathbb{R}^3$ the Bloch vector of a qubit state would be in $\mathbb{R}^n$, we could choose all $n$ measurements to be mutually orthogonal. For example, we could choose the vectors forming measurement bases to be the vertices of the \emph{cross polytope}, i.e., all permutations of $(\pm 1, 0, \dotsc, 0)$. The optimal encoding corresponding to this choice are the vertices of the \emph{hypercube}, i.e., points $(\pm 1, \pm 1, \dotsc, \pm 1)$, thus all terms in equation (\ref{eq:Sv}) are equal to $\sqrt{n}$ and sum to $2^n \sqrt{n}$, so the probability (\ref{eq:ProbabilitySn}) is $\frac{1}{2} (1 + \frac{2^n \sqrt{n}}{2^n n}) = \frac{1}{2} + \frac{1}{2\sqrt{n}}$. Since we have only three dimensions, the actual probability should not be larger.
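This intuition is easy to test numerically: taking the $n$ measurement directions to be an orthonormal basis of $\mathbb{R}^n$, every term of the sum equals $\sqrt{n}$ and the bound is attained. A standalone Python sketch (helper names are ours):

```python
import itertools
import math

def S(vectors):
    """S_v from the text: sum over sign patterns a of ||sum_i a_i v_i||."""
    n = len(vectors)
    d = len(vectors[0])
    total = 0.0
    for a in itertools.product((1, -1), repeat=n):
        s = [sum(a[i] * vectors[i][k] for i in range(n)) for k in range(d)]
        total += math.sqrt(sum(c * c for c in s))
    return total

n = 5
# cross-polytope measurement directions: the standard basis of R^n
basis = [[1.0 if k == i else 0.0 for k in range(n)] for i in range(n)]
p = 0.5 * (1 + S(basis) / (2**n * n))
print(abs(p - (0.5 + 0.5 / math.sqrt(n))) < 1e-12)  # True: the bound is attained
```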
\subsection{General upper bound} \label{sect:GeneralUpperBound}
Let us prove an analogue of Theorem~\ref{thm:UpperBound} for a more general model, because quantum mechanics allows us to consider more general quantum states and measurements. Namely, Alice can encode her message into a \emph{mixed state} instead of a pure state and Bob can use a POVM measurement instead of an orthogonal measurement to recover information. A mixed state is just a probability distribution over pure states, so it does not provide a more general encoding model. In contrast, a POVM measurement provides a more general decoding model. In fact, there is another reason to extend the model.
\begin{example}
It is not possible to construct a pure QRAC (as defined in Sect.~\ref{sect:TypesOfQRACs}) that simulates the following pure classical \rac{2} RAC:
\begin{itemize}
\item \emph{encoding:} encode the first bit,
\item \emph{decoding:} if the first bit is requested, output the received bit; if the second one is requested---output $0$ no matter what is received.
\end{itemize}
To recover the first bit with certainty, Alice and Bob have to agree on two antipodal points on the Bloch sphere where the information is encoded. Unfortunately, the second bit causes a problem---it is not possible to choose an orthogonal measurement of a qubit in an unknown state such that the result is always the same.
\end{example}
This example suggests that the model of pure quantum encoding-decoding strategies introduced in Sect.~\ref{sect:TypesOfQRACs} should be extended in one way or another. It is obvious that a \emph{constant decoding function} ($0$ or $1$) can be implemented using a single-outcome POVM measurement. However, it turns out that in the qubit case a two-outcome POVM measurement can be replaced by a probability distribution over orthogonal measurements and constant decoding functions (see Appendix~\ref{app:POVMs}). This means that both extensions are equivalent. For simplicity we choose to extend the model by allowing constant decoding functions, thus Bob can either perform an orthogonal measurement or use a constant decoding function. The goal of this section is to show that constant decoding functions do not give any advantage.
\begin{definition}
An \emph{enhanced orthogonal measurement} is either an orthogonal measurement or one that always gives the same answer.
\end{definition}
\begin{definition}
An \emph{enhanced pure quantum \rac{n} encoding-decoding strategy} is given by an ordered tuple $(E,M_1,\dotsc,M_n)$ consisting of encoding function $E: \set{0,1}^n \mapsto \mathbb{C}^2$ and $n$ decoding functions $M_i$ that are enhanced orthogonal measurements.
\end{definition}
\begin{definition}
An \emph{enhanced quantum encoding-decoding strategy with SR} is a probability distribution over enhanced pure quantum strategies.
\end{definition}
Now it is straightforward to construct a pure quantum RAC for the previous example. In fact, now any classical RAC (either pure, mixed or with SR) can be simulated by the corresponding type of a quantum RAC.
There is no need to further extend the model of enhanced QRACs with SR by adding other types of classical randomness. For example, a probabilistic combination of POVMs does not provide a more general measurement, because it can be simulated by a probabilistic combination of enhanced orthogonal measurements. The same holds for probabilistic \mbox{post-processing} of the measurement results (which can be simulated by a probabilistic combination of enhanced orthogonal measurements as shown in Appendix~\ref{app:POVMs}). Therefore enhanced QRACs with SR constitute the most general model when any kind of classical randomness is allowed.
One might suspect that the upper bound obtained in Theorem~\ref{thm:UpperBound} does not hold for this model, but this is not the case.
\begin{theorem}
For any \racp{n} enhanced QRAC with SR, $p \leq \frac{1}{2} + \frac{1}{2 \sqrt{n}}$.
\label{thm:GeneralUpperBound}
\end{theorem}
\begin{proof}
According to Yao's principle and Theorem~\ref{thm:SRtoPure}, we can consider the average success probability of pure enhanced QRACs instead. It suffices to rule out the constant decoding functions. More precisely, we have to show that QRACs having a constant decoding function for some bit give a smaller upper bound than those without it. In fact, we are proving a quantum analogue of Lemma~\ref{lm:NoConst} from Sect.~\ref{sect:OptimalClassical}.
We will use induction on $n$. The case $n=1$ is trivial---a pure enhanced QRAC with a constant decoding function has average success probability $\frac{1}{2} < 1$. Let us assume that for some $n = k - 1 \geq 1$ the constant decoding functions do not give any benefit. We now prove that the same holds for $n = k$. Let us assume that the constant decoding function $0$ is used for the \mbox{$k$th} bit. The average case success probability is
\begin{equation}
p(k) = \frac{1}{2^k \cdot k} \sum_{x \in \set{0,1}^k}
\left( \sum_{i=1}^{k-1} p(x,i) + \delta_{0,x_k} \right),
\end{equation}
where $p(x,i)$ is the success probability (\ref{eq:pxi}) for the input $(x,i)$ where $i \leq k-1$ and $\delta_{0,x_k}$ is the probability that the decoding function $0$ gives a correct answer for the \mbox{$k$th} bit. The last bit can be ignored during the encoding and decoding of other bits:
\begin{align}
p(k) &= \left( \frac{1}{2^k \cdot k} \sum_{\ x \in \set{0,1}^{k-1}} 2 \sum_{i=1}^{k-1} p(x,i) \right)
+ \frac{1}{2k} \\
&= \frac{k-1}{k} \left( \frac{1}{2^{k-1} \cdot (k-1)} \sum_{\ x \in \set{0,1}^{k-1}}
\sum_{i=1}^{k-1} p(x,i) \right) + \frac{1}{2k}. \label{eq:pk}
\end{align}
Note that the bracketed expression in (\ref{eq:pk}) is the success probability $p(k-1)$ of a shorter QRAC. Therefore
\begin{equation}
p(k) = \frac{k-1}{k} \cdot p(k-1) + \frac{1}{2k}.
\end{equation}
Now we can apply the inductive hypothesis:
\begin{equation}
p(k) \leq \frac{k-1}{k} \left( \frac{1}{2} + \frac{1}{2\sqrt{k-1}} \right) + \frac{1}{2k}
= \frac{1}{2} + \frac{\sqrt{k-1}}{2k}
< \frac{1}{2} + \frac{1}{2\sqrt{k}},
\label{eq:pk2}
\end{equation}
completing the proof. Thus the upper bound obtained in Theorem~\ref{thm:UpperBound} holds for the general model as well.
\end{proof}
Observe again that for $n=2$ and $n=3$ this upper bound matches equations (\ref{eq:p2}) and (\ref{eq:p3}), respectively. This means that the known \rac{2} and \rac{3} quantum random access codes with pure encoding-decoding strategies (see Sects.~\ref{sect:KnownQRAC2} and \ref{sect:KnownQRAC3}, respectively) are optimal even among enhanced strategies with SR. For $n=4$ we get $p\leq\frac{3}{4}$.
A similar upper bound was recently obtained by \mbox{Ben-Aroya} et al. \cite{Hypercontractive} for \racm{n}{m} QRACs, where $k$ bits must be recovered. They allow randomized strategies without shared randomness. In particular, they show that for any $\eta > 2 \ln 2$ there exists a constant $C_\eta$ such that for $n \gg k$
\begin{equation}
p \leq C_\eta \left( \frac{1}{2} + \frac{1}{2} \sqrt{\frac{\eta m}{n}} \right)^k.
\label{eq:HypercontractiveUpperBound}
\end{equation}
It might be possible to generalize our upper bound (\ref{eq:pk2}) to obtain something similar to (\ref{eq:HypercontractiveUpperBound}).
\subsection{Lower bounds} \label{sect:LowerBounds}
In the next two sections we will describe two constructions of \rac{n} QRACs with SR for all $n \geq 1$. These constructions provide lower bounds on the success probability. They use random and orthogonal measurements, respectively. In the first case it is hard to compute the exact success probability even for small values of $n$, but we will obtain an asymptotic expression. In the second case we do not know the asymptotic success probability, but can easily compute the exact success probability for small $n$.
\subsubsection{Lower bound by random measurements} \label{sect:Lower1}
We now turn to a lower bound for $p$. A lower bound for QRACs with shared randomness can be obtained by randomized encoding. Alice and Bob can use the shared random string to agree on a random orthogonal measurement for each bit. Each of these measurement bases can be specified by a pair of antipodal points on the Bloch sphere (see Sect.~\ref{sect:BlochSphere}). Given enough shared randomness, these points can be sampled nearly uniformly using a sphere point picking method \cite{SpherePointPicking}. The chosen measurements determine the optimal encoding scheme (see Sect.~\ref{sect:OptimalEncoding}), which is known to both sides.
Similarly to (\ref{eq:pv}), the expected success probability of this randomized \rac{n} QRAC is given by
\begin{equation}
\E(p) = \frac{1}{2} \left( 1 + \frac{1}{n} \E_\vis d(\vc{v}_1, \cdots, \vc{v}_n) \right)
\label{eq:Eofp}
\end{equation}
where according to equation (\ref{eq:dv})
\begin{align}
\E_\vis d(\vc{v}_1, \cdots, \vc{v}_n)
&= \E_\vis \left( \frac{1}{2^n} \sum_{a \in \set{1,-1}^n} \norm{\sum_{i=1}^n a_i \vc{v}_i} \right) \\
&= \frac{1}{2^n} \sum_{a \in \set{1,-1}^n} \E_\vis \norm{\sum_{i=1}^n a_i \vc{v}_i}. \label{eq:Eofav}
\end{align}
Each $a \in \set{1,-1}^n$ flips the direction of some of the vectors $\vc{v}_i$, but the resulting set $\set{a_i \vc{v}_i}$ is still uniformly distributed. Therefore the expected value in equation (\ref{eq:Eofav}) does not depend on $a$ and we have
\begin{equation}
\E_\vis d(\vc{v}_1, \cdots, \vc{v}_n) = \E_\vis \norm{\sum_{i=1}^n \vc{v}_i}.
\label{eq:Eofd}
\end{equation}
This expression has a very nice geometrical interpretation---it is the average distance traveled by a particle that performs $n$ steps of unit length each in a random direction. This distance can be found by evaluating the following integral:
\begin{equation}
\frac{1}{(4\pi)^n}
\int_{\theta_1=0}^{\pi}
\int_{\varphi_1=0}^{2\pi}
\dotsi
\int_{\theta_n=0}^{\pi}
\int_{\varphi_n=0}^{2\pi}
\norm{
\sum_{i=1}^n
\mx{ \sin \theta_i \cos \varphi_i \\
\sin \theta_i \sin \varphi_i \\
\cos \theta_i }
}
\prod_{i=1}^n \sin \theta_i \, d \theta_i \, d \varphi_i.
\end{equation}
Unfortunately, this integral is very hard to evaluate even numerically, since the integrand is highly oscillatory. An alternative approach is to directly simulate the random walk by sampling points uniformly from the sphere \cite{SpherePointPicking}. For small values of $n$ the success probability (\ref{eq:Eofp}) averaged over $10^6$ simulations is given in Table~\ref{tab:LowerBounds}. Luckily, we have the following asymptotic result:
\begin{theorem}[Chandrasekhar \protect{\cite[p.~14]{Chandrasekhar}}, Hughes \protect{\cite[p.~91]{Hughes}}]
The probability density to arrive at point $\vc{R}$ after performing $n \gg 1$ steps of random walk is
\begin{equation}
W(\vc{R}) \approx \left(\frac{3}{2 \pi n}\right)^{3/2} \exp{\left(-\frac{3\norm{\vc{R}}^2}{2n}\right)}.
\label{eq:Chandrasekhar}
\end{equation}
\end{theorem}
\begin{theorem}
For every $n \gg 1$ there exists an \racp{n} QRAC with expected success probability $p \approx \frac{1}{2} + \sqrt{\frac{2}{3 \pi n}}$.
\label{thm:Lower}
\end{theorem}
\begin{proof}
Because of the spherical symmetry of the probability density in formula (\ref{eq:Chandrasekhar}), the average distance traveled after $n \gg 1$ steps of random walk is given by
\begin{equation}
\E_\vis \norm{\sum_{i=1}^n \vc{v}_i} \approx
\int_0^\infty R \cdot W(R) \cdot 4 \pi R^2 \, dR
= 2 \sqrt{\frac{2n}{3\pi}}.
\label{eq:ApproxDistance}
\end{equation}
From (\ref{eq:Eofd}) and (\ref{eq:Eofp}) we obtain
\begin{equation}
\E(p) \approx \frac{1}{2} + \sqrt{\frac{2}{3 \pi n}},
\label{eq:LowerBound}
\end{equation}
which gives the desired lower bound.
\end{proof}
Formally this lower bound holds only for large $n$. However, if one estimates the actual value of (\ref{eq:ApproxDistance}) by random sampling, one can see that the asymptotic expression (\ref{eq:LowerBound}) is indeed smaller than the actual value (see Table~\ref{tab:LowerBounds}).
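Such a sampling estimate takes only a few lines. The Python sketch below (the trial count and seed are arbitrary choices of ours) reproduces the sampled column of Table~\ref{tab:LowerBounds} to within Monte Carlo error:

```python
import math
import random

def sample_unit_vector(rng):
    # normalized Gaussian triple: uniform on the unit sphere
    while True:
        v = [rng.gauss(0, 1) for _ in range(3)]
        r = math.sqrt(sum(c * c for c in v))
        if r > 1e-12:
            return [c / r for c in v]

def expected_success_probability(n, trials=100_000, seed=0):
    """Estimate E(p) = (1 + E||v_1 + ... + v_n||/n)/2 by simulating random walks."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        s = [0.0, 0.0, 0.0]
        for _ in range(n):
            v = sample_unit_vector(rng)
            for k in range(3):
                s[k] += v[k]
        total += math.sqrt(sum(c * c for c in s))
    return 0.5 * (1 + total / (trials * n))

print(expected_success_probability(4))  # ≈ 0.733 for n = 4
```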
\begin{table}[!hb]
\centering
\begin{tabular}{r|c|c|c|l|}
& \multicolumn{2}{|c|}{Random measurements}
& \multicolumn{2}{|c|}{Orthogonal measurements} \\ \cline{2-5}
$n$ & Asymptotic & Sampling & Numerical & \multicolumn{1}{|c|}{Exact} \\
\hline
$2$ & $0.825735$ & $0.8333$ & $\p{2}$ & $\frac{1}{2} + \frac{1}{2\sqrt{2}}$ \\
$3$ & $0.765962$ & $0.7708$ & $\p{3}$ & $\frac{1}{2} + \frac{1}{2\sqrt{3}}$ \\
$4$ & $0.730329$ & $0.7333$ & $0.741481$ & $\frac{1}{2} + \frac{1+\sqrt{3}}{8\sqrt{2}}$ \\
$5$ & $0.706013$ & $0.7082$ & $0.711803$ & $\frac{1}{2} + \frac{2+\sqrt{5}}{20}$ \\
$6$ & $0.688063$ & $0.6897$ & $0.686973$ & $\frac{1}{2} + \frac{1+\sqrt{3}+\sqrt{6}}{16\sqrt{3}}$ \\
$7$ & $0.674113$ & $0.6754$ & $0.677458$ & $\frac{1}{2} + \frac{15+6\sqrt{5}+2\sqrt{13}+\sqrt{17}}{224}$ \\
$8$ & $0.662868$ & $0.6638$ & $0.666270$ & $\frac{1}{2} + \frac{12+9\sqrt{3}+6\sqrt{5}+6\sqrt{7}+\sqrt{11}}{256\sqrt{2}}$ \\
$9$ & $0.653553$ & $0.6544$ & $0.656893$ & $\frac{1}{2} + \frac{10\sqrt{3}+9\sqrt{11}+3\sqrt{19}}{384}$
\end{tabular}
\caption[Comparison of both lower bounds]{Comparison of \rac{n} QRACs with SR that use random and orthogonal measurements, respectively. For the first code we give the success probability according to the asymptotic expression (\ref{eq:LowerBound}) and a numerical value obtained by $10^6$ random samples. For the second code we give both the numerical and the exact value of the success probability according to equation (\ref{eq:CubicDistance}).}
\label{tab:LowerBounds}
\end{table}
\subsubsection{Lower bound by orthogonal measurements} \label{sect:Lower2}
According to the upper bound obtained in Sect.~\ref{sect:UpperBound} the known \rac{2} and \rac{3} QRACs (see Sect.~\ref{sect:KnownQRACs}) are optimal. This suggests that orthogonal measurements can be used to construct good codes. Unfortunately this idea cannot be directly applied when $n > 3$, since in $\mathbb{R}^3$ there are only three mutually orthogonal directions. However, if we choose roughly one third of all measurements along each coordinate axis, we will get quite a lot of mutually orthogonal measurement pairs.
Let $\vc{v}_1 = (1,0,0)$, $\vc{v}_2 = (0,1,0)$, $\vc{v}_3 = (0,0,1)$, and $\forall i: \vc{v}_{i+3} \equiv \vc{v}_i$. According to equation (\ref{eq:pv}) in Sect.~\ref{sect:RandomWalk} the success probability of this \rac{n} QRAC with SR is related to the average distance (\ref{eq:dv}) traveled by a random walk. For our choice of measurement directions $\vc{v}_i$ the random walk takes place on a cubic lattice and consists of roughly $n/3$ steps along each coordinate axis. Thus we can simplify equation (\ref{eq:dv}) for the average distance traveled so that it no longer contains an exponential number of terms:
\begin{equation}
d(\vc{v}_1, \dotsc, \vc{v}_n) =
\frac{1}{2^n}
\sum_{i=0}^{x}
\sum_{j=0}^{y}
\sum_{k=0}^{z}
\binom{x}{i}
\binom{y}{j}
\binom{z}{k}
\sqrt{(x - 2i)^2 + (y - 2j)^2 + (z - 2k)^2},
\label{eq:CubicDistance}
\end{equation}
where $x + y + z = n$ and each of $x$, $y$, $z$ is roughly $n/3$. The corresponding success probability can be obtained by plugging this expression in equation (\ref{eq:pv}).
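Equation (\ref{eq:CubicDistance}) is cheap to evaluate exactly. A standalone Python sketch (the even split of $n$ among the three axes is our choice) reproduces the exact values in Table~\ref{tab:LowerBounds}:

```python
from math import comb, sqrt

def cubic_lattice_p(n):
    """Success probability for measurements along the coordinate axes,
    with the n directions split as evenly as possible among the three axes."""
    x, y, z = (n // 3 + (1 if i < n % 3 else 0) for i in range(3))
    d = sum(
        comb(x, i) * comb(y, j) * comb(z, k)
        * sqrt((x - 2 * i) ** 2 + (y - 2 * j) ** 2 + (z - 2 * k) ** 2)
        for i in range(x + 1)
        for j in range(y + 1)
        for k in range(z + 1)
    ) / 2 ** n
    return 0.5 * (1 + d / n)

# matches the exact value 1/2 + (1+sqrt(3))/(8*sqrt(2)) from the table
print(abs(cubic_lattice_p(4) - (0.5 + (1 + sqrt(3)) / (8 * sqrt(2)))) < 1e-12)  # True
```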
This lower bound is better than the one obtained in the previous section using random measurements, and it also requires less shared randomness. The difference between the two lower bounds is shown in Fig.~\ref{fig:OrthBound}. The periodic pattern of length $6$ in this picture can be explained as follows. When $n$ is a multiple of $3$, the same number of steps of the random walk is performed along each coordinate axis (this explains the factor $3$). To explain the factor $2$, let us consider a random walk on a line, i.e., along one of the three coordinate axes. The distinction between odd and even numbers of steps of such a walk is that the probability distribution after an even number of steps is peaked at the origin, but this peak contributes nothing to the average distance traveled. This intuition suggests that it should be especially hard to beat this lower bound when $n$ is of the form $6k+3$.
\image{0.9}{PlotQuantum2}{The difference between two lower bounds for QRACs with SR}{The \emph{difference} of both lower bounds for QRACs with SR. Black squares correspond to the bound obtained using measurements along coordinate axes and the horizontal line corresponds to the asymptotic bound (\ref{eq:LowerBound}) using random measurements (see Sects.~\ref{sect:Lower1} and \ref{sect:Lower2}, respectively). The first bound is better, except for $n=6$ (notice a periodic pattern of length $6$).}{fig:OrthBound}
\section{Constructions of QRACs with SR} \label{sect:Constructions}
It is plausible that one can do better than the lower bound obtained above, which used random measurements. In this section we will consider several constructions of quantum random access codes with shared randomness for some particular values of $n$. First, in Sect.~\ref{sect:Numerical} we will describe numerically obtained QRACs. Then, in Sect.~\ref{sect:Symmetric} we will construct new QRACs with a high degree of symmetry. In Sect.~\ref{sect:Discussion} we will compare both kinds of codes and draw some conclusions.
\subsection{Numerical results} \label{sect:Numerical}
\begin{table}[!hb]
\centering
\begin{tabular}{r|c|c}
$n$ & Section & Probability \\
\hline
$2$ & \ref{sect:QRAC2,QRAC3} & $\p{2}$ \\
$3$ & \ref{sect:QRAC2,QRAC3} & $\p{3}$ \\
$4$ & \ref{sect:QRAC4} & $\p{4}$ \\
$5$ & \ref{sect:QRAC5} & $\p{5}$ \\
$6$ & \ref{sect:QRAC6} & $\p{6}$ \\
$7$ & & $0.678638$ \\
$8$ & & $0.666633$ \\
$9$ & \ref{sect:QRAC9} & $\p{9}$ \\
$10$ & & $0.648200$ \\
$11$ & & $0.641051$ \\
$12$ & & $0.634871$
\end{tabular}
\caption[The success probabilities of numerical \rac{n} QRACs]{The success probabilities of numerical \rac{n} QRACs.}
\label{tab:NumericalProbabilities}
\end{table}
In this section we will discuss some particular \rac{n} QRACs with shared randomness for several small values of $n$. These codes were obtained using numerical optimization. The optimization needs to be performed only over the possible measurements, because in Sect.~\ref{sect:OptimalEncoding} we showed that the choice of measurements determines the optimal encoding in a simple way. Each measurement is specified by a unit vector $\vc{v}_i\in\mathbb{R}^3$. For an \rac{n} QRAC there are $n$ such vectors, and two angles are needed to specify each of them. Without loss of generality we can assume that $\vc{v}_1=(0,0,1)$ due to the rotational symmetry of the Bloch sphere. Thus only $2(n-1)$ real parameters are required to specify all $\vc{v}_i$ and therefore an \rac{n} QRAC. To find the best configuration of measurements $\vc{v}_i$, one needs to maximize $S_{\vc{v}}$ given by (\ref{eq:Sv}). According to (\ref{eq:ProbabilitySn}) the success probability of the corresponding QRAC is given by
\begin{equation}
p_\vc{v} = \frac{1}{2} \left(1 + \frac{S_{\vc{v}}}{2^n \cdot n}\right).
\label{eq:SuccessProbability}
\end{equation}
This is not a convex optimization problem, since the feasible set (given by $\norm{\vc{v}_i}=1$ for all $1 \leq i \leq n$) is not convex. Note that it is not convex even if we relax the equalities $\norm{\vc{v}_i}=1$ to inequalities $\norm{\vc{v}_i} \leq 1$. We used \textit{Mathematica}'s general-purpose built-in function \texttt{NMaximize} to solve this problem.
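For readers without \textit{Mathematica}, an analogous search can be sketched with SciPy's \texttt{minimize} (a rough stand-in for \texttt{NMaximize}, not the optimizer actually used): multistart Nelder-Mead over the $2(n-1)$ angles parametrizing $\vc{v}_2,\dotsc,\vc{v}_n$.

```python
import itertools

import numpy as np
from scipy.optimize import minimize

def vectors_from_angles(params, n):
    # v_1 is fixed to (0,0,1); the remaining n-1 vectors use two angles each
    vs = [np.array([0.0, 0.0, 1.0])]
    for theta, phi in params.reshape(n - 1, 2):
        vs.append(np.array([np.sin(theta) * np.cos(phi),
                            np.sin(theta) * np.sin(phi),
                            np.cos(theta)]))
    return vs

def neg_S(params, n):
    # negative of S_v, so that minimizing it maximizes S_v
    vs = vectors_from_angles(params, n)
    return -sum(np.linalg.norm(sum(a[i] * vs[i] for i in range(n)))
                for a in itertools.product((1, -1), repeat=n))

def best_p(n, restarts=10, seed=0):
    rng = np.random.default_rng(seed)
    best = -np.inf
    for _ in range(restarts):
        x0 = rng.uniform(0.0, np.pi, size=2 * (n - 1))
        res = minimize(neg_S, x0, args=(n,), method="Nelder-Mead")
        best = max(best, -res.fun)
    return 0.5 * (1 + best / (2 ** n * n))

print(best_p(2))  # ≈ 0.853553 = 1/2 + 1/(2*sqrt(2))
```

Being a local method with random restarts, this gives a lower estimate of the true optimum; like \texttt{NMaximize}, it does not certify global optimality.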
Once the measurements $\vc{v}_i$ are found, one can easily obtain the Bloch vector $\vc{r}_x$ of the qubit state that must be used to optimally encode the string $x$. We showed (see Sect.~\ref{sect:OptimalEncoding}) that $\vc{r}_x$ is a unit vector in direction $\vc{v}_x$, where $\vc{v}_x$ is given by (\ref{eq:vx}). For almost all QRACs that we have found using numerical optimization, the points $\vc{r}_x$ form a symmetric pattern on the surface of the Bloch ball. Thus we were able to guess the exact values of $\vc{r}_x$ and $\vc{v}_i$. However, as in any numerical optimization, optimality of the resulting codes is not guaranteed.
In order to make the resulting codes more understandable, we depict them in three dimensions using the following conventions:
\begin{itemize}
\item each \emph{red point} encodes the string indicated,
\item each \emph{blue point} defines the axis of the measurement when the indicated bit is to be output, and
\item for each measurement there is a corresponding (unlabeled) \emph{blue great circle} containing states yielding $0$ and $1$ equiprobably.
\end{itemize}
More precisely, the blue point with label $i$ defines the basis vector $\ket{\psi^i_0}$ corresponding to the outcome $0$ of the \mbox{$i$th} measurement (see Sect.~\ref{sect:TypesOfQRACs}). Note that the blue circles and blue points come in pairs---the vector $\ket{\psi^i_0}$ defined by the blue point is orthogonal to the corresponding circle. As a cautionary note, occasionally, the blue point for one measurement falls on the great circle of a different measurement (for example, blue points $1$ and $2$ in Fig.~\ref{fig:QRAC2} lie on one another's corresponding circles). If there are too many red points, we omit the string labels for clarity.
Usually the codes have some symmetry; for example, the encoding points may be the vertices of a polyhedron. In such cases we show the corresponding polyhedron instead of the Bloch sphere. We do not discuss the \rac{7} and \rac{8} QRACs, since the best numerical results have almost no discernible symmetry. We also do not discuss the numerical results for $n \geq 10$ (see Table~\ref{tab:NumericalProbabilities} for success probabilities). The numerically obtained \rac{10} code is symmetric and resembles the \rac{6} code discussed in Sect.~\ref{sect:QRAC6}, but the \rac{11} and \rac{12} codes again have almost no discernible symmetry. Success probabilities of all numerical \rac{n} QRACs with SR are summarized in Table~\ref{tab:NumericalProbabilities} and Fig.~\ref{fig:NumericalResults}.
\image{0.9}{PlotQuantum1}{Success probabilities of numerical \rac{n} QRACs with SR}{Success probabilities $p(n)$ of numerical \rac{n} QRACs with SR from Table~\ref{tab:NumericalProbabilities}. The upper bound $\frac{1}{2} + \frac{1}{2 \sqrt{n}}$ and the lower bound $\frac{1}{2} + \sqrt{\frac{2}{3 \pi n}}$ are indicated by dashed lines (see Sects.~\ref{sect:GeneralUpperBound} and \ref{sect:Lower1}, respectively).}{fig:NumericalResults}
\subsubsection{The \rac{2} and \rac{3} QRACs with SR} \label{sect:QRAC2,QRAC3}
\doubleimage
{Exact2}{The \rac{2} QRAC with SR}{The \rac{2} QRAC with SR.\protect\footnote{For those who are using a black-and-white printout: this is how \textbf{\textcolor{red}{red}} and \textbf{\textcolor{blue}{blue}} look.}}{fig:QRAC2}
{Exact3}{The \rac{3} QRAC with SR}{The \rac{3} QRAC with SR.}{fig:QRAC3}
We used numerical optimization as described above to find \rac{2} and \rac{3} QRACs with shared randomness and obtained the optimal codes discussed in Sects.~\ref{sect:KnownQRAC2} and \ref{sect:KnownQRAC3}.
The codes are shown in Figs.~\ref{fig:QRAC2} and \ref{fig:QRAC3}, respectively. In the first case the encoding points are the vertices of a square and the success probability is
\begin{equation}
p = \frac{1}{2} + \frac{1}{2\sqrt{2}} \approx \p{2}.
\end{equation}
In the second case they are the vertices of a cube. The success probability is
\begin{equation}
p = \frac{1}{2} + \frac{1}{2\sqrt{3}} \approx \p{3}.
\end{equation}
\subsubsection{The \rac{4} QRAC with SR} \label{sect:QRAC4}
\image{0.6}{Exact4}{The \rac{4} QRAC with SR}{The \rac{4} QRAC with SR.}{fig:QRAC4}
In Sect.~\ref{sect:noQRAC4} we discussed the impossibility of a \rac{4} QRAC when Alice and Bob are not allowed to cooperate. However, a \rac{4} QRAC can be obtained if they have shared randomness. The particular \rac{4} QRAC with SR discussed here was found by a numerical optimization. It is a hybrid of the \rac{2} and \rac{3} codes discussed in Sects.~\ref{sect:KnownQRAC2} and \ref{sect:KnownQRAC3}, respectively.
The measurements are performed in the bases $(M_1, M_2, M_3, M_3)$, where $M_1$, $M_2$, and $M_3$ are the same as in the \rac{3} case (note that the last two bits are measured in the same basis, namely $M_3$). These bases are given by (\ref{eq:M1}), (\ref{eq:M2}), and (\ref{eq:M3}), respectively. The points that correspond to an optimal encoding for these bases are the vertices of a square $\frac{1}{\sqrt{2}}(\pm 1,\pm 1,0)$ in the $xy$ plane and of a cube $\frac{1}{\sqrt{6}}(\pm 1,\pm 1,\pm 2)$ that is stretched in the $z$ direction (see the Bloch sphere in Fig.~\ref{fig:QRAC4}). The Bloch vector for the string $x = x_1 x_2 x_3 x_4$ is explicitly given by
\begin{equation}
\vc{r}(x) = \frac{1}{\sqrt{6}}
\mx{
(-1)^{x_1}\bigl(1-(1-\sqrt{3})\abs{x_3-x_4}\bigr) \\
(-1)^{x_2}\bigl(1-(1-\sqrt{3})\abs{x_3-x_4}\bigr) \\
(-1)^{x_3}+(-1)^{x_4}
}.
\end{equation}
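These Bloch vectors can be checked mechanically: each $\vc{r}(x)$ is a unit vector and coincides with the normalized $\vc{v}_x$ for the measurement directions $(\vc{v}_1,\vc{v}_2,\vc{v}_3,\vc{v}_3)$, assuming the Bloch directions of $M_1$, $M_2$, $M_3$ are the coordinate axes. A standalone Python check (helper names are ours):

```python
import itertools
import math

def r(x):
    # Bloch vector encoding x = x1 x2 x3 x4, as given in the text
    x1, x2, x3, x4 = x
    f = 1 - (1 - math.sqrt(3)) * abs(x3 - x4)
    return [(-1) ** x1 * f / math.sqrt(6),
            (-1) ** x2 * f / math.sqrt(6),
            ((-1) ** x3 + (-1) ** x4) / math.sqrt(6)]

def optimal_direction(x):
    # normalized v_x = sum_i (-1)^{x_i} v_i for measurement directions x, y, z, z
    v = [(-1) ** x[0], (-1) ** x[1], (-1) ** x[2] + (-1) ** x[3]]
    nrm = math.sqrt(sum(c * c for c in v))
    return [c / nrm for c in v]

ok = all(
    abs(sum(c * c for c in r(x)) - 1) < 1e-12
    and all(abs(a - b) < 1e-12 for a, b in zip(r(x), optimal_direction(x)))
    for x in itertools.product((0, 1), repeat=4)
)
print(ok)  # True
```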
The encoding function can be described as follows:
\begin{itemize}
\item if $x_3=x_4$, use the usual \rac{3} QRAC with an emphasis on $x_3$ to encode the string $x_1 x_2 x_3$,
\item if $x_3 \neq x_4$---encode only $x_1 x_2$ using the usual \rac{2} QRAC.
\end{itemize}
In the \rac{3} scheme the probability of recovering $x_3$ must be increased by stretching the cube along the $z$ axis, because $x_3$ equals $x_4$ and is therefore of greater value than $x_1$ or $x_2$.
This \rac{4} QRAC can also be seen as a combination of two \rac{3} QRACs: the string $x_1 x_2 x_3$ is encoded into the vertices of a smaller cube inscribed in a half of the Bloch ball (the vertices that lie within the sphere are projected to its surface). The last bit $x_4$ indicates in which half the smaller cube lies (the upper and lower hemispheres correspond to $x_4 = 0$ and $1$, respectively).
The qubit state is explicitly given by $E(x_1,x_2,x_3,x_4)=\alpha\ket{0}+\beta\ket{1}$, where
\begin{equation}
\left\{
\begin{aligned}
\alpha &= \sqrt{\frac{1}{2}+\frac{(-1)^{x_3}+(-1)^{x_4}}{2\sqrt{6}}}, \\
\beta &= \frac{(-1)^{x_1}+i(-1)^{x_2}}
{\sqrt{4\bigl(3-2\abs{x_3-x_4}\bigr)+2\sqrt{6}\bigl((-1)^{x_3}+(-1)^{x_4}\bigr)}}.
\end{aligned}
\right.
\end{equation}
The $16$ values for $\beta$ are exactly the sixteen roots of the polynomial (recall Sect.~\ref{sect:UnitDisk})
\begin{equation}
2304 \beta^{16} + 3072 \beta^{12} + 1120 \beta^8 + 128 \beta^4 + 1.
\end{equation}
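This can be confirmed numerically; a standalone Python check (helper names are ours) evaluates the polynomial at each of the $16$ values of $\beta$:

```python
import itertools
import math

def beta(x):
    # the coefficient beta of the encoding state for x = x1 x2 x3 x4
    x1, x2, x3, x4 = x
    denom = math.sqrt(4 * (3 - 2 * abs(x3 - x4))
                      + 2 * math.sqrt(6) * ((-1) ** x3 + (-1) ** x4))
    return complex((-1) ** x1, (-1) ** x2) / denom

def poly(b):
    return 2304 * b**16 + 3072 * b**12 + 1120 * b**8 + 128 * b**4 + 1

print(all(abs(poly(beta(x))) < 1e-9
          for x in itertools.product((0, 1), repeat=4)))  # True
```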
If a shared random string is not available, the worst case success probability of this QRAC is $\frac{1}{2}$. However, if shared randomness is available, input randomization (as in Lemma~\ref{lem:Randomization}) can be used and we will get the same success probability for all inputs, namely
\begin{equation}
p = \frac{1}{2} + \frac{1+\sqrt{3}}{8\sqrt{2}} \approx \p{4}.
\end{equation}
We do not know if this \rac{4} QRAC with SR is optimal.
\subsubsection{The \rac{5} QRAC with SR} \label{sect:QRAC5}
\image{0.5}{Exact5}{The \rac{5} QRAC with SR}{The \rac{5} QRAC with SR.}{fig:QRAC5}
To obtain a \rac{5} QRAC, we take the bases $M_1$, $M_2$, and $M_3$, given by (\ref{eq:M1}), (\ref{eq:M2}), and (\ref{eq:M3}), respectively, and also
\begin{align}
M_4&=\set{\frac{1}{2}\mx{ \sqrt{2}\\ i+1},
\frac{1}{2}\mx{-\sqrt{2}\\ i+1}}, \label{eq:M4} \\
M_5&=\set{\frac{1}{2}\mx{ \sqrt{2}\\ i-1},
\frac{1}{2}\mx{-\sqrt{2}\\ i-1}}. \label{eq:M5}
\end{align}
The Bloch vectors $\vc{v}_3=(0,0,\pm1)$ for the basis $M_3$ are along the $z$ axis, but the Bloch vectors of the other four bases form a regular octagon in the $xy$ plane (shown in Fig.~\ref{fig:QRAC5}): $\vc{v}_1=(\pm1,0,0)$, $\vc{v}_2=(0,\pm1,0)$, $\vc{v}_4=\pm\frac{1}{\sqrt{2}}(1,1,0)$, $\vc{v}_5=\pm\frac{1}{\sqrt{2}}(-1,1,0)$. The Bloch vector encoding the string $x = x_1 x_2 x_3 x_4 x_5$ is
\begin{equation}
\vc{r}(x) = \frac{1}{\sqrt{10+s(x)4\sqrt{2}}}
\mx{
\sqrt{2}(-1)^{x_1}+(-1)^{x_4}-(-1)^{x_5} \\
\sqrt{2}(-1)^{x_2}+(-1)^{x_4}+(-1)^{x_5} \\
\sqrt{2}(-1)^{x_3}
},
\end{equation}
where $s(x) \in \set{-1,1}$ is given by
\begin{equation}
s(x) = \frac{(-1)^{x_1}+(-1)^{x_2}}{2} (-1)^{x_4} - \frac{(-1)^{x_1}-(-1)^{x_2}}{2} (-1)^{x_5}.
\end{equation}
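As a standalone check (helper names are ours), one can verify that $s(x) = \pm 1$ and that every $\vc{r}(x)$ is indeed a unit vector:

```python
import itertools
import math

def s(x):
    # s(x) from the text, with A,B,D,E = (-1)^{x_1}, (-1)^{x_2}, (-1)^{x_4}, (-1)^{x_5}
    A, B, D, E = ((-1) ** x[i] for i in (0, 1, 3, 4))
    return (A + B) // 2 * D - (A - B) // 2 * E

def r(x):
    # Bloch vector encoding x = x1 x2 x3 x4 x5, as given in the text
    x1, x2, x3, x4, x5 = x
    nrm = math.sqrt(10 + s(x) * 4 * math.sqrt(2))
    return [(math.sqrt(2) * (-1) ** x1 + (-1) ** x4 - (-1) ** x5) / nrm,
            (math.sqrt(2) * (-1) ** x2 + (-1) ** x4 + (-1) ** x5) / nrm,
            math.sqrt(2) * (-1) ** x3 / nrm]

ok = all(
    s(x) in (-1, 1) and abs(sum(c * c for c in r(x)) - 1) < 1e-12
    for x in itertools.product((0, 1), repeat=5)
)
print(ok)  # True
```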
The great circles with equiprobable outcomes of the measurements partition the Bloch sphere into $16$ equal spherical triangles. There are two strings encoded into each triangle. The correct point for a given string $x$ can be located as follows. Observe that the strings with $x_3=0$ and $x_3=1$ are encoded into the upper and lower hemisphere, respectively (this means that for all strings the probability that the measurement $M_3$ gives the correct value of $x_3$ is greater than $\frac{1}{2}$). Next observe that half of all strings have $s(x)=1$, while the other half have $s(x)=-1$ (in fact, the two strings in the same triangle have distinct values of $s$).
Let us first consider the case $s(x)=1$. We call such a string \emph{compatible} with the measurements, because it can be encoded in such a way that every measurement gives the correct value of the corresponding bit with probability greater than $\frac{1}{2}$. For the \mbox{$i$th} bit of $x$ we can define the ``preferable region'' on the Bloch sphere as the hemisphere where $M_i$ recovers $x_i$ with probability greater than $\frac{1}{2}$. The intersection of these five regions is one sixteenth of the Bloch sphere---the triangle where $x$ must be encoded. The point with the smallest absolute value of the $z$ coordinate in this triangle must be chosen (it has a smaller probability than the other points in the triangle of recovering $x_3$ correctly, but the probabilities for the other four bits are larger).
\image{0.3}{Slices}{The ``preferable regions'' of the measurement $M_2$}{The ``preferable regions'' of the measurement $M_2$ (only the upper hemisphere is shown, the other half is symmetric). For each of the measurements the direction of the Bloch vector $\ket{\psi_0}$ is indicated by the corresponding number. The white triangles correspond to $x_2=0$, but the gray ones to $x_2=1$.}{fig:Slices}
If $s(x)=-1$, the string $x$ is \emph{incompatible} with the measurements, because the intersection of the ``preferable regions'' is empty. Thus, no matter where the string is encoded, at least one bit will differ from the most probable outcome of the corresponding measurement. We can take this into account and modify the definition of the ``preferable region'' for the \mbox{$i$th} bit ($i \neq 3$). It is a union of eight triangles: four triangles where the most probable outcome of $M_i$ equals $x_i$, and four triangles where it does not equal $x_i$ (in either case the triangles with maximal probability of correct outcome of $M_i$ must be taken). For example, the ``preferable regions'' for $x_2$ are shown in Fig.~\ref{fig:Slices}. The regions for $x_3$ remain the same as in the previous case. The intersection of all five regions for the given string $x$ is the triangle where the string must be encoded. The point in the triangle with the largest absolute value of the $z$ coordinate must be chosen. As a result, three of the measurements will give the correct value of the corresponding bit of the string $x$ with probability greater than $\frac{1}{2}$.
The corresponding qubit state is given by $E(x_1,x_2,x_3,x_4,x_5)=\alpha\ket{0}+\beta\ket{1}$ with coefficients $\alpha$ and $\beta$ defined as follows:
\begin{equation}
\left\{
\begin{aligned}
\alpha &= \sqrt{\frac{1}{2} + \frac{(-1)^{x_3}}{2 \sqrt{5 + s(x) 2 \sqrt{2}}}}, \\
\beta &= \frac{(-1)^{x_1} + i (-1)^{x_2} + \frac{i + 1}{\sqrt{2}} (-1)^{x_4} + \frac{i - 1}{\sqrt{2}} (-1)^{x_5}}
{\sqrt{10 + s(x) 4 \sqrt{2} + 2 (-1)^{x_3} \sqrt{5 + s(x) 2 \sqrt{2}}}}.
\end{aligned}
\right.
\end{equation}
The coefficients $\beta$ are the roots of the polynomial
\begin{equation}
1336336 \beta^{32} + 961792 \beta^{24} + 151432 \beta^{16} + 1600 \beta^8 + 1.
\end{equation}
Again, using input randomization we obtain the same success probability for any input, namely
\begin{equation}
p = \frac{1}{2} + \frac{1}{20} \sqrt{2(5+\sqrt{17})} \approx \p{5}.
\end{equation}
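This value can be checked by brute force: with optimal encoding the average success probability equals $\frac{1}{2}$ plus the mean norm of $\vc{v}_x$ over all inputs, divided by $2n$. The following Python sketch restates the five measurement directions used in this section (the three coordinate axes together with the diagonals $(1,1,0)/\sqrt{2}$ and $(-1,1,0)/\sqrt{2}$; the choice of representative from each antipodal pair is immaterial, since every sign pattern is enumerated):

```python
from itertools import product
import math

r2 = math.sqrt(2)
# The five measurement directions of the (5,1) QRAC.
dirs = [(1, 0, 0), (0, 1, 0), (0, 0, 1),
        (1/r2, 1/r2, 0), (-1/r2, 1/r2, 0)]

# p = 1/2 + E[ ||sum_i (-1)^{x_i} v_i|| ] / (2n) over all 32 inputs.
norms = []
for x in product((0, 1), repeat=5):
    v = [sum((-1)**x[i] * dirs[i][c] for i in range(5)) for c in range(3)]
    norms.append(math.sqrt(sum(c*c for c in v)))

p = 0.5 + sum(norms) / len(norms) / (2 * 5)
exact = 0.5 + math.sqrt(2 * (5 + math.sqrt(17))) / 20
assert abs(p - exact) < 1e-12
# Only the two norm classes sqrt(5 +/- 2*sqrt(2)) occur, as in the text.
assert {round(n*n, 9) for n in norms} == {round(5 + 2*r2, 9), round(5 - 2*r2, 9)}
```

The identity $\bigl(\sqrt{5+2\sqrt{2}}+\sqrt{5-2\sqrt{2}}\bigr)^2 = 10+2\sqrt{17}$ links the two norm classes to the closed form above.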
\subsubsection{The \rac{6} QRAC with SR} \label{sect:QRAC6}
\doubleimage
{Cube}{The measurements for the \rac{6} QRAC}{The measurements for the \rac{6} QRAC shown on the right.}{fig:Cube}
{Exact6}{The \rac{6} QRAC with SR}{The \rac{6} QRAC with SR.}{fig:QRAC6}
The Bloch vectors corresponding to the $6$ measurements are as follows:
\begin{equation}
\begin{aligned}
\vc{v}_1 &= \pm ( 0,+1,+1) / \sqrt{2}, \\
\vc{v}_2 &= \pm ( 0,-1,+1) / \sqrt{2}, \\
\vc{v}_3 &= \pm (+1, 0,+1) / \sqrt{2}, \\
\vc{v}_4 &= \pm (+1, 0,-1) / \sqrt{2}, \\
\vc{v}_5 &= \pm (+1,+1, 0) / \sqrt{2}, \\
\vc{v}_6 &= \pm (-1,+1, 0) / \sqrt{2}.
\end{aligned}
\label{eq:v6}
\end{equation}
They correspond to the $12$ vertices of the \emph{cuboctahedron} (or the midpoints of the $12$ edges of the cube) and are shown in Fig.~\ref{fig:Cube}. The great circles orthogonal to these vectors form the projection of the edges of a normalized\footnote{The vertices of the \emph{tetrakis hexahedron} are not all at the same distance from the origin (the ones forming an octahedron are $2/\sqrt{3}$ times closer than those forming a cube). So the polyhedron has to be \emph{normalized} to fit inside the Bloch sphere (the vectors pointing to the vertices have to be rescaled to have a unit norm).} \emph{tetrakis hexahedron} and partition the Bloch sphere into $24$ parts (see Fig.~\ref{fig:QRAC6}). Each of these parts contains one vertex of a \emph{truncated octahedron}, the dual of the tetrakis hexahedron, which is inscribed in the Bloch sphere as shown in Fig.~\ref{fig:QRAC6}.
The measurement bases corresponding to $\vc{v}_i$ can be found using (\ref{eq:AlphaBeta}):
\begin{equation}
\begin{aligned}
M_1 &= \left\{ \frac{1}{2} \mx{ \sqrt{2+\sqrt{2}} \\ i \sqrt{2-\sqrt{2}} },
\frac{1}{2} \mx{ \sqrt{2-\sqrt{2}} \\ -i \sqrt{2+\sqrt{2}} } \right\}, \\
M_2 &= \left\{ \frac{1}{2} \mx{ \sqrt{2+\sqrt{2}} \\ -i \sqrt{2-\sqrt{2}} },
\frac{1}{2} \mx{ \sqrt{2-\sqrt{2}} \\ i \sqrt{2+\sqrt{2}} } \right\}, \\
M_3 &= \left\{ \frac{1}{2} \mx{ \sqrt{2+\sqrt{2}} \\ \sqrt{2-\sqrt{2}} },
\frac{1}{2} \mx{ \sqrt{2-\sqrt{2}} \\ - \sqrt{2+\sqrt{2}} } \right\}, \\
M_4 &= \left\{ \frac{1}{2} \mx{ \sqrt{2-\sqrt{2}} \\ \sqrt{2+\sqrt{2}} },
\frac{1}{2} \mx{ \sqrt{2+\sqrt{2}} \\ - \sqrt{2-\sqrt{2}} } \right\}, \\
M_5 &= \left\{ \frac{1}{2} \mx{ \sqrt{2} \\ i+1 },
\frac{1}{2} \mx{ \sqrt{2} \\ -i-1 } \right\}, \\
M_6 &= \left\{ \frac{1}{2} \mx{ \sqrt{2} \\ i-1 },
\frac{1}{2} \mx{ \sqrt{2} \\ -i+1 } \right\}.
\end{aligned}
\end{equation}
Note that $M_5$ and $M_6$ are the same as (\ref{eq:M4}) and (\ref{eq:M5}) for the \rac{5} QRAC described in the previous section. Another way to describe these $6$ bases is to consider the $\beta$ coefficients for the $12$ vectors that form them. It turns out that these coefficients are exactly the roots of the polynomial
\begin{equation}
256 \beta^{12} - 128 \beta^8 - 44 \beta^4 + 1.
\end{equation}
Let us consider how to determine the point where a given string should be encoded. According to (\ref{eq:vx}) we have to find the sum of the vectors $\vc{v}_i$ defined in (\ref{eq:v6}), each taken with either a plus or a minus sign. These vectors correspond to six pairs of opposite edges of a cube and the signs determine which edge from each pair we are taking (see Fig.~\ref{fig:Cube}). There are only three distinct ways of doing this (see Fig.~\ref{fig:Cubes}). Regardless of which way it is, for each of the chosen edges there is exactly one other that shares a common face and is parallel to it. Thus we can partition the chosen edges into three pairs (in Fig.~\ref{fig:Cubes} such pairs are joined with a thick blue line). The sum of the vectors $\vc{v}_i$ for edges in a pair is always parallel to one of the axes and its direction is indicated with an arrow in Fig.~\ref{fig:Cubes}. From these arrows one can see where the encoding point should lie.
Now let us classify all $2^6 = 64$ strings of length $6$ into $3$ types according to the location of the encoding point on the Bloch sphere. Each type of string is encoded into a vertex of a specific polyhedron (see Fig.~\ref{fig:Polyhedra6}). These polyhedra are the \emph{cube}, the \emph{truncated octahedron}, and the \emph{octahedron}, and the numbers of strings of each type are $16$, $24$, and $24$, respectively. Let us consider them case by case:
\image{1.0}{Cubes}{Three distinct ways of choosing one edge from each pair of opposite edges of a cube}{Three distinct ways of choosing one edge from each pair of opposite edges of a cube. The chosen edges are marked with blue points. Points lying on opposite edges of the same face are connected and the direction of the sum of the corresponding vectors is indicated with an arrow. The corresponding encoding point is shown in red. The red points obtained from all possible choices of the same kind are the vertices of a cube, a truncated octahedron, and an octahedron, respectively (see Fig.~\ref{fig:Polyhedra6}).}{fig:Cubes}
\begin{table}[!hb]
\centering
\begin{tabular}{c|c}
Truncated & \multirow{2}{*}{Octahedron} \\
octahedron & \\
\hline
$**1110$ & $**1101$ \\
$**0001$ & $**0010$ \\
$10**11$ & $01**11$ \\
$01**00$ & $10**00$ \\
$1110**$ & $1101**$ \\
$0001**$ & $0010**$ \\
\end{tabular}
\caption[Patterns of strings corresponding to the vertices of truncated octahedron and octahedron]{Patterns of strings corresponding to the vertices of truncated octahedron and octahedron (``$*$'' stands for any value).}
\label{tab:Patterns}
\end{table}
\begin{itemize}
\item The \emph{cube} has $8$ vertices:
\begin{equation}
\frac{1}{\sqrt{3}}(\pm 1, \pm 1, \pm 1)
\end{equation}
and there are $2$ strings encoded into each vertex. These $16$ strings are exactly those $x_1 x_2 \ldots x_6 \in \set{0,1}^6$ that satisfy
\begin{equation}
\abs{x_1 - x_2} + \abs{x_3 - x_4} + \abs{x_5 - x_6} \in \set{0,3}.
\end{equation}
This condition ensures that the three arrows in Fig.~\ref{fig:Cubes} are orthogonal.
\item The \emph{truncated octahedron} has $24$ vertices. Their coordinates are obtained by all permutations of the components of
\begin{equation}
\frac{1}{\sqrt{5}}(0, \pm 1, \pm 2).
\end{equation}
There is just $1$ string encoded into each vertex. In this case there will be two pairs of chosen edges that belong to the same face (note the ``cross'' in Fig.~\ref{fig:Cubes} formed by the pairs whose arrows point out of the page). The third pair (with the arrow pointing up) can be rotated around this face to any of the four possible positions. This corresponds to fixing four bits of the string and choosing the remaining two bits in an arbitrary way. Since the ``cross'' can be on any of the six faces of the cube, one can easily describe all $24$ strings of this type (they are listed in the first column of Table~\ref{tab:Patterns}).
\item The \emph{octahedron} has $6$ vertices:
\begin{equation}
(\pm 1, 0, 0) \cup
(0, \pm 1, 0) \cup
(0, 0, \pm 1)
\end{equation}
and there are $4$ strings encoded into each vertex. In this case two arrows in Fig.~\ref{fig:Cubes} are pointing in opposite directions (up and down). If we fix these arrows, we can rotate the third one (pointing outwards) in any of four directions. Hence we can describe all $24$ strings of this type in a similar way (see the second column of Table~\ref{tab:Patterns}).
\end{itemize}
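The classification can be confirmed by enumerating all $64$ sign patterns: the squared norm of $\vc{v}_x$ equals $6$, $10$, or $2$ for the cube, truncated octahedron, and octahedron types, respectively, and the cube type is exactly characterized by the parity condition above. A short Python check:

```python
from itertools import product
import math

r2 = math.sqrt(2)
# The Bloch vectors v_1..v_6 from (eq:v6).
dirs = [(0, 1, 1), (0, -1, 1), (1, 0, 1), (1, 0, -1), (1, 1, 0), (-1, 1, 0)]
dirs = [(a/r2, b/r2, c/r2) for a, b, c in dirs]

counts = {}            # squared norm of v_x -> number of strings
cube_strings = set()
for x in product((0, 1), repeat=6):
    v = [sum((-1)**x[i] * dirs[i][c] for i in range(6)) for c in range(3)]
    n2 = round(sum(c*c for c in v), 9)
    counts[n2] = counts.get(n2, 0) + 1
    if n2 == 6:        # ||v_x||^2 = 6 <=> encoded into a cube vertex
        cube_strings.add(x)

# 16 strings on the cube, 24 on the truncated octahedron, 24 on the octahedron.
assert counts == {6: 16, 10: 24, 2: 24}
# The cube strings are exactly those with |x1-x2| + |x3-x4| + |x5-x6| in {0,3}.
for x in product((0, 1), repeat=6):
    cond = abs(x[0]-x[1]) + abs(x[2]-x[3]) + abs(x[4]-x[5]) in (0, 3)
    assert cond == (x in cube_strings)
```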
\image{1.0}{Polyhedra6}{Polyhedra corresponding to three different types of strings for \rac{6} QRAC with SR}{Three polyhedra (cube, truncated octahedron, and octahedron) corresponding to three different types of strings for \rac{6} QRAC with SR. The red points in Fig.~\ref{fig:QRAC6} are obtained by superimposing these three polyhedra.}{fig:Polyhedra6}
The coefficients $\beta$ of the encoding states are the $64$ roots of the polynomial
\begin{gather}
\beta^4 (\beta-1)^4 (4 \beta^4-1)^4 (36 \beta^8+24 \beta^4+1)^2 \nonumber \\
(25 \beta^8-15 \beta^4+1) (400 \beta^8-360 \beta^4+1) (400 \beta^8+56 \beta^4+25).
\end{gather}
The obtained success probability using input randomization is
\begin{equation}
p = \frac{1}{2} + \frac{2+\sqrt{3}+\sqrt{15}}{16\sqrt{6}} \approx \p{6}.
\end{equation}
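As with the \rac{5} code, this closed form can be reproduced by averaging the norm of $\vc{v}_x$ over all $64$ inputs (a short Python sketch using the vectors from (\ref{eq:v6})):

```python
from itertools import product
import math

r2, r3, r6, r15 = (math.sqrt(k) for k in (2, 3, 6, 15))
# The Bloch vectors v_1..v_6 from (eq:v6).
dirs = [(0, 1, 1), (0, -1, 1), (1, 0, 1), (1, 0, -1), (1, 1, 0), (-1, 1, 0)]
dirs = [(a/r2, b/r2, c/r2) for a, b, c in dirs]

total = 0.0
for x in product((0, 1), repeat=6):
    v = [sum((-1)**x[i] * dirs[i][c] for i in range(6)) for c in range(3)]
    total += math.sqrt(sum(c*c for c in v))

# p = 1/2 + E[||v_x||]/(2n) with n = 6.
p = 0.5 + total / 64 / 12
assert abs(p - (0.5 + (2 + r3 + r15) / (16 * r6))) < 1e-12
```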
\subsubsection{The \rac{9} QRAC with SR} \label{sect:QRAC9}
\image{0.4}{Exact9}{The \rac{9} QRAC with SR}{The \rac{9} QRAC with SR.}{fig:QRAC9}
This QRAC is a combination of three \rac{3} QRACs described in Sect.~\ref{sect:KnownQRAC3}. It has three measurements along each axis:
\begin{equation}
\begin{aligned}
\vc{v}_1 &= \vc{v}_4 = \vc{v}_7 = \pm (1, 0, 0), \\
\vc{v}_2 &= \vc{v}_5 = \vc{v}_8 = \pm (0, 1, 0), \\
\vc{v}_3 &= \vc{v}_6 = \vc{v}_9 = \pm (0, 0, 1).
\end{aligned}
\end{equation}
The measurement bases $M_1$, $M_2$, and $M_3$ corresponding to the Bloch vectors $\vc{v}_1$, $\vc{v}_2$, and $\vc{v}_3$ are given by (\ref{eq:M1}), (\ref{eq:M2}), and (\ref{eq:M3}), respectively.
The encoding points can be characterized as a $4 \times 4 \times 4$ \emph{cubic lattice} formed by vectors (\ref{eq:vx}) projected on the surface of the Bloch ball. Note that this lattice consists of the vertices of $8$ equal cubes, each lying in a different octant. Then the $7$ points inside each spherical triangle in Fig.~\ref{fig:QRAC9} are the projections of the vertices of the corresponding cube.\footnote{We get $7$ points instead of $8$ since the projections of two diagonally opposite vertices coincide.}
All $2^9=512$ strings can be classified into $3$ types. First consider a string $a_1 a_2 a_3 \in \set{0,1}^3$ and define
\begin{equation}
s(a_1, a_2, a_3) =
\frac{
\abs{a_1 - a_2} +
\abs{a_2 - a_3} +
\abs{a_3 - a_1}
}{2}.
\end{equation}
Notice that $s(a_1, a_2, a_3) \in \set{0,1}$. Now for $x = x_1 x_2 \ldots x_9 \in \set{0,1}^9$ define
\begin{equation}
t(x) =
s(x_1, x_4, x_7) +
s(x_2, x_5, x_8) +
s(x_3, x_6, x_9).
\end{equation}
Then the type of the string $x$ can be determined as follows:
\begin{equation}
t(x) =
\begin{cases}
0, 3 & \text{cube}, \\
1 & \text{truncated cube}, \\
2 & \text{small rhombicuboctahedron}.
\end{cases}
\end{equation}
These types are named after polyhedra, since each type of string is encoded into the vertices of the corresponding polyhedron (see Fig.~\ref{fig:Polyhedra9}):
\begin{itemize}
\item The \emph{cube} has $8$ vertices and there are $28$ strings encoded into each vertex. These vertices are:
\begin{equation}
\frac{1}{\sqrt{3}}(\pm 1, \pm 1, \pm 1).
\end{equation}
\item The deformed\footnote{The edges of the \emph{truncated cube} are of the same length. In our case the edges forming triangles are $\sqrt{2}$ times longer than the other edges.} \emph{truncated cube} has $24$ vertices and there are $3$ strings encoded into each vertex. These vertices are:
\begin{equation}
\frac{1}{\sqrt{19}}(\pm 1, \pm 3, \pm 3) \cup
\frac{1}{\sqrt{19}}(\pm 3, \pm 1, \pm 3) \cup
\frac{1}{\sqrt{19}}(\pm 3, \pm 3, \pm 1).
\end{equation}
\item The deformed\footnote{The edges of the \emph{small rhombicuboctahedron} are also of the same length, but in our case the edges forming triangles again are $\sqrt{2}$ times longer.} \emph{small rhombicuboctahedron} also has $24$ vertices and there are $9$ strings encoded into each vertex. These vertices are:
\begin{equation}
\frac{1}{\sqrt{11}}(\pm 3, \pm 1, \pm 1) \cup
\frac{1}{\sqrt{11}}(\pm 1, \pm 3, \pm 1) \cup
\frac{1}{\sqrt{11}}(\pm 1, \pm 1, \pm 3).
\end{equation}
\end{itemize}
\image{1.0}{Polyhedra9}{Polyhedra corresponding to three different types of strings for \rac{9} QRAC with SR}{Three polyhedra (cube, small rhombicuboctahedron, and truncated cube) corresponding to three different types of strings for \rac{9} QRAC with SR. The red points in Fig.~\ref{fig:QRAC9} are obtained by superimposing these three polyhedra.}{fig:Polyhedra9}
The coefficients $\beta$ for the corresponding qubit states $\alpha\ket{0}+\beta\ket{1}$ are the roots of the following polynomial:
\begin{gather}
(36 \beta^8 + 24 \beta^4 + 1)^{28} (1444 \beta^8 + 760 \beta^4 + 81)^3 (484 \beta^8 + 440 \beta^4 + 1)^9 \nonumber \\
(52128400 \beta^{16} - 21509824 \beta^{12} + 26780424 \beta^8 - 372400 \beta^4 + 15625)^3 \nonumber \\
(5856400 \beta^{16} - 1788864 \beta^{12} + 1232264 \beta^8 - 92400 \beta^4 + 15625)^9.
\end{gather}
Using input randomization we get success probability
\begin{equation}
p = \frac{1}{2} + \frac{10\sqrt{3}+9\sqrt{11}+3\sqrt{19}}{384} \approx \p{9}.
\end{equation}
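Both the classification and the probability can be verified by enumeration. Along each axis the three signed unit vectors sum to $\pm 3$ (all three bits equal) or $\pm 1$ (bits mixed), so $\abs{\vc{v}_x}^2$ is a sum of three terms, each $9$ or $1$; the type counts are $8$, $72$, $216$, and $216$ for $t(x)=0,1,2,3$, and the mean norm over all $512$ inputs reproduces $\frac{1}{2} + \frac{10\sqrt{3}+9\sqrt{11}+3\sqrt{19}}{384} \approx 0.6569$. A short Python sketch:

```python
from itertools import product
import math

r3, r11, r19 = (math.sqrt(k) for k in (3, 11, 19))

def s(a, b, c):
    # s = 0 if all three bits are equal, 1 otherwise.
    return (abs(a-b) + abs(b-c) + abs(c-a)) // 2

counts = {0: 0, 1: 0, 2: 0, 3: 0}
total = 0.0
for x in product((0, 1), repeat=9):
    t = s(x[0], x[3], x[6]) + s(x[1], x[4], x[7]) + s(x[2], x[5], x[8])
    counts[t] += 1
    # ||v_x||^2 = sum over axes of 9 (bits equal) or 1 (bits mixed).
    total += math.sqrt(sum(1 if s(x[i], x[i+3], x[i+6]) else 9 for i in range(3)))

# cube: t in {0,3}; truncated cube: t = 1; small rhombicuboctahedron: t = 2.
assert counts == {0: 8, 1: 72, 2: 216, 3: 216}

p = 0.5 + total / 512 / 18
assert abs(p - (0.5 + (10*r3 + 9*r11 + 3*r19) / 384)) < 1e-12
```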
\subsection{Symmetric constructions} \label{sect:Symmetric}
In Sect.~\ref{sect:Numerical} we have discussed in great detail \rac{n} quantum random access codes with shared randomness for some particular values of $n$. Since these codes were obtained using numerical optimization, there are still some questions left open. Most importantly, are the codes for $n \geq 4$ discussed in Sect.~\ref{sect:Numerical} optimal? If this is the case, do these codes (see Figs.~\ref{fig:QRAC2}, \ref{fig:QRAC3}, \ref{fig:QRAC4}, \ref{fig:QRAC5}, \ref{fig:QRAC6}, and \ref{fig:QRAC9}) have anything in common that makes them so good?
The purpose of this section is to shed some light on these two questions. We will explore the possibility that \emph{symmetry} is the property that makes QRACs with SR good. In Sect.~\ref{sect:GreatCircles} we will explore what symmetries the codes found by numerical optimization have and what other symmetries are possible. In several subsequent sections we will use these symmetries to construct new codes and compare them with the numerical ones (the success probabilities of the obtained codes are summarized in Table~\ref{tab:SymmetricProbabilities}). In Sect.~\ref{sect:Discussion} we will conclude that symmetric codes are \emph{not} necessarily optimal and speculate about what else could potentially be used to construct good QRACs.
\begin{table}[!hb]
\centering
\begin{tabular}{r|c|c}
$n$ & Section & Probability \\
\hline
$4$ & \ref{sect:QRAC4[sym]} & $\p{4[sym]}$ \\
$6$ & \ref{sect:QRAC6[sym]} & $\p{6[sym]}$ \\
$9$ & \ref{sect:QRAC9[sym]} & $\p{9[sym]}$ \\
$15$ & \ref{sect:QRAC15[sym]} & $\p{15[sym]}$
\end{tabular}
\caption[The success probabilities of symmetric \rac{n} QRACs with SR]{The success probabilities of symmetric \rac{n} QRACs with SR. See Table~\ref{tab:ComparisonOfProbabilities} for the comparison with numerically obtained codes.}
\label{tab:SymmetricProbabilities}
\end{table}
\subsubsection{Symmetric great circle arrangements} \label{sect:GreatCircles}
If we want to construct a QRAC with SR that has some sort of symmetry, we have to choose the directions of measurements in a symmetric way. In other words, we have to symmetrically arrange the great circles that are orthogonal to the measurement directions.
In this section we will discuss two ways that great circles can be arranged on a sphere in a symmetric way. These arrangements come from \emph{quasiregular polyhedra} and \emph{triangular symmetry groups}, respectively. The first kind of arrangement is not directly observed in numerically obtained examples, despite its high symmetry. However, the second one is observed in almost all numerically obtained codes. Since our approach is empirical, we will not justify when an arrangement is ``symmetric enough''\footnote{Several possible criteria are: (a) any great circle can be mapped to any other by a rotation from the symmetry group of the arrangement, (b) the sphere is cut into pieces that are regular polygons, (c) the sphere is cut into pieces of the same form. However, not all examples we will give satisfy these three conditions. In fact, each condition is violated by at least one of the examples we will consider.} to be of interest. We will use the term \emph{symmetric codes} to refer to the codes constructed below. This is just to distinguish them from numerically obtained codes in Sect.~\ref{sect:Numerical}, not because they satisfy some formal criterion of ``being symmetric''.
\subsubsubsection{Quasiregular polyhedra}
A (convex) \emph{quasiregular polyhedron} is the intersection of a \emph{Platonic solid} with its dual. There are only three possibilities:
\begin{align}
\text{octahedron} &= \text{tetrahedron} \cap \text{tetrahedron}, \\
\text{cuboctahedron} &= \text{cube} \cap \text{octahedron}, \\
\text{icosidodecahedron} &= \text{icosahedron} \cap \text{dodecahedron}.
\end{align}
The tetrahedron is self-dual, thus the octahedron, which is the intersection of two tetrahedra, has slightly different properties than the other two polyhedra (e.g., all its faces are equal). For this reason the octahedron may be considered a degenerate quasiregular polyhedron, or not be considered quasiregular at all, since it is Platonic. Thus there are only two (non-degenerate) convex quasiregular polyhedra (see Fig.~\ref{fig:Quasiregular}).
\image{0.6}{Quasiregular}{Quasiregular polyhedra}{Quasiregular polyhedra: \emph{cuboctahedron} and \emph{icosidodecahedron}.}{fig:Quasiregular}
These polyhedra have several nice properties. For example, all their edges are equivalent and there are exactly two types of faces (both regular polygons), each completely surrounded by the faces of the other type. The most relevant property for us is that their edges form great circles. Since the arrangements of great circles formed by the edges of the cuboctahedron and the icosidodecahedron do not appear in the numerical codes, we will use them in Sects.~\ref{sect:QRAC4[sym]} and \ref{sect:QRAC6[sym]} to construct new (symmetric) \rac{4} and \rac{6} QRACs with SR, respectively.
\subsubsubsection{Triangular symmetry groups}
Consider a spherical triangle---it is enclosed by three planes that pass through its edges and the center of the sphere. Let us imagine that these planes are mirrors that reflect our triangle. These three reflections generate a \emph{reflection group} \cite{FiniteReflectionGroups,TriangularSymmetryGroups}. For some specific choices of the triangle this group is finite and the images of the triangle under different group operations do not overlap. Hence they form a \emph{tiling} of the sphere. This tiling can also be seen as several (most likely more than three) great circles cutting the sphere into equal triangles.
We can choose any of the triangles in the tiling and repeatedly reflect it along its edges so that it moves around one of its vertices. This means that the angles of the corners that meet at any vertex of the tiling must be equal. Moreover, we do not want the triangle to intersect with any of the mirrors, so only an even number of triangles can meet at a vertex.\footnote{For example, if we project the edges of an icosahedron on the sphere, we obtain arcs that form a tiling with five triangles meeting at each vertex. We cannot use these arcs as mirrors, since they do not form great circles (we cannot extend any of them to a great circle without intersecting other triangles).}
\begin{figure}
\centering
\begin{minipage}[t]{0.8\textwidth}
\centering
\includegraphics[width=0.3\textwidth]{Tri222}
\includegraphics[width=0.3\textwidth]{Tri223}
\includegraphics[width=0.3\textwidth]{Tri224}
\end{minipage}
\begin{minipage}[t]{0.8\textwidth}
\centering
\includegraphics[width=0.3\textwidth]{Tri233}
\includegraphics[width=0.3\textwidth]{Tri234}
\includegraphics[width=0.3\textwidth]{Tri235}
\end{minipage}
\caption[Triangular symmetry groups]{Triangular symmetry groups. First row: $(2,2,2)$, $(2,2,3)$, $(2,2,4)$. Second row: $(2,3,3)$, $(2,3,4)$, $(2,3,5)$.}
\label{fig:TriangularSymmetryGroups}
\end{figure}
Hence the angles of the spherical triangle must be $(\frac{\pi}{p}, \frac{\pi}{q}, \frac{\pi}{r})$ for some integers $p,q,r \geq 2$. The sum of the angles of a spherical triangle is at least $\pi$, so the numbers $p,q,r$ must satisfy
\begin{equation}
\frac{1}{p} + \frac{1}{q} + \frac{1}{r} > 1.
\end{equation}
If $p \leq q \leq r$, the only solutions are: $(2,2,k)$ for any $k \geq 2$, $(2,3,3)$, $(2,3,4)$, and $(2,3,5)$. The tilings corresponding to these solutions are shown in Fig.~\ref{fig:TriangularSymmetryGroups}. The symmetry group of such a tiling is called a \emph{triangular symmetry group} \cite[p.~158]{TriangularSymmetryGroups} and is denoted by $(p,q,r)$.
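This enumeration is easy to check mechanically (a short Python sketch; the cutoff for $r$ is arbitrary, since the family $(2,2,k)$ continues for all $k$):

```python
from fractions import Fraction

# Integer triples 2 <= p <= q <= r with 1/p + 1/q + 1/r > 1,
# using exact rational arithmetic to avoid rounding issues.
sols = [(p, q, r)
        for p in range(2, 5) for q in range(p, 5) for r in range(q, 30)
        if Fraction(1, p) + Fraction(1, q) + Fraction(1, r) > 1]

# Besides the infinite family (2,2,k), only three sporadic solutions exist.
sporadic = [t for t in sols if t[:2] != (2, 2)]
assert sporadic == [(2, 3, 3), (2, 3, 4), (2, 3, 5)]
assert all(t[:2] == (2, 2) for t in sols if t not in sporadic)
```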
We can observe these tilings in almost all numerically obtained QRACs discussed in Sect.~\ref{sect:Numerical}. They are formed when the great circles corresponding to measurements partition the Bloch sphere into equal triangles. All such cases are summarized in Table~\ref{tab:TriangularSymmetryGroups}. Tilings appearing in \rac{2} and \rac{4} QRACs that are not mentioned in the table can be seen as degenerate cases.
\begin{table}[!ht]
\centering
\begin{tabular}{c|c|l|l}
$n$ & $(p,q,r)$ & Polyhedron & Section and figure \\
\hline
$3$ & $(2,2,2)$ & octahedron & Sect.~\ref{sect:QRAC2,QRAC3}, Fig.~\ref{fig:QRAC3} \\
$5$ & $(2,2,4)$ & normalized octagonal dipyramid & Sect.~\ref{sect:QRAC5}, Fig.~\ref{fig:QRAC5} \\
$6$ & $(2,3,3)$ & normalized tetrakis hexahedron & Sect.~\ref{sect:QRAC6}, Fig.~\ref{fig:QRAC6} \\
$9$ & $(2,2,2)$ & octahedron & Sect.~\ref{sect:QRAC9}, Fig.~\ref{fig:QRAC9} \\
\end{tabular}
\caption[Triangular symmetry groups of numerical \rac{n} QRACs]{Triangular symmetry groups of numerical \rac{n} QRACs.}
\label{tab:TriangularSymmetryGroups}
\end{table}
The tilings corresponding to triangular symmetry groups $(2,3,4)$ and $(2,3,5)$ do not appear in numerically obtained codes. Thus we will use them to construct new (symmetric) \rac{9} and \rac{15} QRACs with SR in Sects.~\ref{sect:QRAC9[sym]} and \ref{sect:QRAC15[sym]}, respectively. To each tiling one can associate a corresponding polyhedron with equal triangular faces. The polyhedra corresponding to tilings $(2,3,4)$ and $(2,3,5)$ are called the normalized\footnote{\emph{Normalized} means that all vectors pointing from the origin to the vertices of the polyhedron are rescaled to have unit norm.} \emph{disdyakis dodecahedron} and the normalized \emph{disdyakis triacontahedron}, respectively.
\vspace{1.3ex}
Polyhedra arising from both types of symmetric great circle arrangements (qua\-si\-re\-gu\-lar polyhedra and triangular symmetry groups) are summarized in Table~\ref{tab:GreatCircleSolids}. The great circle arrangements corresponding to the four marked polyhedra do not appear in numerically obtained codes, so we will use them to construct new (symmetric) QRACs with SR.
\begin{table}[!ht]
\centering
\begin{tabular}{r|r|r|c|l}
$n$ & \multicolumn{2}{|c|}{Faces} & $(p,q,r)$ & Polyhedron \\
\hline
$3$ & $8$ & $8$ & $(2,2,2)$ & octahedron \\
$4$ & $14$ & $14$ & QR & cuboctahedron \checkmark \\
$6$ & $32$ & $32$ & QR & icosidodecahedron \checkmark \\
$6$ & $24$ & $32$ & $(2,3,3)$ & normalized tetrakis hexahedron \\
$9$ & $48$ & $74$ & $(2,3,4)$ & normalized disdyakis dodecahedron \checkmark \\
$15$ & $120$ & $212$ & $(2,3,5)$ & normalized disdyakis triacontahedron \checkmark
\end{tabular}
\caption[Polyhedra whose edges form great circles]{Polyhedra whose edges form great circles. The first column indicates the number of great circles. The next two indicate, respectively, the number of faces of the polyhedron and the maximal number of pieces achievable by cutting the sphere with $n$ great circles (see Sect.~\ref{sect:noQRAC4}). The fourth column indicates the triangular symmetry group (QR means quasiregular). The name of the polyhedron is given in the last column. Four marked polyhedra will be used in subsequent sections to construct symmetric QRACs with SR.}
\label{tab:GreatCircleSolids}
\end{table}
\subsubsection{Symmetric \rac{4} QRAC with SR} \label{sect:QRAC4[sym]}
\image{0.4}{Exact4sym}{Symmetric \rac{4} QRAC with SR}{Symmetric \rac{4} QRAC with SR.}{fig:QRAC4[sym]}
\image{0.3}{Tetrahedron}{A regular tetrahedron and four great circles parallel to its faces}{A regular tetrahedron and four great circles parallel to its faces. The circles are determined by the measurements in the direction of the vertices of the tetrahedron. The numbers at the vertices indicate the Bloch vectors of basis states $\ket{\psi_0}$ of the measurements for the \rac{4} QRAC shown in Fig.~\ref{fig:QRAC4[sym]}.}{fig:Tetrahedron}
Recall that in Sect.~\ref{sect:noQRAC4} we proved that four planes passing through the center of the Bloch sphere partition its surface into at most $14$ parts. The most symmetric way to obtain $14$ parts is to use the four planes parallel to the four faces of a regular \emph{tetrahedron}. The measurements are along the four directions given by the vertices (see Fig.~\ref{fig:Tetrahedron}).
The simplest way to construct a regular tetrahedron is to choose four specific vertices of the cube, i.e., from the set $\frac{1}{\sqrt{3}} (\pm 1, \pm 1, \pm 1)$. For example, we could choose the ones with an odd number of positive coordinates. They provide us with the following pairs of antipodal Bloch vectors as the measurement bases:
\begin{equation}
\begin{aligned}
\vc{v}_1 &= \pm (+1,-1,-1) / \sqrt{3}, \\
\vc{v}_2 &= \pm (-1,+1,-1) / \sqrt{3}, \\
\vc{v}_3 &= \pm (-1,-1,+1) / \sqrt{3}, \\
\vc{v}_4 &= \pm (+1,+1,+1) / \sqrt{3}.
\end{aligned}
\label{eq:SICPOVM}
\end{equation}
The measurement bases corresponding to these Bloch vectors are as follows:
\begin{equation}
\begin{aligned}
M_1 &= M(+1,+1), \\
M_2 &= M(+1,-1), \\
M_3 &= M(-1,+1), \\
M_4 &= M(-1,-1),
\end{aligned}
\end{equation}
where
\begin{equation}
M(s_1,s_2) = \set{
\frac{1}{2} \sqrt{1+\frac{s_1}{\sqrt{3}}} \mx{ \sqrt{3}-s_1 \\ s_2 (s_1 - i) },
\frac{1}{2} \sqrt{1-\frac{s_1}{\sqrt{3}}} \mx{ \sqrt{3}+s_1 \\ s_2 (i - s_1) }
}.
\end{equation}
The great circles determined by these measurements partition the surface of the Bloch ball into $14$ parts. In fact, the grid formed by these circles is a projection of the edges of a \emph{cuboctahedron} (see the part on quasiregular polyhedra in Sect.~\ref{sect:GreatCircles}) on the surface of the Bloch ball (see Figs.~\ref{fig:QRAC4[sym]} and \ref{fig:Tetrahedron}).
In each of the $14$ parts of the Bloch sphere a definite string can be encoded so that each bit can be recovered with a probability greater than $\frac{1}{2}$. Strange as it may seem, the remaining $2$ strings ($x=0000$ and $x=1111$) can be encoded anywhere without affecting the success probability of this QRAC. This is not a surprise if we recall from Sect.~\ref{sect:OptimalEncoding} that the optimal encoding $\vc{r}_x$ of the string $x$ is a unit vector in the direction of $\vc{v}_x$ given by equation (\ref{eq:vx}). In our case the Bloch vectors of the measurement bases point to the vertices of a regular tetrahedron centered at the origin. They clearly sum to zero, so $\vc{v}_{0000} = \vc{v}_{1111} = 0$. Thus the scalar product $\vc{r}_x \cdot \vc{v}_x$ in (\ref{eq:rxvx}) is also zero and the success probability does not depend on the vectors $\vc{r}_{0000}$ and $\vc{r}_{1111}$. Therefore, we will ignore these two strings in the following discussion.
\image{0.3}{Ellipses}{Strings encoded into the spherical square and the adjacent spherical triangles of the \rac{4} QRAC}{The relationship between the strings encoded into the spherical square and the adjacent spherical triangles according to the \rac{4} QRAC shown in Fig.~\ref{fig:QRAC4[sym]}.}{fig:Ellipses}
The other $14$ strings are encoded into the vertices of a normalized \emph{tetrakis hexahedron} (the \emph{convex hull} of the \emph{cube} and \emph{octahedron}). The string $x = x_1 x_2 x_3 x_4$ is encoded into the Bloch vector $\vc{r}(x) = \vc{r}_{w}(x)$, where
\begin{equation}
w = x_1 \oplus x_2 \oplus x_3 \oplus x_4 \in \set{0,1}
\end{equation}
is the parity of the input. In the case $w=0$ the encoding points are the vertices $(\pm 1, 0, 0) \cup (0, \pm 1, 0) \cup (0, 0, \pm 1)$ of an \emph{octahedron}:
\begin{equation}
\vc{r}_0(x) = (-1)^{x_4}
\mx{
1 - \abs{x_1 - x_4} \\
1 - \abs{x_2 - x_4} \\
1 - \abs{x_3 - x_4}
}.
\end{equation}
But for $w=1$ we get the vertices $(\pm 1, \pm 1, \pm 1)/\sqrt{3}$ of a \emph{cube}:
\begin{equation}
\vc{r}_1(x) = \frac{(-1)^{x_1 x_2 + x_3 x_4}}{\sqrt{3}}
\mx{
(-1)^{x_1 + x_4} \\
(-1)^{x_2 + x_4} \\
(-1)^{x_3 + x_4}
}.
\end{equation}
Note that the Bloch vectors $\vc{r}_1(x)$ are the vertices of the same cube as the Bloch vectors of the \rac{3} QRAC discussed in Sect.~\ref{sect:KnownQRAC3}.
One can observe the following properties of this encoding. The surface of the Bloch ball is partitioned into $6$ \emph{spherical squares} and $8$ \emph{spherical triangles}. Strings with $w=0$ and $w=1$ are encoded into squares and triangles, respectively. If $w=1$ ($x=1000$ or $x=0111$ and their permutations), the string has one bit that differs from the other three. Such a string is encoded into the basis state of the corresponding measurement so that this bit can be recovered with certainty. If $w=0$, the string is encoded into a square and has the following property: each of its bits takes the value that occurs more frequently at the same position in the strings of the four neighboring triangles (see Fig.~\ref{fig:Ellipses} as an example).
The corresponding encoding function is $E(x)=\alpha_w\ket{0}+\beta_w\ket{1}$ with coefficients $\alpha_0$, $\beta_0$ and $\alpha_1$, $\beta_1$ explicitly given by
\begin{equation}
\left\{
\begin{aligned}
\alpha_0 &= \sqrt{\frac{1}{2} + (-1)^{x_4} \frac{1 - \abs{x_3 - x_4}}{2}}, \\
\beta_0 &= x_3 x_4 + (-1)^{x_4} \frac{1 - \abs{x_1 - x_4} + i \bigl(1 - \abs{x_2 - x_4}\bigr)}{\sqrt{2}},
\end{aligned}
\right.
\end{equation}
and
\begin{equation}
\left\{
\begin{aligned}
\alpha_1 &= \sqrt{\frac{1}{2} + \frac{s(x)}{2 \sqrt{3}}}, \\
\beta_1 &= (-1)^{x_3} s(x) \frac{(-1)^{x_1} + i (-1)^{x_2}}{\sqrt{6 + s(x) 2 \sqrt{3}}},
\end{aligned}
\right.
\end{equation}
where $s(x) \in \set{-1,1}$ is given by
\begin{equation}
s(x) = (-1)^{x_1 x_2 + x_3 x_4 + x_3 + x_4}.
\end{equation}
The $14$ coefficients $\beta_0$ and $\beta_1$ are the roots of the polynomial
\begin{equation}
\beta (\beta - 1) (4 \beta^4 - 1) (36 \beta^8 + 24 \beta^4 + 1).
\end{equation}
Using input randomization we get the same success probability for any input:
\begin{equation}
p = \frac{1}{2} + \frac{2+\sqrt{3}}{16} \approx \p{4[sym]}.
\end{equation}
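This value can again be checked by averaging the norm of $\vc{v}_x$ over all $16$ inputs (a short Python sketch; the two strings with $\vc{v}_x = 0$ contribute nothing, as explained above):

```python
from itertools import product
import math

r3 = math.sqrt(3)
# The tetrahedral measurement directions from (eq:SICPOVM).
dirs = [(1, -1, -1), (-1, 1, -1), (-1, -1, 1), (1, 1, 1)]
dirs = [(a/r3, b/r3, c/r3) for a, b, c in dirs]

norms = []
for x in product((0, 1), repeat=4):
    v = [sum((-1)**x[i] * dirs[i][c] for i in range(4)) for c in range(3)]
    norms.append(math.sqrt(sum(c*c for c in v)))

# Exactly two strings (0000 and 1111) have v_x = 0.
assert sum(1 for n in norms if n < 1e-12) == 2

p = 0.5 + sum(norms) / 16 / 8
assert abs(p - (0.5 + (2 + r3) / 16)) < 1e-12
```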
It is surprising that despite higher symmetry (compare Figs.~\ref{fig:QRAC4} and \ref{fig:QRAC4[sym]}) this QRAC has a lower success probability than the \rac{4} QRAC discussed in Sect.~\ref{sect:QRAC4}.
\subsubsection{Symmetric \rac{6} QRAC with SR} \label{sect:QRAC6[sym]}
\image{0.4}{Exact6sym}{Symmetric \rac{6} QRAC with SR}{Symmetric \rac{6} QRAC with SR.}{fig:QRAC6[sym]}
According to the discussion in Sect.~\ref{sect:noQRAC4}, six great circles can cut the sphere into at most $32$ parts. It turns out that there is a very symmetric arrangement that achieves this maximum. Observe that the \emph{dodecahedron} has $12$ faces and diametrically opposite ones are parallel. For each pair of parallel faces we can draw a plane through the origin parallel to both faces. These six planes intersect the sphere in six great circles that define our measurements. They are the projections of the edges of the \emph{icosidodecahedron} (see Fig.~\ref{fig:Quasiregular}), which is one of the quasiregular polyhedra discussed in Sect.~\ref{sect:GreatCircles}.
There is another way to describe these measurements. Notice that the \emph{icosahedron} (the dual of the dodecahedron) has $12$ vertices that consist of six antipodal pairs. Our measurements are along the six directions defined by these pairs. The coordinates of the vertices of the icosahedron are as follows:
\begin{equation}
\frac{1}{\sqrt{1+\tau^2}} (0, \pm \tau, \pm 1) \cup
\frac{1}{\sqrt{1+\tau^2}} (\pm 1, 0, \pm \tau) \cup
\frac{1}{\sqrt{1+\tau^2}} (\pm \tau, \pm 1, 0),
\label{eq:Icosahedron}
\end{equation}
where $\tau = \frac{1+\sqrt{5}}{2}$ is the \emph{golden ratio} (the positive root of $x^2 = x + 1$).
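These coordinates can be generated and checked directly: all $12$ vectors have unit norm, and any two of the six measurement axes meet at the same angle, with $\abs{\cos\theta} = 1/\sqrt{5}$ (a minimal sketch; variable names are ours):

```python
from itertools import combinations, product
from math import sqrt

tau = (1 + sqrt(5)) / 2          # golden ratio: tau**2 == tau + 1
n = 1 / sqrt(1 + tau**2)

verts = []
for s1, s2 in product((1, -1), repeat=2):
    verts += [(0, s1 * tau * n, s2 * n),
              (s2 * n, 0, s1 * tau * n),
              (s1 * tau * n, s2 * n, 0)]

assert len(verts) == 12
for v in verts:
    assert abs(sum(t * t for t in v) - 1) < 1e-12  # unit vectors

for u, v in combinations(verts, 2):
    d = abs(sum(a * b for a, b in zip(u, v)))
    # antipodal pairs give |dot| = 1; all other pairs give |dot| = 1/sqrt(5)
    assert abs(d - 1) < 1e-12 or abs(d - 1 / sqrt(5)) < 1e-12
```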
Each of the $64$ strings is encoded either in a vertex of an icosahedron or dodecahedron. They have $12$ and $20$ vertices, respectively, so there are two strings encoded in each vertex. The union of the icosahedron and the dodecahedron is called the \emph{pentakis dodecahedron} (see the polyhedron in Fig.~\ref{fig:QRAC6[sym]}).
The success probability of this code is
\begin{equation}
p = \frac{1}{2} + \frac{\sqrt{5}}{32} + \frac{1}{96} \sqrt{75 + 30 \sqrt{5}} \approx \p{6[sym]}.
\end{equation}
\subsubsection{Symmetric \rac{9} QRAC with SR} \label{sect:QRAC9[sym]}
\doubleimage
{Exact9sym}{Symmetric \rac{9} QRAC with SR}{Symmetric \rac{9} QRAC with SR.}{fig:QRAC9[sym]}
{Exact15sym}{Symmetric \rac{15} QRAC with SR}{Symmetric \rac{15} QRAC with SR.}{fig:QRAC15[sym]}
This code is based on the triangular tiling of the sphere whose symmetry group is $(2,3,4)$. The great circles corresponding to measurements coincide with the projection of the edges of the \emph{normalized disdyakis dodecahedron}. We can think of this QRAC as the union of \rac{3} and \rac{6} codes. The first three measurements are along the coordinate axes as in the \rac{3} QRAC discussed in Sect.~\ref{sect:KnownQRAC3}. The remaining six measurements are exactly the same as for the \rac{6} code discussed in Sect.~\ref{sect:QRAC6} (see Figs.~\ref{fig:Cube} and \ref{fig:QRAC6}), i.e., they are along the six antipodal pairs of $12$ vertices of the cuboctahedron shown in Fig.~\ref{fig:Quasiregular}. Note that a great circle of the first kind cannot be transformed to a great circle of the second kind via an operation from the symmetry group of the code.\footnote{For the other three symmetric codes we can transform any circle to any other in this way, i.e., the symmetry group acts transitively on the circles.}
The resulting QRAC is shown in Fig.~\ref{fig:QRAC9[sym]} and its success probability is
\begin{equation}
p \approx \p{9[sym]}.
\end{equation}
\subsubsection{Symmetric \rac{15} QRAC with SR} \label{sect:QRAC15[sym]}
The triangular symmetry group of this code is $(2,3,5)$ and the great circles coincide with the projection of the edges of the \emph{normalized disdyakis triacontahedron}. To understand what the measurements are in this case, note that the \emph{icosidodecahedron} (see Fig.~\ref{fig:Quasiregular}) has $30$ vertices. Their coordinates are:
\begin{gather}
(\pm 1, 0, 0) \cup
(0, \pm 1, 0) \cup
(0, 0, \pm 1), \\
\frac{1}{2 \tau} (\pm 1, \pm \tau, \pm \tau^2) \cup
\frac{1}{2 \tau} (\pm \tau^2, \pm 1, \pm \tau) \cup
\frac{1}{2 \tau} (\pm \tau, \pm \tau^2, \pm 1).
\label{eq:Icosidodecahedron}
\end{gather}
The measurement directions are given by $15$ antipodal pairs of these vertices.
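As with the icosahedron, these coordinates are easy to verify programmatically: the list contains $30$ unit vectors that split into $15$ antipodal pairs (sketch; the helper code is ours):

```python
from itertools import product
from math import sqrt

tau = (1 + sqrt(5)) / 2

verts = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
for s1, s2, s3 in product((1, -1), repeat=3):
    a, b, c = s1 / (2 * tau), s2 * tau / (2 * tau), s3 * tau**2 / (2 * tau)
    verts += [(a, b, c), (c, a, b), (b, c, a)]

assert len(verts) == 30
for v in verts:
    # unit norm follows from tau**2 = tau + 1: 1 + tau**2 + tau**4 = 4*tau**2
    assert abs(sum(t * t for t in v) - 1) < 1e-12

# each vertex has its antipode in the list, giving 15 measurement directions
dirs = {max(tuple(round(t, 9) for t in v),
            tuple(round(-t, 9) for t in v)) for v in verts}
assert len(dirs) == 15
```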
The obtained QRAC is shown in Fig.~\ref{fig:QRAC15[sym]}. Its success probability is
\begin{equation}
p \approx \p{15[sym]}.
\end{equation}
\subsection{Discussion} \label{sect:Discussion}
In this section we will compare and analyze the numerical and symmetric QRACs with SR described in Sects.~\ref{sect:Numerical} and \ref{sect:Symmetric}, respectively. Hopefully these observations can be used to find new \rac{n} QRACs with SR or to generalize the existing ones (see Sect.~\ref{sect:Generalizations} for possible generalizations).
The success probabilities of numerical and symmetric QRACs with SR are given in Tables~\ref{tab:NumericalProbabilities} and \ref{tab:SymmetricProbabilities}, respectively (see Table~\ref{tab:ComparisonOfProbabilities} for the comparison). We see that none of the symmetric codes discussed in Sect.~\ref{sect:Symmetric} is optimal. However, the success probabilities of numerical and symmetric codes do not differ much. Moreover, recall that there are two more symmetric codes (\rac{3} and \rac{6}) that coincide with the numerically obtained ones (see Table~\ref{tab:GreatCircleSolids}). Concerning these two codes we can reach more optimistic conclusions: the \rac{3} QRAC is optimal (see Sect.~\ref{sect:UpperBound}) and possibly the \rac{6} QRAC (see Sect.~\ref{sect:QRAC6}) is as well, since we did not manage to improve it in Sect.~\ref{sect:QRAC6[sym]}.
\begin{table}[!ht]
\centering
\begin{tabular}{r|c|r}
$n$ & Section & \multicolumn{1}{c}{Probability} \\
\hline
\multirow{2}{*}{$4$} & \ref{sect:QRAC4} & $\p{4}$ \\
& \ref{sect:QRAC4[sym]} & $>\p{4[sym]}$ \\
\hline
\multirow{2}{*}{$6$} & \ref{sect:QRAC6} & $\p{6}$ \\
& \ref{sect:QRAC6[sym]} & $>\p{6[sym]}$ \\
\hline
\multirow{2}{*}{$9$} & \ref{sect:QRAC9} & $\p{9}$ \\
& \ref{sect:QRAC9[sym]} & $>\p{9[sym]}$ \\
\hline
\multirow{2}{*}{$15$} & & $\p{15}$ \\
& \ref{sect:QRAC15[sym]} & $>\p{15[sym]}$
\end{tabular}
\caption[Comparison of the success probabilities of \rac{n} QRACs with SR]{Comparison of the success probabilities of \rac{n} QRACs with SR. For each $n$ the first probability corresponds to a numerical code, and the second one to a symmetric code. For $n=15$ we do not have numerical results, so we just use five measurements along each coordinate axis. In fact, the numerical \rac{4} and \rac{9} QRACs also use measurements only along the coordinate axes. The \rac{6} QRAC with two measurements along each coordinate axis has success probability $\p{6[ort]}$.}
\label{tab:ComparisonOfProbabilities}
\end{table}
We just saw that symmetric QRACs are not necessarily optimal. One could ask if there are other heuristic methods that potentially could be used to construct good QRACs with SR. We will give a few speculations in the remainder of this section. In particular, we will discuss some special kinds of measurements that could be useful. To make the discussion more general, we will not restrict ourselves to the case of a single qubit.
\begin{definition}
Two orthonormal bases $\mathcal{B}_1$ and $\mathcal{B}_2$ of $\mathbb{C}^d$ are called \emph{mutually unbiased bases} (MUBs) if $\abs{\braket{\psi_1}{\psi_2}}^2 = \frac{1}{d}$ for all $\ket{\psi_1} \in \mathcal{B}_1$ and $\ket{\psi_2} \in \mathcal{B}_2$. The maximal number of pairwise mutually unbiased bases in $\mathbb{C}^d$ is $d+1$. \cite{MUBs}
\end{definition}
When $d=2$, equation (\ref{eq:ScalarProduct}) implies that Bloch vectors corresponding to basis vectors of \emph{different} mutually unbiased bases are orthogonal\footnote{The notion of the Bloch vector can be generalized for $d \geq 2$ (see \cite{Kimura}). Then a similar duality holds as well (see equation (\ref{eq:GeneralizedScalarProduct}) in Sect.~\ref{sect:Generalizations}): mutually unbiased quantum states correspond to orthogonal Bloch vectors, but orthogonal quantum states correspond to ``mutually unbiased'' Bloch vectors, i.e., equiangular vectors pointing to the vertices of a regular simplex.}. There are three such bases in $\mathbb{C}^2$ and their Bloch vectors correspond to the vertices of an octahedron. For example, the bases $M_1$, $M_2$, and $M_3$ defined in Sects.~\ref{sect:KnownQRAC2} and \ref{sect:KnownQRAC3} are MUBs (they correspond to measuring along the $x$, $y$, and $z$ axes).
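For $d=2$ this duality can be checked directly: the eigenbases of the Pauli matrices $\sigma_z$, $\sigma_x$, and $\sigma_y$ are orthonormal and pairwise unbiased, with $\abs{\braket{\psi_1}{\psi_2}}^2 = \frac{1}{2}$ (a small sketch; the basis labels are ours):

```python
from math import sqrt

s = 1 / sqrt(2)
Z = [(1, 0), (0, 1)]             # sigma_z eigenbasis (computational basis)
X = [(s, s), (s, -s)]            # sigma_x eigenbasis
Y = [(s, s * 1j), (s, -s * 1j)]  # sigma_y eigenbasis

def overlap2(u, v):
    # |<u|v>|^2 for qubit state vectors given as coefficient tuples
    ip = sum(a.conjugate() * b for a, b in zip(u, v))
    return abs(ip) ** 2

bases = [Z, X, Y]
for i, B1 in enumerate(bases):
    for j, B2 in enumerate(bases):
        for u in B1:
            for v in B2:
                o = overlap2(u, v)
                if i == j:
                    assert abs(o - 1) < 1e-12 or abs(o) < 1e-12  # orthonormal
                else:
                    assert abs(o - 0.5) < 1e-12                  # unbiased: 1/d
```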
Note that the measurements for numerical \rac{2}, \rac{3}, \rac{4}, and \rac{9} QRACs are performed entirely using MUBs and three out of five measurement bases for numerical \rac{5} QRAC are also MUBs.
There is another very special measurement that appears in our QRACs.
\begin{definition}
A set of $d^2$ unit vectors $\ket{\psi_i} \in \mathbb{C}^d$ is called \emph{symmetric, informationally complete POVM} (\mbox{SIC-POVM}) if $\abs{\braket{\psi_i}{\psi_j}}^2 = \frac{1}{d+1}$ for any $i,j$. \cite{SIC-POVMs}
\end{definition}
For $d=2$ there are four such quantum states. Again, from equation (\ref{eq:ScalarProduct}) we see that the inner product between any two Bloch vectors corresponding to these states is $-\frac{1}{3}$. Such equiangular Bloch vectors are exactly the vertices of a tetrahedron, e.g., $\vc{v}_1$, $\vc{v}_2$, $\vc{v}_3$, $\vc{v}_4$ defined in (\ref{eq:SICPOVM}). They were used in Sect.~\ref{sect:QRAC4[sym]} to construct a symmetric \rac{4} QRAC.
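A quick numerical check of these facts, using one standard choice of tetrahedral Bloch vectors (not necessarily the exact vectors of equation (\ref{eq:SICPOVM})):

```python
from itertools import combinations
from math import sqrt

# One standard choice of tetrahedral Bloch vectors; all pairwise
# inner products equal -1/3, as required for a qubit SIC-POVM.
vs = [(1, 1, 1), (1, -1, -1), (-1, 1, -1), (-1, -1, 1)]
vs = [tuple(t / sqrt(3) for t in v) for v in vs]

for v in vs:
    assert abs(sum(t * t for t in v) - 1) < 1e-12  # unit Bloch vectors
for u, v in combinations(vs, 2):
    dot = sum(a * b for a, b in zip(u, v))
    assert abs(dot + 1 / 3) < 1e-12
    # via |<psi_u|psi_v>|^2 = (1 + u.v)/2 this gives 1/3 = 1/(d+1) for d = 2
    assert abs((1 + dot) / 2 - 1 / 3) < 1e-12
```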
Let us compare numerical and symmetric \rac{4} QRACs from Sects.~\ref{sect:QRAC4} and \ref{sect:QRAC4[sym]}, respectively. The first one is based on MUBs and is not very symmetric. Moreover, it looks like we are wasting one out of four bits, since two measurements are along the same direction. However, all measurement directions in the Bloch sphere are mutually orthogonal, except the ones that coincide. The second \rac{4} code is based on a \mbox{SIC-POVM} and is very symmetric. However, it appears that in this case we are wasting two out of $16$ strings, since the way we encode them does not influence the success probability.
Now, if we compare the success probabilities of both \rac{4} codes (see Table~\ref{tab:ComparisonOfProbabilities}), we see that the first one is clearly better. Hence we conclude that
\begin{center}
\emph{orthogonality} of the measurement Bloch vectors \\
seems to be more important than \emph{symmetry}.
\end{center}
One can come to a similar conclusion when comparing \rac{9} and \rac{15} codes. Thus it looks like using roughly $n/3$ measurements along each coordinate axis is quite a good heuristic for constructing \rac{n} QRAC with SR (see Sect.~\ref{sect:Lower2}).
\section{Conclusion} \label{sect:Conclusion}
\subsection{Summary} \label{sect:Summary}
We study the \emph{worst} case success probability of random access codes with shared randomness. Yao's principle (see equation (\ref{eq:Yao}) in Sect.~\ref{sect:Yao}) and input randomization (see Theorem~\ref{thm:SRtoPure}) are applied to consider the \emph{average} case success probability instead (this works in both classical and quantum cases).
In Sect.~\ref{sect:ClassicalBound} we construct an optimal \emph{classical} \rac{n} RAC with SR as follows (see Theorem~\ref{thm:OptimalClassical}): Alice XORs the input string with $n$ random bits she shares with Bob, computes the majority and sends it to Bob; if the \mbox{$i$th} bit is requested, Bob outputs the \mbox{$i$th} bit of the shared random string XORed with the received bit. The asymptotic success probability of this code is given by equation (\ref{eq:Approx}) in Sect.~\ref{sect:ClassicalBound}:
\begin{equation}
p(n) \approx \frac{1}{2} + \frac{1}{\sqrt{2 \pi n}}.
\end{equation}
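The exact average success probability of this majority-based protocol can be computed by brute-force enumeration for small odd $n$ (odd $n$ avoids ties in the majority vote) and compared with the asymptotic formula; the function name below is ours:

```python
from itertools import product
from math import pi, sqrt

def classical_p(n):
    """Average success of the majority-vote classical RAC, by exact enumeration."""
    total = 0
    for x in product((0, 1), repeat=n):
        m = 1 if sum(x) > n / 2 else 0   # majority bit Alice sends (n odd: no ties)
        total += sum(1 for bit in x if bit == m)
    return total / (n * 2 ** n)

assert classical_p(3) == 0.75            # the known optimal classical 3->1 value
for n in (3, 5, 7, 9):
    approx = 0.5 + 1 / sqrt(2 * pi * n)  # asymptotic formula above
    assert abs(classical_p(n) - approx) < 0.03
```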
The worst case success probability of an optimal \emph{quantum} RAC with SR satisfies the following inequalities:
\begin{equation}
\frac{1}{2} + \sqrt{\frac{2}{3 \pi n}} \leq p(n) \leq \frac{1}{2} + \frac{1}{2 \sqrt{n}}.
\end{equation}
These upper and lower bounds are obtained in Sects.~\ref{sect:UpperBound} and \ref{sect:Lower1}, respectively.
The success probabilities of classical and quantum RACs with SR are compared in Fig.~\ref{fig:Comparison}.
\image{0.9}{PlotComparison}{Comparison of success probabilities of classical and quantum RACs}{Comparison of success probabilities of classical and quantum RACs. Black dots correspond to optimal classical RACs and the dotted line shows the asymptotic behavior. Circles correspond to numerical QRACs and dashed lines to quantum upper and lower bounds, respectively.}{fig:Comparison}
\subsection{Open problems for \rac{n} QRACs} \label{sect:OpenProblems}
\emph{Lower bound by orthogonal measurements.} The known \rac{2} and \rac{3} QRACs (see Sect.~\ref{sect:KnownQRACs}) and our numerical \rac{4} and \rac{9} QRACs with SR (see Sects.~\ref{sect:QRAC4} and \ref{sect:QRAC9}) suggest that MUBs can be used to obtain good QRACs (see Sect.~\ref{sect:Discussion}). Indeed, \rac{n} QRAC with orthogonal measurements (see Sect.~\ref{sect:Lower2}) is better than the one with random measurements (see Sect.~\ref{sect:Lower1}). However, we were not able to obtain an asymptotic expression for its success probability. This is equivalent to obtaining an asymptotic expression for (\ref{eq:CubicDistance}), i.e., the average distance traveled by a random walk with roughly $n/3$ steps along each coordinate axis.
In Fig.~\ref{fig:Optimality} we show how close both lower bounds and the success probabilities of numerical QRACs are relative to the upper bound from Sect.~\ref{sect:UpperBound}. Assume that Alice and Bob are given a point in the light gray region in Fig.~\ref{fig:Optimality} and asked to construct a QRAC with SR whose success probability is at least as good. Then they can use measurements along the coordinate axes as in Sect.~\ref{sect:Lower2}. If the point is in the dark gray region, they can use one of the numerical codes from Sect.~\ref{sect:Numerical}. However, if it is in the white region, they have to solve the next open problem.
\emph{Optimality of numerical codes.} Prove the optimality of any of the numerically obtained \rac{n} QRACs with SR for $n \geq 4$ discussed in Sect.~\ref{sect:Numerical}. Are the optimal constructions unique (up to isomorphism)?
\emph{Prove the ``Homer conjecture''} that quantum RACs with SR are at least as good as their classical counterparts in the sense discussed at the end of Sect.~\ref{sect:OptimalEncoding}.
\image{1.0}{PlotQuantum3}{\mbox{Close-up} of the region between the quantum upper and lower bound}{\mbox{Close-up} of the narrow region in Fig.~\ref{fig:Comparison} between the quantum upper and lower bound (everything is shown relative to the upper bound that corresponds to the horizontal axis). Circles indicate the gap between the upper bound and numerical QRACs with SR. Black squares show the gap between the upper bound and the lower bound by measurements along coordinate axes (see Fig.~\ref{fig:OrthBound}). Dashed line corresponds to the gap between the quantum upper bound and the lower bound by random measurements.}{fig:Optimality}
\subsection{Possible generalizations} \label{sect:Generalizations}
There are several ways that random access codes with SR can be generalized, both classically and quantumly. In particular, one can consider
\begin{itemize}
\item \racp{n} codes in base $d$, $d > 2$ (called \emph{qudits} in the quantum case),
\item \racm{n}{m} codes with $m > 1$,
\item \racm{n}{m} codes where any $k > 1$ bits (qubits) must be recovered.
\end{itemize}
Of course, one can consider several of these generalizations simultaneously. In the setting without shared randomness such generalizations have already appeared in the literature (see Sect.~\ref{sect:History}). We will briefly introduce the notion of the generalized Bloch vector which we believe can be useful to study such generalizations (it has been explicitly used in \cite{No41} to prove the impossibility of \racm{2^m}{m} QRAC with $p>1/2$, when SR is not allowed).
The notion of the Bloch vector introduced in Sect.~\ref{sect:BlochSphere} can be generalized for $d>2$. For example, to write down the density matrix for $d=3$ one uses eight \emph{\mbox{Gell-Mann} matrices} denoted by $\lambda_i$ instead of three Pauli matrices $\sigma_i$ defined in equation (\ref{eq:Pauli}). In general one needs $d^2-1$ matrices $\lambda_i$ that span the set of all traceless $d \times d$ Hermitian matrices. A convenient choice of $\lambda_i$ are the so called \emph{generalized Gell-Mann matrices}, also known as the \emph{generators of the Lie algebra of $SU(d)$}, given in \cite{SUGenerators}. We can use them to generalize equation (\ref{eq:Rho}):
\begin{equation}
\rho = \frac{1}{d} \left( I + \sqrt{\frac{d(d-1)}{2}} \; \vc{r} \cdot \vc{\lambda} \right),
\label{eq:GeneralRho}
\end{equation}
where $\vc{\lambda} = (\lambda_1, \dotsc, \lambda_{d^2-1})$ and $\vc{r} \in \mathbb{R}^{d^2-1}$ is the \emph{generalized Bloch vector}\footnote{Our normalization follows \cite{Positivity}, where the generalized Bloch sphere has radius $1$. Another widely used convention is to assume radius $\sqrt{2(d-1)/d}$, e.g., see \cite{Kimura, Kimura2}.\label{foo:Norm}} or \emph{coherence vector} \cite{Kimura, Positivity}. Since the $\lambda_i$ are chosen so that $\tr \lambda_i = 0$ and $\tr (\lambda_i \lambda_j) = 2 \delta_{ij}$, equation (\ref{eq:ScalarProduct}) generalizes to
\begin{equation}
\abs{\braket{\psi_1}{\psi_2}}^2 = \tr(\rho_1 \rho_2) =
\frac{1}{d} \bigl( 1 + (d-1) \; \vc{r}_1 \cdot \vc{r}_2 \bigr).
\label{eq:GeneralizedScalarProduct}
\end{equation}
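For $d=3$ the matrices $\lambda_i$ are the eight standard \mbox{Gell-Mann} matrices, and the two properties used above, $\tr \lambda_i = 0$ and $\tr (\lambda_i \lambda_j) = 2 \delta_{ij}$, can be verified directly (plain-Python sketch; the helper names are ours):

```python
from math import sqrt

c = 1 / sqrt(3)
# the eight standard Gell-Mann matrices as 3x3 nested lists
gm = [
    [[0, 1, 0], [1, 0, 0], [0, 0, 0]],
    [[0, -1j, 0], [1j, 0, 0], [0, 0, 0]],
    [[1, 0, 0], [0, -1, 0], [0, 0, 0]],
    [[0, 0, 1], [0, 0, 0], [1, 0, 0]],
    [[0, 0, -1j], [0, 0, 0], [1j, 0, 0]],
    [[0, 0, 0], [0, 0, 1], [0, 1, 0]],
    [[0, 0, 0], [0, 0, -1j], [0, 1j, 0]],
    [[c, 0, 0], [0, c, 0], [0, 0, -2 * c]],
]

def trace_prod(A, B):
    # tr(A B) for 3x3 matrices
    return sum(A[i][k] * B[k][i] for i in range(3) for k in range(3))

for i, A in enumerate(gm):
    assert abs(sum(A[k][k] for k in range(3))) < 1e-12  # traceless
    for j, B in enumerate(gm):
        t = trace_prod(A, B)
        assert abs(t - (2 if i == j else 0)) < 1e-12    # tr(l_i l_j) = 2 delta_ij
```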
To recover a base $d$ digit, we perform a measurement in an orthonormal basis $\set{\ket{\psi_1}, \dotsc, \ket{\psi_d}}$ of $\mathbb{C}^d$. Since $\abs{\braket{\psi_i}{\psi_j}}^2 = 0$ for any pair $i \neq j$, the corresponding Bloch vectors must satisfy $\vc{r}_i \cdot \vc{r}_j = -\frac{1}{d-1}$. This means that they are the vertices of a regular simplex that belongs to a \mbox{$(d-1)$-dimensional} subspace and is centered at the origin (for $d=2$ this is just a line segment).
On the other hand, in Sect.~\ref{sect:Discussion} we observed that it might be advantageous to perform measurements along orthogonal directions in the Bloch sphere to recover different bits. Let $\vc{r}_i \perp \vc{s}_j$ be two orthogonal Bloch vectors. Then the corresponding quantum states $\ket{\psi_i}$ and $\ket{\varphi_j}$ must satisfy $\abs{\braket{\psi_i}{\varphi_j}}^2 = \frac{1}{d}$. This is exactly the case when $\ket{\psi_i}$ and $\ket{\varphi_j}$ belong to \emph{different} mutually unbiased bases (see Sect.~\ref{sect:Discussion}). This suggests that distinct bits should be recovered using mutually unbiased measurements. Note that the Bloch vectors of the states from two MUBs correspond to the vertices of two regular simplices in mutually orthogonal subspaces. In general, the Bloch vectors of the states from all $d+1$ MUBs are the vertices of the so-called \emph{complementarity polytope} \cite{ComplementarityPolytope}, which is just the octahedron when $d=2$.
The conclusion of Sect.~\ref{sect:Discussion} and our discussion above suggest the use of MUBs to construct QRACs also for $d>2$. Such attempts have already been made \cite{Galvao,Severini}. Galv\~ao \cite{Galvao} gives an example of \racx{2}{0.79} QRAC for qutrits ($d=3$) and Casaccino et al. \cite{Severini} numerically investigate \rac{(d+1)} QRACs based on MUBs for \mbox{$d$-level} quantum systems. However, there is a significant difference between the qubit and qudit case. Recall that for $d=2$ the optimal way to encode the message $x$ is to use a unit vector in the direction of $\vc{v}_x$ (see equation (\ref{eq:vx}) in Sect.~\ref{sect:OptimalEncoding}). A similar expression for $\vc{v}_x$ can be obtained when $d>2$, but then the matrix $\rho$ assigned to $\vc{r} = \vc{v}_x/\norm{\vc{v}_x}$ according to equation (\ref{eq:GeneralRho}) is not necessarily \emph{positive semidefinite} and hence may not be a valid density matrix. However, it is known that for small enough values of $\norm{\vc{r}}$ (in our \mbox{case$^\text{\ref{foo:Norm}}$} $\norm{\vc{r}} \leq \frac{1}{d-1}$), \emph{all} Bloch vectors correspond to valid density matrices \cite{Kimura2}. Hence, if we cannot use the pure state corresponding to $\vc{v}_x/\norm{\vc{v}_x}$, we can always use the mixed state corresponding to $\frac{1}{d-1} \vc{v}_x/\norm{\vc{v}_x}$. If one knows more about the shape of the region corresponding to valid quantum states, one can make a better choice and use a longer vector, possibly in a slightly different direction. Unfortunately, apart from convexity, not much is known about this shape. Already for $d=3$ it is rather involved \cite{Kimura, Kimura2}. In general the conditions (in terms of the coordinates of the generalized Bloch vector $\vc{r}$) for $\rho$ to have \mbox{non-negative} eigenvalues are given in \cite{Positivity, Kimura}.
However, for proving only an upper bound, one can ignore all such details. Thus we believe it might be possible to generalize our upper bound (see Sects.~\ref{sect:UpperBound} and \ref{sect:GeneralUpperBound}) using generalized Bloch vectors. It would be interesting to compare such a result with the upper bound (\ref{eq:HypercontractiveUpperBound}) that was obtained by \mbox{Ben-Aroya} et al. in \cite{Hypercontractive}.
Finally, another way of generalizing QRACs with SR is to add other resources. A good candidate is \emph{shared entanglement}.
\subsection{Acknowledgments}
Most of these results were obtained in the summer of 2006 during the undergraduate summer research program at the University of Waterloo. We would like to thank University of Waterloo and IQC for their hospitality. We would also like to thank Andrew Childs and Ashwin Nayak for many useful comments on the preliminary version of this manuscript. In particular, the simplification of the proof of Lemma~\ref{lm:Sv} is due to Ashwin.
\section{Introduction}
Personas are a great way of raising awareness of user needs throughout a project. A
persona is a rich description of a potential user of your system and consists of several
stereotypical traits, such as preferences, needs, attitudes, habits, and desires, composed
into a realistic, but fake, person with a name and picture. Instead of arguing for the
needs of a generic user that can morph from being a complete novice in one situation to an
experienced expert in another, stakeholders ground themselves in the facts of the personas in
front of them. The idea is that the resulting system will be more successful when it is
designed with specific users in mind rather than a vague idea of an average user.
Since personas help focus on the concrete facts about your potential
users, it would be beneficial if the needs of people with disabilities were included among
the personas. While this has been encouraged and used in different places, there is little
information on how one should create personas with disabilities. This results in a risk of
creating personas that do not capture the needs of people with disabilities or that are
based on incorrect information. We introduce a methodology for creating personas with
disabilities that minimizes this risk by providing ways of collecting information and
opinions from the people with disabilities and taking into account the different Assistive
Technology (AT) they use. This methodology can be easily included in a standard persona
creation methodology, contributing to systems that are designed for all.
\section[State of the Art]{State of the Art}
\label{SOTA}
Input from users is important to ensure that new systems can be usable and cover their
needs. Participation from the users is one of the principles behind Scandinavian Design
\cite{Ehn1993}. Yet, having constant access to users is resource-intensive and may be too
costly for a project. Using personas helps ensure that users' perspectives are included.
Lindgren, Chen, Amdahl, and Chaikiat \cite[p.~461]{Lindgren2007} describe personas as
``\ldots a hypothetical archetype of real users described in great detail and defined by
their goals and needs, rather than just demographics.'' Personas are usually presented in
a persona description, this usually includes a picture of the persona; background
information including family, tasks, motivations, attitudes to technology and technical
knowledge.
Personas were popularized by Cooper \cite{Cooper1999} in his book \emph{The Inmates are Running the Asylum}.
Cooper presents personas as a way to include viewpoints from different user groups without
falling into the trap of using a generic user. He stresses that designing something for
specific people will be more successful than trying to create something that works for
generic users. Cooper then uses the persona technique to aid in designing an entertainment
system for an airline. While one of the personas is an elderly man with some vision
problems, there is little information about how Cooper created these personas.
Personas are normally used to keep the focus on the users throughout the project, but it is also
possible to find out about your users based on the personas you have created. Chapman,
Love, Milham, ElRif, and Alford \cite{Chapman2008} have demonstrated a novel use of
personas. They took personas' properties and found how prevalent each property was among their user groups.
To ensure that systems we design are accessible, we should also create personas that have
disabilities. Zimmermann and Vanderheiden \cite{Zimmermann2007} point out that using
personas with impairments help make the accessibility requirements real. Using personas
with disabilities gives us an opportunity to include their needs without the resource
intensive task of recruiting disabled people for all stages of the project. Although he
does not provide detailed instructions on how to create personas, Henry \cite{Henry2007}
encourages the use of personas with disabilities in the design of ICT, but reminds us that
everything that is true for one person with one disability is not necessarily applicable
to all people with that disability. This advice, however, is useful for all kinds of
personas.
The use of personas with disabilities has gained traction in the industry and in several
recent EU research projects like ACCESSIBLE \cite{Isacker2008a} and ÆGIS \cite{Isacker2008}. The Ubuntu operating system
has also adopted personas to guide its accessibility development \cite{Bell2011}. These projects
have provided their personas online. Others have encouraged people to use these specific
personas for other projects \cite{Korn2010}. This may seem like a shortcut for creation, but recycling
personas is not recommended \cite{Pruitt2006}. This is because an equally important aspect
is to engage the stakeholders and the development team, to let them get to know the
personas and to empathize with them. This part is lost when recycling personas from
another project. Knowledge of creating personas can be recycled, however.
Pruitt and Grudin \cite{Pruitt2003} list several problems with personas in projects: the personas are not believable because they are not based on real data; they are not presented well and therefore not used; there is no understanding of how to use the personas; and only part of the group is interested in using personas while there are no resources to make the personas come alive. There are several methods for combating these problems and we will present some in Section \ref{WorkAndResults}.
Even though personas have become a popular method for raising awareness of users'
needs in a project, it is important to remember that personas are \emph{not} a
replacement for actual users. That is, one should not create personas out of thin air to replace gathering
input from users. This may allow stakeholders to think about the
users, but it will be problematic when creating personas with disabilities. This is because
many people have misconceptions about how people with disabilities interact with
technology and this can lead to these personas having extra powers or disadvantages that
they may not have. As pointed out by Grudin and Pruitt \cite{Grudin2002}, personas can be used poorly, and for most people ``\ldots a more solid foundation will prove necessary.'' In particular, Grudin and Pruitt recommend basing personas on user research and facts.
\section[Methodology]{Methodology}
As mentioned in Section \ref{SOTA}, even though personas are fictional, they should be based on experiences with and
information from real users \cite{Calabria2004}. We underscore that personas cannot replace contact with
real users altogether, but should rather be used as a supplement and as a way of keeping a
continual focus on the users throughout the project life-cycle. There are many ways that
one can go about collecting information about real users and, depending on the resources
available, selecting more than one method may be useful.
One way that can be useful for collecting information about users is by simply asking
them. This includes methods like using focus groups, interviews, and surveys. Observation
is another good method. As Gould \cite{Gould1995} points out, you may not have any idea
about what you need to know about users and their environment until you see them. It is
useful to study information from case studies and other user research. Market information may be another
source to consider including.
When it comes to recruiting people with disabilities, having contacts inside the user
organizations that support people with disabilities is a good start. User organizations
can contact members for you and provide opportunities for you to talk to users at meetings
or provide a location to host a focus group: a well-known location can make it easier for
people with visual or physical impairments to participate rather than traveling to your site, which is likely unknown to them.
Using surveys can also be a way of gathering information. An online survey can help you
reach a wider audience that might have been impossible or cost prohibitive otherwise, but
it is important that the tools for gathering the information are usable for your audience.
For example, using a web survey tool may create a survey that is inaccessible to people
using certain types of AT like screen readers \cite{Wentz2009}. A plain text email with
the questions may be an alternative for getting these voices heard. Getting more
information will help create more well-rounded personas and highlight different issues
that will need to be taken into account in the system.
Looking at the AT used by people with disabilities also helps in creating personas with
disabilities. Some personas will be using AT for accessing information. It is important to
know how these technologies work and how people work with them. It is vital that someone in
the design team has actual experience working with people with disabilities, either from
user tests or from teaching them to use technology. You should at least include people
with this kind of experience in the process of creating personas. One way to do this could
be to invite them to a persona workshop.
As an example of how to involve users, in the UNIMOD project \cite{UNIMOD2007} a navigation system was to be developed for drivers working at a rehabilitation center. A persona workshop was arranged on
the premises of the rehabilitation company. Employees at the company with ample
experience with the target population were invaluable during the persona workshop. As
various aspects of the target population were discussed during the persona creation
process, the employees could fill in with related real-life stories. The stories were
told in connection to discussions of traits, needs, attitudes, and habits of the various
personas. Later in the project, project participants remembered several of these stories;
they were referred to when using the personas, and they were useful for keeping the
personas alive.
As outlined by Pruitt and Adlin \cite{Pruitt2006}, a persona workshop gathers the stakeholders to
generate personas based on assumptions and factoids. Assumptions are quotes, opinions, or
goals that these potential personas would have along with a possible name for the persona.
Factoids are small facts or news items from literature, research, or data from your own
user research. During the workshop, participants read through the collected research and write out factoids (e.g., on post-it notes). The same can be done for assumptions.
Starting first with the assumptions, stakeholders build groups of similar assumptions to
see if any patterns emerge. This can be done digitally with mind-mapping software or on paper with post-it notes and a clear wall or whiteboard. The process is then repeated with the factoids,
usually resulting in a rearrangement of the groups or in new groups being created. These groups are the
starting point for creating persona skeletons.
Persona skeletons are the outlines of the actual personas. They consist of the assumptions
and factoids that were collected earlier, but they also are where sketches of information
about the personas start to emerge. One way of organizing this information is to use a
template with all the different areas of the final persona description. Start by filling
in information first as keywords and continue until you have fleshed out the entire
section. A mind map is another good way of creating the ``bones'' of the skeletons before
adding ``flesh.''
Once everyone agrees on the persona skeletons, writing up of the actual personas can
begin. The outcome is usually what most people see when they are presented with personas: the
persona description, as detailed in Section \ref{SOTA}. If the persona has a disability, this information is
also presented along with information about the AT the persona uses. Since others in the
project may not have an understanding of how a person with a disability works with an
AT, it may be necessary to include information about how a disability affects a persona or
how particular AT plays a part in the persona's life. After this, the personas are ready to
become active participants in the project.
How many personas should be created for a project? If we want to
aim for universal design, targeting the four main groups of
disabilities is a good start. That is, create personas with vision, hearing, movement, and cognitive
impairments. Yet, as mentioned in Section \ref{SOTA}, one should keep in mind that each of these impairment groups is
diverse and covers a wide range of abilities. Another option to consider is to create an
elderly persona. Elderly personas usually have a combination of several milder versions of
impairments from these groups. In our experience, we have found that three to six personas
is a manageable amount of work and covers important aspects of our target groups.
\section[Work and Results]{Work and Results}
\label{WorkAndResults}
This technique has been used in several of our projects. Currently we are using it in
researching the Internet of Things (IoT) and Ambient Assisted Living (AAL). We wanted to
ensure that the needs of users with disabilities were included in the requirements and
prototypes. For the IoT project, we wanted to examine the issues that people with vision
impairment and those with dyslexia have when interacting with the Internet of Things. Of
our five personas, one has twenty percent vision, another has dyslexia, and another is
elderly and has begun to suffer from mild dementia. We have documented the different AT
these personas use and tried to describe real issues. For example, our persona with vision
impairment uses a screen reader and magnifier, but has one version at work and another at
home; the different software results in our persona sometimes forgetting which keyboard
shortcuts work where.
The AAL project focuses on elderly people's use of mobile phones and getting help on them.
We want to make sure that we reach the largest possible group of elderly users. All
the personas for this project have a slight vision impairment and other disabilities like
hearing loss or problems remembering information. Since the project is about using mobile
phones and asking for assistance, we made sure that the elderly personas have similar
attitudes to technology and to learning new information that match the different focus
groups we held when gathering user requirements. This is also reflected in our personas'
choice of mobile phone.
We have found that including disabilities in the persona creation phase has helped in
raising awareness for universal design and accessibility both during the creation process
and in many other areas of the project. One of the most obvious places was in the creation
of the user scenarios. Our personas became the performers in these scenarios and it was
necessary to ensure that the different actions in the scenarios could be accomplished by
the specific persona. This bubbled up into later requirements work such as selecting
technology, defining use cases, and in recruiting informants for evaluations. It is
important to keep the personas in project participants' thoughts. This has been done in
different ways. Each month we get an email message with a story from one of the personas explaining an issue
that persona faced with technology or some other aspect that is related to the project.
The task of writing such a story is distributed among the project participants. The
process of creating these stories, and of describing in detail how a persona interacts with the
technology, may raise questions for the story's author. Is the story realistic for the actual persona? Would the
persona actually do it in this way? If the project participant authoring the story does not
have experience with how people with the particular type of disability interact with
technology, the story should be presented to someone who has this experience, or even
users themselves. The process of writing the story and getting it validated either by
experts or by users, helps to reveal potentially wrong assumptions among the project
participants. Because the process is creative and active, it encourages learning about the
issues this persona, and people with similar disabilities, have. Project participants also received gifts related to the personas, such as a chocolate
Advent Calendar with their pictures, reminding project participants about the personas
every day in December. Pruitt and Grudin \cite{Pruitt2003} list many additional ideas that can keep
personas alive.
Another valuable method to utilize the personas is to do \emph{persona testing} with
prototypes as an analog to user testing. Tasks for the personas to perform are created as
in a user test. The personas are divided between members of the project team according to
their experience and familiarity with the disability that the persona has. Then, the team
member acts as the persona while doing the tasks with the prototype. The more experience
the team member has, whether from user tests with people who have the persona's type of
disability or from training such users, the easier it is to give a realistic and credible
acting performance when persona testing the prototype. If none of the team members have this
experience, one should consider inviting someone who does. The person performing the
persona test may take notes, but we advise having another team member act as observer and
take notes during the persona testing. This approach is informal and relatively quick
to do. It can be done in between user testing with real users. We have also used persona
testing to pilot user tests, to identify potential problems that can be corrected before
the user test, and to see how many and what types of tasks would be fruitful to do in the
user test.
\section{Impact}
As more countries have started to create requirements that new ICT
targeting the public be universally designed or accessible, projects will increasingly
need to include the needs of people with disabilities. The methodology outlined above
is useful for others that want to include the perspectives of people with disabilities in
their work. It requires some initial work upfront to build competence and knowledge about
AT and people with disabilities, but this work would be needed in any sort of work for
universal design. Once this knowledge is acquired, it can easily be incorporated into any
persona creation process. Rather than using personas to replace user research, they can be used as a means to elicit knowledge and experience from people in your team or network who have experience with people who have the persona's type of disability.
\section{Conclusion}
We have presented the state of the art for persona creation and outlined a methodology for
creating personas with disabilities. In our own work, we have found that using this
methodology has helped raise awareness among partners about the needs of people with
disabilities and has ensured that the personas' needs are included in all steps in the
project. We hope that this methodology will result in more universally designed ICT and
that others will use this technique themselves. We have also found that it is important
to involve people in the project who have experience with how people with
disabilities use technology. These can be people with disabilities themselves, or others
who aid people with disabilities or who research issues in the universal design of ICT.
Including these people can only help ensure that a
project focuses on the needs for universal design.
Finally, it is not sufficient to simply create personas. They need to be used in order to
stay alive. This can include activities such as writing stories about events in a persona's
life, which remind everyone to keep these personas in mind in the work that they do. A
persona walkthrough using the proper AT is also a
concrete way to remind everyone about what type of users will actually be using the final
product or service. Following this advice should ensure that personas you create will
capture the needs of people with disabilities and capture the attention
of the project members.
\subsubsection*{Acknowledgments.}
This research is funded as part of the uTRUSTit project. The uTRUSTit project is funded by
the EU FP7 program (Grant agreement no: 258360).
\section{Introduction}
\label{S:1}
Most popular consensus protocols, such as Proof of Work (PoW) \cite{Nakamoto2008, satoshi_literature} and Proof of Stake (PoS) \cite{king2012ppcoin, on_stake} resemble a rent-seeking contest \cite{Tullock1980, Thum2018} for the right to create the next block. The contest usually consists of selecting a participant from a population of miners, proportional to a costly sacrifice they have made, such as hashrate in PoW or stake in PoS. The winner gains a monopoly to create the next block, meaning they are free to compose a block of any available transactions they wish, within certain protocol limits such as block size \cite{mastering-bitcoin}.
The high dependence of PoW mining on energy costs has led to a geographical concentration of miners in areas where energy is cheap, such as China \cite{kaiser2018looming}, while economies of scale and a high variance in rewards has led to a further concentration of power in the hands of operators of mining pools \cite{Gervais2014, Beikverdi2015, Arnosti2018}.
With this inherent progression towards centralisation, there is a rising threat that some users or transactions can be censored. This represents a threat to ``freedom of speech'' in the context of on-chain governance mechanisms and a security threat in the form of punitive and feather-forking attacks \cite{Bonneau2015, Mosakheil2018}. In addition, selective transaction censoring can lead to manipulation of, and security threats to, financial protocols such as decentralised exchanges \cite{censorship}.
As the security of a decentralised blockchain depends on the total mining rewards (transaction fees and block rewards) \cite{BiS, budish}, the ability to extract fees is essential to the system.
Allowing users to freely decide on the fee attached to their transaction is currently the most popular approach. Given each individual miner's incentive to maximise the fees they can extract with their block, this approach relies on restricting throughput on a protocol level to generate fees \cite{Huberman2017, Lavi2017}. If block producers act as profit maximisers and select transactions according to their fee, the most prevalent fee selection mechanism (used by, among others, Ethereum and Bitcoin) can be characterised as a generalised first price auction \cite{Nakamoto2008, Lavi2017, ethereum}. While numerous other mechanisms have been proposed, the monopoly of the block producer to select transactions heavily constrains the mechanisms that can be employed: unless the block producer is awarded one hundred percent of the fees, the producer can always gain by circumventing the protocol. Colluding with the block producer through a third-channel payment, a transaction sender can simply pay the producer the original fee $f - \epsilon$ directly while sending the transaction with the lowest possible fee $\epsilon$ to the network.
On the other side of the spectrum we can find algorithmic fee setting mechanisms. The main difficulty with these approaches is that they have to internalise the externalities a transaction is causing, which include system security, usage of computation, bandwidth and storage as well as decentralisation.
The propagation time of blocks, for sizes larger than 20kB, is almost linear in their size \cite{Decker2013}. It can therefore be more profitable to mine empty blocks in situations where the block reward is much larger than the sum of the transaction fees and there is competition between soft forks, such as in Bitcoin \cite{kaiser2018looming, propagation-cost}. This dynamic lowers the efficiency of the system and should therefore be mitigated in protocols aiming to maximise throughput. One method to address this issue, as well as censorship, could be to increase the cost of creating a sub-optimal block by scaling the producer's rewards by the proportion of transactions included in their block.
Another problem with the incentive structure of many blockchain systems, such as Bitcoin and Ethereum, is the lack of incentive for nodes to propagate transactions \cite{Babaioff2011}. Information sharing is carried out by the nodes in these networks, which may or may not additionally participate in mining, and is not rewarded. This is therefore only sustainable when the cost of running a node is sufficiently low so that the system is able to maintain a large population of nodes relative to miners. In directed acyclic graph (DAG) protocols the dynamic is slightly different as nodes have an incentive to gossip transactions that build upon their own \cite{iota, byteball}.
The problems of censorship and limited fee-pricing algorithms are to a large part rooted in the monopoly over block construction. Here we propose a new way to collaboratively pack a block through the use of a DAG, and then discuss the implications and benefits of minimising the block proposer's freedom.
\section{Literature review}
\label{S:2}
The Prism consensus protocol \cite{Prism} aims to maximise the transaction throughput of a decentralised payment system by parallelising the tasks of collecting transactions, producing blocks and reaching finality on the blockchain. Although the idea of separating these processes has appeared in other protocols \cite{fruitchains, algorand, inclusive-blockchains}, very few completely divide all three. The Prism proposal shares many similarities with our model described in Section \ref{S:3}. In particular, it also maximises the throughput of the system by incentivising the sharing of transaction sets, in its case in the form of transaction blocks, alongside the block production, allowing block producers to propagate smaller final blocks. However, the protocol does not use this to limit the censorship power of the block producer, who is still free to create an empty block or a block that contains only transactions of their own choice.
The model described in the next section builds upon protocols utilising a decentralised, round-based random seed, which has been implemented in many protocols \cite{ouroboros, algorand, dfinity}, through the introduction of a DAG. Here we focus on the DFinity protocol, which makes use of a verifiable random function (VRF) \cite{VRF} to enable the notarisation of blocks by a committee in each round. The system is designed to accommodate high throughput while being resistant to various common attack vectors. Double-spend attacks involving the withholding of blocks, such as Finney attacks \cite{finney}, are defended against in these systems by requiring the notarisation of each block by a random committee in each round. The difficulty of such an attack is therefore increased, as the attacker is now required to control 51\% of the committee while also holding the highest-ranking block proposal. The protocol also detects attacks involving network partitions and resolves these by either pausing the protocol, preventing further blocks from being generated, or only continuing on the majority branch \cite{dfinity}.
The consensus protocols of IOTA \cite{iota} and Byteball \cite{byteball} also make use of a DAG as the structure naturally scales to accommodate a high transaction throughput. In these protocols, users append their own transactions onto the graph and in the process validate the transactions to which their transaction is attached. In this way, IOTA is able to offer zero nominal transaction fees\footnote{However, note that to submit transactions IOTA users must solve a PoW problem, which is a cost on the transactor and serves a similar purpose to fees.} and the security of the system is maintained directly by its users. However, despite its advantages, the DAG only defines a partial ordering of transactions and therefore additional layers in the protocol are required to establish a strict ordering of transactions. The strict ordering is necessary to implement smart contracts and both those protocols rely on trusting a third party to do so.
In the next section we show that by using the DAG, in combination with the traditional chain structure, we can obtain a strict ordering of transactions without relying on trusted parties.
\section{Minimal agency consensus}
\label{S:3}
We propose a model that constrains the monopolistic power of a block proposer.
The fundamental data structure that is used to record transaction events is a DAG, which we denote as $G=(V, E)$, where $v_i \in V$ are the vertices that contain collections of transaction hashes and $E$ represents the set of edges. In this model each vertex has two outgoing edges\footnote{In general, the vertex out-degree $k_{\mathrm{out}}\ge 2$. The lower bound is chosen to minimise additional storage requirements.}, which are hash-references to previous vertices. The tips of the DAG are the vertices with no incoming edges. A DAG provides partial ordering in the sense that the children of a specific vertex $v_i$, i.e. the set of vertices that are directly or indirectly referenced by $v_i$, occurred with certainty in its past.
The ledger uses a decentralised random beacon (DRB) as a mechanism to elect a committee $\mathcal{C}$ that collectively agree on the next block to be published, similarly to other protocols that use a DRB \cite{dfinity}. The value of the random beacon, $s_r$, for the current round is also used to define a ranking of staked nodes in the network which specifies their priority for being the publisher of a block in the current round. We refer to members of this group as vertex-proposers. Furthermore, in our model the random beacon selects a set $\mathcal{A}$ of vertex-attachers. Each attacher has the right to append one vertex to the DAG per round, containing a list of hash-references to all transactions not yet included in any vertex in its past.
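The priority ranking derived from the round seed can be sketched as a deterministic ordering of node identities. The following minimal Python sketch uses a plain hash for clarity; the node IDs and the function name are illustrative assumptions, and a real implementation (e.g. DFinity's) would use a VRF so that nodes can prove their rank:

```python
import hashlib

def rank_nodes(seed: bytes, node_ids):
    """Rank staked nodes for a round: lower digest = higher priority.

    Sketch only -- protocols like DFinity use a verifiable random
    function rather than a plain hash, so ranks are provable.
    """
    def priority(node_id: str) -> bytes:
        return hashlib.sha256(seed + node_id.encode()).digest()
    return sorted(node_ids, key=priority)

# ranking[0] would be the highest-priority vertex-proposer this round
ranking = rank_nodes(b"round-42-seed", ["alice", "bob", "carol", "dave"])
```

Because every participant can recompute the same ranking from the published seed, no coordination round is needed to agree on proposer priority.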
At the end of a block round, the highest ranked vertex-proposers can propose a number of vertices (and thereby transactions) to be packed into the next block, which are then notarised by the committee based on the proposer's rank. In particular, each vertex-proposer signs the following information: a proposed set of vertices, previous block hash, and the hash of the Merkle tree of transactions in the block.
The signed proposals are propagated around the network and all committee members notarise the highest ranked proposals they receive. Under certain assumptions, the protocol can then guarantee finalisation of a notarised proposal after two rounds, based on the overall ranking of the blocks in the different chains \cite{dfinity}. Once a proposal is finalised, each participant can create the block of packed transactions, defined from the proposed vertices, and can discard that part of the DAG structure that was included in the block, creating very low additional memory requirements for participants.
Given the chosen vertices, further protocol-level constraints, e.g. on price or block size, can then be imposed on the transactions if desired. Transactions covered by the vertex-proposal that were not included in the block can then be carried over to the next round, be included again in new vertices, or be rejected (leaving it to the sender to re-send them).
To maximise system throughput with the DAG, the protocol has to incentivise:
\begin{itemize}
\setlength\itemsep{0em}
\item[i)] attachers to build a compact DAG, which allows the proposers to maximise the descendants of any proposed set of vertices
\item[ii)] the proposers to select the vertices maximising the number of descendants
\end{itemize}
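Constraint ii) amounts to picking the proposal whose covered set, i.e. all vertices it directly or indirectly references, is largest. A minimal Python sketch (the vertex names and the toy DAG are our own illustration, not part of the protocol):

```python
def past_cone(parents, proposal):
    """All vertices directly or indirectly referenced by the proposed set
    (the vertices 'covered' by a proposal). `parents` maps each vertex to
    the list of earlier vertices it hash-references."""
    seen = set(proposal)
    stack = list(proposal)
    while stack:
        v = stack.pop()
        for p in parents[v]:
            if p not in seen:
                seen.add(p)
                stack.append(p)
    return seen

# toy DAG: each vertex hash-references earlier vertices ("g" is genesis)
parents = {"g": [], "a": ["g"], "b": ["g"], "c": ["a", "b"], "d": ["c", "b"]}

# tips = vertices with no incoming edges; pick the tip covering the most
tips = [v for v in parents if all(v not in ps for ps in parents.values())]
best = max(tips, key=lambda t: len(past_cone(parents, {t})))
```

With honest attachers building a compact DAG, the tip with the largest past cone covers (almost) all vertices of the round, so this selection rule approximates fee maximisation without giving the proposer free choice over individual transactions.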
The minimal agency of the vertex-proposer is enforced by the collaborative construction of the DAG, which links together groups of transactions, and by incentivising vertex selection according to ii). This latter constraint can be imposed in various ways, each with a different cost of censorship to the vertex-proposer, as discussed in Section \ref{S:4}. Note that, given honest actors, ii) should be almost equivalent to selection by fee maximisation (with differences only stemming from some participants not yet being aware of certain transactions).
\section{Results}
\label{S:4}
In this section we detail previously unenforceable strategies that can be employed with our model and how they can overcome the shortcomings outlined in section \ref{S:1}.
\subsection{Fees}\label{S:fees}
Fees in decentralised blockchain systems have two major purposes: first, to secure the system by encouraging sufficient mining activity for the cost of an attack to be high enough to deter it, and second, to distribute new tokens. Here we are concerned with the ability to efficiently extract fees to provide the necessary security. In this section we will deal with the more difficult scenario where additional, inflation-financed, block rewards are not present. Several of the most popular cryptocurrencies, including Bitcoin and Ethereum~1.0, are based on a fixed total token supply, meaning any inflation can only be temporary and will eventually need to approach zero. Unless the system operates at its absolute transaction limit, the externalities between a miner's incentive to maximise their block reward and the social good of system security\footnote{Further externalities stem from the requirement that all (future) miners will have to compute and store the transaction, while the reward is attributed only to the block producer.} imply that we cannot let the free market decide on both block size and price of transactions \cite{freemarket-fees}. To account for these inefficiencies, the protocol has to impose some form of constraint by limiting either the miner's ability to choose a block size or the users' ability to set a price.
The prevailing approach, used in protocols such as Bitcoin and Ethereum, is to set an explicit absolute limit on the block size and let the users freely bid for a place in a block, resulting in what is effectively a first price auction\footnote{In practice, users are motivated by the time required for the transaction to be entered into a block. This wider competition for space in blocks that are being produced at a constant rate has been shown to be a Vickrey-Clarke-Groves mechanism.}. The literature around this pricing mechanism and its application in blockchain technology has identified a number of weaknesses and alternative mechanisms. However, a block producer's freedom to select transactions, combined with the existence of third-channel payments, renders many of these mechanisms unenforceable. We will now discuss some of the most common ones and show how our proposal clearly extends the set of pricing mechanisms that can be employed.
\textit{Decoupling mining rewards:} when the Bitcoin protocol fully transitions to transaction fee-based block rewards, miners may no longer be incentivised to mine on the longest chain. Intuitively, this arises when the shorter alternative chain includes blocks that contain lower total transaction fees. The availability of a larger number of high-value transactions means that mining a block on the shorter chain can lead to a higher reward for the miner. This can decrease chain stability due to increased forking, lead to equilibria with partially empty blocks, and increase the gains from selfish-mining strategies \cite{decouple-block-reward}. By decoupling miners' rewards from their blocks, this misalignment of incentives can be removed. This idea has been explored by a few cryptocurrencies, such as Ouroboros \cite{ouroboros} and Fruitchains \cite{fruitchains}, in which miners are instead rewarded based on the block rewards of a sequence of $Q$ blocks. While this solves the outlined problem by making miners indifferent as to which block contains a particular transaction (within the defined window $Q$), it suffers from two other issues. First, it reduces the marginal return of a miner with regard to a specific transaction, reducing the incentive to include the highest-paying transactions and to build blocks with the highest possible reward (with $Q$ approaching infinity, the miner would become increasingly indifferent between creating an empty block and a block full of transactions). Second, the block producer can now profit from colluding with transaction senders: instead of sending a transaction with a fee $f$, the sender can directly pay the producer $f-\epsilon$ and send the transaction to the network with a negligible fee $\epsilon$. As the proposer cannot freely choose the transactions in the block, this type of indirect payment cannot be prevented.
\textit{Share mining rewards between different functions:} Many protocols have developed more complex differentiation between different network functions, such as block proposing and voting \cite{Prism}, or including off-chain blocks \cite{inclusive-blockchains, GHOST}. In many cases, it would be desirable to directly link these functions to the block fees and in the absence of inflation based block rewards this is a necessity. However, by awarding non-block producing roles with a share $x$ of the rewards, we reduce the reward of the producer to $(1-x)$ of the full block reward. This approach reduces the incentive for including transactions in a block, similarly to the decoupling approach, and leads to the same outcome where a block producer can gain by colluding with a transaction sender through third-channel payments.
\textit{$k^{\mbox{\scriptsize th}}$ price auctions:} first price auctions require complex and inefficient strategies to deduce a good bid \cite{auction-anarchy}. A classical alternative is the $k^{\mbox{\scriptsize th}}$ price auction: as a higher bid does not marginally affect the resulting fee, the simple optimal strategy is to bid one's true willingness to pay \cite{Lavi2017}. The reason this has not been implemented in existing blockchains is its vulnerability to collusion: a block producer can bribe a transaction sender who is willing to pay a fee $f$ to instead send their transaction with a fee $f + \Delta$ and privately refund them with the bribe $\Delta$. This raises the price for everyone else while still reaping fees from the low-paying transactions\footnote{Even simpler: instead of excluding some low-fee transactions, the block producer could include dummy transactions to themselves with high transaction fees to raise the price level to a (for themselves) more optimal level.} \cite{BlockchainResourcePricing}. While this strategy does not depend on the block producer's monopoly, it quickly becomes costly if the block producer does not receive 100\% of the transaction fees of the block. This in turn depends on limiting the producer's freedom, as outlined above.
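The difference between the two payment rules can be illustrated on toy bids (the numbers, function names, and block size are purely illustrative):

```python
def first_price(bids, k):
    """Top-k bidders are included in the block; each pays their own bid."""
    winners = sorted(bids, reverse=True)[:k]
    return sum(winners)

def kth_price(bids, k):
    """Top-k bidders are included; all pay the (k+1)-th highest bid,
    so bidding one's true willingness to pay is the optimal strategy."""
    ranked = sorted(bids, reverse=True)
    clearing = ranked[k] if len(ranked) > k else 0
    return clearing * min(k, len(ranked))

bids = [9, 7, 5, 3, 2]               # fees users are willing to pay
rev_first = first_price(bids, k=3)   # 9 + 7 + 5 = 21
rev_kth = kth_price(bids, k=3)       # 3 * 3 = 9
```

The gap between the two revenues (21 vs. 9 here) is exactly what motivates the bribery strategy above: by inflating one bid, the producer raises the clearing price paid by every included transaction.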
\textit{Algorithmic pricing:} the alternative to limiting the block size and allowing users to decide the transaction fee they bid is to instead prescribe an algorithm on the protocol level that determines the price of a given transaction. While some discussions of this exist \cite{BlockchainResourcePricing, Prism}, we are not aware of any implementations. The main difficulty stems from finding a price that reflects the externalities a transaction imposes on the system. These externalities include, but are not limited to: security, processing and bandwidth cost for all nodes currently online, and the long-term storage cost of maintaining the state for every node. Crucially, due to these externalities, the optimal price depends not only on characteristics of the transaction itself (such as byte size or, in the case of a smart contract, complexity), but on all the other transactions submitted to the system. While this direction is promising, it still requires further research. A potential benefit of our proposed protocol could be that, by first establishing consensus on collectively added transactions, all participants essentially create a ``common memory pool'', on top of which pricing algorithms could be applied. This might enable the algorithm to take into account statistics that would otherwise only exist outside of the protocol, such as the current demand and congestion.
\textit{Empty blocks:} In situations where (inflation-financed) block rewards make up the largest part of the block reward, the race between block producers, and the resulting risk of a block becoming an orphan, can make it more profitable to publish smaller blocks that propagate faster than full blocks that include the additional transaction fees \cite{kaiser2018looming, propagation-cost}. While this maximises the miner's rewards, it is obviously socially undesirable. By constraining the proposer's ability to select the transactions and decoupling the attachers' incentives from the propagation time of the block, this behaviour can be made unprofitable.
\subsection{Censorship}
If we assume that the attachers of vertices have a different identity to the proposers or notarizers of the DAG, the exclusion of a specific transaction requires the proposer to forgo the fees of all vertices that build upon vertices containing this transaction. However, anonymity means that a participant can create multiple identities and therefore participate in both groups or, in absence of this, attachers and proposers could collude to attach alternative vertices including all but the censored transaction\footnote{An alternative would be to restrict the size of a vertex, but avoiding hard-coded rules is generally desirable.}. This would allow the proposer to simply ignore the vertices appended by other attachers, bringing us back to a monopoly over the next block.
To prevent this situation from occurring, and to make the proposals a fair representation of the collectively built vertices, it makes sense to provide incentives for proposers to maximise the number of descendants of the selected vertices\footnote{In protocols where each identity represents a fixed stake, as required for many VRFs, this coincides with the maximisation of the total stake associated with the DAG.}. As noted earlier, this is still almost equivalent to maximising the fees covered, but strips away the proposer's power to decide alone on the block content.
Given a global view of the DAG, this objective can easily be enforced. However, abandoning the synchronicity assumption, this becomes non-trivial as we cannot ensure that every participant will reach the same conclusion about the optimal decision within a finite time. Nonetheless, we can implement approximations of this selection rule. In doing so, we differentiate between soft (incentive based) and hard constraints:
\begin{itemize}
\item[a)] Hard constraint: impose a function $f(n_{vertices}, x)$ that any proposal has to satisfy, where $n_{vertices} = |\mathcal{A}|$ is the theoretical maximum number of vertices that could be covered. The main difficulty in this approach is that the effective maximum in a given round varies with the structure of the DAG in that round. The function $f$ can either simply require the number of descendants to be at least a minimum percentage of $n_{vertices}$, or be a more complex approximation of the effective maximum. Critically, it can only depend on further inputs $x$ that are verifiable even with a partial view of the DAG. While a very promising direction, defining a robust rule with guaranteed and non-interactive resolution under all circumstances will require further research.
\item[b)] Soft constraint: reward the proposer proportionately to $$\delta = \frac{n_{descendants}}{n_{vertices}}\,,$$ where again $n_{vertices} = |\mathcal{A}|$, and $n_{descendants}$ is the number of descendants of the proposal. The cost of circumventing the DAG then grows quickly with the depth of any vertex the proposer wants to ignore. Note that this introduces an element that is outside of the proposer's control, as the best $\delta$ achievable in a given round depends on both the attachers and the network delays that occurred in this round. This does come at the cost of increased volatility of an individual proposer's rewards. However, since the rewards can now be distributed more broadly, the overall volatility can still be lower when the different roles of a staker are taken into account.
\end{itemize}
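As a minimal illustration of the soft constraint (the DAG encoding, vertex names, and helper functions are our own, not part of the protocol specification), the reward factor $\delta$ of a proposal can be computed as follows:

```python
def past_cone(dag, tips):
    """Vertices covered by a proposal: everything reachable via links
    (the 'descendants' of the selected vertices, in the text's terminology)."""
    seen, stack = set(), list(tips)
    while stack:
        v = stack.pop()
        if v not in seen:
            seen.add(v)
            stack.extend(dag[v])
    return seen

def delta(dag, tips):
    """Soft-constraint reward factor: n_descendants / n_vertices."""
    return len(past_cone(dag, tips)) / len(dag)

# Toy round with seven vertices, each mapping to the vertices it links to.
# A proposal selecting "g" covers the whole DAG; one selecting only "c"
# covers a small fraction, and would earn a proportionately smaller reward.
dag = {"a": [], "b": [], "c": ["a"], "d": ["a", "b"],
       "e": ["c", "d"], "f": ["d"], "g": ["e", "f"]}
print(delta(dag, ["g"]))  # 1.0
print(delta(dag, ["c"]))  # 2/7, roughly 0.29
```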
While both types of constraints ensure that a profit-maximising agent will choose to maximise the number of descendants and thereby enable the advantages outlined in section \ref{S:fees}, only the hard constraint makes it impossible to ignore specific vertices within the DAG: under the soft constraint, an attacker's gain from censorship may exceed the economic loss of the rewards from the round.
A third alternative would be a competitive protocol that optimises the throughput of the network, in which vertex-proposals are ranked according to a function of the proposer's ranking and the number of descendants. This encourages proposers to make maximal use of their knowledge of the DAG for fear of losing out on the block reward.
\subsection{Efficiency and Throughput}
Most existing blockchains, including Ethereum and Bitcoin, combine the role of adding new transactions and defining the order between them. This coupling is a major limitation on both the block size and the block production rate. Increasing either parameter leads to an increase in the forking rate and thereby reduces the proportion of byzantine participants $\beta$ that the protocol can tolerate. These relationships and tradeoffs have been studied extensively for the Bitcoin protocol \cite{GHOST, bitcoin-security-synchronous, bitcoin-security-asynchronous}. While more recent protocols, such as GHOST \cite{GHOST}, have altered the fork choice rule to incorporate ``uncle blocks", a similar tradeoff between the block production rate and $\beta$ exists. In this case the system becomes vulnerable to balancing attacks \cite{balancing-attack}, in which an attacker splits the work of honest nodes into equal subtrees.
By decoupling the addition of new transactions from block production (delaying the specification of the final ordering), this constraint can be lifted, which allows transactions to be added much more rapidly. Switching from a stochastic block production rate to comparably deterministic block times dictated by the random beacon further alleviates this tradeoff. The nature of the DAG then allows proposals for the set of vertices to be propagated quickly, as they imply all vertices in their past.
The reduction in bandwidth from this can be calculated as follows: let us assume we would like to achieve $n_{tps}$ transactions per second, and that the block time is $t_{block}$. Assuming each 32 byte transaction hash appears in the DAG only once, propagating the vertices of a round requires, at a bare minimum,
$$
B_{\text{DAG}} = (32 n_{tps} t_{block} + 129 n_{vertices}) \text{ bytes} \,,
$$
where the coefficient in front of $n_{vertices}$ contains the size of a signature and two transaction hashes, which are the links for the vertex. A compact block\footnote{A compact block introduced in Bitcoin \cite{compactblock} replaces the list of transactions in a block with a list of 6 byte hashes, vastly reducing the data transmitted through the network. The idea utilises the fact that the memory pools between nodes generally contain the same transactions and therefore propagating full transactions in blocks is mostly redundant. In this scheme, any node that does not possess the transaction corresponding to a hash in the compact block can simply query its peers for the information.}, ignoring the block header, containing the same set of transactions has size $6 n_{tps} t_{block}$ bytes. Assuming that there is an $O(1)$ number of block proposals each round, and neglecting the size of tip proposals, the gains in terms of bandwidth from the DAG are of $O(1)$ (i.e.\ constant, independent of the number of transactions) and come from the fact that $B_{\text{DAG}}$ is not sent through the network in a single event but is synchronised continuously over the entire period of the round.
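As a numerical sanity check of these formulas (the parameter values below are assumptions chosen purely for the example, not protocol constants):

```python
# Bandwidth estimate for a round of the DAG versus a compact block.

def dag_bytes(n_tps, t_block, n_vertices):
    # 32-byte transaction hashes, plus per vertex a signature and two
    # link hashes (129 bytes in total, as in the text).
    return 32 * n_tps * t_block + 129 * n_vertices

def compact_block_bytes(n_tps, t_block):
    # A compact block ships a 6-byte short hash per transaction.
    return 6 * n_tps * t_block

n_tps, t_block, n_vertices = 1000, 10, 100  # assumed example parameters
print(dag_bytes(n_tps, t_block, n_vertices))   # 332900
print(compact_block_bytes(n_tps, t_block))     # 60000
```

The point of the comparison in the text is not the absolute sizes but that $B_{\text{DAG}}$ is synchronised continuously over the round rather than sent in a single burst.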
As mentioned in section \ref{S:3}, in order to maximise the throughput of the system, the protocol should incentivise the vertex-attachers to create a compact DAG from which the vertex-proposer can easily maximise the number of descendants. The choice of committee members for vertex-attachers allows for various incentivisation schemes depending on whether this group consists of all users or is a random subset. In the former case, the incentives align with those of IOTA and Byteball, where vertex-attachers are naturally incentivised to attach vertices to the DAG without a reward (as they will contain their own transactions), and they will choose to implement the algorithm that maximises the probability of their transactions being included in the next block. In the latter case, to avoid a tragedy-of-the-commons situation and to maximise the transaction throughput it is necessary to either reward the vertex-attachers, or to punish, e.g.\ through stake slashing \cite{slasher}, the nodes that do not cooperate.
In order to evaluate the effectiveness of these strategies, we simulated the construction of a DAG using various algorithms to append vertices, varying also the size of the group allowed to attach vertices to the DAG. Four different algorithms were studied:
\renewcommand\labelitemi{\raisebox{\mylen}{\tiny$\bullet$}}
\begin{itemize}
\setlength\itemsep{0em}
\item \emph{Random}: the tips are chosen randomly from available orphans.
\item \emph{Joint cardinality}: the pair of tips which have the maximum joint span of vertices, also referred to as the cardinality, is chosen. More concretely, the union of the two sets of descendants is taken and its size considered.
\item \emph{Metropolis}: pairs of tips are chosen at random until the joint cardinality of the tips is within a given threshold of the total number of vertices.
\item \emph{Greedy}: links are chosen by maximising cardinality on each link separately. First one tip is selected, and then the second is selected by considering the cardinalities of the remaining tips. This differs from the joint cardinality algorithm in that the set of descendants of each tip are considered separately even though the same elements may appear in both.
\end{itemize}
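A minimal sketch of two of these rules (\emph{Random} and \emph{Joint cardinality}) is given below; the DAG encoding and helper names are our own, not the simulation code used for the results reported later:

```python
import itertools
import random

def span(dag, v):
    """The set of vertices reachable from v, i.e. its 'descendants'."""
    seen, stack = set(), [v]
    while stack:
        u = stack.pop()
        if u not in seen:
            seen.add(u)
            stack.extend(dag[u])
    return seen

def random_tips(tips):
    """Random rule: pick any two available orphans."""
    return tuple(random.sample(tips, 2))

def joint_cardinality_tips(dag, tips):
    """Joint-cardinality rule: pick the pair of tips whose joint span
    (the union of their descendant sets) is largest."""
    return max(itertools.combinations(tips, 2),
               key=lambda p: len(span(dag, p[0]) | span(dag, p[1])))

dag = {"a": [], "b": [], "c": ["a"], "d": ["b"], "e": ["a"]}
tips = ["c", "d", "e"]
print(joint_cardinality_tips(dag, tips))  # ('c', 'd'), jointly covering {a, b, c, d}
```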
\begin{table}
\centering
\begin{tabular}{c|c|c|c}
\multirow{2}{*}{Algorithm Type} & \multicolumn{3}{c}{Vertex Proposal Size} \\
& $n_{vertices} = 10$ & $n_{vertices} = 100$ & $n_{vertices} = 1000$ \\\hline
\emph{Random} & 2 & 15 & 144 \\\hline
\emph{Joint Cardinality} & 2 & 24 & 308 \\\hline
\emph{Greedy} & 2 & 20 & 209 \\ \hline
\emph{Metropolis} & 2 & 15 & 145 \\ \hline
\end{tabular}
\caption{The average size of a vertex proposal across 100 blocks for each DAG construction algorithm for $n_{vertices} = 10,\ 100,\ 1000$. In this simulation tips of the DAG are discarded if they are older than 10 block rounds.}
\label{tab:metrics}
\end{table}
The sizes of the vertex proposals required to capture all vertices appended in the previous round are shown in table \ref{tab:metrics}. The results show that allowing a large committee of vertex-attachers significantly increases the size of the vertex-proposals needed to specify the DAG. It is therefore more beneficial for the efficiency of the system to restrict vertex-attachers to a small fraction of the staked nodes, balancing the security of the system against the feasibility of maximising the number of descendants in a vertex proposal.
\section{Conclusions}
In this article we have proposed a new DAG-based consensus protocol which leverages the separation of functions to both minimise agency and maximise throughput.
We describe incentive incompatibilities and restrictions stemming from the monopolistic power of a block producer to select transactions, once they have won the right to produce the next block. By restricting their freedom to do so, we eliminate issues that prevent existing protocols from implementing a range of fee pricing mechanisms. We further discuss different approaches to incentivise or enforce collective block construction in asynchronous networks. While more research is needed to prove the security and efficiency of these new approaches, the strict increase in enforceable pricing schemes opens up new opportunities to extract the fees necessary to secure a decentralised system in the absence of inflation-financed block rewards.
\bibliographystyle{bib-style}
\section{Introduction}
The Kadomtsev-Petviashvili (KP) hierarchy is one of the most fundamental objects in the modern
theory of integrable systems. It has at least three well-known
de\-fi\-ni\-ti\-ons/rep\-re\-sen\-ta\-ti\-ons. In its original,
the so-called Zakharov-Shabat form \cite{ZS75}, it is an infinite
system of equations on an infinite number of variables which are
the coefficients of monic ordinary linear differential operators
\begin{equation}\label{Bk}
B_k=\partial_x^k+\sum_{i=0}^{k-2} u_{k,i}(x,{\bf t})\partial_x^i
\end{equation}
depending on $x$ and an infinite set of ``times'' ${\bf t}=\{t_1, t_2, t_3, \ldots\}$.
The equations of the hierarchy are equivalent to the operator equations
\begin{equation}\label{kp10}
\partial_{t_l}B_k-\partial_{t_k}B_l+[B_k, B_l]=0, \quad \mbox{for all pairs $k,l$}.
\end{equation}
For each pair $(k,l)$ the operator equation (\ref{kp10})
is equivalent to a system of partial differential equations on
the coefficients of the operators $B_k,B_l$. The system is well-defined
in the sense that the number of equations is equal to the number of unknown functions.
For example, for the case $k=2, l=3$ in which $B_2=\partial^2_x+2u$ and
$B_3=\partial_x^3+3u\partial_x +w$ equation (\ref{kp10}) is equivalent
to a system of two equations for $u$ and $w$. After eliminating
$w$ from this system, and after the change of the notation for
independent variables $t_2=y, t_3=t$, the remaining equation for $u$
becomes the original KP equation
\begin{equation}\label{kporig}
3u_{yy}=\left( 4u_{t}-12u u_x -u_{xxx}\right)_x.
\end{equation}
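For the reader's convenience we record the intermediate steps of this standard computation. Collecting the coefficients of $\partial_x$ and $\partial_x^0$ in (\ref{kp10}) with $k=2$, $l=3$ gives
\begin{equation*}
2w_x=3u_y+3u_{xx}, \qquad 2u_t=w_y-w_{xx}+2u_{xxx}+6uu_x.
\end{equation*}
Expressing $w_x$ from the first equation and differentiating the second one in $x$ eliminates $w$ and yields (\ref{kporig}).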
\medskip
\noindent
{\bf Remark.}
Under the assumption that $u$ is a periodic function of the variables $x,y$, i.e. $u(x+\ell_1,y)=u(x,y+\ell_2)=u(x,y)$,
the KP hierarchy can be defined as a set of commuting flows on the space
of Cauchy data for (\ref{kporig}): the space of {\it one} periodic function
of two variables \cite{kp1}.
\medskip
The second form of the KP hierarchy (which is often called the Sato form)
was introduced in \cite{sato} as a system of commuting flows
on the space of sequences $(u_1(x),u_2(x),\ldots) $
of functions of one variable $x$, which can be identified
with the space of pseudo-differential operators of the form
\begin{equation}\label{satokp}
\mathcal L=\partial_x +u_1\partial_x^{-1}+u_2 \partial_x^{-2}+\ldots
\end{equation}
The flows are defined by the Lax equations
\begin{equation}\label{kp3}
\partial_{t_k} \mathcal L=[B_k, \, \mathcal L], \quad B_k =
\Bigl (\mathcal L^k\Bigr )_+, \quad k=1,2,3, \ldots
\end{equation}
where $(\cdot)_+$ stands for the differential part of a pseudo-differential operator.
The statement that equations (\ref{kp10}) follow from equations (\ref{kp3}) is easy.
The converse statement is true up to a triangular change of the time variables \cite{takebe}.
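For instance, a direct computation of the first two projections in (\ref{kp3}) gives
\begin{equation*}
B_2=(\mathcal L^2)_+=\partial_x^2+2u_1, \qquad
B_3=(\mathcal L^3)_+=\partial_x^3+3u_1\partial_x+3(u_2+u_1'),
\end{equation*}
which reproduces the operators $B_2$, $B_3$ of the Zakharov-Shabat form with $u=u_1$ and $w=3u_2+3u_1'$.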
The third form of the KP hierarchy is an infinite system of equations for {\it one} function
$\tau^{\rm KP}({\bf t})$ of an {\it infinite} number of variables
generated by the Hirota bilinear equation \cite{JimboMiwa,DJKM83}
\begin{equation}\label{ch02}
\oint_{C_\infty}\tau^{\rm KP}({\bf t}-[z^{-1}])\tau^{\rm KP}({\bf t'}+[z^{-1}])\,
\exp \Bigl (\sum_{k\geq 1}(t_k-t_k')z^k\Bigr )dz=0,
\end{equation}
which should be valid for all sets of times ${\bf t}$, ${\bf t'}$.
Here and below $C_\infty$ is a big circle around the infinity $z=\infty$ and
${\bf t}\pm [z^{-1}]$ denotes the following special shift of time variables:
\begin{equation}\label{tshift}
{\bf t}\pm [z^{-1}]:=
\Bigl \{ t_1\pm \frac{1}{z}, t_2\pm \frac{1}{2z^2}, t_3\pm
\frac{1}{3z^3}, \ldots \Bigr \}.
\end{equation}
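The shifts (\ref{tshift}) are tailored to the exponential factor in (\ref{ch02}): for $|z|<|a|$ one has the elementary identity
\begin{equation*}
\exp\Bigl(\sum_{k\geq 1}\frac{z^k}{k\,a^k}\Bigr)=\Bigl(1-\frac{z}{a}\Bigr)^{-1},
\end{equation*}
so that the shift ${\bf t}\to {\bf t}\pm [a^{-1}]$ multiplies $\exp\bigl(\sum_{k\geq 1}t_kz^k\bigr)$ by $(1-z/a)^{\mp 1}$.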
Note that equation (\ref{ch02}) is invariant under the transformation
\begin{equation}\label{linear}
\tau^{\rm KP}(x, {\bf t}) \longrightarrow \exp \Bigl (\gamma_0 +\gamma_1 x+
\sum_{j\geq 1}\gamma_j t_j\Bigr )\, \tau^{\rm KP}(x, {\bf t})
\end{equation}
with constant $\gamma_j$. The transformed
tau-function corresponds to the same solution of the KP equations
in the Sato or Zakharov-Shabat form. Tau-functions which differ by multiplication by the exponential of a linear function of the times are called equivalent.
In \cite{DJKM81} an infinite integrable hierarchy of partial differential
equations with ${\rm Sp}\, (\infty)$ symmetry was introduced and called the
Kadomtsev-Petviashvili hierarchy of type C (CKP). It is a hierarchy of commuting flows that are the restriction of the flows of the KP hierarchy corresponding
to ``odd'' times ${\bf t}_{\rm o}=\{t_1, t_3, t_5, \ldots \}$
onto the space of {\it anti self-adjoint}
pseudo-differential operators $\mathcal L_{\rm CKP}$ of the form (\ref{satokp}), i.e. such that
\begin{equation}\label{ckp01}
\mathcal L_{\rm CKP}^{\dag}=-\mathcal L_{\rm CKP},
\end{equation}
where $\dag$ means the formal adjoint
defined by the rule $\Bigl (f(x)\circ \partial_x^{m}\Bigr )^{\dag}=(-\partial_x)^m \circ f(x)$.
The CKP hierarchy was revisited in \cite{DM-H09,CW13,CH14,LOS12}.
The first goal of this work is to characterize the CKP hierarchy in terms of the
KP tau-function. More precisely, each solution of the CKP
hierarchy has a unique extension to the solution of the full KP
hierarchy via the flows (\ref{kp3}) with even $k$ (which obviously do not preserve constraint (\ref{ckp01})). In what follows we will refer to
the corresponding solution to the KP hierarchy as {\it KP extension
of the solution to the CKP hierarchy}.
In section 2 we prove that the KP tau-function is the tau-function of such a solution if the equation
\begin{equation}\label{int3}
\partial_{t_2}\log \tau^{\rm KP}\Bigl |_{{\bf t}_{\rm e}=0}=0
\end{equation}
holds for all ${\bf t}_{\rm o}$, where
all ``even'' times ${\bf t}_{\rm e}=(t_2,t_4,\ldots)$ are set equal to zero.
Conversely, in the equivalence class of any KP tau-function which is the tau-function of
the KP extension of a solution to the CKP hierarchy there exists one which satisfies
the condition (\ref{int3}). We note that this condition selects ``turning points''
of the KP hierarchy in the sense that if $x_i$ are zeros of the tau-function
$\tau^{\rm KP}(x, {\bf t})$, i.e., $\tau^{\rm KP}(x_i, {\bf t})=0$, then
$\partial_{t_2}x_i(t_1, t_3, \ldots )=0$ for all $t_1, t_3, t_5, \ldots$.
We also prove the existence of the tau-function
$\tau (x,{\bf t}_{\rm o})$ for the
CKP hierarchy which is a function of ``odd'' times ${\bf t}_{\rm o}$ only and
prove that it is the square root of $\tau^{\rm KP}$
satisfying the condition (\ref{int3}):
\begin{equation}\label{int3a}
\tau (t_1, t_3, t_5, \ldots )=\sqrt{\vphantom{A^a}\tau^{\rm KP}(t_1, 0, t_3, 0, t_5, 0,
\ldots )}.
\end{equation}
In the first part of Section 3 we present in detail the algebraic-geometrical
construction of quasi-periodic solutions to the CKP hierarchy briefly outlined in \cite{DJKM81}.
We start from the general scheme proposed in \cite{Krichever77,Krichever77a}.
The specialization for the CKP hierarchy is a certain reduction of this general scheme.
The data defining the algebraic-geometrical solutions of the CKP hierarchy are the following:
a smooth algebraic curve $\Gamma$ of genus $g$ with a holomorphic involution
having at least one fixed point $P_{\infty}\in \Gamma$, a local parameter
in a neighborhood of $P_{\infty}$ which is
{\it odd} with respect to the involution and a generic {\it admissible}
divisor of degree $g$. The locus of the admissible divisors
in the Jacobian is a translate of the Prym variety of $\Gamma$.
In the second part of Section 3 we prove the new identity for the
Riemann theta-function of a curve with involution having at least
one fixed point (Theorem \ref{thm-main}). The identity is an
algebraic-geometrical incarnation of the relations between KP-
and CKP-tau-functions discussed in Section 2.
In Section 4 we study double-periodic (elliptic) in the variable $x=t_1$ solutions to the
$C$-version of the KP equation and their trigonometric and rational degenerations.
In the seminal paper \cite{AMM77} the motion of poles of singular solutions to the
Korteweg-de Vries and Boussinesq equations was considered. It was discovered that the poles move as particles of the many-body Calogero-Moser system \cite{Calogero71,Calogero75,Moser75}
with some additional restrictions in the phase space.
In \cite{Krichever78,CC77} it was shown that in the case of the KP
equation this correspondence becomes an isomorphism: the dynamics of
poles of rational solutions to the
KP equation is given by equations of motion for the Calogero-Moser system
with pairwise interaction potential
$1/(x_i-x_j)^2$. This remarkable connection
was further generalized to elliptic
solutions in \cite{Krichever80}:
poles $x_i$ of the elliptic solutions as functions
of $t_2=y$ move according to the equations of motion
\begin{equation}\label{int1}
\partial_y^2 x_i=4\sum_{k\neq i} \wp ' (x_i-x_k)
\end{equation}
of Calogero-Moser particles with the elliptic
interaction potential $\wp (x_i-x_j)$ ($\wp$ is the Weierstrass $\wp$-function).
Moreover, in \cite{Krichever80} it was shown that the origin of
equations (\ref{int1}) is related to a more fundamental problem:
when a linear equation with elliptic coefficients has {\it double-Bloch} solutions
(i.e. solutions which are sections of a line bundle over
the elliptic curve, see \cite{kr-nested}).
Recently, the method proposed in \cite{Krichever80} was applied
to the theory of elliptic solutions of the BKP equation \cite{RZ20,Z19}.
Along the same line of arguments we derive in Section 4 the equations
of motion for poles of elliptic solutions to the CKP equation:
\begin{equation}\label{int2}
\dot x_i =3\sum_{k\neq i}^n\wp (x_i-x_k)-6c,
\end{equation}
where $c$ is a constant and dot means the $t_3$-derivative.
In contrast to the KP and BKP cases, where
the equations of motion are of the second order
(see \cite{Krichever80,RZ20,Z19}), equations (\ref{int2}) are of the first order.
As follows from the comparison of the CKP and KP hierarchies in
Section 2, equations (\ref{int2}) coincide with the restriction of the Calogero-Moser
flow corresponding to the higher Hamiltonian $H_3$
to the manifold of {\it turning points} in the $2n$-dimensional
phase space $(p_i, x_i)$, i.e. the $n$-dimensional submanifold
$p_i=\partial_y x_i=0$ for all $i=1, \ldots , n$.
\medskip\noindent
{\bf Remark.}
The notion of the turning points of the elliptic
Calogero-Moser system and the study of the corresponding spectral
curves in the forthcoming paper \cite{KN} was motivated by the
problem of construction of explicit solutions to the
two-dimensional $O(2m+1)$ sigma-model.
\medskip
\section{The CKP hierarchy}
\subsection{The $\mathcal L$-operator and the dressing operator}
The set of independent variables (``times'') in the CKP hierarchy is
${\bf t}_{\rm o}=\{t_1, t_3, t_5, \ldots \}$.
Like in the BKP case, they are indexed by positive odd numbers. It is convenient
to set $t_1 = x+\mbox{const}$, so that the vector fields $\partial_{t_1}$ and $\partial_x$ are
identical: $\partial_{t_1}=\partial_x$.
The hierarchy is defined on the space of pseudo-differential operators $\mathcal L_{\rm CKP}$
of the form
\begin{equation}\label{ckp1}
\mathcal L_{\rm CKP}=\partial_x +u_1\partial_x^{-1}+u_2 \partial_x^{-2}+\ldots
\end{equation}
subject to the constraint
\begin{equation}\label{ckp2}
\mathcal L_{\rm CKP}^{\dag}=-\mathcal L_{\rm CKP}.
\end{equation}
The coefficients $u_j$ of $\mathcal L_{\rm CKP}$ depend on $x$ and on all the times.
It is convenient to introduce the wave operator (or dressing operator)
\begin{equation}\label{ckp1a}
W=1+\xi_1 \partial_x^{-1}+\xi_2 \partial_x^{-2}+\ldots
\end{equation}
such that
\begin{equation}\label{ckp1ab}
\mathcal L_{\rm CKP}=W\partial_x W^{-1}.
\end{equation}
The wave operator is unique up to multiplication from the right by a pseudo-differential operator with constant coefficients.
The constraint (\ref{ckp2}) implies that $W^{\dag}W$ commutes with $\partial_x$, i.e.,
it is a pseudo-differential operator with constant coefficients.
We fix the above mentioned ambiguity in the definition of the
wave operator by imposing the equation
$W^{\dag}W=1$, i.e.
\begin{equation}\label{ckp1b}
W^{\dag}=W^{-1}.
\end{equation}
The hierarchy of flows is defined by the Lax equations
\begin{equation}\label{ckp3}
\partial_{t_k}\mathcal L_{\rm CKP}=[B_k, \, \mathcal L_{\rm CKP}], \quad B_k =
\Bigl (\mathcal L_{\rm CKP}^k\Bigr )_+, \quad k=1,3,5, \ldots ,
\end{equation}
which obviously preserve the constraint (\ref{ckp1b}) since $B_k^{\dag}=-B_k$ (for odd $k$).
The zero curvature (Zakharov-Shabat) equations
\begin{equation}\label{ckp3a}
\partial_{t_l}B_k-\partial_{t_k}B_l+[B_k, B_l]=0, \quad \mbox{$k,l$ odd}
\end{equation}
are an easy corollary of (\ref{ckp3}). They are equivalent to the statement that the flows (\ref{ckp3}) commute with each other.
The first equation of the CKP hierarchy is the equation
$\partial_{t_3}B_5-\partial_{t_5}B_3+[B_5, B_3]=0$ with
\begin{equation}\label{ckp5}
\begin{array}{c}
B_3 =\partial_x^3 +6u\partial_x +3u', \quad u'\equiv \partial_x u, \quad u=\frac{1}{2}\, u_1,
\end{array}
\end{equation}
\begin{equation}\label{B5}
\begin{array}{c}
B_5 =\partial_x^5 +10u\partial^3_x +15u' \partial_x^2 +v\partial_x +\frac{1}{2}\, (v'-5u''').
\end{array}
\end{equation}
Straightforward calculations give the following
system of equations for the unknown functions $u,v$:
\begin{equation}\label{ckp0}
\left \{ \begin{array}{l}
10\partial_{t_3}u=3v' -35u''' -120 uu'
\\ \\
6\partial_{t_5}u-\partial_{t_3}v=\frac{57}{2}\, u''''' +150 uu''' +180u'u'' -
\frac{5}{2}\, v''' +6vu'-6uv'.
\end{array} \right.
\end{equation}
Note that the variable $v$ can be excluded by passing to the unknown function $U$
such that $U'=u$.
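Explicitly, integrating the first equation of (\ref{ckp0}) once in $x$ gives
\begin{equation*}
v=\frac{1}{3}\Bigl(10\,\partial_{t_3}U+35\,u''+60\,u^2\Bigr)+\mbox{const},
\qquad u=U',
\end{equation*}
where the integration constant may depend on the times; substituting this into the second equation yields a single equation for $U$.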
\subsection{The wave function and the tau-function}
\label{section:wave}
The Lax equations (\ref{ckp3}) are compatibility conditions of the auxiliary linear problems
\begin{equation}\label{ckp4}
\partial_{t_k}\Psi =B_k \Psi , \quad \mathcal L_{\rm CKP}\Psi =z\Psi
\end{equation}
for the formal wave function
\begin{equation}\label{ckp4a}
\Psi=\Psi({\bf t}_{\rm o}, z)=We^{xz+\zeta ({\bf t}_{\rm o}, z)},
\end{equation}
where $z$ is the spectral parameter and
\begin{equation}\label{ckp7}
\zeta ({\bf t}_{\rm o}, z)=\sum_{k\geq 1, \, {\rm odd}}t_k z^k.
\end{equation}
Note that the operator $\partial_x^{-1}$ acts on the exponential function as $\partial_x^{-1}e^{xz}=z^{-1}e^{xz}$. Therefore, from (\ref{ckp1a}), (\ref{ckp4a}), it follows that
the wave function has the following expansion as $z\to \infty$:
\begin{equation}\label{ckp5b}
\Psi(x, {\bf t}_{\rm o}, z)=e^{xz+\zeta ({\bf t}_{\rm o}, z)}
\Bigl (1+\sum_{k\geq 1}\xi_k z^{-k}\Bigr ).
\end{equation}
\begin{proposition} (\cite{DJKM81})
The wave function $\Psi$ satisfies the
bilinear relation
\begin{equation}\label{ckp5a}
\oint_{C_{\infty}}\! \Psi (x, {\bf t}_{\rm o}, z)\Psi (x, {\bf t}_{\rm o}', -z){dz}=0
\end{equation}
for all ${\bf t}_{\rm o}, {\bf t}_{\rm o}'$.
\end{proposition}
\noindent
For completeness, we give a sketch of the proof here.
By virtue of differential equations (\ref{ckp3}), the bilinear relation is equivalent
to the vanishing of the coefficients
$$
b_m=\frac{1}{2\pi i}\,
\partial_{x'}^m \oint_{C_{\infty}}\! \Psi (x, {\bf t}_{\rm o}, z)\Psi (x', {\bf t}_{\rm o}, -z)
{dz} \Biggr |_{x'=x}\quad \mbox{for all $m\geq 0$.}
$$
We have:
$$
b_m=\frac{1}{2\pi i}\oint_{C_{\infty}}\Bigl (\sum_{k\geq 0}\xi_k(x)z^{-k}\Bigr )\partial_{x'}^m
\Bigl (\sum_{l\geq 0}\xi_l(x')(-z)^{-l}\Bigr )e^{(x-x')z} {dz} \Biggr |_{x'=x}
$$
$$
=\frac{1}{2\pi i}\oint_{C_{\infty}}\Bigl (\sum_{k\geq 0}\xi_kz^{-k}\Bigr )(\partial_x -z)^m
\Bigl (\sum_{l\geq 0}\xi_l(-z)^{-l}\Bigr ){dz}
$$
$$
=\sum_{j+k+l=m+1}(-1)^{m+j+l}\left (\! \begin{array}{c}m\\ j \end{array} \! \right )
\xi_k \partial_x^j \xi_l.
$$
The last expression is the coefficient at $(-1)^m \partial_x^{-m-1}$ in the operator
$WW^{\dag}$:
$$
WW^{\dag}=1+\sum_{m\geq 0}(-1)^m b_m \partial_x^{-m-1}.
$$
Since $WW^{\dag}=1$ (see (\ref{ckp1b})), we get that $b_m=0$ for all $m\geq 0$.
\begin{theorem}
\label{theorem:exist}
There exists a function $\tau = \tau (x, {\bf t}_{\rm o})$
such that
\begin{equation}\label{e4}
\Psi (x, {\bf t}_{\rm o},z) =
(2z)^{-1/2}\sqrt{\vphantom{B^{a^a}}\partial_x \psi^2(x, {\bf t}_{\rm o},z)},
\end{equation}
where
\begin{equation}\label{e3}
\psi (x, {\bf t}_{\rm o},z) = e^{xz+\zeta ({\bf t}_{\rm o}, z)}
\frac{\tau (x, {\bf t}_{\rm o}-2[z^{-1}]_{\rm o})}{\tau (x, {\bf t}_{\rm o})}
\end{equation}
and we use the notation
\begin{equation}\label{ckp8}
{\bf t}_{\rm o}+j[z^{-1}]_{\rm o}:= \Bigl \{ t_1 +\frac{j}{z}, t_3 +\frac{j}{3z^3},
t_5 +\frac{j}{5z^5}, \, \ldots \Bigr \}, \quad j\in \mbox{\Bbb Z} .
\end{equation}
\end{theorem}
\begin{definition}The function $\tau = \tau (x, {\bf t}_{\rm o})$ is called the tau-function
of the CKP hierarchy.
\end{definition}
\noindent
{\it Proof of Theorem \ref{theorem:exist}.}
Representing the right hand side of (\ref{e4}) in the explicit form,
we see that we should prove the formula
\begin{equation}\label{ckp6}
\Psi = e^{xz+\zeta ({\bf t}_{\rm o}, z)}G(x, {\bf t}_{\rm o}, z)\,
\frac{\tau (x, {\bf t}_{\rm o}-2[z^{-1}]_{\rm o})}{\tau (x, {\bf t}_{\rm o})},
\end{equation}
where
\begin{equation}\label{ckp6a}
G(x, {\bf t}, z)=\left (1+z^{-1}\partial_{t_1}\log \frac{\tau (x, {\bf t}_{\rm o}
-2[z^{-1}]_{\rm o})}{\tau (x,{\bf t}_{\rm o})}
\right )^{1/2}.
\end{equation}
The proof is based on the bilinear relation (\ref{ckp5a}).
Let us represent the wave function in the form
$$
\Psi (x, {\bf t}_{\rm o}, z)=e^{xz+\zeta ({\bf t}_{\rm o}, z)}w(x,{\bf t}_{\rm o}, z)
$$
and set ${\bf t}_{\rm o}'={\bf t}_{\rm o}-2[a^{-1}]_{\rm o}$ in the bilinear relation. We have
$\displaystyle{e^{\zeta(2[a^{-1}]_{\rm o}, z)}=
\frac{a+z}{a-z}}$. Then the residue calculus yields
\begin{equation}\label{der1}
w({\bf t}_{\rm o}, a) w({\bf t}_{\rm o}-2[a^{-1}]_{\rm o}, -a)=f({\bf t}_{\rm o}, a),
\end{equation}
where
\begin{equation}\label{der2}
f({\bf t}_{\rm o}, z)=1+\frac{1}{2z}\, \Bigl (\xi_1({\bf t}_{\rm o})
-\xi_1({\bf t}_{\rm o}-2[z^{-1}]_{\rm o})\Bigr )
\end{equation}
and we do not indicate the dependence on $x$ for brevity.
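The exponential identity used above is the odd-index counterpart of the identity below (\ref{tshift}): for $|z|<|a|$,
\begin{equation*}
\zeta(2[a^{-1}]_{\rm o}, z)=2\!\!\sum_{k\geq 1,\ {\rm odd}}\frac{(z/a)^k}{k}
=\log\frac{1+z/a}{1-z/a}=\log\frac{a+z}{a-z}.
\end{equation*}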
Next, we set ${\bf t}_{\rm o}'={\bf t}_{\rm o}-2[a^{-1}]_{\rm o}-2[b^{-1}]_{\rm o}$
in the bilinear relation and the residue calculus yields
\begin{equation}\label{der7}
\begin{array}{c}
\displaystyle{\frac{a+b}{a-b}\Bigl (aw({\bf t}_{\rm o}, a)w({\bf t}_{\rm o}\! -\!
2[a^{-1}]_{\rm o}\!
-\! 2[b^{-1}]_{\rm o}, -a)-
bw({\bf t}_{\rm o}, b)w({\bf t}_{\rm o}\! -\! 2[a^{-1}]_{\rm o}
\! -\! 2[b^{-1}]_{\rm o}, -b)\Bigr )}
\\ \\
\displaystyle{=
a+b+\frac{1}{2}\Bigl (\xi_1({\bf t}_{\rm o})-\xi_1({\bf t}_{\rm o}\! -\! 2[a^{-1}]_{\rm o}
\! -\! 2[b^{-1}]_{\rm o}\Bigr )}.
\end{array}
\end{equation}
Using the relation (\ref{der1}), we can represent this equation in the form
\begin{equation}\label{der8}
\begin{array}{c}
\displaystyle{
\frac{1}{a-b}\left (af({\bf t}_{\rm o}\! -\! 2[b^{-1}]_{\rm o}, a)
\frac{w({\bf t}_{\rm o}, a)}{w({\bf t}_{\rm o}\! -\! 2[b^{-1}]_{\rm o},a)}-
bf({\bf t}_{\rm o}\! -\! 2[a^{-1}]_{\rm o}, b)
\frac{w({\bf t}_{\rm o}, b)}{w({\bf t}_{\rm o}\! -\! 2[a^{-1}]_{\rm o},b)}\right )
}
\\ \\
\displaystyle{=\, 1+\frac{\xi_1({\bf t}_{\rm o})-
\xi_1({\bf t}_{\rm o}\! -\! 2[a^{-1}]_{\rm o}\! -\! 2[b^{-1}]_{\rm o})}{2(a+b)}.}
\end{array}
\end{equation}
Shifting here ${\bf t}_{\rm o}\rightarrow {\bf t}_{\rm o}+2[b^{-1}]_{\rm o}$,
changing the sign of $b$ (i.e., changing $b\to -b$)
and using (\ref{der1}) in the second term in the
left hand side after that, we arrive at the equation
\begin{equation}\label{der9}
\begin{array}{c}
\displaystyle{
\frac{1}{a+b}\left (af({\bf t}_{\rm o}, a)
\frac{w({\bf t}_{\rm o}\! -\! 2[b^{-1}]_{\rm o}, a)}{w({\bf t}_{\rm o},a)}-
bf({\bf t}_{\rm o}, b)
\frac{w({\bf t}_{\rm o}\! -\! 2[a^{-1}]_{\rm o}, b)}{w({\bf t}_{\rm o},b)}\right )
}
\\ \\
\displaystyle{=\, 1+\frac{\xi_1({\bf t}_{\rm o}\! -\! 2[b^{-1}]_{\rm o})-
\xi_1({\bf t}_{\rm o}\! -\! 2[a^{-1}]_{\rm o})}{2(a-b)}.}
\end{array}
\end{equation}
Together equations (\ref{der8}), (\ref{der9}) form the system of equations
\begin{equation}\label{der10}
\left \{
\begin{array}{l}
\displaystyle{\frac{1}{a\! -\! b}\left (af({\bf t}_{\rm o}\! -\! 2[b^{-1}]_{\rm o}, a)X_a^{-1}\! -\!
bf({\bf t}_{\rm o}\! -\! 2[a^{-1}]_{\rm o}, b)X_b^{-1}\right )\! =\!
\frac{af({\bf t}_{\rm o}, a)\! +\! bf({\bf t}_{\rm o}\! -\! 2[a^{-1}]_{\rm o}, b)}{a+b}}
\\ \\
\displaystyle{\frac{1}{a\! +\! b}\, \Bigl (af({\bf t}_{\rm o}, a)X_a -
bf({\bf t}_{\rm o}, b)X_b\Bigr ) =
\frac{af({\bf t}_{\rm o}, a) - bf({\bf t}_{\rm o}, b)}{a-b}}
\end{array}
\right.
\end{equation}
for the ``unknowns''
\begin{equation}\label{der11}
X_a=\frac{w({\bf t}_{\rm o}-2[b^{-1}]_{\rm o}, a)}{w({\bf t}_{\rm o}, a)}, \qquad
X_b=\frac{w({\bf t}_{\rm o}-2[a^{-1}]_{\rm o}, b)}{w({\bf t}_{\rm o}, b)}.
\end{equation}
Multiplying the two equations (\ref{der10}), one obtains, using
the identity
\begin{equation}\label{der12}
af({\bf t}_{\rm o}, a)-af({\bf t}_{\rm o}-2[b^{-1}]_{\rm o}, a)
-bf({\bf t}_{\rm o}, b)+bf({\bf t}_{\rm o}-2[a^{-1}]_{\rm o}, b)=0,
\end{equation}
the following simple relation:
\begin{equation}\label{der13}
\frac{w({\bf t}_{\rm o}, a)
w({\bf t}_{\rm o}-2[a^{-1}]_{\rm o}, b)}{w({\bf t}_{\rm o}, b)
w({\bf t}_{\rm o}-2[b^{-1}]_{\rm o}, a)}=
\left (\frac{f({\bf t}_{\rm o}, a)
f({\bf t}_{\rm o}-2[a^{-1}]_{\rm o}, b)}{f({\bf t}_{\rm o}, b)
f({\bf t}_{\rm o}-2[b^{-1}]_{\rm o}, a)}\right )^{1/2}.
\end{equation}
Therefore, introducing $w_0({\bf t}_{\rm o}, z)=w({\bf t}_{\rm o}, z)
f^{-1/2}({\bf t}_{\rm o}, z)$, we get
\begin{equation}\label{der14}
\frac{w_0({\bf t}_{\rm o}, a)
w_0({\bf t}_{\rm o}-2[a^{-1}]_{\rm o}, b)}{w_0({\bf t}_{\rm o}, b)
w_0({\bf t}_{\rm o}-2[b^{-1}]_{\rm o}, a)}=1.
\end{equation}
Our goal is to prove that there exists a function $\tau ({\bf t}_{\rm o})$ such that
\begin{equation}\label{der15}
w_0({\bf t}_{\rm o}, z)=\frac{\tau ({\bf t}_{\rm o}-2[z^{-1}]_{\rm o})}{\tau ({\bf t}_{\rm o})}.
\end{equation}
For that, it is enough to show that there is a function $\tau$ such that the equation
\begin{equation}\label{der151}
\hat{\mathcal D} \Bigl (w_0({\bf t}_{\rm o}, z)\, \tau ({\bf t}_{\rm o})\Bigr )=0
\end{equation}
with
\begin{equation}\label{cD}
\hat{\mathcal D}:=\partial_z-2\! \sum_{m\geq 1, \,\, {\rm odd}}\! z^{-m-1}\partial_{t_m}
\end{equation}
holds.
Indeed, integrating the equation $\hat{\mathcal D} F=0$ along its characteristics, we get that a function
$F({\bf t}_{\rm o}, z)$ is in the kernel of the differential operator $\hat{\mathcal D}$ if and only if it is of the form
$$F({\bf t}_{\rm o}, z)=f ({\bf t}_{\rm o}-2[z^{-1}]_{\rm o}) $$
for some function $f({\bf t}_{\rm o})$. For $F$ as in (\ref{der151}) the initial condition $w_0({\bf t}_{\rm o}, \infty)=1$ allows us to identify the corresponding function $f$ with $\tau$.
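Indeed, the shift acts on each odd time as $t_m\to t_m-\frac{2}{m}\, z^{-m}$, so by the chain rule
$$
\partial_z f({\bf t}_{\rm o}-2[z^{-1}]_{\rm o})=
\sum_{m\geq 1, \,\, {\rm odd}}\partial_z\Bigl (-\frac{2}{m}\, z^{-m}\Bigr )\partial_{t_m}f=
2\! \sum_{m\geq 1, \,\, {\rm odd}}\! z^{-m-1}\partial_{t_m}f ,
$$
and the two terms of $\hat{\mathcal D}$ cancel on any function of the form $f({\bf t}_{\rm o}-2[z^{-1}]_{\rm o})$.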
Equation (\ref{der151}) is equivalent to the equations
$$
Y_n :=\mathop{\hbox{res}}\limits_{z=\infty} \Bigl [ z^n \hat{\mathcal D} \log w_0 \Bigr ]=2\frac{\partial \log \tau}{\partial t_n}.
$$
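As a consistency check, if $w_0$ has the form (\ref{der15}), then $\hat{\mathcal D}$ annihilates $\log \tau ({\bf t}_{\rm o}-2[z^{-1}]_{\rm o})$, so
$$
\hat{\mathcal D}\log w_0=-\hat{\mathcal D}\log \tau ({\bf t}_{\rm o})=
2\! \sum_{m\geq 1, \,\, {\rm odd}}\! z^{-m-1}\partial_{t_m}\log \tau ({\bf t}_{\rm o}),
$$
and picking the coefficient of $z^{-n-1}$ (with $\mathop{\hbox{res}}\limits_{z=\infty}$ understood as extracting the coefficient of $z^{-1}$) indeed gives $Y_n=2\partial_{t_n}\log \tau$.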
Therefore, to complete the proof of the existence of the tau-function it remains only to show that
$\partial_{t_n}Y_m({\bf t}_{\rm o})=\partial_{t_m}Y_n({\bf t}_{\rm o})$.
Changing $a\to z$, $b\to \zeta$ in (\ref{der14}), and applying the operator
$\hat{\mathcal D}$ to the logarithm of this equality, we get
$$
\hat{\mathcal D}\left(\log w_0({\bf t}_{\rm o}, z)-\log w_0({\bf t}_{\rm o}\! -\! 2[\zeta ^{-1}]_{\rm o}, z)
+\log w_0({\bf t}_{\rm o}, \zeta )\right)=0,
$$
or
\begin{equation}\label{der16}
Y_n ({\bf t}_{\rm o})-Y_n ({\bf t}_{\rm o}-2[\zeta^{-1}]_{\rm o})=-2
\partial_{t_n}\log w_0({\bf t}_{\rm o}, \zeta ).
\end{equation}
Denote $F_{mn}=\partial_{t_m}Y_n-\partial_{t_n}Y_m$. Then, from (\ref{der16}) it follows
that the equation
\begin{equation}\label{der17}
F_{mn}({\bf t}_{\rm o})=F_{mn}({\bf t}_{\rm o}-2[\zeta^{-1}]_{\rm o})
\end{equation}
holds identically in $\zeta$. Expanding the right hand side in a power series,
$$
\begin{array}{c}
F_{mn}({\bf t}_{\rm o}\! -\! 2[\zeta^{-1}]_{\rm o})=F_{mn}({\bf t}_{\rm o})
\! -\! 2\zeta^{-1}\partial_{t_1}F_{mn}({\bf t}_{\rm o})
\! +\! 2\zeta^{-2}\partial_{t_1}^2F_{mn}({\bf t}_{\rm o})
\! -\! \frac{2}{3}\, \zeta^{-3}(\partial_{t_3}
F_{mn}({\bf t}_{\rm o})\! +\! 2 \partial_{t_1}^3F_{mn}({\bf t}_{\rm o}))+\ldots ,
\end{array}
$$
we see from the $\zeta^{-1}$-term that $F_{mn}$ does not depend on $t_1$. Then from
the $\zeta^{-3}$-term we conclude that it does not depend on $t_3$, and so on, so it
does not depend on $t_k$ for any (odd) $k$:
$F_{mn}=2a_{mn}$, where $a_{mn}$ are some constants such that $a_{mn}=-a_{nm}$.
Therefore, we can write
$$
Y_n =\sum_m a_{mn}t_m +\partial_{t_n}h,
$$
where $h=h({\bf t}_{\rm o})$ is some function. Then from (\ref{der16}) we have
$$
-2\partial_{t_n}\log w_0({\bf t}_{\rm o}, z)=\partial_{t_n}(h({\bf t}_{\rm o})-
h({\bf t}_{\rm o}-2[z^{-1}]_{\rm o}))+2\! \sum_{m\,\, {\rm odd}}\frac{a_{mn}}{m}\, z^{-m},
$$
or, after integration,
$$
\log w_0({\bf t}_{\rm o}, z)=\frac{1}{2}\, h({\bf t}_{\rm o}\! -\!
2[z^{-1}]_{\rm o}) -\frac{1}{2}\, h({\bf t}_{\rm o})-
\sum_{m\,\, {\rm odd}}\frac{a_{mn}}{m}\, z^{-m}t_n
+\varphi (z),
$$
where $\varphi (z)$ is a function of $z$ only. Substituting this into the logarithm of
(\ref{der14}), we conclude that $a_{mn}=0$.
Now, writing $w({\bf t}_{\rm o}, z)=f^{1/2}({\bf t}_{\rm o}, z)w_0({\bf t}_{\rm o}, z)$
and noting that $f({\bf t}_{\rm o}, z)=1+O(z^{-2})$, we see that
\begin{equation}\label{der18}
\xi_1 ({\bf t}_{\rm o})=-2\partial_{t_1}\log \tau ({\bf t}_{\rm o})
\end{equation}
and we arrive at (\ref{ckp6})
with $G=f^{1/2}$.
\square
\medskip\noindent
{\bf Remark.} The proof given above is rather involved.
It is instructive to obtain (\ref{ckp6}), up to a common
$x$-independent factor, in the following simpler way \cite{DM-H09,CW13}.
Let us apply $\partial_{t_1}$ to
(\ref{ckp5a}) and set ${\bf t}_{\rm o}'={\bf t}_{\rm o}-2[a^{-1}]_{\rm o}$.
The residue calculus yields
\begin{equation}\label{der3}
\begin{array}{c}
2a^2\Bigl (1\! -\! w({\bf t}_{\rm o}, a) w({\bf t}_{\rm o}\! -\! 2[a^{-1}]_{\rm o}, -a)\Bigr )-2a
w'({\bf t}_{\rm o}, a) w({\bf t}_{\rm o}-2[a^{-1}]_{\rm o}, -a)
\\ \\
+2a\Bigl (\xi_1({\bf t}_{\rm o})-\xi_1({\bf t}_{\rm o}\! -\!
2[a^{-1}]_{\rm o})\Bigr ) +\xi_2({\bf t}_{\rm o}\! -\! 2[a^{-1}]_{\rm o})
+\xi_2({\bf t}_{\rm o})+\xi_1'({\bf t}_{\rm o})
\\ \\
\phantom{aaaaaaaaaaaaaaaaaaaaaaaa}-
\xi_1({\bf t}_{\rm o})\xi_1({\bf t}_{\rm o}\! -\! 2[a^{-1}]_{\rm o})=0,
\end{array}
\end{equation}
where the prime denotes the $x$-derivative and
we again do not indicate the dependence on $x$ explicitly.
Letting $a\to \infty$, we get the relation
\begin{equation}\label{der4}
2\xi_2 ({\bf t}_{\rm o})=\xi_1^2 ({\bf t}_{\rm o})-\xi_1' ({\bf t}_{\rm o})
\end{equation}
which also directly follows from $WW^{\dag}=1$.
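To see the latter, one can use the form (\ref{ckp1a}) of the dressing operator; assuming, as usual, $W=1+\xi_1\partial_x^{-1}+\xi_2\partial_x^{-2}+\ldots$, we have $W^{\dag}=1-\partial_x^{-1}\xi_1+\partial_x^{-2}\xi_2-\ldots$, and with the help of $\partial_x^{-1}\xi =\xi \partial_x^{-1}-\xi '\partial_x^{-2}+\ldots$ one checks that the coefficient of $\partial_x^{-1}$ in $WW^{\dag}$ vanishes identically, while the coefficient of $\partial_x^{-2}$ yields
$$
2\xi_2+\xi_1'-\xi_1^2=0,
$$
which is precisely (\ref{der4}).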
Plugging it back into (\ref{der3}), we can rewrite equation (\ref{der3}) in the form
\begin{equation}\label{der5}
\begin{array}{c}
w'({\bf t}_{\rm o}, a) w({\bf t}_{\rm o}-2[a^{-1}]_{\rm o}, -a)=
af({\bf t}_{\rm o}, a)(f({\bf t}_{\rm o}, a)-1)+\frac{1}{2}
f'({\bf t}_{\rm o}, a).
\end{array}
\end{equation}
Using (\ref{der1}), we conclude that
\begin{equation}\label{der6}
\begin{array}{c}
\partial_x \log w({\bf t}_{\rm o}, a)=a(f({\bf t}_{\rm o}, a)-1)+\frac{1}{2}\, \partial_x
\log f({\bf t}_{\rm o}, a)
\\ \\
=\frac{1}{2}\Bigl (\xi_1({\bf t}_{\rm o})-\xi_1({\bf t}_{\rm o}-2[a^{-1}]_{\rm o})\Bigr )+
\frac{1}{2}\, \partial_x \log f({\bf t}_{\rm o}, a).
\end{array}
\end{equation}
Now, setting $\xi_1(x, {\bf t}_{\rm o})=-2\partial_x\log \tau (x, {\bf t}_{\rm o})$ and integrating,
we arrive at (\ref{ckp6})
with $G=f^{1/2}$ up to a common multiplier which does not depend on $x$.
\medskip
\noindent{\bf Remark}. Substitution of (\ref{ckp5b}) into (\ref{ckp4}) with $k=3$ shows that the function $u$ in (\ref{ckp5}) equals
\begin{equation}\label{ckp5d}
u=-\frac{1}{2}\, \xi_1'=\partial_x^2\log \tau .
\end{equation}
\subsection{CKP hierarchy versus KP hierarchy}
The goal of this section is to prove that the CKP hierarchy can be
identified with the restriction of the {\it odd-time} flows
of the KP hierarchy to the locus of {\it turning points of the even-time flows}.
Recall that the wave function $\Psi^{\rm KP}$ and the
adjoint wave function $\Psi^{\dag \rm KP}$ of the KP hierarchy are expressed through the
tau-function $\tau^{\rm KP}$ as
\begin{equation}\label{ch1}
\Psi^{\rm KP}(x, {\bf t}; z)=
\exp \Bigl (xz+\sum_{k\geq 1}t_k z^k\Bigr )
\frac{\tau^{\rm KP} (x, {\bf t}-[z^{-1}] )}{\tau^{\rm KP} (x, {\bf t})},
\end{equation}
\begin{equation}\label{ch1a}
\Psi^{\dag \rm KP}(x, {\bf t}; z)=
\exp \Bigl (-xz-\sum_{k\geq 1}t_k z^k\Bigr )
\frac{\tau^{\rm KP} (x, {\bf t}+[z^{-1}] )}{\tau^{\rm KP} (x, {\bf t})},
\end{equation}
where the notation (\ref{tshift}) for the special shift of times is used.
The origin of these expressions is the bilinear relation \cite{JimboMiwa}
\begin{equation}\label{ch2}
\oint_{C_{\infty}}\Psi^{\rm KP}(x, {\bf t}, z)
\Psi^{\dag \rm KP}(x, {\bf t}', z) dz=0
\end{equation}
equivalent to (\ref{ch02}).
A direct consequence of the bilinear relation (\ref{ch2}) with the wave functions
given by (\ref{ch1}), (\ref{ch1a}) is the Hirota-Miwa equation
for the tau-function of the KP hierarchy
\begin{equation}\label{tau6b}
\begin{array}{l}
(z_1-z_2)(z_3-z_4)\tau^{\rm KP} ({\bf t}-[z_1^{-1}]-[z_2^{-1}])
\tau^{\rm KP} ({\bf t}-[z_3^{-1}]-[z_4^{-1}])
\\ \\
\phantom{aaaaa}
+(z_2-z_3)(z_1-z_4)\tau^{\rm KP} ({\bf t}-[z_2^{-1}]-[z_3^{-1}])
\tau^{\rm KP} ({\bf t}-[z_1^{-1}]-[z_4^{-1}])
\\ \\
\phantom{aaaaaaaaaa}
+ (z_3-z_1)(z_2-z_4)\tau^{\rm KP} ({\bf t}-[z_1^{-1}]-[z_3^{-1}])
\tau^{\rm KP} ({\bf t}-[z_2^{-1}]-[z_4^{-1}])=0.
\end{array}
\end{equation}
It is a generating equation for the differential equations of the hierarchy.
The differential equations are obtained by expanding it in negative powers of
$z_1, z_2, z_3, z_4$.
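For instance, it is well known that the lowest nontrivial coefficient of this expansion reproduces the KP equation in the Hirota bilinear form
$$
\bigl (D_1^4+3D_2^2-4D_1D_3\bigr )\, \tau^{\rm KP}\cdot \tau^{\rm KP}=0,
$$
where $D_j$ is the Hirota derivative with respect to $t_j$, i.e.
$D_j^n\, f\cdot g=\partial_s^n\, f({\bf t}+s{\bf e}_j)\, g({\bf t}-s{\bf e}_j)\Bigr |_{s=0}$
with ${\bf e}_j$ the unit vector in the $t_j$-direction.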
In the limit $z_4\to \infty$, $z_3\to \infty$ equation (\ref{tau6b}) becomes
\begin{equation}\label{ch3}
\begin{array}{l}
\displaystyle{
\partial_{x}\log \frac{\tau^{\rm KP}\Bigl (x, {\bf t}+
[z_1^{-1}]-[z_2^{-1}]\Bigr )}{\tau^{\rm KP}(x, {\bf t})}}
\\ \\
\phantom{aaaaaaaaaaaaa}\displaystyle{=
(z_2-z_1)\left (\frac{\tau^{\rm KP}\Bigl (x, {\bf t}+
[z_1^{-1}]\Bigr )\tau^{\rm KP}\Bigl (x, {\bf t}-
[z_2^{-1}]\Bigr )}{\tau^{\rm KP}(x, {\bf t})
\tau^{\rm KP}\Bigl (x, {\bf t}+
[z_1^{-1}]-[z_2^{-1}]\Bigr )}-1\right )}.
\end{array}
\end{equation}
We will need a particular case of (\ref{ch3}) at $z_2=-z_1=z$ which we write
in the form
\begin{equation}\label{ch3a}
\frac{1}{2z}\, \partial_x \!\! \left (e^{2xz} \frac{\tau^{\rm KP}
\Bigl (x,{\bf t}+[-z^{-1}]-[z^{-1}]\Bigr )}{\tau^{\rm KP}({x,\bf t})}\right )=
e^{2xz}\frac{\tau^{\rm KP}
\Bigl (x,{\bf t}+[-z^{-1}]\Bigr )\tau^{\rm KP}\Bigl (x,{\bf t}-[z^{-1}]\Bigr )}{(\tau^{\rm KP}({x,\bf t}))^2}.
\end{equation}
The following theorem gives an expression for the CKP tau-functions in terms of the KP tau-functions satisfying the ``turning points'' constraint (\ref{int3}).
\begin{theorem}
\label{theorem:conditions}
The KP tau-function $\tau^{\rm KP}(x, {\bf t})$
is the KP extension of a solution of the CKP hierarchy if the equation
\begin{equation}\label{ch6}
\partial_{t_2}\log \tau^{\rm KP}\Bigr |_{{\bf t}_{\rm e}=0}=0
\end{equation}
holds for all $t_1, t_3, t_5, \ldots$ when ``even'' times
${\bf t}_{\rm e}=\{t_2, t_4, t_6, \ldots \}$ are set equal to zero.
Conversely, in the equivalence class of any KP tau-function
corresponding to a KP extension of a solution to the CKP hierarchy there is one
which satisfies (\ref{ch6}).
Moreover, the CKP tau-function defined in Theorem \ref{theorem:exist} is equal to
\begin{equation}\label{e1}
\tau (x,{\bf t}_{\rm o})=\sqrt{\tau^{\rm KP}(x,t_1, 0, t_3, 0, \ldots )}.
\end{equation}
\end{theorem}
\medskip
\noindent
{\it Proof}.
Comparing (\ref{ch2}) and (\ref{ckp5a}), we see that the wave functions of the CKP and KP
hierarchies are related as
$$
\begin{array}{l}
\Psi (x, {\bf t}_{\rm o}, z)=e^{\chi (z)}\Psi^{\rm KP}(x, t_1, 0, t_3, 0, \ldots , z),
\\ \\
\Psi (x, {\bf t}_{\rm o}, -z)=e^{-\chi (z)}
\Psi^{\dag \rm KP}(x, t_1, 0, t_3, 0, \ldots , z)
\end{array}
$$
with some function $\chi (z)$ such that $\chi (\infty )=0$, i.e.
\begin{equation}\label{ch4a}
\Psi^{\dag \rm KP}(x, t_1, 0, t_3, 0, \ldots , z)=
e^{2\chi_{\rm e}(z)}\Psi^{\rm KP}(x, t_1, 0, t_3, 0, \ldots , -z),
\end{equation}
where $\chi_{\rm e}(z)=\frac{1}{2}(\chi(z)+\chi (-z))$ is the even part of the function $\chi (z)$.
From (\ref{ch1}), (\ref{ch1a}) and (\ref{ch4a}) it follows
that the KP tau-function is the extension of a solution of the CKP hierarchy
if and only if the equation
\begin{equation}\label{ch4}
\begin{array}{l}
\tau^{\rm KP}\Bigl (x, t_1+z^{-1}, \frac{1}{2}\, z^{-2},
t_3+\frac{1}{3}\, z^{-3}, \frac{1}{4}\, z^{-4}, \ldots \Bigr )
\\ \\
\phantom{aaaaaaaaaaa}=e^{2\chi_{\rm e}(z)}\tau^{\rm KP}\Bigl (x, t_1+z^{-1}, -\frac{1}{2}\, z^{-2},
t_3+\frac{1}{3}\, z^{-3}, -\frac{1}{4}\, z^{-4}, \ldots \Bigr )
\end{array}
\end{equation}
holds identically for all $z, x, t_1, t_3, t_5, \ldots$. Shifting the odd times, we can
rewrite this condition as
\begin{equation}\label{ch4b}
\begin{array}{c}
\log \tau^{\rm KP}\Bigl (x, t_1, \frac{1}{2}\, z^{-2},
t_3, \frac{1}{4}\, z^{-4}, \ldots \Bigr )
-\log \tau^{\rm KP}\Bigl (x, t_1, -\frac{1}{2}\, z^{-2},
t_3, -\frac{1}{4}\, z^{-4}, \ldots \Bigr )=2\chi_{\rm e}(z).
\end{array}
\end{equation}
Comparing the coefficients in front of $z^{-2}$ in the expansions of the left and right hand sides of (\ref{ch4b})
and passing to an equivalent tau-function
if necessary, we get (\ref{ch6}), i.e. the ``only if'' part of the theorem is proven.
We begin the proof of the ``if'' part with the following lemma.
\begin{lemma}
\label{proposition:even}
On solutions of the KP hierarchy
equation (\ref{ch6}) implies that
all derivatives of odd degree higher than $1$ with respect
to various even times are equal to zero for all $x, {\bf t}_{\rm o}$, i.e.
\begin{equation}\label{ch5}
\partial_{t_{2k_1}}\partial_{t_{2k_2}}\ldots \partial_{t_{2k_{2m+1}}}\log \tau^{\rm KP}\Bigr |_{{\bf t}_{\rm e}=0}=0
\end{equation}
for all $k_1, k_2, \ldots , k_{2m+1}\geq 1$, $m\geq 1$. Besides, first order derivatives
with respect to even times satisfy
\begin{equation}\label{ch6a}
\partial_x\partial_{t_4}\log \tau^{\rm KP}\Bigr |_{{\bf t}_{\rm e}=0}
=\partial_x\partial_{t_6}\log \tau^{\rm KP}\Bigr |_{{\bf t}_{\rm e}=0}
=\ldots =0.
\end{equation}
\end{lemma}
\noindent
The proof is given in Appendix A. From equations (\ref{ch6a}) we see that
$$
\partial_{t_{2k}}\log \tau^{\rm KP}\Bigr |_{{\bf t}_{\rm e}=0}=\chi_{2k}(t_3, t_5, \ldots )
$$
does not depend on $x$. Equation (\ref{ch6}) means that
$\chi_2=0$. Next, from (\ref{ch5}) we conclude that
$$
\begin{array}{c}
\log \tau^{\rm KP}\Bigl (x, t_1, t_2,
t_3, t_4, \ldots \Bigr )
-\log \tau^{\rm KP}\Bigl (x, t_1, -t_2,
t_3, -t_4, \ldots \Bigr )=2\displaystyle{\sum_{k\geq 2}\chi_{2k}(t_3, t_5, \ldots )t_{2k}}
\end{array}
$$
is a linear function of ${\bf t}_{\rm e}$. Therefore, we can write
\begin{equation}\label{ch7}
\begin{array}{c}
\tau^{\rm KP}\Bigl (x, t_1, \frac{1}{2}\, z^{-2},
t_3, \frac{1}{4}\, z^{-4}, \ldots \Bigr )=
e^{2\chi_{\rm e}(t_3, t_5, \ldots ;z)}\tau^{\rm KP}\Bigl (x, t_1, -\frac{1}{2}\, z^{-2},
t_3, -\frac{1}{4}\, z^{-4}, \ldots \Bigr ),
\end{array}
\end{equation}
where $\chi_{\rm e}(t_3, t_5, \ldots ;z)$ is a function of the times $t_3, t_5, \ldots $ and
an even function of $z$. In turn, (\ref{ch7}) implies
\begin{equation}\label{ch8}
\begin{array}{c}
\Psi^{\dag {\rm KP}}(x, {\bf \dot t}, z)=
C(t_3+\frac{1}{3}\, z^{-3}, t_5+\frac{1}{5}\, z^{-5}, \ldots ;z)
\Psi^{{\rm KP}}(x, {\bf \dot t}, -z),
\end{array}
\end{equation}
where $C(t_3, t_5, \ldots ;z)=e^{\chi_{\rm e}(t_3, t_5, \ldots ;z)}$ and
we use the short-hand notation $${\bf \dot t}=\{t_1, 0, t_3, 0, \ldots \}.$$
(In this notation equation (\ref{ch4}) takes the form
$\tau^{\rm KP}(x, {\bf \dot t}+[z^{-1}])=\tau^{\rm KP}(x, {\bf \dot t}-[-z^{-1}])$.)
The adjoint wave function $\Psi^{\dag {\rm KP}}$
satisfies the adjoint linear equations (see the independent proof
in the next section) which, when restricted to the locus ${\bf \dot t}$, where $B_k^{\dag}=-B_k$ for odd $k$, coincide with the linear equations for $\Psi^{{\rm KP}}$, so we simultaneously have
\begin{equation}\label{ch9}
\begin{array}{l}
\partial_{t_k}\Psi^{\dag {\rm KP}}(x, {\bf \dot t}, z)=
B_k\Psi^{\dag {\rm KP}}(x, {\bf \dot t}, z),
\\ \\
\partial_{t_k}\Psi^{{\rm KP}}(x, {\bf \dot t}, z)=
B_k\Psi^{{\rm KP}}(x, {\bf \dot t}, z)
\end{array}
\end{equation}
for odd $k$. Substituting (\ref{ch8}) into the first of these equations, we get,
after the change $z\to -z$,
$$
\begin{array}{c}
\partial_{t_k}\Psi^{{\rm KP}}(x, {\bf \dot t}, z)
+\partial_{t_k}\log C\Bigl (t_3-\frac{1}{3}\, z^{-3}, t_5-\frac{1}{5}\, z^{-5}, \ldots ;z\Bigr )\,
\Psi^{{\rm KP}}(x, {\bf \dot t}, z) =B_k
\Psi^{{\rm KP}}(x, {\bf \dot t}, z),
\end{array}
$$
and from the second equation in (\ref{ch9}) we conclude that
$$
\begin{array}{c}
\partial_{t_k}\log C\Bigl (t_3-\frac{1}{3}\, z^{-3}, t_5-\frac{1}{5}\, z^{-5}, \ldots ;z\Bigr )=0,
\end{array}
$$
i.e. $\chi_{\rm e}(t_3, t_5, \ldots ;z)=\chi_{\rm e}(z)$
is an even function of $z$ which does not depend on the times. (This function can be eliminated in
(\ref{ch7}) by passing to an equivalent tau-function.) Therefore, equation (\ref{ch4}),
which guarantees that $\tau^{\rm KP}$ is the KP extension of a solution to the CKP hierarchy,
is proved.
\medskip
\noindent
{\bf Remark.}
Passing to an equivalent tau-function using the transformation (\ref{linear}),
one obtains the condition $\partial_{t_{2}}\log \tau^{\rm KP}\Bigr |_{{\bf t}_{\rm e}=0}=\gamma_{2}$
instead of (\ref{ch6}). Conversely, if
$\partial_{t_{2}}\log \tau^{\rm KP}\Bigr |_{{\bf t}_{\rm e}=0}=\gamma_{2}$ with some nonzero
$\gamma_{2}$, it is possible to pass to an equivalent tau-function
satisfying (\ref{ch6}) by a transformation
of the form (\ref{linear}).
\medskip
In order to prove that $\tau =\sqrt{\vphantom{A^A}\tau^{\rm KP}}$ (\cite{CH14})
we compare two expressions for the wave
function $\Psi$ of the CKP hierarchy. The first one is
in terms of the KP tau-function (satisfying (\ref{ch6})),
\begin{equation}\label{e5a}
\Psi^{\rm KP}
= e^{xz+\zeta ({\bf t}_{\rm o}, z)}\frac{\tau^{\rm KP}(x, t_1-z^{-1}, -\frac{1}{2}\, z^{-2},
t_3-\frac{1}{3}\, z^{-3}, -\frac{1}{4}\, z^{-4}, \ldots )}{\tau^{\rm KP}
(x, t_1, 0, t_3, 0, \ldots )},
\end{equation}
and the second one (\ref{ckp6}) is in terms of the CKP tau-function $\tau$.
Recall that
\begin{equation}\label{e4a}
\Psi =z^{-1/2} \sqrt{\vphantom{B^{a^a}}\partial_x \log \psi} \cdot \psi =
(2z)^{-1/2}\sqrt{\vphantom{B^{a^a}}\partial_x \psi^2}
\end{equation}
(see (\ref{e4})),
where
\begin{equation}\label{e3a}
\psi = e^{xz+\zeta ({\bf t}_{\rm o}, z)}
\frac{\tau (x, {\bf t}_{\rm o}-2[z^{-1}]_{\rm o})}{\tau (x, {\bf t}_{\rm o})}.
\end{equation}
Comparing (\ref{e4a})
and (\ref{e5a}), we get the equation
\begin{equation}\label{e5b}
\frac{1}{2z}\, \partial_x \!\! \left (e^{2xz}\frac{\tau^2 \Bigl (
x, {\bf t}_{\rm o}-2[z^{-1}]_{\rm o}\Bigr )}{\tau^2 (
x, {\bf t}_{\rm o})}\right )=
e^{2xz}\left (\frac{\tau^{\rm KP}(x, {\bf \dot t}-[z^{-1}])}{\tau^{\rm KP}
(x, {\bf \dot t})}\right )^2,
\end{equation}
where we again use the short-hand notation ${\bf \dot t}=\{t_1, 0, t_3, 0, \ldots \}$.
Then
using equation (\ref{ch3a}) we get that (\ref{e5b}) is equivalent to
the differential equation
\begin{equation}\label{e6}
\partial_x \varphi = -2z \varphi ,
\end{equation}
where
\begin{equation}\label{e7}
\varphi = \frac{\tau^2 \Bigl (x, {\bf t}_{\rm o}-2[z^{-1}]_{\rm o}
\Bigr )}{\tau^2 (x, {\bf t}_{\rm o})}-
\frac{\tau^{\rm KP}\Bigl (x, {\bf \dot t}-2[z^{-1}]_{\rm o}\Bigr )}{\tau^{\rm KP}
(x, {\bf \dot t})}.
\end{equation}
In (\ref{e7}), ${\bf \dot t}-2[z^{-1}]_{\rm o}=\{t_1-2z^{-1}, 0, t_3-\frac{2}{3}\, z^{-3}, 0,
\ldots \}$.
The general solution of the differential equation (\ref{e6}) is
$$
\varphi =c(z, t_3, t_5, \ldots )e^{-2(x+t_1)z}
$$
but from (\ref{e7}) it follows that $\varphi$
is expanded in a power series
$\varphi =\varphi_1 z^{-1}+\varphi_2 z^{-2}+\ldots$ as $z\to \infty$; since the exponential
$e^{-2(x+t_1)z}$ admits no such expansion, $c$ must be equal to 0. Therefore, $\varphi=0$, i.e.
\begin{equation}\label{e8}
\frac{\tau^2 \Bigl (x, {\bf t}_{\rm o}-2[z^{-1}]_{\rm o}
\Bigr )}{\tau^2 (x, {\bf t}_{\rm o})}=
\frac{\tau^{\rm KP}\Bigl (x, {\bf \dot t}-2[z^{-1}]_{\rm o}\Bigr )}{\tau^{\rm KP}
(x, {\bf \dot t})}
\end{equation}
for all $z$. This is an identity on solutions to the KP/CKP hierarchies. It follows from
(\ref{e8}) that $\tau^{\rm KP}={\rm const}\cdot \tau^2$, i.e.
$\tau (x, {\bf t}_{\rm o})=\sqrt{\tau^{\rm KP} (x, {\bf \dot t})}$ is a tau-function
of the CKP hierarchy.
\square
\medskip \noindent
{\bf Remark.} Equation (\ref{e5b}) is the CKP analog of the BKP statement that the corresponding
$\tau^{\rm KP}$ is a full square, i.e. $\tau = \sqrt{\tau^{\rm KP}\vphantom{A^A}}$
is an entire function of its variables. In the CKP case
it is $\partial_x \psi^2$ that is a full square.
\section{Algebraic-geometrical solutions to the KP and\\ CKP hierarchies}
The algebraic-geometrical construction of quasi-periodic solutions to
the CKP hierarchy briefly outlined in \cite{DJKM81} is a reduction
of the algebraic-geometrical construction of solutions to the KP hierarchy proposed in \cite{Krichever77,Krichever77a}. The main goal of this section is to give
a purely algebraic-geometrical proof of an identity for the Riemann theta-function
of a curve with an involution having at least one fixed point. This identity is an algebraic-geometrical incarnation of the relations between the KP and CKP tau-functions discussed in Section 2.
\subsection{Preliminaries}\label{sub:prel}
Let $\Gamma$ be a smooth compact algebraic curve of genus $g$. We fix a canonical basis of
cycles $a_{\alpha}, b_{\alpha}$ ($\alpha =1, \ldots , g$) with the intersections
$a_{\alpha}\circ a_{\beta}=b_{\alpha}\circ b_{\beta}=0$,
$a_{\alpha}\circ b_{\beta}=\delta_{\alpha \beta}$ and a basis of holomorphic
differentials $d\omega_{\alpha}$ normalized by the condition
$\displaystyle{\oint_{a_{\alpha}}d\omega_{\beta}=\delta_{\alpha \beta}}$.
The period matrix is defined as
\begin{equation}\label{qp1}
T_{\alpha \beta}=\oint_{b_{\alpha}}d\omega_{\beta}, \qquad \alpha , \beta =1, \ldots , g.
\end{equation}
It is a symmetric matrix with positive definite imaginary part.
The Riemann theta-function is defined by the series
\begin{equation}\label{qp2}
\theta(\vec z)=\theta(\vec z|T)=
\sum_{\vec n \in \raise-1pt\hbox{$\mbox{\Bbbb Z}$} ^{g}}e^{\pi i (\vec n, T\vec n)+2\pi i (\vec n, \vec z)},
\end{equation}
where $\vec z=(z_1, \ldots , z_g)$ and $\displaystyle{(\vec n, \vec z)=
\sum_{\alpha =1}^g n_{\alpha}z_{\alpha}}$.
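For example, at $g=1$ the series (\ref{qp2}) reduces to the classical Jacobi theta-function:
$$
\theta (z|T)=\sum_{n\in \raise-1pt\hbox{$\mbox{\Bbbb Z}$}}e^{\pi i n^2 T+2\pi i nz}
=\vartheta_3(\pi z, q), \qquad q=e^{\pi i T},
$$
in the standard notation $\vartheta_3(u, q)=\sum_{n}q^{n^2}e^{2inu}$.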
The Jacobian of the curve $\Gamma$ is the $g$-dimensional complex torus
\begin{equation}\label{qp4}
J(\Gamma )=\mbox{\Bbb C} ^g /\{2\pi i \vec N +2\pi i T \vec M\},
\end{equation}
where $\vec N$, $\vec M$ are $g$-dimensional vectors with integer components.
Fix a point $Q_0\in \Gamma$ and define the Abel map $\vec A(P)$, $P\in \Gamma$
from $\Gamma$ to $J(\Gamma )$, as
\begin{equation}\label{qp3}
\vec A(P)=\vec \omega (P)=
\int_{Q_0}^P d \vec \omega , \qquad d\vec \omega =(d\omega_1, \ldots , d\omega_g ).
\end{equation}
The Abel map can be extended to the group of divisors ${\cal D}=n_1Q_1+\ldots +n_KQ_K$ as
\begin{equation}\label{qp3a}
\vec A({\cal D})=\sum_{i=1}^K n_i\int_{Q_0}^{Q_i} d \vec \omega =\sum_{i=1}^K
n_i\vec A(Q_i).
\end{equation}
Let $P_{\infty}\in \Gamma$ be a marked point and $k^{-1}$ a local parameter
in a neighborhood of the marked point ($k=\infty$ at $P_{\infty}$). Let $d\Omega_j$
be abelian differentials of the second kind with the only pole at $P_{\infty}$ of the form
\begin{equation}\label{defdomega}
d\Omega_j = dk^j +O(k^{-2})dk, \quad k\to \infty
\end{equation}
normalized by the condition $\displaystyle{\oint_{a_{\alpha}}d\Omega_{j}=0}$, and $\Omega_j$
be the (multi-valued) functions
$$
\Omega_j(P)=\int_{Q_0}^{P}d\Omega_j +q_j,
$$
where the constants $q_j$ are chosen in such a way that $\Omega_i(P)=k^i +O(k^{-1})$, namely,
\begin{equation}\label{qp3b}
\Omega_i(P)=k^i +\sum_{j\geq 1} \frac{1}{j}\, \Omega_{ij}k^{-j}.
\end{equation}
The Riemann identity implies that the matrix $\Omega_{ij}$ is symmetric:
$\Omega_{ij}=\Omega_{ji}$.
Set
\begin{equation}\label{qp5}
U_j^{\alpha}=\frac{1}{2\pi i}\oint_{b_{\alpha}}d\Omega_j, \qquad
\vec U_j =(U_j^{1}, \ldots , U_j^g).
\end{equation}
One can prove the following relation:
\begin{equation}\label{qp6a}
d\vec \omega =\sum_{j\geq 1}\vec U_j k^{-j-1}dk
\end{equation}
or
\begin{equation}\label{qp6}
\vec A(P)-\vec A(P_{\infty})=\int_{P_{\infty}}^P d\vec \omega =
-\sum_{j\geq 1}\frac{1}{j}\, \vec U_j k^{-j}.
\end{equation}
We will also use the following fact \cite{Fay,Mumford}: for any non-special effective divisor
${\cal D}=Q_1+\ldots +Q_g$ of degree $g$ the function
$$
f(P)=\theta \Bigl (\vec A(P)-\vec A({\cal D}) -\vec K\Bigr )
$$
has exactly $g$ zeros at the points $Q_1, \ldots , Q_g$. Here $\vec K=
(K_1, \ldots , K_g)$ is the
vector of Riemann's constants
\begin{equation}\label{qp7}
K_{\alpha}= \pi i +\pi i T_{\alpha \alpha}-2\pi i
\sum_{\beta \neq \alpha}\oint_{a_{\beta}}\omega_{\alpha} (P)d\omega_{\beta}(P).
\end{equation}
Let ${\cal K}$ be the canonical class of divisors (the equivalence class of divisors
of poles and zeros of abelian differentials on $\Gamma$), then one can show that
\begin{equation}\label{qp8}
2\vec K=-\vec A({\cal K}).
\end{equation}
It is known that $\mbox{deg}\, {\cal K}=2g-2$. In particular, this means that
holomorphic differentials have $2g-2$ zeros on $\Gamma$.
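As a simple illustration, consider a genus $2$ hyperelliptic curve $y^2=P_5(x)$, where $P_5$ is a polynomial of degree $5$. Near the point at infinity one can take a local parameter $t$ such that
$$
x=t^{-2},\qquad y=t^{-5}(1+O(t)),\qquad \mbox{so that}\quad
\frac{dx}{y}=-2t^{2}(1+O(t))\, dt ,
$$
i.e. the holomorphic differential $dx/y$ has a double zero there, in agreement with
$\mbox{deg}\, {\cal K}=2g-2=2$.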
We also need the bi-differential $d_P d_Q \Omega (P, Q)$ which is symmetric in
$P, Q$, whose
only singularity is a second order
pole at $P=Q$, and whose integrals over $a$-cycles vanish. It is related to the
differentials $d\Omega _j$ as follows:
\begin{equation}\label{qp81}
\mathop{\hbox{res}}\limits_{Q=P_{\infty}} \Bigl (k^i(Q)d_P d_Q \Omega (P, Q)\Bigr ) =-d\Omega_i (P).
\end{equation}
The expansion in the local parameters is
\begin{equation}\label{qp82}
d_P d_Q \Omega (P, Q)=\Biggl ( \frac{1}{(k^{-1}(P)\! -\! k^{-1}(Q))^2}-
\sum_{i,j\geq 1} \Omega_{ij} k^{1-i}(P)k^{1-j}(Q)\Biggr ) dk^{-1}(P)dk^{-1}(Q).
\end{equation}
In fact, this bi-differential can be expressed in terms of the odd theta-function
$$\theta_{*}(\vec z)=\theta \left [\begin{array}{c}\vec \delta ' \\ \vec \delta^{''}
\end{array}\right ] (\vec z)
=\sum_{\vec n \in \raise-1pt\hbox{$\mbox{\Bbbb Z}$} ^{g}}e^{\pi i (\vec n +\vec \delta ',
T(\vec n +\vec \delta '))+2\pi i (\vec n+\vec \delta ', \vec z+\vec \delta '')},$$
where $(\vec \delta ', \vec \delta '')$ is a non-singular odd theta-characteristic.
One has:
\begin{equation}\label{qp83}
d_P d_Q \Omega (P, Q)=d_P d_Q \log \theta_{*}\Bigl (\vec A(P)-\vec A(Q)\Bigr ).
\end{equation}
Calculating the double integral
$$
\int_{P_{\infty}}^{P_1}\int_{Q_0}^{P_2}d_P d_Q \Omega (P, Q)
$$
in two ways (using first (\ref{qp82}) and then (\ref{qp83})), we obtain the equality
$$
\log \frac{(k^{-1}(P_1)-k^{-1}(P_2))
k^{-1}(Q_0)}{(k^{-1}(P_1)-k^{-1}(Q_0))k^{-1}(P_2)}
-\sum_{i,j\geq 1} \Omega_{ij} \frac{k^{-i}(P_1)k^{-j}(P_2)}{ij}+
\sum_{i,j\geq 1} \Omega_{ij} \frac{k^{-i}(P_1)k^{-j}(Q_0)}{ij}
$$
$$
=\log \frac{\theta_{*}\Bigl (\vec A(P_2)-\vec A(P_1)\Bigr )
\theta_{*}\Bigl (\vec A(P_\infty )\Bigr )}{\theta_{*}\Bigl (\vec A(P_2)-
\vec A(P_\infty )\Bigr )\theta_{*}\Bigl (\vec A(P_1 )\Bigr )}\, .
$$
Letting $Q_0\to P_{\infty}$ here, we arrive at the important relation
\begin{equation}\label{qp84}
\exp \Biggl (-\sum_{i,j\geq 1}\Omega_{ij}\frac{k_1^{-i}k_2^{-j}}{ij}\Biggr )=
\frac{C \theta_{*}\Bigl ( \vec A(P_1)-\vec A(P_2)\Bigr )}{(k_1-k_2)
\theta_{*}\Bigl ( \vec A(P_1)\! -\! \vec A(P_\infty )\Bigr )
\theta_{*}\Bigl ( \vec A(P_2)\! -\! \vec A(P_\infty )\Bigr )},
\end{equation}
where
$$C=\sum_{\alpha =1}^{g}U_1^{\alpha} \theta_{*, \alpha} (\vec 0), \qquad
\theta_{*, \alpha}(\vec 0)=\frac{\partial \theta_{*}(\vec z)}{\partial z_{\alpha}}\Biggr |_{\vec z=0}
$$
is a constant and $k_1=k(P_1), k_2=k(P_2)$. In particular, letting
$k_1\to k_2$, we get
\begin{equation}\label{qp85}
\exp \Biggl (-\sum_{i,j\geq 1}\Omega_{ij}\frac{k^{-i-j}}{ij}\Biggr )dk=
\frac{Cd\zeta}{\theta_{*}^2\Bigl ( \vec A(P)\! -\! \vec A(P_\infty )\Bigr )},
\end{equation}
where $d\zeta$ is the holomorphic differential
\begin{equation}\label{qp86}
d\zeta =\sum_{\alpha =1}^g \theta_{*, \alpha}(\vec 0)d\omega_{\alpha}.
\end{equation}
As is explained in \cite{Mumford}, the differential $d\zeta$ has double zeros at
$g-1$ points $R_1, \ldots , R_{g-1}$ while the function
$$
f_{*}(P)= \theta_{*}\Bigl ( \vec A(P)\! -\! \vec A(P_\infty )\Bigr )
$$
has simple zeros at the same points $R_i$ and at $P_{\infty}$. Therefore, the differential
in the right hand side of (\ref{qp85}) has its only (second order) pole at $P_{\infty}$
and no zeros. However, this differential is well-defined only on a covering of the curve
$\Gamma$ because it is not single-valued.
Finally, we mention the trisecant Fay identity \cite{Fay}:
\begin{equation}\label{Fay}
\begin{array}{c}
\theta_{*}\Bigl (\vec A(P_1)-\vec A(P_2)\Bigr )
\theta_{*}\Bigl (\vec A(P_3)-\vec A(P_4)\Bigr )
\theta \Bigl (\vec z +\vec A(P_1)+\vec A(P_2)\Bigr )
\theta \Bigl (\vec z +\vec A(P_3)+\vec A(P_4)\Bigr )
\\ \\
+\theta_{*}\Bigl (\vec A(P_2)-\vec A(P_3)\Bigr )
\theta_{*}\Bigl (\vec A(P_1)-\vec A(P_4)\Bigr )
\theta \Bigl (\vec z +\vec A(P_2)+\vec A(P_3)\Bigr )
\theta \Bigl (\vec z +\vec A(P_1)+\vec A(P_4)\Bigr )
\\ \\
+ \theta_{*}\Bigl (\vec A(P_3)\! -\!\vec A(P_1)\Bigr )
\theta_{*}\Bigl (\vec A(P_2)\! -\! \vec A(P_4)\Bigr )
\theta \Bigl (\vec z \! +\! \vec A(P_1)\! +\! \vec A(P_3)\Bigr )
\theta \Bigl (\vec z \! +\! \vec A(P_2)\! +\! \vec A(P_4)\Bigr )=0.
\end{array}
\end{equation}
\subsection{The Baker-Akhiezer function}
Let $x, t_1, t_2, t_3, \ldots$ be a set of complex parameters
(here we assume that only a finite
number of them are different from zero) and let $\Gamma$ be a smooth
genus $g$ algebraic curve with fixed local coordinate $k^{-1}(P)$ in the
neighborhood of a fixed point $P_\infty$.
\begin{lemma}\label{lm:BA} (\cite{Krichever77,Krichever77a}) Let
${\cal D}=Q_1+ \ldots + Q_g$ be an effective non-special divisor of
degree $g$. Then there is a unique function $\Psi_{BA}(x,{\bf t},P)$ such that:
$1^0$. As a function of $P\in \Gamma$ it is meromorphic away from the marked
point $P_\infty$ with poles at the points $Q_s$ of multiplicity not
greater than the multiplicity of $Q_s$ in $\cal D$.
$2^0$. In the neighborhood of $P_{\infty}$ it has the form
\begin{equation}\label{ba101}
\Psi_{BA} = \exp \Bigl (xk+\sum_{j\geq 1}t_j k^j\Bigr )
\Bigl (1+\xi_1 k^{-1} +\xi_2 k^{-2} +\ldots \Bigr ), \ \ \ k=k(P).
\end{equation}
\end{lemma}
The function $\Psi_{BA}$ is called the (one-point) Baker-Akhiezer function.
An easy corollary of the uniqueness of the Baker-Akhiezer function is
\begin{theorem}\label{thm:BA}(\cite{Krichever77,Krichever77a})
Let $\Psi_{BA}$ be the Baker-Akhiezer function defined by
Lemma \ref{lm:BA}. Then for each $j=1,2,3, \ldots$ there is a unique
differential operator $B_j$ such that the equation
\begin{equation}\label{w5}
(\partial_{t_j}-B_j)\Psi_{BA} =0
\end{equation}
holds.
\end{theorem}
The operators $B_j$ above can be easily expressed in terms of the dressing operator $W$ for the Baker-Akhiezer function. Namely, the infinite series (\ref{ba101}) can be represented as
\begin{equation}\label{w1}
\Psi_{BA} = W \exp \Bigl (xk+\sum_{j\geq 1}t_j k^j\Bigr ),
\end{equation}
where $W$ is of the form (\ref{ckp1a}). The corresponding Lax operator of the KP
hierarchy is $\mathcal L=W\partial_xW^{-1}$.
By definition we have
\begin{equation}\label{w4}
\mathcal L\Psi_{BA} =k\Psi_{BA} .
\end{equation}
The operator $B_j$ in Theorem \ref{thm:BA} was defined as the unique
monic differential operator of order $j$ such that the congruence
$$ (k^j-B_j) \Psi_{BA}=O(1/k)\exp \Bigl (xk+\sum_{l\geq 1}t_l k^l\Bigr )$$
holds. Using (\ref{w4}), it is easy to identify $B_j=\mathcal L^j_+$. Indeed,
$$(k^j-\mathcal L^j_+)\Psi_{BA} = (\mathcal L^j-\mathcal L_+^j)\Psi_{BA} =\mathcal L_-^j\Psi_{BA}=O(1/k)\exp \Bigl (xk+\sum_{l\geq 1}t_l k^l\Bigr ).
$$
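As the simplest illustration, a short computation with $W=1+\xi_1\partial_x^{-1}+\xi_2\partial_x^{-2}+\ldots$ gives
$$
\mathcal L=W\partial_x W^{-1}=\partial_x-\xi_1'\, \partial_x^{-1}+O(\partial_x^{-2}),
\qquad
B_2=\mathcal L^2_+=\partial_x^2-2\xi_1' ,
$$
so that the $t_2$-flow in (\ref{w5}) is the non-stationary Schr\"odinger equation
$\partial_{t_2}\Psi_{BA}=(\partial_x^2-2\xi_1')\Psi_{BA}$.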
The compatibility conditions of equations (\ref{w5}) imply
\begin{corollary} The operators $B_j$ defined by the BA function satisfy the equations
\begin{equation}\label{w6}
[\partial_{t_j}-B_j, \partial_{t_l}-B_l]=0.
\end{equation}
\end{corollary}
This is the Zakharov-Shabat form (\ref{kp10}) of the KP hierarchy.
Note that equation (\ref{w5}) implies the evolution equation for the
dressing operator:
\begin{equation}\label{w6a}
\partial_{t_j}W=-(W\partial_x^j W^{-1})_{-}W,
\end{equation}
where $(\ldots )_{-}$ is the projection to negative powers of the operator $\partial_x$.
\subsection{The dual Baker-Akhiezer function}
For further comparison with the tau-functional formulation
of the KP hierarchy let us present the notion of the {\it dual} (adjoint)
Baker-Akhiezer function introduced in \cite{Cherednik} (see the
details in \cite{kp1,handbook}).
First we define duality for divisors of degree $g$.
For a generic effective degree $g$ divisor ${\cal D}=Q_1+\ldots+Q_g$ there
is a unique, up to a constant factor, abelian differential $d\Omega$ with
the only (second order) pole at $P_{\infty}$ vanishing (with the
corresponding multiplicity) at the points $Q_s$. The zero
divisor of $d\Omega$ is of degree $2g$. Hence, it has $g$ other zeros at some points
$Q_1^{\dag}, \ldots , Q_g^{\dag}$. The divisor
$${\cal D}^{\dag}=Q_1^{\dag}+ \ldots + Q_g^{\dag}$$
is called {\it dual to $\cal D$}.
By definition we have the equality
\begin{equation}\label{ba201}
{\cal D}+{\cal D}^{\dag}={\cal K}+2P_{\infty}
\end{equation}
(where ${\cal K}$ is the canonical class),
which under the Abel transform takes the form
\begin{equation}\label{ba301}
\vec A({\cal D})+\vec A({\cal D}^{\dag})+2\vec K -2\vec A(P_{\infty})=0.
\end{equation}
The dual (adjoint) Baker-Akhiezer function
$\Psi^{\dag}_{BA}$ has the divisor of poles ${\cal D}^{\dag}$ and
in the vicinity of $P_{\infty}$ it
has the form
\begin{equation}\label{ba401}
\Psi^{\dag}_{BA} = \exp \Bigl (-xk-\sum_{j\geq 1}t_j k^j\Bigr )
\Bigl (1+\xi_1^{\dag} k^{-1} +\xi_2^{\dag} k^{-2} +\ldots \Bigr ).
\end{equation}
The differential $\Psi_{BA} (x,{\bf t}, P)\Psi_{BA}^{\dag}(x, {\bf t}', P)d\Omega (P)$,
where we have denoted the set of times
as ${\bf t}=\{ t_1, t_2, t_3, \ldots \}$ for brevity, is holomorphic
everywhere on $\Gamma$ except the point $P_{\infty}$ (because poles of the Baker-Akhiezer
functions are canceled by zeros of $d\Omega$). Therefore, its ``residue'' at this point
is equal to zero, i.e.,
\begin{equation}\label{ba701}
\oint_{C_{\infty}} \! \Psi_{BA} (x,{\bf t}, P)\Psi_{BA}^{\dag}(x, {\bf t}', P)d\Omega (P) =0
\end{equation}
for all ${\bf t}, {\bf t}'$, where $C_{\infty}$ is a small contour around the point $P_{\infty}$.
Equation (\ref{ba701}) is equivalent to the equation
\begin{equation}\label{ba701b}
\mathop{\hbox{res}}\limits_{P_\infty}\,\left(\partial_x^i \Psi_{BA} (x,{\bf t}, P)
\Psi_{BA}^{\dag}(x, {\bf t}, P)\right)d\Omega (P)=0,\,\ \ i=1,2,3,\ldots
\end{equation}
from which one can derive the following theorem (see \cite{kp1}
and \cite{handbook} for more details).
\begin{theorem}
The dual Baker-Akhiezer function is equal to
\begin{equation}\label{ad1}
\Psi_{BA}^{\dag}=(W^{\dag})^{-1} \exp \Bigl (-xk -\sum_{j\geq 1}t_jk^j\Bigr )
\end{equation}
and satisfies the adjoint equations
\begin{equation}\label{ad2}
\mathcal L^{\dag}\Psi_{BA}^{\dag}=k\Psi_{BA}^{\dag},
\quad -\partial_{t_j}\Psi_{BA}^{\dag}=B_j^{\dag}\Psi_{BA}^{\dag}.
\end{equation}
\end{theorem}
\noindent
For completeness we outline here a direct proof of the theorem.
Equations (\ref{ad2}) immediately follow from (\ref{w6a}) and (\ref{ad1}):
$$
-\partial_{t_j}\Psi_{BA}^{\dag}=\Bigl (k^j (W^{\dag})^{-1}-(W^{\dag})^{-1}\partial_{t_j}W^{\dag}(W^{\dag})^{-1}
\Bigr )\exp \Bigl (-xk -\sum_{j\geq 1}t_jk^j\Bigr )
$$
$$
=\Bigl (k^j (W^{\dag})^{-1}-((W^{\dag})^{-1}(-\partial_x )^j W^{\dag})_- (W^{\dag})^{-1}
\Bigr )\exp \Bigl (-xk -\sum_{j\geq 1}t_jk^j\Bigr )
$$
$$
=\Bigl ((\mathcal L^{\dag})^j -(\mathcal L^{\dag})^j_-\Bigr ) \Psi_{BA}^{\dag}=B_j^{\dag}\Psi_{BA}^{\dag}.
$$
In order to prove (\ref{ad1}) we note that equation (\ref{ba701}) written in the local
parameter $k$ implies
$$
b_m=\frac{1}{2\pi i}\,
\partial_{x'}^m \oint_{C_{\infty}}\! \Psi_{BA} (x, {\bf t}, k)\Psi_{BA}^{\dag} (x', {\bf t}, k)
\varphi (k)\frac{dk}{2\pi i} \Biggr |_{x'=x}=0 \quad \mbox{for all $m\geq 0$.}
$$
Here
$$
\varphi (k) = \frac{d\Omega}{dk} =\sum_{j\geq 0}\varphi_j k^{-j}.
$$
We set
$$
\Psi_{BA}^{\dag}=V\exp \Bigl (-xk -\sum_{j\geq 1}t_jk^j\Bigr ), \quad
V=1+\xi_1^{\dag}\partial_x^{-1}+\xi_2^{\dag}\partial_x^{-2}+\ldots
$$
We have:
$$
b_m=\frac{1}{2\pi i}\oint_{C_{\infty}}
\Bigl (\sum_{l\geq 0}\varphi_l k^{-l}\Bigr )
\Bigl (\sum_{j\geq 0}\xi_j(x)k^{-j}\Bigr )\partial_{x'}^m
\Bigl (\sum_{i\geq 0}\xi_i^{\dag}(x')(-k)^{-i}\Bigr )e^{(x-x')k} \frac{dk}{2\pi i} \Biggr |_{x'=x}
$$
$$
=\frac{1}{2\pi i}\oint_{C_{\infty}}
\Bigl (\sum_{l\geq 0}\varphi_l k^{-l}\Bigr )
\Bigl (\sum_{j\geq 0}\xi_j k^{-j}\Bigr )(\partial_x -k)^m
\Bigl (\sum_{i\geq 0}\xi_i^{\dag} (-k)^{-i}\Bigr )\frac{dk}{2\pi i}
$$
$$
=\sum_{l=0}^m (-1)^{l}\frac{m!}{(m-l)!}\, \varphi_l
\! \sum_{i+j+s=m-l+1}(-1)^{m-l+i+s}\left (\! \begin{array}{c}m-l\\ s \end{array} \! \right )
\xi_j \partial_x^s \xi^{\dag}_i.
$$
But the last sum is the coefficient of $(-1)^m \partial_x^{-m+l-1}$ in the operator
$WV^{\dag}$, so we can write:
$$
b_m= \sum_{l=0}^m (-1)^{m-l}\frac{m!}{(m-l)!}\, \varphi_l
\Bigl (W V^{\dag}\Bigr )_{-m+l-1}=0 \quad \mbox{for all $m\geq 0$.}
$$
This is a homogeneous triangular system of linear equations for the coefficients
$\Bigl (W V^{\dag}\Bigr )_{-l}$. The unique solution is
$\Bigl (W V^{\dag}\Bigr )_{-l}=0$ for all $l\geq 1$, hence
$WV^{\dag}=1$, i.e. $V=(W^{\dag})^{-1}$.
\square
\subsection{Theta-functional formulae}
The Baker-Akhiezer function can be explicitly written in terms of the Riemann-theta function \cite{Krichever77}:
\begin{equation}\label{ba501}
\begin{array}{l}
\displaystyle{
\Psi_{BA} (P)= \exp \Bigl (x\Omega_1(P)+\! \sum_{j\geq 1}t_j \Omega_j (P)\Bigr )}
\\ \\
\displaystyle{
\phantom{aaaaaaaaaa}\times \,
\frac{\theta \Bigl (\vec A(P)+\vec U_1x+\sum\limits_{j\geq 1}\vec U_j t_j -
\vec A({\cal D}) -\vec K\Bigr )\theta \Bigl (\vec A({\cal D}) +
\vec K-\vec A(P_{\infty})\Bigr )}{\theta \Bigl (\vec A(P)-
\vec A({\cal D}) -\vec K\Bigr )\theta \Bigl (\vec U_1x+\sum\limits_{j\geq 1}\vec U_j t_j
-\vec A({\cal D}) -
\vec K+\vec A(P_{\infty})\Bigr ) }}.
\end{array}
\end{equation}
For the proof of (\ref{ba501}) it is enough to check that the right-hand
side of (\ref{ba501}) $(a)$ is a single-valued function on $\Gamma$, i.e.,
does not depend on the choice of the path of integration in the definition of
the Abel map and of the Abelian integral $\Omega_j$ (which is assumed to be
the same for both objects); $(b)$ has the required exponential singularity
at the marked point $P_\infty$; $(c)$ is meromorphic outside of $P_\infty$
with divisor of poles at $\cal D$.
The proof of $(a)$ directly follows from the monodromy properties of the
theta-function and the definition (\ref{qp5}) of the vectors $U_j$. The
proof of $(b)$ follows from the definition of the differentials $d\Omega_j$.
The $(c)$ part follows from the Jacobi inversion theorem above.
The corresponding expression for the adjoint Baker-Akhiezer function is
\begin{equation}\label{ba5a01}
\begin{array}{l}
\displaystyle{
\Psi_{BA}^{\dag} (P)= \exp \Bigl (-x\Omega_1(P)-\! \sum_{j\geq 1}t_j \Omega_j (P)\Bigr )}
\\ \\
\displaystyle{
\phantom{aaaaaaaaaa}\times \,
\frac{\theta \Bigl (\vec A(P)-\vec U_1x-\sum\limits_{j\geq 1}\vec U_j t_j -
\vec A({\cal D}^{\dag}) -\vec K\Bigr )\theta \Bigl (\vec A({\cal D}^{\dag}) +
\vec K-\vec A(P_{\infty})\Bigr )}{\theta \Bigl (\vec A(P)-
\vec A({\cal D}^{\dag}) -\vec K\Bigr )\theta \Bigl (\vec U_1x+\sum\limits_{j\geq 1}\vec U_j t_j
+\vec A({\cal D}^{\dag}) +
\vec K-\vec A(P_{\infty})\Bigr ) }}.
\end{array}
\end{equation}
Using the relation (\ref{ba3}), one can rewrite it in the form
\begin{equation}\label{ba601}
\begin{array}{l}
\displaystyle{
\Psi_{BA}^{\dag} (P)= \exp \Bigl (-x\Omega_1(P)-\! \sum_{j\geq 1}t_j \Omega_j (P)\Bigr )}
\\ \\
\displaystyle{
\phantom{aaaaa}\times \,
\frac{\theta \Bigl (\vec A(P)-\vec U_1x-\sum\limits_{j\geq 1}\vec U_j t_j +
\vec A({\cal D}) +\vec K -2\vec A(P_{\infty})\Bigr )\theta \Bigl (\vec A({\cal D}) +
\vec K-\vec A(P_{\infty})\Bigr )}{\theta \Bigl (\vec A(P)+
\vec A({\cal D}) +\vec K-2\vec A(P_{\infty})\Bigr )
\theta \Bigl (\vec U_1x+\sum\limits_{j\geq 1}\vec U_j t_j
-\vec A({\cal D}) -
\vec K+\vec A(P_{\infty})\Bigr ) }}.
\end{array}
\end{equation}
In order to find the solution $u_1=u_1(x, t_1, t_2, t_3, \ldots )$ to the KP hierarchy,
one should find the coefficient $\xi_1$ in the expansion
$$
\log \Psi_{BA} = xk +\sum_{j\geq 1}t_jk^j +\xi_1 k^{-1}+O(k^{-2}).
$$
A direct calculation with the help of the explicit formula (\ref{ba501}) yields
\begin{equation}\label{xi1}
\xi_1 = x\Omega_{11} +\sum_{i\geq 1}t_i \Omega_{1i} -\partial_x \log \theta \Bigl (
\vec U_1 x +\sum_{j\geq 1}\vec U_j t_j +\vec Z\Bigr ) + \mbox{const},
\end{equation}
where $\vec Z = -\vec A({\cal D})-\vec K +\vec A(P_{\infty})$. Therefore,
\begin{equation}\label{xi2}
u_1=-\xi_1'= \partial_x^2 \log \theta \Bigl (
\vec U_1 x +\sum_{j\geq 1}\vec U_j t_j +\vec Z\Bigr )-\Omega_{11}.
\end{equation}
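In the simplest case $g=1$ formula (\ref{xi2}) can be illustrated numerically. The Python sketch below (with illustrative sample values $T=i$, $U_1=0.7$, $\vec Z=0.123+0.2i$, not taken from the text) checks that $\partial_x^2 \log \theta (\vec U_1 x+\vec Z)$, i.e. the $x$-dependent part of $u_1$, is periodic in $x$ with period $1/U_1$, as the formula requires for an elliptic potential:

```python
import numpy as np

def theta(z, T=1j, nmax=30):
    """Genus-1 Riemann theta-series: sum_n exp(pi i n^2 T + 2 pi i n z)."""
    n = np.arange(-nmax, nmax + 1)
    return np.sum(np.exp(1j * np.pi * n**2 * T + 2j * np.pi * n * z))

def d2_log_theta(x, U=0.7, Z=0.123 + 0.2j, h=1e-4):
    """d^2/dx^2 log theta(U x + Z) via central differences on theta itself
    (avoids branch ambiguities of the complex logarithm)."""
    f0 = theta(U * x + Z)
    fp = theta(U * (x + h) + Z)
    fm = theta(U * (x - h) + Z)
    d1 = (fp - fm) / (2 * h)          # theta'
    d2 = (fp - 2 * f0 + fm) / h**2    # theta''
    return d2 / f0 - (d1 / f0)**2

# theta(z + 1) = theta(z), hence the second logarithmic derivative is
# periodic in x with period 1/U:
print(abs(d2_log_theta(0.31) - d2_log_theta(0.31 + 1 / 0.7)))  # ~ 0
```

The same finite-difference scheme works for any genus once a multidimensional theta-series evaluator is available.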
\subsection{The tau-function}
Without loss of generality we can put $x=0$ for simplicity.
The dependence on $x$ can be restored by the substitution $t_1\to t_1+x$.
The theta-functional formula (\ref{ba501}) for the Baker-Akhiezer function
and the expansion (\ref{qp6}) of the Abel map near $P_\infty$ allow one to
reformulate the algebraic-geometrical construction presented above
in terms of the tau-functional formulation of the KP hierarchy. Namely,
we have the following theorem.
\begin{theorem} (\cite{Krichever77,Dubrovin81})
The right hand side of the equation
\begin{equation}\label{tau1}
\tau^{\rm KP} ({\bf t})=\exp \Bigl ( -
\frac{1}{2}\sum_{i,j\geq 1}\Omega_{ij}t_it_j\Bigr )\,
\theta \Bigl (\sum_{j\geq 1}\vec U_j t_j +\vec Z \Bigr ),
\end{equation}
where the constant vector $\vec Z$ is parameterized through the divisor ${\cal D}$ as
\begin{equation}\label{tau1a}
\vec Z=-\vec A({\cal D})-\vec K
+\vec A(P_{\infty})
\end{equation}
is the KP tau-function.
\end{theorem}
\noindent
{\it Proof.}
Equation (\ref{qp6}) implies that
$$
\begin{array}{c}
\displaystyle{
\theta \Bigl (\sum_{j\geq 1}\vec U_j (t_j \mp \frac{1}{j}\, k^{-j}) +\vec Z \Bigr )
=\theta \Bigl (\pm \vec A(P)+\sum_{j\geq 1}\vec U_j t_j +\vec Z \mp \vec A(P_{\infty}) \Bigr ),}
\end{array}
$$
so we see that the Baker-Akhiezer functions (\ref{ba501}), (\ref{ba601})
are connected with the tau-function by the standard formulas \cite{JimboMiwa,DJKM83}
\begin{equation}\label{tau2}
\Psi_{BA} = C(k)\exp \Bigl (\sum_{j\geq 1}t_jk^j\Bigr )
\frac{\tau^{\rm KP} ({\bf t}-[k^{-1}] )}{\tau^{\rm KP} ({\bf t} )},
\end{equation}
\begin{equation}\label{tau3}
\Psi_{BA}^{\dag} = C^{\dag}(k)\exp \Bigl (-\sum_{j\geq 1}t_jk^j\Bigr )
\frac{\tau^{\rm KP} ({\bf t}+[k^{-1}] )}{\tau^{\rm KP} ({\bf t} )},
\end{equation}
where $C(k)$, $C^{\dag}(k)$ are normalization factors such that
$C(k)=1+O(k^{-1})$, $C^{\dag}(k)=1+O(k^{-1})$.
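The content of (\ref{tau2}) is easy to test on the simplest example. For the $1$-soliton KP tau-function $\tau =1+\alpha \exp \bigl (\sum_j (p^j-q^j)t_j\bigr )$ the Miwa shift ${\bf t}\to {\bf t}-[k^{-1}]$ gives, after division by $\tau ({\bf t})$, a function of the form $1+y/(k-q)$ with the residue numerator $y$ independent of $k$, i.e. the correct simple-pole structure of the soliton Baker-Akhiezer function. A numeric sketch (parameter values are illustrative, and the shift is truncated at $j=400$):

```python
import numpy as np

# 1-soliton KP tau-function: tau(t) = 1 + alpha * exp(sum_j (p^j - q^j) t_j)
p, q, alpha = 0.3, -0.2, 1.7
t = np.zeros(400)
t[0], t[2] = 0.5, 0.1                      # t_1 and t_3; all other times zero
j = np.arange(1, len(t) + 1)

def tau(times):
    return 1 + alpha * np.exp(np.sum((p**j - q**j) * times))

def tau_shifted(times, k):                 # Miwa shift t_j -> t_j - k^{-j}/j
    return tau(times - 1.0 / (j * k**j))

# tau(t - [k^{-1}])/tau(t) = 1 + y/(k - q) with y independent of k:
ys = [(tau_shifted(t, k) / tau(t) - 1) * (k - q) for k in (2.0, 5.0)]
print(ys)   # the two values coincide
```

Analytically, $\sum_{j\geq 1}(p^j-q^j)k^{-j}/j=\log \frac{k-q}{k-p}$, so the ratio equals $\bigl (1+w\frac{k-p}{k-q}\bigr )/(1+w)$ and $y=w(q-p)/(1+w)$, where $w=\alpha e^{\sum_j (p^j-q^j)t_j}$.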
A simple calculation shows that
\begin{equation}\label{tau6a}
\begin{array}{l}
\displaystyle{
\frac{\tau^{\rm KP} ({\bf t}-[k_1^{-1}]-[k_2^{-1}])\tau^{\rm KP}
({\bf t})}{\tau^{\rm KP} ({\bf t}-[k_1^{-1}])
\tau^{\rm KP} ({\bf t}-[k_2^{-1}])}=
\exp \Biggl (-\! \sum_{i,j\geq 1}\Omega_{ij}\frac{k_1^{-i}k_2^{-j}}{ij}\Biggr )}
\\ \\
\phantom{aaaaaaaaaaaaaa}\displaystyle{\times \,
\frac{\theta \Bigl (\vec A(P_1)+\vec A(P_2) +\sum\limits_{j\geq 1}\vec U_j t_j +
\vec Z \Bigr )\theta \Bigl (\sum\limits_{j\geq 1}\vec U_j t_j +
\vec Z \Bigr )}{\theta \Bigl (\vec A(P_1)+\sum\limits_{j\geq 1}\vec U_j t_j +
\vec Z \Bigr )\theta \Bigl (\vec A(P_2) +\sum\limits_{j\geq 1}\vec U_j t_j +
\vec Z \Bigr )}}.
\end{array}
\end{equation}
Using (\ref{qp84}), it is straightforward to check that the tau-function
(\ref{tau1}) satisfies the Hirota-Miwa equation (\ref{tau6b})
which is the generating equation for the KP hierarchy. It appears to be equivalent to the
Fay identity (\ref{Fay}).
\square
It is interesting to compare equation (\ref{ba701}) and the bilinear
relation (\ref{ch02}) for the tau-function. They coincide if
\begin{equation}\label{tau5}
\begin{array}{c}
\displaystyle{
d\Omega =\frac{\tau^{\rm KP} (-[k^{-1}])\tau^{\rm KP} ([k^{-1}])}{(\tau^{\rm KP}(0))^2}}\, dk
\\ \\
\displaystyle{
=
\frac{\theta \Bigl (\vec A (P) -\vec A({\cal D}) -\vec K\Bigr )
\theta \Bigl (\vec A (P) -\vec A({\cal D}^{\dag}) -
\vec K\Bigr )}{\theta^2 \Bigl (\vec A({\cal D}) +
\vec K-\vec A(P_{\infty})\Bigr )}
\exp \Bigl (-\! \sum_{i,j\geq 1}\Omega_{ij}\, \frac{k^{-i-j}}{ij}\Bigr ) \, dk.}
\end{array}
\end{equation}
Using (\ref{qp85}), we can rewrite this as
\begin{equation}\label{tau7}
d\Omega =
C\frac{\theta \Bigl (\vec A (P) -\vec A({\cal D}) -\vec K\Bigr )
\theta \Bigl (\vec A (P) -\vec A({\cal D}^{\dag}) -
\vec K\Bigr )}{\theta_{*}^2\Bigl ( \vec A(P)\! -\! \vec A(P_\infty )\Bigr )} \, d\zeta ,
\end{equation}
where the holomorphic differential $d\zeta$ is given by (\ref{qp86}). Its properties
(see \cite{Mumford}) imply that the differential
in the right hand side is a well-defined meromorphic differential on $\Gamma$ with the
only second order pole at $P_{\infty}$ and $2g$ zeros at the points of the divisors
${\cal D}$, ${\cal D}^{\dag}$. Therefore, it has all the properties of the
differential $d\Omega$ and hence must be proportional to it. The equality (\ref{tau7})
just reflects this fact.
\medskip \noindent
{\bf Remark.}
The function $\Psi_{BA}$ and the wave function $\Psi$ introduced in section \ref{section:wave}
(see (\ref{ckp4a})) differ by a normalization factor depending on $k(P)$. From (\ref{tau5})
it follows that
\begin{equation}\label{tau7101}
\Psi_{BA}(x,{\bf t}, P)\Psi^{\dag}_{BA}(x,{\bf t}', P)d\Omega =
\Psi (x,{\bf t}, k)\Psi^{\dag}(x,{\bf t}', k)dk.
\end{equation}
\subsection{Curves with involution: solutions to the CKP hierarchy}
Let $\Gamma$ be a smooth genus $g$ algebraic curve with involution $\iota$
having $2(n+1)>0$ fixed points. By the Riemann-Hurwitz formula $g=2g_0+n$
where $g_0$ is the genus of the factor-curve $\Gamma_0=\Gamma/\iota$. It is
known that on $\Gamma$ there is a basis of $a$- and $b$-cycles with canonical
intersection matrix: $a_i\cdot a_j=b_i\cdot b_j=0, a_i\cdot b_j=\delta_{ij};$ and such that
in this basis the action of the involution $\iota$ has the form
\begin{equation}\label{sa}\iota(a_i)=a_{i+g_0}, \ \ \iota(b_i)=b_{i+g_0}, \ i=1,\ldots, g_0,
\end{equation}
and
\begin{equation}\label{sa1}\iota(a_i)=-a_i, \ \ \iota(b_i)=-b_i, \ i=2g_0+1, \ldots, 2g_0+n.
\end{equation}
Let the marked point $P_\infty$ on $\Gamma$ be one of the fixed points of the involution, $\iota(P_\infty)=P_\infty$ and let $z=k^{-1}$ be a local coordinate in the neighborhood
of $P_\infty$ that is odd with respect to the involution, $\iota^* (k)=-k$.
From the definition of the abelian differentials
$d\Omega _j$ in subsection \ref{sub:prel} it follows that
\begin{equation}\label{iota0}
d\Omega_j (\iota P)=(-1)^jd\Omega_j (P)
\end{equation}
and, therefore,
\begin{equation}\label{iota4}
\Omega_j (\iota P)=(-1)^j \, \Omega_j(P).
\end{equation}
Suppose that the divisor ${\cal D}$ satisfies the constraint
\begin{equation}\label{iota1}
{\cal D}+\iota {\cal D}={\cal K}+2P_{\infty}.
\end{equation}
Then for the Baker-Akhiezer function defined by $\Gamma, P_\infty$,
the local coordinate $k^{-1}$ and the divisor $\cal D$ the equation
\begin{equation}\label{iota2}
\Psi_{BA}^{\dag} (t_1, 0, t_3, 0, \ldots , P)=\Psi_{BA} (t_1, 0, t_3, 0, \ldots , \iota P)
\end{equation}
holds. The bilinear relation (\ref{ba701}) takes the form
\begin{equation}\label{ba701a}
\oint_{C_{\infty}} \! \Psi_{BA} (x,{\bf t}_{\rm o}, P)
\Psi_{BA} (x, {\bf t}_{\rm o}', \iota P)d\Omega (P) =0
\end{equation}
for all ${\bf t}_{\rm o}, {\bf t}_{\rm o}'$.
Using formulas (\ref{ba501}), (\ref{ba601}) we can write the relation
(\ref{iota2}) in the explicit form:
\begin{equation}\label{iota3}
\begin{array}{l}
\displaystyle{
\Psi_{BA} (t_1, 0, t_3, 0, \ldots , \iota P)=
\exp \Bigl (\sum_{j\geq 1, \, j \,\, {\rm odd}}t_j \Omega_j (\iota P)\Bigr )}
\\ \\
\displaystyle{
\phantom{aaaaaaaaaa}\times \,
\frac{\theta \Bigl (\vec A(\iota P)+\sum\limits_{j\geq 1, \, j \,\, {\rm odd}}\vec U_j t_j -
\vec A({\cal D}) -\vec K\Bigr )\theta \Bigl (\vec A({\cal D}) +
\vec K-\vec A(P_{\infty})\Bigr )}{\theta \Bigl (\vec A(\iota P)-
\vec A({\cal D}) -\vec K\Bigr )\theta \Bigl (
\sum\limits_{j\geq 1, \, j \,\, {\rm odd}}\vec U_j t_j
-\vec A({\cal D}) -
\vec K+\vec A(P_{\infty})\Bigr ) }}
\\ \\
\displaystyle{
=\Psi_{BA}^{\dag} (t_1, 0, t_3, 0, \ldots , P)=
\exp \Bigl (-\! \sum_{j\geq 1, \, j \,\, {\rm odd}}t_j \Omega_j (P)\Bigr )}
\\ \\
\displaystyle{
\phantom{aaaaa}\times \,
\frac{\theta \Bigl (\vec A(P)-\sum\limits_{j\geq 1, \, j \,\, {\rm odd}}\vec U_j t_j +
\vec A({\cal D}) +\vec K -2\vec A(P_{\infty})\Bigr )\theta \Bigl (\vec A({\cal D}) +
\vec K-\vec A(P_{\infty})\Bigr )}{\theta \Bigl (\vec A(P)+
\vec A({\cal D}) +\vec K-2\vec A(P_{\infty})\Bigr )
\theta \Bigl (\sum\limits_{j\geq 1, \, j \,\, {\rm odd}}\vec U_j t_j
-\vec A({\cal D}) -
\vec K+\vec A(P_{\infty})\Bigr ) }}.
\end{array}
\end{equation}
The tau-function of the CKP hierarchy is the square root of
\begin{equation}\label{tau1b}
\tau^{\rm KP} ({\bf t}_{\rm o})=\exp \Bigl ( -
\frac{1}{2}\sum_{i,j\geq 1,\, i,j \,\, {\rm odd}}\Omega_{ij}t_it_j\Bigr )\,
\theta \Bigl (\sum_{j\geq 1, \, j \,\, {\rm odd}}\vec U_j t_j -\vec A({\cal D})-\vec K
+\vec A(P_{\infty}) \Bigr ),
\end{equation}
where the divisor ${\cal D}$ satisfies the condition (\ref{iota1}).
The statement of the following theorem is in fact a corollary of
Theorem \ref{theorem:exist} and the above identification of the
square root of (\ref{tau1b}) with the tau-function of the CKP hierarchy, but below we give a direct algebraic-geometrical proof.
\begin{theorem}\label{thm-main}
Let $\Gamma$ be a genus $g$ smooth curve with holomorphic involution $\iota$ having at least one fixed point $P_\infty$ and let $Y$ be the locus in the Jacobian
\begin{equation}\label{locus}
Y=\{\vec Z\in Jac(\Gamma)\, |\, \vec Z+ \iota(\vec Z)=-2\vec A(P_{\infty})\}\subset Jac(\Gamma).
\end{equation}
Then for any point $Q\in \Gamma$ and $\vec Z\in Y$ the equation
\begin{equation}\label{th1}
\hspace{-1cm}
\begin{array}{c}
\theta \Bigl (\vec Z\Bigr )\partial_1 \theta \Bigl (\vec A(Q)\! -\! \vec A(\iota Q)+\vec Z\Bigr )-
\theta \Bigl (\vec A(Q)\! -\! \vec A(\iota Q)+\vec Z\Bigr )\partial_1 \theta \Bigl (\vec Z\Bigr )
\\ \\
\phantom{aaaaaaaaaaaaaaaa}+\,
2\Omega_1(Q)\theta \Bigl (\vec Z\Bigr )\theta \Bigl (\vec A(Q)\! -\!
\vec A(\iota Q)+\vec Z \Bigr)=
C(Q) \theta^2 \Bigl (\vec A(Q)+\vec Z\Bigr )
\end{array}
\end{equation}
with
$$\partial_1\theta (\vec Z):= \partial_t \theta (\vec Z +\vec U_1 t)\Bigr |_{t=0}$$
holds.
\end{theorem}
\noindent
{\bf Remark.} Note that $Y$ is the locus of vectors such that
$\vec Z=-\vec A({\cal D})-\vec K$, where the divisor ${\cal D}$ satisfies
the condition (\ref{iota1}).
\noindent
{\it Proof.} Let us fix a point $Q\in \Gamma$, an effective divisor ${\cal D}$ of degree $g$ and
define the auxiliary Baker-Akhiezer function $\Psi_{Q}({\bf t}_{\rm o}, P)$
by the following properties:
\begin{itemize}
\item[$1^0$.] Outside $P_{\infty}$ the singularities of $\Psi_Q$ are poles at the divisor
${\cal D}+\iota Q$;
\item[$2^0$.] It has a simple zero at the point $Q$,
i.e., $\Psi_{Q}({\bf t}_{\rm o}, Q)=0$;
\item[$3^0$.] In a small neighborhood of $P_{\infty}$ the function $\Psi_Q$ has the form
\begin{equation}\label{th3}
\Psi_{Q}({\bf t}_{\rm o}, P)=e^{\zeta ({\bf t}_{\rm o}, k)}\Biggl (
1+\sum_{j\geq 1}\xi_{j, Q}({\bf t}_{\rm o})k^{-j}\Biggr ), \qquad k=k(P).
\end{equation}
\end{itemize}
The standard argument shows that this function is unique up to a common factor.
The explicit formula for $\Psi_Q$ in theta-functions is
\begin{equation}\label{th2}
\Psi_Q({\bf t}_{\rm o}, P)=\frac{\theta \Bigl (\vec A(P)
-\vec A(\iota Q)+\vec A(Q)+\vec Z_{\bf t_{\rm o}}\Bigr )\theta \Bigl (\vec Z\Bigr )}{\theta
\Bigl (\vec A(Q)-\vec A(\iota Q)+\vec Z_{\bf t_{\rm o}}\Bigr )\theta \Bigl (
\vec A(P)+\vec Z\Bigr )}
\, \exp \Biggl (\Omega_0(P)+\!\!\!\sum_{j\geq 1, \, {\rm odd}}t_j \Omega_j(P)\Biggr ),
\end{equation}
where $\displaystyle{\vec Z_{\bf t_{\rm o}}=\vec Z +\sum\limits_{j\geq 1, \, {\rm odd}}
\vec U_jt_j}$ and $\Omega_0$ is the abelian integral of the normalized dipole differential
$d\Omega_0$ with simple poles at the points $Q, \iota Q$ with residues $\pm 1$:
$$
\Omega_0(P)=\int_{Q_0}^P d\Omega_0.
$$
\medskip \noindent
{\bf Remark.}
The standard Baker-Akhiezer function $\Psi_{BA}$ corresponds to the case $Q=P_{\infty}$.
\medskip
Consider the differential $\widetilde{d\Omega} (P)= \partial_{t_1}\!\Psi_Q(P) \Psi_Q(\iota P)d\Omega (P)$,
where $d\Omega$ is the differential entering the bilinear relation (\ref{ba701}).
It is a meromorphic differential on $\Gamma$ with the only pole at $P_{\infty}$. Hence it has no
residue at $P_{\infty}$. Computing the residue in terms of the coefficients of the expansion
(\ref{th3}), we get
\begin{equation}\label{th4}
2\xi_{2,Q}-\xi^2_{1,Q}+\partial_{t_1} \xi_{1, Q}+c_1=0,
\end{equation}
where $c_1$ is a constant defined by the Laurent expansion of $d\Omega$ at $P_{\infty}$.
Consider now the differential
$d\Omega_Q (P)= \Psi_Q(P) \Psi_{BA} (\iota P)d\Omega (P)$. It is a meromorphic differential
with poles at $P_{\infty}$ and $\iota Q$. Therefore,
\begin{equation}\label{th5}
f_Q:=\mathop{\hbox{res}}\limits_{P_{\infty}}d\Omega_Q =\xi_{1, Q}-\xi_1 =-\mathop{\hbox{res}}\limits_{\iota Q}d\Omega_Q =-\phi_Q \phi,
\end{equation}
where
\begin{equation}\label{th6}
\phi_Q:= \mathop{\hbox{res}}\limits_{\iota Q}(\Psi_Q d\Omega ), \qquad
\phi = \Psi_{BA} ({\bf t}_{\rm o}, \iota Q).
\end{equation}
The residue argument for the differential $\widetilde{d\Omega}_Q(P)=
\partial_{t_1}\!\Psi_Q(P) \Psi_{BA} (\iota P)d\Omega (P)$ gives the relation
\begin{equation}\label{th7}
\xi_{2, Q}+\xi_2 -\xi_{1, Q}\xi_1 +\partial_{t_1}\xi_{1, Q} +c_1 =-(\partial_{t_1}\phi_Q)\phi.
\end{equation}
Then, using (\ref{th4}), we obtain
\begin{equation}\label{th8}
\frac{1}{2}\, (f_Q^2 +\partial_{t_1}f_Q)=-(\partial_{t_1}\phi_Q)\phi .
\end{equation}
From comparison of (\ref{th5}) and (\ref{th8}) it follows that
\begin{equation}\label{th9}
\partial_{t_1}\log \phi_Q =\frac{1}{2}\, (f_Q +\partial_{t_1}\log f_Q).
\end{equation}
Recalling the definition of $\phi_Q$ and using formula (\ref{th2}), we get
\begin{equation}\label{th10}
\partial_{t_1}\log \phi_Q=\partial_{t_1}\log \left (
\frac{\theta \Bigl (\vec A(Q)+\vec Z_{{\bf t}_{\rm o}}\Bigr )}{\theta \Bigl (
\vec A(Q)-\vec A (\iota Q)+\vec Z_{{\bf t}_{\rm o}}\Bigr )}\right )+\Omega_1(\iota Q).
\end{equation}
The expansion of (\ref{th2}) around $P_{\infty}$ yields
\begin{equation}\label{th11}
f_Q=\partial_{t_1}\log \left (
\frac{\theta \Bigl (\vec Z_{{\bf t}_{\rm o}}\Bigr )}{\theta \Bigl (
\vec A(Q)-\vec A (\iota Q)+\vec Z_{{\bf t}_{\rm o}}\Bigr )}\right )+\Omega_{01},
\end{equation}
where $\Omega_{01}$ is the coefficient of $k^{-1}$ in the expansion of $\Omega_0$ at
$P_{\infty}$. The Riemann bilinear relation for the differentials
$d\Omega_1$ and $d\Omega_0$ has the form
\begin{equation}\label{th12}
\Omega_{01}=\Omega_1(\iota Q)-\Omega_1(Q)=2\Omega_1(\iota Q).
\end{equation}
Therefore, equations (\ref{th9})--(\ref{th12}) imply
\begin{equation}\label{th13}
\partial_{t_1}\log \left (
\frac{\theta^2 \Bigl (\vec A(Q)+\vec Z_{{\bf t}_{\rm o}}\Bigr )}{\theta \Bigl (
\vec A(Q)-\vec A (\iota Q)+\vec Z_{{\bf t}_{\rm o}}\Bigr )
\theta \Bigl (\vec Z_{{\bf t}_{\rm o}}\Bigr )}\right )=
\partial_{t_1}\log f_Q.
\end{equation}
Equations (\ref{th11}) and (\ref{th13}) with ${\bf t}_{\rm o}=0$,
after integration in $t_1$, give (\ref{th1}) with a constant $C(Q,\vec Z)$
which is $\partial_{t_1}$-invariant, i.e. $C(Q,\vec Z)=C(Q,\vec Z+t_1\vec U_1)$ for
any value of $t_1$. For a generic curve the complex line $\vec Z+t_1\vec U_1$ is
dense in the Jacobian. Hence the integration constant $C$ does not depend on
$\vec Z$ and depends on $Q$ only.
Since the matrix of $b$-periods depends analytically on the
curve and $C$ is independent of $\vec Z$ for a generic curve, it is independent of
$\vec Z$ for any curve.
\square
\subsection{Degeneration of algebraic-geometrical solutions: soliton solutions}
The algebraic-geometrical integration scheme naturally extends to the case of singular curves.
In particular, the case when $\Gamma$ is the Riemann sphere $\mbox{\Bbb C} P^1$ with nodes (double points)
corresponds to soliton solutions. $N$-soliton solutions of the CKP hierarchy are obtained by imposing certain constraints on the parameters of $2N$-soliton solutions to the KP hierarchy. We recall that
$\tau =\sqrt{\vphantom{B^{a^a}}\tau^{\rm KP}}$ with the ``even'' times $t_{2k}$ put equal to zero,
where it is implied that the parameters of the KP tau-function $\tau^{\rm KP}$
are chosen in a special way.
$M$-soliton solutions of the KP hierarchy are constructed starting from a singular curve which is
$\mbox{\Bbb C} P^1$ with $M$ double points. Let $z$ be the global coordinate. The Baker-Akhiezer
function has simple poles at $M$ points $q_i$. It has the form
\begin{equation}\label{sol1}
\Psi^{\rm KP}({\bf t}, z)=\exp \Bigl (\sum_{j\geq 1}t_jz^j\Bigr )
\Biggl (1+\sum_{l=1}^M \frac{y_l({\bf t})}{z-q_l}\Biggr ).
\end{equation}
Let us impose $M$ linear conditions of the form
\begin{equation}\label{sol2}
\mathop{\hbox{res}}\limits_{z=q_i}\Bigl [
\Psi^{\rm KP}({\bf t}, z)dz \Bigr ]=
-\alpha_i (p_i-q_i)\Psi^{\rm KP}({\bf t}, p_i), \quad i=1, \ldots , M,
\end{equation}
which mean that the points $p_i, q_i$ are glued together forming a double point.
Here $\alpha_i$ are complex parameters. These conditions make the Baker-Akhiezer
function unique (up to a common multiplier). The conditions (\ref{sol2}) are equivalent
to the following linear system for $y_l$:
\begin{equation}\label{sol3}
y_i+\sum_{l=1}^M \frac{\tilde \alpha_i y_l}{p_i-q_l}=-\tilde \alpha_i,
\end{equation}
where
$$
\tilde \alpha_i= \alpha_i (p_i-q_i)\exp \Bigl (\sum_{j\geq 1}(p_i^j -q_i^j)t_j\Bigr ).
$$
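The linear system (\ref{sol3}) is easily solved numerically, and one can then verify that the resulting function (\ref{sol1}) indeed satisfies the glueing conditions (\ref{sol2}). A minimal sketch for $M=2$ (all parameter values are illustrative):

```python
import numpy as np

# parameters of M = 2 double points (illustrative values)
p = np.array([1.0, 0.6]); q = np.array([-0.4, 0.2]); al = np.array([0.7, 1.3])
t = np.array([0.3, 0.0, 0.05])            # times t_1, t_2, t_3
zeta = lambda z: sum(t[j] * z**(j + 1) for j in range(len(t)))

ta = al * (p - q) * np.exp(np.array([zeta(p[i]) - zeta(q[i]) for i in range(2)]))
# linear system (sol3):  y_i + sum_l ta_i y_l/(p_i - q_l) = -ta_i
A = np.eye(2) + ta[:, None] / (p[:, None] - q[None, :])
y = np.linalg.solve(A, -ta)

def Psi(z):                               # Baker-Akhiezer function (sol1)
    return np.exp(zeta(z)) * (1 + np.sum(y / (z - q)))

# glueing conditions (sol2): res_{q_i} Psi dz = -al_i (p_i - q_i) Psi(p_i)
for i in range(2):
    res = y[i] * np.exp(zeta(q[i]))
    print(res + al[i] * (p[i] - q[i]) * Psi(p[i]))   # ~ 0
```

Here the residue of $\Psi^{\rm KP}dz$ at $q_i$ is $y_i e^{\zeta (q_i)}$, and dividing the condition (\ref{sol2}) by $e^{\zeta (q_i)}$ reproduces exactly the system (\ref{sol3}).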
Solving this system, we obtain the Baker-Akhiezer function in the explicit form:
\begin{equation}\label{sol4}
\Psi^{\rm KP}=\frac{\phantom{a}\left | \begin{array}{ccccc}
1& \frac{1}{z-q_1} & \frac{1}{z-q_2} & \ldots & \frac{1}{z-q_M}
\\ \\
\tilde \alpha_1 & 1\! +\!\frac{\tilde \alpha_1}{p_1-q_1} &\frac{\tilde \alpha_1}{p_1-q_2}&
\ldots & \frac{\tilde \alpha_1}{p_1-q_M}
\\ \\
\tilde \alpha_2 & \frac{\tilde \alpha_2}{p_2-q_1} &1\! +\! \frac{\tilde \alpha_2}{p_2-q_2}&
\ldots & \frac{\tilde \alpha_2}{p_2-q_M}
\\ \ldots & \ldots & \ldots & \ldots & \ldots
\\ \\
\tilde \alpha_M & \frac{\tilde \alpha_M}{p_M-q_1} &\frac{\tilde \alpha_M}{p_M-q_2}&
\ldots & 1\! +\! \frac{\tilde \alpha_M}{p_M-q_M}
\end{array}
\right |\phantom{a}}{\left | \begin{array}{cccc}
\vphantom{\frac{A^{a^a}}{a}}1\! +\! \frac{\tilde \alpha_1}{p_1-q_1} &\frac{\tilde \alpha_1}{p_1-q_2}&
\ldots & \frac{\tilde \alpha_1}{p_1-q_M}
\\ \\
\frac{\tilde \alpha_2}{p_2-q_1} &1\! +\! \frac{\tilde \alpha_2}{p_2-q_2}&
\ldots & \frac{\tilde \alpha_2}{p_2-q_M}
\\ \ldots & \ldots & \ldots & \ldots
\\ \\
\frac{\tilde \alpha_M}{p_M-q_1} &\frac{\tilde \alpha_M}{p_M-q_2}&
\ldots & 1\! +\! \frac{\tilde \alpha_M}{p_M-q_M}
\end{array}
\right |}\exp \Bigl (\sum_{j\geq 1}t_jz^j\Bigr ).
\end{equation}
The denominator of this expression is the tau-function.
The general KP tau-function for $M$-soliton solution has $3M$ arbitrary parameters
$\alpha_i$, $p_i$, $q_i$ ($i=1, \ldots , M$) and is given by
\begin{equation}\label{ms1}
\tau^{\rm KP} (x, {\bf t})
=\det_{1\leq i,j\leq M}\left ( \delta_{ij}+
\alpha_i \, \frac{p_i-q_i}{p_i-q_j}\, \exp
\Bigl ((p_i-q_i)x+\!\! \sum_{k\geq 1}(p_i^k-q_i^k)t_k \Bigr )
\right ).
\end{equation}
Let us denote this tau-function as
$$
\tau^{\rm KP} \left [ \begin{array}{c}\alpha_1 \\ p_1, q_1\end{array};
\begin{array}{c}\alpha_2 \\ p_2, q_2\end{array};
\begin{array}{c}\alpha_3 \\ p_3, q_3\end{array};
\begin{array}{c}\alpha_4 \\ p_4, q_4\end{array}; \, \cdots \, ;
\begin{array}{c}\alpha_{M-1} \\ p_{M-1}, q_{M-1}\end{array};
\begin{array}{c}\alpha_{M} \\ p_{M}, q_{M}\end{array}\right ].
$$
The parameters $p_i, q_i$ are sometimes called momenta of solitons.
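As a sanity check of (\ref{ms1}): for $M=1$ the determinant reduces to $\tau^{\rm KP}=1+\alpha e^{\eta}$ with $\eta =(p-q)x+\sum_k(p^k-q^k)t_k$, and, in the normalization of (\ref{xi2}), $u_1=\partial_x^2\log \tau^{\rm KP}$ is the familiar ${\rm sech}^2$ soliton profile. A numeric sketch with illustrative parameters:

```python
import numpy as np

# M = 1 case of the determinant (ms1): tau = 1 + alpha * exp(eta)
p, q, alpha = 0.9, -0.5, 2.0
t1, t3 = 0.1, 0.02                       # only t_1 and t_3 switched on

def eta(x):
    return (p - q) * x + (p - q) * t1 + (p**3 - q**3) * t3 + np.log(alpha)

def tau(x):
    return 1 + np.exp(eta(x))

def u1(x, h=1e-4):                       # u_1 = d^2/dx^2 log tau
    f = lambda s: np.log(tau(s))
    return (f(x + h) - 2 * f(x) + f(x - h)) / h**2

x = 0.37
print(u1(x), (p - q)**2 / (4 * np.cosh(eta(x) / 2)**2))   # the two agree
```

Indeed, $\partial_x^2\log (1+e^{\eta})=(p-q)^2 e^{\eta}/(1+e^{\eta})^2 = \frac{(p-q)^2}{4}\,{\rm sech}^2(\eta /2)$.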
In the CKP case we have the involution $z\to -z$ which means that the double points
should be symmetric under the involution.
The multi-soliton tau-function of the CKP hierarchy is the square root of the $\tau^{\rm KP}$
specialized as
\begin{equation}\label{ms2}
\begin{array}{c}
\displaystyle{
\tau^{\rm KP} \left [ \begin{array}{c}\alpha_0 \\ p_0, -p_0\end{array};
\begin{array}{c}\alpha_1 \\ p_1, -q_1\end{array};
\begin{array}{c}\alpha_1 \\ q_1, -p_1\end{array};
\begin{array}{c}\alpha_2 \\ p_2, -q_2\end{array};
\begin{array}{c}\alpha_2 \\ q_2, -p_2\end{array}; \, \cdots \, ;
\begin{array}{c}\alpha_{N} \\ p_{N}, -q_{N}\end{array};
\begin{array}{c}\alpha_{N} \\ q_{N}, -p_{N}\end{array}\right ]},
\end{array}
\end{equation}
where it is assumed that the evolution in the even times is suppressed ($t_{2k}=0$ for all $k\geq 1$).
Clearly, the total number of independent parameters is $3N+2$. If $\alpha_0=0$, the tau-function
(\ref{ms2}) reduces to
\begin{equation}\label{ms3}
\begin{array}{c}
\displaystyle{
\tau^{\rm KP} \left [
\begin{array}{c}\alpha_1 \\ p_1, -q_1\end{array};
\begin{array}{c}\alpha_1 \\ q_1, -p_1\end{array};
\begin{array}{c}\alpha_2 \\ p_2, -q_2\end{array};
\begin{array}{c}\alpha_2 \\ q_2, -p_2\end{array}; \, \cdots \, ;
\begin{array}{c}\alpha_{N} \\ p_{N}, -q_{N}\end{array};
\begin{array}{c}\alpha_{N} \\ q_{N}, -p_{N}\end{array}\right ]},
\end{array}
\end{equation}
and it is this tau-function which is usually
called the $N$-soliton CKP tau-function in the literature (see, e.g. \cite{DJKM81}).
It is a specialization of $2N$-soliton KP tau-function and
has $3N$ free parameters.
The simplest example is the one-soliton solution.
The tau-function for one CKP soliton is the square root
of a specialization of 2-soliton tau-function
of the KP hierarchy:
\begin{equation}\label{e2}
\tau^{\rm KP}= 1+2\alpha w-
\frac{\alpha^2 (p-q)^2}{4pq} \, w^2,
\end{equation}
where
\begin{equation}\label{e5}
w=e^{(p+q)x+ \zeta ({\bf t}_{\rm o}, p)+\zeta ({\bf t}_{\rm o}, q)}, \quad
\mbox{$\zeta ({\bf t}_{\rm o}, z)$ is given by (\ref{ckp7}).}
\end{equation}
A direct calculation shows that $\partial_x \psi^2$ (where $\psi$
is given by (\ref{e3})) for the solution (\ref{e2})
is a perfect square for all $z$.
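The specialization (\ref{e2}) can be recovered from the $2\times 2$ determinant (\ref{ms1}) with momenta pairs $(p,-q)$, $(q,-p)$ and equal weights $\alpha$: with the even times switched off, both exponentials collapse to the same $w$ and the determinant expands to exactly (\ref{e2}). A quick numeric confirmation (values illustrative):

```python
import numpy as np

p, q, alpha, x = 0.8, 0.3, 1.1, 0.25
t = {1: 0.2, 3: 0.04}                    # odd times only
zeta = lambda z: sum(tk * z**k for k, tk in t.items())
w = np.exp((p + q) * x + zeta(p) + zeta(q))

# 2x2 determinant (ms1) with momenta (p, -q), (q, -p) and alpha_1 = alpha_2 = alpha
P = np.array([p, q]); Q = np.array([-q, -p])
E = alpha * (P - Q) * np.exp((P - Q) * x +
                             np.array([zeta(P[i]) - zeta(Q[i]) for i in range(2)]))
tau_det = np.linalg.det(np.eye(2) + E[:, None] / (P[:, None] - Q[None, :]))

tau_e2 = 1 + 2 * alpha * w - alpha**2 * (p - q)**2 / (4 * p * q) * w**2
print(tau_det, tau_e2)                   # the two agree
```

Since the odd part of $\zeta$ gives $\zeta (-q)=-\zeta (q)$, both rows carry the common factor $\alpha (p+q)w$, and the determinant equals $(1+\alpha w)^2-\alpha^2w^2(p+q)^2/(4pq)$, which is (\ref{e2}).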
\medskip
\noindent
{\bf Remark.} It is instructive to prove directly that the tau-functions (\ref{ms2}) and (\ref{ms3}) satisfy equation (\ref{ch6}).
Consider (\ref{ms3}) first. We represent the tau-function as
$$
\tau^{\rm KP}=\det_{2N\times 2N} (I+HK),
$$
where $H$ is the diagonal matrix $H_{jk}=\delta_{jk}H_j$ with matrix elements
$$
H_{2i-1}=\alpha_i (p_i+q_i)\exp \Biggl ( (p_i+q_i)x +\sum_{k\geq 1}t_k
(p_i^k -(-q_i)^k)\Biggr ), \quad i=1, \ldots , N,
$$
$$
H_{2i}=\alpha_i (p_i+q_i)\exp \Biggl ( (p_i+q_i)x +\sum_{k\geq 1}t_k
(q_i^k -(-p_i)^k)\Biggr ), \quad i=1, \ldots , N,
$$
and $K$ is the Cauchy matrix $K_{jk}=1/(x_j-y_k)$ with
$x_{2i-1}=-y_{2i}=p_i$, $x_{2i}=-y_{2i-1}=q_i$, $i=1, \ldots , N$.
We have:
$$
\partial_{t_{2m}}\log \tau^{\rm KP}\Biggr |_{t_{2k}=0}=
\partial_{t_{2m}}\log \det (I+HK)\Biggr |_{t_{2k}=0}=
\partial_{t_{2m}}\mbox{tr}\, \log \, (I+HK)\Biggr |_{t_{2k}=0}
$$
$$
=\mbox{tr} \Bigl [ VHK(I+HK)^{-1}\Bigr ] =\mbox{tr}\, V -
\mbox{tr} \Bigl [ V(I+HK)^{-1}\Bigr ],
$$
where $V$ is the diagonal matrix $V_{jk}=\delta_{jk}V_j$ with the matrix elements
$$V_{2i-1}=-V_{2i}=p_i^{2m}-q_i^{2m}. $$
Note also that when all even times are put equal to zero, we have
$H_{2i-1}=H_{2i}$.
Obviously, $\mbox{tr} V=0$. A careful inspection shows that
$(I+HK)^{-1}_{2i-1, 2i-1}=(I+HK)^{-1}_{2i, 2i}$, and, therefore,
$\mbox{tr} \Bigl [ V(I+HK)^{-1}\Bigr ]=0$, too, and the conditions
(\ref{ch6}) are satisfied. Indeed, permuting rows and columns, one can see that
the diagonal $(2i-1, 2i-1)$ and $(2i, 2i)$ minors of the matrix
$I+HK$ are equal.
As for the tau-function (\ref{ms2}) with $\alpha_0\neq 0$, it is obvious that
the additional pair of soliton momenta of the form $p_0, -p_0$ does not lead to
any extra dependence on the even times, and so the conditions (\ref{ch6}) are still
satisfied.
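The vanishing of the even-time derivatives can also be confirmed numerically: building $\tau^{\rm KP}$ for the specialization (\ref{ms3}) with $N=2$ and differentiating $\log \tau^{\rm KP}$ in $t_2$ at $t_{2k}=0$ by central differences gives zero within the discretization error, consistent with the conditions (\ref{ch6}). A sketch with illustrative parameters:

```python
import numpy as np

# CKP specialization (ms3) with N = 2, i.e. a 4-soliton KP tau-function
ps, qs, al = np.array([0.9, 0.5]), np.array([0.4, 0.2]), np.array([1.2, 0.7])
P = np.array([ps[0], qs[0], ps[1], qs[1]])
Q = np.array([-qs[0], -ps[0], -qs[1], -ps[1]])
A = np.array([al[0], al[0], al[1], al[1]])

def tau(tvec):
    """tau^KP of (ms1) with times (t_1, t_2, ...) = tvec and x = 0."""
    k = np.arange(1, len(tvec) + 1)
    E = A * (P - Q) * np.exp((P[:, None]**k - Q[:, None]**k) @ tvec)
    return np.linalg.det(np.eye(4) + E[:, None] / (P[:, None] - Q[None, :]))

t0 = np.array([0.3, 0.0, 0.1, 0.0])      # t_1, t_3 on, even times zero
h = 1e-5
dt2 = np.array([0.0, h, 0.0, 0.0])
d = (np.log(tau(t0 + dt2)) - np.log(tau(t0 - dt2))) / (2 * h)
print(d)   # ~ 0: log tau^KP is stationary in t_2 on the CKP locus
```

The same check works for any even time $t_{2m}$ (shift the corresponding entry of the time vector instead of $t_2$).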
\section{Elliptic solutions}
By elliptic solutions of the CKP equation (\ref{ckp0}) we mean
solutions $u$ that are doubly periodic in the complex plane of the variable $x$
with periods $2\omega_1, 2\omega_2$, ${\rm Im}(\omega_2/ \omega_1)>0$.
Equations of motion for their poles and their algebraic integrability are an
easy corollary of the relation established above between the CKP and KP hierarchies
and the well-developed theory of elliptic solutions to the KP hierarchy,
equivalent to the theory of the elliptic Calogero-Moser (eCM) system.
Namely, elliptic solutions of the CKP equation can be
extended to elliptic solutions of the KP equation and further to the whole KP hierarchy. From that perspective the pole dynamics of the elliptic solutions of the CKP equation in $t_3$ is just the restriction of $t_3$-dynamics generated
by the Hamiltonian $H_3$ of the eCM system,
\begin{equation}\label{ca8}
\left \{
\begin{array}{l}
\displaystyle{\dot x_i =-3p_i^2 +3 \sum_{j\neq i}\wp (x_i-x_j)-6c}
\\ \\
\displaystyle{\dot p_i =-3\sum_{j\neq i}(p_i+p_j)\wp '(x_i-x_j)},
\end{array}
\right.
\end{equation}
onto the locus of turning points $p_i=0$, which is invariant under
the ${\bf t}_{\rm o}$-flows of the eCM system, i.e.
\begin{equation}\label{ca1a}
\dot x_i =3\sum_{k\neq i}\wp (x_i-x_k)-6c,
\end{equation}
where $c$ is a constant and dot means the $t_3$-derivative.
Here $\wp$ is the Weierstrass $\wp$-function, which is an
even doubly periodic function with periods $2\omega_1, \, 2\omega_2$
having second order poles at the lattice points
$2\omega_1 m_1+2\omega_2 m_2$ with integer $m_1, m_2$ and
$$
\wp (x)=\frac{1}{x^2} + O(x^2), \quad x\to 0.
$$
For further use recall the definitions of the Weierstrass functions. The Weierstrass
$\sigma$-function is given by the infinite product
$$
\sigma (x)=\sigma (x |\, \omega_1 , \omega_2)=
x\prod_{s\neq 0}\Bigl (1-\frac{x}{s}\Bigr )\, e^{\frac{x}{s}+\frac{x^2}{2s^2}},
\ \ s=2\omega_1 m_1+2\omega_2 m_2\,, \ \ m_1, m_2\in \mbox{\Bbb Z}.
$$
The Weierstrass $\zeta$- and $\wp$-functions are connected with the $\sigma$-function
as follows: $\zeta (x)=\sigma '(x)/\sigma (x)$,
$\wp (x)=-\zeta '(x)=-\partial_x^2\log \sigma (x)$.
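As a quick consistency check (our addition, not part of the original text), the stated relations $\zeta=\sigma'/\sigma$ and $\wp=-\zeta'=-\partial_x^2\log\sigma$ can be verified symbolically in the rational degeneration, where $\sigma(x)=x$, $\zeta(x)=1/x$ and $\wp(x)=1/x^2$:

```python
# Hypothetical sanity check (not from the paper): in the rational
# degeneration of the Weierstrass functions (both periods -> infinity)
# sigma(x) = x, zeta(x) = 1/x, wp(x) = 1/x^2, and the relations
# zeta = sigma'/sigma, wp = -zeta' = -d^2/dx^2 log sigma hold exactly.
import sympy as sp

x = sp.symbols('x')
sigma = x                               # rational limit of the sigma-function
zeta = sp.diff(sigma, x) / sigma        # zeta = sigma'/sigma
wp = -sp.diff(zeta, x)                  # wp = -zeta'

assert sp.simplify(zeta - 1/x) == 0
assert sp.simplify(wp - 1/x**2) == 0
assert sp.simplify(wp + sp.diff(sp.log(sigma), x, 2)) == 0
print("rational-limit Weierstrass relations verified")
```

The full elliptic versions of these relations require the doubly periodic functions themselves; the rational limit already exercises all three defining identities.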
\medskip
The algebraic integrability of the eCM system established
in \cite{Krichever80} restricted to the locus of turning points can be stated as follows.
\begin{theorem} For each set of constants $x_i^0\neq x_j^0$
define the algebraic curve $\Gamma$ by the characteristic equation $\det (zI-L)=0$ for the matrix
\begin{equation}\label{laxckp} L_{ii}=0,\qquad L_{ij}=-\Phi(x_i^0-x_j^0,\lambda), \qquad i\neq j,
\end{equation}
where
\begin{equation}\label{phidef}
\Phi (x, \lambda )=\frac{\sigma (x+\lambda )}{\sigma (\lambda )\sigma (x)}\,
e^{-\zeta (\lambda )x}.
\end{equation}
Let $P_\infty$ be the point on $\Gamma$ that is the
pre-image of $\lambda=0$ in the neighborhood of which $z$ has the
expansion $z=-(n-1) \lambda^{-1}+O(\lambda )$.
Then the solutions of (\ref{ca1a})
with the initial conditions $x_i(0)=x_i^0$ are the roots $x_i(t_3)$ of the equation
\begin{equation}\label{algformula}
\theta \Bigl(\vec U_1 x_i+\vec U_3 t_3+ \vec Z\,\Bigl |\,T\Bigr )=0 .
\end{equation}
Here $\theta(z\,|\,T)$ is the Riemann theta-function defined by the matrix of $b$-periods of normalized holomorphic differentials on $\Gamma$; the vectors $\vec U_j$ are given
by (\ref{qp5}) with $d\Omega_j$ defined in (\ref{defdomega}); the vector $\vec Z$ is in the
locus $Y$ defined in (\ref{locus}), where the involution $\iota$ of the Jacobian is induced by the involution $\iota(z,\lambda)\to (-z,-\lambda)$ of $\Gamma$.
\end{theorem}
The elliptic solutions are particular cases of the general algebraic-geometrical
solutions considered in the
previous section. The corresponding spectral data are singled
out by the following constraint: the vectors $2\omega_1\vec U_1, 2\omega_2 \vec U_1$
are in the lattice of
periods of the Jacobian of the spectral curve, where
$\vec U_1$ is the vector of $b$-periods of the normalized
differential with the only pole (of order 2) at the marked point $P_\infty$.
\subsection{The generating problem}
For completeness, in this section we present the scheme proposed in
\cite{Krichever80} which allows one to derive the
equations of motion for poles of elliptic solutions to a variety of
soliton equations together with their Lax-type representation
(see more in \cite{kr-nested}). With the help of this scheme
we will get the equations (\ref{ca1a}) and the Lax matrix (\ref{laxckp})
directly without use of relations to the theory of the eCM system.
The elliptic solution of the CKP equation is an
elliptic function with double poles at the points $x_i$:
\begin{equation}\label{int7}
u=-\frac{1}{2}\sum_{i=1}^{n}\wp (x-x_i)+c,
\end{equation}
where $c$ is a constant. The poles depend on the times $t_3$, $t_5$ (as well as on the higher times)
and are assumed to be
all distinct. The corresponding CKP tau-function has the form
\begin{equation}\label{int7a}
\tau (x, {\bf t}_{\rm o})=C_0e^{cx^2/2}\left ( \prod_{i=1}^n
\sigma (x-x_i({\bf t}_{\rm o}))\right )^{1/2}.
\end{equation}
In the rest of this section we denote $t_3=t$.
According to the scheme proposed in \cite{Krichever80},
the basic tool is the auxiliary linear problem
$\partial_{t}\Psi =B_3\Psi$ for the wave
function $\Psi$, i.e.,
\begin{equation}\label{ba0}
\partial_t \Psi =\partial_x^3\Psi +6u \partial_x \Psi +3u'\Psi ,
\end{equation}
for which one can state the following problem: characterize
the functions $u$ of the form (\ref{int7}), elliptic in $x$, for which equation (\ref{ba0})
has {\it double-Bloch solutions} $\Psi (x)$, i.e., solutions such that
$\Psi (x+2\omega_{\alpha} )=B_{\alpha} \Psi (x)$
with some Bloch multipliers $B_{\alpha}$.
Equations (\ref{e3}), (\ref{e4}) imply that the
wave function has simple poles at the points $x_i$.
Therefore, if a double-Bloch solution exists, then it is of the following pole ansatz form:
\begin{equation}\label{ba1}
\Psi = e^{xz+tz^3}\sum_{i=1}^n c_i \Phi (x-x_i, \lambda ),
\end{equation}
where the coefficients $c_i$ do not depend on $x$ (but do depend on $t$, $z$ and $\lambda$).
Indeed, the function $\Phi (x, \lambda )$ given by formula (\ref{phidef})
has the following monodromy properties:
\begin{equation}\label{phimon}
\Phi (x+2\omega_{\alpha} , \lambda )=e^{2(\zeta (\omega_{\alpha} )\lambda -
\zeta (\lambda )\omega_{\alpha} )}
\Phi (x, \lambda ), \quad \alpha =1,2.
\end{equation}
Therefore, the wave function $\Psi$ given by (\ref{ba1})
is a double-Bloch function with Bloch multipliers
$B_{\alpha}=e^{2(\omega_{\alpha} z + \zeta (\omega_{\alpha} )\lambda -
\zeta (\lambda )\omega_{\alpha} )}$ parameterized by $z$ and $\lambda$.
In what follows we will often suppress the second argument of $\Phi$ writing simply
$\Phi (x)=\Phi (x, \lambda )$. For further use note also that $\Phi$ has a simple pole at $x=0$ with residue $1$. The coefficients $\beta_{1}, \beta_2$ of its expansion
$$
\Phi (x, \lambda )=\frac{1}{x}+\beta_1 x +\beta_2 x^2 +O(x^3) \quad
\mbox{as $x\to 0$},
$$
are equal to
\begin{equation}\label{alpha}
\beta_1 =-\frac{1}{2} \, \wp (\lambda ), \quad
\beta_2 =-\frac{1}{6} \, \wp '(\lambda ).
\end{equation}
We will also need the $x$-derivatives of the function $\Phi$:
$\Phi '(x, \lambda )=\partial_x \Phi (x, \lambda )$, $\Phi ''(x, \lambda )=\partial^2_x \Phi (x, \lambda )$
and so on.
\begin{theorem}
The equations of motion (\ref{ca1a}) for
poles $x_i$ of elliptic solutions as functions of $t=t_3$ have the following
commutation representation of Manakov's triple kind:
\begin{equation}\label{ca6}
\dot L+[L,M]=3D'(zI-L),
\end{equation}
where
\begin{equation}\label{laxckp1} L_{ii}=0,\qquad L_{ij}=-\Phi(x_i-x_j,\lambda), \qquad i\neq j;
\end{equation}
the matrix $M$ is defined by (\ref{ca3}), and
$D'$ is the diagonal matrix $\displaystyle{D'_{ik}=
\delta_{ik}\sum_{j\neq i}\wp '(x_i-x_j)}$.
\end{theorem}
\noindent
{\it Proof.}
Substituting (\ref{ba1}) into (\ref{ba0}) with
$u=-\frac{1}{2}\displaystyle{\sum_{i}\wp (x-x_i)}+c$,
we get:
$$
\sum_i \dot c_i\Phi (x-x_i)-\sum_i c_i \dot x_i \Phi '(x-x_i)=3z^2
\sum_i c_i \Phi '(x-x_i)+3z\sum_i c_i \Phi ''(x-x_i)+\sum_i c_i \Phi '''(x-x_i)
$$
$$
-3z\Bigl (\sum_k \wp (x-x_k)\Bigr ) \Bigl (\sum_i c_i \Phi (x-x_i)\Bigr )
-3\Bigl (\sum_k \wp (x-x_k)\Bigr ) \Bigl (\sum_i c_i \Phi ' (x-x_i)\Bigr )
$$
$$
-\frac{3}{2}\Bigl (\sum_k \wp '(x-x_k)\Bigr ) \Bigl (\sum_i c_i \Phi (x-x_i)\Bigr )
+6cz\sum_i c_i\Phi (x-x_i) +6c\sum_i c_i \Phi ' (x-x_i).
$$
It is enough to cancel all poles in the fundamental domain
which are at the points $x_i$ (up to fourth order).
It is easy to see that poles of the fourth order cancel identically.
A direct calculation shows that the conditions of cancellation of third, second and first
order poles have the form
\begin{equation}\label{ba2a}
zc_i=-\sum_{k\neq i}c_k \Phi (x_i-x_k),
\end{equation}
\begin{equation}\label{ba2}
c_i\dot x_i=-3z^2c_i +3c_i \sum_{k\neq i}\wp (x_i-x_k)-3z \sum_{k\neq i}c_k \Phi (x_i-x_k)
-6cc_i,
\end{equation}
\begin{equation}\label{ba3}
\begin{array}{lll}
\dot c_i &=&\displaystyle{
-3(\beta_1 z+\beta_2)c_i -3zc_i\sum_{k\neq i}\wp (x_i-x_k)
+\frac{3}{2}\, c_i \sum_{k\neq i}\wp '(x_i-x_k)}
\\ &&\\
&&\phantom{aaaaaaa}\displaystyle{-\, 3z \sum_{k\neq i}c_k \Phi '(x_i-x_k)
-\frac{3}{2}\sum_{k\neq i}c_k\Phi ''(x_i-x_k)+6czc_i}
\end{array}
\end{equation}
which have to be valid for all $i=1, \ldots , n$.
Substitution of (\ref{ba2a}) into (\ref{ba2}) gives (\ref{ca1a})
(if the coefficients $c_i$ are not identically zero).
The conditions (\ref{ba2a}), (\ref{ba3}) can be rewritten in the matrix form as
linear problems for a vector ${\bf c} =(c_1, \ldots , c_n)^T$:
\begin{equation}\label{ca2}
\left \{ \begin{array}{l}
L{\bf c} = z{\bf c}
\\ \\
\dot {\bf c} =M{\bf c},
\end{array}
\right.
\end{equation}
where $L$ is the matrix (\ref{laxckp1}),
\begin{equation}\label{ca3}
\begin{array}{l}
M= -3(\beta_1 z+\beta_2-2cz)I -3zB -3zD -\frac{3}{2}\, C +\frac{3}{2}\, D'
\end{array}
\end{equation}
and the $n\! \times \! n$ matrices $I$, $B$, $C$, $D$ are given by
$I_{ik}=\delta_{ik}$,
\begin{equation}\label{mat}
\begin{array}{l}
B_{ik}=(1-\delta_{ik})\Phi ' (x_i-x_k),
\\ \\
C_{ik}=(1-\delta_{ik})\Phi '' (x_i-x_k),
\\ \\
\displaystyle{D_{ik}=\delta_{ik}\sum_{j\neq i}\wp (x_i-x_j).}
\end{array}
\end{equation}
The matrices $L, B,C$ are off-diagonal while the matrices $D, D'$ are diagonal.
The linear system (\ref{ca2}) is overdetermined.
Differentiating the first equation in (\ref{ca2}) with respect to $t$, we see that
the compatibility condition of the linear problems (\ref{ca2}) is
\begin{equation}\label{ca4}
\Bigl (\dot L+[L,M]\Bigr ) {\bf c} =0.
\end{equation}
One can prove the following matrix identity (see the appendix):
\begin{equation}\label{ca5}
\dot L+[L,M]=3D'(zI-L) -[\dot X \! -\! 3D, \, B],
\end{equation}
where $X$ is the diagonal matrix $X_{ik}=\delta_{ik}x_i$.
Since $(zI-L){\bf c}=0$ according to (\ref{ba2a}) and $\dot X =3D-6cI$ according to
(\ref{ca1a}), we see from (\ref{ca5})
that the compatibility condition (\ref{ca4}) is satisfied. From (\ref{ca5}) it follows that
the equations of motion have the commutation representation
of Manakov's triple kind (\ref{ca6}) \cite{Manakov}.
\square
\subsection{The integrals of motion and the spectral curve}
It follows from equation (\ref{ca6}) that the characteristic polynomial of the matrix $L$
is an integral of motion. Indeed,
\begin{equation}\label{ca7}
\begin{array}{c}
\displaystyle{\frac{d}{dt}\, \log \det (L-zI )=
\frac{d}{dt}\, \mbox{tr}\log (L-zI )}
\\ \\
\displaystyle{=\, \mbox{tr}\Bigl [ \dot L (L-zI )^{-1}\Bigr ]=
-3\, \mbox{tr}D' =0,}
\end{array}
\end{equation}
where we have used equation (\ref{ca6}) and the fact that
$\displaystyle{\mbox{tr}\, D'=\sum_{i\neq j}\wp '(x_i-x_j)=0}$ ($\wp '$ is an odd function).
The expression
$
R(z, \lambda )=\det (zI-L(\lambda ) )
$
is a polynomial in $z$
of degree $n$. Its coefficients are integrals of motion
(some of them may be trivial). For example:
$$
\begin{array}{l}
n=2: \qquad \det\limits_{2\times 2}(zI-L)=z^2+\wp (x_{12})-\wp (\lambda ),
\\ \\
n=3: \qquad \det\limits_{3\times 3}(zI-L)=z^3 +z\Bigl (\wp (x_{12})+\wp (x_{13})+\wp (x_{23})-
3\wp (\lambda )\Bigr )-\wp '(\lambda ),
\end{array}
$$
where $x_{ik}\equiv x_i-x_k$.
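These low-order determinants can be checked directly in the rational degeneration, where $\sigma(x)=x$, $\zeta(\lambda)=1/\lambda$, $\wp(x)=1/x^2$, and hence $\Phi(x,\lambda)=(1/x+1/\lambda)e^{-x/\lambda}$. The following sketch (our addition, not part of the paper) verifies the $n=2$ case:

```python
# Hedged check (our addition): in the rational degeneration
# Phi(x, lam) = (1/x + 1/lam) * exp(-x/lam) and wp(x) = 1/x^2, so the
# n = 2 characteristic polynomial should be det(zI - L) = z^2 + wp(x12) - wp(lam).
import sympy as sp

z, lam, x12 = sp.symbols('z lam x12')

def Phi(x):
    # rational limit of the function Phi(x, lambda) defined in the text
    return (1/x + 1/lam) * sp.exp(-x/lam)

# Lax matrix L_ij = -Phi(x_i - x_j) for i != j, with n = 2
L = sp.Matrix([[0, -Phi(x12)], [-Phi(-x12), 0]])
charpoly = (z*sp.eye(2) - L).det()

assert sp.simplify(charpoly - (z**2 + 1/x12**2 - 1/lam**2)) == 0
print("n = 2 spectral curve checked in the rational limit")
```

The exponential factors cancel in the product $\Phi(x_{12})\Phi(-x_{12})$, which is why the determinant is an honest rational function of $z$, $x_{12}$ and $\lambda$.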
\medskip
\noindent
{\bf Remark.}
Although the Lax equation for matrices $L,M$ does not hold, it follows from (\ref{ca7}) that
traces of the Lax matrix $L$ (and therefore its eigenvalues) are integrals of motion:
$\partial_t \mbox{tr}\, L^m =0$, $m\geq 1$. (This is equivalent to the equalities
$\mbox{tr} \, (D'L^m)=0$ for $m\geq 1$ which are based on certain non-trivial identities
for the $\wp$-function.) This means that the time evolution is an isospectral transformation
of the Lax matrix $L$. Therefore, there should exist a matrix $M_0$ such that the Lax equation
$
\dot L+[L,M_0]=0
$
holds. In order to find it explicitly, we first note that by virtue of the matrix identity
(\ref{A4}) (see the appendix) we can write equation (\ref{ca6}) in the form
$
\dot L+[L,\hat M]=-3D'L,
$
where
$$
\hat M=M+3z\Bigl ((\beta_1-2c)I+B+D\Bigr )=-3\beta_2 I -\frac{3}{2}\, (C-D')
$$
does not depend on $z$. Using again the identity (\ref{A4}), one can see that
\begin{equation}\label{lax2}
\begin{array}{c}
M_0=\hat M -3(B+D)L=-3\beta_2 I -\frac{3}{2}\, (C-D')-3(B+D)L
\\ \\
= M+3z(\beta_1 -2c)I +3(B+D) (zI-L)
\end{array}
\end{equation}
($\beta_1$, $\beta_2$ are given in (\ref{alpha})).
\medskip
The embedding into the Calogero-Moser dynamics discussed above
implies that the integrals of motion $I_k$ for the dynamical system (\ref{ca1a})
are restrictions of the Calogero-Moser integrals of motion to the subspace
of the phase space with $p_i=0$.
For example:
\begin{equation}\label{i4}
\begin{array}{l}
I_2=\displaystyle{\sum\limits_{i<j}\wp (x_{ij})},
\\ \\
I_4=\displaystyle{\sum\limits_{i<j<k<l}\Bigl [\wp (x_{ij})\wp (x_{kl})+\wp (x_{ik})\wp (x_{jl})+
\wp (x_{il})\wp (x_{jk})\Bigr ]}.
\end{array}
\end{equation}
The spectral curve $\Gamma$ is defined by the equation $R(z, \lambda )=\det (zI-L(\lambda ))=0$.
It is an $n$-sheeted covering of the elliptic curve ${\cal E}$ uniformized by the variable $\lambda$
and realized as the quotient
of the complex plane by the lattice generated by $2\omega_1$, $2\omega_2$.
Since $L(-\lambda )=-L^T (\lambda )$, it is easy to see that
the curve $\Gamma$ is equipped with the holomorphic involution
$\iota :(z, \lambda )\to (-z, -\lambda )$.
As it was already mentioned, the equation of the spectral curve
(the characteristic equation of the Lax matrix) is an integral of motion.
\begin{proposition}(\cite{Krichever80}) For generic values of $x_i$ the spectral curve is smooth of
genus $g=n$.
\end{proposition}
\subsection{The wave function as the Baker-Akhiezer function on the spectral curve}
Let $P$ be a point of the spectral curve $\Gamma$, i.e. $P=(z, \lambda )$, where
$z$ and $\lambda$ are connected by the equation $R(z, \lambda )=0$.
To each point $P$ of the curve there corresponds a single eigenvector
${\bf c}(0,P)=(c_1(0,P), \ldots , c_n(0,P))^T$
of the matrix
$L(t=0, \lambda )$ normalized by the condition $c_1(0, P)=1$.
The non-normalized components $c_i$ are equal to
$\Delta_i (0, P)$, where $\Delta_i (0, P)$ are suitable minors of the matrix
$zI-L(0, \lambda )$. They are holomorphic functions on $\Gamma$ outside the
points above $\lambda =0$. After normalizing the first component, all other components
$c_i(0,P)$ become meromorphic functions on $\Gamma$ outside the points $P_j$ located
above $\lambda =0$. Let ${\cal D}'$ be the pole divisor of the vector $\bf c$ with coordinates $c_i$.
Unlike the spectral curve, which is time-independent, the divisor ${\cal D}'$ depends on the initial data.
\begin{lemma} \label{lem25}The sum of the divisors ${\cal D}'$
and $\iota({\cal D}')$ is the zero divisor of a holomorphic
differential on the spectral curve, i.e. the equation
\begin{equation}\label{diotad}
{\cal D}'+\iota({\cal D}')={\cal K}
\end{equation}
holds.
\end{lemma}
\noindent
{\it Proof}. The idea of the proof goes back to the proof of Theorem 4 in \cite{Kr87}.
Taking the differential of the eigenvalue equation $(zI-L(\lambda )){\bf c}(P)=0$
and using the equation ${\bf c}^T(\iota P)(zI-L(\lambda ))=0$,
which follows from the definition of the involution, we get the equation
$$
{\bf c}^T(\iota P)(dzI-dL(\lambda )){\bf c}(P)=0,
$$
or
\begin{equation}\label{s10}
\left < {\bf c}(\iota P), {\bf c}(P)\right >dz =
\left < {\bf c}(\iota P), L_{\lambda}{\bf c}(P)\right >d\lambda,
\end{equation}
where $L_{\lambda}=\partial L/\partial \lambda$ and $\displaystyle{
\left < {\bf c}(\iota P), {\bf c}(P)\right >=\sum_i c_i(\iota P)c_i(P)}$.
For generic initial data the spectral curve is smooth, i.e. the differentials $dz$ and $d\lambda$
have no common zeros. Then from (\ref{s10}) it follows that the zeros of
the differential $d\lambda$ (which are ramification points of the covering
$\Gamma \to {\cal E}$) coincide with the zeros of the
function $\left < {\bf c}(\iota P), {\bf c}(P)\right >$. Therefore,
the differential
\begin{equation}\label{s11}
d\Lambda = \frac{d\lambda}{\left < {\bf c}(\iota P), {\bf c}(P)\right >}
\end{equation}
is a holomorphic differential on the curve $\Gamma$. Its $2g-2$ zeros are located at the points
where the vectors ${\bf c}(P)$ and ${\bf c}(\iota P)$ have poles.
\square
For completeness let us outline
the arguments that ultimately lead to the proof of the
algebraic integrability of equations (\ref{ca1a}).
A particular case of Theorem 2 in \cite{Krichever80} is the following statement.
\begin{theorem} The function
\begin{equation}\label{s9a}
\hat \Psi (x, t, P)=e^{-\zeta (\lambda )x_1(0)}
\sum_{i=1}^{n}c_i(t, P)\Phi (x-x_i , \lambda )e^{zx+z^3t}
\end{equation}
is the one-point Baker-Akhiezer function on the spectral
curve $\Gamma$ with the marked point $P_{\infty}$ (one of the pre-images
of $\lambda=0$) corresponding to the divisor ${\cal D}={\cal D}'+P_\infty$.
\end{theorem}
\noindent
By definition the function $\hat \Psi$ has poles at $x_i(t)$. From the theta-functional formula (\ref{ba501}) for the Baker-Akhiezer function
it follows that $x_i$ are zeros of the second factor
in the denominator, i.e. they are roots in $x$ of the equation
$$\theta \Bigl (\vec U_1x+\vec U_3 t-\vec A({\cal D}) -
\vec K+\vec A(P_{\infty})\Bigr )=0.
$$
From Lemma \ref{lem25} it follows that the pole divisor
$\cal D$ of the Baker-Akhiezer function satisfies the equation
\begin{equation}\label{s12}
{\cal D}+\iota {\cal D}-2P_\infty ={\cal K},
\end{equation}
where ${\cal K}$ is the canonical class. This is precisely the condition (\ref{iota1}) on the
divisor of poles of the Baker-Akhiezer function for algebraic-geometric solutions
to the CKP equation. This completes the proof of (\ref{algformula}) since (\ref{s12}) is equivalent to equation (\ref{locus}) for the vector $\vec Z$ in (\ref{algformula}).
\subsection{Degenerations of elliptic solutions}
\subsubsection{Trigonometric solutions}
In the degenerate case, when one of the periods tends to infinity,
the elliptic solutions become trigonometric (hyperbolic). We consider trigonometric
solutions which vanish at infinity:
$$
u(x, {\bf t})=- \, \frac{1}{2}\sum_{i=1}^n
\frac{\gamma^2}{\sinh^2 (\gamma (x\! -\! x_i({\bf t})))},
$$
where $\gamma$ is a complex parameter. When $\gamma$ is purely imaginary (respectively, real),
one deals with trigonometric (respectively, hyperbolic) solutions.
The equations of motion for the poles are
\begin{equation}\label{trig1}
\dot x_i=3\sum_{k\neq i}\frac{\gamma^2}{\sinh^2(\gamma (x_i\! -\! x_k))}-\gamma^2.
\end{equation}
Sending the spectral parameter $\lambda$ to infinity, we find the
Lax matrix in the form
\begin{equation}\label{trig2}
L_{ij}=-\, \frac{\gamma(1-\delta_{ij})}{\sinh (\gamma (x_i-x_j))}.
\end{equation}
Note that it is antisymmetric.
As is shown in \cite{Z20}, the KP tau-function for trigonometric solutions
has the following determinant representation:
\begin{equation}\label{trig3}
\tau^{\rm KP}(x, {\bf t})=
\det_{n\times n}\Biggl (e^{2\gamma x}I-\exp \Bigl (-\sum_{k\geq 1}t_k {\cal L}_k\Bigr )
e^{2\gamma X_0}\Biggr )=\prod_{j=1}^n (e^{2\gamma x}-e^{2\gamma x_j({\bf t})}),
\end{equation}
where $X_0=\mbox{diag}\, (x_1(0), \ldots , x_n(0))$ and
\begin{equation}\label{trig4}
{\cal L}_k=(L_0+\gamma I)^k-(L_0-\gamma I)^k, \qquad L_0 =L({\bf t}=0).
\end{equation}
We see that ${\cal L}_k$ is a polynomial in $L_0$ of degree $k-1$. If $k$ is even
(respectively, odd), ${\cal L}_k$ contains only odd (respectively, even) powers of $L_0$.
It is easy to see that this tau-function satisfies the conditions (\ref{ch6}), and,
therefore, gives rise to the CKP tau-function
\begin{equation}\label{trig5}
\tau (x, {\bf t}_{\rm o})=\Biggl (\det_{n\times n}
\Bigl (e^{2\gamma x}I-\exp \Bigl (-\!\!\!\!\!\sum_{k\geq 1, \,\, k \,\, {\rm odd}}
\!\! t_k {\cal L}_k\Bigr )
e^{2\gamma X_0}\Bigr )\Biggr )^{1/2}.
\end{equation}
Indeed, we have
$$
\partial_{t_{2m}}\log \tau^{\rm KP}\Biggl |_{t_{2k}=0}=
\partial_{t_{2m}}\mbox{tr}\log \Biggl (e^{2\gamma x}I-\exp \Bigl (-\sum_{k\geq 1}t_k {\cal L}_k\Bigr )
e^{2\gamma X_0}\Biggr )\Biggl |_{t_{\rm e}=0}
$$
$$
=\mbox{tr} \Biggl [ {\cal L}_{2m} \exp \Bigl (-\!\!\!\!\!
\sum_{k\geq 1, \,\, k \,\, {\rm odd}}t_k {\cal L}_k\Bigr )
\Biggl (e^{2\gamma x}I-\exp \Bigl (-\!\!\!\!\!\sum_{k\geq 1, \,\, k \,\, {\rm odd}}
\!\! t_k {\cal L}_k\Bigr )
e^{2\gamma X_0}\Biggr )^{-1}\Biggr ].
$$
But this is zero for all $m\geq 1$
because $\mbox{tr}\, L_0^{2l-1}=0$ for all $l\geq 1$ and,
as it was said above, ${\cal L}_{2m}$ contains only odd
powers of $L_0$ while all other ${\cal L}_k$ in this expression contain only even
powers of $L_0$.
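The parity property of ${\cal L}_k$ invoked in this argument can be confirmed on a generic antisymmetric matrix; the sketch below (our addition, not part of the paper) expands $(L_0+\gamma I)^k-(L_0-\gamma I)^k$ for $k=2,3$:

```python
# Hedged symbolic check (our addition): for any matrix L0,
# (L0 + g*I)^k - (L0 - g*I)^k contains only odd powers of L0 when k is
# even, and only even powers of L0 when k is odd.  We verify k = 2, 3
# on a generic 3x3 antisymmetric L0.
import sympy as sp

g, a, b, c = sp.symbols('g a b c')
L0 = sp.Matrix([[0, a, b], [-a, 0, c], [-b, -c, 0]])
I3 = sp.eye(3)

calL2 = sp.expand((L0 + g*I3)**2 - (L0 - g*I3)**2)
calL3 = sp.expand((L0 + g*I3)**3 - (L0 - g*I3)**3)

# k = 2: only the first power of L0 survives
assert sp.simplify(calL2 - 4*g*L0) == sp.zeros(3, 3)
# k = 3: only even powers of L0 survive
assert sp.simplify(calL3 - (6*g*L0**2 + 2*g**3*I3)) == sp.zeros(3, 3)
print("parity of calL_k confirmed for k = 2, 3")
```

The binomial expansion makes the general pattern clear: the terms with even powers of $\gamma$ cancel between the two binomials, leaving powers of $L_0$ of the opposite parity to $k$.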
\subsubsection{Rational solutions}
In the most degenerate case, when both periods tend to infinity, $\wp (x)\to 1/x^2$
and the elliptic solutions become rational:
$$
u(x, {\bf t})=- \, \frac{1}{2}\sum_{i=1}^n \frac{1}{(x-x_i({\bf t}))^2}.
$$
This corresponds to the limit $\gamma \to 0$ in the trigonometric
solutions. The equations of motion for the poles are
\begin{equation}\label{rat1}
\dot x_i=3\sum_{k\neq i}\frac{1}{(x_i-x_k)^2}.
\end{equation}
Sending the spectral parameter $\lambda$ to infinity, we find the
(antisymmetric) Lax matrix in the form
\begin{equation}\label{rat2}
L_{ij}=-\, \frac{1-\delta_{ij}}{x_i-x_j}.
\end{equation}
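The isospectrality of the flow can be seen numerically in this rational case. The experiment below (our addition, not from the paper) integrates (\ref{rat1}) with a fourth-order Runge-Kutta scheme and checks that the spectrum of the antisymmetric Lax matrix (\ref{rat2}) stays put:

```python
# Illustrative numerical experiment (our addition): integrate the
# rational pole dynamics  dx_i/dt = 3 * sum_{k != i} 1/(x_i - x_k)^2
# and check that the spectrum of the antisymmetric Lax matrix
# L_ij = -(1 - delta_ij)/(x_i - x_j) is conserved along the flow.
import numpy as np

def velocity(x):
    d = x[:, None] - x[None, :]
    np.fill_diagonal(d, np.inf)          # drop the k = i terms
    return 3.0 * np.sum(1.0 / d**2, axis=1)

def lax(x):
    d = x[:, None] - x[None, :]
    np.fill_diagonal(d, np.inf)          # L_ii = 0
    return -1.0 / d

def rk4_step(x, h):
    k1 = velocity(x)
    k2 = velocity(x + 0.5*h*k1)
    k3 = velocity(x + 0.5*h*k2)
    k4 = velocity(x + h*k3)
    return x + (h/6.0)*(k1 + 2*k2 + 2*k3 + k4)

x = np.array([0.0, 1.3, 2.9])            # arbitrary distinct initial poles
s0 = np.sort(np.linalg.eigvals(lax(x)).imag)   # antisymmetric => purely imaginary spectrum
for _ in range(2000):
    x = rk4_step(x, 1e-4)
s1 = np.sort(np.linalg.eigvals(lax(x)).imag)
print("max spectral drift:", np.max(np.abs(s1 - s0)))
```

Up to integration error the drift is negligible, in agreement with the fact that $\det(zI-L)$ is an integral of motion.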
It is known that the KP tau-function for rational solutions has the following
determinant representation (see, e.g. \cite{Shiota}):
\begin{equation}\label{rat3}
\tau^{\rm KP}(x, {\bf t})=
\det_{n\times n}\Bigl (xI-X_0 +\sum_{k\geq 1}kt_k L_0^{k-1}\Bigr )=
\prod_{j=1}^n (x-x_j({\bf t})),
\end{equation}
where $X_0=\mbox{diag}\, (x_1(0), \ldots , x_n(0))$ and $L_0 =L({\bf t}=0)$.
It is easy to see that this tau-function satisfies the conditions (\ref{ch6}), and,
therefore, gives rise to the CKP tau-function
\begin{equation}\label{rat4}
\tau (x, {\bf t}_{\rm o})=\Biggl (\det_{n\times n}
\Bigl (xI-X_0 +\! \sum_{k\geq 1, \,\, k \, {\rm odd}}
\! kt_k L_0^{k-1}\Bigr )\Biggr )^{1/2}.
\end{equation}
Indeed, we have
$$
\partial_{t_{2m}}\log \tau^{\rm KP}\Biggl |_{t_{2k}=0}=
\partial_{t_{2m}}\mbox{tr} \log \, \Bigl (xI-X_0 +\sum_{k\geq 1}kt_k L_0^{k-1}\Bigr )
\Biggl |_{t_{\rm e}=0}
$$
$$
=2m \, \mbox{tr}\left [ L_0^{2m-1}\Bigl ((x+t_1)I-X_0 +
3t_3 L_0^2 +5t_5L_0^4+\ldots \Bigr )^{-1}\right ]=0
$$
for all $m\geq 1$ because $\mbox{tr}\, L_0^{2l-1}=0$ for all $l\geq 1$.
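The vanishing of the odd traces used here is immediate from the antisymmetry of $L_0$; for completeness (our addition), since $\mbox{tr}\, A = \mbox{tr}\, A^T$ for any matrix $A$ and $L_0^T = -L_0$,

```latex
\mbox{tr}\, L_0^{2l-1}=\mbox{tr}\,\bigl(L_0^{T}\bigr)^{2l-1}
=\mbox{tr}\,\bigl(-L_0\bigr)^{2l-1}
=-\,\mbox{tr}\, L_0^{2l-1}
\quad\Longrightarrow\quad \mbox{tr}\, L_0^{2l-1}=0.
```

The same one-line argument applies to the trigonometric Lax matrix (\ref{trig2}), which is also antisymmetric.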
\section{Concluding remarks}
The main result of this paper is the identification of the CKP hierarchy as the
hierarchy of {\it odd times} flows of the KP hierarchy restricted onto the
locus of its turning points. It suggests that a similar result might be
valid for the BKP hierarchy. Namely, we conjecture that the
BKP hierarchy can be identified with the restriction of odd
times flows of the KP hierarchy onto the locus which in the Sato formulation
is defined by the equation
\begin{equation}\label{conc1}
(\mathcal L^3)_+=\partial_x^3+6u\partial_x,
\end{equation}
i.e. the coefficient of the zeroth power of $\partial_x$ in $\mathcal L^3$ vanishes.
In terms of the tau-function, this condition means that
\begin{equation}\label{conc2}
(\partial_{t_2}+\partial_{t_1}^2)\log \tau^{\rm KP}\Bigr |_{{\bf t}_{\rm e}=0}=0
\end{equation}
for all $t_1, t_3, t_5, \ldots$.
The latter is an analog of equation (\ref{ch6}) defining turning points of the KP hierarchy.
Another interesting problem we plan to consider in the future is the
Hamiltonian theory of equations of motion for poles
of elliptic solutions to the CKP hierarchy.
In section 4 we have derived these equations in two ways.
First, these equations can be obtained by restricting the higher equations of motion of the
elliptic Calogero-Moser system onto the locus of its
turning points. As a corollary of this, we have presented
solutions of these equations in the implicit function form using theta-function
of the spectral curve. The second approach to the equations of motion is via the
``generating linear problem'' scheme which allows us to
define the corresponding spectral curve and to prove that it is time-independent
in a direct way (i.e. without any reference to the elliptic Calogero-Moser system).
As it was shown earlier in \cite{KN}, the phase space of the elliptic
CKP system can be identified with the total space of the Prym varieties
bundle over the space of the spectral curves. Under this identification the equations of motion
become linear on the fibers. Such a picture is characteristic of algebraically
integrable Hamiltonian systems. However, the authors' attempts
to find the corresponding Hamiltonian formulation of equations (\ref{ca1a})
by a direct guess or by the more advanced machinery proposed
in \cite{kp1,kr-nested,kp2} have failed so far.
\section*{Acknowledgments}
\addcontentsline{toc}{section}{\hspace{6mm}Acknowledgments}
We thank S. Natanzon for discussions.
The research has been funded within the framework of the
HSE University Basic Research Program and the Russian Academic Excellence Project '5-100'.
\section*{Appendix A: Proof of Lemma \ref{proposition:even}}
\def\theequation{A\arabic{equation}}
\setcounter{equation}{0}
\addcontentsline{toc}{section}{\hspace{6mm}Appendix A}
In this appendix we give a sketch of proof of Lemma \ref{proposition:even},
i.e. we are going to prove that the conditions (\ref{ch5})
and
\begin{equation}\label{A01}
\partial_x\partial_{t_4}\log \tau^{\rm KP}\Bigr |_{{\bf t}_{\rm e}=0}
=\partial_x\partial_{t_6}\log \tau^{\rm KP}\Bigr |_{{\bf t}_{\rm e}=0}
=\ldots =0
\end{equation}
follow from the constraint
\begin{equation}\label{A01a}
\partial_{t_2}\log\tau^{\rm KP}\Bigr |_{{\bf t}_{\rm e}=0}=0
\quad \mbox{for all
$t_1, t_3, t_5, \ldots $}
\end{equation}
(see (\ref{ch6}))
provided $\tau^{\rm KP}$ is a KP tau-function, i.e. satisfies all
the equations of the KP hierarchy.
We use the representation of the KP hierarchy in the unfolded form
suggested in \cite{Natanzon1,Natanzon2}, see also section 3.2 of \cite{NZ16}.
Set $F=\log \tau^{\rm KP}$ and $F_{k_1,\ldots ,\, k_m}=\partial_{t_{k_1}}\ldots \partial_{t_{k_m}}\! F$.
Then the KP hierarchy can be written in the form
\begin{equation}\label{A02}
F_{k_1,\ldots , \, k_m}=\sum_{n\geq 1}\sum R_{k_1 ,\ldots ,\, k_m}^{(n)}
\! \left (\begin{array}{lll} s_1 & \ldots & s_n \\
r_1 & \ldots & r_n \end{array} \right )
\partial_x^{r_1}F_{s_1}\ldots \partial_x^{r_n}F_{s_n},
\end{equation}
where $m\geq 2$ and $\displaystyle{R_{k_1, \ldots ,\, k_m}^{(n)}
\! \left (\begin{array}{lll} s_1 & \ldots & s_n \\
r_1 & \ldots & r_n \end{array} \right )}$ are universal rational coefficients.
The second sum is taken over all matrices
$\displaystyle{\left (\begin{array}{lll} s_1 & \ldots & s_n \\
r_1 & \ldots & r_n \end{array} \right )}$ such that $s_i, r_i \geq 1$ with the conditions
\begin{equation}\label{A03}
\sum_{i=1}^n (s_i+r_i)=\sum_{i=1}^m k_i, \qquad
\sum_{i=1}^n r_i \geq n+m-2.
\end{equation}
For example \cite{Natanzon1},
\begin{equation}\label{A03a}
F_{2,3}=\frac{3}{2}\, \partial_xF_4 -\frac{3}{2}\, \partial_x^3F_2 -3\partial_x F_2 \, \partial_x^2F.
\end{equation}
From the fact that if $\tau^{\rm KP}(x, {\bf t})$ is a tau-function, then
$\tau^{\rm KP}(-x, -{\bf t})$ is a tau-function, too
(this is a corollary of the Hirota equations), it follows that
\begin{equation}\label{A04}
\mbox{if $\displaystyle{\sum_{i=1}^n (r_i-1)-m\equiv 1}$ (mod 2), then
$\displaystyle{R_{k_1, \ldots ,\, k_m}^{(n)}
\! \left (\begin{array}{lll} s_1 & \ldots & s_n \\
r_1 & \ldots & r_n \end{array} \right )=0.}$}
\end{equation}
First we prove (\ref{A01}). The proof is by induction.
We assume that (\ref{A01}) is true
for $\partial_xF_{2}, \ldots , \partial_xF_{2k}$ (this is certainly true if $k=1$) and will
deduce from (\ref{A02}) that it is true for $k\to k+1$.
From (\ref{A01a}) and (\ref{A02}) at $m=2$ we have:
\begin{equation}\label{A02a}
\begin{array}{c}
\displaystyle{
0=F_{2, \, 2k+1}=\sum_{s_1+r_1=2k+3}R^{(1)}_{2, \, 2k+1}
\left (\begin{array}{c} s_1\\r_1\end{array}
\right ) \partial_x^{r_1}F_{s_1}}
\\ \\
\displaystyle{+
\sum_{s_1+s_2+r_1+r_2=2k+3}R^{(2)}_{2, \, 2k+1}
\left (\begin{array}{cc} s_1&s_2\\r_1&r_2\end{array}
\right ) \partial_x^{r_1}F_{s_1}\partial_x^{r_2}F_{s_2}+\ldots}
\end{array}
\end{equation}
Separating the term with $r_1=1$ in the first sum in the right hand side of
(\ref{A02a}), we write it as
\begin{equation}\label{A02b}
0=F_{2, \, 2k+1}=R^{(1)}_{2, \, 2k+1}
\left (\begin{array}{c} 2k+2\\1\end{array}
\right ) \partial_x F_{2k+2} \,\, +\;
\mbox{all the rest}.
\end{equation}
Now, recalling the condition (\ref{A04}), we see that the coefficients of the
terms in the right hand side are non-zero only when
$\displaystyle{\sum_{i=1}^n s_i \equiv n-1 \; \mbox{(mod $2$)}}$. From this it follows that
for both odd and even $n$ at least one of the $s_i$'s must be even (and less than
$2k+2$). Therefore, ``all the rest'' terms vanish by the induction assumption.
Since the coefficient $\displaystyle{R^{(1)}_{2, \, 2k+1}
\left (\begin{array}{c} 2k+2\\1\end{array}
\right )}$ is not equal to zero (see \cite{Natanzon1}), we conclude from (\ref{A02b})
that $\partial_x F_{2k+2}=0$.
Next we are going to prove that if $\partial_xF_{2k}=0$
for all $k\geq 1$ and all $t_1, t_3, \ldots$, then
$F_{k_1,\ldots ,\, k_m}=0$ for all even $k_1, \ldots , k_m$ and odd $m\geq 3$. Since $m+1$
and all the $k_i$'s are even, we can, using (\ref{A03}), rewrite the condition (\ref{A04})
in the form
\begin{equation}\label{A05}
\sum_{i=1}^n s_i \equiv n \,\,\, \mbox{(mod 2)}.
\end{equation}
But if at least
one of $s_i$ in (\ref{A02}) is even, then the corresponding term vanishes because
$F_{2k}=0$ for all $k\geq 1$. Therefore, all the $s_i$'s must be odd, i.e.,
$s_i=2l_i+1$ and so the condition (\ref{A05}) is satisfied which means that the
coefficient $\displaystyle{R_{k_1, \ldots ,\, k_m}^{(n)}
\! \left (\begin{array}{lll} s_1 & \ldots & s_n \\
r_1 & \ldots & r_n \end{array} \right )}$ vanishes. This proves that $F_{k_1, \ldots ,\, k_m}=0$.
\section*{Appendix B: Proof of equation (\ref{ca5})}
\def\theequation{B\arabic{equation}}
\setcounter{equation}{0}
\addcontentsline{toc}{section}{\hspace{6mm}Appendix B}
Here we prove the matrix identity (\ref{ca5}).
First of all we note that $\dot L_{ik}=-(\dot x_i-\dot x_k)\Phi '(x_i-x_k)$, and, therefore,
we have $\dot L =-[\dot X, B]$. To transform the commutators
$[L,B]+[L,D]$, we use the identity
\begin{equation}\label{A1}
\Phi (x )\Phi '(y)-\Phi (y)\Phi '(x)=\Phi (x+y)(\wp (x) -\wp (y)).
\end{equation}
With the help of it we get for $i\neq k$
$$
-\Bigl ([L,B]+[L,D]\Bigr )_{ik}
$$
$$
=\, \sum_{j\neq i,k}\Phi (x_i-x_j)\Phi '(x_j-x_k)-
\sum_{j\neq i,k}\Phi ' (x_i-x_j)\Phi (x_j-x_k)
$$
$$
+\, \Phi (x_i-x_k)\Bigl (\sum_{j\neq k}\wp (x_j-x_k)-\sum_{j\neq i}\wp (x_i-x_j)\Bigr )=0,
$$
so we see that
$[L,B]+[L,D]$ is a diagonal matrix. To find its matrix elements, we use the limit
of (\ref{A1}) at $y=-x$:
$$
\Phi (x)\Phi '(-x)-\Phi (-x)\Phi '(x)=\wp '(x)
$$
which leads to
$$
-\Bigl ([L,B]+[L,D]\Bigr )_{ii}
$$
$$
=\,
\sum_{j\neq i}\Bigl (\Phi (x_i-x_j)\Phi ' (x_j-x_i)-\Phi ' (x_i-x_j)\Phi (x_j-x_i)\Bigr )
=\sum_{j\neq i}\wp '(x_i-x_j)=D'_{ii},
$$
so we finally obtain the matrix identity
\begin{equation}\label{A4}
[L,B]+[L,D]=-D'.
\end{equation}
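Identity (\ref{A4}) can be verified symbolically in the rational degeneration, where $\Phi(x)=1/x$, $\Phi'(x)=-1/x^2$, $\wp(x)=1/x^2$, $\wp'(x)=-2/x^3$. The sketch below (our addition, not part of the paper) checks the $n=3$ case:

```python
# Hedged symbolic check (our addition): verify [L,B] + [L,D] = -D'
# in the rational limit, where Phi(x) = 1/x, Phi'(x) = -1/x^2,
# wp(x) = 1/x^2 and wp'(x) = -2/x^3, for n = 3 generic poles.
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
xs = [x1, x2, x3]
n = 3

Phi  = lambda x: 1/x
dPhi = lambda x: -1/x**2
wp   = lambda x: 1/x**2
dwp  = lambda x: -2/x**3

L  = sp.Matrix(n, n, lambda i, k: 0 if i == k else -Phi(xs[i]-xs[k]))
B  = sp.Matrix(n, n, lambda i, k: 0 if i == k else dPhi(xs[i]-xs[k]))
D  = sp.diag(*[sum(wp(xs[i]-xs[j]) for j in range(n) if j != i) for i in range(n)])
Dp = sp.diag(*[sum(dwp(xs[i]-xs[j]) for j in range(n) if j != i) for i in range(n)])

# [L,B] + [L,D] + D' should be the zero matrix
assert sp.simplify(L*B - B*L + L*D - D*L + Dp) == sp.zeros(n, n)
print("identity [L,B] + [L,D] = -D' verified for n = 3 (rational limit)")
```

The off-diagonal cancellation reproduces the rational specialization of identity (\ref{A1}), and the diagonal entries produce exactly $-D'$.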
Combining the derivatives of (\ref{A1}) w.r.t. $x$ and $y$, we obtain the
identity
\begin{equation}\label{A5}
\Phi (x)\Phi ''(y)-\Phi (y)\Phi ''(x)=2\Phi '(x+y)(\wp (x)-\wp (y))
+\Phi (x+y)(\wp '(x)-\wp '(y))
\end{equation}
which allows us to prove the matrix identity
\begin{equation}\label{A9}
[L,C]=-2[D, B]+D'L+LD',
\end{equation}
which
is used, together with (\ref{A4}), to transform $\dot L+[L,M]$ to the form (\ref{ca5}).
\section{Introduction}
In~\cite{Cal82} Calabi introduced the problem of minimising the $L^2$-norm of
the scalar curvature (this is called the \emph{Calabi functional})
over metrics in a fixed K\"ahler class on a compact
K\"ahler manifold. A critical point of the Calabi functional is called an
extremal metric. The Euler-Lagrange equation is that the gradient of the scalar
curvature is a holomorphic vector field.
It is known that extremal metrics in fact minimise the Calabi
functional (see~\cite{Hwang95}, \cite{Chen06}, \cite{Don05}).
Recently much progress has been made in understanding
when extremal metrics exist, at least on a conjectural level. K\"ahler-Einstein
metrics are a special case and when the first Chern class of the manifold is
positive (the manifold is called Fano in this case),
Yau conjectured that the existence of
K\"ahler-Einstein metrics is
related to the stability of the manifold in the sense of geometric invariant
theory. In the case of negative or zero first Chern class Yau~\cite{Yau78}
and Aubin~\cite{Aub78} have shown
that K\"ahler-Einstein metrics always exist, answering a conjecture of Calabi.
Tian~\cite{Tian97} made significant progress towards understanding the Fano
case, solving it completely in the case of surfaces in~\cite{Tian90}.
Donaldson~\cite{Don97}
showed that the scalar curvature can be interpreted as a moment map (this was
also observed by Fujiki~\cite{Fuj92})
and this enabled extending the conjectures about the
existence of
K\"ahler-Einstein metrics to more general constant scalar curvature and extremal
metrics (see~\cite{Don02}, \cite{Mab04_1}, \cite{GSz04}).
In this paper we look at what we can say about minimising
the Calabi functional in a K\"ahler class which admits no
extremal metric, concentrating on a concrete example. Let $\Sigma$ be a
genus 2
curve and $\mathcal{M}$ a degree $-1$ line bundle on it. We consider the ruled
surface $X=\mathbf{P}(\mathcal{M}\oplus\mathcal{O})$ with a family of
polarisations $L_m = C + mS_\infty$, where $C$ is the class of a fibre,
$S_\infty$ is the infinity section (with self-intersection 1), and $m>0$.
Technically we should take $m$ to
be rational, especially when discussing test-configurations, but by an
approximation and continuity argument
we can take $m$ to be real. The aim is to study the problem
of minimising the Calabi functional in these K\"ahler classes. Our main result
is the following.
\begin{thm}\label{thm:main}
There exist constants $k_1\simeq 18.9$ and $k_2\simeq 5.03$ such that
\begin{enumerate}
\item If $0<m<k_1$ then $X$ admits an extremal
metric (this is due to T{\o}nnesen-Friedman~\cite{TF97}).
\item If
$k_1\leqslant m\leqslant k_2(k_2+2)$ then there exists a minimising sequence of
metrics which breaks $X$ into two pieces and converges to complete
extremal metrics on both.
\item If $m>k_2(k_2+2)$ then there exists a minimising sequence of
metrics which breaks $X$ into three pieces. It converges to complete
extremal metrics on two of these and the third degenerates into a fibration of
infinitely long and infinitely thin cylinders.
\end{enumerate}
\end{thm}
\begin{comment}
These decompositions are analogous to the Harder-Narasimhan filtration of an
unstable vector bundle. The extremal metrics on them correspond to the
Hermitian-Einstein metrics on the successive quotients of the Harder-Narasimhan
filtration. See~\cite{Don02} for the conjectural decomposition
of unstable toric varieties.
\end{comment}
To construct metrics on our ruled surface, we use the momentum construction
given in Hwang-Singer~\cite{HS02}. This construction has been used repeatedly in
the past to
find special metrics on ruled manifolds, in particular extremal metrics.
See~\cite{ACGT3} for a unified treatment of these constructions or~\cite{HS02} for
a historical overview and more references. The momentum construction
allows us to construct circle invariant
metrics from functions on an interval and it gives a convenient expression
for the scalar curvature. More precisely, let
$\phi:[0,m]\to\mathbf{R}$ be a smooth function, positive on the
interior $(0,m)$, vanishing at the endpoints, and such that $\phi'(0)=2$,
$\phi'(m)=-2$. The momentum construction gives a metric $\omega_\phi$ in
the K\"ahler class $L_m$, with scalar curvature
\[ S(\omega_\phi) = \frac{-2}{1+\tau}-\frac{1}{2(1+\tau)}
\big[ (1+\tau)\phi\big]^{\prime\prime}.
\]
Here $\tau$ is the moment map for the $S^1$-action on the fibres and working
with this coordinate is the central idea of the momentum construction.
We will recall this construction in
Section~\ref{sec:momentum}. Of particular importance to us is the fact that we
can consider momentum profiles which vanish on a subset of $(0,m)$. These
correspond to degenerate metrics and they arise as the limits of the minimising
sequences in Theorem~\ref{thm:main}.
In Section~\ref{sec:minimising}
we consider the problem of directly minimising the Calabi
functional on the set of metrics obtained by the momentum construction. Since
the $L^2$-norm of the scalar curvature is equivalent to
the $H^2$-norm of the momentum
profiles, this is straightforward. We find that the Euler-Lagrange
equation for a minimiser $\phi$ is $\phi S(\phi)''=0$ and $S(\phi)''$ must
be a negative distribution, ie. $S(\phi)$ is concave. We show that a
unique minimiser
exists in each K\"ahler class and its momentum profile is in $C^2$.
Note that $S(\phi)''=0$ is the equation for $\phi$ to define an extremal metric.
In Section~\ref{sec:explicit} we
explicitly construct the minimisers, which can be degenerate in the sense that
the momentum profiles can vanish on a subset of $(0,m)$. Here we will see the three
different kinds of behaviour stated in Theorem~\ref{thm:main}.
In Section~\ref{sec:test-configs} we construct test-configurations for $X$ and
calculate their Futaki invariants. This will clarify the role of the
concavity of $S(\phi)$ for minimisers of the Calabi functional. In fact,
rational, piecewise-linear convex functions on $[0,m]$ give
test-configurations essentially by the construction in~\cite{Don02} as
generalised to bundles of toric varieties in~\cite{GSzThesis}. We
can approximate $-S(\phi)$ by such functions, and
Donaldson's theorem on lower bounds for
the Calabi functional in~\cite{Don05} shows that $\omega_\phi$ actually
achieves the infimum of the Calabi functional on
the whole K\"ahler class, not just the metrics arising from the momentum
construction. This will complete the proof of Theorem~\ref{thm:main}.
An alternative approach to minimising the Calabi functional is using the Calabi
flow introduced in~\cite{Cal82}. This is the flow which deforms the K\"ahler
potential in the direction of the scalar curvature. It is
expected (see~\cite{Don02}, \cite{Don04_1}) that the Calabi flow should
minimise the Calabi functional and if there is no extremal metric in a
given K\"ahler class, then it should break up the manifold into pieces
which admit complete extremal metrics or collapse in some way.
In Sections~\ref{sec:calabiflow} and \ref{sec:longtimeexist} we will
verify this, showing
\begin{thm} If the initial metric is given by the momentum construction then the
Calabi flow exists for all time and converges
to the minimiser of the Calabi functional.
\end{thm}
The Calabi flow on ruled manifolds has been previously studied
in~\cite{Gua05}, where the long time existence and convergence is
proved for the K\"ahler classes
which admit an extremal metric. We use similar techniques, the main
difference being that we introduce some variants of the Mabuchi
functional when no extremal metric exists.
In particular in the unstable case where $k_1 \leqslant m \leqslant
k_2(k_2+2)$ we define a functional which decreases along the Calabi
flow, is bounded below, and whose derivative is given by the difference
between the Calabi functional and its infimum. This leads to the
convergence result. The case $m > k_2(k_2+2)$ is more delicate since the
analogous Mabuchi-type functional is not bounded from below. Nevertheless
it has at worst logarithmic decay along the Calabi flow and this is
enough to show that the flow minimises the Calabi functional.
This is discussed in Section~\ref{sec:calabiflow}.
Note that throughout the paper we have ignored factors of $2\pi$, for example in
the definition of the Calabi functional. Also we normalise the Futaki invariant
slightly differently from usual in Section~\ref{sec:test-configs}. Hopefully
this will lead to no confusion.
\subsubsection*{Acknowledgements}
Part of this work has appeared in the author's PhD thesis~\cite{GSzThesis}. I
would like to thank my supervisor Simon Donaldson for his encouragement and
for sharing his insights.
\section{Metrics on the ruled surface} \label{sec:momentum}
In this section we describe the momentum construction for metrics on the
ruled surface (see Hwang-Singer~\cite{HS02}).
Let $X$ be the ruled surface as above, so that
$X=\mathbf{P}(\mathcal{M}\oplus\mathcal{O})\to\Sigma$, where $\Sigma$ is a
genus 2 curve, and $\mathcal{M}$ is a degree $-1$ line bundle over
$\Sigma$. Let $\omega_\Sigma$ be a metric on $\Sigma$ with area $2\pi$
and constant scalar curvature $-2$. Also, let $h$ be a Hermitian metric on
$\mathcal{M}$ with curvature form $i\omega_\Sigma$. We consider metrics
on the total space of $\mathcal{M}$ of the form
\[ \omega = p^*\omega_{\Sigma} + 2i\partial\bar{\partial} f(s), \]
where $p:\mathcal{M}\to\Sigma$ is the projection map,
$s=\frac{1}{2}\log|z|^2_h$ is the logarithm of the fibrewise norm and
$f(s)$ is a suitable strictly convex function that makes $\omega$
positive definite.
The point of the momentum construction is the change of coordinate from
$s$ to $\tau=f^\prime(s)$. The metric $\omega$ is invariant under the
$U(1)$-action on $\mathcal{M}$, and $\tau$ is just the moment map for
this action. Let $I\subset\mathbf{R}$ be the image of $\tau$, and let
$F:I\to\mathbf{R}$ be the Legendre transform of $f$. By definition this
means that
\[ f(s) + F(\tau) = s\tau, \]
and $F$ is a strictly convex function. The \emph{momentum profile} is
defined to be the function
\[ \phi(\tau) = \frac{1}{F^{\prime\prime}(\tau)}.\]
We have the following relations:
\[ s = F^\prime(\tau),\quad \frac{ds}{d\tau}=F^{\prime\prime}(\tau),
\quad
\phi(\tau) = f^{\prime\prime}(s). \]
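Note that the last of these relations is just the inverse function rule: since
$\tau=f'(s)$ and $s=F'(\tau)$ are mutually inverse maps,
\[ f''(s) = \frac{d\tau}{ds} = \left(\frac{ds}{d\tau}\right)^{-1}
= \frac{1}{F''(\tau)} = \phi(\tau). \]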
\subsubsection*{The metric in local coordinates}
Let us now see what the metric $\omega$ looks like in local coordinates.
Choose a local coordinate $z$ on $\Sigma$ and a fibre coordinate $w$ for
$\mathcal{M}$. The fibrewise norm is given by $|(z,w)|^2_h = |w|^2h(z)$
for some positive function $h$, so that
\[ s = \frac{1}{2}\log |w|^2 + \frac{1}{2}\log h(z). \]
We can choose the local trivialisation of $\mathcal{M}$ in such a way that at
a point $(z_0,w_0)$ we have
$d\log h(z)=0$. We can then compute at the point $(z_0,w_0)$
\[
\begin{aligned}
2i\partial\bar{\partial} f(s) &= if^\prime(s)\partial\bar{\partial}
\log h(z) + f^{\prime\prime}(s)\frac{i\,dw\wedge d\bar{w}}{
2|w|^2} \\
&= \tau p^*\omega_\Sigma + \phi(\tau)\frac{i\,dw\wedge d\bar{w}}{
2|w|^2}.
\end{aligned}
\]
The metric at the point $(z_0,w_0)$ is therefore given by
\begin{equation}\label{eq:momentmetric}
\omega = (1+\tau)p^*\omega_\Sigma + \phi(\tau)\frac{i\,dw\wedge
d\bar{w}}{2|w|^2}.
\end{equation}
In order to compute the scalar curvature of $\omega$, note that the
determinant of the metric $g$ corresponding to $\omega$ is
\[ \det(g) = \frac{1}{|w|^2}(1+\tau)\phi(\tau)\det(g_\Sigma), \]
which is valid for all points, not just $(z_0,w_0)$. The Ricci form at
$(z_0,w_0)$ is given by
\[
\begin{aligned}
\rho &= -i\partial\bar{\partial}\log\det g \\
&= p^*\rho_\Sigma - \frac{\big[
(1+\tau)\phi\big]^{\prime}}{2(1+\tau)}
p^*\omega_\Sigma-\frac{\phi}{2}\cdot \frac{
(1+\tau)\big[ (1+\tau)\phi\big]^{\prime\prime} -
\big[ (1+\tau)\phi\big]^\prime}{(1+\tau)^2}\cdot\frac{i\, dw\wedge
d\overline{w}}{\vert w\vert^2},
\end{aligned}
\]
where the derivatives are all with respect to $\tau$ (note
that $d/ds = \phi(\tau) d/d\tau$) and $\rho_\Sigma$ is the Ricci form of
the metric $\omega_\Sigma$.
Taking the trace of this, we find that the scalar curvature $S(\omega)$
is given by
\begin{equation}\label{eq:scalarcurv}
S(\omega) = \frac{-2}{1+\tau}-\frac{1}{2(1+\tau)}
\big[ (1+\tau)\phi\big]^{\prime\prime}.
\end{equation}
In~\cite{HS02} the extendability of the metrics to the projective completion of
$\mathcal{M}$ is studied. The proposition we need is the following.
\begin{prop}[see~\cite{HS02}] \label{prop:momentumprofile}
For some $m>0$ let
$\phi:[0,m]\to\mathbf{R}$ be a smooth function such that $\phi$ is
positive on $(0,m)$, and
\begin{equation}\label{eq:bdrycond}
\phi(0)=\phi(m)=0,\qquad \phi^\prime(0)=2,\quad \phi^\prime(m)=-2.
\end{equation}
Then the momentum construction defines a smooth metric $\omega_\phi$
on $X$ in the K\"ahler class $C+mS_\infty$, with scalar curvature
$S(\omega)(\tau)$ given by Equation~\ref{eq:scalarcurv}. Here $C$ is
the class of a fibre, and $S_\infty$ the infinity section.
If instead $\phi$ satisfies the boundary conditions
\[ \phi(0)=\phi(m)=0,\qquad \phi^\prime(0)=0,\quad \phi^\prime(m)=-2,
\]
and $\phi(\tau)=O(\tau^2)$ for small $\tau$,
then the momentum construction gives a complete metric with finite
volume on the complement of the zero section in $X$. Similarly if
$\phi^\prime(0)=2$ and $\phi^\prime(m)=0$ then we obtain a complete
metric on the complement of the infinity section.
The metrics are extremal, i.e. their scalar curvature has holomorphic gradient,
when $S(\phi)''=0$.
\end{prop}
Let us also note the definition
\begin{defn}\label{defn:momprof}
A \emph{momentum profile} is a $C^2$ function $\phi : [0,m]\to
\mathbf{R}$ which is positive on $(0,m)$ and satisfies the boundary
conditions (\ref{eq:bdrycond}). A \emph{singular momentum profile} is
the same except we only require it to be non-negative instead of
positive, i.e. it can vanish on a subset of $(0,m)$.
\end{defn}
Let us write $\Phi$ for the unique solution of $S(\Phi)''=0$ satisfying
the same boundary conditions as a momentum profile. Then $\Phi$ is positive on
$(0,m)$ precisely when the polarisation admits an extremal metric.
We define the Calabi
functional to be
\[\begin{aligned}
Cal(\phi) &= \int_0^m (S(\phi)-S(\Phi))^2(1+\tau)\,d\tau \\
&= \int_0^m \frac{1}{4(1+\tau)}\left[\left( (1+\tau)(\Phi-\phi)
\right)''\right]^2\, d\tau.
\end{aligned} \]
\noindent This differs from the square of the $L^2$-norm of $S(\phi)$ by a constant, since
\[
\int_0^m (S(\phi)-S(\Phi)) S(\Phi)\, (1+\tau)d\tau = \int_0^m \frac{1}{2}
[(1+\tau)(\Phi-\phi)]'' S(\Phi)\, d\tau = 0, \]
integrating by parts, so
\[ Cal(\phi) = \int_0^m S(\phi)^2\, (1+\tau)d\tau - \int_0^m S(\Phi)^2\,
(1+\tau)d\tau. \]
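Here the integration by parts is the following routine computation: both
$\Phi-\phi$ and its first derivative vanish at the endpoints, so there are no
boundary terms and
\[ \int_0^m \big[(1+\tau)(\Phi-\phi)\big]''\, S(\Phi)\, d\tau
= \int_0^m (1+\tau)(\Phi-\phi)\, S(\Phi)''\, d\tau = 0, \]
since $S(\Phi)''=0$ by the definition of $\Phi$.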
Throughout the paper when we integrate a function over $X$ which only depends on
$\tau$ we will often use the volume form $(1+\tau)d\tau$. From the formula
(\ref{eq:momentmetric}) we see that this is a constant multiple of
the integral with respect to the volume form $\omega^2$. Because
of the boundary conditions on $\phi$ the Poincar\'e inequality shows that the
Calabi functional is equivalent to the
$H^2$-norm of $\phi$.
This makes it easy to minimise the Calabi functional directly as we
do in the next section.
\section{Minimising the Calabi functional}\label{sec:minimising}
It is fairly simple to directly minimise the Calabi functional on the set of
metrics which are given by momentum profiles.
We introduce the set of functions
\[ A = \left\{ \phi:[0,m]\to\mathbf{R}\,\left|\begin{aligned} &\phi\in H^2,
\ \phi\geqslant0 \text{ and }
\phi\text{ satisfies the }\\ &\text{boundary conditions
in Proposition~\ref{prop:momentumprofile}}\end{aligned}\right. \right\}, \]
and we want to minimise the Calabi functional on this space. Let us choose a
minimising sequence $\phi_k\in A$. We have a bound $\Vert\phi_k\Vert_{H^2}
\leqslant C\cdot Cal(\phi_k)$, so we can choose a subsequence converging weakly to
some $\phi\in H^2$. Weak convergence in $H^2$ implies convergence in $C^1$ so
the boundary conditions and non-negativity hold in the limit, i.e.
$\phi\in A$. Moreover $Cal$ is lower-semicontinuous because the
$H^2$-norm is, so $\phi$ is the required minimiser.
\begin{prop}\label{prop:minimumprofile}
The minimiser $\phi$ in $A$ satisfies
$\phi S(\phi)''=0$ and $S(\phi)''$ is
a negative distribution. In particular $S(\phi)$ is continuous, so
$\phi\in C^2$. Conversely if $\psi S(\psi)''=0$ and $S(\psi)$ is
concave, then $\psi=\phi$.
\end{prop}
\begin{proof}
The variation of $Cal$ at $\phi$ is given by
\[ DCal_\phi(\widetilde{\phi}) = -\int_0^m (S(\phi)-S(\Phi))\left[
(1+\tau)\widetilde{\phi}\right]''\,d\tau.\]
We are considering variations inside $A$, so $\widetilde{\phi}$ and its
first derivative vanishes at the endpoints. We can therefore integrate
by parts, and find that
\[ -\int_0^m S(\phi)''\widetilde{\phi}(1+\tau)\, d\tau \geqslant 0\]
for all $\widetilde{\phi}$ such that $\phi+\epsilon\widetilde{\phi}\in
A$ for small enough $\epsilon$. We can choose $\widetilde{\phi}$ to be
an arbitrary non-negative smooth function which vanishes along with its
first derivative at the endpoints. This shows that $S(\phi)''$ is a
negative distribution. On the open set where $\phi$ is positive
we can choose $\widetilde{\phi}$ to be negative or positive, so it
follows that $S(\phi)''=0$ at these points. Therefore $\phi S(\phi)''=0$
on $(0,m)$. The continuity of $S(\phi)$ follows from it being concave,
and this implies that $\phi\in C^2$.
The converse follows from the following computation.
\[ \begin{aligned}
Cal(\psi) & \leqslant Cal(\psi) +
\int_0^m (S(\phi)-S(\psi))^2(1+\tau)\, d\tau \\
&= Cal(\phi) + 2\int_0^m
(S(\psi)-S(\phi))S(\psi)(1+\tau)\, d\tau \\
&= Cal(\phi) + \int_0^m \big[ (1+\tau)\phi - (1+\tau)\psi
\big]'' S(\psi)\, d\tau \\
&= Cal(\phi) + \int_0^m \phi S(\psi)'' (1+\tau)\, d\tau \\
&\leqslant Cal(\phi).
\end{aligned} \]
Since $Cal(\phi)$ is minimal we must have equality, i.e.
\[\int_0^m (S(\phi)-S(\psi))^2 (1+\tau)\, d\tau = 0.\]
This implies that $S(\phi)=S(\psi)$, from which it follows that
$\phi=\psi$.
\end{proof}
\section{Explicit minimisers}\label{sec:explicit}
In this section we compute explicitly the minimisers of the Calabi functional
for all polarisations. For each $m$ we are looking for a singular momentum
profile (Definition~\ref{defn:momprof})
such that $S(\phi)''=0$ wherever $\phi$ does not vanish, and in addition
$S(\phi)$ is concave.
There are three cases to consider depending on the polarisation.
\subsubsection*{Case 1. There exists an extremal metric, $m<k_1\simeq
18.889$}
In this case we want to solve the equation $S(\phi)''=0$. By the
formula (\ref{eq:scalarcurv}) for the scalar curvature, this is the ODE
\[ \frac{1}{2(1+\tau)}(-4-[(1+\tau)\phi]^{\prime\prime}) = A\tau + B,\]
for some constants $A, B$. Rearranging this and integrating twice we obtain
\begin{equation}\label{eq:ODE}
(1+\tau)\phi = -\frac{A\tau^4}{6}-\frac{(A+B)\tau^3}{3}-B\tau^2-2\tau^2 +
C\tau + D,
\end{equation}
where $C$ and $D$ are also constants. The boundary conditions on $\phi$ on the
interval $[0,m]$ give a
system of linear equations on $A,B,C,D$ which we can solve to obtain
\[\begin{aligned}
\phi(\tau) = \frac{2\tau(m-\tau)}{m(m^2+6m+6)(1+\tau)}\big[& \tau^2(2m+2) +
\tau(-m^2+4m+6)\\ & + m^2 + 6m+6\big].
\end{aligned}
\]
This will give a metric when it is positive on the interval $(0,m)$ which
happens if and only if the quadratic expression in square brackets is positive
on this interval. This is the case for $m<k_1$ where $k_1$ is the only positive
real root of the quartic $m^4-16m^3-52m^2-48m-12$. Approximately
$k_1\simeq
18.889$, which is the result obtained by
T{\o}nnesen-Friedman~\cite{TF97}. See Figure~\ref{fig:smooth} for a
graph of $\phi(\tau)$ for $m=17$.
\begin{figure}[htbp]
\begin{center}
\input{smooth.pstex_t}
\caption{Momentum profile of an extremal metric on $X$ when
$m=17$.}
\label{fig:smooth}
\end{center}
\end{figure}
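As a numerical sanity check of these formulas (an illustrative Python sketch,
not part of the original paper; the function names are ours), bisection locates
$k_1$, and finite differences confirm the boundary conditions of the explicit
profile for $m=17$:

```python
# Sanity checks for Case 1 (illustrative only):
# (i) k1 is the unique positive real root of m^4 - 16m^3 - 52m^2 - 48m - 12;
# (ii) the explicit phi satisfies phi(0) = phi(m) = 0, phi'(0) = 2, phi'(m) = -2.

def quartic(m):
    return m**4 - 16*m**3 - 52*m**2 - 48*m - 12

def bisect(f, a, b, steps=200):
    # simple bisection; assumes f changes sign on [a, b]
    for _ in range(steps):
        c = (a + b) / 2
        if f(a) * f(c) <= 0:
            b = c
        else:
            a = c
    return (a + b) / 2

k1 = bisect(quartic, 10.0, 30.0)   # quartic(10) < 0 < quartic(30)

def phi(tau, m):
    q = tau**2*(2*m + 2) + tau*(-m**2 + 4*m + 6) + m**2 + 6*m + 6
    return 2*tau*(m - tau)*q / (m*(m**2 + 6*m + 6)*(1 + tau))

m, h = 17.0, 1e-6
print(round(k1, 3))                               # ~18.889
print(phi(0.0, m), phi(m, m))                     # both 0.0
print(round((phi(h, m) - phi(0.0, m)) / h, 3))    # phi'(0) ~ 2
print(round((phi(m, m) - phi(m - h, m)) / h, 3))  # phi'(m) ~ -2
```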
\subsubsection*{Case 2. $X$ breaks up into two pieces, $k_1\leqslant m\leqslant
k_2(k_2+2)\simeq 35.33$}
When $m\geqslant k_1$ we can no longer find a positive solution of $S(\phi)''=0$ on
the whole interval $[0,m]$ so we split the interval into two pieces
$[0,c]$ and $[c,m]$. We would like to find $\phi$ which vanishes at $c$, but on the
intervals $(0,c)$ and $(c,m)$ we have $S(\phi)''=0$, and
$S(\phi)$ is concave on $[0,m]$. We first let $\phi_1$ be the solution
of the equation
\[\begin{gathered}
S(\phi_1)''=0 \text{ on the interval } (0,c)\\
\phi_1(0)=\phi_1(c)=0,\quad \phi_1'(0)=2,\quad \phi_1'(c)=0.
\end{gathered}\]
We obtain
\[
\phi_1(\tau) = \frac{2\tau(c-\tau)^2}{c^2(c^2+6c+6)(1+\tau)}\big[
\tau(-c^2+2c+3)+c^2+6c+6\big].
\]
This is positive on $(0,c)$ if the linear expression in square brackets is
positive on this interval. This happens for $c\leqslant k_2$ where $k_2$ is
the only positive real root of the cubic $c^3-3c^2-9c-6$. Approximately
$k_2\simeq 5.0275$. The scalar curvature is given by
\[ S(\phi_1) = \frac{12(c^2-2c-3)}{c^2(c^2+6c+6)}\tau -
\frac{6(2c^2-c-4)}{c(c^2+6c+6)}. \]
To deal with the interval $[c,m]$ we first solve the equation
\[\begin{gathered}
S(\psi)''=0 \text{ on the interval } (0,d)\\
\psi(0)=\psi(d)=0,\quad \psi'(0)=0,\quad \psi'(d)=-2.
\end{gathered}\]
for some constant $d$, and then shift the solution to $[c,m]$. The
solution on $[0,d]$ is given by
\[
\psi(\tau) =
\frac{2\tau^2(d-\tau)}{d^2(d^2+6d+6)(1+\tau)}\big[\tau(2d^2+4d+3)
-d^3+3d^2+9d+6\big].
\]
As before, this is positive on $(0,d)$ if the linear term in square
brackets is
positive on this interval. This is the case for $d\leqslant k_2$, for the
same $k_2$ as above. The scalar curvature is given by
\[ S(\psi) = \frac{12(2d^2+4d+3)}{d^2(d^2+6d+6)}\tau - \frac{6(3d^2+5d
+2)}{d(d^2+6d+6)}. \]
Now note that if we define $\phi_2$ by
\[ \phi_2(\tau) = (c+1)\psi\left(\frac{\tau-c}{c+1}\right), \]
then $\phi_2$ solves the equation
\[ \begin{gathered}
S(\phi_2)''=0 \text{ on the interval } (c,(c+1)d+c) \\
\phi_2(c)=\phi_2( (c+1)d+c )=0, \quad \phi_2'(c)=0,\quad \phi_2'(
(c+1)d+c )=-2.
\end{gathered}\]
The scalar curvature is given by
\[ S(\phi_2)(\tau) = \frac{1}{c+1}\, S(\psi)\left(\frac{\tau-c}{c+1}
\right). \]
We now define $\phi$ by
\[ \phi(\tau) = \begin{cases} \phi_1(\tau) \quad \tau\in[0,c],\\
\phi_2(\tau) \quad \tau\in [c, (c+1)d + c].
\end{cases} \]
We can check that $S(\phi)$ will be continuous at $\tau=c$ precisely
when $c=d$. We also want $(c+1)d + c = m$, which implies that
$c=\sqrt{m+1}-1$. With these choices a simple computation shows that
$S(\phi)$ is concave for $m\geqslant k_1$ (note that it is linear for $m=k_1$,
and convex for $m<k_1$). Finally recall that the condition that $\phi$ is
non-negative means that $c\leqslant k_2$, which in turn implies $m
\leqslant k_2(k_2+2)$. See Figure~\ref{fig:case2} for a graph of $\phi$
for $m=24$.
\begin{figure}[htbp]
\begin{center}
\input{case2.pstex_t}
\caption{Momentum profile of the minimiser on $X$ when
$m=24$. The manifold breaks into two pieces both of which are
equipped with a complete extremal metric.}
\label{fig:case2}
\end{center}
\end{figure}
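A similar numerical check (again an illustrative sketch, not part of the
original paper) locates $k_2$ by bisection and verifies the matching value
$c=\sqrt{m+1}-1=4$ for $m=24$:

```python
import math

# Sanity checks for Case 2 (illustrative only):
# k2 is the positive real root of c^3 - 3c^2 - 9c - 6, and for m = 24
# the splitting point c = sqrt(m+1) - 1 satisfies c <= k2, i.e. m <= k2(k2+2).

def cubic(c):
    return c**3 - 3*c**2 - 9*c - 6

a, b = 4.0, 6.0                    # cubic(4) = -26 < 0 < 48 = cubic(6)
for _ in range(100):
    mid = (a + b) / 2
    if cubic(a) * cubic(mid) <= 0:
        b = mid
    else:
        a = mid
k2 = (a + b) / 2

m = 24.0
c = math.sqrt(m + 1) - 1           # continuity of S(phi) at tau = c forces c = d
print(round(k2, 4))                # ~5.0275
print(c, c <= k2, m <= k2*(k2 + 2))
```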
\subsubsection*{Case 3. $X$ breaks up into three pieces, $m >
k_2(k_2+2)$}
The previous construction no longer works for $m > k_2(k_2+2)$ so we
need to split the interval $[0,m]$ into three pieces. From the previous
case we have a solution $\phi_1$ to the equation
\[ \begin{gathered}
S(\phi_1)''=0 \text{ on the interval } (0,k_2)\\
\phi_1(0)=\phi_1(k_2)=0,\quad \phi_1'(0)=2,\quad \phi_1'(k_2)=0,
\end{gathered}\]
and also a solution $\phi_2$ to
\[ \begin{gathered}
S(\phi_2)''=0 \text{ on the interval } (c,m)\\
\phi_2(c)=\phi_2(m)=0,\quad \phi_2'(c)=0,\quad \phi_2'(m)=-2,
\end{gathered}\]
where the constant $c$ is defined by
\begin{equation}\label{eq:defc}
c = \frac{m+1}{k_2+1}-1.
\end{equation}
We define
\[ \phi(\tau) = \begin{cases} \phi_1(\tau)\quad &\tau\in[0,k_2] \\
0\quad &\tau\in[k_2,c]\\
\phi_2(\tau)\quad &\tau\in[c,m].
\end{cases}\]
We can check that $c>k_2$ precisely when $m>k_2(k_2+2)$, and this choice
of $\phi$ satisfies that $\phi S(\phi)''=0$ and $S(\phi)$ is concave.
See Figure~\ref{fig:case3} for a graph of $\phi$ for $m\simeq 53.2$.
\begin{figure}[htbp]
\begin{center}
\input{case3_1.pstex_t}
\caption{Momentum profile of the minimiser on $X$ when
$m\simeq 53.2$. The manifold breaks into three pieces, two of
which, $A$ and $C$,
admit complete extremal metrics, and in the third, $B$, the
$S^1$-orbits collapse.}
\label{fig:case3}
\end{center}
\end{figure}
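Finally, a small numerical check (illustrative only; the value of $k_2$ is the
approximation from the text) that the middle piece $[k_2,c]$ is nonempty
precisely for $m>k_2(k_2+2)$:

```python
# Illustrative check for Case 3: c > k2 precisely when m > k2(k2+2),
# so the flat piece [k2, c] of the profile is nonempty.
k2 = 5.0275                        # positive root of c^3 - 3c^2 - 9c - 6 (approx.)
for m in (40.0, 53.2):
    c = (m + 1) / (k2 + 1) - 1
    print(round(c, 3), c > k2, m > k2*(k2 + 2))
```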
\subsubsection*{Conclusion}
For any $m$ one of the previous 3 cases will hold, so we can construct a
$\phi$ which satisfies the equation $\phi S(\phi)''=0$ and $S(\phi)$ is
concave. According to Proposition~\ref{prop:minimumprofile} this $\phi$ will
give the minimum of the Calabi functional on the space of singular
momentum profiles. In the next section we will show
that they give the infimum of the Calabi
functional over all metrics in their K\"ahler class. This will complete
the proof of Theorem~\ref{thm:main}.
\section{Test-configurations}\label{sec:test-configs}
In the previous section we have found a (possibly degenerate) metric in each
K\"ahler class, which minimises the Calabi functional on the set of
metrics which come from the momentum construction. In this section we want to
show that these metrics minimise the Calabi functional on their entire K\"ahler
class. For this we use the theorem of Donaldson~\cite{Don05} which gives a lower
bound on the Calabi functional, given a destabilising test-configuration. We
will not give a detailed explanation of the test-configurations that we use, and
the computation of their Futaki invariants. For
more details see~\cite{GSzThesis} and \cite{Don02}.
\begin{prop}[Donaldson~\cite{Don05}]\label{prop:donaldson} Suppose there
exists a test-configuration $\chi$ for a polarised variety $(X,L)$
such that the Futaki invariant $F(\chi)$ is negative. Then for any
metric $\omega$ in the class $c_1(L)$ we have the inequality \[ \Vert
S(\omega)-\hat{S}\Vert_{L^2} \geqslant \frac{-F(\chi)}{\Vert\chi\Vert}.\]
\end{prop}
The idea is to produce a sequence of test-configurations $\chi_i$ for which
\[ \lim_{i\to\infty} \frac{-F(\chi_i)}{\Vert\chi_i\Vert} = \Vert S(\omega) -
\hat{S}\Vert_{L^2}, \]
where $\omega$ is the degenerate metric corresponding to the singular momentum
profile in each K\"ahler class that we have found in the previous
section. This will imply that this is the
infimum of the Calabi functional and $\omega$ minimises the Calabi
functional on its K\"ahler class.
To obtain test-configurations we use the construction
in~\cite{GSzThesis} Section
4.1 (Theorem 4.1.2), which is an extension of the construction of
test-configurations for toric varieties by Donaldson~\cite{Don02} to bundles of
toric varieties. For the case of our ruled surface we obtain
\begin{prop} Given a rational, piecewise-linear, convex function
$h:[0,m]\to\mathbf{R}$, there exists a test-configuration for
$(X,L_m)$ with Futaki invariant given by
\begin{equation} \label{eq:convexfutaki}
F(h) = h(0) + (1+m)h(m) -2\int_0^m h(\tau)\,d\tau -
\hat{S}\int_0^m h(\tau) (1+\tau)\, d\tau,
\end{equation}
and norm
\[ \Vert h\Vert^2 = \int_0^m (h(\tau)-\hat{h})^2(1+\tau)\,d\tau, \]
where $\hat{h}$ is the average of $h$ with respect to the
measure $(1+\tau)\,d\tau$.
\end{prop}
\begin{comment}
We briefly describe these test-configurations now. They are essentially
test-configurations for the $\mathbf{P}^1$ fibres. Given
$h$ choose a large number integer $R$ such that $R-h(\tau)$ is positive on
$[0,m]$. Consider the polygon $Q\in\mathbf{R}^2$ defined by
\[ Q = \{(\tau, y)\,|\, y\leqslant h(\tau)\}. \]
By rescaling the lattice if necessary (this corresponds to taking a power of
$L_m$ as our polarisation) we can assume that $Q$ is an integral polytope, and
defines a polarised toric variety $(V_Q,L_Q)$.
\end{comment}
To work with test-configurations we should restrict to polarisations $L_m$
with $m$ rational but an approximation argument
gives us the conclusion of Proposition~\ref{prop:donaldson}
for any real $m$ as well.
Given a continuous convex function $h$ on $[0,m]$ which is not
necessarily rational or piecewise-linear, we still define the ``Futaki
invariant'' $F(h)$ of $h$ by Equation~\ref{eq:convexfutaki}.
\begin{comment}
We very briefly indicate what kind of test-configurations these are. Given a
rational, piecewise-linear convex function $h$ on $[0,m]$, Donaldson's
construction gives a test-configuration for $\mathbf{P}^1$. Let us call the
total space of this test-configuration $\Chi$.
\end{comment}
\begin{comment}
We have seen that circle invariant metrics on $X$ in the K\"ahler class
$L_m=C+mS_\infty$ correspond to smooth functions $\phi$ on the interval
$[0,m]$, positive on the interior, vanishing at $0,m$ and with
$\phi^\prime(0)=2, \phi^\prime(m)=-2$. Let us define a \emph{singular
momentum profile} to be a $C^2$ function on $[0,m]$ with the same
boundary conditions, but possibly vanishing on a subset of $(0,m)$. For
a singular momentum profile $\phi$ we still define the ``scalar
curvature'' of $\phi$ by Equation~\ref{eq:scalarcurv}.
\end{comment}
\begin{lem}\label{lem:futakiintegral}
Let $\phi$ be a singular momentum profile, and
$h:[0,m]\to\mathbf{R}$ a piecewise-smooth convex function.
Suppose that $h$ is linear on any interval on which $\phi$ does not
vanish identically. Then
\[ F(h) = \int_0^m h(\tau)(S(\phi)-\hat{S})\,(1+\tau)d\tau.\]
\end{lem}
This result is analogous to the fact that the Futaki invariant of a
holomorphic vector field can be computed algebro-geometrically or
differential geometrically (see~\cite{Don02}). Here if
$h$ is rational and piecewise-linear then it does not define a
holomorphic vector field but the result says that we
can still compute the Futaki invariant of the test-configuration it
induces with a differential geometric formula as long as we use a metric
which degenerates in a suitable way at points where $h$ is not linear.
\begin{proof} The proof is a simple integration by parts, using the
formulas for $F(h)$ and $S(\phi)$.
\end{proof}
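In more detail: by (\ref{eq:scalarcurv}) we have
$(1+\tau)S(\phi) = -2-\frac{1}{2}\big[(1+\tau)\phi\big]''$, so
\[ \int_0^m h\,S(\phi)\,(1+\tau)\,d\tau = -2\int_0^m h\,d\tau
- \frac{1}{2}\int_0^m h\,\big[(1+\tau)\phi\big]''\,d\tau. \]
Integrating by parts twice,
\[ \int_0^m h\,\big[(1+\tau)\phi\big]''\,d\tau
= \Big[h\,\big[(1+\tau)\phi\big]' - h'\,(1+\tau)\phi\Big]_0^m
+ \int_0^m h''\,(1+\tau)\phi\,d\tau, \]
and the last integral vanishes since $h$ is linear wherever $\phi$ does not
vanish. As $\phi$ vanishes at the endpoints and
$\big[(1+\tau)\phi\big]' = \phi+(1+\tau)\phi'$ equals $2$ at $\tau=0$ and
$-2(1+m)$ at $\tau=m$ by (\ref{eq:bdrycond}), the boundary term is
$-2(1+m)h(m)-2h(0)$. Substituting and subtracting
$\hat{S}\int_0^m h\,(1+\tau)\,d\tau$ recovers (\ref{eq:convexfutaki}).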
We can now complete the proof of Theorem~\ref{thm:main}.
\begin{proof}[Proof of Theorem~\ref{thm:main}]
What remains to be shown is that for each polarisation, the
minimiser $\phi$ that we have constructed in the previous
section minimises the Calabi functional over the whole K\"ahler
class, not just over the set of metrics obtained from the
momentum construction. Let $\phi$ be one of these minimisers.
Since $-S(\phi)$ is convex, we can approximate it in the $C^0$-norm by
a sequence of rational, piecewise-linear convex functions $h_i$. These
define a sequence of test-configurations $\chi_i$ such that
\[ \lim_{i\to\infty}\frac{-F(\chi_i)}{\Vert\chi_i\Vert} = \frac{
-F(-S(\phi)) }{ \Vert S(\phi) - \hat{S}\Vert_{L^2}}. \]
If we let $h=-S(\phi)$, then $\phi$ and $h$ satisfy the conditions of
Lemma~\ref{lem:futakiintegral} so that
\[ F(-S(\phi)) = -\int_0^m S(\phi)(S(\phi)-\hat{S})(1+\tau)\, d\tau
= -\Vert S(\phi)-\hat{S}\Vert_{L^2}^2.\]
Therefore
\[\lim_{i\to\infty}\frac{-F(\chi_i)}{\Vert\chi_i\Vert} = \Vert
S(\phi)-\hat{S}\Vert_{L^2},\]
so that Proposition~\ref{prop:donaldson} now implies that this limit is
the infimum of the Calabi functional on the K\"ahler class.
\end{proof}
\begin{comment}
\begin{rem}
Note that the test-configurations we defined are the worst destabilising
ones in the sense that they minimise the normalised Futaki invariant
$F(\chi)/\Vert\chi\Vert$. In this sense they are analogous to the
Harder-Narasimhan filtration of an unstable vector bundle
(see~\cite{BT05} for example). Geometrically the test-configurations we
obtain in the three cases are as follows. In the first case the
test-configuration is a product configuration induced by a holomorphic
vector field (the extremal vector field). In the second case it is
deformation to the normal cone of the zero (or infinity) section
(see~\cite{RT06} for the definition), but
the $\mathbf{C}^*$-action on the total space of the test-configuration
is multiplied by a multiple of the product $\mathbf{C}^*$-action induced
by the extremal vector field. The central fibre in this case is two
copies of $X$ with a normal crossing along the zero section of one and
the infinity section of the other. In the third case approximating
$-S(\phi)$ by piecewise-linear convex functions
\end{rem}
\end{comment}
\section{The Calabi flow} \label{sec:calabiflow}
We have seen that in the case of a ruled surface it is fairly simple to minimise
the Calabi functional directly
over the set of metrics given by momentum profiles. It is
also interesting to see whether the Calabi flow converges to these minimisers.
In this section we will prove that this is the case. In~\cite{Gua05} Guan has
shown that on a ruled manifold, when an extremal metric exists, the Calabi flow
starting from a metric given by the momentum construction exists for all
time and converges to the extremal metric exponentially fast. Our techniques are
similar to his, but we need to introduce some new functionals which are
modifications
of the Mabuchi functional more suited for studying the unstable
polarisations.
We consider a family of metrics $\omega_t$ given by the momentum construction (see
Section~\ref{sec:momentum}), i.e.
\[ \omega_t = p^*\omega_{\Sigma} + 2i\partial\overline{\partial} f_t(s), \]
for some family of suitably convex functions $f_t$. This path of metrics
satisfies the Calabi flow if
\[ \frac{\partial f_t}{\partial t} = S(\omega_t).\]
If we denote by $F_t$ the Legendre transforms of the $f_t$, then from the
definition of the Legendre transformation we find
\[ \frac{\partial F_t}{\partial t} = -\frac{\partial f_t}{\partial t}, \]
so that the path of momentum profiles $\phi_t = 1/F_t''$ satisfies
\[ \frac{\partial\phi_t}{\partial t} = \phi_t^2 S(\phi_t)'', \]
where $S(\phi_t)$ is given by Equation~\ref{eq:scalarcurv}.
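For completeness, this evolution equation follows from the chain rule: the primes denote $\tau$-derivatives, which commute with $\partial/\partial t$, so
\[ \frac{\partial\phi_t}{\partial t} = \frac{\partial}{\partial t}\left(\frac{1}{F_t''}\right) = -\frac{1}{(F_t'')^2}\left(\frac{\partial F_t}{\partial t}\right)'' = \phi_t^2\, S(\phi_t)'', \]
using $\partial F_t/\partial t = -\partial f_t/\partial t = -S(\phi_t)$ in the last step.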
It is known that the flow exists for a short time with any smooth initial metric
(see Chen-He~\cite{CH06}).
Also, the Calabi functional is decreased under the flow:
\begin{lem}\label{lem:Calabidec}
If $\phi$ is a solution to the Calabi flow, then
\[
\frac{d\, Cal(\phi)}{dt} = -\int_0^m \phi^2\left( S(\phi)'' \right)^2
(1+\tau)\,d\tau \leqslant 0.
\]
In particular the $H^2$ norm of $\phi_t$ is uniformly bounded along the flow.
\end{lem}
\begin{proof}
The result follows from the following computation of the variation.
\begin{comment}
\[\begin{aligned}
\frac{d\mathcal{M}(\phi)}{dt} &= \int_0^m \left(-\Phi S(\phi)'' + \phi
S(\phi)''\right)(1+\tau)\,d\tau \\
&= \int_0^m (\phi-\Phi)(S(\phi)-S(\Phi))''(1+\tau)\,d\tau \\
&= \int_0^m
\left[(1+\tau)(\phi-\Phi)\right]''(S(\phi)-S(\Phi))\,d\tau\\
&= -2\int_0^m (S(\phi)-S(\Phi))^2(1+\tau)\,d\tau.
\end{aligned}
\]
For the second line note that $S(\Phi)''=0$, and in the next line we
can integrate by parts twice because $\phi-\Phi$ and its first derivative
vanish at the endpoints.
\end{comment}
\[ \begin{aligned}
\frac{d\, Cal(\phi)}{dt} &= 2\int_0^m
(S(\phi)-S(\Phi))\left(-\frac{1}{2(1+\tau)}\left[ (1+\tau)\phi^2
S(\phi)'' \right]''\right)(1+\tau)\, d\tau \\
&= - \int_0^m \phi^2\left(S(\phi)''\right)^2 (1+\tau)\,d\tau.
\end{aligned}
\]
We can perform the integration by parts because $\phi^2$ and
$(\phi^2)'$ vanish at the endpoints. Also recall that $S(\Phi)''=0$.
\end{proof}
In Section~\ref{sec:longtimeexist} we will show that there is a solution to the
Calabi flow for all time for any polarisation. In this section we concentrate on
proving the following.
\begin{prop} If the flow exists for all time then the momentum profiles
converge in $H^2$ to the minimiser that we found in
Section~\ref{sec:explicit}.
\end{prop}
\begin{proof}
Let us write $\Psi$ for the minimiser: when $m < k_1$ then $\Psi$ is
the momentum profile of an extremal metric, when $k_1\leqslant
m\leqslant k_2(k_2+2)$
then $\Psi$ vanishes at an interior point of $(0,m)$, and when
$m > k_2(k_2+2)$ then $\Psi$ vanishes on an interval inside
$(0,m)$.
Introduce the functional
\begin{equation}\label{eq:modmabuchi}
\mathcal{M}(\phi) = \int_0^m\left( \frac{\Psi}{\phi} +
\log\phi\right)(1+\tau)\,d\tau,
\end{equation}
defined on momentum profiles $\phi$.
When $m < k_1$ then in fact $\mathcal{M}$ is the modified Mabuchi
functional (see~\cite{ACGT3} Section 2.3).
The key point is that $\mathcal{M}$ is decreasing under the
flow (this is well-known for the modified Mabuchi functional,
since the Calabi flow is its gradient flow). This
follows from the computation
\[ \begin{aligned} \frac{d\mathcal{M}(\phi_t)}{dt} &= \int_0^m (-\Psi
S(\phi_t)'' + \phi_t S(\phi_t)'')(1+\tau)\,d\tau \\
&= \int_0^m (\phi_t-\Psi)(S(\phi_t)-S(\Psi))''(1+\tau)\, d\tau +
\int _0^m \phi_t S(\Psi)'' (1+\tau)\, d\tau \\
&\leqslant -2\int_0^m (S(\phi_t)-S(\Psi))^2 (1+\tau)\,d\tau,
\end{aligned} \]
where we have used that $\Psi S(\Psi)''=0$ and $S(\Psi)''$ is a negative
distribution.
On the other hand we have that
\[ \mathcal{M}(\phi) \geqslant \int_0^m\log\phi\cdot
(1+\tau)d\tau \geqslant -C_1\int_0^m \log\frac{\Theta}{\phi}\,
d\tau -C_2,\]
where $\Theta$ is a fixed momentum profile and $C_1,C_2$ are
constants. Since $\log$ is
concave we obtain
\[ \mathcal{M}(\phi) \geqslant -C_3\log\int_0^m
\frac{\Theta}{\phi}\,d\tau - C_4,\]
for some constants $C_3,C_4$. The lemma that follows now implies
that along the flow
\[ \mathcal{M}(\phi_t)\geqslant -C\log(1+t) - D. \]
Since $\mathcal{M}(\phi_t)$ is decreasing, we necessarily
have that along a subsequence its derivative tends to zero, i.e.
$S(\phi_t)\to S(\Psi)$ in $L^2$ (integrating with respect to
$(1+\tau)d\tau$
as usual).
Since $\Vert S(\phi_t)\Vert_{L^2}$ is
decreasing along the flow, it follows that
\begin{equation}\label{eq:calabiminimise}
\lim_{t\to\infty}\Vert S(\phi_t)\Vert_{L^2}= \Vert S(\Psi)\Vert_{L^2}.
\end{equation}
Let us now take any sequence $\phi_i = \phi_{t_i}$ with $t_i\to\infty$.
Because of the uniform $H^2$-bound there is a
subsequence also denoted by $\phi_i$ which converges weakly in $H^2$ to
some limit. Now Equation~\ref{eq:calabiminimise} implies the convergence
of the $H^2$-norms, which together with the weak convergence implies
strong convergence in $H^2$. The limit then has to be $\Psi$ since the
minimiser of the Calabi functional is unique
(Proposition~\ref{prop:minimumprofile}).
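(Here we use the standard Hilbert-space fact that weak convergence together with convergence of the norms implies strong convergence: if $\phi_i \rightharpoonup \phi_\infty$ weakly and $\Vert\phi_i\Vert \to \Vert\phi_\infty\Vert$, then
\[ \Vert\phi_i - \phi_\infty\Vert^2 = \Vert\phi_i\Vert^2 - 2\langle \phi_i, \phi_\infty\rangle + \Vert\phi_\infty\Vert^2 \longrightarrow 0, \]
since $\langle\phi_i,\phi_\infty\rangle \to \Vert\phi_\infty\Vert^2$.)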
\end{proof}
\begin{lem}\label{lem:intbound}
Let $\Theta : [0,m]\to\mathbf{R}$ be a
momentum profile.
For the solution $\phi_t$ to the Calabi flow we have
\begin{equation}\label{eq:intbound}
\int_0^m\frac{\Theta}{\phi_t}\,d\tau < C(1+t)
\end{equation}
for some constant $C$.
\end{lem}
\begin{proof}
Let us define
the functional
\[ \mathcal{F}(\psi) = \int_0^m \left(\frac{\Theta}{\psi} -
\log\frac{\Theta}{\psi}\right)d\tau \]
for any momentum profile $\psi$. Along the Calabi flow we have
\[ \begin{aligned}
\frac{d}{dt}\mathcal{F}(\phi_t) & = \int_0^m (\phi_t - \Theta)
S(\phi_t)''\,d\tau
= \int_0^m (\phi_t - \Theta)''\, S(\phi_t)\,d\tau \\
&\leqslant \left(\int_0^m (\phi_t''-\Theta'')^2\,d\tau\right) ^{1/2}
(Cal(\phi_t)+C)^{1/2}. \end{aligned}
\]
The uniform $H^2$ bound on $\phi_t$ now implies that
$\mathcal{F}(\phi_t) \leqslant C(1+t)$
for some $C>0$. The result follows from the inequality $x-\log x >
x/2$.
\end{proof}
\begin{rem}
Note that when $m\leqslant k_2(k_2+2)$ the functional
$\mathcal{M}$ is bounded below on the set of momentum profiles. This is
because we can write
\[ \mathcal{M}(\phi) = \int_0^m \left(\frac{\Psi}{\phi} -
\log\frac{\Psi}{\phi}\right)\, (1+\tau)d\tau + \int_0^m \log\Psi\cdot
(1+\tau)d\tau.\]
Since $\Psi$ only vanishes at isolated points and to finite order, the
integral of $\log\Psi$ is finite, so the inequality $\log x<x$ implies
\[ \mathcal{M}(\phi) \geqslant \int_0^m \log\Psi\cdot (1+\tau) d\tau.\]
In the case $m > k_2(k_2+2)$ however $\mathcal{M}$ is not bounded from
below since now $\Psi$ vanishes on an interval. In particular as
$\phi\to \Psi$, it is clear that $\mathcal{M}(\phi)\to -\infty$.
\end{rem}
\section{Long time existence}\label{sec:longtimeexist}
The existence of the Calabi flow for a short time has been proved by
Chen-He~\cite{CH06} (also Guan~\cite{Gua05} for ruled manifolds).
In the case when an extremal metric exists, the long time existence has
also been shown in~\cite{Gua05} for ruled manifolds.
To show that the flow exists for all time
we first need to show that $\phi_t(x)$ does not become zero in finite
time for $x\in(0,m)$. Let $\Theta$ be a fixed momentum profile, i.e.\ a
non-negative function on $[0,m]$, strictly positive on the interior, and
satisfying the usual boundary conditions. We want to show
\begin{prop}\label{prop:c0bound}
If $\phi_t$ is the solution to the Calabi flow, then
$\sup\frac{\Theta(x)}{\phi_t(x)}$ does not blow up in finite time.
\end{prop}
\begin{proof} This follows from Lemma~\ref{lem:intbound} and the
following lemma.
\end{proof}
\begin{lem} Given a constant $C>0$ there exists a constant $D>0$ such
that if for a momentum profile $\psi$ we have
\[\int_0^m\frac{\Theta}{\psi}\,d\tau < C \quad \text{ and }\quad
\Vert\psi\Vert_{C^{1,1/2}}<C,\]
then
\[\sup \Theta/\psi < D.\]
\end{lem}
\begin{proof}
Let us derive the estimate near the boundary first.
Because of the $C^{1,1/2}$ bound on $\psi$, there exists
a constant $C_1$ such that
\[ |\psi'(x)-\psi'(0)| < C_1\sqrt{x}, \]
i.e.
\[ \psi'(x) > 2-C_1\sqrt{x}. \]
This implies that
\[ \psi(x) > x\left( 2-\frac{2}{3}C_1\sqrt{x} \right), \]
so that for $x < \left(\frac{3}{2C_1}\right)^2$ we have $\psi(x)>x$.
We can apply the same argument around $x=m$ as well, so we obtain a
small constant $\delta$ such that
\[ \mbox{if } x<\delta \mbox{ or } x > m-\delta, \mbox{ then }
\frac{\Theta(x)}{\psi(x)} < D. \]
Now we concentrate on the set $(\delta,m-\delta)$. On this set
we have a uniform lower bound $\Theta(x)>\epsilon>0$ so we just need a
lower bound on $\psi$.
There is a constant $C_2$
such that $|\psi'(x)|<C_2$ for all $x$. Suppose that for
some $x\in(\delta,m-\delta)$ we have
$\psi(x)<\epsilon/k$ where $k$ is large. Assume for simplicity that
$x < m/2$. Then for $y<m/2-\delta$ we have
\[ \psi(x+y) < \frac{\epsilon}{k} + C_2y. \]
Writing $a=m/2-\delta$, this implies that
\[
C > \int_0^m \frac{\Theta}{\psi}\,d\tau >
\epsilon\int_0^a\frac{1}{\frac{\epsilon}{k} + C_2y}\,dy
> \frac{\epsilon}{C_2}\left[\log C_2a -
\log\frac{\epsilon}{k}\right].
\]
Since this tends to infinity as $k\to\infty$, we get the required
lower bound on $\psi(x)$ for $x\in(\delta,m-\delta)$. Combining this
with the boundary estimate we obtain the statement of the
lemma.
\end{proof}
Next we would like to estimate the derivatives of $\phi$ following the
calculation in Guan~\cite{Gua05}. Let us introduce the
functional
\[ L(\phi) = \int_0^m (\phi S(\phi)'')^2\, (1+\tau)d\tau. \]
We want to show
\begin{lem}[Guan~\cite{Gua05}]
For $\phi_t$ a solution of the Calabi flow we have that
$L(\phi_t)\leqslant C(t)$ for some function $C(t)$ defined for all $t$.
\end{lem}
\begin{proof}
All our constants will depend on $t$
but will be finite for all $t$.
All the integral norms will be with respect to the measure $d\tau$ and not
$(1+\tau)d\tau$ as before.
In the proof we will repeatedly use the Hardy-type inequality
\[ \Vert f \Vert_{L^2(0,m)}
\leqslant C \Vert \phi_t^{-k+1} (\phi_t^k f)'\Vert_{L^2(0,m)}
\]
for $k\geqslant 1$ and any $f\in C^1[0,m]$ with the constant $C$
depending on $t$.
Using Proposition~\ref{prop:c0bound},
this is easy to derive from the inequality
\[ \int_{-1}^1 f(x)^2\, dx \leqslant C\int_{-1}^1 \left[
(1-x^2)^{-k+1} ( (1-x^2)^k f(x))'
\right]^2 \, dx. \]
This in turn follows from the inequality
\[ \int_0^1 f(x)^2\, dx \leqslant C\int_0^1 \left( x^{-k+1} (x^k
f)'\right)^2\, dx \]
for $f$ with $f(1)=0$,
applied to the intervals $[-1,0]$ and $[0,1]$ separately
(see~\cite{Gua05}).
\begin{comment}
We also use inequalities of the type
\[ \begin{aligned}
\Vert \phi_t f'\Vert_{L^2} &\leqslant C\Vert (\phi_t f)'\Vert_{L^2},
\\
\Vert \phi_t^2 f''\Vert_{L^2} &\leqslant C\Vert (\phi_t^2 f)''
\Vert_{L^2},
\end{aligned} \]
where again $f$ vanishes on the boundary.
These follow from the fact that we have a uniform $C^1$ bound on
$\phi_t$ along the flow. For example the first inequality follows from
\[ \Vert \phi_t f'\Vert_{L^2}\leqslant \Vert (\phi_t f)'\Vert_{L^2} + \Vert
\phi_t' f\Vert_{L^2}\leqslant \Vert (\phi_t f)'\Vert_{L^2} + C\Vert
f\Vert_{L^2}, \]
and
\[ \Vert f\Vert_{L^2}= \Vert \phi_t^{-1}(\phi_t f)\Vert_{L^2}
\leqslant C \Vert (\phi_t f)'\Vert_{L^2}. \]
\end{comment}
Let us compute the derivative of $L(\phi_t)$.
\[ \begin{aligned}
\frac{d}{dt} L(\phi_t) &= 2\int_0^m (\phi_t S(\phi_t)'')^3\, (1+\tau)d\tau -
\int_0^m \left(\left[(1+\tau)\phi_t^2 S(\phi_t)''\right]''\right)^2\frac{d\tau}{1+\tau}
\\
&\leqslant C_1\int_0^m (\phi_t S(\phi_t)'')^3\,d\tau - C_2 \left\Vert
\left(\phi_t^2 S(\phi_t)''\right)''\right\Vert^2_{L^2}.
\end{aligned} \]
Let us estimate the cubed term. We have
\[ \begin{aligned}
\int_0^m (\phi_t S(\phi_t)'')^3\,d\tau &\leqslant
C_3\Vert \phi_t S(\phi_t)''\Vert_{C^0}
L(\phi_t) \\
&\leqslant C_4\Vert (\phi_t S(\phi_t)'')'\Vert_{L^2} L(\phi_t) \\
&\leqslant C(\epsilon) L(\phi_t)^2 + \epsilon\Vert (\phi_t
S(\phi_t)'')'\Vert^2_{L^2},
\end{aligned} \]
for any $\epsilon>0$ using Young's inequality.
Using the uniform $H^2$-bound on $\phi_t$ and the Hardy-type inequality twice we
obtain
\[ \begin{aligned}
\Vert (\phi_t^{-1}\cdot \phi_t^2
S(\phi_t)'')'\Vert_{L^2} &\leqslant \Vert \phi_t^{-1}
(\phi_t^2 S(\phi_t)'')'\Vert_{L^2} + \Vert \phi_t'
S(\phi_t)''\Vert_{L^2}
\\
&\leqslant C_5 \Vert \phi_t^{-1} (\phi_t^2
S(\phi_t)'')'\Vert_{L^2} \\
&\leqslant C_6 \Vert (\phi_t^2 S(\phi_t)'')''\Vert_{L^2},
\end{aligned} \]
so if we choose $\epsilon$ small enough (depending on $t$), then we obtain the
inequality
\[ \frac{d}{dt} L(\phi_t) \leqslant C_1(t) L(\phi_t)^2. \]
This implies that
\[ \frac{d}{dt}\log L(\phi_t) \leqslant C_1(t) L(\phi_t),\]
i.e.\ for any $T>0$ we have
\[ \log L(\phi_T) \leqslant \log L(\phi_0) + \sup_{t\in [0,T]}C_1(t) \int_0^T
L(\phi_t)\, dt.\]
Now Lemma~\ref{lem:Calabidec} gives a bound on the integral of $L(\phi_t)$ since the
Calabi functional is non-negative, so the proof is complete.
\end{proof}
Now we need to use the inequality
\[ \Vert f\Vert_{L^2}^2 \leqslant C( \Vert \phi f'\Vert_{L^2}^2 + f(m/2)^2) \]
for all $f\in C^1(0,m)$ which
can be proved in the same way as the Hardy-type inequalities we
used before. This implies that
\[ \Vert S(\phi_t)\Vert^2_{C^0}\leqslant C_1\Vert S(\phi_t)'\Vert^2_{L^2} \leqslant
C_2\big[\Vert \phi_t S(\phi_t)''\Vert^2_{L^2} + (S(\phi_t)'(m/2))^2\big]. \]
The bound on $\Vert \phi_t S(\phi_t)''\Vert_{L^2}$ gives a bound on
$|S(\phi_t)'(x)-S(\phi_t)'(m/2)|$ for $x$ inside the interval $\left(\frac{
m}{3}, \frac{2m}{3}\right)$. The bound on $\Vert S(\phi_t)\Vert_{L^2}$ (the
Calabi functional decreases along the flow) then gives an a priori bound on
$S(\phi_t)'(m/2)$.
Therefore as long as $L(\phi_t)$ remains bounded, we have a $C^2$ bound on
$\phi_t$ (depending on
$t$). To obtain estimates for the higher derivatives of $\phi_t$ we could either
continue with similar integral estimates
in the manner of~\cite{Gua05} or we can note that a $C^2$ bound on
the momentum profile implies a uniform bound on the Ricci curvature. According
to Chen-He~\cite{CH06} the Calabi flow exists for all time as long as the Ricci
curvature remains uniformly bounded.
\bibliographystyle{hplain}
\section{Introduction}
Memory-augmented neural networks (MANNs) are models that extend recurrent neural networks with an
explicit external memory \cite{NTM,MANN,MANNMeta}. A main motivation for such extensions is to
enable computation tasks that require a separation of memory and computation, i.e.\ the network
needs to keep track of information over long stretches of time without this information getting
corrupted in every step of the recurrent update process \cite{NTM}.
MANNs have achieved impressive successes on that front, performing computation
tasks that were previously out of reach for neural networks, such as associative memory recalls,
question-answering, and graph traversal \cite{DNCMDS,Pushdown,NTM,MANN,DeepStacks}.
Due to their ability to perform computation tasks, MANNs have also been named
Neural Turing Machines or Differentiable Neural Computers \cite{NTM}.
While the successes of this line of research are impressive, MANNs are typically hard to train,
requiring many epochs of gradient descent and a lot of training data \cite{NTM_impl}. This is because
one needs to backpropagate through a series of quasi-discrete read and write operations on the memory
with difficult dependencies. For example: accessing the memory is only useful if the memory contains
information that one needs to produce a certain output, and the memory only contains such information
if the network has previously put it into the memory.
In this paper, we introduce the \emph{reservoir stack machine} (RSM), an echo state network \cite{ESN}
combined with an explicit stack. The stack can store information without interference, and the
model is much simpler to train than the aforementioned MANN models thanks to two tricks.
First, we follow the reservoir computing paradigm
by leaving recurrent matrices fixed after initialization, limiting training to the output layers \cite{ESN,CRJ}.
Second, we do not require our network to learn optimal write/read behavior autonomously but provide
training data for it. Note that this is a restriction of our model because we need more
annotations per training sequence compared to standard recurrent neural networks.
However, this additional labeling makes training vastly more efficient:
Our training reduces to a classification
problem that can be solved with convex optimization in seconds instead of minutes to hours, using only
$\approx$ 100 short example inputs instead of hundreds of thousands.
One could see this approach as an instance of imitation learning, where learning
a complicated recurrent process becomes significantly simpler because we imitate the
actions of a teacher on a few example demonstrations \cite{Imitation}.
The RSM architecture is inspired by three main sources. First, we build on previous work regarding
differentiable stacks \cite{Pushdown,ExtDiffStacks,DeepStacks} which suggests a stack as memory
for neural networks to recognize typical context-free languages.
Second, we build on classic parser theory \cite{LR} to show that
the RSM is at least as powerful as an LR(1)-automaton, i.e.\ it can recognize all
deterministic context-free languages. Finally, our model builds upon prior work in reservoir computing,
which shows that echo state networks on their own have little computational power
\citep[below Chomsky-3;][]{DMM} but are well suited to distinguish input sequences by a constant-sized
suffix, which can be used to guide memory access behavior \cite{RMM2}.
In more detail, our contributions are as follows.
\begin{itemize}
\item We introduce a novel MANN architecture, namely the reservoir stack
machine (RSM), which combines an echo state network \cite{ESN} with an explicit stack to store inputs as
well as auxiliary symbols.
\item We prove that RSMs are at least as powerful as LR(1)-automata and thus can recognize
all deterministic context-free languages \cite{LR}, whereas past reservoir memory machines \cite{RMM2}
cannot.
\item We evaluate our model on three benchmarks for Neural Turing Machines (latch, copy, and
repeat copy) and on six context-free languages, verifying that it can learn all the tasks in a few
seconds of training time from only $100$ training sequences. By contrast, we also show that
deep models (namely GRUs \cite{GRU}, the deep stack model of Suzgun et al. \cite{DeepStacks}, and
an end-to-end trained version of our proposed model)
need much longer training times and sometimes fail to learn the task.
\end{itemize}
We note that our contributions are largely conceptual/theoretical. We do not expect
that our model is widely applicable to practical tasks because required teaching signals for
the dynamics might be missing. Instead, we investigate the interface between theoretical computer
science and reservoir computing, providing further insight into the computational capabilities of
neural networks, especially reservoir neural networks.
We begin our paper by describing background knowledge as well as related work
(refer to Section~\ref{sec:background}), then introducing our model (refer to Section~\ref{sec:method}),
and finally describing the experimental evaluation (refer to Section~\ref{sec:experiments}).
\section{Background and Related Work}
\label{sec:background}
Our work in this paper is connected to several established lines of research. Due to
the breadth of connected fields, we can only touch upon key ideas. Readers are invited to follow
our references to more detailed discussions of the underlying topics. We begin with the
central topic of our investigation, namely the computational power of neural networks,
connect this to grammatical inference and memory-augmented neural nets, before
introducing the concepts underlying our reservoir stack machine model, namely LR(1)
automata and echo state networks.
\subsection{Computational Power of Recurrent Neural Networks}
Computational power is typically measured by comparing a computational system to a reference
automaton class in the Chomsky hierarchy, that is, finite state automata (Chomsky-3), pushdown
automata (Chomsky-2), linearly space-bounded Turing machines (Chomsky-1), or Turing machines (Chomsky-0)
\cite{TheoInf}. For the sake of space we do not define every automaton class in detail.
We merely remark that, for our purpose, any automaton implements a function $\mathcal{A} :
\Sigma^* \to \{0, 1\}$ mapping input sequences $\bar x$ over some set $\Sigma$ to zero or one.
If $\mathcal{A}(\bar x) = 1$ we say that the automaton \emph{accepts} the word. Otherwise, we
say that it \emph{rejects} the word. Accordingly, we define the \emph{language} $\mathcal{L}(\mathcal{A})$
accepted by the automaton as the set $\mathcal{L}(\mathcal{A}) := \{ \bar x \in \Sigma^* |
\mathcal{A}(\bar x) = 1 \}$.
In this paper, we are interested in the computational power of neural networks.
We particularly focus on recurrent neural networks (RNNs), which we define as follows.
\begin{dfn}[Recurrent neural network]
A \emph{recurrent neural network} with $n$ inputs, $m$ neurons, and $k$ outputs is defined as
a $4$-tuple $(\bm{U}, \bm{W}, \sigma, g)$, where $\bm{U} \in \mathbb{R}^{m \times n}$, $\bm{W} \in \mathbb{R}^{m \times m}$,
$\sigma : \mathbb{R} \to \mathbb{R}$, and $g : \mathbb{R}^m \to \mathbb{R}^k$.
Now, let $\vec x_1, \ldots, \vec x_T$ be a sequence with $T$ elements over $\mathbb{R}^n$.
We define the \emph{state} $\vec h_t \in \mathbb{R}^m$ and the \emph{output} $\vec y_t \in \mathbb{R}^k$ at time $t$ via the equations:
\begin{align}
\vec h_t &= \sigma\Big(\bm{U} \cdot \vec x_t + \bm{W} \cdot \vec h_{t-1}\Big), \label{eq:rnn} \\
\vec y_t &= g(\vec h_t),
\end{align}
where $\sigma$ is applied element-wise and $\vec h_0 = \vec 0$.
\end{dfn}
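As a concrete illustration (not part of the formal development), the recurrence above can be sketched in a few lines of NumPy; the weights here are arbitrary random placeholders rather than trained values:

```python
import numpy as np

def rnn_states(U, W, sigma, xs):
    """Iterate h_t = sigma(U @ x_t + W @ h_{t-1}) with h_0 = 0."""
    h = np.zeros(W.shape[0])
    states = []
    for x in xs:
        h = sigma(U @ x + W @ h)
        states.append(h)
    return states

# Tiny instance: n = 2 inputs, m = 3 neurons, sigma = tanh.
rng = np.random.default_rng(0)
U = rng.standard_normal((3, 2))
W = rng.standard_normal((3, 3))
xs = [rng.standard_normal(2) for _ in range(5)]
states = rnn_states(U, W, np.tanh, xs)
# An output layer g (e.g. a linear map plus threshold) would turn
# states[-1] into the acceptance decision y_T.
```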
Now, let $\Sigma \subset \mathbb{R}^n$ be some set and $\mathcal{A} : \Sigma^* \to \{0,1\}$ be the
function implemented by an automaton.
We say that an RNN \emph{simulates} the automaton if for
any input sequence $\vec x_1, \ldots, \vec x_T \in \Sigma^*$ we obtain
$\vec y_T = \mathcal{A}(\vec x_1, \ldots, \vec x_T)$. Early work has already demonstrated
that recurrent neural networks (RNNs) with integer weights are sufficient to simulate finite state
automata \cite{FiniteStateNets}, that RNNs with rational weights can simulate
Turing machines \cite{RecTuring}, and that RNNs with real weights
even permit super-Turing computations \cite{SuperTuring}. Recently, Šíma has shown that
binary RNNs with 1 or 2 extra rational-valued neurons lie in between
Chomsky-3 and Chomsky-0, each recognizing some but not all languages in the in-between classes
\cite{AnalogNeurons}.
Importantly, all these results rely on deterministic constructions to transform a known automaton
into a neural network with carefully chosen weights. Learning languages from examples is a much
more difficult task, studied under the umbrella of grammatical inference.
\subsection{Grammatical inference}
Grammatical inference (or grammar induction) is concerned with inferring a parser for a language
just from examples for words that should be accepted (positive examples) and/or words that should be
rejected (negative examples) \cite{GrammarInference}. In general, this is a hard problem. For example,
even some Chomsky-3 languages cannot be learned from positive examples alone \cite{GrammarInference}.
Further, grammar inference is typically ambiguous, i.e.\ there may be infinitely many parsers which
recognize the same language, and selecting the `smallest' among them is typically NP-hard
\cite{GrammarInference,TheoInf}. Despite these difficulties, impressive progress has been made.
For example, the Omphalos challenge has sparked new research into learning deterministic
context-free grammars (DCFGs) \cite{Omphalos}. \cite{CPCFG,PACCFG} have shown that special subclasses
of probabilistic context-free grammars are learnable from examples.
Finally, \cite{RNNG_old,RNNG,VTBP,OrderedNeurons,ComposeWords} have suggested
neural models to learn grammatical structure from data. Our work in this paper is less general than
grammar inference approaches because we require training data on the desired
parsing behavior for (short) examples instead of learning this behavior from scratch. However,
our approach is more general than the deterministic translation schemes mentioned in the previous
section because we do not require a full parser specification - only its behavior on a
set of training data. As such, our approach lies between constructive proofs of computational
power and grammatical inference. In a sense, our scenario resembles imitation learning, where
a teacher demonstrates the correct (or at least viable) actions on a few training data instances
and the model's task is to generalize this behavior to the entire space of possible inputs
\cite{Imitation}.
Most importantly, though, our focus does not lie on inferring grammatical structure, but, rather,
on studying the effect of a memory augmentation on computational power.
\subsection{Memory-Augmented Neural Networks}
For the purpose of this paper, we define a memory-augmented neural network (MANN) as any
neural network that is extended with an explicit write and read operation to interact with an
external memory. Recently, such models have received heightened interest, triggered by the
Neural Turing Machine \cite{NTM}. The Neural Turing Machine implements a content-based addressing
scheme, i.e.\ memory addresses are selected by comparing content in memory to a query vector
and assigning attention based on similarity. The memory read is then given as the average of all
memory lines, weighted by attention.
Writing occurs with a similar mechanism. To support location-based addressing, the
authors introduce the concept of a linkage matrix which keeps track of the order in which content
was written to memory such that the network can decide to access whichever content was written
after or before a queried line in memory. Further research has revised the model with
sparser memory access \cite{MANN}, refined sharpening operations \cite{DNCMDS}, or more efficient
initialization schemes \cite{NTM_impl}. However, it remains that MANNs are difficult to train,
often requiring tens of thousands of training epochs to converge even for simple memory access
tasks \cite{NTM_impl}. In this paper, we try to simplify this training by employing echo state networks
(see below).
More precisely, we focus on a certain kind of MANN, namely stack-based models.
Interestingly, such models go back far longer than the Neural Turing Machine, at least as far as
the Neural State Pushdown Automaton of \citet{Pushdown}. Such a model
involves differentiable push and pop operations to write content onto the stack and remove it again.
Recently, Suzgun et al.\ have built upon this concept and introduced stack recurrent neural
networks \cite{DeepStacks} which can learn typical context-free languages.
A stack model is particularly appealing because it reduces the interaction with memory to two
very basic operations, namely to push one single piece of information onto the stack and to pop
one single piece of information from it, with no sharpening operations or linkage matrices required.
Further, despite its simplicity, a stack is sufficient to implement LR(1)-automata,
which we discuss next.
\subsection{LR(1)-automata and the computational power of stacks}
We define an LR(1)-automaton as follows.
\begin{dfn}
An \emph{LR(1)-automaton} is defined as a $4$-tuple $\mathcal{A} = (\Phi, \Sigma, R, F)$,
where $\Phi$ and $\Sigma$ are both finite sets with $\Phi \cap \Sigma = \emptyset$,
where $F \subseteq \Phi$, and where $R$ is a list of $4$-tuples of the form $(\bar s, x, j, A)$
with $\bar s \in (\Phi \cup \Sigma)^*$, $x \in \Sigma \cup \{\varepsilon\}$, $j \in \mathbb{N}_0$,
and $A \in \Phi$. $\varepsilon$ denotes the empty word. We call $\Phi$ the nonterminal symbols, $\Sigma$ the terminal symbols,
$R$ the rules, and $F$ the accepting nonterminal symbols of the automaton.
In a slight abuse of notation, we also use the symbol $\mathcal{A}$ to denote the function
$\mathcal{A} : \Sigma^* \to \{0, 1\}$ of the automaton, which we define as the result of
Algorithm~\ref{alg:parse} for the automaton $\mathcal{A}$ and the input $\bar w \in \Sigma^*$.
We define the \emph{accepted language} $\mathcal{L}(\mathcal{A})$ of $\mathcal{A}$ as the set
$\mathcal{L}(\mathcal{A}) = \{ \bar w \in \Sigma^* | \mathcal{A}(\bar w) = 1\}$.
Finally, we define the set LR(1) as the set of all languages $\mathcal{L}$ that can be accepted
by some LR(1)-automaton.
\end{dfn}
\begin{algorithm}
\caption{The parsing algorithm for LR(1)-automata. The $\exists$ quantifier in line 4 always
chooses the first matching rule in the list $R$.}
\label{alg:parse}
\begin{algorithmic}[1]
\Function{parse}{automaton $\mathcal{A} = (\Phi, \Sigma, R, F)$, word $w_1, \ldots, w_T \in \Sigma^*$}
\State Initialize an empty stack $\mathcal{S}$.
\For{$y \gets w_1, \ldots, w_T, \#$}
\While{$\exists \, (\bar s, x, j, A) \in R:$ $\bar s$ is suffix of $\mathcal{S}$ and $x \in \{ y, \varepsilon\}$}
\State Pop $j$ elements from $\mathcal{S}$.
\State Push $A$ onto $\mathcal{S}$.
\EndWhile
\State Push $y$ onto $\mathcal{S}$.
\EndFor
\If{$\exists A \in F : \mathcal{S} = A\#$}
\State \Return $1$.
\Else
\State \Return $0$.
\EndIf
\EndFunction
\end{algorithmic}
\end{algorithm}
Algorithm~\ref{alg:parse} works as follows: In each iteration, we consider one letter $y$ of the
input. Then, we check all rules $(\bar s, x, j, A) \in R$ in order.
In particular, we check if $\bar s$ is a suffix of the current stack
and if $x$ is either $y$ or $\varepsilon$. If both conditions hold, we say that the rule \emph{matches}.
In that case, we pop the top $j$ elements from
the stack and instead push the nonterminal $A$ onto the stack. Once no rule matches anymore,
we push $y$ onto $\mathcal{S}$ and continue. The last letter we process is a special end symbol $\#$.
If only $A\#$ is left on the stack for some nonterminal $A \in F$, we accept the word and return $1$.
Otherwise, we reject the word and return $0$.
As an example, consider the language $\mathcal{L}_{a^nb^n} := \{ab, aabb, aaabbb, \ldots\}$.
This language is accepted by the following LR(1)-automaton:
\begin{equation}
\mathcal{A}_{a^nb^n} = \Big(\{S\}, \{a, b\}, [(aSb, \varepsilon, 3, S), (a, b, 0, S)], \{S\}\Big),
\label{eq:anbn}
\end{equation}
where $[]$ denotes an ordered list.
Now, consider the word $aabb$. When we apply Algorithm~\ref{alg:parse}, we notice that
no rule matches for the first and second input letter. Accordingly, the first three states of the
stack are $\varepsilon$ (the empty stack), $a$, and $aa$. At this point, the current input symbol
is $y = b$. Now, the rule $(a, b, 0, S)$ matches and, hence, the next stack state is
$aaS$. Since no rule matches anymore, we proceed and obtain the next stack state $aaSb$. Now, the
rule $(aSb, \varepsilon, 3, S)$ matches and we obtain the stack state $aS$. Again, no rule matches
anymore and we proceed to $aSb$. Now, rule $(aSb, \varepsilon, 3, S)$ matches again and we obtain
the stack $S$ and the loop in lines 3-9 completes with the stack $S\#$.
Finally, because $S \in F$, we obtain the output $1$.
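The parsing procedure of Algorithm~\ref{alg:parse}, together with the automaton $\mathcal{A}_{a^nb^n}$ from Equation~\ref{eq:anbn}, can be sketched in Python as follows. This is a minimal illustrative implementation, not code from the paper: symbols are single characters and rules are tried in list order, with the first match winning, as in the algorithm.

```python
def parse(rules, accepting, word):
    """LR(1) parsing as in Algorithm 1.

    A rule is a tuple (s, x, j, A): it matches if the string s is a
    suffix of the current stack and the lookahead equals x (or x is
    the empty string ''); then j symbols are popped and A is pushed.
    """
    stack = []
    for y in list(word) + ['#']:
        while True:
            for s, x, j, A in rules:
                suffix_ok = list(s) == stack[len(stack) - len(s):] if s else True
                if suffix_ok and x in (y, ''):
                    if j > 0:
                        del stack[-j:]
                    stack.append(A)
                    break  # restart the rule scan on the new stack
            else:
                break  # no rule matched: leave the while loop
        stack.append(y)
    # accept iff the stack is exactly A# for an accepting nonterminal A
    return 1 if len(stack) == 2 and stack[0] in accepting and stack[1] == '#' else 0

# The automaton for a^n b^n from Equation (eq:anbn):
rules_anbn = [('aSb', '', 3, 'S'), ('a', 'b', 0, 'S')]
print(parse(rules_anbn, {'S'}, 'aabb'))  # 1 (accepted)
print(parse(rules_anbn, {'S'}, 'aab'))   # 0 (rejected)
```

Running the sketch on the word $aabb$ reproduces exactly the stack states traced above.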
It is well known that the computational power of LR(1)-automata lies between
Chomsky-3 and Chomsky-2 and corresponds exactly to deterministic context-free
languages.
\begin{thm}
Chomsky-3 $\subsetneq$ LR(1) $=$ DCFG $\subsetneq$ Chomsky-2.
\end{thm}
For a proof, refer to standard text books such as \citet{TheoInf}.
Interestingly, if we provide a model with a second stack, this is sufficient to simulate an
entire Turing machine, raising the computational power to Chomsky-0 \cite{TheoInf,DeepStacks}.
Overall, stacks appear as a particularly promising data structure to augment neural nets with
more computational power. In this paper, we augment a neural net with a parsing
scheme similar to Algorithm~\ref{alg:parse}. The underlying neural network is an echo state network.
\subsection{Echo State Networks}
We define an echo state network as follows.
\begin{dfn}[Reservoir and echo state network]\label{dfn:esn}
We define a \emph{reservoir} with $n$ inputs and $m$ neurons as a triple
$\mathcal{R} = (\bm{U}, \bm{W}, \sigma)$ where $\bm{U} \in \mathbb{R}^{m \times n}$,
$\bm{W} \in \mathbb{R}^{m \times m}$, and $\sigma : \mathbb{R} \to \mathbb{R}$.
We say that a reservoir is \emph{compact} with respect to a compact set
$\Sigma \subset \mathbb{R}^n$ if there exists a compact set $\mathcal{H} \subset \mathbb{R}^m$
such that $\vec 0 \in \mathcal{H}$ and for all $\vec h \in \mathcal{H}$ as well as all
$\vec x \in \Sigma$ it holds: $\sigma(\bm{W} \cdot \vec h + \bm{U} \cdot \vec x)
\in \mathcal{H}$.
We further say that a reservoir fulfills the \emph{echo state property}
with respect to a compact set $\Sigma \subset \mathbb{R}^n$ if it is compact with respect to $\Sigma$ and
there exists an infinite null-sequence $\delta_1, \delta_2, \ldots$, such that for any infinite
input sequence $\vec x_1, \vec x_2, \ldots$ over $\Sigma$ and any two initial states $\vec h_0, \vec h'_0 \in \mathcal{H}$,
it holds: $\lVert \vec h_t - \vec h'_t \rVert \leq \delta_t$,
where $\vec h_t = \sigma(\bm{W} \cdot \vec h_{t-1} + \bm{U} \cdot \vec x_t)$
and $\vec h'_t = \sigma(\bm{W} \cdot \vec h'_{t-1} + \bm{U} \cdot \vec x_t)$
for all $t \in \mathbb{N}$.
Now, let $\Sigma \subset \mathbb{R}^n$ be a compact set.
We define an \emph{echo state network} (ESN) over $\Sigma$ with $m$ neurons as a recurrent
neural network $(\bm{U}, \bm{W}, \sigma, g)$ where $(\bm{U}, \bm{W}, \sigma)$
is a reservoir with $n$ inputs and $m$ neurons that fulfills the echo state property with
respect to $\Sigma$.
\end{dfn}
Note that our definition of the echo state property slightly deviates from the original definition,
but is provably equivalent \cite{EchoStateProperty}.
Intuitively, the echo state property guarantees that the state $\vec h_t$
of a reservoir represents the past via a fractal-like encoding that is dominated by its
suffix \cite{FractalESN}. This ensures that the reservoir reacts similarly to all
sequences that share the same suffix and thus can be viewed as a Markovian filter \cite{MarkovESN}.
Importantly, the specific parameters of the network do not matter much. As long as the echo
state property holds, the reservoir state provides a representation of the recent past and, thus,
we only need to adapt the output function $g$ to the task at hand, without any change to the reservoir.
This makes echo state networks appealing, because $g$ can usually be trained quickly using convex
optimization, whereas the adaptation of $\bm{U}$ and $\bm{W}$ requires heuristic schemes \cite{ESN}.
This is also our main motivation for using reservoir computing: echo state networks are a promising
starting point for models that provide computational capabilities but remain fast to train.
However, the echo state property also limits the computational power of ESNs to languages
that can be recognized by a finite suffix \citep[i.e.\ definite memory machines;][]{DMM}.
For example, consider the language $ab^*$. For any $T \in \mathbb{N}$, this language contains the string $ab^T$
but not the string $b^T$. However, the echo state property enforces that the reservoir states
for these two strings become eventually indistinguishable, meaning that no (uniformly continuous)
output function $g$ can map them to different values.
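This limitation can also be observed numerically. The following sketch is our own toy construction (not a model from this paper): scaling the largest singular value of $\bm{W}$ below one makes the $\tanh$ update a contraction, which implies the echo state property, and the states for $ab^T$ and $b^T$ then converge as $T$ grows.

```python
import numpy as np

# toy contractive reservoir: with the largest singular value of W at 0.5,
# the tanh update is a contraction, so the echo state property holds
rng = np.random.default_rng(0)
m = 10
W = rng.standard_normal((m, m))
W *= 0.5 / np.linalg.norm(W, 2)            # matrix 2-norm = largest singular value
U = rng.standard_normal((m, 2))
enc = {'a': np.array([1.0, 0.0]), 'b': np.array([0.0, 1.0])}

def state(word):
    h = np.zeros(m)
    for x in word:
        h = np.tanh(W @ h + U @ enc[x])
    return h

for T in (1, 5, 20):
    gap = np.linalg.norm(state('a' + 'b' * T) - state('b' * T))
    print(T, gap)                          # the gap shrinks geometrically in T
```

Hence, any uniformly continuous readout $g$ must eventually assign (nearly) the same output to $ab^T$ and $b^T$, although only the former lies in $ab^*$.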
It is currently an open question whether echo state networks that operate at the
edge of chaos extend this capability \cite{EdgeChaos,MemCap,MarkovESN,DeepESN}.
The provable computational limitations of ESNs make them a particularly interesting object of study
for memory-augmented neural networks because any computational capability beyond the limits of
definite memory machines \cite{DMM} must provably be due to the augmentation.
We note that our prior work has already investigated the computational effect of some
memory augmentations to ESNs.
In particular, this paper is an extension of the ESANN 2020 paper
\enquote{Reservoir Memory Machines} \cite{RMM}. In contrast to this prior work, we do
not use a constant-sized memory but a stack, and we prove that this stack
raises the computational power to that of LR(1) automata. In a separate and distinct
extension \cite{RMM2}, we kept the constant-sized memory but simplified the memory access
behavior and added an associative memory access mechanism. Such networks can provably simulate
any finite state automaton. However, we prove later in Section~\ref{sec:theory} that this memory
does not suffice to simulate general LR(1) automata. Instead, we use a stack as memory architecture
and prove that such a network can simulate all LR(1) automata.
Finally, the experimental evaluations of the two papers are mostly disjoint, with \citet{RMM2}
focusing on associative memory and recall tasks as well as
finite state machines, whereas, in this paper, we focus on LR(1) language recognition.
\section{Method}
\label{sec:method}
In this section, we introduce our proposed model, the reservoir stack machine. First, we
define the dynamics of our model and explain its intended mechanism. Then, in Section~\ref{sec:theory},
we prove that a reservoir stack machine with a sufficiently rich reservoir can simulate any
LR(1)-automaton, while the reservoir memory machine \cite{RMM2} cannot.
Finally, we provide a training scheme in Section~\ref{sec:training}.
\begin{figure}
\begin{center}
\begin{tikzpicture}
\begin{scope}[shift={(0,+0.6)}]
\node[vec, inp] at (0,0) {$\vec x_1$};
\node[skyblue3] (inp) at (0,0.7) {$\vdots$};
\node[vec, inp] at (0,1.3) {$\vec x_t$};
\end{scope}
\begin{scope}[shift={(0,-0.6)}]
\node[vec, stk] at (0,-1.3) {$\vec s_1$};
\node[orange3] (stk) at (0,-0.6) {$\vdots$};
\node[vec, stk] at (0,0) {$\vec s_\tau$};
\end{scope}
\begin{scope}[shift={(3,0)}]
\node[circle, nrn] (h1) at (-36:1.2) {};
\node[circle, nrn] (h2) at (-108:1.2) {};
\node[circle, nrn] (h3) at (-180:1.2) {};
\node[circle, nrn] (h4) at (-252:1.2) {};
\node[circle, nrn] (h5) at (-324:1.2) {};
\node at (0,0) {$\bm{W}$};
\path[nrn, ->, >=stealth', semithick]
(h1) edge[bend left] (h2)
(h2) edge[bend left] (h3)
(h3) edge[bend left] (h4)
(h4) edge[bend left] (h5)
(h5) edge[bend left] (h1);
\path[nrn, <->, >=stealth', semithick]
(h1) edge (h3)
(h3) edge (h5)
(h5) edge (h2)
(h2) edge (h4)
(h4) edge (h1);
\node at (-1.8,0) {$\bm{U}$};
\node (inp_to_res) at (135:1.2) {};
\node (stk_to_res) at (225:1.2) {};
\node (res_to_h) at (45:1.2) {};
\node (res_to_g) at (-45:1.2) {};
\path[inp, ->, >=stealth', semithick]
(inp) edge[out=0,in=135, shorten <=0.5cm] (inp_to_res);
\path[stk, ->, >=stealth', semithick]
(stk) edge[out=0,in=225, shorten <=0.5cm] (stk_to_res);
\end{scope}
\begin{scope}[shift={(5.5,0)}]
\node[vec, inp] (ht) at (0,+0.4) {$\strut\vec h_t$};
\path[inp, ->, >=stealth', semithick]
(res_to_h) edge[out=45,in=180] (ht);
\node[vec, stk] (gt) at (0,-0.4) {$\strut\vec g_t$};
\path[stk, ->, >=stealth', semithick]
(res_to_g) edge[out=-45,in=180] (gt);
\node (state) at (0.6,0) {};
\end{scope}
\begin{scope}[shift={(7.6,0.5)}]
\node[right, aluminium6] (cpop) at (0,-0.4) {$c^\text{pop}$};
\node[right, orange3] (j) at (1.5,-0.4) {$j$};
\node[right, aluminium6] (cpush) at (0,+0.4) {$c^\text{push}$};
\node[right, orange3] (k) at (1.5,+0.4) {$\vec a$};
\node[right, aluminium6] (cshift) at (0,+1.2) {$c^\text{shift}$};
\node[right, orange3] (shift) at (1.5,+1.2) {$0/1$};
\node[right, aluminium6] (cout) at (0,-1.2) {$c^\text{out}$};
\node[vec, nrn] (yt) at (0.4,-2.35) {$\vec y_t$};
\path[nrn, ->, >=stealth', semithick]
(state) edge[out=0,in=180] (cpop)
(cpop) edge (j)
(state) edge[out=0,in=180] (cpush)
(cpush) edge (k)
(state) edge[out=0,in=180] (cshift)
(cshift) edge (shift)
(state) edge[out=0,in=180] (cout)
(cout) edge (yt);
\end{scope}
\begin{scope}[shift={(11.5,-1.2)}]
\node[vec, stk] at (0,0) {$\vec s_1$};
\node[orange3] at (0,0.7) {$\vdots$};
\node[vec, stk] (stop) at (0,1.3) {$\vec s_{\tau-j}$};
\node[vec, stk] (snont) at (0,2.1) {$\vec a$};
\node[vec, stk] (sinp) at (0,2.9) {$\vec x_t$};
\path[stk, ->, >=stealth', semithick]
(shift) edge (sinp)
(k) edge (snont)
(j) edge (stop);
\end{scope}
\end{tikzpicture}
\end{center}
\caption{An illustration of the reservoir stack machine dynamics.
The input sequence up to time $t$ is represented as a state $\vec h_t$
via the reservoir $(\bm{U}, \bm{W}, \sigma)$ and the current stack is represented
as a state $\vec g_t$ via the same reservoir. The concatenated state
$(\vec h_t, \vec g_t)$ is plugged into classifiers $c^\text{shift}$, $c^\text{push}$, and
$c^\text{pop}$, which generate the next stack state (right). Note that
$c^\text{pop}$ and $c^\text{push}$ are called repeatedly until $c^\text{pop}$ returns $0$ and $c^\text{push}$ returns $\vec 0$.
The output function $c^\text{out}$ generates the next output $\vec y_t$.}
\label{fig:rsm}
\end{figure}
We construct a reservoir stack machine (RSM) as a combination of an echo state
network (ESN) \cite{ESN} with a stack, where the stack is controlled similarly to an LR(1)-automaton
\cite{LR}. In more detail:
\begin{dfn}
Let $\Sigma \subset \mathbb{R}^n$ and $\Phi \subset \mathbb{R}^n$ be compact sets with $\Sigma \cap \Phi = \emptyset$
and $\vec 0 \notin \Phi$.
We define a reservoir stack machine (RSM) over $\Sigma$ and $\Phi$ with $m$ neurons, at most $J$
symbols popped at a time, and $L$ outputs as a $7$-tuple
$\mathcal{M} = (\bm{U}, \bm{W}, \sigma, c^\text{pop}, c^\text{push},$ $c^\text{shift}, c^\text{out})$
where $(\bm{U}, \bm{W}, \sigma)$ is a reservoir with $n$ inputs and $m$ neurons that
conforms to the echo state property on $\Sigma \cup \Phi$, and where
$c^\text{pop} : \mathbb{R}^{2m} \to \{0, \ldots, J\}$,
$c^\text{push} : \mathbb{R}^{2m} \to \Phi \cup \{\vec 0\}$,
$c^\text{shift} : \mathbb{R}^{2m} \to \{0, 1\}$, and
$c^\text{out} : \mathbb{R}^{2m} \to \mathbb{R}^L$.
Let $\vec x_1, \ldots, \vec x_T$ be a sequence over $\Sigma$. We define the output sequence
$\mathcal{M}(\vec x_1, \ldots, \vec x_T) = \vec y_1, \ldots, \vec y_{T+1}$
as the output of Algorithm~\ref{alg:rsm} for the input $\vec x_1, \ldots, \vec x_T$.
We say that an RSM simulates an automaton $\mathcal{A}$ if $y_{T+1} = \mathcal{A}(\vec x_1, \ldots, \vec x_T)$
for any input sequence $\vec x_1, \ldots, \vec x_T \in \Sigma^*$.
\end{dfn}
\begin{algorithm}
\caption{The dynamics of a reservoir stack machine
$\mathcal{M} = (\bm{U}, \bm{W}, \sigma, c^\text{pop}, c^\text{push}, c^\text{shift}, c^\text{out})$
on the input sequence $\vec x_1, \ldots, \vec x_T$.}
\label{alg:rsm}
\begin{algorithmic}[1]
\Function{RSM}{Input sequence $\vec x_1, \ldots, \vec x_T$}
\State Set $\vec x_{T+1} \gets \vec 0$.
\State Initialize an empty stack $\mathcal{S}$.
\State Initialize $\vec h_0 \gets \vec 0$, $\vec g_1 \gets \vec 0$.
\For{$t \gets 1, \ldots, T+1$}
\State $\vec h_t \gets \sigma\Big(\bm{U} \cdot \vec x_t + \bm{W} \cdot \vec h_{t-1}\Big)$.
\While{True}
\State $j \gets c^\text{pop}(\vec h_t, \vec g_t)$.
\If{$j > 0$}
\State Pop $j$ elements from $\mathcal{S}$.
\State $\vec g_t \gets \vec 0$.
\For{$\vec s \gets$ elements of $\mathcal{S}$}
\State $\vec g_t \gets \sigma\Big(\bm{U} \cdot \vec s + \bm{W} \cdot \vec g_t \Big)$.
\EndFor
\EndIf
\State $\vec a \gets c^\text{push}(\vec h_t, \vec g_t)$.
\If{$\vec a \neq \vec 0$}
\State Push $\vec a$ onto $\mathcal{S}$.
\State $\vec g_t \gets \sigma\Big(\bm{U} \cdot \vec a + \bm{W} \cdot \vec g_t \Big)$.
\EndIf
\If{$j = 0$ and $\vec a = \vec 0$}
\State \textbf{Break}.
\EndIf
\EndWhile
\State $\vec y_t \gets c^{\text{out}}(\vec h_t, \vec g_t)$.
\If{$c^\text{shift}(\vec h_t, \vec g_t) > 0$}
\State Push $\vec x_t$ onto $\mathcal{S}$.
\State $\vec g_{t+1} \gets \sigma\Big(\bm{U} \cdot \vec x_t + \bm{W} \cdot \vec g_t \Big)$.
\Else
\State $\vec g_{t+1} \gets \vec g_t$.
\EndIf
\EndFor
\State \Return $\vec y_1, \ldots, \vec y_{T+1}$.
\EndFunction
\end{algorithmic}
\end{algorithm}
Roughly speaking, Algorithm~\ref{alg:rsm} works as follows.
In every iteration $t$ of the RSM, we represent the input sequence
$\vec x_1, \ldots, \vec x_t$ up to time $t$ with a state vector $\vec h_t$, and the stack content
$\vec s_1, \ldots, \vec s_\tau$ with a state vector $\vec g_t$, both using the
same reservoir $(\bm{U}, \bm{W}, \sigma)$. Based on the concatenated
state vector $(\vec h_t, \vec g_t)$, we then let $c^\text{pop}$ decide how many symbols to pop from the
stack and we let $c^\text{push}$ decide which symbol (if any) to push onto the stack, until both classifiers
return $0$. Then, we let a function
$c^\text{out}$ decide the current output $\vec y_t$, and we let a binary classifier
$c^\text{shift}$ decide whether to push the current input symbol $\vec x_t$ onto the stack or not.
Every time we change the stack, we update the state $\vec g_t$ accordingly. Refer to Figure~\ref{fig:rsm}
for a graphical illustration.
As an example, let us construct an RSM that simulates the LR(1)-automaton $\mathcal{A}_{a^nb^n}$
in Equation~\ref{eq:anbn}. For such a simulation, we only need to distinguish three cases:
First, if the top of the stack is $b$, we need to pop $3$ symbols from the stack and push the symbol $S$;
second, if the top of the stack is $a$ and the current input symbol is $b$, we need to pop $0$ symbols
from the stack and push the symbol $S$; third, in all other cases we neither pop nor push.
Distinguishing these cases is easy with an echo state network. For example, consider a cycle reservoir
with jumps (CRJ) \cite{CRJ} with five neurons, jump length 2, input weight $1$ and cycle weight $0.5$,
i.e.\ the architecture displayed in Figure~\ref{fig:rsm}.
Figure~\ref{fig:states} shows a two-dimensional principal component analysis
of the states generated by this CRJ for one-hot coded sequences of length up to 10, where color and
shape indicate the last symbol of the sequence. As we can see, the states are
linearly separable based on their last symbol. Accordingly, we can implement $c^\text{pop}$
and $c^\text{push}$ as linear classifiers, $c^\text{shift}$ as a constant $1$, and $c^\text{out}$
merely needs to recognize whether the current stack content is exactly $S$ or anything else.
Then, our reservoir stack machine will return $1$ for any word that is in $\mathcal{L}_{a^nb^n}$
and $0$ otherwise.
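The three-case decision logic can also be written down symbolically. In the following sketch, hand-coded conditions stand in for the linear classifiers acting on reservoir states; the end marker \#, which plays the role of the final zero input $\vec x_{T+1}$ in Algorithm~\ref{alg:rsm}, and the length guard for malformed stacks are our illustrative additions:

```python
# Symbolic stand-in for the RSM that recognizes a^n b^n: the decisions
# depend only on the top of the stack and the current input, mirroring
# what the linear classifiers read off the reservoir states.
def rsm_anbn(word):
    stack = []
    for x in list(word) + ['#']:            # '#' stands in for the final zero input
        while True:
            top = stack[-1] if stack else ''
            if top == 'b' and len(stack) >= 3:  # case 1: pop a,S,b; push S
                j, a = 3, 'S'
            elif top == 'a' and x == 'b':       # case 2: pop nothing; push S
                j, a = 0, 'S'
            else:                               # case 3: neither pop nor push
                break
            del stack[len(stack) - j:]
            stack.append(a)
        if x != '#':                            # c_shift is constantly 1
            stack.append(x)
    return 1 if stack == ['S'] else 0           # c_out checks for the stack S
```

This stand-in returns $1$ exactly on the words of the example language and $0$ otherwise, matching the behavior described above.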
\begin{figure}
\begin{center}
\begin{tikzpicture}
\begin{axis}[width=8cm, height=5cm, legend pos={outer north east}, legend cell align=left]
\addplot[scatter,only marks,
scatter src=explicit,
scatter/classes={
0={mark=square*,fill=aluminium4,draw=aluminium6,mark size=0.06cm,opacity=0.5}
1={mark=triangle*,fill=skyblue1,draw=skyblue3,mark size=0.09cm,opacity=0.5}
2={mark=*,fill=orange1,draw=orange3,opacity=0.5}}]
table[x=x,y=y,meta=rule,col sep=tab]{reservoir_example.csv};
\legend{a, b, S}
\draw[semithick, aluminium6] (axis cs:-0.5,-1) -- (axis cs:2,1);
\draw[semithick, aluminium6] (axis cs:-1.5,-0.5) -- (axis cs:1.5,1.4);
\end{axis}
\end{tikzpicture}
\end{center}
\caption{A 2-dimensional PCA of the representations produced by the reservoir in Figure~\ref{fig:rsm}
for stacks up to length 10. Color and shape represent the symbol on top of the stack.
Lines indicate the linear separability of representations according to the top symbol.}
\label{fig:states}
\end{figure}
This example illustrates how the reservoir stack machine is intended to work: the input state $\vec h_t$
represents the current input symbol and the stack state $\vec g_t$ represents a suffix of the stack, such that
simple classifiers can decide which rule of an LR(1)-automaton to apply.
Next, we generalize this idea to a theorem, showing how to simulate an LR(1)-automaton with an RSM
more generally.
\subsection{Theory}
\label{sec:theory}
This section has two goals: First, we wish to show that an echo state network with a constant-sized
memory as proposed in \citet{RMM2} is not sufficient to simulate general LR(1) automata. Second, we
wish to show that the reservoir stack machine is sufficient, provided that the underlying reservoir
is sufficiently rich.
We begin with the first claim. First, we recall the definition of a reservoir memory machine from
\citet{RMM2} (in a slightly adapted form for our notation in this paper).
\begin{dfn}[Reservoir Memory Machine \cite{RMM2}]
Let $\Sigma \subset \mathbb{R}^n$ be a compact set. We define a reservoir memory machine (RMM) over $\Sigma$ with $m$ neurons,
$K$ rows of memory and $L$ outputs as a $5$-tuple of the form
$\mathcal{M} = (\bm{U}, \bm{W}, \sigma, c, g)$, where $(\bm{U}, \bm{W}, \sigma)$ is a reservoir with
$n$ inputs and $m$ neurons that fulfills the echo state property with respect to $\Sigma$, where
$c : \mathbb{R}^m \to \{0, \ldots, K\}$, and where $g : \mathbb{R}^m \to \mathbb{R}^L$.
Let $\vec x_1, \ldots, \vec x_T \in \Sigma^*$. We define the output sequence
$\mathcal{M}(\vec x_1, \ldots, \vec x_T) = \vec y_1, \ldots, \vec y_T$ as the output of
Algorithm~\ref{alg:rmm} for the input $\vec x_1, \ldots, \vec x_T$.
\end{dfn}
\begin{algorithm}
\caption{The dynamics of a reservoir memory machine $\mathcal{M} = (\bm{U}, \bm{W}, \sigma, c, g)$
on the input sequence $\vec x_1, \ldots, \vec x_T$.}
\label{alg:rmm}
\begin{algorithmic}[1]
\Function{RMM}{Input sequence $\vec x_1, \ldots, \vec x_T$}
\State Initialize an empty memory matrix $\bm{M}$ of size $K \times m$ with zeros.
\State Initialize $\vec h_0 \gets \vec 0$.
\For{$t \gets 1, \ldots, T$}
\State $\vec h_t \gets \sigma\big( \bm{W} \cdot \vec h_{t-1} + \bm{U} \cdot \vec x_t\big)$.
\State $a_t \gets c(\vec h_t)$.
\If{$a_t > 0$}
\If{$\vec m_{a_t}$ has already been written to}
\State $\vec h_t \gets \vec m_{a_t}$.
\Else
\State $\vec m_{a_t} \gets \vec h_t$.
\EndIf
\EndIf
\State $\vec y_t \gets g(\vec h_t)$.
\EndFor
\State \Return $\vec y_1, \ldots, \vec y_T$.
\EndFunction
\end{algorithmic}
\end{algorithm}
Intuitively, the classifier $c$ controls which memory location we are currently accessing and the
function $g$ controls the output of the system. As long as $c$ outputs zero, the RMM behaves like a
standard ESN. When it outputs a nonzero memory address $a_t$ for the first time at time $t$, we write the
current state $\vec h_t$ to the $a_t$th row of the memory. When the same address $a_{t'}$ is accessed again
at a time $t' > t$, we overwrite $\vec h_{t'}$ with the memory entry at $a_{t'}$. In other words, the output
will be the same as that of an ESN until we have two times $t$, $t'$ with $t' > t$
such that $a_t = a_{t'} > 0$. As soon as that happens, the RMM recalls past states.
This memory mechanism is provably sufficient to simulate any finite state automaton \cite{RMM2}.
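The write/recall behavior of Algorithm~\ref{alg:rmm} can be illustrated with a small sketch. The weights and the marker-detecting controller below are hand-constructed toy values of our own, not taken from the paper:

```python
import numpy as np

def rmm_states(xs, U, W, c, K):
    # runs the RMM dynamics of the algorithm above; the output function g
    # is omitted since we only track the states here
    m = W.shape[0]
    M, written = np.zeros((K + 1, m)), [False] * (K + 1)
    h, states = np.zeros(m), []
    for x in xs:
        h = np.tanh(W @ h + U @ x)
        a = c(h)
        if a > 0:
            if written[a]:
                h = M[a].copy()                     # later access: recall
            else:
                M[a], written[a] = h.copy(), True   # first access: write
        states.append(h.copy())
    return states

# toy setup: a strong input weight for the marker symbol lets the
# controller c detect the marker from the state h alone
W = 0.1 * np.array([[0., 1., 0.], [0., 0., 1.], [1., 0., 0.]])
U = np.array([[5., 0.], [0., 1.], [0., 0.]])
marker, other = np.array([1., 0.]), np.array([0., 1.])
c = lambda h: 1 if h[0] > 0.99 else 0

states = rmm_states([marker, other, other, marker], U, W, c, K=1)
```

Here, the final state equals the first one: the second marker access resets the trajectory to the state stored at the first marker.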
However, it is insufficient to simulate at least some LR(1) automata. In particular,
consider the following LR(1) automaton:
\begin{equation}
\mathcal{A}_\text{palin} = \Big( \{S\}, \{a, b, \$\}, [(\$, \varepsilon, 1, S), (aSa, \varepsilon, 3, S), (bSb, \varepsilon, 3, S)], \{S\} \Big). \label{eq:palindromes}
\end{equation}
This automaton recognizes the language $\mathcal{L}(\mathcal{A}_\text{palin})$ of palindromes over $a$ and $b$
with $\$$ in the middle. We will now show that no RMM can simulate this automaton.
\begin{thm}\label{thm:rmm}
Let $\Sigma = \{a, b, \$\}$ be a set of $n$-dimensional vector encodings of the symbols $a$, $b$
and $\$$. Further, let $\mathcal{M} = (\bm{U}, \bm{W}, \sigma, c, g)$ be a reservoir memory
machine with $m$ neurons, $K$ rows of memory, and $L = 1$ outputs where $g$ is a uniformly
continuous output function, i.e.\ for any $\epsilon > 0$ there exists some $\tilde \delta_\epsilon$
such that for any two $\vec h, \vec h' \in \mathbb{R}^m$ with $\lVert \vec h - \vec h'\rVert < \tilde \delta_\epsilon$
it holds $|g(\vec h) - g(\vec h')| < \epsilon$. Then, $\mathcal{M}$ does not simulate $\mathcal{A}_\text{palin}$.
\begin{proof}
We perform a proof by contradiction, i.e.\ we assume that there exists an RMM $\mathcal{M}$ with a
uniformly continuous output function $g$ and which does simulate $\mathcal{A}_\text{palin}$ and we
show that this yields a contradiction.
Our proof has two steps. First, we show that any RMM that simulates $\mathcal{A}_\text{palin}$ never
visits line 9 of Algorithm~\ref{alg:rmm} for inputs of the form $a^T \$ a^T$ or $b a^{T-1} \$ a^{T-1} b$,
i.e.\ it behaves like an ESN for these inputs. Then, we show that a reservoir with the echo state
property cannot distinguish the inputs $a^T \$ a^T$ and $ba^T \$ a^T$ for large enough $T$, which in
turn means that an ESN with a uniformly continuous output function cannot distinguish them, either.
\begin{figure}
\begin{center}
\begin{tikzpicture}
\node at (0,0) {Construction for $a^T\$a^T$};
\begin{scope}[shift={(-3.5,-2)}]
\node at (0,+1) {$\mathcal{A}_\text{palin}(\bar x) = 1$};
\draw[pattern=north west lines, pattern color=aluminium3, draw=none] (-1.4,+0.25) rectangle (-0.7,-0.25);
\draw (-2.5,+0.25) rectangle (-0.3,-0.25);
\draw (-0.3,+0.25) rectangle (+0.3,-0.25);
\draw (+0.3,+0.25) rectangle (+2.5,-0.25);
\node at (-1,0) {$a^T$};
\node at (0,0) {$\$$};
\node at (+1,0) {$a^T$};
\node[above] at (-1.4,+0.25) {$t$};
\node[above] at (-0.7,+0.25) {$t'$};
\end{scope}
\begin{scope}[shift={(+3.5,-2)}]
\node at (0,+1) {$\mathcal{A}_\text{palin}(\bar x') = 0$};
\draw (-1.8,+0.25) rectangle (-0.3,-0.25);
\draw (-0.3,+0.25) rectangle (+0.3,-0.25);
\draw (+0.3,+0.25) rectangle (+2.5,-0.25);
\node at (-1,0) {$a^{T-t'+t}$};
\node at (0,0) {$\$$};
\node at (+1,0) {$a^T$};
\end{scope}
\begin{scope}[shift={(-3.5,-3.2)}]
\draw[pattern=north west lines, pattern color=aluminium3, draw=none] (-0.7,+0.25) rectangle (+0.4,-0.25);
\draw (-2.5,+0.25) rectangle (-0.3,-0.25);
\draw (-0.3,+0.25) rectangle (+0.3,-0.25);
\draw (+0.3,+0.25) rectangle (+2.5,-0.25);
\node at (-1,0) {$a^T$};
\node at (0,0) {$\$$};
\node at (+1,0) {$a^T$};
\node[above] at (-0.7,+0.25) {$t$};
\node[above] at (+0.4,+0.25) {$t'$};
\end{scope}
\begin{scope}[shift={(+3.5,-3.2)}]
\draw (-1.8,+0.25) rectangle (+2.1,-0.25);
\node at (0,0) {$a^{2T - t' + t + 1}$};
\end{scope}
\begin{scope}[shift={(-3.5,-4.4)}]
\draw[pattern=north west lines, pattern color=aluminium3, draw=none] (+0.7,+0.25) rectangle (+1.5,-0.25);
\draw (-2.5,+0.25) rectangle (-0.3,-0.25);
\draw (-0.3,+0.25) rectangle (+0.3,-0.25);
\draw (+0.3,+0.25) rectangle (+2.5,-0.25);
\node at (-1,0) {$a^T$};
\node at (0,0) {$\$$};
\node at (+1,0) {$a^T$};
\node[above] at (+0.7,+0.25) {$t$};
\node[above] at (+1.5,+0.25) {$t'$};
\end{scope}
\begin{scope}[shift={(+4.2,-4.4)}]
\draw (-2.5,+0.25) rectangle (-0.3,-0.25);
\draw (-0.3,+0.25) rectangle (+0.3,-0.25);
\draw (+0.3,+0.25) rectangle (+2.0,-0.25);
\node at (-1,0) {$a^T$};
\node at (0,0) {$\$$};
\node at (+1,0) {$a^{T-t'+t}$};
\end{scope}
\node at (0,-6) {Construction for $ba^{T-1}\$a^{T-1}b$};
\begin{scope}[shift={(-3.5,-7)}]
\draw[pattern=north west lines, pattern color=aluminium3, draw=none] (-2.5,+0.25) rectangle (-1.5,-0.25);
\draw (-2.5,+0.25) rectangle (-2.0,-0.25);
\draw (-2.0,+0.25) rectangle (-0.3,-0.25);
\draw (-0.3,+0.25) rectangle (+0.3,-0.25);
\draw (+0.3,+0.25) rectangle (+2.0,-0.25);
\draw (+2.0,+0.25) rectangle (+2.5,-0.25);
\node at (-2.25,0) {$b$};
\node at (-1,0) {$a^{T-1}$};
\node at (0,0) {$\$$};
\node at (+1,0) {$a^{T-1}$};
\node at (+2.25,0) {$b$};
\node[above] at (-2.5,+0.25) {$t$};
\node[above] at (-1.5,+0.25) {$t'$};
\end{scope}
\begin{scope}[shift={(+3.5,-7)}]
\draw (-2.0,+0.25) rectangle (-0.3,-0.25);
\draw (-0.3,+0.25) rectangle (+0.3,-0.25);
\draw (+0.3,+0.25) rectangle (+2.0,-0.25);
\draw (+2.0,+0.25) rectangle (+2.5,-0.25);
\node at (-1,0) {$a^{T-t'}$};
\node at (0,0) {$\$$};
\node at (+1,0) {$a^{T-1}$};
\node at (+2.25,0) {$b$};
\end{scope}
\begin{scope}[shift={(-3.5,-8.2)}]
\draw[pattern=north west lines, pattern color=aluminium3, draw=none] (-2.5,+0.25) rectangle (+0.5,-0.25);
\draw (-2.5,+0.25) rectangle (-2.0,-0.25);
\draw (-2.0,+0.25) rectangle (-0.3,-0.25);
\draw (-0.3,+0.25) rectangle (+0.3,-0.25);
\draw (+0.3,+0.25) rectangle (+2.0,-0.25);
\draw (+2.0,+0.25) rectangle (+2.5,-0.25);
\node at (-2.25,0) {$b$};
\node at (-1,0) {$a^{T-1}$};
\node at (0,0) {$\$$};
\node at (+1,0) {$a^{T-1}$};
\node at (+2.25,0) {$b$};
\node[above] at (-2.5,+0.25) {$t$};
\node[above] at (+0.5,+0.25) {$t'$};
\end{scope}
\begin{scope}[shift={(+3.5,-8.2)}]
\draw (+0.3,+0.25) rectangle (+2.0,-0.25);
\draw (+2.0,+0.25) rectangle (+2.5,-0.25);
\node at (+1,0) {$a^{2T-t'}$};
\node at (+2.25,0) {$b$};
\end{scope}
\begin{scope}[shift={(-3.5,-9.4)}]
\draw[pattern=north west lines, pattern color=aluminium3, draw=none] (-2.5,+0.25) rectangle (+2.5,-0.25);
\draw (-2.5,+0.25) rectangle (-2.0,-0.25);
\draw (-2.0,+0.25) rectangle (-0.3,-0.25);
\draw (-0.3,+0.25) rectangle (+0.3,-0.25);
\draw (+0.3,+0.25) rectangle (+2.0,-0.25);
\draw (+2.0,+0.25) rectangle (+2.5,-0.25);
\node at (-2.25,0) {$b$};
\node at (-1,0) {$a^{T-1}$};
\node at (0,0) {$\$$};
\node at (+1,0) {$a^{T-1}$};
\node at (+2.25,0) {$b$};
\node[above] at (-2.5,+0.25) {$t$};
\node[above] at (+2.5,+0.25) {$t'$};
\end{scope}
\begin{scope}[shift={(+3.5,-9.4)}]
\node at (0,0) {$\varepsilon$};
\end{scope}
\begin{scope}[shift={(-3.5,-10.6)}]
\draw[pattern=north west lines, pattern color=aluminium3, draw=none] (-1.6,+0.25) rectangle (-0.5,-0.25);
\draw (-2.5,+0.25) rectangle (-2.0,-0.25);
\draw (-2.0,+0.25) rectangle (-0.3,-0.25);
\draw (-0.3,+0.25) rectangle (+0.3,-0.25);
\draw (+0.3,+0.25) rectangle (+2.0,-0.25);
\draw (+2.0,+0.25) rectangle (+2.5,-0.25);
\node at (-2.25,0) {$b$};
\node at (-1,0) {$a^{T-1}$};
\node at (0,0) {$\$$};
\node at (+1,0) {$a^{T-1}$};
\node at (+2.25,0) {$b$};
\node[above] at (-1.6,+0.25) {$t$};
\node[above] at (-0.5,+0.25) {$t'$};
\end{scope}
\begin{scope}[shift={(+3.5,-10.6)}]
\draw (-2.5,+0.25) rectangle (-2,-0.25);
\draw (-2,+0.25) rectangle (-0.3,-0.25);
\draw (-0.3,+0.25) rectangle (+0.3,-0.25);
\draw (+0.3,+0.25) rectangle (+2,-0.25);
\draw (+2,+0.25) rectangle (+2.5,-0.25);
\node at (-2.25,0) {$b$};
\node at (-1.15,0) {$a^{T-t'+t-1}$};
\node at (0,0) {$\$$};
\node at (+1.15,0) {$a^{T-1}$};
\node at (+2.25,0) {$b$};
\end{scope}
\begin{scope}[shift={(-3.5,-11.8)}]
\draw[pattern=north west lines, pattern color=aluminium3, draw=none] (-1.6,+0.25) rectangle (+0.5,-0.25);
\draw (-2.5,+0.25) rectangle (-2.0,-0.25);
\draw (-2.0,+0.25) rectangle (-0.3,-0.25);
\draw (-0.3,+0.25) rectangle (+0.3,-0.25);
\draw (+0.3,+0.25) rectangle (+2.0,-0.25);
\draw (+2.0,+0.25) rectangle (+2.5,-0.25);
\node at (-2.25,0) {$b$};
\node at (-1,0) {$a^{T-1}$};
\node at (0,0) {$\$$};
\node at (+1,0) {$a^{T-1}$};
\node at (+2.25,0) {$b$};
\node[above] at (-1.6,+0.25) {$t$};
\node[above] at (+0.5,+0.25) {$t'$};
\end{scope}
\begin{scope}[shift={(+3.5,-11.8)}]
\draw (-2.5,+0.25) rectangle (-2,-0.25);
\draw (-2,+0.25) rectangle (+2.0,-0.25);
\draw (+2,+0.25) rectangle (+2.5,-0.25);
\node at (-2.25,0) {$b$};
\node at (0,0) {$a^{2T-t'+t-1}$};
\node at (+2.25,0) {$b$};
\end{scope}
\begin{scope}[shift={(-3.5,-13)}]
\draw[pattern=north west lines, pattern color=aluminium3, draw=none] (-1.6,+0.25) rectangle (+2.5,-0.25);
\draw (-2.5,+0.25) rectangle (-2.0,-0.25);
\draw (-2.0,+0.25) rectangle (-0.3,-0.25);
\draw (-0.3,+0.25) rectangle (+0.3,-0.25);
\draw (+0.3,+0.25) rectangle (+2.0,-0.25);
\draw (+2.0,+0.25) rectangle (+2.5,-0.25);
\node at (-2.25,0) {$b$};
\node at (-1,0) {$a^{T-1}$};
\node at (0,0) {$\$$};
\node at (+1,0) {$a^{T-1}$};
\node at (+2.25,0) {$b$};
\node[above] at (-1.6,+0.25) {$t$};
\node[above] at (+2.5,+0.25) {$t'$};
\end{scope}
\begin{scope}[shift={(+3.5,-13)}]
\draw (-2.5,+0.25) rectangle (-2,-0.25);
\draw (-2.0,+0.25) rectangle (-0.3,-0.25);
\node at (-2.25,0) {$b$};
\node at (-1,0) {$a^{t-1}$};
\end{scope}
\begin{scope}[shift={(-3.5,-14.2)}]
\draw[pattern=north west lines, pattern color=aluminium3, draw=none] (+0.5,+0.25) rectangle (+1.6,-0.25);
\draw (-2.5,+0.25) rectangle (-2.0,-0.25);
\draw (-2.0,+0.25) rectangle (-0.3,-0.25);
\draw (-0.3,+0.25) rectangle (+0.3,-0.25);
\draw (+0.3,+0.25) rectangle (+2.0,-0.25);
\draw (+2.0,+0.25) rectangle (+2.5,-0.25);
\node at (-2.25,0) {$b$};
\node at (-1,0) {$a^{T-1}$};
\node at (0,0) {$\$$};
\node at (+1,0) {$a^{T-1}$};
\node at (+2.25,0) {$b$};
\node[above] at (+0.6,+0.25) {$t$};
\node[above] at (+1.6,+0.25) {$t'$};
\end{scope}
\begin{scope}[shift={(+3.5,-14.2)}]
\draw (-2.5,+0.25) rectangle (-2.0,-0.25);
\draw (-2.0,+0.25) rectangle (-0.3,-0.25);
\draw (-0.3,+0.25) rectangle (+0.3,-0.25);
\draw (+0.3,+0.25) rectangle (+2.0,-0.25);
\draw (+2.0,+0.25) rectangle (+2.5,-0.25);
\node at (-2.25,0) {$b$};
\node at (-1,0) {$a^{T-1}$};
\node at (0,0) {$\$$};
\node at (+1.15,0) {$a^{T-t'+t-1}$};
\node at (+2.25,0) {$b$};
\end{scope}
\begin{scope}[shift={(-3.5,-15.4)}]
\draw[pattern=north west lines, pattern color=aluminium3, draw=none] (+0.5,+0.25) rectangle (+2.5,-0.25);
\draw (-2.5,+0.25) rectangle (-2.0,-0.25);
\draw (-2.0,+0.25) rectangle (-0.3,-0.25);
\draw (-0.3,+0.25) rectangle (+0.3,-0.25);
\draw (+0.3,+0.25) rectangle (+2.0,-0.25);
\draw (+2.0,+0.25) rectangle (+2.5,-0.25);
\node at (-2.25,0) {$b$};
\node at (-1,0) {$a^{T-1}$};
\node at (0,0) {$\$$};
\node at (+1,0) {$a^{T-1}$};
\node at (+2.25,0) {$b$};
\node[above] at (+0.6,+0.25) {$t$};
\node[above] at (+2.5,+0.25) {$t'$};
\end{scope}
\begin{scope}[shift={(+3.5,-15.4)}]
\draw (-2.5,+0.25) rectangle (-2.0,-0.25);
\draw (-2.0,+0.25) rectangle (-0.3,-0.25);
\draw (-0.3,+0.25) rectangle (+0.3,-0.25);
\draw (+0.3,+0.25) rectangle (+2.0,-0.25);
\node at (-2.25,0) {$b$};
\node at (-1,0) {$a^{T-1}$};
\node at (0,0) {$\$$};
\node at (+1.15,0) {$a^{t-T-1}$};
\end{scope}
\end{tikzpicture}
\end{center}
\caption{All possibilities to recall memory in an RMM at different positions while processing inputs of the form
$a^T\$a^T$ (top) or $ba^{T-1}\$a^{T-1}b$ (bottom). In all inputs on the left, the shaded region indicates the portion
of the input that is ignored and the input on the right corresponds to a version of the input where
the shaded region is removed. In all cases, we observe that the input on the left would be accepted
by $\mathcal{A}_\text{palin}$, while the input on the right would not. This is a contradiction, because
both inputs would receive the same state (and hence the same output) by the RMM.}
\label{fig:rmm_proof}
\end{figure}
Regarding our first claim, assume that line 9 of Algorithm~\ref{alg:rmm} is visited.
Then, there must exist $t, t'$ such that $0 < t < t' \leq 2T + 1$ and $a_t = a_{t'} > 0$.
Now, let $t'$ be as large as possible and $t$ be as small as possible.
In that case, the original sequence $\bar x$ and the sequence $\bar x'$ that results from removing the subsequence
from $t$ to $t'$ yield the same state (refer to \citet{RMM2}) and, accordingly, the same output. However,
we can show that $\bar x$ would be accepted by $\mathcal{A}_\text{palin}$
while $\bar x'$ would not be accepted. Accordingly, the RMM does not simulate $\mathcal{A}_\text{palin}$,
which is a contradiction. Refer to Figure~\ref{fig:rmm_proof} for an illustration of all possible
choices of $t$ and $t'$.
By extension of this argument, we also know that line 9 is not visited for the prefix
$ba^T\$a^T$, otherwise it would also be visited for $ba^T\$a^Tb$. Further,
given that line 9 is not visited, the output generated by Algorithm~\ref{alg:rmm} is the same as
the output of the ESN $(\bm{U}, \bm{W}, \sigma, g)$ according to Definition~\ref{dfn:esn},
because lines 6--8 and 10--13 neither influence the state $\vec h_t$ nor the output.
Next, we show that the ESN $(\bm{U}, \bm{W}, \sigma, g)$ cannot simulate the automaton
$\mathcal{A}_\text{palin}$. For this purpose,
we consider the output function $g$ and the echo state property in a bit more detail. Because
$g$ is uniformly continuous, we know that there must exist some $\tilde \delta$,
such that for any two $\vec h, \vec h' \in \mathbb{R}^m$ with $\lVert \vec h - \vec h' \rVert < \tilde \delta$
we obtain $|g(\vec h) - g(\vec h')| < 1$. Next, let $\delta_1, \delta_2, \ldots$ be the null sequence
from the definition of the echo state property (refer to Definition~\ref{dfn:esn}). Because this is a
null sequence, there must exist some $t^*$ such that for any $t > t^*$ we obtain
$\delta_t < \tilde \delta$. Now, consider the two inputs $a^T\$a^T$ and $ba^T\$a^T$ with
$T = \lceil t^* / 2 \rceil$ and consider the two initial states $\vec h_0$ and
$\vec h'_0 = \sigma(\bm{U} \cdot b)$, as well as the continuations
$\vec h_t = \sigma(\bm{W} \cdot \vec h_{t-1} + \bm{U} \cdot \vec x_t)$ and
$\vec h'_t = \sigma(\bm{W} \cdot \vec h'_{t-1} + \bm{U} \cdot \vec x_t)$
for all $t \in \{1, \ldots, 2T+1\}$ where $\vec x_1, \ldots, \vec x_{2T+1} = a^T\$a^T$.
The echo state property now guarantees that $\lVert \vec h_{2T+1} - \vec h'_{2T+1} \rVert \leq \delta_{2T+1}
< \tilde \delta$ because $2T+1 > t^*$. Accordingly, the uniform continuity of $g$ guarantees that
$|g(\vec h_{2T+1}) - g(\vec h'_{2T+1})| < 1$. However, we have
$|\mathcal{A}_\text{palin}(ba^T\$a^T) - \mathcal{A}_\text{palin}(a^T\$a^T)| = 1$.
Accordingly, $\mathcal{M}$ does not simulate $\mathcal{A}_\text{palin}$.
\end{proof}
\end{thm}
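The contraction argument underlying the echo state property can be illustrated numerically. In the sketch below (our own illustration, not part of the formal argument), the recurrent matrix is rescaled to spectral norm $1/2$; since $\tanh$ is $1$-Lipschitz, the distance between two trajectories driven by the same input provably halves at every step, so after a few dozen steps the two initial states are numerically indistinguishable:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, T = 32, 2, 30

# Random reservoir; rescale W to spectral norm 1/2 so the state map is contractive.
W = rng.standard_normal((m, m))
W *= 0.5 / np.linalg.norm(W, 2)
U = rng.standard_normal((m, n))

def step(h, x):
    return np.tanh(W @ h + U @ x)

# Two different initial states (without / with a leading symbol b, as in the proof),
# then the same input sequence for both trajectories.
h = np.zeros(m)
h_prime = np.tanh(U @ np.array([0.0, 1.0]))
xs = rng.standard_normal((T, n))

dists = []
for x in xs:
    h, h_prime = step(h, x), step(h_prime, x)
    dists.append(np.linalg.norm(h - h_prime))

# The distance decays at least as fast as (1/2)^t.
print(dists[0], dists[-1])
```

Any uniformly continuous output function applied to these two final states therefore yields outputs that differ by an arbitrarily small amount, which is exactly the mechanism the proof exploits.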
Our next goal is to prove that an RSM with a sufficiently rich reservoir can simulate any LR(1)-automaton,
only by adjusting the functions $c^\text{pop}$, $c^\text{push}$, $c^\text{shift}$, and
$c^\text{out}$. To make this precise,
we first define formally what we mean by a `sufficiently rich reservoir' and then proceed to our main theorem.
\begin{dfn}[$\bar w$-separating reservoirs]
Let $\Sigma \subset \mathbb{R}^n$ be some finite set and $\mathcal{R} = (\bm{U}, \bm{W}, \sigma)$ be a
reservoir with $n$ inputs and $m$ neurons. We define the representation
$h_\mathcal{R}(\vec x_1, \ldots, \vec x_T)$ of $\vec x_1, \ldots, \vec x_T \in \Sigma^*$
according to the reservoir $\mathcal{R}$ as the state $\vec h_T$ resulting from Equation~\ref{eq:rnn}.
Now, let $\bar w \in \Sigma^*$. We say that $\mathcal{R}$ is a
\emph{$\bar w$-separating reservoir} if there exists an affine function
$f_{\bar w}(\vec h) = \bm{V} \cdot \vec h + b$ such that for all $\bar u \in \Sigma^*$
it holds: $f_{\bar w}(h_\mathcal{R}(\bar u)) > 0$ if $\bar u$ has $\bar w$ as suffix and
$f_{\bar w}(h_\mathcal{R}(\bar u)) \leq 0$ otherwise.
\end{dfn}
For example, Figure~\ref{fig:states} shows the representations of random words
$\bar w \in \{a, b, S\}^*$ up to length 10.
The figure shows that the reservoir is $a$-separating, $b$-separating,
and $S$-separating. In general, reservoirs with contractive properties have
a strong bias to be separating, because the suffix dominates the representation
\cite{FractalESN}. We also use this separation property to simulate LR(1)-automata with RSMs.
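To make the definition concrete, consider the following sketch of a reservoir over $\Sigma = \{a, b\}$ that is $a$-separating by construction (the scalings are hand-picked for the illustration, not learned): the input weight of the first neuron is so large that the sign of the first state coordinate always reflects the last input symbol, so the affine function $f_a(\vec h) = h_1$ separates.

```python
import numpy as np

rng = np.random.default_rng(1)
m = 16
# One-hot input codes for the alphabet.
sigma = {"a": np.array([1.0, 0.0]), "b": np.array([0.0, 1.0])}

# Recurrent matrix rescaled so every row has L1 norm 1/2; since |h_j| <= 1
# (tanh), the recurrent contribution to each neuron is at most 1/2.
W = rng.standard_normal((m, m))
W *= 0.5 / np.abs(W).sum(axis=1, keepdims=True)
# Input matrix: neuron 0 receives +4 for a and -4 for b, the rest is small noise.
U = 0.1 * rng.standard_normal((m, 2))
U[0] = [4.0, -4.0]

def represent(word):
    h = np.zeros(m)
    for c in word:
        h = np.tanh(W @ h + U @ sigma[c])
    return h

# f_a(h) = h[0] is an affine separator for the suffix a: the pre-activation of
# neuron 0 is +4 (resp. -4) plus a recurrent term bounded by 1/2 in absolute
# value, so h[0] >= tanh(3.5) > 0 if and only if the last symbol was a.
words = ["".join(rng.choice(["a", "b"], size=t)) for t in range(1, 11)]
```

The same idea extends to longer suffixes, where a combination of several coordinates takes the role of $h_1$.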
\begin{thm}\label{thm:lr_rsm}
Let $\mathcal{A} = (\Phi, \Sigma, R, F)$ be an LR(1)-automaton with $\Phi \subset \mathbb{R}^n$ and
$\Sigma \subset \mathbb{R}^n$. Further, let
$\mathcal{X} = \{ x | (\bar s, x, j, A) \in R\}$, and let
$\mathcal{S} = \{ \bar s | (\bar s, x, j, A) \in R\}$.
Finally, let $\mathcal{R} = (\bm{U}, \bm{W}, \sigma)$ be a reservoir over $\Sigma \cup \Phi$ with
$n$ inputs and $m$ neurons such that $\mathcal{R}$ is a
$\bar w$-separator for all $\bar w \in \mathcal{X} \cup \mathcal{S}$ and such that
$\{ h_\mathcal{R}(\bar w) | \bar w \in (\Phi \cup\Sigma)^*, \bar w \notin F \} \cap
\{ h_\mathcal{R}(A) | A \in F\} = \emptyset$, i.e.\ the representations of accepting nonterminals
and all other words are disjoint.
Then, we can construct classifiers $c^\text{pop}$, $c^\text{push}$, $c^\text{shift}$,
and $c^\text{out}$, such that the reservoir stack machine
$(\bm{U}, \bm{W}, \sigma, c^\text{pop}, c^\text{push}, c^\text{shift}, c^\text{out})$
over $\Sigma$ and $\Phi$ simulates $\mathcal{A}$.
\begin{proof}
Per definition, $\mathcal{R}$ is a $\bar w$-separator for every word
$\bar w \in \mathcal{X} \cup \mathcal{S}$.
Accordingly, there must exist affine functions $f_{\bar w} : \mathbb{R}^m \to \mathbb{R}$,
such that for all $\bar u \in \Sigma^*$ it holds: $f_{\bar w}(h_\mathcal{R}(\bar u)) > 0$ if $\bar w$ is a
suffix of $\bar u$ and $f_{\bar w}(h_\mathcal{R}(\bar u)) \leq 0$ otherwise.
Now, we define the functions $c^\text{pop}$ and $c^\text{push}$ via the following procedure.
We iterate over the rules $(\bar s, x, j, A) \in R$ of the LR(1)-automaton
in ascending order. If $f_{\bar s}(\vec g_t) > 0$ and $f_x(\vec h_t) > 0$, we return
$c^\text{pop}(\vec h_t, \vec g_t) = j$ and $c^\text{push}(\vec h_t, \vec g_t) = A$.
If this does not occur for any rule, we return
$c^\text{pop}(\vec h_t, \vec g_t)= 0$ and $c^\text{push}(\vec h_t, \vec g_t) = \vec 0$.
We further define $c^\text{shift}(\vec h_t, \vec g_t)$ as constant $1$.
Finally, we define $c^\text{out}(\vec h_t, \vec g_t)$ as $1$ if
$\vec g_t \in \{ h_\mathcal{R}(A) | A \in F\}$ and as $0$ otherwise. Note that all four functions
are well-defined due to our separation requirements on the reservoir.
Our claim is now that the stack $\mathcal{S}$ when arriving at line 25 in Algorithm~\ref{alg:rsm}
is the same as the stack $\mathcal{S}$ when arriving at line 8 in Algorithm~\ref{alg:parse}.
We prove this by induction over the number of inner loop iterations.
If no iterations have occurred, the stack is empty in both cases. Now, assume that the stack
has a certain state $\mathcal{S}$ in both Algorithm~\ref{alg:parse} and Algorithm~\ref{alg:rsm}
and we now enter line 4 in Algorithm~\ref{alg:parse} and line 8 in Algorithm~\ref{alg:rsm},
respectively.
First, consider the case that the $\exists$ quantifier in line 4 of Algorithm~\ref{alg:parse}
finds no matching rule, i.e.\ there is no rule $(\bar s, x, j, A) \in R$ such that the current
stack $\mathcal{S}$ has the suffix $\bar s$ and the current input symbol $y$ equals $x$ (or $x = \varepsilon$).
In that case, $f_{\bar s}(\vec g_t) \leq 0$ or $f_x(\vec h_t) \leq 0$, which implies
$c^\text{pop}(\vec h_t, \vec g_t) = 0$ and $c^\text{push}(\vec h_t, \vec g_t) = \vec 0$. Accordingly,
lines 9-20 leave the stack unchanged and the condition in line 21 applies, such that we break
out of the loop. Equivalently, the loop in Algorithm~\ref{alg:parse} stops.
Second, consider the case that at least one rule matches in line 4 of Algorithm~\ref{alg:parse}.
In that case, the first matching rule $(\bar s, x, j, A)$ is selected (by definition in the
Algorithm description). Accordingly, the current stack $\mathcal{S}$ has the suffix $\bar s$
and the current input symbol $y$ equals $x$ or $x = \varepsilon$. By virtue of the separation
conditions on our reservoir, $f_{\bar s}(\vec g_t) > 0$ and $f_x(\vec h_t) > 0$. Therefore,
by our definition of $c^\text{pop}$ and $c^\text{push}$ above, we obtain
$c^\text{pop}(\vec h_t, \vec g_t) = j$ and $c^\text{push}(\vec h_t, \vec g_t) = A$.
Next, note that lines 10 and 18 of Algorithm~\ref{alg:rsm} update the stack just as lines 5-6 in
Algorithm~\ref{alg:parse} do and that lines 11-14 as well as line 19 update the stack state $\vec g_t$
accordingly.
Finally, note that the condition in line 26 of Algorithm~\ref{alg:rsm} is always fulfilled because we defined
$c^\text{shift}(\vec h_t, \vec g_t) = 1$. Hence, line 27 of Algorithm~\ref{alg:rsm} updates the stack
just as line 8 in Algorithm~\ref{alg:parse} does and line 25 updates the stack representation
accordingly.
In conclusion, at every iteration of the computation, the stacks of Algorithms~\ref{alg:parse} and~\ref{alg:rsm}
are the same. It only remains to show that the last output $\vec y_{T+1}$ is the same as
$\mathcal{A}(\bar w)$. If $\mathcal{A}(\bar w) = 1$, this means that the stack at the end of the computation
in Algorithm~\ref{alg:parse} was $\mathcal{S} = A\#$ for some accepting nonterminal $A \in F$.
For Algorithm~\ref{alg:rsm}, this implies that the stack before the last push operation
in line 24 must have been $\mathcal{S} = A$. Hence,
$c^\text{out}(\vec h_{T+1}, \vec g_{T+1}) = 1$. Conversely, if $\mathcal{A}(\bar w) = 0$, the stack must have
been something else, and hence $c^\text{out}(\vec h_{T+1}, \vec g_{T+1}) = 0$.
\end{proof}
\end{thm}
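The first-match iteration that defines $c^\text{pop}$ and $c^\text{push}$ in the proof can be sketched in a few lines. The fragment below is our own illustration for the Dyck1 rules from Table~\ref{tab:lrs}: it uses exact suffix and lookahead tests as stand-ins for the affine separators $f_{\bar w}$, which in the actual RSM would be evaluated on the reservoir states $\vec h_t$ and $\vec g_t$.

```python
# Dyck1 rules (stack suffix, lookahead or "" for epsilon, pop count, push
# symbol), in the order in which they are tried.
RULES = [("SS", "", 2, "S"), ("(S)", "", 3, "S"), ("(", ")", 0, "S")]

def c_pop_push(stack, lookahead):
    """First-match iteration over the rules, as in the proof: return (j, A)
    for the first rule whose suffix test and lookahead test both fire,
    and (0, None) if no rule matches."""
    for suffix, x, j, A in RULES:
        if stack.endswith(suffix) and (x == "" or x == lookahead):
            return j, A
    return 0, None
```

For example, a stack ending in \texttt{SS} triggers the first reduction regardless of the lookahead, while a single \texttt{(} on the stack only triggers a push when the lookahead is \texttt{)}.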
We note that we assume linear separability of single suffixes, but we may still require nonlinear
classifiers for $c^\text{pop}$, $c^\text{push}$, and $c^\text{out}$ because these are defined over unions of suffixes,
which in turn may yield nonlinear decision boundaries. As such, we recommend nonlinear classifiers in practice,
such as radial basis function SVMs.
This concludes our theory chapter. We now know that RMMs cannot recognize LR(1) languages in general,
but RSMs can. Our next step is to describe how we can train an RSM.
\subsection{Training}
\label{sec:training}
\begin{algorithm}
\caption{An algorithm to re-order training data from a $5$-tuple
$(\bar x, \bm{J}, \bm{A}, \vec \rho, \bm{Y})$ of inputs $\bar x \in \Sigma^T$,
desired pop actions $\bm{J} \in \{0, \ldots, J\}^{T+1 \times M}$,
desired push actions $\bm{A} \in \Phi^{T+1 \times M}$,
desired shift actions $\vec \rho \in \{0, 1\}^{T+1}$, and
desired outputs $\bm{Y} \in \mathbb{R}^{T+1 \times L}$
into input-output pairs for training the functions
$c^\text{pop}$, $c^\text{push}$, $c^\text{shift}$, and $c^\text{out}$.
We assume that a reservoir $(\bm{U}, \bm{W}, \sigma)$
with $\bm{U} \in \mathbb{R}^{m \times (n + K)}$, $\bm{W} \in \mathbb{R}^{m \times m}$, and $\sigma : \mathbb{R} \to \mathbb{R}$ is given.}
\label{alg:training}
\begin{algorithmic}[1]
\Function{RSM-train}{Input training data $(\bar x, \bm{J}, \bm{A}, \vec \rho, \bm{Y})$}
\State Initialize $H^\text{pop/push}$, $Y^\text{pop}$, and $Y^\text{push}$ as empty lists.
\State Initialize an empty stack $\mathcal{S}$. Initialize $\vec h_0 \gets \vec 0$ and $\vec g_1 \gets \vec 0$.
\For{$t \gets \{1, \ldots, T+1\}$}
\State $\vec h_t \gets \sigma\Big(\bm{U} \cdot \vec x_t + \bm{W} \cdot \vec h_{t-1}\Big)$.
\For{$\tau \gets \{1, \ldots, M\}$}
\State Append $(\vec h_t, \vec g_t)$ to $H^\text{pop/push}$.
\State Append $j_{t, \tau}$ to $Y^\text{pop}$ and $\vec a_{t, \tau}$ to $Y^\text{push}$.
\State Pop $j_{t, \tau}$ elements from $\mathcal{S}$.
\If{$\vec a_{t, \tau} \neq \vec 0$}
\State Push $\vec a_{t, \tau}$ onto $\mathcal{S}$.
\EndIf
\State $\vec g_t \gets \vec 0$.
\For{$\vec s \gets $ elements of $\mathcal{S}$}
\State $\vec g_t \gets \sigma\Big(\bm{U} \cdot \vec s + \bm{W} \cdot \vec g_t \Big)$.
\EndFor
\If{$j_{t, \tau} = 0$ and $\vec a_{t, \tau} = \vec 0$}
\State \textbf{Break}.
\EndIf
\EndFor
\If{$\rho_t > 0$}
\State Push $\vec x_t$ onto $\mathcal{S}$.
\State $\vec g_{t+1} \gets \sigma\Big(\bm{U} \cdot \vec x_t + \bm{W} \cdot \vec g_t \Big)$.
\Else
\State $\vec g_{t+1} \gets \vec g_t$.
\EndIf
\EndFor
\State Convert $H^\text{pop/push}$, $Y^\text{pop}$, and $Y^\text{push}$ to matrices.
\State Set up a $(T+1) \times 2m$ matrix $\bm{H}$ with rows $(\vec h_t, \vec g_t)$.
\State \Return $(\bm{H}^\text{pop/push}, \bm{Y}^\text{pop})$, $(\bm{H}^\text{pop/push}, \bm{Y}^\text{push})$, $(\bm{H}, \vec \rho)$, $(\bm{H}, \bm{Y})$.
\EndFunction
\end{algorithmic}
\end{algorithm}
In this section, we describe how to train an RSM from example data. As training data, we require
$5$-tuples of the form $(\bar x, \bm{J}, \bm{A}, \vec \rho, \bm{Y})$ where $\bar x \in \Sigma^T$ is a sequence
of $T$ input vectors for some $T$, $\bm{J} \in \{0, \ldots, J\}^{T+1 \times M}$ is a matrix of desired pop actions for some $M$,
$\bm{A} \in \Phi^{T+1 \times M}$ is a tensor of desired push actions,
$\vec \rho \in \{0, 1\}^{T+1}$ is a vector of desired shift actions, and
$\bm{Y} \in \mathbb{R}^{T+1 \times L}$ is a matrix of desired outputs. In other words, our training process
requires ground truth example data for the correct pop, push, and shift behavior in addition to the
desired outputs. However, our model is able to generalize the behavior beyond example data as we will
see later in the experiments.
If we want to simulate the behavior of a known LR(1)-automaton, the training data is simple to generate:
We merely have to execute Algorithm~\ref{alg:parse} and record the rules that are applied, which
directly yield the desired pop and push actions. The desired shift action is constant $1$ and the
desired output is $1$ whenever the stack is $\mathcal{S} = A$ for an accepting nonterminal $A \in F$
and $0$ otherwise. Importantly, we only need to know the behavior of the automaton on short training
examples to learn the general stack interaction from demonstration.
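As an illustration, the following sketch (our own minimal re-implementation for the Dyck1 rules, not the code of Algorithm~\ref{alg:parse} itself) runs the reduce/shift loop on a word and records the pop and push actions that would serve as training targets; the shift action is constant $1$ and is therefore not recorded:

```python
RULES = [("SS", "", 2, "S"), ("(S)", "", 3, "S"), ("(", ")", 0, "S")]
ACCEPTING = {"S"}

def parse_and_record(word):
    """Run the LR(1) reduce/shift loop and record, per time step, the list of
    (pop count, pushed nonterminal) actions. The sentinel None plays the role
    of the end-of-input position t = T + 1 (no shift there)."""
    stack, actions = "", []
    for y in list(word) + [None]:
        step_actions = []
        while True:
            for suffix, x, j, A in RULES:
                if stack.endswith(suffix) and (x == "" or x == y):
                    stack = stack[:len(stack) - j] + A  # pop j, push A
                    step_actions.append((j, A))
                    break
            else:
                step_actions.append((0, None))  # no rule matched: stop reducing
                break
        actions.append(step_actions)
        if y is not None:
            stack += y  # the shift action is constant 1
    return stack in ACCEPTING, actions
```

Running this on short words such as \texttt{()} directly yields the matrices $\bm{J}$ and $\bm{A}$ of desired pop and push actions for the training tuples described above.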
If we do not have an LR(1)-automaton available to generate the training data, we require some other
heuristic, which is dependent on the dataset. In our copy experiments, for example, we use pop and push
actions to normalize the length of the stack at crucial points of the computation, thus making memory
recall much simpler compared to a regular recurrent neural network.
Once training data in the form of these tuples is constructed, we apply Algorithm~\ref{alg:training} to
re-order this training data into input-output pairs for each of the functions
$c^\text{pop}$, $c^\text{push}$, $c^\text{shift}$, and $c^\text{out}$. This algorithm is a variation
of Algorithm~\ref{alg:rsm}, controlled via teacher forcing. Finally, once these input-output pairs
are collected, we can train $c^\text{pop}$, $c^\text{push}$, $c^\text{shift}$, and $c^\text{out}$
via any classification or regression scheme. In this paper, we opt for classic radial basis
function support vector machines (RBF-SVMs)\footnote{The only exception is $c^\text{out}$ in the copy
tasks, where a linear regression is sufficient.} with automatically chosen kernel width as implemented
in scikit-learn \cite{Sklearn}.
Because the objective of RBF-SVMs is convex, the training is fast and globally optimal.
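Concretely, each of the four functions is fitted on the input-output pairs returned by Algorithm~\ref{alg:training}. A minimal scikit-learn sketch could look as follows; the matrices here are random stand-ins for the collected state pairs, not real training data:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(2)

# Stand-ins for the re-ordered training pairs: rows are concatenated states
# (h_t, g_t), labels are the desired pop counts (for c_pop).
H = np.concatenate([rng.normal(-2.0, 0.1, size=(50, 4)),
                    rng.normal(+2.0, 0.1, size=(50, 4))])
y = np.concatenate([np.zeros(50, dtype=int), 3 * np.ones(50, dtype=int)])

# RBF-SVM with automatically chosen kernel width (gamma="scale"), as used
# for the classifier functions in the paper.
c_pop = SVC(kernel="rbf", gamma="scale").fit(H, y)
```

The other three functions are trained in exactly the same way on their respective pairs, with $c^\text{out}$ replaced by a linear (ridge) regression where noted.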
\section{Experiments}
\label{sec:experiments}
We evaluate reservoir stack machines (RSMs) on three benchmark tasks for Neural Turing Machines (NTMs)
\cite{NTM_impl} as well as six context-free languages. In more detail, the three NTM benchmark tasks
are:
\paragraph{latch:} Both input and output are binary and one-dimensional and the
output should jump from zero to one and back whenever there is a one on the input. Equivalently, this
task can be expressed via the regular expression $(0^*10^*1)^*0^*10^*$, i.e.\ we accept any word
that contains an odd number of ones. We sample training words up to length $50$ and test words
starting from length $50$ from a probabilistic context-free grammar%
\footnote{For the precise parameters and experimental details, refer to
\url{https://gitlab.com/bpaassen/reservoir_stack_machines}.}.
\paragraph{copy:} The input is a sequence of 8 random bits per time step, between 1 and 20 time
steps long, followed by a special end-of-sequence token on an extra channel
(this channel is $-1$ during the input sequence).
The output should be zero up to and including the end-of-sequence token, after which the output
should be a copy of the input signal.
\paragraph{repeat copy:} The same as copy but where the input sequence has length up to 10 and
the end-of-sequence token can occur up to 10 times (up to 20 times in the testing data).
After each token, the input sequence should be copied again.
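The copy data can be generated in a few lines. The sketch below is our own; the channel layout (8 bit channels plus one marker channel) follows the description above, and we assume the marker channel is $0$ after the end-of-sequence token:

```python
import numpy as np

def make_copy_example(T, rng):
    """One copy-task example: T steps of 8 random bits, one end-of-sequence
    step, then T output steps in which the bits must be reproduced."""
    bits = rng.integers(0, 2, size=(T, 8)).astype(float)
    X = np.zeros((2 * T + 1, 9))
    X[:T, :8] = bits
    X[:T, 8] = -1.0          # marker channel is -1 during the input
    X[T, 8] = 1.0            # end-of-sequence token
    Y = np.zeros((2 * T + 1, 8))
    Y[T + 1:] = bits         # copy the input after the token
    return X, Y

X, Y = make_copy_example(5, np.random.default_rng(3))
```

The repeat copy data follows the same pattern, with the token step and output block repeated once per occurrence of the end-of-sequence token.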
\begin{figure}
\begin{center}
\begin{tikzpicture}
\begin{scope}[shift={(-5,-1.62)}]
\begin{axis}[title={latch task},xlabel={$t$},ylabel={amplitude}, width=5cm, height=4.62cm,
xmin=0, ymax=1.8, legend pos={north east}, legend cell align={left},
ytick={0,1}]
\addplot[class1color, thick] table[x=time,y=x] {latch_example.csv};
\addlegendentry{input}
\addplot[thick, class2color, densely dashed] table[x=time,y=y] {latch_example.csv};
\addlegendentry{output}
\end{axis}
\end{scope}
\begin{groupplot}[view={0}{90}, xlabel={$t$}, ymin=0, ymax=8,
group style={group size=2 by 2,
x descriptions at=edge bottom,y descriptions at=edge left,
horizontal sep=0.7cm, vertical sep=0.2cm},
width=4cm, height=3cm]
\nextgroupplot[title={copy task},ymax=9,width=3.2cm, ylabel={input},
colormap={tango}{color(0cm)=(orange1); color(1cm)=(white); color(2cm)=(skyblue3)}]
\addplot3[surf,mesh/ordering=y varies,mesh/rows=10,shader=flat corner] file {copy_example_input.csv};
\nextgroupplot[title={repeat copy task},ymax=9,
colormap={tango}{color(0cm)=(orange1); color(1cm)=(white); color(2cm)=(skyblue3)}, colorbar]
\addplot3[surf,mesh/ordering=y varies,mesh/rows=10,shader=flat corner] file {repeat_copy_example_input.csv};
\nextgroupplot[width=3.2cm, ylabel={output},
colormap={tango}{color(0cm)=(white); color(1cm)=(skyblue3)}]
\addplot3[surf,mesh/ordering=y varies,mesh/rows=9,shader=flat corner] file {copy_example_output.csv};
\nextgroupplot[colormap={tango}{color(0cm)=(white); color(1cm)=(skyblue3)}]
\addplot3[surf,mesh/ordering=y varies,mesh/rows=9,shader=flat corner] file {repeat_copy_example_output.csv};
\end{groupplot}
\end{tikzpicture}
\vspace{-0.7cm}
\end{center}
\caption{Example input and output sequences for the three NTM benchmark data sets.}
\label{fig:data}
\end{figure}
For a graphical illustration of the three datasets, refer to Figure~\ref{fig:data}.
For the latch task, we use the LR(1)-rules $(S0, \varepsilon, 2, S)$, $(S1, \varepsilon, 2, A)$,
$(A0, \varepsilon, 2, A)$, $(A1, \varepsilon, 2, S)$, $(0, \varepsilon, 1, A)$ with the accepting
nonterminal $S$ to generate the desired stack behavior for the training data.
For the copy task, we use the following heuristic: We always shift the input onto the stack,
but we do not use pop or push until the end-of-sequence token appears. At that point, we fill up the
stack with a placeholder nonterminal $S$ until the stack has length $20$. Then, we continue.
By this construction, the output can be constructed by simply copying the $20$th stack element to the
output, i.e.\ the reservoir must be rich enough such that the $20$th stack element can be reconstructed
via $c^\text{out}$ from $\vec g_t$. For this purpose, we use linear ridge regression as implemented in sklearn
(with the $\alpha$ hyperparameter being part of the hyperparameter optimization)
because a linear operator is provably sufficient to reconstruct past inputs, at least for Legendre
reservoirs \cite{LMU}.
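The Legendre delay network reservoir can be sketched as follows. The matrices follow the construction of \citet{LMU}; the Euler discretization, scalings, and delay used here are our own illustrative choices, and the least-squares readout stands in for the ridge regression used for $c^\text{out}$:

```python
import numpy as np

def ldn_matrices(q, theta):
    """Continuous-time LDN matrices (following Voelker et al.):
    dm/dt = A m + B u compresses the last theta seconds of u onto the
    first q Legendre polynomials."""
    A = np.zeros((q, q))
    B = np.zeros(q)
    for i in range(q):
        B[i] = (2 * i + 1) * (-1) ** i
        for j in range(q):
            A[i, j] = (2 * i + 1) * (-1 if i < j else (-1) ** (i - j + 1))
    return A / theta, B / theta

q, theta, dt, T = 6, 1.0, 0.01, 500
A, B = ldn_matrices(q, theta)

rng = np.random.default_rng(4)
u = rng.standard_normal(T)

# Simple Euler integration of the state.
M = np.zeros((T, q))
m = np.zeros(q)
for t in range(T):
    m = m + dt * (A @ m + B * u[t])
    M[t] = m

# Least-squares readout for the input delayed by theta/2 (here 50 steps).
d = 50
H = np.hstack([M[d:], np.ones((T - d, 1))])   # states plus bias column
w, *_ = np.linalg.lstsq(H, u[:T - d], rcond=None)
pred = H @ w
```

Because past inputs are (approximately) linear functions of the Legendre state, a linear readout suffices here, which is exactly the property exploited for the copy tasks.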
For repeat copy, we use the same scheme, except for one difference:
If a second end-of-sequence token appears, we pop elements from the stack until no
end-of-sequence token is on the stack anymore and then continue as before.
\begin{table}
\begin{center}
\begin{scriptsize}
\begin{tabular}{llllll}
Dyck1 & Dyck2 & Dyck3 & $a^nb^n$ & Palindromes & JSON \\
\cmidrule(lr){1-3} \cmidrule(lr){4-4} \cmidrule(lr){5-5}\cmidrule(lr){6-6}
$(\nt{SS}, \trm{\varepsilon}, 2, \nt{S})$ & $(\nt{SS}, \trm{\varepsilon}, 2, \nt{S})$ & $(\nt{SS}, \trm{\varepsilon}, 2, \nt{S})$
& $(\trm{a}\nt{S}\trm{b}, \trm{\varepsilon}, 3, \nt{S})$ & $(\trm{a}\nt{S}\trm{a}, \trm{\varepsilon}, 3, \nt{S})$ & $(\trm{\{\}}, \trm{\varepsilon}, 2, \nt{V})$ \\
$(\trm{(}\nt{S}\trm{)}, \trm{\varepsilon}, 3, \nt{S})$ & $(\trm{(}\nt{S}\trm{)}, \trm{\varepsilon}, 3, \nt{S})$ & $(\trm{(}\nt{S}\trm{)}, \trm{\varepsilon}, 3, \nt{S})$ &
$(\trm{a}, \trm{b}, 0, \nt{S})$ & $(\trm{b}\nt{S}\trm{b}, \trm{\varepsilon}, 3, \nt{S})$ & $(\trm{[]}, \trm{\varepsilon}, 2, \nt{V})$ \\
& $(\trm{[}\nt{S}\trm{]}, \trm{\varepsilon}, 3, \nt{S})$ & $(\trm{[}\nt{S}\trm{]}, \trm{\varepsilon}, 3, \nt{S})$ &
& $(\trm{\$}, \trm{\varepsilon}, 1, \nt{S})$ & $(\trm{\{}\nt{O}\trm{\}}, \trm{\varepsilon}, 3, \nt{V})$\\
& & $(\trm{\{}\nt{S}\trm{\}}, \trm{\varepsilon}, 3, \nt{S})$ & & & $(\trm{[}\nt{A}\trm{]}, \trm{\varepsilon}, 3, \nt{V})$ \\
$(\trm{(}, \trm{)}, 0, \nt{S})$ & $(\trm{(}, \trm{)}, 0, \nt{S})$ & $(\trm{(}, \trm{)}, 0, \nt{S})$ & & & $(\trm{n}, \trm{\varepsilon}, 1, \nt{V})$ \\
& $(\trm{[}, \trm{]}, 0, \nt{S})$ & $(\trm{[}, \trm{]}, 0, \nt{S})$ & & & $(\trm{s}, \trm{\varepsilon}, 1, \nt{V})$ \\
& & $(\trm{\{}, \trm{\}}, 0, \nt{S})$ & & & $\trm{(k : }\nt{V}\trm{,} \nt{O}, \trm{\varepsilon}, 5, \nt{O})$ \\
& & & & & $(\trm{k :} \nt{V}, \trm{\}}, 3, \nt{O})$ \\
& & & & & $(\nt{V}\trm{,} \nt{A}, \trm{\varepsilon}, 3, \nt{A})$ \\
& & & & & $(\nt{V}, \trm{]}, 1, \nt{A})$ \\
\cmidrule(lr){1-3} \cmidrule(lr){4-4} \cmidrule(lr){5-5}\cmidrule(lr){6-6}
$\trm{()(())}$ & $\trm{([])[]}$ & $\trm{(\{\})[]}$ & $\trm{aaabbb}$ & $\trm{ab\$ba}$ & $\trm{\{k : [n, n], k : s\}}$ \\
\end{tabular}
\end{scriptsize}
\end{center}
\caption{The list of rules for the LR(1)-automata of all six language data sets and an example word
from each language. Terminal symbols are colored orange, nonterminals blue.}
\label{tab:lrs}
\end{table}
Our six language data sets are:
\paragraph{Dyck1/2/3:} Deterministic context-free languages of balanced bracket pairs with one, two,
and three different kinds of brackets respectively, as suggested by \citet{DeepStacks}.
\paragraph{$a^nb^n$:} The language $\mathcal{L}_{a^nb^n}$ from Equation~\ref{eq:anbn}.
\paragraph{Palindrome:} The language of palindromes over the letters $a$ and $b$ with a $\$$
symbol in the center from Equation~\ref{eq:palindromes}.
\paragraph{JSON:} A simplified version of the JavaScript Object Notation (JSON;
\url{https://www.json.org}), where we represent numbers with the symbol $n$, strings with the
symbol $s$, and keys with the symbol $k$.
The rules of the LR(1)-automata for all six languages are shown in
Table~\ref{tab:lrs}, including an example word from each language. The accepting nonterminals of
the automata are $\{S\}$ for the first five and $\{V\}$ for the last automaton. It is easy to show
that none of these languages is Chomsky-3 by using the pumping lemma \cite{Pumping,TheoInf}.
For each language task, our desired output is $y_t = 1$ if the word up to $t$ is in the language
and $y_t = 0$ otherwise.
We use the ground-truth LR(1)-automaton to annotate the training data with
the desired stack behavior. However, we sample the training words up to length $50$ and
the evaluation words starting from length $50$ as suggested by \citet{DeepStacks},
thus demonstrating that our model can generalize beyond the shown examples.
We sample the words from a probabilistic context-free grammar for the
respective language\footnote{For the precise parameters and experimental details, refer to
\url{https://gitlab.com/bpaassen/reservoir_stack_machines}.}.
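Sampling from such a probabilistic grammar is straightforward. As a sketch (with a hypothetical expansion probability, not the parameters used in our experiments), Dyck1 words can be drawn as follows:

```python
import random

def sample_dyck1(rng, p=0.4, max_depth=20):
    """Sample a word from the PCFG with rules S -> (S)S (probability p) and
    S -> epsilon (probability 1 - p); the depth bound guarantees termination."""
    if max_depth == 0 or rng.random() > p:
        return ""
    return "(" + sample_dyck1(rng, p, max_depth - 1) + ")" + \
           sample_dyck1(rng, p, max_depth - 1)

rng = random.Random(5)
words = [sample_dyck1(rng) for _ in range(100)]
```

Every word produced this way is balanced by construction, and the expansion probability controls the length distribution of the sampled words.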
In all experiments, we sample $100$ random sequences for training and another $100$ sequences for
testing. To obtain statistics, we perform $10$ repeats of every experiment.
For hyper-parameter optimization, we sample $20$ random sets of hyperparameters\footnotemark[3], each
evaluated on $3$ separate datasets of $100$ training sequences and $100$ test sequences.
We evaluate three kinds of reservoir for the RSM, namely random Gaussian numbers (rand),
a cycle reservoir with jumps (CRJ) \cite{CRJ}, and a Legendre delay network (LDN) \cite{LMU}. As a baseline,
we compare against a standard echo state network \cite{ESN}, which attempts to predict the
desired output via linear regression from the reservoir state $\vec h_t$. We expect that these networks should
fail because they do not have the computational power to recognize languages that require arbitrarily
long memory \cite{DMM}. For all reservoirs, we used $256$
neurons to maintain equal representational power between models\footnote{We observed that $256$ neurons
were insufficient for the copy task. In this case, we increased the number of neurons to $512$.
Further, we report reference results for ESNs with $1024$ neurons in the appendix.}.
We also include three baselines from deep learning, namely a gated recurrent unit (GRU) \cite{GRU},
the stack recurrent neural network (SRNN) \cite{DeepStacks}, and a deep version of our reservoir
stack machine (DSM), where we replace the reservoir with a GRU. For GRUs, we used the implementation
provided by pyTorch \cite{PyTorch} and for SRNNs the reference implementation of Suzgun et
al.\footnote{\url{https://github.com/suzgunmirac/marnns/blob/master/models/rnn_models.py}}.
We trained all deep models with Adam \cite{Adam} with a learning rate of $10^{-3}$, weight
decay of $10^{-8}$, and $10,000$ epochs, where we processed a single word in each epoch.
As for the reservoir models, we used $256$ neurons for each model in a single recurrent
layer.
Note that we do \emph{not} compare against reservoir memory machines (RMMs) \cite{RMM2} because there
is no strategy to generate training data for the LR(1) language datasets (refer to
Theorem~\ref{thm:rmm}). For the latch, copy, and repeat copy datasets, it has already
been shown that RMMs achieve zero error \cite{RMM2}.
We performed all experiments on a consumer-grade 2017 laptop with Intel i7 CPU.
\begin{table}
\caption{The mean absolute error ($\pm$ std.) on the test data for all models
for the NTM benchmark datasets. All results with mean and standard deviation below $10^{-2}$
are bold-faced.}
\label{tab:benchmark_errors}
\begin{center}
\begin{tabular}{lccc}
model & {latch} & {copy} & {repeat copy} \\
\cmidrule(lr){1-1} \cmidrule(lr){2-4}
rand-ESN & $0.43 \pm 0.02$ & $0.23 \pm 0.00$ & $0.36 \pm 0.01$ \\
CRJ-ESN & $0.40 \pm 0.02$ & $0.20 \pm 0.01$ & $0.34 \pm 0.01$ \\
LDN-ESN & $0.42 \pm 0.01$ & $0.22 \pm 0.00$ & $0.30 \pm 0.01$ \\
\cmidrule(lr){1-1} \cmidrule(lr){2-4}
GRU & $\bm{0.00 \pm 0.00}$ & $0.22 \pm 0.00$ & $0.32 \pm 0.01$ \\
SRNN & $0.30 \pm 0.03$ & $0.21 \pm 0.01$ & $0.36 \pm 0.01$ \\
DSM & $\bm{0.00 \pm 0.00}$ & $0.23 \pm 0.01$ & $0.33 \pm 0.02$ \\
\cmidrule(lr){1-1} \cmidrule(lr){2-4}
rand-RSM & $\bm{0.00 \pm 0.00}$ & $0.36 \pm 0.03$ & $0.40 \pm 0.01$ \\
CRJ-RSM & $0.35 \pm 0.02$ & $0.22 \pm 0.00$ & $0.08 \pm 0.01$ \\
LDN-RSM & $\bm{0.00 \pm 0.00}$ & $\bm{0.00 \pm 0.00}$ & $\bm{0.00 \pm 0.00}$ \\
\end{tabular}
\end{center}
\end{table}
\begin{table}
\caption{The mean absolute error ($\pm$ std.) on the test data for all models
for the language datasets. All results with mean and standard deviation below $10^{-2}$
are bold-faced.}
\label{tab:errors}
\begin{center}
\begin{scriptsize}
\begin{tabular}{lcccccc}
model & {Dyck1} & {Dyck2} & {Dyck3} & {$a^nb^n$} & {Palindrome} & {JSON} \\
\cmidrule(lr){1-1} \cmidrule(lr){2-7}
rand-ESN & $0.13 \pm 0.02$ & $0.23 \pm 0.05$ & $0.28 \pm 0.06$ & $\bm{0.00 \pm 0.00}$ & $\bm{0.00 \pm 0.00}$ & $\bm{0.01 \pm 0.00}$ \\
CRJ-ESN & $0.11 \pm 0.01$ & $0.12 \pm 0.02$ & $0.14 \pm 0.03$ & $\bm{0.00 \pm 0.00}$ & $\bm{0.01 \pm 0.00}$ & $0.01 \pm 0.00$ \\
LDN-ESN & $0.17 \pm 0.02$ & $0.23 \pm 0.02$ & $0.26 \pm 0.03$ & $\bm{0.00 \pm 0.00}$ & $0.07 \pm 0.01$ & $0.10 \pm 0.01$ \\
\cmidrule(lr){1-1} \cmidrule(lr){2-7}
GRU & $0.04 \pm 0.01$ & $0.05 \pm 0.01$ & $0.06 \pm 0.01$ & $\bm{0.00 \pm 0.00}$ & $\bm{0.00 \pm 0.00}$ & $\bm{0.00 \pm 0.00}$ \\
SRNN & $0.02 \pm 0.03$ & $\bm{0.00 \pm 0.00}$ & $0.03 \pm 0.03$ & $\bm{0.00 \pm 0.00}$ & $0.05 \pm 0.14$ & $0.08 \pm 0.16$ \\
DSM & $\bm{0.00 \pm 0.00}$ & $\bm{0.00 \pm 0.00}$ & $\bm{0.00 \pm 0.00}$ & $\bm{0.00 \pm 0.00}$ & $\bm{0.00 \pm 0.00}$ & $\bm{0.00 \pm 0.00}$ \\
\cmidrule(lr){1-1} \cmidrule(lr){2-7}
rand-RSM & $\bm{0.00 \pm 0.00}$ & $\bm{0.00 \pm 0.00}$ & $\bm{0.00 \pm 0.00}$ & $\bm{0.00 \pm 0.00}$ & $\bm{0.00 \pm 0.00}$ & $\bm{0.00 \pm 0.00}$ \\
CRJ-RSM & $\bm{0.00 \pm 0.00}$ & $\bm{0.00 \pm 0.00}$ & $\bm{0.00 \pm 0.00}$ & $\bm{0.00 \pm 0.00}$ & $\bm{0.00 \pm 0.00}$ & $\bm{0.00 \pm 0.00}$ \\
LDN-RSM & $\bm{0.00 \pm 0.00}$ & $\bm{0.00 \pm 0.00}$ & $\bm{0.00 \pm 0.00}$ & $\bm{0.00 \pm 0.00}$ & $\bm{0.00 \pm 0.00}$ & $\bm{0.00 \pm 0.00}$ \\
\end{tabular}
\end{scriptsize}
\end{center}
\end{table}
Tables~\ref{tab:benchmark_errors} and~\ref{tab:errors} report the mean absolute error across
data sets and models. We observe that the reservoir stack machine with
Legendre delay reservoir (LDN-RSM) achieves almost zero error (bold-faced) on all datasets.
While other reservoirs succeed on the language tasks, they fail on the copy and repeat copy task.
This is to be expected because these tasks require a lossless reconstruction of past
states from the stack, which the Legendre delay network is designed for \cite{LMU}.
Echo state networks without a stack fail almost all tasks, except for the
$a^nb^n$, palindrome, and JSON languages. This corresponds to the theoretical findings of
\citet{DMM}, indicating that ESNs cannot recognize general regular languages, let alone deterministic
context-free languages. That some languages can still be solved is likely because
all test sequences stay within the memory capacity of the model.
Interestingly, the deep models also fail for many tasks, especially the copy and repeat copy task.
This corresponds to prior findings of \citet{NTM_impl} that these tasks likely require tens of thousands
of unique sequences to be learned in a memory-augmented neural network, whereas we only present $100$
distinct sequences. Even for the language tasks, Stack-RNNs often fail, indicating that the correct
stack behavior is not easy to learn. Indeed, even the deep variation of our model (DSM), which receives
the same amount of training data as the reservoir stack machine, fails on the copy and repeat copy task,
indicating that these tasks are not trivial to learn.
\begin{table}
\caption{The mean runtime ($\pm$ std.) in seconds as measured by Python's \texttt{time}
function for all models on the NTM benchmark datasets.}
\label{tab:benchmark_runtimes}
\begin{center}
\begin{tabular}{lccc}
model & {latch} & {copy} & {repeat copy} \\
\cmidrule(lr){1-1} \cmidrule(lr){2-4}
rand-ESN & $1.17 \pm 0.12$ & $0.48 \pm 0.01$ & $0.32 \pm 0.01$ \\
CRJ-ESN & $0.76 \pm 0.11$ & $0.11 \pm 0.01$ & $0.18 \pm 0.01$ \\
LDN-ESN & $1.71 \pm 0.19$ & $0.19 \pm 0.01$ & $0.23 \pm 0.01$ \\
\cmidrule(lr){1-1} \cmidrule(lr){2-4}
GRU & $97.1 \pm 4.1$ & $108.3 \pm 12.2$ & $191.5 \pm 74.4$ \\
SRNN & $295.0 \pm 12.4$ & $314.5 \pm 24.4$ & $536.3 \pm 113.5$ \\
DSM & $619.3 \pm 20.3$ & $548.9 \pm 30.3$ & $1045.4 \pm 211.6$ \\
\cmidrule(lr){1-1} \cmidrule(lr){2-4}
rand-RSM & $10.25 \pm 0.62$ & $5.20 \pm 0.14$ & $9.61 \pm 0.65$ \\
CRJ-RSM & $51.27 \pm 5.16$ & $4.11 \pm 0.14$ & $7.78 \pm 0.38$ \\
LDN-RSM & $26.74 \pm 2.04$ & $3.36 \pm 0.13$ & $10.25 \pm 0.46$ \\
\end{tabular}
\end{center}
\end{table}
\begin{table}
\caption{The mean runtime ($\pm$ std.) in seconds as measured by Python's \texttt{time}
function for all models on the language datasets.}
\label{tab:runtimes}
\begin{center}
\begin{scriptsize}
\begin{tabular}{lcccccc}
model & {Dyck1} & {Dyck2} & {Dyck3} & {$a^nb^n$} & {Palindrome} & {JSON} \\
\cmidrule(lr){1-1} \cmidrule(lr){2-7}
rand-ESN & $1.68 \pm 0.30$ & $1.78 \pm 0.13$ & $1.93 \pm 0.40$ & $0.89 \pm 0.04$ & $1.29 \pm 0.29$ & $1.01 \pm 0.06$ \\
CRJ-ESN & $1.27 \pm 0.32$ & $0.88 \pm 0.13$ & $0.98 \pm 0.21$ & $0.38 \pm 0.02$ & $0.61 \pm 0.16$ & $0.80 \pm 0.05$ \\
LDN-ESN & $2.27 \pm 0.29$ & $1.28 \pm 0.10$ & $1.22 \pm 0.26$ & $0.92 \pm 0.12$ & $1.04 \pm 0.32$ & $0.95 \pm 0.15$ \\
\cmidrule(lr){1-1} \cmidrule(lr){2-7}
GRU & $45.6 \pm 3.1$ & $45.5 \pm 3.0$ & $42.3 \pm 2.0$ & $52.6 \pm 8.9$ & $47.9 \pm 2.5$ & $37.8 \pm 1.8$ \\
SRNN & $114.8 \pm 10.7$ & $114.0 \pm 10.3$ & $102.5 \pm 6.2$ & $131.0 \pm 24.7$ & $122.0 \pm 9.5$ & $88.0 \pm 5.9$ \\
DSM & $285.0 \pm 31.3$ & $282.1 \pm 30.8$ & $249.1 \pm 17.7$ & $270.3 \pm 37.5$ & $290.3 \pm 27.0$ & $217.6 \pm 17.3$ \\
\cmidrule(lr){1-1} \cmidrule(lr){2-7}
rand-RSM & $11.52 \pm 0.72$ & $16.64 \pm 1.40$ & $17.80 \pm 1.78$ & $4.01 \pm 0.08$ & $6.09 \pm 0.42$ & $13.16 \pm 1.18$ \\
CRJ-RSM & $11.08 \pm 0.69$ & $15.06 \pm 1.14$ & $17.08 \pm 1.61$ & $4.15 \pm 0.07$ & $7.06 \pm 0.50$ & $11.42 \pm 0.99$ \\
LDN-RSM & $13.71 \pm 0.97$ & $14.34 \pm 1.40$ & $14.22 \pm 0.84$ & $5.06 \pm 0.22$ & $6.49 \pm 0.42$ & $11.90 \pm 0.79$ \\
\end{tabular}
\end{scriptsize}
\end{center}
\end{table}
Tables~\ref{tab:benchmark_runtimes} and~\ref{tab:runtimes} show the runtime needed for training and evaluating all models on all datasets. Unsurprisingly, RSMs take more time than ESNs due to the
stack mechanism and because we need to train four classifiers instead of one linear regression,
yielding a factor of $\approx 10$ for LDN-RSMs. Conversely, GRUs are about seven times slower than an LDN-RSM, SRNNs about $20$ times slower, and DSMs about $35$ times slower.
Overall, we conclude that reservoir stack machines can solve tasks that are
impossible to solve for ESNs and hard to solve for deep networks. Additionally, while RSMs are much
slower compared to pure echo state networks, they are still much faster compared to deep networks
(even for small training data sets).
\section{Conclusion}
In this paper, we presented the reservoir stack machine (RSM), a combination of an echo state network
with a stack. We have shown that a sufficiently rich reservoir suffices to simulate any
LR(1)-automaton, whereas a constant-sized memory only suffices for finite state automata.
We have evaluated our model on three benchmark tasks for Neural Turing Machines (latch, copy, and repeat copy)
and six deterministic context-free languages. ESNs struggled with all three benchmark tasks and most LR(1) languages
and even deep models were unable to solve the copy and repeat copy task.
By contrast, RSMs could solve all tasks with zero generalization error.
For the LR(1) languages, this is independent of the choice of reservoir, whereas a Legendre delay
network was required for the copy and repeat copy task. The Legendre network has the advantage
that it can provably decode past inputs via a linear operator, which simplifies the output function
on these tasks.
We admit that a crucial limitation of RSMs is that they require additional training data in the form of
desired pop, push, and shift behavior. However, RSMs can generalize this behavior from examples
to longer sequences, indicating that actual learning takes place. Further, even with this additional
training data, a deep learning model failed to solve tasks that an LDN-RSM could solve, indicating that
the learning task is still non-trivial.
Accordingly, we conclude that RSMs provide a novel way to learn difficult stack behavior
within seconds and with few short reference sequences.
Future research could extend the reservoir stack machine with a second stack to a full Turing
machine and try to find mechanisms to construct the desired pop, push, and shift behavior
for training data. Even in its present form, though, we hope that the reservoir stack machine
provides an interesting new avenue to explore memory-augmented neural networks in a way that is
faster and more reliable to train.
\section{Acknowledgements}
Funding by the German Research Foundation (DFG) under grant
numbers PA 3460/1-1 and PA 3460/2-1 is gratefully acknowledged.
\bibliographystyle{elsarticle-harv}
\section{Introduction}
In \cite{some results} the author studied some questions about the Galois group of the field generated by division points of certain formal group laws, and the relation between this Galois group and the ring of endomorphisms of the corresponding formal group law. The present article is a continuation of that study, and its purpose is to fill a few obvious gaps of \cite{some results}.\\~\\
First, we recall the set-up and main results of \cite{some results}: \\~\\
Let $p$ be a prime and let $K$ be a finite extension of $\mathbb{Q}_p$. Put $O_K$ to be the ring of integers of $K$, let $\mathfrak{p}_K$
denote the unique maximal ideal of $O_K$ and $v_{\mathfrak{p}_K}(\cdot)$ be the valuation associated to it. Fix an algebraic closure $\overline{\mathbb{Q}}_p$ and let $|.|_p$ be a fixed extension of the absolute value. Let $\overline{O}$ be the ring of integers of $\overline{\mathbb{Q}}_p$ and $\overline{\mathfrak{p}}$ be the unique maximal ideal of $\overline{O}$. Clearly $\overline{\mathfrak{p}} \cap K = \mathfrak{p}_K$. For $n \in \mathbb{N}$ let $\mu_{n}$ denote the group of $n$-th roots of unity inside $\overline{\mathbb{Q}}_p$.\footnote{In our convention $0 \notin \mathbb{N}$.}\\~\\
Let $A$ be an integrally closed, complete subring of $O_K$. Let $\mathfrak{p}_A$ denote its maximal ideal, $K(A)$ its field of fractions and $k(A)$ its residue field. Put $[k(A) : \mathbb{F}_p] = f_A$. Let $\mathfrak{F}$ be a (one dimensional, commutative) formal group-law defined over $O_K$ admitting an $A$ module structure. Let $\pi$ be a generator of $\mathfrak{p}_A$ and say \[[\pi](X) = \pi X + a_2X^2 + a_3X^3 + \cdots \in O_K[[X]] \tag{1.1}\] with at least one $a_i \in O_K - \mathfrak{p}_K$. Then $\min \,\{ i \,|\, |a_i|_p = 1 \} = p^h$ for some positive integer $h$ (see \cite[18.3.1]{haz2}). Now if $\pi_1$ is another generator of $\mathfrak{p}_A$ and \[[\pi_1](X) = \pi_1 X + b_2X^2 + \cdots \in O_K[[X]]\] then $b_{p^h} \in O_K - \mathfrak{p}_K$ and $\min \,\{ i \,|\, |b_i|_p = 1\} = p^h$. This integer $h$ is called the height of $\mathfrak{F}$ as a formal $A$ module. If $a_i \in \mathfrak{p}_K$ for all $i \geq 2$ then we say the height of $\mathfrak{F}$ is infinite. We shall only consider formal $A$ modules of finite height.\\~\\
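As a concrete illustration of this definition of height, consider the standard Lubin--Tate example (a well-known special case, recorded here only for orientation and not taken from \cite{some results}):

```latex
% Standard Lubin-Tate example of a formal A module with uniformizer pi:
\[
[\pi](X) = \pi X + X^{q}, \qquad q = p^{f_A} = \#\, k(A).
\]
% All intermediate coefficients vanish, i.e. a_i = 0 for 2 <= i < q and a_q = 1, so
\[
\min \,\{ i \,|\, |a_i|_p = 1 \} = q = p^{f_A},
\]
% and hence the height of this F as a formal A module is h = f_A.
```

In particular, for such a module the height equals the residue degree $f_A$.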
Let $A$ be as above and $\mathfrak{F}$ be a formal $A$ module over $O_K$. It defines an $A$ module structure on $\mathfrak{p}_K$ which naturally extends to an $A$ module structure on $\overline{\mathfrak{p}}$. We shall denote the corresponding addition by $\oplus_{\mathfrak{F}}$ to distinguish it from the usual addition.\\
Use $\text{End}_{O_K}(\mathfrak{F})$ to denote the ring of endomorphisms of the formal group $\mathfrak{F}$ which are defined over $O_K$ and $\text{End}_{O_K}^{A}(\mathfrak{F})$ to denote the ring of endomorphisms of the formal $A$ module $\mathfrak{F}$ which are defined over $O_K$. It is well-known that \[\text{End}_{O_K}(\mathfrak{F}) = \text{End}_{O_K}^{A}(\mathfrak{F})\;\; (\text{see}\,\cite[21.1.4]{haz2}).\]\\
Let $\pi$ be a generator of $\mathfrak{p}_A$. For each $n \geq 1$, use $\mathfrak{F}[\pi^n]$ to denote the $\pi^n$-torsion submodule of $\overline{\mathfrak{p}}$. For any sub-field $L$ of $\overline{\mathbb{Q}}_p$, let $L_{\mathfrak{F}}(\pi^n)$ be the subfield of $\overline{\mathbb{Q}}_p$ generated by $\mathfrak{F}[\pi^n]$ over $L$ and put
\[ \begin{split}
\Lambda_{\pi}(\mathfrak{F}) = \bigcup_{i \geq 1} \mathfrak{F}[\pi^i],\\
L_{\mathfrak{F}}(\pi^{\infty}) = \bigcup_{i \geq 1} L_{\mathfrak{F}}(\pi^i).
\end{split}\]We shall adopt the convention $\mathfrak{F}[\pi^0] = \{0\}$.\\~\\
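For a familiar instance of these notions, take $A = \mathbb{Z}_p$, $\pi = p$ and the multiplicative formal group law (a classical example, included only as an illustration):

```latex
% Multiplicative formal group law over O_K, regarded as a formal Z_p module:
\[
\widehat{\mathbb{G}}_m(X, Y) = X + Y + XY, \qquad [p](X) = (1+X)^p - 1 .
\]
% Since [p](X) = pX + ... + X^p with all intermediate binomial coefficients
% divisible by p, we get min{ i | |a_i|_p = 1 } = p, so the height is 1.
% The torsion modules and division fields are the classical cyclotomic ones:
\[
\widehat{\mathbb{G}}_m[p^n] = \{ \zeta - 1 \,|\, \zeta \in \mu_{p^n} \}, \qquad
L_{\widehat{\mathbb{G}}_m}(p^n) = L(\mu_{p^n}),
\]
\[
\Lambda_{p}(\widehat{\mathbb{G}}_m) = \{ \zeta - 1 \,|\, \zeta \in \mu_{p^n} \text{ for some } n \geq 1 \} .
\]
```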
Let $\pi$ be a generator of $\mathfrak{p}_K$ and put $K_{\pi} = \mathbb{Q}_p(\pi)$. Use $A_{\pi}$ to denote the ring of integers of $K_{\pi}$. For simplicity we shall write $\mathfrak{p}_{A_{\pi}}$ as $\mathfrak{p}_{\pi}$ and $f_{A_\pi}$ as $f_{\pi}$. Note that $\pi$ is a generator of $\mathfrak{p}_\pi$. Let $\mathfrak{F}$ be a formal $A_{\pi}$ module of height $h$ defined over $O_K$. We know that $f_{\pi} \,|\, h$ (see \cite[Appendix-A, Proposition-1.1]{the}). Put $h_{r, \pi} = \frac {h} {f_{\pi}}$. If $\pi$ is clear from context we shall abbreviate $h_{r,\pi}$ as $h_r$. \\~\\
Fix a generator $\pi$ of $\mathfrak{p}_K$. Let $K_{\pi}^{h_r}/K_{\pi}$ be the unique unramified extension of degree $h_r$. Use $A_{\pi}^{h_r}$ to denote the ring of integers of this field.\\~\\
With this set-up the main results of \cite{some results} can be summarized as follows :\\~\\
\textbf{Remark 1.1 :} A) Let $\mathfrak{F}$ be a formal $A_{\pi}$ module of height $h$ defined over $O_K$. Assume that $\mu_{p^h - 1} \subseteq K$. Then the following are equivalent :\\
(i) $\text{Gal}(K_\mathfrak{F}(\pi^{\infty})|K)$ is abelian.\\
(ii) $K_{\mathfrak{F}}(\pi^n) = K(z)$ for all $z \in \mathfrak{F}[\pi^n] - \mathfrak{F}[\pi^{n-1}]$ and all $n \in \mathbb{N}$.\\
(iii) $\mathfrak{F}$ has a formal $A_{\pi}^{h_r}$ module structure defined over $O_K$ extending the $A_{\pi}$ module structure. \\
B) Let $\mathfrak{F}$ be a formal $A_{\pi}$ module of height $h$ defined over $O_K$. Then, $\text{End}_{O_K}(\mathfrak{F})$ is integrally closed in its fraction field.\\~\\
Further, we have following conventions and observations :\\~\\
\textbf{Remark 1.2 :} i) Let $\pi$ be a generator of $\mathfrak{p}_K$ and let $\mathfrak{F}$ be a formal $A_{\pi}$ module over $O_K$. Such a formal module will be called a \emph{$\pi$-unramified group law} over $O_K$.\\
ii) Let $A$ be a complete, integrally closed subring of $O_K$ containing $A_{\pi}$. If $\mathfrak{F}$ is a formal $A$ module of height $h$ defined over $O_K$ then $f_A | h$ (see \cite[Remark - 1.2.v]{some results}). Note that $K(A)/K_{\pi}$ is unramified and $\pi$ is a generator of $\mathfrak{p}_A$. Using this argument one can put a constraint on the size of $\text{End}_{O_K}(\mathfrak{F})$. More precisely, if $F$ is the fraction field of $\text{End}_{O_K}(\mathfrak{F})$ then $F/K_{\pi}$ is an unramified extension whose degree divides $h_{r}$. If the degree is exactly $h_r$ we say the endomorphism ring has \emph{full height}. (See \cite[Remark - 3.1.3]{some results})\\
iii) Let $L$ be an unramified extension of $K$ (possibly infinite). Use $\widehat{L}$ to denote the completion of $L$, $O_{\widehat{L}}$ for the ring of integers of $\widehat{L}$ and $\mathfrak{p}_{\widehat{L}}$ for the maximal ideal of $O_{\widehat{L}}$. If $\pi$ is a generator of $\mathfrak{p}_K$, then $\pi$ also generates $\mathfrak{p}_{\widehat{L}}$. Let $\mathfrak{F}$ be an $A_{\pi}$ module of height $h$ defined over $O_{\widehat{L}}$. Then one can prove analogues of the results in remark-1.1 (note that in this case we shall replace the fields $K_{\mathfrak{F}}(\pi^n)$ by $\widehat{L}_{\mathfrak{F}}(\pi^n)$ and $K$ by $\widehat{L}$). (See \cite[Remark - 2.4.B]{some results})\\
\section{Field generated by division points}
Let $K$, $O_K$, $\pi$ be as in introduction. We introduce the following notations:\\
$E$ = a finite unramified extension of $K$,\\
$L = E^{\text{ur}} = K^{\text{ur}}$,\\
$\widehat{L}$ = completion of $L$,\\
$O_k$ = ring of integers of $k$ for any discrete valuation field $k$,\\
$\mathcal{F}_{\pi}(O_E)$ = set of all $A_\pi$ modules of finite height defined over $O_E$,\\
$\mathcal{F}_{\pi}(O_E)(h)$ = set of all $A_{\pi}$ modules of height $h$ defined over $O_E$,\\
$\mathcal{F}_{\pi}(O_{\widehat{L}})$ = set of all $A_{\pi}$ modules of finite height defined over $O_{\widehat{L}}$,\\
$\mathcal{F}_{\pi}(O_{\widehat{L}})(h)$ = set of all $A_{\pi}$ modules of height $h$ defined over $O_{\widehat{L}}$,\\
$C$ = completion of $\overline{\mathbb{Q}}_p$,\\
$\mathcal{E}_{E}$ = set of all subextensions of $\overline{\mathbb{Q}}_p/E$,\\
$\mathcal{E}_{L}$ = set of all subextensions of $\overline{\mathbb{Q}}_p/L$,\\
$\mathcal{E}_{\widehat{L}}$ = set of all subextensions of $C/\widehat{L}$.\\~\\
Now we have canonical maps
\[\begin{split}
D_{\pi, E} : \mathcal{F}_{\pi}(O_E) \to \mathcal{E}_E , \, \mathfrak{F} \to E_{\mathfrak{F}}(\pi^{\infty}) \\
D_{\pi}^{\text{ur}} : \mathcal{F}_{\pi}(O_L) \to \mathcal{E}_L , \, \mathfrak{F} \to L_{\mathfrak{F}}(\pi^{\infty})\\
\widehat{D}_{\pi} : \mathcal{F}_{\pi}(O_{\widehat{L}}) \to \mathcal{E}_{\widehat{L}}, \, \mathfrak{F} \to \widehat{L}_{\mathfrak{F}}(\pi^{\infty}).
\end{split}\]
The natural question one can ask with this set-up is when two elements have the same image under $D_{\pi, E}$ ($D^{\text{ur}}_{\pi}$, $\widehat{D}_{\pi}$).\\
If $E$ is clear from context we shall drop it from subscript.\\~\\
\textbf{Remark 2.1 :} i) Let $\mathfrak{F}, \mathfrak{G} \in \mathcal{F}_{\pi}(O_E)$. Then $D_{\pi}(\mathfrak{F}) \subseteq D_{\pi}(\mathfrak{G}) \implies D_{\pi}^{\text{ur}}(\mathfrak{F}) \subseteq D_{\pi}^{\text{ur}}(\mathfrak{G}) \implies \widehat{D}_{\pi}(\mathfrak{F}) \subseteq \widehat{D}_{\pi}(\mathfrak{G})$.\\
Conversely, let $\mathfrak{F}, \mathfrak{G} \in \mathcal{F}_{\pi}(O_E)$. Then $\widehat{D}_{\pi}(\mathfrak{F}) \subseteq \widehat{D}_{\pi}(\mathfrak{G}) \implies D^{ur}_{\pi}(\mathfrak{F}) \subseteq D^{ur}_{\pi}(\mathfrak{G})$. The last statement can be proved by noting that $L(\mathfrak{F}[\pi^n])$ (resp. $L(\mathfrak{G}[\pi^n])$) is the algebraic closure of $E$ in $\widehat{L}(\mathfrak{F}[\pi^n])$ (resp. $\widehat{L}(\mathfrak{G}[\pi^n])$) for each $n \in \mathbb{N}$. \\
ii) Let $\mathfrak{F}, \mathfrak{G} \in \mathcal{F}_{\pi}(O_E)$ be two group laws which are isomorphic over $O_E$. Then $D_{\pi}(\mathfrak{F}) = D_{\pi}(\mathfrak{G})$.\\
iii) Let $\mathfrak{F}, \mathfrak{G} \in \mathcal{F}_{\pi}(O_E)$ be such that there is a non-zero homomorphism of formal groups $f : \mathfrak{F} \to \mathfrak{G}$ over $O_E$. Since $\mathfrak{F}$ has finite height we conclude that $f$ is onto with finite kernel (see \cite[1.2]{lubin2}). Further by \cite[21.1.4]{haz2} $f$ is automatically a homomorphism of $A_{\pi}$ modules. Let $\beta \in \Lambda_{\pi}(\mathfrak{G})$. Then there is an $\alpha \in \Lambda_{\pi}(\mathfrak{F})$ such that $f(\alpha) = \beta$. Hence $\Lambda_{\pi}(\mathfrak{G}) \subseteq E_{\mathfrak{F}}(\pi^{\infty})$
and $E_{\mathfrak{G}}(\pi^{\infty}) \subseteq E_{\mathfrak{F}}(\pi^{\infty})$, i.e. $D_{\pi}(\mathfrak{G}) \subseteq D_{\pi}(\mathfrak{F})$.\\
Further, in this situation there is a non-zero homomorphism of $A_{\pi}$ modules $g : \mathfrak{G} \to \mathfrak{F}$ defined over $O_E$ (see \cite[1.6]{lubin2}). Since $\mathfrak{G}$ also has finite height, the same argument applied to $g$ gives $D_{\pi}(\mathfrak{F}) \subseteq D_{\pi}(\mathfrak{G})$. \\
Thus $D_{\pi}(\mathfrak{F}) = D_{\pi}(\mathfrak{G})$.\footnote{Note that our definition of height differs from Lubin's definition of height, but notion of finiteness is same.}\\
iv) Note that the arguments of $(ii)$ and $(iii)$ also hold for $D^{\text{ur}}_{\pi}$, $\widehat{D}_\pi$.\\~\\
In the literature one usually considers arbitrary formal group laws (possibly without any unramified group law structure) and works with the field generated by $p$-torsion points. In this situation height means height as a formal $\mathbb{Z}_p$ module and there is a notion of an absolute endomorphism ring (see \cite{lubin1}). We shall modify the notations introduced above and use $p$ instead of $\pi$ in the subscript, to change the set-up to arbitrary formal group laws and the field generated by $p$-torsions. Note that any homomorphism of formal groups is compatible with the module structure provided both groups have that module structure. We can compare this general framework with our restricted situation as follows :\\~\\
\textbf{Remark 2.2 :} i) Implications similar to those in remark-2.1(i) hold.\\
ii) Let $\mathfrak{F}, \mathfrak {G}\in \mathcal{F}_p(O_E)$ be such that $\text{Hom}_{O_E}(\mathfrak{F}, \mathfrak{G}) \neq 0$. Then by similar arguments as in remark-2.1(iii) $D_p(\mathfrak{F}) = D_p(\mathfrak{G})$.\\
iii) Let $\mathfrak{F} \in \mathcal{F}_p(O_E)$. Note that $\text{End}_{O_E}(\mathfrak{F})$ is a closed subring of $O_E$ via the $c$ map (see \cite[2.2.1]{lubin1}). There is a $\mathfrak{G} \in \mathcal{F}_p(O_E)$ such that $\text{Hom}_{O_E}(\mathfrak{F}, \mathfrak{G}) \neq 0$, fraction field of $\text{End}_{O_E}(\mathfrak{F})$ = fraction field of $\text{End}_{O_E}(\mathfrak{G})$ (say, $F$) and $\text{End}_{O_E}(\mathfrak{G})$ is integrally closed (see \cite[3.2]{lubin2}). If $E/F$ is unramified, then $\mathfrak{G}$ is an unramified group law and we can use our set-up in questions related to field generated by $p$-torsion points of $\mathfrak{F}$.\\
iv) $\mathfrak{F} \in \mathcal{F}_{\pi}(O_E)(h) \implies \mathfrak{F} \in \mathcal{F}_p(O_E)(eh)$ where $e$ is ramification index of $K/\mathbb{Q}_p$. Further, $D_{\pi}(\mathfrak{F}) = D_{p}(\mathfrak{F})$ (similar statements for $D^{\text{ur}}$, $\widehat{D}$). \\~\\
Let $\mathfrak{F} \in \mathcal{F}_p(O_E)$. The absolute endomorphism ring of $\mathfrak{F}$ (denoted $\text{End}(\mathfrak{F})$) is defined as \[ \text{End}(\mathfrak{F}) = \bigcup_{k} \text{End}_{O_k}(\mathfrak{F}) \]
where $k$ is any complete discrete valuation field containing $E$ and $O_k$ is the associated ring of integers.\\
Assume that the height of $\mathfrak{F}$ as a $\mathbb{Z}_p$ module is $h$ and use $\mathcal{S}$ to denote the set of all field extensions of $\mathbb{Q}_p$ of degree $h$. Note that $\mathcal{S}$ is a finite set. Put $k_1$ to be the compositum of all elements of $\mathcal{S}$ and $E$. Clearly, $k_1/E$ is a finite extension. One can show that $\text{End}(\mathfrak{F}) = \text{End}_{O_{k_1}}(\mathfrak{F})$. Further $c(\text{End}_{O_k}(\mathfrak{F})) = O_k \cap c(\text{End}(\mathfrak{F}))$ for any $k$ as above. (See \cite[2.3.3]{lubin1})\\
It is easy to see that $\text{End}(\mathfrak{F})$ is a noetherian, local ring and its fraction field is a finite extension of $\mathbb{Q}_p$. \\~\\
Now we have the following proposition :\\~\\
\textbf{Proposition 2.3 :} Let $\mathfrak{F} \in \mathcal{F}_{\pi}(O_E)$. Then $\text{End}(\mathfrak{F})$ is integrally closed in its fraction field.\\~\\
\textbf{Proof :} Similar to the proof of theorem-1.3 (see \cite[Theorem-4.1.2]{some results}). The only thing to notice in this case is that $\mathfrak{m}$, the unique maximal ideal of $\text{End}(\mathfrak{F})$, is stable under the action of $G_{E}$, the absolute Galois group of $E$. $\square$\\~\\
Let $\mathfrak{F}, \mathfrak{G} \in \mathcal{F}_{p}(O_{\widehat{L}})$. Use $\mathfrak{F}_p, \mathfrak{G}_p$ to denote the associated $p$-divisible groups arising from $p$-torsion points (see \cite[2.2]{tate}). Assume that $T$ is a finite extension of $\widehat{L}$ and $O_T$ is its ring of integers. Clearly $T/\widehat{L}$ is totally ramified. Now by a result of Waterhouse \[ \text{Hom}(\mathfrak{F}_p, \mathfrak{G}_p) = \text{Hom}((\mathfrak{F}_p)_{O_T}, (\mathfrak{G}_p)_{O_T}) \;\; \cite[\text{Theorem} \,3.2]{water}.\]
From theory of $p$-divisible groups (\cite[2.2]{tate}) we know the canonical maps
\[\begin{split}
\text{Hom}_{O_{\widehat{L}}}(\mathfrak{F}, \mathfrak{G}) \to \text{Hom}(\mathfrak{F}_p, \mathfrak{G}_p),\\
\text{Hom}_{O_T}(\mathfrak{F}, \mathfrak{G}) \to \text{Hom}((\mathfrak{F}_p)_{O_T}, (\mathfrak{G}_p)_{O_T})\end{split}\]
are bijective.\\
Note that $\text{Hom}_{O_{\widehat{L}}}(\mathfrak{F}, \mathfrak{G}) \subseteq \text{Hom}_{O_T}(\mathfrak{F}, \mathfrak{G})$. In light of the discussion above, we must have $\text{Hom}_{O_{\widehat{L}}}(\mathfrak{F}, \mathfrak{G}) = \text{Hom}_{O_T}(\mathfrak{F}, \mathfrak{G})$.\\~\\
\textbf{Lemma 2.4 :} Let $\mathfrak{F} \in \mathcal{F}_p(O_{\widehat{L}})$. Then $\text{End}(\mathfrak{F}) = \text{End}_{O_{\widehat{L}}}(\mathfrak{F})$.\\~\\
\textbf{Proof :} Let $h$ be the height of $\mathfrak{F}$ as a $\mathbb{Z}_p$ module and let $\mathcal{S}$ be as before. Use $T$ to denote the compositum of the elements of $\mathcal{S}$ and $\widehat{L}$. Clearly $T/\widehat{L}$ is a finite totally ramified extension. Now the lemma follows from the discussion above. $\square$\\~\\
In what follows we shall identify the endomorphism ring with its image under $c$ map.\\~\\
\textbf{Corollary 2.5 :} Let $\mathfrak{F} \in \mathcal{F}_p(O_E)$. Use $F$ to denote the fraction field of $\text{End}(\mathfrak{F})$. Then, $EF/E$ is a finite unramified extension.\\~\\
\textbf{Proof :} $\text{End}_{O_{\widehat{L}}}(\mathfrak{F}) = \text{End}(\mathfrak{F}) \cap O_{\widehat{L}}$. From lemma-2.4 we have $\text{End}(\mathfrak{F}) \subseteq O_{\widehat{L}}$. Since $F/\mathbb{Q}_p$ is finite, $EF/E$ is a finite unramified extension. $\square$\\~\\
\textbf{Lemma 2.6 :} Let $\mathfrak{F} \in \mathcal{F}_{\pi}(O_E)(h)$. Then $\text{End}(\mathfrak{F})$ is an integrally closed subring of $A_{\pi}^{h_r}$ containing $A_{\pi}$.\\~\\
\textbf{Proof :} Let $F$ be as before. $EF/E$ is an unramified extension. Note that $E/K_{\pi}$ is unramified. Hence $EF/K_{\pi}$ is unramified. \\
By hypothesis, $K_{\pi} \subseteq F$. Thus $F/K_{\pi}$ is unramified. By proposition-2.3 $\text{End}(\mathfrak{F}) = O_F$. Assume $[k(O_F) : \mathbb{F}_p ] = f_{F}$. Then $f_{F} | h$. Hence $K_{\pi} \subseteq F \subseteq K_{\pi}^{h_r}$. The lemma follows from here. $\square$ \\~\\
\textbf{Corollary 2.7 :} Let $\mathfrak{F} \in \mathcal{F}_{\pi}(O_E)(h)$ be such that $\text{End}_{O_E}(\mathfrak{F}) = A_{\pi}^{h_r}$, i.e. the endomorphism ring has full height. Then, $\text{End}_{O_E}(\mathfrak{F}) = \text{End}(\mathfrak{F})$. \\~\\
\textbf{Proof :} Follows from lemma-2.6. $\square$\\~\\
\textbf{Corollary 2.8 :} Let $\mathfrak{F} \in \mathcal{F}_{\pi}(O_E)(h)$. Assume that $\mu_{p^h - 1} \subseteq O_E$, i.e. $h | f_E$ where $f_E$ is the degree of the residue extension corresponding to $O_E$. Then $\text{End}(\mathfrak{F}) = \text{End}_{O_E}(\mathfrak{F})$. \\~\\
\textbf{Proof :} We have $\text{End}_{O_E}(\mathfrak{F}) = \text{End}(\mathfrak{F}) \cap O_E$. Now the result follows from lemma-2.6 and hypothesis. $\square$ \\~\\
\textbf{Remark 2.9 :}
If $\text{End}_{O_E}(\mathfrak{F}) = A_{\pi}^{h_r}$, then $A_{\pi}^{h_r} \subseteq O_E$. So corollary-2.7 is a special case of corollary-2.8. \\~\\
Let $\mathfrak{F} \in \mathcal{F}_p(O_E)(h)$. In Lubin's terminology (\cite[4.3.1]{lubin1}) $\mathfrak{F}$ is said to be full if the following hold :\\
1. $\text{End}(\mathfrak{F})$ is integrally closed in its fraction field,\\
2. $\text{End}(\mathfrak{F})$ is a free $\mathbb{Z}_p$ module of rank $h$.\\~\\
\textbf{Remark 2.10 :} If $\mathfrak{F}$ is an unramified group law of height $h$ whose endomorphism ring is full, it is full in the sense mentioned above (recall that $A_{\pi}^{h_r}$ is a free $\mathbb{Z}_p$ module of rank $eh$, where $e$ is the ramification index of $K/\mathbb{Q}_p$; use remark-2.2(iv)).\\~\\
We have the following theorem :\\~\\
\textbf{Theorem 2.11 :} Let $O$ be a complete discrete valuation ring and assume that $\mathfrak{F}$ and $\mathfrak{G}$ are formal group laws over $O$, which are $\mathbb{Z}_p$ modules of finite height. Suppose that $O$ is large enough so that $\text{End}_{O}(\mathfrak{F}) = \text{End}(\mathfrak{F})$, $\text{End}_{O}(\mathfrak{G}) = \text{End}(\mathfrak{G})$ and its residue field is algebraically closed. Further assume $c(\text{End}(\mathfrak{F})) = c(\text{End}(\mathfrak{G}))$. If $\mathfrak{F}$, $\mathfrak{G}$ are full group laws (in the sense defined above) then they are isomorphic over $O$. \\~\\
\textbf{Proof :} See \cite[4.3.2]{lubin1}. $\square$ \\~\\
\textbf{Corollary 2.12 :} Let $\mathfrak{F},\mathfrak{G} \in \mathcal{F}_{\pi}(O_E)(h)$ with $\text{End}_{O_E}(\mathfrak{F}) = \text{End}_{O_E}(\mathfrak{G}) = A_{\pi}^{h_r}$. Then $\mathfrak{F}$ and $\mathfrak{G}$ are isomorphic over $O_{\widehat{L}}$. \\~\\
\textbf{Proof :} Put $O = O_{\widehat{L}}$. Now the corollary follows from lemma-2.4 and theorem-2.11. $\square$\\~\\
\textbf{Corollary 2.13 :} Let $\mathfrak{F}, \mathfrak{G}$ be as in statement of corollary-2.12. Then $\widehat{D}_{\pi}(\mathfrak{F}) = \widehat{D}_{\pi}(\mathfrak{G})$.\\~\\
\textbf{Proof :} Follows from corollary-2.12. $\square$\\~\\
\textbf{Corollary 2.14 :} Let $\mathfrak{F} \in \mathcal{F}_{\pi}(O_E)(h)$ with $\text{End}_{O_E}(\mathfrak{F}) = A_{\pi}^{h_r}$. Then $D^{\text{ur}}_{\pi}(\mathfrak{F}) = E^{\text{ab}}$ where $E^{\text{ab}}$ is the maximal abelian extension of $E$.\\~\\
\textbf{Proof :} Let $\mathfrak{G}$ be a Lubin-Tate module with respect to the parameter $\pi$ on $A_{\pi}^{h_r}$. Note that, the hypothesis on $\mathfrak{F}$ implies $A_{\pi}^{h_r} \subseteq O_{E}$. Clearly $\mathfrak{G} \in \mathcal{F}_{\pi}(O_E)(h)$ and $\text{End}_{O_E}(\mathfrak{G}) = A_{\pi}^{h_r}$ (the last part follows from lemma-2.6 and definition of Lubin-Tate module). Since $E/K_{\pi}^{h_r}$ is unramified, by class field theory we conclude $D_{\pi}^{\text{ur}}(\mathfrak{G}) = E^{\text{ab}}$ (see \cite[Chapter-III, Corollary-7.7]{neu}).\\
Now the result follows from corollary-2.13 and remark-2.1(i). $\square$\\~\\
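In the simplest case corollary-2.14 specializes to a classical statement; we record the computation only as a consistency check (the facts used here are standard, not results of \cite{some results}):

```latex
% Take K = E = Q_p, pi = p, h = 1. Then A_pi = Z_p, h_r = 1 and A_pi^{h_r} = Z_p.
% The multiplicative group law G_m-hat satisfies End_{Z_p}(G_m-hat) = Z_p,
% i.e. its endomorphism ring has full height, and corollary-2.14 asserts
\[
D^{\text{ur}}_{p}(\widehat{\mathbb{G}}_m)
= \mathbb{Q}_p^{\text{ur}}(\mu_{p^{\infty}})
= \mathbb{Q}_p^{\text{ab}},
\]
% which is the local Kronecker-Weber theorem.
```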
\textbf{Remark 2.15 :} From corollary-2.14 one easily sees that $D_{\pi}^{\text{ur}}(\mathfrak{F})$ is the same for any unramified group law over $O_E$ whose endomorphism ring has full height, i.e. it is independent of $\pi$, $h$ and $\mathfrak{F}$. This along with remark-2.1(i) answers two questions posed in \cite[Remark-4.1]{some results}.
\section{The associated Galois representation}
In this section we recall some standard results about Galois representation arising from formal group laws and use them in context of unramified group laws.\\
Let $E$ be as before. Put
\[\begin{split}
G_{E} = \text{Gal}(\overline{\mathbb{Q}}_p|E),\\
G_{L} = \text{Gal}(\overline{\mathbb{Q}}_p|L),\\
G_{\widehat{L}} = \text{Gal}(C|\widehat{L}).
\end{split}\]
Note that $G_L = G_{\widehat{L}}$ and $G_L$ is a closed normal subgroup of $G_E$.\\
Let $\mathfrak{F} \in \mathcal{F}_p(O_E)(h)$. It is well-known that the $p$-adic Tate module $T_{p}(\mathfrak{F})$ is a free $\mathbb{Z}_p$ module of rank $h$ (\cite[1.2]{lubin2}). Put $V_p(\mathfrak{F}) = T_p(\mathfrak{F})\underset{\mathbb{Z}_p}\otimes\mathbb{Q}_p.$ It is a $\mathbb{Q}_p$ vector space of dimension $h$. Fixing an ordered base for $T_p(\mathfrak{F})$ one obtains natural representations :
\[\begin{split}
\rho_{p}(\mathfrak{F}) : G_E \to \text{Gl}_{h}(\mathbb{Q}_p),\\
\widehat{\rho}_{p}(\mathfrak{F}) : G_{\widehat{L}} \to \text{Gl}_h(\mathbb{Q}_p).
\end{split}\]\\
\textbf{Remark 3.1 :} i) The maps $\rho_p(\mathfrak{F})$ and $\widehat{\rho}_p(\mathfrak{F})$ factor through canonical inclusion $\text{Gl}_h(\mathbb{Z}_p) \to \text{Gl}_h(\mathbb{Q}_p)$.\\
ii) These maps are continuous with respect to usual pro-finite topologies on both sides.\\
iii) Put $\rho_{p}(\mathfrak{F})(G_E) = H$ and $\widehat{\rho}_{p}(\mathfrak{F})(G_{\widehat{L}}) = \widehat{H}$. Note that $G_{\widehat{L}}$ is a closed sub-group of $G_E$ and $G_E$ is compact. Further, $\text{Gl}_{h}(\mathbb{Q}_p)$ is Hausdorff. Hence $H$ and $\widehat{H}$ are closed subgroups of $\text{Gl}_h(\mathbb{Q}_p)$. By a $p$-adic analogue of Cartan's theorem we know that any closed sub-group of $\text{Gl}_{h}(\mathbb{Q}_p)$ is a $p$-adic Lie group in the analytic sense (see \cite[V.9, Corollary of Theorem-1]{serre 1})\footnote{In this article $p$-adic Lie group means an analytic group of finite dimension over a finite extension of $\mathbb{Q}_p$. If the base field is not explicitly mentioned it is assumed to be $\mathbb{Q}_p$. }. So $H$ and $\widehat{H}$ are $p$-adic Lie groups. \\
iv) $\rho_p(\mathfrak{F})$ induces an isomorphism of abstract groups between $\text{Gal}(D_{\pi}(\mathfrak{F})|E)$ and $H$. Since the map is continuous, $\text{Gal}(D_{\pi}(\mathfrak{F})|E)$ is compact and $H$ is Hausdorff, this isomorphism of abstract groups is actually an isomorphism of topological groups.\\
Similarly $\widehat{\rho}_p(\mathfrak{F})$ induces an isomorphism of abstract groups between $\text{Gal}(\widehat{D}_{\pi}(\mathfrak{F})|\widehat{L})$ and $\widehat{H}$.\\
v) In the following discussion we shall be mostly concerned about $\rho_p(\mathfrak{F})$. Properties of $\widehat{\rho}_p(\mathfrak{F})$ are similar and we shall mention them only if we need them.\\~\\
The next part of the discussion follows works of Serre and Sen (\cite{serre 2}, \cite{sen}).\\
Put $V_C = V_p(\mathfrak{F})\underset{{\mathbb{Q}}_p}{\otimes}C$. It is a $C$ vector space of dimension $h$ on which $G_E$ acts semi-linearly, i.e. \[ s(cx) = s(c)s(x) \] for all $s \in G_E, c \in C, x \in V_C$.\\
Let $U_p$ be the group of units of $\mathbb{Z}_p$. Fix a generator $t$ for $T_p(\mathbb{G}_m)$ and let $\chi : G_E \to U_p$ be the corresponding cyclotomic character. For $i \in \mathbb{Z}$ define, \[ V^{i} = \{ x \in V_C \,|\, s(x) = \chi(s)^i x \; \forall s \in G_E \}. \]
It is an $E$ vector-space. Put $V(i)$ = the $C$-subspace of $V_C$ spanned by $V^{i}$. The following theorem is a fundamental result of the theory : \\~\\
\textbf{Theorem 3.2 (Hodge-Tate decomposition)} :
\[V_C = V(0) \oplus V(1)\]
as $C$-vector spaces. Further, $\text{dim}_C(V(0)) = h - 1$ and $\text{dim}_C(V(1)) = 1$.\\~\\
\textbf{Proof :} See \cite[Section-4, corollary-2 of theorem-3]{tate} and \cite[Section-5, Proposition-6]{serre 2}. $\square$ \\~\\
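In the simplest case the decomposition can be checked by hand; we again use the multiplicative example, included only as an illustration of the theorem (all facts used are classical):

```latex
% For F = G_m-hat we have h = 1 and G_E acts on T_p(F) through the
% cyclotomic character chi. If t is a generator of T_p(F) and x denotes the
% image of t in V_C, then
\[
s(x) = \chi(s)\, x \qquad \forall s \in G_E,
\]
% so x lies in V^1 and V_C = V(1). Thus
\[
\text{dim}_C(V(0)) = 0 = h - 1, \qquad \text{dim}_C(V(1)) = 1,
\]
% in agreement with theorem-3.2.
```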
Let $H$ be as before. Use $\mathfrak{h}$ to denote the Lie algebra associated with $H$. Remember that the exponential map defines an isomorphism from a neighbourhood of $0$ in $\mathfrak{h}$ onto an open subgroup of $H$.\\
Use $H_{\text{alg}}$ to denote the smallest algebraic subgroup of $\text{Gl}_h(\mathbb{Q}_p)$ containing $H$. Similarly let $\mathfrak{h}_{\text{alg}}$ be the smallest algebraic Lie sub-algebra of $\mathfrak{gl}(h, \mathbb{Q}_p)$ containing $\mathfrak{h}$. One can show that $\mathfrak{h}_{\text{alg}}$ is indeed the Lie algebra corresponding to $H_{\text{alg}}$ (\cite[Section-1, Proposition-2]{serre 2}). \\
As a consequence of theorem-3.2 the Galois representation in concern is a `Hodge-Tate' representation. The following result is due to Serre and Sen :\\~\\
\textbf{Proposition 3.3 :} Notations be as above. Then, \[\mathfrak{h}_{\text{alg}} = \mathfrak{h}\] ie $\mathfrak{h}$ is an algebraic Lie algebra. \\~\\
\textbf{Proof :} See \cite[Section-6, Theorem-2]{sen}. $\square$\\~\\
\textbf{Remark 3.4 :} Proposition-3.3 answers the Lie algebra version of a question in \cite[Remark-3.2.1]{some results}. \\~\\
The next result is due to Serre :\\~\\
\textbf{Proposition 3.5 :} Assume that the following holds :\\
i) $V_p(\mathfrak{F})$ is a semi-simple $H$ module.\\
ii) $\text{End}(\mathfrak{F}) = \mathbb{Z}_p$.\\
Then, $H_{\text{alg}} = \text{Gl}_h(\mathbb{Q}_p)$ and $H$ is an open sub-group of $\text{Gl}_h(\mathbb{Q}_p)$.\\~\\
\textbf{Proof :} See \cite[Section-5, Theorem-4]{serre 2}. $\square$\\~\\
Now one would like to know when condition-(i) of proposition-3.5 holds. In this direction we have the following result :\\~\\
\textbf{Proposition 3.6 :} The following are equivalent :\\
i) $V_{p}(\mathfrak{F})$ is a semi-simple $H$ module.\\
ii) $V_{p}(\mathfrak{F})$ is a semi-simple $\mathfrak{h}$ module.\\
iii) $\mathfrak{h}$ is a reductive Lie algebra, i.e. a product of an abelian and a semi-simple Lie algebra.\\~\\
\textbf{Proof :} See \cite[Section-1, Proposition-1]{serre 2}. $\square$\\~\\
\textbf{Remark 3.7 :} $\mathfrak{h}$ reductive $\implies$ $\mathfrak{h} = \mathfrak{c} \times \mathfrak{s}$ where $\mathfrak{c}$ is abelian, $\mathfrak{s}$ is semi-simple. In particular, $[\mathfrak{h}, \mathfrak{h}] = \mathfrak{s}$ and $[[\mathfrak{h}, \mathfrak{h}], [\mathfrak{h}, \mathfrak{h}]] = \mathfrak{s}$. \\~\\
To use proposition-3.5 one would like to check if $\mathfrak{h}$ is reductive. We make two definitions and note down some preliminary observations. \\~\\
\textbf{Definition 3.8 :} Let $G$ be a pro-finite group. Put, $G^{(0)} = G$ and $G^{(i)} = [G^{(i-1)}, G^{(i-1)}]$ for each $i \in \mathbb{N}$. \\
i) $G$ is said to be \emph{pro-solvable} if given any open subgroup $U$, $G^{(n)} \subseteq U$ for large enough $n$.\\
ii) $G$ is said to be \emph{almost semi-simple} if $G^{\text{ab}} := G/G^{(1)}$ is finite.\\~\\
\textbf{Remark 3.9 :} i) It is well-known that any finite Galois extension of $E$ has solvable Galois group. Hence $\text{Gal}(D_p(\mathfrak{F})|E)$ is pro-solvable. By remark-3.1(iii) $H$ is pro-solvable.\\
ii) Let $G$ be a $p$-adic Lie group over a local field and $\mathfrak{g}$ be the associated Lie algebra. Then $G$ is almost semi-simple if and only if $\mathfrak{g} = [\mathfrak{g}, \mathfrak{g}]$. \\
\subsection{Applications to theory of unramified group laws}
We shall apply the theory described in this section to understand the Galois representation arising from unramified group laws.\\
Let $\mathfrak{F} \in \mathcal{F}_{\pi}(O_E)(h)$. Clearly, $\mathfrak{F} \in \mathcal{F}_p(O_E)(eh)$ where $e$ is the ramification index of the extension $K/\mathbb{Q}_p$. Further, $D_{\pi}(\mathfrak{F}) = D_p(\mathfrak{F})$. Thus, to understand $\text{Gal}(D_{\pi}(\mathfrak{F})|E)$ it is enough to employ the techniques described earlier in this section. But to get better results it is desirable to consider the representation on the $\pi$-adic Tate module $T_{\pi}(\mathfrak{F})$ (to be denoted $\rho_{\pi}(\mathfrak{F})$). For this purpose one needs to extend some of the results used in this section to the context of $p$-adic Lie groups over $K_{\pi}$ and `Hodge-Tate' representations over $K_{\pi}$. A rigorous treatment of this requires considerable effort, so we shall restrict ourselves to the special case $e = 1$ and $\pi = p$. \\
Recall that, the representation $\rho_{\pi}(\mathfrak{F})$ was studied in \cite[Section-4.2]{some results} and the author posed several questions concerning the image $H$ (see \cite[Remark - 4.2.1]{some results}). We make some observations about these questions in the special case $e=1, \pi=p$ using the theory presented in this section. For simplicity, we shall assume that $\mu_{p^h - 1} \subseteq E$. Under this hypothesis, $\text{End}_{O_E}(\mathfrak{F}) = \text{End}(\mathfrak{F})$ (corollary-2.8). \\~\\
\textbf{Remark 3.1.1 :} i) Remark-3.4 shows that $H$ has finite index in an algebraic group (namely $H_{\text{alg}}$) though $H$ may not be algebraic. This gives partial answer to a question in \cite[Remark-4.2.1]{some results}.\\
ii) In the same remark, the author asked a question about the minimum dimension of a sub-variety of $\text{Gl}_{h_r}(K_{\pi})$ containing $H$ ($h_r$ as in the introduction). Since $H$ is a $p$-adic Lie group, the right parameter to study is its dimension as a $p$-adic manifold. But by remark-3.4, $H_{\text{alg}}$ has the same dimension as $H$ as a $p$-adic manifold. So one can refine the question asked earlier and phrase it in terms of the algebraic dimension of $H_{\text{alg}}$.\\
iii) Proposition-3.5 answers another question, about $H$ being open, provided the hypothesis on $\mathfrak{h}$ can be verified.\\~\\
Now we make some brief remarks about improving the set-up in the general case.\\~\\
\textbf{Remark 3.1.2 :} i) An arbitrary closed subgroup of $\text{Gl}_{h_r}(K_{\pi})$ may not have a $p$-adic manifold structure over $K_{\pi}$. But, since $H \cong \text{Gal}(D_{\pi}(\mathfrak{F})|E)$ and $\mathfrak{F}$ has an $A_{\pi}$-module structure, one expects $H$ to have a $K_{\pi}$-structure. \\
ii) One needs to extend the theory of `Hodge-Tate representations' to vector spaces over $K_{\pi}$ (see \cite[Section-4]{sen} for the definition). Note that one may need to replace the cyclotomic character by the Lubin-Tate character over $K_{\pi}$. \\
iii) In \cite{sen} Sen uses results of Tate-Sen theory concerning ramification in $p$-adic Lie extensions. One would like to generalize these results to the case where the Galois group is a $p$-adic Lie group over a general local field. The author intends to revisit this topic in a future article.\\
iv) In \cite{some results} the author has already generalized the concept of $p$-divisible groups to $\pi$-divisible groups in the context of $A_{\pi}$ formal modules following \cite{tate} (recall that all connected $p$-divisible groups arise from divisible formal group laws). One may like to prove all the key results of \cite{tate} in this generalized set-up. Note that, if $\mathfrak{F}, \mathfrak{G}$ are connected $p$-divisible groups of dimension 1, both arising from $A_{\pi}$ modules, then one can easily generalize the `main result' of $p$-divisible groups to the set-up of $\pi$-divisible groups using the result of Hazewinkel quoted in the introduction.\\
v) We have restricted our attention to 1-dimensional formal groups throughout, though many of the results are valid in higher dimensions with slight modifications. The reason for this restriction is the inability to obtain a good ramification-theoretic description of the torsion subgroups, which in the 1-dimensional case is provided by Eisenstein polynomials (see \cite[Appendix-A, Section-2]{the}).
\section{Concluding remarks}
Let $\mathfrak{F} \in \mathcal{F}_{\pi}(O_E)(h)$ and assume that $h | f_E$. The goal of this section is to pin down some facts about $\text{Gal}(D_{\pi}(\mathfrak{F})|E)$ which will lead to a better understanding of the group. In particular, for the case $e = 1, \pi = p$ we would like to gather some information towards the problem of finding the dimension and properties of $\mathfrak{h}$.\\
For simplicity we shall denote $\text{Gal}(D_{\pi}(\mathfrak{F})|E)$ by $G$ and abbreviate $E_{\mathfrak{F}}(\pi^n)$ to $E(\pi^n)$. \\~\\
\textbf{Remark 4.1 :} i) We know that $\text{Gal}(E(\pi)|E)$ is abelian. Hence $[G, G] \subseteq \text{Gal}(E(\pi^\infty)|E(\pi))$. \\
ii) The $\pi$-adic Tate module $T_{\pi}(\mathfrak{F})$ is a free $A_{\pi}$-module of rank $h_r$. Fix a basis $z_1, \cdots, z_{h_r}$ of $T_{\pi}(\mathfrak{F})$. Let $z_i(n)$ be the $n$-th component of $z_i$ for all $n \in \mathbb{N}$ and $ 1 \leq i \leq h_r$. Put $S_{n} = \{ z_1(n), z_2(n), \cdots z_{h_r}(n) \}$, $\mathfrak{F}'[\pi^n] = \mathfrak{F}[\pi^n] - \mathfrak{F}[\pi^{n-1}]$. Clearly \[K(\mathfrak{F}'[\pi^n]) = K(S_n) = K_{\mathfrak{F}}(\pi^n).\] Let $m_{\pi}(n)$ denote the smallest possible size of a subset of $\mathfrak{F}[\pi^n]$ which generates $K_{\mathfrak{F}}(\pi^n)$ over $K$. Without loss of generality one can assume that such a generating set is a subset of $\mathfrak{F}'[\pi^n]$. Clearly $m_{\pi}(n) \leq h_r$. Consider the sequence $\{m_{\pi}(1), m_{\pi}(2), \cdots \}$. One can conjecture the following properties:\\
a) $m_{\pi}(1) \leq m_{\pi}(2) \leq m_{\pi}(3) \leq \cdots $,\\
b) the sequence is eventually constant and this constant depends only on the degree of $F/K_{\pi}$, where $F$ is the fraction field of $\text{End}_{O_E}(\mathfrak{F})$.\\
iii) $m_{\pi}(1) = 1$ and $[F : K_{\pi}] = h_r$ implies $m_{\pi}(i) = 1$ for each $i \geq 1$.\\
Further, if $m_{\pi}(i) = 1$ for some $i \geq 2$, then looking at the degree and ramification index of the extension $K_{\mathfrak{F}}(\pi^i)/K$ we conclude that $K(z) = K_{\mathfrak{F}}(\pi^i)$ for all $z \in \mathfrak{F}'[\pi^i]$. Hence if $m_{\pi}(i) = 1$ for all $i \geq 2$, one has $[F:K_{\pi}] = h_r$, i.e., $\mathfrak{F}$ has endomorphism ring of full height. (See \cite[Appendix-A]{the} and \cite[Theorem 2.3]{some results}.) \\~\\
The author believes that to get a better understanding of $\mathfrak{h}$ one should study questions along these lines, and he intends to develop this point of view in a future article.
\section{Acknowledgement}
I am thankful to Ananyo Kazi for pointing out Sen's work `Ramification in $p$-adic Lie extensions' (1972).
\section{Introduction}
\label{sec:intro}
Optimization problems with a conic quadratic objective arise often
when modeling uncertainty with a mean-risk utility.
We motivate such a model for an investment problem with a parametric Value-at-Risk (VaR) minimization objective.
Given random variables $\ell_i, \ i \in N,$ representing the uncertain loss in asset $i$,
let $y_i$ denote the amount invested in asset $i \in N$.
Then, for small $\epsilon > 0$,
minimizing the Value-at-Risk with confidence level $1-\epsilon$ is stated as
\begin{center}
\begin{minipage}{0.8\textwidth}
\tagsleft@true
\begin{align*}
\label{eq:VaR} \tag{VaR}
\begin{split}
\zeta(\epsilon) = \min \ & \bigg \{ z : \operatorname{\mathbf{Prob}} \left( \ell'y > z \right) \leq \epsilon, \ \ y \in Y \bigg \} ,
\end{split}
\end{align*}
\tagsleft@false
\end{minipage}
\end{center}
\noindent where losses greater than $\zeta(\epsilon)$ occur with probability no more than $\epsilon$.
Here, $Y$ represents the set of feasible investments.
If $\ell_i$'s are independent normally distributed random variables with mean $\mu_i$ and variance $\sigma^2_i$, problem \eqref{eq:VaR} is equivalent to the following mean-risk optimization problem:
\begin{align} \label{eq:conic-opt}
\min \ & \bigg \{ \mu' y + \Omega \sqrt{\sum_{i \in N} \sigma_i^2 y_i^2} : \ y \in Y \bigg \},
\end{align}
where $\Omega = \Phi^{-1}(1-\epsilon)$ and $\Phi$ is the cumulative distribution function (c.d.f.) of the standard normal distribution \cite{Birge:SPbook}.
If only the mean and variance of the distribution are known, one can write a robust version by letting $\Omega = \sqrt{(1-\epsilon)/\epsilon}$, which provides an upper bound on the worst-case VaR
\cite{bertsimas.popescu:05,ghaoui.etal:03}.
Alternatively, if $\ell_i$'s are independent and symmetric with support $[u_i - \sigma_i, u_i + \sigma_i]$, then letting $\Omega = \sqrt{\ln(1/\epsilon)}$ gives an upper bound on the worst-case VaR as well \cite{BN:robust-mp}. Hence, under different assumptions on the random variable $\ell$, one arrives at different instances of the mean-risk model \eqref{eq:conic-opt} with a conic quadratic objective.
Ahmed \cite{ahmed:06} studies the complexity and tractability of various stochastic objectives for mean-risk optimization.
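As a quick numerical illustration (not part of the model itself), the following Python snippet evaluates the three choices of $\Omega$ above at $\epsilon = 0.05$ using only the standard library:

```python
from math import log, sqrt
from statistics import NormalDist

def omega_normal(eps):
    # Omega = Phi^{-1}(1 - eps): exact for independent normal losses
    return NormalDist().inv_cdf(1 - eps)

def omega_robust(eps):
    # Omega = sqrt((1 - eps)/eps): worst-case VaR bound with known mean/variance
    return sqrt((1 - eps) / eps)

def omega_symmetric(eps):
    # Omega = sqrt(ln(1/eps)): worst-case VaR bound for symmetric bounded losses
    return sqrt(log(1 / eps))

eps = 0.05
print(omega_normal(eps), omega_symmetric(eps), omega_robust(eps))
```

For $\epsilon = 0.05$ these evaluate to roughly $1.645$, $1.731$, and $4.359$, respectively, quantifying the conservatism each distributional assumption buys.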
The objective of the mean-risk optimization problem \eqref{eq:conic-opt} is a conic quadratic function in $y$, hence convex.
If the feasible set $Y$ is a tractable convex set as well, then \eqref{eq:conic-opt} is an efficiently-solvable convex optimization problem \cite{NN:convex-book}. In practice, though, most problems are accompanied
by non-convex side constraints, such as a restriction on the maximum number of non-zero variables or fixed charges \cite{AAG:matchup,AAG:cruise,AG:arcset, benson.saglam:14,B:quad-mip,bonami.lejeune:09,DKRA:ramping} that are needed to obtain more realistic and implementable solutions. To model such non-convexities it is convenient to introduce auxiliary binary variables $x_i, \ i \in N,$ to indicate whether $y_i$ is non-zero or not.
The so-called \textit{on-off constraints} $0 \le y_i \leq u_i x_i$, where $u_i$ is an upper bound on $y_i$, $i \in N$, model whether asset $i$ is in the solution or not. By appropriately scaling $y_i$, we assume, without loss of generality, that $u_i = 1$ for all $i \in N$. The non-convexity introduced by the on-off constraints is a major challenge in solving practical mean-risk optimization problems. In order to address this difficulty,
in this paper, we derive strong convex relaxations for the conic quadratic mixed-integer set with indicator variables:
\begin{align}
F = \bigg \{ (x,y,z) \in \{0,1\}^N \times \ensuremath{\mathbb{R}}^N_+ \times \ensuremath{\mathbb{R}}_+: \sum_{i \in N} a_i y_i^2 \le z^2, \ \mathbf{0} \le y \le x \bigg \}.
\end{align}
Problem \eqref{eq:conic-opt} is a special case of the mean-risk optimization problem
\begin{align}
\label{eq:cov}
\begin{split}
\ \min & \bigg \{ \mu' y + \Omega \sqrt{y' Q y}: y \in Y \bigg \}
\end{split}
\end{align}
with a positive semidefinite covariance matrix $Q$.
By decomposing $Q = V + D $, where $V, D \succeq 0 $ and $D$ is a diagonal matrix,
problem \eqref{eq:cov} is equivalently written as
\begin{center}
\begin{minipage}{0.8\textwidth}
\tagsleft@true
\begin{align*}
\label{eq:cqip-r}
\begin{split}
\min \ & \mu' y+ \Omega z \\
\text{s.t. } & y'D y + s^2 \le z^2 \\
& y' V y \le s^2 \\
& y \in Y.
\end{split}
\end{align*}
\tagsleft@false
\end{minipage}
\end{center}
Indeed, for high-dimensional problems such a decomposition is readily available, as a low-rank factor covariance matrix $V$ is estimated separately from the residual variance matrix $D$ to avoid ill-conditioning \cite{grinold2000active}. Observe that the first constraint above is a conic quadratic constraint with a diagonal matrix.
Therefore, the valid inequalities derived here for the diagonal case can be applied more generally in the presence of correlations after constructing a suitable diagonal relaxation. We provide computational experiments on the application of the results for the general case with correlations as well.
\subsubsection*{Literature}
Utilizing diagonal matrices is standard for constructing convex relaxations in binary quadratic optimization \cite{anstreicher2012convex, poljak1995convex}. In particular, for $x \in \{0,1\}^n$,
\[
x'Qx \le z \iff x'(Q-D)x + \operatorname{diag}(D)'x \le z
\]
with a diagonal matrix $D$ satisfying $Q-D \succeq 0$.
This transformation is based on the ideal (convex hull) representation of the
separable quadratic term $x'Dx$ as a linear term $\operatorname{diag}(D)'x$ for $x \in \{0,1\}^n$.
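The identity underlying this transformation, $x_i^2 = x_i$ for binary $x$, can be checked directly. The Python sketch below (illustrative only; the matrix $Q$ and diagonal $D$ are arbitrary choices, and no PSD condition is needed for the identity itself, only for convexity of the relaxation) verifies $x'Qx = x'(Q-D)x + \operatorname{diag}(D)'x$ over all binary vectors:

```python
from itertools import product

def quad(M, x):
    # x' M x for a dense matrix M given as a list of rows
    n = len(x)
    return sum(x[i] * M[i][j] * x[j] for i in range(n) for j in range(n))

Q = [[4.0, 1.0, -2.0],
     [1.0, 3.0, 0.5],
     [-2.0, 0.5, 5.0]]
d = [1.5, 2.0, 0.5]   # an arbitrary diagonal shift
QmD = [[Q[i][j] - (d[i] if i == j else 0.0) for j in range(3)] for i in range(3)]

for x in product((0, 1), repeat=3):
    lhs = quad(Q, x)
    rhs = quad(QmD, x) + sum(d[i] * x[i] for i in range(3))
    assert abs(lhs - rhs) < 1e-12   # x_i^2 = x_i makes the two sides agree
```

The condition $Q - D \succeq 0$ matters only for preserving convexity of the continuous relaxation; the identity itself holds for any diagonal $D$.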
A similar approach is also available for convex quadratic optimization with indicator variables. For $x \in \{0,1\}^n$ and $y \in \ensuremath{\mathbb{R}}^n \text{ s.t. } \mathbf{0} \le y \le x$, we have
\[
y'Qy \le z \iff y'(Q-D)y + \operatorname{diag}(D)'t \le z, \ y_i^2 \le x_i t_i
\]
with $t \in \ensuremath{\mathbb{R}}^n_+$ \cite{akturk.atamturk.gurel:09,gunluk.linderoth:10}. This transformation is based on the ideal representation of each
quadratic term $D_{ii}y_i^2$ subject to on-off constraints as a linear term $D_{ii}t_i$ along with a rotated cone constraint $y_i^2 \le x_i t_i$.
Decomposing $Q$ for diagonalization is also studied for an effective application of
linear perspective cuts \cite{frangioni.gentile:07}.
For the conic quadratic constraint $\sqrt{x'Dx} \le z$, however, the terms are \textit{not} separable even for the diagonal case, and simple transformations as in the quadratic cases above are not sufficient to arrive at an ideal convex reformulation. For the pure binary case, Atamt\"{u}rk and Narayanan \cite{atamturk.narayanan:08} exploit the submodularity of the underlying set function to describe its
convex lower envelope via polymatroid inequalities. Atamt\"urk and G\'omez \cite{atamturk.gomez:16} give
strong valid inequalities for the mixed $0-1$ case without the on-off constraints.
The ideal (convex hull) representation for the conic quadratic mixed $0-1$ set with indicator variables $F$ remains an open question. We show, however, that exploiting the submodularity of the underlying set function for the $0-1$ restrictions is critical in deriving strong convex relaxations for $F$.
\autoref{tab:review} summarizes the results for the related sets described above.
In addition, general conic mixed-integer cuts \cite{AN:conicmir:ipco}, lift-and-project cuts \cite{CI:cmip}, disjunctive cuts \cite{terlaky:disj,KilincKarzan2015} are also applicable to the conic mixed-integer set $F$ considered here.
\begin{table}
\centering
\caption{Convex hull representations for $x \in \{0,1\}^n, y \in \ensuremath{\mathbb{R}}^n_+, z \in \ensuremath{\mathbb{R}}_+$.}
\label{tab:review}
{\setlength{\extrarowheight}{5pt}
\begin{tabular}{l|c|c}
\hline \hline
& Separable Quadratic & Conic Quadratic \\
\hline
Pure $0-1$ & $x'Dx \le z$: \cite{anstreicher2012convex,poljak1995convex} & $\sqrt{x'Dx } \le z$: \cite{atamturk.narayanan:08} \\
Mixed $0-1$ & $y'Dy \le z, \ \mathbf{0} \le y \le x$: \cite{akturk.atamturk.gurel:09,gunluk.linderoth:10} &
$\sqrt{y'Dy} \le z, \ \mathbf{0} \le y \le x:$ \ \ ? \\
\hline \hline
\end{tabular}}
\end{table}
\subsubsection*{Notation}
Throughout, we denote by $\mathbf{0}$ the vector of zeroes, by $\mathbf{1}$ the vector of ones,
and by $e_i$ the $i$th unit vector. $N := \{1, 2, \ldots, n\}$ and
$[k] := \{1, 2, \ldots, k\}$. For a vector $a \in \ensuremath{\mathbb{R}}^N$, let $a(S) = \sum_{i \in S} a_i$, $S \subseteq N$.
\subsubsection*{Outline}
The remainder of the paper is organized as follows. In Section~\ref{sec:prelim} we review the polymatroid inequalities for
the binary restriction of the mean-risk problem and give a polynomial algorithm for an optimization problem over $F$.
In Section~\ref{sec:cuts} we introduce three classes of convex valid
inequalities for $F$ that are obtained from binary restrictions of $F$ through lifting the polymatroid inequalities.
In Section~\ref{sec:computation} we present computational experiments performed for testing the effectiveness of the proposed inequalities in solving mean-risk optimization problems with on-off constraints. We conclude with a few final remarks in \autoref{sec:conclusion}.
\section{Preliminaries}
\label{sec:prelim}
\subsection{Polymatroid inequalities}
\label{sec:prelim:polymatroid}
In this section, we recall the main results from Atamt\"urk and Narayanan \cite{atamturk.narayanan:08} that will be used in this paper.
Given $\sigma \ge 0$ and $a_i > 0, \ i \in N$, consider the set
\begin{align}
K_\sigma = \bigg \{ (x,z) \in \{0,1\}^N \times \ensuremath{\mathbb{R}}_+: \sqrt{\sigma + \sum_{i \in N} a_i x_i } \le z \bigg \}.
\end{align}
Observe that $K_0$ is the binary restriction of $F$ obtained by setting $y = x$ (note that $x_i^2 = x_i$ for binary $x$).
For a given permutation $\left((1),(2),\ldots,(n)\right)$ of $N$, let
\begin{align}
\sigma_{(k)}&= a_{(k)} + \sigma_{(k-1)}, \text{ and } \sigma_{(0)} = \sigma,\notag\\
\pi_{(k)}&=\sqrt{\sigma_{(k)}}-\sqrt{\sigma_{(k-1)}},\label{eq:definitionPi}
\end{align}
and define the \emph{polymatroid inequality} as
\begin{equation}
\label{eq:extendedPolymatroidInequality}
\sum_{i=1}^n \pi_{(i)}x_{(i)}\leq z - \sqrt{\sigma}.
\end{equation}
Let $\Pi_\sigma$ be the set of such coefficient vectors $\pi$ for \textit{all} permutations of $N$.
\begin{prop}[Convex hull of $K_\sigma$]
\label{prop:convexHullK}
$$\text{conv}(K_\sigma)=\left\{(x,z)\in [0,1]^N\times \ensuremath{\mathbb{R}}_+:\pi'x \leq z - \sqrt{\sigma}, \;\; \forall \pi\in \Pi_\sigma \right\}.$$
\end{prop}
The set function defining $K_\sigma$ is non-decreasing and submodular; therefore, the vectors in $\Pi_\sigma$ are the extreme points of a polymatroid \cite{edmonds:70}. As shown by Edmonds, the maximization of a linear function over a polymatroid can be solved by the greedy algorithm; therefore, a point $\bar x \in \ensuremath{\mathbb{R}}_+^n$ can be separated from $\text{conv}(K_\sigma)$ via the greedy algorithm by sorting $\bar x_i$ in non-increasing order in $O(n \log n)$ time.
\begin{prop}[Separation]
\label{prop:separation}
A point $\bar{x}\not \in \text{conv}(K_\sigma)$ such that $\bar{x}_{(1)}\geq \bar{x}_{(2)} \geq \ldots \geq \bar{x}_{(n)}$ is separated from $\ensuremath{\operatorname{conv}}(K_\sigma)$ by inequality \eqref{eq:extendedPolymatroidInequality}.
\end{prop}
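To make the greedy separation concrete, here is a small Python sketch (sample data; an illustration only) that sorts a fractional point in non-increasing order, builds $\pi$ via the telescoping differences \eqref{eq:definitionPi}, and checks whether $\pi'\bar x \le \bar z - \sqrt{\sigma}$ is violated:

```python
import math

def separate(a, sigma, xbar, zbar):
    """Greedy separation for conv(K_sigma): build pi for the permutation that
    sorts xbar non-increasingly and return (pi, amount of violation)."""
    order = sorted(range(len(a)), key=lambda i: -xbar[i])
    pi = [0.0] * len(a)
    prev = sigma
    for i in order:
        cur = prev + a[i]
        pi[i] = math.sqrt(cur) - math.sqrt(prev)   # telescoping coefficients
        prev = cur
    violation = sum(pi[i] * xbar[i] for i in range(len(a))) - (zbar - math.sqrt(sigma))
    return pi, violation

# sample fractional data (valid cut only for the binary restriction y = x)
a = [22, 18, 21, 19, 17]
xbar = [1.0, 0.3817, 0.6543, 0.3616, 0.8083]
pi, viol = separate(a, 0.0, xbar, 6.8705)
print(round(viol, 4))   # positive, so the sorted polymatroid inequality cuts the point
```

By \autoref{prop:separation} the sorted permutation yields a separating inequality whenever one exists, so a single $O(n \log n)$ sort suffices.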
Atamt\"urk and Narayanan \cite{atamturk.narayanan:08} also consider a mixed-integer extension and give valid inequalities for the mixed-integer set
\begin{equation*}
L_\sigma=\left\{(x,y,z)\in \{0,1\}^N\times [0,1]^M \times \ensuremath{\mathbb{R}}_+: {\sqrt{\sigma + \sum_{i\in N}a_ix_i+\sum_{i\in M}c_iy_i^2}}\leq z\right\},
\end{equation*}
where $c_i > 0, \ i \in M$.
Without loss of generality, the upper bounds of the continuous variables in $L_\sigma$ are set to one by scaling.
\begin{prop}[Valid inequalities for $L_\sigma$]
\label{prop:validIneqBounded1}
For $T\subseteq M$ inequalities
\begin{equation}
\label{eq:polymatroidBoundedDominated}
\pi'x + \sqrt{\sigma + \sum_{i\in T}c_iy_i^2} \leq z, \quad \pi\in \Pi_{\sigma + c(T)}
\end{equation}
are valid for $L_\sigma$.
\end{prop}
\subsection{Optimization}
\label{sec:prelim:solve}
In this section, we consider the optimization problem
\begin{align*}
(\ensuremath{\text{OPT}}) \ \ \ \ \
\min \bigg \{ c'x + d'y + \sqrt{\sigma + \sum_{i \in N} a_i y_i^2} : \mathbf{0} \le y \le x, \ x \in \{0,1\}^N, \ y \in \ensuremath{\mathbb{R}}_+^N \bigg \},
\end{align*}
which will be useful in proving the validity of the inequalities for $F$.
We characterize the optimal solutions and give a polynomial algorithm for (\ensuremath{\text{OPT}}).
We assume that $\sigma \ge 0, \ a_i > 0, \ i \in N$ to ensure a real-valued objective.
Without loss of generality, we assume that
$c_i > 0, \ i \in N$, otherwise, we may set $x_i$ to one;
$d_i < 0, \ i \in N$, otherwise, we may set $y_i$ to zero; and
$c_i + d_i < 0, \ i \in N$, otherwise, we may set both $x_i$ and $y_i$ to zero.
Without loss of generality, assume that the variables are indexed so that
\[\frac{c_1 + d_1}{a_1} \leq \frac{c_2 + d_2}{a_2} \leq \cdots \leq \frac{c_n+d_n}{a_n} \cdot \]
The following proposition shows that the binary part of an optimal solution to (\ensuremath{\text{OPT}}) is a vector of consecutive ones,
followed by consecutive zeroes.
\begin{prop} \label{prop:consec1}
If $(x^*, y^*)$ is an optimal solution to (\ensuremath{\text{OPT}}), then $x^*_k = 1$ for some $k \in N$ implies $x^*_i = 1$ for all $i \in [k-1]$.
\end{prop}
\begin{proof}
Suppose for contradiction that $x^*_k = 1$, but $x^*_j = 0$ for some $j < k$.
Consider two feasible points $(x',y')$ and $(x'', y'')$
with respective objective values $z'$ and $z''$, constructed as:
\begin{align*}
(x', y') &= (x^*, y^*) + (e_j, e_j), \\
(x'', y'') &= (x^*, y^*) - (e_k, y^*_k e_k ).
\end{align*}
We will show that $z' < z^*$, contradicting the optimality of $(x^*,y^*)$. To this end,
let $\xi := \sigma + \sum_{i \in N} a_i {y^*_i}^2$, and
\begin{alignat*}{3}
\delta_1 &:= z^* - z'' = c_k + d_k y^*_k+ \sqrt{\xi} - \sqrt{\xi - a_k {y^*_k}^2}, \\
\delta_2 &:= z' - z^* = c_j + d_j + \sqrt{\xi+a_j} - \sqrt{\xi}.
\end{alignat*}
As $(x'', y'')$ is a feasible solution, $\delta_1 \leq 0$. Also note that $y_k^* > 0$
as otherwise $x_k^*$ would be zero in an optimal solution since $c_k > 0$.
Now, we establish that
\begin{align*}
\frac{\delta_1}{a_k {y^*_k}^2} - \frac{\delta_2}{a_j}
= \left( \frac{c_k + d_k y^*_k}{a_k {y^*_k}^2} - \frac{c_j + d_j}{a_j}\right)
+ \left( \frac{\sqrt{\xi} - \sqrt{\xi-a_k{y^*_k}^2}}{a_k{y^*_k}^2} - \frac{\sqrt{\xi+a_j} - \sqrt{\xi}}{a_j} \right) > 0,
\end{align*}
from the inequality
\[\frac{c_k + d_k y^*_k}{a_k {y^*_k}^2} \geq \frac{c_k + d_k}{a_k y^*_k} \geq \frac{c_j + d_j}{a_j}, \]
which holds by the indexing assumption and that $0 < y^*_k \le 1$, and from the inequality
\[\frac{\sqrt{\xi} - \sqrt{\xi-a_k{y^*_k}^2}}{a_k{y^*_k}^2} - \frac{\sqrt{\xi+a_j} - \sqrt{\xi}}{a_j} > 0,\]
which follows from the strict concavity of the square root function. Therefore, we have
$\frac{\delta_2}{a_j} < \frac{\delta_1}{a_k {y^*_k}^2} \leq 0$, implying $\delta_2 < 0$, which contradicts the optimality of $(x^*,y^*)$.
\end{proof}
\begin{prop}
\label{prop:opt-compexity}
There is an $O(n^2)$ algorithm to solve (\ensuremath{\text{OPT}}).
\end{prop}
\begin{proof}
\autoref{prop:consec1} implies that
there exist only $n+1$ possible candidates for optimal $x$, i.e., $\mathbf{0}$ and $ \sum_{i=1}^{k} e_i$ for $k \in N$.
After a single sort of the indices in $O(n \log n)$ time,
for each candidate $x$ the resulting convex optimization problem in $y$ can be solved in $O(n)$ time
with \autoref{alg:KKT} in the Appendix. Therefore, an optimal solution to (\ensuremath{\text{OPT}}) can be found in $O(n^2)$.
\end{proof}
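The enumeration in the proof can be sketched in a few lines of Python. The inner minimization below uses a simple KKT fixed-point iteration rather than the exact $O(n)$ routine of \autoref{alg:KKT}, so it is an approximate illustration only, under the sign assumptions $c_i > 0$, $d_i < 0$, $c_i + d_i < 0$ stated above:

```python
import math

def inner_value(sub, a, d, sigma, iters=200):
    """Approximately minimize sum_{i in sub} d_i y_i + sqrt(sigma + sum a_i y_i^2)
    over 0 <= y <= 1 via a fixed point on xi = sigma + sum a_i y_i^2.
    (A numerical sketch; the Appendix gives an exact O(n) routine.)"""
    xi = sigma + sum(a[i] for i in sub)   # start from y = 1, an upper bound on xi
    y = {i: 1.0 for i in sub}
    for _ in range(iters):
        # stationarity d_i + a_i y_i / sqrt(xi) = 0, truncated to the box [0, 1]
        y = {i: min(1.0, -d[i] * math.sqrt(xi) / a[i]) for i in sub}
        xi = sigma + sum(a[i] * y[i] ** 2 for i in sub)
    return sum(d[i] * y[i] for i in sub) + math.sqrt(xi), y

def solve_opt(a, c, d, sigma=0.0):
    """Enumerate the n+1 prefix supports (x = 0, or the first k variables after
    sorting by (c_i + d_i)/a_i, as proved in the text) and keep the best."""
    n = len(a)
    order = sorted(range(n), key=lambda i: (c[i] + d[i]) / a[i])
    best = (math.sqrt(sigma), [0] * n, [0.0] * n)   # the x = y = 0 candidate
    for k in range(1, n + 1):
        sub = order[:k]
        val, y = inner_value(sub, a, d, sigma)
        val += sum(c[i] for i in sub)
        if val < best[0]:
            best = (val,
                    [1 if i in sub else 0 for i in range(n)],
                    [y.get(i, 0.0) for i in range(n)])
    return best

val, x, y = solve_opt([4.0, 1.0], [1.0, 1.0], [-3.0, -2.0], sigma=1.0)
print(x, round(val, 4))
```

On this toy instance the best candidate selects both variables at their upper bounds; the exact appendix routine would replace the fixed-point loop.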
\section{Lifted Polymatroid Inequalities}
\label{sec:cuts}
In this section, we derive three classes of valid inequalities for $F$ by lifting
the polymatroid inequalities \eqref{eq:extendedPolymatroidInequality} described in Section~\ref{sec:prelim:polymatroid} from
specific restrictions of the feasible set $F$. The first class of inequalities are linear, whereas the other two are nonlinear convex inequalities.
\subsection{Lifted Linear Polymatroid Inequalities }
\label{sec:lift1}
Consider the restriction of $F$ obtained by setting the continuous variables $y$ to their binary upper bounds $x$. It
follows from Section~\ref{sec:prelim:polymatroid} that for any permutation $\left((1),(2), \ldots, (n)\right)$ of $N$,
the polymatroid inequality
\begin{align}
\pi' x \leq z - \sqrt{\sigma}\label{ineq:EP}
\end{align}
with $\pi_{(i)} = \sqrt{\sigma_{(i)}} - \sqrt{\sigma_{(i-1)}}, \ i=1,2,\ldots,n$,
is valid for the restriction with $y=x$, but not necessarily for $F$.
In this section, we lift inequality \eqref{ineq:EP} to obtain the linear valid inequality
\begin{align} \label{ineq:EPlift}
\pi' x \leq z + \alpha' (x-y) - \sqrt{\sigma},
\end{align}
for $F$ with coefficients $\alpha_{(i)} = {a_{(i)}}/{\sqrt{\sigma_{(i)}}}, \ i=1,2,\ldots,n$.
\begin{prop}
\label{prop:validity}
Inequality \eqref{ineq:EPlift} with $\alpha$ and $\pi$ defined as above is valid for $F$.
\end{prop}
\begin{proof}
Consider the optimization problem over $F$:
\begin{align*}
\zeta = \max \ & \pi' x - \alpha' (x-y) - z + \sqrt{\sigma} \\
\text{s.t. } & \sigma + \sum_{i \in N} a_i y_i^2 \le z^2 \\
& \mathbf{0} \leq y \leq x \\
& x \in \{0,1\}^N, \ y \in \ensuremath{\mathbb{R}}_+^N, \ z \in \ensuremath{\mathbb{R}}_+.
\end{align*}
Inequality \eqref{ineq:EPlift} is valid for $F$ iff $\zeta \le 0$.
By plugging in the values for $\pi, \ \alpha$ and eliminating $z$, the problem is equivalently written as
\begin{center}
\begin{minipage}{0.9\textwidth}
\tagsleft@true
\begin{align*}
\label{eq:validity}
\zeta = \max \ & \sum_{i \in [n]}\bigg (\sqrt{\sigma_{(i)}} - \sqrt{\sigma_{(i-1)}} -\frac{a_{(i)}}{\sqrt{\sigma_{(i)}}} \bigg ) x_{(i)} \\
\tag{$V$} \ & + \sum_{i \in [n]} \frac{a_{(i)}}{\sqrt{\sigma_{(i)}}} y_{(i)} - \sqrt{\sigma+ \sum_{i \in [n]}a_{(i)} y_{(i)}^2} + \sqrt{\sigma} \\
\text{s.t. } & \mathbf{0} \leq y \leq x \\
& x \in \{0,1\}^N, \ y \in \ensuremath{\mathbb{R}}_+^N.
\end{align*}
\tagsleft@false
\end{minipage}
\end{center}
Note that \eqref{eq:validity} is a special case of (\ensuremath{\text{OPT}}) with coefficients
\begin{align*}
c_{(i)} = - \bigg (\sqrt{\sigma_{(i)}} - \sqrt{\sigma_{(i-1)}} -\frac{a_{(i)}}{\sqrt{\sigma_{(i)}}} \bigg ), \text { and }
d_{(i)} = -\frac{a_{(i)}}{\sqrt{\sigma_{(i)}}}, \ i \in [n].
\end{align*}
Then
\[
\frac{c_{(i)} + d_{(i)}}{a_{(i)}} = - \frac{\sqrt{\sigma_{(i)}} - \sqrt{\sigma_{(i-1)}} }{a_{(i)}}, \ i \in [n],
\]
\noindent
and we have
\begin{align*}
\frac{c_{(i)}+d_{(i)}}{a_{(i)}} \leq \frac{c_{(j)}+d_{(j)}}{a_{(j)}}, \ \text{ for } i \leq j.
\end{align*}
By \autoref{prop:consec1}, there exists an optimal solution $(x^*, y^*)$ to ($V$) such that $x^* = \sum_{i=1}^{m} e_{(i)}$ for some $m \in [n]$. Then, $y^*$ is an optimal solution to the following convex problem:
\begin{center}
\begin{minipage}{0.99\textwidth}
\tagsleft@true
\begin{align*}
\begin{split}
\max \ & \sum_{i \in [m]} (\sqrt{\sigma_{(i)}} - \sqrt{\sigma_{(i-1)}}) -
\sum_{i \in [m]} \frac{a_{(i)}}{\sqrt{\sigma_{(i)}}} (1 - y_{(i)}) - \sqrt{\sigma + \sum_{i \in [m]} a_{(i)} y_{(i)}^2} + \sqrt{\sigma} \\
\text{s.t. } & \mathbf{0} \leq y \leq \mathbf{1}.
\end{split}
\end{align*}
\tagsleft@false
\end{minipage}
\end{center}
\noindent
Its KKT conditions are similar to \eqref{FOC1}--\eqref{CS} and are satisfied by $({y}, {\lambda}, {\mu})$ such that
\begin{align*}
{y}_i& = 1, \ i \in [m], \\
{\lambda}_i &= 0, \ i \in [m], \\
{\mu}_i &= \frac{a_{(i)}}{\sqrt{\sigma_{(i)}}} - \frac{a_{(i)}}{\sqrt{\sigma_{(m)}}} \geq 0, \ i \in [m].
\end{align*}
Therefore, there exists an optimal solution $(x^*, y^*) = (\sum_{i \in [m]} e_{(i)}, \sum_{i \in [m]} e_{(i)})$ for some $m \in [n]$ with binary $y^*$, implying $\zeta = 0$, i.e., the validity of \eqref{ineq:EPlift}.
\end{proof}
\begin{remark} Observe that
the proof of \autoref{prop:validity} implies that inequality \eqref{ineq:EPlift} is tight for the following $n+1$
affinely independent points of $F$:
\begin{align*}
(x,y,z) &= (\mathbf{0}, \mathbf{0}, \sqrt{\sigma}); \\
(x,y,z) &= \Big(\sum_{k \leq i}e_{(k)}, \sum_{k \leq i}e_{(k)}, \sqrt{\sigma_{(i)}} \Big), \ \ i \in [n]. \\
\end{align*}
\end{remark}
\begin{example}
\label{ex:EPlift}
Consider an instance of $F$ with $a = [22, 18, 21, 19, 17]$, and suppose that
the following fractional point is contained in its continuous relaxation:
\begin{align*}
\bar{x} = \bar{y} = \begin{bmatrix}
1, 0.3817, 0.6543, 0.3616, 0.8083
\end{bmatrix},
\bar{z} = 6.8705.
\end{align*}
For the permutation $(1,3,5,2,4)$, $\pi$ is computed as
\begin{align*}
\pi_{(1)} &= \pi_1 = \sqrt{a_1} - \sqrt{0} = \sqrt{22} = 4.6904, \\
\pi_{(2)} &= \pi_3 = \sqrt{a_1+a_3} - \sqrt{a_1} = \sqrt{43} - \sqrt{22} = 1.8670, \\
& \vdots \\
\pi_{(5)} &= \pi_4 = \sqrt{a_1+ \cdots + a_5} - \sqrt{a_1 + a_2 + a_3 + a_5 } = \sqrt{97} - \sqrt{78} = 1.0171.
\end{align*}
The lifting coefficients $\alpha$ are computed accordingly, and we get inequality \eqref{ineq:EPlift} with
\begin{align*}
\pi &= \begin{bmatrix}
4.6904, 1.0858, 1.8670, 1.0171, 1.1885
\end{bmatrix}, \\
\alpha &= \begin{bmatrix}
4.6904, 2.0381, 3.2025, 1.9292, 2.1947
\end{bmatrix}.
\end{align*}
The fractional point $(\bar{x}, \bar{y}, \bar{z})$ is cut off by \eqref{ineq:EPlift}
as $\pi ' \bar{x} - \alpha ' (\bar{x} - \bar{y}) - \bar{z} = 0.7844 > 0 $.
\end{example}
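The arithmetic in the example can be reproduced mechanically; the Python sketch below (an illustration only) builds $\pi$ and $\alpha$ from a permutation and evaluates the violation of inequality \eqref{ineq:EPlift} at the fractional point:

```python
import math

def lifted_coeffs(a, perm, sigma=0.0):
    """pi and alpha of the lifted linear polymatroid inequality for a
    permutation given as 0-based variable indices."""
    pi, alpha = [0.0] * len(a), [0.0] * len(a)
    prev = sigma
    for i in perm:
        cur = prev + a[i]
        pi[i] = math.sqrt(cur) - math.sqrt(prev)
        alpha[i] = a[i] / math.sqrt(cur)
        prev = cur
    return pi, alpha

a = [22, 18, 21, 19, 17]
pi, alpha = lifted_coeffs(a, [0, 2, 4, 1, 3])   # the permutation (1,3,5,2,4)
xbar = ybar = [1.0, 0.3817, 0.6543, 0.3616, 0.8083]
zbar = 6.8705
violation = sum(pi[i] * xbar[i] - alpha[i] * (xbar[i] - ybar[i])
                for i in range(5)) - zbar
print(round(violation, 4))   # positive, so the fractional point is cut off
```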
Although inequalities \eqref{ineq:EPlift} cut off points of the continuous relaxation with fractional $x$, unlike for the binary case
$K_\sigma$, adding all $n!$ inequalities \eqref{ineq:EPlift}
is not sufficient to describe \ensuremath{\operatorname{conv}}(F) as \ensuremath{\operatorname{conv}}(F) is not a polyhedral set.
Therefore, in the next two subsections we present two nonlinear convex generalizations of inequalities \eqref{ineq:EPlift}.
\subsection{Lifted Nonlinear Polymatroid Inequalities I}
\label{sec:lift2}
The second class of lifted inequalities is obtained by applying the procedure described in Section~\ref{sec:lift1}
for a subset of the variables. For $S \subseteq N$, introducing an auxiliary variable $t \in \ensuremath{\mathbb{R}}_+$, let us rewrite
the conic constraint $\sum_{i\in N} a_i y_i^2 \le z^2$ as
\begin{center}
\begin{minipage}{0.8\textwidth}
\tagsleft@true
\begin{align*}
\begin{split}
& {t^2 + \sum_{i\in N\setminus S} a_i y_i^2} \le z^2, \\
& {\sum_{i \in S} a_i y_i^2} \le t^2.
\end{split}
\end{align*}
\tagsleft@false
\end{minipage}
\end{center}
Applying \autoref{prop:validity} to the relaxation defined by constraints
$${\sum_{i \in S} a_i y_i^2} \le t^2, \ \mathbf{0} \leq y_S \leq x_S, \ x_S \in \{0,1\}^S, y_S \in \ensuremath{\mathbb{R}}_+^S, t \in \ensuremath{\mathbb{R}}_+ $$
for each permutation $((1), (2), \ldots, (|S|))$ of $S$,
we generate a lifted polymatroid inequality \eqref{ineq:EPlift} of the form
\begin{align*}
\pi_S ' x_S \leq t + \alpha_S ' (x_S - y_S)
\end{align*}
where ${\pi_S}_{(i)} = \sqrt{\sigma_{S(i)}} - \sqrt{\sigma_{S(i-1)}}$, ${\alpha_S}_{(i)} = a_{(i)} / \sqrt{\sigma_{S(i)}}$,
and the partial sums are defined as
$\sigma_{S(i)} = a_{(i)} + \sigma_{S(i-1)} \text{ for } i=1,2,\ldots,|S|$ with $\sigma_{S(0)} = 0$.
Eliminating the auxiliary variable $t$, we obtain the following class of conic quadratic valid inequalities for $F$.
\begin{prop}
For $S \subseteq N$, the conic quadratic inequality
\begin{align}
\label{ineq:EP_subset}
(\pi_S ' x_S - \alpha_S ' (x_S - y_S))^2 + \sum_{i\in N \setminus S} a_i y_i^2 \le z^2
\end{align}
with $\pi_S$ and $\alpha_S$ defined above is valid for $F$.
\end{prop}
Note that inequality \eqref{ineq:EP_subset} is convex since it is conic quadratic.
It is equivalent to \eqref{ineq:EPlift} for $S = N$ and to the original constraint for $S = \emptyset$.
Otherwise, it is distinct from both.
\begin{remark}
The following $n+1$ affinely independent points of $F$
satisfy inequality \eqref{ineq:EP_subset} at equality:
\begin{align*}
(x,y,z) &= (\mathbf{0}, \mathbf{0}, 0); \\
(x,y,z) &= \Big(\sum_{k \le i }e_{(k)}, \sum_{k \le i }e_{(k)}, \sqrt{\sigma_{(i)}} \Big), \ i = 1, 2, \ldots, |S|; \\
(x,y,z) &= (e_i, e_i, \sqrt{a_i}), \ i \in N \setminus S.
\end{align*}
\end{remark}
The following example illustrates a point satisfying inequality \eqref{ineq:EPlift}, but
cut off by inequality \eqref{ineq:EP_subset}.
\begin{example}
\label{ex:EPsubset}
Consider the instance in \autoref{ex:EPlift}, and the fractional point
\begin{align*}
\bar{x} = \bar{y} = (1, 0, 0, 0, 0.8), \ \bar{z} = 5.7341.
\end{align*}
This point satisfies inequality \eqref{ineq:EPlift} generated in \autoref{ex:EPlift}.
Now letting $S = \{ 1, 2, 5 \}$ and using the permutation $(1,5,2)$, $\pi_S$ is computed as
\begin{align*}
{\pi_S}_{(1)} &= \pi_1 = \sqrt{a_1} - \sqrt{0} = \sqrt{22} = 4.6904, \\
{\pi_S}_{(2)} &= \pi_5 = \sqrt{a_1 + a_5} - \sqrt{a_1} = \sqrt{39} - \sqrt{22} = 1.5546, \\
{\pi_S}_{(3)} &= \pi_2 = \sqrt{a_1 + a_5 + a_2 } - \sqrt{ a_1 + a_5} = \sqrt{57} - \sqrt{39} = 1.3048.
\end{align*}
Consequently, we obtain inequality \eqref{ineq:EP_subset} with coefficients
\begin{align*}
\pi &= \begin{bmatrix}
4.6904, 1.3048, 0, 0, 1.5546
\end{bmatrix}, \\
\alpha &= \begin{bmatrix}
4.6904, 2.3842, 0, 0, 2.7222
\end{bmatrix}.
\end{align*}
Observe that the fractional point $(\bar{x}, \bar{y}, \bar{z})$ is cut off by inequality \eqref{ineq:EP_subset} as
\[ \sqrt{(\pi_S' \bar{x} - \alpha_S'(\bar{x} - \bar{y}))^2 + \sum_{i \in N \setminus S} a_i \bar{y}_i^2 } - \bar{z} = 0.2 > 0. \]
\end{example}
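The numbers in this example can likewise be verified mechanically. The sketch below (illustrative only, and assuming the lifted linear part is nonnegative at the point tested) evaluates the left-hand side of \eqref{ineq:EP_subset}:

```python
import math

def subset_cut_lhs(a, S_perm, xbar, ybar):
    """Left-hand side of the conic subset inequality: the lifted linear part
    over S plus the untouched quadratic part over the complement of S,
    for a 0-based permutation of S."""
    prev, lin = 0.0, 0.0
    for i in S_perm:
        cur = prev + a[i]
        lin += (math.sqrt(cur) - math.sqrt(prev)) * xbar[i] \
               - (a[i] / math.sqrt(cur)) * (xbar[i] - ybar[i])
        prev = cur
    rest = sum(a[i] * ybar[i] ** 2 for i in range(len(a)) if i not in set(S_perm))
    return math.sqrt(lin ** 2 + rest)

a = [22, 18, 21, 19, 17]
xbar = ybar = [1.0, 0.0, 0.0, 0.0, 0.8]
zbar = 5.7341
lhs = subset_cut_lhs(a, [0, 4, 1], xbar, ybar)   # S = {1, 2, 5}, permutation (1,5,2)
print(round(lhs - zbar, 4))                      # positive: the point is cut off
```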
\subsection{Lifted Nonlinear Polymatroid Inequalities II}
\label{sec:lift3}
The third class of inequalities are derived from a partial restriction of $F$ by setting a subset of the continuous variables to their upper bound. For $S \subseteq N$ and $T \subseteq N \setminus S$, consider the restriction of $F$ with $y_i = x_i, \ i \in S$:
\begin{align*}
\ \ \ & t^2 + \sum_{i\in N \setminus (S \cup T)} a_i y_i^2 \le z^2 \\
\ \ \ & \sum_{i\in S} a_i x_i + \sum_{i\in T} a_i y_i^2 \le t^2 \\
\ignore{(F_S) \quad \quad } & y_i \leq x_i, \ i \in N \setminus S \\
& x \in \{0,1\}^N, \ y \in \ensuremath{\mathbb{R}}_+^N, \ t \in \ensuremath{\mathbb{R}}_+.
\end{align*}
Applying the mixed-integer inequality \eqref{eq:polymatroidBoundedDominated} to the second constraint above, we obtain inequality
\begin{align*}
\pi_S ' x_S + \sqrt{\sum_{i \in T} a_i y_i^2} \leq t,
\end{align*}
where ${\pi_S}_{(i)} = \sqrt{\sigma_{S(i)}} - \sqrt{\sigma_{S(i-1)}}$
and
$\sigma_{S(i)} = a_{(i)} + \sigma_{S(i-1)} \text{ for } i=1,2,\ldots,|S|$ with $\sigma_{S(0)} = a(T)$.
This inequality is valid for the restriction above, but not necessarily for $F$.
Next, we lift it and eliminate the auxiliary variable $t$, to obtain
the third class of valid inequalities
\begin{align}
\label{ineq:EP_mixed}
\bigg (\pi_{S} ' x_S + \sqrt{\sum_{i \in T} a_i y_i^2} - \alpha_S' (x_S - y_S) \bigg )^2 + \sum_{i\in N \setminus (S \cup T)} a_i y_i^2 \leq z^2,
\end{align}
for $F$ with $\alpha_{(i)} = {a_{(i)}}/{\sqrt{\sigma_{S(i)}}}, \ i=1,2,\ldots,|S|$.
\begin{prop}
Inequality \eqref{ineq:EP_mixed} with $\alpha_S$ and $\pi_S$ defined as above is valid for $F$.
\end{prop}
\begin{proof}
It suffices to prove the validity of inequality
\begin{align} \label{mixed-1}
\pi_{S} ' x_S + \sqrt{\sum_{i \in T} a_i y_i^2} - \alpha_S' (x_S - y_S) \le t.
\end{align}
Consider the optimization problem:
\begin{align*}
\zeta = \max \ & \pi_S' x_S - \alpha_S' (x_S-y_S) + \sqrt{\sum_{i \in T} a_i y_i^2} -t \\
\text{s.t. } & \sum_{i \in N} a_i y_i^2 \le t^2 \\
& \mathbf{0} \leq y \leq x \\
& x \in \{0,1\}^N, \ y \in \ensuremath{\mathbb{R}}_+^N, \ t \in \ensuremath{\mathbb{R}}_+.
\end{align*}
Inequality \eqref{mixed-1} is valid for $F$ iff $\zeta \le 0$.
Observing that $x_i^* = y_i^* = 0$ for $i \in N \setminus (S \cup T)$ at an optimal solution $(x^*,y^*)$ and
eliminating $t$, the problem is written as
\begin{center}
\begin{minipage}{0.9\textwidth}
\tagsleft@true
\begin{align*}
\label{eq:validity}
\zeta = \max \ & \pi_S' x_S - \alpha_S' (x_S-y_S) + \sqrt{\sum_{i \in T} a_i y_i^2} - \sqrt{\sum_{i \in S \cup T}a_{i} y_{i}^2} \\
\text{s.t. } & \mathbf{0} \leq y \leq x \\
& x \in \{0,1\}^N, \ y \in \ensuremath{\mathbb{R}}_+^N.
\end{align*}
\tagsleft@false
\end{minipage}
\end{center}
Observe that by concavity of the square root function
we have $y^*_i = 1$, $i \in T$.
The validity of
\[
\pi_{S} ' x_S - \alpha_S' (x_S - y_S) \le t - \sqrt{\sigma}
\]
with $\sigma = a(T)$ and $t \ge \sqrt{\sigma + \sum_{i \in S} a_i y_i^2}$ for this restriction implies that $\zeta \le 0$.
\end{proof}
Note that when $S = \emptyset$ and $T = N$, \eqref{ineq:EP_mixed} is equivalent to the original constraint.
When $S = N$, \eqref{ineq:EP_mixed} is equivalent to \eqref{ineq:EPlift}.
When $S \subseteq N$ and $T = \emptyset$, \eqref{ineq:EP_mixed} is equivalent to \eqref{ineq:EP_subset}.
Otherwise, it is distinct from all three.
\begin{remark}
The following $n+1$ affinely independent points of $F$
satisfy inequality \eqref{ineq:EP_mixed} at equality:
\begin{align*}
(x,y,z) &= (\mathbf{0}, \mathbf{0}, 0); \\
(x,y,z) &= \Big(\sum_{k \in [i]}e_{(k)} + \sum_{j \in T} e_j, \sum_{k \in [i]}e_{(k)} + \sum_{j \in T} e_j, \sqrt{a(i) + a(T)} \Big), \
i = 1,2, \ldots, |S|;\\
(x,y,z) &= (e_i, e_i, \sqrt{a_i}), \ i \in N \setminus S .
\end{align*}
\end{remark}
The following example illustrates inequality \eqref{ineq:EP_mixed} cutting off a fractional point
that is not cut by the previous inequalities.
\begin{example}
\label{ex:EPmixed}
Consider again the instance in \autoref{ex:EPlift}, and the fractional point
\begin{align*}
\bar{x} = \bar{y} = (0.8, 0.5, 1, 0, 1), \ \bar{z} = -0.1780.
\end{align*}
Note that this point satisfies inequalities \eqref{ineq:EPlift} and \eqref{ineq:EP_subset} generated in \autoref{ex:EPlift} and \autoref{ex:EPsubset}.
Letting $S = \{ 1, 2 \}$ and $T = \{3,5\}$, we have $a(T) = a_3 + a_5 = 38$. For the permutation (1,2), $\pi_S$ is computed as
\begin{align*}
{\pi_S}_{(1)} &= \pi_1 = \sqrt{a_1 + a(T)} - \sqrt{a(T)} = \sqrt{60} - \sqrt{38} = 1.5816, \\
{\pi_S}_{(2)} &= \pi_2 = \sqrt{a_1 + a_2 + a(T)} - \sqrt{a_1 + a(T)} = \sqrt{78} - \sqrt{60} = 1.0858
\end{align*}
and we arrive at the corresponding inequality \eqref{ineq:EP_mixed} with coefficients
\begin{align*}
\pi_S &= \begin{bmatrix}
1.5816, 1.0858, 0, 0, 0
\end{bmatrix}, \\
\alpha_S &= \begin{bmatrix}
2.8402, 2.0381, 0, 0, 0
\end{bmatrix}.
\end{align*}
Observe that the point $(\bar{x}, \bar{y}, \bar{z})$ is cut off by \eqref{ineq:EP_mixed} as
\[ \sqrt{ (\pi_{S} ' \bar{x}_S + \sqrt{\sum_{i \in T} a_i \bar{y}_i^2} - \alpha_S ' (\bar{x}_S - \bar{y}_S))^2 + \sum_{i \in N \setminus (S \cup T)} a_i \bar{y}_i^2 }- \bar{z} = 0.4506 > 0. \]
\end{example}
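The only change in the coefficient computation relative to the uncapped case is that the telescoping recursion is seeded at $\sigma_{S(0)} = a(T) = 38$ instead of $0$. A brief illustrative sketch (not from an actual implementation) reproducing the coefficients above:

```python
import math

def coeffs_with_offset(a, perm, sigma0):
    # seed the recursion at sigma_{S(0)} = a(T), then telescope as usual
    pi, alpha, sigma = {}, {}, sigma0
    for i in perm:
        prev, sigma = sigma, sigma + a[i]
        pi[i] = math.sqrt(sigma) - math.sqrt(prev)
        alpha[i] = a[i] / math.sqrt(sigma)
    return pi, alpha

# pi ~ {1: 1.5816, 2: 1.0858}, alpha ~ {1: 2.8402, 2: 2.0381}
pi, alpha = coeffs_with_offset({1: 22, 2: 18}, [1, 2], sigma0=38.0)
```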
\section{Computational Experiments}
\label{sec:computation}
In this section, we report the results of computational experiments performed to test the effectiveness of inequalities
\eqref{ineq:EPlift}, \eqref{ineq:EP_subset}, and \eqref{ineq:EP_mixed} in strengthening the continuous relaxation of mean-risk problems with on-off constraints. Three types of problems are used for testing: the mean-risk problem with fixed charges, the mean-risk problem with a cardinality constraint, and the more general mean-risk problem with correlations and a cardinality constraint.
All experiments are done using the CPLEX 12.6.2 solver on a workstation with a 2.93 GHz Intel Core i7 CPU and 8 GB main memory, using a single thread. The time limit is set to two hours and CPLEX's default settings are used with two exceptions: dynamic search is disabled to utilize the cut callbacks, and the nodes are solved with the linear outer approximation for faster enumeration with node warm starts. The inequalities are added at nodes with depth less than ten.
\subsubsection*{Gradient cuts}
Recall that inequalities \eqref{ineq:EPlift} are linear; however, inequalities \eqref{ineq:EP_subset} and \eqref{ineq:EP_mixed} are (convex) non-linear. Since only linear cuts can be added using CPLEX callbacks, at point $(\bar{x}, \bar{y})$, instead of a nonlinear cut $f(x,y) \le z$, we add the corresponding gradient cut
\begin{align*}
f(\bar{x}, \bar{y}) + [\nabla_x f(\bar{x}, \bar{y})]'(x - \bar{x}) + [\nabla_y f(\bar{x}, \bar{y})]' (y - \bar{y}) \le z.
\end{align*}
The gradient cut for inequality \eqref{ineq:EP_subset} at $(\bar{x}, \bar{y})$ has the following form:
\begin{align}
\label{ineq:EP_subset_grad}
\frac{1}{f_1(\bar{x}, \bar{y})} \left [ \tau_1(\bar x, \bar y) \big [ \pi_S'x_S - \alpha_S' (x_S - y_S) \big ]
+ \sum_{i \in N \setminus S} a_i \bar{y}_i y_i \right ] \le z,
\end{align}
where
\begin{align*}
f_1(x,y) &= \sqrt{\tau_1(x_S,y_S)^2 + \sum_{i\in N \setminus S} a_i y_i^2}, \\
\tau_1(x_S,y_S) &= \pi_S ' x_S - \alpha_S ' ({x_S} - {y_S}).
\end{align*}
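As a concrete check of \eqref{ineq:EP_subset_grad}, the sketch below assembles the cut coefficients at a given point and verifies that the linearization is tight there. The helper is ours, not from an implementation; it assumes $f_1(\bar x, \bar y) > 0$, and any placeholder values for $a_3$ and $a_4$ are immaterial whenever $\bar y_3 = \bar y_4 = 0$.

```python
import math

def ep_subset_gradient_cut(a, pi, alpha, S, xbar, ybar):
    """Coefficients (cx, cy) of the linear cut  cx'x + cy'y <= z  obtained by
    linearizing the nonlinear inequality at (xbar, ybar); assumes f1 > 0."""
    rest = [i for i in range(len(a)) if i not in S]
    tau = sum(pi[i] * xbar[i] - alpha[i] * (xbar[i] - ybar[i]) for i in S)
    f1 = math.sqrt(tau ** 2 + sum(a[i] * ybar[i] ** 2 for i in rest))
    cx = {i: tau * (pi[i] - alpha[i]) / f1 for i in S}
    cy = {i: tau * alpha[i] / f1 for i in S}
    cy.update({i: a[i] * ybar[i] / f1 for i in rest})
    return cx, cy, f1
```

Evaluating the resulting cut at the linearization point recovers $f_1(\bar x, \bar y)$, i.e., the gradient cut is tight at $(\bar x, \bar y)$ and separates the point whenever $f_1(\bar x, \bar y) > \bar z$.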
Similarly, the gradient cut for inequality \eqref{ineq:EP_mixed} at $(\bar{x}, \bar{y})$ has the form:
\begin{align}
\label{ineq:EP_mixed_grad}
\frac{1}{f_2(\bar{x}, \bar{y})} \left [ \tau_2(\bar{x}, \bar{y}) \bigg [ \pi_S 'x_S - \alpha_S'(x_S - y_S) + \sum_{i \in T} \frac{a_i \bar{y}_i}{\nu(\bar{y})} y_i \bigg ]
+ \sum_{i \in N \setminus (S \cup T)} a_i \bar{y}_i y_i \right ] \le z ,
\end{align}
where
\begin{align*}
f_2(x,y) &= \sqrt{\tau_2(x_S,y_{S \cup T})^2 + \sum_{i\in N \setminus (S \cup T)} a_i y_i^2}, \\
\tau_2(x_S,y_{S \cup T}) &= \pi_{S} ' x_S + \nu(y_T) - \alpha_S' (x_S - y_S),\\
\nu(y_T) &= \sqrt{\sum_{i \in T} a_i {y}_i^2}.
\end{align*}
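The analogous sketch for \eqref{ineq:EP_mixed_grad} adds the $\nu(y_T)$ term; again the helper is ours, and it assumes $\nu(\bar y_T) > 0$ and $f_2(\bar x, \bar y) > 0$.

```python
import math

def ep_mixed_gradient_cut(a, pi, alpha, S, T, xbar, ybar):
    """Linear cut  cx'x + cy'y <= z  from linearizing the mixed inequality
    at (xbar, ybar); assumes nu(ybar_T) > 0 and f2 > 0."""
    rest = [i for i in range(len(a)) if i not in S and i not in T]
    nu = math.sqrt(sum(a[i] * ybar[i] ** 2 for i in T))
    tau = nu + sum(pi[i] * xbar[i] - alpha[i] * (xbar[i] - ybar[i]) for i in S)
    f2 = math.sqrt(tau ** 2 + sum(a[i] * ybar[i] ** 2 for i in rest))
    cx = {i: tau * (pi[i] - alpha[i]) / f2 for i in S}
    cy = {i: tau * alpha[i] / f2 for i in S}
    cy.update({i: tau * a[i] * ybar[i] / (nu * f2) for i in T})
    cy.update({i: a[i] * ybar[i] / f2 for i in rest})
    return cx, cy, f2
```

As with the previous cut, evaluating at the linearization point recovers $f_2(\bar x, \bar y)$, confirming tightness at $(\bar x, \bar y)$.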
\subsubsection*{Separation}
The separation problem for inequalities \eqref{eq:extendedPolymatroidInequality} and $\ensuremath{\operatorname{conv}}(K_\sigma)$ is solved exactly and fast thanks to Edmonds' greedy algorithm for optimization over polymatroids. We do not have such an exact separation algorithm for the lifted polymatroid inequalities and, therefore, use an inexact approach.
Given a point $(\bar{x}, \bar{y}, \bar{z})$, the separation for inequalities \eqref{ineq:EPlift} and $\ensuremath{\operatorname{conv}}(F)$ entails finding a permutation
of $N$ for which the violation is maximized. If $\bar{x} = \bar{y}$, as is the case for optimal
solutions of the continuous relaxation of $(\ensuremath{\text{OPT}})$ (see Appendix~\ref{subsec:solveR}), inequality \eqref{ineq:EPlift} coincides with the original polymatroid inequality \eqref{eq:extendedPolymatroidInequality}. Therefore, we check the violation of inequality \eqref{ineq:EPlift} generated for a permutation $\left( (1), \ldots, (n) \right )$ satisfying
$\bar{x}_{(1)}\geq \bar{x}_{(2)} \geq \ldots \geq \bar{x}_{(n)}$.
If inequality \eqref{ineq:EPlift} is violated, then it is added to the formulation. Otherwise,
we attempt to find violated inequalities \eqref{ineq:EP_subset} and \eqref{ineq:EP_mixed} for the same permutation.
For inequality \eqref{ineq:EP_subset},
starting from $S = N$, we check for $i = (n), \ldots, (1)$ such that $\bar{x}_i - \bar{y}_i > 0$,
whether moving $i$ from $S$ to $N \setminus S$ results in a violated inequality. If so, the corresponding gradient cut \eqref{ineq:EP_subset_grad} is added to the formulation. Similarly, for inequality \eqref{ineq:EP_mixed}, starting from thus constructed $S$, we check for $i = (1), \ldots, (|S|)$ such that $\bar{x}_i - \bar{y}_i > 0$, whether moving $i$ from $S$ to $T$ results in a violated inequality. If so, the corresponding gradient cut \eqref{ineq:EP_mixed_grad} is added to the formulation.
This heuristic is repeated for two additional permutations of $N$: one such that
$a_{(1)}\bar{x}_{(1)}\geq a_{(2)}\bar{x}_{(2)} \geq \cdots \geq a_{(n)}\bar{x}_{(n)}$, and the other such that
$a_{(1)}/\bar{x}_{(1)}\geq a_{(2)}/\bar{x}_{(2)} \geq \cdots \geq a_{(n)}/\bar{x}_{(n)}$.
Throughout the branch-and-bound algorithm, the entire cut generation process is applied up to $5,000$ times for the first permutation, and up to $200$ times for the two additional permutations.
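The first step of this heuristic, checking \eqref{ineq:EPlift} for the permutation sorting $\bar x$ in nonincreasing order, can be sketched as follows (the helper name is ours, and placeholder values of $a_i$ do not affect the result at indices where $\bar x_i = 0$):

```python
import math

def eplift_violation(a, xbar, ybar, zbar):
    """Violation of pi'x - alpha'(x - y) <= z for the permutation that
    sorts xbar in nonincreasing order; a positive value means violated."""
    perm = sorted(range(len(a)), key=lambda i: -xbar[i])
    pi, alpha, sigma = {}, {}, 0.0
    for i in perm:
        prev, sigma = sigma, sigma + a[i]
        pi[i] = math.sqrt(sigma) - math.sqrt(prev)
        alpha[i] = a[i] / math.sqrt(sigma)
    lhs = sum(pi[i] * xbar[i] - alpha[i] * (xbar[i] - ybar[i])
              for i in range(len(a)))
    return lhs - zbar
```

The two additional permutations only change the sort key to $a_i \bar x_i$ and $a_i / \bar x_i$, respectively (the latter requires $\bar x_i > 0$).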
\subsection{Fixed-charge objective}
The first set of experiments is performed on an optimization problem with fixed charges.
Each non-zero $y_i$ has a fixed cost $c_i$, $i \in N$, which is modeled with the cost vector $c$
on the binary indicator variables $x$:
\label{subsec:pen}
\begin{center}
\begin{minipage}{0.8\textwidth}
\tagsleft@true
\begin{align*}
\label{eq:pen} \tag{$\ensuremath{\text{OPT}}_f$}
\begin{split}
\min & \ c'x + d'y + \Phi^{-1}(1-\epsilon)z \\
\text{s.t. } & \sum_{i \in N} a_i y_i^2 \le z^2 \\
& \mathbf{0} \leq y \le x \\
& x \in \{0,1\}^N, y \in \ensuremath{\mathbb{R}}_+^N, z \in \ensuremath{\mathbb{R}}_+.
\end{split}
\end{align*}
\end{minipage}
\end{center}
Five random instances are generated for each combination of confidence level $1-\epsilon \in \{0.9, 0.95, 0.975 \}$
and size $n \in \{100, 300, 500\}$.
Coefficients $a_i, \ i \in N$, are drawn from integer uniform $[0.9n,1.2n]$, $c_i, \ i \in N,$ are drawn from integer uniform $[5, 20]$. Finally,
for $i \in N$, $d_i$ is set to $-c_i - h_i$, where $h_i$ is drawn from integer uniform $[1,4]$. The data used for the experiments is publicly available for download at \texttt{http://ieor.berkeley.edu/$\sim$atamturk/data/} .
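The instance generation scheme above can be sketched as follows (the function name and seeding are ours; \texttt{random.Random} stands in for whatever generator was actually used):

```python
import random

def fixed_charge_instance(n, seed=0):
    """One random (OPT_f) instance: a_i ~ U{0.9n,...,1.2n}, c_i ~ U{5,...,20},
    and d_i = -c_i - h_i with h_i ~ U{1,...,4}."""
    rng = random.Random(seed)
    a = [rng.randint(round(0.9 * n), round(1.2 * n)) for _ in range(n)]
    c = [rng.randint(5, 20) for _ in range(n)]
    d = [-ci - rng.randint(1, 4) for ci in c]
    return a, c, d
```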
We compare the original and the strengthened formulations in Table~\ref{tb:pen_3cuts}. Each row of the table
presents the averages for five instances. We report the
percentage integrality gap at the root node (rgap), solution time (time) in CPU seconds,
the percentage gap between the best upper bound and lower bound at termination (egap), and the number of nodes explored (nodes).
The number of cuts generated for each type is also reported.
If there are instances not solved to optimality within the time limit, we report their number (\#) next to egap.
One observes in Table~\ref{tb:pen_3cuts} that the cuts have a profound effect in solving problem \eqref{eq:pen}.
With the default setting, only one of 45 instances is solved to optimality within two hours.
The integrality gap reduces from $15.6\%$ to $14.6\%$ after exploring 156,685 nodes on average.
On the other hand, when the cuts are added using the separation procedure outlined above,
all 45 instances are solved at the root node without the need for enumeration.
The average solution time is reduced by 98$\%$ from almost two hours to merely 136 seconds.
\begin{table}
\footnotesize
\centering
\caption{Computations with $\ensuremath{\text{OPT}}_f$.}
\label{tb:pen_3cuts}
\setlength{\tabcolsep}{1pt}
\begin{tabular}{c|c|rrrr|rrrrrrr}
\hline \hline
\multicolumn{2}{c|}{ } & \multicolumn{4}{c|}{ Default} & \multicolumn{7}{c}{With cuts} \\
\hline
$n$ & $1-\epsilon$ & rgap & time & egap (\#) & nodes & rgap & time & egap (\#) & nodes & cuts: \eqref{ineq:EPlift} & \eqref{ineq:EP_subset} & \eqref{ineq:EP_mixed} \\
\hline
\multirow{3}{*} {100}
& 0.9 & 3.0 & 6,099 & 1.4 (4) & 237,470 & 0.0 & 0 & 0.0\phantom{ (5)} & 0 & 14 & 0 & 0 \\
& 0.95 & 10.9 & 7,200 & 8.6 (5) & 166,954 & 0.0 & 0 & 0.0\phantom{ (5)} & 0 & 70 & 0 & 0 \\
& 0.975 & 30.7 & 7,200 & 26.2 (5) & 226,365 & 0.0 & 0 & 0.0\phantom{ (5)} & 0 & 82 & 15 & 0 \\
\hline
\multirow{3}{*} {300}
& 0.9 & 4.0 & 7,200 & 3.8 (5) & 125,005 & 0.0 & 1 & 0.0\phantom{ (5)} & 0 & 49 & 0 & 0 \\
& 0.95 & 12.8 & 7,200 & 12.7 (5) & 120,597 & 0.0 & 29 & 0.0\phantom{ (5)} & 0 & 425 & 6 & 0 \\
& 0.975 & 31.4 & 7,200 & 31.4 (5) & 134,433 & 0.0 & 23 & 0.0\phantom{ (5)} & 0 & 437 & 36 & 1 \\
\hline
\multirow{3}{*} {500}
& 0.9 & 3.9 & 7,200 & 3.8 (5) & 108,684 & 0.0 & 5 & 0.0\phantom{ (5)} & 0 & 83 & 0 & 0 \\
& 0.95 & 12.1 & 7,200 & 12.1 (5) & 112,765 & 0.0 & 253 & 0.0\phantom{ (5)} & 0 & 693 & 12 & 0 \\
& 0.975 & 31.3 & 7,200 & 31.3 (5) & 177,891 & 0.0 & 913 & 0.0\phantom{ (5)} & 0 & 1,119 & 158 & 2 \\
\hline
\multicolumn{2}{c|}{\textbf{avg} } &$ \BD{15.6}$& $\BD{7078}$ & $\BD{14.6}$\phantom{ (5)}&$\BD{156685}$
& $\BD{0.0}$ &$\BD{136}$ & $\BD{0.0}$\phantom{ (5)} &$\BD{0}$&$\BD{330}$&$\BD{25}$&$\BD{0}$ \\
\hline \hline
\end{tabular}
\end{table}
\subsection{Cardinality constraint}
\label{subsec:card}
The second problem type with binary indicator variables has a cardinality constraint on the maximum number of non-zero variables $y_i, \ i \in N$.
\begin{center}
\begin{minipage}{0.8\textwidth}
\tagsleft@true
\begin{align*}
\label{eq:card} \tag{$\ensuremath{\text{OPT}}_c$}
\begin{split}
\min & \ d' y + \ \Phi^{-1}(1-\epsilon)z \\
\text{s.t. } & \sum_{i \in N} a_i y_i^2 \le z^2\\
& \sum_{i \in N} x_i \leq \kappa n \\
& \mathbf{0} \leq y \le x \\
& x \in \{0,1\}^N, y \in \ensuremath{\mathbb{R}}_+^N, z \in \ensuremath{\mathbb{R}}_+.
\end{split}
\end{align*}
\end{minipage}
\end{center}
Instances are tested with two cardinality levels ($\kappa = 0.2, 0.4$).
Other parameters are generated as before.
The result of computations for \eqref{eq:card} is summarized in \autoref{tb:card_3cuts}.
Although the root gap for this type of problem is smaller compared to the fixed-charge objective problem,
only 23 out of 90 instances are solved to optimality using the default setting.
When the cuts are utilized, the average root gap is reduced by 94$\%$ and all but three instances are solved to optimality
within the time limit.
The largest end gap for the three unsolved instances is merely 0.05$\%$.
Accordingly, the average solution time as well as the number of nodes explored is reduced by orders of magnitude.
\begin{table}
\footnotesize
\centering
\caption{Computations with $\ensuremath{\text{OPT}}_c$.}
\label{tb:card_3cuts}
\setlength{\tabcolsep}{1pt}
\begin{tabular}{c|c|c|rrrr|rrrrrrr}
\hline \hline
\multicolumn{3}{c}{ } & \multicolumn{4}{|c|}{ Default} & \multicolumn{7}{c}{With cuts} \\
\hline
$n$ & $\kappa$ & $1-\epsilon$ & rgap & time & egap (\#) & nodes & rgap & time & egap (\#) & nodes & cuts: \eqref{ineq:EPlift} & \eqref{ineq:EP_subset} & \eqref{ineq:EP_mixed} \\
\hline
\multirow{6}{*} {100} & \multirow{3}{*}{0.4}
& 0.9 & 0.2 & 8 & 0.0\phantom{ (5)} & 4,312 & 0.0 & 0 & 0.0\phantom{ (5)} & 0 & 5 & 0 & 0 \\
& & 0.95 & 0.4 & 148 & 0.0\phantom{ (5)} & 23,884 & 0.0 & 0 & 0.0\phantom{ (5)} & 0 & 16 & 0 & 0 \\
& & 0.975 & 0.6 & 2,011 & 0.0\phantom{ (5)} & 110,692 & 0.0 & 0 & 0.0\phantom{ (5)} & 0 & 26 & 0 & 0 \\
\cline{2-14}
& \multirow{3}{*}{0.2}
& 0.9 & 1.1 & 718 & 0.0\phantom{ (5)} & 104,967 & 0.0 & 0 & 0.0\phantom{ (5)} & 0 & 58 & 0 & 0 \\
& & 0.95 & 2.2 & 4,838 & 0.5 (2) & 357,476 & 0.0 & 1 & 0.0\phantom{ (5)} & 1 & 116 & 0 & 0 \\
& & 0.975 & 3.7 & 7,200 & 1.8 (5) & 417,491 & 0.2 & 2 & 0.0\phantom{ (5)} & 2 & 225 & 0 & 0 \\
\hline
\multirow{6}{*} {300} & \multirow{3}{*}{0.4}
& 0.9 & 0.4 & 7,200 & 0.3 (5) & 231,909 & 0.0 & 1 & 0.0\phantom{ (5)} & 0 & 62 & 0 & 0 \\
& & 0.95 & 0.7 & 7,200 & 0.6 (5) & 246,384 & 0.0 & 1 & 0.0\phantom{ (5)} & 0 & 97 & 0 & 0 \\
& & 0.975 & 1.0 & 7,200 & 1.0 (5) & 224,057 & 0.0 & 2 & 0.0\phantom{ (5)} & 0 & 128 & 4 & 0 \\
\cline{2-14}
& \multirow{3}{*}{0.2}
& 0.9 & 1.9 & 7,200 & 1.7 (5) & 276,720 & 0.0 & 22 & 0.0\phantom{ (5)} & 0 & 367 & 7 & 0 \\
& & 0.95 & 3.2 & 7,200 & 3.1 (5) & 251,031 & 0.1 & 110 & 0.0\phantom{ (5)} & 18 & 912 & 169 & 0 \\
& & 0.975 & 4.8 & 7,200 & 4.6 (5) & 265,820 & 0.3 & 463 & 0.0\phantom{ (5)} & 116 & 1,565 & 680 & 2 \\
\hline
\multirow{6}{*} {500} & \multirow{3}{*}{0.4}
& 0.9 & 0.5 & 7,200 & 0.4 (5) & 191,359 & 0.0 & 23 & 0.0\phantom{ (5)} & 0 &252 & 0 & 0 \\
& & 0.95 & 0.8 & 7,200 & 0.8 (5) & 178,825 & 0.0 & 32 & 0.0\phantom{ (5)} & 0 & 302 & 8 & 0 \\
& & 0.975 & 1.1 & 7,200 & 1.1 (5) & 170,622 & 0.0 & 59 & 0.0\phantom{ (5)} & 0 & 414 & 49 & 0 \\
\cline{2-14}
& \multirow{3}{*}{0.2}
& 0.9 & 2.0 & 7,200 & 2.0 (5) & 208,206 & 0.0 & 487 & 0.0\phantom{ (5)} & 10 & 1,140 & 51 & 27 \\
& & 0.95 & 3.4 & 7,200 & 3.3 (5) & 215,453 & 0.1 & 2,372 & 0.0 (1) & 10,521 & 1,934 & 543 & 2 \\
& & 0.975 & 4.9 & 7,200 & 4.9 (5) & 216,386 & 0.3 & 5,090 & 0.0 (2) & 5,669 & 3,283 & 1,725 & 8 \\
\hline
\multicolumn{3}{c|}{\textbf{avg} } &$ \BD{1.8}$& $\BD{5629}$ & $\BD{1.4}$\phantom{ (5)}&$\BD{205311}$
& $\BD{0.1}$ &$\BD{481}$&$\BD{0.0}$\phantom{ (.)}&$\BD{908}$&$\BD{606}$&$\BD{180}$&$\BD{2}$ \\
\hline \hline
\end{tabular}
\end{table}
\ignore{
\subsection{Threshold constraint}
\begin{center}
\begin{minipage}{0.8\textwidth}
\tagsleft@true
\begin{align*}
\label{eq:threshold} \tag{$\ensuremath{\text{OPT}}_t$}
\begin{split}
\min & \ d' y + \ \Phi^{-1}(1-\epsilon)z \\
\text{s.t. } & \sum_{i \in N} a_i y_i^2 \le z^2\\
& \ell \circ x \leq y \le u \circ x \\
& x \in \{0,1\}^N, y \in \ensuremath{\mathbb{R}}_+^N
\end{split}
\end{align*}
\end{minipage}
\end{center}
$\ell_i$ = 0.01, 0.05
$u_i$ = 0.1
}
\subsection{Correlated case with cardinality constraint}
\label{subsec:corr}
Finally, although the cuts are developed for the diagonal uncorrelated case,
we test their effectiveness on the more general correlated case with a cardinality constraint.
Using the reformulation introduced in \autoref{sec:intro}, we state the problem as
\begin{center}
\begin{minipage}{0.8\textwidth}
\tagsleft@true
\begin{align*}
\label{eq:corr} \tag{$\ensuremath{\text{OPT}}_{corr}$}
\begin{split}
\min & \ d'y + \Phi^{-1}(1-\epsilon) z \\
\text{s.t. }
& y' V y \le s^2 \\
& s^2 + \sum_{i \in N} a_i y_i^2 \le z^2 \\
& \sum_{i \in N} x_i \leq \kappa n \\
& \mathbf{0} \leq y \leq x \\
& x \in \{0,1\}^N, \ y \in \ensuremath{\mathbb{R}}_+^N, z \in \ensuremath{\mathbb{R}}_+.
\end{split}
\end{align*}
\end{minipage}
\end{center}
The covariance matrix $V \in \mathbb{R}^{n \times n}$ is computed using a factor model $V = \rho EFE'$,
where $E \in \mathbb{R}^{n \times m}$ represents the exposures and $F \in \mathbb{R}^{m \times m}$ the factor covariance with
$m = n/10$.
We use a scaling parameter $\rho$ to test the impact of the magnitude of correlations on the difficulty of the problem.
Since the cuts are developed for the diagonal case, we expect them to perform well for small $\rho$.
To ensure positive semidefiniteness, $F$ is computed as $F = GG'$ for $G \in \mathbb{R}^{m \times m}$.
Each $G_{ij}$, $i,j \in [m]$ is drawn from uniform $[-1, 1]$, and
$E_{ij}$, $i \in [n], \ j \in [m]$ is drawn from uniform $[0,0.1]$
with probability $0.2$ and set to $0$ with probability $0.8$.
All other parameters are generated as before.
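The factor construction can be sketched with plain-Python matrix products (helper name and seeding ours). Note that $x'Vx = \rho\,\lVert G'E'x \rVert^2 \ge 0$ by construction, so $V$ is positive semidefinite whenever $\rho \ge 0$:

```python
import random

def factor_covariance(n, rho, seed=0):
    """V = rho * E F E' with F = G G', m = n/10, entries drawn as in the text."""
    rng = random.Random(seed)
    m = n // 10
    G = [[rng.uniform(-1, 1) for _ in range(m)] for _ in range(m)]
    F = [[sum(G[i][k] * G[j][k] for k in range(m)) for j in range(m)]
         for i in range(m)]                      # F = G G' (positive semidefinite)
    E = [[rng.uniform(0, 0.1) if rng.random() < 0.2 else 0.0 for _ in range(m)]
         for _ in range(n)]                      # sparse exposure matrix
    EF = [[sum(E[i][k] * F[k][j] for k in range(m)) for j in range(m)]
          for i in range(n)]
    return [[rho * sum(EF[i][k] * E[j][k] for k in range(m)) for j in range(n)]
            for i in range(n)]
```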
\autoref{tb:corr50_3cuts} presents the results for confidence levels $1-\epsilon \in \{0.9, 0.95, 0.975\}$, problem sizes $n \in \{100, 300\}$, and
scaling factors $\rho \in \{0.1, 1, 10\}$.
As in the case of \eqref{eq:card},
the cuts result in significant improvements.
Out of 90 instances, the number of unsolved instances is reduced from 80 to 18, and the average root gap is reduced by 96$\%$.
In particular, for instances with $\rho \in \{0.1, 1\}$, almost all instances are solved to optimality well within the time limit
and the number of nodes is reduced by an order of magnitude. Even for $\rho = 10$, the end gap is reduced from 3.1\% to only 0.2\%.
As expected, the computational results indicate that inequalities \eqref{ineq:EPlift}, \eqref{ineq:EP_subset}, and \eqref{ineq:EP_mixed} are more effective when the covariance matrix is more diagonally dominant, i.e., for smaller values of $\rho$. The lifted polymatroid
cuts are, nevertheless, valuable for the general correlated case as well.
\exclude{
\autoref{tb:corr} summarizes the result
for \eqref{eq:corr} instances with $\kappa = 0.4$ and $\rho = 0.01$ for problem sizes $n = 100, 300$.
This choice of parameters results in 'easier' instances since $\kappa = 0.4$
imposes a weaker constraint than $\kappa = 0.2$, and scaling down the entries of matrix $V$ reduces the effect of
having more complex objective function. Nevertheless, the average root gap is much greater than those of both \eqref{eq:pen} and \eqref{eq:card}
and none of the instances were solved to optimality for $n = 300$ with or without the cuts. }
\begin{table}
\footnotesize
\centering
\caption{Computations with $\ensuremath{\text{OPT}}_{corr}$.}
\label{tb:corr50_3cuts}
\setlength{\tabcolsep}{1pt}
\begin{tabular}{c|c|c|rrrr|rrrrrrr}
\hline \hline
\multicolumn{3}{c|}{ } & \multicolumn{4}{c|}{ Default } & \multicolumn{7}{c}{With cuts} \\
\hline
n & $\rho$ & $1-\epsilon$ & rgap & time & egap (\#) & nodes & rgap & time & egap (\#) & nodes & cuts: \eqref{ineq:EPlift} & \eqref{ineq:EP_subset} & \eqref{ineq:EP_mixed} \\
\hline
\multirow{12}{*}{100} & \multirow{3}{*}{0.1}
& 0.9 & 1.2 & 3,039 & 0.1(2) & 97,969 & 0.0 & 0 & 0.0\phantom{(5)} & 0 & 55 & 0 & 0 \\
&& 0.95 & 2.3 & 6,814 & 0.7(4) & 228,397 & 0.0 & 1 & 0.0\phantom{(5)} & 0 & 114 & 0 & 0 \\
&& 0.975 & 3.7 & 7,200 & 2.2(5) & 260,025 & 0.2 & 2 & 0.0\phantom{(5)} & 9 & 219 & 0 & 0 \\
\cline{2-14}
&\multirow{3}{*} {1}
& 0.9 & 1.1 & 3,028 & 0.1(2) & 101,237 & 0.0 & 0 & 0.0\phantom{(5)} & 0 & 61 & 0 & 0 \\
& & 0.95 &2.3 & 7,200 & 0.8(5) & 201,105 & 0.0 & 1 & 0.0\phantom{(5)} & 1 & 120 & 0 & 0 \\
& & 0.975 &3.6 & 7,200 & 2.2(5) & 216,742 & 0.2 & 3 & 0.0\phantom{(5)} & 9 & 268 & 0 & 0 \\
\cline{2-14}
&\multirow{3}{*} {10}
& 0.9 & 1.1 & 3,017 & 0.1(2) & 111,262 & 0.0 & 1 & 0.0\phantom{(5)} & 10 & 144 & 0 & 0 \\
& & 0.95 & 2.2 & 7,200 & 0.74(5) & 229,966 & 0.0 & 1 & 0.0\phantom{(5)} & 6 & 160 & 0 & 0 \\
& & 0.975 & 3.5 & 7,200 & 2.16(5) & 243,539 & 0.2 & 3 & 0.0\phantom{(5)} & 21 & 402 & 0 & 0 \\
\hline
\multirow{12}{*}{300} & \multirow{3}{*}{0.1}
& 0.9 & 1.9 & 7,200 & 1.7(5) & 163,249 & 0.0 & 72 & 0.0\phantom{(5)} & 33 & 695 & 125 & 0 \\
& & 0.95 & 3.2 & 7,200 & 3.2(5) & 154,394 & 0.1 & 271 & 0.0\phantom{(5)} & 65 & 1075 & 384 & 14 \\
& & 0.975 & 4.8 & 7,200 & 4.7(5) & 138,175 & 0.3 & 709 & 0.0\phantom{(5)} & 233 & 1921 & 991 & 10 \\
\cline{2-14}
& \multirow{3}{*}{1}
& 0.9 & 1.9 & 7,200 & 1.7(5) & 129,379 & 0.0 & 2,034 & 0.0(1) & 14,370 & 3,328 & 500 & 0 \\
& & 0.95 & 3.2 & 7,200 & 3.2(5) & 123,180 & 0.1 & 1,804 & 0.0(1) & 8,926 & 2,099 & 810 & 35 \\
& & 0.975 & 4.8 & 7,200 & 4.7(5) & 132,002 & 0.3 & 2,579 & 0.0(1) & 6,121 & 2,315 & 1,233 & 15 \\
\cline{2-14}
& \multirow{3}{*}{10}
& 0.9 & 1.8 & 7,200 & 1.6(5) & 161,802 & 0.3 & 7,201 & 0.2(5) & 31,917 & 3,893 & 485 & 1 \\
& & 0.95 & 3.2 & 7,200 & 3.1(5) & 143,816 & 0.3 & 7,201 & 0.2(5) & 34,684 & 3,422 & 1,550 & 13 \\
& & 0.975 & 4.8 & 7,200 & 4.6(5) & 132,304 & 0.5 & 7,201 & 0.2(5) & 37,645 & 3,639 & 2,592 & 23 \\
\hline
\multicolumn{3}{c|}{ \textbf{avg} } &$ \BD{2.8}$& $\BD{6483}$ & $\BD{2.1}$\phantom{(5)}&$\BD{164919}$
& $\BD{0.1}$ &$\BD{1616}$&$\BD{0.0}$\phantom{(5)}&$\BD{7447}$&$\BD{1329}$&$\BD{482}$&$\BD{6}$ \\
\hline \hline
\end{tabular}
\end{table}
\section{Conclusion}
\label{sec:conclusion}
In this paper we study a mixed 0-1 optimization problem with a conic quadratic objective arising when modeling utilities with risk averseness.
Exploiting the submodularity of the underlying set function for the binary restrictions,
we derive three classes of strong convex valid inequalities by lifting the polymatroid inequalities.
Computational experiments demonstrate the effectiveness of the lifted inequalities in a cutting plane framework.
The results indicate that the inequalities are very effective in strengthening the convex relaxations, and thereby,
substantially reducing the solution times for problems with fixed charges and cardinality constraints. Although the inequalities are derived for the diagonal case, they
are also effective in improving the convex relaxations for the general correlated case.
\ignore{However, the inequalities that we generated were not the strongest possible as we did not solve the separation problem exactly.
Further analysis of the separation problem may provide a heuristic that results in stronger cuts
than the empirical approach we took in this study. Also, although the lifted inequalities are computationally beneficial, they do not suffice in describing the convex hull of the original problem.
One of the goals of our future research is to identify the inequalities necessary in characterizing the convex hull which are likely to be nonlinear.
Another direction of future study would be examining our cuts for more complicated real-life problems
that has $F$ as a substructure.
}
\section*{Acknowledgement} This research is supported, in part, by grant FA9550-10-1-0168 from the Office
of the Assistant Secretary of Defense for Research and Engineering.
\bibliographystyle{plain}
\section{Introduction:}
The holographic principle ensures that the degrees of freedom of any system are determined by the area of the boundary and not really by the volume\cite{'thooft, susskind}. Consideration of black hole thermodynamics indicates that the holographic principle actually requires a large wavelength cut-off. A very brief but comprehensive review of this infra-red cut-off and its application in cosmology as a ``holographic'' dark energy is given by Pavon\cite{diego}. Cosmological implications of this holographic dark energy, particularly its role in driving the accelerated expansion of the universe, have been quite thoroughly discussed\cite{li, diego1, diego2}. Holographic dark energy has been discussed in nonminimally coupled theories as well, such as in Brans-Dicke theory by Banerjee and Pavon\cite{diego3}, and in a chameleon scalar field model by Setare and Jamil\cite{setare}. This list is not exhaustive by any means and intends to give some examples. \\
The holographic dark energy attracted attention as it can alleviate, if not resolve, the issue of cosmic coincidence, i.e., why the energy densities due to the dark matter and the dark energy should have a constant ratio for the present universe\cite{diego2}. \\
The present work deals with the stability of a holographic dark energy model in standard Einstein's gravity. The model is quite general as it allows the dark energy to interact with the dark matter, so that one can grow at the expense of the other. The method taken up is the dynamical systems study. The field equations are written as an autonomous system and the fixed points are looked for. A stable fixed point, an ``attractor'', is apt to describe the final state of the universe. This kind of dynamical systems study is not new in cosmology. There are excellent reviews on this topic\cite{ellis, coley}. However, such analysis is more frequently used where a scalar field is involved. For an inflaton field, the dynamical systems study has been used by Gunzig et al\cite{gunzig} and Carot and Collinge\cite{carot} while Urena-Lopez\cite{urena}, Roy and Banerjee\cite{nandan2} did that for a quintessence field. Kumar, Panda and Sen\cite{anjan}, Sen, Sen and Sami\cite{soma}, Roy and
Banerjee\cite{nandan1} and Fang et al\cite{fang} utilized the dynamical systems analysis for an axionic quintessence, a thawing dark energy, tracking quintessence and phantom, tachyonic and k-essence fields respectively. A few dynamical systems studies on holographic dark energy models, such as one where the infra-red cut-off is given by the Ricci length\cite{nairi1} and by the future event horizon\cite{nairi2}, are there in the literature. Setare and Vagenas\cite{setare1} discussed the bounds on the effective equation of state parameters on the basis of a dynamical systems study, with an infra-red cut-off given by the future event horizon. The present work deals with a holographic dark energy model where the cut-off is determined by the Hubble length\cite{diego, diego2}.\\
In the second section we actually set up the autonomous system and investigate the stability of a holographic dark energy. In section 3, we discuss the bifurcation present in the system and in the last section we discuss the results.
\section{Phase space analysis of the model:}
We consider an interacting holographic dark energy model in which the universe is filled with a pressureless matter component with energy density $\rho_{m}$ and a holographic dark energy of density $\rho_h$. There is an interaction between these two components. The total energy density of the universe is $\rho = \rho_m + \rho_h$.
In a spatially flat FRW universe with the line element
\begin{equation} \label{metric}
ds^2 = - dt^2 + a^2(t) (dr^2 + r^2 d \omega^2),
\end{equation}
Einstein's field equations are
\begin{equation} \label{field1}
3 H^2 = 8 \pi G (\rho_{m} + \rho_{h}),
\end{equation}
\begin{equation} \label{field2}
\dot{H} = - \frac{3}{2} H^2 (1 + \frac{w}{1+r}),
\end{equation}
where $w = \frac{p_{h}}{\rho_{h}}$ is the equation of state parameter of the holographic dark energy, $p_{h}$ is the contribution to the pressure by the holographic dark energy and $r = \frac{\rho_{m}}{\rho_{h}}$ is the ratio of the two energy densities\cite{li, diego}.
The conservation equations for $\rho_{m}$ and $\rho_{h}$ are
\begin{equation} \label{matter}
\dot{\rho_{m}} + 3 H \rho_{m} = Q,
\end{equation}
and
\begin{equation} \label{holo}
\dot{\rho_{h}} + 3 H (1+ w) \rho_{h} = -Q,
\end{equation}
respectively. The dark matter and the dark energy are assumed to interact amongst themselves and hence do not conserve separately. They conserve together and $Q$ is the rate of loss of one and hence the gain of the other and is assumed to be proportional to the dark energy density given by $Q= \Gamma \rho_{h}$, where $\Gamma$ is the decay rate\cite{diego1}. \\
Using (\ref{matter}) and (\ref{holo}) , one can write time evolution of $r$ as
\begin{equation} \label{r}
\dot{r} = 3 H r (1+r) [\frac{w}{1+r} + \frac{Q}{3 H \rho_{m}}].
\end{equation}
If the holographic bound is saturated, one has\cite{li}
\begin{equation} \label{bound}
\rho_{h} = 3 M_{p}^2 C^2 / L^2,
\end{equation}
where $L$ is the infra-red cut-off that sets the holographic bound. If the cut-off is chosen to be the Hubble length, which has a clue towards the resolution of the coincidence problem\cite{diego1}, one has
\begin{equation} \label{holodensity}
\rho_{h} = 3 C^2 M_{p}^2 H^2.
\end{equation}
By differentiating equation (\ref{holodensity}) and using (\ref{field2}) in the result, one can write
\begin{equation} \label{rhohdot}
\dot{\rho_{h}} = -3 H (1 + \frac{w}{1+r}) \rho_{h}.
\end{equation}
Equations (\ref{holo}) and (\ref{rhohdot}) yield the expression for the equation of state parameter $w$ for the holographic dark energy as,
\begin{equation} \label{w}
w = - (\frac{1+r}{r}) \frac{\Gamma}{3 H}.
\end{equation}
\\
From equations (\ref{bound}) and (\ref{field1}), one has $ \rho_{m} = 3 M_p ^2 H^2 (1-C^2)$. Considering a saturation of the holographic dark energy, equations (\ref{field2}) and (\ref{holo}) now yield
\begin{equation} \label{dot1}
\dot{H} = - \frac{3}{2} H^2 (1 - \frac{C^2}{3 (1- C^2)} \frac{\Gamma}{H}),
\end{equation}
and,
\begin{equation} \label{dot2}
\dot{\rho_{m}} + 3 H \rho_{m} = 3 C^2 M_p ^2 H^2 \Gamma.
\end{equation}
In the subsequent discussion, equations (\ref{dot1}) and (\ref{dot2}) will replace the field equations (\ref{field1}) and (\ref{field2}). For the study of the phase space behaviour of the system, we introduce a new set of variables $x = \rho_{m}$ , $y = \frac{\Gamma}{H}$ and $N = \ln a$. The system of equations can now be written in the form of an autonomous system in terms of new dynamical variables as
\begin{equation} \label{x}
x^{\prime} = -3 x + \frac{C^2}{1-C^2} x y,
\end{equation}
\begin{equation} \label{y}
y^{\prime} = - \frac{3}{2} (\lambda - 1) y (1- \frac{C^2}{3 (1- C^2)} y).
\end{equation}
Here $\lambda = (\dfrac{d\Gamma}{dH}) / (\frac{\Gamma}{H})$ and a prime indicates differentiation with respect to $N$. In an FRW cosmology, $H$, the fractional rate of change of the length scale of the universe, is the naturally available rate, so we assume the decay rate $\Gamma$ to be a function of $H$, $\Gamma = \Gamma(H)$. Depending on the value of $\lambda$, we classify the system into two classes. \\
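The structure of this autonomous system can be cross-checked numerically. The following is a minimal sketch (not part of the paper's analysis; the function name and the sample values $C=2$, $\lambda=10$ are illustrative choices) that evaluates the right-hand sides of equations (\ref{x}) and (\ref{y}) at the candidate fixed points discussed below:

```python
# Sketch (not from the paper): right-hand sides of the autonomous system
# x' = -3x + C^2/(1-C^2) x y,  y' = -(3/2)(lam-1) y (1 - C^2 y / (3(1-C^2))).
def rhs(x, y, C, lam):
    f = C**2 / (1.0 - C**2)
    return (-3.0 * x + f * x * y,
            -1.5 * (lam - 1.0) * y * (1.0 - f * y / 3.0))

C, lam = 2.0, 10.0               # sample values used for the phase plots
y2 = 3.0 * (1.0 - C**2) / C**2   # y-coordinate of the non-isolated fixed line

# p1 = (0, 0), and one point on the fixed line (x arbitrary, y = y2)
for x, y in [(0.0, 0.0), (1.7, y2)]:
    xp, yp = rhs(x, y, C, lam)
    assert abs(xp) < 1e-12 and abs(yp) < 1e-12
```

Both right-hand sides vanish at $(0,0)$ and everywhere on the line $y = 3(1-C^2)/C^2$, regardless of $x$.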
\\
I) When $\lambda \neq 1$, $\Gamma$ is any function of $H$ except a linear one.\\
\\
II) When $\lambda = 1$, $\Gamma$ is a linear function of $H$. \\
\\
In order to discuss the phase space behaviour of a dynamical system, written in the form
\begin{equation*}
z_{i}^{\prime} = f_{i}(z_{j}), \hspace{.6cm} i,j=1,2,\ldots,n,
\end{equation*}
one has to find the fixed points of the system. Here $z_{i}^{\prime} = \dfrac{dz_{i}}{dN}$, $(z_1,\ldots,z_n) \in \mathbb{R}^n$ and $f_i: \mathbb{R}^n \longrightarrow \mathbb{R}$. Fixed points ($z_i=z_i^*$) of the system are the simultaneous solutions of the equations $z_i^{\prime} = 0$ for all $i$. The stability of a fixed point can be determined from the eigenvalues of the Jacobian matrix ($\dfrac{\partial f_i}{\partial z_j} \mid_{z_j=z_j^*}$) evaluated at that point. If the real parts of all the eigenvalues are negative, the fixed point is stable; otherwise it is either unstable or a saddle. The details may be found in any standard text, such as reference \cite{strog}.
\subsection{Class I: $\lambda \neq 1$}
In this case the problem is two-dimensional and equations (\ref{x}) and (\ref{y}) form our system. Fixed points of the system are the simultaneous solutions of $x^{\prime} = 0$ and $y^{\prime} = 0$. It is easy to check that the system admits two fixed points, namely, $p_1 : (x=0, y=0)$ and $p_2 : (x$ is arbitrary, $y = \frac{3 (1-C^2)}{C^2})$. The second fixed point is a set of non-isolated fixed points, a straight line parallel to the $x$-axis. If the Jacobian matrix at any point of a set of non-isolated fixed points has at least one zero eigenvalue, the set of fixed points is called normally hyperbolic\cite{strog}. The stability of a normally hyperbolic\cite{reza} set of fixed points can be analysed from the signs of the remaining eigenvalues: if they are negative, the fixed points are stable. The eigenvalues and stability conditions of the fixed points are given in the following table.\\
\begin{center}
Table 1
\end{center}
\begin{tabular}{|c|c|c|c|}
\hline
Fixed Points & Coordinates & Eigenvalues & Condition of stability \\
\hline
$p_1$ & $x=0,y=0$ & $-3, - \frac{3}{2} (\lambda -1)$ & $\lambda > 1$ \\
\hline
$p_2$ & $x(N), y = \frac{3 (1-C^2)}{C^2} $ & $0, \frac{3}{2} (\lambda -1)$ & $\lambda < 1$ \\
\hline
\end{tabular}
~
~
\linebreak
Phase plots of this system are shown in figures 1 and 2 for $\lambda>1$ and $\lambda < 1$ respectively. The plots show that the fixed points $p_1$ and $p_2$ are indeed stable for $\lambda>1$ and $\lambda<1$ respectively, which is consistent with the fact that negative eigenvalues indicate stable fixed points (see Table 1).\\
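The eigenvalues in Table 1 can be cross-checked directly: since $y^{\prime}$ in (\ref{y}) does not depend on $x$, the Jacobian is upper triangular and its eigenvalues are simply the diagonal entries. A minimal sketch (the function name and the sample values are illustrative, not from the paper):

```python
# Sketch: the Jacobian of (x', y') is upper triangular (y' is independent
# of x), so its eigenvalues are the diagonal entries d(x')/dx and d(y')/dy.
def eigenvalues(x, y, C, lam):
    f = C**2 / (1.0 - C**2)
    j11 = -3.0 + f * y                                     # d(x')/dx
    j22 = -1.5 * (lam - 1.0) * (1.0 - 2.0 * f * y / 3.0)   # d(y')/dy
    return j11, j22

C, lam = 2.0, 10.0
# p1 = (0, 0): Table 1 lists -3 and -(3/2)(lam - 1)
e1, e2 = eigenvalues(0.0, 0.0, C, lam)
assert e1 == -3.0 and abs(e2 - (-1.5 * (lam - 1.0))) < 1e-12
# p2 (any x, y = 3(1-C^2)/C^2): Table 1 lists 0 and +(3/2)(lam - 1)
e1, e2 = eigenvalues(5.0, 3.0 * (1.0 - C**2) / C**2, C, lam)
assert abs(e1) < 1e-12 and abs(e2 - 1.5 * (lam - 1.0)) < 1e-12
```

The zero eigenvalue along $p_2$ is what makes that line of fixed points normally hyperbolic.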
From the field equations and the definitions of $x$ and $y$, the deceleration parameter can be written as
\begin{equation}\label{q}
q = -1 + \frac{3}{2} (1- \frac{C^2}{3(1-C^2)} y).
\end{equation}
As we have two fixed points, the system may be treated as a heteroclinic one where solutions join two fixed points.\\
For $\lambda>1$, $p_1$ is a stable fixed point and $p_2$ is an unstable one, indicating a sink and a source respectively. So the universe can originate from $p_2$ with an acceleration ($q=-1$) and an arbitrary $\rho_{m}$, and can settle down to $p_1$, the stable fixed point, where the expansion is decelerated ($q=\frac{1}{2}$) with $\rho_{m} \longrightarrow 0$. This squarely contradicts observations, which indicate exactly the opposite situation!
For $\lambda<1$, however, the fixed points reverse their roles as source and sink. In this case $p_1$ is unstable; thus, for a small perturbation, the universe starts evolving with a deceleration ($q=\frac{1}{2}$) and settles down to the final configuration of an accelerated expansion ($q=-1$). The final $\rho_{m}$ is an arbitrary function of $N$. This situation is indeed realistic.
\begin{figure}[ht]
\centering
\begin{minipage}[b]{0.45\linewidth}
\includegraphics[scale=.5]{plot1.eps}
\caption{Phase plot of the system when $\lambda = 10$ and $C=2$.}
\label{fig:minipage1}
\end{minipage}
\quad
\begin{minipage}[b]{0.45\linewidth}
\includegraphics[scale=.5]{plot2.eps}
\caption{Phase plot of the system when $\lambda = -10$ and $C=2$.}
\label{fig:minipage2}
\end{minipage}
\end{figure}
\par It deserves mention that $\lambda$ is not actually a constant. So figures 1 and 2 represent sections of the three-dimensional figure at particular values of $\lambda$, i.e., snapshots at those values. If one changes the value of $\lambda$, the nature of the figures remains the same. For a particular value of $\lambda$, equations (\ref{x}) and (\ref{y}) lead to an expression for the deceleration parameter $q$ in terms of the scale factor $a$ as\\
$q = -1 + \frac{3}{2} (1 - \frac{C^2}{3(1-C^2)} \frac{b a^{-\frac{3}{2} (\lambda - 1)}}{1 + \frac{b C^2}{3(1-C^2)} a^{-\frac{3}{2} (\lambda - 1)} })$,\\
where $b$ is an integration constant. In figures 3 and 4, the evolution of the deceleration parameter $q$ against $\frac{a}{a_{0}}$ is shown, where $a_{0}$ is the present value of the scale factor. It is clearly seen that $\lambda <1$ is favoured in order to describe the observed dynamics of the universe. For $\lambda >1$, the universe starts from a negative $q$ ($q=-1$) with $x=0$, i.e., $\rho_{m}=0$, and settles into the stable configuration of a decelerated expansion. For $\lambda<1$, however, one obtains the desired behaviour: the universe starts with a deceleration ($q=\frac{1}{2}$) and settles into an accelerated phase with $q=-1$ and an arbitrary $\rho_{m}$. It should be noted that, for $\lambda <1$, the interaction rate $\Gamma$ decays at a slower rate than $H$, and for a negative $\lambda$, the rate actually grows as $H$ decays.
\begin{figure}[H]
\centering
\begin{minipage}[b]{0.4\linewidth}
\includegraphics[scale=.6]{q1.eps}
\caption{$q$ vs $a$, when $\lambda = 10, C=2$ and $ b = -1000$.}
\label{fig:minipage1}
\end{minipage}
\quad
\begin{minipage}[b]{0.4\linewidth}
\includegraphics[scale=.5]{q2.eps}
\caption{$q$ vs $a$, when $\lambda = -10, C=2$ and $ b = -1000$.}
\label{fig:minipage2}
\end{minipage}
\end{figure}
\subsection{Class II : $\lambda =1$}
In this case $\Gamma$ is a linear function of $H$, given by $\Gamma = \alpha H$. Our system of equations reduces to
\begin{equation} \label{x1}
x^{\prime} = - 3 x + \frac{C^2}{1-C^2} x y,
\end{equation}
\begin{equation} \label{y1}
y^{\prime} = 0.
\end{equation}
As $y$ is a constant, this is essentially a one-dimensional system with fixed point $x=0$ and $y=\alpha$, a constant. Integration of equation (\ref{x1}) yields the solution $x = A e^{-3 k N}$, where $A$ is a constant of integration and $k= (1-\frac{C^2}{3(1-C^2)}\alpha)$. If $k>0$, the solution is indeed stable, as for $N \longrightarrow \infty$ one has $x \longrightarrow 0$. For $k<0$, the fixed point is unstable. The phase plot is shown in figure 5, which indicates that the $y$-axis is the attractor of all solutions for $k>0$. Since $y$ is a constant, equation (\ref{q}) indicates that there is no transition from a decelerated to an accelerated expansion of the universe.
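That $x(N) = A e^{-3kN}$ solves (\ref{x1}) with constant $y=\alpha$ can be checked term by term: $x^{\prime} = -3kx$ while the right-hand side is $x(-3 + \frac{C^2}{1-C^2}\alpha) = -3kx$. A minimal sketch (the values of $C$, $\alpha$ and $A$ are arbitrary illustrations):

```python
import math

# Sketch: verify x(N) = A exp(-3 k N) solves x' = -3x + (C^2/(1-C^2)) alpha x,
# with k = 1 - C^2 alpha / (3(1-C^2))  (Class II, y = alpha constant).
C, alpha, A = 2.0, 0.5, 1.3          # illustrative values
f = C**2 / (1.0 - C**2)
k = 1.0 - f * alpha / 3.0

for N in (0.0, 0.7, 2.0):
    x = A * math.exp(-3.0 * k * N)
    lhs = -3.0 * k * x               # x'(N) from the closed form
    rhs = -3.0 * x + f * alpha * x   # right-hand side of the ODE
    assert abs(lhs - rhs) < 1e-12
```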
\begin{figure}[H]
\centering
\includegraphics[scale=.5]{plot3.eps}
\caption{Phase plot of the system when $\lambda = 1$ and $C=2$.}
\end{figure}
\section{Bifurcation in the system:}
It is interesting, from the point of view of dynamical systems, to note that there is a clear bifurcation in the system, where $\lambda$ is the bifurcation parameter and $\lambda =1$ is the bifurcation point. When $\lambda<1$, there are two fixed points, $p_1$ and $p_2$, where $p_1$ is unstable but $p_2$ is stable. As $\lambda$ approaches unity these two merge into a single fixed point whose stability depends on the choice of the values of the constants $C$ and $\alpha$. With $\lambda>1$, i.e., when it attains values on the other side of the bifurcation point, one has the same two fixed points with their roles interchanged so far as stability is concerned. Figures 1, 2 and 5 show the change of the behaviour of the phase space with the variation of $\lambda$, which clearly indicates the occurrence of the bifurcation in the system. \\
At the bifurcation point, $\lambda =1$, $y=\frac{\Gamma}{H}$ is a constant. As already mentioned, there is no transition from a decelerated to an accelerated expansion for the universe for a spatially flat FRW metric in this case. This result is completely consistent with that obtained by Pavon and Zimdahl\cite{diego2}.
\section{Discussion}
The holographic dark energy, tipped by many as the possible saviour from the coincidence problem, is analyzed as an autonomous system. The system indeed has both stable and unstable fixed points. For $\lambda <1$, where $\lambda = (\dfrac{d\Gamma}{dH}) / (\frac{\Gamma}{H})$, the system has at least one natural description of a universe which starts with a decelerated expansion ($q=\frac{1}{2}$) and settles down to an accelerated expansion, with dark matter completely giving way to the dark energy. So the interacting holographic dark energy warrants more attention, particularly when $\dfrac{d\Gamma}{dH} < \frac{\Gamma}{H}$, leading to $\lambda <1$. We see that the interaction rate $\Gamma$ plays a crucial role in this.
{\bf Acknowledgement:} N.R. wishes to thank CSIR (India) for financial support.
\section{Introduction}
\label{sec:1}
In recent years there has been rapid progress in understanding how to
construct lattice actions for a variety of continuum supersymmetric
theories (see ref.~\cite{Catterall:2005eh} for a recent summary). Supersymmetric gauge
theories are expected to exhibit many
fascinating nonperturbative effects; furthermore, in the limit of large gauge
symmetries, they are related to quantum gravity and string theory. A
lattice construction of such theories provides a nonperturbative
regulator, and not only establishes that such theories make sense,
but also makes it possible that these theories may eventually be solved
numerically.
Although attempts to construct supersymmetric lattice theories have
been made for several decades, the new development has been
understanding how to write lattice actions which at finite lattice
spacing possess an exactly realized subset of the
continuum supersymmetries and have a Lorentz invariant continuum limit. These exact supersymmetries in many
cases have been shown to constrain relevant operators to the point
that the full supersymmetry of the target theory is attained without a fine tuning. We will refer to lattices
which possess exact supersymmetries as ``supersymmetric lattices''.
For alternative approaches
where supersymmetry only emerges in the continuum limit, see \cite{Feo:2002yi,Elliott:2005bd}.
There have been two distinct
approaches in formulating supersymmetric lattice actions, recently reviewed in Ref.~\cite{Giedt:2006pd}. One involves
a Dirac-K\"ahler construction \cite{Rabin:1981qj,Becher:1982ud} which
associates the Lorentz spinor supercharges with tensors under a diagonal
subgroup of the product of Lorentz and $R$-symmetry
groups of the
target theory. (An $R$-symmetry is a global symmetry which does not
commute with supersymmetry). These tensors can then be given a
geometric meaning, with $p$-index tensors being mapped to $p$-cells on
a lattice. A lattice action is then constructed from the target
theory which preserves the scalar supercharge even at finite lattice
spacing~\cite{Catterall:2001fr,Catterall:2001wx,Catterall:2003uf,Catterall:2003wd,Sugino:2003yb,Catterall:2004np,Giedt:2005ae,Catterall:2005fd,Catterall:2006jw,Sugino:2006uf}.
This work was foreshadowed by
an early proposal to use Dirac-K\"ahler fermions in the construction
of a supersymmetric lattice Hamiltonian in one spatial dimension \cite{Elitzur:1982vh}.
A more ambitious construction which purports to preserve all
supercharges on the lattice has been proposed
\cite{Kawamoto:1999zn,Kato:2003ss,D'Adda:2004jb,Kato:2005fj,D'Adda:2005zk},
but remains controversial \cite{Bruckmann:2006ub}.
The other method for constructing supersymmetric lattices, and the one
employed in this
Letter, is to start with a ``parent theory''--basically the target theory with a parametrically enlarged gauge
symmetry--and reduce it to a zero-dimensional matrix model. One then
creates a $d$ dimensional lattice with
$N^d$ sites by modding out a $(Z_N)^d$ symmetry, where this discrete symmetry is a particular subgroup of
the gauge, global, and Lorentz symmetries of the parent theory
\cite{Arkani-Hamed:2001ie,Kaplan:2002wv,Cohen:2003xe,Cohen:2003qw,Giedt:2003xr,Giedt:2003ve,Giedt:2004qs,Kaplan:2005ta}.
The process of modding out the discrete symmetry is called an
orbifold projection. Substituting the projected variables into the
matrix model yields the lattice action.
The continuum limit is then defined by expanding the theory about a
point in moduli space that moves out to infinity as $N$ is taken to
infinity, as introduced in the method of deconstruction
\cite{ArkaniHamed:2001ca}.
Although apparently different, these two approaches to lattice
supersymmetry yield
similar lattices. The reason for this is that in the orbifold
approach, the placement of variables on the lattice is determined by
their charges under a diagonal subgroup
of the product of the Lorentz and $R$
symmetry groups, in a manner similar to the
Dirac-K\"ahler construction \cite{Unsal:2006qp}.
To date, supersymmetric lattices have been
constructed for pure supersymmetric Yang-Mills (SYM) theories, as well
as for two-dimensional Wess-Zumino models. In this Letter we take the
next step and show how to
write down lattice actions for gauge theories with charged matter
fields interacting via a superpotential. In
particular, we focus on two
dimensional gauge theories with four supercharges.
These are called $(2,2)$ supersymmetric gauge
theories, and are of particular interest due to their relation to
Calabi-Yau manifolds, as discussed by Witten~\cite{Witten:1993yc}.
Our construction also yields general
insights into the logic of supersymmetric lattices.
\section{$(2,2)$ SYM}
\label{sec:2}
We begin with a
brief review of the $(2,2)$ pure Yang-Mills theory.
The field content of the $(2,2)$ SYM theory is a gauge field, a
two-component Dirac spinor, and a complex scalar $s$, with action
\beq
{\cal L} &=& \frac{1}{g_2^2} \, {\rm Tr\,}
\Biggl(\bigl\vert D_m s\bigr\vert^2 + \mybar \Psi \, D_m
\gamma_m
\Psi + \fourth v_{mn} v_{mn} \cr &&+\sqrt{2}(\mybar \Psi_L [ s, \Psi_R]
+\mybar\Psi_R [ s^\dagger, \Psi_L]) + \half
[s^\dagger,s\,]^2\Biggr)\ ;
\eqn{targ2}
\eeq
both $\Psi$ and $s$ transform as adjoints under the gauge symmetry.
The first supersymmetric lattice for a gauge theory was the
discretization of the above action
using the orbifold method \cite{Cohen:2003xe}.
To construct a lattice for this theory, one begins with a parent theory
which is most conveniently taken to be ${\cal N}=1$ SYM in four dimensions
with gauge group $U(kN^2)$, where the gauge group of the target theory
\eq{targ2} is $U(k)$. The parent theory possesses
four supercharges, a gauge field $v_\mu$, and a two
component Weyl
fermion $\lambda$ and its conjugate $\mybar\lambda$, each variable being a
$kN^2$ dimensional matrix. When reduced to a matrix model in zero
dimensions, the Euclidean theory has a global symmetry
$G_R=SU(2)_L\times SU(2)_R\times
U(1)$, where the nonabelian part is inherited from the
four dimensional Lorentz symmetry, and the $U(1)$ is the $R$-symmetry
consisting of a phase rotation of the gaugino. From the three Cartan
generators $L_3$, $R_3$, $Y$ of $G_R$ we construct two independent
charges $r_{1,2}$ under which the variables of the theory take on
charges $0$ and $\pm 1$ (integer values are required for the lattice
construction, and magnitude no bigger than one to ensure only nearest
neighbor interactions on the lattice). One can define:
\beq
r_1 = -L_3 + R_3 -Y\ ,\qquad
r_2 = +L_3 + R_3 -Y\ ,
\eqn{rdef}
\eeq
where $Y$ is $1/2$ times the conventionally normalized $R$-charge
in four dimensions.
By writing $v_\mu$, $\lambda$ and $\mybar\lambda$ as
\beq
v_\mu \mybar\sigma_\mu = \begin{pmatrix}
\mybar z_1 &- z_2 \cr \mybar z_2& z_1 \end{pmatrix}\ ,\quad
\lambda = \begin{pmatrix}\lambda_1\cr \lambda_2 \end{pmatrix}\ ,\quad
\mybar\lambda = \begin{pmatrix}\mybar\lambda_1 & \mybar\lambda_{2} \end{pmatrix}\ ,
\eeq
where $\mybar\sigma_\mu = \{1,i\vec\sigma\}$, we arrive at the charge
assignments shown in Table~\ref{tab:rgauge}. The $N^d$ site lattice is then
constructed by assigning to each variable a position in the unit cell
dictated by its $\vec r = \{r_1,r_2\}$ charges, where $\{0,0\}$
corresponds to a site variable, $\{1,0\}$ corresponds to an oriented
variable on the $x$-link, etc. Thus from the charges in
Table~\ref{tab:rgauge} we immediately arrive at the lattice structure
shown in Fig.~\ref{twotwo}.
\EPSFIGURE[b!]{gauge_multiplets.eps,width=2.2in}
{The lattice for pure $(2,2)$ gauge theory, from
Ref.~\cite{Cohen:2003xe}. \label{twotwo}
}
\setlength{\extrarowheight}{5pt}
\TABULAR[t]{c| c c c c | c c c c | c}{
\hline
&$ z_1 $&$ \mybar z_1 $&$ z_2 $&$ \mybar z_2 $&$ \lambda_1 $&$\lambda_2 $&$ \mybar\lambda_1 $&$ \mybar\lambda_2 $& $d$
\\ \hline
$ L_3 $&$ -\half $&$ +\half $&$ +\half $&$ -\half $&$ \,\ 0 $&$ \,\ 0 $&$ -\half $&$ +\half $
&$0$\\
$ R_3 $&$ +\half $&$ -\half $&$ +\half $&$ -\half $&$+\half $&$ -\half $&$ \,\ 0 $&$ \,\ 0 $
&$0$\\
$ Y $ &$ \,\ 0 $&$ \,\ 0 $&$ \,\ 0 $&$ \,\ 0 $&$ +\half $&$ +\half $&$ -\half $&$ -\half $
&$0$\\ \hline
$ r_1 $&$ +1 $&$ -1 $&$ \,\ 0 $&$ \,\ 0 $&$ \,\ 0 $&$ -1 $&$ +1 $&$ \,\ 0 $
&$0$\\
$ r_2 $&$ \,\ 0 $&$ \,\ 0 $&$ +1 $&$ -1 $&$ \,\ 0 $&$ -1 $&$ \,\ 0 $&$ +1 $
&$0$\\ \hline}
{ The $r_{1,2}$ charges of the gauge multiplet. \label{tab:rgauge}}
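The $r_{1,2}$ rows of the table follow mechanically from \eq{rdef} applied to the Cartan charges listed above them. The short sketch below (the dictionary keys are our ad hoc names for the fields) reproduces them:

```python
# Sketch: recompute the r_1, r_2 rows of the gauge-multiplet table from
# r1 = -L3 + R3 - Y  and  r2 = +L3 + R3 - Y.
h = 0.5
charges = {                      # field: (L3, R3, Y)
    "z1":      (-h,  h,  0), "z1bar": ( h, -h,  0),
    "z2":      ( h,  h,  0), "z2bar": (-h, -h,  0),
    "lam1":    ( 0,  h,  h), "lam2":  ( 0, -h,  h),
    "lam1bar": (-h,  0, -h), "lam2bar": ( h,  0, -h),
}
expected = {                     # field: (r1, r2) as listed in the table
    "z1": (1, 0), "z1bar": (-1, 0), "z2": (0, 1), "z2bar": (0, -1),
    "lam1": (0, 0), "lam2": (-1, -1), "lam1bar": (1, 0), "lam2bar": (0, 1),
}
for name, (L3, R3, Y) in charges.items():
    r1, r2 = -L3 + R3 - Y, L3 + R3 - Y
    assert (r1, r2) == expected[name], name
```

All charges come out integer and of magnitude at most one, as required for a nearest-neighbor lattice construction.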
The orbifold lattice construction
technique also renders writing down the lattice action a simple
mechanical exercise; here we summarize the results of Ref.~\cite{Cohen:2003xe}.
The lattice variables in Fig.~\ref{twotwo} are $k$ dimensional
matrices, where Greek letters
correspond to Grassmann variables, while Latin letters
are bosons. The lattice action possesses a $U(k)$ gauge symmetry and a
single exact supercharge, which can be
realized as $Q=\partial/\partial\theta$, where $\theta$ is a Grassmann
coordinate.
To make the supersymmetry manifest, the
variables are organized into superfields as
\beq
\bfZ_{i,{\bf n}} &=& z_{i,{\bf n}} + \sqrt{2}\,\theta\,\mybar\lambda_{i,{\bf n}} \cr
\bfLambda_{\bf n}&=& \lambda_{1,{\bf n}} +\theta\,\bigl[\left(z_{i,{\bf n}}\mybar
z_{i,{\bf n}} - \mybar z_{i,{\bf n}-{\bf e}_i} z_{i,{\bf n}-{\bf e}_i} -i d_{\bf n} \right)\bigr] \cr
\bfXi_{ij,{\bf n}}&=& \lambda_{2,{\bf n}} \epsilon_{ij} + 2\,\theta \left(\mybar
z_{i,{\bf n}+{\bf e}_i}\mybar z_{j,{\bf n}} - \mybar z_{j,{\bf n}+{\bf e}_j}\mybar
z_{i,{\bf n}}\right)\ ;
\eqn{gmult}
\eeq
a sum over repeated $i$ indices being implied, where ${\bf n}$ is a lattice
vector with integer components, and
${\bf e}_i$ is a unit vector in the $i$ direction. The $\mybar z_i$
bosons are supersymmetric singlets. The lattice
action may then be written in manifestly supersymmetric form:
\beq
S& =& \frac{1}{g^2} \sum_{{\bf n}} \int\! d\theta\,
{\rm Tr\,}\biggl[\half\bfLambda_{\bf n}\partial_\theta \,\bfLambda_{\bf n}- \bfXi_{ij,{\bf n}} \bfZ_{i,{\bf n}}\bfZ_{j,{\bf n}+{\bf e}_i}\cr
&& + \bfLambda_{\bf n} \left(\bfZ_{i,{\bf n}}\mybar z_{i,{\bf n}}-\mybar
z_{i,{\bf n}-{\bf e}_i} \bfZ_{i,{\bf n}-{\bf e}_i}\right)\biggr]\ .
\eeq
The continuum limit is defined by expanding about the point in moduli
space $z_i=\mybar z_i = (1/\sqrt{2} a){\bf 1}_k$, where ${\bf 1}_k$
is the $k$ dimensional unit matrix and $a$ is identified as the
lattice spacing, and then taking $a\to 0$ with $L=N a$ and $g_2=g a$
held fixed.
An additional soft supersymmetry breaking mass term
\beq
\delta S =\frac{1}{g^2} \sum_{\bf n} a^2 \mu^2 \left(\mybar z_{i,{\bf n}} z_{i,{\bf n}} - \frac{1}{2 a^2} \right)
\eeq
may be introduced to the action in order to lift the degeneracy of the moduli and fix the
vacuum expectation value of the gauge bosons. The mass parameter $\mu$ is
chosen to scale as $\mu \sim 1/L$ so as to leave physical properties at length scales smaller
than $1/\mu$ unaffected by this modification to the action.
The lattice action has been shown to converge to the $(2,2)$ target
theory \eq{targ2} with the lattice and continuum variables
related as $z_i=\frac{1}{\sqrt{2}}\left(1/a + s_i + i v_i\right)$, where
\beq
s=\frac{s_1+is_2}{\sqrt{2}}\ ,\quad
\Psi = \begin{pmatrix} \lambda_1 \cr
\lambda_2 \end{pmatrix}\ ,\quad
\mybar\Psi = \begin{pmatrix} \mybar\lambda_1 & \mybar\lambda_2 \end{pmatrix}
\eeq
in a particular basis for the Dirac $\gamma$ matrices \cite{Cohen:2003xe}.
\section{Adjoint matter}
\label{sec:3}
We now turn to supersymmetric lattices for gauge theories with matter
multiplets, once again employing the orbifold technique. To illustrate
the general structure of these theories on the lattice, we first
consider as our target theory a $(2,2)$ gauge theory with gauge group
$G=U(k)$ (with $k=1$ a possibility) and $N_f$ flavors of adjoint matter fields.
The parent theory is a four dimensional
${\cal N}=1$ theory with gauge group $\tilde G = U(kN^2)$ and chiral
superfields $\bfPhi^a$, $a=1,\ldots,N_f$ transforming as adjoints under
$\tilde G$, and a superpotential $W(\bfPhi)$ that preserves the $U(1)$
$R$-symmetry.
The orbifold projection of the matter fields follows a similar path
from that outlined in the previous section for the gauge
multiplet. Each chiral field $ \bfPhi$ from the parent theory
contributes a boson $ A$, auxiliary field $F$, and two component fermion
$\psi_i$;
$\mybar{\bfPhi}$ contributes barred versions of the same. Once
again, the placement of these variables on the lattice is entirely
dictated by their transformation properties under the global
$SU(2)_R\times SU(2)_L \times U(1)$ symmetry of the parent theory,
which we give in Table~\ref{tab:rmatter}. An ambiguity is apparent in
the assignment of the $U(1)$ symmetry to each field, and we have
assigned in the parent theory a $U(1)$ charge $y$ to our generic
$\bfPhi$. Without a superpotential, there is freedom to assign
to each chiral superfield an independent value for $y$; however, it is
apparent from Table~\ref{tab:rmatter} that to obtain a sensible
lattice with only nearest neighbor interactions ({\it i.e.} all $r_i$
charges equal to $0$ or $\pm 1$), we are constrained
to choose $y=0$ or $y=1$. The result of this choice is shown in
Fig.~\ref{fig:matter}; in fact, we will need both types of
matter multiplets, since the superpotential $W$ must have net charge
$Y=1$.
\TABULAR[t]{c|cc|cccc| c c}{
\hline
&$ A $&$ \mybar A $&$ \psi_1 $&$ \psi_2 $&$ \mybar\psi_1 $&$ \mybar\psi_2 $ & $F$ &
$\mybar F$
\\ \hline
$ L_3 $&$ \,\ 0 $&$ \,\ 0 $&$ \,\ 0 $&$ \,\ 0 $&$ -\half $&$ +\half $
&0&0\\
$ R_3 $&$ \,\ 0 $&$ \,\ 0 $&$+\half $&$ -\half $&$ \,\ 0 $&$ \,\ 0 $
&0&0\\
$ Y $&$ +y $&$ -y $&$ y-\half $&$ y-\half $&$ -y+\half $&$ -y+\half $
&$y-1$&$-y+1$\\ \hline
$ r_1 $&$ -y $&$ +y $&$ 1-y $&$ -y $&$ +y $&$ -1+y $
&$y-1$&$-y+1$\\
$ r_2 $&$ -y $&$ +y $&$ 1-y $&$ -y $&$ -1+y $&$ +y $
&$y-1$&$-y+1$\\ \hline}
{ The $r_{1,2}$ charges of the matter multiplet. \label{tab:rmatter}}
We can organize the chiral multiplet $\bfPhi$ of the parent theory for either
case $y=0,1$ into
lattice superfields:
\beq
{\bf A}_{\bf n} &=& A_{\bf n} + \sqrt{2} \theta \psi_{2,{\bf n}} \cr
\bfPsi_{{\bf n}} &=& \psi_{1,{\bf n}} - \sqrt{2} \theta F_{\bf n} \cr
\mybar\bfPsi_{i,{\bf n}} &=& \mybar\psi_{i,{\bf n}} + 2 \theta
\,\epsilon_{ij}(\mybar A_{{\bf n}+{\bf e}_j}
\mybar z_{j,{\bf n}+ y\,{\bf e}_{12}} - \mybar z_{j,{\bf n}} \mybar A_{\bf n})
\cr
\mybar\bfF_{\bf n} &=& \mybar{F}_{\bf n} - 2 \theta
\,\biggl(\mybar A_{{\bf n}+{\bf e}_{12}} \lambda_{2,{\bf n}+ y\,{\bf e}_{12}} -
\lambda_{2,{\bf n}}\mybar A_{{\bf n}}\cr
&&+\epsilon_{ij}\epsilon_{ik}\left[\mybar\psi_{k,{\bf n}+{\bf e}_j}\mybar
z_{j,{\bf n}+ y\,{\bf e}_{12}} - \mybar z_{j,{\bf n}+{\bf e}_i}
\mybar\psi_{k,{\bf n}}\right]\biggr)
\eqn{mmult}
\eeq
where ${\bf e}_{12} =({\bf e}_1+{\bf e}_2)$ and $\mybar A$ is a supersymmetric singlet.
Note the appearance of $\lambda_2$ and $\mybar z_i$ from the gauge
supermultiplet \eq{gmult}, which implies nontrivial
consistency conditions which can be shown to hold. In the
Appendix we make contact between the rather unfamiliar multiplet
structure in \eq{mmult}, and the more familiar chiral superfields
from $N=1$ supersymmetry in the $3+1$ dimensional continuum.
In terms of the above fields, the orbifold projection of the parent
theory produces the following lattice kinetic Lagrangian for the matter:
\beq
{\cal L}_{\text{kin}} &=& \frac{1}{g^2}\int\! d\theta\, {\rm Tr\,} \big[\epsilon_{ij}
\mybar\bfPsi_{i,{\bf n}}\left(\bfZ_{j,{\bf n}+ y\,{\bf e}_{12}}{\bf A}_{{\bf n}+{\bf e}_j}-{\bf A}_{\bf n}
\bfZ_{j,{\bf n}}\right)\cr &&
+\mybar A_{\bf n}\left(\bfLambda_{{\bf n}+ y\,{\bf e}_{12}}{\bf A}_{\bf n} -
{\bf A}_{\bf n}\bfLambda_{\bf n}\right) -\frac{1}{\sqrt{2}} \mybar\bfF_{\bf n}
\bfPsi_{\bf n}\big]\ .
\eqn{adjointkin}
\eeq
The superpotential contributions for the theory are
\beq
{\cal L}_W &=& \frac{1}{g^2} {\rm Tr\,}\biggl[\left( \int\! d\theta\, \frac{1}{\sqrt{2}} \bfPsi^a W_a({\bf A}) \right)
+ \mybar \bfF^a \mybar W_a(\mybar A) -
\mybar\bfPsi_1^a\mybar\bfPsi_2^b \mybar W_{ab}(\mybar A)\biggr]
\eqn{superpot}
\eeq
where $W({\bf A})$ is a polynomial in the ${\bf A}$ fields with $R$-charge
$y=1$ (and $\mybar W(\mybar A)$ is its conjugate), while $W_a=\partial
W/\partial A^a$ and $W_{ab}=\partial^2
W/\partial A^a\partial A^b$. The space-time dependence
has been omitted as it is implied by the gauge invariance
of the Lagrangian; each term in the superpotential should form a closed loop
on the space-time lattice. One can verify by explicit calculation
that the $\theta$ dependence cancels between the second and third
terms after summing over lattice sites, and therefore the action
is annihilated by $Q=\partial/\partial\theta$ and is supersymmetric.
\EPSFIGURE[t]{matter.eps, width=4.0in}
{\sl Placement of the matter variables within the unit cell
for the two choices $y=0$ and $y=1$ for the $U(1)$ $R$-charge.
\label{fig:matter}}
As an example of how to interpret the above terms, we consider a two flavor
model ($N_f=2$) and the superpotential $W(\bfPhi)=c\,{\rm Tr\,}\bfPhi^1\bfPhi^2$.
The superpotential must carry charge $Y=1$, which can be satisfied by
choosing for the superfields $R$-charges $y_1=1$
and $y_2=0$ for $\bfPhi^1$ and $\bfPhi^2$ respectively. These charge assignments dictate the lattice
representation for these superfields, as shown in Fig.~\ref{fig:matter}.
The first term in \eq{superpot}, for example, is then
\beq
{\rm Tr\,} \bfPsi^a W_a({\bf A}) &=&
c\,{\rm Tr\,}\left(\bfPsi^1_{{\bf n}}{\bf A}^2_{{\bf n}} + {\bf A}^1_{{\bf n}}\bfPsi^2_{{\bf n}} \right)
\eeq
which is seen to be gauge invariant since $\{{\bf A}^1,\bfPsi^1\}$
are $\{-\text{diagonal, site}\}$ variables, while
$\{{\bf A}^2,\bfPsi^2\}$ are $\{\text{site, +diagonal}\}$ variables.
The continuum limit of the above theory is defined as in the previous
section for the pure gauge theory, and the desired $(2,2)$ theory
with matter results at the classical level.
An analysis of the continuum limit, including quantum corrections
can be found in the Appendix.
In the case $k=1$, the
continuum gauge symmetry is $U(1)$ and one obtains
a theory of neutral matter interacting via a superpotential.
\section{More general matter multiplets}
\label{sec:4}
More general theories of matter fields interacting via gauge interactions
and a superpotential may be obtained by orbifolding the parent
theory of \S\ref{sec:3} by some $N$-independent discrete symmetry, before orbifolding
by $Z_N\times Z_N$. Here we give several examples.
{\it Example 1: $SU(2)\times U(1)$ with charged doublets.}
Consider the parent theory with a $U(3N^2)$ gauge symmetry,
adjoint superfields $\bfPhi^1$ and $\bfPhi^2$, and the superpotential
$W(\bfPhi)=c\,{\rm Tr\,} \bfPhi^1\bfPhi^1\bfPhi^2$. Here, we choose $y_1=0$ and $y_2=1$
as R-charges for our superfields. This theory has
a $\bfPhi^a\to (-1)^a \bfPhi^a$ symmetry, and so we can impose the additional
orbifold condition
$P\bfPhi^a P = (-1)^a \bfPhi^a$ and $P \bfV P = \bfV$
where $\bfV$ is the vector supermultiplet of the parent theory and $P$ is
a $ U(3N^2)$ matrix with $\{1,1,-1\}$ along the diagonal, where each
entry is an $N^2$ dimensional unit matrix. This
projection breaks the $U(3N^2)$ gauge symmetry down to $U(2N^2)\times U(N^2)$,
under which the projected matter field $\bfPhi^1$ decomposes as
$(\Yfund,\Ybarfund) \oplus (\Ybarfund,\Yfund)$ and $\bfPhi^2$ decomposes as
$({\bf adj},1) \oplus (1,{\bf adj}) $. We then
orbifold the parent theory by $Z_N\times Z_N$, resulting in a lattice
with an $SU(2)\times U(1)\times U(1)$ gauge theory, with matter
multiplets transforming as
$3_{0,0} \oplus 2_{\pm 1/2,0} \oplus 1_{0,0} \oplus 1_{0,0}$
in the continuum limit.
The doublet couples to both the triplet and one of
the singlets in the superpotential. Evidently the
second $U(1)$ gauge sector decouples from the theory
since no fields carry that charge.
It is possible to generalize the above construction to fundamental matter
transforming as $\Yfund_{+1} \oplus \mybar\Yfund_{-1}$ under $SU(M) \times U(1)$ gauge
transformations by starting with a $U((M+1)N^2)$ theory broken down to $U(MN^2) \times U(N^2)$.
{\it Example 2: $U(1)^k$ quiver with Fayet-Iliopoulos terms.}
A different sort of theory may be obtained by considering a parent
theory with a $U(kN^2)$ gauge symmetry and a single matter adjoint $\bfPhi$
with a superpotential $W(\bfPhi)=(c/k)\, {\rm Tr\,} \bfPhi^k$. The
initial orbifold condition is
$\bfV = P \bfV P^\dagger$ and $\bfPhi = \omega P\bfPhi P^\dagger$
on the parent theory, where $\omega = e^{2\pi i/k}$ and $P$ is the diagonal $kN^2$
dimensional ``clock'' matrix
$\text{diag}\{1,\omega,\omega^2\ldots,\omega^{k-1}\}$, each
entry appearing $N^2$ times. This projection produces a quiver theory,
breaking the gauge symmetry
down from $U(kN^2)$ to $U(N^2)^k$, and producing bifundamental matter
fields $\bfPhi^a$, with $a=1,\ldots,k$ transforming as $(\Yfund,\Ybarfund)$ under
$G_a \times G_{a+1}$, where $G_a = U(N^2)$ and $G_{k+1} \equiv G_1$. The superpotential
becomes $W(\bfPhi)=c{\rm Tr\,} \bfPhi^1\cdots\bfPhi^k$.
One can assign $y=1$ to one of the $k$ matter fields, and $y=0$ to the
others. A subsequent $Z_N\times Z_N$ projection then produces a lattice
theory with a $U(1)^k$ gauge symmetry, where the descendants of the parent
multiplet $\bfPhi^a$ carry $U(1)$ charges
$q_b = (\delta_{ab} -\delta_{a,b-1})$, with $q_{k+1}\equiv q_1$.
One can also add Fayet-Iliopoulos terms to the action given by
$-i\xi\,\int d\theta\, \sum_{\bf n}\,{\rm Tr\,} \bfLambda^a_{\bf n}$, as is apparent from
\eq{gmult}. Such a theory is directly related to Calabi-Yau manifolds,
as discussed in \cite{Witten:1993yc}, and would be interesting to
study numerically.
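The block structure produced by this clock-matrix projection can be made concrete in a few lines. In the sketch below (taking the $N^2$-dimensional blocks to be $1\times 1$ for simplicity; all names are ours), the condition $\bfPhi = \omega P \bfPhi P^\dagger$ forces the surviving entries onto the superdiagonal, i.e. the bifundamental positions $(a, a+1 \bmod k)$:

```python
import cmath

# Sketch: with P = diag(1, omega, ..., omega^{k-1}) (N^2 = 1 blocks),
# (P Phi P^dagger)_{ij} = omega^{i-j} Phi_{ij}, so the orbifold condition
# Phi = omega P Phi P^dagger keeps only entries with omega^{1+i-j} = 1,
# i.e. j = i+1 mod k: bifundamentals of G_i x G_{i+1}.
k = 4
omega = cmath.exp(2j * cmath.pi / k)
surviving = [(i, j) for i in range(k) for j in range(k)
             if abs(omega ** (1 + i - j) - 1) < 1e-9]
assert surviving == [(i, (i + 1) % k) for i in range(k)]
```

This is exactly the quiver structure described above, with the index wrapping around so that $G_{k+1}\equiv G_1$.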
It should be apparent that although we focused on a $U(1)^k$ quiver,
any $U(p)^q$ quiver can be constructed in a similar manner. We have
not found a way to construct lattices for arbitrary matter representations.
\acknowledgments
We thank Allan Adams and Mithat \"Unsal for enlightening conversations. This work was
supported by DOE grant DE-FG02-00ER41132.
\section{Introduction}
\label{sec1}
Neural machine translation (NMT) has recently shown promising results
\citep{wu2016google,luong2015effective,bastan2017neural}.
In statistical machine translation (SMT),
the problem is decomposed into sub-models and each individual model is trained separately, while NMT is capable of training an end-to-end model.
For instance, in SMT the reordering model is a feature that is trained separately and is used jointly with other features to improve the translation, while in NMT it is assumed that the model will learn the order of the words and phrases itself.
Sequence-to-sequence NMT models consist of two parts: an encoder that encodes the input sequence into hidden states, and a decoder that decodes the hidden states to produce the output sequence \cite{cho2014properties,bahdanau2014neural}. The encoder is a bidirectional RNN: the source sentence is processed once from beginning to end and, in parallel, once from end to beginning.
One idea that has not been well explored in NMT so far is the use of existing reordering models from SMT. We propose to add another layer to the encoder that includes reordering information.
The intuition behind our proposal comes from the improvement achieved by the bidirectional encoder. If processing the source sentence in both directions helps the sequence-to-sequence model learn a better representation of the context in its hidden states, then adding another layer that encodes the order in which the input words appear in the output sequence may also help the model learn better context vectors and hidden states. In this paper we investigate the hypothesis that an additional encoder layer that processes a preordered sentence can outperform encoder architectures with two or three plain RNN layers. Our experiments show that adding reordering information to NMT improves translation quality when data is scarce.\\
There have been a few attempts to improve SMT using neural reordering models \cite{cui2015lstm,li2014neural,li2013recursive,aghasadeghi2015monolingually}. In \cite{zhang2017incorporating}, three distortion models were studied to incorporate word reordering knowledge into NMT; the reordering information was mainly used to improve the attention mechanism.\\
In this paper, we use a soft reordering model to improve bidirectional attention-based NMT. The model consists of two parts: the first creates the soft reordering information from the input and output sequences, and the second uses this information in the attention-based NMT.\\
The rest of the paper is organized as follows: Section 2 reviews sequence-to-sequence NMT, Section 3 proposes the preordered model, Section 4 presents the experiments and results, and Section 5 concludes the paper.
\section{Sequence-to-Sequence NMT}
\citet{bahdanau2014neural} proposed a joint translation and alignment model which can both learn the translation and the alignment between the source and the target sequence.
In this model, at each time step the decoder finds the most probable output word $y_i$ given the previous output words $y_1 ,..., y_{i-1}$ and the input sequence $X$ as follows:
\begin{equation}
p(y_i | y_1 , ... , y_{i-1},X) = softmax(g(y_{i-1} , s_i , c_i))
\end{equation}
where $X$ is the input sequence,
$g$ is a nonlinear function, $s_i$ is the hidden state, and $c_i$ is the context vector used to predict output $i$.
$s_i$ is the hidden state at the current step, defined as follows:
\begin{equation}
s_i = f(s_{i-1} , y_{i-1} , c_{i})
\end{equation}
The notation $c_i$ denotes the context vector for output word $y_i$.
The context vector is the weighted sum of the hidden states, as follows:
\begin{equation}
c_i = \sum_{j=1}^{T} \alpha_{ij}h_j
\end{equation}
The weights in this model are the normalized outputs of the alignment model, a feed-forward neural network. It takes $s_{i-1}$ and $h_j$ as input and outputs a score $e_{ij}$, which is then normalized and used as the weight for computing the context vector as follows:
\begin{equation}
\alpha_{ij} = \frac{\exp(e_{ij})}{\sum_{k=1}^{T} \exp(e_{ik})}
\end{equation}
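The attention step described by these equations can be sketched numerically. The scoring network below (and the parameter names `W_a`, `U_a`, `v_a`) are illustrative assumptions in the usual Bahdanau style, not taken from this paper:

```python
import numpy as np

def attention_context(s_prev, H, W_a, U_a, v_a):
    """One decoder step of additive attention (illustrative sketch).

    s_prev : (d,)   previous decoder hidden state s_{i-1}
    H      : (T, d) encoder hidden states h_1..h_T
    W_a, U_a : (d, d) projections, v_a : (d,) scoring vector (assumed names)
    """
    # alignment scores e_ij from a small feed-forward network
    e = np.tanh(H @ U_a.T + s_prev @ W_a.T) @ v_a       # shape (T,)
    # normalized weights alpha_ij (numerically stable softmax)
    alpha = np.exp(e - e.max())
    alpha /= alpha.sum()
    # context vector c_i = sum_j alpha_ij * h_j
    c = alpha @ H                                       # shape (d,)
    return alpha, c
```

The returned `alpha` is a probability distribution over source positions, and `c` is the weighted sum of encoder states used to predict the next output word.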
In the encoder, a bidirectional neural network is used to produce the hidden states $h$. For each input word $x_i$, the forward and backward hidden states are computed as follows, respectively:
\begin{equation}
\label{eq:forward}
\overrightarrow{h_i} = \overrightarrow{f}(\overrightarrow{h_{i-1}},x_i)
\end{equation}
\begin{equation}
\label{eq:backward}
\overleftarrow{h_i} = \overleftarrow{f}(\overleftarrow{h_{i-1}},x_i)
\end{equation}
The forward and backward hidden states are then concatenated to produce the hidden state $h_i$ as follows:
\begin{equation}
h_i = [ \overrightarrow{h_i} , \overleftarrow{h_i}]
\end{equation}
\section{Preordered RNN}
The attention-based model addresses some of the shortcomings of the simple encoder-decoder model for machine translation. It works well when plenty of data is available, but with scarce data it lacks sufficient information to tune all of its parameters. Injecting additional information extracted from the input data into the model can then yield better results.
In this paper, we propose a model that uses reordering information to address the shortage of data. Adding this information can significantly improve attention-based NMT.
\subsection{Building Soft Reordered Data}
Adding a preordered layer to the encoder of the sequence model boosts translation quality, since this layer adds information that the model has not previously seen.
The preordered data is the source sentence reordered using information from the target sentence. Reordering models have been used in statistical machine translation, where they improve translation quality~\cite{visweswariah2011word,tromble2009learning,khalilov2010source,collins2005clause,xia2004improving}. \\
To obtain the soft reordering model, we first need the word alignment between the source and target sentences; we then convert the alignment into a reordering using heuristic rules. The reordered sequence model is built upon the alignment model: the alignment between the input and output sequences is first derived using GIZA++ \cite{och03:asc}. The main difference between reordering and alignment is that alignment is a many-to-many relation, while reordering is one-to-one: one word in the input sequence can be aligned to many words in the output sequence, but it can be reordered to only one position. Another difference is that alignment is a relation from the input-sequence space to the output-sequence space, while reordering is a relation from the input-sequence space to itself. We therefore propose the following heuristic rules to convert the alignment relation into a reordering relation:
\begin{itemize}
\setlength\itemsep{-0.2em}
\item If a word $x$ in the input sequence is aligned to one and only one word $y$ in the output sequence, the position of $x$ in the reordering model will be the position of $y$.
\item If a word $x$ in the input sequence is aligned to a series of words in the output sequence, the position of $x$ in the reordering model will be the position of the middle word in the series\footnote{We arbitrarily round down. For example, the middle position among 1, 3, 5, 7 is the 3rd position.}.
\item If a word in the input sequence is not aligned to any word in the output sequence, its position is the average of the positions of the previous and next words.
\end{itemize}
These heuristic rules are inspired by those proposed in \cite{devlin2014fast}. The difference is that they align one and only one input word to every output word, whereas we align each word in the input sequence to one and only one position in the same space. \\
The order in which these rules are applied matters: we apply the first rule, then the second, and finally the third to all applicable words. If a word is aligned to a position that is already occupied, we align it to the nearest empty position, arbitrarily preferring the left position over the right whenever both are equally near. In the end, each word is aligned to exactly one position, but some positions may remain empty; we simply remove the empty positions between words to map the sparse output space onto the dense input space. With these rules we build the reordered training data used to train the model. In the next section, we show how the reordered data is used in bidirectional attention-based NMT.
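The alignment-to-reordering conversion can be sketched in code. This is a simplified illustration only: the tie-breaking details ("nearest empty position", left preference) are reduced here to a left-preferring stable sort, which also removes the empty positions by dense ranking:

```python
def align_to_reorder(alignment, src_len):
    """Convert a many-to-many alignment into a one-to-one reordering
    following the three heuristic rules (simplified sketch).

    alignment : dict {source position: list of aligned target positions}
    Returns the source positions listed in their new (target-side) order.
    """
    desired = {}
    # Rules 1-2: take the (middle) aligned target position, rounding down.
    for s in range(src_len):
        tgt = sorted(alignment.get(s, []))
        if tgt:
            desired[s] = tgt[(len(tgt) - 1) // 2]
    # Rule 3: unaligned words average the neighbours' positions.
    for s in range(src_len):
        if s not in desired:
            prev, nxt = desired.get(s - 1), desired.get(s + 1)
            if prev is None and nxt is None:
                desired[s] = s          # fallback: keep original position
            elif prev is None:
                desired[s] = nxt
            elif nxt is None:
                desired[s] = prev
            else:
                desired[s] = (prev + nxt) / 2
    # Sorting by (position, source index) prefers the left word on ties
    # and implicitly drops empty positions (dense ranking).
    return sorted(range(src_len), key=lambda s: (desired[s], s))
```

For example, `align_to_reorder({0: [2], 1: [0], 2: [1, 3, 4]}, 3)` places word 1 first (target position 0), then word 0, then word 2 (middle of its aligned series is position 3).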
\subsection{Three-layer Encoder}
The bidirectional encoder has two layers: the first consists of the forward hidden states, built by reading the input sequence from left to right, and the second consists of the backward hidden states, built by reading the input sequence from right to left. We add a third hidden layer to the encoder, built by reading the input sequence in the reordered order, as follows:
\begin{equation}
hr_i = f(hr_{i-1},xr_i)
\end{equation}
Here $xr_i$ is the word at position $i$ of the reordered data and $hr_i$ is the hidden representation of $xr_i$. The function computing $hr$ is the same as in equations \ref{eq:forward} and \ref{eq:backward}.
The hidden representation $h_i$ is then computed by concatenating the forward, backward, and reordering hidden layers as follows:
\begin{equation}
h_i = [ \overrightarrow{h_i} , \overleftarrow{h_i}, hr_{i}]
\end{equation}
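As an illustrative sketch (not the authors' implementation), the three-layer encoder can be written with plain NumPy RNNs; all weight matrices below are untrained stand-ins, and feeding the reordered sentence through the third RNN is one literal reading of the concatenation equation:

```python
import numpy as np

def simple_rnn(X, Wx, Wh):
    """Plain tanh RNN over the rows of X; returns all hidden states."""
    h = np.zeros(Wh.shape[0])
    states = []
    for x in X:
        h = np.tanh(Wx @ x + Wh @ h)
        states.append(h)
    return np.stack(states)

def three_layer_encoder(X, reorder, params):
    """h_i = [forward_i ; backward_i ; hr_i]: the first two layers read the
    original sentence, the third reads the preordered sentence.
    `params` holds three (Wx, Wh) pairs (illustrative, untrained)."""
    (Wxf, Whf), (Wxb, Whb), (Wxr, Whr) = params
    Hf = simple_rnn(X, Wxf, Whf)                  # left to right
    Hb = simple_rnn(X[::-1], Wxb, Whb)[::-1]      # right to left
    Hr = simple_rnn(X[reorder], Wxr, Whr)         # reordered order
    return np.concatenate([Hf, Hb, Hr], axis=1)   # shape (T, 3d)
```

The result is a sequence of hidden states of dimension $3d$ that the attention mechanism consumes in place of the usual bidirectional states of dimension $2d$.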
\section{Experiments}
\begin{table}[t!]
\begin{center}
\begin{tabular}{c| c c c }
\multirow{2}{2em}{Corpus} &\multirow{2}{2em}{\#sents}& \multicolumn{2}{c}{\#words} \\
&&English&Persian \\ \hline
Training &26142&264235&242154 \\
Development &276&3442&3339 \\
Test & 250 &2856&2668 \\% & 1045 & 34259&32764 \\
\end{tabular}
\end{center}
\caption{\label{table:dataset} Statistics of the data set}
\end{table}
The proposed model has been evaluated on an English-Persian translation data set. We believe that adding reordering information results in a better model in the case of low-resource data. We evaluate translation quality with BLEU \cite{papineni2002bleu} and TER \cite{snover2006study}.
For implementation we use the Theano \cite{bergstra2011theano} framework.
\subsection{Dataset}
We use Verbmobil \cite{bakhshaei2010farsi}, an English-Persian data set suitable for showing the effectiveness of the model on scarce data resources. Detailed information about the data set is provided in Table~\ref{table:dataset}, where the number of words is shown in column \#words and the number of sentences in each corpus in column \#sents.
\subsection{Baseline}
The baseline model for our experiments is the bidirectional attention-based neural network \cite{bahdanau2014neural} explained in Section 2. Among the various proposed improvements to the basic attention-based decoder, we use guided alignment \cite{chen2016guided}.
\subsection{Reordering Development and Test Set}
For building the reordered training set, we use the alignment model and the heuristic rules. For the development and test sets, since we do not have access to the target language, we use the preordering algorithm proposed in \cite{nakagawa2015efficient}, an improved version of a preordering algorithm based on Bracketing Transduction Grammar (BTG). Briefly, this algorithm builds a tree over the words in which each node has a feature vector and a weight vector; among all possible trees, the one maximizing the weighted sum of the feature vectors is chosen as the reordering tree, and a projection function converts the tree into the reordered output. \\
This algorithm also requires a part-of-speech (POS) tagger and word classes. For Persian POS tagging we use the CMU NLP Farsi tool \cite{feely2014cmu}, and for English we use the Stanford POS tagger \cite{toutanova2003feature}. For word classes we use the GIZA++ word classes produced as a by-product of the alignment.
\subsection{Results}
\begin{table}[t!]
\begin{center}
\begin{tabular}{c c c c}
\multicolumn{4}{c}{Reordering Method} \\
\hline
Training Set &Dev/Test Set & BLEU & TER \\\hline
HG & BI&30.53&53.25 \\ \hline
BI & BI &27.91& 56.68 \\ \hline
BG & BG &25.93& 58.1 \\ \hline
\end{tabular}
\end{center}
\caption{\label{table:results} The comparison between different reordering methods on Verbmobil data. HG means the data reordered using alignment model with GDFA and heuristic rules, BI and BG means the data is reordered on intersection alignment and GDFA alignment, respectively, both using \cite{nakagawa2015efficient} algorithm.}
\end{table}
We analyzed our model with different configurations. First, we used different methods to reorder the training, development, and test sets. The best results of the different combinations for building the reordered data are shown in Table~\ref{table:results}. HG means the reordered data was built with heuristic rules and GDFA alignment \cite{koehn2005europarl}; BI means the algorithm of \cite{nakagawa2015efficient} with intersection alignment was used; BG means GDFA alignment with the reordering algorithm of \cite{nakagawa2015efficient} was used.
In Table~\ref{table:baseline} we compare the best 3-layer network with two different 2-layer networks. The 3-layer network has three layers in the encoder: the first two are the forward and backward RNNs, and the third is another RNN trained either on the reordered source sentence or on the original sentence. The 2-layer network refers to the bidirectional attention-based NMT described in Section 2; this model is trained once with the original sentences and once with the reordered sentences.
As we see, reordering the input improves the model, which shows that the added information is useful. The best 3-layer model exploits both the reordering information and the original word order, and thus improves the translation model significantly. We also see that adding just a simple repeated layer to the bidirectional encoder improves the model, though not as much as the reordered layer does. Finally, the ensemble of different models achieves the best results. \\
There are several interpretations of these results. Because NMT has many parameters, it is difficult to learn all of them correctly from scarce data, so adding explicit information derived from the same data helps the model learn the parameters better. In addition, although we expect all the statistical features used in SMT to be learned automatically by NMT, it cannot learn them as well as SMT does.
\begin{table}[t!]
\begin{center}
\begin{tabular}{|l|l|l|l|}
\multicolumn{4}{c}{Reordering Method} \\ \hline
Data set & Model & BLEU & TER \\ \hline
\multirow{5}{*}{ En $\rightarrow$ Pr} & Baseline SMT &30.47 & -- \\
&Baseline NMT & 27.42 & 50.78 \\
&3-layer RpL & 27.58&50.04 \\
&2-layer RI & 29.6&50.96 \\
&3-layer RL & 31.03&47.5 \\
&\textbf{Ensemble}&\textbf{32.74}&\textbf{46.4} \\ \hline
\multirow{5}{*}{Pr $\rightarrow$ En}&Baseline SMT&26.91& -- \\
&Baseline NMT & 26.12 & 55.87 \\
&3-layer RpL & 26.38&57.42 \\
&2-layer RI & 27.52&54.12 \\
&3-layer RL & 30.53&53.25 \\
&\textbf{Ensemble}&\textbf{32.17}&\textbf{52.12} \\ \hline
\end{tabular}
\end{center}
\caption{\label{table:baseline} The comparison between different models. Baseline SMT is the result of statistical machine translation. Baseline NMT is the bidirectional attention-based neural network with guided alignment \cite{bahdanau2014neural,chen2016guided}. 2-layer RI is the basic model with reordered input. 3-layer RL is the model proposed in this paper. 3-layer RpL is a 3-layer model with two forward layers and one backward layer (no reordering layer). The ensemble model is the combination of the different models.}
\vspace{-1em}
\end{table}
\section{Conclusion}
In this paper we analyzed adding reordering information to NMT. NMT is powerful because it translates the source language into the target language without breaking the problem into sub-problems. We proposed a model that uses explicit information to cover hidden features such as reordering. The improvement is the result of adding extra information to the model, which helps the neural network learn its parameters better in the case of scarce data.
\section{Introduction}
\label{sec:intro}
Cone regression analysis is a valuable alternative to more traditional parametric-regression models in all cases where the functional relationship between the response (dependent) and the explanatory (independent) variables is unknown and nonlinear and the constraints are a set of linear inequalities. Several important statistical problems, including isotonic, concave and constrained spline regression, or ANOVA under partial orderings, can be seen as particular instances of the more general cone regression problem. Cone regression admits several formulation approaches and implementation strategies, whose choice severely impacts numerical performance. However, due to the little exposure to the topics of optimization theory in modern-day statistics, many optimization and numerical approaches are commonly ignored by statisticians. This paper is a contribution to filling this gap in the literature, addressing the fundamentals of cone regression from a theoretical and practical point of view. With the goal of examining comparisons and numerical issues in depth, we focus in particular on the concave regression problem. In spite of its theoretical simplicity, concave regression offers a good basis to discuss the fundamentals of cone regression and the related numerical issues, since the number of constraints increases linearly with the data size.
The problem of concave regression is to estimate a regression function subject to concavity constraints represented by a set of linear inequalities. Brought to the attention of the scientific community by micro-economists interested in estimating production functions \cite{Hildreth1954Point,Dent1973Note,Holloway1979Estimation}, the problem of concave regression arises not only in microeconomics (indirect utility, production or cost functions, Laffer curve) but also in medicine (dose-response experiments) and biology (growth curves, hazard and failure rates in survival analysis). First addressed by Hildreth in 1954 \cite{Hildreth1954Point}, the search for efficient methods for solving large concave regression problems is still an open issue today. This may appear quite surprising considering the noticeable advances in convex optimization since then, but it can probably be understood when considering that most efforts have been devoted to theoretical issues such as generalizations and convergence, while comparatively little attention has been paid to efficiency and numerical performance in practice \cite{Perkins2003Convergence,Gould2008How,Censor2009Effectiveness}.
In this paper, we formulate the cone regression problem through different optimization approaches, highlight similarities and differences between the various algorithms reviewed, propose several improvements to enhance stability and to bound the computational cost, and estimate the expected performance of the available algorithms, establishing in particular which is the most competitive technique for solving large instances of the problem. Finally, in the light of this study, we give recommendations for further research.
In section \ref{sec:statement}, we state formally the problem of cone regression also introducing some basic notations and results that will be used thoroughly.
In section \ref{sec:StateOfArt} we survey the state of the art distinguishing between the class of algorithms with asymptotic convergence and the class of algorithms with time finite convergence. In section \ref{sec:experiments} we make a numerical comparison of performances and finally, in section \ref{sec:conclusions}, we draw some concluding remarks.
\section{Statement of the problem, basic notations and basic facts}
\label{sec:statement}
The aim of a regression analysis is to produce a reasonable estimate of the unknown response function $f$, which can be modeled as
\begin{equation}
y = f(z) + \epsilon
\label{regressionModel}
\end{equation}
where $z \in \mathcal{R}$ is the explanatory (independent) variable, $y \in \mathcal{R}^d$ is the response (dependent) random variable, and $\epsilon$ is an error term, which is usually assumed to be a mean zero random variable.
Typically, one has observations on $y$ and $z$ for $n$ selected values of $z$. For each level of input, say $z_i$, there may be several trials and corresponding observations of output $y_i$.
Let $T_i$ be the number of trials at input level $z_i$ and let $y_{it}$ be the observed output for the $t$-th trial at this level. Then we have
\begin{equation}
y_{it} = f(z_i) + \epsilon_{it}, \quad i=1,...,n \quad t=1,...,T_i
\end{equation}
Inference about the response function may be drawn by assuming that the function $f(z)$ can be approximated by some given algebraic form with several unknown parameters to be estimated from the data. However, the difficulty with this procedure is that the inferences often depend critically upon the algebraic form chosen. Alternatively, one may know some properties of the relation being studied but not have sufficient information to put the relation into any simple parametric form. In this case, a nonparametric approach is more appropriate.
Let $x_i$ be the expected value of output at input level $z_i$:
\begin{equation}
x_i = f(z_i) \quad i=1,...,n
\end{equation}
Estimates of $x_i$ can be derived by the method of maximum likelihood, or by the method of least squares or other formulations.
If there were no a priori restriction on $f$, then the maximum likelihood estimate of $x_i$ would just be the mean of the observed output at input level $z_i$, that is
\begin{equation}
\tilde{x}_i = \frac{1}{T_i} \sum_{t=1}^{T_i} y_{it}\quad i=1,...,n
\end{equation}
Instead, since in the cone regression problem the known property of the regression function $f$ can be expressed by a set of linear inequalities, to obtain the maximum likelihood estimates, the likelihood function should be maximized subject to the linear inequality constraints.
Formally, given a vector of observations $y \in \mathcal{R}^n$ and a vector of weights $w \in \mathcal{R}^n$, corresponding to the independent variable values $z_1 < z_2 <...< z_n$,
the problem of cone regression is to estimate the function closest to the dataset via least squares subject to a set of linear inequality constraints, by solving
\begin{equation}
\label{regression}
\hat x = \operatornamewithlimits{argmin}_{\lbrace x'' \leq 0 \rbrace} \|x-y\|^2_{2,w}
\end{equation}
$$\mathrm{with}\quad
\|x-y\|^2_{2,w} = \sum_{i=1}^n w_i(y_i - x_i)^2$$
Denoting by $\mathcal{K}_i$ those vectors that satisfy the linear inequality constraints for a fixed $i$, then $\mathcal{K}_i\neq \varnothing$ is a closed convex set in $\mathcal{R}^n$ and the feasibility set $\mathcal{K}$ can be written as the nonempty intersection of a family of closed subsets $\mathcal{K}_i \subset \mathcal{R}^n$. Being the intersection of closed convex sets, the set $\mathcal{K}$ is also a closed convex set. More precisely, since each $\mathcal{K}_i$ is an half-space which contains the origin, the feasibility set $\mathcal{K}$ is a convex polyhedral cone.
In matrix form, $\mathcal{K}$ can be written as $\mathcal{K} = \lbrace x: Ax \leq 0 \rbrace$. In the case of concave regression $A \in \mathcal{R}^{m\times n}$ with $m=n-2$ is a matrix such that each row $A_i$ represents a concave two-piece linear function with a negative second difference at $x_{i+1}$ only and the linear inequalities are as follows
\begin{equation}
\label{eq:constraint}
\frac{x_{i+2}-x_{i+1}}{z_{i+2}-z_{i+1}} - \frac{x_{i+1}-x_{i}}{z_{i+1}-z_{i}} \leq 0,\hspace{0.5cm} i=1,...,n-2
\end{equation}
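As a numerical sketch, the constraint matrix $A$ defined by (\ref{eq:constraint}) can be built and sanity-checked as follows (the function name is illustrative):

```python
import numpy as np

def concavity_matrix(z):
    """Rows of A encode the second-difference constraints of concave
    regression: A x <= 0 if and only if x is concave over the grid z."""
    z = np.asarray(z, dtype=float)
    n = len(z)
    A = np.zeros((n - 2, n))
    for i in range(n - 2):
        d1, d2 = z[i + 1] - z[i], z[i + 2] - z[i + 1]
        A[i, i] = 1.0 / d1
        A[i, i + 1] = -(1.0 / d1 + 1.0 / d2)
        A[i, i + 2] = 1.0 / d2
    return A

z = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
A = concavity_matrix(z)
assert (A @ -(z - 2.0) ** 2 <= 1e-12).all()   # concave x satisfies Ax <= 0
assert (A @ z ** 2 > 0).all()                 # convex x violates it
```

Each row is the concave two-piece linear function of the text: it has a single negative second difference centred at $x_{i+1}$.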
In the following, we give alternative formulations of the cone regression problem that rest on optimization theory.
\subsection{Convex quadratic programming (CQP) formulation}
\subsubsection{Primal formulation}
The problem (\ref{regression}) is to find the point $\hat{x}$ in the cone $\mathcal{K}$ that is closest to $y$. The solution is found at the orthogonal projection of $y$ onto $\mathcal{K}$, written as $\Pi(y|\mathcal{K})$ using the metric $\|\cdot\|_{2,w}$, represented by the symmetric positive definite matrix $W$.
\begin{equation}
\label{eq:primalCQP}
\hat x = \Pi(y|\mathcal{K}) = \operatornamewithlimits{argmin}_{\lbrace Ax \leq 0 \rbrace} (y-x)^T W (y-x)
\end{equation}
For the problem (\ref{eq:primalCQP}), the matrix $W$ is diagonal with element $w_i$ on the diagonal. In practice, if $y_i$ is the mean value measured at $z_i$, then $w_i$ corresponds to the size of the sample at $z_i$.
Since $\mathcal{K}$ is a closed, convex and nonempty set in the Hilbert space $\mathcal{R}^n$, it is a \textit{Chebyshev set}, that is, the projection exists and is unique.
\begin{figure}
\begin{center}
\includegraphics[width=6.0cm]{polarCone.png}
\end{center}
\caption{The polar cone $\mathcal{K}^o$ of a given convex cone $\mathcal{K} \subset \mathcal{R}^2$ is given by the set of all vector whose scalar product with vectors of $\mathcal{K}$ is negative. The data point $y$ can be written as the sum of $\hat{x}$, the projection onto the cone $\mathcal{K}$ and $\hat{x}^o$, the projection onto the polar cone $\mathcal{K}^o$.}
\label{fig:polarcone}
\end{figure}
\subsubsection{Dual formulation}
The dual formulation of problem (\ref{eq:primalCQP}) rests on the Moreau decomposition theorem \cite{Moreau1962Decomposition}, which is a generalization in convex analysis of the orthogonal projection theorem for vectorial sub-spaces.
Central to the Moreau decomposition theorem is the definition of \textit{polar cone} of a given convex cone $\mathcal{K}$, which is given below.
\begin{defn}
\label{def:polarcone}
The \textit{polar cone} $\mathcal{K}^o$ to any convex cone $\mathcal{K}$ is given by
\begin{equation}
\mathcal{K}^o= \lbrace x \in \mathcal{R}^n : \forall k \in \mathcal{K},\langle k,x \rangle \leq 0 \rbrace
\end{equation}
\end{defn}
The Moreau decomposition theorem is as follows.
\begin{thm}
\label{lm:moreau}
Let $\mathcal{K} \subseteq \mathcal{R}^n$ be a closed convex cone, $\mathcal{K}^o$ its polar cone and $y \in \mathcal{R}^n$ a given point. Then the following assertions are equivalent:
\begin{eqnarray*}
(i) \hspace{0.2cm} \hat{x} = \operatornamewithlimits{argmin}_{x \in \mathcal{K}} ||x-y||^2, \hspace{0.1cm} \hat{x}^o = \operatornamewithlimits{argmin}_{x \in \mathcal{K}^o} ||x-y||^2 \\
(ii) \hspace{0.2cm} \hat{x} \in \mathcal{K}, \hspace{0.2cm}\hat{x}^o \in \mathcal{K}^o, \hspace{0.2cm} \langle \hat{x},\hat{x}^o \rangle =0, \hspace{0.2cm} y = \hat{x} + \hat{x}^o
\end{eqnarray*}
\end{thm}
By relying on this theorem we can alternatively solve problem (\ref{eq:primalCQP}) by first finding the projection on the polar cone $\hat{x}^o$ and then computing the solution to the primal problem as the difference $\hat{x} = y - \hat{x}^o$ (see Fig. \ref{fig:polarcone}). This alternative is attracting since, as it will be clarified below, it implies an analytically simpler form for the constraints.
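A minimal numerical illustration of the Moreau decomposition, using the simplest possible cone $\mathcal{K} = \mathcal{R}^n_+$ (whose polar cone is $\mathcal{R}^n_-$, so both projections reduce to componentwise clipping):

```python
import numpy as np

# Moreau decomposition for K = R^n_+ (polar cone K^o = R^n_-).
y = np.array([1.5, -2.0, 0.3, -0.7])
x_hat = np.maximum(y, 0.0)    # projection of y onto K
x_pol = np.minimum(y, 0.0)    # projection of y onto K^o

assert np.allclose(x_hat + x_pol, y)   # y = x_hat + x_hat^o
assert abs(x_hat @ x_pol) < 1e-12      # <x_hat, x_hat^o> = 0
```

For a general polyhedral cone the projections are no longer closed-form, but the two identities checked here hold unchanged.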
Before stating an important Lemma about the relationship between the polar cone and the constraint matrix $A$, let us introduce the definition of \textit{edges} of a polyhedral convex cone.
\begin{defn}
\label{def:edgecone}
Let $\mathcal{K}$ be a polyhedral convex cone in $\mathcal{R}^n$; then the vectors $e_i \in \mathcal{R}^n\setminus\{0\}$ are the \textit{edges or generators} of $\mathcal{K}$ if and only if $\mathcal{K}=pos(\{e_i\}) = \{ \sum_i k_ie_i \,|\, k_i \geq 0\}$.
\end{defn}
Intuitively speaking, the edges of a polyhedral convex cone are one-dimensional rays, which always pass through a fixed point (the vertex).
\begin{lmma}
\label{lm:edgesPolar}
The rows of the matrix $A$ are the edges of the polar cone, that is $\mathcal{K}^o = \lbrace x : x= \sum_{i=1}^m A_i^T a_i,a_i\geq0 \rbrace$.
\end{lmma}
To see that, observe that $\mathcal{K}^o=\lbrace \sum_{i=1}^m a_i A^T_i, a_i \geq 0 \rbrace$ is polar to $\mathcal{K}$ since
$\forall x \in \mathcal{K}$, $\forall \rho \in \mathcal{K}^o$: $\langle \rho, x \rangle = \sum_{i=1}^m a_i \langle A^T_i,x \rangle \leq 0$, which is the definition of the polar cone of $\mathcal{K}$. Conversely, $\mathcal{K}$ is polar to $\mathcal{K}^o$ since $(\mathcal{K}^o)^o = \mathcal{K}$.
By relying on these results, Kuhn and Tucker \cite{kuhn1951nonlinear} proved the following theorem:
\begin{thm}
The primal constrained quadratic minimization problem (\ref{eq:primalCQP}) is equivalent to the dual problem
\begin{equation}
\label{eq:dualCQP}
\hat \lambda = \operatornamewithlimits{argmin}_{ \lambda \geq 0 } (y-A^T\lambda)^T W (y-A^T\lambda)
\end{equation}
Denoting by $\hat{\lambda}$ the solution to the dual problem, the solution to the primal problem is $\hat{x} = y - A^T\hat{\lambda}$.
\end{thm}
As it can be observed, in the dual formulation each element of the vector $\lambda$ must satisfy a single positivity constraint.
Goldman \cite{Goldman1993Nonparametric} noticed that the dual problem can also be viewed as a minimum-distance problem in the same parameter space as the primal problem.
\begin{equation}
\label{eq:reparametrizeddualCQP}
\hat{x} = \operatornamewithlimits{argmin}_{x \in \mathcal{C}}||x||^2
\end{equation}
where $\mathcal{C} = \lbrace x| x = y - A^T\lambda, \lambda \geq 0 \rbrace$ is a rotation of the dual cone with its vertex translated to $y$. $\hat{x}$ also solves the re-parametrized dual problem.
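For $W = I$, the dual problem $\hat\lambda = \operatorname{argmin}_{\lambda \geq 0} \|y - A^T\lambda\|^2$ is exactly a nonnegative least-squares problem in the matrix $A^T$, so small instances can be solved with an off-the-shelf NNLS routine. The sketch below assumes SciPy's `scipy.optimize.nnls` is available and is meant as an illustration, not as one of the algorithms compared in this paper:

```python
import numpy as np
from scipy.optimize import nnls

def concave_fit(z, y):
    """Solve the dual min_{lambda >= 0} ||y - A^T lambda||^2 by NNLS,
    then recover the primal solution x_hat = y - A^T lambda_hat."""
    z, y = np.asarray(z, float), np.asarray(y, float)
    n = len(z)
    A = np.zeros((n - 2, n))
    for i in range(n - 2):
        d1, d2 = z[i + 1] - z[i], z[i + 2] - z[i + 1]
        A[i, i], A[i, i + 1], A[i, i + 2] = 1 / d1, -(1 / d1 + 1 / d2), 1 / d2
    lam, _ = nnls(A.T, y)         # dual solution lambda_hat >= 0
    return y - A.T @ lam, A       # primal solution x_hat, constraint matrix

z = np.arange(7.0)
y = np.array([0.0, 2.1, 3.0, 3.6, 3.1, 3.9, 2.0])
x_hat, A = concave_fit(z, y)
assert (A @ x_hat <= 1e-8).all()  # x_hat is concave (primal feasible)
```

The feasibility check confirms that $\hat{x} = y - A^T\hat{\lambda}$ indeed lies in the cone $\mathcal{K}$, as guaranteed by the Moreau decomposition.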
\subsection{Linear complementarity problem (LCP) formulation}
\label{subsec:LCP}
The CQP (\ref{eq:primalCQP}) can be recast as a linear complementarity problem (LCP). To see that, let us consider the Lagrangian associated to problem (\ref{eq:primalCQP}).
\begin{equation}
\label{eq:lagrangian}
L(x,\lambda) = ||x-y||^2_{2,w} + \langle \lambda, Ax \rangle
\end{equation}
where $\lambda \geq 0$ is the vector of dual variables associated to each of the convexity constraints.
By applying the Karush–Kuhn–Tucker (KKT) optimality conditions \cite{kuhn1951nonlinear} to (\ref{eq:lagrangian}), that is
\begin{eqnarray}
\label{eq:KTT}
\nabla L(\hat{x},\hat{\lambda}) = 0\\
\hat{\lambda} \geq 0\\
\hat{\lambda}^TA\hat{x} = 0
\end{eqnarray}
we obtain the equivalent LCP
\begin{eqnarray}
\label{eq:LCP}
w + M\lambda = q\\
w\geq 0, \quad \lambda \geq 0, \quad w^T\lambda =0.
\end{eqnarray}
where $w = -A\hat{x}$, $M= -AA^T$ and $q = -Ay$.
Note that, dropping the constant term from the Lagrangian, dividing it by $2$ and taking $W=I$ for simplicity: $L(x,\lambda) = \frac{1}{2}x^Tx-y^Tx + \lambda^TAx$. Therefore $\nabla L(x, \lambda) = x^T -y^T + \lambda^TA = 0$. Taking the transpose and multiplying by $-A$: $-Ax + (-AA^T)\lambda = -Ay$.
This LCP has a unique complementary solution. Denoting by $(\hat{w},\hat{\lambda})$ its solution, $\hat{\lambda}$ is the optimal solution of the dual problem (\ref{eq:dualCQP}).
The condition $w^T\lambda =0$ is called the complementarity condition, and the way in which it is dealt with determines whether the optimization algorithm belongs to the class of \textit{interior point methods}, introduced in section \ref{subsubsec:PDinteriorPointMethods}, or to the class of \textit{active set methods}, detailed in section \ref{subsec:finiteconvergence}.
\subsection{Proximal formulation}
The CQP (\ref{eq:primalCQP}) can be solved by using a proximity operator \cite{Moreau1962Fonctions,Moreau1963Proprietees}. Proximity operators are used to solve problems of the form
\begin{equation}
\operatornamewithlimits{argmin}_{x \in \mathcal{R}^n} f_1(x) + f_2(x) + \cdots + f_m(x)
\end{equation}
where $f_1,f_2,...,f_m$ are convex functions from $\mathcal{R}^n$ to $]-\infty,+\infty]$, that are not necessarily differentiable.
Each $f_i$ is treated through its proximity operator which is defined as follows.
\begin{defn}
Let $\Gamma_0(\mathcal{R}^n)$ be the class of lower semicontinuous convex functions from $\mathcal{R}^n$ to $]-\infty,+\infty]$ whose domain, denoted by $dom(f)$, is not the empty set. Let $f \in \Gamma_0(\mathcal{R}^n)$; then the \textit{proximity operator} of $f$ is the mapping $prox_f: \mathcal{R}^n \rightarrow \mathcal{R}^n$ such that
\begin{equation}
\forall y \in \mathcal{R}^n, \quad prox_f(y) = \operatornamewithlimits{argmin}_{x\in \mathcal{R}^n} f(x) + \frac{1}{2}||x-y||^2
\end{equation}
\end{defn}
The proximity operator is characterized by the property
\begin{equation}
\forall (x,p) \in \mathcal{R}^n \times \mathcal{R}^n, \quad p = prox_f(y) \quad\Longleftrightarrow\quad y-p \in \partial f(p),
\end{equation}
where $\partial f: \mathcal{R}^n \rightarrow 2^{\mathcal{R}^n}$ is the subdifferential of $f$.
\begin{equation}
\partial f(p) = \Big \lbrace u \in \mathcal{R}^n : \forall y \in \mathcal{R}^n, \ (y-p)^Tu + f(p) \leq f(y) \Big \rbrace
\end{equation}
The proximity operator of a convex function is a generalization of the projection operator onto a closed convex set $\mathcal{C}$. To see that let us consider the indicator function of $\mathcal{C}$
\begin{center}
\[\imath_\mathcal{C}(x)= \left\{
\begin{aligned}
&0\text{ if }x\in\mathcal{C}\\
&+\infty\text{ if }x\not\in\mathcal{C}
\end{aligned}
\right.\]
\end{center}
By using the fact that minimizing $J(x)$ over $\mathcal{C}$ is equivalent to minimizing $J(x) + \imath_\mathcal{C} (x)$ over $\mathcal{R}^n$
\begin{center}
$\operatornamewithlimits{argmin}\limits_{\substack{x\in \mathcal{C}}} J(x) ~~=~~ \operatornamewithlimits{argmin}\limits_{\substack{x\in \mathcal{R}^n}} \{J(x) + \imath_\mathcal{C} (x) \}$
\end{center}
it follows that $\prox_{\imath_\mathcal{C}} = \Pi(\cdot|\mathcal{C})$.
The solution to problem (\ref{eq:primalCQP}) can therefore be understood as the proximity operator over $\mathcal{K}$
\begin{align}
\label{eq:proximalDual1}
\hat{x}= \operatornamewithlimits{argmin}\limits_{\substack{x\in \mathcal{K}}}||x-y||^2
= &\operatornamewithlimits{argmin}\limits_{\substack{x\in \mathcal{R}^n}}\{||x-y||^2 + \imath_{\mathcal{K}}(x)\}
\\
= & \hspace{0.1cm}\prox_{\imath_{\mathcal{K}}}y
\end{align}
Alternatively, using the fact that $(\mathcal{K}_1 \cap ... \cap \mathcal{K}_m)^o = \mathcal{K}^o_1 + ... + \mathcal{K}^o_m$, the dual problem (\ref{eq:dualCQP}) can be seen as the proximity operator over the sum of $m$ indicator functions of the convex sets $\mathcal{K}^o_i$:
\begin{align}
\label{eq:proximalDual}
\hat{x}^o= \operatornamewithlimits{argmin}\limits_{\substack{x\in \sum_{i=1}^m\mathcal{K}^o_i}}||x-y||^2
= &\operatornamewithlimits{argmin}\limits_{\substack{x\in \mathcal{R}^n}}\{||x-y||^2 + \sum_{i=1}^m\imath_{\mathcal{K}^o_i}(x)\}
\\
= & \hspace{0.1cm}\prox_{\sum_{i=1}^m\imath_{\mathcal{K}^o_i}}y
\end{align}
Intuitively, the basic operation of a proximal algorithm is the evaluation of the proximal operator of a function, which itself involves solving a small convex optimization problem. These subproblems often admit a closed-form solution or can be solved very quickly with standard or simple specialized methods.
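A minimal numerical illustration (ours, assuming numpy): for the half-space $\mathcal{K} = \lbrace x : a^Tx \leq 0\rbrace$, the prox of the indicator is the explicit projection, and together with the projection onto the polar cone it satisfies Moreau's decomposition $y = \Pi(y|\mathcal{K}) + \Pi(y|\mathcal{K}^o)$:

```python
import numpy as np

# prox of the indicator of K = {x : a.x <= 0} is the projection onto K.
a = np.array([1.0, -2.0, 1.0])
y = np.array([0.0, -1.0, 0.0])

c = max(a @ y, 0.0) / (a @ a)
proj_K = y - c * a          # prox_{i_K}(y) = projection onto the half-space
proj_Ko = c * a             # projection onto the polar cone K^o = {c*a : c >= 0}

assert np.allclose(proj_K + proj_Ko, y)     # Moreau decomposition
assert abs(proj_K @ proj_Ko) < 1e-12        # the two components are orthogonal
```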
\section{State of the art}
\label{sec:StateOfArt}
In this section, we review existing algorithms for solving the cone regression problem. The algorithmic approaches, and in turn their numerical performances, strongly depend on the choice of the problem formulation. All existing methods are iterative and can attain the optimal solution, since $\mathcal{K}$ is a Chebyshev set and therefore the optimal solution exists and is unique in this closed set. However, in terms of their numerical performances they can be classified into two broad classes: the class of methods that never, or only in very simple cases, attain the optimal solution in a finite number of steps \cite{Hildreth1954Point,Dykstra1983Algorithm}, and the class of methods that converge to the optimal solution in a finite number of steps \cite{Wilhelmsen1976Nearest,Pshenichny1978Numerical,Wu1982Some,Fraser1989Mixed,Meyer1999Extension,Meyer2013Simple,Murty1982Critical,Liu2011Active}. As will be clarified in the following, methods with asymptotic convergence rest on the properties of the sub-gradient or, more generally, of proximity operators, and act by finding the solution as the limit of a sequence of successive approximations. They are typically derived from the primal, the dual or the proximal formulation. Methods with finite-time convergence exploit the geometric properties of polyhedral convex cones and find the exact solution as a non-negative linear combination of functions forming a basis in a specified finite-dimensional space. They are typically derived from the linear complementarity problem formulation.
\subsection{Algorithms with asymptotic convergence}
This section includes algorithms based on the primal formulation such as Least Squares in a Product Space (section \ref{subsubsec:LSPS}), algorithms based on the dual formulation such as Uzawa's method (section \ref{subsubsec:uzawa}) and Hildreth's method (section \ref{subsubsec:hildret}), algorithms that solve the dual problem simultaneously with the primal problem such as Dykstra's alternating projection method (section \ref{subsubsec:dykstra}), and algorithms based on the proximal formulation such as the Alternating Direction Method of Multipliers (section \ref{subsubsec:ADMM}).
\subsubsection{Least squares in a product space (LSPS)}
\label{subsubsec:LSPS}
Since the Euclidean space $\mathcal{R}^n$ equipped with the dot product is a Hilbert space $\mathcal{H}$, the problem (\ref{eq:primalCQP}) can be recast in the $m$-fold Cartesian product of $\mathcal{H}$, say $\mathcal{H}^m$ \cite{Pierra1984Decomposition}. Let $\mathcal{K}^m$ be the Cartesian product of the sets $(\mathcal{K}_i)_{i \in I}$, i.e., the closed convex set
$\mathcal{K}^m = \times_{i \in I} \mathcal{K}_i= \lbrace x \in \mathcal{H}^m : \forall i \in I: x_i \in \mathcal{K}_i \rbrace$, and let $D$ be the diagonal vector subspace, i.e. $D = \lbrace (x,...,x) \in \mathcal{H}^m : x \in \mathcal{H} \rbrace$.
Then, the CQP (\ref{eq:primalCQP}) is equivalent to
\begin{equation}
\label{eq:lsps}
\operatornamewithlimits{argmin}_{\lbrace x \in \mathcal{K}^m \cap D \rbrace} \|x-\bar{y}\|^2_{2,w}
\end{equation}
where $\bar{y} = (y,...,y)$.
Using this strategy, the problem of projecting onto the intersection of $m$ convex sets is reduced to the problem of projecting, in a higher-dimensional space, onto only two convex sets, one of which is a simple vector subspace. Geometrically, this amounts to finding a point in $D$ at minimum distance from $\mathcal{K}^m$. This point can be obtained iteratively by
\begin{equation}
x_{k+1} = x_k + \lambda_k(P_D \circ P_{\mathcal{K}^m}(x_k) - x_k)
\end{equation}
The advantage of this strategy is that it speeds up convergence, since a relaxation interval larger than the upper bound of $2$ valid for Fejér sequences \cite{Eremin1969Fejer} can be allowed.
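The product-space step can be sketched as follows (our illustration, assuming numpy; the name `lsps_project` and the unit relaxation $\lambda_k = 1$ are our choices — Pierra's accelerated scheme uses extrapolated relaxation parameters). With $\lambda_k = 1$ the iteration reduces to alternating the componentwise projection $P_{\mathcal{K}^m}$ with the averaging projection $P_D$, which yields a point of the intersection:

```python
import numpy as np

def project_halfspace(x, a):
    """Projection onto the half-space {z : a.z <= 0}."""
    v = a @ x
    return x if v <= 0 else x - (v / (a @ a)) * a

def lsps_project(y, rows, iters=2000):
    """Alternate P_{K^m} (componentwise) and P_D (averaging) in the product space."""
    m = len(rows)
    X = np.tile(y, (m, 1))                    # start at (y, ..., y) in H^m
    for _ in range(iters):
        P = np.array([project_halfspace(X[i], rows[i]) for i in range(m)])
        X = np.tile(P.mean(axis=0), (m, 1))   # P_D maps (x_1,...,x_m) to the mean
    return X[0]

a1 = np.array([1.0, -2.0, 1.0, 0.0])          # two convexity constraints in R^4
a2 = np.array([0.0, 1.0, -2.0, 1.0])
y = np.array([0.0, -1.0, -1.0, 0.0])
x_hat = lsps_project(y, [a1, a2])             # feasible: a1 @ x_hat <= 0, a2 @ x_hat <= 0
```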
\subsubsection{Uzawa method}
\label{subsubsec:uzawa}
A classical method for solving a convex minimization problem subject to inequality constraints is the Uzawa method \cite{Arrow1958Studies}, which searches directly for the saddle point of the Lagrangian (\ref{eq:lagrangian}).
In fact, if the Lagrangian $L(x,\lambda)$ admits a saddle point, say $(\hat{x}, \hat{\lambda})$, then the duality gap $\delta = \operatornamewithlimits{min}_{x \in \mathcal{K}}\operatornamewithlimits{max}_{\lambda \in \mathcal{R}^+}L(x,\lambda) - \operatornamewithlimits{max}_{\lambda \in \mathcal{R}^+}\operatornamewithlimits{min}_{x \in \mathcal{K}}L(x,\lambda)$ is null and $\hat{x}$ is a critical point of the Lagrangian.
Since the dual function $H(\lambda) = \operatornamewithlimits{min}_{x \in \mathcal{K}} L(x,\lambda)$ is differentiable, it can be maximized explicitly by a gradient method. The Uzawa method therefore alternates a minimization step over $\mathcal{R}^n$ with respect to $x$ with $\lambda$ fixed and a gradient ascent step with respect to $\lambda$ over $\mathcal{R}^+$, with $x$ fixed.
The algorithmic parameter $\rho > 0 $ can be fixed to optimize convergence by relying on theoretical considerations.
Therefore the CQP (\ref{eq:dualCQP}) is equivalent to finding
\begin{equation}
\label{eq:uzawa}
\hat{x} =\operatornamewithlimits{argmin}_x \operatornamewithlimits{max}_{\mu \geq 0} L(x, \mu), \qquad L(x,\mu) = ||x-y||^2 + \langle\mu,Ax\rangle
\end{equation}
\subsubsection{Hildreth's algorithm}
\label{subsubsec:hildret}
Hildreth \cite{Hildreth1954Point} proposed to apply the Gauss–Seidel algorithm \cite{Kahan1958GaussSeidel} to the dual problem (\ref{eq:dualCQP}).
A single cycle of Hildreth's algorithm consists in updating each element of $\lambda$ sequentially in an arbitrary but fixed order. Each cycle therefore consists of $m$ steps, each of which corresponds to a projection onto the cone $\mathcal{K}_i$, $i=1,...,m$. The algorithm gives rise to a sequence of points, each of which differs from the preceding one in exactly one coordinate. At cycle $k+1$, the estimate $\lambda_{i}^{k+1}$ is used in the computation of $\lambda_{i+1}^{k+1}$, so that the best available estimates are used for each variable.
The convergence of the Gauss–Seidel algorithm is guaranteed only if the matrix $A$ has full row rank, so that there are no redundancies among the inequality restrictions, and it is guaranteed independently of the initial point $\lambda^0$ only if the dual system matrix $AA^T$ is symmetric and positive definite.
The algorithm is sensitive to the normalization as well as to the order of the projections.
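A compact sketch of one possible implementation (ours, assuming numpy; the name `hildreth` and the cycle count are illustrative). Using the scaling $L = \frac{1}{2}||x-y||^2 + \lambda^TAx$, so that $x = y - A^T\lambda$, each coordinate update is an exact minimization of the dual objective $\frac{1}{2}\lambda^TAA^T\lambda - \lambda^TAy$ followed by clipping at zero:

```python
import numpy as np

def hildreth(y, A, cycles=500):
    """Gauss-Seidel with nonnegativity clipping on the dual variables lam."""
    M = A @ A.T                  # dual system matrix
    q = A @ y
    lam = np.zeros(A.shape[0])
    for _ in range(cycles):
        for i in range(len(lam)):
            # exact minimization over lam_i with the other coordinates fixed
            lam[i] = max(0.0, lam[i] + (q[i] - M[i] @ lam) / M[i, i])
    return y - A.T @ lam         # primal solution from the dual variables

A = np.array([[1.0, -2.0, 1.0, 0.0],     # concavity constraints for 4 points
              [0.0, 1.0, -2.0, 1.0]])
y = np.array([0.0, -1.0, -1.0, 0.0])     # strictly convex data
x_hat = hildreth(y, A)                   # projection: the least-squares line
```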
\subsubsection{Primal-dual interior point methods}
\label{subsubsec:PDinteriorPointMethods}
First introduced by Karmarkar in 1984 \cite{Karmarkar1984New}, primal-dual interior point methods act by perturbing the complementarity condition $w^T\lambda= 0$ in the LCP formulation (\ref{eq:LCP}), replacing it with the componentwise condition $w_i\lambda_i= \mu$. The partition of the vectors $w$ and $\lambda$ into zero and nonzero elements is gradually revealed as the algorithm progresses by forcing a reduction of $\mu$. All iterates satisfy the inequality constraints strictly: the solution is approached from the interior of the feasible region, and the iterates never lie on its boundary.
Let the function $F_{\mu}(x, \lambda, w)$ be such that its roots are solutions of the perturbed optimality conditions associated with (\ref{eq:LCP}):
\begin{equation*}
F_{\mu}(x, \lambda, w) = \begin{pmatrix}
w + Ax\\[0.3em]
x - y + A^T\lambda\\[0.3em]
W\Lambda e - \mu e
\end{pmatrix}
\end{equation*}
where $W = diag(w)$, $\Lambda = diag(\lambda)$ and $e$ is the vector of all ones.
The perturbed complementarity condition introduces a nonlinearity, therefore for each fixed $\mu > 0$ a system of nonlinear equations should be solved.
The nonlinear system is typically solved by using a Newton-like algorithm \cite{ben1966newton}. Each iteration of Newton's method finds a search direction from the current iterate $(x_k, \lambda_k, w_k)$; it is computationally expensive but can make significant progress towards the solution. For instance, in barrier methods, which are the most efficient of the family, this is achieved by using a penalizing term, called the barrier function, for violations of constraints; its value at a point increases to infinity as the point approaches the boundary of the feasible region. Interior point methods must be initialized at an interior point, or else the barrier function is undefined.
The interested reader is referred to \cite{Singh2002InteriorPoint} for further information about interior point methods.
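The path-following idea can be illustrated on a single constraint, where the perturbed conditions collapse to a scalar equation (our illustration, assuming numpy). Substituting $w = AA^T\lambda - Ay$ into $w\lambda = \mu$ gives a quadratic in $\lambda$, solved by Newton's method while $\mu$ is driven to zero:

```python
import numpy as np

A = np.array([[1.0, -2.0, 1.0]])
y = np.array([0.0, -1.0, 0.0])
m = (A @ A.T).item()              # scalar AA^T
q = (A @ y).item()                # scalar Ay (positive: constraint violated)

lam = 1.0
for mu in [1.0, 0.1, 0.01, 1e-4, 1e-8]:
    # Newton iterations on F(lam) = (m*lam - q)*lam - mu = 0, keeping lam > 0
    for _ in range(50):
        F = (m * lam - q) * lam - mu
        dF = 2.0 * m * lam - q
        lam = max(lam - F / dF, 1e-12)

x_hat = y - A.T @ np.array([lam])  # as mu -> 0, lam -> (Ay)/(AA^T) = 1/3
```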
\subsubsection{Dykstra's algorithm}
\label{subsubsec:dykstra}
In $1983$, Dykstra \cite{Dykstra1983Algorithm} proposed a generalization of Hildreth's procedure applicable to constraints corresponding to convex cones more general than polyhedral ones. Dykstra's algorithm is based on the idea, suggested earlier by von Neumann \cite{vonNeumann1950Functional} for the case of subspaces, of computing the projection onto the intersection of convex sets by relying on the solution of the simpler problem of projecting onto the individual sets. In the case of concave regression, the projection onto a single convex set $\mathcal{K}_i$ involves only three points and, if the constraint is not satisfied, it corresponds to the straight line fitting the points $y_i$, $y_{i+1}$, $y_{i+2}$.
Dykstra's algorithm iterates by passing sequentially over the individual sets and projects onto each one a deflected version of the previous iterate. More precisely, before projecting onto the cone $\mathcal{K}_i$ during the $(k+1)-$th cycle, the residuum obtained when projecting onto $\mathcal{K}_i$ at the previous $k-$th cycle, say $R_i^k$ is removed and a new residuum associated to the cone $\mathcal{K}_i$, say $R_i^{k+1}$ is computed after the projection.
In practice, each $x^k_i$ is the projection of $y + R^k_1 +...+R^k_{i-1}+R^{k-1}_{i+1}+...+R^{k-1}_m$ onto $\mathcal{K}_i$, where $R^k_i= x^k_i-(y + R^k_1 +...+ R^k_{i-1} +R^{k-1}_{i+1}+...+R^{k-1}_m)$.
If each $\mathcal{K}_i$ is a subspace, then at each new cycle $k+1$ the residuum $-R_i^k$ of the projection onto each convex cone $\mathcal{K}_i$ is simply the projection of $x^{k}$ onto the cone $\mathcal{K}_i^o$. Therefore, Dykstra's procedure for subspaces reduces exactly to the cyclic, iterated projections of von Neumann. In this case, for $k \rightarrow \infty$, the sum of the residua over the cones $\mathcal{K}_i$ approximates the projection of $y$ onto the polar cone $\mathcal{K}^o$ and therefore, by the Moreau decomposition theorem, $x^{k}$ approximates the projection onto $\mathcal{K}$.
However, if the $\mathcal{K}_i$ are not subspaces, $\Pi(\cdot|\mathcal{K}_i)$ is not a linear operator, and the von Neumann algorithm does not necessarily converge to the nearest point.
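The residuum bookkeeping described above can be sketched as follows (our illustration, assuming numpy; `dykstra` is an illustrative name). Each cycle sweeps the half-spaces $\mathcal{K}_i$, adding back the residuum of the previous cycle before projecting and storing the new one afterwards:

```python
import numpy as np

def dykstra(y, rows, cycles=500):
    """Dykstra's cyclic projections onto half-spaces K_i = {x : a_i . x <= 0}."""
    x = y.copy()
    R = [np.zeros_like(y) for _ in rows]      # one residuum per cone
    for _ in range(cycles):
        for i, a in enumerate(rows):
            z = x + R[i]                       # add back the previous residuum
            v = a @ z
            x = z if v <= 0 else z - (v / (a @ a)) * a   # project onto K_i
            R[i] = z - x                       # store the new residuum
    return x

rows = [np.array([1.0, -2.0, 1.0, 0.0]), np.array([0.0, 1.0, -2.0, 1.0])]
y = np.array([0.0, -1.0, -1.0, 0.0])
x_hat = dykstra(y, rows)   # converges to the projection of y onto the intersection
```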
The Dykstra's algorithm can also be interpreted as a variation of the Douglas–Rachford splitting method applied to the dual proximal formulation (\ref{eq:proximalDual}).
The seminal works of Hildreth and Dykstra have inspired many studies, mostly devoted to theoretical investigations of their behavior in a Hilbert space \cite{BoyleMethod,Varian1984Nonparametric}, of their convergence \cite{Iusem1991Convergence,Crombez1995Finding}, of their relation to other methods \cite{Gaffke1989Cyclic,Bauschke1994Method} and of their interpretation in more general frameworks, such as proximal splitting methods \cite{Combettes2011Proximal}. Han \cite{Han1988Successive}, as well as Iusem and De Pierro \cite{Iusem1991Convergence}, showed that in the polyhedral case the method of Dykstra becomes Hildreth's algorithm, which therefore has the same geometric interpretation as Gauss–Seidel applied to the dual problem. Gaffke and Mathar (1989) \cite{Gaffke1989Cyclic} showed the relation of the Dykstra algorithm to the method of component-wise cyclic minimization over a product space, also proposing a fully simultaneous Dykstra algorithm. The only works devoted to giving some insight into a more efficient implementation are those of Ruud. Goldman and Ruud ($1993$) \cite{Goldman1993Nonparametric} generalized the method of Hildreth, showing that there is no need to restrict the iterations to one element of $\lambda$ at a time: one can optimize over subsets and/or change the order in which the elements are taken. This observation is important for the speed of convergence, since slow convergence can be understood as a symptom of near multicollinearity among restrictions: because the intermediate projections are so close to one another, the algorithm makes small incremental steps towards the solution. They also remarked that Dykstra uses a parametrization in the primal parameter space, which causes numerical round-off errors in the residuum variables. These round-off errors accumulate, so that the fitted value does not satisfy the constraints of the dual problem. It would be better to use a parametrization in the dual space, so that the dual constraints would be satisfied at each iteration.
Later, Ruud \cite{Ruud1997Restricted} proved that the contraction property of the proposed generalizations rests solely on the requirement that every constraint appears in at least one subproblem of an iteration. As one approaches the solution, constraints that are satisfied at the solution are eliminated; removing satisfied constraints accelerates the Hildreth procedure. The authors propose to reduce the set of active constraints, that is, the constraints satisfied as an equation at the corresponding points, by removing as many constraints as possible through periodic optimization over all positive elements of $\lambda$.
\subsubsection{Alternating Direction Method of Multipliers (ADMM)}
\label{subsubsec:ADMM}
ADMM is an augmented Lagrangian technique \cite{Hestenes1969Multiplier,Powe69a} which can be applied to problems of the form
\begin{equation}
\label{eq:admm}
\operatornamewithlimits{argmin}_{x \in \mathcal{R}^n,\, Ax=z,\, z\leq 0} ||y-x||^2 + g(z)
\end{equation}
where the matrix $A$ is assumed to be irreducible ($AA^T= vI, v>0$) and the intersection of the relative interiors of the domains of the two functions is assumed to be nonempty ($ri \hspace{0.2cm}dom(g) \cap ri\hspace{0.2cm} dom(f) \neq \varnothing$).
ADMM minimizes the augmented Lagrangian $\mathcal{L}$ over the two variables of the problem, say $x$ and $z$, first over $x$ with $z$ fixed, then over $z$ with $x$ fixed, and then applies a proximal maximization step with respect to the Lagrange multiplier $\lambda$.
The augmented Lagrangian of index $\gamma \in (0, \infty)$ is
\begin{equation}
\mathcal{L}(x,z,\lambda) = f(x) + g(z) + \frac{1}{\gamma} \lambda^T(Ax - z) + \frac{1}{2\gamma} ||Ax - z||^2
\end{equation}
where $f(x) = ||y-x||^2$.
Denoting by $prox_f^A$ the proximal operator which maps a point $z \in \mathcal{R}^m$ to the unique minimizer of $f(x) + ||Ax-z||^2$, and setting $prox_g = prox_{f \circ A}$, the implementation detailed in the Appendix is obtained.
The ADMM method rests on the proximal formulation (\ref{eq:proximalDual1}). Indeed, it can be viewed as an application of the Douglas–Rachford splitting algorithm \cite{Eckstein1992DouglasRachford}.
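A compact scaled-form sketch (ours, assuming numpy; `admm_projection`, the penalty $\gamma$ and the iteration count are illustrative choices) with $f(x) = ||y-x||^2$ and $g$ the indicator of $\lbrace z \leq 0 \rbrace$: the $x$-update solves a linear system, the $z$-update is a clipping, and the multiplier update accumulates the constraint residual:

```python
import numpy as np

def admm_projection(y, A, gamma=1.0, iters=5000):
    """ADMM for min ||y - x||^2 s.t. Ax = z, z <= 0 (scaled dual variable u)."""
    m, n = A.shape
    z = np.zeros(m)
    u = np.zeros(m)
    Q = 2.0 * np.eye(n) + (A.T @ A) / gamma        # normal matrix of the x-update
    for _ in range(iters):
        x = np.linalg.solve(Q, 2.0 * y + A.T @ (z - u) / gamma)
        z = np.minimum(A @ x + u, 0.0)             # prox of the indicator of z <= 0
        u = u + A @ x - z                          # multiplier (dual) update
    return x

A = np.array([[1.0, -2.0, 1.0, 0.0],
              [0.0, 1.0, -2.0, 1.0]])
y = np.array([0.0, -1.0, -1.0, 0.0])
x_hat = admm_projection(y, A)
```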
\subsection{Algorithms with time-finite convergence}
\label{subsec:finiteconvergence}
All the algorithms reviewed in this section are \textit{active set methods} resting on the LCP formulation (\ref{eq:LCP}). Active set methods work by choosing a subset of indices $\tilde{J} \subset J = \{1, . . . , n\}$ for which $w_j$ is allowed to be non-zero and the corresponding $\lambda_j$ is forced to be zero, while for the remaining indices $j \in J \setminus \tilde{J}$, $w_j$ is forced to be zero and $\lambda_j$ is allowed to take nonzero values.
In this section we review active set algorithms such as the mixed primal-dual basis algorithm (section \ref{subsubsec:fraser}), the critical index algorithm (section \ref{subsec:critical}) and Meyer's algorithm (section \ref{subsubsec:hinge}).
Before detailing the algorithms, we introduce some definitions and basic results about the geometry of polyhedral convex cones, on which the algorithms presented in this section are based. For further details the reader is referred to \cite{silvapulle2011constrained}.
\subsubsection{Properties of polyhedral convex cones with $m\leq n$}
\label{subsec:properties}
Lemma \ref{lm:edgesPolar} establishes the relationship between the constraint matrix $A$ and the edges of the polar cone $\lbrace \gamma^i, i= 1,...,m\rbrace$, namely $A^T= [\gamma^1, ...,\gamma^m]$.
We would now like to determine the edges of the constraint cone $\mathcal{K}$.
Let $\lbrace \gamma^{m+1},.., \gamma^{n}\rbrace$ be vectors orthogonal to $\lbrace\gamma^i, i = 1,..,m\rbrace$ and orthonormal to each other, so that the set $\lbrace \gamma^i, i = 1,..,n\rbrace$ forms a basis for $\mathcal{R}^n$. By defining the dual basis of $\lbrace \gamma^i, i = 1,..,n\rbrace$ as the set of vectors $\lbrace \beta^i, i= 1,...,n \rbrace$ that satisfy the
relationship
\begin{equation}
\label{eq:dualbasis}
(\beta^i)^T \gamma^j = \left\{\begin{array}{ll} -1 & i = j \\ 0 & i \neq j \end{array} \right.
\end{equation}
the constraint cone $\mathcal{K} = \lbrace x: Ax \leq 0\rbrace$ can be equivalently written as
\begin{equation}
\label{constraintCone2}
\mathcal{K} = \Big \lbrace x: x = \sum_{i=1}^m b_i\beta^i + \sum_{i=m+1}^n b_i\beta^i, b_i \geq 0, i = 1,...,m \Big \rbrace
\end{equation}
To see this, let $B = [\beta^1,...,\beta^n]$ and $C= [\gamma^1,...,\gamma^n]$. Then $Ax$ consists of the first $m$ coordinates of $C^Tx$. Since $B^TC = -I_n$ by construction, $C^T = -B^{-1}$, and therefore $C^Tx = -B^{-1}x$, i.e., $C^Tx$ gives the negatives of the coordinates of $x$ in the basis $\lbrace \beta^i, i = 1,...,n \rbrace$. Consequently, points in $\mathcal{K}$ have their first $m$ coordinates in this basis non-negative and can be written as $x = \sum_{i=1}^n b_i\beta^i$, where $b_i \geq 0$ for $i=1,..,m$.
Taking into account Def. \ref{def:edgecone}, Eq. \ref{constraintCone2} establishes that the vectors $\beta^i$, $i=1,...,m$, are the edges of the constraint cone $\mathcal{K}$.
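The dual-basis construction can be checked numerically. The sketch below (our illustration, assuming numpy, using the concave-regression constraint matrix for $n=4$ points at abscissae $0,1,2,3$) builds $C$, obtains $B$ from $B^TC = -I_n$, and verifies that the first $m$ columns of $B$ are edges of $\mathcal{K}$:

```python
import numpy as np

n = 4
t = np.arange(n, dtype=float)                     # abscissae 0, 1, 2, 3
A = np.array([[1.0, -2.0, 1.0, 0.0],              # rows are gamma^1, gamma^2
              [0.0, 1.0, -2.0, 1.0]])

g3 = np.ones(n) / np.linalg.norm(np.ones(n))      # constant direction
g4 = (t - t.mean()) / np.linalg.norm(t - t.mean())  # linear direction
C = np.column_stack([A[0], A[1], g3, g4])         # C = [gamma^1, ..., gamma^n]

B = -np.linalg.inv(C.T)                           # dual basis from B^T C = -I_n
assert np.allclose(B.T @ C, -np.eye(n))

# The first m columns of B are edges of K: each satisfies A @ beta^i <= 0,
# with equality in all constraints but one.
assert np.all(A @ B[:, :2] <= 1e-12)
```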
\begin{defn}
\label{def:facecone}
Let $\mathcal{K}$ be a polyhedral convex cone in $\mathcal{R}^n$, then $F \subseteq \mathcal{K}$ is a \textit{face} of $\mathcal{K}$ if and only if $F$ is the intersection of $\mathcal{K}$ with a supporting hyperplane.
\end{defn}
A polyhedral convex cone arises as the intersection of a finite number of half-spaces whose defining hyperplanes pass through the origin. The $i-$th row of $A$ is normal to the hyperplane generating the $i-$th closed half-space.
The following Lemma, proved by Rockafellar in 1970 \cite{Rockafellar1970Convex}, establishes a relationship between the first $m$ vectors of the dual basis $\lbrace \beta^i, i= 1,...,m \rbrace$ and the faces of the cone $\mathcal{K}' = \mathcal{K} \cap span(\mathcal{K})$, where $span(\mathcal{K})$ denotes the subspace spanned by the $m$ edges of $\mathcal{K}$.
\begin{lmma}
\label{def:Partitionface}
Let $\mathcal{K} = \lbrace x: Ax \leq 0 \rbrace$, where $A^T = [\gamma_1,...,\gamma_m]$, be the constraint cone and let $\lbrace \beta^i, i= 1,...,m \rbrace$ be the dual basis of $\lbrace \gamma^i, i= 1,...,m \rbrace$. Denoting by $span(\mathcal{K})$ the subspace spanned by the $m$ edges of $\mathcal{K}$, let $\mathcal{K}' = \mathcal{K} \cap span(\mathcal{K}) = \lbrace x \in \mathcal{R}^n: x = \sum_{j=1}^m b_j\beta^j, b_j \geq 0 \rbrace$. Then, for $J \subseteq \lbrace {1,...,m} \rbrace$ the faces of $\mathcal{K}'$ are the sets:
\begin{equation*}
\Big \lbrace x \in \mathcal{R}^n: x = \sum_{j \in J}b_j\beta^j, b_j \geq0\Big \rbrace
\end{equation*}
The set of all relatively open faces
\begin{equation*}
\mathcal{F}_J = \Big \lbrace x \in \mathcal{R}^n: x = \sum_{j \in J}b_j\beta^j, b_j > 0\Big \rbrace
\end{equation*}
forms a partition of $\mathcal{K}'$.
\end{lmma}
\begin{figure}
\begin{center}
\includegraphics[width=6.0cm]{ConicDuality.png}
\end{center}
\caption{The point $x_1 \in \mathcal{K}$ belongs to the open face $F_J= \lbrace x \in \mathcal{K}: x = b\beta^1, b >0 \rbrace$, with $J = \lbrace 1\rbrace$. The support cone of $\mathcal{K}$ at $x_1$ is $\mathcal{L}_{\mathcal{K}}(x_1) = \lbrace x : {\gamma^2}^Tx \leq 0\rbrace$ and its dual is $\mathcal{L}^o_{\mathcal{K}}(x_1) = \lbrace x : x = c\gamma^2, c \geq 0\rbrace$. The set of points that project onto $x_1$ is given by the set $\Pi^{-1}_{\mathcal{K}}(x_1) = \lbrace x_1 + \mathcal{L}^o_{\mathcal{K}}(x_1) \rbrace = \lbrace x_1 + c\gamma^2, c \geq 0 \rbrace$.
The point $x_3 \in \mathcal{K}$ belongs to the open face $F_J= \lbrace 0 \rbrace$, with $J = \varnothing$. The support cone of $\mathcal{K}$ at $x_3$ is $\mathcal{K}$ and its dual is $\mathcal{K}^o$, so that the set of points that project onto $x_3$ is $\lbrace \mathcal{K}^o \rbrace$.
The point $x_4 \in \mathcal{K}$ belongs to the open face $F_J= \lbrace x \in \mathcal{K}: x = \sum_{i=1,2}b_i\beta^i, b_i >0 \rbrace$, with $J = \lbrace 1,2\rbrace$.
The support cone of $\mathcal{K}$ at $x_4$ is the whole space and its dual is the origin, so that the set of points that project onto $x_4$ is $\lbrace x_4 \rbrace $. }
\label{fig:Partition}
\end{figure}
Denoting by $u$ and $v$ the projections of $y$ onto $span(\mathcal{K})$ and $span(\lbrace\gamma^{m+1},...,\gamma^{n}\rbrace)$ respectively, and since the two subspaces are orthogonal, $y$ can be written as $y = u + v$. Since $v$ can easily be computed as $v = X(X^TX)^{-1}X^T y$ with $X = [\gamma^{m+1},...,\gamma^{n}]$, the problem (\ref{eq:primalCQP}) reduces to finding the projection of $u$ onto $\mathcal{K}'$.
The next Lemma, proved by Zarantonello \cite{Zara71a}, focuses on the set of points in $span(\mathcal{K})$ projecting onto a given point $x \in \mathcal{K}'$, denoted $\Pi^{-1}_{\mathcal{K}'}(x)$. Before stating it, we need to define the \textit{support cone} of a closed convex set at a given point.
\begin{defn}
\label{def:supportCone}
The support cone of a closed convex set $\mathcal{K}$ at $x$ denoted by $\mathcal{L}_{\mathcal{K}}(x)$ is the smallest convex cone with vertex at the origin containing $\mathcal{K}-x$.
\end{defn}
\begin{lmma}
\label{def:PartitionRn}
Let $\mathcal{K}'$, $\mathcal{F}_J$, $\lbrace \gamma^i, i= 1,...,n \rbrace$ and $\lbrace \beta^i, i= 1,...,n \rbrace$ be defined as in Lemma \ref{def:Partitionface}. If $x$ is a point of $\mathcal{K}'$ belonging to the open face $F_J$, then:
\begin{itemize}
\item $\Pi^{-1}_{\mathcal{K}'}(x) = x + \mathcal{L}^o_{\mathcal{K}'}(x)= \Big \lbrace x + \sum_{i \notin J}c_i \gamma^i, c_i \geq 0\Big \rbrace = \Big \lbrace \sum_{j \in J}b_j \beta^j + \sum_{i \notin J}c_i \gamma^i, b_j > 0, c_i \geq 0\Big \rbrace$,
where $x=\sum_{j \in J} b_j\beta^j$.
\item The sets $\Pi^{-1}_{\mathcal{K}'}(x)$ are disjoint closed convex cones.
\item $\cup_{x \in \mathcal{K}'}\Pi^{-1}_{\mathcal{K}'}(x) = span(\mathcal{K})$
\end{itemize}
where $\mathcal{L}^o_{\mathcal{K}'}(x)$ denotes the dual of the support cone of $\mathcal{K}'$ at $x$.
\end{lmma}
Consequently, any point in $span(\mathcal{K})$ projects onto a unique point of $\mathcal{K}'$ and belongs to a unique non-negative orthant, or sector, $S_J$
\begin{equation}
\label{eq:sector}
\mathcal{S}_J = \Big \lbrace x \in \mathcal{R}^n: x = \sum_{j \in J}b_j\beta^j + \sum_{j \notin J}c_j\gamma^j, b_j > 0, c_j \geq 0\Big \rbrace
\end{equation}
Fig. \ref{fig:Partition} illustrates this result for $span(\mathcal{K})=\mathcal{R}^2$. Points in $S_J$ project onto the subspace spanned by the vectors $\lbrace \beta^j, j\in J\rbrace$, that is, onto the face $F_J = \sum_{j \in J} b_j\beta^j$. Vectors belonging to $\mathcal{K}^o$ project onto the origin, vectors belonging to $\mathcal{K}$ project onto themselves, while every other vector of $\mathcal{R}^n$ projects onto a unique face of $\mathcal{K}$.
Therefore, if the sector $S_J$ containing the vector $u$ is known, then the projection of $u$ onto $\mathcal{K}$ can easily be computed as the projection of $u$ onto the subspace spanned by $\lbrace \beta^j, j\in J\rbrace$. This reduces the problem of projecting $y$ onto $\mathcal{K}$ to the problem of finding the set of indices $\hat{J}$ such that the sector $S_{\hat{J}}$ contains $u$.
The complement of $\hat{J}$ with respect to $\lbrace 1,...,m\rbrace$ corresponds to the indices of the constraints satisfied with equality at the optimal solution.
The algorithms described in this section propose different strategies to find the optimal set $\hat{J}$.
\subsubsection{Early algorithms based on the properties of polyhedral convex cones}
The first algorithm addressing the problem of projecting a point $y \in \mathcal{R}^n$ onto a polyhedral convex cone $\mathcal{K} \subset \mathcal{R}^n$ by a non-asymptotic procedure dates back to the work of Wilhelmsen \cite{Wilhelmsen1976Nearest} in 1976. Wilhelmsen assumes that the $m$ generators $\beta^i$ of the cone $\mathcal{K} = \Big \lbrace x \in \mathcal{R}^n: x = \sum_{i=1}^m b_i \beta^i, b_i \geq 0 \Big \rbrace$ are known and proposes an algorithm which computes a sequence of nearest points $x^k$ to $y$ in subcones $\mathcal{K}^k$ of $\mathcal{K}$. Each subcone $\mathcal{K}^k$ is chosen so that $x^k \in int(\mathcal{K}^k)$ and is closer to $y$ than $x^{k-1}$ is. This means that $x^k$ is on the near side of the supporting hyperplane of $\mathcal{K}^{k-1}$ with respect to $y$. The key step is to find $x^{k+1}$ given $x^k$, and the procedure proposed to do so is laborious.
Pshenichny and Danilin (1978) \cite{Pshenichny1978Numerical} proposed an algorithm similar to Wilhelmsen's which also converges in a finite number of steps. In both algorithms $m$ can be any integer, even larger than $n$. A more efficient procedure, under the more restrictive assumption that $m\leq n$, was proposed by Fraser and Massam in 1989.
\subsubsection{Mixed primal-dual basis algorithm}
\label{subsubsec:fraser}
Fraser and Massam \cite{Fraser1989Mixed} proposed an iterative algorithm to solve the general problem of projecting a data point $y \in \mathcal{R}^n$ onto a polyhedral convex cone $\mathcal{K} \subset \mathcal{R}^n$ generated by $m \leq n$ linear inequality constraints.
Polyhedral convex cones generated by a number of inequalities at least equal to the dimension of the space they belong to have been the subject of section \ref{subsec:properties}. As seen there, the problem of projecting a data point onto this class of cones can be reduced to finding the set of \textit{edges}, or generators of the cone, indexed by $\hat{J} \subseteq \lbrace 1,...,n \rbrace$, such that the sector $S_{\hat{J}}$ contains the data point.
To this goal, the set of edges of the polar cone $\lbrace \gamma^i, i=1,...,m \rbrace$ is completed by $n-m$ vectors orthogonal to $\lbrace \gamma^i, i=1,...,m \rbrace$ and orthonormal to each other. In the case of concave regression, where $m=n-2$, the set is completed by a constant function $\gamma^{m+1}= \textbf{1}/||\textbf{1}||$, where $\textbf{1}$ is the $n$-dimensional vector of ones, and by a linear function $\gamma^{m+2} = (x- \bar{x}\textbf{1})/||x- \bar{x}\textbf{1}||$, where $x=(x_1,...,x_n)'$ is the vector of abscissae and $\bar{x}= \sum_{i=1}^nx_i/n$. The set of vectors $\lbrace \gamma^i, i=1,...,n \rbrace$ forms a basis for $\mathcal{R}^n$.
Let the vectors $\lbrace \beta^i, i=1,...,n \rbrace$ be the dual basis of $\lbrace \gamma^i, i=1,...,n \rbrace$ as defined in (\ref{eq:dualbasis}). Fraser and Massam called the vectors $\beta^i$ and $\gamma^i$ primal and dual vectors respectively. A primal-dual basis for $\mathcal{R}^n$, associated with the set of indices $J \subseteq \lbrace 1,...,n\rbrace \equiv L$, is a basis $\mathcal{B}_{J} = [\alpha_1,...,\alpha_n]$ made up of a subset of the primal basis vectors $\lbrace \beta^i \rbrace_{i \in J}$ and the complementary subset of the dual basis vectors $\lbrace \gamma^i \rbrace_{i \in L\setminus J}$. For $n=m$ the primal basis vectors, corresponding to the edges of $\mathcal{K}$, are simply the columns of $-A^{-1}$.
Using the above definitions, the problem of projecting a point $y \in \mathcal{R}^n$ onto the cone $\mathcal{K}$ can be formulated as follows.
\begin{thm}
The primal constrained quadratic minimization problem (\ref{eq:primalCQP}) is equivalent to the problem of finding
\begin{equation}
\label{eq:mpdb}
\operatornamewithlimits{argmin}_{ x \in \mathcal{K} } ||u-x||^2
\end{equation}
where $u = y - v$, with $v = \Pi(y|span(\gamma^{m+1},...,\gamma^{n}))$.
Denoting by $x_u$ the solution to this problem, the solution to the primal problem (\ref{eq:primalCQP}) is $\hat{x} = x_u + v$.
\end{thm}
Finding the sector containing $u$ is achieved moving along a fixed line joining an arbitrary chosen initial point $x^0$ inside the cone or on its boundary to the data point $u$.
By moving along a fixed line, many sectors are crossed: each time a sector is crossed the successive approximation $x^k$ is obtained by projecting the point on the line passing through $u$ and $x^0$ on the face $F_{J^k}$ of $\mathcal{K}$ (see Lemma \ref{def:Partitionface}) so that the distance $||u-x^k||$ is decreasing in $k$.
At each iteration each basis differs from the previous one by one vector only, and therefore the coordinates of $x^k$ in the new basis are easy to calculate: one coordinate becomes equal to zero, and there is no need to compute the inverse of a matrix at each step. For this reason this algorithm is faster than the one of Wilhelmsen.
Points belonging to the sector $S_{J^k}$ have non-negative coordinates in the mixed primal-dual basis $\mathcal{B}_{J^k}$ relative to the cone $\mathcal{K}$. Therefore the procedure terminates when the coordinates of the data point in the primal dual basis $\mathcal{B}_{J^k}$ are all nonnegative, meaning that the point $x^k$ is on the face of the sector containing the data point $u$.
The number of iterations needed is equal to the number of different sectors that the line joining the initial point to the data point has to cross. This number is bounded above by $2^n$. It is worth remarking that crossing a sector corresponds to a pivot step in the equivalent LCP. In fact, the net effect of a pivot step is to move from the point $x^k$ contained in the face $F_{J^k}$ of $\mathcal{K}$ to the point $x^{k+1}$ contained in the face $F_{J^{k+1}}$ of $\mathcal{K}$.
Ten years later, Meyer \cite{Meyer1999Extension} generalized the algorithm of Fraser and Massam to the case of more constraints than dimensions, that is when $m>n$.
\subsubsection{Critical index algorithm: Nearest point problem in simplicial cones}
\label{subsec:critical}
Murty and Fathi (1982) \cite{Murty1982Critical} considered the general problem of projecting a given vector $y \in \mathcal{R}^n$ onto a \textit{simplicial cone} $\mathcal{K}\subset \mathcal{R}^n$. The definitions of simplicial cone and of \textit{pos cone}, on which the former is based, are as follows.
\begin{defn}
\label{def:poscone}
The pos cone generated by the vectors in $\Delta = \lbrace \delta^i, i = 1,...,m \rbrace$, denoted by $pos(\Delta)$, is the set $\lbrace x \in \mathcal{R}^n | x = \sum_{i=1}^m d_i \delta^i , d_i \geq 0\rbrace $.
\end{defn}
\begin{defn}
\label{def:simplicial}
A cone $\mathcal{K} \subset \mathcal{R}^n$ is said to be simplicial if it can be expressed as the positive linear span of $n$ linearly independent vectors $\Delta = \lbrace \delta^i, i = 1,...,n\rbrace$ in $\mathcal{R}^n$ (i.e., a basis for $\mathcal{R}^n$): $\mathcal{K} = pos(\Delta)$.
\end{defn}
For any point $x \in pos(\Delta)$, the vector $d = D^{-1}x$, where $D = [\delta^1,...,\delta^n]$, is called the \textit{combination vector} corresponding to $x$. Therefore the projection $\hat{x}$ of $y$ onto $pos(\Delta)$ can be expressed as a nonnegative linear combination of the edges of the cone: $\hat{x} = \sum_{i=1}^n \hat{d}_i\delta^i$, where the optimal combination vector corresponding to $\hat{x}$ is $\hat{d} = D^{-1}\hat{x}$.
Murty and Fathi called the set of indices $\hat{J} \subseteq \lbrace 1,...,n \rbrace$ such that $\hat{d}_{i} > 0$ for $i \in \hat{J}$ the set of \textit{critical indices}.
Using the definitions of simplicial cone and combination vector, the original problem of projecting the point $y \in \mathcal{R}^n$ onto the cone $\mathcal{K}$ can be formulated as follows.
\begin{thm}
The primal constrained quadratic minimization problem (\ref{eq:primalCQP}) is equivalent to the problem
\begin{equation}
\label{eq:activeindex}
\hat d = \operatornamewithlimits{argmin}_{ d \geq 0 } (u-Dd)^T W (u-Dd)
\end{equation}
where $u = \Pi(y|span(\mathcal{K}))$ and $D = [\gamma^1,...,\gamma^n]$, with $\lbrace \gamma^i=A_i^T, i = 1,...,m \rbrace$ and $\lbrace \gamma^i, i = m+1,...,n \rbrace$ defined as in section \ref{subsubsec:fraser}.
Denoting by $\hat{d}$ the solution to this problem, the solution to the primal problem (\ref{eq:primalCQP}) is $\hat{x} = y - D\hat{d} + v$.
\end{thm}
This formulation has the same structure as the dual formulation (\ref{eq:dualCQP}), where the combination vector $d$ in (\ref{eq:activeindex}) plays the same role as the dual variable $\lambda$ in (\ref{eq:dualCQP}). The only difference is that in (\ref{eq:dualCQP}) the matrix $A \in \mathcal{R}^{m\times n}$ is not square, whereas the matrix $D \in \mathcal{R}^{n\times n}$ is.
As shown in section \ref{subsec:LCP} for the dual quadratic formulation (\ref{eq:dualCQP}), the formulation (\ref{eq:activeindex}) can be recast as an LCP. This equivalence is important since the following theorem, proved in \cite{Murty1982Critical}, applies to the LCP formulation of (\ref{eq:activeindex}).
\begin{thm}
\label{thm:reduction}
If a single critical index for the LCP problem of order $n$ is known, the problem can be reduced to a LCP of order $n-1$.
\end{thm}
The fact that finding a critical index reduces the dimension of the problem can be argued geometrically. Let $l$ be a critical index; then, denoting by $NPP[\Gamma;u]$ the subproblem (\ref{eq:activeindex}), where $\Gamma = \lbrace \gamma^i,i=1,...,n\rbrace$, its solution is also the solution to $NPP[\Gamma \cup \lbrace -\gamma^l\rbrace;u]$. Defining $\bar{u} = u - \frac{\gamma^l(u^T\gamma^l)}{||\gamma^l ||^2}$ and $\bar{\Gamma} = \lbrace \bar{\gamma}^1,...,\bar{\gamma}^{l-1}, \bar{\gamma}^{l+1},...,\bar{\gamma}^n \rbrace$, where $\bar{\gamma}^i = \gamma^i - \frac{\gamma^l((\gamma^i)^T\gamma^l)}{||\gamma^l ||^2}$, then $\bar{\gamma}^i, i \in \lbrace 1,...,n \rbrace \setminus l$ is orthogonal to $\gamma^l$ and the cone $pos(\Gamma \cup \lbrace -\gamma^l\rbrace)$ is the direct sum of the full line generated by $\gamma^l$ and the simplicial cone $pos(\bar{\Gamma})$.
Solving $NPP[\bar{\Gamma},\bar{u}]$ is an $n-1$ dimensional problem. If $\hat{x}^*$ is the solution of the $NPP(\bar{\Gamma},\bar{u})$, then the solution $\hat{x}$ to the $NPP[\Gamma;u]$ is obtained as $\hat{x} = \hat{x}^* + \frac{\gamma^l(u^T\gamma^l)}{||\gamma^l ||^2}$.
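The reduction step can be sketched as follows: both $u$ and the remaining generators are orthogonalized against $\gamma^l$, yielding the lower-dimensional subproblem (the function name is ours; vectors are plain Python lists):

```python
def reduce_by_critical_index(gammas, u, l):
    # Given a critical index l, project u and every other generator onto the
    # hyperplane orthogonal to gamma^l, obtaining the reduced problem
    # NPP[Gamma_bar; u_bar] with one generator (and one dimension) less.
    g = gammas[l]
    g2 = sum(gi * gi for gi in g)

    def orth(v):
        # component of v orthogonal to g
        s = sum(vi * gi for vi, gi in zip(v, g)) / g2
        return [vi - s * gi for vi, gi in zip(v, g)]

    u_bar = orth(u)
    gam_bar = [orth(gammas[i]) for i in range(len(gammas)) if i != l]
    return gam_bar, u_bar
```

Every returned vector is orthogonal to $\gamma^l$, as required for the direct-sum decomposition above.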
By relying on Theorem \ref{thm:reduction}, the authors proposed an algorithm consisting of a subroutine to identify a critical index for the problem, followed by a subroutine which reduces the size of the problem once a critical index is found.
Since the solution $\hat{x}$ is the orthogonal projection of $u$ onto the linear hull of $\lbrace \gamma^i, i \in \hat{J} \rbrace$, if $\hat{J}$ is known the solution of the equivalent $LCP(q,M)$, and correspondingly the solution to $NPP[\Gamma;u]$, can easily be found.
The routine to identify a critical index operates on the $NPP[\Gamma;u]$ by exploiting the geometric properties of projection faces of a pos cone, whose definition is as follows.
\begin{defn}
\label{def:projectionFace}
Let $S \subset \Gamma$. $pos(S)$ is a face of $pos(\Gamma)$ of dimension $|S|$. $pos(S)$ is said to be a projection face of $pos(\Gamma)$ if $\Pi(u|span(S)) \in pos(S)$.
\end{defn}
In the following we state some theorems on which the critical index algorithm is based. Their proofs can be found in \cite{Murty1988Linear} (chapter 7).
\begin{thm}
\label{characterization_solution}
Let $S \subseteq \Gamma$, $S\neq \varnothing$. The optimum solution of $NPP[\Gamma;u]$ is in the relative interior of $pos(S)$ if and only if the projection of $u$ onto the linear span of $S$ is in the relative interior of $pos(S)$: $\Pi(u|span(S)) \in ri(pos(S))$.
\end{thm}
\begin{thm}
\label{projection_in_posCone}
Let $\hat{x} = D\hat{d}$ be the optimum solution of $NPP[\Gamma;u]$. Let $\hat{J}$ be the set of critical indices and $S = \lbrace \gamma^j, j \in \hat{J}\rbrace$. Then $pos(S)$ is a projection face of $pos(\Gamma)$.
\end{thm}
Theorem \ref{projection_in_posCone} states that the projection onto the cone $\mathcal{K}^o$ belongs to the pos cone generated by the set $S$ of vectors corresponding to critical indices, and that such pos cone is a projection face. Therefore, if the set of critical indices is known, by Theorem \ref{characterization_solution} the solution can be computed as the projection onto the linear subspace spanned by the vectors in $S$.
The routine maintains a nonempty subset of $\Gamma$, called the \textit{current set} and denoted by $S$, and a point called the \textit{current point}, denoted by $\bar{x}$.
At each stage of the routine $\bar{x} \in pos(S)$. When termination occurs, the routine either finds the nearest point in $pos(\Gamma)$ to $u$, in which case the problem is completely solved, or it finds a critical index of the problem. In the latter case an LCP of order $(n-1)$ can be constructed and the same routine can be applied to this smaller problem. Hence the unique solution of the original problem can be obtained after at most $n$ applications of the routine which finds a critical index.
A characterization useful to find a critical index or the solution to the problem is provided by the following theorem.
\begin{thm}
Let $\bar{x} \in pos(\Gamma)$ be such that $0 \in T(u,\bar{x})$, where $T(u,\bar{x})$ is the tangent hyperplane at $\bar{x}$ to the ball of center $u$ and radius $||u-\bar{x}||$. If there exists an index $j$ such that $(u-\bar{x})^T\gamma^i \leq 0 $ for all $i \neq j$ and $(u-\bar{x})^T\gamma^j > 0$, then $j$ is a critical index of $NPP(\Gamma,u)$.
\end{thm}
A characterization of the optimal solution in terms of separating hyperplanes is given by Robertson et al. \cite{Hardle1989Robertson}.
\begin{thm}
\label{thm:optimality}
A point $\bar{x} \in pos(\Gamma)$ is the nearest point in $pos(\Gamma)$ to $y$ if and only if $0 \in T(y;\bar{x})$ and $(y-\bar{x})^T\gamma^j \leq 0$, $\forall j=1,...,n$, where $T(y;\bar{x})$ is the tangent hyperplane at $\bar{x}$ to the ball of center $y$ and radius $||y-\bar{x}||$.
\end{thm}
The routine to find a critical index alternates distance reduction operations with line-search and projection steps to find a projection face. In practice, the routine starts by projecting onto the edge closest to the data point.
If the optimality condition is not satisfied, then the procedure iteratively adds vectors to $S$ and updates the point $\bar{x}$, consistently reducing the distance between $u$ and $\bar{x}$. The distance reduction operation is carried out efficiently by projecting onto the nonnegative hull of two vectors in $\mathcal{R}^n$: the current point $\bar{x}$ and a vector $\gamma^i$ satisfying one of the conditions given by the following theorem.
\begin{thm}
\label{thm:reduce_distance}
Given $\bar{x} \in pos(\Gamma)$, $\bar{x} \neq 0$ such that $0 \in T(y,\bar{x})$, if for some $i\in \lbrace 1,...,n\rbrace$ we have $(y-\bar{x})^T\gamma^i>0$ and either:
\begin{itemize}
\item $|| \bar{x} - y|| \leq ||\Pi(y|\gamma^i) -y ||$ and $\lbrace \bar{x},\gamma^i\rbrace$ is linearly independent, or
\item $y^T\gamma^i \leq 0$
\end{itemize}
then the projection of $y$ onto the linear hull of $\lbrace \bar{x},\gamma^i\rbrace$ is in the relative interior of $pos\lbrace \bar{x},\gamma^i\rbrace$.
\end{thm}
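Projecting onto the span of two vectors amounts to solving a $2\times 2$ system of normal equations, and the projection lies in the relative interior of $pos\lbrace a,b\rbrace$ exactly when both coefficients are positive. A minimal sketch (the helper name is ours):

```python
def project_on_span2(y, a, b):
    # Solve the 2x2 normal equations for Pi(y | span{a, b}); the projection
    # is in ri(pos{a, b}) iff both returned coefficients are positive.
    def dot(p, q):
        return sum(pi * qi for pi, qi in zip(p, q))
    g11, g12, g22 = dot(a, a), dot(a, b), dot(b, b)
    r1, r2 = dot(y, a), dot(y, b)
    det = g11 * g22 - g12 * g12   # nonzero iff {a, b} linearly independent
    c1 = (g22 * r1 - g12 * r2) / det
    c2 = (g11 * r2 - g12 * r1) / det
    proj = [c1 * ai + c2 * bi for ai, bi in zip(a, b)]
    return c1, c2, proj
```

This is the elementary operation behind the distance reduction step described above.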
Once such updates are no longer possible, it employs a sequence of line-search steps and projections in the subspace spanned by the vectors in $S$ to find a projection face of the corresponding pos cone. This line-search is in the same spirit as the one proposed by Fraser and Massam, since the goal is to reduce the distance to the data point while staying in the interior of a pos cone. In the particular case of concave regression, for which $m<n$, it can be implemented in exactly the same way.
This algorithm turns out to be much faster than the MPDB algorithm. The primary source of its computational efficiency is that it relies mostly on distance reduction operations and size reduction steps, whose computational requirement is relatively small compared to the computational effort required to find a projection face through a line-search.
Recently, Liu and Fathi (2011) \cite{Liu2011Active} generalized the work of Murty and Fathi (1982) to polyhedral non-simplicial cones, hence allowing the set $\Gamma$ to contain more than $n$ vectors. What allows the generalization is the equivalence between the structures of the two problems through the concept of polar cone.
The authors also proposed several strategies for efficient implementation, mostly based on the mathematical properties of the entities involved. We have incorporated these strategies, where possible, into all the algorithms tested, for an objective evaluation of performances.
\subsubsection{Meyer's algorithm}
\label{subsubsec:hinge}
Meyer \cite{Meyer2013Simple} considered the general problem of projecting a data point $y \in \mathcal{R}^n$ onto a polyhedral convex cone $\mathcal{K} \subset \mathcal{R}^n$ generated by a finite number $m$ of linear inequalities. The problem is reduced to finding the set of indices $\hat{J} \subset \lbrace 1,...,M \rbrace \equiv L $, where $M\leq m$ is the number of linearly independent constraints, corresponding to the constraints not saturated at the solution. Meyer called these indices \textit{hinges}.
When $m\leq n$, the algorithm can be applied to both the primal and the dual formulation, whereas for $m>n$ it is applied to the dual formulation. In the following, since for the problem of concave regression $m< n$, we consider how to solve the primal problem (\ref{eq:primalCQP}).
The search for $\hat{J}$ is performed iteratively, starting with an initial guess by removing or adding one index at time, until the optimal solution is obtained.
For each candidate $J^k$ two conditions are tested: the \textit{interior point condition} and \textit{optimality condition}.
The interior point condition is satisfied when the current iterate belongs to the interior of $pos(S)$, that is when $x^k \in pos(S)$, where $S\subset \lbrace \beta^i, i \in J^k \rbrace$. By using the following theorem,
\begin{thm}
\label{thm:feasibility}
Let $S \subset \lbrace \beta^i, i = 1...,m\rbrace$, $S \neq \varnothing$. The optimum solution of problem (\ref{eq:primalCQP}) is in the relative interior of $pos(S)$ if and only if $\Pi(y|span(S))$ is in the relative interior of $pos(S)$.
\end{thm}
$x^k$ can be computed as the projection of $y$ onto the linear hull spanned by the vectors $\lbrace \beta^i, i \in J^k \rbrace$.
If the feasibility condition is not satisfied, the index $j\in L \setminus J^k$ corresponding to the most negative coefficient is added to $J^k$ and the interior point condition is checked again.
Once the feasibility condition is satisfied, the optimality condition is tested by using the characterization given in Theorem \ref{thm:optimality}. If it is not satisfied, the vector $\beta^i, i \in J^k$ which most violates the condition is removed. The procedure continues until both conditions are satisfied.
Convergence is guaranteed by the fact that when the algorithm replaces just one edge, the Sum of Squared Errors (SSE) after the replacement is less than the SSE before, so that the algorithm never produces the same set of edges twice, which would result in an infinite loop.
In practice, each time a hinge is added, the best solution with $k+1$ hinges, where the first $k$ hinges are already given, is obtained. But this is not in general the best fit with $k+1$ hinges, so that some hinge may need to be changed. Therefore, the optimal solution can be interpreted as the best approximation with the biggest possible number of hinges.
\section{Issues about effectiveness for large-scale problems}
In this section, we discuss strengths and limitations of the algorithms detailed above in solving large-scale instances of a particular kind of cone regression, the concave regression. In particular, we consider computational issues related to numerical stability, computational cost and memory load, as well as the suitability to take advantage of available good estimates and to be implemented in an online fashion.
\subsection{Suitability to take advantage of available good estimates}
One general strategy for reducing the computational cost of a large-scale optimization problem is to use an initial guess, easier to calculate and close to the optimal solution.
Within the class of algorithms with asymptotic convergence, splitting-based methods work by activating each of the convex constraints repetitively and by combining the resulting projections to obtain a sequence converging to a feasible point. Since the projection point $\Pi(y| \mathcal{K})$ is characterized by the variational inequality
\begin{equation}
\hat{x} = \Pi(y| \mathcal{K}) \in \mathcal{K}, \hspace{1cm} \forall x \in \mathcal{K}: \langle y- \hat{x}, x-\hat{x} \rangle \leq 0
\end{equation}
the projection operator $\Pi( \cdot |\mathcal{K})$ is a closed contraction. Therefore the set of fixed points of $\Pi( \cdot |\mathcal{K})$ is exactly $\mathcal{K}$.
This prevents the use of an initialization point belonging to the feasible set, as well as the use of multiscale strategies, since there is no guarantee that the solution from a previous level does not belong to the feasible set.
The same difficulty arises when considering interior point algorithms, since they need to be initialized at an interior point. In \cite{Goldman1993Nonparametric}, Goldman proved that the Dykstra's algorithm can potentially start from better starting values than the given data point $y$. The author established the convergence to the nearest point to the primal cone from an arbitrary point in the intersection of $\mathcal{C}$ and the ball of radius $||y||$, where $\mathcal{C} = \lbrace x| x = y -A^T\lambda, \lambda \geq 0 \rbrace$ is a rotation by $\pi$ radians of the dual cone $\mathcal{K}^o$ with its vertex translated to $y$. A point satisfying these conditions can be obtained efficiently by using distance reduction operations based on Theorem \ref{thm:reduce_distance}. It is worth remarking that this result can be easily interpreted in the active set framework. In fact, the Dykstra's algorithm can be understood as a primal active set method and its solution is a primal-dual feasible point. Therefore, any dual feasible point, that is, every point belonging to the set $\mathcal{C}$, can be used as initialization.
All algorithms with finite-time convergence detailed in the previous section are primal-dual, and they reduce the problem of projecting a given data point onto a convex set to the problem of finding the set of indices corresponding to the constraints not saturated at the solution. In general, they involve the selection of a subset from a collection of items, say $\hat{J} \subseteq \lbrace 1,...,m \rbrace$. With this formulation, they potentially allow one to take advantage of a good estimate of the optimal active set. However, the adaptation is not rapid, since the active set estimate is updated by one-by-one changes, preventing this class of methods from being effective general-purpose solvers for large-scale problems.
In the algorithm of Fraser and Massam, the successive approximations are obtained by moving along a fixed line connecting the initial guess to the data point. The number of iterations needed equals the number of sectors that must be crossed to reach the sector containing the data point, so an initial guess close to the data point reduces the number of iterations.
By contrast, in the algorithm of Meyer the proximity of the initial guess to the data point is not a good selection criterion for the initial guess. In fact, given an initial guess, the solution is attained by adding and/or removing one index at a time until the optimal solution is found. Taking into account that the optimal solution can be interpreted as the best approximation with the biggest possible number of hinges, if the optimal solution contains just a few hinges, then using the empty set as an initial guess would be much faster than using the full set of possible hinges. On the contrary, if just a few constraints are satisfied at equality in the optimal solution, then the full set of indices will be a much better initial guess than the empty set. Therefore, even if the choice of the initial guess may highly influence the performances, this choice depends on the data and there is no well-established criterion to fix it.
Murty and Fathi's algorithm reduces the size of the problem each time a critical index is found. Therefore it is not compatible with strategies that take advantage of a good initial estimate, since a good estimate does not lead to finding a critical index faster.
To overcome the limitation of active set methods in taking advantage of a good estimate, Curtis et al. \cite{curtis2012} have recently proposed a heuristic framework that allows for multiple simultaneous changes in the active-set estimate, which often leads to a rapid identification of the optimal set. However, there is no guarantee of computational advantages for general problems and, furthermore, the authors recommend their approach for generic quadratic programming problems with many degrees of freedom, which is not the case for general concave regression problems.
\subsubsection{PAV's inspired approximate solution}
To evaluate through numerical simulations the suitability to take advantage of good initial estimates, we propose an algorithm inspired by the Pool Adjacent Violators (PAV) algorithm, whose computational complexity is $\mathcal{O}(n)$.
Starting from the original signal, violated constraints are removed one by one by projecting the current iterate onto the convex cone corresponding to the violated constraint, until a primal feasible solution is obtained. Since the dual feasibility of each iterate is not guaranteed, the solution found is not optimal. However, in our experience it is a very good approximation of the optimal solution.
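A minimal sketch of such a scheme is the following (our illustrative rendering, not necessarily the exact implementation used in the experiments): each concavity constraint $x_i - 2x_{i+1} + x_{i+2} \leq 0$ defines a halfspace, and whenever it is violated the current iterate is projected onto that single halfspace in closed form.

```python
def project_violated_constraints(y, tol=1e-9, max_sweeps=10000):
    # Sweep over the second-difference constraints and project the iterate
    # onto the halfspace of each violated one, until (approximate)
    # primal feasibility is reached.
    x = list(y)
    n = len(x)
    for _ in range(max_sweeps):
        changed = False
        for i in range(n - 2):
            viol = x[i] - 2 * x[i + 1] + x[i + 2]
            if viol > tol:
                # constraint normal a = e_i - 2 e_{i+1} + e_{i+2}, ||a||^2 = 6,
                # so the halfspace projection is x <- x - (viol / 6) * a
                x[i] -= viol / 6
                x[i + 1] += viol / 3
                x[i + 2] -= viol / 6
                changed = True
        if not changed:
            break
    return x
```

Each halfspace projection decreases the distance to the feasible cone, but since dual feasibility is never enforced the output is only an approximation of the exact projection.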
\subsection{Suitability to be implemented in an online fashion}
Another strategy to deal with large projection problems would be to build and evaluate the solution incrementally according to the order of its input, as done in online methodologies developed for dynamic optimization \cite{Bhatia1996Dynamic} over a stream of input. Even if the input is given in advance, inputs are processed sequentially and the algorithm must respond in real time with no knowledge of future input. Each new input may cause constraints to be added or removed, and the final solution is a sequence of feasible solutions, one for each time step, such that later solutions build on earlier solutions incrementally.
Of course this strategy requires that the algorithm respond in real time for each new input, which would not be possible when dealing with large matrix inverse computations. Let $\hat{x} \in \mathcal{K}^n$ be the projection of $y \in \mathcal{R}^n$ onto $\mathcal{K}^n$ and let $\hat{\bar{x}} \in \mathcal{K}^{n+1}$ be the projection of $\bar{y} \in \mathcal{R}^{n+1}$ onto $\mathcal{K}^{n+1}$. When a new element is added to the data point, a new constraint is added too, so that the constraint cone has a new edge. If this constraint corresponds to a critical index, that is, to a constraint satisfied at equality in the optimal solution, then the projection face will be the same, so that no further computation will be needed. On the contrary, if the new constraint does not correspond to a critical index, the projection face will change, including the edge corresponding to the new constraint and removing and adding some others. Therefore, the major difficulty faced by the online strategy is the same faced in exploiting good estimates.
\subsection{Computational issues}
\label{subsec:computational_issues}
As highlighted in the previous section, despite the different strategies implemented by algorithms with finite-time convergence, generally one index at a time is iteratively added to or removed from the current set of indices $J^k$, until both the feasibility condition and the optimality condition are satisfied. Checking the optimality condition involves computing the inverse of a matrix that differs slightly from the matrix of the previous iteration. What ``slightly'' exactly means depends on the specific algorithm and is detailed in the following.
The algorithm of Fraser and Massam involves the computation of an $n \times n$ fixed-size matrix inverse at each iteration. The matrix to be inverted differs from the matrix of the previous iteration only in one column, the change being of the form $A \rightarrow A + uv^T$, where $u$ is a unit vector with only one nonzero component, corresponding to the index of the column to be changed, and $v$ corresponds to the difference between the elements of the dual basis vector and the elements of the primal basis vector, or viceversa. In this case the inverse can be updated efficiently by using the Sherman-Morrison formula: $(A + uv^T)^{-1} = A^{-1} - \frac{zw^T}{1+\lambda}$, where $z= A^{-1}u$, $w=(A^{-1})^Tv$, $\lambda = v^Tz$. Therefore only two matrix-vector products and one outer product are needed to update the inverse at each step.
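For instance, storing $A^{-1}$ as a dense array, the Sherman-Morrison rank-one update can be sketched as follows (plain-Python sketch, matrices as lists of rows):

```python
def sherman_morrison(A_inv, u, v):
    # Rank-one update: (A + u v^T)^{-1} = A^{-1} - (z w^T) / (1 + lambda),
    # with z = A^{-1} u, w = (A^{-1})^T v, lambda = v^T z.
    n = len(A_inv)
    z = [sum(A_inv[i][j] * u[j] for j in range(n)) for i in range(n)]
    w = [sum(A_inv[j][i] * v[j] for j in range(n)) for i in range(n)]
    lam = sum(v[i] * z[i] for i in range(n))
    return [[A_inv[i][j] - z[i] * w[j] / (1 + lam) for j in range(n)]
            for i in range(n)]
```

The update is well defined provided $1 + \lambda \neq 0$, i.e., the updated matrix is nonsingular.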
The algorithm of Meyer, as well as the one of Liu and Fathi \cite{Liu2011Active}, involves the computation of an inverse matrix of variable size at each iteration. The matrix to be inverted differs from the matrix of the previous iteration only in one column, which has been added or removed. Since the matrix to be inverted, $A(J) \in \mathcal{R}^{r\times n}$ with $r\leq m$, is generally rectangular, the Moore-Penrose generalized inverse, or pseudoinverse, is computed: $A(J)^{\dag}= A(J)^T(A(J)A(J)^T)^{-1}$.
In Matlab and Scilab, the computation of the pseudoinverse is based on the Singular Value Decomposition (SVD), and singular values lower than a tolerance are treated as zero. The advantage of this approach is that the pseudoinverse can be computed also for a singular matrix. However, the method proposed by Liu and Fathi \cite{Liu2011Active} to improve the computational efficiency of their algorithm does not take advantage of the SVD approach, since it consists in updating the matrix $(A(J)A(J)^T)^{-1}$. If the matrix $A(J)$ is ill-conditioned, then the inverse cannot be computed with good accuracy, and for the matrix $A(J)A^T(J)$ this is even more so, because this operation squares the condition number of the matrix $A(J)$.
A better solution would be to update the pseudoinverse directly. When a column is added, this can be achieved by using the method proposed in \cite{Andelic:2006:KLM:1228509.1228513}.
Let $A^T \in \mathcal{R}^{n\times r}$, $x \in \mathcal{R}^n$ and $B = \begin{pmatrix}A^T&x\end{pmatrix} \in \mathcal{R}^{n\times (r+1)}$. The pseudoinverse of $B$ can be computed from the pseudoinverse of $A^T$ as
$B^{\dag} =
\begin{pmatrix}
(A^T)^{\dag} - (A^T)^{\dag}xw^{\dag}\\
w^{\dag}
\end{pmatrix}$, where $w = (I-A^T(A^T)^{\dag})x$ and $w^{\dag} = \frac{w^T}{||w||^2}$, provided that $w \neq 0$.
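This column-append update can be sketched as follows (the helper name `pinv_append_column` is ours; the formula requires the new column $x$ not to lie in the column space of $A^T$, so that $w \neq 0$):

```python
def pinv_append_column(M, M_pinv, x):
    # Greville-type update: given M (n x r, as a list of rows) and its
    # pseudoinverse M_pinv (r x n), return the pseudoinverse of B = [M | x].
    n, r = len(M), len(M[0])
    Mpx = [sum(M_pinv[i][k] * x[k] for k in range(n)) for i in range(r)]
    # w = (I - M M^+) x, the residual of x against the column space of M
    w = [x[k] - sum(M[k][i] * Mpx[i] for i in range(r)) for k in range(n)]
    w2 = sum(wk * wk for wk in w)          # must be nonzero (w != 0)
    w_dag = [wk / w2 for wk in w]
    top = [[M_pinv[i][k] - Mpx[i] * w_dag[k] for k in range(n)]
           for i in range(r)]
    return top + [w_dag]                   # (r+1) x n
```

Only matrix-vector products are involved, so the update costs $\mathcal{O}(nr)$ instead of recomputing the pseudoinverse from scratch.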
Alternatively, the transformation $A(J)^T(A(J)A(J)^T)^{-1}y$ can be efficiently computed by a QR decomposition approach.
Let $A^T = \begin{pmatrix} Q_{11} & Q_{12} \end{pmatrix}\begin{pmatrix}R_{11} \\ 0 \end{pmatrix}$ be the QR decomposition of $A^T$, where $R_{11}$ is an $r\times r$ invertible upper triangular matrix. Then $A(J)^T(A(J)A(J)^T)^{-1}=Q_{11}(R_{11}^T)^{-1}$. The inverse of an upper triangular matrix can be efficiently computed by a left matrix division or by more sophisticated methods such as the one proposed in \cite{mahfoudhi2012fast}.
Courrieu \cite{Courrieu2008Fast} proposed a method based on the full rank Cholesky decomposition whose computation time is substantially shorter than that of the SVD-based method.
The two main operations on which his method is based are the full rank Cholesky factorization of $A^TA$ and the inversion of $L^TL$, where $L$ is a full rank matrix.
On a serial processor these computations are of complexity order $\mathcal{O}(n^3)$ and $\mathcal{O}(m^3)$ respectively. However, on a parallel architecture, with as many processors as necessary, the time complexity of the Cholesky factorization of $A^TA$ could reduce to $\mathcal{O}(n)$, while the time complexity of the inversion of the symmetric positive definite matrix $L^TL$ could reduce to $\mathcal{O}(\log(r))$. However, the computational advantage of this method can be appreciated only when $r \ll n$, since the inverse of an $r \times r$ matrix has to be computed, which is not in general the case, especially for concave regression problems.
Zhu and Li \cite{Zhu2007Recursive}, studying recursive constrained least squares problems, found that the exact solution of linear equality-constrained least squares can be obtained by the same recursion as for the unconstrained problem, provided that the Restricted Least Squares procedure is appropriately initialized. However, even this approach offers significant advantages in terms of computational cost only when the number of constraints $m$ is small, which is not the case for large-scale concave regression problems.
\section{Improving the active set framework for concave regression problems}
An active set algorithm for solving the concave regression problem generates a sequence of quasi stationary points. A primal (dual) feasible active set algorithm generates a sequence of primal (dual) feasible quasi stationary points with decreasing objective function values and terminates when the dual (primal) feasibility is satisfied.
An active set $J$ induces a unique partition of the indices $\lbrace1,...,n\rbrace$ into blocks. A block $B$ of such a partition is a set of consecutive integers, say $\lbrace p,p+1,...,q \rbrace$, such that the index $i$ of the constraint $x_{i+1} - x_i \leq x_{i+2} - x_{i+1}$ is in $J$ for each $i$ such that $p \leq i \leq q-2$. Conversely, any such partition of the indices determines a unique active set.
Denoting by $\lambda_i$ the multiplier associated with the $i$th constraint, the Karush-Kuhn-Tucker (KKT) conditions can be written as:
\vspace{0.5cm}
$\left\lbrace
\begin{array}{llllllcl}
x_{i+1} - x_i \leq x_{i+2} - x_{i+1} \quad i = 1,...,n-2\\
y_1 - x_1 = \lambda_1\\
y_2 - x_2 = - 2\lambda_1 + \lambda_2 \\
y_3 - x_3 = \lambda_1 - 2\lambda_2 + \lambda_3 \\
\quad \vdots \\
y_{n-2} - x_{n-2} = \lambda_{n-4} - 2\lambda_{n-3} + \lambda_{n-2}\\
y_{n-1} - x_{n-1} = \lambda_{n-3} - 2\lambda_{n-2} \\
y_n - x_n = \lambda_{n-2} \\
\lambda_i \geq 0 \quad \quad i = 1,...,n-2\\
\lambda_i(2x_{i+1} - x_i - x_{i+2}) = 0 \quad i = 1,...,n-2\\
\end{array}\right.$
\vspace{0.3cm}
It is easy to show that $\sum_{i=1}^n i(y_i-x_i) = 0$ and $\sum_{i=1}^n (y_i-x_i) = 0$. Knowing that within each block the solution can be represented by an affine function $x_i = \alpha i + \beta$, in the case of two blocks the above system can be written as:
\vspace{0.5cm}
$\left\lbrace
\begin{array}{lllll}
\sum_{i=1}^{n_1}y_i = \alpha^1 \sum_{i=1}^{n_1}i + \beta^1 n_1 + \lambda_{n_1} \\
\sum_{i={n_1+1}}^{n}y_i = \alpha^2 \sum_{i={n_1+1}}^{n}(i-n_1) + \beta^2 (n-n_1) - \lambda_{n_1} \\
\sum_{i=1}^{n_1}i\,y_i = \alpha^1 \sum_{i=1}^{n_1}i^2 + \beta^1 \sum_{i=1}^{n_1}i + n_1\lambda_{n_1} \\
\sum_{i={n_1+1}}^{n}(i-n_1)y_i = \alpha^2 \sum_{i={n_1+1}}^{n}(i-n_1)^2 + \beta^2 \sum_{i={n_1+1}}^{n} (i-n_1) \\
\alpha^1(n_1+1) + \beta^1-\alpha^2-\beta^2=0
\end{array}\right.$
\vspace{0.3cm}
Therefore, for each block the unknown variables to be computed are $\alpha, \beta, \lambda$.
The system to be solved can be written as $A\boldsymbol{x} = \boldsymbol{b}$, where $\boldsymbol{x} = (\alpha^1,\beta^1,\alpha^2,\beta^2,\lambda_{n_1})$, $\boldsymbol{b} = (\sum_{i=1}^{n_1}y_i,\ \sum_{i={n_1+1}}^{n}y_i,\ \sum_{i=1}^{n_1}i\,y_i,\ \sum_{i={n_1+1}}^n(i-n_1)y_i,\ 0)$ and
\vspace{0.5cm}
$A = \begin{pmatrix}
\sum_{i=1}^{n_1} i & n_1 & 0 & 0 & 1 \\[0.3em]
0 & 0 & \sum_{i=1}^{n-n_1} i & n-n_1 & -1 \\[0.3em]
\sum_{i=1}^{n_1} i^2 & \sum_{i=1}^{n_1} i & 0 & 0 & n_1 \\[0.3em]
0 & 0 &\sum_{i=1}^{n-n_1} i^2 & \sum_{i=1}^{n-n_1} i & 0 \\[0.3em]
n_1+1 & 1 & -1 & -1 & 0 \\[0.3em]
\end{pmatrix}$
\vspace{0.3cm}
In general, if $k$ is the number of blocks, then the system to be solved has size $3k-1$.
As observed in \cite{Kuosmanen2008Representation}, in the case of concave regression the number of blocks at the solution is usually much lower than $n$. Therefore, an active set algorithm that starts with an empty active set should find the solution without the need of inverting large matrices.
\section{Comparative results}
\label{sec:experiments}
In Tab. \ref{tab:1} we compare qualitatively the algorithms analyzed above in terms of their formulation (primal, dual, or primal-dual), whether they can be initialized, and their major limitations in dealing with large scale problems. All analyzed methods are dual or primal-dual: dual methods cannot be initialized, whereas initialization in primal-dual methods is allowed but constrained. The major limitation of asymptotic convergence methods when dealing with large problems is convergence itself, whereas finite-time convergence methods become too slow and numerically unstable because of accumulated errors. In the following, we provide evidence of these limitations through numerical simulations reported in the Appendix, and we compare quantitatively their performances in terms of distance from the solution after one second, one minute and ten minutes of CPU time.
Instead of using only unstructured random data as test data, we considered signals of increasing difficulty, varying from a noisy concave signal to a noisy concave/convex signal.
More precisely, we considered three signals, whose equations and plots are given in Fig. \ref{signals} of the Appendix and, for each of them, we considered three different sizes, $n \in \lbrace 50, 500, 1000 \rbrace$,
and three increasing levels of noise (standard deviations $\sigma \in \lbrace 0.01, 0.1, 0.5 \rbrace$). To evaluate robustness against variations of the initialization for finite-time active set methods, we considered three different initializations of the active set $J$: the empty set, the set of $m$ indexes corresponding to the linear inequality constraints, and the set of unsaturated constraints obtained by using the PAV-inspired algorithm described in the previous section. In the class of algorithms with asymptotic convergence, the most efficient turns out to be ADMM. This is evident even for very small data sizes when using difficult signals such as near-convex or very noisy signals (see Fig. \ref{size50L2S2}
and Fig. \ref{size50L2S3}
of the Appendix). For noisy concave signals (Fig. \ref{size50L2S1}
of the Appendix), the computational advantage of ADMM is more evident in the presence of a high level of noise. Already for signals of size $500$ the performance of ADMM is no longer good: convergence ($SSE<10^{-6}$) is not completely attained.
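For concreteness, the ADMM splitting for this cone projection can be sketched as follows. This is a toy numpy illustration, not the implementation benchmarked here; the penalty parameter, iteration count and test vector are our own illustrative choices.

```python
import numpy as np

def concave_fit_admm(y, rho=1.0, iters=3000):
    """Project y onto the concave cone: min 0.5||x-y||^2 s.t. Dx >= 0."""
    n = len(y)
    D = np.zeros((n - 2, n))            # (Dx)_k = 2 x_{k+1} - x_k - x_{k+2}
    for k in range(n - 2):
        D[k, k], D[k, k + 1], D[k, k + 2] = -1.0, 2.0, -1.0
    M = np.eye(n) + rho * D.T @ D       # x-update matrix, fixed across iterations
    z = np.zeros(n - 2)
    u = np.zeros(n - 2)
    for _ in range(iters):
        x = np.linalg.solve(M, y + rho * D.T @ (z - u))   # x-minimization
        z = np.maximum(0.0, D @ x + u)                    # projection onto z >= 0
        u += D @ x - z                                    # dual update
    return x

y = np.array([0.0, 3.0, 5.0, 6.0, 5.5, 6.5, 4.0, 2.0])
x = concave_fit_admm(y)
print(np.round(x, 3))
```

Note that the $x$-update matrix is constant, so a real implementation would factor it once; only the cheap projection and dual steps change across iterations.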
The algorithm of Meyer is very sensitive to the initialization. It gives good performance when the signal is Gaussian white noise and the initial active set is empty, since most of the constraints are expected to be saturated at the solution (see Fig. \ref{size50L2S3}
and Fig. \ref{size500L2S3} of the Appendix).
Given a good initialization, the MPDB algorithm allows computing the exact solution faster than the other methods. However, this algorithm is not numerically stable since it requires computing the inverse of a matrix at each iteration. As explained in section \ref{subsec:computational_issues}, this is done incrementally by using the Sherman-Morrison formula. However, numerical round-off errors accumulate so that, in our implementation, the exact inverse is recomputed every $150$ iterations. As can be observed in Fig. \ref{size1000L2S1}, Fig. \ref{size1000L2S2}
and Fig. \ref{size1000L2S3}
of the Appendix, which refer to signals of size $1000$, sometimes the round-off error dominates the calculation and the distance to the solution increases instead of decreasing.
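The incremental inverse with a periodic exact refresh can be sketched as follows. This is a numpy illustration of the Sherman-Morrison rank-one update only; the matrix sizes, update magnitudes and refresh period are our own illustrative choices, not those of the benchmarked implementation.

```python
import numpy as np

def sherman_morrison(Ainv, u, v):
    """Inverse of A + u v^T from A^{-1}: O(n^2) instead of O(n^3)."""
    Au = Ainv @ u
    vA = v @ Ainv
    return Ainv - np.outer(Au, vA) / (1.0 + v @ Au)

rng = np.random.default_rng(1)
n = 40
A = np.eye(n) + 0.1 * rng.normal(size=(n, n))
Ainv = np.linalg.inv(A)

for it in range(1, 451):
    u = 0.05 * rng.normal(size=n)
    v = 0.05 * rng.normal(size=n)
    A = A + np.outer(u, v)
    Ainv = sherman_morrison(Ainv, u, v)     # cheap incremental update
    if it % 150 == 0:                       # periodic exact refresh
        drift = np.linalg.norm(Ainv @ A - np.eye(n))
        Ainv = np.linalg.inv(A)             # reset accumulated round-off
        print(it, drift)
```

The printed drift measures how far the incrementally maintained inverse has wandered from the true one between refreshes; without the periodic reset, this quantity keeps growing with the iteration count.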
These results demonstrate that, despite the theoretical and practical advances of recent years, the use of shape-constrained nonparametric techniques for solving large scale problems (more than many thousands of points) is still limited by computational issues. To deal with very large scale problems, up to a million points, matrix inverse calculations should be avoided and more effort should be devoted to finding ways to better initialize primal-dual methods, by computing approximate solutions and exploiting multi-scale strategies.
\par
\section{Conclusions}
\label{sec:conclusions}
In this paper we have stated, analyzed and compared qualitatively and quantitatively several optimization approaches for solving the cone regression problem. We have distinguished two broad classes of methods. On one side, methods with asymptotic convergence rest on the properties of proximity operators and act by finding the solution as the limit of a sequence of successive approximations. On the other side, methods with finite-time convergence exploit the geometric properties of polyhedral convex cones and find the exact solution as a non-negative linear combination of functions forming a basis in a specified finite dimensional space.
Simulations up to one thousand points have demonstrated that the choice of the optimization approach severely impacts algorithmic performance. In particular, it has emerged that methods with finite-time convergence are much more efficient than asymptotic-convergence methods. However, this study also showed that they face a twofold difficulty in coping with large-scale optimization: the first difficulty arises from the fact that all algorithms of this class modify the active set estimate one element at a time, making the adaptation of the initial active set estimate very slow; the second difficulty lies in the fact that they involve the computation of the inverse or the pseudoinverse of a matrix at each variation of the active set. Although there exist many methods to do that efficiently when the matrix rank is much lower than the size of the data, this condition cannot be ensured in general. Incremental strategies that reduce the cost of computing the inverse of a matrix when the inverse of a slightly different matrix is known are limited by round-off error accumulation in an iterative setting. The results of this study suggest that, to be able to treat very large-scale problems (up to a million points), further research should focus on finding a way to exploit classical multi-scale strategies and to compute an approximate solution through a penalization or splitting method without involving any matrix inverse calculation.
\begin{table*}
\caption{Qualitative comparison of all algorithms}
\label{tab:1}
\begin{tabular}{llllll}
\hline\noalign{\smallskip}
Algorithm & Type & Formulation & Initialization & Limitation \\
\noalign{\smallskip}\hline\noalign{\smallskip}
Hildreth & dual & \ref{eq:dualCQP} & not possible & convergence\\
Dykstra & primal-dual & \ref{eq:primalCQP} & constrained & convergence \\
ADMM & dual & \ref{eq:admm} & not possible & convergence\\
LSPS & primal & \ref{eq:lsps} & constrained & convergence \\
Uzawa & dual & \ref{eq:uzawa} & not possible & convergence \\
MPDB & primal-dual & \ref{eq:mpdb} & constrained & slow or unstable \\
Meyer & primal-dual & \ref{eq:primalCQP} & constrained and not robust & slow or unstable \\
Active Index & dual & \ref{eq:activeindex} & not possible & slow or unstable \\
\noalign{\smallskip}\hline
\end{tabular}
\end{table*}
\par
\clearpage
\newpage
\section*{Appendix}
\subsubsection{Signals used for comparative evaluations}
$S_1(z) = $$\left\lbrace
\begin{array}{lll}
2n\sin\left(\frac{24}{5n}z\right) \quad \quad z = 1,...,\frac{n}{3}\\
\alpha + \beta z \quad \quad z = \frac{n}{3}+1,...,\frac{2n}{3} \\
\gamma z^3 + \delta \quad \quad z = \frac{2n}{3}+1,...,n
\end{array}\right.$
\vspace{0.1cm}
where $\beta = \frac{1}{10}$, $\alpha = 2n \sin(\frac{8}{5}) - \beta n$, $\gamma = \frac{-2}{n^2}$ and $\delta = \alpha + \beta \frac{2n}{3} - \gamma \frac{8n^3}{27}$.
\vspace{0.3cm}
$S_2(x) = \mathcal{N}(\mu,\,\sigma^2)$
\vspace{0.3cm}
$S_3(z) = \mathrm{sinc}(\frac{6}{n}z-1)$
\begin{figure}[!h]
\begin{tabular}{llllcc}
\includegraphics[width=48mm]{concave.png}&
\includegraphics[width=48mm]{concave_convexe.png}\\
\includegraphics[width=48mm]{concave_0_01.png}&
\includegraphics[width=48mm]{concave_convexe_0_01.png}\\
\includegraphics[width=48mm]{concave_0_1.png}&
\includegraphics[width=48mm]{concave_convexe_0_1.png}\\
\includegraphics[width=48mm]{concave_0_5.png}&
\includegraphics[width=48mm]{concave_convexe_0_5.png}
\end{tabular}
\caption{Signals $S_1$ (top left) and $S_3$ (top right), to which white noise with three different standard deviations ($\sigma = 0.01$, $\sigma = 0.1$, and $\sigma = 0.5$) has been added.}
\label{signals}
\end{figure}
\clearpage
\newpage
\subsubsection{Comparative evaluations on signals of size 50}
\begin{figure*}[!ht]
\begin{tabular}{lllcc}
\includegraphics[width=70mm]{finite_50_S1_0_01_L2.png}&
\includegraphics[width=70mm]{asymptotic_50_S1_0_01_L2.png}\\
\includegraphics[width=70mm]{finite_50_S1_0_1_L2.png}&
\includegraphics[width=70mm]{asymptotic_50_S1_0_1_L2.png}\\
\includegraphics[width=70mm]{finite_50_S1_0_5_L2.png}&
\includegraphics[width=70mm]{asymptotic_50_S1_0_5_L2.png}
\end{tabular}
\caption{Distance to the solution ($L_2$ norm) for a signal of type $S_1$ of size $50$. From top to bottom, three increasing levels of noise have been added. (Left) Algorithms with finite-time convergence: all of them converge almost instantaneously. (Right) Algorithms with asymptotic convergence: ADMM is the most robust to noise. LSPS and Dykstra use a parametrization in the primal parameter space, which causes numerical round-off errors to accumulate, so that the fitted values do not satisfy the constraints of the dual problem and convergence is not attained.}
\label{size50L2S1}
\end{figure*}
\newpage
\begin{figure*}
\begin{tabular}{llllcc}
\includegraphics[width=70mm]{finite_50_S2_0_01_L2.png}&
\includegraphics[width=70mm]{asymptotic_50_S2_0_01_L2.png}\\
\includegraphics[width=70mm]{finite_50_S2_0_1_L2.png}&
\includegraphics[width=70mm]{asymptotic_50_S2_0_1_L2.png}\\
\includegraphics[width=70mm]{finite_50_S2_0_5_L2.png}&
\includegraphics[width=70mm]{asymptotic_50_S2_0_5_L2.png}
\end{tabular}
\caption{Distance to the solution ($L_2$ norm) for a signal of type $S_2$ of size $50$. From top to bottom, three increasing levels of noise have been added. (Left) Algorithms with finite-time convergence: all of them converge almost instantaneously. (Right) Algorithms with asymptotic convergence: both ADMM and Hildreth attain the solution when no noise is added, but ADMM is much more robust to noise.}
\label{size50L2S2}
\end{figure*}
\newpage
\begin{figure*}
\begin{tabular}{llllcc}
\includegraphics[width=70mm]{finite_50_S3_0_01_L2.png}&
\includegraphics[width=70mm]{asymptotic_50_S3_0_01_L2.png}\\
\includegraphics[width=70mm]{finite_50_S3_0_1_L2.png}&
\includegraphics[width=70mm]{asymptotic_50_S3_0_1_L2.png}\\
\includegraphics[width=70mm]{finite_50_S3_0_5_L2.png}&
\includegraphics[width=70mm]{asymptotic_50_S3_0_5_L2.png}
\end{tabular}
\caption{Distance to the solution ($L_2$ norm) for Gaussian white noise signals of size $50$. (Left) Algorithms with finite-time convergence: all of them converge almost instantaneously. (Right) Algorithms with asymptotic convergence: ADMM is the only algorithm that converges.}
\label{size50L2S3}
\end{figure*}
\newpage
\onecolumn
\subsubsection{Comparative evaluations on signals of size 500}
\begin{figure*}[!ht]
\begin{tabular}{lllcc}
\includegraphics[width=70mm]{finite_500_S1_0_01_L2.png}&
\includegraphics[width=70mm]{asymptotic_500_S1_0_01_L2.png}\\
\includegraphics[width=70mm]{finite_500_S1_0_1_L2.png}&
\includegraphics[width=70mm]{asymptotic_500_S1_0_1_L2.png}\\
\includegraphics[width=70mm]{finite_500_S1_0_5_L2.png}&
\includegraphics[width=70mm]{asymptotic_500_S1_0_5_L2.png}
\end{tabular}
\caption{Distance to the solution ($L_2$ norm) for a signal of type $S_1$ of size $500$. From top to bottom, three increasing levels of noise have been added. (Left) Algorithms with finite-time convergence: Meyer's algorithm is very sensitive to the initialization. The best performance is achieved by MPDB with a PAV-based approximate initialization. (Right) Algorithms with asymptotic convergence: ADMM converges only when the signal is slightly noisy.}
\label{size500L2S1}
\end{figure*}
\newpage
\begin{figure*}
\begin{tabular}{llllcc}
\includegraphics[width=70mm]{finite_500_S2_0_01_L2.png}&
\includegraphics[width=70mm]{asymptotic_500_S2_0_01_L2.png}\\
\includegraphics[width=70mm]{finite_500_S2_0_1_L2.png}&
\includegraphics[width=70mm]{asymptotic_500_S2_0_1_L2.png}\\
\includegraphics[width=70mm]{finite_500_S2_0_5_L2.png}&
\includegraphics[width=70mm]{asymptotic_500_S2_0_5_L2.png}
\end{tabular}
\caption{Distance to the solution ($L_2$ norm) for a signal of type $S_2$ of size $500$. From top to bottom, three increasing levels of noise have been added. (Left) Algorithms with finite-time convergence: the best performance is achieved by MPDB with a PAV-inspired initialization. (Right) Algorithms with asymptotic convergence: no algorithm converges.}
\label{size500L2S2}
\end{figure*}
\newpage
\begin{figure*}
\begin{tabular}{llllcc}
\includegraphics[width=70mm]{finite_500_S3_0_01_L2.png}&
\includegraphics[width=70mm]{asymptotic_500_S3_0_01_L2.png}\\
\includegraphics[width=70mm]{finite_500_S3_0_1_L2.png}&
\includegraphics[width=70mm]{asymptotic_500_S3_0_1_L2.png}\\
\includegraphics[width=70mm]{finite_500_S3_0_5_L2.png}&
\includegraphics[width=70mm]{asymptotic_500_S3_0_5_L2.png}
\end{tabular}
\caption{Distance to the solution ($L_2$ norm) for a Gaussian white noise signal of size $500$. (Left) Algorithms with finite-time convergence: the best performance is achieved by MPDB with a PAV-inspired initialization. (Right) Algorithms with asymptotic convergence: no algorithm converges.}
\label{size500L2S3}
\end{figure*}
\newpage
\onecolumn
\subsubsection{Comparative evaluations on signals of size 1000}
\begin{figure*}[!ht]
\begin{tabular}{lllcc}
\includegraphics[width=70mm]{finite_1000_S1_0_01_L2.png}&
\includegraphics[width=70mm]{asymptotic_1000_S1_0_01_L2.png}\\
\includegraphics[width=70mm]{finite_1000_S1_0_1_L2.png}&
\includegraphics[width=70mm]{asymptotic_1000_S1_0_1_L2.png}\\
\includegraphics[width=70mm]{finite_1000_S1_0_5_L2.png}&
\includegraphics[width=70mm]{asymptotic_1000_S1_0_5_L2.png}
\end{tabular}
\caption{Distance to the solution ($L_2$ norm) for a signal of type $S_1$ of size $1000$. From top to bottom, three increasing levels of noise have been added. (Left) Algorithms with finite-time convergence: the best performance is achieved by MPDB with a PAV-inspired initialization. (Right) Algorithms with asymptotic convergence: no algorithm converges.}
\label{size1000L2S1}
\end{figure*}
\newpage
\begin{figure*}
\begin{tabular}{llllcc}
\includegraphics[width=70mm]{finite_1000_S2_0_01_L2.png}&
\includegraphics[width=70mm]{asymptotic_1000_S2_0_01_L2.png}\\
\includegraphics[width=70mm]{finite_1000_S2_0_1_L2.png}&
\includegraphics[width=70mm]{asymptotic_1000_S2_0_1_L2.png}\\
\includegraphics[width=70mm]{finite_1000_S2_0_5_L2.png}&
\includegraphics[width=70mm]{asymptotic_1000_S2_0_5_L2.png}
\end{tabular}
\caption{Distance to the solution ($L_2$ norm) for a signal of type $S_2$ of size $1000$. From top to bottom, three increasing levels of noise have been added. (Left) Algorithms with finite-time convergence: the best performance is achieved by MPDB with a PAV-inspired initialization. (Right) Algorithms with asymptotic convergence: no algorithm converges.}
\label{size1000L2S2}
\end{figure*}
\newpage
\begin{figure*}
\begin{tabular}{llllcc}
\includegraphics[width=70mm]{finite_1000_S3_0_01_L2.png}&
\includegraphics[width=70mm]{asymptotic_1000_S3_0_01_L2.png}\\
\includegraphics[width=70mm]{finite_1000_S3_0_1_L2.png}&
\includegraphics[width=70mm]{asymptotic_1000_S3_0_1_L2.png}\\
\includegraphics[width=70mm]{finite_1000_S3_0_5_L2.png}&
\includegraphics[width=70mm]{asymptotic_1000_S3_0_5_L2.png}
\end{tabular}
\caption{Distance to the solution ($L_2$ norm) for a signal of type $S_3$ of size $1000$. (Left) Algorithms with finite-time convergence: the best performance is achieved by MPDB with a PAV-inspired initialization, when the round-off error does not dominate the calculation, and by the Meyer algorithm whose initial active set is empty. This is easy to understand since, for a pure-noise signal, a high number of degrees of freedom is expected at the solution and therefore the empty active set is close to the final active set. (Right) Algorithms with asymptotic convergence: no algorithm converges.}
\label{size1000L2S3}
\end{figure*}
\clearpage
\newpage
\lhead[\footnotesize\thepage\fancyplain{}\leftmark]{}\rhead[]{\fancyplain{}\rightmark\footnotesize\thepage}
\vskip 14pt
\noindent {\large\bf Supplementary Materials}
The online supplementary materials contain the pseudocode of the reviewed algorithms as well as their implementation in Scilab.
\par
\vskip 14pt
\noindent {\large\bf Acknowledgements}
The author would like to thank Lionel Moisan for insightful suggestions. This work has been carried out during a postdoctoral stay in the Laboratory of Applied Mathematics (MAP5, CNRS UMR 8145) at Paris Descartes University.
\par
\section{INTRODUCTION}
The traversable Lorentzian wormholes, hypothetical narrow
`bridges' or `tunnels' connecting two regions of the same universe
or two separate universes, have become a subject of considerable
interest in the last couple of years following the pioneering work by
Morris and Thorne~\cite{MT1988}. Such wormholes, which act as a
kind of `shortcut' in spacetime, are offspring of the Einstein
field equations~\cite{MT1988,Visser1995} in the hierarchy of the
black holes and white holes. The most striking property of such a
wormhole is the existence of an inevitable amount of exotic matter
around the throat. The existence of this static configuration
requires violation of the null energy condition
(NEC)~\cite{Hochberg1997,Ida1999,Visser2003,Fewster2005,Kuhfittig2006,Rahaman2006,Jamil2010}.
This implies that the matter supporting the wormholes is exotic.
As the violation of the energy condition is particularly a
problematic issue, Visser et al.~\cite{Visser2003} have shown that
wormhole spacetimes can be constructed with arbitrarily small
violation of the averaged null energy condition. It is noted that
most of the wormhole solutions have been devoted to study static
configurations that must satisfy some specific properties in order
to be traversable. However, one can study the wormhole
configurations such as dynamical
wormholes~\cite{Hochberg1998,Hayward1999}, wormholes with
cosmological constant $\Lambda$~\cite{Lemos2003,Rahaman2007},
rotating wormholes~\cite{Teo1998,Kuhfittig2003} etc. to obtain a
panoramic representation of different physical aspects of the
wormhole structures.
Scientists have been trying to describe the wormhole structure in
two ways: either by modifying the Einstein theory or the matter
distribution. In this paper, however, we shall study the
wormhole solution in the context of Finsler~\cite{Bao2000}
geometry, which is one of the alternatives to general
relativity. It includes Riemann geometry as a special
case, where the four-velocity vector is treated as an independent
variable. As a historical anecdote we note that Cartan~\cite{Car}
initiated the self-consistent Finsler geometry model in 1935.
Thereafter, the Einstein-Finsler equations for the Cartan
$d$-connection were introduced in 1950~\cite{hor}. As a
consequence of that various models of the Finsler geometry in
certain applications of physics were
studied~\cite{vac1,vac2,Bao2015}. Though in some of these cases
Finsler pseudo-Riemannian configurations were considered,
investigators were unable to obtain any exact solution. In the
beginning of 1996, Vacaru~\cite{vac3,vac4} constructed
relativistic models of the Finsler gravity in a self-consistent
manner. He derived Finsler gravity and locally anisotropic spinors
in the low energy limits of superstring/supergravity theories with
$N$-connection structure, the velocity-type coordinates being
treated as extra-dimensional ones. Vacaru and his
group~\cite{Vacaru2012,Stavrinos2013,Rajpoot2015} explained the
so-called anholonomic frame deformation method (AFDM) by using the
Finsler geometry methods, which allows one to construct generic
off-diagonal exact solutions in various modified gravity theories.
In this direction, numerous classes of exact solutions for the
Finsler modifications of black hole, black ellipsoid/torus/brane
and string configurations, locally anisotropic cosmological
solutions have been developed for the so-called canonical
$d$-connection and Cartan $d$-connections. Therefore, it is seen
that in recent years the Finsler geometry has drawn much attention
due to its potentiality to explain various issues that can not be
explained by the Einsteinian gravity. It has been argued that
cosmic acceleration can be explained in the context of the Finsler
geometry without invoking any dark matter~\cite{Chang2008} or dark
energy~\cite{Chang2009}. Very recently Chang et al.~\cite{Chang}
have studied the kinematics and causal structure in the Finsler
spacetime and the study reveals the superluminal phenomena of
neutrinos. Pfeifer and Wohlfarth~\cite{Pfeifer} have obtained an
action for the Finsler gravity by including the description of
matter fields which are coupled to the Finsler spacetime from the
first principles. An exact vacuum solution for the Finsler
spacetime was found by Li and Chang~\cite{Li2014}. They showed
that the Finslerian covariant derivative is conserved for the
geometrical part of the gravitational field equation.
Inspired by our previous work~\cite{FR} on compact stars in the
context of Finslerian spacetime geometry, we obtain exact wormhole
solutions in this paper. We assume some definite forms of wormhole
structures and try to find out the matter distributions that
reproduce them. We thus consider specific shape functions and
impose restricted choices of redshift functions for the solutions.
We study the sensitivity of our solutions with respect to the
parameters defining the shape functions. Besides, we also consider
specific energy density and dark energy equation of state,
$p_r=\omega\rho$. The sensitivities of our results for $\omega <
-1$ have also been studied. We find interesting results.
The paper is organized as follows: in Sect. 2 we discuss the
basic equations based on the formalism of the Finslerian geometry.
Sect. 3 provides several models of the wormhole. We have also
analyzed the models in Sect. 3. The paper ends with a short
discussion in Sect. 4.
\section{Basic equations based on the Formalism of Finsler geometry}
To search for a wormhole structure one needs to introduce the metric. Let
us consider a Finsler structure of the form~\cite{Li2014}
\begin{equation}
F^2=B(r)y^ty^t-A(r)y^ry^r-r^2\bar{F}^2(\theta,\varphi,y^\theta,y^\varphi).
\end{equation}
In this study, we consider $\bar{F^2}$ in the following form
\begin{equation}
\bar{F^2}=y^\theta y^\theta+f(\theta,\phi)y^\phi y^\phi.
\end{equation}
Thus \[ \bar{g}_{ij}=diag(1,f(\theta,\phi)),~~~~~~~and~~~~~ \bar{g}^{ij}=diag(1,\frac{1}{f(\theta,\phi)});~~~~~[i,j= \theta , \phi].\]
It is easy to calculate the geodesic spray coefficients $ \left[G^\mu = \frac{1}{4} g^{\mu \nu}
\left( \frac{\partial^2 F^2}{\partial x^\lambda \partial y^\nu} y^\lambda - \frac{\partial F^2}{\partial x^\nu} \right) \right] $ from $\bar{F^2}$ as
\[ \bar{G}^\theta=-\frac{1}{4}\frac{\partial f}{\partial \theta}y^\phi y^\phi,\]
\[ \bar{G}^\phi=\frac{1}{4f}\left(2\frac{\partial f}{\partial
\theta}y^\phi y^\theta+\frac{\partial f}{\partial
\phi}y^\phi y^\phi\right).\]
These yield Ricci scalar $\left(Ric\equiv R^\mu_\mu=\frac{1}{F^2}\left(2\frac{\partial
G^\mu}{\partial x^\mu}-y^\lambda \frac{\partial^2G^\mu}{\partial
x^\lambda\partial y^\mu}+2G^\lambda\frac{\partial^2G^\mu}{\partial
y^\lambda\partial y^\mu}-\frac{\partial G^\mu}{\partial
y^\lambda}\frac{\partial G^\lambda}{\partial y^\mu}\right)\right)$ in Finsler geometry
\[ \bar{F}^2\bar{Ric}=y^\phi y^\phi \left[-\frac{1}{2}\frac{\partial^2
f}{\partial \theta^2} +\frac{1}{2f}\frac{\partial^2 f}{\partial
\phi^2}-\frac{1}{2}\frac{\partial}{\partial
\phi}\left(\frac{1}{f} \frac{\partial f}{\partial \phi}\right) -
\frac{1}{2f^2}\left(\frac{\partial f}{\partial \phi}\right)^2
-\frac{1}{4f} \left(\frac{\partial f}{\partial
\theta}\right)^2+\frac{1}{4f}\frac{\partial f}{\partial
\phi}\frac{1}{f}\frac{\partial f}{\partial \phi}+\frac{\partial
f}{\partial \theta}\frac{1}{2f}\frac{\partial f}{\partial
\theta}-\frac{1}{4f^2}\left(\frac{\partial f}{\partial
\phi}\right)^2\right]\]
\begin{equation}+y^\theta
y^\theta\left[-\frac{1}{2}\frac{\partial}{\partial
\theta}\left(\frac{1}{f} \frac{\partial f}{\partial
\theta}\right)-\frac{1}{4f^2}\left(\frac{\partial f}{\partial
\theta}\right)^2\right]+ y^\phi
y^\theta\left[\frac{1}{f}\frac{\partial^2f}{\partial \theta
\partial \phi}-\frac{1}{f^2}\left(\frac{\partial f}{\partial
\theta}\right)\left(\frac{\partial f}{\partial
\phi}\right)-\frac{1}{2}\frac{\partial}{\partial
\theta}\left(\frac{1}{f} \frac{\partial f}{\partial
\phi}\right)-\frac{1}{2}\frac{\partial}{\partial
\phi}\left(\frac{1}{f} \frac{\partial f}{\partial
\theta}\right)\right].
\end{equation}
Note that the coefficient of $y^\phi y^\theta$ vanishes iff $f$ is independent of $\phi$, i.e.
\begin{equation}
f(\theta, \phi) =f(\theta),
\end{equation}
while the coefficients of $y^\theta y^\theta $ and $y^\phi y^\phi $ are non-zero.
Now, using Eq. (4) in Eq. (3), we get
\[ \bar{F}^2\bar{Ric}=\left[-\frac{1}{2f}\frac{\partial^2
f}{\partial \theta^2} +\frac{1}{4f^2}\left(\frac{\partial f}{\partial
\theta}\right)^2\right] (y^\theta y^\theta+f y^\phi y^\phi).\]
Thus we obtain $\bar{Ric}$ as
\begin{equation}
\bar{Ric}= -\frac{1}{2f}\frac{\partial^2
f}{\partial \theta^2} +\frac{1}{4f^2}\left(\frac{\partial f}{\partial
\theta}\right)^2,
\end{equation}
which may be a constant or a function of $\theta$.
For constant value, say $\lambda$, one can get the Finsler
structure $\bar{F^2}$ as expressed in Eq. (2) in the following
categories:
\[\bar{F}^2 = y^\theta y^\theta + A \sin^2(\sqrt{\lambda} \theta )y^\phi y^\phi,~(for~ \lambda > 0); \]
\[ = y^\theta y^\theta + A \theta^2 y^\phi y^\phi,~(for~ \lambda = 0); \]
\begin{equation}
= y^\theta y^\theta + A \sinh^2(\sqrt{-\lambda} \theta )y^\phi y^\phi,~(for~ \lambda < 0).
\end{equation}
Without any loss of generality one can take $A$ as unity.
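The three cases in Eq. (6) can be verified symbolically: substituting each $f(\theta)$ into Eq. (5) should return the corresponding constant flag curvature, namely $\lambda$, $0$, and (with $\sqrt{\lambda}\to\sqrt{-\lambda}$) $\lambda<0$. The sympy check below is ours and is not part of the original derivation.

```python
import sympy as sp

theta = sp.symbols('theta', positive=True)
lam = sp.symbols('lambda', positive=True)

def ric_bar(f):
    """Eq. (5): bar{Ric} = -f''/(2 f) + (f')^2 / (4 f^2)."""
    return sp.simplify(-sp.diff(f, theta, 2) / (2 * f)
                       + sp.diff(f, theta) ** 2 / (4 * f ** 2))

print(ric_bar(sp.sin(sp.sqrt(lam) * theta) ** 2))    # expected: lambda
print(ric_bar(theta ** 2))                           # expected: 0
print(ric_bar(sp.sinh(sp.sqrt(lam) * theta) ** 2))   # expected: -lambda
```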
Now, the Finsler structure given in Eq. (1), assumes the following form
\[ {F}^2 =B(r) y^ty^t - A(r) y^ry^r -r^2 y^\theta y^\theta - r^2\sin^2 \theta y^\phi y^\phi + r^2\sin^2 \theta y^\phi y^\phi -r^2\sin^2(\sqrt{\lambda} \theta )y^\phi y^\phi.\]
That is
\begin{equation}
{F}^2 = \alpha^2 + r^2 \chi (\theta)y^\phi y^\phi,
\end{equation}
where $\chi (\theta) = \sin^2 \theta - \sin^2(\sqrt{\lambda} \theta )$ and $\alpha$ is a Riemannian metric.
Hence
\[ {F} = \alpha \sqrt{ 1 + \frac{r^2 \chi (\theta)y^\phi y^\phi}{\alpha^2}}.\]
For the choice $b_\phi= r\sqrt{ \chi (\theta)} $, we get
\begin{equation}
F = \alpha \phi(s)~~ , ~~\phi(s) = \sqrt{1+s^2},
\end{equation}
where
\[ s =\frac{(b_\phi y^\phi)}{\alpha} = \frac{\beta}{\alpha}, \]
\[ b_\mu = ( 0,0,0,b_\phi) ~,~ ~ b_\phi y^\phi = b_\mu y^\mu = \beta ~, ~ (\beta ~\mbox{is a one-form}).\]
This indicates that $F$ is the metric of $(\alpha,~\beta)$-Finsler space.
Isometric transformations of the Finsler structure~\cite{Li2014} yield the Killing equation $K_V(F) =0$ in the Finsler space as follows
\begin{equation}
\left(\phi(s)-s\frac{\partial \phi(s)}{\partial s}\right)K_V(\alpha) + \frac{\partial \phi(s)}{\partial s }K_V(\beta) =0,
\end{equation}
where
\[K_V(\alpha) = \frac{1}{2 \alpha}\left(V_{\mu \mid \nu} +V_{\nu \mid \mu} \right)y^\mu y^\nu,\]
\[K_V(\beta) = \left(V^\mu \frac{\partial b_\nu}{\partial x^\mu } +b_{ \mu}\frac{\partial V^\mu}{\partial x^\nu} \right)y^\nu.\]
Here $``\mid"$ indicates the covariant derivative with respect to the Riemannian metric $\alpha$.
In the present consideration we have
\[K_V(\alpha)+ sK_V(\beta)=0 ~~or~~ \alpha K_V(\alpha)+ \beta K_V(\beta)=0.\]
This yields
\begin{equation} K_V(\alpha)= 0 ~~and~~K_V(\beta)=0,
\end{equation}
or
\begin{equation} V_{\mu \mid \nu} +V_{\nu \mid \mu} =0,
\end{equation}
and
\begin{equation} V^\mu \frac{\partial b_\nu}{\partial x^\mu } +b_{ \mu}\frac{\partial V^\mu}{\partial x^\nu} =0.
\end{equation}
Interestingly, we note that the second Killing equation constrains
the first one (Killing equation of the Riemannian space). Hence,
it is responsible for breaking the isometric symmetry of the
Riemannian space.
Actually, the present Finsler space (for the case of $\bar{F^2}$
quadratic in $y^\theta$ and $y^\phi$) can be determined from a
Riemannian manifold $( M, g_{\mu \nu}(x))$ as we have
\[F(x,y) =\sqrt{g_{\mu \nu}(x)y^\mu y^\nu }.\]
It is to be noted that this is a semi-definite Finsler space. As a
result, we can use covariant derivative of the Riemannian space.
The Bianchi identities coincide with those of the Riemannian space
(being the covariant conservation of Einstein tensor). The present
Finsler space reduces to the Riemannian space and consequently the
gravitational field equations can be achieved. Again, following Li
et al.~\cite{LWC}, we can find the gravitational field equations
in an alternative way. They have also proved the covariant
conservation of the tensor $G^\mu_\nu$ with respect to the covariant
derivative in Finsler spacetime with the Chern-Rund connection.
It is also to be noted that the gravitational field equation in
the Finsler space is defined on the base manifold of the
Finsler space~\cite{Li2014}, and the fiber coordinates $y^i$ are
set to be the velocities of the cosmic components (velocities in
the energy momentum tensor). It is also shown by Li et
al.~\cite{Li2014} that the gravitational field equation could be
derived from the approximation of the work done by Pfeifer et
al.~\cite{Pfeifer}. The gravitational dynamics for the Finsler
spacetime in terms of an action integral on the unit tangent
bundle has been studied by Pfeifer et. al.~\cite{Pfeifer}. Again
the gravitational field equation in the Finsler space is
insensitive to the connection because $G_\nu^\mu$ are obtained
from the Ricci scalar which is, in fact, insensitive to the
connections and depend only on the Finsler structure.
Thus the gravitational field equation in the Finsler space could
be derived from the Einstein field equation in the Riemannian
spacetime with the metric (1) in which the metric $\bar{g}_{ij}$
is given by
\[\bar{g}_{ij} = diag~(~ 1~,~~ \sin^2 \sqrt{\lambda} \theta~).\]
That is
\[g_{\mu\nu }= diag~(~B,~-A,~ -r^2~,~~ -r^2\sin^2 \sqrt{\lambda} \theta~ ).\]
Here the new parameter $\lambda$ plays a significant role in the
resulting field equations in Finsler space and consequently
affects the Finsler geometric consideration of the wormhole
problem.
The Finsler structure (1) yields the geodesic spray coefficients
\begin{equation}
G^t=\frac{B'}{2B}y^ty^r
\end{equation}
\begin{equation}
G^r=\frac{A'}{4A}y^ry^r+\frac{B'}{4A}y^ty^t-\frac{r}{2A}\bar{F^2}
\end{equation}
\begin{equation}
G^\theta=\frac{1}{r}y^\theta y^r+\bar{G^\theta}
\end{equation}
\begin{equation}
G^\phi=\frac{1}{r}y^\phi y^r+\bar{G^\phi}.
\end{equation}
Here the prime indicates the derivative with respect to $r$, and
$\bar{G}^i$ are calculated from $\bar{F}^2$. Following Akbar-Zadeh
\cite{Akbar1988}, one can calculate the Ricci tensor in Finsler
geometry from $Ric$ as
\begin{equation}
Ric_{\mu\nu}=\frac{\partial^2(\frac{1}{2}F^2Ric)}{\partial
y^\mu\partial y^\nu}.
\end{equation}
Also one can define the scalar curvature in Finsler geometry as
$S=g^{\mu\nu}Ric_{\mu\nu}$ and, as a consequence, the modified
Einstein tensor in Finsler spacetime can be obtained as
\begin{equation}
G_{\mu\nu}\equiv Ric_{\mu\nu}-\frac{1}{2}g_{\mu\nu}S
\end{equation}
Considering $\bar{F}$ as a two-dimensional Finsler spacetime with
constant flag curvature $\lambda$, one can find the Einstein tensors
in Finsler geometry as
\begin{equation}
G^t_t=\frac{A'}{rA^2}-\frac{1}{r^2A}+\frac{\lambda}{r^2},
\end{equation}
\begin{equation}
G^r_r=-\frac{B'}{rAB}-\frac{1}{r^2A}+\frac{\lambda}{r^2},
\end{equation}
\begin{equation}
G^\theta_\theta=G^\phi_\phi=-\frac{B''}{2AB}-\frac{B'}{2rAB}+\frac{A'}{2rA^2}+\frac{B'}{4AB}\left(\frac{A'}{A}+\frac{B'}{B}\right).
\end{equation}
As the matter distribution required for constructing a wormhole is still a
challenging issue for physicists, we assume the
general anisotropic energy-momentum tensor~\cite{Rahaman2010} in
the form
\begin{equation}
T_\nu^\mu=(\rho + p_r)u^{\mu}u_{\nu} + p_r g^{\mu}_{\nu}+
(p_r -p_t )\eta^{\mu}\eta_{\nu},
\end{equation}
where $u^{\mu}u_{\mu} = - \eta^{\mu}\eta_{\mu} = 1$, $p_t$ and
$p_r$ are the transverse and radial pressures, respectively.
Using the above Finsler structure (1) and the energy stress tensor
(22), one can write the gravitational field equations in
Finsler geometry, $G^\mu_\nu=8\pi_F G\, T^\mu_\nu$, as
\begin{equation}
8\pi_F G \rho=\frac{A'}{rA^2}-\frac{1}{r^2A}+\frac{\lambda}{r^2},
\end{equation}
\begin{equation}
-8\pi_F Gp_r=-\frac{B'}{rAB}-\frac{1}{r^2A}+\frac{\lambda}{r^2},
\end{equation}
\begin{equation}
-8\pi_F
Gp_t=-\frac{B''}{2AB}-\frac{B'}{2rAB}+\frac{A'}{2rA^2}+\frac{B'}{4AB}\left(\frac{A'}{A}+\frac{B'}{B}\right).
\end{equation}
Note that the $Ric$ from which the field equations are derived does
not depend on the choice of connection, i.e. it is insensitive to the
connections. Secondly, the field equations can be derived from a
Lagrangian approach. One can also notice that $\lambda$, which comes
from the beta part of the Finsler fundamental function, appears in
the field equations and thus carries the Finslerian contribution. It is
worth mentioning the Cartan connection approach, which is the most
conventional one for studying gravitational field equations in the
framework of general relativity. Its significance lies in the fact that
a metrical connection ($g_{kl:m}$) preserves both the norm of a vector
and the angle between two vectors transported along
geodesics~\cite{Carroll}, which is a basic point in
the derivation of Einstein's gravitational equations. The
application of the Cartan $d$-connection, however, complicates the
solution of the gravitational field equations; although such an
approach is possible, we avoid it in this study.
To search for the wormhole solution we follow the convention given
by Morris and Thorne~\cite{MT1988} and hence write the above
equations in terms of the redshift function ($f(r)$) and shape
function ($b(r)$) by substituting $B(r) = e^{2f(r)}$ and $A(r) =
\frac{1}{1-\frac{b(r)}{r}}$. Thus the field equations (23)-(25)
take the following forms:
\begin{equation}
b' +\lambda -1 = 8\pi_F r^2G\rho,
\end{equation}
\begin{equation}
\left(1-\frac{b}{r}\right) \left(\frac{2f'}{r}+\frac{1}{r^2}\right)-\frac{\lambda}{r^2} = 8\pi_F G p_r,
\end{equation}
\begin{equation}
\left\{1-\frac{b}{r}\right\}\left\{f''+\frac{f'}{r}+{f'}^2\right\}-\left \{\frac{b'}{r}-\frac{b}{r^2}\right
\}\left\{\frac{f'}{2}+\frac{1}{2r}\right\}= 8\pi_F Gp_t.
\end{equation}
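These expressions can be checked numerically for any concrete choice of $b(r)$ and $f(r)$. The following sketch (our own illustration, not part of the source; it assumes units with $G=1$ and writes the factor $8\pi_F$ as $8\pi$) evaluates the right-hand sides of Eqs. (26)-(28) with finite-difference derivatives:

```python
import math

EIGHT_PI_F = 8 * math.pi  # stands in for the Finslerian factor 8*pi_F (assumption)

def d(fun, r, h=1e-6):
    """Central finite-difference first derivative."""
    return (fun(r + h) - fun(r - h)) / (2 * h)

def d2(fun, r, h=1e-4):
    """Central finite-difference second derivative."""
    return (fun(r + h) - 2 * fun(r) + fun(r - h)) / h**2

def stress_energy(b, f, r, lam):
    """Return (rho, p_r, p_t) from Eqs. (26)-(28) for shape b(r), redshift f(r)."""
    bp, fp, fpp = d(b, r), d(f, r), d2(f, r)
    rho = (bp + lam - 1) / (EIGHT_PI_F * r**2)
    p_r = ((1 - b(r) / r) * (2 * fp / r + 1 / r**2) - lam / r**2) / EIGHT_PI_F
    p_t = ((1 - b(r) / r) * (fpp + fp / r + fp**2)
           - (bp / r - b(r) / r**2) * (fp / 2 + 1 / (2 * r))) / EIGHT_PI_F
    return rho, p_r, p_t
```

For the particular shape and redshift functions chosen in the next section, the output of this routine agrees with the closed-form stress-energy components quoted there.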
\section{Some models for wormholes}
Einstein's general theory of relativity relates the matter distribution
to the geometry of the spacetime produced by the matter content under
consideration. Thus if we know the geometry of the spacetime, then we can find the
corresponding matter distribution and vice versa. It also has the
interesting feature that if one partly knows the geometry of the
spacetime and some components of the energy stress tensor, then one can determine the total
structure of the spacetime as well as the matter distribution through the
field equations. Therefore, in the following we discuss
several models of wormholes under different conditions.
\subsection{Specific shape function and redshift function}
In this subsection we assume some definite forms of the wormhole structure and try to find the
matter distributions that produce them.\\
\textbf{Case~1:} For the particular shape function $ b(r)=r_0(\frac{r}{r_0})^n $, where $r_0$ is the throat radius
and $n$ is an arbitrary constant, the flaring-out condition requires $n$ to be less than unity~\cite{Lobo2006}.
We shall consider two cases with different redshift functions: (i) $ f(r)= constant $, and (ii) $ f(r)=\frac{r_0}{r}$.
These two choices are justified since the redshift function $ f(r)$ must be finite for all values of $r$ to
avoid an event horizon.\\
{\it Subcase~(1a):} $f(r)= constant$\\
Using above field equations (26) - (28), we get the following stress-energy components:
\begin{equation}
\rho = \frac{n(\frac{r}{r_0})^{(n-1)}+(\lambda-1)}{8\pi_Fr^2G},
\end{equation}
\begin{equation}
p_r = \frac{-(\frac{r}{r_0})^{(n-1)}-(\lambda-1)}{8\pi_Fr^2G},
\end{equation}
\begin{equation}
p_t = -\frac{(n-1)(\frac{r}{r_0})^{(n-1)}}{16\pi_Fr^2G},
\end{equation}
\begin{equation}
\rho+p_r = \frac{(n-1)(\frac{r}{r_0})^{(n-1)}}{8\pi_Fr^2G}.
\end{equation}
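A quick numerical illustration (ours, not from the source; $G=1$ and $8\pi_F\to 8\pi$): substituting $b(r)=r_0(r/r_0)^n$ and constant $f$ directly into Eqs. (26) and (27) gives $\rho+p_r=(n-1)(r/r_0)^{n-1}/(8\pi r^2)$, which is negative for every $r>r_0$ whenever $n<1$, so the NEC is violated:

```python
import math

def nec_sum(r, r0=2.0, n=0.5, lam=0.2):
    """rho + p_r computed directly from Eqs. (26)-(27) with f = const."""
    b = r0 * (r / r0) ** n
    bp = n * (r / r0) ** (n - 1)
    rho = (bp + lam - 1) / (8 * math.pi * r**2)
    p_r = ((1 - b / r) / r**2 - lam / r**2) / (8 * math.pi)
    return rho + p_r  # lam cancels in the sum
```

Note that $\lambda$ drops out of the sum, so the NEC diagnosis is independent of the Finslerian parameter for this case.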
{\it Subcase~(1b):} $f(r)=\frac{r_0}{r}$\\
Similarly, here we find the following stress-energy components:
\begin{equation}
\rho = \frac{n(\frac{r}{r_0})^{(n-1)}+(\lambda-1) }{8\pi_Fr^2G},
\end{equation}
\begin{equation}
p_r =\frac{2(\frac{r}{r_0})^{(n-2)}-(\frac{r}{r_0})^{(n-1)}-\frac{2r_0}{r}+1-\lambda}{8\pi_Fr^2G},
\end{equation}
\begin{equation}
p_t= \frac{-\frac{n-1}{2}(\frac{r}{r_0})^{(n-1)}+\frac{r_0}{r}
+(\frac{r_0}{r})^2-(\frac{r}{r_0})^{(n-3)}+\frac{n-3}{2}(\frac{r}{r_0})^{(n-2)}}{8\pi_Fr^2G},
\end{equation}
\begin{equation}
\rho+p_r =\frac{2(\frac{r}{r_0})^{(n-2)}+(n-1)(\frac{r}{r_0})^{(n-1)}-\frac{2r_0}{r}}{8\pi_Fr^2G}.
\end{equation}
\begin{figure*}
\begin{tabular}{rl}
\includegraphics[width=8.5cm]{fig1.eps}
\end{tabular}
\caption{Plot showing $\rho+p_r <0$ for case 1. Here $n=0.0$, $0.2$, $0.5$ and $0.8$ are represented
by solid, dotted, dashed and chain curves, respectively. Thick curves represent $f(r)=r_0/r $
and thin curves represent $f(r)=$ constant. We have assumed $r_0=2$ for the throat of the wormhole.}
\end{figure*}
\textbf{Case~2:} We choose the shape function $ b(r)=r_0+\rho_0r_0^3 \ln(\frac{r_0}{r})$, where
$r_0$ is the throat radius and $\rho_0$ is an arbitrary constant; for satisfying the flare-out condition,
one has to take $\rho_0$ less than unity. As above, we shall consider two cases with
different redshift functions:\\
{\it Subcase~(2a):} $ f(r)= constant$\\
We find the unknown parameters as follows:
\begin{equation}
\rho=\frac{(\lambda-1)r-\rho_0r_0^3}{8\pi_Fr^3G},
\end{equation}
\begin{equation}
p_r=\frac{(1-\lambda)r-[r_0+\rho_0r_0^3\ln(\frac{r_0}{r})]}{8\pi_Fr^3G},
\end{equation}
\begin{equation}
p_t=\frac{r_0+\rho_0r_0^3[1+\ln(\frac{r_0}{r})]}{16\pi_Fr^3G},
\end{equation}
\begin{equation}
\rho +p_r=-\frac{r_0+\rho_0r_0^3[1+\ln(\frac{r_0}{r})]}{8\pi_Fr^3G}.
\end{equation}
\\
{\it Subcase~(2b):} $f(r)=\frac{r_0}{r}$\\
We obtain the unknown parameters as follows:
\begin{equation}
\rho=\frac{(\lambda-1)r-\rho_0r_0^3}{8\pi_Fr^3G},
\end{equation}
\begin{equation}
p_r=\frac{r(1-\lambda)-3r_0+\frac{2r_0^2}{r}+\rho_0r_0^3\ln(\frac{r_0}{r})(\frac{2r_0}{r}-1)}{8\pi_Fr^3G},
\end{equation}
\begin{equation}
p_t=\frac{\frac{3r_0}{2}-\frac{r_0^3}{r^2}-\frac{r_0^2}{2r}+\rho_0
r_0^3\left[\frac{1}{2}-\frac{r_0}{2r}+\ln(\frac{r_0}{r})\left(\frac{1}{2}-\frac{3r_0}{2r}-\frac{r_0^2}{r^2}\right)\right]}{8\pi_Fr^3G},
\end{equation}
\begin{equation}
\rho+p_r =\frac{-3r_0+\frac{2r_0^2}{r}+\rho_0r_0^3[\ln(\frac{r_0}{r})(\frac{2r_0}{r}-1)-1]}{8\pi_Fr^3G}.
\end{equation}
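As a consistency check (our own sketch, not from the source; $G=1$ and $8\pi_F\to 8\pi$), the closed form above for $\rho+p_r$ in subcase (2b) can be compared against a direct finite-difference evaluation of Eqs. (26)-(27) with $b(r)=r_0+\rho_0 r_0^3\ln(r_0/r)$ and $f(r)=r_0/r$:

```python
import math

def nec_sum_closed(r, r0=2.0, rho0=0.5):
    """Closed form of rho + p_r for subcase (2b); lam has cancelled out."""
    L = math.log(r0 / r)
    num = -3 * r0 + 2 * r0**2 / r + rho0 * r0**3 * (L * (2 * r0 / r - 1) - 1)
    return num / (8 * math.pi * r**3)

def nec_sum_direct(r, r0=2.0, rho0=0.5, lam=0.2, h=1e-6):
    """rho + p_r from Eqs. (26)-(27) using finite-difference derivatives."""
    b = lambda x: r0 + rho0 * r0**3 * math.log(r0 / x)
    f = lambda x: r0 / x
    bp = (b(r + h) - b(r - h)) / (2 * h)
    fp = (f(r + h) - f(r - h)) / (2 * h)
    rho = (bp + lam - 1) / (8 * math.pi * r**2)
    p_r = ((1 - b(r) / r) * (2 * fp / r + 1 / r**2) - lam / r**2) / (8 * math.pi)
    return rho + p_r
```

The two evaluations agree to numerical precision, and the sum is negative near the throat, consistent with the NEC violation shown in Fig. 2.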
\begin{figure*}
\begin{tabular}{rl}
\includegraphics[width=9.0cm]{fig2.eps}
\end{tabular}
\caption{
Plot showing $\rho+p_r <0$ for case 2. Here $\rho_0 = 0.0$, $0.2$, $0.5$ and $0.8$ are represented by
solid, dotted, dashed and chain curves, respectively. Thick curves represent $f(r)=r_0/r $
and thin curves represent $f(r)=$ constant. We have assumed $r_0=2$ for the throat of the wormhole.}
\end{figure*}
\textbf{Case~3:} For the shape function $ b(r)=r_0+\gamma r_0(1-\frac{r_0}{r})$, where $r_0$
is the throat radius and $\gamma$ is an arbitrary constant, the flare-out condition requires
$\gamma$ to be less than unity. We shall consider here also two cases with
different redshift functions: \\
{\it Subcase~(3a):} $f(r) = constant$\\
We obtain the unknown parameters
\begin{equation}
\rho= \frac{(\lambda-1)r+\gamma \frac{r_0^2}{r}}{8\pi_Fr^3G},
\end{equation}
\begin{equation}
p_r=\frac{(1-\lambda)r-r_0(1+\gamma )+\gamma \frac{r_0^2}{r}}{8\pi_Fr^3G},
\end{equation}
\begin{equation}
p_t=\frac{-2\gamma\frac{r_0^2}{r}+r_0(1+\gamma)}{16\pi_Fr^3G},
\end{equation}
\begin{equation}
\rho+p_r=\frac{2\gamma\frac{r_0^2}{r}-r_0(1+\gamma)}{8\pi_Fr^3G}.
\end{equation}
{\it Subcase~(3b):} $f(r)=\frac{r_0}{r}$\\
We obtain the unknown parameters
\begin{equation}
\rho= \frac{(\lambda-1)r+\gamma \frac{r_0^2}{r}}{8\pi_Fr^3G},
\end{equation}
\begin{equation}
p_r=\frac{r(1-\lambda)-3r_0+\frac{2r_0^2}{r}+\gamma
r_0(1-\frac{r_0}{r})(\frac{2r_0}{r}-1)}{8\pi_Fr^3G},
\end{equation}
\begin{equation}
p_t=\frac{\frac{3r_0}{2}-\frac{r_0^3}{r^2}-\frac{r_0^2}{2r}+\frac{\gamma
r_0^2}{r}\left[\frac{r_0}{2r}-\frac{1}{2}\right]+\gamma
r_0\left[1-\frac{r_0}{r}\right]\left[\frac{1}{2}-\frac{3r_0}{2r}-\frac{r_0^2}{r^2}\right]
}{8\pi_Fr^3G},
\end{equation}
\begin{equation}
\rho+p_r=\frac{\gamma \frac{r_0^2}{r}-3r_0+\frac{2r_0^2}{r}+\gamma
r_0(1-\frac{r_0}{r})(\frac{2r_0}{r}-1)}{8\pi_Fr^3G}.
\end{equation}
\begin{figure*}
\begin{tabular}{rl}
\includegraphics[width=9.0cm]{fig3.eps}
\end{tabular}
\caption{
Plot showing $\rho+p_r <0$ for case 3. Here $\gamma =0.0$, $0.2$, $0.5$ and $0.8$ are represented
by solid, dotted, dashed and chain curves, respectively. Thick curves represent $f(r)=r_0/r $
and thin curves represent $f(r)=$ constant. We have assumed $r_0=2$ for the throat of the wormhole.}
\end{figure*}
\subsection{Specific energy density and redshift function}
\textbf{Case~4:} For the specific energy density, $\rho = \rho _0(\frac{r_0}{r})^\alpha$, where
$r_0$, $\rho_0$ and $\alpha$ are arbitrary constants, we shall consider two cases with
different redshift functions:\\
{\it Subcase~(4a):} $f(r) = constant$\\
Using the above choices of energy density and redshift function, we obtain the shape function
$b(r)$ from field Eq. (26) as
\begin{equation}
b=c_1-\left[(\lambda-1)+\frac{8\pi_F r^2G\rho_0}{\alpha-3}\left(\frac{r_0}{r}\right)^\alpha\right]r,
\end{equation}
where $c_1 $ is an integration constant.
The radial and transverse pressures are obtained as
\begin{equation}
8\pi_F r^2Gp_r=\frac{8\pi_F r^2G\rho_0}{\alpha-3}\left(\frac{r_0}{r}\right)^\alpha-\frac{c_1}{r},
\end{equation}
\begin{equation}
16\pi_F r^2Gp_t=\left[\frac{c_1}{r}-\left( \frac{\alpha-2}{\alpha-3} \right)8\pi_F r^2G\rho_0\left(\frac{r_0}{r}\right)^\alpha\right],
\end{equation}
\begin{equation}
8\pi_F r^2G(\rho+p_r)=\left(\frac{\alpha-2}{\alpha-3}\right)8\pi_F r^2G\rho_0\left(\frac{r_0}{r}\right)^\alpha-\frac{c_1}{r}.
\end{equation}
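One can verify the integration leading to the shape function of subcase (4a) by differentiating it and substituting back into Eq. (26). A small sketch (ours, not from the source; $G=1$, $8\pi_F\to 8\pi$, parameter values purely illustrative):

```python
import math

def b_case4(r, r0=2.0, c1=1.0, lam=0.2, rho0=0.05, alpha=1.0):
    """Shape function of subcase (4a), eq. (55)."""
    K = 8 * math.pi
    return c1 - ((lam - 1) + K * r**2 * rho0 * (r0 / r) ** alpha / (alpha - 3)) * r

def residual_eq26(r, r0=2.0, c1=1.0, lam=0.2, rho0=0.05, alpha=1.0, h=1e-5):
    """b' + lam - 1 - 8*pi*r^2*rho, which should vanish for rho = rho0*(r0/r)^alpha."""
    bp = (b_case4(r + h, r0, c1, lam, rho0, alpha)
          - b_case4(r - h, r0, c1, lam, rho0, alpha)) / (2 * h)
    rho = rho0 * (r0 / r) ** alpha
    return bp + lam - 1 - 8 * math.pi * r**2 * rho
```

The residual vanishes for any $\alpha\neq 3$, confirming that the quoted $b(r)$ indeed integrates Eq. (26).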
{\it Subcase~(4b):} $ f(r)=\frac{r_0}{r}$\\
We obtain the unknown parameters
\begin{equation}
b=c_1-\left[(\lambda -1)+\frac{8\pi_F r^2G\rho_0}{\alpha-3}\left(\frac{r_0}{r}\right)^\alpha\right]r,
\end{equation}
\begin{equation}
8\pi_F r^2Gp_r=\left[\frac{8\pi_F r^2G\rho_0}{\alpha-3}\left(\frac{r_0}{r}\right)^\alpha-\frac{c_1}{r}\right]\left[1-\frac{2r_0}{r}\right]-2\lambda
\frac{r_0}{r},
\end{equation}
\begin{eqnarray}
8\pi_F r^2Gp_t=\left[\frac{8\pi_F r^2G\rho_0}{\alpha-3}\left(\frac{r_0}{r}\right)^\alpha
-\frac{c_1}{r}\right]\left[\left(\frac{r_0}{r}\right)^2+\frac{r_0}{r}\right]
+\lambda \left[\left(\frac{r_0}{r}\right)^2+\frac{r_0}{r}\right] \nonumber \\
-\left[\frac{\alpha-2}{\alpha-3}8 \pi_F r^2 G \rho_0
\left(\frac{r_0}{r}\right)^\alpha - \frac{c_1}{r}
\right]\left[\frac{1}{2}-\frac{r_0}{2r}\right],
\end{eqnarray}
\begin{equation}
8\pi_F r^2G(\rho+p_r)=\left[\frac{8\pi_F r^2G \rho_0}{\alpha-3}\left(\frac{r_0}{r}\right)^\alpha
-\frac{c_1}{r}\right] \left[1-\frac{2r_0}{r}\right]-2\lambda \frac{r_0}{r} + 8\pi_F
r^2G\rho_0\left(\frac{r_0}{r}\right)^\alpha.
\end{equation}
Note that $ b(r)$ has the same form as in the case $f=$ constant. Therefore, the plots of
$b(r)$, $b(r) - r$ and $b'(r)$ coincide with those for $f=$ constant.
\begin{figure*}
\begin{tabular}{rl}
\includegraphics[width=9.0cm]{fig4.eps}
\end{tabular}
\caption{Plot showing $\rho+p_r <0$ for {\it Case 4}. Here, $ \rho_0 = 0.001$, $0.01$, $0.05$, $0.10$ and $0.15$
are represented by black, orange, green, indigo and red colours, respectively. $\alpha $, taken to be $0.0$, $1.0$
and $2.0$, is represented by solid, dashed and chain curves, respectively. Thick curves represent $f(r)=r_0/r $
and thin curves represent $f(r)=$ constant. We have assumed $r_0=2$ and $c_1=1$.}
\end{figure*}
\begin{figure*}
\begin{tabular}{rl}
\includegraphics[width=8.0cm]{fig5.eps}
\end{tabular}
\caption{Plots (upper, middle and lower panels, respectively) showing the behavior of the shape function, the radius of the
throat (where $b-r$ cuts the $r$ axis) and the nature of the derivative of the shape function for {\it Case 4}. Here,
$\rho_0 = 0.001$, $0.01$, $0.05$, $0.10$ and $0.15$ are represented by black, orange, green, indigo and red colours,
respectively. $\alpha $, taken to be $0.0$, $0.5$, $1.0$, $1.5$ and $2.0$, is represented by solid, dotted,
dashed, dot-dashed and chain curves, respectively. Thick curves represent $f(r)=r_0/r $ and thin curves represent
$f(r)=$ constant. We have assumed $r_0=2$ and $c_1=1$.}
\end{figure*}
\textbf{Case~5:} For the dark energy equation of state $p_r=\omega \rho$, $\omega<-1$, we shall consider, as above,
two cases with different redshift functions:\\
{\it Subcase~(5a):} $f(r)=$constant\\
Using the above choices of energy density and redshift function, we obtain the following parameters
\begin{equation}
b=(1-\lambda)r+r_0\left(\frac{r_0}{r}\right)^{(\frac{1}{\omega})},
\end{equation}
\begin{equation}
8\pi_F r^2G \rho=-\frac{1}{\omega}\left(\frac{r_0}{r}\right)^{(\frac{1}{\omega}+1)},
\end{equation}
\begin{equation}
8\pi_F r^2Gp_r=-\left(\frac{r_0}{r}\right)^{(\frac{1}{\omega}+1)},
\end{equation}
\begin{equation}
8\pi_Fr^2Gp_t=\frac{1}{2}\left(\frac{1}{\omega}+1\right)\left(\frac{r_0}{r}\right)^{(\frac{1}{\omega}+1)},
\end{equation}
where $r_0 ^{(\frac{1}{\omega}+1)}$ is an integration constant.\\
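The closed forms of subcase (5a) can be checked directly (our own sketch, not from the source; $G=1$ and $8\pi_F\to 8\pi$): the ratio $p_r/\rho$ reproduces $\omega$ exactly, and the derivative of $b(r)$ satisfies Eq. (26):

```python
import math

def b_5a(r, r0=2.0, lam=0.2, omega=-2.0):
    """Shape function of subcase (5a), eq. (63)."""
    return (1 - lam) * r + r0 * (r0 / r) ** (1.0 / omega)

def rho_5a(r, r0=2.0, omega=-2.0):
    """Energy density of subcase (5a), eq. (64)."""
    return -(1.0 / omega) * (r0 / r) ** (1.0 / omega + 1) / (8 * math.pi * r**2)

def p_r_5a(r, r0=2.0, omega=-2.0):
    """Radial pressure of subcase (5a), eq. (65)."""
    return -((r0 / r) ** (1.0 / omega + 1)) / (8 * math.pi * r**2)
```

Because $p_r/\rho=\omega<-1$ with $\rho>0$, the sum $\rho+p_r=\rho(1+\omega)$ is negative by construction, so the NEC violation required for a wormhole holds automatically here.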
\begin{figure*}
\begin{tabular}{rl}
\includegraphics[width=9.0cm]{fig6.eps}
\end{tabular}
\caption{Plots showing the behavior of the shape function, the radius of the throat (where $b-r$ cuts the $r$ axis)
and the nature of the derivative of the shape function for {\it Case 5a}. Here, $\lambda= 0.2$, $0.4$ and $0.6$ are
represented by black, orange and green colors. Solid, dotted, dashed, dot-dashed and chain curves represent
$\omega=-3.0$, $-2.5$, $-2.0$, $-1.5$, $-1.1$, respectively. We have assumed $r_0=2$ for the throat of the wormhole.}
\end{figure*}
{\it Subcase~(5b):} $f(r)=\frac{r_0}{r}$\\
We obtain the unknown parameters as follows:
\begin{equation}
b=(1-\lambda)r+\left[\frac{4\lambda r_0^2}{\omega r} \right]
\left[ \left(-\frac{2r_0}{\omega r} \right)^{\left(
\frac{1}{\omega}-1\right)}\exp\left(-\frac{2r_0}{\omega r
}\right)\left\{\Gamma \left(1-\frac{1}{\omega}\right)-\Gamma
\left(1-\frac{1}{\omega},-\frac{2r_0}{\omega
r}\right)\right\}-\frac{\omega r}{2r_0} \right]+c_3,
\end{equation}
\begin{equation}
8\pi_Fr^2G\omega
\rho=(1-\lambda)+\left(\frac{2br_0}{r^2}-\frac{2r_0}{r}\right)-\frac{b}{r},
\end{equation}
\begin{equation}
8\pi_Fr^2Gp_r=(1-\lambda)+\left(\frac{2br_0}{r^2}-\frac{2r_0}{r}\right)-\frac{b}{r},
\end{equation}
\begin{eqnarray}
8\pi_Fr^2Gp_t=\left[1-\frac{b}{r}\right]\left[\frac{r^2_0}{r^2}+\frac{r_0}{r}\right]-\left[
(1-\lambda)\left(1+\frac{1}{\omega}\right)+\frac{1}{\omega}\left\{\frac{2br_0}{r^2}-
\frac{2r_0}{r}\right\}-\left(1+\frac{1}{\omega}\right)\frac{b}{r}
\right] \nonumber \\ \times \left[\frac{1}{2}-\frac{r_0}{2r}\right],
\end{eqnarray}
where $c_3 $ is an integration constant.
\begin{figure*}
\begin{tabular}{rl}
\includegraphics[width=9.0cm]{fig7.eps}
\end{tabular}
\caption{Plots showing the behavior of the shape function, the radius of the throat (where $b-r$ cuts the $r$ axis)
and the nature of the derivative of the shape function for {\it Case 5b}. Here $\lambda= 0.2$, $0.4$ and $0.6$ are
represented by black, orange and green colors. Solid, dotted, dashed, dot-dashed and chain curves represent
$\omega=-3.0$, $-2.5$, $-2.0$, $-1.5$, $-1.1$, respectively. We have assumed $r_0=2$ and $c_3=3$.}
\end{figure*}
\section{Discussion and Conclusion}
A survey of the recent literature shows that Finsler geometry has
attracted much attention due to its potential to explain
various issues, especially cosmic acceleration, which can be
explained without invoking dark matter~\cite{Chang2008} or dark
energy~\cite{Chang2009}. In the context of GR, the violation of the
NEC (the matter responsible is often called {\it exotic matter}) is a
basic ingredient of static traversable wormholes, although the
violation of energy conditions is quite acceptable for certain quantum
fields, among which the Casimir effect and Hawking evaporation are
mentionable. The present work may be looked upon as a possible route
to construct theoretically traversable wormholes in the context of
Finsler geometry. To this end, we derived the gravitational field
equations in the Finslerian spacetime, starting from the Einstein
field equations in the Riemannian spacetime, for a matter
distribution that is anisotropic in nature. We obtained our
solutions in the conventional way, following the Morris--Thorne
wormhole prescription, and focused our attention mainly on the
violation of the null energy condition (NEC) and the constraints on
the wormhole geometry.
In the present work we obtain exact solutions by imposing
restricted choices of the redshift function and the shape function
and/or by specifying an equation of state. Some of the important
features of the present investigation can be formulated as
follows:
(i) In the first three cases we consider various choices for the
shape function, namely, $ b(r)=r_0(\frac{r}{r_0})^n $, $
b(r)=r_0+\rho_0 r_0^3 \ln(\frac{r_0}{r})$ and $b(r)=r_0+\gamma
r_0(1-\frac{r_0}{r})$, and we have analyzed the solutions by
taking the redshift function to be either constant or
a specific function of the radial coordinate.
(ii) In the next two cases we have found the shape functions
for a specific form of the energy density, namely, $\rho = \rho _0(\frac{r_0}{r})^\alpha$,
and for the dark energy equation of state $p_r=\omega \rho$, $\omega<-1$.
Knowing all the metric potentials $f(r)$, $b(r)$ and the
stress-energy components $\rho$, $p_r$ and $p_t$, we examine
whether the results indeed give wormhole structures. To obtain
a wormhole, the following properties must be satisfied:\\
1) The \emph{redshift function}, $f(r)$, should remain finite everywhere to
prevent an event horizon.
2) The \emph{shape function}, $b(r)$, should obey the following flare-out conditions at the
throat $r = r_0$: $b(r_0) = r_0$ and $b^\prime(r_0) < 1$, $r_0$ being the throat radius.
3) Another condition that needs to be satisfied is $b(r)/r < 1$ for $r >r_0$.
4) The NEC must be violated for a traversable wormhole, i.e. $p_r
+\rho < 0 $.
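The four criteria above lend themselves to a direct numerical test. The sketch below (our own illustration, not part of the source; $G=1$, $8\pi_F\to 8\pi$, function and parameter names are ours) checks them for the case-1 shape function with constant redshift:

```python
import math

def wormhole_criteria(r0=2.0, n=0.5, lam=0.2, r_max=20.0, steps=400):
    """Check the four wormhole criteria for b(r) = r0*(r/r0)**n, f = const."""
    b = lambda r: r0 * (r / r0) ** n
    bp = lambda r: n * (r / r0) ** (n - 1)
    ok_throat = abs(b(r0) - r0) < 1e-12          # b(r0) = r0
    ok_flare = bp(r0) < 1                        # b'(r0) < 1
    rs = [r0 + (r_max - r0) * k / steps for k in range(1, steps + 1)]
    ok_interior = all(b(r) / r < 1 for r in rs)  # b(r)/r < 1 for r > r0
    # NEC sum from Eqs. (26)-(27) with f = const; lam cancels in the sum
    nec = lambda r: (bp(r) - b(r) / r) / (8 * math.pi * r**2)
    ok_nec_violated = all(nec(r) < 0 for r in rs)
    return ok_throat, ok_flare, ok_interior, ok_nec_violated
```

For any $0\le n<1$ all four flags come out true, in line with the discussion of models 1-3 below.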
The first three conditions constrain the geometry of the spacetime, and the
last one the matter distribution that produces this spacetime.
We have verified whether our models satisfy all the
criteria to represent wormhole structures, as follows:
In models 1-3, we have assumed that the spacetime produces
wormholes and searched for the matter distributions which
produce these features. We have found the components of the
energy-momentum tensors, and Figs. 1-3 indicate that the matter
distributions violate the NEC. Note that violation of the NEC is one
of the important criteria to hold a wormhole open. Thus the first
three models are physically valid. On the other hand, in models 4
and 5, we have found the shape functions for a specific form
of the energy density and for the dark energy equation of state. For model 4,
one can note that the shape function $b(r)$ assumes the same form
for a constant or a specific redshift function. In Fig. 5, we have
vividly depicted different characteristics of the shape function.
We observe that the existence of the throat depends on the choices of
the parameters. The radius of the throat is located where $b(r) - r$
cuts the $r$ axis. This figure also indicates that the flaring-out
condition is satisfied at the throat, i.e. $b'(r_0)<1$. Thus, in
model 4, the above four conditions are satisfied for the
development of a wormhole structure.
We have also analyzed, in model 5, different characteristics
of the shape functions for different redshift functions in Figs. 6
and 7. These figures satisfy all the geometric criteria of the
wormhole structure. In this case we have assumed the dark energy
equation of state $p_r=\omega \rho$, $\omega<-1$, and hence the
NEC-violating condition, $p_r +\rho < 0$, is automatically satisfied. Thus, the
overall observation is that the present models successfully describe the
wormhole features in the background of the Finslerian
spacetime.
\subsection*{Acknowledgments}
FR, SR and AAU are thankful to IUCAA for providing Associateship
under which a part of the work was carried out. AB is also grateful
to IUCAA for providing research facilities and hospitality. FR is
thankful to DST, Govt. of India for providing financial support
under PURSE programme. Finally we are grateful to the referee for
several valuable comments and suggestions which have improved the
manuscript substantially.
\section{Introduction}
Mo$_3$Al$_2$C has been already synthesized in the 1960s \cite{Jeitschko1963}
and subsequently classified as a
superconductor.\cite{Johnston1964,Fink1965,Toth1968,Toth1971} Recently it has
attracted renewed
attention,\cite{Bauer2010,Karki2010,Bonalde2011,Koyama2011,Kuo2012} because its
cubic $\beta$-Mn type crystal structure does not contain a center of inversion.
In such noncentrosymmetric superconductors, as first discussed for
CePt$_3$Si,\cite{Bauer2004} the electron pairing process is described to
be a mixture of spin-singlet and spin-triplet states.
\cite{Gorkov2001,Sigrist} The question whether Mo$_3$Al$_2$C can be classified
as a {\em conventional} or {\em unconventional} superconductor remains still
unresolved according to very recent
investigations.\cite{Bauer2010,Karki2010,Bonalde2011}
In this work, we will not tackle this issue directly
but will provide results of extensive density functional theory (DFT) calculations on the
thermodynamical and dynamical stability as well as the electronic structure of this compound
as a function of carbon content.
Mo$_3$Al$_2$C crystallizes in the cubic $\beta$-Mn type P4$_1$32 structure
containing 24 atoms in the unit cell, namely 12 Mo, 8 Al, and 4 C atoms.
The C atoms sit at the centers of regular Mo$_6$ octahedra which are tilted
with respect to each other.\cite{Jeitschko1963,Bauer2010} The comparison of our DFT
structural parameters to experimental values \cite{Bauer2010} in Table
\ref{tab0} shows that both are in excellent agreement.
\begin{table}
\caption{\label{tab0} Structural parameters and Wyckoff positions of Mo$_3$Al$_2$C.}
\begin{center}
\begin{tabular}{ l d d}
\hline
\hline
& \multicolumn{1}{c}{\quad exp.\footnote{experimental results from Ref. \onlinecite{Bauer2010}}} &\multicolumn{1}{c}{\quad DFT\footnote{DFT calculation (present work)}} \\
\hline
lattice parameter $a$: & 6.863\,\text{\AA} & 6.890\,\text{\AA}\\
\hline
Mo on 12d \hfill y:& 0.2025(2)& 0.20183\\
Al on 8c \hfill x:& 0.068(1)& 0.06658\\
C on 4a & &\\
\hline
crystal structure: & \multicolumn{2}{l}{ \qquad\quad cubic $\beta$-Mn type} \\
space group: & \multicolumn{2}{l}{ \qquad\quad 213 or P4$_1$32} \\
\hline
\hline
\end{tabular}
\end{center}
\end{table}
\section{Computational Details}
The DFT calculations were done using the Vienna \emph{ab initio} simulation
package (VASP) \cite{Kresse1996,Kresse1999} utilizing the pseudopotential
construction according to the projector augmented wave
method.\cite{Blochl:1994p8789} For the exchange-correlation functional the
generalized gradient approximation as parametrized by Perdew, Burke, and
Ernzerhof~\cite{PBE1996} was chosen. The potential for Mo contained 9 valence
states including the 4p semicore states, whereas for Al and C three and four
valence states were included, respectively. The size of the basis set was
defined by an energy cutoff of 500 eV. The Brillouin-zone integration for the
computation of total energies was made using the tetrahedron method with
Bl\"ochl's corrections \cite{Blochl:1994p13006} based on a $13\times 13\times
13$ Monkhorst and Pack \cite{Monkhorst1976} $\vec{k}$-point mesh, whereas for
the structural relaxations and for the derivation of the force constants the
first order Methfessel-Paxton smearing method \cite{Methfessel:1989p13056} on a
$7\times 7\times 7$ $\vec{k}$-point mesh was chosen.
The vibrational properties were calculated within the harmonic approximation
by making use of the direct force-constant method as implemented in our
program package fPHON (full-symmetry PHON), which is based on the package
PHON.\cite{Alfe2009} The structural parameters, i.e., the volume and shape of
the unit cell as well as the positions of the atoms within the unit cell, were
relaxed until the residual forces were less than $10^{-4}$ eV/\AA.
Furthermore, for a high accuracy of the phonon calculations the force
constants derived from the displaced atoms were corrected by subtracting the
tiny but still finite forces of the fully relaxed structures. Some of the
phonon dispersions were cross checked by using density functional
perturbation theory~\cite{baroni2001} (DFPT) as implemented in VASP.
\section{Vibrational Properties}
\label{phonons}
It turns out that perfectly stoichiometric Mo$_3$Al$_2$C is dynamically
unstable. According to panel (a) of Fig. \ref{fig1}, optical modes with
imaginary frequencies arise around $\Gamma$. This dynamical instability seems
surprising considering the well-established crystal structure of
Mo$_3$Al$_2$C.\cite{Jeitschko1963,Johnston1964,Fink1965,Toth1968,Toth1971,Bauer2010,Karki2010,Bonalde2011,Koyama2011,Kuo2012}
One should, however, be aware that the actual carbon content is
experimentally difficult to discern by X-ray diffraction because of carbon's
comparatively small X-ray cross section,\cite{McMaster1969} and therefore
there exists some uncertainty with respect to carbon vacancies. In general, many
carbides are prone to have vacancies on the C sublattice.
Before speculating on the physical explanation of the detected instability
we undertook numerical and methodological tests. First of all, we ensured
that the results are converged with respect to the number of $\vec{k}$-points.
Furthermore,
for deriving the force
constants several atomic displacements of the atoms, i.e., $|\vec{u}|=\{0.01,
0.02,0.05, 0.1, 0.2,0.3,0.4\}~\text{\AA}$ were chosen and
calculations without any symmetry were performed. All
calculations confirm the existence of imaginary modes
for Mo$_3$Al$_2$C.
In particular, the calculation with the smallest displacement of $|\vec{u}|=0.01~\text{\AA}$
yielded imaginary optical modes with a frequency of $1.62~i~$THz at $\Gamma$.
As a further test the DFPT~\cite{baroni2001} technique as
implemented in VASP was applied also resulting in imaginary modes of $1.65~i~$THz at $\Gamma$.
From these DFPT calculations force constants have been derived
and used as input for \emph{f}PHON. Both, the direct force-constant method
and the DFPT treatment gave very similar results for the phonon
dispersions as shown in panels (a) of Fig. \ref{fig1}.
In order to study whether another similar structure exists that is dynamically
stable and energetically more favorable than the cubic $\beta$-Mn
type structure, all atoms were displaced from their equilibrium
positions. In addition, a tetragonal deformation was enforced onto the unit cell.
Using VASP this deformed structure was subsequently relaxed without any symmetry
constraints, resulting in the well-known crystal structure described
before.
\begin{figure*}
\begin{center}
\begin{tabular}{ c c}
(a) & (b)\\
\includegraphics[width = 0.385\textwidth ]{fig1a.pdf}&
\includegraphics[width = 0.585\textwidth ]{fig1b.pdf}\\
\end{tabular}
\end{center}
\caption{(Color online) Phonon dispersions up to 5 THz of
stoichiometric Mo$_3$Al$_2$C (a) and of Mo$_3$Al$_2$C$_{0.75}$ (b).
Imaginary frequencies are shown as negative values.
For Mo$_3$Al$_2$C in panel (a) the calculated phonon dispersion
derived from the direct force-constant method (black lines) is compared to
results from DFPT theory (red lines).
For Mo$_{3}$Al$_{2}$C$_{0.75}$ in panel (b) the dispersion
relation as calculated from the direct force-constants method (black lines) is presented.
It should be noted that because of the reduced symmetry due to the carbon
vacancy the dispersions along the paths
$\Lambda^*=\pm[\xi,-\xi,-\xi]$ and $\Sigma^*=\{\pm[\xi,\xi,0], \pm
[\xi,0,\xi], \pm[0,-\xi,\xi] \}$ with $0<\xi<\pi/a$ differ from the other symmetry related
$\vec{k}$-directions referred to by
$\Lambda$ and $\Sigma$, respectively.
}
\label{fig1}
\end{figure*}
Because the perfectly stoichiometric compound Mo$_3$Al$_2$C is found to be
dynamically unstable, we investigated if dynamical stabilization can be
achieved by vacancies, in particular by carbon vacancies.
In general, vacancies on all sublattices will stabilize the phonon dispersion,
at least above a certain concentration.
Vacancies on the Mo or Al
sublattice are less likely to exist since they would be easily
detectable, i.e., these atoms have comparatively large X-ray cross sections.\cite{McMaster1969}
Furthermore, our DFT derived vacancy formation energies in the following
Section~\ref{formation} strongly indicate that Mo or Al vacancies are
thermodynamically too costly to be formed.
Assuming a certain carbon vacancy concentration suitable supercell
calculations were done for the defect structures.
Panel (b) in Fig. \ref{fig1} shows the phonon dispersion for Mo$_3$Al$_2$C$_{0.75}$, i.e., for a single
carbon vacancy in the standard unit cell ($1\times 1\times 1$ supercell) with
3 out of possible 4 carbon sites occupied.
Further calculations,
of which the dispersions are not shown, were also done for a
single carbon vacancy in a $2\times 1\times 1$ supercell
(Mo$_{3}$Al$_{2}$C$_{0.875}$ with 7 out of possible 8 C
sites occupied) and in a $2\times 2\times 2$ supercell
(Mo$_{3}$Al$_{2}$C$_{0.96875}$ with 31 out of 32 C sites occupied).
Both Mo$_{3}$Al$_{2}$C$_{0.75}$
and Mo$_{3}$Al$_{2}$C$_{0.875}$ are found to be dynamically stable with no
imaginary modes, whereas
Mo$_{3}$Al$_{2}$C$_{0.96875}$ is found to be significantly unstable, with the
lowest imaginary optical mode at $\Gamma$ being $1.26~i~$THz,
quite close to that of the perfectly stoichiometric compound ($1.62~i~$THz).
In Fig. \ref{fig3} the normalized phonon density of states (DOS) of the
dynamically unstable compounds
Mo$_{3}$Al$_{2}$C$_{1}$ and Mo$_{3}$Al$_{2}$C$_{0.96875}$ are compared to
those of
the dynamically
stable compounds Mo$_{3}$Al$_{2}$C$_{0.875}$ and Mo$_{3}$Al$_{2}$C$_{0.75}$.
While at low frequencies no Debye-like $\omega^2$ behavior is observed
for Mo$_{3}$Al$_{2}$C$_1$ and Mo$_{3}$Al$_{2}$C$_{0.96875}$,
it is seen for the other two cases. For
Mo$_{3}$Al$_{2}$C$_{0.875}$ the Debye-like behavior is observed only in a
rather narrow frequency range up to $0.5~$THz
due to the softening of the acoustical and optical modes. However, for Mo$_{3}$Al$_{2}$C$_{0.75}$ the
Debye-feature reaches up to $1.4~$THz. The partial DOS
reveals that Mo, as the heaviest atomic species, dominates the lower
frequency spectrum
(up to $7.5~$THz), whereas C, being the lightest atom,
has contributions only at frequencies above $13.5~$THz.
Furthermore, the carbon dominated frequency modes
are shifted down by the introduction of vacancies from above $15~$THz for Mo$_{3}$Al$_{2}$C$_{1}$ to about
$13.5~$THz for Mo$_{3}$Al$_{2}$C$_{0.75}$.
Strikingly, in all the
DOSs a pronounced Al peak at $\approx12.2~$THz occurs.
The Al spectrum is rather broad, with significant contributions from $6~$THz to $12.2~$THz.
Even below this range a small but telling contribution is found, indicating
hybridization with Mo modes.
The results indicate, that there is a critical concentration of carbon
vacancies below which the compound becomes dynamically unstable. Assuming
that the frequency of the lowest optical mode at $\Gamma$ scales linearly with
the carbon vacancy concentration $x$, the critical carbon vacancy concentration is estimated
to be $x_\text{crit}\sim0.09$
(using the values of Mo$_{3}$Al$_{2}$C$_{1-x}$ at $x=0,$ $0.03125$, and $0.125$
as input; see Fig. \ref{fig2}).
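The linear extrapolation to $x_\text{crit}$ can be sketched as follows. This is a minimal NumPy sketch, mapping imaginary frequencies to negative values; the two imaginary frequencies are quoted in the text, while the stable-mode frequency at $x=0.125$ is an illustrative placeholder read off Fig.~\ref{fig2}, not a value quoted in the text:

```python
import numpy as np

# Lowest optical mode at Gamma vs. carbon vacancy concentration x.
# Imaginary frequencies (dynamically unstable) are mapped to negative values.
x = np.array([0.0, 0.03125, 0.125])
# -1.62 and -1.26 (i.e. 1.62i and 1.26i THz) are from the text;
# +0.70 THz at x = 0.125 is an illustrative placeholder for the stable compound.
freq = np.array([-1.62, -1.26, 0.70])

# Least-squares linear fit and its zero crossing.
slope, intercept = np.polyfit(x, freq, 1)
x_crit = -intercept / slope  # concentration where the mode frequency crosses zero
print(f"x_crit ~ {x_crit:.2f}")
```

With the placeholder value above, the zero crossing lands near the reported $x_\text{crit}\sim0.09$.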
\begin{figure*}
\begin{center}
\begin{tabular}{c c}
(a) & (b)\\
\includegraphics[width = 0.49\textwidth ]{fig3a.pdf} &
\includegraphics[width = 0.49\textwidth ]{fig3b.pdf}\\
&\\
&\\
(c) & (d)\\
\includegraphics[width = 0.49\textwidth ]{fig3c.pdf}&
\includegraphics[width = 0.49\textwidth ]{fig3d.pdf} \\
\end{tabular}
\end{center}
\caption{
(Color online) Total and partial phonon DOS of Mo$_{3}$Al$_{2}$C$_1$ (a), Mo$_{3}$Al$_{2}$C$_{0.96875}$ (b),
Mo$_{3}$Al$_{2}$C$_{0.875}$ (c), and Mo$_{3}$Al$_{2}$C$_{0.75}$ (d):
total phonon DOS (black solid line), partial DOS of Mo (red solid line),
partial DOS of Al (blue dashed line), and partial DOS of C (green dotted
line). In the insets the total DOS at low frequencies is compared to a
Debye-like $\omega^2$ behavior (red dashed line).
}
\label{fig3}
\end{figure*}
\begin{figure}
\begin{center}
\includegraphics[width = 0.4\textwidth ]{fig2.pdf}
\end{center}
\caption{(Color online) Frequency of the lowest optical mode at $\Gamma$ versus carbon vacancy concentration $x$
as calculated (red circles) and linearly interpolated (solid black line).
The predicted critical carbon vacancy concentration is indicated (green dashed-dotted line).}
\label{fig2}
\end{figure}
The cause of the stabilization of the optical low-frequency Mo modes by
the carbon vacancies is the changed Mo-C bonding in the Mo$_6$C
subunits. We examined the relaxations occurring in Mo$_{3}$Al$_{2}$C$_{0.75}$,
i.e., when a single C atom is removed from one of the four Mo$_6$C subunits in
the unit cell of Mo$_{3}$Al$_{2}$C$_{1}$. Thereby a strong influence on the
Mo-C bonds in all three remaining Mo$_6$C subunits is observed because they
share a common Mo atom with the defect subunit. By relaxing the atomic
positions the C atom drifts into an off-center position within the remaining
Mo$_6$C subunits, increasing the average Mo-C bond length by 1.4 \% and the
corresponding octahedral volume by 3.6 \%. The distortion of the remaining
Mo$_6$C subunits seems to be the stabilizing factor for the vibrational modes.
Concomitantly we notice this distortion for the Mo-Al bonding, e.g., the two
distinctive nearest Mo-Al bonds with bond-lengths 2.84~\AA~and 2.95~\AA~in the
fully stoichiometric compound are distorted in Mo$_{3}$Al$_{2}$C$_{0.75}$ to
lengths in the range of $2.79-3.01~$\AA. Such a distortion for the Mo-Al bonds
has recently been indicated experimentally.\cite{Kuo2012}
\section{Carbon Vacancies}
\label{formation}
As discussed, a certain amount of carbon vacancies is needed to
remove the imaginary optical modes and thus dynamically stabilize the
compound. The key question is
whether the formation of vacancies is thermodynamically possible at all.
This question is investigated by calculating vacancy formation energies and
by means of a model.
Within a standard DFT approach, the vacancy formation energy
$\varepsilon^{X}_\text{vac}$ per atom $X$ is defined by
subtracting the total energy $E_\text{DFT}(\text{Mo}_{12}\text{Al}_8\text{C}_4)$ of the
stoichiometric compound from the total energy $E_\text{DFT}(\text{Mo}_{12}\text{Al}_8\text{C}_4-X)$ of
the compound with a vacancy of atom type $X$ and adding the ground-state energy
$E_\text{DFT}(X)$ of the removed atom $X$ by
\begin{eqnarray}
\label{eq:epsilon}
\varepsilon^{X}_\text{vac}&=&E_\text{DFT}(\text{Mo}_{12}\text{Al}_8\text{C}_4-X)+E_\text{DFT}(X)
\nonumber\\&&
-E_\text{DFT}(\text{Mo}_{12}\text{Al}_8\text{C}_4)~.
\end{eqnarray}
Because standard DFT calculations are strictly valid only at $T=0~$K,
no temperature dependence has been introduced yet. This can be done by
considering the temperature-dependent
vibrational free energies $F_\text{vib}$ and defining the vibrational vacancy
formation energy per $X$ atom analogously to
Equ.~\ref{eq:epsilon},\cite{reith2009}
\begin{eqnarray}
\label{eq:f}
f^{X}_\text{vac}(T)&=&F_\text{vib}(\text{Mo}_{12}\text{Al}_8\text{C}_4-X)+F_\text{vib}(X)
\nonumber\\&&
-F_\text{vib}(\text{Mo}_{12}\text{Al}_8\text{C}_4)~.
\end{eqnarray}
Both $\varepsilon^{X}_\text{vac}$ and $f^{X}_\text{vac}(T)$ are formulated for a standard unit cell. To derive results for
smaller vacancy concentrations, larger supercells are needed and the
stoichiometries in Equs.~\ref{eq:epsilon} and \ref{eq:f} have to be scaled accordingly.
The reference energies $E_\text{DFT}(X)$ and $F_\text{vib}(X)$ were derived
for the ground states of body-centered cubic Mo, face-centered cubic Al, and C in the graphene structure.
\begin{table}
\caption{\label{tab2} Vacancy formation energies in eV for Mo$_3$Al$_2$C
for $1\times 1\times 1$ and $2\times 1\times 1$ supercells
without vibrational contributions (DFT, $T=0~$K) and with $f^{X}_\text{vac}(T)$ at
$T=1523~$K and $1773~$K. }
\begin{center}
\begin{tabular}{l l c c c c }
\hline
\hline
& & Mo-vac & Al-vac & C-vac & C-vac \\
\hline
\multicolumn{2}{c}{supercell size} &1$\times$1$\times$1&1$\times$1$\times$1&1$\times$1$\times$1&2$\times$1$\times$1\\
\hline
&$\varepsilon
$ & 1.74 & 0.86 & 0.60 & $~~0.56$ \\
$T=1523~$K:&
$\varepsilon+f(T)$& 1.58 & 0.50 & 0.20 & $-0.39$ \\
$T=1773~$K:&
$\varepsilon+f(T)$& 1.51 & 0.39 & 0.08 & $-0.66$ \\
\hline
\multicolumn{4}{l}{C vacancy concentration $x$ for Mo$_3$Al$_2$C$_{1-x}$:}& 0.25 &$~~0.125$\\
\hline
\hline
\end{tabular}
\end{center}
\end{table}
\begin{figure}
\begin{center}
\includegraphics[width = 0.49\textwidth ]{fig4.pdf}
\end{center}
\caption{(Color online) Vacancy formation energy $\varepsilon +f(T)$ as the sum of the DFT
$\varepsilon$ and vibrational $f(T)$ formation energy versus temperature for a
Mo vacancy (red solid line), an Al vacancy (blue dashed
line), a C vacancy (green dotted line) in a unit cell
($1\times1\times1$ supercell), and for a C vacancy in a
$2\times1\times1$ supercell (green dashed-dotted line). }
\label{fig4}
\end{figure}
The vacancy formation energies in Table \ref{tab2} and Fig. \ref{fig4}
at $T=0~$K
are strongly positive for all types of vacancies, whereby the Mo vacancy with
its formation
energy of 1.74 eV is by far the most unfavorable one. Carbon
vacancies are the most favorable ones with a formation energy of 0.60 eV
for Mo$_{3}$Al$_{2}$C$_{0.75}$. This value is reduced by only 0.04 eV
for the smaller vacancy concentration of Mo$_{3}$Al$_{2}$C$_{0.875}$.
The experimental samples were synthesized
at $T=1773~$K and heat treated at $1523~$K.\cite{Bauer2010} Therefore, theory
needs to consider
temperature dependent vacancy formation energies
$f^{X}_\text{vac}(T)$ combined with the composition dependent configurational
entropy $S_\text{conf}(x)$~\cite{reith2009} in order to compare with experiment.
For the actual calculation of the vibrational free energy, the small number of
imaginary modes in the fully stoichiometric
compound $\text{Mo}_{3}\text{Al}_{2}\text{C}_{1}$ was omitted.
Table \ref{tab2} and Fig. \ref{fig4} show that
the vibrational contributions reduce the
strongly positive vacancy formation energies at $T=0~$K.
While this reduction is rather small for the Mo vacancy
(from $1.74~$eV to $1.51~$eV at $1773~$K),
it is much larger for the other two types of vacancies.
In particular, the formation energy of the carbon vacancy in
Mo$_{3}$Al$_{2}$C$_{0.75}$ decreases from $0.60~$eV to $0.08~$eV
at $1773~$K. Remarkably, this reduction is much larger for the smaller carbon vacancy
concentration, i.e., Mo$_{3}$Al$_{2}$C$_{0.875}$, with a decrease
by more than $1$ eV down to $-0.66~$eV.
These negative values for the formation energy indicate a possible thermodynamical
stabilization of carbon vacancies in Mo$_{3}$Al$_{2}$C$_{1-x}$.
Noticeably, this difference of the temperature dependent free energies for
different carbon vacancy concentrations comes exclusively from the vibrational
contributions $f(x,T)$, because --as mentioned above--
at $T=0~$K the vacancy formation energies are almost equal.
\begin{figure}
\begin{center}
\includegraphics[width = 0.49\textwidth ]{fig5.pdf}
\end{center}
\caption{(Color online) Temperature dependent C vacancy concentration $x(T)$ plotted as a solid
red line. Experimental preparation temperatures are indicated as dashed black lines.
The critical C vacancy concentration, below which Mo$_3$Al$_2$C$_{1-x}$ gets
dynamically unstable, is shown as a green dash-dotted line, while the
C vacancy concentrations of the calculated supercells are drawn as blue dotted lines.
}
\label{fig5}
\end{figure}
From an isolated defect model~\cite{Ashcroftde} the temperature dependent
equilibrium vacancy concentration $x$ can be calculated. However, the vacancy
formation energy $\varepsilon(x) + f(x,T)$ in the description of the internal
energy, $U(x,T)=(\varepsilon(x) + f(x,T))x$,\cite{Mayer1997} is strongly
dependent on $x$. Hence, the isolated defect model cannot be applied directly,
as the internal energy $U(x,T)$ is not a linear function of $x$. In our case
it is described as a quadratic function of $x$, $U(x,T)=ax^2+bx$ wherein $a$
and $b$ are temperature-dependent parameters fitted to the calculated carbon
values at $x=0$, $0.125$, and $0.25$.
Thus, the free energy for the vacancy formation
including the configurational entropy is formulated as
\begin{equation}
F(x,T)=ax^2+bx-k_{B}TS_\text{conf}(x)~.
\end{equation}
Assuming $S_\text{conf}(x)$ is the mixing entropy of non-interacting vacancies,
$S_\text{conf}(x)=-\left[x\ln(x)+(1-x)\ln(1-x)\right]$, the derivative of $F$ with respect to
$x$ can be used to search for the temperature dependent concentration $x(T)$ by
minimizing the free energy, i.e.,
\begin{equation}\label{eq:xcrit}
\frac{\partial F(x,T)}{\partial x}=0 \quad \Rightarrow \quad x = \frac{e^{-\beta(2ax+b)}}{1+e^{-\beta(2ax+b)}}~,
\end{equation}
with $\beta=1/(k_B T)$. This expression enables the numerical calculation of $x(T)$, as
shown in Fig. \ref{fig5}, using a bracketing root finding
algorithm.\cite{HPress:2007p15452} Inspecting Fig. \ref{fig5}, the C vacancy concentration is $0.13-0.14$
at the experimental preparation temperatures~\cite{Bauer2010} of $1773~$K and
$1523~$K. As elaborated in the previous Section~\ref{phonons} at such vacancy
concentrations Mo$_3$Al$_2$C$_{1-x}$ is dynamically stable.
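The fit and the self-consistent solution of Eq.~(\ref{eq:xcrit}) can be sketched numerically as follows. This is a sketch under stated assumptions: the quadratic coefficients $a$, $b$ are obtained by fitting $U(x)=(\varepsilon(x)+f(x,T))\,x$ through the per-vacancy formation energies of Table~\ref{tab2} at $x=0.25$ and $x=0.125$ ($U(0)=0$ holds trivially), which may differ in detail from the fit actually used in the paper; the root is found by simple bisection rather than the cited bracketing routine:

```python
import math

kB = 8.617e-5  # Boltzmann constant in eV/K

def quad_coeffs(eps_f_x025, eps_f_x0125):
    """Fit U(x) = a x^2 + b x through U(0.25) and U(0.125),
    with U(x) = (eps(x) + f(x,T)) * x and U(0) = 0 automatically."""
    u1, u2 = eps_f_x025 * 0.25, eps_f_x0125 * 0.125
    # Solve a*0.0625 + b*0.25 = u1 and a*0.015625 + b*0.125 = u2.
    a = (u1 - 2.0 * u2) / 0.03125
    b = (u2 - a * 0.015625) / 0.125
    return a, b

def x_of_T(T, a, b):
    """Solve x = exp(-beta(2ax+b)) / (1 + exp(-beta(2ax+b))) by bisection."""
    beta = 1.0 / (kB * T)
    g = lambda x: x - 1.0 / (1.0 + math.exp(beta * (2.0 * a * x + b)))
    lo, hi = 1e-6, 0.5  # bracket: g(lo) < 0 < g(hi) for these coefficients
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if g(lo) * g(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# C-vacancy formation energies eps + f(T) from Table II (in eV).
a, b = quad_coeffs(0.08, -0.66)   # T = 1773 K values
print(x_of_T(1773.0, a, b))
a, b = quad_coeffs(0.20, -0.39)   # T = 1523 K values
print(x_of_T(1523.0, a, b))
```

With the tabulated energies this sketch reproduces equilibrium concentrations of about $0.14$ at $1773~$K and $0.13$ at $1523~$K, consistent with the range quoted above.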
\section{Electronic Structure}
\begin{figure*}
\begin{center}
\begin{tabular}{ c c}
(a) & (b)\\
\includegraphics[width = 0.40\textwidth ]{fig6a.pdf}&
\includegraphics[width = 0.555\textwidth ]{fig6b.pdf}\\
\end{tabular}
\end{center}
\caption{(Color online)
Electronic band structure and DOS of stoichiometric Mo$_3$Al$_2$C (a) and of
Mo$_3$Al$_2$C$_{0.75}$ (b) calculated scalar relativistically (black lines) and
fully relativistically including spin-orbit coupling (red lines). The Fermi
energy $E_\text{F}$ referring to the number of valence electrons of Mo$_{3}$Al$_{2}$C
is indicated as a dotted line while the Fermi energy corresponding to
Mo$_{3}$Al$_{2}$C$_{0.75}$ is indicated as a dashed line.
Note the different directions $\Sigma, \Sigma^*$ and $\Lambda, \Lambda^*$ as
discussed in the caption of Fig.~\ref{fig1}.
}
\label{fig6}
\end{figure*}
After finding that vacancies do exist on the carbon sublattice and that these
are necessary to stabilize the crystal structure
we will now briefly discuss the band structure and the electronic DOS of
Mo$_{3}$Al$_{2}$C$_{1-x}$. As will be shown the influence of a changed carbon
stoichiometry on the band structure can not be described by a simple rigid band model.
Especially, the spin-orbit splitting on the bands in a fully relativistic
calculation strongly depends on $x$.
In Fig. \ref{fig6} the electronic band structure and DOS of the fully
stoichiometric compound is compared to that of Mo$_{3}$Al$_{2}$C$_{0.75}$.
The attentive reader might question this choice of $x=0.25$ as being too high,
because our thermodynamical model (described in the previous section) predicted
a much lower carbon vacancy concentration. However, we chose this value
because the unit cells of both compounds have equal size and shape, making
them easier to compare.
At this point it should be remarked that our calculated band structure for
Mo$_{3}$Al$_{2}$C$_{1}$ shown in panel (a) in Fig. \ref{fig6} resembles the one
published previously by Bauer \emph{et al.}\cite{Bauer2010} but differs
distinctively from that published by Karki \emph{et al}.\cite{Karki2010} The
different finding by Karki \emph{et al.} can only stem from the use of a
different crystal structure. We have recalculated the band structure by means
of the full-potential linearized augmented plane-wave method using our own
code FLAIR\cite{Weinert1982,Weinert2009} and also by comparing to a recent
calculation with the code Wien2K\cite{pcwPB} (the latter was also used by
Karki \emph{et al.}\cite{Karki2010}). All these calculations yielded the same
result for the band structure of Mo$_{3}$Al$_{2}$C$_1$ in the cubic $\beta$-Mn
type crystal structure, i.e., the one shown in panel (a) of Fig. \ref{fig6}.
We have calculated the band structures both in a scalar relativistic
approximation, omitting spin-orbit coupling, and fully relativistically,
including spin-orbit coupling in a self-consistent manner. As expected, some
degeneracies at the high-symmetry points differ depending on whether spin-orbit
coupling is included, e.g., a splitting of $30~$meV at $\Gamma$ and of $53~$meV at R in
Mo$_{3}$Al$_{2}$C$_{1}$ and of $20~$meV at both $\Gamma$ and R in
Mo$_{3}$Al$_{2}$C$_{0.75}$ just below the Fermi energy $E_\text{F}$ is evident. However,
the most striking point is the loss of the double degeneracy of the bands due
to spin-orbit coupling in noncentrosymmetric
compounds.\cite{Callaway1964,Bauer2009} From Fig. \ref{fig6} we observe
this vertical spin-orbit splitting of the bands, e.g., $65~$meV on
the path $\Lambda$ around $-0.35~$eV in Mo$_{3}$Al$_{2}$C$_{1}$ and $90~$meV on
the path $\Lambda^*$ around $-0.3~$eV in Mo$_{3}$Al$_{2}$C$_{0.75}$.
As a consequence the Fermi surfaces of noncentrosymmetric Mo$_{3}$Al$_{2}$C$_{1-x}$
do also split due to spin-orbit coupling which can be seen in Fig. \ref{fig6} as the horizontal
band splitting at $E_\text{F}$.
Comparing the band structure and electronic DOS of
Mo$_{3}$Al$_{2}$C$_{1}$ with that of Mo$_{3}$Al$_{2}$C$_{0.75}$ in Fig.
\ref{fig6} one immediately notices that removing a carbon atom from the carbon
sublattice has a substantial effect and it is not sufficient to simply
shift the Fermi level in order to account for the carbon vacancy.
From these results we conclude, that the structure of the Fermi surfaces and
hence any nesting, paramount to the understanding of superconductivity,
strongly depend on $x$.
\section{Conclusions}
In the present work we have shown that vacancies are necessary to dynamically stabilize
the cubic $\beta$-Mn type (P4$_1$32) crystal structure of Mo$_{3}$Al$_{2}$C,
whereby vacancies on the carbon sublattice are energetically the most
favorable ones. According to our thermodynamical model the most probable carbon
vacancy concentration $x$ in Mo$_{3}$Al$_{2}$C$_{1-x}$ is about $0.13-0.14$
considering actual experimental preparation temperatures.
We have demonstrated that there exists a critical value of $x_\text{crit}\sim0.09$
below which Mo$_{3}$Al$_{2}$C$_{1-x}$ becomes dynamically unstable; in particular,
the frequency at which the Debye-like behavior of the phonons ends depends
strongly on $x$.
Likewise, the band structure and electronic DOS are influenced by the carbon
vacancy concentration.
The still unresolved question of whether Mo$_{3}$Al$_{2}$C is a {\em conventional} or
{\em unconventional} superconductor can only be answered when the carbon
vacancies are properly considered, as the structure and nesting of the Fermi
surfaces depend on the carbon vacancy concentration. If this carbon vacancy
concentration could be controlled, it might be possible to tune the
superconducting properties of Mo$_{3}$Al$_{2}$C$_{1-x}$. This might be rather
difficult, because it may depend on the sample preparation and the cooling
process. Further, one can safely assume that at the preparation temperature
of $\approx 1500~$K the sample is in thermodynamic equilibrium. This is not the case
when its superconducting properties are measured, where it is in a quenched
metastable state.
\acknowledgments
This work was supported by the Austrian Science Fund FWF under Grant No. P22295.
Computational calculations were done on the Vienna Scientific Cluster (VSC).
\section{Introduction}
The willingness to trust predictions formulated by automatic algorithms is key in a vast number of domains. In addition to questions of ethics and responsibility, it is important to note that, whilst extremely powerful, a vast number of deep architectures are only able to formulate predictions without an associated uncertainty. This shortcoming critically reduces user compliance even when explainability techniques are used, and this issue is particularly sensitive when deep learning techniques are employed e.g. in the medical diagnosis field.
Being able to produce a measure of the system's confidence in its predictions can greatly improve the trustworthiness of a deep learning tool as a recommendation engine able to improve the workflow of physicians.
Alzheimer's disease (AD) is one of the most critical public health concerns of our time. Due to life expectancy increases and better professional care, more and more people reach older ages but are often affected by degenerative brain disorders like AD, a severe form of dementia \cite{alzheimer}.
Principal symptoms are progressive memory loss, difficulties in normal-life activities, language disorders, disorientation, and, in general, a decrease in cognitive functions. One of the most important risk factors is age, while in some cases specific genetic mutations are responsible for pathology onset, which however can also be related to comorbidities. As a progressive degenerative pathology, AD is usually preceded by a different condition called mild cognitive impairment (MCI), with less intense symptoms that often, but not always, evolve into AD, which has no cure. Many theories about the etiopathogenesis of AD exist, several of which are linked to an alteration in the metabolism of the precursor protein of beta-amyloid. The latter's metabolism slowly changes over the course of the years, leading to the formation of neurotoxic substances which slowly accumulate in the brain. The causal relationship between beta-amyloid metabolism and clinical AD presentation is the object of intense research \cite{simeon,Toschi,Hampel2018-kv,Hampel2020-rp}. In clinical practice, AD diagnosis is based on the symptoms and commonly confirmed using magnetic resonance imaging (MRI) or positron emission tomography (PET), which however leaves the clinician with a great deal of subjectivity and uncertainty to deal with when positioning a patient in the AD continuum.
For this reason, there is great interest in models able to detect and predict AD-related structural and functional changes. Deep learning models are able to usefully extract local and global features through convolutional layers and learn how to predict interesting outcomes, such as distinguishing healthy controls from AD patients or even MCI patients which will remain stable for those who will progress to AD \cite{alz_deep,simeon,cad,predict_alz}.
In this context, difficulties in accessing large-scale curated datasets and the need to work with multimodal high-dimensional data, call for particular attention to avoiding overfitting and increasing the reliability of automatic models, possibly including the output of uncertainty estimates which can be evaluated by neuroscientists and physicians. For those reasons, we propose a Hybrid Bayesian Neural Network in a framework where predicted probabilities are coupled with their uncertainties. To reduce the number of parameters, we propose a convolutional neural network based on depthwise separable convolutions. We trained our model on a subset of the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset using Jacobian Determinant images, that is, images where each voxel describes the change in the volume element resulting from nonlinear coregistration of the patient's brain MRI into a standard space such as e.g. the Montreal Neurological Institute (MNI) T1-weighted template. This choice was made in order to isolate morphometric changes (such as e.g. cortical atrophy) from image intensity variations \cite{morphometry}.
Once trained, we turned the last linear layer into a Bayesian layer, replacing the optimal values $w^*$ with narrow parameter distributions $\mathcal{N}(w^*,s)$. This means that instead of a single weight value for each connection between the last layer and the final output, a Gaussian distribution centered on the optimal value $w^*$ is used. Every time the network performs inference, the weights of the last layer are sampled from those distributions. In this way it is possible to obtain $N$ slightly different
networks which, in turn, allows us to perform ensembling and hence provide an uncertainty estimate. The latter can also be thresholded in order to subset the data and increase prediction performance.
\section{Material and methods}
\begin{figure}[htbp]
\centering
\includegraphics[width=0.5\linewidth]{images/architecture_overview.png}
\caption{Architecture overview: Our model is a classifier based on residual convolutional blocks. Each block is composed of two depthwise separable convolutional layers to keep the number of parameters as low as possible to process 3D images. In correspondence with the Gaussian distribution symbol we sample the network classifier weights $w\sim \mathcal{N}(w^*,s)$.}
\label{fig:architecture}
\end{figure}
In this section, we describe the dataset and briefly revisit the theory behind Bayesian neural networks that justifies our approach.
\subsection{Dataset}
We selected a subset of 376 cases from the ADNI \cite{adni} dataset, composed of cases labeled as either healthy or AD, and employed the Magnetization Prepared RApid Gradient Echo (MPRAGE) T1-weighted image only. T1-weighted (T1w) images were coregistered to the MNI template using linear initialization and a nonlinear warp, after which the Jacobian Determinant (JD) maps were computed by isolating the nonlinear part of the deformational field which takes the images from native space to standard space. We finally masked the deformation maps using the standard MNI brain mask. Registration procedures were performed using the ANTs package \cite{ants}.
The high-dimensional nonlinear transformation (symmetric diffeomorphic normalization transformation) model was initialized through a generic linear transformation that consisted of the center of mass alignment, rigid, similarity, and fully affine transformations followed by nonlinear warps (metric: neighborhood cross-correlation, sampling: regular, gradient step size: 0.12, four multiresolution levels, smoothing sigmas: 3, 2, 1, 0 voxels in the reference image space, shrink factors: 6, 4, 2, 1 voxels). We also used histogram matching of images before registration and data winsorization with quantiles 0.001 and 0.999. The convergence criterion was set as follows: the slope of the normalized energy profile over the last 10 iterations $< 10^{-8}$. Coregistration of all scans required approximately 19200 hours of CPU time on a high-performance parallel computing cluster.
Our final dataset consisted of 376 JD images, evenly distributed between AD and healthy cases. The dataset was split in an 80(train)/20(test) fashion, normalized globally, and cropped to a (96,96,96) size.
\begin{figure}[h!]
\centering
\includegraphics[width=0.9\linewidth]{images/example_img.png}
\caption{Example of slices for one random case in the test set. Local Jacobian Determinant images are normalized in the range [0,1].}
\label{fig:example_img}
\end{figure}
\subsection{Bayesian Neural Network}
We briefly recap the theory behind Bayesian neural networks and then describe the architecture of our model and the training procedure.
The idea is that instead of estimating $w^*$ which minimizes the cost function, we
learn a weight distribution. This is equivalent to an infinite ensemble approach, which allows us to estimate the variance of the prediction, sampling a slightly different neural network each time we perform inference.
Instead of learning $w^*$ we learn the posterior $p(w|D)$, where $D$ represents the incoming data.
Our aim here is to perform inference as the average of different neural networks as described in Equation \ref{eq:1}.
\begin{equation}
p(\hat{y}|D)=\int_w p(\hat{y}|w)\,p(w|D)\,dw = E_{p(w|D)}\left[p(\hat{y}|w)\right]
\label{eq:1}
\end{equation}
where $p(\hat{y}|D)$ is the probability of obtaining the prediction $\hat{y}$ given the data $D$, $p(\hat{y}|w)$ is the conditional probability of obtaining $\hat{y}$ given the network's weights $w$, and $p(w|D)$ is the posterior probability of the weights $w$ given the data $D$.
To perform this computation we need the posterior $p(w|D)$, which can be rewritten using Bayes' theorem.
Thus $p(w|D)$ can be expressed through the likelihood $p(D|w)$ and the prior $p(w)$, but we also need the normalization term in the denominator, which is computationally intractable.
At this point, several approaches exist to overcome this issue.
One popular approach is variational inference, which approximates $p(w|D)$ with a parametrized distribution $q_\phi(w)$ that minimizes the Kullback-Leibler (KL) divergence with the target distribution.
Monte Carlo approaches are also possible, sampling points matching the required distribution as described in \cite{bayes_back,bayesian,variation_autoencodes,variational_dropout}.
The latter approach is computationally intensive, while variational inference requires modifying the objective: the standard loss function is augmented with the KL divergence between the distribution of the weights and the chosen prior, which can make training longer and unstable.
Instead, we opted for a hybrid approach, first training a standard convolutional network and then turning the last-layer weights into narrow Gaussian distributions centered on the optimal values $w^*$.
Assuming that the optimal values can serve as the centers of Gaussian distributions, and that small deviations around them represent similar networks, this approach turns a standard neural network into a Bayesian one without any added complexity at training time.
\begin{equation}
p(w|D)= \frac{p(D|w)p(w)}{\int_{w'} p(D|w')p(w')dw'}
\end{equation}
\subsection{Neural network architecture}
Our base model is a residual convolutional neural network based on depthwise separable convolutions, which we implemented to reduce the number of parameters and the risk of overfitting. 3D depthwise separable convolutions are based on an ad hoc PyTorch implementation, using grouped convolutions with the group number set to the same value as the number of input channels, followed by a pointwise convolution producing the output channels. In other words, convolutions are first learned channelwise, and then information about the interaction between channels is taken into account by the pointwise convolution at each location. This reduces the number of parameters from $COK^3$ to $C(K^3+O)$. Here $C$ is the number of input channels, $K$ is the 3D kernel size, and $O$ is the number of output channels.
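A minimal sketch of such a layer is given below. This is not the authors' code; the module name and channel sizes are illustrative, and only the grouped-plus-pointwise construction and the resulting parameter count $C(K^3+O)$ follow the description above:

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv3d(nn.Module):
    """Depthwise 3D convolution (groups = in_channels) followed by a
    1x1x1 pointwise convolution that mixes information across channels."""
    def __init__(self, in_ch, out_ch, kernel_size=3, padding=1):
        super().__init__()
        # Channelwise spatial filtering: one K^3 kernel per input channel.
        self.depthwise = nn.Conv3d(in_ch, in_ch, kernel_size,
                                   padding=padding, groups=in_ch, bias=False)
        # Pointwise mixing across channels: C*O parameters.
        self.pointwise = nn.Conv3d(in_ch, out_ch, kernel_size=1, bias=False)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

C, O, K = 8, 16, 3
conv = DepthwiseSeparableConv3d(C, O, K)
n_params = sum(p.numel() for p in conv.parameters())
# C*(K^3 + O) weights instead of C*O*K^3 for a full 3D convolution.
assert n_params == C * (K**3 + O)
```

For these illustrative sizes the layer has $8\times(27+16)=344$ weights instead of $8\times16\times27=3456$ for a full 3D convolution.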
Our model is composed of three residual blocks, and each block is composed of two depthwise separable convolutions with a PReLU activation function \cite{prelu}. Each block halves the side dimension of the images. A flattening layer is followed by a linear layer for the first part of the training and then turned into a Bayesian linear layer replacing optimal values with Gaussian distributions. We used the Adam optimizer \cite{adam} with a learning rate of $3e-4$ and trained our model for 5 epochs.
The implementation was built in Python, using the PyTorch, MONAI \cite{monai}, and Torchbnn \cite{bayesian} libraries.
After training, the last-layer weights were replaced by a set of Gaussians, $w^* \rightarrow \mathcal{N}(w^*,s)$, with $s$ chosen to be small; we set $s=0.01$ in our experiments.
At each forward pass, the network processes information in the standard way until reaching the last layer.
Here, a set of weights is sampled from $\tilde{w} \sim \mathcal{N}(w^*,s)$, generating a slightly different neural network.
Sampling $N$ networks during inference produces different estimates and gives us the chance to estimate the uncertainty of the output.
\subsection{Experiment}
Our model is trained to classify AD and healthy cases on JD images.
We ran inference on our test set with $N=100$, each time sampling the weights from their distributions.
We computed the \textit{softmax} of the output to obtain the probabilities and aggregated the results by their mean.
We also computed the standard deviation for each outcome probability.
Then, we defined a set of thresholds on the standard deviation to study how performance varies.
Each estimate whose standard deviation exceeds the threshold is rejected.
The idea is that we can set the threshold according to our needs. If we need higher accuracy and want to avoid uncertain estimates, we can set a small value for the threshold $t$. In this case, fewer data are accepted for estimation and more "rejected" cases must be reviewed manually. On the other side, we can retain most or all of the test dataset if we can accept more misclassified cases.
Algorithm \ref{alg:inference} describes the whole procedure.
\begin{algorithm}[H]
\caption{Inference}
\label{alg:inference}
\begin{algorithmic}
\item[\algorithmicinput]{$x, N$: JD images, number of inferences}
\FOR{$i$ in range($N$)}
\STATE Sample NN weights $w\sim \mathcal{N}(w^*,s)$
\STATE Estimate output probabilities $p_i=f_w(x)$
\ENDFOR
\STATE Average prediction $p_{mean}=\frac{1}{N}\sum_{i} p_i$
\STATE Compute $p_{std}$
\end{algorithmic}
\end{algorithm}
\begin{algorithm}[H]
\caption{Reject procedure }
\label{alg:rejection}
\begin{algorithmic}
\item[\algorithmicinput]{$p_{mean}, p_{std}$ for test set}
\STATE keep=[]
\FOR{sample in test set}
\IF {$p_{std}<threshold$}
\STATE keep.append($p_{mean}$)
\ELSE
\STATE Reject $p_{mean}$
\ENDIF
\ENDFOR
\STATE Evaluate keep
\end{algorithmic}
\end{algorithm}
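The two procedures above can be sketched together as follows. This is a toy NumPy sketch, not the authors' PyTorch/Torchbnn code: the features standing in for the penultimate-layer activations and the "optimal" last-layer weights are random placeholders, and only the sampling, averaging, and rejection logic follows the algorithms:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Hypothetical stand-ins: penultimate-layer features and optimal
# last-layer parameters w* for a 2-class problem.
n_cases, n_feat, n_classes = 20, 8, 2
features = rng.normal(size=(n_cases, n_feat))
W_star = rng.normal(size=(n_feat, n_classes))
b_star = np.zeros(n_classes)

N, s = 100, 0.01  # number of sampled networks, Gaussian width
probs = np.empty((N, n_cases, n_classes))
for i in range(N):
    # Sample a slightly different last layer: w ~ N(w*, s).
    W = W_star + s * rng.normal(size=W_star.shape)
    probs[i] = softmax(features @ W + b_star)

p_mean = probs.mean(axis=0)  # ensemble (averaged) prediction
p_std = probs.std(axis=0)    # per-class uncertainty

# Reject cases whose predicted-class uncertainty exceeds a threshold t.
t = 0.002
pred_class = p_mean.argmax(axis=1)
uncertainty = p_std[np.arange(n_cases), pred_class]
keep = uncertainty < t
```

The kept subset (`keep`) is then evaluated, while the rejected cases would be flagged for manual review.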
\subsection{Explainability}
In order to visualize the portion of the images which were weighed most by our model, we used the trulens library \cite{trulens} implementation of integrated gradients in \cite{integrated_gradient}.
A baseline image $x_0$ - usually a tensor of zeros - is generated, and a set of interpolated images is computed according to the formula $x_i=x_0+\alpha_i (x-x_0)$, where $x$ is the actual image that we are trying to explain and the $\alpha_i$ are linearly spaced coefficients in $[0,1]$.
All those images are passed to the network and the gradients along the path to the chosen class are collected and integrated.
Then, we smoothed the images with a 3D Gaussian kernel with $\sigma=4$ to reduce noise in the procedure and kept the values above the $95$th percentile to get a mask for the most important regions.
We repeated the procedure $10$ times sampling each time a slightly different neural network and then averaging the attributions masks.
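The underlying path integral can be sketched with its Riemann approximation as follows. This minimal NumPy sketch uses a simple analytic function with a known gradient instead of the network (autograd would supply the gradients in practice), so the completeness property of integrated gradients, attributions summing to $f(x)-f(x_0)$, can be checked directly:

```python
import numpy as np

def f(x):
    # Stand-in scalar "model output": sum of squares.
    return (x ** 2).sum()

def grad_f(x):
    # Analytic gradient of f (a network would use autograd here).
    return 2 * x

def integrated_gradients(x, x0, steps=200):
    alphas = np.linspace(0.0, 1.0, steps)
    # Average the gradient along the straight path from x0 to x ...
    avg_grad = np.mean([grad_f(x0 + a * (x - x0)) for a in alphas], axis=0)
    # ... and scale by the input difference.
    return (x - x0) * avg_grad

x = np.array([1.0, -2.0, 0.5])
x0 = np.zeros_like(x)
attr = integrated_gradients(x, x0)
# Completeness: attributions sum to f(x) - f(x0).
assert abs(attr.sum() - (f(x) - f(x0))) < 1e-6
```

For this quadratic toy function the attribution of each component reduces to $x_i^2$, so the check is exact up to floating-point error.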
\section{Results}
As a first experiment, we tested the standard neural network.
In this case, on the $100\%$ of the test set we obtained an accuracy of $0.86$, F1-score of $0.87$, precision $0.86$ and recall $0.86$ with an AUC of $0.938$.
Subsequently, our approach was tested as a function of $t$ (i.e. the maximum standard deviation accepted for the class with the highest probability, see above). We computed the accuracy, the area under the receiver operating characteristic curve (AUC), and the fraction of the retained test dataset for each threshold. In Fig.~\ref{fig:results}, the results are reported as a function of the threshold value. We can clearly see two opposite trends: reducing the threshold increases accuracy and AUC, while the fraction of the remaining test dataset naturally decreases, since the model rejects the predictions whose associated uncertainty exceeds the threshold. In our use case, we observed the best results with a threshold of $0.002$, which retains $75\%$ of the dataset and reaches an accuracy of $0.95$ and an AUC of $0.96$. Figure \ref{fig:integrad} shows the final explainability masks generated by averaging integrated gradients for randomly chosen AD and healthy cases in the test set. It appears that the model focuses on different areas of the lower brain and, in particular, the ventricular spaces, whose deformation is known to be correlated with AD \cite{ventriculi,enlargment}.
\begin{table}[htbp]
\centering
\large
\begin{tabular}{@{\hskip 10pt}l@{\hskip 12pt}l@{\hskip 10pt}l@{\hskip 10pt}l}
\multicolumn{1}{l|}{\textbf{Threshold}} & \textbf{Accuracy} & \textbf{AUC} & \textbf{Fraction} \\ \hline
\multicolumn{1}{l|}{0.002} & \textbf{0.947} & \textbf{0.959} & 0.750 \\
\multicolumn{1}{l|}{0.005} & 0.916 & 0.955 & 0.789 \\
\multicolumn{1}{l|}{0.01} & 0.904 & 0.951 & 0.829 \\
\multicolumn{1}{l|}{0.02} & 0.898 & 0.939 & 0.907 \\
\multicolumn{1}{l|}{0.05} & 0.876 & 0.939 & 0.960 \\
\multicolumn{1}{l|}{0.10} & 0.868 & 0.940 & 1.0 \\
\multicolumn{1}{l|}{0.15} & 0.868 & 0.940 & 1.0 \\
\multicolumn{1}{l|}{0.2} & 0.868 & 0.940 & 1.0 \\
\end{tabular}
\caption{Results: accuracy, AUC, and the fraction of the test dataset with uncertainty below the threshold.}
\label{tab:results}
\end{table}
\begin{figure}[h]
\centering
\includegraphics[width=0.45\textwidth]{images/results.png}
\includegraphics[width=0.45\textwidth]{images/roc.png}
\caption{Results: the left panel shows accuracy (blue), AUC (orange), and the retained fraction of the test dataset (green) as functions of the threshold. The right panel shows the AUC for the standard model and for the selected Bayesian model with a threshold of 0.002.}
\label{fig:results}
\end{figure}
\section{Discussion}
\begin{figure}[h!]
\centering
\includegraphics[width=0.9\linewidth]{images/integrated_gradients.png}
\caption{Results: integrated gradients, an interpretability method that highlights the areas most influential for the prediction. In this case, the model focuses on the ventricles of the brain, an area involved in neurodegeneration.}
\label{fig:integrad}
\end{figure}
The willingness to trust predictions made by automatic devices, especially those based on black-box algorithms like neural networks, is critical in many application domains, such as the medical field. Ethics and responsibility pose an upper bound on the contribution of such techniques to medical diagnosis, screening, and triaging. We believe that estimating the uncertainty of a prediction, together with tools that modulate the behavior of the network according to a degree of confidence the user is informed about (and comfortable with), is a crucial step in this direction. Such features can ease the translation of predictive models from research to the clinic. Rather than completely replacing humans in evaluation, AI can support useful recommendation systems and powerful tools that efficiently reduce the workload of, e.g., medical professionals. We proposed a method that turns a classical neural network into a Bayesian neural network, endowing the model with the ability to estimate the uncertainty associated with its predictions. We also incorporated a rejection method based on thresholding the estimated uncertainty, which resulted in a global performance increase, since probably misclassified cases tend to be associated with higher uncertainty and are filtered out. Additionally, by exclusion, the system can recommend cases whose uncertainty is above the threshold for expert human evaluation.
\section{Conclusion}
We built a Bayesian neural network method able to estimate variability in predictions by simulating sampling from an infinite neural network ensemble. We used the estimated variability, combined with a rejection method, to retain only the fraction of the dataset that the model is able to classify with an under-threshold uncertainty, and showed that this procedure can improve the accuracy from $0.86$ to $0.95$ (while retaining $75\%$ of the test set) when discriminating AD from healthy cases based on brain morphometry only. Using integrated gradients, we also found that our model focuses on brain areas that are consistent with the clinical presentation of AD, in addition to highlighting previously unexplored areas in the lower part of the brain.
\section*{Acknowledgements}
{\footnotesize
Part of this work is supported by the EXPERIENCE project (European Union's Horizon 2020 research and innovation program under grant agreement No. 101017727).
Matteo Ferrante is a PhD student enrolled in the National PhD in Artificial Intelligence, XXXVII cycle, course on Health and life sciences, organized by Università Campus Bio-Medico di Roma.
}
\bibliographystyle{splncs04}
\section{Introduction} \label{sec:intro}
Thanks to the significant progresses made in the recent years \cite{ASRBook-Yu2014,PretrainVSFineTune-Yu2010,CD-DNN-HMM-dahl2012,CD-DNN-HMM-SWB-seide2011,DNN4ASR-hinton2012,CNN4ASR-Abdel-Hamid2012,CNN-Trans-Abdel-Hamid2014,CLDNN-sainath2015,DeepCNN-bi2015,DeepCNN-qian2016.1,DeepCNN-qian2016.2,TFCNN-mitra2015,TDNN-peddinti2015,VGG-secru2016,Deepspeech2-amodei2015,FSMN-zhang2015,LACE-yu2016,HumanParity-Xiong2016,PIT-yu2017,PIT-Kolbak2017}, the ASR systems now surpassed the threshold for adoption in many real-world scenarios and enabled services such as Microsoft Cortana, Apple's Siri and Google Now, where close-talk microphones are commonly used.
However, current ASR systems still perform poorly when far-field microphones are used, because many difficulties hidden by close-talk microphones surface in distant recognition scenarios. For example, the signal-to-noise ratio (SNR) between the target speaker and the interfering sources is much lower than with close-talk microphones. As a result, interfering signals, such as background noise, reverberation, and speech from other talkers, become so distinct that they can no longer be ignored.
In this paper, we aim to solve the speech recognition problem when multiple talkers speak at the same time and only a single channel of mixed speech is available. Many attempts have been made to attack this problem. Before the deep learning era, the most famous and effective model was the factorial GMM-HMM \cite{FactorialHMM-ghahramani1997}, which outperformed humans in the 2006 monaural speech separation and recognition challenge \cite{MonauralSpeechSepChallenge-Cooke2010}. The factorial GMM-HMM, however, requires the test speakers to be seen during training so that the interactions between them can be properly modeled. Recently, several deep learning based techniques have been proposed to solve this problem \cite{PIT-yu2017,PIT-Kolbak2017,SingleChannelSep-Weng2015,DeepClustering-hershey2015,DeepClustering2-isik2016,AtrractorNet4SpeechSeparation-chen2017}. The core issue that these techniques try to address is the label ambiguity or permutation problem (refer to Section \ref{sec:pit} for details).
In Weng et al. \cite{SingleChannelSep-Weng2015}, a deep learning model was developed to recognize the mixed speech directly. To solve the label ambiguity problem, Weng et al. assigned the senone labels of the talker with higher instantaneous energy to output one and those of the other talker to output two. Although this addresses the label ambiguity problem, it causes frequent speaker switches across frames. To deal with the speaker switch problem, a two-speaker joint decoder with a speaker-switching penalty was used to trace speakers. This approach has two limitations. First, energy, which is manually picked, may not be the best cue for assigning labels under all conditions. Second, the frame-level switching places an extra burden on the decoder.
In Hershey et al. \cite{DeepClustering-hershey2015,DeepClustering2-isik2016}, the multi-talker mixed speech is first separated into multiple streams, and an ASR engine is then applied to these streams independently to recognize the speech. To separate the streams, they proposed a technique called deep clustering (DPCL). They assume that each time-frequency bin belongs to only one speaker and can be mapped into a shared embedding space. The model is optimized so that, in the embedding space, time-frequency bins belonging to the same speaker are close and those of different speakers are far apart. During evaluation, a clustering algorithm is first applied to the embeddings to partition the time-frequency bins; separated audio streams are then reconstructed from the partition. In this approach, speech separation and recognition are usually two separate components.
Chen et al. \cite{AtrractorNet4SpeechSeparation-chen2017} proposed a similar technique called deep attractor network (DANet). Following DPCL, their approach also learns a high-dimensional embedding of the acoustic signals. Unlike DPCL, however, it creates cluster centers, called attractor points, in the embedding space to pull together the time-frequency bins corresponding to the same source. The main limitation of DANet is the need to estimate attractor points during evaluation and to form frequency-bin clusters around them.
In Yu et al. \cite{PIT-yu2017} and Kolbak et al.\cite{PIT-Kolbak2017}, a simpler yet equally effective technique named permutation invariant training (PIT)\footnote{In \cite{DeepClustering-hershey2015}, a similar permutation free technique, which is equivalent to PIT when there are exactly two-speakers, was evaluated with negative results and conclusion.} was proposed to attack the speaker independent multi-talker speech separation problem. In PIT, the source targets are treated as a set (i.e., order is irrelevant). During training, PIT first determines the output-target assignment with the minimum error at the utterance level based on the forward-pass result. It then minimizes the error given the assignment. This strategy elegantly solved the label permutation problem. However, in these original works PIT was used to separate speech streams from mixed speech. For this reason, a frequency-bin mask was first estimated and then used to reconstruct each stream. The minimum mean square error (MMSE) between the true and reconstructed speech streams was used as the criterion to optimize model parameters.
Moreover, most previous work on multi-talker speech still focuses on speech separation \cite{PIT-yu2017,PIT-Kolbak2017,DeepClustering-hershey2015,DeepClustering2-isik2016,AtrractorNet4SpeechSeparation-chen2017}. In contrast, multi-talker speech recognition is much harder and has received less attention. There have been some attempts, but on relatively simple tasks. For example, the 2006 monaural speech separation and recognition challenge \cite{FactorialHMM-ghahramani1997,MonauralSpeechSepChallenge-Cooke2010,SingleChannelSep-Weng2015,HERSHEY201045,rennie2010single} was defined on a speaker-dependent, small-vocabulary, constrained-language-model setup, while in \cite{DeepClustering2-isik2016} a small-vocabulary reading-style corpus was used. We are not aware of any extensive prior work on the more realistic, speaker-independent, spontaneous large vocabulary continuous speech recognition (LVCSR) of multi-talker mixed speech.
In this paper, we attack the multi-talker mixed speech recognition problem with a focus on the speaker-independent setup given just a single-channel of the mixed speech. Different from \cite{PIT-yu2017,PIT-Kolbak2017}, here we extend and redefine PIT over log filter bank features and/or senone posteriors. In some architectures PIT is defined upon the minimum mean square error (MSE) between the true and estimated individual speaker features to separate speech at the feature level (called PIT-MSE from now on). In some other architectures, PIT is defined upon the cross entropy (CE) between the true and estimated senone posterior probabilities to recognize multiple streams of speech directly (called PIT-CE from now on). Moreover, the PIT-MSE based front-end feature separation can be combined with the PIT-CE based back-end recognition in a joint optimization architecture. We evaluate our architectures on the artificially generated AMI data with both two- and three-talker mixed speech. The experimental results demonstrate that our proposed architectures are very promising.
The rest of the paper is organized as follows. In Section \ref{sec:problem} we describe the speaker independent multi-talker mixed speech recognition problem. In Section \ref{sec:pit} we propose several PIT-based architectures to recognize multi-streams of speech. We report experimental results in Section \ref{sec:exp} and conclude the paper in Section \ref{sec:conclusion}.
\section{Single-Channel Multi-Talker Speech Recognition} \label{sec:problem}
In this paper, we assume that a linearly mixed single-microphone signal ${y}[n] = \sum_{s=1}^{S} {x}_s[n]$ is known, where ${x}_s[n], s=1, \cdots, S$ are $S$ streams of speech sources from different speakers. Our goal is to separate these streams and recognize each of them. In other words, the model needs to generate $S$ output streams, one for each source, at every time step. However, given only the mixed speech ${y}[n]$, the problem of recognizing all streams is under-determined, because an infinite number of possible ${x}_s[n]$ combinations (and thus recognition results) lead to the same ${y}[n]$. Fortunately, speech is not a random signal. It has patterns that we may learn from a training set of pairs $\mathbf{y}$ and ${\mathbf{\ell}^s,s=1,\cdots,S}$, where $\mathbf{\ell}^s$ is the senone label sequence for stream $s$.
In the single-speaker case, i.e., $S=1$, the learning problem is significantly simplified because there is only one possible recognition result, so it can be cast as a simple supervised optimization problem. Given the input to the model, which is some feature representation of $\mathbf{y}$, the output is simply the senone posterior probability conditioned on the input. As in most classification problems, the model can be optimized by minimizing the cross entropy between the senone label and the estimated posterior probability.
When $S$ is greater than $1$, however, the problem is no longer as simple and direct as in the single-talker case, and the label ambiguity, or permutation, problem arises in training. In the case of two speakers, because the speech sources are symmetric given the mixture (i.e., $\mathbf{x}_1+\mathbf{x}_2$ equals $\mathbf{x}_2+\mathbf{x}_1$, and both $\mathbf{x}_1$ and $\mathbf{x}_2$ have the same characteristics), there is no predetermined way to assign the correct target to the corresponding output layer. Interested readers can find additional information in \cite{PIT-yu2017,PIT-Kolbak2017} on how training progresses to nowhere when the conventional supervised approach is used for multi-talker speech separation.
\section{Permutation Invariant Training for Multi-Talker Speech Recognition} \label{sec:pit}
To address the label ambiguity problem, we propose several architectures based on the permutation invariant training (PIT) \cite{PIT-yu2017,PIT-Kolbak2017} for multi-talker mixed speech recognition. For simplicity and without losing the generality, we always assume there are two-talkers in the mixed speech when describing our architectures in this section.
Note that DPCL \cite{DeepClustering-hershey2015,DeepClustering2-isik2016} and DANet \cite{AtrractorNet4SpeechSeparation-chen2017} are alternative solutions to the label ambiguity problem when the goal is speech source separation. However, these two techniques cannot easily be applied to the direct recognition (i.e., without first separating the speech) of multiple speech streams, because of the clustering step required during separation and the assumption that each time-frequency bin belongs to only one speaker (which does not hold when the CE criterion is used).
\begin{figure*}[htbp]
\centering
\subfigure[Arch\#1: Feature separation with the fixed reference assignment]{
\includegraphics[width=0.47\linewidth]{figure/normal_SS_ASR.jpg}
}
~
\subfigure[Arch\#2: Feature separation with permutation invariant training]{
\includegraphics[width=0.47\linewidth]{figure/PIT_SS_ASR.jpg}
}
\caption{Feature separation architectures for multi-talker mixed speech recognition}
\label{fig:models1}
\end{figure*}
\subsection{Feature Separation with Direct Supervision}
To recognize multi-talker mixed speech, one straightforward approach is to estimate the features of each speech source from the mixed-speech features and recognize them one by one using a normal single-talker LVCSR system. This idea is depicted in Figure \ref{fig:models1}, where we learn a model to recover the filter bank (FBANK) features of each source from the mixed FBANK features and then feed each recovered stream to a conventional LVCSR system for recognition.
In the simplest architecture, which is denoted as {\bf{Arch\#1}} and illustrated in Figure \ref{fig:models1}(a), feature separation can be considered a multi-class regression problem, similar to many previous works \cite{huang2014deep,Weninger:2015:SEL:2965758.2965771,wang2014training,xu2014experimental,huang2015jointtaslp,du2016regression}. In this architecture, $\mathbf{Y}$, the feature representation of the mixed speech, is used as the input to a deep learning model, such as a deep neural network (DNN), a convolutional neural network (CNN), or a long short-term memory (LSTM) recurrent neural network (RNN), to estimate the feature representation of each individual talker. With a bidirectional LSTM-RNN, the model computes
\begin{align}
\mathbf{H}_0 &= \mathbf{Y} \\
\mathbf{H}_i^f &= RNN_i^f (\mathbf{H}_{i-1}), i=1, \cdots, N \\
\mathbf{H}_i^b &= RNN_i^b (\mathbf{H}_{i-1}), i=1, \cdots, N \\
\mathbf{H}_i &= Stack(\mathbf{H}_i^f, \mathbf{H}_i^b), i = 1, \cdots, N \\
\mathbf{\hat{X}}^s &= Linear(\mathbf{H}_{N}), s=1, \cdots, S
\end{align}
where $\mathbf{H}_0$ is the input, $N$ is the number of hidden layers, $\mathbf{H}_i$ is the $i$-th hidden layer, $RNN_i^{f}$ and $RNN_i^{b}$ are the forward and backward RNNs at hidden layer $i$, respectively, $\mathbf{\hat{X}}^s, s=1,\cdots,S$ is the estimated separated features from the output layers for each speech stream $s$.
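The forward pass of Equations (1)$\sim$(5) can be sketched as follows. For brevity this illustration uses a plain tanh RNN cell in place of the LSTM, and the parameter layout (`layers`, `heads`) is our own assumption, not the paper's implementation.

```python
import numpy as np

def rnn_pass(H, W, U, reverse=False):
    """One directional tanh-RNN layer over a (T, D) sequence."""
    T = H.shape[0]
    h = np.zeros(U.shape[0])
    out = np.zeros((T, U.shape[0]))
    order = range(T - 1, -1, -1) if reverse else range(T)
    for t in order:
        h = np.tanh(H[t] @ W + h @ U)
        out[t] = h
    return out

def birnn_separator(Y, layers, heads):
    """Stack N bidirectional layers, then apply one linear head per
    speech stream, mirroring Eqs. (1)-(5).

    Y      : (T, D) mixed-speech features
    layers : list of per-layer weights (Wf, Uf, Wb, Ub)
    heads  : list of S output matrices, one per stream
    """
    H = Y
    for Wf, Uf, Wb, Ub in layers:
        Hf = rnn_pass(H, Wf, Uf)                  # forward RNN
        Hb = rnn_pass(H, Wb, Ub, reverse=True)    # backward RNN
        H = np.concatenate([Hf, Hb], axis=1)      # Stack(H_i^f, H_i^b)
    return [H @ V for V in heads]                 # one \hat{X}^s per stream
```

Each head produces a full-length feature stream for one speaker, which is what the MSE objective below compares against the references.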
During training, we need to provide the correct reference (or target) features $\mathbf{X^s}, s=1,\cdots,S$ for all speakers in the mixed speech to the corresponding output layers for supervision. The model parameters can be optimized to minimize the mean square error (MSE) between the estimated separated feature $\mathbf{\hat X^s}$ and the original reference feature $\mathbf{X^s}$,
\begin{align}
J = \frac{1}{S} \sum_{s=1}^{S} \sum_t || \mathbf{X_t^s} - \hat{\mathbf{X_t^s}} ||^2
\end{align}
where $S$ is the number of mixed speakers. In this architecture, it is assumed that the reference features are organized in a given order and assigned to the output layer segments accordingly. Once trained, this feature separation module can be used as the front-end to process the mixed speech. The separated feature streams are then fed into a normal single-speaker LVCSR system for decoding.
\subsection{Feature Separation with Permutation Invariant Training}
The architecture depicted in Figure \ref{fig:models1}(a) is easy to implement but has obvious drawbacks. Since the model has multiple output layer segments (one for each stream), all depending on the same input mixture, assigning the references is actually difficult. The fixed reference order used in this architecture is not well justified, since the source speech streams are symmetric and there is no clear clue on how to order them in advance. This is referred to as the label ambiguity (or label permutation) problem in \cite{PIT-yu2017,SingleChannelSep-Weng2015,DeepClustering-hershey2015}. As a result, this architecture may work well in the speaker-dependent setup, where the target speaker is known (and can thus be assigned to a specific output segment) during training, but it does not generalize well to the speaker-independent case.
The label ambiguity problem in multi-talker mixed speech recognition was addressed with limited success in \cite{SingleChannelSep-Weng2015}, where Weng et al. assigned reference features depending on the energy level of each speech source. In the architecture illustrated in Figure \ref{fig:models1}(b), named {\bf{Arch\#2}}, permutation invariant training (PIT) \cite{PIT-yu2017,PIT-Kolbak2017} is utilized to estimate the individual feature streams. In this architecture, the reference feature sources are given as a set instead of an ordered list. The output-reference assignment is determined dynamically based on the current model: the MSE for each possible assignment between the references $\mathbf{X^{s'}}$ and the estimated sources $\mathbf{\hat X^{s}}$ is computed first, and the assignment with minimum MSE is picked. In other words, the training criterion is
\begin{align}
J = \frac{1}{S} \min_{s' \in permu(S)} \sum_s \sum_t || \mathbf{X_t^{s'}} - \mathbf{\hat{X}_t^s} ||^2, s=1, \cdots, S
\end{align}
where $permu(S)$ denotes the permutations of $1, \cdots, S$. We note two important ingredients in this objective function. First, it automatically finds the appropriate assignment no matter how the labels are ordered. Second, the MSE is computed over the whole sequence for each assignment. This forces all frames of the same speaker to be aligned with the same output segment, which can be regarded as performing feature-level tracing implicitly. With this new objective function, we can simultaneously perform label assignment and error evaluation at the feature level. It is expected that the feature streams separated with PIT (Figure \ref{fig:models1}(b)) have higher quality than those separated with a fixed reference order (Figure \ref{fig:models1}(a)), and hence that the recognition error rates on these feature streams are lower. Note that the computational cost of the permutation search is negligible compared to the network forward computation during training, and no permutation (and thus no extra cost) is needed during evaluation.
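The utterance-level PIT-MSE criterion above can be sketched as follows; a minimal numpy illustration of the permutation search, with our own function name and array layout.

```python
import numpy as np
from itertools import permutations

def pit_mse(refs, ests):
    """Utterance-level PIT-MSE.

    refs, ests : (S, T, D) arrays -- S reference / estimated feature
                 streams of T frames each.
    Returns (minimum mean squared error over assignments, best
    assignment), where output stream s is paired with reference
    stream perm[s].
    """
    S = refs.shape[0]
    best, best_perm = None, None
    for perm in permutations(range(S)):   # S! candidate assignments
        err = sum(((refs[perm[s]] - ests[s]) ** 2).sum() for s in range(S))
        if best is None or err < best:
            best, best_perm = err, perm
    return best / S, best_perm
```

Because the error is summed over the whole utterance before the minimum is taken, the winning assignment is constant across frames, which is exactly the implicit speaker tracing described above.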
\begin{figure*}[!htbp]
\centering
\subfigure[Arch\#3: Direct multi-talker mixed speech recognition with PIT]{
\includegraphics[width=0.47\linewidth]{figure/PIT_ASR4.jpg}
}
\subfigure[Arch\#4: Joint optimization of PIT-based feature separation and recognition]{
\includegraphics[width=0.47\linewidth]{figure/PIT_SS_ASR_Joint.jpg}
}
\caption{Advanced architectures for multi-talker mixed speech recognition}
\label{fig:models2}
\end{figure*}
\subsection{Direct Multi-Talker Mixed Speech Recognition with PIT}
In the previous two architectures, mixed speech features are first separated explicitly and then recognized independently with a conventional single-talker LVCSR system. Since the feature separation is not perfect, there is a mismatch between the separated features and the normal features used to train the conventional LVCSR system. In addition, the objective of minimizing the MSE between the estimated and reference features is not directly related to recognition performance. In this section, we propose an end-to-end architecture that directly recognizes the mixed speech of multiple speakers.
In this architecture, denoted as {\bf{Arch\#3}}, we apply PIT to the CE between the reference and estimated senone posterior probability distributions as shown in Figure \ref{fig:models2}(a). Given some feature representation $\mathbf{Y}$ of the mixed speech $\mathbf{y}$, this model will compute
\begin{align}
\mathbf{H}_0 &= \mathbf{Y} \\
\mathbf{H}_i^{f} &= RNN_i^{f}(\mathbf{H}_{i-1}), i=1,\cdots,N \\
\mathbf{H}_i^{b} &= RNN_i^{b}(\mathbf{H}_{i-1}), i=1,\cdots,N \\
\mathbf{H}_i &= Stack(\mathbf{H}_i^{f}, \mathbf{H}_i^{b}), i=1,\cdots,N \\
\mathbf{H}_o^s &= Linear(\mathbf{H}_N), s=1,\cdots,S \\
\mathbf{O}^s &= Softmax(\mathbf{H}_o^s), s=1,\cdots,S
\end{align}
using a deep bidirectional RNN, where Equations (8)$\sim$(11) are similar to Equations (1)$\sim$(4). $\mathbf{H}_o^s, s=1,\cdots,S$ is the excitation at the output layer for speech stream $s$, and $\mathbf{O}^s, s=1,\cdots,S$ is the corresponding output segment. Different from the architectures discussed in the previous sections, each output segment here represents the estimated senone posterior probability of one speech stream. No additional feature separation, clustering, or speaker tracing is needed. Although various neural network structures can be used, in this study we focus on bidirectional LSTM-RNNs.
In this direct multi-talker mixed speech recognition architecture, we minimize the objective function
\begin{align}
J &= \frac{1}{S} \min_{s' \in permu(S)} \sum_{s} \sum_{t} { CE(\mathbf{\ell}_t^{s'},\mathbf{O}_t^s)}, s=1,\cdots,S
\end{align}
In other words, we minimize the minimum average CE over all possible output-label assignments. All frames of the same speaker are forced to align with the same output segment, because the CE is computed over the whole sequence for each assignment. This strategy allows direct multi-talker mixed speech recognition without explicit separation, yielding a simpler and more compact architecture.
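The PIT-CE objective mirrors the PIT-MSE criterion, with cross entropy over senone posteriors in place of the feature-level squared error. A minimal numpy sketch, with our own function name and array layout:

```python
import numpy as np
from itertools import permutations

def pit_ce(label_seqs, posteriors):
    """Utterance-level PIT over cross entropy.

    label_seqs : (S, T) integer senone labels, one sequence per speaker
    posteriors : (S, T, C) softmax outputs, one segment per output stream
    Returns the minimum average CE over all output-label assignments.
    """
    S, T = label_seqs.shape
    t_idx = np.arange(T)
    best = None
    for perm in permutations(range(S)):
        # CE of assigning label stream perm[s] to output segment s
        ce = sum(-np.log(posteriors[s][t_idx, label_seqs[perm[s]]]).sum()
                 for s in range(S))
        if best is None or ce < best:
            best = ce
    return best / S
```

When the two output segments happen to track the "wrong" speakers relative to a fixed label order, the permutation search still recovers the low-loss assignment, which is what keeps training from collapsing.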
\subsection{Joint Optimization of PIT-based Feature Separation and Recognition}
As mentioned above, the main drawback of the feature separation architectures is the mismatch between the distorted separation results and the features used to train the single-talker LVCSR system. The direct multi-talker mixed speech recognition with PIT, which bypasses the feature separation step, is one solution to this problem. Here we propose another architecture, joint optimization of PIT-based feature separation and recognition, denoted {\bf{Arch\#4}} and shown in Figure \ref{fig:models2}(b).
This architecture contains two PIT components: the front-end feature separation module with PIT-MSE and the back-end recognition module with PIT-CE. Different from the architecture in Figure \ref{fig:models1}(b), here a new LVCSR system is trained on the output of the feature separation module using PIT-CE. The whole model is trained progressively: the front-end feature separation module is first optimized with PIT-MSE ($J_1$ below); then the parameters of the back-end recognition module are optimized with PIT-CE ($J_2$ below) while keeping the parameters of the feature separation module fixed; finally, the parameters of both modules are jointly refined with PIT-CE using a small learning rate. Note that the reference assignment in the recognition (PIT-CE) step is the same as that in the separation (PIT-MSE) step.
\begin{align}
J_{1} = \frac{1}{S} \min_{s' \in permu(S)} \sum_s \sum_t || \mathbf{X_t^{s'}} - \mathbf{\hat{X}_t^s} ||^2, s=1, \cdots, S
\end{align}
\begin{align}
J_2 &= \frac{1}{S} \min_{s' \in permu(S)} \sum_{s} \sum_{t} { CE(\mathbf{\ell}_t^{s'},\mathbf{O}_t^s)}, s=1,\cdots,S
\end{align}
During decoding, the mixed speech features are fed into this architecture, and the final posterior streams are used for decoding as normal.
\section{Experimental Results} \label{sec:exp}
To evaluate the performance of the proposed architectures, we conducted a series of experiments on an artificially generated two- and three-talker mixed speech dataset based on the AMI corpus \cite{hain2012transcribing}.
There are four reasons for using AMI: 1) AMI is a speaker-independent, spontaneous LVCSR corpus. Compared to the small-vocabulary, speaker-dependent, read English datasets used in most previous studies \cite{MonauralSpeechSepChallenge-Cooke2010,SingleChannelSep-Weng2015,HERSHEY201045,rennie2010single}, observations made and conclusions drawn on AMI are more likely to generalize to other real-world scenarios; 2) AMI is a genuinely hard task, with various kinds of noise, truly spontaneous meeting-style speech, and strong accents. It reflects the true ability of LVCSR when the training set is around 100 hours. The state-of-the-art word error rate (WER) on AMI is around 25.0\% for the close-talk condition \cite{PureMMI-Povey2016} and more than 45.0\% for the single-microphone far-field condition \cite{PureMMI-Povey2016,HighwayBLSTM-zhang2016}. These WERs are much higher than on other corpora, such as Switchboard \cite{godfrey-switchboard}, on which the WER is now below 10.0\% \cite{HumanParity-Xiong2016,PureMMI-Povey2016,CNN-Dilated-sercu2016,IBM-SWB-Saon2016}; 3) Although the close-talk data (AMI IHM) was used to generate the mixed speech in this work, the existence of parallel far-field data (AMI SDM/MDM) will allow us to evaluate our architectures on far-field data in the future; 4) AMI is a public corpus, so interested readers can reproduce our results more easily.
The AMI IHM (close-talk) dataset contains about 80 hours and 8 hours of speech in the training and evaluation sets, respectively \cite{hain2012transcribing,swietojanski2013hybrid}. From AMI IHM, we generated a two-talker (IHM-2mix) and a three-talker (IHM-3mix) mixed speech dataset.
To artificially synthesize IHM-2mix, we randomly select two speakers and then randomly select one utterance from each to form a mixed-speech utterance. For ease of explanation, the high-energy (High E) speaker in the mixed speech is always taken as the target speaker and the low-energy (Low E) speaker as the interfering speaker. We synthesized mixed speech for five SNR conditions (0dB, 5dB, 10dB, 15dB, 20dB) based on the energy ratio of the two talkers. To eliminate easy cases, we force the lengths of the selected source utterances to be comparable, so that at least half of each mixed utterance contains overlapping speech. When the two source utterances differ in length, the shorter one is padded with low-level noise at the front and end. The same procedure is used for both the training and testing data. We generated in total 400 hours of two-talker mixed speech, 80 hours per SNR condition, as the training set; a subset of 80 hours from this training set was used for fast model training and evaluation. For evaluation, a total of 40 hours of two-talker mixed speech, 8 hours per SNR condition, was generated.
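The mixing step above can be sketched as follows; a hypothetical helper under stated assumptions, not the authors' exact script (in particular, the padding-noise level and the trimming of a longer interference signal are our own simplifications).

```python
import numpy as np

def mix_at_snr(target, interf, snr_db, rng=None):
    """Mix two utterances at a given SNR defined by their energy ratio,
    padding the shorter interference with low-level noise at both ends."""
    rng = rng or np.random.default_rng()
    if len(interf) < len(target):
        # pad the shorter signal front and back with small noise
        pad = len(target) - len(interf)
        front = pad // 2
        noise = 1e-4 * rng.standard_normal(pad)
        interf = np.concatenate([noise[:front], interf, noise[front:]])
    elif len(target) < len(interf):
        interf = interf[:len(target)]   # simplification: trim to length
    # scale the interference so that 10*log10(E_target / E_interf) == snr_db
    e_t = np.sum(target ** 2)
    e_i = np.sum(interf ** 2)
    interf = interf * np.sqrt(e_t / (e_i * 10 ** (snr_db / 10)))
    return target + interf
```

By construction the energy ratio of the returned mixture's two components matches `snr_db` exactly, which makes it easy to verify each SNR condition after synthesis.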
The IHM-3mix dataset was generated similarly. The relative energy of the three speakers in each mixed utterance varies randomly in the training set. Different from the training set, all the speakers in the same mixed utterance have equal energy in the testing set. We generated in total 400hr and 8hr three-talker mixed speech as the training and testing set, respectively.
\begin{figure}
\centering \includegraphics[width=0.9\linewidth]{figure/spec_highE.png} \includegraphics[width=0.9\linewidth]{figure/spec_lowE.png} \includegraphics[width=0.9\linewidth]{figure/spec_twotalker.png}
\caption{Spectrogram comparison between the original single-talker clean speech and the 0dB two-talker mixed speech in the IHM-2mix dataset}
\label{fig:spectrum}
\end{figure}
Figure \ref{fig:spectrum} compares the spectrogram of a single-talker clean utterance with that of the corresponding 0dB two-talker mixed utterance in the IHM-2mix dataset. Clearly, it is very hard to separate the spectrogram and reconstruct the source utterances by visual inspection.
\subsection{Single-speaker Recognition Baseline}
In this work, all neural networks were built using the Microsoft Cognitive Toolkit (CNTK) \cite{yu2014introduction}, and the decoding systems were built with Kaldi \cite{povey2011kaldi}. We first followed the officially released Kaldi recipe to build an LDA-MLLT-SAT GMM-HMM model, which uses 39-dimensional MFCC features and has roughly 4K tied states and 80K Gaussians. This acoustic model was then used to generate the senone alignments for neural network training. We trained DNN and BLSTM-RNN baseline systems on the original AMI IHM data, using 80-dimensional log filter bank (LFBK) features with CMVN. The DNN has 6 hidden layers, each with 2048 sigmoid neurons, and its input is a window of 11 frames. The BLSTM-RNN has 3 bidirectional LSTM layers, each with 512 memory cells, followed by the softmax layer; its input is a single acoustic frame. All models explored here are optimized with the cross-entropy criterion, the DNN using SGD with a minibatch size of 256 and the BLSTM-RNN using SGD with 4 full-length utterances per minibatch.
For decoding, we used a 50K-word dictionary and a trigram language model interpolated from models built on the AMI transcripts and the Fisher English corpus. The performance of the two baselines on the original single-speaker AMI corpus is presented in Table \ref{tab:baselinewerami}. These results are comparable to those reported by others \cite{swietojanski2013hybrid}, even though we did not use fMLLR-adapted features. We note that adding more BLSTM layers did not yield a meaningful WER reduction in the baseline.
\begin{table}[th]
\caption{WER (\%) of the baseline systems on original AMI IHM single-talker corpus}
\label{tab:baselinewerami}
\centering
\begin{tabular}{ c c }
\toprule
\textbf{Model} & \textbf{WER} \\
\midrule
DNN & 28.0 \\
BLSTM & 26.6 \\
\bottomrule
\end{tabular}
\end{table}
To test the normal single-speaker model on the two-talker mixed speech, the baseline BLSTM-RNN model described above is used to decode the mixed speech directly. During scoring we compare the single decoding output with the reference of each source utterance to obtain the WER for that source utterance. Table \ref{tab:baselinewermixed} summarizes the recognition results. The single-speaker model clearly performs very poorly on the multi-talker mixed speech, as indicated by the huge WER degradation for the high-energy speaker as the SNR decreases. Furthermore, the WERs for the low-energy speaker are above 100.0\% in all conditions. These results demonstrate how challenging multi-talker mixed speech recognition is.
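The WER used in this scoring is the standard edit-distance measure; a minimal sketch is given below (illustrative only, not the actual Kaldi scoring tool). Note that insertion errors can push WER above 100\%, which is exactly what happens for the low-energy speaker.

```python
def wer(ref, hyp):
    """Word error rate: (substitutions + insertions + deletions) / ref length."""
    r, h = ref.split(), hyp.split()
    # d[i][j]: edit distance between the first i ref words and first j hyp words.
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i
    for j in range(len(h) + 1):
        d[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            sub = d[i - 1][j - 1] + (r[i - 1] != h[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return 100.0 * d[len(r)][len(h)] / len(r)

# The single decoding output is scored against both source references:
hyp = "the red cat"
wers = [wer("the red cat sat", hyp), wer("a blue dog", hyp)]
```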
\begin{table}[th]
\caption{WER (\%) of the baseline BLSTM-RNN single-speaker system on the IHM-2mix dataset}
\label{tab:baselinewermixed}
\centering
\begin{tabular}{ c c c }
\toprule
\textbf{SNR Condition} & \textbf{High E Spk} & \textbf{Low E Spk} \\
\midrule
0db & 85.0 & 100.5 \\
5db & 68.8 & 110.2 \\
10db & 51.9 & 114.9 \\
15db & 39.3 & 117.6 \\
20db & 32.1 & 118.7 \\
\bottomrule
\end{tabular}
\end{table}
\subsection{Evaluation of Two-talker Speech Recognition Architectures}
The four proposed architectures for two-talker speech recognition are evaluated here. For the first two approaches (Arch\#1 and Arch\#2), which contain an explicit feature separation stage (without and with PIT-MSE, respectively), a 3-layer BLSTM is used in the feature separation module. The separated feature streams are fed into a normal 3-layer BLSTM LVCSR system, trained on single-talker speech, for decoding; the whole system thus contains six BLSTM layers in total. For the other two approaches (Arch\#3 and Arch\#4), in which PIT-CE is used, 6-layer BLSTM models are used so that the number of parameters is comparable to the other two architectures. In all architectures the input is the 40-dimensional LFBK feature and each layer contains 768 memory cells. To train the latter two architectures, which exploit PIT-CE, we need senone alignments for the mixed speech: the alignments of the two talkers in each mixed utterance are taken from the single-speaker baseline alignments, and the alignment of the shorter utterance is padded with the silence state at the front and the end. All models were trained with a minibatch of 8 utterances, and the gradient was clipped to 0.0003 to ensure training stability. The results reported in this section were obtained with the 80hr mixed-speech training subset.
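The silence padding of the shorter utterance's alignment can be sketched as below; the even front/back split and the silence senone id of 0 are illustrative assumptions, not necessarily the exact convention used in our pipeline.

```python
def pad_alignment(align, target_len, sil_id=0):
    """Pad a senone alignment to target_len frames with silence at both ends."""
    pad = target_len - len(align)
    assert pad >= 0, "alignment longer than mixture"
    front = pad // 2
    back = pad - front
    return [sil_id] * front + list(align) + [sil_id] * back

padded = pad_alignment([7, 7, 8], 7, sil_id=0)
# padded == [0, 0, 7, 7, 8, 0, 0]
```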
The recognition results for both speakers are evaluated. For scoring, we evaluate the two hypotheses, obtained from the two output sections, against the two references, and pick the assignment with the lower WER to compute the final WER.
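This best-assignment scoring, and analogously the PIT-CE training criterion itself, amounts to minimising over all output-to-reference assignments. A minimal sketch with a generic pairwise-loss function (the scalar "loss" in the toy example is purely illustrative):

```python
from itertools import permutations

def best_assignment(outputs, references, pairwise_loss):
    """Return (min_total_loss, best_permutation) over all assignments.

    outputs[i] is compared against references[perm[i]]; the permutation
    with the smallest summed loss is selected, as in PIT.
    """
    best = None
    for perm in permutations(range(len(references))):
        total = sum(pairwise_loss(outputs[i], references[p])
                    for i, p in enumerate(perm))
        if best is None or total < best[0]:
            best = (total, perm)
    return best

# Toy example: "loss" is the absolute difference of scalar summaries.
loss, perm = best_assignment([1.0, 5.0], [4.9, 1.2], lambda a, b: abs(a - b))
# perm == (1, 0): output 0 is matched to reference 1, and vice versa.
```

The same routine generalises to three outputs and references, where 3! = 6 assignments are compared.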
The results for the 0dB SNR condition are shown in Table \ref{tab:diff_pitwer_nn}. Compared to the 0dB condition in Table \ref{tab:baselinewermixed}, all the proposed multi-talker speech recognition architectures obtain clear improvements on both speakers. Of the two architectures with an explicit feature separation stage, the one with PIT-MSE is significantly better than the baseline feature separation architecture, confirming that the label permutation problem can be well alleviated by PIT-MSE at the feature level. We also observe that applying PIT-CE in the recognition module (Arch\#3 \& Arch\#4) further reduces the WER by 10.0\% absolute. This is because these two architectures greatly reduce the mismatch between the separated features and the features used to train the LVCSR model, and because cross-entropy is more directly related to recognition accuracy. Comparing Arch\#3 and Arch\#4, the architecture with joint optimization of PIT-based feature separation and recognition slightly outperforms the direct PIT-CE based model.
Since Arch\#3 and Arch\#4 achieve comparable results, and the model architecture and training process of Arch\#3 is much simpler than that of Arch\#4, our further evaluations reported in the following sections are based on Arch\#3. For clarity, Arch\#3 is named {\bf{direct PIT-CE-ASR}} from now on.
\begin{table}[th]
\caption{WER (\%) of the proposed multi-talker mixed speech recognition architectures on the IHM-2mix dataset under the 0dB SNR condition (using the 80hr training subset). Arch\#1-\#4 indicate the proposed architectures described in Section III.A-D, respectively}
\label{tab:diff_pitwer_nn}
\centering
\begin{tabular}{c c c | c c }
\toprule
\textbf{Arch} & \textbf{Front-end} & \textbf{Back-end} & \textbf{High E WER} & \textbf{Low E WER} \\
\midrule
\#1 & Feat-Sep-baseline & Single-Spk-ASR & 72.58 & 79.61 \\
\#2 & Feat-Sep-PIT-MSE & Single-Spk-ASR & 68.88 & 75.62 \\
\midrule
\#3 & $\times$ & PIT-CE & 59.72 & 66.96 \\
\#4 & Feat-Sep-PIT-MSE & PIT-CE & 58.68 & 66.25 \\
\bottomrule
\end{tabular}
\end{table}
\subsection{Evaluation of the Direct PIT-CE-ASR Model on Large Dataset}
\begin{figure*}[!htbp]
\centering
\includegraphics[width=\linewidth]{figure/separateDecodeBaseline2.png}
\caption{Decoding results of the baseline single-speaker BLSTM-RNN system on a 0dB two-talker mixed-speech sample}
\label{fig:decodeSampleBs}
\end{figure*}
\begin{figure*}[!htbp]
\includegraphics[width=\linewidth]{figure/separateDecodePIT2.png}
\caption{Decoding results of the proposed direct PIT-CE-ASR model on a 0dB two-talker mixed-speech sample}
\label{fig:decodeSamplePIT}
\end{figure*}
We evaluated the direct PIT-CE-ASR architecture on the full IHM-2mix corpus. All the 400hr mixed data under different SNR conditions are pooled together for training. The direct PIT-CE-ASR model is still composed of 6 BLSTM layers with 768 memory cells in each layer. All other configurations are also the same as the experiments conducted on the subset.
The results under different SNR conditions are shown in Table \ref{tab:pitwer}. The direct PIT-CE-ASR model achieves significant improvements for both talkers compared to the baseline results in Table \ref{tab:baselinewermixed} under all SNR conditions. Compared to the results in Table \ref{tab:diff_pitwer_nn}, obtained with the 80hr training subset, an additional 10.0\% absolute WER improvement on both speakers is obtained with the large training set. We also observe that the WER increases only slowly with decreasing SNR for the high-energy speaker, while the WER improvement is very significant for the low-energy speaker across all conditions. In the 0dB SNR scenario, the WERs of the two speakers are very close and are about 45.0\% lower (relative) than those achieved with the single-talker ASR system. At 20dB SNR, the WER of the high-energy speaker is still significantly better than the baseline and approaches the single-talker recognition result reported in Table \ref{tab:baselinewerami}.
\begin{table}[th]
\caption{WER (\%) of the proposed direct PIT-CE-ASR model on the IHM-2mix dataset with full training set}
\label{tab:pitwer}
\centering
\begin{tabular}{c c c }
\toprule
\textbf{SNR Condition} & \textbf{High E WER} & \textbf{Low E WER} \\
\midrule
0db & 47.77 & 54.89 \\
5db & 39.25 & 59.24 \\
10db & 33.83 & 64.14 \\
15db & 30.54 & 71.75 \\
20db & 28.75 & 79.88 \\
\bottomrule
\end{tabular}
\end{table}
\subsection{Permutation Invariant Training with Alternative Deep Learning Models}
We investigated the direct PIT-CE-ASR model with alternative deep learning models. The first model we evaluated is a 6-layer feed-forward DNN in which each layer contains 2048 Sigmoid units. The input to the DNN is a window of 11 frames each with a 40-dimensional LFBK feature.
The results of the DNN-based PIT-CE-ASR model are reported at the top of Table \ref{tab:pitwer-diff-nn}. Although it still achieves a clear improvement over the baseline single-speaker model, the gain is much smaller than that of the BLSTM-based PIT-CE-ASR model, with a WER gap of nearly 20.0\% absolute in every condition. The difference between the DNN and BLSTM models is partially attributable to the stronger modeling power of BLSTMs and partially to the better tracing ability of RNNs.
We also compared BLSTM models with 4, 6, and 8 layers, as shown in Table \ref{tab:pitwer-diff-nn}. Deeper BLSTM models perform better. This is different from the single-speaker ASR model, whose performance peaks at 4 BLSTM layers \cite{HighwayBLSTM-zhang2016}: the direct PIT-CE-ASR architecture has to conduct two tasks, separation and recognition, and thus requires additional modeling power.
\begin{table}[th]
\caption{WER (\%) of the direct PIT-CE-ASR model using different deep learning models on the IHM-2mix dataset}
\label{tab:pitwer-diff-nn}
\centering
\begin{tabular}{c | c c c }
\toprule
\textbf{Models} & \textbf{SNR Condition} & \textbf{High E WER} & \textbf{Low E WER} \\
\midrule
\multirow{5}{*}{6L-DNN} & 0db & 72.95 & 80.29 \\
& 5db & 65.42 & 84.44 \\
& 10db & 55.27 & 86.55 \\
& 15db & 47.12 & 89.21 \\
& 20db & 40.31 & 92.45 \\
\midrule
\multirow{5}{*}{4L-BLSTM} & 0db & 49.74 & 56.88 \\
& 5db & 40.31 & 60.31 \\
& 10db & 34.38 & 65.52 \\
& 15db & 31.24 & 73.04 \\
& 20db & 29.68 & 80.83 \\
\midrule
\multirow{5}{*}{6L-BLSTM} & 0db & 47.77 & 54.89 \\
& 5db & 39.25 & 59.24 \\
& 10db & 33.83 & 64.14 \\
& 15db & 30.54 & 71.75 \\
& 20db & 28.75 & 79.88 \\
\midrule
\multirow{5}{*}{8L-BLSTM} & 0db & 46.91 & 53.89 \\
& 5db & 39.14 & 59.00 \\
& 10db & 33.47 & 63.91 \\
& 15db & 30.09 & 71.14 \\
& 20db & 28.61 & 79.34 \\
\bottomrule
\end{tabular}
\end{table}
\subsection{Analysis on Multi-Talker Speech Recognition Results}
To better understand the results of multi-talker speech recognition, we computed the WER separately for speech mixed from same-gender and opposite-gender speakers. The results are shown in Table \ref{tab:pitwer_gender}. Same-gender mixed speech is much more difficult to recognize than opposite-gender mixed speech, and the gap widens as the energy ratio of the two speakers approaches 1. It is also observed that mixed speech of two male speakers is harder to recognize than that of two female speakers. These results suggest that effective exploitation of gender information may further improve the multi-talker speech recognition system; we will explore this in future work.
\begin{table}[th]
\caption{WER (\%) comparison of the 6-layer-BLSTM direct PIT-CE-ASR model on the mixed speech generated from two male speakers ({\bf M + M}), two female speakers ({\bf F + F}) and a male and a female speaker ({\bf M + F})}
\label{tab:pitwer_gender}
\centering
\begin{tabular}{c | c c c }
\toprule
\textbf{Genders} & \textbf{SNR Condition} & \textbf{High E WER} & \textbf{Low E WER} \\
\midrule
\multirow{3}{*}{M + M} & 0db & 52.18 & 59.32 \\
& 5db & 42.64 & 61.77 \\
& 10db & 36.10 & 63.94\\
\midrule
\multirow{3}{*}{F + F} & 0db & 49.90 & 57.59 \\
& 5db & 40.02 & 60.92 \\
& 10db & 32.47 & 65.15\\
\midrule
\multirow{3}{*}{M + F} & 0db & 44.89 & 51.72 \\
& 5db & 37.34 & 57.43 \\
& 10db & 33.22 & 63.86 \\
\bottomrule
\end{tabular}
\end{table}
To further understand our model, we examined the recognition results with and without the direct PIT-CE-ASR. An example on a 0dB two-talker mixed utterance is shown in Figure \ref{fig:decodeSampleBs} (using the single-speaker baseline system) and Figure \ref{fig:decodeSamplePIT} (using direct PIT-CE-ASR). The results are clearly erroneous when the single-speaker baseline system is used to recognize the two-talker mixed speech; in contrast, many more words are recognized correctly with the proposed direct PIT-CE-ASR model.
\subsection{Three-Talker Speech Recognition with Direct PIT-CE-ASR}
In this subsection, we further extend and evaluate the proposed direct PIT-CE-ASR model on the three-talker mixed speech using the IHM-3mix dataset.
The three-talker direct PIT-CE-ASR model is also a 6-layer BLSTM model, and the training and testing configurations are the same as those for two-talker speech recognition. The training progress of direct PIT-CE-ASR, measured by CE on both the two- and three-talker mixed-speech training and validation sets, is illustrated in Figure \ref{fig:figure_ce}. The model converges slowly with this configuration, and the CE improvement on the training and validation sets progresses almost identically. The training progress on three-talker mixed speech is similar to that on two-talker mixed speech, but with a clearly higher CE value, which indicates how challenging it is to recognize speech mixed from more than two talkers. Note that in this set of experiments we used the same model configuration as for two-talker mixed speech recognition; since three-talker mixed speech recognition is much harder, deeper and wider models may help to improve performance. Due to resource limitations, we did not search for the best configuration for this task.
\begin{figure}
\centering \includegraphics[width=0.9\linewidth]{figure/PIT-ASR-CE2.pdf}
\caption{CE values over epochs on both the IHM-2mix and IHM-3mix training and validation sets with the proposed direct PIT-CE-ASR model}
\label{fig:figure_ce}
\end{figure}
The three-talker mixed speech recognition WERs are reported in Table \ref{tab:three-spk}, along with WERs for the different gender combinations. The WERs achieved with the single-speaker model are listed in the first line of Table \ref{tab:three-spk}. Compared to the results on IHM-2mix, the results on IHM-3mix obtained with the conventional single-speaker model are significantly worse. Even in this extremely hard setup, the proposed direct PIT-CE-ASR architecture demonstrates its ability to separate, trace, and recognize the mixed speech, achieving about 25.0\% relative WER reduction across all three speakers. Although the performance gap from two-talker to three-talker is obvious, the result is still very promising for this speaker-independent three-talker LVCSR task. Not surprisingly, mixed speech from different genders is relatively easier to recognize than that from the same gender.
\begin{table}[th]
\caption{WER (\%) comparison of the baseline single-speaker BLSTM-RNN system and the proposed direct PIT-CE-ASR model on the IHM-3mix dataset. {\bf Diff} indicates the mixed speech is from different genders, and {\bf Same} indicates the mixed speech is from same gender}
\label{tab:three-spk}
\centering
\begin{tabular}{c c c c c}
\toprule
\textbf{Genders} & \textbf{Model} & \textbf{Speaker1} & \textbf{Speaker2} & \textbf{Speaker3} \\
\midrule
All & BLSTM-RNN & 91.0 & 90.5 & 90.8 \\
\midrule
All & \multirow{3}{*}{direct PIT-CE-ASR} & 69.54 & 67.35 & 66.01 \\
Different & & 69.36 & 65.84 & 64.80 \\
Same & & 72.21 & 70.11 & 69.78 \\
\bottomrule
\end{tabular}
\end{table}
Moreover, we conducted another interesting experiment: we used the three-talker PIT-CE-ASR model to recognize the two-talker mixed speech. The results are shown in Table \ref{tab:pitwer_three2two}. Surprisingly, they are almost identical to those obtained with the 6-layer BLSTM based two-talker model (shown in Table \ref{tab:pitwer}). This demonstrates the good generalization ability of the proposed direct PIT-CE-ASR model across a variable number of mixed speakers, and suggests that a single PIT model may be able to recognize mixed speech with a varying number of speakers without knowing or estimating that number.
\begin{table}[th]
\caption{WER (\%) of using three-talker direct PIT-CE-ASR model to recognize two-talker mixed IHM-2mix speech}
\label{tab:pitwer_three2two}
\centering
\begin{tabular}{c | c c c }
\toprule
\textbf{Model} & \textbf{SNR Condition} & \textbf{High E WER} & \textbf{Low E WER} \\
\midrule
\multirow{5}{*}{Three-Talker PIT-CE-ASR} & 0db & 46.63 & 54.59 \\
& 5db & 39.47 & 59.78 \\
& 10db & 34.50 & 64.55 \\
& 15db & 32.03 & 72.88 \\
& 20db & 30.66 & 81.63 \\
\bottomrule
\end{tabular}
\end{table}
\section{Conclusion} \label{sec:conclusion}
In this paper, we proposed several architectures for recognizing multi-talker mixed speech given only a single channel of the mixed signal. Our technique is based on permutation invariant training, which was originally developed for the separation of multiple speech streams. PIT can be applied in the front-end feature separation module to obtain better separated feature streams, or extended to the back-end recognition module to predict the separated senone posterior probabilities directly. Moreover, PIT can be applied in both the front-end and the back-end with a joint-optimization architecture. When using PIT to optimize a model, the criterion is computed over all frames of the whole utterance for each possible output-target assignment, and the assignment with the minimum loss is picked for parameter optimization. PIT thus addresses the label permutation problem well and conducts speaker separation and tracing in one shot. In particular, with the proposed architecture using the direct PIT-CE based recognition model, multi-talker mixed speech recognition can be conducted directly without an explicit separation stage.
The proposed architectures were evaluated and compared on an artificially mixed AMI dataset with both two- and three-talker mixed speech. The experimental results indicate that the proposed architectures are very promising: our models obtain 45.0\% and 25.0\% relative WER reductions over the state-of-the-art single-talker speech recognition system across all speakers of comparable energy, for two- and three-talker mixed speech, respectively. Another interesting observation is that there is no degradation when using the proposed three-talker model to recognize two-talker mixed speech directly, which suggests that one model could recognize speech mixed from a variable number of speakers without knowing or estimating that number. To our knowledge, this is the first work on multi-talker mixed speech recognition for a challenging speaker-independent spontaneous LVCSR task.
\section*{Acknowledgment}
This work was supported by the Shanghai Sailing Program No. 16YF1405300, the China NSFC projects (No. 61573241 and No. 61603252), the Interdisciplinary Program (14JCZ03) of Shanghai Jiao Tong University in China, and the Tencent-Shanghai Jiao Tong University joint project. Experiments have been carried out on the PI supercomputer at Shanghai Jiao Tong University.
\ifCLASSOPTIONcaptionsoff
\newpage
\fi
\bibliographystyle{IEEEtran}
\section{Introduction}
Recursive algebraic data types (ADTs) with absolutely free
constructors are increasingly supported by SMT solvers, and find
application in a variety of areas, including functional programming,
modelling languages, proof assistants, and verification. In solvers,
ADTs are usually implemented as native theory
solvers~\cite{Oppen:1980:RRD:322203.322204,DBLP:journals/jsat/BarrettST07,Suter:2010:DPA:1707801.1706325,DBLP:journals/jar/ReynoldsB17}
that apply congruence closure (upward closure), syntactic unification
(downward closure), cycle detection (occurs-check), and in additional
handle selectors and testers in order to decide satisfiability of
quantifier-free ADT formulas.
In this paper, we study a simple alternative approach to ADT
reasoning, based on the \emph{reduction} of ADT formulas to
equisatisfiable formulas over uninterpreted functions and linear
integer arithmetic (EUF+LIA).
Our approach is partly inspired, besides by eager SMT in general, by
the reduction approach from \cite{kapurInt}, in which quantifier-free
formulas are mapped to simpler theories for the purpose of checking
satisfiability and computing interpolants. For instance, as shown in
\cite{kapurInt}, the theory of sets with finite cardinality
constraints can be reduced to the theory of equality with
uninterpreted functions (EUF). Like in \cite{kapurInt}, the target
theories of our ADT reduction are EUF and linear arithmetic. Unlike
\cite{kapurInt}, we are able to completely avoid universal quantifiers
in the process of reduction, but the reduction depends on the
introduction of further uninterpreted functions (which create some
additional work in interpolation, see
Section~\ref{sec:interpolation}).
The main idea of the reduction is to augment an ADT formula with
additional literals that ensure that constructors, selectors, and
testers are interpreted consistently, and that constructors are free.
EUF takes care of upward and downward closure, while cycle detection
and constructor testers are handled by LIA constraints. The reduction can
be implemented with little effort, and is widely applicable since EUF
and LIA are supported by virtually all SMT solvers, and increasingly
also by other theorem provers. Reduction to EUF+LIA has a few further
advantages, in particular it is possible to reuse existing, highly
optimised EUF+LIA simplifiers in solvers, and to compute interpolants
using EUF+LIA interpolation procedures.
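As a rough illustration of the reduction idea, the sketch below augments a single constructor equation $\id{cons}(h, t) \approx x$ with EUF literals enforcing selector and tester consistency, plus a LIA depth literal that rules out cyclic models. The string-based literal representation and the particular literal set are purely illustrative and are not the exact rule set defined later in the paper.

```python
def reduce_ctor_eq(ctor, args, res):
    """EUF+LIA literals making ctor(args) = res consistent and acyclic.

    Literals are plain strings over uninterpreted symbols plus an
    integer-valued 'depth' function (the LIA part).
    """
    lits = [f"{ctor}({', '.join(args)}) = {res}"]    # EUF: the equation itself
    for j, a in enumerate(args, start=1):
        lits.append(f"{ctor}_{j}({res}) = {a}")      # EUF: selector consistency
    lits.append(f"is_{ctor}({res}) = 1")             # EUF/LIA: tester value
    for a in args:                                   # LIA: strict depth decrease
        lits.append(f"depth({a}) < depth({res})")    #      prevents cycles
    return lits

lits = reduce_ctor_eq("cons", ["h", "t"], "x")
```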
The contributions of the paper are
(i)~definition and correctness proof of the reduction from ADTs to
EUF+LIA;
(ii)~discussion of Craig interpolation for ADTs;
(iii)~extension to ADTs with size constraints, and an effective
characterisation of the ADTs for which the resulting procedure is
complete.
The procedures discussed in the paper have been implemented in the
\textsc{Princess} theorem
prover~\cite{princess08}.\footnote{\texttt{\url{http://www.philipp.ruemmer.org/princess.shtml}}}
\subsection{Related Work}
\paragraph{ADT Solving}
While ADTs have only recently been standardised in the SMT-LIB, some
solvers (including STeP~\cite{DBLP:conf/tapsoft/MannaBBCCADKLSU95},
CVC3~\cite{BT07}, CVC4~\cite{DBLP:conf/cav/BarrettCDHJKRT11}, and
Z3~\cite{DBLP:conf/tacas/MouraB08}) have for a while supported ADTs
through native decision procedures extending the congruence closure
algorithm~\cite{Oppen:1980:RRD:322203.322204,DBLP:journals/jsat/BarrettST07,DBLP:journals/jar/ReynoldsB17}.
Native solvers offer excellent performance, but also require
significant implementation effort. The listed solvers do not support
Craig interpolation or formulas with size constraints.
Satisfiability of ADT formulas can also be checked by introducing
explicit axioms about the constructors and selectors. Since ADTs form
a local theory~\cite{DBLP:conf/cade/Sofronie-Stokkermans09}, the set
of required instances of the axioms can effectively be computed, and a
decision procedure for ADT satisfiability is obtained.
Our reduction-based approach sits in between native solvers and
methods based on explicit axioms. Like with explicit axioms, our
method leaves most of the heavy work to other theory solvers (EUF and
LIA), and is therefore easy to implement. The reduction approach is
structure-preserving, however, which makes us believe that it can
utilise existing contextual simplifiers (pre-processors or
in-processors) more effectively than approaches based on axioms; it
also directly gives rise to an interpolation procedure.
\paragraph{ADT Interpolation}
It has been observed in \cite{kapurInt} that the theory of ADTs has
the interpolation property; this result directly follows from
admissibility of quantifier elimination in
ADTs~\cite{Oppen:1980:RRD:322203.322204}. To the best of our
knowledge, our ADT solver implemented in \textsc{Princess} is the
\emph{first proof-based interpolation procedure for ADTs.}
\paragraph{ADTs with Size Constraints}
Our approach for handling ADT formulas with size constraints is
inspired by the more general unfolding-based decision procedure for
ADTs with abstractions (i.e., catamorphisms) in
\cite{Suter:2010:DPA:1707801.1706325}. The algorithm in
\cite{Suter:2010:DPA:1707801.1706325} is complete for
\emph{sufficiently surjective} abstraction functions, which includes
the size function on binary trees, but \emph{not} the size function on
ADTs in general. We augment the setting from
\cite{Suter:2010:DPA:1707801.1706325} by giving a necessary and
sufficient criterion for sufficient surjectivity of the size function,
and thus for completeness of the overall procedure.
ADTs with size constraints can also be represented in the local theory
framework~\cite{DBLP:conf/cade/Sofronie-Stokkermans09}, again by
introducing the necessary instances of explicit axioms.
A further decision procedure for ADTs with size constraints, based on
the concept of length constraint completion, has been described in
\cite{DBLP:journals/iandc/ZhangSM06}. Our method uses the simple
approach of \emph{unfolding} in order to add size constraints to the
overall reduction-based procedure; it is at this point unclear whether
length constraint completion could be combined with the reduction
approach as well.
\section{Preliminaries}
\label{sec:prelims}
We formulate our approach in the setting of multi-sorted first-order
logic. The signature~$\Sigma$ of an ADT is defined by a
sequence~$\sigma^d_1, \ldots, \sigma^d_k$ of sorts and a
sequence~$f_1, \ldots, f_m$ of constructors. The type~$\alpha(f_i)$
of an $n$-ary constructor is an $(n+1)$-tuple~$\langle\sigma_0,
\ldots, \sigma_n\rangle \in \{\sigma^d_1, \ldots, \sigma^d_k\}^{n+1}$,
normally written in the form~$f_i : \sigma_1 \times \cdots \times
\sigma_n \to \sigma_0$. Zero-ary constructors are also called
constants. By slight abuse of notation, we also write $f_i :
\sigma^d_j$ if the result type of $f_i$ is $\sigma^d_j$, i.e., if $f_i
: \sigma_1 \times \cdots \times \sigma_n \to \sigma^d_j$ for some
$\sigma_1, \ldots, \sigma_n$.
In addition to constructors, formulas over ADTs can be formulated in
terms of \emph{variables}~$x \in \cal X$ (with some type $\alpha(x)
\in \{\sigma^d_1, \ldots, \sigma^d_k\}$); \emph{selectors}~$f_i^j$,
which extract the $j^{\text{th}}$ argument of an $f_i$-term; and
\emph{testers}~$\mathit{is}_{f_i}$, which determine whether a term is
an $f_i$-term. The syntactic categories of terms~$t$ and
formulas~$\phi$ are defined by the following rules:
\begin{align*}
t ~~::= & ~~ x && \text{Variables}
\\
| ~ & ~~ f_i(\bar t) && \text{Constructors}
\\
| ~ & ~~ f_i^{j}(t) && \text{Selectors}
\\
\phi ~~::= & ~~ \mathit{is}_{f_i}(t) && \text{Testers}
\\
|~ & ~~ t \approx t && \text{Equality}
\\
|~ & ~~ \phi \wedge \phi ~|~ \phi \vee \phi ~|~ \neg \phi ~|~ \ldots
&&
\text{Boolean operators}
\end{align*}
Well-typed terms and formulas are defined as expected, assuming that
selectors have type $f_i^j : \sigma_0 \to \sigma_j$ whenever $f_i : \sigma_1
\times \cdots \times \sigma_n \to \sigma_0$, and
testers~$\mathit{is}_{f_i}$ expect an argument of type~$\sigma^d_j$ if
$f_i : \sigma^d_j$. In the whole paper, we assume that considered
expressions are well-typed.
\begin{example}[Lists]
\label{ex:lists}
We show examples in the concrete syntax used in our implementation.
\begin{lstlisting}[escapechar=\%]
\sorts {
Colour { red; green; blue; };
CList { nil; cons(Colour head, CList tail); };
}
\end{lstlisting}
Given variables~$x$ of sort~\id{CList} and $y$ of sort~\id{Colour}, a
formula over this data type is:\iflong\footnote{In all examples, the link
\exNum{1} will take you to the web interface of our SMT solver
\textsc{Princess} and directly load the given constraint.}\fi
\begin{lstlisting}[escapechar=\%]
(x.head = red | x = cons(y, nil))
\end{lstlisting}
corresponding to the abstract syntax formula
\begin{align*}
&\mathit{is}_{\id{cons}}(x) \wedge \neg(y \approx \id{blue}) \wedge~
\\
&\big(\id{cons}^1(x) \approx \id{red} \vee
x \approx \id{cons}(y, \id{nil}) \big)
\end{align*}
Assigning $\{x \mapsto \id{cons}(\id{red}, \id{nil}), y\mapsto \id{green} \}$
satisfies the formula.
\hfill\IEEEQED
\end{example}
A constructor term is a ground term~$t$ that only consists of
constructors (i.e., does not contain selectors or variables). We
denote the set of all constructor terms (for some fixed ADT signature)
by $\ensuremath{\mathbbm{T}}$, and the set of all constructor terms of
type~$\sigma^d_j$ by $\ensuremath{\mathbbm{T}}_{\sigma^d_j}$. An ADT is well-defined if
$\ensuremath{\mathbbm{T}}_{\sigma^d_j}$ is non-empty for all sorts~$\sigma^d_1, \ldots,
\sigma^d_k$, and we will henceforth only consider well-defined ADTs.
Semantics is defined in terms of structures~$(\ensuremath{\mathbbm{T}}, I)$ over the
universe~$\ensuremath{\mathbbm{T}}$ of constructor terms, i.e., constructors are absolutely
free. Selectors~$f^j : \sigma_0 \to \sigma_j$ in particular are mapped
to total set-theoretic functions
$I(f^j) : \ensuremath{\mathbbm{T}}_{\sigma_0} \to \ensuremath{\mathbbm{T}}_{\sigma_j}$ satisfying
$I(f^j)(f(t_1, \ldots, t_n)) = t_j$.
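The intended semantics can be mimicked concretely: constructor terms as nested tuples, and selectors as total functions that return the $j$-th argument on matching terms. On non-matching terms a selector's value is unconstrained; the sketch below simply picks a default, which is one arbitrary choice among many admissible interpretations.

```python
def mk(ctor, *args):
    """A constructor term is its constructor name plus argument terms."""
    return (ctor, args)

def sel(ctor, j, t, default=None):
    """Total selector: extract argument j of a ctor-term; arbitrary otherwise."""
    name, args = t
    return args[j - 1] if name == ctor else default

def is_ctor(ctor, t):
    """Tester: does the term start with the given constructor?"""
    return t[0] == ctor

nil = mk("nil")
red = mk("red")
lst = mk("cons", red, nil)
# sel("cons", 1, lst) yields red; is_ctor("cons", nil) is False.
```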
\section{A Verification Example}
As a high-level example, we outline how a simple program operating on
the ADT from Example~\ref{ex:lists} can be verified using our
procedures.
We represent the program in the form of (constrained) Horn clauses,
following the approach taken in several recent verification
systems~\cite{andrey-pldi, Rummer13}. The result resembles a classical
logic program implementing the concatenation of two lists;
\lstinline!C(x, y, r)! expresses that \lstinline!r! is the result of
concatenating lists~\lstinline!x, y!:
\begin{lstlisting}[escapechar=\%]
\forall CList y; // (C1)
C(nil, y, y)
\forall CList x, y, r; \forall Colour c; // (C2)
(C(x, y, r) -> C(cons(c, x), y, cons(c, r)))
\end{lstlisting}
As a first property of the program, we can observe that the head of a
non-empty result list~\lstinline!r! has to be the head of one of the
arguments~\lstinline!x, y!:
\begin{lstlisting}[escapechar=\%]
\forall CList x, y, r; ( // (P1)
r != nil & C(x, y, r) ->
(r.head = x.head | r.head = y.head))
\end{lstlisting}
To verify this property, it is enough to find a \emph{model} of the
(constrained) Horn clauses \lstinline!(C1), (C2), (P1)!, i.e., an
interpretation of the predicate~\lstinline!C! that satisfies all three
formulas. The predicate~\lstinline!C! can then be considered as a
post-condition (or inductive invariant) that is sufficient to show
property~\lstinline!(P1)!. One solution of
\lstinline!(C1), (C2), (P1)!
is to interpret \lstinline!C(x, y, r)! as
\begin{lstlisting}[escapechar=\%]
C(CList x, CList y, CList r) {
  r = y | r.head = x.head
}
\end{lstlisting}
which can indeed be observed to satisfy all three clauses. The
decision procedure for ADTs defined in the next section can \iflong easily\fi
check correctness of this model mechanically, after inlining the
definition of \lstinline!C!, and skolemising away quantifiers.
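The model check can also be mimicked on concrete lists: implement concatenation, take the candidate interpretation of C(x, y, r) to be "r = y or r.head = x.head", and test property (P1) on sample lists. This is only an executable sanity check on finitely many instances, not the prover's mechanised proof; the default value chosen for head(nil) is an arbitrary assumption, since selectors are unconstrained on non-matching terms.

```python
NIL = ("nil",)

def cons(c, t):
    return ("cons", c, t)

def head(t, default="red"):
    # Total selector: unconstrained on nil, here fixed to a default colour.
    return t[1] if t[0] == "cons" else default

def concat(x, y):
    """Logic-program-style list concatenation (clauses (C1) and (C2))."""
    return y if x == NIL else cons(x[1], concat(x[2], y))

def C(x, y, r):
    """Candidate model for the concatenation predicate."""
    return r == y or head(r) == head(x)

def P1_holds(x, y):
    r = concat(x, y)
    return (r == NIL) or (not C(x, y, r)) or head(r) in (head(x), head(y))

samples = [NIL, cons("green", NIL), cons("blue", cons("red", NIL))]
ok = all(P1_holds(x, y) for x in samples for y in samples)
```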
To find models of clauses like \lstinline!(C1), (C2), (P1)!
automatically, the principle of \emph{Craig interpolation} can be
applied to derivation trees of the clauses, an approach that has been
implemented in several model checkers~\cite{andrey-pldi, Rummer13}. To
support ADTs, which are currently beyond the scope of most model
checkers, in Section~\ref{sec:interpolation} we explain how our
decision procedure can be extended to Craig interpolation.
\medskip
Consider now additional clauses computing the list length:
\begin{lstlisting}[escapechar=\%]
L(nil, 0) // (C3)
\forall CList x; // (C4)
\forall Colour c; \forall int n;
(L(x, n) -> L(cons(c, x), n+1))
\end{lstlisting}
We can combine the two programs to state a second property relating
concatenation and list length. Concatenating two lists yields a list
whose length is the sum of the individual list lengths:
\begin{lstlisting}[escapechar=\%]
\forall CList x, y, r; // (P2)
\forall int nx, ny, nr; (
C(x, y, r) & L(x, nx) & L(y, ny) & L(r, nr)
-> nr = nx + ny)
\end{lstlisting}
To verify this property, as before by showing the existence of a model
of \lstinline!(C1), (C2), (C3), (C4), (P2)!, we need a slightly
extended logic providing also an operator for the \emph{size} of ADT
terms (Section~\ref{sect::size}). ADT constraints without the size
operator are not expressive enough to formulate any model of these clauses. The
size of a term~$t \in \ensuremath{\mathbbm{T}}$ is the number of constructor occurrences in
$t$. A model of \lstinline!(C1), (C2), (C3), (C4), (P2)!,
interpreting both the predicate
\lstinline!C! and \lstinline!L!, is then
\begin{lstlisting}[escapechar=\%]
C(CList x, CList y, CList r) {
  \size(x) + \size(y) = \size(r) + 1
};
L(CList x, int n) {
\size(x) = 2*n + 1
};
\end{lstlisting}
Note that the \lstinline!\size! operator also counts the
\lstinline!nil! symbol, as well as the colour constructors
\lstinline!red, green, blue!, leading to the stated relationship
between the size and the length of a list. The correctness of the
model can be checked using the procedure we define in
Section~\ref{sect::size}.
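The stated relationship between size and length can be checked with a few lines of Python (a sketch; the nested-tuple term representation is ours):

```python
def size(t):
    # number of constructor occurrences in a nested-tuple term,
    # e.g. ("cons", ("blue",), ("nil",)) has size 3
    return 1 + sum(size(s) for s in t[1:])

def clist(colours):
    # build a CList term from a Python list of colour names
    t = ("nil",)
    for c in reversed(colours):
        t = ("cons", (c,), t)
    return t
```

A list of length $n$ contains $n$ \lstinline!cons! and $n$ colour constructors plus one \lstinline!nil!, i.e., has size $2n+1$; and concatenation satisfies $|x| + |y| = |r| + 1$, since the \lstinline!nil! of the first list disappears in the result.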
\begin{table*}[t]
\fboxsep2mm
\fbox{
\begin{minipage}{\linewidth-2\fboxsep}
Reduction rules for
constructors~$f : \sigma_1 \times \cdots \times \sigma_n \to
\sigma_0$,
selectors~$f^j : \sigma_0 \to \sigma_j$, testers~$\mathit{is}_f$,
and equations between variables:
\begin{align}
\label{rule:1}
\hspace*{10ex}
f(x_1, \ldots, x_n) \approx x_0
\quad\Longrightarrow\quad &
\mathit{CtorSpec}_f(\tilde x_0, \ldots, \tilde x_n)
\\
\label{rule:2}
f^j(x) \approx y
\quad\Longrightarrow\quad &
\tilde f^j(\tilde x) \approx \tilde y ~\wedge~
\bigvee_{\substack{g \in \{f_1, \ldots, f_m\} \\ g : \sigma_0}}
\mathit{ExCtorSpec}_g(\tilde x)
\\
\label{rule:3}
\mathit{is}_f(x)
\quad\Longrightarrow\quad &
\mathit{ExCtorSpec}_f(\tilde x)
\\
\label{rule:4}
\neg\mathit{is}_f(x)
\quad\Longrightarrow\quad &
\bigvee_{\substack{g \in \{f_1, \ldots, f_m\} \\ g : \sigma_0 \text{~and~} g \not= f}}
\mathit{ExCtorSpec}_g(\tilde x)
\\
\label{rule:5}
x \approx y
\quad\Longrightarrow\quad &
\tilde x \approx \tilde y
\\
\label{rule:6}
\neg(x \approx y)
\quad\Longrightarrow\quad &
\neg(\tilde x \approx \tilde y)
\end{align}
The following abbreviations are used, for each
constructor $f : \sigma_1 \times \cdots \times \sigma_n \to \sigma_0$
and each sort~$\sigma \in \{\sigma^d_1, \ldots, \sigma^d_k\}$:
\begin{align*}
\mathit{CtorSpec}_f(x_0, \ldots, x_n)
\quad=\quad &
\left(
\begin{array}{@{}>{\displaystyle}l@{}}
\tilde f(x_1, \ldots, x_n) \approx x_0 \wedge
\mathit{ctorId}_{\sigma_0}(x_0) \approx \mathit{Id}_f ~\wedge
\\
\bigwedge_{j = 1}^n \big(
\tilde f^j(x_0) \approx x_j \wedge
\mathit{depth}_{\sigma_0}(x_0) > \mathit{depth}_{\sigma_j}(x_j)
\big)
\end{array}
\right)
%
\\[1ex]
%
\mathit{ExCtorSpec}_f(x)
\quad=\quad &
\exists x_1, \ldots, x_n.~\Big(
\bigwedge_{j = 1}^n
\mathit{In}_{\sigma_j}(x_j) \wedge
\mathit{CtorSpec}_f(x, x_1, \ldots, x_n)
\Big)
%
\\[1ex]
%
\mathit{In}_\sigma(x)
\quad=\quad &
\begin{cases}
0 \leq x < |\ensuremath{\mathbbm{T}}_{\sigma}| &
\text{if~} |\ensuremath{\mathbbm{T}}_{\sigma}| < \infty
\\
\mathit{true} & \text{otherwise}
\end{cases}
\end{align*}
\end{minipage}}
\caption{Rules for reduction of ADTs to EUF+LIA}
\label{tab:reduction}
\end{table*}
\section{Checking ADT Satisfiability by Reduction}
We now define our reduction from ADTs to EUF+LIA. Suppose $\phi$ is an
ADT formula as defined in Section~\ref{sec:prelims}. For sake of
presentation, we assume that $\phi$ has been brought into a
\emph{flat} form upfront. A formula~$\phi$ is flat if function symbols
(in our case, constructors and selectors) only occur in equations of
the form~$g(x_1, \ldots, x_n) \approx x_0$ (where $x_0, \ldots, x_n$
are variables, though not necessarily pairwise distinct), and only in
positive positions. Flatness can be established at the cost of
introducing a linear number of additional variables.
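Flattening can be sketched as a recursive pass that introduces one fresh variable per nested constructor occurrence (an illustration of ours; the term representation and variable naming are hypothetical):

```python
from itertools import count

def flatten(term, var, fresh=None):
    # turn the equation  term = var  into flat equations of the
    # form  f(x1, ..., xn) = x0, one per constructor occurrence
    fresh = fresh if fresh is not None else count(1)
    eqs, args = [], []
    f, subs = term[0], term[1:]
    for s in subs:
        v = f"t{next(fresh)}"
        eqs += flatten(s, v, fresh)
        args.append(v)
    eqs.append((f, tuple(args), var))
    return eqs
```

The number of fresh variables equals the number of proper sub-terms, so the translation is indeed linear.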
\begin{example}
\label{ex:flattened}
The formula in Example~\ref{ex:lists} can be flattened by
introducing variables~$\id{t1}, \id{t2} :
\id{Colour}$, and $\id{t3} : \id{CList}$:
%
\begin{lstlisting}[escapechar=\%]
x.is_cons & blue = t1 & y != t1 &
((red = t2 & x.head = t2) |
(nil = t3 & cons(y, t3) = x))
\end{lstlisting}
\end{example}
\paragraph{Notation}
We need some further notation before we can formally define the
reduction. As before, we assume that $k$ sorts $\sigma^d_1, \ldots,
\sigma^d_k$ and $m$ constructors $f_1, \ldots, f_m$ have been fixed.
For each sort~$\sigma \in \{\sigma^d_1, \ldots, \sigma^d_k\}$, we define
$\mathit{\#Ctor}_\sigma$ to be the number of constructors of $\sigma$:
\begin{equation*}
\mathit{\#Ctor}_\sigma ~=~
|\{ j \mid j \in \{1, \ldots, m\} \text{~and~} f_j : \sigma \}|
\end{equation*}
Similarly, each constructor~$f_i$ with $f_i : \sigma$ is given a unique
index~$\mathit{Id}_{f_i} \in \{0, \ldots, \mathit{\#Ctor}_\sigma - 1\}$ as
identifier within its sort~$\sigma$:
\begin{equation*}
  \mathit{Id}_{f_i} ~=~ |\{ j \mid j \in \{1, \ldots, i-1\}
  \text{~and~} f_j : \sigma \}|
\end{equation*}
For each sort~$\sigma \in \{\sigma^d_1, \ldots, \sigma^d_k\}$, we
furthermore need to know the cardinality~$|\ensuremath{\mathbbm{T}}_{\sigma}|$ of the
term domain~$\ensuremath{\mathbbm{T}}_{\sigma}$. The cardinality can be derived by
computing the strongly connected components of the dependency graph
induced by the constructors (the graph with sorts $\sigma^d_1, \ldots,
\sigma^d_k$ as nodes, and edges $\sigma^d_i \to \sigma^d_j$ whenever there
is a constructor with a $\sigma^d_j$-sorted argument and result sort
$\sigma^d_i$).
In particular, the domain~$\ensuremath{\mathbbm{T}}_{\sigma^d_i}$ is infinite if and only
if $\sigma^d_i$ can reach a cycle in this graph.
We write $|\ensuremath{\mathbbm{T}}_{\sigma}| = \infty$ for infinite
domains.
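The cardinality computation can be sketched as follows (our own illustration): a sort has an infinite domain exactly if it can reach a cycle in the dependency graph, and the finite cardinalities can then be computed recursively.

```python
ADT = {
    # sort -> list of (constructor, argument sorts)
    "Colour": [("red", ()), ("green", ()), ("blue", ())],
    "CList":  [("nil", ()), ("cons", ("Colour", "CList"))],
}

def reaches_cycle(sort, path=()):
    # can `sort` reach a cycle in the dependency graph?
    if sort in path:
        return True
    return any(reaches_cycle(a, path + (sort,))
               for _, args in ADT[sort] for a in args)

def cardinality(sort):
    # |T_sort|, with None standing for an infinite domain
    if reaches_cycle(sort):
        return None
    total = 0
    for _, args in ADT[sort]:
        prod = 1
        for a in args:
            prod *= cardinality(a)
        total += prod
    return total
```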
\subsection{Definition of the Reduction}
\label{sec:reduction}
Suppose $\phi$ is a flat formula in negation normal form (NNF) over an
ADT as defined in Section~\ref{sec:prelims}. To translate $\phi$ to an
EUF+LIA formula~$\tilde \phi$, we introduce a new set of function
symbols ranging over integers: for each constructor~$f : \sigma_1
\times \cdots \times \sigma_n \to \sigma_0$ a new function~$\tilde f :
\ensuremath{\mathbbm{Z}}^n \to \ensuremath{\mathbbm{Z}}$ with the same arity~$n$; for each selector~$f^j :
\sigma_0 \to \sigma_j$ a unary function~$\tilde f^j : \ensuremath{\mathbbm{Z}} \to \ensuremath{\mathbbm{Z}}$; for
each sort~$\sigma \in \{\sigma^d_1, \ldots, \sigma^d_k\}$ a function
symbol~$\mathit{ctorId}_\sigma : \ensuremath{\mathbbm{Z}} \to \ensuremath{\mathbbm{Z}}$ to encode testers, and a
function~$\mathit{depth}_\sigma : \ensuremath{\mathbbm{Z}} \to \ensuremath{\mathbbm{Z}}$ to ensure acyclicity of
terms. Further, for each variable~$x : \sigma$ occurring in $\phi$, we
introduce an integer-valued variant~$\tilde x : \ensuremath{\mathbbm{Z}}$.
The actual reduction is defined through the rewriting rules in the
upper half of Table~\ref{tab:reduction}. Since the reduction works
differently for positive and negative occurrences of
$\mathit{is}_f(x)$ literals, we assume that rules are only applied in
positive positions, and handle negation explicitly in the rules (and
assume that $\phi$ is in negation normal form). Rule~\eqref{rule:1}
augments every occurrence of a constructor symbol with corresponding
statements about selectors (ensuring that both are inverses of each
other); about the index $\mathit{Id}_f$ of the constructor (ensuring
that different constructors of the same sort produce distinct
values); and about the depth of the constructed term (ensuring that no
term can occur as sub-term of itself). Essentially the same
translation is done for testers by rule~\eqref{rule:3}, introducing
fresh constructor arguments through an existential quantifier.
Rule~\eqref{rule:2} augments each occurrence of a selector with a
disjunction stating that the considered term was actually created
using one of the constructors of the sort; this is necessary in
general since selectors~$f^j$ can be applied to terms constructed
using constructors other than $f$ (an optimisation is discussed in
Section~\ref{sec:opt}). Rule~\eqref{rule:4} asserts that the
constructor of a term is different from $f$, and \eqref{rule:5},
\eqref{rule:6} translate equations by simply renaming variables.
Suppose $\phi^*$ is the result of exhaustively applying the rules at
positive positions in $\phi$, and $x_1 : \sigma_1, \ldots, x_l :
\sigma_l$ are all variables occurring in $\phi$, then the reduct of
$\phi$ is defined as $ \tilde \phi = \phi^* \wedge \bigwedge_{i = 1}^l
\mathit{In}_{\sigma_i}(\tilde x_i)$.
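As an illustration of rule~\eqref{rule:1}, the conjuncts of $\mathit{CtorSpec}_f$ can be generated mechanically (a sketch of ours over string constraints; the 0-based constructor indices match Example~\ref{ex:encoded}):

```python
ADT = {
    "Colour": [("red", ()), ("green", ()), ("blue", ())],
    "CList":  [("nil", ()), ("cons", ("Colour", "CList"))],
}

def ctor_id(sort, f):
    # 0-based index of constructor f within its sort
    return [name for name, _ in ADT[sort]].index(f)

def ctor_spec(sort, f, args, x0):
    # conjuncts of CtorSpec_f for a flat literal  f(args) = x0
    conjs = [f"{f}({', '.join(args)}) = {x0}",
             f"ctorId_{sort}({x0}) = {ctor_id(sort, f)}"]
    argsorts = dict(ADT[sort])[f]
    for j, (xj, sj) in enumerate(zip(args, argsorts), start=1):
        conjs.append(f"{f}^{j}({x0}) = {xj}")          # selector equation
        conjs.append(f"depth_{sort}({x0}) > depth_{sj}({xj})")
    return conjs
```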
\begin{example}
\label{ex:encoded}
In the encoded version of the formula from
Example~\ref{ex:flattened}, all variables and functions range over
integers; for readability, we keep the names of all variables. New
variables~$\id{s1}, \ldots, \id{s4}$ are introduced to eliminate the
quantifiers of $\mathit{ExCtorSpec}_f$ expressions through
Skolemisation:
%
\begin{lstlisting}[escapechar=\%]
cons(s1, s2) = x & ctorId_CList(x) = 1 &
head(x) = s1 & tail(x) = s2 & 0<=s1 & s1<3 &
depth_CList(x) > depth_Colour(s1) &
depth_CList(x) > depth_CList(s2) &
// encoding of blue = t1
blue = t1 & ctorId_Colour(t1) = 2 &
// encoding of y != t1 (unchanged)
y != t1 &
// encoding of red = t2
((red = t2 & ctorId_Colour(t2) = 0 &
// encoding of x.head = t2
head(x) = t2 & (
// case x.is_nil
(nil = x & ctorId_CList(x) = 0) |
// case x.is_cons
(cons(s3, s4) = x & ctorId_CList(x) = 1 &
head(x) = s3 & tail(x) = s4 &
0 <= s3 & s3 < 3 &
depth_CList(x) > depth_Colour(s3) &
depth_CList(x) > depth_CList(s4)))) |
// encoding of nil = t3
(nil = t3 & ctorId_CList(t3) = 0 &
// encoding of cons(y, t3) = x
cons(y, t3) = x & ctorId_CList(x) = 1 &
head(x) = y & tail(x) = t3 &
depth_CList(x) > depth_Colour(y) &
depth_CList(x) > depth_CList(t3))) &
// range constraints for x, y, t1, t2, t3
// (some of which are just "true")
0<=y & y<3 & 0<=t1 & t1<3 & 0<=t2 & t2<3
\end{lstlisting}
\end{example}
It should be noted that it is not necessary to assume positiveness of
the $\mathit{depth}_\sigma$ functions, since the functions are only
used to ensure acyclicity of terms by comparing the depth of a term
with the depths of its direct sub-terms. In general, although the
formula makes use of integer arithmetic, only very simple arithmetic
constraints are needed. Up to slight syntactic modifications, all
constraints fall into the Unit-Two-Variable-Per-Inequality fragment
UTVPI~\cite{DBLP:conf/ppcp/JaffarMSY94,DBLP:conf/cade/CimattiGS09},
i.e., only inequalities with up to two variables and unit coefficients
are needed. The constraints can therefore be both solved and
interpolated efficiently (of course, presence of Boolean structure or
negation still implies NP-hardness).
\iflong
\subsection{Correctness of Reduction}
\fi
\begin{theorem}
\label{thm:reductionSoundComplete}
The reduct~$\tilde \phi$ of a flat ADT formula~$\phi$ in NNF is
satisfiable (over EUF+LIA) if and only if $\phi$ is satisfiable
(over an ADT).
\end{theorem}
\iflong
\begin{IEEEproof}
Since reduction preserves the Boolean structure of a formula, and
the reduction rules are agnostic of the position at which they are
applied, it is enough to prove the theorem for flat conjunctions of
literals (i.e., formulas in negation normal form that do not contain
disjunctions).
``$\Longleftarrow$'' (easy direction) Suppose $\phi$ is
satisfiable, with structure~$(\ensuremath{\mathbbm{T}}, I)$ and variable
assignment~$\beta$. We construct a
family~$(\alpha_{\sigma^d_i})_{i=1}^k$ of injective functions as
embedding of the domains~$\ensuremath{\mathbbm{T}}_{\sigma^d_i}$ into $\ensuremath{\mathbbm{Z}}$. For $i$ such
that $\ensuremath{\mathbbm{T}}_{\sigma^d_i}$ is infinite, $\alpha_{\sigma^d_i}$ can be any
bijection~$\ensuremath{\mathbbm{T}}_{\sigma^d_i} \to \ensuremath{\mathbbm{Z}}$; if $\ensuremath{\mathbbm{T}}_{\sigma^d_i}$ is finite, we
choose $\alpha_{\sigma^d_i}$ to be a
bijection~$\ensuremath{\mathbbm{T}}_{\sigma^d_i} \to \{0, \ldots, |\ensuremath{\mathbbm{T}}_{\sigma^d_i}|-1\}$. Let
$\alpha = \bigcup_{i=1}^k \alpha_{\sigma^d_i}$. To satisfy
$\tilde\phi$, choose variable
assignment~$\tilde\beta = \alpha \circ \beta$, and the
interpretation~$\tilde I$ of constructors and selectors over $\ensuremath{\mathbbm{Z}}$
that is induced by $\alpha$. Define
$\tilde I(\mathit{depth}_\sigma)(n)$ to be the depth of the
constructor term~$\alpha_\sigma^{-1}(n)$, and
$\tilde I(\mathit{ctorId}_\sigma)(n)$ as the index~$\mathit{Id}_f$ of
the head symbol~$f$ of $\alpha_\sigma^{-1}(n)$ (and arbitrary if
$\alpha_\sigma^{-1}(n)$ is undefined).
``$\Longrightarrow$'' Suppose $\tilde\phi$ is satisfiable, with
structure~$(\ensuremath{\mathbbm{Z}}, \tilde I)$ and assignment~$\tilde \beta$. We
construct a set~$P$ of relevant integer indices~$a$ and
corresponding sorts~$\sigma$ in the model, and a mapping~$\gamma : P
\to \ensuremath{\mathbbm{T}}$ (with $\gamma(a, \sigma) \in \ensuremath{\mathbbm{T}}_\sigma$ for each $(a,
\sigma) \in P$) that can be used to define a variable
assignment~$\beta(x : \sigma) = \gamma(\tilde\beta(\tilde x),
\sigma)$ to satisfy $\phi$. The main difficulty is to ensure that
$\gamma$ is injective, since otherwise disequalities in $\phi$ might
be violated.
We set $P = D \cup D_t$, where $D$ is the set of
pairs~$(\tilde\beta(\tilde x), \sigma)$ for variables~$x : \sigma$
for which $\phi$ contains a constructor literal~$f(\ldots) \approx
x$, a selector literal~$f^j(x) \approx \ldots$, or a (possibly
negated) tester~$\mathit{is}_f(x)$. The encoding ensures that head
symbols and children of terms represented by elements of $D$ are
defined by the $\mathit{ctorId}_\sigma$ functions and the selectors;
for $(a, \sigma) \in D$, define therefore $\mathit{dep}(a, \sigma) =
\langle f, (c_1, \sigma_1), \ldots, (c_n, \sigma_n)\rangle$ if
$\tilde I(\mathit{ctorId}_\sigma)(a) = \mathit{Id}_f$, with $f :
\sigma_1 \times \cdots \times \sigma_n \to \sigma$, and $c_j =
\tilde I(\tilde f^j)(a)$ for $j \in \{1, \ldots, n\}$.
Let $D_t$ contain all pairs~$(c_i, \sigma_i)$ in tuples
$\mathit{dep}(a, \sigma) = \langle f, (c_1, \sigma_1), \ldots, (c_n,
\sigma_n)\rangle$, for any $(a, \sigma) \in D$; as well as
pairs~$(\tilde\beta(\tilde x), \sigma) \not\in D$ for any further
variable~$x : \sigma$ in $\phi$.
We inductively define a sequence~$\gamma_0, \gamma_1, \ldots,
\gamma_{|P|}$ of partial functions~$P \to \ensuremath{\mathbbm{T}}$:
\begin{enumerate}
\item let $\gamma_0 = \emptyset$;
\item for $i > 0$, if there is $(a, \sigma) \in D$ such that
$\mathit{dep}(a, \sigma) = \langle f, (c_1, \sigma_1), \ldots, (c_n,
\sigma_n)\rangle$,
the function~$\gamma_{i-1}$ is defined for each
pair~$(c_j, \sigma_j)$ (for $j \in \{1, \ldots, n\})$, but
$\gamma_{i-1}$ is not defined for $(a, \sigma)$, then let
$\gamma_i = \gamma_{i-1} \cup \{(a, \sigma) \mapsto
f(\gamma_{i-1}(c_1, \sigma_1), \ldots, \gamma_{i-1}(c_n,
\sigma_n))\}$.
\item for $i > 0$, if case~2) does not apply, pick any pair~$(a,
\sigma) \in P \setminus D$ for which $\gamma_{i-1}$ is not
defined, and any constructor term~$s \in \ensuremath{\mathbbm{T}}_\sigma$ that does not
occur in the range of $\gamma_{i-1}$ yet; choose $(a, \sigma)$ and
$s$ such that the \emph{depth of $s$ becomes minimal.} Let
$\gamma_i = \gamma_{i-1} \cup \{(a, \sigma) \mapsto s\}$.
\end{enumerate}
Importantly, the final function~$\gamma = \gamma_{|P|}$ is defined
for all elements of $P$, and no two elements of $P$ are mapped to
the same term. To see that $\gamma$ is defined for all elements of
$P$, observe that the use of $\mathit{depth}_\sigma$ functions in
the encoding ensures that the $\mathit{dep}$ function is acyclic,
i.e., no term can ever be required to contain itself as a
sub-term. To see that $\gamma$ is injective, observe that by
definition the choice of $s$ in 3) cannot violate injectivity in
$\gamma_i$. Different iterations of 2) cannot construct a term
$f(\gamma_{i-1}(c_1, \sigma_1), \ldots, \gamma_{i-1}(c_n,
\sigma_n))$ twice, due to the presence of constructor
literals~$\tilde f(\ldots) \approx \tilde x$ in $\tilde\phi$ that
are consistently interpreted. Finally, the fact that case 2) is
always preferred over 3) implies that the term $f(\gamma_{i-1}(c_1,
\sigma_1), \ldots, \gamma_{i-1}(c_n, \sigma_n))$ has to contain the
most recently chosen term~$s$ from case 3) (if there is any) as a
sub-term; this implies that $f(\gamma_{i-1}(c_1, \sigma_1), \ldots,
\gamma_{i-1}(c_n, \sigma_n))$ is deeper than all terms~$s$
previously chosen in case~3), and therefore different from all of
them.
It is then possible to choose the variable assignment
$\beta(x) = \gamma(\tilde\beta(\tilde x), \sigma)$ for each
variable~$x : \sigma$ in $\phi$.
\end{IEEEproof}
\fi
\subsection{Two Optimisations}
\label{sec:opt}
The reduction, as presented so far, can be improved in a number of
ways. A first optimisation concerns the way selectors are
translated to EUF+LIA, rule~\eqref{rule:2}. It can be observed that
the disjunction of $\mathit{ExCtorSpec}_g$ literals introduced by
rule~\eqref{rule:2} is in most cases unnecessary, and usually the rule
can be simplified to
\begin{equation}
\tag{\ref{rule:2}'}
f^j(x) \approx y
\quad\Longrightarrow\quad
\tilde f^j(\tilde x) \approx \tilde y
\end{equation}
This simplification is possible whenever rule~\eqref{rule:2} is
applied to \emph{guarded} selector literals, i.e., whenever
$f^j(x) \approx y$ occurs in conjunction with a (positive or negative)
test~$\mathit{is}_g(x)$, or in conjunction with a constructor
literal~$g(x_1, \ldots, x_n) \approx x$ (in both cases, regardless of
whether $f = g$).
\begin{example}
The effect of this redundancy can be seen in
Example~\ref{ex:encoded}: given lines~1--5, the disjunction in
14--21 can be simplified to \mbox{\lstinline!s3 = s1 & s4 = s2!},
and can be removed entirely since \id{s3} and
\id{s4} do not occur elsewhere in the formula.
%
\hfill\IEEEQED
\end{example}
\begin{example}
The full rule~\eqref{rule:2} is necessary for the following formula
over the ADT in Example~\ref{ex:lists}:
%
\begin{lstlisting}[escapechar=\%]
\end{lstlisting}
%
This is because \lstinline!x.head! and \lstinline!x.tail! occur
unguarded.
%
\hfill\IEEEQED
\end{example}
As a second optimisation, the treatment of sorts with finite domain
can be improved, in particular for sorts that are \emph{enumerations}
(i.e., sorts with only nullary constructors). The full EUF encoding is
overkill for enumerations, since we can instead map each
constructor~$f$ directly to its index~$\mathit{Id}_{f}$:
\begin{equation}
\tag{\ref{rule:1}'}
f \approx x_0
\quad\Longrightarrow\quad
\mathit{Id}_f \approx \tilde x_0
\end{equation}
Similarly, testers in enumerations reduce to simple arithmetic
comparisons.
\subsection{Size Increase Caused by the Reduction}
The reduction rules replace every literal in a formula~$\phi$ with an
expression that is linear in the size~$n$ of the considered ADT, so
that $|\tilde \phi| \in O(n \cdot |\phi|)$. If the ADT is considered
as fixed, the reduction is linear.
As an experimental evaluation of the size increase, we applied the
procedure (including the optimisations from the previous section) to
the 8000 randomly generated ADT benchmarks from
\cite{DBLP:journals/jsat/BarrettST07} (4422 of the benchmarks are
unsat). The benchmarks themselves are not very challenging, with the
most complicated one solved in around~1\,s, and the average solving
time of 43\,ms dominated by parsing, pre-processing, etc. The average
problem sizes, counted as the number of sub-expressions of each
formula, were:
\begin{center}
\begin{tabular}{ccc}
After parsing & After reduction & After red.
\& simpl.
\\\hline
76 & 337 & 34
\end{tabular}
\end{center}
This means that reduction led to an increase in size by a factor of
4.5, but this increase was more than offset by subsequent
simplification (using the standard EUF+LIA simplifier implemented in
\textsc{Princess}). Analysing further, it turned out that reduction
followed by simplification was extremely effective on the
unsatisfiable benchmarks: of the 4422 unsatisfiable problems, 4334
could directly be simplified to $\mathit{false}$. The average size of
the remaining 3666 problems, after reduction and simplification, was
74, incidentally almost the same as the average initial size of all
benchmarks.
An experimental comparison of our solver with other SMT solvers, on a
larger set of benchmarks, is ongoing.
\iffalse
\subsection{Syntactic Unification as ADT solving}
\contents{
\item talk about size of the formulas resulting from the encoding}
\fi
\section{Craig Interpolation in (Extensions of) ADTs}
\label{sec:interpolation}
\iffalse
\contents{
\item three main points that have to be discussed:
\begin{itemize}
\item encoding everything into $\ensuremath{\mathbbm{Z}}$ can lead to type-incorrect stuff
in the interpolants, that has to be dealt with
\item encoding of constructor ids has to be translated back
\item depth operations might occur in the interpolants, and there is
no easy way to rid of those. Work-around: we consider ADTs with size!
\end{itemize}
}
\fi
Since quantifier-free Craig interpolation in EUF+LIA is well
understood (e.g.,
\cite{vmcaiInterpolation2011,DBLP:conf/spin/ChristHN12,DBLP:conf/cade/CimattiGS09}),
the reduction approach can also be leveraged for interpolation. Given
an unsatisfiable conjunction~$\phi_A \wedge \phi_B$, the problem of
(reverse) interpolation is to find a formula~$I$ such that~$\phi_A
\Rightarrow I$, $\phi_B \Rightarrow \neg I$, and all variables in $I$
are common to $\phi_A$ and $\phi_B$. If $\phi_A, \phi_B$ are ADT
formulas, it is natural to approach interpolation by first computing
an EUF+LIA interpolant~$\tilde I$ for the reduced conjunction $\tilde
\phi_A \wedge \tilde \phi_B$.
\begin{example}
An interpolation problem over the list ADT from
Example~\ref{ex:lists} is:
\begin{lstlisting}[escapechar=\%]
\part[left] (x.tail = z &
  z.is_cons & x.head != z.head)
& \part[right] (x = cons(c, cons(c, y)))
\end{lstlisting}
The only common variable of the two formulas is \id{x}, and a solution
of the interpolation problem is the disequality
\lstinline?x.head != x.tail.head?.
Note that this formula is a correct interpolant even though the
selectors are unguarded.
\hfill\IEEEQED
\end{example}
To translate~$\tilde I$ back to an ADT interpolant~$I$, three main
points have to be addressed. First, since all ADT sorts are translated
to integers, the formula~$\tilde I$ might contain arithmetic
operations on ADT terms that cannot easily be mapped back to the ADT
world. This turns out to be a non-issue for infinite ADT sorts, since
reduction does not make use of arithmetic operations for terms over
infinite sorts (indeed, equivalently every infinite ADT
sort~$\sigma^d$ could be mapped to a fresh uninterpreted
sort~$\tilde\sigma^d$). The situation is different for finite sorts,
where predicates~$\mathit{In}_\sigma$ from
Table~\ref{tab:reduction} represent cardinality constraints that can
contribute to unsatisfiability of a formula. One solution is the
optimisation discussed in Section~\ref{sec:opt}: by defining a
\emph{fixed} mapping of terms in finite domains to integers,
translation of interpolants back to ADT formulas is significantly
simplified.\iflong\footnote{In our implementation, such fixed mapping is
currently only done for enumerations, not for other finite ADT
sorts.}\fi
Second, the functions~$\mathit{ctorId}_\sigma$ introduced by the
reduction are not valid ADT operations, and have to be translated back
to testers (which can be done quite easily).
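This back-translation can be sketched in one line (our own illustration, using the 0-based indices of Example~\ref{ex:encoded}):

```python
ADT = {
    "Colour": [("red", ()), ("green", ()), ("blue", ())],
    "CList":  [("nil", ()), ("cons", ("Colour", "CList"))],
}

def decode_ctor_id(sort, x, idx):
    # ctorId_sort(x) = idx  ~~>  x.is_f  for the idx-th constructor f
    f = ADT[sort][idx][0]
    return f"{x}.is_{f}"
```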
Third, interpolants might also mention $\mathit{depth}_\sigma$
operations, which have no direct correspondence in the original ADTs
theory. Instead of devising ways how to eliminate such operations, we
decide to embrace them instead as a useful extension of ADTs, and
adapt our reduction method accordingly. Since depth is but one measure
that can be used to ensure acyclicity, the next sections therefore
discuss how we can reason about ADTs with \emph{size constraints.}
\section{Solving ADTs with Size Constraints\label{sect::size}}
We now consider ADT formulas extended with constraints about term
size. The \emph{size}~$|t|$ of a term~$t \in \ensuremath{\mathbbm{T}}$ is the number of
constructor occurrences in $t$. The resulting formal language is an
extension of the language defined in Section~\ref{sec:prelims}:
\begin{align*}
\phi ~~::=~ & \ldots
~|~~ \phi_{\text{Pres}}(|t_1|, \ldots, |t_n|)
&&
\text{Size constraints}
\end{align*}
where $\phi_{\text{Pres}}(|t_1|, \ldots, |t_n|)$ is any Presburger
formula about the size of ADT terms~$t_1, \ldots, t_n$.
\begin{example}
Consider the ADT in Example~\ref{ex:lists}, and a variable $x$ of
sort~\id{CList}. The formula
\begin{lstlisting}[escapechar=\%]
\end{lstlisting}
has the satisfying assignment
$x \mapsto \id{cons}(\id{blue}, \id{nil})$, and this assignment is
unique. In contrast, the formula
\begin{lstlisting}[escapechar=\&]
&\exLink{7b}& \size(x)
\end{lstlisting}
is unsatisfiable, since the size of any list term is odd (term size
does not exactly coincide with the length of a list).
\hfill\IEEEQED
\end{example}
To extend our reduction approach to formulas with size constraints,
there are two main issues that have to be addressed: (i) constructor
terms might not exist for all sizes~$n \in \ensuremath{\mathbbm{N}}_{\geq 1}$, and (ii) even
if terms of some size~$n \in \ensuremath{\mathbbm{N}}_{\geq 1}$ exist, there might be too
few of them to satisfy a formula.
\begin{example}
\label{ex:nats}
Consider the ADT of positive natural numbers:
%
\begin{lstlisting}
\sorts {
Nat { one; succ(Nat pred); };
}
\end{lstlisting}
For every size~$b \in \ensuremath{\mathbbm{N}}_{\geq 1}$ there is exactly one constructor
term~$t$ with $|t| = b$. This implies unsatisfiability of the formula
\begin{lstlisting}[escapechar=\%]
\end{lstlisting}
%
\end{example}
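Both issues can be made concrete by counting constructor terms per size, via a small dynamic program over the constructors (a sketch of ours): for $\id{Nat}$ there is exactly one term of every size, while $\id{CList}$ terms exist only for odd sizes, in numbers that grow with the size.

```python
from functools import lru_cache

ADT = {
    "Colour": [("red", ()), ("green", ()), ("blue", ())],
    "CList":  [("nil", ()), ("cons", ("Colour", "CList"))],
    "Nat":    [("one", ()), ("succ", ("Nat",))],
}

@lru_cache(maxsize=None)
def count(sort, n):
    # number of constructor terms of `sort` with exactly n constructors
    if n <= 0:
        return 0
    return sum(distribute(args, n - 1) for _, args in ADT[sort])

def distribute(argsorts, n):
    # ways of building the argument terms with n constructors in total
    if not argsorts:
        return 1 if n == 0 else 0
    first, rest = argsorts[0], argsorts[1:]
    return sum(count(first, k) * distribute(rest, n - k)
               for k in range(n + 1))
```

The set $\{n \mid \mathtt{count}(\sigma, n) > 0\}$ is exactly the size image~$\ensuremath{\mathbbm{S}}_\sigma$ introduced in the next subsection.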
\subsection{Reduction and Incremental Unfolding}
\begin{table*}[t]
\fboxsep2mm
\fbox{
\begin{minipage}{\linewidth-2\fboxsep}
Additional reduction rule for size expressions, assuming
$x$ is a variable of sort~$\sigma \in \{\sigma^d_1, \ldots, \sigma^d_k\}$,
and $y$ a variable of sort~$\ensuremath{\mathbbm{Z}}$:
\begin{align}
\label{rule:7}
|x| \approx y
\quad\Longrightarrow\quad &
\mathit{size}_{\sigma}(\tilde x) \approx y ~\wedge~
y \in \ensuremath{\mathbbm{S}}_{\sigma}
\end{align}
Compared to Table~\ref{tab:reduction}, for each
constructor $f : \sigma_1 \times \cdots \times \sigma_n \to \sigma_0$
the abbreviation $\mathit{CtorSpec}_f$ is replaced with
$\mathit{CtorSpec}'_f$:
\begin{align*}
\mathit{CtorSpec}'_f(x_0, \ldots, x_n)
\quad=\quad &
\left(
\begin{array}{@{}>{\displaystyle}l@{}}
\tilde f(x_1, \ldots, x_n) \approx x_0 ~\wedge~
\mathit{ctorId}_{\sigma_0}(x_0) \approx \mathit{Id}_f ~\wedge
\\
\bigwedge_{j = 1}^n \big(
\tilde f^j(x_0) \approx x_j ~\wedge~
\mathit{size}_{\sigma_j}(x_j) \in \ensuremath{\mathbbm{S}}_{\sigma_j}
\big) ~\wedge~
\mathit{size}_{\sigma_0}(x_0) \approx
1 + \sum_{j=1}^n \mathit{size}_{\sigma_j}(x_j)
\end{array}
\right)
%
\end{align*}
\end{minipage}}
\caption{Additional rules for reduction of ADTs with size
constraints to EUF+LIA}
\label{tab:reductionSize}
\end{table*}
We address issue~(i) noted above by reasoning globally about possible
term sizes within an ADT. For an ADT sort~$\sigma^d_j$, we define
$\ensuremath{\mathbbm{S}}_{\sigma^d_j} = \{ |t| \mid t \in \ensuremath{\mathbbm{T}}_{\sigma^d_j} \} \subseteq
\ensuremath{\mathbbm{N}}$ to be the \emph{size image} of the term set~$\ensuremath{\mathbbm{T}}_{\sigma^d_j}$,
i.e., the set of term sizes in $\ensuremath{\mathbbm{T}}_{\sigma^d_j}$. The size image turns
out to be a special case of the \emph{Parikh image} of a context-free
language, since an ADT can be interpreted as a context-free grammar
over a singleton alphabet (by considering every sort as a non-terminal
symbol, and mapping every constructor to the unique letter in the
singleton alphabet). This implies that $\ensuremath{\mathbbm{S}}_{\sigma^d_j}$ is
semi-linear, and that a representation of the set in the form of an
existential Presburger formula can be derived from the set of
constructors in linear time~\cite{DBLP:conf/cade/VermaSS05}.
Note that these size-image formulas are in general no longer in the
UTVPI fragment mentioned in Section~\ref{sec:reduction}, which has
implications for the efficiency of solving and interpolation.
Table~\ref{tab:reductionSize} shows how the reduction from
Section~\ref{sec:reduction} (and Table~\ref{tab:reduction}) is
augmented to deal with size constraints. Instead of the
$\mathit{depth}_\sigma$ functions, for each
sort~$\sigma \in \{\sigma^d_1, \ldots, \sigma^d_k\}$ a
function~$\mathit{size}_\sigma : \ensuremath{\mathbbm{Z}} \to \ensuremath{\mathbbm{Z}}$ representing term size is
introduced, and the $\mathit{CtorSpec}_f$ constraints are changed
accordingly; and an additional reduction rule~\eqref{rule:7} is
introduced to handle equations~$|x| \approx y$ with the size
operation. Rule~\eqref{rule:7} adds constraints~$y \in \ensuremath{\mathbbm{S}}_{\sigma}$
to ensure that only genuine term sizes are considered, assuming
implicitly that the size image~$\ensuremath{\mathbbm{S}}_{\sigma}$ is represented as a
Presburger formula.
The resulting modified reduction approach is sound for checking
unsatisfiability of ADT formulas:
\begin{lemma}
\label{lem:sizeUnsat}
If the reduct~$\tilde \phi$ of a flat ADT formula~$\phi$ in NNF with
size constraints is unsatisfiable, then $\phi$ is unsatisfiable.
\end{lemma}
Reduction does not directly give rise to a decision procedure for ADT
formulas with size constraints, in contrast to the situation
without size. This is because reduction does not precisely translate the
\emph{number} of terms for each size~$n \in \ensuremath{\mathbbm{N}}_{\geq 1}$
(issue~(ii) from above). We can observe that the reduct~$\tilde\phi$
of the formula~$\phi$ in Example~\ref{ex:nats} is satisfiable, while
$\phi$ is unsatisfiable, showing that reduction alone is \emph{not}
sound for satisfiability (unsurprisingly).
Different approaches exist to establish soundness also for
satisfiability, in particular the extraction of \emph{length
constraint completion} formulas~\cite{DBLP:journals/iandc/ZhangSM06}
that precisely define term sizes with sufficiently many distinct
terms. We follow the approach of incrementally unfolding (aka.\
unrolling) from \cite{Suter:2010:DPA:1707801.1706325}, which is quite
flexible, and complete in many relevant cases.
Let $\phi$ again be a (flat and NNF) ADT formula with size constraints. We
construct unfolding sequences~$\phi_0, \phi_1, \ldots$ by setting
$\phi_0 = \phi$, and for each $i \geq 1$ deriving $\phi_i$ by
unfolding one ADT variable~$x : \sigma$ that occurs in
$\phi_{i-1}$. If $f_1, \ldots, f_n$ are all constructors of the
considered ADT, we set
\begin{equation*}
\phi_i ~~=~~ \phi_{i-1} \wedge
\bigvee_{\substack{j \in \{1, \ldots, n\} \\ f_j : \sigma}}
f_j(x_1^j, x_2^j, \ldots) \approx x
\end{equation*}
with fresh sorted argument variables~$x_1^1, x_2^1, \ldots, x_1^2,
x_2^2, \ldots$.
In practice, unfolding will usually happen incrementally: the next
variable to unfold is selected based on a model of the previous
partial unfolding~$\phi_{i-1}$, until enough terms have been
constructed to obtain a genuine model of $\phi$, or unsatisfiability
is detected.
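A single unfolding step can be sketched as follows (the representation of formulas as nested tuples and the variable naming are ours):

```python
import itertools

ADT = {"Nat": [("one", ()), ("succ", ("Nat",))]}

def unfold(phi, x, sort, fresh):
    # phi_i = phi_{i-1} & (f1(...) = x | ... | fn(...) = x),
    # with fresh argument variables for every constructor of `sort`
    disj = tuple(("eq", (f,) + tuple(next(fresh) for _ in argsorts), x)
                 for f, argsorts in ADT[sort])
    return ("and", phi, ("or",) + disj)
```

Starting from $\phi_0$, repeated calls pick the next variable to unfold, in practice guided by a model of the previous partial unfolding as described above.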
\begin{lemma}
\label{lem:unfolding}
Let $\phi_0, \phi_1, \ldots$ be an unfolding sequence for $\phi$,
and for each $i \in \ensuremath{\mathbbm{N}}$ let $U_i$ be the set of variables unfolded
in $\phi_i$ (i.e., $U_0 = \emptyset$, and $U_i = U_{i-1} \cup \{x\}$
if $\phi_i$ was derived by unfolding $x$ in $\phi_{i-1}$). Then for
any $i \in \ensuremath{\mathbbm{N}}$:
\begin{enumerate}
\item if $\tilde \phi_i$ is unsatisfiable (over EUF+LIA) then $\phi$
is unsatisfiable (over ADTs with size);
\item if $\tilde \phi_i$ is satisfied by a model~$\tilde M$ and
assignment~$\tilde \beta$, such that for every ADT variable~$x :
\sigma$ in $\phi_i$ there is a variable~$y \in U_i$ with $y :
\sigma$ and $\ensuremath{\operatorname{val}}_{\tilde M, \tilde \beta}(\tilde x) =
\ensuremath{\operatorname{val}}_{\tilde M, \tilde \beta}(\tilde y)$, then $\phi$ is
satisfiable (over ADTs with size).
\end{enumerate}
\end{lemma}
\iflong
\begin{IEEEproof}
1) follows directly from Lemma~\ref{lem:sizeUnsat}.
2) Models over EUF+LIA can be translated to ADT models like in the
proof of Theorem~\ref{thm:reductionSoundComplete}
``$\Longrightarrow$''. It can be noted that case~3) in the proof
never applies due to the assumption that all variables are mapped to
unfolded terms.
\end{IEEEproof}
\fi
\begin{example}
In Example~\ref{ex:nats}, unsatisfiability is detected after unfolding
$x$ and $y$ three times each.
%
\hfill\IEEEQED
\end{example}
As the next example shows, however, unfolding is not always enough to
show unsatisfiability of a formula. The next sections will therefore
formulate a sufficient and necessary criterion for termination of
unfolding.
\begin{example}
\label{ex:nats2}
With the ADT from Example~\ref{ex:nats}, the formula
\begin{lstlisting}[escapechar=\%]
\end{lstlisting}
is unsatisfiable, but cannot be shown to be unsatisfiable with a
finite number of unfolding steps.
%
\hfill\IEEEQED
\end{example}
\subsection{Completeness and Incompleteness of Unfolding}
We give a precise characterisation of the ADTs for which unfolding
will allow us to eventually detect (un)satisfiability of a formula, and
therefore gives rise to a decision procedure. As identified
in~\cite{Suter:2010:DPA:1707801.1706325}, the essential property of an
ADT (resp., of the considered catamorphism, which in our case is the
size function) is \emph{sufficient surjectivity}, implying that ADTs
are sufficiently populated to satisfy disequalities in a formula: the
number of terms of size~$b$ grows unboundedly when $b$ tends to
infinity. We write $\ensuremath{\mathbbm{T}}^k_{\sigma}$ for the set of constructor terms of
ADT sort~$\sigma$ and size~$k$, i.e., $\ensuremath{\mathbbm{T}}^k_{\sigma} = \{ t \in
\ensuremath{\mathbbm{T}}_{\sigma} \mid |t| = k\}$.
\begin{definition}
An ADT sort~$\sigma^d$ is \emph{expanding} if for every natural
number~$n \in \ensuremath{\mathbbm{N}}$ there is a bound~$b \in \ensuremath{\mathbbm{N}}$ such that for
every~$b' \geq b$ either $\ensuremath{\mathbbm{T}}^{b'}_{\sigma^d} = \emptyset$ or
$|\ensuremath{\mathbbm{T}}^{b'}_{\sigma^d}| \geq n$. An ADT is expanding if each of its
sorts~$\sigma^d_1, \ldots, \sigma^d_k$ is expanding.
\end{definition}
\begin{example}
An example of an ADT that is \emph{not} expanding is the ADT of
natural numbers (Example~\ref{ex:nats}): for every size~$b \in
\ensuremath{\mathbbm{N}}_{\geq 1}$ there is exactly one constructor term~$t$ with $|t| =
b$.
%
\hfill\IEEEQED
\end{example}
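The counting condition in this definition is easy to probe experimentally. The following Python sketch (our own dynamic-programming formulation; the sort and constructor names are taken from the examples) counts $|\ensuremath{\mathbbm{T}}^k_{\sigma}|$ and reproduces the contrast between the naturals and colour lists:

```python
from functools import lru_cache

def term_counter(adt):
    """Count constructor terms of a given size for an ADT encoded as
    {sort: [(ctor, [argument sorts]), ...]}.  The size |t| counts
    constructor occurrences, so f(t1,...,tn) has size 1 + sum |t_i|."""
    @lru_cache(maxsize=None)
    def count(sort, k):
        if k <= 0:
            return 0
        return sum(count_seq(tuple(args), k - 1)
                   for _, args in adt[sort])

    @lru_cache(maxsize=None)
    def count_seq(sorts, k):
        # Number of tuples of terms of the given sorts with total size k.
        if not sorts:
            return 1 if k == 0 else 0
        return sum(count(sorts[0], j) * count_seq(sorts[1:], k - j)
                   for j in range(k + 1))
    return count

nat = {'Nat': [('zero', []), ('succ', ['Nat'])]}
clist = {'Colour': [('red', []), ('green', []), ('blue', [])],
         'CList': [('nil', []), ('cons', ['Colour', 'CList'])]}

cN = term_counter(nat)
cL = term_counter(clist)
```

For the naturals the count is constantly~1 for every positive size (non-expanding), while for \id{CList} the number of terms of size $2k+1$ grows as $3^k$ (expanding).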
\begin{figure}[t]
\centering
\begin{tikzpicture}
\path[use as bounding box] (-0.5,-0.8) rectangle(5,0.7);
\node[depSort] (clist) {CList};
\node[depSort,right=3 of clist] (col) {Colour};
\node[depCtor,left=0.6 of clist] (nil) {nil};
\node[depCtor,right=0.6 of col,yshift=-3ex] (green) {green};
\node[depCtor,right=0.6 of col,yshift=3ex] (red) {red};
\node[depCtor,right=0.6 of col] (blue) {blue};
\node[depCtor,right=1.1 of clist,yshift=-5ex] (cons) {cons};
\draw[->] (clist) -- (nil);
\draw[->] (col) -- (red);
\draw[->] (col) -- (green);
\draw[->] (col) -- (blue);
\draw[->] (clist) -| (cons);
\draw[->] (cons) -| (clist);
\draw[->] (cons) -| (col);
\end{tikzpicture}
\caption{Dependency graph for the list ADT from Example~\ref{ex:lists}}
\label{fig:depGraph}
\end{figure}
\begin{lemma}
Systematic unfolding terminates (i.e., in every unfolding
sequence~$\phi_0, \phi_1, \ldots$ in which every variable is
eventually unfolded, eventually one of the cases of
Lemma~\ref{lem:unfolding} applies) for all formulas~$\phi$ if and
only if the considered ADT is expanding.
\end{lemma}
\iflong
\begin{IEEEproof}[Proof sketch]
``$\Longrightarrow$''
%
Example~\ref{ex:nats2} generalises to arbitrary non-expanding ADTs:
for every non-expanding sort~$\sigma$ there is a constant~$c \in \ensuremath{\mathbbm{N}}$
and an infinite semi-linear set~$S \subseteq \ensuremath{\mathbbm{S}}_\sigma$ such that
$|\ensuremath{\mathbbm{T}}^{b}_{\sigma}| < c$ for all $b \in S$. The existence of $c, S$
follows from the proof of Theorem~\ref{thm:expanding} below.
``$\Longleftarrow$''
%
Consider first the case of $\phi$ being a conjunction of
disequalities and a size constraint~$\phi_{\text{Pres}}(|x_1|,
\ldots, |x_n|)$. Since the ADT is expanding, satisfiability of
$\phi$ reduces to the question whether the size images of the
$x_i$-domains contain elements large enough, and compatible with
$\phi_{\text{Pres}}$, that all disequalities can be
satisfied. Systematic unfolding of $x_1, \ldots, x_n$ will add size
image constraints~$|x'| \in \ensuremath{\mathbbm{S}}_\sigma$ for all sub-terms, and
either find a set of satisfying term sizes (and corresponding
terms), or conclude unsatisfiability because the conjunction of size
images and size constraint~$\phi_{\text{Pres}}$ becomes
inconsistent.
Adding constructor, selector, or test literals does not change the
argument, since solutions of such literals can be represented in the
form of a most-general unifier~\cite{Suter:2010:DPA:1707801.1706325}.
\end{IEEEproof}
\medskip
\fi
Non-expandingness turns out to be a corner case: all non-expanding
ADTs more or less look like the natural numbers
(Example~\ref{ex:nats}), and most practical ADTs are expanding.
For instance, both ADT sorts in Example~\ref{ex:lists} expand.
\subsection{Effective Characterisation of Expanding ADTs}
To characterise expanding ADTs, we first make the simplifying
assumption that all ADT sorts~$\sigma^d_j$ contain at least two
constructor terms; sorts with only a single term can obviously be
eliminated easily from a constraint. As a further piece of notation,
we need a relativised version of the size image: for an ADT
sort~$\sigma^d_j$ and a constructor~$f$, we write
\begin{equation*}
\ensuremath{\mathbbm{S}}_{\sigma^d_j}^f = \{ |t| \mid t \in \ensuremath{\mathbbm{T}}_{\sigma^d_j} \text{~and~}
t \text{~does not start with~} f \}
\end{equation*}
for the size image restricted to terms with head symbol $\not= f$.
Consider then the bipartite dependency graph~$D = (V, E)$ with
vertices~$V = \{\sigma^d_1, \ldots, \sigma^d_k\} \cup \{f_1, \ldots,
f_m\}$ being sorts and constructors, and the edge set
\begin{equation*}
E ~=~ \left\{
\begin{array}{@{}l@{}}
(\sigma_0, f_j), (f_j, \sigma_1), \ldots, (f_j, \sigma_n)\\
\qquad
\mid j \in \{1, \ldots, m\},
f_j: \sigma_1 \times \cdots \times \sigma_n \to \sigma_0
\end{array}
\right\}
\end{equation*}
Fig.~\ref{fig:depGraph} gives the graph for the list ADT in
Example~\ref{ex:lists}.
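Encoding an ADT as a dictionary from sorts to constructor signatures, the dependency graph can be built in a few lines of Python (illustrative only):

```python
def dependency_graph(adt):
    """Edges of the bipartite dependency graph D = (V, E) for an ADT
    given as {sort: [(ctor, [argument sorts]), ...]}.  Each constructor
    f : s1 x ... x sn -> s0 contributes (s0, f), (f, s1), ..., (f, sn)."""
    edges = set()
    for sort, ctors in adt.items():
        for name, args in ctors:
            edges.add((sort, name))
            for a in args:
                edges.add((name, a))
    return edges

clist = {'Colour': [('red', []), ('green', []), ('blue', [])],
         'CList': [('nil', []), ('cons', ['Colour', 'CList'])]}
E = dependency_graph(clist)
# Matches the figure: CList -> nil/cons, cons -> Colour and cons -> CList,
# Colour -> red/green/blue.
```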
\begin{theorem}
\label{thm:expanding}
An ADT is \emph{not} expanding if and only if the graph~$D$ contains
a simple
cycle~$C = \sigma^1 \to f^1 \to \sigma^2 \to f^2 \to \cdots \to f^n
\to \sigma^1$ with the following properties:
\begin{enumerate}
\item $C$ is the only path from $\sigma^1$ to itself, i.e., every
cycle starting and ending in $\sigma^1$ is a repetition of $C$;
\item all constructors $f^1, f^2, \ldots, f^n$ on $C$ are unary;
\item the cycle~$C$ unboundedly contributes to the size
image~$\ensuremath{\mathbbm{S}}_{\sigma^1}$, i.e.,
\begin{equation*}
\forall k.~~ \ensuremath{\mathbbm{S}}_{\sigma^1} \not=
\{0, \ldots, k\} \cdot n +
\bigcup_{i = 1}^{n} \big( \ensuremath{\mathbbm{S}}_{\sigma^{i}}^{f^{i}} + i - 1 \big)~.
\end{equation*}
\end{enumerate}
\end{theorem}
The characterisation theorem implies that every non-expanding ADT has
a set of cyclically connected sorts~\id{S1}, \ldots, \id{Sn}, each of
which might contain further constructors \lstinline!c1_1!,
\lstinline!c1_2!, \ldots\ that do not lead back to \id{S1}, \ldots,
\id{Sn}:
\begin{lstlisting}
\sorts {
// ...
S1 { f1(S2 s2); c1_1; c1_2; /* ... */ };
S2 { f2(S3 s3); c2_1; c2_2; /* ... */ };
// ...
Sn { fn(S1 s1); cn_1; cn_2; /* ... */ };
// ...
}
\end{lstlisting}
The conditions of the theorem are clearly satisfied for the ADT of
natural numbers (Example~\ref{ex:nats}). Condition~3)
is satisfied whenever $\ensuremath{\mathbbm{S}}_{\sigma^{i}}^{f^{i}}$ is finite for all
$i \in \{1, \ldots, n\}$, but there are more subtle situations\iflong:
\begin{example}
We extend the list ADT from Example~\ref{ex:lists} by adding two
further sorts:
%
\begin{lstlisting}
\sorts {
S1 { f1(S2 s2); };
S2 { f2(S1 s1); null; col(CList list); };
}
\end{lstlisting}
%
The domain of sort~\id{S1} contains terms of any size greater than
one. However, while the number of terms of size~$2k + 1$ grows
exponentially with $k$, there is exactly one term of size~$2k$ for
every $k$, proving non-expandingness.
A cycle of length~3, in contrast, yields an expanding ADT:
%
\begin{lstlisting}
\sorts {
S1 { f1(S2 s2); };
S2 { f2(S3 s3); };
S3 { f3(S1 s1); null; col(CList list); };
}
\end{lstlisting}
%
We can note that condition~3) of Theorem~\ref{thm:expanding} now
fails.
%
\hfill\IEEEQED
\end{example}
Before we can prove Theorem~\ref{thm:expanding}, we need some further
results about ADTs. Consider constructor terms~$t[\bullet]$ with a
unique hole~$\bullet$; such terms can alternatively be seen as terms
with a single occurrence of a sorted variable. Composition of terms
with holes is defined as $t_1[\bullet] \circ t_2[\bullet] =
t_1[t_2[\bullet]]$. Two terms~$t_1[\bullet], t_2[\bullet]$ with holes
are \emph{incomparable} if $t_1[s_1] \not= t_2[s_2]$ for all
constructor terms~$s_1, s_2$ of the right sort. The
size~$|t[\bullet]|$ of a term with hole is the number of constructor
symbol occurrences in $t$, i.e., the hole does not count. This implies
$|t_1[\bullet] \circ t_2[\bullet]| = |t_1[\bullet]| + |t_2[\bullet]|$.
\begin{lemma}
\label{lem:incompTerms}
Suppose an ADT sort~$\sigma$ contains two incomparable constructor
terms~$t_1[\bullet], t_2[\bullet]$ with holes~$\bullet$ of
sort~$\sigma$. Then $\sigma$ expands.
\end{lemma}
\begin{IEEEproof}
Fix some $n \in \ensuremath{\mathbbm{N}}$. We need to show that there is a bound~$b \in \ensuremath{\mathbbm{N}}$
ensuring $\ensuremath{\mathbbm{T}}^{b'}_{\sigma} = \emptyset$ or $|\ensuremath{\mathbbm{T}}^{b'}_{\sigma}| \geq
n$ for every $b' \geq b$. Observe that for every $g \geq n$ there
are $\mbox{}>n$ pairwise incomparable terms with holes $t_1^0 \circ
t_2^g \circ t_1^{g},~ t_1^1 \circ t_2^g \circ t_1^{g-1}, \ldots,
t_1^g \circ t_2^g \circ t_1^{0}$, all of which have size~$g \cdot
(|t_1| + |t_2|)$. The size image~$\ensuremath{\mathbbm{S}}_\sigma \subseteq \ensuremath{\mathbbm{N}}$ can be
represented as a finite union of arithmetic progressions,
$\ensuremath{\mathbbm{S}}_\sigma = \bigcup_{i=1}^l \{ a_i + k \cdot b_i \mid k \in \ensuremath{\mathbbm{N}}
\}$ with $a_i, b_i \in \ensuremath{\mathbbm{N}}$ for $i \in \{1, \ldots, l\}$. For each $i
\in \{1, \ldots, l\}$ with $b_i > 0$, assuming that $k \geq n \cdot
(|t_1| + |t_2|)$ we can then pick $g = b_i \cdot n$, and some term
$t : \sigma$ of size $a_i + (k - n \cdot (|t_1| + |t_2|)) \cdot
b_i$, and obtain $\mbox{}>n$ pairwise distinct terms
\begin{equation*}
(t_1^0 \circ t_2^g \circ t_1^{g})[t],~
(t_1^1 \circ t_2^g \circ t_1^{g-1})[t],~
\ldots,~
(t_1^g \circ t_2^g \circ t_1^0)[t]
\end{equation*}
that are all of size $a_i + (k - n \cdot (|t_1| + |t_2|)) \cdot b_i
+ g \cdot (|t_1| + |t_2|) = a_i + k \cdot b_i$.
%
This implies that there is also a global bound~$b \in \ensuremath{\mathbbm{N}}$ such that
$\ensuremath{\mathbbm{T}}^{b'}_{\sigma} = \emptyset$ or $|\ensuremath{\mathbbm{T}}^{b'}_{\sigma}| \geq n$ for
$b' \geq b$.
\end{IEEEproof}
\medskip
\begin{IEEEproof}[Proof of Theorem~\ref{thm:expanding}]
``$\Longrightarrow$''
%
Suppose an ADT is not expanding, which means that there is a
non-expanding sort~$\sigma$. Choose $\sigma$ such that whenever
$\sigma \stackrel{*}{\to} \sigma'$ in the $D$-graph, and $\sigma'$
is non-expanding as well, then $\sigma'$ is in the same strongly
connected component (SCC) as $\sigma$; this is possible because the
SCCs of $D$ form a DAG. Then $\ensuremath{\mathbbm{S}}_\sigma$ has to be infinite
(otherwise $\sigma$ would be expanding), and there is a simple
$D$-path $C = \sigma^1 \to f^1 \to \sigma^2 \to f^2 \to \cdots \to
f^n \to \sigma^1$ with $\sigma^1 = \sigma$ (otherwise there would be
a non-expanding sort~$\sigma'$ reachable from $\sigma$, but not in
the same SCC).
We show that $C$ satisfies the conditions of the theorem. 1) holds
because if there was a second path $C'$ from $\sigma$ to itself that
is not a repetition of $C$, then both paths could be translated to
incomparable $\sigma$-terms~$t_1[\bullet], t_2[\bullet]$ with
$\sigma$-sorted holes~$\bullet$, and by Lemma~\ref{lem:incompTerms}
the sort~$\sigma$ would be expanding. The same argument implies that
2) holds: if any of the constructors $f_i$ had multiple arguments,
incomparable terms $t_1[\bullet], t_2[\bullet]$ could be derived
(since, by assumption, every sort contains at least two constructor
terms).
Suppose finally that 3) does not hold, i.e., for some $k \in \ensuremath{\mathbbm{N}}$
\begin{equation*}
\ensuremath{\mathbbm{S}}_{\sigma} ~=~
\{0, \ldots, k\} \cdot n +
\bigcup_{i = 1}^{n} \big( \ensuremath{\mathbbm{S}}_{\sigma^{i}}^{f^{i}} + i - 1 \big)
\end{equation*}
This would imply that from some sort~$\sigma^{i}$ a non-expanding
sort~$\sigma'$ is reachable, by following a constructor other
than~$f^i$. By choice of $\sigma$, then $\sigma'$ has to be in the
same SCC as $\sigma$, therefore there is a path from $\sigma'$ to
$\sigma$, and condition 1) would be violated.
``$\Longleftarrow$''
%
Suppose there is a cycle~$C$ satisfying 1)--3). Note that due to 1)
and 2) we have the equality
\begin{equation*}
\ensuremath{\mathbbm{S}}_{\sigma} ~=~
\ensuremath{\mathbbm{N}} \cdot n +
\underbrace{\bigcup_{i = 1}^{n} \big( \ensuremath{\mathbbm{S}}_{\sigma^{i}}^{f^{i}} + i - 1
\big)}_R
\end{equation*}
Together with 3), this means $\ensuremath{\mathbbm{N}} \cdot n + R \not= \{0, \ldots, k\}
\cdot n + R$ for every $k \in \ensuremath{\mathbbm{N}}$. Because $R$ is a semi-linear set,
then there has to be a finite subset~$S \subseteq R$ such that for
infinitely many points~$s \in \ensuremath{\mathbbm{S}}_{\sigma}$ we have $\{ x \in R
\mid s \in \ensuremath{\mathbbm{N}} \cdot n + x \} \subseteq S$. Because the set $\{ t \in
\ensuremath{\mathbbm{T}}_\sigma \mid |t| \in S\}$ of terms is finite,\footnote{This
property breaks down when ADTs are combined with other infinite
data types, e.g., lists over $\ensuremath{\mathbbm{Z}}$. In this case condition~3) has
to be modified.} this immediately implies that the sort~$\sigma$
is not expanding.
\end{IEEEproof}
\else
{} with infinite domains.
\fi
\section{Conclusions}
At the moment we are exploring applications and further extensions of
our approach. We are in the process of integrating our procedure into
the model checker \textsc{Eldarica}~\cite{Rummer13} to handle implication
checks and interpolation for ADTs; this also requires combination with
other data types, and in the long run likely interpolation
heuristics. It is also frequently necessary to combine ADTs with
quantifier reasoning and recursively defined functions, a direction
that requires further work. Finally, as a side-effect of
Theorem~\ref{thm:expanding}, there is a simple way to achieve
termination also for non-expanding ADTs, namely by replacing the cycle
with an explicit counter ranging over a built-in type of natural
numbers.
\paragraph{Acknowledgements}
R\"ummer was supported by the
Swedish Research Council
under grant 2014-5484.
\bibliographystyle{plain}
\section{INTRODUCTION}
\label{sec:Intro}
Star formation is known to occur within molecular clouds, and it has been found to occur more readily in regions of greater gas column density, where more material is available for star formation.
This effect can be observed on larger scales such as giant molecular clouds and galaxies where Kennicutt--Schmidt (K--S) relations show a higher surface density of star formation rate at higher column densities \citep{Kennicutt1989}.
It is observable on molecular cloud scales with similar star formation surface density relations \citep{2010ApJ...723.1019H}, and, finally, on smaller, filamentary scales, where prestellar cores are more frequently observed coincident on the sky with high-density filaments \citep{andre2010}.
Since star formation surface density is correlated with column density, is it possible to describe the distribution of where star formation has occurred using only column density information?
The answer to this question requires a quantification of the amount that star formation is enhanced by increasing column density as well as a model for how this leads to star formation being distributed throughout a cloud.
As mentioned earlier, measurements of the surface density of star formation rate as a function of column density are often expressed in the form of a power law (the K--S law is a particular form of this),
\begin{equation}
\label{eqn:intro-sigma}
\Sigma_\mathrm{SFR}= \Cr \Sigma^\mu_{\mathrm{Gas}},
\end{equation}
where $\Sigma_\mathrm{SFR}$ is the star formation rate surface density and $\Sigma_{\mathrm{Gas}}$ is the gas surface density.
The parameters $\Cr$ and $\mu$ quantify the relation between the surface densities of star formation and gas.\footnote{A full list of the mathematical symbols used in this paper is given in Table \ref{table:symbols}.}
Table \ref{table:power-law} presents some previous measurements of $\mu$ in local star forming regions.
Most $\mu$ values typically range between 1.5 and 2.5 \citep{2010ApJ...723.1019H,gutermuth2011,lombardi2014,Pokhrel2020} with some more extreme values of $\mu > 3$ \citep{lada2017}.
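In the simplest setting, $\mu$ is the slope of a straight-line fit in log--log space. The following Python sketch illustrates this on synthetic data (the data, seed, and least-squares fit are ours purely for illustration; this paper instead estimates $\mu$ with the Bayesian method of Section \ref{sec:bayes_stats}):

```python
import numpy as np

# Recover mu as the log-log slope of Sigma_SFR = C * Sigma_gas^mu
# from synthetic data with lognormal scatter (illustrative only).
rng = np.random.default_rng(0)
sigma_gas = np.logspace(1, 3, 50)               # gas surface density bins
true_mu, true_c = 2.0, 1e-4
sigma_sfr = true_c * sigma_gas**true_mu * rng.lognormal(0.0, 0.1, 50)

mu_hat, log_c_hat = np.polyfit(np.log10(sigma_gas), np.log10(sigma_sfr), 1)
```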
A value of $\mu$ describes the change in the surface density of star formation as a function of column density, but it is missing information as to how the star formation is distributed throughout the cloud.
To illustrate this point Fig. \ref{fig:intro_illustration} presents two distributions of early-stage Young Stellar Objects (YSOs) which share the same value of $\mu$.
The YSOs in the left-hand cloud are evenly distributed throughout the cloud according to column density, while those on the right are biased towards the lower-right of the region.
A distribution such as that in the right-hand cloud could be due to a column-density-independent effect which has strongly influenced the distribution of star formation, such as magnetic fields or stellar feedback.
If such effects are significant, column density alone may not be sufficient to describe the distribution of star formation on local scales.
\begin{figure}
\includegraphics[width = \columnwidth]{intro_illustration.pdf}
\caption{\label{fig:intro_illustration} Illustration of two populations of YSOs with the same power-law relationship with column density.
The YSOs in the cloud on the left are evenly distributed throughout the cloud while those on the right are clustered towards the lower-right portion of the cloud. }
\end{figure}
Using spatial statistics it is possible to investigate if the distribution of Class~0/I YSOs within a region is consistent with some distribution model, for example Eqn. \ref{eqn:intro-sigma} for given values of $\mu$ and $\Cr$.
These distribution models, known in spatial statistics as spatial point processes, are stochastic mechanisms for generating the locations of objects within a region.
To understand star formation, this work uses the locations of Class~0/I YSOs as a tracer for recent star forming activity to test how well different spatial point processes represent star formation within the molecular clouds.
Spatial statistics are well-suited for testing distribution models such as these -- examples of their application can be found in various fields including ecology and epidemiology \citep{Barot1999,Wiegand2009,Velazquez2016}.
The O-ring summary statistic is a measure of how separated YSOs are from one another as a function of spatial scale.
This paper utilises this statistic to determine if the locations of a set of YSOs are consistent with a null hypothesis by comparing its values to confidence envelopes.
The spatial statistics methods used in this work for testing a model will be discussed in more detail in Section \ref{sec:spatial statistics}.
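As a rough illustration of the idea, a naive O-ring-type estimate is the mean density of neighbours in an annulus $[r, r + \mathrm{d}r)$ around each point. The Python sketch below ignores edge corrections and the projection effects discussed later, so it is not the estimator used in this work:

```python
import numpy as np

def o_ring(points, r, dr):
    """Mean density of neighbours in the annulus [r, r + dr) around
    each point (no edge correction)."""
    pts = np.asarray(points, dtype=float)
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)              # exclude each point itself
    in_annulus = (d >= r) & (d < r + dr)
    ring_area = np.pi * ((r + dr)**2 - r**2)
    return in_annulus.sum(axis=1).mean() / ring_area

# Four points on a unit square: each has two neighbours at distance 1.
pts = [(0, 0), (0, 1), (1, 0), (1, 1)]
val = o_ring(pts, 0.9, 0.2)
```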
In an earlier paper the efficacy of different summary statistics from spatial statistics were tested and then applied to Class~0/I YSOs in Serpens South to determine if they were consistent with complete spatial randomness (CSR) \citep{retter2019}.
CSR is a homogeneous Poisson point process where the probability of forming a star at a given location is unaffected by environment or neighbouring stars, and is therefore random.
The Class~0/I YSOs in Serpens South were found to be inconsistent with CSR and the work presented in this paper expands on this result by testing inhomogeneous Poisson Point Processes where the probability of placing a star at a given location is affected by the local column density.
In doing so it will be possible to determine if Class~0/I YSO positions, and therefore locations of star formation, are equivalent to randomly sampling a two-dimensional probability distribution based on the observed gas density.
While spatial statistics are useful for testing models, we employ Bayesian statistics for parameter fitting.
For this reason, this work introduces a Bayesian method of estimating $\mu$ by measuring the surface density of Class~0/I YSOs within column density bins in Section \ref{sec:bayes_stats}.
Class~0/I YSOs are used in this work as tracers of star formation as they are in the earliest stage of YSO evolution.
This can be observed in the relative distributions of YSOs, where the positions of Class 0 and Class I sources tend to be more correlated with dense cloud material than those of the more evolved Flat-spectrum, Class~II and Class~III sources \citep{mairs2016,anne2020}.
Prestellar cores would likely be good tracers of star formation, but unfortunately it is not easy to determine which starless cores are likely to evolve to become stars, and so Class~0/I YSOs are the youngest objects that are identifiable as definite precursors to stars.
This work will look at the YSOs in five local star forming regions which are good laboratories for testing distribution functions, due to their proximity and number of YSOs: Serpens South, Serpens Core, Ophiuchus, NGC1333 and IC348.
By applying the Bayesian method described in Section \ref{sec:bayes_stats}, the parameter $\mu$ was measured for each region individually as well as for the set of regions as a whole to estimate a global value of $\mu$ (Section \ref{subsec:mu}).
In addition to measuring $\mu$ for each region, we apply the same Bayesian method to find the best-estimates of $\Cr$ for each region in Section \ref{subsec:cr_estimate}.
In Section \ref{subsec:application} we use these best-fitting values for $\mu$ and $\Cr$ in combination with the O-ring statistic to test the distributions of YSOs in each of these regions against Eqn. \ref{eqn:intro-sigma}.
We also present the results of testing the distributions of YSOs for general models with $\mu = 0$, $\mu = 1$ and the global value of $\mu = 2.05$.
Each of these models test for a different potential physical description for how the distribution of stars is correlated with column density.
The first model uses $\mu=0$ above a threshold visual extinction $\mathrm{A_v} = 6$ to test if there is a relationship between the amount of column density and surface density of YSOs, or if, once some threshold is reached, the YSOs are simply dispersed randomly.
The second model, $\mu=1$, is motivated by the distribution of prestellar cores which may depend less steeply on column density than protostellar cores \citep{Sokol2019,konyves2020}.
With this model we look to determine if the observed distribution of YSOs is consistent with the distribution they may have had at an earlier stage in their evolution.
The third model explores how well, or even if, a single power law can simultaneously represent multiple star-forming regions.
It was found that, when considering the number of regions that reject the model, the region-specific $\mu$ values were the most successful at describing the distributions of Class~0/I YSOs.
As for the general models, the best-performing model was the global model of $\mu = 2.05$ which was consistent with 3 out of the 5 regions it was applied to, and the worst performing model was $\mu=1$ -- which was rejected by every region.
As a further test, we apply the $\mu = 2.05$ model to the Class~II YSOs in all five regions in Section \ref{subsec:results-classII}.
This is to show that these statistics have enough discriminatory power to distinguish between two potentially similar distributions within the same study region.
Finally, the discussion of these results is presented in Section \ref{sec:discussion}.
There we discuss how effective a column-density-only model is at describing the distributions of YSOs.
We also explore how the rejection of the $\mu = 1$ model by Class~0/I YSOs could imply an environmental dependence on the evolutionary time-scales for prestellar cores and/or Class~0/I YSOs.
\begin{table*}
\centering
\begin{threeparttable}
\caption{\label{table:power-law} YSO Kennicutt-Schmidt power law estimates for different regions.}
\begin{tabular}{l l l }
Region & Power-law & Source \\
\hline
AFGL 490 & $1.8 \pm 0.3$ & \citep{Pokhrel2020} \\
Aquila North & $1.8 \pm 0.1$ & \citep{Pokhrel2020} \\
Aquila South & $2.3 \pm 0.2$ & \citep{Pokhrel2020} \\
Auriga-California Molecular Cloud & 4 & \citep{harvey2013} \\
California & $3.31 \pm 0.23$ & \citep{lada2017} \\
Cep 0B3 & $2.2 \pm 0.1$ & \citep{Pokhrel2020} \\
Cygnus X & $1.9 \pm 0.1$ & \citep{Pokhrel2020} \\
G305 & $2.50 \pm 0.04$ & \citep{willis2015} \\
G326.4 & $1.91 \pm 0.05$ & \citep{willis2015} \\
G326.6 & $1.77 \pm 0.04$ & \citep{willis2015} \\
G333 & $2.86 \pm 0.03$ & \citep{willis2015} \\
G351 & $2.30 \pm 0.03$ & \citep{willis2015} \\
Mon R2 & $2.1 \pm 0.1$ & \citep{Pokhrel2020} \\
Mon R2 & $2.67 \pm 0.02$ & \citep{gutermuth2011} \\
NGC 2264 & $1.8 \pm 0.1$ & \citep{Pokhrel2020} \\
NGC 6334 & $2.08 \pm 0.08$ & \citep{willis2015} \\
Ophiuchus & $1.87 \pm 0.03$ & \citep{gutermuth2011} \\
Ophiuchus & $1.9 \pm 0.1$ & \citep{Pokhrel2020} \\
Orion A & $1.99 \pm 0.05$ & \citep{lombardi2013} \\
Orion A & $2.2 \pm 0.1$ & \citep{Pokhrel2020} \\
Orion B & $2.16 \pm 0.10$ & \citep{lombardi2014} \\
Orion B & $2.3 \pm 0.2$ & \citep{Pokhrel2020} \\
Perseus & $2.4 \pm 0.6$ & \citep{zari2016} \\
Perseus & $2.1 \pm 0.1$ & \citep{Pokhrel2020} \\
S140 & $1.8 \pm 0.2$ & \citep{Pokhrel2020} \\
\hline
\end{tabular}
\begin{tablenotes}
\item Note: Different sources use different methods and different astrophysical objects to produce power-law estimates.
\end{tablenotes}
\end{threeparttable}
\vspace{-4mm}
\end{table*}
\section{SPATIAL STATISTICS}
\label{sec:spatial statistics}
To determine if there is a correlation between the gas density within a local molecular cloud and the number of Class~0/I YSOs it is necessary to produce and test a model that encompasses this relationship.
The type of model that will be described in this paper is known in the field of spatial statistics as a spatial point process, which is a stochastic mechanism that generates a set of points within a study region with the number and locations of these points dependent on the model.
A spatial point pattern is a realisation of the model, and by assuming YSO positions are a realisation of a model it is possible to try and infer information about the model from the observed pattern.
As such, these models and the realisations thereof are informed by physics but are not physical simulations.
\subsection{First-order models}
First-order effects affect the expected number of stars at a given location without consideration of the presence, or absence, of other stars.
Due to this lack of dependence on neighbouring stars these are sometimes attributed to environmental factors.
For example the availability of material for forming stars could be a factor in the number and location of stars formed within a star-forming region.
One measure of first-order effects, the first-order intensity, is the expected density at a given position $x$.
The first-order intensity is given by the equation \citep{Diggle2013}
\begin{equation}
\label{eqn:first-order}
\lambda(x) = \lim_{|dx|\rightarrow 0} \left\{\frac{\mathrm{E}[N(dx)]}{|dx|}\right\},
\end{equation}
where $\lambda(x)$ is the first-order intensity at position $x$, $dx$ is the small region containing $x$, $\mathrm{E}[N(dx)]$ is the expected number of points contained within $dx$ and $|dx|$ denotes the area within $dx$.
Given the first-order intensity it is then possible to estimate the mean number of stars within the study window with
\begin{equation}
\langle N\rangle=\int_A\lambda(x)dx
\label{eqn:pois_nbar}
\end{equation}
where $N$ is the total number of stars within study window $A$.
\subsection{Generating spatial point patterns from first-order models}
\label{subsec:patterns}
With first-order models the probability of placing a star is entirely defined by the first-order intensity, and so a spatial point pattern can be produced by randomly sampling the first-order intensity distribution across the study region.
For a study region represented by a grid some modification needs to be made to account for cells that cover different amounts of area.
The joint-distribution is then composed of two parts: a probability proportional to area and a probability proportional to the first-order intensity.
Therefore, when making a pattern using a first-order model in a gridded study region we sample
\begin{equation}
\label{eqn:prob_u}
\mathrm{prob}(u) = \frac{\lambda(u) |u|}{\sum_v \lambda(v) |v|},
\end{equation}
where $u$ and $v$ are cell indices and $|u|$ and $|v|$ are cell areas.
The left panel of Fig. \ref{fig:serpens_south_random} shows the positions of the Class~0/I YSOs in Serpens South; the right panel shows an example realisation of Eqn. \ref{eqn:prob_u} with $\lambda(u)$ given by Eqn. \ref{eqn:intro-sigma} using $\mu = 2.05$.
Given that they were generated using Eqn. \ref{eqn:intro-sigma}, it is known that the YSOs in the right panel are consistent with this spatial point process and the question remains as to whether or not the YSOs in the left panel are also consistent with this process.
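A realisation of this kind can be generated by directly sampling the cell probabilities. The Python sketch below does this for an arbitrary gridded map (the map, $\mu$, and pixel areas here are synthetic placeholders, not the \textit{Herschel} data):

```python
import numpy as np

def sample_yso_cells(column_density, pixel_area, n_stars, mu, rng=None):
    """Sample n_stars cell indices with probability proportional to
    lambda(u) * |u|, where lambda(u) ~ column_density(u)**mu."""
    if rng is None:
        rng = np.random.default_rng()
    weights = (column_density**mu * pixel_area).ravel()
    prob = weights / weights.sum()
    flat = rng.choice(weights.size, size=n_stars, p=prob)
    return np.unravel_index(flat, column_density.shape)

rng = np.random.default_rng(1)
sigma = rng.lognormal(0.0, 1.0, (64, 64))   # stand-in column density map
area = np.ones_like(sigma)                  # uniform pixel areas here
rows, cols = sample_yso_cells(sigma, area, n_stars=100, mu=2.05, rng=rng)
```

With $\mu = 2.05$ the sampled positions are strongly biased towards high-column-density cells.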
\begin{figure*}
\includegraphics[width = \textwidth]{serpens_south_class0I_2.05_comparison}
\caption{\label{fig:serpens_south_random} (left) Class~0/I YSOs in Serpens South and (right) random realisation of YSOs with $\mu = 2.05$ (see Section \ref{subsubsec:results-mu2}) plotted on \textit{Herschel} 18.2$''$ column density maps.}
\end{figure*}
\subsection{Spherical Projection}
\label{subsec:spherical}
The column density data used in this paper are sections of the celestial sphere which have been stored in a tangent plane projection.
To calculate the O-ring statistic, to estimate the quantity $\mu$ using the methodology in Section \ref{sec:bayes_stats}, and to accurately reproduce first-order spatial point processes, the distances between YSOs and the sizes of pixels need to be measured.
Both of these quantities are affected by the gnomonic projection.
The projected maps are stored in the FITS file format \citep{1981A&AS...44..363W} which contains the keywords necessary to convert pixel coordinates to coordinates in the gnomonic projection plane or celestial sphere.
The set of keywords that allow for coordinate conversion are known as the FITS ``world coordinate system'' (WCS) and with these the pixel coordinates of most maps can be converted to projection and celestial coordinates \citep{calabretta_celestial,greisen_world}.
Software is available to perform transform between these coordinates systems; the results in this paper use the library {\scshape wcslib} \footnote {https://www.atnf.csiro.au/people/mcalabre/WCS/wcslib/}.
While it is possible to use projection coordinates entirely, they are representations of sections of a spherical surface and so it may be simpler and more intuitive to use the native spherical coordinates.
For example, with spherical coordinates the angular distance, $\Delta\sigma$, between two points can be calculated using the haversine formula,
\begin{equation}
\label{eqn:great_circle}
\Delta \sigma = 2\mathrm{arcsin}\sqrt{\mathrm{sin}^2\left(\frac{\Delta \delta}{2}\right)+\mathrm{cos}\delta_1\mathrm{cos}\delta_2\mathrm{sin}^2\left(\frac{\Delta \alpha}{2}\right)}
\end{equation}
where $\alpha$ and $\delta$ refer to the right ascension (RA) and declination (Dec) in radians.
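As a concrete illustration, Eqn. \ref{eqn:great_circle} can be sketched in a few lines of Python (a minimal sketch; all angles are assumed to be in radians):

```python
import math

def angular_separation(ra1, dec1, ra2, dec2):
    """Great-circle distance (radians) between two points on the sphere,
    given RA/Dec in radians, via the haversine formula."""
    d_ra = ra2 - ra1
    d_dec = dec2 - dec1
    a = (math.sin(d_dec / 2.0) ** 2
         + math.cos(dec1) * math.cos(dec2) * math.sin(d_ra / 2.0) ** 2)
    return 2.0 * math.asin(math.sqrt(a))

# Two points one degree apart in RA on the celestial equator:
sep = angular_separation(0.0, 0.0, math.radians(1.0), 0.0)
```

The haversine form is preferred over the simple spherical law of cosines because it remains numerically stable for the small separations relevant here.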
Some projections distort the shape or size of areas on the sphere and so the amount of angular area represented by a given pixel on the map is not necessarily consistent across the projection plane.
The \textit{Herschel} maps used in Section \ref{sec:results} use the gnomonic projection, which is designated the AIPS code `TAN'.
A gnomonic projection is produced by projecting points on the surface of a sphere onto a tangential plane from the perspective of an observer at the sphere's centre.
With this type of projection the amount of distortion increases as a function of angular distance measured from the tangent point, $\theta$.
This distortion increases the projected area covered by objects on the sphere by $1/\mathrm{cos}^3(\theta)$ and, therefore, decreases the amount of area on the sphere covered by a pixel in the projection by $\mathrm{cos}^3(\theta)$.
Finally, the amount of area covered by a pixel is given by,
\begin{equation}
|u| = |O|\mathrm{cos}^3(\theta_{u}),
\end{equation}
where $|u|$ is the area covered by pixel $u$ and $|O|$ is the area of a pixel at the tangent point. The angle $\theta_u$ is equal to $\Delta \sigma$ between the tangent point and $u$.
Unlike angular distance, which requires only the angular coordinates for two points, the area covered by a pixel depends on the type of projection.
As a general solution, however, it is possible to calculate the world coordinates of the corners of each pixel and use those to approximate the areas, though this assumes the pixels are small enough to be approximately flat.
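This distortion correction can be sketched in Python; a minimal sketch assuming square pixels whose side length is quoted at the tangent point (the corner-based approximation described above would replace this for a general projection):

```python
import math

def pixel_solid_angle(pixel_size, theta):
    """Approximate on-sky area of a square gnomonic ('TAN') projection
    pixel of side pixel_size (radians, measured at the tangent point)
    whose centre lies an angular distance theta from the tangent point.
    The cos^3 factor undoes the gnomonic area inflation."""
    return pixel_size ** 2 * math.cos(theta) ** 3

# An 18.2 arcsec pixel at the tangent point, and the same pixel 5 deg away:
p = math.radians(18.2 / 3600.0)
a0 = pixel_solid_angle(p, 0.0)
a5 = pixel_solid_angle(p, math.radians(5.0))
```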
\subsection{Hypothesis testing}
\label{subsec:spatial-hypothesis}
It is possible to test whether a distribution of stars is consistent with a given spatial point process through the use of summary statistics and confidence envelopes.
The most common spatial point process that patterns are tested against is that of complete spatial randomness (CSR) as in \citet{retter2019}, but other processes can be tested against using the methods described in this section.
\subsubsection{Summary statistic}
The summary statistic that will be used and discussed in this paper is the O-ring statistic.
O-ring uses all of the inter-point distances to estimate the average density of stars that would be observed at a distance $r$ from a given star.
As is the case in this work, study windows may be in the form of a grid, and to account for this the methodology of \citet{Wiegand2004} is applied to calculate O-ring.
In general, O-ring can be calculated using
\begin{equation}
\label{eqn:oring}
\mathrm{\hat{O}}(r) = \frac{\sum_{i=1}^{N}\text{Points}[\mathrm{R}_i^w(r)]}{\sum_{i=1}^{N}\text{Area}[\mathrm{R}_i^w(r)]},
\end{equation}
where $\mathrm{R}_i^w(r)$ is an annulus with radius $r$ and width $w$ centred on the $i$th star, and the operators $\text{Points}[\mathrm{R}_i^w(r)]$ and $\text{Area}[\mathrm{R}_i^w(r)]$ count the number of stars and area contained within $\mathrm{R}_i^w(r)$ respectively.
Fig. \ref{fig:spatial_statistics:Oring_explainer} shows a schematic representation of the O-ring function, in which the stars and area within the shaded annulus are counted for star $i$.
If boundary conditions are applied then only the points and area within the intersection of $\mathrm{R}_i^w(r)$ and the study region are counted.
The number of points within $\mathrm{R}_i^w(r)$ in a grid is defined by
\begin{equation}
\text{Points}[\mathrm{R}_i^w(r)] = \sum_u \sum_v \mathrm{S}(u,v)\mathrm{P}(u,v)\mathrm{I}_r(x_{u,v},y_{u,v},x_i,y_i).
\end{equation}
Here, $u$ and $v$ are the row and column indices of the grid respectively.
$\mathrm{S}(u,v)$ is an indicator function equal to 1 if cell $(u,v)$ is contained within the study window and zero otherwise and $\mathrm{P}(u,v)$ is the number of stars contained within $(u,v)$.
Finally, $\mathrm{I}_r$ is another selection function to determine if a cell is within the annulus, defined by
\begin{align}
\label{eqn:points_operator}
\mathrm{I}_r(x_{u,v},y_{u,v},x_i,y_i) = \begin{cases}
1\ &\text{ if } r-\frac{w}{2} \leq d_{(u,v),i} \leq r + \frac{w}{2},\\
0\ &\text{otherwise,}\\
\end{cases}
\end{align}
where $d_{(u,v),i}$ is the distance between the centre of cell $(u,v)$ and the $i$th star. The $\mathrm{Area}$ operator within Eqn. \ref{eqn:oring} is similarly defined
\begin{equation}
\label{eqn:area_operator}
\text{Area}[\mathrm{R}_i^w(r)] = \sum_u \sum_v \mathrm{S}(u,v)\mathrm{A}(u,v)\mathrm{I}_r(x_{u,v},y_{u,v},x_i,y_i),
\end{equation}
where $\mathrm{A}(u,v)$ is the amount of area contained within cell $(u,v)$.
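A minimal Python sketch of Eqns. \ref{eqn:oring}--\ref{eqn:area_operator}, assuming unit pixel spacing with cell $(u,v)$ centred at $(u+0.5, v+0.5)$; a real application would instead take cell centres and areas from the map's WCS:

```python
import math

def o_ring(stars, counts, areas, inside, r, w):
    """Grid-based O-ring estimate following Eqns. (oring)-(area_operator).

    stars  : list of (x, y) star positions
    counts : counts[u][v] = number of stars in cell (u, v)     (P)
    areas  : areas[u][v]  = area of cell (u, v)                (A)
    inside : inside[u][v] = True if the cell is in the window  (S)
    r, w   : annulus radius and width, in the same units as x, y
    """
    n_stars, total_area = 0, 0.0
    for xi, yi in stars:
        for u in range(len(counts)):
            for v in range(len(counts[0])):
                if not inside[u][v]:
                    continue
                # indicator I_r: is the cell centre inside the annulus?
                d = math.hypot(u + 0.5 - xi, v + 0.5 - yi)
                if r - w / 2.0 <= d <= r + w / 2.0:
                    n_stars += counts[u][v]
                    total_area += areas[u][v]
    return n_stars / total_area if total_area > 0 else float("nan")
```

Note that, following Eqn. \ref{eqn:points_operator} as written, a star's own cell contributes to the count if it falls within the annulus.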
There exist other summary statistics such as Ripley's K \citep{Ripley1981}, Diggle's G function \citep{Diggle2013} and the ``free-space'' function; however, the O-ring test has been chosen for this analysis due to its sensitivity compared to the other tests \citep{retter2019}.
\begin{figure}
\includegraphics[width=\columnwidth]{O_ring_explainer}
\caption{\label{fig:spatial_statistics:Oring_explainer} Schematic illustration of the O-ring function. The density at a distance $r$ from star $i$ is estimated by counting the stars within the shaded portion. The shaded area shows the region where $\mathrm{S}(u,v)$ and $\mathrm{I}_r(x_{u,v},y_{u,v},x_i,y_i)$ are equal to 1.}
\end{figure}
\subsubsection{Confidence envelopes}
A summary statistic produces a single value (in this case at a given $r$) to represent a chosen facet of the data being measured.
This value can be compared to the distribution of values the summary statistic takes under some null hypothesis, $H_0$, to determine if the null hypothesis can be rejected with some significance level, $\alpha$.
The distribution of a summary statistic under a simple null hypothesis, such as that of CSR, can sometimes be found analytically \citep{wiegand2016}; otherwise it may be sampled computationally through repeated realisations of $H_0$.
The global confidence envelope covers the range of acceptable values of the summary statistic; if it is exceeded, $H_0$ is rejected with significance $\alpha$.
To test the significance of a single measurement it is sufficient to find the distribution of the statistic under the null hypothesis and determine if the measured value is among the $k$th-most-extreme values where, for a two-sided distribution,
\begin{equation}
\label{eqn:alpha}
\alpha = 2k/(n+1)
\end{equation}
and $n$ is the number of simulated patterns of $H_0$.
Where multiple measurements are tested simultaneously, using the $k$th-most-extreme values for each measurement inflates the overall $\alpha$ of the test, as the probability of finding an extreme value increases with the number of independent tests.
It is possible to test over multiple scales with a controlled $\alpha$ by using a global confidence envelope.
A global confidence envelope is able to identify significant measurements of an observed statistic, $T_1(r)$.
Using the methodology described in \citet{myllmaki2017} the upper and lower bounds of the global confidence envelope are defined by
\begin{align}
\label{eqn:Tlow}
T^u_{\mathrm{low}}(r) &= T_0(r) - u_{\alpha}|\underline{T}(r) - T_0(r)|\\
\label{eqn:Tupp}
T^u_{\mathrm{upp}}(r) &= T_0(r) + u_{\alpha}|\overline{T}(r) - T_0(r)|
\end{align}
respectively, where $T_0(r)$ is the expected value under $H_0$, $\overline{T}(r)$ and $\underline{T}(r)$ are the upper and lower 2.5 per cent quantiles of the distribution of $T(r)$ under $H_0$, and $u_{\alpha}$ is the parameter used to determine the confidence level of the envelopes.
If the distribution of $T(r)$ is not known then it can be estimated from measurements of simulated patterns $T_i(r)$, where $i = 2,3,\ \ldots\ , n+1$.
In this paper $u_{\alpha}$ is the $\alpha (n+1)$th largest value from the collection of $u_i$s calculated using
\begin{equation}
u_i = \max \left[f(r)^{-1} (T_i(r)-T_0(r)) \right]
\end{equation}
where $f(r)$ is
\begin{align}
f(r) = \begin{cases}
\overline{T}(r)-T_0(r)\ &\text{ if } T_i(r) \geq T_0(r) \text{, or}\\
\underline{T}(r)-T_0(r)\ &\text{ if } T_i(r) < T_0(r). \\
\end{cases}
\end{align}
Using Eqns. \ref{eqn:Tlow} and \ref{eqn:Tupp} it is possible to reject a null hypothesis with a controlled $\alpha$ and find the values of $r$ which exceed the envelope.
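The envelope construction above can be sketched in Python; this is a minimal sketch in which the quantiles $\overline{T}(r)$ and $\underline{T}(r)$ are estimated directly from the simulated statistics and $u_\alpha$ is taken as the $\alpha(n+1)$th largest $u_i$, per the text:

```python
def global_envelope(t0, sims, alpha=0.05):
    """Global confidence envelope in the style of Myllymaki et al. (2017).

    t0   : expected statistic under H0 at each scale r
    sims : sims[i][j] = statistic of the i-th simulated pattern at scale j
    Returns (t_low, t_upp) per scale."""
    n, m = len(sims), len(t0)
    # 2.5 per cent upper and lower quantiles of T(r) under H0, per scale
    k = max(1, int(round(0.025 * n)))
    lower, upper = [], []
    for j in range(m):
        vals = sorted(s[j] for s in sims)
        lower.append(vals[k - 1])
        upper.append(vals[n - k])
    # scaled maximum deviation u_i for each simulated pattern
    us = []
    for s in sims:
        devs = [0.0]
        for j in range(m):
            f = (upper[j] if s[j] >= t0[j] else lower[j]) - t0[j]
            if f != 0:
                devs.append((s[j] - t0[j]) / f)
        us.append(max(devs))
    us.sort(reverse=True)
    u_alpha = us[max(1, int(round(alpha * (n + 1)))) - 1]
    t_low = [t0[j] - u_alpha * abs(lower[j] - t0[j]) for j in range(m)]
    t_upp = [t0[j] + u_alpha * abs(upper[j] - t0[j]) for j in range(m)]
    return t_low, t_upp

# Toy demonstration: 99 simulated patterns deviating linearly from
# t0 = 0 across four scales (illustrative values only).
t0 = [0.0, 0.0, 0.0, 0.0]
sims = [[(i - 49) * 0.01 * (j + 1) for j in range(4)] for i in range(99)]
t_low, t_upp = global_envelope(t0, sims)
```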
\section{BAYESIAN STATISTICS}
\label{sec:bayes_stats}
From inspection of Eqn. \ref{eqn:intro-sigma}, the power-law index $\mu$, for a single region, can be estimated from the straight-line gradient in a plot of $\log(\Sigma_\mathrm{SFR})$ versus $\log(\Sigma_\mathrm{Gas})$ \citep{2010ApJ...723.1019H,gutermuth2011}.
A plot of the YSO surface density versus column density for the five regions studied in this paper is presented in Fig. \ref{fig:results-yso_surface_density} and from this it is clear that there is a correlation between YSO surface density and column density in each of these regions.
The gradients ($\mu$ values) are not dissimilar, but different $\Cr$ values are implied as the YSO surface density functions have different y-intercepts for different regions.
One problem with measuring gradients is that the number of YSOs in each $\mathrm{A_v}$ bin is small enough that care needs to be taken when dealing with the uncertainties.
Care must also be taken when estimating $\mu$ values that represent multiple regions as combining area and YSO counts to find an average density assumes every region has the same value of $\Cr$, which is not always true.
For these reasons a Bayesian method of estimating $\mu$ is described in this section which can be extended to calculate joint values of $\mu$ for sets of regions.
\begin{figure*}
\includegraphics[width=\linewidth]{yso_density_plot}
\caption{\label{fig:results-yso_surface_density} YSO surface density measurements within column density bins in Serpens South, Serpens Core, Ophiuchus, NGC1333 and IC348.
As an illustration of typical uncertainties, the Poisson uncertainties on YSO counts are given.
The dashed line shows a gradient of 2.05 (Section \ref{subsec:mu}).}
\end{figure*}
To model the distribution of Class~0/I YSOs with respect to the observed column density in a star forming region we introduce the following equation,
\begin{equation}
\hat{\lambda}(\mathrm{N_{H_2}}) = \Cr\times\mathrm{N_{H_2}}^\mu,
\label{eqn:lmda-model}
\end{equation}
where $\hat{\lambda}(\mathrm{N_{H_2}})$ is the estimate of the number density of Class~0/I YSOs at column density $\mathrm{N_{H_2}}$, $\Cr$ is a region-specific constant that normalises the number of Class~0/I YSOs such that Eqn. \ref{eqn:pois_nbar} is satisfied for a given region, and $\mu$ is the global power law affecting the distribution of YSOs with respect to the column density.
We then look to find the most likely values of $\Cr$ and $\mu$.
First, consider a sub-region, $m$, of the star-forming region that has an approximately constant column density, $\mathrm{N_{H_2}}_{,m}$.
If we assume the probability that this sub-region contains a number of YSOs, $N_m$, follows a Poisson distribution, it can then be shown that the probability of counting $N_m$ YSOs is given by
\begin{multline}
\mathrm{prob}(N_m|A_m,\mathrm{N_{H_2}}_{,m},\Cr,\mu) =\\
\frac{(\Cr(\mathrm{N_{H_2}}_{,m})^{\mu}A_m)^{N_m}e^{-\Cr(\mathrm{N_{H_2}}_{,m})^{\mu}A_m}}{N_m!},
\end{multline}
where $A_m$ is the area of sub-region $m$.
Repeating the experiment with $M$ different sub-regions of the star-forming region results in the probability
\begin{multline}
\label{eqn:N_one_region}
\mathrm{prob}(\{N\}|\{A\},\{\mathrm{N_{H_2}}\},\Cr,\mu) =\\
\prod_{m=1}^{M}\frac{(\Cr(\mathrm{N_{H_2}}_{,m})^{\mu}A_m)^{N_m}e^{-\Cr(\mathrm{N_{H_2}}_{,m})^{\mu}A_m}}{N_m!},
\end{multline}
where $\{N\}$ is the vector of counts $N_m$, and likewise for $\{A\}$ and $\{\mathrm{N_{H_2}}\}$.
Given the condition of similar column densities, this equation functions with any form of subdivision of the star-forming region.
For this paper contours of column density are used to define each sub-region.
If we assume a uniform prior for $\mu$ and a Jeffreys' prior for $\Cr$, we find that the na\"ive prior of $\mu$ and $\Cr$ is inversely proportional to $\Cr$, i.e.
\begin{align}
\label{eqn:one-region-prior}
\mathrm{prob}(\Cr,\mu) = \begin{cases}
\frac{1}{\Cr}\ &\text{ for } \Cr \geq 0 \text{ and } \mu \geq 0,\\
0\ &\text{otherwise.}\\
\end{cases}
\end{align}
With Bayes' theorem we may then construct an equation that can be used to find the probability associated with a combination of $\Cr$ and $\mu$:
\begin{multline}
\label{eqn:c,b_one_region}
\mathrm{prob}(\Cr,\mu|\{N\},\{A\},\{\mathrm{N_{H_2}}\}) \propto \\
\frac{1}{\Cr}\prod_{m=1}^{M}\frac{(\Cr(\mathrm{N_{H_2}}_{,m})^{\mu}A_m)^{N_m}e^{-\Cr(\mathrm{N_{H_2}}_{,m})^{\mu}A_m}}{N_m!}.
\end{multline}
This joint probability density function (PDF) can be marginalised to find the marginal probabilities of $\mu$ and $\Cr$ for the star-forming region separately.
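As a minimal illustration of Eqn. \ref{eqn:c,b_one_region}, the unnormalised log-posterior can be evaluated on a coarse grid of $(\Cr, \mu)$ values; the sub-region counts, areas and column densities below are toy values (column densities in units of $10^{22}~\mathrm{cm^{-2}}$, so the constant plays the role of $\Cr^\prime$), not measurements:

```python
import math

def log_posterior(cr, mu, counts, areas, col_dens):
    """Unnormalised log of Eqn. (c,b_one_region): a 1/Cr (Jeffreys')
    prior times a product of Poisson likelihoods over sub-regions."""
    if cr <= 0 or mu < 0:
        return float("-inf")
    logp = -math.log(cr)  # the 1/Cr prior
    for n, a, nh2 in zip(counts, areas, col_dens):
        lam = cr * nh2 ** mu * a  # expected YSO count in the sub-region
        logp += n * math.log(lam) - lam - math.lgamma(n + 1)
    return logp

# Toy sub-regions constructed so that counts follow lambda ~ N^2 exactly:
col_dens = [0.5, 1.0, 2.0, 4.0]
areas = [4.0, 2.0, 1.0, 0.5]
counts = [1, 2, 4, 8]

# Coarse grid search for the most probable (Cr', mu)
grid = [(0.1 * (i + 1), 0.1 * j) for i in range(40) for j in range(40)]
best_cr, best_mu = max(
    grid, key=lambda p: log_posterior(p[0], p[1], counts, areas, col_dens)
)
```

In practice the grid is replaced by a proper marginalisation over $\Cr$, but the sketch shows how the posterior concentrates near the generating power law.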
As an extension, we may then ask whether the power law in Eqn. \ref{eqn:lmda-model} is not specific to a single star-forming region, but is instead a universal property shared across different star-forming regions, each with unique values of $\Cr$.
It would be desirable, then, to combine the measurements from multiple star-forming regions, each with their own sub-regions, to produce a single best-estimate for $\mu$.
These measurements can be included by modifying the prior in Eqn. \ref{eqn:one-region-prior} to be proportional to the inverse of the product of $\Cr$ values,
\begin{equation}
\label{eqn:more-region-prior}
\mathrm{prob}(\{\Cr\},\mu) \propto \prod_{i=1} ^\mathit{\#\ s.f.\ regions} \frac{1}{{\Cr}_{,i}},
\end{equation}
and adding the measurements to the product in Eqn. \ref{eqn:c,b_one_region} to produce the equation
\begin{multline}
\label{eqn:c,b_all_region}
\mathrm{prob}(\{\Cr\},\mu|\{N\},\{A\},\{\mathrm{N_{H_2}}\}) \propto \\
\prod_{i=1}^\mathit{\#\ s.f.\ regions} \frac{1}{{\Cr}_{,i}}\prod_{m=1}^{M}\frac{({\Cr}_{,i}(\mathrm{N_{H_2}}_{,m})^{\mu}A_m)^{N_m}e^{-{\Cr}_{,i}(\mathrm{N_{H_2}}_{,m})^{\mu}A_m}}{N_m!}.
\end{multline}
The addition of more regions increases the number of dimensions of the PDF which increases the computational difficulty of sampling the PDF without the use of techniques such as Markov chain Monte Carlo (MCMC) \citep{hastings1970,2010CAMCS...5...65G,Foreman_Mackey_2013}.
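A sketch of sampling Eqn. \ref{eqn:c,b_all_region} for two toy regions sharing a common $\mu$; a simple random-walk Metropolis sampler stands in here for the emcee ensemble sampler used in the actual analysis, and all data values are illustrative (constructed with $\mu = 2$), not measurements:

```python
import math
import random

# Toy data: two regions sharing mu = 2, column densities in 1e22 cm^-2
regions = [
    {"nh2": [0.5, 1.0, 2.0], "area": [4.0, 2.0, 1.0], "n": [1, 2, 4]},
    {"nh2": [1.0, 2.0, 4.0], "area": [4.0, 2.0, 1.0], "n": [8, 16, 32]},
]

def log_prob(x):
    """Log of Eqn. (c,b_all_region); x = [ln Cr_1, ln Cr_2, mu].
    Sampling ln Cr makes the 1/Cr (Jeffreys') prior flat."""
    mu = x[-1]
    if mu < 0:
        return float("-inf")
    logp = 0.0
    for log_cr, reg in zip(x[:-1], regions):
        cr = math.exp(log_cr)
        for n, a, nh2 in zip(reg["n"], reg["area"], reg["nh2"]):
            lam = cr * nh2 ** mu * a  # expected YSO count in the bin
            logp += n * math.log(lam) - lam - math.lgamma(n + 1)
    return logp

def metropolis(log_prob, x0, n_steps, step=0.1, seed=2):
    """Minimal random-walk Metropolis sampler (stand-in for emcee)."""
    random.seed(seed)
    x, lp = list(x0), log_prob(x0)
    chain = []
    for _ in range(n_steps):
        prop = [xi + random.gauss(0.0, step) for xi in x]
        lp_prop = log_prob(prop)
        if math.log(random.random()) < lp_prop - lp:  # accept/reject
            x, lp = prop, lp_prop
        chain.append(list(x))
    return chain

chain = metropolis(log_prob, [0.0, 0.0, 1.0], n_steps=6000)
mu_samples = [s[-1] for s in chain[2000:]]  # discard burn-in
mu_mean = sum(mu_samples) / len(mu_samples)
```

Working in $\ln \Cr$ is a common reparameterisation: the $1/\Cr$ prior becomes uniform, which simplifies the sampled density.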
\section{APPLICATION TO STAR-FORMING REGIONS }
\label{sec:results}
In this work $\mu$ values are determined for the star-forming regions Serpens South, Serpens Main, Ophiuchus, NGC1333 and IC348, as well as a joint $\mu$ value for all regions.
The joint $\mu$ value, along with $\mu = 0$ and $\mu = 1$, was then tested using 95 per cent confidence envelopes as described in Section \ref{sec:spatial statistics}.
The \citet{dunham2015} YSO catalogue was used for YSO position and classification for all regions.
YSOs classified as Class~0/I are those with a corrected spectral index value greater than or equal to 0.3 and $T_\text{bol} < 650~\mathrm{K}$.
Using a single catalogue and classifying with corrected spectral index and $T_\text{bol}$ provides a simple and consistent method of identifying YSO populations which can be compared between clouds. Missing or misclassified sources should not affect the results unless there is a selection bias with respect to column density, which we do not expect except possibly at high column densities (discussed further in Sect.~\ref{sec:discussion}).
The column density data used for the different regions were the \textit{Herschel} 18.2$''$ resolution maps \citep{andre2010,palmeirim2013}.
Table \ref{table:region_summary} lists the number of Class~0/I YSOs and distances to each cloud, as well as the RA and Dec boundaries used to extract the regions.
\begin{table*}
\begin{threeparttable}
\caption{\label{table:region_summary} Summary of the properties of the star-forming regions analysed.}
\begin{tabular}{c c c c c}
Region & No. of & RA limits & Dec limits & Distance (pc) \\
& Class~0/I & & &\\
& YSOs & & &\\
\hline
Serpens South & 44 & $277.2\degree \leq \mathrm{RA} \leq 277.7\degree$ & $-2.25\degree \leq \mathrm{Dec} \leq -1.75\degree$ & 484 \tnote{a}\\
Serpens Core & 16 & $277.4\degree \leq \mathrm{RA} \leq 277.6\degree$ & $1.18\degree \leq \mathrm{Dec} \leq 1.38\degree$ & 484 \tnote{a}\\
Ophiuchus & 24 & $246.0\degree \leq \mathrm{RA} \leq 248.5\degree$ & $-25.2 \degree\leq \mathrm{Dec} \leq -23.8\degree$ & 144 \tnote{a}\\
NGC1333 & 32 & $52.0\degree \leq \mathrm{RA} \leq 52.8\degree$ & $31.0\degree \leq \mathrm{Dec} \leq 31.8\degree$ & 293 \tnote{b}\\
IC348 & 12 & $55.8\degree \leq \mathrm{RA} \leq 56.4\degree$ & $31.9\degree \leq \mathrm{Dec} \leq 32.5\degree$ & 321 \tnote{b}\\
\hline
\end{tabular}
\begin{tablenotes}
\item[a] \citep{zucker2019}
\item[b] \citep{Ortiz2018}
\end{tablenotes}
\end{threeparttable}
\end{table*}
\subsection{Estimations of $\mu$}
\label{subsec:mu}
In this section, we present the results of estimating the power law using the Bayesian marginalisation described in Section \ref{sec:bayes_stats}.
The joint-probability distributions for $\Cr$ and $\mu$ were calculated using the number of YSOs and the area contained within column density bins in each region. The results of this analysis are presented in Fig. \ref{fig:join-pdf-all}.
These joint-probability distributions were then marginalised over $\Cr$ to find the PDFs for $\mu$ for each region, the results of which are shown in Fig. \ref{fig:mu_regions} and Table \ref{table:mu}.
To determine if these joint-probability distributions were reasonable, Eqn. \ref{eqn:pois_nbar} was used to find the solutions where $\langle N\rangle$ was equal to the observed number of YSOs in each region.
These are overlaid on Fig. \ref{fig:join-pdf-all} as dashed lines.
The intersection between Eqn. \ref{eqn:pois_nbar} and the high probability density regions of the joint probability distribution demonstrates that Eqn. \ref{eqn:c,b_one_region} is consistent with producing $\Cr$ and $\mu$ values that approximate the number of YSOs used to calculate the joint-distribution.
As can be seen in Fig.~\ref{fig:join-pdf-all}, this overlapping of the two functions is consistent across all regions.
To find the most likely value of $\mu$ over all regions, the areas and numbers of YSOs in each sub-region were combined into a single 6-dimensional joint-probability distribution: one dimension for each of the five regions' $\Cr$ values and one for $\mu$.
This distribution was sampled and marginalised using the MCMC functionality of the {\sc Python} package {\scshape emcee} \citep{Foreman_Mackey_2013} to find the PDF for $\mu$.
The best-estimate for the global power law was found to be $\mu = 2.05 \pm 0.20$, using a 95 per cent confidence interval as the uncertainty.
A value of $\mu = 2.05$ sits within the 95 per cent confidence intervals for Serpens Core, Ophiuchus and IC348 and only marginally outside those of Serpens South and NGC1333.
While further testing is required to determine if the distributions of the YSOs within these regions are consistent with a global power law, the global power law appears to adequately describe the distribution of these YSOs as a set.
{\graphicspath{{./Figures/beta_pdf/rescaled/}}
\begin{figure*}
\begin{subfigure}[b]{0.45\linewidth}
\includegraphics[width=1.1\linewidth]{serpens_south/pdf_with_margs_serpens_south}
\end{subfigure}
\begin{subfigure}[b]{0.45\linewidth}
\includegraphics[width=1.1\linewidth]{serpens_core/pdf_with_margs_serpens_core}
\end{subfigure}
\begin{subfigure}[b]{0.45\linewidth}
\includegraphics[width=1.1\linewidth]{ophiuchus/pdf_with_margs_ophiuchus}
\end{subfigure}
\begin{subfigure}[b]{0.45\linewidth}
\includegraphics[width=1.1\linewidth]{ngc1333/pdf_with_margs_ngc1333}
\end{subfigure}
\begin{subfigure}[b]{0.45\linewidth}
\includegraphics[width=1.1\linewidth]{ic348/pdf_with_margs_ic348}
\end{subfigure}
\caption[ Joint-probability distribution of Eqn. \ref{eqn:c,b_one_region} for Class~0/I YSOs in Serpens South, Serpens Core, Ophiuchus, NGC1333 and IC348 with the marginalised probability density functions for $\mu$ and $\Cr^{\prime}$]{\label{fig:join-pdf-all} Joint-probability distribution of Eqn. \ref{eqn:c,b_one_region} for Class~0/I YSOs in the labelled star-forming regions with the marginalised probability density functions for $\mu$ and $\Cr^{\prime}$. The contours outline the 50 and 95 per cent cumulative probabilities and the dashed line follows the solutions to Eqn. \ref{eqn:pois_nbar}. The x-axis is $\Cr ^\prime$ (Eqn. \ref{eqn:cr_prime}), because the joint-probability distributions have been calculated using column density values scaled by a factor of $10^{-22}$. This reduces the span of $\Cr$ values and allows the structure in the joint-probability distribution to be distinguishable. See Section \ref{subsec:cr_estimate}}
\end{figure*}}
\begin{figure}
\includegraphics[width=\columnwidth]{box_plot}
\caption{\label{fig:mu_regions} Marginalised distributions for $\mu$ for Serpens South, Serpens Core, Ophiuchus, NGC1333, IC348 and global estimate. The box and whiskers present the best estimate and the 50 and 95 per cent intervals. The orange lines present the best estimate and the 50 and 95 percent intervals for the global $\mu$ value.}
\end{figure}
\begin{table}
\caption{\label{table:mu} $\mu$ estimates for all regions.}
\begin{tabular}{c c c}
Region & Best estimate of $\mu$ & 95 per cent confidence interval\\
\hline
Serpens South & 2.39 & $2.06 \leq \mu \leq 2.70$\\
Serpens Core & 2.06 & $1.45 \leq \mu \leq 2.76$\\
Ophiuchus & 1.78 & $1.28 \leq \mu \leq 2.20$\\
NGC1333 & 1.66 & $1.28 \leq \mu \leq 2.04$\\
IC348 & 3.00 & $2.03 \leq \mu \leq 4.04$\\
All regions & 2.05 & $1.85 \leq \mu \leq 2.25$ \\
\hline
\end{tabular}
\end{table}
\subsection{Estimations of $\Cr$}
\label{subsec:cr_estimate}
In this section we present the results of estimating the region-specific constants for each of the five regions.
The best-estimates for $\Cr$ were produced by marginalising the joint-probability distributions over $\mu$, as demonstrated in Fig. \ref{fig:join-pdf-all}.
These results are presented in Table \ref{table:crprime} as $\Cr^\prime$ values, which are related to $\Cr$ by
\begin{equation}
\label{eqn:cr_prime}
\Cr^\prime = \Cr \times (10^{22})^\mu
\end{equation}
or, equivalently,
\begin{equation}
\hat{\lambda}(\mathrm{N_{H_2}}) = \Cr^\prime \times \left(\frac{\mathrm{N_{H_2}}}{10^{22}~\mathrm{cm^{-2}}}\right)^\mu.
\label{eqn:lmda-crprime-model}
\end{equation}
The results are presented in this form because $\Cr$ values scale with $\mathrm{N_{H_2}}^{-\mu}$, and with column density values of order $\sim 10^{22}~\mathrm{cm^{-2}}$ the uncertainties in $\mu$ lead to a large range in the magnitudes of potential $\Cr$ values.
It is because of such large ranges of potential $\Cr$ values that the joint-probability densities of Fig. \ref{fig:join-pdf-all} were presented using $\Cr^\prime$.
While $\Cr$ (and $\Cr^\prime$) values are important for estimating the amount of star formation within a region with Eqn. \ref{eqn:lmda-model}, $\Cr$ does not affect where in a cloud the stars will be positioned in the model -- as can be seen by substituting Eqn. \ref{eqn:lmda-model} into Eqn. \ref{eqn:prob_u} -- and so the observed number of YSOs in a region can be used when simulating a spatial point pattern.
In addition, care must be taken in the interpretation of the meaning of the values.
If we take the logarithm of Eqn. \ref{eqn:lmda-model},
\begin{equation}
\log(\Sigma_\mathrm{SFR}) = \log(\Cr) + \mu \log(\Sigma_{\mathrm{Gas}}),
\label{eqn:log-lmda}
\end{equation}
we can see that for a given value of $\mu$, $\Cr$ is equal to the expected YSO surface density when the gas density measure is equal to one, in this case $\Sigma_{\mathrm{Gas}} = 1~\mathrm{cm^{-2}}$.
From this we can see that to compare $\Cr$ values is to compare \emph{expected} YSO surface densities at unit $\Sigma_{\mathrm{Gas}}$, and since $\Sigma_{\mathrm{Gas}}$ can be any density measure, $\Cr$ can be measured at any column density.
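As a short worked example, the expected YSO surface density of Eqn. \ref{eqn:lmda-crprime-model} can be evaluated at any reference column density; here using the global $\mu = 2.05$ and the $\mu = 2.05$ Ophiuchus estimate $\Cr^\prime = 3.91~\mathrm{stars~pc^{-2}}$ from Table \ref{table:crprime205}:

```python
def expected_yso_density(nh2, cr_prime, mu):
    """Expected YSO surface density (stars pc^-2) from Eqn.
    (lmda-crprime-model); nh2 in cm^-2, cr_prime in stars pc^-2."""
    return cr_prime * (nh2 / 1e22) ** mu

# Ophiuchus with the global power law (Table crprime205 values):
d1 = expected_yso_density(1e22, 3.91, 2.05)  # = Cr' at the reference density
d2 = expected_yso_density(2e22, 3.91, 2.05)  # a factor 2**2.05 higher
```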
Fig. \ref{fig:yso-best-line} shows the expected YSO surface densities using the individual best estimates of $\mu$ and $\Cr$ from Tables \ref{table:mu} and \ref{table:crprime} -- this is a reasonable procedure as, in these regions, the best individual estimates of $\mu$ and $\Cr$ are approximately equal to the best joint estimates for $\mu$ and $\Cr$.
From this figure it can be seen that the YSO surface density in these regions is well represented by a power-law with column density using the results of Eqn. \ref{eqn:c,b_all_region}.
As discussed, since $\Cr$ is the expected YSO surface density at a chosen reference column density, we can also see in Fig. \ref{fig:yso-best-line} that which region has the highest $\Cr$ depends on this choice of reference.
While it is not possible to remove the $\mu$ dependence from $\Cr$, it is possible to find the best-estimates for $\Cr$ in each region assuming the same value of $\mu$ across all regions.
Table \ref{table:crprime205} presents the best estimates of $\Cr^\prime$ assuming $\mu = 2.05$, and Fig. \ref{fig:yso-trend205} shows the new expected YSO surface density functions.
We can see from these results the effect of different star-formation efficiencies on regions which are assumed to have the same column density dependence.
\begin{table}
\caption{\label{table:crprime} Estimates of $\Cr^\prime$ from marginalisation over $\mu$ for all regions.}
\begin{center}
\begin{tabular}{c c c}
Region & Best estimate of $\Cr^\prime$ & 95 per cent confidence interval\\
&$(\mathrm{stars~pc^{-2}})$&\\
\hline
Serpens South & $0.56$ & $0.31 \leq \Cr^\prime \leq 0.94$\\
Serpens Core & $1.26$ & $0.25 \leq \Cr^\prime \leq 3.93$\\
Ophiuchus & $5.36$ & $2.71 \leq \Cr^\prime \leq 9.31$\\
NGC1333 & $8.31$ & $4.10 \leq \Cr^\prime \leq 14.73$\\
IC348 & $2.53$ & $0.48 \leq \Cr^\prime \leq 7.5$\\
\hline
\end{tabular}
\end{center}
\end{table}
\begin{table}
\caption{\label{table:crprime205} Estimates of $\Cr^\prime$ for all regions for $\mu = 2.05$.}
\begin{center}
\begin{tabular}{c c c}
Region & Best estimate of $\Cr^\prime$ & 95 per cent confidence interval\\
&$(\mathrm{stars~pc^{-2}})$&\\
\hline
Serpens South & $0.88$ & $0.64 \leq \Cr^\prime \leq 1.16$\\
Serpens Core & $1.27$ & $0.74 \leq \Cr^\prime \leq 1.93$\\
Ophiuchus & $3.91$ & $2.50 \leq \Cr^\prime \leq 5.62$\\
NGC1333 & $4.50$ & $3.06 \leq \Cr^\prime \leq 6.22$\\
IC348 & $6.50$ & $3.11 \leq \Cr^\prime \leq 10.85$\\
\hline
\end{tabular}
\end{center}
\end{table}
\begin{figure}
\includegraphics[width=\linewidth]{yso_density_plot_6panel_trend.pdf}
\caption[YSO surface density measurements within column density bins in Serpens South, Serpens Core, Ophiuchus, NGC1333 and IC348 with straight lines showing best estimates of $\mu$ and $\Cr$ in each region]{\label{fig:yso-best-line} YSO surface density measurements within column density bins in Serpens South, Serpens Core, Ophiuchus, NGC1333 and IC348 with straight lines showing best estimates of $\mu$ and $\Cr$ in each region.
Uncertainties on Ophiuchus are the Poisson uncertainties on the YSO counts, shown to give an idea of typical YSO surface density uncertainties.}
\end{figure}
\begin{figure}
\includegraphics[width=\linewidth]{yso_density_plot_6panel_2.05.pdf}
\caption[ YSO surface density measurements within column density bins in Serpens South, Serpens Core, Ophiuchus, NGC1333 and IC348 with straight lines showing best estimates of $\Cr$ assuming $\mu = 2.05$ in each region]{\label{fig:yso-trend205} YSO surface density measurements within column density bins in Serpens South, Serpens Core, Ophiuchus, NGC1333 and IC348 with straight lines showing best estimates of $\Cr$ assuming $\mu = 2.05$ in each region.
Error bars are the Poisson uncertainties on the YSO counts, shown to give an idea of typical YSO surface density uncertainties.}
\end{figure}
\subsection{Application to simulated protostar spatial distributions}
\label{subsec:application_simulated}
In this section we apply the O-ring statistic with 95 per cent global confidence envelopes to two sets of simulated YSO distributions in Serpens South, presented in Fig. \ref{fig:serpens_simulated}.
Both sets of simulated data contain the same number of YSO positions as Class~0/I YSOs observed in Serpens South and were generated using Eqn. \ref{eqn:lmda-model} with $\mu = 2.05$ using the \textit{Herschel} column density data for Serpens South.
The left-hand, or unbiased, distribution in Fig.~\ref{fig:serpens_simulated} was generated using a probability distribution which spanned the entire study region, while the right, or biased, distribution was generated using a probability map covering only the south-west portion of the map.
Using the same methods applied to the star forming regions in Section \ref{subsec:mu}, $\mu$ was measured for the unbiased and biased distributions to find $\mu =2.06$ and $\mu = 2.01$ respectively.
The lower portion of Fig. \ref{fig:serpens_simulated} presents the results of using the O-ring statistic to test for Eqn. \ref{eqn:lmda-model} with $\mu = 2.05$.
We can see from these results that the unbiased distribution is not rejected, and so is consistent with this model -- this is to be expected as the unbiased distribution is a realisation of said model.
We can also see that the biased distribution rejects this model as the O-ring statistic exceeds the envelope.
These results show how the O-ring test is able to reject the spatially biased distribution of YSOs, whereas a power-law measurement like $\mu$ does not have this type of discriminatory power.
\begin{figure*}
\includegraphics[width=\textwidth]{serpens_south_model_comparison}
\caption[Comparison of two distributions of YSOs generated using the \textit{Herschel} column density data for Serpens South and Eqn. \ref{eqn:lmda-model} with $\mu = 2.05$ where one is biased spatially]{\label{fig:serpens_simulated} Two distributions of YSOs generated using the \textit{Herschel} column density data for Serpens South and Eqn. \ref{eqn:lmda-model} with $\mu = 2.05$ with the 95 per cent confidence envelope tests for Eqn. \ref{eqn:lmda-model} with $\mu = 2.05$. The left-hand distribution was generated using the entire study region, while the right, was generated using a probability map covering only the south-west portion of the map.}
\end{figure*}
\subsection{Application to protostar spatial distributions}
\label{subsec:application}
The distributions of protostars in Perseus, Ophiuchus and Serpens were tested against four distribution models: a minimum threshold of $\mathrm{N_{H_2}} = 6\times10^{21}~\mathrm{cm^{-2}}$ for YSOs to be placed, but with no other dependence on column density; a power-law dependence with $\mu = 1$; a power-law dependence with $\mu = 2.05$; and a power law equal to the Bayesian-estimated power law for the region.
The distributions of protostars were tested for their consistency with a distribution model using the O-ring statistic as a summary statistic and 95 per cent global confidence envelopes.
The O-ring statistic for the distribution of Class~0/I YSOs was measured in each region at a set of spatial scales,
\begin{equation}
r=\{\,x \mid x = n \Delta r,\ x \leq R,\ n \geq 1\,\},
\end{equation}
where $\Delta r = 0.03~\mathrm{pc}$ and $R$ is equal to half the length of the shortest axis (in either RA or Dec) of the study window.
In other words, $r$ values are linearly spaced in intervals of $\Delta r$ from $r = \Delta r$ up to the largest scale that is less than, or equal to, half of the length of the shortest axis.
Following the results from \citet{retter2019}, the widths of the annuli used in the O-ring test are logarithmic and are equal to
\begin{equation}
w = 0.6 \times \rho,
\end{equation}
where $\rho$ is the set of spatial scales for each region, $r$, converted into degrees.
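A minimal sketch of how these scales and annulus widths could be constructed (Python; the window size and the degrees-per-parsec conversion are illustrative placeholders, since both depend on the region in question):

```python
import numpy as np

def oring_scales(shortest_axis_pc, delta_r=0.03, width_factor=0.6,
                 deg_per_pc=0.2):
    """Spatial scales r (pc) and annulus widths w (deg) for the O-ring test.

    deg_per_pc is a placeholder conversion; in practice it follows from
    the region's distance.
    """
    R = shortest_axis_pc / 2.0                       # largest allowed scale
    n = np.arange(1, int(np.floor(R / delta_r)) + 1)
    r = n * delta_r                                  # r = {n*dr : n >= 1, n*dr <= R}
    rho = r * deg_per_pc                             # scales converted to degrees
    w = width_factor * rho                           # w = 0.6 * rho
    return r, w
```

Because $w$ is proportional to $r$, the annuli have constant fractional width, i.e. they are evenly spaced in the logarithm of the scale.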
Confidence envelopes were generated for each of the four models described earlier using 99 realisations of the first-order processes.
Each realisation was produced by sampling the first-order intensity map a number of times equal to the number of YSOs observed in the region.
The O-ring statistic was measured for each realisation at the spatial scales $r$ with annuli widths $w$.
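The envelope procedure described above can be sketched as follows (Python; this is an illustration, not the analysis code: it uses a simplified O-ring estimator with no edge correction, pointwise rather than global bounds, and assumes $r$ and $w$ are in the same units and that the first-order intensity is supplied as a pixelised probability map):

```python
import numpy as np

rng = np.random.default_rng(42)

def sample_realisation(prob_map, n_yso, pix_scale):
    """Draw n_yso positions from a first-order intensity (probability) map."""
    p = prob_map.ravel() / prob_map.sum()
    idx = rng.choice(p.size, size=n_yso, p=p)
    iy, ix = np.unravel_index(idx, prob_map.shape)
    # jitter positions within each pixel so points are not pinned to centres
    x = (ix + rng.random(n_yso)) * pix_scale
    y = (iy + rng.random(n_yso)) * pix_scale
    return np.column_stack([x, y])

def oring(points, r, w):
    """Mean point density in annuli [r - w/2, r + w/2) around each point."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)                 # exclude self-pairs
    out = np.empty_like(r)
    for k, (rk, wk) in enumerate(zip(r, w)):
        counts = ((d >= rk - wk / 2) & (d < rk + wk / 2)).sum(axis=1)
        area = np.pi * ((rk + wk / 2) ** 2 - (rk - wk / 2) ** 2)
        out[k] = counts.mean() / area
    return out

def envelope(prob_map, n_yso, r, w, pix_scale, n_sim=99):
    """Bounds on the O-ring statistic from n_sim realisations of the model."""
    sims = np.array([oring(sample_realisation(prob_map, n_yso, pix_scale), r, w)
                     for _ in range(n_sim)])
    return sims.min(axis=0), sims.max(axis=0)
```

A measured O-ring curve lying outside the bounds at any tested scale is then grounds for rejecting the first-order model, with global envelopes used in the analysis proper to control the overall significance level.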
All of the confidence envelopes, along with the measured O-ring statistic for the Class~0/I YSOs, are presented in Fig. \ref{fig:all_envelopes} and discussed in the following subsections.
The y-axes of the subplots in Fig. \ref{fig:all_envelopes} show the measured O-ring statistic divided by the YSO density of the study window encompassing the star forming region, i.e.
\begin{equation}
\bar{\lambda} = \frac{\sum_u \sum_v \mathrm{P}(u,v)}{\sum_u \sum_v \mathrm{A}(u,v)}.
\end{equation}
As such, the y-axis represents how much more, or less, clustered the YSOs are compared to CSR in the same window.
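In code form, assuming $\mathrm{P}(u,v)$ holds the per-pixel YSO counts and $\mathrm{A}(u,v)$ the per-pixel areas (interpretations taken from context), the normalising density is simply:

```python
import numpy as np

def mean_density(P, A):
    """Window-averaged YSO density: sum of per-pixel counts P divided by
    the sum of per-pixel areas A (map interpretations assumed from context)."""
    return P.sum() / A.sum()
```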
\subsubsection{CSR in gas above cutoff value}
\label{subsubsec:results-csr}
The simplest null hypothesis is that there is no correlation between molecular cloud material and YSOs.
CSR is unlikely to be a successful model as (i) protostars and prestellar cores are known to be associated with dense material within molecular clouds \citep{andre2010}, (ii) the measured power laws in Section \ref{subsec:mu} are greater than zero and (iii) it was shown that Serpens South is inconsistent with this model \citep{retter2019}.
A more sensible, simple relationship between column density and YSOs is one in which some material is required for stars to form but the amount of material has no impact on the number of YSOs. Therefore, we test a model in which star formation is equally likely above some threshold column density, specifically that measured in Taurus, Ophiuchus and Perseus \citep{onishi1998,johnstone2004,andre2010}.
The spatial point process used for the confidence envelopes uses a uniform probability for forming stars in any pixel with a visual extinction above $\mathrm{A_v} = 6$ (assumed to be equal to $N_{\mathrm{H}_2} = 6\times10^{21}~\mathrm{cm}^{-2}$) and zero otherwise.
The study windows covered in this paper are too limited in size to come to any conclusions on the existence, or value of, a star-formation column density threshold.
A more complete study of star-formation thresholds would require a greater array of thresholds, and study windows covering more low-column density space. However, including the previously-determined threshold provides a more reasonable model for uniform star formation than no threshold.
The envelopes for this model, presented in the first column of Fig. \ref{fig:all_envelopes}, are exceeded by every region except NGC1333.
For most regions this model produced too few pairs of YSOs at small scales, as evidenced by the measured O-ring statistics exceeding the upper bound of the envelope.
The O-ring statistic for IC348, however, exceeds the lower bound of the confidence envelope at a separation of 0.7~pc, due to too few YSOs at that separation. These results are summarised in Table~\ref{tab:oring_summary}.
\begin{table}
\centering
\begin{tabular}{l |c c c c c}
& \multicolumn{4}{c}{Class~0/I} &\multicolumn{1}{c}{Class~II}\\
Model & CSR & $\mu =1$ & $\mu=2.05$ & best $\mu$ & $\mu=2.05$\\
\hline
Consistent & 1 & 0 & 3 & 4 & 1\\
Rejected & 4 & 5 & 2 & 1 & 4\\
\end{tabular}
\caption{The number of regions that are consistent with or rejected by the O-ring test for each model.}
\label{tab:oring_summary}
\end{table}
\subsubsection{Envelopes with $\mu = 1$}
\label{subsubsec:results-mu1}
A power law of $\mu = 1$ means that the surface density of Class~0/I YSOs is directly proportional to the column density.
This is a worthwhile test to perform as it is the simplest relationship in which the surface density of YSOs increases with column density.
It is also of interest as within Orion B the distribution of prestellar cores has been observed to follow a linear relationship with column density above a visual extinction threshold of $\mathrm{A_v} \sim 7$ \citep{konyves2020}, and although the prestellar core distribution is closer to $\mu=2$ in Monoceros~R2 \citep{Sokol2019}, it is still less steep than the protostellar distribution ($\mu = 2.67$).
The results of applying this model are presented in the second column of Fig. \ref{fig:all_envelopes}.
Serpens South, Serpens core, Ophiuchus and IC348 all exceed the envelope at small scales due to YSOs being more clustered at that scale than typically measured with a $\mu$ equal to 1.
NGC1333 also exceeds the envelope though at a more intermediate scale of 1.3~pc.
\subsubsection{Envelopes with $\mu = 2.05$}
\label{subsubsec:results-mu2}
Following the results discussed previously in Section \ref{subsec:mu} the third model tested was that of the global value of $\mu = 2.05$.
This power is the best estimate of a model where the distribution of Class~0/I protostars is proportional to column density raised to a power which is consistent across the five star-forming regions examined in this paper.
The 95 per cent confidence envelopes presented in the third column of Fig. \ref{fig:all_envelopes} show that Serpens South and NGC1333 both exceed the envelopes at spatial scales around 0.15~pc and therefore reject the model.
IC348, Serpens Core and Ophiuchus remain entirely within the envelopes and are therefore consistent with the model.
While still rejected by two regions, this was the most successful value of $\mu$ tested.
\begin{figure*}
\begin{center}
\includegraphics[width=\linewidth]{all_envelopes_all_regions}
\caption{\label{fig:all_envelopes} Measured $\mathrm{O}/\hat{\lambda}$ vs $r$ for Class~0/I YSOs in Serpens South, Serpens Core, Ophiuchus, NGC1333 and IC348 with 95 per cent confidence envelopes for different YSO surface-density models: (left) no dependence above a column density threshold of $N_{\mathrm{H}_2} = 6 \times 10^{21} \hbox{cm}^{-2}$; (centre) a power law dependence with $\mu=1$; (right) $\mu=2.05$. }
\end{center}
\end{figure*}
\subsubsection{Envelopes with best estimate for $\mu$}
\label{subsubsec:results-mubest}
The final test performed on each region was using the best-estimate for $\mu$ calculated in Section \ref{subsec:mu}.
Unlike the previous models where one value of $\mu$ was applied to all of the regions, with this test each region was tested against a different value for $\mu$.
Confidence envelopes were produced for each region using the best-estimates of $\mu$ presented in Table \ref{table:mu} and the results are presented in Fig. \ref{fig:class0I_bestmu}.
Serpens South, Serpens Core, NGC1333 and IC348 all remain within their respective envelopes; Ophiuchus, however, rejects the model on small scales of 0.06~pc.
\begin{figure}
\begin{center}
\includegraphics[width=\linewidth]{class0I_bestmu_0.03pc}
\caption[Measured $\mathrm{O}/\hat{\lambda}$ vs $r$ for Class~0/I YSOs in Serpens South, Serpens Core, Ophiuchus, NGC1333 and IC348 with 95 per cent confidence envelopes using the best-estimate for $\mu$ in each region from Table \ref{table:mu}]{\label{fig:class0I_bestmu} Measured $\mathrm{O}/\hat{\lambda}$ vs $r$ for Class~0/I YSOs in Serpens South, Serpens Core, Ophiuchus, NGC1333 and IC348 with 95 per cent confidence envelopes using the best-estimate for $\mu$ in each region from Table \ref{table:mu}.}
\end{center}
\end{figure}
\subsection{Application to Class~II YSOs}
\label{subsec:results-classII}
Class~II YSOs are more evolved than Class~0/I sources and tend to be less associated with the dense gas material \citep{mairs2016}; it is likely, then, that the surface-density of Class~II YSOs should follow a different power law with column density to Class~0/I YSOs, if any at all.
To show that the O-ring statistic with 95 per cent confidence envelopes has enough discriminatory power to distinguish between YSO surface-density models the $\mu = 2.05$ model was applied to the Class~II YSOs in each region.
The Class~II YSOs were selected from the \citet{dunham2015} catalogue with $-1.6 \leq \alpha < -0.3$ and $T_\text{bol} > 100\mathrm{K}$.
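These selection cuts amount to a simple boolean mask over the catalogue columns; a sketch (the array names are illustrative, not those of the \citet{dunham2015} catalogue):

```python
import numpy as np

def select_class_ii(alpha, t_bol):
    """Class II selection following the cuts quoted above:
    -1.6 <= alpha < -0.3 and T_bol > 100 K."""
    alpha = np.asarray(alpha, dtype=float)
    t_bol = np.asarray(t_bol, dtype=float)
    return (alpha >= -1.6) & (alpha < -0.3) & (t_bol > 100.0)
```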
Because the number of Class~II YSOs differs from the number of Class~0/I YSOs, the confidence envelopes were recalculated using 99 new realisations.
Presented in Fig. \ref{fig:classII_envelopes} are the measured O-ring statistics and $\mu = 2.05$ model confidence envelopes for the Class~II YSOs in each region.
The measured $O/\bar{\lambda}$ values show that, except for NGC1333, Class~II YSOs are less clustered (compared to CSR) at small scales than Class~0/I YSOs within the same region.
Serpens South, Serpens Core, Ophiuchus and IC348 exceed the 95 per cent confidence envelopes and therefore reject the $\mu = 2.05$ model, while NGC1333 stays within the envelopes.
\begin{figure}
\includegraphics[width=\columnwidth]{classII_2.05_0.03pc.pdf}
\caption{\label{fig:classII_envelopes} Measured $\mathrm{O}/\hat{\lambda}$ vs $r$ for Class~II YSOs in Serpens South, Serpens Core, Ophiuchus, NGC1333 and IC348 with 95 per cent confidence envelopes for a $\mu = 2.05$ YSO surface-density model. }
\end{figure}
\section{Discussion}
\label{sec:discussion}
In Sections \ref{subsec:mu} and \ref{subsec:cr_estimate} parameters relating to possible power law relationships between the column density and the surface density of Class~0/I YSOs were estimated.
In Section \ref{subsec:application} methods from spatial statistics were used to determine if, and how many, star forming regions from the set were consistent with the stellar distribution models tested.
These are complementary and independent methods as one does not necessarily imply the other; the first test assumed a model and found the parameters that best fit the model, while the spatial statistics tests determined the suitability of the proposed models.
\subsection{Measured YSO surface density relations}
\label{subsec:disc-measured-mu}
The power-law relationship between the surface density of Class~0/I YSOs and column density was measured in Serpens South, Serpens Core, Ophiuchus, NGC1333 and IC348, the results of which are presented in Tables \ref{table:mu} and \ref{table:crprime}.
The power, $\mu$, was estimated by marginalising the joint-probability distributions of Eqn. \ref{eqn:c,b_one_region} for each region over the region-specific constant, $\Cr$ and vice-versa.
As discussed, $\mu$ defines the relative likelihoods of forming YSOs at different column densities within a region, and $\Cr$ is a region-specific constant which normalises the number of YSOs formed within the region.
The region-specific constants were measured for the individual regions, as discussed in Section \ref{subsec:cr_estimate}; however, it was also discussed that because the region-specific constants depend on $\mu$ and the units of $\Sigma_{\mathrm{Gas}}$ (here, $N_{\mathrm{H}_2}$), comparison of $\Cr$ between regions of different $\mu$ is difficult to interpret.
The dependence on $\mu$ can be mitigated by considering $\Cr$ values when regions are assumed to have the same $\mu$, and it was shown in Fig. \ref{fig:yso-trend205} that different regions which have the same value of $\mu$ can have different values of $\Cr$.
Such differences in $\Cr$ are due to factors independent of column density that cause different amounts of star-formation, in particular the free-fall time \citep{Pokhrel:2021}.
Thus, while $\Cr$ is important for estimating the YSO densities using Eqn. \ref{eqn:lmda-model}, it is the column density dependence that is being tested using the O-ring statistic, and so discussion will be focused on $\mu$.
Values of $\mu$ measured for the star forming regions in this work are consistent with studies looking for YSO surface density relationships in other star-forming regions \citep{gutermuth2011, rapson2014,willis2015,lada2013,lombardi2013,lombardi2014,Pokhrel2020}.
Even high values of $\mu$, such as that of IC348, have been measured elsewhere: Perseus with $\mu = 3$ \citep{hatchell2005} and $\mu = 3.8$ \citep{gutermuth2011}, and the California Nebula with $\mu = 3.31$ \citep{lada2017} -- though it is shown in Fig. \ref{fig:all_envelopes} and Table \ref{table:mu} that IC348 is consistent with a much lower value of $\mu$.
There is some overlap between the regions tested in this paper and those tested in other works: Ophiuchus, with a $\mu$ of 1.78, is exceptionally close to the values of 1.87 and 1.9 measured by \citet{gutermuth2011} and \citet{Pokhrel2020} respectively, and IC348, within the Perseus molecular cloud, shows a similar power law to \citet{gutermuth2011}, though NGC1333 does not. Some of the variation in $\mu$ may be an indication of deviations from a $\mu=2.0$ power law at higher column densities \citep{Pokhrel:2021}.
It is interesting that these power laws show such similarity given the differences in the methods of measuring the power law, the column density measures, identifying the YSOs and the star-forming regions used.
There is even potential that some of the higher values of $\mu$, such as that of IC348, may be reduced in future with increasing resolution as happened with Orion B \citep{lombardi2014}.
In addition to measuring $\mu$ for each region, Eqn. \ref{eqn:c,b_all_region} was used to estimate the power-law value which best represents the YSO distributions in all five regions simultaneously by marginalising over the region-specific constants: $\mu = 2.05$.
Unlike taking an average value of $\mu$, which requires measured values of $\mu$ and an assumption as to how they should be weighted, this method directly uses the available data to estimate the parameter.
Given this difference in methodology, it is interesting how similar this value is to the weighted mean $\mu = 2.06 \pm 0.14$ (with 95 per cent confidence intervals) for these regions and the mean value of $\mu = 2.0$ for the 12 regions studied in \citet{Pokhrel2020, Pokhrel:2021}.
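For reference, a weighted mean of the per-region estimates can be formed by inverse-variance weighting (one plausible scheme; the text does not specify which weighting was used):

```python
import numpy as np

def weighted_mean_mu(mu, sigma):
    """Inverse-variance weighted mean of per-region power-law estimates,
    together with the standard error on the mean."""
    mu = np.asarray(mu, dtype=float)
    w = 1.0 / np.asarray(sigma, dtype=float) ** 2
    mean = np.sum(w * mu) / np.sum(w)
    err = np.sqrt(1.0 / np.sum(w))
    return mean, err
```

Unlike the joint Bayesian estimate, this requires the individual $\mu$ values and their uncertainties as inputs, which is precisely the methodological difference noted above.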
Examining the individual estimates of $\mu$ presented in Table \ref{table:mu} and Fig. \ref{fig:mu_regions}, $\mu = 2.05$ appears to represent the ensemble of Bayesian fitted $\mu$ values well.
While this is expected, given that this value of $\mu$ was estimated using the YSO and area counts that were used to produce the values of $\mu$ for each region, it did not use the values of $\mu$ themselves and so demonstrates that the combination of measurements produces a value that is reasonable and representative.
We can also see in Fig. \ref{fig:yso-trend205} that for most regions $\mu = 2.05$, combined with the appropriate estimate of $\Cr$, provides a good, visual representation of the YSO surface density measurements despite not being the best-estimate for the most regions.
\subsection{Testing YSO distributions against spatial distribution models}
A measured $\mu$ value describes how the surface density of YSOs changes in general across the entire study region.
Measuring a power-law, however, does not mean that the YSOs are evenly distributed according to column density throughout the cloud.
This was demonstrated in Section \ref{subsec:application_simulated}, where a value of $\mu$ measured from an evenly distributed population of YSOs could be reproduced in a population of stars distributed over only half of the cloud. Hence it is reasonable to say that $\mu$ is a useful metric to describe YSO distributions but is not enough on its own to say whether YSOs have a relationship with cloud material of the form Eqn. \ref{eqn:lmda-model}.
By utilising the spatial information, spatial statistics can test if observed distributions of YSOs are consistent with a power-law relationship with column density.
\subsubsection{Class~0/I YSOs}
One should expect the surface density of YSOs to be affected by column density.
From a physics standpoint this makes sense as a greater reservoir of material has the potential to form more stars, and from an observational standpoint the values of $\mu$ measured for star-forming regions are all greater than zero.
The O-ring tests confirm this as Serpens South, Serpens Core, Ophiuchus and IC348 all have Class~0/I YSO populations inconsistent with YSOs positioned independently of column density above $\mathrm{A_v} = 6$.
It was also confirmed by the O-ring test that the relationship between YSO surface density and column density is likely super-linear, as all five regions rejected $\mu = 1$ and subsequent tests with higher values of $\mu$ all had fewer rejections (see Table~\ref{tab:oring_summary}).
Each region was also tested against the global estimate of $\mu = 2.05$, the 95 per cent confidence envelopes for which are presented in Fig. \ref{fig:all_envelopes}.
These results show that of the five regions tested, Serpens Core, Ophiuchus and IC348 have Class~0/I YSO populations that are consistent with the $\mu = 2.05$ model.
While it is unsurprising that Serpens Core is consistent with $\mu = 2.05$, given its power law was estimated to be $\mu = 2.06$, this is a more interesting result for Ophiuchus and IC348 as their estimates for $\mu$ were $1.78$ and $3.00$ respectively.
O-ring tests for Serpens South and NGC1333 rejected the $\mu = 2.05$ model.
This was due to overclustering and regularity for Serpens South and NGC1333 respectively.
Interestingly, the outcome of the envelope tests -- with Serpens South and NGC1333 rejecting the $\mu = 2.05$ model while the other regions do not -- is mirrored in the $\mu$ values measured in Table \ref{table:mu}.
The power $\mu = 2.05$ is within the 95 per cent confidence intervals for Serpens Core, Ophiuchus and IC348 individually while it is marginally outside the interval for Serpens South and NGC1333.
It is perhaps due to the proximity of $2.05$ to the 95 per cent confidence intervals of Serpens South and NGC1333 that the O-ring values exceed the confidence envelopes over such a small set of spatial scales at $\sim 0.15~\mathrm{pc}$.
Finally, each region was tested against its best-estimate for $\mu$.
Unlike the other models which assume a single value of $\mu$, this model contains $\mu$ as an adjustable parameter for each region.
By having four additional adjustable parameters in total, one should expect the number of YSO distributions that are consistent with the model to increase.
This was observed in Fig. \ref{fig:class0I_bestmu} where it was found that Serpens South, Serpens Core, NGC1333 and IC348 all have YSO populations consistent with their best-estimates of $\mu$.
From these results, then, we can see that Eqn. \ref{eqn:lmda-model}, using the Bayesian estimates of $\mu$, is generally supported by spatial statistics, though Ophiuchus rejected $\mu = 1.78$ at 0.06~pc, on similar scales to the regions which rejected $\mu = 2.05$.
\subsubsection{Class~II YSOs}
While $\mu$ values were not measured for the Class~II YSOs in these regions, by looking at the measured O-ring statistic and $\mu=2.05$ envelopes in Fig. \ref{fig:classII_envelopes} it is clear that the two populations are not equally dependent on column density.
The Class~II YSOs in Serpens South, Serpens Core, Ophiuchus and IC348 are all inconsistent with a $\mu=2.05$ model, while those in NGC1333 remain within the envelope.
This increase in rejection by more evolved YSOs, in combination with lower $O/\bar{\lambda}$ values and generally flatter O-ring results as a function of radial separation, shows that there is a change in the separation of YSOs as a function of their age.
These results also demonstrate that these tests have enough discriminatory power to distinguish between two distinct but related populations within the same region -- Class~0/I and Class~II YSOs.
\subsection{Potential for a universal column density model}
\label{subsec:disc-evidence_global}
A question proposed at the beginning of this work was whether it is possible to describe the locations of YSOs within a molecular cloud with a model that only uses column density.
After applying four different models to the Class~0/I YSOs in five star forming regions every region was found to be consistent with at least one model (see Table~\ref{tab:oring_summary}).
The answer to this question, therefore, appears to be `yes' as the parameters for a given model can be tweaked in order to be consistent with a
given set of YSOs.
Given that an individual region can be described using a column density model, the next question is whether it is possible to describe the distributions of YSOs within multiple molecular clouds using the same column density model.
The most successful of the four models tested was that in which the best-estimate of $\mu$ calculated for each region using the Bayesian methodology from Section \ref{sec:bayes_stats} was applied.
Using this model, four out of the five regions were found to have YSO distributions consistent with being distributed throughout the cloud according to column density alone.
It is possible, therefore, that if YSOs are distributed following column density alone, $\mu$ simply varies between star-forming regions and there is no universal power-law distribution.
However, not all of the regions were consistent with their best-estimate of $\mu$ and it is difficult to say whether this increase in consistency with the data is significant enough to justify the addition of an adjustable parameter to the model.
As discussed in Sections \ref{subsubsec:results-mu2} and \ref{subsec:disc-measured-mu}, multiple regions can be consistent with the same power-law despite the best estimate of their $\mu$ values not being equivalent.
Fig. \ref{fig:yso-trend205} shows how a YSO surface density proportional to column density to the power of $\mu = 2.05$ represents the data quite well, and using the O-ring statistic the $\mu=2.05$ model is able to describe the YSO distributions of Serpens Core, Ophiuchus and IC348 across all of the tested spatial scales.
Out of the three models tested using a single value of $\mu$, $\mu = 2.05$ performed the best with three regions out of five being consistent with the distribution.
The first test, CSR above a column density threshold (equivalent to $\mu = 0$), was consistent only with NGC1333, and $\mu = 1$ was not consistent with any of the regions.
While the Class~0/I YSOs in Serpens South and NGC1333 rejected the $\mu = 2.05$ model, this rejection was only over a small set of spatial scales between $0.12~\mathrm{pc}$ and $0.18~\mathrm{pc}$, and on other scales the distribution was consistent with the model.
This can be seen in Fig. \ref{fig:mu2_larger} which shows the measured O-ring data from Fig. \ref{fig:all_envelopes} for $r > 0.18~\mathrm{pc}$; the O-ring statistics in Fig. \ref{fig:mu2_larger} remain within the envelope across all scales and so appear consistent with the $\mu = 2.05$ model.
It is worth emphasising that, while the envelopes in Fig. \ref{fig:mu2_larger} have been adjusted compared with the right-hand panels of Fig. \ref{fig:all_envelopes} to retain a 95 per cent significance level, the envelopes have been calculated using the same null hypothesis data and so are not an independent test.
Fig. \ref{fig:mu2_larger} does show, however, that remaining within the envelopes at larger $r$ values is a feature of the data and not due to the envelopes being widened by the O-ring values which exceed the envelopes.
From this additional check we can say that the large-scale behaviour of the Class~0/I YSOs in all of these regions is well described by the same power-law relationship with column density.
\begin{figure}
\includegraphics[width=\columnwidth]{mu2_envelopes_all_regions_r0.18}
\caption{\label{fig:mu2_larger} The measured $\mathrm{O}/\hat{\lambda}$ vs $r$ for Class~0/I YSOs in Serpens South, Serpens Core, Ophiuchus, NGC1333 and IC348 from Fig. \ref{fig:all_envelopes} with 95 per cent confidence envelopes for a $\mu = 2.05$ YSO surface density model, using $r > 0.18~\mathrm{pc}$.}
\end{figure}
\subsection{Alternative universal models}
Using both spatial statistics and Bayesian statistics it was shown that a power-law model with $\mu = 2.05$ provides a good approximation to the data.
It is interesting to note that while Eqn. \ref{eqn:lmda-model} appears to fit the measured YSO surface density data in Serpens South and NGC1333, as shown in Fig. \ref{fig:yso-trend205}, these regions both exceed the confidence envelopes.
This could imply a situation like that discussed in Section \ref{subsec:application_simulated} where column-density-independent effects resulted in star formation being unevenly distributed throughout the cloud.
The surface density of YSOs is not necessarily proportional to the column density to some power and so some modification of the first-order model Eqn. \ref{eqn:lmda-model} may produce better results.
The number of different models which could be simulated is potentially unlimited; however, the excursions from the $\mu = 2.05$ envelope were brief, and the data elsewhere are consistent with the power law, so any additional changes to the model should not have a large effect on the power-law relationship.
Furthermore, the scales on which these additional parameters influence star formation need only be small.
The results in Fig. \ref{fig:all_envelopes} show that Serpens South and NGC1333 reject the model at scales close to 0.15~pc due to the over-clustering and under-clustering respectively and are otherwise consistent with the model.
At most scales, therefore, the distribution of YSOs in Serpens South and NGC1333 behave similarly to this simple power-law relationship -- except at a spatial scale of 0.15~pc.
From Fig. \ref{fig:all_envelopes} it is not possible to determine exactly what this rejection means without further testing; however, some possibilities will be discussed here.
The first option is that a global first-order model for star formation between clouds requires a different power law.
This appears unlikely as the O-ring statistic shows over-clustering in Serpens South and under-clustering in NGC1333.
Any increase in $\mu$ would lead to increased clustering at smaller scales while a decrease in $\mu$ would have the opposite effect, neither of which would necessarily represent Serpens South and NGC1333 simultaneously.
Fig. \ref{fig:all_envelopes} shows the results for a power law of $\mu = 1$ which consistently under-represent the density in Serpens South while NGC1333's O-ring statistic is consistent at small scales.
Extrapolating the envelopes between the first and second columns of Fig. \ref{fig:all_envelopes} provides some evidence against power laws less than 2, though more simulations would be required to conclusively determine this to be true.
A second option would be to add more parameters to the surface density model.
One parameter of particular interest is a column density threshold for star formation.
Indeed, from Fig. \ref{fig:all_envelopes} it was shown that the YSOs within NGC1333 are consistent with being positioned randomly in pixels with a column density greater than $6 \times 10^{21} \mathrm{cm}^{-2}$.
\citet{lombardi2013}, hereafter LLA, introduced a Bayesian method related to that in this paper which uses the positions of protostars and the visual extinctions at those positions to estimate parameters in their model for protostellar surface densities.
The surface density model in LLA is similar to Eqn. \ref{eqn:lmda-model}, except with two parameters in addition to $\Cr$ and $\mu$ (in their notation $\kappa$ and $\beta$ respectively): $\sigma$ and $A_0$.
Here $\sigma$ is a diffusion coefficient which allows for some amount of travel between a protostar's site of formation and its observed position, and $A_0$ is a star-formation threshold density.
\citet{lada2013} applied the method of LLA to Orion A, Orion B, California and Taurus, and found no significant measurement of a diffusion coefficient, and that an apparent star-formation threshold may be due more to the distribution of material in the cloud -- suggesting that the model is scale free.
From the results in this paper it is not possible to come to a conclusion on a model which uses both a power law and a threshold; no such model was tested and the number of YSOs from low column density regions in this work is insufficient to provide much insight on YSO distributions at low column densities.
However, given the results of LLA the effect of including a column density threshold would be likely to be limited.
A third option is that the power-law model cannot be applied to small spatial scales.
This could be due to data-related problems, for example resolution.
The spatial separations used start at, and are separated by, an interval of 0.03~pc. At these small spatial scales resolution effects become more prominent, which in turn increases the likelihood that close, separate sources will be counted as a single source or vice-versa. This is a particular problem at small radii, where a small change in the number of YSOs has a large impact on the density.
It could also be that the distributions of YSOs are affected by different physics at small scales than at large scales.
The scales at which Serpens South and NGC1333 reject the global power law are close to the filament scale of $0.1$~pc \citep{arzoumanian2011} and to the average core separation in filaments of 0.14~pc \citep{konyves2020}.
Given that YSOs form within collapsing filaments it is possible that the structure of the filaments in which these Class~0/I YSOs form affects their distributions.
Finally, it may also be the case that a model with second-order components is needed to capture the nature of the distribution of star formation in star-forming regions.
In a first-order model the clustering of YSOs is a product of a general increase in YSO density due to a change in environment, in this case column density.
It is also possible that clusters of protostars are not simply a function of increased density but are instead a product of a cluster-formation process which preferentially generates clusters in higher-density regions.
Such behaviour can be represented through application of second-order effects which raise or lower the probability of forming a star as a function of distance from another star.
This could be a YSO disrupting the column density of its immediate surroundings for example in NGC1333 \citep{knee2000}, or it could be clusters of protostars forming within a dense core or filament as, for example, in Perseus \citep{Tobin2016}.
\subsection{Changing evolutionary timescales with column density}
\label{subsec:disc-mu1}
As discussed in Section \ref{subsubsec:results-mu1}, the surface density of prestellar cores in Orion B has been observed to follow a less steep relationship with column density compared to the YSOs in Orion B, which follow a power-law with $\mu \approx 2$ \citep{lombardi2014,konyves2020,Pokhrel2020}.
Similarly, studies on the prestellar and protostellar populations in Monoceros R2, Serpens South, Ophiuchus and Perseus showed that the protostars in these regions were more clustered than their prestellar counterparts \citep{Enoch2008, gutermuth2011, Sokol2019}.
This leads naturally to the question of why these power laws should be different if prestellar cores are expected to evolve into Class 0 YSOs.
One reason for this could be that the evolutionary timescales of prestellar cores and Class~0/I YSOs are affected by environment in different ways; conversely, for these distributions to be the same, the evolutionary timescales of prestellar cores and Class~0/I YSOs would need to share the same dependence on the environment.
To see why this is the case a simple model of the rates of change of surface density over time, similar to that in nuclear decay, is introduced using a subset of eqns. (2)--(7) from \citet{kristensen2018}.
Assuming prestellar cores are produced at a constant rate, $\gamma$ (which may be a function of local column density), and evolve into Class~0/I YSOs with a lifetime $\tau_{\mathrm{PC}}$ the change in prestellar core surface density is
\begin{equation}
\label{eqn:dpc/dt}
\frac{\mathrm{d}\Sigma_{\mathrm{PC}}}{\mathrm{d}t} = \gamma - \frac{\Sigma_{\mathrm{PC}}(t)}{\tau_{\mathrm{PC}}},
\end{equation}
where $\Sigma_\mathrm{PC}$ is the surface density of prestellar cores.
Similarly, assuming Class~0/I YSOs evolve into Flat or Class~II YSOs with a lifetime $\tau_{\mathrm{0/I}}$ the surface density of Class~0/I YSOs, $\Sigma_{\mathrm{0/I}}$, is
\begin{equation}
\label{eqn:d0i/dt}
\frac{\mathrm{d}\Sigma_{\mathrm{0/I}}}{\mathrm{d}t} = \frac{\Sigma_{\mathrm{PC}}(t)}{\tau_{\mathrm{PC}}} - \frac{\Sigma_{\mathrm{0/I}}(t)}{\tau_{\mathrm{0/I}}}.
\end{equation}
The solutions to Eqns. \ref{eqn:dpc/dt} and \ref{eqn:d0i/dt} are
\begin{equation}
\label{eqn:sigma_pc}
\Sigma_{\mathrm{PC}} = \gamma\tau_{\mathrm{PC}} \left( 1 - e^{-t/\tau_{\mathrm{PC}}}\right),
\end{equation}
and
\begin{equation}
\label{eqn:sigma_01}
\Sigma_{\mathrm{0/I}} = \gamma\tau_{\mathrm{0/I}} \left( 1 - \frac{\tau_{\mathrm{PC}}}{\tau_{\mathrm{PC}}-\tau_{\mathrm{0/I}}}e^{-t/\tau_{\mathrm{PC}}} - \frac{\tau_{\mathrm{0/I}}}{\tau_{\mathrm{0/I}}-\tau_{\mathrm{PC}}}e^{-t/\tau_{\mathrm{0/I}}}\right)
\end{equation}
respectively, where it is assumed that at $t = 0$ the surface density of prestellar cores and protostars are zero.
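The behaviour of Eqns. \ref{eqn:dpc/dt}--\ref{eqn:sigma_01} can be checked by direct numerical integration; a minimal sketch follows, in which the formation rate and the two lifetimes are illustrative values in arbitrary units, not fits to data:

```python
import math

# Illustrative parameters (arbitrary units): formation rate and lifetimes.
gamma, tau_pc, tau_0i = 1.0, 1.0, 0.5

def sigma_pc(t):
    # Closed-form prestellar-core surface density, eqn (sigma_pc)
    return gamma * tau_pc * (1.0 - math.exp(-t / tau_pc))

def sigma_0i(t):
    # Closed-form Class 0/I surface density, eqn (sigma_01); tau_pc != tau_0i
    return gamma * tau_0i * (
        1.0
        - tau_pc / (tau_pc - tau_0i) * math.exp(-t / tau_pc)
        - tau_0i / (tau_0i - tau_pc) * math.exp(-t / tau_0i)
    )

# Forward-Euler integration of eqns (dpc/dt) and (d0i/dt) as a cross-check
dt, t = 1e-4, 0.0
pc = oi = 0.0
for _ in range(int(10.0 / dt)):     # integrate to t = 10 (>> both lifetimes)
    dpc = gamma - pc / tau_pc
    doi = pc / tau_pc - oi / tau_0i
    pc += dpc * dt
    oi += doi * dt
    t += dt

# The numerics match the closed forms and reach the steady-state values
assert abs(pc - sigma_pc(t)) < 1e-3
assert abs(oi - sigma_0i(t)) < 1e-3
assert abs(pc - gamma * tau_pc) < 1e-3   # Sigma_PC  -> gamma * tau_PC
assert abs(oi - gamma * tau_0i) < 1e-3   # Sigma_0/I -> gamma * tau_0/I
```

The steady-state limits recover Eqns. \ref{eqn:pc_steady} and \ref{eqn:0I_steady} regardless of the particular values chosen.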
Everything inside the brackets of Eqns. \ref{eqn:sigma_pc} and \ref{eqn:sigma_01} is unitless and column density independent.
The column density dependence of a population, therefore, is defined by the product of prestellar core formation rate and the lifetime of the population.
For simplicity the solutions to the steady-state condition, where Eqns. \ref{eqn:dpc/dt} and \ref{eqn:d0i/dt} are both equal to zero, are
\begin{equation}
\label{eqn:pc_steady}
\Sigma_{\mathrm{PC}} = \gamma\tau_{\mathrm{PC}}
\end{equation}
and
\begin{equation}
\Sigma_{\mathrm{0/I}} = \gamma\tau_{\mathrm{0/I}}.
\label{eqn:0I_steady}
\end{equation}
From inspection it can be seen that for $\Sigma_{\mathrm{PC}}$ and $\Sigma_{\mathrm{0/I}}$ to share the same column density dependence, their lifetimes must also be equally dependent on column density.
In other words, if the prestellar cores and Class~0/I YSOs in a region have different column density dependences this could be due to different column density dependences of the evolutionary timescales of prestellar cores and protostars.
As discussed, observations in Orion B find that $\Sigma_{\mathrm{PC}}$ is linearly proportional to $\Sigma_{\mathrm{Gas}}$ while $\Sigma_{\mathrm{0/I}}$ is proportional to $\Sigma_{\mathrm{Gas}}$ to a power of about two; additionally, measurements in Monoceros R2 show $\Sigma_{\mathrm{PC}} \propto \Sigma_{\mathrm{Gas}}^2$ \citep{Sokol2019, Pokhrel2020,Pokhrel:2021} and $\Sigma_{\mathrm{0/I}} \propto \Sigma_{\mathrm{Gas}}^{2.67}$ \citep{gutermuth2011}.
And so, while the exact values of $\mu$ may differ between regions, the fact that $\mu$ varies between classes, combined with Eqns. \ref{eqn:pc_steady} and \ref{eqn:0I_steady}, strongly suggests that $\tau_{\mathrm{PC}}$ and $\tau_{\mathrm{0/I}}$ must have different dependencies on column density due to interactions with the environment such as ongoing accretion.
To gain some insight into how the relative time-scale depends on column density in Orion B we substitute in the observed relations of $\Sigma_{\mathrm{0/I}} \propto \Sigma_{\mathrm{Gas}}^2$ -- from this and other measurements discussed in Section \ref{subsec:disc-evidence_global} -- and $\Sigma_{\mathrm{PC}} \propto \Sigma_{\mathrm{Gas}}$ from \citet{konyves2020}:
\begin{align}
\label{eqn:gamma_taupc}
\gamma\tau_{\mathrm{PC}} &\propto \Sigma_{\mathrm{Gas}} ,\\
\gamma\tau_{\mathrm{0/I}} &\propto \Sigma_{\mathrm{Gas}}^2
\end{align}
and
\begin{equation}
\label{eqn:ratio3}
\frac{\tau_{\mathrm{PC}}}{\tau_{\mathrm{0/I}}} \propto \Sigma_{\mathrm{Gas}}^{-1},
\end{equation}
where Eqn. \ref{eqn:ratio3}, the ratio of $\Sigma_{\mathrm{PC}}$ and $\Sigma_{\mathrm{0/I}}$, states that the difference in column density dependence between $\tau_{\mathrm{PC}}$ and $\tau_{\mathrm{0/I}}$ is a factor of $\Sigma_{\mathrm{Gas}}$.
This suggests that prestellar cores evolve more quickly at higher column densities than Class~0/I YSOs.
There are different ways to interpret this: (i) prestellar cores evolve on shorter time-scales at higher column densities; (ii) Class~0/I YSOs remain embedded in their envelope longer at higher column densities; (iii) alternatively, both are column density dependent in some form with prestellar cores ultimately evolving faster than Class~0/I YSOs at higher column densities.
It is very likely that both $\tau_{\mathrm{PC}}$ and $\tau_{\mathrm{0/I}}$ are column density dependent.
For prestellar cores their lifetime is often compared to the free-fall time of a spherically-symmetric mass,
\begin{equation}
\label{eqn:free-fall}
\mathrm{t_{ff}} \propto \rho^{-1/2},
\end{equation}
where $\rho$ is the density of the sphere.
Eqn. \ref{eqn:free-fall} shows that, since free-fall time is proportional to volume density to a power $-1/2$, higher density objects collapse more quickly.
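The scaling in Eqn. \ref{eqn:free-fall} can be made concrete with the standard free-fall time of a uniform, pressureless sphere, $t_\mathrm{ff}=\sqrt{3\pi/(32G\rho)}$; the densities below are illustrative values, not measurements:

```python
import math

G = 6.674e-8  # gravitational constant in cgs units (cm^3 g^-1 s^-2)

def t_ff(rho):
    # Free-fall time of a uniform, pressureless sphere of density rho (g cm^-3)
    return math.sqrt(3.0 * math.pi / (32.0 * G * rho))

rho_low = 1e-19    # illustrative density (g cm^-3)
rho_high = 1e-17   # 100x denser

# t_ff scales as rho^(-1/2): 100x the density -> 10x faster collapse
ratio = t_ff(rho_low) / t_ff(rho_high)
assert abs(ratio - math.sqrt(rho_high / rho_low)) < 1e-9
assert abs(ratio - 10.0) < 1e-9
```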
Numerical simulations have shown that Bonnor--Ebert spheres have higher central densities and collapse more quickly within higher density environments \citep{Kaminski2014}.
Observationally, the smaller numbers of prestellar cores observed with higher densities also suggest that lifetime decreases with increasing density \citep{jessopwt00,konyves2015}.
Finally, normalising the column density by the free-fall time results in a linear relationship between $\Sigma_\mathrm{gas}/t_\mathrm{ff}$ and $\Sigma_\mathrm{SFR}$ \citep{Pokhrel:2021}.
Hence it is likely that $\tau_{\mathrm{PC}}$ is lower at higher column densities.
For Class~0/I protostars to take longer to evolve at higher column densities it would require that they remain embedded within their envelopes for longer compared to their lower-column-density counterparts.
It may be the case that Class~0/I protostars are able to remain embedded while material is available for accretion, which would result in longer lifetimes in regions that are more dense \citep{hatchell08}.
This is in part supported by numerical simulations where it was found that the accretion rate onto protostars was equivalent between two simulated clouds of different densities \citep{2005MNRAS.356.1201B}.
If Class~0/I protostars do take longer to evolve at higher column densities,
a change in $\tau_{\mathrm{0/I}}$ with respect to column density could be observable in the relative masses of protostars in regions of different column density.
Indeed some evidence of this has been observed in mass segregation in YSOs and dense cores, where the most massive sources were found within regions with higher densities of sources and towards the central location of the cluster \citep{kirk2011,Kirk_2016}.
It was also noted in \citet{2005MNRAS.356.1201B} that objects formed within a denser cloud showed a greater variation in the time taken for an object to accrete.
As a counter argument, the same simulations also showed that dynamical interactions between objects were the dominant force in terminating accretion and objects were more likely to be ejected sooner in a higher density cloud \citep{bate2012}.
This would imply that $\tau_{\mathrm{0/I}}$ is smaller in higher column densities.
These are, however, results from numerical simulations and observational evidence is currently insufficient to convincingly support either lengthening or shortening Class~0/I lifetimes.
Ultimately, it is not possible to determine from Eqns. \ref{eqn:gamma_taupc} -- \ref{eqn:ratio3} which of the three terms $\gamma$, $\tau_{\mathrm{PC}}$ or $\tau_{\mathrm{0/I}}$ are column density dependent.
However, a minimum of two of the terms must be functions of column density, at least one of which must be an evolution time-scale for prestellar cores or Class~0/I YSOs.
This is true for any region in which $\Sigma_{\mathrm{PC}}$ and $\Sigma_{\mathrm{0/I}}$ are measured to have different dependencies on column density.
\section{CONCLUSIONS}
In this paper the distributions of Class~0/I YSOs in Serpens South, Serpens core, Ophiuchus, NGC1333 and IC348 were tested against a spatial distribution model of the form
\begin{equation}
\hat{\lambda}(\ensuremath{\mathrm{N_{H_2}}}) \propto \ensuremath{\mathrm{N_{H_2}}}^\mu,
\end{equation}
where $\hat{\lambda}(\ensuremath{\mathrm{N_{H_2}}})$ is the estimate of the surface density of Class~0/I YSOs at a column density \ensuremath{\mathrm{N_{H_2}}}, and $\mu$ is some power.
{\renewcommand{\labelenumi}{(\roman{enumi})}
\begin{enumerate}
\item It was found that four of the regions had Class~0/I populations inconsistent with $\mu = 0$ when combined with a threshold column density of $\ensuremath{\mathrm{N_{H_2}}} = 6\times10^{21}~\mathrm{cm}^{-2}$ and zero probability elsewhere -- implying that star formation is not decoupled from column density (Section \ref{subsubsec:results-csr}).
\item The Class~0/I YSOs in all of the tested regions were also found to be inconsistent with $\mu=1$ -- the power law associated with the surface densities of prestellar cores (Section \ref{subsubsec:results-mu1}).
\item The power law index $\mu$ was measured for each region individually in Section \ref{subsec:mu}, the results of which are tabulated in Table \ref{table:mu}, and by combining the YSO surface density data from all regions a global $\mu$ value was measured to be $2.05 \pm 0.20$ where the reported uncertainty is the 95 per cent confidence interval.
\item The best-fitting value of $\mu$ tested was the global value $\mu = 2.05$ (Section \ref{subsubsec:results-mu2}), with only the YSOs in Serpens South and NGC1333 rejecting the model between 0.12~pc and 0.18~pc.
It was shown that all five regions were consistent with $\mu = 2.05$ when considering radial separations greater than 0.18~pc.
\item Serpens South and NGC1333 rejected the $\mu=2.05$ model at a radial separation of $\sim 0.15~\mathrm{pc}$.
This could be due to physical effects, such as a preferential scaling for filament collapse or small-scale interactions between YSOs, or to data-related issues, such as resolution.
However, because of the generally good fit to the model any modification should be limited to small spatial scale interactions.
\item Class~0/I YSOs were shown to have a different relationship to column density than Class~II YSOs (Section \ref{subsec:results-classII}) showing that this relationship is not consistent over time.
\item In Section \ref{subsec:disc-mu1}, using a toy evolution model it was determined that,
if prestellar cores and protostars have different power-law relationships with column density, column density must play a role in their evolutionary timescales. Specifically, at least two of the prestellar core formation rate, prestellar core evolutionary time-scale and Class~0/I evolutionary time-scale, must be affected by the local column density environment.
\end{enumerate}}
\begin{table}
\caption{Table of Symbols}
\label{table:symbols}
\begin{tabular}{ll}
\hline
Symbol & Description\\
\hline
$\Sigma_\mathrm{SFR}$ & star formation rate surface density \\
$\Sigma_\mathrm{GAS}$ & gas surface density \\
$\Cr$ & region-specific constant\\
$\mu$ & power-law index of SFR surface density relation \\
$\lambda$ & first-order intensity \\
$\ensuremath{\mathrm{N_{H_2}}}$ & column density ($\mathrm{cm}^{-2}$)\\
$N$ & number of YSOs \\
$u$,$v$ & cell indices \\
$\Delta \sigma$ & angular distance \\
$\alpha$ & right ascension (RA), significance level \\
$\delta$ & declination (Dec) \\
$r$ & radius, radial separation \\
$w$ & annulus width \\
$H_0$ & null hypothesis\\
$n$ & number of simulated patterns \\
$A$ & area \\
\hline
\end{tabular}
\end{table}
\section*{ACKNOWLEDGEMENTS}
Brendan Retter is funded by an STFC studentship.
This research has made use of the {\sc starlink} software \citep{2014ASPC..485..391C} which is supported by the East Asian Observatory.
The figures in this paper have been produced using {\sc matplotlib}: a 2D graphics package used in {\sc Python} for application development, interactive scripting, and publication-quality image generation across user interfaces and operating systems \citep{Hunter:2007}.
This research made use of {\sc Astropy},\footnote{http://www.astropy.org} a community-developed core {\sc Python} package for Astronomy \citep{astropy:2013, astropy:2018}.
This research has made use of NASA's Astrophysics Data System.
This work is based (in part) on observations made with the Spitzer Space Telescope, which is operated by the Jet Propulsion Laboratory, California Institute of Technology under a contract with NASA.
This research has made use of data from the Herschel Gould Belt survey (HGBS) project (http://gouldbelt-herschel.cea.fr). The HGBS is a Herschel Key Programme jointly carried out by SPIRE Specialist Astronomy Group 3 (SAG 3), scientists of several institutes in the PACS Consortium (CEA Saclay, INAF-IFSI Rome and INAF-Arcetri, KU Leuven, MPIA Heidelberg), and scientists of the Herschel Science Center (HSC).
\section*{DATA AVAILABILITY}
The \textit{Herschel} Gould Belt survey (HGBS) data are available in the HGBS Archive at \url{http://www.herschel.fr/cea/gouldbelt/en/}.
The \citet{dunham2015} Young Stellar Object source data are available at \url{https://doi.org/10.1088/0067-0049/220/1/11}.
The Spitzer data underlying this article are available in the NASA/IPAC Infrared Science Archive at \url{https://irsa.ipac.caltech.edu/data/SPITZER/C2D/images/}.
\bibliographystyle{mnras}
\section{Introduction}
The purpose of this paper is to prove a Gauss-Kuzmin type problem for non-regular continued fraction expansions introduced by Chan \cite{Chan-2006}.
\subsection{Gauss' Problem}
One of the first and still one of the most important results in the metrical theory of continued fractions is so-called Gauss-Kuzmin theorem. Write $x \in [0,1)$ as a regular continued fraction
\[
x = \displaystyle \frac{1}{a_1+\displaystyle \frac{1}{a_2+\displaystyle \frac{1}{a_3+ \ddots}}} :=[a_1, a_2, a_3, \ldots],
\]
where $a_n \in \mathbb{N}_+ : = \left\{1, 2, 3, \ldots\right\}$.
The metrical theory of continued fractions started on 25th October 1800, with a note by Gauss in his mathematical diary. Gauss wrote that (in modern notation)
\[
\lim_{n \rightarrow \infty} \lambda \left(\tau^n \leq x\right) = \frac{\log(1+x)}{\log2}, \ x \in I:=[0,1].
\]
Here $\lambda$ is Lebesgue measure and the map $\tau : [0, 1) \rightarrow [0, 1)$, the so-called \textit{regular continued fraction} (or \textit{Gauss}) transformation, is defined by
\[
\tau(x) := \frac{1}{x}-\left\lfloor \frac{1}{x} \right\rfloor, \quad x \neq 0; \ \tau(0):=0,
\]
where $\left\lfloor \cdot \right\rfloor$ denotes the \textit{floor} (or \textit{entire}) function. Gauss' proof (if any) has never been found. A little more than 11 years later, in a letter dated 30 January 1812, Gauss asked Laplace to estimate the error
\[
e_n(x) := \lambda \left(\tau^{-n}[0, x]\right) - \frac{\log(1+x)}{\log2}, \quad n \geq 1, \ x\in I.
\]
This has been called \textit{Gauss' Problem}. It received a first solution more than a century later, when R.O. Kuzmin (see \cite{Kuzmin-1928}) showed in 1928 that $e_n(x) = \mathcal{O}(q^{\sqrt{n}})$ as $n \rightarrow \infty$, uniformly in $x$ with some (unspecified) $0 < q < 1$. One year later, using a different method, Paul L\'evy (see \cite{Levy-1929}) improved Kuzmin's result by showing that $\left|e_n(x)\right| \leq q^n$, $n \in \mathbb{N}_+$, $x \in I$, with $q = 3.5 - 2\sqrt{2} = 0.67157...$. The Gauss-Kuzmin-L\'evy theorem is the first basic result in the rich metrical theory of continued fractions.
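Gauss' limit can be observed empirically: iterating $\tau$ on uniformly distributed points drives the empirical distribution of $\tau^n$ toward $\log(1+x)/\log 2$. A minimal Monte Carlo sketch (the sample size and iteration count are ad hoc choices):

```python
import math
import random

def gauss_map(x):
    # Regular continued fraction (Gauss) transformation tau
    return 1.0 / x - math.floor(1.0 / x) if x != 0 else 0.0

random.seed(0)
n_iter, n_samples = 20, 100_000
xs = [random.random() for _ in range(n_samples)]   # Lebesgue-distributed start
for _ in range(n_iter):
    xs = [gauss_map(x) for x in xs]

# Empirical CDF of tau^n should approach the Gauss measure log(1+x)/log 2
for x in (0.25, 0.5, 0.75):
    empirical = sum(1 for y in xs if y <= x) / n_samples
    gauss = math.log(1.0 + x) / math.log(2.0)
    assert abs(empirical - gauss) < 0.01
```

The residual discrepancy here is dominated by Monte Carlo noise, since by the Gauss-Kuzmin-L\'evy theorem the bias decays geometrically in $n$.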
\subsection{A non-regular continued fraction expansion}
In this paper, we consider a generalization of the Gauss transformation and prove an analogous result. This transformation was studied in detail by Chan in \cite{Chan-2006} and Lascu and Kawamura in \cite{LK-2011}.
In \cite{Chan-2006}, Chan shows that any $x \in \left[0, 1\right)$ can be written in the form
\begin{equation}
x = \frac{m^{-a_1(x)}}{\displaystyle 1+\frac{(m-1)m^{-a_2(x)}}{\displaystyle 1 + \frac{(m-1)m^{-a_3(x)}}{\displaystyle 1 + \ddots} }}:=[a_1(x), a_2(x), a_3(x), \ldots]_m, \label{3.1}
\end{equation}
where $m \in \mathbb{N}_+$, $m \geq 2$ and $a_n(x)$'s are non-negative integers.
For any $m \in \mathbb{N}_+$ with $m \geq 2$, define the transformation $\tau_m$ on $I$ by
\begin{equation}
\tau_m (x) = \left\{\begin{array}{lll}
\displaystyle \frac{m^{\left\{\frac{\log x^{-1}}{\log m}\right\}}-1}{m-1}, & \hbox{if} & x \ne 0 \\
\\
0, & \hbox{if} & x = 0,
\end{array} \right. \label{3.5}
\end{equation}
where $\left\{\cdot\right\}$ stands for the fractional part. It is easy to see that $\tau_m$ maps the set $\Omega$ of irrationals in $I$ into itself. For any $x \in (0,1)$ put
\begin{equation}
a_n(x) = a_1\left(\tau_m^{n-1}(x)\right), \quad n \in {\mathbb{N}}_+, \nonumber
\end{equation}
with $\tau_m^0 (x) = x$ and
\begin{equation}
a_1 (x) = \left\{\begin{array}{lll}
\lfloor\log x^{-1} / \log m\rfloor, & \hbox{if} & x \neq 0 \\
\infty, & \hbox{if} & x = 0.
\end{array} \right. \nonumber
\end{equation}
The transformation $\tau_m$, which generates the continued fraction expansion (\ref{3.1}), is ergodic with respect to an invariant probability measure $\gamma_m$, where
\begin{equation}
\gamma_m (A) = k_m \int_{A} \frac{dx}{((m-1)x+1)((m-1)x+m)}, \quad A \in {\mathcal{B}}_I,
\end{equation}
with $k_m = \frac{(m-1)^2}{\log \left(m^2/(2m-1)\right)}$ and $\mathcal{B}_I$ is the $\sigma$-algebra of Borel subsets of $I$ (which, by definition, is the smallest $\sigma$-algebra containing intervals).
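Since $k_m$ is chosen so that $\gamma_m(I)=1$, the normalisation can be confirmed by a simple midpoint-rule quadrature; a numerical sketch (grid size arbitrary):

```python
import math

def k(m):
    # Normalising constant k_m = (m-1)^2 / log(m^2 / (2m-1))
    return (m - 1) ** 2 / math.log(m * m / (2.0 * m - 1.0))

def density(x, m):
    # Unnormalised invariant density of tau_m
    return 1.0 / (((m - 1) * x + 1.0) * ((m - 1) * x + m))

norms = {}
N = 100_000                      # composite midpoint rule on [0, 1]
for m in (2, 3, 5, 10):
    h = 1.0 / N
    integral = sum(density((j + 0.5) * h, m) for j in range(N)) * h
    norms[m] = k(m) * integral   # should equal gamma_m(I) = 1

assert all(abs(v - 1.0) < 1e-6 for v in norms.values())
```

This is consistent with the closed form $\int_0^1 \frac{dx}{((m-1)x+1)((m-1)x+m)} = \frac{1}{(m-1)^2}\log\frac{m^2}{2m-1}$ obtained by partial fractions.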
Next, we briefly present the results of Lascu and Kawamura in \cite{LK-2011}. Some basic metric properties of the continued fraction expansion in (\ref{3.1}) were given there. In particular, the Brod\'en-Borel-L\'evy formula was derived and used to determine the probability structure of $(a_n)_{n \in \mathbb{N}_+}$ under $\lambda$ (the Lebesgue measure).
Also, the so-called \textit{natural extension} $\overline{\tau}_m$ was given, and the extended incomplete quotients $\overline{a}_l$, $l \in \mathbb{Z}$, were defined and studied.
The associated Perron-Frobenius operator under different probability measures on ${\mathcal{B}}_I$ was derived. The Perron-Frobenius operator of $\tau_m$ under the invariant measure $\gamma_m$ induced by the limit distribution function was studied, and the asymptotic behaviour of this operator was derived. Also, the Perron-Frobenius operator was restricted to the linear space of all complex-valued functions of bounded variation and to the space of all bounded measurable complex-valued functions.
The main result in \cite{LK-2011} is the solution of a Gauss-Kuzmin type problem using the method of random systems with complete connections by Iosifescu \cite{IG-2009}.
\subsection{Main theorem}
We show our main theorem in this subsection. For this purpose let $\mu$ be a non-atomic probability measure on $\mathcal{B}_I$ and define
\begin{eqnarray}
F_n (x) &=& \mu (\tau_m^n < x), \ x \in I, \ n \in \mathbb{N}, \label{1.3}\\
F(x) &=& \displaystyle \lim_{n \rightarrow \infty}F_n(x), \ x \in I, \nonumber
\end{eqnarray}
with $F_0 (x) = \mu ([0,x))$.
Our main result is the following theorem.
\begin{theorem} \label{Th.G-K}
Let $k_m = \displaystyle \frac{(m-1)^2}{\log \left(m^2/(2m-1)\right)}$. Then there exists a constant $q_m \in (0,1)$ such that
\begin{equation}
F_n(x) = \frac{k_m}{(m-1)^2}\log \frac{m((m-1)x+1)}{(m-1)x+m} + {\mathcal O}(q_m^n), \label{6}
\end{equation}
for all $x \in I$ and $n \in \mathbb{N}$.
\end{theorem}
\section{Proof of Theorem \ref{Th.G-K}}
First, we show that $\{F_n\}$, defined in (\ref{1.3}), satisfy a Gauss-Kuzmin-type equation, i.e., the following holds.
\begin{proposition} \label{G-K.eq.}
If $\{F_n\}$ are the functions defined in (\ref{1.3}), then $F_n$ satisfies the following Gauss-Kuzmin-type equation
\begin{equation}
F_{n+1} (x) = \sum_{i \in \mathbb{N}}\left\{F_n\left(\alpha^i\right) - F_n\left(\frac{\alpha^i}{1+(m-1)x}\right)\right\}, \quad x \in I, \quad n \in \mathbb{N}. \label{2}
\end{equation}
\end{proposition}
\noindent\textbf{Proof.}
Let $\alpha = 1/m$. Since $\tau_m^{n}(x) = \displaystyle\frac{m^{-a_{n+1}(x)}}{1+(m-1)\tau_m^{n+1}(x)}$ it follows that
\begin{eqnarray*}
F_{n+1} (x) &=& \mu \left(\tau^{n+1}_m < x\right)= \sum_{i \in \mathbb{N}}\mu \left(\frac{\alpha^i}{1+(m-1)x} < \tau^n_m < \alpha^i\right) \\
&=& \sum_{i \in \mathbb{N}}\left( F_n\left(\alpha^i\right) - F_n\left(\frac{\alpha^i}{1+(m-1)x}\right) \right).
\end{eqnarray*}
\hfill $\Box$
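The recursion (\ref{2}) lends itself to direct computation: iterating it on a grid, starting from the Lebesgue measure $F_0(x)=x$, should converge to the limit law of Theorem \ref{Th.G-K}. A numerical sketch for $m=2$ (grid size, truncation of the sum, and iteration count are ad hoc choices):

```python
import math

m = 2                        # illustrative case; any integer m >= 2 works
alpha = 1.0 / m
N = 500                      # grid resolution on [0, 1]
xs = [j / N for j in range(N + 1)]

def interp(F, x):
    # piecewise-linear interpolation of grid values F at x in [0, 1]
    if x >= 1.0:
        return F[-1]
    j = int(x * N)
    t = x * N - j
    return F[j] * (1.0 - t) + F[j + 1] * t

F = xs[:]                               # F_0(x) = x  (Lebesgue measure)
ais = [alpha ** i for i in range(55)]   # alpha^i is negligible beyond this
for _ in range(50):                     # iterate the Gauss-Kuzmin-type eq. (2)
    base = sum(interp(F, ai) for ai in ais)
    F = [base - sum(interp(F, ai / (1.0 + (m - 1) * x)) for ai in ais)
         for x in xs]

km = (m - 1) ** 2 / math.log(m * m / (2 * m - 1))
def limit(x):
    # Limit distribution from the theorem
    return km / (m - 1) ** 2 * math.log(m * ((m - 1) * x + 1) / ((m - 1) * x + m))

err = max(abs(F[j] - limit(xs[j])) for j in range(N + 1))
assert err < 2e-3
```

After a few dozen iterations the ${\mathcal O}(q_m^n)$ term is below the interpolation error of the grid.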
Assuming that for some $p \in \mathbb{N}$ the derivative $F'_p$ exists everywhere in $I$ and is bounded, it is easy to see by induction that $F'_{p+n}$ exists and is bounded for all $n \in \mathbb{N}_+$. This allows us to differentiate (\ref{2}) term by term, obtaining
\begin{equation}
F'_{n+1}(x) = \sum_{i \in \mathbb{N}} \frac{(m-1)\alpha^i}{(1+(m-1)x)^2}F'_{n}\left(\frac{\alpha^i}{1+(m-1)x}\right). \label{3}
\end{equation}
Further, write $f_n(x) = (1+(m-1)x)(m+(m-1)x)F'_n(x)$, $x \in I$, $n \in \mathbb{N}$,
then (\ref{3}) is rewritten as
\begin{equation}
f_{n+1}(x) = \sum_{i \in \mathbb{N}} P_m^i((m-1)x)f_n\left(\frac{\alpha^i}{1+(m-1)x}\right), \label{4}
\end{equation}
where
\begin{equation}
P^i_m(x) = \frac{(m-1)\alpha^{i+1}(x+1)(x+m)}{(x+(m-1)\alpha^{i}+1)(x+(m-1)\alpha^{i+1}+1)}, \quad x \in I. \label{5}
\end{equation}
Putting $\Delta_i = \alpha^{i} - \alpha^{2i}$, $i \in \mathbb{N}$, we get
\begin{eqnarray*}
P^i_m ((m-1)x) = (m-1) \left[\alpha^{i+1} + \frac{\Delta_i}{(m-1)x+(m-1)\alpha^{i}+1} \right.\\
\left.- \frac{\Delta_{i+1}}{(m-1)x + (m-1)\alpha^{i+1}+1}\right].
\end{eqnarray*}
\textit{Proof of Theorem \ref{Th.G-K}}
Introduce a function $R_n(x)$ such that
\begin{equation}
F_n(x) = \frac{k_m}{(m-1)^2}\log \frac{m((m-1)x+1)}{(m-1)x+m} + R_n\left(\frac{k_m}{(m-1)^2}\log \frac{m((m-1)x+1)}{(m-1)x+m}\right), \label{7}
\end{equation}
where $k_m = \displaystyle \frac{(m-1)^2}{\log \left(m^2/(2m-1)\right)}$.
Because $F_n(0)=0$ and $F_n(1)=1$, we have $R_n(0)=R_n(1)=0$. To prove the theorem, we have to show that
\begin{equation}
R_n(x) = {\mathcal O}(q_m^n), \label{8}
\end{equation}
where $0<q_m<1$.
If we can show that $f_n(x)=k_m+{\mathcal O}(q_m^n)$, then integrating this relation yields equation (\ref{6}).
To demonstrate that $f_n(x)$ has this desired form, it suffices to establish that $f'_n(x) = {\mathcal O}(q_m^n)$.
We have
\begin{eqnarray*}
\left(P_m^i((m-1)x)\right)' = (m-1) \left[\frac{(m-1)\Delta_{i+1}}{\left((m-1)x + (m-1)\alpha^{i+1}+1\right)^2} \right.\\
\left.- \frac{(m-1)\Delta_i}{\left((m-1)x+(m-1)\alpha^{i}+1\right)^2}\right].
\end{eqnarray*}
Now from (\ref{4}), we have
\begin{eqnarray}
f'_{n+1}(x)&=& \sum_{i \in \mathbb{N}} \left[\left(P_m^i((m-1)x)\right)' f_n\left(\frac{\alpha^i}{1+(m-1)x}\right)\right. \nonumber \\
&-& \left. P_m^i((m-1)x)\frac{(m-1)\alpha^i}{((m-1)x+1)^2} f'_n\left(\frac{\alpha^i}{1+(m-1)x}\right)\right] \nonumber \\
&=& -(m-1) \sum_{i \in \mathbb{N}} (A_i + B_i) \label{9}
\end{eqnarray}
where
\begin{eqnarray*}
A_i = \frac{(m-1)\Delta_{i+1}}{((m-1)x + (m-1)\alpha^{i+1}+1)^2} \qquad \qquad \qquad \qquad \quad \ \\
\times \left( f_n\left(\frac{\alpha^{i+1}}{(m-1)x+1}\right) - f_n \left(\frac{\alpha^{i}}{(m-1)x+1} \right) \right ) \qquad \\
B_i = P_m^i((m-1)x)\frac{(m-1)\alpha^i}{((m-1)x+1)^2} f'_n\left(\frac{\alpha^i}{1+(m-1)x}\right). \quad
\end{eqnarray*}
By applying the mean value theorem of calculus to the difference
\[
f_n\left(\frac{\alpha^{i+1}}{(m-1)x+1}\right) - f_n \left(\frac{\alpha^{i}}{(m-1)x+1} \right),
\]
we obtain
\[
f_n\left(\frac{\alpha^{i+1}}{(m-1)x+1}\right) - f_n \left(\frac{\alpha^{i}}{(m-1)x+1} \right)= \left\{\frac{\alpha^{i+1}}{(m-1)x+1} - \frac{\alpha^{i}}{(m-1)x+1}\right\} f'_n(\theta_i),
\]
with
\[
\frac{\alpha^{i+1}}{(m-1)x+1} < \theta_i < \frac{\alpha^{i}}{(m-1)x+1}.
\]
Thus, from (\ref{9}), we have
\begin{eqnarray}
f'_{n+1}(x) = (m-1)^2 \sum_{i \in \mathbb{N}}\frac{\alpha^{i+1}\Delta_{i+1}}{((m-1)x+1)((m-1)x + (m-1)\alpha^{i+1}+1)^2} f'_n(\theta_i) \nonumber \\
- (m-1) \sum_{i \in \mathbb{N}} P_m^i((m-1)x)\frac{\alpha^i}{((m-1)x+1)^2} f'_n\left(\frac{\alpha^i}{1+(m-1)x}\right). \quad \label{10}
\end{eqnarray}
Let $M_n$ be the maximum of $|f'_n(x)|$ on $I$, i.e., $M_n = \displaystyle \max_{x \in I}|f'_n(x)|$. Then (\ref{10}) implies
\begin{eqnarray}
M_{n+1} & \leq & M_n \cdot \max_{x \in I} \left| (m-1)^2 \sum_{i \in \mathbb{N}}\frac{\alpha^{i+1}\Delta_{i+1}}{((m-1)x+1)((m-1)x + (m-1)\alpha^{i+1}+1)^2} \right.\nonumber \\
&+& \left.(m-1) \sum_{i \in \mathbb{N}} P_m^i((m-1)x)\frac{\alpha^i}{((m-1)x+1)^2} \right|. \label{11}
\end{eqnarray}
We must now bound the sums in this expression.
First, we note that
\begin{equation}
\frac{\alpha^{i+1}\Delta_{i+1}}{((m-1)x+1)((m-1)x + (m-1)\alpha^{i+1}+1)^2} \leq \frac{\alpha^{2i+2}}{((m-1)\alpha^{i+1}+1)^2}. \label{12}
\end{equation}
Note that in the above inequality we have used $\Delta_i = \alpha^{i} - \alpha^{2i}$ and $0 \leq x \leq 1$.
Next, observe that the function
\[
h_m(x) := \frac{\alpha^iP_m^i((m-1)x)}{((m-1)x+1)^2}
\]
is decreasing for $x \in I$ and $i \in \mathbb{N}$. Hence, $h_m(x) \leq h_m(0)$. This leads to
\begin{equation}
\frac{\alpha^iP_m^i((m-1)x)}{((m-1)x+1)^2} \leq \frac{(m-1)\alpha^{2i}}{((m-1)\alpha^{i+1}+1)^2}. \label{13}
\end{equation}
The relations (\ref{12}) and (\ref{13}) allow us to rewrite inequality (\ref{11}) as
\begin{equation}
M_{n+1} \leq q_m \cdot M_n, \label{14}
\end{equation}
where
\begin{equation}
q_m := (m-1)^2(m^2+1) \sum_{i \in \mathbb{N}} \frac{1}{\left(m^{i+1}+m-1\right)^2}. \label{15}
\end{equation}
Since, for $i \geq 2$, we have
\[
\frac{1}{\left(m^{i+1}+m-1\right)^2} \leq \frac{1}{m^2(m-1)^2(m^2+1)} \left(\frac{1}{m}\right)^i,
\]
therefore,
\begin{eqnarray*}
q_m \leq (m-1)^2(m^2+1) \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \quad \quad \quad \ \\
\times \left( \frac{1}{(2m-1)^2} + \frac{1}{m^2+m-1} + \frac{1}{m^2(m-1)^2(m^2+1)} \sum_{i \geq 2}\left(\frac{1}{m}\right)^i \right) \quad \ \\
= (m-1)^2(m^2+1) \left( \frac{1}{(2m-1)^2} + \frac{1}{m^2+m-1} + \frac{1}{m^3(m-1)^3(m^2+1)} \right) \\
\leq 1, \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \
\end{eqnarray*}
for any $m \in \mathbb{N}$, $m \geq 2$.
$\hfill \Box$
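The series in (\ref{15}) converges geometrically, so the contraction factor is easy to evaluate numerically; a quick sketch for the case $m=2$ (chosen for illustration):

```python
# Numerical evaluation of the series in eq. (15) for m = 2:
# q_m = (m-1)^2 (m^2+1) * sum_{i>=0} 1/(m^{i+1}+m-1)^2
m = 2
q2 = (m - 1) ** 2 * (m * m + 1) * sum(
    1.0 / (m ** (i + 1) + m - 1) ** 2 for i in range(60)
)
# The truncated tail beyond i = 59 is smaller than 4^(-60), hence harmless.
assert 0.0 < q2 < 1.0
assert abs(q2 - 0.8408) < 1e-3   # q_2 is approximately 0.841
```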
\section{Introduction}
Macroscopic systems maintained out of equilibrium by means of the injection and dissipation of energy
are characterized by the coexistence of different equilibria \cite{GlansdorffPrigogine,NicolisPrigogine,Pismen2006}.
This is the physical context in which life develops. Inhomogeneous initial conditions
caused by, e.g., inherent fluctuations of macroscopic systems, generate the emergence
of equilibria in different parts of space, which are usually identified as spatial domains.
These domains are separated by domain walls or interface between the equilibria.
A classic example of this phenomenon is magnetic domains and walls \cite{Hubert}.
Depending on the configuration of the magnetization, these walls are usually termed Ising, Bloch, or N\'eel walls.
Likewise, similar walls have been observed in liquid crystals, when a liquid crystal film is subjected
to magnetic or electric fields \cite{OswaldPieranski}.
In particular, nematic liquid crystals with planar anchoring exhibit Ising walls \cite{Chevallard}.
Close to the reorientation instability of the molecules, Fr\'eedericksz transition, this system is well described by the Allen-Cahn equation.
Besides, using a photosensitive wall, it is possible to induce a molecular reorientation in a thin liquid crystal film \cite{ResidoriRiera2001}.
This type of device is usually called a liquid crystal light valve (see \cite{Residori2005} and references therein).
Due to the inhomogeneous illumination generated by light on the liquid crystal layer, the dynamics of molecular reorientation is described by
\begin{equation}
\partial_t u(x_1,x_2,t)=\epsilon^2 \Delta u+\mu(x_1,x_2)u -u^3+ a \epsilon x_1 f(x_1,x_2),
\label{Eq-LCLV}
\end{equation}
where $u(x_1,x_2,t)$ accounts for the average rotational amplitude of the molecules, $t$, $x_1$, and $x_2$,
respectively, stand for time and the transverse coordinates of the liquid crystal layer,
${x_1}$ is the direction in which the molecules are anchored, $f(x_1,x_2)=-\frac{1}{2}\partial_{x_1} \mu(x_1,x_2)$, and the non-dimensional parameters $\epsilon$ and $a$ are positive. The function
\begin{equation}
\label{def mu1}
\mu(x_1,x_2)=\mu_0+I_0 e^{-\frac{x_1^2+x_2^2}{w^2}},
\end{equation}
which accounts for the forcing given by the external electric field and the
effect of the illuminated photosensitive wall characterized by the light intensity $I_0>0$, is typically sign changing, i.e., $-I_0<\mu_0<0$. This last condition describes the situation in which the electrical voltage applied to the liquid crystal sample is less than the Fr\'eedericksz
voltage. The level set $\{\mu(x_1, x_2)=0\}$ separates two disjoint regions where $\mu$ is of constant sign. For any $x\in \{\mu>0\}$ the potential
\[
U(z,x)=-\mu(x)\frac{z^2}{2}+\frac{z^4}{4}
\]
has precisely two non-degenerate minima of equal depth at $z=\pm \sqrt{\mu(x)}$, while in the region $\{\mu<0\}$, $U$ is nonnegative and its only minimum occurs at $z=0$. Motivated by this we will call the set $\{\mu>0\}\subset \R^2$ the bistable region and the set $\{\mu<0\}\subset \R^2$ the monostable region. Note that with the choice of the function $\mu$ in (\ref{def mu1}) the bistable region is a disc and the monostable region is its complement in $\R^2$. The objective of this paper is to understand how the location of the domain walls, defined as the set of zeros of the solutions of (\ref{Eq-LCLV}), changes when the parameters $\epsilon$ and $a$ vary. For this purpose we will restrict our attention to the time-independent solutions, the idea being that the system quickly relaxes to its stationary state.
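The geometry of the two regions can be verified directly: with the choice (\ref{def mu1}) the level set $\{\mu=0\}$ is the circle of radius $r_0=w\sqrt{\log(I_0/(-\mu_0))}$. A small sketch with illustrative parameter values (the numbers are assumptions, not taken from experiment):

```python
import math

# Illustrative parameters satisfying -I0 < mu0 < 0, so mu changes sign
mu0, I0, w = -0.5, 1.0, 1.0

def mu(x1, x2):
    return mu0 + I0 * math.exp(-(x1 * x1 + x2 * x2) / w**2)

# The level set {mu = 0} is the circle of radius r0
r0 = w * math.sqrt(math.log(I0 / -mu0))
assert abs(mu(r0, 0.0)) < 1e-12

def U(z, x1, x2):
    # the potential U(z, x) = -mu(x) z^2/2 + z^4/4
    return -mu(x1, x2) * z * z / 2.0 + z**4 / 4.0

x_out = 2.0 * r0                       # a point outside the disc
zmin = math.sqrt(mu(0.0, 0.0))         # inside the disc: minima at +/- sqrt(mu)
assert U(zmin, 0.0, 0.0) < U(0.0, 0.0, 0.0)   # z = 0 is not the minimum inside
assert mu(x_out, 0.0) < 0.0                   # mu < 0 outside the disc
assert all(U(z, x_out, 0.0) >= 0.0 for z in (0.0, 0.3, 0.7, 1.0))  # min at z = 0
```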
If one ignores the dependence on the transversal coordinate $x_2$, the system exhibits two types of walls that
separate domains which vanish asymptotically \cite{Barboza2016,panayotis_1}. One corresponds to the extension
of the Ising wall, the standard kink, to this inhomogeneous system; it is a symmetric solution centered
in the region of maximal illumination, i.e. $x=0$ (since $\mu(x)$ attains its maximum at the origin). The other corresponds to a wall centered in the non-illuminated part,
the shadow kink \cite{Barboza2016,panayotis_1}. To understand the latter one can expand the solution around the point where $\mu(x)=0$. In this limit the profile of the transition is described by the second Painlev\'e equation \cite{panayotis_1,troy,2005math.ph...8062C}. This paper is devoted to understanding the physically relevant situation when the dependence on the second coordinate is not neglected and we take $t\to \infty$. In this limit the stationary solutions of (\ref{Eq-LCLV}) can be characterized as the minima of the following energy functional
\begin{equation}
\label{funct 0}
E(u)=\int_{\R^2}\frac{\epsilon}{2}|\nabla u|^2-\frac{1}{2\epsilon}\mu(x)u^2+\frac{1}{4\epsilon}u^4-a f_1(x) u,
\end{equation}
where $u\in H^1(\R^2)$ and $\epsilon>0$, $a\geq 0$ are real parameters.
More generally as in (\ref{def mu1}) we suppose that $\mu \in C^\infty(\R^2)$ is radial i.e. $\mu(x)=\mu_{\mathrm{rad}}(|x|)$, with $\mu_{\mathrm{rad}} \in C^\infty(\R)$ an even function.
We take $f=(f_1,f_2)\in C^\infty(\R^2,\R^2)$ also to be radial i.e. $f(x)= f_{\mathrm{rad}}(|x|)\frac{x}{|x|}$, with $f_{\mathrm{rad}} \in C^\infty(\R)$ an odd function.
In addition we assume that
\begin{equation}\label{hyp2}
\begin{cases}
\mu \in L^\infty(\R^2),\ \mu_{\mathrm{rad}}'<0 \text{ in }(0,\infty), \text{ and $ \mu_{\mathrm{rad}}(\rho)=0$ for a unique $\rho>0$},\medskip \\
\text{$f \in L^1(\R^2,\R^2)\cap L^\infty(\R^2,\R^2)$, and $ f_{\mathrm{rad}}>0$ on $(0,\infty)$}.
\end{cases}
\end{equation}
The Euler-Lagrange equation of $E$ is
\begin{equation}\label{ode}
\ve^2 \Delta u+\mu(x) u-u^3+\ve a f_1(x)=0,\qquad x=(x_1,x_2)\in \R^2,
\end{equation}
and we also write its weak formulation:
\begin{equation}\label{euler}
\int_{\R^2} -\epsilon^2 \nabla u\cdot \nabla \psi+\mu u \psi-u^3 \psi+\epsilon a f_1\psi=0,\qquad \forall \psi \in H^1(\R^2),
\end{equation}
where $\cdot$ denotes the inner product in $\R^2$.
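Indeed, \eqref{euler} is obtained by computing the first variation of $E$: for every $\psi\in H^1(\R^2)$,
\[
\frac{d}{dt}\Big|_{t=0}E(u+t\psi)=\int_{\R^2}\Big(\epsilon\,\nabla u\cdot\nabla\psi-\frac{1}{\epsilon}\mu u\psi+\frac{1}{\epsilon}u^3\psi-a f_1\psi\Big),
\]
and setting this expression equal to zero and multiplying by $-\epsilon$ yields \eqref{euler}.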
Note that due to the radial symmetry of $\mu$ and $f$, the energy \eqref{funct 0} and equation
\eqref{ode} are invariant under the transformations $u(x_1,x_2)\mapsto -u(-x_1,x_2)$, and $u(x_1,x_2)\mapsto u(x_1,-x_2)$.
Our purpose in this paper is to study qualitative properties of the global minimizers of $E$ as the parameters $a$ and $\epsilon$ vary. In general we will assume that $\epsilon>0$ is small and $a\geq 0$ is fixed. In our previous works \cite{panayotis_1} and \cite{panayotis_2}, we examined respectively the cases of minimizers $v:\R\to\R$ and $v:\R^2\to\R^2$. In the present paper we follow the approach presented therein, and introduce several new ideas to address the specific issues occurring for minimizers $v:\R^2\to\R$: in particular, new variational arguments to determine the limit points of the zero level set of $v$, which is now a curve (cf. the conclusion of the proofs of Theorem \ref{thcv} (ii) and (iii)), and a computation of the energy that reduces it to a one dimensional problem by using iterated integrals.
Proceeding as in \cite{panayotis_2}, one can see that under the above assumptions there exists a global minimizer $v$ of $E$ in $H^1(\R^2)$, namely
that $E(v)=\min_{H^1(\R^2)} E$.
In addition, we show that $v$ is a classical solution of \eqref{ode},
and $v$ is even with respect to $x_2$ i.e. $v(x_1,x_2)=v(x_1,-x_2)$.
In the sequel, we will always denote by $v$ the global minimizer, and by $u$ an arbitrary critical point of $E$ in $H^1(\R^2)$.
Some basic properties are stated in:
\begin{theorem}\label{theorem 1}
For $\epsilon\ll 1$, and $a\geq 0$ bounded (possibly dependent on $\epsilon$),
let $v_{\epsilon, a}$ be a global minimizer of $E$, let $\rho>0$ be the zero of $\mu_{\mathrm{rad}}$ and let $\mu_1:=\mu_{\mathrm{rad}}'(\rho)<0$. The following statements hold:
\begin{itemize}
\item[(i)] Let $\Omega\subset D(0;\rho)$ be an open set such that $v_{\epsilon, a}> 0$ (resp. $v_{\epsilon, a}< 0$) on $\Omega$, for every $\epsilon\ll 1$.
Then $v_{\epsilon, a}\to \sqrt{\mu}$ (resp. $v_{\epsilon, a}\to -\sqrt{\mu}$) in $C^0_{\mathrm{loc}}(\Omega)$.
\item[(ii)] For every $\xi=\rho e^{i\theta}$, we consider the local coordinates $s=(s_1,s_2)$ in the basis $(e^{i\theta},i e^{i\theta})$, and the
rescaled minimizers:
\[
w_{\epsilon,a}(s)= 2^{-1/2}(-\mu_1\ve)^{-1/3} v_{\epsilon,a}\Big( \xi+\ve^{2/3} \frac{s}{(-\mu_1)^{1/3}}\Big).
\]
Assuming that $\lim_{\epsilon\to 0}a(\epsilon)=a_0$, then as $\ve\to 0$, the function $w_{\epsilon, a}$ converges in $C^2_{\mathrm{loc}}(\R^2)$ up to subsequence,
to a function $y$ bounded in $[s_0,\infty)\times \R$ for every $s_0\in\R$, which is a minimal
solution of
\begin{equation}\label{pain}
\Delta y(s)-s_1 y(s)-2y^3(s)-\alpha=0, \qquad \forall s=(s_1,s_2)\in \R^2,
\end{equation}
with $\alpha=\frac{a_0 f_1(\xi)}{\sqrt{2}\mu_1}$.
\item[(iii)] Assuming that $\lim_{\epsilon\to 0}a(\epsilon)= a_0$, then we have $\lim_{\epsilon\to 0}\frac{v_{\epsilon,a}(x)}{\epsilon}=
-\frac{a_0}{\mu(x)}f_1(x)$ uniformly on compact subsets of $\{|x|>\rho\}$.
\end{itemize}
\end{theorem}
Looking at the energy $E$ it is evident that as $\epsilon\to 0$ the modulus of the global minimizer $|v_{\epsilon,a}|$ should approach a nonnegative
root of the polynomial
\[
-\mu(x)u+u^3-a\epsilon f_1(x)=0,
\]
or in other words, $|v_{\epsilon, a}|\to \sqrt{\mu^+}$ as $\epsilon\to 0$ in some, perhaps weak, sense. We observe for instance that as a corollary
of Theorem \ref{theorem 1} (i) and Theorem \ref{thcv} (ii) below, we obtain when $a<a_*$ the convergence in
$C^0_{\mathrm{loc}}(D(0; \rho))$ (actually the uniform convergence holds in the whole plane). Because of the analogy between the functional $E$ and the Gross-Pitaevskii functional in the theory of Bose-Einstein condensates, we will call $\sqrt{\mu^+}$ the Thomas-Fermi limit of the global minimizer.
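To fix ideas, a formal expansion of the relevant roots of this cubic in powers of $\epsilon$ gives
\[
u_{\pm}(x)=\pm\sqrt{\mu(x)}+\frac{a\epsilon f_1(x)}{2\mu(x)}+O(\epsilon^2)\quad\text{where }\mu(x)>0,
\qquad
u_0(x)=-\frac{a\epsilon f_1(x)}{\mu(x)}+O(\epsilon^3)\quad\text{where }\mu(x)<0,
\]
which is consistent with Theorem \ref{theorem 1} (i) in the bistable region and with Theorem \ref{theorem 1} (iii) in the monostable region.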
Theorem \ref{theorem 1} describes how the non-smoothness of the limit of $v_{\epsilon, a}$ is mediated, near the circumference $|x|=\rho$ where
$\mu$ changes sign, by the solution of (\ref{pain}).
This equation is a natural generalization of the second Painlev\'e ODE
\begin{equation}
\label{pain 1d}
y''-sy-2y^3-\alpha=0, \qquad s\in \R.
\end{equation}
In \cite{panayotis_1} we showed that this last equation plays an analogous role in the one dimensional, scalar version of the energy $E$:
\[
E(u,\R)=\int_{\R}\frac{\epsilon}{2}|u_x|^2-\frac{1}{2\epsilon}\mu(x)u^2+\frac{1}{4\epsilon}|u|^4-a f(x)u
\]
where $\mu$ and $f$ are scalar functions satisfying hypotheses similar to those we have described above. In this case the Thomas-Fermi limit of the global minimizer is simply $\sqrt{\mu^+(x)}$, which is non differentiable at the points $x=\pm \xi$, the zeros of the even function $\mu$. Near these two points a rescaled version of the global minimizer approaches a solution of (\ref{pain 1d}), similarly to what is described in Theorem \ref{theorem 1} (ii).
It is important to realize that not every solution of (\ref{pain 1d}) can serve as the limit of the global minimizer, since in our case the limiting solutions of (\ref{pain}) are necessarily minimal as well.
To explain what this means, let
\[
E_{\mathrm{P_{II}}}(u, A)=\int_A \left[ \frac{1}{2} |\nabla u|^2 +\frac{1}{2} s_1 u^2 +\frac{1}{2} u^4+\alpha u\right].
\]
By definition a solution of (\ref{pain}) is minimal if
\begin{equation}\label{minnn}
E_{\mathrm{P_{II}}}(y, \mathrm{supp}\, \phi)\leq E_{\mathrm{P_{II}}}(y+\phi, \mathrm{supp}\, \phi)
\end{equation}
for all $\phi\in C^\infty_0(\R^2)$. This notion of minimality is standard for many problems in which the energy of a localized solution is actually infinite due to non compactness of the domain.
The study of minimal solutions of \eqref{pain 1d} was recently initiated in \cite{panayotis_1}, where we showed that the Hastings-McLeod solutions $h$ and $-h$ are the only minimal solutions of the homogeneous equation
\begin{equation}\label{phom}
y''-sy-2y^3=0, \, s\in \R,
\end{equation}
which are bounded at $+\infty$.
We recall (cf. \cite{MR555581}) that $h:\R\to\R$ is positive, strictly decreasing ($h' <0$) and such that
\begin{align}\label{asy0}
h(s)&\sim \mathop{Ai}(s), \qquad s\to \infty, \nonumber \\
h(s)&\sim \sqrt{|s|/2}, \qquad s\to -\infty.
\end{align}
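The second asymptotic in \eqref{asy0} can be checked formally: substituting $y(s)=\sqrt{|s|/2}$ into \eqref{phom} for $s<0$, the algebraic terms cancel,
\[
-sy-2y^3=|s|\sqrt{\frac{|s|}{2}}-2\Big(\frac{|s|}{2}\Big)^{3/2}=\frac{|s|^{3/2}}{\sqrt{2}}-\frac{|s|^{3/2}}{\sqrt{2}}=0,
\]
while $y''=O(|s|^{-3/2})$ is of lower order as $s\to-\infty$. Similarly, as $s\to\infty$ the cubic term becomes negligible and \eqref{phom} linearizes to the Airy equation $y''=sy$, in agreement with the first asymptotic.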
On the other hand, in \cite{panayotis_3} we considered, when $a=0$, the odd minimizer $u$ of \eqref{funct 0}\footnote{Due to the symmetry of $\mu$ and $f$, $u$ is also a critical point of \eqref{funct 0} (cf. \cite{palais}), thus it solves \eqref{ode}.} in the class
$H^1_{\mathrm{odd}}(\R^2):=\{u \in H^1(\R^2): u(x_1,x_2)=-u(-x_1,x_2)\}$ of functions that are odd with respect to $x_1$, and following Theorem \ref{theorem 1} (ii), we established the existence of a nontrivial solution $y$ of the homogeneous equation \eqref{pain}. It has the form of a quadruple connection between the Airy function
$\mathop{Ai}(x)$, the two one dimensional Hastings-McLeod solutions $\pm h(x)$, and the heteroclinic orbit $\eta(x)= \tanh(x/\sqrt{2})$ of the ODE $\eta''=\eta^3-\eta$.
Although we know (cf. \cite[Theorem 2.1]{panayotis_3}) that Theorem \ref{theorem 1} (ii) applied to the global minimizer $v$ in the homogeneous case $a=0$, gives at the limit either $y(s_1,s_2)=h(s_1)$ or $y(s_1,s_2)=-h(s_1)$, we are
not aware whether in the nonhomogeneous case $a\neq 0$, Theorem \ref{theorem 1} (ii) produces a new kind of minimal solution. This question goes beyond the scope of the present paper.
Finally, regarding Theorem \ref{theorem 1} (iii)
we note that since the sign of the local limit of the rescaled global minimizer in $|x|>\rho$ is determined by the sign of $f_1$, one may expect that the zero level set of $v_{\epsilon, a}$ is a smooth curve (cf. Lemma \ref{smooth}) partitioning the plane. In Theorem \ref{thcv} we will determine the limit of this level set according to the value of $a$, and discuss the dependence of the global minimizer on $a$, when $\epsilon\ll 1$.
Before stating our second result we recall that the heteroclinic orbit $\eta(x)= \tanh(x/\sqrt{2})$ ($\eta:\R\to (-1,1)$) of the ODE $\eta''=\eta^3-\eta$, connecting the two minima $\pm 1$ of the potential $W(u)=\frac{1}{4}(1-u^2)^2$ ($W:\R\to [0,\infty)$) plays a crucial role in the study of minimal solutions of the Allen-Cahn equation
\begin{equation}\label{pdeac}
\Delta u=u^3-u, \qquad u:\R^n\to \R.
\end{equation}
Again, we say that $u$ is a minimal solution of (\ref{pdeac}) if
\[
E_{\mathrm{AC}}(u, \mathrm{supp}\, \phi)\leq E_{\mathrm{AC}}(u+\phi, \mathrm{supp}\, \phi),
\]
for all $\phi\in C^\infty_0(\R^2)$, where
\[
E_{\mathrm{AC}}(u,\Omega):=\int_\Omega \frac{1}{2}|\nabla u|^2+\frac{1}{4}(1-u^2)^2
\]
is the Allen-Cahn energy associated to \eqref{pdeac}. It is known \cite{savin} that in dimension $n\leq 7$, any minimal solution $u$ of (\ref{pdeac}) is either trivial i.e. $u\equiv\pm1$ or one dimensional i.e. $u(x)= \eta( (x-x_0)\cdot \nu)$, for some $x_0\in\R^n$, and some unit vector $\nu\in\R^n$.
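We record the standard computation showing that $\eta(x)=\tanh(x/\sqrt{2})$ indeed solves $\eta''=\eta^3-\eta$:
\[
\eta'=\frac{1}{\sqrt{2}}\,\mathrm{sech}^2\Big(\frac{x}{\sqrt{2}}\Big)=\frac{1-\eta^2}{\sqrt{2}},
\qquad
\eta''=\frac{-2\eta\,\eta'}{\sqrt{2}}=-\eta(1-\eta^2)=\eta^3-\eta.
\]
Note also the equipartition relation $\frac{1}{2}|\eta'|^2=\frac{1}{4}(1-\eta^2)^2=W(\eta)$, so that the gradient and potential terms of the Allen-Cahn energy density contribute equally along the heteroclinic.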
\begin{theorem}\label{thcv}
Let $Z:=\{l\in \R^2: l \text{ is a limit point of the set of zeros of $v_{\epsilon,a}$ as $\epsilon\to 0$}\}$.
The following statements hold.
\begin{itemize}
\item[(i)]
When $a=0$ the global minimizer $v$ is unique up to change of $v$ by $-v$. It can be written as $v(x)=v_{\mathrm{rad}}(|x|)$,
with $v_{\mathrm{rad}} \in C^\infty(\R)$, positive and even.
\item[(ii)]
There exists a constant $a_*>0$ such that
for all $a\in (0, a_*)$, we have up to change of $v(x_1,x_2)$ by $-v(-x_1,x_2)$:
$$ \{x_1<0, |x|=\rho\} \cup \{x_1=0, |x_2|\geq \rho\}\subset Z \subset \{|x|=\rho\} \cup \{x_1=0, |x_2|\geq \rho\},$$
and
\begin{equation}\label{cvn2}
\lim_{\epsilon\to 0} v(x+\epsilon s)=\sqrt{\mu^+(x)}, \qquad \forall x \in \R^2,
\end{equation}
in the $C^2_{\mathrm{loc}}(\R^2)$ sense with respect to $s$. The above asymptotic formula holds as well when $a=0$.
\item[(iii)]
Suppose that $f'_{\mathrm{rad}}(0)\neq 0$, then there exists a constant
$a^*\geq a_*$ such that for all $a>a^*$ we have $Z=\{x_1=0\}$, and the global minimizer $v$ satisfies
\begin{equation}\label{cvn1}
\begin{aligned}
&\lim_{\ve\to 0} v( x+\ve s)=\begin{cases}\sqrt{\mu^+(x)} &\text{for } x_1>0, \\
-\sqrt{\mu^+(x)} &\text{for } x_1<0,
\end{cases}
\end{aligned}
\end{equation}
in the $C^2_{\mathrm{loc}}(\R^2)$ sense with respect to $s$. Next, if $\bar x_{\epsilon,a}=(\bar t_{\epsilon,a},x_2)$ is a zero of $v_{\epsilon,a}$ with fixed ordinate $x_2$, then up to subsequence and for a.e. $x_2\in(-\rho,\rho)$ we have
\begin{equation}\label{cvn00}
\lim_{\ve\to 0} v(\bar x_{\epsilon,a}+\ve s)=\sqrt{\mu(0,x_2)}\tanh(s_1\sqrt{\mu(0,x_2)/2}), \text{ in the $C^2_{\mathrm{loc}}(\R^2)$ sense}.
\end{equation}
Finally, when $f=-\frac{1}{2}\nabla \mu$ we have $a_*=a^*=\sqrt{2}$.
\end{itemize}
\end{theorem}
Perhaps the most interesting and unexpected statement of the above theorem is (ii). It says that, at least in the limit $\epsilon\to 0$, the domain wall $Z$ is located at the border between the monostable region $\{\mu<0\}$ and the bistable region $\{\mu>0\}$. Physically this means that when the intensity of the illumination, measured by $a$, is relatively small, no defect is visible. For this reason, and by analogy with \cite{panayotis_1, panayotis_2}, we call it the shadow domain wall. As $a$ increases, the shadow domain wall penetrates the bistable region and becomes the standard domain wall, as described in (iii).
It is natural to expect in Theorem \ref{thcv} (ii) that $Z= \{x_1<0, |x|=\rho\} \cup \{x_1=0, |x_2|\geq \rho\}$. However, the energy considerations presented in the proof of Theorem \ref{thcv} do not exclude the existence of a limit point of the zeros of $v$ in the half-circle $\{x_1>0, |x|=\rho\}$. Indeed, such a limit point would induce only an infinitesimal variation of the total energy, which makes it difficult to detect. For the same reason, the limit \eqref{cvn00} in Theorem \ref{thcv} (iii) holds only for a.e. $x_2\in(-\rho,\rho)$. We also point out that the assumption that $f$ is radial is essential to prove the existence of the constants $a_*$ and $a^*$ (cf. Lemma \ref{astar}).
\section{General results for minimizers and solutions}
In this section we gather general results for minimizers and solutions that are valid for any values of the parameters $\epsilon> 0$ and $a\geq 0$.
We first prove the existence of global minimizers.
\begin{lemma}\label{lem exist min}
For every $\epsilon> 0$ and $a\geq 0$, there exists $v \in H^1(\R^2)$ such that $E(v)=\min_{H^1(\R^2)} E$.
As a consequence, $v$ is a $C^\infty$ classical solution of \eqref{ode}. Moreover
$v(x)\to 0$ as $|x|\to \infty$, and $v(x_1,x_2)=v(x_1,-x_2)$.
\end{lemma}
\begin{proof}
We proceed as in \cite[Lemma 2.1]{panayotis_2} to establish that the global minimizer exists and is a smooth solution of \eqref{ode} converging to $0$ as
$|x|\to \infty$.
It remains to show that $v(x_1,x_2)=v(x_1,-x_2)$. We first note that $E(v,\R\times [0,\infty))=E(v,\R\times (-\infty,0])$. Indeed, if we assume without loss of generality that $E(v,\R\times [0,\infty))<E(v,\R\times (-\infty,0])$, the function
\begin{equation}
\tilde v(x_1,x_2)=
\begin{cases}
v(x_1,x_2) &\text{when } x_2\geq 0,\\
v(x_1,-x_2) &\text{when } x_2\leq 0,
\end{cases}
\end{equation}
has strictly less energy than $v$, which is a contradiction. Thus, $E(v,\R\times [0,\infty))=E(v,\R\times (-\infty,0])$, and as a consequence the
function $\tilde v$ is also a global minimizer and a solution. It follows by unique continuation \cite{sanada} that $\tilde v\equiv v$.
\end{proof}
To study the limit of solutions as $\epsilon\to 0$, we need uniform bounds in the different regions considered in Theorem \ref{theorem 1}.
\begin{lemma}\label{s3}
For $\epsilon a$ belonging to a bounded interval, let $u_{\epsilon,a}$ be a solution of \eqref{ode} converging to $0$ as $|x|\to \infty$. Then, the solutions $u_{\epsilon,a}$ and the maps $\epsilon\nabla u_{\epsilon,a}$ are uniformly bounded.
\end{lemma}
\begin{proof}
We drop the indexes and write $u:=u_{\epsilon,a}$.
Since $|f|$, $\mu$, and $\epsilon a$ are bounded, the roots of the cubic equation in the variable $u$
$$u^3-\mu(x)u-\epsilon a f_1(x)=0$$ belong to a bounded interval, for all values of $x$, $\epsilon$, $a$.
If $u$ takes positive values, then, since $u$ vanishes at infinity, it attains its positive maximum $\max_{\R^2}u=u(x_0)>0$ at some point $x_0\in\R^2$.
In view of \eqref{ode}: $$0\geq\epsilon^2 \Delta u(x_0)=u^3(x_0)-\mu(x_0)u(x_0)-\epsilon a f_1(x_0),$$ thus it follows that
$u(x_0)$ is uniformly bounded above. In the same way, we prove the uniform lower bound for $u$.
The boundedness of $\epsilon\nabla u_{\epsilon,a}$ follows from \eqref{ode}, the uniform bound of $u_{\epsilon,a}$, and standard elliptic estimates.
\end{proof}
\begin{lemma}\label{l2}
For $\epsilon\ll 1$ and $a$ belonging to a bounded interval, let $u_{\epsilon,a}$ be a solution of \eqref{ode} converging to $0$ as $|x|\to\infty$.
Then, there exists a constant $K>0$ such that
\begin{equation}\label{boundd}
|u_{\ve,a}(x)|\leq K(\sqrt{\max(\mu (x),0)}+\ve^{1/3}), \quad \forall x\in \R^2.
\end{equation}
As a consequence, if for every $\xi=\rho e^{i\theta}$ we consider the local coordinates $s=(s_1,s_2)$ in the basis $(e^{i\theta},i e^{i\theta})$, then the rescaled functions
$\tilde u_{\epsilon,a}(s)=\frac{u_{\ve,a}(\xi+ s\epsilon^{2/3})}{\epsilon^{1/3}}$ are uniformly bounded on the half-planes $[s_0,\infty)\times\R$, $\forall s_0\in\R$.
\end{lemma}
\begin{proof}
For the sake of simplicity we drop the indexes and write $u:=u_{\epsilon,a}$. Let us define the following constants
\begin{itemize}
\item $ M>0$ is the uniform bound of $|u_{\epsilon,a}|$ (cf. Lemma \ref{s3}),
\item $\lambda>0$ is such that $3 \mu_{\mathrm{rad}}(\rho-h)\leq 2\lambda h$, $\forall h \in [0,\rho]$,
\item $F:=\sup_{\R^2}| f_1|$,
\item $\kappa>0$ is such that $\kappa^3\geq 3aF$, and $\kappa^4\geq 6\lambda$.
\end{itemize}
Next, we construct the following comparison function
\begin{equation}\label{compchi}
\chi(x)=
\begin{cases}
\lambda\Big(\rho-|x|+\frac{\epsilon^{2/3}}{2}\Big)&\text{ for } |x|\leq\rho,\\
\frac{\lambda}{2\epsilon^{2/3}}(|x|-\rho-\epsilon^{2/3})^2&\text{ for }\rho\leq |x|\leq\rho+\epsilon^{2/3},\\
0&\text{ for } |x|\geq\rho+\epsilon^{2/3}.
\end{cases}
\end{equation}
One can check that $\chi\in C^1(\R^2\setminus\{0\})\cap H^1(\R^2)$ satisfies $\Delta \chi\leq \frac{2\lambda}{\epsilon^{2/3}}$ in the $H^1$ sense. Finally, we define the function
$\psi:=\frac{u^2}{2}-\chi-\kappa^2\epsilon^{2/3}$, and compute:
\begin{align}
\epsilon^2 \Delta \psi&=\epsilon^2 (|\nabla u|^2+ u\Delta u-\Delta \chi)\nonumber\\
&\geq-\mu u^2+u^4-\epsilon a f_1 u-\epsilon^2 \Delta \chi \nonumber\\
&\geq-\mu u^2+u^4-\epsilon a F| u|-2\epsilon^{4/3}\lambda.
\end{align}
Now, one can see that when $x\in \omega:=\{x\in\R^2: \psi(x)> 0\}$, we have $\frac{u^4}{3}- \mu u^2\geq 0$, since
$$x \in\omega\cap \overline{D(0;\rho)}\Rightarrow \frac{u^4}{3}\geq \frac{2\lambda}{3}\Big(\rho-|x|+\frac{\epsilon^{2/3}}{2}\Big)u^2\geq \mu u^2 .$$
On the open set $\omega$, we also have: $\frac{u^4}{3}\geq \frac{\kappa^4}{3}\epsilon^{4/3}\geq 2\epsilon^{4/3}\lambda$,
and $\frac{u^4}{3}\geq \frac{\kappa^3}{3}\epsilon|u|\geq \epsilon aF|u|$. Thus $\Delta \psi \geq 0$ on $\omega$ in the $H^1$ sense.
To conclude, we apply Kato's inequality that gives: $\Delta \psi^+ \geq 0$ on $\R^2$ in the $H^1$ sense. Since $\psi^+$ is subharmonic with compact
support, we obtain by the maximum principle that $\psi^+\equiv 0$ or equivalently $\psi \leq 0$ on $\R^2$. The statement of the lemma follows by adjusting the constant $K$.
\end{proof}
\begin{lemma}\label{s3gg}
Assume that $a$ is bounded, and let $u_{\epsilon,a}$ be uniformly bounded solutions of \eqref{ode}.
Then, the functions $\frac{u_{\epsilon,a}}{\epsilon}$ and the maps $\nabla u_{\epsilon,a}$ are uniformly bounded on the sets $\{x:\, |x|\geq \rho_1\}$
for every $\rho_1>\rho$.
\end{lemma}
\begin{proof}
We consider the sets $S:= \{ x:\ |x|\geq\rho_1\}\subset S':= \{ x:\ |x|>\rho'_1\}$, with $\rho<\rho'_1<\rho_1$, and define the constants:
\begin{itemize}
\item $ M>0$ which is the uniform bound of $|u_{\epsilon,a}|$,
\item $ \mu_0=-\mu_{\mathrm{rad}}(\rho'_1)>0$,
\item $ f_\infty=\|f_1\|_{L^\infty}$,
\item $a^*:=\sup a(\epsilon)$,
\item $k:=\frac{2a^*f_\infty}{\mu_0}>0$.
\end{itemize}
Next we introduce the function $\psi(x)=\frac{1}{2}(u^2-k^2\epsilon^2)$ satisfying:
\begin{align*}
\epsilon^2 \Delta \psi=\epsilon^2 \Delta\frac{u^2}{2}&\geq u^4+\mu_0u^2-\epsilon a^*f_\infty |u| \ , \forall x\in S',\\
&\geq \mu_0 \psi , \ \forall x \in S' \text{ such that } \psi(x)\geq 0.
\end{align*}
By Kato's inequality we have $\epsilon^2\Delta \psi^+ \geq \mu_0\psi^+$ on $S'$, in the $H^1$ sense, and utilizing a standard
comparison argument, we deduce that $\psi^+(x)\leq M^2 e^{-\frac{c}{\epsilon}d(x,\partial S')}$, $\forall x \in S$, and $
\forall \epsilon\ll 1$, where $d$ stands for the Euclidean distance, and $c>0$ is a constant. It is clear that
$$d(x,\partial S')>-\frac{\epsilon}{c}\ln\Big(\frac{k^2
\epsilon^2}{2 M^2}\Big)\Rightarrow M^2e^{-\frac{c}{\epsilon}d(x,\partial S')}<\frac{k^2\epsilon^2}{2}\Rightarrow u^2<2k^2\epsilon^2.$$
Therefore, there exists $\epsilon_0$ such that
\begin{equation}\label{asd1}
\frac{|u_{\epsilon,a}(x)|}{\epsilon}\leq \sqrt{2} k,\ \forall \epsilon<\epsilon_0,\ \forall x\in S.
\end{equation}
The boundedness of $\nabla u_{\epsilon,a}$ follows from \eqref{ode}, the uniform bound \eqref{asd1}, and standard elliptic estimates.
\end{proof}
\section{Proof of Theorems \ref{theorem 1} and \ref{thcv}}\label{proofs}
\begin{proof}[Proof of Theorem \ref{theorem 1} (i)]
Without loss of generality we assume that $v_{\epsilon, a}> 0$ on $\Omega$.
Suppose by contradiction that $v$ does not converge uniformly to $\sqrt{\mu}$ on some compact set $F\subset \Omega$. Then there exist a sequence $\epsilon_n\to 0$
and a sequence $\{x_n\}\subset F$ such that
\begin{equation}\label{either}
\text{$|v_{\epsilon_n}(x_n)-\sqrt{\mu(x_n)}|\geq\delta$, for some $\delta>0$.}
\end{equation}
In addition, we may assume that up to a subsequence $\lim_{n\to\infty}x_n=x_0\in F$.
Next, we consider the rescaled functions
$\tilde v_n(s)=v_{\epsilon_n}(x_n+\epsilon_n s)$ that satisfy
\begin{equation}\label{asd2ccee}
\Delta \tilde v_n(s)+\mu( x_n+\epsilon_n s)\tilde v_n(s)-\tilde v^3_n(s)+\epsilon_n af_1( x_n+\epsilon_n s)=0 , \ \forall s \in \R^2.
\end{equation}
In view of Lemma \ref{s3} and \eqref{asd2ccee}, $\tilde v_{n}$ and its first derivatives are uniformly bounded for $\epsilon \ll 1$.
Moreover, by differentiating \eqref{asd2ccee}, one also obtains the boundedness of the second derivatives of $\tilde v_n$ on compact sets.
Thus, we can apply the theorem of Ascoli via a diagonal argument, and show that for a subsequence still called $\tilde v_n$,
$\tilde v_n$ converges in $C^2_{\mathrm{ loc}}(\R^2)$ to a function $\tilde V$, that we are now going to determine.
For this purpose, we introduce the rescaled energy
\begin{equation}
\label{functresee}
\tilde E(\tilde u)=\int_{\R^2}\Big(\frac{1}{2}|\nabla \tilde u(s)|^2-\frac{1}{2}\mu( x_n+\epsilon_n s)\tilde u^2(s)+\frac{1}{4}
\tilde u^4(s)-\epsilon_n a f_1( x_n+\epsilon_n s)\tilde u(s)\Big)\dd s=\frac{1}{\epsilon_n}E(u),\nonumber
\end{equation}
where we have set $\tilde u(s)=u_{\epsilon_n}( x_n+\epsilon_ns)$ i.e. $u_{\epsilon_n}(x)=\tilde u\big(\frac{x-x_n}{\epsilon_n}\big)$.
Let $\tilde \xi$ be a test function with support in the compact set $K$. We have $\tilde E(\tilde v_n+\tilde \xi,K)\geq \tilde E(\tilde v_n,K)$, and at the limit
$G_{0}( \tilde V+\tilde\xi,K)\geq G_{0}(\tilde V,K)$, where $$ G_{0}(\psi,K)=\int_{K}\left[\frac{1}{2}|\nabla\psi|^2-\frac{1}{2}\mu(x_0)\psi^2
+\frac{1}{4}\psi^4\right],$$
or equivalently $G( \tilde V+\tilde\xi,K)\geq G(\tilde V,K)$, where
\begin{equation}\label{glee}
G(\psi,K)=\int_{K}\left[\frac{1}{2}|\nabla\psi|^2-\frac{1}{2}\mu(x_0)\psi^2+\frac{1}{4}\psi^4+\frac{(\mu(x_0))^2}{4}\right]
=\int_{K}\left[\frac{1}{2}|\nabla\psi|^2+\frac{1}{4}(\psi^2-\mu(x_0))^2\right].
\end{equation}
Thus, we deduce that $\tilde V$ is a bounded minimal solution of the P.D.E. associated to the functional \eqref{glee}:
\begin{equation}\label{odegl}
\Delta \tilde V(s)+(\mu(x_0)-\tilde V^2(s))\tilde V(s)=0.
\end{equation}
By the classification of bounded minimal solutions (cf. \cite{savin}, applied after rescaling to \eqref{odegl}), $\tilde V$ is either one of the constants $\pm\sqrt{\mu(x_0)}$ or one dimensional. If $\tilde V$ is the constant solution $\sqrt{\mu(x_0)}$, then we have $\lim_{n\to\infty}v_{\epsilon_n}(x_n)=\sqrt{\mu(x_0)}=\lim_{n\to\infty}\sqrt{\mu(x_n)}$, which is excluded by \eqref{either}, while the constant solution $-\sqrt{\mu(x_0)}$ is excluded by the positivity of $v_\epsilon$ on $\Omega$.
Therefore we obtain $\tilde V(s)=\sqrt{\mu(x_0)}\tanh(\sqrt{\mu(x_0)/2}\,(s-s_0)\cdot \nu)$, for some unit vector $\nu \in \R^2$ and some $s_0\in \R^2$.
This implies that $v_{\epsilon_n}$ takes negative values in the open disc $D(x_n; 2\epsilon_n(1+|s_0|))$ for $\epsilon_n\ll 1$,
which contradicts the fact that $v_\epsilon>0$ on $\Omega$ for $\epsilon\ll 1$.
\end{proof}
\begin{proof}[Proof of Theorem \ref{theorem 1} (ii)]
For every $\xi=\rho e^{i\theta}$ we consider the local coordinates $s=(s_1,s_2)$ in the basis $(e^{i\theta},i e^{i\theta})$,
and we rescale the global minimizer $v$ by setting $\tilde v_{\epsilon,a}(s)=\frac{v_{\epsilon,a}(\xi+ s\epsilon^{2/3})}{\epsilon^{1/3}}$.
Clearly $\Delta \tilde v(s)=\epsilon \Delta v(\xi+s\epsilon^{2/3})$, thus,
\begin{equation}\label{oderes1}
\Delta \tilde v(s)+\frac{\mu(\xi+s\epsilon^{2/3})}{\epsilon^{2/3}} \tilde v(s)-\tilde v^3(s)+ a f_1(\xi+s\epsilon^{2/3})=0, \qquad \forall s\in \R^2.\nonumber
\end{equation}
Writing $\mu(\xi+h)=\mu_1 h_1+h\cdot A(h)$, with $\mu_1:=\mu'_{\mathrm{rad}}(\rho)<0$, $A \in C^\infty(\R^2,\R^2)$, and $A(0)=0$, we obtain
\begin{equation}\label{oderes2}
\Delta \tilde v(s)+(\mu_1 s_1 + A(s \epsilon^{2/3})\cdot s) \tilde v(s)-\tilde v^3(s)+ a f_1(\xi+s\epsilon^{2/3})=0,\qquad \forall s\in \R^2.
\end{equation}
Next, we define the rescaled energy by
\begin{equation}
\label{functres2}
\tilde E(\tilde u)=\int_{\R^2}\Big(\frac{1}{2}|\nabla\tilde u(s)|^2-\frac{\mu(\xi+s \epsilon^{2/3})}{2\epsilon^{2/3}}\tilde u^2(s)+\frac{1}{4}\tilde u^4(s)- a f_1(\xi+s \epsilon^{2/3}) \tilde u(s)\Big)\dd s.
\end{equation}
With this definition $\tilde E(\tilde u)=\frac{1}{\epsilon^{5/3}}E(u)$.
From Lemma \ref{l2} and \eqref{oderes2}, it follows that $\Delta \tilde v$, and also $\nabla\tilde v$, are uniformly bounded on compact sets.
Moreover, by differentiating \eqref{oderes2} we also obtain the boundedness of the second derivatives of $\tilde v$.
Thanks to these uniform bounds, we can apply the theorem of Ascoli via a diagonal argument to obtain the convergence of $\tilde v$ in
$C^2_{\mathrm{ loc}}(\R^2)$ (up to a subsequence)
to a solution $\tilde V $ of the P.D.E.
\begin{equation}\label{oderes4}
\Delta \tilde V(s)+\mu_1 s_1 \tilde V(s)-\tilde V^3(s)+ a_0 f_1(\xi)=0, \ \forall s\in \R^2, \text{ with } a_0:=\lim_{\epsilon\to 0}a(\epsilon),
\end{equation}
which is associated to the functional
\begin{equation}
\label{functres4}
\tilde E_0(\phi,J)=\int_{J}\Big(\frac{1}{2}|\nabla\phi(s)|^2-\frac{\mu_1}{2} s_1 \phi^2(s)+\frac{1}{4}\phi^4(s)- a_0 f_1(\xi)\phi(s) \Big)\dd s.
\end{equation}
Setting $y(s):=\frac{1}{\sqrt{2}(-\mu_1)^{1/3}}\tilde V\big(\frac{s}{(-\mu_1)^{1/3}}\big)$, \eqref{oderes4} reduces to \eqref{pain},
that is, $y$ solves \eqref{pain} with $\alpha=\frac{a_0f_1(\xi)}{\sqrt{2}\mu_1}$.
Finally, we can see as in the previous proof that
the limit $\tilde V$ obtained in \eqref{oderes4} as well as the solution $y$ of \eqref{pain} are
minimal in the sense of definition \eqref{minnn}.
\end{proof}
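
For the reader's convenience, the last reduction is the following elementary computation. With $b:=(-\mu_1)^{1/3}$ and $\tilde V(\sigma)=\sqrt{2}\,b\,y(b\sigma)$, we have $\Delta\tilde V(\sigma)=\sqrt{2}\,b^3\,\Delta y(b\sigma)$, so that, substituting $s=b\sigma$ in \eqref{oderes4} and dividing by $\sqrt{2}\,b^3=-\sqrt{2}\,\mu_1$,
\[
\Delta y(s)-s_1 y(s)-2y^3(s)-\frac{a_0 f_1(\xi)}{\sqrt{2}\,\mu_1}=0,
\]
which is \eqref{pain} with $\alpha=\frac{a_0 f_1(\xi)}{\sqrt{2}\mu_1}$.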
\begin{proof}[Proof of Theorem \ref{theorem 1} (iii)]
For every $x_0\in\R^2$ such that $|x_0|>\rho$, we consider the rescaled minimizers
$\tilde v_{\epsilon,a}(s)= \frac{v_{\epsilon,a}(x_0+\epsilon s)}{\epsilon}$, with $s=(s_1,s_2)$, satisfying
\begin{equation}\label{asd2}
\Delta \tilde v(s)+\mu(x_0+\epsilon s)\tilde v(s)-\epsilon^2\tilde v(s)^3+af_1(x_0+\epsilon s)=0 , \ \forall s \in \R^2.
\end{equation}
In view of the bound provided by Lemma \ref{s3gg} and \eqref{asd2}, we can see that the first derivatives of
$\tilde v_{\epsilon,a}$ are uniformly bounded on compact sets for $\epsilon \ll 1$.
Moreover, by differentiating \eqref{asd2}, one can also obtain the boundedness of the second derivatives of $\tilde v$ on compact sets.
As a consequence, we conclude that $\lim_{\epsilon\to 0, a\to a_0}\tilde v_{\epsilon,a}(s)=\tilde V(s)$ in $C^2_{\mathrm{loc}}$,
where $\tilde V(s)\equiv-\frac{a_0}{\mu(x_0)}f_1(x_0 )$ is the unique bounded solution of
\begin{equation}\label{asd3}
\Delta \tilde V(s)+\mu(x_0)\tilde V(s)+a_0f_1(x_0)=0 , \ \forall s \in \R^2.
\end{equation}
Indeed, consider a smooth and bounded solution $\phi:\R^2\to\R$ of $\Delta \phi=W'(\phi)$ where the potential $W:\R\to \R$ is smooth
and strictly convex. Then, we have $\Delta(W(\phi))=|W'(\phi)|^2+W''(\phi)|\nabla \phi|^2\geq 0$, and since $W(\phi)$ is bounded we deduce
that $W(\phi)$ is constant. Therefore, $\phi\equiv \phi_0$ where $\phi_0\in \R$ is such that $W'(\phi_0)=0$.
To prove the uniform convergence $\frac{v_{\epsilon,a}(x)}{\epsilon}\to-\frac{a_0}{\mu(x)}f_1(x)$ on compact subsets of $\{|x|>\rho\}$, we proceed by contradiction.
Assuming that the uniform convergence does not hold, one can find a sequence $\epsilon_n\to 0$,
a sequence $a_n\to a_0$, and a sequence $x_n \to x_0$, with $|x_0|>\rho$, such that
$\Big|\frac{v_{\epsilon_n,a_n}(x_n)}{\epsilon_n}+\frac{a_0}{\mu(x_n)}f_1(x_n)\Big|\geq \delta$, for some $\delta>0$.
However, by reproducing the previous arguments,
it follows that the rescaled functions $\tilde v_{n}(s)= \frac{v_{\epsilon_n,a_n}(x_n+\epsilon_n s)}{\epsilon_n}$ converge
in $C^2_{\mathrm{loc}}$ to the constant $\tilde V(s)\equiv-\frac{a_0}{\mu(x_0)}f_1(x_0)$. Thus, we have reached a contradiction.
\end{proof}
\begin{proof}[Proof of Theorem \ref{thcv} (i)]
We first notice that $v \not\equiv 0$ for $\epsilon \ll 1$.
Indeed, by choosing a test function $\psi\not\equiv 0$ supported in $D(0;\rho)$, and such that $\psi^2<2\mu$, one can see that
\[
E(\psi)=\frac{\epsilon}{2}\int_{\R^2}|\nabla \psi|^2+\frac{1}{4\epsilon}\int_{\R^2}\psi^2(\psi^2-2\mu)<0, \qquad \epsilon\ll 1.
\]
Let $x_0\in \R^2$ be such that $v(x_0)\neq 0$. Without loss of generality we may assume that $v(x_0)>0$. Next, consider $\tilde v=|v|$ which is another global minimizer and thus another solution.
Clearly, in a neighborhood of $x_0$ we have $v=|v|$, and as a consequence of the unique continuation principle (cf. \cite{sanada})
we deduce that $v\equiv \tilde v\geq 0$ on $\R^2 $. Furthermore,
the maximum principle implies that $v>0$, since $v\not\equiv 0$.
To prove that $v$ is radial we consider the reflection with respect to the line $x_1=0$.
We can check that $E(v,\{x_1>0\})=E(v,\{x_1<0\})$, since otherwise by even reflection we can construct a map in $H^1$ with energy smaller than $v$. Thus, the map
$\tilde v(x) = v(|x_1|,x_2)$ is also a minimizer, and since $\tilde v= v$ on $\{x_1>0\}$, it follows by unique continuation that
$\tilde v\equiv v$ on $\R^2$. Repeating the same argument for any line of reflection, we deduce that $v$ is radial.
To complete the proof, it remains to show the uniqueness of $v$ up to change of $v$ by $-v$. Let $\tilde v$ be another global minimizer such that $\tilde v>0$, and $\tilde v\not\equiv v$.
Choosing $\psi=u$ in \eqref{euler}, and recalling that $a=0$ here, we find for any solution $u\in H^1(\R^2)$ of \eqref{ode} the following alternative expression of the energy:
\begin{equation}\label{enealt}
E(u)=-\int_{\R^2} \frac{u^4}{4\epsilon}.
\end{equation}
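In detail: since $a=0$, testing \eqref{euler} with $\psi=u$ gives $\int_{\R^2}\epsilon^2|\nabla u|^2=\int_{\R^2}(\mu u^2-u^4)$, hence
\[
E(u)=\int_{\R^2}\frac{\epsilon}{2}|\nabla u|^2-\frac{1}{2\epsilon}\mu u^2+\frac{1}{4\epsilon}u^4
=\frac{1}{2\epsilon}\int_{\R^2}(\mu u^2-u^4)-\frac{1}{2\epsilon}\int_{\R^2}\mu u^2+\frac{1}{4\epsilon}\int_{\R^2}u^4
=-\int_{\R^2}\frac{u^4}{4\epsilon}.
\]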
Formula \eqref{enealt} implies that $\int_{\R^2}v^4=\int_{\R^2}\tilde v^4$, so that neither $v>\tilde v$ nor $v<\tilde v$ can hold everywhere; since both functions are radial and continuous, they coincide on some circle $|x|=r>0$. However, setting
\begin{equation*}
w(x)=\begin{cases}
v(x) &\text{ for } |x|\leq r\\
\tilde v(x) &\text{ for } |x|\geq r,
\end{cases}
\end{equation*}
we can see that $w$ is another global minimizer, and again by the unique continuation principle we have $w\equiv v\equiv\tilde v$.
\end{proof}
\begin{proof}[Proof of Theorem \ref{thcv} (ii), (iii)]
We first establish two lemmas.
\begin{lemma}\label{smooth}
Let $a>0$ and $\rho_0\in(0,\rho)$ be fixed, and set $l:=\frac{\sqrt{\mu_{\mathrm{rad}}(\rho_0)}}{2\mu(0)}$, $\lambda:=\frac{\sqrt{2} \tanh^{-1}(8/9)}{\sqrt{\mu_{\mathrm{rad}}(\rho_0)}}$, and $\lambda':=\frac{\mu_{\mathrm{rad}}(\rho_0) }{2\cosh^2(\lambda\sqrt{\mu(0)/2})}$.
Then, there exists $\epsilon_0>0$ such that
\begin{itemize}
\item[(i)] for every $\epsilon \in (0,\epsilon_0)$
the set $Z_{\epsilon}:=\{\bar x\in D(0;\rho_0): v_{\epsilon,a}(\bar x)=0\}$ is a smooth one
dimensional manifold. Let $\nu(\bar x)$
be a unit normal vector
at $\bar x\in Z_\epsilon$.
\item[(ii)] for every $\epsilon\in (0,\epsilon_0)$, $\bar x\in Z_\epsilon$, and $|s|\leq l$, we have
$|v(\bar x+\epsilon s)|\leq\frac{1}{2} \sqrt{\mu_{\mathrm{rad}}(\rho_0)}$.
\item[(iii)] for every $\epsilon \in (0,\epsilon_0)$, and $\bar x\in Z_\epsilon$, we have
$|v(\bar x+\epsilon\lambda \nu)|\geq \frac{3}{4}\sqrt{\mu_{\mathrm{rad}}(\rho_0)}$,
\item[(iv)] for every $\epsilon \in (0,\epsilon_0)$, $\bar x\in Z_\epsilon$, and $t\in[-\lambda,\lambda]$ we have
$\epsilon \big|\frac{\partial v}{\partial \nu}(\bar x+\epsilon t \nu)\big|\geq \lambda'$.
\end{itemize}
\end{lemma}
\begin{proof}
To prove (i) it is sufficient to establish that there exists $\epsilon_0>0$ such that for every $\epsilon \in (0,\epsilon_0)$ and
$\bar x \in Z_\epsilon$, we have $\nabla v_{\epsilon,a}(\bar x)\neq 0$. Assuming by contradiction that this does not hold,
we can find a sequence $\epsilon_n\to 0$, and a sequence
$ Z_{\epsilon_n}\ni \bar x_n \to x_0 \in \overline{D(0;\rho_0)}$ such that $\nabla v_{\epsilon_n,a}(\bar x_n)= 0$.
However, by considering the rescaled functions
$\tilde v_n(s)=v_{\epsilon_n,a}(\bar x_n+\epsilon_n s)$, it follows as in the proof of Theorem \ref{theorem 1} (i) that $\tilde v_n$ converges in $C^2_{\mathrm{ loc}}(\R^2)$ (up to a subsequence) to
$\tilde V(s)=\sqrt{\mu(x_0)}\tanh(\sqrt{\mu(x_0)/2}(s\cdot \nu))$, where $\nu\in\R^2$ is a unit vector. Since $\nabla \tilde V(0)\neq 0$, we have reached a contradiction.
To prove (ii), we proceed again by contradiction, and assume that we can find a sequence $\epsilon_n\to 0$, a sequence
$ Z_{\epsilon_n}\ni \bar x_n \to x_0 \in \overline{D(0;\rho_0)}$, and a sequence $\overline{D(0;l)}\ni s_n\to s_0$ such that
$|v(\bar x_n+\epsilon_n s_n)|> \sqrt{\mu_{\mathrm{rad}}(\rho_0)}/2$. As before, we obtain that $\tilde v_n(s)=v_{\epsilon_n,a}(\bar x_n+\epsilon_n s)$
converges in $C^2_{\mathrm{ loc}}(\R^2)$ to
$\tilde V(s)=\sqrt{\mu(x_0)}\tanh(\sqrt{\mu(x_0)/2}(s\cdot \nu))$. In particular, it follows that
$$\lim_{n\to\infty}|v_{\epsilon_n,a}(\bar x_n+\epsilon_n s_n)|=\sqrt{\mu(x_0)}|\tanh(\sqrt{\mu(x_0)/2}(s_0\cdot \nu))|\leq \frac{\mu(0)l}{\sqrt{2}}
<\sqrt{\mu_{\mathrm{rad}}(\rho_0)}/2,$$
which is a contradiction. The proofs of (iii) and (iv) are similar.
\end{proof}
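The interplay between the constants $l$, $\lambda$, $\lambda'$ and the limiting profile $\tilde V(s)=\sqrt{\mu_0}\tanh(\sqrt{\mu_0/2}\,s)$ can also be checked numerically. The following sketch (an illustration only, not part of the proof) takes the hypothetical values $\mu(0)=1$ and $\mu_{\mathrm{rad}}(\rho_0)=1/2$, and verifies the bounds corresponding to items (ii)--(iv) for every admissible limit value $\mu_0\in[\mu_{\mathrm{rad}}(\rho_0),\mu(0)]$.

```python
import numpy as np

# Hypothetical model constants (any 0 < mu_rad_rho0 <= mu0_max works)
mu0_max = 1.0            # plays the role of mu(0)
mu_rad_rho0 = 0.5        # plays the role of mu_rad(rho_0)

l = np.sqrt(mu_rad_rho0) / (2.0 * mu0_max)
lam = np.sqrt(2.0) * np.arctanh(8.0 / 9.0) / np.sqrt(mu_rad_rho0)
lam_prime = mu_rad_rho0 / (2.0 * np.cosh(lam * np.sqrt(mu0_max / 2.0))**2)

mus = np.linspace(mu_rad_rho0, mu0_max, 201)   # admissible limit values mu(x_0)

# (ii): |V(s)| <= sqrt(mu0) tanh(sqrt(mu0/2) l) stays below sqrt(mu_rad)/2
assert np.all(np.sqrt(mus) * np.tanh(np.sqrt(mus / 2.0) * l)
              < 0.5 * np.sqrt(mu_rad_rho0))

# (iii): |V(lam)| = sqrt(mu0) tanh(sqrt(mu0/2) lam) exceeds (3/4) sqrt(mu_rad)
assert np.all(np.sqrt(mus) * np.tanh(np.sqrt(mus / 2.0) * lam)
              >= 0.75 * np.sqrt(mu_rad_rho0))

# (iv): |V'(t)| = (mu0/sqrt(2)) sech^2(sqrt(mu0/2) t) >= lam' for |t| <= lam
t = np.linspace(-lam, lam, 201)
for mu0 in mus:
    dV = (mu0 / np.sqrt(2.0)) / np.cosh(np.sqrt(mu0 / 2.0) * t)**2
    assert np.all(dV >= lam_prime)

print("Lemma constants consistent:", round(l, 4), round(lam, 4), round(lam_prime, 4))
```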
\begin{lemma}\label{astar}
Let
\begin{equation}\label{asbb}
a_*:=\inf_{x_1\leq 0, |x|<\rho}\frac{\sqrt{2}(\mu(x))^{3/2}}{3\int_{-\sqrt{\rho^2-x_2^2}}^{x_1} |f_1(t,x_2)|\sqrt{\mu(t,x_2)} \dd t},
\end{equation}
and
\begin{equation}\label{ashh}
a^*:=\sup_{x_1< 0, |x|\leq\rho} \frac{\sqrt{2}\big((\mu(0,x_2))^{3/2}-(\mu(x))^{3/2}\big)}{3\int_{x_1}^0|f_1(t,x_2)|\sqrt{\mu(t,x_2)}\dd t},
\end{equation}
Then $a_*\in (0,\infty)$ and
\begin{equation}\label{inegaaa}
a_*\leq\frac{2\sqrt{2}\int_{|r|<\rho}(\mu_{\mathrm{rad}}(r))^{3/2}\dd r}{3\int_{D(0;\rho)}|f_1|\sqrt{\mu} }\leq a^*.
\end{equation}
Moreover, if $f'_{\mathrm{rad}}(0)> 0$, then $a^*<\infty$.
Finally, if $f=-\frac{1}{2}\nabla\mu$, then $a_*=a^*=\sqrt{2}$.
\end{lemma}
\begin{proof}
We first check that $a_*\in (0,\infty)$ and $a^*\in [a_*,\infty]$. Let us define the auxiliary function
$$\{x\in\R^2: x_1\leq 0, |x|\leq\rho\}\ni x\to \beta_*(x)=\frac{\sqrt{2}}{3}(\mu(x))^{3/2}-a\int_{-\sqrt{\rho^2-x_2^2}}^{x_1} |f_1(t,x_2)|\sqrt{\mu(t,x_2)} \dd t,$$
and compute $\frac{\partial\beta_*}{\partial x_1}(x)=\big(\frac{\sqrt{2}}{2}\mu'_{\mathrm{rad}}(r)-af_{\mathrm{rad}}(r)\big) \sqrt{\mu(x)}\cos \theta$, where $x=(r\cos\theta,r\sin\theta)$.
It is clear that for sufficiently small $a_1>0$ and $\gamma>0$, we have $\frac{\partial\beta_*}{\partial x_1}(x)> 0$ provided that $x_1<0$,
$\rho-\gamma<|x|<\rho$, and $a\leq a_1$. Since $\beta_*(x)=0$ for $|x|=\rho$, it follows that
$\beta_*(x)\geq 0$ provided that $x_1\leq 0$, $\rho-\gamma\leq|x|\leq\rho$, and $a\leq a_1$.
There also exists $a_2>0$ such that for $a\leq a_2$, we have $\beta_* \geq 0$ on the set $\{x_1\leq 0, |x|\leq\rho-\gamma\}$.
Thus, we can see that $a_*\geq\min(a_1,a_2)>0$. Furthermore, since the inequalities
$a_*\leq\frac{2\sqrt{2}(\mu(0,x_2))^{3/2}}{3\int_{|t|<\sqrt{\rho^2-x_2^2}}|f_1(t,x_2)|\sqrt{\mu(t,x_2)}\dd t }\leq a^*$ hold for every
$x_2\in (-\rho,\rho)$, we obtain \eqref{inegaaa} after integrating with respect to $x_2$.
Next, we define a second auxiliary function
$$\{x\in\R^2: x_1\leq 0, |x|\leq\rho\}\ni x\to \beta^*(x)=\frac{\sqrt{2}}{3}[(\mu(0,x_2))^{3/2}-(\mu(x))^{3/2}]-a\int_{x_1}^{0} |f_1(t,x_2)|\sqrt{\mu(t,x_2)} \dd t,$$
and compute $\frac{\partial\beta^*}{\partial x_1}(x)=\big(\frac{\sqrt{2}}{2}\mu'_{\mathrm{rad}}(r)+af_{\mathrm{rad}}(r)\big) \sqrt{\mu(x)}|\cos \theta|$, where $x=(r\cos\theta,r\sin\theta)$.
Since $f'_{\mathrm{rad}}(0)> 0$, one can see that $\frac{\sqrt{2}}{2}\mu''_{\mathrm{rad}}(r)+af'_{\mathrm{rad}}(r)> 0$,
provided that $r\in[0,\gamma]$ and $a\geq a_3$, with $\gamma>0$ sufficiently small, and $a_3>0$ sufficiently big. Thus,
$\frac{\sqrt{2}}{2}\mu'_{\mathrm{rad}}(r)+af_{\mathrm{rad}}(r)\geq 0$, and $\frac{\partial\beta^*}{\partial x_1}(x)> 0$, when $r=|x|\leq\gamma$,
$x_1<0$, and $a\geq a_3$.
On the other hand it is clear that for sufficiently big $a_4>0$, we have $\frac{\partial\beta^*}{\partial x_1}(x)> 0$ provided that $x_1<0$,
$\gamma\leq |x|<\rho$, and $a\geq a_4$. Since $\beta^*(x)=0$ for $x_1=0$, it follows that
$\beta^*(x)\leq 0$ provided that $x_1\leq 0$, $|x|\leq\rho$, and $a\geq \max(a_3,a_4)$. This proves that $a^*\leq \max(a_3,a_4)$.
Finally, one can check that $a_*=a^*=\sqrt{2}$ when $f=-\frac{1}{2}\nabla\mu$, that is, $f_1=-\frac{1}{2}\frac{\partial\mu}{\partial x_1}$, by
computing the integrals appearing in the denominators of \eqref{asbb} and \eqref{ashh}.
\end{proof}
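As a sanity check of the last assertion, one can evaluate the ratios in \eqref{asbb} and \eqref{ashh} numerically for a model potential. The sketch below (an illustration only, not taken from the paper) uses $\rho=1$, $\mu(x)=1-|x|^2$, and the gradient case $f=-\frac{1}{2}\nabla\mu$, i.e. $f_1(x)=x_1$, and checks that both ratios equal $\sqrt{2}$ at sample points.

```python
import numpy as np

# Model data (illustrative): rho = 1, mu(x) = 1 - |x|^2, f1(x) = x1,
# corresponding to the gradient case f = -(1/2) grad mu.
def mu(t, x2):
    return 1.0 - t**2 - x2**2

def f1(t, x2):
    return t

def trap(y, x):
    """Plain trapezoid rule (avoids NumPy version differences for trapz)."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def ratio_lower(x1, x2, n=40001):
    """The quantity minimized in (asbb), at a point with x1 <= 0, |x| < 1."""
    t = np.linspace(-np.sqrt(1.0 - x2**2), x1, n)
    den = 3.0 * trap(np.abs(f1(t, x2)) * np.sqrt(np.maximum(mu(t, x2), 0.0)), t)
    return np.sqrt(2.0) * mu(x1, x2)**1.5 / den

def ratio_upper(x1, x2, n=40001):
    """The quantity maximized in (ashh), at a point with x1 < 0, |x| <= 1."""
    t = np.linspace(x1, 0.0, n)
    den = 3.0 * trap(np.abs(f1(t, x2)) * np.sqrt(np.maximum(mu(t, x2), 0.0)), t)
    return np.sqrt(2.0) * (mu(0.0, x2)**1.5 - mu(x1, x2)**1.5) / den

for (x1, x2) in [(-0.5, 0.2), (-0.3, -0.6), (-0.8, 0.1)]:
    assert abs(ratio_lower(x1, x2) - np.sqrt(2.0)) < 1e-3
    assert abs(ratio_upper(x1, x2) - np.sqrt(2.0)) < 1e-3
print("both ratios equal sqrt(2) at the sampled points")
```

Indeed, in this case $|f_1|\sqrt{\mu}=\frac{1}{2}|\partial_1\mu|\sqrt{\mu}=\big|\partial_1\big(\frac{1}{3}\mu^{3/2}\big)\big|$ on each half-line, so both denominators can be integrated in closed form.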
The minimum of the energy defined in \eqref{funct 0} is nonpositive and tends to $-\infty$ as $\epsilon \to 0$. Since we are interested in the behavior of the minimizers as $\epsilon \to 0$, it is useful to define a renormalized energy, obtained by adding to \eqref{funct 0} a suitable term so that the result remains bounded from above as $\epsilon \to 0$. We define the renormalized energy as
\begin{equation}\label{renorm}
\mathcal{E}(u):=E(u)+\int_{|x|<\rho}\frac{\mu^2}{4\epsilon}=\int_{\R^2}\frac{\epsilon}{2}|\nabla u|^2+\int_{|x|<\rho}
\frac{(u^2-\mu)^2}{4\epsilon}+\int_{|x|>\rho}\frac{u^2(u^2-2\mu)}{4\epsilon}- a \int_{\R^2}f_1 u ,
\end{equation}
and claim the following bound:
\begin{lemma}\label{bbnm}We have
\begin{equation}\label{quaq1}
\limsup_{\epsilon\to 0}\mathcal{E}_{\epsilon,a}(v_{\epsilon,a})\leq\min\Big(0,\frac{2\sqrt{2}}{3}\int_{-\rho}^\rho (\mu_{\mathrm{rad}}(r))^{3/2}\dd r
-a\int_{D(0;\rho)}|f_1|\sqrt{\mu}\Big),
\text{ for arbitrary fixed $a$}.
\end{equation}
\end{lemma}
\begin{proof}
Let us consider the piecewise $C^1$ function:
\begin{equation}
\psi_\epsilon(x)=
\begin{cases}
\sqrt{\mu(x)} &\text{for } |x|\leq \rho-\epsilon^{2/3} \\
k_\epsilon \epsilon^{-1/3}(\rho-|x|) &\text{for }\rho-\epsilon^{2/3} \leq|x|\leq \rho\\
0 &\text{for } |x|\geq \rho
\end{cases}, \nonumber
\end{equation}
with $k_\epsilon$ defined by $k_\epsilon \epsilon^{1/3}=\sqrt{\mu_{\mathrm{rad}}(\rho- \epsilon^{2/3})}$, so that $k_\epsilon=|\mu'_{\mathrm{rad}} (\rho)|^{\frac{1}{2}}+o(1)$.
Since $\psi_\epsilon \in H^1(\R^2)$, it is clear that $\mathcal E(v)\leq \mathcal E(\psi_\epsilon)$.
We check that $\mathcal E(\psi)=\frac{\pi|\mu_1|\rho}{6}|\epsilon\ln\epsilon|+\mathcal O(\epsilon)$, since it is the sum of the following integrals:
$$\int_{\rho-\epsilon^{2/3}<|x|<\rho}\frac{(\psi^2-\mu)^2}{4\epsilon}=\mathcal O(\epsilon),\ \int_{|x|>\rho-\epsilon^{2/3}}\frac{\epsilon}{2} |\nabla \psi|^2=\mathcal O(\epsilon),$$
$$\int_{|x|\leq\rho-\epsilon^{2/3}}\frac{\epsilon}{2}\frac{ |\mu'_{\mathrm{rad}}(|x|)|^2}{4\mu}=\frac{\epsilon|\mu_1|}{8}
\int_{|x|\leq\rho-\epsilon^{2/3}}\frac{1}{\rho-|x|}+\mathcal O(\epsilon)=\frac{\pi|\mu_1|\rho}{6}|\epsilon\ln\epsilon|+\mathcal O(\epsilon).$$
Thus, $\limsup_{\epsilon\to 0}\mathcal{E}_{\epsilon,a}(v_{\epsilon,a})\leq \limsup_{\epsilon\to 0}\mathcal{E}_{\epsilon,a}(\psi_\epsilon)=0$.
Next, we set $\zeta_\epsilon:= \epsilon^{-\beta}$, with $\beta\in (\frac{1}{3},\frac{4}{9})$, and define the piecewise $C^1$ functions:
\begin{equation}
l_\epsilon(x_2)=
\begin{cases}
\frac{\psi_\epsilon (\epsilon\zeta_\epsilon,x_2)}{\tanh \big( \zeta_\epsilon\frac{\psi_\epsilon(0,x_2)}{\sqrt{2}}\big)}
&\text{for } |x_2|\leq (\rho^2-\epsilon^2\zeta_\epsilon^2)^{1/2}
\\
0 &\text{for } |x_2|\geq (\rho^2-\epsilon^2\zeta_\epsilon^2)^{1/2}
\end{cases}, \nonumber
\end{equation}
and
\begin{equation}
\chi_\epsilon(x)=
\begin{cases}
l_\epsilon (x_2)\tanh\big(\frac{x_1\psi_\epsilon(0,x_2)}{\sqrt{2}\epsilon}\big) &\text{for } |x_1|\leq \epsilon \zeta_\epsilon
\\
\psi_\epsilon (x) &\text{for } x_1\geq \epsilon\zeta_\epsilon\\
-\psi_\epsilon (x) &\text{for } x_1\leq -\epsilon\zeta_\epsilon
\end{cases}. \nonumber
\end{equation}
We also consider the sets $$D_\epsilon^1:=\{(x_1,x_2): |x_1|\leq \epsilon \zeta_\epsilon, |x_2|\leq \big((\rho-\epsilon^{2/3})^2-\epsilon^2\zeta_\epsilon^2\big)^{1/2}\},$$
$$D_\epsilon^2:=\{(x_1,x_2): |x_1|\leq \epsilon \zeta_\epsilon, |x_2|\geq \big((\rho-\epsilon^{2/3})^2-\epsilon^2\zeta_\epsilon^2\big)^{1/2}, |x|\leq \rho\},$$
and
$$D_\epsilon^3:=\{(x_1,x_2): |x_1|\geq \epsilon \zeta_\epsilon, |x|\leq \rho\}.$$
On the one hand, it is clear that $$\lim_{\epsilon\to0}-a\int_{\R^2}f_1\chi_\epsilon=-a\int_{D(0;\rho)}|f_1|\sqrt{\mu},$$
and
$$\lim_{\epsilon\to0}\int_{D_\epsilon^3}\big(\frac{\epsilon}{2}|\nabla \chi_\epsilon|^2+\frac{(\chi_\epsilon^2-\mu)^2}{4\epsilon}\big)=0.$$
In addition, it is a simple calculation to verify that
$$\lim_{\epsilon\to0}\int_{D_\epsilon^2}\big(\frac{\epsilon}{2}|\nabla \chi_\epsilon|^2+\frac{(\chi_\epsilon^2-\mu)^2}{4\epsilon}\big)=0.$$
On the other hand, when $|x_2|\leq \big((\rho-\epsilon^{2/3})^2-\epsilon^2\zeta_\epsilon^2\big)^{1/2}=:\tau_\epsilon$, we have
$l_\epsilon^2(x_2)=\mu(0,x_2)+\mathcal O(\epsilon^2\zeta_\epsilon^2)$, uniformly in $x_2$.
Our claim is that
\begin{equation}\label{quabb}
\lim_{\epsilon\to0}\int_{D_\epsilon^1}\big(\frac{\epsilon}{2}|\nabla \chi_\epsilon|^2+\frac{(\chi_\epsilon^2-\mu)^2}{4\epsilon}\big)=\frac{2\sqrt{2}}{3}\int_{-\rho}^\rho (\mu(0,x_2))^{3/2}\dd x_2.
\end{equation}
Indeed, setting $\tilde \chi(x_1,x_2)=\sqrt{\mu(0,x_2)}\tanh\Big(x_1\sqrt{\frac{\mu(0,x_2)}{2}}\Big)$, we can see that
$\int_{D_\epsilon^1}\big(\frac{\epsilon}{2}|\nabla \chi_\epsilon|^2+\frac{(\chi_\epsilon^2-\mu)^2}{4\epsilon}\big)$ is the sum of the following integrals:
$$\int_{D_\epsilon^1}\frac{\mu^2}{4\epsilon}=\int_{D_\epsilon^1}\frac{\mu^2(0,x_2)}{4\epsilon}+\mathcal O(\zeta_\epsilon^2\epsilon), $$
$$\int_{D^1_\epsilon}\frac{\epsilon}{2}\Big|\frac{\partial\chi_\epsilon}{\partial x_1}\Big|^2=\int_{|x_1|<\zeta_\epsilon, |x_2|<\tau_\epsilon}\frac{1}{2}\frac{l_\epsilon^2(x_2)}{\mu(0,x_2)}\Big|\frac{\partial\tilde \chi}{\partial x_1}\Big|^2
=\int_{|x_1|<\zeta_\epsilon, |x_2|<\tau_\epsilon}\frac{1}{2}\Big|\frac{\partial\tilde \chi}{\partial x_1}\Big|^2 +\mathcal O( \epsilon^{4/3}\zeta_\epsilon^2),$$
$$\int_{D^1_\epsilon}\frac{\epsilon}{2}\Big|\frac{\partial\chi_\epsilon}{\partial x_2}\Big|^2=\mathcal O( \epsilon^{4/3}\zeta_\epsilon^3),$$
\begin{multline*}
-\int_{D^1_\epsilon}\frac{\mu}{2\epsilon}\chi^2_\epsilon=-\int_{D^1_\epsilon}\frac{\mu(0,x_2)}{2\epsilon}\chi^2_\epsilon+\mathcal O(\epsilon\zeta_\epsilon^2)=-\int_{|x_1|<\zeta_\epsilon, |x_2|<\tau_\epsilon }\frac{l_\epsilon^2(x_2)}{\mu(0,x_2)}\frac{\mu(0,x_2)\tilde \chi^2}{2}+\mathcal O(\epsilon\zeta_\epsilon^2)\\=-\int_{|x_1|<\zeta_\epsilon, |x_2|<\tau_\epsilon }\frac{\mu(0,x_2)\tilde \chi^2}{2}+\mathcal O( \epsilon^{4/3}\zeta_\epsilon^3),
\end{multline*}
$$\int_{D^1_\epsilon }\frac{\chi_\epsilon^4}{4\epsilon}=\int_{|x_1|<\zeta_\epsilon, |x_2|<\tau_\epsilon }\frac{l_\epsilon^4(x_2)}{(\mu(0,x_2))^2}\frac{\tilde \chi^4}{4}=\int_{|x_1|<\zeta_\epsilon ,|x_2|<\tau_\epsilon}\frac{\tilde \chi^4}{4}+\mathcal O( \epsilon^{4/3}\zeta_\epsilon^3).$$
Gathering the previous results,
it follows that
$$\lim_{\epsilon\to0}\int_{D_\epsilon^1}\big(\frac{\epsilon}{2}|\nabla \chi_\epsilon|^2+\frac{(\chi_\epsilon^2-\mu)^2}{4\epsilon}\big)=\int_{-\rho}^\rho\int_{\R}\Big(\frac{1}{2}\Big|\frac{\partial\tilde \chi}{\partial x_1}\Big|^2+\frac{(\tilde \chi^2-\mu(0,x_2))^2}{4}\Big)\dd x_1\dd x_2=\frac{2\sqrt{2}}{3}\int_{-\rho}^\rho (\mu(0,x_2))^{3/2}\dd x_2.$$
Finally, in view of what precedes we deduce that $$\limsup_{\epsilon\to 0}\mathcal{E}_{\epsilon,a}(v_{\epsilon,a})\leq\lim_{\epsilon\to 0}\mathcal E_{\epsilon,a}(\chi_\epsilon)=\frac{2\sqrt{2}}{3}\int_{-\rho}^\rho (\mu_{\mathrm{rad}}(r))^{3/2}\dd r -a\int_{D(0;\rho)}|f_1|\sqrt{\mu}.$$
\end{proof}
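The value $\frac{2\sqrt{2}}{3}\mu_0^{3/2}$ appearing above is the energy of the one-dimensional heteroclinic $s\mapsto\sqrt{\mu_0}\tanh(\sqrt{\mu_0/2}\,s)$. The identity can be checked numerically; the sketch below is an illustration only, not part of the proof.

```python
import numpy as np

def interface_energy(mu0, L=40.0, n=400001):
    """Numerically integrate (1/2)|V'|^2 + (V^2 - mu0)^2/4 for the tanh profile."""
    k = np.sqrt(mu0 / 2.0)
    s = np.linspace(-L / k, L / k, n)          # the tail beyond |ks| = 40 is negligible
    V = np.sqrt(mu0) * np.tanh(k * s)
    dV = np.sqrt(mu0) * k / np.cosh(k * s)**2  # exact derivative of the profile
    dens = 0.5 * dV**2 + 0.25 * (V**2 - mu0)**2
    return float(np.sum(0.5 * (dens[1:] + dens[:-1]) * np.diff(s)))  # trapezoid rule

for mu0 in (0.5, 1.0, 2.0):
    exact = (2.0 * np.sqrt(2.0) / 3.0) * mu0**1.5
    assert abs(interface_energy(mu0) - exact) < 1e-5
print("interface energy matches (2*sqrt(2)/3) * mu0^(3/2)")
```

In closed form, the energy density equals $\frac{\mu_0^2}{2}\mathrm{sech}^4(\sqrt{\mu_0/2}\,s)$, and $\int_{\R}\mathrm{sech}^4 = \frac{4}{3}$ gives the stated value.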
At this stage, we are going to compute a lower bound of $\mathcal{E}_{\epsilon,a}(v_{\epsilon,a})$ (cf. \eqref{fatt}). This computation reduces to the one-dimensional problem studied in \cite{panayotis_1}. For every $x_2\in (-\rho,\rho)$ fixed, we consider the restriction of the energy to the line $\{(t,x_2): t\in \R\}$:
\begin{equation}\label{funct 00}
E^{x_2}(\phi)=\int_{\R}\left(\frac{\epsilon}{2}|\phi'(t)|^2-\frac{1}{2\epsilon}\mu(t,x_2)\phi^2(t)+\frac{1}{4\epsilon}|\phi(t)|^4-a f_1(t,x_2)\phi(t)\right)\dd t, \ \phi \in H^1(\R).
\end{equation}
We recall (cf. \cite{panayotis_1}) that there exists $\psi^{x_2}_{\epsilon,a} \in H^1(\R)$ such that
$E^{x_2}(\psi^{x_2}_{\epsilon,a})=\min_{H^1(\R)} E^{x_2}$, and moreover setting
$$\mathcal{E}^{x_2}(\phi):=E^{x_2}(\phi)+\int_{|t|<\sqrt{\rho^2-x_2^2}}\frac{\mu^2(t,x_2)}{4\epsilon}\dd t,$$
$$a_*(x_2):=\inf_{t \in (-\sqrt{\rho^2-x_2^2},0]}\frac{\sqrt{2}(\mu(t,x_2))^{3/2}}{3\int_{-\sqrt{\rho^2-x_2^2}}^t
|f_1(s,x_2)|\sqrt{\mu(s,x_2)} \dd s},$$
and
$$a^*(x_2):=\sup_{t\in[-\sqrt{\rho^2-x_2^2},0)}
\frac{\sqrt{2}\big((\mu(0,x_2))^{3/2}-(\mu(t,x_2))^{3/2}\big)}{3\int_t^0|f_1(s,x_2)|\sqrt{\mu(s,x_2)}\dd s},$$
we have
\begin{equation}\label{recal1}
\lim_{\epsilon\to 0} \mathcal{E}^{x_2}_{\epsilon,a}(\psi^{x_2}_{\epsilon,a})=0, \forall x_2\in (-\rho,\rho), \forall a\in(0, a_*(x_2)),
\end{equation}
and
\begin{equation}\label{recal2}
\lim_{\epsilon\to 0} \mathcal{E}^{x_2}_{\epsilon,a}(\psi^{x_2}_{\epsilon,a})=\frac{2\sqrt{2}}{3}(\mu(0,x_2))^{3/2}-a
\int_{|t|<\sqrt{\rho^2-x_2^2}} |f_1(t,x_2)| \sqrt{\mu(t,x_2)}\dd t, \forall x_2\in (-\rho,\rho), \forall a\in(a^*(x_2),\infty).
\end{equation}
Also note that $0<a_*=\inf_{x_2\in (-\rho,\rho)}a_*(x_2)\leq a_*(x_2)\leq a^*(x_2)\leq a^*=\sup_{x_2\in (-\rho,\rho)} a^*(x_2)$, for every $x_2\in (-\rho,\rho)$.
In view of these results we claim:
\begin{lemma}\label{bbnmbispr}We have
\begin{equation}\label{fattlem1}
\lim_{\epsilon\to 0}\int_{\R^2}\epsilon\Big| \frac{\partial v_{\epsilon,a}}{\partial x_2}\Big|^2=0 \text{ when } a\in (0,a_*)\cup (a^*,\infty),
\end{equation}
\begin{equation}\label{fatnew1}
\lim_{\epsilon\to 0}\int_{-\rho}^\rho
|\mathcal{E}^{x_2}_{\epsilon,a}(v_{\epsilon,a}(\cdot,x_2))|\dd x_2=0 \text{ when } a\in (0,a_*),
\end{equation}
and
\begin{equation}\label{fatnew2}
\lim_{\epsilon\to 0}\int_{-\rho}^\rho
\Big|\mathcal{E}^{x_2}_{\epsilon,a}(v_{\epsilon,a}(\cdot,x_2)) - \frac{2\sqrt{2}}{3}(\mu(0,x_2))^{3/2}+a
\int_{|t|<\sqrt{\rho^2-x_2^2}} |f_1(t,x_2)| \sqrt{\mu(t,x_2)}\dd t\Big| \dd x_2=0\text{ when } a\in (a^*,\infty).
\end{equation}
\end{lemma}
\begin{proof}
It is clear that
$\mathcal E_{\epsilon,a}(v_{\epsilon,a})=\frac{\epsilon}{2}\int_{\R^2}|v_{x_2}|^2+\int_{-\rho}^\rho
\mathcal{E}^{x_2}_{\epsilon,a}(v_{\epsilon,a}(\cdot,x_2))\dd x_2 +\int_{|x_2|>\rho}\frac{v^2(v^2-2\mu)}{4\epsilon}- a \int_{|x_2|>\rho}f_1 v$.
We are going to examine each of these integrals.
In view of Theorem \ref{theorem 1} (iii), we have by dominated convergence
\begin{equation}\label{cvdomin}
\lim_{\epsilon\to 0}\int_{|x_2|>\rho}f_1 v=0.
\end{equation}
On the other hand, since $\mathcal{E}^{x_2}_{\epsilon,a}(v_{\epsilon,a}(\cdot,x_2))\geq
\mathcal{E}^{x_2}(\psi^{x_2}_{\epsilon,a})$, and $\mathcal{E}^{x_2}_{\epsilon,a}(v_{\epsilon,a}(\cdot,x_2))$ is uniformly bounded from below on $(-\rho,\rho)$,
it follows from \eqref{recal1}, \eqref{recal2}, and Fatou's Lemma that
\begin{align}\label{fatt}
\liminf_{\epsilon\to 0}\int_{-\rho}^\rho
\mathcal{E}^{x_2}_{\epsilon,a}(v_{\epsilon,a}(\cdot,x_2))\dd x_2&\geq\int_{-\rho}^\rho \liminf_{\epsilon\to 0}
\mathcal{E}^{x_2}_{\epsilon,a}(v_{\epsilon,a}(\cdot,x_2))\dd x_2
\geq\int_{-\rho}^\rho \liminf_{\epsilon\to 0}\mathcal{E}^{x_2}(\psi^{x_2}_{\epsilon,a})\nonumber \\
&\geq\begin{cases}0 &\text{ when } a\in (0,a_*),\\
\frac{2\sqrt{2}}{3}\int_{-\rho}^\rho (\mu_{\mathrm{rad}}(r))^{3/2}\dd r
-a\int_{D(0;\rho)}|f_1|\sqrt{\mu} &\text{ when } a\in (a^*,\infty).
\end{cases}
\end{align}
Next, we utilize \eqref{quaq1}, \eqref{cvdomin}, and \eqref{inegaaa}, to obtain
\begin{align}\label{fattcal}
\limsup_{\epsilon\to 0}\int_{-\rho}^\rho
\mathcal{E}^{x_2}_{\epsilon,a}(v_{\epsilon,a}(\cdot,x_2))\dd x_2&\leq \limsup_{\epsilon\to 0}\mathcal{E}_{\epsilon,a}(v_{\epsilon,a})\nonumber \\
&\leq\begin{cases}0 &\text{ when } a\in (0,a_*),\\
\frac{2\sqrt{2}}{3}\int_{-\rho}^\rho (\mu_{\mathrm{rad}}(r))^{3/2}\dd r
-a\int_{D(0;\rho)}|f_1|\sqrt{\mu} &\text{ when } a\in (a^*,\infty).
\end{cases}
\end{align}
Combining \eqref{fatt} with \eqref{fattcal}, we deduce that
\begin{equation}\label{fattlem}
\lim_{\epsilon\to 0}\int_{-\rho}^\rho
\mathcal{E}^{x_2}_{\epsilon,a}(v_{\epsilon,a}(\cdot,x_2))\dd x_2=\begin{cases}0 &\text{ when } a\in (0,a_*),\\
\frac{2\sqrt{2}}{3}\int_{-\rho}^\rho (\mu_{\mathrm{rad}}(r))^{3/2}\dd r
-a\int_{D(0;\rho)}|f_1|\sqrt{\mu} &\text{ when } a\in (a^*,\infty),
\end{cases}
\end{equation}
from which \eqref{fattlem1} follows. For a.e. $x_2\in(-\rho,\rho)$, we also obtain (respectively when $a\in (0,a_*)$ and $a\in(a^*,\infty)$), that
\begin{equation}\label{fattlem2}
\liminf_{\epsilon\to 0} \mathcal{E}^{x_2}_{\epsilon,a}(v_{\epsilon,a}(\cdot,x_2))=\begin{cases}0,\\
\frac{2\sqrt{2}}{3}(\mu(0,x_2))^{3/2}-a
\int_{|t|<\sqrt{\rho^2-x_2^2}} |f_1(t,x_2)| \sqrt{\mu(t,x_2)}\dd t,
\end{cases}
\end{equation}
thus
\begin{equation}\label{fattlem23}\begin{cases}
\lim_{\epsilon\to 0} \min\big[\mathcal{E}^{x_2}_{\epsilon,a}(v_{\epsilon,a}(\cdot,x_2)),0\big]=0,\\
\lim_{\epsilon\to 0} \min\big[\mathcal{E}^{x_2}_{\epsilon,a}(v_{\epsilon,a}(\cdot,x_2))- \frac{2\sqrt{2}}{3}(\mu(0,x_2))^{3/2}+a
\int_{|t|<\sqrt{\rho^2-x_2^2}} |f_1(t,x_2)| \sqrt{\mu(t,x_2)}\dd t,0\big]=0,
\end{cases}
\end{equation}
and by dominated convergence
\begin{equation}\label{fattlem24}\begin{cases}
\lim_{\epsilon\to 0}\int_{-\rho}^\rho \min\big[\mathcal{E}^{x_2}_{\epsilon,a}(v_{\epsilon,a}(\cdot,x_2)),0\big]\dd x_2=0,\\
\lim_{\epsilon\to 0} \int_{-\rho}^\rho\min\big[\mathcal{E}^{x_2}_{\epsilon,a}(v_{\epsilon,a}(\cdot,x_2))- \frac{2\sqrt{2}}{3}(\mu(0,x_2))^{3/2}+a
\int_{|t|<\sqrt{\rho^2-x_2^2}} |f_1(t,x_2)| \sqrt{\mu(t,x_2)}\dd t,0\big]\dd x_2=0.
\end{cases}
\end{equation}
Combining \eqref{fattlem} with \eqref{fattlem24}, we conclude that \eqref{fatnew1} and \eqref{fatnew2} hold.
\end{proof}
The proof of Theorem \ref{thcv} (ii) will follow from
\begin{lemma}\label{conclth2}For fixed $a\in (0,a_*)$, we have
\begin{equation}\label{qaz1}
\lim_{\epsilon\to 0}\int_{\R^2}f_1v_{\epsilon,a}=0,
\end{equation}
and
\begin{equation}\label{qaz2}
\lim_{\epsilon\to 0}\Big(\int_{\R^2}\frac{\epsilon}{2}|\nabla v_{\epsilon,a}|^2+\int_{|x|<\rho}
\frac{(v_{\epsilon,a}^2-\mu)^2}{4\epsilon}\Big)=0.
\end{equation}
\end{lemma}
\begin{proof}
Given a sequence $\epsilon_n\to 0$, we are going to show that we can extract a subsequence $\epsilon'_n\to 0$ such that $\lim_{n\to\infty}\int_{\R^2}f_1v_{\epsilon'_n,a}=0$. This will prove \eqref{qaz1}.
According to \eqref{fattlem1} and \eqref{fatnew1}, there exists a negligible set $N\subset (-\rho,\rho)$ such that for a subsequence called $\epsilon'_n$, and for every $x_2\in (-\rho,\rho)\setminus N$, we have
\begin{equation}\label{claa1}
\lim_{n\to \infty}\int_{\R}\epsilon'_n\Big| \frac{\partial v_n}{\partial x_2}(t,x_2)\Big|^2\dd t=0,
\end{equation}
and
\begin{equation}\label{claa2}
\lim_{n\to \infty}\mathcal{E}^{x_2}_{\epsilon'_n,a}(v_n(\cdot,x_2))=0,
\end{equation}
where we have set $v_n=v_{\epsilon'_n,a}$.
Our claim is that
\begin{equation}\label{claiml1}
\lim_{n\to\infty}\int_{\R} f_1(t,x_2)v_n(t,x_2) \dd t=0, \ \forall x_2\in (-\rho,\rho)\setminus N.
\end{equation}
From \eqref{claa1} and \eqref{claa2}, it follows that given $x_2\in (-\rho,\rho)\setminus N$ and $\gamma\in(0,\sqrt{\rho^2-x_2^2})$, there exists $ \bar n(x_2,\gamma)$ such that
\begin{equation}\label{zerovv}
n\geq\bar n(x_2,\gamma),\ |t|< \gamma\Rightarrow v_n(t,x_2)\neq 0.
\end{equation}
Indeed, otherwise we can find a subsequence $n_k$ and a sequence $(-\gamma,\gamma)\ni t_k\to t_0$ such that $v_{n_k}(t_k,x_2)=0$. Then, proceeding as in \cite[Proof of Theorem 1.1, Step 6]{panayotis_1} we obtain that $\liminf_{k\to \infty}\mathcal{E}^{x_2}_{\epsilon'_{n_k},a}(v_{n_k}(\cdot,x_2))>0$, which contradicts \eqref{claa2}. Next, for fixed $t \in (-\gamma,\gamma)$, we set $\tilde v_n(s):=v_n(t+\epsilon'_ns_1,x_2+\epsilon'_n s_2)$, and proceeding as in the proof of Theorem \ref{theorem 1} (i) above, we can see that
$\tilde v_n$ converges in $C^2_{\mathrm{ loc}}(\R^2)$ to a minimal solution $\tilde V$ of the equation $\Delta \tilde V +(\mu(t,x_2)-\tilde V^2)\tilde V=0$.
If $\tilde V(s)=\sqrt{\mu(t,x_2)}\tanh(\sqrt{\mu(t,x_2)/2}(s-s_0)\cdot \nu)$, for some unit vector $\nu =(\nu_1,\nu_2)\in \R^2$, and some $s_0\in \R^2$, then \eqref{zerovv} excludes the case where $\nu_1\neq 0$, while \eqref{claa1} excludes the case where $\nu_2\neq 0$. Thus, $\tilde V(s)\equiv \pm \sqrt{\mu(t,x_2)}$, and in particular $\lim_{n\to\infty}|v_n(t,x_2)|=\sqrt{\mu(t,x_2)}$. Finally, given $\delta>0$, we choose $\gamma$ such that $2(\sqrt{\rho^2-x_2^2}-\gamma)\|f_1\| _{L^\infty}\sup_n \|v_n\| _{L^\infty}<\delta/2$, and since $\lim_{n\to\infty}\big|\int_{-\gamma}^\gamma f_1(t,x_2)v_n(t,x_2)\dd t\big|=\lim_{n\to\infty}\big|\int_{-\gamma}^\gamma f_1(t,x_2)|v_n(t,x_2)|\dd t\big|=0$, we deduce that $$\Big|\int_{|t|<\sqrt{\rho^2-x_2^2}} f_1(t,x_2)v_n(t,x_2)\dd t\Big|<\delta$$ provided that $n$ is large enough. This proves that $\lim_{n\to\infty}\int_{|t|<\sqrt{\rho^2-x_2^2}}f_1(t,x_2)v_n(t,x_2)\dd t=0$, and recalling that $\lim_{n\to\infty}\int_{|t|>\sqrt{\rho^2-x_2^2}}f_1(t,x_2)v_n(t,x_2)\dd t=0$ in view of Theorem \ref{theorem 1} (iii), we have established \eqref{claiml1}. Then, we conclude that $\lim_{n\to \infty}\int_{|x_2|<\rho}f_1v_n=0$ by dominated convergence, and since $\lim_{n\to \infty}\int_{|x_2|>\rho}f_1v_n=0$ by Theorem \ref{theorem 1} (iii), we have proved \eqref{qaz1}. The limit in \eqref{qaz2} follows from \eqref{qaz1} and \eqref{quaq1}.
\end{proof}
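The blow-up argument above repeatedly uses the fact that $\tilde V(s)=\sqrt{\mu_0}\tanh\big(\sqrt{\mu_0/2}\,(s\cdot\nu)\big)$ solves the rescaled equation $\Delta \tilde V+(\mu_0-\tilde V^2)\tilde V=0$. Since $\tilde V$ depends only on $s\cdot\nu$, this reduces to the ODE $V''+(\mu_0-V^2)V=0$, which the following sketch (an illustration only, not part of the proof) verifies by finite differences:

```python
import numpy as np

def residual(mu0, h=1e-3, L=10.0):
    """Max of |V'' + (mu0 - V^2) V| for V(s) = sqrt(mu0) tanh(sqrt(mu0/2) s)."""
    s = np.arange(-L, L, h)
    V = np.sqrt(mu0) * np.tanh(np.sqrt(mu0 / 2.0) * s)
    d2V = (V[2:] - 2.0 * V[1:-1] + V[:-2]) / h**2   # central second difference
    return float(np.max(np.abs(d2V + (mu0 - V[1:-1]**2) * V[1:-1])))

for mu0 in (0.5, 1.0, 2.0):
    assert residual(mu0) < 1e-4                     # residual is O(h^2)
print("tanh profile solves V'' + (mu0 - V^2) V = 0 up to discretization error")
```

Analytically, $V''=-\mu_0\sqrt{\mu_0}\tanh\,\mathrm{sech}^2$ while $(\mu_0-V^2)V=\mu_0\sqrt{\mu_0}\tanh\,\mathrm{sech}^2$, so the two terms cancel exactly.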
\begin{proof}[Conclusion of the proof of Theorem \ref{thcv} (ii)]
We first show that when $a\in(0,a_*)$, we have $Z \subset \{|x|=\rho\} \cup \{x_1=0, |x_2|\geq \rho\}$.
Assume by contradiction that there exist a sequence $\epsilon_n\to 0$, and a sequence $\bar x_n\to x_0 \in D(0;\rho_0)$, with $\rho_0<\rho$, such that $v_n:=v_{\epsilon_n,a}$ vanishes at $\bar x_n$. By Lemma \ref{smooth} (i), we know that $\bar x_n$ belongs to a smooth branch of zeros that we called $Z_{\epsilon_n}$. Let $D_1(n)=\{x_1: (x_1,x_2)\in Z_{\epsilon_n}\}$, $D_2(n)=\{x_2: (x_1,x_2)\in Z_{\epsilon_n}\}$, and for $i=1,2$, let $\delta_i(n)=\mathcal L^1(D_i(n))$, where $\mathcal L$ denotes the Lebesgue measure. Since by Lemma \ref{smooth} (ii), we have $\frac{(v_{n}^2(x)-\mu(x))^2}{4\epsilon_n}\geq \frac{9\mu_{\mathrm{rad}}^2(\rho_0)}{4^3\epsilon_n}$, for $x \in \cup_{z\in Z_{\epsilon_n} }D(z; l\epsilon_n)$,
it follows that $ \frac{9\mu_{\mathrm{rad}}^2(\rho_0)l}{4^3}\delta_i(n)\leq \int_{|x|<\rho}\frac{(v_{n}^2-\mu)^2}{4\epsilon_n}$, and thus $\lim_{n \to\infty}\delta_i(n)=0$, in view of \eqref{qaz2}. This implies in particular that $\bar x_n$ belongs to a smooth Jordan curve $\Gamma_n\subset D(0;\rho_0)$. Let $\omega_n$ be the open set bounded by $\Gamma_n$, let $\nu_n(z)$ be the outer unit normal vector at $z\in\Gamma_n$, and let us define the open set $\Omega_n=\{x\in\R^2: d(x,\overline \omega_n)<\lambda \epsilon_n\}$, where $d$ stands for the Euclidean distance, and $\lambda$ is the constant defined in Lemma \ref{smooth}.
As previously we set $\tilde D_1(n)=\{x_1: (x_1,x_2)\in \Gamma_{n}\}$, $\tilde D_2(n)=\{x_2: (x_1,x_2)\in \Gamma_{n}\}$, and for $i=1,2$, $\tilde \delta_i(n)=\mathcal L^1(\tilde D_i(n))$.
By Lemma \ref{smooth} (iii), we have either $v_n\geq \frac{3}{4}\sqrt{\mu_{\mathrm{rad}}(\rho_0)}$ or $v_n\leq -\frac{3}{4}\sqrt{\mu_{\mathrm{rad}}(\rho_0)}$ on $\partial \Omega_n$. Assuming without loss of generality that $v_n\geq \frac{3}{4}\sqrt{\mu_{\mathrm{rad}}(\rho_0)}$ on $\partial \Omega_n$, we introduce the comparison function
\begin{equation}\label{chi1}
\chi_n(x)=\begin{cases}
v_n(x) &\text{for } x\in\R^2\setminus \Omega_n\\
\max(|v_n(x) |, \sqrt{\mu_{\mathrm{rad}}(\rho_0)}/2)&\text{for } x\in\Omega_n,
\end{cases}
\end{equation}
and notice that $|\mu-\chi_n^2|\leq|\mu-v_n^2|$. Setting $S_n:=\{x: d(x, \Gamma_n)< l\epsilon_n\}\subset \Omega_n$, it is clear that
$\mathcal L^2(S_n)\geq \tilde\delta_i(n)l\epsilon_n$ for $i=1,2$. In addition, according to Lemma \ref{smooth} (ii) and (iv), the inequalities $|v_n|\leq \sqrt{\mu_{\mathrm{rad}}(\rho_0)}/2$, and $\epsilon_n |\nabla v_n|\geq \lambda'$ hold on $S_n$.
Finally, we also notice that Lemma \ref{smooth} (iv) implies that $\tilde \delta_i(n)\geq \lambda\epsilon_n$. Gathering these results we reach the following contradiction
\begin{align}
\mathcal{E}_{\epsilon_n,a}(\chi_{n})-\mathcal{E}_{\epsilon_n,a}(v_{n})&\leq -\frac{\epsilon_n}{2}\int_{|v_n|\leq \sqrt{\mu_{\mathrm{rad}}(\rho_0)}/2}|\nabla v_n|^2+a\int_{\Omega_n}f_1(v_n-\chi_n)\nonumber\\
&\leq-\frac{|\lambda'|^2}{2\epsilon_n}\mathcal L^2(S_n)+K\mathcal L^2(\Omega_n), \text{ where $K>0$ is a constant} \nonumber\\
&\leq-\frac{|\lambda'|^2l}{2}\tilde \delta_1(n)+K(\tilde \delta_1(n)+2\lambda \epsilon_n)(\tilde \delta_2(n)+2\lambda \epsilon_n)\nonumber\\
&\leq\Big( 9K\tilde \delta_2(n)-\frac{|\lambda'|^2l}{2}\Big)\tilde \delta_1(n)<0, \text{ for $n$ large enough}.
\end{align}
This proves that there are no limit points of the zeros of $v$ in $D(0;\rho)$. In view of Theorem \ref{theorem 1} (iii) we deduce that
$Z \subset \{|x|=\rho\} \cup \{x_1=0, |x_2|\geq \rho\}$. Another consequence is that given $\rho_0\in(0,\rho)$, there exists $\epsilon_0>0$ such that when $\epsilon\in (0,\epsilon_0)$, the minimizer $v_{\epsilon,a}$ does not vanish on $D(0;\rho_0)$.
Up to change of $v(x_1,x_2)$ by $-v(-x_1,x_2)$, we may assume that $v_{\epsilon,a}>0$ on $D(0;\rho_0)$. Then, in view of Theorem \ref{theorem 1} (iii) we have
$ \{x_1<0, |x|=\rho\} \cup \{x_1=0, |x_2|\geq \rho\}\subset Z $. Finally, the limit in \eqref{cvn2} follows from Theorem \ref{theorem 1} (i), in the case where $|x|<\rho$. On the other hand, for fixed $x$ such that $|x|\geq\rho$, the rescaled minimizers
$\tilde v(s)= v(x+s\epsilon)$ converge to a bounded solution $\tilde V$ of the equation $\Delta \tilde V(s)+(\mu(x)-\tilde V^2(s))\tilde V(s)=0$. As in the proof of Theorem \ref{theorem 1} (iii),
the associated potential $W(u)=\frac{u^4}{4}-\frac{\mu(x)}{2}u^2$ is strictly convex, thus $\tilde V$ satisfies $W'(\tilde V)=0$, i.e. $\tilde V=0$.
\end{proof}
Now we establish the analog of Lemma \ref{conclth2} in the case where $a>a^*$, to complete the proof of Theorem \ref{thcv} (iii).
\begin{lemma}\label{conclth2aa}For fixed $a\in (a^*,\infty)$, and for every $\gamma\in (0,\rho)$, we have
\begin{equation}\label{qaz1aa}
\lim_{\epsilon\to 0}\int_{\R^2}f_1v_{\epsilon,a}=\int_{D(0;\rho)}|f_1|\sqrt{\mu},
\end{equation}
\begin{equation}\label{fattlem1aa}
\lim_{\epsilon\to 0}\int_{|x|<\rho, |x_1|<\gamma}\Big(\frac{\epsilon}{2}\Big| \frac{\partial v_{\epsilon,a}}{\partial x_1}\Big|^2+\frac{(v_{\epsilon,a}^2-\mu)^2}{4\epsilon}\Big)=\int_{-\rho}^\rho \frac{2\sqrt{2}}{3}(\mu(0,x_2))^{3/2}\dd x_2,
\end{equation}
\begin{equation}\label{qaz2aa}
\lim_{\epsilon\to 0}\int_{|x|<\rho, |x_1|>\gamma}\Big(\frac{\epsilon}{2}| \nabla v_{\epsilon,a}|^2+\frac{(v_{\epsilon,a}^2-\mu)^2}{4\epsilon}\Big)=0.
\end{equation}
\end{lemma}
\begin{proof}
Given a sequence $\epsilon_n\to 0$, we are going to show that we can extract a subsequence $\epsilon'_n\to 0$ such that $\lim_{n\to\infty}\int_{\R^2}f_1v_{\epsilon'_n,a}=\int_{D(0;\rho)}|f_1|\sqrt{\mu}$, and
$$\lim_{n\to\infty}\int_{|x|<\rho, |x_1|<\gamma}\Big(\frac{\epsilon'_n}{2}\Big| \frac{\partial v_{\epsilon'_n,a}}{\partial x_1}\Big|^2+\frac{(v_{\epsilon'_n,a}^2-\mu)^2}{4\epsilon'_n}\Big)=\int_{-\rho}^\rho \frac{2\sqrt{2}}{3}(\mu(0,x_2))^{3/2}\dd x_2.$$ This will prove \eqref{qaz1aa} and \eqref{fattlem1aa}.
According to \eqref{fattlem1} and \eqref{fatnew2}, there exists a negligible set $N\subset (-\rho,\rho)$ such that for a subsequence called $\epsilon'_n$, and for every $x_2\in (-\rho,\rho)\setminus N$, we have
\begin{equation}\label{claa1aa}
\lim_{n\to \infty}\int_{\R}\epsilon'_n\Big| \frac{\partial v_n}{\partial x_2}(t,x_2)\Big|^2\dd t=0,
\end{equation}
and
\begin{equation}\label{claa2aa}
\lim_{n\to \infty}\mathcal{E}^{x_2}_{\epsilon'_n,a}(v_n(\cdot,x_2))= \frac{2\sqrt{2}}{3}(\mu(0,x_2))^{3/2}-a
\int_{|t|<\sqrt{\rho^2-x_2^2}} |f_1(t,x_2)| \sqrt{\mu(t,x_2)}\dd t,
\end{equation}
where we have set $v_n=v_{\epsilon'_n,a}$.
Our claim is that
\begin{equation}\label{claiml1aa}
\lim_{n\to\infty}\int_{\R} f_1(t,x_2)v_n(t,x_2) \dd t=\int_{|t|<\sqrt{\rho^2-x_2^2}} |f_1(t,x_2)| \sqrt{\mu(t,x_2)}\dd t, \ \forall x_2\in (-\rho,\rho)\setminus N.
\end{equation}
From \eqref{claa1aa} and \eqref{claa2aa}, it follows that given $x_2\in (-\rho,\rho)\setminus N$ and $\gamma\in(0,\rho)$, there exists $ \bar n(x_2,\gamma)$ such that
\begin{equation}\label{zerovvaa}
n\geq\bar n(x_2,\gamma),\ \gamma<|t|<\rho+1\Rightarrow v_n(t,x_2)\neq 0.
\end{equation}
Indeed, otherwise we can find a subsequence $n_k$ and a sequence $(-\rho-1,-\gamma)\cup(\gamma,\rho+1)\ni t_k\to t_0$ such that $v_{n_k}(t_k,x_2)=0$. Then, proceeding as in \cite[Proof of Theorem 1.1, Step 6]{panayotis_1} we obtain that $\liminf_{k\to \infty}\mathcal{E}^{x_2}_{\epsilon'_{n_k},a}(v_{n_k}(\cdot,x_2))> \frac{2\sqrt{2}}{3}(\mu(0,x_2))^{3/2}-a
\int_{|t|<\sqrt{\rho^2-x_2^2}} |f_1(t,x_2)| \sqrt{\mu(t,x_2)}\dd t$, which contradicts \eqref{claa2aa}. Thus, \eqref{zerovvaa} holds, and actually in view of Theorem \ref{theorem 1} (iii) we have
\begin{equation}\label{zerovvaabis}
n\geq\bar n(x_2,\gamma),\ \gamma<t<\rho+1\Rightarrow v_n(t,x_2)> 0, \text{ and }n\geq\bar n(x_2,\gamma),\ -\rho-1<t<-\gamma \Rightarrow v_n(t,x_2)< 0.
\end{equation}
Next, for fixed $t \in (-\sqrt{\rho^2-x_2^2},-\gamma)\cup(\gamma,\sqrt{\rho^2-x_2^2})$, we set $\tilde v_n(s):=v_n(t+\epsilon'_ns_1,x_2+\epsilon'_n s_2)$, and proceeding as in the proof of Lemma \ref{conclth2}, we can see that
$\lim_{n\to\infty}v_n(t,x_2)=\sqrt{\mu(t,x_2)}$ for $t\in (\gamma,\sqrt{\rho^2-x_2^2})$, while
$\lim_{n\to\infty}v_n(t,x_2)=-\sqrt{\mu(t,x_2)}$ for $t\in (-\sqrt{\rho^2-x_2^2},-\gamma)$.
Then, by repeating the arguments in the proof of Lemma \ref{conclth2}, our claim
\eqref{claiml1aa} follows. Finally, we conclude that $\lim_{n\to \infty}\int_{|x_2|<\rho}f_1v_n=\int_{D(0;\rho)}|f_1|\sqrt{\mu}$ by dominated convergence, and since $\lim_{n\to \infty}\int_{|x_2|>\rho}f_1v_n=0$ by Theorem \ref{theorem 1} (iii), we have established \eqref{qaz1aa}. Another consequence of \eqref{zerovvaabis} is that for every $x_2\in (-\rho,\rho)\setminus N$, there exists a sequence $\bar t_n\to 0$ such that $v_n(\bar t_n,x_2)=0$.
Setting $\tilde v_n(s):=v_n(\bar t_n+\epsilon'_ns_1,x_2+\epsilon'_n s_2)$, we obtain as in Lemma \ref{conclth2}, that
$\tilde v_n$ converges in $C^2_{\mathrm{ loc}}(\R^2)$ to $\tilde V(s)=\sqrt{\mu(0,x_2)}\tanh(\sqrt{\mu(0,x_2)/2}(s\cdot \nu))$, for some unit vector $\nu =(\nu_1,\nu_2)\in \R^2$. Again, \eqref{claa1aa} implies that $\nu=(1,0)$, and we refer to the detailed computation in \cite[Proof of Theorem 1.1, Step 6]{panayotis_1} to see that
\begin{equation}\label{detail}
\liminf_{n\to\infty}\int_{|t|<\min(\gamma,\sqrt{\rho^2-x_2^2})}\Big(\frac{\epsilon'_n}{2}\Big| \frac{\partial v_{n}}{\partial x_1}(t,x_2)\Big|^2+\frac{(v_{n}^2(t,x_2)-\mu(t,x_2))^2}{4\epsilon'_n}\Big)\dd t\geq \frac{2\sqrt{2}}{3}(\mu(0,x_2))^{3/2}.
\end{equation}
Then, it follows from Fatou's Lemma that
\begin{equation}\label{detail2}
\liminf_{n\to\infty}\int_{|x|<\rho, |x_1|<\gamma}\Big(\frac{\epsilon'_n}{2}\Big| \frac{\partial v_{\epsilon'_n,a}}{\partial x_1}\Big|^2+\frac{(v_{\epsilon'_n,a}^2-\mu)^2}{4\epsilon'_n}\Big)\geq\int_{-\rho}^\rho \frac{2\sqrt{2}}{3}(\mu(0,x_2))^{3/2}\dd x_2.
\end{equation}
Finally, combining \eqref{detail2} with \eqref{qaz1aa} and \eqref{quaq1}, we deduce \eqref{fattlem1aa} and \eqref{qaz2aa}.
\end{proof}
\begin{proof}[Conclusion of the proof of Theorem \ref{thcv} (iii)]
Proceeding as in the conclusion of the proof of Theorem \ref{thcv} (ii), we show that there are no limit points of the zeros of $v$ in the set $D(0;\rho)\cap\{(x_1,x_2): |x_1|> \gamma\}$, where $\gamma>0$ is small.
As a consequence, given $\rho_0\in(0,\rho)$, there exists $\epsilon_0>0$ such that when $\epsilon\in (0,\epsilon_0)$, the minimizer $v_{\epsilon,a}$ is positive on $D(0;\rho_0)\cap\{(x_1,x_2): x_1> \gamma\}$.
Let $K\subset (\gamma,\rho+1) \times(-\rho_0,\rho_0)$ be a compact set.
Our claim is that there exists $\epsilon_K>0$ such that when $\epsilon\in (0,\epsilon_K)$, the minimizer $v_{\epsilon,a}$ is positive on $K$. To prove this claim we assume by contradiction that there exist a sequence $\epsilon_n\to 0$, and a sequence
$K\ni x_n\to x_0$ such that $v_n(x_n)\leq 0$, where we have set $v_n:=v_{\epsilon_n,a}$. Having a closer look at the proof of Lemma \ref{conclth2aa} (cf. in particular \eqref{zerovvaabis}), we can find $\rho_1\in(0,\rho_0)$ such that
$K\subset (\gamma,\rho+1) \times(-\rho_1,\rho_1)$, and $\bar n(\rho_1,\gamma)$ such that for $n\geq\bar n(\rho_1,\gamma)$, and $t\in(\gamma,\rho+1)$, we have $v_n(t,\pm\rho_1)>0$. Next, in view of Theorem \ref{theorem 1} (iii), we also obtain that $v_n(\rho+1,s)>0$ for every $s\in [-\rho_1,\rho_1]$, provided that $n$ is large enough. Gathering these results, it follows that there exists $n_K$ such that for every $n\geq n_K$, $v_n$ is positive on the boundary of the rectangle $R:= (\gamma,\rho+1)\times(-\rho_1,\rho_1)$. In addition, for $n\geq n_K$, $v_n$ cannot take negative values in $R$, since otherwise we would have $E(|v_n|,R)<E(v_n,R)$ in contradiction with the minimality of $v_n$. Thus, $v_n$ has a local minimum at $x_n$ for $n\geq n_K$, and \eqref{ode} implies that $0\leq \epsilon^2\Delta v_n(x_n)=-\epsilon a f_1(x_n)\in(-\infty,0)$, which is a contradiction.
This establishes our claim, and now in view of Theorem \ref{theorem 1} (iii) it is clear that $Z=\{x_1=0\}$. Finally, the limit in \eqref{cvn1} is established as in the conclusion of the proof of Theorem \ref{thcv} (ii). To prove the limit in \eqref{cvn00}, we proceed as in Lemma \ref{conclth2aa}. There exists a subsequence $\epsilon'_n\to 0$, and a negligible set $N\subset(-\rho,\rho)$, such that \eqref{claa1aa} and \eqref{claa2aa} hold for every $x_2\in(-\rho,\rho)\setminus N$.
Now, let $\bar x_{n}=(\bar t_{\epsilon'_n,a},x_2)$ be a zero of $v_{n}$ with fixed ordinate $x_2\in(-\rho,\rho)\setminus N$, and set $\tilde v_n(s):= v_n(\bar x_n+\epsilon'_n s)$.
Then $\tilde v_n$ converges in the $C^2_{\mathrm{loc}}(\R^2)$ sense to $\tilde V(s)=\sqrt{\mu(0,x_2)}\tanh(\sqrt{\mu(0,x_2)/2}(s\cdot \nu))$ for some unit vector $\nu\in\R^2$, and \eqref{claa1aa} implies that $\nu=(\pm 1,0)$, while \eqref{claa2aa} implies that
for $n$ large enough $v_n$ has a unique zero with fixed ordinate $x_2$. Thus, $\nu=(1,0)$ and \eqref{cvn00} is established.
\end{proof}
\end{proof}
\section{Introduction and main results }
\labell{sec:main-results}
\subsection{Introduction}
\label{sec:intro}
In this paper, we establish new restrictions on Hamiltonian
diffeomorphisms and Reeb flows which have only finitely many periodic
orbits. While these dynamical systems are rare, there are many
natural examples, such as irrational rotations of the two-dimensional
sphere and Reeb flows on irrational ellipsoids. Moreover, these
systems serve as important counterpoints to cases where one can prove
the existence of infinitely many periodic orbits, see for example
\cite{E,EH,FrHa,Hi,Gi:conley,GG:gap,GG:gaps,SZ,Vi}. Our main theorems
establish resonance relations for the mean indices of the periodic
orbits of these systems when the ambient manifolds meet some
additional requirements.
For a Hamiltonian diffeomorphism $\varphi$, the additional requirement
on the ambient symplectic manifold is that $N\geq n+1$, where $N$ is
the minimal Chern number and $n$ is half the dimension. A resonance
relation in this case is simply a linear relation, with integer
coefficients, between the mean indices $\Delta_i$, viewed as elements
of ${\mathbb R}/2N{\mathbb Z}$. The existence of these relations can be established
essentially by carrying out an argument from \cite{SZ} modulo $2N$,
cf.\ \cite{BCE}.
The specific form of the resonance relations established depends on
$\varphi$, although conjecturally the relation $\sum\Delta_i=0\mod 2N$
is always satisfied.
For Reeb flows, we show that certain sums of the reciprocal mean
indices are equal to the (positive/negative) mean Euler
characteristic, an invariant of the contact structure defined via
cylindrical contact homology, when it itself is defined. (This
relation generalizes the one discovered by Ekeland and Hofer \cite{E,EH} and
Viterbo \cite{Vi} for convex and star-shaped hypersurfaces in
${\mathbb R}^{2n}$, which served as the main motivation for the present work.)
One can view these resonance relations as new expressions for the mean
Euler characteristics written purely in the terms of the mean
indices. As is shown below in Example \ref{ex:U}, these invariants
can be used to distinguish some contact structures which are homotopic
but not diffeomorphic, such as those distinguished by Ustilovsky in
\cite{U} using cylindrical contact homology.
One forthcoming application of our results is the $C^\infty$-generic
existence of infinitely many periodic orbits for Hamiltonian
diffeomorphisms or Reeb flows established under the same hypotheses as
the resonance relations; see \cite{GG:generic}. For the Reeb flows,
these results generalize the $C^\infty$-generic existence of
infinitely many closed characteristics on convex hypersurfaces in
${\mathbb R}^{2n}$ (see \cite{E}) and the $C^\infty$-generic existence of
infinitely many closed geodesics (see \cite{Ra1,Ra2}).
\subsection{Resonances for Hamiltonian diffeomorphisms}
\label{sec:res-hd}
Let $(M^{2n},\omega)$ be a closed symplectic manifold, which throughout
this paper is assumed to be weakly monotone; see, e.g., \cite{HS} or
\cite{MS} for the definition. Denote by $N$ the minimal Chern number
of $(M, \omega)$, i.e., $N$ is the positive generator of the subgroup
$\left<c_1(TM),\pi_2(M)\right>$ of ${\mathbb Z}$. (When
$c_1(TM)\mid_{\pi_2(M)}=0$, we set
$N=\infty$.) Recall that $(M, \omega)$ is said to be \emph{rational} if the group
$\left<\omega,\pi_2(M)\right>\subset {\mathbb R}$ is discrete.
A Hamiltonian $H\colon {\mathbb R}/{\mathbb Z}\times M\to {\mathbb R}$, one-periodic in
time, determines a vector field $X_H$ on $M$ via Hamilton's equation
$i_{X_H} \omega = -dH$. Let $\varphi=\varphi_H$ be the Hamiltonian
diffeomorphism of $M$ given by the time-$1$ flow of $X_H$. Recall
that there is a one-to-one correspondence between $k$-periodic points
of $\varphi$ and $k$-periodic orbits of $H$. In this paper, we
restrict our attention exclusively to periodic points of $\varphi$
such that the corresponding periodic orbits of $H$ are
contractible. (One can show that contractibility of the orbit
$\varphi^t(x)$ of $H$ through a periodic point $x\in M$ is completely
determined by $x$ and $\varphi$ and is independent of the choice of
generating Hamiltonian $H$.) To such a periodic point $x$, we
associate the mean index $\Delta(x)$, which is viewed here as a point
in ${\mathbb R}/2N{\mathbb Z}$, and hence is independent of the choice of capping of the
orbit. The mean index measures the sum of rotations of the eigenvalues
on the unit circle of the linearized flow $d\varphi^t_H$ along $x$.
The reader is referred to \cite{SZ} for the definition of the mean
index $\Delta$; see also, e.g., \cite{GG:CMH} for a detailed
discussion. We only mention here that
$$
|\Delta(x) - \operatorname{\hat{\mu}_{\scriptscriptstyle{CZ}}}(x)| \leq n,
$$
where $\operatorname{\hat{\mu}_{\scriptscriptstyle{CZ}}}(x)$ is the Conley--Zehnder index of $x$, viewed as a
point in ${\mathbb R}/2N{\mathbb Z}$ and given a nonstandard normalization such that for
a critical point $x$ of a $C^2$-small Morse function one has
$\operatorname{\hat{\mu}_{\scriptscriptstyle{CZ}}}(x) = \operatorname{\mu_{\scriptscriptstyle{Morse}}}(x) -n \mod 2N$.
A Hamiltonian diffeomorphism $\varphi$ is said
to be \emph{perfect} if it has finitely many (contractible) periodic
points, and every periodic point of $\varphi$ is a fixed point. Let
$\Delta_1,\ldots, \Delta_m$ be the collection of the mean indices of the fixed
points of a perfect Hamiltonian diffeomorphism $\varphi$ with exactly
$m$ fixed points. A \emph{resonance} or \emph{resonance relation}
is a vector $\vec{a}=(a_1,\ldots,a_m)\in{\mathbb Z}^m$ such that
$$
a_1\Delta_1+\ldots+a_m\Delta_m=0\mod 2N.
$$
It is clear that resonances form a free abelian group
${\mathcal R}={\mathcal R}(\varphi)\subset {\mathbb Z}^m$.
\begin{Theorem}
\label{thm:res}
Assume that $n+1\leq N<\infty$.
\begin{itemize}
\item[(i)] Then ${\mathcal R}\neq 0$, i.e., the mean indices $\Delta_i$ satisfy at least
one non-trivial resonance relation.
\item[(ii)] Assume in addition that there is only one resonance, i.e.,
$\operatorname{rk}{\mathcal R}=1$, and let $\vec{a}=(a_1,\ldots,a_m)$ be a generator of ${\mathcal R}$
with at least one positive component. Then all components of $\vec{a}$ are
non-negative, i.e., $a_i\geq 0$ for all $i$, and
$$
\sum a_i \leq \frac{N}{N-n}.
$$
\item[(iii)] Furthermore, assume that $(M, \omega)$ is rational. Then assertion
(i) holds when only irrational mean indices are considered (i.e.,
the irrational mean indices satisfy a non-trivial resonance
relation) and assertion (ii) holds when only non-zero mean indices
are considered.
\end{itemize}
\end{Theorem}
We require $(M, \omega)$ to be weakly monotone here only for the sake of
simplicity: this condition can be eliminated by utilizing the
machinery of virtual cycles. Likewise, the hypothesis that $(M, \omega)$ is
rational in (iii) is purely technical and probably
unnecessary. However, the proof of (iii) relies on a result from
\cite{GG:gaps} which has so far been established only for rational
manifolds although one can expect it to hold without this requirement;
see \cite[Remark 1.19]{GG:gaps}. When $N=\infty$, i.e.,
$c_1(TM)\mid_{\pi_2(M)}=0$, perfect Hamiltonian diffeomorphisms probably
do not exist and the assertion of the theorem is void. For instance,
if $(M, \omega)$ is rational and $N=\infty$, every Hamiltonian
diffeomorphism has infinitely many periodic points; see
\cite{GG:gaps}.
We note that every $\Delta_i\in{\mathbb Q}$ (e.g., $\Delta_i=0$)
automatically gives rise to an infinite cyclic subgroup of
resonances. Thus, assertion (iii) is much more precise than (i) or (ii) in the
presence of rational or zero mean indices.
Finally we observe that the condition that $\varphi$ is perfect can be
relaxed and replaced by the assumption that $\varphi$ has finitely
many periodic points. Indeed, in this case, suitable iterations
$\varphi^k$ are perfect. Applying Theorem \ref{thm:res} to such a
$\varphi^k$, we then obtain resonance relations involving (appropriately
normalized) mean indices of all periodic points of $\varphi$.
\begin{Example}
\label{ex:CPn}
Let $\varphi$ be the Hamiltonian diffeomorphism of ${\mathbb C}{\mathbb P}^n$ generated
by a quadratic Hamiltonian
$H(z)=\pi\big(\lambda_0|z_0|^2+\ldots+\lambda_n|z_n|^2\big)$, where
the coefficients $\lambda_0,\ldots,\lambda_n$ are all distinct. (Here,
we have identified ${\mathbb C}{\mathbb P}^n$ with the quotient of the unit sphere in
${\mathbb C}^{n+1}$ by the diagonal $S^1$-action. Recall also that $N=n+1$ for ${\mathbb C}{\mathbb P}^n$.) Then, the
Hamiltonian diffeomorphism $\varphi=\varphi_H$ is perfect and has
exactly $n+1$ fixed points (the coordinate axes). The mean indices are
$$
\Delta_i=\sum_j\lambda_j-(n+1)\lambda_i,
$$
where now $i=0,\ldots,n$. Thus, $\sum \Delta_i=0$ and this is the
only resonance relation for a generic choice of the coefficients: the
image of the map $(\lambda_0,\ldots,\lambda_n)\mapsto
(\Delta_0,\ldots,\Delta_n)$ is the hyperplane $\sum \Delta_i=0$.
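Indeed, the relation $\sum\Delta_i=0$ follows by direct summation:
$$
\sum_{i=0}^n\Delta_i=\sum_{i=0}^n\Big(\sum_j\lambda_j-(n+1)\lambda_i\Big)
=(n+1)\sum_j\lambda_j-(n+1)\sum_i\lambda_i=0.
$$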
\end{Example}
More generally, we have
\begin{Example}
\label{ex:Ham-gp-action}
Suppose that $(M, \omega)$ is equipped with a Hamiltonian torus action
with isolated fixed points; see, e.g., \cite{GGK,MS:intro} for the
definition and further details. A generic element of the torus gives
rise to a perfect Hamiltonian diffeomorphism $\varphi$ of $(M,
\omega)$ whose fixed points are exactly the fixed points of the torus
action. One can show that in this case the mean indices again satisfy the resonance
relation $\sum \Delta_i=0$. (The authors are grateful to Yael Karshon
for a proof of this fact; \cite{Ka}.)
Examples of symplectic manifolds which admit such torus actions include the
majority of coadjoint orbits of compact Lie groups. One can also
construct new examples from a given one by equivariantly blowing-up
the symplectic manifold at its fixed points. The resulting symplectic
manifold always inherits a Hamiltonian torus action and, in many
instances, this action also has isolated fixed points.
\end{Example}
These examples suggest that, in the setting of Theorem \ref{thm:res},
the mean indices always satisfy the resonance relation $\sum
\Delta_i=0$. The next result can be viewed as a preliminary step
towards proving this conjecture.
\begin{Corollary}
\label{cor:CPn}
Let $\varphi$ be a perfect Hamiltonian diffeomorphism of ${\mathbb C}{\mathbb P}^n$ such
that there is only one resonance, i.e., $\operatorname{rk}{\mathcal R}=1$. Denote by $\vec{a}$ a
generator of ${\mathcal R}$ as described in statement (ii) of Theorem
\ref{thm:res}, and assume that $a_i\neq 0$ for all $i$. Then
$\varphi$ has exactly $n+1$ fixed points and $\vec{a}=(1,\ldots,1)$, i.e.,
the mean indices satisfy the resonance relation $\sum\Delta_i=0$.
\end{Corollary}
\begin{proof}
By the Arnold conjecture for ${\mathbb C}{\mathbb P}^n$, we have $m\geq n+1$, see
\cite{Fo,FW} and also \cite{F:c-l,Sc2}. By (ii), $a_i\neq 0$ means
that $a_i\geq 1$. Hence, by (ii) again, $m=n+1$ and $a_i=1$ for all
$i$ since $N=n+1$ and $\sum a_i\leq N/(N-n)=n+1$.
\end{proof}
Conjecturally, any Hamiltonian diffeomorphism of ${\mathbb C}{\mathbb P}^n$
with more than $n+1$ fixed points has infinitely many periodic
points. (For $n=1$ this fact is established in \cite{FrHa}.) Corollary
\ref{cor:CPn} implies that this is indeed the case, provided that
the mean indices satisfy exactly one resonance relation (i.e., $\operatorname{rk}
{\mathcal R}=1$) and all components of the resonance relation are non-zero. (We
emphasize that by Theorem \ref{thm:res}, $\operatorname{rk}{\mathcal R}\geq 1$.)
\begin{Remark}
The resonances considered here are not the only numerical constraints
on the fixed points of a perfect Hamiltonian diffeomorphism
$\varphi\colon M\to M$. Relations of a different type,
involving both the mean indices and action values, are established in
\cite{GG:gaps} when $(M, \omega)$ is either monotone or negative
monotone. For instance, it is proved there that the so-called augmented
action takes the same value on all periodic points of $\varphi\colon
{\mathbb C}{\mathbb P}^n\to {\mathbb C}{\mathbb P}^n$ whenever $\varphi$ has exactly $n+1$ periodic points.
Note also that a perfect Hamiltonian diffeomorphism need not be
associated with a Hamiltonian torus action as are the Hamiltonian
diffeomorphisms in Examples \ref{ex:CPn} and
\ref{ex:Ham-gp-action}. For instance, there exists a Hamiltonian
perturbation $\varphi$ of an irrational rotation of $S^2$ with
exactly three ergodic invariant measures: the Lebesgue measure and
the two measures corresponding to the fixed points of $\varphi$;
\cite{AK,FK}. Clearly, $\varphi$ is perfect and not conjugate to a
rotation.
\end{Remark}
\begin{Remark}
As is immediately clear from the proof, one can replace in Theorem
\ref{thm:res} the collection $\Delta_1,\ldots,\Delta_m$ of the mean
indices of the fixed points of $\varphi$ by the set of all distinct mean
indices. This is a refinement of the theorem, for an equality of
two mean indices is trivially a resonance relation. Note also that,
as a consequence of this refinement, all mean indices are distinct
whenever $\operatorname{rk} {\mathcal R}=1$.
\end{Remark}
\subsection{Resonances for Reeb flows} Let $(W^{2n-1},\xi)$ be a
closed contact manifold such that the \emph{cylindrical contact homology}
$\operatorname{HC}_*(W,\alpha)$ is defined. More specifically, we require $(W,\xi)$ to
admit a contact form $\alpha$ such that
\begin{itemize}
\item[(CF1)] all periodic orbits of the Reeb flow of $\alpha$ are
non-degenerate, and
\item[(CF2)] the Reeb flow of $\alpha$ has no contractible periodic orbits $x$ with
$|x|=\pm 1$ or $0$.
\end{itemize}
Here, $|x|=\operatorname{\mu_{\scriptscriptstyle{CZ}}}(x)+n-3$, where $\operatorname{\mu_{\scriptscriptstyle{CZ}}}(x)$ stands for the
Conley--Zehnder index of $x$ (with its standard normalization). For
the sake of simplicity, we also assume that
$c_1(\xi)=0$. Then $\operatorname{HC}_*(W,\xi)$ is the
homology of a complex $\operatorname{CC}_*(W,\alpha)$ which is generated (over a
fixed ground field, say, ${\mathbb Z}_2$) by certain periodic orbits of the
Reeb flow, and is graded via $|\cdot|$. To be more precise, the
generators of $\operatorname{CC}_*(W,\alpha)$ are all iterations of good Reeb orbits
and odd iterations of bad Reeb orbits (see the definitions below).
The homology $\operatorname{HC}_*(W,\xi)$ is independent of $\alpha$ as long as
$\alpha$ meets requirements (CF1) and (CF2). The exact nature of the
differential on $\operatorname{CC}_*(W,\alpha)$ is inessential for our
considerations. We refer the reader to, for instance, \cite{Bo,BO,El}
and the references therein for a more detailed discussion of contact
homology.
Furthermore, assume that
\begin{itemize}
\item[(CH)] there are two integers $l_+$ and $l_-$, such that the space
$\operatorname{HC}_l(W,\xi)$ is finite-dimensional for
$l\geq l_+$ and $l\leq l_-$.
\end{itemize}
In the examples considered here, the contact homology is finite
dimensional in all degrees and this condition is automatically
met. By analogy with the constructions from \cite{EH,Vi}, we set
\begin{equation}
\label{eq:Euler1}
\chi^\pm(W,\xi)=\lim_{N\to\infty}\frac{1}{N}
\sum_{l=l_\pm}^N(-1)^l\dim \operatorname{HC}_{\pm l}(W,\xi),
\end{equation}
provided that the limits exist.
Clearly, when $\operatorname{HC}_l(W,\xi)$ is finite-dimensional for all $l$, we have
$$
\frac{\chi^+(W,\xi)+\chi^-(W,\xi)}{2}=\chi(W,\xi)
:=
\lim_{N\to\infty}\frac{1}{2N+1}
\sum_{l=-N}^N(-1)^l\dim \operatorname{HC}_{l}(W,\xi).
$$
We call $\chi^\pm(W,\xi)$ the \emph{positive/negative mean Euler
characteristic} of $\xi$. Likewise, we call $\chi(W,\xi)$ the mean
Euler characteristic of $\xi$. (This invariant is also considered in
\cite[Section 11.1.3]{VK:thesis}.)
In what follows, we denote by $x^k$ the $k$th iteration of a periodic
orbit $x$ of the Reeb flow of $\alpha$ on $W$. Recall that a simple
periodic orbit $x$ is called \emph{bad} if the linearized Poincar\'e
return map along $x$ has an odd number of real eigenvalues strictly
smaller than $-1$. Otherwise, the orbit is said to be
\emph{good}. (This terminology differs slightly from the standard
usage, cf.\ \cite{Bo,BO}.) When the orbit $x$ is good, the parity of
the Conley--Zehnder indices $\operatorname{\mu_{\scriptscriptstyle{CZ}}}(x^k)$ is independent of $k$; if $x$
is bad, then the parity of $\operatorname{\mu_{\scriptscriptstyle{CZ}}}(x^k)$ depends on the parity of $k$.
To proceed, we now assume that
\begin{itemize}
\item[(CF3)] the Reeb flow of $\alpha$ has finitely many simple periodic orbits.
\end{itemize}
In contrast with (CF1) and (CH) and even (CF2), this is a very strong
restriction on $\alpha$. Denote the good simple periodic orbits of the
Reeb flow by $x_i$ and the bad simple periodic orbits by $y_i$. Then,
$\operatorname{CC}_*(W,\alpha)$ is generated by the $x_i^k$, for all $k$, together
with the $y_i^k$ for $k$ odd. Whenever (CF3) holds, condition (CH) is
automatically satisfied with $l_-=-2$ and $l_+=2n-4$. Moreover, in
this case the spaces $\operatorname{CC}_l(W,\alpha)$ are finite-dimensional. (This
fact can, for instance, be extracted from the proof of Theorem
\ref{thm:reeb}; see \eqref{eq:index1} and \eqref{eq:index2}.)
Likewise, all spaces $\operatorname{CC}_l(W,\alpha)$ (and hence $\operatorname{HC}_l(W,\xi)$) are
finite-dimensional, provided that all of the orbits $x_i$ and $y_i$
have non-zero mean indices. We denote the mean index of an orbit $x$
by $\Delta(x)$ and set
$\sigma(x)=(-1)^{|x|}=-(-1)^n(-1)^{\operatorname{\mu_{\scriptscriptstyle{CZ}}}(x)}$. In other words,
$\sigma(x)$ is, up to the factor $-(-1)^n$, the topological index of
the orbit $x$ or, more precisely, of the Poincar\'e return map of
$x$.
\begin{Theorem}
\label{thm:reeb}
Assume that $\alpha$ satisfies conditions (CF1)--(CF3). Then the limits in
\eqref{eq:Euler1} exist and
\begin{equation}
\label{eq:Euler2}
{\sum}^{\pm} \frac{\sigma(x_i)}{\Delta(x_i)}
+\frac{1}{2}{\sum}^{\pm} \frac{\sigma(y_i)}{\Delta(y_i)}=
\chi^{\pm}(W,\xi),
\end{equation}
where ${\sum}^{+}$ (respectively, ${\sum}^{-}$) stands for the sum over
all orbits with positive (respectively, negative) mean index.
\end{Theorem}
This theorem will be proved in Section \ref{sec:contact}. Here we only
mention that the specific nature of the differential on the complex
$\operatorname{CC}_*(W,\alpha)$ plays no role in the argument. Also note that a similar
result holds when the homotopy classes of orbits are restricted to any
set of free homotopy classes closed under iterations, provided that
(CF1)--(CF3) hold for such orbits. For instance, \eqref{eq:Euler2}
holds when only contractible periodic orbits are taken into account in
the calculation of the left-hand side and the definition of
$\chi^\pm$. Also it is worth pointing out that for non-contractible
orbits the definitions of the Conley--Zehnder and mean indices involve
some additional choices (see, e.g., \cite{Bo}) which affect both the
right- and the left-hand side of~\eqref{eq:Euler2}.
\begin{Example}
\label{ex:standard}
Let $\xi_0$ be the standard contact structure on
$S^{2n-1}$. Then, as is easy to see, $\chi^-(S^{2n-1},\xi_0)=0$ and
$\chi^+(S^{2n-1},\xi_0)=1/2$. In this case, the resonance relations
\eqref{eq:Euler2} were proved in \cite{Vi}. (The case of a convex
hypersurface in ${\mathbb R}^{2n}$ was originally considered in \cite{E,EH}.)
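These values also follow directly from definition \eqref{eq:Euler1}, using the standard computation of the cylindrical contact homology of the round sphere (carried out, for instance, via an irrational ellipsoid): $\dim\operatorname{HC}_l(S^{2n-1},\xi_0)=1$ for every even $l\geq 2n-2$ and $\dim\operatorname{HC}_l(S^{2n-1},\xi_0)=0$ otherwise. Hence
$$
\chi^+(S^{2n-1},\xi_0)=\lim_{N\to\infty}\frac{1}{N}\,\#\{l \text{ even}\mid 2n-2\leq l\leq N\}=\frac{1}{2},
$$
while $\chi^-(S^{2n-1},\xi_0)=0$, for the homology vanishes in negative degrees.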
\end{Example}
By definition \eqref{eq:Euler1}, the mean Euler characteristics
$\chi^\pm(W, \xi)$ are invariants of the contact structure
$\xi$. (Strictly speaking this is true only when $(W,\xi)$ is equipped
with some extra data or $W$ is simply connected.) Theorem
\ref{thm:reeb} implies that, whenever there is a contact form $\alpha$
for $\xi$ which satisfies conditions (CF1)--(CF3), these invariants
can, in principle, be calculated by purely elementary means (without
first calculating the contact homology) via the mean indices of closed
Reeb orbits. The following example shows that the mean Euler
characteristics can distinguish some non-diffeomorphic contact
structures within the same homotopy class.
\begin{Example}
\label{ex:U}
In \cite{U}, Ustilovsky considers a family of contact structures
$\xi_p$ on $S^{2n-1}$ for odd $n$ and positive $p\equiv \pm 1\mod 8$.
For a fixed $n$, the contact structures $\xi_p$ fall within a finite
number of homotopy classes, including the class of the standard
structure $\xi_0$. By computing $\operatorname{HC}_*(S^{2n-1},\xi_p)$, Ustilovsky
proves that the structures $\xi_p$ are mutually non-diffeomorphic, and
that none of them are diffeomorphic to $\xi_0$.
It follows from \cite{U}, that $\xi_p$ can be given by a contact
form $\alpha_p$ satisfying conditions (CF1)--(CF3). Furthermore, it is
not hard to show (see below or \cite{VK:thesis}) that
\begin{equation}
\label{eq:ust}
\chi^+(S^{2n-1},\xi_p)=\frac{1}{2}
\left(
\frac{p(n-1)+1}{p(n-2)+2}
\right)
\end{equation}
and $\chi^-(S^{2n-1},\xi_p)=0$. The right-hand side of \eqref{eq:ust}
is a strictly increasing function of $p>0$. Hence, the positive mean
Euler characteristic distinguishes the structures $\xi_p$ with $p>0$.
Note also that $\chi^+(\xi_p)>\chi^+(S^{2n-1},\xi_0)=1/2$ when $p>1$
and $\chi^+(\xi_1)=1/2$. In particular, $\chi^+$ distinguishes $\xi_p$
with $p>1$ from the standard structure $\xi_0$.
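The monotonicity asserted above is an elementary calculus check: writing $f(p)$ for the fraction in \eqref{eq:ust}, we have
$$
f'(p)=\frac{(n-1)\big(p(n-2)+2\big)-(n-2)\big(p(n-1)+1\big)}{\big(p(n-2)+2\big)^2}
=\frac{n}{\big(p(n-2)+2\big)^2}>0,
$$
and $f(1)=n/n=1$, which also recovers $\chi^+(\xi_1)=1/2$.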
Formula \eqref{eq:ust} can be established in two ways, both relying on
\cite{U}. The first way is to use the contact form $\alpha_p$
constructed in \cite{U}. The indices of periodic orbits of $\alpha_p$
are determined in \cite{U} and the mean indices can be found in a
similar fashion or obtained using the asymptotic formula
$\Delta(x)=\lim_{k\to\infty} \operatorname{\mu_{\scriptscriptstyle{CZ}}}(x^k)/k$; see \cite{SZ}. Then
\eqref{eq:Euler2} is applied to calculate
$\chi^+(S^{2n-1},\xi_p)$. (Note that this calculation becomes even
simpler when the Morse--Bott version of \eqref{eq:Euler2} is used,
reducing the left-hand side of \eqref{eq:Euler2} to just one
term for a suitable choice of contact form, see \cite{Es}.)
Alternatively one can use the definition \eqref{eq:Euler1} of $\chi^+$
and the calculation of $\dim \operatorname{HC}_{*}(S^{2n-1},\xi_p)$ from \cite{U};
see \cite[Section 11.1.3]{VK:thesis}.
\end{Example}
\begin{Remark}
\label{rmk:filling}
A different version of contact homology, the linearized
contact homology, is defined when $(W, \xi)$ is equipped with a
symplectic filling $(M, \omega)$. The chain group for this homology
is still described via the closed orbits of a Reeb flow for
$\xi$. In particular, it is still generated by all iterations of
good orbits and the odd iterations of bad ones. The differential is
defined via the augmentation, associated with $(M, \omega)$, on the
full contact homology differential algebra; see, e.g., \cite{BO,El}.
Thus, the linearized contact homology, in general, depends on $(M,
\omega)$.
A key point in this construction is that one does not need to assume
condition (CF2) in order to define the linearized contact homology.
(When (CF2) is satisfied, the linearized contact homology coincides
with the cylindrical contact homology.) Hence, for fillable contact
manifolds, Theorem \ref{thm:reeb} holds without this assumption.
This follows immediately from the fact that our proof of the theorem
makes no use of the specific nature of the differential. One still
requires the contact form $\alpha$ to satisfy (CF1) and (CF3),
and the filling $(M, \omega)$ is assumed to be such that
$\left<\omega,\pi_2(M)\right>=0$ and $c_1(TM)=0$.
\end{Remark}
\begin{Remark} An argument similar to the proof of Theorem
\ref{thm:reeb} also establishes the following ``asymptotic Morse
inequalities''
$$
{\sum}^{\pm}\frac{1}{\Delta(x_i)}
+\frac{1}{2}{\sum}^{\pm}\frac{1}{\Delta(y_i)}
\geq \limsup_{N\to\infty}\frac{1}{N}\sum_{l=l_\pm}^N\dim \operatorname{HC}_{\pm l}(W,\xi),
$$
provided that $\alpha$ satisfies (CF1)--(CF3) or in the setting of
Remark \ref{rmk:filling}. A similar inequality (as well as some other
relations between mean indices and actions) is proved in \cite{EH} for
convex hypersurfaces in ${\mathbb R}^{2n}$.
\end{Remark}
\begin{Remark}
Finally note that the quite restrictive requirement that the Reeb
flow is non-degenerate and has finitely many periodic orbits can be
relaxed in a variety of ways. For instance, the Morse--Bott version
of Theorem \ref{thm:reeb} is proved in \cite{Es}, generalizing the
resonance relations for geodesic flows established in \cite{Ra1}.
\end{Remark}
\subsection{Acknowledgments} The authors are grateful to Yasha
Eliashberg, Jacqui Espina, Ba\c sak G\"urel, Yael Karshon, and Anatole Katok for
useful discussions. The authors also wish to thank the referee for her/his
comments and remarks.
\section{Resonances in the Hamiltonian case}
\subsection{Resonances and subgroups of the torus}
\label{sec:res-gen}
Consider a closed subgroup $\Gamma$ of $ {\mathbb T}^m={\mathbb R}^m/{\mathbb Z}^m$ which is
topologically generated by an element $\bar{\Delta}=(\bar{\Delta}_1,\ldots,\bar{\Delta}_m)\in
{\mathbb T}^m$. In other words, $\Gamma$ is the closure of the set $\{k\bar{\Delta}\mid
k\in{\mathbb N}\}$, which we will call the orbit of $\bar{\Delta}$. Note that $\Gamma$
is a Lie group since it is a closed subgroup of a Lie group. (We
refer the reader to, e.g., \cite{DK,Ki} for the results on Lie groups and duality
used in this section.) Moreover, the connected component of the
identity $\Gamma_0$ in $\Gamma$ is a torus, for $\Gamma_0$ is compact,
connected and abelian. Denote by ${\mathcal R}$ the group of characters
${\mathbb T}^m\to S^1$ which vanish on $\Gamma$ or equivalently on $\bar{\Delta}$. Thus,
${\mathcal R}$ is a subgroup of the dual group ${\mathbb Z}^m$ of ${\mathbb T}^m$. We can think
of ${\mathcal R}$ as the set of linear equations determining $\Gamma$. In other
words, $\vec{a}$ belongs to ${\mathcal R}$ if and only if
$$
a_1\bar{\Delta}_1+\ldots + a_m\bar{\Delta}_m=0 \mod 1 .
$$
We will refer to ${\mathcal R}$ as the group of resonances associated to
$\Gamma$. Clearly, $\Gamma$ is completely determined by ${\mathcal R}$.
When the role of $\bar{\Delta}$ or $\Gamma$ needs to be emphasized,
we will use the notation $\Gamma(\bar{\Delta})$ and ${\mathcal R}(\Gamma)$ or
${\mathcal R}(\bar{\Delta})$, etc. Furthermore, we denote by ${\mathcal R}_0\supset {\mathcal R}$ the group of
resonances associated to $\Gamma_0$.
We will need the following properties of $\Gamma$ and ${\mathcal R}$:
\begin{itemize}
\item $\operatorname{codim} \Gamma=\operatorname{rk} {\mathcal R}$;
\item $\Gamma\subset \Gamma'$ iff ${\mathcal R}(\Gamma)\supset {\mathcal R}(\Gamma')$;
\item $\Gamma/\Gamma_0$ and ${\mathcal R}_0/{\mathcal R}$ are finite cyclic groups dual to each other.
\end{itemize}
Here the second assertion is obvious. To prove the first and the last
ones, first note that $\Gamma/\Gamma_0$ is finite and cyclic since
$\Gamma$ is compact and has a dense cyclic subgroup. Further note that
${\mathcal R}$ can be identified with the dual group of ${\mathbb T}^m/\Gamma$ and that
the first assertion is clear when $\Gamma=\Gamma_0$, i.e., $\Gamma$ is
a torus. Dualizing the exact sequence $0\to
\Gamma/\Gamma_0\to {\mathbb T}^m/\Gamma_0\to{\mathbb T}^m/\Gamma \to 0$ we obtain the
exact sequence $0\to{\mathcal R}\to{\mathcal R}_0\to {\mathcal R}_0/{\mathcal R}\to 0$ (see, e.g.,
\cite[Chapter 12]{Ki}) and the last assertion follows. (The dual of a
finite cyclic group, say ${\mathbb Z}_k$, is isomorphic to ${\mathbb Z}_k$.) It also
follows that $\operatorname{rk} {\mathcal R}=\operatorname{rk} {\mathcal R}_0=\operatorname{codim} \Gamma_0=\operatorname{codim} \Gamma$.
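As a simple illustration of these notions, take $m=2$ and $\bar{\Delta}=(\alpha,\alpha)$ with $\alpha$ irrational. The orbit of $\bar{\Delta}$ is then dense in the diagonal circle, so $\Gamma=\Gamma_0=\{(t,t)\mid t\in S^1\}$, and a vector $\vec{a}=(a_1,a_2)\in{\mathbb Z}^2$ is a resonance precisely when
$$
(a_1+a_2)\alpha=0\mod 1,\quad\text{i.e.,}\quad a_1+a_2=0.
$$
Hence ${\mathcal R}={\mathbb Z}(1,-1)$ and $\operatorname{rk}{\mathcal R}=1=\operatorname{codim}\Gamma$, in agreement with the first property above.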
\subsection{Proof of Theorem \ref{thm:res}} Let $\Delta = (\Delta_1,
\ldots, \Delta_m)$ where the components $\Delta_i \in {\mathbb R}/2N{\mathbb Z}$ are the mean
indices of the $m$ periodic points of the perfect Hamiltonian
diffeomorphism $\varphi$. Set $\bar{\Delta}=\Delta/2N$. Then, $\bar{\Delta}$ belongs to
${\mathbb T}^m$ and we have ${\mathcal R}(\varphi)={\mathcal R}(\bar{\Delta})$. Recalling that $n<N$, we
define $\Pi$ to be the cube $(n/N,1)^m$ in ${\mathbb T}^m$. In other words, $\Pi$
consists of points $\theta=(\theta_1,\ldots,\theta_m)\in{\mathbb T}^m$ such
that $\theta_i$ is in the arc $(n/N,1)$ for all $i$. We will refer to
$\Pi$ as the \emph{prohibited region} of ${\mathbb T}^m$.
\subsubsection{Proof of (i)} By a standard argument, for
every $k\in {\mathbb N}$ at least one component of $k\bar{\Delta}$ is in the arc
$[0,n/N]$. (See \cite{SZ} or, e.g., \cite{GG:gaps}; we will also
briefly recall the argument in the proof of (iii) below.) In other
words, none of the points of the orbit $\{k\bar{\Delta}\mid k\in{\mathbb N}\}$ lies in
the prohibited region $\Pi$. Since $\Pi$ is open, we conclude that
$\Gamma\cap\Pi=\emptyset$. Hence, $\operatorname{codim} \Gamma>0$ and ${\mathcal R}\neq 0$.
\subsubsection{Proof of (ii)} Here we assume that $\operatorname{rk}{\mathcal R}=1$. Let
$\vec{a}$ be a generator of ${\mathcal R}$. Then,
$\Gamma$ is given by the equation
$$
\vec{a}\cdot\theta:=\sum_i a_i \theta_i=0 \text{ in } {\mathbb R}/{\mathbb Z},
$$
where $\theta =(\theta_1,\ldots,\theta_m)\in{\mathbb T}^m$. Note that $\Pi$ can
also be viewed as the product of the arcs $(-1+n/N,0)$ in ${\mathbb T}^m$. Thus
the intersection of $\Pi$ with a neighborhood of $(0, \ldots, 0) \in
{\mathbb T}^m$ fills in the (open) portion of the negative quadrant in that
neighborhood. Since $\Gamma\cap \Pi=\emptyset$, all non-zero
components $a_i$ must have the same sign. Hence, replacing $\vec{a}$ by $-\vec{a}$ if
necessary, we may assume that $a_i\geq 0$ for all $i$.
Let $L=\{ (t,\ldots,t)\mid t\in S^1\}$ be the ``diagonal''
one-parameter subgroup of ${\mathbb T}^m$. The point of $L$ with $t=-1/\sum
a_i$ lies in $\Gamma$. Hence, this point must be outside $\Pi$ and so
$|t|\geq 1-n/N$. It follows that $\sum a_i\leq N/(N-n)$.
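Spelling out the last step: since $\Gamma\cap\Pi=\emptyset$ and the representative $t=-1/\sum a_i$ lies in $[-1,0)$, it cannot belong to the arc $(-1+n/N,\,0)$, so that $t\leq -1+n/N$. Equivalently,
$$
\frac{1}{\sum a_i}\,\geq\, 1-\frac{n}{N}=\frac{N-n}{N},
\qquad\text{i.e.}\qquad
\sum a_i\,\leq\,\frac{N}{N-n}.
$$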
\subsubsection{Proof of (iii)} The proofs of (i) and (ii) above are
based on the observation that for every $k$, there exists a capped
$k$-periodic orbit $\bar{y}$ such that the local Floer homology of
$\varphi^k$ at $\bar{y}$ is non-zero in degree $n$:
$\operatorname{HF}_n(\varphi^k,\bar{y})\neq 0$. Then $\Delta(\bar{y}) \in [0,\,2n]$. (See
\cite{Gi:conley,GG:gap,GG:gaps} for the proofs of these facts; however, the
argument essentially goes back to \cite{SZ}. Note also that here we
use the grading of the Floer homology by $\operatorname{\hat{\mu}_{\scriptscriptstyle{CZ}}}$,
i.e., the fundamental class has degree $n$.) The orbit $y$ is the
$k$th iteration of some orbit $x_i$, and hence $\Delta(y)=k\Delta_i$ in
${\mathbb R} / 2N {\mathbb Z}$. We claim that necessarily $\Delta_i\neq 0$, provided that
$M$ is rational and, as before, $N\geq n+1$. As a consequence, the
orbits with $\Delta_i=0$ can be discarded in the proofs of (i) and
(ii).
To show that $\Delta_i\neq 0$, we argue by contradiction. Assume
the contrary: $\Delta_i=0$. Then $\Delta(\bar{y})=0\mod 2N$ and, in fact,
$\Delta(\bar{y}) =0$, since we also have $\Delta(\bar{y}) \in [0,\,2n]$ and
$N\geq n+1$. The condition that $\Delta(\bar{y}) =0$ and
$\operatorname{HF}_n(\varphi^k,\bar{y})\neq 0$ is equivalent to $\bar{y}$ being a
symplectically degenerate maximum of $\varphi^k$; see \cite{GG:gap,GG:gaps}. By
\cite[Theorem 1.18]{GG:gaps}, a Hamiltonian diffeomorphism with
symplectically degenerate maximum necessarily has infinitely many
periodic points whenever $M$ is rational. This contradicts the
assumption that $\varphi$ is perfect.
Thus, we have proved that (i) and (ii) hold with only non-zero mean
indices (in ${\mathbb R} / 2N {\mathbb Z}$) taken into account. To finish the proof of
(iii), it suffices to note that replacing $\varphi$ by $\varphi^k$, for
a suitably chosen $k$, we can make every rational mean index
zero. Since every resonance relation for $\varphi^k$ is also a
resonance relation for $\varphi$, we conclude that the irrational mean
indices of $\varphi$ satisfy a resonance relation.
\begin{Remark} The requirement that $M$ be rational enters the proof
of (iii) only at the last point where \cite[Theorem 1.18]{GG:gaps}
is utilized. The role of this requirement in the proof of this
theorem is purely technical and it is likely that the requirement
can be eliminated. Note also that we do not assert that (ii) holds
when only irrational mean indices are considered. However, an
examination of the above argument shows that the following is
true. Assume that the resonance group for the irrational mean
indices has rank one for a perfect Hamiltonian diffeomorphism
$\varphi$. Then these mean indices satisfy a non-trivial resonance
relation of the form $r\vec{b}$, where $r$ is a natural number, $b_i\geq
0$ for all $i$, and $\sum b_i\leq N/(N-n)$.
\end{Remark}
\subsection{Perfect Hamiltonian flows on ${\mathbb C}{\mathbb P}^n$}
In this section, we state (without proof) another result asserting,
roughly speaking, that $\sum \Delta_i=0$ for perfect flows on ${\mathbb C}{\mathbb P}^n$
satisfying some additional, apparently generic, requirements.
Consider an autonomous, Morse Hamiltonian $H$ on ${\mathbb C}{\mathbb P}^n$ with
exactly $n+1$ critical points $x_0,\ldots, x_n$. Let us call
$\tau\in{\mathbb R}$, $\tau> 0$, a \emph{critical} period if at least one of the
critical points of $H$ is degenerate when viewed as a $\tau$-periodic
orbit of $H$ or equivalently as a fixed point of $\varphi^\tau_H$. We
denote the collection of critical periods by $C_H\subset {\mathbb R}$ and call
$t\in (0,\infty)\smallsetminus C_H$ \emph{regular}. Assume furthermore that
for every regular $t>0$ the points $x_0,\ldots, x_n$ are the only
fixed points of $\varphi_H^t$ and that for every critical period
$\tau>0$ at least one of the points $x_i$ is non-degenerate as a
fixed point of $\varphi_H^{\tau}$. \emph{Then $\sum \Delta(x_i,t)=0$ for
any regular $t>0$, where $\Delta(x_i,t)$ is the mean index of $\varphi^t_H$
at $x_i$ equipped with trivial capping.}
In particular, $\sum\Delta_i=0$ in the setting of
Theorem \ref{thm:res} with $\varphi=\varphi^t_H$. (To ensure that
$\varphi$ satisfies the hypotheses of the theorem it suffices to
require that $kt\not\in C_H$ for all $k\in{\mathbb N}$.) A quadratic
Hamiltonian on ${\mathbb C}{\mathbb P}^n$ with $n\geq 2$ from Example \ref{ex:CPn}
meets the above conditions for generic eigenvalues $\lambda_i$ or,
more precisely, if and only if $H$ generates a Hamiltonian action of a
torus of dimension greater than one.
The proof of this result, to be detailed elsewhere, goes beyond the
scope of the present paper. The argument is conceptually similar to the proof of
\cite[Theorem 1.12]{GG:gaps} but is technically more involved, for it
relies on a more delicate version of Ljusternik--Schnirelman theory
than the one considered in that paper.
\section{Reeb flows: the proof of Theorem \ref{thm:reeb}}
\label{sec:contact}
Our goal in this section is to prove Theorem \ref{thm:reeb}. We focus
on establishing the result for $\chi^+(W,\xi)$. The case of
$\chi^-(W,\xi)$ can be handled in a similar fashion.
First recall that for every periodic orbit $x$ of the Reeb flow, we have
\begin{equation}
\label{eq:index1}
|\operatorname{\mu_{\scriptscriptstyle{CZ}}}(x^k)-k\Delta(x)|<n-1
\end{equation}
(see, e.g., \cite{SZ}), and hence
\begin{equation}
\label{eq:index2}
-2<|x^k|-k\Delta(x)<2n-4.
\end{equation}
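Indeed, with the grading convention $|x^k|=\operatorname{\mu_{\scriptscriptstyle{CZ}}}(x^k)+n-3$ (the shift under which \eqref{eq:index1} and \eqref{eq:index2} are compatible), \eqref{eq:index2} follows from \eqref{eq:index1} term by term:
$$
|x^k|-k\Delta(x)=\big(\operatorname{\mu_{\scriptscriptstyle{CZ}}}(x^k)-k\Delta(x)\big)+n-3
\in\big(-(n-1)+n-3,\ (n-1)+n-3\big)=(-2,\,2n-4).
$$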
In particular, it follows from \eqref{eq:index2} that condition (CF3)
implies condition (CH) with $l_+=2n-4$ and $l_-=-2$. Moreover, the
dimension of $\operatorname{CC}_l(W,\alpha)$ with $l\geq l_+$ or $l\leq l_-$ is
bounded from above by a constant independent of $l$.
To simplify the notation, let us set
$C_l:=\operatorname{CC}_l(W,\alpha)$. Denote by $C_*^{(N)}$ the complex $C_*$ truncated
from below at $l_+$ and from above at $N>l_+$. In other words,
$$
C_l^{(N)}=
\begin{cases}
C_l &\text{when $l_+\leq l\leq N$,}\\
0 &\text{otherwise.}
\end{cases}
$$
The complex $C_*^{(N)}$ is generated by the iterations $x_i^k$ and
$y_i^k$ (with odd $k$) such that $|x_i^k|$ and $|y_i^k|$ are in the
range $[l_+,\,N]$. Note that by \eqref{eq:index2} this can only happen
when $\Delta(x_i)>0$ and $\Delta(y_i)>0$. By \eqref{eq:index2} again,
an orbit $x_i^k$ or $y_i^k$ with positive mean index $\Delta$ is in
$C_*^{(N)}$ for $k$ ranging from some constant (depending on the
orbit, but not on $N$) to roughly $N/\Delta$, up to a constant
independent of $N$. Furthermore, the parity of $|x_i^k|$ and
$|y_i^k|$ (odd $k$) is independent of $k$, i.e.,
$\sigma(x_i^k)=\sigma(x_i)$ and $\sigma(y_i^k)=\sigma(y_i)$. Thus, the
contribution of the iterations of $x_i$ to the Euler characteristic
$$
\chi\big(C_*^{(N)}\big):=\sum(-1)^l\dim C_l^{(N)}=\sum_{l=l_+}^N(-1)^l\dim C_l
$$
is $\sigma(x_i)N/\Delta(x_i)+ O(1)$ as $N\to\infty$. Likewise, the
contribution of the iterations of $y_i$ is $\sigma(y_i)N/(2\Delta(y_i))+
O(1)$, since $k$ assumes only odd values in this case. Summing up over
all $x_i$ and $y_i$ with positive mean index, we have
$$
\chi\big(C_*^{(N)}\big)=N\Bigg(
{\sum}^+\frac{\sigma(x_i)}{\Delta(x_i)}
+\frac{1}{2}{\sum}^+\frac{\sigma(y_i)}{\Delta(y_i)}
\Bigg)
+ O(1),
$$
and hence
$$
\lim_{N\to\infty}\frac{\chi\big(C_*^{(N)}\big)}{N}
=
{\sum}^+\frac{\sigma(x_i)}{\Delta(x_i)}
+\frac{1}{2}{\sum}^+\frac{\sigma(y_i)}{\Delta(y_i)}.
$$
To finish the proof it remains to show that
\begin{equation}
\label{eq:chi}
\chi^+(W,\xi)=\lim_{N\to\infty} \chi\big(C_*^{(N)}\big)/N,
\end{equation}
which is nearly obvious. Indeed,
by the very definition of $C_*^{(N)}$, we have
$H_l\big(C_*^{(N)}\big)=\operatorname{HC}_l(W,\xi)$ when $l_+<l<N$. Furthermore,
$|\dim H_N\big(C_*^{(N)}\big)-\dim\operatorname{HC}_N(W,\xi)|=O(1)$ since $\dim C_N=O(1)$.
Hence,
$$
\chi\big(C_*^{(N)}\big)=\sum_l(-1)^l\dim H_l\big(C_*^{(N)}\big)
=\sum_{l=l_+}^N (-1)^l\dim\operatorname{HC}_l(W,\xi)+O(1)
$$
and \eqref{eq:chi} follows. This completes the proof of the theorem.
\section{Introduction}
Consider on $\plainL2( \mathbb R^{3N})$ the Schr\"odinger operator
\begin{align}\label{eq:ham}
H = \sum_{k=1}^N \bigg(-\Delta_k - \frac{Z}{|x_k|}
\bigg)
+ \sum_{1\le j< k\le N} \frac{1}{|x_j-x_k|},
\end{align}
describing an atom with $N$ charged particles
with coordinates $\bx = (x_1, x_2, \dots, x_N),\,x_k\in \mathbb R^3$, $k= 1, 2, \dots, N$,
and a nucleus with charge $Z>0$. The notation $\Delta_k$ is used for
the Laplacian w.r.t. the variable $x_k$.
The operator $H$ acts on the Hilbert space $\plainL2( \mathbb R^{3N})$
and it is self-adjoint on the domain
$D(H) =\plainH2( \mathbb R^{3N})$, since the potential in \eqref{eq:ham}
is an infinitesimal perturbation
relative to the unperturbed operator $-\Delta = - \sum_k \Delta_k$,
see e.g. \cite[Theorem X.16]{ReedSimon2}.
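The infinitesimal bound referred to here is Kato's classical inequality: denoting by $V$ the full potential in \eqref{eq:ham}, for every $\varepsilon>0$ there exists a constant $C_\varepsilon>0$ such that
$$
\|Vu\|_{\plainL2( \mathbb R^{3N})}\le \varepsilon \|\Delta u\|_{\plainL2( \mathbb R^{3N})}
+C_\varepsilon\|u\|_{\plainL2( \mathbb R^{3N})},\qquad u\in\plainH2( \mathbb R^{3N}),
$$
so the self-adjointness of $H$ on $\plainH2( \mathbb R^{3N})$ follows from the Kato--Rellich theorem.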
Let $\psi = \psi(\bx)$
be an eigenfunction of the operator $H$ with an eigenvalue $E\in \mathbb R$, i.e. $\psi\in D(H)$ and
\begin{align*}
(H-E)\psi = 0.
\end{align*}
For each $j=1, \dots, N$, we represent
\begin{align*}
\bx = (\hat\bx_j, x_j), \quad \textup{where}\
\hat\bx_j = (x_1, \dots, x_{j-1}, x_{j+1},\dots, x_N),
\end{align*}
with obvious modifications if $j=1$ or $j=N$.
The \textit{one-particle density matrix} is defined as the function
\begin{align}\label{eq:den}
\gamma(x, y) = \sum_{j=1}^N\int\limits_{ \mathbb R^{3N-3}} \overline{\psi(\hat\bx_j, x)} \psi(\hat\bx_j, y)\
d\hat\bx_j,\quad (x,y)\in \mathbb R^3\times \mathbb R^3.
\end{align}
Introduce also the function
\begin{align}\label{eq:tau}
\tau(x, y) =
\sum_{j=1}^N\int\limits_{ \mathbb R^{3N-3}} \overline{\nabla_x\psi(\hat\bx_j, x)}
\cdot\nabla_y \psi(\hat\bx_j, y)\
d\hat\bx_j,
\end{align}
that we call the \textit{one-particle kinetic energy matrix}.
The choice of this term does not seem to be standard, but it is partly
motivated by the fact that the trace
\begin{align*}
\tr \boldsymbol {\sf{T}} = \int_{ \mathbb R^3} \tau(x, x)\, dx
\end{align*}
gives the kinetic energy of the $N$ particles, see e.g. \cite[Chapter 3]{LiebSei2010} or
\cite[Section 4]{LLS2019}.
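Similarly, for a normalized eigenfunction, i.e. under the assumption $\|\psi\|_{\plainL2( \mathbb R^{3N})}=1$, the trace of $\BOLG$ recovers the number of particles:
$$
\tr \BOLG = \int_{ \mathbb R^3}\gamma(x, x)\, dx
=\sum_{j=1}^N\int_{ \mathbb R^{3N}}|\psi(\bx)|^2\, d\bx = N.
$$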
Denote by $\BOLG = \iop(\gamma)$ and $\boldsymbol{\sf{T}} = \iop(\tau)$ the integral operators
with kernels $\gamma(x, y)$ and $\tau(x, y)$ respectively.
We call $\BOLG$ \textit {the one-particle density operator},
and $\boldsymbol{\sf{T}}$ \textit{the one-particle kinetic energy operator}.
If we assume that the particles are fermions (resp. bosons), i.e. that the eigenfunction
is antisymmetric (resp. symmetric) with respect to the permutations $x_k\leftrightarrow x_j$,
then the formulas for $\gamma(x, y)$ and $\tau(x, y)$ take a more compact form:
\begin{align}\label{eq:fb}
\begin{cases}
\gamma(x, y) = N \int\limits_{ \mathbb R^{3N-3}}
\overline{\psi(\hat\bx, x)} \psi(\hat\bx, y)\ d\hat\bx,\\[0.5cm]
\tau(x, y) = N\int\limits_{ \mathbb R^{3N-3}} \overline{\nabla_x\psi(\hat\bx, x)}
\cdot\nabla_y \psi(\hat\bx, y)\,d\hat\bx,
\end{cases}
\end{align}
where $\hat\bx = \hat\bx_N$. In this paper, however,
we do not make these assumptions and
work with the general definitions \eqref{eq:den} and \eqref{eq:tau}.
Since $\psi, \nabla\psi\in\plainL2( \mathbb R^{3N})$,
both $\BOLG$ and $\boldsymbol{\sf{T}}$ are trace class.
We are interested in the exact decay rate of the eigenvalues $\l_k$, $k= 1, 2, \dots$,
of $\BOLG$ and $\boldsymbol{\sf{T}}$.
Both operators play a central
role in quantum chemistry computations of atomic and molecular bound states, see e.g. the papers
\cite{
Fries2003_1, Fries2003, Lewin2004, Lewin2011} and the book \cite{SzaboOstlund1996}.
The knowledge of the eigenvalue behaviour should
serve to estimate the errors due to finite-dimensional
approximations.
The eigenvalue asymptotics for the operator $\BOLG$ were studied in \cite{Sobolev2021}
under the assumption that the function $\psi$ satisfies an exponential bound
\begin{align}\label{eq:exp}
\sup_{\bx\in \mathbb R^{3N}} e^{\varkappa_\scalel{0} |\bx|}\,
|\psi(\bx)|<\infty
\end{align}
with some $\varkappa_0>0$.
This condition always holds if $E$ is a discrete eigenvalue. For detailed discussion
and bibliography on the property \eqref{eq:exp} we refer to
\cite{SimonSelecta}.
It was shown in \cite{Sobolev2021} that
\begin{align}\label{eq:bolg}
\lim_{k\to \infty} k^{\frac{8}{3}} \,\l_k(\BOLG) = A^{\frac{8}{3}}
\end{align}
with some coefficient $A\ge 0$, see also \cite{Cioslowski2020} and \cite{CioPrat2019}
for relevant quantum chemistry calculations.
The subject of the current paper is to establish
for the eigenvalues $\l_k(\boldsymbol{\sf{T}})$ the formula
\begin{align}\label{eq:mainas}
\lim_{k\to\infty} k^{2}\l_k(\boldsymbol{\sf{T}}) = B^2,
\end{align}
with a coefficient $B\ge 0$. The precise statement, including the formula for
the coefficient $B$, requires further properties
of the function $\psi$ and
it is given in Theorem \ref{thm:maincompl}. Moreover, in Sect. \ref{sect:reg} for comparison
we also provide a formula for the coefficient $A$.
The proof splits into three steps that are briefly outlined below.
\textit{Step 1: factorization.}
First we represent the operator $\boldsymbol{\sf{T}}$ as
the product $\boldsymbol{\sf{T}} = \boldsymbol{\sf{V}}^*\boldsymbol{\sf{V}}$, where
$\boldsymbol{\sf{V}}:\plainL2( \mathbb R^3)\to \plainL2( \mathbb R^{3N-3})$ is a
suitable compact operator.
This representation implies that $\l_k(\boldsymbol{\sf{T}}) = s_k(\boldsymbol{\sf{V}})^2$, $k=1, 2, \dots$,
where $s_k(\boldsymbol{\sf{V}})$ are the singular values of the operator $\boldsymbol{\sf{V}}$.
Therefore the asymptotic formula \eqref{eq:mainas} takes the form
\begin{align}\label{eq:main1}
\lim_{k\to\infty}k\, s_k(\boldsymbol{\sf{V}}) = B.
\end{align}
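Indeed, since $\l_k(\boldsymbol{\sf{T}})=s_k(\boldsymbol{\sf{V}})^2$, the two asymptotic formulas are equivalent:
$$
\lim_{k\to\infty}k^2\,\l_k(\boldsymbol{\sf{T}})
=\lim_{k\to\infty}\big(k\, s_k(\boldsymbol{\sf{V}})\big)^2=B^2.
$$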
If we take formula \eqref{eq:fb} as the definition of $\tau(x, y)$
(which is the case for fermions or bosons), then the operator $\boldsymbol{\sf{V}}$ can be taken to be
the integral operator with the vector-valued kernel $\sqrt{N}\,\nabla_x\psi(\hat\bx, x)$, i.e.
\begin{align}\label{eq:ferbos}
(\boldsymbol{\sf{V}} u)(\hat\bx) = \sqrt{N}\int_{ \mathbb R^3}
\nabla_x\psi(\hat\bx, x) u(x) d x,\ u\in\plainL2( \mathbb R^3).
\end{align}
For the general case the operator $\boldsymbol{\sf{V}}$
has a more complicated form and it is defined
in Subsect. \ref{subsect:fact}.
The factorization $\boldsymbol{\sf{T}} = \boldsymbol{\sf{V}}^*\boldsymbol{\sf{V}}$
reduces the study of the asymptotics
to the operator $\boldsymbol{\sf{V}}$, which is convenient since the properties of
the function $\psi$ have been very well studied in the literature.
For the rest of the introduction we will assume for simplicity that $\boldsymbol{\sf{V}}$ is given by
the formula \eqref{eq:ferbos}.
For the sake of comparison we note that the operator $\BOLG$ also admits a factorization: $\BOLG = \boldsymbol\Psi^* \boldsymbol\Psi$. Under the assumption that
$\gamma(x, y)$ is given by the formula \eqref{eq:fb} the
scalar operator $\boldsymbol\Psi:\plainL2( \mathbb R^3)\to \plainL2( \mathbb R^{3N-3})$ has the form
\begin{align}\label{eq:gam}
(\boldsymbol\Psi u)(\hat\bx) = \sqrt{N}\int_{ \mathbb R^3}
\psi(\hat\bx, x) u(x) d x,\ u\in\plainL2( \mathbb R^3).
\end{align}
This factorization was a key observation in the study of the operator $\BOLG$ in \cite{Sobolev2021}.
\textit{Step 2: estimates for singular values.}
The asymptotic analysis of the operator $\boldsymbol{\sf{V}}$ begins with effective bounds for the
singular values of $\boldsymbol{\sf{V}}$. By effective bounds we mean bounds for the
operator $\boldsymbol{\sf{V}}$ with weights
$a = a(x), x\in \mathbb R^3$, $b = b(\hat\bx), \hat\bx\in \mathbb R^{3N-3}$,
of the form
\begin{align*}
\limsup_{k\to\infty} k\, s_k(b\, \boldsymbol{\sf{V}} a)\le C(a, b),
\end{align*}
where the right-hand side depends explicitly on
$a$ and $b$.
It is important that $C(a, b)\to 0$
as some natural integral norm of $a$ (or $b$) tends to zero.
The precise statement is the subject of Theorem \ref{thm:psifull}.
Our proof of the effective bound follows the method used by the author in \cite{Sobolev2020}
to establish analogous bounds for
the operator $\boldsymbol\Psi$.
It relies on the results by
M.S. Birman and M. Z. Solomyak on
estimates for singular values of
integral operators via suitable Sobolev norms of their kernels, see \cite{BS1977}, and
on the regularity results for the function $\psi$.
By an ellipticity argument, the function $\psi$ is
real analytic away from the coalescence points of
the particles, i.e. on the set where $x_j\not = x_k$, $1\le j < k \le N$, and $x_j\not = 0$,
$j = 1, 2, \dots, N$. The classical paper of T. Kato \cite{Kato1957} shows that
at the coalescence points the function $\psi$ is Lipschitz.
In our proofs we use the global estimates for the derivatives of $\psi$
obtained by S. Fournais and T.\O. S\o rensen in the recent paper \cite{FS2018}.
In combination with the estimates from \cite{BS1977} they lead to the required
effective bounds for $\boldsymbol{\sf{V}}$.
In article \cite{FS2018} one can find further references
to the vast bibliography on the regularity of solutions to the Schr\"odinger equation.
\textit{Step 3: asymptotics.} This step follows the approach of \cite{Sobolev2021} where the
asymptotic formula \eqref{eq:bolg} was proved.
In order to obtain an
asymptotic formula for $\boldsymbol{\sf{V}}$
one needs to know the behaviour of the kernel $\nabla_x\psi(\hat\bx, x)$
near the coalescence points. An appropriate representation
for the function $\psi$ was obtained
in \cite{FHOS2009}.
To simplify the explanation we assume that the system consists of two particles,
i.e. that $N=2$.
Under this assumption the problem retains all its crucial features
but allows us to avoid some tedious technical details. For $N=2$ we have $\bx = (t, x)\in \mathbb R^3\times \mathbb R^3$,
and the operator $\boldsymbol{\sf{V}}$ as defined in \eqref{eq:ferbos}
acts from $\plainL2( \mathbb R^3)$ into $\plainL2( \mathbb R^3)$.
According to \cite{FHOS2009}, there exists a neighbourhood
$\Omega_{1,2}\subset \big( \mathbb R^3\setminus \{0\}\big)\times \big( \mathbb R^3\setminus\{0\}\big)$ of the
diagonal set $\{(x, x): x\in \mathbb R^3\setminus \{0\}\}$
and two functions $\xi_{1,2}, \eta_{1, 2}$, real analytic in $\Omega_{1,2}$, such that
the eigenfunction $\psi = \psi(t, x)$ admits the representation
\begin{align}\label{eq:locan0}
\psi(t, x) = \xi_{1,2}(t, x) + |t-x|\,\eta_{1,2}(t, x),
\quad \textup{for all}\quad (t, x)\in \Omega_{1,2},
\end{align}
and hence the kernel of the operator $\boldsymbol{\sf{V}}$ has the form
\begin{align}\label{eq:locan}
\nabla_x\psi(t, x) = &\ \nabla_x\xi_{1,2}(t, x) + |t-x|\,\nabla_x\eta_{1,2}(t, x)
+ \frac{x-t}{|t-x|}\eta_{1,2}(t, x),\notag\\
&\ \qquad \qquad\textup{for all}\quad (t, x)\in \Omega_{1,2}.
\end{align}
Each of the terms on the right-hand side
of \eqref{eq:locan}
gives a different contribution
to the asymptotics of the singular values. According to \cite{BS1977}, infinitely smooth kernels
produce singular values decaying faster than any negative power of their number.
This allows us to say that the first term on the right-hand side of \eqref{eq:locan} gives a zero contribution.
The other two terms contain homogeneous factors of order one and zero respectively.
Spectral asymptotics for operators with homogeneous kernels were studied extensively by
M. Birman and M. Solomyak in \cite{BS1970, BS1977_1} and \cite{BS1979},
see also \cite{BS1977}. A summary of the results needed for our purposes here can be found
in \cite{Sobolev2021}.
The kernels of homogeneity order one played a central role in the paper
\cite{Sobolev2021} in the study of the operator $\boldsymbol\Psi$ defined in \eqref{eq:gam}.
In this case the Birman-Solomyak theory led
to the asymptotics
\begin{align*}
\lim_{k\to\infty} k^{\frac{4}{3}} s_k(\boldsymbol\Psi) = A^{\frac{4}{3}},
\end{align*}
which, in its turn, implied \eqref{eq:bolg}.
In the current paper the decay of the singular values of the operator
$\boldsymbol{\sf{V}}$ is determined by the term of homogeneity order zero in \eqref{eq:locan}.
Precisely, according to the Birman-Solomyak results (see \cite{Sobolev2021} for a summary),
kernels of order zero produce the asymptotics
of order $k^{-1}$, which agrees with \eqref{eq:main1}. However, as in \cite{Sobolev2021},
the known formulas for integral operators with homogeneous kernels are not directly applicable,
since we have information neither on the smoothness of the functions $\xi_{1, 2}, \eta_{1, 2}$
on the closure
$\overline{\Omega_{1,2}}$ nor on the integrability of these functions and
their derivatives on $\Omega_{1, 2}$.
To resolve this problem we
approximate $\xi_{1,2}, \eta_{1,2}$ by suitable
$\plainC\infty_0$-functions supported inside $\Omega_{1,2}$.
The estimates obtained in Step 2 ensure that the induced error in the spectral asymptotics
tends to
zero as the smooth approximations converge.
In the limit this leads to the formula
\eqref{eq:main1} with the constant
\begin{align*}
B = \frac{4}{3\pi}\int_{ \mathbb R^3} |2^{1/2}\eta_{1,2}(x, x)| dx.
\end{align*}
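In the notation of Theorem \ref{thm:etadiag} below, where $H(x)=\sqrt2\,|\eta_{1,2}(x, x)|$ for $N=2$, this constant takes the form
$$
B=\frac{4}{3\pi}\int_{ \mathbb R^3}H(x)\, dx,
$$
in agreement with the general formula \eqref{eq:coeffA}.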
Interestingly,
the finiteness of the above integral is a by-product of the
spectral bounds.
We emphasize that the coalescence points $x = 0$ or $t = 0$ do not contribute to the asymptotics.
For $N\ge 3$ the proof needs to be modified to incorporate representations of the form
\eqref{eq:locan0} in the neighbourhood of all pair coalescence points
$x_j = x_k$, \ $j, k = 1, 2, \dots, N$,\ $j \not = k$, see \eqref{eq:repr}.
The kernel of the operator
$\boldsymbol{\sf{V}}$ near the
coalescence points becomes more complex and as a result the proof requires an
extra step involving an
integral operator whose structure mimics that of the operator $\boldsymbol{\sf{V}}$.
Such a model operator was studied in the author's paper \cite{Sobolev2021}. In order
to make it adaptable for the use in different situations, the model operator was allowed to have
arbitrary order of homogeneity at the pair coalescence points. For order one the model operator
was the key point in the analysis for
the operator $\boldsymbol\Psi$ conducted in \cite{Sobolev2021}. In the current paper we
use the same model operator but with homogeneity of order zero, which is relevant to the operator
$\boldsymbol{\sf{V}}$. The required results are collected in Subsect. \ref{subsect:model}.
Using these facts together with
approximations of $\boldsymbol{\sf{V}}$ similar to the ones in the case $N=2$,
we arrive at the asymptotics \eqref{eq:main1} with the coefficient $B$ given in \eqref{eq:coeffA},
concluding the proof.
The paper is organized as follows.
Section \ref{sect:reg} contains the main result and provides the preliminaries.
First we describe the representation of
the function $\psi$ near the pair coalescence points and
state the main result as Theorem \ref{thm:maincompl}, which includes
the formula \eqref{eq:coeffA} for
the coefficient $B$. The important Theorem \ref{thm:etadiag} ensures that the coefficient $B$ is
finite.
In Subsect. \ref{subsect:fact} we detail the factorization
$\boldsymbol{\sf{T}} = \boldsymbol{\sf{V}}^*\boldsymbol{\sf{V}}$.
In Subsects. \ref{subsect:compact} and \ref{subsect:intop} we
put together the necessary facts about classes of
compact operators with power-like spectral behaviour and collect
the estimates for singular values of integral operators borrowed from \cite{BS1977}.
Subsect. \ref{subsect:model} is focused on spectral asymptotics
of the model integral operator that is instrumental to the case $N\ge 3$.
Using the factorization $\boldsymbol{\sf{T}} = \boldsymbol{\sf{V}}^*\boldsymbol{\sf{V}}$,
in Sect. \ref{sect:factor} the main Theorem \ref{thm:maincompl}
is restated in terms of the operator $\boldsymbol{\sf{V}}$, see Theorem
\ref{thm:ttov}.
In Lemma \ref{lem:central} we construct an approximation for $\boldsymbol{\sf{V}}$
which is adapted for the use of the model operator from Subsect. \ref{subsect:model}.
These results allow us to prove Theorem \ref{thm:ttov} (and hence Theorem \ref{thm:maincompl})
in Subsect. \ref{subsect:proofs}. The rest of the paper is focused on the proof
of the approximation Lemma \ref{lem:central}. It begins with derivation of
effective spectral bounds for the operator $\boldsymbol{\sf{V}}$ in Sect. \ref{sect:prep}.
With the help of the obtained bounds
in Sect. \ref{sect:trim} we show that
the error in the asymptotics induced by the approximations of $\boldsymbol{\sf{V}}$ tends to
zero as the approximations converge, thereby proving Lemma \ref{lem:central}.
We conclude the introduction with some general notational conventions.
\textit{Coordinates.}
As mentioned earlier, we use the following standard notation for the coordinates:
$\bx = (x_1, x_2, \dots, x_N)$,\ where $x_j\in \mathbb R^3$, $j = 1, 2, \dots, N$.
The vector $\bx$ is often represented in the form
\begin{align*}
\bx = (\hat\bx_j, x_j) \quad \textup{with}\quad
\hat\bx_j = (x_1, x_2, \dots, x_{j-1}, x_{j+1},\dots, x_N)\in \mathbb R^{3N-3},
\end{align*}
for arbitrary $j = 1, 2, \dots, N$. Most frequently we use this notation with $j=N$, and
write $\hat\bx = \hat\bx_N$, so that $\bx = (\hat\bx, x_N)$.
In order to write
formulas in a more compact and unified way, we sometimes use the notation
$x_0 = 0$.
In the space $ \mathbb R^d, d\ge 1,$
the notation $|x|$ stands for the Euclidean norm.
In some calculations in Sect. \ref{sect:prep} it is convenient to use the
$\ell^1$-norm of $x$,
which we denote by $|x|_1$.
For $N\ge 3$ it is also useful to introduce the notation
for $\bx$ with $x_j$ and $x_k$ taken out:
\begin{align}\label{eq:xtilde}
\tilde\bx_{k, j} = \tilde\bx_{j, k}
= (x_1, \dots, x_{j-1}, x_{j+1}, \dots, x_{k-1}, x_{k+1},\dots, x_N), \quad \textup{for}\ j <k.
\end{align}
If $j < k$, then we write $\bx = (\tilde\bx_{j, k}, x_j, x_k)$. For any $j\le N-1$
the vector $\hat\bx$ can be represented as $\hat\bx = (\tilde\bx_{j, N}, x_j)$.
The notation $B_R$ is used for the ball $\{x\in \mathbb R^3: |x| < R\}$.
\textit{Indicators.}
For any set $\L\subset \mathbb R^d$ we denote by $\mathbbm 1_{\L}$ its indicator function (or indicator).
\textit{Derivatives.}
Let $\mathbb N_0 = \mathbb N\cup\{0\}$.
If $x = (x', x'', x''')\in \mathbb R^3$ and $m = (m', m'', m''')\in \mathbb N_0^3$, then
the derivative $\partial_x^m$ is defined in the standard way:
\begin{align*}
\partial_x^m = \partial_{x'}^{m'}\partial_{x''}^{m''}\partial_{x'''}^{m'''}.
\end{align*}
\textit{Bounds.}
For two non-negative numbers (or functions)
$X$ and $Y$ depending on some parameters,
we write $X\lesssim Y$ (or $Y\gtrsim X$) if $X\le C Y$ with
some positive constant $C$ independent of those parameters.
To avoid confusion we may comment on the nature of
(implicit) constants in the bounds.
\textit{Cut-off functions.}
We systematically use the following smooth cut-off functions. Let
\begin{align}\label{eq:sco}
\t\in\plainC\infty_0( \mathbb R),\quad \zeta(t) = 1-\t(t),
\end{align}
be functions such that $0\le \t\le 1$ and
\begin{align}\label{eq:sco1}
\t(t) = 0,\quad \textup{if}\quad |t|>1;\ \quad
\t(t) = 1,\quad \textup{if}\quad |t|<\frac{1}{2}. \
\end{align}
\textit{Integral operators.}
The notation $\iop(\mathcal K)$ is used for the integral operator with kernel $\mathcal K$,
e.g. $\Gamma = \iop(\gamma)$.
The functional spaces, where $\iop(\mathcal K)$ acts are obvious from the context.
\section{Main result, factorization of
$\boldsymbol{\sf{T}}$, compact operators}\label{sect:reg}
\subsection{Representation formula}
In order to state the main result we need a formula for the function $\psi$ describing its behaviour
in a neighbourhood of the pair coalescence points. Such a formula was obtained in
\cite{FHOS2009}.
For convenience, along with the standard
notation $\bx = (x_1, x_2, \dots, x_N)\in \mathbb R^{3N}$
we use the notation $x_0 = 0$.
Thus, unless otherwise stated, the indices labeling the particles run from $0$ to $N$.
Denote
\begin{align}\label{eq:sls}
{\sf S}_{l,s} = \{\bx\in \mathbb R^{3N}: x_l\not = x_s\},\
l\not = s.
\end{align}
The function $\psi$ is real-analytic on the set
\begin{align*}
{\sf{U}} =
\bigcap_{0\le l < s\le N}{\sf S}_{l,s}.
\end{align*}
For each pair $j, k: j\not = k$, the set
\begin{align}\label{eq:uj}
{\sf{U}}_{j,k} =
\bigcap_{\substack{l \not = s\\
(l, s)\not = (j, k)}}
{\sf S}_{l,s}
\end{align}
contains the coalescence point $x_j = x_k$, whereas the other pairs of variables are separated from each other.
For the main result we are interested in the shape of the function $\psi$
near the diagonal
\begin{align}\label{eq:diag}
{\sf{U}}^{(\rm d)}_{j,k} = \{\bx\in{\sf{U}}_{j,k}: x_j = x_k\}.
\end{align}
The sets introduced above are obviously symmetric with respect to permutations of
indices, e.g. ${\sf{U}}_{j,k} = {\sf{U}}_{k,j}$, ${\sf{U}}^{(\rm d)}_{j, k}={\sf{U}}^{(\rm d)}_{k,j}$.
The following property follows from \cite[Theorem 1.4]{FHOS2009}.
\begin{prop}\label{prop:repr}
For each pair of indices $j, k = 0, 1, \dots, N$ such that $j\not = k$, there exists
an open connected set $\Omega_{j,k} = \Omega_{k, j}\subset \mathbb R^{3N}$, such that
\begin{align}\label{eq:omin}
{\sf{U}}_{j,k}^{(d)}\subset\Omega_{j,k}\subset {\sf{U}}_{j,k},
\end{align}
and two uniquely defined functions $\xi_{j, k}, \eta_{j, k}$, real analytic on $\Omega_{j, k}$, such that
for all $\bx\in \Omega_{j, k}$
the following representation holds:
\begin{align}\label{eq:repr}
\psi(\bx) = \xi_{j, k}(\bx) + |x_j-x_k| \eta_{j, k}(\bx).
\end{align}
\end{prop}
Because of the uniqueness of the functions $\xi_{j, k}, \eta_{j, k}$, they are
symmetric with respect to the permutations $j\leftrightarrow k$, i.e.
$\xi_{j, k} = \xi_{k, j}$, $\eta_{j, k} = \eta_{k, j}$ for all $j\not = k$.
The asymptotic coefficient $B$ in the formula \eqref{eq:mainas} is defined via the functions
$\eta_{j,k}$, $j, k = 1, 2, \dots, N, j<k$, on the sets \eqref{eq:diag}.
Using the notation \eqref{eq:xtilde} we write the function $\eta_{j, k}(\bx)$
on ${\sf{U}}_{j, k}^{(\rm d)}$ as $\eta_{j, k}(\tilde\bx_{j, k}, x, x)$. Proposition \ref{prop:repr}
gives no information on the properties of $\xi_{j, k}, \eta_{j, k}$ on the closure $\overline{\Omega_{j, k}}$,
but while proving the asymptotic formula \eqref{eq:mainas} we find the following integrability
property of $\eta_{j, k}(\tilde\bx_{j,k}, x, x)$.
\begin{thm}\label{thm:etadiag}
If $N\ge 3$, then each function $\eta_{j, k}(\ \cdot\ , x, x)$, $1\le j < k\le N$,
belongs to $\plainL2( \mathbb R^{3N-6})$ for a.e. $x\in \mathbb R^3$ and
the function
\begin{align}\label{eq:H}
H(x):= \bigg[2 \sum\limits_{1\le j < k\le N}\int_{ \mathbb R^{3N-6}} \big|
\eta_{j, k}(\tilde\bx_{j,k}, x, x) \big|^2 d\tilde\bx_{j, k}\bigg]^{\frac{1}{2}},
\end{align}
belongs to $\plainL1( \mathbb R^{3})$.
If $N = 2$, then the function $H(x):= \sqrt2 |\eta_{1, 2}(x, x)|$
belongs to $\plainL1( \mathbb R^{3})$.
\end{thm}
Now we are in a position to state the main result of the paper.
\begin{thm}\label{thm:maincompl}
Suppose that the eigenfunction $\psi$ satisfies the bound \eqref{eq:exp}.
Then the eigenvalues $\l_k(\boldsymbol{\sf{T}}), k = 1, 2, \dots,$
of the operator $\boldsymbol{\sf{T}}$ satisfy the asymptotic formula \eqref{eq:mainas}
with the constant
\begin{align}\label{eq:coeffA}
B = \frac{4}{3\pi}
\int_{ \mathbb R^{3}} H(x)\, dx.
\end{align}
\end{thm}
\begin{remark}
\begin{enumerate}
\item
Theorem \ref{thm:maincompl} extends to the case of a
molecule with several nuclei whose positions are fixed.
The modifications are straightforward.
\item
The formula \eqref{eq:coeffA} shows that the spectral asymptotics
depend only on the behaviour of the eigenfunction $\psi$ near the pair coalescence points
$x_j=x_k$, $j, k = 1, 2, \dots, N$, $j\not = k$.
Neither the points
$x_j = 0, j = 1, 2, \dots, N$, nor
the coalescence points of higher orders (e.g. $x_j = x_k = x_l$ with pairwise distinct $j, k, l$)
contribute to the asymptotics \eqref{eq:mainas}.
\item
For the sake of comparison we give here the formula for the coefficient
$A$ in the asymptotic formula \eqref{eq:bolg} for the operator $\BOLG$.
As shown in \cite[Theorem 2.2]{Sobolev2021}, the function $H$ also belongs to $\plainL{3/4}( \mathbb R^3)$
and
\begin{align*}
A = \frac{1}{3}\bigg(\frac{2}{\pi}\bigg)^{\frac{5}{4}}
\int_{ \mathbb R^{3}} H(x)^{\frac{3}{4}}\, dx.
\end{align*}
Thus both asymptotic formulas \eqref{eq:bolg} and \eqref{eq:mainas} are determined only by
the functions $\eta_{j, k}$.
\end{enumerate}
\end{remark}
\subsection{Factorization of $\boldsymbol{\sf{T}}$:
change of variables $(\hat\bx_j, x)\mapsto (\hat\bx, x)$}
\label{subsect:fact}
Let us describe the factorization of the operator $\boldsymbol{\sf{T}}$
which is central to the proof of Theorem
\ref{thm:maincompl}, and the associated change of variables.
Rewrite the definition \eqref{eq:tau} in the form:
\begin{align}\label{eq:psij}
\tau(x, y) = &\ \sum_{j=1}^N \int_{ \mathbb R^{3N-3}}
\overline{{\sf{v}}_j(\hat\bx, x)} \cdot {\sf{v}}_j(\hat\bx, y) d\hat\bx,
\quad \textup{where}
\notag\\
{\sf{v}}_j(\hat\bx, x) = &\ \nabla_x
\psi(x_1, \dots, x_{j-1}, x, x_j, \dots, x_{N-1}),\quad j = 1, 2, \dots, N.
\end{align}
Therefore $\boldsymbol{\sf{T}}$ can be represented as a product $\boldsymbol{\sf{T}} = \boldsymbol{\sf{V}}^*\boldsymbol{\sf{V}}$,
where $\boldsymbol{\sf{V}}:\plainL2( \mathbb R^3)\to \plainL2( \mathbb R^{3N-3})$
is the integral operator with the vector-valued kernel ${\sf{V}}(\hat\bx, x): \mathbb C\to \mathbb C^{3N}$
given by
\begin{align}\label{eq:bpsi}
{\sf{V}}(\hat\bx, x) = \{{\sf{v}}_j(\hat\bx, x)\}_{j=1}^N.
\end{align}
As explained in the Introduction, given this factorization,
the asymptotic relation \eqref{eq:mainas} translates to the formula \eqref{eq:main1}. Later we state this fact again as Theorem \ref{thm:ttov} using a more convenient notation.
The change of variables $(\hat\bx_j, x)\mapsto (\hat\bx, x)$ plays an important role throughout
the paper. In particular, it is crucial to rewrite
Proposition \ref{prop:repr} in terms of the new variables $(\hat\bx, x)$.
Let $j\not = k$, and let $\Omega_{j, k}$ be the sets and $\xi_{j, k}(\bx)$, $\eta_{j, k}(\bx)$ be the functions
from Proposition \ref{prop:repr}.
For all $j = 1, 2, \dots, N$ and all $k = 0, 1, \dots, N-1,$ denote
\begin{align*}
\tilde\Omega_{j, k} =
\begin{cases}
\{(\hat\bx, x)\in \mathbb R^{3N}:
(x_1, \dots, x_{j-1}, x, x_{j}, \dots, x_{N-1})\in\Omega_{j, k}\},\quad \textup{if}\ j\ge k+1,\\
\{(\hat\bx, x)\in \mathbb R^{3N}:
(x_1, \dots, x_{j-1}, x, x_{j}, \dots, x_{N-1})\in\Omega_{j, k+1}\},\quad \textup{if}\ j\le k.
\end{cases}
\end{align*}
According to \eqref{eq:omin} we have
\begin{align}\label{eq:tom}
{\sf{U}}_{N,k}^{(\dc)}\subset \tilde\Omega_{j, k} \subset {\sf{U}}_{N,k},\quad k = 0, 1, \dots, N-1,
\end{align}
for all $j = 1, 2, \dots, N$.
Together with functions $\xi_{j,k}, \eta_{j,k}$ define
\begin{align*}
\tilde\xi_{j, k}(\hat\bx, x) =
\begin{cases}
\xi_{j,k}(x_1, \dots, x_{j-1}, x, x_{j}, \dots, x_{N-1}),\quad \textup{if}\ j\ge k+1,\\[0.2cm]
\xi_{j,k+1}(x_1, \dots, x_{j-1}, x, x_{j}, \dots, x_{N-1}),\quad \textup{if}\ j\le k,
\end{cases}
\end{align*}
and
\begin{align*}
\tilde\eta_{j,k}(\hat\bx, x) =
\begin{cases}
\eta_{j,k}(x_1, \dots, x_{j-1}, x, x_{j}, \dots, x_{N-1}),\quad \textup{if}\ j\ge k+1,\\[0.2cm]
\eta_{j,k+1}(x_1, \dots, x_{j-1}, x, x_{j}, \dots, x_{N-1}),\quad \textup{if}\ j\le k.
\end{cases}
\end{align*}
By Proposition \ref{prop:repr},
for each $j = 1, 2, \dots, N$, and each $k = 0, 1, \dots, N-1$, we have
\begin{align}\label{eq:trepr}
{\sf{v}}_j(\hat\bx, x) =
\nabla_x\tilde\xi_{j, k}(\hat\bx, x) + &\ |x_k-x| \nabla_x \tilde\eta_{j, k}(\hat\bx, x) \notag\\
&\ + (x_k-x)|x_k-x|^{-1} \tilde\eta_{j, k}(\hat\bx, x),
\quad \textup{for all}\ (\hat\bx, x) \in \tilde\Omega_{j, k}.
\end{align}
Observe that the sets $\tilde\Omega_{j, k}$
and the functions $\tilde\xi_{j,k}, \tilde\eta_{j,k}$ are
not symmetric under the permutation $j\leftrightarrow k$.
The function \eqref{eq:H} can be easily rewritten via the new functions $\tilde\eta_{j, k}$:
\begin{align}\label{eq:tH}
H(x) =
\begin{cases}
\big(|\tilde\eta_{1, 1}(x, x)|^2 + |\tilde\eta_{2, 1}(x, x)|^2\big)^{1/2},\quad \textup{if}\ N = 2;\\[0.3cm]
\bigg[\sum\limits_{j=1}^N\sum\limits_{k=1}^{N-1}\int_{ \mathbb R^{3N-6}} \big|
\tilde\eta_{j, k}(\tilde\bx_{k,N}, x, x) \big|^2 d\tilde\bx_{k, N}\bigg]^{\frac{1}{2}},\quad
\textup{if}\ N\ge 3.
\end{cases}
\end{align}
This calculation is done in \cite[Sect. 2]{Sobolev2021}.
\subsection{Compact operators} \label{subsect:compact}
Here we provide necessary information about compact operators.
Most of the listed facts can be found in \cite[Chapter 11]{BS}.
Let $\mathcal H$ and $\mathcal G$ be separable Hilbert spaces.
Let $T:\mathcal H\to\mathcal G$ be a compact operator.
If $\mathcal H = \mathcal G$ and $T=T^*\ge 0$, then $\l_k(T)$, $k= 1, 2, \dots$,
denote the positive eigenvalues of $T$
numbered in descending order counting multiplicity.
For arbitrary spaces $\mathcal H$, $\mathcal G$ and compact $T$, by $s_k(T) >0$,
$k= 1, 2, \dots$, we denote the singular values of
$T$ defined by $s_k(T)^2 = \l_k(T^*T) = \l_k(TT^*)$.
Note the useful inequality
\begin{align}\label{eq:2k}
s_{2k}(T_1+T_2)\le s_{2k-1}(T_1+T_2)\le s_k(T_1) + s_k(T_2),
\end{align}
which holds for any two compact $T_1, T_2$.
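For the reader's convenience we recall the standard argument behind \eqref{eq:2k}: it is a special case of Ky Fan's inequality for the singular values of a sum,
\begin{align*}
s_{k+m-1}(T_1+T_2)\le s_k(T_1) + s_m(T_2),\quad k, m = 1, 2, \dots,
\end{align*}
taken with $m = k$; the first inequality in \eqref{eq:2k} is simply the monotonicity of the sequence $s_k$.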
If $s_k(T)\lesssim k^{-1/p}, k = 1, 2, \dots$,
with some $p >0$, then we say that $T\in \BS_{p, \infty}$ and denote
\begin{align}\label{eq:quasi}
\| T\|_{p, \infty} = \sup_k s_k(T) k^{\frac{1}{p}}.
\end{align}
The class $\BS_{p, \infty}$ is a complete linear space with the quasi-norm $\|T\|_{p, \infty}$.
For all $p>0$ the functional $\|T\|_{p, \infty}$
satisfies the following ``triangle'' inequality for
operators $T_1, T_2\in\BS_{p, \infty}$:
\begin{align}\label{eq:triangle}
\|T_1+T_2\|_{\scalet{p}, \scalel{\infty}}^{{\frac{\scalel{p}}{\scalet{p+1}}}}
\le \|T_1\|_{\scalet{p}, \scalel{\infty}}^{{\frac{\scalel{p}}{\scalet{p+1}}}}
+ \|T_2\|_{\scalet{p}, \scalel{\infty}}^{{\frac{\scalel{p}}{\scalet{p+1}}}}.
\end{align}
For $T\in \BS_{p, \infty}$ the following numbers are finite:
\begin{align}\label{eq:limsupinf}
\begin{cases}
{\sf{G}}_p(T) =
\big(\limsup\limits_{k\to\infty} k^{\frac{\scalel{1}}{\scalel{p}}}s_k(T)\big)^{p},\\[0.3cm]
{\sf{g}}_p(T) =
\big(\liminf\limits_{k\to\infty} k^{\frac{\scalel{1}}{\scalel{p}}}s_k(T)\big)^{p},
\end{cases}
\end{align}
and they clearly satisfy the inequalities
\begin{align*}
{\sf{g}}_p(T)\le {\sf{G}}_p(T)\le \|T\|_{p, \infty}^p.
\end{align*}
Note that if $T\in\BS_{p, \infty}$, then ${\sf{G}}_q(T) = 0$ for all $q > p$. Observe that
\begin{align}\label{eq:double}
{\sf{g}}_{p}(T T^*) = {\sf{g}}_{p}(T^*T) = {\sf{g}}_{2p}(T),\quad
{\sf{G}}_{p}(T T^*) = {\sf{G}}_{p}(T^*T) = {\sf{G}}_{2p}(T).
\end{align}
If ${\sf{G}}_p(T) = {\sf{g}}_p(T)$, then the singular values of $T$ satisfy the asymptotic formula
\begin{align*}
s_n(T) = \big({\sf{G}}_p(T)\big)^{\frac{1}{p}} n^{-\frac{1}{p}} + o(n^{-\frac{1}{p}}),\ n\to\infty.
\end{align*}
The functionals ${\sf{g}}_p(T)$, ${\sf{G}}_p(T)$
also satisfy the inequalities of the type \eqref{eq:triangle}:
\begin{align}\label{eq:trianglep}
\begin{cases}
{\sf{G}}_p(T_1+T_2)^{{\frac{\scalel{1}}{\scalet{p+1}}}}
\le {\sf{G}}_p(T_1)^{{\frac{\scalel{1}}{\scalet{p+1}}}}
+ {\sf{G}}_p(T_2)^{{\frac{\scalel{1}}{\scalet{p+1}}}},\\[0.3cm]
{\sf{g}}_p(T_1+T_2)^{{\frac{\scalel{1}}{\scalet{p+1}}}}
\le {\sf{g}}_p(T_1)^{{\frac{\scalel{1}}{\scalet{p+1}}}}
+ {\sf{G}}_p(T_2)^{{\frac{\scalel{1}}{\scalet{p+1}}}}.
\end{cases}
\end{align}
We need a version of the first inequality for infinitely many operators.
\begin{lem}\label{lem:triangleg}
Suppose that $T_j\in \BS_{p, \infty}$, $j = 1, 2, \dots,$ with some $p>0$ and that
\begin{align}\label{eq:conv}
\sum_{j} \|T_j\|_{\scalet{p}, \scalel{\infty}}^{{\frac{\scalel{p}}{\scalet{p+1}}}}<\infty.
\end{align}
Then
\begin{align}\label{eq:triangleg}
{\sf{G}}_p\big(\sum_{j} T_j\big)^{{\frac{\scalel{1}}{\scalet{p+1}}}}\le
\sum_j {\sf{G}}_p (T_j)^{{\frac{\scalel{1}}{\scalet{p+1}}}}.
\end{align}
\end{lem}
\begin{proof}
The bound \eqref{eq:triangleg} follows from the triangle inequality \eqref{eq:trianglep}
by virtue of the completeness of $\BS_{p, \infty}$: due to \eqref{eq:conv}, the series
$\sum_j T_j$ converges in $\BS_{p, \infty}$, and \eqref{eq:triangleg} is obtained by iterating
\eqref{eq:trianglep} and passing to the limit.
\end{proof}
It also follows from \eqref{eq:trianglep} that
the functionals ${\sf{G}}_p$ and ${\sf{g}}_p$ are continuous on $\BS_{p, \infty}$:
\begin{align*}
\big|
{\sf{G}}_p(T_1)^{{\frac{\scalel{1}}{\scalet{p+1}}}} - {\sf{G}}_p(T_2)^{{\frac{\scalel{1}}{\scalet{p+1}}}}
\big|\le &\ {\sf{G}}_p(T_1-T_2)^{{\frac{\scalel{1}}{\scalet{p+1}}}},\\
\big|{\sf{g}}_p(T_1)^{{\frac{\scalel{1}}{\scalet{p+1}}}} - {\sf{g}}_p(T_2)^{{\frac{\scalel{1}}{\scalet{p+1}}}}
\big|\le &\ {\sf{G}}_p(T_1-T_2)^{{\frac{\scalel{1}}{\scalet{p+1}}}}.
\end{align*}
We need the following two corollaries of this fact:
\begin{cor}\label{cor:zero}
Suppose that ${\sf{G}}_p(T_1-T_2) = 0$. Then
\begin{align*}
{\sf{G}}_p(T_1) = {\sf{G}}_p(T_2),\quad {\sf{g}}_p(T_1) = {\sf{g}}_p(T_2).
\end{align*}
\end{cor}
The next corollary is more general:
\begin{cor}\label{cor:zero1}
Suppose that $T\in\BS_{p, \infty}$ and that for every $\nu>0$ there exists an operator
$T_\nu\in \BS_{p, \infty}$ such that ${\sf{G}}_p(T - T_\nu)\to 0$,
$\nu\to 0$. Then the functionals
${\sf{G}}_p(T_\nu), {\sf{g}}_p(T_\nu)$ have limits as $\nu\to 0$ and
\begin{align*}
\lim_{\nu\to 0} {\sf{G}}_p(T_\nu) = {\sf{G}}_p(T),\quad
\lim_{\nu\to 0} {\sf{g}}_p(T_\nu) = {\sf{g}}_p(T).
\end{align*}
\end{cor}
We also need to make a remark about ``block-vector'' operators. Let $T_j\in\BS_{p, \infty}$
be a finite collection of compact operators. Define the operator $\BT:\mathcal H\to\oplus_j \mathcal G$ by
$\BT = \{T_j\}_{j}$.
Since
\begin{align*}
\BT^* \BT = \sum_j T_j^*T_j,
\end{align*}
by \eqref{eq:trianglep} and \eqref{eq:double}
we have $\BT^*\BT\in\BS_{q, \infty}$, $q = p/2$ and
\begin{align*}
{\sf{G}}_q(\BT^*\BT)^{{\frac{\scalel{1}}{\scalet{q+1}}}}\le \sum_j{\sf{G}}_q(T_j^*T_j)^{{\frac{\scalel{1}}{\scalet{q+1}}}}.
\end{align*}
Therefore, in view of \eqref{eq:double} again,
\begin{align}\label{eq:blockvec}
{\sf{G}}_p(\BT)^{{\frac{\scalel{2}}{\scalet{p+2}}}}
\le \sum_j {\sf{G}}_p(T_j)^{{\frac{\scalel{2}}{\scalet{p+2}}}}.
\end{align}
The same bound holds if one replaces ${\sf{G}}_p(\ \cdot\ )$ with the
quasi-norm $\|\ \cdot\ \|_{p, \infty}$.
Consequently, in order to estimate the singular values of $\BT$ it suffices to
estimate those of its components $T_j$. We use this fact throughout the paper.
\subsection{Singular values of integral operators}\label{subsect:intop}
The final ingredient of the proof is a result due to M.~Birman and M.~Solomyak on the
membership of integral operators in the class $\BS_{p, \infty}$ with some $p>0$.
For estimates of the
singular values we rely on \cite[Propositions 2.1, 2.3]{BS1977}, see also
\cite[Theorem 11.8.4]{BS},
which we state here in a form convenient for
our purposes.
Let $T_{ba}:\plainL2(X)\to\plainL2( \mathbb R^n), X\subset \mathbb R^d,$
be the integral operator of the form
\begin{align*}
(T_{ba}u)(t) = b(t) \int_X T(t, x) a(x) u(x)\,dx.
\end{align*}
We consider this operator in two cases: when $X$ is the unit
cube $\mathcal C = (0, 1)^d$, or $X = \mathbb R^d$.
\begin{prop}\label{prop:BS}
Let $X = \mathcal C$.
Assume that the kernel $T(t,x), t\in \mathbb R^n, x\in\mathcal C,$ is such that
$T(t, \ \cdot\ )\in \plainH{l}(\mathcal C)$ with some $l = 1, 2, \dots$, $2l \ge d$,
a.e. $t\in \mathbb R^n$. Assume that $a\in\plainL{r}(\mathcal C)$, where $r = 2$ if $2l >d$ and
$r>1$ is arbitrary if $2l=d$. Then
\begin{align*}
s_k(T_{ba})\lesssim k^{-\frac{1}{2}-\frac{l}{d}}
\biggl[\int_{ \mathbb R^n} \|T(t, \ \cdot\ )\|_{\plainH{l}}^2\, |b(t)|^2\, dt\biggr]^{\frac{1}{2}}
\|a\|_{\plainL{r}(\mathcal C)},
\end{align*}
$k = 1, 2, \dots$. In other words,
$T_{ba}\in\BS_{q, \infty}$ with $q^{-1} = 2^{-1} + l d^{-1}$ and
\begin{align*}
\|T_{ba}\|_{q, \infty}\lesssim \biggl[\int_{ \mathbb R^n}
\|T(t,\ \cdot\ )\|_{\plainH{l}}^2\, |b(t)|^2 \,dt \biggr]^{\frac{1}{2}}
\|a\|_{\plainL{r}(\mathcal C)}.
\end{align*}
\end{prop}
It is straightforward to check that
if one replaces the cube $\mathcal C$ with its translate $\mathcal C_n = \mathcal C+n, n\in \mathbb Z^d$, then
the bounds of Proposition \ref{prop:BS} still hold with implicit constants independent of $n$.
\begin{prop}\label{prop:BS1}
Let $X = \mathbb R^d$.
Assume that the kernel $T(t,x), t\in \mathbb R^n, x\in \mathbb R^d,$ is such that
$T(t, \ \cdot\ )\in \plainH{l}( \mathbb R^d)$ with some $l = 1, 2, \dots$, $2l < d$,
a.e. $t\in \mathbb R^n$. Assume that $a\in\plainL{r}( \mathbb R^d)$, where $r = dl^{-1}$.
Then
\begin{align*}
s_k(T_{ba})\lesssim k^{-\frac{1}{2}-\frac{l}{d}}
\biggl[\int_{ \mathbb R^n} \|T(t, \ \cdot\ )\|_{\plainH{l}}^2\, |b(t)|^2\, dt\biggr]^{\frac{1}{2}}
\|a\|_{\plainL{r}( \mathbb R^d)},
\end{align*}
$k = 1, 2, \dots$. In other words,
$T_{ba}\in\BS_{q, \infty}$ with $q^{-1} = 2^{-1} + l d^{-1}$ and
\begin{align*}
\|T_{ba}\|_{q, \infty}\lesssim \biggl[\int_{ \mathbb R^n}
\|T(t,\ \cdot\ )\|_{\plainH{l}}^2\, |b(t)|^2 \,dt \biggr]^{\frac{1}{2}}
\|a\|_{\plainL{r}( \mathbb R^d)}.
\end{align*}
\end{prop}
\subsection{Model operator}\label{subsect:model}
Let $a, b_{j,k}, \b_{j,k}$, $j = 1, 2, \dots, N$, $k = 1, 2, \dots, N-1$, be
scalar functions such that
\begin{align}\label{eq:abbeta}
\begin{cases}
a\in\plainC\infty_0( \mathbb R^3), &\ \quad b_{j,k}\in \plainC\infty_0( \mathbb R^{3N-3}),\\[0.2cm]
\b_{j,k}\in\plainC\infty( \mathbb R^{3N}),
\end{cases}
\end{align}
for all $j = 1, 2, \dots, N$, $k = 1, 2, \dots, N-1$.
Let $\Phi\in\plainC\infty( \mathbb R^3\setminus \{0\})$ be a vector
function with $m$ scalar components,
homogeneous of order $\a>-3$:
\begin{align}\label{eq:phi}
\Phi(t x) = t^\a \Phi(x),\quad x\not = 0, t >0.
\end{align}
Consider the vector-valued kernel $\mathcal M(\hat\bx, x): \mathbb C\to \mathbb C^{mN}$:
\begin{align}\label{eq:cm}
\begin{cases}
\mathcal M(\hat\bx, x) = \{\mathcal M_j(\hat\bx, x)\}_{j=1}^N,\
\quad
\mathcal M_j(\hat\bx, x) = \sum_{k=1}^{N-1} \mathcal M_{j,k}(\hat\bx, x),\\[0.3cm]
\mathcal M_{j,k}(\hat\bx, x) = b_{j,k}(\hat\bx) \Phi(x_k-x) a(x)\b_{j,k}(\hat\bx, x).
\end{cases}
\end{align}
The eigenvalue asymptotics for the operator $\iop(\mathcal M)$ was found in \cite{Sobolev2021} for
general functions $\Phi$ of the form \eqref{eq:phi}. Below we state this result for the
function $\Phi(x) = \nabla |x| = x|x|^{-1}$, which is
homogeneous of order $0$. This is
the case needed for the study of the operator
$\boldsymbol{\sf{T}}$.
We use the representations $(\hat\bx, x) = (\tilde\bx_{k, N}, x_k, x)$ introduced in \eqref{eq:xtilde}.
Denote
\begin{align}\label{eq:hm}
\begin{cases}
h(t) = \bigg[\sum_{j=1}^N\sum_{k=1}^{N-1}\int_{ \mathbb R^{3N-6}}
| b_{j,k}(\tilde\bx_{k, N}, t)\b_{j,k}(\tilde\bx_{k, N}, t, t) |^2 d\tilde\bx_{k, N}\bigg]^{\frac{1}{2}},\
\textup{if}\ N\ge 3;\\[0.3cm]
h(t) = \big(|b_{1,1}(t)\b_{1,1}(t, t)|^2
+ |b_{2,1}(t)\b_{2,1}(t, t)|^2\big)^{\frac{1}{2}},\ \textup{if}\ N=2.
\end{cases}
\end{align}
The next proposition summarizes the required results from \cite{Sobolev2021}.
\begin{prop}\label{prop:gradp}
If $\Phi(x)=\nabla|x| = x|x|^{-1}$, then $\iop(\mathcal M)\in \BS_{1, \infty}$ and
\begin{align}\label{eq:gradp}
{\sf{G}}_1\big(\iop(\mathcal M)\big)
= {\sf{g}}_1\big(\iop(\mathcal M)\big)
= \nu_{0, 3}\int_{ \mathbb R^3} |a(x) h(x)|\, dx,
\end{align}
where $\nu_{0, 3} = 4(3\pi)^{-1}$.
If $\Phi(x)$ is homogeneous of order $\a >0$, then ${\sf{G}}_1\big(\iop(\mathcal M)\big) = 0$.
\end{prop}
\section{Factorization of $\boldsymbol {\sf{T}}$:
operator $\boldsymbol{\sf{V}}$}\label{sect:factor}
\subsection{Reformulation of the problem}
Using the functionals \eqref{eq:limsupinf}, one can rewrite the sought formula \eqref{eq:mainas} as
\begin{align*}
{\sf{G}}_{1/2}(\boldsymbol{\sf{T}}) = {\sf{g}}_{1/2}(\boldsymbol{\sf{T}}) = B.
\end{align*}
Since $\boldsymbol{\sf{T}} = \boldsymbol{\sf{V}}^*\boldsymbol{\sf{V}}$ with the operator
$\boldsymbol{\sf{V}}: \plainL2( \mathbb R^3)\to\plainL2( \mathbb R^{3N-3})$
defined in \eqref{eq:bpsi},
by \eqref{eq:double} the above equalities rewrite as
\begin{align}\label{eq:ttov}
{\sf{G}}_1(\boldsymbol{\sf{V}}) = {\sf{g}}_1(\boldsymbol{\sf{V}}) = B.
\end{align}
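Indeed, since $\boldsymbol{\sf{T}} = \boldsymbol{\sf{V}}^*\boldsymbol{\sf{V}}\ge 0$, we have $\l_k(\boldsymbol{\sf{T}}) = s_k(\boldsymbol{\sf{V}})^2$, and hence
\begin{align*}
\limsup_{k\to\infty} k\, s_k(\boldsymbol{\sf{V}})
= \Big(\limsup_{k\to\infty} k^2\, \l_k(\boldsymbol{\sf{T}})\Big)^{\frac{1}{2}},
\end{align*}
with the same identity for the lower limits; this is the case $p = 1/2$ of \eqref{eq:double}.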
Thus the main Theorem \ref{thm:maincompl} can be recast as follows:
\begin{thm}\label{thm:ttov}
Under the conditions of Theorem \ref{thm:maincompl} the
formula \eqref{eq:ttov} holds with the constant $B$
which is defined in \eqref{eq:coeffA}.
\end{thm}
The rest of the paper is focused on the proof of Theorem \ref{thm:ttov}.
As explained in the Introduction,
at the heart of the proof is the formula \eqref{eq:repr} for the function $\psi$,
which translates to the representation \eqref{eq:trepr} for the kernels ${\sf{v}}_j$ defined in
\eqref{eq:psij}.
This representation allows us to reduce the problem to the model operator
considered in Subsect. \ref{subsect:model} with the function $\Phi(x) = x|x|^{-1}$.
At the first stage of this reduction we construct
convenient approximations of the kernels ${\sf{v}}_j(\hat\bx, x)$.
\subsection{Cut-off functions}
First we construct appropriate cut-offs.
Fix a $\d > 0$. Along with the sets \eqref{eq:sls} introduce
\begin{align}\label{eq:slsd}
{\sf S}_{l, s}(\d) = {\sf S}_{s, l}(\d) = \{\bx\in \mathbb R^{3N}: |x_l-x_s|>\d\},\ 0\le l < s\le N,
\end{align}
and for all $k = 0, 1, \dots, N-1$, define
\begin{align*}
{\sf{U}}_{k}(\d) = \bigg(\bigcap_{0\le l < s\le N-1} {\sf S}_{l, s}(\d)\bigg)\bigcap\bigg(
\bigcap_{\substack{0\le s\le N-1\\ s\not = k}} {\sf S}_{s, N}(\d)\bigg).
\end{align*}
Comparing with \eqref{eq:uj} we see that ${\sf{U}}_k(\d)\subset {\sf{U}}_{k, N}$,
and for $\bx\in {\sf{U}}_k(\d)$ all
the pairs of coordinates, except for the pair $x_k, x_N$, are separated by a distance greater than $\d$.
Similarly to \eqref{eq:diag} define the diagonal set
\begin{align*}
{\sf{U}}^{(\rm d)}_{k}(\d) = \{\bx\in{\sf{U}}_{k}(\d): x_k = x_N\}\subset{\sf{U}}_{k, N}^{(\dc)}.
\end{align*}
Recall that the representation
\eqref{eq:trepr} holds on the domain $\tilde\Omega_{j, k}$ which satisfies
\eqref{eq:tom} for all $j = 1, 2, \dots, N$, $k = 0, 1, \dots, N-1$.
We construct a compact subset of $\tilde\Omega_{j, k}$ in the following way.
For $R>0$ let
\begin{align*}
{\sf{U}}_{k}(\d, R) = &\ {\sf{U}}_k(\d)\bigcap \ (B_R)^N,\\
{\sf{U}}^{(\rm d)}_{k}(\d, R) = &\ \{\bx\in{\sf{U}}_{k}(\d, R): x_k = x_N\},
\end{align*}
where $B_R = \{x\in \mathbb R^3: |x| <R\}$.
The set ${\sf{U}}^{(\rm d)}_{k}(\d, R)$ is bounded and its closure belongs to
$\tilde\Omega_{j,k}$ for all $\d>0, R>0$. Therefore, there exists
an $\varepsilon_0 = \varepsilon_0(\d, R)>0$ such that the $j$-independent $\varepsilon$-neighbourhood
\begin{align}\label{eq:omj}
\tilde\Omega_{k}(\d, R, \varepsilon) := \{\bx\in{\sf{U}}_{k}(\d, R): |x_k-x_N|<\varepsilon\},
\end{align}
together with its closure, belongs to $\tilde\Omega_{j,k}$
for all $\varepsilon\in (0, \varepsilon_0)$:
\begin{align}\label{eq:omdere}
\overline{\tilde\Omega_{k}(\d, R, \varepsilon)}\subset \tilde\Omega_{j,k},\quad \forall
\varepsilon\in (0, \varepsilon_0).
\end{align}
Now we specify $\plainC\infty_0$ cutoffs supported on the domains
$\tilde\Omega_{k}(\d, R, \varepsilon)$.
Let $\t\in\plainC\infty_0( \mathbb R)$ and $\zeta = 1-\t$ be as defined in \eqref{eq:sco}, \eqref{eq:sco1}.
Denote
\begin{align}\label{eq:ydel}
Y_\d(\hat\bx) = \prod_{0\le l < s \le N-1} \zeta\big(|x_l-x_s|(4\d)^{-1}\big).
\end{align}
By the definition of $\zeta$,
\begin{align*}
\supp Y_\d\subset
\bigcap_{0\le l<s\le N-1} {\sf S}_{l, s}(2\d),
\end{align*}
where ${\sf S}_{l, s}(\ \cdot\ )$ is defined in \eqref{eq:slsd}.
As $\t \ge 0$ and $\zeta\le 1$, we have
\begin{align}\label{eq:comply}
1-Y_\d(\hat\bx)\le \sum_{0\le l<s\le N-1} \t\big(|x_l-x_s|(4\d)^{-1}\big).
\end{align}
Furthermore, due to \eqref{eq:sco1}, for any $\varepsilon\le \d$ we have
\begin{align}\label{eq:partun}
Y_\d(\hat\bx)\sum_{j=0}^{N-1} \t\big(|x-x_j|\varepsilon^{-1}\big)
+ Y_\d(\hat\bx)\prod_{j=0}^{N-1} \zeta\big(|x-x_j|\varepsilon^{-1}\big) = Y_\d(\hat\bx).
\end{align}
Define also cut-offs at infinity. Denote
\begin{align}\label{eq:qr}
Q_R(\hat\bx) = \prod_{1\le l\le N-1} \t\big(|x_l|R^{-1}\big),\quad
K_R(x) = \t\big(|x|R^{-1}\big).
\end{align}
As shown in \cite[Lemma 5.2]{Sobolev2021},
\begin{align}\label{eq:cutoff}
\textup{for all}\ \ &\ \varepsilon<\min\{\varepsilon_0, \d\}\quad
\textup{the support of the function}\notag\\
&\ Q_R(\hat\bx) K_R(x) Y_\d(\hat\bx) \t\big(|x-x_k|\varepsilon^{-1}\big)
\ \
\textup{belongs to}\ \ \tilde\Omega_k(\d, R, \varepsilon)
\end{align}
for all $k = 0, 1, \dots, N-1$.
\subsection{Approximation of the kernels ${\sf{v}}_j$}
Using the cut-offs introduced above we construct a convenient approximation
for the kernels ${\sf{v}}_j(\hat\bx, x)$.
Taking, if necessary, a smaller $\varepsilon_0$ in \eqref{eq:omdere},
we will assume that $\varepsilon_0(\d, R) \le \d$, and hence for all
$\varepsilon < \varepsilon_0(\d, R)$,
apart from the inclusion \eqref{eq:omdere}
we also have \eqref{eq:cutoff}.
Thus, for these values of $\varepsilon$
the real analytic functions $\tilde\xi_{j,k}, \tilde\eta_{j,k}$
are well-defined on the support of the function \eqref{eq:cutoff},
and hence the vector-valued kernel
\begin{align}\label{eq:upsilon}
\Upsilon_j[\d,R,\varepsilon](\hat\bx, x)
= Q_R(\hat\bx) Y_\d(\hat\bx) K_R(x)\sum_{k=1}^{N-1}\t\big(|x-x_k|\varepsilon^{-1}\big)
\frac{x-x_k}{|x-x_k|}\tilde\eta_{j, k}(\hat\bx, x),
\end{align}
is well-defined for all $(\hat\bx, x)\in \mathbb R^{3N}$, and each of the functions
\begin{align*}
Q_R(\hat\bx) Y_\d(\hat\bx) K_R(x) \t\big(|x-x_k|\varepsilon^{-1}\big)
\tilde\eta_{j,k}(\hat\bx, x),\quad k = 1, 2, \dots, N-1,
\end{align*}
belongs to $\plainC\infty_0( \mathbb R^{3N})$.
Our objective is to prove that the vector-valued kernel
\begin{align*}
\boldsymbol\Upsilon[\d, R, \varepsilon](\hat\bx, x)
= \big\{\Upsilon_j[\d,R,\varepsilon](\hat\bx, x)\big\}_{j=1}^N
\end{align*}
is an approximation for the kernel ${\sf{V}}(\hat\bx, x)$ (see \eqref{eq:bpsi}) in the following sense.
\begin{lem}\label{lem:central}
The following relations hold:
\begin{align*}
{\sf{G}}_1(\boldsymbol{\sf{V}}) = \lim\limits_{\substack{\d\to 0\\ R\to\infty}} \lim_{\varepsilon\to 0}
{\sf{G}}_1\big(\iop(\boldsymbol\Upsilon[\d, R, \varepsilon])\big),\quad
{\sf{g}}_1(\boldsymbol{\sf{V}}) = \lim\limits_{\substack{\d\to 0\\ R\to\infty}} \lim_{\varepsilon\to 0}
{\sf{g}}_1\big(\iop(\boldsymbol\Upsilon[\d, R, \varepsilon])\big),
\end{align*}
where the limits on the right-hand side exist.
\end{lem}
This lemma in combination with Proposition \ref{prop:gradp} is sufficient for the proof
of Theorem \ref{thm:ttov}, which is the subject of the next subsection.
\subsection{Proof of Theorems \ref{thm:etadiag}, \ref{thm:ttov} and \ref{thm:maincompl}}
\label{subsect:proofs}
First we apply Proposition \ref{prop:gradp} to the operator
$\iop\big(\boldsymbol\Upsilon[\d, R, \varepsilon]\big)$.
\begin{lem}
The operator $\iop(\boldsymbol\Upsilon[\d, R, \varepsilon])$ belongs to $\BS_{1, \infty}$
for all $\d >0, R>0, \varepsilon<\varepsilon_0(\d, R)$
and
\begin{align}\label{eq:upsas}
{\sf{G}}_1\big(\iop(\boldsymbol\Upsilon[\d, R, \varepsilon])\big)
= {\sf{g}}_1\big(\iop(\boldsymbol\Upsilon[\d, R, \varepsilon])\big)
= \frac{4}{3\pi} \int K_R(t) H_{\d, R}(t) dt,
\end{align}
where
\begin{align*}
H_{\d, R}(t) = Q_R(t) Y_\d(t)\, \big(|\tilde\eta_{1, 1}(t, t)|^2 + |\tilde\eta_{2, 1}(t, t)|^2\big)^{1/2},\ \textup{if}\ N=2,
\end{align*}
and
\begin{align}\label{eq:hdr}
H_{\d, R}(t) =
\bigg[\sum_{j=1}^N\sum_{k=1}^{N-1}\int_{ \mathbb R^{3N-6}} \big|
Q_R(\tilde\bx_{k, N}, t) Y_\d(\tilde\bx_{k, N}, t)
\tilde\eta_{j, k}(\tilde\bx_{k, N}, t, t) \big|^2 d\tilde\bx_{k, N}\bigg]^{\frac{1}{2}},\ \textup{if}\ N\ge 3.
\end{align}
\end{lem}
\begin{proof}
The kernel $\boldsymbol\Upsilon[\d, R, \varepsilon]$ (see \eqref{eq:upsilon}) has the form \eqref{eq:cm}
with
\begin{align*}
a(x) = K_{2R}(x),\ &\ b_{j,k}(\hat\bx) = Q_{2R}(\hat\bx) Y_{\d/2}(\hat\bx),\\
\b_{j, k}(\hat\bx, x) = &\ \t\big(|x-x_k|\varepsilon^{-1}\big) \tilde\eta_{j, k}(\hat\bx, x)
Q_{R}(\hat\bx) Y_{\d}(\hat\bx)K_{R}(x),
\end{align*}
and the homogeneous function $\Phi(x) = x|x|^{-1}$.
Here we have used the fact that
\begin{align*}
Q_R(\hat\bx)Q_{2R}(\hat\bx) = Q_R(\hat\bx),\quad
Y_\d(\hat\bx) Y_{\d/2}(\hat\bx) = Y_\d(\hat\bx)
\quad \textup{and} \quad K_R(x) K_{2R}(x) = K_R(x).
\end{align*}
The functions $a, b_{j, k}$ and $\b_{j, k}$ thus defined satisfy conditions \eqref{eq:abbeta}.
Therefore we can use Proposition \ref{prop:gradp}. It is immediate to see that in this case
the function $h$ defined in \eqref{eq:hm} coincides with $H_{\d, R}$, so that
\eqref{eq:gradp} entails \eqref{eq:upsas}, as required.
\end{proof}
\begin{proof}[Proof of Theorems \ref{thm:etadiag}, \ref{thm:ttov} and \ref{thm:maincompl}]
By Lemma \ref{lem:central} each term in the relation \eqref{eq:upsas} has a limit
as $\d\to 0, R\to\infty$. Therefore the integral on the right-hand side of
\eqref{eq:upsas} is bounded uniformly in $\d>0, R>0$.
Assume for convenience that the function $\t$ defined
in \eqref{eq:sco1} is monotone decreasing for $t\ge 0$. Then the pointwise
convergences
\begin{align*}
Y_\d(\tilde\bx_{k, N}, t)\to 1, \ \d\to 0\quad \textup{and}\quad
K_R(t)\to 1, Q_R(\tilde\bx_{k, N} , t)\to 1,\ R\to\infty,
\end{align*}
are monotone increasing. By the Monotone Convergence Theorem,
the integrand $ K_R(t) H_{\d, R}(t) $ on the right-hand side of \eqref{eq:upsas}
converges for a.e. $t\in \mathbb R^3$ as $\d\to 0, R\to\infty$ to an $\plainL1( \mathbb R^3)$-function,
which we denote by $\tilde H(t)$, and the
integral in \eqref{eq:upsas} converges to
\begin{align}\label{eq:limit}
\frac{4}{3\pi}\int \tilde H(t) dt.
\end{align}
If $N=2$, then this concludes the proof
of Theorem
\ref{thm:etadiag},
since in this case
\begin{align*}
H_{\d, R}(t)\to\big(|\tilde\eta_{1, 1}(t, t)|^2 + |\tilde\eta_{2, 1}(t, t)|^2\big)^{1/2},
\end{align*}
a.e. $t\in \mathbb R^3$, and by virtue of \eqref{eq:tH} this limit coincides with $H(t)$.
If $N\ge 3$, then the convergence to $\tilde H(t)$ implies that
for a.e. $t\in \mathbb R^3$
the function $K_R(t)H_{\d, R}(t)$, and hence $H_{\d, R}(t)$, is bounded uniformly in $\d$ and $R$.
Applying the Monotone Convergence Theorem to the integral \eqref{eq:hdr},
we conclude that the a.e.-limit
\begin{align*}
|\tilde\eta_{j,k}(\tilde\bx_{k, N}, t, t)|
= \lim_{\d\to 0, R\to\infty}\big| Q_R(\tilde\bx_{k, N}, t) Y_\d(\tilde\bx_{k, N}, t)
\tilde\eta_{j,k}(\tilde\bx_{k, N}, t, t) \big|,\
\end{align*}
belongs to $\plainL2( \mathbb R^{3N-6})$,\ a.e. $t\in \mathbb R^3$, and
\begin{align*}
\lim_{\d\to 0, R\to\infty} H_{\d, R}(t) = H(t),\quad \textup{a.e.}\quad t\in \mathbb R^3,
\end{align*}
where we have used the formula \eqref{eq:tH} for $H$. Thus
$H = \tilde H\in\plainL1( \mathbb R^3)$. As
\eqref{eq:tH} is equivalent to \eqref{eq:H},
this completes the proof of Theorem \ref{thm:etadiag}.
Furthermore, the integral
\eqref{eq:limit}
coincides with the coefficient
$B$ in \eqref{eq:coeffA}. Together with Lemma \ref{lem:central} this completes the proof of Theorem
\ref{thm:ttov}.
As explained before, Theorem \ref{thm:ttov} is
equivalent to Theorem \ref{thm:maincompl}.
\end{proof}
The rest of the paper focuses on the proof of Lemma \ref{lem:central}. The pivotal role in the proof
is played by the singular value estimates for the weighted operator
$\boldsymbol{\sf{V}}$, that are derived in the next section.
\section{Spectral bounds for the operator $\boldsymbol{\sf{V}}$}
\label{sect:prep}
Recall that the vector-valued kernel ${\sf{V}}(\hat\bx, x)$ of the operator
$\boldsymbol{\sf{V}}$ is given by \eqref{eq:bpsi} with the kernels ${\sf{v}}_j(\hat\bx, x)$ defined in
\eqref{eq:psij}.
\subsection{The weighted operator $\boldsymbol{\sf{V}}$}
We assume that the weights $a = a(x), x\in \mathbb R^3,$ and
$b = b(\hat\bx)$,
$\hat\bx\in \mathbb R^{3N-3},$ satisfy the following properties.
Denote $\mathcal C_n = (0, 1)^3 + n$,\ $n\in\mathbb Z^3$.
Let $\varkappa_l>0$ be the constants in the exponential bounds \eqref{eq:exp} and \eqref{eq:FS}.
We assume that $b\in\plainL\infty( \mathbb R^{3N-3})$ and that the weight
$a\in\plainL3_{\textup{\tiny loc}}( \mathbb R^3)$
is such that
\begin{align*}
\sup_{n\in\mathbb Z^3} \|a\|_{\plainL3(\mathcal C_n)}<\infty,
\end{align*}
so that both
\begin{align}\label{eq:Sq}
R_\varkappa(a) = \bigg[\sum_{n\in\mathbb Z^3} e^{- \frac{1}{2}\varkappa|n|_{\scaleto{1}{3pt}}}
\|a\|_{\plainL3(\mathcal C_n)}^{\frac{1}{2}}\bigg]^2,
\end{align}
and
\begin{align}\label{eq:mb}
M_\varkappa(b) = \biggl[\int_{ \mathbb R^{3N-3}}
|b(\hat\bx)|^2 e^{-2\varkappa|\hat\bx|_{\scaleto{1}{3pt}}}
d\hat\bx\biggr]^{\frac{1}{2}},
\end{align}
are finite for any $\varkappa>0$.
Note that in \eqref{eq:Sq} and \eqref{eq:mb} we use the $\ell^1$-norm $|\bx|_1$ instead of the
Euclidean norm $|\bx|$.
We do this for computational convenience
later in the proofs.
Recall that the functional ${\sf{G}}_p$ is defined in \eqref{eq:limsupinf}.
Our objective is to prove the following theorem.
\begin{thm}\label{thm:psifull}
Let the weights $a$ and $b$ be as described above.
Then $b\, \iop({\sf{v}}_j) a \in\BS_{1, \infty}$, $j= 1, 2, \dots, N$, and
there exists a $\varkappa>0$ such that
\begin{align}
\|b\, \iop({\sf{v}}_j) a\|_{1, \infty}
\lesssim &\ \|b\|_{\plainL\infty} R_\varkappa(a),\label{eq:vfullest}\\
{\sf{G}}_1(b\, \iop({\sf{v}}_j) a)
\lesssim &\ M_\varkappa(b) R_\varkappa(a),\label{eq:vfull}
\end{align}
for each $j = 1, 2, \dots, N$.
\end{thm}
By \eqref{eq:blockvec} the above bounds imply the
same bounds for the operator $b\,\boldsymbol{\sf{V}} a$.
We prove Theorem \ref{thm:psifull}
for the kernel ${\sf{v}}_N(\hat\bx, x) =: {\sf{v}}(\hat\bx, x)$.
Bounds for the remaining $j$'s follow by a simple permutation of variables.
\subsection{Regularity of the eigenfunction}
We need the bounds for the derivatives of the eigenfunction away from the
coalescence points obtained by
S. Fournais and T.\O. S\o rensen in \cite{FS2018}.
Let
\begin{align*}
\dc(\hat\bx, x) = \min\{|x|, |x-x_j|, \ j = 1, 2, \dots, N-1\}.
\end{align*}
The following proposition is a consequence of \cite[Corollary 1.3]{FS2018}.
Recall that $|\ \cdot\ |_{{\scaleto{1}{3pt}}}$ denotes the $\ell^1$-norm.
\begin{prop}
Assume that $\psi$ satisfies
\eqref{eq:exp}.
Then for all multi-indices $m\in\mathbb N_0^3$, $|m|_1\ge 1$, we have
\begin{align}\label{eq:FS}
|\partial_x^m \psi(\hat\bx, x)|
\lesssim\dc(\hat\bx, x)^{1-l} e^{-\varkappa_{\scalel{l}} {|\bx|_{\scaleto{1}{3pt}}}}, \ l = |m|_1,
\end{align}
with some $\varkappa_l >0$.
\end{prop}
The precise values of the constants $\varkappa_l>0$ are insignificant for us,
and therefore we may assume that
\begin{align}\label{eq:monotone}
\varkappa_1\ge \varkappa_2\ge \dots >0.
\end{align}
Let us rewrite the bounds \eqref{eq:FS}
using the notation $x_0 = 0$.
With this convention, we have
\begin{align*}
\dc(\hat\bx, x) = \min\{|x-x_j|, \ j = 0, 1, 2, \dots, N-1\}
\end{align*}
and
\begin{align*}
\dc(\hat\bx, x)^{-1}\le \sum_{0\le j\le N-1} |x-x_j|^{-1}.
\end{align*}
Therefore \eqref{eq:FS} implies that
\begin{align}
|\partial_x^m {\sf{v}}(\hat\bx, x)|\lesssim &
e^{-\varkappa_{\scalel{l+1}} |\bx|_{\scaleto{1}{3pt}}} \big(\min_{0\le j\le N-1}|x-x_j|\big)^{-l}\notag\\
\lesssim &
e^{-\varkappa_{\scalel{l+1}} |\bx|_{\scaleto{1}{3pt}}}
\sum_{0\le j\le N-1} |x-x_j|^{-l},\quad \ l = |m|_1,\label{eq:FS2}
\end{align}
for all $m\in\mathbb N_0^3$.
The plan of the proof of Theorem \ref{thm:psifull} follows the approach of \cite{Sobolev2020},
where a similar theorem was proved for the operator $\boldsymbol\Psi$.
First we study the operators $\iop({\sf{v}})\mathbbm 1_{\mathcal C_n}$, $n\in\mathbb Z^3$.
With the help of various cut-offs, including $Y_\nu$, for each fixed $n$ this
operator is split into a sum of several
operators depending on the parameter $\nu>0$,
whose singular values are estimated in different ways using Propositions
\ref{prop:BS} and \ref{prop:BS1}.
A key point is that we keep track of the explicit dependence on the parameter $\nu$.
Thus, although none of these bounds is sharp, in the end, when collecting all of them
together in Subsect.
\ref{subsect:tog}, we get the sharp bounds \eqref{eq:vfullest}, \eqref{eq:vfull}
by making an optimal choice of the parameter $\nu$.
Whenever we consider the operators $b\iop(\ \cdot\ )\mathbbm 1_{\mathcal C_n}a$ with weights $a, b$,
the constants in all the bounds are independent of the weights and of the parameter $n\in\mathbb Z^3$.
Recall also that we use the notation $x_0 = 0$. The symbol $\sum_j$ (resp. $\prod_j$)
denotes summation (resp. the product) over all $j = 0, 1, \dots, N-1$.
\subsection{Partition of the kernel ${\sf{v}}(\hat\bx, x)$}
Let the function $Y_\nu$, $\nu >0$,
be as defined in \eqref{eq:ydel}. We use the new parameter
$\nu$ in order not to confuse it with the parameter $\d$ in Lemma \ref{lem:central}.
Represent ${\sf{v}}$ as follows:
\begin{align}\label{eq:split1}
{\sf{v}} = &\ {\sf{v}}_1^{(\nu)} + {\sf{v}}_2^{(\nu)},\\
{\sf{v}}_1^{(\nu)}(\hat\bx, x) = &\ Y_\nu(\hat\bx){\sf{v}}(\hat\bx, x),
\notag\\
{\sf{v}}_2^{(\nu)}(\hat\bx, x)
= & \ \big(1- Y_\nu(\hat\bx)\big){\sf{v}}(\hat\bx, x).\notag
\end{align}
It follows from \eqref{eq:comply} and \eqref{eq:FS2} that
\begin{align}
|\partial_x^m
{\sf{v}}^{(\nu)}_2(\hat\bx, x)|
\lesssim
e^{-\varkappa_{|m|_{\scaleto{1}{3pt}}+1}|\bx|_{\scaleto{1}{3pt}}}\sum_{0\le j\le N-1} |x-x_j|^{-|m|_{\scaleto{1}{3pt}}}
\sum_{0\le l<s\le N-1} \mathbbm 1_{\{|x_l-x_s|<4\nu\}}(\hat\bx),\label{eq:dpsidel2}
\end{align}
for all $m\in\mathbb N_0^3$, with implicit constants independent of
$\nu>0$.
In the next lemma we consider the operator $\iop({\sf{v}}_{2}^{(\nu)})$ with
the weight $b = 1$ and an arbitrary weight
$a\in\plainL3(\mathcal C_n)$.
Further on we use the straightforward inequality
\begin{align}\label{eq:expon}
\max_{x:|x-n|\le 6} e^{-\varkappa_{\scalel{l}}|\bx|_{\scaleto{1}{3pt}}}
\lesssim e^{-\varkappa_{\scalel{l}}|\hat\bx|_{\scaleto{1}{3pt}}} e^{-\varkappa_{\scalel{l}}|n|_{\scaleto{1}{3pt}}}.
\end{align}
\begin{lem}\label{lem:psidel}
The operator
$\iop({\sf{v}}_2^{(\nu)}) a\mathbbm 1_{\mathcal C_n}$ belongs to $\BS_{6/5, \infty}$ and
\begin{align}
\| \iop({\sf{v}}^{(\nu)}_2)\, a\, \mathbbm 1_{\mathcal C_n}\|_{6/5, \infty}
\lesssim &\ e^{-\varkappa_{\scalel{2}}|n|_{\scaleto{1}{3pt}}}
\nu^{\frac{3}{2}}\|a\|_{\plainL3(\mathcal C_n)},\label{eq:dpsidelcube}
\end{align}
for all $n\in\mathbb Z^3$ and all $\nu>0$.
\end{lem}
\begin{proof}
By \eqref{eq:dpsidel2}, ${\sf{v}}_2^{(\nu)}(\hat\bx,\ \cdot\ )\in\plainH1( \mathbb R^3)$ for a.e. $\hat\bx\in \mathbb R^{3N-3}$,
so in order to estimate the singular values,
we consider the operator on the left-hand side of \eqref{eq:dpsidelcube} as an operator
on $\plainL2( \mathbb R^3)$, and hence we use Proposition \ref{prop:BS1} with
the weight $a(x)\mathbbm 1_{\mathcal C_n}(x)$, and parameters $l=1$, $d = 3$ and $r = 3$.
Let $\t$ be the cut-off defined in \eqref{eq:sco}. Since
\[
{\sf{v}}_2^{(\nu)}(\hat\bx, x) \t\big(|x-n|/6\big)\mathbbm 1_{\mathcal C_n}(x) =
{\sf{v}}_2^{(\nu)}(\hat\bx, x) \mathbbm 1_{\mathcal C_n}(x),
\]
instead of the function ${\sf{v}}_2^{(\nu)}$ we study the kernel
$T(\hat\bx, x) = {\sf{v}}_2^{(\nu)}(\hat\bx, x) \t\big(|x-n|/6\big)$.
According to \eqref{eq:dpsidel2} and \eqref{eq:expon},
$T(\hat\bx, \ \cdot\ )\in \plainH1( \mathbb R^3)$,
for a.e. $\hat\bx\in \mathbb R^{3N-3}$ and
\begin{align*}
e^{2\varkappa_{\scalel{2}}|\hat\bx|_{\scaleto{1}{3pt}}}
&\ \|T(\hat\bx,\ \cdot\ )\|_{\plainH1}^2 \\
\lesssim &\ e^{-2\varkappa_{\scalel{2}}|n|_{\scaleto{1}{3pt}}}
\bigg(\int\limits_{|x-n|<6} \sum_{0\le j\le N-1} |x-x_j|^{-2}\, dx\bigg)
\sum_{0\le l<s\le N-1}
\mathbbm 1_{\{|x_l-x_s|<4\nu\}}(\hat\bx) \\[0.2cm]
\lesssim &\ e^{-2\varkappa_{\scalel{2}} |n|_{\scaleto{1}{3pt}}}
\sum_{0\le l<s\le N-1}
\mathbbm 1_{\{|x_l-x_s|<4\nu\}}(\hat\bx).
\end{align*}
Using Proposition \ref{prop:BS1} with $l=1, d=3, r=3$
we get that the operator on the left-hand side of \eqref{eq:dpsidelcube} belongs
to $\BS_{q, \infty}$ with $1/q = 1/2+1/3 = 5/6$ and
\begin{align*}
\|\iop({\sf{v}}^{(\nu)}_2)\, a\,\mathbbm 1_{\mathcal C_n}\|_{6/5, \infty}
\lesssim &\ \bigg[\int_{ \mathbb R^{3N-3}}
\| T(\hat\bx, \ \cdot\ )\|_{\plainH1}^2\,d\hat\bx \biggl]^{\frac{1}{2}}\, \|a\|_{\plainL3(\mathcal C_n)}
\notag\\
\lesssim &\ e^{-\varkappa_{\scalel{2}} |n|_{\scaleto{1}{3pt}}}\bigg[\int_{ \mathbb R^{3N-3}}
e^{-2\varkappa_{\scalel{2}}|\hat\bx|_{\scaleto{1}{3pt}}} \sum_{0\le l < s\le N-1}\mathbbm 1_{\{|x_l-x_s|<4\nu\}}(\hat\bx)\,
d\hat\bx
\bigg]^{\frac{1}{2}}\,\|a\|_{\plainL3(\mathcal C_n)} \notag\\
\lesssim &\ e^{-\varkappa_{\scalel{2}}|n|_{\scaleto{1}{3pt}}} \nu^{\frac{3}{2}}\,\|a\|_{\plainL3(\mathcal C_n)},
\end{align*}
which gives \eqref{eq:dpsidelcube}.
\end{proof}
In its turn, the kernel ${\sf{v}}_1^{(\nu)}$ splits into two components:
\begin{align}\label{eq:split2}
{\sf{v}}_1^{(\nu)} = &\ {\sf{v}}_{11}^{(\nu)} + {\sf{v}}_{12}^{(\nu)},\\
{\sf{v}}_{11}^{(\nu)}(\hat\bx, x)
= &\ Y_\nu(\hat\bx)\sum_{0\le j\le N-1} \t\big(|x-x_j|\nu^{-1}\big) {\sf{v}}(\hat\bx, x),\notag\\
{\sf{v}}^{(\nu)}_{12}(\hat\bx, x) = &\ Y_\nu(\hat\bx)
\big[1 - \sum_{0\le j\le N-1} \t\big(|x-x_j|\nu^{-1}\big)\big] {\sf{v}}(\hat\bx, x).\notag
\end{align}
By \eqref{eq:partun} we have
\begin{align*}
{\sf{v}}_{12}^{(\nu)}(\hat\bx, x) = &\ Y_\nu(\hat\bx){\sf{v}}(\hat\bx, x) \prod_{j=0}^{N-1}
\zeta\big(|x-x_j|\nu^{-1}\big).
\end{align*}
To estimate the derivatives of the above functions first observe that
\begin{align*}
\big|\partial_x^m \t\big(|x|\nu^{-1}\big)\big|
\lesssim |x|^{-|m|_{\scaleto{1}{3pt}}}\mathbbm 1_{\{|x|<\nu\}}(x),\quad
\big|\partial_x^m \zeta\big(|x|\nu^{-1}\big)\big|
\lesssim |x|^{-|m|_{\scaleto{1}{3pt}}}\mathbbm 1_{\{|x|>\nu/2\}}(x), \quad m\in\mathbb N_0^3,
\end{align*}
uniformly in $\nu>0$, and hence
\begin{align*}
\big|\partial_x^m\sum_{j=0}^{N-1} \t\big(|x-x_j|\nu^{-1}\big)\big|
\lesssim &\ \sum_{j=0}^{N-1} |x-x_j|^{-|m|_{\scaleto{1}{3pt}}}
\mathbbm 1_{\{|x-x_j|<\nu\}}(\hat\bx, x),\\
\big|\partial_x^m\prod_{j=0}^{N-1} \zeta\big(|x-x_j|\nu^{-1}\big)\big|
\lesssim &\ \bigg(\sum_{j=0}^{N-1} |x-x_j|^{-|m|_{\scaleto{1}{3pt}}}\bigg)
\prod_{j=0}^{N-1}
\mathbbm 1_{\{|x-x_j|>\nu/2\}}(\hat\bx, x),\quad m\in\mathbb N_0^3.
\end{align*}
Together with \eqref{eq:FS2} these give
\begin{align}\label{eq:dpsi12der}
\begin{cases}
\big|\partial_x^m{\sf{v}}_{11}^{(\nu)}(\hat\bx, x)\big|
\lesssim e^{-\varkappa_{|m|_{\scaleto{1}{3pt}}+1}|\bx|_{\scaleto{1}{3pt}}}
\sum_{j=0}^{N-1} |x-x_j|^{-|m|_{\scaleto{1}{3pt}}} \mathbbm 1_{\{|x-x_j|<\nu\}}(\hat\bx, x),\\[0.4cm]
\big|\partial_x^m{\sf{v}}_{12}^{(\nu)}(\hat\bx, x)\big|
\lesssim e^{-\varkappa_{|m|_{\scaleto{1}{3pt}}+1}|\bx|_{\scaleto{1}{3pt}}}
\sum_{j=0}^{N-1} |x-x_j|^{-|m|_{\scaleto{1}{3pt}}} \mathbbm 1_{\{|x-x_j|>\nu/2\}}(\hat\bx, x),
\end{cases}
\end{align}
for all $m\in\mathbb N_0^3$, uniformly in $\nu>0$. For the first (resp. second)
bound in \eqref{eq:dpsi12der}
we have used the first (resp. second) bound in \eqref{eq:FS2}.
Recall that the quantity $M_\varkappa(b)$ is defined in \eqref{eq:mb}.
\begin{lem}
For any $l\ge 2$ the operator $b\iop({\sf{v}}_{12}^{(\nu)})a\mathbbm 1_{\mathcal C_n}$
belongs to $\BS_{q, \infty}$ with
\begin{align}\label{eq:q}
\frac{1}{q} = \frac{1}{2} + \frac{l}{3},
\end{align}
and
\begin{align}
\|b\iop({\sf{v}}_{12}^{(\nu)})\, a\, \mathbbm 1_{\mathcal C_n}\|_{q, \infty}
\lesssim e^{-\varkappa|n|_{\scaleto{1}{3pt}}}
M_{\varkappa}(b)
\nu^{-l+\frac{3}{2}}\|a\|_{\plainL2(\mathcal C_n)},\quad
\varkappa = \varkappa_{l+1},
\label{eq:dpsi12d}
\end{align}
for all $\nu >0$,
with an implicit constant depending on $l$ only.
\end{lem}
\begin{proof}
According to \eqref{eq:dpsi12der},
${\sf{v}}_{12}^{(\nu)}(\hat\bx, \ \cdot\ )\in \plainH{l}(\mathcal C_n)$,
for a.e. $\hat\bx\in \mathbb R^{3N-3}$ with an arbitrary $l\ge 2$ and we have
\begin{align*}
\|{\sf{v}}^{(\nu)}_{12}(\hat\bx, \cdot\ )\|_{\plainH{l}}^2
\lesssim &\ e^{-2\varkappa_{\scalel{l+1}}|n|_{\scaleto{1}{3pt}}}
e^{-2\varkappa_{\scalel{l+1}}|\hat\bx|_{\scaleto{1}{3pt}}}
\int_{\mathcal C_n}
\sum_{j=0}^{N-1} |x-x_j|^{-2l} \mathbbm 1_{\{|x-x_j|>\nu/2\}}(\hat\bx, x) dx \\[0.2cm]
\lesssim &\
e^{-2\varkappa_{\scalel{l+1}}|n|_{\scaleto{1}{3pt}}} e^{-2\varkappa_{\scalel{l+1}}|\hat\bx|_{\scaleto{1}{3pt}}}\nu^{-2l+3}.
\end{align*}
Thus by Proposition \ref{prop:BS} with $l\ge 2$, $d = 3$ (so that $2l > d$) and $r = 2$,
for $q$ defined in \eqref{eq:q} we have
\begin{align*}
\|b \iop({\sf{v}}_{12}^{(\nu)}) a \mathbbm 1_{\mathcal C_n}\|_{q, \infty}
\lesssim e^{-\varkappa_{\scalel{l+1}}|n|_{{\scaleto{1}{3pt}}}}\nu^{-l+\frac{3}{2}}
\biggl[
\int_{ \mathbb R^{3N-3}} e^{-2\varkappa_{\scalel{l+1}}|\hat\bx|_{{\scaleto{1}{3pt}}}} |b(\hat\bx)|^2 d\hat\bx
\biggr]^{\frac{1}{2}}\|a\|_{\plainL2(\mathcal C_n)}.
\end{align*}
By definition \eqref{eq:mb} the bound \eqref{eq:dpsi12d} follows.
\end{proof}
\begin{lem}
The operator $b\iop({\sf{v}}_{11}^{(\nu)}) a\mathbbm 1_{\mathcal C_n}$
belongs to $\BS_{6/5, \infty}$ and
\begin{align}
\| b\iop({\sf{v}}^{(\nu)}_{11})\, a\, \mathbbm 1_{\mathcal C_n}\|_{6/5, \infty}
\lesssim &\
e^{-\varkappa|n|_{\scaleto{1}{3pt}}}
M_\varkappa(b)
\nu^{\frac{1}{2}}\|a\|_{\plainL3(\mathcal C_n)},\quad \varkappa = \varkappa_2,\label{eq:dupsidelcube}
\end{align}
for all $n\in\mathbb Z^3$ and all $\nu>0$.
\end{lem}
\begin{proof}
Similarly to the proof of Lemma \ref{lem:psidel}, we use Proposition \ref{prop:BS1} with
the kernel $T(\hat\bx, x)
= {\sf{v}}_{11}^{(\nu)}(\hat\bx, x) \t\big(|x-n|/6\big)$, the weight
$a(x)\mathbbm 1_{\mathcal C_n}(x)$ and parameters $l=1, d = 3$ and $r=3$.
According to \eqref{eq:dpsi12der},
$T(\hat\bx, \ \cdot\ )\in \plainH{1}( \mathbb R^3)$,
for a.e. $\hat\bx\in \mathbb R^{3N-3}$, and we have
\begin{align*}
e^{2\varkappa_{\scalel{2}}|\hat\bx|_{\scaleto{1}{3pt}}}
\|T(\hat\bx, \cdot\ )\|_{\plainH1}^2
\lesssim &\ e^{-2\varkappa_{\scalel{2}}|n|_{\scaleto{1}{3pt}}}\int\limits_{|x-n| < 6}
\sum_{j=0}^{N-1}|x-x_j|^{-2}\mathbbm 1_{\{|x-x_j|<\nu\}}(\hat\bx, x) dx \\[0.2cm]
\lesssim &\ e^{-2\varkappa_{\scalel{2}} |n|_{\scaleto{1}{3pt}}}\nu.
\end{align*}
Use Proposition \ref{prop:BS1} with $d = 3, l = 1$, so that $q^{-1} = 5/6$ and
$r = 3$:
\begin{align*}
\|b \iop({\sf{v}}_{11}^{(\nu)}) a \mathbbm 1_{\mathcal C_n}\|_{6/5, \infty}
\lesssim &\ \bigg[\int_{ \mathbb R^{3N-3}}
\| T(\hat\bx, \ \cdot\ )\|_{\plainH1}^2|b(\hat\bx)|^2\,d\hat\bx \biggl]^{\frac{1}{2}}\, \|a\|_{\plainL3(\mathcal C_n)}
\\
\lesssim &\ e^{-\varkappa_{\scalel{2}}|n|_{{\scaleto{1}{3pt}}}}\nu^{\frac{1}{2}}
\biggl[
\int_{ \mathbb R^{3N-3}} e^{-2\varkappa_{\scalel{2}}|\hat\bx|_{{\scaleto{1}{3pt}}}} |b(\hat\bx)|^2 d\hat\bx
\biggr]^{\frac{1}{2}}\|a\|_{\plainL3(\mathcal C_n)}.
\end{align*}
By \eqref{eq:mb} this leads to the bound \eqref{eq:dupsidelcube}.
\end{proof}
\subsection{Estimates for $\iop({\sf{v}})\mathbbm 1_{\mathcal C_n}$} \label{subsect:tog}
Now we can
put together the bounds obtained above for the
parts of the operator $\iop({\sf{v}})\mathbbm 1_{\mathcal C_n}$.
\begin{lem}\label{lem:cell}
Suppose that $b\in\plainL\infty( \mathbb R^{3N-3})$ and $a\in \plainL3(\mathcal C_n)$. Then
$b\,\iop({\sf{v}})\mathbbm 1_{\mathcal C_n} a\in \BS_{1, \infty}$ and
\begin{align}
\| b\,\iop({\sf{v}})\mathbbm 1_{\mathcal C_n} a\|_{1, \infty} \lesssim &\ e^{-\varkappa |n|_{{\scaleto{1}{3pt}}}}
\|b\|_{\plainL\infty} \|a\|_{\plainL3(\mathcal C_n)},
\label{eq:upsilonorm}\\
{\sf{G}}_1\big(b\,\iop({\sf{v}})\mathbbm 1_{\mathcal C_n} a\big)
\lesssim &\
e^{-\varkappa |n|_{\scaleto{1}{3pt}}}M_\varkappa(b)
\|a\|_{\plainL3(\mathcal C_n)},\quad \varkappa = \varkappa_3.
\label{eq:sfv}
\end{align}
\end{lem}
\begin{proof}
Without loss of generality assume that $\|b\|_{\plainL\infty}\le 1$ and
$\|a\|_{\plainL3(\mathcal C_n)}\le 1$.
By \eqref{eq:split1} and \eqref{eq:split2} for each $\nu >0$ we have
\begin{align*}
{\sf{v}} = {\sf{v}}_{11}^{(\nu)} + {\sf{v}}_{12}^{(\nu)} + {\sf{v}}_{2}^{(\nu)}.
\end{align*}
According to \eqref{eq:dpsidelcube}, \eqref{eq:dupsidelcube} and the inequality \eqref{eq:triangle},
\begin{align*}
\|b\,\iop({\sf{v}}_2^{(\nu)}+{\sf{v}}_{11}^{(\nu)})a\mathbbm 1_{\mathcal C_n}\|_{\scalet{6}/\scalet{5}, \infty}
^{\scalet{6}/\scalet{11}}
\le &\
\|\iop({\sf{v}}_2^{(\nu)})a\mathbbm 1_{\mathcal C_n}\|_{\scalet{6}/\scalet{5}, \infty}^{\scalet{6}/\scalet{11}}
+ \|b\,\iop({\sf{v}}_{11}^{(\nu)})a\mathbbm 1_{\mathcal C_n}\|_{\scalet{6}/\scalet{5}, \infty}^{\scalet{6}/\scalet{11}}\\
\lesssim &\ \big[e^{-\varkappa_{\scalel{2}} |n|_{\scaleto{1}{3pt}}}
\big(\nu^{\frac{1}{2}} M_{\varkappa_{\scalel{2}}}(b)
+ \nu^{\frac{3}{2}}\big)\big]
^{\scalet{6}/\scalet{11}},
\end{align*}
so that, by definition \eqref{eq:quasi},
\begin{align}\label{eq:usw}
s_k\big(b\,\iop({\sf{v}}_2^{(\nu)}+{\sf{v}}_{11}^{(\nu)})a\mathbbm 1_{\mathcal C_n}\big)\notag\\[0.2cm]
\lesssim &\
e^{-\varkappa_{\scalel{2}} |n|_{\scaleto{1}{3pt}}}\big(\nu^{\frac{1}{2}} M_{\varkappa_{\scalel{2}}}(b) + \nu^{\frac{3}{2}}
\big)
k^{-\frac{5}{6}},\quad k = 1, 2, \dots.
\end{align}
Since $\|a\|_{\plainL2(\mathcal C_n)}\le \|a\|_{\plainL3(\mathcal C_n)}\le 1$, it follows from \eqref{eq:dpsi12d}
that
\begin{align}\label{eq:uss}
s_k\big(b\,\iop({\sf{v}}_{12}^{(\nu)})a\mathbbm 1_{\mathcal C_n}\big)
\lesssim e^{-\varkappa_{\scalel{l+1}}|n|_{\scaleto{1}{3pt}}}
M_{\varkappa_{\scalel{l+1}}}(b) \nu^{-l+\frac{3}{2}} k^{- \frac{1}{2} - \frac{l}{3}},
\end{align}
for all $l \ge 2$.
Due to \eqref{eq:2k} and \eqref{eq:monotone}, combining \eqref{eq:usw} and \eqref{eq:uss}, we get the estimate
\begin{align}\label{eq:u2kminus}
s_{2k}\big(b\,\iop({\sf{v}})\mathbbm 1_{\mathcal C_n} a\big)\le &\ s_{2k-1}\big(b\,\iop({\sf{v}})\mathbbm 1_{\mathcal C_n} a\big)\notag\\
\lesssim &\ e^{-\varkappa |n|_{\scaleto{1}{3pt}}}\bigg[\big(\nu^{\frac{1}{2}}
M_{\varkappa}(b) + \nu^{\frac{3}{2}}\big)k^{-\frac{5}{6}}
+ \nu^{-l+\frac{3}{2}} M_{\varkappa}(b) k^{-\frac{1}{2} - \frac{l}{3}}
\bigg], \quad \varkappa = \varkappa_{l+1},
\end{align}
where we have used that
$M_{\varkappa_{\scalel{2}}}(b)\le M_{\varkappa_{\scalel{l+1}}}(b)$, $l\ge 2$.
Rewrite the expression in the square brackets, gathering
the terms containing $M_\varkappa(b)$:
\begin{align*
M_\varkappa(b)\big(
\nu^{\frac{1}{2}} k^{-\frac{5}{6}}
+ &\ \nu^{-l+\frac{3}{2}} k^{-\frac{1}{2} - \frac{l}{3}}
\big)
+ \nu^{\frac{3}{2}} k^{-\frac{5}{6}}\\
= &\ M_\varkappa(b) \nu^{\frac{1}{2}} k^{-\frac{5}{6}}\big(
1 + \nu^{-l+1} k^{\frac{1-l}{3}}
\big)
+ \nu^{\frac{3}{2}} k^{-\frac{5}{6}}.
\end{align*}
Since $\nu>0$ is arbitrary, we can pick $\nu = \nu_k = k^{-1/3}$ so that
\begin{align*}
\nu^{-l+1} k^{\frac{1-l}{3}} = 1,\quad
\nu^{\frac{1}{2}} k^{-\frac{5}{6}} = k^{-1},
\quad \nu^{\frac{3}{2}} k^{-\frac{5}{6}} = k^{- \frac{4}{3}}.
\end{align*}
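As a quick numerical sanity check (ours, not part of the proof), the exponent arithmetic behind the choice $\nu_k = k^{-1/3}$ can be confirmed directly; a minimal Python sketch, with the helper name our own:

```python
# Sanity check (illustration only) of the choice nu = k^(-1/3) made above:
# with this choice the three displayed identities hold for any k and l >= 2.
def check_nu_choice(k, l):
    nu = k ** (-1.0 / 3.0)
    t1 = nu ** (-l + 1) * k ** ((1 - l) / 3.0)  # claimed to equal 1
    t2 = nu ** 0.5 * k ** (-5.0 / 6.0)          # claimed to equal k^(-1)
    t3 = nu ** 1.5 * k ** (-5.0 / 6.0)          # claimed to equal k^(-4/3)
    return t1, t2, t3

for k in (2, 10, 1000):
    for l in (2, 3, 5):
        t1, t2, t3 = check_nu_choice(k, l)
        assert abs(t1 - 1.0) < 1e-9
        assert abs(t2 - k ** (-1.0)) < 1e-9
        assert abs(t3 - k ** (-4.0 / 3.0)) < 1e-9
```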
Thus the bound \eqref{eq:u2kminus} takes the form
\begin{align}\label{eq:utransform}
s_{2k}\big(b\,\iop({\sf{v}})\mathbbm 1_{\mathcal C_n} a\big)\le &\ s_{2k-1}\big(b\,\iop({\sf{v}})\mathbbm 1_{\mathcal C_n} a\big)
\lesssim e^{-\varkappa |n|_{{\scaleto{1}{3pt}}}}
\big(M_\varkappa(b)k^{-1} + k^{- \frac{4}{3}}\big).
\end{align}
Using the bound $M_\varkappa(b)\lesssim \|b\|_{\plainL\infty}\le 1$
we conclude that
\begin{align*}
s_k\big(b\,\iop({\sf{v}})\mathbbm 1_{\mathcal C_n} a\big)\lesssim e^{-\varkappa |n|_{{\scaleto{1}{3pt}}}} k^{-1}.
\end{align*}
This leads to \eqref{eq:upsilonorm} if one takes $l=2$, so that $\varkappa = \varkappa_3$.
In order to obtain \eqref{eq:sfv}, we use \eqref{eq:utransform}
to write
\begin{align*}
\limsup_{k\to\infty}k
s_k\big(b\,\iop({\sf{v}})\mathbbm 1_{\mathcal C_n} a\big)
\lesssim
e^{-\varkappa |n|_{\scaleto{1}{3pt}}}
\limsup_{k\to \infty}\big(M_\varkappa(b)
+ k^{-\frac{1}{3}}\big)
= e^{-\varkappa |n|_{\scaleto{1}{3pt}}} M_\varkappa(b).
\end{align*}
Taking again $l=2$, we obtain \eqref{eq:sfv}.
\end{proof}
\subsection{Proof of Theorem \ref{thm:psifull}}
Here we sum up over $n\in\mathbb Z^3$ the estimates in Lemma \ref{lem:cell}
to complete the proof
of Theorem \ref{thm:psifull}. Recall again that the
quantities $R_\varkappa(a)$ and $M_\varkappa(b)$
are defined in \eqref{eq:Sq} and \eqref{eq:mb} respectively.
As noted earlier, it suffices to prove the bounds \eqref{eq:vfullest} and \eqref{eq:vfull}
for the operator $b\,\iop({\sf{v}}) a$, where ${\sf{v}} := {\sf{v}}_N$.
Since ${\sf{v}} = \sum_{n\in\mathbb Z^3} {\sf{v}}\mathbbm 1_{\mathcal C_n}$, we have,
by \eqref{eq:triangle} and \eqref{eq:upsilonorm},
\begin{align*}
\|b\,\iop({\sf{v}}) a\|_{1, \infty}^{\frac{\scalet{1}}{\scalet{2}}}\le &\ \sum_{n\in\mathbb Z^3}
\|b\,\iop({\sf{v}})\mathbbm 1_{\mathcal C_n} a\|_{1, \infty}^{\frac{\scalet{1}}{\scalet{2}}}\\
\lesssim &\ \|b\|_{\plainL\infty}^{\frac{1}{2}}\sum_{n\in\mathbb Z^3}
e^{-\frac{1}{2}\varkappa |n|_{\scaleto{1}{3pt}}}
\|a\|_{\plainL3(\mathcal C_n)}^{\frac{1}{2}}
= \|b\|_{\plainL\infty}^{\frac{1}{2}} \big(R_\varkappa(a)\big)^{\frac{1}{2}},\quad \varkappa=\varkappa_3.
\end{align*}
This proves \eqref{eq:vfullest}.
To prove \eqref{eq:vfull} observe that the previous calculation shows that
condition \eqref{eq:conv} holds for the operators $T_n = {\sf{v}}\mathbbm 1_{\mathcal C_n}$, and hence we can use
Lemma \ref{lem:triangleg}. According to \eqref{eq:triangleg}
and \eqref{eq:sfv},
\begin{align*}
{\sf{G}}_1\big(b\,\iop({\sf{v}}) a\big)^{\frac{1}{2}}\le &\
\sum_{n\in\mathbb Z^3} {\sf{G}}_1\big(b\,\iop({\sf{v}})\mathbbm 1_{\mathcal C_n} a\big)^{\frac{1}{2}}\\
\lesssim &\ \big(M_\varkappa(b)\big)^{\frac{1}{2}}\sum_{n\in\mathbb Z^3}
e^{-\frac{1}{2}\varkappa |n|_{\scaleto{1}{3pt}}}
\|a\|_{\plainL3(\mathcal C_n)}^{\frac{1}{2}}
= \big(M_\varkappa (b)\big)^{\frac{1}{2}} \big(R_\varkappa(a)\big)^{\frac{1}{2}},\quad \varkappa=\varkappa_3.
\end{align*}
This completes the proof of Theorem \ref{thm:psifull}.
\qed
\section{Proof of Lemma \ref{lem:central}}\label{sect:trim}
The proof of Lemma \ref{lem:central} is divided into several steps, each of which is justified
using either Corollary \ref{cor:zero} or Corollary \ref{cor:zero1}.
The first stage is described in the next lemma. Recall that the cut-offs $Y_\d$ and $Q_R$
are defined in \eqref{eq:ydel} and \eqref{eq:qr}.
\begin{lem}
The following relations hold:
\begin{align}\label{eq:asymp1}
{\sf{G}}_1(\boldsymbol{\sf{V}}) = \lim\limits_{\substack{\d\to 0\\ R\to\infty}}
{\sf{G}}_1(Q_RY_\d \boldsymbol{\sf{V}} K_R),\quad
{\sf{g}}_1(\boldsymbol{\sf{V}})
= \lim\limits_{\substack{\d\to 0\\ R\to\infty}} {\sf{g}}_1(Q_RY_\d \boldsymbol{\sf{V}} K_R),
\end{align}
where the limits on the right-hand side exist.
\end{lem}
\begin{proof} First we check that
\begin{align}\label{eq:psidelR}
\begin{cases}
\lim\limits_{\d\to 0} {\sf{G}}_1\big((I- Y_\d)\boldsymbol{\sf{V}}\big) = 0,\\[0.3cm]
\lim\limits_{R\to \infty} {\sf{G}}_1\big((I- Q_R)\boldsymbol{\sf{V}}\big) = 0,\
\lim\limits_{R\to \infty} {\sf{G}}_1\big(\boldsymbol{\sf{V}}(I- K_R)\big) = 0.
\end{cases}
\end{align}
Consider first $(I-Y_\d)\boldsymbol{\sf{V}}$.
By virtue of \eqref{eq:comply}, it follows from \eqref{eq:vfull} that
for some $\varkappa >0$, we have
\begin{align*}
{\sf{G}}_1\big((1- Y_\d)\boldsymbol{\sf{V}}\big)
\lesssim &\ M_\varkappa(1-Y_\d) \\
\lesssim &\ \sum_{0\le l < s\le N-1}
\bigg[
\int \t\big(|x_l-x_s|(4\d)^{-1}\big)^2 e^{-2\varkappa |\hat\bx| } d\hat\bx
\bigg]^{1/2}\lesssim \d^{3/2}\to 0,\ \d\to 0,
\end{align*}
and hence the first relation in \eqref{eq:psidelR} holds.
In a similar way one estimates $(I-Q_R)\boldsymbol{\sf{V}}$ and
$\boldsymbol{\sf{V}} (I-K_R)$.
Consider, for example, the first of these operators.
Since
\begin{align*}
1-Q_R(\hat\bx)\le \sum_{1\le l\le N-1} \zeta\big(|x_l|R^{-1}\big),
\end{align*}
it follows from \eqref{eq:vfull} again that
\begin{align*}
{\sf{G}}_1\big((I-Q_R)\boldsymbol{\sf{V}}\big)\lesssim &\ M_\varkappa(1-Q_R)\\
\lesssim &\ \sum_{1\le l\le N-1}
\bigg[
\int_{ \mathbb R^3} \zeta(|x_l| R^{-1})^2 e^{-2\varkappa |x_l| }\, d x_l
\bigg]^{1/2}\lesssim e^{- \varkappa R/2}\to 0, \ R\to\infty,
\end{align*}
whence the second equality in \eqref{eq:psidelR}. The third one is derived similarly.
Represent $\boldsymbol{\sf{V}}$ in the form
\begin{align*}
\boldsymbol{\sf{V}} = Q_R Y_\d\boldsymbol{\sf{V}} K_R + (I-Q_R)\boldsymbol{\sf{V}} + Q_R(1-Y_\d)\boldsymbol{\sf{V}} +
Q_R Y_\d \boldsymbol{\sf{V}} (I-K_R).
\end{align*}
According to \eqref{eq:trianglep},
\begin{align*}
{\sf{G}}_1\big(\boldsymbol{\sf{V}} - Q_R Y_\d\boldsymbol{\sf{V}} K_R\big)^{\frac{\scalet{1}}{\scalet{2}}}
\le &\ {\sf{G}}_1\big((I-Q_R)\boldsymbol{\sf{V}}\big)^{\frac{\scalet{1}}{\scalet{2}}}\\[0.2cm]
&\ + {\sf{G}}_1\big(Q_R(1-Y_\d)\boldsymbol{\sf{V}}\big)^{\frac{\scalet{1}}{\scalet{2}}} +
{\sf{G}}_1\big(Q_R Y_\d \boldsymbol{\sf{V}}(I-K_R)\big)^{\frac{\scalet{1}}{\scalet{2}}}\\[0.2cm]
\le &\ {\sf{G}}_1\big((I-Q_R)\boldsymbol{\sf{V}}\big)^{\frac{\scalet{1}}{\scalet{2}}}\\[0.2cm]
&\ + {\sf{G}}_1\big((1-Y_\d)\boldsymbol{\sf{V}}\big)^{\frac{\scalet{1}}{\scalet{2}}} +
{\sf{G}}_1\big(\boldsymbol{\sf{V}}(I-K_R)\big)^{\frac{\scalet{1}}{\scalet{2}}}.
\end{align*}
By virtue of \eqref{eq:psidelR} the right-hand side tends to
zero as $\d\to 0, R\to\infty$.
By Corollary \ref{cor:zero1} this implies \eqref{eq:asymp1}.
\end{proof}
In the next stage we partition the kernel
\begin{align}\label{eq:trim}
Q_R(\hat\bx) Y_\d(\hat\bx) {\sf{V}}(\hat\bx, x) K_R(x)
\end{align}
of the operator $Q_R Y_\d\boldsymbol{\sf{V}} K_R$ on the right-hand side of the formulas in \eqref{eq:asymp1}.
We do this by introducing the cut-offs $\t\big(|x-x_j|\varepsilon^{-1}\big)$,
$j = 0, 1, \dots, N-1$, assuming that $\varepsilon<\d$.
In view of \eqref{eq:partun}
the $j$'th component of the kernel \eqref{eq:trim} can be represented as follows:
\begin{align}\label{eq:split}
Q_R(\hat\bx) Y_\d(\hat\bx) {\sf{v}}_j(\hat\bx, x) K_R(x)
= \sum_{k=0}^{N-1}\phi_{j,k}[\d, R, \varepsilon](\hat\bx, x)
+ \rho_j[\d, R, \varepsilon](\hat\bx, x)
\end{align}
with
\begin{align*}
\phi_{j,k}[\d, R, \varepsilon](\hat\bx, x)
= &\ Q_R(\hat\bx) Y_\d(\hat\bx)\t\big(|x-x_k|\varepsilon^{-1}\big) {\sf{v}}_j(\hat\bx, x) K_R(x),\quad
k = 0, 1, \dots, N-1,\\[0.2cm]
\rho_j[\d, R, \varepsilon](\hat\bx, x)
= & \ Q_R(\hat\bx)
Y_\d(\hat\bx) \prod_{k=0}^{N-1} \zeta\big(|x-x_k|\varepsilon^{-1}\big) {\sf{v}}_j(\hat\bx, x) K_R(x).
\end{align*}
First we show that the kernels $\rho_j[\d, R, \varepsilon]$ and $\phi_{j,0}[\d, R, \varepsilon]$
give negligible contributions to the asymptotics.
\begin{lem} For each $\d>0, R>0$ and $\varepsilon<\d$ one has
\begin{align}\label{eq:rho}
{\sf{G}}_1\big(\iop\big(\rho_j[\d, R, \varepsilon]\big)\big) = 0,\ j = 1, 2, \dots, N.
\end{align}
\end{lem}
\begin{proof}
By the definitions \eqref{eq:ydel} and \eqref{eq:sco1},
the support of the kernel
$\rho_j[\d, R, \varepsilon]$ belongs to the bounded domain
\begin{align*}
{\bigcap_{0\le l < s \le N} {\sf S}_{l, s}(\varepsilon/2)\cap (B_R)^N}.
\end{align*}
The function ${\sf{v}}_j$ is real-analytic on this domain and it is uniformly bounded
together with all its derivatives, so that $\rho_j[\d, R, \varepsilon]\in \plainC\infty_0( \mathbb R^{3N})$.
By Proposition \ref{prop:BS},
${\sf{G}}_p(\iop(\rho_j[\d, R, \varepsilon])) = 0$ for all $p >0$, and in particular,
for $p = 1$, as claimed.
\end{proof}
\begin{lem} For each $\d>0, R>0$ one has
\begin{align}\label{eq:phi0}
\lim_{\varepsilon\to 0}{\sf{G}}_1\big(\iop\big(\phi_{j,0}[\d, R, \varepsilon]\big)\big) = 0,\
j = 1, 2, \dots, N.
\end{align}
\end{lem}
\begin{proof}
As $x_0 = 0$ by definition, the kernel $\phi_{j,0}[\d, R, \varepsilon]$ has the form
\begin{align*}
\phi_{j,0}[\d, R, \varepsilon](\hat\bx, x)
= Q_R(\hat\bx) Y_\d(\hat\bx){\sf{v}}_j(\hat\bx, x) \t\big(|x|\varepsilon^{-1}\big) K_R(x).
\end{align*}
Estimating $Q_R Y_\d\le 1$, $ K_R\le 1$, one sees that
the singular values of $\iop(\phi_{j,0}[\d, R, \varepsilon])$ do not exceed those of the operator
$\iop({\sf{v}}_j) a$ with the weight $a(x) = \t(|x|\varepsilon^{-1})$.
By \eqref{eq:vfull},
\begin{align*}
{\sf{G}}_1(\iop({\sf{v}}_j)a)\lesssim R_\varkappa(a)\lesssim \bigg(\int_{ \mathbb R^3} \t\big(|x|\varepsilon^{-1} \big)^3 dx\bigg)^{1/3}
\lesssim \varepsilon \to 0,\ \varepsilon\to 0.
\end{align*}
This implies \eqref{eq:phi0}.
\end{proof}
\begin{cor}
Denote by $\boldsymbol\a[\d, R, \varepsilon] = \{\a_j[\d, R, \varepsilon]\}_{j=1}^N$
the vector-valued kernel with the components
\begin{align*}
\a_j[\d, R, \varepsilon](\hat\bx, x) = \sum_{k=1}^{N-1}\phi_{j,k}[\d, R, \varepsilon](\hat\bx, x).
\end{align*}
Then for all $\d >0$ and $R>0$, we have
\begin{align}\label{eq:asymp2}
\begin{cases}
{\sf{G}}_1(Q_RY_\d \boldsymbol{\sf{V}} K_R) = &\
\lim\limits_{\varepsilon\to 0}
{\sf{G}}_1(\iop(\boldsymbol\a[\d, R, \varepsilon])),
\\[0.2cm]
{\sf{g}}_1(Q_RY_\d \boldsymbol{\sf{V}} K_R) = &\
\lim\limits_{\varepsilon\to 0}
{\sf{g}}_1(\iop(\boldsymbol\a[\d, R, \varepsilon])),
\end{cases}
\end{align}
where the limits on the right-hand side exist.
\end{cor}
\begin{proof}
By \eqref{eq:split}, the kernel $Q_R Y_\d {\sf{v}}_j K_R$ has the form
\begin{align*}
\a_j[\d, R, \varepsilon] + \phi_{j, 0}[\d, R, \varepsilon] + \rho_j[\d, R, \varepsilon].
\end{align*}
By virtue of \eqref{eq:triangleg} and \eqref{eq:rho}, \eqref{eq:phi0}, we have
\begin{align*}
\lim_{\varepsilon\to 0}
{\sf{G}}_1\big(\iop\big(\phi_{j, 0}[\d, R, \varepsilon] + \rho_j[\d, R, \varepsilon]\big)\big) = 0.
\end{align*}
Now \eqref{eq:asymp2} follows from Corollary \ref{cor:zero1}.
\end{proof}
\begin{proof}[Completion of the proof of Lemma \ref{lem:central}]
According to
\eqref{eq:cutoff}, under the condition $\varepsilon < \varepsilon_0(\d, R)$, the
support of each kernel
\begin{align*}
\phi_{j, k}[\d, R, \varepsilon], \quad j= 1, 2, \dots, N,\quad k = 1, 2, \dots, N-1,
\end{align*}
belongs to $\tilde\Omega_k(\d, R, \varepsilon)$, see \eqref{eq:omj}
for the definition. Therefore one can use the representation \eqref{eq:trepr} for the function
${\sf{v}}_j$:
\begin{align*}
\a_j[\d, R, \varepsilon](\hat\bx, x)
= \sum_{k=1}^{N-1}&\ \phi_{j,k}[\d, R, \varepsilon](\hat\bx, x)
= \sum_{k=1}^{N-1} Q_R(\hat\bx) Y_\d(\hat\bx)\t\big(|x-x_k|\varepsilon^{-1}\big)
\nabla_x\tilde\xi_{j,k}(\hat\bx, x) K_R(x)\\[0.2cm]
&\ \quad + \sum_{k=1}^{N-1} Q_R(\hat\bx) Y_\d(\hat\bx)\t\big(|x-x_k|\varepsilon^{-1}\big)
|x_k-x|\nabla_x\tilde\eta_{j,k}(\hat\bx, x) K_R(x)\\[0.2cm]
&\ \quad + \sum_{k=1}^{N-1} Q_R(\hat\bx) Y_\d(\hat\bx)\t\big(|x-x_k|\varepsilon^{-1}\big)
\frac{x_k-x}{|x_k-x|}\tilde\eta_{j,k}(\hat\bx, x) K_R(x).
\end{align*}
Each term in the first sum on the right-hand side belongs to $\plainC\infty_0( \mathbb R^{3N})$.
Thus, by Proposition \ref{prop:BS}, the functional
${\sf{G}}_p$ for the associated operator equals zero for all $p >0$, and in particular, for $p=1$.
Each term in the second sum contains the function $\Phi(x) = |x|$, which is homogeneous of order $1$.
Thus, by Proposition \ref{prop:gradp} the functional ${\sf{G}}_1$ for the associated operator equals zero.
The third sum coincides with the kernel $\Upsilon_j[\d, R, \varepsilon](\hat\bx, x)$,
defined in \eqref{eq:upsilon}. Therefore, by Corollary \ref{cor:zero},
\begin{align}\label{eq:asymp3}
\begin{cases}
{\sf{G}}_1(\iop(\boldsymbol\a[\d, R, \varepsilon])) =
{\sf{G}}_1(\iop(\boldsymbol\Upsilon[\d, R, \varepsilon])),\\[0.2cm]
{\sf{g}}_1(\iop(\boldsymbol\a[\d, R, \varepsilon])) =
{\sf{g}}_1(\iop(\boldsymbol\Upsilon[\d, R, \varepsilon])),
\end{cases}
\end{align}
for each $\d>0, R>0$ and $\varepsilon<\varepsilon_0(\d, R)$.
Putting together \eqref{eq:asymp1}, \eqref{eq:asymp2} and \eqref{eq:asymp3},
and using Corollary \ref{cor:zero1},
we conclude the proof of Lemma \ref{lem:central}.
\end{proof}
Recall that the main Theorem \ref{thm:maincompl} was derived from Lemma \ref{lem:central}
in Subsect. \ref{subsect:proofs}. Thus all proofs are complete.
\vskip 0.5cm
\textbf{Acknowledgments.}
The author is grateful to J. Cioslowski
for stimulating discussions, and to M. Lewin for pointing out the asymptotic
problem for the kinetic energy operator.
The author was supported by the EPSRC grant EP/P024793/1.
\bibliographystyle{../beststyle}
\section{Introduction}
At a distance of $\sim8\;$kpc (\cite{reid93}), SgrA*, the compact radio, NIR
and X-ray source at the Galactic Center, is thought to mark the position of a
super-massive black hole of mass $\sim4\times10^6\;M_\odot$
(\cite{schoedel02},\cite{ghez05}). Proper motions of SgrA* (\cite{reid04})
confirm that it traces a significant amount of the mass that is inferred by
stellar motions and orbits. Due to its proximity, SgrA* is the only galactic
nucleus that can be studied with VLBI on sub-AU linear scales
($R_{\mbox{sch}}\sim10\mu\mbox{as}\sim0.1\mbox{AU}$). The ionized ISM, however,
scatter-broadens images of SgrA* with a $\lambda^2$ dependence, and VLBI at the
highest frequencies is the only available means to set important limits on
intrinsic structures near the event horizon. VLBI at 7~mm and 3.5~mm has
detected evidence for intrinsic structure of SgrA*, but these observations
remain dominated by scattering effects, and the intrinsic sizes at these
wavelengths (set by the optical depth of the emission) are much larger
than the apparent size of the event horizon (\cite{bower04}, \cite{shen05}).
Evidence from light curves of SgrA* flares from the radio to X-ray
(\cite{eckart06}, \cite{yusef-zadeh06}, \cite{marrone08})
implicates structures on smaller ($\sim5-15 R_{\mbox{sch}}$) scales. Only at
frequencies above 230GHz does the scattering size become smaller than the VLBI
array resolution, allowing direct measurement of intrinsic structure on these
scales corresponding to the innermost accretion region. Over the past decade,
MIT Haystack Observatory has focused on developing next-generation
wideband VLBI instrumentation capable of significantly increasing the
sensitivity of mm/submm VLBI arrays. The observations described herein used
this instrumentation to show that 1.3mm VLBI of SgrA* can now probe the event
horizon of this super-massive black hole candidate.
\section{Instrumentation}
The focus of new VLBI instrumentation has been to process higher
bandwidths using commercially available digital technology. For VLBI, this
has translated into a re-conceptualization of the two main components of
the traditional VLBI backend: the digitization/formatting stage and the recording
stage.
\subsection{Digital Backend}
Prior to recording data for subsequent correlation, the IF (intermediate
frequency) at each VLBI telescope must be sampled and channelized in frequency.
The Mark4 system (similar to the system in use at the VLBA) uses a bank of
analog filters to break the IF into sub-bands. With the advent of mature FPGA
(Field Programmable Gate Array) technology, it is now feasible to channelize
the IF signal using a polyphase filterbank approach that is realized in digital
signal processing after the sampling stage. A collaboration between Haystack
Observatory and the Space Science Laboratory at UC Berkeley has focused on
developing a fully digital FPGA-based VLBI backend, which produces an output
suitable for recording on modern hard-disk based recorders. This system, the
DBE, is capable of sampling two 480MHz wide IFs, and producing two 2Gigabit/sec
output streams (Nyquist sampled, 2-bit resolution). The DBE (Figure
\ref{dbe_photo}) represents a reduction in backend cost by a factor of $\sim10$
over the previous Mark4 data acquisition system and is 5 times smaller in size
(the size of a single PC). The next-generation DBE (imaginatively called
DBE2), is currently under development, and will use the Xilinx Virtex5 family
of FPGA chips to double the data throughput to $\sim8$ Gigabits/sec.
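The quoted rates follow directly from the sampling parameters (Nyquist sampling at twice the 480 MHz IF bandwidth, 2 bits per sample, two IFs); a small illustrative calculation, with the function name our own:

```python
# Aggregate VLBI output data rate of a digital backend from its
# sampling parameters.
def vlbi_data_rate_bps(n_if, bandwidth_hz, bits_per_sample):
    # Nyquist sampling: sample rate is twice the analog bandwidth.
    sample_rate_hz = 2 * bandwidth_hz
    return n_if * sample_rate_hz * bits_per_sample

# Two 480 MHz IFs at 2-bit resolution -> 3.84 Gigabit/sec aggregate,
# i.e. two ~2 Gigabit/sec output streams as stated above.
rate = vlbi_data_rate_bps(n_if=2, bandwidth_hz=480e6, bits_per_sample=2)
assert rate == 3.84e9
```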
\begin{figure}[h]
\begin{center}
\includegraphics[width=20pc,angle=0]{./dbe_photo.eps}
\caption{\label{dbe_photo} The Digital Backend (DBE) system developed to process two 480MHz wide
IF bands, producing an aggregate VLBI data output rate of 3.84 Gigabit/sec. }
\end{center}
\end{figure}
\subsection{Mark5 Recorder}
The Mark5 system was developed at Haystack Observatory in collaboration with
Conduant Corp as the first high-data-rate VLBI data system based on
magnetic-disc technology. Incorporating primarily low-cost PC-based
components, the Mark5 system now supports data rates up to 2048 Mbps, recording
to an array of 8 inexpensive removable hard disks. The Mark5 system (Figure
\ref{mk5_photo}) packages the disks into convenient `8-pack' modules, and over
100 Mark5 units are in use throughout the
VLBI community. Support for Mark 5 development at MIT Haystack Observatory was
provided by BKG, EVN, KVN, MPI, NASA, NRAO, NSF, and USNO. The Mark5 system
replaces a magnetic tape system, which used a non-standard reel-to-reel
recorder and a special-purpose tape media whose cost was not likely to decrease
over time. The new system is over a factor of 5 smaller in size than the
previous recorder, with a factor of 4 improvement in data rate. The cost for this new
recording system is $\sim10$ times less than the tape system and uses standard
commercial disk media whose cost (per GigaByte) is projected to substantially
decrease over time. Efforts to increase recording rates to 4Gigabits/sec are
underway, and the new Mark5C recorder will be capable of recording data from
10Gigabit Ethernet inputs.
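The module capacity and record rate quoted here set how long a single 8-pack lasts in continuous recording; a minimal sketch using the numbers above:

```python
# Recording time of one full Mark5 module at the maximum Mark5 rate.
module_bytes = 6e12        # 6 TBytes per full 8-disk module (current disk sizes)
rate_bps = 2.048e9         # 2048 Mbps maximum record rate

seconds = module_bytes * 8 / rate_bps
print(f"{seconds / 3600:.1f} hours per module")   # 6.5 hours per module
```

At the planned 4 Gb/s rate the same module would fill in roughly half that time, which is why recording capacity and rate must scale together.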
\begin{figure}[h]
\begin{center}
\includegraphics[width=20pc,angle=0]{./mk5.ps}
\caption{\label{mk5_photo} The Mark5 hard disk VLBI data recorder. Each hard
disk module holds up to 8 individual hard disks. With current disk sizes, a
full module can record 6 TBytes. Maximum recording rates are now 2Gb/s but
will increase to 4Gb/s in 2009.}
\end{center}
\end{figure}
\section{Observations}
In April 2007, SgrA* and several quasar calibrators were observed over two
consecutive days at a wavelength of 1.3mm with a three station VLBI array
(\cite{doeleman08a}). The array included the James Clerk Maxwell Telescope
(JCMT) on Mauna Kea, the Arizona Radio Observatory Submillimeter Telescope
(ARO/SMT) on Mt Graham in Arizona, and one 10m dish of the Coordinated Array
for Research in Millimeter-wave Astronomy (CARMA) in California. Projected
baseline lengths on SgrA* ranged from $500\times10^6\lambda$ on the shortest
baseline to $3500\times10^6\lambda$ on the longest. An effective bandwidth of
960 MHz was recorded, resulting in an aggregate recording data rate of 3.84
Gigabits/sec at each site (2 bits/sample, Nyquist sampling). Data were
processed on the MIT Haystack Observatory Mark4 Correlator to produce complex
visibilities with 0.5 second time resolution. Calibration quasars were
robustly detected on all three baselines, validating operation of the VLBI
array and allowing refinement of telescope positions for processing of the
SgrA* observations.
Because the geometry of VLBI baselines in an array is not typically known to
$\ll1\lambda$ precision, it is standard practice to search for detections over
a grid of interferometric delay and delay-rate. A peak in signal-to-noise
ratio of the visibility amplitude, found over a range of Nyquist-sampled delay
and delay-rate space, is deemed a detection if the probability of false
detection is sufficiently low. At an observing wavelength of 1.3mm,
atmospheric turbulence limits the time over which the VLBI signal can be
coherently integrated. Therefore, a technique of incoherent averaging
(\cite{rogers95}) was used, to perform the fringe search over each 10 minute
VLBI scan and to determine the VLBI signal amplitude. Incoherent averaging extends the
effective integration time, but builds signal to noise more slowly than
$\sqrt{t}$. After measuring the coherence losses due to atmospheric effects
over a range of time scales, the atmospheric coherence time was found to be
$\sim8$ seconds, and the VLBI detection searches were thus made by incoherently
averaging 8 second intervals of coherently averaged data. These searches
resulted in robust detections and correlated flux density measurements of SgrA*
on both the ARO/SMT-JCMT and ARO/SMT-CARMA baselines (see Figure
\ref{detections}). No detections were found on the CARMA-JCMT baseline, which
is attributable to the lower sensitivity of that baseline compared with the
others. The error associated with each visibility amplitude was calculated by
adding in quadrature the noise determined from the detection search with a 10\%
calibration error. Measurements of SgrA* made with the CARMA array during the
VLBI observations yield a total flux density of SgrA* of $2.4\pm0.25$Jy, which
was observed to be stable over both days, suggesting that SgrA* was observed in
a quiescent state. Errors in the total flux density measurement are dominated
by pointing and calibration.
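The benefit of incoherent averaging can be illustrated with a toy numerical model; the phase-noise process below is an assumed random walk with an 8-second coherence time, purely illustrative rather than a fit to the actual atmospheric data:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy model of a 10-minute VLBI scan sampled at 0.5 s, with atmospheric phase
# modeled as a random walk whose coherence time is ~8 s (illustrative only).
dt, t_total, t_coh = 0.5, 600.0, 8.0
n = int(t_total / dt)
phase = np.cumsum(rng.normal(0.0, np.sqrt(dt / t_coh), n))  # rad
vis = np.exp(1j * phase)          # unit-amplitude visibility with phase noise

# Coherent average over the full scan: phase wander destroys the fringe.
amp_coherent = np.abs(vis.mean())

# Incoherent averaging (Rogers et al. 1995): coherently average 8 s segments,
# then average the segment amplitudes.
seg = int(t_coh / dt)
amp_incoherent = np.abs(vis[: n - n % seg].reshape(-1, seg).mean(axis=1)).mean()

print(f"coherent: {amp_coherent:.2f}, incoherent: {amp_incoherent:.2f}")
```

The segmented average recovers most of the unit fringe amplitude, while the full-scan coherent average collapses toward zero; the price, as noted above, is that signal to noise then grows more slowly than $\sqrt{t}$.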
\begin{figure}[h]
\begin{center}
\includegraphics[width=40pc,angle=0]{./detection_contour.ps}
\caption{\label{detections} Detections of SgrA* and nearby calibrator at 1.3mm$\lambda$
on a 3500km projected baseline between the Submillimeter Telescope on Mt.
Graham, AZ and the James Clerk Maxwell Telescope on Mauna Kea, HI. Shown are
searches in signal to noise ratio over interferometer delay and delay-rate for
10 minute scans for quasar PKS B1921-293 on April 11, 2007 at 14:00UT (left)
and for SgrA* the same day at 12:00UT (right). The data were segmented into 8
second intervals to reduce coherence loss due to atmospheric turbulence and the
amplitudes were averaged incoherently. The formal probability of false
detection (PFD) in each search is computed by comparing the observed signal to
noise ratio with maximal peaks derived from pure noise over the same search
space, and is $<10^{-9}$ for both fringe searches shown above. Contours in each
plot begin at signal to noise ratio of 2.0 and increase in steps of $2^{1/4}$. Peak
signal to noise is 7.9 and 5.8 on the left and right searches respectively.}
\end{center}
\end{figure}
\section{Discussion}
A circular Gaussian model was fit to the VLBI data (shown in Figure
\ref{uvdist}). The weighted least-squares best-fit model has a total flux
density of $2.4\pm0.5$Jy and full width at half maximum (FWHM) of 43 ($+14$,$-8$)
$\mu$as where errors are $3\sigma$. On the assumption of a Gaussian profile,
the intrinsic size of Sgr A* can be extracted from our measurement assuming
that the scatter broadening due to the ISM adds in quadrature with the
intrinsic size. At a wavelength of 1.3 mm the scattering size extrapolated
from previous longer-wavelength VLBI (\cite{bower06}) is $\sim22\mu$as.
Removing the scattering effects results in a $3\sigma$ range for the intrinsic
size of Sgr A* equal to 37 ($+16$,$-10$) $\mu$as. The $3\sigma$ intrinsic size
upper limit at 1.3 mm, combined with a lower limit to the mass of Sgr A* of
$4\times10^5M_\odot$ from measured proper motions yields a lower limit for the
mass density of $9.3\times10^{22}M_\odot\mbox{pc}^{-3}$. This density lower
limit and central mass would rule out most alternatives to a black hole for Sgr
A* because other concentrations of matter would have collapsed or evaporated on
timescales that are short compared with the age of the Milky Way
(\cite{maoz98}).
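The quadrature deconvolution of the scattering size and the resulting mass-density lower limit can be reproduced directly from the numbers above (an 8 kpc distance to SgrA* is assumed here; small differences in the adopted distance and size limits shift the density at the 10\% level):

```python
import math

# Quadrature deconvolution of interstellar scattering (Gaussian assumption):
fwhm_obs = 43.0      # measured FWHM at 1.3 mm, micro-arcsec
fwhm_scat = 22.0     # extrapolated scattering size, micro-arcsec
fwhm_int = math.sqrt(fwhm_obs**2 - fwhm_scat**2)
print(f"intrinsic size: {fwhm_int:.0f} uas")        # intrinsic size: 37 uas

# Mass-density lower limit from the 3-sigma size upper limit (37+16 = 53 uas)
# and the proper-motion mass lower limit; 8 kpc distance assumed.
M_lower = 4e5                                # Msun
D_pc = 8000.0
uas_to_pc = D_pc * math.radians(1 / 3600e6)  # pc subtended by 1 uas at 8 kpc
r = 0.5 * 53.0 * uas_to_pc                   # radius in pc
rho = M_lower / (4 / 3 * math.pi * r**3)
print(f"density lower limit: {rho:.1e} Msun/pc^3")  # ~9e22, cf. the quoted value
```
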
\begin{figure}[h]
\begin{center}
\includegraphics[width=20pc,angle=0]{./sgra_uvdist_paper_v2.ps}
\caption{\label{uvdist} Shown are the correlated flux density data on the
ARO/SMT--CARMA and ARO/SMT--JCMT baselines plotted against projected baseline
length (errors are $1\sigma$). Squares show ARO/SMT--CARMA baseline data and triangles
show ARO/SMT--JCMT data, with open symbols for 10 April and filled symbols for
11 April. The solid line shows the weighted least-squares best fit to a
circular Gaussian brightness distribution, with FWHM size of 43.0~$\mu$as. The
dotted line shows a uniform thick-ring model with an inner diameter of 35~$\mu$as
and an outer diameter of 80~$\mu$as convolved with scattering effects due to the
interstellar medium. The total flux density measurement made with the CARMA
array over both days of observing is shown as a filled
circle. An upper limit for flux density of 0.6 Jy, derived from non-detection
on the JCMT--CARMA baselines, is represented with an arrow near a baseline
length of $3075\times10^6\lambda$.}
\end{center}
\end{figure}
It should be noted, however, that while structure on $4R_{\mbox{sch}}$ scales
is present in SgrA*, models other than the circular Gaussian can be fit to the
data. This is illustrated by the dotted line in Figure \ref{uvdist}, which
shows the expected flux density as a function of baseline length for a uniform
circular annulus with inner diameter 35$\mu$as and outer diameter 80$\mu$as
that has been scatter broadened by the ISM. Future higher-sensitivity
observations will distinguish between these two models by allowing detections
of SgrA* on the CARMA-JCMT baseline, which is now represented in Figure
\ref{uvdist} only as an upper limit.
Because of gravitational lensing effects due to the extreme gravity near the
assumed black hole, radiation emitted from near the event horizon of a
non-spinning black hole will have an apparent size of
$3\sqrt{3}R_{\mbox{sch}}$. For SgrA*, this expected diameter is
$5.2R_{\mbox{sch}}\simeq52\mu$as, which differs by $3\sigma$ from the size
derived from a Gaussian model. Even if the black hole is maximally spinning
(a=1), the diameter of the event horizon in the equatorial plane ($\sim
45\mu$as) would still exceed the estimated size. This suggests that SgrA* is
not an optically thick emission region that symmetrically enfolds the black
hole. Rather, it is likely due either to emission from a jet or from the
approaching (and therefore Doppler enhanced) side of an accretion disk that is
inclined to our line of sight (\cite{falcke00}, \cite{noble07},
\cite{avery06}). Either scenario results in emission that is offset from the
black hole position. This marks the first time that astronomical observations
of any kind have directly constrained the spatial relationship between SgrA*
and the black hole.
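The $3\sqrt{3}R_{\mbox{sch}}$ lensed diameter quoted above can be checked numerically; the mass ($\sim4\times10^6$\,M$_\odot$) and 8 kpc distance assumed here are the standard values for SgrA*, and small differences in either shift the result by a few $\mu$as:

```python
import math

# Apparent (lensed) diameter of the photon ring of a non-spinning black hole,
# 3*sqrt(3) R_sch, evaluated for SgrA* (assumed M ~ 4e6 Msun at 8 kpc).
G, c = 6.674e-11, 2.998e8          # SI units
Msun, pc = 1.989e30, 3.086e16

M = 4.0e6 * Msun
D = 8000.0 * pc
R_sch = 2 * G * M / c**2                 # Schwarzschild radius, m
theta_sch = R_sch / D / 4.8481e-12       # angular size of R_sch, micro-arcsec
shadow = 3 * math.sqrt(3) * theta_sch    # ~52 uas for slightly different M, D
print(f"R_sch = {theta_sch:.1f} uas, shadow = {shadow:.0f} uas")
```

The computed $\sim51$--$52\,\mu$as exceeds the measured intrinsic size, which is the $3\sigma$ discrepancy discussed in the text.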
\section{Conclusions}
The technology to significantly increase the sensitivity of VLBI at wavelengths
of 1.3mm and shorter is now enabling observations of SgrA* on Schwarzschild
radius scales. Efforts to extend the capabilities of the current 1.3mm VLBI
array include: phasing together radio telescopes on Mauna Kea and at CARMA to
increase collecting area (discussed elsewhere in these proceedings), continuing
to pursue increases in recording bandwidth, and bringing new mm/submm VLBI
sites on-line.
By 2009, the international collaboration (see the author list of
\cite{doeleman08a}) that has carried out these observations, will field a
higher sensitivity 1.3mm VLBI array that will be sensitive to time variable
structures in SgrA*. The closure phase, the sum of interferometric phases
around a triangle of baselines, is largely immune from calibration errors and
deviates from zero in the presence of asymmetric source structure. Projected
sensitivities will allow monitoring of the closure phase on $\sim10$ second
time scales, and enable tests for periodic structure variation in SgrA* as is
predicted by orbiting hot-spot models of the accretion flow (\cite{avery06},
\cite{genzel}). By timing periodicities in the closure phase, one can extract
the fundamental black hole spin parameter (\cite{doeleman08b}). Thus, by
spatially resolving the innermost accretion region surrounding SgrA*, mm/submm
VLBI is now positioned to address fundamental issues in black hole physics.
\section*{Acknowledgments}
VLBI at mm/submm wavelengths would not be possible without the dedicated support of
staff and scientists at all participating facilities. VLBI work at the MIT Haystack
Observatory is supported through grants from the National Science Foundation.
\section{Introduction}
A few percent of galaxies host a central, compact, highly luminous region called an active galactic nucleus (AGN). AGNs are powered by a highly accreting supermassive black hole (SMBH), around which infalling gas forms an accretion disk.
In addition to being the source of the high central luminosity, AGN disks can also impact the dynamics of stellar remnants in the galactic center. Of particular interest has been the interaction of AGN disks with stellar-mass black holes (BHs) that have migrated to the galactic center through mass segregation \citep{2009ApJ...697.1861A,2009ApJ...698L..64K,2009MNRAS.395.2127O}. While orbiting the central SMBH, BHs periodically cross the AGN disk that gradually aligns their orbit with the disk plane \citep{Bartos17}. Once in the disk, BHs migrate inwards due to
viscous-tidal interactions with the disk \citep{McKernan12,Bellovary16}. Once these processes bring two BHs into each other's vicinity, the dense gas of the AGN disk facilitates gravitational capture and ultimately the binary merger of the two BHs through dynamical friction.
BH mergers in AGN disks may be an important gravitational-wave source with BH properties distinct from those expected from other astrophysical mechanisms \citep{Bartos17,2017MNRAS.464..946S,2018ApJ...866...66M,Yang19a,Yang19b_PRL,Yang20,Yang20_GW190814,2019ApJ...878...85S,Tagawa19,2020ApJ...899...26T,Tagawa20_ecc,Tagawa20_massgap,Gayathri20_GW190521,Samsing20}. Merging BHs or neutron stars in AGN disks might also produce detectable electromagnetic emission, opening up another way to study AGN-assisted mergers \citep{2019ApJ...884L..50M, 2021arXiv210302461K, Perna2021,2021arXiv210310963P,2021arXiv210409389Z,2021ApJ...906L..11Z}.
Stellar orbits are also affected by AGN disks as disk crossings dissipate orbital energy~\citep{2000ApJ...545..847M}, which will be particularly significant for stars on highly eccentric orbits. Some of these stars will gradually get closer to the central SMBH until they are tidally disrupted by it. Nonetheless such interactions with the AGN disk may have limited effect on the overall tidal disruption rate in galaxies with AGNs (\citealt{2020ApJ...889...94M}, but see \citealt{Tadhunter2017}).
Here we examine the orbital alignment of stars with the AGN disk, and its consequences. Similarly to BHs and neutron stars, some of the stars in galactic centers will align their orbits with the AGN disk plane. Once in the plane, stars can form binaries with BHs, leading to their eventual tidal disruption within the AGN disk. We discuss the rate density of such tidal disruption events around BHs (hereafter micro-TDEs; \citealt{2016ApJ...823..113P}), their expected observational signatures and possible observational indications of their existence.
\section{Orbital evolution around AGN disks}
Upon crossing the AGN disk, the velocity of stars will change due to the accretion of matter from the disk. We simulated the orbital evolution of stars accounting for this velocity change following \cite{2019ApJ...876..122Y}. The mass of infalling gas upon each crossing is given by
\begin{equation}
\Delta m_{\rm gas}=v_{\rm rel}t_{\rm cross}R_{\rm cap}^2\pi\Sigma/(2H)\,,
\label{eq:mcross}
\end{equation}
where $R_{*}$ and $M_{*}$ are the stellar radius and mass,
$R_{\rm cap}=\max\{R_{*}, R_{\rm BHL}\}$ is the capture radius,
$R_{\rm BHL}\equiv 2GM_{*}/(v^2_{\rm rel}+c_s^2)$ is the star's Bondi-Hoyle-Lyttleton radius, $\Sigma$ and $H$ are the surface density and scale height of the AGN disk respectively, $t_{\rm cross}\equiv 2H/v_{\rm z}$ is the crossing time, and $v_{\rm z}$ is the $z$ component of the star's velocity. We found mass loss by
a main-sequence star due to ram pressure exerted by the disk gas to be negligible \citep{2005ApJ...619...30M}.
We considered a geometrically thin, optically thick, radiatively efficient, steady-state
\citep{1973A&A....24..337S} but self-gravitating \citep{2003MNRAS.341..501S} accretion disk. We adopted viscosity parameter $\alpha=0.1$, accretion rate $\dot{M}_{\bullet}=0.1\dot{M}_{\rm Edd}$ and radiation efficiency $\epsilon=0.1$. We used a fiducial SMBH mass of $10^6$\,M$_{\odot}$.
We computed the changes in the velocity and angular momentum of the star upon each crossing based on momentum conservation \citep{2019ApJ...876..122Y}:
\begin{align}
\Delta \mathbf{v}_{*}&=-\lambda(\mathbf{v}_{*}-\mathbf{v}_{\rm gas})=-\lambda \mathbf{v}_{\rm rel}\,,\\
\Delta \mathbf{J}&=-\lambda\, \mathbf{r}\times \mathbf{v}_{\rm rel}\,,
\end{align}
where $\bf{v_{\rm rel}} $ is the relative velocity between the star and the gas, and $\lambda\equiv\Delta M_{\rm gas}/M_{*}$ is a dimensionless factor. We updated the parameters of the star's orbit after each crossing according to the change of velocity and obtained the orbital evolution iteratively.
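The per-crossing update described above can be sketched as a short routine; the disk parameters in the example call are placeholder values for illustration, not the profile of the fiducial disk model:

```python
import numpy as np

G = 6.674e-8                  # cgs
Msun, Rsun = 1.989e33, 6.957e10

def crossing_update(v_star, v_gas, M_star, R_star, Sigma, H, c_s):
    """One disk-crossing momentum kick on a star (toy version of the scheme above).

    All quantities in cgs; v_star, v_gas are 3-vectors. Sigma and H are the disk
    surface density and scale height at the crossing radius. Returns the updated
    velocity and the gas mass accreted during the crossing.
    """
    v_rel = v_star - v_gas
    v_rel_mag = np.linalg.norm(v_rel)
    R_BHL = 2 * G * M_star / (v_rel_mag**2 + c_s**2)   # Bondi-Hoyle-Lyttleton radius
    R_cap = max(R_star, R_BHL)
    t_cross = 2 * H / abs(v_star[2])                    # z-crossing time
    dm = v_rel_mag * t_cross * np.pi * R_cap**2 * Sigma / (2 * H)  # Eq. (1)
    lam = dm / M_star
    return v_star - lam * v_rel, dm

# Example: solar-type star crossing a thin disk (illustrative numbers only).
v_new, dm = crossing_update(
    v_star=np.array([0.0, 3e7, 1e7]), v_gas=np.array([0.0, 2e7, 0.0]),
    M_star=Msun, R_star=Rsun, Sigma=1e4, H=1e14, c_s=1e6)
print(dm / Msun)   # fractional stellar-mass equivalent accreted per crossing
```

Iterating this update over successive crossings, with the orbital elements recomputed each time, reproduces the alignment calculation qualitatively.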
We assumed that the stars follow a mass-radius relation $R_{*}=1.06(M_*/{\rm M}_\odot)^{0.945}$ for $M_{*}<1.66M_{\odot}$ and $1.33(M_*/{\rm M}_\odot)^{0.555}$ for $M_{*}>1.66M_{\odot}$ \citep{1991Ap&SS.181..313D}. We assumed that the stellar mass follows a Kroupa initial mass function \citep{2001MNRAS.322..231K}. We took into account the mass segregation in the spatial distributions of main sequence stars, adopting $dN/da\propto a^{-3/2-0.5M_{*}/M_{\rm max}}$, where $a$ is the semi-major axes of the star's orbit around the SMBH \citep{2009ApJ...698L..64K,2009ApJ...697.1861A,2018ApJ...860....5G}. We adopted a maximum stellar mass of $M_{\rm max}=50$\,M$_{\odot}$, which accounts for the fact that the lifetime of the most massive stars is too short for them to participate in mass segregation. The total stellar mass within the gravitational influence radius of the SMBH was taken to be equal to the mass of the SMBH \citep{2000ApJ...545..847M}.
We similarly computed the orbital alignment of BHs and neutron stars with the AGN disk. For simplicity we adopted the Salpeter initial mass function $dN/dM\propto M^{-2.35}$ for BHs within the range of $5-50$\,M$_\odot$ and a normal initial mass function $M/M_{\odot}\sim\textit{N}(1.49,0.19)$ for neutron stars \citep{2016ARA&A..54..401O}. We assumed that the total mass of the BH population is $1.6\%$ of the stellar mass in galactic centers and the number of neutron stars is ten times the number of BHs \citep{2018ApJ...860....5G}. We additionally took into account mass segregation in the three-dimensional spatial distributions of BHs and neutron stars following \cite{2018ApJ...860....5G}. Otherwise orbital alignment was computed similarly to stars.
\section{Binary formation, evolution and tidal disruption}
Once some of the stars, BHs and neutron stars align their orbits with the AGN disk, they begin migrating inward
analogously to planetary migration in protostellar disks (e.g., \citealt{2020ApJ...899...26T}). Both orbital alignment and migration increase the number density of each type of object in the inner AGN disk, leading to efficient binary formation through gravitational capture. Dynamical friction within the gas further facilitates the formation of binaries and their subsequent hardening.
We considered a binary consisting of a main sequence star and a BH. The binary typically forms at small separations of only $\sim 10$ times the star's tidal radius $R_{\rm t} = R_*(M_{\rm bh}/M_*)^{1/3}$, and could be eccentric at (or soon after) its birth, either due to its formation (cf. \citealt{2004Natur.427..518F}) and/or subsequently driven to be eccentric by a circumbinary disk \citep{2021ApJ...909L..13Z,2021arXiv210309251D}. This could justify an impulsive disruption ``event'' at the pericenter of an ellipse, rather than a slow circular inspiral and gradual tidal ``peeling''.
Once the binary's pericenter distance $R_{p}$ approaches $R_{\rm t}$, the star is tidally disrupted. TDEs around SMBHs typically occur at parabolic ($e\approx1$) stellar orbits, leading to a tidal tail that will affect long-term accretion. In addition, partial disruption has a substantially larger cross-section in the case of TDEs, and should be more common \citep{2020SSRv..216...35S}. The lower eccentricity of micro-TDEs in AGNs will result in most of the stellar material remaining in the vicinity of the BH, leading to a more massive and compact accretion disk.
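For the fiducial pairing used throughout this section, the tidal radius is a one-line evaluation:

```python
# Tidal radius R_t = R_* (M_bh / M_*)^(1/3) for a Sun-like star
# (R_* = 1 Rsun, M_* = 1 Msun) around a 10 Msun black hole.
R_t = 1.0 * (10.0 / 1.0) ** (1.0 / 3.0)   # in solar radii
print(f"R_t = {R_t:.2f} Rsun")            # R_t = 2.15 Rsun
```

A binary forming at $\sim10 R_{\rm t}$ thus starts at $\sim20$ solar radii, consistent with the compact initial separations assumed above.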
To qualitatively understand the above tidal disruption process, we performed a simulation of a micro-TDE using smoothed-particle hydrodynamics (SPH; \citealt{2010ARA&A..48..391S}). We considered a star with mass $M_*=1$\,M$_\odot$ and a BH with mass $M_{\rm bh}=10$\,M$_\odot$. The star was initially located at $5R_{\odot}$ distance from the BH. For simplicity, we assumed that the BH-star binary is on a circular orbit.
We adopted a polytrope model to compute the structure of the main sequence star, assuming that pressure depends on the density via $P=k\rho^{1+1/n}$, where constant $k$ is chosen to produce a Sun-like star and $n$ is set to be 3, which can describe the structure of a main sequence star. We considered the self gravity and adopted artificial viscosity in our simulation \citep{2010ARA&A..48..391S}.
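The $n=3$ polytropic structure adopted for the star is the solution of the Lane-Emden equation; a minimal integrator sketch (simple semi-implicit Euler, not the SPH initial-condition generator actually used):

```python
def lane_emden(n=3.0, dxi=1e-4):
    """Integrate the Lane-Emden equation theta'' + (2/xi) theta' + theta^n = 0
    outward from the center until theta reaches zero (the stellar surface)."""
    # Series expansion near the center avoids the coordinate singularity at xi=0.
    xi, theta, dtheta = dxi, 1.0 - dxi**2 / 6.0, -dxi / 3.0
    while theta > 0:
        d2 = -theta**n - 2.0 / xi * dtheta
        dtheta += d2 * dxi
        theta += dtheta * dxi
        xi += dxi
    return xi   # first zero; xi_1 ~ 6.897 for n = 3

print(lane_emden())
```

The first zero $\xi_1\approx6.897$ fixes the dimensionless radius of the polytrope, from which the density and pressure profiles of the Sun-like model follow.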
\begin{figure*}
\begin{center}
\includegraphics[scale=0.7]{TDE_sph_v2.png}
\caption{{\bf Smoothed-particle hydrodynamics simulation of a micro-TDE in a binary system} consisting of a 10$M_{\odot}$ BH and a 1$M_{\odot}$ main sequence star on an initially circular orbit with 5$R_{\odot}$ separation. The period of the initial orbit is about 0.4\,days. We show the normalized surface density of the debris at different times and give the normalization factor in each subplot.}
\label{fig:TDE_sph}
\end{center}
\end{figure*}
The outcome of our simulation is shown in Fig.~\ref{fig:TDE_sph}. We see that the tidal instability rapidly grows and the star is tidally disrupted within a day. Most of the stellar matter remains gravitationally bound to the BH.
The above simulation does not account for the presence of the mini-disk around the BH accumulated from the AGN disk prior to stellar disruption. This can be quantitatively important but we do not expect it to qualitatively change the outcome. We estimated that stellar mass loss prior to disruption due to radiation by the accreting BH will be only $\sim 10\%$ of the star's mass and therefore will not meaningfully affect the outcome \citep{2020ApJ...903..140Z,2021arXiv210302461K}.
\section{Radiation features of micro-TDEs}
TDE and micro-TDE light curves and spectra are not yet well understood. They also depend on a large number of variables, such as the masses of the black hole and the star, the eccentricity and impact parameter of the encounter, the black hole's spin, and others.
While in some ways similar, TDE and micro-TDE light curves and spectra can differ due to multiple effects. First, TDEs by SMBHs occur on highly eccentric orbits ($e\approx1$), while AGN-assisted micro-TDEs are not expected to be highly eccentric. This latter case allows a larger fraction of the stellar matter to remain in the vicinity of the BH, quickly forming a nearly circular accretion disk \citep{2019ApJ...881...75K}. Second, the larger amount of mass and lower black hole mass in the micro-TDE case results in highly super-Eddington accretion that can exceed the Eddington rate $\dot{M}_{\rm Edd}\equiv L_{\rm Edd}/c^2=2.6\times10^{-9}M_{\rm BH}$\,yr$^{-1}$ by up to a factor of $10^5$ \citep{2019ApJ...881...75K}. Here, $L_{\rm Edd}$ is the Eddington luminosity and $c$ is the speed of light. Super-Eddington accretion results in a geometrically thick accretion disk. A large fraction of the disk mass is expected to be blown away as a disk wind. Radiation from the inner accretion disk will escape with a delay, resulting in an extended peak in the light curve \citep{2019ApJ...881...75K}. Further delay in reaching peak luminosity may be expected given that the AGN will surround micro-TDEs with a dense medium. The delay, and even whether the micro-TDE emission can escape the AGN disk, may depend on the mass of the AGN disk and the location of the disruption event within the disk \citep{Perna2021}. In our fiducial disk model, micro-TDEs are expected mostly in regions around $10^{-4}-10^{-2}$\,pc from the central SMBH, where the disk is ionized and the opacity to scattering is very high, significantly altering the observed emission. However, BHs in the AGN disk are expected to open cavities due to accretion-induced radiation \citep{2021arXiv210302461K}, which results in the reduction of opacity, leaving emission from the micro-TDE less affected.
Finally, the vicinity of the SMBH for binaries that reach the AGN disk's migration trap could also further alter the expected light curves.
In the case of micro-TDEs in AGNs, a large fraction of the stellar mass remains gravitationally bound to the BH, resulting in a bolometric luminosity of up to $\sim 10^{44}$\,erg\,s$^{-1}$ \citep{2019ApJ...881...75K}. By comparison, the amount of stellar matter available in the case of TDEs depends on the penetration parameter $\beta\equiv R_{\rm t}/R_{\rm p}$. Partial disruption ($\beta \lesssim 1$) is more common, resulting in lower luminosity \citep{2020SSRv..216...35S}.
The long-term evolution of TDEs can be characterized by a smooth power-law decay \citep{2021ApJ...908....4V}, with a fiducial power-law index of $-5/3$ \citep{1988Natur.333..523R}. Micro-TDEs are expected to result in similar power-law decays in the long term \citep{2019ApJ...881...75K}.
SPH simulations of micro-TDEs found the power-law index to vary from $t^{-5/3}$ to $t^{-9/4}$, with steeper indices corresponding to lower BH mass \citep{Wang2021}.
The high accretion rate in micro-TDEs might also launch relativistic outflows that produce significant $\gamma$-ray and X-ray emission. Such emission would be weaker and longer-lived than typical GRBs, possibly resembling that of ultralong GRBs \citep{2016ApJ...823..113P}. Such $\gamma$- and X-ray emission is highly beamed and is therefore only detectable from a fraction of micro-TDEs. Micro-TDE-driven GRBs could be differentiated from other types of GRBs through the identification of a thermal TDE-like counterpart with longer-wavelength observations, and possibly directional coincidence with AGNs.
\section{SMBHs too heavy for TDEs}
For SMBHs whose Schwarzschild radius is greater than their tidal radius, stars will be swallowed whole before they can be tidally disrupted. The maximum (non-rotating) SMBH mass capable of producing TDEs, called the Hills mass \citep{1975Natur.254..295H}, is around $10^{8}$\,M$_\odot$ for solar-type stars \citep{2020SSRv..216...35S}. This limit is only weakly dependent on the stellar mass for zero-age main sequence stars, although it can be greater for off-main-sequence giant stars, or for highly spinning SMBHs.
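The Hills mass follows from setting the tidal radius equal to the Schwarzschild radius; a short evaluation for a solar-type star (non-rotating hole assumed, as in the text):

```python
import math

G, c = 6.674e-8, 2.998e10          # cgs
Msun, Rsun = 1.989e33, 6.957e10

# Hills mass: solve R_* (M/M_*)^(1/3) = 2 G M / c^2 for the SMBH mass M.
M_hills = math.sqrt(Rsun**3 * c**6 / (8 * G**3 * Msun))
print(f"{M_hills / Msun:.1e} Msun")   # ~1e8 Msun
```

SMBHs above this mass swallow solar-type stars whole, which is why TDE-like flares in such hosts point toward a micro-TDE interpretation.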
Micro-TDEs in AGNs are not limited by the SMBH mass. Therefore, given the uncertainties in their light curves and spectra, the identification of TDE-candidates in AGNs with SMBH mass beyond $10^{8}$\,M$_\odot$ may help distinguish micro-TDEs and TDEs.
There have been two identifications of TDE-like emission from AGNs with SMBH mass $M_\bullet>10^{8}$\,M$_\odot$:
{\bf ASASSN-15lh} was an unusually bright transient first discovered by the All-Sky Automated Survey for Supernovae (ASAS-SN; \citealt{2014ApJ...788...48S}) in 2015 at a redshift of $z = 0.232$ \citep{ASASSN-15lh}. The spectrum of ASASSN-15lh points to a TDE and rules out superluminous supernova origin \citep{2016NatAs...1E...2L,2018A&A...610A..14K}. ASASSN-15lh was found to come from a galactic nucleus with SMBH mass $M_\bullet=5^{+8}_{-3}\times10^8$\,M$_\odot$ \citep{2018A&A...610A..14K}, which may be a low-luminosity AGN based on its BPT classification \citep{2018A&A...610A..14K}. This is beyond the Hills mass for solar-type stars, although a highly spinning SMBH in this mass range could produce a TDE. The late-time ($\gtrsim 100$\,days) light curve and spectrum of ASASSN-15lh could be well explained with a close to maximally spinning SMBH with mass $M_\bullet\sim10^9$\,M$_\odot$, although the early light curve and the lack of radio and dim X-ray emission are not understood \citep{2020MNRAS.497L..13M}. The peak luminosity of ASASSN-15lh was $L_{\rm peak}\sim 5\times10^{45}$\,erg\,s$^{-1}$, higher than the $\sim 10^{44}$\,erg\,s$^{-1}$ predicted from analytical micro-TDE models \citep{2016NatAs...1E...2L}, which nevertheless have large uncertainties.
{\bf ZTF19aailpwl} was a bright ($L_{\rm peak}\sim 10^{45}$\,erg\,s$^{-1}$) transient first observed by the Zwicky Transient Facility \citep{2019PASP..131g8001G} in 2019 at a redshift of $z = 0.37362$ \citep{2020arXiv201008554F}. It originated from an AGN with a reconstructed SMBH mass of $M_\bullet\sim10^{8.2}$\,M$_\odot$ (although a mass $<10^{8}$\,M$_\odot$ cannot be completely ruled out; \citealt{2019PASP..131g8001G}). Follow-up observations found that the spectral properties of ZTF19aailpwl are consistent with a TDE \citep{2019PASP..131g8001G}, but also with an AGN flare attributed to enhanced accretion reported by \cite{2019NatAs...3..242T}. Its light curve is similar to that of TDEs, with a somewhat longer than usual rise time. For micro-TDEs, such longer rise time may be possible due to interaction with the disk wind and the low eccentricity of the stellar orbit that makes the encounter less impulsive and the tidal effects more gradual, compared to the usual SMBH-TDE case.
For both cases above, the reconstructed black hole masses beyond the Hills mass point away from TDEs but are consistent with a micro-TDE origin.
Micro-TDEs can also occur outside of AGN disks, for example in stellar triples hosting a BH \citep{2019MNRAS.489..727F}, or in dense stellar clusters \citep{2019ApJ...881...75K,2019PhRvD.100d3009S,2021MNRAS.500.4307F}. The rate of micro-TDEs from these channels is uncertain but could be as high as that of TDEs. These micro-TDEs could also occur in galaxies with SMBH mass above the Hills mass. Observationally, however, these micro-TDE channels will occur far from galactic centers and will not be confined to AGNs. Since both ASASSN-15lh and ZTF19aailpwl were localized to their host galaxy's center and associated with an AGN, non-AGN-assisted micro-TDE channels are unlikely for these two cases.
\section{Rate Density}
To estimate the expected rate density of micro-TDEs, we computed the number of stars, BHs and neutron stars that undergo orbital alignment with the AGN disk within the AGN's lifetime (assumed to be $10^7$\,yr). We found $\sim 5$ orbital alignments for each object type (see also \citealt{2020ApJ...901L..34Y}). We simulated the merger of objects within the disk using a Monte Carlo simulation of many galaxies, with random number of orbital alignments following a Poisson distribution for each galaxy.
To compute the rate density we further assumed for simplicity that the expected orbital alignment rate is identical in every AGN (there is only a weak dependence on the AGN properties; \citealt{2019ApJ...876..122Y}). We adopted a number density $n_{\rm seyfert}=0.018$\,Mpc$^{-3}$ for Seyfert galaxies \citep{2005AJ....129.1795H} which represent most AGNs. We show the resulting expected merger rate density for all possible combinations of stars, BHs and neutron stars in Table \ref{table :merger rate}.
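The structure of the Monte Carlo described above can be sketched as follows; the pairing efficiency is a placeholder assumption, and this toy does not reproduce the table's values, which come from the full calculation including in-disk migration and repeated mergers:

```python
import numpy as np

rng = np.random.default_rng(1)

# Draw a Poisson number of aligned objects per AGN (mean ~5 per object type,
# per the text), pair them, and scale by Seyfert density and AGN lifetime.
n_agn = 100_000
n_bh = rng.poisson(5.0, n_agn)          # aligned BHs per AGN lifetime
n_star = rng.poisson(5.0, n_agn)        # aligned stars per AGN lifetime
pair_eff = 0.5                          # assumed BH-star pairing efficiency (placeholder)
events_per_agn = pair_eff * np.minimum(n_bh, n_star).mean()

n_seyfert = 0.018 * 1e9                 # Seyferts per Gpc^3
t_agn = 1e7                             # AGN lifetime, yr
rate = events_per_agn * n_seyfert / t_agn   # micro-TDEs per Gpc^3 per yr
print(f"{rate:.1f} Gpc^-3 yr^-1 (toy; placeholder efficiency)")
```

Replacing the placeholder pairing step with the full capture, migration, and hierarchical-merger treatment yields the rate densities listed in Table \ref{table :merger rate}.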
\begin{table}[!htbp]
\begin{center}
\begin{tabular}{c|c|c|c|}
\cline{2-4}
& black hole & neutron star & star \\
\hline
\multicolumn{1}{|c|}{black hole} & 13 & 1 & 170 \\
\hline
\multicolumn{1}{|c|}{neutron star} & & 10$^{-3}$ & 0.14 \\
\hline
\multicolumn{1}{|c|}{star} & & & 20 \\
\hline
\end{tabular}
\caption{{\bf Expected rate density of binary encounters in AGNs}. Micro-TDEs correspond to BH--star encounters. Results are shown in units of Gpc$^{-3}$ yr$^{-1}$. While BHs are the least common, they dominate the merger rate as their higher mass results in more efficient mass segregation and orbital alignment, while their mergers result in BHs that can undergo further mergers in the AGN disk.}
\label{table :merger rate}
\end{center}
\end{table}
We separately estimated the micro-TDE rate density from observations. For this we used the rate density of TDE candidates in galaxies with $M_\bullet\gtrsim10^{8}$\,M$_\odot$, which are difficult to explain with disruption by SMBHs. We used the TDE host galaxy SMBH mass function estimated by \cite{2018ApJ...852...72V} based on about two dozen observed TDE-candidates. \cite{2018ApJ...852...72V} found that the mass function is roughly constant for $M_\bullet < 10^{7.5}$\,M$_\odot$, while it sharply drops for $M_\bullet > 10^{7.5}$\,M$_\odot$ (in this high mass range they only consider the detection of ASASSN-15lh). Their reconstructed mass function corresponds to a TDE rate of $5^{+8}_{-4}\times10^{-2}$\,Gpc$^{-3}$yr$^{-1}$ (1$\sigma$ uncertainty) for $M_\bullet > 10^{8}$\,M$_\odot$.
Taking into account the mass function of SMBHs in AGNs, we found that about $1\%$ of micro-TDEs occur in AGNs with $M_\bullet > 10^{8}$\,M$_\odot$, corresponding to a rate of $\sim 2$\,Gpc$^{-3}$yr$^{-1}$.
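The heavy-SMBH rate quoted above is a simple product of the total BH--star rate from Table \ref{table :merger rate} and the mass-function fraction; a minimal numerical sketch (the $1\%$ fraction is an input from the adopted SMBH mass function, not derived here):

```python
# Consistency check: heavy-SMBH micro-TDE rate from the total BH--star
# rate (Table 1) and the ~1% fraction of micro-TDEs occurring in AGNs
# with M_SMBH > 1e8 Msun (an input from the adopted SMBH mass function).
total_rate = 170.0        # Gpc^-3 yr^-1, all BH--star micro-TDEs
heavy_fraction = 0.01     # fraction in AGNs with M_SMBH > 1e8 Msun

heavy_rate = total_rate * heavy_fraction
print(f"~{heavy_rate:.0f} Gpc^-3 yr^-1")   # prints "~2 Gpc^-3 yr^-1"
```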
\section{Conclusion}
We proposed that micro-TDEs occur in AGN disks, and found their rate to be about $170$\,Gpc$^{-3}$yr$^{-1}$. Such micro-TDEs may be easiest to distinguish from TDEs around SMBHs by focusing on AGN-hosting galaxies in which the central SMBH's mass is too high ($M_\bullet\gtrsim10^{8}$M$_\odot$) to tidally disrupt solar-like stars. Two such TDE candidates have been reported so far, ASASSN-15lh and ZTF19aailpwl, both among the highest-luminosity TDE candidates.
The observed rate density of TDE candidates from galaxies with $M_\bullet\gtrsim10^{8}$M$_\odot$ \citep{2018ApJ...852...72V} is below the micro-TDE rate density of $\sim2$\,Gpc$^{-3}$yr$^{-1}$ predicted in this work for AGNs with $M_\bullet\gtrsim10^{8}$M$_\odot$; the observed candidates might therefore represent a bright sub-population of micro-TDEs. In addition, one candidate (ZTF19aailpwl) occurred in an AGN, while the other (ASASSN-15lh) probably occurred in a weak AGN.
The unique environments in AGN disks are expected to give rise to a wealth of other interesting phenomena, such as the tidal disruption of a star by a neutron star (albeit at a smaller rate of $0.1$\,Gpc$^{-3}$yr$^{-1}$), the formation of Thorne-Zytkow objects \citep{1975ApJ...199L..19T} when a neutron star in the disk is unable to tidally disrupt a stellar companion due to a large stellar radius, the tidal disruption of white dwarfs by stellar-mass BHs, with unique signatures~\citep{Maguire+2020}, and the accretion induced collapse of neutron stars \citep{2021arXiv210310963P} and white dwarfs \citep{2021arXiv210409389Z}.
Further theoretical and observational work is needed to better understand the spectral and temporal properties of AGN-assisted micro-TDEs, and to observe them against the background of AGN variability. In particular we suggest an observational focus on AGNs that harbor the heaviest SMBHs and exhibit unusual flaring activity (e.g., \citealt{2019NatAs...3..242T}).
\begin{acknowledgments}
The authors would like to thank
Sjoert van Velzen for useful suggestions. I.B. acknowledges the support of the Alfred P. Sloan Foundation. ZH was supported by NASA grant NNX15AB19G and NSF grants AST-2006176 and AST-1715661. HT acknowledges support by the Grants-in-Aid for Basic Research by the Ministry of Education, Science and Culture of Japan (HT:17H01102, 17H06360). RP acknowledges support by NSF award AST-2006839 and from NASA (Fermi) award 80NSSC20K1570.
\end{acknowledgments}
\section*{Introduction}
The exclusive break-up of polarized deuterons by protons at a deuteron beam energy of 2 GeV was studied at Saclay \cite{Ero1994,Exp145}. The experiment was motivated by the idea of exploring the details of the deuteron wave function at high internal momenta. As compared with the series of previous $dp\rightarrow ppn$ experiments \cite{Perdrisat1969,Witten1975,Felder1976,Perdrisat1985,Sulimov1994}, this experiment was designed to be sensitive to the ratio between the $S$- and $D$-components of the deuteron wave function accessed through polarization observables. The sensitivity to the $S/D$ ratio is maximal when the Impulse Approximation (IA) is valid. In this approximation one of the deuteron's nucleons is a spectator while the other interacts with the beam/target proton.
As theory predicts, and experiment confirms, the IA is valid only when the internal momentum in the deuteron does not exceed $\approx$ 200 MeV/c. A straightforward experimental test of the deuteron wave function above this momentum is justified only if the mechanisms beyond the IA are well under control. A good knowledge of these interaction mechanisms is therefore of great importance in experiments of this kind.
That is why the theory of corrections to the IA has a history as long as that of the experiments motivated by this approximation. Chew and Goldberger \cite{ChewGoldberger1952} represented the scattering of elementary particles by complex nuclei as the multiple-scattering series
$T=\sum_{k=1}^{\infty}T^{(k)}$,
where $k$ is the number of two-body NN interactions. The term $T^{(1)}$ corresponds to single scattering and contains the IA term, $T^{(2)}$ corresponds to double scattering, and so on.
An analogous development follows from the Faddeev equations \cite{Faddeev1961}. Everett \cite{Everett1962} was the first to scrutinize the double-scattering terms. He used the spin-dependent NN amplitude from the phase shift analysis (PSA), corrected so as to allow one nucleon to be off-shell, and calculated the loop integrals of the double-scattering graphs, taking the NN amplitudes out of the integral at a fixed point. Golovin et al. \cite{Golovin1972} used a similar method with updated NN amplitudes.
Another approach was developed by Wallace \cite{Wallace1972}. He took advantage of the Glauber cancellation, by the higher-order terms of the multiple-scattering series, of those pieces of the double-scattering loop integral which originate from off-shell states \cite{Harrington1969}. This approach was applied by Punjabi et al. \cite{PerdrisatPunjabi1990}.
The question about the role of the $\Delta$-isobar in (p,2p) reactions often arises when discussing the discrepancies between the experimental data and predictions that include only NN rescatterings. Yano \cite{Yano1985} calculated the contribution of the $\Delta$-isobar to the deuteron break-up reaction using a Feynman diagram approach, but the corresponding amplitude was added incoherently to the other amplitudes.
The analysis of the copious and precise data on the polarization parameters in the exclusive $\vec{d}p\rightarrow \vec{p}pn$ reaction, obtained at Saclay \cite{Exp145}, requires dealing with the interference between the various amplitudes of the multiple scattering. In this paper we present such a model. It is based on the approach of Everett and includes the spin and isospin variables. The model takes into account the double-scattering contribution and also the graphs including the process $\pi^* d\rightarrow$NN, the virtual pion being emitted by the beam/target proton. Since the $\pi d\rightarrow$NN reaction at these energies goes mainly through a $\Delta$N intermediate state, the graphs considered by Yano are also included, but coherently with the other contributions. Furthermore, pion-nucleon scattering in the other partial waves (S, P, D) is also considered.
When calculating the nucleon-nucleon rescattering contributions to the deuteron break-up amplitude one needs to transform the NN matrices from the nucleon-nucleon center-of-mass frames to the common laboratory frame. These transformations depend on the spin basis chosen for the one-nucleon states. Usually the canonical or helicity bases are used, which transform with unitary two-by-two matrices depending on the nucleon momentum. These are the so-called Wigner rotations, and one needs to make four different rotations corresponding to the two incoming and two outgoing nucleons. The covariant basis \cite{Joss1962,Stapp1962}, which transforms independently of the particle momentum by unimodular two-by-two matrices, is free from this deficiency. Amplitudes in the covariant basis are usually referred to as the M-functions of Stapp.
The covariant formalism of the M-functions developed by Stapp \cite{Stapp1962,Stapp1983} is based on the matrices $\sigma^\mu=(1, \vec{\sigma})$ and $\tilde{\sigma}^\mu=(1, -\vec{\sigma})$, where $\vec{\sigma}$ are the standard Pauli matrices. With each kinematical momentum $P^\mu$ of the reaction the two-by-two matrices $\tilde{P} \equiv P^\mu \tilde{\sigma}_\mu$ and $P \equiv P^\mu \sigma_\mu$ are associated. The products $\tilde{P}_i P_j$ of these matrices are the elements from which the M-functions are built. The matrices $\tilde{V}_i$ and $V_i$, associated with the four-velocity of the $i$-th particle, serve as the metric tensors of that particle when performing the contraction over its index (traces, successive processes).
The outline of the paper is as follows. Sections 1-3 cover the three main mechanisms involved in the deuteron break-up: the Impulse Approximation, NN double scattering and $\Delta$-excitation. The polarization observables obtained with the covariant deuteron density matrix are described in \mbox{Section 4.}
The discussion of the calculation results and a summary are given in Section 5.
An introduction to the covariant formalism of the M-functions is given in the Appendix.
\vspace{1cm}
\section{Impulse Approximation}
\begin{figure}[ht]
\begin{center}
\begin{picture}(350,170)(0,0)
\thicklines
\put(30,50){\line(1,0){170}} \put(210,50)
{$p_2$, $\tau _2, \sigma _2$}
\put(300,50){$(RS)$}
\put(20,40){$p_0=(m,\vec{0})$, $\tau _0, \sigma _0$}
\put(30,135){\line(1,0){170}} \put(210,135){$p_3$, $\tau _3, \sigma _3$}
\put(280,135){$(undetected)$} \put(20,145){$p_d$, $\sigma _p \sigma _n$}
\put(30,130){\line(1,0){50}} \put(80,130){\line(1,-2){40}}
\put(120,50){\line(2,1){80}} \put(210,90){$p_1$, $\tau _1, \sigma _1$}
\put(50,90){$p_v$, $\tau _v, \sigma _v$}
\put(290,80){$(SPES)$}
\end{picture}
\end{center}
\caption{IA mechanism. In the Saclay experiment $p_1$ was
measured by the magnetic spectrometer
SPES-4 in coincidence with the recoil protons ($p_2$) detected
by the Recoil Spectrometer RS.}
\label{fig:IA}
\end{figure}
The general expression for the $S$-matrix elements of the deuteron break-up
reaction is
\[S=i(2 \pi )^4 \delta ^4(p_1+p_2+p_3-p_0-p_d) M \;,\]
where $M$ is the amplitude of the reaction and
we assign indices to
each particle of the reaction d(p,2p)n as
\begin{center}
\begin{tabular}{cccccccccl}
d & + & p & $\rightarrow$ & p & + & p & + & n & \\
(d) & + & (0) & $\rightarrow$ & (1) & + & (2) & + & (3) &. \\
\end{tabular}
\end{center}
To avoid in what follows the numerous repetitions of the summation
sign $\sum$, which are inevitable when considering the spin and
isotopic spin variables, we will use a tensor-like notation with
indices characterizing the spin and isotopic spin projections. Upper
and lower indices relate to the final and initial channels,
respectively. The same index in an upper and a lower position implies
summation over this index.
A free nucleon is characterized by the projections of its spin
$\sigma$ and isospin $\tau$ and by its momentum $p$.
We will represent the deuteron spin states by the
symmetric spin-tensor $S^{\sigma _n \sigma _p}$, so that the amplitudes
of reactions with a deuteron will carry the pair
of indices $\sigma _n \sigma _p$ and will be symmetric with respect to
their interchange.
The amplitude of the deuteron break-up reaction with all indices reads
\[{{\rm M}^{\tau _1 \tau _2 \tau _3}_{\tau _0}}^
{; \sigma _1 \sigma _2 \sigma _3}_
{; \sigma _0 \sigma _p \sigma _n}(p_1,p_2,p_3;p_0,p_d)\;.\]
The Pauli principle requires that this amplitude should be antisymmetric
with respect to the interchange of any two final nucleons. For example
\[{{\rm M}^{\tau _1 \tau _2 \tau _3}_{\tau _0}}^
{; \sigma _1 \sigma _2 \sigma _3}_
{; \sigma _0 \sigma _p \sigma _n}(p_1,p_2,p_3;p_0,p_d)=
-{{\rm M}^{\tau _2 \tau _1 \tau _3}_{\tau _0}}^
{; \sigma _2 \sigma _1 \sigma _3}_
{; \sigma _0 \sigma _p \sigma _n}(p_2,p_1,p_3;p_0,p_d)\;. \]
To fulfill this requirement we proceed as follows. Having written
an expression antisymmetric with respect to the interchange of, for
example, the first and the second nucleons, ${\rm M}^{(12)3}$, we
then perform the cyclic permutations of the nucleons. The resulting
expression ${\rm M}^{(12)3}+{\rm M}^{(23)1}+{\rm M}^{(31)2}$
possesses the required antisymmetry.
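The antisymmetrization recipe can be verified numerically on a toy function (the function $f$ below is purely illustrative, chosen only to be antisymmetric in its first two arguments):

```python
# If f(x, y, z) = -f(y, x, z), then the cyclic sum
#   T(1,2,3) = f(1,2,3) + f(2,3,1) + f(3,1,2)
# is antisymmetric under the interchange of ANY two arguments.
def f(x, y, z):
    # toy function, antisymmetric in its first two arguments
    return (x - y) * (1.0 + x * y + z**3)

def T(x, y, z):
    return f(x, y, z) + f(y, z, x) + f(z, x, y)

p1, p2, p3 = 0.3, 1.7, -0.8
assert abs(T(p1, p2, p3) + T(p2, p1, p3)) < 1e-12   # 1 <-> 2
assert abs(T(p1, p2, p3) + T(p3, p2, p1)) < 1e-12   # 1 <-> 3
assert abs(T(p1, p2, p3) + T(p1, p3, p2)) < 1e-12   # 2 <-> 3
```

Each pairwise swap maps the three cyclic terms onto each other with the sign supplied by the antisymmetry of $f$ in its first two slots, which is exactly why the cyclic sum suffices.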
Let us start with the expression corresponding to the graph pictured
in Fig. \ref{fig:IA}. With all indices it reads
\begin{equation}
{{\rm M}_{NN}}_{\tau _0 \tau _v; \sigma _0 \sigma _v}
^{\tau _1 \tau _2; \sigma _1 \sigma _2}(p_1,p_2;p_0,p_v)\;
\frac{i}{2m_v(m_v-m)}\;
{{\rm D}^{\tau _v \tau _3;}}^{\sigma _v \sigma _3}_
{\sigma _p \sigma _n}(p_v,p_3;p_d)\;,
\label{eq:ONE d-3}
\end{equation}
where $m_v^2 \equiv (p_d-p_3)^2$ is the squared mass of the virtual nucleon
and ${\rm M}_{NN}$ and ${\rm D}$ are the nucleon-nucleon amplitude
and the deuteron vertex, respectively. The above expression corresponds only
to the first term in the development of the spinor particle
propagator in the Dirac particle-antiparticle formalism
\[\frac{\not p +m}{s-m^2}=\frac{1}{2W}\left[ \frac{u \cdot \bar{u}}{W-m} -
\frac{v \cdot \bar{v}}{W+m}\right]\;,\;\;\;W=\sqrt{p^2}\;.\]
Thus in what follows we do not consider virtual antinucleon states,
which would require knowledge of the antinucleon components of the nucleon-nucleon
amplitude and of the deuteron vertex.
The deuteron wave function is a \mbox{product} of the vertex and the propagator
\begin{equation}
\frac{{{\rm D}^{\tau _v \tau _3;}}^{\sigma_v \sigma_3}_{\sigma_p \sigma_n}
(p_v,p_3;p_d)}
{2m_v(m_v-m)} \equiv \epsilon ^{\tau _v \tau _3}
\Phi^{\sigma_v \sigma_3}_{\sigma_p \sigma_n}(p_v,p_3;p_d)\;,
\label{eq:dnp wave funtion}
\end{equation}
where $ \epsilon$ is an antisymmetric two by two tensor in isotopic space.
The fully antisymmetrized 'one nucleon exchange' ($ONE$) amplitude is equal to
\begin{eqnarray}
{{{\rm M}_{ONE}}^{\tau _1 \tau _2 \tau _3}_{\tau _0}}^
{; \sigma _1 \sigma _2 \sigma _3}_
{; \sigma _0 \sigma _p \sigma _n}(p_1,p_2,p_3;p_0,p_d)= \nonumber \\
i[{{\rm M}_{NN}}_{\tau _0 \tau _v; \sigma _0 \sigma _v}
^{\tau _1 \tau _2; \sigma _1 \sigma _2}(p_1,p_2;p_0,p_d-p_3)
\epsilon ^{\tau _v \tau _3}
\Phi^{\sigma _v \sigma _3}_{\sigma _p \sigma _n}(p_d-p_3,p_3;p_d)+ \nonumber \\
{{\rm M}_{NN}}_{\tau _0 \tau _v; \sigma _0 \sigma _v}
^{\tau _2 \tau _3; \sigma _2 \sigma _3}(p_2,p_3;p_0,p_d-p_1)
\epsilon ^{\tau _v \tau _1}
\Phi^{\sigma _v \sigma _1}_{\sigma _p \sigma _n}(p_d-p_1,p_1;p_d)+
\label{eq:ONE term} \\
{{\rm M}_{NN}}_{\tau _0 \tau _v; \sigma _0 \sigma _v}
^{\tau _3 \tau _1; \sigma _3 \sigma _1}(p_3,p_1;p_0,p_d-p_2)
\epsilon ^{\tau _v \tau _2}
\Phi^{\sigma _v \sigma _2}_{\sigma _p \sigma _n}
(p_d-p_2,p_2;p_d)]\;, \nonumber
\end{eqnarray}
where we have taken into account the symmetry properties of
nucleon-nucleon amplitude in the expression (\ref{eq:ONE d-3}).
The IA corresponds to keeping only the term in
eq.(\ref{eq:ONE term}) with the minimal virtuality of the intermediate nucleon,
i.e. the term with the minimal momentum of the nucleon spectator in the
deuteron rest frame. Under the kinematical conditions
of the Saclay experiment this was
the undetected neutron ($p_3$).
In the next subsections we describe the nucleon-nucleon amplitude
and the deuteron wave function.
\vspace{1cm}
\subsection{NN amplitude}
\begin{figure}[h]
\begin{center}
\begin{picture}(200,100)(0,0)
\thicklines
\put(100,50){\circle{40}}
\put(30,30){\line(1,0){140}} \put(30,70){\line(1,0){140}}
\put(50,20){$p_0$, $\tau_0$, $\sigma_0$}
\put(50,80){$p_v$, $\tau_v$, $\sigma_v$}
\put(160,20){$p_2$, $\tau_2$, $\sigma_2$}
\put(160,80){$p_1$, $\tau_1$, $\sigma_1$}
\end{picture}
\end{center}
\caption{NN amplitude}
\label{fig:NN amplitude}
\end{figure}
The spin and isospin dependent NN amplitude looks like
\begin{eqnarray*}
&{{\rm M}_{NN}}^{\tau _1,\tau _2 ;\sigma _1\sigma _2}
_{\tau _0,\tau _v ;\sigma _0 \sigma _v}
(p_1,p_2;p_0,p_v) =& \\
& {\displaystyle \frac{e^{\tau _1}_{\tau _0}e^{\tau _2}_{\tau _v}-
e^{\tau _2}_{\tau _0}e^{\tau _1}_{\tau _v}}{2} }
{{\rm M}_{0}}^{\sigma _1\sigma _2}_{\sigma _0\sigma _v}(p_1,p_2;p_0,p_v)+
{\displaystyle \frac{e^{\tau _1}_{\tau _0}e^{\tau _2}_{\tau _v}+
e^{\tau _2}_{\tau _0}e^{\tau _1}_{\tau _v}}{2} }
{{\rm M}_{1}}^{\sigma _1\sigma _2}_{\sigma _0\sigma _v}(p_1,p_2;p_0,p_v)\;,&
\end{eqnarray*}
where $e$ is the unit two by two operator in
the isotopic space and the M$_0$ and M$_1$ are the isosinglet and
isotriplet parts of the NN amplitude, respectively.
According to the Pauli principle the amplitudes M$_0$ and M$_1$
obey the symmetry relations
\begin{eqnarray*}
{{\rm M}_{0}}^{\sigma _1 \sigma _2}_{\sigma _0 \sigma _v}(p_1,p_2;p_0,p_v)=
{{\rm M}_{0}}^{\sigma _2 \sigma _1}_{\sigma _0 \sigma _v}(p_2,p_1;p_0,p_v)\;,
\nonumber \\
{{\rm M}_{1}}^{\sigma _1 \sigma _2}_{\sigma _0 \sigma _v}(p_1,p_2;p_0,p_v)=
-{{\rm M}_{1}}^{\sigma _2 \sigma _1}_{\sigma _0 \sigma _v}(p_2,p_1;p_0,p_v)\;.
\end{eqnarray*}
The c.m. canonical amplitudes, i.e. the
$S$-matrix elements, of the NN scattering are expressed in terms of five
complex amplitudes $a,b,c,d,e$ \cite{Bystritsky1978}
\[S=\frac{1}{2}[(a+b)+(a-b)\hat{n} \otimes \hat{n}
+(c+d)\hat{m} \otimes \hat{m}
+(c-d)\hat{l} \otimes \hat{l}+
e(e \otimes \hat{n}+\hat{n} \otimes e)]\;,\]
where
\[\vec{m}=\frac{\vec{k}_f-\vec{k}_i}{|\vec{k}_f-\vec{k}_i|}\;,\;\;
\vec{l}=\frac{\vec{k}_f+\vec{k}_i}{|\vec{k}_f+\vec{k}_i|}\;,\;\;
\vec{n}=\frac{\vec{k}_i \times \vec{k}_f}{|\vec{k}_i \times \vec{k}_f|}\;,\]
$\vec{k}_{i,f}$ are the initial and final c.m. momenta
of the NN interacting system and
$\hat{n} \equiv \vec{n} \cdot \vec{\sigma}$ and so on. The shorthand
$A \otimes B$ stands for $A^{\sigma_1}_{\sigma_v}B^{\sigma_2}_{\sigma_0}$.
We have to deal with the NN amplitudes in the laboratory frame, and it is
inconvenient to make transformations from the numerous individual NN c.m. frames
to this frame during the calculations.
That is why the use of the M-functions of
Stapp \cite{Stapp1962,Stapp1983} is very natural in such calculations.
The basic M-functions analogous to the
$S$-matrix basis $e \otimes e, \hat{m} \otimes \hat{m},
\hat{l} \otimes \hat{l}, \hat{n} \otimes \hat{n},
e \otimes \hat{n}+\hat{n} \otimes e$ are built from the
products $\tilde{V}_iV_j$ of the two by two (hermitian) matrices
$\tilde{V}_i \equiv V_i^\mu \tilde{ \sigma_\mu}$ and
$V_j \equiv V_j^\mu \sigma_\mu$, where $V^\mu _i$ are the
four-velocities of the scattered nucleons, completed by the
four-velocity $V$ of the whole system, in an arbitrary frame.
A possible M-function basis $b_i,\;i=1, \ldots ,6$ is
given by eqs.(\ref{eq:bi}-\ref{eq:norms Mi}) of the Appendix. Once fixed,
the basis allows one to represent the M-function of the NN scattering as the sum
\begin{equation}
{\rm M}=g_1b_1+g_2b_2+g_3b_3+g_4b_4+g_5\frac{b_5+b_6}{\sqrt{2}}\;,
\label{eq:M func devel}
\end{equation}
where $g_i$ are the five complex amplitudes.
The relations between the M-function's amplitudes $g_i$ and
the canonical amplitudes
$a,b,c,d,e$ are \cite{Grebenyuk1989}
\begin{eqnarray*}
g_1=-\frac{c+d}{2}\;,& g_2={\displaystyle -\frac{c-d}{2} }\;, \\
g_3=\frac{b+a\cos \varphi -ie\sin \varphi }{2}\;, &
g_4= {\displaystyle \frac{b-a\cos \varphi +ie\sin \varphi }{2} }\;, \\
g_5=-\frac{a\sin \varphi +ie\cos \varphi }{\sqrt{2}}\;,&
e^{i\varphi}= {\displaystyle \frac{\omega_0 -\omega}{\omega_0 -\omega^{-1}} }\;,
\end{eqnarray*}
where $\omega =e^{i\theta}$, $\theta$ being the c.m. scattering angle, and
\[\omega_0 \equiv \sqrt{\frac{(V,V_2)+1}{(V,V_2)-1}\;
\frac{(V,V_0)+1}{(V,V_0)-1}}\;.\]
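A direct numerical transcription of these relations (an illustrative sketch, not the code used in the paper) also exhibits the expected invariance of the total strength, $\sum_i|g_i|^2=\tfrac12\left(|a|^2+|b|^2+|c|^2+|d|^2+|e|^2\right)$ for real $\varphi$:

```python
import cmath
import math

def g_from_canonical(a, b, c, d, e, omega0, theta):
    """M-function amplitudes g1..g5 from the canonical c.m. amplitudes
    a, b, c, d, e, using the relations quoted in the text.  omega0 is
    the kinematic factor built from the four-velocity products and
    theta is the c.m. scattering angle; for physical kinematics the
    ratio below is unimodular, so phi is real."""
    omega = cmath.exp(1j * theta)
    phi = cmath.phase((omega0 - omega) / (omega0 - 1.0 / omega))
    cphi, sphi = math.cos(phi), math.sin(phi)
    g1 = -(c + d) / 2
    g2 = -(c - d) / 2
    g3 = (b + a * cphi - 1j * e * sphi) / 2
    g4 = (b - a * cphi + 1j * e * sphi) / 2
    g5 = -(a * sphi + 1j * e * cphi) / math.sqrt(2)
    return g1, g2, g3, g4, g5

# Basis change preserves the total strength: sum|g_i|^2 = sum|x|^2 / 2.
amps = (1.0 + 0.5j, 0.3 - 0.2j, -0.7j, 0.4 + 0j, 0.9 + 0.1j)
g = g_from_canonical(*amps, omega0=1.8, theta=0.6)
lhs = sum(abs(x) ** 2 for x in g)
rhs = sum(abs(x) ** 2 for x in amps) / 2
assert abs(lhs - rhs) < 1e-12
```

The invariance follows because the $(g_1,g_2)$ and $(g_3,g_4)$ pairs are $45^\circ$ rotations of $(c,d)$ and of $(b,\,a\cos\varphi-ie\sin\varphi)$, while the $\varphi$-dependent cross terms of $g_3,g_4$ and $g_5$ cancel.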
The values of the amplitudes {\em a, b, c, d, e} for a given laboratory energy
and c.m. angle were calculated using the PSA of Arndt et
al. \cite{Arndt1988}. The normalization of the M-functions
is such that the c.m. cross-section equals
\[\frac {d \sigma }{d \Omega_{NN} }=\frac {1}{{(8 \pi )}^2s} \;
\frac{Tr({\rm M}\;\;\tilde{V}_v \otimes \tilde{V}_0\;\;
{\rm M}^\dagger \;\;V_1 \otimes V_2)}{4}\;.\]
The presence of the metric matrices $\tilde{V}_i$ and $V_j$ is a peculiarity
of the Stapp formalism, as was mentioned in the Introduction.
As to the virtual nucleons, they are off-shell and we took this into account
only kinematically. In particular, we calculate the basis M-functions $b_i$
by means of eqs.(\ref{eq:bi}-\ref{eq:norms Mi}) using the 'virtual' velocity
$V_v=(p_d-p_3)/m_v$, provided of course that
\mbox{$m_v^2= (p_d-p_3)^2$} happened to be positive. When this was not
the case we assigned zero values
to the matrix elements of the corresponding nucleon-nucleon amplitude.
For the scalar amplitudes $g_i$ the only way to obtain off-shell values is
model calculations, which are permanently in progress
(see for example \cite{Gross1992}).
Still, the situation is far from satisfactory, especially
for energies above several hundred MeV.
Thus only the on-shell amplitudes obtained in the PSA
are of practical use. The choice of the on-shell kinematics
corresponding to the off-shell one is ambiguous. One way is to take
the PSA solution at
$s=(p_0+p_v)^2=(p_1+p_2)^2$ and $t=(p_1-p_0)^2=(p_2-p_v)^2$, i.e. to take
$g_i(s,t,m_v^2) = g_i(s,t,m^2)$. Another way
is first to put the virtual nucleon on the mass shell,
$p_v=(E_v,{\bf p}_v) \rightarrow p_v^*=(\sqrt{{\bf p}_v^2+m^2},
{\bf p}_v)$, and then take the $PSA$ solution at
$s^*={(p_0+p_v^*)}^2 \geq s$ and $t^*={(p_2-p^*_v)}^2$, i.e. to accept that
$g_i(s,t,m_v^2) = g_i(s^*,t^*,m^2)$.
We preferred the first way, since it ignores the
off-shell mass dependence altogether, in contrast to the non-dynamical recipe
for such a dependence induced by the second way.
\vspace{1cm}
\subsection{Deuteron wave function}
\begin{figure}[ht]
\begin{center}
\begin{picture}(200,100)(0,0)
\thicklines
\put(100,25){\circle{10}}
\put(30,20){\line(1,0){140}} \put(30,30){\line(1,0){70}}
\put(100,30){\line(2,1){70}}
\put(50,10){$p_d$, $\sigma _p$}
\put(50,35){$p_d$, $\sigma _n$}
\put(160,10){$p_v$, $\sigma _v$}
\put(160,70){$p_3$, $\sigma _3$}
\end{picture}
\end{center}
\caption{DNP-vertex}
\label{fig:dnp vertex}
\end{figure}
The DNP-vertex (see Fig. \ref{fig:dnp vertex}) is related to
the wave function by eq.(\ref{eq:dnp wave funtion}). The antisymmetric tensor
\[\epsilon ^{\tau _v \tau _3}=\frac{1}{\sqrt{2}}
\left( \begin{array}{cc}
0 & 1 \\
-1 & 0
\end{array}\right) \; \]
determines the isosinglet nature of the deuteron.
The classic deuteron wave functions refer to the deuteron rest frame.
The form most suitable for the transition to the M-function
expression is
\[{\Phi_s} ^{\sigma_v \sigma_3}_{\sigma_p \sigma_n}(\vec{k})=
a( e^{\sigma_v}_{\sigma_p}
e^{\sigma_3}_{\sigma_n} +
e^{\sigma_v}_{\sigma_n}
e^{\sigma_3}_{\sigma_p}) -b k^ik^j
( {e_i}^{\sigma_v}_{\sigma_p}
{e_j}^{\sigma_3}_{\sigma_n} +
{e_i}^{\sigma_v}_{\sigma_n}
{e_j}^{\sigma_3}_{\sigma_p})\;, \]
where the subscript $s$ recalls the $S$-matrix origin of the classic wave function
and the unit vector $\vec{k}$ is directed along the nucleon momentum. The
matrices $e_i,\;i=1,2,3$ coincide with the Pauli matrices, but we use a
different notation to distinguish them from the matrices $\sigma^i$,
which carry other spinor indices:
$\sigma^i_{\bar{c}d}$. The scalar functions $a$ and $b$ are connected with
the $S$- and $D$-wave functions $u$ and $w$ as follows
\begin{eqnarray}
a=u-\frac{w}{\sqrt{8}}\;\;,&\;\;
b=-{\displaystyle \frac{3w}{\sqrt{8}}}\;, \label{eq:a,b(u,w)} \\
u=a-\frac{b}{3}\;\;,&\;\;w=-{\displaystyle \frac{b\sqrt{8}}{3}}\;. \nonumber
\end{eqnarray}
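As a quick sanity check (with arbitrary illustrative numbers, not actual wave-function values), the two pairs of relations above are mutual inverses:

```python
import math

SQRT8 = math.sqrt(8.0)  # the sqrt(8) appearing in the relations above

def ab_from_uw(u, w):
    # a = u - w/sqrt(8),  b = -3 w/sqrt(8)
    return u - w / SQRT8, -3.0 * w / SQRT8

def uw_from_ab(a, b):
    # inverse relations: u = a - b/3,  w = -b sqrt(8)/3
    return a - b / 3.0, -b * SQRT8 / 3.0

# Round trip with arbitrary illustrative values:
u0, w0 = 0.42, -0.11
u1, w1 = uw_from_ab(*ab_from_uw(u0, w0))
assert abs(u1 - u0) < 1e-14 and abs(w1 - w0) < 1e-14
```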
The corresponding M-function of the deuteron wave function in an arbitrary
frame is built from the
products $\tilde{V}_vV_d$ and $\tilde{V}_3V_d$ of the two by two matrices,
where $V^\mu_{d,v,3}$ are the four-velocities of the
deuteron, virtual nucleon and on-shell nucleon, respectively. It is equal to
\begin{eqnarray}
\Phi^{\sigma_v \sigma_3}_{\sigma_p \sigma_n}(p_v,p_3;p_d)=
\nonumber \\ a\frac{
(e+\tilde{V_v}V_d)^{\sigma_v}_{\sigma_p}
(e+\tilde{V_3}V_d)^{\sigma_3}_{\sigma_n} +
(e+\tilde{V_v}V_d)^{\sigma_v}_{\sigma_n}
(e+\tilde{V_3}V_d)^{\sigma_3}_{\sigma_p}}
{2\sqrt{((V_v,V_d)+1)((V_3,V_d)+1)}}
+ \nonumber \\ b\frac{
(e-\tilde{V_v}V_d)^{\sigma_v}_{\sigma_p}
(e-\tilde{V_3}V_d)^{\sigma_3}_{\sigma_n} +
(e-\tilde{V_v}V_d)^{\sigma_v}_{\sigma_n}
(e-\tilde{V_3}V_d)^{\sigma_3}_{\sigma_p}}
{2\sqrt{((V_v,V_d)-1)((V_3,V_d)-1)}}\;.
\label{eq:M matrix triplet vertex}
\end{eqnarray}
The following normalization equation holds
\[Tr(\Phi \tilde{\rho}_0 \Phi^{\dagger}\;V_3 \otimes V_v) =
\Phi^{\sigma _v \sigma _3}_{\sigma _p \sigma _n}
\rho_0^{\sigma_p \bar{\sigma}_p\;\;\sigma_n \bar{\sigma}_n}
\bar{\Phi}^{\bar{\sigma} _v \bar{\sigma} _3}_
{\bar{\sigma} _p \bar{\sigma}_n}
{V_3}_{\bar{\sigma}_3 \sigma_3}{V_v}_{\bar{\sigma}_v \sigma_v}=u^2+w^2\;,\]
where $\tilde{\rho}_0$ is the invariant density matrix of the deuteron
defined in the Section 4 (see eq.(\ref{eq:r0})).
The functions $u$ and $w$ obtained with the Paris \cite{Paris}
or Bonn \cite{Bonn} potentials
are approximated by a series of poles
\begin{eqnarray}
u(m_d^2,m_v^2)=\sqrt{8\pi m_d}N_S \left(\frac{1}{q^2+\alpha^2}
-\sum_i\frac{c_i}{q^2+\alpha_i^2}\right)\;,
\nonumber \\
w(m_d^2,m_v^2)=\sqrt{8\pi m_d}N_D \left(\frac{1}{q^2+\alpha^2}
-\sum_i\frac{d_i}{q^2+\alpha_i^2}\right)\;.
\label{eq:u,w param}
\end{eqnarray}
The values $N_{S,D}$ are the normalization constants ($N_S^2 \simeq 0.16\;GeV$)
and the dependence on $m_d^2$ and $m_v^2$ enters through the 3-momentum
of the nucleons in the deuteron rest frame
\[q=\frac{\sqrt{[(m_d+m)^2-m_v^2][(m_d-m)^2-m_v^2]}}{2m_d}\;.\]
The binding energy $\epsilon$ is related to $\alpha$ by $\alpha^2=\epsilon m$.
The sum rules $\sum_i c_i= \sum_i d_i=1$ should be fulfilled.
The dimension of the wave functions $u$ and $w$ is
GeV$^{-1}$ and they are normalized as follows
\[\int d \vec{q} (u^2+w^2)=(2\pi)^3 2m_d\;.\]
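The pole parametrization can be sketched numerically; the pole positions $\alpha_i$ and weights $c_i$ below are hypothetical placeholders (the published Paris and Bonn coefficients differ), chosen only to illustrate the sum rule $\sum_i c_i=1$, which cancels the $1/q^2$ tails of the individual poles:

```python
import math

# Illustrative sketch of the S-wave pole parametrization; the extra
# pole positions and weights below are HYPOTHETICAL, not the published
# Paris/Bonn coefficients.
m_d  = 1.8756              # deuteron mass, GeV
m    = 0.93827             # nucleon mass, GeV
eps  = 2.2246e-3           # deuteron binding energy, GeV
alpha = math.sqrt(eps * m) # leading pole: alpha^2 = eps * m

N_S    = math.sqrt(0.16)   # N_S^2 ~ 0.16 GeV (text)
alphas = [0.30, 0.60]      # extra pole positions, GeV (hypothetical)
cs     = [0.7, 0.3]        # weights; the sum rule requires sum(c_i) = 1
assert abs(sum(cs) - 1.0) < 1e-12

def u_wave(q):
    """S-wave u(q), in GeV^-1, for Fermi momentum q in GeV."""
    bracket = 1.0 / (q**2 + alpha**2) - sum(
        c / (q**2 + a**2) for c, a in zip(cs, alphas))
    return math.sqrt(8.0 * math.pi * m_d) * N_S * bracket

# The leading pole dominates at small q; the sum rule cancels the
# 1/q^2 tails, so the bracket falls off as 1/q^4 at large q.
assert u_wave(0.02) > u_wave(0.5) > 0.0
```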
\vspace{1cm}
\section{ NN double-scattering}
\begin{figure}[ht]
\begin{center}
\begin{picture}(250,100)(0,0)
\thicklines
\put(80,75){\circle{10}}
\put(30,80){\line(1,0){170}} \put(30,70){\line(1,0){50}}
\put(80,70){\line(1,-1){42}}
\put(30,20){\line(1,0){170}}
\put(125,25){\circle{10}}
\put(170,75){\circle{10}}
\put(170,70){\line(2,-1){30}}
\put(128,28){\line(1,1){42}}
\put(30,90){$p_d$, $\sigma _p \sigma _n$}
\put(30,10){$p_0$, $\tau _0, \sigma _0$}
\put(210,10){$p_2$, $\tau _2, \sigma _2$}
\put(210,55){$p_1$, $\tau _1, \sigma _1$}
\put(210,90){$p_3$, $\tau _3, \sigma _3$}
\put(100,90){$p_s$, $\tau _s, \sigma _s$}
\put(30,45){$p_v$, $\tau _v, \sigma _v$}
\put(150,45){$p_f$, $\tau _f, \sigma _f$}
\put(125,77){X}
\end{picture}
\end{center}
\caption{Double-scattering mechanism}
\label{fig:DS}
\end{figure}
Let us start with the graph shown in Fig. \ref{fig:DS}, referred to
in what follows as the
DS graph 2(31). The two other graphs with cyclically permuted nucleons will be
referred to as the DS graphs 1(23) and 3(12). For the DS graph 2(31) the
amplitude with the 'spectator' on the mass shell reads
\begin{eqnarray}
&{{{\rm M}^{2(31)}_{DS}}^{\tau _1 \tau _2 \tau _3}_{\tau _0}}^
{;\sigma _1 \sigma _2 \sigma _3}_
{;\sigma _0 \sigma _p \sigma _n}(p_1,p_2,p_3;p_0,p_d)=
-i \int {\displaystyle\frac{d{\bf p}_ s}{(2\pi)^3 2E_s}} & \nonumber \\
&{\rm M}^{\tau _3 \tau _1;\sigma _3 \sigma _1}_
{\tau _f \tau _s;\sigma _f \sigma _s}(p_3,p_1;p_s,p_{31}-p_s)
{\rm M}^{\tau _2 \tau _f;\sigma _2 \sigma _f}_
{\tau _0 \tau _v;\sigma _0 \sigma _v}(p_2,p_{31}-p_s;p_0,p_d-p_s)&
\label{eq:DS loop int} \\
&{\displaystyle \frac{\epsilon ^{\tau _v \tau _s} \Phi^{\sigma _v \sigma _s}_
{\sigma _p \sigma _n}(p_d-p_s,p_s;p_d)}{2m_f(m_f-m+i\varepsilon)}}&\nonumber
\end{eqnarray}
where \mbox{$p_{31} =p_3+p_1=(E_{31},{\bf p}_{31})$},
$E_s=\sqrt{m^2+{\bf p}_ s^2}$ and $m_f^2=(p_{31}-p_s)^2$.
Simplifications are necessary to
calculate the integral (\ref{eq:DS loop int}), since it requires
knowledge of the
off-shell nucleon-nucleon amplitudes.
The simplest method is to take the nucleon-nucleon amplitudes out of the
integral sign. It is reasonable to do this at some momentum
$p_s^0$ placed on the singular
surface of the integral, corresponding to the mass shell
of the virtual nucleon $f$.
With the $z$-axis directed along the momentum ${\bf p}_{31}$ the equation of
this surface looks like
\begin{equation}
(\frac{p^x_s}{q_{31}})^2+ (\frac{p^y_s}{q_{31}})^2+
(\frac{p^z_s-\frac{|{\bf p}_{31}|}{2}}{q_{31}\frac{E_{31}}{W_{31}}})^2=1\;,
\label{eq:ellipse}
\end{equation}
where $W_{31}=\sqrt{s_{31}}=\sqrt{p^2_{31}}$ is the invariant mass of
the 31-pair of the nucleons and $q_{31}=\sqrt{s_{31}-4m^2}/2$ is their c.m.
momentum. At the end of this Section, and in Section 5 presenting the
calculation results, we return to the problem of the
choice of this Fermi momentum.
When the nucleon pair in the final state has an invariant mass
near the nucleon-nucleon threshold, it is necessary and possible to take
into account the off-shell behavior of the corresponding
NN amplitude in eq.(\ref{eq:DS loop int}) in order to
fulfill the closure sum rules in the $^3S_1$ state
(see the argumentation in
\cite{Aladashvili1977,AlvearWilkin1984,KolybasovKsenzov1975}).
Near the threshold it is possible to describe this off-shell behavior
by the simple form factor ${\rm M}^{off}={\rm M}^{on}\;f(s_{31},m_f^2)$, which
is related to the $^3S_1$ wave function of the deuteron (\ref{eq:u,w param}) as
follows
\begin{equation}
f(s_{31},m_f^2)=1-(q^2+\alpha^2)\sum_i\frac{c_i}{q^2+\alpha_i^2}\;,
\label{eq:off-shell ff}
\end{equation}
where the dependence on the virtual mass and energy enters through the c.m. momentum
\[q=\frac{\sqrt{[(W_{31}+m)^2-m^2_f][(W_{31}-m)^2-m^2_f]}}{2W_{31}}\;.\]
We used the form factor (\ref {eq:off-shell ff}) up to an
energy of 200 MeV; above this energy we replaced it by unity.
Note that at low energies the corresponding DS graph is referred to as
the final state interaction (FSI) graph.
Taking the NN amplitudes
out of the integral sign, but keeping inside it the off-shell form factor,
the deuteron wave function and the propagator, we derive from
eq.(\ref{eq:DS loop int}) the following approximation for the DS amplitude
\begin{eqnarray}
&{{{\rm M}^{2(31)}_{DS}}^{\tau _1 \tau _2 \tau _3}_{\tau _0}}^
{;\sigma _1 \sigma _2 \sigma _3}_
{;\sigma _0 \sigma _p \sigma _n}(p_1,p_2,p_3;p_0,p_d)\simeq &\nonumber \\
&{\displaystyle -i\epsilon ^{\tau _v \tau _s}
{\rm M}^{\tau _3 \tau _1;\sigma _3 \sigma _1}_
{\tau _f \tau _s;\sigma _f \sigma _s}(p_3,p_1;p^0_s,p_{31}-p^0_s)
{\rm M}^{\tau _2 \tau _f;\sigma _2 \sigma _f}_
{\tau _0 \tau _v;\sigma _0 \sigma _v}(p_2,p_{31}-p^0_s;p_0,p_d-p^0_s)}&
\nonumber \\
& {\displaystyle F^{\sigma _v \sigma _s}_
{\sigma _p \sigma _n}(s_{31},t_2)}\;,& \label{eq:DS31}
\end{eqnarray}
where
\[F^{\sigma _v \sigma _s}_
{\sigma _p \sigma _n}(s_{31},t_2) \equiv
\int \frac{d{\bf p}_ s}{(2\pi)^3 2E_s}
\frac{\Phi^{\sigma _v \sigma _s}_
{\sigma _p \sigma _n}(p_d-p_s,p_s;p_d)f(s_{31},(p_{31}-p_s)^2)}
{2m_f(m_f-m+i\varepsilon)}\]
and $t_2=(p_2-p_0)^2=(p_d-p_{31})^2$.
The deuteron wave function $\Phi$ is defined by the
eq.(\ref{eq:M matrix triplet vertex}).
To preserve covariance it is necessary to approximate
the velocities of the virtual nucleon ($v$) and of the nucleon 'spectator' ($s$)
in the latter integral by the corresponding values in
the NN amplitudes taken out of the integral sign.
The expression for $F$ then becomes
\begin{eqnarray*}
& F^{\sigma _v \sigma _s}_
{\sigma _p \sigma _n}(s_{31},t_2) \simeq
{\displaystyle \frac{F_a}{2}\frac{
(e+\tilde{V_v}V_d)^{\sigma_v}_{\sigma_p}
(e+\tilde{V_s}V_d)^{\sigma_s}_{\sigma_n} +
(e+\tilde{V_v}V_d)^{\sigma_v}_{\sigma_n}
(e+\tilde{V_s}V_d)^{\sigma_s}_{\sigma_p}}
{2\sqrt{((V_v,V_d)+1)((V_s,V_d)+1)}}}
+ & \\ & {\displaystyle \frac{F_b}{2}\frac{
(e-\tilde{V_v}V_d)^{\sigma_v}_{\sigma_p}
(e-\tilde{V_s}V_d)^{\sigma_s}_{\sigma_n} +
(e-\tilde{V_v}V_d)^{\sigma_v}_{\sigma_n}
(e-\tilde{V_s}V_d)^{\sigma_s}_{\sigma_p}}
{2\sqrt{((V_v,V_d)-1)((V_s,V_d)-1)}}}\;,&
\end{eqnarray*}
where
\[V_v=\frac{p_d-p^0_s}{\sqrt{(p_d-p^0_s)^2}}\;,\;\;
V_s=\frac{p^0_s}{m}\;,\;\;V_d=\frac{p_d}{m_d} \]
are the four-velocities of the corresponding particles.
The complex functions $F_{a,b}$ are equal to (see eqs.(\ref{eq:a,b(u,w)}))
\[F_a=F_u-\frac{F_w}{\sqrt{8}}\;\;,\;\;
F_b=-\frac{3F_w}{\sqrt{8}}\;, \]
where
\[F_u (s_{31},t_2)\equiv \int \frac{d{\bf p}_ s}{(2\pi)^3 2E_s}
\frac{u(m_d^2,(p_d-p_s)^2)f(s_{31},(p_{31}-p_s)^2) }
{2m_f(m_f-m+i\varepsilon)}\;\]
and $F_w$ is determined analogously by the $w$-wave function. For
the pole expansion of the functions $u$ and $w$ (\ref{eq:u,w param})
a good analytical approximation of these integrals has been obtained
by J.M. Laget \cite{Laget1978}:
\begin{eqnarray}
&{\displaystyle F_u \simeq \frac {N_S} { q_{31}\sqrt{32 \pi m_d}} \{ }&\nonumber \\
&{\displaystyle 2\left[\arctan \frac{p_+}{\alpha }+ \arctan \frac{p_-}{\alpha }
-\sum_i c_i \left(\arctan \frac{p_+}{\alpha_i}+
\arctan \frac{p_-}{\alpha_i} \right) \right]} & \nonumber \\
&{\displaystyle +4\left[\sum_i c_i \;
\arctan \frac{q_{31}}{2(\alpha + \alpha _i)}- \;
\sum _{i,j}c_i c_j \; \arctan \frac{q_{31}}{2(\alpha_i + \alpha _j)}\right]- }&
\label{eq:Lag appr}\\
& {\displaystyle i\left[\ln \frac{p_-^2+\alpha ^2}{p_+^2+\alpha ^2}-
\sum _ic_i\ln \frac{p_-^2+ \alpha _i^2}{p_+^2+ \alpha _i^2}\right] \} }\;,&
\nonumber
\end{eqnarray}
where \[q_{31}=\frac{\sqrt{[(W_{31}+m_d)^2-t_2][(W_{31}-m_d)^2-t_2]}}{2m_d}\]
is the momentum of the 31-pair in the deuteron rest frame and
\[p_{\pm } \equiv \frac{q_{31}}{2} \pm \frac{s_{31}+m_d^2-t_2}{2W_{31}m_d}
\; \frac{\sqrt{s_{31}-4m^2}}{2}\;.\]
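These kinematic relations can be cross-checked numerically. Below is a minimal Python sketch (the masses and the sample point $(W_{31},t_2)$ are illustrative assumptions, not values from the analysis); note that $p_+ + p_- = q_{31}$ by construction:

```python
import math

# Nucleon and deuteron masses in GeV (assumed, illustrative values).
m, md = 0.939, 1.876

def q31_of(W31, t2):
    # Momentum of the 31-pair in the deuteron rest frame.
    return math.sqrt(((W31 + md)**2 - t2) * ((W31 - md)**2 - t2)) / (2.0 * md)

def p_plus_minus(W31, t2):
    # The arguments p_+ and p_- of the arctangents in Laget's approximation.
    s31 = W31**2
    q = q31_of(W31, t2)
    shift = (s31 + md**2 - t2) / (2.0 * W31 * md) * math.sqrt(s31 - 4.0 * m**2) / 2.0
    return q / 2.0 + shift, q / 2.0 - shift

W31, t2 = 2.0, -0.3   # sample point: W31 above the 2m threshold, spacelike t2
print(q31_of(W31, t2))
print(p_plus_minus(W31, t2))
```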
Replacing in eq.(\ref{eq:Lag appr})
the normalization constant $N_S$ by $N_D$
and the parameters $c_i$ by $d_i$
yields the expression for $F_w$. We tried the $F_w$ thus
obtained in the calculations in addition to $F_u$, but in the final
calculations it was omitted. The reason for
this lies in the closure sum rule, which for $F_w \not =0$
requires taking into account the off-shell behavior of the FSI nucleon-nucleon
amplitude also in the $^1D_3$-state, rather than only in the $^1S_3$-state as
we did.
The real part
of the factor $F$ in eq.(\ref{eq:Lag appr}) has two terms. The first one
is due to the contribution of off-shell states inside the triangle loop,
which is proven to cancel in the eikonal regime \cite{Harrington1969}.
The second term in the real part of (\ref{eq:Lag appr})
originates from the threshold form factor
(\ref{eq:off-shell ff}), and we dropped it for energies
of the finally interacting nucleons above 200 MeV.
Now let us return to the problem of the choice of the Fermi momentum $p_s^0$ in
eq.(\ref{eq:DS31}).
We have considered two candidates. The first one is
advocated as follows. The deuteron wave function in the integral
(\ref{eq:DS loop int}) drops quickly with decreasing virtual mass
$m_v^2=(p_d-p_s)^2$, thereby suppressing the contribution of the
${\bf p}_s$ corresponding to small virtual masses $m_v$.
It is then reasonable to test the $p_s^0$ giving the maximum value of $m_v$. In
the c.m. frame of the 31-pair the momenta placed on the ellipsoidal surface
(\ref{eq:ellipse}) have the form $(\frac{W_{13}}{2},q_{31}{\bf n})$, where
${\bf n}$ is a unit vector. Then
\[m_v^2=(p_d-p^0_s)^2=m_d^2+m^2-E_d W_{13}+2p_d q_{31} x\;,\]
where $(E_d,{\bf p}_d)$ is the deuteron four-momentum in this frame and $x$
is the cosine of the angle between ${\bf n}$ and ${\bf p}_d$.
Therefore the maximum of $m_v^2$ is achieved at $x=1$, and the
momentum of the nucleon $s$ should be directed along the deuteron momentum
in the 31-pair c.m. frame. We will denote the momentum
thus obtained by $p_s^{max}$.
The argumentation for the second candidate is less evident and consists of
the following. Since the states with small virtual masses still contribute
to the integral (\ref{eq:DS loop int}), we should not use the
Fermi momentum corresponding to the maximal $m_v$ but try a momentum corresponding
to some smaller value of the virtual mass $m_v$. We have chosen for this purpose
the momentum placed at the 'top' of the ellipsoidal surface (\ref{eq:ellipse}),
which corresponds to the maximum velocity of the nucleon $s$
in the laboratory frame.
We will call this momentum the optimal one and denote
it by $p_s^{opt}$. With the $z$-axis directed
along the momentum ${\bf p}_{31}$ its components are
\[p_s^{opt}=(E_s,0,0, \frac{|{\bf p}_{13}|}{2}+
q_{31}\frac{E_{13}}{W_{13}} )\;,\]
whereas the momentum $p_s^{max}$ has in this frame the components
\[p_s^{max}=(E_s,q_{31}n_x,q_{31}n_y, \frac{|{\bf p}_{13}|}{2}+
q_{31}\frac{E_{13}}{W_{13}} n_z)\;,\]
where ${\bf n}$ is the unit vector directed along
the deuteron momentum in the c.m. frame of the 31-pair.
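As a consistency check, both momenta should lie on the ellipsoidal surface, i.e. correspond to energy $W_{13}/2$ and momentum $q_{31}$ in the c.m. frame of the 13-pair. The Python sketch below verifies this for $p_s^{opt}$ by boosting it back to that frame (the nucleon mass and the values of $q_{31}$ and $|{\bf p}_{13}|$ are illustrative assumptions):

```python
import math

m = 0.939                      # nucleon mass, GeV (assumed)
q31, p13 = 0.25, 1.5           # pair-frame momentum and lab pair momentum (illustrative)

W13 = 2.0 * math.sqrt(m**2 + q31**2)    # invariant mass of the 13-pair
E13 = math.sqrt(W13**2 + p13**2)        # its lab energy

# z-component of p_s^opt in the lab (z-axis along the pair momentum):
pz = p13 / 2.0 + q31 * E13 / W13
Es = math.sqrt(m**2 + pz**2)

# Boost back to the 13-pair c.m. frame.
beta, gamma = p13 / E13, E13 / W13
E_cm = gamma * (Es - beta * pz)
pz_cm = gamma * (pz - beta * Es)
print(E_cm, W13 / 2.0)          # both equal: the momentum sits on the ellipsoid
print(pz_cm, q31)
```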
The comparison with experiment of the calculations
performed with both Fermi momenta
is presented in Section 5 and definitely testifies
in favor of the optimal Fermi momentum $p_s^{opt}$.
\vspace{1cm}
\section{$\Delta$ -excitation contribution}
\begin{figure}[t]
\unitlength=.80mm
\begin{center}
\begin{picture}(114.00,50.00)(0,95)
\put(14.00,139.00) {\line(1,0){100}}
\put(14.00,135.00) {\line(1,0){30}}
\put(44.00,135.00) {\line(2,-3){10}}
\put(54.00,118.00) {\line(1,0){60}}
\put(14.00, 98.00) {\line(1,0){100}}
\put(54.00, 98.00) {\line(0,3){3}}
\put(54.00,103.00) {\line(0,3){3}}
\put(54.00,108.00) {\line(0,3){3}}
\put(54.00,113.00) {\line(0,3){3}}
\put(54.00,118.00) {\line(0,3){2}}
\put(54.00,116.00) {\line(3,0){3}}
\put(59.00,116.00) {\line(3,0){3}}
\put(64.00,116.00) {\line(3,0){3}}
\put(69.00,116.00) {\line(3,0){3}}
\put(74.00,116.00) {\line(3,0){3}}
\put(77.00,116.00) {\line(0,3){3}}
\put(77.00,121.00) {\line(0,3){3}}
\put(77.00,126.00) {\line(0,3){3}}
\put(77.00,131.00) {\line(0,3){3}}
\put(77.00,136.00) {\line(0,3){3}}
\put(62.00,144.00){\makebox(0,0)[cc]{$\vec{p}_s,\;\tau_s,\;\sigma_s$}}
\put(104.00,144.00){\makebox(0,0)[lc]{$p_3,\;\tau_3,\;\sigma_3$ (undetected)}}
\put(104.00,102.00){\makebox(0,0)[lc]{$p_2,\;\tau_2,\;\sigma_2$ (RS)}}
\put(104.00,122.00){\makebox(0,0)[lc]{$p_1,\;\tau_1,\;\sigma_1$ (SPES)}}
\end{picture}
\end{center}
\caption{$\Delta$ excitation mechanism}
\label{fig:Delta excitation}
\end{figure}
The contribution corresponding to the graph shown in
Fig. \ref{fig:Delta excitation} is equal to
\begin{eqnarray}
{{{\rm M}^{2(31)}_{\Delta}}^{\tau _1 \tau _2 \tau _3}_{\tau _0}}^
{;\sigma _1 \sigma _2 \sigma _3}_
{;\sigma _0 \sigma _p \sigma _n}(p_1,p_2,p_3;p_0,p_d)= \nonumber \\
i\frac{{\Gamma ^t}^{\tau _2 ;\sigma_2}_{\tau _0;\sigma _0}(p_2,q;p_0)
{{\rm M}^{\tau _1 \tau _3}_t}^{;\sigma _1 \sigma _3}_
{;\sigma _p \sigma _n}(p_1,p_3;q,p_d)}{q^2-\mu^2}
\;,\;\;\;\;q=p_0-p_2\;,
\label{eq:delta contr}
\end{eqnarray}
where $\mu$ is the pion mass.
The $\pi$NN-vertex in the eq.(\ref{eq:delta contr}) is equal to
\[{\Gamma ^t}^{\tau _2 ;\sigma _2}_{\tau _0; \sigma _0}
(p_2,q;p_0) ={e^t}^{\tau _2}_{\tau _0}
G^{\sigma _2}_{\sigma _0}(p_2,q;p_0)\;,\]
where $t$ is the isovector index of the pion and
\[e^{+1}=\left(\begin{array}{cc}
0 & 0 \\
1 & 0
\end{array}\right)\;, \;\;
e^{-1}=\left(\begin{array}{cc}
0 & 1 \\
0 & 0
\end{array}\right)\;, \;\;
e^0=\frac{1}{\sqrt{2}}\left(\begin{array}{cc}
1 & 0 \\
0 & -1
\end{array}\right)\;. \]
In the Stapp formalism the spatial part of the $\pi$NN-vertex with the
virtual pion reads (see eq.(\ref{eq:indless pseuds vertex}) of the Appendix)
\[G^{\sigma _2}_{\sigma _0}(p_2,q;p_0)=f(q^2) g_\pi
m (e-\tilde{V}_2 V_0)^{\sigma_2}_{\sigma_0}\;,\]
where $g_{\pi} \simeq 13.6$ is the $\pi$NN coupling constant, and
$f(q^2)$ is the pion form factor, for which we took the monopole
representation
\begin{equation}
f(q^2) =\frac{m_\pi ^2-\Lambda
^2}{q^2-\Lambda ^2}
\label{eq:piNN ff}
\end{equation}
with the cut-off $\Lambda =1.0$ GeV.
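A minimal numerical sketch of this monopole form factor (the pion mass value used below is an assumption of the sketch): it equals unity at the pion pole $q^2=m_\pi^2$ and suppresses spacelike virtual pions:

```python
# Monopole piNN form factor, masses in GeV (m_pi = 0.138 assumed).
m_pi, Lam = 0.138, 1.0

def f(q2):
    return (m_pi**2 - Lam**2) / (q2 - Lam**2)

print(f(m_pi**2))   # 1.0 exactly: normalization at the pion pole
print(f(-0.5))      # < 1: suppression of a spacelike virtual pion
```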
The spin- and isospin-dependent $\pi d\rightarrow$NN
amplitude in eq.(\ref{eq:delta contr}) is
\[{{\rm M}^{\tau _1 \tau _3}_t}^{;\sigma _1 \sigma _3}_
{;\sigma _p \sigma _n}(p_1,p_3;q,p_d)=
\epsilon ^{\tau _1 \tau _3}_t
M^{\sigma _1 \sigma _3}_{\sigma _p \sigma _n}(p_1,p_3;q,p_d)\;,\]
where
\[\epsilon _{+1}=\left( \begin{array}{cc}
1 & 0 \\
0 & 0
\end{array} \right)\;\; ,\; \;
\epsilon _{-1}=\left( \begin{array}{cc}
0 & 0 \\
0 & -1
\end{array}\right)\;\; ,\; \;
\epsilon _{0}=-\frac{1}{\sqrt{2}}\left( \begin{array}{cc}
0 & 1 \\
1 & 0
\end{array}\right)\;.\]
The amplitudes of the physical reactions are then
\[{\rm M}_{\pi ^{+}d \rightarrow pp}=M\;\; ,\;\;
{\rm M}_{\pi ^{-}d \rightarrow nn}=-M\;\; ,\;\;
{\rm M}_{\pi ^{0}d \rightarrow np}=-\frac{M}{\sqrt{2}}\;. \]
To derive the M-function of the $\pi d\rightarrow$NN
reaction we have used the $S$-matrix elements obtained by J.M.Laget \cite{Laget1981}.
In this approach the one-loop box diagrams with the
N$\Delta$ intermediate state are calculated numerically, including
the intermediate $\pi$N scattering in the S, P and D waves parametrized by
their phase shifts. However, to avoid double counting with the DS term
we have excluded
the $P_{11}$-wave
(the nucleon pole in the $\pi$N amplitude is part of the DS term). Finally,
the $\rho$-exchange was included in the
\mbox{$\pi d\rightarrow$NN} amplitude. The details are given in the Appendix of
ref.\cite{Laget1981}. These calculations have been performed
in the deuteron rest frame.
Thus, having at our disposal the $S$-matrix elements
\[S^{\sigma_1\sigma_3}_{\sigma_p\sigma_n}(p_{1(d)},p_{3(d)};q_{(d)},m_d)\;,\]
where the subscript $(d)$ means that the momenta are taken in the deuteron rest
frame, we have to obtain the M-function in an arbitrary frame. The only
way in this case is the straightforward application of the common recipe
prescribed by eq.(\ref{eq:S=invvfMvi}) of the Appendix. First we
derive the M-function in the deuteron rest frame:
\begin{equation}
M^{\sigma_1\sigma_3}_{\sigma_p\sigma_n}(p_{1(d)},p_{3(d)};q_{(d)},m_d)=
{v_1}^{\sigma_1}_a {v_3}^{\sigma_3}_b
S^{a\;\;b}_{\sigma_p\sigma_n}(p_{1(d)},p_{3(d)};q_{(d)},m_d)\;.
\label{eq:M in d-rest}
\end{equation}
The boost matrix $v^a_b$ of the nucleon from the rest to the four-velocity
$(V_0,{\bf k}\sqrt{V_0^2-1})$, where ${\bf k}$ is the unit vector along the
momentum, is determined by the eq.(\ref{eq:boost}).
Having obtained the M-matrix in the deuteron rest frame, we applied to
eq.(\ref{eq:M in d-rest}) the boost transformation ${v_d}^a_b$ corresponding
to the deuteron moving along the $z$-axis with the four-velocity
$(V_{d0},0,0,\sqrt{V_{d0}^2-1})$, so that the final M-function reads
(see eq.(\ref{eq:Ml=zMinvz}))
\[M^{\sigma_1\sigma_3}_{\sigma_p\sigma_n}(p_1,p_3;q,p_d)=
{v_d}^{\sigma_1}_a {v_d}^{\sigma_3}_b M^{a\;\;b}_{g\;\;h}
(p_{1(d)},p_{3(d)};q_{(d)},m_d)
{v^{-1}_d}^g_{\sigma_p}{v^{-1}_d}^h_{\sigma_n} \;.\]
\vspace{1cm}
\section{Deuteron density matrix and observables}
Let us first recall the form of the density matrix of a spinor particle
in the Stapp formalism \cite{Stapp1983}:
\begin{equation}
\rho^{a\bar{b}}=\frac{1}{2}(V^\mu+S^\mu)\sigma_\mu^{a\bar{b}}\;,
\label{eq:nucl den matr}
\end{equation}
where $a,b=1,2$ are the spinor indices, $V^\mu$ is the four-velocity
and $S^\mu$ is the four-vector of polarization of the particle, which is
orthogonal to $V$: $(V,S)=0$. In the index-less form it reads
\[\tilde{\rho}=\frac{1}{2}(V^\mu+S^\mu)\tilde{\sigma}_\mu=
\frac{1}{2}(\tilde{V}+\tilde{S})\;.\]
The analog of the expression (\ref{eq:nucl den matr}) for the vector particle is
\begin{equation}
\rho ^ {\sigma_p \bar{\sigma}_p\;\;\sigma_n \bar{\sigma}_n}=
\rho^{\mu \nu} \sigma_\mu^{\sigma_p \bar{\sigma}_p}
\sigma_\nu^{\sigma_n \bar{\sigma}_n}\;,
\label{eq:ro in dir prod}
\end{equation}
where
\[\rho ^ {\mu \nu}=\frac{1}{12}\left[-g^{\mu \nu}+4V^\mu V^\nu +
2\sqrt{3}(S^\mu V^\nu +S^\nu V^\mu)+
4\sqrt{\frac{3}{2}}T^{\mu \nu}\right]\;.\]
Here $S$ and $T$ are the vector and tensor polarizations with the properties
\[(S,V)=0\;,\; T^{\mu \nu}=T^{\nu \mu}\;,\; V_\mu T^{\mu \nu}=0\;,\;
T^\mu _\mu=0\;.\]
In the index-less form eq.(\ref{eq:ro in dir prod}) can be written as
\[\tilde{\rho}=\rho^{\mu \nu} \tilde{\sigma}_\mu \otimes \tilde{\sigma}_\nu\;.
\]
If the quantization axis is directed along the $y$-axis then
in the c.m. frame we have
\begin{eqnarray*}
S=(0,0,\frac{\sqrt{3}}{2}p_y,0)\;, \nonumber \\
T=-\frac{p_{yy}}{2\sqrt{6}}\left(
\begin{array}{cccc}
0 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & -2 & 0 \\
0 & 0 & 0 & 1
\end{array} \right)\;,
\end{eqnarray*}
where $p_y \equiv n_+-n_-$ and $p_{yy}\equiv n_++n_--2n_0$. The
statistical weights $n_+,n_-,n_0$ of the states
with the definite projections of the spin
on the quantization axis are normalized so that
$n_++n_0+n_-=1$. Then $Tr(\rho^2)=n_+^2+n_0^2+n_-^2$.
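For illustration, a few lines of Python relating assumed statistical weights to $p_y$, $p_{yy}$ and $Tr(\rho^2)$ (the weights are arbitrary sample values, normalized to unity):

```python
# Sample spin-1 statistical weights, n_+ + n_0 + n_- = 1 (illustrative).
n_plus, n_zero, n_minus = 0.5, 0.3, 0.2

p_y = n_plus - n_minus                        # vector polarization
p_yy = n_plus + n_minus - 2.0 * n_zero        # tensor polarization
purity = n_plus**2 + n_zero**2 + n_minus**2   # Tr(rho^2)
print(p_y, p_yy, purity)
```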
If the particle moves along the $z$-axis with the velocity
$V=(V_0,0,0,V_z)$, then $S$ does not change, but $T$ becomes
\[T_{\mu \nu}=-\frac{p_{yy}}{2\sqrt{6}}\left(
\begin{array}{cccc}
V_z^2 & 0 & 0 & V_0V_z \\
0 & 1 & 0 & 0 \\
0 & 0 & -2 & 0 \\
V_0V_z & 0 & 0 & V_0^2
\end{array} \right) \equiv -\frac{p_{yy}}{2\sqrt{6}}t_{\mu \nu}\;.\]
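The properties $V_\mu T^{\mu \nu}=0$ and $T^\mu_\mu=0$ can be verified numerically for the boosted tensor. In the sketch below the overall factor $-p_{yy}/(2\sqrt{6})$ is dropped and the components of $t$ are treated as contravariant (an assumption of the sketch), with the index of $V$ lowered by the metric:

```python
import numpy as np

Vz = 0.8
V0 = np.sqrt(1.0 + Vz**2)             # four-velocity normalization V^2 = 1
g = np.diag([1.0, -1.0, -1.0, -1.0])  # metric tensor

# Boosted tensor t^{mu nu} (overall factor -p_yy/(2*sqrt(6)) dropped).
t = np.array([[Vz**2,   0.0, 0.0,  V0 * Vz],
              [0.0,     1.0, 0.0,  0.0],
              [0.0,     0.0, -2.0, 0.0],
              [V0 * Vz, 0.0, 0.0,  V0**2]])

V_up = np.array([V0, 0.0, 0.0, Vz])
V_dn = g @ V_up                       # V_mu

print(V_dn @ t)        # V_mu t^{mu nu}: numerically the zero four-vector
print((g * t).sum())   # g_{mu nu} t^{mu nu}: numerically zero trace
```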
We can then represent $\tilde{\rho}_d$ as
\begin{equation}
\tilde{\rho}_d=
\tilde{\rho}_0+\frac{3}{2}p_y\tilde{\rho}_y+\frac{1}{2}p_{yy}\tilde{\rho}_{yy}\;,
\label{eq:r0+ry+ryy}
\end{equation}
where
\begin{equation}
\tilde{\rho}_0 \equiv \frac{1}{3}\tilde{V} \otimes \tilde{V} -
\frac{1}{12}\tilde{\sigma}_\mu \otimes \tilde{\sigma}^\mu
\label{eq:r0}
\end{equation}
is the unpolarized density matrix and
\begin{eqnarray*}
\tilde{\rho}_y \equiv -\frac{1}{3}(\tilde{\sigma}_y \otimes \tilde{V} +
\tilde{V} \otimes \tilde{\sigma}_y)\;, \\
\tilde{\rho}_{yy} \equiv -\frac{1}{6} t_{\mu \nu} \tilde{\sigma}^\mu \otimes
\tilde{\sigma}^\nu
\end{eqnarray*}
are the vector and the tensor polarization matrices, respectively.
If the (target) proton
is unpolarized, the initial density matrix of the $dp$ system is equal to
\[\tilde{\rho}_i=\frac{\tilde{V}_0}{2} \otimes \tilde{\rho}_d\]
and the vector and tensor analyzing powers are equal to
\begin{eqnarray}
\sigma_0 A_y=\frac{Tr(M \;\;\tilde{V}_0 \otimes \tilde{\rho}_y\;\;
M^\dagger \;\;V_1 \otimes V_2 \otimes V_3)}{2}\;,
\label{eq:s0ay} \\
\sigma_0 A_{yy}=\frac{Tr(M \;\;\tilde{V}_0 \otimes \tilde{\rho}_{yy}\;\;
M^\dagger\;\; V_1 \otimes V_2 \otimes V_3)}{2}\;,
\label{eq:s0ayy}
\end{eqnarray}
where
\[\sigma_0 \equiv \frac{Tr(M \;\;\tilde{V}_0 \otimes \tilde{\rho}_0\;\;
M^\dagger\;\; V_1 \otimes V_2 \otimes V_3)}{2}\; \]
is the unpolarized cross section.
The polarization of the fast outgoing nucleon in the $y$ direction is
determined by the equation
\[\sigma P_{1y}=Tr(M \;\; \tilde{\rho}_i\;\; M^\dagger \;\;\sigma^y
\otimes V_2 \otimes V_3)\;, \]
where the polarized cross section is equal to
\[\sigma =Tr(M \;\;\tilde{\rho}_i\;\; M^\dagger \;\;V_1
\otimes V_2 \otimes V_3)=
\sigma_0(1+\frac{3}{2}p_yA_y+\frac{1}{2}p_{yy}A_{yy})\;.\]
The last equation follows from
the eqs.(\ref{eq:r0+ry+ryy},\ref{eq:s0ay},\ref{eq:s0ayy}).
Introducing the polarization $P_0$ of this nucleon for the unpolarized
deuteron and the depolarization parameter $D_v$ as follows
\begin{eqnarray*}
\sigma_0 P_0 = \frac{Tr(M \;\;\tilde{V}_0 \otimes \tilde{\rho}_0\;\;
M^\dagger\;\; \sigma^y \otimes V_2 \otimes V_3)}{2}\;, \\
\sigma_0 D_v = \frac{Tr(M \;\;\tilde{V}_0 \otimes \tilde{\rho}_y\;\;
M^\dagger\;\; \sigma^y \otimes V_2 \otimes V_3)}{2}
\end{eqnarray*}
we can write the polarization $P_{1y}$ for a vector-polarized
deuteron beam in the following form
\begin{equation}
P_{1y}=\frac{P_0+\frac{3}{2}p_yD_v}
{1+\frac{3}{2}p_yA_y}\;.
\label{eq:vec pol d p1y}
\end{equation}
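A one-line numerical sketch of this relation (all input numbers below are purely illustrative); for an unpolarized beam, $p_y=0$, it reduces to $P_{1y}=P_0$:

```python
# Fast-proton polarization for a vector-polarized deuteron beam.
def p1y(P0, Dv, Ay, py):
    return (P0 + 1.5 * py * Dv) / (1.0 + 1.5 * py * Ay)

print(p1y(P0=0.3, Dv=0.6, Ay=0.1, py=0.0))   # unpolarized beam: P1y = P0
print(p1y(P0=0.3, Dv=0.6, Ay=0.1, py=0.4))
```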
\vspace{1cm}
\section{Calculation results and discussion}
Let us start with the main parameters of the detection system of
the experiment.
The polarized deuteron beam of the Saturne accelerator was incident on a 4 cm thick
liquid hydrogen target.
Fast protons from the reaction ($p_1$) were selected at the
scattering angle $\Theta_1=18.0 \pm 0.5^\circ$ by the magnetic spectrometer
SPES-4 in coincidence with the recoil protons ($p_2$) detected at the
angle $\Theta_2=57.0^\circ$ ($\pm 4.25^\circ$ in the scattering plane and
$\pm 8^\circ$ in the vertical plane) with a mosaic of $E$ and $\Delta E$
scintillation counters. Two multiwire proportional chambers at 1.5
and 3 m from the target provided the recoil proton track position. Six central
momentum settings for the proton detected in SPES-4
($p_{10}=$ 1.6, 1.7, 1.8, 1.9, 2.0, 2.05 GeV/c) were studied. The
polarimeter POMME, located behind the final focal plane of SPES-4, was used to
measure the polarization of the fast protons.
At the chosen $\Theta_1$ and for a given momentum $p_1$ of the fast proton, two
kinematic solutions are possible for the recoil proton energy $T_2$,
which according to our definition are the high energy (HES) and the low energy
(LES) solutions.
The neutron spectator momenta in the deuteron rest frame, denoted by $q$,
ranged from 30 to 440 MeV/c.
Thus the calculation procedure should include scanning over the acceptance of
the detection system. By scanning over the allowed phase space of the detectors,
we intended to obtain a realistic calculation result which can be directly
compared to the data. This scanning inevitably calls for
Monte-Carlo event generation, because the number of phase-space
cells becomes huge when the cell size is defined by the
resolution of the measurement.
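Such an acceptance scan can be sketched as follows (the uniform sampling, the event record and the per-setting statistics are illustrative assumptions; the angular ranges and the momentum settings are those quoted above):

```python
import random

random.seed(1)
P10_SETTINGS = [1.6, 1.7, 1.8, 1.9, 2.0, 2.05]   # SPES-4 central momenta, GeV/c

def generate_event(p10):
    # Draw one event uniformly inside the quoted detector acceptance.
    return {
        "p10": p10,
        "theta1": random.uniform(18.0 - 0.5, 18.0 + 0.5),     # SPES-4 arm, deg
        "theta2": random.uniform(57.0 - 4.25, 57.0 + 4.25),   # recoil arm, in plane
        "phi2": random.uniform(-8.0, 8.0),                    # recoil arm, vertical
    }

# 1000 trial events per momentum setting; model weights would then be
# accumulated over these events to form the averaged observables.
events = [generate_event(p) for p in P10_SETTINGS for _ in range(1000)]
print(len(events))
```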
\begin{figure}
\begin{center}
\psfig{figure=fig1c.ps,bbllx=20pt,bblly=145pt,bburx=575pt,bbury=700pt,width=7cm}
\end{center}
\caption{The $(T_{ij},t_{di})$ and $(T_{ij},t_{0i})$ distributions
for $p_{10}=$ 1.6 GeV/c and HES.}
\label{fig:Tij td0i}
\end{figure}
We would now like to show, taking
$p_{10}=$ 1.6 GeV/c and the HES as an example, some kinematic distributions
characterizing the different mechanisms of deuteron break-up in
this experiment.
In Fig. \ref{fig:Tij td0i} the two-dimensional
distributions of the kinetic energies
$T_{ij}=(s_{ij}-4m^2)/2m$ of the $i,j$ pairs of the final nucleons versus the
two momentum transfers,
$t_{di}=(p_d-p_i)^2$ and $t_{0i}=(p_0-p_i)^2$, are shown. Note that the $t_{di}$ are
equal to the squared masses of the virtual nucleon in the nucleon pole graphs.
It is seen that, of the three final nucleons
considered as the spectator in these pole graphs,
the neutron ($p_3$) provides the maximal
masses of the exchanged nucleon. The pole
graph with $p_2$ as the spectator gives negative squared masses, and we
have excluded this graph from the calculation. The distributions of
$(T_{ij},t_{0i})$ characterize the graphs of the DS mechanism.
$T_{31}$ ranges from 30 to 200 MeV, and therefore the DS graph 2(31) is the
typical FSI graph. The two other pairs
of final nucleons, and especially the two protons $p_1$ and $p_2$, have much
higher relative energies. However,
applying the Glauber approach to the calculation of the DS graphs with these
pairs being rescattered \cite{Wallace1972,PerdrisatPunjabi1990} is not
justified, because the momentum transfers $t_{03}$ and $t_{01}$ for these graphs
are very high.
Thus, to calculate the DS graphs, we have applied the expression
(\ref{eq:DS31}) and two others obtained from it by cyclic permutation of the
final nucleons.
\begin{figure}
\begin{center}
\psfig{figure=dsch.ps,bbllx=20pt,bblly=145pt,bburx=575pt,bbury=700pt,width=14cm}
\end{center}
\caption{Polarization observables for different choices of the Fermi momentum.
The dotted line presents the calculations
with $p^{max}_s$ in the DS graph 1(23) and $p^{opt}_s$ in the DS graph 3(12).
The dashed line
corresponds to the choice of $p^{opt}_s$ for the DS graph 1(23) and $p^{max}_s$
for the DS graph 3(12). The solid line is obtained with $p^{opt}_s$ in both
these graphs. The dash-dotted line presents
the FSI-only prediction (DS graph 2(31)).}
\label{fig:com of fm}
\end{figure}
We would now like to return to the problem mentioned in
\mbox{Section 2}: the choice of the Fermi momentum $p^0_s$
in eq.(\ref{eq:DS31}).
The dotted line in Fig. \ref{fig:com of fm} presents
the polarization observables calculated
with $p^{max}_s$ in the DS graph 1(23) and with $p^{opt}_s$
in the DS graph 3(12)
(the sensitivity of the FSI contribution to the choice of Fermi
momentum is weak, and we fixed it to be $p^{max}_s$). The dashed line
corresponds to the choice of $p^{opt}_s$ for the DS graph 1(23) and
of $p^{max}_s$
for the DS graph 3(12). The solid line is obtained with $p^{opt}_s$ in both
these graphs. The improvement in the description of the data in the
latter case is evident. It is important that neglecting these graphs
results in a significant worsening of the data description.
The dash-dotted line presents the FSI-only prediction, and
we see that it does not provide the suppression of the tensor analyzing power
at high $q$ observed in the experiment. We can summarize that, besides
the FSI graph, both the 1(23) and 3(12) DS graphs are required
for the data description, and they should
be calculated with $p^{opt}_s$ as the Fermi momentum. This is the choice
of the DS model which we have applied in the further calculations.
\begin{figure}
\begin{center}
\epsfig{file=ayyh.eps,height=14cm}
\end{center}
\caption{The tensor analyzing power $A_{yy}$ for the HES.
The experimental points are
presented
for the different values of the central momentum detected in the magnetic
spectrometer, as a
function of the outgoing neutron momentum in the deuteron rest frame.
The dash-dotted line is the IA,
the dashed line includes in addition the DS contribution, and the continuous line is
the full calculation including in addition the virtual $\Delta$.}
\label{fig:Ayy}
\end{figure}
\begin{figure}
\begin{center}
\epsfig{file=ayyl.eps,height=16cm}
\end{center}
\caption{The tensor analyzing power $A_{yy}$ for the LES.
Same notations as in Fig. \ref{fig:Ayy}. }
\label{fig:leAyy}
\end{figure}
The measured tensor analyzing powers $A_{yy}$ and the calculation
results with the Bonn deuteron wave function are shown
in Figs. \ref{fig:Ayy} and \ref{fig:leAyy}.
The main feature of the experimental data is the strong deviation from
the IA (dash-dotted line) for $q \geq 0.2$ GeV/c. The full theory including
the $\Delta$-excitation graphs (full line) gives a
satisfactory explanation of this deviation, the DS graphs (dashed
line) playing the decisive role. One might expect more noticeable
$\Delta$ effects, bearing in mind that the laboratory kinetic energies of the
23-pair of the final nucleons range from 0.7 to 0.9 GeV
(see Fig. \ref{fig:Tij td0i}),
which is exactly the region of $\Delta$ dominance in the $\pi d\rightarrow$NN
amplitude. Yet the momentum transfer $t_{01}$ from the target proton
to the fast proton detected by SPES-4 is very high, and the pion form
factor (\ref{eq:piNN ff}) reduces the overall contribution of the graph
shown in Fig. \ref{fig:Delta excitation} (with, of course, correspondingly
interchanged final nucleons).
\begin{figure}
\begin{center}
\epsfig{file=ayh.eps,height=16cm}
\end{center}
\caption{The vector analyzing power $A_{y}$ for the HES.
Same notations as in Fig. \ref{fig:Ayy}. }
\label{fig:Ay}
\end{figure}
\begin{figure}
\begin{center}
\epsfig{file=ayl.eps,height=16cm}
\end{center}
\caption{The vector analyzing power $A_{y}$ for the LES.
Same notations as in Fig. \ref{fig:Ayy}. }
\label{fig:leAy}
\end{figure}
The vector analyzing power $A_{y}$ results,
shown in Figs. \ref{fig:Ay} and \ref{fig:leAy},
exhibit a similar tendency: a significant
correction to the IA above 0.25 GeV/c by the full calculations.
A surprisingly good agreement of the full calculations with the
experimental points, bearing in mind the very small statistical
errors of the measured values, is achieved
at all SPES-4 settings for
both the high and low energy solutions.
\begin{figure}
\begin{center}
\epsfig{file=p0h.eps,height=16cm}
\end{center}
\caption{The forward proton polarization $P_0$ for the HES.
The notations of the curves are
the same as in Fig. \ref{fig:Ayy}. The calculations are done consistently,
as for $A_y$ and $A_{yy}$, for each setting of the spectrometer, while the data
are summed.}
\label{fig:P0}
\end{figure}
\begin{figure}
\begin{center}
\epsfig{file=p0l.eps,height=16cm}
\end{center}
\caption{The forward proton polarization $P_0$ for the LES.
The notations of the curves are
the same as in Fig. \ref{fig:Ayy}. }
\label{fig:leP0}
\end{figure}
\begin{figure}
\begin{center}
\epsfig{file=dvh.eps,height=8cm} \epsfig{file=dvl.eps,height=8cm}
\end{center}
\caption{The depolarization parameter $D_v$ for the HES (right picture)
and the LES (left picture). Same notations as in Fig.
\ref{fig:Ayy}.}
\label{fig:Dv}
\end{figure}
For the polarization of the fast proton, the data are averaged over the SPES-4
settings to gain statistics; the calculations, however, were
performed for each setting.
The experimental data and the calculations are presented in terms of
the proton polarization $P_0$ for the unpolarized
deuteron and the depolarization parameter $D_v$
(see eq.(\ref{eq:vec pol d p1y})) and are shown
in Figs. \ref{fig:P0}-\ref{fig:leP0}
and \ref{fig:Dv}, respectively. For the HES a good description of $P_0$ is
obtained with the full diagram calculations,
whereas this is not the case for the LES. For $D_v$,
the IA is even closer to the data than the
full calculations. However, considering the large error bars for $D_v$, one
can say that there is no decisive discrimination between the two calculations.
\begin{figure}
\begin{center}
\psfig{figure=pbc.ps,bbllx=20pt,bblly=145pt,bburx=575pt,bbury=700pt,width=16cm}
\end{center}
\caption{Comparison of the predictions of the Bonn (full line) and
Paris (dashed line) deuteron wave functions.}
\label{fig:BoPaCom}
\end{figure}
The above results were obtained with the Bonn deuteron wave
function \cite{Bonn}. Returning to the initial idea of this
experiment, to discriminate between different deuteron wave functions, we
have also tried the widely used Paris deuteron wave function
\cite{Paris}. In Fig. \ref{fig:BoPaCom} the results of the
calculations for both wave functions at the 1.8 and 2.0 GeV/c
settings of SPES-4 are shown. It is seen that the dependence of the full
calculations on the deuteron wave function is weak.
To conclude, we would like to recall that the main task of this experiment
was the investigation of the deuteron structure in the high internal
momentum region. The polarization data were expected to bring information
on the $S$- and $D$-components of the deuteron wave function,
which would be possible only if
the IA dominated the reaction mechanism. The underlying idea of this
experiment, as of analogous ones, is that deviations from
the IA, corrected for other presumably small contributing conventional
mechanisms, could be interpreted as revealing quark degrees of freedom. In fact,
the polarization parameters, measured for the first time in this exclusive
experiment with high accuracy and in large numbers, deviate strongly from the
IA at high internal momenta. However, estimates of the other
traditional reaction mechanisms,
the nucleon-nucleon double scattering being the most significant,
have been tried and show a qualitative agreement with the experimental
data. This agreement between the data and the theory is thought to imply that
possible new degrees of freedom in the deuteron structure contribute much
less than the hadronic ones. Still, a definite refinement of the model is
necessary for an adequate description of the polarized deuteron break-up
data. The investigation of the short-range deuteron wave function, if it is at all
possible in deuteron break-up experiments with hadron probes, will
in our opinion be the next step, to be taken only after all
discrepancies in the theoretical description of the polarization observables
are overcome.
\section*{Acknowledgements}
The author thanks S.Belostotski, A.Boudard, J.-M.Laget and V.Nikulin
for fruitful discussions and cooperation during this work. He is also
very grateful to A.Boudard for the warm
hospitality at the Service de Physique Nucleaire,
Centre d'Etudes de Saclay, where the main part of this work was done.
\vspace{1cm}
\section*{Appendix. M-functions}
Let us consider a process with $n_f$ and $n_i$ spinor particles in the final
and initial channels, respectively. We will denote the corresponding
amplitude with its indices as \[ {{\rm M}^{\sigma_{1_f} \ldots \sigma_{n_f}}}_
{\sigma_{1_i} \ldots \sigma_{n_i}}\;.\]
Let $v_i$ be the boost of the $i$-th spinor particle moving with
the velocity $V_i$. Then the connection between the
$S$-matrix and the M-function is
\begin{equation}
S=v_{1_f}^{-1}\otimes \ldots \otimes v_{n_f}^{-1}\;{\rm M}\;
v_{1_i} \otimes \ldots \otimes v_{n_i}\;.
\label{eq:S=invvfMvi}
\end{equation}
The boost matrix $v^a_b$ from the rest to the 4-velocity
$(V_0,{\bf k}\sqrt{V_0^2-1})$, where ${\bf k}$ is the unit vector along
the momentum, is equal to
\begin{equation}
v^a_b=\left(\begin{array}{cc}
v_0+v_3 & v_1-iv_2 \\
v_1+iv_2 & v_0-v_3 \end{array} \right)\;,
\label{eq:boost}
\end{equation}
where
\[v=\frac{1}{\sqrt{2}}(\sqrt{V_0+1},{\bf k}\sqrt{V_0-1})\;.\]
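This boost matrix can be checked numerically: it is Hermitian with unit determinant, and $v v^\dagger$ reproduces the four-velocity contracted with the $\sigma$-matrices. A small Python sketch (the test value of $V_0$, the direction ${\bf k}$ and the convention $V^\mu\sigma_\mu = V^0 + \vec{V}\cdot\vec{\sigma}$ are assumptions of the sketch):

```python
import numpy as np

sig = [np.eye(2),
       np.array([[0, 1], [1, 0]], complex),
       np.array([[0, -1j], [1j, 0]]),
       np.array([[1, 0], [0, -1]], complex)]

def boost(V0, k):
    # Boost matrix v^a_b for the four-velocity (V0, k*sqrt(V0^2-1)).
    v0 = np.sqrt((V0 + 1.0) / 2.0)
    vv = np.sqrt((V0 - 1.0) / 2.0) * np.asarray(k)
    return np.array([[v0 + vv[2], vv[0] - 1j * vv[1]],
                     [vv[0] + 1j * vv[1], v0 - vv[2]]])

V0, k = 1.7, np.array([0.0, 0.0, 1.0])
v = boost(V0, k)
Vmat = V0 * sig[0] + np.sqrt(V0**2 - 1.0) * sum(k[i] * sig[i + 1] for i in range(3))
print(np.allclose(v @ v.conj().T, Vmat))   # v v^dagger reproduces the four-velocity
print(np.isclose(np.linalg.det(v), 1.0))   # unimodular
```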
With respect to Lorentz transformations, represented by the
unimodular matrices $z$
\[z^a_b=\left(\begin{array}{cc}
z_0+iz_3 & iz_1+z_2 \\
iz_1-z_2 & z_0-iz_3 \end{array} \right)\;,\]
where $z_{0,1,2,3}$ are the complex numbers such that $z_0^2+z_1^2+z_2^2+z_3^2=1$,
the M-functions behave as the operators in the direct product of spinor spaces
\begin{equation}
{\rm M} \rightarrow z \otimes \ldots \otimes z\;{\rm M}\;
z^{-1}\otimes \ldots \otimes z^{-1}\;.
\label{eq:Ml=zMinvz}
\end{equation}
The simplicity of the transformation (\ref{eq:Ml=zMinvz})
is the main advantage of the
M-functions. It allows one to build the M-functions from
products $\tilde{V}_iV_j$ of two-by-two matrices
$\tilde{V}_i \equiv V_i^\mu \tilde{ \sigma_\mu}$ and
$V_j \equiv V_j^\mu \sigma_\mu$, where the $V^\mu _i$ are the
four-velocities of the scattered spinor particles, completed by the
four-velocity $V$ of the whole system in an arbitrary frame. The matrices
$\sigma^\mu=(1, \vec{\sigma})$ and $\tilde{\sigma}^\mu=(1, -\vec{\sigma})$,
where $ \vec{\sigma}$ are the standard Pauli matrices, have different
positions of the spinor indices, specified in the following way
\[\sigma^\mu \rightarrow \sigma^\mu_{\bar{a}b}\;,\;\;
\tilde{\sigma}^\mu \rightarrow {\sigma^\mu}^{a\bar{b}}\;.\]
The simplest M-functions are easy to derive using the Dirac bispinors
in the Weyl representation in the M-function form
\[{u^\alpha}_b=\sqrt{m}
\left(\begin{array}{c}
e^a_b \\
V_{\bar{a} b}
\end{array}\right) \;,\;\;\;
{\bar{u}^b}_\alpha=\sqrt{m}
\left(\begin{array}{cc}
e^b_a & V^{b\bar{a}}
\end{array}\right)\;.\]
Here $\alpha$ is the bispinor index,
$V_{\bar{a} b}=V_\mu \sigma^\mu_{\bar{a}b}\;,
V^{b\bar{a}}=V_\mu {\sigma^\mu}^{b\bar{a}}$ and $V_\mu$ is the
four-velocity of the particle.
Let us consider, for example, the $\pi$NN vertex.
In the Weyl representation the $\gamma_5$
matrix is \[\gamma_5 =\left(\begin{array}{cc} e & 0 \\
0 & -e \end{array}\right)\] and the amplitude of the transition between real
nucleons with the production of the pion is equal to
\[ {\rm M}^a_b(p_f,q;p_i)=
g_\pi {\bar{u}^a}_\alpha {\gamma_5}^\alpha _\beta {u^\beta}_b =
g_\pi m(e^a_b-V_f^{a\bar{c}}{V_i}_{\bar{c}b})\;,\]
or in the index-less form
\begin{equation}
{\rm M}(p_f,q;p_i)=g_\pi \bar{u}_f \gamma_5 u_i =
g_\pi m(e-\tilde{V}_f V_i)\;.
\label{eq:indless pseuds vertex}
\end{equation}
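Since $\tilde{V}V=e$ when the two four-velocities coincide, this pseudoscalar vertex vanishes for $V_f=V_i$, as it must. A numerical illustration (the matrix conventions $V = V^0+\vec{V}\cdot\vec{\sigma}$, $\tilde{V} = V^0-\vec{V}\cdot\vec{\sigma}$ and the test velocity are assumptions of the sketch):

```python
import numpy as np

sig = [np.eye(2),
       np.array([[0, 1], [1, 0]], complex),
       np.array([[0, -1j], [1j, 0]]),
       np.array([[1, 0], [0, -1]], complex)]

def sigma_of(V):        # V^mu sigma_mu -> V^0 + vec(V).vec(sigma) (assumed convention)
    return V[0] * sig[0] + sum(V[i] * sig[i] for i in (1, 2, 3))

def sigma_tilde_of(V):  # V^mu sigma~_mu -> V^0 - vec(V).vec(sigma)
    return V[0] * sig[0] - sum(V[i] * sig[i] for i in (1, 2, 3))

def four_velocity(v3):
    v3 = np.asarray(v3, float)
    return np.concatenate(([np.sqrt(1.0 + v3 @ v3)], v3))

g_pi, m = 13.6, 0.939
Vf = four_velocity([0.0, 0.0, 0.4])
M = g_pi * m * (np.eye(2) - sigma_tilde_of(Vf) @ sigma_of(Vf))
print(np.allclose(M, 0.0))   # vertex vanishes for equal velocities
```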
The matrices $\tilde{V}_i$ and $V_i$, where $V^\mu _i$ is the
four-velocity of the $i$-th particle, serve as the metric tensors for the spin
index of this particle in the Stapp formalism when performing contractions over
this index (traces, successive processes). So, in contrast to the simple
expression of the cross section via the $S$-matrix elements
\[\sigma =\frac{1}{2^{n_i}}Tr(S S^\dagger)\;,\]
the same quantity via the M-functions reads
\[\sigma =\frac{1}{2^{n_i}} {\rm M}^{\sigma_{1_f} \ldots \sigma_{n_f}}_
{\sigma_{1_i} \ldots \sigma_{n_i}} V^{\sigma_{1_i} \bar{\sigma}_{1_i}}_{1_i}
\ldots V^{\sigma_{n_i} \bar{\sigma}_{n_i}}_{n_i}
\bar{{\rm M}}^{\bar{\sigma}_{1_f} \ldots \bar{\sigma}_{n_f}}_
{\bar{\sigma}_{1_i} \ldots \bar{\sigma}_{n_i}}
{V_{1_f}}_{\bar{\sigma}_{1_f} \sigma_{1_f}} \ldots
{V_{n_f}}_{\bar{\sigma}_{n_f} \sigma_{n_f}}\;, \]
or in the index-less form
\begin{equation}
\sigma =\frac{1}{2^{n_i}} Tr({\rm M}\; \tilde{V}_{1_i} \otimes \ldots
\otimes \tilde{V}_{n_i} \; {\rm M}^\dagger
V_{1_f} \otimes \ldots \otimes V_{n_f})\;.
\label{eq:M funct norm}
\end{equation}
Finally, let us give one of the bases of M-functions for NN scattering.
It is convenient to choose the functions $b_i$ in the expansion
(\ref{eq:M func devel}) such that the cross section equals
$\sum_i|g_i|^2$. From eq.(\ref{eq:M funct norm}) it follows that
for this purpose one should build a basis orthonormalized with
respect to the scalar product
\[({\rm M}_i,{\rm M}_j) \equiv \frac{Tr({\rm M}_i\tilde{V}_0 \otimes \tilde{V}_v
{\rm M}_j^\dagger V_1 \otimes V_2)}{4}\;.\]
Eqs.(\ref{eq:bi})--(\ref{eq:norms Mi}) present such a basis \cite{Grebenyuk1989}:
\begin{equation}
b_i \equiv \frac{{\rm M}_i}{\sqrt{\|{\rm M}_i\|^2}}\;,\quad i=1,\ldots,6\;,
\label{eq:bi}
\end{equation}
where
\begin{eqnarray}
{\rm M}_1=&(e-\tilde{V}_1 V_v) \otimes (e-\tilde{V}_2 V_0)\;,
\nonumber \\
{\rm M}_2=&(\tilde{V}_1 V-\tilde{V} V_v) \otimes
(\tilde{V}_2 V-\tilde{V} V_0)\;,
\nonumber \\
{\rm M}_3=&(e+\tilde{V}_1 V_v) \otimes (e+\tilde{V}_2 V_0)\;,
\label{eq:Mi} \\
{\rm M}_4=&
(\alpha^{1v}(e+\tilde{V}_1 V_v)-(\tilde{V}_1 V+\tilde{V} V_v))
\otimes
(\alpha^{20}(e+\tilde{V}_2 V_0)-(\tilde{V}_2 V+\tilde{V} V_0))\;,
\nonumber \\
{\rm M}_5=&
(e+\tilde{V}_1 V_v) \otimes
(\alpha^{20}(e+\tilde{V}_2 V_0)-(\tilde{V}_2 V+\tilde{V} V_0))\;,
\nonumber \\
{\rm M}_6=&
(\alpha^{1v}(e+\tilde{V}_1 V_v)-(\tilde{V}_1 V+\tilde{V} V_v))
\otimes (e+\tilde{V}_2 V_0)\;, \nonumber
\end{eqnarray}
with
\begin{equation}
\alpha^{ij} \equiv 2\frac{(V,V_i+V_j)}{(V_i+V_j)^2}\;,
\label{eq:NN alpha definition}
\end{equation}
and
\begin{eqnarray}
\|{\rm M}_1\|^2= & 4\left[1-(V_1,V_v)\right]\left[1-(V_2,V_0)\right]
\nonumber \\
\|{\rm M}_2\|^2= & 4\left[1+(V_1,V_v)-2(V_1,V)(V_v,V)\right]
\left[1+(V_2,V_0)-2(V_2,V)(V_0,V)\right] \nonumber \\
\|{\rm M}_3\|^2= & 4\left[1+(V_1,V_v)\right]\left[1+(V_2,V_0)\right]
\nonumber \\
\|{\rm M}_4\|^2= & \frac{\|{\rm M}_5\|^2\|{\rm M}_6\|^2}{\|{\rm M}_3\|^2}
\label{eq:norms Mi} \\
\|{\rm M}_5\|^2= & 4\frac{1+(V_1,V_v)}{1+(V_2,V_0)}
\left\{ \left[1-(V,V_2)^2\right]\left[1-(V,V_0)^2\right]-
\left[(V_2,V_0)-(V,V_2)(V,V_0)\right]^2 \right\} \nonumber \\
\|{\rm M}_6\|^2= & 4\frac{1+(V_2,V_0)}{1+(V_1,V_v)}
\left\{ \left[1-(V,V_1)^2\right]\left[1-(V,V_v)^2\right]-
\left[(V_1,V_v)-(V,V_1)(V,V_v)\right]^2 \right\}\;. \nonumber
\end{eqnarray}
In the above equations, $V_i$ is the four-velocity of the $i$-th particle and
$V$ is the four-velocity of the NN c.m. system as a whole.
\section{Introduction}
\label{sec:Intro}
\subsection{Preliminaries}
\label {sub:Prel}
\IEEEPARstart
{T}{he} noiseless index coding problem was first introduced by Birk and Kol \cite{ISCO} as an informed source coding problem over a broadcast channel. It involves a single source $\mathcal{S}$ that wishes to send $n$ messages from a set $\mathcal{X}= \lbrace x_{1},x_{2},\ldots,x_{n} \rbrace, \ x_i \in \mathbb{F}_{2}$, to a set of $m$ receivers $\mathcal{R}=\lbrace R_{1},R_{2},\ldots,R_{m} \rbrace $. A receiver $R_{i}$ $\in$ $\mathcal{R}$ is identified by $\lbrace \mathcal{W}_i , \mathcal{K}_i \rbrace$, where $\mathcal{W}_i\subseteq \mathcal{X}$ is the set of messages demanded by the receiver $R_{i}$ and $\mathcal{K}_i\subsetneq \mathcal{X}$ is the set of messages known to the receiver $R_{i}$ a priori, called the side information set. The index coding problem can be specified by $\left(\mathcal{X},\mathcal{R}\right)$.
\begin{definition}
An index code for the index coding problem $\left(\mathcal{X},\mathcal{R}\right)$ consists of\\$1)$ An encoding function $f:\mathbb{F}_{2}^n \rightarrow \mathbb{F}_{2}^l$\\
$2)$ A set of decoding functions $g_{1}, g_{2},\ldots,g_{m}$ such that, for a given input $\textbf{x} \in \mathbb{F}_{2}^n $, $g_{i}\left(f(\textbf{x}),\mathcal{K}_i\right) = \mathcal{W}_i, \ \forall \ i \in \lbrace 1, 2,\ldots,m \rbrace $.
\end{definition}
The optimal index code as defined in \cite{OMIC} is that index code which minimizes $l$, the length of the index code which is equal to the number of binary transmissions required to satisfy the demands of all the receivers. An index code is said to be linear if its encoding function is linear and linearly decodable if all its decoding functions are linear \cite{ICSI}.
The class of index coding problems where each receiver demands a single unique message were named in \cite{OMIC} as single unicast index coding problems. For such index coding problems, $m=n$. WLOG, for a single unicast index coding problem, let the receiver $R_i$ demand the message $x_i$. The side information graph $G$, of a single unicast index coding problem, is a directed graph on $n$ vertices where an edge $\left( i,j\right) $ exists if and only if $R_i$ knows the message $x_j$ \cite{ICSI}. The minrank over $\mathbb{F}_{2}$ of the side information graph $G$ is defined in \cite{ICSI} as $\min \left\lbrace rank_2\left( A\right) : A \text{ fits } G\right\rbrace$, where a 0-1 matrix $A$ is said to fit $G$ if $a_{ii}=1 \ \forall \ i \in \left\lbrace 1,2, \ldots, n\right\rbrace$ and $a_{ij}=0$, if $\left( i,j\right)$ is not an edge in $G$ and $rank_2$ denotes the rank over $\mathbb{F}_2$. Bar-Yossef et al. in \cite{ICSI} established that single unicast index coding problems can be expressed using a side information graph and the length of an optimal index code for such an index coding problem is equal to the minrank over $\mathbb{F}_{2}$ of the corresponding side information graph. This was extended in \cite{ECIC} to a general instance of index coding problem using minrank over $\mathbb{F}_q$ of the corresponding side information hypergraph.
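For small instances, the minrank over $\mathbb{F}_2$ can be checked by brute force over all 0-1 matrices fitting $G$. The sketch below (function names are ours, not from any library; the search is exponential in the number of edges) enumerates the free off-diagonal entries and computes ranks by Gaussian elimination over $\mathbb{F}_2$:

```python
import itertools

def rank_f2(A):
    """Rank of a 0-1 matrix over F2 via Gaussian elimination."""
    A = [row[:] for row in A]
    rank = 0
    for col in range(len(A[0])):
        piv = next((r for r in range(rank, len(A)) if A[r][col]), None)
        if piv is None:
            continue
        A[rank], A[piv] = A[piv], A[rank]
        for r in range(len(A)):
            if r != rank and A[r][col]:
                A[r] = [a ^ b for a, b in zip(A[r], A[rank])]
        rank += 1
    return rank

def minrank_f2(n, K):
    """Single unicast minrank; K[i] = 1-indexed side information set of R_{i+1}."""
    # Edge (i, j) exists iff receiver i+1 knows message j+1.
    edges = [(i, j) for i in range(n) for j in range(n)
             if i != j and (j + 1) in K[i]]
    best = n
    for bits in itertools.product([0, 1], repeat=len(edges)):
        # Diagonal forced to 1; only edge positions are free.
        A = [[int(i == j) for j in range(n)] for i in range(n)]
        for (i, j), b in zip(edges, bits):
            A[i][j] = b
        best = min(best, rank_f2(A))
    return best
```

For three receivers each knowing the other two messages, the single transmission $x_1+x_2+x_3$ suffices, and the search indeed returns minrank $1$.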
In both \cite{ISCO} and \cite{ICSI}, noiseless binary channels were considered and hence the problem of index coding was formulated as a scheme to reduce the number of binary transmissions. With binary transmissions, this amounts to minimizing the bandwidth consumed. We consider noisy index coding problems over AWGN broadcast channels. Here, we can reduce bandwidth further by using an M-ary modulation scheme. A previous work which considered index codes over a Gaussian broadcast channel is by Natarajan et al. \cite{IGBC}. Index codes based on multi-dimensional QAM constellations were proposed and a metric called ``side information gain" was introduced as a measure of the efficiency with which the index codes utilize receiver side information. However, \cite{IGBC} does not consider the index coding problem as originally defined in \cite{ISCO} and \cite{ICSI}, as it does not minimize the number of transmissions. It always uses $2^n$- point signal sets, whereas we use signal sets of smaller sizes as well as the $2^n$- point signal set for the same index coding problem.
\subsection{Our Contribution}
We consider index coding problems over $\mathbb{F}_2$, over AWGN broadcast channels. For a given index coding problem, for an index code of length $N$, we propose to use $2^N$- ary modulation scheme to broadcast the index codeword rather than using $N$ BPSK transmissions, with the energy of the symbol being equal to that of $N$ binary transmissions. Our contributions are summarized below.
\begin{itemize}
\item An algorithm, to map $N$ index coded bits to a $2^N$- PSK/ $2^N$-QAM signal set is given.
\item We show that by transmitting $N$ index coded bits as a signal point from $2^N$- PSK or QAM constellation, certain receivers get both coding gain as well as bandwidth gain and certain other receivers trade off coding gain for bandwidth gain.
\item A necessary and sufficient condition that the side information possessed by a receiver should satisfy so as to get coding gain over a receiver with no side information is presented.
\item We show that it is not always necessary to find the minimum number of binary transmissions required for a given index coding problem, i.e., a longer index code may give higher coding gains to certain receivers.
\item We find that for index coding problems satisfying a sufficient condition, the difference in probability of error performance between the best performing receiver and the worst performing receiver widens monotonically with the length of the index code employed.
\item We prove that transmitting the $N$ index coded bits as a QAM signal is better than transmitting them as a PSK signal if the receivers see an effective signal set with eight points or more.
\end{itemize}
\subsection{Organization}
The rest of this paper is organized as follows. In Section \ref{sec:Model}, the index coding problem setting that we consider is formally defined with examples. The bandwidth gain and coding gain obtained by receivers by transmitting index coded bits as a PSK symbol are formally defined. A necessary and sufficient condition that the side information possessed by a receiver should satisfy so as to get coding gain over a receiver with no side information is stated and proved. In Section \ref{sec:PSK_Algo}, we give an algorithm to map the index coded bits to a $2^N$- PSK symbol such that the receiver with the maximum amount of side information sees the maximum PSK-SICG. In Section \ref{sec:ICQAM}, we compare the transmission of index coded bits as a PSK signal against transmitting them as a QAM signal. The algorithm given in Section \ref{sec:PSK_Algo} itself can be used to map index codewords to the QAM signal set. We find that all the results, including the necessary and sufficient condition to get coding gain, hold, and show that transmitting the index coded bits as a QAM signal gives better performance if the receivers see an effective signal set with eight points or more. We give examples with simulation results to support our claims in Section \ref{sec:simu}. Finally, concluding remarks and directions for future work are given in Section \ref{sec:conc}.
\section{Side Information Coding Gain}
\label{sec:Model}
Consider an index coding problem $\left( \mathcal{X}, \mathcal{R}\right) $, over $\mathbb{F}_2$, with $n$ messages and $m$ receivers, where each receiver demands a single message. This is sufficient since any general index coding problem can be converted into one where each receiver demands exactly one message, i.e., $\left|\mathcal{W}_i \right| = 1,\ \forall \ i \in \lbrace 1, 2, \ldots, m\rbrace$. A receiver which demands more than one message, i.e., $\left|\mathcal{W}_i \right|>1$, can be considered as $\left|\mathcal{W}_i \right|$ equivalent receivers all having the same side information set $\mathcal{K}_i$ and demanding a single message each. Since the same message can be demanded by multiple receivers, this gives $m \geq n$.
For the given index coding problem, let the length of the index code used be $N$. Then, instead of transmitting $N$ BPSK symbols, which we call the $N$- fold BPSK scheme, we will transmit a single point from a $2^{N}$- PSK signal set with the energy of the $2^{N}$- PSK symbol being equal to $N$ times the energy of a BPSK symbol, i.e., equal to the total transmitted energy of the $N$ BPSK symbols.
\begin{example}
\label{ex_psk_1}
Let $m=n=7$ and $ \mathcal{W}_i = x_{i},\ \forall \ i\in \lbrace 1, 2,\ldots,7 \rbrace $. Let the side information sets be $\mathcal{K}_1 =\left\{2,3,4,5,6,7\right\},\ \mathcal{K}_2=\left\{1,3,4,5,7\right\},\ \mathcal{K}_3=\left\{1,4,6,7\right\},\ \mathcal{K}_4=\left\{2,5,6\right\},\ \mathcal{K}_5=\left\{1,2\right\},\ \mathcal{K}_6=\left\{3\right\}\ \text{and} \ \mathcal{K}_7=\phi$.\\
The minrank over $\mathbb{F}_{2}$ of the side information graph corresponding to the above problem evaluates to $N=4$.
An optimal linear index code is given by the encoding matrix,
{\small
\begin{center}
$L =\left[\begin{array}{cccc}
1 & 0 & 0 & 0\\
1 & 0 & 0 & 0\\
0 & 1 & 0 & 0\\
0 & 0 & 1 & 0\\
1 & 0 & 0 & 0\\
0 & 1 & 0 & 0\\
0 & 0 & 0 & 1
\end{array}\right]$.
\end{center}
}
The index coded bits are $\textbf{y}=\textbf{x}L$, where
$$\textbf{y}= \left[y_1 \ y_2\ y_3\ y_4\right]=\left[x_1\ x_2\ \ldots\ x_7\right]L\;,$$
$\mbox{giving}~~~~~~~~~~ y_{1}=x_{1}+x_{2}+x_{5};~~~~~~~~ y_{2}=x_{3}+x_{6};~~~~~~~~ y_{3}=x_{4};~~~~~~~~ y_{4}=x_7. $
\end{example}
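The encoding $\textbf{y}=\textbf{x}L$ of Example \ref{ex_psk_1} can be checked mechanically; a short sketch over $\mathbb{F}_2$ (helper name is ours):

```python
import itertools

# Encoding matrix L of Example 1: rows are messages x_1..x_7,
# columns are the coded bits y_1..y_4.
L = [[1, 0, 0, 0],
     [1, 0, 0, 0],
     [0, 1, 0, 0],
     [0, 0, 1, 0],
     [1, 0, 0, 0],
     [0, 1, 0, 0],
     [0, 0, 0, 1]]

def encode(x, L):
    """y = x L over F2 (vector-matrix product mod 2)."""
    return [sum(xi * lij for xi, lij in zip(x, col)) % 2
            for col in zip(*L)]
```

Every message tuple then satisfies $y_1=x_1+x_2+x_5$, $y_2=x_3+x_6$, $y_3=x_4$, $y_4=x_7$.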
In the 4-fold BPSK index coding scheme we will transmit 4 BPSK symbols. In the scheme that we propose, we will map the index coded bits to the signal points of a 16-PSK constellation and transmit a single complex number thereby saving bandwidth. To keep energy per bit the same, the energy of the 16-PSK symbol transmitted will be equal to the total energy of the 4 transmissions in the 4-fold BPSK scheme.
This scheme of transmitting index coded bits as a single PSK signal will give bandwidth gain in addition to the gain in bandwidth obtained by going from $n$ to $N$ BPSK transmissions. This extra gain is termed as PSK bandwidth gain.
\begin{definition}
The term \textit{PSK bandwidth gain} is defined as the factor by which the bandwidth required to transmit the index code is reduced, obtained while transmitting a $2^{N}$- PSK signal point instead of transmitting $N$ BPSK signal points.
\end{definition}
For an index coding problem, there will be a reduction in required bandwidth by a factor of $N/2$, which will be obtained by all receivers.
With proper mapping of the index coded bits to PSK symbols, the algorithm for which is given in Section \ref{sec:PSK_Algo}, we will see that receivers with more amount of side information will get better performance in terms of probability of error, provided the side information available satisfies certain properties. This gain in error performance, which is solely due to the effective utilization of available side information by the proposed mapping scheme, is termed as PSK side information coding gain (PSK-SICG). Further, by sending the index coded bits as a $2^N$- PSK signal point, if a receiver gains in probability of error performance relative to a receiver in the $N$- fold BPSK transmission scheme, we say that the receiver gets PSK absolute coding gain (PSK-ACG).
\begin{definition}
The term \textit{PSK side information coding gain} is defined as the coding gain a receiver with side information gets relative to one with no side information, when the index code of length $N$ is transmitted as a signal point from a $2^{N}$- PSK constellation.
\end{definition}
\begin{definition}
The term \textit{PSK absolute coding gain} is defined as the gain in probability of error performance obtained by any receiver in the $2^N$- PSK signal transmission scheme relative to its performance in the $N$- fold BPSK transmission scheme.
\end{definition}
We present a set of necessary and sufficient conditions for a receiver to get PSK-SICG in the following subsection.
\subsection{PSK Side Information Coding Gain (PSK-SICG)}
\label{subsec:PSK-SICG}
Let $\mathcal{C} = \lbrace \textbf{y} \in \mathbb{F}_2^N \ | \ \textbf{y} = \textbf{x}L ,\ \textbf{x} \in \mathbb{F}_2^n \rbrace$, where $L$ is the $n \times N$ encoding matrix corresponding to the linear index code chosen. Since $N \leq n $, we have $\mathcal{C} = \mathbb{F}_2^N$.
For each of the receivers $R_{i},\ i\in \left\lbrace 1,2,\ldots, m\right\rbrace $, define the set $S_{i}$ to be the set of all binary transmissions which $R_{i}$ knows a priori, i.e., $S_{i}= \lbrace y_{j}|y_{j}=\sum\limits_{k \in J }x_{k} ,\ J\subseteq \mathcal{K}_{i}\rbrace$. For example, in Example \ref{ex_psk_1}, $S_1= \{ y_2,y_3,y_4 \},$ $S_2= \{ y_3,y_4 \},$ $S_3= \{ y_3, y_4 \}$ and $S_4=S_5=S_6=S_7= \phi.$
Let $\eta_{i} = \min \lbrace n-\left|\mathcal{K}_i\right|, N-\left|S_i\right|\rbrace$. For example, in Example \ref{ex_psk_1}, $\eta_1=1, \ \eta_2=\eta_3=2 \ \text{and} \ \eta_4=\eta_5=\eta_6=\eta_7=4$.
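The sets $S_i$ and the quantities $\eta_i$ follow mechanically from the supports of the columns of $L$; a sketch for Example \ref{ex_psk_1} (variable names are ours):

```python
n, N = 7, 4
# Encoding matrix of Example 1 (rows = messages, columns = coded bits).
L = [[1, 0, 0, 0],
     [1, 0, 0, 0],
     [0, 1, 0, 0],
     [0, 0, 1, 0],
     [1, 0, 0, 0],
     [0, 1, 0, 0],
     [0, 0, 0, 1]]
K = [{2, 3, 4, 5, 6, 7}, {1, 3, 4, 5, 7}, {1, 4, 6, 7},
     {2, 5, 6}, {1, 2}, {3}, set()]

# Support of coded bit y_j = set of messages appearing in it.
support = [{i + 1 for i in range(n) if L[i][j]} for j in range(N)]

# S_i: coded bits whose support lies entirely inside K_i.
S = [{j + 1 for j in range(N) if support[j] <= Ki} for Ki in K]
eta = [min(n - len(Ki), N - len(Si)) for Ki, Si in zip(K, S)]
```

This reproduces $S_1=\{y_2,y_3,y_4\}$, $S_2=S_3=\{y_3,y_4\}$ and the $\eta$ values quoted above.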
\begin{theorem}
\label{th_psk_1}
A receiver $R_{i}$ will get PSK-SICG if and only if its available side information satisfies at least one of the following two conditions:
\begin{align}
n-\left|\mathcal{K}_i\right| < N \label{eq:Th1_1}\\
\left|S_{i}\right|\geq 1 \label{eq:Th1_2}
\end{align}
Equivalently, a receiver $R_i$ will get PSK-SICG if and only if
\begin{align}
\eta_i < N \label{eq:Th1_3}.
\end{align}
\begin{proof}
The equivalence of the conditions in (\ref{eq:Th1_1}) and (\ref{eq:Th1_2}) and the condition in (\ref{eq:Th1_3}) is straight-forward since $\eta_i= \min \lbrace n-\left|\mathcal{K}_i\right|, N-\left|S_i\right|\rbrace$ will be less than $N$ if and only if at least one of the two conditions given in (\ref{eq:Th1_1}) and (\ref{eq:Th1_2}) is satisfied.
Let $\mathcal{K}_i = \lbrace i_{1}, i_2, \ldots, i_{\left|\mathcal{K}_i\right|} \rbrace$ and $\mathcal{A}_i$ $\triangleq$ $\mathbb{F}_2^{\left|\mathcal{K}_i\right|}$, $i=1,2,\ldots,m$.
\textit{Proof of the ``if part''} : If condition (\ref{eq:Th1_1}) is satisfied, the ML decoder at $R_i$ need not search through all codewords in $\mathcal{C}$. For a given realization of $( x_{i_1},x_{i_2}, \ldots, x_{i_{\left|\mathcal{K}_i\right|}} )$, say, $(a_{1},a_{2}, \ldots, a_{\left|\mathcal{K}_i\right|}) \in \mathcal{A}_i$, the decoder needs to search through only the codewords in $$\left\lbrace \mathbf{y}=\mathbf{x}L : ( x_{i_1},x_{i_2}, \ldots, x_{i_{\left|\mathcal{K}_i\right|}} ) = (a_{1},a_{2}, \ldots, a_{\left|\mathcal{K}_i\right|})\right\rbrace, $$ i.e., the codewords in $\mathcal{C}$ which resulted from $\mathbf{x}$ such that $( x_{i_1},x_{i_2}, \ldots, x_{i_{\left|\mathcal{K}_i\right|}} ) = (a_{1},a_{2}, \ldots, a_{\left|\mathcal{K}_i\right|})$. Since the number of such $\mathbf{x}$ is $2^{n-\left|\mathcal{K}_i\right|} < 2^N$, the decoder need not search through all the codewords in $\mathcal{C}$.
Similarly if the condition (\ref{eq:Th1_2}) is satisfied, then also the ML decoder at $R_i$ need not search through all the codewords in $\mathcal{C}$. For any fixed realization of $( x_{i_1},x_{i_2}, \ldots, x_{i_{\left|\mathcal{K}_i\right|}} )$, the values of $\left\lbrace y_j \in S_i\right\rbrace $ are also fixed. The decoder needs to search through only those $\mathbf{y} \in \mathcal{C}$ with the given fixed values of $\left\lbrace y_j \in S_i\right\rbrace $. Again, the number of such $\mathbf{y}$ is less than $2^N$.
Thus, if any of the two conditions of the theorem is satisfied, the ML decoder at $R_i$ needs to search through a reduced number of signal points, which we call the effective signal set seen by $R_i$. The size of the effective signal set seen by the receiver is $2^{\eta_i} < 2^N$. Therefore, by appropriate mapping of the index coded bits to PSK symbols, we can increase $d_{min}(R_i) \triangleq$ the minimum distance of the effective signal set seen by the receiver $R_i$, $i = 1,2,\ldots,m$, thus getting PSK-SICG.
\textit{Proof of the ``only if part''} : If none of the two conditions of the theorem is satisfied, or equivalently if $\eta_i \nless N$, then the effective signal set seen by $R_i$ will be the entire $2^N$-PSK signal set. Thus $d_{min}(R_i)$ cannot be increased. $d_{min}(R_i)$ will remain equal to the minimum distance of the corresponding $2^{N}$- PSK signal set. Therefore the receiver $R_i$ will not get PSK-SICG.
\end{proof}
\end{theorem}
\begin{note}
The condition (\ref{eq:Th1_2}) above indicates how the PSK side information coding gain is influenced by the linear index code chosen. Different index codes for the same index coding problem will give different values of $\left|S_{i}\right|, \ i \in \left[ m\right] $, and hence possibly different PSK side information coding gains.
\end{note}
\begin{figure*}
\includegraphics[scale=0.28]{example-1b}
\caption{16-PSK Mapping for Example \ref{ex_psk_1}.}
\label{fig:ex1_map}
~\hrule
\end{figure*}
Consider the receiver $R_1$ in Example \ref{ex_psk_1}. It satisfies both the conditions, with $n-\left|\mathcal{K}_1\right| = 7-6 = 1 < 4$ and $\left|S_{1}\right| = 3 >1 $. For a particular message realization $(x_1, x_2, \dots, x_7)$, the only index coded bit $R_1$ does not know a priori is $y_1$. Hence there are only 2 possibilities for the received codeword at the receiver $R_1$. Thus it needs to decode to one of these 2 codewords, and not to one of the 16 codewords that would be possible had it not known any of $y_1,\ y_2,\ y_3,\ y_4$ a priori. We then say that $R_1$ sees an effective codebook of size 2. This reduction in the size of the effective codebook seen by the receiver $R_1$ is due to the presence of side information satisfying conditions (\ref{eq:Th1_1}) and (\ref{eq:Th1_2}) above.
For a receiver to see an effective codebook of size $ < 2^N $, it is not necessary that the available side information should satisfy both the conditions. If at least one of the two conditions is satisfied, then that receiver will see an effective codebook of reduced size and hence will get PSK-SICG by proper mapping of index coded bits to $2^N$- PSK symbols. This can be seen from the following example.
\begin{example}
\label{ex3}
Let $m=n=6$ and $\mathcal{W}_i = x_{i}, \ \forall \ i\in \lbrace 1, 2,\ldots,6 \rbrace $. Let the known sets be $\mathcal{K}_1 =\left\{2,3,4,5,6\right\},\ \mathcal{K}_2=\left\{1,3,4,5\right\},\ \mathcal{K}_3=\left\{2,4,6\right\},\ \mathcal{K}_4=\left\{1,6\right\},\ \mathcal{K}_5=\left\{3\right\} \ \text{and} \ \mathcal{K}_6=\phi$.\\
The minrank over $\mathbb{F}_{2}$ of the side information graph corresponding to the above problem evaluates to $N=4$.
An optimal linear index code is given by the encoding matrix,
{\small
\begin{center}
$L = \left[\begin{array}{cccc}
1 & 0 & 0 & 0\\
0 & 1 & 0 & 0\\
0 & 1 & 0 & 0\\
1 & 0 & 0 & 0\\
0 & 0 & 1 & 0\\
0 & 0 & 0 & 1
\end{array}\right]$ \\
\end{center}
}
The index coded bits in this example are,
$$ y_{1}=x_{1}+x_{4};~~~~~~~~ y_{2}=x_{2}+x_{3};~~~~~~~~ y_{3}=x_{5}; ~~~~~~~~ y_{4}=x_{6}. $$
\end{example}
Here, receiver $R_4$ does not satisfy condition (\ref{eq:Th1_1}) since $n-\left|\mathcal{K}_4\right| = 6-2 = 4 = N$.
However, it will still see an effective codebook of size 8, since $\left|S_{4}\right| = 1 $, and hence will get PSK-SICG by proper mapping of the codewords to 16-PSK signal points.
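The size of the effective codebook seen by a receiver can be checked by enumeration; a sketch for Example \ref{ex3}, fixing the messages known to a receiver (helper name is ours):

```python
import itertools

# Encoding matrix of Example 2 (rows = messages x_1..x_6, columns = y_1..y_4).
L = [[1, 0, 0, 0],
     [0, 1, 0, 0],
     [0, 1, 0, 0],
     [1, 0, 0, 0],
     [0, 0, 1, 0],
     [0, 0, 0, 1]]

def effective_codebook(L, known, values):
    """Distinct codewords y = xL consistent with fixed known messages."""
    n, N = len(L), len(L[0])
    free = [i for i in range(n) if i + 1 not in known]
    seen = set()
    for bits in itertools.product([0, 1], repeat=len(free)):
        x = [0] * n
        for k, v in zip(sorted(known), values):
            x[k - 1] = v
        for i, b in zip(free, bits):
            x[i] = b
        seen.add(tuple(sum(x[i] * L[i][j] for i in range(n)) % 2
                       for j in range(N)))
    return seen
```

With all known messages fixed to zero, $R_4$ (with $\mathcal{K}_4=\{1,6\}$) sees $2^{\eta_4}=8$ codewords, while $R_1$ sees only $2$.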
\begin{note}
The condition required for a receiver $R_i$ to get PSK-ACG is that the minimum distance of the effective signal set seen by it, $d_{min}(R_i) > 2$ since the minimum distance seen by any receiver while using $N$-fold BPSK to transmit the index coded bits is $d_{min}(\text{BPSK})=2$.
\end{note}
\begin{note}
For the class of index coding problems with $\mathcal{W}_i \cap \mathcal{W}_j = \phi, \ \mathcal{K}_i \cap \mathcal{K}_j = \phi , \ i \neq j$ and $\left| \mathcal{W}_i\right| = 1, \ \left| \mathcal{K}_i\right| = 1$, which were called single unicast single uniprior in \cite{OMIC}, $\left|\mathcal{S}_i\right| = 0, \ \forall \ i \in \lbrace 1,2, \ldots, m \rbrace$. Therefore, no receiver will get PSK-SICG.
\end{note}
\subsection{$2^N$-PSK to $2^n$-PSK}
In this subsection we discuss the effect of the length of the index code used on the probability of error performance of different receivers. We consider index codes of all lengths from the minimum length $N=$ minrank over $\mathbb{F}_2$ of the corresponding side information hypergraph to the maximum possible value of $N=n$. Consider the following example.
%
\begin{example}
\label{ex_N_to_n1}
Let $m=n=5$ and $\mathcal{W}_i = \lbrace x_i \rbrace, \ \forall \ i \in \lbrace 1, 2, 3, 4, 5 \rbrace.$ Let the known information be $ \mathcal{K}_1=\lbrace2,3,4,5\rbrace, \ \mathcal{K}_2=\lbrace1,3,5\rbrace, \ \mathcal{K}_3=\lbrace1,4\rbrace, \ \mathcal{K}_4= \lbrace2\rbrace$ and $ \mathcal{K}_5 = \phi$.
For this problem, minrank, $N$ = 3. An optimal linear index code is given by
{\small
\begin{align*}
L_1 = \left[\begin{array}{ccc}
1 & 0 & 0\\
1 & 1 & 0\\
1 & 0 & 0\\
0 & 1 & 0\\
0 & 0 & 1
\end{array}\right],
\end{align*}
}
with the index coded bits being
$$ y_{1}=x_{1}+x_{2}+x_{3};~~~~~ y_{2}=x_{2}+x_{4}; ~~~~~ y_{3}=x_{5}.
$$
%
%
Now, we consider an index code of length $N+1=4$. The corresponding encoding matrix is
{\small
\begin{align*}
L_2 = \left[\begin{array}{cccc}
1 & 0 & 0 & 0\\
1 & 0 & 0 & 0\\
0 & 1 & 0 & 0\\
0 & 0 & 1 & 0\\
0 & 0 & 0 & 1
\end{array}\right]
\end{align*}
}
and the index coded bits are
$$ y_{1}=x_{1}+x_{2};~~~~~ y_{2}=x_{3};~~~~ y_{3}=x_{4};~~~~ y_{4}=x_{5}.
$$
We compare these with the case where we send the messages as they are, i.e.,
\begin{align*}
L_3= I_5,
\end{align*}
where $I_5$ denotes the $5 \times 5$ identity matrix.
\end{example}
Optimal mappings for the three different cases considered are given in Fig. \ref{fig:ex_N_to_n1}(a), (b) and (c) respectively.
\begin{figure*}
\begin{center}
\includegraphics[scale=0.4]{example-6b}
\caption{8-PSK, 16-PSK and 32-PSK Mappings for Example \ref{ex_N_to_n1}.}
\label{fig:ex_N_to_n1}
\end{center}
~\hrule
\end{figure*}
The values of $\eta$ for the different receivers while using the three different index codes are summarized in TABLE \ref{table_eta}. We see that the receiver $R_1$ sees a two point signal set irrespective of the length of the index code used. Since the energy of the signal increases with the length of the index code, $R_1$ will see a larger minimum distance when a longer index code is used. However, the minimum distance seen by the receiver $R_5$ is that of the $2^N$ signal set in all the three cases, which decreases as $N$ increases. Hence the difference between the performances of $R_1$ and $R_5$ increases with the length of the index code. This is generalized in the following lemma.
\begin{table}
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|}
\hline $N$ & $\eta_1$ & $\eta_2$ & $\eta_3$ & $\eta_4$ & $\eta_5$ \\
\hline 3 & 1 & 2 & 3 & 3 & 3 \\
4 & 1 & 2 & 3 & 4 & 4 \\
5 & 1 & 2 & 3 & 4 & 5 \\
\hline
\end{tabular}
\caption{Table showing values of $\eta$ for different receivers.}
\label{table_eta}
\end{center}
\end{table}
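The entries of TABLE \ref{table_eta} can be reproduced directly from the definition $\eta_i = \min \lbrace n-\left|\mathcal{K}_i\right|, N-\left|S_i\right|\rbrace$; a compact sketch (helper name is ours):

```python
# Side information sets of Example 3 and the supports of the coded bits
# for the three codes L_1 (N=3), L_2 (N=4) and L_3 = I_5 (N=5).
K = [{2, 3, 4, 5}, {1, 3, 5}, {1, 4}, {2}, set()]
n = 5
codes = [
    [{1, 2, 3}, {2, 4}, {5}],          # y = x L_1
    [{1, 2}, {3}, {4}, {5}],           # y = x L_2
    [{1}, {2}, {3}, {4}, {5}],         # y = x I_5
]

def etas(supports):
    """eta_i = min(n - |K_i|, N - |S_i|), with S_i read off the supports."""
    N = len(supports)
    return [min(n - len(Ki), N - sum(s <= Ki for s in supports)) for Ki in K]

table = [etas(s) for s in codes]
```

The three rows of `table` match the three rows of TABLE \ref{table_eta}.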
\begin{lemma}
\label{lemm:N_to_n}
For a given index coding problem, as the length of the index code used increases from $N$ to $n$, where $N$ is the minrank of the side information hypergraph of the index coding problem and $n$ is the number of messages, the difference in performance between the best performing and the worst performing receiver increases monotonically if the worst performing receiver has no side information, provided we use an optimal mapping of index coded bits to PSK symbols given by Algorithm \ref{algo_psk}.
\begin{proof}
If there is a receiver with no side information, say $R$, then whatever the length $l$ of the index code used, the effective signal set seen by $R$ will be the full $2^l$- PSK signal set. Therefore the minimum distance seen by $R$ will be that of the $2^l$- PSK signal set. For PSK symbol energy $l$, the squared minimum pair-wise distance of $2^l$- PSK is $d^2_{min}(2^l\mbox{- PSK}) = 4l\sin^{2}(\pi /2^l)$, which is non-increasing in $l$ and strictly decreasing for $l \geq 2$.
\end{proof}
\end{lemma}
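The distance expression used in the proof of Lemma \ref{lemm:N_to_n} is easy to tabulate numerically (function name is ours); note the well-known tie at $l=1,2$ (BPSK and QPSK have the same minimum distance), after which the decrease is strict:

```python
import math

def dmin_sq(l):
    """Squared minimum distance of 2^l-PSK with symbol energy l."""
    return 4 * l * math.sin(math.pi / 2 ** l) ** 2
```

For example, $d^2_{min}$ equals $4$ at $l=1$ and $l=2$, and drops to about $1.76$ at $l=3$.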
\begin{remark}
\label{rem:N_to_n}
For an index coding problem where the worst performing receiver knows one or more messages a priori, whether or not the gap between the best performing receiver and the worst performing receivers widens monotonically depends on the index code chosen. This is because the index code chosen determines $\eta$ of the receivers which in turn determines the mapping scheme and thus the effective signal set seen by the receivers. Therefore the minimum distance seen by the receivers and thus their error performance depends on the index code chosen.
\end{remark}
\section{Algorithm}
\label{sec:PSK_Algo}
In this section we present the algorithm for labelling the appropriate sized PSK signal set. Let the number of binary transmissions required be $N$, the minrank over $\mathbb{F}_{2}$, and let the $N$ transmissions be labeled $Y= \lbrace y_{1},y_{2},\ldots,y_{N} \rbrace$, where each $y_{i}$ is a linear combination of $\lbrace x_{1},x_{2},\ldots,x_{n} \rbrace$. If the minrank is not known, then $N$ can be taken to be the length of any known linear index code.
Order the receivers in the non-decreasing order of $\eta_{i}$.
WLOG, let $\lbrace R_{1},R_{2},\ldots,R_{m} \rbrace$ be such that
\begin{align*}
\eta_{1} \leq \eta_2 \leq \ldots \leq \eta_m.
\end{align*}
Let $\mathcal{K}_i = \lbrace i_{1}, i_2, \ldots, i_{\left|\mathcal{K}_i\right|} \rbrace$ and $\mathcal{A}_i$ $\triangleq$ $\mathbb{F}_2^{\left|\mathcal{K}_i\right|}$, $i=1,2,\ldots,m$. As observed in the proof of Theorem \ref{th_psk_1}, for any given realization of $( x_{i_1},x_{i_2}, \ldots, x_{i_{\left|\mathcal{K}_i\right|}} )$, the effective signal set seen by the receiver $R_i$ consists of $2^{\eta_i}$ points. Hence if $\eta_{i} \geq N $, then $d_{min}(R_i) =$ the minimum distance of the signal set seen by the receiver $R_i$, $i = 1,2,\ldots,m$, will not increase. $d_{min}(R_i)$ will remain equal to the minimum distance of the corresponding $2^{N}$- PSK. Thus for receiver $R_i$ to get PSK-SICG, $\eta_{i}$ should be less than $ N $.
The algorithm to map the index coded bits to PSK symbols is given in \textbf{Algorithm 1}.
Before running the algorithm, use Ungerboeck set partitioning \cite{TCM} to partition the $2^{N}$- PSK signal set into $N$ different layers. Let $L_0,\ L_1, \ldots, L_{N-1}$ denote the different levels of partitions of the $2^N$-PSK, with the minimum distance at layer $L_i$ being $\Delta_i$, $i \in \lbrace0, 1,\ldots, N-1\rbrace$, where $\Delta_0 < \Delta_1 < \ldots < \Delta_{N-1}$.
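For $2^N$-PSK, each Ungerboeck subset at level $L_j$ consists of points whose indices are congruent modulo $2^j$, and is itself a smaller PSK constellation; a sketch (function name is ours) that checks $\Delta_0 < \Delta_1 < \ldots < \Delta_{N-1}$ for $N=4$ with symbol energy $N$:

```python
import math

def subset_dmin(M, level, energy):
    """Min distance within one level-`level` Ungerboeck subset of M-PSK.

    The subset {0, 2^level, 2*2^level, ...} is an (M / 2^level)-PSK
    constellation, so its minimum distance is 2 r sin(pi 2^level / M).
    """
    r = math.sqrt(energy)
    return 2 * r * math.sin(math.pi * 2 ** level / M)

N = 4
deltas = [subset_dmin(2 ** N, j, N) for j in range(N)]
```

For 16-PSK with energy 4 this gives $\Delta_0 \approx 0.78 < \Delta_1 \approx 1.53 < \Delta_2 \approx 2.83 < \Delta_3 = 4$, the last level being the antipodal pairs.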
\begin{algorithm}
\caption{Algorithm to map index coded bits to PSK symbols}\label{algo_psk}
\begin{algorithmic}[1]
\If {$\eta_1 \geq N $}, do an arbitrary order mapping and \textbf{exit}.
\EndIf
\State $i \gets 1$
\If {all $2^N$ codewords have been mapped}, \textbf{exit}.
\EndIf
\State Fix $( x_{i_1},x_{i_2}, \ldots, x_{i_{\left|\mathcal{K}_i\right|}} )=(a_{1},a_{2}, \ldots, a_{\left|\mathcal{K}_i\right|}) \in \mathcal{A}_i$ such that the set of codewords, $\mathcal{C}_i \subset \mathcal{C} $, obtained by running all possible combinations of $\lbrace x_{j}|\ j \notin \mathcal{K}_i\rbrace$ with $( x_{i_1},x_{i_2}, \ldots, x_{i_{\left|\mathcal{K}_i\right|}} )=(a_{1},a_{2}, \ldots, a_{\left|\mathcal{K}_i\right|})$ has maximum overlap with the codewords already mapped to PSK signal points.
\If {all codewords in $\mathcal{C}_i$ have been mapped},
\begin{itemize}
\item $\mathcal{A}_i$=$\mathcal{A}_i \setminus \lbrace( x_{i_1},x_{i_2}, \ldots, x_{i_{\left|\mathcal{K}_i\right|}} )|( x_{i_1},x_{i_2}, \ldots, x_{i_{\left|\mathcal{K}_i\right|}} )$ together with all combinations of $\lbrace x_{j}|\ j \notin \mathcal{K}_i\rbrace$ will result in $\mathcal{C}_i\rbrace$.
\item $i \gets i+1$
\item \textbf{if} {$\eta_i \geq N$} \textbf{then},
\begin{itemize}
\item $i \gets 1$.
\item goto \textbf{Step 3}
\end{itemize}
\item \textbf{else}, goto \textbf{Step 3}
\end{itemize}
\Else
\begin{itemize}
\item Of the codewords in $\mathcal{C}_i$ which are yet to be mapped, pick any one and map it to a PSK signal point in that $2^{\eta_i}$ sized subset at level $L_{N-\eta_i}$ which has maximum number of signal points mapped by codewords in $\mathcal{C}_i$ without changing the already labeled signal points in that subset.
If all the signal points in such a subset have been already labeled, then map it to a signal point in another $2^{\eta_i}$ sized subset at the same level $L_{N-\eta_i}$ such that this point together with the signal points corresponding to already mapped codewords in $\mathcal{C}_i$ has the largest minimum distance possible. Clearly this minimum distance, $d_{min}(R_i)$ is such that $\Delta_{N-\eta_i} \geq d_{min}(R_i) \geq \Delta_{N-(\eta_i+1)}$.
\item $i \gets 1$
\item goto \textbf{Step 3}
\end{itemize}
\EndIf
\end{algorithmic}
\end{algorithm}
Let $g_i$, $i \in \left\lbrace 1,2,\ldots, m\right\rbrace$, denote the PSK-SICG obtained by the receiver $R_i$ under the mapping given in Algorithm \ref{algo_psk}. This algorithm gives an optimal mapping of index coded bits to PSK symbols. Here optimality is in the sense that, for the receivers $\lbrace R_{1},R_{2},\ldots,R_{m} \rbrace$ ordered such that
$\eta_{1} \leq \eta_2 \leq \ldots \leq \eta_m$,
\begin{enumerate}
\item No other mapping can give a PSK-SICG $> g_1$ for the receiver $R_1$.
\item Any mapping which gives PSK-SICG $=g_j$ for the receivers $R_j, \ j = 1,2, \ldots, i-1$, cannot give a PSK-SICG $>g_i$ for the receiver $R_i$.
\end{enumerate}
\begin{remark}
Note that Algorithm \ref{algo_psk} above does not result in a unique mapping of index coded bits to $2^N$- PSK symbols. The mapping will change depending on the choice of $( x_{i_1},x_{i_2}, \ldots, x_{i_{\left|\mathcal{K}_i\right|}} )$ in each step. However, the performance of all the receivers obtained using any mapping resulting from the algorithm will be the same. Further, if $\eta_{i}= \eta_j$ for some $i \neq j$, then depending on the ordering of the $\eta_i$ done before starting the algorithm, $R_i$ and $R_j$ may show different performances in terms of probability of error.
\end{remark}
\subsection{How the Algorithm works}
For any given realization of $\mathbf{x}= \left( x_1, x_2,\ldots, x_n\right) $, the ML decoder at receiver $R_i$ with $\eta_i < N$ needs to consider only $2^{\eta_i}$ codewords and not all $2^N$ possible codewords, as explained in the proof of Theorem \ref{th_psk_1}. So the algorithm maps this subset of codewords to one of the subsets of signal points at layer $L_{N-\eta_i}$ of the Ungerboeck partition of the $2^N$-PSK signal set, so that these $2^{\eta_i}$ signal points have a pairwise minimum distance equal to $\Delta_{N-\eta_i}$. An arbitrary mapping cannot ensure this: if any two codewords in this particular subset of $2^{\eta_i}$ codewords are mapped to adjacent points of the $2^N$- PSK signal set, the effective minimum distance seen by the receiver $R_i$ will still be that of $2^N$- PSK.
Further, since $\Delta_0 < \Delta_1 < \ldots < \Delta_{N-1}$, the largest pair-wise minimum distance can be obtained by a receiver with the smallest value of $\eta$. Therefore, we order the receivers in the non-decreasing order of their $\eta$ values and map the codewords seen by $R_1$ first, $R_2$ next and so on. Therefore, the largest pair-wise minimum distance and hence the largest PSK-SICG is obtained by $R_1$.
Consider the index coding problem in Example \ref{ex_psk_1} in Section \ref{sec:Model}.
Here, $\eta_1=1,\ \eta_2= \eta_3 =2$ and $\eta_i \geq 4, \ i \in \lbrace4,5,6,7\rbrace$.
While running Algorithm \ref{algo_psk}, suppose we fix $( x_2, x_3, x_4, x_5, x_6, x_7)= (000000)$; then $\mathcal{C}_{1}=\lbrace \lbrace0000\rbrace, \lbrace1000\rbrace \rbrace$. These codewords are mapped to a pair of diametrically opposite 16-PSK symbols, which constitute a subset at the Ungerboeck partition level $L_3$ of the 16-PSK signal set, as shown in Fig. \ref{fig:ex1_map}(a). Then, the set $\mathcal{C}_{2}$ which results in maximum overlap with $\lbrace \lbrace0000\rbrace, \lbrace1000\rbrace \rbrace$ is $\lbrace \lbrace0000\rbrace, \lbrace0100\rbrace, \lbrace1000\rbrace,\lbrace1100\rbrace \rbrace$. We consider $\lbrace0100\rbrace \in \mathcal{C}_2 \setminus \lbrace \lbrace0000 \rbrace, \lbrace1000\rbrace \rbrace$ and map it to a signal point such that the three labeled signal points belong to a subset at level $L_2$. Now we go back to Step 3 with $i=1$ and find the $\mathcal{C}_1$ which has maximum overlap with the mapped codewords; now $\mathcal{C}_{1} =\lbrace \lbrace0100\rbrace, \lbrace1100\rbrace \rbrace$. Then we map $\lbrace 1100 \rbrace \in \mathcal{C}_1$, which is not already mapped, to the PSK signal point such that the codewords in $\mathcal{C}_{1}$ together constitute a subset at level $L_3$ of the Ungerboeck partitioning. This results in the mapping shown in Fig. \ref{fig:ex1_map}(b). Continuing in this manner, we finally end up with the mapping shown in Fig. \ref{fig:ex1_map}(c). We see that for such a mapping, $d_{min}^2(R_1)=(2\sqrt{4})^2=16$ and $d_{min}^2(R_2)= d_{min}^2(R_3)=(\sqrt{2}\sqrt{4})^2=8$.
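The squared distances quoted above can be checked directly from the partition geometry. The following sketch (our own illustration; helper names such as `level_min_dist_sq` are not from the paper) computes the intra-subset minimum distance at each Ungerboeck level of 16-PSK with average symbol energy $N=4$:

```python
import math

def psk_points(N):
    """2**N-PSK constellation with average symbol energy N (radius sqrt(N))."""
    r = math.sqrt(N)
    return [r * complex(math.cos(2 * math.pi * k / 2**N),
                        math.sin(2 * math.pi * k / 2**N)) for k in range(2**N)]

def level_min_dist_sq(N, j):
    """Squared minimum distance inside an Ungerboeck subset at level L_j:
    every 2**j-th point of the 2**N-PSK circle."""
    sub = psk_points(N)[::2**j]
    return min(abs(a - b)**2 for i, a in enumerate(sub) for b in sub[i + 1:])

# 16-PSK (N = 4): R_1 sees a level-L_3 pair, R_2 and R_3 see level-L_2 subsets
print(round(level_min_dist_sq(4, 3), 6))  # 16.0 = d_min^2(R_1)
print(round(level_min_dist_sq(4, 2), 6))  # 8.0  = d_min^2(R_2) = d_min^2(R_3)
```

The level-$L_0$ value, about $0.61$, is the full 16-PSK squared minimum distance seen by the receivers without PSK-SICG.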
\section{Index Coded Modulation with QAM}
\label{sec:ICQAM}
Instead of transmitting $N$ index bits as a point from $2^N$- PSK, we can also transmit the index coded bits as a signal point from $2^N$- QAM signal set, with the average energy of the QAM symbol being equal to the total energy of the $N$ BPSK transmissions. The Algorithm \ref{algo_psk} in Section \ref{sec:PSK_Algo} can be used to map the index coded bits to QAM symbols.
Before running the algorithm to map the index coded bits to $2^N$- QAM symbols, we need to choose an appropriate $2^N$- QAM signal set, as follows:
\begin{itemize}
\item \textbf{if} $N$ is even, choose the $2^N$- square QAM with average symbol energy being equal to $N$.
\item \textbf{else}, take the $2^{N+1}$- square QAM with average symbol energy equal to $N$. Use Ungerboeck set partitioning \cite{TCM} to partition the $2^{N+1}$- QAM signal set into two $2^N$ signal sets. Choose any one of them as the $2^N$- QAM signal set.
\end{itemize}
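The even-$N$ branch of this choice can be sketched as follows (an illustration under our own naming, not the paper's code); for $N=4$ it yields the 16-QAM set with average symbol energy 4 and $d_{min}^2 = 1.6$:

```python
import itertools, math

def square_qam(N):
    """2**N square QAM (N even), scaled to average symbol energy N."""
    m = 2**(N // 2)
    levels = [2 * k - (m - 1) for k in range(m)]   # ..., -3, -1, 1, 3, ...
    pts = [complex(a, b) for a in levels for b in levels]
    scale = math.sqrt(N / (sum(abs(p)**2 for p in pts) / len(pts)))
    return [scale * p for p in pts]

pts = square_qam(4)
dmin2 = min(abs(a - b)**2 for a, b in itertools.combinations(pts, 2))
print(round(dmin2, 6))  # 1.6 for 16-QAM with average energy 4
```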
After choosing the appropriate signal set, the mapping proceeds in the same way as the mapping of index coded bits to PSK symbols. For the Example \ref{ex_psk_1}, the QAM mapping is shown in Fig. \ref{fig: Map_ex-PSK_QAM}.
\begin{figure}
\begin{center}
\includegraphics[scale= 0.6]{ex1_16-qam}
\caption{16-QAM mapping for Example \ref{ex_psk_1}}
\label{fig: Map_ex-PSK_QAM}
\end{center}
~\hrule
\end{figure}
The definitions of bandwidth gain, side information coding gain and absolute coding gain are all the same except for the fact that the index coded bits are now transmitted as a QAM signal. Since we transmit a QAM signal, we call them QAM bandwidth gain, QAM side information coding gain (QAM-SICG) and QAM absolute coding gain (QAM-ACG), respectively. Further, since the condition for obtaining SICG depends only on the size of the signal set used, the same set of conditions holds for a receiver to obtain QAM-SICG.
Since, for a given index coding problem and a chosen index code, the index codeword can be transmitted either as a PSK symbol or as a QAM symbol, with the conditions for obtaining side information coding gain being the same, we need to determine which results in a better probability of error performance. This is answered in the following theorem.
\begin{theorem}
\label{th_qam}
A receiver $R_i$ with $\eta_i \leq 2$ will get better performance when the $N$ index coded bits are transmitted as a $2^N$- PSK symbol whereas a receiver with $\eta_i > 2$ has better performance when the index coded bits are transmitted as a $2^N$- QAM symbol.
\begin{proof}
When the $N$ bit index code is transmitted as a signal point from $2^N$- PSK or $2^N$- QAM signal set, the receiver $R_i$ will see an effective signal set of size $2^{\eta_i}$. The side information coding gain for receivers satisfying the condition $\eta_i < N$ comes from mapping the $2^{\eta_i}$ index codewords to signal points on the $2^N$ signal set such that the minimum distance of these $2^{\eta_i}$ signal points is equal to the minimum distance of $2^{\eta_i}$- PSK or QAM and not that of $2^N$- PSK or QAM.
So to prove that for $\eta_i \geq 3$, QAM gives a better error performance, we will show that, for equal average signal energy, $2^{\eta_i}$ points can be mapped to signal points in $2^N$- QAM constellation with a higher minimum distance than to the signal points in $2^N$- PSK.
The largest possible pairwise minimum distances obtained by any mapping of $2^{\eta}$ points to the $2^N$- PSK and QAM signal sets are as follows.
\begin{align*}
d_{min-\text{PSK}}(N,\eta) &= 2 \sqrt{N}\sin\left(\frac{\pi}{2^\eta} \right). \\
d_{min-\text{QAM}}(N,\eta)&=\left\{
\begin{array}{@{}ll@{}}
\sqrt{2}^{N-\eta+2}\sqrt{\dfrac{1.5N}{(2^{N}-1)}}, & \text{if}\ N \text{ is even} \\
\sqrt{2}^{N-\eta+3}\sqrt{\dfrac{1.5N}{(2^{N+1}-1)}}, & \text{otherwise.}
\end{array}\right.
\end{align*}
For sufficiently large values of $N$, $d_{min-\text{QAM}}(N,\eta)$ can be approximated for even and odd values of $N$ as $d_{min-\text{QAM}}(N,\eta) \approxeq \sqrt{2}^{2-\eta}\sqrt{1.5N}$. \\
\textit{Case 1:} For sufficiently large $N$ and $\eta \geq 3$.\\
For $\eta=3, \ \sin\left( \frac{\pi}{2^3}\right) = 0.3827$ and $\frac{\pi}{2^3} = 0.3927$. Since the small-angle approximation only improves as $\eta$ grows, for all $\eta \geq 3$ we take $\sin\left( \frac{\pi}{2^\eta}\right)\approxeq \frac{\pi}{2^\eta}$.
Therefore, we have
\begin{align*}
d_{min-\text{PSK}}(N,\eta) & \approxeq 2 \sqrt{N}\left(\frac{\pi}{2^\eta} \right)\\ d_{min-\text{QAM}}(N,\eta) & \approxeq \sqrt{2}^{2-\eta}\sqrt{1.5N}.
\end{align*}
We see that $\frac{d_{min-\text{QAM}}(N,\eta)}{d_{min-\text{PSK}}(N,\eta)}= \left( \frac{\sqrt{1.5}}{\pi}\right) \sqrt{2}^\eta \geq 1, \forall \ \eta \geq 3.$
Therefore, QAM gives a better performance than PSK if $\eta \geq 3$, for sufficiently large $N$.\\
\textit{Case 2:} For sufficiently large $N$ and $\eta = 1,2$.\\
With $\eta=1$, we have
\begin{align*}
d_{min-\text{PSK}}(N,1) & = 2 \sqrt{N}\sin\left(\frac{\pi}{2} \right) =2\sqrt{N} \\ d_{min-\text{QAM}}(N,1) & \approxeq \sqrt{2}\,\sqrt{1.5N} = \sqrt{3N}.
\end{align*}
Clearly, $d_{min-\text{PSK}}(N,1) > d_{min-\text{QAM}}(N,1)$. Therefore, PSK has a better performance.
Similarly with $\eta=2$, we have $d_{min-\text{PSK}}(N,2) = \sqrt{2N}$ and $d_{min-\text{QAM}}(N,2) = \sqrt{1.5N}.$ Again, PSK performs better than QAM.\\
\end{proof}
\end{theorem}
\begin{figure*}
\includegraphics[scale=0.4]{PSK_QAM}
\caption{Minimum distance of PSK and QAM for different values of $N$ and $\eta$}
\label{fig: min_dis_N_eta}
~\hrule
\end{figure*}
We have also given a plot validating the result for $N=3,4,5,6$ and $7$ in Fig. \ref{fig: min_dis_N_eta}. Hence we see that for receivers with a smaller amount of side information, i.e., receivers which see effective signal sets with eight points or more, transmitting the index codeword as a QAM symbol results in better probability of error performance.
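The crossover at $\eta = 3$ can also be checked numerically from the two closed-form distance expressions in the proof (a quick sketch of ours, not part of the original simulations):

```python
import math

def d_psk(N, eta):
    """Largest pairwise minimum distance of 2**eta points on the 2**N-PSK
    signal set with average symbol energy N."""
    return 2 * math.sqrt(N) * math.sin(math.pi / 2**eta)

def d_qam(N, eta):
    """Same quantity on the 2**N-QAM signal set of the previous section
    (even / odd N branches)."""
    if N % 2 == 0:
        return math.sqrt(2)**(N - eta + 2) * math.sqrt(1.5 * N / (2**N - 1))
    return math.sqrt(2)**(N - eta + 3) * math.sqrt(1.5 * N / (2**(N + 1) - 1))

winner = {(N, eta): 'QAM' if d_qam(N, eta) > d_psk(N, eta) else 'PSK'
          for N in range(3, 8) for eta in range(1, N + 1)}
# PSK wins whenever eta <= 2 and QAM whenever eta >= 3, for every N in 3..7
```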
\section{Simulation Results}
\label{sec:simu}
Simulation results for Example \ref{ex_psk_1} are shown in Fig. \ref{fig:ex1_sim}. We see that the probability of message error plot corresponding to $R_{1}$ is well to the left of the plots of $R_{2}$ and $R_{3}$, which themselves are far to the left of those of the other receivers, as $R_{1},\ R_{2},\ R_{3}$ get PSK-SICG as defined in Section \ref{sec:Model}. Since $\left|S_{1}\right|>\left|S_{2}\right|=\left|S_{3}\right|$, $R_{1}$ gets the highest PSK-SICG. Further, since $\mathcal{K}_{4}, \ \mathcal{K}_{5},\ \mathcal{K}_{6}$ and $\mathcal{K}_{7}$ do not satisfy either of the two required conditions, the corresponding receivers do not get PSK-SICG. The performance improvement gained by $R_1, R_2$ and $R_3$ over the 4-fold BPSK index code transmission can also be observed.
\begin{figure*}
\includegraphics[scale=0.4]{example11_psk}
\caption{Simulation results for Example \ref{ex_psk_1}.}
\label{fig:ex1_sim}
\end{figure*}
From the probability of message error plot, it would seem that the receivers $R_{4}, R_{5}, R_{6}$ and $R_{7}$ lose out in probability of message error performance to the 4-fold BPSK scheme. However, they are merely trading off coding gain for bandwidth gain: where the 4-fold BPSK scheme for this example uses 4 real dimensions, the proposed scheme uses only 1 complex dimension, i.e., 2 real dimensions. Hence the receivers $R_{4},\ R_{5},\ R_{6}$ and $R_{7}$ get PSK bandwidth gain even though they do not get PSK-ACG, whereas $R_{1}$, $R_{2}$ and $R_{3}$ get both PSK bandwidth gain and PSK-ACG. The amount of PSK-SICG, PSK bandwidth gain and PSK-ACG that each receiver gets is summarized in TABLE \ref{Table1}.
{\footnotesize
\begin{table}[h]
\renewcommand{\arraystretch}{1.25}
\begin{center}
\begin{tabular}{|m{2cm}|c|c|c|c|c|c|c|}
\hline
Parameter & $R_{1}$ & $R_{2}$ & $R_{4}$ & $R_{5}$ & $R_{6}$ & $R_{7}$ \\
\hline
$d_{min_{PSK}}^2$ & 16 & 8 & 0.61 & 0.61 & 0.61 & 0.61 \\
$d_{min_{binary}}^2$ & 4 & 4 & 4 & 4 & 4 & 4 \\
PSK bandwidth gain & 2 & 2 & 2 & 2 & 2 & 2 \\
PSK-SICG (in dB) & 14.19 & 11.19 & 0 & 0 & 0 & 0 \\
PSK-ACG (in dB) & 6.02 & 3.01 & -8.16 & -8.16 & -8.16 & -8.16\\
\hline
\end{tabular}
\caption \small { Table showing PSK-SICG, PSK bandwidth gain and PSK-ACG for different receivers in Example \ref{ex_psk_1}. $R_3$ has the same values as $R_2$.}
\label{Table1}
\end{center}
\end{table}
}
\begin{figure}
\begin{center}
\includegraphics[scale=0.42]{example-3b}
\caption{16-PSK Mapping for Example \ref{ex3}.}
\label{fig6}
\end{center}
~\hrule
\end{figure}
\begin{figure*}
\includegraphics[scale=0.4]{example22_psk}
\caption{Simulation results for Example \ref{ex3}.}
\label{fig5}
\end{figure*}
Now consider Example \ref{ex3}. Here, suppose we fix $( x_2, x_3, x_4, x_5, x_6)= (00000)$; then $\mathcal{C}_{1}=\lbrace \lbrace0000\rbrace, \lbrace1000\rbrace \rbrace$. After mapping these codewords to a subset at level $L_3$ of the Ungerboeck partition of the 16-PSK signal set, a subset of $\mathcal{C}$ which results in maximum overlap with the already mapped codewords is $\mathcal{C}_{2} = \lbrace \lbrace0000\rbrace, \lbrace0001\rbrace, \lbrace0100\rbrace,\lbrace0101\rbrace \rbrace$. We see that $\mathcal{C}_{1}\not \subseteq \mathcal{C}_{2}$, so all the codewords in $\mathcal{C}_{2}$ cannot be mapped to the same 4-point subset at level $L_2$ without disturbing the mapping of the codewords of $\mathcal{C}_{1}$ already done. So we try to map them in such a way that the minimum distance satisfies $d_{min}(R_2) \geq d_{min}$ of 8-PSK. The algorithm gives a mapping with the best possible $d_{min}(R_2)$ while keeping $d_{min}(R_1) = d_{min}$ of 2-PSK. This mapping is shown in Fig. \ref{fig6}.
Simulation results for this example are shown in Fig. \ref{fig5}. The receivers $R_{1}, R_{2}, R_{3}$ and $R_{4}$ get PSK-SICG. We see that the probability of message error plot corresponding to the 4-fold BPSK binary transmission scheme lies near those of $R_{3}$ and $R_{4}$, showing better performance for receivers $R_{1}$ and $R_{2}$. Thus receivers $R_{1}$ and $R_{2}$ get PSK-ACG as well as PSK bandwidth gain over the 4-fold BPSK scheme, $R_{3}$ and $R_{4}$ get the same performance as 4-fold BPSK with additional bandwidth gain, and $R_{5}$ and $R_{6}$ trade off coding gain for bandwidth gain. The amount of PSK-SICG, PSK bandwidth gain and PSK-ACG that each receiver gets is summarized in TABLE \ref{Table3}.
\begin{table}[h]
\renewcommand{\arraystretch}{1.25}
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
Parameter & $R_{1}$ & $R_{2}$ & $R_{3}$ & $R_{4}$ & $R_{5}$ & $R_{6}$ \\
\hline
$d_{min_{PSK}}^2$ & 16 & 4.94 & 2.34 & 2.34 & 0.61 & 0.61 \\
$d_{min_{binary}}^2$ & 4 & 4 & 4 & 4 & 4 & 4 \\
PSK bandwidth gain & 2 & 2 & 2 & 2 & 2 & 2 \\
PSK-SICG (in dB) & 14.19 & 9.08 & 5.84 & 5.84 & 0 & 0 \\
PSK-ACG (in dB) & 6.02 & 0.92 & -2.33 & -2.33 & -8.16 & -8.16 \\
\hline
\end{tabular}
\caption \small { Table showing PSK-SICG, PSK bandwidth gain and PSK-ACG for different receivers in Example \ref{ex3}.}
\label{Table3}
\end{center}
\end{table}
\begin{remark}
Even though the minimum distance for the 4-fold BPSK transmissions is better than $d_{min}(R_3)$ and $d_{min}(R_4)$, as seen from TABLE \ref{Table3}, the probability of error plot for the 4-fold BPSK scheme lies slightly to the right of the error plots for $R_3$ and $R_4$. This is because the 4-fold BPSK scheme occupies twice the bandwidth of the 16-PSK scheme, so its noise power $= N_o \times (\text{bandwidth})$, where $N_o$ is the noise power spectral density, is twice as large. Therefore, the signal to noise power ratio of the 4-fold BPSK scheme is half that of the 16-PSK scheme, even though the transmitted signal power is the same for both schemes.
\end{remark}
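In decibel terms, the halving of the signal to noise ratio described in the remark corresponds to a fixed horizontal shift of about 3 dB between the two sets of plots (simple arithmetic, shown here only for concreteness):

```python
import math

# Doubling the occupied bandwidth doubles the noise power N_o * (bandwidth),
# so at equal transmitted signal power the SNR drops by a factor of 2:
shift_db = 10 * math.log10(2)
print(round(shift_db, 2))  # 3.01
```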
\begin{figure*}
\begin{center}
\includegraphics[scale=0.4]{N_to_n2_psk}
\caption{Simulation results for Example \ref{ex_N_to_n1}.}
\label{fig12}
\end{center}
~\hrule
\end{figure*}
The following example demonstrates that if $\eta_{i}= \eta_j$ for some $i \neq j$, then depending on the ordering of the $\eta_i$ fixed before starting the algorithm, the mapping changes and hence the probability of error performances of $R_i$ and $R_j$ can change.
\begin{example}
\label{ex4}
Let $m=n=6$ and the demanded messages be $\mathcal{W}_i = x_{i}, \ \forall \ i\in \lbrace 1, 2, \ldots, 6 \rbrace $. The side information sets possessed by the receivers are
$\mathcal{K}_1 =\left\{2, 4, 5, 6\right\},\ \mathcal{K}_2=\left\{1, 3, 4, 5\right\},\ \mathcal{K}_3=\left\{2,4\right\},\ \mathcal{K}_4=\left\{1,3\right\},\ \mathcal{K}_5=\left\{2\right\},\ \text{and} \ \mathcal{K}_6=\left\{1\right\}$.\\
For this problem, the minrank is $N=3$.
An optimal linear index code is given by the encoding matrix,
{\small
\begin{center}
$L = \left[\begin{array}{ccc}
1 & 0 & 0 \\
0 & 1 & 0 \\
0 & 0 & 1 \\
0 & 0 & 1 \\
0 & 1 & 0 \\
1 & 0 & 0
\end{array}\right]$. \\
\end{center}
}
Here, $y_{1}=x_{1}+x_{6}$, $y_{2}=x_{2}+x_{5}$ and $y_{3}=x_{3}+x_{4}$.
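These three parities can be verified mechanically by computing $\mathbf{y} = \mathbf{x}L$ over GF(2) for all message vectors (a small sketch; variable names are ours):

```python
import itertools

# Encoding matrix L of this example, rows indexed by x_1, ..., x_6
L = [(1, 0, 0), (0, 1, 0), (0, 0, 1), (0, 0, 1), (0, 1, 0), (1, 0, 0)]

def encode(x):
    """y = x L over GF(2)."""
    return tuple(sum(xi * row[j] for xi, row in zip(x, L)) % 2 for j in range(3))

for x in itertools.product((0, 1), repeat=6):
    y = encode(x)
    assert y == ((x[0] + x[5]) % 2, (x[1] + x[4]) % 2, (x[2] + x[3]) % 2)
```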
\begin{figure}[h]
\begin{center}
\includegraphics[scale=0.39]{example-4b}
\caption{8-PSK Mappings for the 2 cases in Example \ref{ex4}.}
\label{fig7}
\end{center}
~\hrule
\end{figure}
\begin{figure*}
\includegraphics[scale=0.4]{example3}
\caption{Simulation results for Example \ref{ex4}.}
\label{fig9}
\end{figure*}
We see that $\left|\mathcal{K}_1\right| = \left|\mathcal{K}_2\right|$ and $\left|S_{1}\right| = \left|S_2\right|$, so $\eta_1 = \eta_2$. Then, we can choose to prioritize $R_{1}$ or $R_{2}$ depending on the requirement. If we choose $R_{1}$, the resulting mapping is shown in Fig. \ref{fig7}(a), and if we choose $R_{2}$, the mapping is shown in Fig. \ref{fig7}(b). Simulation results for this example with the mapping in Fig. \ref{fig7}(a) are shown in Fig. \ref{fig9}, where we can see that $R_{1}$ outperforms the other receivers. $R_{1}$ and $R_{2}$ get PSK-SICG as expected. They also get PSK-ACG. The other receivers have the same performance as the 3-fold BPSK scheme. All 6 receivers get PSK bandwidth gain. The amount of PSK-SICG, PSK bandwidth gain and PSK-ACG that each receiver gets is summarized in TABLE \ref{Table4}.
{\small
\begin{table}[h]
\renewcommand{\arraystretch}{1.25}
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
Parameter & $R_{1}$ & $R_{2}$ & $R_{3}$ & $R_{4}$ & $R_{5}$ & $R_{6}$ \\
\hline
$d_{min_{PSK}}^2$ & 6 & 1.76 & 1.76 & 1.76 & 1.76 & 1.76 \\
$d_{min_{binary}}^2$ & 4 & 4 & 4 & 4 & 4 & 4 \\
PSK bandwidth gain & 1.5 & 1.5 & 1.5 & 1.5 & 1.5 & 1.5 \\
PSK-SICG (in dB) & 5.33 & 0 & 0 & 0 & 0 & 0 \\
PSK-ACG (in dB) & 1.77 & -3.56 & -3.56 & -3.56 & -3.56 & -3.56 \\
\hline
\end{tabular}
\caption \small { Table showing PSK-SICG, PSK bandwidth gain and PSK-ACG for different receivers for case (a) in Example \ref{ex4}.}
\label{Table4}
\end{center}
\end{table}
}
\end{example}
Here, even though $d_{min}(R_2)=d_{min}(R_3)=d_{min}(R_4)=d_{min}(R_5)=d_{min}(R_6)$, the probability of error plot of $R_2$ is well to the left of the error plots of $R_3$, $R_4$, $R_5$ and $R_6$. This is because the distance distribution seen by $R_2$ is different from that seen by the other receivers, as shown in TABLE \ref{Table5}, where $d_{min_1}$ denotes the minimum pairwise distance, $d_{min_2}$ the second smallest pairwise distance, and so on.
{\small
\begin{table}
\renewcommand{\arraystretch}{1.25}
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
Parameter & $R_1$ & $R_2$ & $R_3$ & $R_4$ & $R_5$ & $R_6$ \\
\hline
Effective signal set seen & 4 pt & 4 pt & 8 pt & 8 pt & 8 pt & 8 pt \\
$d_{min_1}$ & 6 & 1.76 & 1.76 & 1.76 & 1.76 & 1.76 \\
No. of pairs & 4 & 4 & 8 & 8 & 8 & 8\\
$d_{min_2}$ & 12 & 10.24 & 6 & 6 & 6 & 6 \\
No. of pairs & 2 & 2 & 8 & 8 & 8 & 8 \\
$d_{min_3}$ & -- & 12 & 10.24 & 10.24 & 10.24 & 10.24\\
No. of pairs & 0 & 2 & 8 & 8 & 8 & 8 \\
$d_{min_4}$ & -- & -- & 12 & 12 & 12 & 12 \\
No. of pairs & 0 & 0 & 4 & 4 & 4 & 4 \\
\hline
\end{tabular}
\caption \small {Table showing the pair-wise distance distribution for the receivers in Example \ref{ex4}.}
\label{Table5}
\end{center}
\end{table}
}
\subsection{$2^N$-PSK to $2^n$-PSK}
The simulation results for Example \ref{ex_N_to_n1} are shown in Fig. \ref{fig12}. We can see that the performance of the best performing receiver, i.e., $R_1$, improves as we go from $N$ to $n$. The minimum distances seen by the different receivers for the three cases considered, namely 8-PSK, 16-PSK and 32-PSK, are listed in TABLE \ref{Table7}.
\begin{table}[h]
\renewcommand{\arraystretch}{1.25}
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|}
\hline
Parameter & $R_{1}$ & $R_{2}$ & $R_{3}$ & $R_{4}$ & $R_{5}$ \\
\hline
$d_{min_{\ 8-PSK}}^2$ & 12 & 6 & 1.76 & 1.76 & 1.76 \\
$d_{min_{\ 16-PSK}}^2$ & 16 & 8 & 0.61 & 0.61 & 0.61 \\
$d_{min_{\ 32-PSK}}^2$ & 20 & 8.05 & 0.76 & 0.76 & 0.19 \\
$d_{min_{binary}}^2$ & 4 & 4 & 4 & 4 & 4 \\
\hline
\end{tabular}
\caption \small { Table showing the minimum distances seen by different receivers for 8-PSK, 16-PSK and 32-PSK in Example \ref{ex_N_to_n1}.}
\label{Table7}
\end{center}
\end{table}
\end{example}
This example satisfies the condition in Lemma \ref{lemm:N_to_n} and hence the difference in performance between $R_1$ and $R_5$ increases monotonically with the length of the index code used. However, as stated in Remark \ref{rem:N_to_n}, when the receiver with the worst probability of error performance knows at least one message a priori, the difference between the performances of the best and worst receivers need not increase monotonically. This is illustrated in the following example.
\begin{example}
\label{ex5}
Let $m=n=4$ and $\mathcal{W}_i = x_{i},\ \forall \ i\in \lbrace 1, 2, \ldots, 4 \rbrace $, with the side information sets being
$\mathcal{K}_1 =\left\{2, 3, 4\right\},\ \mathcal{K}_2=\left\{1, 3\right\},\ \mathcal{K}_3=\left\{1,4\right\}$ and $\mathcal{K}_4=\left\{2\right\}$. \\
For this problem, the minrank evaluates to $N=2$.
An optimal linear index code is given by the encoding matrix,\\
\begin{center}
$L_1 = \left[\begin{array}{cc}
1 & 0 \\
1 & 1 \\
1 & 0 \\
0 & 1
\end{array}\right]$.\\
\end{center}
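One can check exhaustively that this $L_1$ lets every receiver recover its demanded message from $(y_1, y_2)$ together with its side information (a sketch using the side information sets of this example; names are ours):

```python
import itertools

L1 = [(1, 0), (1, 1), (1, 0), (0, 1)]             # rows: x_1, ..., x_4
K = {1: {2, 3, 4}, 2: {1, 3}, 3: {1, 4}, 4: {2}}  # side information sets

def encode(x):
    """y = x L_1 over GF(2)."""
    return tuple(sum(xi * row[j] for xi, row in zip(x, L1)) % 2 for j in range(2))

# receiver R_i must pin down x_i uniquely given y and its known messages
for i, side in K.items():
    for x in itertools.product((0, 1), repeat=4):
        candidates = {z[i - 1] for z in itertools.product((0, 1), repeat=4)
                      if encode(z) == encode(x)
                      and all(z[s - 1] == x[s - 1] for s in side)}
        assert candidates == {x[i - 1]}
```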
The corresponding 4-PSK mapping is given in Fig. \ref{fig8}(a).
\begin{figure*}
\includegraphics[scale=0.4]{N_to_n}
\caption{Simulation results for Example \ref{ex5}.}
\label{fig10}
~\hrule
\end{figure*}
\begin{figure*}[h]
\includegraphics[scale=0.4]{ex1_PSK_vs_QAM}
\caption{Simulation result comparing the performance of 16-PSK and 16-QAM for Example \ref{ex_psk_1}.}
\label{fig: sim_ex-PSK_QAM}
~\hrule
\end{figure*}
\begin{figure}[h]
\begin{center}
\includegraphics[scale=0.4]{example-5b}
\caption{4-PSK, 8-PSK and 16-PSK Mappings for Example \ref{ex5}.}
\label{fig8}
\end{center}
~\hrule
\end{figure}
Now assume that we did not know the minrank for the above problem and chose $N=3$. Then an encoding matrix is
\begin{center}
$L_2 = \left[\begin{array}{ccc}
1 & 0 & 0 \\
1 & 0 & 0 \\
0 & 1 & 0 \\
0 & 0 & 1 \\
\end{array}\right]$, \\
\end{center}
and an 8-PSK mapping which gives the best possible PSK-SICGs for the different receivers is shown in Fig. \ref{fig8}(b).
Now, compare the above two cases with the case where the 4 messages are transmitted as they are, i.e., $ \left[y_1 \ y_2\ y_3\ y_4\right]=\left[x_1\ x_2\ x_3\ x_4\right]$. A 16-PSK mapping which gives the maximum possible PSK-SICG is shown in Fig. \ref{fig8}(c).
From the simulation results shown in Fig. \ref{fig10}, we see that the performance of the best receiver, i.e., $R_1$, improves as we go from $N$ to $n$. However, the gap between the best performing receiver and the worst performing receiver widens as we go from $N$ to $n$. The reason for the difference in performance seen by different receivers is that they see different minimum distances, which are summarized in TABLE \ref{Table6} for 4-PSK, 8-PSK and 16-PSK.
\begin{table}[h]
\renewcommand{\arraystretch}{1.25}
\begin{center}
\begin{tabular}{|c|c|c|c|c|}
\hline
Parameter & $R_{1}$ & $R_{2}$ & $R_{3}$ & $R_{4}$ \\
\hline
$d_{min_{\ 4-PSK}}^2$ & 8 & 4 & 4 & 4 \\
$d_{min_{\ 8-PSK}}^2$ & 12 & 6 & 1.76 & 1.76 \\
$d_{min_{\ 16-PSK}}^2$ & 16 & 4.94 & 2.34 & 2.34 \\
$d_{min_{binary}}^2$ & 4 & 4 & 4 & 4 \\
\hline
\end{tabular}
\caption \small { Table showing the minimum distances seen by different receivers for 4-PSK, 8-PSK and 16-PSK in Example \ref{ex5}.}
\label{Table6}
\end{center}
\end{table}
\end{example}
Here we see that the difference in performance between the best and worst receiver is not monotonically widening with the length of the index code employed.
\subsection{Comparison between PSK and QAM}
For Example \ref{ex_psk_1}, the plot comparing the performances of PSK and QAM is shown in Fig. \ref{fig: sim_ex-PSK_QAM}.
\begin{table}[h]
\renewcommand{\arraystretch}{1.25}
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|c|c|}
\hline
Parameter & $R_{1}$ & $R_{2}$ & $R_{4}$ & $R_{5}$ & $R_{6}$ & $R_{7}$ \\
\hline
$d_{min_{\ 16-QAM}}^2$ & 12.8 & 6.4 & 1.6 & 1.6 & 1.6 & 1.6 \\
$d_{min_{\ 16-PSK}}^2$ & 16 & 8 & 0.61 & 0.61 & 0.61 & 0.61 \\
$d_{min_{binary}}^2$ & 4 & 4 & 4 & 4 & 4 & 4 \\
\hline
\end{tabular}
\caption \small { Table showing the minimum distance seen by different receivers while using 16-QAM and 16-PSK in Example \ref{ex_psk_1}. $R_3$ has the same values as $R_2$.}
\label{Table-ex_PSK_QAM}
\end{center}
\end{table}
We can see that while $R_1, \ R_2 \ \text{and} \ R_3$ perform better when the index coded bits are transmitted as a PSK signal, the other receivers have better performance when a QAM symbol is transmitted. This is because of the difference in the minimum distances seen by the different receivers, as summarized in TABLE \ref{Table-ex_PSK_QAM}. This observation agrees with Theorem \ref{th_qam}.
\section{Conclusion}
\label{sec:conc}
The mapping and 2-D transmission scheme proposed in this paper are applicable to any index coding problem setting. In a practical scenario, we can use this mapping scheme to prioritize those customers who are willing to pay more, provided their side information satisfies the conditions mentioned in Section \ref{sec:Model}. Further, the mapping scheme depends on the index code, i.e., on the encoding matrix $L$ chosen, since $L$ determines $\left|S_{i}\right|, \forall \ i \in \lbrace1, 2, \ldots, m\rbrace$. So we can even choose an $L$ matrix that favors a chosen customer, provided $L$ satisfies the condition that all users use the minimum possible number of binary transmissions to decode their required messages. Further, if we are interested only in giving the best possible performance to a chosen customer who has a large amount of side information, and not in giving the best possible performance to every receiver, then using a $2^n$- PSK/QAM would be a better strategy. The mapping and 2-D transmission scheme introduced in this paper are also applicable to index coding over fading channels, which was considered in \cite{OLIC}.
\section{Acknowledgment}
This work was supported partly by the Science and Engineering Research Board (SERB) of Department of Science and Technology (DST), Government of India, through J.C. Bose National Fellowship to B. Sundar Rajan.
\section{Introduction and main results}
Uniformly hyperbolic dynamical systems are very well understood. An approach
to study more general systems is to see to what extent they resemble
uniformly hyperbolic ones. A very fruitful approach in this respect is the
development of Pesin theory, which requires hyperbolic features (no zero
Lyapunov exponent) almost everywhere with respect to an invariant measure,
and constructs from these local stable and unstable manifolds, leading
to results such as the ergodicity of the system under study.
A basic tool in Pesin theory is the notion of Pesin sets, made of points for
which, along their orbits, the Oseledets decomposition is well controlled in
a quantitative way. Their existence follows from a general measure theory
argument, but they are not really explicit. Even in uniformly hyperbolic
situations, Pesin sets are relevant objects, as the control of the Oseledets
decomposition gives directions in which the dynamics is close to conformal.
In particular, the second author has shown in~\cite{stoyanov_dolgopyat} that
Pesin sets could be used, in contact Anosov flows, to study the decay of
correlations: he proved that, if points return exponentially fast to Pesin
sets, then the correlations decay exponentially fast.
Our goal in this article is to investigate this question, for Anosov
diffeomorphisms and flows. We do not have a complete answer, but our results
indicate a dichotomy: if the dynamics is not too far away from conformality
(for instance in the case of the geodesic flow on a $1/4$-pinched compact
manifold of negative curvature), points return exponentially fast to Pesin
sets for generic metrics (in a very strong sense), and possibly for all
metrics. On the other hand, far away from conformality, this should not be
the case (we have a counter-example in a related setting, but with weaker
regularity).
Such statements are related to large deviations estimates for matrix
cocycles, i.e., products of matrices governed by the dynamics (for Pesin
theory, the cocycle is simply the differential of the map). Indeed, we will
show that such large deviations estimates make it possible to control the
returns to Pesin sets, by quantifying carefully some arguments in the proof
of Oseledets theorem.
\bigskip
Let $T: X \to X$ be a measurable map on a space $X$, preserving an ergodic
probability measure $\mu$. Consider a measurable bundle $E$ over $X$, where
each fiber is isomorphic to $\R^d$ and endowed with a norm. A linear cocycle
is a measurable map $M$ on $E$, mapping the fiber above $x$ to the fiber
above $Tx$ in a linear way, through a matrix $M(x)$. We say that the cocycle
is log-integrable if $\int \log \max(\norm{M(x)}, \norm{M(x)^{-1}}) \dd\mu(x)
< \infty$. In this case, it follows from Kingman's theorem that one can
define the Lyapunov exponents of the cocycle, denoted by $\lambda_1 \geq
\lambda_2 \geq \dotsb \geq \lambda_d$. They are the growth rate of vectors
under iteration of the cocycle, above $\mu$-almost every point. The sum
$\lambda_1+\dotsb+\lambda_i$ is also the asymptotic exponential growth rate
of the norm of the $i$-th exterior power $\Lambda^i M^n(x)$, for $\mu$-almost
every $x$.
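For a constant cocycle $M(x) = A$, this growth-rate description can be checked by hand: $\frac{1}{n}\log \norm{\Lambda^i A^n}$ converges to the sum of the $i$ largest exponents. A small numeric sketch (our own illustration, with a fixed symmetric matrix of determinant one, so that $\lambda_1 + \lambda_2 = 0$):

```python
import math

def mat_mul(B, C):
    return [[sum(B[i][k] * C[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_pow(B, n):
    R = [[1, 0], [0, 1]]
    for _ in range(n):
        R = mat_mul(R, B)
    return R

A = [[2, 1], [1, 1]]          # constant cocycle M(x) = A, with det A = 1
n = 15
An = mat_pow(A, n)            # exact integer arithmetic

frob2 = sum(An[i][j]**2 for i in range(2) for j in range(2))
det = An[0][0] * An[1][1] - An[0][1] * An[1][0]                  # = 1
top_sv = math.sqrt((frob2 + math.sqrt(frob2**2 - 4 * det**2)) / 2)

lam1 = math.log(top_sv) / n               # -> lambda_1 = log((3 + sqrt(5)) / 2)
lam1_plus_lam2 = math.log(abs(det)) / n   # growth rate of Lambda^2 A^n -> 0
```

Here the top singular value of the $2\times 2$ matrix $A^n$ is recovered from its Frobenius norm and determinant, and $\norm{\Lambda^2 A^n} = |\det A^n|$.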
The main condition to get exponential returns to Pesin sets is an exponential
large deviations condition.
\begin{definition}
\label{def: exp_dev_general} Consider a transformation $T$ preserving a
probability measure $\mu$, and a family of functions $u_n : X \to \R$. Assume
that, almost everywhere, $u_n(x)/n$ converges to a limit $\lambda$. We say
that the family has exponential large deviations if, for any $\epsilon>0$,
there exists $C>0$ such that, for all $n\geq 0$,
\begin{equation*}
\mu\{ x \st \abs{u_n(x) - n \lambda} \geq n \epsilon\} \leq C e^{-C^{-1}n}.
\end{equation*}
\end{definition}
This general definition specializes to several situations that will be
relevant in this paper:
\begin{definition}
\label{def:exp_large_dev_u} Consider an integrable function $u$ above an
ergodic transformation $(T,\mu)$. We say that $u$ has exponential large
deviations if its Birkhoff sums $S_n u$ have exponential large deviations in
the sense of Definition~\ref{def: exp_dev_general}, i.e., for any
$\epsilon>0$, there exists $C>0$ such that, for all $n\geq 0$,
\begin{equation*}
\mu\{ x \st \abs{S_n u(x) - n \int u} \geq n \epsilon\} \leq C e^{-C^{-1}n}.
\end{equation*}
\end{definition}
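As a toy illustration of this definition (ours, not from the paper): under the doubling map with Lebesgue measure, Birkhoff sums of $u = \mathbf{1}_{[0,1/2)}$ behave as sums of i.i.d. fair bits, so the deviation probability can be computed exactly and compared with the exponential (Hoeffding) bound:

```python
import math

def dev_prob(n, eps):
    """Exact P(|S_n - n/2| >= n*eps) for S_n a sum of n i.i.d. fair bits;
    models mu{ |S_n u - n int u| >= n eps } for u = 1_{[0,1/2)}."""
    return sum(math.comb(n, k) for k in range(n + 1)
               if abs(k - n / 2) >= n * eps) / 2**n

eps = 0.1
for n in (50, 100, 200):
    p, bound = dev_prob(n, eps), 2 * math.exp(-2 * eps**2 * n)
    print(n, p <= bound)   # the deviation probability decays exponentially in n
```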
\begin{definition}
\label{def:exp_large_dev} Consider a log-integrable linear cocycle $M$ above
a transformation $(T,\mu)$, with Lyapunov exponents $\lambda_1 \geq \dotsb
\geq \lambda_d$. We say that $M$ has exponential large deviations for its top
exponent if the family of functions $u_n(x) = \log \norm{M^n(x)}$ (which
satisfies $u_n(x)/n \to \lambda_1$ almost everywhere) has exponential large
deviations in the sense of Definition~\ref{def: exp_dev_general}, i.e., for
any $\epsilon>0$, there exists $C>0$ such that, for all $n\geq 0$,
\begin{equation*}
\mu\{ x \st \abs{\log \norm{M^n(x)} - n \lambda_1} \geq n \epsilon\} \leq C e^{-C^{-1}n}.
\end{equation*}
We say that $M$ has exponential large deviations for all exponents if, for
any $i\leq d$, the functions $\log \norm{\Lambda^i M^n(x)}$ satisfy
exponential large deviations in the sense of Definition~\ref{def:
exp_dev_general}, i.e., for any $\epsilon>0$, there exists $C>0$ such that,
for all $n\geq 0$,
\begin{equation}
\label{eq:exp_dev_all_exp}
\mu\{ x \st \abs{\log \norm{\Lambda^i M^n(x)}
- n (\lambda_1+\dotsb+\lambda_i)} \geq n \epsilon\} \leq C e^{-C^{-1}n}.
\end{equation}
\end{definition}
We will explain in the next paragraph that many linear cocycles above
subshifts of finite type have exponential large deviations for all exponents,
see Theorem~\ref{thm:large_deviations} below. This builds on techniques
developed by Bonatti, Viana and Avila (see~\cite{bonatti_viana_lyapunov,
avila_viana_criterion}). The main novelty of our work is the proof that such
large deviations imply exponential returns to Pesin sets, as we explain in
Paragraph~\ref{subsec:quantitative_Pesin}. The last paragraph of this
introduction discusses consequences of these results.
\subsection{Sufficient conditions for large deviations for linear cocycles}
In this paragraph, we consider a (bilateral) transitive subshift of finite
type $T:\Sigma \to \Sigma$, together with a Gibbs measure $\mu$ for a Hölder
potential. Let $E$ be a continuous $\R^d$-bundle over $\Sigma$, endowed with
a continuous linear cocycle $M$ on $E$ over $T$. For instance, one may take
$E=\Sigma \times \R^d$, then $M(x)$ is simply an invertible $d\times d$
matrix depending continuously on $x$. We describe in
Theorem~\ref{thm:large_deviations} various conditions under which such a
cocycle has exponential large deviations for all exponents, in the sense of
Definition~\ref{def:exp_large_dev}. Through the usual coding process, similar
results follow for hyperbolic basic sets of diffeomorphisms, and in
particular for Anosov or Axiom A diffeomorphisms.
We show in Appendix~\ref{app:counter} the existence of a continuous linear
cocycle above a subshift of finite type which does not have exponential large
deviations for its top exponent. Hence, additional assumptions are needed for
this class of results (contrary to the case of Birkhoff sums, where all
Birkhoff sums of continuous functions over a transitive subshift of finite
type have exponential large deviations). These assumptions, as is usual in
the study of linear cocycles, are defined in terms of holonomies. In a
geometric context, holonomies are usually generated by connections. In the
totally disconnected context of subshifts of finite type, connections do not
make sense, but the global notion of holonomy does.
The local stable set of $x$ is the set $W_{\loc}^s(x) = \{y \st y_n=x_n
\text{ for all } n\geq 0\}$. In the same way, its local unstable set is
$W_{\loc}^u(x) = \{y \st y_n=x_n \text{ for all } n\leq 0\}$. By definition,
$W_{\loc}^s(x) \cap W_{\loc}^u(x) = \{x\}$.
An unstable holonomy is a family of isomorphisms $H^u_{ x \to y}$ from
$E(x)$ to $E(y)$, defined for all $x$ and $y$ with $y \in W_{\loc}^u(x)$. We
require the compatibility conditions $H^u_{x\to x} = \Id$ and $H^u_{ y \to z}
\circ H^u_{ x \to y} = H^u_{ x \to z}$ for any $x$, $y$ and $z$ on the same
local unstable set. Moreover, we require the continuity of $(x, y) \mapsto
H^u_{ x \to y}$ (globally, i.e., not only along each leaf).
In the same way, one defines a stable holonomy as a family of maps $H^s_{ x
\to y}$ from $E(x)$ to $E(y)$ when $x$ and $y$ belong to the same local
stable set, with the same equivariance and continuity requirements as above.
\begin{definition}
A linear cocycle admits invariant continuous holonomies if there exist two
stable and unstable continuous holonomies, denoted respectively by $H^s$ and
$H^u$, that are equivariant with respect to the cocycle action. More
precisely, for any $x$, for any $y \in W_{\loc}^s(x)$, and any $v \in E(x)$,
one should have
\begin{equation*}
M(y) H^s_{ x \to y} v = H^s_{ T x \to T y} M(x) v.
\end{equation*}
Similarly, for any $x$, for any $y \in W_{\loc}^u(x)$, and any $v \in E(x)$,
one should have
\begin{equation*}
M(y)^{-1} H^u_{ x \to y} v
= H^u_{ T^{-1} x \to T^{-1} y} M(x)^{-1} v.
\end{equation*}
\end{definition}
Stable holonomies give a canonical way to trivialize the bundle over local
stable sets. Thus, to trivialize the whole bundle, one may choose an
arbitrary trivialization over an arbitrary local unstable set, and then
extend it to the whole space using the holonomies along the local stable
sets. In this trivialization, the cocycle is constant along local stable
sets, i.e., it only depends on future coordinates. Symmetrically, one can
trivialize the bundle first along a stable set, and then using unstable
holonomies along the local unstable sets. In this trivialization, the cocycle
is constant along unstable sets, and depends only on past coordinates. Note
that these two trivializations do not coincide in general, unless the stable
and unstable holonomies commute: In this case, the cocycle only depends on
the coordinate $x_0$ in the resulting trivialization, i.e., it is locally
constant. Conversely, a locally constant cocycle admits the identity as
stable and unstable invariant commuting holonomies.
We say that a linear cocycle is \emph{pinching and twisting in the sense of
Avila-Viana}~\cite{avila_viana_criterion} if it has invariant continuous
holonomies, and if there exist a periodic point $p$ (of some period $k$) and
a point $q$ which is asymptotic to $p$ both in the past and in the future
(i.e., $q\in W_{\loc}^u(p)$ and $T^i q \in W_{\loc}^s(p)$ for some $i$ which
is a multiple of $k$), such that
\begin{itemize}
\item All the eigenvalues of $M^k(p)$ are real and different.
\item Define the map $\Psi = H^s_{T^i q \to p} \circ M^i(q) \circ H^u_{p \to
q}$ from $E(p)$ to itself. Then, for any subspaces $U$ and $V$ of
$E(p)$ which are invariant under $M^k(p)$ (i.e., which are unions of
eigenspaces) and satisfy $\dim U + \dim V = \dim E$, one has $\Psi(U) \cap V =
\{0\}$. In other words, the map $\Psi$ puts the eigenspaces of $M^k(p)$
in general position.
\end{itemize}
This condition ensures that the Lyapunov spectrum of any Gibbs measure is
simple, by the main result of~\cite{avila_viana_criterion}. In the space of
fiber-bunched cocycles (which automatically admit invariant continuous
holonomies), this condition is open (this is clear) and dense (this is harder
as there might be pairs of complex conjugate eigenvalues at some periodic
points, which need more work to be destroyed, see~\cite[Proposition
9.1]{bonatti_viana_lyapunov}).
\begin{thm}
\label{thm:large_deviations} Let $T$ be a transitive subshift of finite type
on a space $\Sigma$, and $\mu$ a Gibbs measure for a Hölder-continuous
potential. Consider a continuous linear cocycle $M$ on a vector bundle $E$
above $T$. Then $M$ has exponential large deviations for all exponents in the
following situations:
\begin{enumerate}
\item If all its Lyapunov exponents coincide.
\item If there is a continuous decomposition of $E$ as a direct sum of
subbundles $E=E_1 \oplus \dotsb \oplus E_k$ which is invariant under
$M$, such that the restriction of $M$ to each $E_i$ has exponential
large deviations for all exponents.
\item More generally, if there is an invariant continuous flag
decomposition $\{0\}=F_0 \subseteq F_1 \subseteq \dotsb \subseteq F_k =
E$, such that the cocycle induced by $M$ on each $F_i/F_{i-1}$ has
exponential large deviations for all exponents.
\item If the cocycle $M$ is locally constant in some trivialization of the
bundle $E$ (this is equivalent to the existence of invariant continuous
holonomies which are commuting).
\item If the cocycle $M$ admits invariant continuous holonomies, and if it
is pinching and twisting in the sense of Avila-Viana.
\item If the cocycle $M$ admits invariant continuous holonomies, and the
bundle is $2$-dimensional.
\end{enumerate}
\end{thm}
The first three points are easy; the interesting ones are the last three. The
various statements can be combined to obtain other results. For instance, if
each (a priori only measurable) Oseledets subspace is in fact continuous (for
instance if the Oseledets decomposition is dominated), then the cocycle has
exponential large deviations for all exponents: this follows from points (1)
and (2) in the theorem. We expected that our techniques would show a result
containing (4--6): if a cocycle admits invariant continuous holonomies, then
it should have exponential large deviations for all exponents. However, there
is a difficulty here, see Remark~\ref{rmk:not_locally_constant}. Points (1--3)
are proved on Page~\pageref{proof:thm_1-3}, (4) on Page~\pageref{proof:4},
(5) on Page~\pageref{proof:5} and (6) on Page~\pageref{proof:6}. The proofs
of (4), (5) and (6) follow the same strategy; we will mainly detail (4)
and indicate more briefly the modifications for (5) and (6). These proofs are
essentially applications of the techniques in~\cite{bonatti_viana_lyapunov,
avila_viana_criterion}.
\begin{rmk}
In Theorem~\ref{thm:large_deviations}, exponential large deviations are
expressed in terms of matrix norms: one should choose on each $E(x)$ a norm,
depending continuously on $x$, and then $\norm{M^n(x)}$ is the operator norm
of $M^n(x)$ between the two normed vector spaces $E(x)$ and $E(T^n x)$. The
above statement does not depend on the choice of the norm (just as the value
of the Lyapunov exponents) as the ratio between two such norms is bounded
from above and from below by compactness. Hence, we may choose whatever norm
we like most on $E$. For definiteness, we use a Euclidean norm.
\end{rmk}
The above theorem shows that, in most usual topologies, generic linear
cocycles have exponential large deviations for all exponents. Indeed, for
generic cocycles in the $C^0$ topology, the Oseledets decomposition is
dominated (see~\cite[Theorem 9.18]{viana_lyapunov}), hence (1) and (2) in the
theorem yield exponential large deviations. For generic cocycles in the
Hölder topology among fiber bunched cocycles (the most tractable Hölder
cocycles), pinching and twisting are generic, hence (5) also gives
exponential large deviations.
\subsection{Quantitative Pesin theory from large deviations for linear cocycles}
\label{subsec:quantitative_Pesin}
Let $T$ be an invertible continuous map on a compact metric space $X$,
preserving an ergodic probability measure $\mu$. Let $M$ be a continuous
cocycle above $T$, on the trivial bundle $X \times \R^d$. Denote by
$\lambda_1 \geq \dotsb \geq \lambda_d$ its Lyapunov exponents, and $I=\{i \st
\lambda_i <\lambda_{i-1}\}$. Then $(\lambda_i)_{i \in I}$ are the distinct
Lyapunov exponents. Denote by $E_i$ the corresponding Oseledets subspace; its
dimension $d_i$ is $\Card\{j\in [1,d] \st \lambda_j = \lambda_i\}$. The
subspaces $E_i(x)$ are well-defined on an invariant subset $X'$ of $X$ with
$\mu(X') = 1$ and $E_i(T(x)) = E_i(x)$ for all $x\in X'$. Moreover
$\frac{1}{n} \log \norm{M^n(x) v} \to \lambda_i$ as $n \to \pm \infty$ for
all $v\in E_i(x)\setminus \{ 0\}$. With this notation, the space $E_i(x)$ is
repeated $d_i$ times. The distinct Oseledets subspaces are $(E_i(x))_{i \in
I}$.
Let $\epsilon>0$. The basic ingredient in Pesin theory is the function
\begin{equation}
\label{eq:def_A_epsilon}
\begin{split}
A_\epsilon(x)& = \sup_{i\in I} A_\epsilon^{(i)}(x)
\\&
= \sup_{i\in I} \sup_{v \in E_i(x)\setminus\{0\}} \sup_{m,n\in \Z}
\frac{ \norm{M^n(x) v}}{\norm{M^m(x) v}} e^{-(n-m)\lambda_i} e^{-(\abs{n}
+ \abs{m}) \epsilon/2}
\in [0,\infty].
\end{split}
\end{equation}
This function is slowly varying, i.e.,
\begin{equation*}
e^{-\epsilon} A_\epsilon(x) \leq A_\epsilon(Tx) \leq e^\epsilon A_\epsilon(x),
\end{equation*}
as the formulas for $x$ and $Tx$ are the same except for a shift of $1$ in
$n$ and $m$. Moreover, for all $k\in \Z$ and all $v\in E_i(x)$,
\begin{equation*}
\norm{v} A_\epsilon(x)^{-1} e^{-\abs{k}\epsilon} \leq
\frac{\norm{M^k(x) v}}{e^{k \lambda_i }} \leq \norm{v} A_\epsilon(x)
e^{\abs{k}\epsilon},
\end{equation*}
where one inequality follows by taking $m=0$ and $n=k$ in the definition of
$A_\epsilon$, and the other inequality by taking $m=k$ and $n=0$. The almost
sure finiteness of $A_\epsilon$ follows from Oseledets theorem.
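Explicitly, taking $m=0$ and $n=k$ in~\eqref{eq:def_A_epsilon} gives, for any
$v \in E_i(x) \setminus \{0\}$,
\begin{equation*}
\frac{\norm{M^k(x) v}}{\norm{v}}\, e^{-k\lambda_i}\, e^{-\abs{k}\epsilon/2}
\leq A_\epsilon(x),
\end{equation*}
which rearranges into the upper bound, while $m=k$ and $n=0$ give
$\frac{\norm{v}}{\norm{M^k(x)v}}\, e^{k\lambda_i}\, e^{-\abs{k}\epsilon/2}
\leq A_\epsilon(x)$, which rearranges into the lower bound.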
\emph{Pesin sets} are sets of the form $\{x \st A_\epsilon(x) \leq C\}$, for
some constant $C>0$. Our main goal is to show that most points return
exponentially often to some Pesin set. This is the content of the following
theorem.
\begin{thm}
\label{thm:exp_returns_Pesin} Let $T$ be a transitive subshift of finite type
on a space $\Sigma$, and $\mu$ a Gibbs measure for a Hölder-continuous
potential. Consider a continuous linear cocycle $M$ on the trivial vector
bundle $\Sigma \times \R^d$ above $T$. Assume that $M$ has exponential large
deviations for all exponents, both in positive and negative times.
Let $\epsilon>0$ and $\delta>0$. Then there exists $C>0$ such that, for all
$n\in \N$,
\begin{equation*}
\mu \{x \st \Card\{j \in [0, n-1] \st A_\epsilon(T^j x) > C\} \geq \delta n\}
\leq C e^{-C^{-1}n}.
\end{equation*}
\end{thm}
One difficulty in the proof of this theorem is that the function $A_\epsilon$
is defined in terms of the Lyapunov subspaces, which are only defined almost
everywhere, in a non-constructive way. To get such controls, we will need to
revisit the proof of Oseledets to get more quantitative bounds, in
Section~\ref{thm:deterministic_Oseledets}, showing that an explicit control
on the differences $\pare*{\abs*{\log \norm{\Lambda^i M^n(x)} - n(\lambda_1 +
\dotsc + \lambda_i)}}_{n\in \Z}$ at some point $x$ implies an explicit
control on $A_\epsilon(x)$ in Theorem~\ref{thm:deterministic_Oseledets}.
Then, the number of returns to the Pesin sets is estimated using an abstract
result in subadditive ergodic theory, interesting in its own right,
Theorem~\ref{thm:exp_returns}. These two statements are finally combined in
Section~\ref{sec:proof_returns_Pesin} to prove
Theorem~\ref{thm:exp_returns_Pesin}.
\subsection{Applications}
In this paragraph, we describe several systems to which our results on large
deviations and exponential returns to Pesin sets apply.
First, by coding any Anosov or Axiom A diffeomorphism thanks to a Markov
partition, the above theorems apply to such maps, provided the matrix
cocycle has exponential large deviations. Hence, one needs to check the
conditions in Theorem~\ref{thm:large_deviations}.
\medskip
The main class of cocycles admitting stable and unstable holonomies is the
class of \emph{fiber bunched cocycles},
see~\cite[Definition~A.5]{avila_viana_criterion}.
A $\nu$-Hölder continuous cocycle $M$ over a hyperbolic map $T$ on a compact
space is $s$-fiber bunched if there exists $\theta \in (0,1)$ such that $d(T
x, T y) \leq \theta d(x,y)$ and $\norm{M(x)} \norm{M(y)^{-1}} \theta^\nu <
1$, for all $x,y$ on a common local stable set (or more generally if this
property holds for some iterate of the map and the cocycle). This means that
the expansion properties of the cocycle are dominated by the contraction
properties of the map $T$. This results in the fact that $M^n(y)^{-1} M^n(x)$
converges exponentially fast when $n\to \infty$, to a map which is a
continuous invariant stable holonomy,
see~\cite[Proposition~A.6]{avila_viana_criterion}.
In the same way, one defines $u$-fiber bunched cocycles. Finally, a cocycle
is fiber-bunched if it is both $s$ and $u$-fiber bunched. For instance, if
$T$ and $\nu$ are fixed, then a cocycle which is close enough to the identity
in the $C^\nu$ topology is fiber bunched. Our results apply to such cocycles
if they are pinching and twisting, which is an open and dense condition among
fiber bunched cocycles.
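Let us sketch why the fiber bunching condition produces stable holonomies
(this is~\cite[Proposition~A.6]{avila_viana_criterion}; we only indicate the
mechanism). For $x, y$ on a common local stable set, one has
\begin{equation*}
M^{n+1}(y)^{-1} M^{n+1}(x) - M^n(y)^{-1} M^n(x)
= M^n(y)^{-1} \pare*{M(T^n y)^{-1} M(T^n x) - \Id} M^n(x),
\end{equation*}
where the middle factor has norm at most $C d(T^n x, T^n y)^\nu \leq C
\theta^{n\nu} d(x,y)^\nu$ by Hölder continuity of $M$. Together with
$\norm{M^n(y)^{-1}} \norm{M^n(x)} \leq \prod_{j=0}^{n-1} \norm{M(T^j y)^{-1}}
\norm{M(T^j x)}$, the fiber bunching condition bounds the whole expression by
$C \kappa^n d(x,y)^\nu$ for some $\kappa < 1$. Hence $M^n(y)^{-1} M^n(x)$ is
a Cauchy sequence, and its limit is the stable holonomy $H^s_{x \to y}$.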
\medskip
Our results also apply to generic cocycles in the $C^0$ topology. Indeed, the
Oseledets decomposition is then dominated (see~\cite[Theorem
9.18]{viana_lyapunov}), hence (1) and (2) in the theorem yield exponential
large deviations, and from there one deduces exponential returns to Pesin
sets by Theorem~\ref{thm:exp_returns_Pesin}.
\medskip
The main application we have in mind is to flows. The second author proves
in~\cite{stoyanov_dolgopyat} the following theorem:
\begin{thm}[Stoyanov~\cite{stoyanov_dolgopyat}]
\label{thm:stoyanov_mixing} Let $g_t$ be a contact Anosov flow on a compact
manifold $X$, with a Gibbs measure $\mu_X$.
Consider the first return map to a Markov section $T$, the corresponding
invariant measure $\mu$, and the corresponding derivative cocycle $M$, from
the tangent space of $X$ at $x$ to the tangent space of $X$ at $Tx$. Assume
that $(T, M, \mu)$ has exponential returns to Pesin sets as in the conclusion
of Theorem~\ref{thm:exp_returns_Pesin}.
Then the flow $g_t$ is exponentially mixing: there exists $C>0$ such that,
for any $C^1$ functions $u$ and $v$, for any $t\geq 0$,
\begin{equation*}
\abs*{\int u\cdot v \circ g_t \dd\mu_X - \int u \dd\mu_X \cdot \int v \dd\mu_X}
\leq C \norm{u}_{C^1} \norm{v}_{C^1} e^{-C^{-1} t}.
\end{equation*}
\end{thm}
By a standard approximation argument, exponential mixing for Hölder
continuous functions follows.
This statement is the main motivation to study exponential returns to Pesin
sets. We deduce from Theorem~\ref{thm:exp_returns_Pesin} the following:
\begin{thm}
\label{thm:contact} Consider a contact Anosov flow with a Gibbs measure, for
which the derivative cocycle has exponential large deviations for all
exponents. Then the flow is exponentially mixing.
\end{thm}
To apply this theorem in concrete situations, we have to check whether the
sufficient conditions of Theorem~\ref{thm:large_deviations} for exponential
large deviations hold. The main requirement is the existence of stable and
unstable holonomies. Unfortunately, we only know their existence when the
foliations are smooth:
\begin{lem}
\label{lem:holonomies} Consider a contact Anosov flow for which the stable
and unstable foliations are $C^1$. Then the derivative cocycle admits
continuous invariant holonomies with respect to the induced map on any Markov
section.
\end{lem}
\begin{proof}
It suffices to show that the flow admits continuous invariant holonomies
along weak unstable and weak stable manifolds, as they descend to the Markov
section.
We construct the holonomy along weak unstable leaves, the holonomy along weak
stable leaves being similar. Consider two points $x$ and $y$ on a weak
unstable leaf. Then the holonomy of the weak unstable foliation gives a local
$C^1$ diffeomorphism from $W^s(x)$ to $W^s(y)$, sending $x$ to $y$. The
derivative of this map is a canonical isomorphism between $E^s(x)$ and
$E^s(y)$, which is clearly equivariant under the dynamics. There is also a
canonical isomorphism between the flow directions at $x$ and $y$. What
remains to be done is to construct an equivariant isomorphism between
$E^u(x)$ and $E^u(y)$.
For this, we use the fact that the flow is a contact flow, i.e., there exists
a smooth one-form $\alpha$, invariant under the flow, with kernel $E^s \oplus
E^u$, whose derivative $\dd\alpha$ restricts to a symplectic form on $E^s
\oplus E^u$. We get a map $\phi$ from $E^s$ to $(E^u)^*$, mapping $v$ to
$\dd\alpha(v, \cdot)$. This map is one-to-one: a vector $v$ in its kernel
satisfies $\dd\alpha(v, w) = 0$ for all $w \in E^u$, and also for all $w \in
E^s$ as $E^s$ is Lagrangian. Hence, $v$ is in the kernel of $\dd\alpha$,
which is reduced to $0$ as $\dd\alpha$ is a symplectic form. As $E^s$ and
$E^u$ have the same dimension, it follows that $\phi$ is an isomorphism.
Consider now $x$ and $y$ on a weak unstable leaf. We have already constructed
a canonical isomorphism between $E^s(x)$ and $E^s(y)$. With the above
identification, this gives a canonical isomorphism between $(E^u(x))^*$ and
$(E^u(y))^*$, and therefore between $E^u(x)$ and $E^u(y)$. This
identification is equivariant under the flow, as $\alpha$ is invariant.
\end{proof}
For instance, for the geodesic flow on a compact Riemannian manifold with
negative curvature, the stable and unstable foliations are smooth if the
manifold is $3$-dimensional or the curvature is strictly $1/4$-pinched, i.e.,
the sectional curvature belongs everywhere to an interval $[-b^2, -a^2]$ with
$a^2/b^2 > 1/4$, by~\cite{hirsch_pugh_C1}. Hence, we deduce the following
corollary from Theorem~\ref{thm:large_deviations} (1), (6) and (5)
respectively:
\begin{cor}
\label{cor:exp_mixing} Consider the geodesic flow $g_t$ on a compact
riemannian manifold $X$ with negative curvature. Assume one of the following
conditions:
\begin{enumerate}
\item $X$ is of dimension $3$.
\item $X$ is of dimension $5$ and the curvature is strictly $1/4$-pinched.
\item $X$ has any dimension, the curvature is strictly $1/4$-pinched, and
moreover the flow is pinching and twisting.
\end{enumerate}
Then the flow is exponentially mixing for any Gibbs measure.
\end{cor}
However, these results were already proved by the second author, under weaker
assumptions: exponential mixing holds if the curvature is (not necessarily
strictly) $1/4$-pinched, in any dimension (without twisting and pinching).
This follows from the articles~\cite{stoyanov_spectrum}, in which it is
proved that a contact Anosov flow with Lipschitz holonomies and satisfying a
geometric condition is exponentially mixing for all Gibbs measures, and
from~\cite{stoyanov_pinching} where the aforementioned geometric condition is
proved to be satisfied in a class of flows including geodesic flows when the
curvature is $1/4$-pinched.
In the opposite direction, the techniques of~\cite{liverani_contact}
or~\cite{faure_tsujii_flot2} prove exponential mixing for any contact Anosov
flow, without any pinching condition, but for Lebesgue measure (or for Gibbs
measures whose potential is not too far away from the potential giving rise
to Lebesgue measure): they are never able to handle all Gibbs measures.
The hope was that Theorem~\ref{thm:stoyanov_mixing} would be able to bridge
the gap between these results and the results of Dolgopyat, proving
exponential mixing for all contact Anosov flows and all Gibbs measures.
However, we still need geometric conditions on the manifold to be able to
proceed. The counterexample in Appendix~\ref{app:counter} shows that in
general exponential large deviations do not hold. Whether one can design
similar counterexamples for nice systems, e.g.\ contact Anosov flows, remains
unknown at this stage. It is also unknown whether one can prove a result
similar to Theorem~\ref{thm:contact} without assuming exponential large
deviations for all exponents.
\section{Preliminaries}
\subsection{Oseledets theorem}
Let $A$ be a linear transformation between two Euclidean spaces of the same
dimension. We recall that, in suitable orthonormal bases at the beginning
and at the end, $A$ can be put in diagonal form with entries $s_1 \geq \dotsb
\geq s_d \geq 0$. The $s_i$ are the \emph{singular values} of $A$. They are
also the eigenvalues of the symmetric matrix $\sqrt{A^t \cdot A}$. The
largest one $s_1$ is the norm of $A$, the smallest one $s_d$ is its smallest
expansion. The singular values of $A^{-1}$ are $1/s_d \geq \dotsb \geq
1/s_1$. For any $i \leq d$, denote by $\Lambda^i A$ the $i$-th exterior
product of $A$, given by
\begin{equation*}
(\Lambda^i A) (v_1 \wedge v_2 \wedge \dotsb \wedge v_i) = Av_1 \wedge Av_2 \wedge \cdots\wedge Av_i .
\end{equation*}
Then
\begin{equation*}
\norm{\Lambda^i A} = s_1 \dotsm s_i,
\end{equation*}
as $\Lambda^i A$ is diagonal in the corresponding orthonormal bases.
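For instance, if $A$ is diagonal with entries $s_1 = 3 \geq s_2 = 2 \geq
s_3 = 1$ in orthonormal bases $(e_1, e_2, e_3)$, then $\Lambda^2 A$ is
diagonal in the orthonormal basis $(e_1 \wedge e_2, e_1 \wedge e_3, e_2
\wedge e_3)$ with entries $s_1 s_2 = 6$, $s_1 s_3 = 3$ and $s_2 s_3 = 2$, so
that $\norm{\Lambda^2 A} = s_1 s_2 = 6$.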
\medskip
Consider a transformation $T$ of a space $X$, together with a finite
dimensional real vector bundle $E$ above $X$: all the fibers are isomorphic
to $\R^d$ for some $d$ and the bundle is locally trivial by definition. For
instance, $E$ may be the product bundle $X \times \R^d$, but general bundles
are also allowed. In our main case of interest, $T$ will be a subshift of
finite type. In this case, any such continuous vector bundle is isomorphic to
$X \times \R^d$: by compactness, there is some $N>0$ such that the bundle is
trivial on all cylinders $[x_{-N},\dotsc, x_N]$. As these (finitely many)
sets are open and closed, trivializations on these cylinders can be glued
together to form a global trivialization of the bundle. In the course of the
proof, even if we start with the trivial bundle, we will have to consider
general bundles, but they will be reducible to trivial bundles thanks to this
procedure.
A cocycle is a map $M$ associating to $x\in X$ an invertible linear operator
$M(x): E(x) \to E(T x)$ (where $E(x)$ denotes the fiber of the fiber bundle
above $x$). When $E=X \times \R^d$, then $M(x)$ is simply a $d\times d$
matrix. The iterated cocycle is given by $M^{n}(x) = M(T^{n-1} x) \dotsm
M(x)$ for $n\geq 0$, and by $M^{-n}(x) = M(T^{-n} x)^{-1} \dotsm M(T^{-1}
x)^{-1}$. It maps $E(x)$ to $E(T^n x)$ in all cases. Be careful that, with this
notation, $M^{-1}(x) \neq M(x)^{-1}$: the first notation indicates the
inverse of the cocycle, with the intrinsic time shift, going from $E(x)$ to
$E(T^{-1} x)$, while the second one is the inverse of a linear operator, so
it goes from $E(T x)$ to $E(x)$. In general, $M^{-n}(x) = M^n(T^{-n}
x)^{-1}$.
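This last identity can be checked directly: writing $M^n(T^{-n} x) =
M(T^{-1} x) \dotsm M(T^{-n} x)$ and inverting each factor in reverse order
gives
\begin{equation*}
M^n(T^{-n} x)^{-1} = M(T^{-n} x)^{-1} \dotsm M(T^{-1} x)^{-1} = M^{-n}(x),
\end{equation*}
both sides going from $E(x)$ to $E(T^{-n} x)$.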
\medskip
Assume now that $T$ is invertible, that it preserves an ergodic probability
measure, and that the cocycle $M$ is $\log$-integrable. For any $i \leq d$,
the quantity $x \mapsto \log \norm{\Lambda^i (M^n(x))}$ is a subadditive
cocycle. Hence, by Kingman's theorem, $\log \norm{\Lambda^i (M^n(x))} / n$
converges almost surely to a constant quantity that we may write as
$\lambda_1 + \dotsb + \lambda_i$, for some scalars $\lambda_i$. These are
called the \emph{Lyapunov exponents} of the cocycle $M$ with respect to the
dynamics $T$ and the measure $\mu$. Let $I = \{i \st \lambda_i <
\lambda_{i-1} \}$ parameterize the distinct Lyapunov exponents, and let $d_i=
\Card\{j \st \lambda_j=\lambda_i\}$ be the multiplicity of $\lambda_i$.
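The subadditivity used here comes from the cocycle identity $M^{m+n}(x) =
M^n(T^m x) M^m(x)$: since $\Lambda^i$ is multiplicative and operator norms
are submultiplicative,
\begin{equation*}
\log \norm{\Lambda^i M^{m+n}(x)} \leq \log \norm{\Lambda^i M^n(T^m x)}
+ \log \norm{\Lambda^i M^m(x)}.
\end{equation*}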
In this setting, the Oseledets theorem asserts that the $\lambda_i$ are
exactly the asymptotic growth rates of vectors, at almost every point. Here
is a precise version of this statement (see for
instance~\cite[Theorem~3.4.11]{arnold_random}).
\begin{thm}[Oseledets Theorem]
\label{thm:Oseledets} Assume that the cocycle $M$ is log-integrable. Then:
\begin{enumerate}
\item For $i \in I$, define $E_i(x)$ to be the set of nonzero vectors $v
\in E(x)$ such that, when $n \to \pm \infty$, then $\log \norm{M^n(x)
v}/n \to \lambda_i$, to which one adds the zero vector. For
$\mu$-almost every $x$, this is a vector subspace of $E(x)$, of
dimension $d_i$. These subspaces satisfy $E(x) = \bigoplus_{i \in I}
E_i(x)$. Moreover, $M(x) E_i(x) = E_i(T x)$.
\item Almost surely, for any $i\in I$, one has $\log
\norm{M^n(x)_{|E_i(x)}} / n \to \lambda_i$ when $n \to \pm \infty$, and
$\log \norm{M^n(x)_{|E_i(x)}^{-1}} / n \to -\lambda_i$.
\end{enumerate}
\end{thm}
In other words, the decomposition of the space $E(x) = \bigoplus_{i\in I}
E_i(x)$ gives a block-diagonal decomposition of the cocycle $M$, such that in
each block the cocycle has an asymptotic behavior given by $e^{n\lambda_i}$
up to subexponential fluctuations.
The spaces $E_i(x)$ can be constructed almost surely as follows. Let
$t_1^{(n)}(x) \geq \dotsb \geq t_d^{(n)}(x)$ be the singular values of
$M^n(x)$. They are the eigenvalues of the symmetric matrix $\sqrt{ M^n(x)^t
\cdot M^n(x)}$, the corresponding eigenspaces being orthogonal. Write
$t_i^{(n)}(x) = e^{n \lambda_i^{(n)}(x)}$. Then $\lambda_i^{(n)}(x)$
converges to $\lambda_i$ for almost every $x$. In particular, for $i \in I$,
one has $t_{i-1}^{(n)}(x) > t_i^{(n)}(x)$ for large enough $n>0$. It follows
that the direct sum of the eigenspaces of $\sqrt{ M^n(x)^t \cdot M^n(x)}$ for
the eigenvalues $t_i^{(n)}(x),\dotsc, t_{i+d_i-1}^{(n)}(x)$ is well defined.
Denote it by $F^{(n)}_i(x)$. We will write $F^{(n)}_{\geq i}$ for
$\bigoplus_{j \geq i, j\in I} F^{(n)}_j(x)$, and similarly for $F^{(n)}_{\leq
i}$. In the same way, we define similar quantities for $n<0$.
\begin{thm}
\label{thm:Oseledets_limit} Fix $i\in I$. With these notations,
$F^{(n)}_{i}(x)$ converges almost surely when $n \to \infty$, to a vector
subspace $F^{(\infty)}_i(x) \subseteq E(x)$. In the same way, $F^{(-n)}_i$
converges almost surely to a space $F^{(-\infty)}_{i}(x)$. Moreover, the
direct sums $F_{\geq i}^{(\infty)}(x)$ and $F_{\leq i}^{(-\infty)}(x)$ are
almost surely transverse, and their intersection is $E_i(x)$.
\end{thm}
See~\cite[Theorem 3.4.1 and Page 154]{arnold_random}. One can reformulate the
theorem as follows. The subspaces $F^{(n)}_{\geq i}(x)$ (which are decreasing
with $i$, i.e., they form a flag) converge when $n\to \infty$ to the flag
$E_{\geq i}(x)$. Note that $F^{(n)}_{\geq i}(x)$ is only defined in terms of
the positive times of the dynamics, hence this is also the case of $E_{\geq
i}(x)$: this is the set of vectors for which the expansion in positive time
is at most $e^{n \lambda_i}$, up to subexponential fluctuations (note that
this condition is clearly stable under addition, and therefore defines a
vector subspace, contrary to the condition that the expansion would be
bounded \emph{below} by $e^{n \lambda_i}$). In the same way, $F^{(-n)}_{\leq
i}(x)$ converges when $n \to \infty$ to $E_{\leq i}(x)$, which therefore only
depends on the past of the dynamics. On the other hand, $E_i(x)$, being
defined as the intersection of two spaces depending on positive and negative
times, depends on the whole dynamics and is therefore more difficult to
analyze. We emphasize that $E_i(x)$ is in general different from
$F^{(\infty)}_i(x)$ or $F^{(-\infty)}_i(x)$.
In the above theorem, when we mention the convergence of subspaces, we are
using the natural topology on the \emph{Grassmann manifold} of linear
subspaces of some given dimension $p$. It comes for instance from the
following distance, that we will use later on:
\begin{equation}
\label{eq:distance_Grass}
\df(U,V) = \norm{\pi_{U\to V^\perp}} = \max_{u\in U, \norm{u}=1} \norm{\pi_{V^\perp} u},
\end{equation}
where $\pi_{U \to V^\perp}$ is the orthogonal projection from $U$ to the
orthogonal $V^\perp$ of $V$. It is not completely obvious that this formula
indeed defines a distance. As $\df(U, V) = \norm{\pi_{V^\perp} \pi_U}$, the
triangular inequality follows from the following computation (in which we use
that orthogonal projections have norm at most $1$):
\begin{align*}
\df(U,W) & = \norm{\pi_{W^\perp} \pi_U} = \norm{\pi_{W^\perp} (\pi_V + \pi_{V^\perp}) \pi_U}
\leq \norm{\pi_{W^\perp} \pi_V \pi_U} + \norm{\pi_{W^\perp} \pi_{V^\perp} \pi_U}
\\&
\leq \norm{\pi_{W^\perp} \pi_V} + \norm{\pi_{V^\perp} \pi_U}
= \df(V, W) + \df(U,V).
\end{align*}
For the symmetry, we note that $\df(U,V) = \sqrt{1- \norm{\pi_{U \to
V}}_{\min}^2}$, where $\norm{M}_{\min}$ denotes the minimal expansion of a
vector by a linear map $M$. This is also its smallest singular value. As
$\pi_{V \to U} = \pi_{U \to V}^t$, and a (square) matrix and its transpose
have the same singular values, it follows that $\norm{\pi_{U \to V}}_{\min} =
\norm{\pi_{V \to U}}_{\min}$, and therefore that $\df(U, V) = \df(V, U)$.
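The identity used for the symmetry follows from Pythagoras' theorem: for a
unit vector $u \in U$, one has $\norm{\pi_{V^\perp} u}^2 + \norm{\pi_V u}^2
= 1$, so that
\begin{equation*}
\df(U,V)^2 = \max_{u \in U,\, \norm{u}=1} \norm{\pi_{V^\perp} u}^2
= 1 - \min_{u \in U,\, \norm{u}=1} \norm{\pi_{U \to V}\, u}^2
= 1 - \norm{\pi_{U \to V}}_{\min}^2.
\end{equation*}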
\subsection{Oseledets decomposition and subbundles}
The following lemma follows directly from Oseledets theorem, by considering
the Oseledets decomposition in each subbundle.
\begin{lem}
\label{lem:direct_split} Consider a log-integrable cocycle $M$ on a normed
vector bundle $E$, over an ergodic probability preserving dynamical system
$T$. Assume that $E$ splits as a direct sum of invariant subbundles $E_i$.
Then the Lyapunov spectrum of $M$ on $E$ is the union of the Lyapunov spectra
of $M$ on each $E_i$, with multiplicities.
\end{lem}
The same holds if $M$, instead of leaving each $E_i$ invariant, is upper
triangular. While this is well known, we give a full proof as this is not as
trivial as one might think.
\begin{lem}
\label{lem:flag_split} Consider a log-integrable cocycle $M$ on a normed
vector bundle $E$, over an ergodic probability preserving dynamical system
$T$. Assume that there is a measurable invariant flag decomposition $\{0\} =
F_0(x) \subseteq F_1(x) \subseteq \dotsb \subseteq F_k(x) = E(x)$. Then the
Lyapunov spectrum of $M$ on $E$ is the union of the Lyapunov spectra of $M$
on each $F_i/F_{i-1}$, with multiplicities.
\end{lem}
Equivalently, considering $E_i$ a complementary subspace to $F_{i-1}$ in
$F_i$, then the matrix representation of $M$ in the decomposition $E=E_1
\oplus \dotsb \oplus E_k$ is upper triangular, and the lemma asserts that the
Lyapunov spectrum of $M$ is the union of the Lyapunov spectra of the diagonal
blocks.
\begin{proof}
Passing to the natural extension if necessary, we can assume that $T$ is
invertible.
Let us first assume that $k=2$, and that there is only one Lyapunov exponent
$\lambda$ in $E_1$ and one Lyapunov exponent $\mu$ in $E_2$, both with some
multiplicity. In matrix form, $M$ can be written as
$\left(\begin{smallmatrix} A_1 & B \\ 0 & A_2 \end{smallmatrix}\right)$,
where the growth rate of $A_1^n$ and $A_2^n$ are respectively given by
$e^{\lambda n}$ and $e^{\mu n}$. Then
\begin{equation}
\label{eq:Mn}
M^n(x) = \left(\begin{array}{cc} A_1^n(x) & \sum_{k=1}^n A_1^{n-k}(T^k x) B(T^{k-1}x) A_2^{k-1}(x)
\\ 0 & A_2^n(x) \end{array} \right).
\end{equation}
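This formula follows by induction on $n$: writing $M^{n+1}(x) = M(T^n x)
M^n(x)$ and multiplying the block matrices, the upper-right block of
$M^{n+1}(x)$ is
\begin{equation*}
A_1(T^n x) \sum_{k=1}^n A_1^{n-k}(T^k x) B(T^{k-1}x) A_2^{k-1}(x) + B(T^n x) A_2^n(x)
= \sum_{k=1}^{n+1} A_1^{n+1-k}(T^k x) B(T^{k-1}x) A_2^{k-1}(x),
\end{equation*}
as $A_1(T^n x) A_1^{n-k}(T^k x) = A_1^{n+1-k}(T^k x)$ and the term
$B(T^n x) A_2^n(x)$ is the summand corresponding to $k=n+1$.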
As $M$ is a log-integrable cocycle, $\log \norm{M(T^n x)}/n$ tends almost
surely to $0$ by the Birkhoff ergodic theorem. Hence, the growth of $\norm{B(T^n x)}$ is
subexponential almost surely.
Assume first $\lambda > \mu$. Define a function $\Phi(x) : E_2(x) \to E_1(x)$
by
\begin{equation*}
\Phi(x) = - \sum_{k=0}^\infty A_1^{k+1}(x)^{-1} B(T^k x) A_2^k(x).
\end{equation*}
The series converges almost surely as $\norm{A_1^{k+1}(x)^{-1} B(T^k x)
A_2^k(x)} \leq C e^{(\mu-\lambda)k + \epsilon k}$ and $\mu-\lambda < 0$.
This series is designed so that $A_1(x) \Phi(x) + B(x) = \Phi(Tx) A_2(x)$,
i.e., so that the subspace $\tilde E_2(x) = \{ (\Phi(x)v, v) \st v \in
E_2(x)\}$ is invariant under $M$. We have obtained a decomposition $E= E_1
\oplus \tilde E_2$, on which the cocycle acts respectively like $A_1$ and
$A_2$. Hence, the result follows from Lemma~\ref{lem:direct_split}.
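Let us check the equation $A_1(x) \Phi(x) + B(x) = \Phi(Tx) A_2(x)$. Since
$A_1^{k+1}(x) = A_1^k(Tx) A_1(x)$, one has $A_1(x) \Phi(x) = - \sum_{k \geq 0}
A_1^k(Tx)^{-1} B(T^k x) A_2^k(x)$, whose $k=0$ term is $-B(x)$. Hence,
\begin{equation*}
A_1(x) \Phi(x) + B(x) = - \sum_{k \geq 1} A_1^{k}(Tx)^{-1} B(T^k x) A_2^k(x)
= \Phi(Tx) A_2(x),
\end{equation*}
where the last equality follows from the change of index $k=j+1$, together
with $T^{j+1} x = T^j(Tx)$ and $A_2^{j+1}(x) = A_2^j(Tx) A_2(x)$.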
Assume now $\lambda<\mu$. Then one can solve again the equation $A_1(x)
\Phi(x) + B(x) = \Phi(Tx) A_2(x)$, this time going towards the past, by the
converging series
\begin{equation*}
\Phi(x) = \sum_{k=0}^\infty A_1^{-k}(x)^{-1} B(T^{-k}x) A_2^{-k-1}(x).
\end{equation*}
Then, one concludes as above.
Finally, assume $\lambda=\mu$. For any typical $x$, any $n$ and any $k\leq
n$, we have
\begin{align*}
\norm{A_1^{n-k}(T^k x)} &
= \norm{A_1^n(x) A_1^k(x)^{-1}} \leq \norm{A_1^n(x)} \norm{A_1^k(x)^{-1}}
\\&
\leq C e^{\lambda n + \epsilon n/4} \cdot e^{-\lambda k + \epsilon k/4}
\leq C e^{\lambda(n-k) + \epsilon n/2}.
\end{align*}
Hence, one deduces from the expression~\eqref{eq:Mn} of $M^n(x)$ that its
norm grows at most like $n e^{n\lambda+n\epsilon}$ almost surely. Hence, all
its Lyapunov exponents are $\leq \lambda$. The same argument applied to the
inverse cocycle, for $T^{-1}$, shows that all the Lyapunov exponents are also
$\geq \lambda$, concluding the proof in this case.
\medskip
We turn to the general case. Subdividing further each $F_i/F_{i-1}$ into the
sum of its Oseledets subspaces, we may assume that there is one single
Lyapunov exponent in each $F_i/F_{i-1}$. Then, we argue by induction over
$k$. At step $k$, the induction assumption ensures that the Lyapunov spectrum
$L_2$ of $M$ in $E/F_1$ is the union of the Lyapunov spectra in the
$F_i/F_{i-1}$ for $i>1$. Denoting by $L_1$ the Lyapunov spectrum in $F_1$
(made of a single exponent $\lambda$ with some multiplicity), we want to
show that the whole Lyapunov spectrum is $L_1 \cup L_2$, with multiplicities.
Using the Oseledets theorem in $E/F_1$ and lifting the corresponding bundles
to $E$, we obtain subbundles $G_2,\dotsc, G_I$ such that, in the
decomposition $E=F_1 \oplus G_2 \oplus \dotsb \oplus G_I$, the matrix $M$ is
block diagonal, except possibly for additional blocks along the first row.
Each block $G_i$ in which the Lyapunov exponent is not $\lambda$ can be
replaced by a block $\tilde G_i$ which is really invariant under the
dynamics, as in the $k=2$ case above. We are left with $F_1$ and possibly one
single additional block, say $G_i$, with the same exponent $\lambda$. The
$k=2$ case again shows that all the Lyapunov exponents in $F_1 \oplus G_i$
are equal to $\lambda$, concluding the proof.
\end{proof}
\section{Exponential large deviations for norms of linear cocycles}
\subsection{Gibbs measures}
\label{subsec:Gibbs}
In this section, we recall basic properties of Gibbs measures, as explained
for instance in~\cite{bowen} and~\cite{parry-pollicott}. By \emph{Gibbs
measure}, we always mean in this article Gibbs measure with respect to some
Hölder continuous potential.
Let $\phi$ be a Hölder-continuous function, over a transitive subshift of
finite type $T:\Sigma \to \Sigma$. The \emph{Gibbs measure} associated to
$\phi$, denoted by $\mu_\phi$, is the unique $T$-invariant probability
measure for which there exist two constants $P$ (the \emph{pressure} of
$\phi$) and $C>0$ such that, for any cylinder $[a_0,\dotsc, a_{n-1}]$, and
for any point $x$ in this cylinder,
\begin{equation}
\label{eq:Gibbs}
C^{-1} \leq \frac{\mu_\phi[a_0,\dotsc, a_{n-1}]}{e^{S_n \phi(x) - nP}} \leq C.
\end{equation}
The Gibbs measure only depends on $\phi$ up to the addition of a coboundary
and a constant, i.e., $\mu_{\phi} = \mu_{\phi + g-g\circ T + c}$.
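For instance, on the full shift over a finite alphabet $A$, if $\phi(x)$ only
depends on $x_0$, then $P = \log \sum_{a \in A} e^{\phi(a)}$ and $\mu_\phi$
is the Bernoulli measure with weights $p_a = e^{\phi(a) - P}$: indeed, for
any $x$ in the cylinder $[a_0,\dotsc, a_{n-1}]$,
\begin{equation*}
\mu_\phi[a_0, \dotsc, a_{n-1}] = \prod_{i=0}^{n-1} e^{\phi(a_i) - P}
= e^{S_n \phi(x) - nP},
\end{equation*}
so that~\eqref{eq:Gibbs} holds with $C=1$.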
Here is an efficient way to construct the Gibbs measure. Any Hölder
continuous function is cohomologous to a Hölder continuous function which
only depends on positive coordinates of points in $\Sigma$. Without loss of
generality, we can assume that this is the case of $\phi$, and also that
$P(\phi) = 0$. Denote by $T_+: \Sigma_+ \to \Sigma_+$ the unilateral subshift
corresponding to $T$. Define the transfer operator $\boL_\phi$, acting on the
space $C^\alpha$ of Hölder continuous functions on $\Sigma_+$ by
\begin{equation*}
\boL_\phi v(x_+) = \sum_{T_+ y_+ = x_+} e^{\phi(y_+)} v(y_+).
\end{equation*}
Then one shows that this operator has a simple eigenvalue at $1$, finitely
many eigenvalues of modulus $1$ different from $1$ (they only exist if $T$ is
transitive but not mixing), and the rest of its spectrum is contained in a
disk of radius $<1$. One deduces that, for any $v \in C^\alpha$, one has in
$C^\alpha$ the convergence $\frac{1}{N} \sum_{n=0}^{N-1} \boL_\phi^n v \to
\mu^+(v) v_0$, where $v_0$ is a (positive) eigenfunction corresponding to the
eigenvalue $1$, and $\mu^+$ is a linear form on $C^\alpha$. One can normalize
them so that $\mu^+(1) = 1$. By approximation, it
follows that this convergence also holds in $C^0$ for $v\in C^0$. Moreover,
$\mu^+$ extends to a continuous linear form on $C^0$, i.e., it is a
probability measure.
Replacing $\phi$ with $\phi + \log v_0 -\log v_0 \circ T_+$, one replaces the
operator $\boL_\phi$ (with eigenfunction $v_0$) with the operator $\boL_{\phi
+ \log v_0 - \log v_0 \circ T_+}$, with eigenfunction $1$. Hence, without
loss of generality, we can assume that $v_0 = 1$. With this normalization,
one checks that the measure $\mu^+$ is $T_+$-invariant. It is the Gibbs
measure for $T_+$, satisfying the property~\eqref{eq:Gibbs}. Its natural
$T$-invariant extension $\mu$ to $\Sigma$ is the Gibbs measure for $T$. We
have for any $v \in C^0(\Sigma_+)$
\begin{equation}
\label{eq:conv_boL}
\frac{1}{N} \sum_{n=0}^{N-1} \boL_\phi^n v (x_+) \to \int v \dd \mu^+,
\quad \text{uniformly in $x_+ \in \Sigma_+$.}
\end{equation}
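The $T_+$-invariance of $\mu^+$ can be checked directly from the duality
$\mu^+(\boL_\phi v) = \mu^+(v)$, which itself follows
from~\eqref{eq:conv_boL} applied to $v$ and to $\boL_\phi v$. Since
$\boL_\phi 1 = 1$ with our normalization, one has
\begin{equation*}
\boL_\phi (v \circ T_+)(x_+) = \sum_{T_+ y_+ = x_+} e^{\phi(y_+)} v(T_+ y_+)
= v(x_+) \, \boL_\phi 1(x_+) = v(x_+),
\end{equation*}
and therefore $\mu^+(v \circ T_+) = \mu^+(\boL_\phi(v \circ T_+)) = \mu^+(v)$.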
It follows from the construction above that the jacobian of $\mu^+$ with
respect to $T_+$ is given by $J(x_+) = \frac{\dd T_+^* \mu^+}{\dd \mu^+}(x_+) =
e^{-\phi(x_+)}$.
Consider the disintegration of $\mu$ with respect to the factor $\mu^+$:
there exists a family of measures $\mu^-_{x_+}$ on $W_{\loc}^s(x_+)$ for $x_+
\in \Sigma_+$, such that $\mu = \int \mu^-_{x_+} \dd\mu^+(x_+)$. Formally, we
write $\mu = \mu^+ \otimes \mu_{x_+}^-$, even though this is not a direct
product. These measures can in fact be defined for all $x_+$ (instead of
almost all $x_+$) in a canonical way, they depend continuously on $x_+$, they
belong to the same measure class when the first coordinate $(x_+)_0$ is
fixed, and moreover their respective Radon-Nikodym derivatives are continuous
in all variables. See for instance~\cite[Section A.2]{avila_viana_criterion}.
Geometrically, the picture is the following. Consider some point $x_+ \in
\Sigma_+$. It has finitely many preimages $y_+^1,\dotsc, y_+^i$ under $T_+$.
Then $W_{\loc}^s(x_+) = \bigcup_i T(W_{\loc}^s(y_+^i))$, and
\begin{equation}
\label{eq:T-inv}
\mu_{x_+}^- = \sum_i \frac{1}{J(y_+^i)} T_* \mu_{y_+^i}^- = \sum_i e^{\phi(y_+^i)} T_* \mu_{y_+^i}^-.
\end{equation}
\subsection{First easy bounds}
In this paragraph, we prove (1-3) in Theorem~\ref{thm:large_deviations}.
\begin{lem}
\label{lem:anx_bound_Birkhoff} Let $a_n(x)=a(n, x)$ be a subadditive cocycle
which is bounded in absolute value for any $n$. Then, for any $N$, there
exists $C>0$ with
\begin{equation*}
a(n, x) \leq S_n (a_N/N) (x) + C,
\end{equation*}
for any $n\in \N$ and any $x \in \Sigma$.
\end{lem}
\begin{proof}
This is clear for $n \leq 2N$ as all those quantities are bounded. Consider
now $n \geq 2N$, and choose $p$ such that $n=Np+r$ with $r\in [N, 2N)$. For
any $j\in [0,N-1]$, one may then write $n=j+Np+r_j$ with $r_j = r-j \in
[0,2N]$. Thus,
\begin{equation*}
a(n, x) \leq a(j, x) + \sum_{i=0}^{p-1} a(N, T^{iN+j} x) + a (r_j, T^{pN+j} x)
\leq C + \sum_{i=0}^{p-1} N (a_N/N) (T^{iN+j} x).
\end{equation*}
Summing over $j\in [0,N-1]$, we get
\begin{align*}
N a(n, x) & \leq NC + \sum_{j=0}^{N-1} \sum_{i=0}^{p-1} N (a_N/N)(T^{iN+j} x)
= NC + N S_{Np} (a_N/N)(x)
\\& \leq C' + N S_n (a_N/N)(x).
\end{align*}
This proves the claim.
\end{proof}
\begin{lem}
\label{lem:upper_anx} Let $(T,\mu)$ be a transitive subshift of finite type
with a Gibbs measure, and $a(n,x)$ a subadditive cocycle above $T$ such that
$a(n,\cdot)$ is continuous for all $n$. Let $\lambda$ be the almost sure
limit of $a(n,x)/n$, assume $\lambda>-\infty$. Then, for any $\epsilon>0$,
there exists $C>0$ such that, for all $n\geq 0$,
\begin{equation*}
\mu\{ x \st a(n,x) \geq n \lambda + n \epsilon\} \leq C e^{-C^{-1}n}.
\end{equation*}
\end{lem}
\begin{proof}
By Kingman's theorem, $a(n, x)/n$ converges to $\lambda$ almost everywhere
and in $L^1$. Thus, one can take $N$ such that $\int a_N/N \dd\mu \leq
\lambda + \epsilon/2$. From the previous lemma, we obtain a constant $C$
such that, for all $n$ and $x$,
\begin{equation*}
a(n, x) \leq S_n(a_N/N)(x) + C.
\end{equation*}
Thus,
\begin{equation*}
\{ x \st a(n, x) \geq n \lambda + n \epsilon\}
\subseteq \{ x \st S_n(a_N/N)(x) \geq n \int(a_N/N) + n\epsilon/2-C\}.
\end{equation*}
By the large deviations inequality for continuous functions\footnote{This
holds for continuous functions in transitive subshifts of finite type, by
reduction to the mixing setting after taking a finite iterate of the map, and
by reduction to Hölder continuous functions by uniform approximation.}, this
set has exponentially small measure. This proves the lemma.
\end{proof}
\begin{prop}
\label{prop:upper} Let $(T,\mu)$ be a transitive subshift of finite type with
a Gibbs measure, and $M$ a continuous cocycle above $T$ with Lyapunov
exponents $\lambda_1\geq \dotsb \geq \lambda_d$. For any $\epsilon>0$, there
exists $C>0$ such that, for all $n\geq 0$ and all $i \leq d$,
\begin{equation*}
\mu\{ x \st \log \norm{\Lambda^i M^n(x)}
\geq n (\lambda_1+\dotsb+\lambda_i) + n \epsilon\} \leq C e^{-C^{-1}n}.
\end{equation*}
\end{prop}
\begin{proof}
Fix $i\leq d$. Then the result follows from the previous lemma applied to
$a(n, x) = \log \norm{ \Lambda^i M^n(x)}$.
\end{proof}
This proposition shows one of the two directions in
Theorem~\ref{thm:large_deviations}, without any assumption on the cocycle.
Hence, to prove this theorem, it will suffice to prove the corresponding
lower bound
\begin{equation}
\label{eq:lower_bound}
\mu \{ x \st \log \norm{ \Lambda^i M^n(x)}
\leq n (\lambda_1+\dotsb+\lambda_i) - n \epsilon\} \leq C e^{-C^{-1}n},
\end{equation}
under the various possible assumptions of this theorem. As is usual with
subadditive ergodic theory, this lower bound is significantly harder than the
upper bound. Indeed, the analogue of Lemma~\ref{lem:upper_anx} for the lower
bound is false, see Proposition~\ref{prop:counter_subadd} in
Appendix~\ref{app:counter}.
We already have enough tools to prove the easy cases of
Theorem~\ref{thm:large_deviations}.
\begin{proof}[Proof of Theorem~\ref{thm:large_deviations} (1-3)]
\label{proof:thm_1-3} First, we prove (1): assuming that
$\lambda_1=\dotsb=\lambda_d=\lambda$, we have to prove that, for all $i \leq
d$,
\begin{equation*}
\mu \{ x \st \log \norm{ \Lambda^i M^n(x)}
\leq n i\lambda - n \epsilon\} \leq C e^{-C^{-1}n}.
\end{equation*}
Let $s_i(x,n)$ be the $i$-th singular value of $M^n(x)$. Then
\begin{equation*}
\norm{\Lambda^i M^n(x)} = s_1(x,n) \dotsm s_i(x,n) \geq s_d(x,n)^i =
\norm{M^n(x)^{-1}}^{-i}.
\end{equation*}
Hence, to conclude, it suffices to show that
\begin{equation*}
\mu \{ x \st \log \norm{M^n(x)^{-1}}
\geq -n\lambda + n \epsilon\} \leq C e^{-C^{-1}n}.
\end{equation*}
This follows from Proposition~\ref{prop:upper} applied to the cocycle $\tilde
M(x) = (M(x)^{-1})^t$, whose Lyapunov exponents are all equal to $-\lambda$.
\medskip
Let us now prove (3), for $k=2$ as the general case then follows by induction
over $k$. Assume that $E_1$ is an invariant continuous subbundle such that,
on $E_1$ and on $E/E_1$, the induced cocycle has exponential large deviations
for all exponents. Denote by $L_1$ and $L_2$ the Lyapunov exponents of the
cocycle on these two bundles, then the Lyapunov spectrum on $E$ is $L_1 \cup
L_2$ with multiplicity, by Lemma~\ref{lem:flag_split}. Let $E_2$ be the
orthogonal complement to $E_1$. We want to show~\eqref{eq:lower_bound}, for
some $i$. In $\lambda_1,\dotsc,\lambda_i$, some of these exponents, say a
number $i_1$ of them, are the top exponents in $L_1$. Denote their sum by
$\Sigma_1$. The remaining $i_2 = i-i_1$ exponents are the top exponents in
$L_2$, and add up to a number $\Sigma_2$.
In the decomposition $E=E_1 \oplus E_2$, the matrix $M$ is block upper
triangular, of the form $\left(\begin{smallmatrix} M_1 & B \\ 0 & M_2
\end{smallmatrix}\right)$. One has $\norm{\Lambda^i M(x)} \geq
\norm{\Lambda^{i_1} M_1(x)} \norm{\Lambda^{i_2} M_2(x)}$: considering $v_1$
and $v_2$ that are maximally expanded by $\Lambda^{i_1} M_1(x)$ and
$\Lambda^{i_2} M_2(x)$, the expansion factor of $\Lambda^i M(x)$ along $v_1
\wedge v_2$ is at least $\norm{\Lambda^{i_1} M_1(x)} \norm{\Lambda^{i_2}
M_2(x)}$ thanks to the orthogonality of $E_1$ and $E_2$, and the
block-triangular form of $M(x)$. The same inequality holds for $M^n(x)$,
which has the same block-triangular structure. Therefore,
\begin{align*}
\{ x \st \log & \norm{ \Lambda^i M^n(x)}
\leq n (\lambda_1+\dotsb+\lambda_i) - n \epsilon\}
\\&
\subseteq \{ x \st \log \norm{\Lambda^{i_1} M_1^n(x)} + \log \norm{\Lambda^{i_2} M_2^n(x)}
\leq n \Sigma_1 + n\Sigma_2 - n \epsilon\}
\\&
\subseteq \{ x \st \log \norm{\Lambda^{i_1} M_1^n(x)} \leq n \Sigma_1 - n \epsilon/2\}
\cup \{ x \st \log \norm{\Lambda^{i_2} M_2^n(x)} \leq n \Sigma_2 - n \epsilon/2\}.
\end{align*}
The last sets both have an exponentially small measure, as we are assuming
that the induced cocycles on $E_1$ and $E/E_1$ have exponential large
deviations for all exponents. Hence, $\mu \{ x \st \log \norm{ \Lambda^i
M^n(x)} \leq n (\lambda_1+\dotsb+\lambda_i) - n \epsilon\}$ is also
exponentially small. This concludes the proof of (3).
\medskip
Finally, (2) follows from (3) by taking $F_i=E_1 \oplus \dotsb \oplus E_i$.
\end{proof}
\subsection{\texorpdfstring{$u$}{u}-states}
Consider a cocycle $M$ admitting invariant continuous holonomies. We define a
fibered dynamics over the projective bundle $\Pbb(E)$ by
\begin{equation*}
T_{\Pbb} (x, [v]) = (Tx, [M(x) v]).
\end{equation*}
Let $\pi_{\Pbb(E) \to \Sigma} : \Pbb(E) \to \Sigma$ be the first
projection.
In general, $T_{\Pbb}$ admits many invariant measures which project under
$\pi_{\Pbb(E) \to \Sigma}$ to a given Gibbs measure $\mu$. For instance, if
the Lyapunov spectrum of $M$ is simple, denote by $v_i(x)$ the vector in
$E(x)$ corresponding to the $i$-th Lyapunov exponent, then $\mu \otimes
\delta_{[v_i(x)]}$ is invariant under $T_{\Pbb}$. By this notation, we mean
the measure such that, for any continuous function $f$,
\begin{equation*}
\int f(x,v) \dd(\mu \otimes \delta_{[v_i(x)]}) (x,v)
= \int f(x, [v_i(x)]) \dd \mu(x).
\end{equation*}
More generally, if $m_{ x}$ is a family of measures on $\Pbb(E(x))$ depending
measurably on $x$ such that $M(x)_* m_{ x} = m_{ T x}$, then the measure
$\mu \otimes m_{ x}$ (defined as above) is invariant under $T_{\Pbb}$.
Conversely, any $T_{\Pbb}$-invariant measure that projects down to $\mu$ can
be written in this form, by Rokhlin's disintegration theorem.
To understand the growth of the norm of the cocycle, we need to distinguish
among those measures the one that corresponds to the maximal expansion, i.e.,
$\mu \otimes \delta_{[v_1]}$. This measure can be obtained as follows,
assuming that $\lambda_1$ is simple. Start from a measure on $\Pbb(E)$ that
is of the form $\mu \otimes \nu_{ x}$ where the measures $\nu_{ x}$ depend
continuously on $x$ and give zero mass to all hyperplanes. Then
\begin{equation*}
(T_{\Pbb}^n)_* (\mu \otimes \nu_{ x}) = \mu \otimes (M^n(T^{-n} x)_* \nu_{ T^{-n} x}).
\end{equation*}
By Oseledets theorem, the matrix $M^n(T^{-n} x)$ acts as a contraction on
$\Pbb(E(T^{-n} x))$, sending the complement of a neighborhood of some
hyperplane to a small neighborhood of $[v_1(x)]$. As $\nu_y$ gives a small
mass to the neighborhood of the hyperplane (uniformly in $y$), it follows
that $(M^n(T^{-n} x)_* \nu_{ T^{-n} x})$ converges to $\delta_{[v_1(x)]}$.
Thus,
\begin{equation*}
\mu \otimes \delta_{[v_1]} = \lim (T_{\Pbb}^n)_* (\mu \otimes \nu_{ x}).
\end{equation*}
There is a remarkable consequence of this construction. We can start from a
family of measures $\nu_{x}$ which is invariant under the unstable holonomy
$H^u_{x \to y}$, i.e., such that $(H^u_{x \to y})_* \nu_{ x} = \nu_{y}$.
Then the same is true of all the iterates $(M^n(T^{-n}
x)_* \nu_{T^{-n} x})$. In the limit $n \to
\infty$, it follows that $\delta_{[v_1]}$ is also invariant under unstable
holonomies. (There is something to justify here, as it is not completely
straightforward that the holonomy invariance is invariant under weak
convergence: The simplest way is to work with a one-sided subshift, and then
lift things trivially to the two-sided subshift, see~\cite[Section
4.1]{avila_viana_criterion} for details). This remark leads us to the
following definition.
\begin{definition}
Consider a probability measure $\nu$ on $\Pbb(E)$ which projects to $\mu$
under $\pi_{\Pbb(E) \to \Sigma}$. It is called a $u$-state if, in the
fiberwise decomposition
$\nu= \mu \otimes \nu_{x}$, the measures $\nu_{x}$ are $\mu$-almost surely
invariant under unstable holonomies. It is called an invariant $u$-state if,
additionally, it is invariant under the dynamics.
\end{definition}
The invariant $u$-states can be described under an additional irreducibility
assumption of the cocycle, strong irreducibility.
\begin{definition}
\label{def:strongly_irreducible} We say that a cocycle $M$ with invariant
continuous holonomies over a subshift of finite type is \emph{not strongly
irreducible} if there exist a dimension $0<k<d=\dim E$, an integer $N>0$, and
for each point $x \in \Sigma$ a family of distinct $k$-dimensional vector
subspaces $V_1(x),\dotsc, V_N(x)$ of $E(x)$, depending continuously on $x$,
with the following properties:
\begin{itemize}
\item the family as a whole is invariant under $M$, i.e., for all $x$,
\begin{equation*} M(x)\{V_1(x),\dotsc,V_N(x)\} =
\{V_1(T x),\dotsc, V_N(T x)\}.
\end{equation*}
\item the family as a whole is invariant under the holonomies, i.e., for
all $x$ and all $y \in W_{\loc}^u(x)$ one has $H^u_{x \to
y}\{V_1(x),\dotsc, V_N(x)\} = \{V_1(y),\dotsc, V_N(y)\}$, and the same
holds for the stable holonomies.
\end{itemize}
Otherwise, we say that $M$ is strongly irreducible.
\end{definition}
In a locally constant cocycle, where holonomies commute (and can therefore be
taken to be the identity), the holonomy invariance condition reduces to
the condition that each $V_i$ is locally constant, i.e., it only depends on
$x_0$.
The following theorem is the main result of this paragraph. It essentially
follows from the arguments in~\cite{bonatti_viana_lyapunov,
avila_viana_criterion}.
\begin{thm}
\label{thm:u-state-unique} Consider a transitive subshift of finite type $T$
with a Gibbs measure $\mu$. Let $M$ be a locally constant cocycle on a bundle
$E$ over $T$, which is strongly irreducible and has simple top Lyapunov
exponent. Then the corresponding fibered map $T_{\Pbb}$ has a unique
invariant $u$-state, given by $ \mu \otimes \delta_{[v_1]}$ where $v_1(x)$ is
a nonzero vector spanning the $1$-dimensional Oseledets subspace for the top
Lyapunov exponent at $x$.
\end{thm}
Note that we are assuming that the cocycle is locally constant: This theorem
is wrong if the cocycle only has invariant continuous holonomies, see
Remark~\ref{rmk:not_locally_constant} below.
The rest of this subsection is devoted to the proof of this theorem. We have
already seen that $\mu \otimes \delta_{[v_1]}$ is an invariant $u$-state,
what needs to be shown is the uniqueness. Starting from an arbitrary
$u$-state $\nu$, we have to prove that it is equal to $\mu \otimes
\delta_{[v_1]}$.
As the cocycle is locally constant, one can quotient by the stable direction,
obtaining a unilateral subshift $T_+:\Sigma_+ \to \Sigma_+$ with a Gibbs
measure $\mu_+$, a vector bundle $E_+$ and a cocycle $M_+$. The measure
$\nu^+=(\pi_{E \to E_+})_* \nu$ is then invariant under the fibered dynamics
$T_{+, \Pbb}$. It can be written as $\mu_+ \otimes \nu^+_{x_+}$ for some
measurable family $\nu^+_{x_+}$ of probability measures on $\Pbb(E_+(x_+))$.
The following lemma is~\cite[Proposition 4.4]{avila_viana_criterion}.
\begin{lem}
\label{lem:hatnu_nux} Assume that $\nu$ is an invariant $u$-state. Then the
family of measures $\nu^+_{x_+}$, initially defined for $\mu_+$-almost every
$x_+$, extends to a (unique) family that depends continuously in the weak
topology on all $x_+\in \Sigma_+$.
\end{lem}
For completeness, we sketch the proof, leaving aside the technical details.
\begin{proof}
The measure $\nu^+_{x_+}$ is obtained by averaging all the conditional
measures $\nu_{x}$ over all points $x$ which have the future $x_+$, i.e.,
over the points $(x_-, x_+)$, with respect to a conditional measure
$\dd\mu^-_{x_+}(x_-)$. If $y_+$ is close to $x_+$, one has $y_0=x_0$, so the
possible pasts of $y_+$ are the same as the possible pasts of $x_+$. For any
continuous function $f$ on projective space, we obtain
\begin{equation*}
\int f \dd\nu^+_{x_+} = \int \left(\int f \dd \nu_{x_-, x_+}\right) \dd\mu^-_{x_+}(x_-), \quad
\int f \dd\nu^+_{y_+} = \int \left(\int f \dd \nu_{x_-, y_+}\right) \dd\mu^-_{y_+}(x_-).
\end{equation*}
When $y_+$ is close to $x_+$, the measures $\dd\mu^-_{x_+}$ and
$\dd\mu^-_{y_+}$ are equivalent, with respective density close to $1$, as we
explained in Paragraph~\ref{subsec:Gibbs}. Moreover, by holonomy invariance
of the conditional measures of $\nu$,
\begin{equation*}
\int f \dd \nu_{x_-, y_+} = \int f \circ H^u_{(x_-, x_+) \to (x_-, y_+)} \dd \nu_{x_-, x_+}.
\end{equation*}
By continuity of the holonomies, the function $f \circ H^u_{(x_-, x_+) \to
(x_-, y_+)}$ is close to $f$ if $y_+$ is close to $x_+$. It follows that
$\int f \dd\nu^+_{y_+}$ is close to $\int f \dd\nu^+_{x_+}$, as desired.
Details can be found in~\cite[Section 4.2]{avila_viana_criterion}.
\end{proof}
Henceforth, we write $\nu^+_{x_+}$ for the family of conditional measures,
depending continuously on $x_+$. The next lemma is a version
of~\cite[Proposition 5.1]{avila_viana_criterion} in our setting.
\begin{lem}
\label{lem:hyperplane_0} Assume that $M$ is strongly irreducible in the sense
of Definition~\ref{def:strongly_irreducible}. Let $\nu$ be an invariant
$u$-state, write $\nu^+_{x_+}$ for the continuous fiberwise decomposition of
Lemma~\ref{lem:hatnu_nux}. Then, for any $x_+$, for any hyperplane $L \subset
\Pbb(E_+(x_+))$, one has $\nu^+_{x_+}(L)=0$.
\end{lem}
\begin{proof}
Assume by contradiction that $\nu^+_{x_+}$ gives positive mass to some
hyperplane, for some $x_+$. We will then construct a family of subspaces as
in Definition~\ref{def:strongly_irreducible}, contradicting the strong
irreducibility of the cocycle.
Let $k$ be the minimal dimension of a subspace with positive mass at some
point. Let $\gamma_0$ be the maximal mass of such a $k$-dimensional subspace.
By continuity of $x_+ \mapsto \nu^+_{x_+}$ and compactness, there exist a
point $a_+$ and a $k$-dimensional subspace $V$ with $\nu^+_{a_+}(V) =
\gamma_0$ (\cite[Lemma 5.2]{avila_viana_criterion}).
Let $\boV(x_+)$ be the set of all $k$-dimensional subspaces $V$ of $E_+(x_+)$
with $\nu^+_{x_+}(V)=\gamma_0$. Two elements of $\boV(x_+)$ intersect in a
subspace of dimension $<k$, which has measure $0$ by minimality of $k$.
Hence, $\gamma_0 \Card \boV(x_+)= \nu^+_{x_+}(\bigcup_{V \in \boV(x_+)} V)$.
As this is at most $1$, the cardinality of $\boV(x_+)$ is bounded from above,
by $1/\gamma_0$.
Consider a point $b_+$ where the cardinality $N$ of $\boV(b_+)$ is maximal.
For each $V \in \boV(b_+)$, $\nu^+_{b_+}(V)$ is an average of
$\nu^+_{x_+}(M(x_+)^{-1}V)$ over all preimages $x_+$ of $b_+$ under $T_+$
(see~\cite[Corollary 4.7]{avila_viana_criterion}). By maximality, all the
$M(x_+)^{-1}V$ also have mass $\gamma_0$ for $\nu^+_{x_+}$. Iterating this
process, one obtains for all points in $T_+^{-n}\{b_+\}$ at least $N$
subspaces with measure $\gamma_0$ (and in fact exactly $N$ by maximality).
The set $\bigcup_n T_+^{-n}\{b_+\}$ is dense. Hence, any $x_+$ is a limit of
a sequence $x_n$ for which $\boV(x_n)$ is made of $N$ subspaces
$V_1(x_n),\dotsc, V_N(x_n)$. Taking subsequences, we can assume that each
sequence $V_i(x_n)$ converges to a subspace $V_i$, which belongs to $\boV(x_+)$
by continuity of $y_+\mapsto \nu^+_{y_+}$. Moreover, one has $V_i \neq V_j$
for $i\neq j$: otherwise, the corresponding space would have measure at least
$2 \gamma_0$, contradicting the definition of $\gamma_0$. This shows that the
cardinality of $\boV(x_+)$ is at least $N$, and therefore exactly $N$.
We have shown that the family $\boV(x_+)$ is made of exactly $N$ subspaces
everywhere, that it depends continuously on $x_+$ and that it is invariant
under $T_{+,\Pbb}$. We lift everything to the bilateral subshift $\Sigma$,
setting $\boV(x) = \boV(\pi_{\Sigma \to \Sigma_+} x)$. By construction, the
family is invariant under the dynamics $T_{\Pbb}$. As $V_i$ does not depend
on the past of the points, it is invariant under the stable holonomy (which
is just the identity when one moves along stable sets, thanks to our choice
of trivialization of the bundle).
The family $\boV(x)$ only depends on $x_+$. We claim that, in fact, it only
depends on $x_0$, i.e., it is also invariant under the unstable holonomy. Fix
some $x_+$, and some $y_+$ with $y_0=x_0$. Then $\gamma_0 =
\nu^+_{x_+}(V_i(x_+))$ is an average of the quantities $\nu_{(x_-, x_+)}
(V_i(x_+))$ over all possible pasts $x_-$ of $x_+$. One deduces from this
that $\nu_{(x_-, x_+)} (V_i(x_+)) = \gamma_0$ for almost every such $x_-$,
see~\cite[Lemma 5.4]{avila_viana_criterion}. As $\nu$ is invariant under
unstable holonomy, we obtain $\nu_{(x_-, y_+)} (V_i(x_+)) = \gamma_0$ for
almost every $x_-$. Integrating over $x_-$, we get $\nu^+_{y_+}(V_i(x_+)) =
\gamma_0$. Hence, $V_i(x_+) \in \boV(y_+)$. This shows that
$\boV(x_+)=\boV(y_+)$ if $x_0=y_0$ (almost everywhere and then everywhere by
continuity). Hence, $\boV$ is locally constant. This shows that $M$ is not
strongly irreducible.
\end{proof}
Let us explain how this proof fails if the cocycle is not locally constant,
i.e., if the holonomies do not commute. Let us argue in a trivialization where
the stable holonomies are the identity. The failure is at the end of the
proof, when we show that the family $\boV(x)$ is invariant under unstable
holonomy. We can indeed prove that $\nu_{(x_-, x_+)}(V_i(x_+))= \gamma_0$ for
almost every $x_-$. Then, it follows that $\nu_{(x_-, y_+)} (H^u_{(x_-, x_+)
\to (x_-, y_+)} V_i(x_+)) = \gamma_0$. The problem is that the subspaces
$H^u_{(x_-, x_+) \to (x_-, y_+)} V_i(x_+)$ vary with $x_-$, so one cannot
integrate this equality with respect to $x_-$, to obtain a subspace $V$ with
$\nu_{y_+}^+ (V) = \gamma_0$.
\begin{proof}[Proof of Theorem~\ref{thm:u-state-unique}]
Let $\nu$ be a $u$-state, let $\mu\otimes \nu_x$ be its fiberwise
disintegration, and $\nu^+_{x_+}$ the conditional expectation of $\nu_x$ with
respect to the future sigma-algebra. The martingale convergence theorem shows
that, almost surely,
\begin{equation}
\label{eq:nux_limit}
\nu_x = \lim M^n(T^{-n} x)_* \nu^+_{(T^{-n} x)_+},
\end{equation}
see~\cite[Proposition 3.1]{avila_viana_criterion}.
Let $\epsilon>0$. We may find $\delta$ such that, for any $x_+$ and any
hyperplane $L \subseteq E_+(x_+)$, the $\delta$-neighborhood of $L$ in
$\Pbb(E_+(x_+))$ (for some fixed distance on projective space) satisfies
$\nu^+_{x_+}(\boN_\delta(L)) \leq \epsilon$, thanks to
Lemma~\ref{lem:hyperplane_0} and continuity of the measures.
Let $E_1(x)=\R v_1(x)$ be the top Oseledets subspace of $M$, and $E_2(x)$ be
the sum of the other subspaces. Let $A$ be a compact subset of $\Sigma$ with
positive measure on which the decomposition $E(x)=E_1(x)\oplus E_2(x)$ is
continuous and on which the convergence in Oseledets theorem is uniform. Fix
$x \in A$. By Poincaré's recurrence theorem, there exists almost surely an
arbitrarily large $n$ such that $T^{-n} x \in A$. In the decomposition $E=E_1
\oplus E_2$, the cocycle $M^n(T^{-n} x)$ is block diagonal, with the first
(one-dimensional) block dominating exponentially the other one. Hence, it
sends $\Pbb(E(T^{-n} x)) \setminus \boN_\delta(E_2(T^{-n} x))$ (whose
$\nu^+_{(T^{-n} x)_+}$-measure is at least $1-\epsilon$ thanks to the choice
of $\delta$) to an $\epsilon$-neighborhood of $E_1(x)$ if $n$ is large
enough. Therefore, $M^n(T^{-n} x)_* \nu^+_{(T^{-n} x)_+}
(\boN_\epsilon([v_1(x)])) \geq 1-\epsilon$. Letting $\epsilon$ tend to $0$,
we get $\nu_x([v_1(x)]) = 1$ thanks to~\eqref{eq:nux_limit}. As the measure
of $A$ can be taken arbitrarily close to $1$, we finally get that $\nu_x$ is
almost everywhere equal to $\delta_{[v_1(x)]}$.
\end{proof}
\begin{rmk}
\label{rmk:not_locally_constant} Theorem~\ref{thm:u-state-unique} is wrong in
general for cocycles which are not locally constant. The difficulty is in
Lemma~\ref{lem:hyperplane_0}: If the cocycle $M$ merely admits invariant
continuous holonomies, there is no reason why the invariant family of
subspaces $\boV(x)$ we construct there should be invariant under the unstable
holonomy, even though $\nu_x$ is. Here is an example of a strongly
irreducible cocycle with simple Lyapunov exponents, over the full shift on
two symbols endowed with any Gibbs measure, which admits two $u$-states.
Let $\Sigma$ be the full shift, let $E=\Sigma \times \R^3$ and let $M$ be the
constant cocycle given by the matrix
$\left(\begin{smallmatrix} 3 & 0 & 0 \\ 0 & 2 & 0 \\
0 & 0 & 1\end{smallmatrix}\right)$. We introduce the holonomies
\begin{align*}
H_{x \to y}^u & = \left(\begin{matrix}
1 & 0 & \sum_{n\geq 0} 3^{-n} (y_n - x_n) \\
0 & 1 & \sum_{n\geq 0} 2^{-n} (y_n - x_n) \\
0 & 0 & 1
\end{matrix} \right), \\
\intertext{and}
H_{x \to y}^s & = \left(\begin{matrix}
1 & 0 & 0 \\
0 & 1 & 0 \\
\sum_{n\leq 0} 3^{n} (y_n - x_n) & \sum_{n\leq 0} 2^{n} (y_n - x_n) & 1
\end{matrix} \right).
\end{align*}
One checks easily that they are indeed holonomies, and that they are
invariant under $T$. Let $e_i$ denote the $i$-th vector of the canonical
basis. As $e_1$ and $e_2$ are invariant under the unstable holonomies, they
give rise to two distinct $u$-states.
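One way to check the invariance of $H^u$ under $T$, i.e., the relation
$H^u_{Tx \to Ty} \, M = M \, H^u_{x \to y}$ (here $M$ is constant), is to
compare matrix entries. The $(1,3)$ entry of $M H^u_{x \to y}$ is $3 \sum_{n
\geq 0} 3^{-n} (y_n - x_n)$, while the $(1,3)$ entry of $H^u_{Tx \to Ty} M$
is
\begin{equation*}
\sum_{n \geq 0} 3^{-n} (y_{n+1} - x_{n+1})
= 3 \sum_{m \geq 1} 3^{-m} (y_m - x_m)
= 3 \sum_{m \geq 0} 3^{-m} (y_m - x_m),
\end{equation*}
where the last equality uses that $y_0 = x_0$ for $y \in W_{\loc}^u(x)$. The
$(2,3)$ entries are handled in the same way, with $2$ in place of $3$.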
We claim that the cocycle is strongly irreducible. Indeed, consider a nonzero
subbundle $F$ of $E$ which is invariant under $T$ and the holonomies, we will
show that $F=E$. Considering the Oseledets decomposition of $F$ under the
cocycle, it follows that $F$ is spanned by some subfamily $(e_i)_{i\in I}$.
If $1 \in I$ or $2 \in I$, then the invariance under stable holonomy implies
that $3\in I$, since $H^s e_1$ and $H^s e_2$ have a nonzero component along
$e_3$. Hence, $e_3 \in F$ almost everywhere. Then, using the invariance under
unstable holonomy, we deduce that $e_1 \in F$ and $e_2 \in F$ almost
everywhere, as $H^u e_3$ has nonzero components along $e_1$ and $e_2$.
Finally, $F=E$.
\end{rmk}
\subsection{The case of locally constant cocycles}
\label{subsec:locally_constant}
In this paragraph, we prove Theorem~\ref{thm:large_deviations}~(4): if a
cocycle is locally constant above a transitive subshift of finite type, then
it has exponential large deviations for all exponents. The main step is the
following result:
\begin{thm}
\label{thm:strongly_irreducible} Consider a continuous cocycle over a
transitive subshift of finite type endowed with a Gibbs measure, admitting
invariant continuous holonomies. Assume that it has a unique $u$-state. Then
the cocycle has exponential large deviations for its top exponent.
\end{thm}
Before proving this theorem, let us show by successive reductions how it
implies Theorem~\ref{thm:large_deviations}~(4).
\begin{lem}
\label{lem:strongly_irreducible1} Consider a locally constant cocycle which
is strongly irreducible and has simple top Lyapunov exponent, above a
subshift of finite type with a Gibbs measure. Then it has exponential large
deviations for its top exponent.
\end{lem}
\begin{proof}
By Theorem~\ref{thm:u-state-unique}, the cocycle admits a unique $u$-state.
Hence, the result follows from Theorem~\ref{thm:strongly_irreducible}.
\end{proof}
\begin{lem}
\label{lem:strongly_irreducible2} Consider a locally constant cocycle which
has simple top Lyapunov exponent, above a subshift of finite type with a
Gibbs measure. Then it has exponential large deviations for its top exponent.
\end{lem}
\begin{proof}
We argue by induction on the dimension of the fibers of the cocycle. Consider a
cocycle $M$ on a bundle $E$ over a subshift of finite type $T$, with simple
top Lyapunov exponent. We will show that it has exponential large deviations
for its top exponent, assuming the same results for all cocycles on fiber
bundles with strictly smaller dimension. We will prove the lower
bound~\eqref{eq:lower_bound} (with $i=1$) for $M$.
If the cocycle $M$ is strongly irreducible, then the result follows from
Lemma~\ref{lem:strongly_irreducible1}, so assume that it is not. Consider a
locally constant invariant family $V_1(x),\dotsc, V_N(x)$ as in
Definition~\ref{def:strongly_irreducible}, such that $N$ is minimal. Let
$V(x)$ be the span of $V_1(x),\dotsc, V_N(x)$. It is also locally constant
and invariant under the cocycle and the holonomies.
Assume first that the dimension of $V$ is strictly smaller than that of $E$.
Define a cocycle $M_{V}$ as the restriction of $M$ to $V$, and a cocycle
$M_{E/ V}$ as the cocycle induced by $M$ on the quotient bundle $E/ V$. These
two cocycles are locally constant. By definition of the restriction norm and
the quotient norm, one has
\begin{equation}
\label{eq:quotient_restrict}
\norm{M^n(x)} \geq \max (\norm{M_{V}^n(x)},
\norm{M_{E/ V}^n(x)}).
\end{equation}
Moreover, by Lemma~\ref{lem:flag_split}, one of the two cocycles has
$\lambda_1$ as a simple top Lyapunov exponent, and these two cocycles are
locally constant and have strictly smaller fiber dimension. By our induction
assumption, we deduce that
\begin{equation*}
\mu \{x \st \log \norm{M_W^n(x)}
\leq n \lambda_1 - n \epsilon\} \leq C e^{-C^{-1}n},
\end{equation*}
where $W$ is either $V$ or $E/V$. The same bound follows for $M$ thanks
to~\eqref{eq:quotient_restrict}.
\smallskip
Assume now that the dimension of $V$ is equal to that of $E$, i.e., $V=E$.
Consider a new dynamics $\tilde T$, on $\tilde \Sigma= \Sigma \times
\{1,\dotsc, N\}$, mapping $(x, i)$ to $(Tx, j)$ where $j=j(x, i)$ is the
unique index such that $M(x) V_i(x) = V_j(T x)$. As $M$ and all the $V_k$
only depend on $x_0$, the function $j$ only depends on $i$, $x_0$ and $x_1$.
Hence, $\tilde T$ is a subshift of finite type. As we chose $N$ to be
minimal, there is no invariant proper subfamily of $V_1,\dotsc, V_N$. Hence,
$\tilde T$ is a transitive subshift. Let also $\tilde \mu$ be the product
measure of $\mu$ and the uniform measure on $\{1,\dotsc,N\}$; it is again a
Gibbs measure for $\tilde T$, and is therefore ergodic by transitivity.
Above $\tilde \Sigma$, we consider a new bundle $\tilde E(x, i) = V_i(x)$,
and the resulting cocycle $\tilde M$ which is the restriction of $M$ to
$V_i$. On any $E(x)$, one can find a basis made of vectors in the subspaces
$V_i(x)$, by assumption. It follows that $\norm{M^n(x)} \leq C
\max_i\norm{M^n(x)_{|V_i(x)}}$, for some uniform constant $C$. Hence, the top
Lyapunov exponent of $\tilde M$ is (at least, and therefore exactly)
$\lambda_1$. Moreover, it is simple as the top Oseledets space for $\tilde M$
in $\tilde E(x, i)$ is included in the top Oseledets space for $M$ in $E(x)$,
which is one-dimensional by assumption.
By our induction assumption, we obtain the bound~\eqref{eq:lower_bound} with
$i=1$ for the cocycle $\tilde M$ over the subshift $\tilde T$ and the measure
$\tilde \mu$ (note that it is important there that we have formulated the
induction assumption for all subshifts of finite type, not only the original
one). The result follows for the original cocycle as $\norm{M^n(x)} \geq
\norm{\tilde M^n(x, 1)}$ for all $x$.
\end{proof}
\begin{lem}
\label{lem:strongly_irreducible3} Consider a locally constant cocycle, above
a subshift of finite type with a Gibbs measure. Then it has exponential large
deviations for its top exponent.
\end{lem}
\begin{proof}
Consider a locally constant cocycle $M$ for which the multiplicity $d$ of the
top Lyapunov exponent is $>1$. Then the top Lyapunov exponent of $\Lambda^d
M$ is simple, equal to $d\lambda_1$. Moreover, for any matrix $A$ (with
singular values $s_1 \geq s_2 \geq \dotsc$), we have $\norm{A}^d =s_1^d \geq
\norm{\Lambda^d A}=s_1 \dotsm s_d$. Thus,
\begin{equation*}
\{x \st \log \norm{M^n(x)} \leq n \lambda_1(M) - n\epsilon \}
\subseteq \{ x \st \log \norm{\Lambda^d M^n(x)}
\leq n \lambda_1(\Lambda^d M) - nd\epsilon\}.
\end{equation*}
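Indeed, if $\log \norm{M^n(x)} \leq n \lambda_1(M) - n\epsilon$, then the singular value comparison above yields
\begin{equation*}
\log \norm{\Lambda^d M^n(x)} \leq d \log \norm{M^n(x)}
\leq nd\lambda_1(M) - nd\epsilon = n \lambda_1(\Lambda^d M) - nd\epsilon,
\end{equation*}
using the equality $\lambda_1(\Lambda^d M) = d\lambda_1(M)$.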
The last set has an exponentially small measure by
Lemma~\ref{lem:strongly_irreducible2}, as $\Lambda^d M$ has a simple top
Lyapunov exponent by construction, and is locally constant. The desired bound
follows for $M$.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:large_deviations}~(4)]
\label{proof:4} Proving exponential large deviations for the cocycle $M$ and
some index $i$ amounts to proving exponential large deviations for $\Lambda^i
M$ and its top Lyapunov exponent. Hence, the theorem follows from
Lemma~\ref{lem:strongly_irreducible3}.
\end{proof}
The rest of this paragraph is devoted to the proof of
Theorem~\ref{thm:strongly_irreducible}. The proof follows the classical
strategy of Guivarc'h--Le Page for products of independent matrices (with the
uniqueness of the $u$-state replacing the uniqueness of the stationary
measure), although the technical details of the implementation are closer for
instance to~\cite[Proof of Theorem~1]{dolgopyat_limit}.
Henceforth, we fix a transitive subshift of finite type $T:\Sigma \to \Sigma$
with a Gibbs measure $\mu$, and a continuous cocycle $M:E \to E$ above $T$
which admits a unique $u$-state denoted by $\nu_u$. Changing coordinates in
$E$ using the unstable holonomy, we can assume without loss of generality
that $M(x)$ only depends on the past $x_-$ of $x$.
We denote by $\Sigma_-$ the set of pasts of points in $\Sigma$. The left
shift $T$ does not induce a map on $\Sigma_-$ (it would be multivalued, since
there would be a choice for the zeroth coordinate), but the right shift
$T^{-1}$ does induce a map $U$ on $\Sigma_-$. This is a subshift of finite
type, for which the induced measure $\mu_- = (\pi_{\Sigma \to \Sigma_-})_*
\mu$ is invariant (and a Gibbs measure).
The measure $\mu$ has conditional expectations $\mu_{x_-}^+$ above its factor
$\mu_-$: it can be written as $\mu = \mu_- \otimes \mu_{x_-}^+$. The family
of measures $\mu_{x_-}^+$ is canonically defined for every point $x_- \in
\Sigma_-$, and varies continuously with $x_-$, as we explained in
Paragraph~\ref{subsec:Gibbs} (for the opposite time direction).
To any point $(x,[v]) \in \Pbb(E)$, we associate a measure $\nu_{(x,[v])}$ on
$\Pbb(E)$ as follows. There is a canonical lift to $E$ of $W_{\loc}^u(x_-)$,
going through $v$, given by $\{(x_-, y_+, H^u_{(x_-,x_+) \to (x_-, y_+)} v)
\}$. The measure $\mu_{x_-}^+$ on $W_{\loc}^u(x_-)$ can be lifted to this
set, giving rise after projectivization to the measure $\nu_{(x,[v])}$. This
measure is invariant under (projectivized) unstable holonomy, it projects to
$\mu_{x_-}^+$ under the canonical projection $\Pbb(E) \to \Sigma$, and it
projects to $\delta_{x_-}$ under the canonical projection $\Pbb(E) \to
\Sigma_-$. By construction, for any $x_-$, $x_+$ and $y_+$,
\begin{equation*}
\nu_{(x_-, x_+, [v])} = \nu_{(x_-, y_+, [H^u_{(x_-, x_+) \to (x_-, y_+)} v])}.
\end{equation*}
More generally, finite averages or even integrals of such measures are again
$H^u$-invariant.
\medskip
There is a natural Markov chain on $\Sigma_-$, defined as follows. A point
$x_-$ has several preimages $y_-^i$ under $U$. By the invariance of the
measure $\mu_-$, the sum $\sum_i 1/J(y_-^i)$ is equal to $1$, where $J$ is the
Jacobian of $U$ with respect to $\mu_-$. Hence, one defines a Markov chain by
deciding to jump from $x_-$ to $y_-^i$ with probability $1/J(y_-^i)$. The
corresponding Markov operator is given by
\begin{equation*}
\boL v(x_-) = \sum_{U(y_-) = x_-} \frac{1}{J(y_-)} v (y_-).
\end{equation*}
This is simply the transfer operator of Paragraph~\ref{subsec:Gibbs} (for the
map $U$ instead of the map $T$). Replacing the potential $\phi$ which defines
the Gibbs measure by a cohomologous potential, we may write $\frac{1}{J(y_-)}
= e^{\phi(y_-)}$.
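Iterating this formula, the powers of the transfer operator take the usual form
\begin{equation*}
\boL^n v(x_-) = \sum_{U^n (y_-) = x_-} e^{S_n \phi(y_-)} v(y_-),
\quad\text{where } S_n \phi(y_-) = \sum_{k=0}^{n-1} \phi(U^k y_-),
\end{equation*}
which is the form in which these weights appear below.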
Correspondingly, we define an operator $\boM$ acting on measures on
$\Pbb(E)$, by
\begin{equation*}
\boM \nu = (T_\Pbb)_* \nu.
\end{equation*}
It maps $\nu_{(x,[v])}$ (supported on the lift $V$ of $W_{\loc}^u(x_-)$
through $[v]$) to a measure supported on $T_\Pbb V$ (which is a lift of the
union of the unstable manifolds $W_{\loc}^u(y_-)$ for $U(y_-) = x_-$). Choose
on each of these submanifolds a point $(y_-, y_+, [v_y])$ (where $y_+$ is
arbitrary, and $[v_y]$ is the unique element of $T_\Pbb V$ above $(y_-, y_+)$). Then
we have
\begin{equation}
\label{eq:iterate_boM}
\boM \nu_{(x,[v])} = \sum_{U(y_-) = x_-} e^{\phi(y_-)} \nu_{(y_-, y_+, [v_y])}.
\end{equation}
This follows from the equation~\eqref{eq:T-inv} for the evolution of the
conditional measures under the dynamics, and then the uniqueness of the
$H^u$-invariant lift.
\begin{prop}
\label{prop:limboMn} Let $f$ be a continuous function on $\Pbb(E)$. Then,
uniformly in $x \in \Sigma$ and $v \in \Pbb(E)$, when $N \to \infty$,
\begin{equation*}
\frac{1}{N} \sum_{n=0}^{N-1}\int f \dd \boM^n \nu_{(x,[v])} \to \int f \dd \nu_u,
\end{equation*}
where $\nu_u$ is the unique invariant $u$-state of $M$.
\end{prop}
\begin{proof}
It suffices to show that any weak limit $\nu_\infty$ of sequences of the form
\begin{equation*}
\nu_N= \frac{1}{N} \sum_{n=0}^{N-1}\boM^n \nu_{(x_N,[v_N])}
\end{equation*}
(where $x_N$ and $[v_N]$ may vary with $N$) is an invariant $u$-state.
The invariance of the limiting measure is clear from the Cesàro averaging and
the definition $\boM \nu = (T_\Pbb)_* \nu$. The $H^u$-invariance also follows
from the construction. It remains to show that $\nu_\infty$ projects to $\mu$
on $\Sigma$ or, equivalently, that it projects to $\mu_-$ on $\Sigma_-$.
The projection of $\nu_N$ on $\Sigma_-$ is the Cesàro average of $\sum_{U^n
y_- = (x_N)_-} e^{S_n \phi(y_-)} \delta_{y_-}$, i.e., the distribution at time
$n$ of the Markov chain started from $(x_N)_-$ at time $0$. For any continuous
function $v$ on $\Sigma_-$, we get $\int v \dd\pi_* \nu_N = \frac{1}{N}
\sum_{n=0}^{N-1} \boL^n v((x_N)_-)$. By a classical property of transfer
operators (see~\eqref{eq:conv_boL}), this converges uniformly to $\int v
\dd\mu_-$. This proves that the only possible weak limit for $\pi_*(\nu_N)$
is $\mu_-$.
\end{proof}
Fix once and for all $\epsilon>0$, for which we want to prove the inequality
\begin{equation}
\label{eq:mu_to_prove}
\mu\{x \st \log \norm{M^n(x)} \leq n(\lambda_1 - \epsilon)\} \leq C e^{-C^{-1}n}.
\end{equation}
\begin{lem}
\label{lem:g0} Define a function $g_0$ on $\Pbb(E)$ by
\begin{equation*}
g_0(x, [v]) = (\lambda_1 - \epsilon) - \log (\norm{M(x) v}/\norm{v} ),
\end{equation*}
where the last term in this formula does not depend on the choice of the lift
$v$ of $[v]$. Then there exist $N$ and $\alpha,\beta>0$ such that, for any
$x$ and $v$,
\begin{equation*}
\int e^{\alpha S_N g_0} \dd \nu_{(x,[v])} \leq e^{-\beta N}.
\end{equation*}
\end{lem}
\begin{proof}
Define a function $f_0$ on $\Pbb(E)$ by
\begin{equation*}
f_0(x, [v]) = \log (\norm{M(x) v}/\norm{v} ).
\end{equation*}
The integral of $f_0$ with respect to the unique invariant $u$-state $\nu_u$
measures the average expansion of a vector in the maximally expanded
Oseledets subspace, which is by definition equal to the maximal Lyapunov
exponent $\lambda_1$. Hence, it is not difficult to check the following
formula, due to Furstenberg (see for instance~\cite[Proposition
6.5]{viana_lyapunov}):
\begin{equation*}
\int f_0 \dd \nu_u = \lambda_1.
\end{equation*}
It follows that
\begin{equation*}
\int g_0 \dd \nu_u = (\lambda_1 - \epsilon) - \lambda_1 = -\epsilon < 0.
\end{equation*}
Fix some $c_0>0$ such that $\int g_0 \dd \nu_u < -c_0$. By
Proposition~\ref{prop:limboMn}, there exists an integer $N$ such that, for
any $x$ and $v$,
\begin{equation*}
\frac{1}{N} \sum_{n=0}^{N-1}\int g_0 \dd \boM^n \nu_{(x,[v])} \leq -c_0.
\end{equation*}
By definition of $\boM$, we get
\begin{equation*}
\int S_N g_0 \dd \nu_{(x,[v])} = \sum_{n=0}^{N-1}\int g_0 \dd \boM^n \nu_{(x,[v])} \leq -c_0 N.
\end{equation*}
Using the inequality $e^t \leq 1+t+t^2 e^{\abs{t}}$, we obtain for any
$\alpha \in (0,1)$
\begin{equation*}
\int e^{\alpha S_N g_0} \dd \nu_{(x,[v])} \leq 1+\alpha \int S_N g_0 \dd \nu_{(x,[v])}
+ \alpha^2 \int (S_N g_0)^2 e^{\abs{S_N g_0}} \dd \nu_{(x,[v])}
\leq 1 -\alpha c_0 N + \alpha^2 C,
\end{equation*}
where $C$ is a constant depending on $N$ but not on $\alpha$. (For the bound
in the last term, note that the function $S_N g_0$ is uniformly bounded, as a
continuous function on a compact space.) When $\alpha$ is small enough, the
term $\alpha^2 C$ is negligible. Hence, we obtain for small enough $\alpha$
and for $\beta = \alpha c_0/2$ the inequality
\begin{equation*}
\int e^{\alpha S_N g_0} \dd \nu_{(x,[v])} \leq 1 - \beta N \leq e^{-\beta N}.
\qedhere
\end{equation*}
\end{proof}
\begin{lem}
\label{lem:exp_control} There exists a constant $C$ such that, for any $n \in
\N$ and any $x$ and $v$, one has
\begin{equation*}
\int e^{\alpha S_n g_0} \dd \nu_{(x,[v])} \leq C e^{-\beta n}.
\end{equation*}
\end{lem}
\begin{proof}
It suffices to prove the lemma for times of the form $nN$, as the general
case only results in an additional multiplicative constant.
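Indeed, writing $m = nN + r$ with $0 \leq r < N$, one has $S_m g_0 = S_{nN} g_0 + S_r g_0 \circ T_{\Pbb}^{nN}$, where $S_r g_0$ is bounded by $N \norm{g_0}_\infty$. Hence,
\begin{equation*}
\int e^{\alpha S_m g_0} \dd \nu_{(x,[v])}
\leq e^{\alpha N \norm{g_0}_\infty} \int e^{\alpha S_{nN} g_0} \dd \nu_{(x,[v])},
\end{equation*}
so the bound at times $nN$ yields the bound at all times, at the price of the multiplicative constant $e^{(\alpha \norm{g_0}_\infty + \beta) N}$.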
Fix some $n$. Iterating~\eqref{eq:iterate_boM}, $(T_{\Pbb}^{nN})_*
\nu_{(x,[v])}$ is a finite linear combination of measures of the form
$\nu_{(x_i, [v_i])}$, with some coefficients $c_i >0$ adding up to $1$. Then
\begin{equation*}
\int e^{\alpha S_{(n+1)N} g_0} \dd \nu_{(x,[v])}
= \sum_i c_i \int e^{\alpha S_{nN} g_0 \circ T_{\Pbb}^{-nN}} \cdot e^{\alpha S_N g_0} \dd\nu_{(x_i, [v_i])}.
\end{equation*}
In each of the integrals, the term $e^{\alpha S_{nN} g_0 \circ T^{-nN}}$ is
constant as $g_0$ and $M$ only depend on the past of points in $\Sigma$.
Hence, this integral is a constant multiple of $\int e^{\alpha S_N g_0}
\dd\nu_{(x_i, [v_i])}$, which is $\leq e^{-\beta N}$ by Lemma~\ref{lem:g0}.
We get
\begin{equation*}
\int e^{\alpha S_{(n+1)N} g_0} \dd \nu_{(x,[v])}
\leq e^{-\beta N} \sum_i c_i \int e^{\alpha S_{nN} g_0 \circ T^{-nN}}\dd\nu_{(x_i, [v_i])}
= e^{-\beta N} \int e^{\alpha S_{nN} g_0} \dd \nu_{(x,[v])}.
\end{equation*}
The conclusion then follows by induction on $n$.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:strongly_irreducible}]
Fix some vector $v$. Then the average $\int \nu_{(x,[v])} \dd\mu(x)$ is a
measure on $\Pbb(E)$ that projects to $\mu$. If $\log \norm{M^n(y)} \leq n
\lambda_1(M) - n\epsilon$, then for any vector $w$ one has $\log(\norm{M^n(y)
w} / \norm{w}) \leq n(\lambda_1(M) - \epsilon)$, i.e., $S_n g_0(y, w) \geq
0$. We obtain
\begin{align*}
\mu\{y \st \log \norm{M^n(y)} \leq n \lambda_1(M) - n\epsilon\}
&\leq \int 1(S_n g_0(y,w) \geq 0) \dd \nu_{(x,[v])}(y,w) \dd\mu(x)
\\&\leq \int \left( \int e^{\alpha S_n g_0} \dd \nu_{(x,[v])}\right) \dd\mu(x),
\end{align*}
where the second inequality uses the elementary bound $1(t \geq 0) \leq
e^{\alpha t}$. By Lemma~\ref{lem:exp_control}, the last integral is bounded by
$C e^{-\beta n}$. The upper bound~\eqref{eq:mu_to_prove} follows.
\end{proof}
\subsection{Proof of Theorem~\ref{thm:large_deviations}~(5)}
\label{proof:5}
Consider a cocycle $M$ admitting invariant continuous holonomies, which is
pinching and twisting in the sense of Avila-Viana. We want to show that it
admits exponential large deviations for all exponents.
It is shown in~\cite{avila_viana_criterion} that there is a unique invariant
$u$-state on $\Pbb(E)$, corresponding to the maximally expanded Oseledets
subspace; see the first lines of Section~7 in~\cite{avila_viana_criterion}. Hence,
Theorem~\ref{thm:strongly_irreducible} applies and shows that $M$ has
exponential large deviations for its top exponent.
To prove exponential large deviations for an exponent $i$, a natural strategy
would be to consider the cocycle $\Lambda^i M$ and prove that it has
exponential large deviations for its top Lyapunov exponent. However, there is
no reason why $\Lambda^i M$ should be twisting and pinching. What Avila and
Viana prove in~\cite{avila_viana_criterion}, however, is that $M$ has a
unique $u$-state on the Grassmannian of $i$-dimensional subspaces. All the
arguments in the proof of Theorem~\ref{thm:strongly_irreducible} go through
if one replaces everywhere the space $\Pbb(E)$ by the corresponding
Grassmannian. Then the Grassmannian version of
Theorem~\ref{thm:strongly_irreducible} shows that $M$ has exponential large
deviations for the exponent $i$.
\qed
\subsection{Proof of Theorem~\ref{thm:large_deviations}~(6)}
\label{proof:6}
Consider a two-dimensional cocycle $M$ admitting continuous holonomies; we
want to show that it satisfies exponential large deviations for all exponents
$i$. For $i=2$, the norm $\norm{\Lambda^i M(x)}$ is the absolute value of the
determinant of $M(x)$. The desired estimate~\eqref{eq:exp_dev_all_exp}
involves an additive cocycle, the Birkhoff sums of the continuous function
$\log \abs{\det M(x)}$. Hence,~\eqref{eq:exp_dev_all_exp} follows from the
large deviations estimate for Birkhoff sums.
The only non-trivial case is $i=1$, i.e., exponential large deviations for
$\norm{M^n(x)}$. If $M$ admits a unique invariant $u$-state on $\Pbb(E)$,
then the result follows from Theorem~\ref{thm:strongly_irreducible}, and we
are done. If the two Lyapunov exponents of $M$ are equal, then the result
follows from Theorem~\ref{thm:large_deviations}~(1). The last case is when
the Lyapunov exponents are distinct, but there are two different invariant
$u$-states. The non-uniqueness implies that something fails if we try to
follow the proof of Theorem~\ref{thm:u-state-unique}. The only place in the
proof of this theorem where we used the fact that the cocycle is locally
constant is in the proof of Lemma~\ref{lem:hyperplane_0}. Without this
assumption, the proof of this lemma constructs a family $\boV(x)$ of
subspaces (which are necessarily one-dimensional), invariant under the
dynamics and the stable holonomy, but not necessarily under the unstable
holonomy. In general, this prevents us from implementing the induction
argument of Lemma~\ref{lem:strongly_irreducible2} as the induced cocycle on
$\boV$ and the quotient cocycle do not admit invariant continuous holonomies
any more. However, in the specific case of $2$-dimensional cocycles, the
induced cocycles and the quotient cocycles are both $1$-dimensional.
Therefore, they satisfy exponential large deviations thanks to
Theorem~\ref{thm:large_deviations}~(1). Hence, the argument in
Lemma~\ref{lem:strongly_irreducible2} goes through to prove that the original
cocycle also satisfies exponential large deviations.
\qed
\section{Exponential returns to nice sets for subadditive cocycles}
The main statement of this section is the following theorem. Note that the
assumptions of the theorem ensure that the function $F$ below is finite
almost everywhere, although it can be infinite on points which are not
typical for $\mu$. We are trying to control how large it will be along
typical orbits, in a quantitative sense.
\begin{thm}
\label{thm:exp_returns} Let $T:X \to X$ be a continuous map preserving an
ergodic probability measure $\mu$ on a compact space. Consider a subadditive
cocycle $u:\N \times X \to \R$, such that $u(n,x)/n$ converges almost
everywhere to $0$, and $u(n, \cdot)$ is continuous for all $n$. Let also
$\epsilon>0$. Define a function
\begin{equation*}
F(x) = \sup_{n\geq 0}\, \bigl( \abs{u(n,x)} - \epsilon n \bigr).
\end{equation*}
Assume that $u$ has exponential large deviations, and that the Birkhoff sums
of continuous functions also have exponential large deviations.
Let $\delta>0$. Then there exists $C>0$ such that, for any $n\geq 0$,
\begin{equation*}
\mu \{x \st \Card\{j \in [0, n-1] \st F(T^j x) > C\} \geq \delta n\}
\leq C e^{-C^{-1}n}.
\end{equation*}
\end{thm}
In the applications we have in mind, $u$ will be of the form $u(n,x) = \log
\norm{\Lambda^i M^{(n)}(x)}-n(\lambda_1+\dotsb+\lambda_i)$, for some cocycle
$M$ with Lyapunov exponents $\lambda_k$. The points where $F(x) \leq C$ are
the points where all the iterates of the cocycle are well controlled.
Essentially, they belong to some Pesin sets (see
Theorem~\ref{thm:deterministic_Oseledets} below for a precise version of
this statement). Hence, the theorem will imply that most iterates of a point
return often to Pesin sets, if the matrix cocycle has exponential large
deviations for all exponents.
The proof is most conveniently written in terms of superadditive cocycles.
Note that, in the lemma below, the definition of $G$ resembles that of $F$ in
the theorem above, except for the lack of absolute value. Hence, the
following lemma applied to $v(n,x) = -u(n,x)-n\epsilon$ proves one of the two
inequalities needed for Theorem~\ref{thm:exp_returns}.
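To check that this choice of $v$ matches the assumptions of the lemma below, note that the subadditivity of $u$ gives
\begin{equation*}
v(m+n, x) = -u(m+n, x) - (m+n)\epsilon
\geq -u(n,x) - n\epsilon - u(m, T^n x) - m\epsilon = v(n, x) + v(m, T^n x),
\end{equation*}
so $v$ is superadditive; moreover, $v(n,x)/n = -u(n,x)/n - \epsilon$ converges almost everywhere to $-\epsilon$, as required.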
\begin{lem}
\label{lem:exp_returns} Let $T:X \to X$ preserve an ergodic probability
measure $\mu$ on a compact space. Consider a superadditive cocycle $v:\N
\times X \to \R$, such that $v(n,x)/n$ converges almost everywhere to
$-\epsilon<0$, and $v(n, \cdot)$ is continuous for all $n$. Define a function
\begin{equation*}
G(x) = \sup_{n\geq 0} v(n,x).
\end{equation*}
Assume that $v$ satisfies exponential large deviations, and that the Birkhoff
sums of continuous functions also satisfy exponential large deviations.
Let $\delta>0$. Then there exists $C>0$ such that, for any $n\geq 0$,
\begin{equation*}
\mu \{x \st \Card\{j \in [0, n-1] \st G(T^j x) > C\} \geq \delta n\}
\leq C e^{-C^{-1}n}.
\end{equation*}
\end{lem}
\begin{proof}
When $N$ tends to $+\infty$, the sequence $v_N/N$ tends almost surely to
$-\epsilon$, and the convergence also holds in $L^1$ by Kingman's theorem.
Hence, $v_N/N + \epsilon$ tends almost surely and in $L^1$ to $0$, and so does
$\min(v_N/N + \epsilon, 0)$. Thus, we can take
once and for all a large enough $N$ so that
\begin{equation}
\label{eq:int_w}
\int \min(v_N/N + \epsilon, 0) \geq - \delta \epsilon/10.
\end{equation}
Let $w = v_N/N$.
By Lemma~\ref{lem:anx_bound_Birkhoff} applied to the subadditive cocycle
$-v$, there exists a constant $C_0>0$ such that $v(n,x) \geq S_n w (x) -
C_0$, for any $x\in X$ and any $n\in \N$. We will show that
\begin{equation*}
\mu \{x \st \Card\{j \in [0, n-1] \st G(T^j x) > 2C_0\} \geq \delta n\}
\leq C e^{-C^{-1}n}.
\end{equation*}
Assume first that $x$ has an iterate at which the cocycle is still positive at
some extremely large time, i.e., $x$ belongs to
\begin{equation*}
K_n = \bigcup_{t=0}^{n-1} (T^t)^{-1} \{y \st \exists j \geq \delta n/2, v(j,y) > 0\}.
\end{equation*}
As $v$ has exponential large deviations and converges to a negative constant,
the last set has a measure which is exponentially small in $n$. As $T$ is
measure-preserving, it follows that $\mu(K_n)$ is also exponentially small.
Consider now $x\notin K_n$ such that $\Card\{j \in [0, n-1] \st G(T^j x)
> 2C_0\} \geq \delta n$. Then
\begin{equation}
\label{eq:decompose_pos}
\Card\{j \in [0, n-1-\delta n/2] \st G(T^j x) > 2C_0\} \geq \delta n/2.
\end{equation}
We define inductively a sequence of times $t_k$ as follows. We start from
$t_0=0$. If $G(T^{t_k} x) > 2C_0$ and $t_k\leq n-1 -\delta n/2$, then we say
that $t_k$ belongs to the set $U^+$ of sum-increasing times. In this case, we
can choose $n_k>0$ such that $v(n_k, T^{t_k}x) > 2C_0$, by definition of $G$.
Then we let $t_{k+1} = t_k+n_k$. Otherwise, we say that $t_k$ belongs to the
set $U^-$ of sum-decreasing times, and we let $t_{k+1} = t_k+1$. We stop at
the first $t_j$ where $t_j \geq n$.
Let $A^+ = \bigcup_{t_k\in U^+} [t_k, t_{k+1})$, and $A^- = [0,n-1] \setminus
A^+$. As $x \notin K_n$, the lengths $n_k=t_{k+1}-t_k$ when $t_k \in U^+$ are
all bounded by $\delta n/2$. Hence, $A^+$ is included in $[0,n-1]$. Moreover,
the set of bad times, on the left of~\eqref{eq:decompose_pos}, is included in
$A^+$. Therefore, $\Card A^+ \geq \delta n/2$, and $\Card A^- \leq
(1-\delta/2)n$.
We will also need to write the set $A^-$ as a union of intervals $\bigcup
[t'_j, t'_j + n'_j)$ over some index set $J$, i.e., we group together the
times in $U^-$ that are not separated by times in $U^+$.
Using the decomposition of $[0,n-1]$ as $A^+ \cup A^-$, the decomposition of
these sets into intervals, and the superadditivity of the cocycle, we obtain
the inequality
\begin{equation*}
v(n, x) \geq \sum_{t_k \in U^+} v(n_k, T^{t_k} x) + \sum_{j\in J} v(n'_j, T^{t'_j}x)
\geq \sum_{t_k \in U^+} 2C_0 + \sum_{j\in J} v(n'_j, T^{t'_j}x),
\end{equation*}
where the last inequality follows from the definition of $U^+$. Note that the
right endpoint of an interval in $A^-$ belongs to $U^+$, except for the last
interval. It follows that $\Card J \leq \Card U^+ + 1 \leq 2 \Card U^+$.
Hence, the above inequality implies
\begin{equation*}
v(n, x) \geq \sum_{j\in J} (C_0 + v(n'_j, T^{t'_j}x)).
\end{equation*}
Together with the definition of $C_0$, this gives
\begin{equation*}
v(n, x) \geq \sum_{j\in J} S_{n'_j} w(T^{t'_j}x)
= \sum_{k\in A^-} w(T^k x).
\end{equation*}
Now, let us introduce $\epsilon$:
\begin{align*}
v(n, x) & \geq \sum_{k\in A^-} (w(T^k x) + \epsilon) - \epsilon \Card(A^-)
\\& \geq \sum_{k\in [0,n-1]} \min(w(T^k x) + \epsilon, 0) - \epsilon \Card(A^-)
\\& \geq \sum_{k\in [0,n-1]} \min(w(T^k x) + \epsilon, 0) - \epsilon (1-\delta/2)n,
\end{align*}
where the last inequality holds as $\Card A^- \leq (1-\delta/2)n$.
The continuous function $x\mapsto \min(w(x) + \epsilon, 0)$ has exponential
large deviations and integral $\geq -\delta\epsilon/10$ by~\eqref{eq:int_w}.
Hence, we have $\sum_{k\in [0,n-1]} \min(w(T^k x) + \epsilon, 0) \geq - n
\delta\epsilon/5$ apart from an exponentially small set. Apart from this set,
we obtain
\begin{equation*}
v(n,x) \geq -\epsilon n + (\delta/2-\delta/5)\epsilon n.
\end{equation*}
As $v$ has exponential large deviations and asymptotic average $-\epsilon$,
it follows that this condition on $x$ has exponentially small measure.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:exp_returns}]
The function $F$ is the maximum of the two functions
\begin{equation*}
H(x) = \sup_{n\geq 0}\, \bigl( -u(n,x) - n\epsilon \bigr),\quad I(x) = \sup_{n\geq 0}\, \bigl( u(n,x) - n\epsilon \bigr).
\end{equation*}
It suffices to show that each of these functions satisfies the conclusion of
the theorem. For $H$, this follows from Lemma~\ref{lem:exp_returns} applied to
$v(n,x) = -u(n,x) -n\epsilon$.
For $I$, let us consider $N>0$ such that $u_N/N$ has integral $<\epsilon/2$.
By Lemma~\ref{lem:anx_bound_Birkhoff}, there exists a constant $C_0$ such
that $u(n,x) \leq S_n(u_N/N) + C_0$ for all $n$. Let $w = u_N/N-\epsilon$.
Lemma~\ref{lem:exp_returns} applied to the cocycle $S_n w$ shows that, for
some constant $C_1>0$,
\begin{equation*}
\mu \{x \st \Card\{j \in [0, n-1] \st \sup_{m\geq 0} S_m w(T^j x) > C_1\} \geq \delta n\}
\leq C e^{-C^{-1}n}.
\end{equation*}
If $u(n,x)-n\epsilon \geq C_0+C_1$, then $S_n w(x) \geq C_1$. Hence, the
control on $I$ follows from the previous equation.
\end{proof}
\section{A deterministic control on the Pesin function}
An important difficulty to prove Theorem~\ref{thm:exp_returns_Pesin} is that
the Pesin function $A_\epsilon$ is defined in terms of the Oseledets
subspaces $E_i(x)$, which vary only measurably with the point and for which
we have no good control. On the other hand, Theorem~\ref{thm:exp_returns}
provides exponentially many returns for sets defined in terms of functions
for which we have good controls, e.g., Birkhoff sums of continuous functions
(by the large deviation principle) or norms of linear cocycles (if one can
prove exponential large deviations for them, using for instance
Theorem~\ref{thm:large_deviations}). Our goal in this section is to explain
how controls on such quantities imply controls on the Pesin function
$A_\epsilon$. Then, Theorem~\ref{thm:exp_returns_Pesin} will essentially
follow from Theorem~\ref{thm:exp_returns}. To prove such a result, we need to
revisit the proof of Oseledets theorem and replace almost sure controls with
more explicit bounds.
Consider an invertible map $T:X\to X$ preserving a probability measure $\mu$,
and a log-integrable linear cocycle $M$ above $T$ on $X \times \R^d$. Let
$\lambda_1 \geq \dotsb \geq \lambda_d$ be its Lyapunov exponents, let $I=\{i
\st \lambda_i<\lambda_{i-1}\}$ (with the convention $\lambda_0 = +\infty$, so
that $1 \in I$) be the set of indices of the distinct Lyapunov exponents, and
let $E_i$ be the corresponding Oseledets subspaces.
Given $\epsilon>0$, define functions
\begin{align}
\notag
B^+_\epsilon(x) & = \sup_{i\in [1,d]} B^{(i)+}_\epsilon(x) = \sup_{i\in [1,d]} \sup_{n\geq 0}\,
\bigl( \abs{ \log \norm{\Lambda^i M^{(n)}(x)} - n(\lambda_1+\dotsb+\lambda_i)} - n\epsilon \bigr),
\\ \notag
B^-_\epsilon(x) & = \sup_{i\in [1,d]} B^{(i)-}_\epsilon(x) = \sup_{i\in [1,d]} \sup_{n\leq 0}\,
\bigl( \abs{ \log \norm{\Lambda^i M^{(n)}(x)} - n (\lambda_d+\dotsb+\lambda_{d-i+1})} - \abs{n}\epsilon \bigr)
\intertext{and}
\label{eq:def_B_epsilon}
B_\epsilon(x) & = \max(B^+_\epsilon(x), B^-_\epsilon(x)).
\end{align}
These are the functions we can control using the tools of the previous
sections.
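To fix ideas, for $i=1$ the bound $B^+_\epsilon(x) \leq C$ gives, for every $n \geq 0$,
\begin{equation*}
n\lambda_1 - n\epsilon - C \leq \log \norm{M^{(n)}(x)} \leq n\lambda_1 + n\epsilon + C,
\end{equation*}
i.e., a uniform two-sided control on the growth of the cocycle along the whole forward orbit of $x$; the other indices $i$ and the function $B^-_\epsilon$ control the exterior powers and the backward orbit in the same way.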
The following proposition asserts that a control on $B_\epsilon$ and a mild
control on angles implies a control on $A_{\epsilon'}$ for $\epsilon' =
20d\epsilon$. For $i\in I$, let us denote by $F^{(m)}_{\geq i}(x)$ the
maximally contracted subspace of $M^{(m)}(x)$ of dimension $d-i+1$, and by
$F^{(-m)}_{<i}(x)$ the maximally contracted subspace of $M^{(-m)}(x)$ of
dimension $i$, if these spaces are uniquely defined, as in the statement of
Theorem~\ref{thm:Oseledets_limit}.
\begin{thm}
\label{thm:deterministic_Oseledets} Assume that $\norm{M(x)}$ and
$\norm{M(x)^{-1}}$ are bounded uniformly in $x$. Consider $\epsilon \in (0,
\min_{i\neq j \in I} \abs{\lambda_i-\lambda_j}/(20d))$ and $\rho>0$ and
$C>0$. Then there exist $m_0\in \N$ and $D>0$ with the following properties.
Consider a point $x$ satisfying $B_{\epsilon}(x) \leq C$. Then its subspaces
$F^{(n)}_{\geq i}(x)$ and $F^{(-n)}_{\leq i}(x)$ are well defined for all $n
\geq m_0$, and converge to subspaces $F^{(\infty)}_{\geq i}(x)$ and
$F^{(-\infty)}_{\leq i}(x)$.
Assume additionally that, for all $i\in I$, there exists $m \geq m_0$ such
that the angle between $F^{(m)}_{\geq i}(x)$ and $F^{(-m)}_{<i}(x)$ is at
least $\rho$. Then the Oseledets subspace $E_i(x) = F^{(\infty)}_{\geq i}(x)
\cap F^{(-\infty)}_{\leq i}(x)$ is a well-defined $d_i$-dimensional space for
all $i \in I$. Moreover, the function $A_{20d\epsilon}(x)$ (defined
in~\eqref{eq:def_A_epsilon} in terms of these subspaces) satisfies
$A_{20d\epsilon}(x) \leq D$.
\end{thm}
Note that there is no randomness involved in this statement; it is completely
deterministic.
The condition on $B_\epsilon$ controls separately what happens in the past
and in the future. Oseledets subspaces are defined by intersecting flags
coming from the past and from the future, as explained in
Theorem~\ref{thm:Oseledets_limit}. Therefore, it is not surprising that there
should be an additional angle requirement to make sure that these flag
families are not too singular with respect to one another. Note that the
angle requirement is expressed in terms of a fixed time $m$. Hence, it will
be easy to enforce in applications.
\bigskip
In this section, we prove Theorem~\ref{thm:deterministic_Oseledets}. Once and
for all, we fix $T$, $M$ and $\mu$ satisfying the assumptions of this
theorem, and constants $C>0$, $\epsilon\in (0, \min_{i\neq j \in I}
\abs{\lambda_i-\lambda_j}/(20d))$ and $\rho>0$. Consider a point $x$
satisfying $B_{\epsilon}(x) \leq C$. We want to show that, if $m$ is suitably
large (depending only on $C$, $\epsilon$ and $\rho$), then the subspaces
$F_{\geq i}^{(m)}(x)$ and $F_{<i}^{(-m)}(x)$ are well defined, and moreover
if the angle between them is at least $\rho$, then $A_{20d\epsilon}(x)$ is
bounded by a constant $D$ only depending on $C$, $\epsilon$ and $\rho$.
We will use the notations introduced before
Theorem~\ref{thm:Oseledets_limit}. In particular, $t_i^{(n)}(x) = e^{n
\lambda_i^{(n)}(x)}$ is the $i$-th singular value of $M^n(x)$. We will
essentially repeat the argument from the proof of a technical lemma
in~\cite{ruelle_pesin_theory}. A more detailed exposition is given in
Section~2.6.2 in~\cite{sarig_notes}.
\medskip
\emph{Step 1: there exists $N_1 = N_1(C, \epsilon)$ such that, if $n \geq
N_1$, then $\abs{\lambda_i^{(n)}(x) - \lambda_i} \leq 3\epsilon$ for all
$i$.} In particular, thanks to the inequality $\epsilon < \min_{i\neq j \in
I} \abs{\lambda_i-\lambda_j}/(20d)$, there is a gap between the eigenvalues
$\lambda_j^{(n)}(x)$ in different blocks $\{i,\dotsc, i+d_i-1\}$. (Note that
the $20d$ is much larger than what we need here, $6$ would be enough, but it
will be important later on.) This implies that the different subspaces
$(F_i^{(n)}(x))_{i\in I}$ are well defined.
\begin{proof}
We have $B_{\epsilon}(x) \leq C$. Thanks to the equality $\log
\norm{\Lambda^i M^n(x)} = n(\lambda_1^{(n)}(x)+\dotsc + \lambda_i^{(n)}(x))$,
and to the definition of $B_\epsilon^+$, this gives for all $i$
\begin{equation*}
n \abs*{ \lambda_1^{(n)}(x)+\dotsc + \lambda_i^{(n)}(x) - (\lambda_1 + \dotsc+ \lambda_i)}
\leq \epsilon n + C.
\end{equation*}
Subtracting these inequalities with indices $i$ and $i-1$, we get
$\abs{\lambda_i^{(n)}(x) - \lambda_i} \leq 2 \epsilon + 2C/n$. If $n$ is
large enough, this is bounded by $3\epsilon$ as desired.
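Explicitly, one may take $N_1 = N_1(C,\epsilon) = \lceil 2C/\epsilon \rceil$, since then
\begin{equation*}
\abs{\lambda_i^{(n)}(x) - \lambda_i} \leq 2\epsilon + \frac{2C}{n} \leq 3\epsilon
\quad\text{for all } n \geq N_1.
\end{equation*}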
\end{proof}
From this point on, we will only consider values of $n$ or $m$ which are
$\geq N_1$, so that the subspaces $F_i^{(n)}(x)$ are well defined. We will
write $\Pi^{(n)}_i$ for the orthogonal projection on this subspace, and
$\Pi^{(n)}_{\geq i}$ and $\Pi^{(n)}_{<i}$ for the projections on
$\bigoplus_{j \in I, j \geq i} F_j^{(n)}(x)$ and $\bigoplus_{j \in I, j < i}
F_j^{(n)}(x)$ respectively. They satisfy $\Pi^{(n)}_{\geq i} + \Pi^{(n)}_{<i}
= \Id$.
\medskip
\emph{Step 2: there exists a constant $K_1 = K_1(C, \epsilon)$ such that, for
all $m \geq n \geq N_1$, all $i>j$ in $I$ and all $v \in F_{\geq
i}^{(n)}(x)$, holds}
\begin{equation*}
\norm{\Pi_{\leq j}^{(m)} v} \leq K_1 \norm{v} e^{-n (\lambda_j - \lambda_i - 6(d-1)\epsilon)}.
\end{equation*}
\begin{proof}
The proof is done in two steps.
First claim: there exists a constant $K_0$ such that, for $n \geq N_1$, $v
\in F_{\geq i}^{(n)}(x)$ and $j<i$,
\begin{equation*}
\norm{\Pi_j^{(n+1)} v} \leq K_0 \norm{v} e^{-n (\lambda_j - \lambda_i - 6\epsilon)}.
\end{equation*}
Indeed, on the one hand, we have
\begin{equation*}
\norm{M^{n+1}(x) v} = \norm{M(T^n x) \cdot M^n(x) v} \leq (\sup_y \norm{M(y)}) \cdot \norm{M^n(x) v}
\leq (\sup_y \norm{M(y)}) e^{n(\lambda_i+3\epsilon)} \norm{v},
\end{equation*}
thanks to the first step and the fact that $v \in F_{\geq i}^{(n)}(x)$. On
the other hand, as $M^{n+1}(x)$ respects the orthogonal decomposition into
the spaces $F_k^{(n+1)}(x)$, we have
\begin{equation*}
\norm{M^{n+1}(x) v} \geq \norm{M^{n+1}(x) \Pi_j^{(n+1)} v}
\geq e^{(n+1) (\lambda_j - 3\epsilon)} \norm{\Pi_j^{(n+1)} v},
\end{equation*}
again thanks to the first step. Putting these two inequalities together gives
the result.
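Spelling this out: combining the two displayed inequalities, we get
\begin{equation*}
\norm{\Pi_j^{(n+1)} v} \leq \Bigl(\sup_y \norm{M(y)}\Bigr) e^{-(\lambda_j - 3\epsilon)}\, e^{-n(\lambda_j - \lambda_i - 6\epsilon)} \norm{v},
\end{equation*}
so that one may take $K_0 = \bigl(\sup_y \norm{M(y)}\bigr) \max_{j \in I} e^{-(\lambda_j - 3\epsilon)}$.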
\medskip
Second claim: for all $j<i$ in $I$, there exists a constant $K_{i,j}$ such
that, for all $m \geq n\geq N_1$ and all $v \in F_{\geq i}^{(n)}(x)$, we have
\begin{equation}
\label{eq:claim2}
\norm{\Pi_{\leq j}^{(m)} v} \leq K_{i,j} e^{-n (\lambda_j-\lambda_i - 6(i-j) \epsilon)} \norm{v}.
\end{equation}
Once this equation is proved, then Step 2 follows by taking for $K_1$ the
maximum of the $K_{i,j}$ over $j<i$ in $I$. To prove~\eqref{eq:claim2}, we
argue by decreasing induction over $j < i$, $j \in I$. Assume thus that the
result is already proved for all $k \in I \cap (j,i)$, let us prove it for
$j$.
Decomposing a vector $v$ along its components on $F_{\leq j}^{(m)}(x)$, on
$F_k^{(m)}(x)$ for $k \in I \cap (j, i)$ and on $F_{\geq i}^{(m)}(x)$, we get
\begin{equation}
\label{eq:piqusfdp}
\norm{\Pi_{\leq j}^{(m+1)} v} \leq \norm{\Pi_{\leq j}^{(m+1)} \Pi_{\leq j}^{(m)} v}
+ \sum_{k\in I \cap (j, i)} \norm{\Pi_{\leq j}^{(m+1)} \Pi_{k}^{(m)} v}
+ \norm{\Pi_{\leq j}^{(m+1)} \Pi_{\geq i}^{(m)} v}.
\end{equation}
The first term is bounded by $\norm{\Pi_{\leq j}^{(m)} v}$ as $\Pi_{\leq
j}^{(m+1)}$ is a projection. The second term is bounded by $K_0
e^{-m(\lambda_j-\lambda_k-6\epsilon)} \norm{\Pi_{k}^{(m)} v}$ thanks to the
first claim applied to $m$ and $\Pi_{k}^{(m)} v \in F_{\geq k}^{(m)}(x)$. The
induction hypothesis asserts that $\norm{\Pi_{k}^{(m)} v} \leq K_{i,k}
e^{-m(\lambda_k - \lambda_i -6(i-k)\epsilon)}\norm{v}$. Overall, we get for
the second term a bound which is at most
\begin{equation*}
\sum_{k \in I \cap (j,i)} K_0 K_{i,k} e^{-m(\lambda_j - \lambda_i - 6(i-k + 1) \epsilon)}\norm{v}
\leq K' e^{-m (\lambda_j -\lambda_i - 6(i-j) \epsilon)}\norm{v}.
\end{equation*}
Finally, the third term in~\eqref{eq:piqusfdp} is bounded by $K_0
e^{-m(\lambda_j - \lambda_i-6\epsilon)} \norm{\Pi_{\geq i}^{(m)} v}$, by the
first claim applied to $m$ and $\Pi_{\geq i}^{(m)} v \in F_{\geq
i}^{(m)}(x)$. This is bounded by $K_0 e^{-m(\lambda_j - \lambda_i-6\epsilon)}
\norm{v}$ as $\Pi_{\geq i}^{(m)}$ is a projection.
All in all, we have proved that
\begin{equation*}
\norm{\Pi_{\leq j}^{(m+1)} v} \leq (K'+K_0) e^{-m (\lambda_j -\lambda_i - 6(i-j) \epsilon)}\norm{v}
+ \norm{\Pi_{\leq j}^{(m)} v}.
\end{equation*}
The estimate~\eqref{eq:claim2} then follows by induction over $m$, summing
the geometric series starting from $n$ as $\lambda_j -\lambda_i - 6(i-j)
\epsilon > 0$ thanks to the choice of $\epsilon$.
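In more detail: since $v \in F_{\geq i}^{(n)}(x)$, the base case $\Pi_{\leq j}^{(n)} v = 0$ holds, and iterating the last displayed inequality from time $n$ to time $m$ gives
\begin{equation*}
\norm{\Pi_{\leq j}^{(m)} v} \leq (K'+K_0) \sum_{\ell=n}^{m-1} e^{-\ell (\lambda_j -\lambda_i - 6(i-j) \epsilon)} \norm{v}
\leq \frac{K'+K_0}{1-e^{-(\lambda_j -\lambda_i - 6(i-j) \epsilon)}}\, e^{-n (\lambda_j -\lambda_i - 6(i-j) \epsilon)} \norm{v},
\end{equation*}
so that the constant on the right-hand side is an admissible choice of $K_{i,j}$.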
\end{proof}
The second step controls projections from $F_i^{(n)}$ to $F_j^{(m)}$, for $m
\geq n$, when $i > j$. The third step controls projections in the other
direction, thus giving a full control of the respective projections of the
spaces.
\medskip
\emph{Step 3: for all $m \geq n \geq N_1$, all $i>j$ in $I$ and all $v \in
F_{\leq j}^{(n)}$, holds}
\begin{equation*}
\norm{\Pi_{\geq i}^{(m)} v} \leq K_1 \norm{v} e^{-n (\lambda_j - \lambda_i - 6(d-1)\epsilon)}.
\end{equation*}
\begin{proof}
Define a new matrix cocycle by $\tilde M(x) = (M^{-1}(x))^t$, from $E^*(x)$
to $E^*(T x)$. In coordinates (identifying $E(x)$ and $E^*(x)$ thanks to its
Euclidean structure), it is given as follows. Write $M^n(x)$ as $k_1 A k_2$
where $k_1$ and $k_2$ are orthogonal matrices, and $A$ is a diagonal matrix
with entries $t_1^{(n)}(x)=e^{n \lambda_1^{(n)}(x)},\dotsc, t_d^{(n)}(x)=e^{n
\lambda_d^{(n)}(x)}$. Then $\tilde M^n(x) = k_1 A^{-1} k_2$. Hence, it has the
same decomposition into singular spaces as $M^n(x)$, the difference being
that the singular values of $M^n(x)$ are replaced by their inverses.
The proof in Step 2 only used the fact that the logarithms of the singular
values were $3\epsilon$-close to $\lambda_i$, and the norm of the cocycle is
uniformly bounded. All these properties are shared by $\tilde M$. Hence, the
conclusion of Step 2 also applies to $\tilde M$, except that the inequality
between $i$ and $j$ has to be reversed, as the ordering of singular values of
$\tilde M$ is the opposite of that of $M$. This is the desired conclusion.
\end{proof}
Overall, Steps 2 and 3 combined imply that the projection of a unit vector in
$F_i^{(n)}(x)$ on $(F_i^{(m)}(x))^\perp = F_{<i}^{(m)}(x) \oplus
F_{>i}^{(m)}(x)$ has a norm bounded by $2K_1 e^{-\delta n}$, for $\delta =
\min_{k \neq \ell \in I} \abs{\lambda_k -\lambda_\ell} -6(d-1) \epsilon > 0$.
Hence, in terms of the distance $\df$ on the Grassmannian of
$d_i$-dimensional subspaces defined in~\eqref{eq:distance_Grass}, we have
$\df(F_i^{(n)}(x), F_i^{(m)}(x)) \leq 2K_1 e^{-\delta n}$. It follows that
$F_i^{(n)}(x)$ is a Cauchy sequence, converging to a subspace
$F_i^{(\infty)}(x)$ as claimed in the statement of the theorem.
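Letting $m \to \infty$ in this estimate, we also get a quantitative rate for the convergence:
\begin{equation*}
\df(F_i^{(n)}(x), F_i^{(\infty)}(x)) \leq 2K_1 e^{-\delta n}
\quad\text{for all } n \geq N_1.
\end{equation*}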
\medskip
\emph{Step 4: there exist $N_2 \geq N_1$ and a constant $K_2$ such that, for
all $n \geq N_2$, all $i$ in $I$ and all $v \in F_{i}^{(\infty)}(x)$ with norm
$1$, holds}
\begin{equation}
\label{eq:step4}
K_2^{-1} e^{n(\lambda_i -6d \epsilon)} \leq \norm{M^n(x) v} \leq K_2 e^{n (\lambda_i + 6d\epsilon)}.
\end{equation}
\begin{proof}
Take a vector $v \in F_i^{(\infty)}(x)$. For $j\in I$, the norm of the
projection $\pi_{F^{(n)}_j(x) \to F_i^{(\infty)}(x)}$, as the limit of the
projections $\pi_{F^{(n)}_j(x) \to F_i^{(m)}(x)}$, is bounded by $K_1 e^{-n
(\abs{\lambda_i -\lambda_j} - 6(d-1)\epsilon)}$ thanks to Steps 2 and 3 (note
that this bound is nontrivial only if $j \neq i$). Its transpose, the
projection $\pi_{F^{(\infty)}_i(x) \to F_j^{(n)}(x)}$, has the same norm and
therefore satisfies the same bound.
Writing $v_j = \pi_{F^{(\infty)}_i(x) \to F_j^{(n)}(x)} v$, we have $M^n(x) v
= \sum_{j \in I} M^n(x) v_j$, where
\begin{equation}
\label{eq:norm_vj}
\norm{v_j} \leq K_1 e^{-n (\abs{\lambda_i -\lambda_j} - 6(d-1)\epsilon)}.
\end{equation}
As $M^n(x)$ expands by at most $e^{n (\lambda_j + 3\epsilon)}$ on
$F_j^{(n)}(x)$, thanks to Step $1$, we obtain
\begin{equation*}
\norm{M^n(x) v_j} \leq K_1 e^{-n (\abs{\lambda_i -\lambda_j} - 6(d-1)\epsilon)} e^{n (\lambda_j + 3\epsilon)}
\leq K_1 e^{n (\lambda_i + 6d\epsilon)}.
\end{equation*}
Here, it is essential to have in Step 2 a control in terms of
$\lambda_j-\lambda_i$, and not merely some exponentially decaying term
without a control on the exponent. This proves the upper bound
in~\eqref{eq:step4}.
For the lower bound, we write $\norm{M^n(x) v} \geq \norm{M^n(x) v_i}$ as all
the vectors $M^n(x) v_j$ are orthogonal. This is bounded from below by
$e^{n (\lambda_i - 3\epsilon)} \norm{v_i}$, by Step 1. To conclude, it
suffices to show that $\norm{v_i}$ is bounded from below by a constant if $n$
is large enough. As $\norm{v_i} \geq \norm{v} - \sum_{j \neq i} \norm{v_j}$,
this follows from the fact that $\norm{v_j}$ tends to $0$ with $n$ if $j\neq
i$, thanks to~\eqref{eq:norm_vj}.
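Explicitly, by~\eqref{eq:norm_vj} one may choose $N_2 \geq N_1$ so large that $\sum_{j \neq i} \norm{v_j} \leq 1/2$ for all $n \geq N_2$, whence $\norm{v_i} \geq 1/2$ and
\begin{equation*}
\norm{M^n(x) v} \geq \tfrac{1}{2}\, e^{n (\lambda_i - 3\epsilon)} \geq \tfrac{1}{2}\, e^{n (\lambda_i - 6d\epsilon)},
\end{equation*}
while the upper bound holds with the constant $d K_1$ by summing the bounds on $\norm{M^n(x) v_j}$ over $j \in I$; thus $K_2 = \max(2, d K_1)$ is an admissible choice.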
\end{proof}
We recall that we are trying to control the behavior of $M^n(x)$ not on
$F_i^{(\infty)}(x)$, but on the Oseledets subspace $E_i(x) = F_{\geq
i}^{(\infty)}(x) \cap F_{\leq i}^{(-\infty)}(x)$. To this effect, there is in
the statement of Theorem~\ref{thm:deterministic_Oseledets} an additional
angle assumption that we will use now. Let $\rho>0$ be given as in the
statement of the theorem. There exists $\delta > 0$ with the following
property: if $U$ and $V$ are two subspaces of complementary dimension making
an angle at least $\rho$, then any subspaces $U'$ and $V'$ with $\df(U, U')
\leq \delta$ and $\df(V, V') \leq \delta$ make an angle at least $\rho/2$.
We fix once and for all $m_0=m_0(C, \epsilon, \delta) \geq N_2$ such that,
for all $i\in I$ and all $m\geq m_0$, one has $\df(F_{\geq i}^{(m)}(x),
F_{\geq i}^{(\infty)}(x)) \leq \delta$ and $\df(F_{\leq i}^{(-m)}(x), F_{\leq
i}^{(-\infty)}(x)) \leq \delta$. Its existence follows from the convergence
asserted at the end of Step 3 (and from the same result for $T^{-1}$).
Assume now (and until the end of the proof) that, for some $m\geq m_0$, the
angle between $F_{\geq i}^{(m)}(x)$ and $F_{<i}^{(-m)}(x)$ is $\geq \rho$, as
in the assumptions of the theorem. It follows then that the angle between
$F_{\geq i}^{(\infty)}(x)$ and $F_{< i}^{(-\infty)}(x)$ is at least $\rho/2$.
As a consequence, the spaces $F_{\geq i}^{(\infty)}(x)$ and $F_{\leq
i}^{(-\infty)}(x)$ are transverse, and their intersection is a
$d_i$-dimensional space $E_i(x)$.
\medskip
\emph{Step 5: there exist constants $K_3>0$ and $N_3 \geq N_2$ such that, for
all $n \geq N_3$, all $i\in I$ and all $v \in E_i(x)$ with norm $1$, holds}
\begin{equation}
\label{eq:step5}
K_3^{-1} e^{n(\lambda_i -6d \epsilon)} \leq \norm{M^n(x) v} \leq K_3 e^{n (\lambda_i + 6d\epsilon)}.
\end{equation}
\begin{proof}
We have $v \in E_i(x) \subseteq F_{\geq i}^{(\infty)}(x)$. Decomposing the
vector $v$ along its components $v_j \in F_j^{(\infty)}(x)$ with $j\in I \cap
[i, d]$ and using the upper bound of~\eqref{eq:step4} for each $v_j$, the
upper bound in~\eqref{eq:step5} readily follows.
For the lower bound, we note that $E_i(x)$, being contained in $F_{\leq
i}^{(-\infty)}(x)$, makes an angle at least $\rho/2$ with
$F_{>i}^{(\infty)}(x)$. This implies that the norm of the projection $v_i$ of
$v$ on $F_i^{(\infty)}(x)$ is bounded from below, by a constant $c_0 > 0$.
Using both the upper and the lower bounds of Step 4, we obtain
\begin{equation*}
\norm{M^n(x) v} \geq \norm{M^n(x) v_i} - \sum_{j \in I, j > i} \norm{M^n(x) v_j}
\geq c_0 K_2^{-1} e^{n(\lambda_i -6d \epsilon)} - \sum_{j\in I, j > i} K_2 e^{n (\lambda_j + 6d\epsilon)}.
\end{equation*}
The choice of $\epsilon$ ensures that, for $j>i$ in $I$, one has $\lambda_i
-6d \epsilon > \lambda_j + 6d\epsilon$. Hence, the sum in this equation is
asymptotically negligible, and we obtain a lower bound $c_0 K_2^{-1}
e^{n(\lambda_i -6d \epsilon)} /2$ if $n$ is large enough.
\end{proof}
\emph{Step 6: there exists a constant $K_4$ such that, for all $n \in \Z$,
all $i\in I$ and all $v \in E_i(x)$ with norm $1$, holds}
\begin{equation}
\label{eq:step6}
K_4^{-1} e^{n\lambda_i -6d \epsilon \abs{n}} \leq \norm{M^n(x) v}
\leq K_4 e^{n \lambda_i + 6d\epsilon \abs{n}}.
\end{equation}
\begin{proof}
Step 5 shows that this control holds uniformly over $n \geq N_3$. The same
argument applied to the cocycle $M^{-1}$ and the map $T^{-1}$ gives the same
control for $n \leq -N_3$ (note that the function $B_\epsilon(x)$, which is
bounded by $C$ by assumption, controls both positive and negative times).
Finally, the control over $n \in (-N_3, N_3)$ follows from the finiteness of
this interval, and the uniform boundedness of $M$ and $M^{-1}$.
\end{proof}
We can finally conclude the proof of
Theorem~\ref{thm:deterministic_Oseledets}. We want to bound the quantity
$A_{20d \epsilon}(x)$ defined in~\eqref{eq:def_A_epsilon}. Fix $i\in I$, $v
\in E_i(x) \setminus \{0\}$ and $m, n \in \Z$. Then, using the upper bound
of~\eqref{eq:step6} for $\norm{M^n(x) v}$ and the lower bound for
$\norm{M^m(x) v}$, we get
\begin{align*}
\frac{ \norm{M^n(x) v}}{\norm{M^m(x) v}} & e^{-(n-m)\lambda_i} e^{-(\abs{n} + \abs{m}) (20 d \epsilon)/2}
\\& \leq K_4 e^{n \lambda_i + 6d \epsilon \abs{n}} \cdot K_4 e^{-m \lambda_i + 6d \epsilon \abs{m}} \cdot
e^{-(n-m)\lambda_i} e^{-(\abs{n} + \abs{m}) (20 d \epsilon)/2}
\\ &
= K_4^2 e^{-(\abs{n} + \abs{m})4d \epsilon}
\leq K_4^2.
\end{align*}
Taking the supremum over $i\in I$, $v \in E_i(x) \setminus \{0\}$ and $m, n
\in \Z$, this shows that $A_{20d \epsilon}(x) \leq K_4^2$. This concludes the
proof, for $D = K_4^2$. \qed
\section{Exponential returns to Pesin sets}
\label{sec:proof_returns_Pesin}
In this section, we prove Theorem~\ref{thm:exp_returns_Pesin}. As in the
assumptions of this theorem, let us consider a transitive subshift of finite
type $T$, with a Gibbs measure $\mu$ and a Hölder cocycle $M$ which has
exponential large deviations for all exponents. Let $\delta>0$. We wish to
show that, for some $D>0$, the set
\begin{equation*}
\{x \st \Card\{k \in [0, n-1] \st A_\epsilon(T^k x) > D\} \geq \delta n\}
\end{equation*}
has exponentially small measure. Reducing $\epsilon$ if necessary, we can
assume $\epsilon < \abs{\lambda_i-\lambda_j}$ for all $i\neq j \in I$. Set
$\epsilon' = \epsilon/(20d)$.
The angle between the Lyapunov subspaces is almost everywhere nonzero. In
particular, given $i\in I$, the angle between $F_{\geq i}^{(\infty)}(x)$ and
$F_{<i}^{(-\infty)}(x)$ is positive almost everywhere. On a set of measure
$>1-\delta/2$, it is bounded from below by a constant $2 \rho>0$ for all $i$.
These subspaces are the almost sure limit of $F^{(m)}_{\geq i}(x)$ and
$F^{(-m)}_{<i}(x)$, according to Theorem~\ref{thm:Oseledets_limit}. Hence, if
$m$ is large enough, say $m \geq m_1$, the set
\begin{multline*}
U = U_m = \{x\in X \st \forall i \in I, F^{(m)}_{\geq i}(x) \text{ and }F^{(-m)}_{<i}(x)\text{ are well defined} \\
\text{and } \angle(F^{(m)}_{\geq i}(x),F^{(-m)}_{<i}(x)) > \rho\}
\end{multline*}
has measure $>1-\delta/2$.
We will use the functions $B^{(i)\pm}_{\epsilon'}$ defined
before~\eqref{eq:def_B_epsilon}. For each $i \in [1,d]$ and $\sigma \in
\{+,-\}$, there exists a constant $C_{i,\sigma}$ such that
\begin{equation*}
\{x \st \Card\{k \in [0, n-1] \st B^{(i)\sigma}_{\epsilon'}(T^k x) > C_{i,\sigma}\} \geq \delta n/ (4d)\}
\end{equation*}
has exponentially small measure, by Theorem~\ref{thm:exp_returns} and the
assumption on exponential large deviations for all exponents. (For
$\sigma=-$, this theorem should be applied to $T^{-1}$). Let $C' = \max_{i,\sigma}
C_{i,\sigma}$. As $B_{\epsilon'}$ is the maximum of the functions
$B^{(i)\sigma}_{\epsilon'}$, it follows that
\begin{equation*}
\{x \st \Card\{k \in [0, n-1] \st B_{\epsilon'}(T^k x) > C' \} \geq \delta n/ 2\}
\end{equation*}
has exponentially small measure.
We apply Theorem~\ref{thm:deterministic_Oseledets} with $\epsilon=\epsilon'$
and $C=C'$ and $\rho$, obtaining some integer $m_0 \geq 1$ and some constant
$D$ with the properties described in Theorem~\ref{thm:deterministic_Oseledets}. Let us fix until the end of
the proof $m = \max(m_0, m_1)$.
The set $U = U_m$ is open by continuity of $M^m$ and $M^{-m}$. In particular,
it contains a set $V$ which is a finite union of cylinders, with
$\mu(V)>1-\delta/2$. To conclude, it suffices to show that
\begin{equation}
\label{eq:piouqsif}
\{x \st \Card\{k \in [0, n-1] \st T^k x \notin V\} \geq \delta n/ 2\}
\end{equation}
has exponentially small measure. Indeed, assume this holds. Then, apart from
an exponentially small set, there are at most $\delta n$ bad times $k$ in
$[0,n-1]$ for which $T^k x \notin V$ or $B_{\epsilon'}(T^k x) > C'$. For the
other good times, we have $T^k x \in V$ and $B_{\epsilon'}(T^k x) \leq C'$.
Then Theorem~\ref{thm:deterministic_Oseledets} shows that $A_\epsilon(T^k x)
= A_{20d\epsilon'}(T^k x)\leq D$, as desired.
It remains to control~\eqref{eq:piouqsif}. Let $\chi_{V}$ denote the
characteristic function of $V$; it is continuous since $V$ is a finite union of cylinders. The set
in~\eqref{eq:piouqsif} is
\begin{equation*}
\{x \st S_n \chi_{V} (x) < (1-\delta/2) n\}.
\end{equation*}
As $\int \chi_{V} = \mu(V) > 1-\delta/2$ by construction, the large deviation
principle for continuous functions shows that this set is indeed
exponentially small. This concludes the proof of the theorem. \qed
\section{Introduction}\label{introduction}
Thirty years ago, David Vogan conjectured a purely local description of A-packets for $p$-adic groups \cite{Vogan:Langlands}, closely related to a more developed theory for real groups by Adams, Barbasch and Vogan \cite{ABV}.
While there is considerable evidence in the form of examples from \cite{CFMMX}*{Chapters 11-16}, Vogan's conjecture for $p$-adic groups remains open.
In this paper we prove this conjecture for general linear groups over $p$-adic fields, building on previous work in \cite{CR:irred}. We view this as a step toward proving Vogan's conjecture for the groups treated by Arthur in \cite{Arthur:book}, and the strategy of the proof given here reflects this objective.
This conjecture was first explicated in \cite{CFMMX}*{Conjecture~1} for a quasisplit symplectic or orthogonal $p$-adic group $G$ and attributed to Vogan. It predicts that, for every Arthur parameter $\psi : W''_F \to \Lgroup{G}$, the A-packet $\Pi_\psi(G)$ of representations of $G(F)$ coincides with the ABV-packet attached to the Langlands parameter $\phi_\psi$ determined by $\psi$. As defined in \cite{CFMMX}*{Definition~1} and recalled in \cite{CR:irred}, the ABV-packet for $\phi_\psi$ is given by
\[
\Pi^{\mbox{\raisebox{1pt}{\scalebox{0.5}{$\mathrm{ABV}$}}}}_{\phi_\psi}(G) \coloneqq
\left\{
\pi\in \Pi_{\lambda}(G) \ \vert \
\operatorname{Evs}_\psi \mathcal{P}(\pi) \ne 0
\right\},
\]
where
\begin{itemize}
\item
$\Pi_{\lambda}(G)$ is the set of equivalence classes of irreducible, smooth representations of $G(F)$ with infinitesimal parameter $\lambda$ determined by $\psi$;
\item
$\mathcal{P}(\pi)$ is the simple object in the category $\operatorname{Per}_{H_\lambda}(V_\lambda)$ of equivariant perverse sheaves on the moduli space $V_\lambda$ of Langlands parameters matching $\pi$ under the enhanced local Langlands correspondence, with $H_\lambda \coloneqq Z_{\dualgroup{G}}(\lambda)$;
\item
$\operatorname{Evs}_\psi$ is the functor
$$
\operatorname{Evs}_\psi : \operatorname{Per}_{H_\lambda}(V_\lambda) \to \operatorname{Loc}_{H_\lambda}(T^*_{C_\psi}(V_{\lambda})^\text{reg}) \equiv \operatorname{Rep}(A_\psi),
$$
introduced in \cite{CFMMX}*{Section 7.10}, where $C_\psi$ is the $H_\lambda$-orbit of the point in $V_\lambda$ corresponding to $\phi_\psi$ and where $T^*_{C_\psi}(V_{\lambda})^\text{reg}$ is the regular part of the conormal bundle $T^*_{C_\psi}(V_{\lambda})$.
\end{itemize}
These terms are all defined carefully in \cite{CFMMX} and revisited in \cite{CR:irred}.
The main result of this paper, Theorem~\ref{thm:main}, shows that
\begin{equation}\label{equation:main}
\Pi^{\mbox{\raisebox{1pt}{\scalebox{0.5}{$\mathrm{ABV}$}}}}_{\phi_\psi}(G) = \Pi_\psi(G),
\end{equation}
for every Arthur parameter $\psi$ for $G$, for $G = \operatorname{GL}_n$.
Let us now sketch the proof of Theorem~\ref{thm:main}. We begin with an arbitrary Arthur parameter $\psi$ of $G$, a map $W''_F=W_F \times \operatorname{SL}_2(\mathbb{C})\times \operatorname{SL}_2(\mathbb{C}) \to \operatorname{GL}_n(\mathbb{C})$. This map, in general, has the form
\[
\psi = \psi_1\oplus\cdots\oplus\psi_k.
\]
Here each $\psi_i$ is an irreducible Arthur parameter; set $m_i \coloneqq \dim \psi_{i}$.
This decomposition naturally picks out a Levi subgroup $M \simeq \operatorname{GL}_{m_1}\times \cdots \times \operatorname{GL}_{m_k}$. Observe that $\dualgroup{M}$ is a Levi subgroup of $\dualgroup{G}$ containing the image of $\psi$.
Pick $s\in \dualgroup{G}$, of finite order, and therefore semisimple, so that $Z_{\dualgroup{G}}(s) = \dualgroup{M}$.
Let $\psi_M : W''_F \to \dualgroup{M}$ be the Arthur parameter for $M$ such that the following diagram commutes.
\[
\begin{tikzcd}
W''_F \arrow{rr}{\psi} \arrow{dr}[swap]{\psi_M} && \dualgroup{G}\\
& \dualgroup{M} =Z_{\dualgroup{G}}(s) \arrow{ur}
\end{tikzcd}
\]
Observe that $\psi_M$ is an irreducible Arthur parameter for $M$.
Let $\lambda_M$ be the infinitesimal parameter of $\psi_M$.
Now the inclusion $Z_{\dualgroup{G}}(s) \to \dualgroup{G}$ induces an inclusion
\[
\varepsilon : V_{\lambda_M} \hookrightarrow V_\lambda,
\]
which is equivariant for the action of $H_{\lambda_M}\coloneqq Z_{\dualgroup{M}}(\lambda_M)$ on $V_{\lambda_M}$ and the action of $H_\lambda \coloneqq Z_{\dualgroup{G}}(\lambda)$ on $V_\lambda$.
Indeed,
\[
V_{\lambda_M} = V_\lambda^s = \{ x\in V_{\lambda} \ \vert \ \operatorname{Ad}(s)x = x\}.
\]
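To illustrate the construction in a toy case: for $G = \operatorname{GL}_2$ and $\psi = \psi_1 \oplus \psi_2$ with $m_1 = m_2 = 1$, one may take $s = \operatorname{diag}(1,-1)$, which has order $2$, so that
\[
\dualgroup{M} = Z_{\operatorname{GL}_2(\mathbb{C})}(s)
= \left\{ \operatorname{diag}(a,d) \ \vert \ a, d \in \mathbb{C}^\times \right\}
\simeq \operatorname{GL}_1(\mathbb{C}) \times \operatorname{GL}_1(\mathbb{C}),
\]
and $V_{\lambda_M} = V_\lambda^s$ is the fixed locus of $\operatorname{Ad}(s)$ in $V_\lambda$.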
Because $\psi_M$ is irreducible, earlier work \cite{CR:irred} establishes Vogan's conjecture for $\psi_M$:
\[
\Pi^{\mbox{\raisebox{1pt}{\scalebox{0.5}{$\mathrm{ABV}$}}}}_{\phi_{\psi_M}}(M) = \Pi_{\psi_M}(M).
\]
\iffalse
Now we may return to the argument above and conclude
\[
\Pi^{\mbox{\raisebox{1pt}{\scalebox{0.5}{$\mathrm{ABV}$}}}}_{\phi_{\psi}}(G) = \Pi_{\psi}(G).
\]
In this way we complete the proof of Vogan's conjecture on A-packets for $p$-adic general linear groups.
\fi
In order to lift this result from $M$ to $G$, we use the local Langlands correspondence to define a pairing between two Grothendieck groups that appear naturally on either side of the correspondence: on the spectral side, the category $\operatorname{Rep}^\text{fl}_{\lambda}(G)$ of finite-length representations of $G(F)$ with infinitesimal parameter $\lambda$ determined by $\psi$; on the Galois/geometric side, the category $\operatorname{Per}_{H_{\lambda}}(V_{\lambda})$ of $H_{\lambda}$-equivariant perverse sheaves on $V_{\lambda}$.
In Section~\ref{subsection:eta} we recall a virtual representation $\eta^{\operatorname{Evs}}_{\psi} \in K\operatorname{Rep}^\text{fl}_\lambda(G)$ that characterizes the ABV-packet $\Pi^{\mbox{\raisebox{1pt}{\scalebox{0.5}{$\mathrm{ABV}$}}}}_{\phi_\psi}(G)$, with the property that
\[
\langle \eta^{\operatorname{Evs}}_{\psi}, [\mathcal{F}] \rangle_{\lambda}
=
(-1)^{d(\psi)} \operatorname{rank}\left(\operatorname{Evs}_\psi \mathcal{F}\right), \qquad \forall \mathcal{F}\in \operatorname{Per}_{H_\lambda}(V_\lambda),
\]
where $d(\psi)$ is the dimension of the $H_\lambda$-orbit of $x_\psi$ in $V_\lambda$; see Proposition~\ref{prop:eta}.
\iffalse
The strategy of our argument to prove Vogan's conjecture for $G$ is to show that if $M$ is a Levi subgroup of $G$ and if $\psi_M$ is an Arthur parameter for $M$ for which the ABV-packet and A-packets for $\psi_M$ coincide, then the same is true for the lift $\psi$ of $\psi_M$ to an Arthur parameter for $G$. To do this, we reformulate our goal in terms of virtual representations. We recall that the virtual representations $\eta_{\psi}$ and $\eta_{\psi}^{\operatorname{Evs}}$ are linear combinations of representations in the A-packet and ABV-packet, respectively (viewed as elements in their Grothendiek groups).
\fi
Since A-packets are singletons for $\operatorname{GL}_n$, we set $\eta_{\psi} \coloneqq [\pi_{\psi}]$. Thus, to show that the ABV-packet coincides with this A-packet, it is enough to show
\[\eta^{\operatorname{Evs}}_{\psi}=\eta_{\psi}=[\pi_{\psi}].\]
As the pairing $\langle \cdot, \cdot \rangle_{\lambda}$ is non-degenerate, this is equivalent to showing that
\[
\langle \eta^{\operatorname{Evs}}_{\psi}, [\mathcal{F}] \rangle_{\lambda} = \langle \eta_{\psi}, [\mathcal{F}] \rangle_{\lambda},
\]
for all $\mathcal{F}\in \operatorname{Per}_{H_\lambda}(V_\lambda)$. We do this by showing the three equalities below, for all $ \mathcal{F}\in \operatorname{Per}_{H_\lambda}(V_\lambda)$.
\[
\begin{tikzcd}
{\langle \eta^{\operatorname{Evs}}_{\psi}, [\mathcal{F}] \rangle_{\lambda}}
\arrow[equal]{d}[swap]{\text{Fixed-point formula \hskip3pt}}{\text{Prop.~}\ref{prop:M}}
\arrow[dashed,equal]{rrrr}{\text{Vogan's conjecture for $\psi$}}[swap]{\text{Theorem~\ref{thm:main}}}
&&&&
{\langle \eta_{\psi}, [\mathcal{F}] \rangle_{\lambda}}
\\
\langle \eta^{\operatorname{Evs}}_{\psi_M}, [\mathcal{F}\vert_{V_{\lambda_M}}] \rangle_{\lambda_M}
\arrow[equal]{rrrr}[swap]{\text{Vogan's conjecture for $\psi_M$}}{\text{Prop.~}\ref{prop:M}}
&&&&
\langle \eta_{\psi_M}, [\mathcal{F}\vert_{V_{\lambda_M}}] \rangle_{\lambda_M}
\arrow[equal]{u}[swap]{\text{\hskip3pt Endoscopic lifting}}{\text{Prop.~}\ref{prop: Lift on irreducibles of A type}}
\end{tikzcd}
\]
\iffalse
The proof of Theorem~\ref{thm:main} uses the Kazhdan-Lusztig Hypothesis \cite{zelevinskii1981p}.
There is ambiguity in the literature about the statement and status of the $p$-adic Kazhdan-Lusztig Hypothesis, so we devote Section~\ref{section:KLH} to making the statement precise and point to other work for clarity regarding its proof.
\fi
We remark that even for general linear groups, non-Arthur type ABV-packets hold some surprises.
Specifically, \cite{CFK} presents a non-Arthur type Langlands parameter $\phi_{\operatorname{KS}}$ for $\operatorname{GL}_{16}$ such that $\Pi^{\mbox{\raisebox{1pt}{\scalebox{0.5}{$\mathrm{ABV}$}}}}_{\phi_{\operatorname{KS}}}(\operatorname{GL}_{16})$ consists of two representations.
Even more remarkably, the coronal representation $\pi_\psi$ in $\Pi^{\mbox{\raisebox{1pt}{\scalebox{0.5}{$\mathrm{ABV}$}}}}_{\phi_{\operatorname{KS}}}(\operatorname{GL}_{16})$ is of Arthur type.
The main result of this paper implies $\Pi^{\mbox{\raisebox{1pt}{\scalebox{0.5}{$\mathrm{ABV}$}}}}_{\phi_\psi}(\operatorname{GL}_{16}) = \Pi_\psi(\operatorname{GL}_{16})$.
This is an example, then, of the following containments.
\[
\begin{tikzcd}
& \Pi^{\mbox{\raisebox{1pt}{\scalebox{0.5}{$\mathrm{ABV}$}}}}_{\phi_{\operatorname{KS}}}(\operatorname{GL}_{16}) = \{ \pi_{\phi_{\operatorname{KS}}}, \pi_\psi \} & \\
\arrow[>->]{ur} \Pi_{\phi_{\operatorname{KS}}}(\operatorname{GL}_{16}) = \{ \pi_{\phi_{\operatorname{KS}}} \} && \arrow[>->]{ul} \{ \pi_\psi\} = \Pi_\psi(\operatorname{GL}_{16})
\end{tikzcd}
\]
\subsection{Acknowledgements}
Some ideas in this paper are inspired by \cite{ABV} which treats real groups; we are happy to acknowledge the debt we owe to these three authors.
We also thank the entire Voganish Project research group, especially Bin Xu, Geoff Vooys and Kristaps Balodis, for their contributions to this research. We also thank Matthew Sunohara and Tom Haines for helpful conversations.
\subsection{Relation to other work}
This paper is part of the Voganish Project; for related results, we refer to \cite{CFMMX}, \cite{Mracek}, \cite{CFK}, \cite{CFZ:cubics}, \cite{CFZ:unipotent}, \cite{CR:irred}.
The main result of this paper, Theorem~\ref{thm:main} can be proved by an argument different than the one presented here, as we now explain.
In \cite{CR:irred}*{Lemma 5.1} we proved the following geometric statement: if $\psi$ is a simple (irreducible with trivial restriction to $W_F$) Arthur parameter and $C_\psi$ is its associated $H_\lambda$-orbit in $V_\lambda$, then, for any $H_\lambda$-orbit $C$ in $V_\lambda$, $C_\psi \leq C$ and $C^*_\psi \leq C^*$ implies $C_\psi = C$; here we refer to the Zariski-closure relation on these orbits. In his MSc thesis written under the supervision of Andrew Fiori, Connor Riddlesden \cite{Riddlesden} extended this result to unramified Arthur parameters $\psi$, dropping the irreducibility condition appearing in \cite{CR:irred}*{Lemma 5.1}, though not applying to arbitrary Arthur parameters. When combined with unramification as it appears in \cite{CR:irred}*{Section 6, especially Lemma 6.8} and results from \cite{CFMMX}, these can be assembled to give an alternate proof of our main result, Theorem~\ref{thm:main}.
While our proof of Theorem~\ref{thm:main} is perhaps more complicated than this alternate argument, we believe that our strategy is better adapted to generalizations, specifically, to proving Vogan's conjecture on A-packets to quasisplit classical groups and their pure inner forms. This belief is based on the fact that, in this paper, we have used endoscopic lifting
in a very special case. While it is characterized by parabolic induction in this case, we expect Langlands-Shelstad transfer to play a role more generally.
Since Arthur's packets are characterized by Langlands-Shelstad transfer and Kottwitz-Shelstad transfer, together with certain normalizing choices referring to Whittaker models, we expect the geometric incarnation of both kinds of transfer to play an important role in extending our main result to other groups $G$.
\subsection{Notation}\label{subsection:notation}
In this work, for the most part, we follow notational conventions established in \cite{CR:irred}.
Here, $F$ is a $p$-adic field.
Henceforth, $G$ is $\operatorname{GL}_n$ and $P$ is a parabolic subgroup of $G$ with Levi subgroup $M$ and unipotent radical $N$; these statements are made in the category of algebraic groups over $F$, not their $F$-points, for which we use the notation $G(F)$, $P(F)$, etc.
We use the notation
\[
W_F' \coloneqq W_F\times \operatorname{SL}_2(\mathbb{C});
\]
this topological group is denoted by $L_F$ in Arthur's work.
We also use the notation
\[
W_F'' \coloneqq W_F\times \operatorname{SL}_2(\mathbb{C})\times \operatorname{SL}_2(\mathbb{C});
\]
this topological group is denoted by $L'_F$ in Arthur's work.
Let $\dualgroup{G}$ denote the complex dual group of $G$, which for us is simply $\operatorname{GL}_n(\mathbb{C})$.
By a Langlands parameter we mean an admissible homomorphism $\phi: W'_F \to \dualgroup{G}$, as defined in \cite{Borel:Corvallis}, for example.
We refer to $\dualgroup{G}$-conjugacy classes of Langlands parameters as L-parameters.
An infinitesimal parameter $\lambda: W_F\to \dualgroup{G}$ is simply a Langlands parameter with domain $W_F$.
The infinitesimal parameter $\lambda_\phi$ of a Langlands parameter $\phi$ is defined by
\[
\lambda_\phi(w) \coloneqq \phi(w,\operatorname{diag}(|w|^{1/2}, |w|^{-1/2})).
\]
By an Arthur parameter we mean a homomorphism $\psi: W''_F \to \dualgroup{G}$ satisfying conditions explained in \cite{CFMMX}*{Section 3.5}, notably, that its restriction to $W_F$ is bounded.
The Langlands parameter $\phi_\psi$ is defined by $\phi_\psi(w,x)\coloneqq \psi(w,x,\operatorname{diag}(|w|^{1/2}, |w|^{-1/2}))$.
The infinitesimal parameter $\lambda_\psi$ of $\psi$ is the infinitesimal parameter of $\phi_\psi$, thus given by \[
\lambda_\psi(w) \coloneqq \psi(w,\operatorname{diag}(|w|^{1/2}, |w|^{-1/2}),\operatorname{diag}(|w|^{1/2}, |w|^{-1/2})).
\]
When $\psi$ has been fixed, we set $\lambda=\lambda_\psi$.
For a smooth irreducible representation $\sigma$ of $M(F)$, the symbol $\operatorname{Ind}_P^G(\sigma)$ denotes the normalized parabolic induction of the representation $\sigma$ of the $F$-rational points $M(F)$ of the Levi subgroup $M$ of $P$; this means we inflate $\sigma$ from $M(F)$ to $P(F)$, twist by the modulus quasicharacter $\delta_P^{1/2}$ for the parabolic $P(F)$ and then induce from $P(F)$ to $G(F)$.
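To make the normalization concrete in the simplest case: for $G=\operatorname{GL}_2$, $P=B$ the Borel subgroup of upper-triangular matrices and $M=T$ the diagonal torus, we have $\delta_B(\operatorname{diag}(a,d)) = |a/d|$, so for characters $\chi_1,\chi_2$ of $F^\times$ the normalized induction $\operatorname{Ind}_B^{G}(\chi_1\otimes\chi_2)$ is the space of smooth functions $f: G(F)\to \mathbb{C}$ satisfying
\[
f(tng) = \chi_1(a)\,\chi_2(d)\,|a/d|^{1/2}\, f(g),
\qquad t = \operatorname{diag}(a,d),\ n\in N(F),
\]
with $G(F)$ acting by right translation; the twist by $\delta_B^{1/2}$ ensures, for instance, that induction carries unitary representations to unitary representations.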
In this paper we write $\operatorname{rank}(E)$ for the Euler characteristic of a graded vector space $E = \oplus_{i\in \mathbb{Z}} E^i$:
\[
\operatorname{rank}(E) = \sum_{i\in \mathbb{Z}}(-1)^i \dim(E^i).
\]
For $\mathcal{F}\in \operatorname{D}_{H_\lambda}(V_\lambda)$ and $x\in V_\lambda$, we write $\mathcal{F}_x \in \operatorname{D}_{Z_H(x)}(x)$ for the stalk of $\mathcal{F}$ at $x$ and $\mathcal{H}^\bullet_x\mathcal{F}$ for its cohomology complex, often viewed as a graded vector space.
We follow the conventions of \cite{BBD} regarding perverse sheaves; in particular, the restriction of ${\mathcal{IC}\hskip-1pt }(\mathcal{L}_C)$ to $C$ is $\mathcal{L}_C[\dim C]$, and complexes are shifted according to $(\mathcal{F}[n])^i = \mathcal{F}^{n+i}$.
The notation ${\mathbbm{1}}_X$ is used to denote the constant sheaf on $X$.
\section{Preliminaries on Vogan's conjecture on A-packets}\label{section:eta}
In this section we revisit the definition of ABV-packets for $p$-adic groups from \cite{CFMMX} and cast it in a form that is adapted to the proof of the main result, Theorem~\ref{thm:main}.
Instead of working purely over $\Pi_{\lambda}(G)$ and $\operatorname{Per}_{H_{\lambda}}(V_{\lambda})_{/\op{iso}}^{\op{simple}}$ as in the previous paper \cite{CR:irred}, we work over the Grothendieck groups $K\operatorname{Rep}_{\lambda}^{\op{fl}}(G)$ and $K\operatorname{Per}_{H_{\lambda}}(V_{\lambda})$. We therefore spend some time explaining these groups.
\subsection{Spectral side}
By the Langlands correspondence, the $\dualgroup{G}$-conjugacy class of $\lambda$ is identified with a cuspidal support $(L,\sigma)_{G} \in \Omega(G)$, the Bernstein variety for $G(F)$. We remark that $L$ is a Levi subgroup of $M$, where $M$ is determined by $\psi$ as in Section~\ref{introduction}.
Let $\operatorname{Rep}_\lambda(G)$ be the \emph{cuspidal support category} of smooth representations of $G(F)$ whose Jordan-H\"older series is contained in the Jordan-H\"older series of $\operatorname{Ind}_P^G(\sigma)$, where $P$ is a parabolic subgroup of $G$ with Levi component $L$.
Let $\operatorname{Rep}^\text{fl}_\lambda(G)$ be the subcategory of finite-length representations. The Grothendieck group $K\operatorname{Rep}_{\lambda}^{\op{fl}}(G)$ has two natural bases: one consisting of the smooth irreducible representations of $G(F)$ that share the infinitesimal parameter $\lambda$, and the other consisting of the standard representations attached to these irreducible representations. We use both bases in the rest of the paper, so we recall the theory surrounding these objects below.
\subsubsection{Irreducible representations}
Let $\Pi_\lambda(G) = \{ \pi_i \ \vert \ i\in I \}$ be the Jordan-H\"older series of $\mathop{\text{Ind}}_P^G(\sigma)$;
this may be identified with the set of isomorphism classes of irreducible admissible representations of $G(F)$ with infinitesimal parameter $\lambda$.
Smooth irreducible representations of $G(F)$ are classified by Zelevinsky theory \cite{Z2}.
This theory was surveyed beautifully in \cite{kudla1994local}, and we use the notation of that survey. For any representation $\pi$ of $\operatorname{GL}_m(F)$, let $\pi(i):=|\det(\cdot)|^i\pi$.
For a partition $n=\underbrace{m+m+\ldots+m}_{r\text{-times}}$ and a supercuspidal representation $\sigma$ of $\operatorname{GL}_m(F)$, we call
\begin{equation}
\label{segment}
(\sigma, \sigma(1), \ldots, \sigma(r-1))=[\sigma,\sigma(r-1)]=:\Delta
\end{equation}
a segment.
This segment determines a representation of a parabolic subgroup of $G$ whose Levi subgroup is $\underbrace{\operatorname{GL}_m(F) \times \operatorname{GL}_m(F) \times \cdots \times \operatorname{GL}_m(F)}_{r\text{-times}}$.
We can then carry out parabolic induction to obtain the induced representation $\operatorname{Ind}_P^{G}(\sigma \otimes \sigma(1)\otimes \cdots \otimes \sigma(r-1))$ of $G$, which has a unique irreducible quotient denoted by $Q(\Delta)$. We refer to $Q(\Delta)$ as the \textit{Langlands quotient} associated to $\Delta$. For a segment $\Delta=[\sigma,\sigma(r-1)]$, we set
\[\Delta(x)\coloneqq [\sigma(x),\sigma(r-1+x)].\]
A multisegment is a multiset of segments. A segment $\Delta_1$ is said to \textit{precede} $\Delta_2$ if $\Delta_1 \not \subset \Delta_2$, $\Delta_2 \not \subset \Delta_1$, and there exists a positive integer $x$ so that \[\Delta_2=\Delta_1(x) \]
and $\Delta_1 \cup \Delta_2$ is a segment. A multisegment $\{\Delta_1,\Delta_2,\ldots, \Delta_k\}$ in which $\Delta_i$ does not precede $\Delta_j$ for any $i<j$ is said to satisfy the ``does not precede'' condition.
Now let $\alpha=\{\Delta_1, \Delta_2, \ldots, \Delta_k\}$ be a multisegment satisfying the ``does not precede'' condition. Let $P'$ denote the standard parabolic subgroup specified by $\alpha$. The Langlands classification theorem tells us that any smooth irreducible representation of $G$ occurs as the unique irreducible quotient of a parabolically induced representation $\operatorname{Ind}_{P'}^G(Q(\Delta_1)\otimes \cdots \otimes Q(\Delta_k))$; we denote that quotient by $Q(\Delta_1, \Delta_2, \ldots, \Delta_k)$ or $Q(\alpha)$ and refer to it as the \textit{Langlands quotient} associated to $\alpha$; see \cite{kudla1994local}*{Theorem 1.2.5}. Next, for integers $i<j$ we introduce the notation
\begin{equation}
\label{oursegment}
[i,j]:=(|\cdot|^i, |\cdot|^{i+1}, \ldots, |\cdot|^j)
\end{equation}
for a segment which is the special case of \eqref{segment} obtained by taking the partition $1+1+ \cdots +1$ and $\sigma$ to be the character $|\cdot|$ of $F^{\times}$. This notation extends to half-integers $i<j$ as long as $j-i+1$ is a positive integer (the length of the segment). A segment $\{|\cdot|^i\}$ of length $1$ is denoted simply by $[i]$.
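For example, with this notation, the segment $[-\tfrac{1}{2},\tfrac{1}{2}]$ and the multisegment $\{[\tfrac{1}{2}],[-\tfrac{1}{2}]\}$ (listed in an order satisfying the ``does not precede'' condition) give
\[
Q([-\tfrac{1}{2},\tfrac{1}{2}]) = \operatorname{St}_2
\qquad\text{and}\qquad
Q([\tfrac{1}{2}],[-\tfrac{1}{2}]) = \operatorname{triv}_2,
\]
the Steinberg and trivial representations of $\operatorname{GL}_2(F)$, respectively: the normalized induction $\operatorname{Ind}_B^{\operatorname{GL}_2}(|\cdot|^{-1/2}\otimes|\cdot|^{1/2})$ has the Steinberg representation as its unique irreducible quotient, while $\operatorname{Ind}_B^{\operatorname{GL}_2}(|\cdot|^{1/2}\otimes|\cdot|^{-1/2})$ has the trivial representation as its unique irreducible quotient. Both representations have infinitesimal parameter $\lambda(w)=\operatorname{diag}(|w|^{1/2},|w|^{-1/2})$.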
\subsubsection{Standard representations}\label{ssec: standard representations}
Next, we review the notion of a standard representation, also known in the literature as a standard module. While this material appears in many places, we follow the exposition of \cite{Konno}. A standard representation of $G(F)$ corresponds to data $(P,\nu, \tau)$, where $P=MN$ is a standard parabolic subgroup of $G$, $\nu \in \mathfrak{a}_{P}^{*,+}$, and $\tau$ is a tempered representation of $M(F)$. The definition of $\mathfrak{a}_{P}^{*,+}$ is given in Section 2.2 of \textit{loc. cit.} The character $\nu$ corresponds to a $P$-positive unramified quasicharacter $\exp{\nu}$ of $M(F)$, as explained in Section 2.3 of \textit{loc. cit.} The standard representation associated to this data is $\operatorname{Ind}_P^{G}(\tau \otimes \exp{\nu})$. This representation has a unique irreducible quotient (see Corollary 3.2 of \textit{loc. cit.}), say $\pi$. In this paper we use the notation
\[
\Delta(\pi) := \operatorname{Ind}_P^{G}(\tau \otimes \exp{\nu} ),
\]
and call it the standard representation of $\pi$.
Thus, $K\operatorname{Rep}^{\op{fl}}_{\lambda}(G)$ has two $\mathbb{Z}$-bases: one given by irreducible representations
\[\{[\pi]: \pi \in \Pi_{\lambda}(G)\},\]
and the other given by standard representations
\[\{[\Delta(\pi)]: \pi \in \Pi_{\lambda}(G)\}.\] The latter is indeed a basis because every irreducible representation is the unique irreducible quotient of its standard representation, by the Langlands classification theorem.
\subsection{Galois/geometric side}
Recall that in this paper we make free use of \cite{CFMMX} and \cite{CR:irred}.
In particular, for every infinitesimal parameter $\lambda : W_F \to \dualgroup{G}$, set
\[
V_\lambda \coloneqq \{ x\in \operatorname{Lie} \dualgroup{G} \ \vert \ \operatorname{Ad}(\lambda(w))(x) = |w| x,\ \forall w\in W_F \},
\]
and
\[
H_\lambda \coloneqq \{ g\in \dualgroup{G} \ \vert \ \lambda(w) g \lambda(w)^{-1} = g,\ \forall w\in W_F \}.
\]
Then $V_\lambda$ is a prehomogeneous vector space for the $H_\lambda$-action inherited from conjugation in $\operatorname{Lie}\dualgroup{G}$, stratified into $H_\lambda$-orbits $\{ C_i \ \vert \ i \in I\}$.
Recall that $V_\lambda$ is a moduli space of Langlands parameters with infinitesimal parameter $\lambda$. Vogan's geometric perspective relates Langlands parameters to simple objects, up to isomorphism, in the category $\operatorname{Per}_{H_{\lambda}}(V_{\lambda})$. As on the spectral side, $K\operatorname{Per}_{H_{\lambda}}(V_{\lambda})$ has two bases: one consisting of simple perverse sheaves and the other of standard sheaves, which we explain below.
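To make these definitions concrete in a small example: for $G=\operatorname{GL}_2$ and the infinitesimal parameter $\lambda(w)=\operatorname{diag}(|w|^{1/2},|w|^{-1/2})$, a direct calculation gives
\[
V_\lambda = \left\{ \begin{pmatrix} 0 & x \\ 0 & 0 \end{pmatrix} \ : \ x\in \mathbb{C} \right\} \cong \mathbb{C},
\qquad
H_\lambda = \left\{ \operatorname{diag}(a,d) \ : \ a,d\in \mathbb{C}^\times \right\},
\]
with $H_\lambda$ acting by $x\mapsto (a/d)\,x$. There are exactly two orbits, $C_0=\{0\}$ and $C_1=\mathbb{C}^\times$: the closed orbit $C_0$ is the orbit of the Langlands parameter that is trivial on the Deligne $\operatorname{SL}_2(\mathbb{C})$, while the open orbit $C_1$ is the orbit of the parameter that is nontrivial on it.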
\subsubsection{Simple perverse sheaves}\label{simple perverse sheaves}
Simple objects in $\operatorname{Per}_{H_\lambda}(V_\lambda)$ are all of the form ${\mathcal{IC}\hskip-1pt }(\mathcal{L}_C)$, for simple equivariant local systems $\mathcal{L}_C \in \operatorname{Loc}_{H_\lambda}(V_\lambda)$, and thus of the form ${\mathcal{IC}\hskip-1pt }({\mathbbm{1}}_C)$ for $G=\operatorname{GL}_n$, as each orbit carries only the trivial irreducible equivariant local system. We invoke the local Langlands correspondence for $G$ and write $\pi_\phi \in \Pi(G)$ for the isomorphism class of the irreducible representation with Langlands parameter $\phi$, and likewise $C_{\pi}$ for the $H_\lambda$-orbit in $V_\lambda$ of parameters that correspond to $\pi$. As in \cite{CR:irred}, we refer to the latter identification as the Vogan-Langlands correspondence. We summarize the entire correspondence below:
\[
\begin{tikzcd}
{\Pi_{\lambda}(G(F))} & {\Phi_{\lambda}(G(F))} & {H_\lambda\text{-orbits in }V_\lambda} & {\operatorname{Per}_{H_\lambda}(V_\lambda)^{\operatorname{simple}}_{/\operatorname{iso}},} & {} \\[-10pt]
{\pi \hspace{2mm}} & \phi & {\hspace{2mm}C_{\phi} \hspace{2mm}} & {\hspace{2mm}\mathcal{P}(\pi)=\mathcal{IC}(\mathbb{1}_{C_{\phi}}).}
\arrow[from=1-1, to=1-2]
\arrow[from=1-2, to=1-3]
\arrow[maps to, from=2-1, to=2-2]
\arrow[maps to, from=2-2, to=2-3]
\arrow[from=1-3, to=1-4]
\arrow[maps to, from=2-3, to=2-4]
\end{tikzcd}
\]
Thus, there is a unique orbit $C_{\phi_{\psi}}$ in $V_\lambda$ attached to $\phi_{\psi}$. We shorten this notation to $C_{\psi}$.
We write $\mathcal{P}(\pi)$ for the $H_\lambda$-equivariant intersection cohomology complex on $V_\lambda$ determined by $\pi$ through $C_\pi$; note that $\mathcal{P}(\pi)$ is a simple object in the abelian category $\operatorname{Per}_{H_\lambda}(V_\lambda)$ of $H_\lambda$-equivariant perverse sheaves on $V_\lambda$.
\subsubsection{Standard sheaves}\label{standard sheaves}
For any $H_\lambda$-orbit $C\subseteq V_\lambda$ and any simple local system $\mathcal{L}_C$ on $C$, we introduce the notation $\mathcal{L}_C^\natural$ for the $H_\lambda$-equivariant sheaf on $V_\lambda$ with the defining property
\[
\left(\mathcal{L}_C^\natural\right)\vert_{C'}
=
\begin{cases}
\mathcal{L}_C & C'=C\\
0 & C'\ne C.
\end{cases}
\]
Thus, $K\operatorname{Per}_{H_{\lambda}}(V_{\lambda})$ has two $\mathbb{Z}$-bases, one consisting of simple perverse sheaves
\[\{[\mathcal{IC}({\mathbbm{1}}_{C})]: \text{ }C \text{ ranges over }H_{\lambda}\text{-orbits in }V_{\lambda}\}\]
and the other of standard sheaves
\[\{[{\mathbbm{1}}_C^\natural]: \text{ }C \text{ ranges over }H_{\lambda}\text{-orbits in }V_{\lambda}\}.\]
Standard sheaves produce a basis for $K\operatorname{Per}_{H_\lambda}(V_\lambda)$, as these sheaves are simple objects in the heart of a t-structure for $\operatorname{D}_{H_\lambda}(V_\lambda)$.
\iffalse
Since we will only need this notion for simple objects $\mathcal{L}_C$ in $\operatorname{Loc}_H(C)$ and since these are all constant sheaves for $G= \operatorname{GL}_n$, we will be particularly interested in the constant sheaf ${\mathbbm{1}}_{C}$ on $C$ and its extension ${\mathbbm{1}}_C^\natural$, as defined above.
\fi
\subsection{Dual Grothendieck groups}\label{subsection:Pairing}
For every infinitesimal parameter $\lambda :W_F \to \dualgroup{G}$, the local Langlands correspondence determines a perfect pairing between Grothendieck groups
\[
K\operatorname{Rep}^\text{fl}_\lambda(G) \times K\operatorname{Per}_{H_\lambda}(V_\lambda) \to \mathbb{Z}
\]
defined by
\begin{equation}\label{eqn:pairing}
\langle [\pi],[\mathcal{P}]\rangle_{\lambda} =
\begin{cases}
(-1)^{d(\pi)} & [\mathcal{P}] = [\mathcal{P}(\pi)],\\
0 & \text{otherwise},
\end{cases}
\end{equation}
where $d(\pi) \coloneqq \dim C_{\pi}$.
Recall that $\mathcal{P}(\pi)$ denotes a simple object in $\operatorname{Per}_{H_\lambda}(V_\lambda)$ matching $\pi\in \Pi_\lambda(G)$ as in Section \ref{simple perverse sheaves}.
Notice that this perfect pairing between Grothendieck groups matches $\pi\in \Pi_\lambda(G)$ with the shifted perverse sheaf $\mathcal{P}(\pi)[-\dim C_\pi]$.
If we index $\Pi_\lambda(G) = \{ \pi_i \ \vert \ i\in I\}$ and likewise index isomorphism classes of simple objects in $\operatorname{Per}_{H_\lambda}(V_\lambda)$ by $\{ {\mathcal{IC}\hskip-1pt }({\mathbbm{1}}_{C_j}) \ \vert \ j\in I\}$ then the pairing above becomes
\[
\langle [\pi_i],[{\mathcal{IC}\hskip-1pt }({\mathbbm{1}}_{C_j})]\rangle_{\lambda} =
\begin{cases}
(-1)^{\dim C_i} & i = j,\\
0 & \text{otherwise}.
\end{cases}
\]
If we change the scalars to $\mathbb{C}$ throughout, the pairing extends:
\[
\mathbb{C} \otimes_{\mathbb{Z}} K\operatorname{Rep}^\text{fl}_\lambda(G) \times \mathbb{C} \otimes_{\mathbb{Z}} K\operatorname{Per}_{H_\lambda}(V_\lambda) \to \mathbb{C}.
\]
In other words,
\begin{equation}\label{complex pairing}
\langle \cdot,\cdot \rangle: K_{\mathbb{C}}\operatorname{Rep}^\text{fl}_\lambda(G) \times K_{\mathbb{C}}\operatorname{Per}_{H_\lambda}(V_\lambda) \to \mathbb{C}.
\end{equation}
\subsection{Kazhdan-Lusztig Hypothesis}\label{section:KLH}
In this section we state the Kazhdan-Lusztig Hypothesis for $p$-adic general linear groups.
For every irreducible $\pi\in \operatorname{Rep}^\text{fl}_\lambda(G)$, let $\Delta(\pi)$ be the standard representation for $\pi$; thus, in particular, $\pi$ is the unique irreducible quotient of $\Delta(\pi)$.
For every $\pi_i$ and $\pi_j$ in $\Pi_\lambda(G)$, let $m_{i j}$ denote the multiplicity of $\pi_i$ in $\Delta(\pi_j)$; thus, in the Grothendieck group $K\operatorname{Rep}^\text{fl}_\lambda(G)$,
\[
[\Delta(\pi_j)] = \sum_{i\in I} m_{i j} [\pi_i].
\]
Let $m_\lambda = (m_{i j})$ be the matrix of these entries.
It is possible to order $I$, and thus the representations appearing in $\Pi_\lambda(G)$, so that the matrix $m_\lambda$ is lower triangular with diagonal entries $1$; consequently, the matrix $m_\lambda$ is invertible.
Notice that $m_\lambda$ is the change of basis matrix for the vector space $K_\mathbb{C}\operatorname{Rep}^{\op{fl}}_{\lambda}(G)$, from the basis $\{ [\Delta(\pi_i)] \ \vert \ i\in I\}$ to $\{ [\pi_j] \ \vert \ j\in I\}$.
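For example, for $\operatorname{GL}_2$ and the infinitesimal parameter $\lambda(w)=\operatorname{diag}(|w|^{1/2},|w|^{-1/2})$, we have $\Pi_\lambda(G)=\{\operatorname{triv},\operatorname{St}\}$; the Steinberg representation is tempered, hence equal to its own standard representation, while the standard representation of the trivial representation is the full induced representation:
\[
[\Delta(\operatorname{St})] = [\operatorname{St}],
\qquad
[\Delta(\operatorname{triv})] = [\operatorname{triv}] + [\operatorname{St}],
\]
so that, ordering $\Pi_\lambda(G)$ as $(\operatorname{triv},\operatorname{St})$,
\[
m_\lambda = \begin{pmatrix} 1 & 0 \\ 1 & 1 \end{pmatrix},
\]
which is indeed lower triangular with diagonal entries $1$.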
Return to the infinitesimal parameter $\lambda : W_F\to \dualgroup{G}$ and consider the abelian category $\operatorname{Per}_{H_\lambda}(V_\lambda)$ of $H_\lambda$-equivariant perverse sheaves on $V_\lambda$.
Simple objects in this category are the intersection cohomology complexes ${\mathcal{IC}\hskip-1pt } ({\mathbbm{1}}_{C_i})$.
For each $H_\lambda$-orbit $C_j$ in $V_\lambda$, pick a base point $x_j\in C_j$ and let $c_{ij}$ be the Euler characteristic of the stalk of ${\mathcal{IC}\hskip-1pt } ({\mathbbm{1}}_{C_j})[-\dim C_j]$ at ${x_i}$:
\begin{eqnarray*}
{c}_{ij}
= (-1)^{\dim C_j}\operatorname{rank}\left(\mathcal{H}^\bullet_{x_i}{\mathcal{IC}\hskip-1pt }({\mathbbm{1}}_{C_j})\right).
\end{eqnarray*}
Set ${c_\lambda} = ({c}_{ij})$.
\begin{hypothesis}[$p$-adic analogue of the Kazhdan-Lusztig Hypothesis]\label{hypothesis}
In the Grothendieck group $K\operatorname{Rep}^\text{fl}_\lambda(G)$ the multiplicity of the irreducible representation $\pi_i$ in the standard representation $\Delta(\pi_j)$ is given by
\[
m_{i j}
=
(-1)^{\dim C_i}\operatorname{rank}\left(\mathcal{H}^\bullet_{x_j}{\mathcal{IC}\hskip-1pt }({\mathbbm{1}}_{C_i})\right).
\]
Equivalently, the change of basis matrix in $K\operatorname{Rep}^\text{fl}_\lambda(G)$ from standard representations to irreducible representations is computed by the Euler characteristics of stalks of simple objects in $\operatorname{Per}_{H_\lambda}(V_\lambda)$:
\[
m_\lambda = \,^t{c_\lambda}.
\]
\end{hypothesis}
Hypothesis~\ref{hypothesis} was first articulated in \cite{zelevinskii1981p}. It also appears in \cite{ABV} for real groups and in \cite{Vogan:Langlands} for real and $p$-adic groups, though there are some sign errors in the latter.
For general linear groups, Hypothesis~\ref{hypothesis} is a folklore theorem, often attributed to \cite{CG} or \cite{Lusztig:Cuspidal2}.
More recently, Hypothesis~\ref{hypothesis}, as it applies here, is also asserted in \cite{Solleveld:pKLH}*{Theorem E, (b) and (c)}.
In this paper we take Hypothesis~\ref{hypothesis} as given.
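As a sanity check in the smallest nontrivial case, consider $\operatorname{GL}_2$ with $\lambda(w)=\operatorname{diag}(|w|^{1/2},|w|^{-1/2})$, so that $V_\lambda\cong \mathbb{C}$ carries the two $H_\lambda$-orbits $C_0=\{0\}$ and $C_1=\mathbb{C}^\times$, corresponding under the Langlands correspondence to the trivial and Steinberg representations, respectively. Since the closure of $C_1$ is the smooth variety $V_\lambda$, the simple perverse sheaves are ${\mathcal{IC}\hskip-1pt }({\mathbbm{1}}_{C_0}) = {\mathbbm{1}}_{\{0\}}$ and ${\mathcal{IC}\hskip-1pt }({\mathbbm{1}}_{C_1}) = {\mathbbm{1}}_{V_\lambda}[1]$, and computing Euler characteristics of stalks gives
\[
c_\lambda = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix},
\qquad
\,^t c_\lambda = \begin{pmatrix} 1 & 0 \\ 1 & 1 \end{pmatrix} = m_\lambda,
\]
since $[\Delta(\operatorname{triv})]=[\operatorname{triv}]+[\operatorname{St}]$ and $[\Delta(\operatorname{St})]=[\operatorname{St}]$, in agreement with Hypothesis~\ref{hypothesis}.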
Using this notation, we revisit the matrix ${c_\lambda}$ and write
\[
[{\mathcal{IC}\hskip-1pt }({\mathbbm{1}}_{C_j})]
=
(-1)^{\dim(C_j)} \sum_{i\in I} {c}_{ij} [{\mathbbm{1}}_{C_i}^\natural]
\]
in $K\operatorname{Per}_{H_\lambda}(V_\lambda)$.
Thus, $c_\lambda$ is the change of basis matrix for the vector space $K_\mathbb{C}\operatorname{Per}_{H_\lambda}(V_\lambda)$, from the basis $\{ [{\mathbbm{1}}_{C_i}^\natural] \ \vert \ i\in I \}$ to the basis $\{ [{\mathcal{IC}\hskip-1pt }({\mathbbm{1}}_{C_i})[-\dim C_i]] \ \vert \ i\in I \}$.
Likewise,
\[
[{\mathbbm{1}}_{C_j}^\natural]
=
\sum_{i\in I}
({c_\lambda}^{-1})_{ij} (-1)^{\dim(C_i)} [{\mathcal{IC}\hskip-1pt }({\mathbbm{1}}_{C_i})]
\]
in $K_\mathbb{C}\operatorname{Per}_{H_\lambda}(V_\lambda)$.
Since standard representations form a basis for the Grothendieck group $K\operatorname{Rep}^\text{fl}_\lambda(G)$, it is natural to ask what objects of $K\operatorname{Per}_{H_\lambda}(V_\lambda)$ are dual to this basis, under the pairing of Equation~\eqref{eqn:pairing}.
In the lemma below we use Hypothesis~\ref{hypothesis} to show that the standard sheaves $[{\mathbbm{1}}_C^\natural]$ in $K\operatorname{Per}_{H_\lambda}(V_\lambda)$ are dual to the standard representations $[\Delta(\pi)]$ in $K\operatorname{Rep}^\text{fl}_\lambda(G)$.
\begin{lemma}\label{lemma:St}
For any $\pi\in \Pi_\lambda(G)$ and any $H_\lambda$-orbit $C$ in $V_\lambda$,
\[
\langle [\Delta(\pi)],[{\mathbbm{1}}_{C}^\natural]\rangle_{\lambda} =
\begin{cases}
1 & [\mathcal{P}(\pi)] = [{\mathcal{IC}\hskip-1pt }({\mathbbm{1}}_C)];\\
0 & \text{otherwise}.
\end{cases}
\]
\end{lemma}
\begin{proof}
We compute:
\begin{eqnarray*}
\langle [\Delta(\pi_j)], [{\mathbbm{1}}_{C_i}^\natural] \rangle_\lambda
&=&
\langle
\sum_{k\in I} m_{kj} [\pi_k],
\sum_{l\in I } (-1)^{\dim(C_l)}({c_\lambda}^{-1})_{li} [{\mathcal{IC}\hskip-1pt }({\mathbbm{1}}_{C_l})]
\rangle_\lambda\\
&=&
\sum_{k,l \in I} m_{kj}\
(-1)^{\dim(C_l)} ({c_\lambda}^{-1})_{li}
\langle
[\pi_k],[{\mathcal{IC}\hskip-1pt }({\mathbbm{1}}_{C_l})] \rangle_\lambda\\
&=&
\sum_{l\in I} m_{lj}\
(-1)^{\dim(C_l)} ({c_\lambda}^{-1})_{li}
(-1)^{\dim(C_l)}\\
&=&
\sum_{l\in I}
({c_\lambda}^{-1})_{li}\
m_{lj}
\\
&=&
\sum_{l\in I} (\,^t{c_\lambda}^{-1})_{il}\ m_{lj}\\
&=&
(\,^t{c_\lambda}^{-1}\ m_\lambda)_{ij}.
\end{eqnarray*}
By Hypothesis~\ref{hypothesis}, $\,^t{c_\lambda}^{-1} = m_\lambda^{-1}$, so
\begin{eqnarray*}
(\,^t{c_\lambda}^{-1}\ m_\lambda)_{ij}
&=&
(m_\lambda^{-1}\ m_\lambda )_{ij}\\
&=&
\begin{cases}
1 & i=j\\
0 &i\ne j.
\end{cases}
\end{eqnarray*}
\end{proof}
\subsection{Alternate form of Vogan's conjecture on A-packets}\label{subsection:eta}
Now let $\psi : W''_F \to \dualgroup{G}$ be an Arthur parameter for $G$.
Let $\lambda \coloneqq \lambda_\psi : W_F\to \dualgroup{G}$ be its infinitesimal parameter, as defined in Section~\ref{subsection:notation}.
Based on \cite{CFMMX}*{Definition 2, \S 8.2}, define $\eta^{\operatorname{Evs}}_{\psi} \in K\operatorname{Rep}^\text{fl}_\lambda(G)$ by
\[
\eta^{\operatorname{Evs}}_{\psi} \coloneqq
(-1)^{d(\psi)}\sum_{\pi\in \Pi_{\lambda}(G)} (-1)^{d(\pi)} \operatorname{rank} \left(\operatorname{Evs}_\psi \mathcal{P}(\pi) \right)\ [\pi],
\]
where $d(\psi) \coloneqq \dim(C_\psi)$ and $d(\pi) = \dim(C_{\phi_\pi})$.
Recall that the classes $[\pi]$, as $\pi$ ranges over $\Pi_\lambda(G)$, form a basis for $K\operatorname{Rep}^\text{fl}_\lambda(G)$.
\begin{proposition}\label{prop:eta}
For all $\mathcal{F}\in D_{H_\lambda}(V_\lambda)$,
\[
\langle \eta^{\operatorname{Evs}}_{\psi}, [\mathcal{F}] \rangle_\lambda
=
(-1)^{d(\psi)} \operatorname{rank} \left(\operatorname{Evs}_\psi \mathcal{F}\right).
\]
\end{proposition}
\begin{proof}
It is enough to prove the proposition in the case that $\mathcal{F}$ is a simple object in $\operatorname{Per}_{H_\lambda}(V_\lambda)$.
To see this, note that classes of simple objects in $\operatorname{Per}_{H_\lambda}(V_\lambda)$ form a basis for $K\operatorname{Per}_{H_\lambda}(V_\lambda)$, and since $\operatorname{Per}_{H_\lambda}(V_\lambda)$ is the heart of a t-structure on $\operatorname{D}_{H_\lambda}(V_\lambda)$, the Grothendieck groups coincide: $K\operatorname{Per}_{H_\lambda}(V_\lambda)=K\operatorname{D}_{H_\lambda}(V_\lambda)$.
Moreover, $\operatorname{rank}\left(\operatorname{Evs}_\psi \mathcal{F}\right)$ depends only on the class of $\mathcal{F}$ in $K\operatorname{D}_{H_\lambda}(V_\lambda)$.
Now assume $\mathcal{F}$ is a simple object in $\operatorname{Per}_{H_\lambda}(V_\lambda)$.
Recall that every simple object in this category takes the form $\mathcal{P}(\pi_i)$ for some $\pi_i\in \Pi_\lambda(G)$.
We now prove the Proposition for $\mathcal{F} = \mathcal{P}(\pi_i)$:
\[
\begin{array}{rlr}
&\langle \eta^{\operatorname{Evs}}_\psi,[\mathcal{P}(\pi_i)]\rangle_{\lambda}
&
\\
&=
\langle (-1)^{d(\psi)} \mathop{\sum}\limits_{\pi\in \Pi_{\lambda}(G)} (-1)^{d(\pi)} \operatorname{rank} \left(\operatorname{Evs}_\psi \mathcal{P}(\pi) \right)[\pi], [\mathcal{P}(\pi_i)]\rangle_{\lambda}
&
\\
&= (-1)^{d(\psi)} (-1)^{d(\pi_i)}
\operatorname{rank} \left(\operatorname{Evs}_\psi \mathcal{P}(\pi_i)\right)
\langle [\pi_i], [\mathcal{P}(\pi_i)]\rangle_\lambda
&
\\
&= (-1)^{d(\psi)} (-1)^{d(\pi_i)}
\operatorname{rank} \left(\operatorname{Evs}_\psi \mathcal{P}(\pi_i)\right)
(-1)^{d(\pi_i)}
& \text{by Equation~\eqref{eqn:pairing}}
\\
&= (-1)^{d(\psi)}
\operatorname{rank} \left(\operatorname{Evs}_\psi \mathcal{P}(\pi_i)\right).
&
\end{array}
\]
\end{proof}
Armed with these tools, we may now recast Vogan's conjecture on A-packets for $G(F)$ in the following form, which we will prove in Theorem~\ref{thm:main}:
\[
\eta^{\operatorname{Evs}}_\psi = \eta_\psi;
\]
or equivalently,
\[
\langle \eta^{\operatorname{Evs}}_\psi, [\mathcal{F}]\rangle_\lambda
=
\langle \eta_\psi, [\mathcal{F}]\rangle_\lambda,
\qquad \forall \mathcal{F} \in D_{H_\lambda}(V_\lambda).
\]
\iffalse
or equivalently, by Proposition~\ref{prop:eta},
\[
\langle \eta_\psi, [\mathcal{F}]\rangle_\lambda
=
(-1)^{d(\psi)} \operatorname{rank}\left( \operatorname{Evs}_\psi \mathcal{F}\right)
,
\qquad \forall \mathcal{F} \in D_{H_\lambda}(V_\lambda).
\]
\fi
\section{Main result on the Levi subgroup}
The results of \cite{CR:irred} adapt to Levi subgroups of $G$, as we now explain.
\begin{proposition}\label{prop:M}
Let $\psi_M$ be an irreducible Arthur parameter for a Levi subgroup $M$ of $G$. Then
\[
\Pi^{\mbox{\raisebox{1pt}{\scalebox{0.5}{$\mathrm{ABV}$}}}}_{\psi_M}(M) = \Pi_{\psi_M}(M) = \{ \pi_{\psi_M}\} \qquad\text{and}\qquad
\eta_{\psi_M}^{\operatorname{Evs}} = \eta_{\psi_M} = [\pi_{\psi_M}].
\]
\end{proposition}
\begin{proof}
As in Section~\ref{section:eta}, we have
\[
\Pi^{\mbox{\raisebox{1pt}{\scalebox{0.5}{$\mathrm{ABV}$}}}}_{\psi_M}(M)
\coloneqq
\{ \sigma\in \Pi_{\lambda_M}(M) \ \vert \ \operatorname{Evs}_{\psi_M} \mathcal{P}(\sigma) \ne 0 \},
\]
where $\lambda_M$ is the infinitesimal parameter of $\psi_M$.
In the definition of $\Pi_{\psi_M}(M)$, we see the simple object $\mathcal{P}(\sigma)$ in $\operatorname{Per}_{H_{\lambda_M}}(V_{\lambda_M})$ determined by $\sigma \in \operatorname{Rep}_{\lambda_M}(M)$ using the local Langlands correspondence.
Recall $M = \operatorname{GL}_{n_1}\times\cdots\times\operatorname{GL}_{n_k}$; to simplify notation, set $M_i = \operatorname{GL}_{n_i}$. Then $\lambda_M = \lambda_1\boxtimes\cdots\boxtimes\lambda_k$ for $\lambda_i : W_F \to \dualgroup{M_i}$.
Now factor $V_{\lambda_M}$:
\[
V_{\lambda_M} = V_{\lambda_1}\times\cdots\times V_{\lambda_k} ,
\]
where $V_{\lambda_i}$ is the moduli space of Langlands parameters for $M_i$ with infinitesimal parameter $\lambda_i$.
Likewise, set $H_{\lambda_i} \coloneqq Z_{\dualgroup{M_i}}(\lambda_i)$ so
$
H_{\lambda_M} = H_{\lambda_1}\times\cdots\times H_{\lambda_k}.
$
Then
\[
\operatorname{Per}_{H_{\lambda_M}}(V_{\lambda_M}) \cong \operatorname{Per}_{H_{\lambda_1}}(V_{\lambda_1})\boxtimes\cdots\boxtimes \operatorname{Per}_{H_{\lambda_k}}(V_{\lambda_k})
\]
(finite product of categories).
Since $\sigma$ is irreducible, $\sigma = \sigma_1\boxtimes\cdots\boxtimes\sigma_k$ where each $\sigma_i$ is irreducible.
Now the Langlands correspondence for $M$ attaches
\[
\mathcal{P}(\sigma)
=
\mathcal{P}(\sigma_1)\boxtimes\cdots\boxtimes\mathcal{P}(\sigma_k)
\]
to $\sigma$.
Finally, recall
\[
\psi_M = \psi_1\boxtimes\cdots\boxtimes\psi_k,
\]
and write $x_{\psi_M} = (x_{\psi_1}, \ldots , x_{\psi_k}) \in V_{\lambda_M}$ for the corresponding elements in the moduli space;
likewise write $y_{\psi_M} = (y_{\psi_1}, \ldots , y_{\psi_k}) \in V_{\lambda_M}^*$.
By the theorem of Thom-Sebastiani \cite{Illusie} \cite{Massey},
\begin{eqnarray*}
\left( \operatorname{R\hskip-1pt\Psi}_{y_{\psi_M}} \mathcal{P}(\sigma)\right)_{x_{\psi_M}}
&=&
\left( \operatorname{R\hskip-1pt\Psi}_{(y_{\psi_1}, \ldots , y_{\psi_k})} \mathcal{P}(\sigma_1)\boxtimes\cdots\boxtimes \mathcal{P}(\sigma_k)\right)_{(x_{\psi_1}, \ldots , x_{\psi_k})} \\
&=&
\left( \operatorname{R\hskip-1pt\Psi}_{y_{\psi_1}} \mathcal{P}(\sigma_1)\right)_{x_{\psi_1}}
\boxtimes\cdots\boxtimes\left( \operatorname{R\hskip-1pt\Psi}_{y_{\psi_k}} \mathcal{P}(\sigma_k)\right)_{x_{\psi_k}} .
\end{eqnarray*}
Thus,
\[
\left( \operatorname{R\hskip-1pt\Psi}_{y_{\psi_M}} \mathcal{P}(\sigma)\right)_{x_{\psi_M}}\ne 0
\qquad\iff\qquad
\left( \operatorname{R\hskip-1pt\Psi}_{y_{\psi_i}} \mathcal{P}(\sigma_i)\right)_{x_{\psi_i}} \ne 0,\ \forall i=1,\ldots ,k.
\]
Equivalently,
\[
\operatorname{Evs}_{\psi_M}\mathcal{P}(\sigma)\ne 0
\qquad\iff\qquad
\operatorname{Evs}_{\psi_i}\mathcal{P}(\sigma_i)\ne 0,\ \forall i=1,\ldots ,k.
\]
By \cite{CR:irred},
\[
\Pi^{\mbox{\raisebox{1pt}{\scalebox{0.5}{$\mathrm{ABV}$}}}}_{\psi_i}(M_i) = \Pi_{\psi_i}(M_i) = \{ \pi_{\psi_i} \}, \ \forall i=1, \ldots, k.
\]
It now follows that
\[
\Pi^{\mbox{\raisebox{1pt}{\scalebox{0.5}{$\mathrm{ABV}$}}}}_{\psi_M}(M) = \Pi_{\psi_M}(M) = \{ \pi_{\psi_M}\}.
\]
Recall the definition:
\[
\eta^{\operatorname{Evs}}_{\psi_M} \coloneqq
(-1)^{d(\psi_M)}\sum_{\sigma\in \Pi_{\lambda_M}(M)} (-1)^{d(\sigma)} \operatorname{rank} \left(\operatorname{Evs}_{\psi_M}\mathcal{P}(\sigma)\right)\ [\sigma].
\]
We have just seen that $\Pi^{\mbox{\raisebox{1pt}{\scalebox{0.5}{$\mathrm{ABV}$}}}}_{\psi_M}(M) = \{ \pi_{\psi_M}\}$.
Therefore,
\[
\eta^{\operatorname{Evs}}_{\psi_M} =
(-1)^{d(\psi_M)-d(\pi_{\psi_M})} \operatorname{rank} \left(\operatorname{Evs}_{\psi_M}\mathcal{P}(\pi_{\psi_M})\right)\ [\pi_{\psi_M}].
\]
Since $\pi_{\psi_M}$ corresponds to the orbit $C_{\psi_M}$ under the Vogan-Langlands correspondence, we have $d(\pi_{\psi_M})=d(\psi_M)$. Moreover, by \cite{CFMMX}*{\S 8.2},
\[
\operatorname{rank} \left(\operatorname{Evs}_{\psi_M}\mathcal{P}(\pi_{\psi_M})\right)
=
1,
\]
so
\[
\eta^{\operatorname{Evs}}_{\psi_M} = [\pi_{\psi_M}],
\]
as claimed.
\end{proof}
\iffalse
\section{Main result on the Levi subgroup}
Recall the setting: We consider the A-parameter $\psi=\bigoplus_{i=1}^k\psi_i $ where $\psi_i$ is an irreducible A-parameter for $\operatorname{GL}_{n_i}$ as defined in \ref{}. This gives us a Levi subgroup $M=\operatorname{GL}_{n_1}\times \cdots \times \operatorname{GL}_{n_k}$ of $\operatorname{GL}_n$ and \[\psi_M = \otimes_{i=1}^k\psi_i: W_F'' \xrightarrow[]{} \dualgroup{M}\] is interpreted as an irreducible A-parameter for $M$. The infinitesimal parameter $\lambda_M=\bigoplus_{i=1}^k\lambda_i$ where $\lambda_i:= \lambda_{\psi_i}$ for all $i$. As in Equation \ref{}, $V_{\lambda_M}=\boxtimes_{i=1}^kV_{\lambda_i}$ and every orbit in $V_{\lambda_M}$ is a product $\boxtimes C_i$ of orbits where $C_i$ is a $\operatorname{GL}_{n_i}$-orbit in $V_{\lambda_i}$. In particular, $C_{\psi_i}$ is the orbit of Arthur type for each $i$ and $C_M=\boxtimes_{i=1}^kC_{\psi_i}$ is the orbit of Arthur type for $\psi_M$. Recall the pairing from Equation \ref{} on $M$.
\begin{lemma}\label{product of perverse sheaf cats}
\[\operatorname{Per}_{H_{\lambda_M}}(V_{\lambda_M}) \simeq \boxtimes_{i=1}^k\operatorname{Per}_{H_{\lambda_i}}(V_{\lambda_{i}}). \] \todo{symbol for equivalence of cats}
\end{lemma}\label{product of evs}
\begin{lemma}
For an $H_{\lambda_M}$-orbit $C=\boxtimes_{i=1}^kC_i$ in $V_{\lambda_M}$,
\[ \operatorname{Evs}_{\psi_M}({\mathcal{IC}\hskip-1pt } ({\mathbbm{1}}_{C}))=\prod_{i=1}^k \operatorname{Evs}_{\psi_i}({\mathcal{IC}\hskip-1pt } ({\mathbbm{1}}_{C_i}))\]
\end{lemma}
\begin{proposition}\label{prop:M}
\[
\langle \pi_{\psi_M}, \mathcal{G}\rangle_{\lambda_M} = \operatorname{rank} \operatorname{Evs}_{\psi_M} \mathcal{G},
\]
for all $\mathcal{G} \in D_{H_{\lambda_M}}(V_{\lambda_M})$.
\end{proposition}
\begin{proof}
Since $\psi_i$ is irreducible, from \cite{CR:irred}, we have that $\Pi_{\phi_{\psi_i}}^{{\mbox{\raisebox{1pt}{\scalebox{0.5}{$\mathrm{ABV}$}}}}}(G(F))=\Pi_{\psi_i}(G(F))=\{\pi_{\psi_i}\}$ for all $i$. By definition, this means that for an $H_{\lambda_i}$-orbit $C_i$ in $V_{\lambda_i}$,
\[\operatorname{rank} \operatorname{Evs}_{\psi_i}({\mathcal{IC}\hskip-1pt } ({\mathbbm{1}}_{C_i})) =
\begin{cases}
1 & C_i=C_{\psi_i},\\
0 & \text{otherwise.}
\end{cases}
\]
Using Lemma \ref{product of evs}, for an $H_{\lambda_M}$-orbit $C=\boxtimes_{i=1}^kC_i$ in $V_{\lambda_M}$ we have that
\[\operatorname{rank} \operatorname{Evs}_{\psi_M}({\mathcal{IC}\hskip-1pt } ({\mathbbm{1}}_{C})) =
\begin{cases}
1 & C=C_M,\\
0 & \text{otherwise.}
\end{cases}\]
This shows that the ABV-packet $\Pi^{{\mbox{\raisebox{1pt}{\scalebox{0.5}{$\mathrm{ABV}$}}}}}_{\phi_{\psi_M}}(M(F))=\{\pi_{\psi_M}\}$. Now for any $\mathcal{G} \in D_{H_{\lambda_M}}(V_{\lambda_M})$,
\begin{align*}
\operatorname{rank} \operatorname{Evs}_{\psi_M}(\mathcal{G}) &=\langle \eta^{{\mbox{\raisebox{1pt}{\scalebox{0.5}{$\mathrm{ABV}$}}}}}\mathcal{}, \mathcal{G}\rangle_{\lambda_M} \\
&= \langle \pm \pi_{\psi_M}, \mathcal{G}\rangle_{\lambda_M} \\
&= \pm \langle \pi_{\psi_M}, \mathcal{G}\rangle_{\lambda_M}
\end{align*}
where the first equality follows from Proposition \ref{}, the third follows from the bilinearity of the pairing, and second follows from the definition of $\eta^{{\mbox{\raisebox{1pt}{\scalebox{0.5}{$\mathrm{ABV}$}}}}}_{\psi}=$
\end{proof}
As in Section~\ref{section:eta}, set
\begin{proposition}\label{prop:etaM}
\[
\eta_{\psi_M}^{{\mbox{\raisebox{1pt}{\scalebox{0.5}{$\mathrm{ABV}$}}}}}
=
[\pi_{\psi_M}]
\]
\end{proposition}
\fi
\iffalse
\begin{lemma}\label{lemma:Perprod}
\[
\operatorname{Per}_{H_1\times H_2}(V_1\times V_2) \cong \operatorname{Per}_{H_1}(V_1)\times \operatorname{Per}_{H_2}(V_2)
\]
\end{lemma}
\begin{proof}
This is a good exercise regarding equivariant perverse sheaves.
\end{proof}
\fi
\section{Endoscopic Lifting}\label{section:Lift}
With reference to Section~\ref{subsection:Pairing}, let
\begin{equation}
\operatorname{Lift}_M^G : K_\mathbb{C}\operatorname{Rep}_{\lambda_M}(M) \to K_\mathbb{C}\operatorname{Rep}_{\lambda}(G)
\end{equation}
be the linear transformation defined by
\begin{equation}\label{equation:e_*}
\langle \operatorname{Lift}_M^G[\pi], [\mathcal{F}] \rangle_\lambda =
\langle [\pi], [\varepsilon^*\mathcal{F}] \rangle_{\lambda_M}.
\end{equation}
Following the nomenclature of \cite{ABV}*{Definition 26.18} for real groups, we refer to the linear transformation $\operatorname{Lift}_M^G$ as \emph{endoscopic lifting}.
In this section we show that $\operatorname{Lift}_M^G$ coincides with the linear transformation given by $\operatorname{Ind}_P^G$ on the level of Grothendieck groups.
In Section~\ref{sec:LSLift} we see that it also coincides with Langlands-Shelstad transfer.
\[
\begin{tikzcd}
K_\mathbb{C} \operatorname{Rep}_\lambda(G)\times K_\mathbb{C} \operatorname{Per}_{H_\lambda}(V_\lambda)\arrow[bend left]{dd}{\varepsilon^* \hskip4pt \text{(geometric restriction)}} \arrow{r}& \mathbb{C}\\
\\
\arrow[bend left]{uu}{\text{(endoscopic lifting)}\hskip4pt \operatorname{Lift}_M^G} K_\mathbb{C} \operatorname{Rep}_{\lambda_M}(M)\times K_\mathbb{C} \operatorname{Per}_{H_{\lambda_M}}(V_{\lambda_M}) \arrow{r}& \mathbb{C}
\end{tikzcd}
\]
\subsection{Endoscopic lifting of standard representations}\label{ssec:Lift}
\begin{proposition}\label{prop:Lift}
Let $\phi_M$ be any Langlands parameter for $M$ with infinitesimal parameter $\lambda_M$.
Let $\pi_{\phi_M}$ be the corresponding irreducible representation of $M(F)$.
Then
\begin{equation*}
\operatorname{Lift}_M^G\left([\Delta(\pi_{\phi_M})]\right)
=
[\Delta(\pi_\phi)],
\end{equation*}
where $\phi$ is the Langlands parameter for $G$ obtained by lifting $\phi_M$ via $\dualgroup{M} \hookrightarrow \dualgroup{G}$.
\end{proposition}
\begin{proof}
We prove the proposition by showing
\[
\langle
\operatorname{Lift}_M^G\left([\Delta(\pi_{\phi_M})]\right), [\mathcal{F}]
\rangle_{\lambda}
=
\langle
[\Delta(\pi_\phi)],
[\mathcal{F}]
\rangle_{\lambda},
\]
for every $\mathcal{F}\in \operatorname{D}_{H_\lambda}(V_\lambda)$.
To do this, it is sufficient to take $\mathcal{F} = {\mathbbm{1}}_{C}^\natural$ and allow $C$ to range over $H_\lambda$-orbits in $V_\lambda$, since these sheaves provide a basis for the Grothendieck group.
Observe that
\begin{equation*}
\varepsilon^*\left({\mathbbm{1}}_{C}^\natural\right)
=
{\mathbbm{1}}_{C\cap V_{\lambda_M}}^\natural.
\end{equation*}
Consequently, in $K\operatorname{Per}_{H_{\lambda_M}}(V_{\lambda_M})$,
\begin{equation*}
[\varepsilon^*\left({\mathbbm{1}}_{C}^\natural\right)]
=
\sum_{D}
[{\mathbbm{1}}_{D}^\natural]
\end{equation*}
where the sum is taken over $H_{\lambda_M}$-orbits $D$ in $V_{\lambda_M}$ appearing in $C\cap V_{\lambda_M}$, or in other words, over all orbits $D$ in $V_{\lambda_M}$ whose saturation in $V_{\lambda}$ is $C$.
Now,
\begin{eqnarray*}
\langle
\operatorname{Lift}_M^G\left([\Delta(\pi_{\phi_M})]\right), [{\mathbbm{1}}_{C}^\natural]
\rangle_{\lambda}
&=&
\langle
[\Delta(\pi_{\phi_M})], \varepsilon^*[{\mathbbm{1}}_{C}^\natural]
\rangle_{\lambda_M}
\\
&=&
\langle
[\Delta(\pi_{\phi_M})], \sum_{D} [{\mathbbm{1}}_{D}^\natural]
\rangle_{\lambda_M}
\\
&=&
\sum_{D}
\langle
[\Delta(\pi_{\phi_M})], [{\mathbbm{1}}_{D}^\natural]
\rangle_{\lambda_M}.
\end{eqnarray*}
Now, by Lemma~\ref{lemma:St} adapted from $G$ to $M$,
$
\langle
[\Delta(\pi_{\phi_M})], [{\mathbbm{1}}_{D}^\natural]
\rangle_{\lambda_M}
$
is non-zero only when $D$ is the $H_{\lambda_M}$-orbit of $\phi_M\in V_{\lambda_M}$, in which case the pairing gives the value $1$.
Therefore,
\begin{eqnarray*}
\langle
\operatorname{Lift}_M^G\left([\Delta(\pi_{\phi_M})]\right), [{\mathbbm{1}}_{C}^\natural]
\rangle_{\lambda}
&=&
\begin{cases}
1 & \phi\in C\\
0 & \phi\not\in C.
\end{cases}
\end{eqnarray*}
On the other hand, by Lemma~\ref{lemma:St},
\begin{eqnarray*}
\langle
[\Delta(\pi_\phi)],
[{\mathbbm{1}}_C^\natural]
\rangle_{\lambda}
&=&
\begin{cases}
1 & \phi\in C\\
0 & \phi\not\in C.
\end{cases}
\end{eqnarray*}
It follows that
\[
\langle
\operatorname{Lift}_M^G\left([\Delta(\pi_{\phi_M})]\right), [{\mathbbm{1}}_C^\natural]
\rangle_{\lambda}
=
\langle
[\Delta(\pi_\phi)],
[{\mathbbm{1}}_C^\natural]
\rangle_{\lambda},
\]
for every $H_\lambda$-orbit $C$ in $V_\lambda$, and therefore
\[
\langle
\operatorname{Lift}_M^G\left([\Delta(\pi_{\phi_M})]\right), [\mathcal{F}]
\rangle_{\lambda}
=
\langle
[\Delta(\pi_\phi)],
[\mathcal{F}]
\rangle_{\lambda},
\]
for every $\mathcal{F}\in \operatorname{D}_{H_\lambda}(V_\lambda)$.
Since the pairing is perfect, it follows that
\[
\operatorname{Lift}_M^G\left([\Delta(\pi_{\phi_M})]\right)
=
[\Delta(\pi_\phi)].
\]
\end{proof}
\subsection{Comparison of parabolic induction with endoscopic lifting}\label{subsection:IndLift}
In this subsection, we make precise and prove the claim that parabolic induction of a standard representation is a standard representation. We then show that endoscopic lifting can be characterized by parabolic induction.
We use the theory and notation established in Section \ref{ssec: standard representations} for irreducible and standard representations. Let $\pi$ be an irreducible representation of $G(F)$. The standard representation $\Delta(\pi)$ of $\pi$ can be extracted in terms of multisegments using Zelevinsky theory. This can be seen directly from \cite{kudla1994local}*{Theorem 2.2.2}, but we also clarify this explicitly in the lemma below.
\begin{lemma}\label{lemma: standard is second induction}
Let $\pi$ be a smooth irreducible representation of $G=\operatorname{GL}_n(F)$ with multisegment $\alpha = \{\Delta_1, \ldots, \Delta_k\}$, arranged so that the segments satisfy the "does not precede" condition. Then \[
\Delta(\pi)\simeq \operatorname{Ind}_P^G(Q(\Delta_1)\otimes Q(\Delta_2)\otimes \cdots \otimes Q(\Delta_k)),
\]
where $P$ is the standard parabolic specified by the $\Delta_i$s.
\end{lemma}
\begin{proof}
We use Kudla's expository work \cite{kudla1994local}, specifically the arguments on pages 372-374. Recall from Section~\ref{ssec: standard representations} that $\pi$ is the unique irreducible quotient of $\operatorname{Ind}_P^G(Q(\Delta_1)\otimes Q(\Delta_2)\otimes \cdots \otimes Q(\Delta_k))$, where the $\Delta_i$s are arranged so that they satisfy the "does not precede" condition. Each $Q(\Delta_i)$ is an essentially tempered representation, which means there is an $x_i \in \mathbb{R}$ so that $Q(\Delta_i)\simeq Q(\Delta'_i)(x_i)$ where $Q(\Delta'_i)$ is tempered. Thus we have
\[
\begin{array}{rl}
&\operatorname{Ind}_P^G(Q(\Delta_1)\otimes Q(\Delta_2)\otimes \cdots \otimes Q(\Delta_k))\\ &\simeq \operatorname{Ind}_P^G(Q(\Delta'_1)(x_1)\otimes Q(\Delta'_2)(x_2)\otimes \cdots \otimes Q(\Delta'_k)(x_k)).
\end{array}
\]
Since the $Q(\Delta'_i)$s are square-integrable, none of the $\Delta'_i$s can be linked. Moreover, we must have $x_1 \geq x_2 \geq \cdots \geq x_k$ as the $\Delta_i$s satisfy the "does not precede" condition. If $x_i=x_{i+1}$, then we can replace $Q(\Delta'_i)(x_i)\otimes Q(\Delta'_{i+1})(x_{i+1})$ with $Q(\Delta'_i, \Delta'_{i+1})(x_i)$, which is equal to the full induced representation, and is an irreducible tempered representation twisted by $x_i$. Thus, we obtain a sequence $x_1 > \cdots > x_{k'}$ and tempered representations $\tau_1=Q(\alpha_1),\ldots,\tau_{k'}=Q(\alpha_{k'})$, where the $\alpha_i$s partition the set $\{\Delta'_1,\Delta'_2,\ldots, \Delta'_{k}\}$. This gives us
\begin{equation}\label{standard is second induction}
\operatorname{Ind}_P^G(Q(\Delta_1)\otimes Q(\Delta_2)\otimes \cdots \otimes Q(\Delta_k)) \simeq \operatorname{Ind}_{P'}^G(\tau_1(x_1)\otimes \cdots \otimes \tau_{k'}(x_{k'})).\end{equation}
Here $P'=M'N'$ is the standard parabolic subgroup specified by the $\alpha_i$s. By observing that $\tau_1\otimes \cdots \otimes \tau_{k'}$ is a tempered representation of $M'$ and $x_1 > \cdots > x_{k'}$ specifies a $P'$-positive unramified character of $M'$, we see that the representations in \eqref{standard is second induction} are standard representations. The result now follows by using the fact that $\pi$ is the unique irreducible quotient of $\operatorname{Ind}_P^G(Q(\Delta_1)\otimes Q(\Delta_2)\otimes \cdots \otimes Q(\Delta_k))$.
\end{proof}
The upshot of this lemma is that we can talk about standard representations purely in terms of multisegments, which makes it easy to pin down the standard representation obtained after parabolic induction. We prove the implicit claim in this statement below.
\begin{proposition}\label{prop:Ind standard}
Let $M=\operatorname{GL}_{m_1}\times \operatorname{GL}_{m_2}\times \cdots \times \operatorname{GL}_{m_k}$ be a Levi subgroup of $G$. Let $P$ denote the standard parabolic of $G$ with Levi component $M$. Then, for any $\pi_M \in \Pi_{\lambda_M}(M)$, $[\operatorname{Ind}_P^{G}\left(\Delta(\pi_M)\right)]$ is the image of a standard representation of $G$ in $K_{\mathbb{C}}\operatorname{Rep}_{\lambda}(G)$. Moreover, $\operatorname{Ind}_P^{G}\left(\Delta(\pi_M)\right)$ has a unique composition factor $\pi$ so that \[[\Delta(\pi)]=[\operatorname{Ind}_P^{G}\left(\Delta(\pi_M)\right)].\]
\end{proposition}
\begin{proof}
A representation $\pi_M \in \Pi_{\lambda_M}(M)$ can be written as $\pi_1\otimes \cdots \otimes \pi_k$, where $\pi_i \in \Pi_{\lambda_i}(\operatorname{GL}_{m_i})$ for $1\leq i \leq k$. Moreover, $\Delta(\pi_M)=\Delta(\pi_1)\otimes \cdots \otimes \Delta(\pi_k)$. It suffices to prove this proposition for the case $k=2$. Each $\pi_i$ has the associated data of $\Delta^i_1, \ldots, \Delta^i_{k_i}$ and $x^i_1>x^i_2>\cdots >x^i_{k_i}$, where the $Q(\Delta^i_j)$s are irreducible tempered representations. Then, from Lemma \ref{lemma: standard is second induction}, $\Delta(\pi_i)=\operatorname{Ind}_{P_i}^{\operatorname{GL}_{m_i}}(Q(\Delta^i_1)(x^i_1)\otimes \cdots \otimes Q(\Delta^i_{k_i})(x^i_{k_i}))$, where $P_i$ is specified by the $Q(\Delta^i_j)$s. Thus, we have
\begin{align*}
&\Delta(\pi_1)\otimes \Delta(\pi_2) \\
&= \operatorname{Ind}_{P_1}^{\operatorname{GL}_{m_1}}(Q(\Delta^1_1)(x^1_1)\otimes \cdots \otimes Q(\Delta^1_{k_1})(x^1_{k_1})) \otimes \operatorname{Ind}_{P_2}^{\operatorname{GL}_{m_2}}(Q(\Delta^2_1)(x^2_1)\otimes \cdots \otimes Q(\Delta^2_{k_2})(x^2_{k_2})).
\end{align*}
Let $P$ be the standard parabolic subgroup of $G$ with Levi component $\operatorname{GL}_{m_1}\times \operatorname{GL}_{m_2}$. Applying the exact functor, $\operatorname{Ind}_P^G$ throughout, we get
\begin{align*}
&\operatorname{Ind}_P^G(\Delta(\pi_1)\otimes \Delta(\pi_2)) \\
&\simeq \operatorname{Ind}_{P_{12}}^{G}(Q(\Delta^1_1)(x^1_1)\otimes \cdots \otimes Q(\Delta^1_{k_1})(x^1_{k_1}) \otimes Q(\Delta^2_1)(x^2_1)\otimes \cdots \otimes Q(\Delta^2_{k_2})(x^2_{k_2})).
\end{align*}
Here $P_{12} \subset P$ is the standard parabolic subgroup specified by the $Q(\Delta^i_j)$s and the identification follows from transitivity of induction. We rearrange the $Q(\Delta^i_j)(x^i_j)$s so that the $x^i_j$ are decreasing, and whenever two consecutive $x^i_j$s are equal, we may replace $Q(\Delta^i_j)(x^i_j)\otimes Q(\Delta^{i'}_{j'})(x^i_j)$ by $Q(\Delta^i_j, \Delta^{i'}_{j'})(x^i_j)$, which is the full induced representation and also an irreducible tempered representation twisted by $x^i_j$. This rearrangement does not affect the representative of $\operatorname{Ind}_{P_{12}}^{\operatorname{GL}_n}(Q(\Delta^1_1)(x^1_1)\otimes \cdots \otimes Q(\Delta^1_{k_1})(x^1_{k_1}) \otimes Q(\Delta^2_1)(x^2_1)\otimes \cdots \otimes Q(\Delta^2_{k_2})(x^2_{k_2}))$ in $K_{\mathbb{C}}\operatorname{Rep}_{\lambda}(G)$. Thus, we obtain a decreasing sequence $y_1,\ldots,y_{l}$ from the $x^i_j$s and multisets $\alpha_1, \ldots, \alpha_{l}$ which partition the $\Delta^i_j$s. Setting $\tau_i=Q(\alpha_i)$, we may write
\[[\operatorname{Ind}_P^{G}(\Delta(\pi_1)\otimes \Delta(\pi_2))]=[\operatorname{Ind}_{P'_{12}}^{G}(\tau_1(y_1)\otimes \cdots \otimes \tau_l(y_l))].\]
Here $P'_{12}=M'N'$ is the standard parabolic subgroup specified by the $\alpha_i$s. Now $\tau_1\otimes \cdots \otimes \tau_l$ is a tempered representation of $M'$ and $y_1 > \cdots > y_l$ specifies a $P'_{12}$-positive unramified character of $M'$. This shows that the representation in the above equation is a standard representation of $G$.
We now exhibit the unique choice of $\pi$ such that $[\Delta(\pi)]=[\operatorname{Ind}_P^{G}(\Delta(\pi_1)\otimes \Delta(\pi_2))]$. This choice is completely determined by the multisegment data of the $\tau_i(y_i)$s, as we explain below. For a segment $\Delta=[\rho(b),\rho(e)]$, set $\Delta(x)=[\rho(b+x),\rho(e+x)]$. For a multisegment $\beta=\{\Delta_1,\ldots,\Delta_s\}$, set $\beta(x)=\{\Delta_1(x), \ldots, \Delta_s(x)\}$. With this in mind, write $\alpha=\alpha_1(y_1) \sqcup \alpha_2(y_2)\sqcup \cdots \sqcup \alpha_l(y_l)$, where the $\alpha_i$s and $y_i$s were determined above. If we write this disjoint union as a concatenation, \textit{i.e.,} preserve the order of the $\alpha_i$s and the segments within them, then this multisegment satisfies the "does not precede" condition due to the procedure carried out above. This $\alpha$ corresponds to a unique irreducible representation $Q(\alpha)$ obtained from the Langlands classification via multisegments, which is the unique irreducible quotient of $\operatorname{Ind}_{P'_{12}}^{G}(\tau_1(y_1)\otimes \cdots \otimes \tau_l(y_l))$. Setting $\pi=Q(\alpha)$, we have $\Delta(\pi)=\operatorname{Ind}_{P'_{12}}^{G}(\tau_1(y_1)\otimes \cdots \otimes \tau_l(y_l))$. Thus, we have
\[[\operatorname{Ind}_P^{G}(\Delta(\pi_1)\otimes \Delta(\pi_2))]=[\Delta(\pi)]\]
in $K_{\mathbb{C}}\operatorname{Rep}_{\lambda}(G)$.
\end{proof}
\begin{remark}\label{remark: standard identification multisegments}
In the proof above, observe that $\alpha$ is given by the disjoint union of the multisegments of $\pi_1$ and $\pi_2$ \textit{after} appropriate rearrangement to satisfy the "does not precede" condition. Thus, it is easy to see how the multisegment data of the $\pi_i$s completely determines the representation $\Delta(\pi)$. This procedure generalizes to $k$ representations: If $\pi_i$ corresponds to $\alpha_i$, set $\alpha=\sqcup_i \alpha_i$ rearranged so that the segments satisfy the "does not precede" condition. Then $\pi=Q(\alpha)$ is the uniquely determined Langlands quotient of $\operatorname{Ind}_P^G(\otimes_{\Delta \in \alpha}Q(\Delta))$ so that
\[[\Delta(\pi)]=[\operatorname{Ind}_P^{G}(\Delta(\pi_1)\otimes \cdots \otimes \Delta(\pi_k))]=[\operatorname{Ind}_P^{G}(\Delta(\pi_M))],\]
where $\Delta(\pi_M) = \Delta(\pi_1)\otimes \cdots \otimes \Delta(\pi_k)$.
\end{remark}
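By way of illustration, consider the simplest non-trivial case; this toy example is ours and uses only the classical Langlands classification for $\operatorname{GL}_2$, with normalized induction. Take $G=\operatorname{GL}_2$, $M=\operatorname{GL}_1\times\operatorname{GL}_1$ and $\pi_M = |\cdot|^{1/2}\otimes|\cdot|^{-1/2}$, so that (taking $\rho$ trivial) $\alpha_1 = \{[1/2,1/2]\}$ and $\alpha_2 = \{[-1/2,-1/2]\}$. The concatenation $\alpha = \{[1/2,1/2],[-1/2,-1/2]\}$, in this order, satisfies the "does not precede" condition, and $Q(\alpha)$ is the trivial representation $\operatorname{triv}$ of $\operatorname{GL}_2(F)$. Since characters of the torus are their own standard representations, $\Delta(\pi_M)=\pi_M$, and
\[
[\operatorname{Ind}_B^{G}(\Delta(\pi_M))]
=
[\Delta(\operatorname{triv})]
=
[\operatorname{triv}] + [\operatorname{St}]
\]
in $K_{\mathbb{C}}\operatorname{Rep}_{\lambda}(G)$, where $B$ is the Borel subgroup of upper-triangular matrices and $\operatorname{St}$ is the Steinberg representation.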
This property of induction coincides with endoscopic lifting. Thus, we see that endoscopic lifting is characterized by parabolic induction in this case.
\begin{proposition}\label{prop: induction and langlands functorialty}
Let $M \simeq \operatorname{GL}_{m_1} \times \operatorname{GL}_{m_2}\times \cdots \times \operatorname{GL}_{m_k}$ be a Levi subgroup of $G$. Let $P=MN$ be the standard parabolic subgroup with Levi component $M$. Then, for any $[\pi] \in K_{\mathbb{C}}\operatorname{Rep}_{\lambda}(M)$,
\[\operatorname{Lift}_M^G([\pi])=[\operatorname{Ind}_P^{G}(\pi)].\]
\end{proposition}
\begin{proof}
We show that $\operatorname{Lift}_M^G$ and $\operatorname{Ind}_P^{G}$ have the same image on the basis of $K_{\mathbb{C}}\operatorname{Rep}_{\lambda_M}(M)$ consisting of standard representations. Let $\phi$ be a Langlands parameter for $G$ with infinitesimal parameter $\lambda$, both of which factor through $M$. We denote the Langlands and infinitesimal parameters for $M$ by $\phi_M$ and $\lambda_M$, respectively. Using Remark \ref{remark: standard identification multisegments} and the compatibility of the local Langlands correspondence with parabolic induction, we have that
\[[\operatorname{Ind}_P^{G}(\Delta(\pi_{\phi_M}))]=[\Delta(\pi_{\phi})].\]
On the other hand, from Proposition \ref{prop:Lift}, we have that
\[\operatorname{Lift}_M^G([\Delta(\pi_{\phi_M})])=[\Delta(\pi_{\phi})].\]
Since the two maps agree on the basis of standard representations, they are the same.
\end{proof}
Finally, we show that endoscopic lifting identifies the $A$-packet of the Levi with the $A$-packet of $G$.
\begin{proposition}\label{prop: Lift on irreducibles of A type}
For every $\mathcal{F}\in D_{H_{\lambda}}(V_{\lambda})$,
\[
\langle \eta_{\psi_M},[\mathcal{F}\vert_{V_{\lambda_M}}]\rangle_{\lambda_M}
=
\langle \eta_{\psi},[\mathcal{F}]\rangle_{\lambda};
\]
equivalently,
\begin{equation*}
\operatorname{Lift}_M^G[\pi_{\psi_M}]
=
[\pi_\psi].
\end{equation*}
\end{proposition}
\begin{proof}
Let $P$ be the standard parabolic subgroup of $G$ with Levi component $M$. The representation $\pi_{\psi_M}$ is a product of unitary Speh representations. We know that $\operatorname{Ind}_P^G(\pi_{\psi_M})$ is an irreducible representation of $G$; see \cite{Atobe}*{Section 2.4}, for example. By matching multisegments of $\phi_{\psi_M}$ and $\phi_{\psi}$, we have \begin{equation}\label{eq: induction is irreducible on A-par}
[\operatorname{Ind}_P^G(\pi_{\psi_M})]=[\pi_{\psi}].
\end{equation}
Recall that $\eta_{\psi}=[\pi_{\psi}]$, as the A-packet for $\psi$ is a singleton. We know from Proposition \ref{prop:M} that $\Pi_{\psi_M}(M)=\{\pi_{\psi_M}\}$, which gives $\eta_{\psi_M}=[\pi_{\psi_M}]$. Now we have
\[
\begin{array}{rlr}
\langle \eta_{\psi_M}, [\mathcal{F}\vert_{V_{\lambda_M}}] \rangle_{\lambda_M}
&= \langle [\pi_{\psi_M}], \varepsilon^*[\mathcal{F}] \rangle_{\lambda_M}
& \text{$\Pi_{\psi_M}(M) = \{ \pi_{\psi_M} \}$},
\\
&= \langle \operatorname{Lift}_M^G[\pi_{\psi_M}], [\mathcal{F}] \rangle_{\lambda}
& \text{by \eqref{equation:e_*}},
\\
&= \langle [\operatorname{Ind}_P^G \left(\pi_{\psi_M}\right)], [\mathcal{F}] \rangle_{\lambda}
& \text{Proposition~\ref{prop: induction and langlands functorialty}},
\\
&= \langle [\pi_{\psi}], [\mathcal{F}] \rangle_{\lambda}
& \text{by \eqref{eq: induction is irreducible on A-par}},
\\
&= \langle \eta_{\psi}, [\mathcal{F}] \rangle_{\lambda}
& \text{$\Pi_{\psi}(G) = \{ \pi_\psi \}$}.
\end{array}
\]
\end{proof}
\section{Fixed-point Formula}\label{section:FPF}
The proof of the main result, Theorem~\ref{thm:main}, uses a fixed-point formula, explained in this section.
From Section~\ref{introduction}, recall that $V_{\lambda_M}$ is the subvariety of $V_{\lambda}$ fixed by $\operatorname{Ad}(s)$,
where $s\in \dualgroup{G}$ is a finite-order element such that $\dualgroup{M} = Z_{\dualgroup{G}}(s)$: $V_{\lambda_M} = V_{\lambda}^s$.
Let
\[
\varepsilon : V_{\lambda_M} \hookrightarrow V_\lambda
\]
be the obvious inclusion.
Let $\varepsilon^* : \operatorname{D}_{H_\lambda}(V_\lambda) \to \operatorname{D}_{H_{\lambda_M}}(V_{\lambda_M})$ be the equivariant restriction functor of equivariant derived categories.
We will also use the notation
\[
\mathcal{F}\vert_{V_{\lambda_M}} \coloneqq \varepsilon^*\mathcal{F}.
\]
While $\varepsilon^*$ is an exact functor, it does not take perverse sheaves to perverse sheaves.
\begin{lemma}\label{lemma:FPF}
Let $\psi$ and $\psi_M$ be as above.
For all $\mathcal{F}\in \operatorname{D}_{H_\lambda}(V_\lambda)$,
\[
(-1)^{d(\psi)}\operatorname{rank} \left(\operatorname{Evs}_{\psi}\mathcal{F}\right)
=
(-1)^{d(\psi_M)}
\operatorname{rank} \left(\operatorname{Evs}_{\psi_M} \mathcal{F}\vert_{V_{\lambda_M}}\right).
\]
\end{lemma}
\begin{proof}
By \cite{CFMMX}*{Proposition 7.8 and Definition 2}, the functor $\operatorname{Evs}_\psi$ is related to vanishing cycles by
\[
\operatorname{Evs}_\psi \mathcal{F}
=
(-1)^{d({\hat\psi})-\dim V_\lambda} \left( \operatorname{R\hskip-1pt\Psi}_{y_\psi} [-1] \mathcal{F}\right)_{x_\psi},
\]
where
\begin{itemize}
\item
$x_\psi$ is the point for $\phi_\psi$ in the moduli space $V_\lambda$;
\item
$y_\psi$ is the point in the dual moduli space $V_\lambda^*$ matching the Langlands parameter $\phi_{\hat\psi}$, where ${\hat \psi}(w,x,y) \coloneqq \psi(w,y,x)$;
\item
$\operatorname{R\hskip-1pt\Psi}_{y_{\psi}}$ is Deligne's vanishing cycles functor; and
\item
$d({\hat\psi})$ is the dimension of the $H_\lambda$-orbit of $y_\psi$ in $V_\lambda^*$.
\end{itemize}
Next, recall the relation between vanishing cycles and local Morse groups, as for example in \cite{GM:book}*{Part II, Chapter 6, Section 6.A.2}, so
\[
\left( \operatorname{R\hskip-1pt\Psi}_{y_\psi} [-1] \mathcal{F}\right)_{x_\psi}
=
A^\bullet_{y_\psi}(\mathcal{F}),
\]
where we view $y_\psi \in T^*_{C_\psi,x_\psi}(V_\lambda)$.
Here we use \cite{CFMMX}*{Proposition 6.1} to see that $(x_\psi,y_\psi)\in T^*_{H_\lambda}(V_\lambda)$ is regular, so $y_\psi$ is non-degenerate in the sense of Morse theory.
Combining these observations, it follows that
\[
\mathcal{H}^i \left( \operatorname{Evs}_\psi \mathcal{F}[\dim C_\psi] \right) = H^i(J,K;\mathcal{F}),
\]
where $(J,K)$ is normal Morse data corresponding to $y_\psi$ as a linear functional on $V_\lambda$, as in \cite{GM:book}*{Part II, Chapter 6, Section 6.A.1}.
Now recall that $M$ was chosen for $\psi$ precisely so that the image of $\psi$ lies in $\dualgroup{M} = Z_{\dualgroup{G}}(s)$ and, consequently, $s\in Z_{\dualgroup{G}}(\psi)$.
Recall also that for $G= \operatorname{GL}_n$, the group $Z_{\dualgroup{G}}(\psi)$ is connected, so $A_\psi = \pi_0(Z_{\dualgroup{G}}(\psi))$ is trivial.
This allows us to interpret $\operatorname{rank} \operatorname{Evs}_\psi \mathcal{F}$ as a Lefschetz number:
\[
\operatorname{rank}\left(\operatorname{Evs}_\psi \mathcal{F}\right)
=
\operatorname{trace}\left(s,\operatorname{Evs}_\psi \mathcal{F}\right)
=
(-1)^{d(\psi)} \sum_{i}(-1)^i \operatorname{trace}\left(s,H^i(J,K;\mathcal{F})\right).
\]
Arguing as in the proof of \cite{ABV}*{Theorem 25.8}, which makes essential use of \cite{GM:Lefschetz}, it now follows that
\[
\sum_{i}(-1)^i \operatorname{trace}\left(s,H^i(J,K;\mathcal{F})\right)
=
\sum_{i}(-1)^i \operatorname{trace}\left(s,H^i(J^s,K^s;\mathcal{F})\right);
\]
in other words,
\[
(-1)^{d(\psi)} \operatorname{trace}(s,\operatorname{Evs}_\psi \mathcal{F})
=
(-1)^{d(\psi_M)}\operatorname{trace}(s,\operatorname{Evs}_{\psi_M} \varepsilon^*\mathcal{F});
\]
equivalently,
\[
(-1)^{d(\psi)} \operatorname{rank} \operatorname{Evs}_\psi \mathcal{F}
=
(-1)^{d(\psi_M)} \operatorname{rank} \operatorname{Evs}_{\psi_M} \varepsilon^*\mathcal{F}.
\]
Here we have also used that, by construction, $\varepsilon(x_{\psi_M}) = x_{\psi}$ and likewise, $y_{\psi_M}$ maps to $y_{\psi}$ under $V_{\lambda_M}^* \hookrightarrow V_\lambda^*$, and by \cite{CFMMX}*{Proposition 6.1}, $(x_\psi,y_\psi)\in T^*_{H_\lambda}(V_\lambda)$ is regular, while the same result shows $(x_{\psi_M},y_{\psi_M})\in T^*_{H_{\lambda_M}}(V_{\lambda_M})$ is regular.
\end{proof}
\begin{proposition}\label{prop:FPF}
Let $M$ be any Levi subgroup of $G$ and let $\psi_M$ be any Arthur parameter for $M$; let $\psi$ be its lift to $G$.
Let $\lambda_M$ (resp. $\lambda$) be the infinitesimal parameter of $\psi_M$ (resp. $\psi$).
Then
\[
\langle \eta^{\operatorname{Evs}}_{\psi} , [\mathcal{F}] \rangle_{\lambda}
=
\langle \eta^{\operatorname{Evs}}_{\psi_M} , [\mathcal{F}\vert_{V_{\lambda_M}}] \rangle_{\lambda_M},
\]
for every $\mathcal{F}\in \operatorname{Per}_{H_{\lambda}}(V_{\lambda})$.
\end{proposition}
\begin{proof}
For all $\mathcal{F}\in D_{H_\lambda}(V_\lambda)$,
\[
\begin{array}{rlr}
{\langle \eta^{\operatorname{Evs}}_{\psi}, [\mathcal{F}] \rangle_{\lambda}}
&= (-1)^{d(\psi)}
\operatorname{rank}\left(\operatorname{Evs}_\psi \mathcal{F}\right)
& \text{by Proposition~\ref{prop:eta}}
\\
&= (-1)^{d(\psi_M)} \operatorname{rank}\left(\operatorname{Evs}_{\psi_M} \mathcal{F}\vert_{V_{\lambda_M}}\right)
& \text{by Lemma~\ref{lemma:FPF}}
\\
&= \langle \eta^{\operatorname{Evs}}_{\psi_M}, [\mathcal{F}\vert_{V_{\lambda_M}}] \rangle_{\lambda_M}
& \text{by Proposition~\ref{prop:eta}}.
\end{array}
\]
\end{proof}
\section{Main result}
\begin{theorem}[Vogan's conjecture for A-packets for $p$-adic general linear groups]\label{thm:main}
For every $p$-adic field $F$, every positive integer $n$ and every Arthur parameter $\psi$ for $\operatorname{GL}_n(F)$, the A-packet for $\psi$ coincides with the ABV-packet for the Langlands parameter $\phi_\psi$, and the virtual representation attached to this packet agrees with Arthur's:
\[
\Pi^{\mbox{\raisebox{1pt}{\scalebox{0.5}{$\mathrm{ABV}$}}}}_{\phi_\psi}(G) = \Pi_\psi(G) = \{ \pi_\psi\},
\qquad\text{and}\qquad
\eta^{\operatorname{Evs}}_\psi = \eta_\psi = [\pi_\psi].
\]
\end{theorem}
\begin{proof}
The proof is obtained by the following diagram, which we call the "endoscopy square", in which $\mathcal{F}\in \operatorname{D}_{H_\lambda}(V_\lambda)$ is arbitrary.
\[
\begin{tikzcd}
{\langle \eta^{\operatorname{Evs}}_{\psi}, [\mathcal{F}] \rangle_{\lambda}}
\arrow[equal]{d}[swap]{\text{Fixed-point formula \hskip3pt}}
\arrow[dashed,equal]{rrrr}{\text{Vogan's conjecture for $\psi$}}
&&&&
{\langle \eta_{\psi}, [\mathcal{F}] \rangle_{\lambda}}
\\
\langle \eta^{\operatorname{Evs}}_{\psi_M}, [\mathcal{F}\vert_{V_{\lambda_M}}] \rangle_{\lambda_M}
\arrow[equal]{rrrr}[swap]{\text{Vogan's conjecture for $\psi_M$}}
&&&&
\langle \eta_{\psi_M}, [\mathcal{F}\vert_{V_{\lambda_M}}] \rangle_{\lambda_M}
\arrow[equal]{u}[swap]{\text{\hskip3pt Endoscopic lifting}}
\end{tikzcd}
\]
We establish the equality across the top by verifying the equality on the other three sides.
The left-hand side of the endoscopy square is a consequence of Proposition~\ref{prop:FPF}.
The equality on the bottom of the endoscopy square is a direct consequence of Proposition~\ref{prop:M}; we remark that this result makes use of the main result from \cite{CR:irred}.
The right-hand side of the endoscopy square is Proposition~\ref{prop: Lift on irreducibles of A type}.
We may now conclude
\[
\langle \eta^{\operatorname{Evs}}_{\psi},[\mathcal{F}]\rangle_\lambda
=
\langle \eta_{\psi},[\mathcal{F}]\rangle_\lambda ,
\]
for every $\mathcal{F}\in D_{H_\lambda}(V_\lambda)$.
Since the pairing above is non-degenerate, it follows that
\[
\eta^{\operatorname{Evs}}_{\psi} = \eta_{\psi}.
\]
Since $\eta_\psi = [\pi_\psi]$, it follows that
\[
\Pi^{\mbox{\raisebox{1pt}{\scalebox{0.5}{$\mathrm{ABV}$}}}}_{\phi_\psi}(G) = \{ \pi_\psi \} = \Pi_\psi(G).
\]
This concludes the proof of Vogan's conjecture for A-packets of general linear groups.
\end{proof}
\section{Langlands-Shelstad transfer and endoscopic lifting}\label{sec:LSLift}
In this section we show that the endoscopic lifting $\operatorname{Lift}_M^G$ of Section~\ref{section:Lift} coincides with Langlands-Shelstad transfer from the Levi subgroup $M$ to the general linear group $G$.
This result does not play a role in the proof of the main result, Theorem~\ref{thm:main}, so it is offered here as a remark.
Recall that Langlands-Shelstad transfer is defined first on Schwartz functions. Since we are considering the case $G=\operatorname{GL}_n$ and $M = \operatorname{GL}_{m_1}\times\cdots\times\operatorname{GL}_{m_k}$ for $n=m_1 + \cdots + m_k$, there is no need to mention stability in this context.
The geometric transfer coefficients $\Delta(\gamma,\delta)$ are very simple in this case: functions $f\in C^\infty_c(G(F))$ and $f^M\in C^\infty_c(M(F))$ are said to match if
\[
\mathcal{O}^M_\gamma(f^M)
=
\Delta(\gamma,\delta)\ \mathcal{O}^G_\delta(f),
\]
for regular semisimple $\gamma\in M(F)$ and $\delta\in G(F)$, where $\Delta(\gamma,\delta)=0$ unless $\gamma$ and $\delta$ have the same characteristic polynomials, in which case
\[
\Delta(\gamma,\delta) = \lvert {\det}_{\mathfrak{g}/\mathfrak{m}}\left(1-\operatorname{Ad}(\gamma) \right)\rvert_F.
\]
See, for example, \cite{Clozel}*{\S 2}.
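To make the transfer coefficient concrete in the simplest case (a computation we include for orientation; it is not taken from \cite{Clozel}), let $G=\operatorname{GL}_2$ and let $M=\operatorname{GL}_1\times\operatorname{GL}_1$ be the diagonal torus. For regular $\gamma = \operatorname{diag}(a,b)$, the operator $\operatorname{Ad}(\gamma)$ acts on $\mathfrak{g}/\mathfrak{m}$ with eigenvalues $a/b$ and $b/a$, on the upper and lower off-diagonal entries respectively, so
\[
\Delta(\gamma,\delta)
=
\lvert (1-a/b)(1-b/a) \rvert_F
=
\frac{\lvert (a-b)^2 \rvert_F}{\lvert ab \rvert_F}
\]
whenever $\delta \in G(F)$ has characteristic polynomial $(X-a)(X-b)$.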
In fact, in the case at hand, Langlands-Shelstad transfer is given by the linear transformation
\begin{align*}
C^\infty_c(G(F)) &\to C^\infty_c(M(F))\\
f &\mapsto f^M
\end{align*}
defined by
\[
f^M(m) = \delta_{P}^{1/2}(m) \int_{K} \int_{N(F)} f(kmuk^{-1})\, du\, dk
\]
where $K$ is the maximal compact subgroup of $G$, $N$ is the unipotent radical of the standard parabolic $P$ with Levi component $M$ and $\delta_P$ is the modulus quasicharacter for $P(F)$.
Recall also that distributions $D$ on $G(F)$ and $D^M$ on $M(F)$ are related by Langlands-Shelstad transfer if
\[
D^M(f^M) = D(f),
\]
for all $f\in C^\infty_c(G(F))$.
We recall the distribution character $\Theta_\pi$ attached to an admissible representation $(\pi,V)$ of $G$. For any $f \in C_c^{\infty}(G)$, the linear operator
\[\pi(f)v=\int_G f(g)\pi(g)v\, dg\]
is of finite rank, by admissibility of $\pi$. Therefore, it has a well-defined trace
\[\Theta_{\pi}(f) := \op{tr}\pi(f).\]
Furthermore, Harish-Chandra's work gives us a locally integrable function $\theta_{\pi}$ on $G$ so that $\Theta_{\pi}$ is expressed in terms of $\theta_{\pi}$:
\begin{equation}\label{big theta small theta}
\Theta_{\pi}(f)=\int_{G}f(g)\theta_{\pi}(g) dg.
\end{equation}
We note here that the above is true for reductive $p$-adic groups, and not just general linear groups.
\begin{lemma}\label{lemma: LS matches standards with standards}
Let $\pi_M$ be an irreducible admissible representation of $M(F)$.
Let $\phi_M$ be the Langlands parameter of $\pi_M$; let $\phi$ be the lift of $\phi_M$ to $G$ and let $\pi$ be the irreducible admissible representation of $G(F)$ matching $\phi$ under the Langlands correspondence.
Recall that $\Delta(\pi)$ (resp. $\Delta(\pi_M)$) denotes the standard representation for $\pi$ (resp. $\pi_M$).
Then Langlands-Shelstad transfer matches standard representations with standard representations:
\begin{equation*}
\Theta_{\Delta(\pi_M)}(f^M)
=
\Theta_{\Delta(\pi)}(f),
\end{equation*}
for all $f\in C^\infty_c(G(F))$.
\end{lemma}
\begin{proof}
We show this by direct calculation. Recall that $\pi_M \in \Pi_{\lambda_M}(M)$ is matched with a $\pi \in \Pi_{\lambda}(G)$ by matching their multisegments, in the sense of Remark \ref{remark: standard identification multisegments}. Set $\tau=\operatorname{Ind}_P^G(\Delta(\pi_M))$, where $P$ is the standard parabolic subgroup of $G$ with Levi subgroup $M$. Recall that $K$ is the maximal compact subgroup of $G$, and we have $G(F)=K\, M(F)\, N(F)$. After making all the appropriate choices of Haar measures, we have
\[
\begin{array}{rlr}
&\Theta_{\Delta(\pi_M)}(f^M)\\
&= \int_{M(F)} f^M(m)\theta_{\Delta(\pi_M)}(m)\, dm
& \text{by \eqref{big theta small theta}},
\\
&= \int_{M(F)} \delta_{P}^{1/2}(m) \int_{K} \int_{N(F)} f(kmuk^{-1})\, du\, dk \, \theta_{\Delta(\pi_M)}(m) \, dm
& \text{\hskip10pt definition of $f^M$},
\\
&= \int_K \int_{N(F)} \int_{M(F)} \delta_{P}^{1/2}(m)\, \theta_{\Delta(\pi_M)}(m)f(kmuk^{-1})\, dm \, du \, dk
& \text{Fubini-Tonelli},
\\
&= \Theta_{\tau}(f),
\end{array}
\]
where the last equality follows from \cite{vanDijk}*{Theorem 2} paired with the remark at the end of Section 5 in \textit{loc.\ cit}. We know from Proposition \ref{prop:Ind standard} that $[\tau]=[\operatorname{Ind}_P^G(\Delta(\pi_M))]=[\Delta(\pi)]$. In this sense, the transfer of functions $f \mapsto f^M$ matches distribution characters of standard representations at the level of Grothendieck groups.
\end{proof}
Passing from distributions built from irreducible representations to the Grothendieck group of these representations, Langlands-Shelstad transfer defines a linear transformation
\[
\op{LS} : K_\mathbb{C} \operatorname{Rep}_{\lambda_M}(M) \to K_\mathbb{C} \operatorname{Rep}_{\lambda}(G).
\]
This linear transformation sends standard representations to standard representations exactly as in Proposition \ref{prop:Ind standard} and Remark \ref{remark: standard identification multisegments}. Thus, this is another way to characterize endoscopic transfer, analogous to Proposition \ref{prop: induction and langlands functorialty}.
\begin{proposition}\label{prop:Langlands shelstad endoscopic transfer}
Let $M \simeq \operatorname{GL}_{m_1} \times \operatorname{GL}_{m_2}\times \cdots \times \operatorname{GL}_{m_k}$ be a Levi subgroup of $G$. Let $P$ be the standard parabolic subgroup with Levi component $M$. Then, for any $[\pi] \in K_{\mathbb{C}}\operatorname{Rep}_{\lambda_M}(M)$,
\[\operatorname{Lift}_M^G([\pi])=\operatorname{LS}([\pi]).\]
\end{proposition}
Finally, we show that $\op{LS}$ lifts A-packets of $M$ to A-packets of $G$. This follows immediately once we recall that $\operatorname{Ind}_P^G(\pi_{\psi_M})\simeq \pi_{\psi}$ from the proof of Proposition \ref{prop: Lift on irreducibles of A type}. Now, we may repurpose the proof of Lemma \ref{lemma: LS matches standards with standards} to assert the following.
\begin{proposition}\label{LS matches A packets with A packets}
Langlands-Shelstad transfer matches $\pi_{\psi_M}$ with $\pi_{\psi}$, in the sense that
\begin{equation*}
\Theta_{\pi_{\psi_M}}(f^M)
=
\Theta_{\pi_{\psi}}(f)
\end{equation*}
for any $f \in C_c^{\infty}(G(F))$.
\end{proposition}
\begin{remark}
While it may appear that we are repackaging the results of Section \ref{ssec:Lift} in a slightly different language, our purpose here is to demonstrate that Langlands-Shelstad transfer can potentially give, in more general settings, the results that we obtain here from parabolic induction for general linear groups. In particular, we expect Langlands-Shelstad transfer to match standard representations with standard representations at the level of Grothendieck groups, just as parabolic induction does. However, parabolic induction of a representation from the $A$-packet of a Levi subgroup (or more generally, an endoscopic group) may not be irreducible, and is therefore not a good candidate for lifting A-packets of the Levi subgroup to A-packets of the group. In future work, we study Vogan's conjecture for a classical group $G$. Suppose $H$ is an endoscopic group of $G$. We propose an independent study of Langlands-Shelstad transfer to obtain the image $\op{Lift}_H^G([\pi_{\psi_H}])$.
\end{remark}
\section{Examples}\label{sec:examples}
In this section, we provide examples to supplement the theory developed in the paper. Although this paper generalizes the situation from a simple Arthur parameter to an arbitrary parameter, we begin with a simple parameter and then move on to sums of simple parameters.
\begin{example}[Steinberg $\operatorname{GL}_2$]\label{example:Steinberg GL2}
In this example, we work with a simple Arthur parameter, calculate the spectral and geometric multiplicity matrices, and demonstrate Hypothesis \ref{hypothesis} in this case. For $G=\operatorname{GL}_2$ over $F$, consider the Arthur parameter
$\psi : W''_F \to \dualgroup{G}$ defined by
\[\psi(w,x,y)=\operatorname{Sym}^1(x).\]Then, $\phi_{\psi}(w,x)=\operatorname{Sym}^1(x)$ and $\lambda(w) = \operatorname{diag}(|w|^{1/2}, |w|^{-1/2})$.
We start with the spectral side: $\operatorname{Rep}^\text{fl}_\lambda(G)$ contains exactly two irreducible representations, the trivial representation $\pi_0$ and the Steinberg representation $\pi_1$. The Steinberg representation is its own standard representation because it is tempered, so $\Delta(\pi_1) = \pi_1$.
The standard representation for the trivial representation $\pi_0$ is $\Delta(\pi_0) = \operatorname{Ind}_B^G(\chi)$, where $\chi(\operatorname{diag}(t_1, t_2)) = |t_1|^{1/2}|t_2|^{-1/2}$ and $B$ is the standard Borel subgroup of $\operatorname{GL}_2$.
From the short exact sequence
\[
\begin{tikzcd}
0 \arrow{r} & \pi_0 \arrow{r} & \operatorname{Ind}_B^G(\chi) \arrow{r} & \pi_1 \arrow{r} & 0
\end{tikzcd}
\]
we see that $\pi_0$ and $\pi_1$ both appear in $\Delta(\pi_0)$ with multiplicity $1$, so
\begin{eqnarray*}\,
[\Delta(\pi_0)] &=& [\pi_0] + [\pi_1], \text{ and}\\ \,
[\Delta(\pi_1)] &=& [\pi_1]
\end{eqnarray*}
in $K\operatorname{Rep}^\text{fl}_\lambda(G)$. Thus, we have
\[
m =
\begin{bmatrix}
1 & 0\\
1 & 1
\end{bmatrix}.
\]
Now we describe the geometry.
\[ V_\lambda= \left\{\begin{pmatrix}0&x\\0&0\end{pmatrix}: \text{ }x\in \mathbb{C} \right\}\simeq\mathbb{A}^1_{\mathbb{C}},\]
and \[H_\lambda = \operatorname{GL}_1(\mathbb{C})\times \operatorname{GL}_1(\mathbb{C}),\] with action $s\cdot x = s_1 x s_2^{-1}$, where $s=(s_1,s_2)$.
The two $H_\lambda$-orbits in $V_\lambda$ are $C_0 = \{ 0 \}$ and $C_1 = \{ x\in \mathbb{A}^1 \ : \ x\ne 0 \}$, and simple objects in $\operatorname{Per}_{H_\lambda}(V_\lambda)$ are ${\mathcal{IC}\hskip-1pt }({\mathbbm{1}}_{C_0})$ and ${\mathcal{IC}\hskip-1pt }({\mathbbm{1}}_{C_1})$.
In order to compute the matrix $c$, we pick base points $x_0 \in C_0$ and $x_1\in C_1$ and compute the stalks: The sheaf complex ${\mathcal{IC}\hskip-1pt }({\mathbbm{1}}_{C_0})$ is the skyscraper sheaf ${\mathbbm{1}}_{C_0}^\natural$ at $C_0$ in degree $0$ while ${\mathcal{IC}\hskip-1pt }({\mathbbm{1}}_{C_1})$ is the constant sheaf on $V_\lambda$ shifted by $1$, ${\mathbbm{1}}_{V}[1]$.
It follows that
\begin{eqnarray*}\,
[{\mathcal{IC}\hskip-1pt }({\mathbbm{1}}_{C_0})] &=& [{\mathbbm{1}}_{C_0}^\natural] \\\,
(-1)[{\mathcal{IC}\hskip-1pt }({\mathbbm{1}}_{C_1})] &=& [{\mathbbm{1}}_{C_0}^\natural] + [{\mathbbm{1}}_{C_1}^\natural],
\end{eqnarray*}
in $K\operatorname{Per}_{H_\lambda}(V_\lambda)$, so
\[
c =
\begin{bmatrix}
1 & 1\\
0 & 1
\end{bmatrix}.
\]
It is now clear that $m = \,^tc$, as predicted by Hypothesis~\ref{hypothesis}.
\end{example}
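Even in this smallest case, the content of Hypothesis \ref{hypothesis} is a concrete identity between integer matrices. As a sketch outside the paper's formalism (Python with numpy; the variable names are ours, not the paper's notation), one can record the two matrices and check the transpose relation:

```python
import numpy as np

# m: rows give [Delta(pi_0)] = [pi_0] + [pi_1] and [Delta(pi_1)] = [pi_1].
m = np.array([[1, 0],
              [1, 1]])

# c: columns give [IC(1_{C_0})] and (-1)[IC(1_{C_1})] in the basis of
# standard sheaves, as computed from the stalks above.
c = np.array([[1, 1],
              [0, 1]])

# Hypothesis: m equals the transpose of c.
assert (m == c.T).all()
```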
\begin{example}\label{example:braden gl4}
We now consider an Arthur parameter of small dimension which is not simple. We show the computation of the geometric and spectral multiplicity matrices $c$ and $m$. We will also see, in the next two examples, that the Vogan variety for this parameter does not decompose into a product of Vogan varieties, so it is an example of the type of parameter this paper addresses.
Consider the group $\operatorname{GL}_4(F)$ and the Arthur parameter
\[\psi(w,x,y)=\operatorname{Sym}^1(x) \oplus \operatorname{Sym}^1(y).\]
The infinitesimal parameter $\lambda_\psi$ is given by
\[
\lambda_\psi(w) =
\begin{bmatrix}
|w|^{1/2} & 0 & 0 & 0 \\
0 & |w|^{-1/2} & 0 & 0 \\
0 & 0 & |w|^{1/2} & 0 \\
0 & 0 & 0 & |w|^{-1/2}
\end{bmatrix}.
\]
We can replace $\lambda_{\psi}$ by an element in its $\operatorname{GL}_4(\mathbb{C})$-conjugacy class; this will not change the geometry. Thus, we apply the permutation $(2\hspace{1mm}3)$ to $\lambda_{\psi}$ and drop the subscript $\psi$ from the notation to get
\[
\lambda(w) =
\begin{bmatrix} |w|^{1/2}&0&0&0\\0&|w|^{1/2}&0&0\\0&0&|w|^{-1/2}&0\\0&0&0&|w|^{-1/2}
\end{bmatrix}.
\]
Let us do the geometric side first. The above rearrangement enables us to easily compute the Vogan variety and the group action.
\[V_{\lambda}=
\left\{
\begin{bmatrix}
0 & X \\
0 & 0
\end{bmatrix}: X \in \op{Mat}_2(\mathbb{C}) \right\}
\cong
\op{Mat}_2(\mathbb{C}).
\]
and \[H_{\lambda} = \operatorname{GL}_2(\mathbb{C}) \times \operatorname{GL}_2(\mathbb{C}).\]
The action of $H_\lambda$ on $V_{\lambda}$ is given by $$(g_1,g_2)\cdot X=g_1Xg_2^{-1}.$$ Thus, the rank of $X$ completely determines its $H_\lambda$-orbit. There are three orbits, $C_0$, $C_1$, and $C_2$, consisting of matrices of rank $0$, $1$, and $2$, respectively. Note that $C_1$ is the orbit corresponding to $\phi_{\psi}$, so we set $C_{\psi}=C_1$.
In order to find the matrix $c_{\lambda}$ in this case, pick base points $x_0\in C_0$, $x_1\in C_1$ and $x_2\in C_2$ and consider the stalks of simple ${\mathcal{IC}\hskip-1pt }({\mathbbm{1}}_C)$. Since ${\mathcal{IC}\hskip-1pt }({\mathbbm{1}}_{C_0})$ is the skyscraper sheaf at $0\in V_\lambda$, its stalks are easy to compute:
$\mathcal{H}^\bullet_{x_0}{\mathcal{IC}\hskip-1pt }({\mathbbm{1}}_{C_0}) = {\mathbbm{1}}[0]$;
$\mathcal{H}^\bullet_{x_1}{\mathcal{IC}\hskip-1pt }({\mathbbm{1}}_{C_0}) = 0$; and
$\mathcal{H}^\bullet_{x_2}{\mathcal{IC}\hskip-1pt }({\mathbbm{1}}_{C_0}) = 0$.
Likewise, since ${\mathcal{IC}\hskip-1pt }({\mathbbm{1}}_{C_2})$ is the constant sheaf on $V_\lambda$ shifted by $4$, we have
$\mathcal{H}^\bullet_{x}{\mathcal{IC}\hskip-1pt }({\mathbbm{1}}_{C_2}) = {\mathbbm{1}}[4]$ for every $x\in V_\lambda$.
Only ${\mathcal{IC}\hskip-1pt }({\mathbbm{1}}_{C_1})$ is interesting; its stalks are given by
\begin{eqnarray*}
\mathcal{H}^\bullet_{x_0} {\mathcal{IC}\hskip-1pt }({\mathbbm{1}}_{C_1}) &=& H^\bullet(\mathbb{P}^1)[3] = {\mathbbm{1}}[1] \oplus {\mathbbm{1}}[3]\\
\mathcal{H}^\bullet_{x_1}{\mathcal{IC}\hskip-1pt }({\mathbbm{1}}_{C_1}) &=& {\mathbbm{1}}[3] \\
\mathcal{H}^\bullet_{x_2}{\mathcal{IC}\hskip-1pt }({\mathbbm{1}}_{C_1}) &=& 0.
\end{eqnarray*}
In $K\operatorname{Per}_{H_\lambda}(V_\lambda)$ we now have
\begin{eqnarray*}\,
[{\mathcal{IC}\hskip-1pt }({\mathbbm{1}}_{C_0})] &=& [{\mathbbm{1}}_{C_0}^\natural] \\ \,
(-1)[{\mathcal{IC}\hskip-1pt }({\mathbbm{1}}_{C_1})] &=& \hskip-5pt 2 [{\mathbbm{1}}_{C_0}^\natural] + [{\mathbbm{1}}_{C_1}^\natural] \\ \,
[{\mathcal{IC}\hskip-1pt }({\mathbbm{1}}_{C_2})] &=& [{\mathbbm{1}}_{C_0}^\natural] + [{\mathbbm{1}}_{C_1}^\natural] + [{\mathbbm{1}}_{C_2}^\natural].
\end{eqnarray*}
Therefore,
\[
c_\lambda=
\begin{bmatrix}
1 & 2 & 1\\
0 & 1 & 1\\
0 & 0 & 1
\end{bmatrix}.
\]
On the spectral side, $\operatorname{Rep}^\text{fl}_\lambda(G)$ contains exactly three irreducible representations, all appearing in $\operatorname{Ind}_{B}^{G}(\chi_\lambda)$ for $\chi_\lambda(\operatorname{diag}(t_1, t_2,t_3,t_4)) = |t_1 t_2|^{1/2}|t_3 t_4|^{-1/2}$. These three irreducible representations are: the unique irreducible quotient $\pi_0$ of $\operatorname{Ind}_{B}^{G}(\chi_\lambda)$ and two irreducible subrepresentations $\pi_1$ and $\pi_2$, of which only $\pi_2$ is tempered. Note that $\pi_{\psi}=\pi_1$, as it corresponds to the parameter $\phi_{\psi}$. The standard representations for $\pi_0$, $\pi_1$, and $\pi_2$ are given as follows, where $P_0$ is the standard Borel subgroup, $P_1$ is the standard parabolic with Levi $\operatorname{GL}_2\times \operatorname{GL}_1^2$, and $P_2$ is the standard parabolic with Levi $\operatorname{GL}_2\times \operatorname{GL}_2$:
\begin{eqnarray*}
\Delta(\pi_0) &=& \operatorname{Ind}_{P_0}^{G}(\chi_\lambda) \\
\Delta(\pi_1) &=& \operatorname{Ind}_{P_1}^{G}(\text{St}_2\otimes\chi_1) \\
\Delta(\pi_0) &=& \operatorname{Ind}_{P_2}^{G}(\text{St}_2\otimes \text{St}_2) .
\end{eqnarray*}
Here, $\text{St}_2$ is Steinberg for $\operatorname{GL}_2(F)$ and $\chi_1$ is the character $\chi$ appearing in Example~\ref{example:Steinberg GL2}.
In fact, $\operatorname{Ind}_{B}^{G}(\chi_\lambda)$ is a length-four representation and $\pi_1$ appears with multiplicity $2$ in $\operatorname{Ind}_{B}^{G}(\chi_\lambda)$:
\[
[\Delta(\pi_0)] = [\pi_0] + 2[\pi_1] + [\pi_2]
\]
Being tempered, $\pi_2$ is its own standard representation, so $[\Delta(\pi_2)] = [\pi_2]$.
In fact,
\[
\pi_2 = \operatorname{Ind}_{P_2}^{G}(\text{St}_2\otimes\text{St}_2),
\]
which can be used to see that
\[
\begin{tikzcd}
0 \arrow{r} & \pi_2 \arrow{r} & \Delta(\pi_1) \arrow{r} & \pi_1 \arrow{r} & 0,
\end{tikzcd}
\]
is a short exact sequence in $\operatorname{Rep}^\text{fl}_\lambda(G)$. We therefore have
\begin{eqnarray*}\,
[\Delta(\pi_1)] &=& [\pi_1] + [\pi_2]
\end{eqnarray*}
in $K\operatorname{Rep}^\text{fl}_{\lambda}(G)$.
Now we know the matrix $m$:
\[
m_\lambda =
\begin{bmatrix}
1 & 0 & 0\\
2 & 1 & 0\\
1 & 1 & 1
\end{bmatrix}.
\]
We see that $m_{\lambda} = \,^tc_{\lambda}$, yet another demonstration of Hypothesis~\ref{hypothesis}.
\end{example}
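All the data in this example is again encoded by small integer matrices, so the stalk computation and the transpose relation can be replayed mechanically. The following sketch (Python with numpy, outside the paper's formalism; variable names are ours) recovers the interesting column of $c_\lambda$ from the Euler characteristics of the stalks of ${\mathcal{IC}\hskip-1pt }({\mathbbm{1}}_{C_1})$ and then checks Hypothesis \ref{hypothesis}:

```python
import numpy as np

# Euler characteristics of the stalks of IC(1_{C_1}) at x0, x1, x2:
# H(x0) = 1[1] + 1[3], H(x1) = 1[3], H(x2) = 0; odd shifts contribute -1.
euler = np.array([(-1)**1 + (-1)**3, (-1)**3, 0])   # [-2, -1, 0]

# The class (-1)[IC(1_{C_1})] in the basis of standard sheaves.
col_C1 = -euler
assert (col_C1 == [2, 1, 0]).all()

# Full geometric and spectral multiplicity matrices from the example.
c_lam = np.array([[1, 2, 1],
                  [0, 1, 1],
                  [0, 0, 1]])
m_lam = np.array([[1, 0, 0],
                  [2, 1, 0],
                  [1, 1, 1]])

# Hypothesis: m_lambda equals the transpose of c_lambda.
assert (m_lam == c_lam.T).all()
```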
\begin{example}\label{example: levi braden gl4}
In this example, we discuss the geometry of the Arthur parameter for the Levi subgroup carved out by the parameter in Example \ref{example:braden gl4}, and compute $c_{\lambda_M}$ and $m_{\lambda_M}$.
Once again, we consider the Arthur parameter from Example \ref{example:braden gl4}:
\[\psi(w,x,y)=\operatorname{Sym}^1(x)\oplus \operatorname{Sym}^1(y)\]
Following the recipe of Section~\ref{introduction}, this picks out the Levi subgroup $M=\operatorname{GL}_2\times \operatorname{GL}_2$ of $\operatorname{GL}_4$, with associated simple Arthur parameters $\psi_1(w,x,y)= \operatorname{Sym}^1(x)\otimes \operatorname{Sym}^0(y)$ and $\psi_2(w,x,y)= \operatorname{Sym}^0(x)\otimes \operatorname{Sym}^1(y)$ which correspond in turn to Langlands parameters $\phi_1, \phi_2$ and infinitesimal parameters $\lambda_1,\lambda_2$, respectively. We use $\psi_M$ and $\lambda_M$ to denote the Arthur parameter for $M$.
Then $V_{\lambda_M}=V_{\lambda_1} \times V_{\lambda_2}$ consists of elements of the type
\[(x_0,x_1) \coloneqq \left(\begin{bmatrix}0&x_0\\0&0\end{bmatrix} , \begin{bmatrix}0&x_1\\0&0\end{bmatrix}\right),\]
the group $H_{\lambda_M}=H_{\lambda_1} \times H_{\lambda_2}$ consists of elements of the type
\[
(t,s) \coloneqq \left(\begin{bmatrix}t_1&0\\0&t_2\end{bmatrix} , \begin{bmatrix}s_1&0\\0&s_2\end{bmatrix}\right),\]
and the action is given by
\[
(t,s)\cdot (x_0,x_1) = (t_1x_0t_2^{-1}, s_1x_1s_2^{-1}).
\]
There are four orbits depending on whether or not $x_i=0$ for $i=0,1$. We denote them as $C_{00}, C_{10}, C_{01}$ and $C_{11}$, where $C_{ij}$ corresponds to the $H_{\lambda_M}$-orbit of $(x_0,x_1)=(i,j)$.
To identify $V_{\lambda_M}$ as a subspace of $V_{\lambda}$,
we consider each element in $V_{\lambda_M}$ as a block diagonal $4 \times 4$ matrix in the obvious way, then apply the same permutation $(2\hspace{1mm}3)$ as before. The variety is still denoted $V_{\lambda_M}$ and its elements are identified with matrices of the type
\[\begin{bmatrix}0_{2 \times 2}&X\\0_{2\times 2}&0_{2\times 2}\end{bmatrix} \text{ where } X=\begin{bmatrix}x_0&0\\0&x_1\end{bmatrix}.\]
We do the same to elements of $H_{\lambda_M}$ to identify it with a torus in $\operatorname{GL}_4$, with elements as matrices
\[\begin{bmatrix}\operatorname{diag}(t_1,s_1)&0_{2 \times 2}\\0_{2\times 2}&\operatorname{diag}(t_2,s_2)\end{bmatrix} \text{ where } t_i,s_i \in \mathbb{C}^{\times}.\]
The conjugation action still takes $x_0 \mapsto t_1x_0t_2^{-1}$; likewise $x_1 \mapsto s_1x_1s_2^{-1}$. Thus, the embedding $V_{\lambda_M} \xhookrightarrow{} V_{\lambda}$ is $H_{\lambda_M}$-equivariant.
We continue to use the notation $C_{ij}$ for the $H_{\lambda_M}$-orbits. The orbit $C_{\psi_M}$ of type $\psi_M$ is $C_{10}$. At this point, we encourage the reader to think about the restriction of the orbits $C_0,C_1,C_2$ from Example \ref{example:braden gl4} to $V_{\lambda_M}$. In particular, observe that $C_1$ restricts to $C_{10} \sqcup C_{01}$.
Let us compute the matrix $c_{\lambda_M}$. From Example \ref{example:Steinberg GL2} we know \[c_{\lambda_i}=
\begin{bmatrix}
1&1\\0&1
\end{bmatrix},
\]
for $i=1,2$. This matrix can be interpreted as a change of basis matrix from the shifted simple perverse sheaves to the standard sheaves. One easily sees that
\[c_{\lambda_M}=c_{\lambda_1}\otimes c_{\lambda_2}
=
\begin{bmatrix}
1&1\\0&1
\end{bmatrix}\otimes \begin{bmatrix}
1&1\\0&1
\end{bmatrix}
=\begin{bmatrix}
1&1&1&1\\
0&1&0&1\\
0&0&1&1\\
0&0&0&1
\end{bmatrix}.
\]
This is a change of basis matrix from the shifted simple perverse sheaves \[
\{(-1)^{\dim C_{ij}}[\mathcal{IC}({\mathbbm{1}}_{C_{ij}})]: 0\leq i,j \leq 1\}
\]
to standard sheaves
\[
\{[{\mathbbm{1}}_{C_{ij}}^{\natural}]: 0\leq i,j \leq 1\}
\]
in $K\operatorname{Per}_{H_{\lambda_M}}(V_{\lambda_M})$.
The change of basis matrix from standard sheaves to shifted simple perverse sheaves is therefore
\[
c^{-1}_{\lambda_M}=\begin{bmatrix}
1&-1&-1&1\\0&1&0&-1\\0&0&1&-1\\0&0&0&1
\end{bmatrix}.
\]
On the spectral side, the orbits $C_{ij}$ correspond to irreducible representations $\pi_{ij}$ in $\operatorname{Rep}^{\op{fl}}_{\lambda_M}(M)$. Following an analogous calculation for the spectral multiplicity matrix using $m$ from Example \ref{example:Steinberg GL2}, we get
\[m_{\lambda_M}^{-1}=\begin{bmatrix}
1 & 0 & 0 & 0 \\
-1 & 1 & 0 & 0 \\
-1 & 0 & 1 & 0 \\
1 & -1 & -1 & 1
\end{bmatrix}.\]
\end{example}
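Because $V_{\lambda_M}$ is a product of two copies of the $\operatorname{GL}_2$ Vogan variety, both multiplicity matrices for $M$ are Kronecker products of the $2\times 2$ matrices from Example \ref{example:Steinberg GL2}, and their inverses are readily computed. As a numerical sanity check (a sketch in Python with numpy, not part of the formal argument; variable names are ours):

```python
import numpy as np

# c from the Steinberg GL_2 example, one copy per factor of M.
c_gl2 = np.array([[1, 1],
                  [0, 1]])

# Product variety => Kronecker product of the factor matrices.
c_M = np.kron(c_gl2, c_gl2)
m_M = c_M.T   # m_{lambda_M} is the transpose of c_{lambda_M}

# Exact inverse of a small integer matrix (round away float noise).
inv = lambda A: np.rint(np.linalg.inv(A)).astype(int)

assert (c_M == [[1, 1, 1, 1],
                [0, 1, 0, 1],
                [0, 0, 1, 1],
                [0, 0, 0, 1]]).all()
assert (inv(c_M) == [[1, -1, -1,  1],
                     [0,  1,  0, -1],
                     [0,  0,  1, -1],
                     [0,  0,  0,  1]]).all()

# m_{lambda_M}^{-1}, used in the Lift computation of the next example.
assert (inv(m_M) == [[ 1,  0,  0, 0],
                     [-1,  1,  0, 0],
                     [-1,  0,  1, 0],
                     [ 1, -1, -1, 1]]).all()
```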
\begin{example}\label{Lift and pullback for gl4 braden and levi}
In this example we compute the endoscopic lifts of some irreducible representations, using only the definition of $\operatorname{Lift}_M^G$ from Section~\ref{ssec:Lift}, with $G=\operatorname{GL}_4$, and $\psi$ and $M$ as in Examples \ref{example:braden gl4} and \ref{example: levi braden gl4}.
First, we calculate restriction of standard sheaves via
\[\varepsilon^*: K_{\mathbb{C}}\operatorname{Per}_{H_{\lambda}}(V_{\lambda}) \to K_{\mathbb{C}}\operatorname{Per}_{H_{\lambda_M}}(V_{\lambda_M}). \]
We note that
\begin{eqnarray*}\,
[{\mathbbm{1}}_{C_0}^\natural \lvert_{V_{\lambda_M}}] &=& [{\mathbbm{1}}_{C_{00}}^\natural]\\ \,
[{\mathbbm{1}}_{C_1}^\natural\lvert_{V_{\lambda_M}}] &=& [{\mathbbm{1}}^\natural_{C_{10}}]+[{\mathbbm{1}}^\natural_{C_{01}}]\\ \,
[{\mathbbm{1}}_{C_2}^\natural\lvert_{V_{\lambda_M}}] &=& [{\mathbbm{1}}^\natural_{C_{11}}],
\end{eqnarray*}
because, respectively, $C_0\cap V_{\lambda_M} =C_{00}$, $C_1 \cap V_{\lambda_M} = C_{10} \sqcup C_{01}$, $C_2\cap V_{\lambda_M} = C_{11}$.
Thus, the matrix for $\varepsilon^*$ with respect to the basis of standard sheaves is
\[
[\varepsilon^*]_{\op{sts}}
=
\begin{bmatrix}
1 & 0 & 0\\
0 & 1 & 0\\
0 & 1 & 0\\
0 & 0 & 1
\end{bmatrix}.
\]
From Example \ref{example:braden gl4}, recall that $c_{\lambda}$ is a change of basis matrix from shifted simple perverse sheaves to standard sheaves in $K\operatorname{Per}_{H_{\lambda}}(V_{\lambda})$.
From Example~\ref{example: levi braden gl4} recall that $c^{-1}_{\lambda_M}$ is the change of basis matrix from standard sheaves to shifted simple perverse sheaves in $K\operatorname{Per}_{H_{\lambda_M}}(V_{\lambda_M})$.
Therefore, restriction of shifted simple objects from $K\operatorname{Per}_{H_{\lambda}}(V_{\lambda})$ to $K\operatorname{Per}_{H_{\lambda_M}}(V_{\lambda_M})$ via $\varepsilon^*$ is given by
\begin{align*}
[\varepsilon^*]_{\op{ssim}}
&=
c^{-1}_{\lambda_M} \ [\varepsilon^*]_{\op{sts}}\ c_{\lambda} \\
&=
\begin{bmatrix}
1 & -1 & -1 & 1\\
0 & 1 & 0 & -1\\
0 & 0 & 1 & -1\\
0 & 0 & 0 & 1
\end{bmatrix}
\
\begin{bmatrix}
1 & 0 & 0\\
0 & 1 & 0\\
0 & 1 & 0\\
0 & 0 & 1
\end{bmatrix}
\
\begin{bmatrix}
1 & 2 & 1\\
0 & 1 & 1\\
0 & 0 & 1
\end{bmatrix}
=
\begin{bmatrix}
1 & 0 & 0\\
0 & 1 & 0\\
0 & 1 & 0\\
0 & 0 & 1
\end{bmatrix}.
\end{align*}
This shows:
\begin{align*}
\varepsilon^*[{\mathcal{IC}\hskip-1pt }({\mathbbm{1}}_{C_0})] &= [{\mathcal{IC}\hskip-1pt }({\mathbbm{1}}_{C_{00}})], \\
\varepsilon^*[{\mathcal{IC}\hskip-1pt }({\mathbbm{1}}_{C_1})[-3]] &= [{\mathcal{IC}\hskip-1pt }({\mathbbm{1}}_{C_{10}})[-1]] + [{\mathcal{IC}\hskip-1pt }({\mathbbm{1}}_{C_{01}})[-1]], \\
\varepsilon^*[{\mathcal{IC}\hskip-1pt }({\mathbbm{1}}_{C_2})[-4]] &= [{\mathcal{IC}\hskip-1pt }({\mathbbm{1}}_{C_{11}})[-2]],
\end{align*}
so
\begin{align*}
\varepsilon^*[{\mathcal{IC}\hskip-1pt }({\mathbbm{1}}_{C_0})] &= [{\mathcal{IC}\hskip-1pt }({\mathbbm{1}}_{C_{00}})], \\
\varepsilon^*[{\mathcal{IC}\hskip-1pt }({\mathbbm{1}}_{C_1})] &= [{\mathcal{IC}\hskip-1pt }({\mathbbm{1}}_{C_{10}})] + [{\mathcal{IC}\hskip-1pt }({\mathbbm{1}}_{C_{01}})], \\
\varepsilon^*[{\mathcal{IC}\hskip-1pt }({\mathbbm{1}}_{C_2})] &= [{\mathcal{IC}\hskip-1pt }({\mathbbm{1}}_{C_{11}})].
\end{align*}
Now let us use this calculation, together with Hypothesis~\ref{hypothesis}, to calculate $\operatorname{Lift}_M^G[\sigma]$ for every $\sigma \in \Pi_{\lambda_M}(M)$:
\begin{align*}
[\operatorname{Lift}_M^G]_{\op{sim}}
&= \,^t[\varepsilon^*]_{\op{ssim}} \\
&= \,^t\left(
c^{-1}_{\lambda_M} \ [\varepsilon^*]_{\op{sts}}\ c_{\lambda}\right) \\
&= \,^t c_{\lambda}\ \,^t [\varepsilon^*]_{\op{sts}}\ \,^tc_{\lambda_M}^{-1} \\
&= m_{\lambda}\ \,^t [\varepsilon^*]_{\op{sts}}\ m_{\lambda_M}^{-1}\\
&= \begin{bmatrix}
1 & 0 & 0\\
2 & 1 & 0\\
1 & 1 & 1
\end{bmatrix}
\begin{bmatrix}
1 & 0 & 0 & 0 \\
0 & 1 & 1 & 0 \\
0 & 0 & 0 & 1
\end{bmatrix}
\begin{bmatrix}
1 & 0 & 0 & 0\\
-1 & 1 & 0 & 0\\
-1 & 0 & 1 & 0\\
1 & -1 & -1 & 1
\end{bmatrix}
=
\begin{bmatrix}
1 & 0 & 0 & 0 \\
0 & 1 & 1 & 0 \\
0 & 0 & 0 & 1
\end{bmatrix}.
\end{align*}
This shows:
\begin{align*}
\operatorname{Lift}_M^G[\pi_{00}] &= [\pi_0] \\
\operatorname{Lift}_M^G[\pi_{01}] &= [\pi_1] \\
\operatorname{Lift}_M^G[\pi_{10}] &= [\pi_1] \\
\operatorname{Lift}_M^G[\pi_{11}] &= [\pi_2] .
\end{align*}
\end{example}
In particular, $\operatorname{Lift}_M^G[\pi_{10}]=[\pi_1]$; in other words, $\operatorname{Lift}_M^G[\pi_{\psi_M}] = [\pi_{\psi}]$. This holds in general for $\operatorname{GL}_n$, as we prove in Proposition \ref{prop: Lift on irreducibles of A type}.
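The entire computation of the last example is a chain of integer matrix multiplications, so it can be replayed numerically. The sketch below (Python with numpy, outside the paper's formalism; variable names are ours) reproduces both $[\varepsilon^*]_{\op{ssim}}$ and $[\operatorname{Lift}_M^G]_{\op{sim}}$ and confirms that one is the transpose of the other:

```python
import numpy as np

inv = lambda A: np.rint(np.linalg.inv(A)).astype(int)

c_lam = np.array([[1, 2, 1],
                  [0, 1, 1],
                  [0, 0, 1]])                       # geometric side for GL_4
m_lam = c_lam.T                                     # spectral side
c_M = np.kron([[1, 1], [0, 1]], [[1, 1], [0, 1]])   # Levi M = GL_2 x GL_2
m_M = c_M.T

# Restriction of standard sheaves:
# C_0 -> C_00, C_1 -> C_10 + C_01, C_2 -> C_11.
E_sts = np.array([[1, 0, 0],
                  [0, 1, 0],
                  [0, 1, 0],
                  [0, 0, 1]])

# Restriction of shifted simple perverse sheaves.
eps_ssim = inv(c_M) @ E_sts @ c_lam

# Lift on irreducibles is the transpose of the restriction above.
lift = m_lam @ E_sts.T @ inv(m_M)
assert (lift == eps_ssim.T).all()
assert (lift == [[1, 0, 0, 0],
                 [0, 1, 1, 0],
                 [0, 0, 0, 1]]).all()
```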
\begin{bibdiv}
\begin{biblist}
\bib{Pramod}{book}{
author={Achar, Pramod N.},
title={Perverse sheaves and applications to representation theory},
series={Mathematical Surveys and Monographs},
volume={258},
publisher={American Mathematical Society, Providence, RI},
date={2021},
pages={xii+562},
isbn={978-1-4704-5597-2},
}
\bib{ABV}{book}{
author={Adams, Jeffrey},
author={Barbasch, Dan},
author={Vogan, David A., Jr.},
title={The Langlands classification and irreducible characters for real reductive groups},
series={Progress in Mathematics},
volume={104},
publisher={Birkh\"{a}user Boston, Inc., Boston, MA},
date={1992},
pages={xii+318},
}
\bib{Arthur:book}{book}{
author={Arthur, James},
title={The endoscopic classification of representations},
series={American Mathematical Society Colloquium Publications},
volume={61},
note={Orthogonal and symplectic groups},
publisher={American Mathematical Society, Providence, RI},
date={2013},
pages={xviii+590},
isbn={978-0-8218-4990-3},
doi={10.1090/coll/061},
}
\bib{Arthur:unipotent-motivation}{article}{
author={Arthur, James},
title={Unipotent automorphic representations: global motivation},
conference={
title={Automorphic forms, Shimura varieties, and $L$-functions, Vol.
I},
address={Ann Arbor, MI},
date={1988},
},
book={
series={Perspect. Math.},
volume={10},
publisher={Academic Press, Boston, MA},
},
date={1990},
pages={1--75},
review={\MR{1044818}},
}
\bib{arthur1989unipotent}{article}{
title={Unipotent automorphic representations: conjectures},
author={Arthur, James},
year={1989},
book={
series={Ast\'{e}risque},
volume={},
publisher={},
},
}
\bib{Atobe}{article}{
author={Atobe, Hiraku},
title={Construction of local $A$-packets},
journal={J. Reine Angew. Math.},
volume={790},
date={2022},
pages={1--51},
issn={0075-4102},
review={\MR{4472864}},
doi={10.1515/crelle-2022-0030},
}
\iffalse
\bib{A}{article}{
author={Aubert, Anne-Marie},
title={Dualit\'{e} dans le groupe de Grothendieck de la cat\'{e}gorie des
repr\'{e}sentations lisses de longueur finie d'un groupe r\'{e}ductif $p$-adique},
journal={Trans. Amer. Math. Soc.},
volume={347},
date={1995},
number={6},
pages={2179--2189},
issn={0002-9947},
doi={10.2307/2154931},
}
\fi
\bib{BBD}{article}{
author={Be\u{\i}linson, A. A.},
author={Bernstein, J.},
author={Deligne, P.},
title={Faisceaux pervers},
conference={
title={Analysis and topology on singular spaces, I},
address={Luminy},
date={1981},
},
book={
series={Ast\'{e}risque},
volume={100},
publisher={Soc. Math. France, Paris},
},
date={1982},
pages={5--171},
}
\bib{Borel:Corvallis}{book}{
author={Borel, A},
title={Automorphic $L$-functions},
series={Automorphic forms, representations and $L$-functions, Part 2 (Proc. Sympos. Pure Math., XXXIII, Corvallis)},
publisher={American Mathematical Society, Providence, RI},
date={1979},
pages={27--61}
}
\bib{CG}{book}{
author={Chriss, Neil},
author={Ginzburg, Victor},
title={Representation theory and complex geometry},
series={Modern Birkh\"{a}user Classics},
note={Reprint of the 1997 edition},
publisher={Birkh\"{a}user Boston, Ltd., Boston, MA},
date={2010},
pages={x+495},
isbn={978-0-8176-4937-1},
doi={10.1007/978-0-8176-4938-8},
}
\bib{Clozel}{article}{
author={Clozel, Laurent},
title={Sur une conjecture de Howe. I},
language={English, with French summary},
journal={Compositio Math.},
volume={56},
date={1985},
number={1},
pages={87--110},
issn={0010-437X},
}
\bib{CFMMX}{book}{
author={Cunningham, Clifton},
author={Fiori, Andrew},
author={Moussaoui, Ahmed},
author={Mracek, James},
author={Xu, Bin},
title={A-packets for p-adic groups by way of microlocal vanishing cycles of perverse sheaves, with examples},
series={Memoirs of the American Mathematical Society},
publisher={AMS},
volume={276},
date={2022},
number={1353},
}
\bib{CFZ:cubics}{article}{
author={Cunningham, Clifton},
author={Fiori, Andrew},
author={Zhang, Qing},
title={A-packets for $G_2$ and perverse sheaves on cubics},
journal={Advances in mathematics},
volume={395},
date={2022},
}
\bib{CFZ:unipotent}{unpublished}{
author={Cunningham, Clifton},
author={Fiori, Andrew},
author={Zhang, Qing},
title={Toward the endoscopic classification of unipotent representations of $p$-adic $G_2$},
note={http://arxiv.org/abs/2101.04578},
date={2021},
}
\bib{CFK}{article}{
title={Appearance of the Kashiwara-Saito singularity in the representation theory of $p$-adic GL(16)},
author={Cunningham, Clifton},
author={Fiori, Andrew},
author={Kitt, Nicole},
journal={Pacific Journal of Mathematics},
date={2023},
note={https://arxiv.org/abs/2103.04538},
}
\bib{CR:irred}{unpublished}{
author={Cunningham, Clifton},
author={Ray, Mishty},
title={Proof of Vogan's conjecture on A-packets: irreducible parameters for $p$-adic general linear groups},
date={2022},
note={https://arxiv.org/abs/2206.01027}
}
\bib{vanDijk}{article}{
author={van Dijk, G.},
title={Computation of certain induced characters of ${\germ p}$-adic
groups},
journal={Math. Ann.},
volume={199},
date={1972},
pages={229--240},
issn={0025-5831},
doi={10.1007/BF01429876},
}
\bib{GM:book}{book}{
author={Goresky, Mark},
author={MacPherson, Robert},
title={Stratified Morse theory},
series={Ergebnisse der Mathematik und ihrer Grenzgebiete (3) [Results in
Mathematics and Related Areas (3)]},
volume={14},
publisher={Springer-Verlag, Berlin},
date={1988},
pages={xiv+272},
isbn={3-540-17300-5},
doi={10.1007/978-3-642-71714-7},
}
\bib{GM:Lefschetz}{article}{
author={Goresky, Mark},
author={MacPherson, Robert},
title={Local contribution to the Lefschetz fixed point formula},
journal={Invent. Math.},
volume={111},
date={1993},
number={1},
pages={1--33},
issn={0020-9910},
doi={10.1007/BF01231277},
}
\bib{Illusie}{article}{
author={Illusie, Luc},
title={Around the Thom-Sebastiani theorem, with an appendix by Weizhe
Zheng},
journal={Manuscripta Math.},
volume={152},
date={2017},
number={1-2},
pages={61--125},
}
\bib{KZ}{article}{
author={Knight, Harold},
author={Zelevinsky, Andrei},
title={Representations of quivers of type $A$ and the multisegment
duality},
journal={Adv. Math.},
volume={117},
date={1996},
number={2},
pages={273--293},
issn={0001-8708},
}
\bib{knight1996representations}{article}{
title={Representations of quivers of type A and the multisegment duality},
author={Knight, Harold},
author={Zelevinsky, Andrei},
journal={Advances in mathematics},
volume={117},
number={2},
pages={273--293},
year={1996},
publisher={New York: Academic Press, 1965-}
}
\bib{Konno}{article}{
author={Konno, Takuya},
title={A note on the Langlands classification and irreducibility of
induced representations of $p$-adic groups},
journal={Kyushu J. Math.},
volume={57},
date={2003},
number={2},
pages={383--409},
issn={1340-6116},
doi={10.2206/kyushujm.57.383},
}
\bib{kudla1994local}{article}{
title={The local Langlands correspondence: the non-archimedean case},
author={Kudla, Stephen S},
journal={Motives (Seattle, WA, 1991)},
volume={55},
number={Part 2},
pages={365--391},
year={1994},
publisher={American Mathematical Society Providence, RI}
}
\iffalse
\bib{Lusztig:Quivers}{article}{
author={Lusztig, George},
title={Quivers, perverse sheaves, and quantized enveloping algebras},
journal={J. Amer. Math. Soc.},
volume={4},
date={1991},
number={2},
pages={365--421},
issn={0894-0347},
}
\bib{Lusztig:classification-unipotent}{article}{
author={Lusztig, George},
title={Classification of unipotent representations of simple $p$-adic
groups},
journal={Internat. Math. Res. Notices},
date={1995},
number={11},
pages={517--589},
issn={1073-7928},
}
\bib{MS}{article}{
author={Mars, J. G. M.},
author={Springer, T. A.},
title={Character sheaves},
note={Orbites unipotentes et repr\'{e}sentations, III},
journal={Ast\'{e}risque},
number={173-174},
date={1989},
pages={9, 111--198},
issn={0303-1179},
}
\fi
\bib{Lusztig:Cuspidal2}{article}{
author={Lusztig, George},
title={Cuspidal local systems and graded Hecke algebras. II},
note={With errata for Part I [Inst. Hautes \'{E}tudes Sci. Publ. Math. No. 67
(1988), 145--202; MR0972345 (90e:22029)]},
conference={
title={Representations of groups},
address={Banff, AB},
date={1994},
},
book={
series={CMS Conf. Proc.},
volume={16},
publisher={Amer. Math. Soc., Providence, RI},
},
date={1995},
pages={217--275},
}
\bib{Massey}{article}{
author={Massey, David B.},
title={The Sebastiani-Thom isomorphism in the derived category},
journal={Compositio Math.},
volume={125},
date={2001},
number={3},
pages={353--362},
}
\bib{MW:involution}{article}{
author={M\oe glin, Colette},
author={Waldspurger, Jean-Loup},
title={Sur l'involution de Zelevinski},
journal={J. Reine Angew. Math.},
volume={372},
date={1986},
pages={136--177},
issn={0075-4102},
}
\bib{Mracek}{book}{
author={Mracek, James},
title={Applications of Algebraic Microlocal Analysis in Symplectic
Geometry and Representation Theory},
note={Thesis (Ph.D.)--University of Toronto (Canada)},
publisher={ProQuest LLC, Ann Arbor, MI},
date={2017},
pages={101},
isbn={978-0355-53067-4},
}
\bib{Pyasetskii}{article}{
Author = {Pjasecki\u\i , V. S.},
journal = {Akademija Nauk SSSR. Funkcional\cprime nyi Analiz i ego Prilo\v zenija},
Issn = {0374-1990},
Number = {4},
Pages = {85--86},
Title = {Linear {L}ie groups that act with a finite number of orbits},
Volume = {9},
Year = {1975}
}
\bib{Riddlesden}{article}{
author={Riddlesden, Connor},
title={Combinatorial approach to ABV-packets for $GL_n$},
journal={MSc thesis, University of Lethbridge},
year={2022},
note={\url{https://opus.uleth.ca/handle/10133/6377}},
}
\bib{Solleveld:pKLH}{unpublished}{
author={Solleveld, Maarten},
title={Graded Hecke algebras, constructible sheaves and the p-adic Kazhdan--Lusztig conjecture},
note={\href{https://arxiv.org/pdf/2106.03196.pdf}{https://arxiv.org/pdf/2106.03196.pdf}},
date={2022}
}
\bib{Vogan:Langlands}{article}{
author={Vogan, David A., Jr.},
title={The local Langlands conjecture},
conference={
title={Representation theory of groups and algebras},
},
book={
series={Contemp. Math.},
volume={145},
publisher={Amer. Math. Soc., Providence, RI},
},
date={1993},
pages={305--379},
}
\bib{Z2}{article}{
author={Zelevinsky, Andrei V.},
title={Induced representations of reductive ${\germ p}$-adic groups. II.
On irreducible representations of ${\rm GL}(n)$},
journal={Ann. Sci. \'{E}cole Norm. Sup. (4)},
volume={13},
date={1980},
number={2},
pages={165--210},
}
\bib{zelevinskii1981p}{article}{
title={p-adic analog of the kazhdan-Lusztig hypothesis},
author={Zelevinskii, Andrei Vladlenovich},
journal={Functional Analysis and Its Applications},
volume={15},
number={2},
pages={83--92},
year={1981},
publisher={Springer}
}
\end{biblist}
\end{bibdiv}
\end{document}
\section{Introduction}
\label{sec:intro}
Cayley automatic groups, introduced by Kharlampovich, Khoussainov
and Miasnikov in \cite{KKMjournal}, generalize the class of automatic groups while retaining their key {\em algorithmic}
properties.
Namely, the
word problem in a Cayley automatic group is decidable in quadratic time,
regular normal forms for group elements can be computed in quadratic time, and
the first order theory for a (directed, labeled) Cayley graph of a Cayley automatic group is decidable.
The family of Cayley automatic groups is much broader than that of automatic groups, as it includes, for example, all finitely generated nilpotent
groups of nilpotency class
two \cite{KKMjournal}, the Baumslag-Solitar groups \cite{BerK-BS,KKMjournal},
higher rank lamplighter groups \cite{Taback18}, and restricted wreath products of the form $G\wr H$ where $G$ is Cayley automatic and $H$ is (virtually) infinite cyclic \cite{berdkhouss15,BETwreath}.
As defined, the existence of a Cayley automatic structure for a group $G$ appears to impose no restrictions on its geometry. This differs from the existence of an automatic structure; if a group $G$ admits an automatic structure then the Cayley graph with respect to any finite generating set $S$ enjoys the so-called fellow traveler property.
This geometric condition requires that the normal form representatives for a pair of group elements at distance 1 in the Cayley graph $\Gamma(G,S)$ remain a uniformly bounded distance apart in this graph.
The goal of this paper is to explore the geometry of the Cayley graph of a Cayley automatic group; in particular, to understand an analogue of the fellow traveler property for these groups.
A Cayley automatic group differs from an automatic group in that the normal form for group elements is defined over a finite symbol alphabet rather than a set $S$ of generators. However, without loss of generality we can take these symbols to be additional generators, renaming the larger generating set $S$, and for $g \in G$ obtain a normal form which describes a path in the Cayley graph $\Gamma(G,S)$ but (most likely) not a path to the vertex labeled $g$.
In lieu of a fellow traveler property, we investigate the distance in $\Gamma(G,S)$ between the vertex labeled $g$ and the endpoint of this path.
To quantify how far a Cayley automatic structure is from being automatic, we follow the first author and Trakuldit in \cite{measuringcloseness} and define the Cayley distance function $f_{\psi}$ for a given Cayley automatic structure $\psi$, where $f_{\psi}(n)$ is the maximum distance between a normal form word representing $g \in G$ and the vertex labeled $g$, over all
normal forms of word length at most $n$.
In \cite{measuringcloseness} it is shown that $G$ is automatic if and only if this function is equivalent to a constant function, in a notion of equivalence defined below.
Thus for Cayley automatic groups which are not automatic this function is always unbounded and non-decreasing.
This motivates our investigation of when the \distfun\ might be bounded below by a non-constant function, quantitatively separating $G$ from the class of automatic groups.
However, the possibility exists that a group may admit a sequence of Cayley automatic structures for which the corresponding sequence of \distfun s limits to a constant function but never contains a constant function.
In this paper we prove that such limiting behaviour is not possible in any group which is not finitely presented, or in any finitely presented group that has a super-quadratic Dehn function, as given in Definition~\ref{defn:strong-super}. In each case, we construct a concrete unbounded function depending only on the group, so that the \distfun\ for any Cayley automatic structure on the group is bounded below by this function, up to equivalence.
We say that a Cayley automatic
group $G$ is {\em $f$-separated} if the \distfun\ with respect to any Cayley automatic structure on $G$ is bounded below by a function in the equivalence class of $f$.
Let $\mathfrak{i}$ denote the function $\mathfrak{i}(n)=n$ on some domain $[N,\infty)$.
Super-quadratic and strongly-super-polynomial functions, referred to in Theorem~\ref{thmA:fp} below, are introduced in Definition~\ref{defn:strong-super}.
We prove the following.
\begin{restatable}[Finitely presented groups]{theoremx}{ThmA}
\label{thmA:fp}
If $G$ is a finitely presented Cayley automatic group with super-quadratic Dehn function,
then there exists an unbounded function $\phi$ depending only on $G$ so that $G$ is $\phi$-separated.
Furthermore, if $G$ has strongly-super-polynomial Dehn function, then $G$ is $\mathfrak{i}$-separated.
\end{restatable}
The analogous theorem for non-finitely presented groups is as follows.
A non-finitely presented group is {\em dense} if its irreducible relators have lengths which are ``dense" in the natural numbers; see \S\!~\ref{sec:dense} for a precise definition. Wreath products are the prototypical examples of dense groups.
\begin{restatable}[Non-finitely presented groups]{theoremx}{ThmB}
\label{thmB:nfp}
If $G$ is a Cayley automatic group which is not finitely presented, then there is a non-decreasing step function $\phi$ depending only on $G$
that is linear for infinitely many values,
so that $G$ is $\phi$-separated.
Furthermore, if $G$ is dense then $G$ is $\mathfrak{i}$-separated.
\end{restatable}
We conjecture that for every Cayley automatic group that is not automatic, the distance function with respect to every Cayley automatic structure on the group
is bounded below by a linear function, which is equivalent to being $\mathfrak{i}$-separated.
\begin{conjecturex}\label{conj1}
Let $G$ be a Cayley automatic group. Then $G$ is either automatic or
$\mathfrak{i}$-separated.
\end{conjecturex}
Our results provide support for this conjecture by exhibiting lower bounds for \distfun s for all non-automatic Cayley automatic groups which either have super-quadratic Dehn function or are not finitely presented. However, these bounds are not always equivalent to $\mathfrak{i}$.
While we believe the conjecture to be true, two groups for which this linear lower bound is not obvious to us are the following.
\begin{itemize}[itemsep=5pt]
\item The higher Heisenberg groups $H_{2k+1}$ for $k\geq 2$.
These are nilpotent of step 2 so they are Cayley automatic by \cite[Theorem 12.4]{KKMjournal}. Since the only nilpotent automatic groups are virtually abelian \cite{Epsteinbook}, they are not automatic. It is proved in \cite{MR1616147,MR1253544,MR1698761} that their Dehn function is quadratic.
\item The higher rank lamplighter groups, or {\em Diestel-Leader groups}, proven to be Cayley automatic by B\'{e}rub\'{e}, Palnitkar and the third author in \cite{Taback18}. One can show that the Cayley automatic structure constructed in \cite{Taback18} has Cayley distance function equivalent to the identity function. These groups are not of type FP$_\infty$ \cite{BartholdiNW}, hence not automatic.
See \cite{Taback18} for a discussion explaining why their Dehn functions are quadratic.
\end{itemize}
The paper is organised as follows. In Section~\ref{sec:Auto-CGA} we review automatic and Cayley automatic groups, define the \distfun\ for a Cayley automatic structure, and finish with a short discussion of Dehn functions. In Section~\ref{sec:FP-dehn} we prove Proposition~\ref{prop:dehnbound}, which relates the \distfun\ to the Dehn function for a finitely presented Cayley automatic group. In Section~\ref{sec:FP} we define super-quadratic, super-polynomial and strongly-super-polynomial functions and prove Theorem~\ref{thmA:fp}.
We then turn to non-finitely presented Cayley automatic groups. We introduce the notion of a densely presented group in Section~\ref{sec:dense}, then prove Theorem~\ref{thmB:nfp} in Section~\ref{sec:thmB}. We include additional information about strongly-superpolynomial functions in Appendix~\ref{appendix:super-strong}.
\section{Automatic and Cayley automatic groups}
\label{sec:Auto-CGA}
We assume that the reader is familiar with the notions of regular languages, finite automata and multi-tape synchronous automata. For more details, we refer the reader to \cite{Epsteinbook}. We say a language $L\subseteq (X^*)^n$ is {\em regular} if it is accepted by a synchronous $n$-tape automaton, where $n\in\N$ and $X$ is a finite set, or {\em alphabet}.
For any group $G$ with finite symmetric generating set $S=S^{-1}$, let $\pi\colon S^*\to G$ denote the canonical projection map. For $w\in S^*$ let $|w|_S$ denote the length of $w$ as a word in the free monoid $S^*$.
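To make the synchronous model concrete, the padded convolution that an $n$-tape synchronous automaton reads can be sketched as follows (an illustrative sketch of the standard construction, not code from any cited source): the tuple of words is padded on the right with a symbol $\$$ so that all tapes advance in lockstep, one column at a time.

```python
PAD = "$"  # padding symbol appended to the shorter words

def convolution(*words):
    """Pad the words to a common length with PAD and return the list of
    columns that a synchronous multi-tape automaton reads one at a time."""
    n = max((len(w) for w in words), default=0)
    padded = [w + PAD * (n - len(w)) for w in words]
    return [tuple(w[i] for w in padded) for i in range(n)]
```

For example, `convolution("ab", "abba")` returns the four columns `('a', 'a')`, `('b', 'b')`, `('$', 'b')`, `('$', 'a')`.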
\subsection{Automatic and Cayley automatic groups}
We define automatic and Cayley automatic groups, and provide some standard lemmas on the invariance of the Cayley automatic structure under change of generating set.
\begin{definition}
\label{def:aut}
An {\em automatic structure} for a group $G$ is a pair $(S,L)$ where
\begin{enumerate}
\item $S$ is a finite symmetric generating set for $G$;
\item $L\subseteq S^*$ is a regular language;
\item $\pi|_L \colon L \rightarrow G$ is
a bijection;
\item for each $a \in S$ the binary relation
$$R_a = \{(u,v) \in L \times L \mid \pi(u)a=_G\pi(v)\} \subseteq S^* \times S^*$$
is regular, that is, recognized by a two-tape synchronous automaton.
\end{enumerate}
A group is called {\em automatic} if it has an automatic structure with respect to some finite generating set.
\end{definition} It is a standard result, see, for example \cite[Theorem 2.4.1]{Epsteinbook}, that if $G$ is automatic then $G$ has an automatic structure with respect to any finite generating set.
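As a minimal worked example of Definition~\ref{def:aut} (ours, not taken from \cite{Epsteinbook}), take $G=\Z$ with $S=\{a,A\}$, where $a=+1$ and $A=a^{-1}$, and $L=\{a^n\mid n\geq 0\}\cup\{A^n\mid n\geq 1\}$. The relation $R_a$ is then recognized by the small synchronous two-tape automaton sketched below, which reads the padded convolution of a pair of normal forms.

```python
PAD = "$"  # padding symbol for the shorter word

# Transition table of a synchronous 2-tape automaton for
# R_a = {(u, v) in L x L : pi(u) + 1 = pi(v)}, with accept state "acc".
DELTA = {
    ("q0", ("a", "a")): "qa",   # (a^n, a^{n+1}): matching a's ...
    ("qa", ("a", "a")): "qa",
    ("q0", (PAD, "a")): "acc",  # ... then one extra a on the second tape
    ("qa", (PAD, "a")): "acc",
    ("q0", ("A", "A")): "qA",   # (A^{n+1}, A^n): matching A's ...
    ("qA", ("A", "A")): "qA",
    ("q0", ("A", PAD)): "acc",  # ... then one extra A on the first tape
    ("qA", ("A", PAD)): "acc",
}

def accepts_Ra(u, v):
    """Run the automaton on the padded convolution of (u, v)."""
    n = max(len(u), len(v))
    u, v = u.ljust(n, PAD), v.ljust(n, PAD)
    state = "q0"
    for col in zip(u, v):
        state = DELTA.get((state, col))
        if state is None:
            return False
    return state == "acc"
```

For instance `accepts_Ra("aa", "aaa")` and `accepts_Ra("A", "")` hold, while `accepts_Ra("a", "A")` fails.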
Cayley automatic groups were introduced in \cite{KKMjournal} with the motivation of
allowing the language $L$ of normal forms representing group elements to be defined over a
symbol alphabet $\Lambda$ rather than a generating set $S$ for $G$.
\begin{definition}
\label{def:Caut}
A {\em Cayley automatic structure} for a group $G$ is a 4-tuple $(S,\Lambda, L,\psi)$ where
\begin{enumerate}
\item $S$ is a finite symmetric generating set for $G$;
\item $\Lambda$ is an alphabet and
$L \subseteq \Lambda^*$ is a regular
language;
\item
$\psi\colon L \rightarrow G$ is a bijection;
\item
for each $a \in S$
the binary relation
$$R_a = \{(u,v) \in L \times L \,
|\,\psi(u)a=_G\psi(v)\} \subseteq \Lambda^* \times \Lambda^*$$
is regular, that is, recognized by
a two-tape synchronous automaton. \end{enumerate}
A group is called {\em Cayley automatic} if it has a Cayley automatic structure $(S,\Lambda, L,\psi)$ with respect to some finite generating set $S$.\end{definition}
As for automatic groups, if $G$ has a Cayley automatic structure $(S,\Lambda, L,\psi)$
and $Y$ is another finite generating set for $G$, then there exists a Cayley automatic structure $(Y,\Lambda_Y, L_Y,\psi_Y)$ for $G$. See \cite[Theorem 6.9]{KKMjournal} for a proof of this fact; we sharpen this in Proposition~\ref{prop:modifyingCAstructure} below.
Note that a Cayley automatic structure $(S,S,L,\pi|_L)$ for $G$, that is, one in which the symbol alphabet is in fact a generating set, and the natural projection gives a bijection from $L$ to $G$, is simply an automatic structure for $G$.
{\em A priori} the symbol alphabet $\Lambda$ has no relation to a generating set for $G$.
However it is straightforward to show that we may assume without loss of generality that $\Lambda=S$ in any Cayley automatic structure, and so we can always associate a Cayley graph with respect to a generating set which includes symbol letters from the Cayley automatic structure. This is proven in \cite{measuringcloseness} and in Proposition~\ref{prop:modifyingCAstructure} below.
With this assumption, a word $w\in L$ labels a path from $1_G$ to $\pi(w)$ in the Cayley graph $\Gamma(G,S)$.
It is crucial to note that in general $\pi(w)\neq \psi(w)$.
\begin{definition}
\label{def:h}
Let $(G,S)$ be a group with Cayley automatic structure $(S,S, L,\psi)$. The {\em \distfun} corresponding to $\psi$ is defined to be
$$ h_{S,\psi}(n) = \max \{ d_S (\pi (w),\psi(w)) \,| \, w \in L^{\leqslant n}\}$$
where $d_S$ is the word metric on $G$ with respect to $S$ and
$$L^{\leqslant n} = \{ w \in L \, |\, |w|\leqslant n\}.$$
\end{definition}
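To see Definition~\ref{def:h} in action, here is a toy Cayley automatic structure on $\Z$ (our illustration, not an example from the text): normal forms are least-significant-digit-first binary strings with last digit $1$ and an optional leading sign, and each symbol is treated as a generator, with the symbol $1$ naming the group element $+1$ and the symbols $0$ and $-$ naming the identity (the group element attached to a symbol letter may be chosen arbitrarily, as in the proof of Proposition~\ref{prop:modifyingCAstructure}). A brute-force computation of the \distfun\ then shows exponential growth.

```python
import itertools

def normal_forms(n):
    """All normal forms of length at most n: the empty word names 0,
    a binary word ending in 1 names a positive integer (LSB first),
    and a leading '-' names the corresponding negative integer."""
    yield ""
    for k in range(1, n + 1):
        for bits in itertools.product("01", repeat=k - 1):
            yield "".join(bits) + "1"
    for k in range(2, n + 1):
        for bits in itertools.product("01", repeat=k - 2):
            yield "-" + "".join(bits) + "1"

def psi(w):
    """The group element named by the normal form w (the bijection psi)."""
    if w == "":
        return 0
    sign = -1 if w[0] == "-" else 1
    digits = w.lstrip("-")
    return sign * sum(2 ** i for i, c in enumerate(digits) if c == "1")

def pi(w):
    """Endpoint of w read as a path: '1' names +1, '0' and '-' the identity."""
    return w.count("1")

def h(n):
    """The Cayley distance function of this structure, by brute force."""
    return max(abs(pi(w) - psi(w)) for w in normal_forms(n))
```

Here $h(2)=2$, $h(3)=5$ and $h(4)=11$; the function grows exponentially, so this toy structure is very far from automatic.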
Let $\F$ be the following set of non-decreasing functions, where $N$ ranges over $\N$.
\[ \F=\{f \colon [N,+\infty)\to \Rplus \mid [N,+\infty)\subseteq \N\wedge \forall n (n\in \mathrm{dom}\, f \Rightarrow f(n)\leq f(n+1))\}.\]
As noted previously, let $ \mathfrak{i}\in \F$ denote the identity function $ \mathfrak{i}(n)=n$ on any suitable domain.
Note that if
$G$ is a group with Cayley automatic structure $(S,S,L,\psi)$ and \distfun\ $h_{S,\psi}$, then $h_{S,\psi}\in \F$.
We introduce the following partial order on $\F$.
\begin{definition}\label{defn:equivF} Let $f,g \in \F$. We say that $g \preceq_1 f$ if there exist positive integers $K,M$ and $N$ such that $[N,+\infty) \subseteq
\mathrm{dom}\,g\cap \mathrm{dom}\, f$ and $g(n)\leq Kf(Mn)$ for every integer $n\geq N$.
We say that $g \approx_1 f$ if $g \preceq_1 f$ and $f \preceq_1 g$.
\end{definition}
The subscript in Definition~\ref{defn:equivF} serves to distinguish this equivalence from the equivalence on Dehn functions discussed in \S\!~\ref{subsec:Dehn}. It is clear from the definition that $\approx_1$ is transitive.
Let $\mathbf z$ denote the zero function $\mathbf z (n)=0$ on
some domain $[N,\infty)$.
We note that if $f\in\F$ and $f(n)=0$ for infinitely many values of $n\in\dom f$ then $f=\mathbf z$ on its domain, because $f\in\F$ is non-decreasing.
The next lemma will be used repeatedly in the proofs below; it records the fact that functions related by an affine change of variable and scale are equivalent under the definition above.
\begin{lemma}
\label{lemma:lose_the_constant}
Let $A,B,C,D\in\R$ with $A, D\geq 1$ and $B,C\geq 0$.
Let $f,g \in \F$ with $f(n) \leq D g(An+B)+C$.
If $g\neq \mathbf z$, then $f(n) \preceq_1 g(n)$. Moreover, if $h(n)=Df(An+B)+C$
and $f\neq \mathbf z$,
then $h \in \F$ and $h\approx_1 f$.
\end{lemma}
\begin{proof}
If $g$ is bounded, say $g(n)\leq E$ for all $n\in \dom\,g$ and some fixed constant $E$, then
$f$ is bounded as well.
Since $g\neq \mathbf z$ is non-decreasing, $g(n)=0$ for at most finitely many values of $n\in\dom g$, and so $f \preceq_1 g$ (taking the constant $N$ in Definition~\ref{defn:equivF} large enough).
Similarly, if $f$ is bounded and $f\neq \mathbf z$, so that $f(n)=0$ for at most finitely many values of $n\in\dom f$, it follows immediately that $h\approx_1 f$.
For the remainder of the proof, we assume that $g$ is not bounded.
There is a constant $N_0$ so that for $n \geqslant N_0$ we have $An \geqslant B$. As $g \in \F$, it follows that for $n \geqslant N_0$ we have $D g(An+B) +C\leq D g(2An)+C$.
Since $g$ is not bounded, there is a constant $N_1$ so that for $n \geq N_1$ we have $g(2An) \geqslant C$.
Then for $n \geq \max(N_0,N_1)$ we have $f(n) \leq D g(An+B)+C \leq D g(2An) +C \leq (D+1) g(2An)$ and thus $f \preceq_1 g$.
Letting $g=f \in \F$ the above reasoning shows that
$h \preceq_1 f$. As it is clear that
$h(n)=Df(An+B)+C \in \F$ and $f \preceq_1 h$,
it follows that $f \approx_1 h$, as desired.
\end{proof}
Note that $\approx_1$ defines an equivalence relation
on the set $\F$ and $\preceq_1$ then gives a partial ordering on the resulting set of equivalence classes.
The poset of equivalence classes of elements of
$\F$ has a minimal element $[\mathbf z]$.
It follows from the previous lemma that all bounded
functions $f\in\F$ for which $f(n)=0$ for at most finitely many values of $n\in\dom f$
are in the same equivalence class.
Furthermore, every $f \in \F$ can be compared to a constant function. In contrast, we show that the partial ordering is not a linear ordering, that is, there are functions in $\F$ which cannot be compared to the identity function $\mathfrak{i}$ under $\preceq_1$.
\begin{lemma}\label{lemma:comp_to_constant}
Let $c \in \N$ with $c\geq 1$, and let $g\colon \N \rightarrow \N$ be the constant function $g(n)=c$. Let $f \in \F$ be any function. Then either $f\preceq_1 g$ or $g\preceq_1 f$.
\end{lemma}
\begin{proof}
If
there is some $D\in \Rplus$ so that $f(n)\leq D=\left(\frac{D}{c}\right)c=\left(\frac{D}{c}\right)g(n)$ for all $n\in\dom f$ then $f\preceq_1 g$.
If not, then for all $D\in \Rplus$ there is an integer $N_D\in\dom f$ so that $f(n)>D$ for $n>N_D$. In particular, there is an integer $N_c \in \dom f$ so that $f(n)>c=g(n)$ for all $n\in \dom f\cap [N_c,\infty)$. Thus $g\preceq_1 f$.
\end{proof}
\begin{definition}\label{defn:isep}
Let $f,g\in \F$. We say that $g$ is {\em $f$-separated} if there exists a function $h\approx_1 f$ with $h\preceq_1 g$.
\end{definition}
Lemma~\ref{lem:example-incompariable} demonstrates that not every function $f\in \F$ is $\mathfrak{i}$-separated.
\begin{lemma}\label{lem:example-incompariable}
There exists a function $f\in \F$ so that $\mathfrak{i}\npreceq_1 f$ and $f\npreceq_1 \mathfrak{i}$.
\end{lemma}
\begin{proof}
Let $n_0 =2$ and define the infinite sequence of integers by $n_{i+1}=n_i^2$, so that $n_i = 2^{2^i}$.
Consider the step function $f\colon[2,+\infty)\to \Rplus$
defined by
\[f(x)= \left\{\begin{array}{lll}
n_{2i} &n_{2i} \leq x < n_{2i+1},\\
n_{2i+2} &n_{2i+1} \leq x < n_{2i+2}.\\
\end{array} \right.\]
Suppose $f\preceq_1 \mathfrak{i}$. Then there exist $N_0, K$ and $M$ so that $f(x)\leq K\mathfrak{i}(Mx)= KMx$ for all $x\geq N_0$.
However,
\[ f(n_{2i+1})=n_{2i+2}=n_{2i+1}^2\leq KM n_{2i+1}\]
which implies that
$n_{2i+1}\leq KM $ for sufficiently large $i$, a contradiction.
Thus $f\npreceq_1 \mathfrak{i}$.
Conversely suppose $\mathfrak{i}\preceq_1 f$. Then there exist $N_0, K$ and $M$ so that $x\leq Kf(Mx)$ for all $x\geq N_0$. This means $\frac{s}{M}\leq Kf(s)$ for all $s=Mx\geq MN_0$, which
implies that $M\lfloor\frac{s}{M}\rfloor\leq KMf(s)$ for all $s\geq MN_0$.
However,
\[M \left\lfloor\frac{n_{2i}^2-1}{M} \right\rfloor=
M \left\lfloor \frac{n_{2i+1}-1}{M} \right\rfloor \leq KMf(n_{2i+1}-1)=KMn_{2i}
\]
and thus $\left\lfloor \frac{n_{2i}^2-1}{M} \right\rfloor \leq Kn_{2i}$.
Therefore,
$\frac{n_{2i}^2-1}{M} \leqslant K n_{2i} + 1$,
so $n_{2i} \leqslant KM + \frac{M+1}{n_{2i}}$
which is a contradiction for sufficiently large $i$.
Thus $\mathfrak{i}\npreceq_1 f$.
\end{proof}
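The behaviour established in the proof can be checked numerically with the following sketch (ours): along $x=n_{2i+1}$ the ratio $f(x)/x$ is unbounded, so $f\npreceq_1 \mathfrak{i}$, while along $x=n_{2i+1}-1$ the ratio $x/f(x)$ is unbounded, so $\mathfrak{i}\npreceq_1 f$.

```python
def squares_sequence(k):
    """n_0 = 2 and n_{i+1} = n_i^2, so n_i = 2^(2^i): 2, 4, 16, 256, 65536, ..."""
    seq = [2]
    while len(seq) < k:
        seq.append(seq[-1] ** 2)
    return seq

SEQ = squares_sequence(8)

def f(x):
    """The step function from the lemma: f = n_{2i} on [n_{2i}, n_{2i+1})
    and f = n_{2i+2} on [n_{2i+1}, n_{2i+2})."""
    i = 0
    while True:
        if SEQ[2 * i] <= x < SEQ[2 * i + 1]:
            return SEQ[2 * i]
        if SEQ[2 * i + 1] <= x < SEQ[2 * i + 2]:
            return SEQ[2 * i + 2]
        i += 1
```

For instance $f(255)=16$ while $f(256)=65536$, so $f$ jumps above and falls below any fixed linear function infinitely often.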
\subsection{Invariance under change of generating set and change of structure}
Here we describe how robust both the Cayley automatic structure and the function $h_{S,\psi}$ are to, respectively, change in generating set and change in structure.
First we recall the following standard fact.
\begin{lemma}\label{lem:2tape}
Let $G$ be a Cayley automatic group and $S$ a finite symmetric generating set for $G$.
Let $(S,\Lambda,L,\psi)$ be a Cayley automatic structure for $G$.
Then for any $w\in S^*$,
\[L_w=\{(u,v)\in L^2\mid \psi(v)=_G\psi(u)w\}\] is regular.
\end{lemma}
\begin{proof}
Let $w=s_1\dots s_n$ where $s_i\in S$ for $1 \leq i \leq n$.
As $(S,\Lambda,L,\psi)$ is a Cayley automatic structure for $G$, for each $s \in S$ there is a synchronous 2-tape automaton $\texttt{M}_{s}$ which accepts the language $$L(\texttt{M}_{s})=\{(u,v)\in L^2\mid \psi(v)=_G\psi(u)s\}.$$
Let $\texttt{M}_i'$ be a synchronous $(n+1)$-tape automaton accepting the tuples \[(z_1, \dots, z_{i-1}, u, v, z_{i+2},\dots, z_{n+1})\] where $z_j\in \Lambda^*$, $u,v\in L$ and $\psi(v)=\psi(u)s_i$; here $u$ and $v$ occupy the $i$-th and $(i+1)$-st coordinates.
We construct $\texttt{M}_i'$ from $\texttt{M}_{s_i}$ by replacing each edge labeled $(a,b)\in \Lambda^2$ by the finitely many edges
labeled $(x_1,\dots, x_{i-1},a,b,x_{i+2},\dots ,x_{n+1})\in \Lambda^{n+1}$ for all possible choices of $x_j \in \Lambda$
with $1\leq j \leq i-1$ or $i+2 \leq j \leq n+1$.
Then \[\bigcap_{i=1}^n L(\texttt{M}_i')=\{(v_1,\dots,v_{n+1})\in L^{n+1}\mid \psi(v_{i+1})=\psi(v_i)s_i \text{ for } 1\leq i\leq n\}\] is regular, being a finite intersection of regular relations, and $L_w$ is its image under the projection onto the first and last coordinates, an operation which again preserves regularity.
\end{proof}
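The closure property used at the end of the proof, that a finite intersection of regular languages is regular, comes from the usual product construction. The sketch below (our illustrative code) implements it for ordinary one-tape automata; the same state-pairing idea applies verbatim to synchronous multi-tape automata.

```python
def product_dfa(d1, d2):
    """Product construction: a DFA whose language is L(d1) ∩ L(d2).
    A DFA is a dict with keys alphabet, start, delta, accepts."""
    start = (d1["start"], d2["start"])
    delta, accepts, seen, todo = {}, set(), {start}, [start]
    while todo:
        p, q = todo.pop()
        if p in d1["accepts"] and q in d2["accepts"]:
            accepts.add((p, q))
        for a in d1["alphabet"]:
            r = (d1["delta"][(p, a)], d2["delta"][(q, a)])
            delta[((p, q), a)] = r
            if r not in seen:
                seen.add(r)
                todo.append(r)
    return {"alphabet": d1["alphabet"], "start": start,
            "delta": delta, "accepts": accepts}

def accepts(d, w):
    s = d["start"]
    for a in w:
        s = d["delta"][(s, a)]
    return s in d["accepts"]

# Two toy DFAs over {a, b}: an even number of a's, and ending in b.
EVEN_A = {"alphabet": "ab", "start": 0, "accepts": {0},
          "delta": {(0, "a"): 1, (1, "a"): 0, (0, "b"): 0, (1, "b"): 1}}
ENDS_B = {"alphabet": "ab", "start": "n", "accepts": {"y"},
          "delta": {("n", "a"): "n", ("y", "a"): "n",
                    ("n", "b"): "y", ("y", "b"): "y"}}
```

Here `product_dfa(EVEN_A, ENDS_B)` accepts exactly the words with an even number of $a$'s that end in $b$.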
\begin{proposition}\label{prop:modifyingCAstructure}
Let $G$ be a Cayley automatic group and $S$ a finite symmetric generating set for $G$.
\begin{enumerate}\item If $(S,\Lambda,L, \psi)$ is a Cayley automatic structure for $G$, then so is $(S',S',L, \psi)$, where $S'=\Lambda\cup\Lambda^{-1}\cup S$.
\item If $(S,S,L, \psi)$ is a Cayley automatic structure for $G$ with \distfun\ $h_{S,\psi}$, and $Y$ is a finite symmetric generating set for $G$, then there exists a language $L'\subseteq Y^*$ and a bijection $\psi'\colon L'\to G$ so that
$( Y, Y,L',\psi')$ is a Cayley automatic structure for $G$ with \distfun\ $h_{ Y,\psi'}\approx_1 h_{S,\psi}$.
\end{enumerate}
\end{proposition}
\begin{proof}
\textit{(1)} Suppose $\langle S\mid R\rangle$ is a presentation for $G$.
For each $a\in \Lambda$ choose an element $g_a\in G$, and choose a word $u_a\in S^*$ with $\pi(u_a)=g_a$.
Note that this choice is arbitrary; the element $g_a$ corresponding to the symbol letter $a$ could be any group element.
Let $\Lambda^{-1}$ be the disjoint set $\{a^{-1}\mid a\in \Lambda\}$; we will not use these letters, but include them to ensure our new generating set is symmetric.
Since $\Lambda$ is finite, there is a bound on the length of all $u_a$ words.
We have $S'=\Lambda\cup\Lambda^{-1}\cup S$ and
$G$ is presented by $\langle S'\mid R\cup\{a=u_a\mid a\in\Lambda\}\rangle$.
With this new generating set, we have the same language $L$ which is regular, and the map $\psi\colon L\to G$.
For each $s\in S$ there is an automaton $\texttt{M}_s$ recognizing multiplication by $s$, and it follows from Lemma~\ref{lem:2tape} that there is an analogous 2-tape automaton $\texttt{M}_a$ for each $a\in \Lambda^{\pm 1}$.
\medskip
\noindent
\textit{(2)}
Recall that $L\subseteq S^*$ is a regular language in bijective correspondence with $G$. For each $s\in S$, choose a word $u_s\in Y^*$ with $s=_Gu_s$ and for each $y\in Y$, choose a word $v_y\in S^*$ with $y=_Gv_y$. Let $M_1=\max\{|v_y|_S\mid y\in Y\}$ and
$M_2=\max\{|u_s|_Y\mid s\in S\}$.
Let the monoid homomorphism $\rho\colon S^*\to Y^*$ be defined by $\rho(s)=u_s$. It follows that $L'=\rho(L)$ is a regular language in bijection with $G$, where
$\psi'\colon L'\to G$ defined by $\psi'=\psi\circ\left(\rho|_L\right)^{-1}$ is a bijection.
Note that for all $w\in L$ we have $\pi(w)=\pi(\rho(w))$ and $\psi(w)=\psi'(\rho(w))$.
For each $w\in L^{\leq n}$ we claim that
\begin{equation}\label{eqn:inequality}
d_S\left(\pi(w), \psi(w)\right)\leq
M_1h_{Y,\psi'}\left(M_2n\right)
\end{equation}
To see this, we argue as follows.
\begin{itemize}
\item Under $\rho$, the path labeled $w$ from $1_G$ to $\pi(w)$ in $\Gamma(G,S)$ is mapped to a path labeled $\rho(w)$ from $1_G$ to $\pi(\rho(w))=\pi(w)$ in $\Gamma(G,Y)$, and this path has length at most $M_2n$, replacing each letter $s$ of the path by $u_s$. See Figure~\ref{fig:quasiIsom}.
\item By definition, the distance from $\pi(\rho(w))$ to $\psi'(\rho(w))$ in $\Gamma(G,Y)$ is at most
$h_{Y,\psi'}(M_2n)$ since this is the maximum such distance over all possible words in $(L')^{\leq M_2n}$.
\item Then in $\Gamma(G,Y)$ we have a path from $\pi(w)$ to $\psi(w)$ of length at most $h_{Y,\psi'}(M_2n)$ in the letters from $Y$; call it $\gamma$. Replacing each of these letters $y$ by $v_y$ we obtain a path in $\Gamma(G,S)$ from $\pi(w)$ to $\psi(w)$ of length at most $M_1h_{Y,\psi'}(M_2n)$.
\end{itemize}
\begin{figure}[h!]
\begin{subfigure}{.45\textwidth}
\centering
\input{qi1}
\caption{In $\Gamma(G,S)$}
\label{fig:sub-first}
\end{subfigure}
\begin{subfigure}{.45\textwidth}
\centering
\input{qi2}
\caption{In $\Gamma(G,Y)$}
\label{fig:sub-second}
\end{subfigure}
\caption{Drawing $w\in L$ and $\rho(w)\in L'$ in each Cayley graph.}
\label{fig:quasiIsom}
\end{figure}
Since Equation~(\ref{eqn:inequality}) is true for all $w\in L^{\leq n}$, it follows that
$h_{S,\psi}\preceq_1 h_{Y,\psi'} $.
Similarly
for each $w'=\rho(w)\in (L')^{\leq n}$ the same argument shows that \[d_Y\left(\pi(w'), \psi'(w')\right)
\leq
M_2h_{S,\psi}\left(M_1n\right)
\] as $\pi(w')=\pi(w)$ and $\psi(w)=\psi'(w')$ are the same vertices.
Thus $h_{Y,\psi'}\preceq_1 h_{S,\psi}$ and it follows that $h_{Y,\psi'}\approx_1 h_{S,\psi}$.
\end{proof}
\begin{remark}
Note that a given group may admit many different Cayley automatic structures whose \distfun s are not equivalent under $\preceq_1$. Part (2) of Proposition~\ref{prop:modifyingCAstructure} proves that given one Cayley automatic structure for a group $G$ with respect to a generating set $S$, we can create a new Cayley automatic structure for $G$ over a generating set $Y$ so that both \distfun s are equivalent under $\preceq_1$.
\end{remark}
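The substitution $\rho$ in part (2) is easy to experiment with. In the toy sketch below (ours; $G=\Z$ with $S$-letters $a=+1$, $A=-1$ and $Y$-letters $b,B,c,C$ naming $\pm 2,\pm 3$) one checks that $\rho$ is a monoid homomorphism satisfying $\pi(w)=\pi(\rho(w))$ and $|\rho(w)|_Y\leq M_2|w|_S$ with $M_2=2$.

```python
S_VAL = {"a": 1, "A": -1}
Y_VAL = {"b": 2, "B": -2, "c": 3, "C": -3}
U = {"a": "cB", "A": "bC"}   # u_a = cB since 3 - 2 = 1; u_A = bC since 2 - 3 = -1

def rho(w):
    """The monoid homomorphism rho: S* -> Y*, sending each s to u_s."""
    return "".join(U[s] for s in w)

def ev(word, vals):
    """pi: the group element of Z at the end of the path the word spells out."""
    return sum(vals[x] for x in word)
```

Both paths end at the same group element, which is exactly the fact used in the proof when transporting the \distfun\ between generating sets.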
\subsection{Dehn functions}\label{subsec:Dehn}
Let $\mathcal P=\langle X\mid R\rangle $ be a finite presentation of a group $G$ and $F_X$ the free group on $X$. If $w\in F_X$ is equal to $1_G$ in $G$, then
there exist $N\in \N$, $r_i\in R$, $\epsilon_i\in\{\pm 1\}$ and $u_i\in F_X$ such that
\[w=_{F_X}\prod_{i=1}^N u_i^{-1}r_i^{\epsilon_i}u_i.
\]
We define the {\em area} of $w$, denoted $A_{\mathcal P}(w)$, to be the minimal $N\in \N$ so that $w$ has such an expression.
\begin{definition}
The {\em Dehn function} of a presentation is the function $\delta_{\mathcal P}\colon \N\to\N$ given by
$$
\delta_{\mathcal P}(n)=\max\{A_{\mathcal P}(w)\mid w\in F_X, w=_G1_G, |w|\le n \}
$$
\end{definition}
Note that if $f$ is a Dehn function then $f\in \F$.
It is standard to define the following partial order on Dehn functions.
\begin{definition}\label{def:dehn_equiv}
For $f,g \in \F$ we define $f \preceq_2 g$ if there exists a constant $C>0$ so that $f(n)\le Cg(Cn)+Cn$ for all $n\in\N$.
We write $f \approx_2 g$ if $f \preceq_2 g$ and $g \preceq_2 f$.
\end{definition}
Recall that each presentation of a group $G$ can give rise to a different Dehn function.
It is a standard fact
that all Dehn functions on a group $G$ are equivalent under the relation $\preceq_2$. Thus we can
consider the equivalence class of these functions as a quasi-isometry invariant of the group. In particular, we can refer to a group as possessing a linear, quadratic or exponential Dehn function, for example.
Recall that
there are no groups with Dehn function equivalent to $n^\alpha$ for $\alpha \in (1,2)$ \cite{Bowditch,GromovHyp, Olshanskii}.
\section{Finite presentability and Dehn functions}\label{sec:FP-dehn}
We start with the following observation.
\begin{lemma}[\cite{KKMjournal}, Lemma 8.2; \cite{cga}, Lemma 8]\label{lem:bound1}
Let $(S,\Lambda,L,\psi)$ be a Cayley automatic structure for $G$. Then there are constants $m,e\in \N$, depending on the Cayley automatic structure, with $m\geq 1$ so that for each $u \in L$,
\[|u| \leq m d_S(1_G, \psi(u)) + e\]
where $d_S$ denotes the word metric in $G$ with respect to the generating set $S$.
\end{lemma}
\begin{proof}
For each $x\in S$ let $\texttt{M}_x$ be a synchronous 2-tape automaton accepting
\[\{(u,v)\in L^2\mid \psi(v)=_G\psi(u)x\},\]
and let $|\texttt{M}_x|$ denote the number of states in $\texttt{M}_x$, and
$m=\max\{|\texttt{M}_x|\mid x\in S\}$. Let $u_0\in L$ be such that $\psi(u_0)=1_G$, and $e=|u_0|$.
For $u\in L$ let
$x_1\dots x_k$ be a geodesic for $\psi(u)$ where $x_i\in S$, so $k=d_S(1_G,\psi(u))$. Define $u_i\in L$ by $\psi(u_i)=_Gx_1\dots x_i$. Then for $i=1,\dots, k$ we have
\[\left||u_i|-|u_{i-1}|\right|\leq m.\]
If this difference in length was greater than $m$, the path accepted by the two-tape automaton would end with a sequence of $\$$ symbols in one coordinate of length greater than $m$.
One could then apply the pumping lemma to this path, and contradict the fact that $\psi$ is a bijection.
It then follows from the triangle inequality that
\[\begin{array}{lll}
|u|&=&\left||u_k|-|u_{k-1}|+|u_{k-1}|-\dots -|u_1|+|u_1|-|u_{0}|+|u_0|\right|\\
&\leq &\left|\left(|u_k|-|u_{k-1}|\right)\right|+\dots +\left|\left(|u_1|-|u_{0}|\right)\right|+|u_0|\\
&\leq &m k+e\end{array}\]
which establishes the bound.
\end{proof}
The following proposition relates the Cayley distance function to fillings of loops in the Cayley graph of a Cayley automatic group.
\begin{proposition}
\label{prop:dehnbound}
Let $(S,S,L,\psi)$ be a Cayley automatic structure for $G$ with \distfun\ $h_{S,\psi}$.
There exist constants
$c,d,\varsigma,n_0, D\in\N$, depending on the Cayley automatic structure, so that the following holds.
\begin{enumerate}\item
For every $w\in S^*$ with $w=_G1_G$ and $|w| \geq n_0$,
there exist $w_i,\rho_i\in S^*$ with $w_i=_G1_G$ for $1\leq i\leq k$ so that \[w=_{F_S}\prod_{i=1}^k\rho_iw_i\rho_i^{-1}\ \ \ \ \
\text{and} \ \ \ \ \ |w_i|\leq 4h_{S,\psi}(
c|w|+d)+\varsigma.\]
\item
If $G$ is finitely presented, and $\delta$ is the Dehn function with respect to a fixed presentation $\langle S\mid R\rangle$, then
\[\delta(n) \leq D n^2 \delta(f(n))\] for all $n\geq n_0$,
where $f\approx_1 h_{S,\psi}$.
\end{enumerate}
\end{proposition}
Note that the constants $D$, $c$, $d$ and $\varsigma$ in the statement of the proposition depend only on the Cayley automatic structure and not on the Dehn function $\delta$.
\begin{proof}
Let $m=\max_{s\in S}\{|\texttt{M}_s|\}$ be the maximum number of states in any two-tape synchronous automaton accepting $R_s$ as in Definition~\ref{def:Caut} in the Cayley automatic structure for $G$ and $u_0\in L$ the word representing the identity element of $G$ of length $e$ as in Lemma~\ref{lem:bound1}.
Without loss of generality we assume that $m$ is
even, so that all arguments of the function $h_{S,\psi}$
below are integers.
Choose a loop in $\Gamma(G,S)$ based at $1_G$ labeled by the path $w=s_1\dots s_n$ where $s_i\in S$ and $\pi(w)=1_G$. For each $g_i=\pi(s_1\dots s_i)$ let $u_i\in L$ be such that $\psi(u_i)=g_i$.
For $1\leq i\leq n$,
as $d(1_G,g_i)\leq n/2$, it follows from Lemma~\ref{lem:bound1} that \begin{equation}\label{eq:lemma}
|u_i|_S\leq m n/2+e
\end{equation} so the distance from $\pi(u_i)$ to $g_i$ is at most
$h_{S,\psi}(mn/2+e)$.
Let $\gamma_i$ be a path from $\pi(u_i)$ to $g_i$ of length at most this bound.
We will describe how to fill ``corridors" having perimeter $u_i\gamma_is_i\gamma_{i+1}^{-1}u_{i+1}^{-1}$ with relators of bounded perimeter.
See Figure~\ref{fig:filling1} for an example of such a corridor.
\begin{figure}[h!]
\input{loop1.tex}
\caption{The exterior of the figure is labeled by a loop $w = s_1s_2 \cdots s_n$ with $w=_G1$. The figure depicts a corridor whose sides are labeled by the closed path $u_i\gamma_is_i\gamma_{i+1}^{-1}u_{i+1}^{-1}$.}
\label{fig:filling1}
\end{figure}
Let $u_i=a_{i,1}a_{i,2}\dots a_{i,|u_i|_S}$ and for $0\leq j\leq |u_i|_S$ define $p_{ij}=\pi(a_{i,1}a_{i,2}\dots a_{i,j})$ to be the point in $\Gamma(G,S)$ corresponding to the prefix of $u_i$ of length $j$.
If $j>|u_i|_S$ then let $p_{ij}=\pi(u_i)$.
We know that the pair $(u_i,u_{i+1})$ is accepted by $M_{s_i}$.
Consider the state of $M_{s_i}$ which is reached upon reading the input $\{(a_{i,l},a_{i+1,l})\}_{l=1}^{j}$,
where $a_{i,l},a_{i+1,l}\in S\cup\{\$\}$.
There must be a path of length at most $m$ in $M_{s_i}$ from this state to some accept state of $M_{s_i}$.
Denote the labels along this path by $\{(b_{i,j,r},b_{i+1,j,r}) \}_{r=1}^m$ where
$b_{i,j,r},b_{i+1,j,r}\in S\cup\{\$\}$ and we insert the padding symbol $\$$ in both coordinates if the path has length less than $m$.
Then if $x_{i,j}$ denotes the concatenation
$\{a_{i,l}\}_{l=1}^{j}\{b_{i,j,r}\}_{r=1}^m$, and $x_{i+1,j}$ denotes
$\{a_{i+1,l}\}_{l=1}^{j}\{b_{i+1,j,r}\}_{r=1}^m$, then $\psi(x_{i,j})s_i = \psi(x_{i+1,j})$, and both these points, as well as $\pi(x_{i,j})$ and $\pi(x_{i+1,j})$, are depicted in Figure~\ref{fig:filling2}.
\begin{figure}[h!]
\input{loop2}
\caption{Depiction of a path in $\Gamma(G,S)$ between $p_{i,j} = \pi(a_{i,1}a_{i,2}\dots a_{i,j})$ and $p_{i+1,j} = \pi(a_{i+1,1}a_{i+1,2}\dots a_{i+1,j})$, where $u_i=a_{i,1}a_{i,2}\dots a_{i,|u_i|_S}$ is such that $\psi(u_i) = g_i$.}\label{fig:filling2}
\end{figure}
Thus there is a path in $\Gamma(G,S)$ from $p_{ij}$ to $p_{i+1,j}$ of length at most $2m+2h_{S,\psi}(mn/2+e+m)+1$
consisting of the following segments, as shown in Figure~\ref{fig:filling2}:
\begin{itemize}[itemsep=5pt]
\item $\beta_i=\{b_{i,j,r}\}_{r=1}^m$ from $p_{i,j}$ to $\pi(x_{i,j})$ of length at most $m$,
\item a path from $\pi(x_{i,j})$ to $\psi(x_{i,j})$ of length at most $h_{S,\psi}(mn/2+e+m)$,
\item an edge labeled $s_i$ from $\psi(x_{i,j})$ to $\psi(x_{i+1,j})$,
\item a path from $\psi(x_{i,j})s_i = \psi(x_{i+1,j})$ to $\pi(x_{i+1,j})$ of length at most $h_{S,\psi}(mn/2+e+m)$,
\item $\beta_{i+1}=\{b_{i+1,j,r}\}_{r=1}^m$, traversed in reverse, from $\pi(x_{i+1,j})$ to $p_{i+1,j}$ of length at most $m$.
\end{itemize}
In Figure~\ref{fig:filling3} the paths between $p_{i,j}$ and $p_{i+1,j}$ for all $1 \leq j < \max\{|u_i|_S,|u_{i+1}|_S\}$ are depicted for one corridor.
Between $p_{i,|u_i|}$ and $p_{i+1,|u_{i+1}|}$ we use the existing path
$\gamma_i s_i \gamma_{i+1}^{-1}$.
These corridors create two types of cells. The first
type is created from two of these paths and their connecting edges, for some $j$ with $j+1 < \max\{|u_i|_S,|u_{i+1}|_S\}$.
This creates a cell with perimeter at most
\[2(2m+2h_{S,\psi}(mn/2+e+m)+1) + 2 = 4m+4h_{S,\psi}(mn/2+e+m)+4,\] where the additional $+2$ accounts for the single edges $a_{i,j+1}$ between $p_{i,j}$ and $p_{i,j+1}$, and $a_{i+1,j+1}$ between $p_{i+1,j}$ and $p_{i+1,j+1}$, which lie on the paths $u_i$ and $u_{i+1}$, respectively, and are not part of the paths previously constructed.
\begin{figure}[h!]
\input{loop3}
\caption{Filling the corridors created by the paths $u_i\gamma_i$ with cells of bounded perimeter.}\label{fig:filling3}
\end{figure}
The second type is the ``top" cell created by the path from $p_{i,|u_i|-1}$ to $p_{i+1,|u_{i+1}|-1}$ together with the path
$a_{i,|u_i|-1}\gamma_is_i\gamma_{i+1}^{-1}a_{i+1,|u_{i+1}|-1}$.
This cell has perimeter at most \[\begin{array}{lll}
&
(2m+2h_{S,\psi}(mn/2+e+m)+1)+2+( 2h_{S,\psi}(mn/2+e)+1)\\
=& 2m+2h_{S,\psi}(mn/2+e+m)+2h_{S,\psi}(mn/2+e)+4\\
\leq & 4m +4h_{S,\psi}(mn/2+e+m)+4\end{array}\]
where the terms in the first line come, respectively, from
\begin{itemize}[itemsep=5pt]
\item the path from $p_{i,|u_i|-1}$ to $p_{i+1,|u_{i+1}|-1}$,
\item the two edges labeled $a_{i,|u_i|-1}$ and $a_{i+1,|u_{i+1}|-1}$, and
\item the path $\gamma_is_i\gamma_{i+1}^{-1}$.
\end{itemize}
To obtain the inequality, note that $h_{S,\psi}\in \F$ and $m\geq 1$, so $2m+4\leq 4m+4$ and \[2h_{S,\psi}(mn/2+e+m)+2h_{S,\psi}(mn/2+e)\leq 4h_{S,\psi}(mn/2+e+m).\]
Setting $c=m/2, d=e+m$ and $\varsigma=4m+4$ proves the first claim in the proposition.
To prove the second claim in the proposition, we count the total number of cells required to subdivide the initial loop into cells of bounded perimeter.
This will yield the inequality involving the Dehn function $\delta$.
It follows from Lemma~\ref{lem:bound1} that $|u_i|\leq mn/2+e$
for all $1\leq i\leq n$.
Each corridor is filled by at most $(mn/2+e)$ cells, each of perimeter at most $4h_{S,\psi}(cn+d)+\varsigma$, where $c,d$ and $\varsigma$ depend on $m$ and $e$.
For a fixed finite presentation $\langle S\mid R\rangle$ for $G$ with Dehn function $\delta$, each cell constructed above can be filled by at most $\delta(4h_{S,\psi}(cn+d)+\varsigma)$ cells with perimeter labeled by a relator from the set $R$.
With $n$ corridors, there are $n\cdot (mn/2+e)=n(cn+e)$ such cells to fill.
Thus an upper bound on the number of relators required to fill $w$ is
\begin{dmath*}
n(cn+e)\cdot \delta\left(4h_{S,\psi}(cn+d)+\varsigma\right)
=n^2(c+\frac{e}{n})\cdot \delta\left(4h_{S,\psi}(cn+d)+\varsigma\right)\\
\leq n^2(c+e)\cdot \delta\left(4h_{S,\psi}(cn+d)+\varsigma\right).
\end{dmath*}
Setting $D=c+e$
and noting that it follows from Lemma~\ref{lemma:lose_the_constant} that $4h_{S,\psi}(cn+d)+\varsigma\approx_1 h_{S,\psi}(n)$
proves the second claim of the proposition.
\end{proof}
\section{Separating finitely presented Cayley automatic groups from automatic groups}\label{sec:FP}
In this section we prove Theorem~\ref{thmA:fp}.
First
we introduce the following notion.
\begin{definition}
Let $f,g\in \F$.
We say that
$f \ll g $
if there exists an unbounded
function $t \in \F$ such that $ft \preceq_1 g$.
\end{definition}
\begin{example}
If $g(n)=n^c$ with $c>2$ and $f(n)=n^2$ then $f\ll g$.
Take $t(n)=n^{c-2}$. Then $t \in \F$ is an unbounded function and \[f(n)t(n) = n^c \preceq_1 g(n).\]
\end{example}
Next we define the following.
\begin{definition}\label{defn:strong-super}
A function $f\in \F$ is
{\em super-quadratic} if for all constants
$M > 0$ we have $f(n) \leqslant M n^2$
for at most finitely many $n \in \mathbb{N}$.
A non-zero function $f\in \F$ is {\em strongly-super-polynomial} if $n^2f\ll f$.
\end{definition}
\begin{example}
The functions $n^2\ln n$ and $n^c$ for $c>2$ are super-quadratic; the functions $e^n$ and $n^{\ln n}$ are strongly-super-polynomial.
\end{example}
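These membership claims are easy to probe numerically. The following Python sketch (illustrative only; the search bound and the sample values of $M$ are arbitrary choices of ours) verifies that $f(n)=n^2\ln n$ satisfies $f(n)\leq Mn^2$ only for the finitely many $n$ with $\ln n\leq M$, i.e., $n\leq e^M$:

```python
import math

# f(n) = n^2 ln n; super-quadratic means: for every M > 0 the inequality
# f(n) <= M*n^2 holds for at most finitely many n.
def f(n):
    return n * n * math.log(n)

def last_n_below(M, n_max=10_000):
    """Largest n <= n_max with f(n) <= M*n^2 (brute-force scan)."""
    last = 0
    for n in range(1, n_max + 1):
        if f(n) <= M * n * n:
            last = n
    return last

for M in (2, 5, 8):
    # f(n) <= M*n^2 is equivalent to ln n <= M, so the cutoff sits near e^M
    print(M, last_n_below(M), math.e ** M)
```

For $M=2,5,8$ the scan reports cutoffs $7$, $148$ and $2980$, each just below $e^M$, after which $f$ stays above $Mn^2$ for good.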
\Olshan\ introduces the notion of a function being {\em almost quadratic} in \cite{Ol-almost}; our definition of a super-quadratic function is the same as being not almost quadratic.
However, our notion of a strongly-super-polynomial is
stronger than the more standard definition of a super-polynomial function given, for example, in \cite{GrigPak}:
\begin{definition}\label{defn:superpoly}
A function $f\colon \N\to \R$
is
{\em super-polynomial} if $$ \lim_{n \to \infty} \frac{\ln f(n)}{\ln n} = \infty.$$
\end{definition}
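As a quick illustration (ours, not part of the formal development), the function $n^{\ln n}$ from the example above meets this limit, since $\ln f(n)/\ln n=\ln n$, which is unbounded:

```python
import math

def ratio(n):
    """ln f(n) / ln n for f(n) = n^(ln n); this equals ln n exactly."""
    f = n ** math.log(n)        # still a finite float for n up to ~10**6
    return math.log(f) / math.log(n)

for n in (10, 1000, 10**6):
    print(n, ratio(n))          # grows like ln n, hence unbounded
```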
In Lemma~\ref{lem:NotStrongSuperP}
we give an example of a function in $\F$ which satisfies the above limit but is not strongly-super-polynomial. However Proposition~\ref{prop:strong_implies_super} justifies our use of ``strongly" since it shows that every strongly-super-polynomial function is super-polynomial.
Proposition~\ref{prop:strongSuper} shows that $f$ is strongly-super-polynomial if and only if $n^cf\ll f$ for any $c>0$, that is, there is nothing special about the choice of the exponent $2$ in Definition~\ref{defn:strong-super}.
\begin{lemma}\label{lem:equiv-super-quad}
A function $f \in \F$
is super-quadratic if
and only if
$n^2 \ll f$.
\end{lemma}
\begin{proof}
Assume first that $n^2 \ll f$.
Since $n^2 \ll f$, there exist an unbounded
function $t \in \mathcal{F}$ and integer constants
$K,N>0$ and $M \geqslant 1 $ such that
$n^2 t(n) \leqslant K f (M n)$ for all
$n \geqslant N$.
Assume that for some $M'\geqslant 1$ there exist
infinitely many $n_i \in \mathbb{N}$, $i\geq 1$ with $1\leq n_i < n_{i+1}$ for which
$f(n_i) \leqslant M' n_i^2$.
Let $n_i = k_i M + r_i$, where $k_i$ is an integer
and $0 \leqslant r_i < M$. Then we have:
\begin{equation*}
\begin{split}
\frac1{K} k_i ^2 t(k_i)\leqslant f (M k_i)
\leqslant f (n_i)
\leqslant M' n_i^2 = M' M^2 k_i^2 +
2 M' M r_i k_i + M' r_i ^2 \leqslant \\
M' M^2 k_i ^2 + 2 M' M^2 k_i + M' M^2
\leqslant M' M^2 (k_i +1)^2
\end{split}
\end{equation*}
for all $k_i \geqslant N$.
Therefore, $t(k_i) \leqslant KM'M^2
\frac{(k_i + 1)^2}{k_i^2} \leqslant 2 KM'M^2$ for all
$k_i \geqslant \max\{N, 3\}$, where the $3$ follows from
the simple observation that if
$k \geqslant 3$ then $\frac{(k+1)^2}{k^2} \leqslant 2$.
This contradicts the fact that $t$ is an unbounded function.
Now assume that $f$ is super-quadratic. Then
for each integer $i \geqslant 1$ the set
\[ \{m \, |\,\forall n
\left[n \geqslant m \implies f(n) \geqslant i n^2
\right]\}\] is non-empty. Let $m_i=\min \{m \, |\,\forall n
\left[n \geqslant m \implies f(n) \geqslant i n^2
\right]\}$.
We define a function $t(n)$ as follows:
for $0 \leqslant n < m_1$, let $t(n)=0$, and for
$m_i \leq n < m_{i+1}$ with $i\geq 1$, let $t(n) = i$.
By construction, $t(n)$ is a nondecreasing and unbounded function.
As $n^2 t(n) \leqslant f(n)$, it follows
that $n^2 \ll f$.
\end{proof}
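The second half of the proof is constructive, and the thresholds $m_i$ together with the step function $t$ can be sketched in a few lines of Python (the test function $f$, the range of $i$, and the probe bound are illustrative choices of ours):

```python
import math

def step_function_t(f, i_max, n_probe=10_000):
    """Approximate m_i = min{ m : f(n) >= i*n^2 for all n >= m } by scanning
    n up to n_probe, then build the step function t of the proof."""
    thresholds = []
    for i in range(1, i_max + 1):
        last_bad = 0                      # last n with f(n) < i*n^2
        for n in range(1, n_probe + 1):
            if f(n) < i * n * n:
                last_bad = n
        thresholds.append(last_bad + 1)   # this is m_i
    def t(n):
        # t(n) = i for m_i <= n < m_{i+1}, and 0 below m_1
        return sum(1 for m in thresholds if m <= n)
    return thresholds, t

# Example: f(n) = n^2 ln n, a super-quadratic function.
f = lambda n: n * n * math.log(n)
thresholds, t = step_function_t(f, 3)
print(thresholds)                         # the thresholds m_1, m_2, m_3
for n in (3, 8, 21):
    assert n * n * t(n) <= f(n)           # n^2 t(n) <= f(n), as in the proof
```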
In \cite{WeirdQuadDehn} \Olshan\ gives an example of a finitely presented group which has Dehn function bounded above by $c_1n^2$ for infinitely many values of $n$, bounded below by $c_2n^2\log' n\ \log'\log' n$ for infinitely many values of $n$, where $\log'(n)=\max\{\log_2 n, 1\}$, and bounded between $c_3n^2$ and $c_4n^2\log'n\ \log'\log' n$ for all $n\in \N$. Since this Dehn function is not super-quadratic, it follows that Theorem~\ref{thmA:fp} below does not apply to this example.
We now prove Theorem~\ref{thmA:fp}.
\ThmA*
\begin{proof}
Fix a presentation for $G$ and let $\delta$ be the Dehn function arising from this presentation. Fix a Cayley automatic structure $(S,S,L,\psi)$ for $G$.
If $n^2 \ll \delta$, there is an unbounded function $t(n) \in \F$ and positive constants $K,M,N_0 \in \N$ so that $n^2t(n) \leq K\delta(Mn)$ for all $n \geq N_0$.
From Proposition~\ref{prop:dehnbound} we know that there are constants $N_1, D\geq 1$ and a function $f\in \F$ so that for $n \geq N_1$, we have $\delta(n) \leq Dn^2\delta(f(n))$ where $f\approx_1 h_{S,\psi}(n)$.
Combining these equations we have that for all $n\geq \max\{N_0,N_1\}$
\[n^2t(n)\leq K\delta(Mn)\leq KDn^2\delta\left(f(Mn)\right)\]
and dividing both sides by $KDn^2$ we obtain
\begin{equation}\label{egnINEQ}
\frac{t(n)}{KD}\leq \delta\left(f(Mn)\right).\end{equation}
Define
$$\phi(n) = \min\left\{m\, \middle|\,\frac{t(n)}{KD} \leq \delta(m)\right\}.$$
It is immediate from Equation~(\ref{egnINEQ}) and the definition of $\phi(n)$ that $\phi(n) \leq f(Mn)$ for all $n\geq \max\{N_0,N_1\}$, and hence $\phi \preceq_1 f$. Since $t \in \F$ is unbounded, it follows that $\phi \in \F$ and $\phi$ is unbounded.
Now assume that the inequality $n^2 \delta \ll \delta$
is satisfied.
Therefore, there exist integer constants $K,M,N_0 >0$ and an unbounded function $t \in \mathcal{F}$ such that
\begin{equation}
\label{ineq_n^2_delta_ll_delta}
n^2 \delta (n) t(n) \leqslant K \delta (M n)
\end{equation}
for all $n \geqslant N_0$.
It follows from statement (2) of Proposition~\ref{prop:dehnbound} that there exists
a function $f \approx_1 h_{S,\psi}$ and integer constants
$N_1 \geqslant 0$ and $D>0$ for which
the inequality
\begin{equation*}
\delta(n) \leqslant D n^2 \delta (f(n))
\end{equation*}
holds for all $n \geqslant N_1$.
This implies that
$\delta (Mn) \leqslant DM^2 n^2 \delta (f (Mn))$ for all
$n \geqslant N_1$.
Combining this with the inequality in \eqref{ineq_n^2_delta_ll_delta} we obtain
that
$$n^2 \delta (n) t(n) \leqslant K \delta (M n)
\leqslant D K M^2 n^2 \delta (f(Mn))
$$
for all $n \geqslant \max \{N_0, N_1\}$.
Therefore,
\begin{equation*}
\delta (n) \tau (n) \leqslant \delta (f(Mn))
\end{equation*}
for all $n \geqslant \max \{N_0, N_1\}$, where
$\tau (n) = \frac{t(n)}{D K M^2}$.
Let \[m_0 = \min\{n\in \dom(\tau)\subseteq \N \, | \, \tau(n) \geqslant 2 \};\]
such $m_0$ exists because $\tau(n)$ is unbounded.
Therefore,
\begin{equation*}
\label{semifinal_ineq}
2 \delta (n) \leqslant \delta (f(Mn))
\end{equation*}
for all $n \geqslant \max \{N_0, N_1, m_0\}$.
Let $d_0 = \min\{n \, | \, \delta (n)
\geqslant 1\}$.
If $f(Mn)<n$ for some
$n \geqslant \max\{N_0, N_1, m_0, d_0\}$
then $2\delta(n)\leq \delta(f(Mn))\leq \delta(n)$, which is a contradiction.
Thus
for all $n\geq \max \{N_0, N_1, m_0, d_0\}$ we must have that
\begin{equation*}
\label{final_ineqAA}
n \leqslant f(Mn).
\end{equation*}
From this we obtain that
$\mathfrak{i} \preceq_1 f$. As $f \preceq_1 h_{S,\psi}$ it follows that
$\mathfrak{i}\preceq_1 h_{S,\psi}$, and we conclude that $G$ is $\mathfrak{i}$-separated.
\end{proof}
\section{Dense groups}\label{sec:dense}
We introduce a property of some infinitely presented groups which will allow us to obtain sharper lower bounds on the Cayley distance function of such a Cayley automatic group.
This property will be shown to be independent of generating set and the prototypical examples of groups with this property are restricted wreath products.
Recall that $F_X$ denotes the free group generated by a set $X$.
\begin{definition}[Densely generated]
\label{def:dense}
Let $G$ be a group with finite generating set $X$.
We say that $G$ is {\em densely generated} by $X$
if there exist constants $E,F, N_0\in \N$,
$1\leq E<F$
such that for all $n\geq N_0$ there is a word $w_n\in (X\cup X^{-1})^*$ which has the following properties:
\begin{itemize}[itemsep=5pt]
\item $w_n=_G1_G$,
\item $En\leq |w_n|\leq Fn$, and
\item for any collection of words $u_i,\rho_i\in (X\cup X^{-1})^*$, $1 \leq i \leq k$ with $u_i=_G1_G$ and
\[ w_n=_{F_X}\prod_{i=1}^k\rho_iu_i\rho_i^{-1}, \]
we have $|u_j|>n$ for some $1 \leq j \leq k$.
\end{itemize} \end{definition}
In other words, for every interval $[En,Fn]$ there is a loop $w_n$ whose length lies in that interval which cannot be {filled} by loops all having length at most $n$.
It follows that if $G$ is densely generated by $X$ then every presentation for $G$ over $X$ is infinite.
The following lemma shows that being densely generated is independent of the choice of finite generating set.
\begin{lemma}\label{lem:denseGsetInvariant} If $G$ is {densely generated} by $X$ and $Y$ is another finite generating set for $G$ then $G$ is {densely generated} by $Y$.\end{lemma}
\begin{proof}
Let $| \cdot |_X$ denote the length of a word in $(X \cup X^{-1})^*$ and $| \cdot |_Y$ denote the length of a word in $(Y \cup Y^{-1})^*$.
For each $x\in X$ choose a nonempty word $v_x\in (Y\cup Y^{-1})^*$ with $x=_Gv_x$. Let $M_1=\max_{x\in X}\{|v_x|_Y\}$, and $\tau\colon (X\cup X^{-1 })^*\to (Y\cup Y^{-1})^*$ be the monoid homomorphism defined by $\tau(x)=v_x$.
For each $y\in Y$ choose a nonempty word $q_y\in (X\cup X^{-1})^*$ with $y=_Gq_y$. Let $M_2=\max_{y\in Y}\{|q_y|_X\}$, and $\kappa\colon (Y\cup Y^{-1 })^*\to (X\cup X^{-1})^*$ the monoid homomorphism defined by $\kappa(y)=q_y$.
As $G$ is densely generated by $X$, there exist fixed constants $E,F,N_0 \in \N$ as in Definition~\ref{def:dense}.
Suppose $G$ is not densely generated by $Y$. Then for all constants $E',F', N_0' \in \N$ there exist some $s\geq N_0'$
so that all words equal to $1_G$ of length between $E's$ and $F's$ can be filled by cells of perimeter at most $s$.
Choose $E'=EM_2$ and $F'=M_1M_2F$, and $N_0'=\max\{N_0,M_1M_2+1\}$.
Let $s_0$ be chosen so that with respect to these constants, all words equal to $1_G$ of length between $E's_0$ and $F's_0$ can be filled by cells of perimeter at most $s_0$.
As $G$ is densely generated by $X$, choose $n=M_2s_0$. There must be a word $w_n\in (X\cup X^{-1})^*$ so that $w_n =_G 1_G$ and whose length satisfies
\[E(M_2s_0)\leq |w_n|\leq F(M_2s_0)\]
which cannot be filled by cells all of perimeter at most $n$.
Then $\tau(w_n)$ labels a path in $\Gamma(G,Y)$ so that
\[EM_2s_0\leq |\tau(w_n)|\leq M_1FM_2s_0.\]
Note that the map $\tau$ does not decrease length, as $\tau(w_n)$ is obtained by substitution, with no free reduction.
Since $E's_0\leq |\tau(w_n)|\leq F's_0$, by our choice of $s_0$ we can fill this word
by cells of perimeter at most $s_0$. Now consider this van Kampen diagram as a subgraph of $\Gamma(G,Y)$.
Map the entire subgraph, edge-by-edge, into $\Gamma(G,X)$ by applying the map $\kappa$; the boundary of the new subgraph consists of paths of the form $\kappa(\tau(x_i))$, where $w_n = x_1x_2 \cdots x_k$ with $k=|w_n|$.
These paths form the boundary of the subgraph in $\Gamma(G,X)$ connecting the original vertices on the path labeled by $w_n$, and have length at most $M_1M_2$. We have thus created cells of the form $\kappa(\tau(x_i))x_i^{-1}$ of perimeter at most $1+M_1M_2$.
These boundary cells, together with the copy of the van Kampen diagram, provide a filling of $w_n$.
In summary, the filling we have created has cells of two types:
\begin{itemize}
\item the boundary cells, of perimeter $1+M_1M_2$, and
\item images of the cells in the van Kampen diagram in $\Gamma(G,Y)$, which had perimeter at most $s_0$; after applying the homomorphism $\kappa$, the image of such a cell has perimeter at most $M_2s_0$.
\end{itemize}
Note that we chose $N_0'=\max\{M_2M_1+1,N_0\}$ so all of these cells have perimeter at most $M_2s_0=n$.
This contradicts the assumption that $G$ is densely generated by $X$. Thus $G$ is also densely generated by $Y$.
\end{proof}
A group is called {\em dense} if it is densely generated by some, hence any, finite generating set.
This definition is inspired by Baumslag's paper \cite{baumslag61} about wreath products $G\wr H$.
We prove in Proposition~\ref{prop:BaumslagDense} that if $H$ is infinite then $G\wr H$ is dense.
\begin{proposition}\label{prop:BaumslagDense}
Let $G$ and $H$ be finitely generated groups. If $G$ is nontrivial and $H$ is infinite, then
$G \wr H$ is dense.
\end{proposition}
\begin{proof}
Let $G = \langle X \, |\, P \rangle$ and
$H = \langle Y \, | \, Q \rangle$ be presentations of
the groups $G$ and $H$, where
$X \subseteq G$ and $Y \subseteq H$ are finite generating sets.
For each $h \in H$ choose a geodesic word $u_h \in (Y \cup Y^{-1})^*$ with $\pi(u_h) =_H h$, and let
$U = \{ u_h \mid h \in H\}$.
Then the wreath product $G \wr H$ has presentation
\[G \wr H = \langle X \cup Y \, | \,
P \cup Q \cup \{[a_1^u,a_2^v]\,|\,
a_1,a_2 \in X, u,v \in U\} \rangle.\]
Let $B_{H,Y} (n)$ denote the ball
of radius $n$ in the group $H$ with respect to the
generating set $Y$.
For a given positive integer $m$, define the set of relators
\[ R_m =P \cup Q \cup \{[a_1^u,a_2^v]
\mid u,v \in U,u\not=v,\pi(u),\pi(v) \in B_{H,Y}(m)\}.\]
For any set $S \subseteq H$, define the relation
$T_S = \{ (s_1 h,s_2 h) \mid s_1,s_2
\in S, h \in H\}$.
Now we mimic Baumslag's argument for
proving the non-finite presentability of
wreath products, presented in Lemma~3 of \cite{baumslag61}, see also \cite{berstein2015}.
Baumslag constructs a group $\BaumGroup$
generated by $G$ and $H$ with the following properties:
\begin{itemize}[itemsep=5pt]
\item $G^{h_1} \cap G^{h_2} =\{1_G\} $ for all $h_1,h_2 \in H$ with $h_1 \not= h_2$, and
\item $\left[ G^{h_1}, G^{h_2}\right]= \{1_G\}$ if and only if $(h_1,h_2) \in T_S$.
\end{itemize}
Note that instead of requiring all conjugacy classes to commute in $\BaumGroup$, we only require this when the conjugating elements form a pair in the relation $T_S$.
Choosing $S = B_{H,Y}(n)$ for any fixed $n$, it follows that in $\BaumGroup $ we have $[G,G^h]\not=\{1_G\}$ for any $h$ for which
$(e,h) \not\in T_S$.
In particular, this holds for any
$h \in B_{H,Y} (2n+1) \setminus B_{H,Y} (2n)$.
Therefore there is a relation $[a_1,a_2 ^h]$ in
$R_{2n+1} \setminus R_{2n} $ which cannot be obtained
as a product of conjugates of the relations from
$R_n$.
Now observe that every loop $w \in (X \cup X^{-1} \cup Y \cup Y^{-1})^*$
of length $|w| \leqslant n$
in the wreath product $G \wr H$
can be represented as a product of
conjugates of relations from $R_n$.
Therefore, a loop of length $8n + 8$,
given by the relation $[a_1,a_2 ^h]$,
cannot be decomposed into
smaller loops of length less than or equal to $n$.
Thus, $G \wr H$ is dense.
\end{proof}
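A quick arithmetic check (ours, outside the formal argument) of the $8n+8$ count: the commutator $[a_1,a_2^h]=a_1(h^{-1}a_2h)a_1^{-1}(h^{-1}a_2^{-1}h)$ with $|h|=2n+1$ has word length $2+2\,(2(2n+1)+1)=8n+8$.

```python
# Word length of [a1, a2^h] = a1 (h^-1 a2 h) a1^-1 (h^-1 a2^-1 h),
# where h is a geodesic word of length h_len over the generators of H.
def commutator_length(h_len):
    conjugate = 2 * h_len + 1        # letters in h^-1 a2 h
    return 2 + 2 * conjugate         # a1 and a1^-1, plus the two conjugates

for n in range(5):
    assert commutator_length(2 * n + 1) == 8 * n + 8
print("length check passed")
```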
\section{Separating non-finitely presented Cayley automatic groups from automatic groups}\label{sec:thmB}
The proof of Theorem~\ref{thmB:nfp} relies
on the following proposition.
\begin{proposition}
\label{prop:nonFP}
Let $G$ be a non-finitely presented group with finite generating set $S$. Then there exists a non-decreasing step function $\phi_{G,S} \in\F$, depending on $G$ and $S$, and an infinite sequence of integers $\{n_i\}$ such that $\phi_{G,S}(n_i)=n_i$ and for any Cayley automatic structure $(S,S,L,\psi)$ on $G$,
\begin{enumerate}
\item
$\phi_{G,S} \preceq_1 h_{S,\psi}$, and
\item if $G$ is dense then
$\mathfrak i\preceq_1 h_{S,\psi}$.
\end{enumerate}
\end{proposition}
\begin{proof}
Since $G$ is not finitely presented, there exists an infinite sequence of words $w_i\in S^*$ so that
\begin{itemize}[itemsep=5pt]
\item $w_i =_G 1_G$,
\item if $w_i=\prod_{j=1}^k \rho_ju_j\rho_j^{-1}$ for some $k \in \N$ and $u_j, \rho_j \in S^*$ for $1\leq j\leq k$, then $|u_j|\geq |w_i|$ for at least one value of $j$, and
\item $|w_i| = l_i$, and $l_i < l_{i+1}$.
\end{itemize}
Define $\phi_{G,S} \in \F$ by $\phi_{G,S}(n) = l_i$ for $l_i \leq n < l_{i+1}$.
Let $c,d,\varsigma$, and $n_0$ be the constants from Proposition~\ref{prop:dehnbound}. Then for any $i\in\N$ with $l_i\geq n_0$ we can
decompose $w_i$ into loops $u_{i,j}$ using the algorithm described in Proposition~\ref{prop:dehnbound}, and illustrated in Figures~\ref{fig:filling1}, \ref{fig:filling2} and \ref{fig:filling3}, so that $|u_{i,j}| \leq 4h_{S,\psi}(cl_i+d) + \varsigma$.
Our choice of $w_i$ ensures that for some $j$ we have $\phi_{G,S}(l_i) = l_i = |w_i| \leq |u_{i,j}|$.
Suppose $l_i \leq n < l_{i+1}$.
It follows that for this choice of $j$,
\[ \phi_{G,S}(n) = l_{i}\leq |u_{i,j}| \leq 4h_{S,\psi}(c l_{i} + d) + \varsigma \leq 4h_{S,\psi}(cn+d)+\varsigma. \]
It then follows from Lemma~\ref{lemma:lose_the_constant} that
$\phi_{G,S}(n) \preceq_1 h_{S,\psi}(n)$.
Now suppose that $G$ is densely generated by $S$, so there exist constants $E,F$ and $N_0$ so that for all $n \geq N_0$ there exists a loop $w_n=_G 1_G$ so that
\begin{itemize}
\item $En \leq |w_n| \leq Fn$, and
\item $w_n$ cannot be subdivided into loops all of whose lengths are bounded above by $n$. That is, if we write $w_n = \Pi_{i=1}^k \rho_i u_i \rho_i^{-1}$ where each $u_i =_G 1_G$ then for some $i$ we have $|u_i| \geq n$.
\end{itemize}
Again it follows from Proposition~\ref{prop:dehnbound} that there are constants $c,d,\varsigma$ and $n_0$ so that for $n \geq \max(n_0,N_0)$, each $u_j$ in the above decomposition of $w_n$ satisfies
$|u_j| \leq 4h_{S,\psi}(cFn+d)+\varsigma.$
Since $G$ is dense, it follows that for some $j$ we have
$$n \leq |u_j|\leq 4h_{S,\psi}(cFn+d)+\varsigma.$$
As this is true for every $n \in \N$ with $n \geq \max(n_0,N_0)$,
it follows from Lemma~\ref{lemma:lose_the_constant} that $\mathfrak{i} \preceq_1 h_{S,\psi}$.
\end{proof}
We now prove Theorem~\ref{thmB:nfp}.
\ThmB*
\begin{proof}
Suppose $(Y,Y, L, \psi)$ is a Cayley automatic structure for $G$ with respect to some arbitrary finite generating set $Y$.
If $G$ is dense,
it follows from part (2) of Proposition~\ref{prop:nonFP} that $\mathfrak{i}\preceq_1 h_{Y,\psi}$.
Next, choose a finite generating set $S$ for $G$.
It follows from part (1) of Proposition~\ref{prop:nonFP}, applied to a Cayley automatic structure $(S,S,L',\psi')$ for $G$ over the generating set $S$, that $\phi_{G,S}\preceq_1 h_{S,\psi'}\approx_1 h_{Y,\psi}$. Setting $\phi=\phi_{G,S}$ gives a step function which suffices to prove the theorem.
\end{proof}
\section{Conclusion}
Theorems~\ref{thmA:fp} and~\ref{thmB:nfp} together imply that the only possible candidates for non-automatic Cayley automatic
groups where the geometry comes close to resembling that of an automatic group (in the coarse sense considered here)
are groups with
quadratic or almost quadratic Dehn function. The class of groups with quadratic Dehn function
is a wild and interesting collection (Gersten referred to the class as a ``zoo'' in \cite{Gersten}),
including the following
groups.
\begin{enumerate}
\item The higher Heisenberg groups $H_{2k+1}$ for $k>2$ \cite{MR1616147,MR1253544,MR1698761}.
\item The higher rank lamplighter groups, or Diestel-Leader groups \cite{Taback18}.
\item Stallings' group and its generalizations \cite{CarterForester,DisonElderRileyYoung}. These examples are not of type $FP_{\infty}$, hence not automatic; it is not known whether they admit Cayley automatic structures.
\item Thompson's group $F$, which is not known to be automatic or Cayley automatic though is 1-counter-graph automatic \cite{cga}.
\item An example of \Olshan\ and Sapir \cite{OSapir-CP} which has quadratic Dehn
function and unsolvable conjugacy problem.
\end{enumerate}
\Olshan\ \cite{WeirdQuadDehn,Ol-almost} also gives an example of a group whose Dehn function is
almost quadratic, and so Theorem~\ref{thmA:fp} does not apply to this group. It is not known whether this
example or the unsolvable conjugacy example of \Olshan\ and Sapir are Cayley automatic. Note that our definition of
a super-quadratic function is equivalent to saying the function is not almost quadratic, as shown in Lemma~\ref{lem:equiv-super-quad}.
Progress towards proving Conjecture~\ref{conj1} takes two forms.
First, one must improve the exhibited bounding functions given for groups with super-quadratic Dehn function and for non-dense non-finitely presented groups to show that these groups are $\mathfrak{i}$-separated.
Second, one must prove that non-automatic Cayley automatic groups with quadratic and almost quadratic Dehn function are $\mathfrak{i}$-separated. We have some optimism for progress on the first part, and find the second more difficult.
\bibliographystyle{plain}
\section{Introduction}
\label{s:introduction}
Some physical processes can be considered continuous in the sense that the discretization of measurements comes from the detector's sampling rate. Others are discrete in the sense that they give rise to individual events that can be detected as such, as long as the sampling is faster than the detection rate. For non-variable processes, the values of the measurements will generally be distributed as either a normal variable---in those seen as continuous, or as a Poisson variable---in those that are discrete.
The way in which the measurements are distributed is absolutely crucial in applying the appropriate statistical treatment. But irrespective of that distribution, each measurement carries information that can be used to estimate the values of the parameters of models that help us learn about the processes being observed. Most importantly, each measurement considered individually, and the collection of measurements as a whole, all carry statistical evidence that can be used to accurately assess the agreement between a given hypothesis or model and the data.
Treating data as evidence is a powerful means to detect changes, differences, deviations or variations in any kind of process that is observed. Treating data as statistical evidence is, in fact, the only way that data should be treated no matter what the application or circumstances, because that is what data actually is. The way this is done, mathematically and statistically, is through the likelihood function.
As obvious as this is, it is important to point out that the detection of an event or feature that is localized in time involves identifying something that was not there before. Whether it rises, dwells, and decays over weeks and months like a supernova, or whether it just appears and disappears in a fraction of a second like a $\gamma$-ray burst; whether it manifests as a complete change of shape of the energy spectrum during a state transition in a black hole, or as the short-lived emission line from an accretion event; whether it comes as a sudden change of spectral index in the power spectrum or as the appearance of an ephemeral quasi-periodic oscillation (QPO); all of these phenomena, independently of their particular timescale, share in common that they appear as a sharp change in the data.
Detection and identification of a transient feature in an ensemble of measurements is a statistical procedure that involves comparing numbers and making decisions based on probabilities or, in fact, on probability ratios, and in other words, on likelihoods. Naturally, we would like to use the most powerful method for the problem at hand. Thus, whatever the application, we want to maximize sensitivity to transient phenomena, and minimize the frequency of identifying a statistical fluctuation as an actual event. For this reason we must use all the information that is available, without degrading in any way the data the instruments provide us with, and interpret these as statistical evidence.
In this article we address this task of identifying \emph{transients}, in the most general sense of the word, without reference to the kind of phenomenon nor the kind of data we are working with. In light of this, we use the word \emph{transient} in a sense that is broader than its customary usage, which refers to a source whose intensity varies dramatically enough to go from being invisible to detectable or even bright, and back to being undetectable. We define a \emph{transient} as any feature or change that can be identified in the data as \emph{statistically distinct} from the global process and underlying conditions. This definition therefore implies that if a feature cannot be distinguished by statistical means, it cannot be detected and identified. Whether this is because the transient is too weak or too long-lived does not matter. The limitations of a particular detection procedure, with its own thresholds and timescales, can always be accurately established before applying it.
We develop and present a general method based on the likelihood function that can be applied to a wide range of problems (Section\;\ref{s:identifyingTransients}). We describe the procedure (Section\;\ref{s:method}), what the likelihood function is (Section\;\ref{s:likelihoodFunction}), and illustrate the method for a general counting experiment (Section\;\ref{s:illustrationOfMethod}). We elaborate on the procedure's use and applicability in astronomy and astrophysics (Section\;\ref{s:astrophysics}) by considering, after a few introductory remarks (Section\;\ref{s:astroIntro}), its application to images (Section\;\ref{s:images}), time series (Section\;\ref{s:timeSeries}), energy spectra (Section\;\ref{s:energySpectra}), and power spectra (Section\;\ref{s:powerSpectra}). We briefly discuss identification of transients over a variable background in the next section (Section\;\ref{s:identifyingTransients}) and in the concluding remarks (Section\;\ref{s:conclusion}).
The mathematical statistics of likelihood are from the work of \cite{Fisher:1912uv,1922RSPTA.222..309F}; the philosophical basis for, understanding of, and inspiration to work with and interpret data as statistical evidence are primarily from \cite{Royall:1997vc}; and other technical details of data analysis and statistics are mostly from \cite{1997sda..book.....C}.
\section{Identifying Transients}
\label{s:identifyingTransients}
A transient can only be identified as such in relation to something else: in relation to the underlying background conditions. There are two families of circumstances pertaining to the characteristics of the background process that cover all cases under which transients may appear: the process can be constant or it can be variable. In absolute terms, it could be said that if a process is not constant, then it is variable. In practice, however, the variability is characterized by timescales above which the process is seen to be variable, but below which it cannot, or at least not easily be seen to be variable. Furthermore, the variability will in general manifest differently at different timescales.
In most applications where transient sources are searched for, they are not detectable until they brighten for a while before once more fading away below the detection level. Therefore, the background is characterized by the statistical and instrumental fluctuations inherent to the particular experimental setup and sky pixel where the source appears. This is also generally true for supernovae and $\gamma$-ray bursts at all wavelengths (but on different timescales), as well as for X-ray novae or X-ray flares, bursts or flashes in most systems: the underlying background is usually constant or very weakly variable in comparison to the sharp change in intensity associated with the transient phenomenon.\footnote{The work of \cite{1998ApJ...504..405S} and \cite{2013ApJ...764..167S} on astronomical time series, with which we were not acquainted while developing our method, is different in its details but quite similar in spirit (even if more complicated in presentation and implementation) and seems well suited for $\gamma$-ray burst searches in X-ray and $\gamma$-rays time series.} In the energy and power spectra, irrespective of spectral shape, transient phenomena will also most often appear against a constant or very weakly variable background. Therefore, in all these cases and in the majority of applications searching for transients, the background is constant or nearly so.
In fact, this is indeed what allows these short-lived phenomena to be considered and labeled as transient. However, as the intensity of the background against which we seek to identify a transient of a given magnitude increases, the ability to do so decreases. If instead of increasing in intensity the background were to increase in its variability, the ability to identify a transient similarly decreases. In both cases, it is a question of scales: of the intensity scale and of the timescale. In the first case, statistical identification depends mostly on the ratio of intensities between the transient and the background (the commonly held notion of signal-to-noise ratio), but also on its duration: the brighter, the easier to identify; and if it is not bright, the longer it lasts, the easier to identify. In the second, identification depends more on the ratio of the transient's duration to the timescale of the variability of the background, but obviously also on the magnitude of its intensity: the shorter in duration, the easier to identify with respect to a slowly variable background; and naturally the brighter it is, the easier the identification.
Hence, these factors both come into play, and gain or lose in importance depending on the characteristics of the physical process being observed, and also, most sensitively, on the quality of the data. It is important to underline that these elements---intensity ratios and variability timescales---can and should be considered and worked with as parameters in order to establish optimal detection algorithms and thresholds for different problems in various kinds of data sets, as well as to determine the limitations of our methods that should ultimately always be statistical in nature, and dependent on the quality and kind of data, not on weaknesses in the methods themselves. We will carry out and present such an investigation and quantitative characterization of these issues, including the discussion of commonly encountered complications in a future work, together with applications to data from different experiments and surveys.
The purpose of this paper is to present the foundational aspects of the method and illustrate how it can be of immediate applicability in a variety of situations that include the search for transient features in X-ray and $\gamma$-ray data, transient sources in optical sky surveys, radio transients, and in particular rotating radio transients that are currently typically identified by eye. Our basic working assumption, which does indeed cover most applications, is that we are dealing with a background that is constant (nil or not) on the timescales relevant to the problem of identifying transients.
The method is straightforward and based on the likelihood function. As such, all of the details that pertain to the inherent statistical properties of the kind of random variable that results from the observational process (e.g., normal, Poisson, exponential) are automatically and effortlessly taken into account at every step, and integrated in every aspect of the procedure. The presentation is intended to be as clear, intuitive, and easy as possible, with minimal formalism. It is worth noting that the tools for this have been available in practice at least since \cite{1922RSPTA.222..309F}, and in principle since \cite{Bayes:1763ee}. The method makes use of every measurement, does not require grouping or approximations, and therefore does not impose on the data any degradation of its resolution or accuracy, no matter what kind of data it is. Since the approach is the same in all circumstances where the aim is to detect a transient, it is described in general terms and its principles demonstrated in the next section. Astrophysics applications are discussed afterward.
\subsection{Methodology\label{s:method}}
The first measurement gives the first estimate of the reference value: the value we expect to measure under usual conditions when there is no transient. With this single measurement we can already draw the curve that expresses the likelihood of all possible reference values given the evidence carried by that measurement.\footnote{In practice, most observations (from most patches of sky and most sources) are not the first of their kind, and there is, therefore, no need to make a guess of the expected intensity; it can just be determined from previous observations, which implies that even the first measurement can be compared to the expected reference value, and the sensitivity of the method does not depend on the number of prior measurements.} To draw this curve, we must make an informed assumption as to \emph{how the measurements will be distributed} around the true value: we must make an informed assumption about the nature of that random variable. As was mentioned, the most common in physical experiments are the normal distribution seen in continuous processes like measuring temperature, and the Poisson distribution that arises when discrete events are recorded. The likelihood function shows the maximum likelihood of the true value, and the ratio between that and every other possible value: it gives us the relative likelihood of any value with respect to any other, depending only on the nature of the variable and on the data.
The second measurement gives a second estimate of the reference value. Because we already have a likelihood function based on the first measurement, the value of the second can be immediately evaluated for its potential of being a transient---a feature that stands out from what is expected---by computing its likelihood ratio with respect to the previously estimated maximum likelihood. Although limited by the accuracy with which the mean is known, this is nonetheless the only mathematically correct way to evaluate the likelihood of measuring that second value in light of the first, without invoking an a priori model or assumption. If the second measurement is not statistically different from the first beyond the established threshold, it is combined with the first to better estimate the mean, and can also be used to compute a first estimate of the standard deviation of the distribution. The joint likelihood function is computed from the two measurements and its central peak immediately begins to grow narrower and home in on the true value of the parameter.
The third measurement gives a third estimate of the reference; the likelihood of measuring such a value is evaluated by the ratio of the single-measurement likelihood function centered on the maximum likelihood reference value given by the previously calculated joint likelihood. This is an important point: the joint likelihood is the likelihood function of the reference value given the entire set of measurements considered together as a whole, and with each additional measurement, it gets narrower and more finely peaked on the maximum likelihood reference value; the single-measurement likelihood is the function that shows how likely it is to measure any given value each time a measurement is made. The more precisely the reference value is determined, the more reliable the location of the single-measurement likelihood function. However, its shape depends only on the probability density of the random variable and on the reference value.\footnote{We can formally incorporate the uncertainty in the determination of the reference value into the single-measurement likelihood function by computing the cross-correlation of the joint and single-measurement functions, an operation which yields a broadened function that tends to the pure probability density as the joint likelihood tends toward a delta function. We have investigated this, and found that it increases the complexity of the procedure substantially, but that the effect is only appreciable in the first few measurements where a broadening is seen. Beyond even a handful of measurements, the joint likelihood function is narrow enough for the broadening to be negligible. It is therefore not warranted to include this step unless we are dealing with a process that will only ever yield a handful of measurements.}
With each subsequent measurement, the same procedure is repeated:
\begin{inparaenum}[(1)]
\item compute the likelihood of the newly measured value based on the single-measurement function defined by the current maximum likelihood reference value;
\item if the likelihood is less than the defined threshold, issue a transient event trigger. Do not update the estimate of the reference value;
\item if the likelihood is more than the defined threshold (within the likelihood interval), recalculate the joint likelihood function including the new measurement and update the maximum likelihood reference value.
\end{inparaenum}
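As a concrete sketch, the loop above can be written out for Poisson-distributed counts. The function and variable names here are illustrative, not part of the method's formal specification; the likelihood of a new value is taken as its ratio to the most likely single measurement under the current maximum likelihood reference value:

```python
import math

def log_pmf(n, nu):
    """Log of the Poisson density f(n; nu) = nu^n e^(-nu) / n!."""
    return n * math.log(nu) - nu - math.lgamma(n + 1)

def monitor(counts, log_threshold=math.log(1 / 8)):
    """Run the per-measurement loop; return the indices that trigger."""
    triggers = []
    total = counts[0]   # the first measurement initializes the reference
    used = 1
    for i, n in enumerate(counts[1:], start=1):
        nu_hat = total / used            # current ML reference value
        mode = math.floor(nu_hat)        # most likely single measured value
        # (1) likelihood ratio of the new value vs. the most likely one
        log_ratio = log_pmf(n, nu_hat) - log_pmf(mode, nu_hat)
        if log_ratio < log_threshold:
            triggers.append(i)           # (2) trigger; do not update
        else:
            total += n                   # (3) update the joint estimate
            used += 1
    return triggers
```

For instance, `monitor([5, 5, 4, 6, 5, 20])` flags only the last measurement: by then the reference rate is well estimated at 5, and a count of 20 falls far below the 1/8 likelihood threshold.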
The single or multiple thresholds used to identify the occurrence of a transient event must be defined and optimized according to the application.
\begin{figure*}[t]
\epsscale{1.0}
\begin{center}
\includegraphics[scale=0.22, angle=-90]{likelihood_n1.ps}
\includegraphics[scale=0.22, angle=-90]{likelihood_n2.ps}
\includegraphics[scale=0.22, angle=-90]{likelihood_n7.ps}
\includegraphics[scale=0.22, angle=-90]{likelihood_n19.ps}
\end{center}
\caption{\footnotesize Graphical representation of the single-measurement and joint likelihood functions after one (panel (a)), two (panel (b)), seven (panel (c)) and nineteen (panel (d)) measurements of a Poisson variable with a true rate parameter of $\nu=5$. The 1/8 likelihood interval is shown in panel (d).
\label{f:illustration}}
\end{figure*}
\subsection{The Likelihood Function}
\label{s:likelihoodFunction}
The likelihood is proportional to the product of probabilities (of probability densities, in the continuous case), and its absolute scale is arbitrary: it is defined only up to a multiplicative constant. Hence, only the \emph{ratio} of likelihoods is meaningful. Although it is reminiscent of a probability distribution, it is quite distinct, because only its shape matters while its scale does not. In the words of \citet[][p.\ 327]{1922RSPTA.222..309F}, ``the likelihood may be held to measure the degree of our rational belief in a conclusion'', and in those of \citet[][p.\ 28]{Royall:1997vc}, ``Probabilities measure uncertainty and likelihood ratios measure evidence.'' It is worth emphasizing the following point: before making a measurement, we have probabilities; after making the measurement, we have likelihoods. We adopt the convenient approach suggested by \cite{1922RSPTA.222..309F} himself, and normalize the likelihood function such that the maximum equals one. This then implies that the value of every point on the curve is already the ratio to the maximum likelihood.
The most fundamental distinction is that probability density relates to the random variable whereas likelihood relates to the parameter. The probability density function expresses how we can expect the measured values of the random variable to be distributed given a certain value of the parameter. For example, if we take the rate parameter $\nu$ of the Poisson distribution to be equal to 5 events per second, the density function tells us that the probability to detect 5 events in a one second interval is given by the value of the density function at 5 and equals 17.5\%, or that the probability to detect 10 events or more is given by the sum from 10 onward and equals 3.2\%.
Now, knowing that we are observing a Poisson process, say that we measure five events in the sampling time of one second. That measurement is made and the uncertainty about its value therefore disappears. The value is now evidence about the probability distribution, about the unknown value of the parameter. The likelihood function represents this evidence: it tells us that the most likely value of the rate parameter $\nu$ is 5, and that, for example, it is 1.1 times more likely than 6, 4.6 times more likely than 10, and 32 times more likely than 13.4. In the likelihood function, the $y$-axis is the relative likelihood and the $x$-axis represents the different values of the parameter. In the case of a single measurement $n_0$, the likelihood associated with each value of the parameter $\nu$ on the abscissa is given by (proportional to) $f(n_0;\nu)$; for two measurements $n_1$ and $n_2$, it is given by the product $f(n_1;\nu) f(n_2; \nu)$.
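These numbers are easy to verify directly. A minimal check in Python (the function names are ours):

```python
import math

def poisson_pmf(n, nu):
    """Poisson density f(n; nu) = nu^n e^(-nu) / n!."""
    return nu ** n * math.exp(-nu) / math.factorial(n)

# Density: with nu = 5, the probability of detecting exactly 5 events
p5 = poisson_pmf(5, 5)            # about 0.175

# Likelihood: after measuring n0 = 5, L(nu) is proportional to f(5; nu),
# so relative likelihoods of parameter values are ratios of densities.
def ratio(nu_a, nu_b, n0=5):
    return poisson_pmf(n0, nu_a) / poisson_pmf(n0, nu_b)

r6 = ratio(5, 6)       # about 1.1
r10 = ratio(5, 10)     # about 4.6
r134 = ratio(5, 13.4)  # about 32
```

Note that the same function $f$ is read two ways: as a density in $n$ for fixed $\nu$, and as a likelihood in $\nu$ for fixed, measured $n$.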
In this work, we consider five families of random variables: the Poisson, normal, $\chi^2$, exponential, and inverse-exponential. For simplicity and clarity of presentation, the relevant equations are given in the Appendix instead of being presented or derived in the text.
\subsection{Illustration of Method}
\label{s:illustrationOfMethod}
The instrument is turned on and the observation begins. Nothing is known other than that we are observing a non-variable Poisson process. The sampling interval is one second and in the first interval three events are detected. With this first measurement, we can already compute the likelihood function of the rate parameter $\nu$, and because we have only one measurement, the joint likelihood is identical to the single-measurement likelihood (\cref{f:illustration}, panel (a)).
In the second interval, four events are detected. We calculate the likelihood ratio for this measurement in relation to the previously estimated maximum likelihood value of the rate parameter which was 3, and find that it is $L_1(\nu=4)/L_1(\nu=3) = 0.872$ which is much larger than the warning threshold of $1/8 = 0.125$. Therefore, we compute the joint likelihood function using both measurements, and now have a slightly more accurate estimate of the rate parameter as exhibited by the narrower peak of the joint likelihood; the single-measurement function is also updated accordingly (\cref{f:illustration}, panel (b)).
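The ratio quoted for the second interval can be reproduced in a few lines (a sketch, with our own naming):

```python
import math

def likelihood(nu, n0=3):
    """Poisson likelihood of the rate nu given the first measurement n0 = 3."""
    return nu ** n0 * math.exp(-nu) / math.factorial(n0)

# L1(nu=4) / L1(nu=3): the second measured value evaluated against the
# maximum likelihood rate from the first measurement
r = likelihood(4) / likelihood(3)   # about 0.872, well above 1/8
```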
The observation continues and by the time we have made 7 measurements the joint likelihood is quite a bit narrower (\cref{f:illustration}, panel (c)), and after 19 measurements the peak is significantly narrower and peaks on the true rate parameter, $\nu_{t}=5$ (\cref{f:illustration}, panel (d)). Naturally, the more measurements are made, the sharper the peak grows, and with it the accuracy of our maximum likelihood estimate of the rate parameter, which in turn defines the single-measurement likelihood function against which we evaluate the evidence for the occurrence of a transient with each new measurement by calculating the likelihood ratio; the 1/8 likelihood interval is drawn in panel (d) and seen to extend from 1.2 to 10.2.
\section{Astrophysical Transients}
\label{s:astrophysics}
\subsection{Introductory Remarks}
\label{s:astroIntro}
The bounds between which changes can be detected are defined by the instrument's sampling rate for the shortest timescales, and by the time spanned by the data on the longest: if changes in the system take place much faster than the sampling rate (millisecond pulses sampled on second timescales), or much slower than the duration of the observation (an outburst that lasts longer than the time during which it was observed without appreciable change in brightness), they will not be readily detectable. Naturally, transient searches only have meaning within these bounds.
Within these limits, the granularity in time of each iteration of the likelihood procedure is a crucial factor. If it is made too fine compared to the timescale of the transient, the change from one iteration to the next will be gradual and smooth enough to go unrecognized. Therefore, the time between iterations must be chosen to ensure optimal sensitivity to timescales relevant to the problem. If there are several, the solution is to run multiple procedures in parallel, each with a different characteristic timescale. Another solution is to monitor the evolution of the average value of the likelihood ratio as a function of time. This technique relies on the stability of the value of the likelihood ratio, and is as sensitive as our knowledge of the background conditions against which we want to detect transient events (which grows with each additional measurement), and most importantly on the probability distribution of the measurements. Naturally, we can look at the evolution of the likelihood ratio in each channel, in the joint function or in both to maximize our sensitivity to specific kinds of transients.
\subsection{Transients in Images}
\label{s:images}
Imaging data is acquired by different means at different wavelengths, but in what concerns the task of identifying transient features in these images, the main requirement is that there must be at least two, and preferably a collection of, images of the same region of the sky. Thus, independently of the actual integration time of each snapshot or the means by which this integration time is chosen, we can treat the problem as one of sky pixels, each with an ensemble of measured values that we take as an intensity regardless of the units.
With regard to the intensity, there are two cases: \begin{inparaenum}[(1)] \item the intensity in every pixel is independent of (not correlated with) the intensity in any other pixel, or \item the intensity in a given pixel is related to the intensity of neighboring pixels. \end{inparaenum} If the instrument's point spread function (PSF) is mostly contained within a single detector pixel, then we consider each pixel to give independent intensity measurements and also define the size of independent sky pixels. If this is not the case and the PSF spreads on several detector pixels, then we can either make sky pixels as large as necessary to include a large enough fraction of the PSF in order to be considered independent of neighboring sky pixels, or we must include the model of the PSF in the analysis.
If pixel intensities are independent of one another, this makes an image a collection of intensity measurements, one per pixel, that each corresponds to a distinct region of the sky. Each snapshot yields one such intensity measurement for each pixel, and therefore the problem immediately reduces to the illustrative case above: with the first snapshot, we already have the means to construct the model for each pixel of what can be expected; with each additional snapshot, the model improves in accuracy, and consequently, our ability to detect changes in intensity increases; and the expected intensity in each sky pixel is represented by the single-measurement likelihood function centered and scaled in accord with the joint likelihood function constructed from the ensemble of intensity measurements.
The two relevant functional forms are those of the Poisson and the normal density distributions (\cref{eq:poissonDensity,eq:normalDensity}). The procedure was illustrated in Section\;\ref{s:illustrationOfMethod} using the Poisson function (\cref{eq:poissonLikelihood-single,eq:poissonLikelihood-joint}), but in most imaging applications, the normal likelihood function (\cref{eq:normalLikelihood-single,eq:normalLikelihood-joint}) will likely be the appropriate one to use.
\begin{figure*}
\epsscale{1.0}
\begin{center}
\includegraphics[scale=0.34, angle=-90]{ts-30and60s.ps} \hspace{2mm}
\includegraphics[scale=0.34, angle=-90]{transientInTimeSeries.ps}
\end{center}
\caption{\footnotesize Time series of the simulated observation shown in counts per bin for 30 and 60\,s bins (left panel, bottom and top respectively), and instantaneous count rate calculated as the inverse of the time between events shown as a function of each event's arrival time with the transient detection likelihood also evaluated in real time (right panel, top and bottom respectively). The maximum value of the instantaneous rate is 2950\,s$^{-1}$\xspace, but the scale is truncated to 86\,s$^{-1}$\xspace to match the scale of the 60\,s time series and better show the scatter. Values of the log-likelihood that do not meet the trigger criterion are shown at the warning threshold level of $-2.1$ (likelihood of 0.12). The sole detection is that of the transient event, and it dips down to $-48.41$ (likelihood of $10^{-21}$).
\label{f:transient}}
\end{figure*}
If the detector pixels are smaller than the PSF and the intensity measurements are hence correlated with neighboring ones, the problem is more complex in its details but the same in its principle. The most significant difference in the approach is that instead of treating images as a collection of independent intensity measurements, they must be considered and modeled as a whole. The global model will preferably be constructed before the observation to include all known sources in the field. With the first image, an intensity is estimated for each identified source included in the model taking into account the instrument's PSF, and then monitored using each subsequent snapshot. Accordingly, each source has its own joint likelihood function that is better defined with each additional image, as well as its own single-measurement likelihood function based on the joint likelihood. These likelihood functions, however, are based on the model of the PSF, and thus only indirectly on the analytical probability density.
In addition to the iteratively refined modeling of likelihood functions for each identified source, every other pixel in the image is monitored as if it were independent of all others in the same way as in the previous case. This is obviously of prime importance given that our aim is to detect transients and especially new, weak, and invisible or otherwise unidentified sources. When the intensity of one of these pixels is seen to climb or fall out of its likelihood interval in a number of consecutive snapshots, the new source is then properly modeled using the PSF as for all other sources in the field. The detailed treatment depends on the PSF and is not carried out here for a hypothetical case, but the use of a global model is analogous to the treatment of both energy and power spectra. We thus leave out of this section a more explicit discussion.
\subsection{Transients in Time Series}
\label{s:timeSeries}
The procedure for treating time series is, in fact, identical to the one for a single independent sky pixel. And here also, no grouping or resampling of any kind is necessary such that all the information is used with full resolution to yield maximum accuracy in our estimates.
If the intensity measurements are derived from snapshots, as is often the case at many wavelengths, then a time series is made up of intensities from a single pixel in the images as a function of time. If, instead, measurements consist of the detection of individual photons, then this can be treated either as in the general illustration of the method in Section\;\ref{s:illustrationOfMethod}, or, to use every event as each one is detected, the rate parameter can be estimated directly by the inverse of the time between events. For example, a quarter of a second between two events gives an estimate of the rate parameter (the intensity in events per second) of 4\,s$^{-1}$\xspace; if five seconds pass, then it is 0.2\,s$^{-1}$\xspace. Therefore, whether the intensity is measured in a sky pixel as a function of time in successive images, whether it is the number of events detected during a given time interval, or whether it is estimated by the inverse of the inter-arrival times for each and every detected photon, the procedure remains the same in all respects and is as in Section\;\ref{s:illustrationOfMethod}. There is, however, a crucial difference in these three closely related cases that must be highlighted.
In the first case, we can expect the intensity values to be either Poisson or normally distributed depending on the imaging instrument and characteristics of the experiment; in the second, they will always follow the Poisson distribution; and in the third, because inter-arrival times are exponentially distributed, the corresponding \emph{instantaneous rates will follow the inverse-exponential distribution}. The exact distribution of a given random variable is the key to both the accuracy and the power of the likelihood function in statistical analysis.
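For the third case, the instantaneous-rate estimate is simply the reciprocal of each inter-arrival time. A trivial helper (ours) makes the bookkeeping explicit:

```python
def instantaneous_rates(arrival_times):
    """Instantaneous rate estimates: the inverse of the time between
    consecutive events, one estimate per detected event after the first."""
    return [1.0 / (t1 - t0)
            for t0, t1 in zip(arrival_times, arrival_times[1:])]

# Gaps of 0.25 s and 5 s give rates of 4 and 0.2 events per second,
# as in the example above.
rates = instantaneous_rates([0.0, 0.25, 5.25])
```

Because the inter-arrival times of a Poisson process are exponentially distributed, these reciprocal values follow the inverse-exponential distribution, which is the density against which their likelihood must be evaluated.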
Figure \ref{f:transient} is a demonstration of the technique applied to an unbinned time series (a list of event arrival times). In this example, we are observing a hypothetical flaring X-ray source embedded in a region from which the average event rate is 1\,s$^{-1}$\xspace. The observation lasted one hour, and contained a weak flare that lasted about 30\,s and counted exactly 33 events from start to finish. Even though a hint of its presence is seen in the 30\,s binned time series (but not with 60\,s bins), this event could easily have gone unnoticed if the detection relied on the examination of similar time series.
With the procedure described above, the flare is detected at a log-likelihood of -48.41, and thus likelihood of about $10^{-21}$. This is the combined likelihood of detecting at least eight consecutive events, each of which met the warning threshold, when we expect the mean detection rate. In contrast, looking at the peak that stands out in the 30\,s binned time series, we would compare a total intensity of 53 events against the expectation of 30, and find a likelihood of $5.8\times 10^{-4}$, which might be enough to make us raise an eyebrow, but not much more. This also helps illustrate the important difference between the binned treatment of the data and the unbinned likelihood approach that treats each measurement individually.
In this example with a weak transient, the warning threshold was set at the relatively high log-likelihood value of $-2.1$ (likelihood of 0.12), but the detection trigger was set to require eight consecutive events over the warning threshold.\footnote{This detection strategy was optimized using simulations: when the observation did not include a burst, false positives were detected only 10\% of the time, but when a flare was included, although its strength is indeed very weak, it was detected about 60\% of the time.} Similar searches will require this kind of strategy. For detecting strong but short-lived transients, it would instead be better to use a very low, single-point threshold. Each application has its own purpose and thus optimal settings that can be very precisely tuned with simulations.
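A sketch of such a trigger strategy, assuming the per-measurement log-likelihood ratios have already been computed; the threshold and consecutive count are the values quoted above, but both are free parameters to be tuned by simulation:

```python
def find_triggers(log_ratios, warn=-2.1, consecutive=8):
    """Trigger when `consecutive` successive log-likelihood ratios fall
    below the warning threshold; report (index, combined log-likelihood)."""
    triggers = []
    run = []  # the current streak of sub-threshold values
    for i, lr in enumerate(log_ratios):
        if lr < warn:
            run.append(lr)
            if len(run) == consecutive:
                triggers.append((i, sum(run)))
        else:
            run = []  # a single likely value resets the streak
    return triggers
```

The combined log-likelihood reported at the trigger is the sum of the individual log-ratios in the streak, which is how the value of $-48.41$ in the example arises from eight consecutive sub-threshold events.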
Related considerations will be discussed in detail in a subsequent paper, but it is worth noting again the significant difference between using the approach described here compared to that of binning the time series and looking for outliers. Although establishing the optimal warning threshold and the number of consecutive warnings required to define a transient is indeed akin to defining, by whatever means, the optimal bin time for a particular kind of transient (e.g., \cite{1998ApJ...504..405S}), it is very different in several ways. Using each measurement individually gives access to the full resolution of the data, and thus the exact distribution in time of the measurements. This, in turn, allows us to legitimately ask the question for each measurement `what is the likelihood of this measurement if we expect that?' (the single-measurement likelihood), and also to ask the question `what is the likelihood of these two, three, four or however many measurements taken together as an ensemble, when we expect this value?' (the joint likelihood), and the answer is always perfectly well-defined and directly interpretable as statistical evidence with respect to the question we are asking (the hypothesis we are testing). Using each measurement also allows us to discriminate between consecutive measured values above the warning threshold, and a concurrence of several separate fluctuations above that threshold that are not consecutive but happened to occur in the same bin time interval.
\subsection{Transients in Energy Spectra}
\label{s:energySpectra}
In energy spectra, frequency histograms as a function of energy, transient features can also appear. When they do, they can potentially give powerful clues as to the physical changes in the conditions of the system under study. As with images, there are two ways of approaching the problem: each energy channel can be treated as independent of the others, in which case the procedure is the same as that for independent pixels, or we can recognize that the intensity in each spectral channel is almost surely related to the intensity in every other channel because radiative processes, other than those that manifest in mono-energetic line emission, produce a continuum of radiation. The former is simple, relies on no additional information and can thus be very powerful to detect narrow features. It is, however, limited in its sensitivity to changes that occur in several channels simultaneously. The latter is more complex and relies on the use of a global spectral model in order to treat the problem with all channels considered together, but is substantially more sensitive to subtle changes.
Hence, in the absence of a model, we use the former approach, treating each energy channel individually. We emphasize the importance of using all spectral information provided by the instrument and not grouping channels into coarser energy bins because \emph{every measurement counts}, and each one must be used and accounted for.
Thus, with the very first measurement that falls in a single energy channel, the procedure is initiated and we have an estimate of the expected value that is used to construct the first likelihood function for that channel; with the second in the same channel, the likelihood ratio of that measured value with respect to the previous is calculated and evaluated for its potential as signaling a transient event, following which the joint and single-measurement likelihood functions are updated; with the third and every subsequent measurement, the same procedure is followed.
With a reliable model, even if it is simple, the power to detect changes in the shape of the spectrum increases markedly because the amount of information contained within and used to iteratively update the likelihood function is greater: all the measurements in each channel add to our knowledge of every other channel and makes the overall likelihood function as informative as it can be. The means by which it is calculated and updated is slightly different. In whichever way the measured values are distributed in a channel, it is the model that links these across the entire spectral range, and the key component from which the likelihood function is constructed and on which the entire procedure hinges.
From an initial estimate of the model's parameter values we define the joint likelihood function. In the case of Poisson distributed data, the function is the product of the contributing elements from each channel, each supplying a term of the form given in Equation (\ref{eq:poissonDensity}): $$f(n;\nu) = \frac{\nu^n e^{-\nu}}{n!}.$$
This yields an ensemble of such terms, each with a different value of $n$: $n_i$---the measured intensity in channel $i$, and a different value of $\nu$: $\nu_i$---the model-predicted intensity in that channel. There is thus a single likelihood function: the joint likelihood function for the model across all spectral channels, and it is given by Equation (\ref{eq:poissonLikelihood-model}): $$L(\pmb{\nu}|\pmb{n}) = \prod_i \frac{{\nu_i}^{n_i} e^{-\nu_i}}{n_i!}.$$
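As a sketch, the model joint log-likelihood is a straightforward sum over channels (the names are ours; in a real application each $\nu_i$ would come from folding the spectral model through the instrument response):

```python
import math

def model_log_likelihood(counts, predicted):
    """ln L = sum_i [ n_i ln(nu_i) - nu_i - ln(n_i!) ], where counts[i]
    is the measured intensity n_i and predicted[i] the model value nu_i."""
    return sum(n * math.log(nu) - nu - math.lgamma(n + 1)
               for n, nu in zip(counts, predicted))
```

The log-likelihood is maximized when the model prediction in each channel matches the measured count, which is what the fitting procedure below exploits.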
\begin{figure*}[htb]
\epsscale{1.0}
\begin{center}
\includegraphics[scale=0.225, angle=-90]{normal.ps}
\includegraphics[scale=0.225, angle=-90]{chi2_1.ps}
\includegraphics[scale=0.225, angle=-90]{chi2_2.ps}
\includegraphics[scale=0.225, angle=-90]{chi2_10.ps}
\end{center}
\caption{\footnotesize Illustration of the relationship between standard normal, $\chi^2$ and exponential variables. Using the event list of the time series presented in Figure\,\ref{f:transient}, and calculating the phases corresponding to the independent periods between 0.5 and 3600\,s, in panel (a) we see the normalized frequency distribution of the scaled variable $c=\sqrt{2N}C$ overlaid with the standard normal density function; in panel (b) we see its square, $c^2=2NC^2$, overlaid with the $\chi^2_1$ density function; and in panel (c) we see the normalized distribution of the Rayleigh statistic, $R^2=c^2+s^2$, overlaid with the $\chi^2_2$, which is the exponential density function with $\tau=2$. Panel (d) illustrates, using $10^5$ pseudo-random numbers, the difference between the sum of five $\chi^2_2$ variables, which results in the unimodal $\chi^2_{10}$ distribution peaking at eight, and the distribution that results from multiplying or scaling a $\chi^2_2$ by five, which yields the exponential distribution with $\tau=10$.
\label{f:variables}}
\end{figure*}
Here as in the other cases, every single additional measurement brings an additional value of $n_i$ in channel $i$ from which the accuracy of estimates of $\nu_i$ can be improved. However, unlike the procedure for independent pixels or spectral channels, the joint likelihood function is used to determine the most likely model parameter values given the available data, and this is done by fitting using the $C$ statistic \citep{1979ApJ...228..939C} given by $C=-2\ln L$, with $\ln L$ given by Equation (\ref{eq:poissonLogLikelihood-model}).
Since fitting entails varying the values of the parameters in order to maximize the likelihood and thus minimize the value of the fit statistic, comparing different model parameters is done through the ratio of likelihoods that translates to differences of log-likelihoods. Hence, terms that do not depend on the parameters (in this case the single term $\ln n_i!$) do not contribute and can be dropped. The fit statistic becomes
\begin{equation}
C = 2\sum_i \left[ \nu_i - n_i\ln({\nu_i}) \right].
\end{equation}
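A minimal sketch of fitting by minimizing $C$, here with a hypothetical one-parameter constant model and a simple grid search standing in for a real optimizer:

```python
import math

def cstat(counts, predicted):
    """C = 2 * sum_i [ nu_i - n_i ln(nu_i) ], the constant term dropped."""
    return 2 * sum(nu - n * math.log(nu) for n, nu in zip(counts, predicted))

def fit_constant(counts, grid):
    """Grid search for the constant model nu_i = a that minimizes C."""
    return min(grid, key=lambda a: cstat(counts, [a] * len(counts)))

# For a constant Poisson model, minimizing C recovers the sample mean.
best = fit_constant([4, 6, 5, 5], [a / 100 for a in range(100, 1001)])
```

Here `best` comes out at the sample mean of 5.0, as expected analytically from setting $dC/da = 0$.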
Thus what is updated with each measurement or iteration are the model's parameter values, and the identification of transient spectral features is based on the likelihood of the current freshly updated observed spectrum in relation to the previous best fit model. The thresholds are optimized for the application, both in terms of the individual value of the likelihood ratio after each measurement and in terms of the number of consecutive unlikely values. A discussion of the historical context, the motivation for, and the details that relate to procedures used in comparing data and models is the subject of another paper in preparation, and we therefore do not go any further into this here.
\subsection{Transients in Power Spectra}
\label{s:powerSpectra}
The power spectrum is estimated by the periodogram and carries information about various scales in the observed system. Because it presents what is called ``power'', which is a measure of the amount of ``activity'' at a given frequency or timescale, the power spectrum conveys information about both dynamical and distance scales. Any spontaneous and stochastic, or triggered and stimulated event that leads to a reconfiguration of the dynamics of the system, be it local or global, will generally manifest itself in the power spectrum. How this will then translate into the periodogram for a given geometry and observing conditions depends upon how large, long-lasting, and coherent the event is, and, most importantly, on the sensitivity of the instrument with which the system is being observed. Nevertheless, such transients can potentially be detected and identified as such, and we seek the best means to do so.
The approach most resembles the treatment of energy spectra in its details. For power spectra as well, the problem can be treated as though the values in each channel, in this case frequency channels, were independent of one another---something that is only true when the power spectrum is globally flat with equal power at all frequencies---or it can be treated with the use of a global model that prescribes how the values in the different channels are related. Note that in both cases, the model can be theoretical, or empirical and constructed from the successive iterations of measurements and refinement of the estimated spectrum by computations of the periodogram. Therefore, conceptually and procedurally, this problem is the same as for the energy spectrum.
There are two differences, however. One is superficial: that models for power spectra are generally fewer, largely phenomenological and often simpler than those used for energy spectra. The other is fundamental: that the values of power estimates in frequency channels are distributed neither as Poisson nor as normal variables, but are instead related to $\chi^2$ and exponential distributions. The reason is that each power estimate is calculated from a sum of squared standard normal variates.
For an ensemble of detected events, each with its own arrival time, the simplest way to calculate the power of the fundamental harmonic at a given frequency $f$, is to map each arrival time $t_i$ to its phase $\phi_i$ within the periodic cycle that corresponds to that frequency ($p=1/f$), and calculate the Rayleigh statistic \citep{1983ApJ...272..256L}:
\begin{equation}
R^2 = 2N(C^2 + S^2)
\end{equation}
where $C$ and $S$ are defined as:
\begin{equation}
\label{eq:CandS}
C = \frac{1}{N} \sum_{i=1}^N \cos{\phi_i}
\hspace{4mm} {\rm and} \hspace{4mm}
S = \frac{1}{N} \sum_{i=1}^N \sin{\phi_i}.
\end{equation}
First, the expectation value of the functions $\cos\phi$ and $\sin\phi$ is zero: $\mean{\cos{\phi}} = \mean{\sin{\phi}} = 0$. Therefore, so are those of $C$ and $S$. Second, the variances of $\cos\phi$ and $\sin\phi$ both equal one half: ${\rm V}[\cos{\phi}]={\rm V}[\sin{\phi}]=1/2$. Therefore, those of $C$ and $S$ are a factor of $N$ times smaller: ${\rm V}[C]={\rm V}[S]=1/2N$. Finally, since ${\rm V}[mX]$\,=\,$m^2 {\rm V}[X]$, where $m$ is a numerical constant, the scaled variables $c$\,=\,$\sqrt{2N}\cdot\/C$ and $s$\,=\,$\sqrt{2N}\cdot\/S$ have a variance of one: ${\rm V}[\!\sqrt{2N}\cdot C] = {\rm V}[\!\sqrt{2N}\cdot S] = 2N\cdot {\rm V}[C] = 1$.
Note however, that the phases are uniformly distributed between 0 and 2$\pi$, and the sine and cosine are distributed between $-1$ and 1 with their characteristic, symmetric U-shaped distribution with minimum at 0, and rising slowly toward the edges where it peaks very sharply. It is the summing and averaging of several identically distributed values that yields the two normal variables $C$ and $S$, and standard normal $c$ and $s$.
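These moments are easy to check numerically (a sketch with arbitrarily chosen values of $N$ and of the number of trials): drawing uniform phases, forming $C$ and $S$ as in Equation (\ref{eq:CandS}), and scaling by $\sqrt{2N}$ yields variables with mean near 0 and variance near 1.

```python
import numpy as np

rng = np.random.default_rng(1)
N, trials = 500, 10000   # events per trial, number of independent trials

phi = rng.uniform(0.0, 2.0 * np.pi, size=(trials, N))  # uniform phases
C = np.cos(phi).mean(axis=1)                            # Eq. (CandS)
S = np.sin(phi).mean(axis=1)
c = np.sqrt(2 * N) * C   # scaled variables
s = np.sqrt(2 * N) * S

# Means near 0 and variances near 1, as derived above
print(c.mean(), c.var(), s.mean(), s.var())
```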
\begin{figure*}[htb]
\epsscale{1.0}
\begin{center}
\includegraphics[scale=0.3, angle=-90]{ts-mkn421.ps} \hspace{2mm}
\includegraphics[scale=0.225, angle=-90]{psd-mkn421.ps} \hspace{2mm}
\includegraphics[scale=0.225, angle=-90]{powers-detrended-mkn421.ps}
\end{center}
\caption{\footnotesize Illustration of periodogram powers of astrophysical red noise as scaled $\chi^2_2$ variables using a 43 ks observation of Mkn 421 with \emph{XMM-Newton}. Panel (a) shows the Reflection Grating Spectrometer time series in rates (0.3--2 keV with 85\,s bins). In black are the measured data and in red are those that result from applying a Kalman filter \citep{Kalman:1960kh}, which very effectively suppresses the white noise scatter \citep[see also][]{1997A&AS..124..589K}, and thus allows for a better constrained fit on the resultant periodogram that is shown in panel (b) with the best fit power-law model. Panel (c) is the distribution of de-trended periodogram powers overlaid with the analytical form of the $\chi^2_2$ density function. The excess power at the lowest frequency, about 12 times above the best-fit in panel (b), is due to the global trend in the time series marked by a large difference between the start and end of the observation, and is seen in the right hand tail and greatest value in the histogram in panel (c). (A color version of this figure is available in the online journal.)
\label{f:rednoisepowers}}
\end{figure*}
This implies that: $$R^2=c^2 + s^2=2NC^2 + 2NS^2=2N(C^2 + S^2)$$ is the sum of the squares of two standard normal variables. Squaring a standard normal yields a $\chi^2$ variable with one degree of freedom (dof). Summing $\chi^2$ variables yields another $\chi^2$ variable with a number of dof that is the sum of the dof of the variables being summed (illustration in \cref{f:variables}).
Therefore, the power being the sum of two $\chi^2_1$ variables is $\chi^2_2$ distributed with a mean and standard deviation of two (variance of four). This is convenient due to the simplicity of the purely exponential $\chi^2_2$ density function:
\begin{equation}
\chi^2_2(x) = \frac{1}{2} e^{-x/2}.
\end{equation}
The caveat here is that this is only true if the power estimates at different frequencies are independent, which is only true for non-variable processes: the same kind that lead to a globally flat power spectrum with equal power at all frequencies. This is therefore an ideal case that should be treated as such, and not given more attention than it deserves. Nonetheless, it is instructive and illustrates how normal, $\chi^2$ and exponential probability densities can be related.
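The relationship can be demonstrated numerically (a sketch for this ideal independent case, with $N$ and the number of trials chosen arbitrarily): Rayleigh powers computed from uniform random phases have mean and standard deviation of two, and the tail fraction above a given power matches $e^{-x/2}$.

```python
import numpy as np

rng = np.random.default_rng(2)
N, trials = 200, 20000
phi = rng.uniform(0.0, 2.0 * np.pi, size=(trials, N))
R2 = 2 * N * (np.cos(phi).mean(axis=1) ** 2 + np.sin(phi).mean(axis=1) ** 2)

# chi^2_2 = exponential with tau = 2: mean 2, sigma 2, P(R^2 >= x) = exp(-x/2)
print(R2.mean(), R2.std())   # both near 2
print(np.mean(R2 > 4.0))     # near exp(-2) ~ 0.135
```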
A natural choice for a general $\chi^2$ fit statistic is twice the negative of the log-likelihood of Equation (\ref{eq:chiSquareLogLikelihood}), $K=-2\ln L$, and dropping the term that does not depend on the parameters $k_i$ yields:
\begin{equation}
K = -2\sum_i \left[ \left(\frac{k_i}{2}-1\right)\ln x_i - \frac{k_i}{2}\ln 2 - \ln\Gamma\left(\frac{k_i}{2}\right) \right].
\end{equation}
Just as the $C$ statistic is optimal for Poisson distributed data because it is derived from the likelihood function, which is in turn derived from the probability density for Poisson variables, the $K$ statistic is optimal for $\chi^2$ distributed data for the same reason. It is optimal for fitting a model to a set of measurements that are samples of random $\chi^2$ variables with potentially different dof $k_i$ in each channel $i$.
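As a sketch of the $K$ statistic in use (the function name and the scan over candidate dof are ours): evaluating $K$ on data simulated from a $\chi^2_4$ distribution, the true dof minimizes the statistic among the candidates tried.

```python
import numpy as np
from math import lgamma

def kstat(x, k):
    """K = -2 ln L for chi^2-distributed data with dof k (scalar or per-channel),
    dropping the parameter-independent -x_i/2 term."""
    x = np.asarray(x, dtype=float)
    k = np.asarray(k, dtype=float)
    lg = np.vectorize(lambda v: lgamma(v / 2.0))(k)
    return -2.0 * np.sum((k / 2.0 - 1.0) * np.log(x)
                         - (k / 2.0) * np.log(2.0) - lg)

rng = np.random.default_rng(3)
x = rng.chisquare(df=4, size=5000)                # data with true dof = 4
scores = {k: kstat(x, k) for k in (2.0, 4.0, 8.0)}
print(min(scores, key=scores.get))                # 4.0: the true dof minimizes K
```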
Now, a much more common and also more general case than that of the flat spectrum of a non-variable process, is that of a power-law distribution of power estimates as a function of frequency usually called red noise in Astrophysics (see the highly sophisticated work by \cite{2010MNRAS.402..307V} on a Bayesian approach to evaluating the significance of an almost or strictly periodic signal in red noise for a discussion of its properties). The ideal case of a globally flat power spectrum considered above is the simplest manifestation of a red noise process: that with a spectral index of zero. The power values in red noise are related to one another in a well-defined manner through the relation Power $\propto f^{-\alpha}$, where $f$ is the frequency and $\alpha$ is the power spectral index. This is a model that describes the shape of the underlying power spectrum; not the shape of any particular periodogram that serves as an estimate of it, subject to various degradation factors, distortions and statistical fluctuations.
As is the case for the likelihood, the absolute value of the power in the periodogram is not meaningful: it is only the relative power at a frequency compared to that at another that carries meaning. Therefore, any normalization factor appropriate for the application can be used to scale the periodogram up or down. The key, however, is that we are working with \emph{scaled $\chi^2_2$ variables}. This is demonstrably true for astrophysical red noise (\cref{f:rednoisepowers}), because dividing the power estimates in each channel by the best-fit power-law model yields values that are $\chi^2_2$ distributed.\footnote{It is necessary to demonstrate this relationship using real data, because many algorithms used to generate power-law noise, such as the one commonly used by \cite{1995A&A...300..707T}, are precisely like this by construction, for the Fourier components are generated by multiplying the model spectrum at each frequency by pseudo-random standard normal variates, one for the phase and one for the amplitude. Squaring and summing to get the power and then dividing by the best-fit power-law model will always give $\chi^2_2$ distributed powers.}
This would also be true for \emph{any} power spectral shape \emph{if} the process can be considered as one of simply scaling the basic $\chi^2_2$ variable that results from summing two squared standard normal variables by the underlying model, whatever the shape. This is indeed what we have always seen in our work in the frequency domain, and thus assume this to be generally the case.\footnote{\cite{1986ssds.proc..105D} and \cite{1990ApJ...364..699A} discuss this issue and also adopt this position.}
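The scaling argument can be illustrated by construction (a sketch with an arbitrary power-law index): building a periodogram by scaling a basic $\chi^2_2$ variable by an underlying model in each channel, then dividing the model back out, recovers $\chi^2_2$ powers with mean 2 and variance 4.

```python
import numpy as np

rng = np.random.default_rng(4)
f = np.arange(1, 2049) / 4096.0          # frequency grid
model = f ** -1.5                        # assumed underlying power spectrum
# Power in each channel: the model scaling a basic chi^2_2 variable
powers = model * rng.chisquare(df=2, size=f.size) / 2.0

detrended = 2.0 * powers / model         # divide out the model
print(detrended.mean(), detrended.var()) # near 2 and 4: chi^2_2 again
```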
\begin{figure*}[htb]
\epsscale{1.0}
\begin{center}
\includegraphics[scale=0.3, angle=-90]{ts-allTimes-5s.ps} \hspace{1mm}
\includegraphics[scale=0.225, angle=-90]{fft-allTimes-kalman.ps} \hspace{1mm}
\includegraphics[scale=0.225, angle=-90]{powAtPeriod.ps} \\\vspace{1mm}
\includegraphics[scale=0.3, angle=-90]{ts-duringQPO-0.1s.ps} \hspace{2mm}
\includegraphics[scale=0.225, angle=-90]{fft-duringQPO-kalman.ps} \hspace{2mm}
\includegraphics[scale=0.225, angle=-90]{logLikeOfPows.ps}
\end{center}
\caption{\footnotesize The top row shows the time series of the entire observation (binned to a resolution of 5\,s for clarity of presentation); the periodogram made from the Kalman-filtered, 0.05\,s resolution time series of the event arrival times; and the power at 1 Hz estimated at 10\,s intervals from the Rayleigh periodogram of the event arrival times as a function of time. The bottom row shows a zoom on the time series during the transient QPO from its start at 485\,s until its end at 515\,s after the start of the observation; the periodogram of the Kalman-filtered 0.05\,s resolution time series; and the log-likelihood as a function of time where only detections beyond the established threshold are shown. The QPO is characterized by 30 cycles of an almost periodic signal centered on 1 Hz with a standard deviation of 1/20 about that frequency and a pulsed fraction of 27\%.
\label{f:transientInPowspec}}
\end{figure*}
Therefore, whether or not we have a model describing the power spectral shape, we work with the power estimates at a given frequency as though they were $\chi^2_2$ variables scaled differently in each channel. This implies they are distributed according to the exponential density function (\ref{eq:exponentialDensity}), that their joint likelihood function is given by Equation (\ref{eq:exponentialLikelihood-model}), and thus the log-likelihood by Equation (\ref{eq:exponentialLogLikelihood-model}): $$\ln L(\pmb{\tau} | \pmb{x}) = -\sum_i (\ln\tau_i + x_i/\tau_i).$$
In the periodogram, $x_i$ is the measured, and $\tau_i$ is the model-predicted power in frequency channel $i$. We can therefore construct another fit statistic specifically for fitting periodograms based on this expression (\cite{1986ssds.proc..105D} also derive and use this statistic; see also \cite{2012ApJ...746..131B}) as was done above with $K$ for the general case of different $\chi^2$ variables, but now for the general case of a collection of exponentially distributed variables, such that $B = -2\ln L$:
\begin{equation}
\label{eq:exponentialLogLikelihood-global}
B = 2\sum_i (\ln\tau_i + x_i/\tau_i).
\end{equation}
When working with a power spectral model, the $B$ statistic is used to continuously compare the observed with the predicted distribution of powers in the periodogram, as is done with the $C$ statistic in the case of Poisson distributed events in an energy spectrum, or the $K$ statistic for $\chi^2$ distributed measurements more generally. Sensitivity to detect changes in the overall shape of the spectrum increases quickly with the number of iterations. However, fluctuations in the power estimates in each channel always remain important due to the intrinsically high variance of exponential variables, for which the standard deviation is equal to the decay constant ($\mu=\tau,$ $\sigma^2=\tau^2$ and thus $\sigma=\tau$).
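A minimal sketch of fitting with $B$ (the grid scan and the chosen input index are ours): scanning a power-law index over a simulated red-noise periodogram of scaled $\chi^2_2$ powers recovers the input slope by minimizing $B$.

```python
import numpy as np

def bstat(powers, model):
    """B = 2 * sum(ln tau_i + x_i / tau_i) for exponentially distributed powers."""
    return 2.0 * np.sum(np.log(model) + powers / model)

rng = np.random.default_rng(5)
f = np.arange(1, 1025) / 2048.0
true_model = f ** -2.0
# Scaled chi^2_2 powers: exponential with mean equal to the model in each channel
powers = true_model * rng.exponential(1.0, size=f.size)

alphas = np.arange(1.0, 3.01, 0.05)
best = alphas[np.argmin([bstat(powers, f ** -a) for a in alphas])]
print(best)   # close to the input index of 2.0
```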
As a demonstration, we consider a hypothetical observation in X-rays of a bright (500\,s$^{-1}$\xspace) accreting system whose variable emission comes mostly from two components: the accretion disk, and the hot and turbulent gas in the inner flow. In both, the emission processes are connected on all timescales, and thus each gives rise to a red noise component. The accretion disk is much larger in extent and has a sharp inner radius. It dominates at lower frequencies with a power-law index $\alpha=-1$, and has a high-frequency cutoff beyond which it does not contribute to the power spectrum. The turbulent inner flow is much smaller in extent because it is bounded by the inner edge of the disk. Its emission is more variable and dominates the high-frequency part of the spectrum with a power-law index $\alpha=-3$.
In this case, we are interested in monitoring the range of frequencies between 0.1 and 10 Hz for the appearance of a weak, short-lived, transient QPO that we expect to appear at or very near the break in the power spectrum at 1 Hz, which marks the boundary between the disk and the turbulent inner flow. For this, we make a periodogram every 10\,s with the events accumulated during this time interval, and monitor the power at one or any number of frequencies. Because we are interested in a short-lived QPO, the strategy is different than for the previous example of the time series: we cannot rely on the transient persisting in more than one ``measurement'', and therefore must establish a single detection threshold that is constraining enough for our application. This threshold is established using simulations. We have done this for the power at 1 Hz, the frequency of interest for us here, to first determine the average expected power (35), and then establish a threshold (log-likelihood of $-10.1$, and thus a likelihood of $4.1\times 10^{-5}$) that ensures a level of false detections of 5\%.
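The thresholding step can be sketched as follows (the numbers are illustrative and the quantile is per measurement; in practice the threshold would be tightened to account for the number of intervals monitored over the observation): simulate noise-only powers at the monitored frequency as exponential variables with the expected mean, compute their single-channel log-likelihoods, and take the quantile matching the desired false-detection level.

```python
import numpy as np

rng = np.random.default_rng(6)
tau = 35.0                # expected power at the monitored frequency (no QPO)
sims = 200000             # simulated noise-only 10 s measurements

x = rng.exponential(tau, size=sims)
loglike = -(np.log(tau) + x / tau)   # single-channel exponential log-likelihood

# Per-measurement threshold: only 5% of noise measurements fall below it.
# Analytically this is -(ln tau + ln 20), since x/tau ~ Exp(1).
threshold = np.quantile(loglike, 0.05)
print(threshold)
```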
The observation and the analysis are presented in Figure \ref{f:transientInPowspec} in which we see that the transient QPO is clearly detected in the likelihood monitoring, but, as should be expected from its very short lifetime, is not at all evident in the periodogram of the entire observation, even though it is indeed present. It is important to emphasize once more that to maximize the power of this method, the detection strategy must be devised and optimized for the application.
\section{Concluding Remarks}
\label{s:conclusion}
Treating and interpreting data as statistical evidence in seeking to further our understanding of physical processes and of the behavior of complex systems such as those we observe in astrophysics, using all the information carried by these data, is most directly done through the use of the likelihood function appropriately chosen according to the statistical distribution of the random variable that is measured. This approach yields a most powerful and effective means for treating any kind of data, because no information about the process under investigation is left out or ignored, nor degraded in any way; everything is seamlessly taken into consideration and into account in the statistical decision-making process. This is particularly appropriate for the search and identification of transient phenomena in all data domains of interest in astrophysics (temporal, spatial, energy, and frequency), and this is what we have tried to demonstrate.
\pagebreak
The method presented is well suited to handle the first two classes of transients that have a non-variable background, be it nil or of some constant level, without any further considerations. Evidently, in this as in any other method, the identification efficiency ultimately depends very intimately on the relative strength of the transient signal. Identifying transients with the described procedures is perfectly suited for analyzing archival data, where the data sets are already complete and well defined. It is, however, also powerful for real-time applications, maybe especially so. Handling the third class of transients characterized by a variable background requires additional care, as discussed in Section\;\ref{s:identifyingTransients}. Here are some additional considerations:
If the process is variable but predictable, as with a sinusoidal modulation, for example, then this is a simple extension of the procedure using a model, but in which the model itself evolves as a function of time; the formalism is otherwise identical. If the process is variable and unpredictable, it implies that the measurements in each pixel or channel are not distributed according to a particular and unchanging probability distribution. Instead, even though it may belong to a given family of distributions based on the nature of the process, it will inevitably have a hybrid shape due to the changes in that process as a function of time, which we must model empirically. Therefore, each pixel or channel is treated individually, but because we have no a priori expression for the likelihood function, the intensity and how it is distributed can be characterized approximately using the running mean and variance.
For highly variable processes, where deviations in shape from known probability distributions are large, looking at the distribution of measured values per pixel or channel does not make much sense, because the changing intensity in each is like a very small window onto complex interactions that give rise to variable emission that cannot be described by random variables with stationary probability distributions. Doing this is of very limited usefulness even when it is possible to find a function that can be fitted to the distribution of measured intensities, such as a log-normal distribution, for example. However, a variable process can be highly non-stationary in the time domain, but stationary in frequency space, having a power spectrum that does not change in time. This is analogous to a variable point source whose intensity varies markedly in successive images but whose location in the sky remains the same, and whose shape as it appears in each image is always given by the instrument's PSF. Combining the information carried by the data in the time and frequency domains, and treating it simultaneously in the fashion described in this paper, is certainly a most powerful means of analysis for detecting transient features in highly variable processes.
Note that, as alluded to in Section\;\ref{s:identifyingTransients}, the crucial elements in working with variable processes are the timescales involved: in this application, that of the transient with respect to that of the variability. More specifically, since the stationarity of the probability distribution can be considered as being a function of the timescale at which the process is viewed, in general it is possible to have a running estimation of that probability distribution which is stationary up to a given timescale, but evolves on longer timescales. In this way, the likelihood function and all the associated statistics are well defined at any point in time, and the method becomes a more general, time-dependent form of the procedure presented. As stated, details relating to this will be presented elsewhere.
The generality and wide applicability of the method we presented in this paper can also be viewed as its primary limitation. This is not unlike the majority of statistical methods with a specific purpose, and is related, very simply in this case, to the fact that there are various kinds of transients, and we want to detect them all with maximum sensitivity. Therefore, as was shown in both the case of a transient in a time series and in a power spectrum, the power of the method relies on simulations for an accurate estimation of the statistics of the process, and for defining the detection thresholds, which will in general be geared toward a certain class of events. Nonetheless, there are in principle no limits to the number of automated searches that can be performed in parallel, for any given data set or application. Furthermore, the generality of the formalism is such that it can be applied to identifying transients in other parameter spaces, where the independent variable is not time.
Another limitation relates to the full reliance on well-defined statistics because the likelihood function otherwise simply cannot be constructed. Although this may not appear to be an important factor in many methods commonly used, the truth is that it always is, but it is not necessarily recognized because of the generalized implicit assumption of normal statistics. The periodogram is an excellent example of the importance of using the correct probability distribution.
Having recognized that the power values in a frequency channel of any periodogram are exponentially distributed with a mean given by the expected power in that channel, the one-sided tail probability of finding a power value of 60 or greater, for instance, when the expectation is 30, is 0.135 or 13.5\%, which everyone would agree is definitely not worthy of even a second glance in terms of ``statistical significance''. However, using normal statistics (mean power of 30 and standard deviation of $\sqrt{30}$, say), finding a value of 60 or greater is a 5.47$\sigma$ result with a probability of about $10^{-8}$.
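These two tail probabilities are easy to verify (a sketch using only the standard library; the normal tail uses $P(Z>z)=\tfrac{1}{2}\,\mathrm{erfc}(z/\sqrt{2})$):

```python
from math import exp, erfc, sqrt

tau, x = 30.0, 60.0

# Correct exponential (chi^2_2-derived) tail probability
p_exp = exp(-x / tau)                # exp(-2) ~ 0.135: unremarkable

# Naive normal tail with mean 30 and sigma = sqrt(30)
z = (x - tau) / sqrt(tau)            # ~ 5.48 "sigma"
p_norm = 0.5 * erfc(z / sqrt(2.0))   # ~ 2e-8: a spurious detection

print(p_exp, z, p_norm)
```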
Therefore, although this limitation could be a weakness of the method in certain circumstances, it is clearly also a strength that highlights a very fundamental point which should rightly be incorporated in any statistical method: that of using the correct statistics. We believe this likelihood-based approach is a way forward, and will prove to be very valuable in many applications where transients and outliers are of importance.
\acknowledgements{The author is grateful to Andy Pollock for many stimulating and inspiring discussions, to Marnix Bindels for his suggestions and help in Java programming, especially with the AggregatePeriodogram, and to Joel Velasco for his reading recommendations in the philosophy of statistics and especially \emph{Statistical Evidence} \citep{Royall:1997vc}. Tom Maccarone, Michiel van der Klis, and Jeremy Heyl made comments and posed questions that helped improve the manuscript.}
\subsubsection*{\bibname}}
\bibliographystyle{apalike}
\begin{document}
\twocolumn[
\aistatstitle{Memory visualization tool for training neural network}
\aistatsauthor{ Mahendran N}
\aistatsaddress{ Indian Institute of Technology, Tirupati} ]
\begin{abstract}
Software helps make the world a better place, ranging from system software through open source to application software. Software engineering already applies neural network models to tasks such as code suggestion and bug report summarization, demonstrating their effectiveness at real SE tasks. Combining software with machine learning algorithms yields better solutions and a better understanding of the environment. Software includes both generalized applications that solve problems for everyone and specific applications that serve one particular community. To address the computational challenge in deep learning, many tools exploit hardware features such as multi-core CPUs and many-core GPUs to shorten training time. Machine learning algorithms have a great impact on the world, but they consume a considerable amount of memory in the process. We propose a new tool for analyzing the memory utilized while developing and training deep learning models. Our tool visualizes memory utilization concurrently with training, and analyzes the various parameters affecting memory utilization. It thereby gives a better idea of which processes or models consume more memory.
\end{abstract}
\section{INTRODUCTION}
Deep learning (DL) and other machine learning (ML) techniques are evolving at a rapid pace. Integrating machine learning algorithms with Software Engineering (SE) is the latest trend: reusable deep learning models are deployed as part of SE, so that models developed once by trained machine learning experts can later be deployed without the help of experts \cite{Li18}. Advances in deep learning for Software Engineering, healthcare, computer vision, natural language processing, and autonomous vehicles have enabled remarkable progress in recent years. Processes like predicting sales and detecting disease using computer vision illustrate the involvement of deep learning models in Software Engineering. Machine learning has a huge impact on Software Engineering, as machine learning can help generate code.
Big tech companies are researching the integration of AI with their SE tasks, like code generation and malware detection, and integrating deep learning into SE tasks is becoming popular. Research in this field has integrated more than 40 Software Engineering tasks with deep learning, and papers on combining deep learning with SE tasks have been accepted at more than 60 venues. Machine learning algorithms learn by themselves, as they are not hand coded like software. Since machine learning algorithms change according to requirements and trends, the best practice before deploying models in production is to establish the exactness of the model: knowing the quality of a model before deployment is important in order to have a good software model, and defective models have to be identified, for instance with the help of a precision index \cite{Li18,Ma07}.
Although software benefits from deep learning, it comes with more memory utilization than software without deep learning algorithms. Models developed where large computational resources are available can be implemented in software easily, and cloud platform resources pave the way for implementing even bigger machine learning algorithms in software effectively. But having offline models in software is reliable and secure, and memory utilization can make training models impossible. Several steps have been taken to reduce the memory utilization of deep learning algorithms.
Research in this field aims to reduce memory consumption. Existing approaches, like compression of models and using cloud storage for ML models, help software run even on devices with little computation and storage. Uncompressed networks like AlexNet (which handles 150,528-dimensional input) and VGG (a very deep convolutional network containing 16--19 layers) have large numbers (more than tens of megabytes) of weights in fully connected layers, which incur a significant amount of memory accesses \cite{Krizhevsky12,Simonyan14}. Mobile devices, meanwhile, have strict constraints in terms of computing power, battery, and memory capacity. \cite{Kim15} found that compression of models reduces memory, runtime, and energy significantly with only a minor accuracy drop. Compression can be performed after training the model in an environment where resources are available; in a less resourceful environment, steps have to be taken to reduce resource utilization such as memory, and knowing where in the training of a model memory is utilized most can help solve this problem. The reason for an increase or decrease in memory consumption at a particular time is otherwise not known. Storing models in the cloud may be a better option than compressing them: it allows less memory usage in software applications, but has issues like maintaining the storage, model privacy, internet connectivity, and updating and versioning the model.
We have to find the root cause of increasing memory utilization. Real-time memory usage has to be known, which paves the way to analyzing each model while training. While a large number of weights incurs a significant amount of memory accesses, other factors also increase memory utilization: the number of nodes in each layer, the number of layers, the training epochs, the computational complexity, and the input data dependency or quality of the images in the dataset. Increased memory utilization in turn increases power consumption. Memory needs to be analyzed for each model irrespective of the hardware used, so a tool for monitoring memory utilization in real time can provide a better understanding of the current situation. We propose a tool that monitors the memory utilization of a computer and presents the result in graph form.
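The kind of sampling such a tool performs can be sketched as follows (a minimal illustration, not the authors' implementation: we assume Python and use its built-in \texttt{tracemalloc} module, which tracks only Python-level heap allocations, with a stand-in training step):

```python
import tracemalloc

def sample_memory(train_step, steps):
    """Run a training loop, recording Python heap usage after each step."""
    tracemalloc.start()
    samples = []
    for step in range(steps):
        train_step(step)
        current, peak = tracemalloc.get_traced_memory()
        samples.append((step, current, peak))
    tracemalloc.stop()
    return samples

# Stand-in "training step": a model whose weight buffers grow each iteration
weights = []
def fake_step(step):
    weights.append([0.0] * 10_000)

for step, current, peak in sample_memory(fake_step, steps=5):
    print(step, current, peak)   # memory grows with the steps
```

In the actual tool, such samples would be collected concurrently with training and plotted against time to produce the memory-utilization graph.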
These models have grown in memory requirements (typically millions of parameters or weights). The algorithms require high levels of computation during training, as they have to be trained on large amounts of data, while during deployment they may be used many times. Moreover, these models access memory each time they are trained, so it is very important to consider memory usage while training a model.
\section{LITERATURE REVIEW}
Memory utilization plays an important role in developing a machine learning algorithm. Understanding the development of software helps when integrating it with a machine learning model. Software utilizes memory to store the code and perform the function it is intended to do, and software has evolved to the point where it utilizes little memory and can even be embedded in wrist watches. Integrating software with machine learning follows the same path of reducing memory utilization in order to provide better services.
\cite{marimuthu} et al. present an empirical study on analysing energy in software applications. They found that machine learning methods are adopted for design pattern detection, code smell detection, bug prediction, and recommender systems. These tasks do exhibit some patterns, but when considering energy consumption, only a small amount of energy is dissipated in them, whereas a large amount of power is dissipated in developing the model and during the training/testing phase. This work provides the idea of analysing the energy dissipation of machine learning models in the same way as for software applications.
\cite{Han15} et al. researched reducing the storage and computation required by neural networks by an order of magnitude, without affecting their accuracy, by learning only the important connections. They do this to find the energy and memory dissipation in neural networks, which can be achieved by changing the weights and connections within the networks. In our research, we measure the memory and energy spent on read/write operations by a neural network or any machine learning algorithm. With the obtained results we learn how much energy one particular algorithm is using, and the average energy consumed while training the model.
Dayarathna et al. \cite{Dayarathna15} present a research survey on power and memory consumption in data centers. The data centers of companies provide the space for most consumers' data, and user data is critical for taking steps to grow a business, develop better products for users, and so on. The survey details the consumption of energy in data centers: the authors analyse the energy spent on cooling, lighting, power conversion, networking, hardware, and servers and storage, using multiple power modeling tools for evaluation.
Kim et al. \cite{Kim15} researched the energy consumption of deep neural networks, from a Titan X GPU down to smartphones, in order to compress the networks. They consider computation energy and memory accesses in deep learning, and try to compress the neural network so that it can fit on smartphones. Caffe, Torch, and Theano are the tools used for developing the networks. We provide results based on a current analysis of energy dissipation in neural networks, and can draw on their tools and techniques for extracting the energy dissipation of a neural network.
Vanhoucke et al. \cite{Vanhoucke11} researched how to reduce the computation cost of training deep learning networks. They apply specific modifications, based on floating point implementations, to improve the speed of neural networks. Our project measures the energy spent by neural networks: we calculate both computation and memory-access costs, whereas they make targeted changes in order to increase computational speed. They also implemented memory-access optimizations to reduce computation time, but did not consider the various types of neural networks. We can draw on their idea of measuring computational speed for machine learning algorithms such as neural networks.
\section{METHODOLOGY}
This section introduces the details of the tool developed and its functions. We discuss the process of tool development, deployment of the tool, execution of the machine learning algorithm, and management of the concurrent extraction of measurements. The core approach of the tool is depicted in Figure \ref{fig:architecture}.
\begin{figure*}
\centering
\includegraphics[width=0.8\textwidth,height=10cm]{Pictures/Architecture.png}
\caption{Overview of Tool development Approach}
\label{fig:architecture}
\end{figure*}
\subsection{Tool Development}
A good user interface is needed to provide a better user experience and understanding when using any tool. In Python, such a user interface can be provided by Tkinter. We develop an application-style interface within the Python package with the help of Tkinter, which shows the user the current status of the data and can update the memory readings in real time.
\begin{figure}
\centering
\includegraphics[width=6cm,height=10cm]{Pictures/inputdata.png}
\caption{Tkinter User Interface}
\label{fig:inputdata}
\end{figure}
The Tkinter user interface has simple functions and needs little memory to implement in Python. Figure \ref{fig:inputdata} depicts the Tkinter user interface. In order to distinguish between the models trained, we provide a hyperparameter data input form in Tkinter, as shown in Figure \ref{fig:inputdata}. With this input we can identify differences between the trained models, and the energy can be viewed in a graphical representation.
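A minimal sketch of such a Tkinter input form follows; the hyperparameter field names are our own illustrative choices, not necessarily the tool's.

```python
import tkinter as tk

def build_input_form(on_submit):
    """Minimal hyperparameter input form in the spirit of the form
    described above; field names are illustrative."""
    root = tk.Tk()
    root.title("Model Hyperparameters")
    fields = ["model name", "learning rate", "batch size", "epochs"]
    entries = {}
    for row, field in enumerate(fields):
        tk.Label(root, text=field).grid(row=row, column=0, sticky="w")
        entry = tk.Entry(root)
        entry.grid(row=row, column=1)
        entries[field] = entry

    def submit():
        # Hand the typed values back, so they can be stored next to the
        # memory readings of the corresponding training run.
        on_submit({name: e.get() for name, e in entries.items()})
        root.destroy()

    tk.Button(root, text="Save", command=submit).grid(row=len(fields), column=1)
    return root

# To launch the form interactively: build_input_form(print).mainloop()
```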
Tool development consists of taking crucial steps in order to obtain a useful tool. The basic requirement for this tool is memory utilization values, and their measurement has to be in real time, since real-time values help in understanding the behaviour of the model. Memory utilization is basically memory read and memory write signals; multiple signals will be high when numerous tasks, such as matrix multiplications, are performed at the same time. In order to get the memory utilization, we use the 'psutil' package in Python.
The psutil package helps in retrieving information on system utilization. We use an Ubuntu environment for this project. Memory utilization in Ubuntu can be obtained using terminal commands such as free and top; psutil fetches the same data directly and stores it in a separate variable, which we can read whenever needed. Having each memory utilization value assigned to its own variable makes it much easier to use in the user interface.
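A minimal sketch of reading these values with psutil (assuming the psutil package is installed; the fields follow psutil's \texttt{virtual\_memory()} counters):

```python
import psutil

def memory_snapshot():
    """Read system-wide memory counters, analogous to `free`/`top`."""
    vm = psutil.virtual_memory()
    return {
        "total": vm.total,                      # physical RAM in bytes
        "available": vm.available,              # usable without swapping
        "used_percent": vm.percent,             # percentage of RAM in use
        "active": getattr(vm, "active", None),  # platform-dependent field
    }

snap = memory_snapshot()
print(snap["used_percent"])
```

Each value can then be assigned to its own variable, as described above, and polled periodically for the real-time graph.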
\subsection{Tool Deployment}
The execution of the two functions and the Tkinter UI is performed via multiprocessing. In Python, the multiprocessing package helps in executing processes simultaneously: it supports spawning processes, where the spawning function loads and executes a new child process, and the current process and the new child process continue to execute concurrently. The multiprocessing package uses an API similar to the threading module and offers both local and remote concurrency.
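A minimal sketch of this parent/child pattern with the multiprocessing package (a timestamp stands in for the real memory reading, so the sketch stays self-contained):

```python
import multiprocessing as mp
import time

def sample_memory(queue, n_samples, interval):
    """Child process body: periodically push a memory reading.
    A timestamp stands in here for the real psutil call."""
    for _ in range(n_samples):
        queue.put(time.time())  # in the tool: psutil.virtual_memory().percent
        time.sleep(interval)

def run_sampler(n_samples=3, interval=0.01):
    """Spawn a child that samples while the parent (the UI in the tool)
    keeps running and collects the readings concurrently."""
    queue = mp.Queue()
    child = mp.Process(target=sample_memory, args=(queue, n_samples, interval))
    child.start()
    readings = [queue.get() for _ in range(n_samples)]
    child.join()
    return readings

if __name__ == "__main__":
    print(len(run_sampler()))
```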
Once the tool is activated, the memory utilization data is extracted from the OS using the 'psutil' package. The extracted data is stored in a folder, and concurrently the Tkinter UI reads the data from the stored database and presents the memory utilization in graphical form. The graph contains the total memory, the memory available, the percentage of memory used, and the memory currently active. Figure \ref{fig:memorygraph} represents the graphical view of the collected data.
\begin{figure}
\centering
\includegraphics[width=6cm,height=10cm]{Pictures/memorygraph.png}
\caption{Memory utilization Graph}
\label{fig:memorygraph}
\end{figure}
\section{RESULTS}
We developed a tool for measuring memory utilization and recording the hyperparameters used while training a deep learning model. This helps in analyzing memory utilization better, since the hyperparameters can be related to the memory readings. Utilizing this tool can help in developing better deep learning models that keep memory utilization in consideration.
\section{CONCLUSION}
We propose the new perspective of considering the memory utilized while developing machine learning algorithms. Deep learning models are taken as an example in providing an overview of the tool and its functionalities.
\section{Introduction}
\label{sec:intro}
One of the most popular ways for annotating affect is based on rating systems, such as simple Likert scales \cite{likert1932technique}, self-assessment manikins \cite{morris1995observations}, and rating scales of the discrete states in the Geneva emotion wheel \cite{scherer2005emotions}. The common characteristic of all the above rating systems is that they provide \textit{ordinal} and not \textit{nominal} information about the affect states. In addition, several psychometric studies show that ratings of affect do not follow an absolute and consistent scale \cite{ovadia2004ratings,metallinou2013annotation}. Therefore, learning to predict nominal values of affect yields inconsistent models of questionable quality and use. On the contrary, treating ratings as ordinal values yields less biased datasets and, thus, more reliable models of affect \cite{martinez2014don}. For the reasons above, formulating affect modelling tasks as supervised learning ranking problems is gaining ground within the Affective Computing research community.
The supervised learning problem of ranking consists of using labelled information to derive accurate ranking prediction functions. Most of the algorithms that try to address that problem are using information that comes solely from labelled pairs of data points and transform the ranking problem into a classification one \cite{joachims2002optimizing,freund2003efficient,burges2010ranknet}. The label of a pair $(x_i, x_i')$ of data points is 1, -1, or 0 if $x_i$ is ranked higher than, lower than, or equal to $x_i'$, respectively. Hence, these algorithms can be used even when only the global ordering of data points is provided without the need for preference scores or other kinds of information.
In many real-world applications, however, there is an asymmetric distribution of information between training and test time; that is, additional information is given about the training data, which is not available at test time. Consider, for example, user ratings for different movies, self-assessment manikin scale for affect annotation, or the number of likes and dislikes associated with advertisements. Although this additional information, which can be implicitly seen as preference scores, is very valuable, it is disregarded by algorithms that use solely labelled pairs of data points.
In this study, we propose a supervised learning ranking model of affect. Besides the information that comes from labelled pairs of data points, our model also exploits additional information that directly or indirectly is associated with preference scores, that is ordinal values of affect states. Since this additional information can only be available during the training phase of the model and not at test time, we treat it as \textit{privileged information} and follow the learning paradigm of Learning Under Privileged Information (LUPI) proposed by Vapnik and Vashist \cite{vapnik2009new}. Our model of affect is based on Neural Networks (NN) and extends the well-known RankNet \cite{burges2005learning} to the LUPI paradigm, hence its name \textit{AffRankNet+}. To the best of our knowledge, this is the first time that privileged information is incorporated into NN for addressing supervised learning ranking problems and the first time that the LUPI paradigm is used for affect modelling. Experimental validation of AffRankNet+ on the large scale publicly available Afew-VA dataset \cite{kossaifi2017afew} indicates that privileged information \textit{significantly} improves the ranking performance of affect models.
\section{Related Work}
This section surveys literature on supervised learning ranking models, and affect modelling based on ranking/preference learning approaches.
\subsection{Supervised Learning Ranking Models}
The supervised learning problem of ranking, based on labelled pairs of data points, has been widely studied. Below we present some landmark works focusing on this problem.
RankSVM proposed in \cite{joachims2002optimizing} was one of the first approaches focusing on this problem. The authors use Support Vector Machines (SVM) to compute a preference function.
In \cite{kuo2014large, lee2014large} the authors reduce the number of RankSVM variables from quadratic to linear with respect to the number of training instances in order to significantly reduce the training time and make RankSVM suitable for large-scale problems.
RankBoost \cite{freund2003efficient, rudin2009margin, connamacher2020rankboost} is another well-known ranking algorithm. RankBoost creates and aggregates a set of ranking functions in an iterative fashion to build an effective ranking procedure. Using solely information that comes from labelled pairs of data points, RankBoost estimates a preference function that can map single points to real-valued preference scores.
The authors in \cite{burges2005learning} approach the ranking problem by proposing a probabilistic cost function for training machine learning models. In their study, they utilize NN, and thus they call their approach RankNet. The idea, however, of employing a probabilistic cost function has equally well been applied to ranking algorithms that adopt different learning machines, such as Boosted Trees \cite{burges2010ranknet}. Similarly to the approaches presented above, RankNet is trained on labelled pairs of data points. After training, it can evaluate single points and produce preference scores for each one of them. DeepRank \cite{pang2017deeprank}, which targets information retrieval tasks, is also based on NN. However, it differs from RankNet, since it identifies and exploits local preference relations between the data points to induce the global ranking. In \cite{rahangdale2019deep} the authors introduce $l_1$ regularization to an NN-based ranking model to enforce sparsity and avoid overfitting. Since the above-mentioned approaches are based on NN, they can straightforwardly exploit the recent advances in deep learning \cite{song2014adapting, parthasarathy2017ranking} and tensor-based learning \cite{makantasis2018tensor, makantasis2019common, makantasis2021rank}. However, none of these follows the LUPI paradigm to exploit additional/privileged information about the training data that might be available. In other words, they follow the typical supervised learning setting by transforming the ranking problem to a classification one.
Selecting a preference function using the methods presented above is based solely on the order of the data points. Even if additional information is available, such as preference scores associated with the points, this information is entirely disregarded. In this study, we argue that exploiting additional information associated with preference scores can produce more accurate ranking algorithms. We assume that the additional information is available only during the training phase of the model and not at test time. This assumption is critical to impose no restrictions related to capturing additional information during the real-world deployment of the model. To enable the AffRankNet+ model to exploit additional information during training efficiently, we follow the LUPI paradigm \cite{vapnik2009new, vapnik2015learning}, which is closely related to knowledge distillation proposed in \cite{hinton2015distilling}. Theoretical results \cite{lopez2015unifying, pechyony2010theory} show that following the LUPI paradigm reduces the sample complexity of the learning algorithm, which implies that LUPI models learn faster, and at the same time, they are very efficient for small sample setting problems, i.e. problems where the number of annotated samples is limited.
\subsection{Ranking-based Affect Modelling}
Based on psychological theories and evidence from multiple disciplines, such as neuroscience and artificial intelligence, Yannakakis et al. \cite{yannakakis2018ordinal} draw the theoretical reasons to favour ordinal labels for representing and annotating affective states. They also suggest ranking/preference learning as the appropriate approach for building reliable and valid affect models. Due to the ordinal nature of emotions, several studies approach affect modelling using ranking or preference machine learning algorithms.
In \cite{yang2010ranking} the authors represent the emotion elicited by a music song as a point into a two-dimensional Cartesian space with valence and arousal as dimensions. The coordinates of a song are determined relatively, using a modification of the ListNet \cite{xia2008listwise} algorithm, with respect to other songs' emotions. The study in \cite{fan2017ranking} also focuses on music emotion recognition. The authors first collect a dataset and annotate it using ordinal labels. Then, they propose a modification of RankSVM, called smoothed RankSVM, for deriving emotion recognition models.
Similarly, the study in \cite{zhou2018relevant} proposes a ranking algorithm to identify the emotions that are more intensely associated with a given text. By exploiting the ordinal nature of emotions, their proposed approach outperforms multi-label classification methods. The authors in \cite{liang2018multimodal} propose a multimodal ranking algorithm for emotion recognition. Their algorithm is based on the emotion intensity gradient; that is, the relative emotion intensity change between two or more different inputs. The authors in \cite{soleymani2008affective} exploit a ranking algorithm to predict spectators' felt emotions for a given movie scene. They use both physiology and audio-visual features to build and evaluate their models of affect. In \cite{makantasis2021pixels} audio-visual information from gameplay videos is fed to a deep learning RankNet model to estimate the intensity of emotions felt by gamers while they were playing a game. Finally, due to the theoretical and experimental evidence that ordinal data processing yields more reliable, valid and general models of affect, the authors in \cite{camilleri2019pyplt} present the open-source Python Preference Learning Toolbox (PyPLT) to enable the extensive use of ordinal data processing and ranking algorithms.
\begin{figure*}[!tb]
\begin{minipage}{0.33\linewidth}
\centering
\centerline{\fbox{\includegraphics[width=0.95\linewidth]{pair_cost}}}
\end{minipage}
\begin{minipage}{0.33\linewidth}
\centering
\centerline{\fbox{\includegraphics[width=0.95\linewidth]{LUPI_cost}}}
\end{minipage}
\begin{minipage}{0.33\linewidth}
\centering
\centerline{\fbox{\includegraphics[width=0.95\linewidth]{terms_cost}}}
\end{minipage}
\caption{Error surfaces normalized to [0,1] for the losses in (\ref{eq:empirical_error}) (left) and (\ref{eq:empirical_error_lupi}) (middle) for a pair of points $(x, x')$ when $f(x, x')=1$, $g(z)=8$, $g(z')=4$, $\lambda=0.5$ and $\tau=1$. The diagram on the right presents the error due to the terms that correspond to the privileged information in (\ref{eq:empirical_error_lupi}).}
\label{fig:costs}
\end{figure*}
\subsection{Our Contribution}
The contribution of this study is four-fold. First, to the best of our knowledge, we propose for the first time an NN-based supervised learning algorithm that focuses on the problem of ranking and exploits privileged information associated with preference scores. Second, since our approach utilizes NN, it can take full advantage of the recent advances in deep learning and tensor-based NN, such as automatic feature extraction and information processing in high dimension spaces. The above implies that our model can be straightforwardly applied to data points that are represented as feature vectors, but also to data points that lie in tensor spaces such as images, videos, multi-modal data (e.g. audiovisual signals) as well as to spatiotemporally evolving sensor network data \cite{makantasis2020space}. Third, by exploiting additional information only during the training phase, the potential applications of our model are not restricted by the requirement of capturing additional information at deployment time. Fourth, we evaluate the proposed model on a large-scale publicly available affect dataset; the evaluation results indicate that exploitation of privileged information significantly improves ranking results.
\section{Problem Formulation}
In this section, we first present the ranking problem when information comes solely from labelled pairs of data points. Then, we present its extension to follow the LUPI paradigm assuming privileged information regarding preference scores is available only during the training phase of the model (and not during test time). For simplicity, we formulate the problem of ranking based on labelled pairs of data points as a binary classification problem. However, the formulation can be straightforwardly modified to treat this problem as a three-class classification task to consider pairs of points that are non-comparable or equally preferred.
\subsection{The Ranking Problem}
Let us denote by $\mathcal X$ the input space i.e. the feature space of data points, by $f: \mathcal X \times \mathcal X \rightarrow \{0,1\}$ a target labeling function, and by "$\succ$" and "$\preceq$" preference relations; $x_i \succ$ $x_j$ means $x_i$ is ranked higher than $x_j$ and thus $f(x_i,x_j)=1$. Similarly, $x_i \preceq$ $x_j$ means that $x_i$ is ranked lower than or equal to $x_j$ and thus $f(x_i,x_j)=0$. Given a set of labelled points
\begin{equation}
\label{eq:sample}
S = \{(x_i, x_i', t_i)\}_{i=1}^m,
\end{equation}
where $t_i = f(x_i, x_i')$, and a class $\mathcal H$ of preference functions mapping $\mathcal X$ to $\mathbb R$, the Empirical Risk Minimization principle (ERM) \cite{vapnik1999overview} suggests to select a preference function $h^* \in \mathcal H$ that minimizes the empirical error (error over the training set $S$), i.e.
\begin{equation}
h^* \in \arg \min_{h \in \mathcal H} \hat{R}_S(h),
\end{equation}
where $\hat{R}_S(h)$ stands for the empirical error of the preference function $h$ and can be quantified by the Binary Cross Entropy (BCE) loss function
\begin{equation}
\label{eq:empirical_error}
\hat{R}_S(h) = -\frac{1}{m} \sum_{i=1}^m \big[ t_i \log(p_i) + (1-t_i) \log(1-p_i) \big],
\end{equation}
where $p_i=\sigma(h(x_i)-h(x_i'))$, and $\sigma(x)=1/(1+\exp(-x))$ is the sigmoid function.
At this point, we should mention that the class of functions $\mathcal{H}$ contains all the functions that a given machine learning model can compute. Consider, for example, a neural network with a given architecture. Then every function that the above neural network can compute for different values for its weights belongs to $\mathcal{H}$.
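As a concrete illustration of this pairwise loss, the following NumPy sketch (an illustration, not the authors' implementation) computes the empirical risk above for labelled pairs:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def pairwise_bce(h_first, h_second, t):
    """Pairwise binary cross-entropy over labelled pairs.
    h_first, h_second: preference-function outputs h(x_i), h(x_i');
    t: pair labels, 1 if x_i is ranked higher than x_i'."""
    p = sigmoid(h_first - h_second)
    eps = 1e-12  # numerical guard for the logarithms
    return -np.mean(t * np.log(p + eps) + (1 - t) * np.log(1 - p + eps))

# A pair scored in the correct order (h(x) > h(x') with t = 1)
# incurs a smaller loss than the reversed ordering:
low = pairwise_bce(np.array([2.0]), np.array([-2.0]), np.array([1.0]))
high = pairwise_bce(np.array([-2.0]), np.array([2.0]), np.array([1.0]))
print(low < high)
```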
\subsection{The Problem of Ranking Using Privileged Information}
LUPI is based on the availability of additional information, called \textit{privileged information}, that can be used only during the training phase of a learning model. According to LUPI, exploitation of privileged information during training makes a learning model learn better and faster \cite{vapnik2009new, vapnik2015learning}. This information, however, is not available at test time.
Let us denote as $\mathcal Z$ the space of privileged information. Then, the set of labelled points in (\ref{eq:sample}) is enhanced by the presence of privileged information as
\begin{equation}
\label{eq:sample_lupi}
S_{LUPI} = \{(x_i, x_i', z_i, z_i', t_i)\}_{i=1}^m,
\end{equation}
where $z_i, z_i' \in \mathcal Z$, and in general $\mathcal X \neq \mathcal Z$.
Considering especially the problem of ranking, $z_i$'s should correspond to a representation of information that can be used to estimate preferences scores for $x_i$'s (for example, the output of a learning model that has been trained on $z_i$'s to predict preference scores), or to a direct representation of those preference scores. As far as the latter case is concerned, having available $z_i$'s, which are a direct representation of preference scores, is prevalent for many real-world ranking applications; consider, for example, affect ratings from ordinal annotation tools be directly used as preference scores.
In the following, we unify the two cases of privileged information mentioned above by considering a function $g: \mathcal Z \rightarrow \mathbb R$ that transforms $z_i$'s to preference scores. In the second case where $z_i$'s are a direct representation of preference scores, $g$ is the identity function, i.e. $g(z_i) = z_i$. The function $g$ in LUPI and knowledge distillation parlance is called ``teacher''.
For exploiting privileged information we modify the empirical error in (\ref{eq:empirical_error}) as follows
\begin{equation}
\label{eq:empirical_error_lupi}
\begin{split}
&\hat{R}_{S_{L}}(h) = \frac{1}{m} \sum_{i=1}^m \Big[ -\lambda \big( t_i \log(p_i) + (1-t_i) \log(1-p_i) \big) + \\ &(1-\lambda)\Big(\phi\Big(\frac{(h(x_i)-g(z_i))^2}{\tau}\Big) + \phi\Big(\frac{(h(x_i')-g(z_i'))^2}{\tau}\Big)\Big) \Big],
\end{split}
\end{equation}
where function $\phi$ is the hyperbolic tangent function, i.e., $\phi(x)=\tanh(x)$, that bounds the additional error terms to $[0,1)$, $\lambda \in [0,1]$ is balancing the error terms, and $\tau>0$ is a temperature parameter that quantifies the degree to which the values of the preference scores can be trusted. Fig. \ref{fig:costs} presents the error surfaces normalized to $[0,1]$ for equations (\ref{eq:empirical_error}) and (\ref{eq:empirical_error_lupi}), when $\lambda=0.5$ and $\tau=1.0$ and the preference scores for the two data points are $8$ and $4$, respectively. The same figure also presents the error added to the cost due to the two additional terms in (\ref{eq:empirical_error_lupi}) which correspond to the privileged information that comes in the form of preference scores. While the loss in (\ref{eq:empirical_error}) considers as best solution the one that maximizes the difference $h(x)-h(x')$, the loss in (\ref{eq:empirical_error_lupi}) selects the preference function $h$ that at the same time reduces the BCE loss and matches, as much as possible, the preference scores provided by the privileged information.
Based on the discussion above, given a labelled set of training points in the form of equation (\ref{eq:sample_lupi}) and a set of preference functions $\mathcal H$, the main objective of this study is to select a function $h^* \in \mathcal H$ that minimizes the loss in (\ref{eq:empirical_error_lupi}).
A natural question that arises is why not use a typical regression model for estimating the preference function $h$ using as target the preference scores provided by the privileged information. Minimizing the sum of BCE and the last two terms of equation (\ref{eq:empirical_error_lupi}) determines simultaneously the distance of a labelled pair of points from the classification decision boundary, and the degree to which the preference function $h(\cdot)$ matches the preference scores coming from privileged information. In ranking problems, the preference scores are usually subjectively biased; different users follow a different internal/personal preference function for providing their ratings. Parameters $\lambda$ and $\tau$ in equation (\ref{eq:empirical_error_lupi}) determine the degree to which the learning model should trust the provided preference scores. Doing the same thing using a typical regression model is not possible. In addition, the loss in (\ref{eq:empirical_error_lupi}) is not just a balance of classification and regression losses (BCE and mean squared error). The employment of a general function $g:\mathcal Z \rightarrow \mathbb R$ that transforms privileged information to preference scores indicates that our model goes beyond the typical supervised learning setting to benefit from the properties (robust and fast training) of the LUPI paradigm.
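To make the modified loss concrete, here is a NumPy sketch (an illustration, not the authors' code) of the loss in (\ref{eq:empirical_error_lupi}), with the privileged terms bounded by the hyperbolic tangent; the teacher scores 8 and 4 mirror the values used in Fig. \ref{fig:costs}:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lupi_ranking_loss(h1, h2, g1, g2, t, lam=0.5, tau=1.0):
    """Pairwise BCE plus tanh-bounded penalties that pull the
    preference function h towards the teacher scores g(z), g(z')."""
    p = sigmoid(h1 - h2)
    eps = 1e-12
    bce = -(t * np.log(p + eps) + (1 - t) * np.log(1 - p + eps))
    priv = np.tanh((h1 - g1) ** 2 / tau) + np.tanh((h2 - g2) ** 2 / tau)
    return np.mean(lam * bce + (1 - lam) * priv)

# Matching the teacher scores (g(z)=8, g(z')=4) lowers the loss
# compared with an h that only gets the ordering right:
matched = lupi_ranking_loss(np.array([8.0]), np.array([4.0]),
                            np.array([8.0]), np.array([4.0]), np.array([1.0]))
off = lupi_ranking_loss(np.array([1.0]), np.array([0.0]),
                        np.array([8.0]), np.array([4.0]), np.array([1.0]))
print(matched < off)
```

Setting $\lambda=1$ recovers the plain pairwise BCE, while smaller $\lambda$ increases the weight of the teacher-matching terms.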
\section{The AffRankNet+ Model}
Our proposed model is based on and extends the RankNet model \cite{burges2005learning}. Like RankNet, it is an NN-based learning model that is trained on labelled pairs of data points. At the same time, however, and unlike RankNet, it exploits privileged information related to preference scores during its training phase to learn faster and in a more robust way.
Specifically, our proposed model is a two-stream neural network. The architectures of the neural networks corresponding to the two streams are identical, and their weights are tied. Therefore, the two streams produce the same outputs, given that their inputs are the same. Let us denote by $\hat{h}(x)$ the output of each stream when $x$ is given as input. The model receives as input a labelled pair of data points, that is $(x_i, x_i', g(z_i), g(z_i'), t_i)$. The first data point $x_i$ is fed as input to the first stream, which outputs $\hat{h}(x_i)$. Similarly, the second point $x_i'$ is fed as input to the second stream, which outputs $\hat{h}(x_i')$. Having the outputs of the two streams, the model predicts a label $\hat{t}_i$ for the sample $(x_i, x_i')$ as follows:
\begin{equation}
\hat{t}_i =
\begin{cases}
\:\:\:1 \:\:\text{ if } \sigma(\hat{h}(x_i) - \hat{h}(x_i')) > 0.5\\
\:\:\:0 \:\:\text{ otherwise}
\end{cases},
\end{equation}
where $\sigma$ is the sigmoid function. The output $\hat{t}_i$ replaces $t_i$ in (\ref{eq:empirical_error_lupi}) and is used to estimate the misranking error. For computing the second and the third terms in equation (\ref{eq:empirical_error_lupi}), we replace $h(x_i)$ and $h(x_i')$ with $\hat{h}(x_i)$ and $\hat{h}(x_i')$ respectively.
Given that we have at our disposal a training set of labelled points in the form of equation (\ref{eq:sample_lupi}) and a function $g:\mathcal Z \rightarrow \mathbb R$ that transforms $z_i$'s to preference scores, we can train the AffRankNet+ model by minimizing (\ref{eq:empirical_error_lupi}) with respect to the model parameters (NN weights). After training, the AffRankNet+ model computes a preference function $\hat{h}^*(x)$ that outputs the preference score for each data point $x$.
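A scaled-down sketch of the two-stream computation with tied weights (toy dimensions instead of the real feature sizes; illustrative only, not the trained model):

```python
import numpy as np

rng = np.random.default_rng(0)

class SharedScorer:
    """One-hidden-layer scorer shared by both streams. Because the two
    streams have tied weights, a single object serves both inputs."""
    def __init__(self, d_in=8, d_hidden=4):
        self.W1 = rng.normal(scale=0.1, size=(d_in, d_hidden))
        self.b1 = np.zeros(d_hidden)
        self.w2 = rng.normal(scale=0.1, size=d_hidden)
        self.b2 = 0.0

    def score(self, x):
        hidden = np.maximum(x @ self.W1 + self.b1, 0.0)  # ReLU hidden layer
        return hidden @ self.w2 + self.b2                # scalar preference score

def predict_pair_label(model, x, x_prime):
    """Label 1 iff sigmoid(h(x) - h(x')) > 0.5, i.e. iff h(x) > h(x')."""
    diff = model.score(x) - model.score(x_prime)
    return 1 if 1.0 / (1.0 + np.exp(-diff)) > 0.5 else 0

model = SharedScorer()
x, xp = rng.normal(size=8), rng.normal(size=8)
# Tied weights: identical inputs to the two streams give identical scores.
print(model.score(x) == model.score(x))
```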
\begin{figure}[!tb]
\begin{minipage}{1.0\linewidth}
\centering
\centerline{\includegraphics[trim=0 80 0 0 ,width=1.0\linewidth]{afewva_data}}
\end{minipage}
\caption{Indicative frames from the Afew-VA dataset along with their annotation for the arousal dimension.}
\label{fig:afewva}
\end{figure}
\section{Experimental Setting and Evaluation}
In this section, we present the employed dataset including the training and test sets construction, the architecture of the AffRankNet+ model and the training details, and finally, the performance evaluation results.
\begin{figure*}[!tb]
\begin{minipage}{0.98\linewidth}
\centering
\centerline{\includegraphics[trim=70 120 50 10 ,width=0.95\linewidth]{AffRankNet_architecture.pdf}}
\end{minipage}
\caption{The architecture of AffRankNet+ model.}
\label{fig:architecture}
\end{figure*}
\subsection{Dataset}
\label{ssec:dataset}
For evaluating the proposed AffRankNet+ model, we use the publicly available Afew-VA dataset \cite{kossaifi2017afew}. That dataset consists of 600 videos from films, whose lengths range from 10 to 120 frames. The collected videos display various facial expressions. Each of the videos is annotated per frame, in terms of valence and arousal level, in the integer range [-10, 10]. In this study, which serves as a proof of concept, we consider only the arousal annotations. Therefore our objective is to rank the video frames based on the arousal level. Fig. \ref{fig:afewva} presents six indicative frames from the employed dataset along with their arousal annotation. We should note that the target variables for the Afew-VA dataset are \textit{subjectively} defined and thus their estimation is better suited within a ranking setting \cite{yannakakis2018ordinal, martinez2014don}.
The Afew-VA dataset, along with the frames, provides 68 facial landmark points. We use those landmark points to detect and crop the face. After cropping the face, we create a vector representation of the facial images using the features produced by the VGG-Face neural network \cite{parkhi2015deep} pre-trained for face recognition. Moreover, since the integer arousal annotation can be seen as preference scores, in our case the function $g(\cdot)$ in loss (\ref{eq:empirical_error_lupi}) is the identity function.
After defining the vector representation of frames and the form of privileged information, we have to construct the training and test sets in the form of (\ref{eq:sample_lupi}) for training and evaluating the performance of our model. To do so, we first split the Afew-VA dataset into two sets following the \textit{group} holdout scheme. This way, we can be sure that frames corresponding to the same video will be present either in the training set or the testing set, but not in both. Then, we compare the arousal annotation values of all pairs of points that belong to the same set and include in the training (test) set the pairs whose annotation difference is larger than a threshold. That threshold can be seen as a preference uncertainty bound, which avoids producing a ranking model whose output is affected by trivial input differences. Such a threshold is commonly used when ranking algorithms are used for affect modelling (see for example \cite{makantasis2021pixels}). In this study, we set the value of that threshold to 4 following a trial-and-error procedure. This threshold value balances, on the one hand, the richness of the data, and on the other, the size of the dataset, which highly affects the computational cost of training the model.
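The pair-construction step described above can be sketched as follows (the frame identifiers and arousal values are invented for illustration):

```python
import itertools

def build_pairs(frames, threshold=4):
    """Form labelled pairs from (frame_id, arousal) annotations, keeping
    only pairs whose annotation difference exceeds the uncertainty bound."""
    pairs = []
    for (id_a, s_a), (id_b, s_b) in itertools.combinations(frames, 2):
        if abs(s_a - s_b) > threshold:
            # t = 1 when the first element is ranked higher
            t = 1 if s_a > s_b else 0
            # the raw scores are kept as privileged information
            pairs.append((id_a, id_b, s_a, s_b, t))
    return pairs

frames = [("f1", 8), ("f2", 3), ("f3", -5), ("f4", 0)]
print(build_pairs(frames))
```

With the threshold of 4, the pair ("f2", "f4") with scores 3 and 0 is discarded as too close to call, while the remaining pairs enter the set with their labels.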
\subsection{Architecture of AffRankNet+ and Training Details}
\label{ssec:details}
As mentioned before, the AffRankNet+ model is a two-stream neural network, where the two streams have exactly the same topology and tied weights. In this study, the AffRankNet+ model uses the pre-trained VGG Face network as a backbone for constructing features from the face images. The VGG Face network builds feature vectors with 4096 elements, which are then fed to a fully connected feedforward neural network with one hidden layer of 512 neurons. We keep the weights of the VGG Face feature-construction network fixed during training and modify only the weights of the subsequent fully connected feedforward network. Fig. \ref{fig:architecture} visually presents the architecture of the proposed AffRankNet+ model; the green part corresponds to the VGG Face backbone network, while the blue part corresponds to the trainable part of AffRankNet+.
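Each stream maps an input frame to a real-valued score; in the standard RankNet formulation (on which AffRankNet+ builds), the probability that frame $i$ is preferred over frame $j$ is obtained from the score difference through a logistic function. A minimal sketch (the function name is ours):

```python
import math

def ranknet_probability(s_i, s_j):
    """Predicted probability that item i is preferred over item j,
    given the scores s_i and s_j produced by the two tied-weight
    streams (standard RankNet formulation)."""
    return 1.0 / (1.0 + math.exp(-(s_i - s_j)))
```

Because the two streams share weights, swapping the inputs simply complements the probability.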
We conduct experiments with varying training set sizes; that is, 5\%, 10\%, and 20\% of the whole dataset are used for training and the rest for testing. 10\% of the training set is used as a validation set to trigger early stopping; training stops after 15 epochs without improvement in the validation loss. We choose to use a small percentage of the whole dataset for training since the exploitation of privileged information reduces the sample complexity of learning, enabling efficient training of the model with a small number of labelled data points. For each training set size, we run ten experiments following the group holdout cross-validation scheme (see Section \ref{ssec:dataset}). For all experiments, we keep the $\tau$ parameter fixed at 1 (further investigation of the effect of $\tau$ on the model's performance is left for future work). Finally, for updating the model's weights, we use the Adam optimizer with a learning rate of 0.001.
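The group holdout scheme can be sketched in a few lines of plain Python (an illustrative sketch; names are ours, and libraries such as scikit-learn offer equivalent ready-made splitters):

```python
import random

def group_holdout_split(groups, train_fraction=0.1, seed=0):
    """Split frame indices so that all frames of a video (group) fall
    entirely in the training set or entirely in the test set.

    `groups` maps each frame index to its video id."""
    rng = random.Random(seed)
    videos = sorted(set(groups))
    rng.shuffle(videos)
    n_train = max(1, round(len(videos) * train_fraction))
    train_videos = set(videos[:n_train])
    train_idx = [i for i, g in enumerate(groups) if g in train_videos]
    test_idx = [i for i, g in enumerate(groups) if g not in train_videos]
    return train_idx, test_idx
```

Splitting at the video level rather than the frame level prevents near-duplicate frames of the same clip from leaking across the train/test boundary.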
\subsection{Evaluation Results}
Following the experimental setting described above, we evaluate the ranking performance of the proposed model. The evaluation takes place in terms of average Pearson's correlation coefficient ($r$) and average Kendall's tau ($\uptau$) since these metrics are widely used for evaluating ranking algorithms \cite{kossaifi2017afew, melhart2020study}.
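Kendall's $\uptau$ has a simple pairwise definition: concordant minus discordant pairs over all pairs. A minimal reference implementation of the tau-a variant (ties counted as neither concordant nor discordant; the function name is ours):

```python
def kendall_tau_a(a, b):
    """Kendall's tau-a between two equally long score sequences:
    (concordant - discordant pairs) / total number of pairs."""
    n = len(a)
    concordant = discordant = 0
    for i in range(n):
        for j in range(i + 1, n):
            s = (a[i] - a[j]) * (b[i] - b[j])
            if s > 0:
                concordant += 1
            elif s < 0:
                discordant += 1
    return (concordant - discordant) / (n * (n - 1) / 2)
```

A value of $1$ indicates perfect agreement between the predicted and ground-truth orderings, and $-1$ a perfectly reversed ordering.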
First, we investigate the effect of the $\lambda$ parameter (see equation (\ref{eq:empirical_error_lupi})) on the performance of the model by running experiments for different values of $\lambda$, namely $\lambda=0.3$, $0.5$, $0.8$. Second, we compare the performance of the AffRankNet+ model against that of the RankNet model, which does not exploit privileged information. To conduct a fair comparison, the two models have the same architecture and are trained and evaluated on the same sets of data points.
Tables \ref{tab:pearson} and \ref{tab:kendall} present the performance of the AffRankNet+ model in terms of Pearson's correlation coefficient ($r$) and Kendall's tau ($\uptau$), respectively. We can see that for small-sized datasets, larger values of the $\lambda$ parameter yield better performance both in terms of Pearson's $r$ and Kendall's $\uptau$. The obtained results agree with the formulation of the loss in equation (\ref{eq:empirical_error_lupi}) used by the AffRankNet+ model: the fewer the points in a dataset, the larger the uncertainty about the preference scores. As mentioned before, the parameter $\lambda$ quantifies the degree to which the model should trust the privileged information derived from the preference scores. Therefore, when the uncertainty is large, the model achieves better results by weighting the first term of equation (\ref{eq:empirical_error_lupi}) (preference relations) more heavily. On the contrary, when the size of the dataset is adequately large, the second term of (\ref{eq:empirical_error_lupi}), associated with preference scores, becomes more important, and smaller values of $\lambda$ yield better ranking results.
\begin{table}[t]
\caption{AffRankNet+ performance in terms of Pearson's correlation coefficient ($r$) using three different values for parameter $\lambda$.}
\begin{tabular}{|r||c|c|c|} \hline
& \begin{tabular}[c]{@{}c@{}}Dataset size\\ 5\%\end{tabular} & \begin{tabular}[c]{@{}c@{}}Dataset Size \\ 10\%\end{tabular} & \begin{tabular}[c]{@{}c@{}}Dataset size\\ 20\%\end{tabular} \\ \hline
AffRankNet+ ($\lambda=0.3$) & 0.262 & 0.302 & \textbf{0.293} \\ \hline
AffRankNet+ ($\lambda=0.5$) & 0.258 & 0.312 & 0.289 \\ \hline
AffRankNet+ ($\lambda=0.8$) & \textbf{0.263} & \textbf{0.322} & 0.284 \\ \hline
\end{tabular}
\label{tab:pearson}
\end{table}
\begin{table}[t]
\caption{AffRankNet+ performance in terms of Kendall's tau coefficient ($\uptau$) using three different values for parameter $\lambda$.}
\begin{tabular}{|r||c|c|c|} \hline
& \begin{tabular}[c]{@{}c@{}}Dataset size\\ 5\%\end{tabular} & \begin{tabular}[c]{@{}c@{}}Dataset Size \\ 10\%\end{tabular} & \begin{tabular}[c]{@{}c@{}}Dataset size\\ 20\%\end{tabular} \\ \hline
AffRankNet+ ($\lambda=0.3$) & 0.172 & 0.198 & \textbf{0.210} \\ \hline
AffRankNet+ ($\lambda=0.5$) & 0.168 & 0.204 & 0.206 \\ \hline
AffRankNet+ ($\lambda=0.8$) & \textbf{0.179} & \textbf{0.216} & 0.191 \\ \hline
\end{tabular}
\label{tab:kendall}
\end{table}
In the next set of experiments, we compare the performance of the proposed AffRankNet+ model against the RankNet model, which does not exploit privileged information. As mentioned above, the two compared models have exactly the same architecture and are trained/validated on exactly the same data points; this way, any difference in performance between the two models can be attributed to the exploitation of privileged information coming from the preference scores. We should mention that for the AffRankNet+ model, we use the best value of the $\lambda$ parameter based on the previous experiment, that is, $\lambda=0.8$ for dataset sizes of 5\% and 10\%, and $\lambda=0.3$ for a dataset size of 20\%.
Fig.~\ref{fig:performance} presents the results of the above comparison. Regardless of dataset size, the AffRankNet+ model achieves better performance than the RankNet model in terms of both metrics. Moreover, we test whether the improvement in performance achieved by using privileged information is statistically significant. Since we run ten experiments for each dataset size following the group holdout cross-validation scheme, we collect the performance of both models on each fold. Then, we test the null hypothesis that the performances of the two models come from the same distribution by conducting paired t-tests. Based on the t-test outcomes, for all dataset sizes, we can reject the null hypothesis at a significance level of 0.05. Therefore, we can safely conclude that the exploitation of privileged information can significantly boost the performance of a ranking model.
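The significance test can be reproduced from the per-fold performances. A minimal paired t-statistic is sketched below; with ten folds (df $= 9$), the two-tailed 0.05 critical value is $\approx 2.262$. The function name and sample values are illustrative, not the actual experimental results:

```python
import math

def paired_t_statistic(x, y):
    """t statistic of a paired t-test between matched per-fold scores."""
    d = [a - b for a, b in zip(x, y)]
    n = len(d)
    mean = sum(d) / n
    var = sum((v - mean) ** 2 for v in d) / (n - 1)  # sample variance
    return mean / math.sqrt(var / n)

# With df = n - 1 = 9, reject the null at alpha = 0.05 if |t| > 2.262.
```

Libraries such as SciPy additionally return the exact p-value, but the decision rule above suffices for a fixed significance level.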
At this point, we should stress that this study does not aim at proposing a state-of-the-art ranking model for this specific dataset. Instead, it focuses on the importance of exploiting privileged information and on presenting a general methodology for ranking problems. The Afew-VA dataset is therefore used as a proof of concept.
\begin{figure}[!tb]
\begin{minipage}{1.0\linewidth}
\centering
\centerline{\includegraphics[width=1.0\linewidth]{pearson.pdf}}
\end{minipage}
\vspace{0.1in}
\begin{minipage}{1.0\linewidth}
\centering
\centerline{\includegraphics[width=1.0\linewidth]{kendall.pdf}}
\end{minipage}
\caption{Performance comparison of the AffRankNet+ and RankNet models in terms of Pearson's $r$ (top) and Kendall's $\uptau$ (bottom).}
\label{fig:performance}
\end{figure}
\section{Conclusions}
This study introduces the AffRankNet+ model for ranking affect states using privileged information associated with preference scores. To the best of our knowledge, this is the first time that a neural-network-based ranking model follows the LUPI paradigm. Although this study assumes that preference scores are available, the formulation of the AffRankNet+ model is general enough to allow other types of information associated with preference scores to be used as privileged information. For example, facial action units could serve as privileged information for learning to rank using solely the pixel information of face images. We tested the ranking performance of AffRankNet+ on the publicly available Afew-VA dataset and compared it against the RankNet model. To conduct a fair comparison, the AffRankNet+ and RankNet models have the same architecture and are trained/validated on the same data points. The experimental results emphasize the importance of privileged information by indicating that AffRankNet+, when appropriately parameterized, can perform significantly better than the RankNet model.
\section{Introduction \label{intro}}
Even four decades after its discovery \citep{SS70}, the blue compact dwarf
(BCD) galaxy \object{I~Zw~18}\ continues to attract considerable interest and feed
intense debates in extragalactic research.
Its low oxygen abundance \citep{SS72}, established in numerous subsequent studies
\citep[][among others]{Lequeux79,French80,KD81,Pagel92,SK93,Martin96,VilchezIglesiasParamo98-IZw18,IT98a,IT98b,ICF99} to be 12+log(O/H)$\approx$7.2, makes it the third most metal-poor star-forming (SF)
galaxy in the nearby Universe, after \object{SBS\ 0335-052\,W}
\citep{Izotov05-SBS0335,Papaderos06-SBS0335,Izotov09-SBS0335} and
\object{DDO68} \citep{Pustilnik05-DDO68,IT07-DDO68}.
Despite a meanwhile long record of extremely metal-poor (12+log(O/H)$\la$7.6) BCDs
(hereafter XBCDs) discovered in the recent years
\citep[see e.g.][for a review]{Papaderos08,Guseva09-LZ},
\object{I~Zw~18}\ remains the unconquered prototypical example of this enigmatic
galaxy class.
\object{I~Zw~18}\ was originally described by \cite{Zwicky66} as a pair of compact galaxies,
later on recognized to be SF regions within the same galaxy, the brighter
northwestern (NW) and the fainter southeastern (SE) component, separated by
$\approx$6\arcsec\ (cf Fig. \ref{IZw18-Fig1}).
Subsequent work has shown that these regions are embedded within an extended,
low-surface brightness (LSB) envelope
\citep{Davidson89,DufourHester90,Ostlin96,Dufour96a,Martin96},
whose rich filamentary substructure was impressively revealed with the
advent of the {\sl HST}\ \citep{HT95,Dufour96b}.
That nebular emission (hereafter \nsf ne\rm) is very strong in the central part of
\object{I~Zw~18}\ and its north-western super-shell was spectroscopically documented early on.
For example, \cite{Izotov01-IZw18} have shown on the basis of deep
long slit spectroscopy that the equivalent width (EW) of the H$\alpha$\ emission
line rises to 1700~$\AA$ northwest of region NW and that \nsf ne\rm\
is present as far away as 15\arcsec\ from it (regions labeled ``H$\alpha$\ arc''
and ``Loop'' in Fig. \ref{IZw18-Fig1}).
The EW(H$\alpha$) morphology of \object{I~Zw~18}\ was first studied
with high-resolution ground-based imagery by \cite{Ostlin96} who
described a horseshoe-shaped rim of intense (EW(H$\alpha$)$\simeq$1500 $\AA$) \nsf ne\rm\
encompassing region NW. This conspicuous EW pattern was later on confirmed through
{\sl HST}\ WFPC2 data \citep{Papaderos01-IZw18,Izotov01-IZw18} and, more
impressively, by \cite{VilchezIglesiasParamo98-IZw18} who were the first to
present a 2D spectroscopic study of the chemical abundance patterns
of the warm interstellar medium (ISM) in \object{I~Zw~18}.
Much less attention has been attracted by the fainter detached C component of
\object{I~Zw~18}\ (hereafter \object{I~Zw~18\,C}), located $\sim$ 22\arcsec\ northwest of region NW.
\cite{Dufour96a}, \cite{Petrosian97}, \cite{IT98a}, \cite{vanZee98-IZw18} and \cite{Izotov01-IZw18}
have shown it to have the same recession velocity as the main body, thus establishing its
physical association to \object{I~Zw~18}. This was also shown through interferometric 21cm studies
\citep[][see also \cite{Via87}]{vanZee98-IZw18} which revealed that \object{I~Zw~18}\ and \object{I~Zw~18\,C}\ are
immersed within a large common HI complex with a projected size of 60\arcsec $\times$
45\arcsec\ connecting with a $\ga$1\arcmin\ southern tail with no optical counterpart.
The SF activity in \object{I~Zw~18\,C}\ is known to be weak with its EW(H$\alpha$) not exceeding $\sim$60 $\AA$
along its major axis \citep[][see also \citet{vanZee98-IZw18}]{Izotov01-IZw18}.
Despite deep Keck\,II spectroscopy, \cite{Izotov01-IZw18} failed to detect
oxygen lines, so its oxygen abundance is not known.
\begin{figure*}[ht]
\begin{picture}(17.4,16.4)
\put(0.2,0){{\psfig{figure=lr_fig1.eps,width=18.0cm,angle=0,clip=}}}
\end{picture}
\caption[]{
Three-color composite image of \object{I~Zw~18}\ and \object{I~Zw~18\,C}, combining
{\sl HST}\ ACS data in $V$, $R$ and $I$ (blue, green and red channel,
respectively). The position of the star-forming regions NW and SE
is indicated by crosses. The regions labeled ``Loop'' and ``H$\alpha$ arc''
were studied through deep Keck\,II long slit spectroscopy by \cite{Izotov01-IZw18}.
The region $\omega$ at the southeastern tip of \object{I~Zw~18}\ shows comparatively weak
nebular emission; its colors therefore allow meaningful constraints to be placed
on the age of the stellar component \citep{Papaderos02-IZw18}.
In the magnified version of region NW (inset), combining the unsharp masked
images $I_{\rm c}$, $R_{\rm c}$ and $V_{\rm c}$ (see Sect. \ref{results}
for details), about 30 point sources, surrounded by a complex network
of ionized gas shells are discernible. The irregular blue galaxy \object{I~Zw~18\,C}\
is located $\sim$22\arcsec\ northwest of region NW ($\approx$2 kpc at
the assumed distance of 19 Mpc to \object{I~Zw~18}). It shows faint nebular
emission in its bluer southeastern tip and central star cluster
complex C \citep{Izotov01-IZw18}. North is at the top and east to the left.
}
\label{IZw18-Fig1}
\end{figure*}
Traditionally, the distance to \object{I~Zw~18}\ has been taken to be 10 Mpc, assuming a
pure Hubble flow recessional velocity. However, \cite{Izotov99-IZw18} have argued
based on an {\sl HST}\ color-magnitude diagram (CMD) study that the distance to
\object{I~Zw~18}\ has to be at least 15 Mpc, and most likely $\sim$20 Mpc, in order for
its brightest stars to be massive enough to account for the ionizing flux observed.
This upper distance value has recently received independent support by
\cite{Fiorentino10}. These authors have identified three long-period Cepheid
candidates in \object{I~Zw~18}, which, if interpreted as classical Cepheids, imply by the
Wesenheit relation a distance of 19.0$^{+2.8}_{-2.5}$ Mpc.
In the following, we adopt throughout a distance of $D$=19 Mpc to both \object{I~Zw~18}\ and
\object{I~Zw~18\,C}\ and convert distance-dependent quantities from the literature
accordingly. It should be noted, however, that the assumed distance has practically
no influence on the main conclusions from this study.
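Converting distance-dependent quantities from the commonly assumed $D=10$ Mpc to the adopted $D=19$ Mpc follows the usual scalings; a minimal numerical illustration (the factors below are generic geometry, not values taken from the cited works):

```python
import math

D_old, D_new = 10.0, 19.0                 # distances in Mpc
size_factor = D_new / D_old               # linear sizes scale as D        (x1.9)
lum_factor = (D_new / D_old) ** 2         # luminosities scale as D^2      (x3.61)
# Absolute magnitudes brighten by 5*log10(D_new/D_old) ~ 1.39 mag:
dmag = -5.0 * math.log10(D_new / D_old)
```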
The wealth of dedicated studies of \object{I~Zw~18}\ highlight the importance placed on
this XBCD as a precious nearby laboratory for exploring collective star
formation and the associated feedback process under metallicity conditions
approaching those in distant protogalactic systems.
Some examples include the consideration of \object{I~Zw~18}\ as a reference object for many
dwarf galaxy chemical evolution models
\citep[][among others]{MatteuchiTosi85,RoyKunth95,MHK99,Legrand00-IZw18,Recchi04-IZw18},
the great deal of effort put in the determination of its chemical abundance
patterns in its neutral ISM \citep{Kunth94-IZw18,Aloisi03-H2,LdE04-IZw18},
and in the study of its dust and molecular gas content
\citep[e.g.][]{Cannon01,Leroy07-IZw18},
the thorough exploration of the excitation mechanisms of its brightest H{\sc ii} region
\citep{StasinskaScharer99-IZw18,Pequignot08-IZw18}, and the deep spectroscopic
studies that led to the discovery of Wolf-Rayet stellar features in it
\citep{Izotov97-IZw18-WR,Legrand97-IZw18-WR}.
But arguably, the most longstanding debate associated with \object{I~Zw~18}\ ever since
its discovery concerns its evolutionary status.
In this regard, various interpretations have been put forward, ranging
from \object{I~Zw~18}\ being a \emph{bona fide} young galaxy, currently forming
its \emph{first} stellar
generation \cite[][see also \citet{IzotovThuan99-heavy-elements}]{SS70},
all through the diametrically opposite picture of an ancient
``slowly cooking'' dwarf galaxy that is forming stars continuously over the
Hubble time \citep{Legrand00-IZw18,Legrand01-IZw18}.
Notwithstanding an impressive amount of high-quality multiwavelength data
and numerous dedicated analyses of considerable effort and sophistication,
the convergence towards a consensual view on the evolutionary status of
\object{I~Zw~18}\ has been slow.
CMD analyses, based on {\sl HST}\ data, have primarily been focusing on the
question of whether or not \object{I~Zw~18}\ contains a sizeable population of evolved
red giant branch (RGB)
stars, similar to typical \citep[12+log(O/H)$\ga$8, see e.g.][]{KunthOstlin00} BCDs.
In the latter, an extended envelope of resolved RGB stars around the
SF component \citep[e.g.][]{Tosi01,Crone02} nicely echoes the long-established
observational fact of an evolved underlying host galaxy in
these systems \citep[e.g.][]{LT86,P96a,LM01a,BergvallOstlin02,Noeske03-NIR,GildePazMadore05}.
Initial CMD studies suggested for the main body an age between several
10 Myr \citep{HT95,Dufour96b} and $\sim$~1 Gyr \citep{Aloisi99-IZw18}.
\citet{Ostlin00} argued from an {\sl HST}\ NICMOS near infrared (NIR)
study that a fit to the $J$ vs. $J-H$ CMD of \object{I~Zw~18}\ is best achieved for
a stellar population of age $\sim$5~Gyr. The subsequent identification of
five carbon star candidates with an estimated age 0.5--1 Gyr by
\cite{OstlinMouhcine05} is in accord with that conclusion, even though the
number of evolved star candidates in all above studies (about a dozen
altogether) was recognized to be surprisingly small compared to typical
BCDs of equal luminosity.
A significant step forward has been possible through the advent of {\sl HST}~ACS,
allowing \cite[][hereafter IT04]{IT04-IZw18} to extend point source photometry
to magnitudes as faint as 29 mag in the $V$ and $I$ band and revisit the
question of the presence of RGB stars in \object{I~Zw~18}.
IT04 found, in addition to numerous blue main sequence and blue and red
supergiants with an age $\la$100 Myr, an older population of
asymptotic giant branch (AGB) stars with an age between 0.1 and 0.5 Gyr.
This study, in which no RGB were detected, has been the first to also explore
the spatial distribution of stars of different ages in \object{I~Zw~18}.
The upper age limit of 0.5 Gyr for the oldest stars in \object{I~Zw~18}\ (IT04) was
subsequently relaxed from a re-analysis of the same data by
\citet{YakobchukIzotov06} which revealed an untypically small number
of RGB candidates. Various other efforts have been made to improve on the
CMD analysis of IT04 by pushing point source photometry to
1--2 mag fainter levels \citep{Momany05,Tosi07-IZw18,Tikhonov07-IZw18}.
The faintest ($>$29 mag) point sources in those CMDs cover almost uniformly
the color range between $<$--1 mag and $>$2 mag.
It is worth pointing out that, whereas divergent in their conclusions
regarding stellar age, all CMD analyses for \object{I~Zw~18}\ consistently indicate
a conspicuous absence of an extended stellar LSB envelope surrounding
regions NW\&SE, at sharp contrast to any previously studied BCD.
As for \object{I~Zw~18\,C}, CMD analyses yield an upper age between a few tens and a hundred Myr
\citep{Dufour96b,Aloisi99-IZw18}. Recently, \citet{Jamet10-IZw18} employing a probabilistic
CMD modeling technique reported an upper age of $\sim$125 Myr, without, however,
strictly ruling out the presence of older stars.
An age of the same order was previously inferred for \object{I~Zw~18\,C}\ from a combined CMD
and evolutionary spectral synthesis study by \cite{Izotov01-IZw18}.
From the viewpoint of surface photometry, diametrically different
conclusions on the photometric structure and evolutionary status of
\object{I~Zw~18}\ were drawn by \citet[][hereafter K\"O00]{KunthOstlin00} and
\citet[][hereafter P02]{Papaderos02-IZw18}.
Nevertheless, these two studies were the first to demonstrate on the basis of
surface photometry that \object{I~Zw~18}\ is not presently forming its first stellar
generation but contains a substantial unresolved stellar background of
intermediate age.
As a matter of fact, much of the disparity between these studies has been due
to the different importance they ascribed to the presence and photometric
impact of \nsf ne\rm.
K\"O00\ concluded that SF activity in \object{I~Zw~18}\ is hosted by an old, extended
stellar disk that dominates the stellar mass, just like in typical BCDs.
Their rationale has mainly been based on their finding that \object{I~Zw~18}\ shows
an exponential intensity decrease and reddish ($B$--$R$$\approx$0.6 mag)
colors in its LSB envelope (9\arcsec$\la${\sl R}$^{\star}$$\la$20\arcsec).
On the assumption that stellar emission dominates throughout, this color
translates by a continuous star formation model to an age of $\ga$5 Gyr.
The central surface brightness $\mu_0$ and exponential scale length
$\alpha$ of the disk, read off the $B$ band surface brightness profile
(SBP) of K\"O00, imply that the old stellar host contains $\sim$1/2 of the
emission and the bulk of the stellar mass in \object{I~Zw~18}.
Note that the stellar disk interpretation for \object{I~Zw~18}\ is in qualitative agreement
with the evolutionary scenario by
\citet[][see also \cite{Legrand00-IZw18}]{Legrand01-IZw18}.
These authors argued that the low and uniform gas-phase
metallicity of \object{I~Zw~18}\ is reproducible through continuous low-level star
formation throughout the main HI complex of \object{I~Zw~18}\ (45\arcsec$\times$60\arcsec)
over the past $\sim$14 Gyr.
This process would produce an extended stellar disk
of extremely low surface brightness ($\overline{\mu}\simeq$28 $B$ mag/$\sq\arcsec$).
P02 \citep[see also][]{Papaderos01-IZw18}, on the other hand, called into
question the conclusions by K\"O00\ by invoking various lines of observational evidence.
First, they have empirically shown that an exponential outer intensity
drop off is a generic property of the nebular halo of starbursting dwarf galaxies.
Consequently, the exponentiality of the LSB envelope of \object{I~Zw~18}\
is not \emph{per se} a compelling argument for it being due to a stellar disk.
This also applies to its reddish colors which can naturally be accounted for
by photoionized gas of subsolar metallicity \citep[see e.g.][]{Krueger95}.
Secondly, and in a more straightforward approach, P02 used {\sl HST}\ WFPC2 narrow
band images to bidimensionally subtract the [O{\sc iii}] and H$\alpha$\ line
emission from broad band {\sl HST}\ data in order to isolate and study the residual
stellar emission in \object{I~Zw~18}.
This correction led to the virtual removal of the filamentary LSB envelope,
proving its gaseous nature. Specifically, P02 have shown that \nsf ne\rm\ dominates
the line-of-sight intensity already at a photometric radius
{\sl R}$^{\star}$$\ga$6\arcsec\ and contributes between 30\% and 50\% of the
$R$ band luminosity of \object{I~Zw~18}.
Broad band images, after decontamination from nebular line emission (though still
affected by nebular continuum emission), were then used
to study the photometric structure and color distribution of the \emph{stellar} component of \object{I~Zw~18}.
SBPs computed from them have revealed a very compact host which,
at sharp contrast to typical BCDs, shows practically no radial color
gradients and overall very blue ($\la$0.1 mag) colors down to a limiting
surface brightness $\mu\sim$26 mag/$\sq\arcsec$.
These exceptional properties were interpreted as evidence for youth:
young stars in \object{I~Zw~18}\ have not had enough time to migrate significantly
far from their initial locus and gradually form the extended stellar host
that is typical of evolved BCDs. This, and the blue optical and NIR colors of the
southeastern tip of \object{I~Zw~18}\ (region $\omega$ in the notation of P02,
cf Fig. \ref{IZw18-Fig1}) where \nsf ne\rm\ is weak led P02 to conclude
that most of the stellar mass in \object{I~Zw~18}\ has formed within the past 0.5 Gyr.
Hence, the picture put forward by P02 is that \object{I~Zw~18}\ is a cosmologically
young object that presently undergoes its dominant formation phase and
contains a small, if any, mass fraction of stars older than $\sim$1 Gyr.
Further support to this conclusion came from a subsequent NIR study of
\object{I~Zw~18}\ by \cite{Hunt03-IZw18}.
As for \object{I~Zw~18\,C}, the nearly constant blue colors ($\approx$0 mag)
determined within its Holmberg radius, lent further support to
the youth interpretation that was previously advocated by \cite{Izotov01-IZw18}.
However, important aspects of the \object{I~Zw~18}\ system could not be conclusively
addressed from previous photometric studies.
For example, whereas SBPs for \object{I~Zw~18\,C}\ by P02 reach a surface brightness level
$\mu\sim28$ $B$ mag/$\sq\arcsec$, their large photometric uncertainties already
below $\approx$26 mag/$\sq\arcsec$\ have practically prevented an assessment of whether
a redder underlying stellar population dominates in the extremely faint periphery of the galaxy.
Clearly, this issue is central to the understanding of the evolutionary status of \object{I~Zw~18\,C}.
One may argue that, since the evolved stellar host generally dominates for
$\mu\ga24.5$ $B$ mag/$\sq\arcsec$\ \citep[][hereafter P96a]{P96a}, it would have been
detected in \object{I~Zw~18\,C}, had it been present.
This is a circular argument, however, given that empirical relations established
for typical BCDs should not be taken for granted for young dwarf galaxy candidates.
Similarly, due to the shallowness of previous surface photometry
no definite conclusions could be drawn regarding the ultra-LSB disk predicted
by \cite{Legrand00-IZw18}.
As for the main body of \object{I~Zw~18}, previous studies did not have the
sensitivity to pin down the maximal extent, morphology and color pattern
of the LSB envelope, adding potentially important constraints
to chemodynamical and spectrophotometric models for \object{I~Zw~18}.
From such considerations, extremely deep surface photometry appears
crucially important for further advancing our understanding of the
photometric structure and evolutionary status of \object{I~Zw~18}\ and \object{I~Zw~18\,C}.
This is particularly true for deep $I$ band surface photometry
which is entirely lacking both for \object{I~Zw~18}\ and \object{I~Zw~18\,C}.
Note that $I$ band {\sl HST}\ WFPC2 images included the main body of
\object{I~Zw~18}\ only and were not deep enough for a study of its LSB envelope.
This data set was therefore not used in the surface photometry analysis by P02.
This study is motivated by the availability of an unprecedently deep set
of archival {\sl HST}\ ACS $V$, $R$ and $I$ broad band data that has accumulated over the
past few years. $I$ band photometry is an important asset in this respect,
not only due to its higher sensitivity to a putative old stellar
background but also because it offers, in combination with $V$ and $R$ data,
a sensitive tracer of \nsf ne\rm\ both in the center and the LSB envelope
of \object{I~Zw~18}. This is because the $I$ band is affected by nebular continuum emission
only, whereas the $V$ and $R$ transmission curves additionally include, respectively,
the strong [O{\sc iii}]$\lambda\lambda$4959,5007 and H$\alpha$\ emission lines.
As a result, regions with strongly enhanced \nsf ne\rm\ can readily be identified by
their extremely blue (--0.5 \dots --1.4) $V$--$I$ and $R$--$I$ colors
\citep[see e.g.][P02 and references therein for a discussion and
examples among XBCDs]{Papaderos98-SBS0335}.
This paper is organized as follows: in Sect. \ref{data} we discuss the data
processing and SBP derivation technique used and in Sect. \ref{results} the
structural and morphological properties of \object{I~Zw~18\,C}\ and \object{I~Zw~18}.
Section \ref{discussion} concentrates on the evolutionary status and
the formation process of \object{I~Zw~18\,C}\ (Sect. \ref{IZw18C-evol}) under consideration of
the effect that the diffusion of young stars would have on the observed
colors (Sect. \ref{mass-filtering}).
The evolutionary status of \object{I~Zw~18}\ and the hypothesis of an ultra-LSB
underlying stellar disk are discussed on the basis of the
present photometric analysis in Sects. \ref{age-IZw18} and \ref{izw18-disk},
respectively.
In Sect. \ref{iz18-z} we use \object{I~Zw~18}\ as template to briefly explore the biases
that extended \nsf ne\rm\ may introduce in photometric studies of morphologically
analogous star-forming galaxies at higher redshift ($z$).
The main results from this study are summarized in Sect. \ref{Conclusions}.
\section{Data processing \label{data} }
This study is based on the entire set of archival {\sl HST}\ ACS broad band images
for \object{I~Zw~18}\ that has been acquired through the {\sl HST}\ programs 9400 (PI: Thuan) and
10586 (PI: Aloisi). It comprises 38, 65 and 81 images in the filters
F555W ($V$), F606W (broad $VR$, referred to in the following as $R$)
and F814W ($I$), summing up to on-source exposures of 87, 55 and 101
ksec, respectively. This is the deepest imaging data set currently available
for \object{I~Zw~18}, with an integration time in $R$ and $I$ equaling $\sim$1/3 of
the time spent on the {\sl HST}\ ACS Ultra Deep Field \citep{Beckwith06-ACSUDF}
in the filters F606W and F775W.
\begin{figure}
\begin{picture}(17.4,9.4)
\put(0.1,0.){{\psfig{figure=lr_fig2.eps,width=8.8cm,angle=0,clip=}}}
\end{picture}
\caption[]{Combined $I$ band exposure of \object{I~Zw~18}\ and \object{I~Zw~18\,C}\ with point- and compact sources
in the vicinity of the galaxies (polygonal regions A and B) marked with
rectangles. North is up and east to the left.
}
\label{fig:IZw18_cps}
\end{figure}
The data processing was carried out using
IRAF\footnote{IRAF is the Image Reduction and Analysis Facility distributed by
the National Optical Astronomy Observatory, which is operated by the
Association of Universities for Research in Astronomy (AURA) under
cooperative agreement with the National Science Foundation (NSF).} and ESO
MIDAS\footnote{Munich Image Data Analysis System, provided by the European
Southern Observatory (ESO)}. Photometric quantities refer to the Vega system.
Since the main goal of this study is deep surface photometry, its most
critical aspect is the removal of diffuse and compact background
sources (diffuse extragalactic background, zodiacal light, foreground
stars and background galaxies, respectively).
After subtraction of the diffuse background, we therefore checked to
what extent compact or point sources (\nsf cps\rm) within the extended
LSB envelope of \object{I~Zw~18}\ can affect SBPs and color profiles.
For example, for an extended source of constant surface brightness
$\mu=29$ mag/$\sq\arcsec$, the total apparent magnitude $m$ of a circular annulus with
19\arcsec$\leq${\sl R}$^{\star}$$\leq$20\arcsec\ (roughly the radius of \object{I~Zw~18}) is 23.8 mag.
At this intensity level, already a single faint ($m$=25 mag) background
\nsf cps\rm\ can introduce an error of 0.3 mag in surface photometry.
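Both numbers follow directly from the relation $m = \mu - 2.5\log_{10}(A)$, with $A$ the area in $\sq\arcsec$; a quick numerical check (illustrative only):

```python
import math

# Apparent magnitude of a uniform mu = 29 mag/arcsec^2 annulus, 19" <= R <= 20"
area = math.pi * (20.0**2 - 19.0**2)        # ~122.5 arcsec^2
m_annulus = 29.0 - 2.5 * math.log10(area)   # ~23.8 mag

# Contamination by a single unremoved m = 25 mag background source:
flux_ratio = 10.0 ** (-0.4 * (25.0 - m_annulus))
delta_m = -2.5 * math.log10(1.0 + flux_ratio)   # ~ -0.3 mag brightening
```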
We adopted the following procedure: after image alignment and correction
for cosmics, we used the combined images in the three filters to compile
a catalogue of \nsf cps\rm\ in the relevant portion of the field of view.
In doing this, we disregarded \nsf cps\rm\ in \object{I~Zw~18}\ (roughly the area subtended by
the 25 $R$\arcmin\ mag/$\sq\arcsec$\ isophote in Fig. \ref{IZw18HaImage}) and in \object{I~Zw~18\,C},
as well as compact clumps of \nsf ne\rm, identified by their
blue $V$--$I$ and $R$--$I$ color.
All \nsf cps\rm\ in each frame were in turn replaced by the mean intensity
in the adjacent area.
Special care was given to the removal of two background galaxies
close to the western super-shell of \object{I~Zw~18}. This was done by subtracting
a 2D model, computed with the \cite{BenderMollenhoff} algorithm.
In total, $\sim$140 \nsf cps\rm\ were removed in the field of interest around
\object{I~Zw~18}\ and \object{I~Zw~18\,C}\ (polygonal regions labeled A and B in Fig. \ref{fig:IZw18_cps}).
Their integral $I$ magnitudes of 19.17 mag and 21.3 mag within
the regions considered ($\sim$1600 $\sq\arcsec$ and $\sim$200
$\sq\arcsec$ for A and B, respectively) correspond to a mean
surface brightness of $\sim$27 $I$ mag/$\sq\arcsec$.
This value is consistent at the 1.5$\sigma$ level with the value of 26.7 $I$ mag/$\sq\arcsec$\
inferred by \citet{ZMO09} for the resolved extragalactic background light emission.
The isophotal radius of the \emph{stellar} component of \object{I~Zw~18}\ and \object{I~Zw~18\,C}\ at
$\mu$=26 $B$ mag/$\sq\arcsec$\ was determined by P02 to be 8\farcs8 and 5\farcs5,
respectively. From their combined isophotal area of $\sim$340 $\sq\arcsec$
and the above derived \nsf cps\rm\ surface density of $\sim$0.08 $\sq\arcsec^{-1}$, we
expect about 27 background \nsf cps\rm\ in \object{I~Zw~18}\ and \object{I~Zw~18\,C}\ altogether.
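These surface-brightness and number estimates follow from simple arithmetic; a minimal sketch using only the values quoted above:

```python
import math

# Mean surface brightness of the removed cps in regions A and B:
# integral I magnitudes of 19.17 and 21.3 mag spread over
# ~1600 and ~200 arcsec^2, respectively.
mu_A = 19.17 + 2.5 * math.log10(1600.0)  # ~27.2 mag/arcsec^2
mu_B = 21.3 + 2.5 * math.log10(200.0)    # ~27 mag/arcsec^2

# Expected number of background cps within the combined isophotal
# area (~340 arcsec^2) at a surface density of ~0.08 arcsec^-2:
n_expected = 0.08 * 340.0                # ~27 sources
```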
SBPs computed prior to and after removal of \nsf cps\rm\ were found in all bands to be
consistent within 1$\sigma$ uncertainties, ensuring that \nsf cps\rm\ contamination
does not affect our conclusions in Sect. \ref{results}.
SBPs were computed with the code {\nlx iv} (P02) \citep[also referred to as
{\tt Lazy} by][]{Noeske06-UDF} that was specifically developed for the study
of irregular galaxies.
This code permits simultaneous processing of co-aligned images of a galaxy
in several bands; it requires no choice of a
galaxy center, nor does it implicitly assume that the galaxy can be approximated
by the superposition of axisymmetric luminosity components.
One of its key features is the computation of photon statistics within automatically
generated irregular annuli that are adjusted to the galaxy morphology for each
surface brightness interval $\mu\pm\Delta\mu$.
This distinguishes code {\nlx iv} from other surface photometry packages
which generally employ ellipse-fitting to isophotes or photon
statistics within elliptical annuli (e.g. {\nlx meth. i} of P96a, task ELLIPSE in
IRAF, FIT/ELL3 in MIDAS), or approximate a galaxy by a single or several 2D
axisymmetric components (e.g. GIM2D, \citealt{Simard98}, and GALFIT, \citealt{Peng02}).
\section{Results \label{results} }
\subsection{The photometric structure of I\ Zw\ 18\,C \label{phot:IZw18C} }
\begin{figure*}
\begin{picture}(16.4,12.6)
\put(0,5.0){{\psfig{figure=fig3a.ps,width=8.7cm,angle=-90}}}
\put(0.05,0.){{\psfig{figure=fig3b.ps,width=8.7cm,angle=-90}}}
\put(9,5.0){{\psfig{figure=fig3c.ps,width=8.7cm,angle=-90}}}
\put(9.05,0.){{\psfig{figure=fig3d.ps,width=8.8cm,angle=-90}}}
\end{picture}
\caption[]{{\bf (a)} Surface brightness profiles (SBPs) of \object{I~Zw~18\,C}\ in $V$
(F555W), $R$ (F606W) and $I$ (F814W). The thick gray curve shows a fit
to the $V$ SBP for {\sl R}$^{\star}$$\geq$3\farcs8 with the modified exponential fitting
function Eq. \ref{eq:p96a} ({\tt modexp}) for a core radius
$R_{\rm c}=2.4\cdot\alpha$ (dotted vertical line).
The effective radius $R_{\rm eff}$ and the radius $R_{80}$ enclosing
80\% of the total $V$ emission are indicated.
The dashed line shows a linear fit to the outermost exponential part
of the $V$ SBP for {\sl R}$^{\star}$$\geq$6\arcsec, i.e. at the extremely faint
($\mu\geq 27.6$ mag/$\sq\arcsec$) outskirts of the galaxy.
The shadowed area corresponds to the 1$\sigma$ uncertainties of the $V$ SBP.
{\bf (b)} Radial $V$--$R$ and $R$--$I$ color profiles of \object{I~Zw~18\,C}, derived from the
SBPs in the upper panel.
{\bf (c)} Comparison of the best-fitting {\tt modexp} model in panel a (thick
gray curve) with the $V$ SBPs of \object{I~Zw~18\,C}\ after partial removal of the brightest
point sources ($V_{\star}$; squares).
Open circles show the SBP of the \emph{unresolved} stellar emission
($V_{\rm d}$), computed after complete removal of compact ($\leq$0\farcs5)
sources with an unsharp masking technique.
{\bf (d)} Comparison of the color profiles in panel b (open and filled circles)
with the ($V$--$I$)$_{\rm d}$ color profile (squares) of the unresolved
stellar emission.
The shadowed area depicts the 1$\sigma$ uncertainties of the $V$--$R$ profile.}
\label{IZw18C_sbp}
\end{figure*}
\begin{figure*}
\begin{picture}(16.4,17.7)
\put(0,0){{\psfig{figure=lr_fig4.eps,width=18.4cm,angle=0,clip=}}}
\end{picture}
\caption[]{{\bf (a)} Three-color composite of the images $R_{\rm c}$, $V_{\rm c}$ and
$I_{\rm c}$ (red, green and blue image channel, respectively), illustrating the
spatial distribution of compact ($\leq$0\farcs5) sources in \object{I~Zw~18\,C}.
The regions C\,{\sc i}, C\,{\sc ii} and C\,{\sc iii} defined by \cite{IT04-IZw18} are indicated.
The blown-up version of the central part of \object{I~Zw~18\,C}\ (lower-right) shows the
ionized gas shell in the vicinity of the bright stellar cluster C
\citep[cf][]{Dufour96b}.
{\bf (b)} Three-color composite of $R_{\rm d}$, $V_{\rm d}$ and $I_{\rm d}$,
of the \emph{unresolved} stellar emission. The color coding is the same as in panel a.
{\bf (c)} $V$--$I$ color map of \object{I~Zw~18\,C}, revealing very blue colors ($V$--$I<$0)
all over the southeastern third (region C\,{\sc i}) of the galaxy.
The contour corresponds to 25 $V$ mag/$\sq\arcsec$.
{\bf (d)} Comparison of the spatial distribution of compact and point sources in
$V_{\rm c}$ and $R_{\rm c}$ (blue and green channel, respectively)
with the unresolved stellar background ($I_{\rm d}$, red channel).
}
\label{IZw18C_hb}
\end{figure*}
In this section we investigate the photometric structure of \object{I~Zw~18\,C}, based on the combined
{\sl HST}\ ACS data in the filters $V$, $R$ and $I$. These allow us to extend previous {\sl HST}\ WFPC2
surface photometry (P02) by $\ga$1 mag, with significantly reduced uncertainties below $\mu\sim26$ mag/$\sq\arcsec$.
In agreement with previous work, the SBPs of \object{I~Zw~18\,C}\ in all bands (Fig. \ref{IZw18C_sbp}a)
were found to be nearly indistinguishable from one another, implying nearly constant
radial colors. All SBPs display a narrow ({\sl R}$^{\star}$$\leq$1\arcsec) central excess and an
outer (3\arcsec$\la${\sl R}$^{\star}$$\la$9\arcsec) roughly exponential drop-off that mainly
reflects the luminosity output from the host galaxy.
A salient feature at intermediate radii (1\arcsec$\leq${\sl R}$^{\star}$$\la$3\arcsec)
is a shallower intensity increase than predicted by inward extrapolation of the
outer profile slope to {\sl R}$^{\star}$=0\arcsec.
An adequate fit to such SBPs therefore requires a modified
exponential model that involves an extended ({\sl R}$^{\star}$$\sim$3\arcsec) central core
of nearly constant surface brightness.
One such fitting function (hereafter {\tt modexp}), used by P96a
to fit the host galaxy of BCDs
\citep[see][for applications on near infrared (NIR) studies and a comparison
with the S\'ersic law]{Noeske03-NIR} has the form:
\begin{equation}
I(R^*) = I_{\rm exp} \cdot
\big[1-\epsilon_1\,\exp(-P_3(R^*))\big],
\label{eq:p96a}
\end{equation}
where $P_3(R^*)$ is defined as
\begin{equation}
P_3(R^*) =
\left(\frac{R^*}{\epsilon_2\,\alpha}\right)^3+\left(\frac{R^*}{\alpha}\,\frac{1-\epsilon_1}{\epsilon_1}\right).
\label{eq:p96b}
\end{equation}
In addition to the central intensity $I_0$ and scale length $\alpha$ of a pure
exponential profile $I_{\rm exp}=I_0\cdot\exp(-R^*/\alpha)$, Eqs. \ref{eq:p96a}\&\ref{eq:p96b}
involve two further parameters: the central intensity depression
$\epsilon_1=\Delta I/I_0$ relative to an exponential model and the
core radius $R_{\rm c}=\epsilon_2 \cdot \alpha$.
The best-fitting {\tt modexp} model to the $V$ band SBP for $\epsilon_{1,2}$=(0.8,2.4) (cf P02)
is shown in Fig. \ref{IZw18C_sbp}a with the thick gray curve.
It yields in all bands an \emph{extrapolated} central surface brightness
$\mu_{\rm E,0}$=21.7$\pm$0.2 mag/$\sq\arcsec$, a \emph{true} central surface brightness
$\mu_0=\mu_{\rm E,0}-2.5\,\log(1-\epsilon_1)$=23.45 mag/$\sq\arcsec$\ and an $\alpha$ in
the narrow range between 108 and 117 pc. The absolute $V$ magnitude
determined from the {\tt modexp} fit (--11.67 mag) corresponds to
$\sim$80\% of the total luminosity of \object{I~Zw~18\,C}.
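For illustration, the {\tt modexp} law of Eqs. \ref{eq:p96a} and \ref{eq:p96b} can be evaluated numerically. The sketch below (ours, not part of the original analysis) assumes $\epsilon_1$=0.8 for the central depression and $\epsilon_2$=2.4 for the core radius in units of $\alpha$ (consistent with $R_{\rm c}$=2.4$\alpha$ in the caption of Fig. \ref{IZw18C_sbp}), which reproduces the quoted 1.75 mag offset between $\mu_0$ and $\mu_{\rm E,0}$:

```python
import math

def modexp(R, I0, alpha, eps1=0.8, eps2=2.4):
    """Modified exponential profile; eps1 is the central intensity
    depression, eps2*alpha the core radius R_c."""
    I_exp = I0 * math.exp(-R / alpha)
    P3 = (R / (eps2 * alpha))**3 + (R / alpha) * (1.0 - eps1) / eps1
    return I_exp * (1.0 - eps1 * math.exp(-P3))

# Central depression in mag: -2.5 log10(1 - eps1) ~ 1.75,
# i.e. mu_0 = 21.7 + 1.75 = 23.45 mag/arcsec^2 as quoted.
depression = -2.5 * math.log10(modexp(0.0, 1.0, 1.0))
```

At large radii the bracket tends to unity and the profile converges to the pure exponential, as required.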
Note that a direct determination of the intensity profile of the host galaxy
of BCDs for radii {\sl R}$^{\star}$$\leq R_{\rm c}$ is generally prevented by the luminous
young stellar component that typically dominates out to
{\it R$_{\rm SF}$}$\approx$2$\alpha$ \citep[][hereafter P96b]{P96b}.
Since young stellar clusters (SCs) can hardly be sufficiently resolved and
subtracted out, even at the angular resolution of the {\sl HST}, the chosen parameter set
$\epsilon_{1,2}$ has to rely on plausibility arguments
\citep[see discussion in P96a and][]{Noeske03-NIR}
and is to be considered approximate only.
In the case of \object{I~Zw~18\,C}, however, due to the comparatively low surface density of
young SCs, one stands a better chance to directly constrain the central form
of the host's SBP.
For this, we first subtracted out the brightest $\sim$200 point sources from
the galaxy using DAOPHOT \citep{Stetson79-DAOPHOT} and subsequently recomputed
the $V$ SBP from the residual emission.
Figure \ref{IZw18C_sbp}c shows that this SBP (open squares, labeled $V_{\star}$)
closely matches the best-fitting {\tt modexp} model for intermediate to large {\sl R}$^{\star}$.
This agreement suggests that the adopted $\epsilon_{1,2}$ parameters yield a
reasonable first-order approximation to the unresolved emission of the host.
SBP integration indicates that roughly 30\% of \object{I~Zw~18\,C}'s emission within
$R_{80}$ (Fig. \ref{IZw18C_sbp}) is due to point sources.
This is likely a lower limit to the luminosity fraction of \nsf cps\rm,
given that with the adopted procedure SCs could not be fully deblended and
subtracted out. This is apparent from the still strong ($\sim$2 mag) central peak
of the $V_{\star}$ SBP that is mainly due to incomplete removal of the central
SC complex C and surrounding bright SCs (cf Fig. \ref{IZw18C_hb}a).
In an effort to better constrain the SBP of the \emph{unresolved} stellar
component of \object{I~Zw~18\,C}, we subsequently applied a flux-conserving unsharp masking technique
\citep[][hereafter P98]{Papaderos98-SBS0335} to filter out all
higher-surface brightness (HSB) clumpy features on angular scales $\leq$0\farcs5 and
isolate the diffuse $V$ emission only.
The frames holding the compact ($V_{\rm c}$, $R_{\rm c}$ and $I_{\rm c}$)
and diffuse ($V_{\rm d}$, $R_{\rm d}$ and $I_{\rm d}$) emission in each band
are displayed in panels {\nlx a} and {\nlx b} of Fig. \ref{IZw18C_hb}, respectively.
The $V_{\rm d}$ SBP (open circles in Fig. \ref{IZw18C_sbp}c)
is at intermediate radii (1\arcsec$\la${\sl R}$^{\star}$$\la$6\arcsec) fairly comparable to the
$V_{\star}$ SBP, except for a nearly constant offset by $\approx$0.3 mag.
Its corresponding absolute magnitude (--11.3 mag) is about 0.75 mag fainter
than the integral value for \object{I~Zw~18\,C}\ (--12.05 mag) and can be regarded as
characteristic of the \emph{unresolved} stellar emission in the host galaxy.
Note that all SBPs in Figs. \ref{IZw18C_sbp}a,c display at very faint levels
($\mu\ga 27.6$ mag/$\sq\arcsec$) a shallower outer ({\sl R}$^{\star}$$\ga$6\arcsec)
exponential slope (dashed line in Fig. \ref{IZw18C_sbp}a).
This feature is certainly not due to point spread function (PSF) convolution
effects since the maximum extent of the ACS PSF at its lowest
measured intensity (10 mag below its central value) is $\sim$1\farcs5
\citep{Jee07-ACSPSF}. \nsf ne\rm\ contamination from the main body
can likewise be excluded, both on the basis of narrow-band imaging
\citep[e.g.][P02]{Ostlin96} and because it would make the $V$--$I$ and
$R$--$I$ color profiles bluer, in disagreement with the slight reddening
of the color profiles for {\sl R}$^{\star}$$\ga$6\arcsec\ (see below).
This outermost SBP feature evaded detection in previous surface
photometry (P02), which, due to photometric uncertainties,
was practically restricted to within the Holmberg radius.
Because of its faintness (merely 3\% of the total emission) and
low surface brightness ($\sim$27.6 -- 29 mag/$\sq\arcsec$), its reality cannot
be established beyond doubt even with the present data.
If due to an underlying stellar host of constant exponential
slope to {\sl R}$^{\star}$=0\arcsec, its $\mu_0$ ($\approx 23.6$ mag/$\sq\arcsec$) and $\alpha$ (160 pc)
would qualify it as an LSB dwarf with a $M_V\approx$--11 mag
(equivalent to $\sim$38\% of \object{I~Zw~18\,C}'s emission).
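As a consistency check (ours, not part of the original analysis), the total magnitude of such a pure exponential disk follows from $m=\mu_0-2.5\log(2\pi\alpha^2)$, with $\alpha$ in arcsec. The distance of $\approx$18.7 Mpc assumed below is our estimate, implied by the arcsec-to-pc conversions quoted throughout:

```python
import math

D_PC = 18.7e6  # assumed distance in pc (implied by the quoted conversions)
PC_PER_ARCSEC = D_PC * math.radians(1.0 / 3600.0)  # ~90.7 pc/arcsec

mu0, alpha_pc = 23.6, 160.0                # quoted LSB-host parameters
alpha_as = alpha_pc / PC_PER_ARCSEC        # scale length in arcsec
m_app = mu0 - 2.5 * math.log10(2.0 * math.pi * alpha_as**2)
M_V = m_app - (5.0 * math.log10(D_PC) - 5.0)   # ~ -11 mag, as quoted

# Fraction of I Zw 18 C's total emission (M_V = -12.05 mag):
frac = 10.0 ** (-0.4 * (M_V + 12.05))          # ~0.38
```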
We turn next to the color distribution in \object{I~Zw~18\,C}.
Figure \ref{IZw18C_sbp}b shows that for 1\arcsec$\la${\sl R}$^{\star}$$\la$6\arcsec\ both
the $V$--$R$ and $R$--$I$ indices are very blue and nearly constant
($\approx$0 with a standard deviation about the mean of 0.05 mag).
Since {\sl R}$^{\star}$$=$6\arcsec\ encompasses practically the total emission of \object{I~Zw~18\,C},
this color may be regarded as representative of the galaxy as a whole.
In the outermost periphery of the galaxy, where the shallower outer exponential
component appears, the color profiles hold a hint of a slightly redder
$R$--$I$ (0.16$\pm$0.09) without a notable change in $V$--$R$.
The color profiles in their innermost ({\sl R}$^{\star}$$\leq$1\arcsec) part are
dominated by luminous \nsf cps\rm\ in the surroundings of region C (cf Fig. 1)
and show a large scatter (up to 0.15 mag) around mean values of 0 and 0.06 mag
for $V$--$R$ and $R$--$I$, respectively.
In order to place meaningful constraints on the evolutionary status of \object{I~Zw~18\,C},
it is necessary to verify that these blue colors are not due to the
luminosity-weighted average of bright blue SCs with a red underlying stellar
background. The latter could readily escape detection, even when dominating
the stellar mass (see e.g. P98).
This concern is underscored by two atypical properties of \object{I~Zw~18\,C}:
First, the galaxy exhibits a weak color gradient along its major
axis \citep[cf][]{Aloisi99-IZw18,Izotov01-IZw18}, with its northwestern tip
being redder ($B-V$ = 0.05 mag, $V-I$ = 0.2 mag) than the southeastern one
($B-V$ = --0.07 mag and $V-I$ = --0.2 mag).
This is apparent from Fig. \ref{IZw18C_hb}c, from which a $V$--$I$ color as blue as
--0.2 mag can be read off all over the southeastern third of the galaxy
(region C\,{\sc i} in the denomination by IT04).
By contrast, the average $V$--$I$ color in the central (C\,{\sc ii}) and
northwestern (C\,{\sc iii}) part of the galaxy is redder ($\geq$0 mag)
with several local color maxima ($\geq$0.3 mag) associated with \nsf cps\rm.
It is thus conceivable that the bluer southeastern and redder northwestern galaxy
halves counterbalance each other, thereby yielding an overall blue mean radial color.
Another characteristic of \object{I~Zw~18\,C}\ that could additionally conspire to diminish
radial color gradients is that the surface density of its \nsf cps\rm\ tends to be
spatially anti-correlated with the \emph{unresolved} stellar background of the host.
This is illustrated in Fig. \ref{IZw18C_hb}d where the diffuse emission of the
latter ($I_{\rm d}$: red channel) is overlaid with the \nsf cps\rm\ in the
$V_{\rm c}$ and $R_{\rm c}$ images (blue and green channel, respectively):
it can be seen that the diffuse component peaks at the northwestern half of \object{I~Zw~18\,C}\
(C\,{\sc ii} and C\,{\sc iii}) in which it accounts for $\approx$50\%
of the $V$ line-of-sight intensity, whereas its contribution drops to
$\la$30\% in the southeastern half of \object{I~Zw~18\,C}\ (region C\,{\sc i}) where most of the
bright blue \nsf cps\rm\ are located.
Consequently, especially in region C\,{\sc i}, a hypothetical red stellar
background could readily escape detection on radial color profiles.
From such considerations, and in order to infer the colors of the
unresolved stellar component of \object{I~Zw~18\,C}\ in as unbiased a manner as possible, we additionally
computed color profiles based on the $V_{\rm d}$, $R_{\rm d}$ and $I_{\rm d}$ SBPs only.
In all cases, we found a good agreement with the results initially obtained from
the total emission with the exception of a central ({\sl R}$^{\star}$$\la$1\arcsec) red peak
with mean values of ($V$--$R$)$_{\rm d}$=0.1$\pm$0.03 and
($V$--$I$)$_{\rm d}$=0.16$\pm$0.05.
This color excess might be attributed to a stellar age gradient or
enhanced extinction in region C and
surroundings. Note that \cite{Izotov01-IZw18} estimated from spectral synthesis
models an extinction coefficient C(H$\beta$)=0.1--0.3 for region C,
corresponding to $A_V$=0.2--0.65 mag.
\begin{figure}
\begin{picture}(8.6,11.8)
\put(0,4.2){{\psfig{figure=fig5a.ps,width=8.6cm,angle=-90}}}
\put(0.1,0){{\psfig{figure=fig5b.ps,width=8.6cm,angle=-90}}}
\end{picture}
\caption[]{
{\bf (upper panel)} $V$, $R$ and $I$ SBPs of \object{I~Zw~18}. The thick-solid and
thin-dashed lines show linear fits to the $V$ SBP in the radius range
7\arcsec$\leq${\sl R}$^{\star}$$\leq$15\arcsec\ and {\sl R}$^{\star}$$\geq$7\arcsec, respectively.
The effective radius R$_{\rm eff}$ and the radius R$_{\rm 80}$ enclosing 80\%
of the total $V$ emission are indicated.
The $V$ SBP derived by P02 after subtraction of the H$\beta$\ and
[O{\sc iii}]$\lambda\lambda$4959,5007 emission lines (referred to as
$V$\arcmin), shifted by +1 mag for the sake of clarity, is included for comparison.
{\bf (lower panel)} $V$--$R$ (filled circles), $V$--$I$ (squares) and $R$--$I$
(open circles) color profiles, computed from the SBPs in the upper panel.
The vertical bar indicates the $V$--$I$ color range inferred by P02 from ground-based
and {\sl HST}\ PC data for region $\omega$ at the southeastern tip of \object{I~Zw~18}\
($V$--$I$($\omega$) = 0.17 \dots 0.32 mag).
Since nebular emission is relatively weak in that region, its color can be considered
representative of the stellar host of \object{I~Zw~18}.
The mean $V$--$R$ color of \object{I~Zw~18}'s host after subtraction of nebular line emission
($V$\arcmin--$R$\arcmin=0.12$\pm$0.04 mag; P02) is indicated by the horizontal
line. Note that the colors $V$--$I$($\omega$) and $V$\arcmin--$R$\arcmin\
of the stellar host galaxy are, respectively, $\ga$0.8 mag redder and
$\approx$0.4 mag bluer than the colors of the exponential nebular envelope
({\sl R}$^{\star}$$\geq$6\arcsec).}
\label{IZw18SBP}
\end{figure}
As apparent from panel d of Fig. \ref{IZw18C_sbp}, the $V$--$I$ profile of
the unresolved stellar component (($V$--$I$)$_{\rm d}$, squares) is fairly comparable
to the $V$--$R$ and $R$--$I$ profiles (panel b), revealing a nearly constant color of
$\approx$0$\pm$0.04 mag within 1\arcsec$\leq${\sl R}$^{\star}$$\leq$6\arcsec\
and a slightly redder value (0.2$\pm$0.08 mag) in the extreme periphery of the
galaxy.
We are therefore led to conclude that the overall blue colors of \object{I~Zw~18\,C}\
for 1\arcsec$\la${\sl R}$^{\star}$$\la$6\arcsec\ are not dictated by bright young
SCs spread all over the body of the galaxy but are also characteristic of its
unresolved stellar component.
\subsection{The photometric structure of I\ Zw\ 18 \label{phot:IZw18} }
\begin{figure*}[ht]
\begin{picture}(16.4,9.7)
\put(0,0){{\psfig{figure=lr_fig6.eps,width=18.4cm,angle=0}}}
\end{picture}
\caption[]{$V$--$R$ (left panel) and $R$--$I$ (right panel) color map of
\object{I~Zw~18}, displayed in the range between 0 and 0.4 mag and --0.9 and 0 mag,
respectively. Crosses mark the star-forming regions NW and SE.
The region termed $\omega$ by P02 (cf Fig. \ref{IZw18-Fig1})
is indicated at the southeastern part of the image.
The insets in the left panel show a magnified view of a shell
$\approx$5\arcsec\ northwards of region NW.
}
\label{IZw18-VRImaps}
\end{figure*}
The $V$, $R$ and $I$ SBPs of \object{I~Zw~18}\ (Fig. \ref{IZw18SBP}) compare well to those
presented by K\"O00\ and P02. They exhibit a central ({\sl R}$^{\star}$$\la$6\arcsec)
high-surface brightness core and a nearly exponential LSB envelope extending out
to {\sl R}$^{\star}$$\sim$20\arcsec.
Profile fitting in the radius range 7\arcsec$\leq${\sl R}$^{\star}$$\leq$15\arcsec\ (solid line)
yields for the LSB component an $\alpha=2\farcs3$ (210 pc) and a luminosity fraction
of $\approx$30\%.
At fainter levels ($\ga$ 27.6 -- 30 mag/$\sq\arcsec$) all SBPs exhibit a shallower
exponential slope with $\alpha=4\farcs1$ (370 pc).
This SBP feature is attributable to the faint filamentary emission in the extreme
northwestern and southeastern
periphery of \object{I~Zw~18}\ (cf. Fig. \ref{IZw18HaImage}) that has evaded
detection on previous {\sl HST}\ WFPC2 imagery and which accounts for no more than
1\% of the total luminosity. A linear fit to the whole LSB envelope
(7\arcsec$\leq${\sl R}$^{\star}$$\leq$20\arcsec; dashed line) yields a mean
$\alpha=2\farcs9\pm 0\farcs13$ ($\sim$270 pc).
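The angular-to-physical conversions quoted above can be reproduced assuming a distance of $\approx$18.7 Mpc (our assumption, consistent with pairs such as 2\farcs3 $\simeq$ 210 pc; the small residuals reflect rounding in the quoted values):

```python
import math

D_PC = 18.7e6  # assumed distance in pc (not stated explicitly in this section)
pc_per_arcsec = D_PC * math.radians(1.0 / 3600.0)  # ~90.7 pc per arcsec

# Scale lengths quoted for the LSB envelope of I Zw 18 (arcsec -> pc):
alpha_pc = {a_as: a_as * pc_per_arcsec for a_as in (2.3, 4.1, 2.9)}
```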
The radial color profiles of \object{I~Zw~18}\ (lower panel of Fig. \ref{IZw18SBP})
reflect the increasing contribution of \nsf ne\rm\ to the observed
line-of-sight intensity with increasing galactocentric radius,
in agreement with previous evidence (P02).
The $V$--$R$ color (filled circles) increases roughly
linearly from $\la$0 at {\sl R}$^{\star}$=0\arcsec\ to 0.55 mag at {\sl R}$^{\star}$=9\arcsec\
where it levels off to a nearly constant value.
The corresponding, relatively strong color gradient of
$\gamma_+$=0.6 mag kpc$^{-1}$ is rather typical for evolved BCDs (P96b),
suggesting, at first glance, that SF activity in \object{I~Zw~18}\ occurs within
a more extended, old underlying host.
This interpretation is, however, immediately challenged upon inspection
of the extremely blue $V$--$I$ and $R$--$I$ profiles.
Already in their inner part (2\arcsec$\la${\sl R}$^{\star}$$\leq$6\arcsec), the latter display
mean values of --0.21 mag and --0.44 mag, respectively, both inconsistent
with the red $V$--$R$ color if the emission is assumed to be dominated by stars.
More impressively, at {\sl R}$^{\star}$$\sim$6\arcsec, i.e. roughly at the transition
radius between the HSB core and the exponential LSB envelope of SBPs, either
color index shows a sudden decrease to values as blue as
$V$--$I$=--0.61$\pm$0.13 mag and $R$--$I$=--1.1$\pm$0.08 mag,
which then remain nearly constant out to {\sl R}$^{\star}$$\sim$20\arcsec.
\begin{figure}
\begin{picture}(8.9,8.0)
\put(0,0){{\psfig{figure=lr_fig7.eps,width=8.9cm,angle=0,clip=}}}
\end{picture}
\caption[]{$V$--$I$ map of the central 15\farcs6$\times$15\farcs6
of \object{I~Zw~18}, displayed in the color range between --0.5 and 0.15 mag. The
contours are computed from the $R$\arcmin\ image by P02 (see also
Fig. \ref{IZw18HaImage}) and go from 19 to 25 mag/$\sq\arcsec$\ in steps of 0.5 mag.
Note the very blue ($<$--0.5 mag) rim encompassing the northwestern
star-forming region.
The mean $V$--$I$ color over the southeastern region $\omega$ is determined to be
0.23 mag with an upper value of 0.3 mag in its reddest quarter.}
\label{IZw18-VImap}
\end{figure}
That such colors cannot be of stellar origin is already apparent from the
fact that, even for an O5V star, the $V$--$I$ and $R$--$I$ colors are --0.32 mag and
--0.18 mag, respectively, i.e. 0.3 to 0.9 mag redder than the colors of the
LSB envelope of \object{I~Zw~18}. More generally, as pointed out by P02, there
is no stellar population, regardless of star formation history, age and
metallicity that can reproduce the observed combination of red ($\sim$0.6 mag)
$V$--$R$ with blue (--0.6 \dots --1.1 mag) $V$--$I$ and $R$--$I$
colors of the LSB envelope of \object{I~Zw~18}.
Quite to the contrary, such colors can only be accounted for by \nsf ne\rm.
\begin{figure*}
\begin{picture}(17.4,16.8)
\put(0.2,0){{\psfig{figure=lr_fig8.eps,width=18.0cm,angle=0,clip=}}}
\end{picture}
\caption[]{
Composite image of \object{I~Zw~18}\ and \object{I~Zw~18\,C}\ with the $R$, $I$ and $V$ bands shown
in the red, green and blue channel, respectively.
The image discloses a complex network of ionized gas filaments extending as far
out as 2.6 kpc, approximately twice the distance reported from previous
{\sl HST}\ WFPC2 studies, corresponding to
$\sim$16 exponential scale lengths $\alpha$ of the \emph{stellar} host
galaxy of \object{I~Zw~18}\ (160 pc; P02).
The contours, adapted from P02, are computed from {\sl HST}\ WFPC2 $R$ data
after two-dimensional subtraction of the H$\alpha$\ emission
(referred to as $R$\arcmin); they thus delineate the morphology of the
stellar and nebular continuum emission in the main body of \object{I~Zw~18}.
Contours go from 19 to 25 $R$\arcmin\ in steps of 0.5 mag.
Note that \object{I~Zw~18}\ (i.e. its \emph{stellar} component, as depicted by the
$R$\arcmin\ contours) and \object{I~Zw~18\,C}\ show a remarkably similar structure.
}
\label{IZw18HaImage}
\end{figure*}
Indeed, comparison of Fig. \ref{IZw18SBP} with Fig. 12 of P02 reveals that
the radius {\sl R}$^{\star}$=6\arcsec -- 7\arcsec\ where the steep $V$--$I$ and $R$--$I$ color drop-off occurs
is identifiable with the radius where the line-of-sight contribution of stellar emission
steeply decreases from a plateau value of $\sim$40\% for
3$\la${\sl R}$^{\star}$$\la$6\arcsec\ to less than 20\%.
Therefore, the much deeper data studied here confirm and strengthen the previous
conclusion that the extended (6\arcsec$\la${\sl R}$^{\star}$$\la$20\arcsec) exponential LSB envelope of
\object{I~Zw~18}\ is due to \nsf ne\rm.
The gaseous nature of the LSB envelope of \object{I~Zw~18}\ is also evident from the
color maps in Figs. \ref{IZw18-VRImaps} and \ref{IZw18-VImap}.
These show a clear correspondence to radial color profiles, most notably
a strong core--envelope color contrast (0.5 -- 1.5 mag) and remarkably
uniform colors for the envelope
($V$--$R$$\sim$ 0.5 \dots 0.6, $R$--$I$$\approx$--1.4 mag, $V$--$I$$\approx$--1
mag) over an area of $\sim$9 kpc$^2$.
The northwestern super-shell, for example, though prominent on direct images (cf Fig. 1),
is barely distinguishable from its gaseous surroundings on
$V$--$I$ and $R$--$I$ color maps, suggesting a nearly constant spectral energy
distribution (SED) all over the LSB envelope.
The relative extent of the stellar component that is confined to the compact
HSB core with respect to the nebular LSB envelope is better
illustrated in Fig. \ref{IZw18HaImage}.
The image reveals a complex network of ionized gas filaments extending as far
out as 2.6 kpc, twice the galactocentric distance previously
reported from {\sl HST}\ WFPC2 studies and equivalent to $\sim$16 exponential
scale lengths $\alpha$ of the \emph{stellar} host of \object{I~Zw~18}\ (160 pc; P02).
The contours, adapted from P02, are computed from {\sl HST}\ WFPC2 $R$ data
after two-dimensional subtraction of H$\alpha$\ line emission
(referred to as $R$\arcmin); they thus delineate the morphology of the
stellar and nebular continuum emission.
Note that \object{I~Zw~18}\ (i.e. its \emph{stellar} component as depicted by the
$R$\arcmin\ contours) and \object{I~Zw~18\,C}\ show a remarkably similar structure.
\begin{figure*}[!ht]
\begin{picture}(16.4,16.8)
\put(0.2,0.0){{\psfig{figure=lr_fig9.eps,width=18.0cm,angle=0,clip=}}}
\end{picture}
\caption[]{
Three-color image of \object{I~Zw~18}\ displaying the unresolved stellar
emission ($I_{\rm d}$) in the red channel and the $R$ and $V$ emission of
compact \nsf ne\rm-dominated regions in the green and blue channel.
White areas in the central (red) part of the galaxy depict regions whose
colors are strongly affected by nebular emission, as apparent from their extremely
blue $V$--$I$ and $R$--$I$ ($<$ --0.4 \dots --0.8) colors.
The inset, showing a magnified portion of the western super-shell, is meant to
illustrate a subtle spatial shift between the $V_{\rm c}$ and $R_{\rm c}$
emission. This can be attributed to a spatial displacement
by 30 -- 100 pc between the intensity maxima of the
[O{\sc iii}]$\lambda\lambda$4959,5007 and H$\alpha$\ emission lines.
}
\label{IZw18-ige}
\end{figure*}
\subsubsection{The impact of nebular emission on the colors of the stellar component
of I\ Zw\ 18 \label{kentro}}
We discuss next in more detail the effect of \nsf ne\rm\ contamination in the
HSB core, i.e. the region subtended by the 25 $R$\arcmin\ mag/$\sq\arcsec$\ isophote
in Fig. \ref{IZw18HaImage}. Our color maps reveal substantial substructure here
that was not sufficiently recovered in previous data.
A previously known feature is the \nsf ne\rm\ rim
\citep[EW(H$\alpha$)=1500--2000 \AA;][]{Ostlin96,VilchezIglesiasParamo98-IZw18,Papaderos01-IZw18,Izotov01-IZw18} that
encompasses the SF region NW. Its $V$--$I$ and $R$--$I$ colors are much bluer
(--0.8 mag and --0.6 mag, respectively) than those of the centrally located
young SCs (--0.15 mag and --0.23 mag, measured within a 2\farcs8$\times$2\farcs8
aperture) where the EW(H$\alpha$) is much lower \citep[$\sim$200 \AA;][]{Izotov01-IZw18}.
This striking spatial anti-correlation between stellar surface density and
emission-line EWs, with the $V$--$I$ color steeply decreasing with increasing
EW in the periphery of ionizing SCs, is essentially identical to that described in the
XBCD \object{SBS 0335-052E} by P98: the bluest $V$--$I$ colors (--0.8 mag) in that
galaxy were not observed at the position of its young SCs but in their periphery,
over an extended horseshoe-shaped gaseous rim some 500 pc offset from the
former. Several similar examples are documented among XBCDs and BCDs
\citep[e.g.][]{Papaderos99-Tol65,Guseva01,Fricke01-Tol1214,Ostlin03-ESO338,Guseva04-Pox186,Papaderos08}
both on small and large spatial scales.
Of considerable interest is also the small-scale contamination of colors
in the HSB core of \object{I~Zw~18}\ due to \nsf ne\rm. To better illustrate its impact,
we computed a pseudo three-color composite (Fig. \ref{IZw18-ige}) using a
combination of unsharp masked images (see discussion in Sect. \ref{phot:IZw18C}).
In order to partly suppress the stellar contribution in
$V_{\rm c}$ and $R_{\rm c}$ images (blue and green channel, respectively),
we subtracted from the latter a pseudo stellar continuum
using a scaled version of the $I_{\rm c}$ image.
This procedure allowed us to better isolate and visualize the regions where \nsf ne\rm\
contamination is strongest.
The red channel of Fig. \ref{IZw18-ige} holds the unresolved emission
in the $I$ band ($I_{\rm d}$) and thus primarily reflects the stellar
host galaxy.
In the resulting three-color overlay, white patches on top of the reddish
background of the HSB core depict regions where \nsf ne\rm\ dictates the colors.
Just like in radial profiles and color maps
(Figs. \ref{IZw18SBP}, \ref{IZw18-VRImaps} and \ref{IZw18-VImap}),
the footprint of \nsf ne\rm\ is a combination of red $V$--$R$ with extremely blue $V$--$I$ and $R$--$I$ colors.
Some examples include the regions labeled 1 through 8, whose respective
colors are (0.1, --0.35, --0.45), (0.17, --0.58, --0.76), (0.05, --0.41, --0.45),
(0.23, --0.43, --0.67), (0.24, --0.45, --0.69), (0.21, --0.53, --0.74),
(0.12, --0.35, --0.47) and (0.23, --0.23, --0.46).
The total area of these severely \nsf ne\rm-contaminated regions of $\approx$30 $\sq\arcsec$
is equivalent to the isophotal size of \object{I~Zw~18}\ at 22 mag/$\sq\arcsec$\ (32 $\sq\arcsec$)
and to 20\% of that of its \emph{stellar} component (the region subtended
by the 25 $R$\arcmin\ mag/$\sq\arcsec$\ isophote in Fig. \ref{IZw18HaImage}).
\begin{figure}
\begin{picture}(8.6,6.9)
\put(1.3,0){{\psfig{figure=lr_fig10.eps,width=6.4cm,angle=0}}}
\end{picture}
\caption[]{Schematic illustration of the effect that the slight spatial
displacement between the [O{\sc iii}]$\lambda\lambda$4959,5007 and
the H$\alpha$\ emission lines across an expanding shell of ionized gas might have
on $V$ and $R$ photometry of stars in its close vicinity.
The shell, of thickness 100 pc ($\approx$1\arcsec\ at the distance to \object{I~Zw~18}),
comprises an inner and outer zone with, respectively, a higher and lower
[O{\sc iii}]/H$\alpha$\ ratio (cf the discussion related to
Figs. \ref{IZw18-VRImaps},left and \ref{IZw18-ige}).
Two cases may be distinguished: in the case of a star located close to the
inner boundary of the shell (a), the local $V$ background within a
circular annulus will be overestimated due to the enhanced contribution
of the [O{\sc iii}] lines in the inner shell interface.
This would result in an artificially red $V$--$R$ color.
The opposite would be the case for (b) where the local $R$ background
would be over-subtracted due to the lower [O{\sc iii}]/H$\alpha$\ ratio in
the outer layer of the shell, leading to an artificially blue
$V$--$R$ color. The principal effect would therefore be that the
magnitudes of stars in the vicinity of shells would be underestimated by a
different amount in different bands, with the $V$--$R$ colors of stars
in the interior of the shell being redder than in its outer part.
}
\label{CMD_effect}
\end{figure}
It is noteworthy that \nsf ne\rm\ shows significant substructure on spatial scales of
a few pixels, with little spatial correlation to the local stellar
background. Therefore, treating and subtracting it as a uniform
foreground emitting layer in CMD studies could result in systematic
uncertainties that might not be fully accounted for by the standard error budget.
Subtle spatial displacements between the intensity maxima of
strong nebular emission lines may be of some importance in this regard.
An example is given in the inset of Fig. \ref{IZw18-ige}, where we show
a magnified portion of the western super-shell: the inner layer
along the shell appears bluish whereas the outer one appears greenish.
This can be plausibly attributed to a displacement by 0\farcs1 -- 0\farcs3
between the [O{\sc iii}]$\lambda\lambda$4959,5007 and H$\alpha$\ emission
lines which are registered in $V$ (blue channel) and $R$ (green channel), respectively.
Another illustrative example is given for a shell $\sim$5\arcsec\ northwards of region NW
(Fig. \ref{IZw18-VRImaps}) whose inner (southeastern) and outer (northwestern) layer differ by
$\approx$0.3 mag in their $V$--$R$ color, pointing to a variation by
$\ga$30\% in the [O{\sc iii}]/H$\alpha$\ ratio across the shell.
Note that systematic variations of the [O\,{\sc iii}]$\lambda$5007/H$\beta$\ ratio
by a factor $\sim$3 on spatial scales of a few tens of pc have been documented by
integral field unit spectroscopy in a number of H{\sc ii}\ regions and BCDs
\citep[see e.g.][]{Westmoquette07-NGC1569,Cairos09-Mrk409,Relano2010-NGC595,Monreal2010-NGC5253}.
Whereas spatial displacements between nebular lines certainly have
no measurable effect on CMD studies of galaxies with faint \nsf ne\rm,
they may be of some relevance in the case of \object{I~Zw~18}.
This is because the local background level in point source photometry studies
may be overestimated by a different amount in different bands,
depending on the position of a star relative to a shell and the specifics
of the local background determination.
Figure \ref{CMD_effect} illustrates schematically two special cases:
for a star located near the inner interface of the shell
(case a) the circular annulus within which the local background is determined
captures the inner [O{\sc iii}]-enhanced portion of the shell
and misses its outer H$\alpha$-enhanced layer.
This could lead to an over-subtraction of the local $V$ background and hence
to a reddening of the $V$--$R$ color.
The opposite might be the case for a star close to the outer boundary
of the shell (case b), whose color would appear bluer due to the over-estimation
of the $R$ (H$\alpha$\ enhanced) background.
In the simple geometry of Fig. \ref{CMD_effect}, the principal effect
would then be a redder color for stars in the shell interior
and \emph{vice versa}, in addition to underestimated stellar magnitudes
because of over-subtraction of the local background.
An examination of the cumulative effect that multiple overlapping \nsf ne\rm\ shells
could have on CMD studies of \object{I~Zw~18}\ might therefore be of some interest.
Of special relevance to the study of the evolutionary status of \object{I~Zw~18}\
(Sect. \ref{age-IZw18}) are the colors of region $\omega$ (Fig. \ref{IZw18-Fig1}).
This region, roughly delimited by the rectangular area at
(RA,DEC)=(5\farcs8 \dots 4\farcs6,--6\farcs6 \dots --4\farcs4)
relative to NW (see Figs. \ref{IZw18-VRImaps} and
\ref{IZw18-VImap}) is relatively free of \nsf ne\rm\ contamination
\citep[cf the EW(H$\alpha$) map in Fig. 7 of][]{Izotov01-IZw18};
its colors can therefore be used to place constraints on the age of \object{I~Zw~18}.
From the present data we infer a mean
$V$--$R$ color of --0.06 mag and $R$--$I$ and $V$--$I$ colors of $\approx$0.2 mag,
with values of $V$--$R$ $\simeq$ $R$--$I$ $\approx 0.15$ mag and $V$--$I$=0.3 mag
within its reddest quarter. The $V$--$I$ color range inferred for region $\omega$
from {\sl HST}\ ACS data is in good agreement with the values 0.17 -- 0.32 mag
previously determined by P02.
\section{Discussion \label{discussion} }
\subsection{Constraints on the evolutionary status of \object{I~Zw~18\,C}\
\label{IZw18C-evol}}
In Fig. \ref{IZw18-pegase} we compare the observed colors of \object{I~Zw~18\,C}\
with the predicted color evolution for a stellar population forming
instantaneously (\lvss SFH1\rm, dotted line), with an exponentially decreasing
star formation rate (SFR) and an e-folding time $\tau=1$ Gyr (\lvss SFH2\rm, solid
line) and continuously with a constant SFR (\lvss SFH3\rm, dashed line).
The theoretical curves were computed with the evolutionary synthesis code Pegase~2.0
\citep{FR97} for a constant stellar metallicity of $Z$=0.001 and a Salpeter
initial mass function (IMF) between 0.1 and 100 $M_\odot$, and do not include nebular emission.
As already pointed out in Sect. \ref{phot:IZw18C}, the blue and
nearly constant (0$\pm$0.05 mag) $V$--$I$ and $R$--$I$ colors of
\object{I~Zw~18\,C}\ down to 27.6 mag/$\sq\arcsec$\ imply that the photometrically dominant stellar
population in this system is almost uniformly young.
Formal upper age estimates within 1$\sigma$ uncertainties
range from $t_1=15$ Myr for \lvss SFH1\rm\ to $t_2\sim$35 Myr for \lvss SFH2,3\rm.
The latter SF scenarios are, however, incompatible with the data
since star formation starting at $t_2$ and continuing to the present
would have given rise to ample \nsf ne\rm\ with an EW(H$\alpha$)$\ga$800 $\AA$.
The uniformly blue colors of \object{I~Zw~18\,C}, in connection with the absence of
\nsf ne\rm\ everywhere but in its central region, indicate that SF activities
must have ceased very recently, some $\sim$20 Myr ($= t_{\rm c}$) ago.
If so, the estimated SF duration ($t_2 - t_{\rm c}\simeq t_1$) would imply
an almost instantaneous SF process, rapidly synchronized over the projected area
of the galaxy ($\sim$1 kpc$^2$) at probably supersonic speeds.
A conceivable scenario might invoke sequential star formation from the
redder northwestern tip towards the bluer southeastern tip of
\object{I~Zw~18\,C}\ \citep[see also][]{Aloisi99-IZw18,Izotov01-IZw18}.
The respective colors of those regions translate by \lvss SFH1\rm\ to an
age difference of $\tau_{\rm SF} = 40$--$80$ Myr.
Taking $\tau_{\rm SF} \sim 60$ Myr as an indicative time scale for the spatial
progression of SF activities along the projected major axis of \object{I~Zw~18\,C}\ (1 kpc), one can estimate
an average SF propagation velocity of $u_{\rm SF} \sim 20$ km s$^{-1}$.
The latter is comparable to the $u_{\rm SF}$ inferred for the
XBCD \object{SBS 0335-052E}
\citep[$\sim$ 20 -- 35 km s$^{-1}$, P98,][]{Reines08-SBS0335},
of the order of the sound speed in the warm ISM.
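The quoted propagation velocity follows from a simple unit conversion; the short sketch below checks the arithmetic (the constants and the function name are ours, not from the text):

```python
# Back-of-the-envelope check of the SF propagation speed quoted in the
# text: a front crossing ~1 kpc in tau_SF ~ 60 Myr. Unit conversion only.

KM_PER_PC = 3.086e13      # kilometres per parsec
S_PER_YR = 3.156e7        # seconds per year

def propagation_speed_kms(length_pc, time_myr):
    """Mean speed (km/s) of a front crossing length_pc in time_myr."""
    return (length_pc * KM_PER_PC) / (time_myr * 1e6 * S_PER_YR)

u_sf = propagation_speed_kms(1000.0, 60.0)   # ~16 km/s, i.e. ~20 km/s as quoted
```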
Induced gas collapse along the collisional interface of a super-shell expanding
from the northwestern tip of \object{I~Zw~18\,C}\ between 80 Myr ($\tau_{\rm SF}$+$t_c$) and $t_c$ ago
might offer a tenable, though not necessarily unique explanation
for propagating star formation: following \cite{McCrayKafatos87},
the radius $R_{\rm sh}$ (pc) of a SF-driven super-shell can be approximated as
\begin{equation}
R_{\rm sh} = 269\,\left( \frac{L_{\rm m,38}}{n_0} \right)^{1/5}\, t_7^{3/5}
\label{eq:MC}
\end{equation}
where $L_{\rm m,38}$ is the mechanical luminosity injected into the ISM
by stellar winds and SNe in $10^{38}$ erg\,s$^{-1}$,
$n_0$ the ambient gas density in cm$^{-3}$ and
$t_7$ the dynamical expansion time in $10^7$ yr.
A rough estimate on the mean mechanical luminosity
can be inferred from the absolute $V$ magnitude of \object{I~Zw~18\,C}\ (--12.2 mag)
which translates for continuous star formation over $\tau_{\rm SF}$ and an
ensuing quiescent phase over the past $t_c$ to a stellar mass of
$\approx 2.1\times 10^5$ $M_\odot$\ and a mean SFR of $3.5\times 10^{-3}$ $M_\odot$\ yr$^{-1}$.
The mean $L_{\rm m,38}$ over $\tau_{\rm SF}$ may be estimated from
Starburst99 \citep{Starburst99} to be 14.6.
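The quoted mean SFR is simply the inferred stellar mass divided by the star-formation duration; a minimal consistency check (variable names are ours; the mass itself comes from the photometric argument above and is not re-derived here):

```python
# Mean SFR implied by a stellar mass of ~2.1e5 Msun assembled over
# tau_SF ~ 60 Myr of continuous star formation.

def mean_sfr(stellar_mass_msun, duration_yr):
    """Mean star-formation rate in Msun/yr."""
    return stellar_mass_msun / duration_yr

sfr = mean_sfr(2.1e5, 60e6)   # ~3.5e-3 Msun/yr, as quoted in the text
```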
\begin{figure}[h]
\begin{picture}(8.6,8.6)
\put(0.04,4.6){{\psfig{figure=fig11a.ps,width=8.8cm,angle=-90}}}
\put(0.06,0){{\psfig{figure=fig11b.ps,width=8.88cm,angle=-90}}}
\end{picture}
\caption[]{
Comparison of the $V$--$I$ and $R$--$I$ colors
of \object{I~Zw~18\,C}\ and of region $\omega$ in \object{I~Zw~18}\ with model predictions
for a stellar population forming instantaneously (\lvss SFH1\rm: thick dotted curve)
or continuously, with an exponentially decreasing or constant
star formation rate (\lvss SFH2\rm\ and \lvss SFH3\rm: thick solid and dashed line, respectively).
The models have been computed with Pegase~2.0 \citep{FR97} for
a constant metallicity ($Z$=0.001) and a
Salpeter initial mass function between 0.1 and 100 $M_\odot$, and
do not include nebular emission.
Thin curves correspond to the same star formation histories but
assume an IMF truncated above 5 $M_\odot$.
The vertical bars labeled I\,Zw\,18$\omega$ depict the range between the
mean and reddest color in region
$\omega$ of \object{I~Zw~18}\ ($V$--$I$=0.2 \dots 0.3 and $R$--$I$=--0.06 \dots 0.15).
Bars labeled \object{I~Zw~18\,C}\ (1\arcsec--6\arcsec) indicate
the mean color of \object{I~Zw~18\,C}\ in the radius range 1\arcsec$\leq${\sl R}$^{\star}$$\leq$6\arcsec\
($\approx$0$\pm$0.05 mag).
The mean colors and their 1$\sigma$ uncertainties in
the extreme periphery ({\sl R}$^{\star}$$\geq$6\arcsec) of \object{I~Zw~18\,C}\ at $\mu\ga27.6$ mag/$\sq\arcsec$\
are depicted by the rectangular areas.
}
\label{IZw18-pegase}
\end{figure}
Equation \ref{eq:MC} then yields, for $R_{\rm sh}=1$ kpc and
$t_7=6$ ($\equiv\tau_{\rm SF}$), an ambient gas density of $n_0 \approx 4$ cm$^{-3}$.
This value does not seem to be unrealistic, given that the VLA HI map by
\cite{vanZee98-IZw18} indicates for \object{I~Zw~18\,C}\ an H{\sc i} surface density of
$\ga 4\times 10^{20}$ cm$^{-2}$, which, after inclusion of the helium contribution,
translates for a disk thickness of $\sim$50 pc into $n_0 \sim 3$ cm$^{-3}$.
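As a back-of-the-envelope check, Eq. \ref{eq:MC} can be inverted for $n_0$ and compared with the density implied by the H{\sc i} column; the sketch below (function and variable names are ours) reproduces the quoted values:

```python
# Invert Eq. (1) (McCray & Kafatos 1987), R_sh = 269 (L/n0)^(1/5) t7^(3/5),
# for the ambient density n0, and check the HI column-density estimate.

def ambient_density(r_sh_pc, l_m38, t7):
    """Ambient density n0 (cm^-3) from shell radius (pc), mechanical
    luminosity (1e38 erg/s) and expansion time (1e7 yr)."""
    return l_m38 * (269.0 * t7**0.6 / r_sh_pc) ** 5

n0_shell = ambient_density(r_sh_pc=1000.0, l_m38=14.6, t7=6.0)  # ~4.4 cm^-3

# HI column of ~4e20 cm^-2 spread over a ~50 pc thick disk:
CM_PER_PC = 3.086e18
n_hi = 4e20 / (50.0 * CM_PER_PC)   # ~2.6 cm^-3 before the He correction
```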
As the passage of the SF front along \object{I~Zw~18\,C}\ has likely been accompanied by the
depletion of its molecular content and the dispersal of its ISM, the present
gas surface density should rather represent a lower estimate on the average
$n_0$ over $\tau_{\rm SF}$.
Clearly, the considerations above rely on strongly simplifying assumptions
(constant $L_{\rm m,38}$ and $n_0$, no radiative losses in the shell,
coplanar face-on geometry) and are meant to be indicative only.
They merely touch upon one among several possible scenarios
behind propagating star formation in \object{I~Zw~18\,C}\ (e.g., gravitational
interaction with the main body or gas infall from the H{\sc i} halo)
and are invoked to demonstrate that this process can account for some
important characteristics of the galaxy.
These include its uniformly blue colors with merely a weak color
gradient along its major axis, the virtual absence of \nsf ne\rm\
and the higher surface density of the unresolved stellar background in its
northwestern half as a possible signature of the disintegration of its oldest SCs.
However, propagating star formation between
$\tau_{\rm SF}$+$t_c$ and $t_c$ cannot alone explain the slightly
redder color of stellar populations in the outskirts of \object{I~Zw~18\,C}.
\subsubsection{Stellar diffusion and its effect on age determinations
in the faint periphery of \object{I~Zw~18\,C}\ \label{mass-filtering}}
In the following we further explore the evolutionary history of \object{I~Zw~18\,C}\
by considering the properties of its faint (27.6 -- 29 mag/$\sq\arcsec$) stellar periphery.
The ($V$--$I$)$_{\rm d}$ (0.2$\pm$0.08) and $R$--$I$ (0.16$\pm$0.09) colors of
the latter translate by \lvss SFH2\rm\ to an age of $\sim$130 Myr
with an upper bound of $\sim$500 Myr at the 1$\sigma$
level, if purely stellar emission is assumed.
For models including \nsf ne\rm, age estimates would rise to 270--900 Myr,
depending on the color considered.
However, even for those high ages, this SFH model (and \lvss SFH3\rm\ alike)
predicts an EW(H$\alpha$) between 150 $\AA$ and $\ga$300 $\AA$
in clear conflict with the absence of \nsf ne\rm\ in the outskirts of \object{I~Zw~18\,C}.
Models invoking continuous star formation to the present are thus
fundamentally incompatible with the data.
Cessation of SF activities over the past $t_c$ would alleviate the contradiction;
however, it would at the same time imply a steeper color evolution, and thus a
younger age, in better agreement with the 1$\sigma$ upper estimate
$t_{\rm SFH1} \sim 100$ Myr read off Fig. \ref{IZw18-pegase} for the
instantaneous star formation model.
As we shall argue next, stellar diffusion and the resulting radial
\emph{stellar mass filtering effect} described by P02 could have a
significant effect on the color distribution of young galaxy candidates and
add an important element towards a consistent evolutionary picture for \object{I~Zw~18\,C}.
Already at a mean radial velocity of $u_{\rm r} \la 4$ km s$^{-1}$,
less than the velocity dispersion of the neutral gas
($\sigma_{\rm HI}\approx$6--8 km s$^{-1}$), a star born in the central part of \object{I~Zw~18\,C}\ could migrate within
$\tau_{\rm diff}=\tau_{\rm SF}+t_c$ (80 Myr) out to $r_{\rm diff} \approx 300$ pc from its initial locus.
This galactocentric distance corresponds to the semi-minor axis of \object{I~Zw~18\,C}\ at 29 mag/$\sq\arcsec$,
i.e. it is topologically identifiable with the redder, outer exponential SBP feature
in Fig. \ref{IZw18C_sbp}.
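The drift radius above is plain kinematics under the free-drift assumption stated in the text; a minimal numerical check (names are ours):

```python
# Distance covered by a star drifting radially at a constant u_r over
# tau_diff = tau_SF + t_c, as assumed in the text (free drift).

KM_PER_PC = 3.086e13      # kilometres per parsec
S_PER_YR = 3.156e7        # seconds per year

def drift_radius_pc(u_r_kms, tau_myr):
    """Distance (pc) covered at u_r_kms km/s over tau_myr Myr."""
    return u_r_kms * tau_myr * 1e6 * S_PER_YR / KM_PER_PC

r_diff = drift_radius_pc(4.0, 80.0)   # ~330 pc, i.e. ~300 pc as quoted
```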
A consequence of stellar diffusion with the above radial velocity
is that any stellar generation reaching out to $r_{\rm diff}$ can not be bluer than
0.3 mag in $V$--$I$ (approximately the 1$\sigma$ color bound, depicted by
the rectangular area in the upper panel of Fig. \ref{IZw18-pegase}),
since on its way to that radius it will have been depleted of stars
with lifetimes $\leq \tau_{\rm diff}$
(or, correspondingly, from stars more massive than $M_{\rm diff}$).
The commonly adopted \lvss SFH2,3\rm\ models would then inevitably result in an overestimation of
the stellar age in the LSB periphery ({\sl R}$^{\star}$$\simeq$$r_{\rm diff}$)
of young galaxy candidates, such as \object{I~Zw~18\,C}.
This is because such SFH parametrizations are applied throughout on the implicit
assumption that stellar populations form in the radial zone where they are
currently observed, hence they invariably contain a certain
fraction of massive blue stars younger than $\tau_{\rm diff}$.
Obviously, a match between predicted and observed colors is then only
possible for an age $t_{\rm SFH2,3}$ exceeding the true stellar
age $t_{\star}$ at the radius {\sl R}$^{\star}$, i.e. the condition
\begin{equation}
\tau_{\rm diff} \leq t_{\rm SFH1} \leq t_{\star} < t_{\rm SFH2,3}
\end{equation}
applies throughout, in particular when models including \nsf ne\rm\ are adopted.
That, in the presence of stellar diffusion, the age of the LSB host of
young BCD candidates may be overestimated when continuous star
formation models are used is especially apparent
when projection effects are additionally taken into account:
in a spherically symmetric geometry, the colors registered at
{\sl R}$^{\star}$$\equiv r_{\rm diff}$ reflect the luminosity-weighted average
of stars at radii $\geq r_{\rm diff}$, since the observer's
line of sight crosses more remote foreground and background
galaxy zones of longer $\tau_{\rm diff}$.
Therefore, even though \lvss SFH2,3\rm\ may provide a reasonable
approximation to the \emph{integral} SFH of some star-forming
dwarf galaxies (SFDGs), it should not be taken for granted that they
are universally applicable to \emph{spatially resolved} age-dating studies.
This is already suggested by the failure of e.g. \lvss SFH3\rm\ to reproduce
the generally red colors ($V$--$I$=0.9, $B$--$R$=1.1) of the host galaxy of
typical BCDs (see e.g. P02), even for ages exceeding the Hubble time.
The effect of stellar diffusion to the radius $r_{\rm diff}$
is equivalent to in situ star formation at $r_{\rm diff}$ with an
IMF truncated at masses above a radially decreasing limit
$M_{\rm diff}$($\tau_{\rm diff}$) (hereafter tIMF).
To illustrate this point, and as a minimum consistency check, one may consider
continuous star formation according to \lvss SFH2,3\rm\ with a tIMF limited
to stars with a lifetime $\ga \tau_{\rm diff}$.
This timescale roughly corresponds to the main sequence lifetime of
a B4 star (94 Myr), placing an upper cutoff of 5 $M_\odot$\ on the tIMF.
The color evolution for \lvss SFH2,3\rm\ with that tIMF is depicted in
Fig.~\ref{IZw18-pegase} with thin lines.
It can be seen that \lvss SFH2,3\rm+tIMF models reproduce the observed colors
at an age of $\sim$130 Myr with a 1$\sigma$ upper bound of
150--250 Myr and consistently account for the absence of nebular emission.
A conceivable alternative to stellar diffusion is in situ star formation
in the LSB periphery of \object{I~Zw~18\,C}\ in conjunction with stochastic effects
on the IMF \citep[see][and references therein]{CervinoMasHesse94}.
The latter are to be expected when low masses are involved in the SF process,
as is typically the case for SFDGs, and to affect the sampling of the IMF in
its intermediate-to-high mass range.
Cervi\~no \& Mas-Hesse (see their Fig.~1) argue, however, that stochastic effects
do not result in a truncated IMF above a certain stellar mass, but
that some massive stars always form.
On the statistical average, one would therefore expect nebular emission
to be present in the outskirts of \object{I~Zw~18\,C}, in disagreement with the observations.
It should be noted, though, that this conclusion likely depends on the
detailed physics of star formation, on how the high-mass end of the IMF
is populated, and on the time scales for low- and high-mass star formation.
With certain assumptions \citep[see e.g.][]{WeidnerKroupa2005,Pflamm-Altenburg2009}
a truncated IMF can be realized in peripheral galaxy zones, lifting the discrepancy described above.
There might therefore be variants of the traditional {\sl static} scenarios, i.e. scenarios
implicitly assuming that stars are born in the radial zone where they are observed,
that can quantitatively reproduce the radial color distribution of \object{I~Zw~18\,C}\ and SFDGs in general.
It is not clear, however, whether such static scenarios, and their assumptions on the radial
dependence of the IMF, can consistently account for the specific structural
characteristics of SFDGs, and in particular BCDs, such as e.g. the exponentiality of the stellar LSB host and
the confinement of star-forming activities to within {\sl R}$^{\star}$$\approx 2\alpha$ (see e.g. P96b).
Stellar diffusion and its role in the buildup of SFDGs constitutes unexplored territory in dwarf galaxy research.
A caveat of the above discussion is that it assumes young stars to
initially drift apart freely at a roughly constant velocity
$u_r$ ($\la\sigma_{\rm HI}$) over $\sim 10^8$ yr.
Clearly, the validity of this simplifying assumption needs to be investigated,
and the initial kinematics of the newly formed stars as well as the form of
the gravitational potential be included in the analysis.
High-velocity resolution ($\la 10$ km s$^{-1}$) integral field
spectroscopy might provide key insights into the velocity
patterns and the possible outwards migration of bound and unbound
stellar clusters in young starbursts.
A closer investigation of the effects of diffusion would be of
considerable interest, not least because it offers a mechanism that naturally
produces radial color gradients in galaxies.
As such, it may be of relevance for a range of topics, such as e.g.
the \emph{Red Halo} phenomenon \citep{BergvallOstlin02,Z06,BZC10}.
In summary, our results suggest that the overall
photometric properties of \object{I~Zw~18\,C}\ can be consistently accounted for by
a single episode involving solely sequential star formation
($u_{\rm SF} \approx 20$ km s$^{-1}$) and stellar diffusion
($u_{\rm r} \approx 4$ km s$^{-1}$) over the past $\sim$100--200 Myr.
Whereas an ultra-faint substrate of ancient stars can not \emph{per se}
be ruled out, the lines of evidence presented here indicate
that \object{I~Zw~18\,C}\ is a cosmologically young object.
This conclusion concurs with that by \cite{Izotov01-IZw18} who inferred an
upper age of $\sim$100 Myr for \object{I~Zw~18\,C}\ from spectral synthesis models.
\cite{Jamet10-IZw18} reached a similar conclusion, that \object{I~Zw~18\,C}\ is younger than
$\approx$125 Myr, from a probabilistic analysis of {\sl HST}\ ACS CMDs.
\subsection{Constraints on the evolutionary status of \object{I~Zw~18}\ \label{age-IZw18}}
We turn next to the evolutionary status of \object{I~Zw~18}'s main body.
The $V$--$I$ and $R$--$I$ color ranges in region $\omega$
(Fig. \ref{IZw18-pegase}) translate by \lvss SFH2,3\rm\ to an age between
$\sim$100 and $\la$250 Myr.
The possibility of \object{I~Zw~18}\ and \object{I~Zw~18\,C}\ forming a co-evolving pair of dwarf galaxies
that underwent a roughly synchronous strong recent evolution can not be ruled out.
By considering the reddest quartile of region $\omega$, one obtains from \lvss SFH2,3\rm\
an upper age of $\sim$500 Myr. The age span inferred from the present study
is consistent with that obtained from the mean $B$\arcmin--$V$\arcmin\ (0.09$\pm$0.04 mag)
and $V$\arcmin--$R$\arcmin\ (0.12$\pm$0.04 mag) for the stellar host
galaxy of \object{I~Zw~18}\ after two-dimensional subtraction of strong nebular lines
from broad band images (P02).
Note, however, that the latter emission-line free colors are likely slightly
overestimated, as they do not include corrections for the red \citep[$B$--$V$=0.34,
$V$--$R$=0.64, see e.g.][]{Krueger95} nebular \emph{continuum}.
Notwithstanding this fact, even when taken at face value within their
+1$\sigma$ bounds, they imply by \lvss SFH2\rm\ an age of $\sim$0.8 Gyr, translating
into a mass fraction of $\ga$50\% for stars younger than 0.5 Gyr.
Evidently, this conclusion \citep[see also][]{Hunt03-IZw18} is not in conflict
with the presence of a small number ($\la$20) of stars with ages between 0.5
and $\sim$1 Gyr in CMDs \citep{Aloisi99-IZw18,Ostlin00,OstlinMouhcine05}.
On the other hand, it is important to bear in mind that estimates on the mass
fraction of young stars depend critically on the adopted SFH.
\subsubsection{An extended underlying stellar disc in I Zw 18?\label{izw18-disk}}
We next revisit the hypothesis envisaged by \cite{Legrand00-IZw18},
according to which the formation of \object{I~Zw~18}\ is occurring throughout its H{\sc i}
halo (60\arcsec$\times$45\arcsec) at an extremely low SFR over the past $\sim$14 Gyr.
This model can reconcile the low gas-phase metallicity of \object{I~Zw~18}\ with
a cosmological age, and predicts a high degree of chemical homogeneity in its
warm ISM.
From the photometric point of view, the main prediction from the
Legrand model is an extended ultra-LSB underlying stellar disk
($\overline{\mu}\simeq$28 $V$ mag/$\sq\arcsec$).
In a subsequent study, \cite{Legrand01-IZw18} have generalized the scenario of
'slowly cooking' dwarfs to BCDs, arguing that their main
metallicity and stellar mass contributor is continuous star
formation, rather than coeval starbursts.
The hypothesis of a stellar disk beneath the nebular envelope of \object{I~Zw~18}\ was
later on investigated by P02 on the basis of emission-line free {\sl HST}\ WFPC2
$B$\arcmin, $V$\arcmin\ and $R$\arcmin\ images.
By consideration of photometric uncertainties, P02 argued that the predicted
disk would evade detection on their SBPs if its central surface brightness
$\mu_{\rm E,0}$ is fainter than 27.1 mag/$\sq\arcsec$\ and its exponential scale length
$\alpha$ larger than 10\farcs5. These limits translate to an apparent
magnitude of 20 mag and a $\overline{\mu}\simeq$28.6 mag/$\sq\arcsec$,
slightly fainter, but probably still consistent, within the uncertainties,
with the $\overline{\mu}$ predicted by the Legrand model.
Thus, previous surface photometry could neither strictly rule out nor establish the
presence of the putative ultra-LSB disk in \object{I~Zw~18}.
Since deep {\sl HST}\ ACS narrow band imagery is unavailable for \object{I~Zw~18}, we can
not improve on the nebular line subtraction carried out by P02 and push previous
$V$\arcmin\ and $R$\arcmin\ surface photometry to fainter levels.
However, CMD studies of \object{I~Zw~18}\ argue against the disk hypothesis, as
none among them has revealed a uniform and extended population of RGB
stars cospatial with the nebular envelope,
in sharp contrast to any evolved BCD studied as yet.
Admittedly, as none among the published CMD studies for \object{I~Zw~18}\
fully appreciates the importance of \nsf ne\rm\ contamination (except
for that by IT04) and attempts a proper assessment of the systematic
errors this can introduce, the robustness of the non-detection
of RGB candidates in the LSB envelope cannot currently be evaluated,
nor is it clear whether CMD studies can ever place firm constraints in this respect.
The currently most convincing evidence for the absence of a substantial stellar background
beneath the nebular envelope of \object{I~Zw~18}\ is based on the emission-line
free {\sl HST}\ images by P02 (see their discussion).
Note that, even if an ultra-LSB stellar disk complying with the photometric
limits placed by P02 was present, it would likely not significantly alter the youth
interpretation for \object{I~Zw~18}:
The absolute $V$ magnitude of the predicted disk (--11.4 mag) implies, for a continuous
SFR over 14 Gyr, a stellar mass of $\sim 5.6 \times 10^6$ $M_\odot$.
The $V$\arcmin\ magnitude of the \emph{host} galaxy of \object{I~Zw~18}\ (i.e. excluding
regions NW and SE) of --13.8 mag (P02) translates by \lvss SFH2\rm\ for an age between
0.5 and 1 Gyr to a stellar mass of $\sim 11 \times 10^6$ $M_\odot$, i.e. about
twice that of the hypothetical ultra-LSB disk.
Even in the extreme case of a 14 Gyr old instantaneous burst, the underlying disk would only be
twice as heavy as the young population, which has formed on a much shorter time scale.
Consequently, the conclusion that \object{I~Zw~18}\ has formed most of its stellar mass
at a late cosmic epoch still holds.
It would be interesting to check whether the combined chemical output
from these two stellar populations is consistent with the observed
gas-phase metallicity in \object{I~Zw~18}.
With regard to the proposed generalization of the 'slowly cooking' scenario,
it should be noted that the Legrand model faces difficulties in reproducing basic
photometric properties of BCDs.
Specifically, the host galaxy of typical BCDs is incompatible with an
ultra-LSB disk and, on the contrary, its central stellar density
($\geq$1 $M_\odot$\ pc$^{-3}$) exceeds by an order
of magnitude that of dwarf irregulars or dwarf spheroidals (P96b).
Similarly, the mean surface brightness of the BCD host is
$\leq$24 $B$ mag/$\sq\arcsec$\ \citep[e.g. P96b,][]{Amorin09-BCDs}, roughly 4 mag brighter
than the model prediction.
Additionally, as pointed out in Sect. \ref{mass-filtering}, continuous star
formation models are inconsistent with the red host galaxy colors of most nearby BCDs.
In summary, whereas appealing from the chemical point of view, the 'slowly cooking'
scenario does not seem to be reconcilable with the structural properties of
BCDs. This scenario might, however, provide a good approximation to
quiescent late-type dwarfs, such as e.g.
{\sl blue low surface-brightness galaxies} \citep{RB94}.
\subsection{I Zw 18 as local morphological template for rapidly assembling
galaxies at high redshift \label{iz18-z}}
The integral photometric properties of the overwhelming majority of SF
galaxies in the local universe are barely affected by \nsf ne\rm.
Typically, the EW(H$\alpha$+[N\,{\sc ii}])\ ranges from a few tens of $\AA$
\citep[e.g.][]{MoustakasKennicutt06,KoopmannKenney06} in normal late-type galaxies
to $\la 10^2$ $\AA$ for the majority of local SFDGs
\citep[see e.g.][]{Lee07,SanchezAlmeida08}.
Even for BCDs, \nsf ne\rm\ has a noticeable photometric impact
\citep[EW(H$\alpha$+[N\,{\sc ii}])$\simeq$ 200 -- 500~$\AA$;][among others]{Terlevich91,Cairos02,BergvallOstlin02,GildePaz03-BCDs,Guseva09-LZ}
in their centrally confined SF component only (i.e. for {\sl R}$^{\star}$$\la${\it R$_{\rm SF}$}, with
{\it R$_{\rm SF}$}\ being of the order of $\sim$1 kpc) and is practically negligible for larger
radii where the evolved stellar LSB host entirely dominates the line-of-sight
intensity \citep[P02,][]{Knollmann04}.
More generally, notwithstanding the fact that \nsf ne\rm\ in local SFDGs may
protrude beyond {\it R$_{\rm SF}$}\
\citep[][among others]{HunterGallagher1985,Gallagher1989,HunterGallagher1990,
HunterGallagher1992,Meurer92,Marlowe95,Ferguson96-Ha,PapaderosFricke98a,
Martin98,Bomans02,Cairos02}, it has, owing to its extremely low surface
brightness, no impact on surface photometry \citep[P02,][]{Knollmann04}.
As an example, even for BCDs with strong ongoing SF activity, corrections of
integral photometric quantities for \nsf ne\rm\ do not exceed $\approx$0.1 mag \citep{Salzer89}.
In the nearby universe, the only cases of SFDGs with extreme \nsf ne\rm\ contamination
are documented in a few XBCDs.
Intense and almost galaxy-wide SF activity in these rare young galaxy candidates,
in combination with the low surface density of their underlying stellar host,
boost [O{\sc iii}]$\lambda$5007 and H$\alpha$\ emission line EWs to values
of up to $\ga 2\times 10^3$ $\AA$ and $\ga 1.6\times 10^3$ $\AA$, respectively
\citep[e.g. P98,][among
others]{Izotov97-SBS0335,Guseva04-Pox186,Izotov06-SBS0335-GIRAFE,Papaderos06-6dF,Papaderos08},
i.e. of the order of the effective width of broad band filters.
Similar, though less extreme, cases are some of the recently discovered
ultra-compact starbursting dwarfs in galaxy clusters \citep[EW(H$\alpha$)$\sim 10^3$ $\AA$,][]{Reverte07}.
As shown in P02, the radial H$\alpha$\ intensity profiles of BCDs comprise two
characteristic components: a higher-surface brightness core with
{\sl R}$^{\star}$$\simeq${\it R$_{\rm SF}$}\ and an outer, roughly exponential LSB envelope with
$\alpha_{\rm H\alpha}$ in the range 0.1 -- 1 kpc.
Judging from its $\alpha_{\rm H\alpha}$ (210 -- 270 pc) and radial extent ({\sl R}$^{\star}$$\sim$2.6 kpc),
the \nsf ne\rm\ halo of \object{I~Zw~18}\ is by no means exceptional among BCDs/SFDGs.
However, \object{I~Zw~18}\ strikingly differs from any SFDG studied as yet by the fact
that the galaxy itself (i.e. its stellar host, cf. Fig. \ref{IZw18HaImage})
is several times more compact than the nebular halo:
\nsf ne\rm\ dominates already for {\sl R}$^{\star}$$\simeq$6\arcsec\ ($\equiv$3\,{\it R$_{\rm eff}$}) and reaches as far out as
$\sim$16 \emph{stellar} exponential scale lengths (Sect. \ref{phot:IZw18}).
This, so far unique, case in the nearby universe is schematically illustrated in
Fig. \ref{fig:IZw18_ce} where the radial distribution of stars and \nsf ne\rm\ in
\object{I~Zw~18}\ is compared with that of normal BCDs.
For the forthcoming discussion it is of special importance to recall that the
surface brightness level at which in \object{I~Zw~18}\ \nsf ne\rm\ dominates is quite high
($\mu \simeq 23.5$ mag/$\sq\arcsec$; cf Fig. \ref{IZw18SBP}), i.e. comparable to the
central surface brightness of dwarf irregulars
\citep[e.g.][]{PattersonThuan96,vanZee00-dI} and dwarf ellipticals
\citep{BinggeliCameron91,BinggeliCameron93}, and by at least one mag brighter
than that of Local Group dwarf spheroidals \citep[cf.][]{Mateo98-LG}.
The \nsf ne\rm\ envelope of \object{I~Zw~18}\ is therefore not to be confused with the
extraordinarily diffuse \nsf ne\rm\ in typical SFDGs.
Also of special relevance is the fact that the exponential \nsf ne\rm\ envelope
contributes at least 1/3 of the total $R$ band luminosity of \object{I~Zw~18}\
(P02 and Sect. \ref{phot:IZw18}).
\begin{figure}[h]
\begin{picture}(8.2,7.3)
\put(0.3,0.){{\psfig{figure=lr_fig12.eps,width=8.4cm,angle=0,clip=}}}
\end{picture}
\caption[]{
Schematic comparison of typical Blue Compact Dwarf (BCD) galaxies
(left) with \object{I~Zw~18}\ (right) with respect to the morphology and
radial intensity distribution (upper and lower figure, respectively)
of their stellar and nebular emission.
In typical BCDs (and star-forming dwarf galaxies in general) the contribution
of nebular emission to the line-of-sight intensity is practically negligible,
especially outside their centrally confined SF component (i.e. for {\sl R}$^{\star}$$\geq${\it R$_{\rm SF}$}).
By contrast, in \object{I~Zw~18}\ nebular emission dominates already in the inner part of the galaxy
({\sl R}$^{\star}$$\simeq$3{\it R$_{\rm eff}$}, corresponding to a surface brightness level of 23.5 mag/$\sq\arcsec$)
and produces its \emph{exponential} lower-surface brightness envelope that
reaches as far out as $\sim$16 stellar exponential scale lengths.
}
\label{fig:IZw18_ce}
\end{figure}
As we shall argue next, the morphological properties of \object{I~Zw~18}, while unique
among nearby SFDGs, are likely typical for rapidly assembling galaxies in the distant universe.
Just like \object{I~Zw~18}, these systems are building up their stellar component
through starbursts or prolonged phases of strongly elevated specific SFR (SSFR),
translating into short (a few 100 Myr) stellar mass doubling times.
The cumulative output of energy and momentum from stellar winds and SNe during such
dominant phases of galaxy evolution will inevitably lead to a large-scale
gas thermalization and acceleration, with super-shells protruding much beyond the galaxy itself.
Extended nebular halos encompassing the still compact stellar component
of high-SSFR proto-galactic systems may thus be ubiquitous in the early
universe. The photometric structure of these galaxies could then closely resemble the right
panel of Fig. \ref{fig:IZw18_ce}, comprising an HSB core to within which
the stellar component is confined and dominates and a much larger,
nearly exponential, nebular LSB envelope.
Despite cosmological dimming, the latter should be readily accessible to
observations out to $z \ga 2$, given that deep {\sl HST}\ imaging and image
stacking now permit studies of galaxies down to rest-frame surface
brightnesses of $\ga$ 26.5 -- 28.5 mag/$\sq\arcsec$\ at those redshifts
\citep[e.g.][]{Stockton08,vanDokkum2010,Noeske06-UDF}.
Morphological analogs to \object{I~Zw~18}\ may also exist
among compact high-SSFR galaxies at intermediate redshift (0.1$\la z \la$0.8),
such as e.g. Compact Narrow Emission-Line Galaxies \citep[CNLEGs,][]{Koo94,Guzman98},
Luminous Compact Blue Galaxies \citep{Guzman03-LCBG,Puech06-LCBs} and
Green Pea (GP) galaxies \citep{Cardamone09-GP,Amorin10-GP,Izotov11-GP}.
Note that the rest-frame EWs of GPs can in some cases almost compete with those
of nearby XBCDs \citep[see e.g.,][]{Izotov01-SBS335,Papaderos06-6dF}.
\smallskip
Such considerations motivate a closer examination of observational biases
that the \emph{spatial segregation} between stellar and nebular emission
may introduce into studies of distant, poorly resolved morphological analogs of \object{I~Zw~18}.
By taking into account the differing spatial distribution of stellar and nebular emission,
the discussion here goes beyond the framework of state-of-the-art modeling studies
\citep[e.g.][among others]{Huchra77,Bergvall85-ESO338-IG04,
Salzer89,Olofsson89,Krueger95,FR97,Izotov97-SBS0335,
Guseva01,Moy01,Anders04-NGC1569,
Zackrisson01,Panuzzo03,Zackrisson08,Kotulla09-GALEV,MM10-POPSTAR,ShaererBarros09,
Finlator11,Ono10} exploring in detail the impact of \nsf ne\rm\ on the
\emph{integral} SED of SF galaxies.
The predictions from such zero-dimensional models are to be treated with some
caution when they are compared with \emph{spatially resolved} observables (e.g. radial EW and color
profiles) with the aim of constraining the formation history of galaxies.
The minimum prerequisite for this approach to be valid is that nebular emission is
cospatial with the local ionizing and non-ionizing stellar background, or,
equivalently, that the ionizing Lyman continuum budget is reprocessed into
nebular emission \emph{on the spot}.
It has been shown for several XBCDs
\citep[e.g. P98; P02;][]{Papaderos99-Tol65,Izotov01-IZw18,Guseva01,Fricke01-Tol1214}
that this idealized picture is not invariably valid and that, in fact, a strong
spatial anti-correlation between emission-line EWs and stellar surface density
can develop over time on both small and large scales.
In these cases, subtraction of synthetic \nsf ne\rm\ SEDs from observed spectra, as done in P98,
offers the only viable approach for isolating and age-dating the underlying stellar SED.
A discussion of aperture effects on the luminosity-weighted SEDs of
distant morphological analogs to \object{I~Zw~18}\ is beyond the scope of this study.
Here, we will only focus on potential photometric biases.
A first one, already described in P02, arises from the fact that the
nebular envelope may, due to its exponentiality and reddish ($B$--$R$ and
$V$--$R$) rest frame colors, be taken as evidence for an evolved stellar disk.
Automated surface photometry analyses of large
extragalactic samples, if not including a robust criterion for discriminating
between stellar and nebular emission, could therefore be biased towards an
enhanced frequency of galactic disks already in place at an early cosmic epoch.
Arguably the most convincing argument for this concern derives from
\object{I~Zw~18}\ itself: if, for one of the best studied galaxies in the local universe,
it took three decades to realize that its exponential LSB envelope is due not
to a stellar disk but to \nsf ne\rm, one may rightfully doubt that this
would be immediately apparent in studies of its poorly resolved morphological
analogs at high $z$.
\begin{figure}[h]
\begin{picture}(8.2,10.6)
\put(0.1,3.1){{\psfig{figure=fig13a.ps,width=8.6cm,angle=-90.0}}}
\put(0.14,0.){{\psfig{figure=fig13b.ps,width=8.54cm,angle=-90.0}}}
\end{picture}
\caption[]{
{\bf upper panel:} Comparison of the $R$ SBP of \object{I~Zw~18}\ (filled circles),
with a synthetic SBP (labeled {\nlx core+env1}) that is due to the superposition of
two exponential components of differing central surface brightness
$\mu_0$ and exponential scale length $\alpha$.
The first one, labeled {\nlx core} (open circles) approximates the stellar component
that is confined to and dominates within the inner
({\sl R}$^{\star}$$\leq$6\arcsec) HSB part of the observed SBP.
The second component ({\nlx env1}; dotted line intersecting
the abscissa at $\mu \sim 22$ mag/$\sq\arcsec$) is a linear fit to the nebular
LSB envelope.
A S\'ersic fit to the composite {\nlx core+env1} SBP (orange solid-line curve)
yields a S\'ersic exponent $\eta \approx $2, close to the best-fitting value of
$\eta\approx$2.2 for the observed SBP.
The synthetic SBP labeled {\nlx core+env2}
(red solid-line curve) is computed by superposing on the
{\nlx core} an exponential nebular envelope of equal luminosity but twice
as large $\alpha$ as the observed envelope {\nlx env1}.
A S\'ersic fit to {\nlx core+env2} (thin curve) yields an $\eta\approx5$.
{\bf lower panel:} Residuals between the synthetic SBPs {\nlx core+env1} (thin curve)
and {\nlx core+env2} (thick curve) and their S\'ersic fits when the latter are
computed down to a surface brightness level $\mu_{\rm lim}$ of 26.5 and 29
mag/$\sq\arcsec$\ (solid and dotted curve, respectively).
It can be seen that residuals do not exceed 0.2 mag and 0.5 mag, respectively,
i.e. they are of the order of or smaller than typical uncertainties in SBPs
of intermediate-to-high $z$ galaxies at rest frame surface brightness levels
$\mu\la$26.5 mag/$\sq\arcsec$.
}
\label{fig:2exp_SBPs}
\end{figure}
\begin{figure}[h]
\begin{picture}(8.2,5.8)
\put(0.3,0.){{\psfig{figure=fig14.ps,width=8.2cm,angle=-90.0}}}
\end{picture}
\caption[]{Best-fitting S\'ersic exponent $\eta$ vs $\mu_{\rm lim}$ for the SBPs
in Fig. \ref{fig:2exp_SBPs}.
The thick-gray curve and the filled squares show, respectively, the variation of $\eta$
for the observed $R$ band SBP of \object{I~Zw~18}\ and the two-component approximation to
it, labeled {\nlx core+env1}.
The variation of $\eta$ with $\mu_{\rm lim}$ for the synthetic SBP
{\nlx core+env2} is shown with filled circles.}
\label{fig:mulim_vs_eta}
\end{figure}
Another potential bias arises from the fact that the superposition of two exponential
profiles of differing $\mu_0$ and $\alpha$ -- one representing the steeper
star-dominated core and the other the shallower nebular envelope -- can closely
approximate a genuine S\'ersic profile with a high shape parameter
(2$\leq \eta \la 5$), thereby mimicking the SBP of a massive spheroid.
The best-fitting S\'ersic exponent $\eta$ for such a composite SBP depends
both on the properties of its constituent exponential profiles and the limiting
surface brightness $\mu_{\rm lim}$ down to which S\'ersic models are fitted.
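For reference, the S\'ersic law fitted here can be written (in our notation; the
profile form is otherwise standard) as
\begin{equation}
\mu(R^{\star}) = \mu_{\rm e} + \frac{2.5\,b_{\eta}}{\ln 10}
\left[\left(\frac{R^{\star}}{R_{\rm eff}}\right)^{1/\eta}-1\right],
\end{equation}
with $b_{\eta}\approx 2\eta-1/3$, so that $\eta=1$ recovers a pure exponential
profile and $\eta=4$ a de Vaucouleurs profile; a larger best-fitting $\eta$
therefore indicates an increasingly spheroid-like SBP shape.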
As an example, the best-fitting $\eta$ for \object{I~Zw~18}\ increases monotonically
from 1.1 for $\mu_{\rm lim}=23.5$ mag/$\sq\arcsec$\ (at the surface brightness
where \nsf ne\rm\ takes over) to 2.2 for $\mu_{\rm lim}=28$ mag/$\sq\arcsec$.
This is illustrated in Fig. \ref{fig:2exp_SBPs}, where the observed SBP of
\object{I~Zw~18}\ (filled circles) is overlaid with the superposition of two
exponential components, approximating the core (open circles; profile labeled
{\llx core}) and the \nsf ne\rm\ envelope (dotted line crossing the abscissa at $\sim$22 mag/$\sq\arcsec$; {\llx env1}).
Their sum ({\llx core+env1}; orange solid-line curve) is then fitted by S\'ersic
models down to a progressively fainter $\mu_{\rm lim}$.
It can be seen from Fig. \ref{fig:mulim_vs_eta} that the best-fitting
$\eta$ (squares) for the synthetic SBP is doubled when $\mu_{\rm lim}$
is increased from 23.5 to $\geq$27.5 mag/$\sq\arcsec$.
The same trend is recovered when S\'ersic models are fitted
directly to the observed SBP (thick curve).
It is worth checking how the S\'ersic $\eta$ vs. $\mu_{\rm lim}$ relation may
change in the case of an equally luminous but shallower nebular envelope.
For this, we superpose on the {\llx core} a component ({\llx env2}) whose $\mu_0$ is
1.5 mag fainter and whose $\alpha$ is a factor of 2 larger than that of the
observed component {\llx env1}.
The S\'ersic fit to the {\llx core+env2} profile (red solid-line curve) is included
in Fig.~\ref{fig:2exp_SBPs} with the thin--dark curve.
As apparent from Fig. \ref{fig:mulim_vs_eta}, in this case $\eta$ shows a
steeper dependence on $\mu_{\rm lim}$, increasing to $\sim 4$ already for
$\mu_{\rm lim}\simeq 26.5$ mag/$\sq\arcsec$\ and leveling off to $\sim$5 at fainter levels.
Consequently, S\'ersic fits can readily lead to the misclassification of a
compact high-SSFR galaxy as a massive elliptical and, quite counter-intuitively,
deeper photometry exacerbates the problem.
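The depth dependence described above is straightforward to reproduce numerically. The sketch below is purely illustrative: the core and envelope parameters are hypothetical (not the measured I~Zw~18 values), and a simple grid search in $\eta$ stands in for a full nonlinear S\'ersic fit. It superposes two exponential profiles and fits S\'ersic models down to successively fainter $\mu_{\rm lim}$, with the best-fitting $\eta$ growing as the fit reaches deeper into the shallow envelope:

```python
import math

def exp_intensity(R, mu0, alpha):
    """Exponential surface-brightness profile: mu(R) = mu0 + 1.0857 R/alpha."""
    return 10.0 ** (-0.4 * (mu0 + 1.0857 * R / alpha))

def linfit(x, y):
    """Ordinary least squares for y = A + B*x; returns (A, B, rss)."""
    n = len(x)
    sx, sy = sum(x), sum(y)
    sxx = sum(v * v for v in x)
    sxy = sum(a * b for a, b in zip(x, y))
    B = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    A = (sy - B * sx) / n
    rss = sum((yi - A - B * xi) ** 2 for xi, yi in zip(x, y))
    return A, B, rss

def best_sersic_eta(R, mu):
    """For fixed eta, the Sersic law in magnitudes, mu = A + B*R**(1/eta),
    is linear in (A, B); scan an eta grid and keep the best fit."""
    best_eta, best_rss = None, float("inf")
    for i in range(76):
        eta = 0.5 + 0.1 * i          # grid: 0.5 ... 8.0
        x = [r ** (1.0 / eta) for r in R]
        _, _, rss = linfit(x, mu)
        if rss < best_rss:
            best_eta, best_rss = eta, rss
    return best_eta

# Hypothetical parameters: a steep stellar core plus a shallower, initially
# fainter nebular envelope (chosen so the envelope takes over near 23.5 mag).
R = [0.05 + 0.03 * i for i in range(200)]             # radii out to ~6 kpc
mu = [-2.5 * math.log10(exp_intensity(r, 20.0, 0.25)
                        + exp_intensity(r, 22.6, 1.0)) for r in R]

etas = []
for mu_lim in (23.5, 26.5, 29.0):
    pts = [(r, m) for r, m in zip(R, mu) if m <= mu_lim]
    Rs, mus = zip(*pts)
    etas.append(best_sersic_eta(Rs, mus))
    print(f"mu_lim = {mu_lim:.1f}: best-fitting Sersic eta = {etas[-1]:.2f}")
```

The grid-search trick exploits the fact that, for fixed $\eta$, the S\'ersic law in magnitudes is linear in its two remaining parameters.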
Note that the residuals between SBP and model (lower panel of Fig. \ref{fig:2exp_SBPs})
are $\la$0.5 mag for {\llx core+env2} and just $\la$0.2 mag for {\llx core+env1},
i.e. of the order of the 1$\sigma$ uncertainties in SBPs of intermediate-to-high $z$
galaxies \citep[cf. e.g.][]{Noeske06-UDF}, and thus small enough to go undetected.
In practice, even when PSF convolution effects are fully accounted for,
the pseudo-S\'ersic profile of a compact diskless high-SSFR galaxy is barely
distinguishable from the S\'ersic profile of a massive galaxy spheroid.
Evidently, since extended \nsf ne\rm\ can drastically affect an SBP as a whole,
it also impacts virtually all secondary photometric parameters that are derivable
from it (e.g. the effective radius and Gini coefficient, and the various light
concentration indices commonly used in galaxy quantitative morphology studies).
\smallskip
Thirdly, the \nsf ne\rm\ luminosity fraction in \object{I~Zw~18}\ ($\geq$1/3), if typical for its
higher-$z$ analogs, translates into a systematic error of $\geq$0.4 mag
in galaxy scaling relations involving total magnitudes.
These range from the Tully-Fisher relation to all relations comparing
luminosity with e.g. metallicity, diameter, mean surface brightness and velocity dispersion.
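The $\ga$0.4 mag figure follows directly from the magnitude scale: if a fraction $f_{\rm ne}$ of the total flux is nebular, attributing all of it to stars overestimates the stellar luminosity by $\Delta m = -2.5\log_{10}(1-f_{\rm ne})$. A short check (the function name is ours):

```python
import math

def magnitude_bias(f_ne):
    """Systematic error (mag) when a nebular flux fraction f_ne is
    mistakenly attributed to stars: dm = -2.5 log10(1 - f_ne)."""
    return -2.5 * math.log10(1.0 - f_ne)

print(f"{magnitude_bias(1.0 / 3.0):.2f} mag")  # f_ne = 1/3 gives ~0.44 mag
```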
Moreover, errors in galaxy luminosity propagate, potentially amplified, in
stellar mass determinations using theoretical mass-to-light ratios or SED fitting.
An investigation of this issue was recently presented by \cite{Izotov11-GP}:
these authors have shown that masses computed from spectral fits to the SED
\emph{continuum} of high-SSFR galaxies at intermediate $z$ can be
overestimated by a factor of up to $\sim$4.
This is not primarily due to the luminosity contribution of nebular emission,
but rather to the red spectral slope of the nebular continuum
\citep[see e.g.][]{Krueger95}, which drives SED fitting towards solutions
invoking a much too high luminosity fraction from old stars.
In galaxy assembly studies covering a wide range in $z$, downsizing
effects may further complicate the aforementioned biases, since \nsf ne\rm\ is expected to
affect galaxy populations of different mass over different timescales.
\smallskip
We next turn to the color contrast $\delta_{\rm ce}$\ between the HSB
\emph{core} and the nebular LSB \emph{envelope}.
This quantity can readily be determined
from SBPs, or within concentric apertures, and provides a handy proxy
to radial color gradients in galaxies, and thus a first guess at galaxy classification.
As we will show below, $\delta_{\rm ce}$\ offers for certain $z$ intervals a powerful
diagnostic for identifying high-SSFR galaxies with morphological properties
analogous to those of \object{I~Zw~18}.
Within other $z$ intervals, however, and depending on the colors considered,
a superficial interpretation of $\delta_{\rm ce}$\ can further aggravate the above discussed
galaxy misclassification biases.
In computing $\delta_{\rm ce}$\ and its variation with $z$, we approximated the spectrum
of \object{I~Zw~18}\ with synthetic stellar + \nsf ne\rm\ SEDs from Pegase~2, referring to
\lvss SFH2\rm\ and a metallicity $Z$=0.0004.
In these models, the properties of the HSB core
({\sl R}$^{\star}$$\leq$6\arcsec) are well reproduced by a synthetic SED for an age $t$=100 Myr.
This yields colors of $V$--$R$=0.2,
$V$--$I$\,=\,--0.15, $R$--$I$\,=\,--0.37 mag, and an EW(H$\alpha$) of 670 $\AA$, in good
agreement with the observed values
\cite[][P02]{Izotov01-IZw18,VilchezIglesiasParamo98-IZw18}.
As for the envelope (6\arcsec$\leq${\sl R}$^{\star}$$\la$20\arcsec), we adopted the SED
from the same model for $t$=0, i.e. a purely \nsf ne\rm\ spectrum.
Its colors ($B$--$V$=0.28, $V$--$R$=0.47, $B$--$R$=0.7, $V$--$I$\,=\,--0.65 and
$R$--$I$\,=\,--1.1 mag) also provide a good match to the data.
The variation of the $B$--$J$, $V$--$K$, $V$--$R$, $V$--$I$ and $R$--$I$
colors of the core and the envelope as a function of $z$ is shown in the
upper two panels of Fig. \ref{IZw18-z}. It can be seen that the envelope (middle panel)
shows particularly large color variations, as different strong emission lines shift
into the transmittance window of various filters, depending on $z$.
As already evident from Sect. \ref{phot:IZw18}, a local analog to \object{I~Zw~18}\ is easily
identifiable by its blue nebular envelope and large (0.8 mag) $\delta_{\rm ce}$\ (lower
panel) both with respect to $V$--$I$ and $R$--$I$.
This is also the case for the redshift range $0.15\la z \la 0.3$ where the envelope
appears much redder than the core in $V$--$I$ but bluer in $B$--$V$.
Other distinct peaks in $\|$$\delta_{\rm ce}$$\|$ with respect to various optical or optical--NIR colors
are apparent for e.g. $z\approx 0.42$, 0.55 and 0.9.
A comparison of the upper and lower panel of Fig. \ref{IZw18-z} shows that
the $\delta_{\rm ce}$\ exhibits much larger variations ($\|$$\delta_{\rm ce}$$\|$$\simeq$1.6 mag) than the
HSB core, i.e. it is a far more sensitive indicator of
extended \nsf ne\rm\ than integral (luminosity-weighted) colors that are primarily
driven by the core.
For example, at $z\approx0.27$, the $V$--$I$ and $B$--$V$ colors of the
HSB core agree to within $\leq$0.3 mag, whereas the corresponding $\delta_{\rm ce}$\
values differ by up to $\approx$1.3 mag.
It is worth pointing out that, for certain redshift windows and color indices,
the $\delta_{\rm ce}$\ may point towards diametrically different views on the nature of an \object{I~Zw~18}\ analog.
For instance, at $0.15\la z \la 0.3$ the large $V$--$I$
core-to-envelope color contrast ($\delta_{\rm ce}$\ $\sim$ 0.8 mag) and the moderately blue
colors of the core (0.5 mag) superficially suggest an old disk hosting nuclear SF activity.
The opposite conclusion could be drawn from the $\delta_{\rm ce}$\ in $B$--$V$
($\approx$0.5 mag) which may be taken as evidence for a very young
stellar disk encompassing a slightly older core, in line with the
inside-out galaxy growth interpretation.
In either case, the exponential envelope, usually interpreted as a stellar disk,
would reinforce the erroneous conclusion.
Several similar cases for various color combinations and redshifts can be read off
Fig. \ref{IZw18-z}.
Also of importance is the fact that for other redshifts
(e.g. 0.1$\la z \la$0.15, 0.3, 0.75, 1.05) the $\delta_{\rm ce}$\ drops to $\la$0.2 mag
for some colors, having little discriminating power between stellar
and nebular emission and superficially suggesting a nearly uniform stellar age.
\smallskip
With regard to all concerns above, one might counter-argue that photometric
$k$ corrections would rectify SBPs and color profiles, and eliminate pitfalls
in the interpretation of $\delta_{\rm ce}$.
However, state-of-the-art $k$ corrections, mostly tailored to stellar SEDs and applied to a
morphological analog of \object{I~Zw~18}\ as a whole, regardless of its physically
distinct radial zones, would most probably aggravate the problem in a barely
predictable manner.
\begin{figure}
\begin{picture}(8.6,12.36)
\put(0,8.4){{\psfig{figure=fig15a.ps,width=8.8cm,angle=-90.0}}}
\put(0,4.6){{\psfig{figure=fig15b.ps,width=8.8cm,angle=-90.0}}}
\put(0,0.0){{\psfig{figure=fig15c.ps,width=8.93cm,angle=-90.0}}}
\end{picture}
\caption[]{Variation of the observed color as a function of redshift $z$ for a
template galaxy with the properties of \object{I~Zw~18}.
The model consists of a high-surface brightness {\nlx core}
to within which stellar emission is confined and dominates
{\bf (upper panel)}, and a larger lower-surface brightness nebular
{\nlx envelope} {\bf (middle panel)}.
The colors of the {\nlx core} and the {\nlx envelope} yield for $z=0$ a good
match to the observed values for the core ({\sl R}$^{\star}$$\leq$6\arcsec) and the nebular
envelope (6\arcsec$\leq${\sl R}$^{\star}$$\la$20\arcsec) of \object{I~Zw~18}\ (cf Sect. \ref{phot:IZw18}).
The {\bf lower panel} shows the dependence of the color contrast $\delta_{\rm ce}$\
between the core and the envelope as a function of redshift.}
\label{IZw18-z}
\end{figure}
\smallskip
In summary, the case of \object{I~Zw~18}\ stands as a warning benchmark to studies of
high-SSFR galaxies near and far.
It reminds us that nebular emission in SF galaxies is not
always cospatial with the underlying local ionizing and non-ionizing
stellar background, nor does it have to scale with its surface density.
Quite the contrary, in galaxies with high SSFR (i.e. forming young
galaxies and/or systems rapidly assembling their stellar mass during dominant
phases of their evolution) nebular emission is plausibly expected to extend
far beyond the galaxy itself and to significantly contribute to its total luminosity.
This, in combination with the empirical evidence that nebular emission forms an
\emph{exponential} envelope, gives rise to potentially important observational
biases in studies of moderately resolved morphological analogs to \object{I~Zw~18},
which are likely ubiquitous in the early universe.
Interesting in this context is also the detection of \emph{exponential} Ly$\alpha$
halos around low \citep{Hayes2007} and high-$z$ star-forming galaxies
\citep{Steidel11-Lya}, resulting in SBPs strikingly similar to those of \object{I~Zw~18}.
In principle, any strong thermal or non-thermal central source of energy and momentum could
generate a large nebular envelope and a pseudo bulge--disk luminosity profile.
The foregoing discussion is therefore of relevance to a wider range of topics,
as e.g. the co-evolution of AGNs with their host galaxies or the properties of ionized
halos around powerful radio galaxies \citep[e.g.,][and references therein]{VillarMartin03}.
The prospect of spatially resolved studies of galaxy formation in the faraway universe
with next-generation observing facilities calls for theoretical guidance on the properties and time evolution
of nebular halos around protogalactic systems.
Some questions (see also P02) to computational astrophysics include:
i) in which way are the photometric properties of the nebular envelope
(e.g. its $\mu_0$ and $\alpha$), and their temporal evolution, related to the
specifics of energy production (e.g. the SFH and the ionizing SED),
the initial geometry and kinematics
of the protogalactic gas reservoir and the physical conditions in the intergalactic medium
(e.g. external pressure, ambient ionizing field from multiple mutually `illuminating'
protogalactic units)? More specifically,
ii) does (for a given set of environmental conditions) the core-envelope
radial intensity pattern scale in a homologous manner, i.e. is the $\alpha$
of the nebular envelope invariant? If not, iii) what do the core-to-envelope
luminosity ratio and the normalized {\it R$_{\rm SF}$}/$\alpha$ ratio tell us about e.g. the
recent SFH and current SSFR?
Additionally, iv) how is the photometric, chemical and kinematical evolution of the nebular
envelope related to the escape probability of Ly continuum and Ly$\alpha$ photons?
\section{Summary and conclusions \label{Conclusions}}
We used the entire set of archival {\sl HST}\ ACS broad band imaging data
for \object{I~Zw~18}\ and its fainter component \object{I~Zw~18\,C}\ to study the photometric structure
of this nearby (19 Mpc) dwarf galaxy pair to unprecedentedly faint surface
brightness levels ($\mu\ga$29 mag/$\sq\arcsec$).
\smallskip
\noindent The main results from this study may be summarized as follows:
\noindent {\nlx i)} Radial color profiles reveal very blue and practically
constant colors (0$\pm$0.05 mag) for \object{I~Zw~18\,C}\ down to $\mu \sim 27.6$ mag/$\sq\arcsec$, and
a previously undisclosed, slightly redder ($V$--$I$=0.2$\pm$0.08 mag,
$R$--$I$=0.16$\pm$0.09 mag) stellar component in its extreme periphery (27.6 -- 29 mag/$\sq\arcsec$).
We have verified that these blue colors do not merely reflect the
luminosity-weighted average of blue young stellar clusters with a faint
red stellar background but that they are characteristic of the unresolved
host galaxy of \object{I~Zw~18\,C}.
We argue that the buildup of the photometrically dominant stellar component of
\object{I~Zw~18\,C}\ has occurred in a largely sequential mode, through a star-forming (SF)
process that likely started $\tau \sim $100 Myr ago at the redder
northwestern tip of the galaxy and propagated with a mean velocity of
$\sim$20 km s$^{-1}$\ to its bluer southeastern tip.
The photometric properties of the extreme periphery of \object{I~Zw~18\,C}\ are entirely consistent
with this formation scenario, if the effect of stellar diffusion is taken into account.
Radial migration of newly forming stars with a mean velocity of $\approx$4 km s$^{-1}$\ over $\sim\tau$,
and the associated \emph{stellar mass filtering effect} described in
Papaderos et al. (2002, hereafter P02), can naturally account for the
slightly redder colors, absence of nebular emission (\nsf ne\rm) and topological
properties of the stellar outskirts of \object{I~Zw~18\,C}.
A faint ancient stellar substrate cannot be ruled out, even though
our analysis does not lend observational support to its presence.
\smallskip
\noindent {\nlx ii)} In \object{I~Zw~18}\ (i.e. the main body) severe contamination
of broad band colors by \nsf ne\rm\ prevents a conclusive age dating of
the stellar component almost everywhere. Therefore, even though our combined images
are the deepest presently available, we cannot improve on the age-dating analysis
by P02, who two-dimensionally subtracted strong nebular emission lines
from broad band {\sl HST}\ WFPC2 images to isolate and age-date the residual underlying stellar
background. However, the colors of the reddest quartile of region $\omega$, a region
with comparatively weak \nsf ne\rm\ at the southeastern tip of \object{I~Zw~18},
were found to be in good agreement with those previously inferred by P02.
These colors, if representative for the host galaxy of \object{I~Zw~18},
imply on the basis of a continuous or exponentially decreasing star formation
history (SFH) that \object{I~Zw~18}\ has formed most of its stellar mass
at a late cosmic epoch. This, together with our conclusions under {\nlx i)},
supports the view that both \object{I~Zw~18}\ and \object{I~Zw~18\,C}\ are cosmologically young objects
that have undergone a nearly synchronous evolution.
\smallskip
\noindent {\nlx iii)} We show that $\sim$20\% of the isophotal
area of \object{I~Zw~18}\ at 25 mag/$\sq\arcsec$\ is severely affected by \nsf ne\rm, thus inaccessible
to age dating studies via broad band colors and color-magnitude diagrams (CMDs).
The local impact of \nsf ne\rm\ manifests itself i.a. in a combination of
red ($\sim$0.5 mag) $V$--$R$ with extremely blue (--0.4 \dots --0.8 mag)
$V$--$I$ and $R$--$I$ colors.
Nebular emission shows considerable sub-structure, with numerous clumps and
overlapping shells, and little spatial correlation with the local stellar
background. It thus cannot be treated as a uniform foreground emitting layer
and accurately subtracted out using standard point source photometry algorithms.
This likely results in substantial random and systematic errors that might not
be fully accounted for by the standard CMD error budget.
Further potential sources of systematic uncertainties stem from spatial
displacements (50 -- 100 pc) between the intensity maxima of the
[O{\sc iii}]$\lambda\lambda$4959,5007 and H$\alpha$\ emission lines along ionized
gas shells. These may differentially affect the local background determination
in various filters, causing an artificial reddening of CMD point sources
in the interior of nebular shells (and {\sl vice versa}).
\smallskip
\noindent {\nlx iv)} Based on the extraordinarily deep combined {\sl HST}\ ACS images, we have
been able to study the \emph{exponential} low-surface brightness (LSB) envelope of
\object{I~Zw~18}\ out to its extreme periphery using both surface brightness profiles
(SBPs) and color maps.
These reveal uniform colors of $V$--$R$$\approx$0.55 mag, $V$--$I$$\approx$--1
mag and $R$--$I$$\approx$--1.4 mag all over the LSB component of \object{I~Zw~18},
corroborating the previous conclusion by P02 that this envelope
is due not to a stellar disk but to extended \nsf ne\rm.
Specifically, our analysis indicates that \nsf ne\rm\ dominates the line-of-sight intensity
beyond 3 effective radii (or, equivalently, for $\mu\geq$23.5 mag/$\sq\arcsec$) and
extends as far out as $\sim$16 stellar exponential scale lengths.
The overall picture emerging from our analysis is therefore that
\object{I~Zw~18}\ is a cosmologically young object that consists of a compact,
high-surface brightness (HSB) core, within which stellar emission is confined
and dominates, and a much larger nebular LSB envelope.
\smallskip
\noindent {\nlx v)} We argue that the morphological properties of \object{I~Zw~18},
while unique among nearby SF dwarf galaxies, are probably typical among
distant young galaxies in dominant phases of their
evolution, during which they assemble their stellar component at high
specific star formation rates (SSFRs).
The prodigious energetic output during such phases of rapid stellar mass
growth is expected to result in large-scale gas ionization and acceleration,
with \nsf ne\rm\ protruding much beyond the still compact stellar galaxy host.
These systems could thus bear strong morphological resemblance
to \object{I~Zw~18}, comprising a compact core that is dominated by stellar
emission and a much larger exponential nebular envelope.
A question of considerable interest is therefore how the nebular envelope
could impact photometric studies of moderately resolved morphological
analogs of \object{I~Zw~18}\ at higher $z$.
\smallskip
A potential bias, already discussed in P02, arises from the fact that the nebular
envelope, owing to its exponentiality and red $B$--$R$ color, mimics an evolved stellar disk.
Here, by using \object{I~Zw~18}\ as a template, we extend previous considerations:
{\nlx v.1} We point out that the superposition of two exponential components
of differing central surface brightness and scale length, approximating the
core and the envelope of a distant \object{I~Zw~18}\ analog, may be barely distinguishable from a genuine
S\'ersic profile with an exponent 2$\la\eta\la$5.
Therefore, S\'ersic models offer, in the specific context, a poor diagnostic for
disentangling compact high-SSFR protogalaxies from massive galaxy spheroids.
{\nlx v.2} Nebular emission contributes at least 1/3 of the total luminosity
of \object{I~Zw~18}\ (P02 and this study).
This luminosity fraction, if typical for its higher-$z$ analogs,
translates into a systematic error of $\ga$0.4 mag in all galaxy scaling
relations involving luminosities (e.g., the Tully-Fisher relation, and
relations between luminosity and metallicity, diameter, velocity dispersion).
Evidently, errors in total luminosities propagate into errors in galaxy mass
determinations via theoretical mass-to-light ratios or SED fitting techniques.
Moreover, since extended \nsf ne\rm\ can drastically affect a galaxy SBP
as a whole, it also affects practically all secondary photometric
quantities that are derivable from it (e.g. the effective radius or various
light concentration indices used for quantitative galaxy morphology studies).
{\nlx v.3} We investigate the variation of the color contrast $\delta_{\rm ce}$\
between the star-dominated core and the surrounding nebular envelope as a function of $z$.
This task is motivated by the fact that $\delta_{\rm ce}$\ provides a handy proxy
to radial color gradients in galaxies and a valuable galaxy classification tool.
We show that for certain $z$ intervals, this quantity offers a powerful
diagnostic for the identification of moderately resolved \object{I~Zw~18}\ analogs.
Within other $z$ intervals, however, and depending on the color indices considered,
a superficial interpretation of $\delta_{\rm ce}$\ can further enhance galaxy
misclassification biases stemming from SBP fitting (cf v.1) and potentially
impact our understanding of galaxy assembly over time.
State-of-the-art $k$ corrections applied to distant morphological analogs to
\object{I~Zw~18}\ as a whole, i.e. regardless of their physically distinct radial zones,
may aggravate observational and interpretation biases
in a barely predictable manner.
\smallskip
In the era of spatially resolved studies of galaxy formation in the early
universe with next-generation observing facilities, a better theoretical
understanding of the rise and fall of nebular galaxy halos over cosmic time
appears to be crucially important.
In this respect, we formulate some open questions for computational astrophysics.
\begin{acknowledgements}
Polychronis Papaderos is supported by a Ciencia 2008 contract, funded
by FCT/MCTES (Portugal) and POPH/FSE (EC).
He also acknowledges support by the Wenner-Gren Foundation.
G\"oran \"Ostlin acknowledges support from the Swedish Research
council (VR) and the Swedish National Space Board (SNSB).
He is a Royal Swedish Academy of Sciences research fellow, supported
from a grant from the Knut and Alice Wallenberg foundation.
The referee's careful reading and constructive report are very much appreciated.
This research has made use of data acquired with the
European Southern Observatory telescopes, and obtained from
the ESO/ST-ECF Science Archive Facility and of the NASA/IPAC
Extragalactic Database (NED) which is operated by the
Jet Propulsion Laboratory, CALTECH, under
contract with the National Aeronautic and Space Administration.
\end{acknowledgements}
\section{Introduction}
\vspace{-0.5em}
Deep neural networks achieve state-of-the-art performance in many tasks, \eg, image classification, object detection, and instance segmentation, but they are vulnerable to adversarial attacks. A small perturbation that is imperceptible to humans can mislead a neural network's prediction~\cite{szegedy2013intriguing, carlini2017towards,athalye2018obfuscated, kurakin2016adversarial, carlini2017adversarial}.
To mitigate this problem, Madry \etal~\cite{madry2018towards} develop an effective framework to train robust neural networks. They formulate adversarial training as a robust optimization problem.
Specifically, they use projected gradient descent (PGD) to find the worst-case adversarial example near the original image and then minimize the loss at this point during training.
Networks trained under this framework achieve state-of-the-art robustness under many attacks~\cite{zhang2019theoretically, Wang2020Improving, rice2020overfitting}.
However, these networks are only empirically robust, not verifiably robust, and they become vulnerable when stronger attacks are presented~\cite{wang2018mixtrain,croce2020reliable,tjeng2019evaluating}.
\begin{figure}[t]
\centering
\subfigure[Constant]{
\label{fig:constant_bdl}
\includegraphics[width=0.48\columnwidth]{figures/constant_bdl.png}}
\subfigure[Tight]{
\label{fig:zero_bdl}
\includegraphics[width=0.48\columnwidth]{figures/zero_bdl.png}}
\\
\subfigure[Adaptive: Case $|l|>u$]{
\label{fig:adaptive_bdl_1}
\includegraphics[width=0.48\columnwidth]{figures/adaptive_bdl_case_1.png}}
\subfigure[Adaptive: Case $|l|\leq u$]{
\label{fig:adaptive_bdl_2}
\includegraphics[width=0.48\columnwidth]{figures/adaptive_bdl_case_2.png}}
\caption{Illustration of different strategies to choose bounding lines for the three status of a ReLU neuron. Dead: $l \leq u \leq 0$; Unstable: $l < 0 < u$; Alive: $0 \leq l \leq u$. $[l,u]$ is the input range of the neuron.
(a) chooses constant bounding lines.
(b) is the tight strategy.
(c) and (d) are the two cases of unstable neurons in the adaptive strategy. The adaptive strategy chooses the same bounding lines as the tight strategy for dead and alive neurons. See more details in
{Appendix A.7}.}
\vspace{-2em}
\label{fig:bounding_line_strategies}
\end{figure}
This leads to the development of robustness verification, which aims to provide a certificate that a neural network gives consistent predictions for all inputs in some set, usually an $l_p$ ball around a clean image.
The key of robustness verification is to compute the lower and upper bounds of the output logits when input can take any value in the $l_p$ ball.
The exact bounds can be computed through Satisfiability Modulo Theory~\cite{katz2017reluplex} or solving a
Mixed Integer Linear Programming (MILP) problem~\cite{tjeng2019evaluating, cheng2017maximum}.
Relaxed bounds can be obtained by reducing the bound-computation problem to a linear programming (LP) problem~\cite{wong2018provable} or a semidefinite programming (SDP) problem~\cite{dathathri2020enabling}.
However, these programming based methods are expensive and difficult to scale to large networks.
To this end, another approach that makes linear relaxations of the nonlinear activation functions in a network is proposed~\cite{singh2018fast, singh2019abstract, wang2018efficient, weng2018towards, zhang2018crown, ko2019popqorn}.
Figure~\ref{fig:bounding_line_strategies} illustrates different strategies to make linear relaxations of a ReLU neuron.
These methods can compute bounds analytically and efficiently.
In this paper, we focus on the study of CROWN~\cite{zhang2018crown}, which can \emph{compute relatively tight bounds while being fast}. Other similar approaches \cite{singh2018fast, singh2019abstract, wang2018efficient, weng2018towards} are either a special case of CROWN or a different view of it as demonstrated by Salman \etal~\cite{salman2019convex}.
Wong \etal~\cite{wong2018provable} propose to incorporate bounds computed by the aforementioned linear relaxation based methods in the loss function to train verifiably robust networks. Similar approaches are proposed in several other works~\cite{mirman2018differentiable, dvijotham2018training, raghunathan2018certified, wang2018mixtrain}. However, these methods generally bring heavy computational overhead to the original training process.
Gowal \etal~\cite{gowal2019scalable} propose to use a simple technique, interval bound propagation (IBP), to compute bounds. IBP is fast and can scale to large networks. Despite being loose, IBP outperforms previous linear relaxation based methods in terms of training verifiably robust networks. Zhang \etal~\cite{zhang2020towards} further improve this method by combining IBP with the tighter linear relaxation based method, CROWN. The resulting method is named CROWN-IBP. They use CROWN-IBP to compute bounds at the initial training phase and
achieve the lowest $l_{\infty}$ verified errors.
We notice that both IBP trained networks~\cite{gowal2019scalable} and CROWN-IBP trained networks~\cite{zhang2020towards} are verified by IBP after training. One natural question is whether we can use tighter linear relaxation based methods to verify these networks and achieve lower verified errors. Surprisingly, Zhang \etal~\cite{zhang2020towards} find that the typically much tighter method, CROWN, gives very loose bounds for IBP trained networks. It seems that IBP trained networks have very different verification properties from normally trained networks.
We also find that CROWN cannot verify large networks due to its high memory cost.
Another phenomenon we observe on IBP and CROWN-IBP trained networks is that most neurons become dead during training. We believe that this could restrict the representation capability of the network and thus hurt its performance. In this paper, we make the following contributions to tackle the aforementioned problems:
\begin{enumerate}
\vspace{-0.3em}
\item
We develop a relaxed version of CROWN, linear bound propagation (LBP), which has better scalability. We demonstrate that LBP can be used to obtain tighter bounds than IBP on both normally trained and IBP trained networks.
\vspace{-0.3em}
\item
We prove IBP is a special case of CROWN and LBP. The reason that CROWN gives looser bounds than IBP on IBP trained networks is that CROWN chooses bad bounding lines when making linear relaxations of the nonlinear activation functions. We prove CROWN and LBP are always tighter than IBP if they adopt the tight strategy to choose bounding lines as shown in Figure~\ref{fig:bounding_line_strategies}.
\vspace{-1.5em}
\item We propose to use a new activation function, parameterized ramp function (ParamRamp), to train verifiably robust networks. Compared with ReLU, where most neurons become dead during training, ParamRamp brings more diversity of neuron status. Our experiments demonstrate networks with ParamRamp activation achieve state-of-the-art verified $l_\infty$ robustness on MNIST, CIFAR-10 and Tiny-ImageNet.
\end{enumerate}
\vspace{-1.5em}
\section{Background and Related Work}
\vspace{-0.5em}
In this section, we start by giving definition of an $m$-layer feed-forward neural network
and then briefly introduce the concept of robustness verification.
Next we present interval bound propagation, which is used to train networks with best verified errors.
Finally we review two state-of-the-art verifiable adversarial training methods~\cite{gowal2019scalable, zhang2020towards} that are most related to our work.
\vspace{-1em}
\paragraph{Definition of an $m$-layer feed-forward network.}
\begin{align}
\small
\begin{split}
{\vz}^{(k)} &= {\mW}^{(k)} {\va}^{(k-1)} + {\vb}^{(k)}, {\va}^{(k)} = \sigma({\vz}^{(k)}), \\
k&=1,2,\cdots,m.
\end{split}
\vspace{-0.5em}
\end{align}
${\mW}^{(k)}, {\vb}^{(k)}, {\va}^{(k)}, {\vz}^{(k)}$ are the weight matrix, bias, activation and pre-activation of the $k$-th layer in the network, respectively. $\sigma$ is the elementwise activation function. Note that we always assume $\sigma$ is monotonically increasing in the rest of the paper. $\va^{(0)} = \vx$ and $\vz^{(m)}$ are the input and output of the network. We also use $n_k$ to denote the number of neurons in the $k$-th layer, and $n_0$ is the dimension of the input.
Although this network only contains fully connected layers, our discussion in the rest of the paper readily generalizes to convolutional layers, as they are essentially linear transformations as well~\cite{boopathy2019cnn}.
\vspace{-1.5em}
\paragraph{Robustness verification.}
Robustness verification aims to guarantee a neural network gives consistent predictions for all inputs in some set, typically an $l_p$ ball around the original input: $\mathbb{B}_p(\vx_0, \epsilon) = \{\vx \,|\, ||\vx-\vx_0||_p \leq \epsilon\}$, where $\vx_0$ is the clean image.
The key step is to compute the lower and upper bounds of the output logits $\vz^{(m)}$ (or the lower bound of the margin between ground truth class and other classes as defined in~\eqref{eqn:margin}) when the input can take any value in $\mathbb{B}_p(\vx_0, \epsilon)$. We can guarantee that the network gives correct predictions for all inputs in $\mathbb{B}_p(\vx_0, \epsilon)$ if the lower bound of the ground truth class is larger than the upper bounds of all the other classes (or the lower bound of the margin is greater than $0$).
The verified robustness of a network is usually measured by the verified error: the percentage of images for which we cannot guarantee that the network gives correct predictions for all inputs in $\mathbb{B}_p(\vx_0, \epsilon)$.
Note that the verified error not only depends on the network and the allowed perturbation of the input, but also the method we use to compute bounds for the output. CROWN and IBP are the two bounding techniques that are most related to our work.
We briefly walk through CROWN in Section~\ref{sec:relaxed_crown} and introduce IBP right below.
\vspace{-1.5em}
\paragraph{Interval bound propagation.}
Assume we know the lower and upper bounds of the activation of the $(k-1)$-th layer: $\hat{\vl}^{(k-1)} \leq \va^{(k-1)} \leq \hat{\vu}^{(k-1)}$. Then IBP computes bounds of $\vz^{(k)}$, $\vl^{(k)}$ and $\vu^{(k)}$, in the following way:
\begin{align}
\small
\begin{split}
\vl^{(k)}&=relu({\mW}^{(k)}) \hat{\vl}^{(k-1)} + neg({\mW}^{(k)}) \hat{\vu}^{(k-1)} + {\vb}^{(k)}, \\
\vu^{(k)}&=relu({\mW}^{(k)}) \hat{\vu}^{(k-1)} + neg({\mW}^{(k)}) \hat{\vl}^{(k-1)} + {\vb}^{(k)},
\end{split}
\vspace{-1em}
\end{align}
where $relu$ is the elementwise ReLU function and $neg$ is the elementwise version of the function defined by $neg(x) = x$ if $x\leq0$ and $neg(x) = 0$ otherwise.
Next, the bounds $\hat{\vl}^{(k)}$ and $\hat{\vu}^{(k)}$ of $\va^{(k)}$ can be computed by
\begin{align}
\small
\begin{split}
\hat{\vl}^{(k)} = \sigma({\vl}^{(k)}), \hat{\vu}^{(k)} = \sigma({\vu}^{(k)}).
\end{split}
\end{align}
IBP repeats the above procedure from the first layer and computes bounds layer by layer until the final output as shown in Figure~\ref{fig:ibp_vs_lbp}. Bounds of $\va^{(0)}=\vx$ are known if the allowed perturbation is in an $l_\infty$ ball. Closed-form bounds of $\vz^{(1)}$ can be computed using H\"older's inequality as shown in~\eqref{eqn:closed_form_bound_z^k} if the allowed perturbation is in a general $l_p$ ball.
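As a concrete illustration, the layer-by-layer recursion above can be sketched in a few lines of NumPy. This is a minimal sketch for fully connected layers and an $l_\infty$ ball; the function and variable names are ours, not from any released implementation:

```python
import numpy as np

def ibp_bounds(weights, biases, x0, eps, sigma=lambda t: np.maximum(t, 0.0)):
    """Interval bound propagation over the l_inf ball ||x - x0||_inf <= eps.

    weights/biases: per-layer W^{(k)} (n_k x n_{k-1}) and b^{(k)}.
    sigma is assumed monotonically increasing (here ReLU), as in the paper.
    Returns the pre-activation bounds (l^{(m)}, u^{(m)}) of the last layer.
    """
    lo, hi = x0 - eps, x0 + eps              # bounds of a^{(0)} = x
    for k, (W, b) in enumerate(zip(weights, biases)):
        Wp, Wn = np.maximum(W, 0.0), np.minimum(W, 0.0)   # relu(W), neg(W)
        l = Wp @ lo + Wn @ hi + b            # worst case for the lower bound
        u = Wp @ hi + Wn @ lo + b
        if k < len(weights) - 1:             # monotonic sigma: bounds map through
            lo, hi = sigma(l), sigma(u)
    return l, u
```

Each layer costs only two matrix-vector products, which is why IBP scales to large networks.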
\vspace{-1em}
\paragraph{Training verifiably robust networks.}
Verifiable adversarial training first uses a robustness verification method to compute a lower bound $\vl^{\vomega}$ of the margin $\vomega$ between the ground truth class $y$ and the other classes:
\begin{align}
\small
\begin{split}
&\vomega_i(\vx) = \vz^{(m)}_y(\vx) - \vz^{(m)}_i(\vx), i=1,2,\cdots,n_m. \label{eqn:margin}\\
&\vl^{\vomega}(\vx_0, \epsilon) \leq \vomega(\vx), \forall \vx \in \mathbb{B}_p(\vx_0, \epsilon).
\end{split}
\end{align}
Here we use ``$\leq$'' to denote elementwise less than or equal to. For simplicity, we will not differentiate operators between vectors and scalars in the rest of the paper when no ambiguity is caused.
Gowal \etal~\cite{gowal2019scalable} propose to use IBP to compute the lower bound $\vl_{\text{IBP}}^{\vomega}(\vx_0, \epsilon)$ and minimize the following loss during training:
\begin{align}
\label{eqn:IBP_loss}
\small
\begin{split}
\mathop{\mathbb{E}}_{(\vx_0,y)\in\mathcal{X}} \kappa L(\vz^{(m)}(\vx_0), y) + (1-\kappa) L(-\vl_{\text{IBP}}^{\vomega}(\vx_0, \epsilon), y),
\end{split}
\end{align}
where $\mathcal{X}$ is the underlying data distribution, $\kappa$ is a hyperparameter to balance the two terms of the loss, and $L$ is the normal cross-entropy loss.
This loss encourages the network to maximize the margin between ground truth class and other classes.
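The objective in \eqref{eqn:IBP_loss} is straightforward to express in code. Below is a minimal single-example sketch with our own naming (a real implementation would use a framework's batched cross-entropy); passing $-\vl_{\text{IBP}}^{\vomega}$ through the cross-entropy pushes every margin lower bound to be large:

```python
import numpy as np

def cross_entropy(logits, y):
    # Numerically stable softmax cross-entropy for a single example.
    z = logits - logits.max()
    return -(z[y] - np.log(np.exp(z).sum()))

def ibp_training_loss(logits_clean, l_omega, y, kappa):
    """Eq. (3): kappa * clean loss + (1 - kappa) * robust loss.

    l_omega is the IBP lower bound of the margin omega_i = z_y - z_i
    (so l_omega[y] = 0); feeding -l_omega to the cross-entropy penalizes
    classes whose margin lower bound is small or negative.
    """
    return kappa * cross_entropy(logits_clean, y) + \
           (1 - kappa) * cross_entropy(-l_omega, y)
```

With $\kappa = 1$ this reduces to standard training, and with $\kappa = 0$ the supervision comes entirely from the bound.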
Zhang \etal~\cite{zhang2020towards} argue that IBP bound is loose during the initial phase of training, which makes training unstable and hard to tune. They propose to use a convex combination of the IBP bound $\vl_{\text{IBP}}^{\vomega}$ and CROWN-IBP bound $\vl_{\text{C.-IBP}}^{\vomega}$ as the lower bound to provide supervision at the initial phase of training:
\begin{align}
\label{eqn:convex_IBP_CROWN}
\small
\begin{split}
\vl^{\vomega} = (1-\beta) \vl_{\text{IBP}}^{\vomega} +\beta \vl_{\text{C.-IBP}}^{\vomega}.
\end{split}
\end{align}
The loss they use is the same as the one in \eqref{eqn:IBP_loss} except for replacing $\vl_{\text{IBP}}^{\vomega}$ with the new $\vl^{\vomega}$ defined in \eqref{eqn:convex_IBP_CROWN}. They design a schedule for $\beta$: It starts from $1$ and decreases to $0$ during training. Their approach achieves state-of-the-art verified errors on MNIST and CIFAR-10 datasets.
Xu \etal~\cite{xu2020automatic} propose a loss fusion technique to speed up the training process of CROWN-IBP and this enables them to train large networks on large datasets such as Tiny-ImageNet and Downscaled ImageNet.
\vspace{-0.5em}
\section{Relaxed CROWN}
\vspace{-0.5em}
\label{sec:relaxed_crown}
CROWN is considered an efficient robustness verification method compared with LP based methods \cite{weng2018towards, zhang2018crown, lyu2020fastened}, but these works only test CROWN on small multi-layer perceptrons with at most several thousand neurons in each hidden layer. Our experiment suggests that CROWN scales badly to large convolutional neural networks (CNNs): it consumes more than $12$\,GB of memory when verifying a single image from CIFAR-10 on a small $4$-layer CNN (see its detailed structure in
{Appendix B.1}), which prevents it from utilizing modern GPUs to speed up computation. Therefore, it is crucial to improve CROWN's scalability to employ it on large networks. To this end, we develop a relaxed version of CROWN named \textbf{Linear Bound Propagation (LBP)}, whose computation complexity and memory cost grow linearly with the size of the network.
We first walk through the deduction process of the original CROWN.
\vspace{-1em}
\paragraph{The original CROWN.}
Suppose we want to compute lower bound for the quantity $\mW^{obj} \vz^{(k)} + \vb^{obj}$. $\mW^{obj}$ and $\vb^{obj}$ are the weight and bias that connect $\vz^{(k)}$ to the quantity of interests.
For example, the quantity becomes the margin $\vomega(\vx)$ if we choose appropriate $\mW^{obj}$ and set $\vb^{obj}=0, k=m$.
Assume we already know the bounds of pre-activation of the $(k-1)$-th layer:
\begin{align}
\small
\begin{split}
\vl^{(k-1)} \leq \vz^{(k-1)} \leq \vu^{(k-1)}, \forall \vx \in \mathbb{B}_p(\vx_0, \epsilon).
\label{eqn:lower_upper_bounds_of_z_k-1}
\end{split}
\end{align}
Next CROWN finds two linear functions of $\vz^{(k-1)}$ to bound $\va^{(k-1)}=\sigma(\vz^{(k-1)})$ in the intervals determined by $\vl^{(k-1)}, \vu^{(k-1)}$.
\begin{align}
\small
\begin{split}
&\vh^{(k-1)L}(\vz^{(k-1)}) \leq \sigma(\vz^{(k-1)}) \leq \vh^{(k-1)U}(\vz^{(k-1)}), \\
&\forall \, \vl^{(k-1)} \leq \vz^{(k-1)}
\leq \vu^{(k-1)},
\end{split}
\label{eqn:h_k-1}
\end{align}
where
\begin{align}
\small
\begin{split}
\vh^{(k-1)L}(\vz^{(k-1)}) &= \vvs^{(k-1)L} * \vz^{(k-1)} + \vt^{(k-1)L},\\
\vh^{(k-1)U}(\vz^{(k-1)}) &= \vvs^{(k-1)U} * \vz^{(k-1)} + \vt^{(k-1)U}.
\end{split}
\label{eqn:bounding_lines_of_k-1_layer}
\end{align}
Here we use ``$*$" to denote elementwise product.
$\vvs^{(k-1)L/U}, \vt^{(k-1)L/U}$ are constant vectors of the same dimension of $\vz^{(k-1)}$.
We use $L,U$ in the superscripts to denote quantities related to lower bounds and upper bounds, respectively.
We also use $L/U$ in the superscripts to denote ``lower bounds or upper bounds".
The linear functions $\vh^{(k-1)L/U}(\vz^{(k-1)})$ are also called bounding lines, as they bound the nonlinear function $\sigma(\vz^{(k-1)})$ in the intervals determined by $\vl^{(k-1)}, \vu^{(k-1)}$.
See Figure~\ref{fig:bounding_line_strategies} for a visualization of different strategies to choose bounding lines.
\begin{figure}[t]
\centering
\subfigure[CROWN vs Relaxed-CROWN-2]{
\label{fig:crown_vs_relax}
\includegraphics[width=0.99\columnwidth]{figures/crown_vs_relaxed_crown.png}} \\
\subfigure[IBP vs LBP]{
\label{fig:ibp_vs_lbp}
\includegraphics[width=0.99\columnwidth]{figures/IBP_vs_LBP.png}}
\caption{Illustration of CROWN, Relaxed-CROWN, IBP and LBP. (a) shows how CROWN and Relaxed-CROWN-2 compute bounds for the $4$-th layer of a $5$ layer network.
(b) shows how IBP and LBP compute bounds layer by layer for a $3$ layer network.}
\vspace{-1em}
\end{figure}
Next CROWN utilizes these bounding lines to build a linear function of $\vz^{(k-1)}$ to lower bound $\mW^{obj} \vz^{(k)} + \vb^{obj}$:
\begin{align}
\small
\begin{split}
\mW^{obj} \vz^{(k)} + \vb^{obj} \geq \mW^{(k,k-1)L} \vz^{(k-1)} + \vb^{(k,k-1)L}.
\end{split}
\end{align}
See the detailed formulas of $\mW^{(k,k-1)L}, \vb^{(k,k-1)L}$ in
{Appendix A.1}.
In the same manner, CROWN builds a linear function of $\vz^{(k-2)}$ to lower bound $\mW^{(k,k-1)L} \vz^{(k-1)} + \vb^{(k,k-1)L}$ if bounds of $\vz^{(k-2)}$ are known.
CROWN repeats this procedure: It back-propagates layer by layer until the first layer $\vz^{(1)}$ as shown in Figure~\ref{fig:crown_vs_relax}:
\vspace{-0.25em}
\begin{align}
\small
\begin{split}
&\mW^{obj} \vz^{(k)} + \vb^{obj} \geq \mW^{(k,k-1)L} \vz^{(k-1)} + \vb^{(k,k-1)L} \geq \cdots\\
&\mW^{(k,k-2)L} \vz^{(k-2)} + \vb^{(k,k-2)L}
\geq \mW^{(k,1)L} \vz^{(1)} + \vb^{(k,1)L}.
\end{split}
\label{eqn:z_1_to_z^k}
\end{align}
\vspace{-0.25em}
Notice $\vz^{(1)} = \mW^{(1)} \vx + \vb^{(1)}$. We plug it in the last term of \eqref{eqn:z_1_to_z^k} and obtain a linear function of $\vx$.
\begin{align}
\small
\begin{split}
\mW^{obj} \vz^{(k)} + \vb^{obj} &\geq \tilde{\mW}^{(k)L} \vx + \tilde{\vb}^{(k)L},
\label{eqn:x_to_z^k}
\end{split}
\end{align}
where $\tilde{\mW}^{(k)L} = \mW^{(k,1)L} \mW^{(1)}, \tilde{\vb}^{(k)L} = \mW^{(k,1)L} \vb^{(1)} + \vb^{(k,1)L}$. Now we can compute the closed-form lower bound of $\mW^{obj} \vz^{(k)} + \vb^{obj}$ through H\"older's inequality:
\begin{align}
\small
\begin{split}
&\mW^{obj} \vz^{(k)} + \vb^{obj}
\geq \tilde{\mW}^{(k)L} \vx + \tilde{\vb}^{(k)L} \geq \\
&\tilde{\mW}^{(k)L} \vx_0+\tilde{\vb}^{(k)L} - \epsilon ||\tilde{\mW}^{(k)L}||_q, \forall \, \vx \in \mathbb{B}_p(\vx_0, \epsilon),
\end{split}
\label{eqn:closed_form_bound_z^k}
\end{align}
where $1/p+1/q=1$ and $||\tilde{\mW}^{(k)L}||_q$ denotes a column vector that is composed of the $q$-norm of every row in $\tilde{\mW}^{(k)L}$.
We can compute a linear function of $\vx$ to upper bound $\mW^{obj} \vz^{(k)} + \vb^{obj}$ in the same manner and then compute its closed-form upper bound. See details in
{Appendix A.1}.
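Once a linear function of $\vx$ is available, the closed-form step of \eqref{eqn:closed_form_bound_z^k} is a one-liner. A minimal NumPy sketch (our naming), covering the common choices $p \in \{1, 2, \infty\}$ of the dual norm:

```python
import numpy as np

def closed_form_lower_bound(W_tilde, b_tilde, x0, eps, p=np.inf):
    """Lower bound of W_tilde @ x + b_tilde over ||x - x0||_p <= eps (Eq. 9),
    via Hoelder's inequality with 1/p + 1/q = 1."""
    if p == np.inf:
        q = 1.0
    elif p == 1:
        q = np.inf
    else:
        q = p / (p - 1.0)
    # q-norm of every row of W_tilde, as in the paper's notation ||.||_q
    row_q_norms = np.linalg.norm(W_tilde, ord=q, axis=1)
    return W_tilde @ x0 + b_tilde - eps * row_q_norms
```

The upper bound is obtained symmetrically by adding $\epsilon\,\|\tilde{\mW}^{(k)U}\|_q$ instead of subtracting.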
Let's review the process of computing bounds for $\mW^{obj} \vz^{(k)} + \vb^{obj}$. It requires us to know the bounds of the previous $(k-1)$ layers: $\vz^{(1)}, \vz^{(2)}, \cdots, \vz^{(k-1)}$. We can fulfill this requirement by starting computing bounds from the first layer $\vz^{(1)}$, and then computing bounds layer by layer in a forward manner until the $(k-1)$-th layer.
Therefore,
the computation complexity of CROWN is of the order $\mathcal{O}(m^2)$.
And its memory cost is of the order $\mathcal{O}(\max_{k \neq v} n_{k} n_{v})$, where $0 \leq k,v \leq m$, and $n_k$ is the number of neurons in the $k$-th layer.
This is because we need to record a weight matrix $\mW^{(k,v)}$ between any two layers as shown in \eqref{eqn:z_1_to_z^k}. This makes CROWN difficult to scale to large networks. To this end, we propose a relaxed version of CROWN in the next paragraph.
\vspace{-1em}
\paragraph{Relaxed CROWN.}
As the same in the above CROWN deduction process, suppose we want to compute bounds for the quantity $\mW^{obj} \vz^{(k)} + \vb^{obj}$. In the original CROWN process, we first compute linear functions of $\vx$ to bound the pre-activation of the first $(k-1)$ layers:
\begin{align}
\small
\begin{split}
\tilde{\mW}^{(v)L} \vx + \tilde{\vb}^{(v)L} \leq \vz^{(v)} \leq \tilde{\mW}^{(v)U} \vx + \tilde{\vb}^{(v)U}, \\
\forall \, \vx \in \mathbb{B}_p(\vx_0, \epsilon), v = 1,2,\cdots, k-1,
\end{split}
\label{eqn:linear_x_to_bound_z^v}
\end{align}
and use these linear functions of $\vx$ to compute closed-form bounds for the first $(k-1)$ layers.
We argue that in the back-propagation process in \eqref{eqn:z_1_to_z^k}, we don't need to back-propagate to the first layer. We can stop at any intermediate layer and plug in the linear functions in \eqref{eqn:linear_x_to_bound_z^v} of this intermediate layer to get a linear function of $\vx$ to bound $\mW^{obj} \vz^{(k)} + \vb^{obj}$. Specifically, assume we decide to back-propagate $v$ layers:
\begin{align}
\small
\begin{split}
\mW^{obj} \vz^{(k)} + \vb^{obj} \geq \mW^{(k,k-1)L} \vz^{(k-1)} + \vb^{(k,k-1)L} \\
\geq \cdots \geq \mW^{(k,k-v)L} \vz^{(k-v)} + \vb^{(k,k-v)L}, v < k.
\end{split}
\label{eqn:z_k-v_to_z^k}
\end{align}
We already know
\begin{align*}
\small
\begin{split}
\tilde{\mW}^{(k-v)L} \vx + \tilde{\vb}^{(k-v)L} \leq \vz^{(k-v)} \leq \tilde{\mW}^{(k-v)U} \vx + \tilde{\vb}^{(k-v)U}.
\end{split}
\end{align*}
We can directly plug it to \eqref{eqn:z_k-v_to_z^k} to obtain a lower bound of $\mW^{obj} \vz^{(k)} + \vb^{obj}$:
\begin{align}
\small
\begin{split}
&\mW^{obj} \vz^{(k)} + \vb^{obj} \geq \\
&relu(\mW^{(k,k-v)L}) [\tilde{\mW}^{(k-v)L} \vx + \tilde{\vb}^{(k-v)L}] + \vb^{(k,k-v)L} \\
& +neg(\mW^{(k,k-v)L})[\tilde{\mW}^{(k-v)U} \vx + \tilde{\vb}^{(k-v)U}].
\label{eqn:relaxed_crown_v}
\end{split}
\end{align}
Now the last line of \eqref{eqn:relaxed_crown_v} is already a linear function of $\vx$ and we can compute the closed-form lower bound of $\mW^{obj} \vz^{(k)} + \vb^{obj}$ in the same manner as shown in \eqref{eqn:closed_form_bound_z^k}. The upper bound of $\mW^{obj} \vz^{(k)} + \vb^{obj}$ can also be computed by back-propagating only $v$ layers in the same gist.
We have shown that we can back-propagate only $v$ layers, instead of back-propagating to the first layer, when computing bounds for the $k$-th layer. In fact, we can back-propagate only $v$ layers when computing bounds for any layer. If the layer index $k$ is less than or equal to $v$, we simply back-propagate to the first layer. In other words, we back-propagate at most $v$ layers when computing bounds for any layer in the process of CROWN. We call this relaxed version of CROWN \textbf{Relaxed-CROWN-$v$}.
See a comparison of CROWN and Relaxed-CROWN in Figure~\ref{fig:crown_vs_relax}.
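The plug-in step of \eqref{eqn:relaxed_crown_v} amounts to routing the positive entries of the back-propagated weight to the lower linear bound and the negative entries to the upper one. A minimal sketch with our own naming, assuming all quantities are NumPy arrays:

```python
import numpy as np

def plug_in_linear_bounds(W_back, b_back, Wl, bl, Wu, bu):
    """Eq. (14): turn the back-propagated bound W_back @ z + b_back into a
    linear function of x, given Wl @ x + bl <= z <= Wu @ x + bu.

    Positive entries of W_back preserve the lower bound of z, while
    negative entries flip it, so they must take the upper bound of z.
    """
    Wp, Wn = np.maximum(W_back, 0.0), np.minimum(W_back, 0.0)  # relu, neg
    W_out = Wp @ Wl + Wn @ Wu
    b_out = Wp @ bl + Wn @ bu + b_back
    return W_out, b_out
```

The result can then be fed directly to the closed-form H\"older step of \eqref{eqn:closed_form_bound_z^k}.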
\vspace{-1em}
\paragraph{Linear Bound Propagation.}
We are particularly interested in the special case of Relaxed-CROWN-$1$, namely, we only back-propagate $1$ layer in the process of CROWN. This leads us to the following theorem.
\begin{theorem}
\label{thm:lbp}
Assume we already know two linear functions of $\vx$ to bound $\vz^{(k-1)}$:
\begin{align}
\small
\begin{split}
\tilde{\mW}^{(k-1)L} \vx + \tilde{\vb}^{(k-1)L} \leq \vz^{(k-1)} \leq \tilde{\mW}^{(k-1)U} \vx + \tilde{\vb}^{(k-1)U}.
\nonumber
\end{split}
\end{align}
We then compute the closed-form bounds $\vl^{(k-1)}, \vu^{(k-1)}$ of $\vz^{(k-1)}$ using these two linear functions, and choose two linear functions $\vh^{(k-1)L}(\vz^{(k-1)}), \vh^{(k-1)U}(\vz^{(k-1)})$ to bound $\va^{(k-1)}=\sigma(\vz^{(k-1)})$ as shown in \eqref{eqn:bounding_lines_of_k-1_layer}.
Then under the condition that
$\vvs^{(k-1)L} \geq 0, \vvs^{(k-1)U} \geq 0$,
$\vz^{(k)} = \mW^{(k)} \sigma(\vz^{(k-1)}) + \vb^{(k)}$ can be bounded by
\begin{align}
\small
\begin{split}
\tilde{\mW}^{(k)L} \vx + \tilde{\vb}^{(k)L} \leq \vz^{(k)} \leq \tilde{\mW}^{(k)U} \vx + \tilde{\vb}^{(k)U},
\end{split}
\end{align}
where
\begin{align}
\small
\begin{split}
\tilde{\mW}^{(k)L}
&= [relu(\mW^{(k)}) * \vvs^{(k-1)L}] \tilde{\mW}^{(k-1)L}
\\&\;\;\;\;+ [neg(\mW^{(k)}) * \vvs^{(k-1)U}] \tilde{\mW}^{(k-1)U}, \\
\tilde{\vb}^{(k)L}
&= \vb^{(k)} + [relu(\mW^{(k)}) * \vvs^{(k-1)L}] \tilde{\vb}^{(k-1)L} \\
&\;\;\;\;+ [neg(\mW^{(k)}) * \vvs^{(k-1)U}] \tilde{\vb}^{(k-1)U} \\
&\;\;\;\;+ relu(\mW^{(k)}) \vt^{(k-1)L} + neg(\mW^{(k)}) \vt^{(k-1)U},
\end{split}
\end{align}
where the operator ``$*$'' between a matrix $\mW$ and a vector $\vvs$ is defined as $(\mW*\vvs)_{ij} = \mW_{ij} \vvs_j$.
\end{theorem}
We refer readers to
{Appendix A.3} for the formulas of $\tilde{\mW}^{(k)U}, \tilde{\vb}^{(k)U}$ and the proof of Theorem~\ref{thm:lbp}.
Note that
the condition $\vvs^{(k-1)L} \geq 0, \vvs^{(k-1)U} \geq 0$
in Theorem~\ref{thm:lbp} is not necessary. We impose this condition because it simplifies the expressions of $\tilde{\mW}^{(k)L}$ and $\tilde{\vb}^{(k)L}$, and it generally holds for common choices of bounding lines.
The significance of Theorem~\ref{thm:lbp} is that it allows us to compute bounds starting from the first layer $\vz^{(1)}$, which can be bounded by $\mW^{(1)} \vx + \vb^{(1)} \leq \vz^{(1)} \leq \mW^{(1)} \vx + \vb^{(1)}$, and then compute bounds layer by layer in a forward manner until the final output just like IBP.
The computation complexity is reduced to $\mathcal{O}(m)$ and memory cost is reduced to $\mathcal{O}(n_0 \max\{n_1, n_2, \cdots, n_m\})$,
since we only need to record a matrix $\tilde{\mW}^{(k)}$ from the input $\vx$ to every intermediate layer $\vz^{(k)}$. We call this method \textbf{Linear Bound Propagation (LBP)}, which is equivalent to Relaxed-CROWN-$1$.
See a comparison of LBP and IBP in Figure~\ref{fig:ibp_vs_lbp}.
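A single LBP step from Theorem~\ref{thm:lbp} can be sketched as follows (lower bound only; the naming is ours, and we assume $\vvs^{(k-1)L}, \vvs^{(k-1)U} \geq 0$ as in the theorem):

```python
import numpy as np

def lbp_lower_layer(W, b, Wl_prev, bl_prev, Wu_prev, bu_prev, sL, tL, sU, tU):
    """Propagate linear-in-x bounds of z^{(k-1)} through the bounding lines
    s * z + t and the affine layer (W, b), yielding the lower linear bound
    of z^{(k)} from Theorem 1."""
    Wp, Wn = np.maximum(W, 0.0), np.minimum(W, 0.0)   # relu(W), neg(W)
    A, B = Wp * sL, Wn * sU        # (W * s)_{ij} = W_{ij} s_j via broadcasting
    Wl = A @ Wl_prev + B @ Wu_prev
    bl = b + A @ bl_prev + B @ bu_prev + Wp @ tL + Wn @ tU
    return Wl, bl
```

Starting from the exact bounds $\mW^{(1)} \vx + \vb^{(1)}$ of the first layer, applying this step layer by layer gives the forward pass illustrated in Figure~\ref{fig:ibp_vs_lbp}.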
As expected, there is no free lunch. As we will show in the next section, the reduction of computation and memory cost of LBP makes it less tight than CROWN.
Although developed from a different perspective, we find LBP similar to the forward mode in the work~\cite{xu2020automatic}. See a detailed comparison between them in Appendix A.3.
Zhang~\etal \cite{zhang2020towards} propose to compute bounds for the first $(m-1)$ layers using IBP and then use CROWN to compute bounds for the last layer to obtain tighter bounds of the last layer. The resulting method is named CROWN-IBP. In the same gist, we can use LBP to compute bounds for the first $(m-1)$ layers and then use CROWN to compute bounds for the last layer. We call this method \textbf{CROWN-LBP}.
\section{Relationship of IBP, LBP and CROWN}
\vspace{-0.5em}
\label{sec:relationship_ibp_lbp_crown}
In Section~\ref{sec:relaxed_crown}, we develop a relaxed version of CROWN, LBP.
In this section, we study the relationship between IBP, LBP and CROWN, and investigate why CROWN gives looser bounds than IBP on IBP trained networks~\cite{zhang2020towards}.
First, we prove that IBP is a special case of CROWN and LBP in which the bounding lines are chosen as constants, as shown in Figure~\ref{fig:constant_bdl}:
\begin{align}
\small
\begin{split}
\vh^{(k)L}(\vz^{(k)}) = \sigma(\vl^{(k)})&,
\vh^{(k)U}(\vz^{(k)}) = \sigma(\vu^{(k)}),\\
k=1,2,&\cdots,m-1.
\end{split}
\label{eqn:IBP_bounding_lines}
\end{align}
In other words, CROWN and LBP degenerate to IBP when they choose constant bounding lines for every neuron in every layer. See the proof of this conclusion in
{Appendix A.5}.
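For ReLU, the constant and tight strategies of Figure~\ref{fig:bounding_line_strategies} can be sketched per neuron as below. This is our reading of the figure (the precise definition of the tight strategy is in Appendix A.7): exact lines for dead and alive neurons and, for an unstable neuron, the chord as upper bounding line and the zero line as lower bounding line.

```python
import numpy as np

def relu_bounding_lines(l, u, strategy="tight"):
    """Per-neuron bounding lines (sL, tL, sU, tU) for ReLU on [l, u].

    'constant' reproduces IBP: h^L = relu(l), h^U = relu(u) (slope 0).
    'tight' (our reading of Fig. 1b): exact lines for dead (u <= 0) and
    alive (l >= 0) neurons; for unstable neurons the upper line is the
    chord through (l, 0) and (u, u), the lower line is zero.
    """
    l, u = np.asarray(l, float), np.asarray(u, float)
    if strategy == "constant":
        zeros = np.zeros_like(l)
        return zeros, np.maximum(l, 0.0), zeros, np.maximum(u, 0.0)
    sL = (l >= 0).astype(float)          # identity slope only when alive
    tL = np.zeros_like(l)
    unstable = (l < 0) & (u > 0)
    denom = np.where(unstable, u - l, 1.0)        # avoid division by zero
    sU = np.where(unstable, u / denom, sL)        # chord slope if unstable
    tU = np.where(unstable, -l * u / denom, 0.0)  # chord intercept
    return sL, tL, sU, tU
```

One can check that for an unstable neuron the upper line matches ReLU exactly at both endpoints $l$ and $u$, so it is tighter than the constant line $\sigma(u)$ everywhere in between.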
On the other hand, Lyu \etal~\cite{lyu2020fastened} prove tighter bounding lines lead to tighter bounds in the process of CROWN, where $\tilde{\vh}_i^{(k)L/U}(\vz_i^{(k)})$ is defined to be tighter than $\hat{\vh}_i^{(k)L/U}(\vz_i^{(k)})$ in the interval $[\vl^{(k)}_i, \vu^{(k)}_i]$ if
\begin{align}
\small
\begin{split}
\hat{\vh}_i^{(k)L}(\vz_i^{(k)}) \leq \tilde{\vh}_i^{(k)L}(\vz_i^{(k)}), &\tilde{\vh}_i^{(k)U}(\vz_i^{(k)}) \leq \hat{\vh}_i^{(k)U}(\vz_i^{(k)}), \\
\forall \vz_i^{(k)} \in& [\vl^{(k)}_i, \vu^{(k)}_i].
\end{split}
\end{align}
We prove that the same holds for LBP in
{Appendix A.3}. Therefore, if CROWN and LBP adopt the
tight strategy in Figure~\ref{fig:zero_bdl} to choose bounding lines, which is guaranteed to be tighter than the constant bounding lines
in a specified interval, CROWN and LBP are guaranteed to give tighter bounds than IBP.
We formalize this conclusion and include conclusions for CROWN-IBP and CROWN-LBP in the following theorem.
\begin{figure}[tb]
\centering
\includegraphics[width=0.99\columnwidth]{figures/tightness_comparison.png}
\caption{Tightness comparison of IBP, CROWN-IBP, LBP, CROWN-LBP, CROWN on a normally trained MNIST classifier. ``Bound Range'' is the mean of
$\vu^{(m)}-\vl^{(m)}$. The mean is taken over the $10$ output logits and averaged over $100$ test images in MNIST. ``Epsilon'' is the radius of the $l_{\infty}$ ball. ``Adap'' and ``Tight'' are the adaptive and tight strategies as shown in Figure~\ref{fig:bounding_line_strategies}.
}
\label{fig:compare_ibp_lbp_crown_on_normal_model}
\vspace{-1em}
\end{figure}
\vspace{-0.5em}
\begin{theorem}
\label{thm:compare_5_methods}
Assume the closed-form bounds of the last layer computed by IBP, CROWN-IBP, LBP, CROWN-LBP, and CROWN are $\vl^{(m)}_I$, $\vu^{(m)}_I$; $\vl^{(m)}_{CI}$, $\vu^{(m)}_{CI}$; $\vl^{(m)}_L$, $\vu^{(m)}_L$; $\vl^{(m)}_{CL}$, $\vu^{(m)}_{CL}$; $\vl^{(m)}_C$, $\vu^{(m)}_C$, respectively. And CROWN-IBP, LBP, CROWN-LBP, CROWN adopt the
tight strategy to choose bounding lines as shown in Figure~\ref{fig:zero_bdl}. Then we have
\begin{align}
\small
\begin{split}
\vl^{(m)}_I &\leq \{\vl^{(m)}_{L} , \vl^{(m)}_{CI}\} \leq \vl^{(m)}_{CL} \leq \vl^{(m)}_C, \\
\vu^{(m)}_I &\geq \{\vu^{(m)}_{L} , \vu^{(m)}_{CI}\} \geq \vu^{(m)}_{CL} \geq \vu^{(m)}_C,
\end{split}
\end{align}
where the sets in the inequalities mean that the inequalities hold true for any element in the sets.
\end{theorem}
See proof of Theorem~\ref{thm:compare_5_methods} in
{Appendix A.6}.
Now we can answer the question proposed at the beginning of this section.
The reason that CROWN gives looser bounds than IBP \cite{zhang2020towards} is that CROWN uses the
adaptive strategy as shown in Figure~\ref{fig:adaptive_bdl_1} and~\ref{fig:adaptive_bdl_2} to choose bounding lines by default.
The lower bounding line chosen in the adaptive strategy for an unstable neuron is not always tighter than the one chosen by the constant strategy adopted by IBP.
Zhang \etal~\cite{zhang2018crown} empirically show the
adaptive strategy gives tighter bounds for normally trained networks.
An intuitive explanation is that this strategy minimizes the area between the lower and upper bounding lines in the interval, but there is no guarantee for this intuition.
On the other hand, for IBP trained networks, the loss is optimized at the point where bounding lines are chosen as constants. Therefore, when verifying IBP trained networks with LBP or CROWN, we should choose the same constant bounding lines or tighter ones, which is exactly what the
tight strategy does.
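The strategies differ only in the lower bounding line chosen for an unstable neuron ($l<0<u$); the upper bounding line is the chord in all cases. A small illustrative sketch (the paper's tight strategy is specified in Figure~\ref{fig:zero_bdl}; here "zero" just denotes the constant-zero lower line discussed above, and the adaptive rule is the standard CROWN default):

```python
def relu_lower_line(l, u, strategy):
    """Lower bounding line (slope, intercept) for ReLU on an unstable
    pre-activation interval l < 0 < u."""
    assert l < 0 < u
    if strategy == "adaptive":
        # CROWN's default: slope 0 or 1, whichever side of the interval
        # is larger, so the line hugs ReLU on that side.
        return (1.0, 0.0) if u >= -l else (0.0, 0.0)
    if strategy == "zero":
        # Constant-zero line, always valid since ReLU(x) >= 0.
        return (0.0, 0.0)
    raise ValueError(strategy)

def relu_upper_line(l, u):
    """Tightest linear upper bound: the chord through (l, 0) and (u, u)."""
    s = u / (u - l)
    return (s, -s * l)
```

For $l=-1$, $u=3$ the adaptive rule picks slope $1$, whose line is negative on $[-1,0)$, while the constant-zero line is not; neither dominates the other, which is exactly why the adaptive choice can hurt on IBP trained networks.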
\begin{table}[tb]
\centering
\caption{Mean lower bound of the margin defined in~\eqref{eqn:margin} and verified errors obtained by IBP, CROWN-IBP (C.-IBP), LBP, and CROWN-LBP (C.-LBP) on an IBP trained CIFAR-10 classifier. The network is trained with $\epsilon=8.8/255$ and tested with $\epsilon=8/255$ ($l_{\infty}$ norm).
Results are taken over $100$ test images.
}
\label{tbl:compare_ibp_lbp_crown_on_robust_model}
\resizebox{1\columnwidth}{!}{
\begin{tabular}{c|c|c|c|c}
\noalign{\hrule height 0.75pt}
Adaptive & IBP & C.-IBP & LBP & C.-LBP \\
Verified Err(\%) & 70.10 & 85.66 & 100 & 99.99\\
Lower Bound & 2.1252 & -12.016 & -2.4586E5 & -1.5163E5\\
\hline
Tight & IBP & C.-IBP & LBP & C.-LBP \\
Verified Err(\%) & 70.10 & 70.01 & 70.05 & 69.98\\
Lower Bound & 2.1252 & 2.1520 & 2.1278 & 2.1521\\
\noalign{\hrule height 0.75pt}
\end{tabular}
}
\vspace{-1em}
\end{table}
We conduct experiments to verify our theory.
We first compare IBP, LBP, and CROWN on a normally trained MNIST classifier (see its detailed structure in
{Appendix B.1}).
Results are shown in Figure~\ref{fig:compare_ibp_lbp_crown_on_normal_model}. The average verification time for a single image with IBP, CROWN-IBP, LBP, CROWN-LBP, and CROWN is 0.006s, 0.011s, 0.027s, 0.032s, and 0.25s, respectively, tested on one NVIDIA GeForce GTX TITAN X GPU.
We can see that LBP is tighter than IBP while being faster than CROWN,
and that the adaptive strategy usually obtains tighter bounds than the tight strategy.
See more comparisons of these methods in
{Appendix B.2}.
Next, we compare them on an IBP trained network.
The network we use is called DM-large (see its detailed structure in
{Appendix B.1}), which is the same model as in the works~\cite{zhang2020towards, gowal2019scalable}.
Results are shown in Table~\ref{tbl:compare_ibp_lbp_crown_on_robust_model}.
We do not test CROWN on this network because it exceeds the GPU memory (12 GB) and takes about half an hour to verify a single image on one Intel Xeon E5-2650 v4 CPU.
We can see that CROWN-IBP, LBP, and CROWN-LBP give worse verified errors than IBP when adopting the adaptive strategy to choose bounding lines, but give better results when adopting the tight strategy, as guaranteed by Theorem~\ref{thm:compare_5_methods}.
However, the improvement of LBP and CROWN-LBP over IBP and CROWN-IBP is small compared with that on the normally trained network. We investigate this phenomenon in the next section.
\section{Parameterized Ramp Activation}
\vspace{-0.5em}
This section starts by investigating the phenomenon discovered in Section~\ref{sec:relationship_ibp_lbp_crown}: why the improvement of LBP and CROWN-LBP over IBP and CROWN-IBP is so much smaller on the IBP trained network than on the normally trained one.
Study of this phenomenon inspires us to design a new activation function to achieve
lower verified errors.
\vspace{-1em}
\paragraph{Investigating the limited improvement of LBP.}
We argue that the limited improvement of LBP and CROWN-LBP is because most neurons are dead in IBP trained networks. Recall that we define three statuses of a ReLU neuron according to the range of its input in Figure~\ref{fig:bounding_line_strategies}: Dead, Alive, and Unstable.
We demonstrate neuron status in each layer of an IBP trained network in Figure~\ref{fig:neuron_status_comparison}.
We can see most neurons are dead. In contrast, we find that most neurons (more than 95\%) are unstable in a normally trained network.
For unstable neurons, the bounding lines in the tight strategy adopted by LBP and CROWN are tighter than the constant bounding lines chosen by IBP. This explains why LBP and CROWN are several orders of magnitude tighter than IBP for a normally trained network.
However, for dead neurons, the bounding lines chosen by LBP and CROWN are the same as those chosen by IBP, which explains the limited improvement of LBP and CROWN-LBP on IBP trained networks. We conduct experiments in
{Appendix B.3} to further verify this explanation.
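The classification of neurons into the three statuses from their pre-activation bounds is mechanical; a minimal sketch (function names are ours, not the paper's):

```python
import numpy as np

def relu_status(l, u):
    """Classify each ReLU neuron from its pre-activation bounds [l, u]:
    dead (u <= 0), alive (l >= 0), or unstable (l < 0 < u)."""
    return np.where(u <= 0, "dead", np.where(l >= 0, "alive", "unstable"))

def status_percentages(l, u):
    """Percentage of each status within one layer."""
    s = relu_status(np.asarray(l), np.asarray(u))
    return {k: 100.0 * np.mean(s == k) for k in ("dead", "alive", "unstable")}
```

Running `status_percentages` per layer on the bounds computed by IBP is what produces plots like Figure~\ref{fig:neuron_status_comparison}.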
\begin{figure}[tb]
\centering
\includegraphics[width=1\columnwidth]{figures/IBP_model_neuron_status.png}
\caption{Neuron status of an IBP trained DM-large network at $\epsilon=2.2/255$ ($l_\infty $ norm) on CIFAR-10.
Bounds are computed using IBP at $\epsilon=2/255$. The horizontal axis is the layer index, and the vertical axis is the percentage of every neuron status in the layer. The percentage is averaged over $100$ CIFAR-10 test images.}
\label{fig:neuron_status_comparison}
\vspace{-1em}
\end{figure}
It seems reasonable that most neurons are dead in IBP trained networks, since dead neurons can block perturbations from the input, which makes the network more robust.
However, we argue that there are two major drawbacks caused by this phenomenon:
First, gradients from both the normal cross-entropy loss and the IBP bound loss in~\eqref{eqn:IBP_loss} cannot back-propagate through dead neurons. This may prevent the network from learning at some point in the training process.
Second, it restricts the representation capability of the network, since most activations are $0$ in intermediate layers.
\vspace{-2em}
\paragraph{Parameterized Ramp function.}
To mitigate these two problems, one simple idea is to use LeakyReLU instead of ReLU during training. We will consider this approach as the baseline and compare with it.
We propose to use a \textbf{Parameterized Ramp (ParamRamp)} function to achieve better results. The Parameterized Ramp function can be seen as a LeakyReLU function whose right part is bent flat at some point $r$, as shown in Figure~\ref{fig:param_ramp}. The parameter $r$ is tunable for every neuron. We include it in the parameters of the network and optimize it during training. The intuition behind this activation function is that it provides another robust region (where the function value changes very slowly with respect to the input) on its right part.
This right part has function values that are greater than $0$ and tunable, in contrast to the left robust region, whose function values are close to $0$.
Therefore, during the IBP training process, a neuron has two options to become robust: to become either left dead or right dead, as shown in Figure~\ref{fig:param_ramp}.
This could increase the representation capability of the network while allowing it to become robust.
We compare effects of ReLU, LeakyReLU and ParamRamp functions in terms of training verifiably robust networks in the next section.
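The shape described above can be sketched as follows (the exact parameterization used in the paper is defined by Figure~\ref{fig:param_ramp}; clamping the bending point at zero is our assumption for this illustration):

```python
import numpy as np

def param_ramp(x, r, eta=0.01):
    """Parameterized Ramp: slope eta for x < 0 (LeakyReLU-like left part),
    identity on [0, r], and slope eta again for x > r (flattened right part).
    r is a per-neuron trainable bending point (sketch)."""
    r = np.maximum(r, 0.0)  # keep the bending point non-negative (assumption)
    return np.where(x < 0, eta * x,
           np.where(x > r, r + eta * (x - r), x))
```

With $\eta=0$ the right part is exactly flat, so inputs beyond $r$ map to the tunable, nonzero value $r$: this is the "right dead" robust region discussed above.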
\begin{figure}[tb]
\centering
\includegraphics[width=0.9\columnwidth]{figures/param_ramp_with_neuron_status.png}
\caption{Parameterized Ramp (ParamRamp) function. The bending point $r$ is tunable. We can define six statuses of a neuron according to the input range $[l,u]$, as shown on the left side. See
{Appendix A.7} for how to choose bounding lines for ParamRamp.}
\label{fig:param_ramp}
\vspace{-1em}
\end{figure}
\section{Experiments}
\begin{table*}[t]
\centering
\vspace{-1.75em}
\caption{\small{Errors of IBP trained and CROWN-IBP trained networks with different activations on CIFAR-10. We report errors on clean images (Clean: percentage of images wrongly classified), IBP verified errors (IBP), CROWN-LBP verified errors (C.-LBP), and PGD attack errors (PGD: percentage of images
successfully attacked).
Experiments are conducted on $3$ variants of ParamRamp: Ramp(0), ParamRamp with $\eta=0$; Ramp(0.01), $\eta=0.01$; Ramp(0.01$\rightarrow$0), $\eta$ starts from $0.01$ and gradually decreases to $0$ during training.
$3$ variants of ReLU are similarly designed,
\eg, ReLU(0.01) means LeakyReLU with leakage slope 0.01.
Networks are trained at $\epsilon=2.2/255, 8.8/255$ and evaluated at $\epsilon=2/255, 8/255$ respectively.
Results of ReLU(0) are directly copied from the original works~\cite{gowal2019scalable, zhang2020towards}. We compute C.-LBP verified errors based on our re-run networks for these experiments. Therefore, C.-LBP verified errors are not comparable to IBP verified errors on these networks.
We also report results run on large ReLU networks in the work~\cite{xu2020automatic} at the right side.}}
\label{tbl:cifar}
\resizebox{2.09\columnwidth}{!}{
\begin{tabular}{c|l||c|c|c|c||c|c|c|c||c|c|c|c}
\noalign{\hrule height 0.9pt}
\multirow{2}{*}{\makecell{Training\\Method}} & \multirow{2}{*}{Activation} & \multicolumn{4}{c||}{Errors (\%) for $\epsilon=2/255$} & \multicolumn{4}{c||}{Errors (\%) for $\epsilon=8/255$} & \multicolumn{4}{c}{Errors (\%) for $\epsilon=8/255$} \\
\cline{3-14}
& & Clean & IBP & C.-LBP & PGD & Clean & IBP & C.-LBP & PGD & Model & Clean & IBP & PGD \\
\noalign{\hrule height 0.8pt}
\multirow{6}{*}{IBP} & ReLU(0) & 39.22 & 55.19 & 54.38 & 50.40 & 58.43 & 70.81 & 69.98 & 68.73 & \multirow{6}{*}{\makecell{CNN-7+BN\\Densenet\\WideResNet\\ResNeXt}} & \multirow{6}{*}{\makecell{57.95\\57.21\\58.07\\\textbf{56.32}}} & \multirow{6}{*}{\makecell{\textbf{69.56}\\69.59\\70.04\\70.41}} & \multirow{6}{*}{\makecell{\textbf{67.10}\\67.75\\67.23\\67.55}} \\
& ReLU(0.01) & \textbf{32.3} & 52.02 & 47.26 & 44.22 & 55.16 & 69.05 & 68.45 & 66.05 & & & & \\
& ReLU(0.01$\rightarrow$0)& 34.6 & 53.77 & 51.62 & 46.71 & 55.62 & 68.32 & 68.22 & 65.29 & & & & \\
\cline{2-10}
& Ramp(0) & 36.47 & 53.09 & 52.28 & 46.52 & 56.32 & 68.89 & 68.82 & 63.89 & & & & \\
& Ramp(0.01) & 33.45 & 48.39 & \textbf{47.19} & 43.87 & \textbf{54.16} & 68.26 & 67.78 & 65.06 & & & & \\
& Ramp(0.01$\rightarrow$0) & 34.17 & \textbf{47.84} & 47.46 & \textbf{42.74} & 55.28 & \textbf{67.26} & \textbf{67.09} & \textbf{60.39} & & & & \\
\noalign{\hrule height 0.8pt}
\multirow{6}{*}{\makecell{CROWN\\-IBP}} & ReLU(0) & 28.48 & 46.03 & 45.04 & 40.28 & 54.02 & 66.94 & 66.69 & 65.42 & \multirow{6}{*}{\makecell{CNN-7+BN\\Densenet\\WideResNet\\ResNeXt}} & \multirow{6}{*}{\makecell{\textbf{53.71}\\56.03\\53.89\\53.85}} & \multirow{6}{*}{\makecell{\textbf{66.62}\\67.57\\67.77\\68.25}} & \multirow{6}{*}{\makecell{64.31\\65.09\\64.42\\\textbf{64.16}}} \\
& ReLU(0.01) & 28.49 & 46.68 & 44.09 & 39.29 & 55.18 & 68.54 & 68.13 & 66.41 & & & & \\
& ReLU(0.01$\rightarrow$0)& \textbf{28.07} & 46.82 & 44.40 & 39.29 & 63.88 & 72.28 & 72.13 & 70.34 & & & & \\
\cline{2-10}
& Ramp(0) & 28.48 & \textbf{45.67} & 44.03 & 39.43 & 52.52 & 65.24 & 65.12 & 62.51 & & & & \\
& Ramp(0.01) & 28.63 & 46.17 & 44.28 & 39.61 & 52.15 & 66.04 & 65.75 & 63.85 & & & & \\
& Ramp(0.01$\rightarrow$0)& 28.18 & 45.74 & \textbf{43.37} & \textbf{39.17} & \textbf{51.94} & \textbf{65.19} & \textbf{65.08} & \textbf{62.05} & & & & \\
\noalign{\hrule height 0.9pt}
\end{tabular}
}
\vspace{-0.5em}
\end{table*}
\begin{table}[t]
\centering
\caption{Comparison of ParamRamp and ReLU on MNIST dataset. Notations are the same as those in Table~\ref{tbl:cifar}. The networks are both trained and tested at $\epsilon=0.4$. See more experiments tested with different activation functions and at different $\epsilon$ in
{Appendix B.5}.}
\label{tbl:mnist}
\resizebox{1\columnwidth}{!}{
\begin{tabular}{c|l||c|c|c|c}
\noalign{\hrule height 0.9pt}
\multirow{2}{*}{\makecell{Training\\Method}} & \multirow{2}{*}{Activation} & \multicolumn{4}{c}{Errors (\%) for $\epsilon=0.4$} \\
\cline{3-6}
& & Clean & IBP & C.-LBP & PGD \\
\hline
\multirow{2}{*}{IBP} & ReLU(0) & 2.74 & 14.80 & 16.13 & 11.14 \\
& Ramp(0.01$\rightarrow$0) & 2.16 & {10.90} & {10.88} & 6.59 \\
\hline
\multirow{2}{*}{\makecell{CROWN\\-IBP}} & ReLU(0) & 2.17 & 12.06 & 11.90 & 9.47 \\
& Ramp(0.01$\rightarrow$0) & 2.36 & {10.68} & {10.61} & 6.61\\
\noalign{\hrule height 0.9pt}
\end{tabular}
}
\vspace{-2em}
\end{table}
In this section, we conduct experiments to train verifiably robust networks using our proposed activation function, ParamRamp, and compare it with ReLU and LeakyReLU.
We use the loss defined in~\eqref{eqn:IBP_loss} and consider $l_{\infty}$ robustness in all experiments.
The experiments are conducted on $3$ datasets: MNIST, CIFAR-10, and Tiny-ImageNet.
For the MNIST and CIFAR-10 datasets, we use the same DM-large network, and follow the same IBP training and CROWN-IBP training procedures as in the works~\cite{gowal2019scalable, zhang2020towards}.
For the Tiny-ImageNet dataset, we follow the training procedure in the work~\cite{xu2020automatic}. The networks we train on Tiny-ImageNet are a $7$-layer CNN with Batch Normalization layers (CNN-7+BN) and a WideResNet.
We refer readers to the original works
or
{Appendix B.4} for detailed experimental set-ups and network structures.
During the training of ParamRamp networks, it is important to initialize the tunable parameters $r$ appropriately. We also find ParamRamp networks have overfitting problems in some cases. See how we initialize $r$ and solve the overfitting problem in Appendix B.4.
After training, we use IBP and CROWN-LBP with the tight strategy to compute verified errors. IBP verified errors allow us to compare results with previous works, and CROWN-LBP gives us the best verified errors as guaranteed in Theorem~\ref{thm:compare_5_methods}.
CROWN is not considered because verifying a single image on the networks we use exceeds GPU memory (12 GB), and running it on a CPU is extremely slow. We also use 200-step PGD attacks~\cite{madry2018towards} with 10 random starts to empirically evaluate the robustness of the networks.
Results on CIFAR-10 and MNIST datasets are presented in Table~\ref{tbl:cifar} and Table~\ref{tbl:mnist}, respectively.
We can see that networks with ParamRamp activation achieve better verified errors, clean errors, and PGD attack errors than ReLU networks in almost all settings.
Moreover, our proposed bound computation method, CROWN-LBP, always provides lower verified errors than IBP.
See more experiments for networks of different structures in
{Appendix B.5}.
For Tiny-ImageNet dataset, the CNN-7+BN and WideResNet networks with ParamRamp activation achieve $84.99\%$ and $82.94\%$ IBP verified errors at $\epsilon=1/255$, respectively. To the best of our knowledge, $82.94\%$ is the best verified error at $\epsilon=1/255$ ever achieved on Tiny-ImageNet. See a comparison with ReLU networks from the work~\cite{xu2020automatic} in
{Appendix B.5}.
ParamRamp activation brings additional parameters to the network, so a natural concern is its computational overhead compared with ReLU networks. On MNIST, we find the average training time per epoch of a ParamRamp network is $1.09$ times that of a ReLU network in IBP training, and $1.51$ times in CROWN-IBP training.
We observe an overhead of similar level on CIFAR-10 and Tiny-ImageNet datasets. See a full comparison in
{Appendix B.5}.
Comparing ParamRamp with ReLU on the same network may not be convincing enough to demonstrate the superiority of ParamRamp, as it has additional parameters.
We compare it with larger size ReLU networks trained in the work~\cite{xu2020automatic}.
We report their results on CNN-7+BN, Densenet~\cite{huang2017densely}, WideResNet~\cite{zagoruyko2016wide} and ResNeXt~\cite{xie2017aggregated} in the right part of Table~\ref{tbl:cifar}.
Despite being larger than the DM-large network with ParamRamp activation, these ReLU networks still cannot obtain lower IBP verified errors than our model.
We think this is because ParamRamp activation brings more diversity of neuron status, which increases the representation capability of the network.
Recall that most neurons are dead in IBP trained ReLU networks as shown in Figure~\ref{fig:neuron_status_comparison}.
We present neuron status of an IBP trained ParamRamp network in Figure~\ref{fig:neuron_status_of_param_ramp}.
We can see that although many neurons are still left dead, a considerable fraction of neurons are right dead.
Note that the activation values of right dead neurons are nonzero and tunable. This allows the network to become robust while preserving representation capability.
See more neuron status comparisons of ReLU and ParamRamp networks in Appendix B.5.
\begin{figure}[t]
\centering
\includegraphics[width=1\columnwidth]{figures/neuron_status_of_IBP_trained_param_ramp_model_2.png}
\caption{Neuron status of an IBP trained network on CIFAR-10 with ParamRamp activation. We only present the left dead and right dead statuses, since most neurons are in these two statuses. The network is trained at $\epsilon=2.2/255$ ($l_\infty$ norm) and bounds are computed using IBP at $\epsilon=2/255$.}
\label{fig:neuron_status_of_param_ramp}
\vspace{-1.5em}
\end{figure}
\vspace{-0.5em}
\section{Conclusion}
\vspace{-0.5em}
\label{sec:conclusion}
We propose a new verification method, LBP, which has better scalability than CROWN while being tighter than IBP.
We further prove CROWN and LBP are always tighter than IBP when choosing appropriate bounding lines, and can be used to verify IBP trained networks to obtain lower verified errors.
We also propose a new activation function, ParamRamp, to mitigate the problem that most neurons become dead in ReLU networks during IBP training.
Extensive experiments demonstrate that networks with ParamRamp activation outperform ReLU networks and achieve state-of-the-art $l_\infty$ verified robustness on the MNIST, CIFAR-10 and Tiny-ImageNet datasets.
\vspace{-1em}
\paragraph{Acknowledgement.}
This work is partially supported by General Research Fund (GRF) of Hong Kong (No. 14203518), Collaborative Research Grant from SenseTime Group (CUHK Agreement No. TS1712093, and No. TS1711490), and the Shanghai Committee of Science and Technology, China (Grant No. 20DZ1100800). The authors thank Xudong Xu for useful discussions.
{\small
\bibliographystyle{ieee_fullname}
\section{Introduction}
Density Functional Theory \cite{DFT0,DFT1} (DFT) is undoubtedly one of the biggest success stories of condensed matter theory, since it has made realistic electronic structure calculations possible for a huge range of materials, and since it has led to numerous insights \cite{Kohn1999,martin2004}. Two main problems had to be overcome in order to make DFT applicable in practice: first, it was necessary to find reliable approximations for the total energy as functional of the ground state density; second, an efficient way to determine the ground state density itself was needed. The solution to both problems relies on the approach of Kohn and Sham \cite{DFT1}, where the interacting system is mapped onto an auxiliary system of non-interacting electrons with an effective Kohn-Sham (KS) potential that is designed to yield the ground state density. The exchange-correlation (xc) contribution to this potential, \mbox{{$v_{\rm xc}({\bf r})$}} , and to the xc energy density per particle, $\epsilon_{\rm xc}({\bf r})$, is unknown in most systems. The initial breakthrough came with the Local Density Approximation \cite{DFT1} (LDA). This approximation takes the energy density, and hence \mbox{{$v_{\rm xc}({\bf r})$}} , locally from the homogeneous electron gas (HEG), where it was calculated using Quantum Monte Carlo \cite{Cepe1980} (QMC). However, to find approximations that are systematically better than the LDA has turned out to be exceedingly difficult \cite{Medvedev2017,Becke2022}. Today, in spite of the developments of successful gradient corrections and sophisticated approximations tailored by exact constraints \cite{PerdewWang1986,PBE1996,Perdew2008,Sun2015}, one may say that there is still no generally established multi-purpose approximation beyond the LDA. One of the difficulties is that it is not easy to benchmark \mbox{{$v_{\rm xc}$}} . First, the Kohn-Sham potential is not an observable by itself, which means, there are no experimental data to compare with. 
Second, since \mbox{{$v_{\rm xc}$}}\ is the potential of an \textit{auxiliary} system, besides the density, any other observables calculated in the KS system can in principle be arbitrarily far from measured values. The prototype example for this dilemma is the KS eigenvalue band gap\cite{Perdew1982,ShamSchlueter1983,Perdew1983}. For example in the LDA, this KS gap is in general much smaller than the fundamental gap extracted from direct and inverse photoemission \cite{Perdew1981,Schilfgaarde2006}. Since there is no reason for the gap of the auxiliary KS system to equal the photoemission gap of the true material\cite{DFT1,Sham1966}, the respective errors of the LDA and of the KS system itself have been a matter of debate for many years. Results derived from many-body perturbation theory to first order in the screened Coulomb interaction \cite{Godby1986,Godby87,Godby88,Niquet2004,Gruning2006,GruningJCP2006,Kotani1998,Klimes2014,Riemelmoser2021} gave evidence that the error in simple semiconductors is mainly due to the auxiliary nature of the KS system, reflected as a missing contribution that stems from a derivative discontinuity\cite{Perdew1982,ShamSchlueter1983,Perdew1983}, rather than due to the LDA. However, these are merely estimates based on perturbation theory, and the numerically exact KS potential and KS band gap of solids remain to date unknown.
More information is available in low-dimensional, often finite, systems, where ways have been proposed to invert the KS equations and find the KS potential starting from a given density \cite{PhysRevA.49.2421,Zhao1994,Wu2003,Peirs2003}. This density could be determined by analytical or numerical methods. This has given precious insight about the potential and observables in the KS system \cite{Almbladh1984,Aryasetiawan1988,Goerling1992,Knorr1992,Knorr1994,Umrigar1994,Gritsenko1995,Tozer1996,Helbig2009,Burke1D2012,Varsano2014,Hollins2016,Hodgson2016,Hodgson2017,Wetherell2019,Nam2020}. For example, in the helium atom the exact HOMO lies about 25 eV below the vacuum level, and an additional electron is unbound. The exact KS HOMO-LUMO gap, instead, turns out to be only 20.3 eV, since KS binds the LUMO\cite{Umrigar1994,Savin1998,Li2019}. On top of this underestimate, the LDA reduces the HOMO-LUMO gap further, yielding 15.85 eV. The inversion of the KS system is not an easy task, though, and in particular a finite basis set may lead to drastically modified results \cite{Schipper1997,Mura1997,Heaton-Burgess2007,Jacob2011,Gaiduk2013}. Moreover, small changes in the density can yield large differences in the potential \cite{Jensen2018,Shi2021}. Altogether, a reliable inversion of the KS equations remains a difficult task even for finite systems, and while various methods have been proposed to overcome the problems, research in this direction is still ongoing \cite{Ryabinkin2012,Jensen2018,Ou2018,Kanungo2019,Kumar2019,Kumar2020,Kumar2020a,Callow2020,Nam2021,Shi2021,erhard2022,Shi2022}.
In realistic three-dimensional periodic systems, to the best of our knowledge, no results for \mbox{{$v_{\rm xc}$}}\ obtained directly from a numerically exact density are available. There are several reasons for this, including the fact that data for numerically exact densities of solids were not available in the literature, and that the inversion in extended three-dimensional systems may bring new technical difficulties.
Therefore,
to date a series of important fundamental questions remain to be answered, in particular: \textit{Can one invert the KS equations in practice in an extended three-dimensional system, and if so, how, and with which precision? How different is the resulting \mbox{{$v_{\rm xc}$}}\ from standard approximations for solids, such as the LDA or Perdew-Burke-Ernzerhof (PBE) generalised gradient approximation\cite{PBE1996} (GGA)? What about observables in this numerically exact KS system, and in particular, the band gap?} \textit{How much does \mbox{{$v_{\rm xc}$}}\ depend on details of the density? And if it depends significantly, do the resulting changes have an impact on other KS observables?} Starting from nearly numerically exact densities\footnote{Given the pseudopotential. In other words, all discussions are valid because we have used the same hamiltonian, with a fixed LDA pseudopotential, for the valence electron problem throughout, including for the QMC calculations. A full all-electron $v_{\rm xc}$ would of course look different.} for the simple semiconductor bulk silicon and insulating sodium chloride, obtained by the Auxiliary Field (AF) QMC method \cite{Zhang2003, Motta2018} in Ref. \cite{SiyuanChen}, in the present work we answer these long-standing questions.
\section{How to invert the KS problem in infinite systems}
Probably the simplest algorithm to obtain the KS potential from a given density $n_{\rm ref}$ has been proposed for finite systems by van Leeuwen and Baerends\cite{PhysRevA.49.2421}. In its original form it was derived by solving the KS equations for the KS potential $v_{\rm KS}$. The result was then translated into an iteration procedure which relates the potential $v^{i+1}$ at step $i+1$ to the potential $v^i$ at step $i$ through the ratio of the target density $n_{\rm ref}$ and the density $n^i$ at step $i$. As pointed out in \cite{Peirs2003}, the best use of this ratio depends on the sign of the potential that is updated: for example, $v$ may be either the usually negative total $v_{\rm KS}$, or its rather positive interaction part $v_H+v_{\rm xc}$, with $v_H$ the Hartree potential. In the present work we use
\begin{equation}
\label{invertvxc}
v_{\rm xc}^{i+1}({\bf r}) = \frac{n_{\rm ref}({\bf r})+a}{\tilde n^{i}({\bf r})+a} v_{\rm xc}^i({\bf r})\,,
\end{equation}
where $a$ is a parameter that avoids instabilities in regions of very low density, as suggested in \cite{PhysRevA.49.2421},
and the mixing $\tilde n^i=\alpha n^{i-1}+(1-\alpha)n^i$, with $0<\alpha<1$, is introduced to smooth the convergence. This density $\tilde n^i$ is also used to update the Hartree potential at each iteration. \refEq{invertvxc} is clearly a good strategy if $v_{\rm xc}$ is negative, and if the density at a point $\mathbf{r}$ is determined only by the KS potential at that same point. Suppose that at a given iteration $\tilde n^i(\mathbf{r})$ is larger than $n_{\rm ref}(\mathbf{r})$. The algorithm then decreases the absolute value of $v_{\rm xc}(\mathbf{r})$. If the exchange-correlation\ potential is negative, this step makes the potential more shallow, and less density will be attracted to the point ${\mathbf{r}}$ in the next iteration, which pushes the solution in the good direction. Of course, it is not true that $n({\bf r})$ depends only on the KS potential at the same point ${\bf r}$, and it remains to be seen to which extent the relation is nearsighted enough to make the algorithm work in a solid.
The negative sign of the potential that is updated in \eqref{invertvxc} is crucial for the algorithm to work, because a positive sign would drive the result in the wrong direction.
However, contrary to the HEG,
a real system can also exhibit regions of positive $v_{\rm xc}$. Moreover, while in a low-dimensional system one can impose that the potential tends to zero at large distances, in a three-dimensional solid the zero of the potential is not defined.
This fact represents both an advantage and a drawback. On the upside, it allows us to introduce a rigid negative shift such that the potential remains negative throughout the iteration. This shift is arbitrary within reasonable limits: if it is too small, positive regions may appear and become an obstacle for convergence. If it is too large, the algorithm becomes unstable, as the shift is multiplied at every step by the density ratio. Reasonable values lie within the maximum amplitude of the potential. On the downside, iteration of \eqref{invertvxc} yields $v_{\rm xc}$ only up to a constant. This is not due to our introduction of a shift, but to the fact that the density does not contain information about the absolute value of the potential\cite{DFT0}. Therefore, this limitation cannot be avoided. The resulting potential can, however, be used to calculate a well defined density and KS observables such as the KS band structure (besides the meaningless constant shift).
We have tested the algorithm using the LDA, and we have found it to be sufficiently accurate to support all our claims below. Detailed results of the tests can be found in Appendix \ref{subsec:algorithm}. App. \ref{subsec:stability} shows that the final results do not depend on the starting point of the iterative procedure, including starting points as far from the final result as, e.g., $0.1\times v_{\rm xc}^{\rm LDA}$. For the results shown in the following, we use as starting point $0.3\times v_{\rm xc}^{\rm LDA}$ with a rigid downwards shift of 0.2 Hartree for silicon and 0.4 Hartree for NaCl.
In the following, we report results in atomic units for both densities (expressed in bohr$^{-3}$) and xc potentials (in Hartree).
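To illustrate how the multiplicative update in \refEq{invertvxc} behaves in practice, the following self-contained sketch applies it to a one-dimensional single-particle toy problem (a hard-wall box with a harmonic well). All numerical choices here, the grid size, the negative starting potential, the regularizer $a$, and the mixing, are illustrative and not the values used for the solids.

```python
import numpy as np

def ks_density(v, L=1.0):
    """Ground-state density of one particle in potential v on a 1D grid
    with hard-wall boundaries (finite-difference Hamiltonian, hbar = m = 1)."""
    N = len(v)
    dx = L / (N + 1)
    H = (np.diag(np.full(N, 1.0)) - 0.5 * np.diag(np.ones(N - 1), 1)
         - 0.5 * np.diag(np.ones(N - 1), -1)) / dx**2 + np.diag(v)
    _, psi = np.linalg.eigh(H)
    return psi[:, 0]**2 / dx          # normalized so that sum(n) * dx = 1

# Reference density from a known confining potential.
N = 80
x = np.linspace(0.0, 1.0, N + 2)[1:-1]
n_ref = ks_density(50.0 * (x - 0.5)**2)

# Iterate v <- v * (n_ref + a) / (n + a) on a strictly negative potential
# (the rigid negative shift discussed above), with density mixing.
a = 1e-3
v = np.full(N, -100.0)                # negative, featureless starting guess
n = ks_density(v)
err0 = np.max(np.abs(n - n_ref))
for _ in range(300):
    v = v * (n_ref + a) / (n + a)
    n = 0.5 * n + 0.5 * ks_density(v)  # mixing to smooth convergence
err = np.max(np.abs(n - n_ref))
```

Since the density fixes the potential only up to a constant, the converged `v` matches the true potential only after aligning the averages, exactly as done for the solids below.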
\section{Results}
\subsection{ Kohn-Sham potential of silicon and sodium chloride}
We have applied the algorithm to obtain the exchange-correlation\ potential from the charge density obtained by AFQMC calculations. For silicon, we have used the results of Ref. \cite{SiyuanChen}. For NaCl, we have applied additional symmetry operations to the density from the same Ref. \cite{SiyuanChen}. Since the QMC data is noisy, the inversion has a more limited precision than in the case of, e.g., clean LDA data (see App. \ref{subsec:noisy-densities}).
The Mean Absolute value of the Percentage Error (MeAPE) of the density of silicon does not fall below 0.02\%, while
the Maximum (over the unit cell) of the Absolute value of the Percentage Error (MaAPE) of the density decreases to 0.38\% at $i=20$ iterations. This is in any case sufficient to make significant distinctions between different densities and potentials.
The upper panel of Fig. \ref{fig:dens_xcpot_SILICON-QMC} shows the Local Percentage Difference (LPD) of the iterative density with respect to the QMC density after 20 iterations (blue line), $100\times(1-n^{\rm QMC,20}({\bf r})/n_{\rm QMC}({\bf r}))$, along a path through the unit cell (the same as in Ref. \cite{SiyuanChen}, see the inset to the second panel of Fig. \ref{fig:dens_xcpot_SILICON-QMC}).
The result stays well within the stochastic error bar of the QMC calculation (grey area). For comparison, we also show the LPD of the LDA and PBE densities (dot-dashed orange and dashed green lines, respectively) with respect to the QMC.
As also shown in Ref. \cite{SiyuanChen}, differences between the LDA, PBE and QMC densities are largest on the atoms and also in other regions of low density\footnote{Note that we have defined the error with opposite sign with respect to Ref. \cite{SiyuanChen}} (see the magenta line in the second panel of Fig. \ref{fig:dens_xcpot_SILICON-QMC}), but they are still significant in regions of higher density, along the (110) direction, where LDA and PBE are very similar, but differ from the QMC result. Most importantly, the differences between different densities are much larger than the error due to the inversion of the QMC density: while the MeAPE at $i=20$ is 0.04\%, the mean absolute relative difference between the LDA and QMC densities is 1.93 \%, and it is 1.07 \% between the PBE and QMC densities.
The xc potentials are compared in the third panel of Fig. \ref{fig:dens_xcpot_SILICON-QMC}. For this comparison, the potentials are aligned at their average value. Our numerically determined and supposedly most accurate KS xc potential, obtained from the QMC density, is similar to the local and semi-local approximations. This result is stable: the QMC result obtained at $i=10$, where the MaAPE and MeAPE on the density are 0.90 \% and 0.09 \%, respectively, is almost indistinguishable from the one at $i=20$. The differences between QMC on one side, and LDA and PBE on the other side, can be appreciated in the bottom panel, which shows the LPD of LDA and PBE with respect to the QMC xc potential obtained at $i=20$. These differences are similar for LDA and PBE along most of the path.
The MeAPE with respect to the QMC result for the potentials is 3.90~\% for the LDA and 3.88~\% for the PBE: of similar order, though larger, than the MeAPE of the densities. Instead, the LPD of the QMC potential at $i=10$ with respect to the potential obtained at $i=20$ can hardly be seen.
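The error measures used throughout (LPD, MeAPE, MaAPE) are simple grid statistics; as a sketch, with the sign convention of the LPD formula above:

```python
import numpy as np

def lpd(n, n_ref):
    """Local Percentage Difference of n with respect to a reference,
    100 * (1 - n / n_ref), evaluated pointwise on the grid."""
    return 100.0 * (1.0 - np.asarray(n) / np.asarray(n_ref))

def meape(n, n_ref):
    """Mean Absolute value of the Percentage Error over the grid."""
    return np.mean(np.abs(lpd(n, n_ref)))

def maape(n, n_ref):
    """Maximum Absolute value of the Percentage Error over the grid."""
    return np.max(np.abs(lpd(n, n_ref)))
```

The same functions apply unchanged to the potentials, after aligning their averages as described above.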
\begin{figure}
\centering
\includegraphics[width=\linewidth]{Figures/invqmc_st30LDA_shift2e-1_v9.png}
\caption{ {Density and xc potential of bulk silicon along the same path across the unit cell as in Ref. \cite{SiyuanChen}. The positions of atoms are indicated by dotted vertical lines.
The iterative inversion follows}
\refEq{invertvxc} with the QMC density $n_{\rm QMC}$ of silicon as reference density.
The potential $v_{\rm xc}^{\rm QMC,20}$ is obtained after $i=20$ iterations. The density $n^{\rm QMC,20}$ is calculated using $v_{\rm xc}^{\rm QMC,20}$ in the KS equation. The MaAPE at $i=20$ compared to $n_{\rm QMC}$ is 0.38\%, and the MeAPE is 0.04\%.
Top panel: LPD of $n^{\rm QMC,20}$ (blue), self-consistent LDA $n_{\rm LDA}$ (orange), and PBE $n_{\rm PBE}$ (green) densities with respect to $n_{\rm QMC}$.
The grey area is the stochastic error bar of the QMC density.
{Second panel: The QMC density $n_{\rm QMC}$ (magenta line)}. {The inset shows the chosen path across the crystal from Ref. \cite{SiyuanChen}.}
{Third} panel:
$v_{\rm xc}^{\rm QMC,20}$ (blue), $v_{\rm xc}^{\rm LDA}$ (orange), $v_{\rm xc}^{\rm PBE}$ (green), and $v_{\rm xc}^{\rm QMC,10}$ (red). {Note that the two QMC potentials (blue and red lines) are almost indistinguishable.}
The average potentials are aligned.
Bottom panel: LPD of xc potentials with respect to $v_{\rm xc}^{\rm QMC,20}$ for LDA (orange), PBE (green), and QMC at $i=10$ {(red)}.
}
\label{fig:dens_xcpot_SILICON-QMC}
\end{figure}
We have hence reached sufficient precision on the density, which lies within the QMC error bar, and on the xc potential, which shows some differences with respect to common functionals. The effect of iterating further using the noisy QMC data is discussed in App. \ref{subsec:noisy-densities}.
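The behaviour of such an iterative inversion can be illustrated on a toy one-dimensional, one-electron model. This is a hedged sketch: the simple linear update $v \to v + \alpha\,(n - n_{\rm ref})$ used here is our assumption for illustration and need not coincide with the actual update rule of \refEq{invertvxc}.

```python
import numpy as np

def ks_density(v, dx):
    """Ground-state density of a 1D single-particle Hamiltonian
    H = -1/2 d^2/dx^2 + v(x), discretized by finite differences
    with hard-wall boundary conditions."""
    N = len(v)
    kin = (2.0 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)) / (2.0 * dx**2)
    _, U = np.linalg.eigh(kin + np.diag(v))
    phi = U[:, 0]               # lowest (occupied) orbital, unit vector norm
    return phi**2 / dx          # normalized so that sum(n) * dx = 1

def invert_density(n_ref, dx, alpha=2.0, n_iter=1000):
    """Iteratively adjust v so that its ground-state density matches n_ref:
    raise the potential where the density is too high, lower it where
    it is too low."""
    v = np.zeros_like(n_ref)
    for _ in range(n_iter):
        v += alpha * (ks_density(v, dx) - n_ref)
    return v
```

Starting, e.g., from the density of a harmonic well, the scheme recovers a potential whose ground-state density reproduces the reference; as discussed later in the text, convergence of the density does not fix all features of the potential (a constant shift, in particular, is undetermined).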
\begin{figure}
\centering
\includegraphics[width=\linewidth]{Figures/invqmc_nacl_v11.pdf}
\caption{ Iterative inversion following \refEq{invertvxc} with the QMC density $n_{\rm QMC}$ of NaCl as reference density {along the same path across the unit cell as in Ref. \cite{SiyuanChen}}.
The result of the QMC inversion is shown at $i=200$. The MaAPE on the density at $i=200$ compared to $n_{\rm QMC}$ is 0.29\%, and the MeAPE is 0.03\%.
Top panel:
LPD of $n^{\rm QMC,200}$ (blue), self-consistent LDA $n_{\rm LDA}$ (orange), and PBE $n_{\rm PBE}$ (green) densities with respect to $n_{\rm QMC}$.
The grey area { in the top panel indicates} the stochastic error bar of the QMC density. {Second} panel: {The QMC density $n_{\rm QMC}$ (thin magenta line)}.
{The inset shows the chosen path across the crystal from Ref. \cite{SiyuanChen}.} {Third panel: $v_{\rm xc}^{\rm QMC,200}$ (blue), $v_{\rm xc}^{\rm LDA}$ (orange) and $v_{\rm xc}^{\rm PBE}$ (green). The averages of the potentials are aligned.}
Bottom panel: LPD of xc potentials with respect to $v_{\rm xc}^{\rm QMC,200}$ for LDA (orange) and PBE (green).
}
\label{fig:invqmc_nacl}
\end{figure}
Results for sodium chloride show similar trends, with even better convergence properties of the potential, since our QMC density for NaCl is less noisy than that of silicon in the important regions of high density (see also App. \ref{subsec:nacl}). For the density, we obtain a MeAPE of 0.03\% and a MaAPE of 0.29\% at $i=200$. In Fig. \ref{fig:invqmc_nacl}, analogous to Fig. \ref{fig:dens_xcpot_SILICON-QMC} for silicon, we show the LPD of the density and of the xc potential along a path (see the inset to the second panel).
The QMC-derived xc potential differs from the LDA and PBE especially on the sodium atoms, where the density shows rapid changes.
At first sight, however, and as in the case of silicon, it is difficult to rationalize the differences between the three potentials. While it is exciting to see the numerically exact xc potential for real semiconductors and insulators, in order to gain more insight it is useful to switch to a representation that highlights the essence of the differences.
\subsection{Non-local dependence of the KS exchange-correlation\ functional on the density}
In the LDA, \mbox{{$v_{\rm xc}({\bf r})$}}\ is a monotonic function of $n({\mathbf{r}})$. The exact KS potential is a functional of the density everywhere, which means that { it} can take different values in different points $\mathbf{r}$ where the density, instead, is the same. This expresses the fact that \mbox{{$v_{\rm xc}({\bf r})$}}\ depends not just on the local density, but also on the environment. In order to highlight {how} the true $v_{\rm xc}$ differs from a function of the local density, we create a map of $v_{\rm xc}({\bf r})$ with respect to $n({\bf r})$: for each point ${\bf r}$ in real space, we add {a point} [$v_{\rm xc}({\bf r}) \leftrightarrow n({\bf r})$] to Fig. \ref{fig:si_grad_tau} and Fig. \ref{fig:nacl_grad_tau} for silicon and NaCl, respectively.
In the case of the LDA, this plot shows the universal function $v_{\rm xc}(n)$, the same for silicon and NaCl, which is identical to the function in the HEG. Beyond the LDA, different environments may change this function, such that it differs between materials. Moreover, within one and the same material, different environments may give rise to more than one function; and finally, if the result is very sensitive to details, it may become difficult to detect anything like a limited number of functions. All these effects are possible signatures of the non-local dependence of $v_{\rm xc}({\bf r})$ on the density, and being able to discern them, and to characterize different environments, may give precious input for further modeling of $v_{\rm xc}$.
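As a concrete point of reference, the exchange-only LDA potential in Hartree atomic units is the universal function $v_x(n) = -(3n/\pi)^{1/3}$, and the map is assembled simply by pairing $v_{\rm xc}({\bf r})$ with $n({\bf r})$ point by point. A minimal one-dimensional sketch (the color coding by $|\nabla n|$ follows the figures):

```python
import numpy as np

def vx_lda(n):
    """Exchange-only LDA potential (Hartree atomic units): the same
    universal function of the local density for every material."""
    return -(3.0 * n / np.pi) ** (1.0 / 3.0)

def potential_density_map(n, v_xc, dx):
    """One row (n(r), v_xc(r), |dn/dx|(r)) per real-space grid point,
    ready to be scatter-plotted and color coded by the local gradient."""
    return np.column_stack([n, v_xc, np.abs(np.gradient(n, dx))])
```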
\begin{figure*}
\centering
\includegraphics[width=\linewidth]{Figures/si_grad_tau.pdf}
\caption{Map of the xc potential of silicon with respect to the local density {at all points in the unit cell}. Color codes reflect the modulus of the local gradient of the density (left column) or the local kinetic energy density (right column). Upper figures are for PBE and bottom figures for QMC. The LDA is shown in green.}
\label{fig:si_grad_tau}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=\linewidth]{Figures/nacl_grad_tau_v8.png}
\caption{Same as Fig. \ref{fig:si_grad_tau}, for NaCl. Note the much larger range for $n(\bf r)$ here.
{Upper panels are for PBE and bottom panels for QMC. The LDA is shown in green. The inset in the bottom right panel shows the QMC density along a part of the path across the unit cell.}
}
\label{fig:nacl_grad_tau}
\end{figure*}
The maps in Figs. \ref{fig:si_grad_tau} and \ref{fig:nacl_grad_tau} contain the results of LDA, PBE and QMC.
In silicon, the PBE result (upper panels of Fig. \ref{fig:si_grad_tau}) is dominated by a simple monotonic function, which is, however, slightly steeper than the universal LDA function. Moreover, the result appears a little more scattered. Finally, a new branch appears at low densities. For more insight, the colors reflect, respectively, the modulus of the local gradient of the density (left panels) and the KS kinetic energy density defined as
$
\tau(\mathbf{r}) = \tfrac{1}{2}\sum_i^{\rm occ} \rvert\nabla\phi_i(\mathbf{r})\lvert^2
$, where $\phi_i$ are KS orbitals
(right panels).
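This quantity can be sketched as follows, restricted to one spatial dimension for brevity; `np.gradient` is used here as a simple finite-difference stand-in for the orbital gradients.

```python
import numpy as np

def kinetic_energy_density(orbitals, dx):
    """KS kinetic energy density tau(x) = 1/2 * sum_i |d phi_i/dx|^2
    for occupied orbitals sampled on a uniform 1D grid
    (orbitals: array of shape (n_grid, n_occ), one column per orbital)."""
    grads = np.gradient(orbitals, dx, axis=0)
    return 0.5 * np.sum(grads**2, axis=1)
```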
The QMC xc potential in the bottom panels is more blurred than the PBE result, but this may be due to the noisy density, since test calculations using the LDA density augmented with some Gaussian noise lead to similar scattering, as shown in App. \ref{subsec:noisy-densities}.
The overall shape and branches of the QMC xc potential, instead, are significant. Similarly to the PBE, one can identify a dominant curve, and, with respect to the LDA, two main changes are seen: the curve is steeper than {that of} the LDA, and at low densities an additional branch appears. The change in slope of the main branch with respect to the LDA goes in the same direction as in the PBE result, and it is even more pronounced. Also the branching happens in a similar region as in the case of PBE. However, the branch departs in the opposite direction.
As in the case of PBE, it is characterized by a gradient and a kinetic energy density that differ strongly from those of the main branch at the same local density. Indeed, the region in space where the potential lies on the extra branch is close to the centers of the atoms, where the density varies rapidly; the branch is therefore particularly sensitive to details of the pseudopotential. This, together with the fact that the inversion error on the density is largest on the atoms (see Fig. \ref{fig:dens_xcpot_SILICON-QMC-noise} in the Appendix), which in turn also affects the large density gradients in their vicinity, suggests that the wrong direction of the PBE branch observed here deserves further study, with many more QMC datasets using different pseudopotentials and covering different materials, which is beyond the scope of the present work. The changed slope of the main branch, instead, occurs over the complete range of densities and should be a feature of silicon independent of the pseudopotential and other ingredients of the calculation.
The modifications of the different branches with respect to the LDA $v_{\rm xc}^{\rm LDA}$ may be translated in different ways, for example: $v_{\rm xc}^{\rm e}({\bf r})=F^e(n({\bf r}))v_{\rm xc}^{\rm LDA}(n({\bf r}))$ with a correction factor $F^e$ that depends on the local density and on an environment $e$, which must be characterized. Another possibility would be $v_{\rm xc}^{\rm e}({\bf r})=v_{\rm xc}^{\rm LDA}(\mathcal{F}^e(n({\bf r}))\times n({\bf r}))$. The GGAs, for example, are an attempt to characterize the environment by the local gradient of the density (see, e.g., \cite{PerdewChevary1992}). Our results motivate further search for improved approximations of the true \textit{functional} that can be expressed as \textit{functions} of a limited number of parameters, such as the local density and its gradients.
Consistently with the fact that the QMC density for NaCl is less noisy than in the case of silicon, the map for NaCl in Fig. \ref{fig:nacl_grad_tau} {(bottom panels)} shows less scattering. As for silicon,
we find a main branch that corresponds to a modified LDA. Moreover, there is an additional branch at low densities and another branch at high density, both characterized by differences in the gradient or kinetic energy density.
{The analogous secondary branches for PBE (see upper panels) are less pronounced.}
The inset in the lower right panel also shows the QMC density along {part of} the path. Numbers indicate to which {locations} selected data points correspond. For example, data point 1 on the additional high-energy branch corresponds to the potential on the sodium atom, with an environment where the density is very quickly varying, which explains why the LDA completely fails. Data point 2, instead, corresponds to a place with similar density but located in a more gentle environment, although the gradient of the density is significant. As expected, in this point we are on the main branch, which is, however, modified with respect to the LDA.
Similarly, points 3 and 4 on the chlorine atom explain the extra branch at lower density. These results show that the potential-density relation presented as maps such as the one in Figs. \ref{fig:si_grad_tau} and \ref{fig:nacl_grad_tau} may give { further} insight about the most efficient way to introduce correction factors, and about the most important features distinguishing different environments, which could eventually be combined with machine learning approaches \cite{Kalita2021}.
\subsection{Significance of the Kohn-Sham potential}
A striking observation from the iteration procedure is that very different \mbox{{$v_{\rm xc}$}}\ can yield very similar densities \cite{Savin2003,Kim2013,Wasserman2017}.
This becomes evident when one starts from the noisy LDA (see App. \ref{noisyLDA}) or the QMC data for silicon and pushes the iteration process well inside the region where the MeAPE of the density no longer improves (see Fig. \ref{fig:error_lda_pbd_qmc} in the Appendix). While the MaAPE of the density also remains modest, of the order of, or smaller than, the noise itself, the xc potential at higher iterations develops strong spikes when the reference density is noisy. In order to quantify the effect, the upper panel of Fig. \ref{fig:bandgap-noisy} shows the MaAPE of the xc potential as a function of the iteration $i$, calculated with respect to the smooth potential that is obtained at $i_0=20$ in the case of QMC, and at $i_0=24$ in the case of the noisy LDA (the potentials themselves are shown in the second panel of Fig. \ref{fig:dens_xcpot_SILICON-QMC-noise} in the Appendix). In both cases, this deviation from the smooth potential shows strong variations. This raises the question of what happens to KS observables other than the density: even though, as discussed above, these do not by themselves have a direct physical meaning in an exact sense, they can still be seen as approximations to the physical quantities \cite{Chong2002,Filippi1997}, and they are frequently used as starting points for calculations in a more appropriate framework, such as many-body perturbation theory \cite{Martin2016}. We therefore show in the bottom panel of Fig. \ref{fig:bandgap-noisy} the direct KS band gap of silicon at $\Gamma$
as a function of the number $i$ of iterations at which the KS potential and corresponding density were calculated. The result converges very rapidly with $i$ and remains stable, within 1 meV, over a wide range of iterations, even after the potential has developed huge spikes. This means that very different xc potentials can yield not only very similar densities but, more generally, also very similar KS observables. Moreover, the figure shows that the gaps corresponding to clean and noisy LDA densities are almost indistinguishable, i.e., the noise does not affect KS observables. It confirms the statement, mostly based on findings from low-dimensional systems, that examining the xc potential alone is not sufficiently meaningful \cite{Savin2003,Kim2013,Wasserman2017}.
It also suggests that an effort is needed to distinguish in the KS potential crucial features, which must be contained in good functionals, from others that may be quantitatively strong in the potential, but insignificant for its effects.
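The band-gap observables discussed here follow directly from the KS eigenvalues; a minimal sketch, with `bands` a hypothetical array of eigenvalues per k-point and band, and `n_occ` the number of occupied bands:

```python
import numpy as np

def ks_gaps(bands, n_occ):
    """Minimum (possibly indirect) KS gap and the direct gap at each
    k-point, from KS eigenvalues bands[k, band]."""
    bands = np.sort(bands, axis=1)
    vbm = bands[:, n_occ - 1].max()   # valence-band maximum over k
    cbm = bands[:, n_occ].min()       # conduction-band minimum over k
    return cbm - vbm, bands[:, n_occ] - bands[:, n_occ - 1]
```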
\begin{figure}
\centering
\includegraphics[width=\linewidth]{Figures/error_xc_pot.pdf}
\includegraphics[width=\linewidth]{Figures/band_direct_gap_si_v3.pdf}
\caption{({Upper} panel) The xc potential of silicon MaAPE as a function of iteration $i$, calculated with respect to the result at $i_0=20$ for QMC (dashed blue line) and with respect to the result at $i_0=24$ for noisy LDA (purple line).
({Bottom} panel)
{Direct KS band gap of silicon at $\Gamma$:
convergence with number of iterations for clean and noisy LDA {(purple and dashed orange lines, respectively, overlapping almost entirely)}, and QMC {(dashed blue line)}. Horizontal lines are the KS gap for the clean LDA (grey){ at 2.55 eV}, and the best estimated result from the iterative procedure for QMC (black) {at 2.72 eV}. Note the break in vertical scale. }
}
\label{fig:bandgap-noisy}
\end{figure}
\subsection{Kohn-Sham approximation to the band gap}
The study of the KS band gap is interesting in itself, since there is a
long-standing debate concerning the difference between observables in the real and KS auxiliary systems. In the absence of knowledge of the exact Kohn-Sham potential, it was not possible to distinguish between the discrepancies due to approximations of the functional and those due to the difference between the (even exact) Kohn-Sham system itself and the real material.
Valuable hints were already given by work on model systems; for example, Knorr and Godby \cite{Knorr1992,Knorr1994} determined the xc potential by inversion from the density of a finite one-dimensional model semiconducting wire, which was then extrapolated to infinite length. Most of the band gap error was shown to be due to the fact that the exact KS eigenvalue gap differs from the fundamental electron addition and removal gap, and not due to approximations. Indeed, the KS eigenvalue gap calculated at fixed particle number disregards the derivative discontinuity of the exact xc potential upon change of particle number \cite{Perdew1982,ShamSchlueter1983,Perdew1983}. Since the numerically exact density and/or xc potential could be obtained only for very few, low-dimensional, systems, several studies used the link between the xc potential and the self-energy given by the Sham-Schl\"uter equation \cite{ShamSchlueter1983} in order to extract $v_{\rm xc}$ from the self-energy. These include work on a two-plane-wave model \cite{ShamSchlueter1983,Lannoo1985}, the surface barrier for semi-infinite jellium \cite{Eguiluz1992}, finite systems \cite{Niquet2004,Hellgren2007,Hellgren2010,Bleiziffer2013}, and the study of several real semiconductors and insulators \cite{Godby1986,Godby88,Niquet2004,Gruning2006,GruningJCP2006,Kotani1998,Klimes2014,Riemelmoser2021}. These studies confirmed that the error inherent in using Kohn-Sham eigenvalues instead of true electron addition and removal energies is significant. However, the approaches used to determine the potential involved approximations themselves, whose quantitative impact on the findings is not known: first, the Sham-Schl\"uter equation was linearized in all studies; second, the self-energy itself was approximated in many-body perturbation theory, mostly at the GW \cite{Hedin-GW1965} level.
With the present work, we finally do have an almost numerically exact Kohn-Sham potential at hand for real materials, and we can therefore draw definite conclusions concerning the band structure, and in particular the band gap, of standard semiconductors and insulators.
Results for the converged band gaps of silicon and NaCl
are shown in Table \ref{tab:bandgap_si}. For silicon, our numerically exact minimum indirect KS band gap is 0.69 eV, about 40 \% larger than the KS gap of 0.49 eV calculated in LDA, and significantly smaller than the experimental gap\cite{silicon-gap} of 1.17 eV. The PBE gap of 0.66 eV is close to the QMC-derived value. The direct band gap opening of QMC with respect to LDA is analogously 0.17 eV at $\Gamma$ and 0.12 eV at X.
In NaCl, the situation is similar, with the QMC-derived gap about 14\% larger than the LDA one, and only 3\% larger than the PBE gap.
The 5.25 eV QMC-derived KS gap is again much smaller than the 8.5 eV experimental gap\cite{nacl-gap}.
The bandwidths do not change in a noteworthy way with respect to LDA (or PBE):
in silicon the QMC valence bandwidth is reduced by 0.1 eV compared to LDA (and 0.05 eV compared to PBE), while in NaCl the QMC bandwidth is 0.15 eV smaller than LDA and 0.04 eV larger than PBE.
Our QMC-derived KS gaps confirm the conclusions of Refs. \cite{Godby1986,Godby88} and thus definitively highlight the fact that a good multiplicative KS potential will not yield a ``good'' eigenvalue band gap in solids.
Overall, the band gap is an excellent illustration for the fact that the exact Kohn-Sham system is an auxiliary system designed to yield the density in principle exactly, but for other observables, it can only give an approximation.
\begin{table}
\begin{tabular}{c|cc|c}
\hline\hline
& \multicolumn{2}{c|}{Si} & NaCl \\
& indirect & \ \ \ direct at $\Gamma$ & direct at $\Gamma$\\
\hline
QMC derived& 0.69 & 2.72&5.25\\
PBE & 0.66 & 2.60 &5.08\\
LDA & 0.49 &2.55 &4.59\\
\hline
Exp. & 1.17 \cite{silicon-gap} & 3.05\cite{Ortega1993} & 8.5\cite{nacl-gap} \\
& & 3.40\cite{silicon-gap} & \\
\hline\hline
\end{tabular}
\caption{KS minimum band gaps and direct band gaps at $\Gamma$ (eV) in comparison with experimental photoemission gaps from Refs. \cite{silicon-gap,Ortega1993,nacl-gap}.} \label{tab:bandgap_si}
\end{table}
\section{Conclusion}
In conclusion, we have shown that a simple algorithm allows one to obtain the Kohn-Sham xc potential for periodic semiconductors and insulators, given their ground state density. The precision that can be obtained is limited by the quality of the input data. Here, we use densities taken from AFQMC calculations, and the limiting factor is the stochastic noise. Nevertheless, meaningful results are obtained, with an error bar smaller than the difference between the resulting potentials and their LDA or PBE counterparts, which allows us to safely draw conclusions. In particular, for the materials studied here, namely bulk silicon and NaCl, the xc potential \textit{functional of the density everywhere} can be represented in terms of two or three \textit{functions of the local density}, each of which is determined by a specific environment. These environments appear to be characterized by the local gradient of the density or, even more clearly, by the local kinetic energy density.
The function that represents most of the data points is close to the LDA, but with slight material-dependent deviations. PBE also predicts deviations and the existence of the additional functions, although it does not always describe them well. On the other hand, our results clearly illustrate that very different potentials may lead to very similar densities and, more generally, to very similar KS observables. In particular, the KS band gap converges rapidly with the number of iterations of the inversion process, while the xc potential still undergoes violent modifications. More work is needed to discern important features of the xc potential from those that do not influence KS observables; sum rules and other exact constraints may be helpful for this \cite{PerdewWang1986,PBE1996,Perdew2008,Sun2015}. Our results for the KS band gap confirm previous conjectures based on model systems and/or many-body perturbation theory, which predict that the exact KS band gap is closer to the LDA one than to the measurable electron addition and removal gap; in other words, that the derivative discontinuity of the true xc potential is sizable. Still, the LDA error is non-negligible, whereas PBE predicts the exact KS gap with an error of less than 5\% for the materials studied here. Our work highlights directions for the improvement of density functionals, stressing the need for, and usefulness of, QMC calculations of the density in many more materials.
\section{Acknowledgements}
{Fruitful discussions with Kieron Burke and Rex Godby are acknowledged. S.C. was supported by the U.S. Department of Energy (DOE) under Grant No. DE-SC0001303, and by the Center for Computational Quantum Physics, Flatiron Institute. The Flatiron Institute is a division of the Simons Foundation.}
\section{Introduction}
In recent years, image classifiers based on deep convolutional neural networks (CNNs) have achieved human-level accuracy in large-scale image classification tasks~\cite{ilsvrc,resnet}.
The recognition capabilities of these methods are limited to the object classes included in the training data. However, for an image recognition system running in the real world, for example a robot, considering all existing object classes in the world during training is infeasible.
If such a robot were able to ask for information about {\it objects it cannot recognize}, it would not have to learn all classes in advance.
In this paper, we define an {\it unknown object} as an object belonging to a class not included in the training data.
In order to acquire knowledge about the unknown object class, the most reliable way is to obtain information directly from humans. For example, the robot can present an image to a human and ask them to annotate the class of an object, as in active learning~\cite{least_confident}. When the class is unknown, selecting the appropriate object and generating a suitable question about it is a challenging problem, and has not been tackled yet.
\begin{figure}[tb]
\centering
\includegraphics[width=0.7\hsize]{q_example.pdf}
\caption{Examples of suitable/unsuitable questions for unknown objects. A suitable question should specify the target object ({\it stuffed toy}), so the answer is the class of the unknown object ({\it teddy bear}). Therefore, questions such as (a) are suitable.
On the other hand, simple questions such as (b) and questions about location such as (c) are unsuitable.}
\label{fig:example}
\end{figure}
The goal of this research is to generate questions that request information about a specific unknown object in an image.
As shown in Fig.~\ref{fig:example}, compared to a simple question such as {\it ``What is this?''}, a specific question such as {\it ``What is the stuffed toy sitting next to the dog?''} better targets the class of the unknown object.
There exist several approaches~\cite{vqg1,vqg_creative} for general visual question generation (VQG) using recurrent neural networks (RNNs). VQG with a specific target for the question has also been studied~\cite{disambiguate} by providing a {\it target word} (i.e., a word indicating what object the question is targeting) to an RNN as a condition. However, in these works, only known classes are given as the target word. To the best of our knowledge, VQG targeting unknown objects has not been studied yet.
Also, in order to realize a VQG method for unknown objects, we first need to detect and classify the unknown object. However, we cannot rely on object classification~\cite{resnet} or object region proposal~\cite{fast_rcnn} methods that only consider known/labeled classes (i.e., supervised learning).
In this paper, to find unknown objects in an image, we propose object regions by selective search~\cite{selective_recognition}, which is not based on supervised learning, and then classify whether the proposed objects are unknown or not. Since our method has to classify all the objects, in order to reduce the execution time we propose an efficient unknown object classification based on uncertainty prediction.
In addition, we approach VQG for unknown objects by generating a question containing the {\it hypernym} of the unknown object.
The hypernym of a given word is another word that is higher in the semantic hierarchy, such as {\it ``animal''} for {\it ``dog.''}
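The hypernym lookup can be sketched with a toy hierarchy; the entries below are hypothetical stand-ins, and in practice a lexical database such as WordNet would supply the hierarchy.

```python
# Toy semantic hierarchy (hypothetical entries standing in for a real
# lexical database such as WordNet)
HYPERNYMS = {"dog": "animal", "teddy bear": "stuffed toy", "car": "vehicle"}

def hypernym(word, hierarchy=HYPERNYMS, default="object"):
    """One step up the semantic hierarchy; fall back to a generic word
    when the class is not in the hierarchy."""
    return hierarchy.get(word, default)
```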
{\bf Contributions}: (1) We propose the novel task of automatically generating questions to get information about unknown objects in images. (2) We propose a method to generate questions using the semantic hierarchy of the target word. (3) We construct the whole pipeline by combining modules of object region proposal, unknown object classification, and visual question generation, and show that it can successfully acquire information about unknown objects from humans.
The paper is organized as follows.
First, we explain previous studies related to this research in Sec.~\ref{related}. Next, we introduce our proposed system in Sec.~\ref{pipeline}. Then, we show experiments on our module for unknown object classification in Sec.~\ref{unknown}, and on our visual question generation module in Sec.~\ref{question}. In Sec.~\ref{all_pipeline}, we evaluate our entire pipeline for acquiring the class of unknown objects. Finally, in Sec.~\ref{conclusion}, we discuss conclusions and future work.
\section{Related Works}\label{related}
First, we explain active learning, an information acquisition method that also considers human help for learning. Next, we introduce the research related to each of our modules, namely, object detection, unknown object classification, and visual question generation.
{\bf Active Learning.}
The aim of active learning is achieving efficient learning by automatically selecting data that seems to contribute the most to improve the performance of the classifier and requesting a human annotator to label them.
Uncertainty Sampling~\cite{least_confident} has been proposed to select the instances whose class is the least certain.
There are three methods for Uncertainty Sampling:
(1){\it Least Confident}~\cite{least_confident}:
Select the instance for which the classification probability of the most probable class is the smallest.
(2){\it Margin Sampling}~\cite{margin_sampling}:
Select the instance whose difference between the most and the second most classification probabilities is the smallest.
(3){\it Entropy Sampling}~\cite{entropy1,entropy2}:
Select the instance whose distribution of classification probabilities has the largest entropy.
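The three criteria can be sketched as uncertainty scores over a batch of class-probability vectors (a minimal numpy sketch; by convention here, a larger score means a more uncertain instance):

```python
import numpy as np

def least_confident(probs):
    """(1) Least Confident: low top-class probability = uncertain."""
    return 1.0 - probs.max(axis=1)

def margin_sampling(probs):
    """(2) Margin Sampling: small gap between the two most probable
    classes = uncertain (gap negated so larger means more uncertain)."""
    top2 = np.sort(probs, axis=1)[:, -2:]
    return top2[:, 0] - top2[:, 1]

def entropy_sampling(probs):
    """(3) Entropy Sampling: high entropy of the class distribution
    = uncertain."""
    return -np.sum(probs * np.log(probs + 1e-12), axis=1)
```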
The main difference between active learning and this research is that active learning targets only instances whose class is included in the training set, whereas we target instances whose class is not in the training set.
Also, active learning only presents data to the annotator; it does not generate questions.
{\bf Object Region Proposal.}
Object region proposal methods detect the region surrounding objects in an image.
Recent methods perform object detection that performs both object region proposal and object classification at the same time via supervised learning using CNN~\cite{fast_rcnn,yolo}.
These methods achieve accurate object detection with a huge amount of labeled data for training. However, they do not consider unknown objects.
In contrast, there is some research on {\it objectness}, which simply estimates the existence of objects in a specific region of the image without classifying them. Alexe et al.~\cite{objectness} perform objectness estimation by using saliency, contrast, edge, and superpixel information. Cheng et al.~\cite{bing} learn objectness from image gradients.
Also, a method called selective search~\cite{selective_recognition,selective_segmentation} allows object region proposal by using image segmentation, and integrating similar regions with each other. Since it does not require object labels, it can propose regions without learning the object class.
{\bf Unknown Object Classification.}\label{classification}
Unknown object classification performs binary classification of objects in an image as {\it known} or {\it unknown}.
Traditionally, object classification methods estimate the actual class of an object in an input image.
Recent object classification methods are CNN-based~\cite{resnet,vgg}.
These methods assume a {\it closed set}, that is, they only consider the classes included in training and not unknown classes.
On the other hand, there is research on the task called {\it open set recognition}~\cite{openset}, i.e., object classification that includes unknown objects.
Open set recognition aims at classifying an input into the correct class if it belongs to a trained class, and at classifying it as unknown otherwise.
For open set recognition, methods using SVM~\cite{openset} and methods extending the nearest neighbor method~\cite{openworld} have been proposed.
Also, Bendale et al.~\cite{openmax} proposed open set recognition using CNN. They classify an object as unknown if its feature distribution extracted from the CNN hidden layers is distant from known classes.
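A much-simplified open-set decision rule can be sketched as confidence thresholding. This is only a baseline stand-in: OpenMax itself re-calibrates the network activations with extreme-value statistics rather than simply thresholding the softmax output.

```python
import numpy as np

def open_set_predict(probs, threshold=0.6):
    """Predict the argmax class when its probability is confident enough,
    otherwise label the input as unknown (-1)."""
    pred = probs.argmax(axis=1)
    pred[probs.max(axis=1) < threshold] = -1
    return pred
```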
{\bf Visual Question Generation.}
Visual Question Generation (VQG) was recently proposed as an extension of image captioning. Whereas image captioning methods~\cite{showandtell} generate descriptive sentences about the content of an image, VQG methods generate questions (e.g., {\it What color is the car?}).
The common approach in VQG is encoding image features via CNN and generating a sentence by decoding those features using an RNN. Methods that use a gated recurrent unit (GRU)~\cite{vqg1} and a long short-term memory (LSTM)~\cite{vqg_creative} have been proposed.
Traditional VQG methods generate questions from the whole image, without focusing on any particular image region.
Only recently, methods that generate questions targeting a particular image region have been proposed.
Zhang et al.~\cite{groundedvqg} detect different regions to generate a variety of questions from the same image.
In contrast, Li et al.~\cite{disambiguate} generate questions focusing on a specific region with the goal of distinguishing between two images. For this, they input a target word (e.g., blue) related to the region as a condition to the LSTM. In~\cite{disambiguate}, target words are known classes learned in advance. To the best of our knowledge, VQG targeting an unknown object has not been approached yet.
\section{Proposed System}
\label{pipeline}
Fig.~\ref{fig:all_pipeline} shows the overview of the proposed method.
First, objects in the input image are detected by the object region proposal module.
Next, the unknown object classification and target selection module identifies whether each object is unknown or not, and selects an object region to be the target of the question. We refer to this region as the {\it target region}.
Finally, the visual question generation module generates a question using features extracted from the whole image and the target region.
\begin{figure}[tb]
\centering
\includegraphics[width=\hsize]{all_pipeline_7.pdf}
\caption{Overview of the proposed method.
First, regions from objects in the image (including unknown objects) are detected.
Then, unknown objects are classified and the target region is selected.
Finally, the target region along with the whole image is coded into a feature vector, and a question for the unknown object is generated}
\label{fig:all_pipeline}
\end{figure}
\subsection{Object Region Proposal}
Our object region proposal module detects all objects in the input image via selective search.
The proposed method needs to detect unknown objects (i.e., objects never learned before), so supervised learning is not an option since it requires labels for all objects.
As mentioned in Sec.~\ref{related}, selective search provides candidate regions for objects without supervised learning. Thus, unknown objects can also be detected, and the number of object regions can be reduced compared with an exhaustive search.
Selective search is therefore well suited for our object region proposal module.
\subsection{Unknown Object Classification and Target Selection}
This module selects the {\it target object}, that is, the object to acquire information about. For that, we classify objects into {\it known} or {\it unknown}, and then select the most salient unknown object. This prevents generating questions about unimportant regions that may have been proposed by mistake by the object region proposal module.
We define unknown object classification as follows: for an input object image, if its class is included in the training set, classify it to the correct class, and if not, classify it as unknown.
Specifically, we perform unknown object classification on the classification results of a CNN as follows.
The output of the softmax function of the CNN can be regarded as the confidence with which the input is classified into a certain class.
We consider that images of unknown objects result in a low confidence value for all classes.
That is, the more uniform the confidence distribution, the lower the confidence for all classes and the more likely the object is unknown.
Therefore, we perform unknown object classification by estimating the dispersion of the probability distribution using an entropy measure, with reference to the method of Uncertainty Sampling in active learning~\cite{entropy1,entropy2}.
The entropy measure $E$ is defined as:
\begin{equation}
E = - \sum_{j=1}^K p_j \log_2 p_j
\end{equation}
where $p_j$ is the output of the softmax function when a given input $x$ is classified into class $C_j$ $( j = 1, 2, ..., K )$.
$E$ takes the maximum value $\log_2K$ when all $p_j$ are equal, that is, when $p_j = 1/K$. Conversely, the larger the dispersion of the probability distribution, the smaller the entropy becomes.
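To make the decision rule concrete, here is a minimal sketch of the entropy measure and the resulting unknown/known decision. The threshold value and the function names are our assumptions; the paper only fixes the entropy formula itself.

```python
import math

def entropy(probs):
    """Entropy measure E of a softmax probability distribution (base 2)."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def is_unknown(probs, threshold):
    """Flag an input as unknown when its confidence distribution is close
    to uniform, i.e. when its entropy exceeds the (assumed) threshold."""
    return entropy(probs) > threshold
```

A uniform distribution over $K$ classes attains the maximum $\log_2 K$, while a confident, peaked distribution yields an entropy near zero.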
Also, it is necessary to select which object to generate a question about among the objects classified as unknown.
For example, in some cases, the region proposed by selective search contains only the background.
Background regions are likely to be classified as unknown, but they do not contain an object to ask about.
In order to solve this problem, we calculate the saliency of each proposed region in the image as a criterion for selecting the target region.
That is, we ask questions about objects that are unknown and particularly salient in the image.
Thus, to select salient objects in the image, we propose using a saliency map.
The saliency map is a plot obtained by estimating the saliency for each pixel in the image.
We calculate the saliency map using the method of Zhu et al.~\cite{saliency}.
This method estimates low saliency for background pixels and high saliency for foreground pixels. Therefore, it is considered to be suitable for this research.
First, we preprocessed the image by applying a mask based on the saliency map, and applied non-maximum suppression to reduce the large number of object regions.
Then, the saliency of each proposed object region is expressed by:
\begin{equation}
I_{region} = \sum_{I(p)\; \ge \; \theta}I(p) \times \frac{S_{salient}}{S_{region}}
\end{equation}
where $I(p)$ is the saliency value of each pixel, $\theta$ is the threshold value, $S_{salient}$ is the area in the region where the saliency exceeds $\theta$, and $S_{region}$ is the total area of the region. The threshold $\theta$ was determined using Otsu's method~\cite{otsu}.
The region with the highest saliency is selected as the target region.
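A sketch of how the region score above could be computed. The bounding-box representation, the array indexing convention, and the `select_target` helper are our assumptions; in the paper, $\theta$ is obtained with Otsu's method rather than given directly.

```python
import numpy as np

def region_saliency(saliency_map, bbox, theta):
    """Region score: summed saliency of the pixels above the threshold,
    weighted by the fraction of the region area they cover."""
    x_tl, y_tl, x_br, y_br = bbox
    region = saliency_map[y_tl:y_br, x_tl:x_br]
    if region.size == 0:
        return 0.0
    salient = region[region >= theta]  # pixels with I(p) >= theta
    return float(salient.sum() * salient.size / region.size)

def select_target(saliency_map, bboxes, theta):
    """Pick the proposed region with the highest saliency score."""
    return max(bboxes, key=lambda b: region_saliency(saliency_map, b, theta))
```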
\subsection{Visual Question Generation}\label{proposal:question}
Figure~\ref{fig:question_module} depicts the visual question generation module. We generate a question following the encoder-decoder methodology of Mostafazadeh et al.~\cite{vqg1} and Li et al.~\cite{disambiguate}. The encoder extracts visual features of both the entire image and the object region (submodule (a)), and embeds the target word into a word embedding vector representation (submodule (b)). The decoder takes the encoded features and generates a question via LSTM.
\begin{figure}[tb]
\centering
\includegraphics[width=0.7\hsize]{q_module4.pdf}
\caption{Overview of the VQG module. First, we obtain the common hypernym of the prediction class of the classifier as the target word. The target word and the image features are input as conditions to the LSTM, and the question is generated}
\label{fig:question_module}
\end{figure}
{\bf Encoding of Image Features.}
This submodule uses a pretrained CNN model to extract the features $f_I$ of the entire image and the features $f_R$ of the target region.
In our method, we use the output (a 1,000-dimensional vector) of the {\it fc} layer of ResNet152~\cite{resnet}.
Then, in order to express the spatial information of the target region, we follow the method by Li et al.~\cite{disambiguate} to define a five-dimensional vector $l_R$ as:
\begin{equation}
l_R = \left[\frac{x_{tl}}{W}, \:\frac{y_{tl}}{H}, \:\frac{x_{br}}{W}, \:\frac{y_{br}}{H}, \:\frac{S_R}{S_I} \right]
\end{equation}
where $(x_{tl}, \:y_{tl})$ and $(x_{br}, \:y_{br})$ are the upper-left and lower-right coordinates of the target region, $S_R$ and $S_I$ represent the area of the target region and of the entire image respectively, and $W$ and $H$ denote the width and the height of the image respectively.
We concatenate $f_I,\:f_R,\:l_R$, and let the 2,005-dimensional vector $f = \left[f_R, f_I, l_R \right]$ be the image feature encoding.
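A minimal sketch of this feature encoding, assuming the CNN features are already available as 1,000-d arrays and `bbox` holds the corner coordinates:

```python
import numpy as np

def encode_features(f_I, f_R, bbox, W, H):
    """Build the 2,005-d encoder output f = [f_R, f_I, l_R], where f_I and
    f_R are the ResNet152 fc features of the whole image and the target
    region, and bbox = (x_tl, y_tl, x_br, y_br)."""
    x_tl, y_tl, x_br, y_br = bbox
    S_R = (x_br - x_tl) * (y_br - y_tl)   # area of the target region
    S_I = W * H                           # area of the entire image
    l_R = np.array([x_tl / W, y_tl / H, x_br / W, y_br / H, S_R / S_I])
    return np.concatenate([f_R, f_I, l_R])
```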
{\bf Question Target.}\label{proposed:question}
This submodule selects a target word to represent the object in the target region and embeds it into a vector representation.
Since the target object is unknown, that is, is not in the trained classes, it is not possible to use the class label as the target word as in Li et al.~\cite{disambiguate}.
Therefore, we need to devise how to specify the target word.
For example, if we do not know the class dog, asking a question referring to an {\it animal} is natural (e.g., {\it ``What is this animal?''}).
In this case, the word {\it animal} is considered to be a hypernym of {\it dog}. Such a hypernym can be used as the target word.
We use WordNet~\cite{wordnet} to get the hierarchical relationship of words.
Each word in WordNet is hierarchically arranged based on semantic relationships, and thus, it is possible to get the hypernym of a word by going up in the hierarchy.
As shown in Fig.~\ref{fig:q_target}, we take the $k$ predicted classes ($pred_1, pred_2, \dots, pred_k$) with the highest confidence in the classification result, and select the lowest-level word among the common hypernyms of the $k$ class labels.
If the value of $k$ is too large, the common hypernym becomes a very abstract word such as {\it whole} or {\it entity}, and it is not possible to designate the target appropriately.
Therefore, the value of $k$ should be chosen carefully.
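The hypernym selection can be sketched on a toy is-a hierarchy standing in for WordNet. The `parent` dictionary and both helper names are hypothetical; a real implementation would walk WordNet's hypernym chains instead.

```python
def hypernym_path(word, parent):
    """Chain from `word` up to the root of a toy is-a hierarchy;
    `parent` maps each word to its direct hypernym."""
    path = [word]
    while word in parent:
        word = parent[word]
        path.append(word)
    return path

def lowest_common_hypernym(labels, parent):
    """Deepest word shared by the hypernym paths of all k predicted labels."""
    paths = [hypernym_path(label, parent) for label in labels]
    common = set(paths[0]).intersection(*paths[1:])
    # walking up the first path, the first common word is the lowest-level one
    return next(w for w in paths[0] if w in common)
```

With `parent = {"chihuahua": "dog", "dog": "animal", "tabby": "cat", "cat": "animal", "animal": "entity"}`, the common hypernym of {\it chihuahua} and {\it tabby} is {\it animal}, matching the example above.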
\begin{figure}[tb]
\centering
\includegraphics[width=0.8\hsize]{target_module2.pdf}
\caption{Overview of the question target module for target word selection using WordNet~\cite{wordnet}. WordNet is used to obtain a hypernym common to the predicted class labels with the highest confidence. The hypernym is then input to the visual question generation module}
\label{fig:q_target}
\end{figure}
Then, we use the Poincar\'e Embeddings~\cite{poincare} to embed words into feature vectors using a neural network similar to Word2Vec~\cite{word2vec1,word2vec2}. However, unlike Word2Vec, Poincar\'e Embeddings are suitable for expressing a structure in which words are hierarchically represented, such as WordNet, as a vector.
Let the target word embedded by Poincar\'e Embeddings be the vector $\sigma(v)$. Then the input to the decoder LSTM is the visual feature vector $f$, and the conditional input is the word embedded vector $\sigma(v)$.
The decoder LSTM is trained by minimizing the negative log likelihood:
\begin{equation}
L = \sum -\log p(Q \;|\;f, \;\sigma(v)\;;\; \theta) \label{eq:target_loss}
\end{equation}
where $\theta$ denotes the parameters of the LSTM and $Q$ denotes the generated question.
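For illustration, the per-question term of the loss above can be written as a token-level negative log-likelihood, the standard factorization for LSTM decoders (not spelled out in the text; `token_probs` is an assumed input holding the decoder's probability for each ground-truth token):

```python
import math

def question_nll(token_probs):
    """Negative log-likelihood of one question under the decoder: the sum
    of -log p(q_t | f, sigma(v), q_<t) over the ground-truth tokens."""
    return -sum(math.log(p) for p in token_probs)
```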
\section{Evaluation of the Unknown Object Classification}\label{unknown}
Before evaluating the entire pipeline we performed experiments on independent modules to study their performance. First, we evaluated how accurately the unknown object classification module can classify whether the input image is an unknown object image or not.
\subsection{Experimental Settings}
We used CaffeNet~\cite{alexnet}, VGGNet~\cite{vgg}, and ResNet152~\cite{resnet}, which are well-known CNN models, to study the variation in unknown classification accuracy when employing different classifiers.
We pretrained our classifier with the 1,000 class dataset used in the object classification task of ILSVRC2012.
We used 50,000 images of the same dataset for validation.
Then, we used the dataset in the object classification task of ILSVRC2010 to create the unknown dataset. We excluded all images whose class is {\it known} (i.e., included in the ILSVRC2012 dataset), as well as the images whose class is a hypernym of any {\it known} classes. The reason for removing hypernyms is to avoid including general classes (e.g., ``dog'') in the unknown dataset when the specific class is already known (e.g., ``chihuahua'').
Thus, we selected 50,850 images of 339 classes from the ILSVRC2010 dataset whose classes are neither included in ILSVRC2012 nor hypernyms of its classes.
\subsection{Methods}
We compare unknown object classification using entropy $E$, which is the proposed method, with the following two methods.
{\bf Least Confident}~\cite{least_confident}
We used the Uncertainty Sampling method described in Sec.~\ref{related}. We set a threshold on the softmax probability of the most probable class label. Then, if that probability is lower than the threshold, the input is classified as unknown.
{\bf Bendale et al.}~\cite{openmax}
We used the method of Bendale et al. mentioned in Sec.~\ref{classification}.
We performed this experiment based on the code published by the authors, changing the classification model to CaffeNet, VGGNet, and ResNet.
\subsection{Evaluation Metrics}
We calculated the F measure as:
\begin{equation}
F = \frac{2TP}{2TP+FP+FN}
\end{equation}
where $TP$ is defined as the number of known data classified into the correct class, $FN$ as the number of unknown data misclassified into known data, and $FP$ as the number of misclassified known data~\cite{openmax}.
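A direct transcription of the F measure above, with the counts defined as in the text:

```python
def f_measure(tp, fp, fn):
    """F measure: tp = known inputs classified correctly, fp = misclassified
    known inputs, fn = unknown inputs misclassified as known."""
    return 2 * tp / (2 * tp + fp + fn)
```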
We performed the evaluation using a five-fold cross-validation.
Also, we measured the execution time of each method.
First, we measured the time required by the classifier to calculate the softmax probability distribution of one image. Next, we measured the calculation time per image for each method, taking the softmax probability distributions of 100 images as input.
We repeated this operation five times and calculated the average execution time.
\subsection{Experimental Results}
Table~\ref{table:f_result} shows the resulting F measure per classifier and method.
\begin{table}[t]
\centering
\caption{Comparison of the proposed unknown object classification method in terms of F measure results $\pm$ standard error. We performed experiments on CaffeNet, VGGNet, and ResNet. In all three cases, the proposed method outperformed the other methods}
\label{table:f_result}
\begin{tabular}{c|c|c|c}
\hline
& \multicolumn{3}{c}{F measure} \\ \hline
& CaffeNet & VGGNet & ResNet \\ \hline \hline
Ours & ${\bf 0.526\pm1.1\cdot10^{-3}}$ & ${\bf 0.602\pm0.2\cdot10^{-3}}$ & ${\bf 0.654\pm0.9\cdot10^{-3}}$ \\ \hline
Least Confident & $0.522\pm1.1\cdot10^{-3}$ & $0.590\pm1.5\cdot10^{-3}$ & $0.635\pm1.2\cdot10^{-3}$ \\ \hline
Bendale et al.~\cite{openmax} & $0.524\pm0.9\cdot10^{-3}$ & $0.553\pm0.6\cdot10^{-3}$ & $0.624\pm1.7\cdot10^{-3}$ \\ \hline
\end{tabular}
\end{table}
\begin{table}[t]
\caption{Comparison of the proposed unknown object classification method in terms of execution time, with CaffeNet as a classifier. We performed classification for 100 images and showed the average time per image $\pm$ standard error}
\label{table:time}
\begin{center}
\begin{tabular}{c|c}
\hline
&time (sec/image)\\
\hline\hline
Ours&$0.0400\pm0.0017$\\
\hline
Least Confident&${\bf 0.0365\pm0.0019}$\\
\hline
Bendale et al.~\cite{openmax}&$15.6\pm0.7$\\
\hline
\end{tabular}
\end{center}
\end{table}
For all three methods, the F measure increased as the classifier was changed from CaffeNet to VGGNet, and to ResNet.
The more accurate the classifier, the more peaked the distribution of classification probabilities when inputting known classes, and thus the smaller the entropy.
Conversely, when inputting unknown classes, the more accurate the classifier, the more uniform the distribution of classification probabilities, and thus the larger the entropy.
Table~\ref{table:time} shows a comparison of the execution time for each method when CaffeNet is used as a classifier. Our method and the Least Confident method take much less time than the method of Bendale et al.
This is because the method of Bendale et al. must calculate, for each image, the distance to the mean distribution of each of the 1,000 known classes, which is computationally expensive. In contrast, the entropy-based method and the confidence-threshold method operate only on the probability distribution of the input image, which shortens the calculation time.
\section{Evaluation of the Visual Question Generation}\label{question}
We studied the performance of the proposed visual question generation module given a target region and compared to other methods.
\subsection{Datasets}\label{unknown_dataset}
In this experiment, we used the Visual Genome dataset~\cite{visual_genome}, which contains about 100,000 images along with captions and questions.
There is a subset of questions that is associated with a specific region in the image.
We preprocessed the data as follows.
First,
we removed questions not beginning with {\it ``What.''}
Furthermore, since questions about colors are not the goal of our method, we also removed questions beginning with {\it ``What color.''}
Next, for questions associated with a specific region in the image, if the word representing the object in the region was included in the answer of the question, that word was taken as the target word.
For questions not associated with an image region, we searched for the object included in the answer among all objects in the image. Then, if there was only one instance of the object in the image, that word and the region where the object exists were set as the target word and the target region corresponding to the question.
Furthermore, in order to eliminate the imbalance in the type of questions in the data, we limited to 50 the maximum number of times the same question can be included.
Through this preprocessing, we gathered 202,208 questions corresponding to specific target regions in the image (one question per region) and 528 target words.
\subsection{Methods}\label{experimental}
We split the 202,208 questions into 179,997 questions for training, 10,002 questions for validation, and 12,209 questions for testing. At training time, questions were generated by inputting images, regions, and target words. For embedding target words, we used Poincar\'e Embeddings trained on the tree structure of WordNet.
We used the following methods as a baseline to compare our proposed method.
{\bf CNN + LSTM.}
As in Mostafazadeh et al.~\cite{vqg1}, we generated questions by inputting only the features of the entire image encoded by a CNN.
{\bf Retrieval}.
Following Mostafazadeh et al.~\cite{vqg1}, we also used a retrieval method as a baseline. First, we extracted features of the target regions in the training images using the {\it fc} layer of ResNet152. Then we retrieved the $m$ regions with the highest cosine similarity between their features and those of the input target region.
Then, for each question associated with the retrieved regions, we calculated the similarity with the other $m-1$ questions using the BLEU score~\cite{bleu}, which measures textual similarity.
Finally, the question with the highest BLEU score, that is, the most representative question, was taken as the final output.
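The first step of this retrieval baseline, the cosine-similarity search over region features, can be sketched as follows; the BLEU-based selection of the representative question is omitted, and the function name is our own.

```python
import numpy as np

def retrieve_top_m(query, train_feats, m):
    """Indices of the m training regions whose feature vectors have the
    highest cosine similarity to the query region's features."""
    q = query / np.linalg.norm(query)
    T = train_feats / np.linalg.norm(train_feats, axis=1, keepdims=True)
    return np.argsort(-(T @ q))[:m]   # sort by descending similarity
```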
\subsection{Evaluation Metrics}
In our experiments, we use BLEU~\cite{bleu} and METEOR~\cite{meteor} for measuring the similarity between the automatically generated questions and the ground truth. The larger the value, the more accurate the result.
Besides the automatic evaluation, we also performed human evaluation via Amazon Mechanical Turk (AMT)\footnote{https://www.mturk.com/}.
We presented an image with the target region and the target word to the human workers. We asked workers to blindly evaluate each method and the ground truth using a score between 5 (best) and 1 (worst). We used two criteria for evaluation: (1) whether each question is expressed naturally, and (2) whether each question is related to the target region and the target word.
For the human evaluation, we used questions generated for 100 images extracted randomly from the test data.
\begin{table}[tb]
\caption{Comparison between our method and the baseline in terms of automatic evaluation metrics. The proposed method outperformed baseline methods}
\label{table:auto_result1}
\begin{center}
\begin{tabular}{c|ccccc}
\hline
&BLEU-1&BLEU-2&BLEU-3&BLEU-4&METEOR\\
\hline\hline
Ours&{\bf0.518}&{\bf0.359}&{\bf0.244}&{\bf0.175}&{\bf0.197}\\
\hline
CNN + LSTM&0.456&0.296&0.175&0.110&0.163\\
\hline
Retrieval&0.438&0.275&0.157&0.094&0.151\\
\hline
\end{tabular}
\end{center}
\end{table}
\begin{figure}[tb]
\begin{minipage}[b]{0.47\linewidth}
\centering
\includegraphics[width=\linewidth]{q_natural_result.pdf}
\caption{(1) Human evaluation results on the naturalness of questions }\label{fig:q_amt_natural}
\end{minipage}
\begin{minipage}{0.02\hsize}
\hspace{2mm}
\end{minipage}
\begin{minipage}[b]{0.47\linewidth}
\centering
\includegraphics[width=\linewidth]{q_region_result.pdf}
\caption{(2) Human evaluation results on the relevance of questions to their region}\label{fig:q_amt_ref}
\end{minipage}
\end{figure}
\subsection{Experimental Results}
As shown in Table~\ref{table:auto_result1}, the proposed method outperformed the baselines for all metrics.
This result suggests that inputting the target object (visual features and target word condition) to the decoder LSTM allows generating more accurate questions.
Figures~\ref{fig:q_amt_natural} and \ref{fig:q_amt_ref} show the results of the human evaluation of our method compared to baselines.
From the viewpoint of the naturalness of the question, the difference between the proposed method and CNN + LSTM was small.
We believe the reason is that both methods use LSTM as the decoder.
When evaluating the relevance of the question to the target region, the proposed method outperformed the baselines. This is because CNN + LSTM does not specify a target object to the decoder, so it may generate questions that are related to the image but not to the target region.
Also, Retrieval can generate only questions existing in the training data, so the variety of questions is limited, and thus, it may not generate questions related to the target region.
\section{Evaluation of VQG for Unknown Objects}\label{all_pipeline}
Lastly, we performed experiments using the whole pipeline, in which we generate questions to acquire knowledge about the class of unknown objects in the image.
\subsection{Datasets}
In order to test our VQG method for unknown objects, we used images that include unknown objects extracted from the following two datasets.
First, from the test set of Visual Genome, we extracted 50 images with unknown objects, that is, not included in the 1,000 classes of ILSVRC2012.
Also, from the dataset of the ILSVRC2010, 50 images of 339 unknown classes as described in Sec.~\ref{unknown} were extracted.
In the images from the Visual Genome dataset, target regions contain an average of 8.7 objects, including small objects like ``eye'' and ``button''. According to our method, 68.4\% of those objects were unknown. Note that we cannot objectively report the number of objects in the images from the ILSVRC2010 dataset, since its ground truth does not include object regions.
\subsection{Methods}
The classifier used for unknown object classification was ResNet152, which is the method with the highest accuracy in Sec.~\ref{unknown}. We pretrained ResNet152 with the 1,000 class data used in the object classification task of ILSVRC2012. The visual question generation module was pretrained with the dataset created in Sec.~\ref{unknown_dataset}.
Furthermore, as described in Sec.~\ref{proposed:question}, when choosing a hypernym common to the top $k$ classification results, if the value of $k$ is too large, the target word becomes too abstract.
Therefore, we performed experiments with two settings, $k=2$ and $k=3$.
As baseline methods, we used the CNN + LSTM method and the Nearest Neighbors Retrieval method described in Sec.~\ref{question}.
\subsection{Evaluation Metrics}
Since there is no ground truth in this experiment, it is not possible to perform automatic evaluation by comparison with the ground truth. Therefore, we performed only human evaluation via AMT, which consists of the following two tasks.
(1) We presented three workers with images and the questions generated automatically by our method and the baselines, and asked them to answer the generated questions. When they could not understand the meaning of a question, we instructed them to answer {\it ``Do not understand.''} Note that this task did not present the target region.
(2) Also, we evaluated the question and the answer obtained in task (1).
Specifically, we presented three workers with the question, the answer of each worker in task (1), and the image with the target region, and asked them whether the question and the answer are related to the target region, on a 5-point scale.
We evaluated only answers different from {\it ``Do not understand.''}
As the evaluation value for task (2), we used the median of the evaluation values of the three workers.
Lastly, we evaluated to what extent the generated questions are able to successfully acquire information on unknown objects. We counted only the questions whose answers (task (1)) are not included in the known classes of the classifier, and the relevance of the question and target region in the image (task (2)) is four or more.
\subsection{Experimental Results}
\begin{figure}[tb]
\centering
\includegraphics[width=\linewidth]{q_all_result_cameraready.pdf}
\caption{Examples of input images (upper), the target words and generated questions by our proposed VQG method for unknown objects (middle), and the generated questions by the \textit{CNN + LSTM} and \textit{retrieval} baselines (lower).}
\label{fig:all_result}
\end{figure}
Fig.~\ref{fig:all_result} shows our qualitative results.
In Fig.~\ref{fig:all_result} (a) and (b), when $k=2$, the target word is a concrete word (i.e., {\it ``camera''} and {\it ``garment''}), and the generated question refers to an object in the target region.
In the case of $k=3$, the target word is an abstract word such as {\it ``equipment''} and {\it ``artifact''}, and the generated question is not related to the region.
Fig.~\ref{fig:all_result} (c) shows an example where the object region proposal is not performed properly, and thus, it is not possible to generate the question accurately.
The lower part of the image shows examples of questions generated by the baselines.
\begin{figure}[tb]
\begin{tabular}{lcr}
\begin{minipage}{0.50\textwidth}
\centering
\includegraphics[width=1.0\hsize]{all_result.pdf}
\caption{Comparison of our method with the baseline in terms of the human evaluation in task (2). Task (2) evaluates whether or not the generated question, the image region, and the obtained answer are related. The greater the score, the higher the relevance.}
\label{fig:amt_all1}
\end{minipage}&
\begin{minipage}{0.02\textwidth}
~
\end{minipage}
\begin{minipage}{0.48\textwidth}
\centering
\makeatletter
\def\@captype{table}
\makeatother
\caption{The number of generated questions that successfully allowed acquiring information on unknown objects (out of 300). We counted only the questions whose answers (task (1)) are not included in the known classes of the classifier, and the relevance of the question and target region in the image (task (2)) is four or more.}
\label{table:unknown}
\begin{tabular}{c|c}\hline
Ours($k=2$) & {\bf 61} \\
Ours($k=3$) & 49 \\
CNN + LSTM & 46 \\
Retrieval & 45 \\ \hline
\end{tabular}
\end{minipage}
\end{tabular}
\end{figure}
Figure~\ref{fig:amt_all1} shows the results of the human evaluation. The answer {\it ``Do not understand''} in task (1) is shown as {\it ``no answer.''} The average of evaluation values is calculated by assigning {\it ``no answer''} a score of 0.
The proposed method outperformed the baseline in terms of relevance to the region and relevance to the answer. The reason is that the proposed method specifies a target object to the LSTM to generate the question, whereas the baselines do not consider any target.
Regarding the number $k$ of class labels used to select the target word, the average score is higher when $k=3$, but the ratio of the highest score 5 is higher when $k=2$.
Also, the proportion of {\it ``no answer''} is higher when $k=2$. This means that, when $k=3$, the target word becomes more generic and the relevance with the target region is less clear than when using $k=2$. On the other hand, a value of $k=2$ is more likely to specify a wrong target word for the visual question generation.
Table~\ref{table:unknown} shows the number of generated questions that successfully allowed acquiring information on unknown objects. We consider successful those questions whose answers were included neither in the known classes of the classifier nor in their hypernyms, and whose relevance score in task (2) was 4 or more.
We obtained the highest number of successful questions using our method with $k=2$, since the selected target word is more concrete.
On the other hand, our method with $k=3$ generates questions for a more generic target word, and thus, it does not necessarily get the expected answer, but is still partly related to the target region.
We can conclude that the proposed method can successfully generate questions that allow acquiring information about unknown objects in an image.
\section{Conclusions}\label{conclusion}
In this paper, we presented a novel visual question generation (VQG) task to acquire class information of unknown (i.e., not previously learned) objects, and proposed a method that automatically generates questions targeting unknown objects in the image.
To the best of our knowledge, this is the first research that approaches acquiring unknown information via VQG.
The evaluation of our method shows that it can successfully acquire class information of unknown objects from humans. We believe this research will help other researchers in tackling this novel task.
Our future work includes feeding back the acquired information about the unknown object to the system, and learning it as new knowledge.
For example, our method could be combined with recent works in few-shot learning and incremental learning to re-train the classifier with the new class.
In addition, the answers obtained from humans are expected to be noisy, so a system that can use noisy answers for re-training is necessary.
If the answer obtained from humans is not the expected one, it can be useful to generate multiple follow-up questions as necessary.
\section{Acknowledgement}
This work was supported by JST CREST Grant Number JPMJCR1403, Japan.
\bibliographystyle{splncs}
\section{Introduction}
The variable order structure \cite{gabrielle-book} is a natural extension of the well-known fixed (partial) order given by a closed, pointed and convex cone. This kind of orderings models situations in which the comparison between two points depends on a set-valued application.
This research focuses on vector optimization problems on variable ordered spaces and their applications. These problems have recently received much attention from the optimization community due to their broad applications in several different areas.
{Variable order structures (VOS) given by a point-to-cone valued application were well studied in \cite{gabrielle-book, gabi2, ap13}, motivated by important applications. VOS appear in medical diagnosis \cite{Eichfelder-med},
portfolio optimization \cite{ap31}, capability theory of wellbeing \cite{bao-mord-soub2}, psychological modeling \cite{bao-mord-soub}, consumer preferences \cite{John1, John2}, location theory, etc.; see, for instance, \cite{ap2, ap13}.} The main goal of these models is to find an element of a certain set such that the evaluation of the objective function cannot be improved by the image of any other feasible point with respect to the variable order. Their mathematical description corresponds to the so-called Optimization Problem(s) on Variable Ordered Spaces (OPVOS(s)). {For the above reasons}, it is important to find efficient solution algorithms for solving these kinds of models.
OPVOSs have been treated in \cite{ap11}, in the sense of finding a minimizer of the image of a vector function, with respect to an ordered structure depending on points
in the image. It is a particular case of the problem described in \cite{gabi2}, where the goal of the model is to find a minimum of a set.
Here we will consider a partial (variable) order defined by a cone-valued application, which is used to define our problem, OPVOS. We want to point out that OPVOSs {generalize the classical vector optimization problems}: indeed, the classical case corresponds to an order defined by a constant cone-valued application. Many approaches have been proposed to solve the classical constrained vector optimization problem, such as projected gradient methods, proximal point iterations, weighting techniques, Newton-like and subgradient methods; see, for instance, \cite{yunier-2013,yunier-luis-2014,BIS,grana-maculan-svaiter,fliege-grana-svaiter,jahn0, [21], luis-jef-yun, [20], luc1,fliege-svaiter,grana-svaiter}. Extending these iterative algorithms to the variable ordering setting is currently a promising idea. It is worth noting that, as far as we know, only a few of the schemes mentioned above have been proposed and studied in the variable ordering setting, e.g., the steepest descent algorithm and subgradient-like algorithms for unconstrained problems; see, for instance, \cite{nous, luis-gema-yunier-2014}.
In this work,
due to its simplicity and the adaptability to the structure of the vectorial problem,
we present an inexact projected gradient method for solving constrained variable order vector problems. The properties of the accumulation points of the generated sequence are studied and its convergence is also analyzed under convexity. Moreover, we derive the convergence of the exact projected gradient method and the inexact projected gradient one for the unconstrained problem. Finally, analogous results are obtained if the variable order is given by a point-to-cone application whose domain coincides with the image of the objective function.
This work is organized as follows. The next section provides some notations and preliminary results that will be used in the remainder of this paper. We also recall the concept of $K$--convexity of a function on a variable ordered space and present
some properties of this class. Section $3$ is devoted to the presentation of {the inexact projected gradient algorithm}.
The convergence of the sequence generated by the projected gradient method is shown in Section $4$. Then, under the $K$--convexity of the objective function and the convexity of the set of feasible solutions, we guarantee that the generated sequence is bounded and all its accumulation points are
solutions of the variable order problem. Section $5$ discusses the properties of this algorithm when the variable order is taken as a cone-value set from the image of the objective function. Section $6$ introduces some examples illustrating the behavior of both proposed methods. Finally, some final remarks are given.
\section{Preliminaries}
In this section {we present some preliminary results and definitions.}
First we introduce some useful notations: Throughout
this paper, we write $p:=q$ to indicate that $p$ is defined to be
equal to $q$ and we
write $\ensuremath{\mathbb N}$ for the nonnegative integers $\{0, 1, 2,\ldots\}$. The inner product in $\ensuremath{\mathbb R}^n$ will be denoted by $\langle\cdot, \cdot \rangle$ and the induced norm by $\|\cdot\|$. The closed ball centered at $x$ with radius $r$ is represented by
$\mathbb{B}(x,r):=\{y\in\ensuremath{\mathbb R}^n:{\mbox{dist}(x,y):=\|y-x\|}\leq r\}$ and also the sphere by $\mathbb{S}(x,r):=\{y\in\mathbb{B}(x,r):\mbox{dist}(x,y)= r\}$. Given two bounded sets $A$ and $B$, we will consider
$\mbox{d}_H(A,B)$ as the {\em Hausdorff} distance, \emph{i.e.} $$\mbox{d}_H(A,B) := \max\left\{\,\sup_{a \in A} \inf_{b \in B} \mbox{dist}(a,b),\, \sup_{b \in B}
\inf_{a \in A} \mbox{dist}(a,b)\,\right\},$$ or equivalently
$
\mbox{d}_H(A,B)=\inf\{\epsilon\ge0 : A\subseteq B_\epsilon\quad \mbox{and}\quad B\subseteq A_\epsilon \},
$ where $$D_\epsilon:=\displaystyle\cup_{ d\in D}\{x\in\ensuremath{\mathbb R}^n : \mbox{dist}(d,x)\le \epsilon\}$$ is the $\epsilon$--enlargement of any set $D$.
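For finite point sets the sup--inf definition above is straightforward to evaluate; the following sketch (illustrative data, names hypothetical) computes $\mbox{d}_H$ directly from it.

```python
import numpy as np

def hausdorff(A, B):
    """Hausdorff distance between finite point sets A, B (rows are points),
    computed from the sup-inf definition."""
    d_ab = max(min(np.linalg.norm(a - b) for b in B) for a in A)
    d_ba = max(min(np.linalg.norm(a - b) for a in A) for b in B)
    return max(d_ab, d_ba)

A = np.array([[0.0, 0.0], [1.0, 0.0]])
B = np.array([[0.0, 0.0], [3.0, 0.0]])
d = hausdorff(A, B)  # equals 2.0: (3,0) lies at distance 2 from its nearest point of A
```

Equivalently, $d$ is the smallest $\epsilon$ such that each set is contained in the $\epsilon$--enlargement of the other.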
The set $D^c$ denotes the complement of $D$ and its interior is denoted by ${\rm int}(D)$. {{${\rm conv}(D)$ is used for the convex hull of $D$, i.e., the intersection of all convex sets containing $D$. If $D$ is closed and convex, we define the orthogonal projection of $x$ onto $D$, denoted by $P_D(x)$, as the unique point in $D$ such that $\|P_D(x)-x\| \le \|y-x\|$ for all $y \in D$}. Given the partial order structure induced by a cone $\mathcal{K}$,
the concept of infimum of a sequence can be defined. Indeed, for a
sequence $(x^k)_{k\in\ensuremath{\mathbb N}}$ and a cone $\mathcal{K}$, the point
$x^*$ is $\inf_k \{x^k\}$ iff $(x^k-x^*)_{k\in\ensuremath{\mathbb N}}\subset
\mathcal{K}$ and there is no $x$ such that {$x-x^*\in \mathcal{K}$, $x\neq x^*$}
and $(x^k- x)_{k\in\ensuremath{\mathbb N}}\subset \mathcal{K}$. We say that $\mathcal{K}$ has the {\em Daniell} property if, for every sequence $(x^k)_{k\in\ensuremath{\mathbb N}}$ such that $(x^k-x^{k+1})_{k\in\ensuremath{\mathbb N}}\subset \mathcal{K}$ and $(x^k-\hat x)_{k\in\ensuremath{\mathbb N}}\subset \mathcal{K}$ for some $\hat{x}$,
it holds that $\lim_{k\rightarrow\infty}x^k=\inf_k \{x^k\}$.
Here we assume that $K(x)$, $x\in\ensuremath{\mathbb R}^n$, is a convex, pointed and closed cone, which guarantees that $K(x)$ has the Daniell property, as shown in \cite{theluc}.
For each $x\in \mathbb{R}^n$, the dual cone of $K(x)$ is defined as
$K^*(x):=\{w\in \mathbb{R}^m: \langle w,y\rangle\geq 0, \text{ for all }y\in K(x)\}$.
As usual, the graph of a {set-valued map $K: \ensuremath{\mathbb R}^n\rightrightarrows\ensuremath{\mathbb R}^m$} is the set $Gr(K):=\{(x,y)\in \mathbb{R}^n\times \mathbb{R}^m:\; y\in K(x)\}.$ Finally, we recall that the mapping $K$ is
closed if
$Gr(K)$ is a closed subset of $\mathbb{R}^n\times \mathbb{R}^m$.
Next, we will define the constrained vector optimization problem on variable ordered spaces, which consists in finding a $K$--minimizer of the vector function $F:\ensuremath{\mathbb R}^n\to \ensuremath{\mathbb R}^m$ on the set $C$:
\begin{equation}\label{(P)}K-\min F(x),\quad x\in C.\end{equation}
Here $C$ is a nonempty convex and closed subset of $\ensuremath{\mathbb R}^n$ and $K:\mathbb{R}^n\rightrightarrows \mathbb{R}^m$ is a point-to-cone map, where for each $x\in \mathbb{R}^n${,
$K(x)$} is a {pointed, convex and closed cone with nonempty interior}. We say that the point $x^*\in C$ is a minimizer of problem \eqref{(P)} if for all $x\in C$, $$F(x)-F(x^*)\notin {-}K(x^*)\setminus\{0\}.$$
The set of all minimizers {(or efficient solutions)} of problem \eqref{(P)} is denoted by $S^*$.
As in the case of classical vector optimization, related solution concepts such as weakly efficient and stationary points can be extended to the constrained setting.
The point $x^*\in C$ is a weak solution of problem \eqref{(P)} {iff} for all $x\in C$, $F(x)-F(x^*)\notin -{\rm int}(K(x^*))$; the set of all weak solutions is denoted by $S^w$.
We want to point out that this definition corresponds to the concept of weak minimizer given in \cite{gabi2}. On the other hand, if $F$ is a continuously differentiable function, the point $x^*$ is stationary iff for all $d\in C-x^*:=\{d\in\mathbb{R}^n:\; d=c-x^*,\text{ for some }c\in C\}$, we have
\begin{equation}\label{stationary-inclusion} J_F(x^*)d\notin -{\rm int}(K(x^*)),\end{equation} where $J_F$ denotes the Jacobian {matrix} of $F$.
The set of all stationary points will be denoted by $S^{s}$.
Now we present a constrained version of Proposition~2.1 of \cite{nous}{, which is an extension of Lemma $5.2$ of \cite{[20]} from vector optimization.}
\begin{proposition}\label{primal}
Let $x^*$ be a weak solution of problem \eqref{(P)}. If $F$ is a continuously differentiable function, then $x^*$ is a stationary point.
\end{proposition}
\begin{proof} Suppose that $x^*$ is a weak solution of problem \eqref{(P)}. Fix $d\in C-x^*$. By definition there exists $c\in C$, such that $d=c-x^*$.
Since $C$ is a convex set, for all $\alpha\in[0,1]$, $x^*+\alpha d\in C.$ Since $x^*$ is a weak solution of problem \eqref{(P)}, $F(x^*+\alpha d)-F(x^*)\notin -{\rm int}(K(x^*))$. Hence,
\begin{equation}\label{**}F(x^*+\alpha d)-F(x^*)\in (-{\rm int}(K(x^*)))^c.
\end{equation}
The Taylor expansion of $F$ at $x^*$ leads us to $F(x^*+\alpha d)=F(x^*)+\alpha J_F(x^*)d +o(\alpha).$ The last equation together with \eqref{**} implies
$\alpha J_F(x^*)d +o(\alpha)\in (-{\rm int}(K(x^*)))^c.$
Using that $(-{\rm int}(K(x^*)))^c$ is a closed cone, and since $\alpha> 0$, it follows that $$ J_F(x^*)d +\frac{o(\alpha)}{\alpha}\in (-{\rm int}(K(x^*)))^c.$$
Taking limit in the above inclusion, when $\alpha$ goes to $0$, and using the closedness of $(-{\rm int}(K(x^*)))^c$, we obtain that $ J_F(x^*)d\in (-{\rm int}(K(x^*)))^c,$
establishing that $x^*\in S^s$.\end{proof}
In classical optimization, {stationarity is also a sufficient condition for weak} minimality {under convexity assumptions}. For vector optimization problems on variable ordered spaces, the convexity concept was introduced in Definition $3.1$ of \cite{nous} as follows:
\begin{definition}
We say that $F$ is a $K$--convex function {on} $C$ if for all $\lambda\in [0,1]$, $x,\bar{x}\in C$,
$$F(\lambda x+(1-\lambda)\bar{x})\in \lambda F(x)+(1-\lambda)F(\bar{x})- K(\lambda x+(1-\lambda)\bar{x}).$$
\end{definition}
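For instance, taking $m=1$ and the constant order $K(x)\equiv\ensuremath{\mathbb R}_+$, the inclusion in the definition reduces to the classical convexity of a scalar function:

```latex
F(\lambda x+(1-\lambda)\bar{x})\in \lambda F(x)+(1-\lambda)F(\bar{x})-\ensuremath{\mathbb R}_+
\quad\Longleftrightarrow\quad
F(\lambda x+(1-\lambda)\bar{x})\le \lambda F(x)+(1-\lambda)F(\bar{x}).
```

More generally, a constant cone-valued mapping $K(x)\equiv K$ recovers the usual notion of $K$-convexity from vector optimization.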
It is worth noting that in the variable order setting the convexity of {${\rm epi}{(F)}:=\{(x,y)\in\ensuremath{\mathbb R}^n\times\ensuremath{\mathbb R}^m \,|\, F(x)\in y - K(x)\}$} is equivalent to the $K$--convexity of $F$ iff $K(x)\equiv K$ for all $x\in \ensuremath{\mathbb R}^n$; see Proposition $3.1$ of \cite{nous}.
As already shown in \cite{nous}, $K$--convex functions have directional derivatives under natural assumptions; see Proposition $3.5$ of \cite{nous}. In particular, if $Gr(K)$ is closed and $F\in \mathcal{C}^1$ is $K$--convex, then the following gradient inclusion holds: $$F(x)-F(\bar{x})\in J_F(\bar{x})(x-\bar{x})+K(\bar{x}), \quad x\in C,\; \bar{x}\in C.$$
In the next proposition, we study the relation between stationarity, descent directions and weak solution concept in the constrained sense for problem \eqref{(P)} {extending to the variable order setting the results presented in Proposition $1$ of \cite{[21]} and Lemma 5.2 of \cite{[20]}.}
\begin{proposition}\label{note1}
Let $K$ be a closed point-to-cone mapping, and let $F\in \mathcal{C}^1$ be a $K$--convex function. Then:
\item[ {\bf(i)}] The point $x^*$ is a weak solution of problem \eqref{(P)} iff it is a stationary point.
\item[ {\bf(ii)}] If for all $d\in C-x^*$, $ J_F(x^*)d\notin -K(x^*)\setminus\{0\}$, then $x^*$ is a minimizer.
\end{proposition}
\begin{proof} {\bf (i):} Let $x^*\in S^s$, where $S^s$ is the set of the stationary points. If $x^*$ is not a weak minimizer then there exists $x\in C$ such that
$- k_1:=F(x)-F(x^*)\in -{\rm int}(K(x^*))$.
By the convexity of $F$, for some $k_2\in K(x^*)$, we have
\begin{equation*} -k_1=F(x)-F(x^*)= J_F(x^*)(x-x^*)+k_2.\end{equation*} It follows from {the above equality} that
\begin{equation}
\label{stationarity}
J_F(x^*)(x-x^*)=-(k_1+k_2).\end{equation}
Moreover, since $ K(x^*)$ is a convex cone, $k_1\in {\rm int}(K(x^*))$ and $k_2\in K(x^*)$, it holds that $k_1+k_2\in {\rm int}(K(x^*))$.
Thus, {the last two relations} imply that $ J_F(x^*)(x-x^*)\in - {\rm int}(K(x^*))$, which contradicts the fact that $x^*$ is a stationary point because $x$ belongs to $C$ and hence $x-x^*\in C-x^*$.
The converse implication was already shown in Proposition \ref{primal}.
\medskip
\noindent {\bf (ii):} By contradiction suppose that there exists $x\in C$ such that $F(x)-F(x^*)=-k_1,$ where $k_1\in K(x^*)\setminus \{0\}.$
By the $K$--convexity of $F$, there exists $k_2\in K(x^*)$ such that, as in \eqref{stationarity}, $$ J_F(x^*)(x-x^*)=-(k_1+k_2)\in -K(x^*).$$
Using that $ J_F(x^*)(x-x^*)\notin -K(x^*)\setminus \{0\}$, we get that $(k_1+k_2)= 0$, and as $k_1,k_2\in K(x^*)$, $k_1=-k_2$. It follows from the pointedness of the cone $K(x^*)$ that $k_1=k_2=0$, contradicting the fact that $k_1\neq 0$.
\end{proof}
It is worth mentioning that the concept of $K$--convexity of $F$ depends on the point-to-cone mapping $K$. Thus, this general approach covers several convexity concepts, from the scalar setting to the vector one, and it can be used to model a large number of applications; see, for instance, \cite{bao-mord-soub, bao-mord-soub2, Eichfelder-med}. In Section $5$ we discuss another variable order, in which the point-to-cone mapping depends on the image set of $F$; such variable orders were introduced and studied in \cite{luis-gema-yunier-2014,nous}.
{The Inexact Projected Gradient Method to solve problem \eqref{(P)} is presented in the next section.}
\section{The Inexact Projected Gradient Method}
This section is devoted to presenting an inexact projected gradient method for solving constrained smooth problems equipped with a variable order. This method uses an Armijo-type line-search, performed along inexact descent feasible directions. The scheme proposed here has two main differences with respect to the approach introduced in \cite{nous}: {(i) it solves constrained problems; (ii) it accepts approximate solutions of
subproblem \eqref{ee} within some tolerance.}
We start by presenting some definitions and basic properties of auxiliary functions and sets, which will be used in the convergence analysis of the proposed method.
Firstly, we define the set valued mapping $G:\mathbb{R}^n\rightrightarrows \mathbb{R}^m$, which for each $x$, defines the set of the normalized generators of
$K^*(x)$, \emph{i.e.} $G(x)\subseteq K^*(x)\cap\mathbb{S}(0,1)$ is a compact set such that the cone generated by its convex hull is $K^*(x)$.
Although the set $K^*(x)\cap \mathbb{S}(0,1)$ itself fulfils those properties, in general it is possible to take smaller sets; see, for instance, \cite{jahn, jahn1, luc}. On the other hand, assuming that $F\in \mathcal{C}^1$, we consider {the support} function {$\rho:\mathbb{R}^n\times\mathbb{R}^m\to \mathbb{R}$ as
\begin{equation}\label{Drho}\rho(x,w):=\max_{y\in G(x)} y^T w.\end{equation}
The function $\rho(x,w)$ was extensively studied for vector optimization in {Lemma 3.1 of \cite{[20]}} and it allows us to define the auxiliary function
$\phi:\mathbb{R}^n\times\mathbb{R}^n\to \mathbb{R}$, as
\begin{equation}\label{asconsec}\phi(x,v):=\max_{y\in G(x)} y^T J_F(x)v.\end{equation}
Then, we are ready to introduce the following auxiliary subproblem,
for each $x\in \mathbb{R}^n$ and $\beta>0$, as
\begin{equation}\tag{$P_x$}\label{ee}
\min_{v\in C-x}\left\{ \frac{\| v\|^2}{2}+\beta \phi(x,v)\right\}.
\end{equation}}
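To fix ideas, in the unconstrained case $C=\ensuremath{\mathbb R}^n$ with a single normalized generator $G(x)=\{y\}$, subproblem \eqref{ee} is smooth and its first-order condition $v+\beta J_F(x)^Ty=0$ yields the minimizer in closed form. The following sketch (purely illustrative data) computes $v(x)$ and checks that $\phi(x,v(x))<0$ at a non-stationary point.

```python
import numpy as np

def subproblem_solution(JF, y, beta):
    """Minimizer of ||v||^2/2 + beta * y^T (JF v) over v in R^n
    (C = R^n, single generator y): stationarity gives v = -beta * JF^T y."""
    return -beta * JF.T @ y

JF = np.array([[2.0, 0.0],
               [0.0, 1.0]])              # Jacobian of F at x (hypothetical)
y = np.array([1.0, 1.0]) / np.sqrt(2.0)  # normalized generator of K*(x)
v = subproblem_solution(JF, y, beta=1.0)
phi = y @ (JF @ v)                       # phi(x, v) = y^T JF(x) v; negative here
```

With a finite generating set $G(x)$ or a nontrivial constraint $C$, the same objective becomes a nonsmooth convex program, solvable, e.g., as a small quadratic program.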
\begin{remark}\label{newone}
Since $G(x)$ is compact, the function $\phi(x,\cdot):\mathbb{R}^n\to \mathbb{R}$ is well defined for each $x\in \ensuremath{\mathbb R}^n$. Moreover, it is a continuous function.
\end{remark}
The next proposition provides a characterization of stationarity using the auxiliary function $\phi$, defined in \eqref{asconsec}. The unconstrained version of the following proposition can be found in Proposition 4.1 of \cite{nous}. {A version of this proposition was presented in Lemma 2.4 of \cite{[19]} for vector optimization}.
\begin{proposition}\label{prop1}The following statements hold:
\item[ {\bf(i)}] For each $x\in \mathbb{R}^n$, $\max_{y\in G(x)} y^T\hat{w}<0$ if and only if $\hat{w}\in -{\rm int}( K(x))$.
\item[ {\bf(ii)}]The point $x$ is not stationary iff there exists $v\in C-x$ such that $\phi(x,v)<0$.
\item[ {\bf(iii)}]If $\phi(x,v)<0$ and {$\beta>0$}, then there exists $\overline{\lambda}>0 $ such that $\displaystyle \frac{\|\lambda v\|^2}{2}+\beta \phi(x,\lambda v)<0 $ for all $\lambda\in (0, \overline{\lambda} ]$.
\item[ {\bf(iv)}] For each $x\in \mathbb{R}^n$, subproblem \eqref{ee} has a unique solution, denoted by $v(x)$.
\end{proposition}
\begin{proof}
{\bf (i):} The result of this item follows as in Proposition 4.1(i) of \cite{nous}.
\noindent {\bf (ii):} Note that, fixing $x$, it follows from \eqref{asconsec} that $\phi(x,v)= \rho(x, J_F(x)v)$. Then, by the definition of stationarity and item (i), the statement holds true.
\noindent {\bf (iii):} It follows from the definition of $\phi(x,v)$ that $\phi(x,\cdot)$ is a positive homogeneous function. Thus, for all $\lambda>0$,
\begin{equation}
\label{abEq}
\frac{\|\lambda v\|^2}{2}+\beta \phi(x,\lambda v)=\lambda\left(\lambda \frac{\| v\|^2}{2}+\beta \phi(x, v)\right).\end{equation}
Since $\phi(x, v)<0$, there exists $\bar \lambda>0$ small enough such that $\bar \lambda \displaystyle\frac{\| v\|^2}{2}+\beta \phi(x, v)<0.$ Hence, \eqref{abEq} together with the above inequality implies that $\displaystyle\frac{\|\lambda v\|^2}{2}+\beta \phi(x,\lambda v)<0,$ for all $\lambda\in (0,\bar \lambda]$, as desired.
\medskip
\noindent {\bf (iv):} Using the definition of the function $\phi(x,v)$, given in \eqref{asconsec}, it is easy to prove that $\phi(x,\cdot)$ is a sublinear function.
Hence, $\phi(x,\cdot)$ is a convex function, and then, $\displaystyle\frac{\| v\|^2}{2}+\beta \phi(x,v)$ is a strongly convex function. Since $C$ is a convex set, $C-x$ is also convex and therefore, subproblem \eqref{ee} has a unique minimizer.
\end{proof}
Based on Proposition \ref{prop1}(iv), we {can} define $v(x)$ as the unique solution of problem \eqref{ee} and $y(x,v)$ as an element of the compact set $G(x)$
such that $y(x,v)^T J_F(x)v=\phi(x,v)$. {Next} we will discuss the continuity {of the function \begin{equation}
\label{teta}
\theta_\beta(x):=\frac{\| v(x)\|^2}{2}+\beta \phi(x,v(x)),\end{equation} which is related to the one defined in (35) of \cite{[21]}.}
The following proposition is the constrained version of Proposition 4.2 in \cite{nous}. {Items (i)-(ii), (iii) and (iv) can be seen as a version for the variable vector optimization of Proposition $3$ of \cite{[21]}, Proposition $2.5$ of \cite{[19]} and Proposition 3.4 of \cite{[20]}, respectively}.
\begin{proposition}\label{prop2} Let $F\in \mathcal{C}^1$ and fix $\beta>0$. Then, the following hold
{\item[ {\bf(i)}] $\theta_\beta(x)\leq 0$ for all $x\in C$.}
{ \item[ {\bf(ii)}] $x$ is a stationary point iff $\theta_\beta(x)=0$.}
\item[ {\bf(iii)}] $\|v(x)\|\leq 2\beta\| J_F(x)\|$. \item[ {\bf(iv)}]If $G$ is a closed application, then $\theta_\beta$ is an upper semi-continuous function on $C$.
\end{proposition}
\begin{proof}
{\noindent {\bf (i):} Note} that $0\in C-x$ for all $x\in C$, and hence $\displaystyle\theta_\beta(x)\leq \frac{\|0\|^2}{2}+\beta\phi(x,0)=0.$
\noindent {{\bf (ii):}} As shown in Proposition \ref{prop1}(ii),
$x$ is a non-stationary point iff $\phi(x,v)<0$ for some $v\in C-x$. Then, by Proposition \ref{prop1}(iii), there exists $\hat{v}=\lambda v\in C-x$ such that $\displaystyle\frac{\|\hat{v}\|^2}{2}+\beta\phi(x,\hat{v})< 0$ and hence $\theta_\beta(x)<0$.
\noindent {\bf (iii):} By (i), $0\geq \theta_\beta(x)=\displaystyle\frac{\|v(x)\|^2}{2}+\beta y(x,v(x))^T J_F(x)v(x).$ Then, after some algebra, we get
$$\frac{\|v(x)\|^2}{2}\leq -\beta y(x,v(x))^T J_F(x)v(x)\leq \beta\| y(x,v(x))^T J_F(x)v(x) \|.$$
Using that $\| y(x,v(x))\|=1$, it follows from the above inequality that
$$\frac{\|v(x)\|^2}{2}\leq \beta \| J_F(x)\|\|v(x)\|,$$ and the result follows {after dividing the above inequality by the} positive term ${\|v(x)\|/2}\neq 0$.
\noindent {\bf (iv):} Now we prove the upper semi-continuity of the function $\theta_\beta$. Let $(x^k)_{k\in\ensuremath{\mathbb N}}$ be a sequence converging to $x$. Take $\hat{x}\in C$ such that $v(x)=\hat{x}-x$ and also denote $\hat x^k:=v^{k}+x^k$.
It is clear that, for all $k\in\ensuremath{\mathbb N}$, $\hat{x}-x^k\in C-x^k$, and so,\begin{align}\label{yy}\nonumber \theta_\beta(x^k)&= \frac{\| \hat{x}^k-x^k\|^2}{2}+\beta \phi(x^k,\hat{x}^k-x^k)\\\nonumber&\le \frac{\| \hat{x}-x^k\|^2}{2}+\beta \phi(x^k,\hat{x}-x^k)\\&
=\frac{\| \hat{x}-x^k\|^2}{2}+\beta y_k^T J_F(x^k)(\hat{x}-x^k).\end{align}
Since each $y_k:=y(x^k,\hat x-x^k)$ belongs to the compact set $G(x^k)\subseteq K^*(x^k)\cap \mathbb{S}(0,1)\subseteq \mathbb{B}(0,1)$ for all $k\in\ensuremath{\mathbb N}$, the sequence $(y_k)_{k\in\ensuremath{\mathbb N}}$ is bounded, because it lies in $\cup_{k\in\ensuremath{\mathbb N}} G(x^k)\subseteq \mathbb{B}(0,1)$. Therefore, there exists a convergent subsequence of $(y_k)_{k\in\ensuremath{\mathbb N}}$. We can assume without loss of generality that $\lim_{k\to \infty} y_k= y$, and also, since
$G$ is closed, $y\in G(x)$. Taking limit in \eqref{yy}, we get
\begin{align*}\limsup_{k\to\infty} \theta_\beta(x^k)&\leq \limsup_{k\to\infty}\displaystyle \frac{\| \hat{x}-x^k\|^2}{2}\displaystyle+\beta y_k^T J_F(x^k)(\hat{x}-x^k)\\
&=\frac{\| \hat{x}-x\|^2}{2}+
\beta y^T J_F(x)(\hat{x}-x)\\&\leq
\frac{\| \hat{x}-x\|^2}{2}+\beta \phi(x,\hat{x}-x)=\theta_\beta(x).\end{align*}
Then, the function $\theta_\beta$, defined in \eqref{teta}, is upper semi-continuous.
\end{proof}
\begin{lemma}\label{remark1}
Consider any {$x, \hat x\in C$ and $z\in\ensuremath{\mathbb R}^n$}. If $ J_F$ is locally Lipschitz, $\mbox{d}_H(G(x),G(\hat x))\leq L_G\|x-\hat x\|$ for some $L_G>0$ and $C$ is bounded, then $${\left|\phi(x,z)-\phi(\hat x,z)\right|\leq L\|x-\hat x\|,}$$ for some $L>0$. Hence $\phi$ is a continuous function {in the first argument}.
\end{lemma}
\begin{proof}By Proposition~4.1(iv) of \cite{nous}, {and using the Lipschitz assumption for $G$ in $C$, $\rho(x,w)$, defined in \eqref{Drho}, is also a Lipschitz function for all $(x,w)\in C\times W$} for any bounded subset $W\subset \ensuremath{\mathbb R}^n$. That is, if {$\|w_i\|\leq M$, $i=1,2$ with} $M>0$, then
\begin{equation}\label{lipsro}
|\rho(x_1,w_1)-\rho(x_2,w_2)|\leq \hat{L}\|x_1-x_2\| +\|w_1-w_2\|,
\end{equation}
where $\hat{L}:=L_GM$.
Taking \eqref{lipsro} for $x_1=x$, $x_2=x^k$, $w_1= J_F(x)(\hat{x}^k-x^k)$ and
$w_2= J_F(x^k)(\hat{x}^k-x^k)$, {we get} \begin{align*}
\Big|\rho\big(x, J_F(x)(\hat{x}^k-x^k)\big) -\rho\big(x^k, J_F(x^k)(\hat{x}^k-x^k)\big)\Big| &\leq \hat{L}\|x-x^k\|+\|( J_F(x)- J_F(x^k))(\hat{x}^k-x^k)\|
\\
&\leq \hat{L}\|x-x^k\|+\| J_F(x)- J_F(x^k)\|\|\hat{x}^k-x^k\|,
\end{align*}
because of {the compactness of} $C$ and {the continuity of} $J_F$, $\| J_F(x)(\hat{x}^k-x^k)\|\leq M$ for all $k\in\ensuremath{\mathbb N}$ and $x\in C$.
Noting that
$$\phi(x,\hat{x}^k-x^k)-\phi(x^k,\hat{x}^k-x^k)=
\rho\left(x, J_F(x)(\hat{x}^k-x^k)\right)-\rho\left(x^k, J_F(x^k)(\hat{x}^k-x^k)\right),
$$ and since $ J_F$ is locally Lipschitz, it follows from \eqref{lipsro} that
\begin{equation}\label{eq33}\left|\phi(x,\hat{x}^k-x^k)-\phi(x^k,\hat{x}^k-x^k)\right|\leq (\hat{L}+L_F\hat M)\|x-x^k\|,
\end{equation} where $L_F$ is the Lipschitz constant of $ J_F$ and $\|\hat{x}^k-x^k\|\le \hat M$ for all $k\in\ensuremath{\mathbb N}$. {This proves the continuity of $\phi$ in the first argument}.
\end{proof}
Now we can prove the lower semicontinuity of $\theta_\beta$ {by following similar ideas of the result presented in Proposition $3.4$ of \cite{[20]} for vector optimization.}
\begin{proposition}\label{item(iv)} Let $F\in \mathcal{C}^1$ and consider any $x, \hat x\in C$ {with $C$ bounded}. Then, if $\mbox{d}_H(G(x),G(\hat x))\leq L_G\|x-\hat x\|$ for some $L_G>0$ and $ J_F$ is locally Lipschitz, $\theta_\beta$ is a lower semicontinuous function on $C$.
\end{proposition}
\begin{proof}
Let $(x^k)_{k\in\ensuremath{\mathbb N}}\subset C$ be a sequence converging to $x$, and set $\hat x^k:=x^k+v(x^k)$. Since $\hat x^k\in C$, we have $\hat x^k-x\in C-x$, and hence
\begin{align*}
\theta_\beta(x)\leq& \beta\phi(x,\hat{x}^k-x)+\frac{\|\hat{x}^k-x\|^2}{2}\\
=&\theta_\beta(x^k)+\beta\left[\phi(x,\hat{x}^k-x)-\phi(x^k,\hat{x}^k-x^k)\right]+\frac{\|\hat{x}^k-x\|^2-\|\hat{x}^k-x^k\|^2}{2}\\
=&\theta_\beta(x^k)+\beta\left[\phi(x,\hat{x}^k-x)-\phi(x^k,\hat{x}^k-x^k)\right]+\frac{1}{2}\left[2\langle \hat{x}^k, x^k-x\rangle +\|x\|^2-\|x^k\|^2\right].
\end{align*}
Taking limits and using Lemma \ref{remark1} together with the continuity of $\phi$, we get
$$\lim_{k\to \infty}\left[\phi(x,\hat{x}^k-x)-\phi(x^k,\hat{x}^k-x^k)\right]=0.$$ Also, since $C$ is bounded and $x^k\to x$, it follows that $\lim_{k\to \infty}\left\{
\frac{1}{2}\left[\|x\|^2-\|x^k\|^2\right]+\langle \hat{x}^k, x^k-x\rangle\right\}= 0.$ Hence,
\begin{align*} \theta_\beta(x)&\leq \liminf_{k\to\infty}\left\{
\theta_\beta(x^k)+\beta\left[\phi(x,\hat{x}^k-x)-\phi(x^k,\hat{x}^k-x^k)\right]+\langle \hat{x}^k, x^k-x\rangle +\frac{\|x\|^2-\|x^k\|^2}{2}\right\}\\
&= \liminf_{k\to\infty} \theta_\beta(x^k),
\end{align*} establishing the desired result.
\end{proof}
{Now we recall the concept of $\delta$-approximate direction introduced in Definition $3.1$ of \cite{[19]}.
\begin{definition}\label{deltasol} Let $x\in C$ and $\beta>0$. Given $\delta\in [0,1)$, we say that $v$ is a $\delta$-approximate solution of problem \eqref{ee} if $v\in C-x$ and
$\beta\phi(x,v)+\displaystyle\frac{\|v\|^2}{2}\leq (1-\delta) \theta_\beta(x).
$
If $v\neq 0$ we say that $v$ is a $\delta$-approximate direction.
\end{definition}}
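As an illustration, consider the unconstrained case $C=\ensuremath{\mathbb R}^n$ with a single generator $G(x)=\{y\}$, where the exact solution of \eqref{ee} is $v(x)=-\beta J_F(x)^Ty$ and hence $\theta_\beta(x)=-\tfrac{\beta^2}{2}\|J_F(x)^Ty\|^2$. The sketch below (illustrative data only) checks the $\delta$-approximate condition of Definition \ref{deltasol} directly.

```python
import numpy as np

def is_delta_approximate(v, JF, y, beta, delta):
    """Check the delta-approximate condition for C = R^n and G(x) = {y},
    where theta_beta(x) = -(beta^2/2)*||JF^T y||^2 is the exact optimal value."""
    g = JF.T @ y
    theta = -0.5 * beta**2 * (g @ g)
    value = 0.5 * (v @ v) + beta * (y @ (JF @ v))
    return value <= (1.0 - delta) * theta

JF = np.array([[1.0, 0.0], [0.0, 1.0]])   # Jacobian of F at x (hypothetical)
y = np.array([0.6, 0.8])                  # unit generator of K*(x)
v_exact = -JF.T @ y                       # exact solution for beta = 1
ok_exact = is_delta_approximate(v_exact, JF, y, beta=1.0, delta=0.25)
ok_scaled = is_delta_approximate(0.6 * v_exact, JF, y, beta=1.0, delta=0.25)
```

Both checks succeed: the exact solution always qualifies, and moderately shortened copies of it may as well, while directions far from $v(x)$ fail the test.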
Hence, from a numerical
point of view, it is interesting to consider algorithms in which the line-search is {performed along a} $\delta$-approximate {solution of subproblem \eqref{ee} instead of an exact solution of it}.
\begin{remark} Note that if the solution of \eqref{ee} is $0$, then the only possible $\delta$-approximate solution is $v=0$. {Otherwise, since $\theta_\beta(x)<0$, there exist feasible directions $v$ such that} $$\beta\phi(x,v)+\frac{\|v\|^2}{2}\in\left[\theta_\beta(x), (1-\delta) \theta_\beta(x)\right].$$ In particular {$v(x)$}, the solution of \eqref{ee}, is always a $\delta$-approximate solution. {As presented in \cite{durea}, a variant of this approach consists in finding $v$ such that $J_F(x_k)v\in -\cup_{x\in C}K(x)$.
However, this variant also converges to a stationary point, as does the method proposed here. Non-dominated points are minimizers under well-known conditions. A future research line is to determine, in this case, conditions guaranteeing that the stationary points to which the algorithm converges are non-dominated}.\end{remark}
Next we present an inexact algorithm for solving problem \eqref{(P)}. {The algorithm requires
the following exogenous parameters: $\delta\in[0,1)$, $\sigma, {\gamma} \in(0,1)$ and $0<\bar \beta\le \hat{\beta}<+\infty$.}
\begin{center}\fbox{\begin{minipage}[b]{\textwidth}
\noindent{{\bf I}nexact {\bf P}rojected {\bf G}radient Method ({\bf IPG Method}).}\;\; {Assume that $\beta_k\in[\bar\beta,\hat{\beta}]$ for all $k\in\ensuremath{\mathbb N}$}.
\medskip
\noindent {\bf Initialization:} Take $x^0\in C$ and $\beta_0$.
\medskip
\noindent {\bf Iterative step:} Given $x^k$ and $\beta_k$, compute $v^k$, $\delta$-approximate solution of $(P_{x^k})$.
If $v^k=0$, then stop. Otherwise compute
\begin{equation}\label{Armijo-type}
j(k):=\min\left\{j\in\mathbb{Z}_+\colon F(x^k)- F(x^k+\gamma^{j}v^k)+\sigma \gamma^{j} J_F(x^k)v^k\in K(x^k)\right\}.
\end{equation}
\noindent Set
$
x^{k+1}=x^k+\gamma_kv^k\in C,
$
with $\gamma_k=\gamma^{j(k)}$.
\end{minipage}}\end{center}
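As a minimal computational sketch of the iterative step, assume the fixed Pareto cone $K(x)\equiv\ensuremath{\mathbb R}^m_+$, $C=\ensuremath{\mathbb R}^n$, and a direction built from a single dual generator (all illustrative choices, not the general setting of {\bf IPG Method}); the Armijo-type rule \eqref{Armijo-type} then becomes a componentwise inequality.

```python
import numpy as np

def ipg_step(F, JF, x, v, sigma=0.5, gamma=0.5, max_j=50):
    """One backtracking step of the iterative step for K(x) = R^m_+ :
    accept gamma^j once F(x) - F(x + gamma^j v) + sigma*gamma^j*JF(x)v >= 0
    holds componentwise (i.e., the difference lies in R^m_+)."""
    Jv = JF(x) @ v
    for j in range(max_j):
        t = gamma ** j
        if np.all(F(x) - F(x + t * v) + sigma * t * Jv >= 0.0):
            return x + t * v
    raise RuntimeError("Armijo-type line search did not terminate")

# Hypothetical bi-objective problem: F(x) = (x1^2 + x2^2, (x1-1)^2 + x2^2)
F = lambda x: np.array([x[0]**2 + x[1]**2, (x[0] - 1.0)**2 + x[1]**2])
JF = lambda x: np.array([[2.0 * x[0], 2.0 * x[1]],
                         [2.0 * (x[0] - 1.0), 2.0 * x[1]]])

x = np.array([2.0, 2.0])
v = -JF(x).T @ np.array([0.5, 0.5])  # descent direction from one dual generator
x_next = ipg_step(F, JF, x, v)       # both objectives decrease at x_next
```

Here $v$ plays the role of a $\delta$-approximate direction; in the constrained case one would additionally require $x^k+v^k\in C$, as in the method above.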
{It is worth noting that {\bf IPG Method} extends Algorithm $3.3$ of \cite{[19]} to the variable order setting.}
The next proposition proves that the stepsize $\gamma_k$ is well defined for all $k\in \ensuremath{\mathbb N}$, \emph{i.e.,} there exists a finite positive integer $j$ that fulfils the Armijo-type rule given in \eqref{Armijo-type} at each step of {\bf IPG Method}. {The proof of the next result uses a similar idea to the one presented in Proposition $2.2$ of \cite{[19]}.}
\begin{proposition} Subproblem \eqref{Armijo-type} has a finite solution, i.e., there exists a finite index $j(k)$ which solves \eqref{Armijo-type}.
\end{proposition}
\begin{proof} If $v^k=0$ then {{\bf IPG Method} stops}. Otherwise, if $v^k\neq 0$ then by Proposition \ref{prop2}(ii), $x^k$ is not a stationary point and $\theta_{\beta_k}(x^k)<0$. Moreover, $$\beta_k\phi(x^k,v^k)\leq \beta_k\phi(x^k,v^k)+\frac{\|v^k\|^2}{2}\le (1-\delta) \theta_{\beta_k}(x^k)<0.$$
Note further that $\phi(x^k,v^k)=\max_{y\in G(x^k)} y^T J_F(x^k)v^k<0.$ Thus, it follows from Proposition \ref{prop1}(i) that
\begin{equation}\label{tt}
J_F(x^k)v^k\in -{\rm int}(K(x^k)).
\end{equation}
Using the Taylor expansion of $F$ at $x^k$, we obtain that
\begin{equation}\label{igual-para-armijo}F(x^k)+\sigma \gamma^{j} J_F(x^k)v^k- F(x^k+\gamma^{j}v^k)=(\sigma -1)\gamma^{j} J_F(x^k)v^k +o(\gamma^{j}).\end{equation}
Since $\sigma<1$ and $K(x^k)$ is a cone, it follows from \eqref{tt} that $(\sigma -1)\gamma^{j} J_F(x^k)v^k\in {\rm int}(K(x^k)).$
Then, there exists $\ell\in\ensuremath{\mathbb N}$ such that, for all
$j\ge \ell$, we get $(\sigma -1)\gamma^{j} J_F(x^k)v^k +o(\gamma^{j})\in K(x^k).$ Combining the last inclusion with \eqref{igual-para-armijo}, we obtain
$F(x^k)+\sigma \gamma^{j} J_F(x^k)v^k- F(x^k+\gamma^{j}v^k)\in K(x^k)$ for all $j\ge \ell$. {Hence \eqref{Armijo-type} is satisfied and $j(k)\le \ell<+\infty$}. \end{proof}
\begin{remark}{\rm After this {proposition} it is clear that given $(x^k,v^k)$, $j(k)$ is well-defined. Furthermore, the sequence generated by {\bf IPG Method} is always feasible. Indeed, as $x^k, x^k+v^k\in C$, $\gamma_k\in (0,1]$ and $C$ is convex, $x^{k+1}=x^k+\gamma_kv^k\in C$.}
\end{remark}
\section{Convergence Analysis of IPG Method}
In this section we prove the convergence of {\bf IPG Method} presented in the previous section. First we consider the general case and then
the result is refined for $K$--convex functions. From now on, $(x^k)_{k\in\ensuremath{\mathbb N}}$ denotes the sequence generated by {\bf IPG Method}.
We begin the section with the following lemma.
\begin{lemma}\label{le} Let $F\in \mathcal{C}^1$.
Assume that
\item[ {\bf(a)}] $\cup_{x\in C} K(x)\subseteq \mathcal{K}$, where
$\mathcal{K}$ is a closed, pointed and convex cone. \item[ {\bf(b)}] {The application $G$ is closed}. \item[ {\bf(c)}] $\mbox{d}_H(G(x),G(\bar{x}))\leq L_G\|x-\bar{x}\|$, for all
$x,\bar{x}\in C$.
\item []
If $x^*$ is an accumulation point of $(x^k)_{k\in\ensuremath{\mathbb N}}$, then $\lim_{k\to\infty}F(x^k)= F(x^*)$.
\end{lemma}
\begin{proof} Let $x^*$ be any accumulation point of the sequence $(x^k)_{k\in\ensuremath{\mathbb N}}$ and denote $(x^{i_k})_{k\in\ensuremath{\mathbb N}}$ a subsequence of $(x^k)_{k\in\ensuremath{\mathbb N}}$ such that $\lim_{k\to\infty}x^{i_k}= x^*$. It follows from the definition of Armijo-type line-search in \eqref{Armijo-type} that
\begin{equation}\label{fg}F(x^{k+1}) -F(x^k)-\sigma\gamma_k J_F(x^k)v^k\in -K(x^k).\end{equation}
Since {\bf IPG Method} does not stop after finitely many steps, $v^k\neq 0$, which means that {$\phi(x^k,v^k)<0$}. By Proposition \ref{prop1}(i), this means that $ J_F(x^k)v^k\in -{\rm int}(K(x^k)).$
Multiplying the last inclusion by $\sigma\gamma_k>0$ and summing with \eqref{fg}, we get from the convexity of $K(x^k)$ that
$$F(x^{k+1}) -F(x^k)-\sigma\gamma_k J_F(x^k)v^k+\sigma\gamma_k J_F(x^k)v^k\in -{\rm int}(K(x^k)). $$
Thus,
$ F(x^{k+1})-F(x^{k})\in - {\rm int}(K(x^k)).$
Since $\cup_{x\in C} K(x)\subseteq \mathcal{K}$ {from (a)}, it holds that ${\rm int}(K(x))\subseteq {\rm int}(\mathcal{K})$ for all $x$, and
$ F(x^{k+1})-F(x^{k})\in -{\rm int}( \mathcal{K}).$ {Hence, $(F(x^k))_{k\in\ensuremath{\mathbb N}}$ is decreasing with respect to the cone $\mathcal{K}$.
The continuity of $F$ implies that $\lim_{k\to \infty} F(x^{i_k})= F(x^*).$
Then, to prove that the whole sequence $(F(x^k))_{k\in\ensuremath{\mathbb N}}$ converges to $F(x^*)$, we use that it is decreasing with respect to the cone $\mathcal{K}$, which is a closed, pointed and convex cone;
see, for instance, \cite{Peressini,danielltammer}. Thus, we get that
$\lim_{k\to \infty} F(x^k)= F(x^*),$ as desired.}\end{proof}
Next we present a result analogous to Proposition \ref{prop2}(iii) for the case in which $v^k$ is a $\delta$-approximate solution of subproblem $\left(P_{x^k}\right)$, which gives us an upper bound for the norm of $v^k$. {The next lemma is a version of Proposition 2.5 of \cite{[19]} in the variable order setting.}
\begin{lemma}\label{opu} {Let $(x^k)_{k\in\ensuremath{\mathbb N}}$ and $(\beta_k)_{k\in\ensuremath{\mathbb N}}$ be sequences generated by {\bf IPG Method}. Then, $\|v^k\|\leq 2 \beta_k\| J_F(x^k)\|.$}\end{lemma}\begin{proof} By the definition of $\delta$-approximate direction
$\beta_k\phi(x^k,v^k)+\displaystyle\frac{\|v^k\|^2}{2}\leq (1-\delta) \theta_{\beta_k}(x^k).$
As was shown in Proposition \ref{prop2}(i), $(1-\delta) \theta_{\beta_k}(x^k)\leq 0$, since $x^k\in C$. Thus, $\displaystyle\frac{\|v^k\|^2}{2}\leq -\beta_k\phi(x^k,v^k)$ and the result follows as in Proposition \ref{prop2}(iii).\end{proof}
Next we prove the stationarity of the accumulation points of the generated sequence. {Some arguments used in the proof of the next theorem are similar to those of Theorem $3.5$ of \cite{[19]} and Theorem $5.1$ of \cite{nous} for {fixed} and variable vector optimization, respectively.}
\begin{theorem}\label{t1}Suppose that
\item[ {\bf(a)}] $\cup_{x\in C} K(x)\subseteq \mathcal{K}$, where
$\mathcal{K}$ is a closed, pointed and convex cone. \item[ {\bf(b)}] {The application $G$ is closed}. \item[ {\bf(c)}] $\mbox{d}_H(G(x),G(\hat{x}))\leq L_G\|x-\hat{x}\|$, for all $x,\hat{x}\in C$.
\item[ {\bf(d)}] {$J_F$} is a locally Lipschitz function.
\item [] {If $(\beta_k)_{k\in\ensuremath{\mathbb N}}$ is a bounded sequence}, then all {the} accumulation points of $(x^k)_{k\in\ensuremath{\mathbb N}}$ are stationary points of problem \eqref{(P)}.
\end{theorem}
\begin{proof} Let $x^*$ be an accumulation point of the sequence $(x^k)_{k\in\ensuremath{\mathbb N}}$, and denote by $(x^{i_k})_{k\in\ensuremath{\mathbb N}}$ a subsequence converging to $x^*$. Since $F\in \mathcal{C}^1$, Lemma \ref{opu} implies that the subsequence $(v^{i_k})_{k\in\ensuremath{\mathbb N}}$ is also bounded and hence has a convergent subsequence. Without loss of generality, we assume that $(v^{i_k})_{k\in\ensuremath{\mathbb N}}$ converges to $v^*$, {and that $\beta_{i_k}$ and $\gamma_{i_k}$ converge to $\beta_*\ge \bar \beta$ and $\gamma^*$, respectively}. {Recall that} $\rho(x,w)=\max_{y\in G(x)} y^Tw$.
By definition we have
$F(x^{k+1}) -F(x^k)-\sigma\gamma_k J_F(x^k)v^k\in -\mathcal{K}.$
Proposition \ref{prop1}(i) implies that $\rho\left(x^{k},F(x^{k+1}) -F(x^k)-\sigma\gamma_k {J_F(x^k)v^k}\right)\leq 0$. Since the function
$\rho$ is sublinear, as shown in
Proposition \ref{prop1} (iv), we get
\begin{equation}\label{bolita}
\rho\left(x^k, F(x^{k+1})-F(x^k)\right)\leq \sigma\gamma_k\rho\left(x^k, J_F(x^k)v^k\right) .
\end{equation}
{Using the sublinearity of $\rho$ in the second argument, we can rewrite} \eqref{bolita} as
$\rho\left( x^k,F(x^k)\right) - \rho\left(x^k,F(x^{k+1})\right)\geq -\sigma\gamma_k\rho\left(x^k, J_F(x^k)v^{k}\right)\geq 0$. {Now,
considering the subsequences $(x^{i_k})_{k\in\ensuremath{\mathbb N}}$ and $(v^{i_k})_{k\in\ensuremath{\mathbb N}}$, where {$v^{i_k}=v(x^{i_k})$}, in this inequality, we have}
$$\lim_{k\to\infty} {\left[\rho\left(x^{i_k},F(x^{i_k})\right) - \rho\left(x^{i_k},F(x^{i_{k}+1})\right)\right]}\geq -\sigma \lim_{k\to\infty} \gamma_{i_k} \rho\left(x^{i_k},
J_F(x^{i_k}) v^{i_k}\right)\geq 0.$$As was already observed in {the proof of} Lemma \ref{remark1},
$\rho$ is continuous; moreover, from Lemma \ref{le}, we have $\lim_{k\to\infty} F(x^k)=F(x^*)$. Thus,
$$\lim_{k\to\infty} {\left[ \rho\left(x^{i_k},F(x^{i_k})\right) - \rho\left(x^{i_k},F(x^{i_{k}+1})\right)\right]}=\rho\left(x^*,F(x^*)\right)-
\rho\left(x^*,F(x^*)\right)=0.$$
These facts imply that
$\lim_{k\to\infty} \gamma_{i_k} \rho\left(x^{i_k}, J_F(x^{i_k})v^{i_k}\right)=0.$
Hence we split our analysis into two cases: $\gamma^*> 0$ and $\gamma^*=0$.
\medskip
\noindent {\bf Case 1:} $ \gamma^*> 0$.
Here
\begin{equation}\label{rrr}\lim_{k\to\infty} \phi(x^{i_k}, v^{i_k})=\lim_{k\to\infty} \rho\left(x^{i_k}, J_F(x^{i_k})v^{i_k}\right)=0.\end{equation}
Suppose that \begin{equation}\label{tttt}\theta_{\beta_*}(x^*)=\|v(x^*)\|^2/2+\beta_*\phi(x^*,v(x^*)) <-\epsilon<0, \end{equation} where $v(x^*)=\hat{x}-x^*$ {with $\hat x\in C$}. Due to the continuity of $\phi(\cdot,\cdot)$ in both arguments, Lemma \ref{remark1} and \eqref{rrr} imply that $$\phi(x^{i_k},v^{i_k})>-\frac{(1-\delta)\epsilon}{\max_{k\in\ensuremath{\mathbb N}} \beta_k}=-\frac{(1-\delta)\epsilon}{\hat \beta}$$ for $k$ large enough. Since $(\beta_k)_{k\in\ensuremath{\mathbb N}}$ is a positive and bounded sequence, we then have
\begin{equation}\label{jj}
\|v^{i_k}\|^2/2+\beta_{i_k}\phi(x^{i_k},v^{i_k})\geq \beta_{i_k}\phi(x^{i_k},v^{i_k})>-\beta_{i_k}\frac{(1-\delta) \epsilon}{\hat\beta}\geq-(1-\delta) \epsilon.
\end{equation}
Since $v^{i_k}\in C-x^{i_k}$ is a $\delta$-approximate solution of subproblem $(P_{x^{i_k}})$, we have, for all {$v\in C-x^{i_k}$},
\begin{equation}\label{j}
(1-\delta)\left( \frac{\|v \|^2}{2}+\beta_{i_k} \phi(x^{i_k},v)\right)\geq (1-\delta)
\theta_{\beta_{i_k}}(x^{i_k})\geq
\frac{\|v^{i_k}\|^2}{2}+\beta_{i_k} \phi(x^{i_k},v^{i_k}).
\end{equation}
Combining \eqref{jj} and \eqref{j}, we obtain that $(1-\delta)\left(\displaystyle\frac{\|v\|^2}{2}+\beta_{i_k} \phi(x^{i_k},v)\right) >- (1-\delta)\epsilon$.
In particular, consider $\hat{v}^k=\hat{x} -x^{i_k}$. Dividing by $(1-\delta)>0$, we obtain
$$ \frac{\|\hat{v}^k\|^2}{2}+\beta_{i_k} \phi(x^{i_k},\hat{v}^k) >- \epsilon . $$
By the continuity of $\phi$ with respect to the first argument, taking the limit in the previous inequality leads to $\|v(x^*)\|^2/2+{\beta_*\phi(x^*,v(x^*))}\ge- \epsilon$. This fact and \eqref{tttt} imply
$$-\epsilon>\frac{\|v(x^*)\|^2}{2}+\beta_* \phi({x^*},v(x^*)) \geq - \epsilon, $$ which is a contradiction.
Thus, we can conclude that $\theta_{\beta_*}(x^*)\geq 0$ and hence, using {Proposition \ref{prop2}}, $x^*$ is a stationary point whenever $\limsup_{k\to\infty} \gamma_{i_k}> 0$.
\medskip
\noindent {\bf Case 2:} $\gamma^*=0$.
We consider the previously defined subsequences $(x^{i_k})_{k\in\ensuremath{\mathbb N}} $, $(\beta_{i_k})_{k\in\ensuremath{\mathbb N}}$, $(v^{i_k})_{k\in\ensuremath{\mathbb N}} $ and $(\gamma_{i_k})_{k\in\ensuremath{\mathbb N}}$, converging to $x^*$, $\beta_*$, $v^*$ and $\gamma^*=0$, respectively. Since $\beta_*>0$, we get that
$$\rho\left(x^{i_k}, J_F(x^{i_k})v^{i_k}\right) \leq \rho\left(x^{i_k}, J_F(x^{i_k})v^{i_k}\right) +\frac{\|v^{i_k}\|^2}{2\beta_{i_k}}.$$
Since $v^{i_k}$ is a $\delta$-approximate solution of {$(P_{x^{i_k}})$} (see Definition \ref{deltasol}), we have
$$ \rho\left(x^{i_k}, J_F(x^{i_k})v^{i_k}\right) +\frac{\|v^{i_k}\|^2}{2\beta_{i_k}}\leq \frac{(1-\delta)}{\beta_{i_k}}\theta_{\beta_{i_k}}(x^{i_k})<0.$$
Taking the limit above, it follows that $\rho(x^*, J_F(x^*)v^*)\leq - \displaystyle\frac{\|v^*\|^2}{2\beta_*}\leq 0.$
Fix $q\in \mathbb{N}$. Since $\gamma_{i_k}\to\gamma^*=0$, for $k$ large enough we have $\gamma_{i_k}<\gamma^{q}$, so the step $\gamma^{q}$ was rejected by the line-search, i.e.,
$F(x^{i_k}+\gamma^{q}v^{i_k})\notin F(x^{i_k})+\sigma\gamma^q J_F(x^{i_k})v^{i_k}- K(x^{i_k}).$
Hence there exists $\hat{y}_{i_k}\in G(x^{i_k})$ such that
$\left\langle F(x^{i_k}+\gamma^{q}v^{i_k}) - F(x^{i_k}) -\sigma\gamma^q J_F(x^{i_k})v^{i_k}, \hat{y}_{i_k}\right\rangle >0,$
and therefore
$$\rho\left(x^{i_k}, F(x^{i_k}+\gamma^{q}v^{i_k})-F(x^{i_k})- \sigma\gamma^q J_F(x^{i_k})v^{i_k}\right)\geq 0.$$
Taking the limit as $k$ tends to $+\infty$ and using that $\rho$ is a continuous function, we obtain
$$\rho\left(x^*,F(x^*+\gamma^{q}v^*)-F(x^*)-\sigma\gamma^q J_F(x^*)v^*\right)\geq 0.$$
But {$\rho(x,\cdot)$ is a positively homogeneous} function, so
$$\rho\left(x^*,\frac{F(x^*+\gamma^{q}v^*)-F(x^*)}{\gamma^{q}}- \sigma J_F(x^*) v^*\right) \geq 0.$$
Taking the limit as $q$ tends to $+\infty$, we obtain $\rho\left(x^*,(1-\sigma) J_F(x^*) v^*\right)\geq 0.$
Finally, since
$\rho\left(x^*, J_F(x^*) v^*\right)\leq 0$, it holds
$\rho\left(x^*, J_F(x^*) v^*\right)= 0$
and by Proposition \ref{prop1}(ii), this is equivalent to saying that $x^*\in S^s.$
\end{proof}
The above result generalizes Theorem $5.1$ of \cite{nous}, where the exact steepest descent method for unconstrained problems was studied. Recall that in the exact variant of the algorithm the direction $v^k$ is computed as an exact solution of problem $(P_{x^k})$. In order to fill the gap between these two cases, we present two direct consequences of the above result: the inexact method for unconstrained problems and the exact method for the constrained problem.
\begin{corollary}Suppose that conditions (a)-(d) of Theorem \ref{t1} are fulfilled.
Then all accumulation points of the sequence $(x^k)_{k\in\ensuremath{\mathbb N}}$ generated by the exact variant of {\bf IPG Method} are stationary points of problem \eqref{(P)}.
\end{corollary}
\begin{proof} Apply Theorem \ref{t1} to the case {$\delta=0$}.\end{proof}
\begin{corollary}
{Suppose that conditions (a)-(d) of Theorem \ref{t1} are fulfilled for $C=\ensuremath{\mathbb R}^n$.
If $(\beta_k)_{k\in\ensuremath{\mathbb N}}$ is a bounded sequence, then all accumulation points of $(x^k)_{k\in\ensuremath{\mathbb N}}$ computed by {\bf IPG Method} are stationary points of problem \eqref{(P)}.}
\end{corollary}
\begin{proof} {Directly by applying Theorem \ref{t1} for $C=\ensuremath{\mathbb R}^n$}. \end{proof}
The result presented in Theorem~\ref{t1} assumes the existence of accumulation points. We emphasize that this is the case even when the projected gradient method is applied to classical scalar problems, i.e., $m=1$ and $K(x)=\ensuremath{\mathbb R}_+$. The convergence of the whole sequence generated by the algorithm is only possible under stronger assumptions, such as convexity.
Now, based on quasi-Fej\'er convergence theory, we will prove the full convergence of the sequence generated by {\bf IPG Method} when we assume that $F$ is $K$--convex. We start by presenting the corresponding definition and its properties.
\begin{definition}\label{def-cuasi-fejer}
Let $S$ be a nonempty subset of $\mathbb{R}^n$. A
sequence $(z^k)_{k\in\ensuremath{\mathbb N}}$ is said to be quasi-Fej\'er convergent
to $S$ iff for all $x \in S$, there exist $\bar{k}\in\ensuremath{\mathbb N}$ and a summable sequence
$(\varepsilon_k)_{k\in\ensuremath{\mathbb N}}\subset \mathbb{R}_+$ such that $\| z^{k+1}-x\|^2 \leq \|
z^{k}-x\|^2+\varepsilon_k$ for all $k\ge\bar{k}$.
\end{definition}
This definition originates in \cite{browder} and has been further
elaborated in \cite{IST}. A useful result on quasi-Fej\'er sequences is the following.
\begin{fact}\label{cuasi-Fejer}
If $(z^k)_{k\in\ensuremath{\mathbb N}}$ is quasi-Fej\'er convergent to $S$, then:
\item[ {\bf(i)}] The sequence $(z^k)_{k\in\ensuremath{\mathbb N}}$ is bounded.
\item[ {\bf(ii)}] If an accumulation point
of $(z^k)_{k\in\ensuremath{\mathbb N}}$ belongs to $S$, then the whole sequence
$(z^k)_{k\in\ensuremath{\mathbb N}}$ converges.
\end{fact}
\begin{proof} {See Theorem $1$ of \cite{bmauricio}.}\end{proof}
To guarantee the convergence of {\bf IPG Method}, we introduce the following definition, {which is related to the one presented in Definition $4.2$ of \cite{[19]}.}
\begin{definition}\label{scompatible}Let $x\in C$. A direction $v\in C - x$ is scalarization compatible (or
simply s-compatible) at $x$ if there exists $w\in {\rm conv}(G(x))$ such that
$v = P_{C-x}(-\beta J_F(x)^Tw).$\end{definition}
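When the feasible set is simple, the projection in the definition above can be computed in closed form. A minimal numerical sketch (Python; it assumes, purely for illustration, that $C$ is a box, so projecting onto $C-x$ reduces to a componentwise clip; the data below are hypothetical):

```python
import numpy as np

def s_compatible_direction(x, J, w, beta, lower, upper):
    """v = P_{C-x}(-beta * J_F(x)^T w) for a box C = [lower, upper]^n.

    Projecting onto the translated box C - x is a componentwise clip."""
    target = -beta * J.T @ w                 # -beta * J_F(x)^T w, a vector in R^n
    return np.clip(target, lower - x, upper - x)

# Hypothetical data: n = 2 variables, m = 2 objectives, C = [0, 1]^2.
x = np.array([0.5, 0.5])
J = np.array([[1.0, 0.0],
              [0.0, 2.0]])                   # J_F(x), shape (m, n)
w = np.array([0.6, 0.8])                     # some w in conv(G(x)), ||w|| = 1
v = s_compatible_direction(x, J, w, beta=1.0,
                           lower=np.zeros(2), upper=np.ones(2))
# x + v stays in C; here v == [-0.5, -0.5]
```

Note that the clip form is specific to box constraints; for a general closed convex $C$ the projection requires its own solver.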
Now we proceed as in Proposition 4.3 of \cite{[19]}, presenting the relation between inexact and s-compatible directions.
\begin{proposition} Let $x\in C$, $w\in {\rm conv}(G(x))$, $v = P_{C-x}( -\beta J_F(x)^Tw)$ and $\delta\in [0, 1)$.
If
$$\beta\phi(x,v)\leq (1-\delta)\beta \langle w, J_F(x)v\rangle - \frac{\delta}{2}\|v\|^2,$$
then $v$ is a $\delta$-approximate projected gradient direction.\end{proposition}
\begin{proof} See Proposition 4.3 of \cite{[19]}. \end{proof}
We start the analysis with a technical result, {which is related to the proof of Lemma $5.3$ of \cite{[19]}.}
\begin{lemma}\label{auxi1} Suppose that $F$ is $K$--convex. Let $(x^k)_{k\in\ensuremath{\mathbb N}}$ be a sequence generated by
{\bf IPG Method} where $v^k$ is an $s$-compatible direction at $x^k$, given by
{$v^k = P_{C-x^k}(-\beta_k J_F(x^k)^Tw^k)$}, with $w^k\in {\rm conv}(G(x^k))$ for all $k\in\ensuremath{\mathbb N}$. If for a given $\hat{x}\in C$ we have
$F(\hat{ x})-F(x^k)\in -K(x^k)$, then
$$\|x^{k+1}- \hat{x}\|^2 \leq \|x^k- \hat{x}\|^2+
2\beta_k\gamma_k|\langle w^k, J_F(x^k)v^k\rangle|.$$\end{lemma}
\begin{proof} Since $x^{k+1} = x^k + \gamma_kv^k$, we have
$\|x^{k+1}- \hat{x}\|^2 =\|x^k- \hat{x}\|^2+\gamma_k^2\|v^k\|^2-2\gamma_k\langle v^k,\hat{x}-x^k\rangle.$
Let us analyze the rightmost term of the above expression. It follows from the definition of $v^k$
and the obtuse angle property of projections that
$\langle-\beta_k J_F(x^k)w^k-v^k, v-v^k\rangle\leq 0,$ for all $v\in C- x^k$.
Taking $v = \hat{x}- x^k\in C-x^k$ on the above inequality, we obtain
$$-\langle v^k, \hat{x}-x^k\rangle \leq \beta_k\langle w^k, J_F(x^k)( \hat{x}-x^k)\rangle- \beta_k\langle w^k, J_F(x^k)v^k\rangle-\|v^k\|^2.$$
Now, it follows from the convexity of $F$ that $\langle w^k, J_F(x^k)(\hat{x}-x^k)\rangle \leq \langle w^k,F(\hat{x})-F(x^k)\rangle$. Also the fact {$F(\hat{x})\preceq_{K(x^k)} F(x^k)$, i.e., $F(\hat{x})-F(x^k)\in -K(x^k)$,} together with $w^k\in K^*(x^k)$ imply that $\langle w^k,F(\hat{x})-F(x^k)\rangle \leq 0$. Moreover, {by using $J_F(x^k)v^k \in {\rm int}(-K(x^k))$ and $w^k$ in $G(x^k)=K^*(x^k)\cap \mathbb{S}(0,1)$}, we have
$\langle w^k, J_F(x^k)v^k\rangle < 0$. Thus, we get
$-\langle v^k,\hat{x}-x^k\rangle \leq \beta_k|\langle w^k, J_F(x^k)v^k\rangle|-\|v^k\|^2.$
The result follows because $\gamma_k\in (0,1]$.\end{proof}
We still need to make a couple of supplementary assumptions, which are standard
in convergence analysis of classical (scalar-valued) methods and in its extensions to the vector
optimization setting.
\noindent {\bf Assumption 4.4:} Let $(z^k)_{k\in\ensuremath{\mathbb N}}\subset F(C)$ be a sequence such that $z^k- z^{k+1}\in K(z^k)$ for all $k\in\ensuremath{\mathbb N}$, and such that, for some $z\in C$ and some closed, convex and pointed cone $\mathcal{K}$ with $\cup_{k\in\ensuremath{\mathbb N}} K(z^k)\subset \mathcal{K}$, it holds that $z^k-z\in \mathcal{K}$ for all $k\in\ensuremath{\mathbb N}$. Then there exists $\hat{z}\in C$ such that {$F(\hat{z})\preceq_\mathcal{K} z^k$ for all $k\in\ensuremath{\mathbb N}$, i.e., $F(\hat{z})-z^k\in-\mathcal{K}$}. {In this case we say that the sequence $(z^k)_{k\in\ensuremath{\mathbb N}}$ is bounded below with respect to $\mathcal{K}$.}
Recently, it was observed in \cite{Kim-2016} that this assumption could be replaced by assuming that the restriction of $F$ on $C$ has compact sections. This assumption is related to the completeness of the image of $F$. It is important to mention that completeness is a standard assumption for ensuring existence of efficient points in vector problems \cite{luc}.
\noindent {\bf Assumption 4.5:} The search direction $v^k$ is s-compatible at $x^k$, that is to say, $v^k =P_{C-x^k} (-\beta J_F(x^k)^Tw^k)$, where $w^k\in {\rm conv}(G(x^k))$ for all $k\in\ensuremath{\mathbb N}$.
This assumption holds automatically in the exact case. Moreover, it has been widely used in the literature in the vector case; see, for instance, \cite{[19]}. {Versions of these assumptions are also used in \cite{[19]}, where the order is given by a constant cone.}
{The} next result is an extension to the variable order setting of Theorem 5.2 of \cite{[19]}.
\begin{theorem}\label{teo4.3g} Assume that $F$ is $K$--convex and that Assumptions 4.4 and 4.5 hold. If ${\rm int}(\cap_{k\in\ensuremath{\mathbb N}} K(x^k))\neq \varnothing$ and there exists $\mathcal{K}$, a pointed, closed and convex cone such that {$K(x^k)\subset \mathcal{K}$ for all $k\in\ensuremath{\mathbb N}$},
then every sequence generated by the inexact projected gradient method {\rm (}{\bf IPG Method}{\rm)} is bounded and its accumulation points are
weakly efficient solutions.\end{theorem}
\begin{proof} Let us consider the set $T :=\{x\in C: F(x^k)- F(x)\in K(x^k), \text{ for all }k\}$,
and take $\hat{x}\in T$, which exists by {Assumption 4.4.} Since $F$ is a $K$--convex function and {Assumption
4.5} holds, it follows from Lemma \ref{auxi1} that
\begin{equation}\|x^{k+1}- \hat{x}\|^2\leq \|
x^k- \hat{x}\|^2 + 2\beta_k\gamma_k
|\langle
w^k, J_F(x^k)v^k\rangle |,\end{equation}
for all $k\in\ensuremath{\mathbb N}$. By its definition, $v^k$ satisfies a descent condition, namely
$- J_F(x^k)v^k\in K(x^k)$. Hence $\langle w^k, J_F(x^k)v^k\rangle\leq 0.$
Then,
\begin{equation}\label{22}\|x^{k+1}- \hat{x}\|^2-\|
x^k- \hat{x}\|^2 \leq 2\beta_k\gamma_k
|\langle
w^k, J_F(x^k)v^k\rangle |\leq - 2\beta_k\gamma_k
\langle
w^k, J_F(x^k)v^k\rangle.\end{equation}
On the other hand as $\mathcal{K}$ is a closed, convex and pointed cone with nonempty interior,
$\mathcal{K}^*$ is also a closed, convex and pointed cone with nonempty interior. Since $K(x^k)\subset\mathcal{K}$, it holds that $\mathcal{K}^*\subset K^*(x^k)$. Hence $\mathcal{K}^*\subset \cap_{k\in\ensuremath{\mathbb N}} K^*(x^k)$. Let $\omega_1,\ldots,\omega_m\in \mathcal{K}^*$ be a {basis} of $\ensuremath{\mathbb R}^m$, {which exists because ${\rm int}(\mathcal{K}^*)\neq \emptyset$}.
Then, there exist $\alpha^k_1,\ldots, \alpha^k_m\in \ensuremath{\mathbb R}$ such that
$w^k=\sum_{i=1}^m \alpha^k_i\omega_i$. Substituting in \eqref{22},
\begin{equation}\|x^{k+1}- \hat{x}\|^2-\|x^k- \hat{x}\|^2 \leq - 2\beta_k\gamma_k\sum_{i=1}^m\alpha_i^k \langle\omega_i, J_F(x^k)v^k\rangle.\end{equation}
On the other hand, since $- J_F(x^k)v^k\in K(x^k)$, $\omega_1,\ldots,\omega_m\in \mathcal{K}^*\subset K^*(x^k)$ and $\beta_k, \gamma_k> 0$ for all $k\in\ensuremath{\mathbb N}$, it holds that
$\langle \omega_i, -2\beta_k\gamma_k J_F(x^k)v^k\rangle \geq 0$. {Without loss of generality, we can assume that $\|\omega_i\|=1$. Moreover, since $w^k\in{\rm conv}(G(x^k))$ and $G(x^k)\subset \mathbb{S}(0,1)$, the sequence $(w^k)_{k\in\ensuremath{\mathbb N}}$ is bounded, so its coordinates with respect to the fixed basis $\omega_1,\ldots,\omega_m$ are uniformly bounded:} there exists $M>0$ such that, for all $k,i$,
$|\alpha_i^k|\leq M.$
Hence,
\begin{equation}\label{222}\|x^{k+1}- \hat{x}\|^2-\|x^k- \hat{x}\|^2 \leq - 2M\beta_k\gamma_k\sum_{i=1}^m \langle\omega_i, J_F(x^k)v^k\rangle.\end{equation}
By the Armijo-type line-search in \eqref{Armijo-type},
$F(x^{k+1})-F(x^k)-\gamma_k\sigma J_F(x^k)v^k\in -K(x^k).$
Recalling that $\omega_i\in \cap_{k\in\ensuremath{\mathbb N}} K^*(x^k)$, we obtain
$\displaystyle\frac{\langle \omega_i, F(x^k)-F(x^{k+1})\rangle}{\sigma} \geq \langle \omega_i, -\gamma_k J_F(x^k)v^k\rangle.$
It follows from \eqref{222} that
\begin{equation}\label{2221}\|x^{k+1}- \hat{x}\|^2-\|
x^k- \hat{x}\|^2 \leq 2\frac{M}{\sigma}\beta_k\sum_{i=1}^m
\langle\omega_i, F(x^k)-F(x^{k+1})\rangle.\end{equation}
For the quasi-Fej\'er convergence of $(x^k)_{k\in\ensuremath{\mathbb N}}$ to $T$, it is enough to prove that the terms $\beta_k\sum_{i=1}^m
\langle\omega_i, F(x^k)-F(x^{k+1})\rangle\ge 0$ are summable over $k\in\ensuremath{\mathbb N}$.
Since $\beta_k\le \hat\beta$ for all $k\in\ensuremath{\mathbb N}$,
\begin{equation}
\label{des-20}
\sum_{k=0}^{n}\beta_k\sum_{i=1}^m
\langle\omega_i, F(x^k)-F(x^{k+1})\rangle \le \hat \beta \sum_{i=1}^m
\langle\omega_i, F(x^0)-F(x^{n+1})\rangle.\end{equation}
As a consequence of the Armijo-type line-search, we have $F(x^k)-F(x^{k+1})\in K(x^k)\subset \mathcal{K}$. So, $(F(x^k))_{k\in\ensuremath{\mathbb N}}$ is a decreasing sequence with respect to $\mathcal{K}$. Furthermore, {by {\bf Assumption 4.4}}, it is bounded below by $F(\hat{x})$ with $\hat{x}\in T$, also with respect to the order given by $\mathcal{K}$.
Hence, the sequence $(F(x^k))_{k\in\ensuremath{\mathbb N}}$ converges and using \eqref{des-20} in the inequality below, we get \begin{align*}\sum_{k=0}^{\infty}\beta_k\sum_{i=1}^m
\langle\omega_i, F(x^k)-F(x^{k+1})\rangle&=\lim_{n\to\infty}\sum_{k=0}^{n}\beta_k\sum_{i=1}^m
\langle\omega_i, F(x^k)-F(x^{k+1})\rangle\\&\le\hat\beta\lim_{n\to\infty}\sum_{i=1}^m
\langle\omega_i, F(x^0)-F(x^{n+1})\rangle\\&= \hat\beta \sum_{i=1}^m
\langle\omega_i, F(x^0)-\lim_{n\to \infty} F(x^{n+1})\rangle\\&= \hat\beta\sum_{i=1}^m
\langle\omega_i, F(x^0)-F(\hat x)\rangle<+\infty.\end{align*} So, the quasi-Fej\'er {convergence} is fulfilled.
Since $\hat{x}$ is an arbitrary element of $T$, it is clear that $(x^k)_{k\in\ensuremath{\mathbb N}}$ converges quasi-Fej\'er to $T$. Hence, by Fact \ref{cuasi-Fejer}, it follows that $(x^k)_{k\in\ensuremath{\mathbb N}}$ is bounded. Therefore, $(x^k)_{k\in\ensuremath{\mathbb N}}$ has at least one accumulation point, which, by Theorem \ref{t1}, is stationary. By Proposition \ref{note1}, this point is also a weakly efficient {solution}, because $F$ is $K$--convex. Moreover, since $C$ is closed and the whole sequence is feasible, this accumulation point belongs to $C$. \end{proof}
\section{Another Variable Order}\label{sect6}
As was noticed in Section $6$ of \cite{luis-gema-yunier-2014}, the variable order structure can be formulated in two different ways. Moreover, Examples 3.1 and 3.2 in \cite{luis-gema-yunier-2014} illustrate the differences between considering one order or the other.
Thus, the variable order for the optimization problem may also be defined through a cone-valued mapping
$\hat K:\ensuremath{\mathbb R}^m \rightrightarrows \ensuremath{\mathbb R}^m$, where $\hat K(y)$ is a convex, closed and pointed cone for all $y\in F({C})\subset \ensuremath{\mathbb R}^m$ {where $C$ is the feasible set}. It is worth noting that the domain of the new mapping $\hat K$ lies in $\ensuremath{\mathbb R}^m$, while the orderings considered in the previous sections are defined by applications whose domain is $\ensuremath{\mathbb R}^n$. As already discussed in \cite{luis-gema-yunier-2014}, convexity can be defined in this setting, and convex functions satisfy nice properties such as the existence of subgradients.
{Given a closed and convex set $C$, we say that $x^*\in C$ solves the optimization problem \begin{equation}\label{p234}\hat K-\min F(x) \text{ s.t. } x\in C,\end{equation} if, for all $x\in C$,}
$$F(x)-F(x^*)\notin {-}\hat K(F(x^*))\setminus \{0\}.$$Here we can assume that
$\hat K:F(C)\subseteq \ensuremath{\mathbb R}^m \rightrightarrows \ensuremath{\mathbb R}^m$. We shall mention that the main difference between the above problem and \eqref{(P)} lies in the definition of the variable order, which is now given by $\hat K$.
For a more detailed study of the properties of minimal points, their characterizations, and the convexity concept in this setting, see \cite{gabrielle-book, luis-gema-yunier-2014}.
In this framework the definitions of weak solution and stationary point are analogous. The main difference is that instead of $K(x^*)$, the cone $\hat K(F(x^*))$ is considered to define the variable partial order. That is, the point $x^*$ is stationary iff, for all $d\in C-x^*$, we have
$ J_F(x^*)d\notin -{\rm int}(\hat K(F(x^*))).$
Then, similarly as in the case of problem \eqref{(P)}, the following holds.
\begin{proposition} If $F$ is a continuously differentiable function and $C$ is a convex set, weak solutions of \eqref{p234} are stationary points. Moreover if $F$ is also convex with respect to $\hat K$, the converse is true.\end{proposition}
\begin{proof} It follows along the same lines as the proofs of Propositions \ref{primal} and \ref{note1}: the Taylor expansion of $F$ and the {closedness} of $\hat K(F(x^*))$ {imply} the result. \end{proof}
The inexact algorithm is adapted in the following way:
\begin{center}\fbox{\begin{minipage}[b]{\textwidth}
\noindent{{\bf F}-{\bf I}nexact {\bf P}rojected {\bf G}radient Method ({\bf FIPG Method}).} Given $0<\bar \beta\le \beta_k\le\hat{\beta}<+\infty$, $ \delta\in(0,1]$ and $\sigma, {\gamma} \in(0,1)$.
\medskip
\noindent {\bf Initialization:} Take $x^0\in \mathbb{R}^n$ and $\beta_0$.
\medskip
\noindent {\bf Iterative step:} Given $x^k$ and $\beta_k$, compute $v^k$, a $\delta$-approximate solution of $(Q_{x^k})$.
If $v^k=0$, then stop. Otherwise compute
\begin{equation}\label{Armijo1}
\ell(k):=\min\left\{\ell\in\mathbb{Z}_+\colon F(x^k)+\sigma \gamma^{\ell} J_F(x^k)v^k- F(x^k+\gamma^{\ell}v^k)\in \hat K(F(x^k))\right\}.
\end{equation}
\noindent Set
$
x^{k+1}=x^k+\gamma_kv^k\in C,
$
with $\gamma_k=\gamma^{\ell(k)}$.
\end{minipage}}\end{center}
Here the auxiliary problem
$(Q_{x^k})$ is defined as
\begin{equation}\tag{$Q_{x^k}$}\label{iee}
\min_{v\in C-x^k}\left\{ \frac{\| v\|^2}{2}+\beta_k \phi(x^k,v)\right\},
\end{equation} where
$\phi:\mathbb{R}^n\times\mathbb{R}^n\to \mathbb{R}$,
\begin{equation}\label{asconsecq}\phi(x,v):=\max_{y\in G(F(x))} y^T J_F(x)v,\end{equation} for $G:\mathbb{R}^m\rightrightarrows \mathbb{R}^m$ generator of ${\hat K}^*(F(x)):=\left[{\hat K}(F(x))\right]^*$.
With this ordering{,} the function $\phi$ characterizes the stationarity. Furthermore, subproblem $(Q_{x^k})$ has a unique solution which is $v^k=0$ if and only if $x^k$ is a stationary point.
Results analogous to those proven in Propositions \ref{prop2} and \ref{item(iv)} are also true. These facts imply that {\bf FIPG Method} is well defined: if it stops, then the computed point is a stationary point; otherwise, there exists $\ell(k)$ satisfying the Armijo-type line-search \eqref{Armijo1}. So, only the convergence of a sequence generated by the method must be studied.
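In practice, when the ordering cone is polyhedral, the membership test in the line-search \eqref{Armijo1} reduces to finitely many inner products, since $z\in \hat K$ iff $\langle w,z\rangle\ge 0$ for every generator $w$ of the dual cone $\hat K^*$. A minimal sketch of this test and of the resulting backtracking loop (Python; the cone, map and parameters below are toy assumptions, not data from the paper):

```python
import numpy as np

def in_cone(z, dual_gens, tol=1e-12):
    """z lies in the cone K iff <w, z> >= 0 for every generator w of K*."""
    return all(np.dot(w, z) >= -tol for w in dual_gens)

def armijo_ell(Fx, JFv, F_trial, sigma, gamma, dual_gens):
    """Smallest ell with F(x) + sigma*gamma^ell*J_F(x)v - F(x + gamma^ell v) in K."""
    ell = 0
    while not in_cone(Fx + sigma * gamma**ell * JFv - F_trial(gamma**ell), dual_gens):
        ell += 1
    return ell

# Toy check with the fixed Pareto cone K = R^2_+ (dual generators e1, e2),
# F(x) = (x, x^2) at x = 1 and direction v = -1 (all hypothetical data).
dual_gens = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
Fx = np.array([1.0, 1.0])                      # F(1)
JFv = -np.array([1.0, 2.0])                    # J_F(1) v with v = -1
F_trial = lambda g: np.array([1.0 - g, (1.0 - g)**2])
ell = armijo_ell(Fx, JFv, F_trial, sigma=0.9, gamma=0.5, dual_gens=dual_gens)
```

For non-polyhedral cones (e.g. the second-order cones of the examples below), the same loop applies with `in_cone` replaced by the cone's defining inequality.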
As in the last section, we analyze the convergence of the functional values sequence $(F(x^k))_{k\in\ensuremath{\mathbb N}}$.
\begin{lemma}\label{lie}Suppose that $x^*$ is an accumulation point of the sequence $(x^k)_{k\in\ensuremath{\mathbb N}}$ generated by {\bf FIPG Method}.
If $\cup_{x\in C} \hat K(F(x))\subseteq \mathcal{K}$, where
$\mathcal{K}$ is a closed, pointed and convex cone, $G(F(x))$ is a closed application such that $\mbox{d}_H(G(F(x)),G(F(\bar{x})))\leq L_{GF}\|x-\bar{x}\|$, for all
$x,\bar{x}\in C$, then $\lim_{k\to\infty}F(x^k)= F(x^*)$.
\end{lemma}
\begin{proof} The result is again proven by the existence of a non-increasing sequence with an accumulation point. \end{proof}
Next, with the help of the last Lemma, {we summarize the convergence} of the generated sequence with the following result.
\begin{theorem}\label{t11}Suppose that
\item[ {\bf(a)}] $\cup_{x\in C} \hat K(F(x))\subset \mathcal{K}$, where
$\mathcal{K}$ is a closed, pointed and convex cone. \item[ {\bf(b)}] {The application $G\circ F$ is closed}. \item[ {\bf(c)}] $\mbox{d}_H(G(F(x)),G(F(\hat{x})))\leq L_{GF}\|x-\hat{x}\|$, for all $x,\hat{x}\in C$.
\item[ {\bf(d)}] $J_F(x)$ is a locally Lipschitz function.
\item [] {If $(\beta_k)_{k\in\ensuremath{\mathbb N}}$ is a bounded sequence}, then all accumulation points of $(x^k)_{k\in\ensuremath{\mathbb N}}$ generated by {\bf FIPG Method} are stationary points of problem \eqref{p234}.
\end{theorem}
\begin{proof} It follows from the same lines of the proof of Theorem \ref{t1}. \end{proof}
We want to point out that in the last theorem the condition $$\mbox{d}_H(G(F(x)),G(F(\hat x)))\leq L_{GF}\|x-\hat{x}\|, \quad \forall x,\hat x\in C$$ {replaces the analogous one given in Theorem \ref{t1}, i.e.,
$\mbox{d}_H(G(y),G(\hat{y}))\leq L_{G}\|y-\hat{y}\|$ for all $y,\hat y$ in the domain of $G$.
Moreover, if $F$ is Lipschitz on $C$, then this last condition implies condition (c) in Theorem \ref{t11} by taking $y=F(x)$ and $\hat y=F(\hat x)$. }
{The next result is an extension to the variable order setting of Theorem 5.2 of \cite{[19]}}.
\begin{theorem} Assume that $F$ is $\hat K$--convex and additionally: \item[ {\bf (a)}] If $(z^k)_{k\in\ensuremath{\mathbb N}}\subset F(C)$ is a sequence such that $z^k- z^{k+1}\in \hat K(F(z^k))$ for all $k\in\ensuremath{\mathbb N}$
and $z\in C$, $z^k-z\in \mathcal{K}$ for some closed, convex and pointed cone $\mathcal{K}$, $\cup_{k\in\ensuremath{\mathbb N}} \hat K(F(z^k))\subseteq \mathcal{K}$, then there exists $\hat{z}\in C$ such that $F(\hat{z})\preceq_{\mathcal{K}} z^k$ for all $k\in\ensuremath{\mathbb N}$.
\item[ {\bf (b)}] The search direction $v^k$ is s-compatible at $x^k$, i.e., $v^k =P_{C-x^k} (-\beta J_F(x^k)^Tw^k)$, where $w^k\in {{\rm conv}(G(x^k))}$, for all $k\in\ensuremath{\mathbb N}$. \item[ {\bf (c)}]${\rm int}(\cap_{k\in\ensuremath{\mathbb N}} \hat K(F(x^k))) \neq \varnothing$.\item[ {\bf (d)}]There exists $\mathcal{K}$, a pointed, closed and convex cone such that $ \hat K(F(x^k))\subseteq \mathcal{K}$ for all $k\in\ensuremath{\mathbb N}$.
\item [] Then every sequence generated by {\bf FIPG Method} is bounded and its accumulation points are
weakly efficient solutions.\end{theorem}
\begin{proof} It follows along the same lines as the proof of Theorem \ref{teo4.3g}, using now the new variable order structure. \end{proof}
\section{Illustrative Examples}
In this section we present three examples, two for problem \eqref{(P)} and one for problem \eqref{p234}, illustrating how both proposed methods {work} {starting at ten different random initial points}. We verify our assumptions in each problem and make some comparisons between the proposed methods, varying the exactness of the approximate gradient direction, and their exact versions.
The algorithms were implemented in MatLab R2012 and ran on an Intel(R) Atom(TM) CPU N270 at 1.6GHz. All starting points are not solutions and {are} randomly generated. {Besides the fact that computing the positive dual cone of a given cone may not be an easy task}, the computation of (approximate) directions is, in general, complicated. Indeed, by the definition, the optimal value of problem $(P_x)$ must be known. The use of $s$-compatible directions at iteration $k$ of the proposed methods, see Definition \ref{scompatible}, is recommended in the case in which the exact projection onto the set $C-x^k$ is not too complicated.
This is the case for the feasible set of the next example. Clearly, in all examples below {the cone-valued mappings defining the order are closed} applications and Lipschitz with respect to the Hausdorff distance. {The stopping criteria were $\|v^k\|<10^{-4}$ or a maximum of 30 iterations. The solutions were displayed with four digits, CPU time was recorded in seconds, and the number of iterations was also shown in each case.}
\begin{example}
We consider the vector problem as \eqref{(P)} with $$K - \min\; F(x)=\left(x+1,x^2+1\right),\text{ s.t. } \; x\in [0,1],$$ where $F:\ensuremath{\mathbb R}\rightarrow \ensuremath{\mathbb R}^2$ and the variable order is given by $K:\ensuremath{\mathbb R}\rightrightarrows \ensuremath{\mathbb R}^2$, $$K(x):=\left\{(z_1,z_2)\in \ensuremath{\mathbb R}^2: z_1\geq 0,\;(x^2+1)z_1-(x +1)z_2\leq 0\right\}.$$
In this model
the closed interval $[0, \sqrt{2}-1]\approx[0,\,0.4142]$ is the set of minimizers.
{{\bf IPG Method} was run ten times using ten random initial points outside the set of minimizers, and it ended at solution points} obtained after verification of the stopping criterion.
\noindent The method gives the following data:
{\small
$$\begin{array}{|l|c|c|c|c|c|c|c|c|c|c|}\hline {\bf {Instances}}& {\bf 1}&{\bf 2}&{\bf 3}&{\bf 4}&{\bf 5}&{\bf 6}&{\bf 7}&{\bf 8}&{\bf 9}&{\bf 10}\\\hline\hline {\bf Initial\, Points} & 0.6557 & 0.6948& 0.8491 & 0.9340 & 0.6787& 0.7577 & 0.7431 & 0.4387 & 0.6555 &0.9502
\\\hline {\bf Solutions}& 0.4115 \,& 0.4128 \,& 0.4140 \,& 0.4135 \,& 0.4116 \,& 0.4131 \,& 0.4127 \,& 0.4136 \,& 0.4114\,& 0.4130 \,
\\\hline
{\bf CPU \; Time} &0.0001& 0.0250&0.0001& 0.0156 & 0.0001 & 0.0001& 0.0001 &0.1094& 0.0156& 0.0781 \\\hline {\bf N\frac{o}{}\, Iterations}&16 & 19 & 23& 26& 17 & 20& 20& 4 & 16& 28
\\\hline\end{array}$$}\\
Note that in all cases the computed solution point{s} lie in the set of optimal solutions. {Moreover, to illustrate how the inexactness of {\bf IPG Method} affects its performance, we report the average CPU time and the average number of iterations over the $10$ instances of the above example for different values of the approximation parameter $\delta$.} \\
{
{\small
$$\begin{array}{|l|c|c|c|c|}\hline {\delta-{\bf Approximation \; Parameters}}\;& {\delta=0}&{\delta=0.25}&{\delta=0.5}&{\delta=0.75}\\\hline\hline {\bf Average \; of \; CPU \; Time} &\; 0.4819&\; 0.2232&\;0.0244&\; 0.0251\\\hline {\bf Average \; of\; N\frac{o}{}\, Iterations}&9 & 15 & 19 & 21
\\\hline\end{array}$$} \\
The above table shows that when the inexactness increases, CPU time decreases but the number of iterations increases. }
\end{example}
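This instance is easy to reproduce numerically. The following sketch implements the main loop of {\bf IPG Method} for it (Python; assumptions for illustration only: the one-dimensional subproblem $(P_{x^k})$ is solved by a simple grid search, $G(x)$ is taken as the two normalized extreme rays of $K^*(x)$, and the parameters $\beta=1$, $\sigma=0.1$, $\gamma=0.5$ are illustrative, not those of the MatLab runs reported above):

```python
import numpy as np

F  = lambda x: np.array([x + 1.0, x**2 + 1.0])   # objective F: R -> R^2
JF = lambda x: np.array([1.0, 2.0 * x])          # Jacobian (one column, since n = 1)

def dual_gens(x):
    # K(x) has extreme rays (0, 1) and (x + 1, x^2 + 1), so its dual K*(x)
    # is generated by (1, 0) and (-(x^2 + 1), x + 1) (normalized below).
    g2 = np.array([-(x**2 + 1.0), x + 1.0])
    return [np.array([1.0, 0.0]), g2 / np.linalg.norm(g2)]

def phi(x, v):
    # phi(x, v) = max_{y in G(x)} y^T J_F(x) v, with G(x) the extreme rays
    return max(np.dot(y, JF(x)) * v for y in dual_gens(x))

def ipg(x0, beta=1.0, sigma=0.1, gamma=0.5, tol=1e-3, max_iter=50):
    x = x0
    for _ in range(max_iter):
        vs = np.linspace(-x, 1.0 - x, 4001)      # directions in C - x, C = [0, 1]
        v = vs[np.argmin([0.5 * t**2 + beta * phi(x, t) for t in vs])]
        if abs(v) < tol:                         # approximately stationary
            return x
        g = 1.0                                  # Armijo-type backtracking,
        while not all(np.dot(w, F(x) + sigma * g * JF(x) * v - F(x + g * v)) >= -1e-12
                      for w in dual_gens(x)):    # membership tested via dual generators
            g *= gamma
        x = x + g * v
    return x

x_star = ipg(0.9)   # lands in the solution set [0, sqrt(2) - 1]
```

Any accumulation point returned this way lies in the interval of minimizers $[0,\sqrt{2}-1]$; the exact limit depends on the starting point and on $\beta$, which is consistent with the spread of solutions in the table above.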
The next example is a non-convex problem corresponding to the model studied in the previous section.
\begin{example}[{\rm cf. Example $4.13$ of \cite{ap11}}] Consider the following vector problem of the form \eqref{p234}:
$$\hat{K} - \min\; F(x_1,x_2)=\left( x_1^2, x_2^2\right),\text{ s.t. }\pi\leq x_1^2+x_2^2\leq 2\pi,$$
where $F:\ensuremath{\mathbb R}^2\rightarrow \ensuremath{\mathbb R}^2$ and the variable order is induced by the cone-valued mapping $\hat{K}:\ensuremath{\mathbb R}^2\rightrightarrows \ensuremath{\mathbb R}^2$, $$\hat{K}(y):=\left\{z=(z_1,z_2)\in \ensuremath{\mathbb R}^2: \|z\|_2\leq\left[ \left(\begin{matrix} 2 &1\\-1& -1\end{matrix}\right)y \right]^T z/\pi\right\}.$$
The set of solutions (stationary points) of this problem is directly computed by using the definition of $S^{s}$ given in \eqref{stationary-inclusion}; it is $$\left\{(x_1,x_2)\in \ensuremath{\mathbb R}^2: x_1^2+x_2^2=\pi\quad \mbox{or}\quad x_1^2+x_2^2=2\pi \right\} .$$
{\bf FIPG Method} computes the following points:
{\small
$$\begin{array}{|c|c|c|c|c|c|c|c|c|c|c|}\hline{\bf {Instances}}&{\bf 1}&{\bf 2}&{\bf 3}&{\bf 4}&{\bf 5}&{\bf 6}&{\bf 7}&{\bf 8}&{\bf 9}&{\bf 10}\\\hline\hline {\bf Initial\, Points}& 1.8650 & 1.7525 & 2.4190& 1.9573& 0.7931& 1.2683& 1.8135& -2.0485 &-0.6446& -0.8561\\
& 1.6400 & 1.6350& 0.0835& 0.2813& -2.0321& -1.6814& 0.3050
&0.3229 &1.9606& 2.1011\\\hline
{\bf Solutions}&\! \!
1.1632 \!\! &\! \! 0.9850 \!\! &\! \! 1.7705 \!\!&\! \! 1.7492\!\!&\! \! 0.8634 \!\! &\! \! 1.4208 \!\! &\! \! 1.7456\!\!&\! \! -1.7438\!\!&\! \! -0.8016\!\!&\! \!-0.9535\! \!\\
& 2.2204& 2.3050& 0.0841 & 0.2859& -2.3532 & -2.0650& 0.3074& 0.3172& 2.3750& 2.3182
\\\hline
{\bf CPU\, Time}& 0.2969 & 0.2344 & 0.2188 & 0.1719 & 0.1563 & 0.5625 & 0.2656& 0.1875 & 0.2031& 0.3125 \\\hline\!\!{\bf Number\; of\; Iterations}\!\!&2 & 4& 2& 2& 4& 17 & 3& 2& 2& 5\\\hline\end{array}$$}\\
Note that the solutions computed at all iterations of the proposed method belong to the set of optimal solutions.\end{example}
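The claim can be checked directly: every pair reported in the table should lie on one of the two boundary circles $x_1^2+x_2^2=\pi$ or $x_1^2+x_2^2=2\pi$. A small verification sketch (values copied from the solutions table above):

```python
import math

# (x1, x2) solution pairs from the FIPG table above
sols = [(1.1632, 2.2204), (0.9850, 2.3050), (1.7705, 0.0841),
        (1.7492, 0.2859), (0.8634, -2.3532), (1.4208, -2.0650),
        (1.7456, 0.3074), (-1.7438, 0.3172), (-0.8016, 2.3750),
        (-0.9535, 2.3182)]

def on_stationary_set(x1, x2, tol=1e-2):
    """True if (x1, x2) lies (within tol) on one of the two circles
    x1^2 + x2^2 = pi or x1^2 + x2^2 = 2*pi forming the stationary set."""
    r2 = x1 ** 2 + x2 ** 2
    return min(abs(r2 - math.pi), abs(r2 - 2 * math.pi)) < tol
```

All ten pairs from the table pass this test, consistent with the remark above.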
{ The last example is studied in detail in Section 9.2 of the book \cite{gabrielle-book}.
\begin{example}[{\rm cf. Example $9.5$ of \cite{gabrielle-book}}] Consider the following problem
$$K - \min\; F(x)=(x_1,x_2), \,\text{ s.t. }\; x\in C$$ where
$$C:=\left\{x=(x_1,x_2)\in [0,\pi]\times [0,\pi] \,\left| \begin{array}{ll} x_1^2+x_2^2-1-\displaystyle\frac{1}{10}\cos\left(16 \arctan\left(\displaystyle\frac{x_1}{x_2}\right)\right)\geq 0,\\[2.5ex] (x_1-0.5)^2+(x_2-0.5)^2\leq 0.5.\end{array}\right.\right\}.$$
The variable ordering structure is given by the map $K:\ensuremath{\mathbb R}^2\rightrightarrows \ensuremath{\mathbb R}^2$, $$K(x):=\left\{z\in \ensuremath{\mathbb R}^2: \|z\|_2\le \frac{2}{\min_{i=1,2}x_i}x^Tz\right\}.$$
{\bf IPG Method}, starting at ten random initial points, performs as follows:
{\small
$$\begin{array}{|c|c|c|c|c|c|c|c|c|c|c|}\hline{\bf {Instances}}&{\bf 1}&{\bf 2}&{\bf 3}&{\bf 4}&{\bf 5}&{\bf 6}&{\bf 7}&{\bf 8}&{\bf 9}&{\bf 10}\\\hline\hline {\bf Initial\, Point}\;
& 0.9735& 0.7932& 0.8403& 0.9847 & 0.7508& 0.9786 & 0.9790& 0.9679 & 0.8082 & 0.8965\\
& 0.6608 & 0.9050 & 0.8664 & 0.6228& 0.9326& 0.6448& 0.6433& 0.6762 & 0.8937 & 0.8046
\\\hline
{\bf Solution}\;\; &
0.9011 & 0.7407 & 0.7854& 0.9096& 0.7004& 0.9050& 0.9054& 0.8967 & 0.7551 & 0.7754\\&
0.5589\,& 0.7916\,& 0.7541\,& 0.5228\,& 0.8182\,& 0.5437\,& 0.5423\,& 0.5735 \,& 0.7806 \,& 0.5859\,\\\hline
{\bf CPU\; Time}& 0.0938 & 0.0313 & 0.0625 & 0.0313 & 0.0469 & 0.0156& 0.0313& 0.0313 & 0.0313& 0.0156\\\hline\end{array}$$}\\
{In this case, the maximum number of iterations is reached.} Nevertheless, good approximations to minimal elements have been computed in each of the ten instances presented in the above table.
\end{example}
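Feasibility of the reported points with respect to $C$ can likewise be verified numerically; a minimal sketch of the two constraint evaluations (the function name and tolerance are illustrative):

```python
import math

def in_C(x1, x2, tol=1e-9):
    """Membership test for the feasible set C of this example."""
    if not (0 <= x1 <= math.pi and 0 <= x2 <= math.pi):
        return False
    g1 = x1 ** 2 + x2 ** 2 - 1 - 0.1 * math.cos(16 * math.atan(x1 / x2))
    g2 = (x1 - 0.5) ** 2 + (x2 - 0.5) ** 2
    return g1 >= -tol and g2 <= 0.5 + tol

# the first three solutions from the table above
approx_solutions = [(0.9011, 0.5589), (0.7407, 0.7916), (0.7854, 0.7541)]
```

Each solution reported in the table satisfies both constraints, as expected for points approximating minimal elements.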
For all the above examples, we have also run the corresponding exact versions from the same initial points; the comparison between the exact and inexact versions in terms of CPU time (in seconds) is shown in the following figures.
\begin{figure}[!ht]
\begin{subfigure}[t]{0.3705\textwidth}
\centering
\includegraphics[width=\textwidth]{Figure_1.png}
\caption*{\label{fig:pp-twosubspaces1}Example 6.1}
\end{subfigure}\!\!\!\!\!\!\!\!\!\!
\begin{subfigure}[t]{0.3705\textwidth}
\centering
\includegraphics[width=\textwidth]{Figure_2.png}
\caption*{\label{fig:pp-twosubspaces2} Example 6.2}
\end{subfigure}\!\!\!\!\!\!\!\!\!\!
\begin{subfigure}[t]{0.3705\textwidth}
\centering
\includegraphics[width=\textwidth]{Figure_3.png}
\caption*{\label{fig:pp-twosubspaces3} Example 6.3}
\end{subfigure}\\
This shows that the inexact versions are significantly faster (in CPU time) than the exact ones in almost all instances.
However, for all initial data of the above examples, the exact versions of the proposed methods take fewer iterations than the inexact ones to converge to a solution. It is worth emphasizing that the exact versions must solve the harder subproblems $P_{x^k}$ and $Q_{x^k}$ exactly to find the descent direction at each iteration $k$. This is a serious drawback from the computational implementation point of view (avoidable for the above examples), and it makes the exact implementation inefficient in general.
\end{figure}
\section{Final Remarks}The projected gradient method is one of the classical schemes for solving constrained optimization problems. In this paper, we have extended the exact, unconstrained scheme proposed in \cite{[19]}. The new scheme solves smooth, constrained vector optimization problems under a variable ordering by taking inexact descent directions. This inexact projected approach opens the way to other efficient variants for this kind of problem. As shown in the examples above, it is more effective and easier to implement than the exact one. Moreover, constrained variable order optimization problems can now be solved using the proposed methods.
However, the full convergence of the generated sequence to a weakly efficient solution is still an open problem in the variable order setting.
Future work will address this theoretical issue, as well as particular instances in which the objective function is non-smooth, extending the projected subgradient method proposed in \cite{yunier-2013} to the variable order setting.
The numerical behavior of these approaches under $K$-convexity of the non-smooth objective function remains open and is a promising direction for investigation. Despite its computational shortcomings, this work hopefully lays the foundation for more efficient and general algorithms in this setting.
\medskip
\noindent {\large \bf Acknowledgements:} JYBC was partially supported by the National Science Foundation (NSF) Grant DMS - 1816449.
The results of this paper were developed during the stay of the second author at the University of Halle-Wittenberg supported by the Humboldt Foundation.
The authors would like to extend their gratitude towards anonymous referees whose suggestions helped us to improve the presentation of this manuscript.
\section{Background}
First-principles many-body theory provides invaluable insights for predicting and deciphering the behavior of electronic states without relying on empirical parameters. In particular, one encounters the largest need for predictive first principles theory in systems with emergent phenomena. For instance, the coupling between nominally weakly correlated subsystems may lead to new states exhibiting all hallmarks of strong correlations. Describing such phenomena is often hindered by the large system sizes that pose an insurmountable challenge for conventional calculations. Here, by presenting new computational developments we expand our previous work that enabled the application of the ab-initio many-body theory to giant systems~\cite{brooks2020stochastic}.
We exemplify the new methodology by studying twisted bilayer graphene (tBLG), which is a prototypical moir\'{e} superstructure in which the coupling of individual monolayers is controlled primarily by the twist angle, $\theta$.\cite{cao2018correlated} As $\theta$ approaches 1.1$^\circ$ ``magic angle'', tBLG transitions from a simple semimetal to a system hosting correlated electronic states\cite{cao2018correlated}. Under charge carrier doping tBLG at (or near) the magic angle exhibits superconducting, insulating, and magnetic properties~\cite{cao2018correlated,cao2018unconventional,yankowitz2019tuning,yankowitz2018dynamic,sharpe2019emergent,lu2019superconductors,saito2020independent,choi2019electronic,xie2019spectroscopic,kerelsky2019maximized,jiang2019charge}. These emergent states are associated with a shallow moir\'{e} potential, which localizes electrons in so-called AA stacking regions of the superstructure~\cite{trambly2010localization,koshino2018maximally,kang2018symmetry,calderon2020interactions,goodwin2019attractive,kerelsky2019maximized} and is responsible for the formation of flat (i.e., dispersionless) bands near the Fermi level~\cite{bistritzer2011moire,utama2020visualization}.
Fundamentally, the electron localization is governed by the strong interaction between the monolayers and the states' hybridization near the respective Dirac points. On the one hand, this is realized at small twists near magic angle, but equally well by the bilayer's in-plane strain or compression, as was shown by recent experiments and theoretical works~\cite{yankowitz2018dynamic,yankowitz2019tuning,bultnick2021}. In the latter case, the interlayer distance reduction drives electronic localization for angles substantially larger than 1.1$^\circ$~\cite{yankowitz2018dynamic,yankowitz2019tuning}. Unlike the twist angle, the degree of compression can be adjusted even after the deposition of the layers and hence represents a unique control mechanism for realizing correlated states. Yet, it has been so far studied to a lesser degree.
At high pressures, graphene becomes thermodynamically unstable; a suitably chosen combination of pressure transmission media and encapsulation allows reaching pressures up to 37~GPa~\cite{tao2020raman,clark2013few} for tBLG. Even higher compression may be possible for up to 50~GPa before graphene inevitably transforms to diamond~\cite{clark2013few}. Nevertheless, even with the improved experimental setup, the tBLG electronic states cannot be probed in the same detail as at ambient conditions~\cite{tao2020raman,clark2013few}. Further, it is unknown whether the decreasing interlayer spacing affects the entire valence and conduction states or leads to selective hybridization of Dirac point states. Finally, weakly correlated states dynamically screen the many-body interactions within the flat bands and critically affect the emergent many-body phenomena~\cite{goodwin2019attractive,pizarro2019internal}; however, the effect of compression on screening is also unknown.
Previous studies of the tBLG electronic structure were limited to tight-binding and continuum models~\cite{dosSantos2012,lian2019twisted,yuan2018model,yuan2018erratum,po2018origin,xu2018topological,roy2019unconventional,volovik2018graphite,padhi2018doped,dodaro2018phases,wu2018theory,isobe2018unconventional,huang2019antiferromagnetically,zhang2019nearly,kang2018symmetry,koshino2018maximally,kennes2018strong,zhang2018lowest,pizarro2019internal,guinea2018electrostatic,zou2018band,lima2017stacking,rademaker2018charge,angeli2018emergent,goodwin2020hartree} that demonstrated the magic angle-induced flat band formation at the Fermi level. The model parameters were usually determined from mean-field (DFT) calculations, which markedly deviate from quasiparticle energies.\cite{martin2016interacting} The ground state of tBLG at the magic angle was further investigated by atomistic Hartree and Hartree-Fock calculations based on the continuum model~\cite{xie2020nature,cea2020band,zhang2020correlated,bultinck2020ground,liu2021nematic,goodwin2020hartree,liu2021theories, gonzalez2020time, rademaker2019charge}. These calculations revealed that unscreened Coulomb interactions are responsible for stabilizing the insulating states in tBLG. Recently, exact diagonalization of downfolded many-body Hamiltonians (within a subspace of flat bands) was used to address the superconducting regime of tBLG.~\cite{potasz2021exact}
Investigations of the high-pressure behavior remain scarce and limited to mean-field or model-Hamiltonian treatments,~\cite{munoz2016bilayer,chittari2018pressure,carr2018pressure,padhi2019pressure,lin2020pressure,green2020landau} which focused on describing the dispersion of states near the Fermi level. Further, the strength of the electron-electron interaction at high pressure has not been investigated either. Thus, it remains unclear whether the flat bands' formation under compression is equivalent to that at (or near) the magic angle. While these questions can be answered by first-principles many-body approaches, they have not been applied until now due to their enormous computational cost.
In this work, we \textit{overcome practical limitations} of \emph{ab initio} many-body methods: we propose a series of new developments in the stochastic many-body perturbation theory (MBPT) techniques\cite{vlcek2017stochastic,Vlcek2018swift,neuhauser2014breaking,vlcek2019stochastic,vlvcek2018simple,vlvcek2018quasiparticle} which can readily elucidate how the electronic structure behaves in giant moir\'{e} systems. We investigate tBLG with a large twist angle of $\theta \approx6^{\circ}$ (with a supercell of size $4\times7$~nm containing 2184 atoms, i.e., 8736 valence electrons), which is weakly correlated at ambient conditions but develops flat bands under high compression. The pressure-induced coupling of the two monolayers is substantially different from the twist-induced one and affects only states near the Fermi level.
Further, we develop a stochastic constrained random-phase approximation (s-cRPA), which efficiently (i.e., with minimal computational cost) maps the correlated subspace on a Hubbard model with \textit{dynamical} on-site interactions, $U(\omega)$. We find that the electron-electron interactions in the flat bands are more screened under compression. However, the screening does not fully cancel out the bare Coulomb interaction. Thus, the effective interaction increases with pressure. As a result, the strong correlation is not only driven by vanishing band dispersion but also by increased on-site terms. These results are the same for Dyson quasiparticle orbitals and mean field canonical single-particle orbitals. We present our results first, then their significance and implications; the new theory is discussed in the Methods at the end of this work.
\section{Results}\label{sec:theory}
The simulations employ rectangular supercells of the graphene bilayer with twist angles $\theta=0^{\circ}$ ($24\times 12$ conventional unitcells with 9216 valence electrons in total) and $\theta \approx6^{\circ}$ ($1\times3$ moir\'e conventional cells or $\sim 16.5\times 16.5$ in terms of conventional unitcells with 8736 valence electrons in total). Within our real-space methodology, the Brillouin zone of the supercell is sampled by the $\Gamma$-point. The $1\times3$ moir\'e supercell makes our real-space grid commensurate with high-symmetry $\tilde K$ - point (coinciding with the Dirac point location), where the electronic localization principally occurs. We extract bandstructures with the projector-based energy-momentum analysis. \cite{brooks2020stochastic,popescu2012extracting,huang2014general,medeiros2014effects, PhysRevB.71.115215,boykin2007approximate} We study undoped tBLG, and thus, the Dirac point in our calculations is aligned with the Fermi level. Further, spin and valley symmetry breaking are not considered here. The ideal bilayer interlayer distance was first optimized with the first-principles DFT calculations with van der Waals corrections. For simplicity and to separate out the effects of electron-electron interactions treated by our novel methodology, we employed flat geometry, as lattice reconstructions are significant for small twist angles~\cite{yoo2019atomic,liang2020effect,cantele2020structural}. The equation of state is extracted from the total energy calculations for bilayers with variable interlayer distances (the details of the ground state calculations and the pressure estimation are provided in the Methods and SI). Our results are in excellent agreement with previous theoretical calculations employing weak interlayer interactions using the random phase approximation\cite{leconte2017}.
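The pressure extraction from the total-energy curve amounts to a numerical derivative of $E$ with respect to the interlayer distance; a minimal sketch, with a harmonic toy $E(d)$ standing in for the actual DFT data (the stiffness, area, grid, and units are illustrative, not the values used in this work):

```python
import numpy as np

def pressure_vs_distance(d, E, area):
    """P(d) = -(1/A) dE/dd from a table of total energies E at
    interlayer distances d; central differences in the interior."""
    return -np.gradient(E, d) / area

# toy harmonic energy curve around an equilibrium distance d0 = 3.4
d = np.linspace(2.4, 3.4, 11)
E = 0.5 * 10.0 * (d - 3.4) ** 2   # illustrative stiffness k = 10
P = pressure_vs_distance(d, E, area=100.0)
# for a quadratic E, the interior points obey P = k * (d0 - d) / area
```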
\subsection{Pressure-induced localization} \label{results1}
This section provides computational results obtained with the stochastic $GW$ approximation (see Methods) for both diagonal and off-diagonal parts and discusses the role of the orbital basis. We will first report results for the ideal Bernal-stacked graphene bilayer and then for the tBLG.
As a first step, we investigate the role of non-local and dynamical correlations in an ideal graphene bilayer. We compare the mean-field DFT and QP energies computed using the diagonal approximation to the self-energy (i.e., $\Delta=0$ in Eq.~\eqref{eqp} of Methods). The corresponding bandstructures and the QP densities of states (DOS) are in Fig.~\ref{fig:ideal_dos}a. As expected,\cite{heske1999band,strocov2001photoemission,gruneis2008electron} MBPT significantly increases the bandwidth (compared to DFT), leading to an excellent agreement with the available experimental data.\cite{ohta2007interlayer,ohta2006controlling}
\begin{figure*}
\centering
\includegraphics[width=6.69in]{./Fig1.pdf}
\caption{(a) Left: DFT (black) and $GW$ (blue) band structure of ideal bilayer graphene. Due to the rectangular cell, the corresponding bands are refolded onto the $\Gamma$-point. Lines in the band structure are guides for the eye. Right: comparison of the DFT and $GW$ DOS. The red arrow indicates the position of the experimental ARPES peak corresponding to the flattening of the band in Refs.~\cite{ohta2006controlling,ohta2007interlayer}. The inset schematically compares the rectangular and hexagonal first Brillouin zones. (b) Top: pressure vs. interlayer distance (compression) curve, compared with Ref.~\cite{leconte2017}. Bottom: charge density of the Dirac-point KS states at the corresponding pressures. The isovalue is the same for all density plots. }
\label{fig:ideal_dos}
\end{figure*}
Specifically, the experiments show a local minimum of a non-degenerate Dirac band at the M high-symmetry point, which is visible as a peak in the QP DOS at $-2.93$~eV (red arrow in Fig.~\ref{fig:ideal_dos}a). In the bandstructure for the rectangular cell in Fig.~\ref{fig:ideal_dos}a (obtained from momentum space projection~\cite{popescu2012extracting,huang2014general,medeiros2014effects,brooks2020stochastic}), the $\Gamma$ and M points coincide (due to the Brillouin zone reflection as depicted in the inset figure). The DFT calculation places the peak in DOS incorrectly at $-1.94$~eV (i.e., $1.0$~eV too close to the Fermi level, as shown by the black dashed line and an arrow). This agrees with previous DFT calculations that also underestimated the band dispersion by $10-20\%$ with respect to the experiments~\cite{heske1999band,zhou2005coexistence,ohta2007interlayer,gruneis2008electron,ohta2012evidence,kerelsky2019maximized,strocov2001photoemission}. In contrast, our $GW$ results predict a corresponding feature to appear at $-3.11$~eV (blue dashed line and an arrow), which is in excellent agreement with the experiment (the peak is placed only $0.18$~eV lower than the measured value).
Next, we explore tBLG systems under various pressures ranging from 0 up to 100 GPa (corresponding to a maximal compression of $28\%$ of the interlayer distance -- see Figure~\ref{fig:ideal_dos}b and SI). The structure is characterized by a hexagonal symmetry with a periodicity of $23.4$~\AA\, between the AA stacking regions. Since the twist angle is high, the coupling between the individual graphene layers is small at ambient conditions, and the system does not differ substantially from the ideal bilayer. First, the electronic states at the Dirac point are fairly delocalized (bottom of Figure~\ref{fig:ideal_dos}b), and the increase of the orbital density in the AA region is hardly noticeable. Second, (due to the lack of localization) the band dispersion is large, and there is no increase in the density of states at the Fermi level (Fig.~\ref{fig:qp_dos}).
The situation changes under compression (for pressures of 20~GPa and higher). The electronic states become more localized in the AA stacking areas. Already at 20~GPa we clearly observe a spatial redistribution of the orbital density (see illustration at the bottom of Fig.~\ref{fig:ideal_dos}b). In our calculations, we further explore even higher pressures, which may be difficult to realize experimentally (the highest pressure reported for tBLG is $P=37$~GPa~\cite{tao2020raman,clark2013few}). With increasing $P$, the localization becomes even more pronounced, and at 100~GPa, roughly $75\%$ of the Dirac-point states' orbital density is localized within an $8$~\AA\, radius around the AA stacking point. This localization of the Dirac-point states translates into flat-band formation, corresponding to a peak in the DOS around the Fermi level (Fig.~\ref{fig:qp_dos}). Note that the structure at 100~GPa is thermodynamically unstable. Still, the observations are useful as an indicator of the electronic correlation in tBLG: based on experiments\cite{yankowitz2018dynamic}, the same type of localization is expected for lower twist angles $\theta$ at much lower pressures, for which graphene is a stable polymorph.
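The localization measure quoted above (the share of orbital density within a given radius of the AA site) is a masked sum over the real-space grid; a sketch with a synthetic Gaussian density standing in for the actual Dirac-point orbital (grid and width are illustrative):

```python
import numpy as np

def localized_fraction(density, xs, ys, center, radius):
    """Fraction of the in-plane integrated density inside a cylinder
    of the given radius around `center` (e.g. the AA stacking point)."""
    X, Y = np.meshgrid(xs, ys, indexing="ij")
    mask = (X - center[0]) ** 2 + (Y - center[1]) ** 2 <= radius ** 2
    return density[mask].sum() / density.sum()

# synthetic check: for a 2D Gaussian of width sigma, the enclosed
# fraction is analytically 1 - exp(-r^2 / (2 sigma^2))
xs = ys = np.linspace(-20.0, 20.0, 401)
X, Y = np.meshgrid(xs, ys, indexing="ij")
rho = np.exp(-(X ** 2 + Y ** 2) / (2 * 4.0 ** 2))
frac = localized_fraction(rho, xs, ys, (0.0, 0.0), 8.0)
```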
Note that the localization illustrated here lacks contributions beyond those included in the mean-field (DFT) Hamiltonian. In practice, the confinement of orbitals under pressure is driven by the external (ionic) potential and the degree of the localization is impacted by the delocalization error of semilocal DFT functionals. We address this question below and show that the mean field DFT orbitals, however, are remarkably close to the Dyson orbitals computed with MBPT.
\subsection{Mean-field vs MBPT energy spectrum}
At this stage, we compare the first-principles results obtained with the mean-field (DFT) and MBPT ($GW$) approaches (Fig.~2).
While the single-particle energies are converged with respect to the supercell (see SI), the DOS curves nevertheless do not have sufficient resolution to capture certain details, such as the van Hove singularities near the Fermi level~\cite{brihuega2012unraveling}. We can still extract their position, since it coincides with the energy of the M critical points of the Brillouin zone, which are refolded onto the $\Gamma$-point. DFT places the singularity $0.37$~eV away from the Fermi level. In contrast, $GW$ positions it $0.55$~eV away from the Fermi level, in excellent agreement with the value of $0.56$~eV obtained experimentally~\cite{brihuega2012unraveling} (see SI).
Overall, Fig~\ref{fig:qp_dos} shows that MBPT significantly increases the bandwidth and widens the DOS features at 0~GPa compared to the mean-field solution. However, the most apparent changes are for the high pressures at which the DFT DOS significantly contracts and predicts a strong reduction in the width of \textit{all} bands. While the flat-band formation leads to a peak at the Fermi level, its signature is suppressed by the proximity of the \textit{entire} set of top valence and bottom conduction bands, which become closer in energy. The DFT results show that occupied and unoccupied states' behavior is mostly symmetric around the chemical potential (moving up and down in energy, respectively).
The many-body calculations show a different picture.
Up to 50~GPa, the entire QP DOS shifts up in energy, i.e., the valence states move closer to the Fermi level while the conduction states move away from it. The bandwidth of the states away from the Fermi level is mostly unaffected by the increased pressure. Simultaneously, we observe the flat-band formation around the chemical potential (indicated by a red arrow in Figure~\ref{fig:qp_dos}), which does not overlap with the rest of the occupied and unoccupied states. The QP DOS peak comprises eight quasi-degenerate states (corresponding to the Dirac points at K and K' in the moir\'e hexagonal Brillouin zone). Increasing the pressure further (i.e., $P>50$~GPa) leads to more pronounced structures in the QP DOS, but the peaks' positions remain roughly the same. This difference in behavior can be understood from the compression curve shown in Figure~\ref{fig:ideal_dos}b: the change of pressure between 50 and 100~GPa requires only a small decrease in the interlayer distance, i.e., a small change in the coupling of the monolayers. At 100~GPa, the flat band is clearly visible between the conduction and valence bands. The key observation is that the compression-driven flat-band formation leaves the rest of the states largely unaffected. Hence, although the reduced interlayer spacing leads to electron localization in the vicinity of the Dirac point, the increased coupling between the graphene monolayers is confined to a narrow energy range of states near the chemical potential.
\begin{figure}
\centering
\includegraphics[width=3.37in]{./Fig2.pdf}
\caption{Left: QP DOS of tBLG with $\theta\approx 6^{\circ}$ as a function of pressure. The stochastic error in identifying the quasiparticle energy in the stochastic $GW$ calculation is $\sim20$~meV. Right: DFT DOS. Both the DFT and QP DOS were constructed with Gaussian functions centered at each state (with a broadening of 0.35~eV); for more details on the QP DOS construction, see the Supplemental Information. (*) In the SI we show that the DOS eigenenergies are well converged with the supercell size; we note, however, that some $k$-points are incommensurate with our real-space sampling grids, so some DOS features, such as van Hove singularities~\cite{trambly2012numerical}, are missing.}
\label{fig:qp_dos}
\end{figure}
~\\
\subsection{Role of the off-diagonal self-energy elements}\label{sec:QPvsKS}
To explore the mutual coupling among the localized moir\'e states, we investigate the role of \textit{off-diagonal} self-energy terms $\Sigma_{j\neq k}=\langle \phi_j|\Sigma(\omega)|\phi_k\rangle$ in Eq.~\ref{eqp}, using our new development of the stochastic $GW$ approach described in the Methods section. In practice, we compute the contributions of $\Delta_{j}$ for states up to $\pm0.5$~eV away from the highest occupied state. We note that $\Sigma_{ij}$ is computed over a wide frequency range for all off-diagonal terms \textit{at once}. The cost of the $GW$ calculation is practically identical to that of the previously developed diagonal implementation (see Methods section). Next, we resort to a common procedure in self-consistent $GW$\cite{faleev2004all,bruneval2006effect} and construct a symmetrized, self-adjoint quasiparticle Hamiltonian $H$ as:
\begin{equation}
H_{jk} = H_{jk}^{KS} - v^{xc}_{jk} +\frac{1}{4}\bigg\{ \left[\Sigma_{jk}(\epsilon_{j}) + \Sigma_{kj}(\epsilon_{k})\right] + c.c.\bigg\}.
\label{eq:Hqp}
\end{equation}
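The symmetrization in Eq.~\eqref{eq:Hqp} guarantees a self-adjoint matrix regardless of the raw self-energy matrix; a sketch of the construction (random matrices stand in for the actual KS Hamiltonian and $\Sigma$, and the exchange-correlation matrix is set to zero for brevity):

```python
import numpy as np

def qp_hamiltonian(H_ks, v_xc, Sigma):
    """Symmetrized QP Hamiltonian of Eq. (2):
    H_jk = H^KS_jk - v^xc_jk + (1/4){[Sigma_jk(e_j) + Sigma_kj(e_k)] + c.c.}
    where Sigma[j, k] holds Sigma_jk evaluated at epsilon_j."""
    S = Sigma + Sigma.T              # Sigma_jk(e_j) + Sigma_kj(e_k)
    return H_ks - v_xc + 0.25 * (S + S.conj())

rng = np.random.default_rng(0)
n = 6
raw = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
B = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
H_ks = 0.5 * (B + B.conj().T)        # Hermitian stand-in for H^KS
H = qp_hamiltonian(H_ks, np.zeros((n, n)), raw)
eps = np.linalg.eigvalsh(H)          # real QP energies of the Hermitian H
```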
The QP energies of $H$, however, differ only negligibly from those computed with the diagonal approximation to Eq.~\ref{eqp}. We find that the difference is less than $1\%$ for most states and roughly $3\%$ for only three states. The $\Delta_j$ terms can thus be safely neglected in the QP DOS. This also holds for the flat bands around the Fermi level: here, the off-diagonal terms are not necessarily small, but they tend to ``average out,'' and their net contribution is negligible.
While the QP energies remain unaffected, this does not imply that the QP states, $\{\psi\}$, are close to the mean-field DFT orbitals $\{\phi\}$. To investigate this, we diagonalize $H_{jk}$ (Eq.~\eqref{eq:Hqp}) for the states within $0.5$~eV of the Fermi level. Employing the basis of DFT orbitals, the new eigenstates are ${\psi_j} = \sum_k C_{kj} {\phi_k}$, with $\sum_k |C_{kj}|^2 = 1$. The expansion coefficients are illustrated graphically in Fig.~\ref{fig:Cqp}. For most states, the $C_{kj}$ matrix is practically diagonal. On the other hand, the nearly degenerate flat-band states show substantially mixed character around the Fermi level, with $\sum_{k\in \{\varphi \}} |C_{kj}|^2 = 0.995$, where $\{\varphi \}$ denotes the subspace of eight correlated states at the Fermi level. Nevertheless, each of them has a dominant contribution from a single DFT eigenvector. Indeed, the visual comparison (inset of Fig.~\ref{fig:Cqp}) shows that $\{\psi\}$ and $\{\phi\}$ are generally similar, e.g., both are localized in the AA stacking regions.
We conclude that the stochastic approach efficiently computes both diagonal and off-diagonal terms (at the same cost). Further, our analysis shows that the off-diagonal terms have little impact on the QP density of states and that the pressure-induced coupling is limited to the subset of quasi-degenerate Dirac point states. The differences in the distribution of the single-particle orbitals are visually small, but a more quantitative analysis is provided in the next section.
\begin{figure}
\centering
\includegraphics[width=3.37in]{./Fig3.pdf}
\caption{Coefficient matrix obtained from the exact diagonalization of the $51\times51$ $H^{QP}$ at 100~GPa. The absolute values of the complex coefficients are plotted. The zero position is set between occupied (negative) and unoccupied (positive) states. The black circle indicates the 8 Dirac-point states that are quasi-degenerate in energy. Inset: the density of state $-3$, comparing the KS orbital to the QP orbital. State $-3$ is highlighted with a dotted line in the figure.}
\label{fig:Cqp}
\end{figure}
~\\
\subsection{Pressure dependence of the dynamically screened on-site interaction}
In the remainder of the paper, we will demonstrate the stochastic methodology for extracting the dynamically screened on-site interaction $U(\omega)$, Eq.~\eqref{U}. With this approach we will explore the role of screening on the orbitals around the Fermi level at various pressures.
We map the strongly correlated states (identified in the previous section and denoted $\{\varphi\}$) onto the Hubbard Hamiltonian (see Section \ref{sec:scRPA}), with effective hopping, $t$, and on-site interaction, $U$, terms. The latter contains the information about all other electrons via the non-local and dynamical screening $\tilde W$ in Eq.~\eqref{U}. Electron correlation is commonly characterized by the interplay of the on-site interaction and the kinetic energy.\cite{kennes2018strong,goodwin2019twist,xian2019multiflat} In practice, $U\gg t$ indicates the regime in which the system is strongly correlated. For tBLG close to the magic angle, the strongly correlated regime was driven by the drastic reduction of the hopping (due to localization) to $t\le30$~meV.\cite{lin2020pressure} As a result, the $U/t$ ratio becomes very high,\cite{guo2018pairing,cao2018correlated,goodwin2019attractive, goodwin2019twist} although screening was predicted to play an important role in reducing the value of the on-site Coulomb term $U$. Previous calculations suggested that the physics is dominated by the competition between low $t$ and dynamical screening: the dielectric constant was predicted to be 20 times larger at the magic angle than in the ideal bilayer\cite{goodwin2019attractive}.
We estimate only an upper bound for the $t$ parameter from the localized states' bandwidth, extracted~\cite{goodwin2019twist,goodwin2019attractive,neto2009electronic} from the dispersion of the corresponding QP energies.\footnote{The hopping term is $1/6$ of the bandwidth associated with the flat bands.} For the system with the most pronounced localization, i.e., at 100~GPa, we find $t\approx40$~meV, which is in good agreement with the results for the correlated phase at much lower twist angles.
We note that, in contrast, the band dispersion at $0$~GPa is very large and $t \sim 600$~meV. Clearly, the pressure-induced localization is responsible for qualitative changes in the hopping (and the associated $t$ parameter decreases by order of magnitude).
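With the bandwidth convention from the footnote above ($t$ equal to one sixth of the flat-band bandwidth), the quoted ratio follows directly from the reported numbers; a sketch (the individual band energies are illustrative stand-ins, only the 0.24~eV spread matters):

```python
def hopping_upper_bound(flat_band_energies_eV):
    """Upper bound on the hopping t: 1/6 of the flat-band bandwidth."""
    return (max(flat_band_energies_eV) - min(flat_band_energies_eV)) / 6.0

# a 0.24 eV flat-band spread reproduces the t ~ 40 meV quoted at 100 GPa
t = hopping_upper_bound([0.00, 0.05, 0.11, 0.18, 0.24])
U = 0.282            # screened on-site U(omega -> 0) at 100 GPa, in eV
ratio = U / t        # ~ 7, indicating the strongly correlated regime
```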
Although the band flattening appears as the primary driver of electron-electron correlation, the on-site Coulomb interaction changes are equally important. Following the approach of Refs.~\cite{xian2019multiflat,miyake2009ab}, we provide the mean $U$ values in order to describe the correlation strength of the chosen subspace by a single number rather than comparing $U/t$ ratios for each state. However, we also discuss the variation of $U$ among distinct orbitals. In the text below, we first consider the total screened effective interaction $U(\omega)$. Next, we decompose $U(\omega)$ into the static bare contribution $U^b$ and dynamically screened counterpart $U^p(\omega)$. Note, while the $U$ is computed in the basis of correlated states, the idea of the scRPA is to include the dynamical renormalization due to screening from all electronic states in the orthogonal complement of the correlated subspace. The set of localized KS orbitals is a straightforward and convenient choice of basis for scRPA calculations since no additional localization or orthonormalization procedure is required~\cite{miyake2009ab}. The approach to compute $U$ in the KS basis has been previously used in Refs.~\cite{xian2019multiflat,ma2021quantum}, but other options (e.g., Wannier basis~\cite{kang2018symmetry}) have been proposed. For simplicity, we resort to the first option and compute $U$ in the basis of KS orbitals $\{\phi\}$, next we compare the results to analogous calculations for QP orbital basis $\{\psi\}$.
The effective interaction $U(\omega)$, computed for the Dirac-point states (see Eq.~\eqref{U}), is screened by all the weakly interacting electrons confined to both monolayers. To account for the dynamical screening, we employ a set of random states which sample the dynamics of all weakly correlated states. The resulting $U(\omega)$ converges extremely fast (only 8 random vectors are necessary to yield a negligible stochastic error of $1$~meV over a wide range of frequencies -- see Fig.~\ref{fig:U_pressure}) with minimal computational requirements ($<$120~CPU-hours; see section~\ref{sec:scRPA}). The frequency dependence of $U(\omega)$ is illustrated in Figure~\ref{fig:U_pressure}. In practice, similar to Refs.~\cite{miyake2009ab,ma2021quantum}, we discuss the static limit ($\omega\to0$), since there is no elegant mathematical framework for solving the effective Hamiltonian with a frequency-dependent $U$.
For the tBLG at 0~GPa, the total screened interaction is $U(\omega\to0)=202$~meV. This is nearly identical to the result for the uncompressed ideal bilayer at $0$~GPa, where $U(\omega\to0)=201$~meV. Clearly, our initial assertion holds, and the $6^{\circ}$-tBLG indeed behaves like an ordinary bilayer at ambient conditions.
Under pressure, the screened on-site interaction in tBLG increases (by as much as $\approx 40\%$ for the highest compression). At 100~GPa, the screened $U~=~282$~meV corresponds to a large ratio $U/t\approx7$ that suggests strongly correlated behavior. This is in striking contrast to the ideal bilayer, for which the $U$ parameter remains practically constant (see Figure~\ref{fig:U_pressure}). One can intuitively understand the difference in the on-site interaction behavior of tBLG and the ideal bilayer from the charge density distributions of the DP states: in tBLG, the electrons in the DP states are trapped in the shallow moir\'{e} potential and become confined between the monolayers. The pressure-induced localization leads to an increased on-site interaction; the weakly correlated states, on the other hand, remain spread over the entire system. In contrast, the DP electrons in the ideal bilayer experience no localizing potential. Even under compression they remain fully delocalized in the in-plane direction, and their distribution is little affected along the normal to the bilayer. Further, the twist is a necessary prerequisite for the interlayer coupling, since it allows energetically degenerate states to form at the K and K' points of the Brillouin zone. In the ideal bilayer, by contrast, the K points of the two Brillouin zones lie on top of each other, forcing an energetic gap between the corresponding states and preventing them from coupling. Hence, the on-site term in the ideal bilayer is insensitive to pressure.
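These ratios are simple enough to verify with back-of-the-envelope arithmetic. The sketch below (Python) assumes the footnoted rule $t=W/6$ and a flat-band bandwidth of $\approx240$~meV at 100~GPa (inferred here from the quoted $t\approx40$~meV, not an independent number); all other values are the rounded figures from the text:

```python
# Back-of-the-envelope check of the U/t ratios quoted in the text.
# Assumption: t is 1/6 of the flat-band bandwidth (see the footnote),
# and the ~240 meV bandwidth at 100 GPa is inferred from t ~ 40 meV.

def hopping_from_bandwidth(bandwidth_mev):
    """Hopping estimate t = W / 6 for the flat-band manifold (meV)."""
    return bandwidth_mev / 6.0

t_100gpa = hopping_from_bandwidth(240.0)   # -> 40 meV
u_100gpa = 282.0                           # screened U at 100 GPa (meV)
print(u_100gpa / t_100gpa)                 # ~7: strongly correlated

t_0gpa, u_0gpa = 600.0, 202.0              # dispersive bands at 0 GPa
print(u_0gpa / t_0gpa)                     # ~0.34: weakly correlated
```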
From the previous analysis (Section~\ref{sec:QPvsKS}) we saw that the Dyson orbitals $\{\psi\}$ are similar to the canonical KS states. Indeed, if we employ the approximate Dyson orbitals instead of the KS states, the $U$ parameter is only insignificantly smaller, even at high pressures (cf.~Figure~\ref{fig:U_pressure}): at 100~GPa, the on-site term for the $\{\psi\}$ states is $260$~meV, and $U/t \approx 6.5$. For this analysis, we use the mean values of the $U$ parameters. However, to emphasize that the individual states of the correlated subspace yield different values of $U$, we also provide the standard deviation as a shaded area (cf.~Figure~\ref{fig:U_pressure}, left panel). The spread of the individual $U$ values is particularly pronounced at higher pressures (cf.~Figure~\ref{fig:U_pressure}, right panel) and can be explained by the lifting of the degeneracy within the correlated subspace. In practice, QP and KS orbitals thus produce very similar $U$ interactions, which is easily understood given that $\{\psi\}$ and $\{\phi\}$ are very similar.
Finally, the right panel of Figure~\ref{fig:U_pressure} shows the frequency dependence of $U(\omega)$ for each DP state in tBLG at 0 and 100 GPa. Two main effects of the decreasing interlayer distance can be observed: a vertical shift of the entire $U(\omega)$ curve, and an eight-fold increase of the magnitude of the oscillations.
To explore the pressure behavior of $U(\omega)$, we now turn to the analysis of the \textit{bare} and polarization terms, $U^b$ and $U^p(\omega)$. We find that the (average) values of the bare term are large: when the DFT states are used, $U^b = 232$~meV at $0$~GPa, increasing to $U^b = 480$~meV at 100~GPa (cf.~Figure~\ref{fig:U_pressure}). The approximate Dyson orbitals yield similar values of $230$~meV and $412$~meV at 0 and 100~GPa (not shown). Clearly, the difference between the bare terms partially accounts for the discrepancy between the average $U$ values for the $\{\phi\}$ and $\{\psi\}$ states.
The dynamical component of the on-site interaction, $U^p(\omega)$, contains the effect of the dynamical charge density fluctuations of all the weakly correlated states (outside of the correlated subspace); it is computed via s-cRPA (see the Methods section). $U^p(\omega)$ increases under pressure and renormalizes the bare term: the low-frequency limit $U^{p}(\omega\to0)$ is $-30$~meV at 0~GPa and $-200$~meV at 100~GPa. Apparently, this screening is not enough to entirely cancel out the $U^b$ contribution. Therefore, the total screened interaction $U(\omega\to0)$ is driven by the prevalent bare term and grows with compression. We also see that the magnitude of the features in the $U^p(\omega)$ curve increases as the (occupied) states shift closer to the Fermi level. Note that this result is independent of the $\{\phi\}$ or $\{\psi\}$ basis.\footnote{When computing the screening for the approximate Dyson orbitals, $\psi$, the weakly correlated subspace remains identical, since $\psi$ is composed of a combination of the quasi-degenerate states.}
From the computational perspective, we conclude that the stochastic approach is extremely efficient and yields $U(\omega)$ even for extremely large systems while requiring only minimal computational resources. This method enabled efficient downfolding and first-principles calculations of the renormalized on-site interactions in undoped tBLG supercells.
\begin{figure}
\centering
\includegraphics[width=3.37in]{./Fig4.pdf}
\caption{Left: screened and bare $U(\omega=0)$ as a function of pressure for twisted bilayer graphene, and the screened value for the ideal bilayer. The curve labeled with orange triangles is computed in the basis of the DFT subspace orbitals; the curves labeled with violet circles are in the QP basis. The lines connecting the mean values of the bare and screened $U(\omega=0)$ are a guide for the eye. The stochastic error in determining $U$ is smaller than the marker size. The shaded green and orange areas give the standard deviation from the mean value due to the variation of $U(\omega=0)$ among the individual states of the correlated subspace. Right: frequency dependence of the screened $U(\omega)$ computed in the KS basis at 0 (red) and 100 (blue) GPa. Different curves within one color represent the 8 individual states of the correlated subspace. The colors of the curves match the triangle colors of the left panel at 0 and 100 GPa.}
\label{fig:U_pressure}
\end{figure}
\section{Discussion and Conclusions}
We presented and applied several new numerical developments within stochastic many-body theory which efficiently treat even large systems. We illustrated the method with large-scale many-body calculations for twisted bilayer graphene with nearly 9,000 valence electrons. We have expanded the stochastic computational toolkit to address the role of the off-diagonal self-energy and the basis representation. Further, we have developed the stochastic s-cRPA, enabling the downfolding of even giant systems onto model Hamiltonian problems. Our stochastic approaches are applicable to general systems and will find use in a wide variety of condensed matter problems.
Our $GW$ results show an excellent agreement with the experimental positions of the van Hove singularities. We also show the formation of electronic localization under compression of the tBLG, which is in agreement with available experimental data and indicates that compression provides a unique path towards controlled coupling of monolayers and the practical realization of moir\'e states. For systems that are weakly correlated at ambient conditions, the decreased interlayer spacing leads to the formation of flat bands associated with strong correlations. These localized states are found in the vicinity of the Fermi level. In contrast, the majority of the states (delocalized over the individual monolayers) are weakly correlated and remain practically unaffected by the compression.
We then compare the effects of pressure with those of varying the twist angle on the interplay between screening and electron-electron interaction. Previous theoretical investigations showed that screening plays a crucial role in reducing the on-site interaction, $U$, as the system approaches the magic twist angle. Thus, the correlations (characterized by large $U/t$ ratios) are primarily driven by the vanishing dispersion (i.e., $t\to0$) of the states near the Fermi level.\cite{lu2016local,stauber2016quasi,liu2020tuning,pizarro2019internal,goodwin2019attractive,zhu2101dynamical,vanhala2020constrained,calderon2020interactions} In contrast, our stochastic calculations reveal a different scenario: the dynamical screening of the on-site term, $U$, increases relatively slowly with pressure, and the effect of the screening does not fully compensate the bare term. Due to the interlayer coupling, the localized states are strongly affected by compression, and the electronic correlation stems from both small $t$ and \textit{large} $U$.
Our calculations indicate that dynamical electronic correlations lead to only small changes in the single-particle orbitals. Consequently, the corresponding $U/t$ ratios in KS and QP basis are practically the same.
By neglecting the graphene layer reconstruction (i.e., in the absence of corrugation) we overestimate the screening effect: structural relaxations in tBLG tend to separate the flat bands from the rest of the spectrum ($\sim20$~meV for small angles) \cite{nam2017lattice,angeli2018emergent,leconte2019relaxation,lin2020pressure,cantele2020structural}, and an increased gap would translate into a reduced static limit of the screening. At the same time, the effect of structural relaxations on screening in this system is likely to be small, since their effect on the band structure was shown to be prominent only for twist angles $<2^{\circ}$~\cite{yoo2019atomic,liang2020effect}. In addition, tBLG is usually encapsulated with hBN in high-pressure experiments, effectively suppressing the out-of-plane relaxation. While further investigation of the role of structural reconstruction is needed, our methodology will likely play a critical role in performing such calculations.
Since the electron-electron interactions in the flat bands appear only mildly screened even at large compression, the electronic structure at high pressures is likely associated with robust insulating states.\cite{liu2020tuning} However, the internal screening can be efficiently modified, e.g., by encapsulation and by extrinsic adjustable screening~\cite{pizarro2019internal,liu2020tuning}.
Our methodology is critical in providing the \emph{ab initio} information about internal screening effects. It informs future works combining dielectric materials and high pressures. Further, it opens a route to the theoretical understanding of precise control of quasiparticle states.
\section{Methods}
\subsection{Stochastic many-body theory}
To compute the QP energies and analyze the MB interactions, we employ a combination of MBPT and mapping of the selected (strongly correlated) subspace on the Hubbard model. Within MBPT, the central quantity is the self-energy, $\Sigma(\omega)$, which is a dynamical and non-local potential acting on a single QP state and incorporates all many-body effects. The QP energies correspond to the poles of the Green's function, $G$, representing a QP propagator that is expressed in terms of the Dyson series: $G^{-1}(\omega)=G_0^{-1}(\omega)-\Sigma(\omega)$, where $G_0$ is the reference (non-interacting) Green's function (GF). Here, the reference GF is taken from DFT calculations with PBE exchange-correlation functional\cite{perdew1996generalized}; $\Sigma$ is found using a perturbation expansion on top of $G_0$ and is responsible for capturing the dynamical correlation effects.
Here, we employ the basis of single particle states, $\{\phi_j\}$, obtained from the ground state DFT calculations and the self-energy thus becomes a matrix composed of elements $\Sigma_{j,k}(\omega) \equiv \braket{\phi_j|\Sigma(\omega)|\phi_k}$. The QP energies, $\varepsilon$, are:
\begin{equation}
\varepsilon_j = \varepsilon_j^0 - v^{xc}_j+ \operatorname{Re}\left[\Sigma_{j,j}(\omega = \varepsilon_j)\right] + \operatorname{Re}\left[\Delta_{j}(\omega = \varepsilon_j)\right],
\label{eqp}
\end{equation}
where $\varepsilon_j^0$ is the Kohn-Sham (KS) eigenvalue, $v^{xc}_j$ is the exchange-correlation potential, and $\Delta_{j}(\omega)$ comprises the coupling due to the off-diagonal elements of $\Sigma_{j,k}(\omega)$. The frequency-dependent self-energy, $\Sigma(\omega)$, is evaluated at $\omega=\varepsilon_j$. We resort to the $GW$ approximation to the self-energy, which contains exchange and correlation parts; the latter is approximated by the dynamical effects of the charge density fluctuations due to the addition (removal) of an electron to (from) $\left| \phi_j\right\rangle$.\cite{Hedin1965,hybertsen1986electron,Aryasetiawan1998,martin2016interacting}
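Since $\Sigma(\omega)$ in Eq.~\eqref{eqp} is evaluated at the quasiparticle energy itself, the equation is a fixed-point problem in $\omega$. A minimal sketch of solving it by simple iteration, using a toy single-pole self-energy (all parameters below are hypothetical stand-ins, not the stochastic $GW$ self-energy):

```python
# Solve eps = eps0 - vxc + Re[Sigma(eps)] by fixed-point iteration.
# sigma_toy is a hypothetical single-pole model (arbitrary units),
# standing in for the actual frequency-dependent self-energy.

def sigma_toy(omega, a=0.5, omega0=5.0, gamma=1.0):
    # real part of a single-pole self-energy with broadening gamma
    return a * (omega - omega0) / ((omega - omega0)**2 + gamma**2)

def solve_qp(eps0, vxc, tol=1e-10, max_iter=200):
    eps = eps0 - vxc + sigma_toy(eps0)      # one-shot (G0W0-like) start
    for _ in range(max_iter):
        new = eps0 - vxc + sigma_toy(eps)   # re-evaluate Sigma at eps
        if abs(new - eps) < tol:
            return new
        eps = new
    return eps

eps_qp = solve_qp(eps0=1.0, vxc=0.3)
# the converged energy satisfies the fixed-point condition
print(abs(eps_qp - (1.0 - 0.3 + sigma_toy(eps_qp))))
```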
In general, the off-diagonal contributions to the self-energy $\Delta_j$ capture the deviation of the $\{\phi\}$-states from the QP (Dyson) orbitals $\left|\psi_j(x)\right\rangle$\footnote{The Dyson orbital is defined by an overlap of the many-body ground-state wavefunction of $N$ particles, $\Psi_0^{N}$, and the $j^{\rm th}$ excited state of $N\pm1$ particles, $\Psi_j^{N\pm1}$. For a hole in the $j^{\rm th}$ state, the Dyson orbital is $\psi_j(x)\equiv \sqrt{N} \int \Psi_j^{N-1\,*}(x_1,\dots,x_{N-1})\,\Psi_0^{N}(x_1,\dots,x_{N-1},x)\,{\rm d}x_1\cdots{\rm d}x_{N-1}$, where $x_k$ is the spin-space coordinate of electron $k$.}. For common weakly correlated systems, the diagonal contributions, $\Sigma_{jj}$, strongly dominate, while the off-diagonal terms are orders of magnitude smaller and can be neglected (i.e., $\Delta_j =0$).\cite{faleev2004all,kaplan2015off}
Our first development efficiently expands the \textit{stochastic} methodology\cite{neuhauser2014breaking,Vlcek2018swift} and computes \textit{both types of contributions} using a single-step correction. The expectation values of $\Sigma_{jk}$ are sampled via decomposition of the Green's function into random vectors $\zeta$ spanning the occupied and unoccupied subspace and propagated backward and forward in time, hence representing particle and hole components of the time-ordered Green's function.
The resulting expression is, e.g., for $t<0$, $iG({\bf r},{\bf r}',t) \equiv \{\zeta({\bf r}',t)\zeta({\bf r})\} $, where $\{\cdots\}$ denotes stochastic averaging, $\ket{\zeta(t)}\equiv e^{-i \hat H t}\ket{\zeta}$, and $\hat H$ is the system Hamiltonian. For the non-interacting Green's function $G_0$ (i.e., in the one-shot correction scheme), the time evolution is governed by the underlying DFT Hamiltonian $\hat H_0$.
The real-time sampling of the induced densities is performed by another set of random vectors $\eta$ representing the charge density fluctuations, i.e., $\delta n({\bf r},t) \approx |{{\eta}}({\bf r},t)|^2$. For nanoscale systems with thousands of atoms, $\sim 100$ samples suffice to represent the GF, with only $\sim10$ needed to represent $\delta n({\bf r},t)$.~\cite{vlcek2017stochastic,Vlcek2018swift,vlcek2019stochastic}
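The claim that a handful of random vectors suffices rests on the stochastic estimate of the density, $n({\bf r}) \approx \frac{1}{N_\eta}\sum_l |\eta_l({\bf r})|^2$, whose error decays as $1/\sqrt{N_\eta}$. A self-contained numerical illustration with a small hypothetical orbital set (not the tBLG data):

```python
import numpy as np

rng = np.random.default_rng(0)
n_occ, n_grid = 16, 200

# hypothetical orthonormal occupied orbitals on a real-space grid
phi = np.linalg.qr(rng.standard_normal((n_grid, n_occ)))[0].T  # (n_occ, n_grid)
n_exact = np.sum(phi**2, axis=0)           # deterministic density

def stochastic_density(n_eta):
    # eta_l = sum_i c_i phi_i with random signs c_i = +-1, so that
    # E[|eta_l|^2] reproduces the occupied density
    acc = np.zeros(n_grid)
    for _ in range(n_eta):
        c = rng.choice([-1.0, 1.0], size=n_occ)
        acc += (c @ phi)**2
    return acc / n_eta

err = [np.linalg.norm(stochastic_density(N) - n_exact) for N in (10, 1000)]
print(err[0] / err[1])   # error shrinks roughly as 1/sqrt(N_eta)
```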
The stochastic methodology capitalizes on the fact that the key quantities ($G$ and $W$) are determined by collective properties, which are inherently low-rank and captured by the dynamics of a few (random) states within the Hilbert space of single-particle states. This approach leads to a linear-scaling algorithm that can treat thousands of atoms.\cite{neuhauser2014breaking,Vlcek2018swift} The new implementation expands this methodology and efficiently yields also the $\Delta_j$ terms (the details are provided in section~\ref{offdiag}). Further, using the QP Hamiltonian matrix (represented in the $\{\phi\}$-state basis in Eq.~\eqref{eq:Hqp}), we compute the QP orbitals $\psi$, corresponding to the first step of the self-consistent renormalization loop.
\subsection{Off-diagonal self-energy\label{offdiag}}
The off-diagonal terms of the polarization self-energy have been implemented in our development version of the stochastic $GW$ code~\cite{Vlcek2018swift}. In the stochastic $GW$ formalism, the non-interacting Green's function $G_0$ and the screened Coulomb potential $W$ are sampled with two independent sets of random functions, $\{\zeta\}$ and $\{\eta\}$, respectively. An additional set of random vectors is used for the sparse stochastic compression in the time-ordering procedure. As a result, the expectation value of the polarization self-energy is a statistical estimator with a statistical error decreasing with the number of random vectors as $1/\sqrt{N}$. A specific off-diagonal term of the self-energy has the following expression:
\begin{equation}
\langle \phi_j |\Sigma_P| \phi_k \rangle \simeq \frac{1}{N_{ \bar{\zeta}}} \sum_{\bar{\zeta}} \int \phi_j({\bf r})\zeta({\bf r},t)u_{\zeta,k}({\bf r},t) d^3{\bf r}
\label{eq:offdiag_sigma}
\end{equation}
where $\simeq$ denotes that the expression is exact in the limit of $N_{\bar \zeta} \to \infty$. The function $\zeta$ at time $t$ is defined with the help of the time evolution operator $U_{0}(t) \equiv e^{-i H_0 t}$ and the projector $P_\mu(t)$, which selects the states above or below the chemical potential, $\mu$, depending on the sign of $t$:
\begin{equation}\label{tevolve}
|\zeta(t)\rangle~\equiv~U_{0}(t) P_\mu(t)|{\zeta}\rangle.
\end{equation}
The $\zeta$ vectors in the occupied and unoccupied subspace are propagated backward or forward in time and contribute selectively to the hole and particle non-interacting Green's functions.
In Eq.~\eqref{eq:offdiag_sigma}, the overlap with $\phi_k$ is hidden within $u_{\zeta,k}({\bf r},t)$~--~an induced charge density potential:
\begin{equation}
u_{\zeta,k}({\bf r},t) = \int W_P({\bf r},{\bf r}',t) \bar{\zeta}({\bf r}')\phi_k({\bf r}')d^3{\bf r}',
\label{eff_potential}
\end{equation}
Here, $u_{\zeta,k}({\bf r},t)$ represents the time-ordered potential of the response to the charge addition or removal. It is calculated from the retarded response potential, $\tilde u_{\zeta,k} = \int \tilde W_P ({\bf r},{\bf r}',t) \bar{\zeta}({\bf r}')\phi_k({\bf r}')d^3{\bf r}'$, followed by a time-ordering procedure.\cite{FetterWalecka,vlcek2017stochastic,Vlcek2018swift}
Further, the retarded response is related to the time-evolved charge density
$
\delta n ({\bf r},t) \equiv \frac{1}{\lambda} \left[n({\bf r},t) - n({\bf r}, 0) \right] $
induced by a scaled perturbing potential $\delta v = \lambda [ \nu({\bf r},{\bf r}'){\bar\zeta}({\bf r}')\phi_k({\bf r}')]$. Here, $\lambda$ is selected to be small, i.e., inducing a linear response; in our case, we chose $\lambda=10^{-4}$~a.u. The retarded response becomes:
\begin{align}\label{eqn:u}
&\tilde u_{\zeta,k}({\bf r},t) = \nonumber\\
\iiint \nu({\bf r},{\bf r}'')& \chi({\bf r}'',{\bf r}''',t) \delta v({\bf r}''',{\bf r}'){\rm d}{\bf r}'{\rm d} {\bf r}''{\rm d}{\bf r}''' \nonumber \\
&\equiv \int \nu({\bf r},{\bf r}') \delta n({\bf r}',t) {{\rm d}}{\bf r}'
\end{align}
Instead of computing $\delta n({\bf r},t)$ by a sum over single-particle states, we employ the set of random vectors $\left\{ \eta \right\}$ confined to the occupied subspace. This tremendously reduces the cost of computing $\tilde u_{\zeta,k}$. The time-dependent density $n({\bf r},t)$ is thus\cite{neuhauser2014breaking,vlcek2017stochastic,Vlcek2018swift,gao2015sublinear,Rabani2015,Neuhauser_2016}
\begin{equation}
n({\bf r},t)=\lim_{N_\eta \to \infty} \frac{1}{N_{\eta}}\sum_{l}^{N_{\eta}}|\eta_l({\bf r},t)|^2,
\label{TDdensity}
\end{equation}
where $\eta_l$ is propagated in time using $U_{0,t}$:
\begin{equation}\label{tpropeta_Un}
\left| \eta(t)\right\rangle = U_{0,t} [n(t)] \left| \eta\right\rangle.
\end{equation}
Note that the perturbing potential $\delta v$ depends explicitly on the state $\phi_k$ and the $\bar\zeta$ vector, which samples the whole Hilbert space.
To calculate $\tilde u_{\zeta,k}$, we employ the RPA, i.e., we perform the time evolution within the time-dependent Hartree approximation\cite{Baroni2001,BaerNeuhauser2004,Neuhauser2005}.
\subsection{Stochastic Hamiltonian downfolding}\label{sec:scRPA}
The second new development presented in this work enables efficient Hamiltonian downfolding and extracting effective parameters for model approaches using first principles. We identify correlated states using the QP orbitals analysis (discussed in the main text) and map the corresponding subspace on a dynamically screened Hubbard model\cite{hubbard1963electron,springer1998frequency,kotani2000ab}:
\begin{equation}\label{eq:hub_ham}
\hat{H}= - \sum_{i,j, \sigma}t_{ij}\hat{c}^{\dagger}_{i\sigma}\hat{c}_{j\sigma}
+ U\sum_{i} \hat{n}_{i\uparrow}\hat{n}_{i\downarrow},
\end{equation}
where $\hat{c}^{\dagger}_{i\sigma}$ and $\hat{c}_{i\sigma}$ are the creation and annihilation operators, and $\hat{n}_{i\sigma}=\hat{c}^{\dagger}_{i\sigma}\hat{c}_{i\sigma}$ is the particle number operator.
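For orientation, the downfolded Hamiltonian of Eq.~\eqref{eq:hub_ham} can be illustrated by exact diagonalization of its smallest nontrivial instance, a two-site model at half filling in the $S_z=0$ sector. The sketch below (Python) feeds it the $t$ and $U$ values quoted for tBLG at 100~GPa purely as an illustration; this is not the solver used in this work:

```python
import numpy as np

def two_site_hubbard_ground_energy(t, U):
    # Half-filled two-site Hubbard model in the S_z = 0 sector.
    # Basis: |ud,0>, |0,ud>, |u,d>, |d,u> (signs fixed by a standard
    # fermionic ordering convention; the spectrum is convention-free).
    H = np.array([[U,   0.0, -t,   t ],
                  [0.0, U,   -t,   t ],
                  [-t, -t,  0.0, 0.0],
                  [ t,  t,  0.0, 0.0]])
    return np.linalg.eigvalsh(H)[0]

t, U = 0.040, 0.282   # eV; values quoted for tBLG at 100 GPa
e0 = two_site_hubbard_ground_energy(t, U)
# exact singlet energy U/2 - sqrt(U^2/4 + 4 t^2); for U >> t this
# approaches the superexchange limit -4 t^2 / U
print(e0, U/2 - np.sqrt(U**2/4 + 4*t**2), -4*t**2/U)
```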
In practice, we extract the hopping and on-site Coulomb terms, $t$ and $U$, from the first-principles calculations. The latter is:\cite{miyake2009ab}
\begin{align}\label{U}
U(\omega) =
\frac{1}{N}\sum_{i=1}^{N}\iint d{\bf r} d{\bf r}' |\varphi_i({\bf r})|^2\tilde W({\bf r},{\bf r}',\omega)|\varphi_i({\bf r}')|^2.
\end{align}
Here, $\{\varphi_i\}$ is a set of KS or QP states spanning the correlated subspace (represented by $\{\phi\}$ and $\{\psi\}$ sets respectively). They are subject to Coulomb interaction $\tilde W$ that contains both bare (instantaneous) and screened (i.e., dynamical) terms. Symbolically, the interaction is $\tilde W = \nu + \nu \tilde\chi \nu, $ where $\nu$ is the bare Coulomb kernel and $\tilde\chi$ is the polarizability due to electronic states orthogonal to the $\{\varphi_i\}$-subspace that contains the DP states.
The cost of the conventional calculation is huge, as it needs to be evaluated by considering all possible transitions between occupied and unoccupied states (see below). Calculations for large systems (such as those studied here) were thus out of reach. In contrast, we propose an efficient approach in which Eq.~\eqref{U} is evaluated \textit{stochastically} within the constrained random-phase approximation (cRPA): the real-time formalism samples the dynamics of all occupied states using a new set of random vectors confined to the occupied subspace and orthogonal to $\{\varphi_i\}$. The separation of the Hilbert space employs our recently developed decomposition technique\cite{romanova2020decomposition}. This technique is computationally \textit{inexpensive}: $U(\omega)$ screened by 4364 valence bands requires \textit{merely} $<$120 CPU$\cdot$hrs on a 2.5~GHz processor.\footnote{The testing configuration was AMD EPYC 7502 with 2.5~GHz frequency using 10 out of 32 physical cores. The total computational time was 9.6~hrs.} This methodology thus enables Hamiltonian downfolding even for extremely large systems. The details of the implementation are provided in the next sections on the bare Coulomb interaction (\ref{sec:Ub}) and its dynamical screening evaluated by s-cRPA (\ref{cRPA}).
~\\
\subsection{The effective bare Coulomb interaction\label{sec:Ub}}
We calculate the bare effective interaction parameter, the Hubbard $U^b$, in the basis of either KS or QP wavefunctions (see the discussion of the orbital construction in Sec.~\ref{results1}) for a chosen subspace of $N=8$ states:
\begin{equation}\label{eq:Ub}
U^{b} = \frac{1}{N}\sum_{i=1}^{N}\iint d{\bf r} d{\bf r}' |\varphi_i({\bf r})|^2\nu({\bf r},{\bf r}')|\varphi_i({\bf r}')|^2,
\end{equation}
where $\nu({\bf r},{\bf r}')$ is the bare Coulomb kernel.
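On a real-space grid, Eq.~\eqref{eq:Ub} is simply a discretized double integral over the orbital densities. A toy one-dimensional sketch (Python) with two hypothetical Gaussian orbitals and a softened Coulomb kernel, purely to illustrate the structure of the formula (not the production grid or kernel):

```python
import numpy as np

n_grid, box = 400, 40.0                  # grid points, box length (a.u.)
x = np.linspace(-box/2, box/2, n_grid)
dx = x[1] - x[0]

# two hypothetical normalized "correlated" orbitals (Gaussians)
phi = np.array([np.exp(-0.5*(x - 2.0)**2), np.exp(-0.5*(x + 2.0)**2)])
phi /= np.sqrt(np.sum(phi**2, axis=1, keepdims=True) * dx)

# softened 1D Coulomb kernel nu(x, x') = 1 / sqrt((x - x')^2 + 1)
nu = 1.0 / np.sqrt((x[:, None] - x[None, :])**2 + 1.0)

def bare_hubbard_u(orbitals):
    # U^b = (1/N) sum_i  iint |phi_i(x)|^2 nu(x,x') |phi_i(x')|^2 dx dx'
    dens = orbitals**2
    return float(np.mean([d @ nu @ d * dx * dx for d in dens]))

u_b = bare_hubbard_u(phi)
print(u_b)   # positive; bounded by max(nu) = 1 for normalized densities
```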
The full Hubbard interaction $U(\omega)$ is given by Eq.~\eqref{U} and contains, besides $U^b$, also the dynamically screened \textit{polarization} term. The latter part is computed stochastically, as detailed below in Sec.~\ref{cRPA}.
~\\
\subsection{Stochastic constrained RPA (s-cRPA)\label{cRPA}}
Here we discuss the implementation of the dynamical Hubbard term, $U(\omega) = U^b + U^p(\omega)$, where the latter is: \begin{align}
U^p(\omega) =
\frac{1}{N}\sum_{i=1}^{N}\iint d{\bf r} d{\bf r}' |\varphi_i({\bf r})|^2\tilde W_P({\bf r},{\bf r}',\omega)|\varphi_i({\bf r}')|^2.
\end{align}
The polarization operator $\tilde W_P =\nu \tilde\chi \nu$ is computed by the stochastic constrained random-phase approximation (s-cRPA). The key idea is to capture the effect of the entire system on the correlated electrons in states $\{\varphi\}$ described by the (downfolded) Hubbard Hamiltonian Eq.\eqref{eq:hub_ham}. In practice, one accounts for the screening through a projection on the subspace, which \textit{excludes} all correlated states $\{\varphi\}$. In cRPA, $\tilde W_P$ thus contains contributions of the induced density fluctuations in the weakly correlated portion of the system.
Conventional techniques evaluate $\tilde W_P=\nu \tilde\chi \nu$ in frequency domain by the sum over all single-particle transitions outside the correlated subspace ($i,j \notin \{ \varphi \}$), requiring operation on both the entire occupied and unoccupied space~\cite{miyake2009ab}:
\begin{align}
\tilde\chi({\bf r},{\bf r}',\omega) = \sum_i^{occ} \sum_j^{unocc} \phi_i({\bf r})\phi_i({\bf r}')^*\phi_j({\bf r})^*\phi_j({\bf r}') \times \nonumber\\
\left( \frac{1}{\omega - \varepsilon_j+\varepsilon_i+i\lambda}-\frac{1}{\omega+\varepsilon_j-\varepsilon_i-i\lambda} \right).
\end{align}
Hence, these calculations become expensive for large systems.
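The cost bottleneck is easy to see in a direct implementation: every occupied state is paired with every unoccupied one. A toy sketch (Python) of the static ($\omega\to0$) sum over transitions, with random placeholder orbitals and energies rather than tBLG data:

```python
import numpy as np

# Toy sum-over-transitions polarizability in the static limit.
# Orbitals and energies are random placeholders, not tBLG data;
# the point is the N_occ x N_unocc loop structure.

rng = np.random.default_rng(1)
n_grid, n_occ, n_unocc = 50, 4, 8

orb = np.linalg.qr(rng.standard_normal((n_grid, n_occ + n_unocc)))[0].T
eps = np.sort(rng.uniform(-5.0, 5.0, n_occ + n_unocc))
occ, unocc = orb[:n_occ], orb[n_occ:]
e_occ, e_unocc = eps[:n_occ], eps[n_occ:]

chi0 = np.zeros((n_grid, n_grid))
for i in range(n_occ):                  # every occupied state ...
    for j in range(n_unocc):            # ... paired with every unoccupied one
        rho = occ[i] * unocc[j]         # transition density phi_i phi_j
        # static limit of the two energy denominators: -2/(e_j - e_i)
        chi0 += np.outer(rho, rho) * (-2.0 / (e_unocc[j] - e_occ[i]))

max_eig = float(np.max(np.linalg.eigvalsh(chi0)))
print(max_eig)   # static chi0 is negative semidefinite
```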
In contrast, we compute the $\tilde W_P$ term stochastically in the real-time domain:
\begin{equation}
\langle \varphi_j \varphi_j|\tilde W_P| \varphi_j \varphi_j\rangle \simeq \int |\varphi_j({\bf r})|^2 \tilde u({\bf r},t) d^3{\bf r}.
\label{eq:wp}
\end{equation}
This expression is computed by time-ordering from the retarded charge density potential:
\begin{equation}
\tilde u^r({\bf r},t) =\int \nu({\bf r},{\bf r}') \delta \tilde n({\bf r}',t) {{\rm d}}{\bf r}',
\end{equation}
where $\delta \tilde n$ is the charge density induced in the weakly correlated subspace by a perturbing potential due to $|\varphi_j({\bf r})|^2$. This is formally equivalent to Eq.~\eqref{eff_potential} (representing the action of the self-energy in the $GW$ approximation). In practice, the density is constructed from random vectors $\{\tilde \eta\}$:
\begin{equation}
\ket{\tilde\eta} = \left(1-P_\varphi\right)\ket{\eta}
\end{equation}
where the $\{\eta\}$-states are described in Sec.~\ref{offdiag} and $P_\varphi$ is the projection operator on the $\{ \varphi\}$-subspace:
\begin{equation}\label{projectorphi_occ}
P_\varphi = \sum_{k\in \left\{\varphi\right\} } f_k \ket{k}\bra{k},
\end{equation}
where $f_k$ is the occupation of state $\ket k$. Note that the time evolution of the $\{\tilde \eta\}$ vectors follows Eq.~\eqref{tpropeta_Un}, which depends on the \textit{total} density. For details of the time evolution of subspaces, see Ref.~\onlinecite{romanova2020decomposition}.
The method is implemented alongside the stochastic $GW$ formalism, and both can be evaluated at once. However, in practice, the statistical error in $U^p(\omega)$ is orders of magnitude smaller for two reasons: (i) it stems from \textit{one} random sampling of $W$; in contrast, the $GW$ self-energy suffers from larger statistical errors due to the additional random vectors sampling the Green's function; (ii) it contains only contributions of states orthogonal to those on which it acts; as a result, the dynamics is ``well-behaved'' and characterized by only a few dominant (resonant) frequencies, which can be efficiently sampled by a small number of random vectors.
\subsection{Equilibrium geometry and equation of state}\label{sec:eos}
The tBLG cells at a specific out-of-plane pressure have been approximated using the interlayer distance of the ideal bilayer graphene in the Bernal stacking at the corresponding pressure. All the calculations for the ideal bilayer graphene have been performed using a hexagonal unit cell in the QuantumESPRESSO code\cite{QE2017}, with the Tkatchenko-Scheffler total-energy van der Waals corrections\cite{TS_2009} and the Effective Screening Medium method\cite{Otani_2006}. Troullier-Martins pseudopotentials\cite{TroullierMartins1991} and the PBE\cite{PerdewWang} functional have been employed.
To calculate the pressure-distance curves of the ideal bilayer, we have fitted the total energy $E$ as a function of the volume $V$ with the Murnaghan equation of state\cite{murnaghan1944}:
\begin{equation}
E(V) = E(V_0) + \frac{B_0V}{B'_0}\left[\frac{(V_0/V)^{B'_0}}{B'_0-1}+1\right] - \frac{V_0B_0}{B'_0-1} ,
\label{eq:eos}
\end{equation}
where $V=S \cdot z$ is the volume confined by the two graphene layers, $S=a_{lat}^2$ is the surface of the layer, and $z$ is the interlayer distance. $S$ was kept constant using the equilibrium lattice parameter $a_{lat}=2.464$~\AA, while $z$ was varied. The neglect of the pressure-induced in-plane expansion is, in part, justified by the large anisotropy of the bulk modulus in the in- and out-of-plane directions.\cite{yankowitz2018dynamic} $B_0$ and $B'_0$ are the bulk modulus and its pressure derivative at the equilibrium volume $V_0$. The resulting fit and fitted parameters are provided in the Supplemental Information. Using the fitted parameters, the pressure-distance curves $P(z)$ were calculated from the derivative $P = -dE/dV$:
\begin{equation}
P = \frac{B_0}{B'_0}\left[\left(\frac{V_0}{S\cdot z}\right)^{B'_0} - 1\right].
\end{equation}
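A minimal numerical sketch of the pressure-distance relation obtained from the Murnaghan fit (Python); the fit parameters and equilibrium distance below are placeholders, not the values from the Supplemental Information:

```python
# P(z) = -dE/dV = (B0/B0') [ (V0/(S*z))^B0' - 1 ] from the Murnaghan fit.
# B0, B0p, and z0 are hypothetical placeholder values.

B0, B0p = 0.002, 8.0        # bulk modulus and its pressure derivative
a_lat = 2.464               # equilibrium lattice parameter (Angstrom)
S = a_lat**2                # in-plane area, kept fixed
z0 = 3.35                   # assumed equilibrium interlayer distance
V0 = S * z0

def pressure(z):
    return (B0 / B0p) * ((V0 / (S * z))**B0p - 1.0)

# compression (z < z0) gives P > 0; z = z0 gives P = 0 exactly
print(pressure(z0), pressure(2.8) > 0.0)
```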
\subsection{Starting point DFT calculations}
The starting-point calculations are performed with density functional theory (DFT) in a real-space implementation, employing regular grids, Troullier-Martins pseudopotentials,\cite{TroullierMartins1991} and the PBE\cite{PerdewWang} functional for exchange and correlation. We investigate infinite tBLG systems using modified periodic boundary conditions with Coulomb interaction cutoffs.\cite{Rozzi_2006} To converge the occupied $H_0$ eigenvalues to $< 5$~meV, we use a kinetic energy cutoff of 26~$E_h$ and a $192\times 132 \times 66$ real-space grid with a step of $0.4 \times 0.4 \times 0.5$~$a_0$, where the $z$-direction is aligned with the normal of the bilayer plane.
\subsection{$G_0W_0$ calculations}
The $GW$ calculations were performed using a development version of the StochasticGW code.\cite{neuhauser2014breaking, Vlcek2018swift, vlcek2017stochastic}
The calculations employ an additional set of 20,000 random vectors for the sparse stochastic compression used for the time-ordering of $\tilde u_{\zeta}$.\cite{Vlcek2018swift} The sampling of the Green's function $G$ was performed using $N_{\zeta}=500$ random vectors, and $N_{\eta}=8$ vectors were used to sample the induced charge density.\cite{Vlcek2018swift} The final stochastic error on the quasiparticle energies is $\le 20$~meV.
The time propagation of the induced charge density was performed for a maximum propagation time of 50~a.u., with a time step of 0.05~a.u.
\section{Data availability}
All the data supporting the results of this study are available upon reasonable request to the corresponding author.
\section{Code availability}
The public version of the stochastic $GW$ code is available at www.stochasticGW.com. The calculations were performed with a development version of the stochastic $GW$ code, which will be released soon and is available upon reasonable request.
\begin{acknowledgments}
The development of the off-diagonal self-energy and s-cRPA (VV) was supported by the NSF through NSF CAREER award Grant No. DMR-1945098. The development of the downfolding and the implementation (M.R.)~were supported by the Materials
Research Science and Engineering Centers (MRSEC)
Program through Grant No. DMR-1720256 (Seed
Program). M.R.'s work was also supported by the NSF Quantum Foundry through Q-AMASE-i
program Award No. DMR-1906325. The calculations were performed as part of the XSEDE\cite{Towns_2014} computational Project No.~TG-CHE180051. Use was made of computational facilities purchased with
funds from the National Science Foundation (CNS-1725797)
and administered by the Center for Scientific Computing
(CSC). The CSC is supported by the California NanoSystems
Institute and the Materials Research Science and Engineering
Center (MRSEC; NSF DMR-1720256) at UC Santa Barbara.
\end{acknowledgments}
\subsection{Contributions}
M.R. conducted the research work under the guidance of V.V. All authors contributed and reviewed the manuscript.
\subsection{Competing interests}
The authors declare no competing interests.
\section{Introduction}
In \cite{Ma13}, from a probabilistic point of view,
Jiming Ma introduced and studied two models of random links.
We here consider the one which is defined as
the braid closures of randomly chosen braids via random walks on the braid groups.
Suppose that such a random walk
on the braid group $\mathfrak{B}_n$ of $n$-strings
induces the uniform distribution on
the symmetric group $\mathfrak{S}_n$ on $n$ letters
via the natural projection $\mathfrak{B}_n \to \mathfrak{S}_n$ ($n \ge 3$).
Then, Ma showed in \cite[Theorem 1.1]{Ma13} that,
for the random link coming from a random walk of $k$-step
on $\mathfrak{B}_n$ ($n \ge 3$),
the expected value of the number of components converges to
\[ 1+ \frac{1}{2} + \frac{1}{3} + \cdots + \frac{1}{n} \]
when $k$ diverges to $\infty$.
See the next section for the precise definition of the random link.
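The limit above is the $n$-th harmonic number $H_n$, which is also the expected number of cycles of a uniformly random permutation of $n$ letters. A quick numerical check, with the Monte Carlo sample size chosen purely for illustration:

```python
import random
from fractions import Fraction

def harmonic(n):
    """H_n = 1 + 1/2 + ... + 1/n."""
    return sum(Fraction(1, k) for k in range(1, n + 1))

def num_cycles(perm):
    """Number of cycles of a permutation given as a list (perm[i] = image of i)."""
    seen, cycles = [False] * len(perm), 0
    for i in range(len(perm)):
        if not seen[i]:
            cycles += 1
            j = i
            while not seen[j]:
                seen[j] = True
                j = perm[j]
    return cycles

# Monte Carlo: the mean cycle count of a uniform permutation tends to H_n.
n, trials = 6, 20000
random.seed(0)
avg = sum(num_cycles(random.sample(range(n), n)) for _ in range(trials)) / trials
print(float(harmonic(n)))  # 2.45
```

The empirical average over uniform random permutations of six letters lands close to $H_6 = 2.45$, matching the limiting expectation.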
From this result, it is natural to ask what is
the most expected number of components for such a random link.
We first answer this question as follows.
\begin{theorem}\label{Thm1}
Consider a random link obtained from a random walk on $\mathfrak{B}_n$.
Suppose that the random walk on $\mathfrak{B}_n$
is defined for the probability distribution on $\mathfrak{B}_n$
which induces the uniform distribution on $\mathfrak{S}_n$
via the natural projection $\mathfrak{B}_n \to \mathfrak{S}_n$ $(n \ge 3)$.
Then the most expected number of components is equal to
\[
K_n = \left[ \log(n+1) + \gamma -1
+ \frac{\zeta(2)-\zeta(3)}{\log(n+1)+ \gamma - 1.5}
+ \frac{h}{(\log(n+1)+ \gamma - 1.5)^2} \right]
\]
where $[x]$ denotes the integer part of $x$,
$\zeta$ is the Riemann zeta function,
$\gamma = 0.5772\dots$ is the Euler-Mascheroni constant,
and $h$, with $-1.1 < h < 1.5$, is a function of $n$, i.e., $h = h(n)$.
In particular, if $n >188$, it follows that
\[ \left[\;\log n - \frac{1}{2}\;\right] < K_n < \left[\;\log n\;\right]\,. \]
\end{theorem}
In fact, this can be obtained from
known results in combinatorics and analytic number theory.
To connect the problem of random link to them,
the key is the correspondence between
components of the closure of a braid
and
cycles in the cycle decomposition
of the permutation corresponding to the braid.
In particular,
the number of components is calculated
as the number of cycles.
In view of this, we can relate
random braids to random partitions of integers (i.e., the numbers of strings).
Then it is also natural to ask what is
the most expected partition of the number of strings for a random braid.
About this question, against our naive intuition, we can show the following.
\begin{theorem}\label{Thm2}
Consider a random braid obtained from a random walk on $\mathfrak{B}_n$.
Suppose that the random walk on $\mathfrak{B}_n$
is defined for the probability distribution on $\mathfrak{B}_n$
which induces the uniform distribution on $\mathfrak{S}_n$
via the natural projection $\mathfrak{B}_n \to \mathfrak{S}_n$ $(n \ge 3)$.
Then the most expected partition of the number of the strings is $( (n-1), 1) $.
\end{theorem}
Actually the probability for such a partition of the number of the strings is shown to converge to $1/(n-1)$.
\bigskip
The first author thanks Jiming Ma for useful discussions in this topic,
and also thanks Kazuma Shimomoto for letting him know about the Stirling number of the first kind.
\section{Link, braid and random walk}
We here give a brief review of the setting for studying the random links introduced in \cite{Ma13}. See \cite{Ma13} for details.
Throughout the paper,
we denote the braid group of $n$-strings by $\mathfrak{B}_n$,
and the symmetric group on $n$ letters by $\mathfrak{S}_n$.
We consider a probability distribution $\mu$ on $\mathfrak{B}_n$.
By using such a probability distribution,
one can define a random walk
by setting the transition probability as $\mathbb{P} (x,y) = \mu (x y^{-1})$.
Here we suppose that our random walk starts at the identity element at time zero.
By considering the natural projection $\mathfrak{B}_n \to \mathfrak{S}_n$,
such a random walk on $\mathfrak{B}_n$ induces
a random walk on $\mathfrak{S}_n$.
We here suppose that the probability distribution $\mu$
induces the uniform distribution on $\mathfrak{S}_n$ via the natural projection.
Here, by the \textit{uniform distribution} on $\mathfrak{S}_n$,
we mean the probability distribution satisfying
$\mathbb{P} (s) = 1/n!$ holds for any $s \in \mathfrak{S}_n$.
That is, we are assuming that the probability $\mathbb{P} (s)$
for any $s \in \mathfrak{S}_n$
induced from the random walk is sufficiently close to $1/n!$,
or, in other words,
the induced random walk on $\mathfrak{S}_n$
is the uniformly distributed random walk.
Then, conceptually, we say a braid is a \textit{random braid}
if it is represented by a braid coming from a random walk on $\mathfrak{B}_n$ with sufficiently many steps.
We remark that our assumption on the probability distribution does not give severe restriction.
Actually, Ma showed the following as \cite[Theorem 2.5]{Ma13}.
Let $\mu$ be a probability distribution on $\mathfrak{B}_n$,
which induces
a random walk $\overline{ \omega_{n,k} }$ on $\mathfrak{S}_n$.
Suppose that
the probability $\mathbb{P} ( \overline{ \omega_{n,1} } = e )$ is larger than 0,
for the identity element $e \in \mathfrak{S}_n$,
and the support of $\mu$ generates $\mathfrak{B}_n$.
Then $\mu$ induces the uniform distribution on $\mathfrak{S}_n$.
For example,
the probability distribution $\mu_c$ on $\mathfrak{B}_n$ defined by
$$
\mu_c ( e) = \mu_c ( \sigma_i ) = \mu_c ( \sigma_i^{-1} )
= \frac{1}{2n-1} $$
for the identity element $e$ and each canonical generator
$\sigma_i \in \mathfrak{B}_n$ ($1 \le i \le n-1$)
is shown to satisfy the assumption.
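As a purely illustrative check (not part of Ma's argument), the walk induced on $\mathfrak{S}_n$ by $\mu_c$ can be simulated directly: under the projection, $\sigma_i$ and $\sigma_i^{-1}$ both map to the adjacent transposition $(i \; i+1)$, and for small $n$ the empirical distribution approaches uniform after a moderate number of steps. The step counts below are assumptions chosen for a fast demonstration.

```python
import random
from itertools import permutations

def step(perm, n, rng):
    """One step of the induced walk: identity with probability 1/(2n-1),
    otherwise the adjacent transposition (i, i+1), the common image of
    sigma_i and sigma_i^{-1} (each contributing 1/(2n-1))."""
    choice = rng.randrange(2 * n - 1)
    if choice == 0:
        return perm
    i = (choice - 1) // 2
    p = list(perm)
    p[i], p[i + 1] = p[i + 1], p[i]
    return tuple(p)

n, steps, trials = 3, 50, 20000
rng = random.Random(1)
counts = {p: 0 for p in permutations(range(n))}
for _ in range(trials):
    perm = tuple(range(n))
    for _ in range(steps):
        perm = step(perm, n, rng)
    counts[perm] += 1

freqs = sorted(c / trials for c in counts.values())
print(freqs[0], freqs[-1])  # both close to 1/6 for n = 3
```

The support of $\mu_c$ generates the group and the walk is aperiodic (the identity has positive probability), so the empirical frequencies of all $3! = 6$ permutations cluster around $1/6$.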
Now we consider a random walk $\omega_{n,k}$ on $\mathfrak{B}_n$,
and the probability $p_{n,k}^m $ for the link
corresponding to the random walk $\omega_{n,k}$
which has exactly $m$ components.
Then, for the random link,
we say that \textit{the most expected number of components is $m$}
if, for any sufficiently large $k$, $p_{n,k}^m$ is maximal among $p_{n,k}^j$ for $1 \le j \le n$.
\section{Most expected number of components}
In this section, we give a proof of Theorem \ref{Thm1}.
\begin{proof}[Proof of Theorem \ref{Thm1}]
Consider a random walk $\omega_{n,k}$ on $\mathfrak{B}_n$.
By taking the braid closure of $\omega_{n,k}$,
we have a link $\widehat{\omega_{n,k}}$ in the 3-sphere.
Consider the natural projection $\pi : \mathfrak{B}_n \to \mathfrak{S}_n$.
We see that a component of $\widehat{\omega_{n,k}}$
corresponds to an orbit of the action of $\pi(\omega_{n,k})$ on $n$ letters.
It follows that,
if we consider the decomposition of $\pi(\omega_{n,k})$
into cycles with mutually distinct letters,
the number of components of $\widehat{\omega_{n,k}}$
is equal to the number of cycles in the decomposition of $\pi(\omega_{n,k})$.
Now we are supposing that $\omega_{n,k}$ is defined by
a probability distribution on $\mathfrak{B}_n$
which induces the uniform probability distribution on $\mathfrak{S}_n$
via the natural projection $\pi : \mathfrak{B}_n \to \mathfrak{S}_n$.
This means that, for any $s \in \mathfrak{S}_n$,
the probability $\mathbb{P} (s)$ defined by
the induced random walk $\pi ( \omega_{n,k} )$ converges to $1/n!$.
Let $p_{n,k}^m$ be the probability for the link $\widehat{\omega_{n,k}}$
corresponding to the random walk $\omega_{n,k}$
which has exactly $m$ components.
It then follows that, as $k \to \infty$, $p_{n,k}^m$ converges to the ratio of
the number of permutations in $\mathfrak{S}_n$ with $m$ disjoint cycles to the order $n!$ of $\mathfrak{S}_n$.
Here we note that
the number of permutations of $n$ letters with $m$ disjoint cycles
is called \textit{the Stirling number of the first kind}, denoted by $c(n,m)$.
Consequently,
to obtain the most expected number of components for $\widehat{\omega_{n,k}}$,
it suffices to study the value of $m$
for which $c(n,m)$ is maximal for $1 \le m \le n$.
It was already established by Hammersley in \cite{Hammersley} that
$c(n,m)$ is maximal for $1 \le m \le n$ if $m$ is equal to
$$
K_n = \left[ \log(n+1) + \gamma -1
+ \frac{\zeta(2)-\zeta(3)}{\log(n+1)+ \gamma - 1.5}
+ \frac{h}{(\log(n+1)+ \gamma - 1.5)^2} \right]
$$
where $[x]$ denotes the integer part of $x$,
$\zeta$ is the Riemann zeta function,
$\gamma = 0.5772\dots$ is the Euler-Mascheroni constant,
and $h$, with $-1.1 < h < 1.5$, is a function of $n$, i.e., $h = h(n)$.
Furthermore, if $n >188$, Erd\H{o}s proved in \cite{Erdos} that
$$ \left[ \log n - \frac{1}{2} \right] < K_n < \left[ \log n \right]$$
holds.
This completes the proof of Theorem \ref{Thm1}.
\end{proof}
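The Stirling numbers of the first kind satisfy the recurrence $c(n,m) = c(n-1,m-1) + (n-1)\,c(n-1,m)$, so the mode of the cycle-count distribution can be located exactly for small $n$. A short sketch for $n = 10$:

```python
def stirling_first_row(n):
    """Unsigned Stirling numbers of the first kind c(n, m), m = 0..n, via
    the recurrence c(n, m) = c(n-1, m-1) + (n-1) * c(n-1, m)."""
    row = [1]                      # c(0, 0) = 1
    for k in range(1, n + 1):
        new = [0] * (k + 1)
        for m in range(1, k + 1):
            new[m] = row[m - 1] + (k - 1) * (row[m] if m < k else 0)
        row = new
    return row

row = stirling_first_row(10)
mode = max(range(1, 11), key=lambda m: row[m])
print(mode, row[mode])  # 3 1172700 -- the most likely number of cycles
```

The row sums to $10! = 3628800$, as it must, and the maximizing $m = 3$ is the most expected number of components for a random link built from $\mathfrak{B}_{10}$.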
\section{Partition of the number of strings for braid}
In this section, we give a proof of Theorem \ref{Thm2}.
Before starting the proof, we should fix our terminology.
An element of the symmetric group $\mathfrak{S}_n$ on $n$ letters
is uniquely represented as a composition of several cycles with distinct letters.
The set of the lengths of such cycles gives a partition of the integer $n$.
That is, if an element of $\mathfrak{S}_n$ is represented as a composition of cycles of lengths $n_1, n_2, \cdots, n_m$ with $n_1 \ge n_2 \ge \cdots \ge n_m$, then we have a partition $(n_1, n_2, \cdots, n_m)$ of $n$,
since $n = n_1 + n_2 + \cdots + n_m$ holds.
In view of this, given a braid $\sigma$ with $n$-strings with $n > 0$,
we define \textit{a partition of the number of strings} for $\sigma$ as a non-increasing sequence of positive integers $(n_1, n_2, \cdots , n_m)$ which is obtained in that way for the element $\pi (\sigma)$ of $\mathfrak{S}_n$, where $\pi$ denotes the natural projection $\mathfrak{B}_n \to \mathfrak{S}_n$.
\medskip
We here prepare the following,
which is the key algebraic lemma to prove Theorem \ref{Thm2}.
\begin{lemma}
In the symmetric group on $n$ letters with $n \ge 3$,
the conjugacy class of the maximal cardinality is the one containing the $(n-1)$-cycle $(1 \ 2 \ \dots \ n-1)$, and the cardinality is $n \cdot (n-2)!$.
\end{lemma}
\begin{proof}
Let $\mathfrak{S}_n$ be the symmetric group on $n$ letters $(n \ge 3)$.
It is known that the cardinality of the conjugacy class containing $a \in \mathfrak{S}_n$
is given by $|\mathfrak{S}_n| / |Z(a)|$ (see \cite[Chapter 6, pp.198]{Ar} for example),
where $Z(a)$ denotes the centralizer of $a$, that is, $\{ g \in \mathfrak{S}_n | ga=ag \}$.
Thus it suffices to show that $|Z(a)| \ge n-1$ for any element $a \in \mathfrak{S}_n$.
We first claim that, in general, $k_1\cdots k_r \ge k_1+\cdots + k_r$ holds for a tuple of integers $k_1,...,k_r \ge 2$.
This is easily shown by induction, and the equality holds only when $r=1$, or $r=2$ and $k_1=k_2=2$.
Now let us describe $a \in \mathfrak{S}_n$ by a product of cycles without common letters:
for example, $a=a_1 \cdots a_r$, where $a_i$ is a $k_i$-cycle and $k_1 \ge k_2 \ge \cdots \ge k_r \ge 1$.
Here we note that, if $k_r \ge 2$, then the centralizer $Z(a)$ contains the direct product of abelian groups generated by $a_1,\ldots,a_r$.
Thus the order of $Z(a)$ is at least $k_1\cdots k_r$, which is greater than or equal to $k_1+\cdots + k_r=n$ by the above claim.
If $k_{r-1} \ge 2, k_r =1$, then $Z(a)$ contains the direct product of abelian groups generated by $a_1,\ldots,a_{r-1}$, which has $k_1\cdots k_{r-1}$ elements.
Again, by the above claim, the order of $Z(a)$ is at least $k_1+ \cdots + k_{r-1} = n-1$.
Finally, if $k_p=\cdots =k_r =1$ for some $p$ with $2 \le p \le r-1$, then $Z(a)$ contains the direct product of abelian groups generated
by $a_1,\ldots,a_{p-1}$ and the $(r-p+1)$-cycle of the other letters.
Thus the order of $Z(a)$ is at least $k_1\cdots k_{p-1}\cdot (r-p+1) \ge k_1+\cdots + k_{p-1}+(r-p+1)=n$.
Consequently we see that $|Z(a)| \ge n-1$.
Furthermore, suppose that $|Z(a)|=n-1$ holds. Then we have either $r-1=1$, $k_1 \ge 2$, and $k_2=1$,
that is, $a$ is an $(n-1)$-cycle, or $r-1=2$, $k_1=k_2=2$, $k_3=1$, $n-1=4$.
In the latter case, i.e., $n=5$, we may assume that $a=(12)(34)$.
But in this case we have $|Z(a)|=8$ (the centralizer also contains $(13)(24)$), and so the equality does not hold.
\end{proof}
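For small $n$ the lemma can be verified by brute force: enumerate $\mathfrak{S}_n$, group elements by cycle type, and confirm that the largest conjugacy class is that of the $(n-1)$-cycles, of cardinality $n\cdot(n-2)!$. A sketch for $n = 6$:

```python
from itertools import permutations
from math import factorial

def cycle_type(perm):
    """Sorted (non-increasing) cycle lengths of a permutation of {0,...,n-1}."""
    n, seen, lens = len(perm), [False] * len(perm), []
    for i in range(n):
        if not seen[i]:
            length, j = 0, i
            while not seen[j]:
                seen[j] = True
                j = perm[j]
                length += 1
            lens.append(length)
    return tuple(sorted(lens, reverse=True))

n = 6
sizes = {}
for p in permutations(range(n)):
    t = cycle_type(p)
    sizes[t] = sizes.get(t, 0) + 1

biggest = max(sizes, key=sizes.get)
print(biggest, sizes[biggest])  # (5, 1) 144 -- i.e. ((n-1), 1), n*(n-2)! elements
```

For $n = 6$ the class of type $(5,1)$ has $6 \cdot 4! = 144$ elements, strictly larger than any other class (the $6$-cycles and type $(3,2,1)$ both give $120$).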
\begin{proof}[Proof of Theorem \ref{Thm2}]
Consider a random walk $\omega_{n,k}$ on $\mathfrak{B}_n$.
By using the natural projection,
we have the induced random walk on $\mathfrak{S}_n$.
Since we have assumed that
this induced random walk is uniformly distributed,
the probability that a braid in the sequence yields a given partition,
say the $i$-th partition $(n_1, n_2, \cdots, n_m)$ of the number $n$ of the strings, converges to $\Delta_{n,i} / n!$,
where $\Delta_{n,i}$ denotes
the number of elements in $\mathfrak{S}_n$
giving that partition of the integer $n$.
This $\Delta_{n,i}$ is equal to the cardinality of the conjugacy class of an element in $\mathfrak{S}_n$ decomposed into the cycles of distinct letters of lengths $n_1, n_2, \cdots, n_m$.
Then, by the above lemma, $\Delta_{n,i}$ takes maximum for the one containing the $(n-1)$-cycle $(1 \ 2 \ \dots \ n-1)$.
That is, the most expected partition of the number of the strings for a random $n$-braid must be $( (n-1), 1) $.
Also, since the maximum of $\Delta_{n,i}$ is $n \cdot (n-2)!$,
the most expected probability is $n \cdot (n-2)! / n! = 1/(n-1)$.
Furthermore, in that case, the link comes from the braid corresponding to the $(n-1)$-cycle $(1 \ 2 \ \dots \ n-1)$.
\end{proof}
Actually, in the same way, it can be shown that the probability that a given random link becomes a knot converges to $1/n$.
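Both limiting probabilities follow from the same counting: $\mathfrak{S}_n$ contains $(n-1)!$ $n$-cycles and $n\cdot(n-2)!$ permutations of type $((n-1),1)$. A minimal check:

```python
from math import factorial

def knot_probability(n):
    """Probability that the random link is a knot: (n-1)!/n! = 1/n."""
    return factorial(n - 1) / factorial(n)

def top_partition_probability(n):
    """Probability of the partition ((n-1), 1): n*(n-2)!/n! = 1/(n-1)."""
    return n * factorial(n - 2) / factorial(n)

print(knot_probability(5), top_partition_probability(5))  # 0.2 0.25
```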
\section{Introduction}
Question Generation (QG) from Knowledge Graphs is the task of generating natural language questions given an input knowledge base (KB) triple~\cite{Serban16}.
QG from knowledge graphs has shown to improve the performance of existing factoid question answering (QA) systems either by dual training or by augmenting existing training datasets~\cite{Dong17,Khapra17}.
Those methods rely on large-scale annotated datasets such as SimpleQuestions~\cite{Bordes15}.
In practice, many of the predicates and entity types in the KB are not covered by those annotated datasets. For example, $75\%$ of Freebase predicates are not covered by SimpleQuestions.
So, one challenge for QG from knowledge graphs is to adapt to predicates and entity types that were \textit{not} seen at training time (aka Zero-Shot QG).
Ultimately, generating questions for predicates and entity types unseen at training time will allow QA systems to cover predicates and entity types that would otherwise not have been used for QA.
Intuitively, a human who is given the task of writing a question about a fact from a KB would read natural language sentences where the entity or the predicate of the fact occurs, and build up questions aligned with what they read, from both a lexical and a grammatical standpoint.
In this paper, we propose a model for Zero-Shot Question Generation that follows this intuitive process.
In addition to the input KB fact, we feed our model with a set of textual contexts paired with the input KB triple through distant supervision.
Our model derives an encoder-decoder architecture, in which the encoder encodes the input KB triple, along with a set of textual contexts into hidden representations.
Those hidden representations are fed to a decoder equipped with an attention mechanism to generate an output question. \\
In the Zero-Shot setup, the emergence of new predicates and new class types at test time requires new lexicalizations to express these predicates and classes in the output question. These lexicalizations might not be encountered by the model during training and hence do not exist in the model vocabulary, or may have been seen only a few times, which is not enough for the model to learn a good representation for them.
Recent works on Text Generation tackle the rare words/unknown words problem using copy actions~\cite{Luong15,Gulcehre16}: words with a specific position are copied from the source text to the output text -- although this process is blind to the role and nature of the word in the source text.
Inspired by research in open information extraction~\cite{Fader11} and structure-content neural language models~\cite{Kiros14}, in which part-of-speech tags represent a distinctive feature when representing relations in text, we extend these positional copy actions.
Instead of copying a word in a specific position in the source text, our model copies a word with a specific part-of-speech tag from the input text -- we refer to those as part-of-speech copy actions.
Experiments show that our model using contexts through distant supervision significantly outperforms the strongest baseline among six ($+2.04$ BLEU-4 score). %
Adding our copy action mechanism further increases this improvement ($+2.39$).
Additionally, a human evaluation complements the assessment of our model on edge cases; it supports the claim that the improvement brought by our copy action mechanism is even more significant than what the BLEU score suggests.
\section{Related Work}
QG became an essential component in many applications such as education~\cite{HeilmanS10}, tutoring~\cite{graesser2004,evens2006} and dialogue systems~\cite{Shang15}.
In our paper we focus on the problem of QG from structured KB and how we can generalize it to unseen predicates and entity types.
\cite{SeylerYB15} generate quiz questions from KB triples. Verbalization of entities and predicates relies on their existing labels in the KB and a dictionary.
\cite{Serban16} use an encoder-decoder architecture with attention mechanism trained on the SimpleQuestions dataset~\cite{Bordes15}.
\cite{Dong17} generate paraphrases of given questions to increases the performance of QA systems; paraphrases are generated relying on paraphrase datasets, neural machine translation and rule mining.
\cite{Khapra17} generate a set of QA pairs given a KB entity.
They model the problem of QG as a sequence to sequence problem by converting all the KB entities to a set of keywords.
None of the previous work in QG from KB address the question of generalizing to unseen predicates and entity types. \\
Textual information has been used before in the Zero-Shot learning. \cite{Socher13} use information in pretrained word vectors for Zero-Shot visual object recognition.
\cite{Levy17} incorporates a natural language question to the relation query to tackle Zero-Shot relation extraction problem.
Previous work in machine translation dealt with the rare or unseen word problem for translating names and numbers in text.
\cite{Luong15} propose a model that generates positional placeholders pointing to some words in source sentence and copy it to target sentence (\textit{copy actions}).
\cite{Gulcehre16,Gu16} introduce separate trainable modules for copy actions to adapt to highly variable input sequences, for text summarization.
For text generation from tables, \cite{lebret2016} extend positional copy actions to copy values from fields in the given table.
For QG,~\cite{Serban16} use a placeholder for the subject entity in the question to generalize to unseen entities.
Their work is limited to unseen entities and does not study how they can generalize to unseen predicates and entity types.
\begin{figure*}[h]
\centering
\small
\def.75\linewidth{.75\linewidth}
\includegraphics[width=0.75\textwidth]{./figures/model.pdf}
\caption{The proposed model for Question Generation. The model consists of a single fact encoder and $n$ textual context encoders, each consists of a separate GRU. At each time step $t$, two attention vectors generated from the two attention modules are fed to the decoder to generate the next word in the output question.}
\label{fig:model}
\end{figure*}
\section{Model}
Let $F=\{s,p,o\}$ be the input fact provided to our model consisting of a subject $s$, a predicate $p$ and an object $o$, and $C$ be the set of textual contexts associated to this fact.
Our goal is to learn a model that generates a sequence of $T$ tokens
$Y = y_1,y_2,\ldots,y_T$ representing a question about the subject $s$, where the object $o$ is the correct answer.
Our model approximates the conditional probability of the output question given an input fact $p(Y|F)$, to be the probability of the output question, given an input fact and the additional textual context $C$, modelled as follows:
\begin{align}
p(Y|F) &= \prod_{t=1}^{T}p(y_t|y_{<t}, F,C)
\end{align}
where $y_{<t}$ represents all previously generated tokens until time step $t$. %
Additional textual contexts are natural language representation of the triples that can be drawn from a corpus -- our model is generic to any textual contexts that can be additionally provided, though we describe in Section~\ref{subsec:textualcontexts} how to create such texts from Wikipedia.
Our model derives the encoder-decoder architecture of~\cite{Sutskever14,Bahdanau14} with two encoding modules: a feed forward architecture encodes the input triple (sec. \ref{sec:fact-encoder}) and a set of recurrent neural network (RNN) to encode each textual context (sec. \ref{sec:textual-encoder}).
Our model has two attention modules~\cite{Bahdanau14}: one acts over the input triple and another acts over the input textual contexts (sec. \ref{sec:attention}).
The decoder (sec. \ref{sec:decoder}) is another RNN that generates the output question. At each time step, the decoder chooses to output either a word from the vocabulary or a special token indicating a copy action (sec. \ref{sec:copy-actions}) from any of the textual contexts.
\subsection{Fact Encoder}~\label{sec:fact-encoder}
Given an input fact $F=\{s,p,o\}$, let each of $e_s$, $e_p$ and $e_o$ be a 1-hot vector of size $K$.
The fact encoder encodes each 1-hot vector into a fixed size vector $h_s = \mathbf{E_f}\,e_s$, \enspace $h_p = \mathbf{E_f}\,e_p$ \enspace and $h_o = \mathbf{E_f}\,e_o$,
where $\mathbf{E_f} \in \mathbb{R}^{H_k \times K}$ is the KB embedding matrix, $H_k$ is the size of the KB embedding and $K$ is the size of the KB vocabulary.
The \emph{encoded fact} $ h_f \in \mathbb{R}^{3H_k}$ represents the concatenation of those three vectors and we use it to initialize the decoder.
\begin{align}\label{eq:fact-emb}
h_f = [h_s;\enspace h_p;\enspace h_o]
\end{align}
Following~\cite{Serban16}, we learn $\mathbf{E_{f}}$ using \textit{TransE}~\cite{Bordes15}. We fix its weights and do not allow their update during training time.
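The fact encoder reduces to three embedding lookups and a concatenation. A minimal NumPy sketch, with toy sizes and a random matrix standing in for the pretrained TransE embeddings:

```python
import numpy as np

rng = np.random.default_rng(0)
K, H_k = 50, 8                      # toy KB vocabulary and embedding sizes
E_f = rng.normal(size=(H_k, K))     # stands in for the pretrained TransE matrix

def encode_fact(s_id, p_id, o_id):
    """h_f = [h_s; h_p; h_o]: concatenation of the three embedded 1-hot vectors."""
    return np.concatenate([E_f[:, i] for i in (s_id, p_id, o_id)])

h_f = encode_fact(3, 17, 42)
print(h_f.shape)  # (24,) i.e. 3 * H_k
```

Multiplying $\mathbf{E_f}$ by a 1-hot vector selects a column, so a column lookup is the idiomatic equivalent.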
\subsection{Textual Context Encoder}~\label{sec:textual-encoder}
Let $ C = \{c_1, c_2, \ldots, c_n : c_j = (x_1^j, x_2^j, \ldots, x^j_{|c_j|})\}$ be a set of $n$ textual contexts, where $x_i^j$ represents the 1-hot vector of the $i^{th}$ token in the $j^{th}$ textual context $c_j$, and $|c_j|$ is the length of the $j^{th}$ context.
We use a set of $n$ Gated Recurrent Units (GRU)~\cite{Cho2014} to encode each of the textual contexts separately:
%
\begin{align}\label{eq:gru-enc}
h_i^{c_j} = GRU_j\left (\mathbf{E_c}\,x_{i}^{j},\enspace h_{i-1}^{c_j} \right )
\end{align}
where $h_i^{c_j} \in \mathbb{R}^{H_c}$ is the hidden state of the GRU corresponding to $x_i^j$, of size $H_c$. $\mathbf{E_{c}}$ is the input word embedding matrix.
The \emph{encoded context} represents the encoding of all the textual contexts; it is calculated as the concatenation of all the final states of all the encoded contexts:
\begin{align}\label{eq:context-emb}
h_c &= [h^{c_1}_{|c_1|}; h^{c_2}_{|c_2|};\ldots; h^{c_n}_{|c_n|}].
\end{align}
\subsection{Decoder} \label{sec:decoder}
For the decoder we use another GRU with an attention mechanism~\cite{Bahdanau14}, in which the decoder hidden state $s_t\in\mathbb{R}^{H_{d}}$ at each time step $t$ is calculated as:
\begin{align}
s_{t} &= z_t \circ s_{t-1} + (1-z_t) \circ \tilde{s}_{t} \enspace,
\end{align}
Where:
\small
\begin{align}
\tilde{s}_{t} &= tanh\left(W E_{w}y_{t-1} + U [r_t \circ s_{t-1}] + A \, [a^f_t; a^c_t] \right) \\
z_t &= \sigma\left(W_z\,E_{w}\,y_{t-1} + U_z \, s_{t-1} + A_z \, [a^f_t; a^c_t] \right) \\
r_t &= \sigma\left(W_r\,E_{w}\,y_{t-1} + U_r \, s_{t-1} + A_r \, [a^f_t; a^c_t] \right)
\end{align}
\normalsize
$W,W_z,W_r \in \mathbb{R}^{m \times H_{d}}$, $U,U_z,U_r,A,A_z,A_r \in \mathbb{R}^{H_{d} \times H_{d}}$ are learnable parameters of the GRU.
$E_w \in \mathbf{R}^{m \times V}$ is the word embedding matrix, $m$ is the word embedding size and $H_d$ is the size of the decoder hidden state.
$a^f_t$, $a^c_t$ are the outputs of the fact attention and the context attention modules respectively, detailed in the following subsection. \\
In order to enforce the model to pair output words with words from the textual inputs, we couple the word embedding matrices of both the decoder $E_w$ and the textual context encoder $E_c$ (eq.(\ref{eq:gru-enc})).
We initialize them with GloVe embeddings~\cite{PenningtonSM14} and allow the network to tune them.\\
The first hidden state of the decoder $s_0 = [h_f;\,h_c]$ is initialized using a concatenation of the encoded fact~(eq.(\ref{eq:fact-emb})) and the encoded context~(eq.(\ref{eq:context-emb})).\\
At each time step $t$, after calculating the hidden state of the decoder, the conditional probability distribution over each token $y_t$ of the generated question is computed as the $softmax(W_{o}\,s_t)$ over all the entries in the output vocabulary, $W_o \in \mathbb{R}^{H_d \times V}$ is the weight matrix of the output layer of the decoder.\\
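One decoder step can be sketched in NumPy as follows. All matrices are random toys rather than trained weights, the sizes are assumptions, and the shapes are chosen so that every product is well-defined:

```python
import numpy as np

rng = np.random.default_rng(1)
m, H_d, H_a = 6, 10, 7     # toy embedding, decoder, and attention sizes (assumed)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Random stand-ins for the trainable matrices W, U, A and their z/r gate copies.
W, Wz, Wr = (rng.normal(scale=0.1, size=(H_d, m)) for _ in range(3))
U, Uz, Ur = (rng.normal(scale=0.1, size=(H_d, H_d)) for _ in range(3))
A, Az, Ar = (rng.normal(scale=0.1, size=(H_d, 2 * H_a)) for _ in range(3))

def gru_step(y_prev, s_prev, a_fact, a_ctx):
    """One decoder step: update gate z_t, reset gate r_t, candidate state,
    then the convex combination s_t = z_t * s_{t-1} + (1 - z_t) * s~_t."""
    a = np.concatenate([a_fact, a_ctx])              # [a^f_t; a^c_t]
    z = sigmoid(Wz @ y_prev + Uz @ s_prev + Az @ a)
    r = sigmoid(Wr @ y_prev + Ur @ s_prev + Ar @ a)
    s_tilde = np.tanh(W @ y_prev + U @ (r * s_prev) + A @ a)
    return z * s_prev + (1 - z) * s_tilde

s_t = gru_step(rng.normal(size=m), np.zeros(H_d),
               rng.normal(size=H_a), rng.normal(size=H_a))
print(s_t.shape)  # (10,)
```

Because the new state is a gated convex combination of the old state and a $\tanh$ candidate, its entries stay bounded in $[-1, 1]$ when starting from a zero state.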
\subsection{Attention} \label{sec:attention}
Our model has two attention modules: \\
\textbf{Triple attention} over the input triple to determine at each time step $t$ an attention-based encoding of the input fact $a^f_t \in \mathbb{R}^{H_k}$:
\begin{align}
a^f_t = \alpha_{s,t} \, h_{s} + \alpha_{p,t} \, h_p + \alpha_{o,t} \, h_o \enspace,
\end{align}
$\alpha_{s,t}, \alpha_{p,t}, \alpha_{o,t}$ are scalar values calculated by the attention mechanism to determine at each time step which of the encoded subject, predicate, or object the decoder should attend to. \\
\textbf{Textual contexts attention} over all the hidden states of all the textual contexts $a^c_t \in \mathbb{R}^{H_c}$:
\begin{align}
a^c_t = \sum_{i=1}^{|C|} \sum_{j=1}^{|c_i|} \alpha_{t,j}^{c_i} \, h_j^{c_i} \enspace,
\end{align}
$\alpha_{t,j}^{c_i}$ is a scalar value determining the weight of the $j^{th}$ word in the $i^{th}$ context $c^i$ at time step $t$.
Given a set of encoded input vectors $I = \{h_1,h_2,\ldots,h_k\}$ and the previous decoder hidden state $s_{t-1}$, the attention mechanism calculates $\alpha_{t} = (\alpha_{1,t},\ldots,\alpha_{k,t})$ as a vector of scalar weights, where each $\alpha_{i,t}$ determines the weight of its corresponding encoded input vector $h_i$.
\begin{align}
e_{i,t} &= \mathbf{v_a}^\top \enspace tanh(\mathbf{W_a \, s_{t-1} + U_a \, h_i}) \\
\alpha_{i,t} &= \frac{exp\left(e_{i,t}\right)}{\sum_{j=1}^{k} exp\left(e_{j,t}\right)} \enspace,
\end{align} where
$\mathbf{v_a}, \mathbf{W_a}, \mathbf{U_a}$ are trainable weight matrices of the attention modules.
It is important to notice here that we encode each textual context separately using a different GRU, but we calculate an overall attention over all tokens in all textual contexts: at each time step the decoder should ideally attend to only one word from all the input contexts.
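The attention scoring above reduces to a softmax over the scores $e_{i,t}$. A sketch with toy dimensions and random parameters (assumed, not trained):

```python
import numpy as np

rng = np.random.default_rng(2)
H_d, H, k = 10, 8, 5        # toy decoder-state size, input size, input count
v_a = rng.normal(size=H_d)
W_a = rng.normal(size=(H_d, H_d))
U_a = rng.normal(size=(H_d, H))

def attention_weights(s_prev, inputs):
    """alpha_{i,t} = softmax_i( v_a^T tanh(W_a s_{t-1} + U_a h_i) )."""
    e = np.array([v_a @ np.tanh(W_a @ s_prev + U_a @ h) for h in inputs])
    w = np.exp(e - e.max())          # shift for numerical stability
    return w / w.sum()

alpha = attention_weights(rng.normal(size=H_d), rng.normal(size=(k, H)))
print(alpha.shape)  # (5,) -- non-negative weights summing to 1
```

Subtracting the maximum score before exponentiating leaves the softmax unchanged but avoids overflow.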
\input{figures/copyactions}
\subsection{Part-Of-Speech Copy Actions}\label{sec:copy-actions}
We use the method of \cite{Luong15} by modeling all the copy actions on the data level through an annotation scheme.
This method treats the model as a black box, which makes it adaptable to any text generation model.
Instead of using positional copy actions, we use the part-of-speech information to decide the alignment process between the input and output texts to the model.
Each word in every input textual context is replaced by a special token containing a combination of its context id (e.g. \texttt{C1}) and its POS tag (e.g. \texttt{NOUN}).
Then, if a word in the output question matches a word in a textual context, it is replaced with its corresponding tag as shown in Table~\ref{table:copyactions}. \\
Unlike \cite{Serban16,lebret2016} we model the copy actions in the input and the output levels.
Our model does not have the drawback of losing the semantic information when replacing words with generic placeholders, since we provide the model with the input triple through the fact encoder.
During inference the model chooses to either output words from the vocabulary or special tokens to copy from the textual contexts.
In a post-processing step those special tokens are replaced with their original words from the textual contexts.
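A minimal sketch of this annotation scheme follows; the POS tags are supplied by hand here (the paper uses spaCy's tagger), and the example tokens are illustrative:

```python
def annotate_context(tokens, tags, context_id):
    """Replace each word by C<id>_<POS>, numbering repeated POS tags."""
    counts, out, mapping = {}, [], {}
    for tok, tag in zip(tokens, tags):
        counts[tag] = counts.get(tag, 0) + 1
        name = f"C{context_id}_{tag}"
        if counts[tag] > 1:
            name += f"_{counts[tag]}"
        out.append(name)
        mapping[tok.lower()] = name
    return out, mapping

def annotate_question(question_tokens, mapping):
    """Replace question words that appear in a context by their copy tokens."""
    return [mapping.get(t.lower(), t) for t in question_tokens]

ctx = ["assassination", "is", "a", "film", "directed", "by"]
tags = ["NOUN", "VERB", "DET", "NOUN", "VERB", "ADP"]
annotated, mapping = annotate_context(ctx, tags, 1)
print(annotated)
# ['C1_NOUN', 'C1_VERB', 'C1_DET', 'C1_NOUN_2', 'C1_VERB_2', 'C1_ADP']
print(annotate_question(["who", "directed", "the", "film", "?"], mapping))
# ['who', 'C1_VERB_2', 'the', 'C1_NOUN_2', '?']
```

At inference time the same mapping is inverted: emitted copy tokens are replaced by the original context words in a post-processing step.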
\section{Textual contexts dataset}
As a source of question paired with KB triples we use the SimpleQuestions dataset~\cite{Bordes15}.
It consists of 100K questions with their corresponding triples from Freebase, and was created manually through crowdsourcing.
When asked to form a question from an input triple, human annotators usually tend to mainly focus on expressing the predicate of the input triple.
For example, given a triple with the predicate \texttt{fb:spacecraft/manufacturer} the user may ask \textit{"What is the manufacturer of \texttt{[S]} ?"}.
Annotators may specify the entity type of the subject or the object of the triple: \textit{"What is the manufacturer of the \textbf{spacecraft} \texttt{[S]}?"} or \textit{"Which \textbf{company} manufactures \texttt{[S]}?"}.
Motivated by this example we chose to associate each input triple with three textual contexts of three different types.
The first is a phrase containing lexicalization of the predicate of the triple.
The second and the third are two phrases containing the entity type of the subject and the object of the triple.
In what follows we show the process of collection and preprocessing of those textual contexts.
\subsection{Collection of Textual Contexts}\label{subsec:textualcontexts}
We extend the set of triples given in the SimpleQuestions dataset by using the FB5M~\cite{Bordes15} subset of Freebase. As a source of text documents, we rely on Wikipedia articles.
\paragraph{Predicate textual contexts:}
In order to collect textual contexts associated with the SimpleQuestions triples, we follow the distant supervision setup for relation extraction~\cite{mintz_2009}.
The distant supervision assumption has been effective in creating training data for relation extraction and shown to be 87\% correct~\cite{riedel_2010} on Wikipedia text. \\
First, we align each triple in the FB5M KB to sentences in Wikipedia if the subject and the object of this triple co-occur in the same sentence.
We use a simple string matching heuristic to find entity mentions in text\footnote{
We map Freebase entities to Wikidata through the Wikidata property P646, then we extract their labels and aliases.
We use the Wikidata truthy dump:~\url{https://dumps.wikimedia.org/wikidatawiki/entities/}}.
Afterwards we reduce the sentence to the set of words that appear on the dependency path between the subject and the object mentions in the sentence.
We replace the positions of the subject and the object mentions with \texttt{[S]} and \texttt{[O]} to keep track of the direction of the relation.
The top occurring pattern for each predicate is associated with this predicate as its textual context.
Table~\ref{table:example-dep} shows examples of predicates and their corresponding textual context.
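The alignment and pattern-selection steps above can be sketched as follows. This is a minimal pure-Python illustration: the dependency-path reduction (done with a dependency parser in our pipeline) is elided, and all function names and toy sentences are illustrative only.

```python
from collections import Counter

def mark_mentions(sentence, subj_aliases, obj_aliases):
    """Replace the first matching subject/object alias with [S]/[O];
    returns None when either mention is missing (the pair is skipped)."""
    subj = next((a for a in subj_aliases if a in sentence), None)
    obj = next((a for a in obj_aliases if a in sentence), None)
    if subj is None or obj is None:
        return None
    return sentence.replace(subj, "[S]", 1).replace(obj, "[O]", 1)

def top_pattern(patterns):
    """The most frequent [S]...[O] pattern becomes the predicate's textual context."""
    return Counter(patterns).most_common(1)[0][0]

# toy run for a predicate like fb:spacecraft/manufacturer
aligned = [
    mark_mentions("Apollo 11 was manufactured by North American Aviation .",
                  ["Apollo 11"], ["North American Aviation"]),
    mark_mentions("Sputnik 1 was manufactured by OKB-1 .",
                  ["Sputnik 1"], ["OKB-1"]),
]
print(top_pattern([a for a in aligned if a is not None]))
```

In the real pipeline the sentence is first reduced to the words on the dependency path between the two mentions before counting patterns.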
\paragraph{Sub-Type and Obj-Type textual contexts:}
We use the labels of the entity types as the sub-type and obj-type textual contexts.
We collect the list of entity types of each entity in the FB5M through the predicate \texttt{fb:type/instance}.
If an entity has multiple entity types we pick the entity type that is mentioned the most in the first sentence of Wikipedia articles.
Thus the textual contexts favor entity types that are more natural to appear in free text, and therefore in questions.
\input{figures/dep_examples.tex}
\subsection{Generation of Special tokens}
To generate the special tokens for copy actions~(sec.~\ref{sec:copy-actions}) we run POS tagging on each of the input textual contexts\footnote{For the predicate textual contexts we run POS tagging on the original text, not on the lexicalized dependency path.}.
We replace every word in each textual context with a combination of its context id (e.g. \texttt{C1}) and its POS tag (e.g. \texttt{NOUN}).
If the same POS tag appears multiple times in the textual context, it is given an additional id (e.g. \texttt{C1\_NOUN\_2}).
If a word in the output question overlaps with a word in the input textual context, this word is replaced by its corresponding tag. \\
For sentence and word tokenization we use the Regex tokenizer from the NLTK toolkit~\cite{NLTK}, and for POS tagging and dependency parsing we use the spaCy\footnote{\url{https://spacy.io/}} implementation.
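A minimal sketch of this annotation scheme (function names and the toy context are illustrative; the POS tags would come from the tagger):

```python
def tag_context(words_pos, cid):
    """Map each (word, POS) pair to a special token <cid>_<POS>[_k],
    where k disambiguates repeated POS tags within the same context."""
    counts, tokens, word2tag = {}, [], {}
    for word, pos in words_pos:
        counts[pos] = counts.get(pos, 0) + 1
        tag = f"{cid}_{pos}" if counts[pos] == 1 else f"{cid}_{pos}_{counts[pos]}"
        tokens.append(tag)
        word2tag.setdefault(word.lower(), tag)
    return tokens, word2tag

def annotate_question(question_tokens, word2tag):
    """Replace question words that overlap with a context word by its tag."""
    return [word2tag.get(w.lower(), w) for w in question_tokens]

ctx = [("manufactured", "VERB"), ("by", "ADP"), ("company", "NOUN")]
tokens, w2t = tag_context(ctx, "C1")
print(tokens)
print(annotate_question("Which company manufactured [S] ?".split(), w2t))
```

The inverse mapping (`word2tag` reversed) is what the post-processing step uses to restore the original words at inference time.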
\section{Experiments}
\subsection{Zero-Shot Setups}~\label{sec:zeroshot-setup}
We develop three setups that follow the same procedure as~\cite{Levy17} for Zero-Shot relation extraction to evaluate how our model generalizes to: 1) unseen predicates, 2) unseen sub-types and 3) unseen obj-types. \\
For the unseen predicates setup we group all the samples in SimpleQuestions by the predicate of the input triple, and keep groups that contain at least 50 samples.
Afterwards we randomly split those groups into mutually exclusive train (70\%), validation (10\%) and test (20\%) sets.
This guarantees that if the predicate \texttt{fb:person/place\_of\_birth}, for example, appears at test time, the training and validation sets will not contain any input triples having this predicate.
We repeat this process to create 10 cross-validation folds; in our evaluation we report the mean and standard deviation of the results across those 10 folds.
While doing this we make sure that the number of samples in each fold -- not only the number of unique predicates -- follows the same 70\%, 10\%, 20\% distribution.
We repeat the same process for the subject entity types and object entity types (answer types) individually.
Similarly, for example in the unseen object-type setup, the question \textit{"Which \textbf{artist} was born in Berlin?"} appearing in the test set means that there is no question in the training set having an entity of type \textbf{\textit{artist}}.
Table~\ref{tab:dataset-sizes} shows the mean number of samples, predicates, sub-types and obj-types across the 10 folds for each experiment setup.
\input{figures/dataset_size.tex}
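The splitting procedure can be sketched as follows. This is a simplified, illustrative version: `key` stands for the field being held out (predicate, sub-type or obj-type), and the 10-fold repetition and per-fold sample balancing are omitted.

```python
import random
from collections import defaultdict

def zero_shot_split(samples, key, min_group=50, seed=0):
    """Group samples by `key`, drop groups smaller than `min_group`, then split
    the groups (not the samples) into mutually exclusive train/valid/test sets,
    so no value of `key` seen at test time ever appears during training."""
    groups = defaultdict(list)
    for s in samples:
        groups[s[key]].append(s)
    keys = sorted(k for k, g in groups.items() if len(g) >= min_group)
    random.Random(seed).shuffle(keys)
    n = len(keys)
    train_k = keys[: int(0.7 * n)]          # 70% of groups
    valid_k = keys[int(0.7 * n): int(0.8 * n)]  # 10%
    test_k = keys[int(0.8 * n):]            # 20%
    def pick(ks):
        return [s for k in ks for s in groups[k]]
    return pick(train_k), pick(valid_k), pick(test_k)
```

Splitting at the group level, rather than the sample level, is what makes the setup zero-shot: the disjointness of train and test key sets is guaranteed by construction.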
\subsection{Baselines}~\label{sec:baselines}
\vspace{-15pt}
\paragraph{\texttt{SELECT}}
is a baseline built from~\cite{Serban16} and adapted for the zero-shot setup.
During test time given a fact $F$, this baseline picks a fact $F_c$ from the training set and outputs the question that corresponds to it.
When evaluating unseen predicates, $F_c$ has the same answer type (obj-type) as $F$; when evaluating unseen sub-types or obj-types, $F_c$ and $F$ have the same predicate.
\paragraph{\texttt{R-TRANSE}}
is an extension that we propose for \texttt{SELECT}.
The input triple is encoded using the concatenation of the TransE embeddings of the subject, predicate and object.
At test time, \texttt{R-TRANSE} picks a fact from the training set that is the closest to the input fact using cosine similarity and outputs the question that corresponds to it.
We provide two versions of this baseline:
\textbf{\texttt{R-TRANSE}} which indexes and retrieves raw questions with only a single placeholder for the subject label, such as in~\cite{Serban16}.
And \textbf{\texttt{R-TRANSE\textsubscript{copy}}} which indexes and retrieves questions using our copy actions mechanism (sec.~\ref{sec:copy-actions}).
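The retrieval step of this baseline can be sketched as follows, assuming precomputed TransE embeddings stored as plain lists (toy 2-dimensional values here; the real embeddings have size $H_k=200$, and all names are illustrative):

```python
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) *
                  math.sqrt(sum(b * b for b in v)))

def encode_fact(emb, fact):
    """Concatenate the TransE embeddings of subject, predicate and object."""
    s, p, o = fact
    return emb[s] + emb[p] + emb[o]  # list concatenation

def r_transe_retrieve(emb, train_pairs, test_fact):
    """Return the training question whose fact is closest in cosine similarity."""
    q = encode_fact(emb, test_fact)
    best_fact, best_q = max(train_pairs,
                            key=lambda fq: cosine(encode_fact(emb, fq[0]), q))
    return best_q

emb = {"s1": [1, 0], "p1": [0, 1], "o1": [1, 1],
       "s2": [0, 1], "p2": [1, 0], "o2": [0, 2]}
train = [(("s1", "p1", "o1"), "who manufactured [S] ?"),
         (("s2", "p2", "o2"), "where was [S] born ?")]
print(r_transe_retrieve(emb, train, ("s1", "p1", "o2")))
```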
\paragraph{\texttt{IR}}
is an information retrieval baseline. Information retrieval has been used before as a baseline for QG from text input~\cite{Rush15,Du17}.
We rely on the textual context of each input triple as the search keyword for retrieval.
First, the IR baseline encodes each question in the training set as a vector of TF-IDF weights~\cite{joachims_97} and then does dimensionality reduction through LSA~\cite{halko_svd_2011}.
At test time the textual context of the input triple is converted into a dense vector using the same process and then the question with the closest cosine distance to the input is retrieved.
We provide two versions of this baseline: \textbf{\texttt{IR}} on raw text and \textbf{\texttt{IR\textsubscript{copy}}} on text with our placeholders for copy actions.
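A simplified sketch of this baseline using raw TF-IDF weights (the LSA dimensionality-reduction step is omitted for brevity, and names are illustrative):

```python
import math
from collections import Counter

def tfidf(docs):
    """TF-IDF weight vectors (as sparse dicts) for a list of token strings.
    The full baseline additionally reduces these vectors with LSA."""
    df = Counter(w for d in docs for w in set(d.split()))
    n = len(docs)
    out = []
    for d in docs:
        tf = Counter(d.split())
        out.append({w: c * math.log(n / df[w]) for w, c in tf.items()})
    return out

def cos_sparse(u, v):
    dot = sum(w * v.get(k, 0.0) for k, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def ir_retrieve(train_questions, query_context):
    """Index the training questions; retrieve the one closest to the
    textual context of the input triple."""
    vecs = tfidf(train_questions + [query_context])
    q = vecs[-1]
    best = max(range(len(train_questions)),
               key=lambda i: cos_sparse(vecs[i], q))
    return train_questions[best]
```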
\paragraph{\texttt{Encoder-Decoder.}}
Finally, we compare our model to the Encoder-Decoder model with a single placeholder, the best performing model from~\cite{Serban16}.
We initialize the encoder with TransE embeddings and the decoder with GloVe word embeddings.
Although this model was not originally built to generalize to unseen predicates and entity types, it has some generalization abilities represented in the encoded information in the pre-trained embeddings.
Pretrained KB terms and word embeddings encode relations between entities or between words as translations in the vector space.
Thus the model might be able to map new classes or predicates in the input fact to new words in the output question.
\input{figures/eval_unseenpredicates}
\subsection{Training \& Implementation Details}
To train the neural network models we optimize the negative log-likelihood of the training data with respect to all the model parameters. For that we use the RMSProp optimization algorithm with a decreasing learning rate of~$0.001$, mini-batch size~$=200$, and clipping gradients with norms larger than $0.1$.
We use the same vocabulary for both the textual context encoders and the decoder outputs. We limit our vocabulary to the top $30,000$ words including the special tokens.
For the word embeddings we chose GloVe~\cite{PenningtonSM14} pretrained embeddings of size $100$.
We train TransE embeddings of size $H_k=200$, on the FB5M dataset~\cite{Bordes15} using the TransE model implementation from~\cite{Lin15}.
We set GRU hidden size of the decoder to $H_d=500$, and textual encoder to $H_c=200$.
The networks' hyperparameters are set with respect to the final BLEU-4 score over the validation set.
All neural networks are implemented using Tensorflow~\cite{tensorflow}. %
All experiments and models source code are publicly available\footnote{\url{https://github.com/NAACL2018Anonymous/submission}} for the sake of reproducibility.
\subsection{Automatic Evaluation Metrics}
To evaluate the quality of the generated questions, we compare the original questions written by human annotators to the ones generated by each variation of our model and the baselines.
We rely on a set of well established evaluation metrics for text generation: BLEU-1, BLEU-2, BLEU-3, BLEU-4~\cite{bleu}, METEOR~\cite{meteor} and ROUGE\textsubscript{L}~\cite{lin2004rouge}.
\\
\subsection{Human Evaluation}
Metrics for evaluating text generation such as BLEU and METEOR give a measure of how close the generated questions are to the target correct labels.
However, they might not be able to evaluate directly whether the predicate in the question was expressed in the text or not.
Thus we run two further human evaluations to directly measure the following. \\
\textbf{\textit{Predicate identification}}: annotators were asked to indicate whether the generated question contains the given predicate in the fact or not, either directly or implicitly. \\
\textbf{\textit{Naturalness}}: following~\cite{Ngomo13}, we measure the comprehensibility and readability of the generated questions.
Each annotator was asked to rate each generated question using a scale from $1$ to $5$, where:
(5) perfectly clear and natural,
(3) artificial but understandable, and (1) completely not understandable.
We run our studies on $100$ randomly sampled input facts alongside their corresponding generated questions from each of the systems, with the help of $4$ annotators.
\section{Results \& Discussion}
\paragraph{Automatic Evaluation} Table~\ref{table:eval-pred} shows results of our model compared to all other baselines across all evaluation metrics.
Our model, which encodes the KB fact and textual contexts, achieves a significant improvement over all the baselines in all evaluation metrics, with a $\mathbf{+2.04}$ BLEU-4 score over the Encoder-Decoder baseline.
Incorporating the part-of-speech copy actions further improves this enhancement to reach \textbf{$\mathbf{+2.39}$} BLEU-4 points. \\
Among all baselines, the Encoder-Decoder baseline and the R-TRANSE baseline performed the best.
This shows that TransE embeddings encode shared information across predicates and across class types, and can thus generalize to some extent to unseen predicates and class types.
Similar patterns can be seen in the evaluation on unseen sub-types and obj-types (Table~\ref{table:eval-type}). Our model with copy actions was able to outperform all the other systems.
Most systems report significantly higher BLEU-4 scores in these two tasks than when generalizing to unseen predicates ($+12$ and $+8$ BLEU-4 points respectively).
This indicates that these tasks are relatively easier and hence our models achieve relatively smaller enhancements over the baselines.
\input{figures/qualitative_eval.tex}
\input{figures/results_examples.tex}
\paragraph{Human Evaluation}
Table~\ref{table:human-eval} shows how different variations of our system can express the unseen predicate in the target question with comparison to the Encoder-Decoder baseline. \\
Our proposed copy actions yield a significant improvement in the identification of unseen predicates: up to $\mathbf{+40\%}$ over the best performing baseline and over our model version without copy actions. \\
By examining some of the generated questions~(Table~\ref{table:results-example}) we see that models without copy actions can only generalize to unseen predicates that have a very similar Freebase predicate in the training set. For example \texttt{fb:tv\_program/language} and \texttt{fb:film/language}: if one of those predicates exists in the training set, the model can reuse its questions for the other at test time. \\
Copy actions from the sub-type and the obj-type textual contexts can generalize to a great extent to unseen predicates because of the overlap between the predicate and the object type in many questions (Example 2 Table~\ref{table:results-example}). %
Adding the predicate context to our model has enhanced model performance for expressing unseen predicates by $+9\%$ (Table~\ref{table:human-eval}).
However we can see that it has affected the naturalness of the question.
The post-processing step does not take into consideration that some verbs and prepositions do not fit the sentence structure, or that some copied words already exist in the question (Example 4 Table~\ref{table:results-example}).
This does not happen as much when having copy actions from the sub-type and the obj-type contexts because they are mainly formed of nouns which are more interchangeable than verbs or prepositions.
A post-processing step that reformulates the question, instead of directly copying from the input source, is left for future work.
\section{Conclusion}
In this paper we presented a new neural model for question generation from knowledge bases, with a main focus on predicates, subject types or object types that were not seen at the training phase (Zero-Shot Question Generation). %
Our model is based on an encoder-decoder architecture that leverages textual contexts of triples, two attention layers for triples and textual contexts and finally a part-of-speech copy action mechanism. %
Our method exhibits significantly better results for Zero-Shot QG than a set of strong baselines, including the state-of-the-art model for question generation from KBs. %
Additionally, a complementary human evaluation shows that the improvement brought by our part-of-speech copy action mechanism is even more significant than what the automatic evaluation suggests.
The source code and the collected textual contexts are provided for the community: \url{https://github.com/NAACL2018Anonymous/submission}
\section{Introduction}
Question Generation (QG) from Knowledge Graphs is the task of generating natural language questions given an input knowledge base (KB) triple~\cite{Serban16}.
QG from knowledge graphs has been shown to improve the performance of existing factoid question answering (QA) systems, either by dual training or by augmenting existing training datasets~\cite{Dong17,Khapra17}.
Those methods rely on large-scale annotated datasets such as SimpleQuestions~\cite{Bordes15}.
In practice many of the predicates and entity types in KBs are not covered by those annotated datasets; for example, $75\%$ of Freebase predicates are not covered by SimpleQuestions.
So, one challenge for QG from knowledge graphs is to adapt to predicates and entity types that were \textit{not} seen at training time (aka Zero-Shot QG).
Ultimately, generating questions to predicates and entity types unseen at training time will allow QA systems to cover predicates and entity types that would not have been used for QA otherwise.
Intuitively, a human who is given the task of writing a question about a fact from a KB would read natural language sentences where the entity or the predicate of the fact occurs, and build questions that are aligned with what they read from both a lexical and grammatical standpoint.
In this paper, we propose a model for Zero-Shot Question Generation that follows this intuitive process.
In addition to the input KB fact, we feed our model with a set of textual contexts paired with the input KB triple through distant supervision.
Our model is based on an encoder-decoder architecture, in which the encoder encodes the input KB triple, along with a set of textual contexts, into hidden representations.
Those hidden representations are fed to a decoder equipped with an attention mechanism to generate an output question. \\
In the Zero-Shot setup, the emergence of new predicates and new class types at test time requires new lexicalizations to express these predicates and classes in the output question. These lexicalizations might not have been encountered by the model during training and hence do not exist in the model vocabulary, or have been seen only a few times, not enough for the model to learn good representations for them.
Recent works on Text Generation tackle the rare words/unknown words problem using copy actions~\cite{Luong15,Gulcehre16}: words with a specific position are copied from the source text to the output text -- although this process is blind to the role and nature of the word in the source text.
Inspired by research in open information extraction~\cite{Fader11} and structure-content neural language models~\cite{Kiros14}, in which part-of-speech tags represent a distinctive feature when representing relations in text, we extend these positional copy actions.
Instead of copying a word in a specific position in the source text, our model copies a word with a specific part-of-speech tag from the input text -- we refer to those as part-of-speech copy actions.
Experiments show that our model using contexts through distant supervision significantly outperforms the strongest baseline among six ($+2.04$ BLEU-4 score). %
Adding our copy action mechanism further increases this improvement ($+2.39$).
Additionally, a human evaluation complements the automatic evaluation on edge cases; it supports the claim that the improvement brought by our copy action mechanism is even more significant than what the BLEU score suggests.
\section{Related Work}
QG has become an essential component in many applications such as education~\cite{HeilmanS10}, tutoring~\cite{graesser2004,evens2006} and dialogue systems~\cite{Shang15}.
In our paper we focus on the problem of QG from structured KB and how we can generalize it to unseen predicates and entity types.
\cite{SeylerYB15} generate quiz questions from KB triples. Verbalization of entities and predicates relies on their existing labels in the KB and a dictionary.
\cite{Serban16} use an encoder-decoder architecture with attention mechanism trained on the SimpleQuestions dataset~\cite{Bordes15}.
\cite{Dong17} generate paraphrases of given questions to increase the performance of QA systems; paraphrases are generated relying on paraphrase datasets, neural machine translation and rule mining.
\cite{Khapra17} generate a set of QA pairs given a KB entity.
They model the problem of QG as a sequence to sequence problem by converting all the KB entities to a set of keywords.
None of the previous work in QG from KBs addresses the question of generalizing to unseen predicates and entity types. \\
Textual information has been used before in Zero-Shot learning. \cite{Socher13} use information in pretrained word vectors for Zero-Shot visual object recognition.
\cite{Levy17} incorporate a natural language question into the relation query to tackle the Zero-Shot relation extraction problem.
Previous work in machine translation dealt with the rare or unseen word problem for translating names and numbers in text.
\cite{Luong15} propose a model that generates positional placeholders pointing to words in the source sentence and copies them to the target sentence (\textit{copy actions}).
\cite{Gulcehre16,Gu16} introduce separate trainable modules for copy actions to adapt to highly variable input sequences, for text summarization.
For text generation from tables, \cite{lebret2016} extend positional copy actions to copy values from fields in the given table.
For QG,~\cite{Serban16} use a placeholder for the subject entity in the question to generalize to unseen entities.
Their work is limited to unseen entities and does not study how they can generalize to unseen predicates and entity types.
\begin{figure*}[h]
\centering
\small
\includegraphics[width=0.75\textwidth]{./figures/model.pdf}
\caption{The proposed model for Question Generation. The model consists of a single fact encoder and $n$ textual context encoders, each consists of a separate GRU. At each time step $t$, two attention vectors generated from the two attention modules are fed to the decoder to generate the next word in the output question.}
\label{fig:model}
\end{figure*}
\section{Model}
Let $F=\{s,p,o\}$ be the input fact provided to our model consisting of a subject $s$, a predicate $p$ and an object $o$, and $C$ be the set of textual contexts associated to this fact.
Our goal is to learn a model that generates a sequence of $T$ tokens
$Y = y_1,y_2,\ldots,y_T$ representing a question about the subject $s$, where the object $o$ is the correct answer.
Our model approximates the conditional probability of the output question given an input fact $p(Y|F)$, to be the probability of the output question, given an input fact and the additional textual context $C$, modelled as follows:
\begin{align}
p(Y|F) &= \prod_{t=1}^{T}p(y_t|y_{<t}, F,C)
\end{align}
where $y_{<t}$ represents all previously generated tokens until time step $t$. %
Additional textual contexts are natural language representations of the triple that can be drawn from a corpus -- our model is generic to any textual contexts that can be additionally provided, though we describe in Section~\ref{subsec:textualcontexts} how to create such texts from Wikipedia.
Our model builds on the encoder-decoder architecture of~\cite{Sutskever14,Bahdanau14} with two encoding modules: a feed-forward architecture that encodes the input triple (sec. \ref{sec:fact-encoder}) and a set of recurrent neural networks (RNNs) that encode each textual context (sec. \ref{sec:textual-encoder}).
Our model has two attention modules~\cite{Bahdanau14}: one acts over the input triple and another acts over the input textual contexts (sec. \ref{sec:attention}).
The decoder (sec. \ref{sec:decoder}) is another RNN that generates the output question. At each time step, the decoder chooses to output either a word from the vocabulary or a special token indicating a copy action (sec. \ref{sec:copy-actions}) from any of the textual contexts.
\subsection{Fact Encoder}~\label{sec:fact-encoder}
Given an input fact $F=\{s,p,o\}$, let each of $e_s$, $e_p$ and $e_o$ be a 1-hot vector of size $K$.
The fact encoder encodes each 1-hot vector into a fixed size vector $h_s = \mathbf{E_f}\,e_s$, \enspace $h_p = \mathbf{E_f}\,e_p$ \enspace and $h_o = \mathbf{E_f}\,e_o$,
where $\mathbf{E_f} \in \mathbb{R}^{H_k \times K}$ is the KB embedding matrix, $H_k$ is the size of the KB embedding and $K$ is the size of the KB vocabulary.
The \emph{encoded fact} $ h_f \in \mathbb{R}^{3H_k}$ represents the concatenation of those three vectors and we use it to initialize the decoder.
\begin{align}\label{eq:fact-emb}
h_f = [h_s;\enspace h_p;\enspace h_o]
\end{align}
Following~\cite{Serban16}, we learn $\mathbf{E_{f}}$ using \textit{TransE}~\cite{Bordes15}. We fix its weights and do not allow their update during training time.
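Since $e_s$, $e_p$ and $e_o$ are 1-hot, the matrix products reduce to embedding lookups; a minimal sketch (toy dimensions, illustrative names):

```python
def encode_fact(E_f, s_id, p_id, o_id):
    """Multiplying a 1-hot vector by E_f selects a single embedding, so the
    fact encoder reduces to three lookups; h_f = [h_s; h_p; h_o] has size 3*H_k.
    E_f holds pretrained TransE embeddings and is kept frozen during training."""
    h_s, h_p, h_o = E_f[s_id], E_f[p_id], E_f[o_id]
    return h_s + h_p + h_o  # list concatenation

# toy embedding matrix with H_k = 2 (the real one uses H_k = 200)
E = [[1.0, 0.0], [0.0, 1.0], [2.0, 2.0]]
print(encode_fact(E, 0, 1, 2))
```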
\subsection{Textual Context Encoder}~\label{sec:textual-encoder}
Let $C = \{c_1, c_2, \ldots, c_n : c_j = (x_1^j, x_2^j, \ldots, x^j_{|c_j|})\}$ be the set of $n$ textual contexts, where $x_i^j$ represents the 1-hot vector of the $i^{th}$ token in the $j^{th}$ textual context $c_j$, and $|c_j|$ is the length of the $j^{th}$ context.
We use a set of $n$ Gated Recurrent Units (GRUs)~\cite{Cho2014} to encode each of the textual contexts separately:
%
\begin{align}\label{eq:gru-enc}
h_i^{c_j} = GRU_j\left (\mathbf{E_c}\,x_{i}^{j},\enspace h_{i-1}^{c_j} \right )
\end{align}
where $h_i^{c_j} \in \mathbb{R}^{H_c}$ is the hidden state of the GRU corresponding to $x_i^j$, of size $H_c$. $\mathbf{E_{c}}$ is the input word embedding matrix.
The \emph{encoded context} represents the encoding of all the textual contexts; it is calculated as the concatenation of all the final states of all the encoded contexts:
\begin{align}\label{eq:context-emb}
h_c &= [h^{c_1}_{|c_1|}; h^{c_2}_{|c_2|};\ldots; h^{c_n}_{|c_n|}].
\end{align}
\subsection{Decoder} \label{sec:decoder}
For the decoder we use another GRU with an attention mechanism~\cite{Bahdanau14}, in which the decoder hidden state $s_t\in\mathbb{R}^{H_{d}}$ at each time step $t$ is calculated as:
\begin{align}
s_{t} &= z_t \circ s_{t-1} + (1-z_t) \circ \tilde{s}_{t} \enspace,
\end{align}
where:
\small
\begin{align}
\tilde{s}_{t} &= \tanh\left(W\,E_{w}\,y_{t-1} + U [r_t \circ s_{t-1}] + A \, [a^f_t; a^c_t] \right) \\
z_t &= \sigma\left(W_z\,E_{w}\,y_{t-1} + U_z \, s_{t-1} + A_z \, [a^f_t; a^c_t] \right) \\
r_t &= \sigma\left(W_r\,E_{w}\,y_{t-1} + U_r \, s_{t-1} + A_r \, [a^f_t; a^c_t] \right)
\end{align}
\normalsize
$W,W_z,W_r \in \mathbb{R}^{m \times H_{d}}$, $U,U_z,U_r,A,A_z,A_r \in \mathbb{R}^{H_{d} \times H_{d}}$ are learnable parameters of the GRU.
$E_w \in \mathbb{R}^{m \times V}$ is the word embedding matrix, $m$ is the word embedding size and $H_d$ is the size of the decoder hidden state.
$a^f_t$, $a^c_t$ are the outputs of the fact attention and the context attention modules respectively, detailed in the following subsection. \\
To encourage the model to pair output words with words from the textual inputs, we couple the word embedding matrices of both the decoder $E_w$ and the textual context encoder $E_c$ (eq.(\ref{eq:gru-enc})).
We initialize them with GloVe embeddings~\cite{PenningtonSM14} and allow the network to tune them.\\
The first hidden state of the decoder $s_0 = [h_f;\,h_c]$ is initialized using a concatenation of the encoded fact~(eq.(\ref{eq:fact-emb})) and the encoded context~(eq.(\ref{eq:context-emb})).\\
At each time step $t$, after calculating the hidden state of the decoder, the conditional probability distribution over each token $y_t$ of the generated question is computed as the $softmax(W_{o}\,s_t)$ over all the entries in the output vocabulary, where $W_o \in \mathbb{R}^{V \times H_d}$ is the weight matrix of the output layer of the decoder.\\
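The gated update above can be sketched step by step in pure Python (toy dimensions; parameter shapes are chosen so the products are well-defined, and all names are illustrative):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def vadd(*vs):
    return [sum(xs) for xs in zip(*vs)]

def had(u, v):  # elementwise (Hadamard) product
    return [a * b for a, b in zip(u, v)]

def gru_step(params, y_emb, s_prev, attn):
    """One decoder step; attn is the concatenation [a_f; a_c] of the two
    attention outputs, and params = (W, U, A, Wz, Uz, Az, Wr, Ur, Ar)."""
    W, U, A, Wz, Uz, Az, Wr, Ur, Ar = params
    z = [sigmoid(x) for x in vadd(matvec(Wz, y_emb), matvec(Uz, s_prev), matvec(Az, attn))]
    r = [sigmoid(x) for x in vadd(matvec(Wr, y_emb), matvec(Ur, s_prev), matvec(Ar, attn))]
    s_tilde = [math.tanh(x) for x in vadd(matvec(W, y_emb),
                                          matvec(U, had(r, s_prev)),
                                          matvec(A, attn))]
    # s_t = z * s_prev + (1 - z) * s_tilde, elementwise
    return vadd(had(z, s_prev), had([1.0 - g for g in z], s_tilde))
```

With all parameters at zero, both gates evaluate to $0.5$ and the candidate state to $0$, so the new state is half the previous one, which the test below checks.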
\subsection{Attention} \label{sec:attention}
Our model has two attention modules: \\
\textbf{Triple attention} over the input triple to determine at each time step $t$ an attention-based encoding of the input fact $a^f_t \in \mathbb{R}^{H_k}$:
\begin{align}
a^f_t = \alpha_{s,t} \, h_{s} + \alpha_{p,t} \, h_p + \alpha_{o,t} \, h_o \enspace,
\end{align}
$\alpha_{s,t}, \alpha_{p,t}, \alpha_{o,t}$ are scalar values calculated by the attention mechanism to determine at each time step which of the encoded subject, predicate, or object the decoder should attend to. \\
\textbf{Textual contexts attention} over all the hidden states of all the textual contexts $a^c_t \in \mathbb{R}^{H_c}$:
\begin{align}
a^c_t = \sum_{i=1}^{|C|} \sum_{j=1}^{|c_i|} \alpha_{t,j}^{c_i} \, h_j^{c_i} \enspace,
\end{align}
$\alpha_{t,j}^{c_i}$ is a scalar value determining the weight of the $j^{th}$ word in the $i^{th}$ context $c^i$ at time step $t$.
Given a set of encoded input vectors $I = \{h_1,h_2,\ldots,h_k\}$ and the decoder's previous hidden state $s_{t-1}$, the attention mechanism calculates $\alpha_{t} = \alpha_{1,t},\ldots,\alpha_{k,t}$ as a vector of scalar weights, where each $\alpha_{i,t}$ determines the weight of its corresponding encoded input vector $h_i$.
\begin{align}
e_{i,t} &= \mathbf{v_a}^\top \enspace \tanh(\mathbf{W_a \, s_{t-1} + U_a \, h_i}) \\
\alpha_{i,t} &= \frac{\exp\left(e_{i,t}\right)}{\sum_{j=1}^{k} \exp\left(e_{j,t}\right)} \enspace,
\end{align} where
$\mathbf{v_a}, \mathbf{W_a}, \mathbf{U_a}$ are trainable parameters of the attention modules.
It is important to notice here that we encode each textual context separately using a different GRU, but we calculate an overall attention over all tokens in all textual contexts: at each time step the decoder should ideally attend to only one word from all the input contexts.
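The score-and-softmax computation above can be sketched as follows (pure Python, toy dimensions; returns both the weights and the attended vector):

```python
import math

def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def attention(s_prev, encoded, Wa, Ua, va):
    """Score each encoded vector against the previous decoder state,
    softmax the scores, and return (weights, weighted-sum context)."""
    Ws = matvec(Wa, s_prev)  # shared across all inputs
    scores = [sum(v * math.tanh(a + b)
                  for v, a, b in zip(va, Ws, matvec(Ua, h)))
              for h in encoded]
    m = max(scores)  # subtract the max for a numerically stable softmax
    exps = [math.exp(e - m) for e in scores]
    total = sum(exps)
    alphas = [e / total for e in exps]
    dim = len(encoded[0])
    context = [sum(a * h[d] for a, h in zip(alphas, encoded)) for d in range(dim)]
    return alphas, context

# the second input vector has a much larger score, so it gets most of the weight
alphas, ctx = attention([0.0], [[0.0], [10.0]], [[0.0]], [[1.0]], [1.0])
print(alphas)
```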
\input{figures/copyactions}
\subsection{Part-Of-Speech Copy Actions}\label{sec:copy-actions}
We follow the method of \cite{Luong15} by modeling all the copy actions at the data level through an annotation scheme.
This method treats the model as a black box, which makes it adaptable to any text generation model.
Instead of using positional copy actions, we use part-of-speech information to decide the alignment between the input and output texts.
Each word in every input textual context is replaced by a special token containing a combination of its context id (e.g. \texttt{C1}) and its POS tag (e.g. \texttt{NOUN}).
Then, if a word in the output question matches a word in a textual context, it is replaced with its corresponding tag as shown in Table~\ref{table:copyactions}. \\
Unlike \cite{Serban16,lebret2016}, we model the copy actions at both the input and the output levels.
Our model does not have the drawback of losing the semantic information when replacing words with generic placeholders, since we provide the model with the input triple through the fact encoder.
During inference the model chooses to either output words from the vocabulary or special tokens to copy from the textual contexts.
In a post-processing step those special tokens are replaced with their original words from the textual contexts.
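This post-processing step can be sketched as follows (illustrative names; the tag-to-word mapping is built while annotating the contexts, and the subject placeholder \texttt{[S]} is replaced separately by the subject label, as in the single-placeholder scheme of \cite{Serban16}):

```python
def postprocess(output_tokens, tag2word):
    """Map copy-action tags emitted by the decoder back to the source words
    of the textual contexts; tags with no source word are dropped."""
    restored = []
    for tok in output_tokens:
        if tok in tag2word:
            restored.append(tag2word[tok])
        elif tok.split("_")[0] in {"C1", "C2", "C3"}:  # unmatched copy tag
            continue
        else:
            restored.append(tok)
    return " ".join(restored)

print(postprocess("Which C1_NOUN manufactured [S] ?".split(),
                  {"C1_NOUN": "company"}))
```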
\section{Textual contexts dataset}
As a source of question paired with KB triples we use the SimpleQuestions dataset~\cite{Bordes15}.
It consists of 100K questions with their corresponding triples from Freebase, and was created manually through crowdsourcing.
When asked to form a question from an input triple, human annotators tend to focus mainly on expressing the predicate of the input triple.
For example, given a triple with the predicate \texttt{fb:spacecraft/manufacturer} the user may ask \textit{"What is the manufacturer of \texttt{[S]} ?"}.
Annotators may specify the entity type of the subject or the object of the triple: \textit{"What is the manufacturer of the \textbf{spacecraft} \texttt{[S]}?"} or \textit{"Which \textbf{company} manufactures \texttt{[S]}?"}.
Motivated by this example we chose to associate each input triple with three textual contexts of three different types.
The first is a phrase containing lexicalization of the predicate of the triple.
The second and the third are two phrases containing the entity type of the subject and the object of the triple.
In what follows we show the process of collection and preprocessing of those textual contexts.
\subsection{Collection of Textual Contexts}\label{subsec:textualcontexts}
We extend the set of triples given in the SimpleQuestions dataset by using the FB5M~\cite{Bordes15} subset of Freebase. As a source of text documents, we rely on Wikipedia articles.
\paragraph{Predicate textual contexts:}
In order to collect textual contexts associated with the SimpleQuestions triples, we follow the distant supervision setup for relation extraction~\cite{mintz_2009}.
The distant supervision assumption has been effective in creating training data for relation extraction and shown to be 87\% correct~\cite{riedel_2010} on Wikipedia text. \\
First, we align each triple in the FB5M KB to sentences in Wikipedia if the subject and the object of this triple co-occur in the same sentence.
We use a simple string matching heuristic to find entity mentions in text\footnote{
We map Freebase entities to Wikidata through the Wikidata property P646, then we extract their labels and aliases.
We use the Wikidata truthy dump:~\url{https://dumps.wikimedia.org/wikidatawiki/entities/}}.
Afterwards we reduce the sentence to the set of words that appear on the dependency path between the subject and the object mentions in the sentence.
We replace the positions of the subject and the object mentions with \texttt{[S]} and \texttt{[O]} to the keep track of the information about the direction of the relation.
The top occurring pattern for each predicate is associated to this predicate as its textual context.
Table~\ref{table:example-dep} shows examples of predicates and their corresponding textual context.
\paragraph{Sub-Type and Obj-Type textual contexts:}
We use the labels of the entity types as the sub-type and obj-type textual contexts.
We collect the list of entity types of each entity in the FB5M through the predicate \texttt{fb:type/instance}.
If an entity has multiple entity types we pick the entity type that is mentioned the most in the first sentence of each Wikipedia article.
Thus the textual contexts will opt for entity types that is more natural to appear in free text and therefore questions.
\input{figures/dep_examples.tex}
\subsection{Generation of Special tokens}
To generate the special tokens for copy actions~(sec.~\ref{sec:copy-actions}) we run POS tagging on each of the input textual contexts\footnote{For the predicate textual contexts we run pos tagging on the original text not the lexicalized dependency path}.
We replace every word in each textual context with a combination of its context id (e.g. \texttt{C1}) and its POS tag (e.g. \texttt{NOUN}).
If the same POS tag appears multiple times in the textual context, it is given an additional id (e.g. \texttt{C1\_NOUN\_2}).
If a word in the output question overlaps with a word in the input textual context, this word is replaced by its corresponding tag. \\
For sentence and word tokenization we use the Regex tokenizer from the NLTK toolkit~\cite{NLTK}; for POS tagging and dependency parsing we use the spaCy\footnote{\url{https://spacy.io/}} implementation.
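As an illustration, the special-token generation can be sketched as below, assuming each textual context is already available as (word, POS) pairs; the context id \texttt{C1} and the toy words and tags are hypothetical.

```python
# Sketch of the copy-token generation: every context word is mapped to
# context-id_POS (e.g. C1_NOUN), with a running counter for repeated POS
# tags (C1_NOUN_2, ...); overlapping question words become their tags.
def context_tokens(tagged, context_id):
    """Map each context word to its special token."""
    seen, mapping = {}, {}
    for word, pos in tagged:
        seen[pos] = seen.get(pos, 0) + 1
        tag = f"{context_id}_{pos}" if seen[pos] == 1 else f"{context_id}_{pos}_{seen[pos]}"
        mapping.setdefault(word.lower(), tag)
    return mapping

def replace_overlap(question, mapping):
    """Replace question words that also occur in the context by their tags."""
    return [mapping.get(w.lower(), w) for w in question]

ctx = [("city", "NOUN"), ("of", "ADP"), ("birth", "NOUN")]
m = context_tokens(ctx, "C1")
print(replace_overlap(["what", "city", "was", "he", "born", "in"], m))
# ['what', 'C1_NOUN', 'was', 'he', 'born', 'in']
```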
\section{Experiments}
\subsection{Zero-Shot Setups}~\label{sec:zeroshot-setup}
We develop three setups that follow the same procedure as~\cite{Levy17} for Zero-Shot relation extraction to evaluate how our model generalizes to: 1) unseen predicates, 2) unseen sub-types and 3) unseen obj-types. \\
For the unseen predicates setup we group all the samples in SimpleQuestions by the predicate of the input triple, and keep groups that contain at least 50 samples.
Afterwards we randomly split those groups into mutually exclusive train, validation and test sets containing 70\%, 10\% and 20\% of the groups, respectively.
This guarantees that if, for example, the predicate \texttt{fb:person/place\_of\_birth} appears at test time, the training and validation sets will not contain any input triples having this predicate.
We repeat this process to create 10 cross-validation folds; in our evaluation we report the mean and standard deviation of the results across those 10 folds.
While doing this we make sure that the number of samples in each fold -- not only the number of unique predicates -- follows the same 70\%, 10\%, 20\% distribution.
We repeat the same process for the subject entity types and object entity types (answer types) individually.
Similarly, in the unseen obj-type setup for example, if the question \textit{"Which \textbf{artist} was born in Berlin?"} appears in the test set, then no question in the training set has an answer entity of type \textbf{\textit{artist}}.
Table~\ref{tab:dataset-sizes} shows the mean number of samples, predicates, sub-types and obj-types across the 10 folds for each experiment setup.
\input{figures/dataset_size.tex}
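A minimal sketch of the unseen-predicate fold construction described above; the sample layout (each sample is a (triple, question) pair, with a triple being (subject, predicate, object)) is a hypothetical simplification of the actual dataset format.

```python
import random

# Sketch of the unseen-predicate setup: group samples by predicate, keep
# groups with >= 50 samples, and split the *groups* 70/10/20 so that no
# test predicate ever appears in training or validation.
def predicate_split(samples, seed=0, min_size=50):
    groups = {}
    for s in samples:                 # s = (triple, question); triple = (subj, pred, obj)
        groups.setdefault(s[0][1], []).append(s)
    preds = sorted(p for p, g in groups.items() if len(g) >= min_size)
    random.Random(seed).shuffle(preds)
    n = len(preds)
    train_p = preds[: int(0.7 * n)]
    valid_p = preds[int(0.7 * n): int(0.8 * n)]
    test_p = preds[int(0.8 * n):]
    pick = lambda ps: [s for p in ps for s in groups[p]]
    return pick(train_p), pick(valid_p), pick(test_p)
```

By construction, the predicate sets of the three splits are disjoint, which is the property the zero-shot evaluation relies on.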
\subsection{Baselines}~\label{sec:baselines}
\vspace{-15pt}
\paragraph{\texttt{SELECT}}
is a baseline built from~\cite{Serban16} and adapted for the zero-shot setup.
During test time given a fact $F$, this baseline picks a fact $F_c$ from the training set and outputs the question that corresponds to it.
When evaluating unseen predicates, $F_c$ is chosen to have the same answer type (obj-type) as $F$; when evaluating unseen sub-types or obj-types, $F_c$ and $F$ have the same predicate.
\paragraph{\texttt{R-TRANSE}}
is an extension that we propose for \texttt{SELECT}.
The input triple is encoded using the concatenation of the TransE embeddings of the subject, predicate and object.
At test time, \texttt{R-TRANSE} picks a fact from the training set that is the closest to the input fact using cosine similarity and outputs the question that corresponds to it.
We provide two versions of this baseline:
\textbf{\texttt{R-TRANSE}} which indexes and retrieves raw questions with only a single placeholder for the subject label, such as in~\cite{Serban16}.
And \textbf{\texttt{R-TRANSE\textsubscript{copy}}} which indexes and retrieves questions using our copy actions mechanism (sec.~\ref{sec:copy-actions}).
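A minimal sketch of the R-TRANSE retrieval step, assuming a dictionary of pre-trained TransE embeddings; the toy entity names and vectors below are hypothetical.

```python
import numpy as np

# Sketch of the R-TRANSE baseline: a fact is encoded by concatenating the
# TransE embeddings of its subject, predicate and object; at test time the
# nearest training fact under cosine similarity supplies the question.
def encode(fact, emb):                       # fact = (subj, pred, obj)
    return np.concatenate([emb[x] for x in fact])

def retrieve(test_fact, train_facts, train_questions, emb):
    q = encode(test_fact, emb)
    index = np.stack([encode(f, emb) for f in train_facts])
    sims = index @ q / (np.linalg.norm(index, axis=1) * np.linalg.norm(q))
    return train_questions[int(np.argmax(sims))]
```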
\paragraph{\texttt{IR}}
is an information retrieval baseline. Information retrieval has been used before as a baseline for QG from text input~\cite{Rush15,Du17}.
We rely on the textual context of each input triple as the search keyword for retrieval.
First, the IR baseline encodes each question in the training set as a vector of TF-IDF weights~\cite{joachims_97} and then does dimensionality reduction through LSA~\cite{halko_svd_2011}.
At test time the textual context of the input triple is converted into a dense vector using the same process and then the question with the closest cosine distance to the input is retrieved.
We provide two versions of this baseline: \textbf{\texttt{IR}} on raw text and \textbf{\texttt{IR\textsubscript{copy}}} on text with our placeholders for copy actions.
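A minimal numpy-only sketch of the IR baseline (TF-IDF weighting, truncated SVD for LSA, cosine retrieval); a production setup would use standard library implementations, and the toy questions are hypothetical.

```python
import numpy as np

# Sketch of the IR baseline: TF-IDF over the training questions, LSA via a
# truncated SVD, then cosine retrieval of the training question closest to
# the encoded textual context of the input triple.
def tfidf_matrix(docs, vocab, idf=None):
    tf = np.array([[d.count(w) for w in vocab] for d in docs], float)
    if idf is None:                              # fit idf on the training docs
        df = (tf > 0).sum(axis=0)
        idf = np.log(len(docs) / np.maximum(df, 1)) + 1.0
    return tf * idf, idf

def lsa_retrieve(train_questions, test_context, k=2):
    docs = [q.lower().split() for q in train_questions]
    vocab = sorted({w for d in docs for w in d})
    X, idf = tfidf_matrix(docs, vocab)
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    proj = Vt[:k]                                # top-k LSA directions
    Z = X @ proj.T                               # training questions in LSA space
    z, _ = tfidf_matrix([test_context.lower().split()], vocab, idf)
    z = (z @ proj.T).ravel()
    sims = Z @ z / (np.linalg.norm(Z, axis=1) * np.linalg.norm(z) + 1e-12)
    return train_questions[int(np.argmax(sims))]
```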
\paragraph{\texttt{Encoder-Decoder.}}
Finally, we compare our model to the Encoder-Decoder model with a single placeholder, the best performing model from~\cite{Serban16}.
We initialize the encoder with TransE embeddings and the decoder with GloVe word embeddings.
Although this model was not originally built to generalize to unseen predicates and entity types, it has some generalization ability thanks to the information encoded in the pre-trained embeddings.
Pretrained KB terms and word embeddings encode relations between entities or between words as translations in the vector space.
Thus the model might be able to map new classes or predicates in the input fact to new words in the output question.
\input{figures/eval_unseenpredicates}
\subsection{Training \& Implementation Details}
To train the neural network models we optimize the negative log-likelihood of the training data with respect to all the model parameters. For that we use the RMSProp optimization algorithm with a decreasing learning rate of~$0.001$, mini-batch size~$=200$, and clipping gradients with norms larger than $0.1$.
We use the same vocabulary for both the textual context encoders and the decoder outputs. We limit our vocabulary to the top $30,000$ words including the special tokens.
For the word embeddings we chose GloVe~\cite{PenningtonSM14} pretrained embeddings of size $100$.
We train TransE embeddings of size $H_k=200$, on the FB5M dataset~\cite{Bordes15} using the TransE model implementation from~\cite{Lin15}.
We set GRU hidden size of the decoder to $H_d=500$, and textual encoder to $H_c=200$.
The network hyperparameters are set with respect to the final BLEU-4 score on the validation set.
All neural networks are implemented using Tensorflow~\cite{tensorflow}. %
All experiments and models source code are publicly available\footnote{\url{https://github.com/NAACL2018Anonymous/submission}} for the sake of reproducibility.
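For illustration, the optimization recipe above (RMSProp with learning rate $0.001$ and gradient-norm clipping at $0.1$) can be sketched in plain numpy; the actual training uses Tensorflow's built-in implementations, and this toy version omits the learning-rate decay schedule.

```python
import numpy as np

# Plain-numpy sketch of the described optimizer: clip each gradient to
# norm 0.1, then apply an RMSProp update with learning rate 1e-3.
def clip_by_norm(g, max_norm=0.1):
    norm = np.linalg.norm(g)
    return g if norm <= max_norm else g * (max_norm / norm)

class RMSProp:
    def __init__(self, lr=1e-3, decay=0.9, eps=1e-8):
        self.lr, self.decay, self.eps, self.ms = lr, decay, eps, {}

    def step(self, params, grads):
        for k, g in grads.items():
            g = clip_by_norm(g)
            # Exponential moving average of the squared gradient.
            self.ms[k] = self.decay * self.ms.get(k, 0.0) + (1 - self.decay) * g * g
            params[k] -= self.lr * g / (np.sqrt(self.ms[k]) + self.eps)
```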
\subsection{Automatic Evaluation Metrics}
To evaluate the quality of the generated question, we compare the original labeled questions by human annotators to the ones generated by each variation of our model and the baselines.
We rely on a set of well established evaluation metrics for text generation: BLEU-1, BLEU-2, BLEU-3, BLEU-4~\cite{bleu}, METEOR~\cite{meteor} and ROUGE\textsubscript{L}~\cite{lin2004rouge}.
\\
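For reference, a minimal smoothed, sentence-level BLEU sketch (uniform weights, brevity penalty); the scores reported in the paper are computed with the standard implementations of these metrics.

```python
import math
from collections import Counter

# Toy sentence-level BLEU-N: geometric mean of modified n-gram precisions
# (floor-smoothed to avoid log(0)) times a brevity penalty.
def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(candidate, reference, max_n=4):
    precisions = []
    for n in range(1, max_n + 1):
        cand, ref = ngrams(candidate, n), ngrams(reference, n)
        overlap = sum(min(c, ref[g]) for g, c in cand.items())
        total = max(sum(cand.values()), 1)
        precisions.append(max(overlap, 1e-9) / total)
    bp = min(1.0, math.exp(1 - len(reference) / len(candidate)))
    return bp * math.exp(sum(math.log(p) for p in precisions) / max_n)
```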
\subsection{Human Evaluation}
Metrics for evaluating text generation such as BLEU and METEOR give a measure of how close the generated questions are to the target correct labels.
However, they might not be able to evaluate directly whether the predicate in the question was expressed in the text or not.
Thus we run two further human evaluations to directly measure the following. \\
\textbf{\textit{Predicate identification}}: annotators were asked to indicate whether the generated question contains the given predicate in the fact or not, either directly or implicitly. \\
\textbf{\textit{Naturalness}}: following~\cite{Ngomo13}, we measure the comprehensibility and readability of the generated questions.
Each annotator was asked to rate each generated question using a scale from $1$ to $5$, where:
(5) perfectly clear and natural,
(3) artificial but understandable, and (1) completely not understandable.
We run our studies on $100$ randomly sampled input facts alongside their corresponding questions generated by each of the systems, with the help of $4$ annotators.
\section{Results \& Discussion}
\paragraph{Automatic Evaluation} Table~\ref{table:eval-pred} shows results of our model compared to all other baselines across all evaluation metrics.
Our model that encodes the KB fact and textual contexts achieves a significant improvement over all the baselines in all evaluation metrics, scoring $\mathbf{+2.04}$ BLEU-4 points over the Encoder-Decoder baseline.
Incorporating the part-of-speech copy actions improves this further, to \textbf{$\mathbf{+2.39}$} BLEU-4 points. \\
Among all baselines, the Encoder-Decoder baseline and the R-TRANSE baseline performed the best.
This shows that TransE embeddings encode intra-predicate and intra-class-type information well, and can generalize to unseen predicates and class types to some extent.
Similar patterns can be seen in the evaluation on unseen sub-types and obj-types (Table~\ref{table:eval-type}). Our model with copy actions was able to outperform all the other systems.
Most systems score significantly higher on BLEU-4 in these two tasks than when generalizing to unseen predicates ($+12$ and $+8$ BLEU-4 points, respectively).
This indicates that these tasks are relatively easier, and hence our models achieve smaller improvements over the baselines.
\input{figures/qualitative_eval.tex}
\input{figures/results_examples.tex}
\paragraph{Human Evaluation}
Table~\ref{table:human-eval} shows how well different variations of our system can express the unseen predicate in the target question, in comparison to the Encoder-Decoder baseline. \\
Our proposed copy actions yield a significant improvement in the identification of unseen predicates, up to $\mathbf{+40\%}$ over the best performing baseline and over our model without copy actions. \\
By examining some of the generated questions~(Table~\ref{table:results-example}) we see that models without copy actions can only generalize to unseen predicates that have a very similar Freebase predicate in the training set, e.g., \texttt{fb:tv\_program/language} and \texttt{fb:film/language}: if one of those predicates exists in the training set, the model can reuse its questions for the other at test time. \\
Copy actions from the sub-type and the obj-type textual contexts can generalize to a great extent to unseen predicates because of the overlap between the predicate and the object type in many questions (Example 2 Table~\ref{table:results-example}). %
Adding the predicate context to our model has enhanced model performance for expressing unseen predicates by $+9\%$ (Table~\ref{table:human-eval}).
However, we can see that it affects the naturalness of the questions.
The post-processing step does not take into consideration that some verbs and prepositions do not fit the sentence structure, or that some words already exist in the question (Example 4, Table~\ref{table:results-example}).
This happens less often with copy actions from the sub-type and obj-type contexts, because those contexts mainly consist of nouns, which are more interchangeable than verbs or prepositions.
A post-processing step that reformulates the question, instead of copying directly from the input source, is left for future work.
\section{Conclusion}
In this paper we presented a new neural model for question generation from knowledge bases, with a main focus on predicates, subject types and object types that were not seen during training (Zero-Shot Question Generation). %
Our model is based on an encoder-decoder architecture that leverages textual contexts of triples, two attention layers for triples and textual contexts and finally a part-of-speech copy action mechanism. %
Our method exhibits significantly better results for Zero-Shot QG than a set of strong baselines, including the state-of-the-art model for question generation from KBs. %
Additionally, a complementary human evaluation shows that the improvement brought by our part-of-speech copy action mechanism is even more significant than the automatic evaluation suggests.
The source code and the collected textual contexts are provided for the community: \url{https://github.com/NAACL2018Anonymous/submission}
\section{Introduction}
Multiple Object Tracking (\mot) denotes the process of successively determining the number and states of multiple dynamic objects based on noisy sensor measurements. Tracking is a key technology for many technical applications in areas such as robotics, surveillance, autonomous driving, automation, medicine, and sensor networks. Extended object tracking is defined as \mot where each object generates multiple measurements per time step and the measurements are spatially structured on the object, see \cite{GranstromBR:2017}.
Extended object tracking is applicable in many different scenarios, e.g., environment perception for autonomous vehicles using camera, lidar and automotive radar. The multiple measurements per object and time step create a possibility to estimate the object extent, in addition to the position and the kinematic properties such as velocity and heading. This estimation requires an object state space model, including modelling of the object dynamics and the measurement process. Extended object models include the Random Matrix model \cite{Koch:2008,FeldmannFK:2011}, the Random Hypersurface model \cite{BaumH:2014}, and Gaussian Process models \cite{WahlstromO:2015}. A comprehensive overview of extended object tracking can be found in \cite{GranstromBR:2017}.
In this paper we focus on the Random Matrix model, also known as the Gaussian inverse Wishart (\giw) model. The random matrix model was originally proposed by Koch \cite{Koch:2008}, and is an example of a spatial model. In this model the shape of the object is assumed to be elliptic. The ellipse shape is simple but still versatile, and the random matrix model has been integrated into many different multiple extended object tracking frameworks \cite{WienekeK:2010,WienekeD:2011,WienekeK:2012,GranstromO:2012a,LundquistGO:2013,BeardRGVVS:2016,GranstromFS:2016fusion,SchusterRW:JAIF,SchusterRW:2015,VivoneB:2016}. Indeed, the random matrix model is applicable in many real scenarios, e.g., pedestrian tracking using video \cite{WienekeK:2010,WienekeD:2011,WienekeK:2012} or lidar \cite{GranstromO:2012a} and tracking of boats and ships using marine radar \cite{GranstromNBLS:2014,GranstromNBLS:2015_TGARS,VivoneB:2016,VivoneBGW:2016,VivoneBGNC:2015_ConvMeas,VivoneBGNC:2017,SchusterRW:2015}.
The focus of this paper is on Bayesian smoothing for the random matrix model. A preliminary version of this work was presented in \cite{Bramstang:2018}. This paper is a significant extension of \cite{Bramstang:2018}, and presents the following contributions:
\begin{itemize}
\item Closed form smoothing expressions for the conditional \giw model from \cite{Koch:2008}.
\item Closed form smoothing expressions for the factorized \giw model from \cite{FeldmannFK:2011}.
\item A simulation study that compares the derived smoothers to both prediction and filtering.
\end{itemize}
As a minor contribution, a closed form expression for the random matrix prediction from \cite{GranstromO:2014} is presented.
The rest of the paper is organized as follows. A problem formulation is given in the next section. In Section~\ref{sec:randomMatrixReview} a review of the random matrix model is given. Smoothing for the conditional \giw model is presented in Section~\ref{sec:condSmoothing}; smoothing for the factorised \giw model is presented in Section~\ref{sec:factSmoothing}. Results from a simulation study are presented in Section~\ref{sec:simulationStudy}. Concluding remarks are given in Section~\ref{sec:Conclusions}.
\begin{table}
\caption{Notation}
\label{tab:notation}
\vspace{-5mm}
\rule[0pt]{\columnwidth}{1pt}
\begin{itemize}
\item $\mathbb{R}^{n}$: space of vectors of dimension $n$
\item $\mathbb{I}^{n\times n}$: space of non-singular $n\times n$ matrices
\item $\mathbb{S}_{+}^{d}$: space of positive semi-definite $d\times d$ matrices
\item $\mathbb{S}_{++}^{d}$: space of positive definite $d\times d$ matrices
\item $\Id$: unit matrix of size $d\times d$
\item $\mathbf{0}_{m\times n}$: all-zero $m\times n$ matrix
\item $\otimes$: Kronecker product
\item $|\cdot |$: set cardinality
\item $\diag{\cdot}$: diagonal matrix
\item $\mathbb{E}[\cdot]$: expected value
\item $ \Npdfbig{\sx}{m}{P}$: Gaussian pdf for random vector $\sx\in\mathbb{R}^{n_x}$ with mean vector $m\in\mathbb{R}^{n_x}$ and covariance matrix $P\in\mathbb{S}_{+}^{n_x}$
\item $\IWishpdf{\ext}{v}{V}$: inverse Wishart pdf for random matrix $\ext\in\mathbb{S}_{++}^{d}$ with degrees of freedom $v>2d$ and parameter matrix $V\in\mathbb{S}_{++}^{d}$, see, e.g., \cite[Def. 3.4.1]{GuptaN:2000}
\item $\Wishpdf{\ext}{v}{V}$: Wishart pdf for random matrix $\ext\in\mathbb{S}_{++}^{d}$ with degrees of freedom $v\geq d$ and parameter matrix $V\in\mathbb{S}_{++}^{d}$, see, e.g., \cite[Def. 3.2.1]{GuptaN:2000}
\item $\mathcal{GB}_{d}^{II}\left(\ext \ ; \ a,\ b,\ \Omega,\ \Psi\right)$: Generalized matrix variate beta type II pdf for random matrix $\ext\in\mathbb{S}_{++}^{d}$ with degrees of freedom $a>\frac{d-1}{2}$, $b>\frac{d-1}{2}$, parameter matrix $\Psi\in\mathbb{S}_{+}^{d}$, and parameter matrix $\Omega$ such that $(\Omega-\Psi)\in\mathbb{S}_{++}^{d}$, see, e.g., \cite[Def. 5.2.4]{GuptaN:2000}
\end{itemize}
\rule[0pt]{\columnwidth}{1pt}
\end{table}
\section{Problem formulation}
Let $\xi_{k} $ denote the extended object state at time $k$, let $\setZ_{k}$ denote the set of measurements at time step $k$, and let $\setZ_{1:k}$ denote the sets of measurements from time $1$ up to, and including, time $k$. Bayesian extended object filtering builds upon two steps, the Chapman-Kolmogorov prediction
\begin{align}
p(\xi_{k+1} | \setZ_{1:k}) & = \int p(\xi_{k+1}|\xi_{k}) p(\xi_{k} | \setZ_{1:k}) \diff \xi_{k}
\label{eq:SmoothingForwards}
\end{align}
where $p(\xi_{k+1}|\xi_{k})$ is the transition density, and the Bayes update
\begin{align}
p(\xi_{k+1} | \setZ_{1:k+1}) & = \frac{p(\setZ_{k+1} | \xi_{k+1}) p(\xi_{k+1} | \setZ_{1:k})}{ \int p(\setZ_{k+1} | \xi_{k+1}) p(\xi_{k+1} | \setZ_{1:k}) \diff \xi_{k+1}}
\end{align}
where $p(\setZ_{k+1} | \xi_{k+1})$ is the measurement likelihood. The focus of this paper is on Bayesian extended object smoothing,
\begin{align}
p(\xi_{k} | \setZ_{1:K}) & = p(\xi_{k} | \setZ_{1:k}) \int \frac{p(\xi_{k+1}|\xi_{k}) p(\xi_{k+1}|\setZ_{1:K})}{p(\xi_{k+1}|\setZ_{1:k})} \diff \xi_{k+1},
\label{eq:SmoothingBackwards}
\end{align}
where $K$ is the final time step. For the random matrix model, the Chapman-Kolmogorov prediction and Bayes update have been covered extensively in previous literature, see, e.g., \cite{Koch:2008,FeldmannFK:2011,GranstromO:2014,LanRL:2012} for the prediction, and, e.g., \cite{Koch:2008,FeldmannFK:2011,Orguner:2012,ArdeshiriOG:2015,SaritasO:2018,LanRL:2012} for the update. In this paper, we focus on Bayesian extended object smoothing. In previous literature, smoothing is only discussed briefly in \cite[Sec. 3.F]{Koch:2008}, and complete details are not given.
Bayesian filtering and smoothing for the random matrix model is an example of \emph{assumed density filtering}: the functional form of the state density is to be preserved in the prediction and the update. It is therefore necessary that Bayesian smoothing also preserves the functional form of the extended object state density. Two different assumed state densities can be found in the literature: the conditional Gaussian inverse Wishart \cite{Koch:2008}, and the factorized Gaussian inverse Wishart \cite{FeldmannFK:2011}.
The problem considered in this paper is to use the Bayesian smoothing equation \eqref{eq:SmoothingBackwards} to compute the smoothing \giw parameters for both the conditional model and the factorized model.
\section{Review of random matrix model}
\label{sec:randomMatrixReview}
In this section we give a brief review of the random matrix model; a longer review can be found in \cite[Sec. 3.A]{GranstromBR:2017}.
In the random matrix model \cite{Koch:2008,FeldmannFK:2011}, the extended object state is a tuple $\xi_{k} = \left(\sx_{k},\ext_{k}\right) \in \mathbb{R}^{n_x} \times \mathbb{S}_{++}^{d}$. The vector $\sx_{k} \in \mathbb{R}^{n_x}$ represents the object's position and its motion properties, such as velocity, acceleration, and turn-rate. The matrix $\ext_{k} \in \mathbb{S}_{++}^{d}$ represents the object's extent, where $d$ is the dimension of the object; $d=2$ for tracking with 2D position and $d=3$ for tracking with 3D position. The matrix $\ext_{k}$ is modelled as being symmetric and positive definite, which means that the object shape is approximated by an ellipse.
In the literature, there are two alternative models for the extended object state density, the conditional and the factorised. In the conditional model, first presented in \cite{Koch:2008}, the following state density is used,
\begin{subequations}
\begin{align}
p(\xi_{k} | \setZ_{1:\ell}) = & p\left(\sx_{k}|\ext_{k},\setZ_{1:\ell}\right)p\left(\ext_{k}|\setZ_{1:\ell}\right) \label{eq:conditionalDensityGeneral} \\
= & \Npdfbig{\sx_{k}}{m_{k|\ell}}{P_{k|\ell}\otimes\ext_{k}} \nonumber \\
& \times \IWishpdf{\ext_{k}}{v_{k|\ell}}{V_{k|\ell}}, \label{eq:conditionalGIW}%
\end{align}%
\end{subequations}%
where $m_{k|\ell}\in\mathbb{R}^{n_x}$, $P_{k|\ell}\in\mathbb{S}_{+}^{s}$, $v_{k|\ell}>2d$, $V_{k|\ell}\in\mathbb{S}_{++}^{d}$, and $s=\frac{n_x}{d}$. In this model, the random vector $\sx$ consists of a $d$-dimensional spatial component (the position) and its derivatives (velocity, acceleration, etc.), see \cite[Sec. 3]{Koch:2008}. Thus, $s-1=\frac{n_x}{d}-1$ describes up to which derivative the kinematics are described, see \cite[Sec. 3]{Koch:2008}.
In the factorised model, first presented in \cite{FeldmannFK:2011}, the following state density is used,
\begin{subequations}
\begin{align}
p(\xi_{k} | \setZ_{1:\ell}) = & p\left(\sx_{k}|\setZ_{1:\ell}\right)p\left(\ext_{k}|\setZ_{1:\ell}\right) \\
= & \Npdfbig{\sx_{k}}{m_{k|\ell}}{P_{k|\ell}} \IWishpdf{\ext_{k}}{v_{k|\ell}}{V_{k|\ell}},
\end{align}%
\label{eq:factorizedGIW}%
\end{subequations}%
where $m_{k|\ell}\in\mathbb{R}^{n_x}$, $P_{k|\ell}\in\mathbb{S}_{+}^{n_x}$, $v_{k|\ell}>2d$, and $V_{k|\ell}\in\mathbb{S}_{++}^{d}$. In this model, the random vector $\sx$ consists of a $d$-dimensional spatial component (the position) and additional motion parameters; note that, in contrast to the conditional model, here the motion parameters are not restricted to being derivatives of the spatial component, and non-linear dynamics can be modelled, see further in \cite{FeldmannFK:2011}.
The random matrix transition density can be expressed as
\begin{subequations}
\begin{align}
p(\xi_{k+1}|\xi_{k}) = & p \left(\sx_{k+1},\ext_{k+1}|\sx_{k},\ext_{k}\right) \\
= & p \left(\sx_{k+1}|\ext_{k+1},\sx_{k},\ext_{k}\right) p \left(\ext_{k+1}|\sx_{k},\ext_{k}\right) \\
= & p \left(\sx_{k+1}|\ext_{k+1},\sx_{k}\right) p \left(\ext_{k+1}|\sx_{k},\ext_{k}\right) \label{eq:randomMatrixTransitionDensity}
\end{align}
\end{subequations}
where the last equality follows from a Markov assumption, see \cite{Koch:2008}. The random matrix measurement likelihood can be expressed in general form as
\begin{align}
p(\setZ_{k} | \xi_{k}) \propto \prod_{\sz\in\setZ_{k}} p(\sz | \sx_k,\ext_k)
\end{align}
Note that the modelling of the extended object measurement set cardinality is outside the scope of this work, see \cite[Sec. 2.C]{GranstromBR:2017} for an overview of different models for the number of measurements.
\section{Conditional model smoothing}
\label{sec:condSmoothing}
In the conditional model, we have conditional Gaussian inverse Wishart densities, cf. \eqref{eq:conditionalGIW},
and under assumed density filtering we seek a smoothed density of the same form, i.e.,
\begin{subequations}
\begin{align}
p(\xi_{k} | \setZ_{1:K}) = & p(\sx_{k}|\ext_{k},\setZ_{1:K})p(\ext_{k} | \setZ_{1:K}) \\
= & \Npdfbig{\sx_{k}}{m_{k|K}}{P_{k|K}\otimes\ext_{k}} \nonumber \\
& \times \IWishpdf{\ext_{k}}{v_{k|K}}{V_{k|K}}.
\end{align}
\label{eq:condSmoothedGIW}
\end{subequations}
\subsection{Assumptions and modelling}
The following assumptions are made for the conditional \giw model, see \cite[Sec. 2]{Koch:2008}.
\begin{assumption}\label{ass:ExtentTransitionDensity}
The time evolution of the extent state is assumed independent of the kinematic state,
\begin{align}
p\left(\ext_{k+1}|\sx_{k},\ext_{k}\right) = p\left(\ext_{k+1}|\ext_{k}\right).
\end{align}
\hfill$\square$
\end{assumption}
\begin{assumption}\label{ass:SlowlyChangingExtent}
The extent changes slowly with time, $\ext_{k+1} \approx \ext_{k}$, such that for the kinematic state, conditioned on the extent state, the following holds,
\begin{align}
p(\sx_{k} | \ext_{k}) & \approx p(\sx_{k} | \ext_{k+1}), \\
p(\sx_{k+1} | \ext_{k+1}) & \approx p(\sx_{k+1} | \ext_{k}), \\
p\left(\sx_{k+1}|\ext_{k+1},\sx_{k}\right) & \approx p\left(\sx_{k+1}|\ext_{k},\sx_{k}\right).
\end{align}
\hfill$\square$
\end{assumption}
The validity of Assumptions~\ref{ass:ExtentTransitionDensity} and \ref{ass:SlowlyChangingExtent} is discussed in \cite{Koch:2008}.
In the conditional random matrix model, the transition density \eqref{eq:randomMatrixTransitionDensity} is Gaussian-Wishart, see \cite[Sec. 3.A/B]{Koch:2008},
\begin{subequations}
\begin{align}
p(\xi_{k+1}|\xi_{k})
\approx & p\left(\sx_{k+1}|\ext_{k+1},\sx_{k}\right)p\left(\ext_{k+1}|\ext_{k}\right) \label{eq:condTransitionDensityGeneral} \\
= & \Npdfbig{\sx_{k+1}}{(F_k\otimes \mathbf{I}_{d})\sx_{k}}{D_k \otimes \ext_{k+1}} \\
& \times \Wishpdf{\ext_{k+1}}{n_{k}}{\frac{\ext_{k}}{n_{k}}} \nonumber
\end{align}
where the $s\times s$ matrix $F_k$ is the motion model, the $s\times s$ matrix $D_k$ is the process noise, and the degrees of freedom $n_k \geq d$ govern the uncertainty of the time evolution of the extent. This transition density was generalised by \cite{LanRL:2012} by introducing a $d\times d$ parameter matrix $A$ for the extent transition,
\begin{align}
p(\xi_{k+1}|\xi_{k}) \approx & \Npdfbig{\sx_{k+1}}{(F_k\otimes \mathbf{I}_{d})\sx_{k}}{D_k \otimes \ext_{k+1}} \label{eq:condTransitionDensity} \\
& \times \Wishpdf{\ext_{k+1}}{n_{k}}{\frac{A\ext_{k} A^{\tp}}{n_{k}}} \nonumber
\end{align}
\end{subequations}
In the remainder of the paper, we consider this generalised transition density. The measurement model is \cite[Sec. 3.D]{Koch:2008}
\begin{align}
p(\sz | \sx_k,\ext_k) = \Npdfbig{\sz}{\left(H_k \otimes \mathbf{I}_d \right)\sx_{k}}{\ext_{k}},
\label{eq:condMeasurementModel}
\end{align}
where the $1 \times s$ matrix $H_k$ is the measurement model.
\begin{table}[t]
\caption{Conditional model: prediction}
\label{tab:condPrediction}
\vspace{-5mm}
\rule[0pt]{\columnwidth}{1pt}
\begin{align*}
m_{k+1|k} &= \left(F_{k}\otimes\Id\right) m_{k|k} \\
P_{k+1|k} &= F_{k} P_{k|k} F_{k}^{\tp} + D \\
v_{k+1|k} &= d+1 + \left(1 + \frac{v_{k|k}-2d-2}{n}\right)^{-1}(v_{k|k}-d-1) \\
V_{k+1|k} &= \left(1 + \frac{v_{k|k}-d-1}{n-d-1}\right)^{-1} A V_{k|k} A^{\tp}
\end{align*}
\rule[0pt]{\columnwidth}{1pt}
\end{table}
\begin{table}[t]
\caption{Conditional model: update}
\label{tab:condUpdate}
\vspace{-5mm}
\rule[0pt]{\columnwidth}{1pt}
\begin{align*}
\begin{array}{rcl}
m_{k|k} &= & m_{k|k-1} + (K \otimes \mathbf{I}_{d}) \varepsilon \\
P_{k|k} &= & P_{k|k-1} - K S K^{\tp} \\
v_{k|k} &= & v_{k|k-1} + |\setZ_{k}| \\
V_{k|k} &= & V_{k|k-1} + N + Z \\
\varepsilon &= & \bar{\sz} - (H \otimes \mathbf{I}_{d}) m_{k|k-1}^{}\\
\bar{\sz} &= & \frac{1}{|\setZ_{k}|}\sum_{\sz \in \setZ_{k}}{\sz} \\
Z &= & \sum_{\sz\in \setZ_{k}} \left(\sz - \bar{\sz}\right)\left(\sz - \bar{\sz}\right)^{\tp} \\
S &= & H P_{k|k-1} H^{\tp} + \frac{1}{|\setZ_{k}|}\\
K &= & P_{k|k-1} H^{\tp} S^{-1}\\
N^{} &= & S^{-1} \varepsilon \varepsilon^{\tp}
\end{array}
\end{align*}
\rule[0pt]{\columnwidth}{1pt}
\end{table}
\begin{table}[t]
\caption{Conditional model: smoothing}
\label{tab:condSmoothing}
\vspace{-5mm}
\rule[0pt]{\columnwidth}{1pt}
\begin{align*}
\begin{array}{rcl}
m_{k|K} & = & m_{k|k} + \left(G \otimes \Id \right) \left( m_{k+1|K} - m_{k+1|k}\right) \\
P_{k|K} & = &P_{k|k} - G\left( P_{k+1|k} - P_{k+1|K} \right) G^{\tp} \\
v_{k|K} & = & v_{k|k} + \eta^{-1} \left(v_{k+1|K}-v_{k+1|k} - \frac{2(d+1)^{2}}{n}\right) \\
V_{k|K} & = & V_{k|k} + \eta^{-1} A^{-1} \left(V_{k+1|K} - V_{k|k}\right) (A^{-1})^{\tp} \\
G & = & P_{k|k} F_k^{\tp} P_{k+1 | k}^{-1} \\
\eta & = & 1 + \frac{v_{k+1|K}-v_{k|k}-3(d+1)}{n}
\end{array}
\end{align*}
\rule[0pt]{\columnwidth}{1pt}
\end{table}
\subsection{Prediction, update, and smoothing}
The prediction and the update for the conditional model are reproduced in Table~\ref{tab:condPrediction} and in Table~\ref{tab:condUpdate}, respectively.
The smoothing is given in the following theorem.
\begin{theorem}\label{thm:condSmoothing}
Let the densities $p(\xi_{k} | \setZ_{1:k})$, $p(\xi_{k+1} | \setZ_{1:K})$ and $p(\xi_{k+1} | \setZ_{1:k})$ be conditional Gaussian inverse Wishart \eqref{eq:conditionalGIW}, and let the transition density be Gaussian Wishart \eqref{eq:condTransitionDensity}. The smoothed density $p(\xi_{k} | \setZ_{1:K})$, see \eqref{eq:SmoothingBackwards}, is conditional Gaussian inverse Wishart \eqref{eq:condSmoothedGIW}, with parameters $(m_{k|K},P_{k|K},v_{k|K},V_{k|K})$ given in Table~\ref{tab:condSmoothing}.\hfill$\square$
\end{theorem}
The proof of Theorem~\ref{thm:condSmoothing} is given in Appendix~\ref{app:cond_model_smoothing}.
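For illustration, the conditional-model recursions of Tables~\ref{tab:condPrediction}--\ref{tab:condSmoothing} can be transcribed to numpy for $d=2$ and $s=2$ (position and velocity). This is a toy transcription of the table formulas under a fixed extent transition matrix $A$, not an optimized implementation; the numerical values in the usage below are arbitrary.

```python
import numpy as np

# Conditional GIW recursions (Tables "prediction", "update", "smoothing"),
# for d = 2 and s = 2, i.e., state m = [px, py, vx, vy].
d = 2

def predict(m, P, v, V, F, D, n, A):
    mp = np.kron(F, np.eye(d)) @ m
    Pp = F @ P @ F.T + D
    vp = d + 1 + (v - d - 1) / (1 + (v - 2 * d - 2) / n)
    Vp = A @ V @ A.T / (1 + (v - d - 1) / (n - d - 1))
    return mp, Pp, vp, Vp

def update(m, P, v, V, H, Z):
    W = len(Z)
    zbar = np.mean(Z, axis=0)
    eps = zbar - np.kron(H, np.eye(d)) @ m
    S = (H @ P @ H.T).item() + 1.0 / W       # scalar, since H is 1 x s
    K = P @ H.T / S                          # s x 1 gain
    mu = m + np.kron(K, np.eye(d)) @ eps
    Pu = P - S * (K @ K.T)
    scatter = sum(np.outer(z - zbar, z - zbar) for z in Z)
    Vu = V + np.outer(eps, eps) / S + scatter
    return mu, Pu, v + W, Vu

def smooth(mf, Pf, vf, Vf, mp, Pp, vp, Vp, ms, Ps, vs, Vs, F, n, A):
    """One backward step: (k|k), (k+1|k), (k+1|K) -> (k|K)."""
    G = Pf @ F.T @ np.linalg.inv(Pp)
    mK = mf + np.kron(G, np.eye(d)) @ (ms - mp)
    PK = Pf - G @ (Pp - Ps) @ G.T
    eta = 1 + (vs - vf - 3 * (d + 1)) / n
    vK = vf + (vs - vp - 2 * (d + 1) ** 2 / n) / eta
    Ai = np.linalg.inv(A)
    VK = Vf + Ai @ (Vs - Vf) @ Ai.T / eta
    return mK, PK, vK, VK
```

A single forward-backward pass chains `predict` and `update` over the measurements and then runs `smooth` backwards from the final filtered density.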
\section{Factorized model}
\label{sec:factSmoothing}
For the random matrix model in \cite{FeldmannFK:2011}, we have factorised Gaussian inverse Wishart densities, cf. \eqref{eq:factorizedGIW},
and under assumed density filtering we seek a smoothed density of the same form, i.e.,
\begin{subequations}
\begin{align}
p(\xi_{k} | \setZ_{1:K}) = & p\left(\sx_{k}|\setZ_{1:K}\right)p\left(\ext_{k}|\setZ_{1:K}\right) \\
= & \Npdfbig{\sx_{k}}{m_{k|K}}{P_{k|K}} \nonumber \\
& \times \IWishpdf{\ext_{k}}{v_{k|K}}{V_{k|K}}.
\end{align}
\label{eq:factSmoothedGIW}
\end{subequations}
\subsection{Assumptions, approximations}
The following assumption is made for the factorized \giw model, see \cite{FeldmannFK:2011}.
\begin{assumption}\label{ass:IndependentKinematicTransition}
The time evolution of the kinematic state is independent of the extent state,
\begin{align}
p\left(\sx_{k+1}|\ext_{k+1},\sx_{k}\right) = p\left(\sx_{k+1}|\sx_{k}\right)
\end{align}
\hfill$\square$
\end{assumption}
The validity of this assumption is discussed in \cite{FeldmannFK:2011,GranstromO:2014}. The transition density is Gaussian Wishart \cite{GranstromO:2014},
\begin{align}
p(\xi_{k+1}|\xi_{k})
= & \Npdfbig{\sx_{k+1}}{f_k \left( \sx_{k} \right)}{Q_k} \label{eq:factTransitionDensity}\\
& \times \Wishpdf{\ext_{k+1}}{n_{k}}{\frac{M(\sx_{k})\ext_{k} M^{\tp}(\sx_{k})}{n_{k}}} \nonumber
\end{align}
where the function $f_{k}(\cdot) : \mathbb{R}^{n_x} \rightarrow \mathbb{R}^{n_x}$ is the motion model, the $n_x \times n_x$ matrix $Q_k$ is the process noise covariance, the degrees of freedom $n_k \geq d$ govern the uncertainty of the time evolution of the extent, and the function $M(\cdot) : \mathbb{R}^{n_x} \rightarrow \mathbb{I}^{d \times d}$ describes how the extent changes over time due to the object motion. For example, $M(\cdot)$ can be a rotation matrix. In what follows, we write $M_{\sx} = M(\sx)$ for brevity.
The measurement model is
\begin{align}
p(\sz | \sx_k,\ext_k) = \Npdfbig{\sz}{\tilde{H}_k\sx_{k}}{\rho\ext_{k}+R_{k}},
\end{align}
where the $d \times n_x$ matrix $\tilde{H}_k$ is the measurement model, $\rho>0$ is a scaling factor, and $R_{k}\in\mathbb{S}_{+}^{d}$ is the measurement noise covariance. The scaling factor $\rho$ and the noise covariance $R_{k}$ were added to better model scenarios where the sensor noise is large in relation to the size of the extended object, see discussion in \cite[Sec. 3]{FeldmannFK:2011}. In this paper, to enable a straightforward comparison to the conditional model, which assumes that the sensor noise is small in comparison to the size of the extended object, we focus on the case $\rho=1$ and $R_{k} = \mathbf{0}_{d\times d}$.
\subsection{Prediction, update, and smoothing}
The prediction and the update for the factorized model are reproduced in Table~\ref{tab:factPrediction} and in Table~\ref{tab:factUpdate}, respectively. The smoothing is given in the following theorem.
\begin{theorem}\label{thm:factSmoothing}
Let the densities $p(\xi_{k} | \setZ_{1:k})$, $p(\xi_{k+1} | \setZ_{1:K})$ and $p(\xi_{k+1} | \setZ_{1:k})$ be factorised Gaussian inverse Wishart \eqref{eq:factorizedGIW}, and let the transition density be Gaussian Wishart \eqref{eq:factTransitionDensity}. The smoothed density $p(\xi_{k} | \setZ_{1:K})$, see \eqref{eq:SmoothingBackwards}, is factorised Gaussian inverse Wishart \eqref{eq:factSmoothedGIW}, with parameters $(m_{k|K},P_{k|K},v_{k|K},V_{k|K})$ given in Table~\ref{tab:factSmoothing}.\hfill$\square$
\end{theorem}
The proof of Theorem~\ref{thm:factSmoothing} is given in Appendix~\ref{app:FactSmoothing}.
\begin{table}
\caption{Factorized model: prediction}
\label{tab:factPrediction}
\vspace{-5mm}
\rule[0pt]{\columnwidth}{1pt}
If $M_{\sx} = A$, where $A$ is a $d\times d$ invertible matrix,
\begin{align*}
\begin{array}{rcl}
m_{k+1|k} &= & f_{k}( m_{k|k} ) \\
P_{k+1|k} &= & \tilde{F}_{k}P_{k|k}\tilde{F}_{k}^{\tp} + Q \\
v_{k+1|k} & = & d+1 + \left(1 + \frac{v_{k|k}-2d-2}{n}\right)^{-1}(v_{k|k}-d-1) \\
V_{k+1|k} & = & \left(1 + \frac{v_{k|k}-d-1}{n-d-1}\right)^{-1} A V_{k|k} A^{\tp} \\
\tilde{F}_{k} & = & \left. \nabla_{\sx} f_{k}(\sx) \right |_{\sx=m_{k|k}}
\end{array}
\end{align*}
else,
\begin{align*}
\begin{array}{rcl}
m_{k+1|k} &= & f_{k}( m_{k|k} ) \\
P_{k+1|k} &= & \tilde{F}_{k}P_{k|k}\tilde{F}_{k}^{\tp} + Q \\
v_{k+1|k} & = & d+1 + \eta^{-1}(v_{k|k}-d-1) \\
V_{k+1|k} & = & \eta^{-1} \left(1 - \frac{d+1}{s}\right)\left(1 - \frac{d+1}{n}\right) C_{2} \\
\eta & = & 1 + (v_{k|k} -2d-2)\left( \frac{1}{s} + \frac{1}{n} - \frac{d+1}{ns} \right) \\
s & = & \frac{d+1}{d}\tr\left\{ C_{1} C_{2} \left( C_{1} C_{2} - \Id \right)^{-1} \right\} \\
\tilde{F}_{k} & = & \left. \nabla_{\sx} f_{k}(\sx) \right |_{\sx=m_{k|k}} \\
C_{1} & = & \mathbb{E}_{k|k}\left[ \left(M_{\sx} V_{k|k} M_{\sx}^{\tp}\right)^{-1} \right] \\
C_{2} & = & \mathbb{E}_{k|k}\left[ M_{\sx} V_{k|k} M_{\sx}^{\tp} \right]
\end{array}
\end{align*}
\rule[0pt]{\columnwidth}{1pt}
\end{table}
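As a concrete illustration, the first branch of the prediction in Table~\ref{tab:factPrediction} (the case $M_{\sx}=A$ with $A$ invertible) can be sketched in Python as follows. This is a minimal sketch, not a reference implementation; the function and variable names are our own.

```python
import numpy as np

def giw_predict_linear(m, P, v, V, f, F_jac, Q, A, n, d=2):
    """Factorized GIW prediction, first branch of the prediction table.

    The kinematic part (m, P) follows a standard EKF prediction, while the
    extent parameters (v, V) follow the forgetting rules governed by the
    Wishart degrees of freedom n.
    """
    m_pred = f(m)                        # m_{k+1|k} = f_k(m_{k|k})
    F = F_jac(m)                         # Jacobian of f_k at m_{k|k}
    P_pred = F @ P @ F.T + Q             # P_{k+1|k}
    v_pred = d + 1 + (v - d - 1) / (1 + (v - 2 * d - 2) / n)
    V_pred = (A @ V @ A.T) / (1 + (v - d - 1) / (n - d - 1))
    return m_pred, P_pred, v_pred, V_pred
```

With the CV model of Section~\ref{sec:simulationStudy} one would pass $A=\Id$, a linear $f$ and its (constant) Jacobian.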
\begin{table}
\caption{Factorized model: update}
\label{tab:factUpdate}
\vspace{-5mm}
\rule[0pt]{\columnwidth}{1pt}
\begin{align*}
\begin{array}{rcl}
m_{k|k} &= & m_{k|k-1} + K \varepsilon \\
P_{k|k} &= & P_{k|k-1} -K S K^{\tp}\\
v_{k|k} &= & v_{k|k-1} + |\setZ_{k}|\\
V_{k|k} &= & V_{k|k-1}+\hat{N} + \hat{Z} \\
\varepsilon^{} &= & \bar{\sz} - \tilde{H} m_{k|k-1}\\
\bar{\sz} &= & \frac{1}{|\setZ_{k}|}\sum_{\sz^{i} \in \setZ_{k}}{\sz_{}^{i}} \\
Z_{}^{} &= & \sum_{\sz_{k}^{i}\in \setZ_{k}} \left(\sz_{}^{i} - \bar{\sz}_{}^{}\right)\left(\sz_{}^{i} - \bar{\sz}_{}^{}\right)^{\tp} \\
\hat{X} & = & V_{k|k-1} \left(v_{k|k-1}-2d-2\right)^{-1} \\
Y &= & \rho \hat{X} + R\\
S^{} &= & \tilde{H} P_{k|k-1} \tilde{H}^{\tp} + \frac{Y}{|\setZ_{k}|}\\
K^{} &= & P_{k|k-1}\tilde{H}^{\tp} S^{-1}\\
\hat{N} &= & \hat{X}^{\frac{1}{2}} S^{-\frac{1}{2}} \varepsilon \varepsilon^{\tp} S^{-\frac{\tp}{2}} \hat{X}^{\frac{\tp}{2}} \\
\hat{Z} & = & \hat{X}^{\frac{1}{2}} Y^{-\frac{1}{2}} Z Y^{-\frac{\tp}{2}} \hat{X}^{\frac{\tp}{2}}
\end{array}
\end{align*}
\rule[0pt]{\columnwidth}{1pt}
\end{table}
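The update in Table~\ref{tab:factUpdate} can be sketched in the same way. The table does not fix a particular matrix square root; here we take $\hat{X}^{\frac{1}{2}}$, $S^{-\frac{1}{2}}$ and $Y^{-\frac{1}{2}}$ as (inverses of) Cholesky factors, which is one possible convention.

```python
import numpy as np

def giw_update(m, P, v, V, Z_meas, H, rho=1.0, R=None, d=2):
    """Factorized GIW measurement update; Z_meas is an (N, d) array of
    detections at time k. Square roots are Cholesky factors (a convention)."""
    N = Z_meas.shape[0]
    R = np.zeros((d, d)) if R is None else R
    zbar = Z_meas.mean(axis=0)                  # centroid measurement
    dz = Z_meas - zbar
    Zspread = dz.T @ dz                         # scatter matrix Z
    Xhat = V / (v - 2 * d - 2)                  # expected extent
    Y = rho * Xhat + R
    eps = zbar - H @ m                          # innovation
    S = H @ P @ H.T + Y / N
    K = P @ H.T @ np.linalg.inv(S)
    m_new = m + K @ eps
    P_new = P - K @ S @ K.T
    v_new = v + N
    Xs = np.linalg.cholesky(Xhat)               # Xhat^{1/2}
    Ss_inv = np.linalg.inv(np.linalg.cholesky(S))
    Ys_inv = np.linalg.inv(np.linalg.cholesky(Y))
    Nhat = Xs @ Ss_inv @ np.outer(eps, eps) @ Ss_inv.T @ Xs.T
    Zhat = Xs @ Ys_inv @ Zspread @ Ys_inv.T @ Xs.T
    V_new = V + Nhat + Zhat
    return m_new, P_new, v_new, V_new
```

As in the paper, the defaults $\rho=1$ and $R=\mathbf{0}$ are used unless overridden.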
\begin{table}
\caption{Factorized model: smoothing}
\label{tab:factSmoothing}
\vspace{-5mm}
\rule[0pt]{\columnwidth}{1pt}
If $M_{\sx} = A$, where $A$ is a $d\times d$ invertible matrix,
\begin{align*}
\begin{array}{rcl}
m_{k|K} & = & m_{k|k} + G_{k} \left( m_{k+1|K} - m_{k+1|k}\right) \\
P_{k|K} & = & P_{k|k} - G \left( P_{k+1|k} - P_{k+1|K} \right) G^{\tp} \\
v_{k|K} & = & v_{k|k} + \eta^{-1} \left(v_{k+1|K} - v_{k+1|k} -\frac{2(d+1)^2}{n}\right) \\
V_{k|K} & = & V_{k|k} + \eta^{-1} A^{-1}\left(V_{k+1|K} - V_{k+1|k}\right) (A^{-1})^{\tp} \\
G & = & P_{k|k} \tilde{F}_k^{\tp} P_{k+1 | k}^{-1} \\
\eta & = & 1 + \frac{v_{k+1|K} - v_{k+1|k}-3(d+1)}{n}
\end{array}
\end{align*}
else
\begin{align*}
\begin{array}{rcl}
m_{k|K} & = & m_{k|k} + G_{k} \left( m_{k+1|K} - m_{k+1|k}\right) \\
P_{k|K} & = & P_{k|k} - G \left( P_{k+1|k} - P_{k+1|K} \right) G^{\tp} \\
v_{k|K} & = & v_{k|k} + \eta_{2}^{-1} \left(g -\frac{2(d+1)^2}{h+d+1}\right) \\
V_{k|K} & = & V_{k|k} + \eta_{3}^{-1}C_{4} \\
G & = & P_{k|k} \tilde{F}_k^{\tp} P_{k+1 | k}^{-1} \\
W & = & V_{k+1|K} - V_{k+1|k} \\
w & = & v_{k+1|K} - v_{k+1|k} \\
g & = & \eta_{1}^{-1}\left(w - \frac{2(d+1)^2}{n}\right) \\
h & = & \frac{d+1}{d}\tr\left\{ C_{3} C_{4} \left( C_{3} C_{4} - \Id \right)^{-1} \right\} \\
\eta_{1} & = & 1+\frac{w-3(d+1)}{n} \\
\eta_{2} & = & 1+\frac{g-3d-3}{h+d+1} \\
\eta_{3} & = & 1 + \frac{g-d-1}{h-d-1} \\
C_{3} & = & \mathbb{E}_{k|K}\left[ \left(M_{\sx}^{-1} W (M_{\sx}^{-1})^{\tp}\right)^{-1} \right] \\
& = & \mathbb{E}_{k|K}\left[ M_{\sx}^{\tp} W^{-1} M_{\sx} \right] \\
C_{4} & = & \mathbb{E}_{k|K}\left[ M_{\sx}^{-1} W (M_{\sx}^{-1})^{\tp} \right] \\
& = & \mathbb{E}_{k|K}\left[ \left( M_{\sx}^{\tp} W^{-1} M_{\sx} \right)^{-1}\right]
\end{array}
\end{align*}
\rule[0pt]{\columnwidth}{1pt}
\end{table}
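The first branch of the smoothing in Table~\ref{tab:factSmoothing} (again the case $M_{\sx}=A$) amounts to an RTS-style backward recursion for the kinematic part plus scalar corrections for $(v,V)$. A minimal sketch, with names of our choosing:

```python
import numpy as np

def giw_smooth_linear(filt, pred_next, smooth_next, F, A, n, d=2):
    """One backward smoothing step, first branch of the smoothing table.

    filt, pred_next, smooth_next are (m, P, v, V) tuples for
    p(xi_k | Z_1:k), p(xi_{k+1} | Z_1:k) and p(xi_{k+1} | Z_1:K).
    """
    m_f, P_f, v_f, V_f = filt
    m_p, P_p, v_p, V_p = pred_next
    m_s, P_s, v_s, V_s = smooth_next
    G = P_f @ F.T @ np.linalg.inv(P_p)           # smoothing gain
    m_out = m_f + G @ (m_s - m_p)
    P_out = P_f - G @ (P_p - P_s) @ G.T
    eta = 1 + (v_s - v_p - 3 * (d + 1)) / n
    v_out = v_f + (v_s - v_p - 2 * (d + 1) ** 2 / n) / eta
    A_inv = np.linalg.inv(A)
    V_out = V_f + (A_inv @ (V_s - V_p) @ A_inv.T) / eta
    return m_out, P_out, v_out, V_out
```

Note that when the smoothed and predicted densities at $k+1$ coincide, the kinematic and shape matrices are left unchanged, as one would expect.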
\subsection{Expected value approximation}
Note that both the prediction and the smoothing require expected values, see $C_{1}$ and $C_{2}$ in Table~\ref{tab:factPrediction}, and $C_{3}$ and $C_{4}$ in Table~\ref{tab:factSmoothing}. For a Gaussian distributed vector $\sx \sim \mathcal{N}(m,P)$, the expected value of $\left(M_{\sx}VM_{\sx}^{\tp}\right)^{-1}$ can be approximated using a third order Taylor expansion
\begin{align}
C_{1} \approx & \left( M(m)VM(m)^{\tp} \right)^{-1} \nonumber \\
& + \sum_{i=1}^{n_{x}} \sum_{j=1}^{n_{x}} \left.\frac{\diff^{2} \left( M_{\sx}VM_{\sx}^{\tp} \right)^{-1}}{\diff \sx^{[i]} \diff \sx^{[j]}}\right|_{\sx=m} P^{[i,j]}
\end{align}
where $\sx^{[i]}$ is the $i$th element of $\sx$ and $P^{[i,j]}$ is the $(i,j)$th element of $P$. The necessary differentiations are
\begin{subequations}
\begin{align}
\frac{\diff M_{\sx}VM_{\sx}^{\tp}}{\diff \sx^{[j]}} = & \frac{\diff M_{\sx}}{\diff \sx^{[j]}}V M_{\sx}^{\tp} + M_{\sx} V \frac{\diff M_{\sx}^{\tp}}{\diff \sx^{[j]}} \\
\frac{\diff^{2} M_{\sx}VM_{\sx}^{\tp}}{\diff \sx^{[i]} \diff \sx^{[j]}} = & \frac{\diff^{2} M_{\sx}}{\diff \sx^{[i]} \diff \sx^{[j]}}VM_{\sx}^{\tp} + \frac{\diff M_{\sx}}{\diff \sx^{[j]}}V\frac{\diff M_{\sx}^{\tp}}{\diff \sx^{[i]}} \nonumber \\
& + \frac{\diff M_{\sx}}{\diff \sx^{[i]}}V\frac{\diff M_{\sx}^{\tp}}{\diff \sx^{[j]}} + M_{\sx} V \frac{\diff^{2} M_{\sx}^{\tp}}{\diff \sx^{[i]} \diff \sx^{[j]}}
\end{align}
and, for any function $N_{\sx}=N(\sx)$,
\begin{align}
\frac{\diff^{2} N_{\sx}^{-1}}{\diff \sx^{[i]} \diff \sx^{[j]}} = & N_{\sx}^{-1} \frac{\diff N_{\sx}}{\diff \sx^{[j]}} N_{\sx}^{-1}\frac{\diff N_{\sx}}{\diff \sx^{[i]}} N_{\sx}^{-1} - N_{\sx}^{-1} \frac{\diff^{2} N_{\sx}}{\diff\sx^{[i]} \diff\sx^{[j]}} N_{\sx}^{-1} \nonumber \\
& +N_{\sx}^{-1} \frac{\diff N_{\sx}}{\diff \sx^{[i]}} N_{\sx}^{-1}\frac{\diff N_{\sx}}{\diff \sx^{[j]}} N_{\sx}^{-1}
\end{align}
\end{subequations}
The expected values $C_{2}$, $C_{3}$ and $C_{4}$ can be approximated analogously.
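A numerical sanity check of this type of approximation, assuming the CT rotation $M_{\sx}$ of Section~\ref{sec:simulationStudy}: we compare a second-order expansion of $C_{2}=\mathbb{E}\left[M_{\sx}VM_{\sx}^{\tp}\right]$ (with the conventional factor $\tfrac{1}{2}$ on the Hessian term of a Gaussian expectation, an assumption on our part) against an essentially exact Gauss--Hermite quadrature reference; derivatives are taken numerically for brevity.

```python
import numpy as np

def rot(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def taylor_E_MVMt(V, m_omega, var_omega, T=1.0, h=1e-5):
    """Second-order Taylor approximation of E[M V M^T] for the rotation
    M(omega) = R(T*omega), omega ~ N(m_omega, var_omega)."""
    g = lambda w: rot(T * w) @ V @ rot(T * w).T
    g0 = g(m_omega)
    hess = (g(m_omega + h) - 2 * g0 + g(m_omega - h)) / h**2  # d^2 g / dw^2
    return g0 + 0.5 * hess * var_omega

def quadrature_E_MVMt(V, m_omega, var_omega, T=1.0, order=20):
    """Reference value via Gauss-Hermite quadrature (exact for smooth g)."""
    x, w = np.polynomial.hermite.hermgauss(order)
    pts = m_omega + np.sqrt(2 * var_omega) * x
    vals = sum(wi * rot(T * p) @ V @ rot(T * p).T for wi, p in zip(w, pts))
    return vals / np.sqrt(np.pi)
```

For a moderate turn-rate uncertainty the two agree to within the neglected fourth-order terms.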
\section{Simulation study}
\label{sec:simulationStudy}
In this section we present the results of a simulation study. In all simulations, the dimension of the extent is $d=2$.
\subsection{Implemented smoothers}
Three different smoothers were implemented.
\subsubsection{Conditional \giw model with constant velocity motion model (CCV)} The state vector contains 2D Cartesian position and velocity, $\sx_{k} = [p_{k}^{x}, \ p_{k}^{y},\ v_{k}^{x}, \ v_{k}^{y}]^{\tp}$, $n_x=4$ and $s=2$. The following models are used,
\begin{subequations}
\begin{align}
F_{k} &= \begin{bmatrix} 1 & T \\ 0 & 1\end{bmatrix}, \label{eq:CVmodel} \\
D_{k} &= \sigma_{a}^{2} \begin{bmatrix} \frac{T^4}{4} & \frac{T^3}{2} \\ \frac{T^3}{2} & T^2 \end{bmatrix}, \label{eq:CVnoiseCovariance} \\
H_{k} &= \begin{bmatrix} 1 & 0 \end{bmatrix}.
\end{align}
\end{subequations}
$A = \Id$, and $n_k = 100$, where $T$ is the sampling time.
\subsubsection{Factorized \giw model with constant velocity motion model (FCV)} The state vector contains 2D Cartesian position and velocity, $\sx_{k} = [p_{k}^{x}, \ p_{k}^{y},\ v_{k}^{x}, \ v_{k}^{y}]^{\tp}$, and $n_x=4$. The following models are used,
\begin{subequations}
\begin{align}
f_{k}(\sx) &= \begin{bmatrix} \mathbf{I}_{2} & T \mathbf{I}_{2} \\ \mathbf{0}_{2\times 2} & \mathbf{I}_{2} \end{bmatrix} \sx, \\
Q_{k} &= \sigma_{a}^{2} \begin{bmatrix} \frac{T^4}{4} \mathbf{I}_{2} & \frac{T^3}{2} \mathbf{I}_{2} \\ \frac{T^3}{2} \mathbf{I}_{2} & T^2 \mathbf{I}_{2} \end{bmatrix} \\
\tilde{H}_{k} & = \begin{bmatrix} \mathbf{I}_{2} & \mathbf{0}_{2\times 2} \end{bmatrix}
\end{align}
\end{subequations}
$n_k = 100$, and $M_{\sx} = \Id$.
\subsubsection{Factorized \giw model with coordinated turn motion model (FCT)} The state vector contains 2D Cartesian position and velocity, as well as turn-rate, $\sx_{k} = [p_{k}^{x}, \ p_{k}^{y},\ v_{k}^{x}, \ v_{k}^{y}, \ \omega_{k}]^{\tp}$, and $n_x=5$. The following models are used,
\begin{subequations}
\begin{align}
f_{k}(\sx_k) & = {\scriptsize \begin{bmatrix}
1 & 0 & \frac{\sin(T\omega_k)}{\omega_k} & -\frac{1-\cos(T\omega_k)}{\omega_k} & 0 \\
0 & 1 & \frac{1-\cos(T\omega_k)}{\omega_k} & \frac{\sin(T\omega_k)}{\omega_k} & 0 \\
0 & 0 & \cos(T\omega_k) & - \sin(T\omega_k) & 0 \\
0 & 0 & \sin(T\omega_k) & \cos(T\omega_k) & 0 \\
0 & 0 & 0 & 0 & 1
\end{bmatrix} } \sx_k, \label{eq:CTmodel} \\
Q_{k} &= G \diag{[ \sigma_{a}^{2},\ \sigma_{a}^{2}, \ \sigma_{\omega}^{2} ]} G^{\tp}, \label{eq:CTnoiseCovariance} \\
G & = \begin{bmatrix} \frac{T^2}{2} \mathbf{I}_{2} & \mathbf{0}_{2\times 1} \\ T\mathbf{I}_{2} & \mathbf{0}_{2\times 1} \\ \mathbf{0}_{1\times 2} & 1 \end{bmatrix} \\
M_{\sx} & = \begin{bmatrix} \cos(T\omega) & -\sin(T\omega) \\ \sin(T\omega) & \cos(T\omega) \end{bmatrix},\\
\tilde{H}_{k} & = \begin{bmatrix} \mathbf{I}_{2} & \mathbf{0}_{2\times 3} \end{bmatrix}
\end{align}
\end{subequations}
and $n_k = \infty$. For the matrix transformation function $M_{\sx}$ we have the following,
\begin{subequations}
\begin{align}
& M_{\sx}^{-1} = M_{\sx}^{\tp} \\
& \frac{\diff M_{\sx}}{\diff \sx^{[i]}} = \begin{cases} \scriptsize T \begin{bmatrix} -\sin(T\omega) & -\cos(T\omega) \\ \cos(T\omega) & -\sin(T\omega) \end{bmatrix} & i=5 \\ \mathbf{0}_{2\times 2} & i \neq 5 \end{cases} \\
& \frac{\diff^2 M_{\sx}}{\diff \sx^{[i]} \diff \sx^{[j]}} = \begin{cases} \scriptsize T^2 \begin{bmatrix} -\cos(T\omega) & \sin(T\omega) \\ -\sin(T\omega) & -\cos(T\omega) \end{bmatrix} & i,j=5 \\ \mathbf{0}_{2\times 2} & i,j \neq 5 \end{cases}
\end{align}
\end{subequations}
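The CT transition \eqref{eq:CTmodel} and the extent rotation $M_{\sx}$ can be sketched directly; the limit $\omega \to 0$ (straight-line motion) is handled explicitly, an implementation detail not spelled out in the model:

```python
import numpy as np

def ct_transition(x, T):
    """Coordinated-turn transition f_k for state [px, py, vx, vy, omega]."""
    px, py, vx, vy, w = x
    if abs(w) < 1e-9:                          # omega -> 0: straight line
        return np.array([px + T * vx, py + T * vy, vx, vy, w])
    s, c = np.sin(T * w), np.cos(T * w)
    return np.array([
        px + (s / w) * vx - ((1 - c) / w) * vy,
        py + ((1 - c) / w) * vx + (s / w) * vy,
        c * vx - s * vy,
        s * vx + c * vy,
        w,
    ])

def ct_extent_rotation(x, T):
    """M(x): the extent rotates with the heading change T*omega."""
    w = x[4]
    s, c = np.sin(T * w), np.cos(T * w)
    return np.array([[c, -s], [s, c]])
```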
\subsection{Simulated scenarios}
We focused on two types of scenarios: in the first, the true tracks were generated by a constant velocity (CV) model; in the second, the true tracks were generated by a coordinated turn (CT) model. This allows us to test the different smoothers both when their respective motion models match the true model and when there is a motion model mismatch.
The CV tracks were generated using the CV model \eqref{eq:CVmodel} and \eqref{eq:CVnoiseCovariance} with $\sigma_{a} = 1$; the extent's major axis was simulated to be aligned with the velocity vector of the extended object.
The CT tracks were generated using the CT model \eqref{eq:CTmodel} and \eqref{eq:CTnoiseCovariance} with $\sigma_{a} = 1$ and $\sigma_{\omega} = \pi/180$; the extent's major axis was simulated to be aligned with the velocity vector of the extended object.
For both motion models, in each time step $k$, a detection process was simulated by first sampling a probability of detection $p_{\rm D}$, and, if the object is detected, sampling $N_{z}$ detections using a Gaussian likelihood. We simulated two combinations: $(p_{\rm D}, \ N_{z}) = (0.25, \ 10)$ and $(p_{\rm D}, \ N_{z}) = (0.75, \ 10)$.
\subsection{Performance evaluation}
For performance evaluation of extended object estimates with ellipsoidal extents, a comparison study has shown that, among six compared performance measures, the Gaussian Wasserstein Distance (\gwd) metric is the best choice \cite{YangBG:2016}. The \gwd is defined as \cite{GivensS:1984}
\begin{align}
\Delta_{k|\ell} = & \| \mathbf{p}_{k} - \hat{\mathbf{p}}_{k|\ell} \|^{2} \\
& + \tr\left(\ext_{k}+\hat{\ext}_{k|\ell}-2 \left( \ext_{k}^{\frac{1}{2}} \hat{\ext}_{k|\ell} \ext_{k}^{\frac{1}{2}} \right)^{\frac{1}{2}} \right), \nonumber
\end{align}
where the expected extended object state is
\begin{align}
\hat{\xi}_{k|\ell} = & \mathbb{E}_{p(\xi_{k} | \setZ_{1:\ell})} \left[ \xi_{k} \right] = \left(\hat{\sx}_{k|\ell},\ \hat{\ext}_{k|\ell}\right) \\
= & \left( m_{k|\ell} , \ \frac{V_{k|\ell}}{v_{k|\ell} - 2d-2} \right)
\end{align}
and $\mathbf{p}_{k}$ is the position part of the extended object state vector $\sx_{k}$.
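The \gwd{} above is straightforward to compute with a symmetric positive semi-definite matrix square root; a minimal sketch (names are ours):

```python
import numpy as np
from numpy.linalg import eigh

def sqrtm_psd(A):
    """Symmetric PSD matrix square root via eigendecomposition."""
    w, U = eigh(A)
    return U @ np.diag(np.sqrt(np.clip(w, 0, None))) @ U.T

def gwd(p_true, X_true, p_est, X_est):
    """Squared Gaussian Wasserstein distance between two ellipsoidal
    extended-object estimates (position, extent matrix)."""
    Xs = sqrtm_psd(X_true)
    cross = sqrtm_psd(Xs @ X_est @ Xs)
    pos_err = np.sum((np.asarray(p_true) - np.asarray(p_est)) ** 2)
    return pos_err + np.trace(X_true + X_est - 2 * cross)
```

Note that the metric reduces to the squared position error when the two extents coincide.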
\subsection{Results}
We show results for estimates $\hat{\xi}_{k|\ell}$ for $\ell \in \{k-1, \ k, \ K\}$, i.e., prediction, filtering and smoothing. Results for true tracks generated by a CV model are shown in Figure~\ref{fig:GWD_CV}; results for true tracks generated by a CT model are shown in Figure~\ref{fig:GWD_CT}.
\begin{figure*}[htbp]
\begin{center}
\includegraphics[width=\columnwidth]{GWD_CV_pD_25_Nz_10_siga_1_sigw_1}
\includegraphics[width=\columnwidth]{GWD_CV_pD_75_Nz_10_siga_1_sigw_1}
\caption{Results for object tracks generated by a CV model, with probability of detection $p_{\rm D}=0.25$ (left) and $p_{\rm D}=0.75$ (right) and, if detected, $N_{z}=10$ object measurements (both). CCV is shown in blue, FCV is shown in red, and FCT is shown in orange. Each line is the median from $1000$ Monte Carlo simulations.}
\label{fig:GWD_CV}
\end{center}
\end{figure*}
\begin{figure*}[htbp]
\begin{center}
\includegraphics[width=\columnwidth]{GWD_CT_pD_25_Nz_10_siga_1_sigw_1}
\includegraphics[width=\columnwidth]{GWD_CT_pD_75_Nz_10_siga_1_sigw_1}
\caption{Results for object tracks generated by a CT model, with probability of detection $p_{\rm D}=0.25$ (left) and $p_{\rm D}=0.75$ (right) and, if detected, $N_{z}=10$ object measurements (both). CCV is shown in blue, FCV is shown in red, and FCT is shown in orange. Each line is the median from $1000$ Monte Carlo simulations.}
\label{fig:GWD_CT}
\end{center}
\end{figure*}
We see that in all cases, as expected, the smoothing errors are smaller than the filtering errors, which in turn are smaller than the prediction errors. This confirms that the derived smoothers work as intended. It is also in accordance with expectations that the performance is worse when the probability of detection is lower. Perhaps counter-intuitively, for CV true motion the FCT smoother performs best despite the modelling error in its motion model. We believe that this is due, at least in part, to the standard assumption that the orientation of the extent ellipse is aligned with the velocity vector. The motion noise on the velocity vector introduces rotations of the extent ellipse, and the motion model used in FCT captures these rotations better.
\section{Conclusions and future work}
\label{sec:Conclusions}
This paper presented Bayesian smoothing for the random matrix model used in extended object tracking. Two variants of Gaussian inverse Wishart state densities exist in the literature, a conditional and a factorised; closed form Bayesian smoothing was derived for both of them. The derived smoothers were implemented and tested in a simulation scenario. In future work the smoothers will be used with real data, e.g., data from camera, lidar or radar.
\section{Introduction}
Let $\Omega$ be an open set in $\mathbb R^d$ ($d \geq 3$),
$X_p:=L_{{\rm loc}}^p(\Omega,dx)$ ($p \geq 1$), $H^{m,p}(\Omega)$ the standard Sobolev space and $-\Delta:=-\sum_{k=1}^d \frac{\partial^2}{\partial x_k^2}$ the Laplace operator. Let
$\mathcal D'(\Omega)$ be the space of distributions on $\Omega$ and $\mathcal L_{{\rm loc}}^{2,1}(\Omega):=\{f \in X_1: \Delta f \in \mathcal D'(\Omega) \cap X_1\}$.
Let now $\Omega$ be connected. For a space of functions $Y_V \subset \mathcal L_{{\rm loc}}^{2,1}(\Omega)$ depending on $V \in X_1$, we say that the differential inequality
\begin{equation}
\label{diffineq}
|\Delta u(x)| \leq |V(x)||u(x)| \quad \text{ a.e. in } \Omega
\end{equation}
has the property of \textit{weak} unique continuation (WUC) in $Y_V$~($=:Y_V^{\rm {weak}}$) provided that whenever $u$ in $Y_V$ satisfies inequality (\ref{diffineq}) and vanishes in an open subset of $\Omega$ it follows that $u \equiv 0$ in $\Omega$.
We also say that (\ref{diffineq}) has the property of \textit{strong} unique continuation (SUC) in $Y_V$~($=:Y_V^{\rm {str}}$) if whenever $u$ in $Y_V$ satisfies (\ref{diffineq})
and vanishes to an infinite order at a point $x_0 \in \Omega$, i.e.,
\begin{equation*}
\lim_{\rho \to 0} \frac{1}{\rho^k} \int_{|x-x_0|<\rho} |u(x)|^2 dx=0, \text{ for all } k \in \mathbb N,
\end{equation*}
it follows that $u \equiv 0$ in $\Omega$.
Throughout our work we make use of the following notations. $\mathbf{1}_{S}$ is the characteristic function of a set $S \subset \mathbb R^d$, $B(x_0,\rho):=\{x \in \mathbb R^d:|x-x_0|<\rho\}$ and $B_S(x_0,\rho):=B(x_0,\rho) \cap S$ (also, set $B(\rho):=B(0,\rho)$ and $B_S(\rho):=B_S(0,\rho)$), $\|A\|_{p \mapsto q}$ is the norm of operator $A:L^p(\mathbb R^d) \mapsto L^q(\mathbb R^d)$, $(-\Delta)^{-\frac{z}{2}}$, $0<\mbox{Re}(z)<d$, stands for the Riesz operator whose action on a function $f \in C_0^\infty(\mathbb R^d)$ is determined by the formula
\begin{equation*}
(-\Delta)^{-\frac{z}{2}} f(x)=c_z\int_{\mathbb R^d}\left[(-\Delta)^{-\frac{z}{2}}\right](x,y) f(y)dy,
\end{equation*}
where
\begin{equation*}
\left[(-\Delta)^{-\frac{z}{2}}\right](x,y):=|x-y|^{z-d}, \quad c_z:=\Gamma\left(\frac{d-z}{2}\right)\left(\pi^{d/2} 2^z \Gamma \left(\frac{z}{2} \right) \right)^{-1}
\end{equation*}
(see, e.g., \cite{SteBook}).
The first result on unique continuation was obtained by T.~Carleman \cite{Carl}. He proved that (\ref{diffineq}) has the WUC property in the case $d=2$, $V \in L^{\infty}_{{\rm loc}}(\Omega)$.
Since then, the properties of unique continuation have been extensively studied by many authors (primarily following Carleman's original approach), with the best possible SUC result for $L_{{\rm loc}}^p$-potentials obtained by D.~Jerison and C.~Kenig ($p=\frac{d}{2}$, $Y_V^{\rm {str}}=H^{2,\bar{p}}_{{\rm loc}}$, $\bar{p}:=\frac{2d}{d+2}$) \cite{JK}, and its extension to $L_{{\rm loc}}^{d/2,\infty}$-potentials obtained by E.M.~Stein \cite{Ste}.
In \cite{SS,Forese,Saw} the authors prove unique continuation for potentials from the following `abstract' classes:
1) $V \in L^2_{{\rm loc}}(\mathbb R^d)$ satisfies $\|\mathbf{1}_{B(x,1)}V(-\Delta)^{-1}|V| \mathbf{1}_{B(x,1)}\|_{2 \mapsto 2}<\infty$ for all $B(x,1) \subset \mathbb R^d$ in \cite{SS};
2) $V \in L^2_{{\rm loc}}(\mathbb R^d)$ satisfies $\inf_{\lambda>0}\|~V(-\Delta+\lambda)^{-1}\|_{2 \mapsto 2}=0$ and $$\inf_{\lambda>0}\sup_{x \in \mathbb R^d} \|\mathbf{1}_{B(x,1)}V_1(-\Delta+\lambda)^{-3/4}\|_{2 \mapsto 2}=0$$ for all $B(x,1) \subset \mathbb R^d$, see \cite{Forese};
3) the Kato class (see Section \ref{compsect}) in \cite{Saw} ($d=3$).
Further improvements of Stein's result were obtained in \cite{SawChan,RV,W}, where unique continuation is proved for potentials $V$ locally in the Campanato--Morrey class (see Section \ref{compsect}).
Our main result is that
differential inequality (\ref{diffineq}) has the WUC property in the space of solutions
$$Y_V^{\rm {weak}}:=\left\{f \in \mathcal L^{2,1}_{{\rm loc}}: |V|^{\frac{1}{2}}f \in X_2 \right\}$$
(containing eigenfunctions of the corresponding self-adjoint Schr\"{o}dinger operator, see below)
and, respectively, the SUC property in
$$Y_V^{\rm {str}}:=Y_V^{\rm {weak}} \cap H_{{\rm loc}}^{1,\bar{p}}(\Omega), $$
for potentials $V$ in the following class (for the motivation see (\ref{formineq99}) and (\ref{fbetaloc}) below)
\begin{equation*}
\mathcal F^d_{\beta,{\rm loc}}:=\left\{W \in X_{\frac{d-1}{2}}:
\sup_{K}\overline{\lim\limits_{\rho \to 0}} \sup_{x_0 \in K} \|\mathbf{1}_{B_K(x_0,\rho)} |W|^{\frac{d-1}{4}}(-\Delta)^{-\frac{d-1}{2}}|W|^{\frac{d-1}{4}}\mathbf{1}_{B_K(x_0,\rho)}\|_{2 \mapsto 2}\leq \beta\right\},
\end{equation*}
where $K$ is a compact subset of $\Omega$.
Historically, the most important reason for establishing the WUC property is its application to
the problem of absence of positive eigenvalues of the self-adjoint Schr\"{o}dinger operators,
discovered in 1959 by T.~Kato \cite{Kato1}. He proved
that if $V$ has a compact support, then all eigenfunctions corresponding to positive eigenvalues must vanish outside of a ball of finite radius, hence by WUC must be identically equal to zero. In what follows, we employ our WUC result for (\ref{diffineq}) to prove the absence of positive eigenvalues of the self-adjoint Schr\"{o}dinger operator $H \supset -\Delta+V$ in the complex Hilbert space $\mathcal H:=L^2(\mathbb R^d,dx)$ defined in the sense of quadratic forms (see \cite{Kato2,RS}), namely:
\begin{equation}
\label{formsum}
H:=H_+\dotplus(-V_-),
\end{equation}
where $H_+:=H_0\dotplus V_+$, $H_0=(-\Delta|_{C^\infty(\mathbb R^d)})^{\ast}$, $D(H_0)=H^{2,2}(\mathbb R^d)$, $V=V_+-V_-$, $V_{\pm} \geq 0$, $V_{\pm} \in L^1(\mathbb R^d)$ and
\begin{equation}
\label{formineq}
\inf_{\lambda>0}\left\|V_-^{\frac{1}{2}}(H_++\lambda)^{-1} V_-^{\frac{1}{2}}\right\|_{2 \mapsto 2} \leq \beta<1.
\end{equation}
The latter inequality guarantees the existence of the form sum (\ref{formsum}), see \cite[Ch.VI]{Kato2},
and the inclusion $D(H) \subset Y_V^{\rm {weak}}$
(see Section \ref{comparisonsect}).
On the other hand it is easy to see that if $V \in L_{{\rm loc}}^1(\mathbb R^d)$ satisfies the inequality
\begin{equation}
\label{formineq99}
\inf_{\lambda>0}\left\| |V|^{\frac{1}{2}}(H_0+\lambda)^{-1}|V|^{\frac{1}{2}}\right\|_{2 \mapsto 2} \leq \beta<1,
\end{equation}
then $V$ satisfies (\ref{formineq}) with the same $\beta$, and therefore the existence of the form sum (\ref{formsum}) follows.
The local nature of the problem of unique continuation for (\ref{diffineq}) leads to the definition of a `local analogue' of the class of potentials satisfying (\ref{formineq99}):
\begin{equation}
\label{fbetaloc}
F_{\beta,{\rm loc}}:=\left\{W \in X_1:\sup_K\overline{\lim\limits_{\rho \to 0}}\sup_{x_0 \in K}\|\mathbf{1}_{B_K(x_0,\rho)} |W|^{\frac{1}{2}}(-\Delta)^{-1}|W|^{\frac{1}{2}}\mathbf{1}_{B_K(x_0,\rho)}\|_{2 \mapsto 2} \leq \beta\right\},
\end{equation}
where $K$ is a compact subset of $\Omega$.
This class coincides with $\mathcal F^d_{\beta,{\rm loc}}$ if $d=3$, and contains $\mathcal F^d_{\beta,{\rm loc}}$ as a proper subclass if $d \geq 4$ (the latter easily follows from Heinz-Kato inequality, see, e.g., \cite{Kato0}). Arguments of this article do not apply to the larger class of potentials $F_{\beta,{\rm loc}}$ for $d \geq 4$.
\vspace*{4mm}
Class $\mathcal F^d_{\beta,{\rm loc}}$ contains the potentials considered in \cite{JK,Ste,Saw,SawChan,W} as proper subclasses.
Previously WUC and SUC properties were derived only for $Y_V=H^{2,\bar{p}}_{{\rm loc}}$.
We note that though the dependence of $Y_V$ on $V$ (i.e., $u \in Y_V$ implies $|V|^{\frac{1}{2}}u \in X_2$) does not appear explicitly in the papers cited above,
it is implicit, see Section \ref{compsect}.
\vspace*{4mm}
Following Carleman, most proofs of unique continuation rely on Carleman type estimates on the norms of the appropriate operators acting from $L^p$ to $L^q$, for certain $p$ and $q$ (e.g., Theorem 2.1 in \cite{JK}, Theorem 1 in \cite{Ste}). Our method is based on an $L^2 \mapsto L^2$ estimate of Proposition \ref{ourlem} and Lemma \ref{sawlem1}, proved in \cite{Saw}.
In the case $d=3$ we derive Proposition \ref{ourlem} using only Lemma \ref{sawlem1}. The case $d \geq 4$ is reduced to the case $d=3$ at the cost of a more restrictive class of potentials:
the proof uses Stein's interpolation theorem for analytic families of operators \cite{SW}, and relies on Lemma \ref{sawlem2} -- a variant of pointwise inequalities considered in \cite{Saw} and \cite{Ste} (cf. Lemma 1 in \cite{Saw}, Lemma 5 in \cite{Ste}) -- and Lemma \ref{JKlem} of \cite{JK}. \\
The results of this article have been announced in \cite{KiSh}.
\vspace*{4mm}
\noindent \textbf{Acknowledgments. }We are grateful to Yu.~A.~Semenov for introducing us to the subject of unique continuation and close guidance throughout our work on this article, and to Pierre Milman for his supervision and, in particular, help in communicating our results here.
\vspace*{2mm}
\section{Main Results}
\label{comparisonsect}
Our main results state that (\ref{diffineq}) has the WUC and SUC properties with potentials from $\mathcal F^d_{\beta,{\rm loc}}$. The difference between the results is in the classes $Y_V$ within which we look for solutions to (\ref{diffineq}).
\begin{theorem}
\label{mainthm}
There exists a sufficiently small constant $\beta<1$ such that if $V \in \mathcal F^d_{\beta,{\rm loc}}$ then (\ref{diffineq}) has the WUC property in $Y_V^{\rm {weak}}$.
\end{theorem}
\begin{theorem}
\label{mainthm2}
There exists a sufficiently small constant $\beta<1$ such that if $V \in \mathcal F^d_{\beta,{\rm loc}}$, then (\ref{diffineq}) has the SUC property in $Y_V^{\rm {str}}$.
\end{theorem}
The proofs of Theorems \ref{mainthm} and \ref{mainthm2} are given in Section \ref{mainsect}.
Concerning the eigenvalue problem, we have the following result.
\begin{theorem}
\label{eigthm}
Suppose that $H$ is defined by (\ref{formsum}) under the assumption that (\ref{formineq}) holds. Let us also assume that $V \in \mathcal F^d_{\beta,{\rm loc}}$ for $\beta<1$ sufficiently small, and that ${\rm supp}(V)$ is compact in $\mathbb R^d$. Then the only solution to the eigenvalue problem
\begin{equation}
\label{eigprob}
Hu=\lambda u, \quad u \in D(H), \quad \lambda>0
\end{equation}
is zero.
\end{theorem}
\begin{proof}
The following inclusions are immediate from the definition of operator $H$: $$D(H) \subset H^{1,2}(\mathbb R^d) \cap D(V_+^{\frac{1}{2}}) \cap D(V_-^{\frac{1}{2}}),$$ $$D(H) \subset D(H_{\max}),$$
where $$D(H_{\max}):=\{f \in \mathcal H: ~ \Delta f \in \mathcal D'(\mathbb R^d) \cap L^1_{{\rm loc}}(\mathbb R^d), Vf \in L^1_{{\rm loc}}(\mathbb R^d), -\Delta f+Vf \in \mathcal H\}.$$
Therefore, $D(H) \subset Y_V^{\rm {weak}}$ and if $u \in D(H)$ is a solution to (\ref{eigprob}), then $$|\Delta u|=|(V-\lambda) u| \quad \text{ a.e. in }\mathbb R^d.$$
By Kato's theorem \cite{Kato1} $u$ has compact support.
Now Theorem \ref{eigthm} follows from Theorem \ref{mainthm}.
\end{proof}
\section{Historical context}
\label{compsect}
\vspace*{2mm}
1) D.~Jerison and C.~Kenig \cite{JK} and E.M.~Stein \cite{Ste} proved the validity of the SUC property for potentials from the classes $L^{\frac{d}{2}}_{{\rm loc}}(\Omega)$ and $L^{\frac{d}{2},\infty}_{{\rm loc}}(\Omega)$ (the weak type $d/2$ Lorentz space), respectively. Below $\|\cdot\|_{p,\infty}$ denotes the weak type $p$ Lorentz norm.
One has
\begin{equation}
\label{incll}
L^{\frac{d}{2}}_{{\rm loc}}(\Omega) \subsetneq \bigcap_{\beta>0}\mathcal F^d_{\beta,{\rm loc}},
\end{equation}
\begin{equation}
\label{incllweak}
L^{\frac{d}{2},\infty}_{{\rm loc}}(\Omega) \subsetneq \bigcup_{\beta>0}\mathcal F^d_{\beta,{\rm loc}}.
\end{equation}
The first inclusion follows straightforwardly from the Sobolev embedding theorem. To prove the second inclusion, note first that
\begin{equation*}
\|\mathbf{1}_{B(x_0,\rho)} |V|^{\frac{d-1}{4}}(-\Delta)^{-\frac{d-1}{2}}|V|^{\frac{d-1}{4}}\mathbf{1}_{B(x_0,\rho)}\|_{2 \mapsto 2}= \|\mathbf{1}_{B(x_0,\rho)}|V|^{\frac{d-1}{4}}(-\Delta)^{-\frac{d-1}{4}}\|_{2 \mapsto 2}^2.
\end{equation*}
Next, if $V \in L^{d/2,\infty}$, then
\begin{equation}
\label{strichineq}
\|\mathbf{1}_{B(x_0,\rho)}|V|^{\frac{d-1}{4}}(-\Delta)^{-\frac{d-1}{4}}\|_{2 \mapsto 2} \leq \left(\frac{2d^{-1}\pi^{\frac{d}{2}}c_{\frac{1}{2}}}{\Gamma\left(\frac{d}{2} \right)c_{\frac{d}{2}}}\right) \|\mathbf{1}_{B(x_0,\rho)} V\|_{\frac{d}{2},\infty}^{\frac{d-1}{4}},
\end{equation}
which is a special case of the Strichartz inequality with sharp constants proved in \cite{KPS}. The required inclusion follows.
To see that the latter inclusion is strict we introduce a family of potentials
\begin{equation}
\label{steinV}
V(x):=\frac{C\bigl(\mathbf{1}_{B(1+\delta)}(x)-\mathbf{1}_{B(1-\delta)}(x)\bigr)}{\bigl(|x|-1\bigr)^{\frac{2}{d-1}}\left(-\ln \bigl||x|-1\bigr| \right)^b}, \quad \text{ where }b>\frac{2}{d-1}, \quad 0<\delta<1.
\end{equation}
A straightforward computation shows that
$V \in \mathcal F^d_{\beta,{\rm loc}}$, as well as
$V \in L_{{\rm loc}}^{\frac{d-1}{2}}(\Omega) \setminus L_{{\rm loc}}^{\frac{d-1}{2}+\varepsilon}(\Omega)$ for any $\varepsilon>0$, so that $V \not \in L_{{\rm loc}}^{\frac{d}{2},\infty}(\Omega)$.
\vspace*{2mm}
The result in \cite{Ste} can be formulated as follows.
Suppose that $d \geq 3$ and $V \in L^{\frac{d}{2},\infty}_{{\rm loc}}(\Omega)$. There exists a sufficiently small constant $\beta$ such that if
\begin{equation*}
\sup_{x_0 \in \Omega}\overline{\lim\limits_{\rho \to 0}}\|\mathbf{1}_{B_K(x_0,\rho)}V\|_{\frac{d}{2},\infty} \leq \beta,
\end{equation*}
then (\ref{diffineq}) has the SUC property in
$Y_V:=H^{2,\bar{p}}_{{\rm loc}}(\Omega)$, where $\bar{p}:=\frac{2d}{d+2}$. (It is known that the assumption that $\beta$ is sufficiently small cannot be omitted, see
\cite{KT}.)
In view of (\ref{incll}), (\ref{incllweak}), the results in \cite{Ste} and in \cite{JK} follow from Theorem \ref{mainthm2} provided that we show $|V|^{\frac{1}{2}}u \in X_2$. Indeed, let $L^{q,p}$ be the $(q,p)$ Lorentz space (see \cite{SW}). By Sobolev embedding theorem for Lorentz spaces $H_{{\rm loc}}^{2,\bar{p}}(\Omega) \hookrightarrow L_{{\rm loc}}^{\bar{q},\bar{p}}(\Omega)$ with $\bar{q}:=\frac{2d}{d-2}$ \cite{SW}. Hence, by H\"{o}lder inequality in Lorentz spaces $|V|^{\frac{1}{2}}u \in X_2$ whenever $u \in L_{{\rm loc}}^{\bar{q},\bar{p}}(\Omega)$ and $V \in L_{{\rm loc}}^{d/2,\infty}$. Also, $H_{{\rm loc}}^{2,\bar{p}}(\Omega) \hookrightarrow H_{{\rm loc}}^{1,\bar{p}}(\Omega)$, so $H_{{\rm loc}}^{2,\bar{p}}(\Omega) \subset Y^{\rm {str}}_V,$
as required.
\vspace*{2mm}
2) E.T.~Sawyer \cite{Saw} proved unique continuation in the case $d=3$ for potentials $V$ from the local Kato class
\begin{equation*}
\mathcal K_{\beta,{\rm loc}}:=\{W \in L^{1}_{{\rm loc}}(\Omega): \sup_K\overline{\lim\limits_{\rho \to 0}} \sup_{x_0 \in K} \|(-\Delta)^{-1} \mathbf{1}_{B_K(x_0,\rho)} |W|\|_{\infty} \leq \beta\},
\end{equation*}
where $K$ is a compact subset of $\Omega$.
It is easy to see that
\begin{equation*}
\mathcal K_{\beta,{\rm loc}} \subsetneq F_{\beta,{\rm loc}}.
\end{equation*}
To see that the latter inclusion is strict consider, for instance, the potential
\begin{equation*}
V_{\beta}(x):=\beta v_0, \quad v_0:=\left(\frac{d-2}{2}\right)^2|x|^{-2}.
\end{equation*}
By Hardy's inequality, $V_\beta \in F_{\beta,{\rm loc}}$. At the same time, $\|(-\Delta)^{-1} v_0\mathbf{1}_{B(\rho)}\|_{\infty}=\infty$ for all $\rho>0$, hence $V_{\beta} \not \in \mathcal K_{\beta,{\rm loc}}$ for all $\beta \ne 0$.
The next statement is essentially due to E.T.~Sawyer \cite{Saw}.
\begin{theorem}
\label{sawthm}
Let $d=3$. There exists a constant $\beta<1$ such that if $V \in \mathcal K_{\beta,{\rm loc}}$ then (\ref{diffineq}) has the WUC property in $Y_V^{\mathcal K}:=\{f \in X_1: \Delta f \in X_1,~Vf \in X_1\}$.
\end{theorem}
The proof of Theorem \ref{sawthm} is provided in Section \ref{exsect}.
Despite the embedding $\mathcal K_{\beta,{\rm loc}} \hookrightarrow F_{\beta,{\rm loc}}$, Theorem \ref{mainthm} does not imply Theorem \ref{sawthm}. The reason is simple:
$Y_V^{\mathcal K} \not\subset Y_V^{\rm {weak}}$.
\vspace*{2mm}
3) S.~Chanillo and E.T.~Sawyer showed in \cite{SawChan} the validity of the SUC property for (\ref{diffineq}) in $Y_V=H^{2,2}_{{\rm loc}}(\Omega)$ ($d \geq 3$) for potentials $V$ locally small in Campanato-Morrey class $M^p$ ($p>\frac{d-1}{2}$),
\begin{equation*}
M^p:=\{W \in L^p: \|W\|_{M^p}:=\sup_{x \in \Omega,~r>0}r^{2-\frac{d}{p}}\|\mathbf{1}_{B(x,r)} W\|_p<\infty\}.
\end{equation*}
Note that for $p>\frac{d-1}{2}$
\begin{equation*}
M^p_{{\rm loc}} \subsetneq \bigcup_{\beta>0} \mathcal F_{\beta,{\rm loc}}^d
\end{equation*}
(see \cite{SawChan,F,KS}).
To see that the above inclusion is strict one may consider, for instance, the potential defined in (\ref{steinV}).
It is easy to see, using H\"{o}lder inequality, that if $u \in H^{2,2}_{{\rm loc}}(\Omega)$ and $V \in M^p_{{\rm loc}}$ ($p>\frac{d-1}{2}$), then $|V|^{\frac{1}{2}}u \in X_2$, i.e., $u \in Y_V^{\rm {weak}}$. However, the assumption `$u \in H^{2,2}_{{\rm loc}}$' is in general too restrictive for application of this result to the problem of absence of positive eigenvalues (see Remark \ref{proprem}).
\begin{remark}
\label{proprem}
Below we make several comments about the $H^{2,q}$-properties of the eigenfunctions of the self-adjoint Schr\"{o}dinger operator $H=(-\Delta \dotplus V_+)\dotplus (-V_-)$, $V=V_+-V_-$, defined by (\ref{formsum}) under the assumption that condition
\begin{equation}
\label{formineq3}
V_- \leq \beta (H_0 \dotplus V_+)+c_{\beta}, \quad \beta<1, c_\beta<\infty
\end{equation}
is satisfied. (Note that (\ref{formineq3}) implies condition (\ref{formineq}). We say that (\ref{formineq3}) is satisfied with $\beta=0$ if (\ref{formineq3}) holds for any $\beta>0$ arbitrarily close to 0, for an appropriate $c_{\beta}<\infty$.)
Let $u \in D(H)$ and $Hu=\mu u$. Then
\begin{equation*}
e^{-tH}u=e^{-t\mu}u, \quad t>0.
\end{equation*}
As is shown in \cite{LS},
for every $2 \leq q< \frac{2d}{d-2}\frac{1}{1-\sqrt{1-\beta}}$ there exists a constant $c=c(q,\beta)>0$ such that
\begin{equation}
\label{LSineq}
\|e^{-tH}f\|_q \leq ct^{-\frac{d}{2}\left(\frac{1}{2}-\frac{1}{q} \right)}\|f\|_2,
\end{equation}
where $f \in L^2=L^2(\mathbb R^d)$. Let us now consider several possible $L^p$ and $L^{p,\infty}$ (as well as $L^p_{{\rm loc}}$ and $L^{p,\infty}_{{\rm loc}}$) conditions on the potential $V$. In each case, the corresponding result on $H^{2,q}$-properties of the eigenfunction $u$ immediately implies the inclusion $|V|^{\frac{1}{2}}u \in L^2$ (respectively, $|V|^{\frac{1}{2}}u \in X_2$) (cf. $D(H)$ and $Y_V^{\rm {weak}}$).
\vspace*{2mm}
(A) Suppose in addition to (\ref{formineq3}) that $V \in L^{\frac{d-1}{2}}_{{\rm loc}}$. Then, by H\"{o}lder's inequality and (\ref{LSineq}), $Vu \in L^q_{{\rm loc}}$ and, due to the inclusion $D(H) \subset D(H_{\max})$, $\Delta u \in L^q_{{\rm loc}}$ for any $q$ such that
\begin{equation*}
\frac{1}{q}>\frac{2}{d-1}+\frac{d-2}{d}\frac{1-\sqrt{1-\beta}}{2}.
\end{equation*}
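Indeed, by (\ref{LSineq}) the eigenfunction $u=e^{t\mu}e^{-tH}u$ belongs to $L^{q'}$ for every $q'$ with $\frac{1}{q'}>\frac{d-2}{2d}\bigl(1-\sqrt{1-\beta}\bigr)$, so an application of H\"{o}lder's inequality, with the exponent $\frac{d-1}{2}$ on the factor $V$, gives $Vu \in L^q_{{\rm loc}}$ whenever
\begin{equation*}
\frac{1}{q}=\frac{2}{d-1}+\frac{1}{q'}>\frac{2}{d-1}+\frac{d-2}{d}\,\frac{1-\sqrt{1-\beta}}{2}.
\end{equation*}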
The latter implies that $q<2$ in general, i.e., when $\beta$ in (\ref{formineq3}) is close to $1$. Hence, in general the assumption `$u \in H_{{\rm loc}}^{2,2}$' is too restrictive for applications to the problem of absence of positive eigenvalues even under an additional hypothesis
of the type $V \in L^p_{{\rm com}}$, $\frac{d-1}{2}<p<\frac{d}{2}$,
or $V \in M^p_{{\rm com}}$, $\frac{d-1}{2}<p<\frac{d}{2}$ (cf. \cite{SawChan,RV}).
\vspace*{2mm}
(B1) If $V=V_1+V_2 \in L^p+L^\infty$, $p>\frac{d}{2}$, then (\ref{formineq3}) holds with $\beta=0$ and $u \in L^\infty$. Moreover, it follows that $u \in C^{0,\alpha}$ for any $\alpha \in (0,1-\frac{2}{d}]$. Therefore, $u \in H_{{\rm loc}}^{2,p}$ and, in particular, for $d \geq 4$, $u \in H^{2,2}$.
\vspace*{2mm}
(B2) Assume in addition to (\ref{formineq3}) that $V \in L^p_{{\rm loc}}$, $p>\frac{d}{2}$, and $\beta=0$. Then $u \in H^{2,\underline{p}}_{{\rm loc}}$, $\underline{p}>\frac{d}{2}$. If $d=3$, and $p>\frac{d}{2}$ is close to $\frac{d}{2}$, then $u$ need not belong to $H^{2,2}_{{\rm loc}}$, but of course $u \in H^{2,\bar{p}}_{{\rm loc}}$, $\bar{p}=\frac{2d}{d+2}$~($<\underline{p}$).
\vspace*{2mm}
(B3) If $V=V_1+V_2 \in L^{\frac{d}{2}}+L^\infty$, then (\ref{formineq3}) is satisfied with $\beta=0$ and $u \in \cap_{2 \leq r<\infty} L^r$. Therefore $u \in H^{2,q}_{{\rm loc}}$, $q<\frac{d}{2}$ (cf. Remark in \cite{ABG}). In particular, $u \in H^{2,\bar{p}}_{{\rm loc}}$ (cf. \cite{JK}). Moreover, for $d \geq 5$ it follows that $u \in H^{2,2}_{{\rm loc}}$.
\vspace*{2mm}
(B4) Finally, suppose that $V=V_1+V_2 \in L^{\frac{d}{2},\infty}+L^\infty$ is such that
$$\beta:=\left(\frac{d^{-1}\pi^{\frac{d}{2}} \Gamma \left(\frac{d}{4}-\frac{1}{2} \right)}{\Gamma \left(\frac{d}{2}\right)\Gamma \left(\frac{d}{4}+\frac{1}{2} \right)}\right) \|V_1\|_{\frac{d}{2},\infty}<1.$$
Then we have
\begin{equation}
\label{formineq4}
|V| \leq \beta H_0+c_{\beta}, \quad c_\beta<\infty
\end{equation}
and, at the same time,
\begin{equation*}
\|V(\lambda+H_{0,\bar{p}})^{-1}\|_{\bar{p} \mapsto \bar{p}} \leq \beta, \quad \lambda \geq \frac{c_\beta}{\beta}
\end{equation*}
(see \cite{KPS}),
where $H_{0,\bar{p}}$ stands for the extension of $-\Delta$ in $L^{\bar{p}}$ with $D(H_{0,\bar{p}})=H^{2,\bar{p}}$.
The first inequality implies condition (\ref{formineq3}) and, hence, allows us to conclude that the form sum $H:=H_0\dotplus V$ is well defined. In turn, the second inequality implies the existence of the algebraic sum $\hat{H}_{\bar{p}}:=H_{0,\bar{p}}+V$ defined in $L^{\bar{p}}$ with $D(\hat{H}_{\bar{p}})=H^{2,\bar{p}}$, which
coincides with $H$ on the intersection of domains $D(H) \cap H^{2,\bar{p}}$. By making use of the representation
\begin{equation*}
(\lambda+\hat{H}_{\bar{p}})^{-1}=(\lambda+H_{0,\bar{p}})^{-1}(1+V(\lambda+H_{0,\bar{p}})^{-1})^{-1}
\end{equation*}
one immediately obtains that $(\lambda+\hat{H}_{\bar{p}})^{-1}:L^{\bar{p}} \mapsto L^2$, i.e., any eigenfunction of operator $\hat{H}_{\bar{p}}$ belongs to $L^2$. Furthermore, an analogous representation for $(\lambda+H)^{-1}$ yields the identity
\begin{equation*}
(\lambda+H)^{-1}f=(\lambda+\hat{H}_{\bar{p}})^{-1}f, \quad f \in L^2 \cap L^{\bar{p}}.
\end{equation*}
Therefore, any eigenfunction of $\hat{H}_{\bar{p}}$ is an eigenfunction of $H$ (cf. \cite{Ste}). The converse statement is valid, e.g., for eigenfunctions having compact support.
If $V \in L^{\frac{d}{2},\infty}_{{\rm loc}}$ and (\ref{formineq4}) holds, then $u \in H^{2,q_0}_{{\rm loc}}$ for some $q_0>\bar{p}$. Indeed, we have $V \in L^r_{{\rm loc}}$ for any $r<\frac{d}{2}$, and so by (\ref{LSineq}) $u \in L^p$ for some $p>\frac{2d}{d-2}$. Thus, $Vu \in L^{q_0}_{{\rm loc}}$ for a certain $q_0>\bar{p}$ and, hence, $u \in H^{2,q_0}_{{\rm loc}}$. The latter confirms that the result in \cite{Ste} applies to the problem of absence of positive eigenvalues.
\end{remark}
\section{Proofs of Theorems \ref{mainthm} and \ref{mainthm2}}
\label{mainsect}
Let us introduce some notation.
In what follows, we omit the index $K$ in $B_K(x_0,\rho)$ and write simply $B(x_0,\rho)$.
\vspace*{2mm}
For $W \in X_{\frac{d-1}{2}}$, $x_0 \in \Omega$, $\rho>0$ and $d \geq 3$, define
\begin{equation}
\label{taudef}
\tau(W,x_0,\rho):=\|\mathbf{1}_{B(x_0,\rho)} |W|^{\frac{d-1}{4}}(-\Delta)^{-\frac{d-1}{2}}|W|^{\frac{d-1}{4}}\mathbf{1}_{B(x_0,\rho)}\|_{2 \mapsto 2}.
\end{equation}
Let
$\mathbf{1}_{B(\rho \setminus a)}$ be the characteristic function of the set $B(0,\rho) \setminus B(0,a)$, where $0<a<\rho$, and
\begin{equation*}
N_{d}^\delta:=N+\left(\frac{d}{2}-\delta\right)\frac{d-3}{d-1}.
\end{equation*}
We define the integral operator
\begin{equation*}
\left[(-\Delta)^{-\frac{z}{2}}\right]_N f(x):=\int_{\mathbb R^d} \left[(-\Delta)^{-\frac{z}{2}}\right]_N(x,y) f(y)dy, \quad 0 \leq \mbox{Re}(z) \leq d-1
\end{equation*}
whose kernel $\bigl[(-\Delta)^{-\frac{z}{2}}\bigr]_N(x,y)$ is defined by subtracting the Taylor polynomial of degree $N-1$ at $x=0$ of the function $x \mapsto |x-y|^{z-d}$,
\begin{equation*}
\left[(-\Delta)^{-\frac{z}{2}}\right]_N(x,y):=c_z \left( |x-y|^{z-d} -
\sum_{k=0}^{N-1} \frac{(x \cdot \nabla)^k}{k!}|0-y|^{z-d}
\right),
\end{equation*}
where $(x \cdot \nabla)^k:=\sum_{|\alpha|=k} \frac{k!}{\alpha_1!\dots\alpha_d!}x^\alpha \frac{\partial^k}{\partial x_1^{\alpha_1}\dots \partial x_d^{\alpha_d}}$ is the multinomial expansion of the $k$-th power of $(x \cdot \nabla)$.
Define, further,
\begin{equation*}
\left[(-\Delta)^{-\frac{z}{2}}\right]_{N,t}:=\varphi_t\left[(-\Delta)^{-\frac{z}{2}}\right]_N\varphi_t^{-1},
\end{equation*}
where $\varphi_{t}(x):=|x|^{-t}$.
Note that if $V$ is a potential from our class $\mathcal F^d_{\beta,{\rm loc}}$, and $V_1:=|V|+1$, then for a fixed $x_0 \in \Omega$
\begin{equation}
\label{onerel}
\tau(V_1,x_0,\rho) \leq \tau(V,x_0,\rho)+\varepsilon(\rho),
\end{equation}
where $\varepsilon(\rho) \to 0$ as $\rho \to 0$.
\subsection{Proof of Theorem \ref{mainthm}}Our proof is based on the inequalities of Proposition \ref{ourlem} and Lemma \ref{sawlem1}.
\begin{proposition}
\label{ourlem}
If $\tau(V,0,\rho)<\infty$, then there exists a constant $C=C(\rho,\delta,d)>0$ such that
\begin{equation*}
\|\mathbf{1}_{B(\rho \setminus a)}|V|^{\frac{1}{2}}\left[(-\Delta)^{-1}\right]_{N,N_{d}^\delta}|V|^{\frac{1}{2}}\mathbf{1}_{B(\rho \setminus a)}\|_{2 \mapsto 2} \leq C\tau(V,0,\rho)^{\frac{1}{d-1}},
\end{equation*}
where $0<\delta<1/2$, for all positive integers $N$.
\end{proposition}
\begin{lemma}
\label{sawlem1}
There exists a constant $C=C(d)$ such that
\begin{equation*}
\left|\left[(-\Delta)^{-1}\right]_N(x,y) \right|
\leq
C N^{d-3}\left(\frac{|x|}{|y|} \right)^N (-\Delta)^{-1}(x,y)
\end{equation*}
for all $x$, $y \in \mathbb R^d$ and all positive integers $N$.
\end{lemma}
The proof of Proposition \ref{ourlem} in the case $d=3$ follows immediately from Lemma \ref{sawlem1}, which is a simple consequence of Lemma 1 in \cite{Saw} (i.e., Lemma \ref{sawlem2} below for $\gamma=0$). In the case $d \geq 4$ we prove Proposition \ref{ourlem} using Stein's interpolation theorem and the estimates of Lemma \ref{JKlem}, which is due to
D.~Jerison and C.~Kenig \cite{JK}, and Lemma \ref{sawlem2}, which generalizes the inequalities considered in \cite{Saw} and \cite{Ste} (cf. Lemma 1 in \cite{Saw} and Lemma 5 in \cite{Ste}).
\begin{lemma}[\cite{JK}]
\label{JKlem}
There exist constants $C_2=C_2(\rho_1,\rho_2,\delta,d)$ and $c_2=c_2(\rho_1,\rho_2,\delta,d)>0$ such that
\begin{equation*}
\|\mathbf{1}_{B(\rho_1 \setminus a)} \left[(-\Delta)^{-i\gamma}\right]_{N,N+\frac{d}{2}-\delta} \mathbf{1}_{B(\rho_2 \setminus a)}\|_{2 \mapsto 2} \leq C_2 e^{c_2|\gamma|},
\end{equation*}
where $0<\delta<1/2$, for all $\gamma \in \mathbb R$ and all positive integers $N$.
\end{lemma}
\begin{lemma}
\label{sawlem2}
There exist constants $C_1=C_1(d)$ and $c_1=c_1(d)>0$ such that
\begin{equation*}
\left|\left[(-\Delta)^{-\frac{d-1+i\gamma}{2}}\right]_N(x,y) \right| \leq C_1e^{c_1\gamma^2} \left(\frac{|x|}{|y|} \right)^N (-\Delta)^{-\frac{d-1}{2}}(x,y)
\end{equation*}
for all $x$, $y \in \mathbb R^d$, all $\gamma \in \mathbb R$ and all positive integers $N$.
\end{lemma}
We prove Lemma \ref{sawlem2} at the end of this section.
\begin{proof}[Proof of Proposition \ref{ourlem}]
In the case $d=3$ the result follows immediately from Lemma \ref{sawlem1}, proved in \cite{Saw}.
In the case $d \geq 4$ the proof can be obtained, using Lemmas \ref{JKlem} and \ref{sawlem2}, by making use of Stein's interpolation theorem (see, e.g., \cite{SW}). Indeed, consider the operator-valued function
\begin{equation*}
F(z):=\mathbf{1}_{B(\rho \setminus a)} |V|^{\frac{d-1}{4}z}\varphi_{N+\left(\frac{d}{2}-\delta\right)(1-z)} \left[(-\Delta)^{-\frac{d-1}{2}z}\right]_N\varphi_{N+\left(\frac{d}{2}-\delta\right)(1-z)}^{-1}|V|^{\frac{d-1}{4}z}\mathbf{1}_{B(\rho \setminus a)}
\end{equation*}
defined on the strip $\{z \in \mathbb C: 0 \leq \mbox{Re}(z) \leq 1\}$.
By Lemma \ref{JKlem},
\begin{equation*}
\|F(i\gamma)\|_{2 \mapsto 2} \leq C_2e^{c_2|\gamma|}, \quad \gamma \in \mathbb R,
\end{equation*}
and by Lemma \ref{sawlem2} and the definition of $\tau(V,0,\rho)$ (see (\ref{taudef}))
\begin{equation*}
\|F(1+i\gamma)\|_{2 \mapsto 2} \leq \tau(V,0,\rho) C_1e^{c_1\gamma^2}, \quad \gamma \in \mathbb R.
\end{equation*}
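Note also that at the interior point $z=\frac{2}{d-1}$ one has $|V|^{\frac{d-1}{4}z}=|V|^{\frac{1}{2}}$, $(-\Delta)^{-\frac{d-1}{2}z}=(-\Delta)^{-1}$ and
\begin{equation*}
N+\left(\frac{d}{2}-\delta\right)\left(1-\frac{2}{d-1}\right)=N+\left(\frac{d}{2}-\delta\right)\frac{d-3}{d-1}=N_{d}^\delta,
\end{equation*}
so that $F\bigl(\frac{2}{d-1}\bigr)$ coincides with the operator $\mathbf{1}_{B(\rho \setminus a)}|V|^{\frac{1}{2}}\left[(-\Delta)^{-1}\right]_{N,N_{d}^\delta}|V|^{\frac{1}{2}}\mathbf{1}_{B(\rho \setminus a)}$ estimated in the proposition.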
Together with the obvious analyticity properties of $F$ this implies that
$F$ satisfies all conditions of Stein's interpolation theorem. In particular, $F\bigl(\frac{2}{d-1}\bigr)$ is bounded as an operator $L^2 \mapsto L^2$,
which completes the proof of Proposition \ref{ourlem}.
\end{proof}
\begin{proof}[Proof of Theorem \ref{mainthm}]
Let $u \in Y_V^{\rm {weak}}$. Without loss of generality, we may assume that $u \equiv 0$ on $B(0,a)$ for $a>0$ sufficiently small, so that there exists $\rho>a$ with $\rho<1$ and $\bar{B}(0,3\rho) \subset \Omega$.
In order to prove that $u$ vanishes on $\Omega$ it suffices to show that $u \equiv 0$ on $B(0,\rho)$ for any such $\rho$.
Let $\eta \in C_0^\infty(\Omega)$ be such that $0 \leq \eta \leq 1$, $\eta \equiv 1$ on $B(0,2\rho)$, $\eta \equiv 0$ on $\Omega \setminus B(0,3\rho)$, $|\nabla \eta| \leq \frac{c}{\rho}$, $|\Delta \eta| \leq \frac{c}{\rho^2}$.
Let $E_\eta(u):=2 \nabla \eta \nabla u+u\Delta \eta \in X_1$. Denote $u_\eta:=u\eta$. Since $\mathcal L^{2,1}_{{\rm loc}}(\Omega) \subset H_{{\rm loc}}^{1,p}(\Omega)$, $p<\frac{d}{d-1}$, we have $E_\eta(u) \in L_{{\rm com}}^1(\Omega)$
and hence
\begin{equation*}
\Delta u_\eta=\eta\Delta u+E_\eta(u)
\end{equation*}
implies
$\Delta u_\eta \in L_{{\rm com}}^1(\Omega)$. Thus, we can write
\begin{equation*}
u_\eta=(-\Delta)^{-1}(-\Delta u_\eta).
\end{equation*}
The standard limiting argument (involving consideration of $C^\infty_0$-mollifiers, subtraction of the Taylor polynomial of degree $N-1$ at
$0$ of the function $u_\eta$, and interchanging the order of differentiation and integration) allows us to conclude further
\begin{equation}
\label{identity}
u_\eta=[(-\Delta)^{-1}]_N(-\Delta u_\eta).
\end{equation}
Let us denote $\mathbf{1}_{B(\rho)}^c:=1-\mathbf{1}_{B(\rho)}$, so that $\Delta u_\eta=(\mathbf{1}_{B(\rho \setminus a)}+\mathbf{1}_{B(\rho)}^c) \Delta u_\eta$. Observe that
\begin{equation*}
{\rm supp}~\eta\Delta u\subset \bar{B}(0,3\rho) \setminus B(0,a), \quad {\rm supp}~ E_\eta(u) \subset \bar{B}(0,3\rho) \setminus B(0,2\rho)
\end{equation*}
and, thus, $\mathbf{1}_{B(\rho)}^c \eta\Delta u=\mathbf{1}_{B(3\rho \setminus \rho)} \Delta u$, $\mathbf{1}_{B(\rho)}^c E_\eta(u)=\mathbf{1}_{B(3\rho \setminus 2\rho)} E_\eta(u)$.
Identity (\ref{identity}) then implies
\begin{multline}
\notag
\mathbf{1}_{B(\rho)} V_1^{\frac{1}{2}}\varphi_{N_d^{\delta}} u=\mathbf{1}_{B(\rho)} V_1^{\frac{1}{2}}[(-\Delta)^{-1}]_{N,N_{d}^\delta}V_1^{\frac{1}{2}}\mathbf{1}_{B(\rho \setminus a)}\varphi_{N_{d}^\delta}\frac{-\Delta u}{V_1^{\frac{1}{2}}}+\\+\mathbf{1}_{B(\rho)} V_1^{\frac{1}{2}}[(-\Delta)^{-1}]_{N,N_{d}^\delta} V_1^{\frac{1}{2}}\mathbf{1}^c_{B(\rho)}\varphi_{N_{d}^\delta}\frac{-\eta \Delta u }{V_1^{\frac{1}{2}}}+\\+
\mathbf{1}_{B(\rho)} V_1^{\frac{1}{2}}[(-\Delta)^{-1}]_{N,N_{d}^\delta} \mathbf{1}_{B(3\rho \setminus 2\rho)}\varphi_{N_{d}^\delta}(-E_\eta(u))
\end{multline}
(we assume that $0<\delta<1/2$ is fixed throughout the proof)
or, letting $I$ denote the left-hand side and $I_1$,
$I_1^c$, $I_2$ denote, respectively, the three summands of the right-hand side of the last
equality, we rewrite the latter as
\begin{equation*}
I=I_1+I_1^c+I_2.
\end{equation*}
We would like to emphasize that a priori $I \not\in L^2$, but only $I \in L^s$, $s<d/(d-2)$. Hence, in the case that $d \geq 4$ we must first prove that $I_1$, $I_1^c$ and $I_2$ are in $L^2$, so that $I \in L^2$ as well.
Below we establish the estimates $\|I_1^c\|_2 \leq c_1 \varphi_{N_d^\delta}(\rho)$, $\|I_2\|_2 \leq c_2 \varphi_{N_d^\delta}(\rho)$ and $\|I_1\|_2 \leq \alpha \|I\|_2$ with $\alpha<1$. Together these give
$(1-\alpha) \|I\|_2 \leq (c_1+c_2) \varphi_{N_d^\delta}(\rho)$, and therefore
\begin{equation*}
\left\|\mathbf{1}_{B(\rho \setminus a)} \frac{\varphi_{N_d^\delta}}{\varphi_{N_d^\delta}(\rho)}u\right\|_2 \leq \frac{c_1+c_2}{1-\alpha}.
\end{equation*}
Letting $N \to \infty$ and noting that $\frac{\varphi_{N_d^\delta}(x)}{\varphi_{N_d^\delta}(\rho)}=\bigl(\frac{\rho}{|x|}\bigr)^{N_d^\delta} \to \infty$ for every $x$ with $0<|x|<\rho$, we derive the identity $u \equiv 0$ in $B(0,\rho)$.
1)
\textit{Proof of $I_1 \in L^2$ and $\|I_1\|_2 \leq \alpha \|I\|_2$, $\alpha<1$.}
Observe that $$\mathbf{1}_{B(\rho \setminus a)}\frac{|\Delta u|}{V_1^{1/2}} \leq \mathbf{1}_{B(\rho)}\frac{|V||u|}{V_1^{1/2}} \leq \mathbf{1}_{B(\rho)}|V|^{1/2}|u| \in X_2 \quad (\text{since } u \in Y_V^{\rm {weak}}),$$
and hence, according to Proposition \ref{ourlem},
\begin{equation*}
\|I_1\|_{2} \leq \left\|\mathbf{1}_{B(\rho \setminus a)} V_1^{\frac{1}{2}}[(-\Delta)^{-1}]_{N,N_{d}^\delta}V_1^{\frac{1}{2}}\mathbf{1}_{B(\rho \setminus a)}\right\|_{2 \mapsto 2}\left\|\mathbf{1}_{B(\rho)}\varphi_{N_{d}^\delta}|V|^{\frac{1}{2}} u\right\|_{2} \leq \beta_1 \|\mathbf{1}_{B(\rho)} \varphi_{N_{d}^\delta}|V|^{\frac{1}{2}} u\|_{2}.
\end{equation*}
Here
$\beta_1:=C \tau(V_1,0,\rho)^{\frac{1}{d-1}}$,
where $C$ is the constant in the formulation of Proposition \ref{ourlem}. We may assume that $\beta_1<1$ (see (\ref{onerel})).
2) \textit{Proof of $\|I_1^c\|_2 \leq c_1 \varphi_{N_d^\delta}(\rho)$.} By Proposition \ref{ourlem},
\begin{multline}
\notag
\|I_1^c\|_{2} \leq \left\|\mathbf{1}_{B(\rho \setminus a)} V_1^{\frac{1}{2}}[(-\Delta)^{-1}]_{N,N_{d}^\delta}V_1^{\frac{1}{2}}\mathbf{1}_{B(3\rho \setminus \rho)}\right\|_{2 \mapsto 2}\left\|\mathbf{1}_{B(\rho)}^c\varphi_{N_{d}^\delta}|V|^{\frac{1}{2}} u\right\|_{2} \leq \\ \leq \beta_2 \varphi_{N_{d}^\delta}(\rho)\|\mathbf{1}_{B(3\rho)} |V|^{1/2}u\|_2,
\end{multline}
where
$\beta_2:=C \tau(V_1,0,3\rho)^{\frac{1}{d-1}}<\infty$. \\ [-3mm]
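In the last step we used the support property ${\rm supp}~\eta \subset \bar{B}(0,3\rho)$, the differential inequality (\ref{diffineq}), and the bound $\varphi_{N_{d}^\delta}(y)=|y|^{-N_{d}^\delta} \leq \varphi_{N_{d}^\delta}(\rho)$ for $|y| \geq \rho$, which together give
\begin{equation*}
\left\|\mathbf{1}_{B(\rho)}^c\varphi_{N_{d}^\delta}\frac{\eta|\Delta u|}{V_1^{\frac{1}{2}}}\right\|_{2} \leq \varphi_{N_{d}^\delta}(\rho)\left\|\mathbf{1}_{B(3\rho)} |V|^{\frac{1}{2}}u\right\|_2.
\end{equation*}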
3) \textit{Proof of $\|I_2\|_2 \leq c_2 \varphi_{N_d^\delta}(\rho)$.} We need to derive an estimate of the form
\begin{equation*}
\|I_2\|_2 \leq C \varphi_{N_{d}^\delta}(\rho) \|E_\eta(u)\|_1,
\end{equation*}
where $C$ can depend on $d$, $\delta$, $a$, $\rho$, $\|\mathbf{1}_{B(\rho)} V\|_1$, but not on $N$. We have
\begin{multline}
\notag
\|I_2\|_2 \leq \left\|\mathbf{1}_{B(\rho \setminus a)} V_1^{1/2} [(-\Delta)^{-1}]_{N,N_{d}^\delta} \mathbf{1}_{B(3\rho \setminus 2\rho)}\right\|_{1 \mapsto 2}\left\|\mathbf{1}_{B(3\rho \setminus 2\rho)} \varphi_{N_{d}^\delta} E_\eta(u)\right\|_1 \leq \\ \left\|\mathbf{1}_{B(\rho \setminus a)} V_1^{1/2} [(-\Delta)^{-1}]_{N,N_{d}^\delta} \mathbf{1}_{B(3\rho \setminus 2\rho)}\right\|_{1 \mapsto 2} 2^{-N}\varphi_{N_{d}^\delta}(\rho) \left\|E_\eta(u)\right\|_1.
\end{multline}
Now for $h \in L^1(\mathbb R^d)$, by virtue of Lemma \ref{sawlem1},
\begin{multline}
\notag
\|\mathbf{1}_{B(\rho \setminus a)} V_1^{1/2} [(-\Delta)^{-1}]_{N,N_{d}^\delta} \mathbf{1}_{B(3\rho \setminus 2\rho)} h\|_2 \leq \\ \leq \|\mathbf{1}_{B(\rho)} V_1^{1/2}\|_2 \|\mathbf{1}_{B(\rho \setminus a)} [(-\Delta)^{-1}]_{N,N_d^\delta} \mathbf{1}_{B(3\rho \setminus 2\rho)} h\|_{\infty} \leq \\
\leq \|\mathbf{1}_{B(\rho)} V_1^{1/2}\|_2 CN^{d-3} \varphi_{\left(\frac{d}{2}-\delta\right)\frac{d-3}{d-1}}(a)\varphi_{\left(\frac{d}{2}-\delta\right)\frac{d-3}{d-1}}^{-1}(3\rho) \|\mathbf{1}_{B(\rho)} (-\Delta)^{-1} \mathbf{1}_{B(3\rho \setminus 2\rho)}h\|_\infty \leq \\ \leq
(\|\mathbf{1}_{B(\rho)}\|_1+\|\mathbf{1}_{B(\rho)} V\|_1)^{1/2} CN^{d-3} \left(\frac{3\rho}{a} \right)^{\left(\frac{d}{2}-\delta\right)\frac{d-3}{d-1}} M_{\rho},
\end{multline}
where
\begin{equation*}
M_{\rho}:= C_2 {\rm esssup}_{x \in B(0,\rho)} \int_{2\rho \leq |y| \leq 3\rho} |x-y|^{2-d}|h(y)|dy \leq C_2 \rho^{2-d}\|h\|_1.
\end{equation*}
Therefore
\begin{multline}
\notag
\left\|\mathbf{1}_{B(\rho \setminus a)} V_1^{1/2} [(-\Delta)^{-1}]_{N,N_{d}^\delta} \mathbf{1}_{B(3\rho \setminus 2\rho)}\right\|_{1 \mapsto 2} \leq \\ \leq (\|\mathbf{1}_{B(\rho)}\|_1+\|\mathbf{1}_{B(\rho)} V\|_1)^{1/2} CC_2 N^{d-3} \left(\frac{3\rho}{a} \right)^{\left(\frac{d}{2}-\delta\right)\frac{d-3}{d-1}}\rho^{2-d}.
\end{multline}
Hence, there exists a constant $\hat{C}=\hat{C}(d,\delta,a,\rho,\|\mathbf{1}_{B(\rho)} V\|_1)$ such that
\begin{equation*}
\|I_2\|_2 \leq \hat{C}N^{d-3} 2^{-N} \varphi_{N_d^\delta}(\rho)\|E_\eta(u)\|_1,
\end{equation*}
which implies the required estimate.
\end{proof}
\begin{proof}[Proof of Lemma \ref{sawlem2}]
The proof essentially follows the argument in \cite{Saw}.
Put
\begin{equation*}
\left[\begin{array}{c} -\frac{1}{2}+\frac{i\gamma}{2} \\ k \end{array} \right]:=\prod_{j=1}^k \left(1+\frac{-\frac{1}{2}+\frac{i\gamma}{2}}{j} \right).
\end{equation*}
Then
\begin{multline}
\label{aest111}
\left|~\left[\begin{array}{c} -\frac{1}{2}+\frac{i\gamma}{2} \\ k \end{array} \right]~\right|=\prod_{j=1}^k \left(1-\frac{1}{2j} \right) \prod_{j=1}^k \sqrt{1+\frac{\gamma^2}{(2j-1)^2}} \leq \\ \leq \prod_{j=1}^k \left(1-\frac{1}{2j} \right) e^{\frac{\gamma^2}{2}\sum_{j=1}^k \frac{1}{(2j-1)^2}} \leq \prod_{j=1}^k \left(1-\frac{1}{2j} \right) e^{\gamma^2 c}, \quad c=\frac{\pi^2}{16}.
\end{multline}
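For the first equality in (\ref{aest111}) note that
\begin{equation*}
\left|1+\frac{-\frac{1}{2}+\frac{i\gamma}{2}}{j}\right|=\frac{1}{j}\sqrt{\left(j-\frac{1}{2}\right)^2+\frac{\gamma^2}{4}}=\left(1-\frac{1}{2j}\right)\sqrt{1+\frac{\gamma^2}{(2j-1)^2}},
\end{equation*}
while the exponential bound follows from the elementary inequality $\sqrt{1+x} \leq e^{\frac{x}{2}}$, $x \geq 0$, and the convergence of $\sum_{j \geq 1}(2j-1)^{-2}$.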
We may assume, after a dilation and rotation, that $x=(x_1,x_2,0,\dots,0)$, $y=(1,0,\dots,0)$. Thus, passing to polar coordinates $(x_1,x_2)=te^{i\theta}$, we reduce our inequality to the inequality
\begin{equation*}
\left||1-te^{i\theta}|^{-1-i\gamma}-P_{N-1}(t,\theta) \right| \leq C e^{c\gamma^2} t^{N}|1-te^{i\theta}|^{-1}, \quad \text{ for all } \gamma \in \mathbb R
\end{equation*}
and for appropriate $C>0$, $c>0$. Here $P_{N-1}(t,\theta)$ denotes the Taylor polynomial of degree $N-1$ at the point $z=0$ of the function $z=te^{i\theta} \mapsto |1-z|^{-1-i\gamma}$. Similarly to \cite{Saw}, multiplying the binomial series of $(1-z)^{-\frac{1+i\gamma}{2}}$ and $(1-\bar{z})^{-\frac{1+i\gamma}{2}}$, we obtain the representation
\begin{equation*}
P_{N-1}(t,\theta)=\sum_{m=0}^{N-1} a^{\gamma}_m(\theta)t^m,
\end{equation*}
where
\begin{equation*}
a^{\gamma}_m(\theta):=\sum_{k+l=m} \left[\begin{array}{c} -\frac{1}{2}+\frac{i\gamma}{2} \\ l \end{array} \right]~\left[\begin{array}{c} -\frac{1}{2}+\frac{i\gamma}{2} \\ k \end{array} \right] e^{i(k-l)\theta}.
\end{equation*}
Note that
\begin{equation*}
a_m^{0}(0)=\sum_{k+l=m}\left[\begin{array}{c} -\frac{1}{2} \\ l \end{array} \right]~\left[\begin{array}{c} -\frac{1}{2} \\ k \end{array} \right]=1
\end{equation*}
since
\begin{equation*}
\sum_{m=0}^\infty a_m^{0}(0)t^m=(1-t)^{-1}=\sum_{m=0}^\infty t^m.
\end{equation*}
Now estimate (\ref{aest111}) and identity $a_m^{0}(0)=1$ yield
\begin{equation*}
|a^{\gamma}_m(\theta)| \leq \sum_{k+l=m}\left|~\left[\begin{array}{c} -\frac{1}{2} \\ l \end{array} \right]~\right|~\left|~\left[\begin{array}{c} -\frac{1}{2} \\ k \end{array} \right]~\right|e^{2c\gamma^2}=e^{2c\gamma^2}.
\end{equation*}
We have to distinguish between the four cases $t \geq 2$, $1<t<2$, $0\leq t \leq \frac{1}{2}$ and $\frac{1}{2} <t<1$. Below we consider only the cases $t \geq 2$ and $1<t<2$ (the proofs in the two other cases are similar).
If $t \geq 2$, then
\begin{equation*}
|P_{N-1}(t,\theta)| \leq \sum_{m=0}^{N-1} |a^{\gamma}_m(\theta)|t^m \leq e^{2c\gamma^2}t^N \leq \frac{3}{2} e^{2c\gamma^2}t^N|1-te^{i\theta}|^{-1}
\end{equation*}
since $1 \leq \frac{3}{2}t|1-te^{i\theta}|^{-1}$ for $t \geq 2$. Hence, using $\bigl|~|1-te^{i\theta}|^{-1-i\gamma}~\bigr| \leq t^N|1-te^{i\theta}|^{-1}$, we obtain
\begin{equation*}
\left||1-te^{i\theta}|^{-1-i\gamma}-P_{N-1}(t,\theta) \right| \leq t^N|1-te^{i\theta}|^{-1}+\frac{3}{2} e^{2c\gamma^2}t^N|1-te^{i\theta}|^{-1} \leq C e^{2c\gamma^2}t^N|1-te^{i\theta}|^{-1}
\end{equation*}
for an appropriate $C>0$, as required.
If $1<t<2$, then, after two summations by parts, we derive
\begin{multline}
\notag
P_{N-1}(t,\theta)=\sum_{l=0}^{N-3} S \left[\begin{array}{c} -\frac{1}{2}+\frac{i\gamma}{2} \\ l \end{array} \right] D_l(\bar{z})\sum_{k=0}^{N-l-3}S\left[\begin{array}{c} -\frac{1}{2}+\frac{i\gamma}{2} \\ k \end{array} \right]D_k(z)+\\
+\sum_{l=0}^{N-2} S \left[\begin{array}{c} -\frac{1}{2}+\frac{i\gamma}{2} \\ l \end{array} \right] \left[\begin{array}{c} -\frac{1}{2}+\frac{i\gamma}{2} \\ N-l-2 \end{array} \right]D_l(\bar{z})D_{N-l-2}(z)+\\
+\sum_{k=0}^{N-1}\left[\begin{array}{c} -\frac{1}{2}+\frac{i\gamma}{2} \\ k \end{array} \right]~\left[\begin{array}{c} -\frac{1}{2}+\frac{i\gamma}{2} \\ N-k-1 \end{array} \right]z^kD_{N-1-k}(z)=J_1+J_2+J_3,
\end{multline}
where
\begin{equation*}
S\left[\begin{array}{c} \delta \\ k \end{array} \right]:=\left[\begin{array}{c} \delta \\ k \end{array} \right]-\left[\begin{array}{c} \delta \\ k+1 \end{array} \right], \quad D_k(z):=\sum_{j=0}^k z^j.
\end{equation*}
We use estimate
\begin{equation*}
\left|~S\left[\begin{array}{c} -\frac{1}{2}+\frac{i\gamma}{2} \\ k \end{array} \right]~\right|=\left|~ \left[\begin{array}{c} -\frac{1}{2}+\frac{i\gamma}{2} \\ k \end{array} \right] \left(\frac{-\frac{1}{2}+\frac{i\gamma}{2}}{1+k} \right)~\right| \leq C(k+1)^{-\frac{1}{2}}e^{c\gamma^2}
\end{equation*}
to obtain, following an argument in \cite{Saw}, that each $J_i$ ($i=1,2,3$) is majorized by $Ce^{c\gamma^2}t^N|1-te^{i\theta}|^{-1}$ for some $C>0$. Since $\bigl|~|1-te^{i\theta}|^{-1-i\gamma}~\bigr| \leq t^N|1-te^{i\theta}|^{-1}$, Lemma \ref{sawlem2} follows.
\end{proof}
\subsection{Proof of Theorem \ref{mainthm2}} Choose $\Psi_j \in C^\infty(\Omega)$ in such a way that $0 \leq \Psi_j \leq 1$, $\Psi_j(x)=1$ for $|x|>\frac{2}{j}$, $\Psi_j(x)=0$ for $|x|<\frac{1}{j}$, $|\nabla \Psi_j(x)| \leq c'j$, $|\Delta \Psi_j(x)| \leq c'j^2$.
\begin{proposition}
\label{fourestlem}
Let $\tau(V,0,\rho)<\infty$. There exists a constant $C=C(\rho,\delta,d)>0$ such that for all positive integers $N$ and $j$
$$
\|\mathbf{1}_{B(\rho)} \Psi_j |V|^{\frac{1}{2}}[(-\Delta)^{-1}]_{N,N_d^{\delta}}|V|^{\frac{1}{2}} \Psi_j \mathbf{1}_{B(\rho)}\|_{2 \mapsto 2} \leq C \tau(V,0,\rho)^{\frac{1}{d-1}},
\leqno (E1)
$$
$$
\|\mathbf{1}_{B(\rho)} \Psi_j |V|^{\frac{1}{2}}[(-\Delta)^{-1}]_{N,N_d^{\delta}}|V|^{\frac{1}{2}} \mathbf{1}_{B(3\rho \setminus \rho)}\|_{2 \mapsto 2} \leq C \tau(V,0,3\rho)^{\frac{1}{d-1}},
\leqno (E2)
$$
$$
\left\|\mathbf{1}_{B(\rho)}\Psi_j |V|^{\frac{1}{2}} [(-\Delta)^{-1}]_{N,N_d^\delta}\mathbf{1}_{B(\frac{2}{j} \setminus \frac{1}{j})} \right\|_{p \mapsto 2} \leq C \tau(V,0,\rho)^{\frac{1}{d-1}},
\leqno (E3)
$$
$$
\left\|\mathbf{1}_{B(\rho)}\Psi_j |V|^{\frac{1}{2}} [(-\Delta)^{-1}]_{N,N_d^\delta}\mathbf{1}_{B(3\rho \setminus 2\rho)} \right\|_{p \mapsto 2} \leq C \tau(V,0,3\rho)^{\frac{1}{d-1}},
\leqno (E4)
$$
where $p=\frac{2d}{d+2}$.
\end{proposition}
We prove Proposition \ref{fourestlem} at the end of this section.
\begin{proof}[Proof of Theorem \ref{mainthm2}]
We use the same notation as in the proof of Theorem \ref{mainthm}. Suppose that $u \in Y_V^{\rm {str}}$ satisfies (\ref{diffineq}) and vanishes to infinite order at $0 \in \Omega$.
We wish to obtain an estimate of the form
\begin{equation}
\label{reqineq7}
\left\|\mathbf{1}_{B(\rho)}\frac{\varphi_{N_d^\delta}}{\varphi_{N_d^\delta}(\rho)} u\right\|_2 \leq C.
\end{equation}
Then, letting $N \to \infty$, we would derive the required identity: $u \equiv 0$ in $B(0,\rho)$.
The same argument as in the proof of Theorem \ref{mainthm} leads us to an identity
\begin{equation*}
u_{\eta_j}=(-\Delta)^{-1}(-\Delta u_{\eta_j}), \quad \eta_j=\eta \Psi_j,
\end{equation*}
which, in turn, implies
\begin{multline}
\notag
\mathbf{1}_{B(\rho)} \Psi_j V_1^{\frac{1}{2}}\varphi_{N_d^{\delta}}u=\\=
\mathbf{1}_{B(\rho)} \Psi_j V_1^{\frac{1}{2}}[(-\Delta)^{-1}]_{N,N_d^{\delta}}V_1^{\frac{1}{2}}\varphi_{N_d^{\delta}}
\frac{-\eta_j\Delta u}{V_1^{\frac{1}{2}}}
+\mathbf{1}_{B(\rho)} \Psi_j V_1^{\frac{1}{2}}[(-\Delta)^{-1}]_{N,N_d^{\delta}}\varphi_{N_d^{\delta}} E_j(u).
\end{multline}
Letting $I$ denote the left-hand side of the previous
identity and $I_1$, $I_2$ denote, respectively,
the two summands of the right-hand side, we rewrite the latter as
\begin{equation*}
I=I_1+I_2.
\end{equation*}
Here $0<\delta<1/2$ is fixed, $2/j \leq \rho$, $\Delta u_{\eta_j}=\eta_j \Delta u+E_j(u)$ and
\begin{equation*}
E_j(u):=2 \nabla \eta_j \nabla u+(\Delta \eta_j)u.
\end{equation*}
Note that $I \in L^2$, since $H^{1,p}_{{\rm loc}}(\Omega) \subset X_2$ by the Sobolev embedding theorem, and $|V|^{\frac{1}{2}}u \in X_2$ by the definition of $Y_V^{\rm {str}}$.
Next, we expand $I_1$ as a sum $I_{11}+I_{11}^c$, where
\begin{equation*}
I_{11}:=\mathbf{1}_{B(\rho)} \Psi_j V_1^{\frac{1}{2}}[(-\Delta)^{-1}]_{N,N_d^{\delta}}V_1^{\frac{1}{2}}\mathbf{1}_{B(\rho)}\varphi_{N_d^{\delta}}
\frac{-\Psi_j\Delta u}{V_1^{\frac{1}{2}}}
\end{equation*}
and
\begin{equation*}
I_{11}^c:=\mathbf{1}_{B(\rho)} \Psi_j V_1^{\frac{1}{2}}[(-\Delta)^{-1}]_{N,N_d^{\delta}}V_1^{\frac{1}{2}}\mathbf{1}_{B(\rho)}^c\varphi_{N_d^{\delta}}
\frac{-\eta\Delta u}{V_1^{\frac{1}{2}}}.
\end{equation*}
Inequalities (E1) and (E2) of Proposition \ref{fourestlem} imply the required estimates:
\begin{equation*}
\|I_{11}\|_2 \leq C \tau(V_1,0,\rho)^{\frac{1}{d-1}}\|I\|_2
\end{equation*}
and
\begin{equation*}
\|I_{11}^c\|_2 \leq C \varphi_{N_d^\delta}(\rho)\tau(V_1,0,3\rho)^{\frac{1}{d-1}}\|\mathbf{1}_{B(3\rho)}|V|^{\frac{1}{2}}u\|_2.
\end{equation*}
Finally, we represent $I_2$ as a sum $I_{21}+I_{22}$, where
\begin{equation*}
I_{21}:=\mathbf{1}_{B(\rho)}\Psi_j V_1^{\frac{1}{2}}[(-\Delta)^{-1}]_{N,N_d^\delta}\mathbf{1}_{B(\frac{2}{j} \setminus \frac{1}{j})} \varphi_{N_d^\delta} E_j^{(1)}(u)
\end{equation*}
and
\begin{equation*}
I_{22}:=\mathbf{1}_{B(\rho)}\Psi_j V_1^{\frac{1}{2}}[(-\Delta)^{-1}]_{N,N_d^\delta}\mathbf{1}_{B(3\rho \setminus 2\rho)} \varphi_{N_d^\delta} E_j^{(2)}(u).
\end{equation*}
Here
\begin{equation*}
E_j^{(1)}(u):=-2 \nabla \Psi_j \nabla u-(\Delta \Psi_j)u, \quad
E_j^{(2)}(u):=-2 \nabla \eta \nabla u-(\Delta \eta)u.
\end{equation*}
In order to derive an estimate on $\|I_{21}\|_2$, we expand
\begin{equation*}
I_{21}=I_{21}'+I_{21}'',
\end{equation*}
where
\begin{equation*}
I_{21}':=\mathbf{1}_{B(\rho)}\Psi_j V_1^{\frac{1}{2}}[(-\Delta)^{-1}]_{N,N_d^\delta}\mathbf{1}_{B(\frac{2}{j} \setminus \frac{1}{j})} \varphi_{N_d^\delta} (-\Delta \Psi_j)u,
\end{equation*}
\begin{equation*}
I_{21}'':=\mathbf{1}_{B(\rho)}\Psi_j V_1^{\frac{1}{2}}[(-\Delta)^{-1}]_{N,N_d^\delta}\mathbf{1}_{B(\frac{2}{j} \setminus \frac{1}{j})} \varphi_{N_d^\delta} \bigl(-2 \nabla \eta \nabla u\bigr).
\end{equation*}
1) The term $I_{21}'$ presents no problem: by (E3),
\begin{multline}
\notag
\|I_{21}'\|_2 \leq \left\|\mathbf{1}_{B(\rho)}\Psi_j V_1^{\frac{1}{2}} [(-\Delta)^{-1}]_{N,N_d^\delta}\mathbf{1}_{B(\frac{2}{j} \setminus \frac{1}{j})}\right\|_{p \mapsto 2} \left\|\mathbf{1}_{B(\frac{2}{j} \setminus \frac{1}{j})}\varphi_{N_d^\delta}(\Delta \Psi_j)u\right\|_2 \leq \\ \leq C \tau(V_1,0,\rho)^{\frac{1}{d-1}} \left\|\mathbf{1}_{B(\frac{2}{j} \setminus \frac{1}{j})}\varphi_{N_d^\delta}(\Delta \Psi_j)u\right\|_2,
\end{multline}
where
\begin{equation*}
\left\|\mathbf{1}_{B(\frac{2}{j} \setminus \frac{1}{j})}\varphi_{N_d^\delta}(\Delta \Psi_j)u\right\|_2 \leq C j^{N_d^\delta+2} \left\|\mathbf{1}_{B(\frac{2}{j})}u\right\|_2 \to 0 \quad \text{ as } j \to \infty
\end{equation*}
by the definition of
the SUC property.
2) In order to derive an estimate on $I_{21}''$, we once again use inequality (E3):
\begin{multline}
\notag
\|I_{21}''\|_2 \leq \left\|\mathbf{1}_{B(\rho)}\Psi_j V_1^{\frac{1}{2}} [(-\Delta)^{-1}]_{N,N_d^\delta}\mathbf{1}_{B(\frac{2}{j} \setminus \frac{1}{j})} \right\|_{p \mapsto 2}\|\mathbf{1}_{B(\frac{2}{j})}\varphi_{N_d^\delta} \nabla \Psi_j \nabla u\|_p \leq \\ \leq C \tau(V_1,0,\rho)^{\frac{1}{d-1}}\|\mathbf{1}_{B(\frac{2}{j})}\varphi_{N_d^\delta} \nabla \Psi_j \nabla u\|_p \leq
\tilde{C} j^{N_d^\delta+1} \|\mathbf{1}_{B(\frac{2}{j})} \nabla u\|_p,
\end{multline}
where $p:=\frac{2d}{d+2}$. We must estimate $\|\mathbf{1}_{B(\frac{2}{j})}\nabla u\|_p$ in terms of $\|\mathbf{1}_{B(\frac{4}{j})}u\|_2$ in order to apply the SUC property.
For this purpose, we make use of the following well-known interpolation inequality
\begin{equation*}
\left\|\mathbf{1}_{B(\frac{2}{j})} \nabla u\right\|_p \leq C j^{\frac{d}{p}} \left(C'j^{\frac{d}{2}-1}\left\|\mathbf{1}_{B(\frac{4}{j})}u\right\|_2+j^{\frac{d+6}{2}}\left\|\mathbf{1}_{B(\frac{4}{j})}\Delta u\right\|_r \right),
\end{equation*}
where $r:=\frac{2d}{d+4}$ (see \cite{Maz}). Using the differential inequality (\ref{diffineq}), we reduce the problem to estimating $\|\mathbf{1}_{B(\frac{4}{j})} Vu\|_r$ in terms of $\|\mathbf{1}_{B(\frac{4}{j})}u\|_2^{\mu}$, $\mu>0$. By H\"{o}lder's inequality,
\begin{equation*}
\left\|\mathbf{1}_{B(\frac{4}{j})}Vu\right\|_r \leq \left\|\mathbf{1}_{B(\frac{4}{j})}|V|^{\frac{1}{2}}u\right\|_2^{\frac{2}{d}}\left\|\mathbf{1}_{B(\frac{4}{j})}V\right\|_{\frac{d-1}{2}}^{\frac{d-1}{d}}\left\|\mathbf{1}_{B(\frac{4}{j})}u\right\|_2^{1-\frac{2}{d}},
\end{equation*}
as required.
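One checks directly that the H\"{o}lder exponents here are consistent: the powers of $|V|$ sum to $\frac{1}{2}\cdot\frac{2}{d}+\frac{d-1}{d}=1$, the powers of $|u|$ sum to $\frac{2}{d}+1-\frac{2}{d}=1$, and
\begin{equation*}
\frac{1}{2}\cdot\frac{2}{d}+\frac{2}{d-1}\cdot\frac{d-1}{d}+\frac{1}{2}\left(1-\frac{2}{d}\right)=\frac{2}{d}+\frac{1}{2}=\frac{d+4}{2d}=\frac{1}{r}.
\end{equation*}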
As the last step of the proof, we use inequality (E4) to derive an estimate on term $I_{22}$:
\begin{equation*}
\|I_{22}\|_2 \leq C \tau(V_1,0,3\rho)^{\frac{1}{d-1}}\varphi_{N_d^\delta}(\rho)\left\|E_j^{(2)}(u)\right\|_{p}.
\end{equation*}
This estimate and the estimates obtained above imply (\ref{reqineq7}).
\end{proof}
\begin{proof}[Proof of Proposition \ref{fourestlem}]
Estimates (E1) and (E2) follow straightforwardly from Proposition \ref{ourlem}. In order to prove estimate (E3), we introduce the following interpolation function:
\begin{equation*}
F_1(z):=\mathbf{1}_{B(\rho)} \Psi_j |V|^{\frac{d-1}{4}z}\varphi_{N+(\frac{d}{2}-\delta)(1-z)}\left[(-\Delta)^{-\frac{d-1}{2}z}\right]_N \varphi^{-1}_{N+(\frac{d}{2}-\delta)(1-z)} \mathbf{1}_{B(\frac{2}{j} \setminus \frac{1}{j})}, \quad 0 \leq \mbox{Re}(z) \leq 1.
\end{equation*}
According to Lemma \ref{JKlem}, $\|F_1(i\gamma)\|_{2 \mapsto 2} \leq C_1 e^{c_1|\gamma|}$ for appropriate $C_1$, $c_1>0$. Further, according to Lemma \ref{sawlem2},
\begin{multline}
\notag
\|F_1(1+i\gamma)\|_{\frac{2d}{2d-1} \mapsto 2} \leq C_2 e^{c_2 \gamma^2}\left\|\mathbf{1}_{B(\rho)}|V|^{\frac{d-1}{4}}(-\Delta)^{-\frac{d-1}{2}}\right\|_{\frac{2d}{2d-1} \mapsto 2} \leq \\
\leq C_2 e^{c_2 \gamma^2}\left\|\mathbf{1}_{B(\rho)}|V|^{\frac{d-1}{4}}(-\Delta)^{-\frac{d-1}{4}}\right\|_{2 \mapsto 2}\left\|(-\Delta)^{-\frac{d-1}{4}}\right\|_{\frac{2d}{2d-1} \mapsto 2} \leq \\ \leq C_2 e^{c_2 \gamma^2}\tau(V,x_0,\rho)^{\frac{1}{2}}\left\|(-\Delta)^{-\frac{d-1}{4}}\right\|_{\frac{2d}{2d-1} \mapsto 2}
\end{multline}
for appropriate $C_2$, $c_2>0$, where, clearly, $\|(-\Delta)^{-\frac{d-1}{4}}\|_{\frac{2d}{2d-1} \mapsto 2}<\infty$. Therefore, by Stein's interpolation theorem,
\begin{equation*}
\left\|F_1\left(\frac{2}{d-1}\right)\right\|_{p \mapsto 2} \leq C\tau(V,x_0,\rho)^{\frac{1}{2(d-1)}}.
\end{equation*}
The latter inequality implies (E3).
The proof of estimate (E4) is similar: it suffices to consider interpolation function
\begin{equation*}
F_2(z):=\mathbf{1}_{B(\rho)} \Psi_j |V|^{\frac{d-1}{4}z}\varphi_{N+(\frac{d}{2}-\delta)(1-z)}\left[(-\Delta)^{-\frac{d-1}{2}z}\right]_N \varphi^{-1}_{N+(\frac{d}{2}-\delta)(1-z)} \mathbf{1}_{B(3\rho \setminus 2\rho)}
\end{equation*}
for $0 \leq \mbox{Re}(z) \leq 1$.
\end{proof}
\section{Proof of Theorem \ref{sawthm}}
\label{exsect}
\begin{proof}[Proof of Theorem \ref{sawthm}]
Let $u \in Y_V^{\mathcal K}$. Suppose that $u \equiv 0$ in some neighbourhood of $0$. Assume that $\rho>0$ is sufficiently small, so that $\bar{B}(0,2\rho) \subset \Omega$, and let $\eta \in C^\infty(\Omega)$ be such that $\eta \equiv 1$ on $B(0,\rho)$, $\eta \equiv 0$ on $\Omega \setminus B(0,2\rho)$. We may assume, without loss of generality, that $V \geq 1$. The standard limiting argument implies the following identity:
\begin{equation*}
\mathbf{1}_{B(\rho)} u=\mathbf{1}_{B(\rho)} [(-\Delta)^{-1}]_{N} (-\Delta u_\eta).
\end{equation*}
Therefore, we can write
\begin{multline}
\notag
\mathbf{1}_{B(\rho)} \varphi_N V u=\\=\mathbf{1}_{B(\rho)} \varphi_N V [(-\Delta)^{-1}]_N \varphi_{N}^{-1} \mathbf{1}_{B(\rho)} \varphi_N (-\Delta u)+\mathbf{1}_{B(\rho)} \varphi_N V [(-\Delta)^{-1}]_N \varphi_N^{-1}\mathbf{1}^c_{B(\rho)}\varphi_N (-\Delta u_\eta),
\end{multline}
or, letting $K$ denote the left-hand side and, respectively, $K_1$ and $K_2$ the two summands on the right-hand side of the last equality, we rewrite the latter as $$K=K_1+K_2.$$
Note that $K \in L^1(\mathbb R^d)$, as follows from the definition of the space $Y_V^{\mathcal K}$. Lemma \ref{sawlem1} implies that
\begin{equation*}
\|\mathbf{1}_{B(\rho)} \varphi_N V [(-\Delta)^{-1}]_N \varphi_{N}^{-1} f\|_{1} \leq C\|\mathbf{1}_{B(\rho)} V (-\Delta)^{-1} f\|_{1} \leq C\beta \|f\|_{1},
\end{equation*}
for all $f \in L^1(\Omega)$,
which implies an estimate on $K_1$:
\begin{equation*}
\|K_1\|_{1} \leq C\beta \|K\|_{1}.
\end{equation*}
In order to estimate $K_2$, we first note that $\mathbf{1}^c_{B(\rho)} (-\Delta u_\eta)=\mathbf{1}_{B(2\rho \setminus \rho)} (-\Delta u_\eta)$. According to Lemma \ref{sawlem1} there exists a constant $\hat{C}>0$ such that
\begin{equation*}
\|\mathbf{1}_{B(2\rho)} \varphi_N V [(-\Delta)^{-1}]_N \varphi_{N}^{-1}\|_{1 \mapsto 1} \leq \hat{C}.
\end{equation*}
Hence,
\begin{equation*}
\|K_2\|_{1} \leq \hat{C}\|\mathbf{1}_{B(2\rho \setminus \rho)} \varphi_N (-\Delta u_\eta)\|_{1} \leq \hat{C}\rho^{-N}\|\Delta u_{\eta}\|_1.
\end{equation*}
Let us choose $\beta>0$ such that $C\beta<1$. Then the estimates above imply
\begin{equation*}
(1-C\beta)\|\mathbf{1}_{B(\rho)} \rho^N \varphi_N u\|_{1} \leq (1-C\beta)\|\rho^N K\|_{1} \leq \|\rho^N K_2\|_{1} \leq \hat{C}\|\Delta u_{\eta}\|_1.
\end{equation*}
Letting $N \to \infty$, we obtain $u \equiv 0$ in $B(0,\rho)$.
\end{proof}
\bibliographystyle{alpha}
\section{Introduction}
Hidden Markov models (HMMs; \cite{hmm_seminal}) are commonly used for modeling disease progression, because they capture complex and noisy clinical measurements as originating from a smaller set of latent health states.
When fit to patient data, HMMs yield \textit{state transition probabilities} (the probabilities of patients transitioning between latent health states) and \textit{emission probabilities} (the probabilities of observing patients' true health states), both of which can have useful clinical interpretations. State transition probabilities describe the dynamics of disease progression, while emission probabilities describe the accuracy of clinical tests and measurements. Because of their intuitive parameter interpretations and flexibility, HMMs have been used to model sepsis \cite{hmm_sepsis}, Alzheimer's progression \cite{continuous_hmm},
and patient response to blood anticoagulants \cite{nemani_paper}.
Researchers may wish to use patient-level covariates to improve the fit of HMM parameter solutions \cite{tumor_cytogenetics}, or to integrate HMMs directly with treatment planning algorithms \cite{nemani_paper}.
Either modification requires incorporating additional parameters into the HMM, which is typically intractable with expectation-maximization algorithms. Incorporating covariates or additional treatment planning models therefore requires multiple estimation steps (e.g., \cite{tumor_cytogenetics}), changes to HMM parameter interpretation (e.g., \cite{nemani_paper}), or Bayesian estimation, which involves joint prior distributions over all parameters and can suffer from poor convergence in complex models \cite{ryden2008bayesian}.
We present neural networks as a valuable alternative for implementing and solving HMMs for disease progression modeling. Neural networks' substantial modularity allows them to easily incorporate additional input variables (e.g., patient-level covariates) or predictive models and simultaneously estimate all parameters \cite{caelli1999modularity}.
We therefore introduce
Hidden Markov Recurrent Neural Networks ({HMRNN}s): recurrent neural networks (RNNs) that mimic the computation of hidden Markov models while allowing for substantial modularity with other predictive networks.
When trained on state observation data only (i.e., the same data used to train HMMs), the {HMRNN} has the same parameter set and likelihood function as a traditional HMM. Yet the HMRNN's neural network structure allows it to incorporate additional patient data; in this case, the HMRNN's likelihood function differs from that of a traditional HMM, but it still yields interpretable state transition and emission parameters (unlike other classes of neural networks). Thus, HMRNNs are a type of RNN with a specific structure that allows their parameters to be interpreted as HMM parameters, and HMRNNs can in turn be reduced to standard HMMs. In this way, {HMRNN}s balance the interpretability of HMMs with the flexibility of neural networks (as shown in Figure \ref{fig:fig2}).
\begin{figure}
\centering
\includegraphics[scale=.8]{hmrnn_fig2.pdf}
\caption{Conceptual overview of HMRNN. `Interpretability' refers to the
model's ability to produce interpretable state transition and emission parameters. `Flexibility' refers to the model's ability to incorporate additional data sources in its predictions. When trained on the same data, HMRNNs and HMMs optimize the same likelihood functions and yield the same interpretable parameter solutions (demonstrated in Study 1). The HMRNN can also combine an HMM with an additional predictive neural network, allowing it to use additional patient data not available to standard HMMs (demonstrated in Study 2).
}
\label{fig:fig2}
\end{figure}
Our primary contributions are as follows: (1) We prove how recurrent neural networks (RNNs) can be formulated to optimize the same likelihood function as HMMs, with parameters that can be interpreted as HMM parameters (sections \ref{sec:methods} and \ref{sec:study1}), and (2) we demonstrate the HMRNN's utility for disease progression modeling, by combining it with other predictive neural networks to improve forecasting and offer unique parameter interpretations not afforded by simple HMMs (section \ref{sec:study2}).
\section{Related work}\label{sec:related_research}
A few studies in the speech recognition literature model HMMs with neural networks \cite{wessels_hmm_rnn,alpha_nets}; these implementations require HMM pre-training \cite{wessels_hmm_rnn} or minimize the mutual information criterion \cite{alpha_nets},
and they are not commonly used outside the speech recognition domain.
These works also
present only theoretical justification, with no empirical comparisons with expectation-maximization algorithms.
A limited number of healthcare studies have also explored connections between neural networks and Markov models. \cite{nemani_paper} employs a recurrent neural network to approximate latent health states underlying patients' ICU measurements. \cite{hmm_nn_surgery} compares HMM and neural network effectiveness in training a robotic surgery assistant, while
\cite{vae_rl_paper} proposes a generative neural network for modeling ICU patient health based on
HMMs.
These studies differ from our approach of directly formulating HMMs as neural networks, which maintains the interpretability of HMMs while allowing for joint estimation of the HMM with other predictive models.
\section{Methods}\label{sec:methods}
In this section, we briefly review HMM preliminaries, formally define the {HMRNN}, and prove that it optimizes the same likelihood function as a corresponding HMM.
\subsection{HMM preliminaries}\label{sec:preliminaries}
Formally, an HMM models a system over a given time horizon $T$, where the system occupies a hidden state $x_{t} \in S=\{1,\dots,k\}$ at any given time point $t\in \{0,1,\dots,T\}$; that is, $x_{t}=i$ indicates that the system is in the $i$-th state at time $t$. For any state $x_{t} \in S$ and any time point $t \in \{0,1, \dots,T\}$, the system emits an observation according to an emission distribution that is uniquely defined for each state.
We consider the case of categorical emission distributions, which are commonly used in healthcare (e.g., \cite{HMM_breastcancer, hmm_sepsis}).
These systems emit a discrete-valued observation $y_{t} \in O$ at each time $t$,
where $O=\{1,\dots,c\}$.
Thus, an HMM is uniquely defined by a $k$-length initial probability vector $\bm{\pi}$, $k \times k$ transition matrix $\bm{P}$, and $k \times c$ emission matrix $\bm{\Psi}$. Entry $i$ in the vector $\bm{\pi}$ is the probability of starting in state $i$, row $i$ in the matrix $\bm{P}$ is the state transition probability distribution from state $i$, and row $i$ of the matrix $\bm{\Psi}$ is the emission distribution from state $i$. We also define $\operatorname{diag}(\bm{\Psi}_{i})$ as a $k \times k$ diagonal matrix with the $i$-th column of $\bm{\Psi}$ as its entries (i.e., the probabilities of observation $i$ from each of the $k$ states). We define the likelihood of an observation sequence $\bm{y}$ in terms of $\alpha_{t}(i)$, the probability of being in state $i$ at time $t$ \textit{and} having observed $\{y_{0},...,y_{t}\}$. We denote $\bm{\alpha}_{t}$ as the (row) vector of all $\alpha_{t}(i)$ for $i \in S$, with
\begin{equation}
\bm{\alpha}_{t}=\bm{\pi}^\top \cdot \operatorname{diag}(\bm{\Psi}_{y_{0}}) \cdot (\prod_{i=1}^{t} \bm{P} \cdot \operatorname{diag}(\bm{\Psi}_{y_i}))
\label{eqn:alpha}
\end{equation}
for $t \in \{1,...,T\}$, with $\bm{\alpha}_{0}=\bm{\pi}^\top \cdot \operatorname{diag}(\bm{\Psi}_{y_{0}})$. The likelihood of a sequence $\bm{y}$ is thus given by $\textrm{Pr}(\bm{y})=\bm{\alpha}_{T} \cdot \textbf{1}_{k \times 1}$.
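The recursion in equation (\ref{eqn:alpha}) is the standard forward algorithm. A minimal numpy sketch (0-indexed states and observations; an illustration, not the implementation used in the studies below):

```python
import numpy as np

def hmm_log_likelihood(y, pi, P, Psi):
    """Forward algorithm for a categorical-emission HMM.

    y   : observation sequence with values in {0, ..., c-1}
    pi  : (k,) initial state distribution
    P   : (k, k) transition matrix, rows summing to 1
    Psi : (k, c) emission matrix, rows summing to 1
    Returns log Pr(y).
    """
    alpha = pi * Psi[:, y[0]]              # alpha_0 = pi^T diag(Psi_{y_0})
    for obs in y[1:]:
        alpha = (alpha @ P) * Psi[:, obs]  # alpha_t = alpha_{t-1} P diag(Psi_{y_t})
    return np.log(alpha.sum())             # Pr(y) = alpha_T . 1
```

The loop body multiplies by $\bm{P}$ and then rescales each state's mass by its emission probability, exactly mirroring the matrix product above.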
\subsection{{HMRNN} definition}
\label{sec:HMRNN}
An {HMRNN} is a recurrent neural network whose parameters directly correspond to the initial state, transition, and emission probabilities of an HMM. As such, training an HMRNN optimizes the joint log-likelihood of the $N$ $T$-length observation sequences given these parameters.
\begin{definition}
An {HMRNN} is a recurrent neural network with parameters $\bm{\pi}$ (a $k$-length vector whose entries sum to 1), $\bm{P}$ (a $k \times k$ matrix whose rows sum to one), and $\bm{\Psi}$ (a $k \times c$ matrix whose rows sum to one). It receives $T+1$ input matrices of size $N \times c$, denoted by $\bm{Y}_{t}$ for $t \in \{0,1,\dots,T\}$, where the $n$-th row of matrix $\bm{Y}_{t}$ is a one-hot encoded vector of observation $y_{t}^{(n)}$ for sequence $n \in \{1,\dots,N\}$. The {HMRNN} consists of an inner block of hidden layers that is looped $T+1$ times (for $t \in \{0,1,\dots,T\}$), with each loop containing hidden layers $\bm{h}_{1}^{(t)}$, $\bm{h}_{2}^{(t)}$, and $\bm{h}_{3}^{(t)}$, and a $c$-length input layer $\bm{h}_{y}^{(t)}$ through which the input matrix $\bm{Y}_{t}$ enters the model. The {HMRNN} has a single output unit $o^{(T)}$ whose value is the joint negative log-likelihood of the $N$ observation sequences
under an HMM with parameters $\bm{\pi}$, $\bm{P}$, and $\bm{\Psi}$; the summed value of $o^{(T)}$ across all $N$
observation sequences
is the loss function (minimized via neural network optimization, such as gradient descent).
Layers $\bm{h}_{1}^{(t)}$, $\bm{h}_{2}^{(t)}$, $\bm{h}_{3}^{(t)}$, and $o^{(T)}$ are defined in the following equations. Note that the block matrix in equation (\ref{eqn:GHMNN_2}) is a $c \times (kc)$ block matrix of $c$ $\bm{1}_{1 \times k}$ vectors, arranged diagonally, while the block matrix in equation (\ref{eqn:GHMNN_3}) is a $(kc) \times k$ row-wise concatenation of $c$ $k \times k$ identity matrices.
\begin{align}
\label{eqn:GHMNN_1}
\bm{h}_{1}^{(t)} & = \begin{cases}
\bm{\pi}^\top, & t=0,\\
\bm{h}_{3}^{(t-1)} \bm{P}, & t>0.
\end{cases}
\\
\label{eqn:GHMNN_2}
\bm{h}_{2}^{(t)} & = \!\begin{aligned}[t] & \operatorname{ReLu}
\Big(\bm{h}_{1}^{(t)} \begin{bmatrix} \operatorname{diag}(\bm{\Psi}_{1}) \dots \operatorname{diag}(\bm{\Psi}_{c}) \\\end{bmatrix}+
\\
& \bm{Y}_{t}\begin{bmatrix} \bm{1}_{1 \times k} && \dots && \bm{0}_{1 \times k} \\
\dots && \dots && \dots \\
\bm{0}_{1 \times k} && \dots && \bm{1}_{1 \times k} \end{bmatrix} - \bm{1}_{n \times (kc)}
\Big)
\end{aligned}
\\
\label{eqn:GHMNN_3}
\bm{h}_{3}^{(t)} & = \bm{h}_{2}^{(t)}\begin{bmatrix} \bm{I}_{k} & \dots & \bm{I}_{k}\\
\end{bmatrix}^{\top}
\\
\label{eqn:GHMNN_4}
o^{(t)} & = -\log(\bm{h}_{3}^{(T)}\mathbf{1}_{k \times 1}).
\end{align}
\end{definition}
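To see the mechanics of the definition concretely, the following numpy sketch (with illustrative sizes $k=3$, $c=2$) verifies numerically that the ReLu layer of equation (\ref{eqn:GHMNN_2}) and the summation layer of equation (\ref{eqn:GHMNN_3}) together compute $\bm{h}_{1}^{(t)} \operatorname{diag}(\bm{\Psi}_{y_{t}})$, as Lemma \ref{lemma1} asserts:

```python
import numpy as np

# Illustrative sizes: k = 3 states, c = 2 observation symbols.
k, c = 3, 2
rng = np.random.default_rng(0)
Psi = rng.dirichlet(np.ones(c), size=k)   # (k, c) stochastic emission matrix
h1 = rng.dirichlet(np.ones(k))            # entries of h1 lie in [0, 1]
y_t = 1                                   # observed symbol at time t

# Weight matrix [diag(Psi_1) ... diag(Psi_c)], shape (k, kc).
W = np.hstack([np.diag(Psi[:, l]) for l in range(c)])
# Block matrix of 1_{1 x k} vectors arranged diagonally, shape (c, kc):
# routes the one-hot observation to its block of k units.
B = np.kron(np.eye(c), np.ones((1, k)))
y_onehot = np.eye(c)[y_t]

# ReLu(h1 W + y B - 1): blocks for symbols other than y_t are zeroed out,
# since Psi[m, l] * h1[m] - 1 <= 0 whenever h1[m] is in [0, 1].
h2 = np.maximum(h1 @ W + y_onehot @ B - 1.0, 0.0)
# Stacked identities [I_k; ...; I_k], shape (kc, k): sums the c blocks.
h3 = h2 @ np.vstack([np.eye(k)] * c)
```

Here `h3` equals `h1 * Psi[:, y_t]`, i.e., elementwise multiplication by the emission probabilities of the observed symbol.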
\begin{figure*}[t]
\centering
\includegraphics[scale=.35]{figure01.jpg}
\caption{Structure of the hidden Markov recurrent neural network ({HMRNN}). Solid lines indicate learned weights that correspond to HMM parameters; dotted lines indicate weights fixed to 1. The inner block initializes with the initial state probabilities then mimics multiplication by $\operatorname{diag}(\bm{\Psi}_{y_{t}})$; connections between blocks mimic multiplication by $\bm{P}$.}
\label{fig:GHMNN_diagram}
\end{figure*}
Fig.\ \ref{fig:GHMNN_diagram} outlines the structure of the {HMRNN}. Note that layer $\bm{h}_{3}^{(t)}$ is equivalent to $\bm{\alpha}_{t}$, the probability of being in each hidden state \textit{and} having observed $\{y_{0},...,y_{t}\}$. Also note that, for long sequences, underflow can be addressed by normalizing layer $\bm{h}_{3}^{(t)}$ to sum to 1 at each time point, then simply subtracting the logarithm of the normalization term (i.e., the log-sum of the activations) from the output $o^{(T)}$.
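The scaling trick described above can be sketched as follows (a numpy illustration of the normalized forward pass, not the training code itself):

```python
import numpy as np

def hmm_log_likelihood_scaled(y, pi, P, Psi):
    """Scaled forward pass: normalize alpha_t to sum to 1 at every step
    and accumulate the log normalizers, so that long sequences do not
    underflow. Returns log Pr(y)."""
    alpha = pi * Psi[:, y[0]]
    log_lik = np.log(alpha.sum())
    alpha = alpha / alpha.sum()
    for obs in y[1:]:
        alpha = (alpha @ P) * Psi[:, obs]
        s = alpha.sum()                # normalization term at this step
        log_lik += np.log(s)           # accumulate its logarithm
        alpha = alpha / s
    return log_lik
```

On short sequences this agrees with the unscaled recursion; on sequences of thousands of steps the unscaled $\bm{\alpha}_{t}$ would underflow to zero while the scaled version remains finite.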
\subsection{Proof of HMM/{HMRNN} equivalence}\label{sec:proof}
We now formally establish that the {HMRNN}'s output unit, $o^{(T)}$, is the negative log-likelihood of an observation sequence under an HMM with parameters $\bm{\pi}$, $\bm{P}$, and $\bm{\Psi}$. We prove this for the case of $N=1$ and drop notational dependence on $n$ (i.e., we write $y_{t}^{(1)}$ as $y_{t}$), though extension to $N>1$ is trivial since the log-likelihood of multiple independent sequences is the sum of their individual log-likelihoods. We first rely on the following lemma.
\begin{lemma}\label{lemma1}
If all units in $\bm{h}_{1}^{(t)}$ are between 0 and 1 (inclusive), then $\bm{h}_{3}^{(t)}=\bm{h}_{1}^{(t)} \operatorname{diag}(\bm{\Psi}_{y_{t}})$.
\end{lemma}
\begin{proof}
Let $\bm{h}_{1}^{(t)}(j)$ and $\bm{h}_{3}^{(t)}(j)$ represent the $j$th units of layer $\bm{h}_{1}^{(t)}$ and $\bm{h}_{3}^{(t)}$, respectively, and recall that
$\bm{h}_{2}^{(t)}$ contains $k \times c$ units, which we index with a tuple $(l,m)$ for $l \in \{1,\dots,c\}$ and $m \in \{1,\dots,k\}$. According to equation (\ref{eqn:GHMNN_2}), the connection between units $\bm{h}_{1}^{(t)}(j)$ and $\bm{h}_{2}^{(t)}(l,m)$ is $\bm{\Psi}_{j,l}$ when $j=m$, and 0 otherwise. Also recall that matrix $\bm{Y}_{t}$ enters the model through a $c$-length input layer that we denote $\bm{h}_{y}^{(t)}$. Again by equation (\ref{eqn:GHMNN_2}),
the connection between unit $\bm{h}_{y}^{(t)}(j)$ and unit $\bm{h}_{2}^{(t)}(l,m)$ is 1 when $j=l$, and 0 otherwise. Thus, unit $\bm{h}_{2}^{(t)}(l,m)$ depends only on $\bm{\Psi}_{m,l}$, $\bm{h}_{1}^{(t)}(m)$, and $\bm{h}_{y}^{(t)}(l)$. Lastly, a bias of $-1$ is added to all units in $\bm{h}_{2}^{(t)}$, which is then subject to a ReLu activation, resulting in the following expression for each unit in $\bm{h}_{2}^{(t)}$:
\begin{equation}\label{eqn:layer2}
\bm{h}_{2}^{(t)}(l,m)=\operatorname{ReLu}(\bm{\Psi}_{m,l}\cdot \bm{h}_{1}^{(t)}(m)+\bm{h}_{y}^{(t)}(l)-1).
\end{equation}
\looseness-1 Because $\bm{h}_{y}^{(t)}(l)$ is 1 when $y_{t}=l$ and equals 0 otherwise, if all units in $\bm{h}_{1}^{(t)}$ are between 0 and 1, then $\bm{h}_{2}^{(t)}(l,m)=\bm{\Psi}_{m,l}\cdot \bm{h}_{1}^{(t)}(m)$ when $l=y_{t}$ and $\bm{h}_{2}^{(t)}(l,m)=0$ otherwise. According to equation (\ref{eqn:GHMNN_3}), the connection between $\bm{h}_{2}^{(t)}(l,m)$ and $\bm{h}_{3}^{(t)}(j)$ is 1 if $j=m$, and 0 otherwise. Hence,
\begin{equation}
\bm{h}_{3}^{(t)}(j)=\sum_{l=1}^{c} \bm{h}_{2}^{(t)}(l,j)=\bm{\Psi}_{j,y_{t}}\cdot \bm{h}_{1}^{(t)}(j).
\end{equation}
Thus, $\bm{h}_{3}^{(t)}=\bm{h}_{1}^{(t)} \operatorname{diag}(\bm{\Psi}_{y_{t}})$.
\end{proof}
\begin{theorem}
\label{maintheorem}
An {HMRNN} with parameters $\bm{\pi}$ ($1 \times k$ stochastic vector), $\bm{P}$ ($k \times k$ stochastic matrix), and $\bm{\Psi}$ ($k \times c$ stochastic matrix), and with layers defined as in equations (\ref{eqn:GHMNN_1}-\ref{eqn:GHMNN_4}), produces output neuron $o^{(T)}$
whose value is the negative log-likelihood of a corresponding HMM.
\end{theorem}
\begin{proof}
Note that, based on Lemma \ref{lemma1} and equation (\ref{eqn:GHMNN_1}), $\bm{h}_{3}^{(t)}=\bm{h}_{3}^{(t-1)} \cdot \bm{P} \cdot \operatorname{diag}(\bm{\Psi}_{y_{t}})$ for $t \in \{1,...,T\}$, assuming that $\bm{h}_{1}^{(t)}(j) \in [0,1]$ for $j \in \{1,..,k\}$. Since $\bm{\alpha}_{t}=\bm{\alpha}_{t-1} \cdot \bm{P} \cdot \operatorname{diag}(\bm{\Psi}_{y_t})$, if $\bm{h}_{3}^{(t-1)}=\bm{\alpha}_{t-1}$, then
$\bm{h}_{1}^{(t)}(j) \in [0,1]$ for $j \in \{1,..,k\}$ and therefore $\bm{h}_{3}^{(t)}=\bm{\alpha}_{t}$. The initial condition $\bm{h}_{3}^{(0)}=\bm{\alpha}_{0}$ holds since $\bm{h}_{1}^{(0)}=\bm{\pi}^\top$ implies that $\bm{h}_{3}^{(0)}=\bm{\pi}^\top \cdot \operatorname{diag}(\bm{\Psi}_{y_{0}})=\bm{\alpha}_{0}$. Therefore, by induction, $\bm{h}_{3}^{(T)}=\bm{\alpha}_{T}$, and $o^{(T)}=-\log(\bm{\alpha}_{T} \cdot \bm{1}_{k \times1})$, which is the negative logarithm of the HMM likelihood based on equation (\ref{eqn:alpha}).
\end{proof}
\section{Experiments and Results}\label{sec:results}
In study 1, we demonstrate that when an HMRNN is trained on the same data as an HMM, the two models yield statistically similar parameter estimates.
More specifically, study 1 provides a simulation-based validation of Theorem \ref{maintheorem},
according to which an HMRNN and an HMM trained on the same data share the same likelihood.
In study 2, we use disease progression data to demonstrate that an HMRNN combining an HMM with an additional predictive neural network attains better predictive performance than its HMM component, while still yielding an interpretable parameter solution. Theorem \ref{maintheorem} does not apply to study 2, since the inclusion of an additional predictive neural network in the HMRNN yields a likelihood function different from the likelihood of the constituent HMM.
\subsection{Study 1: HMRNN Reduces to an HMM}\label{sec:study1}
We demonstrate that an {HMRNN} trained via gradient descent yields statistically similar solutions to Baum-Welch. We show this with synthetically generated observation sequences for which the true HMM parameters are known.
We simulate systems with state spaces $S=\{1,2,\dots,k\}$ that begin in state $1$, using $k=5$, $10$, or $20$ states. These state sizes are consistent with disease progression HMMs, which often involve fewer than 10 states \cite{tumor_cytogenetics,jackson_multistate_models,sukkar_hmm}. We assume that each state `corresponds' to one observation, implying the same number of states and observations ($c=k$). The probability of correctly observing a state ($P(y_{t}=x_{t})$) is $\psi_{ii}$, which is the diagonal of $\bm{\Psi}$ and is the same for all states. We simulate systems with $\psi_{ii}=0.6, 0.75$, and $0.9$.
We test three variants of the transition probability matrix $\bm{P}$. Each is defined by its same-state transition probability $p_{ii}$, which is the same for all states. For all $\bm{P}$ the probability of transitioning to higher states increases with state membership; this is known as `increasing failure rate' and is a common property for Markov processes. As $p_{ii}$ decreases, the rows of $\bm{P}$ stochastically increase, i.e., lower values of $p_{ii}$ imply a greater chance of moving to higher states. We use values of $p_{ii}=0.4, 0.6$, and $0.8$, for 27 total simulations ($k \in \{5,10,20\} \times \psi_{ii} \in \{0.6,0.75,0.9\} \times p_{ii} \in \{0.4,0.6,0.8\}$).
For each of the 27 simulations, we generate 100 trajectories of length $T=60$; this time horizon might practically represent one hour of data collected each minute or two months of data collected each day. Initial state probabilities are fixed at $1$ for state $1$ and $0$ otherwise. Transition parameters are initialized based on the observed number of transitions in each dataset, using each observation as a proxy for its corresponding state. Since transition probabilities are initialized assuming no observation error, the emission matrices are correspondingly initialized using $\psi_{ii}=0.95$ (with the remaining 0.05 distributed evenly across all other states). For Baum-Welch and HMRNN, training stopped when no parameter changed by more than 0.001. For each simulation, we compare Baum-Welch's and the HMRNN's average Wasserstein distance between the rows of the estimated and ground truth $\bm{P}$ and $\bm{\Psi}$ matrices. This serves as a measure of each method's ability to recover the true data-generating parameters. We also compare the Baum-Welch and HMRNN solutions' log-likelihoods using a separate hold-out set of 100 trajectories.
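The simulation design above can be sketched as follows. The exact off-diagonal allocation of the increasing-failure-rate transition matrices is not specified in the text, so the even spread of mass over higher-numbered states and the absorbing final state are assumptions of this illustration:

```python
import numpy as np

def simulate_trajectories(k, p_ii, psi_ii, T=60, n_seq=100, seed=0):
    """Generate synthetic observation sequences in the style of Study 1.

    Transition matrix: same-state probability p_ii, remaining mass spread
    evenly over higher-numbered states (assumption), last state absorbing
    (assumption). Emissions: correct symbol with probability psi_ii,
    errors spread evenly over the other k - 1 symbols.
    """
    P = np.zeros((k, k))
    for i in range(k - 1):
        P[i, i] = p_ii
        P[i, i + 1:] = (1.0 - p_ii) / (k - 1 - i)
    P[k - 1, k - 1] = 1.0

    Psi = np.full((k, k), (1.0 - psi_ii) / (k - 1))
    np.fill_diagonal(Psi, psi_ii)

    rng = np.random.default_rng(seed)
    Y = np.empty((n_seq, T), dtype=int)
    for n in range(n_seq):
        x = 0                                  # all chains start in state 1 (index 0)
        for t in range(T):
            Y[n, t] = rng.choice(k, p=Psi[x])  # emit, then transition
            x = rng.choice(k, p=P[x])
    return P, Psi, Y
```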
Across all simulations, the average Wasserstein distance between the rows of the true and estimated transition matrices was 0.191 for Baum-Welch and 0.178 for HMRNN (paired $t$-test $p$-value of 0.483). For the emission matrices, these distances were 0.160 for Baum-Welch and 0.137 for HMRNN (paired $t$-test $p$-value of 0.262). This suggests that Baum-Welch and the HMRNN recovered the ground truth parameters with statistically similar degrees of accuracy. This can be seen in Figure \ref{fig:p_recovery}, which presents the average estimated values of $p_{ii}$ and $\psi_{ii}$ under each model. Both models' estimated $p_{ii}$ values are, on average, within 0.05 of the ground truth values, while they tended to estimate $\psi_{ii}$ values of around 0.8 regardless of the true $\psi_{ii}$. Note that, while Baum-Welch was slightly more accurate at estimating $p_{ii}$ and $\psi_{ii}$, the overall distance between the ground truth and estimated parameters did not significantly differ between Baum-Welch and the HMRNN.
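For row distributions on the ordered support $\{1,\dots,k\}$ with unit spacing, the 1-Wasserstein distance reduces to the $L^1$ distance between cumulative distribution functions; a small sketch of the comparison metric, assuming this standard definition:

```python
import numpy as np

def avg_row_wasserstein(A, B):
    """Average 1-Wasserstein distance between corresponding rows of two
    row-stochastic matrices, treating each row as a distribution over the
    ordered support {0, ..., k-1}. On a discrete line with unit spacing,
    W1 equals the L1 distance between the two CDFs."""
    cdf_diff = np.cumsum(A - B, axis=1)   # difference of row-wise CDFs
    return np.abs(cdf_diff).sum(axis=1).mean()
```

For example, the distance between a point mass on state 0 and a point mass on state 2 is 2, the number of states the mass must travel.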
For each simulation, we also compute the log-likelihood of a held-out set of 100 sequences under the Baum-Welch and HMRNN parameters, as a measure of model fit. The average holdout log-likelihoods under the ground truth, Baum-Welch, and HMRNN parameters are -9250.53, -9296.03, and -9303.27, respectively (paired $t$-test $p$-value for Baum-Welch/HMRNN difference of 0.440). Thus, Baum-Welch and HMRNN yield similar model fit on held-out data.
\begin{figure}
\centering
\includegraphics[scale=0.3]{study1_graph.png}
\caption{Results from Study 1. GD=Gradient Descent. Estimated $p_{ii}$ (left) and $\psi_{ii}$ (right) under Baum-Welch and HMRNN, shown by ground truth parameter value. Results for each column are averaged across 9 simulations. Dashed lines indicate ground truth $p_{ii}$ (left) and $\psi_{ii}$ (right) values, and error bars indicate 95\% confidence intervals (but do not represent tests for significant differences). Baum-Welch and the HMRNN produce near-identical parameter solutions according to the Wasserstein distance metric.}
\label{fig:p_recovery}
\end{figure}
\subsection{Study 2: HMRNN Improves Predictive Accuracy over HMM}
\label{sec:study2}
We demonstrate how combining an {HMRNN} with other predictive neural networks improves predictive accuracy and offers novel clinical interpretations over a standard HMM, using an Alzheimer's disease case study. Recall that, by incorporating an additional predictive neural network into an {HMRNN}, its likelihood function differs from that of a traditional HMM but it still produces interpretable state transition and emission parameters. We test our {HMRNN} on clinical data from $n=426$ patients with mild cognitive impairment (MCI), collected over the course of three ($n=91$), four ($n=106$), or five ($n=229$) consecutive annual clinical visits \cite{adni}. Given MCI patients' heightened risk of Alzheimer's, modeling their symptom progression is of considerable clinical interest. We analyze patients' overall cognitive functioning based on the Mini Mental Status Exam (MMSE; \cite{MMSEref}).
MMSE scores range from 0 to 30, with score categories for `no cognitive impairment' (scores of 27-30), `borderline cognitive impairment' (24-26), and `mild cognitive impairment' (17-23) \cite{MMSE_cutoffs_review}. Scores below 17 were infrequent (1.2\%) and were treated as scores of 17
for analysis. We use a 3-state latent space $S=\{0,1,2\}$, with $x_{t}=0$ representing `no cognitive impairment,' $x_{t}=1$ representing `borderline cognitive impairment,' and $x_{t}=2$ representing `mild cognitive impairment.' The observation space is $O=\{0,1,2\}$, using $y_{t}=0$ for scores of $27-30$, $y_{t}=1$ for scores of $24-26$, and $y_{t}=2$ for scores of $17-23$. This HMM therefore allows for the possibility of measurement error, i.e., that patients' observed score category $y_{t}$ may not correspond to their true diagnostic classification $x_{t}$.
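The score-to-category mapping above amounts to a simple binning rule:

```python
def mmse_category(score):
    """Map a raw MMSE score (0-30) onto the observation space O = {0, 1, 2}
    used above; scores below 17 are treated as 17, i.e., category 2."""
    if score >= 27:
        return 0   # no cognitive impairment (27-30)
    if score >= 24:
        return 1   # borderline cognitive impairment (24-26)
    return 2       # mild cognitive impairment (17-23, and <17 mapped up)
```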
To showcase the benefits of the {HMRNN}'s modularity, we
augment it with two predictive neural networks. First, we predict patient-specific initial state probabilities based on gender, age, degree of temporal lobe atrophy,
and amyloid-beta 42 levels (A$\beta$42, a relevant Alzheimer's biomarker \cite{abref_2}), using a single-layer neural network with a softmax activation. Second, at each time point, the probability of being in the most impaired state,
$\bm{h}_{1}^{(t)}(2)$,
is used to predict concurrent scores on the Clinical Dementia Rating (CDR, \cite{CDRref}), a global assessment of dementia severity, allowing another relevant clinical metric to inform parameter estimation. We use a single connection and sigmoid activation to predict patients' probability of receiving a CDR score above 0.5 (corresponding to `mild dementia'). The HMRNN is trained via gradient descent to minimize $o^{(T)}$ from equation (\ref{eqn:GHMNN_4}), plus the predicted negative log-likelihoods of patients' CDR scores. Figure \ref{fig:mmse_hmrnn} visualizes the structure of this augmented HMRNN.
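A schematic of the two augmentations might look as follows; the covariate values, weights, and state probability below are purely illustrative placeholders, not fitted parameters:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Covariate head: patient-specific initial state probabilities from
# gender, age, temporal lobe atrophy, and Abeta-42 (illustrative values).
x = np.array([1.0, 72.0, 0.35, 180.0])
W_pi = np.zeros((3, 4))                    # placeholder weights, one row per state
b_pi = np.array([1.0, 0.0, -2.0])          # placeholder biases
pi_patient = softmax(W_pi @ x + b_pi)      # replaces the shared pi of a plain HMM

# CDR head: a single weighted connection from the model's 'mild
# impairment' state probability, with a sigmoid activation.
w_cdr, b_cdr = 3.0, -1.5                   # placeholder weight and bias
p_mild_state = 0.4                         # illustrative state-2 probability at time t
p_cdr_above_half = sigmoid(w_cdr * p_mild_state + b_cdr)
```

Both heads are differentiable, so gradient descent updates their weights jointly with $\bm{\pi}$, $\bm{P}$, and $\bm{\Psi}$.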
\begin{figure}[t]
\centering
\includegraphics[scale=.15]{fig2.png}
\caption{Augmented HMRNN for Alzheimer's case study. CDR$^{(t)}$ refers to the predicted CDR classification (above or below 0.5) at time $t \in \{0,1,2,3,4\}$. `Lobe' refers to a measure of temporal lobe atrophy. Units $\bm{h}_{y}^{(t)}$ are a one-hot encoded representation of the MMSE score category at time $t$.}
\label{fig:mmse_hmrnn}
\end{figure}
We compare the {HMRNN} to a standard HMM without these neural network augmentations, trained using Baum-Welch, an expectation-maximization algorithm \cite{hmm_seminal}. We assess parameter solutions' ability to predict patients' final MMSE score categories from their initial score categories, using 10-fold cross-validation. We evaluate performance using weighted log-loss $L$,
i.e., the average log-probability placed on each final MMSE score category. This metric accounts for class imbalance and rewards models' confidence in their predictions, an important component of medical decision support \cite{bussone_trust}. We also report $\bar{p}$, the average probability placed on patients' final MMSE scores (computed directly from $L$). We train all models using a relative log-likelihood tolerance of $0.001\%$. Runtimes for Baum-Welch and the HMRNN are 2.89 seconds and 15.24 seconds, respectively.
Model results appear in Table \ref{tab:mmse_parameters}. Note that the {HMRNN}'s weighted log-loss $L$ is significantly lower than Baum-Welch's (paired $t$-test $\mbox{p-value}=2.396\times10^{-6}$), implying greater predictive performance. This is supported by Figure \ref{fig:mmse_plot}, which shows $\bar{p}$, the average probability placed on patients' final MMSE scores by score category. Note that error bars represent marginal sampling error and do not represent statistical comparisons between Baum-Welch and HMRNN.
The {HMRNN} also yields lower transition probabilities and lower estimated diagnostic accuracy for the MMSE (i.e., lower diagonal values of $\bm{\Psi}$) than Baum-Welch. For instance, the baseline HMM estimates at least an 80\% chance of correctly identifying borderline and mild cognitive impairment ($\bm{\Psi}_{22} = 0.819$ and $\bm{\Psi}_{33} = 0.836$). These probabilities are (respectively) only 54.8\% and 68.7\% under the {HMRNN},
suggesting that score changes are more likely attributable to testing error as opposed to true state changes.
\begin{table}[t]
\caption{Results from Alzheimer's disease case study. $\bm{\pi}$ is initial state distribution, $\bm{P}$ is state transition matrix, $\bm{\Psi}$ is emission distribution matrix, $L$ is weighted log-loss, and $\bar{p}$ is average probability placed on ground truth score categories.}
\centering
\begin{tabular}{|l|c|c|}
\hline
& Baum-Welch & {HMRNN} \\
\hline
$\bm{\pi}$ & $ \begin{array}{ccc} 0.727 & 0.271 & 0.002 \end{array}$ & $ \begin{array}{ccc} 0.667 & 0.333 & 0.000 \end{array}$ \\
\hline
$\bm{P}$ & $\begin{array}{ccc} 0.898 & 0.080 & 0.022 \\ 0.059 & 0.630 & 0.311 \\ 0.000 & 0.016 & 0.984 \end{array}$ & $\begin{array}{ccc} 0.970 & 0.028 & 0.002 \\ 0.006 & 0.667 & 0.327 \\ 0.000 & 0.003 & 0.997 \end{array}$\\
\hline
$\bm{\Psi}$ & $\begin{array}{ccc} 0.939 & 0.060 & 0.001 \\ 0.175 & 0.819 & 0.006 \\ 0.004 & 0.160 & 0.836 \end{array}$ & $\begin{array}{ccc} 0.930 & 0.067 & 0.003 \\ 0.449 & 0.548 & 0.003 \\ 0.005 & 0.308 & 0.687 \end{array}$ \\
\hline
$L$ & -0.992 & -0.884 \\
\hline
$\bar{p}$ & 0.371 & 0.413 \\
\hline
\end{tabular}
\label{tab:mmse_parameters}
\end{table}
\begin{figure}
\centering
\includegraphics[scale=0.3]{study2_graph.png}
\caption{Results from study 2. GD=Gradient Descent. Plot shows average probability placed on final MMSE scores, by score category. Recall that the HMRNN significantly outperforms Baum-Welch on average (paired $t$-test $p$-value$=2.396\times10^{-6}$). As seen in the figure, this effect is consistent across score categories. Error bars indicate 95\% confidence intervals, and do not represent tests for significant differences.}
\label{fig:mmse_plot}
\end{figure}
\section{Discussion}\label{sec:discussion}
We outline a flexible approach for HMM estimation using neural networks. The {HMRNN} produces statistically similar solutions to HMMs when trained on the same data. It can also combine HMMs with other neural networks to improve disease progression forecasting when additional patient data is available.
In our Alzheimer's disease experiment (study 2), augmenting an {HMRNN} with two predictive networks improves forecasting performance compared with a standard HMM trained with Baum-Welch. The {HMRNN} also yields a clinically distinct parameter interpretation, predicting poor diagnostic accuracy for the MMSE's `borderline' and `mild' impairment categories. This suggests that fewer diagnostic categories might improve MMSE utility, which aligns with existing research \cite{MMSE_cutoffs_review} and indicates that the {HMRNN} might be used to improve the clinical utility of HMM parameter solutions. We also make a novel theoretical contribution by formulating discrete-observation HMMs as a special case of RNNs and proving coincidence of their likelihood functions.
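The equivalence rests on the fact that the HMM forward algorithm is itself a recurrence over time steps, with the vector of forward probabilities playing the role of an RNN hidden state. A minimal numpy sketch with illustrative toy parameters (a two-state model, not the fitted values from Table~\ref{tab:mmse_parameters}):

```python
import numpy as np

def hmm_log_likelihood(pi, P, Psi, obs):
    """Forward algorithm written as an RNN-style recurrence.

    pi:  (S,) initial state distribution
    P:   (S, S) transition matrix, P[i, j] = Pr(s_{t+1}=j | s_t=i)
    Psi: (S, O) emission matrix, Psi[i, k] = Pr(o_t=k | s_t=i)
    obs: sequence of observed category indices
    """
    # "Hidden state" of the recurrence: unnormalized forward probabilities.
    alpha = pi * Psi[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ P) * Psi[:, o]   # one RNN-like update per time step
    return np.log(alpha.sum())

# Toy two-state model: the recurrence matches brute-force enumeration
# over all hidden state sequences.
pi = np.array([0.6, 0.4])
P = np.array([[0.7, 0.3], [0.2, 0.8]])
Psi = np.array([[0.9, 0.1], [0.3, 0.7]])
obs = [0, 1, 1]

brute = 0.0
for s in np.ndindex(2, 2, 2):
    p = pi[s[0]] * Psi[s[0], obs[0]]
    for t in range(1, 3):
        p *= P[s[t - 1], s[t]] * Psi[s[t], obs[t]]
    brute += p
assert np.isclose(np.exp(hmm_log_likelihood(pi, P, Psi, obs)), brute)
```

Implementing this recurrence with neural-network primitives (matrix products and elementwise gates) is what allows the HMM likelihood to be optimized by gradient descent alongside other networks.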
Future work might formally assess {HMRNN} time complexity. Yet since data sequences in healthcare are often shorter than in other domains that employ HMMs (e.g., speech analysis), runtimes will likely be reasonable for many healthcare datasets. Future work might also explore the {HMRNN} in other healthcare applications besides disease progression. Lastly, while we address the case of discrete-state, discrete-time HMMs, neural networks might also be used to implement more complex HMM structures. For instance, the HMRNN might be extended to continuous-time HMMs, for which parameter estimation is quite difficult. The HMRNN might also be extended to partially-observable Markov decision processes (POMDPs), in which latent state transitions are affected by actions taken at each time point. Since actions are a form of time-varying covariates, the HMRNN structure would easily allow for parameter estimation when action data are available.
\section*{Acknowledgment}
This research is partially supported by the Joint Directed Research and Development program at Science Alliance, University of Tennessee. Data collection and sharing for this project was funded by the Alzheimer's Disease Neuroimaging Initiative (ADNI) (National Institutes of Health Grant U01 AG024904) and DOD ADNI (Department of Defense award number W81XWH-12-2-0012).
This manuscript has been authored by UT-Battelle, LLC, under contract DE-AC05-00OR22725 with the US Department of Energy (DOE). The US government retains and the publisher, by accepting the article for publication, acknowledges that the US government retains a nonexclusive, paid-up, irrevocable, worldwide license to publish or reproduce the published form of this manuscript, or allow others to do so, for US government purposes. DOE will provide public access to these results of federally sponsored research in accordance with the DOE Public Access Plan (http://energy.gov/down-loads/doe-public-access-plan). This research was sponsored by the Laboratory Directed Research and Development Program of Oak Ridge National Laboratory, managed by UT-Battelle, LLC, for the US Department of Energy under contract DE-AC05-00OR22725.
\input{main.bbl}
\end{document}
\section{Background and Motivation}
\label{sec_background}
\subsection{DNNs for Streaming Video Analytics}
DNNs have become a core element of various video processing tasks such as frame classification, human action recognition, object detection, and face recognition. Though accurate, DNNs are computationally expensive, requiring significant CPU and memory resources. As a result, these DNNs are often too slow when running on mobile devices and become the latency bottleneck in video analytics systems. Huynh \textit{et al.}\xspace~\cite{deepmon} experimented with the 16-layer VGG~\cite{simonyan2014very} on the Samsung Galaxy S7 and noted that classification on a single image takes as long as 644 ms, leading to less than 2 fps for continuous classification. Motivated by this observation, we explore in {ApproxNet}\xspace how we can make DNN-based video analytics pipelines more efficient through content-aware approximate computation {\em within the neural network}.
{\noindent\bf ResNet}:
Deep DNNs are typically hard to train due to the vanishing gradient problem~\cite{resnet}. ResNet solves this problem by introducing a shortcut identity connection between layers, which helps a deeper network achieve at least the same accuracy as a shallower one as the number of layers increases. The unit of such connected layers is called a ResNet block. We leverage this key idea, namely that a deeper model produces no higher error than its shallower counterpart, to construct an architecture that becomes more accurate as execution proceeds deeper into the DNN.
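The identity-shortcut argument can be made concrete with a minimal fully-connected sketch (the real ResNet blocks use convolutions and batch normalization; the weight shapes here are illustrative): if the residual weights are driven to zero, the block degenerates to the identity, so adding blocks can never force a strictly worse function.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def resnet_block(x, W1, W2):
    """Minimal residual block: relu(x + F(x)) with F(x) = relu(x W1) W2.

    With W1 = W2 = 0 the block reduces to the identity on non-negative
    activations, which is the intuition behind "deeper is no worse".
    """
    return relu(x + relu(x @ W1) @ W2)

x = relu(np.random.randn(4, 8))       # non-negative activations
zero = np.zeros((8, 8))
# Zero residual weights make the block an exact identity.
assert np.allclose(resnet_block(x, zero, zero), x)
```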
{\noindent{\bf Spatial Pyramid Pooling (SPP)}~\cite{spp}}:
Popular DNN models, including ResNet, consist of convolutional and max-pooling (CONV) layers and fully-connected (FC) layers, and they require a fixed input image shape. Changing the input shape of a CNN typically requires re-designing its architecture. SPP is a special layer that eliminates this constraint. The SPP layer is added at the end of the CONV layers and provides the subsequent FC layers with a fixed-dimensional feature representation by pooling the CONV-layer output with bins whose shapes are proportional to the input shape. We use SPP layers to make the input shape an approximation knob.
{\noindent\bf Trading-off accuracy for inference latency}:
DNNs can have several variants due to different configurations, and these variants yield different accuracies and latencies. However, these variants are trained and run independently and cannot be switched efficiently at inference time to meet differing accuracy or latency requirements. For example, MCDNN~\cite{mcdnn} sets up an ensemble of (up to 68) model variants to satisfy different latency/accuracy/cost requirements. MSDNet~\cite{huang2017multi} enables five early exits in a {\em single} model but does not evaluate on streaming video with variable content or contention. Hence, we set out to design a single-model DNN that can handle the accuracy-latency trade-off at inference time and guarantees our video analytics system's performance under variable content and runtime conditions.
\subsection{Content-aware Approximate Computing}
IRA~\cite{laurenzano2016input} and VideoChef~\cite{xu2018videochef} first introduced the notion of content-aware approximation and applied the idea to image and video processing pipelines, respectively. These works showed for the first time how to tune approximation knobs as content characteristics change, {e.g.,}\xspace as the video scene becomes more complex. In particular, IRA performs approximation targeting individual images, while VideoChef exploits temporal similarity among frames in a video to further optimize the accuracy-latency trade-off. However, these works do not perform approximation for ML-based inferencing, which comprises the dominant form of video analytics. In contrast, we apply approximation to the DNN model itself, with the intuition that, depending on the complexity of the video frame, we want to feed an input of a different shape and produce output at a different depth of layers to achieve the target accuracy.
\subsection{Contention-aware Scheduling}
Managing the resource contention of multiple jobs on high-performance clusters is a very active area of work. Bubble-Up~\cite{mars2011bubble}, Bubble-Flux~\cite{yang2013bubble}, and Pythia~\cite{xu2018pythia} develop characterization methodologies to predict the performance degradation of latency-sensitive applications due to shared resources in the memory subsystem. SMiTe~\cite{zhang2014smite} and Paragon~\cite{delimitrou2013paragon} further extend such concurrent resource contention scenario to SMT processors and thousands of different unknown applications, respectively. On the other hand, we apply contention-aware approximation to the DNN model on the embedded and mobile devices, and consider the three major sources of contention -- CPU, GPU, and memory bandwidth.
\section{Conclusion}
\label{sec_conclusion}
There is a push to support streaming video analytics close to the source of the video, such as IoT devices, surveillance cameras, or AR/VR gadgets. However, state-of-the-art heavy DNNs cannot run on such resource-constrained devices. Further, the runtime conditions for the DNN's execution may change due to changes in the resource availability on the device, the content characteristics, or the user's requirements. Although several works create lightweight DNNs for resource-constrained clients, none of these can adapt to changing runtime conditions. We introduced {ApproxNet}\xspace, a video analytics system for embedded or mobile clients. It enables novel dynamic approximation techniques to achieve the desired inference latency and accuracy trade-off under changing runtime conditions. It achieves this by enabling two approximation knobs within a single DNN model, rather than creating and maintaining an ensemble of models. It then estimates the effect on latency and accuracy due to changing content characteristics and changing levels of contention. We show that {ApproxNet}\xspace can adapt seamlessly at runtime to such changes to provide low and stable latency for object classification on a video stream. We quantitatively compare its performance to ResNet, MCDNN, MobileNets, NestDNN, and MSDNet, five state-of-the-art object classification DNNs.
\section{Discussion}
\label{sec_discussion}
\noindent \textbf{Training the approximation-enabled DNN} of {ApproxNet}\xspace may take longer than conventional DNNs, since at each iteration of training, different outports and input shapes try to minimize their own softmax loss, and thus they may adjust internal weights of the DNN in conflicting ways. In our experiments with the VID dataset, we observe that our training time is around 3 days on our evaluation edge server (described in Section~\ref{sec_platform}), compared to 1 day to train a baseline ResNet-34 model. Since training is an offline process, this extra time is of less concern. Moreover, training can be sped up by using one of various actively researched techniques for optimizing training, such as~\cite{le2011optimization}.
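The joint objective described above can be sketched as a sum of per-outport cross-entropy terms over the shared trunk; equal weighting of the outports is our assumption for illustration, not a detail stated in the text:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def joint_outport_loss(logits_per_outport, labels):
    """Sum of per-outport cross-entropy losses. Because every outport's
    gradient flows into the shared trunk, the outports can pull the
    shared weights in conflicting directions, which is why joint
    training is slower than training a single-exit model."""
    total = 0.0
    for logits in logits_per_outport:
        probs = softmax(logits)
        total += -np.log(probs[np.arange(len(labels)), labels]).mean()
    return total

labels = np.array([0, 2])
logits = [np.random.randn(2, 3) for _ in range(6)]   # six outports
assert joint_outport_loss(logits, labels) > 0
```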
\noindent \textbf{Generalizing the approximation-enabled DNN to other architectures}. The shape and depth knobs are general to all CNN-based architectures. Theoretically, we can attach an outport (composed of an SPP layer to adjust the shape and a fully-connected layer to generate the classification) to any layer. The set of legitimate input shapes can be tricky to determine and depends on the specific architecture. Considering the training cost and exploration space, we select 7 shapes in multiples of 16 and 6 outports, in mostly equally spaced positions of the layers. More input shapes and outports enable finer-grained accuracy-latency trade-offs, and the granularity of such a trade-off space depends on the design goal.
\section{Evaluation}
\label{sec_evaluation}
\subsection{Evaluation Platforms} \label{sec_platform}
We evaluate {ApproxNet}\xspace by running it on an NVIDIA Jetson TX2~\cite{tx2}, which includes 256 NVIDIA Pascal CUDA cores, a dual-core Denver CPU, a quad-core ARM CPU, and 8 GB of memory unified between the CPU and GPU~\cite{unifor-mem}. The specification of this board is close to what is available in today's high-end smartphones such as the Samsung Galaxy S20 and Apple iPhone 12. We train the approximation-enabled DNN on a server with an NVIDIA Tesla K40c GPU with 12 GB dedicated memory and an octa-core Intel i7-2600 CPU with 24 GB RAM. For both the embedded device and the training server, we install Ubuntu 16.04 and TensorFlow v1.14.
\subsection{Datasets, Task, and Metrics}
\subsubsection{ImageNet VID dataset}
We evaluate {ApproxNet}\xspace on the video object classification task using the ILSVRC 2015 VID dataset~\cite{ILSVRC2015_VID}. Although the dataset was originally designed for object detection, we convert it so that the task is to classify the frame into one of the ground truth object categories. If multiple objects exist, the classification is considered correct if it matches any one of the ground truth classes; this rule applies to both {ApproxNet}\xspace and the baselines. According to our analysis, 89\% of the video frames contain a single object class, so the accuracy remains meaningful under this conversion.
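The detection-to-classification scoring rule can be stated in a few lines; the function and label values below are our illustration of the rule, not code from the system:

```python
def frame_top5_correct(top5_preds, ground_truth_classes):
    """A frame counts as correct if any top-5 prediction matches any of
    the frame's ground-truth object classes (the multi-object rule
    described in the text)."""
    return bool(set(top5_preds) & set(ground_truth_classes))

assert frame_top5_correct([3, 7, 1, 9, 4], [9])        # single-object frame
assert frame_top5_correct([3, 7, 1, 9, 4], [2, 7])     # multi-object frame
assert not frame_top5_correct([3, 7, 1, 9, 4], [0, 5])
```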
For the purpose of training, ILSVRC 2015 VID training set contains too many redundant video frames, leading to an over-fitting issue. To alleviate this problem, we follow the best practice in~\cite{kang2017t} such that the VID training dataset is sub-sampled every 180 frames and the resulting subset is mixed with ILSVRC 2014 detection (DET) training dataset to construct a new dataset with DET:VID=2:1. We use 90\% of this video dataset to train {ApproxNet}\xspace's DNN model and keep aside another 10\% as validation set to fine-tune {ApproxNet}\xspace (offline profiling). To evaluate {ApproxNet}\xspace's system performance, we use ILSVRC 2015 VID validation set -- we refer to this as the ``test set'' throughout the paper.
\subsubsection{ImageNet IMG dataset}
We also use the ILSVRC 2012 image classification dataset~\cite{deng2009imagenet} to evaluate the accuracy-latency trade-off of our single DNN. We use 10\% of the ILSVRC training set as our training set, the first 50\% of the validation set as our validation set to fine-tune {ApproxNet}\xspace, and the remaining 50\% of the validation set as our test set. The training-validation-test choices for both datasets follow common practice, and there is no overlap between the three. \textbf{Throughout the evaluation, we use the ImageNet VID dataset by default, unless we explicitly mention the use of the ImageNet IMG dataset.}
\subsubsection{Metrics}
We use latency and top-5 accuracy as the two metrics. The latency includes the overheads of the respective solutions, including the switching overhead and the execution time of the FCE, the RCE, and the scheduler.
\subsection{Baselines}\label{sec:baseline}
We start with the evaluation of static models without the ability to adapt, because we want to reveal the relative accuracy-latency trade-offs in the traditional setting, compared to the single approximation branches in {ApproxNet}\xspace. The baselines for this static experiment are model variants designed for different accuracy and latency goals: ResNet~\cite{resnet}, MobileNets~\cite{howard2017mobilenets}, and MSDNets~\cite{huang2017multi}, for which we use 5 execution branches in a single model that provide different accuracy-latency trade-offs. We use the ILSVRC IMG dataset to evaluate these static models, since this dataset is larger and has more classes.
We then proceed with the evaluation on streaming videos under varying resource contention. This brings two additional aspects to the evaluation: (1) how the video frames are processed in a timely, streaming manner, as frames in this case cannot be batched like images, and (2) how the technique can meet the latency budget in the presence of resource contention from other applications that can raise the processing latency.
The baselines we use are: MCDNN~\cite{mcdnn} as a representative of the multi-model approach, and MSDNets, a representative of the single-model approach (with multiple execution branches).
We also compare the switching overhead of our single-model design with the multi-capacity models in NestDNN~\cite{fang2018nestdnn}.
Unfortunately, we were not able to use BranchyNet~\cite{teerapittayanon2016branchynet}, because its DNN is not designed for the large images in the ImageNet dataset. BranchyNet was evaluated on the MNIST and CIFAR datasets in their paper, and the paper provides no guidance on the parameter settings for training, which makes it impractical to use on different datasets.
The details of each baseline are as follows.
\noindent \textbf {ResNet}: ResNet is the base DNN architecture of many state-of-the-art image and video object classification tasks, with superior accuracy to other architectures. While it was originally meant for server-class platforms, as resources on mobile devices increase, ResNet is also being used on such devices~\cite{lu2017modeling, zhang2018shufflenet, wang2018pelee}. We use ResNet with 18 layers (ResNet-18) and with 34 layers (ResNet-34) as base models. We modify the last FC layer to classify into the 30 labels of the VID dataset and fine-tune the whole model. ResNet-34 serves as the reference providing the upper bound of the target accuracy. ResNet architectures with more than 34 layers (\cite{resnet} has considered up to 152 layers) become impractical, as they are too slow to run on resource-constrained mobile devices and their memory consumption is too large for the memory on the board.
\noindent \textbf {MobileNets}: This refers to 20 model variants (trained by the original authors) specifically designed for mobile devices ($\alpha=1,0.75,0.5,0.35, shape=224,192,160,128,96$).
\noindent \textbf {MSDNets}: This refers to the 5 static execution branches to meet the different latency budgets in their anytime evaluation scenario. We have enhanced MSDNets with a scheduler to dynamically choose the static branches for dynamic runtime conditions. The former is compared with static models in the IMG dataset and the latter is compared with adaptive systems in the VID dataset. For the sake of simplicity, we reuse the same term (MSDNets) to refer to both.
\noindent \textbf {NestDNN}: This solution provides multi-capacity models with ResNet-34 architecture by varying the number of filters in each convolutional layer. We compose 9 descendant models, where (1) the seed model, or the smallest model, reduces the number of filters of all convolutional layers uniformly by 50\%, (2) the largest descendant model is exactly ResNet of 34 layers, and (3) the other descendant models reduce the number of filters of all convolutional layers uniformly by a ratio equally distributed between 50\% and 100\%. We only compare the switching overhead of NestDNN inside its descendant models with {ApproxNet}\xspace because NestDNN is not open-sourced and the paper does not provide enough details about the training process or the architecture.
\noindent \textbf {MCDNN}: We change the base model in MCDNN from VGG to the more recent ResNet for a fairer comparison. This system chooses between MCDNN-18 and MCDNN-34 depending on the accuracy requirement. MCDNN-18 uses two models: a specialized ResNet-18 followed by the generic ResNet-18. The specialized ResNet-18 is the same as the ResNet-18 except the last layer, which is modified to classify the most frequent $N$ classes only. This is MCDNN's key novelty that most inputs belong to the top $N$ classes, which can be handled by a reduced-complexity DNN. If the top-1 prediction label of the specialized model in MCDNN is not among the top $N$ frequent classes, then the generic model processes the input again and outputs its final predictions. Otherwise, MCDNN uses the top-5 prediction labels of the specialized model as its final predictions. We set $N=20$ that covers 80\% of training video frames in the VID dataset.
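The cascade logic described above can be sketched as follows; `specialized`, `generic`, and the class values are illustrative stand-ins, not MCDNN's actual interface:

```python
def mcdnn_predict(frame, specialized, generic, frequent_classes):
    """MCDNN cascade as described in the text: the specialized model
    covers the N most frequent classes; any other top-1 prediction
    falls through to the generic model, whose output is final."""
    top5 = specialized(frame)
    if top5[0] in frequent_classes:
        return top5                  # cheap path: specialized model only
    return generic(frame)            # rare class: run the full model

frequent = {0, 1, 2}
spec = lambda f: [1, 0, 2, 3, 4] if f == "common" else [7, 1, 0, 2, 3]
gen = lambda f: [7, 8, 9, 1, 0]
assert mcdnn_predict("common", spec, gen, frequent) == [1, 0, 2, 3, 4]
assert mcdnn_predict("rare", spec, gen, frequent) == [7, 8, 9, 1, 0]
```

With $N=20$ covering 80\% of training frames, most frames take the cheap path; the latency spikes appear on the remaining fraction that triggers the generic model.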
\subsection{Typical Usage Scenarios}
\label{sec:scenarios}
We use a few usage scenarios to compare the protocols, although {ApproxNet}\xspace can support finer-grained user requirements in latency or accuracy.
\begin{itemize}
\item \textbf{High accuracy, High latency (HH)} refers to the scenario where {ApproxNet}\xspace has less than 10\% (relative) accuracy loss from ResNet-34, our most accurate single model baseline. Accordingly, the runtime latency is also high to achieve such accuracy.
\item \textbf{Medium accuracy, Medium latency (MM)} has an accuracy loss less than 20\% from our base model ResNet-34.
\item \textbf{Low accuracy, Low latency (LL)} can tolerate an accuracy loss of up to 30\% in exchange for faster inference.
\item \textbf{Real time (RT)} scenario, by default, means the processing pipeline should keep up with 30 fps speed, {i.e.,}\xspace maximum 33.33 ms latency. This is selected if no requirement is specified.
\end{itemize}
\begin{figure*}[b]
\centering
\begin{minipage}[t]{0.49\linewidth}
\includegraphics[width=0.99\textwidth]{Figures/TradeOff_IMGMN_allshape_img_20180723_val.png}
\caption{Pareto frontier for test accuracy and inference latency on the ImageNet IMG dataset for ApproxNet compared to ResNet and MobileNets, the latter being specialized for mobile devices.}
\label{trade_off_img_dataset}
\end{minipage}
\hfill
\begin{minipage}[t]{0.49\linewidth}
\centering
\includegraphics[width=0.99\textwidth]{Figures/SystemComparison_VID_Caw_20181107.png}
\caption{Comparison of system performance in typical usage scenarios. ApproxNet is able to meet the accuracy requirement for all three scenarios. User requirements are shown in dashed lines.}
\label{trade_off_system}
\end{minipage}
\end{figure*}
\begin{table}[t]
\centering
\caption{Averaged accuracy and latency performance of ABs on the Pareto frontier in ApproxNet and those of the baselines on the validation set of the VID dataset. Note that the accuracy on the validation set can be higher due to its similarity with the training set. The validation accuracy is therefore only used to construct the lookup tables in {ApproxNet}\xspace and the baselines and does not reflect the true performance.}
\label{table:LUT_VID}
\begin{minipage}[]{0.99\linewidth}
\centering
\centerline{(a) Averaged accuracy and latency performance in ApproxNet.}
\begin{tabular}{lcccc}
\hline
Usage Scenario (rate) & Shape & Layers & Latency & Accuracy \\
\hline
{\bf HH} (32 fps) & 128x128x3 & 24 & 31.42 ms & 82.12\% \\
& 160x160x3 & 20 & 31.33 ms & 80.81\% \\
& 128x128x3 & 20 & 27.95 ms & 79.35\% \\
& 112x112x3 & 20 & 26.84 ms & 78.28\% \\
{\bf MM} (56 fps) & 128x128x3 & 12 & 17.97 ms & 70.23\% \\
& 112x112x3 & 12 & 17.70 ms & 68.53\% \\
& 96x96x3 & 12 & 16.78 ms & 67.98\% \\
{\bf LL} (62 fps) & 80x80x3 & 12 & 16.14 ms & 66.39\% \\
\hline
\end{tabular}
\end{minipage}
\begin{minipage}[]{0.99\linewidth}
\centering
\centerline{(b) Lookup table in MCDNN's scheduler.}
\begin{tabular}{lcccc}
\hline
Scenario (rate) & Shape & Layers & Latency & Accuracy \\
\hline
{\bf HH} (11 fps) & 224x224x3 & 34 & 88.11 ms & 77.71\% \\
{\bf MM/LL} (17 fps) & 224x224x3 & 18 & 57.83 ms & 71.40\% \\
\hline
\end{tabular}
\end{minipage}
\begin{minipage}[]{0.99\linewidth}
\centering
\centerline{(c) Lookup table in MSDNet's scheduler.}
\begin{tabular}{lcccc}
\hline
Scenario (rate) & Shape & Layers & Latency & Accuracy \\
\hline
{\bf HH} (5.2 fps) & 224x224x3 & 191 & 153 ms & 96.79\% \\
{\bf MM/LL} (16 fps) & 224x224x3 & 63 & 62 ms & 95.98\% \\
\hline
\end{tabular}
\end{minipage}
\begin{minipage}[]{0.99\linewidth}
\centering
\centerline{(d) Reference performance of single model variants or execution branches.}
\begin{tabular}{ccccl}
\hline
Model name (rate) & Shape & Layers & Latency & Accuracy \\
\hline
ResNet-34 (16 fps) & 224x224x3 & 34 & 64.44 ms & 85.86\% \\
ResNet-18 (22 fps) & 224x224x3 & 18 & 45.22 ms & 84.59\% \\
MSDNet-branch5 (5.2 fps) & 224x224x3 & 191 & 153 ms & 96.79\% \\
MSDNet-branch4 (5.6 fps) & 224x224x3 & 180 & 146 ms & 96.55\% \\
MSDNet-branch3 (7.8 fps) & 224x224x3 & 154 & 129 ms & 96.70\% \\
MSDNet-branch2 (10 fps) & 224x224x3 & 115 & 100 ms & 96.89\% \\
MSDNet-branch1 (16 fps) & 224x224x3 & 63 & 62 ms & 95.98\% \\
\hline
\end{tabular}
\end{minipage}
\end{table}
\begin{figure*}[t]
\centering
\includegraphics[width=0.9\linewidth]{Figures/per_category_trade_off.png}
\caption{Content-specific accuracy of Pareto frontier branches. Branches that fulfill real-time processing (30 fps) requirement are labeled in green. Note that both ResNet-18 and ResNet-34 models, though with the higher accuracy, cannot meet the 30 fps latency requirement.}
\label{per_category_trade_off}
\end{figure*}
\subsection{Accuracy-latency Trade-off of the Static Models}
We first evaluate {ApproxNet}\xspace on the ILSVRC IMG dataset on the accuracy-latency trade-off of each AB in our single DNN, as shown in Figure~\ref{trade_off_img_dataset}. Our ABs with higher latency (still satisfying the 30 fps requirement) have accuracy close to ResNet-18 and ResNet-34 but much lower latency than ResNet-34. Meanwhile, our ABs with reduced latency (25 ms to 30 ms) have accuracy close to MobileNets. Finally, our ABs are superior to all baselines in achieving extremely low latency (< 20 ms). In contrast, the single execution branches in MSDNet are much slower than {ApproxNet}\xspace and the other baselines: their latency ranges from 62 ms to 153 ms, which cannot meet the real-time processing requirement and becomes even worse in the face of resource contention. MobileNets can keep up with the frame rate but lacks the configurability in the latency dimension that {ApproxNet}\xspace has. Although MobileNets does win on the IMG dataset at higher accuracy, it needs an ensemble of models (like MCDNN) when it comes to video, where content characteristics, user requirements, and runtime resource contention change.
\subsection{Adaptability to Changing User Requirements} \label{sec:adp_user_req}
From now on, we switch to the ILSVRC VID dataset and show how {ApproxNet}\xspace can meet different user requirements for accuracy and latency. We list the averaged accuracy and latency of the Pareto frontier branches in Table~\ref{table:LUT_VID}(a), which can serve as a lookup table in the simplest scenario, {i.e.,}\xspace without considering frame complexity categories and resource contention. {ApproxNet}\xspace provides content-aware approximation and thus keeps a lookup table for each frame complexity category; to be responsive to resource contention, it updates the latency in the lookup table based on observed contention.
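Using the numbers from Table~\ref{table:LUT_VID}(a), the simplest form of this lookup can be sketched as follows. This is our illustration of the idea, not the paper's implementation: the actual scheduler additionally conditions on the frame complexity category and on contention-adjusted latency estimates.

```python
# Pareto-frontier branches from Table (a):
# (input shape, outport layers, latency in ms, accuracy in %).
BRANCHES = [
    (128, 24, 31.42, 82.12),
    (160, 20, 31.33, 80.81),
    (128, 20, 27.95, 79.35),
    (112, 20, 26.84, 78.28),
    (128, 12, 17.97, 70.23),
    (112, 12, 17.70, 68.53),
    ( 96, 12, 16.78, 67.98),
    ( 80, 12, 16.14, 66.39),
]

def pick_branch(latency_budget_ms):
    """Among branches whose profiled latency fits the budget, return
    the most accurate one; None if nothing fits."""
    feasible = [b for b in BRANCHES if b[2] <= latency_budget_ms]
    return max(feasible, key=lambda b: b[3]) if feasible else None

# Real-time (30 fps, i.e., 33.33 ms) admits the most accurate branch.
assert pick_branch(33.33) == (128, 24, 31.42, 82.12)
# A tighter 20 ms budget forces a 12-layer outport.
assert pick_branch(20.0) == (128, 12, 17.97, 70.23)
```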
We perform our evaluation on the entire test set, but without the baseline protocols incurring any switching penalty. Figure~\ref{trade_off_system} compares the accuracy and latency performance between {ApproxNet}\xspace and baselines in three typical usage scenarios ``HH'', ``MM'', and ``LL'' (AN denotes {ApproxNet}\xspace). In this experiment, {ApproxNet}\xspace uses the content-aware lookup table for each frame complexity category and chooses the best AB at runtime to meet the user accuracy requirement. MCDNN and MSDNet use similar lookup tables (Table~\ref{table:LUT_VID}(b) and (c)) to select among model variants or execution branches to satisfy the user requirement. We can observe that ``AN-HH'' achieves the accuracy of 67.7\% at a latency of 35.0 ms, compared to ``MCDNN-HH'' that has an accuracy of 68.5\% at the latency of 87.4 ms. Thus, MCDNN-HH is 2.5X slower while achieving 1.1\% accuracy gain over {ApproxNet}\xspace. On the other hand, MSDNet is more accurate and slower than all {ApproxNet}\xspace's branches. The lightest branch and heaviest branch achieve 4.3\% and 7.3\% higher accuracy respectively, and incur 1.8X and 4.4X higher latency respectively. In ``LL'' and ``MM'' usage scenarios, MCDNN-LL/MM is 2.8-3.3X slower than {ApproxNet}\xspace, while gaining in accuracy 3\% or less. MSDNets, on the other hand, is running with much higher latency (62 ms to 146 ms) and higher accuracy (72.0\% to 76.2\%).
Thus, compared to these baseline models, {ApproxNet}\xspace wins by providing lower latency, satisfying the real-time requirement, and flexibility in achieving various points in the (accuracy, latency) space.
\subsection{Adaptability to Changing Content Characteristics \& User Requirements}\label{system_latency}
We now show how {ApproxNet}\xspace can adapt to changing content characteristics and user requirements within the same video stream. The video stream, typically at 30 fps, may contain content of various complexities and this can change quickly and arbitrarily. Our study with the FCC on the VID dataset has shown that in 97.3\% cases the frame complexity category of the video will change within every 100 frames. Thus, dynamically adjusting the AB with frame complexity category is beneficial to the end-to-end system. We see in Figure~\ref{per_category_trade_off} that {ApproxNet}\xspace with various ABs can satisfy different (accuracy, latency) requirements for each frame complexity category. According to user's accuracy or latency requirement, {ApproxNet}\xspace's scheduler picks the appropriate AB. The majority of the branches satisfy the real-time processing requirement of 30 fps and can also support high accuracy quite close to the ResNet-34.
In Figure~\ref{temporal_system}, we show how {ApproxNet}\xspace adapts for a particular representative video from the test dataset. Here, we assume the user requirement changes every 100 frames between ``HH'', ``MM'', and ``LL''. This is a synthetic setting to observe how models perform at the time of switching. We assume a uniformly distributed model selection among 20 model variants for MCDNN's scheduler (in~\cite{mcdnn}, the MCDNN catalog uses 68 model variants), while the embedded device can only cache two models in RAM (more detailed memory results in Section~\ref{subsec:overhead}). In this case, MCDNN has a high probability of loading a new model variant into RAM from Flash whenever the user requirement changes. This results in a huge latency spike, typically from 5 to 20 seconds at each switch. It is notable that in some cases there are also small spikes in MCDNN following the larger spikes, because the generic model is invoked due to the specialized model's prediction of an ``infrequent'' class. On the other hand, {ApproxNet}\xspace and MSDNets incur little overhead in switching between any two branches, because they are all available within the same single-model DNN. Similar to the earlier results, {ApproxNet}\xspace wins over MSDNets with lower latency that meets the real-time processing requirement, even though its accuracy is slightly lower.
\begin{figure*}[t]
\centering
\begin{minipage}[thb]{0.4\linewidth}
\includegraphics[width=1\textwidth]{Figures/Runtime_VID_Caw.png}
\caption{Latency performance comparison with changing user requirements throughout video stream.}
\label{temporal_system}
\end{minipage}
\hfill
\begin{minipage}[thb]{0.58\linewidth}
\begin{minipage}[thb]{0.47\linewidth}
\includegraphics[width=1\textwidth]{Figures/MeanTransitionOverhead.png}
\centerline{(a) {ApproxNet}\xspace}
\end{minipage}
\begin{minipage}[thb]{0.52\linewidth}
\includegraphics[width=1\textwidth]{Figures/MeanTransitionOverheadNestDNN.png}
\centerline{(b) NestDNN}
\end{minipage}
\caption{Transition latency overhead across (a) ABs in ApproxNet and (b) descendant models in NestDNN. ``from'' branch on Y-axis and ``to'' branch on X-axis. Inside brackets: (input shape, outport depth). Latency unit is millisecond.}
\label{fig:transition}
\end{minipage}
\end{figure*}
\begin{figure*}[t]
\centering
\begin{minipage}[t]{0.49\linewidth}
\includegraphics[width=1\textwidth]{Figures/Contention_Latency_WholeSet.png}
\centerline{(a) Inference latency (w/ CPU contention)}
\end{minipage}
\hfill
\begin{minipage}[t]{0.49\linewidth}
\centering
\includegraphics[width=1\textwidth]{Figures/Contention_Accuracy_WholeSet.png}
\centerline{(b) Accuracy (w/ CPU contention)}
\end{minipage}
\hfill
\begin{minipage}[t]{0.49\linewidth}
\centering
\includegraphics[width=1\textwidth]{Figures/GPUContention_Latency.png}
\centerline{(c) Inference latency (w/ GPU contention)}
\end{minipage}
\caption{Comparison of ApproxNet vs MCDNN under resource contention. (a) and (b) inference latency and accuracy on the whole test dataset. (c) inference latency on a test video.}
\label{fig:system_contention}
\end{figure*}
To see in further detail the behavior of {ApproxNet}\xspace, we profile the mean transition time of all Pareto frontier branches under no contention, as shown in Figure~\ref{fig:transition}(a). Most of the transition overheads are extremely low, and only a few transitions are above 30 ms. Our optimization algorithm (Equations~\ref{eq:opt-latency-constraint} and~\ref{eq:opt-latency}) filters out such expensive transitions if they happen too frequently. In Figure~\ref{fig:transition}(b), we further show the transition time between the descendant models in NestDNN, which uses a multi-capacity model with a varying number of filters in the convolutional layers. The notation c50 stands for the descendant model with 50\% of the capacity of the largest variant, and so on. We observe that the lowest switching cost of NestDNN is still more than an order of magnitude higher than the highest switching cost of {ApproxNet}\xspace, and NestDNN's highest switching cost exceeds {ApproxNet}\xspace's by more than three orders of magnitude; the highest cost is what matters when trying to guarantee latencies against worst-case latency spikes. The transition overhead can be up to 25 seconds, from the smallest model to the largest, and is generally proportional to the amount of data loaded into memory. This is because NestDNN keeps only the model in use in memory and loads/unloads all others when switching. In summary, the benefit of {ApproxNet}\xspace comes from the fact that (1) it can accommodate multiple (accuracy, latency) points within one model through its two approximation knobs, while MCDNN has to switch between model variants, and (2) switching between ABs does not require loading large amounts of data or new computational graphs.
\subsection{Adaptability to Resource Contention}
\label{resource_contention}
We evaluate in Figure~\ref{fig:system_contention} the ability of {ApproxNet}\xspace to adapt to resource contention on the device, both CPU and GPU contention. First, we evaluate this ability by running a \textit{bubble application}~\cite{mars2011bubble, xu2018pythia} on the CPU that stresses the (shared) memory subsystem at different magnitudes while the video analytics DNN is running on the GPU. We generate bubbles of two different memory sizes: 10 KB (low contention) and 300 MB (high contention). The bubbles can be ``unpinned,'' meaning they can run on any of the cores, or ``pinned,'' in which case they run on 5 of the CPU cores, leaving the 6th for dedicated use by the video analytics application. The unpinned configuration causes higher contention. We introduce contention in phases: low pinned, low unpinned, high pinned, high unpinned.
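A memory-pressure bubble of this kind can be approximated with a simple touch loop; the cited bubble applications are native benchmarks, so this Python version is only a rough sketch of the same idea, with illustrative sizes and stride:

```python
import array
import time

def run_bubble(size_bytes, duration_s, stride=64):
    """Sketch of a CPU-side memory 'bubble': allocate `size_bytes` and
    repeatedly touch it with `stride`-byte steps (roughly one touch per
    cache line) for `duration_s` seconds, stressing the shared memory
    subsystem. Returns the number of touches performed."""
    buf = array.array("b", bytes(size_bytes))
    deadline = time.monotonic() + duration_s
    touched = 0
    while time.monotonic() < deadline:
        for i in range(0, size_bytes, stride):
            buf[i] = (buf[i] + 1) % 128  # read-modify-write to defeat caching tricks
            touched += 1
    return touched
```

On Linux, pinning such a bubble to a subset of cores (the ``pinned'' configuration above) can be done with `os.sched_setaffinity`; the unpinned variant simply omits that call.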
As shown in Figure~\ref{fig:system_contention}(a), MCDNN with its fastest model variant, MCDNN-18, runs between 40 ms and 100 ms depending on the contention level and has no adaptation. {ApproxNet}\xspace, on the other hand, has a mean latency of 25.66 ms under low contention (10 KB, pinned), which increases only slightly to 34.23 ms when the contention becomes high (300 MB, unpinned). We also show the accuracy comparison in Figure~\ref{fig:system_contention}(b): we are slightly better than MCDNN under low and high contention (by 2\% to 4\%) but slightly worse (within 4\%) for intermediate contention (300 MB, pinned).
To further evaluate {ApproxNet}\xspace under GPU contention, we run a synthetic matrix-manipulation application concurrently with {ApproxNet}\xspace. The contention level is varied in a controlled manner through the synthetic application, from 0\% to 100\% in steps of 10\%, where the control is the size of the matrix and, equivalently, the number of GPU threads dedicated to the synthetic application. The contention value is the GPU utilization when the synthetic application runs alone, as measured through \texttt{tegrastats}. As the baseline, we use the MCDNN-18 model again since, among the MCDNN ensemble, it comes closest to the video frame rate (33.3 ms latency). As shown in Figure~\ref{fig:system_contention}(c), without the ability to sense the GPU contention and react to it, the latency of MCDNN increases by 85.6\% and goes far beyond the real-time latency threshold. The latency of {ApproxNet}\xspace also increases with gradually increasing contention, from 20.3 ms at no contention to 30.77 ms at 30\% contention. However, when we raise the contention level to 50\% or above, {ApproxNet}\xspace's scheduler senses the contention and switches to a lighter-weight approximation branch so that the latency remains within 33.3 ms. The accuracy of MCDNN and {ApproxNet}\xspace was identical for this sample execution. Thus, this experiment bears out the claim that {ApproxNet}\xspace can respond to contention gracefully by recreating the Pareto curve for the current contention level and picking the appropriate AB.
\begin{figure*}[t]
\begin{minipage}[thb]{0.44\linewidth}
\begin{minipage}[thb]{1\linewidth}
\centering
\includegraphics[width=1\textwidth]{Figures/SystemOverhead_Caw.png}
\caption{System overhead in ApproxNet and MCDNN.}
\label{fig:timing_share}
\end{minipage}
\begin{minipage}[thb]{1\linewidth}
\centering
\includegraphics[width=1\textwidth]{Figures/bar_plot.png}
\caption{Memory consumption of solutions in different usage scenarios (unit of GB).}
\label{fig:memory-result}
\end{minipage}
\end{minipage}
\hfill
\begin{minipage}[thb]{0.53\linewidth}
\centering
\includegraphics[width=1\textwidth]{Figures/Latency_Contention.png}
\centerline{(a) Inference latency}
\includegraphics[width=1\textwidth]{Figures/Accuracy_Contention.png}
\centerline{(b) Accuracy}
\caption{Case study: performance comparison of ApproxNet vs MCDNN under resource contention for a Youtube video.}
\label{fig:system_contention_case_study}
\end{minipage}
\end{figure*}
\subsection{Solution Overheads} \label{subsec:overhead}
With the same experiment as in Section~\ref{system_latency}, we compare the overheads of {ApproxNet}\xspace, MCDNN, and MSDNet in Figure~\ref{fig:timing_share}. For {ApproxNet}\xspace, we measure the overhead of all the steps outside of the core DNN, {i.e.,}\xspace frame resizing, FCE, RCE, and the scheduler. For MCDNN, the dominant overhead is model switching and loading; it is measured at each switching point and averaged across all frames in each scenario. We see that {ApproxNet}\xspace, including overheads, is $7.0X$ to $8.2X$ faster than MCDNN and $2.4X$ to $4.1X$ faster than MSDNet. Further, in the ``MM'' and ``LL'' scenarios, {ApproxNet}\xspace's average latency is less than 30 ms, and thus {ApproxNet}\xspace achieves real-time processing of 30 fps videos. As mentioned before, MCDNN may be forced to reload the appropriate models whenever the user requirement changes. Even in the best case for MCDNN, where the requirement never changes or all its models are cached in RAM, {ApproxNet}\xspace is still $5.1X$ to $6.3X$ faster.
Figure~\ref{fig:memory-result} compares the peak memory consumption of {ApproxNet}\xspace and MCDNN in typical usage scenarios. {ApproxNet}\xspace-mixed, MCDNN-mixed, and MSDNet-mixed are the cases where the experiment cycles through the three usage scenarios. We test MCDNN-mixed with two model caching strategies: (1) the model variants are loaded from Flash when they get triggered (named ``re-load''), simulating the minimum RAM usage, and (2) the model variants are all loaded into RAM at the beginning (named ``load-all''), assuming the RAM is large enough. We see that {ApproxNet}\xspace, in going from the ``LL'' to the ``HH'' requirement, consumes 1.6 GB to 1.7 GB of memory, lower than MCDNN (1.9 GB and 2.4 GB). MCDNN's cascaded DNN design (a specialized model followed by a generic model) is the root cause: MCDNN consumes about 15\% more memory than our model even when it keeps only one model variant in RAM, and 32\% more if it loads two. For the mixed scenario, we can set an upper bound on {ApproxNet}\xspace's memory consumption: it never exceeds 2.1 GB no matter how we switch among ABs at runtime, an important property for proving operational correctness in mobile or embedded environments. Further, {ApproxNet}\xspace, with tens of ABs available, offers more choices than MCDNN and MSDNet, while MCDNN cannot accommodate more than two models in the available RAM.
Storage is a lesser concern, but it does affect the pushing out of updated models from the server to the mobile device, a common use case. {ApproxNet}\xspace's storage cost is only 88.8 MB, while MCDNN with 2 models takes 260 MB and MSDNet with 5 execution branches takes 177 MB. A primary reason is the duplication in MCDNN of the specialized and generic models, which have identical architectures except for the last FC layer. Thus, {ApproxNet}\xspace is well suited to the mobile usage scenario due to its low RAM and storage usage.
\subsection{Ablation Study with FCE} \label{sec_ablation_study_FCE}
\begin{figure}[t]
\centering
\includegraphics[width=0.5\linewidth]{Figures/SystemComparison_VID_Ablation_FCE.png}
\caption{Comparison of system performance in typical usage scenarios between {ApproxNet}\xspace with FCE and {ApproxNet}\xspace without FCE.}
\label{fig:ablation_FCE}
\end{figure}
To study the necessity of the FCE, we conduct an ablation study of {ApproxNet}\xspace with a content-agnostic scheduler. Different from the content-aware scheduler, this scheduler picks the same AB for all video frames regardless of the content (the contention level is held unchanged). The test dataset naturally has variations in content characteristics. With the same experiment as in Section~\ref{sec:adp_user_req}, we compare the accuracy and latency between {ApproxNet}\xspace with the FCE and without it in the three typical usage scenarios ``HH'', ``MM'', and ``LL''. Figure~\ref{fig:ablation_FCE} shows that {ApproxNet}\xspace with the FCE improves the accuracy by 2.0\% in the ``HH'' scenario at an additional 3.57 ms latency cost. In the ``MM'' and ``LL'' scenarios, {ApproxNet}\xspace with the FCE is either slightly slower or slightly more accurate than without it. To summarize, the FCE is most beneficial when the accuracy goal is high and the latency budget is correspondingly larger.
\subsection{Case Study with YouTube Video} \label{sec_case_study}
As a case study, we evaluate {ApproxNet}\xspace on a randomly picked YouTube video~\cite{YoutubeVideoLink} to see how it adapts to different resource contention scenarios at runtime (Figure~\ref{fig:system_contention_case_study}). The video is a car racing match with changing scenes and objects, and thus we evaluate the object classification performance. The interested reader may see a demo of {ApproxNet}\xspace and MCDNN on this and other videos at \url{https://approxnet.github.io/}. Similar to the control setup in Section~\ref{resource_contention}, we test {ApproxNet}\xspace and MCDNN at four different contention levels. Each phase is 300--400 frames long, and the latency requirement is 33 ms to keep up with the 30 fps video. We see that {ApproxNet}\xspace adapts to the resource contention well: it switches to a lightweight AB while still keeping accuracy high and comparable to MCDNN (seen on the demo site). Further, {ApproxNet}\xspace is always faster than MCDNN, while MCDNN, with a latency of 40--80 ms even without switching overhead, has degraded performance under resource contention and has to drop approximately two out of every three frames. As for accuracy, there are only occasional misclassifications in {ApproxNet}\xspace (in total, 51 out of 3,000 frames, or 1.7\%). MCDNN, in this case, has slightly better accuracy (24 misclassifications in 3,000 frames, or 0.8\%). We believe commonly used post-processing algorithms~\cite{kang2017t, han2016seq} can easily remove these occasional classification errors, letting both approaches achieve very low inaccuracy.
\section{Introduction}
\label{sec_introduction}
There is an increasing number of scenarios where various kinds of analytics must be run on live video streams, on resource-constrained mobile and embedded devices. For example, in a smart city traffic system, vehicles are redirected by detecting congestion from the live feeds of traffic cameras, while in Augmented Reality (AR)/Virtual Reality (VR) systems, scenes are rendered based on the recognition of objects, faces, or actions in the video. These applications require low latency for event classification or identification based on the content of the video frames. Most of these videos are captured at end-client devices such as IoT devices, surveillance cameras, or head-mounted AR/VR systems. Video transport over wireless networks is slow, and these applications often must operate under intermittent network connectivity. Hence such systems must be able to run video analytics in-place, on these resource-constrained client devices\footnote{For end-client devices, we will use the terms ``mobile devices'' and ``embedded devices'' interchangeably. The common characteristic is that they are computationally constrained. While the exact specifications vary across these classes of devices, both are constrained enough that they cannot run streaming video analytics without approximation techniques.}, to meet the low latency requirements of the applications.
\noindent\textbf{State-of-the-art is too heavy for embedded devices:}
Most video analytics queries involve performing inference over DNNs (mostly convolutional neural networks, {\em aka} CNNs) with a variety of functional architectures for the intended tasks, such as classification~\cite{simonyan2014very, szegedy2015going, resnet, huang2017densely}, object detection~\cite{liu2016ssd, redmon2016you, ren2015faster, shankar2020janus}, face recognition~\cite{parkhi2015deep, schroff2015facenet, wen2016discriminative, taigman2014deepface}, or action recognition~\cite{ji20133d, poppe2010survey, liu2016spatio, simonyan2014two}. With advancements in deep learning and the emergence of complex architectures, DNN-based models have become \textit{deeper} and {\em wider}. Correspondingly, their memory footprints and inference latencies have become significant. For example, DeepMon~\cite{deepmon} runs the VGG-16 model at approximately 1--2 frames per second (fps) on a Samsung Galaxy S7. ResNet~\cite{resnet}, in its 101-layer version, has a memory footprint of 2.8 GB and takes 101 ms to perform inference on a single video frame on the NVIDIA Jetson TX2. MCDNN, Mainstream, VideoStorm, and Liu \textit{et al.}\xspace~\cite{mcdnn, jiang2018mainstream, liu2019edge, videostorm} require either cloud or edge servers to achieve satisfactory performance. Thus, on-device inference with a low and stable latency ({i.e.,}\xspace 30 fps) remains a challenging task.
\begin{figure}[b]
\centering
\begin{minipage}{0.49\columnwidth}
\begin{minipage}{0.49\columnwidth}
\centering
\includegraphics[width=1\textwidth]{Figures/SimpleImg224Shape.JPEG}
\end{minipage}
\centering\hfill
\begin{minipage}{0.49\columnwidth}
\centering
\includegraphics[width=1\textwidth]{Figures/SimpleImg112Shape.JPEG}
\end{minipage}
\centerline{(a) Simple video frame}
\end{minipage}
\begin{minipage}{0.49\columnwidth}
\begin{minipage}{0.49\columnwidth}
\centering
\includegraphics[width=1\textwidth]{Figures/ComplexImg224Shape.JPEG}
\end{minipage}
\centering\hfill
\begin{minipage}{0.49\columnwidth}
\centering
\includegraphics[width=1\textwidth]{Figures/ComplexImg112Shape.JPEG}
\end{minipage}
\centerline{(b) Complex video frame}
\end{minipage}
\caption{Examples of using a heavy DNN (on the left) and a light DNN (on the right) for simple and complex video frames in a video frame classification task. The light DNN downsamples an input video frame to half the default input shape and gets prediction labels at an earlier layer. The classification is correct for the simple video frame (red label denotes the correct answer) but not for the complex video frame.}
\label{fig:easy_hard}
\end{figure}
\noindent\textbf{Content and contention aware systems: } The content characteristics of the video stream are one of the key runtime conditions for approximate video analytics, and they can be leveraged to achieve the desired latency-accuracy tradeoff. For example, as shown in Figure~\ref{fig:easy_hard}, if the frame is very simple, we can downsample it to half of its original dimensions and use a shallow or small DNN model to make an accurate prediction. If the frame is complex, the same shallow model might produce a wrong prediction, and a larger DNN model would be needed.
Resource contention is another important runtime condition that a video analytics system should be aware of. In several scenarios, mobile devices support multiple applications executing concurrently. For example, while an AR application is running, a voice assistant might kick in if it detects background conversation, or a spam filter might become active if emails are received. All these applications share common resources on the device, such as CPU, GPU, memory, and memory bandwidth, and thus lead to \textit{resource contention}~\cite{kayiranMICRO2014, ausavarungnirunASPLOS2018, bagchi2020new}, as these devices do not have advanced resource isolation mechanisms. How video analytics systems running on mobile devices can maintain a low inference latency under such variable resource availability and changing content characteristics, so as to deliver a satisfactory user experience, is currently an unsolved problem.
\noindent\textbf{Single-model vs. multi-model adaptive systems}: How do we architect the system to operate under such varied runtime conditions? Multi-model designs came first in the evolution of systems in this space. They build an ensemble of multiple models satisfying varying latency-accuracy conditions, plus a scheduling policy to choose among the models. MCDNN~\cite{mcdnn}, one of the most representative works, and well-known DNNs like ResNet and MobileNets~\cite{resnet, howard2017mobilenets} all fall into this category. On the other hand, single-model designs, which emerged after the multi-model designs, feature one model with internal tuning knobs to achieve different latency-accuracy goals. These typically have lower overheads for switching from one execution path to another, compared to the multi-model designs. MSDNet~\cite{huang2017multi}, BranchyNet~\cite{teerapittayanon2016branchynet}, and NestDNN~\cite{fang2018nestdnn} are representative works in this single-model category. However, none of these systems can adapt to runtime conditions, primarily changes in content characteristics and contention levels on the device.
\begin{table}[b]
\centering
\caption{ApproxNet's main features and comparison to existing systems.}
\label{table:features}
\scalebox{0.85}{
\begin{tabular}{|p{1.4in}|p{0.4in}|p{0.8in}|p{0.5in}|p{0.6in}|p{0.5in}|p{0.6in}|}
\hline
Solution & Single model & Considers switching overhead & Focused on video & Handles runtime conditions & Open-sourced & Replicable in our datasets\\
\hline
MCDNN [MobiSys'16] & \includegraphics[width=0.17in]{Figures/icon_not_support.png} & \includegraphics[width=0.17in]{Figures/icon_part_support.png} & \includegraphics[width=0.17in]{Figures/icon_not_support.png}& \includegraphics[width=0.17in]{Figures/icon_not_support.png} & \includegraphics[width=0.17in]{Figures/icon_part_support.png} & \includegraphics[width=0.17in]{Figures/icon_support.png} \\
\hline
MobileNets [ArXiv'17] & \includegraphics[width=0.17in]{Figures/icon_not_support.png} & \includegraphics[width=0.17in]{Figures/icon_not_support.png} & \includegraphics[width=0.17in]{Figures/icon_not_support.png}& \includegraphics[width=0.17in]{Figures/icon_not_support.png} & \includegraphics[width=0.17in]{Figures/icon_support.png} & \includegraphics[width=0.17in]{Figures/icon_support.png} \\
\hline
MSDNet [ICLR'18] & \includegraphics[width=0.17in]{Figures/icon_support.png} & \includegraphics[width=0.17in]{Figures/icon_not_support.png} & \includegraphics[width=0.17in]{Figures/icon_not_support.png}& \includegraphics[width=0.17in]{Figures/icon_not_support.png} & \includegraphics[width=0.17in]{Figures/icon_support.png} & \includegraphics[width=0.17in]{Figures/icon_support.png} \\
\hline
BranchyNet [ICPR'16] & \includegraphics[width=0.17in]{Figures/icon_support.png} & \includegraphics[width=0.17in]{Figures/icon_not_support.png} & \includegraphics[width=0.17in]{Figures/icon_not_support.png}& \includegraphics[width=0.17in]{Figures/icon_not_support.png} & \includegraphics[width=0.17in]{Figures/icon_support.png} & \includegraphics[width=0.17in]{Figures/icon_not_support.png} \\
\hline
NestDNN [MobiCom'18] & \includegraphics[width=0.17in]{Figures/icon_support.png} & \includegraphics[width=0.17in]{Figures/icon_part_support.png} & \includegraphics[width=0.17in]{Figures/icon_not_support.png}& \includegraphics[width=0.17in]{Figures/icon_part_support.png} & \includegraphics[width=0.17in]{Figures/icon_not_support.png} & \includegraphics[width=0.17in]{Figures/icon_not_support.png} \\
\hline
ApproxNet & \includegraphics[width=0.17in]{Figures/icon_support.png} & \includegraphics[width=0.17in]{Figures/icon_support.png} & \includegraphics[width=0.17in]{Figures/icon_support.png} & \includegraphics[width=0.17in]{Figures/icon_support.png} & \includegraphics[width=0.17in]{Figures/icon_support.png} & \includegraphics[width=0.17in]{Figures/icon_support.png} \\
\hline
\multicolumn{7}{|c|}{\includegraphics[width=0.17in]{Figures/icon_support.png} Supported \includegraphics[width=0.17in]{Figures/icon_part_support.png} Partially Supported \includegraphics[width=0.17in]{Figures/icon_not_support.png} Not Supported} \\
\hline
\multicolumn{7}{|l|}{Notes on partial support:} \\
\multicolumn{7}{|l|}{1. MCDNN and NestDNN only consider the switching overhead in memory size, but not in the execution latency.} \\
\multicolumn{7}{|l|}{2. NestDNN handles multiple concurrent DNN applications with joint optimization goals.} \\
\multicolumn{7}{|l|}{3. The core models in MCDNN are open-sourced while the scheduling components are not.} \\
\hline
\end{tabular}
}
\end{table}
\noindent {\bf Our solution: {ApproxNet}\xspace}.
In this paper, we present {ApproxNet}\xspace, our content and contention aware object classification system for streaming videos, geared toward GPU-enabled mobile/embedded devices. We introduce a novel workflow with a set of integrated techniques to solve the three main challenges mentioned above: (1) on-device real-time video analytics, (2) content and contention aware runtime calibration, and (3) a single-model design. The fundamental idea behind {ApproxNet}\xspace is to perform approximate computing with tuning knobs that are changed automatically and seamlessly within the same video stream. These knobs trade off inference accuracy for lower inference latency so as to match the frame rate of the video stream or the user's requirement on either a latency target or an accuracy target. The optimal configuration of the knobs is set with particular attention to the resource contention and the complexity of the video frames, because these runtime conditions strongly affect the accuracy and latency of the model.
In Table~\ref{table:features}, we compare {ApproxNet}\xspace with various representative prior works in this field. First of all, none of these systems~\cite{mcdnn, howard2017mobilenets, huang2017multi, teerapittayanon2016branchynet, fang2018nestdnn} is able to adapt to dynamic runtime conditions (changes in content characteristics and contention levels) as we can. Second, although most systems are able to run at variable operation points of performance, MCDNN~\cite{mcdnn} and MobileNets~\cite{howard2017mobilenets} use a multi-model approach and incur a high switching penalty. Those that work with a single model, namely MSDNet~\cite{huang2017multi}, BranchyNet~\cite{teerapittayanon2016branchynet}, and NestDNN~\cite{fang2018nestdnn}, do not consider switching overheads in their models (except partially NestDNN, which considers the switching cost in memory size), do not focus on video content, and do not show how their models can adapt to changing runtime conditions (except partially NestDNN, which considers joint optimization of multiple DNN workloads). For evaluation, we mainly compare to MCDNN, ResNet, and MobileNets as representatives of multi-model approaches, and to MSDNet as the single-model approach. We cannot compare to BranchyNet as it is not designed or evaluated for video analytics and is thus not suitable for our datasets; the BranchyNet paper evaluates on the small image datasets MNIST and CIFAR. We cannot compare to NestDNN since neither its models nor its source code, architecture, and hyperparameter details are publicly available, and we need those to replicate the experiments.
To summarize, we make the following contributions in this paper:
\begin{enumerate}
\item We develop an end-to-end, approximate video object classification system, {ApproxNet}\xspace, that can handle dynamically changing workload contention and video content characteristics on resource-constrained embedded devices. It achieves this by performing system context-aware and content-aware approximations with offline profiling and lightweight online sensing and scheduling techniques.
\item We design a novel workflow with a set of integrated techniques including the adaptive DNN that allows runtime accuracy and latency tuning \textit{within a single model}. Our design is in contrast to ensemble systems like MCDNN that are composed of multiple independent model variants capable of satisfying different requirements. Our single-model design avoids high switching latency when conditions change and reduces RAM and storage usage.
\item We design {ApproxNet}\xspace to make use of video features, rather than treating video as a sequence of image frames. Such characteristics that we leverage include the temporal continuity in content characteristics between adjacent frames. We empirically show that on a large-scale video object classification dataset, popular in the vision community, {ApproxNet}\xspace achieves a superior accuracy-latency tradeoff than the three state-of-the-art solutions on mobile devices, MobileNets, MCDNN, and MSDNet (Figures~\ref{trade_off_img_dataset} and \ref{trade_off_system}).
\end{enumerate}
The rest of the paper is organized as follows. Section~\ref{sec_background} gives the relevant background. Section~\ref{sec_overview} gives our high-level solution overview. Section~\ref{sec_technique} gives the detailed design. Section~\ref{sec_evaluation} evaluates our end-to-end system. Section~\ref{sec_discussion} discusses the details about training the DNN. Section~\ref{sec_related_work} highlights the related works. Finally, Section~\ref{sec_conclusion} gives concluding remarks.
\section{Overview}
\label{sec_overview}
Here we give a high-level overview of {ApproxNet}\xspace. In Section~\ref{sec_technique}, we provide details of each component.
\subsection{Design Principles and Motivation}
We set four design requirements for streaming video analytics on embedded devices, motivated by real-world scenarios and needs. {\em First}, the application should adapt to changing input characteristics, such as the complexity of the video frames, because the accuracy of the DNN may vary with the content characteristics. We find such changes happen often enough within a single video stream and without any clear predictive pattern. {\em Second}, the application should adapt to resource contention due to CPU, GPU, memory, or memory bandwidth shared with other concurrent applications on the same device. Such contention can happen frequently with co-location due to limited resources and the lack of clean resource isolation on these hardware platforms. Again, we find that such changes can happen without a clear predictive pattern. {\em Third}, the application should support different target accuracies or latencies at runtime with little transition overhead. For example, the application may require low latency when a time-critical query, such as detection of a miscreant, needs to be executed, and have no such constraint for other queries on the stream. Thus, the aggregate model must be able to make efficient transitions in the tradeoff space of accuracy, latency, and, less obviously, throughput, optionally using edge or cloud servers. {\em Fourth}, the application must provide real-time processing speed (30 fps) while running on the mobile/embedded device. To see three instances where these four requirements come together, consider mobile VR/AR games like Pokemon Go (some game consoles support multitasking, and accuracy requirements may change with the context of the game), autonomous vehicles (feeds from multiple cameras are processed on the same hardware platform resulting in contention, and emergency situations require lower latency than benign conditions such as driving for fuel efficiency), and autonomous drones (same arguments as for autonomous vehicles).
A {\em non-requirement} in our work is that multiple concurrent applications consuming the same video stream be jointly optimized. MCDNN~\cite{mcdnn}, NestDNN~\cite{fang2018nestdnn}, and Mainstream~\cite{jiang2018mainstream} bring significant design sophistication to handle the concurrency aspect. However, we are only interested in optimizing a single video analytics application.
\subsection{Design Intuition and Workflow}
\label{subsec:workflow}
\begin{figure}[t]
\centering
\includegraphics[width=0.99\textwidth]{Figures/SystemOverview.png}
\caption{Workflow of {ApproxNet}\xspace. The input is a video frame and an optional user requirement, and the outputs are prediction labels of the video frame. Note that the shaded profiles are collected offline to alleviate the online scheduler overhead.}
\label{fig:overview}
\end{figure}
To address these challenges, we propose a novel workflow with a set of integrated techniques to achieve a content and contention aware video object classification system. We show the overall structure, with three major functional units: \textit{executor}, \textit{profiler}, and \textit{scheduler}, in Figure~\ref{fig:overview}. {ApproxNet}\xspace takes a video frame and an optional user requirement for target accuracy or latency as inputs, and produces the top-5 prediction labels of the object classes as outputs.
The executor (Section~\ref{sec:approx_dnn}) is an approximation-enabled, single-model DNN. The single-model design greatly reduces the switching overhead, which supports the adaptive system, while the multiple \textbf{approximation branches (ABs)}, each with its own latency and accuracy characteristics, are the key to supporting dynamic content and contention conditions. {ApproxNet}\xspace is designed to provide real-time processing speed (30 fps) on our target device (NVIDIA Jetson TX2). Compared to previous single-model designs like MSDNet and BranchyNet, the novelty of {ApproxNet}\xspace lies in enabling both depth and shape as approximation knobs for runtime calibration.
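The two knobs can be pictured as a small grid of branch configurations. The shapes and depths below are illustrative placeholders, not {ApproxNet}\xspace's actual knob settings:

```python
from itertools import product

# Illustrative knob values (placeholders, not the profiled configuration):
INPUT_SHAPES = (224, 160, 112)   # input side length in pixels after resizing
OUTPORT_DEPTHS = (1, 2, 3)       # which early-exit outport to take

def enumerate_abs():
    """Enumerate approximation branches (ABs) as (shape, depth) pairs.
    Each AB would then be profiled offline for accuracy and latency."""
    return [{"shape": s, "depth": d}
            for s, d in product(INPUT_SHAPES, OUTPORT_DEPTHS)]
```

Each entry of this grid corresponds to one AB; with three values per knob the grid already yields nine distinct (accuracy, latency) operating points within the single model.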
The scheduler is the key component to react to the dynamic content characteristics and resource contention. Specifically, it selects an AB to execute by combining the precise accuracy estimation of each AB due to changing content characteristics via a {\bf Frame Complexity Estimator} ({\bf FCE}, Section~\ref{subsec_FCE}), the precise latency estimation of each AB due to resource contention via a {\bf Resource Contention Estimator} ({\bf RCE}, Section~\ref{subsec_RCE}), and the switching overhead among ABs (Section~\ref{sec:profiler}). It finally reaches a decision on which AB to use based on the user's latency or accuracy requirement and its internal accuracy, latency, and overhead estimation (Section~\ref{subsec:pareto}).
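A minimal sketch of this decision rule follows; the profile entries, class names, and switching costs are hypothetical, standing in for the offline profiles described below:

```python
def pick_ab(abs_profiles, frame_class, contention, current_ab,
            latency_sla_ms, switch_ms):
    """Sketch: pick the AB with the highest estimated accuracy whose
    estimated latency (plus any one-time switching overhead from the
    current AB, if we change branches) meets the latency SLA.

    abs_profiles: {ab: {"acc": {frame_class: accuracy},
                        "lat": {contention: latency_ms}}}
    switch_ms:    {(src, dst): one-time switching cost in ms}
    """
    best, best_acc = current_ab, -1.0
    for ab, prof in abs_profiles.items():
        lat = prof["lat"][contention]          # RCE-style latency estimate
        if ab != current_ab:
            lat += switch_ms.get((current_ab, ab), 0.0)
        acc = prof["acc"][frame_class]         # FCE-style accuracy estimate
        if lat <= latency_sla_ms and acc > best_acc:
            best, best_acc = ab, acc
    return best
```

Under a tight SLA the scheduler stays on the lighter branch; when the budget is relaxed, the switching overhead is worth paying and the more accurate branch wins.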
Finally, to achieve our goal of real-time processing, low switching overhead, and improved performance under dynamic conditions, we design an offline profiler (Section~\ref{sec:profiler}). We collect three profiles offline --- first, the accuracy profile for each AB on video frames of different complexity categories; second, the inference latency profile for each AB under variable resource contention, and third, the switching overhead between any two ABs.
{\bf Video-specific design}. We incorporate the following video-specific designs in {ApproxNet}\xspace, which are orthogonal to the techniques presented in prior works on video analytics, e.g., frame sampling~\cite{jiang2018mainstream, videostorm} and edge-device offloading~\cite{liu2019edge}.
\begin{enumerate}
\item
The FCE uses a Scene Change Detector (SCD) as a preprocessing module to further alleviate its online cost. This optimization is beneficial because it reduces the frequency with which the FCE is invoked (only when the SCD flags a scene change). This relies on the intuition that discontinuous jumps in frame complexity are uncommon in a video stream.
\item The scheduler decides whether to switch to a new AB or stay depending on how long it predicts the change in the video stream to last and the cost of switching.
\item We drive our decisions about the approximation knobs by the goal of keeping up with the streaming video rate (30 fps). We achieve this under most scenarios when evaluated with a comprehensive video dataset (the ILSVRC VID dataset).
\end{enumerate}
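A scene change detector of the kind item 1 describes can be sketched with grayscale-histogram differencing between consecutive frames; the histogram size and threshold below are illustrative, not the detector actually used in {ApproxNet}\xspace:

```python
def scene_changed(prev_hist, cur_hist, threshold=0.25):
    """Sketch of an SCD: flag a scene change when the L1 distance between
    the normalized grayscale histograms of two consecutive frames exceeds
    `threshold`. The FCE would only be re-invoked when this returns True."""
    total_prev = sum(prev_hist) or 1
    total_cur = sum(cur_hist) or 1
    dist = sum(abs(p / total_prev - c / total_cur)
               for p, c in zip(prev_hist, cur_hist))
    return dist > threshold
```

Because adjacent frames in a video rarely jump in complexity, this cheap check gates the more expensive FCE, which is the cost-saving intuition stated in item 1.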
\section{Related Work}
\label{sec_related_work}
\noindent\textbf{System-wise optimization:}
There have been many optimization attempts to improve the efficiency of video analytics and other ML pipelines by building low-power hardware and software accelerators for DNNs~\cite{deepx, reagen2016minerva, chen2017eyeriss, park2015big, han2016eie, parashar2017scnn, gao2017tetris, zhang2016cambricon} or by improving application performance using database optimization, either on-premise~\cite{mahgoub2019sophia} or on cloud-hosted instances~\cite{mahgoub2020optimuscloud}. These are orthogonal, and {ApproxNet}\xspace can also benefit from these optimizations. VideoStorm~\cite{videostorm}, Chameleon~\cite{jiang2018chameleon}, and Focus~\cite{hsieh2018focus} exploited various configurations and DNN models to handle video analytics queries in a situation-tailored manner.
ExCamera~\cite{excamera-nsdi-2017} and Sonic~\cite{mahgoub2021sonic} enabled low-latency video processing on the cloud using a serverless architecture (AWS Lambda~\cite{amazon_lambda}). Mainstream~\cite{jiang2018mainstream} proposed to share weights of DNNs across applications. These are all server-side solutions, requiring multiple models to be loaded simultaneously, which is challenging under resource constraints. NoScope~\cite{kang2017noscope} aimed to reduce the computation cost of video analytics queries on servers by leveraging a specialized model trained on a small subset of videos; its applicability is thus limited to the videos that the model has seen.
Closing-the-loop~\cite{xu2020closing} uses genetic algorithms to efficiently search for video editing parameters with lower computation cost. VideoChef~\cite{xu2018videochef} attempted to reduce the processing cost of video pipelines by dynamically changing approximation knobs of preprocessing filters in a content-aware manner. In contrast, {ApproxNet}\xspace, and the concurrently developed ApproxDet~\cite{xu2020approxdet} (for video-specific object detection), approximate in the core DNN, whose program structure is significantly different from and computationally heavier than that of filters. Due to this larger overhead of approximation in the core DNN, {ApproxNet}\xspace's adaptive tuning is challenging. Thus, we plan on using either distributed learning~\cite{ghoshal2015ensemble} or a reinforcement learning-based scheduler for refining this adaptive feature~\cite{thomas2018minerva}.
\noindent\textbf{DNN optimizations:}
Many solutions have been proposed to reduce the computation cost of a DNN by controlling the precision of edge weights~\cite{hubara2017quantized, gupta2015deep, zhou2016dorefa, rastegari2016xnor} and by restructuring or compressing a DNN model~\cite{denton2014exploiting, howard2017mobilenets, bhattacharya2016sparsification, iandola2016squeezenet, chen2015compressing, han2015learning, wen2016learning}. These are orthogonal to our work, and {ApproxNet}\xspace's one-model approximation-enabled DNN can be further optimized by adopting such methods. Several prior works present similar approximation knobs (input shape, outport depth). BranchyNet, CDL, and MSDNet~\cite{teerapittayanon2016branchynet, panda2016conditional, huang2017multi} propose early-exit branches in DNNs. However, BranchyNet and CDL validate only on small datasets like MNIST~\cite{MNIST} and CIFAR-10~\cite{CIFAR10} and do not show practical techniques for selecting these early-exit branches in an end-to-end pipeline. Such an adaptive system, in order to be useful (especially on embedded devices), needs to be responsive to resource contention, content characteristics, and users' requirements, {e.g.,}\xspace an end-to-end latency SLA. MSDNet targets a simple image classification task without a strong use case and does not show a data-driven manner of using the early exits; demonstrating the system's end-to-end latency on either a server-class or an embedded device would have strengthened it. BlockDrop~\cite{wu2018blockdrop} trains a policy network that determines at inference time whether to skip the execution of several residual blocks. However, its speedup is marginal and it cannot be applied directly to mobile devices for real-time classification.
\section{Design and Implementation}
\label{sec_technique}
\subsection{Approximation-enabled DNN}
\label{sec:approx_dnn}
{ApproxNet}\xspace's key enabler, an approximation-enabled DNN, is designed to support multiple accuracy and latency requirements at runtime \textit{using a single DNN model}. To enable this, we design a DNN that can be approximated using two approximation knobs. The DNN can take an input video frame in different shapes, which we call \textit{input shapes}, our first approximation knob and it can produce a classification output at multiple positions in the intervening layers, which we call \textit{outports}, our second approximation knob. There are doubtless other approximation knobs, {e.g.,}\xspace model quantization, frame sampling, and others depending on specific DNN models. These can be incorporated into {ApproxNet}\xspace and they will all fit within our novelty of the one-model DNN to achieve real-time on-device, adaptive inference. The appropriate setting of the tuning knobs can be determined on the device (as is done in our considered usage scenario) or, in case this computation is too heavyweight, this can be determined remotely and sent to the device through a reprogramming mechanism such as~\cite{panta2011efficient}.
Combining these two approximation knobs, {ApproxNet}\xspace creates various {\bf approximation branches (ABs)}, which trade off between accuracy and latency, and can be used to meet a particular user requirement. This tradeoff space defines a set of Pareto optimal frontiers, as shown in Figure~\ref{fig:pareto}. Here, the scatter points represent the accuracy and latency achieved by all ABs. A Pareto frontier defines the ABs which are either superior in accuracy or latency against all other branches.
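The Pareto frontier over ABs can be computed with a simple dominance check. The sketch below is a minimal illustration; the branch names and (accuracy, latency) numbers are invented rather than taken from our profiles.

```python
def pareto_frontier(branches):
    """Return the ABs not dominated in both accuracy and latency.

    `branches` is a list of (name, accuracy, latency_ms) tuples.
    An AB is dominated if some other AB is at least as good on both
    metrics and strictly better on at least one.
    """
    frontier = []
    for name, acc, lat in branches:
        dominated = any(
            (a >= acc and l <= lat) and (a > acc or l < lat)
            for _, a, l in branches
        )
        if not dominated:
            frontier.append((name, acc, lat))
    # Sort by latency so the frontier can be scanned under a latency budget.
    return sorted(frontier, key=lambda b: b[2])

abs_ = [("224x6", 0.72, 48.0), ("160x3", 0.63, 22.0),
        ("112x1", 0.51, 9.0), ("192x5", 0.60, 40.0)]
print(pareto_frontier(abs_))  # "192x5" is dominated by "160x3"
```

The scheduler only ever needs to consider the frontier branches, since any dominated AB is strictly worse on the relevant tradeoff.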
\begin{figure*}[t]
\begin{minipage}[t!]{0.35\linewidth}
\centering
\includegraphics[width=1\columnwidth]{Figures/pareto.png}
\caption{A Pareto frontier for trading-off accuracy and latency in a particular frame complexity category and at a particular contention level.}
\label{fig:pareto}
\end{minipage}
\hfill
\begin{minipage}[t!]{0.60\linewidth}
\centering
\includegraphics[width=1\columnwidth]{Figures/outport.png}
\caption{The outport of the approximation-enabled DNN.}
\label{fig:detailed_DNN}
\end{minipage}
\end{figure*}
\begin{figure*}[t]
\centering
\includegraphics[width=0.8\columnwidth]{Figures/DNN_arch2.png}
\caption{The architecture of the approximation-enabled DNN in {ApproxNet}\xspace.}
\label{fig:dnn_arch}
\end{figure*}
\begin{table}[t]
\centering
\caption{The list of the total 30 ABs supported for a baseline DNN of ResNet-34, given by the combination of the input shape and the outport from which the result is taken. ``--'' denotes the undefined settings.}
\label{table:a_value}
\scalebox{0.8}{
\begin{tabular}{p{0.75in}|cccccc}
Input shape & Outport 1 & Outport 2 & Outport 3 & Outport 4 & Outport 5 & Outport 6 \\
\hline
224x224x3 & 28x28x64 & 28x28x64 & 14x14x64 & 14x14x64 & 14x14x64 & 7x7x64 \\
192x192x3 & 24x24x64 & 24x24x64 & 12x12x64 & 12x12x64 & 12x12x64 & -- \\
160x160x3 & 20x20x64 & 20x20x64 & 10x10x64 & 10x10x64 & 10x10x64 & -- \\
128x128x3 & 16x16x64 & 16x16x64 & 8x8x64 & 8x8x64 & 8x8x64 & -- \\
112x112x3 & 14x14x64 & 14x14x64 & 7x7x64 & 7x7x64 & 7x7x64 & -- \\
96x96x3 & 12x12x64 & 12x12x64 & -- & -- & -- & -- \\
80x80x3 & 10x10x64 & 10x10x64 & -- & -- & -- & -- \\
\end{tabular}
}
\end{table}
We describe our design using ResNet as the base DNN, though our design is applicable to any other mainstream CNN consisting of convolutional (CONV) layers and fully-connected (FC) layers, such as VGG~\cite{simonyan2014very} and DenseNet~\cite{huang2017densely}. Figure~\ref{fig:dnn_arch} shows the design of our DNN using ResNet-34 as the base model. This enables 7 input shapes (${s\times s\times 3}$ for $s=224,192,160,$ $128,112,96,80$) and 6 outports (after 11, 15, 19, 23, 27, and 33 layers). We adapt the design of ResNet in terms of the stride, shape, number of channels, use of convolutional layer or maxpool, and connection of the layers. In addition, we create {\em stacks}, numbered 0 through 6, each having 4 or 6 ResNet layers and a variable number of blocks from the original ResNet design (Table~\ref{table:a_value}). We then design an outport (Figure~\ref{fig:detailed_DNN}) and connect one to each of stacks 1 to 6, so that we can obtain prediction labels by executing only the stacks (i.e., the constituent layers) up to and including that stack. The use of 6 outports is a pragmatic system choice---too small a number does not provide enough granularity to approximate in a content- and contention-aware manner, and too many leads to a high training burden. Further, to allow the approximation knob of downsampling the input frame to the DNN, we use the SPP layer at each outport to pool the feature maps of different shapes (due to different input shapes) into one unified shape and then connect with an FC layer. The SPP layer performs max-pooling on its input at three different levels $l=1,2,3$ with window size $\lceil a/l\rceil$ and stride $\lfloor a/l\rfloor$, where $a$ is the shape of the input to the SPP layer. Note that our choice of the 3-level pyramid pooling is a typical practice for using the SPP layer~\cite{spp}. In general, a higher value of $l$ requires a larger value of $a$ on the input of each outport, thereby reducing the number of possible ABs.
On the other hand, a smaller value of $l$ results in coarser representations of spatial features and thus reduces accuracy. To support the case $l=3$ in the SPP, we require that the input shape of an outport be no less than 7 pixels in width and height, \textit{i.e}., $a\ge7$. This results in ruling out some input shapes as in Table~\ref{table:a_value}. Our model has 30 configuration settings in total, instead of 7 $\times$ 6 (number of input shapes $\times$ number of outports) because too small input shapes cannot be used when the outport is deep.
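The window/stride formulas above imply that every admissible input shape is pooled into the same fixed-length representation. The sketch below is our reading of the standard SPP construction (the bin-count formula is ours, not lifted from the implementation):

```python
import math

def spp_bins(a, levels=(1, 2, 3)):
    """Number of pooled outputs per channel from an a-by-a feature map.

    Window ceil(a/l) and stride floor(a/l) yield an l-by-l grid at each
    pyramid level, so any admissible input size a >= 7 maps to the same
    1 + 4 + 9 = 14 values per channel -- which is what lets a single FC
    layer serve every input shape.
    """
    total = 0
    for l in levels:
        win, stride = math.ceil(a / l), math.floor(a / l)
        n = (a - win) // stride + 1   # pooled positions per dimension
        total += n * n
    return total

for a in (7, 10, 14, 28):
    print(a, spp_bins(a))   # every admissible shape -> 14 bins per channel
```

This also shows why $a \ge 7$ is required: with $l=3$, smaller inputs would not produce the full $3\times 3$ grid.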
To train {ApproxNet}\xspace towards finding the optimal parameter set $\theta$, we consider the softmax loss $L_{s,i}(\theta)$ defined for the input shape $s\times s\times 3$ and the outport $i$. The total loss function $L(\theta)$ that we minimize to train {ApproxNet}\xspace is a weighted average of $L_{s,i}(\theta)$ for all $s$ and $i$, defined as
\begin{equation}
L(\theta)=\sum_{\forall i} \frac{1}{n_i} \sum_{\forall s} L_{s,i}(\theta)
\end{equation}
where $n_i$ is the number of input shapes supported at outport $i$; dividing by $n_i$ normalizes the loss at each outport, making every outport equally important in the total loss function. For each mini-batch, we use 64 frames for each of the 7 input shapes.
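The aggregation of per-(shape, outport) losses can be sketched as follows; the loss values below are invented stand-ins for softmax losses from a forward pass.

```python
def total_loss(losses):
    """Combine per-(shape, outport) softmax losses into L(theta).

    `losses` maps outport index i -> {input shape s: L_{s,i}}.
    Each outport contributes equally: its shape losses are averaged
    over n_i, the number of shapes that outport supports.
    """
    total = 0.0
    for i, per_shape in losses.items():
        n_i = len(per_shape)          # shapes supported at outport i
        total += sum(per_shape.values()) / n_i
    return total

losses = {1: {224: 0.9, 160: 1.1},   # outport 1 supports two shapes here
          6: {224: 0.4}}             # deepest outport: only 224x224
print(total_loss(losses))  # (0.9 + 1.1)/2 + 0.4/1 = 1.4
```

Without the $1/n_i$ normalization, shallow outports (which support more shapes) would dominate the gradient signal.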
To train on a particular dataset or generalize to other architectures, we discuss more details in Section~\ref{sec_discussion}.
\begin{figure*}[t]
\centering
\includegraphics[width=0.7\columnwidth]{Figures/FrameComplexityCategorizer.png}
\caption{Workflow of the Frame Complexity Estimator.}
\label{fig:fce}
\end{figure*}
\begin{figure*}[t]
\begin{minipage}[thb]{0.54\linewidth}
\includegraphics[width=1\textwidth]{Figures/ConCat_EdgeMap.png}
\caption{Sample frames (first row) and edge maps (second row), going from left to right as simple to complex. Normalized mean edge values from left to right: 0.03, 0.24, 0.50, and 0.99 with corresponding frame complexity categories: 1, 3, 6, and 7}
\label{fig:sample_edge_maps}
\end{minipage}
\hfill
\begin{minipage}[thb]{0.44\linewidth}
\includegraphics[width=1\textwidth]{Figures/sensitivity_curve.png}
\caption{Latency increase of several ABs in ApproxNet under resource contention with respect to those under no contention. The input shape and outport depth of the branches are labeled.}
\label{fig:sensitivity}
\end{minipage}
\end{figure*}
\subsection{Frame Complexity Estimator (FCE)}
\label{subsec_FCE}
The design goal of the Frame Complexity Estimator (FCE), which executes online, is to estimate the expected accuracy of each AB in a content-aware manner. It is composed of a Frame Complexity Categorizer (FCC) and a Scene Change Detector (SCD) and it makes use of information collected by the offline profiler (described in Section~\ref{sec:profiler}). The workflow of the FCE is shown in Figure~\ref{fig:fce}.
\noindent \textbf{Frame Complexity Categorizer (FCC)}. FCC determines how hard it is for {ApproxNet}\xspace to classify a frame of the video. Various methods have been used in the literature to calculate frame complexity, such as edge information-based methods~\cite{edge-compression-complexity,edge-complexity-2}, compression information-based methods~\cite{edge-compression-complexity}, and entropy-based methods~\cite{cardaci2009fuzzy}. In this paper, we use the mean edge value as the feature for the frame complexity category, since it can be computed with very low overhead (3.9 ms per frame on average in our implementation). Although one can construct counterexamples in which the edge value is not a reliable indicator of complexity, we show empirically that with this feature the FCE predicts the accuracy of each AB well over a large dataset.
In detail, we extract an edge map by converting a color frame to a gray-scale frame, applying the Scharr operator~\cite{jahne1999handbook} in both horizontal and vertical directions, and then calculating the L2 norm over both directions. We then compute the mean edge value of the edge map and use a pre-trained set of boundaries to quantize it into several frame complexity categories. The number of categories and their boundaries are determined as described in Section~\ref{sec:profiler}. Figure~\ref{fig:sample_edge_maps} shows examples of frames and their edge maps from a few different complexity categories.
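The edge-map computation can be sketched as below. This is a didactic, unoptimized version (a real implementation would use a vectorized filter); the raw mean is later quantized into categories using the boundaries learned offline, which we omit here.

```python
import numpy as np

# 3x3 Scharr derivative kernels (horizontal and vertical).
SCHARR_X = np.array([[-3, 0, 3], [-10, 0, 10], [-3, 0, 3]], dtype=float)
SCHARR_Y = SCHARR_X.T

def conv2_valid(img, k):
    """Plain 'valid' 2-D correlation; fine for a sketch, slow for real frames."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for r in range(h - 2):
        for c in range(w - 2):
            out[r, c] = np.sum(img[r:r+3, c:c+3] * k)
    return out

def mean_edge_value(gray):
    """Mean of the Scharr gradient magnitude (L2 norm of gx, gy)."""
    gx = conv2_valid(gray, SCHARR_X)
    gy = conv2_valid(gray, SCHARR_Y)
    mag = np.sqrt(gx**2 + gy**2)
    return float(mag.mean())

flat = np.full((8, 8), 128.0)
print(mean_edge_value(flat))  # 0.0: a constant frame has no edges
```

A textured or high-contrast frame yields a large mean edge value and therefore lands in a higher complexity category.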
\noindent \textbf{Scene Change Detector (SCD)}. The Scene Change Detector is designed to further reduce the online overhead of FCC by determining if the content in a frame is significantly different from that in a prior frame in which case the FCC will be invoked. SCD tracks a histogram of pixel values, and declares a scene change when the mean of the absolute difference across all bins of the histograms of two consecutive frames is greater than a certain threshold (45\% of the total pixels in our design). To bound the execution time of SCD we use only the R-channel and downsample the shape of the frame to $112 \times 112$. We empirically find that such optimizations do not reduce the accuracy of detecting new scenes but do reduce the SCD overhead, to only 1.3 ms per frame.
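The histogram comparison in the SCD can be sketched as follows. The bin count is illustrative (the paper does not fix it), and comparing the summed absolute bin difference against 45\% of the pixel count is our reading of the threshold rule.

```python
import numpy as np

N_BINS = 16          # bin count is an assumption for illustration
THRESH = 0.45        # scene change if differing mass exceeds 45% of pixels

def histogram(channel):
    hist, _ = np.histogram(channel, bins=N_BINS, range=(0, 256))
    return hist

def is_scene_change(prev_r, cur_r):
    """Declare a scene change from R-channel histograms of two frames.

    Inputs are already-downsampled 112x112 R channels, matching the
    optimizations described in the text.
    """
    diff = np.abs(histogram(prev_r) - histogram(cur_r)).sum()
    return diff > THRESH * prev_r.size

dark = np.zeros((112, 112), dtype=np.uint8)
bright = np.full((112, 112), 250, dtype=np.uint8)
print(is_scene_change(dark, bright))   # True
print(is_scene_change(dark, dark))     # False
```

Only when this check fires is the (more expensive) FCC invoked on the new frame.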
\subsection{Resource Contention Estimator (RCE)}
\label{subsec_RCE}
The design goal of the Resource Contention Estimator (RCE), which also executes online, is to estimate the expected latency of each AB in a contention-aware manner. Under resource contention, each AB is affected differently and we call the latency increase pattern the \textit{latency sensitivity}. As shown in Figure~\ref{fig:sensitivity}, five approximation branches have different ratios of latency increase under a certain amount of CPU and memory bandwidth contention.
Ideally, we would use a sample classification task to probe the system and observe its latency under the current contention level $C$. The use of such micro-benchmarks is common in datacenter environments~\cite{lo2015heracles, xu2018pythia}. However, we do not need such additional probing, since the inference latencies of the latest video frames form a natural observation of the system's contention level. Thus, we use the average inference latency $\overline{L_B}$ of the current AB $B$ across the latest $N$ frames. We then consult the latency sensitivity $L_{B, C}$ of branch $B$ (profiled offline, as discussed in Section~\ref{sec:profiler}) and obtain an estimated contention level $\hat{C}$ by the nearest-neighbor principle,
\begin{equation}
\hat{C} = \argmin_C \left| L_{B, C} - \overline{L_B} \right|
\end{equation}
By default, we use $N=30$, which averages over the last second when the video runs at 30 fps. A smaller $N$ makes {ApproxNet}\xspace adapt faster to resource contention, while a larger $N$ makes it more robust to noise. Because we obtain only one observation per frame, we cannot adapt to resource contention that changes faster than the frame rate.
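The nearest-neighbor lookup above can be sketched directly; the latency profile numbers below are invented, not measured.

```python
def estimate_contention(latency_profile, recent_latencies):
    """Nearest-neighbor contention estimate for the currently running AB.

    `latency_profile` maps contention level C -> offline latency L_{B,C}
    of the current branch B. The estimate is the level whose profiled
    latency is closest to the mean of the last N observed latencies.
    """
    avg = sum(recent_latencies) / len(recent_latencies)
    return min(latency_profile, key=lambda c: abs(latency_profile[c] - avg))

profile = {0: 20.0, 1: 26.0, 2: 35.0, 3: 50.0}   # ms per level, hypothetical
window = [33.0, 36.5, 34.2]                       # last N=3 frame latencies
print(estimate_contention(profile, window))       # 2
```

The estimated level $\hat{C}$ then indexes the offline latency profile of every other AB, so the scheduler can predict their latencies without running them.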
Specifically in this work, we consider CPU, GPU and memory contention among tasks executing on the device (our SoC board shares the memory between the CPU and the GPU), but our design is agnostic to what causes the contention.
Our methodology considers the resource contention as a black-box model because we position ourselves as an application-level design instead of knowing the execution details of all other applications. We want to deal with the effect of contention, rather than mitigating it by modifying the source of the contention.
\subsection{Offline Profiler}
\label{sec:profiler}
\noindent \textbf{Per-AB Content-Aware Accuracy Profile}.
The boundaries of frame complexity categories are determined based on the criterion that all frames within a category should have an identical Pareto frontier curve (Figure~\ref{fig:pareto}), while frames in different categories should have distinct curves. We start by considering the whole set of frames as a single category and iteratively split the range of mean edge values in a binary manner until the above condition is satisfied. In our video datasets, this yields 7 frame complexity categories, with 1 being the simplest and 7 the most complex. Once the categories are derived, we create the offline accuracy profile $A_{B, F}$ for every AB $B$ and frame complexity category $F$, which speeds up the online accuracy estimation for any candidate AB.
\noindent \textbf{Per-AB Contention-Aware Latency Profile}.
{ApproxNet}\xspace is able to select ABs at runtime in the face of resource contention. Therefore, we profile offline the inference latency of each AB under different levels of contention. To study resource contention, we develop a synthetic contention generator (CG) with tunable ``contention levels'' that simulates resource contention and lets {ApproxNet}\xspace profile and learn how to react under such scenarios in real life. Specifically, we run each AB in {ApproxNet}\xspace against the CG at varying contention levels to collect its contention-aware latency profile. To reduce the profiling cost, we quantize the contention into 10 levels for GPU and 20 levels for CPU and memory, and create the offline latency profile $L_{B, C}$ for each AB $B$ under each contention level $C$. Note that contention increases the latency of the DNN but does not affect its accuracy. Thus, offline profiling for accuracy and latency can be done independently and in parallel, reducing the profiling overhead.
\noindent \textbf{Switching Overhead Profile}.
Since we find that the overhead of switching between some pairs of ABs is non-negligible, we profile offline the switching latency between every pair of ABs. This cost is used in our optimization calculation to select the best AB.
\subsection{Scheduler}
\label{subsec:pareto}
The main job of the scheduler in {ApproxNet}\xspace is to select an AB to execute. The scheduler accepts a user requirement on either the minimum accuracy or the maximum latency per frame. It requests from the FCE a runtime accuracy profile $A_{B, \hat{F}}\ \forall B$ (where $B$ ranges over the ABs and $\hat{F}$ is the frame complexity category of the input video frame), and from the RCE a runtime latency profile $L_{B, \hat{C}}\ \forall B$ (where $\hat{C}$ is the current contention level). Given a target accuracy or latency requirement, we can then select the AB to use by drawing the Pareto frontier for the current $(\hat{F}, \hat{C})$. If no point on the Pareto frontier satisfies the user requirement, {ApproxNet}\xspace picks the AB whose metric value is closest to the requirement. If the user sets no requirement, {ApproxNet}\xspace uses the frame interval of the incoming video stream as the latency requirement. One subtlety arises from the cost of switching from one AB to another: the scheduler must account for it to avoid frequent switches whose benefit does not outweigh their cost.
To rigorously formulate the problem, we denote the set of ABs as $\mathcal{B}=\{B_1, ... B_N\}$ and the optimal AB the scheduler has to determine as $B_{opt}$. We denote the accuracy of branch $B$ on a video frame with frame complexity $F$ as $A_{B,F}$, the estimated latency of branch $B$ under contention level $C$ as $L_{B,C}$, the one-time switch latency from branch $B_p$ to $B_{opt}$ as $L_{B_p \rightarrow B_{opt}}$, and the expected time window over which this AB can be used as $W$ (in units of frames). For $W$, we use the average number of frames for which the chosen AB stays unchanged; this term introduces hysteresis into the system so that the AB does not switch back and forth frequently. The constant system overhead per frame (due to SCD, FCC, and resizing the frame) is $L_0$. Thus, the optimal branch $B_{opt}$, given the latency requirement $L_\tau$, is:
\begin{equation}
B_{opt} = \argmax_{B \in \mathcal{B}} A_{B,F},~s.t.~L_{B,C}+\frac{1}{W}L_{B_p \rightarrow B}+L_0 \leq L_\tau
\label{eq:opt-latency-constraint}
\end{equation}
When the accuracy requirement $A_\tau$ is given,
\begin{equation}
B_{opt} = \argmin_{B \in \mathcal{B}} [L_{B,C}+\frac{1}{W}L_{B_p \rightarrow B}+L_0], s.t.~A_{B,F} \geq A_\tau
\label{eq:opt-latency}
\end{equation}
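The latency-constrained selection above can be sketched as follows. All profile numbers are invented for illustration; the switching cost is amortized over the expected window of $W$ frames, as in the formulation.

```python
def select_branch(branches, acc, lat, switch, cur, W, L0, L_tau):
    """Pick B_opt under a latency requirement L_tau.

    `acc[B]` and `lat[B]` are the runtime accuracy/latency profiles for
    the current (F, C); `switch[(Bp, B)]` is the profiled switching cost
    from the current branch `cur`; L0 is the constant per-frame overhead.
    """
    def effective_latency(b):
        return lat[b] + switch.get((cur, b), 0.0) / W + L0

    feasible = [b for b in branches if effective_latency(b) <= L_tau]
    if feasible:
        # Maximize accuracy subject to the latency constraint.
        return max(feasible, key=lambda b: acc[b])
    # No AB satisfies the requirement: fall back to the closest latency.
    return min(branches, key=lambda b: abs(effective_latency(b) - L_tau))

branches = ["fast", "mid", "slow"]
acc = {"fast": 0.50, "mid": 0.65, "slow": 0.75}
lat = {"fast": 10.0, "mid": 24.0, "slow": 45.0}
switch = {("fast", "mid"): 60.0, ("fast", "slow"): 120.0}
print(select_branch(branches, acc, lat, switch, "fast", W=30, L0=5.0, L_tau=33.0))
```

The accuracy-constrained variant is symmetric: minimize the amortized latency subject to $A_{B,F} \geq A_\tau$.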
\section{Introduction}
Interest in all-optical switching devices has led to the study and design of
several promising configurations of nonlinear couplers which display
intensity-triggered power switching. The basic nonlinear coherent coupler,
introduced by Jensen \cite{jensen}, consists of two similar waveguides made
of a material with third-order susceptibilities, embedded in a host with
purely linear susceptibility. When the guides are placed parallel to each
other and in close proximity over a given distance, the guide fields overlap
to some extent and power can be transferred between the two. When all the
power is initially launched into one of the guides, the nonlinear
susceptibility can give rise to self-trapping of power in the original
guide. The output power in the original guide, for a device length equal to
a coupling length, can be made to switch from essentially zero percent at
low power levels, to one hundred percent for input power levels exceeding a
characteristic threshold. In addition to the pioneering work by Jensen,
several other coupler configurations have been considered. It was found that
a three-in-a-line configuration of couplers displays a more abrupt switching
profile, at the expense however, of greater input power\cite{finlayson}. The
same tendency was reported for a linear array of many couplers\cite{schmidt}%
. In an effort to improve the switching profile, we introduced in a recent
work\cite{MDT-PD93} the Doubly Nonlinear Trimer (DNT) coupler consisting of
two nonlinear guides coupled to a third, linear guide. Such a system
displays the interesting phenomenon of power self-trapping {\em tunability}:
the critical input power level necessary for the onset of power
self-trapping can be tuned to low values, by adjusting the value of the
(linear) coupling between the nonlinear guides and the linear one.\cite
{MDT-PD93},\cite{MT-PRA92} In the optimal configuration, switching was
achieved at one-fourth the power needed to produce switching in the Jensen
coupler. The price to pay for this improved switching is the use of larger
device lengths, up to ten times that reported by Jensen\cite{MDT-PD93}.
In the present work, our interest is in learning if couplers having
waveguides with differing types of nonlinear susceptibilities would have
better switching characteristics than other standard models. We first
investigate a different nonlinear coupler composed of two identical guides
made of optical material lacking inversion symmetry and therefore having a
nonvanishing second-order susceptibility. We show that this new coupler
array possesses a power self-trapping transition and an associated sharp
power switching profile, albeit at a larger input power level than in
Jensen's and in our earlier DNT coupler. Then, after examining a number of
two-guide couplers of mixed composition, with each guide having a purely
linear (L), second-order (SO), or the usual third-order (TO)
susceptibility, we found that, for a particular choice of parameters, a
coupler composed of an SO guide and a TO guide displays a relatively sharp
power self-trapping profile at an input power level lower than previously
reported, provided power is initially launched into the SO guide.
DNT case, the onset of self-trapping can be tuned to even lower power
levels, by perturbing the two-guide coupler by adding a purely linear
control guide and adjusting the strength of the interaction with this third
guide. The resulting three-guide coupler, dubbed SO-TO-L, resembles the DNT
configuration, with one of the third-order guides replaced by a second-order
guide; it displays a reasonably sharp switching profile and, as far as we
know, does so at the lowest input power reported so far.
\section{A new two-guide coupler}
Consider a linearly coupled system of two nonlinear guides, each having the
same second-order nonlinear susceptibility. In the single mode
approximation, the normalized mode amplitudes satisfy
\begin{eqnarray}
i\frac{dC_1}{dz} &=&VC_2-\chi |C_1|C_1 \label{C1} \\
i\frac{dC_2}{dz} &=&VC_1-\chi |C_2|C_2, \label{C2}
\end{eqnarray}
where $\chi =Q^{(2)}\sqrt{P}$ is the product of an integral $Q^{\left(
2\right) }$ containing the second-order nonlinear susceptibility\cite{jensen}
and the square root of the input power $P$. The linear coupling of the
guides is determined by the coefficient $V.$ With all the power initially
launched in guide $1$, the initial conditions are $C_1(0)=1$, $C_2(0)=0$. We
will now show that Eqns.~(\ref{C1})-(\ref{C2}) predict a {\em self-trapping}
of power in the original guide (guide $1$). First, it is convenient to
rewrite (\ref{C1}-\ref{C2}) as a set of four equations for the complex
quantities $\rho _{ij}\equiv C_iC_j^{*}$:
\begin{eqnarray}
i\frac{d\rho _{11}}{dz} &=&-V(\rho _{12}-\rho _{21}) \label{r11} \\
i\frac{d\rho _{22}}{dz} &=&V(\rho _{12}-\rho _{21}) \label{r22} \\
i\frac{d\rho _{12}}{dz} &=&-V(\rho _{11}-\rho _{22})+\chi (\sqrt{\rho _{22}}-\sqrt{\rho _{11}})\rho _{12} \label{r12} \\
i\frac{d\rho _{21}}{dz} &=&V(\rho _{11}-\rho _{22})-\chi (\sqrt{\rho _{22}}-\sqrt{\rho _{11}})\rho _{21}. \label{r21}
\end{eqnarray}
We have two conserved quantities: the total power, normalized to unity: $%
\rho _{11}+\rho _{22}=1$ and the total ``energy'' $H=V(\rho _{12}+\rho
_{21})-(2/3)\chi (\rho _{11}^{3/2}+\rho _{22}^{3/2})=-(2/3)\chi $ leaving
only two independent unknowns, which precludes any chaotic dynamics for the
system. Making use of these conserved quantities we find, after some tedious
algebra, the following first-order equation for $\rho _{11}\equiv \rho $:
\begin{equation}
{\frac 1{{2}}}({\frac{d\rho }{{dz}}})^2+U(\rho )=0 \label{energy}
\end{equation}
with
\begin{equation}
U(\rho )=-2\rho (1-\rho )+\frac{2}{9}\left( \frac{\chi }{V}\right) ^2\left[
\rho ^{3/2}+(1-\rho )^{3/2}-1\right] ^2. \label{U}
\end{equation}
Equation (\ref{energy}) describes a classical particle of unit mass, moving
under the influence of an external potential $U(\rho )$, with initial
condition $\rho (0)=1$. Fig.1 shows the effective potential $U(\rho )$ for
several different values of $\chi /V$. For small nonlinearity values, the
effective potential is concave and conservation of energy allows complete
oscillations of the ``particle''; that is, power is transferred between the
two guides. As nonlinearity (input power) is increased, the potential
develops a local maximum whose height increases with increasing
nonlinearity. The condition for self-trapping of power in the original guide
translates here into the condition for the potential $U(\rho )$ to develop a
double root at $\rho =\rho ^{*}$ for some critical value of $\chi /V$, i.e.,
$U(\rho ^{*})=0$ and $(dU/d\rho )_{\rho ^{*}}=0$. Close examination of Eq.(%
\ref{U}) and Fig.1 reveals $U(\rho )$ to be {\em even} around $\rho =1/2$
and that $\rho ^{*}=1/2$. From that, the critical value of the nonlinearity
is obtained in closed form as
\begin{equation}
\left( {\frac \chi {{V}}}\right) _c={\left( \frac 3{{\sqrt{2}}}\right) }%
\sqrt{3+2\sqrt{2}}\approx 5.121. \label{eq:6}
\end{equation}
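Since $3+2\sqrt{2}=\left( 1+\sqrt{2}\right) ^2$, the critical value in Eq.~(\ref{eq:6}) can be written more transparently as
\[
\left( \frac{\chi }{V}\right) _c=\frac 3{\sqrt{2}}\left( 1+\sqrt{2}\right) =\frac 32\left( 2+\sqrt{2}\right) \approx 5.121.
\]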
This value is greater than the critical values for Jensen's coupler ($=4$)
and for the array of three nonlinear (third-order) couplers\cite{finlayson}
($\approx 4.5$). Figure 2 shows the average transmittance of the guide, defined as
\begin{equation}
<P>\equiv \lim_{L\rightarrow \infty }(1/L)\int_0^L\rho (z)dz. \label{<P>}
\end{equation}
Clearly, we see that for $(\chi /V)<(\chi /V)_c$, power is equally
distributed between the two guides. At $(\chi /V)=(\chi /V)_c$, an abrupt
transition takes place and power begins to self-trap in the original guide.
Onset of self-trapping is a precursor for the appearance of a sharp
switching profile in the transmittance of the guide. The transmittance,
defined as $|C_1(L_c)|^2,$ is the quantity of basic interest for optics. The
length $L_c$ is usually chosen as the shortest length for which $|C_1(z)|^2$
is zero, or very nearly so, in the absence of nonlinearity ($\chi =0$). In
the case of the two waveguide system, $L_c=\pi /(2V)$. The abrupt increase
in transmittance caused by an increment of the nonlinearity parameter (input
power) can be used as a power triggered switch\cite{jensen}.
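As a numerical sanity check on the transition, Eqns.~(\ref{C1})-(\ref{C2}) can be integrated directly; the sketch below uses a fourth-order Runge-Kutta scheme, with $V=1$ and the step size and integration length chosen for convenience rather than taken from the text.

```python
import numpy as np

def rhs(c, V, chi):
    """dC/dz from i dC1/dz = V C2 - chi |C1| C1 (and 1 <-> 2)."""
    c1, c2 = c
    return np.array([-1j * (V * c2 - chi * abs(c1) * c1),
                     -1j * (V * c1 - chi * abs(c2) * c2)])

def average_power(chi, V=1.0, L=200.0, dz=0.01):
    """<P>: average of |C1|^2 over z, all power launched into guide 1."""
    c = np.array([1.0 + 0j, 0.0 + 0j])
    steps = int(L / dz)
    acc = 0.0
    for _ in range(steps):  # classic RK4 on the complex amplitudes
        k1 = rhs(c, V, chi)
        k2 = rhs(c + 0.5 * dz * k1, V, chi)
        k3 = rhs(c + 0.5 * dz * k2, V, chi)
        k4 = rhs(c + dz * k3, V, chi)
        c = c + (dz / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        acc += abs(c[0]) ** 2
    return acc / steps

print(average_power(4.0))  # below (chi/V)_c ~ 5.121: power shared, <P> ~ 0.5
print(average_power(6.0))  # above threshold: power self-traps in guide 1
```

Below threshold the trajectory oscillates symmetrically between the guides, so the average transmittance is near one half; above threshold $\rho$ never leaves the neighborhood of the launch guide.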
Figure 3 shows the transmittance characteristics of our two-guide
second-order (SO) coupler, and compares it with Jensen's third-order (TO)
nonlinear coupler which is also shown in the figure, along with the TO
nonlinear coupler with three guides\cite{finlayson}. We note that the SO
nonlinear coupler does not have a switching profile competitive with
Jensen's coupler or the three-coupler array.
\section{A New Hybrid Configuration}
After considering the above nonlinear coupler, having second-order
susceptibility, we next examined a variety of mixed two-guide couplers in
which each guide was either a purely linear one, a SO or a TO guide. The
objective was to find other two-guide couplers that displayed power
self-trapping for the initial condition where all the initial power is put
into one guide. We found that, in most cases there is no self-trapping
transition at all but a continuous power trapping. For a given mixed
two-guide coupler, the trapping profile depends in a sensitive way on the
order of the nonlinear susceptibility of the guide initially receiving all
power. To illustrate this point, we now describe the most interesting case
we found: The SO-TO guide system, where guide $1$ possesses a second-order
nonlinear susceptibility integral\cite{jensen} $Q_1^{(2)}$ and guide $2$
possesses the usual third-order susceptibility integral\cite{jensen} $%
Q_2^{(3)}$. The equations for the mode amplitudes are
\begin{eqnarray}
i\frac{dC_1}{dz} &=&VC_2-\chi _1|C_1|C_1 \label{pepe1} \\
i\frac{dC_2}{dz} &=&VC_1-\chi _2|C_2|^2C_2, \label{pepe2}
\end{eqnarray}
where $\chi _1=Q_1^{(2)}\sqrt{P}$ and $\chi _2=Q_2^{(3)}P$. When all initial
input power goes into the TO guide (\#2), the initial condition for the
system, Eqns. (\ref{pepe1})-(\ref{pepe2}), is $C_1(0)=0$, $C_2(0)=1$. A
numerical investigation of $<P>$ reveals a ``delayed'' self-trapping
transition at $\chi _1=\chi _2=\chi _c\approx 6.3\ V$ (Fig.4). This value is
much greater than Jensen's and is, therefore, not useful for our purposes.
On the other hand, when all input power is put initially into the SO guide
(\#1), we have the initial condition $C_1(0)=1$, $C_2(0)=0$. In this case, a
numerical search reveals that this system does not show a self-trapping
transition: the effective potential $U(\rho ,\chi _1,\chi _2)$ does not
develop a double root for any combination of $\chi _1,\chi _2$. However, for
the special case $\chi _1=\chi _2\equiv \chi $, we found a relatively sharp
power self-trapping profile occurring at $\chi \approx 3.0\ V$ (Fig.4); {\em %
i.e.}, a smaller power than Jensen's critical value for self-trapping. We
then proceeded to ``tune'' the trapping profile to even lower power levels,
by allowing the SO-TO coupler to interact linearly with a third (control)
guide possessing only linear susceptibility. The enlarged set of equations
for the mode amplitudes in this SO-TO-L coupler now reads
\begin{eqnarray}
i\frac{dC_1}{dz} &=&VC_2+WC_3-\chi |C_1|C_1 \label{SOTOL1} \\
i\frac{dC_2}{dz} &=&VC_1+WC_3-\chi |C_2|^2C_2 \label{SOTOL2} \\
i\frac{dC_3}{dz} &=&W(C_1+C_2), \label{SOTOL3}
\end{eqnarray}
with initial conditions $C_1(0)=1$, $C_2(0)=C_3(0)=0$. It is assumed here
that the guides have the same {\em linear} susceptibility, to minimize
possible phase mismatch effects. After examining $\langle P\rangle $ as a
function of $\chi $ for different $W$ values, we found that $W\approx 1.1V$ brings the
onset of self-trapping down to a power level $\chi \approx 0.4\ V$. Note
that this optimal $W$ value is the same as found for the DNT coupler\cite
{MDT-PD93}. Now, to evaluate the transmittance of this SO-TO-L array, we
need to calculate the coupling length $L_c(W)$. This is obtained from Eqns. (%
\ref{SOTOL1})-(\ref{SOTOL3}) as the position $z$ at which $|C_1(z)|^2\approx
0$, for $\chi =0$. In this limit the system of equations can be solved in
closed form\cite{MDT-PD93} and yields for $|C_1(z)|^2$:
\begin{eqnarray}
|C_1(z)|^2 &=&A\cos \left[ \left( \frac{3V-\sqrt{V^2+8W^2}}{2}\right)
z\right] \nonumber \\
&&+B\cos \left[ \left( \frac{3V+\sqrt{V^2+8W^2}}{2}\right) z\right]
\nonumber \\
&&+C\cos \left[ \sqrt{V^2+8W^2}\,z\right] +D, \label{c1sqr}
\end{eqnarray}
where
\[
A=\left( \sqrt{V^2+8W^2}-V\right) /\left( 4\sqrt{V^2+8W^2}\right)
\]
\[
B=\left( \sqrt{V^2+8W^2}+V\right) /\left( 4\sqrt{V^2+8W^2}\right)
\]
\[
C=W^2/\left( V^2+8W^2\right)
\]
\[
D=\left( V^2+4W^2\right) /\left[ 4\left( V^2+8W^2\right) \right] +1/4.
\]
For $W=1.1\ V,$ Eqn.(\ref{c1sqr}) gives $L_c\approx 21/V$, the same value as
for the DNT coupler. Figure 5 shows the transmittance of the SO-TO-L system
as a function of input power, for the optimal linear coupling value $W=1.1V$%
. For comparison we also show the transmittance for the DNT coupler.
Jensen's device switches at about $\chi =4V$ and the side-by-side
three-nonlinear guide coupler of ref. 2 switches at about $\chi \sim 4.5\ V,$
but because of the scale of the figure, neither of these transitions is
shown. We note that the new coupler configuration SO-TO-L is capable of
achieving over $99\%$ power switching for input power levels below $\chi
\sim 0.65\ V$, which is a $48\%$ reduction in the input power needed
compared to the DNT device.
\section{Discussion}
In order for the above results to be meaningful, it must be true that the
second- and third-order coefficients $\chi _2$ and $\chi _3$ (denoted $\chi
_1$ and $\chi _2$ in Eqns. (\ref{pepe1})-(\ref{pepe2}), but labelled here by
the order of the nonlinearity) can be at least approximately equal for some
materials.
These coefficients involve the usual susceptibilities $\chi ^{\left(
j\right) }$ defined here to give the electric polarization $P_{E}$ in
the form
\[
P_{Ei}=\epsilon _0\left[ \chi _{ij}^{\left( 1\right) }E_j+\chi
_{ijk}^{\left( 2\right) }E_jE_k+\chi _{ijkm}^{\left( 3\right)
}E_jE_kE_m+\cdots \right] .
\]
To find the ratio $\chi _2/\chi _3=Q_2/\left( Q_3\sqrt{P}\right) ,$ we
use the definitions from ref. 1 of the integrals $Q_2$ and $Q_3,$ inserting
the exact expressions for mode fields and susceptibilities. Rather than
going through those calculations, we make the simplifying assumptions that
the $\chi ^{\left( j\right) }$ are constant across each guide and that the
mode field is also constant (approximately true for the $TE_0$ mode) across
the guide; then the integrals are easily done and we get
\[
\chi _2/\chi _3\simeq \frac{\chi ^{\left( 2\right) }}{\chi ^{\left( 3\right)
}\left| E\right| \sqrt{P}},
\]
where $P$ is the input power and $\left| E\right| $ is the amplitude of a
slab waveguide mode field, normalized to one watt/meter. Then the ratio
$\chi _2/\chi _3$ can be on the order of unity within the range of known
values of the susceptibilities\cite{boyd} and power in the range 0.01--1 kW.
As mentioned previously, the critical length $L_c$ for the SO coupler is the
same as for the Jensen coupler, but the SO\ device switches less abruptly
and at higher power than Jensen's. The SO-TO coupler shows final-state
asymmetry depending on which guide receives input power. If power enters the
TO leg, a self-trapping transition occurs at more than 1.5 times the Jensen
level, $P_J.$ If the SO\ leg receives the power, a relatively sharp
self-trapping sets in at about 25\% below $P_J.$
The SO-TO-L coupler shows a greatly lowered power switching level, but its
$L_c$ is an order of magnitude larger than the Jensen $L_c.$ Typical values
for $L_c$ are about a millimeter\cite{yeh} for weakly coupled devices ({\em i.e.},
the separations between waveguides are large enough that coupled-mode theory
can be used) and less for stronger coupling. Then $L_c$ for SO-TO-L is on
the order of a centimeter or less.
The linear interaction coefficients $V$ and $W$ are overlap integrals,
across one waveguide, of the product of the electric mode field of that
guide and the mode field of a second guide. Therefore, $V$ and $W$ are
functions of the separation of the waveguides and in principle, it is
possible to alter one without changing the other; that is, the system can be
tuned to achieve minimum power switching level, by changing the distances
between the linear guide and the other two, nonlinear guides.
\section{Conclusions}
Our primary interest was the investigation of switching characteristics of
model nonlinear couplers having mixtures of waveguides, not necessarily with
the same orders of nonlinear susceptibilities. Earlier work on the DNT
system suggested tunability might also be used in a hybrid coupler to
decrease switching power levels. It appears possible to meet the condition $%
\chi _2\cong $ $\chi _3,$ as far as known values of these quantities are
concerned. Whether specific materials can be found that meet this condition
and are also compatible with one another in a device, is another matter and
one we have not addressed in this paper.
The switching characteristics of the SO coupler are inferior to those of the
TO system. For SO-TO, the asymmetry of final states with respect to the
input guide may be the only aspect of its performance that could be of
interest.
The most interesting coupler was the SO-TO-L, formed by adding a linear
guide to SO-TO and tuning for minimum power by adjusting the relative
positions of the guides. The transition power level drops to less than
one-sixth of $P_J.$ Although a disadvantage of this coupler is a critical
length that is longer than for the Jensen coupler by an order of magnitude,
that may be tolerable in some applications.
Of course, there are various other configurations involving arrays of these
couplers but those were not investigated.
One of the authors (M. I. M.) acknowledges support from FONDECYT grant
1950655.
\section{Introduction}
\subsection{Lyman-break Galaxies}
The study of Lyman-break Galaxies (LBGs) is a long-established probe of the high-redshift Universe (the first few billion years), with samples of many hundreds of star-forming galaxies now known to $z \sim 10$ (e.g. \citealp{Oesch2016,Bouwens2016,McLeod2016}). LBGs are particularly useful as it is possible to establish their photometric redshift to reasonable accuracy in a luminosity regime where spectroscopic confirmation is challenging (e.g. \citealp{Pentericci2014}). The neutral gas in the inter-galactic medium (IGM) is essentially opaque to photons with wavelengths shorter than the `Lyman Break' (1216\,\AA, in the far ultraviolet). The source therefore appears faint bluewards of this wavelength, but retains its original luminosity redwards, creating a sharp drop in luminosity. When this spectrum is then redshifted, the location of the break provides a clear spectral feature with which to select galaxies at high redshift using broad-band filters.
The technique, originally developed in the early 1990s (\citealp{Guhathakurta1990,Steidel1992,Steidel1996}) in the context of $z \sim3$ galaxies, where the Lyman break is shifted into visible wavebands, first started providing large numbers of sources with the \textit{Hubble Space Telescope} (HST) in the late 1990s and 2000s (e.g. \citealp{Giavalisco2002,Bouwens2007,Dunlop2013}). More recently, the approach is being used to push scientific boundaries at $z\sim 6-9$ where the break is shifted into the near-infrared (see \citealp{Stark2016} for a review). Wide-field surveys like the United Kingdom Infrared Telescope (UKIRT) Infrared Deep Sky Survey (UKIDSS, in particular the Ultra Deep Survey, UDS, \citealp{Hartley2013c}), and more recently public surveys on the Visible and Infrared Survey Telescope for Astronomy (VISTA) such as the UltraVISTA survey in the COSMOS field (\citealp{McCracken2012}) and the VISTA Deep Extragalactic Observations (VIDEO) survey (\citealp{Jarvis2013}) give access to the deep NIR images of the sky needed for detecting statistically significant samples of the brightest LBGs, which has led to advances in the understanding of their star formation rates and number densities beyond the break in the luminosity function.
A key observable that can be calculated for LBG surveys (and galaxy surveys in general), is the luminosity or mass function, the comoving number density of galaxies as a function of absolute luminosity or stellar mass (see \citealp{Johnston2011} for a review). Measuring and understanding the evolution of luminosity functions with redshift allows us to trace the build-up and evolution of galaxies through cosmic time (\citealp{Madau2014}); is a key way to compare cosmological simulations of structure formation to observations (\citealp{Lacey2016,Clay2015}); and can be readily linked theoretically to the dark matter equivalent, the halo mass function (HMF; the comoving density of dark matter halos as a function of halo mass, see \citet{Murray2013} for a review of current constraints). Luminosity functions are typically observed to have the form of a Schechter function: $n(L)=\phi^{*}(L/L^{*})^{\alpha}\exp(-L/L^{*})$ (\citealp{Schechter1976}). In this parametrisation, $\alpha$ describes the power law behaviour of number density at the low-luminosity end, $L^{*}$ is the transition luminosity to the high luminosity exponential cutoff, and $\phi^{*}$ is a normalisation. The rest frame UV luminosity function for $z \sim 4-8$ has been determined by several studies (e.g. recent work by \citealp{McLure2013, Bouwens2015,Finkelstein2015,Bowler2015}) with broad agreement. The highest redshift constraints on the LBG luminosity function are currently at $z \sim 9-10$ e.g. \cite{Bouwens2015,Bouwens2016,McLeod2016}.
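To make the parametrisation concrete, the Schechter form is straightforward to evaluate numerically; a minimal sketch (parameter values are illustrative, and the function name is ours):

```python
import numpy as np

def schechter(L, phi_star, L_star, alpha):
    """Schechter (1976) luminosity function n(L) = phi* (L/L*)^alpha exp(-L/L*)."""
    x = L / L_star
    return phi_star * x**alpha * np.exp(-x)
```

Faintwards of $L^{*}$ the exponential factor is close to unity and the counts follow the pure power law $\phi^{*}(L/L^{*})^{\alpha}$; brightwards of $L^{*}$ they are exponentially suppressed.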
\subsection{Clustering}
Galaxies are formed and live in dark matter halos, and the environment of the host halo is believed to be of critical importance for the formation of the resident galaxies (\citealp{Cooray2002}). One way of obtaining information about the galaxy-halo connection is `abundance matching' - matching the galaxy comoving number density value to the halo mass that is predicted to have the same number density by theoretical considerations of the halo mass function/N-body simulations e.g. rarer galaxies are associated with more massive halos because such halos are rarer (\citealp{Vale2004}). Abundance matching however can only ever give an incomplete account of the connection due to three complications. Firstly, halos can host multiple galaxies; this can be partially mitigated through \textit{sub-}halo matching (\citealp{Moster2009}), but this assumes that the occupation statistics are the same for sub-halos and isolated halos. Secondly, scatter in the halo mass to galaxy mass/luminosity relation is not captured by abundance matching. Finally, variations in observational properties of a single population can bias results, in particular orientation or temporal effects: if a given population has a different appearance 10 percent of the time, a straightforward abundance matching will erroneously place the sources with a different appearance in more massive halos because they are rarer, even though they belong to the same underlying population. For this reason, other probes of the galaxy halo connection are needed. Information from lensing is very effective, either strong (e.g. \citealp{Jullo2007}) or weak (galaxy-galaxy lensing, e.g. \citealp{Coupon2015,Mandelbaum2013}), but requires the background sources to be at even higher redshift than the lenses, making it unfeasible for high-redshift ($z>2$) galaxies.
One viable and popular approach is to measure the clustering (2-point statistics) of the galaxies alongside the number counts (1-point statistics). This can then be linked to models/our theoretical understanding of structure formation to estimate the typical environment of the galaxies. One popular framework for modelling galaxy clustering is the `Halo Occupation Distribution' model (\citealp{Zehavi2005}), which models the non-linear clustering of galaxies within individual halos, and the large scale clustering of the halos simultaneously, giving information about how many galaxies are in each halo as a function of halo mass. The HOD model has been applied extensively at $z=0$ (\citealp{Guo2015}) and $z=0.5-2$ (\citealp{McCracken2015,Coupon2015,Hatfield2015}) where large galaxy samples are available. In the more uncertain high-redshift regime, the HOD model has recently been applied to low-luminosity LBG galaxies at $z=4-7$ by \citet{Harikane2015}. It is crucial however to understand the relationship at the massive/most luminous end, as this is where AGN-driven feedback may have a role (\citealp{Silk2011}). There are preliminary hints that redshifts of $z=6-7$ may mark the onset of quenching (\citealp{Bowler2014,Bowler2015}), so this is a vital time period for galaxy evolution in the history of the Universe.
\subsection{Probes of Reionization}
As well as being a crucial period for galaxy formation (see \citealp{Shapley2011} for a review), understanding large-scale structure/clustering at $z=5-8$ is also important cosmologically, in particular for our understanding of \textit{reionization}. In the standard cosmological model, the Universe was initially an ionised plasma that, during \textit{recombination} at $z\sim1000$, cooled sufficiently for protons and electrons to combine to form neutral atoms. However, this medium must have been reionized by the first stars and galaxies at some point between then and $z \sim 6$ to produce the ionized intergalactic medium we see today (see \citealp{Zaroubi2013,Natarajan2014} for an observational and theoretical review respectively). However, there is still debate in the literature about which sources had the energy to do this e.g. massive galaxies, AGN, or something else (\citealp{Grissom2013,Robertson2015,Bouwens2015}). Furthermore, most models of reionization are `patchy', i.e. non-instantaneous, with some parts of the Universe being reionised earlier than others (\citealp{Becker2015,Dore2007}). The current best constraint on the average reionization redshift from the Planck mission (based on the measured optical depth) is $z=7.8-8.8$ (\citealp{PlanckCollaboration2016}), with many probes of the epoch (e.g. \citealp{Pentericci2014,Becker2015}) suggesting that some parts of the Universe could still be undergoing reionization by $z \sim 6$. Uncovering the cause and nature of reionization is likely to require understanding how the reionization power spectrum interweaves with galaxy large scale structure, so it is essential to build up our understanding of the large scale structure (LSS) of the galaxy population at these redshifts. For example, \citet{McQuinn2007} suggest that differences in the clustering of LBGs and Lyman-alpha Emitting galaxies (LAEs) could give an insight into the possible `patchy' nature of reionization.
The Lyman-alpha line is suppressed if the source is in a largely neutral region, which biases the observations of LAEs towards large ionized H{\sc ii} regions. The result is a larger `observed' clustering for LAEs than the `intrinsic' clustering of the underlying objects - effectively neutral regions obstruct the line of sight in a way that enhances the clustering of LAEs. LBGs and other probes of the high redshift galaxy population, however, do not receive such an effect on their clustering, so a boost in the clustering of LAEs relative to LBGs (properly controlling for other variables) could be indicative of reionization. Such an effect is yet to be conclusively measured, e.g. \citet{Ouchi2010} find little evidence at $z=6.6$ with 207 LAEs observed with the Subaru telescope, but this approach and others like it are likely to give improved constraints on the nature of the epoch of reionization over the coming years.
\subsection{Objectives of this work}
LBG studies can be informally divided into analyses of `faint' galaxies (in extremely deep, but narrow surveys), and `bright' galaxies (in slightly less deep, but extremely wide surveys). \citet{Harikane2015} provide an analysis of the clustering of relatively faint LBGs found within HST deep surveys at $z = 4-7$. In this study we seek to extend these measurements to brighter luminosities by utilising wider area surveys. To do this, we measure and model the clustering of the \citet{Bowler2015} high luminosity $z \sim 6$ sample, which covers the degree-scale UDS and UltraVISTA fields. A clustering analysis of a subset of the UDS sample has been performed in \citet{McLure2009}, who modelled the correlation function with a single power law, concluding the sources are in dark matter haloes of masses $10^{11.5-12} M_{\odot}$. In this study we perform a similar analysis, but extend to a full HOD model, including an additional field and using deeper data available. Using this enlarged sample, we are then able to discuss what our results mean for feedback processes, models of structure formation, and cosmic variance at high redshift. While samples of bright galaxies do exist at $z>6.5$ (\citealp{Bowler2015}), they are too small to provide constraints on the clustering, and hence we limit our analysis to $z\sim6$.
The structure of this paper is as follows. In Section 2 we describe the sample of LBGs used in this study. In Section 3 we discuss how we measured the correlation function in the sample and constructed our halo occupation distribution models and fitting process. The results are presented in Section 4. In Section 5 we discuss our results, linking them to the literature, and interpreting the cosmic variance between the fields in light of our measurements. Magnitudes are given in the AB system (\citealp{Oke1983}) and all calculations are in the concordance cosmology $\sigma_{8}=0.8$, $\Omega_{\Lambda}=0.7$, $\Omega_{m}=0.3$ and $H_{0}=70 \text{ km} \text{ s}^{-1} \text{Mpc}^{-1}$ unless otherwise stated.
\section{Data and Sample Selection} \label{sec:Data}
In this study we use the high luminosity Lyman break galaxy sample of \citet{Bowler2015}. Deep optical and infrared data (spanning wavelengths of $0.3-2.5 \mu$m) across two main fields (see Fig. \ref{fig:field_geometry}) was used to select the sample; we summarise the observations and selection criteria below, but see \citet{Bowler2015} for a more in depth description.
\subsection{UltraVISTA/COSMOS} \label{sec:UltraVISTA}
UltraVISTA (\citealp{McCracken2012,Laigle2016}) is the deepest of the 6 public surveys on the VISTA telescope, providing $YJHK_{s}$ near infra-red data covering the Cosmic Evolution Survey (COSMOS) field (\citealp{Scoville2007}). The `paw-print' focal plane of VISTA and the survey observing strategy give a continuous `deep' field, with discontinuous `ultra-deep' stripes across it that receive more observing time. \citet{Bowler2015} also used $u^{*}$, $g$, $r$ and $i$ optical data from the T0007 release of CFHTLS in the D2 field, as well as Subaru/SuprimeCam $z'$-band imaging. The maximal area of overlap of these datasets is in the one square degree of CFHTLS, of which 0.62 deg$^2$ has ultra-deep UltraVISTA data, and 0.38 deg$^2$ shallower UltraVISTA. The majority of the sample is in the ultra-deep field and hence for our purposes here we only use the ultra-deep 0.62 deg$^2$ (see Fig. \ref{fig:field_geometry}).
\subsection{UDS} \label{sec:UDS}
For the UKIDSS UDS field, \citet{Bowler2015} used $B$, $V$, $R$, $i$ and $z'$ data from the Subaru XMM-Newton Deep Survey (SXDS, \citealp{Furusawa2008}), and $J$, $H$ and $K$ band data from the DR10 of the UKIDSS UDS (\citealp{Lawrence2006}). Again separate $z'$-band data from Subaru/SuprimeCam was obtained, and in addition, $Y$ band data from the VIDEO survey (\citealp{Jarvis2013}) was also used. The total overlapping area is 0.74 deg$^2$ (see Fig. \ref{fig:field_geometry}).
\subsection{Candidate Selection} \label{sec:selection}
Again, \cite{Bowler2015} describes the full sample selection, but we summarise the process here. Sources were detected with {\sc SExtractor} (\citealp{Bertin1996}), and photometric redshifts were determined with {\sc LePhare} (\citealp{Arnouts1999,Ilbert2006}). Contaminant populations (low redshift interlopers and brown dwarfs) were removed in the SED fitting process. This leaves 156 and 107 $5.5<z <6.6$ galaxies in the UltraVISTA and UDS fields respectively. The UltraVISTA field was found to have a higher surface density than the UDS field (by a factor of $\sim 1.8$); potential causes for this, including lensing and cosmic variance are discussed in section 7 of \citet{Bowler2015}.
This process gives in total 263 LBGs in the range $-22.7< M_{UV} < -20.5$ with $5.5<z<6.5$ over 1.35 deg$^2$. We take our final sample as all 161 sources with $M_{UV} <-21.125$, as the sample completeness drops off rapidly faintwards of this value, as discussed in \cite{Bowler2015}, see their figure 6, but is fairly constant with magnitude brightwards of this value.
\begin{figure*}
\includegraphics[scale=0.6]{field_geometry_z6.pdf}
\caption{The geometry of the UDS and UltraVISTA fields. The red points are the galaxy locations from \citet{Bowler2015}. The blue points are the random points chosen to cover the fields used for the construction of RR for the calculation of the correlation function. The three galaxies in the UltraVISTA field that are not surrounded by blue points are the $z=6$ sources detected in the deep (as opposed to ultra-deep) part of the UltraVISTA field, that we do not include in this study. The overall shape of the fields is predominantly determined by the part of the sky that the multiple different surveys overlap in. The small scale gaps and holes are foreground stars and detector artefacts etc. See figures 1 and 2 of \citet{Bowler2014} to see how the irregular footprints arise from the intersection of the sky patches covered by different surveys.}
\label{fig:field_geometry}
\end{figure*}
\section{Correlation Functions and HOD Modelling} \label{sec:selection}
There is a large selection of statistical measurements that can be used to characterise the clustering of extragalactic sources and large-scale structure, including nearest neighbour \citep{Bahcall1983}, genus \citep{GottIii2009}, power spectrum \citep{Tegmark2003} and counts in cells \citep{White1979}. In this study we measure and model the two-point correlation function, the excess probability of how much more likely two galaxies are to be at a given separation than a random uniform distribution (this statistic can be linked to other measurements e.g. counts in cells statistics are `averaged' correlation functions, and the power spectrum is the Fourier transform of the correlation function).
The underlying meaningful physical relation is the full three dimensional spatial correlation function; however we only have the observables of angular separations and relatively coarse redshift information. Limber Inversion (\citealt{Limber:1954zz}) provides a way to connect the two. The two main approaches to connect the observables to the spatial correlation function are to either calculate the angular correlation function, and compare to angular projections of the model, or to use the redshift information to form the projected correlation function in both transverse and longitudinal directions (incorporating redshift space distortions, which are normally integrated out), see \citet{Davis1983} and \citet{Fisher1994}. Here we measure and model the angular correlation function, as calculating the projected correlation function requires precise knowledge of the redshifts of the sample to avoid being biased, and is in general more appropriate for surveys with spectroscopic redshifts.
\subsection{The Angular Correlation Function}\label{sec:ACF_def}
The angular two-point correlation function $\omega(\theta)$ is defined by:
\begin{equation}
dP=\sigma(1+\omega(\theta))d\Omega ,
\end{equation}
where $dP$ is the probability of finding two galaxies at an angular separation $\theta$, $\sigma$ is the surface number density of galaxies, $d\Omega$ is the solid angle. This is most commonly estimated by calculating $DD(\theta)$, the normalised number of galaxies at a given separation in the real data, and $RR(\theta)$, the corresponding figure for a synthetic catalogue of random galaxies identical to the data catalogue in every way (i.e. occupying the same field) except position. We use the \citet{Landy1993} estimator:
\begin{equation}
\omega(\theta)=\frac{DD-2DR+RR}{RR} ,
\end{equation}
which also uses $DR(\theta)$, data to random pairs, as it has a lower variance (as an estimator) and takes better account of edge effects.
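A brute-force sketch of this estimator for small catalogues on a flat patch (sky curvature ignored; the pair-count normalisations are the standard ones, but the function names are ours):

```python
import numpy as np
from scipy.spatial.distance import pdist, cdist

def pair_counts(pos_a, pos_b, bins):
    """Normalised histogram of pair separations; auto-pairs when pos_b is None."""
    if pos_b is None:
        d = pdist(pos_a)                    # unordered pairs i < j
        norm = 0.5 * len(pos_a) * (len(pos_a) - 1)
    else:
        d = cdist(pos_a, pos_b).ravel()     # all ordered cross pairs
        norm = len(pos_a) * len(pos_b)
    return np.histogram(d, bins=bins)[0] / norm

def landy_szalay(data, randoms, bins):
    """Landy & Szalay (1993) estimator w = (DD - 2 DR + RR) / RR per bin."""
    DD = pair_counts(data, None, bins)
    RR = pair_counts(randoms, None, bins)
    DR = pair_counts(data, randoms, bins)
    with np.errstate(divide="ignore", invalid="ignore"):
        return (DD - 2.0 * DR + RR) / RR    # NaN where RR = 0
```

A convenient exact check: feeding the estimator identical data and random catalogues of $n$ points gives $\omega = 2/n$ in every populated bin (the residual comes from the $n$ versus $n-1$ pair normalisations), which tends to zero for large catalogues, as expected.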
Uncertainties are calculated with `bootstrap resampling', which resamples the galaxies with replacement from the dataset and recalculates the correlation function (see \citealp{Ling1986}). Repetition of this process produces multiple `realisations' of the correlation function, from which the covariance matrix of the $\omega(\theta)$ values can be estimated. It is possible to calculate the error bars from Poisson uncertainty on the DD values, but \citet{Cress1996} and \citet{Lindsay2014} found errors calculated in this manner were a factor of 1.5 to 2 smaller than those estimated with bootstrap. It is particularly important to account for covariance between adjacent angular space bins in the small-number counts regime here, as each galaxy will contribute to multiple bins. In this paper we use 100 bootstrap resamplings to estimate the uncertainty at the 16th and 84th percentiles.
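The resampling scheme is generic over the statistic being measured; a minimal sketch (with `statistic` standing in for the recomputation of $\omega(\theta)$ on a resampled catalogue, and the function name being ours):

```python
import numpy as np

def bootstrap_percentiles(sample, statistic, n_boot=100, seed=0):
    """Bootstrap realisations of `statistic`: resample with replacement,
    recompute, and report the 16th/84th percentiles over realisations."""
    rng = np.random.default_rng(seed)
    n = len(sample)
    reals = np.array([statistic(sample[rng.integers(0, n, n)])
                      for _ in range(n_boot)])
    return np.percentile(reals, [16, 84], axis=0)
```

Applied per angular bin, the spread of the realisations gives the quoted percentile uncertainties, and the realisations themselves provide the bin-to-bin covariance matrix.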
For the construction of our random catalogue we created a mask over the fields to exclude image artefacts and foreground stars. Five galaxies in the UDS field were found to be within the masked area. Although the mask may be a little conservative, it is likely that our measurements of clustering in the vicinity of these sources will be heavily biased by the presence of the artefact being masked, so we do not use these five galaxies when calculating the correlation function (although it makes very little difference to our analysis). The fact that the survey area has a finite area gives a negative offset to the true correlation function, usually known as the integral constraint. As per \citet{Beutler2011} and \citet{Hatfield2015}, we calculate the integral constraint using the numerical approximation of \citet{Roche1999} and treat it as part of the model when fitting parameters. In this paper we calculate the correlation function with both the binning method (DD and RR are how many galaxy pair separations in each angular scale bin) and the continuous estimation/kernel smoothing method described in \citet{Hatfield2015}.
\subsection{Halo Occupation Distribution modelling}\label{sec:HOD_def}
Halo Occupation Modelling is an increasingly popular way of modelling galaxy clustering measurements. We do not describe the full details of the scheme here, see \citet{Coupon2012} and \citet{McCracken2015} for a more complete breakdown. A given set of galaxy occupation statistics is given, usually parametrised by 3-5 numbers, e.g. the number of galaxies in a halo as a function of halo mass. The model correlation function is broken down to a `1-halo' term, describing the small-scale clustering of galaxies within an individual halo, and a `2-halo' term, describing the clustering of the halos themselves. The `1-halo' term is constructed by convolving the profile of galaxies within a halo with itself, weighting by the number of galaxies in the halo, and then integrating over all halo masses. The profile is usually taken to be one galaxy at the centre of the halo (the `central') and all other galaxies tracing a Navarro-Frenk-White (NFW; \citealp{Navarro1996}) profile. The 2-halo term is constructed by scaling the dark matter linear correlation function by the weighted-average halo bias of the host halos.
The most general HOD parametrisation commonly used is that of \citet{Zehavi2005}, that gives the total number of galaxies in a halo as:
\begin{equation} \label{eq:params_tot}
\langle N_{tot}(M_{h}) \rangle=\langle N_{cen}(M_{h}) \rangle+\langle N_{sat}(M_{h}) \rangle ,
\end{equation}
the total number of central galaxies as:
\begin{equation} \label{eq:params_cen}
\langle N_{cen}(M_{h}) \rangle=\frac{1}{2}\left[ 1+\mathrm{erf} \left( \frac{\log M_{h}-\log M_{min}}{\sigma_{\log M}} \right) \right] ,
\end{equation}
and the total number of satellites as:
\begin{equation} \label{eq:params_sat}
\langle N_{sat}(M_{h}) \rangle=\langle N_{cen}(M_{h}) \rangle \left( \frac{M_{h}-M_{0}}{M_{1}} \right)^{\alpha} .
\end{equation}
This model has five parameters; $M_{min}$ describes the minimum halo mass required to host a central galaxy, $\sigma_{\log M}$ describes how sharp this step jump is (equivalently to the central to halo mass scatter), $M_{0}$ is a halo mass below which no satellites are found, and $M_{1}$ is the scale mass for accumulating satellites ($M_{0}$ is typically a lot smaller than $M_{1}$, so $M_{1}$ is commonly said to be the halo mass at which the first satellite is accreted, although analytically they are very slightly different - this is the difference between $M_{1}$ and $M'_{1}$ used by some authors). The power law index $\alpha$ describes how the number of satellites grows with halo mass. The random variables of the number of galaxies of the sample under consideration in a halo as a function of halo mass are $N_{cen}$ and $N_{sat}$ (e.g. they are not absolute values). The halo has a central galaxy (of the given galaxy population) with a probability given by Equation \ref{eq:params_cen} and no central with the complementary probability (which is why the expected value for the number of centrals is given by Equation \ref{eq:params_cen}). $N_{sat}$ is zero if $N_{cen}$ is zero, and Poisson distributed with mean $\left( \frac{M_{h}-M_{0}}{M_{1}} \right)^{\alpha}$ if $N_{cen}=1$, giving the expectation in Equation \ref{eq:params_sat}. However, although we have the largest sample of bright LBGs at these redshifts, this is still only a comparatively small sample for HOD modelling. Thus, in order to reduce the number of parameters in the model (six once duty cycle is included, see Section \ref{sec:DC}), we fix some as functions of others.
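The occupation functions themselves are cheap to evaluate; the sketch below (written with the conventional $\frac{1}{2}[1+\mathrm{erf}(\cdot)]$ step in log halo mass; function names and fiducial arguments are our own) implements Equations \ref{eq:params_tot}-\ref{eq:params_sat}:

```python
import numpy as np
from scipy.special import erf

def n_cen(M, log_Mmin, sigma_logM=0.2):
    """<N_cen>: a softened step in log halo mass, written in the conventional
    (1/2)[1 + erf((log M - log Mmin) / sigma_logM)] form."""
    return 0.5 * (1.0 + erf((np.log10(M) - log_Mmin) / sigma_logM))

def n_sat(M, log_Mmin, log_M0, log_M1, alpha=1.0, sigma_logM=0.2):
    """<N_sat> = <N_cen> ((M - M0) / M1)^alpha, zero below the cutoff M0."""
    M0, M1 = 10.0**log_M0, 10.0**log_M1
    return n_cen(M, log_Mmin, sigma_logM) * np.clip((M - M0) / M1, 0.0, None)**alpha

def n_tot(M, log_Mmin, log_M0, log_M1, alpha=1.0, sigma_logM=0.2):
    """<N_tot> = <N_cen> + <N_sat>."""
    return (n_cen(M, log_Mmin, sigma_logM)
            + n_sat(M, log_Mmin, log_M0, log_M1, alpha, sigma_logM))
```

By construction $\langle N_{cen}\rangle = 1/2$ at $M_h = M_{min}$, saturates at unity for massive halos, and the satellite term grows linearly with halo mass for $\alpha = 1$.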
As per \citet{Harikane2015} we fix $\sigma_{\log M}=0.2$. The assumptions that go into this choice however are based on results at much lower redshifts (\citealp{Kravtsov2004,Zheng2005,Conroy2006}) which do not necessarily hold at these early times, when the luminosity-halo scatter is fairly unconstrained. Indeed \citet{Hatfield2015} found a scatter of $\sim0.6$ consistent with the data at $z\sim1$. However fortunately for our purposes (unfortunately from the perspective of using clustering to infer the scatter) the 2-point statistics have very little dependence on the scatter. Hence our conclusions do not alter dramatically with choice of $\sigma_{\log M}$, and so we fix it as the same as the \citet{Harikane2015} value for ease of comparison.
We additionally fix $\alpha=1$; this is both the fiducial value (it is logical to expect that once in the most massive halo regime that the number of satellites scales linearly with the halo mass, as the bulk of the halo mass will have been accreted), as well as the result found by most measurements at moderate ($z<2$) redshift (e.g. \citealp{Hatfield2015,Coupon2012}).
We investigate the consequences of allowing various parameters to be fixed or free in the fitting process. Again as per \citet{Harikane2015}, if not free, $M_1$ and $M_0$ are fixed as functions of $M_{min}$ following the $z=0-5$ results of \citet{Conroy2006}:
\begin{equation} \label{eq:M1}
\log{{M_{1}}}=1.18 \log{M_{min}}-1.28 ,
\end{equation}
\begin{equation} \label{eq:M0}
\log{M_{0}}=0.76 \log{M_1} +2.3 .
\end{equation}
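As a concrete check, the two scaling relations above can be evaluated directly; for $\log M_{min} = 11.48$ they give $\log M_{1} \approx 12.27$ and $\log M_{0} \approx 11.62$, consistent with the fixed-DC rows of Table \ref{table:summary_of_models}:

```python
def log_m1_from_mmin(log_mmin):
    # Satellite scale mass as a function of M_min (Conroy et al. relation)
    return 1.18 * log_mmin - 1.28

def log_m0_from_m1(log_m1):
    # Satellite cutoff mass as a function of M_1
    return 0.76 * log_m1 + 2.3
```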
In this work we use the halo mass function of \citet{Behroozi2012}, and the halo bias of \citet{Tinker2010}.
\subsection{Duty Cycle} \label{sec:DC}
The main difference when modelling LBGs at high redshift, compared with studies in the local Universe, is the need to incorporate a duty cycle (DC). Clustering analyses of LBGs typically find a mismatch between the measured number density and the number density implied by the clustering (\citealp{Ouchi2010}). This is in agreement with the current understanding of galaxies at these redshifts, in which star formation may be highly episodic (e.g. \citealp{Stark2009}). Typically the occupation statistics model implied by fitting \textit{only} to the clustering suggests a larger comoving number density than is observed in the luminosity function. This discrepancy is usually explained by invoking a \textit{duty cycle}: the observed LBGs have luminosities that vary dramatically in time, and are observed only when in a bright phase. This illustrates the importance of understanding clustering alongside the number counts. With a duty cycle of 10 percent (i.e. a galaxy is in its bright phase 10 percent of the time), the underlying population appears 10 times rarer than it actually is; a straight abundance matching in this scenario would then mistakenly place the galaxies in rarer, and thus more massive, halos.
The comoving number density implied by the occupation model alone is the mean number of galaxies in a halo, multiplied by the halo mass function and integrated over all halo masses. This is then multiplied by the duty cycle to give the model comoving density:
\begin{equation} \label{eq:n_gal}
n_{\mathrm{gal}}=\textrm{DC} \times \int^{\infty}_{0} \textrm{HMF}(M_{h}) \times \langle N_{tot}(M_{h}) \rangle \, \mathrm{d}M_{h} ,
\end{equation}
where HMF is the halo mass function and DC is the duty cycle.
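Equation \ref{eq:n_gal} is a one-dimensional integral, which can be sketched numerically as follows; the power-law HMF below is a placeholder for illustration only, not the \citet{Behroozi2012} mass function actually used:

```python
import numpy as np

def model_ngal(duty_cycle, log_mh_grid, hmf, n_tot_mean):
    """DC x integral of HMF(M) <N_tot(M)> dM via the trapezoidal rule.

    hmf        : dn/dM evaluated on the mass grid (Mpc^-3 per M_sun)
    n_tot_mean : <N_cen + N_sat> evaluated on the same grid
    """
    mh = 10.0 ** np.asarray(log_mh_grid)
    integrand = np.asarray(hmf) * np.asarray(n_tot_mean)
    return duty_cycle * np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(mh))

# Toy example: step-function occupation above 10^11.5 M_sun and an
# arbitrary power-law stand-in for the HMF, just to exercise the integral.
log_m = np.linspace(10.0, 14.0, 2001)
toy_hmf = 1e-12 * (10.0 ** log_m / 1e11) ** -2
n_tot = (log_m > 11.5).astype(float)
ngal = model_ngal(0.6, log_m, toy_hmf, n_tot)
```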
\subsection{MCMC Fitting} \label{sec:MCMC_description}
To compare with observations, we use the {\sc Halomod}\footnote{https://github.com/steven-murray/halomod} code (Murray, Power, Robotham, in prep.) to calculate the spatial correlation function. We then project this to an angular correlation function (as per \citealt{Limber:1954zz}), and subtract a numerical approximation of the integral constraint to obtain our final model correlation function.
We use {\sc Emcee}\footnote{http://dan.iel.fm/emcee/current/} (\citealp{Foreman-Mackey2012}) to provide a Markov Chain Monte Carlo sampling of the parameter space to fit our correlation function. We fit using the following $\chi^{2}$ statistic:
\begin{equation} \label{eq:chi}
\begin{split}
\chi^{2} = {} & \frac{[\log n_{\mathrm{gal}}^{\mathrm{obs}}-\log n_{\mathrm{gal}}^{\mathrm{model}}]^{2}}{\sigma_{\log n}^{2}} \\
& + \sum\limits_{i,j} [\omega^{\mathrm{obs}}(\theta_{i})-\omega^{\mathrm{model}}(\theta_{i})] \, [C^{-1}]_{i,j} \, [\omega^{\mathrm{obs}}(\theta_{j})-\omega^{\mathrm{model}}(\theta_{j})] ,
\end{split}
\end{equation}
where $n_{\mathrm{gal}}^{\mathrm{obs}}$ is the observed galaxy number density, $n_{\mathrm{gal}}^{\mathrm{model}}$ is the model galaxy number density, $\sigma_{\log n}$ is the error on the log of the number density (including both Poisson noise and cosmic variance), $\theta_{i}$ are the angular scales we fit over, $\omega^{\mathrm{obs}}$ is the observed angular correlation function, $\omega^{\mathrm{model}}$ is the angular correlation function of a given model, and $C_{i,j}$ is the covariance matrix of the measurements of the correlation function from the bootstrapping.
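This statistic can be sketched in a few lines; the function name and array layout here are our own:

```python
import numpy as np

def chi_squared(log_n_obs, log_n_model, sigma_log_n, w_obs, w_model, cov):
    """Number-density term plus covariance-weighted clustering term."""
    dn = (log_n_obs - log_n_model) ** 2 / sigma_log_n ** 2
    dw = w_obs - w_model
    return dn + dw @ np.linalg.inv(cov) @ dw

# The sampler then works with the log-likelihood -0.5 * chi_squared(...)
```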
When the parameters are free, we use a uniform prior over $10<\log_{10}{(M_{\mathrm{min}}/M_{\odot})}<13$, $\log_{10}{(M_{\mathrm{min}}/M_{\odot})}<\log_{10}{(M_{\mathrm{1}}/M_{\odot})}<14$ (uniform in log space) and $0<\textrm{DC}<1$. We use 20 walkers with 1000 steps, with starting positions drawn uniformly from the prior.
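The prior can be sketched as follows (a hedged illustration; `theta` packs the three free parameters, and a $\chi^{2}$ function is assumed to be defined elsewhere):

```python
import numpy as np

def log_prior(theta):
    """Uniform priors from the text: 10 < log Mmin < 13,
    log Mmin < log M1 < 14, and 0 < DC < 1."""
    log_mmin, log_m1, dc = theta
    if 10 < log_mmin < 13 and log_mmin < log_m1 < 14 and 0 < dc < 1:
        return 0.0
    return -np.inf

# With emcee, one would sample log_prior(theta) - 0.5 * chi2(theta)
# using 20 walkers for 1000 steps, initialised uniformly within the prior.
```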
We use 500,000 random points in this study. As per \citet{Hatfield2015} and \citet{Hatfield2016a}, we use 100 bootstrap resamplings to estimate the uncertainty, taking the 16th and 84th percentiles of the resampling. For $n_{\mathrm{gal}}^{\mathrm{obs}}$, we use the value obtained by integrating the luminosity function brightwards of $M_{UV}=-21.125$ (as opposed to the number obtained by dividing the number of sources by the volume probed), as the luminosity function already has incompleteness factored in. This equates to $n_{\mathrm{gal}}^{\mathrm{obs}}=4.1 \times 10^{-5}$Mpc$^{-3}$ for the fields combined, from the best-fitting double power law model in \citet{Bowler2015}. Using the best-fitting Schechter function gives the marginally lower value of $n_{\mathrm{gal}}^{\mathrm{obs}}=3.8 \times 10^{-5}$Mpc$^{-3}$; changing from one value to the other does not impact our conclusions. The main sources of incompleteness are blending with foreground sources and misclassification of true $z \sim 6$ LBGs as dwarf stars or lower redshift contaminants; see \citet{Bowler2015}.
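The percentile-based bootstrap error estimate can be sketched as follows; this toy version resamples a scalar statistic, whereas the real procedure resamples the pair counts entering the correlation function:

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_1sigma(samples, n_boot=100):
    """16th/84th percentile spread of the mean under resampling with
    replacement, mirroring the 100-resampling scheme used in the text."""
    means = [rng.choice(samples, size=len(samples), replace=True).mean()
             for _ in range(n_boot)]
    return np.percentile(means, [16, 84])

lo, hi = bootstrap_1sigma(np.array([1.0, 2.0, 3.0, 4.0]))
```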
\section{Results} \label{sec:RESULTS}
\subsection{Clustering Measurements} \label{sec:full_sample_measurements}
Fig. \ref{fig:full_acf} shows the angular correlation function of the full sample over the range $10^{-3}<\theta / \textrm{deg}<10^{-0.5}$, estimated with both the binning approach (where galaxy pair separations are counted in discrete angular ranges) and the kernel smoothing method (where the distribution of galaxy pair separations is smoothed to produce a continuous estimate of the correlation function over the angular range under consideration). The two agree well, as expected, and produce the familiar approximate power law $\sim \theta^{-0.8}$, although the kernel smoothing method copes better with bins containing a small number of pairs. For the rest of our analysis, we take as our final measurements the value of the smoothed correlation function at the ten angular scales calculated for the bins\footnote{With the continuous estimate of the correlation function, one can in principle extract the correlation function at an arbitrary number of angular scales in the range probed. However, this gives dramatically diminishing returns, as adjacent measurements become increasingly covariant: one could, for example, take the estimate of the correlation function at 1000 points in the angular range, but adjacent points would be almost perfectly correlated and no extra information would be gained.}.
In general, measurements of clustering at different scales will be covariant, as individual galaxies contribute multiple times to $DD$, usually at different scales. Extra care with covariances is needed when using the kernel method, as a given galaxy pair contributes over a range of scales (this can be mitigated by choosing measurement scales spaced more widely than the smoothing scale, but is important to keep track of here as we are in a low-data regime). We therefore construct the covariance matrix of our measurements from the bootstrapped samples, in order to account for these covariances in the fitting process. We show the correlation matrix (the covariance matrix with each value normalised by the standard deviations of the corresponding measurements) in Fig. \ref{fig:covariance}. Not taking covariances into account would be equivalent to ignoring the off-diagonal values, which are non-negligible, particularly at the largest and smallest scales.
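The construction of the normalised correlation matrix from the bootstrap resamplings amounts to the following sketch (the function name is our own):

```python
import numpy as np

def correlation_matrix(boot_w):
    """boot_w: (n_bootstrap, n_scales) array of resampled w(theta).
    Returns the covariance matrix and the normalised correlation matrix."""
    cov = np.cov(boot_w, rowvar=False)
    std = np.sqrt(np.diag(cov))
    return cov, cov / np.outer(std, std)
```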
\begin{figure*}
\includegraphics[scale=0.8]{auto_corr_z6.pdf}
\caption{The angular correlation function for our sample of bright ($M_{UV}<-21.125$) $z\sim6$ LBGs from Bowler et al. (2015). The figure shows the correlation function estimated both with a binning method (blue points) and with a kernel smoothing method (red curve, with dotted lines showing the uncertainty). The binned correlation function dips to negative values where there were no galaxy pairs in the bin.}
\label{fig:full_acf}
\end{figure*}
\begin{figure}
\includegraphics[scale=0.5]{covariance_matrix.pdf}
\caption{The correlation coefficients (normalised covariance) of our measurements. Blue values are positive correlations; red values are negative correlations. The diagonal corresponds to the correlation of each measurement with itself (a random variable is always perfectly correlated with itself).}
\label{fig:covariance}
\end{figure}
\subsection{Modelling Results} \label{sec:full_sample_modelling}
We carry out several MCMC fits to the data with the HOD model as per Section \ref{sec:MCMC_description}. The four scenarios considered are:
\begin{itemize}
\item $M_{min}$, $M_1$ and DC free (i.e. 1-halo and 2-halo amplitudes and number counts all free)
\item $M_{1}$ fixed as a function of $M_{min}$ as per Section \ref{sec:HOD_def} (i.e. 1-halo amplitude a fixed function of the 2-halo amplitude)
\item $M_{1}$ fixed as a function of $M_{min}$, $\textrm{DC}=0.6$ (i.e. 1-halo amplitude a fixed function of the 2-halo amplitude, duty cycle fixed at the \citealp{Harikane2015} value)
\item $M_{1}$ fixed as a function of $M_{min}$, $\textrm{DC}=1$ (i.e. 1-halo amplitude a fixed function of the 2-halo amplitude, no duty cycle: galaxies `on' at all times)
\end{itemize}
In addition to these four models we also compute:
\begin{itemize}
\item Halo masses from the most straightforward abundance matching scheme, i.e. $M_{min}$ for a sample of galaxies above a given luminosity threshold is the halo mass such that the comoving number density of halos above that mass equals the comoving number density of the galaxy sample (see Table \ref{table:summary_of_abundance_matching})
\item Galaxy bias from a pure bias model, i.e. fitting for $b$ where $\xi_{\textrm{model}}=b^{2} \xi_{\textrm{DM}}$.
\end{itemize}
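The abundance matching step reduces to inverting the cumulative halo mass function; a minimal sketch (with a toy power-law cumulative mass function standing in for the real one):

```python
import numpy as np

def abundance_match_mmin(log_m_grid, cumulative_n, n_gal):
    """Find log M_min such that n(>M_min) = n_gal, by interpolating the
    cumulative halo mass function (monotonically decreasing in mass)."""
    # np.interp needs increasing x, so interpolate on the reversed arrays
    return np.interp(np.log10(n_gal), np.log10(cumulative_n[::-1]),
                     log_m_grid[::-1])
```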
The results from these six models are shown in Tables \ref{table:summary_of_models} and \ref{table:summary_of_abundance_matching}. Fig. \ref{fig:MCMC_fit_various_models} shows the data and the best-fit models; we show the posterior from one fit in Fig. \ref{fig:posterior} for illustrative purposes. The amplitude of the correlation function is roughly two orders of magnitude larger than the dark matter correlation function in the linear regime, corresponding to a very high bias. Most of our models suggest that $M_{min} \sim 10^{11.5} M_{\sun}$, i.e. our galaxies are hosted by halos of that mass and above. It also appears that the satellite fraction is at most a few percent, which suggests that at most 5-6 galaxies in our sample are satellites (had these sources been the same underlying population as the lower luminosity LBGs, the satellite fraction could have been higher, as a non-trivial fraction would have come from halos hosting multiple galaxies).
It is harder, however, to make statements beyond these basic claims, because the HOD fits are only of moderate quality (see the $\chi^2$ values in Table \ref{table:summary_of_models}). In general the fits decreased in quality as more parameters were fixed, as expected. We discuss the tensions further in Section \ref{sec:compare}, but summarise the results of each model here. The model with $M_{min}$, $M_{1}$ and DC free moves to high masses until tension between the model and the measured number density stops it from going higher; it can also take $M_1$ extremely high, bringing the amplitude of the small-scale clustering down to match the data. Models that do not have $M_1$ free cannot vary their small-scale behaviour independently. This forces their halo masses down, as the small-scale behaviour grows rapidly with $M_{min}$; if they went higher, the disagreement on small scales would become much larger. When DC is free (and $M_1$ is fixed as a function of $M_{min}$), the model actually prefers to go even lower than the abundance matching halo mass, and uses the duty cycle to reach agreement with the number counts. When the duty cycle is fixed, the models cannot do this, so the trade-off between agreeing with the small-scale clustering and with the number counts sets the halo mass. For $\textrm{DC}=0.6$ to agree with the observed number counts, the intrinsic number counts must be higher than for $\textrm{DC}=1$, forcing the model to prefer slightly lower halo masses. Conversely, the `pure bias' model was able to fit the clustering data well.
This suggests that the reason for the poor quality fit is a mismatch between the clustering and the number counts: halos of the mass implied by the bias are far rarer than the observed galaxies\footnote{This problem would have been even worse had the HMF been taken from \cite{Tinker2010} without the high redshift correction of \cite{Behroozi2012}.}, which is problematic\footnote{This is the opposite problem to the one the duty cycle is invoked to solve: duty cycles in clustering studies of LBGs address number counts being \textit{lower} than implied by the clustering, whereas here the number counts are \textit{higher}.}. Although part of the raison d'\^{e}tre of HOD schemes is to understand how multiple galaxies can occupy the same halo, which would allow the number of galaxies to exceed the number of halos, this is, as discussed, a few percent effect, as opposed to a factor of ten effect. In addition, the fact that no 1-halo term emerges is slightly anomalous. We note that \citet{Harikane2015} use the fitting formulae of the HMF in \citet{Tinker2010} directly, without the normalisation constraint, which overestimates the abundance by a factor of $\sim 1.7$ at $z=4$ (Y. Harikane 2017, private communication). The results of \citet{Harikane2015} are therefore likely more consistent with a duty cycle of 1 (rather than the fixed value of 0.6 used in their analysis).
\begin{figure*}
\includegraphics[scale=0.8]{different_models.pdf}
\caption{Comparison of our measurements (red curve and shaded area) with the five different models we fit. The two blue curves correspond to models with only $M_{min}$ free; the dashed curve has $\textrm{DC}=1$ and the solid line $\textrm{DC}=0.6$. On linear scales all models are very similar, apart from the bias-only model, as the number density constraint restricts the models from going too high. Only the $M_1$-free model allows the small-scale amplitude to vary independently.}
\label{fig:MCMC_fit_various_models}
\end{figure*}
\begin{figure*}
\includegraphics[scale=0.8]{triangle_plot_z6.pdf}
\caption{Triangle plot of our posterior from our MCMC fitting for the HOD model with $M_{min}$, $M_{1}$ and DC free (masses in log base ten Solar mass units). Dashed lines on the one dimensional single parameter plots are 16th, 50th and 84th percentiles.}
\label{fig:posterior}
\end{figure*}
\begin{table*}
\caption {Our constraints on the HOD parameters from the MCMC fitting. Also shown are the corresponding satellite fractions ($f_{\textrm{sat}}$), galaxy biases ($b$), and reduced $\chi^2$ values of the fits. Quantities in brackets are either fixed in the model, or fixed as functions of other parameters in the model. Masses are in Solar mass units (log base ten). Note that the values and error bars quoted are the 16th, 50th and 84th percentiles of the posterior, as opposed to the peak values. This makes very little difference except for the duty cycle posterior in the $M_{min}$, $M_1$ and DC free model, which peaks at $\textrm{DC}=1$ and hence has only one tail, see Fig. \ref{fig:posterior}. The lower luminosity parameter values are taken directly from \citet{Harikane2015}, apart from the satellite fraction, which we calculate.}
\begin{tabular}{ ||p{2cm}|p{1cm}|p{1cm}|p{1.5cm}|p{1.5cm}|p{0.5cm}|p{0.5cm}|p{1cm}|p{1cm}|p{1cm}|p{1cm}| }
\hline
Model & $M_{UV}$ & $\log M_{min}$ & $\log M_{1}$ & $\log M_{0}$ & $\alpha$ & $\sigma$ & DC & $10^{2}f_{sat}$ & $b$ & $\chi^{2}/\textrm{d.o.f.}$ \\
\hline
$M_1$, DC free &-21.125 & {$11.53\substack{+0.05\\ -0.07}$} & {$13.64\substack{+0.99 \\ -0.91}$} & {($12.67\substack{+0.75 \\ -0.69}$)} & (1) & (0.2) & {$0.79\substack{+0.15 \\ -0.25}$} & {$<0.2$} & {$8.28\substack{+0.23 \\ -0.32}$} & 1.3\\
DC free &-21.125 & {$11.35\substack{+0.13\\ -0.05}$} & {($12.12\substack{+0.16 \\ -0.05}$)} & {($11.51\substack{+0.12 \\ -0.04}$)} & (1) & (0.2) & {$0.36\substack{+0.3 \\ -0.12}$} & {$3.87\substack{+0.24\\ -0.64}$} & {$7.65\substack{+0.57 \\ -0.18}$} & 1.5\\
DC $=0.6$ &-21.125 & {$11.48\substack{+0.02\\ -0.02}$} & {($12.26\substack{+0.03 \\ -0.03}$)} & {($11.62\substack{+0.02 \\ -0.02}$)} & (1) & (0.2) & {$(0.6)$} & {$3.29\substack{+0.11\\ -0.1}$} & {$8.16\substack{+0.1 \\ -0.1}$} & 1.3\\
DC $=1$ &-21.125 & {$11.51\substack{+0.02\\ -0.02}$} & {($12.3\substack{+0.03 \\ -0.02}$)} & {($11.65\substack{+0.02 \\ -0.02}$)} & (1) & (0.2) & {$(1)$} & {$3.16\substack{+0.08\\ -0.1}$} & {$8.3\substack{+0.12 \\ -0.08}$} & 1.9\\
Bias Only &-21.125 & {NA} & {NA} & {NA} & {NA} & {NA} & {NA} & {NA} & {$10.86\substack{+0.1 \\ -0.2}$} & 0.8\\
Harikane16 & -20.0 & {$11.30\substack{+0.10 \\ -0.13}$} & {($12.06\substack{+0.07 \\ -0.16}$)} & {($11.47\substack{+0.05 \\ -0.12}$)} & (1) & (0.2) & (0.6) & 5.0 & {$6.3\substack{+0.4 \\ -0.4}$} & 0.5\\
Harikane16 & -19.1 & {$11.03\substack{+0.05 \\ -0.18}$} & {($11.75\substack{+0.20 \\ -0.29}$)} & {($11.23\substack{+0.15 \\ -0.22}$)} & (1) & (0.2) & (0.6) & 7.1 & {$5.5\substack{+0.2\\ -0.4}$} & 1.4\\
\hline
\end{tabular}
\label{table:summary_of_models}
\end{table*}
\begin{table*}
\caption {Comparison of abundance matching results to clustering fits for our data. Columns are (1) LBG sample used, (2) LBG threshold absolute magnitude, (3) observed comoving number density (Mpc$^{-3}$), (4) the minimum halo mass in the most straightforward abundance matching scheme (log base ten Solar mass units), (5) the model comoving number density (Mpc$^{-3}$) of the best fit HOD models in this work and Harikane et al. (2016), without incorporating the duty cycle, (6) the corresponding minimum halo mass from the HOD model (log base ten Solar mass units).}
\begin{tabular}{ ||p{2cm}|p{1cm}|p{2cm}|p{2cm}|p{2cm}|p{2cm}| }
\hline
Data & $M_{UV}$ & $n_{g}^{\textrm{observed}}$ (Mpc$^{-3}$) & $\log M_{min}^{\textrm{matched}}$ & $n_{g}^{\textrm{model}}$ (Mpc$^{-3}$) & $\log M_{min}^{\textrm{model}}$ \\
\hline
Bowler15 & -21.125 & $4.1 \times 10^{-5}$ & 11.51 & $8.9 \times 10^{-6}$ & {$11.53\substack{+0.05\\ -0.07}$} \\
Harikane16 & -20.0 & $3.8 \times 10^{-4}$ & 11.09 & $2.1 \times 10^{-4}$ & {$11.30\substack{+0.10 \\ -0.13}$} \\
Harikane16 & -19.1 & $13.4 \times 10^{-4}$ & 10.79 & $7.3 \times 10^{-4}$ & {$11.03\substack{+0.05 \\ -0.18}$} \\
\hline
\end{tabular}
\label{table:summary_of_abundance_matching}
\end{table*}
\section{Discussion}
\subsection{The link between low- and high-luminosity galaxies and their haloes at $z \sim 6$} \label{sec:compare}
The most relevant previous study to compare our results to is \citet{Harikane2015}, which presented clustering measurements and HOD fits for $z\sim6$ LBGs, but on smaller angular scales of approximately $10^{-3.25}<\theta / \textrm{deg}<10^{-1.25}$, compared with our $10^{-3}<\theta / \textrm{deg}<10^{-0.5}$, and for fainter rest-frame absolute magnitudes of $-20.5<M_{UV}<-19$, compared with our $-22.7<M_{UV}<-21.125$ sample (see Fig. \ref{fig:MCMC_fit}). Our results combined with those of \citet{Harikane2015} thus describe LBG clustering over almost three orders of magnitude in angular scale and a factor of 40 in luminosity.
\begin{figure}
\includegraphics[scale=0.45]{hod_compare.pdf}
\caption{Comparison of our measurements (red curve, 1-$\sigma$ uncertainties in the lighter curves), the lower luminosity Harikane et al. (2016) measurements, and the dark matter angular correlation function (black curve).}
\label{fig:MCMC_fit}
\end{figure}
Our bias and halo mass results are compared with the results of \citet{Harikane2015} in Fig. \ref{fig:harikane_compare}. Although the fits are only of moderate quality (possible reasons for which are discussed in the subsequent sub-sections), all our fitted models suggest that our galaxy sample has a substantially higher typical host halo mass and galaxy bias than the lower luminosity samples in \citet{Harikane2015}. This higher bias is evident from a direct comparison of the two measurements of the correlation function: our sample has an amplitude $\sim 3$ times higher than the \citet{Harikane2015} bright ($M_{UV}<-20$) sample with $\omega(0.01^{\circ}) \sim 0.2$, so our measured bias is a factor of $\sim 1.7$ greater than that of the lower luminosity \citet{Harikane2015} sample (as $\omega \propto b^2$). In general higher luminosity and higher stellar mass galaxy samples have higher biases, but it is important to note that a bias this high was not a foregone conclusion. It was entirely possible that our $M_{UV} \sim -21.5$ sample could have been (largely) the same population as the sample of \citet{Harikane2015}, just observed during a particularly vigorous but rare burst of star formation; had that been the case, we would have measured a lower clustering amplitude, and inferred a much lower duty cycle. The comoving space density of the galaxies in our sample is $4.1 \times 10^{-5} \textrm{ Mpc}^{-3}$, compared with $3.8 \times 10^{-4} \textrm{ Mpc}^{-3}$ for the most luminous $z=6$ \citet{Harikane2015} sample. \citet{Harikane2015} do not measure the duty cycle for this sample, but assume it to be 0.6. As an illustrative example, a duty cycle of 0.6 for the \citet{Harikane2015} sample would imply an actual underlying population comoving density of $6.3\times 10^{-4} \textrm{Mpc}^{-3}$.
If our sample were part of the same population, that would correspond to $\textrm{DC}=0.06$ (in other words, the fainter population would spend approximately 6 percent of its time in this super-enhanced state of star formation). However, the amplitude of the clustering rules this out: our $M_{UV} < -21.125$ sample is composed of continuously high-luminosity objects in very dense environments.
\begin{figure}
\includegraphics[scale=0.45]{harikane_compare_z6.pdf}
\caption{Comparison of our results with comparable measurements of lower luminosity LBGs from \citet{Harikane2015}. Top plot: $M_{min}$ as a function of absolute UV luminosity threshold (in units of Solar mass). Bottom plot: galaxy bias as a function of absolute UV luminosity threshold. The results from our six different models are shown for comparison (x-axis values slightly offset for each model for clarity).}
\label{fig:harikane_compare}
\end{figure}
\subsection{Apparent lack of a 1-Halo term} \label{sec:no_1_halo}
Models using extrapolated values of $M_1$ suggest that at scales of $10^{-2.5}$deg and smaller there should be a sharp upturn in the value of the correlation function, as the observations start to probe the clustering of multiple galaxies within individual halos (see Fig. \ref{fig:MCMC_fit_various_models}). We do not observe this in the data, in contrast to \cite{Harikane2015}; see Fig. \ref{fig:MCMC_fit}. A direct interpretation would be that $M_1$ simply increases much faster than the extrapolation of Equation \ref{eq:M1}, i.e. the satellite fraction drops off extremely fast and an unfeasibly large (for this redshift) halo is needed to host two of these sources. Another possibility is suggested by \cite{Jose2013}, who also observe a lack of a 1-halo term in clustering measurements of $z \sim 3-7$ LAEs; their proposed solution was that halo occupancy behaves in a sub-Poissonian manner, and they found that a modified distribution (see their equation 15) was able to reproduce the measurements. However, we suggest that there are good reasons to believe there is strong cosmic variance on our small-scale measurements that is not accounted for in the bootstrap uncertainties, making it hard to draw direct inferences about the satellite population of these galaxies.
For a single contiguous field observation, cosmic variance is smaller on small scales than on large scales only in a limited sense, simply because one observes more instances of small-scale structure. These are not, however, independent instances of small-scale structure, as they all come from the same large-scale density field. To illustrate this, suppose our sources have $M_{\textrm{min}} \sim10^{11.4} M_{\sun}$; then we would expect $M_{1} \sim 10^{12.2} M_{\sun}$, i.e. only halos with $M>10^{12.2} M_{\sun}$ host more than one of our bright sample. The comoving density of $M>10^{12.2} M_{\sun}$ halos is $5.2 \times 10^{-7} \textrm{Mpc}^{-3}$ and the comoving volume probed by the observations is $\sim 1.7 \times 10^{7} \textrm{Mpc}^{3}$, so the expected number of $M>10^{12.2} M_{\sun}$ halos in the volume surveyed is $\sim 10$. Just these ten would give $\sim 10$ close pairs (with a few more expected from projection effects), which is more than twice the 4 close ($10^{-3}-10^{-2.5}$deg) pairs observed here, and would push the small-scale correlation function up. However, these halos will be extremely biased, much more so than $\sim10^{11.4} M_{\sun}$ halos. Conceivably, in an extreme case, if our observations were repeated 10 times, we might find that 9 times no $M>10^{12.2} M_{\sun}$ halos were observed, and the tenth time a very overdense region containing 100 of them was observed. In the first nine cases, no satellites would be observed, leading to the flat correlation function at small radii that we see in our observations; in the tenth case, the satellite fraction would be overestimated. We therefore expect very substantial cosmic variance on our measurements of the correlation function on small scales, variance that is not incorporated into the errors on our clustering measurements.
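The expected halo count quoted above follows directly from the two numbers in the text:

```python
# Back-of-the-envelope check of the expected number of massive halos
n_halo = 5.2e-7   # comoving density of M > 10^12.2 M_sun halos, Mpc^-3
volume = 1.7e7    # comoving volume probed by the observations, Mpc^3
expected_halos = n_halo * volume   # of order 10
```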
Essentially, the 1-halo term is dominated by contributions from very massive halos, which are the most biased, so small-scale measurements of the correlation function carry the most cosmic variance. It appears that neither of our fields is overdense enough to sample the highly biased population of massive halos at this redshift that could host multiple bright LBGs; instead, our small-scale measurements are dominated by the angular projection of the linear clustering (i.e. objects close in angular space by chance, but not close in physical space).
The \cite{Harikane2015} measurements, however, do show a prominent 1-halo term. We suggest that the reason may be that a) their samples are at lower luminosities, so the cosmic variance on the halos required to host multiple galaxies is less extreme, and b) their correlation functions are measured from galaxies in seven different fields (rather than our two), so they had a greater chance of observing a dense field containing the massive halos necessary for satellites.
\subsection{Mismatch between the number counts and bias measurements}
As discussed in Section \ref{sec:full_sample_modelling}, it was not possible to obtain a good HOD fit to the number counts and clustering; a good fit to the clustering measurements was only obtained with a pure bias model. The core of the discrepancy is that to produce the directly measured bias of $\sim 10$, the galaxies would need to be in halos of minimum mass $M_{\textrm{min}} \sim 10^{12} M_{\sun}$. This corresponds to a comoving density of $2.8 \times 10^{-6} \textrm{Mpc}^{-3}$, approximately a factor of 15 lower than the observed number density of $4.1 \times 10^{-5} \textrm{Mpc}^{-3}$. Similarly, plain abundance matching would suggest a minimum mass of $M_{\textrm{min}} \sim 10^{11.5} M_{\sun}$, corresponding to a bias of $\sim 8$. We note that \citet{Barone-Nugent2014} report a very similar issue at $z \sim 7.2$, where they found that a duty cycle of 1 was needed for their LBG sample, and that even then the measured bias was slightly inconsistent with the number density. In this sub-section we discuss possible explanations for this discrepancy.
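The factor quoted above follows directly from the two densities:

```python
# The observed galaxies outnumber the halos massive enough to supply
# the measured bias by roughly a factor of 15 (numbers from the text).
n_obs = 4.1e-5    # observed comoving galaxy density, Mpc^-3
n_halo = 2.8e-6   # comoving density of M > 10^12 M_sun halos, Mpc^-3
ratio = n_obs / n_halo
```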
\subsubsection{Contaminants}
It is possible that some of the sources used for our clustering measurements are not truly $z \sim 6$ LBGs, but are instead brown dwarfs or galaxies at other redshifts. This is unlikely to be the cause of the discrepancy, as \cite{Bowler2015} had access to photometry across a very large range of wavelengths and performed extensive testing with brown dwarf templates to rule out substantial contamination. Furthermore, stellar contamination would actually reduce the clustering amplitude, as stars are unclustered and have no physical correlation with the galaxies. The same is generally true of contamination from galaxies at other redshifts, which would not be spatially correlated with the region of space probed at $z\sim6$. One possible exception is if a substantial proportion of the sources are actually at $z \sim 1.3$ (the redshift degenerate with $z \sim 6$ when it is difficult to distinguish between the two spectral breaks in SED modelling), in which case we would effectively be measuring the correlation function at $z \sim 1.3$, which would render all the modelling so far void. However, we dismiss this possibility: although there may be a few $z \sim 1.3$ interlopers in the sample, it seems extremely unlikely that they make up a substantial proportion of it, a) because of our confidence in the template fitting, and b) because of the deep multi-wavelength data used in \cite{Bowler2015}. We conclude that contamination is unlikely to be a substantial factor in the clustering-number density discrepancy.
\subsubsection{More complex galaxy-halo relations}
HOD modelling is predicated on the principle that the galaxy content of a halo is determined solely by the halo's mass; if this is violated, then more complex relations between the galaxy-halo connection and the clustering measurements are possible in general. Well-known cases include assembly bias (see \citealp{Hearin2015}; the bias of halos depends on halo assembly history as well as mass), a requirement of a nearby halo to interact with (\citealp{Cen2015}), or a dependence on the large-scale linear density field. These effects are potentially plausible: it is possible to imagine a galaxy having a brief starburst (which we then observe as an ultra-bright source) as a result of an interaction in a denser region of the Universe, or the amount of gas left for star formation in a halo being related to the age of the halo. However, it is in general very hard to distinguish between these different effects when one only has access to the large-scale linear bias (effectively a one-dimensional measurement). Galaxy-galaxy lensing can in principle break these degeneracies observationally (e.g. if sources are in older, lower mass halos, their clustering will reflect the assembly-dependent bias of the hosts, but the lensing will reflect just the mass), but this is likely never to be possible at these redshifts, as it requires a high number density of even higher redshift sources. It seems likely that comparison with simulations is the only way to investigate the viability of such underlying processes.
\subsubsection{Uncertainty in knowledge of the high-redshift dark matter distribution}
A key input to HOD modelling is our knowledge of the spatial distribution of the underlying dark matter, in particular the HMF and the halo bias as a function of halo mass (which come predominantly from N-body simulations). If the dark matter model used is incorrect, then the conclusions from HOD modelling will in general also be incorrect. As shown in Table \ref{table:summary_of_models}, abundance matching suggests that the sources are in $>10^{11.5} M_{\sun}$ halos. However, the bias from the clustering would suggest that they are in $>10^{12} M_{\sun}$ halos, which are a factor of twenty rarer. \cite{Tinker2008} and \cite{Tinker2010} derived model HMFs and halo biases from N-body simulations at redshifts of $z=0-2.5$. \cite{Behroozi2012} then introduced a high-redshift calibration to the \cite{Tinker2010} HMF, extending its validity to $z \sim 8$ (a correction of approximately 20 percent at $z=6$ for $M \sim10^{11.3} M_{\sun}$ halos). Confidence in N-body simulations therefore makes a correction factor of $\sim 20$ to the HMF appear implausible within the current structure formation paradigm. However, \cite{Behroozi2012} did not calibrate the high-redshift halo bias, so we are effectively using biases calibrated at $z \sim 2.5$ extrapolated to $z \sim 6$. The excess in the clustering amplitude is only around 50 percent, which would require only around a 25 percent correction to the bias. We therefore suggest that our results could potentially be explained by a high-redshift calibration of the halo bias function that steepens it at the high mass end. See also \cite{Behroozi2016} for a discussion of this direction of inference, i.e. how high-redshift stellar mass functions can give information on the high-redshift HMF.
An alternate potential correction to our understanding of the distribution of dark matter is the incorporation of `quasi-linear effects'. HOD modelling makes a binary division between non-linear clustering within halos and large-scale linear bias. However this transition is gradual, not sharp, and bias can be scale-dependent on up to 10 Mpc scales (relevant for scales probed with our observations), although it always tends to a constant value at large scales (\citealp{Mann1997a}). Introducing a functional form for scale-dependent bias can model some of these effects, and \cite{Jose2016} conclude that quasi-linear clustering has the largest effect at high redshift ($z>2$) and high halo mass. In particular, \cite{Jose2017} note a similar discrepancy to ours at $3<z<5$, and find that quasi-linear effects can cause one to overestimate halo mass by up to a factor of ten if unaccounted for. They give $\gamma$ values (the correction to the bias) of order 30-40 percent - around the size of our inconsistency - at the masses and scales relevant for our analysis, so it seems plausible that incorporating quasi-linear effects could solve our discrepancy, and be necessary for future analysis. \cite{Jose2017} also show that quasi-linear effects make the transition into the 1-halo term less sharp, which could explain why we do not observe one, as discussed in section \ref{sec:no_1_halo}. Quasi-linear effects are sub-percent at lower redshift, so HOD modelling at lower redshift is not invalidated (\citealp{Bosch2012}).
\subsubsection{Modification of our understanding of high-redshift structure formation}
Alternatively, it may be the case that N-body simulations do not correctly capture the physics of early structure formation, in a way that no calibration will be able to account for. The potential issue of `too many' high mass/luminosity galaxies has been identified by \cite{Steinhardt2016}. They summarise results suggesting that the best observational constraints on the HMF at $z=4-10$ are dramatically higher than the HMF from $\Lambda$CDM, with the discrepancy getting worse towards higher halo masses and higher redshifts. Although in a challenging observational regime, they suggest that the observations show that current theories of structure assembly at $z>4$ could be flawed.
Although a possibility, our results are not in sufficient disagreement with models to warrant support of this hypothesis yet - it is necessary to explore the much more likely possibilities of the high-redshift halo bias needing calibration and quasi-linear issues before considering more dramatic changes to theories of structure formation.
\subsection{Estimating Cosmic Variance} \label{sec:selection_lum}
Cosmic variance is a term that can be used to refer to a number of related but subtly different effects. The specific context in which we use the term here is that many extragalactic statistical measurements vary by more than sample variance between different fields because of large scale structure. As noted in \cite{Bowler2015}, the number density of our two fields varies by much more than sample variance assuming a Poisson distribution. This is a consequence of large-scale structure, which our clustering measurements quantify. These clustering measurements can be linked back to the number count estimates to see if the cosmic variance observed is consistent with the clustering measurements, or if one of the fields is over/under dense, even accounting for large-scale structure. Understanding cosmic variance can be important for correctly connecting high redshift observations of galaxies with our understanding of reionization e.g. \cite{Ouchi2009}.
Note that in general it is possible for two populations to have the same average number counts, but different cosmic variances - this occurs when they have the same 1-point statistics, but different 2-point statistics. Thus we can use the 2-point statistics to refine the estimate of cosmic variance in \citet{Bowler2015} who used the Trenti Cosmic Variance calculator (\citealp{Trenti2008}) - which only uses 1-point statistics. A clustered and unclustered population of the same number density will have substantial and zero cosmic variance respectively (both will have Poisson variance). Also 2-point statistics only give the \textit{variance} of the full probability distribution of counts in a field. Higher order statistics (n-point correlation function etc.) are needed to fully probe the full distribution. Similarly, 3-point statistics are needed to quantify the cosmic variance on measurements of 2-point statistics, 4-point statistics are needed to quantify the cosmic variance on measurements of 3-point statistics, and so on ad infinitum. Alternatively, more complex cosmic variance behaviours can be studied with mock catalogs from cosmological simulations of structure formation (\citealp{Trenti2008}).
The cosmic variance is related to the \textit{expected value} of the correlation function in the geometry of the field, that is to say the expected value of the correlation function at the separation of two points randomly selected in the field. Analytically we can write the expectation as:
\begin{equation}
\bar{\omega}(A)=\frac{\int_{A}\int_{A}\omega(|\vec{\theta_{i}}-\vec{\theta_{j}}|)d^{2}\theta_{i} d^{2}\theta_{j}}{\int_{A}\int_{A}d^{2}\theta_{i} d^{2}\theta_{j}} ,
\end{equation}
where $A$ is the angular region of the field, $\omega$ is the 2-point correlation function, $\bar{\omega}(A)$ is the expectation of the correlation function in that field, $\theta_{i}$ and $\theta_{j}$ are points in the field, $|\vec{\theta_{i}}-\vec{\theta_{j}}|$ is their angular separation, and the integrals are double integrals over the area of the field. We calculate this numerically by sampling 100,000 pairs of points in the field, calculating their angular separation, finding the value of the correlation function at that angular scale (with the best fit model from Section \ref{sec:full_sample_modelling}), and then taking the average.
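This Monte Carlo averaging can be sketched in a few lines. The sketch below assumes, purely for illustration, a power-law $\omega(\theta)=A_{w}\theta^{-\beta}$ with hypothetical amplitude and slope (not the fitted model of this work) and a rectangular field:

```python
import numpy as np

def mean_correlation(width_deg, height_deg, amp=1e-3, beta=0.8,
                     n_pairs=100_000, seed=1):
    """Monte Carlo estimate of the field-averaged correlation function
    omega_bar(A) for a rectangular field: sample random pairs of points,
    evaluate omega at each pair's separation, and average."""
    rng = np.random.default_rng(seed)
    p = rng.uniform([0.0, 0.0], [width_deg, height_deg], size=(n_pairs, 2))
    q = rng.uniform([0.0, 0.0], [width_deg, height_deg], size=(n_pairs, 2))
    theta = np.hypot(*(p - q).T)          # angular separations (degrees)
    return np.mean(amp * theta ** (-beta))
```

Consistent with the discussion of field geometry below, a compact field samples smaller separations and therefore yields a larger $\bar{\omega}(A)$ than an elongated field of the same area.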
\citet{Trenti2008} summarise results from \citet{Peebles1993,Newman2001} and \citet{Somerville2003} that conclude:
\begin{equation}
\bar{\omega}(A)=\frac{\langle N^{2} \rangle - \langle N \rangle^2}{\langle N \rangle^2} - \frac{1}{\langle N \rangle} ,
\end{equation}
where $N$ is the random variable of the number of objects in a field. This can be rearranged to the form:
\begin{equation}
\langle N^{2} \rangle - \langle N \rangle^2=\langle N \rangle+\bar{\omega}(A) \langle N \rangle^{2} ,
\end{equation}
\begin{equation*}
\sigma^{2}_{total}= \sigma^{2}_{Poisson}+\sigma^{2}_{CV} ,
\end{equation*}
where $\sigma_{total}=\sqrt{\langle N^{2} \rangle - \langle N \rangle^2}$ is the total standard deviation on measurements of number counts, $\sigma_{Poisson}=\sqrt{\langle N \rangle}$ is the Poisson standard deviation and $\sigma_{CV}=\sqrt{\bar{\omega}(A)} \langle N \rangle$ is the standard deviation from cosmic variance, i.e. the total standard deviation is the Poisson and cosmic variance standard deviations added in quadrature. The standard deviation from cosmic variance reduces to $\sigma_{CV}=b \sqrt{\bar{\omega}_{DM}(A)} \langle N \rangle$ in the `pure-bias' case where $\omega=b^2\omega_{DM}$, where $b$ is the bias and $\omega_{DM}$ is the dark matter angular correlation function.
This formalism has all the properties one would expect from cosmic variance. Cosmic variance is higher when sources are more clustered (further away from uniform). Cosmic variance becomes lower as the size of the field increases, as the correlation function is sampling larger scales, where the function has a lower value, a consequence of the fact that a larger range of environments are being probed. A more subtle effect is that cosmic variance also varies with field shape, as well as size. The average length scale probed for a circle is a lot smaller than for a long thin rectangle of the same area (for example), corresponding to a higher average correlation function value, and greater cosmic variance. This can be interpreted (as described in \citealp{Trenti2008}) as a consequence of the fact that a more compact field geometry is predominantly sampling the same environment, be it an over- or under-density. However a long thin geometry is sampling from a large range of environments, and overdensities and underdensities are more likely to cancel out. The formalism for describing cosmic variance here also works when the field is disconnected. If the `field' is actually two disconnected subfields separated by a vast distance in the sky, when calculating the average of the correlation function over this field, half the time the two points will be in different sub-fields, and the value of the correlation function on this scale will be effectively zero. This halves the value of $\bar{\omega}(A)$, effectively reducing cosmic variance contribution by a factor of $\sqrt{2}$ as completely different regions of the Universe are being probed.
\begin{table*}
\caption {Actual and expected number of galaxies in each field. The columns are: field used; the field angular area (in deg$^2$); the actual number of galaxies in the field ($N_{\textrm{a}} $); the actual angular galaxy density in the field ($\rho_{\textrm{a}}$, in deg$^{-2}$); the expected number of galaxies in the field if it had the mean density ($N_{\textrm{e}} $); the expected angular galaxy density in the field ($\rho_{\textrm{e}}$, in deg$^{-2}$) - identical for all fields as we are considering deviations from the mean density; the expected value of the correlation function in the field ($\bar{\omega}$); the standard deviation from Poisson statistics, equal to the square root of $N_{\textrm{e}} $ ($\sigma_{Poisson}$); the standard deviation from cosmic variance, estimated from our clustering measurements ($\sigma_{CV}$); and the Poisson and cosmic variance errors added in quadrature ($\sigma_{Total}$).}
\begin{tabular}{ |p{2cm}||p{1cm}|p{1cm}|p{1cm}|p{1cm}|p{1cm}|p{1cm}|p{1cm}|p{1cm}|p{1cm}| }
\hline
Field & Area (deg$^{2}$) & $N_{\textrm{a}} $ & $\rho_{\textrm{a}}$ (deg$^{-2}$) & $N_{\textrm{e}}$ & $\rho_{\textrm{e}}$ (deg$^{-2}$) & $\bar{\omega}$ & $\sigma_{Poisson}$ & $\sigma_{CV}$ & $\sigma_{Total}$ \\
\hline
UDS & 0.74 & 64 & 86 & 92 & 124 & 0.027 & 10 & 15 & 18\\
UltraVISTA & 0.62 & 103 & 166 & 77 & 124 & 0.023 & 9 & 12 & 15\\
Total & 1.35 & 167 & 124 & 167 & 124 & 0.013 & 13 & 19 & 23\\
\hline
\end{tabular}
\label{table:summary}
\end{table*}
Table \ref{table:summary} summarises our results when applied to the UltraVISTA and UDS fields for our $z\sim6$ samples (using our best-fit pure bias model). The most important columns to compare are $N_{a}$ (the actual number of galaxies in the field), $N_{e}$ (the expected number there would be if both fields had the average density) and $\sigma_{total}$ (the standard deviation on counts, including Poisson and cosmic variance, implied by our clustering measurements). In both cases, the observed number of galaxies in each field is approximately a 1.5$\sigma$ deviation from the model value (as noted in \citealp{Bowler2014}, they are the most over- and under-dense respectively of the five CANDELS fields). Thus at these redshifts, both UltraVISTA and UDS appear to be moderate, but not unreasonable, over- and under-densities respectively.
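As a consistency check, the error decomposition above reproduces the entries of Table \ref{table:summary} directly from $N_{\textrm{e}}$ and $\bar{\omega}$; a minimal sketch (the function name is ours):

```python
import math

def count_sigmas(n_expected, w_bar):
    """Standard deviations on number counts from Poisson statistics,
    from cosmic variance, and their quadrature sum, given the expected
    count and the field-averaged correlation function w_bar."""
    s_poisson = math.sqrt(n_expected)
    s_cv = math.sqrt(w_bar) * n_expected
    return s_poisson, s_cv, math.hypot(s_poisson, s_cv)

# UDS row of the table: N_e = 92, w_bar = 0.027
s_p, s_cv, s_tot = count_sigmas(92, 0.027)
print(round(s_p), round(s_cv), round(s_tot))  # -> 10 15 18, as in the table
```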
\subsection{The onset of quenching} \label{sec:onset_quench}
\cite{Bowler2014} and \cite{Bowler2015} report a rapid evolution in the high luminosity end of the luminosity function at $z=6-7$, a transition from a power-law drop off to an exponential cut-off, interpreted as the onset of quenching or dust obscuration. We show in Fig. \ref{fig:simple_abundance_match} the luminosity to halo mass ratios implied by the \cite{Bowler2015} luminosity functions and the simplest possible abundance matching scheme:
\begin{equation*}
M_{h}(M_{UV})=\psi^{-1}_{HMF}(\psi_{lum}(M_{UV})) ,
\end{equation*}
where $M_{h}$ is the halo mass corresponding to the magnitude $M_{UV}$, $\psi_{HMF}(M_{h})$ is the comoving number density of halos greater than mass $M_{h}$ and $\psi_{lum}(M_{UV})$ is the comoving number density of LBGs brighter than magnitude $M_{UV}$. This is a very simple model (more complex abundance matching schemes do exist, e.g. SHAM, \citealp{Guo2015}), and ignores complexities such as scatter, satellites and duty cycle, but illustrates the argument in \cite{Bowler2014} that $z=6-7$ is the onset of quenching. At the low-luminosity end, the luminosity to halo mass ratio drops with time, as \cite{Harikane2015} find using clustering (see their figure 10). At the high-mass end, the ratio is fairly constant, before dropping off towards low redshift e.g. \cite{Behroozi2012}. The unphysical rise in the $z=7$ luminosity to halo mass ratio at the high-luminosity end is a result of the fact that the luminosity function cannot (as noted in \citealp{Bowler2014}) continue as a power law to even brighter magnitudes, as it would quickly be dramatically higher than the HMF. More realistically, in a scenario with no high luminosity/mass end quenching, the luminosity function would drop off at the same rate as the HMF (which does not drop off as fast as a Schechter function). If at higher luminosities it really does continue as a power law, then the LBGs would become much more numerous than their corresponding halos, and almost certainly these bright objects would be rare phases in the duty cycle of a more common population. \cite{Davidzon2017} make a similar argument for the onset of quenching, using stellar mass estimates from UltraVISTA-DR2, SPLASH and Hyper-Suprime-Cam data in the COSMOS field, except finding the transition at $z \sim 3$ rather than $z \sim 6$.
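The scheme above can be illustrated with a short sketch; the cumulative functions below are toy power laws chosen only to show the mechanics of the matching (they are not the Bowler et al. or Behroozi et al. fits):

```python
import numpy as np

def abundance_match(m_uv, psi_lum, psi_hmf_inv):
    """M_h(M_UV) = psi_HMF^{-1}(psi_lum(M_UV)): the halo mass above which
    halos are exactly as numerous as galaxies brighter than M_UV."""
    return psi_hmf_inv(psi_lum(m_uv))

# Toy cumulative functions (schematic, hypothetical normalisations):
def psi_lum(m_uv):
    """Comoving number density of galaxies brighter than M_UV (Mpc^-3)."""
    return 1e-4 * 10.0 ** (-0.6 * (-21.0 - m_uv))

def psi_hmf_inv(n):
    """Inverse of a toy cumulative HMF n(>M); returns log10(M_h)."""
    return 11.0 + (-2.0 - np.log10(n)) / 2.5

log_mh_bright = abundance_match(-22.0, psi_lum, psi_hmf_inv)
log_mh_faint = abundance_match(-21.0, psi_lum, psi_hmf_inv)
```

Because both cumulative functions are monotonic, brighter magnitudes map to more massive (rarer) halos, which is the property the argument in the text relies on.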
\begin{figure}
\includegraphics[scale=0.6]{LBG_paper_ratio_pdf.pdf}
\caption{Luminosity (monochromatic luminosity at $1500\,$\AA; that is to say, $\lambda f_{\lambda}$ where $\lambda$ is the wavelength and $f_{\lambda}$ is the flux per wavelength) to halo mass ratios as a function of halo mass, derived by abundance matching the Bowler et al. (2015) luminosity functions and the Behroozi et al. (2013) halo mass functions. The results are shown for both the Schechter function fit and the double power-law fit, for $z=5$, $z=6$ and $z=7$. In bold are the best-fitting functions (double power law for $z=6$ and $z=7$, Schechter for $z=5$), and the dashed line shows the alternative. }
\label{fig:simple_abundance_match}
\end{figure}
The fact that these galaxies at the bright end of the luminosity function truly are in the densest regions of the Universe, as opposed to being less biased objects caught in extremely rare massive star-bursts, supports the explanation that the drop-off in the luminosity to halo mass ratio at the high-mass end is essentially still operational at $z=6$. If the sample had been just rare episodes of vigorous star formation, then the interpretation of the steepness/drop-off rate at the bright end of the luminosity function would be different - it would instead be dominated by the distribution of star-formation rates within a given population, and how rare its episodes of high star formation are, as opposed to the luminosity function being dominated by a modified halo mass function.
To determine if $z \sim 6-7$ is the onset of mass quenching, a similar analysis to this work would need to be performed on luminous $z=7$ LBGs, to see whether the luminosity to halo mass ratio drops off. There are three main possibilities at $z\sim7$:
\begin{itemize}
\item $M_{UV}\sim -22$ LBGs have the same host halo mass as $M_{UV}\sim -20$ objects; this would suggest they are the same population of objects at different points in their duty cycle;
\item $M_{UV}\sim -22$ LBGs have the same host halo mass as they do at $z\sim6$. This would suggest that the luminosity to halo mass ratio at the high-luminosity end does not change much over $z\sim6-7$, which would not support $z\sim6-7$ being the onset of mass quenching/dust obscuration;
\item $M_{UV}\sim -22$ LBGs have a lower host halo mass than they do at $z\sim6$, but still higher than the galaxies with $M_{UV}\sim -20$, in such a way that the luminosity to halo mass ratio is constant as a function of halo mass. This would support $z\sim6-7$ indeed being the onset of quenching (or dust obscuration).
\end{itemize}
\section{Conclusions} \label{sec:conclusions}
We have used the largest existing sample of extremely bright Lyman-break galaxies at $z\sim6$ to investigate their large-scale structure and links to the possible onset of feedback quenching or dust obscuration at this redshift. This sample (detailed in \citealp{Bowler2015}) of 263 LBGs was selected in the UltraVISTA/COSMOS and UDS/SXDS fields, using the deep optical and near-infrared data required to distinguish the galaxies from contaminant populations. The method we used to study the connection between the galaxies and their host halos was to measure their clustering with the angular correlation function, and model these measurements with a HOD scheme.
The key conclusions of this work are:
\begin{itemize}
\item Bright LBGs ($M_{UV}\leqslant-21$) appear to be highly biased ($b \sim 10$) objects in dense environments, as opposed to being rare episodic incarnations of fainter galaxies ($M_{UV} \sim -19$). This suggests that the bright end of the luminosity function at $z \sim 6$ is determined by feedback processes or dust obscuration, rather than duty cycles. Our results have important implications for the physical origin of the observed steepening of the bright end of the ultra-violet luminosity function between $z \sim 6$ and $z \sim 7$ (\citealp{Bowler2014,Bowler2015}) - which in a straightforward abundance matching scheme would imply a dramatically increased luminosity to halo mass ratio at $z \sim 7$ relative to $z \sim 6$.
\item We find a tension between the observed number counts and bias, which suggests that some modification to our knowledge of the high-redshift dark matter distribution is needed. This is most likely to be the incorporation of quasi-linear effects (as described in \citealp{Jose2017}), or possibly a minor calibration upwards of the halo bias at high redshift.
\item Although number counts within each field differ by far more than Poisson sample variance, estimates of the cosmic variance from the clustering suggest that both fields are only moderate 1.5-$\sigma$ over-/under-densities.
\item We do not require a duty cycle to explain our observations (equivalently $DC \sim 1$), and the satellite fraction of the sources is very small, at most a few percent.
\end{itemize}
In the next few years, deep, wide surveys such as VIDEO and the VISTA Extragalactic Infrared Legacy Survey (VEILS, \citealp{Honig2016}), which will extend the area of VIDEO, will provide improved constraints on the luminosity function and clustering of high-redshift galaxies, allowing extension to even more luminous LBGs. By the mid-2020s it should be possible to use the \textit{EUCLID} space telescope mission to do this with thousands of LBGs (\citealp{Bowler2016a}), which will reveal how the measured large-scale structure of LBGs and LAEs relates to reionization.
\section*{Acknowledgements}
The first author wishes to acknowledge support provided through an STFC studentship, and the Rector and Fellows of Lincoln College for support through the Graduate Research Fund. Many thanks to Yuichi Harikane for useful discussions on high-redshift clustering, and to Steven Murray for advice on using {\sc Halomod}. This work was supported by the Oxford Centre for Astrophysical Surveys, which is funded through generous support from the Hintze Family Charitable Foundation, the award of the STFC consolidated grant (ST/N000919/1), and the John Fell Oxford University Press (OUP) Research Fund. Based on data products from observations made with ESO Telescopes at the La Silla or Paranal Observatories under ESO programme ID 179.A-2006. Based on observations obtained with MegaPrime/MegaCam, a joint project of CFHT and CEA/IRFU, at the Canada-France-Hawaii Telescope (CFHT) which is operated by the National Research Council (NRC) of Canada, the Institut National des Sciences de l'Univers of the Centre National de la Recherche Scientifique (CNRS) of France, and the University of Hawaii. This work is based in part on data products produced at Terapix available at the Canadian Astronomy Data Centre as part of the Canada-France-Hawaii Telescope Legacy Survey, a collaborative project of NRC and CNRS.
\bibliographystyle{mn2e_mod}
\section{Introduction}
Consider $\mathcal O$ to be a bounded connected domain in $\mathbb R^2$ with a smooth boundary. The electrical impedance tomography (EIT) problem (e.g., \cite{borcea}) concerns the determination of the admittivity in the interior of $\mathcal O$, given simultaneous measurements of direct or alternating electric currents and voltages at the boundary $\partial \mathcal O$.
If the magnetic permeability is negligible, then the problem can be reduced to the inverse conductivity problem (ICP), which consists of reconstructing a function $\gamma(z)$, $z \in \mathcal O$, from a known set of data $(u|_{\partial \mathcal O},\frac{\partial u}{\partial\nu}|_{\partial \mathcal O})$, dense in some adequate topology, where
\begin{equation}\label{set27A}
\mbox{div}(\gamma \nabla u(z)) =0, ~ z\in \mathcal O.
\end{equation}
Here $\nu$ is the unit outward normal to $\partial \mathcal O$, $\gamma(z) = \sigma(z)+ i \omega\epsilon(z)$, where $\sigma$ is the electric conductivity and $\epsilon$ is the electric permittivity. If the frequency $\omega$ is sufficiently
small, then one can approximate $\gamma$ by a real-valued function.
Previous approaches for EIT with isotropic admittivity, which were in active use for the last three decades, can be divided into two groups: closed-form solution methods and sampling methods; we refer to the reviews \cite{borcea} and \cite{Pott} for further details, as well as to the latest articles \cite{Aless1,Aless2,afr,BIY15,bukh,BKL,GL,GLSS,HT,knud,LL,lnv,lbcond,ltv,T}. These approaches do not fully coincide:
for example, the Linear Sampling Method (LSM) allows the reconstruction of the parameter's jump location, but it assumes the medium is known outside of the jump. An example of a weak point of the current closed-form methods, i.e. Complex Geometric Optics (CGO) methods, is that they do not allow for the presence of impenetrable obstacles anywhere inside the medium.
Another problem which appears in the case of complex conductivities is the existence of exceptional points, i.e. non-trivial scattering solutions to the Lippmann-Schwinger equation - roughly speaking, points where the solution for a given spectral parameter is not unique. Most methods for the inverse conductivity problem require the condition that such exceptional points cannot occur (see, for example, \cite{nachman}). First ideas on how to handle the case of exceptional points appear in \cite{Novikov} and further in \cite{lnv}, \cite{lbcond}.
For several years E. Lakshtanov and B. Vainberg worked in parallel with Armin Lechleiter on topics
such as Interior Transmission Eigenvalues, inside-outside duality and factorization methods. In autumn of 2016 Armin wrote to them: "...I'm actually not sure whether it pays off to develop these sampling methods further and further but I would be more interested in having methods for background media. We might try to continue our work in this direction, or towards Maxwell's equations, if this makes sense to you...". Although the factorization methods are quite stable with respect to measurement errors, they fail if the outside medium is known only approximately rather than exactly. Armin Lechleiter obtained several results
in this direction, e.g. \cite{BKL}, \cite{GL}. In turn, E. Lakshtanov, R. Novikov and B. Vainberg obtained some closed-form reconstruction/uniqueness results \cite{lnv},\cite{lbcond}. Furthermore, E. Lakshtanov and B. Vainberg had the feeling that LSM and CGO methods can be applied simultaneously to
reconstruct the shape of the jump even if the potential is unknown. This led to the new ideas being presented in the current paper, first among them the concept of admissible points. It is our belief that this concept will be an important step towards handling the case of non-zero frequencies.
The author would like to point out that the main ideas in this paper are from E. Lakshtanov and B. Vainberg, who, due to life circumstances, were unable to pursue this line of research. The author is deeply indebted to them for allowing him to work out the details.
As the methods for 2D and 3D are quite different, even at the level of the Faddeev Green function analysis, we focus our analysis on the 2D case only. However, future plans are to extend the machinery we present here in order to obtain similar results in the 3D case.
Moreover, in this paper we treat the isotropic case for complex conductivities with a jump. Recent results on the anisotropic case with real piecewise constant conductivities can be found in \cite{Aless3,Aless4}. One further extension of our approach could be to consider the anisotropic case with complex conductivities based on the previously mentioned works.
We suppose that the conductivity function $\gamma$ is sufficiently smooth (to be specified later) except on a closed contour $\Gamma \Subset \mathcal O$. Let $\gamma^+$ be the trace of $\gamma$ on the exterior side of $\Gamma$ and $\gamma^-$ be the trace on the interior side. By $\mathcal{D}$ we denote the interior part of $\Gamma$.
Under our assumption on $\gamma$, we look for solutions of problem (\ref{set27A}) which are sufficiently smooth in each domain, $u^- \in \mathcal{D}$ and $u^+ \in \mathcal O \backslash \overline{\mathcal D},$ and satisfy the following condition on $\Gamma$
\begin{equation}\label{TransCond}
\left \{
\begin{array}{l}
u^-(z)-u^+(z)=0, \quad \quad \quad \\
\gamma^- \frac{\partial u^-}{\partial \nu}(z) - \gamma^+\frac{\partial u^+}{\partial \nu}(z) =0, \quad \quad
\end{array}
z \in \Gamma.
\right .
\end{equation}
\vspace{0.2cm}
The purpose of this approach is to establish a new method to overcome the limitation of Lipschitz conductivities in the current literature. In particular, we have in mind the handling of cases where separation of tissues is an important issue, like in detection of nodules through medical imaging.
The reconstruction procedure for $\gamma$ starts by transforming the conductivity equation, similarly to \cite{bu} and \cite{fr}. Let $u$ be a solution of (\ref{set27A}), but only on the domain $\,\mathcal{O}\setminus\Gamma\,$, satisfying the transmission condition (\ref{TransCond}) above.
Below $z$ denotes a point in the complex plane and $\mathcal{O}$ is a domain in $\mathbb{C}$. Let $\partial=\frac{1}{2}\left(\frac{\partial}{\partial x}-i\frac{\partial}{\partial y}\right)$. Then the pair
\begin{align}\label{0707B}
\phi=(\phi_1, \phi_2)=\gamma^{1/2}(\partial u, \bar{\partial} u)^t=\gamma^{1/2}\left(
\!\!\!\begin{array}{c}
\partial u \\
\bar{\partial} u\\
\end{array} \!\!\!
\right)
\end{align}
satisfies the Dirac equation
\begin{equation}\label{firbc}
\left ( \begin{array}{cc} \bar{\partial} & 0 \\ 0 & \partial \end{array} \right ) \phi(z) = q(z) \phi(z), \quad z=x+iy \in \mathbb{C}\setminus\Gamma,
\end{equation}
with the potential $q$ defined also in $\mathbb{C}\setminus\Gamma$ by
\begin{eqnarray}\label{char1bc}
q(z)=\left ( \begin{array}{cc}0 &q_{12}(z) \\
q_{21}(z) & 0\end{array} \right ), \quad q_{12}=-\frac{1}{2}\partial \log \gamma, \quad q_{21}=-\frac{1}{2}\bar{\partial }\log \gamma,
\end{eqnarray}
where we extend, as usual, $\gamma$ to the outside of $\mathcal{O}$ by setting $\gamma=1$. On $\Gamma$, the pair $\phi$ satisfies a transmission condition for the Dirac equation which is derived from the previous one and we show it below.
Thus, it is enough to solve the inverse Dirac scattering problem instead of the ICP. If it is solvable and $q$ can be found then the conductivity $\gamma$ is immediately obtained from (\ref{char1bc}), up to a constant.
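The reduction above can be checked symbolically: for arbitrary smooth $\gamma$ and $u$ one finds $\bar{\partial}\phi_1-q_{12}\phi_2=\tfrac{1}{4}\gamma^{-1/2}\,\mbox{div}(\gamma\nabla u)$, so the first row of (\ref{firbc}) holds exactly when (\ref{set27A}) does (the second row is analogous). A minimal sympy verification of this identity (illustrative only):

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
g = sp.Function('g')(x, y)   # the conductivity gamma (complex-valued)
u = sp.Function('u')(x, y)

d  = lambda f: (sp.diff(f, x) - sp.I * sp.diff(f, y)) / 2   # partial
db = lambda f: (sp.diff(f, x) + sp.I * sp.diff(f, y)) / 2   # partial-bar

phi1 = sp.sqrt(g) * d(u)
phi2 = sp.sqrt(g) * db(u)
q12 = -sp.Rational(1, 2) * d(sp.log(g))

lhs = db(phi1) - q12 * phi2
div = sp.diff(g * sp.diff(u, x), x) + sp.diff(g * sp.diff(u, y), y)

# first row of the Dirac system equals the conductivity equation
# up to the factor gamma^{-1/2}/4
assert sp.simplify(lhs - div / (4 * sp.sqrt(g))) == 0
```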
In order to complete the reduction of the ICP to the inverse Dirac problem, one needs only to obtain the scattering data for the Dirac equation via the set of data $\left (u\!\left |_{\partial \mathcal O}\right .,\frac{\partial u}{\partial\nu}\!\left |_{\partial \mathcal O}\right. \right).$
In fact, the scattering data for the Dirac equation can be obtained by simple integration of its Dirichlet data against the conjugate of an entire function $U$, which is related to the new set of complex geometric optics asymptotics, i.e.
for a spectral parameter $\lambda$ and a certain type of point $w$ to be introduced below, the scattering data are defined by
\begin{align*}
h(\lambda,w)= \int_{\partial \mathcal O} \overline{U(z,w,\lambda)} \,e^{-\overline{\lambda (z-w)^2}/4}\phi_2(z,w,\lambda)d\bar{z}.
\end{align*}
In this paper we give a reconstruction formula of the potential $q$ in the so-called admissible points (see Theorem~\ref{210918T}). We announce the result here in terms of a uniqueness theorem first since it does not require the introduction of the formal definition of the scattering data.
We assume that $\log \gamma$ is well defined in the whole complex plane, by assuming that the real part of the conductivity has a positive lower bound.
We have to remark that, in fact, we present only a partial result, given that we cannot yet reconstruct, or show uniqueness of, the potential $q$ in the whole of $\mathbb{C}\setminus\Gamma$. Our proof relies on a new concept, based on a specific set of points, which we now define:
\begin{defn} \label{PontosAdmissiveis}
We say that a point $w\in\mathcal{O}$ is an {\it admissible point} if there is a number $\lambda_{\mathcal O} \in \mathbb{C}$ such that
\begin{align*}
A&:=\sup_{z \in \overline{\mathcal O}}\, \textnormal{Re}[ \lambda_{\mathcal O}(z-w)^2]< 1/2\\
B&:= \sup_{z \in \overline{\mathcal D}} \, \textnormal{Re}[ \lambda_{\mathcal O}(z-w)^2] < -1/2.
\end{align*}
Moreover, if $w$ is an admissible point and the constants $A$ and $B$ fulfil
$A=1/2-\epsilon_1, \, B=-1/2-\epsilon_2,$ with $\epsilon_2-\epsilon_1>0$, we further say that $w$ is a proper admissible point.
\end{defn}
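Definition \ref{PontosAdmissiveis} can be probed numerically by approximating the suprema with maxima over sample points of $\overline{\mathcal O}$ and $\overline{\mathcal D}$. The sketch below is purely illustrative: the function name, the sample points and the choice of $\lambda$ are hypothetical, not taken from the text:

```python
import numpy as np

def is_admissible(w, lam, pts_O, pts_D, proper=False):
    """Sampled check of the admissibility conditions at w for a candidate
    lambda: the suprema over the closures of O and D are approximated by
    maxima over the supplied sample points (arrays of complex numbers)."""
    A = np.max(np.real(lam * (np.asarray(pts_O) - w) ** 2))
    B = np.max(np.real(lam * (np.asarray(pts_D) - w) ** 2))
    if not proper:
        return A < 0.5 and B < -0.5
    # proper case: A = 1/2 - eps1, B = -1/2 - eps2 with eps2 - eps1 > 0,
    # which is equivalent to additionally requiring A - B > 1
    return A < 0.5 and B < -0.5 and A - B > 1.0

# Synthetic sample points: D lies in the direction where Re[lam*(z-w)^2]
# is strongly negative (hypothetical values, for illustration only)
pts_D = np.array([1.2j])
pts_O = np.array([0.1 + 0.0j, 1.2j])
print(is_admissible(0.0, 1.0, pts_O, pts_D, proper=True))  # True
```

Such a sampled check only certifies admissibility up to the resolution of the sample; a rigorous verification requires the suprema over the full closures.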
The main theorem of this paper will be obtained using this novel idea. Even though the proof proceeds by reconstruction, we state here the uniqueness theorem without introducing the scattering data first, given that these will be related to the new CGO incident waves.
\begin{theorem}
Let $\mathcal{O}$ be a bounded Lipschitz domain in the plane, and let $\gamma\in W^{2,\infty}(\mathcal{D})\cap W^{2,\infty}(\mathcal{O} \setminus\overline{\mathcal{D}})$ be such that $\textnormal{Re}(\gamma)\geq c>0$. If $\sqrt{\frac{\gamma^-}{\gamma^+}}-1$ is small enough in $L^{\infty}(\Gamma)$, then the Dirichlet-to-Neumann map $\Lambda_\gamma$ determines the conductivity $\gamma$ uniquely at any proper admissible point.
\end{theorem}
Hereby, we want to point out that the fact that the Dirichlet-to-Neumann map determines the scattering data uniquely can be proven similarly to~\cite{lbcond} (see Section \ref{ScDN}).
\\
Now, Theorem~\ref{210918T} will even provide a reconstruction formula for the potential $q$ at so-called proper admissible points. This is an improvement over previously existing methods insofar as a convenient enlargement of the set of CGO incident waves allows one to highlight the desirable areas around such points. Thus, this article provides a 2D reconstruction result for complex conductivities which are discontinuous on a contour; although apparently a rather weak result, it cannot possibly be obtained by any previous technique, at least that we know of, and represents a first step in this direction. In fact, the main goal of the article is to show the viability of the presented approach. Accordingly, all our efforts go into presenting the main tools for this approach, leaving other questions, such as stability of the determination as in \cite{Aless}, many contours and the geometry of admissible points, to future work.
We also want to point out that our definition of admissible point is not sharp, i.e. it can be made sharper by considering higher regularity of the conductivity outside the curves of discontinuities $\Gamma$.
Several technical problems need to be solved and are presented now in order to facilitate the subsequent study. These include: the right choice of the functional space, a set of admissible points (essential to the reconstruction), and the enrichment of the set of CGO incident waves (i.e., we use solutions like $|\lambda|^{f(z)}$ which highlight desirable areas). The latter solutions are unbounded even after the CGO-Faddeev normalization, and we are required to obtain two-dimensional Laplace transform analogues of the Hausdorff-Young inequality to derive our reconstruction formula. \\
The paper is organized as follows: In Section 2 we recall necessary facts on the transmission condition and the construction of the Lippmann-Schwinger equation for CGO-Faddeev solutions in our case. In Section 3 we introduce the necessary function spaces as well as related lemmas. We present the novel concept of \textit{admissible points} (see Definition \ref{PontosAdmissiveis}), based on a convenient enrichment of the set of CGO incident waves, and we study the scattering data and the reconstruction of the potential at such points. We conclude this section with two subsections containing further necessary results and the proof of our main theorem. For the sake of readability we have placed some additional results, together with their proofs, in an appendix.
\section{Main construction}
\subsection{Transmission condition}
We denote by $n(z)=\left(n_x(z),n_y(z)\right)$ the unit outer normal vector on $\Gamma$ and by $\nu(z)=n_x(z)+in_y(z)$ its representation in the complex plane.
Throughout the paper we consider two orientations for the contour $\Gamma$: positively oriented $\Gamma^+$ (the interior $\mathcal{D}$ of the curve lies to the left) and negatively oriented $\Gamma^-$ (the interior lies to the right).
\begin{lemma}
The transmission condition (\ref{TransCond}) implies the following condition for the Dirac equation on $\Gamma$:
\begin{equation}\label{7JunB}
\left(
\begin{array}{c}
\phi_1^+ - \phi_1^- \\
\phi_2^+ - \phi_2^- \\
\end{array}
\right)
=
\frac{1}{2}\left(
\begin{array}{cc}
\alpha+\frac{1}{\alpha}-2 & (\alpha-\frac{1}{\alpha})\bar{\nu}^2 \\
(\alpha-\frac{1}{\alpha})\nu^2 & \alpha+\frac{1}{\alpha}-2 \\
\end{array}
\right)
\left(
\begin{array}{c}
\phi_1^- \\
\phi_2^- \\
\end{array}
\right)
\end{equation}
where $\alpha=\sqrt{\frac{\gamma^-}{\gamma^+}}$.
\end{lemma}
\begin{proof}
Let $l(z)=(-n_y(z),n_x(z))$ be a unit tangential vector to $\Gamma$.
From the first equation of (\ref{TransCond}) it follows for the tangential derivative that $\frac{\partial}{\partial l} (u^+(z)-u^-(z))=0$ and, therefore,
$$
\sqrt{\gamma^+} u_l^+ - \sqrt{\gamma^-} u_l^- =u_l^- \sqrt{\gamma^-} (\frac{1}{\alpha}-1),
$$
where $u_l=\frac{\partial u}{\partial l}$. Moreover, throughout this proof and to simplify the computations we denote the normal derivative by $u_n=\frac{\partial u}{\partial n}$. From the second equation of (\ref{TransCond}) we get $u_n^+ =\frac{\gamma^-}{\gamma^+} u_n^- $, where $u_n^{\pm}$ denotes the normal derivative of $u^{\pm}$, so that
$$
\sqrt{\gamma^+} u_n^+ - \sqrt{\gamma^-} u_n^-=\sqrt{\gamma^-} u_n^- (\alpha-1).
$$
Note that we now have
\begin{eqnarray}
{\partial}u = \frac{1}{2} (\bar{\nu} u_n - i\bar{\nu} u_l), \label{8} \\
\bar{\partial}u = \frac{1}{2} (\nu u_n + i\nu u_l), \label{9}
\end{eqnarray}
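These identities can be checked directly: writing $\partial=\frac{1}{2}(\partial_x-i\partial_y)$, $u_n=n_xu_x+n_yu_y$, $u_l=-n_yu_x+n_xu_y$, and using $\nu\bar{\nu}=1$, we get
$$
\bar{\nu}(u_n-iu_l)=(n_x-in_y)\big[(n_x+in_y)u_x+(n_y-in_x)u_y\big]=u_x-iu_y=2\partial u,
$$
and (\ref{9}) follows by complex conjugation.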
\begin{align*}
\phi_1^+-\phi_1^-= \sqrt{\gamma^+} {\partial}u^+ - \sqrt{\gamma^-} {\partial}u^-=
\left(\frac{1}{\alpha}-1\right) u_l^- \sqrt{\gamma^-} \frac{1}{2}(-i\bar{\nu}) + ({\alpha}-1) u_n^- \sqrt{\gamma^-} \frac{1}{2}\bar{\nu}, \\
\phi_2^+-\phi_2^-= \sqrt{\gamma^+} \bar{\partial}u^+ - \sqrt{\gamma^-} \bar{\partial}u^-=
\left(\frac{1}{\alpha}-1\right) u_l^- \sqrt{\gamma^-} \frac{1}{2}(i\nu) + \left({\alpha}-1\right) u_n^- \sqrt{\gamma^-} \frac{1}{2}\nu.
\end{align*}
These relations take the matrix form
$$
\left(
\begin{array}{c}
\phi_1^+-\phi_1^-\\
\phi_2^+-\phi_2^-\\
\end{array}
\right) = \frac{1}{2}\left(
\begin{array}{cc}
(\alpha-1)\bar{\nu} & (\frac{1}{\alpha}-1) (-i\bar{\nu}) \\
(\alpha-1){\nu} & (\frac{1}{\alpha}-1)(i{\nu}) \\
\end{array}
\right)\left(
\begin{array}{c}
u_n^- \sqrt{\gamma^-} \\
u_l^- \sqrt{\gamma^-} \\
\end{array}
\right).
$$
Using (\ref{8}) and (\ref{9}), together with the definition of $\phi,$ we obtain the relation
$$
\left(
\begin{array}{c}
u_n^- \sqrt{\gamma^-} \\
u_l^- \sqrt{\gamma^-} \\
\end{array}
\right)=
\left(
\begin{array}{cc}
{\nu} & \bar{\nu} \\
i\nu & -i\bar{\nu} \\
\end{array}
\right)\left(
\begin{array}{c}
\phi_1^- \\
\phi_2^-\\
\end{array}
\right).
$$
These two displayed equations allow us to complete the proof of the lemma.
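Indeed, using $\nu\bar{\nu}=1$, multiplying the two matrices above gives
$$
\frac{1}{2}\left(
\begin{array}{cc}
(\alpha-1)\bar{\nu} & (\frac{1}{\alpha}-1)(-i\bar{\nu}) \\
(\alpha-1)\nu & (\frac{1}{\alpha}-1)(i\nu) \\
\end{array}
\right)\left(
\begin{array}{cc}
\nu & \bar{\nu} \\
i\nu & -i\bar{\nu} \\
\end{array}
\right)
=\frac{1}{2}\left(
\begin{array}{cc}
\alpha+\frac{1}{\alpha}-2 & (\alpha-\frac{1}{\alpha})\bar{\nu}^2 \\
(\alpha-\frac{1}{\alpha})\nu^2 & \alpha+\frac{1}{\alpha}-2 \\
\end{array}
\right),
$$
which is precisely the matrix in (\ref{7JunB}).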
\end{proof}
\subsection{The Lippmann-Schwinger equation for CGO-Faddeev solutions}
Consider the vector $\phi$ which satisfies (\ref{firbc}) and the following asymptotics:
\begin{align}\label{8JunA}
\begin{array}{l}
\phi_1(z,w,\lambda)=e^{\lambda (z-w)^2/4}U(z,w,\lambda)+e^{\lambda (z-w)^2/4}o(1), \quad\\ \phi_2(z,w,\lambda)=e^{\overline{\lambda (z-w)^2}/4}o(1),
\end{array} \quad z \rightarrow \infty,
\end{align}
where $U(z,w,\lambda)$ is an entire function with respect to the parameter $z$.
We denote
\begin{equation}\label{200918A}
\mu_1(z,w,\lambda)=\phi_1(z,w,\lambda) e^{-\lambda (z-w)^2/4}, \quad \mu_2(z,w,\lambda)=\phi_2(z,w,\lambda)e^{-\overline{\lambda (z-w)^2}/4}.
\end{equation}
Further, we introduce some matrix functions that will lead to an integral equation for $\mu$.
Due to (\ref{firbc}), these functions fulfill the following equation on $\mathbb{C}\setminus\Gamma$:
\begin{equation}\label{muirbc}
\left ( \begin{array}{cc} \bar{\partial}_z & 0 \\ 0 & \partial_z \end{array} \right ) \mu =\left( \begin{array}{cc}
0 & q_{12}(z)e^{-i\,\text{Im}[\lambda(z-w)^2/2]} \\
q_{21}(z)e^{i\,\text{Im}[\lambda(z-w)^2/2]} & 0 \\
\end{array}\right) \mu =: \tilde{q}\mu.
\end{equation}
On the contour $\Gamma$ they fulfill a transmission condition similar to (\ref{7JunB}), with the right-hand side replaced by:
$$
\widetilde{A}_\lambda\mu=\frac{1}{2}\left(
\begin{array}{cc}
\alpha+\frac{1}{\alpha}-2 & (\alpha-\frac{1}{\alpha})\bar{\nu}^2 e^{-i\,\textnormal{Im}[\lambda (z-w)^2/2]}\\
(\alpha-\frac{1}{\alpha})\nu^2e^{i\,\textnormal{Im}[\lambda (z-w)^2/2]} & \alpha+\frac{1}{\alpha}-2 \\
\end{array}
\right)\,
\left(\begin{array}{c}
\mu_1^- \\
\mu_2^-
\end{array}\right),
$$
where $\mu_1^-$ and $\mu_2^-$ are the trace values of $\mu$ taken from the interior of $\Gamma$.
Through this we obtain an integral equation for $\mu$:
\begin{proposition}
Let $\mu$ be a solution of (\ref{muirbc}) given as above through a function $\phi$ which fulfills (\ref{firbc}) and the asymptotics (\ref{8JunA}). Then $\mu$ is a solution of the following integral equation:
\begin{equation}\label{0807A}
(I+P\widetilde{A}_\lambda -D\widetilde{Q}_\lambda )\mu= \left(
\begin{array}{c}
U \\
0 \\
\end{array}
\right),
\end{equation}
where $D=\left(
\begin{array}{cc}
\bar{\partial}^{-1} & 0 \\
0 & \partial^{-1} \\
\end{array}
\right)$ with $ \bar{\partial}^{-1}f(z)=\frac{1}{2\pi i}\int_{\mathbb{C}} \frac{f(\varsigma)}{\varsigma-z}\,d\varsigma\wedge d\bar{\varsigma}
$
and $\partial^{-1}$ is given through the complex conjugate of the kernel $(\varsigma-z)^{-1}$. The matrix $\widetilde{Q}_\lambda$ has the following form
$$
\widetilde{Q}_\lambda=\left(
\begin{array}{cc}
0 & Q_{12} e^{-i\,\textnormal{Im}[\lambda (z-w)^2/2]}\\
Q_{21} e^{i\,\textnormal{Im}[\lambda (z-w)^2/2]} & 0 \\
\end{array}
\right),
$$ where $Q_{12},\,Q_{21}$ are $L^{\infty}$ extensions of $q_{12},\,q_{21}$ to $\Gamma$.
Moreover, $P$ is a projector
\begin{equation}\label{14a}
P=\left(
\begin{array}{cc}
P_+ & 0 \\
0& P_- \\
\end{array}
\right),
\end{equation} where $P_+,\,P_-$ are the Cauchy projector and its complex adjoint, respectively:
$$
P_+f(w)=\frac{1}{2\pi i} \int_{\Gamma^+} \frac{ f(z)}{z-w}\,dz, \quad P_-f(w)=\frac{1}{2\pi i} \int_{\Gamma^+} \frac{ f(z)}{\bar{z}-\bar{w}}\,d\bar{z}, \quad w \in \mathbb C.
$$
Here, $f$ denotes a function defined on the contour $\Gamma$.
\end{proposition}
\begin{proof} We use the same approach as in \cite{lbcond}. The following Cauchy-Green formulas hold for each $f\in C^1(\overline{\Omega})$ and an arbitrary bounded domain $\Omega$ with smooth boundary:
\begin{eqnarray}\label{2DecB}
f(z)= \frac{1}{2\pi i} \int_{ \Omega} \frac{\partial f (\varsigma)}{\partial \bar{\varsigma}} \frac{1}{\varsigma-z} \,d\varsigma\wedge d\bar{\varsigma} + \frac{1}{2\pi i} \int_{\Bound} \frac{f(\varsigma)}{\varsigma - z}d\varsigma, \quad z \in \Omega, \\ \label{2DecC}
0= \frac{1}{2\pi i} \int_{\Omega} \frac{\partial f (\varsigma)}{\partial \bar{\varsigma}} \frac{1}{\varsigma-z}\,d\varsigma\wedge d\bar{\varsigma} + \frac{1}{2\pi i} \int_{\Bound} \frac{f(\varsigma)}{\varsigma - z}d\varsigma,\quad z \not \in \overline{\Omega}.
\end{eqnarray}
Denote by $D_R$ the disk of radius $R$ centered at $z$, and take $D_R^-=D_R\setminus \overline{\mathcal{D}} $. We recall that $\mathcal{D}$ is the interior part of $\Gamma$. Assume that $z\in \mathcal{D}$, take $f=\mu_1$ in both formulas, and set $\Omega=\mathcal{D}$ in (\ref{2DecB}) and $\Omega=D_R^-$ in (\ref{2DecC}). We add the left- and right-hand sides of formulas (\ref{2DecB}) and (\ref{2DecC}). Taking the transmission condition for $\mu$ into account, we obtain for fixed $w$ that
\begin{equation}\label{11NovA}
\mu_1(z,\lambda) = \frac{1}{2\pi i} \int_{D_R\setminus\Gamma} (\tilde{q}\mu)_1(\varsigma,\lambda) \frac{1}{\varsigma-z}\,d\varsigma\wedge d\bar{\varsigma} + \frac{1}{2\pi i} \int_{\Gamma^-} \frac{ [\mu_1](\varsigma)}{\varsigma - z}d\varsigma+ \frac{1}{2\pi i} \int_{\partial D_R} \frac{\mu_1(\varsigma)}{\varsigma - z}d\varsigma,
\end{equation}
where $[\mu_1]=\mu_1^- -\mu_1^+$.
Noticing that $\mu_1$ converges to $U$ at infinity and that $U$ is entire, we see, taking the limit $R\to \infty$, that the last term equals $U(z)$. In this way, by taking the limit and reordering, we obtain:
\begin{equation}
\mu_1(z,\lambda) - \frac{1}{2\pi i} \int_{\mathbb{C}} (\widetilde{Q}_\lambda\mu)_1(\varsigma,\lambda)\frac{1}{\varsigma-z} \,d\varsigma\wedge d\overline{\varsigma} +\frac{1}{2\pi i}\int_{\Gamma^+} \frac{(\widetilde{A}_{\lambda}\mu)_1(\varsigma,\lambda)}{\varsigma-z}\,d\varsigma = U(z).
\end{equation}
Combining this equation with analogous computations for $z \in D_R^{-}$, and treating the case of $\mu_2$ similarly (by means of the adjoint Cauchy-Green formulas), we obtain the desired integral equation.\end{proof}
\section{Technical details}
\subsection{The choice of the function space}
Let $1<p<\infty$, $R>0$ and $f = \left(\begin{array}{cc}
f_1 \\ f_2
\end{array}\right)$ be a vector function.
To define our spaces we keep in mind the notation introduced in \cite{ltv}. Denote by $L^{\infty}_z(B)$ the space of bounded functions of $z\in \mathbb{C}$ with values in a Banach space $B$. Thus, picking $B=L^p_{\lambda}(|\lambda|>R)$, we introduce the first space
$$\mathcal H_1^p:=\left\{f: f_1,\,f_2\;\, \textnormal{continuous functions in}\, L^{\infty}_z\left(L^p_{\lambda}(|\lambda|>R)\right) \cap L^{\infty}_z\left(L^{\infty}_{\lambda}(|\lambda|>R)\right)\right\}.$$
To simplify the notation ahead, we introduce the following function space:
$$S=\left\{g:\Gamma\times\{\lambda\in\mathbb{C}:|\lambda|>R\}\rightarrow \mathbb{C}^2 \;\; \text{s.t.}\; \sum_{i\in\{1,2\}} \int_{|\lambda|>R}
\int_{\Gamma} |g_i(z,\lambda)|^p\,d|z| d\sigma_\lambda < \infty\right\},$$
where $d\sigma_{\lambda}$ is the Lebesgue measure in $\mathbb{R}^2$ (similarly we define $d\sigma_{z}$).
Following the idea of Hardy spaces, and in order to obtain desirable properties on the contour $\Gamma$, we define the second space through the projector $P$ in (\ref{14a}) by:
\begin{align*}
\mathcal H_2^p:= \left\{F\in \mathcal{R}(P): \sum_{i\in\{1,2\}} \int_{|\lambda|>R}\int_{\Gamma} \left|F^-_i(z,\lambda)\right|^p\,d|z|d\sigma_{\lambda}<\infty\right\}
\end{align*}
where by $\mathcal{R}(P)$ we mean the range of the matrix projector $P$ with domain $S$. Hence, for $F \in \mathcal{H}^p_2$ there exists a function $f\in S$ such that $F=Pf$, and on $\Gamma$ it fulfills $F^-=f$. Moreover, this allows us to consider this space with the norm
$$\|F\|_{\mathcal{H}^p_2}^p:=\sum_{i\in\{1,2\}} \int_{|\lambda|>R} \int_{\Gamma} |F^-_i(z,\lambda)|^p\,d|z| d\sigma_\lambda= \sum_{i\in\{1,2\}} \int_{|\lambda|>R} \int_{\Gamma} |f_i(z,\lambda)|^p\,d|z| d\sigma_\lambda.$$
Finally, the space we are going to work with is given as $\mathcal H^p = \mathcal H^p_1 + \mathcal H^p_2$ endowed with the norm
\begin{equation}\label{2107B}
\|t\|_{\mathcal H^p}=\inf_{\substack{u+v=t \\u \in \mathcal H^p_1, v \in \mathcal H^p_2}} \max (\|u\|_{\mathcal H^p_1}, \|v\|_{\mathcal H^p_2 }).
\end{equation}
Let us recall that the intersection and union of two Banach spaces are well defined if all terms can be continuously embedded into a common locally convex space. In our situation this common locally convex space will be the space endowed with the semi-norms
$$
\int_{|\lambda|>R} \int_{ \mathcal O} \frac{1}{|\lambda|^2}|f(z,\lambda)|\,d\sigma_zd\sigma_\lambda.
$$
If $f \in \mathcal H_1^p$ the embedding is evident. For $f \in \mathcal H_2^p$ we have
$$
\|Pf\|_{L^p(\mathcal O)} \leq C \|f\|_{L^p(\Gamma)},
$$
so that $\left[\|Pf\|_{L^p(\mathcal O)} \right]^p \leq C^p\left[\|f\|_{L^p(\Gamma)}\right]^p$
and
$$
\int \left ( \int_{\mathcal O} |Pf(z)|^p \,d\sigma_z \right ) \,d\sigma_\lambda \leq C^p\int [\|f\|_{L^p(\Gamma)}]^p\, d\sigma_\lambda = C^p\int_{|\lambda|>R} \int_{\Gamma} |f(z,\lambda)|^p \,d|z|d\sigma_\lambda.
$$
The boundedness of each semi-norm follows from the continuity of the embedding of $L^p(\mathcal O)$ into $L^1(\mathcal O)$.
\begin{lemma}\label{lemma2107D}
The operators $\widehat{P}_\pm: f \rightarrow (Pf)|_{\Gamma^\pm}$ are bounded in the space with norm
$$\left[\int_{|\lambda|>R} \int_{\Gamma} |f(z,\lambda)|^p\,d|z| d\sigma_\lambda\right]^{1/p}.$$
\end{lemma}
\begin{proof}
Throughout the proof the sign $\pm$ in the projectors will be omitted. From the continuity of the Cauchy projectors in $L^p(\Gamma)$ it follows that
$$
\|\widehat{P} f \|_{L^p(\Gamma)} \leq C \|f \|_{L^p(\Gamma)},
$$
and therefore
$$
\left (\|\widehat{P} f \|_{L^p(\Gamma)} \right )^p\leq C^p \left ( \|f \|_{L^p(\Gamma)} \right )^p.
$$
Finally
\begin{small}
$$\|P\widehat{P}f\|_{\mathcal H^p_2}^p= \int_{|\lambda|>R} \left (\|\widehat{P} f \|_{L^p(\Gamma)} \right )^p \,d\sigma_\lambda \leq C^p \int_{|\lambda|>R} \left( \|f \|_{L^p(\Gamma)} \right)^p \,d\sigma_\lambda = C^p \int_{|\lambda|>R} \int_{\Gamma} |f(z,\lambda)|^p \,d|z|d\sigma_\lambda .$$
\end{small}
\end{proof}
\begin{lemma}\label{lemma2107C}
Let $u \in \mathcal H^p_1$. Then $P(u|_{\Gamma}) \in \mathcal H^p_2$.
\end{lemma}
\begin{proof}
From the definition of $\mathcal H^p_1$, combined with the fact that $u$ is a continuous function, we get
$$
\|u\|_{L^p_\lambda} \in L^\infty_z(\Gamma).
$$
Since $\Gamma$ is a bounded set, the $L^p$ norm does not exceed (up to a constant) the $L^\infty$ norm and, therefore
$$
\|\|u\|_{L^p_\lambda}\|_{L^p_z(\Gamma)} \leq C \|u\|_{\mathcal H^p_1}.
$$
Now we just note that the left-hand side of the above inequality is the $\mathcal H^p_2$ norm.
\end{proof}
\subsection{Analysis of the Lippmann-Schwinger equation}
Multiplying equation (\ref{0807A}) by $I+D\widetilde{Q}_\lambda $ we get
\begin{equation}\label{0907H}
(I+M)\mu= (I+D\widetilde{Q}_\lambda) \left(
\begin{array}{c}
U \\
0 \\
\end{array}
\right)
\end{equation}
where
\begin{equation}\label{0907I}
M=P\widetilde{A}_\lambda +D\widetilde{Q}_\lambda P\widetilde{A}_\lambda -D\widetilde{Q}_\lambda D\widetilde{Q}_\lambda.
\end{equation}
\begin{lemma}\label{lemma2107A}
Let $\,\textnormal{Re}(\gamma)\geq c >0$ and $\widetilde{A}_{\lambda}\in L^{\infty}(\Gamma)$. Then the operators $D\widetilde{Q}_\lambda P\widetilde{A}_\lambda ,\,D\widetilde{Q}_\lambda D\widetilde{Q}_\lambda$ are bounded in $\mathcal H^p,\, p>1$.
Moreover, if $R>0$ is large enough they are contractions, and if $\alpha-1$ is small enough in $L^{\infty}(\Gamma)$ then $P\widetilde{A}_{\lambda}$ is a contraction in $\mathcal{H}^p, \,p>1$.
\end{lemma}
\begin{proof} In order to estimate $\|(D\widetilde{Q}_\lambda P\widetilde{A}_\lambda)t\|_{\mathcal H^p}$ and $\|(D\widetilde{Q}_\lambda D\widetilde{Q}_\lambda )t\|_{\mathcal H^p}$ (recall the norm (\ref{2107B})), we consider the representation $t=u+v$ where the infimum is (almost) achieved. It is easy to see that the desired estimate follows from the fact that these operators are contractions in each of the spaces $\mathcal H_1^p$ and $\mathcal H_2^p$. This fact can be shown as follows.
In Lemma 2.1 of \cite{ltv} it was proved that the operator $D\widetilde{Q}_\lambda D\widetilde{Q}_\lambda$ is bounded in $\mathcal H^p_1$. The proof that it is also a contraction in $\mathcal{H}^p_2$, as well as the statement for $D\widetilde{Q}_\lambda P\widetilde{A}_\lambda$, follows in a similar manner.
Here we show the case of $D\widetilde{Q}_\lambda P \widetilde{A}_{\lambda}$. By definition we have:
$$
D\widetilde{Q}_\lambda P\widetilde{A}_{\lambda} u(z)=\left\{\begin{array}{c}
\int_{\Gamma} [\widetilde{A}_{\lambda}u]_2(z_2)G_1(z,z_2,\lambda,w)\,dz_2 \\
\int_{\Gamma} [\widetilde{A}_{\lambda}u]_1(z_2)G_2(z,z_2,\lambda,w)\,dz_2
\end{array}\right.$$
where
\begin{equation}\label{AAA}
G(z,z_2,\lambda,w)=\left(\begin{array}{cc}
G_1\\
G_2
\end{array}\right) =\left\{\begin{array}{cc}
(2\pi i)^{-2}\int_{\mathcal O}
\frac{e^{-i\,\textnormal{Im}[\lambda(z_1-w)^2]/2}}{{z_1}-{z}} \frac{{Q}_{12}(z_1)}{\bar{z}_2-\bar{z}_1} \, d{\sigma_{z_1}}\\
(2\pi i)^{-2}\int_{\mathcal O}
\frac{e^{i\,\textnormal{Im}[\lambda(z_1-w)^2]/2}}{{\bar{z}_1}-\bar{z}} \frac{{Q}_{21}(z_1)}{{z}_2-{z}_1} \, d{\sigma_{z_1}}
\end{array}\right. .
\end{equation}
Following an estimation similar to that in the proof of Lemma 2.1 of \cite{ltv}, we obtain by the stationary phase approximation:
$$
\sup_{\substack{z\\|\lambda|>R}} \|G_i(z,\cdot,\lambda,w)\|_{L^q_{z_2}(\Gamma)} \leq \frac{1}{R}, \quad 1/p+1/q=1, \quad i=1,2.
$$
Thus
$$
|D\widetilde{Q}_\lambda P \widetilde{A}_{\lambda}u|(z)\leq \|G(z,\cdot,\lambda,w)\|_{L^q_{z_2}(\Gamma)} \|\widetilde{A}_{\lambda}u\|_{L^p_{z_2}(\Gamma)}.
$$
Then we have
$$
\|D\widetilde{Q}_\lambda P\widetilde{A}_{\lambda} u(z)\|_{L^p_\lambda} \leq \|G(z,\cdot,\lambda,w)\|_{L^q_{z_2}(\Gamma)} \|\widetilde{A}_{\lambda}\|_{L^\infty(\Gamma)} \|u\|_{\mathcal H^p_2}
$$
where we used the fact that $u \in \mathcal H^p_2$ is the same as $ \|u\|_{L^p_{z_2}(\Gamma)} \in L^p_\lambda$. The final estimate follows from the definitions of both spaces and the above uniform bound on $G_i$.
If we take $R>0$ large enough then it follows that $D\widetilde{Q}_{\lambda}D\widetilde{Q}_{\lambda}$ and $D\widetilde{Q}_{\lambda}P\widetilde{A}_{\lambda}$ are contractions in $\mathcal{H}^p$ as long as $\|\widetilde{A}\|_{L^{\infty}(\Gamma)}$ is finite.
By the definition of $\mathcal{H}^p$, the boundedness of $P\widetilde{A}_{\lambda}$ follows from the usual $L^p$ boundedness. Since this operator does not have the same dependence on $\lambda$ as the others, we need the jump to be close enough to $1$, so that the supremum norm in $z$ of $\widetilde{A}_{\lambda}$ on $\Gamma$ is small enough and makes the norm of the whole operator less than $1$.
A rough estimate for this norm in terms of the jump is given by:
$$\|\widetilde{A}_{\lambda}\|_{L^{\infty}(\Gamma)}\leq 2\left|\alpha-1\right|\left(1+\left|\frac{1}{\alpha}\right|\right)\leq 4\epsilon,$$
where $\epsilon>0$ is an upper bound for
$\left|\alpha-1\right|$.
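This bound can be seen entrywise: since $\left|\frac{1}{\alpha}-1\right|=\left|\alpha-1\right|\left|\frac{1}{\alpha}\right|$, both entries of $\widetilde{A}_{\lambda}$ satisfy
$$
\left|\alpha+\tfrac{1}{\alpha}-2\right|\leq\left|\alpha-1\right|+\left|\tfrac{1}{\alpha}-1\right|\leq\left|\alpha-1\right|\left(1+\left|\tfrac{1}{\alpha}\right|\right),
\qquad
\left|\alpha-\tfrac{1}{\alpha}\right|\leq\left|\alpha-1\right|\left(1+\left|\tfrac{1}{\alpha}\right|\right),
$$
while $|\nu^2|=1$ on $\Gamma$.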
Hence for $P\widetilde{A}_{\lambda}$ to be a contraction on $\mathcal{H}^p$ we need that
$$|\alpha-1| \leq \frac{1}{4\|P\|_{\mathcal{H}^p}}.$$
\end{proof}
\subsection{Enrichment of the set of CGO incident waves}
Let $w \in \mathcal O$ be a fixed point. For the asymptotics (\ref{8JunA}) we can take any entire function. In our approach, we take these entire functions to be:
\begin{equation}\label{0907A}
U(z,w,\lambda)=e^{\ln |\lambda| \lambda_{\mathcal O}(z-w)^2},
\end{equation}
where $z\in \mathbb{C}$ and $\lambda_{\mathcal O}$ is a parameter.
These functions lead us to the concept of admissible points, whose definition we now recall:
we say that a point $w \in \mathcal{O}$ is an {\it admissible point} if there is a number $\lambda_{\mathcal O}\in\mathbb{C}$ such that
\begin{align*}\label{0907B}
A&:=\sup_{z \in \overline{\mathcal O}} \textnormal{Re}[ \lambda_{\mathcal O}(z-w)^2]< 1/2,\\
B&:= \sup_{z \in \overline{\mathcal D}} \textnormal{Re}[ \lambda_{\mathcal O}(z-w)^2] < -1/2.
\end{align*}
Moreover, if $w$ is an admissible point and $A$ and $B$ fulfill
$A=1/2-\epsilon_1, \, B=-1/2-\epsilon_2,$ with $\epsilon_2-\epsilon_1>0$, we say that $w$ is a proper admissible point.
{\bf Note:} The set of admissible points is not empty. To see this, consider a boundary point $w_0 \in \partial \mathcal O$ which also lies on the boundary of the convex hull of $\mathcal O$. It is easy to see that all interior points $w \in \mathcal O$ near $w_0$ are admissible.
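For illustration, consider the following model configuration (chosen here only as an example): let $w=0$ and assume $\overline{\mathcal O}\setminus\{0\}$ is contained in the sector $\{z:|\arg z|<\pi/4-\delta\}$ for some $\delta>0$, with $\textnormal{dist}(0,\overline{\mathcal D})=d>0$. Then $\textnormal{Re}[z^2]=|z|^2\cos(2\arg z)\geq\sin(2\delta)|z|^2$ on $\overline{\mathcal O}$, so that for $\lambda_{\mathcal O}=-t$ with $t>0$ large enough,
$$
\sup_{z\in\overline{\mathcal O}}\textnormal{Re}[\lambda_{\mathcal O}z^2]\leq 0<1/2,
\qquad
\sup_{z\in\overline{\mathcal D}}\textnormal{Re}[\lambda_{\mathcal O}z^2]\leq-t\sin(2\delta)d^2<-1/2,
$$
and, by continuity of these suprema in $w$, interior points near $0$ are admissible as well.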
We will not try to give a general geometric description of admissible points. Instead, we are only aiming to show the viability of the concept.
Denote
\begin{equation}\label{1107A}
f=\mu - (I+D\widetilde{Q}_\lambda)\left(
\begin{array}{c}
U \\
0 \\
\end{array}
\right),
\end{equation}
where $\mu$ is defined in (\ref{200918A}).
The vector $f$ satisfies the equation
\begin{equation}\label{1007P}
(I+M)f= -M(I+D\widetilde{Q}_\lambda) \left(
\begin{array}{c}
U \\
0 \\
\end{array}
\right).
\end{equation}
We already know that, for $R>0$ large enough, the operator $M$ on the left-hand side of this equation is a contraction in $\mathcal H^p,\, p>1$, and below we show that the right-hand side in fact satisfies:
\begin{equation}\label{1007Q}
\frac{1}{|\lambda|^A} M(I+D\widetilde{Q}_\lambda) \left(
\begin{array}{c}
U \\
0 \\
\end{array}
\right) \in \mathcal H^p, \quad p>2.
\end{equation}
Therefore, we get the following statement
\begin{lemma}\label{lemma1107A}
For any $p>2$ and $R$ large enough, if $U$ is given in terms of a proper admissible point $w$, we have
\begin{equation}\label{0907F}
\frac{1}{|\lambda|^A} \left [ \mu - (I+D\widetilde{Q}_\lambda)\left(
\begin{array}{c}
U \\
0 \\
\end{array}
\right) \right ] \in \mathcal H^p.
\end{equation}
\end{lemma}
\begin{proof}
We start by showing that
\begin{equation}\label{Hp}\frac{1}{|\lambda|^A}\U \in \mathcal{H}_2^p
\end{equation} and $$\frac{1}{|\lambda|^A}MD\tilde{Q}_{\lambda}\U \in \mathcal{H}_1^p.$$ Since $M$ is a contraction for $R > 0$ big enough, we obtain $(\ref{1007Q})$, and the result then follows immediately for $p > 2$.
To show (\ref{Hp}) we refer to the following simple estimate
\begin{align*}
\Bigg[\int_{|\lambda| > R}\int_{\Gamma}\Bigg|\frac{1}{|\lambda|^A}e^{\ln |\lambda|\lambda_{\mathcal O}(z-w)^2}\Bigg|^p\,d|z|\,d\sigma_{\lambda}\Bigg]^{1/p}
&\leq \Bigg[\int_{|\lambda| > R}\int_{\Gamma}\Bigg|\frac{1}{|\lambda|^A} |\lambda|^B\Bigg|^p\,d|z|\,d\sigma_{\lambda}\Bigg]^{1/p}
\\
&
= \Bigg[\int_{|\lambda| > R}\int_{\Gamma}\Bigg|\frac{1}{|\lambda|^{1+(\epsilon_2-\epsilon_1)}} \Bigg|^p\,d|z|\,d\sigma_{\lambda}\Bigg]^{1/p} < \infty.
\end{align*}
For the second statement we need to dismantle $M$ into its various parts and show that the statement holds for each one of them. The trick is always the same, so we will only show one of the computations, namely the one corresponding to the term $\frac{1}{|\lambda|^A}(D\tilde{Q}_{\lambda})^3\U$. By Lemma \ref{2310F} we get
\begin{align*}
&\sup_{z \in \mathcal O} \Bigg[\int_{|\lambda|>R}\Bigg|\frac{1}{|\lambda|^A}\int_{ \mathcal{O}}\frac{e^{i\,\textnormal{Im}(\lambda(z_1-w)^2)}}{\overline{z_1-z}}Q_{21}(z_1)\int_{ \mathcal{O}}\frac{e^{-i\,\textnormal{Im}(\lambda(z_2-w)^2)}}{z_2-z_1}Q_{12}(z_2) \cdot
\\
&
\hspace{7cm}\cdot\int_{ \mathcal{O}}\frac{e^{\rho(z_3)}}{z_3-z_2}Q_{21}(z_3)\,d\sigma_{z_3}\,d\sigma_{z_2}\,d\sigma_{z_1}\Bigg|^p\,d\sigma_{\lambda}\Bigg]^{1/p} \\
&\leq \sup_{z \in \mathcal O} \Bigg[C ||Q||_{L^{\infty}}^3\int_{ \mathcal{O}}
\int_{\mathcal{O}}\frac{1}{|z_1-z|}\frac{1}{|z_2-z_1|}\frac{1}{|z_2-w|^{1-\delta}} \, d\sigma_{z_2}\,d\sigma_{z_1}\Bigg] < C'.
\end{align*}
Thus, the result $(\ref{1007Q})$ follows and, in consequence, (\ref{0907F}) also holds by (\ref{1107A}) and (\ref{1007P}).
\end{proof}
\subsection{Scattering data and reconstruction of the potential in admissible points}
Let $w\in\mathcal{O}$ be an admissible point. We consider the function
\begin{equation}\label{0907C}
e^{\overline{\ln |\lambda| \lambda_s(z-w)^2}},
\end{equation}
where the number $\lambda_s$ is chosen as
\begin{equation}\label{0907D}
\sup_{z \in \overline{\mathcal O}} \textnormal{Re}[ \lambda_s(z-w)^2]<1/2, \quad \sup_{z \in \overline{\mathcal D}} \textnormal{Re}[ \lambda_s(z-w)^2]<-1/2.
\end{equation}
A point $w$ can be admissible for more than one spectral parameter. To define the scattering data we want to use the above exponentials depending on $\lambda_s$. Since $\mu$ fulfills an asymptotic with spectral parameter $\lambda_{\mathcal O}$, we also define our scattering data with respect to this spectral parameter, i.e., $\lambda_s=\lambda_{\mathcal O}$.
However, this is not a requirement, and $\lambda_s$ could have been any other parameter which makes $w$ admissible. We fix it like this only to simplify the proofs ahead.
Consider now our scattering data
\begin{equation}\label{1007S}
h(\lambda,w)= \int_{\partial \mathcal O} \overline{e^{\ln |\lambda| \lambda_s(z-w)^2}} \mu_2(z)d\bar{z}.
\end{equation}
Using Green's theorem
$$
\int_{\partial \mathcal O} f \, d\bar{z} = -2i \int_{\mathcal O} \partial {f} \, d{\sigma_{z}}
$$
we can see that
\begin{equation}\label{0907E}
h(\lambda,w)= \int_{\Gamma^+} \overline{e^{\ln |\lambda| \lambda_s(z-w)^2}} \mu_2(z)d\bar{z} + \int_{\mathcal O \setminus\overline{\mathcal{D}}}
\overline{e^{\ln |\lambda| \lambda_s(z-w)^2}} e^{-i\,\textnormal{Im} [\lambda (z-w)^2]/2} q_{21}(z)\mu_1(z) d\sigma_z.
\end{equation}
This formula gives rise to an operator, which we denote by $T$ and define by
$$
T[G](\lambda)= \int_{\mathcal O \setminus\overline{\mathcal{D}}}
\overline{e^{\ln |\lambda| \lambda_s(z-w)^2}} e^{-i\,\textnormal{Im} [\lambda (z-w)^2]/2} q_{21}(z)G(z) d\sigma_z.
$$
From our representation (\ref{1107A}) for the solution $\mu$ and the fact that the matrix $\widetilde{Q}_{\lambda}$ is off-diagonal, we get
$$
T\left [ \left ( (I+D\widetilde{Q}_\lambda)\left(
\begin{array}{c}
U \\
0 \\
\end{array}
\right) \right )_1\right ] = T[U].
$$
This allows us to state our main theorem.
\begin{theorem}\label{210918T}
Let the potential $q$ be given through (\ref{char1bc}) for an admittivity $\gamma$ satisfying $\textnormal{Re}(\gamma)\geq c>0$ and $\gamma\in W^{2,\infty}(\mathcal{D})\cap W^{2,\infty}(\mathcal{O}\setminus\overline{\mathcal{D}})$. If the jump $\alpha-1$ is small enough in $L^{\infty}(\Gamma)$ and $w$ is a proper admissible point for a spectral parameter $\lambda_s$, then
\begin{equation}\label{210918A}
\frac{\lambda_s}{4\pi^2\ln 2}\lim_{R\to\infty}\int_{R<|\lambda|<2R} |\lambda|^{-1} \, h(\lambda,w)\, d\sigma_\lambda=q_{21}(w).
\end{equation}
\end{theorem}
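The constant in (\ref{210918A}) can be checked heuristically against Lemma \ref{lemma08007A} below: since
$$
\int_{R<|\lambda|<2R}|\lambda|^{-2}\,d\sigma_\lambda=2\pi\int_R^{2R}\frac{dr}{r}=2\pi\ln 2,
$$
the leading term $\frac{2\pi}{|\lambda|}\varphi(w)$ in (\ref{0807B}) contributes $4\pi^2\ln 2\,\varphi(w)$ to the integral in (\ref{210918A}), while the factor $\lambda_s$ accounts for the rescaling of the phase by $\lambda_s$; this is only a consistency check, not part of the proof.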
The proof of this theorem requires some additional results concerning the behavior of $h(\lambda,w)/|\lambda|$. These results will be given in the form of three lemmas, which we establish in the next subsection.
\subsection{Necessary results for the proof of Theorem~\ref{210918T}}
We start by presenting a result which we need afterwards. For its proof we refer to Appendix A.
Consider two arbitrary numbers $\lambda_0,\,w \in \mathbb C$, denote
$\rho(z)= -i\,\textnormal{Im}[\lambda(z-w)^2]/2+\ln |\lambda| \lambda_0(z-w)^2$, and let
$A_0=\sup_{z \in \mathcal O}\, \textnormal{Re}[ \lambda_0(z-w)^2].$
\begin{lemma}
\label{2310F}
Let $z_1 \in \mathbb C\setminus\{w\}$, $p > 2,$ and $\varphi \in L^\infty$ with compact support. Then
$$
\left \|\frac{1}{|\lambda|^{A_0}}\int_{\mathbb C} \varphi(z)\frac{e^{\rho(z)} }{z-z_1} \, d{\sigma_{z}} \right \|_{L^p_\lambda(\mathbb C)}
\leq C\frac{\|\varphi\|_{L^\infty}}{|z_1-w|^{1-\delta}},
$$
where the constant $C$ depends only on the support of $\varphi$ and on $\delta =\delta(p)>0$.
\end{lemma}
To study the main term in (\ref{210918A}), we have the following lemma.
\begin{lemma}\label{lemma08007A}
Let $\varphi \in W^{1,\infty}(\Omega)$ with $\textnormal{supp}(\varphi) \subset \Omega$, where $\Omega$ is a domain in $\mathbb R^2$, and let $w \in \Omega$ be such that
\begin{equation}\label{0807C}
\sup_{z \in \overline{\Omega}} \textnormal{Re} (z-w)^2 <1.
\end{equation}
Then the following asymptotic holds
\begin{equation}\label{0807B}
\int_{\Omega} e^{-i\,\textnormal{Im} (\lambda (z-w)^2)+\ln|\lambda|(z-w)^2} \varphi(z) d\sigma_z = \frac{2\pi}{|\lambda|}\varphi(w)+R_w(\lambda),
\end{equation}
where $|\lambda|^{-1}R_w \in L^1(\lambda :|\lambda|>R)$.
\end{lemma}
\begin{proof} Consider two domains
$$
I_1=\{ z\in\Omega ~ : ~ |z-w| < 3\varepsilon\} \quad\text{and} \quad I_2=\{ z\in\Omega ~ : ~ |z-w| > \varepsilon \},
$$
where $\varepsilon>0$ is an a priori chosen arbitrarily small but fixed number. Furthermore, we pick two functions $\delta_1$ and $\delta_2$ with supports $I_1$ and $I_2$, respectively, such that $\delta_1+\delta_2 \equiv 1$ in $\mathcal O$. Moreover, we assume that $\delta_1(z-w)$ is represented as a product of $\widehat{\delta}_1(x)\widehat{\delta}_1(y)$ and that the function $\widehat{\delta}_1(x)$ decreases monotonically as $|x|$ grows.
The integrand is multiplied by $(\delta_1+\delta_2)$ and this naturally splits the integral into two terms.
The term corresponding to $\delta_2$ can be integrated by parts once and then the required estimate follows from the Hausdorff-Young inequality (\ref{0607F}) for $p=q=2$. We also use here the fact that the inequality (\ref{0807C}) is strict.
Now, we consider the term corresponding to integration against $\delta_1$.
This term will also be divided into two parts, corresponding to the representation
$$
\delta_1(z) \varphi(z)=\delta_1(z)\varphi(w)+ \delta_1(z)(\varphi(z)-\varphi(w)).
$$
Keeping in mind the properties of $\delta_1$ and the fact that $\varphi \in W^{1,\infty}$ is a Lipschitz function, the second part can be treated as in Lemma \ref{2310F}: we make the change of variables $u=(z-w)^2$, so that $d\sigma_{z}=\frac{1}{4|u|}d\sigma_{u}$. From this we obtain two integrals due to the splitting of the domain. In each of them the change of variables generates a singularity of total order $|u|$. In both integrals we can apply integration by parts, obtaining an area integral and a contour integral. In the area integral we have a singularity of total order $|u|^{3/2}$; hence we can apply the Hausdorff-Young inequality for the Laplace transform with $p=4/3$ and obtain the required estimate. For the contour integral we can apply the one-dimensional Hausdorff-Young inequality for the Laplace transform and obtain the needed estimate.
For the first part we consider the change of variables $y={\sqrt{|\lambda|}}(z-w)$. Due to the separation of variables in $\delta_1$, the asymptotics of
\begin{equation}\label{0807d}
\frac{\varphi(w)}{|\lambda|}
\int e^{-i\,\textnormal{Im} y^2+\frac{\ln|\lambda|}{|\lambda|}y^2} \delta_1\left(\left|w+\frac{y}{\sqrt{|\lambda|}}\right|\right) d\sigma_y
\end{equation}
follows from the formula
\begin{equation}\label{2207A}
\int_0^{\lambda^{1/2}\delta} e^{-ix^2+\frac{\ln|\lambda|}{|\lambda|}x^2} \widehat{\delta}_1\left(\frac{|x|}{\sqrt{|\lambda|}}\right) dx = \frac{1}{\sqrt{2\pi}}(1+o(1)), \quad \lambda \rightarrow \infty.
\end{equation}
This can be proven in the following way:
consider the change of variables $$x^2=t, \;\,g(t):=\widehat{\delta}_1\left(\frac{|x(t)|}{\sqrt{|\lambda|}}\right)$$ then, we have
$$
\int_0^{\lambda^{1}\delta^2} e^{-it+\frac{\ln|\lambda|}{|\lambda|}t} \frac{1}{\sqrt t}g(t) dt =
$$
$$
\int_0^1 e^{-it+\frac{\ln|\lambda|}{|\lambda|}t} \frac{1}{\sqrt t}g(t) dt +
\frac{1}{-i+\frac{\ln|\lambda|}{|\lambda|}}\int_1^{\lambda \delta^2} \left( e^{-it+\frac{\ln|\lambda|}{|\lambda|}t} \right )' \frac{1}{\sqrt t}g(t) dt.
$$
For the second term we obtain
$$
\frac{1}{-i+\frac{\ln|\lambda|}{|\lambda|}}\int_1^{\lambda \delta^2} \left( e^{-it+\frac{\ln|\lambda|}{|\lambda|}t} \right )' \frac{1}{\sqrt t}g(t) dt =
\left . \frac{1}{-i+\frac{\ln|\lambda|}{|\lambda|}}\left( e^{-it+\frac{\ln|\lambda|}{|\lambda|}t} \right ) \frac{g(t)}{\sqrt t} \right |_1^{\lambda\delta^2}+
$$
$$
\frac{1}{-i+\frac{\ln|\lambda|}{|\lambda|}}\int_1^{\lambda \delta^2} e^{-it+\frac{\ln|\lambda|}{|\lambda|}t} \frac{1}{2t^{3/2}}g(t) dt =
$$
$$
\frac{-1}{-i}\left( e^{-i} \right ) \frac{g(1)}{\sqrt 1} +\frac{1}{-i}\int_1^{\lambda \delta^2} e^{-it} \left ( \frac{g(t)}{2t^{1/2}} \right )' dt +o(1), \quad \lambda \to \infty.
$$
We used here the fact that the last integral is absolutely convergent ($g$ has a finite support) and
\begin{equation}\label{2207D}
\sup_{y \in I_1} \left |e^{\frac{\ln|\lambda|}{|\lambda|}y^2} -1 \right | = o(1), \quad \lambda \rightarrow \infty.
\end{equation}
Therefore, we get
$$
\int_0^{\lambda^{1/2} \delta} e^{-ix^2+\frac{\ln|\lambda|}{|\lambda|}x^2} \widehat{\delta}_1\left(\frac{|x|}{\sqrt{|\lambda|}}\right) dx =
\int_0^{\lambda \delta^2} e^{-it} \frac{1}{\sqrt t}g(t) dt = \int_0^{\infty} e^{-it} \frac{1}{\sqrt t}g(t) dt +o(1).
$$
Now the result of our lemma is an immediate consequence of this formula.
\end{proof}
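The normalization in (\ref{2207A}) ultimately rests on the classical Fresnel integral. The following snippet (a numerical illustration of ours, not part of the proof) checks that the truncated model integral $\int_0^X e^{-iu^2}\,du$ approaches $\frac{\sqrt{\pi}}{2}e^{-i\pi/4}$ within the $O(1/(2X))$ truncation error that one integration by parts predicts.

```python
import math, cmath

# Model oscillatory integral behind (2207A): after the substitution t = u^2,
# \int_0^\infty e^{-i u^2} du = (sqrt(pi)/2) e^{-i pi/4}; truncating at u = X
# leaves an error of order 1/(2X) (one integration by parts on the tail).
X, n = 20.0, 100000
h = X / n
F = 0.5 * (1.0 + cmath.exp(-1j * X * X))   # trapezoid rule, endpoint terms
for i in range(1, n):
    u = i * h
    F += cmath.exp(-1j * u * u)
F *= h

limit = 0.5 * math.sqrt(math.pi) * cmath.exp(-1j * math.pi / 4)
assert abs(F - limit) < 1.5 / (2 * X)      # consistent with the 1/(2X) tail bound
```

The remaining discrepancy is dominated by the oscillatory tail $e^{-iX^2}/(2iX)$, not by the quadrature error.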
To prove our asymptotic formula for the scattering data, we will substitute $\mu$ with the help of (\ref{1107A}). This leaves some terms which must vanish as $|\lambda|\rightarrow +\infty$ in order to obtain the desired formula through Lemma \ref{lemma08007A}.
In this sense, the two lemmas that follow ensure that these remaining terms are integrable in $\lambda$ and, therefore, that their contribution vanishes in the limit.
\begin{lemma}\label{lemma1008A}
For some $p<2$, with $R$ large enough and $f$ defined as in (\ref{1107A}), we get
\begin{equation}\label{1007A}
\frac{1}{|\lambda|}T\left [ M(I+D\widetilde{Q}_\lambda)\left(
\begin{array}{c}
U \\
0 \\
\end{array}
\right) \right ] \in L^p(\lambda : |\lambda|>R),
\end{equation}
and
\begin{equation}\label{1107B}
\frac{1}{|\lambda|}T[Mf] \in L^p(\lambda : |\lambda|>R).
\end{equation}
\end{lemma}
\begin{proof}
Given the structure of $M=P\tilde{A}_{\lambda}+D\tilde{Q}_{\lambda}-D\tilde{Q}_{\lambda}D\tilde{Q}_{\lambda}$ and the linearity of $\frac{1}{|\lambda|}T$, it is enough to show that each term, applied to both $\U$ and $D\tilde{Q}_{\lambda}\U$, belongs to $L^p(\lambda:|\lambda|> R)$.
We look directly at the computations for each term. By Fubini's theorem, the Minkowski integral inequality, the H\"older inequality, and Lemma \ref{2310F} we can show that all of these terms are in fact in $L^p(\lambda:|\lambda|> R)$. Since the computations for each term follow roughly the same lines, for the convenience of the reader we present just one of these cases; the remaining terms are treated analogously, with special attention to the convergence of the integrals.
We show that $$\frac{1}{|\lambda|}T\Bigg[D\tilde{Q}_{\lambda}D\tilde{Q}_{\lambda}\U\Bigg] \in L^p(\lambda:|\lambda|> R).$$
Let us denote $\rho(z)=i\,\textnormal{Im}[\lambda(z-w)^2]/2+\ln|\lambda|\lambda_s(z-w)^2$
and $A=S=\sup_{z \in \overline{\mathcal{O}}} \,\textnormal{Re}[\lambda_s(z-w)^2] < 1/2$.
\begin{footnotesize}
\begin{align*}
&\left\| \frac{1}{|\lambda|}T\Bigg[D\tilde{Q}_{\lambda}D\tilde{Q}_{\lambda}\U\Bigg] \right\|_{L^p(\lambda:|\lambda|> R)}=
\\
&=\Bigg[\int_{|\lambda| > R} \Bigg| \frac{1}{4\pi^2|\lambda|}\int_{\mathcal{O}\setminus \overline{\mathcal{D}}}e^{\overline{\rho(z)}}q_{21}(z)\int_{\mathcal{O}}\frac{e^{-i\,\textnormal{Im}[\lambda(z_1-w)^2]/2}}{z_1-z}Q_{12}(z_1)
\int_{\mathcal{O}}\frac{e^{\rho(z_2)}}{\overline{z_2-z_1}}Q_{21}(z_2)\,d\sigma_{z_2}\,d\sigma_{z_1}\,d\sigma_z\Bigg|^p\,d\sigma_{\lambda}\Bigg]^{1/p}
\\
&= \Bigg[\int_{|\lambda| > R} \Bigg|\frac{1}{4\pi^2|\lambda|}\int_{\mathcal{O}}\Bigg(\int_{\mathcal{O}\setminus\overline{\mathcal{D}}}\frac{e^{\overline{\rho(z)}}}{z_1-z}q_{21}(z)\,d\sigma_z\Bigg)\Bigg(\int_{\mathcal{O}}\frac{e^{\rho(z_2)}}{\overline{z_2-z_1}}Q_{21}(z_2)\,d\sigma_{z_2}\Bigg)\,\cdot
\\
&\hspace{2cm}\cdot Q_{12}(z_1)e^{-i\,\textnormal{Im}[\lambda(z_1-w)^2]/2} \, d\sigma_{z_1} \Bigg|^p \, d\sigma_{\lambda}\Bigg]^{1/p}
\\
&\leq \int_{\mathcal{O}} \Bigg[ \int_{|\lambda| > R} \Bigg|\frac{|\lambda|^{A+S}}{|\lambda|}\Bigg(\frac{1}{|\lambda|^A}\int_{\mathcal{O}\setminus D}\frac{e^{\overline{\rho(z)}}}{z_1-z}q_{21}(z)\,d\sigma_z\Bigg) \Bigg(\frac{1}{|\lambda|^S}\int_{\mathcal{O}}\frac{e^{\rho(z_2)}}{\overline{z_2-z_1}}Q_{21}(z_2)\,d\sigma_{z_2}\Bigg)\Bigg|^p\, d\sigma_{\lambda}\Bigg]^{1/p}\hspace{-0.5cm}|Q_{12}(z_1)|\,d\sigma_{z_1}
\\
& \leq \|Q \|_{L^{\infty}} \int_{\mathcal{O}} \Bigg|\Bigg|\frac{1}{|\lambda|^A}\int_{\mathcal{O}\setminus \overline{\mathcal{D}}}\frac{e^{\overline{\rho(z)}}}{z_1-z}q_{21}(z)\,d\sigma_z \Bigg|\Bigg|_{L^{2p}(\lambda:|\lambda|>R)} \Bigg|\Bigg|\frac{1}{|\lambda|^S}\int_{\mathcal{O}}\frac{e^{\rho(z_2)}}{\overline{z_2-z_1}}Q_{21}(z_2)\,d\sigma_{z_2}\Bigg|\Bigg|_{L^{2p}(\lambda:|\lambda|>R)} \, d\sigma_{z_1}
\\
& \leq C \|Q \|_{L^{\infty}} \int_{\mathcal{O}} \frac{1}{|z_1-w|^{1-\delta}}\frac{1}{|z_1-w|^{1-\delta}} \, d\sigma_{z_1} < \infty.
\end{align*}
\end{footnotesize}
With these calculations we obtain (\ref{1007A}). To show (\ref{1107B}) we have that $\frac{1}{|\lambda|^A}f \in \mathcal{H}^p$, for $p>2$, by Lemma \ref{lemma1107A}. We consider $T$ applied to each term of $M$. Again, we present only the computations for the case $\frac{1}{|\lambda|}T[D\tilde{Q}_{\lambda}D\tilde{Q}_{\lambda}f]$, since the other computations are analogous, with special attention to the behavior of $\frac{1}{|\lambda|^A}f$. In the same spirit, we only present the calculation for the first term of the vector.
\begin{align*}
&\Bigg[\int_{|\lambda| > R} \Bigg|\frac{1}{|\lambda|}\int_{\mathcal{O}\setminus\overline{\mathcal{D}}} e^{\overline{\rho(z)}}Q_{21}(z) \int_{ \mathcal{O}} \frac{e^{-i\,\textnormal{Im}(\lambda(z_1-w)^2)}}{z_1-z}Q_{12}(z_1)\cdot
\\
& \hspace{2cm}\cdot \int_{ \mathcal{O}}\frac{e^{i\,\textnormal{Im}(\lambda(z_2-w)^2)}}{\overline{z_2-z_1}}Q_{21}(z_2)f_1(z_2)\,d\sigma_{z_2}\,d\sigma_{z_1}\,d\sigma_{z}\Bigg|^p\,d\sigma_{\lambda}\Bigg]^{1/p}
\\
&
\leq C\|Q \|_{L^{\infty}}^3\int_{ \mathcal{O}}\int_{ \mathcal{O}} \frac{1}{|z_2-z_1|} \frac{1}{|z_1-w|^{1-\delta}} \left\|\frac{1}{|\lambda|^{A}}f_1(z_2)\right\|_{L^{2p}_{\lambda}}\,d\sigma_{z_2}\,d\sigma_{z_1} < \infty
\end{align*}
The boundedness of the last integral follows from the fact that $\frac{1}{|\lambda|^A}f \in \mathcal{H}^p$ implies its boundedness with respect to the $z$ variable.
\end{proof}
\begin{lemma}\label{260918A}
For $R$ large enough, and $w$ being a proper admissible point, we have
$$
\frac{1}{|\lambda|} \int_{\Gamma^+} e^{\overline{\ln |\lambda| \lambda_s(z-w)^2}} \mu_2(z)d\bar{z} \in L^1(|\lambda|>R).
$$
\end{lemma}
\begin{proof}
We divide the integral
\begin{equation} \label{121A}
\frac{1}{|\lambda|}\int_{\Gamma^{+}} e^{\overline{\ln |\lambda|\lambda_s(z-w)^2}}\mu_2(z) d\bar{z},
\end{equation}
into two pieces, according to the decomposition of $\mu_2$ given by formula (\ref{1107A}), that is
\begin{equation}\label{122A}
\mu_2=\bigg[D\tilde{Q}_{\lambda}\begin{pmatrix}
U \\
0
\end{pmatrix}\bigg]_2 + f_2.
\end{equation}
By Lemma \ref{lemma1107A} we have that $\frac{1}{|\lambda|^A} f \in \mathcal{H}^p$, for any $p > 2$. Therefore, we apply (\ref{122A}) to (\ref{121A}) and we split the integral into $I_1$ and $I_2$, according to the order in (\ref{122A}).
Since, by assumption, $w$ is an admissible point, there exists a $\lambda_s$ fulfilling the inequality $\sup_{z \in \overline{\mathcal{D}}}\, \textnormal{Re}[\lambda_s(z-w)^2] < -1/2.$
So, for $z \in \Gamma^{+}$ we get
\begin{align}
\nonumber \left| |\lambda|^A e^{\overline{\ln |\lambda|\lambda_s(z-w)^2}} \right|&=|\lambda|^A|e^{\ln |\lambda|\,\textnormal{Re}[\lambda_s(z-w)^2]}| < |\lambda|^A e^{-1/2\ln|\lambda|}=|\lambda|^{A-1/2} \\
\left| |\lambda|^A e^{\overline{\ln |\lambda|\lambda_s(z-w)^2}}\right| & < |\lambda|^{-\delta},
\end{align}
where we choose $-\delta=A-1/2 < 0$ (recall, $A < 1/2$). Hence, we obtain
\begin{align*}
|I_2| &\leq \frac{1}{|\lambda|}\int_{\Gamma^{+}} \bigg||\lambda|^Ae^{\overline{\ln |\lambda|\lambda_s(z-w)^2}} \bigg(\frac{1}{|\lambda|^A}f_2\bigg)\bigg| \, d|\bar{z}| \\
& < \frac{1}{|\lambda|^{1+\delta}} \int_{\Gamma^{+}} \bigg|\frac{1}{|\lambda|^A}f_2\bigg| \, d|\bar{z}|.
\end{align*}
Integrating with respect to the spectral parameter, we have for $R > 0$ large enough
\begin{align*}
\int_{|\lambda| > R} |I_2| d\sigma_{\lambda} &\leq \int_{|\lambda| > R} \frac{1}{|\lambda|^{1+\delta}} \int_{\Gamma^{+}} \bigg|\frac{1}{|\lambda|^A}f_2\bigg| \, d|\bar{z}| d\sigma_{\lambda} \\
& \leq \left\|\frac{1}{|\lambda|^{1+\delta}}\right\|_{L^q_{\lambda}} \left\|\int_{\Gamma^{+}} \bigg|\frac{1}{|\lambda|^A}f_2\bigg| \, d|\bar{z}|\right\|_{L^p_{\lambda}}.
\end{align*}
Therefore, by Lemma \ref{lemma1107A} the second norm is finite for $p > 2$. We now pick $q$ such that $q(1+\delta) > 2$, which is always possible given that $\delta > 0$. Hence, $I_2$ is in $L^1(\lambda:|\lambda| > R).$
Now, we look at $I_1$. By definition we have
\begin{equation}
I_1=\frac{1}{2|\lambda|} \int_{\Gamma^{+}} e^{\overline{\ln |\lambda|\lambda_s(z-w)^2}}\int_{\mathcal{O}} \frac{e^{\ln |\lambda|\lambda_s(z_1-w)^2+i\,\textnormal{Im}(\lambda(z_1-w)^2)/2}}{\bar{z}-\bar{z}_1}Q_{21}(z_1) \,d\sigma_{z_1}\,d\bar{z}.
\end{equation}
Again, integrating against the spectral parameter we get:
\small
\begin{align*}
\int_{|\lambda| > R} |I_1| d\sigma_{\lambda} &\leq \int_{|\lambda| > R} \frac{1}{2|\lambda|}\int_{\Gamma^{+}} |\lambda|^{-\delta} \Bigg|\frac{1}{|\lambda|^A}\int_{\mathcal{O}} \frac{e^{\ln |\lambda|\lambda_s(z_1-w)^2+i\,\textnormal{Im}(\lambda(z_1-w)^2)/2}}{\bar{z}-\bar{z}_1}Q_{21}(z_1) d\sigma_{z_1}\Bigg|\, d|\bar{z}|\,d\sigma_{\lambda} \\
& = \int_{\Gamma^{+}}\int_{|\lambda| > R} \frac{1}{2|\lambda|^{1+\delta}} \Bigg|\frac{1}{|\lambda|^A}\int_{\mathcal{O}} \frac{e^{\ln |\lambda|\lambda_s(z_1-w)^2+i\,\textnormal{Im}(\lambda(z_1-w)^2)/2}}{\bar{z}-\bar{z}_1}Q_{21}(z_1) d\sigma_{z_1}\Bigg|\, d\sigma_{\lambda}\,d|\bar{z}| \\
& \leq \Bigg|\Bigg|\frac{1}{2|\lambda|^{1+\delta}}\Bigg|\Bigg|_{L^q_{\lambda}} \int_{\Gamma^{+}} \Bigg|\Bigg|\frac{1}{|\lambda|^A}\int_{\mathcal{O}} \frac{e^{\ln |\lambda|\lambda_s(z_1-w)^2+i\,\textnormal{Im}(\lambda(z_1-w)^2)/2}}{\bar{z}-\bar{z}_1}Q_{21}(z_1) d\sigma_{z_1}\Bigg|\Bigg|_{L^p_{\lambda}} d|\bar{z}|,
\end{align*}
\normalsize
where we use Fubini's theorem and the H\"older inequality, with $p > 2$ close enough to $2$ so that the first norm is finite, as in the computation of $I_2$.
Now we can use Lemma \ref{2310F}, since the potential $Q$ is supported in $\mathcal{O}$ and belongs to $L^{\infty}_{z}$, to obtain a constant $C>0$ depending only on the support of the potential and on a certain $\tilde{\delta} > 0$:
\begin{align*}
\int_{|\lambda| > R} |I_1| d\sigma_{\lambda} \leq C \left\| \frac{1}{2|\lambda|^{1+\delta}}\right\|_{L^q_{\lambda}} \|Q_{21} \|_{L^{\infty}_{z}} \int_{\Gamma^{+}} \frac{1}{|\bar{z}-w|^{1-\tilde{\delta}}} \, d|\bar{z}|.
\end{align*}
Given that the last integral is finite, we have $I_1 \in L^1(\lambda:|\lambda| > R)$, and the desired result follows.
\end{proof}
\subsection{Proof of Theorem \ref{210918T} }
Now we can present the proof of our main theorem, using the lemmas of the previous section while paying close attention to how $\mu$ and $f$ are defined.
\begin{proof}
Let us start by taking a look at the following term
\begin{align}\label{Hsc}
\nonumber\frac{h(\lambda,w)}{|\lambda|}= \frac{1}{|\lambda|}\Bigg[&\int_{\Gamma^{+}} e^{\overline{\ln|\lambda|\lambda_s(z-w)^2}}\mu_2(z)d\bar{z}\\
&+\int_{\mathcal{O}\setminus\overline{\mathcal{D}}} e^{\overline{\ln|\lambda|\lambda_s(z-w)^2}}e^{-i\,\textnormal{Im}(\lambda(z-w)^2)/2} q_{21}(z)\mu_1(z) d\sigma_z\Bigg].
\end{align}
From (\ref{1107A}) we have
$$\mu=f+(I+D\tilde{Q}_{\lambda}) \begin{pmatrix}
U \\
0
\end{pmatrix},$$
where $f$ is a solution of
$$f=-\Bigg(Mf + M(I+D\tilde{Q}_{\lambda}) \begin{pmatrix}
U \\
0
\end{pmatrix}\Bigg).$$
This leads to
$\mu_1=-\left[Mf+ M(I+D\tilde{Q}_{\lambda}) \begin{pmatrix}
U \\
0
\end{pmatrix}\right]_1+U.$ Therefore, by (\ref{Hsc}) and the definition of the operator $T$, we get
\begin{align}
\nonumber \frac{h(\lambda,w)}{|\lambda|}=\frac{1}{|\lambda|}&\int_{\Gamma^{+}} e^{\overline{\ln|\lambda|\lambda_s(z-w)^2}}\mu_2(z)d\bar{z}-\frac{1}{|\lambda|}T\bigg(\big[Mf\big]_1\bigg)\\&-\frac{1}{|\lambda|}T\bigg( \bigg[M(I+D\tilde{Q}_{\lambda}) \begin{pmatrix}
U \\
0
\end{pmatrix}\bigg]_1\bigg) +
\frac{1}{|\lambda|}T[U] =:A+B+C+D.
\end{align}
We need to study the terms $A,\,B,\,C,\,D$.
By Lemma \ref{lemma1008A}, we have for $p < 2$ and $R$ large enough that:
$$B,C \in L^p(\lambda:|\lambda|> R).$$
From Lemma \ref{260918A}, we obtain
$$A \in L^1(\lambda:|\lambda|> R).$$
Hence, we just need to analyze the behavior of the last term.
\begin{align*}
T[U]&=\int_{\mathcal{O}\setminus\overline{\mathcal{D}}} e^{\overline{\ln|\lambda|\lambda_s(z-w)^2}}e^{-i\,\textnormal{Im}(\lambda(z-w)^2)/2} q_{21}(z)e^{\ln|\lambda|\lambda_s(z-w)^2} d\sigma_z \\
&= \int_{\mathcal{O}\setminus\overline{\mathcal{D}}} e^{\overline{\ln|\lambda|(\sqrt{\lambda_s}z-\sqrt{\lambda_s}w)^2}}e^{-i\,\textnormal{Im}(\lambda(z-w)^2)/2} q_{21}(z)e^{\ln|\lambda|(\sqrt{\lambda_s}z-\sqrt{\lambda_s}w)^2} d\sigma_z \\
&= \frac{1}{\lambda_s}\int_{\sqrt{\lambda_s}(\mathcal{O}\setminus \overline{\mathcal{D}})} e^{\overline{\ln|\lambda|(z-\sqrt{\lambda_s}w)^2}}e^{-i\,\textnormal{Im}(\lambda/\lambda_s(z-\sqrt{\lambda_s}w)^2)/2} q_{21}(z/\sqrt{\lambda_s})\\ & \hspace{2.5cm}e^{\ln|\lambda|(z-\sqrt{\lambda_s}w)^2}e^{-i\,\textnormal{Im}(\lambda(z-\sqrt{\lambda_s}w)^2)} e^{i\,\textnormal{Im}(\lambda(z-\sqrt{\lambda_s}w)^2)} d\sigma_z,
\end{align*}
where we made the change of variables $z \mapsto \sqrt{\lambda_s}\,z$.
We define
$$\phi(z)=e^{-i\,\textnormal{Im}(\lambda/\lambda_s(z-\sqrt{\lambda_s}w)^2)/2}e^{i\,\textnormal{Im}(\lambda(z-\sqrt{\lambda_s}w)^2)} e^{\overline{\ln|\lambda|(z-\sqrt{\lambda_s}w)^2}} q_{21}(z/\sqrt{\lambda_s}).$$
Given that the conditions of Lemma \ref{lemma08007A} are fulfilled, we obtain:
\begin{align}
T[U]&=\frac{1}{\lambda_s}\Bigg[\frac{2\pi}{|\lambda|}\phi(\sqrt{\lambda_s}w)+R_{\sqrt{\lambda_s}w}(\lambda)\Bigg],
\end{align}
which by substitution implies:
$$\frac{1}{|\lambda|}T[U]=\frac{1}{\lambda_s}\frac{2\pi}{|\lambda|^2}q_{21}(w)+\frac{1}{\lambda_s}|\lambda|^{-1}R_{\sqrt{\lambda_s}w}(\lambda)=:D_1+D_2.$$
By Lemma \ref{lemma08007A}, we have $D_2 \in L^1(\lambda:|\lambda|> R)$.
So finally we are ready to evaluate the left-hand side of (\ref{210918A}):
\begin{align*}
\lim_{R \to \infty} \int_{R < |\lambda|< 2R} |\lambda|^{-1} h(\lambda,w) d\sigma_{\lambda} &= \lim_{R \to \infty} \int_{R < |\lambda|< 2R} \frac{2\pi}{\lambda_s}|\lambda|^{-2}q_{21}(w) d\sigma_{\lambda} \\
&= q_{21}(w)\frac{4\pi^2}{\lambda_s} \lim_{R \to \infty} \int_{R}^{2R} r^{-1} dr \\
&= q_{21}(w)\frac{4\pi^2}{\lambda_s} \lim_{R \to \infty} \ln r\Big|_{R}^{2R} = q_{21}(w)\frac{4\pi^2\ln 2}{\lambda_s}.
\end{align*}
From this we get the desired asymptotic:
$$ q_{21}(w)=\frac{\lambda_s}{4\pi^2\ln 2} \lim_{R \to \infty} \int_{R < |\lambda|< 2R} |\lambda|^{-1} h(\lambda,w) d\sigma_{\lambda}.$$
\end{proof}
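The normalization constant in the reconstruction formula can be checked independently of the analysis above. The following snippet (a sanity check of ours, not part of the proof) verifies numerically that the annulus integral of $|\lambda|^{-2}$ equals $2\pi\ln 2$ for any $R$, which combined with the prefactor $2\pi/\lambda_s$ of $D_1$ produces the constant $4\pi^2\ln 2/\lambda_s$.

```python
import math

# In polar coordinates: \int_{R<|lambda|<2R} |lambda|^{-2} dsigma_lambda
#   = 2*pi * \int_R^{2R} r^{-1} dr = 2*pi*ln 2,  independently of R.
def annulus_integral(R, n=100000):
    h = R / n   # radial step over [R, 2R]
    # after the angular integral: 2*pi * \int_R^{2R} r^{-1} dr (midpoint rule)
    return 2 * math.pi * h * sum(1.0 / (R + (i + 0.5) * h) for i in range(n))

for R in (1.0, 10.0, 1000.0):
    assert abs(annulus_integral(R) - 2 * math.pi * math.log(2)) < 1e-6
```

The $R$-independence is what makes the limit over the dyadic annuli $R<|\lambda|<2R$ well defined.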
\section{Scattering data for Dirac equation via the Dirichlet-to-Neumann map} \label{ScDN}
Our next goal is to establish a relation between the Dirichlet-to-Neumann map for equation (\ref{set27A}) and the traces of the solutions of (\ref{firbc}) on $\partial\mathcal O$. Let
$$
\mathcal{T}_q:= \left\{\phi|_{\partial \mathcal O} : \quad \phi=\left(
\begin{array}{c}
\phi_1 \\
\phi_2 \\
\end{array}
\right)
\mbox{ is a solution of } (\ref{firbc}), \quad \phi_1, \phi_2 \in H^{1}(\mathcal O)\right\}.
$$
Let $u\in H^{2}( \mathcal O\setminus\overline{\mathcal{D}})\cap H^2(\mathcal{D})$ be a solution of (\ref{set27A}) with $u|_{\partial \mathcal O}= f \in H^{3/2}(\partial \mathcal O)$. Consider $\phi=\gamma^{1/2}(\partial u, \bar{\partial} u) \in H^{1}(\mathcal O\setminus\overline{\mathcal{D}})\cap H^1(\mathcal{D})$. Then, formally
\begin{equation}\label{1112A}
\phi|_{\partial \mathcal O}= \frac{1}{2}
\left ( \!\begin{array}{cc} \ \bar{\nu} & -i\bar{\nu} \\ \nu & i\nu \end{array} \!\right )
\left ( \!\!\!\begin{array}{c} \Lambda_\gamma f \\ \partial_s f \end{array} \!\!\!\right ),
\end{equation}
where $\Lambda_\gamma$ is the co-normal Dirichlet-to-Neumann map and $\partial_s$ is the operator of the tangential derivative. Inverting, we get
\begin{equation}\label{1112C}
\left ( \!\!\!\begin{array}{c} \Lambda_\gamma f \\ \partial_s f \end{array} \!\!\!\right )=
\left ( \!\begin{array}{cc} {\nu} & \bar{\nu} \\ i\nu & -i\bar{\nu} \end{array} \!\right ) \phi|_{\partial \mathcal O}.
\end{equation}
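The inversion leading to (\ref{1112C}) can be verified directly. The snippet below (a quick numerical check of ours) confirms that the matrix in (\ref{1112C}) is the inverse of the matrix in (\ref{1112A}) for a unimodular normal vector $\nu=e^{i\theta}$; the sample angle is arbitrary.

```python
import cmath

# For |nu| = 1, the matrix B of (1112C) should invert the matrix A of (1112A).
theta = 0.7                      # arbitrary sample angle
nu = cmath.exp(1j * theta)
nub = nu.conjugate()

A = [[nub / 2, -1j * nub / 2],   # matrix acting on (Lambda_gamma f, d_s f)^t in (1112A)
     [nu / 2,   1j * nu / 2]]
B = [[nu,       nub],            # matrix in (1112C)
     [1j * nu, -1j * nub]]

for i in range(2):
    for j in range(2):
        entry = sum(B[i][k] * A[k][j] for k in range(2))
        assert abs(entry - (1.0 if i == j else 0.0)) < 1e-12   # B A = I
```

The identity $B A = I$ holds exactly because $\nu\bar{\nu}=1$, so the check passes for any angle $\theta$.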
We normalize $\partial_s^{-1}$ in such a way that
$$
\int_{\partial \mathcal O} \partial_s^{-1}f ds =0.
$$
Then (\ref{1112C}) can be rewritten as {\it a boundary relation}
\begin{equation}\label{1112B}
(I-i\Lambda_\gamma\partial_s^{-1}) (\nu \phi_1|_{\partial \mathcal O}) = (I+ i\Lambda_\gamma\partial_s^{-1}) (\bar{\nu} \phi_2|_{\partial \mathcal O}).
\end{equation}
Let us show the generalization of \cite[Thm 3.2]{knud}, where $\gamma \in C^{1+\epsilon}(\mathbb{R}^2)$, to the case of non-continuous $\gamma$.
\begin{theorem}\label{Th1112A}
$$
\mathcal{T}_q = \left\{ (h_1,h_2)\in H^{1/2}(\partial \mathcal O) \times H^{1/2}(\partial \mathcal O) : (I-i\Lambda_\gamma\partial_s^{-1}) (\nu h_1) = (I+ i\Lambda_\gamma\partial_s^{-1}) (\bar{\nu} h_2) \right\}
$$
\end{theorem}
\begin{proof}
First we show that any pair $(h_1,h_2)^t\in H^{1/2}(\partial \mathcal O) \times H^{1/2}(\partial \mathcal O)$ that satisfies the boundary relation above is in $\mathcal{T}_q$.
Consider a solution $u\in H^2(\mathcal O\setminus \overline{\mathcal{D}})\cap H^2(\mathcal{D})$ of (\ref{set27A}) with the boundary condition
$$
u|_{\partial \mathcal O} = i\partial_s^{-1}(\nu h_1 -\bar{\nu} h_2)\in H^{3/2}(\partial \mathcal O).
$$
Since $\gamma\in W^{1,\infty}(\mathcal O\setminus\overline{\mathcal{D}})\cap W^{1,\infty}(\mathcal D)$ and $\gamma$ is bounded away from zero, it follows that $\gamma^{1/2}\in W^{1,\infty}(\mathcal O\setminus\overline{\mathcal{D}})\cap W^{1,\infty}(\mathcal{D})$. Then both components of the vector $\phi=\gamma^{1/2}(\partial u, \bar{\partial} u)^t$ belong to $H^{1}( \mathcal O\setminus\overline{\mathcal{D}})\cap H^1(\mathcal D)$ and $\phi$ satisfies (\ref{firbc}). The fact that $\phi|_{\partial \mathcal O}=(h_1,h_2)^t$ follows from (\ref{1112A}) and (\ref{1112B}).
Conversely, we start with a solution $\phi \in H^1(\mathcal O\setminus \overline{\mathcal{D}})\cap H^1(\mathcal{D})$ of (\ref{firbc}) satisfying (\ref{7JunB}) on $\Gamma$. From (\ref{firbc}) and (\ref{char1bc}) the following compatibility condition holds on $\Gamma$
$$
\bar{\partial} (\gamma^{-1/2} \phi_1) = {\partial} (\gamma^{-1/2} \phi_2).
$$
The Poincaré lemma ensures the existence of a function $u$ such that
$$
\left ( \!\!\!\begin{array}{c} \phi_1 \\ \phi_2 \end{array} \!\!\!\right ) =
\gamma^{1/2}\left ( \!\!\!\begin{array}{c} \partial u \\ \bar{\partial} u\end{array} \!\!\!\right ) \quad \text{on} \quad \mathcal{O}\setminus\Gamma.
$$
It is easy to check that $u$ is a solution to (\ref{set27A}) on $\mathcal{O}\setminus\Gamma$ and belongs to $H^2(\mathcal O\setminus\overline{\mathcal{D}})\cap H^2(\mathcal{D})$. Moreover, through the Poincaré lemma and (\ref{7JunB}) it satisfies the transmission condition (\ref{TransCond}). Then (\ref{1112A})--(\ref{1112B}) prove that $h=\phi|_{\partial \mathcal O}$ satisfies the boundary relation stated in the theorem.
\end{proof}
Denote by $S_{\lambda,w} : H^{1/2}(\partial \mathcal O) \rightarrow H^{1/2}(\partial \mathcal O)$ the operator
$$
S_{\lambda,w} f (z)=\frac{1}{i\pi} \int_{\partial \mathcal O}f(\varsigma) \frac{e^{-\lambda(z-w)^2+ \lambda(\varsigma -w)^2}}{\varsigma - z} d \varsigma
$$
This integral is understood in the sense of principal value.
A direction for future work is to determine conditions under which the trace of $\phi$ on $\partial\mathcal{O}$ can be recovered. A hint is given in \cite[Th. III.3.]{knud} for $\gamma \in C^{1+\varepsilon}(\mathbb R^2)$, although there the exponentially growing solutions are of the type $e^{ikz}$.
In this sense, we state a conjecture that we intend to prove, in future work, for our method:
\begin{conjecture}
The only pair $(h_1,h_2) \in H^{1/2}(\partial \mathcal O) \times H^{1/2}(\partial \mathcal O) $ which satisfies
\begin{eqnarray}\label{1611A}
(I-S_{\lambda,w} )h_1=2e^{\lambda (z-w)^2}, \\
(I-\overline{S_{\lambda,w}})h_2=0, \\
(I-i\Lambda_\gamma\partial_s^{-1}) (\nu h_1) = (I+ i\Lambda_\gamma\partial_s^{-1}) (\bar{\nu} h_2)
\end{eqnarray}
is $(\left.\phi_1\right|_{\partial\mathcal{O}},\left.\phi_2\right|_{\partial\mathcal{O}})$, where $\phi_1,\phi_2$ are the solutions of the Dirac equation (\ref{firbc}) satisfying the asymptotics (\ref{8JunA}).
\end{conjecture}
\section{Appendix A.}
Here we present the proof of Lemma \ref{2310F}, which relies on a Laplace-transform analogue of the Hausdorff--Young inequality. This lemma stems from conversations and discussions with S. Sadov \cite{Sadov}, to whom we extend our deep thanks.
\section{Laplace Transform analogue of the Hausdorff-Young inequality}
We need to recall some statements on the Laplace Transform.
The following results hold (see \cite{Sadov}): consider the map
$$
L_\gamma f(s)= \int_0^\infty e^{-z(s)t}f(t)dt
$$
where $s>0$ is the arclength parameter of a contour $\gamma := \{z(s): s>0,\, \textnormal{Re}(z(s))>0 \}$.
Theorem 7 from \cite{Sadov} claims that $L_\gamma$ is a bounded operator from $L^q(\mathbb R^+)$ to $L^p(\gamma)$, where $1 \leq q \leq 2$ and $1/p+1/q=1$.
Moreover, the norm of this map is bounded uniformly in the class of convex contours.
Now we consider only contours such that $|(\textnormal{Re}\, z(s))'|<1/2$ for $s\gg 1$. This means that the $L^p$ spaces, $p>1$, in the variable $s>0$ and in the variable $\,\textnormal{Im}\, z(s)$ are equivalent. We now prove that the Hausdorff--Young inequality remains valid for the following map on the plane:
$$
L f(\lambda_1,\lambda_2)= \int_0^\infty \int_1^\infty e^{-i\lambda_1 x} e^{-i\lambda_2 y -\ln|\lambda_2| y} f(x,y)d x d y, \quad \lambda_1,\lambda_2>0,
$$
namely we prove that for some fixed domain $D$ and constant $C=C(D)>0,$ we have
\begin{equation}\label{0607F}
\|Lf\|_{L^p_{\lambda_1,\lambda_2}} \leq C \|f\|_{L^q_{x,y}}, \quad \mbox{Supp} f \subset D.
\end{equation}
{\bf Proof}
Consider the function $A(y,\lambda_1)$:
\begin{equation}\label{0607A}
A(y,\lambda_1)= \int_0^\infty e^{-i\lambda_1 x} f(x,y)d x, \quad y>1
\end{equation}
and note that by Hausdorff-Young inequality we get
\begin{equation}\label{0607E}
\|A(y,\cdot)\|_{L^p_{\lambda_1}} = \left ( \int |A(y,\lambda_1)|^p d\lambda_1 \right )^{1/p} \leq \left ( \int |f(x,y)|^qdx \right )^{1/q}.
\end{equation}
For the sake of simplicity we omit all positive constants here and in further inequalities.
We claim that $A(\cdot,\lambda_1) \in L^q_y$ and we prove this fact later. Accepting this claim and using the above mentioned theorem from \cite{Sadov} we get
\begin{equation}\label{0607B}
\int \left | \int e^{-i\lambda_2 y -\ln|\lambda_2| y} A(y,\lambda_1) dy \right |^p d\lambda_2 \leq (\|A(\cdot,\lambda_1)\|_{L^q_y})^p.
\end{equation}
Further we use the notation $\|\cdot \|$ for $\|\cdot \|_{L^p_{\lambda_1,\lambda_2}}$. Now we are ready to estimate $\|Lf\|$:
\begin{equation}\label{0607C}
\|Lf\|^p= \int \int \left | \int e^{-i\lambda_2 y -\ln|\lambda_2| y} A(y,\lambda_1) dy \right |^p d\lambda_1 d\lambda_2 \leq
\int (\|A(\cdot,\lambda_1)\|_{L^q_y})^p d \lambda_1.
\end{equation}
First we apply the integral form of the Minkowski inequality, and then (\ref{0607E}). Hence, we get:
\begin{equation}\label{0607D}
\left ( \int \|A(\cdot,\lambda_1)\|^p_{L^q_y} d \lambda_1 \right )^{q/p}= \left ( \int \left | \int |A(y,\lambda_1)|^q dy \right |^{p/q} d\lambda_1 \right )^{q/p} \leq
\end{equation}
$$
\int \left ( \int |A(y,\lambda_1)|^p d\lambda_1 \right )^{q/p} dy \leq \int \left ( \int |f(x,y)|^qdx \right ) dy.
$$
This proves (\ref{0607F}). Now let us show that $A(\cdot,\lambda_1) \in L^q_y$. From Minkowski inequality we get
$$
\|A(\cdot,\lambda_1)\|_{L^q_y} = \left (\int \left | \int_0^\infty e^{-i\lambda_1 x} f(x,y)dx \right |^q dy \right)^{1/q} \leq
\int \left ( \int |f(x,y)|^q dy \right )^{1/q} dx.
$$
Since $f$ has finite support, then the function $\int |f(x,y)|^q dy$ has finite support too. Let us denote by $C_1$ the length of its support. Therefore
$$
\int \left ( \int |f(x,y)|^q dy \right )^{1/q} dx \leq \int \left ( \int |f(x,y)|^q dy \right ) dx + C_1.
$$
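As a small numerical illustration (ours, separate from the proof): for the simplest contour $z(s)=s$ the map $L_\gamma$ is the classical Laplace transform, which is bounded on $L^2(0,\infty)$ with operator norm $\sqrt{\pi}$, the endpoint $p=q=2$ of the estimate quoted above. The snippet checks the ratio $\|Lf\|_{L^2}/\|f\|_{L^2}$ for a few sample profiles (our own choices) against this bound.

```python
import math

def laplace(f, s, T=40.0, n=1000):
    """Midpoint-rule approximation of Lf(s) = \\int_0^T e^{-st} f(t) dt."""
    h = T / n
    return h * sum(math.exp(-s * (i + 0.5) * h) * f((i + 0.5) * h) for i in range(n))

def l2_norm(g, a, b, n=800):
    """Midpoint-rule L^2 norm of g over [a, b]."""
    h = (b - a) / n
    return math.sqrt(h * sum(g(a + (i + 0.5) * h) ** 2 for i in range(n)))

for f in (lambda t: 1.0 if t < 1.0 else 0.0,   # indicator of [0,1]
          lambda t: math.exp(-t),
          lambda t: t * math.exp(-t)):
    # truncating the s-range only decreases the numerator, so the bound is safe
    ratio = l2_norm(lambda s: laplace(f, s), 1e-3, 20.0) / l2_norm(f, 0.0, 40.0)
    assert ratio < math.sqrt(math.pi)          # classical operator-norm bound
```

The ratios come out well below $\sqrt{\pi}\approx 1.77$, consistent with the boundedness asserted in the theorem; no claim is made here about the uniform constant over general convex contours.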
\section{Proof of Lemma \ref{2310F}}
The following lemma represents a generalization of Lemma 3.2 from \cite{ltv}.
Consider $\lambda_0 \in \mathbb C$, denote by
$\rho(z)= -i\,\textnormal{Im}[\lambda(z-w)^2]/2+\ln |\lambda| \lambda_0(z-w)^2$, and let
$A_0=\sup_{z \in \mathcal O}\, \textnormal{Re}[ \lambda_0(z-w)^2].$ For convenience we recall Lemma \ref{2310F}:\\
\textbf{Lemma}
\textit{Let $z_1,w \in \mathbb C$, $p > 2$ and $\varphi \in L^\infty_{\rm{comp}}$. Then
$$
\left \|\frac{1}{|\lambda|^{A_0}}\int_{\mathbb C} \varphi(z)\frac{e^{\rho(z)} }{z-z_1} \, d{\sigma_{z}} \right \|_{L^p_\lambda(\mathbb C)}
\leq C\frac{\|\varphi\|_{L^\infty}}{|z_1-w|^{1-\delta}},
$$
where the constant $C$ depends only on the support of $\varphi$ and on $\delta =\delta(p)>0$.}
\begin{proof}
Denote by $F=F(\lambda,w,z_1)$ the integral on the left-hand side of the inequality above. In order to have non-positiveness of the real part of the phase we make a change of variables $u=(z-w)^2$ in $F$ and take into account that $d{\sigma_{u}} =4|z-w|^2 d{\sigma_{z}}$. Then
\begin{equation}\label{250918A}
F=\frac{1}{4}\sum_{\pm} \int_{\mathbb C} \varphi(w\pm\sqrt{u})\frac{e^{i\,\textnormal{Im}(\lambda u)/2+\lambda_0 \ln|\lambda| u }}{|u|(\pm\sqrt{u}-(z_1-w))} \, d{\sigma_{u}}.
\end{equation}
Now, we consider a new change of variables $\widehat{u}=u-u^0$, where
$$
u^0=\operatorname{argmax}_{w\pm\sqrt{u} \in \operatorname{supp} \varphi}\,\textnormal{Re}(\lambda_0 u)
$$
and apply the Hausdorff--Young inequality for the Laplace transform on a contour (\ref{0607F}). The result of Lemma \ref{2310F} then follows immediately from \cite[Lemma 3.1]{ltv}, which we recall here for the reader's convenience:
{\it \cite[Lemma 3.1]{ltv}
Let $1\leq p<2$. Then the following estimate is valid for an arbitrary $0 \neq a \in \mathbb C$ and some constants $C=C(p,R)$ and $\delta=\delta(p)>0$:
$$
\left \| \frac{1}{u(\sqrt{u}-a)} \right \|_{L^p(u\in \mathbb C:|u|<R)} \leq C(1+|a|^{-1+\delta}).
$$
}
\end{proof}
\section{Introduction}
\label{sec:introduction}
The precise data which led to our present era of observation-driven
cosmology have largely been taken from measurements of the
temperature and polarization anisotropies
of the Cosmic Microwave Background \cite{Dunkley:2008ie,Komatsu:2008hk}.
These anisotropies were apparently sourced by large-scale fluctuations
of primordial origin, which passed inside the causal horizon while the
universe was dominated by a hot plasma of relativistic matter
and radiation. The existence of these
long wavelength fluctuations can be inferred today
by studying the imprint of the plasma oscillations which they seeded
and which were left intact when radiation subsequently decoupled from
matter.
The origin of these superhorizon fluctuations is uncertain.
With present-day observations, their properties are indistinguishable
from a precisely gaussian random field.
It is possible they were synthesized during an early epoch of inflation,
in which case the detectability of non-gaussian features is known to be
a precise discriminant between the simplest model, characterized by a
single degree of freedom \cite{Maldacena:2002vr}
and more complicated models, characterized by many degrees of
freedom \cite{Lyth:2005fi}.
(For example, see
Refs.~\cite{Lyth:2005qk,Alabidi:2006wa,
Sasaki:2008uc,Naruko:2008sq,Byrnes:2008wi,Byrnes:2008zy}.)
In addition to its dependence on the dynamics of the inflationary phase,
the non-gaussian signal may
be sensitive to the composition of the universe
before and after inflation.
Subsequent to the end of the inflationary era, its evolution has been
explored by several authors
\cite{Bartolo:2003gh,Bartolo:2006cu,
Bartolo:2006fj,Pitrou:2008ak,Bartolo:2008sg,Boubekeur:2009uk}.
On the other hand, if inflation persisted for some number of e-folds
prior to observable scales leaving the horizon then the impact of
any pre-inflationary physics can be expected to be quite negligible
\cite{Wald:1983ky}.
If inflation did not last long enough to erase all memory of
earlier times, what information could be expected to survive in
the statistics of large-scale fluctuations?
Chen {\etal} \cite{Chen:2006xjb}
studied the impact of excitations above the Minkowski
vacuum%
\footnote{When promoted to de Sitter space, the choice of boundary
conditions which reproduces the Minkowski vacuum on small scales
is usually known as the Bunch--Davies state
\cite{Bunch:1978yq}.}
and found them to have a distinctive momentum dependence. Moreover,
under certain assumptions the traces of these excitations could be made
large. More detailed investigations were subsequently carried out
in Refs.~\cite{Holman:2007na,Meerburg:2009ys}.
Statistical features of this sort could probe
an enhancement of the correlations predicted by conventional inflationary
models due to perturbative effects,
but would necessarily have limited reach to detect
a coherent modification of the background spatial geometry.
In the conventional picture, where inflation is taken to have lasted
for many e-folds prior to horizon exit of observable scales,
the spatial geometry is taken to be flat to high precision.
If this assumption is relaxed, however, then the background may be
different. To study such a scenario one requires a new set of
propagators and vertices, which are the elements out of which
perturbation theory is constructed. Predictions obtained in this way could
quite easily be rather different from conventional results.
In the case of either perturbative or non-perturbative enhancements,
the effect diminishes as spatial slices inflate and
isotropize, unless the modification is associated with
new physics at an energy somewhat higher than the
inflationary scale.%
\footnote{If there is a sensitivity to new ultra-violet physics, then
quantum excitations can have correlations which are not
well-described by the standard formulation. Such modifications are
not inflated away, but instead become imprinted in each
new mode which crosses the horizon
\cite{Chen:2006xjb,Holman:2007na,Meerburg:2009ys}.}
According to present ideas the pre-inflationary phase could have been
characterized by strong spatial disorder, of which the simplest manifestation
would be non-vanishing spatial curvature. Assuming
dark energy to be non-dynamical, observation presently
requires $-0.0178 < \Omega_k < 0.0066$ at $95\%$ confidence,
where $\Omega_k$ is evaluated today and the constraints are obtained
by combining the WMAP 5-year dataset with baryon acoustic oscillation
and supernova data \cite{Komatsu:2008hk}.
As observations improve the allowed range of $\Omega_k$ can be
expected to diminish, but if it remains consistent with zero
it will presumably be impossible to rule out a small positive
or negative value.
One might be led to seek more information in the statistical properties
of the large-scale perturbations, but
it is already known that curvature is hard to detect in the power spectrum
alone \cite{Halliwell:1984eu}.
For that purpose it seems worthwhile to explore whether the situation is
better for higher-order correlations.
Thus, both to decide what can be learned about the
pre-inflationary initial conditions,
and to determine whether curvature at the level
$|\Omega_k| \sim 10^{-2}$
can be probed by non-gaussian statistics,
one would like to obtain the inflationary bispectrum computed with
non-flat spatial slices.
In this paper we focus on the case of positive spatial curvature,
corresponding to $S^3$ spatial slices, since in that case the observational
limit is more favourable, as discussed above.
How large an effect should we expect?
The longest observable wavelengths in the CMB contribute to its lowest
multipoles, although one must be wary in extracting data at low multipoles
because of scatter due to cosmic variance. Nevertheless, as a simple estimate,
a calculation of the comoving distance to last scattering, corresponding to
the quadrupole (the $\ell=2$ mode; cf. Ref.~\cite{Freivogel:2005vv}), gives
the diameter of the last scattering sphere as
\begin{equation}
\fl
d_{\ell=2}=2\int_{t_{\mathrm{dc}}}^{t_{0}}
\frac{d t}{e^{\rho}} = 2
\int_{(1+z_{\mathrm{dc}})^{-1}}^{1}
\frac{d x}{x \sqrt{-1 + x^2 (\Omega_{m}
x^{-3}+\Omega_{\Lambda}+\Omega_{r} x^{-4})/|\Omega_k|}} ,
\end{equation}
where we have specialized to $S^3$ slices and, as above,
the $\Omega_{i}$ are evaluated today (at
time $t_0$).
The decoupling time is denoted $t_{\mathrm{dc}}$ and
$e^{\rho}$ is the scale factor.
For the purposes of making an estimate, let us assume that the present-day
spatial curvature takes the WMAP5+BAO+SN central value
(at $68\%$ confidence)
$\Omega_k \approx -0.0050$ \cite{Komatsu:2008hk}.
The remaining cosmological parameters can be chosen equal to the
best-fit concordance model where
$\Omega_\Lambda = 0.721$,
$\Omega_r = 8 \times 10^{-5}$
and the CMB decouples at a redshift $z_{\mathrm{dc}} = 1100$.
This gives $d_{\ell=2} \approx 0.46$. On the other hand, in these units,
the radius of curvature
of the spatial slices is set by the scale $r = 1$.
It follows that the lowest CMB multipoles receive contributions from wavelengths
which are a significant fraction of the radius of curvature.
Where this is the case, we do not expect
to be able to neglect spatial curvature in calculating
microwave background observables.
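This estimate is easy to reproduce numerically. The sketch below (assuming the closure relation $\Omega_m = 1 - \Omega_\Lambda - \Omega_r - \Omega_k$ to fix $\Omega_m$, which is implicit in the text) evaluates the integral with a composite Simpson rule:

```python
import math

# Parameters quoted in the text (WMAP5+BAO+SN central values).
Omega_k = -0.0050
Omega_L = 0.721
Omega_r = 8e-5
Omega_m = 1.0 - Omega_L - Omega_r - Omega_k  # closure relation (assumption)
z_dc = 1100.0

def integrand(x):
    # Integrand of the comoving-diameter formula, in curvature units r = 1.
    return 1.0 / (x * math.sqrt(-1.0 + x * x *
        (Omega_m / x**3 + Omega_L + Omega_r / x**4) / abs(Omega_k)))

def simpson(f, a, b, n):
    # Composite Simpson rule with n (even) subintervals.
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3.0

d = 2.0 * simpson(integrand, 1.0 / (1.0 + z_dc), 1.0, 200000)
print(d)  # close to the quoted d ~ 0.46
```

The result should reproduce the quoted $d_{\ell=2} \approx 0.46$ to two decimal places.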
In this paper we estimate the observational signatures
of this scenario. We study the correction to the power spectrum
from single-field, slow-roll inflation
assuming that inflation began while $\Omega_k$ was non-negligible,
and then consider the implications for the
three-point function of the curvature perturbation.
Our result agrees with the known flat-slicing result
\cite{Maldacena:2002vr} in the small-scale limit $k \gg 1$
(where $k$ is the wavenumber of
the mode under consideration), in agreement with
the na\"{\i}ve expectation that curvature should play a less important
role as inflation proceeds. On the other hand for small $k$
the fluctuations can feel the curvature of the $S^3$ slices, leading
to terms which are larger than the flat $\mathbb{R}^3$
prediction by a factor
of $1/\epsilon$, where $\epsilon \equiv \dot\phi^2 / 2 \dot\rho^2$
is the slow-roll parameter which measures the deviation of the universe
from exact de Sitter space.
This enhancement is driven by quantum interference among the
fluctuations at horizon-crossing, and is maximized close to the
equilateral configuration where all momenta participating in the
three-point function have an approximately equal magnitude.%
\footnote{Therefore,
we do not expect that this enhancement by $1/\epsilon$
would provide an explanation for the observation of large
non-gaussianity reported by Yadav \& Wandelt \cite{Yadav:2007yy}.}
The layout of this paper is as follows. In \S\ref{sec:action} we
compute the action for a scalar field coupled to Einstein gravity
in de Sitter space with $S^3$ spatial hypersurfaces. After including the
effect of small perturbations in the metric, this controls how
small fluctuations interact and become non-linear as
the expansion of the universe draws them across the causal horizon.
In order to obtain a description of the non-linearities which are
observable in the CMB bispectrum,
it is sufficient to expand the action to third order in the perturbations.
In \S\ref{sec:power} we study corrections to the two-point function, and
show that curvature plays essentially
no role in determining its structure at tree level.
In \S\ref{sec:third_order_action} we give the third order action and
in \S\ref{sec:three-point} we calculate the three point function.
We then discuss the non-gaussian parameter
$f_{\mathrm{NL}}$ imprinted in this scenario
and show that we have the expected limit for modes of
small wavelength.
Finally, we give our conclusions in \S\ref{sec:conclusions}.
Throughout this paper,
we work in units where $\hbar = c = 1$ and the reduced Planck
mass is normalized to unity, giving $M_{\mathrm{P}}^{-2} \equiv 8 \pi G = 1$,
where $G$ is Newton's gravitational constant.
The unperturbed background metric is
\begin{equation}
d s^2 = - d t^2 + e^{2 \rho} \gamma_{ij} \, d x^i \, d x^j ,
\end{equation}
and $\gamma_{ij}$ is the metric on the unit three-sphere.
This choice means that the constant curvature $k$ is normalized to unity,
and to avoid clutter we will omit writing $k$ explicitly.
\section{Computation of the action}
\label{sec:action}
To study the evolution of small fluctuations around the time of
horizon exit, one must couple gravity to
whichever fields are responsible for driving inflation.
Since the energy density in these fields
dominates the energy budget of the universe by assumption, small
fluctuations necessarily induce perturbations in the background metric.
The non-linearity of gravity endows these perturbations with
self-interactions which cannot be neglected.
To simplify the subsequent calculation, it is very convenient
to take the perturbed metric in the Arnowitt--Deser--Misner or
so-called \emph{ADM} form \cite{Arnowitt:1960es}, where
\begin{equation}
d s^{2}=-N^{2}d t^2 + h_{i j} (d x^{i}+N^{i} d t) (d x^{j}+N^{j} d t) .
\end{equation}
In this equation $N$ and $N^i$ are the lapse function and shift-vector,
whose function is to guarantee reparametrization invariance of
the metric. They are
not propagating degrees of freedom but are instead removed by
constraint equations coming from the Einstein action;
and $h_{ij}$ is the perturbed metric on spatial slices,
\begin{equation}
h_{ij} = e^{2(\rho + \zeta)} ( \gamma_{ij} + \Gamma_{ij} ) .
\label{eq:gauge-choice}
\end{equation}
The quantity $\zeta$ is known as the \emph{curvature perturbation}.
If gravitational waves are present, represented here by a
transverse and traceless first-order tensor $\Gamma_{ij}$,
then they should be included
as corrections to the unit three-sphere metric $\gamma_{ij}$.
We have implicitly adopted a spatial gauge in order to write
Eq.~\eref{eq:gauge-choice}.
In the present paper we focus solely on the properties of scalar
degrees of freedom, which can be summarized in terms of the
curvature perturbation, and set $\Gamma_{ij} = 0$.
This is adequate for the purpose of computing the bispectrum of $\zeta$.
In the absence of perturbations,
the lapse function and shift-vector
reduce to unity and zero, respectively,
and otherwise can be expressed algebraically in terms of the propagating
degrees of freedom.
The action is taken to be the standard Einstein--Hilbert term,
together with a real scalar field $\phi$, supplemented with
the York--Gibbons--Hawking boundary term,
\begin{eqnarray}
\fl\nonumber
S = \frac{1}{2} \int d t \, d^3 x \; \sqrt{h} \Bigg\{
N \left( R^{(3)} - 2 V - h^{ij} \partial_i \phi \partial_j \phi
\right) \\ \mbox{} + \frac{1}{N} \left( E_{ij} E^{ij} - E^2 +
[ \dot{\phi} - N^i \partial_i \phi ]^2
\right) \Bigg\} ,
\label{eq:action}
\end{eqnarray}
where $E_{ij}$ is a rescaled version of the extrinsic curvature,
defined by $K_{ij} \equiv E_{ij}/N$ and given explicitly by
\begin{equation}
E_{i j}
\equiv
\frac{1}{2} (\dot{h}_{i j}-\nabla_{i} N_{j}-\nabla_{j} N_{i}) .
\end{equation}
The scalar $E$ is its trace, obeying $E \equiv E^i_i$, where spatial
indices are raised and lowered with the metric $h_{ij}$.
The covariant derivative compatible with
this metric
is denoted $\nabla_{i}$ and an overdot denotes a derivative with
respect to cosmic time, so that $\dot{x} \equiv d x / d t$.
There are two scalar degrees of freedom in this system: the
fluctuations $\zeta$ and $\delta \phi$.
However, only one of these is a propagating mode; the other is a
gauge degree of freedom and can be removed by a coordinate redefinition.
Although a wide variety of gauge choices exist \cite{Kodama:1985bj}, two
are of special interest. The first is \textit{comoving gauge}, where
the scalar perturbation is set to zero and only the curvature
perturbation propagates,
\begin{equation}
\label{comoving_gauge}
\delta \phi = 0, \quad \mbox{and} \quad
h_{i j}=e^{2\rho +2\zeta}\gamma_{i j} .
\end{equation}
Later on, when a mode has passed outside the horizon, $\gamma_{ij}$ can be
effectively replaced by a flat metric on $\mathbb{R}^3$, since the
equations of motion are negligibly affected by
the curvature of the $S^3$ slices.
In this limit
the scale of the metric is arbitrary and the
equations of motion possess a symmetry under
dilatations of the scale factor, where $\rho \mapsto \rho + \alpha$
for any constant $\alpha$. It follows that on very large scales
$\zeta$ must be effectively conserved,
because it becomes indistinguishable from a shift in the
background value of $\rho$. This conservation law has been shown to
follow---independently of the theory of gravity in
question \cite{Lyth:2004gb}---from energy conservation and the absence of
non-adiabatic pressure.
It is therefore convenient to express the answer in
terms of $\zeta$ when outside the horizon.
Unfortunately, the calculation of correlation functions directly in
comoving gauge is technically complicated. It is more convenient to work
in the \textit{uniform curvature gauge}, which is defined by
\begin{equation}
\label{deltaphi}
\delta\phi = \varphi, \quad \mbox{and} \quad
h_{i j}=e^{2 \rho}\gamma_{i j}.
\end{equation}
In this gauge it is simple to compute the correlation functions of
$\varphi$, but more work is needed to
make a gauge transformation into the
observationally-relevant variable $\zeta$. It is this approach which we will
follow in the remainder of this paper.
\subsection{Constraint equations}
The action~\eref{eq:action} does not depend on time derivatives of $N$
or $N^i$, and therefore its variation with respect to these fields does
not produce evolution equations but only constraints.
We need to solve these constraint equations to first order%
\footnote{As noted in
Ref.~\cite{Maldacena:2002vr} and given in more detail in
Ref.~\cite{Chen:2006nt}, the higher order terms in $N$
and $N^{i}$ occur in the action multiplying the
Hamiltonian and momentum constraints respectively,
and hence they can be neglected.}.
The Hamiltonian constraint is
\begin{equation}
R^{(3)} - 2V - \frac{1}{N^2} (E_{i j} E^{i j} -E^{2})
- \frac{1}{N^2} (\dot{\phi}-N^{i} \phi_{,i})^{2}
- h^{i j} \phi_{,i} \phi_{,j} = 0 ,
\end{equation}
where we have adopted the convention that a partial derivative with
respect to $x^i$ is denoted with a comma, so that $\phi_{,j} \equiv
\partial_j \phi$.
The momentum constraint is
\begin{equation}
\nabla_{i} \left\{ \frac{1}{N} (E_{j}^{i}-\delta_{j}^{i} E) \right\}
- \frac{1}{N} (\dot{\phi} -N^{i} \phi_{,i}) \phi_{,j} = 0 .
\end{equation}
In subsequent expressions, to simplify the notation, we will use
$\phi=\phi (t)$ to denote the background value of the field
and reserve $\varphi$ for its perturbation.
One can now
specialize to the uniform curvature gauge.
In this gauge, the solution to the constraints takes the
form $N \equiv 1 + \delta N$, where $\delta N$ obeys
\begin{equation}
\delta N \equiv \frac{1}{\dot{\rho}}
\Big( \frac{\dot{\phi}}{2} \varphi + \chi \Big) ;
\end{equation}
and $N_{i} \equiv \chi_{,i}$, where $\chi$ is an auxiliary quantity
determined by
\begin{equation}
\label{chi}
\chi \equiv \epsilon (\triangle+3-\epsilon)^{-1} \Big[\frac{d}{d
t}\Big(-\frac{\dot{\rho}}{\dot{\phi}}\varphi
\Big)+\frac{e^{-2\rho}}{\dot{\phi}}\varphi\Big]
\end{equation}
and $\triangle$ is the laplacian on the unit three-sphere.
We note that both $\delta N$ and $N^i$ are suppressed by the square
root of a slow-roll parameter,
\begin{equation}
\label{order}
\delta N = \Or (\sqrt{\epsilon}) \quad \mbox{and} \quad
N^{i} = \Or (\sqrt{\epsilon}) ,
\end{equation}
where $\epsilon \equiv \dot{\phi}^2/2 \dot{\rho}^2$ is the
slow-roll parameter defined in \S\ref{sec:introduction}.
This scaling will be extremely important
when we come to extract the terms of lowest order in the
slow-roll approximation from the action.
\subsection{The second order action}
In the uniform curvature gauge, Eq.~\eref{deltaphi},
the second order term in the action is
\begin{eqnarray}
\fl\nonumber
S_{2}=\frac{1}{2}\int d t \, d^{3} x \; e^{3\rho}\sqrt{\gamma} \;
\Big[-V_{0}''\varphi^{2}-\frac{\dot{\phi}}{\dot{\rho}} V_{0}'
\varphi^{2}-\frac{2}{\dot{\rho}} V_{0}' \varphi \chi +
\dot{\varphi}^{2}-e^{-2\rho} \gamma^{i j}\varphi_{,i}\varphi_{,j}\\
\mbox{}-2\chi
\triangle \chi - \frac{\dot{\phi}^{2}}{\dot{\rho}}\varphi
\dot{\varphi}-\frac{2\dot{\phi}}{\dot{\rho}}\chi\dot{\varphi}+\Big(
\frac{\dot{\phi}^{2}}{\dot{\rho}^{2}}-6 \Big)
\Big(\frac{\dot{\phi}^{2}}{4}\varphi^{2}+\dot{\phi}\chi\varphi+\chi^{2}
\Big)\Big]
\end{eqnarray}
where $\chi$ is given by Eq.~\eref{chi}
and $\triangle$ is again the laplacian on the unit three-sphere.
Since $V_{0}'' = -3 e^{-2\rho}+\Or(\epsilon)$ and Eq.~\eref{order} holds,
we have, at lowest order in slow roll,
\begin{equation}
S_{2}=\frac{1}{2}\int d t \, d^{3} x \; e^{3\rho}\sqrt{\gamma} \;
[\dot{\varphi}^{2}+3 e^{-2\rho}\varphi^{2}-e^{-2\rho}\gamma^{ij}
\varphi_{,i}\varphi_{,j}] .
\end{equation}
The field equation for $\varphi$ which follows from this action is
\begin{equation}
\label{initialeom}
\frac{d}{d t} \Big( e^{3 \rho}\frac{d}{d t}\varphi \Big) -
3 e^{\rho}\varphi - e^{\rho}\triangle \varphi = 0.
\end{equation}
As discussed below Eq.~\eref{comoving_gauge},
the terms depending on curvature scale out of Eq.~\eref{initialeom}
in the limit $e^{\rho} \rightarrow \infty$, after which
it admits a time-independent solution.
\section{Corrections to the power spectrum from curvature}
\label{sec:power}
\subsection{Scalar mode functions on the three-sphere}
We wish to solve Eq.~\eref{initialeom} at lowest order in the slow-roll
approximation. For this purpose it is first necessary
to obtain an expression for $e^{\rho}$ at
lowest order. One does this by Taylor expanding the scale factor about a particular time and collecting the terms of lowest order.
In principle, the Taylor expansion can be performed at any point.
However, because
our final answer will involve an integral which receives its
largest contribution from around the time of horizon crossing,
we can achieve the greatest accuracy by expanding around the
time of horizon exit. This is precisely analogous to the expansion of
$H$ and other background quantities around the time of horizon exit in
the standard calculation on flat spatial slices.
The background equations for the scale factor
consist of the Friedmann constraint,
\begin{equation}
3 \dot{\rho}^{2} =
\frac{1}{2} \dot{\phi}^{2}+V_{0}-3 e^{-2\rho} ,
\label{eq:friedmann}
\end{equation}
together with the Raychaudhuri equation
\begin{equation}
\ddot{\rho}=
- \frac{1}{2}\dot{\phi}^{2}+e^{-2\rho} .
\end{equation}
In addition,
the scalar field evolves according to the conventional Klein--Gordon
equation
\begin{equation}
\ddot{\phi}+3 \dot{\rho}\dot{\phi}+V_{0}' = 0 ,
\end{equation}
which is not corrected in the presence of curvature. At lowest order in slow roll we can write the $n$th order
time derivative of the scale factor in terms of alternating expressions
for odd and even $n$,
\begin{eqnarray}
\left.\left( \frac{d}{d t} \right)^{2 n}e^{\rho}
\right|_{t=t_{*}}=\,e^{\rho_{*}} (\dot{\rho}_{*}^{2}+e^{-2
\rho_{*}})^{n} + \Or(\epsilon) ; \\
\left.\left( \frac{d}{d t} \right)^{2 n+1}e^{\rho}
\right|_{t=t_{*}}=\,e^{\rho_{*}} \dot{\rho}_{*} (\dot{\rho}_{*}^{2}+
e^{-2 \rho_{*}})^{n}+\Or(\epsilon) ,
\end{eqnarray}
where an asterisk $\ast$ denotes evaluation at the time of horizon
crossing for the corresponding wavenumber $k$.
It follows that after expansion around the time
$t = t_\ast$, the scale factor can be
expressed at lowest order in the slow-roll approximation by
\begin{equation}
\fl
e^{\rho} \simeq e^{\rho_{*}}\cosh [(t-t_{*})\sqrt{\dot{\rho}_{*}^{2}+
e^{-2 \rho_{*}}}]
+ \frac{\dot{\rho}_{*} e^{\rho_{*}}}
{\sqrt{\dot{\rho}_{*}^{2}+e^{-2 \rho_{*}}}}
\sinh [(t-t_{*})\sqrt{\dot{\rho}_{*}^{2}+e^{-2 \rho_{*}}}] .
\label{eq:approx-scale}
\end{equation}
We can see that this is the scale factor for a de Sitter spacetime with
$\rho$ and $\dot{\rho}$ matched to our slowly rolling background
at $t=t_{*}$. In fact, it is usually more convenient to work in terms
of a conformal time variable, $\eta$, defined by
$\eta = \int_{t_{\mathrm{throat}}}^t d t' / e^\rho(t')$,
where $t_{\mathrm{throat}}$ corresponds to the throat of the de Sitter
hyperboloid.
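Equation~\eref{eq:approx-scale} can be checked against a direct integration of the exact background equations. The sketch below uses an illustrative potential $V_0 = \frac{1}{2} m^2 \phi^2$ with hand-picked mass and initial data (none of which are taken from the text); it verifies that Raychaudhuri plus Klein--Gordon evolution preserves the Friedmann constraint, and that the matched de Sitter scale factor tracks the numerical one near $t = t_*$:

```python
import math

m = 0.01                        # illustrative mass for V = m^2 phi^2 / 2 (assumption)
V = lambda p: 0.5 * m**2 * p**2
dV = lambda p: m**2 * p

def rhs(y):
    # y = (rho, rho_dot, phi, phi_dot); Raychaudhuri + Klein-Gordon.
    rho, rd, p, pd = y
    return (rd, -0.5 * pd**2 + math.exp(-2 * rho), pd, -3 * rd * pd - dV(p))

def rk4(y, h):
    k1 = rhs(y)
    k2 = rhs(tuple(a + 0.5 * h * b for a, b in zip(y, k1)))
    k3 = rhs(tuple(a + 0.5 * h * b for a, b in zip(y, k2)))
    k4 = rhs(tuple(a + h * b for a, b in zip(y, k3)))
    return tuple(a + h * (b + 2 * c + 2 * d + e) / 6
                 for a, b, c, d, e in zip(y, k1, k2, k3, k4))

# Initial data at t_*: Friedmann fixes rho_dot, slow roll fixes phi_dot.
rho0, phi0 = 3.0, 15.0
rd0 = math.sqrt((V(phi0) - 3 * math.exp(-2 * rho0)) / 3.0)
pd0 = -dV(phi0) / (3 * rd0)
rd0 = math.sqrt((0.5 * pd0**2 + V(phi0) - 3 * math.exp(-2 * rho0)) / 3.0)
y = (rho0, rd0, phi0, pd0)

A = math.sqrt(rd0**2 + math.exp(-2 * rho0))   # sqrt(rho-dot_*^2 + e^{-2 rho_*})
viol, drift = 0.0, 0.0
t, h = 0.0, 0.01
while t < 10.0:
    y = rk4(y, h)
    t += h
    rho, rd, p, pd = y
    # Relative violation of the Friedmann constraint.
    viol = max(viol, abs(3 * rd**2 - (0.5 * pd**2 + V(p)
               - 3 * math.exp(-2 * rho))) / (3 * rd**2))
    # Deviation from the matched de Sitter scale factor, Eq. (approx-scale).
    approx = math.exp(rho0) * (math.cosh(A * t) + (rd0 / A) * math.sinh(A * t))
    drift = max(drift, abs(math.exp(rho) - approx) / approx)
print(viol, drift)  # constraint preserved; O(epsilon) drift over this window
```

The constraint violation is pure integration error, while the drift away from the de Sitter expression accumulates at the slow-roll rate, as expected.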
In terms of the $\eta$ variable,
Eq.~\eref{eq:approx-scale} can be written
\begin{equation}
\label{conf_sc_fact}
e^{\rho}=
\frac{1}{\dot{\rho}_{*} \lambda \cos \eta}+\Or(\epsilon) .
\end{equation}
It follows that
in the neighbourhood of $\eta = \eta_\ast$ the Hubble rate obeys
\begin{equation}
\dot\rho = \dot\rho_* \lambda \sin \eta+\Or(\epsilon),
\end{equation}
where $\lambda\equiv\sqrt{1+e^{-2 \rho_{*}} / \dot{\rho}_{*}^{2}}$
and $\eta \in (-\pi / 2, \pi / 2 ) $ for $t\in \mathbb{R}$.
The relation between conformal and physical time is
\begin{equation}
\tan \frac{\eta}{2} = \tanh \frac{1}{2} \Big[(t-t_{*})\dot{\rho}_{*}
\lambda + \tanh^{-1} \Big(\frac{1}{\lambda}\Big)\Big] .
\end{equation}
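The internal consistency of this map can be verified numerically: the relation between $\eta$ and $t$ should satisfy $d\eta/dt = e^{-\rho}$ with $e^{\rho}$ taken from Eq.~\eref{conf_sc_fact}, and should reproduce the cosh/sinh form of Eq.~\eref{eq:approx-scale}. A minimal sketch, with arbitrary illustrative values $\dot\rho_* = 1$ and $e^{\rho_*} = 2$ (not from the text):

```python
import math

rho_dot_star, a_star = 1.0, 2.0           # illustrative values (assumption)
lam = math.sqrt(1.0 + 1.0 / (a_star**2 * rho_dot_star**2))
A = rho_dot_star * lam                    # sqrt(rho-dot_*^2 + e^{-2 rho_*})

def eta(t, t_star=0.0):
    # Conformal time from the tan-tanh map quoted in the text.
    u = (t - t_star) * A + math.atanh(1.0 / lam)
    return 2.0 * math.atan(math.tanh(0.5 * u))

def a_conf(t):
    # Scale factor from e^rho = 1 / (rho-dot_* lambda cos eta).
    return 1.0 / (rho_dot_star * lam * math.cos(eta(t)))

def a_desitter(t, t_star=0.0):
    # Scale factor from the cosh/sinh expansion, Eq. (approx-scale).
    s = A * (t - t_star)
    return a_star * math.cosh(s) + rho_dot_star * a_star / A * math.sinh(s)

h = 1e-6
checks = []
for t in [-1.0, 0.0, 0.5, 2.0]:
    # d eta / d t should equal e^{-rho}, and the two scale factors should agree.
    deta = (eta(t + h) - eta(t - h)) / (2 * h)
    checks.append(abs(deta - 1.0 / a_conf(t)))
    checks.append(abs(a_conf(t) - a_desitter(t)) / a_desitter(t))
print(max(checks))  # agreement to numerical precision
```

Both checks hold to machine precision, confirming that the tan--tanh relation is just the de Sitter conformal-time map in disguise.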
Combining Eqs.~\eref{initialeom} and~\eref{conf_sc_fact},
we find that the equation for the modes at lowest order in slow roll is
\begin{equation}
\label{eom}
\varphi_{k}'' +2 \tan \eta\; \varphi_{k}' + (k^2 -4) \varphi_{k} = 0
\end{equation}
where a prime $'$ denotes a derivative with respect to $\eta$,
and $-(k^2 -1)$, with $k \in \mathbb{N} - \{0\}$, are the eigenvalues of the
laplacian $\triangle$. This is solved by the
normalised positive mode \cite{Birrell:1982ix}
constructed to vanish at the south pole of the Hawking--Moss instanton
\cite{Hawking:1982my}
describing de Sitter space at horizon crossing,
\begin{eqnarray}
\fl\nonumber
\varphi_{k}^{\mathrm{cl}}= \,
\frac{\dot{\rho}_{*} \lambda (k^{2}-3)^{1/4}}
{\sqrt{2 (k^{2}-4)}} \Bigg[ F \Big(
\begin{array}{cc}
- 1/2 + \sqrt{k^2-3}/2; & - 1/2 - \sqrt{k^2-3}/2 \\
\multicolumn{2}{c}{1/2}
\end{array}
\Big| \sin^{2}\eta \Big) \\
\label{mode}
\mbox{} + \imath \,
\frac{(k^2-4)}{\sqrt{k^2-3}}\sin \eta \, F \Big(
\begin{array}{cc}
\sqrt{k^2-3}/2; & -\sqrt{k^2-3}/2 \\
\multicolumn{2}{c}{3/2}
\end{array}
\Big| \sin^{2}\eta \Big)\Bigg]
\end{eqnarray}
where $F$ is the Gauss
hypergeometric function and $\imath^2 \equiv -1$.
The south pole is at $\eta= \imath \infty$.
This is equivalent to the usual formulation in which the positive mode
is taken to approach its Minkowski value deep inside the horizon, where
it is insensitive to the curvature of spacetime.
Ultimately we wish to use these mode functions to obtain analytic
estimates of the
$n$-point expectation values of fluctuations in $\varphi$
around the time of horizon crossing. However,
the presence of hypergeometric functions in Eq.~\eref{mode}
means that it is awkward to use such expressions for this purpose.
To obtain a more useful approximation, one may first re-write the equation of motion, Eq.~\eref{eom}, as
\begin{equation}
\left(\frac{\varphi_{k}}{\cos\eta}\right)'' = - (k^2 -5 -2 \tan^2 \eta )
\frac{\varphi_{k}}{\cos\eta} .
\end{equation}
The equation admits an ``almost-WKB'' solution, corresponding to
\begin{equation}
\label{almost_wkb}
\varphi_{k} \approx \frac{\dot{\rho}_{*} \lambda}{\sqrt{2 k}}
e^{\imath k \eta} \cos \eta ,
\end{equation}
which becomes a good approximation for an increasingly large neighbourhood
of $\eta = 0$ as $k \rightarrow \infty$. Unfortunately, we cannot
conclude that Eq.~\eref{almost_wkb} is sufficient to obtain an
acceptable estimate of the three-point expectation value of scalar
fluctuations near horizon crossing (where $\eta \rightarrow \pi/2$),
because it freezes out to the wrong asymptotic value.
This is a fatal deficiency. It would lead to an unreliable
estimate of the magnitude of the bispectrum and therefore untrustworthy
conclusions regarding the observational relevance of $f_{\mathrm{NL}}$.
To find an approximate form for the mode which gives a good approximation
over the \emph{entire} relevant range of $\eta$,
we can use standard results relating to hypergeometric functions to find that at late times $\eta \rightarrow \pi/2$ the field freezes out to
\begin{equation}
\varphi_{k}^{\mathrm{cl}} \rightarrow
- \frac{\imath \dot\rho_* \lambda}
{(k^2-3)^{1/4} \sqrt{2 (k^2-4)}}
e^{\imath \pi \sqrt{k^2-3} / 2}
\approx
- \frac{\imath \dot \rho_* \lambda}{\sqrt{2 k^3}}
e^{\imath \pi k/2},
\end{equation}
where the approximation holds for large $k$.
It follows that if we take $\varphi_k^{\mathrm{cl}}$ to be given by
\begin{equation}
\label{mode_approx}
\varphi_{k}^{\mathrm{cl}} \approx
\frac{\dot{\rho}_{*} \lambda}{\sqrt{2 k}}
\Big(\cos \eta - \frac{\imath}{k} \Big) e^{\imath k \eta} ,
\end{equation}
then we obtain the correct value and derivative at $\eta=\pi / 2$,
and indeed this gives a good estimate for all $\eta$.
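One can also confirm numerically that Eq.~\eref{mode_approx} solves the mode equation, Eq.~\eref{eom}, up to a relative error of $\Or(k^{-2})$. The sketch below sets the overall prefactor $\dot\rho_* \lambda$ to one (it drops out of the ratio) and evaluates the residual by finite differences at a few illustrative values of $\eta$:

```python
import cmath, math

def phi(eta, k):
    # Approximate mode function, Eq. (mode_approx), with rho-dot_* lambda = 1.
    return (cmath.cos(eta) - 1j / k) * cmath.exp(1j * k * eta) / math.sqrt(2 * k)

def residual(eta, k, h=1e-5):
    # Residual of phi'' + 2 tan(eta) phi' + (k^2 - 4) phi, via finite differences.
    p0, pp, pm = phi(eta, k), phi(eta + h, k), phi(eta - h, k)
    d1 = (pp - pm) / (2 * h)
    d2 = (pp - 2 * p0 + pm) / h**2
    return d2 + 2 * math.tan(eta) * d1 + (k**2 - 4) * p0

k = 50
ratios = [abs(residual(eta, k)) / ((k**2 - 4) * abs(phi(eta, k)))
          for eta in [0.0, 0.3, 0.7, 1.0, 1.4]]
print(max(ratios))  # suppressed relative to the terms of the equation
```

For $k = 50$ the residual is roughly four orders of magnitude below the individual terms of the equation, consistent with an $\Or(k^{-2})$ error.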
In Eq.~\eref{mode_approx} and below, the label
``$\mathrm{cl}$'' indicates that this set of mode functions form a good
basis out of which we can build a quantum field when we come to quantize
this theory. To do so, one introduces creation
and annihilation operators $a^\dag_{\vec{k}}$ and $a_{\vec{k}}$
which respectively create and destroy particles
(as measured by an inertial observer deep inside the horizon)
with momentum $\vec{k}$.
The quantum field corresponding to $\varphi$ can be constructed
by the usual canonical procedure, leading to
\begin{equation}
\label{field1}
\varphi(\eta, \vec{x}) = \sum_{\vec{k}} \Big( Q_{\vec{k}}(\vec{x})
\,\varphi_{k}^{\mathrm{cl}}(\eta) a_{\vec{k}}^{\dagger} + Q_{\vec{k}}^{*}(\vec{x})
\,\varphi_{k}^{\mathrm{cl} *}(\eta) a_{\vec{k}}\Big)
\end{equation}
where the $S^3$ harmonics $Q_{\vec{k}}$ are defined
by $\triangle Q_{\vec{k}}=-(k^2 -1) Q_{\vec{k}}$.
The case $k=1$ is the homogeneous background and the case $k=2$ gives
purely gauge modes.%
\footnote{To see that the $k=2$ modes are pure gauge,
suppose $\varphi_{2}$ satisfies
$\triangle \varphi_{2} =-3 \varphi_{2}$. It must also satisfy
$\varphi_{2 | i j}=-\gamma_{i j} \varphi_{2}$,
as can be verified by checking each $k=2$ mode,
and therefore
it is possible to perform a gauge transformation from
the metric
\begin{equation}
d s^2 = -N^2 d t^2 + e^{2 \rho} \gamma_{i j}
(d x^i +N^i d t) (d x^j +N^j d t), \quad \varphi=\varphi_{2}
\end{equation}
to a superficially different but gauge-equivalent metric
\begin{equation}
d s^2 = -\tilde N^2 d \tilde t^2 + e^{2 \rho} \gamma_{i j}
(d \tilde x^i +\tilde N^i d \tilde t)
(d \tilde x^j +\tilde N^j d \tilde t), \quad
\tilde \varphi \equiv 0 .
\end{equation}
The coordinate transformation which allows us to pass between these
two equivalent forms can be written
\begin{equation}
\tilde t = t + \frac{\varphi_2}{\dot{\phi}}, \quad
\tilde x^i = x^i + \frac{\dot \rho}{2 \dot \phi} \varphi_{2}^{|i} .
\end{equation}
}
For the purposes of studying fluctuations neither of these modes is
relevant, and we will therefore always be
interested in modes of wavenumber three or larger, so that
$k \geq 3$.
\subsection{Appropriate coordinates}
We use coordinates in which the metric on $S^3$ takes the form
\begin{equation}
d s_3^2 = d\chi^2+\sin^2 \chi \; d \Omega_2^2
\end{equation}
where $d \Omega_2^2$ is the usual metric on $S^2$. These coordinates are
convenient because if we take our position to be given by $\chi = 0$ then
the CMB can be thought of as located on the copy of
$S^2$ at $\chi=\chi_L$, where the subscript $L$ denotes
the time of last scattering.
In this form the harmonics $Q_{\vec{k}}$ are given by
\begin{equation}
\label{harmonic_breakdown}
Q_{\vec{k}} \equiv \Pi_{k}^{l} (\chi) Y_{l}^{m} (\theta , \phi) ,
\end{equation}
where $\vec{k} \equiv (k,l,m)$, the
$Y_{l}^{m}$ are the usual normalised harmonics on $S^2$ and
\begin{equation}
\label{harmonic_breakdown_2}
\Pi_k^l (\chi) = \sqrt{\frac{2 (k-1-l)! k}{\pi (k+l)!}}
(2^l l! \sin^l \chi) C_{k-1-l}^{l+1} (\cos \chi) .
\end{equation}
In Eq.~\eref{harmonic_breakdown_2},
$C_{k-1-l}^{l+1}$ is a Gegenbauer polynomial and
$k-1 \geq l \geq | m |$. The list $\vec{k} = (k,l,m)$ labels the
quantum numbers associated with each harmonic, with $k$ labelling
the ``radial'' $\chi$-harmonic and $(l,m)$ the usual harmonic labels
on $S^2$ associated with the spherical harmonics $Y_{l}^m$.
Under the exchange $m \mapsto -m$, the harmonics $Q_{\vec{k}}$
have the property%
\footnote{The $(-1)^m$ term here comes from
$[Y_l^m (\theta,\phi)]^* =(-1)^m Y_l^{-m} (\theta,\phi)$.}
that $Q_{\vec{k}}^* = (-1)^m Q_{-\vec{k}}$.
Therefore we may write the field, Eq.~\eref{field1}, in the convenient form
\begin{equation}
\varphi(\eta, \vec{x}) = \sum_{\vec{k}} Q_{\vec{k}}(\vec{x}) \Big[
\,\varphi_{k}^{\mathrm{cl}}(\eta) a_{\vec{k}}^{\dagger} + (-1)^m
\varphi_{k}^{\mathrm{cl} *}(\eta) a_{-\vec{k}} \Big] .
\end{equation}
\subsection{The two-point function}
From the above we arrive at the two-point function evaluated at late times
\begin{equation}
\fl
\langle \varphi_{\vec{k}_1}\varphi_{\vec{k}_2} \rangle = (-1)^{m_1}
\delta_{-\vec{k}_1, \vec{k}_2} |
\varphi_{k_1}^{\mathrm{cl}} (\pi /2) |^2 = (-1)^{m_1}
\delta_{-\vec{k}_1, \vec{k}_2}
\frac{\dot\rho_*^2 \lambda^2}{2 \sqrt{k_1^2 -3} (k_1^2 - 4)} .
\end{equation}
In order to convert expectation values of $\varphi$ into
expectation values of the curvature perturbation, $\zeta$, one can use
the non-linear $\delta N$ formalism, which will be described in more
detail in \S\ref{sec:deltaN} below. Here we merely quote the result for the
two-point function in order to demonstrate that the power spectrum
is consistent with the case of flat spatial slices, giving
\begin{equation}
\label{two_pt}
\langle \zeta_{\vec{k}_1}\zeta_{\vec{k}_2} \rangle
= (-1)^{m_1} \delta_{-\vec{k}_1, \vec{k}_2}
\frac{\dot\rho_*^4 }{2 \dot\phi_*^2 k_1^3}
\left[1+ \Or(k_1^{-2})\right] \, .
\end{equation}
It follows that the power
spectrum of fluctuations in $\zeta$ is
the same as in the $\mathbb{R}^3$ case \cite{Halliwell:1984eu}, up to
very small corrections.
\subsection{The scalar spectral tilt}
In the absence of non-gaussianity, which will be studied in detail in
\S\S\ref{sec:third_order_action}--\ref{sec:three-point},
the principal discriminant among models
of inflation is the spectral tilt. Although Eq.~\eref{two_pt}
shows that the \emph{magnitude} of the power spectrum of fluctuations
approximately matches the flat space spectrum if $k$ is not small, the
tilt contains
curvature terms from $\lambda$ and $\ddot{\rho}_\ast$. These
curvature terms must be taken into account when determining whether the
model is compatible with the WMAP constraints on the spectral index $n_s$.
In the case of flat equal time slices one removes a term $1/k^3$ from the
two-point function when determining the tilt.
This is because the tilt is a measure of the deviation from
scale invariance. Since the degeneracy of modes on $S^3$ with a given $k$
increases discretely with $k$, the notion of scale invariance is not natural on $S^3$;
however, experiment proceeds on the supposition of flat equal time slices and
thereby demands that we compute the tilt in the usual way.
For large $k$ the term $\dot\rho_*^{-2} e^{-2 \rho_*}$ is small and so the
tilt which follows from Eq.~\eref{two_pt} is
\begin{equation}
\label{tilt}
n_s - 1
= 2 \eta_* -6 \epsilon_* +\Or(k^{-2}) \, ,
\label{eq:large-k-tilt}
\end{equation}
where the slow-roll parameters $\epsilon$ and $\eta$ are defined by%
\footnote{In the case of flat spatial slices, one often uses
$\epsilon_H \equiv - \ddot\rho / \dot\rho^2$
instead of the quantity $\epsilon$; here these differ by a curvature term,
so that
$\epsilon = \epsilon_H + \dot\rho^{-2} e^{- 2 \rho}$.
The difference may be
significant
for those modes passing outside the horizon when there is substantial
curvature, so we are
taking Eq.~\eref{eq:sr-def} to be the fundamental choice because the field
is rolling slowly on a Hubble timescale. This also leads to
$(V')^2/2V^2\approx \epsilon / \lambda^4$ and $V''/V \approx \eta$.
Note that $\epsilon_H$ need not be small
for modes passing out of the horizon early on.}
\begin{equation}
\epsilon \equiv
\frac{\dot\phi^2}{2\dot\rho^2}
\quad \mbox{and} \quad
\eta \equiv -
\frac{\ddot\phi}{\dot\phi \dot\rho} +
\frac{\dot\phi^2}{2 \dot\rho^2} .
\label{eq:sr-def}
\end{equation}
The last term in Eq.~\eref{eq:large-k-tilt}
will be significantly smaller than the first
two for $k\gtrsim 20$, and so we will recover
the usual result for flat spatial slices.
In this case it is known observationally that the tilt is small
\cite{Komatsu:2008hk}, so it follows that the
combination of $\epsilon_*$ and $\eta_*$ in Eq.~\eref{eq:large-k-tilt} is
too: generically, they are both small.
More generally, Eq.~\eref{eq:large-k-tilt} shows that the two-point function
is unlikely to be strongly sensitive to curvature unless good constraints
can be obtained on a possible running.
\subsection{Projecting onto the sky}
\label{sec:sky}
In order to properly compare results in the $S^3$ case with those in the
$\mathbb{R}^3$ case one should project onto the sky and compare the
resulting \emph{angular} expectation values of $\zeta$.
For this we need to consider the $S^3$ harmonics evaluated at $\chi_L$ on
the sky, where $\chi_L \ll 1$. From Eq.~\eref{harmonic_breakdown}
and the eigenvalue equation $\triangle Q_{\vec{k}} = -(k^2 - 1) Q_{\vec{k}}$
we see that $\Pi_k^l (\chi)$ obeys the defining equation
\begin{equation}
\frac{1}{\sin^2 \chi}
\frac{d}{d \chi} \Big( \sin^2 \chi \frac{d}{d \chi} \Pi_k^l(\chi) \Big)
- \frac{l(l+1)}{\sin^2 \chi} \Pi_k^l(\chi) + (k^2 -1)\Pi_k^l (\chi) = 0 .
\end{equation}
In the limit $\chi \ll 1$, $k \gg 1$ and $l \gg 1$ the solutions become
proportional to a spherical Bessel function, $\Pi_k^l(\chi) \propto
j_l(k\chi)$. The correct normalization can be obtained by studying
Eq.~\eref{harmonic_breakdown_2} in the
limit $\chi \rightarrow 0$ and yields
\begin{equation}
\label{approx_pi}
\Pi_k^l (\chi) \approx \sqrt{\frac{2}{\pi}} \,k \, j_l (k \chi)
\end{equation}
for $\chi \ll 1$, $k \gg 1$ and $l \gg 1$.
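The approximation can be checked against the exact expression, Eq.~\eref{harmonic_breakdown_2}. The sketch below builds $\Pi_k^l$ from the Gegenbauer three-term recurrence (computing the factorial ratio as a product to avoid overflow) and compares with $\sqrt{2/\pi}\, k\, j_l(k\chi)$ for $l=0$ and $l=2$; the parameter values are illustrative, and $l=2$ is of course only marginally in the regime $l \gg 1$, yet the agreement is already good:

```python
import math

def gegenbauer(n, alpha, x):
    # Three-term recurrence for the Gegenbauer polynomial C_n^alpha(x).
    c0, c1 = 1.0, 2.0 * alpha * x
    if n == 0:
        return c0
    for j in range(2, n + 1):
        c0, c1 = c1, (2.0 * x * (j + alpha - 1) * c1
                      - (j + 2 * alpha - 2) * c0) / j
    return c1

def Pi(k, l, chi):
    # Radial harmonic, Eq. (harmonic_breakdown_2).
    prod = 1
    for i in range(k - l, k + l + 1):
        prod *= i                       # equals (k+l)! / (k-1-l)!
    norm = math.sqrt(2.0 * k / (math.pi * prod))
    return (norm * 2**l * math.factorial(l) * math.sin(chi)**l
            * gegenbauer(k - 1 - l, l + 1, math.cos(chi)))

def j0(x): return math.sin(x) / x
def j2(x): return (3.0 / x**3 - 1.0 / x) * math.sin(x) - 3.0 * math.cos(x) / x**2

k, chi = 200, 0.05                      # illustrative: chi << 1, k*chi = 10
bessel0 = math.sqrt(2 / math.pi) * k * j0(k * chi)
bessel2 = math.sqrt(2 / math.pi) * k * j2(k * chi)
err0 = abs(Pi(k, 0, chi) - bessel0) / abs(bessel0)
err2 = abs(Pi(k, 2, chi) - bessel2) / abs(bessel2)
print(err0, err2)  # small relative errors in the flat-space limit
```

For $l=0$ the comparison is exact up to the factor $\chi/\sin\chi$, since $\Pi_k^0 \propto \sin(k\chi)/\sin\chi$ while $j_0(k\chi) \propto \sin(k\chi)/(k\chi)$.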
In the limit $\chi_L \ll 1$ the surface of last scattering is becoming
close to our point of observation, compared with the radius of curvature
of the $S^3$. We therefore expect that curvature ceases to play any role
and the two-point function projected onto the sky goes over to its flat
space counterpart. Indeed, one can show that in this limit (and with the
restriction $l \gg 1$ which guarantees we are looking on small angular
scales)
\begin{equation}
\langle \zeta_{l_1 m_1} \zeta_{l_2 m_2} \rangle
\approx (-1)^{m_1} \delta_{l_1, l_2} \delta_{m_1, -m_2}
\int_{0}^{\infty} \frac{d k}{k} \; \frac{\dot\rho_*^4}{\dot\phi_*^2}
\frac{1}{\pi} \Big[ j_l (k \chi_L) \Big]^2
\label{two-point-sky}
\end{equation}
where the approximation of continuing the integral to $0$ is valid
because the main contribution comes from the region $k\approx l \chi_L^{-1}$.
Eq.~\eref{two-point-sky} is equivalent to the well-known result in the
$\mathbb{R}^3$ case, which demonstrates the consistency of our calculation.
\section{The third-order action}
\label{sec:third_order_action}
In the previous section we expanded the action to second order in the
small fluctuation $\varphi$, which is sufficient to determine its
two-point statistics and therefore the power spectrum.
If we wish to
go further, however, and determine the leading non-linearity then it is
necessary to obtain a description of the process by which three
$\varphi$ quanta can interact. This information is provided by the third
order term in the expansion of the action, which we will determine in the
present section before going on to study the three-point function
in \S\ref{sec:three-point}.
After expanding $S$ according to its definition, it follows that
the third order term can be written
\begin{eqnarray}
\fl\nonumber
S_{3} = \frac{1}{2} \int dt \, d^{3}x \; e^{3\rho} \sqrt{\gamma} \Big( -
\frac{1}{3} V_{0}''' \varphi^{3} -V_{0}'' \delta N \varphi^{2}
- \delta N \dot{\varphi}^{2}-2\dot{\varphi} N^{i} \varphi_{,i}
\\ \nonumber \qquad\qquad \mbox{}
- \delta N e^{-2 \rho} \gamma^{i j} \varphi_{,i} \varphi_{,j}
+ 6\dot{\rho}^{2} \delta N^{3} +4 \dot{\rho} \triangle \chi \delta N^{2}
- \delta N \chi_{|ij} \chi^{|ij}
\\ \qquad\qquad \mbox{}
+ \delta N(\triangle \chi)^{2}
+ 2 \dot{\phi} \delta N N^{i} \varphi_{,i} -\dot{\phi}^{2} \delta N^{3}
+ 2\dot{\phi} \delta N^{2} \dot{\varphi}
\Big) ,
\label{eq:three-point-action}
\end{eqnarray}
where to avoid unnecessary clutter we have denoted the covariant
derivative on the unit three-sphere by a vertical bar, so that
$X_{|i} \equiv \nabla_i X$ and
$X^{|i} \equiv \gamma^{ij} X_{|j}$.
From this expression we require the leading order term in slow-roll.
This will allow us to compute the expectation value
$\langle \varphi \varphi \varphi \rangle$ to the first non-trivial
order at horizon crossing, after which we must perform a gauge transformation
to determine $\langle \zeta \zeta \zeta \rangle$.
After horizon crossing we expect the correlation functions of
the curvature perturbation to be approximately conserved.
To identify the leading slow-roll terms, we must relate the derivatives
of the potential to the motion of the background field $\phi$.
For this purpose one can use the relations
\begin{equation}
V_{0}''' = \Big( \frac{6 \dot{\rho}}{\dot{\phi}}
-3 \frac{\ddot{\phi}}{\dot\phi^{2}} \Big)
e^{-2\rho} + \Or(\epsilon^{3/2}) ,
\quad \mbox{and} \quad
V_{0}'' = -3 e^{-2 \rho} + \Or(\epsilon) .
\label{eq:order-estimate}
\end{equation}
All other terms in Eq.~\eref{eq:three-point-action} are suppressed by
at least one power of $\dot{\phi}/\dot{\rho}$, so
the leading contribution to the action comes from $V_0'''$.
On flat spatial slices this term is usually negligible
\cite{Falk:1992sf,Seery:2008qj}, since it is proportional to
powers of $\dot{\phi}$ and $\ddot{\phi}$.
These terms are indeed present in Eq.~\eref{eq:order-estimate},
but are accompanied by a term of order $\dot{\rho}/\dot{\phi}$
whose source is the curvature term in the background scalar field
equation, Eq.~\eref{eq:friedmann}.
Accordingly, a significant three-point interaction can be present at early
times when the scalar field is behaving in a way quite different from the
flat slicing expectation.
We are assuming that a slow-roll hierarchy exists at the time of
evaluation, so it follows that the $\ddot{\phi}$ contribution can be
discarded and the leading contribution to the action can be written
\begin{equation}
\label{action_dom}
S_{3} = -\int dt \, d^{3}x \; \sqrt{\gamma} e^{\rho} \;
\frac{\dot{\rho}\varphi^3}{\dot{\phi}} .
\end{equation}
This is of order $\dot{\rho}/\dot{\phi} \sim \epsilon^{-1/2}$ and is
a qualitatively new contribution which is not present in the
interactions among $\varphi$ quanta on $\mathbb{R}^3$ spatial slices
\cite{Maldacena:2002vr}. This term is suppressed by
two powers of the scale factor compared to the vacuum energy density
and therefore appears in
Eq.~\eref{eq:order-estimate} proportional to $e^{-2\rho}$, which implies
that at late times where $e^{\rho} \rightarrow \infty$ it no longer
contributes to the interactions. Thus, for large $k$ we can expect that the
effect of primordial curvature disappears and we recover the standard
flat space result.
In \S\ref{sec:three-point} we will determine the contribution
that this interaction makes to the $\langle \zeta \zeta \zeta \rangle$
expectation value. In fact,
we will see that it gives rise to a term which
behaves like $\epsilon^{-2}$, and which can therefore
dominate over those terms which are common to flat hypersurfaces
and $S^3$ hypersurfaces (to be described below),
which behave at leading order like $\epsilon^{-1}$.
Of course, as we have described, other terms in the slow-roll expansion of
$V_{0}'''$ will dominate if $e^{-2\rho}$ is small---that is,
if the curvature of the spatial hypersurfaces
is small. We can only expect the term in equation~\eref{action_dom} to dominate in the three point function if there has not been much
inflation prior to the mode leaving the horizon.
The next-to-leading order terms contribute proportional to
$\epsilon^{1/2}$. Their effect in the action can be written
\begin{equation}
\fl
\label{action_flat}
S_{3} = \frac{1}{2} \int dt\, d^{3}x \; e^{3 \rho} \sqrt{\gamma} \Big(
\frac{\ddot\phi}{\dot\phi^2} e^{-2 \rho} \varphi^3
+ 3 e^{-2 \rho} \delta N \varphi^{2}
- \delta N \dot\varphi^{2}
- 2 \dot\varphi N^{i} \varphi_{| i}
- \delta N e^{-2 \rho} \varphi^{| i} \varphi_{| i}\Big) .
\end{equation}
These terms give rise to contributions which are common to both $S^3$ and
$\mathbb{R}^3$, and are suppressed by a power of $\epsilon$ compared to the
leading curvature term~\eref{action_dom}. They are the same as the terms
obtained by Maldacena \cite{Maldacena:2002vr}.
Adding both these contributions together, integrating by parts,
and using the background equations for $\rho$ and $\phi$, we obtain
\begin{equation}
\label{total_action}
\fl
S_{3} = - \int dt\, d^{3}x \; \left\{ \sqrt{\gamma} \;
\Big(
e^{\rho}\, \frac{\dot{\rho}}{\dot{\phi}}\, \varphi^{3}
+ e^{5 \rho} \dot\phi \dot\varphi^2 (\triangle +3)^{-1} \dot\varphi
\Big)
+ \frac{\delta L_{2}}{\delta \varphi} f(\varphi)
\right\}
\end{equation}
where $f(\varphi)$ is an auxiliary function defined by
\begin{equation}
\fl
\label{field_redefinition}
f(\varphi) \equiv
\frac{\dot \phi}{8 \dot \rho} \varphi^2
- \frac{3 \dot \phi}{8 \dot \rho} (\triangle +3)^{-1} (\varphi^2)
- \frac{\dot \phi}{4 \dot \rho} (\triangle +3)^{-1}
\left\{ \varphi (\triangle +3) \varphi \right\} + \cdots
\end{equation}
and `$\cdots$' represents terms which vanish at late times.
The term involving $\delta L_2/\delta \varphi$ is proportional to the
leading-order equation of motion and therefore vanishes when we take
the interaction picture field to be on-shell. However, it cannot simply
be discarded; it records the contribution of boundary terms which
were generated after integrating by parts and which we have not written
explicitly \cite{Seery:2006tq}. The correct procedure is to make a field
redefinition to remove these terms, by introducing a shifted field
$\varphi_c$, defined by
\begin{equation}
\label{field_redefinition2}
\varphi = \varphi_{c} + f(\varphi_{c}) .
\end{equation}
This removes the nuisance terms proportional to the equation of motion
\emph{and} the boundary terms, giving a simplified action
\begin{equation}
\label{action_refined}
S_{3} = -\int dt \, d^{3}x \;
\sqrt{\gamma} \;
\left(
e^{\rho}\, \frac{\dot{\rho}}{\dot{\phi}}\, \varphi_{c}^{3}
+ e^{5 \rho} \dot\phi \dot\varphi_{c}^2 (\triangle +3)^{-1}
\dot\varphi_{c}
\right) .
\end{equation}
We may now compute the simpler correlation functions of
$\varphi_c$, and rewrite the result in terms of the interesting field
$\varphi$ by using Eq.~\eref{field_redefinition2}.
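As an aside, the way such a quadratic redefinition feeds into a three-point function can be illustrated with a zero-dimensional toy model (an illustrative sketch only; the coupling $a$ is a stand-in for the $\dot\phi/\dot\rho$ coefficients in $f$): for a Gaussian variable $\varphi_c$ with $\varphi = \varphi_c + a\varphi_c^2$, Wick's theorem gives $\langle\varphi^3\rangle = 9a\sigma^4 + 15a^3\sigma^6$, which a Monte Carlo estimate reproduces:

```python
import random

random.seed(1)  # deterministic toy run

a, sigma, n = 0.1, 1.0, 200_000

# Monte Carlo estimate of <phi^3> for phi = phi_c + a*phi_c^2,
# with phi_c Gaussian, mean 0, standard deviation sigma
acc = 0.0
for _ in range(n):
    pc = random.gauss(0.0, sigma)
    acc += (pc + a * pc**2) ** 3

mc = acc / n
exact = 9 * a * sigma**4 + 15 * a**3 * sigma**6   # Wick's theorem
print(mc, exact)   # agree to Monte Carlo accuracy
```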
\section{The three-point function}
\label{sec:three-point}
\subsection{The $\varphi$ correlation function}
The final step is to calculate the three-point function in the
comoving gauge~\eref{comoving_gauge}. As we have described,
the comoving curvature perturbation $\zeta$ is conserved after horizon
exit in the absence of non-adiabatic pressure.
Once a given scale
falls back inside the horizon, $\zeta$ can be used to seed the subsequent
calculation of temperature and density fluctuations in the coupled
baryon--photon plasma.
Obtaining the correlation functions of $\zeta$ entails a number of steps.
The first
requires that we obtain the contribution from the reduced
third order action~\eref{action_refined},
and rewrite it in terms of the correlation functions of the original
field $\varphi$ using the field redefinition~\eref{field_redefinition}.
Once this has been done it is necessary to determine how the
correlation functions of
$\zeta$ are related to those of $\varphi$. Fortunately, there is a simple
prescription (the so-called ``$\delta N$ formalism'')
which is valid on large scales
\cite{Starobinsky:1986fx,Sasaki:1995aw,Lyth:2004gb,Lyth:2005fi}.
The result is that $\zeta(t,\vec{x})$ evaluated at any time $t$ later than the
horizon-crossing time $t_\ast$ can be written
\begin{equation}
\zeta(t,\vec{x}) \equiv \delta N(t,\vec{x}) =
\sum_{n=1}^{\infty} \frac{1}{n!}
\left\{ \frac{\partial^n}{\partial \phi_\ast^n} N(t,t_\ast) \right\}
[\varphi_\ast(t_\ast,\vec{x})]^n ,
\label{eq:delta-N}
\end{equation}
where $t_\ast$ is taken to be a spatial slice on which the curvature perturbation $\zeta$ vanishes, $t$ labels a slice
of uniform energy density, and $N$ is the number of e-folds between these two slices. Note that this formula applies only in
coordinate space and must be treated accordingly when transforming to
harmonic modes.
Eq.~\eref{eq:delta-N} can also be understood in terms of a
further field redefinition which
changes the action from the uniform curvature to comoving gauge
\cite{Maldacena:2002vr}.
The whole computation is performed with the interaction picture fields using either~\eref{mode_approx} or~\eref{mode} according to whether $k$ is large
or small.
Let us first determine the contribution from the first term in the reduced
third-order action, Eq.~\eref{action_refined}.
Provided we are only interested in tree-level amplitudes, the interaction
Hamiltonian is simply given by minus the interaction term in the
Lagrangian \cite{Seery:2007we,Dimastrogiovanni:2008af},
so that $H_{\mathrm{int} \; 3} = - L_{\mathrm{int} \; 3}$.
It follows that the contribution from the first term
in~\eref{action_refined}, for large $k$, is
\begin{eqnarray}
\fl\nonumber
\langle
\varphi_{\vec{k}_{1}}\varphi_{\vec{k}_{2}}\varphi_{\vec{k}_{3}}
\rangle
\supseteq
- \imath \int_{\imath \infty}^{\pi/2 - \varepsilon} d \eta \;
\langle [\varphi_{\vec{k}_{1}} ( \pi /2 )
\varphi_{\vec{k}_{2}} ( \pi / 2)
\varphi_{\vec{k}_{3}} ( \pi / 2) , H_{\mathrm{int} \;3}(\eta)
] \rangle \\
= \cdots - \frac{6 \dot{\rho}_{*}^{5} \lambda^{4}}
{8 k_{1}^{2} k_{2}^{2} k_{3}^{2} \dot{\phi}_{*}}
e^{{-\imath \pi k_t / 2}}
\int d^{3}x \; \sqrt{\gamma}
Q_{\vec{k}_{1}} Q_{\vec{k}_{2}} Q_{\vec{k}_{3}} J
+ \mbox{c.c.} ,
\label{eq:primitive-three-a}
\end{eqnarray}
where ``c.c.'' denotes the complex conjugate of the preceding expression
and $k_{t} \equiv k_{1}+k_{2}+k_{3}$.
The function $J$ is defined by
\begin{equation}
J = \int_{\imath \infty }^{\pi/2-\varepsilon} d\eta \;
\frac{\sin \eta}{\cos^2 \eta}
\Big(\cos \eta - \frac{\imath}{k_1} \Big)
\Big(\cos \eta - \frac{\imath}{k_2} \Big)
\Big(\cos \eta - \frac{\imath}{k_3} \Big)
e^{\imath k_t \eta} .
\label{eq:primitive-three-b}
\end{equation}
In Eqs.~\eref{eq:primitive-three-a}--\eref{eq:primitive-three-b}
we have carried the integration over conformal time to within a small
parameter $\varepsilon$ (not to be confused with the slow-roll
parameter $\epsilon$) of future infinity.
We can expect that this will be a good approximation because
the integral receives its largest contribution from around the time of
horizon exit and thereafter does not evolve appreciably, so there is
little error in continuing the integration into the infinite future.
Inside the horizon the integral is taken over a contour which turns
a right-angle at $\eta = 0$ and approaches infinity along the positive
imaginary axis, corresponding to evaluation of the interaction in the
Hartle--Hawking state. This is equivalent to the interacting vacuum of
the full theory.
Eqs.~\eref{eq:primitive-three-a}--\eref{eq:primitive-three-b}
give rise to a term in the $\varphi$ correlation function which takes the
form
\begin{equation}
\fl
\langle
\varphi_{\vec{k}_{1}}\varphi_{\vec{k}_{2}}\varphi_{\vec{k}_{3}}
\rangle
\supseteq
- \frac{3 \dot{\rho}_{*}^{5}}
{2 \dot{\phi}_{*} k_{1}^{3} k_{2}^{3} k_{3}^{3}}
\int d^3 x \; \sqrt{\gamma}
Q_{\vec{k}_{1}} Q_{\vec{k}_{2}} Q_{\vec{k}_{3}}
\Big( -2 \frac{k_1 k_2 k_3}{k_t^2}
+ k_t
- \frac{1}{k_t^2} \sum_{i \ne j} k_i k_j^2
\Big)
\label{eq:primitive-three-c}
\end{equation}
In order to obtain the contribution this makes to the $\zeta$ correlation
function, we will shortly see that the $\delta N$ formalism requires us to
multiply by $(-\dot\rho_* / \dot\phi_*)^3$. The result will be proportional
to $\epsilon^{-2} k^{-8} \dot{\rho}_{*}^4$, whereas the usual terms from the
calculation with flat constant time hypersurfaces are proportional to
$\epsilon^{-1} k^{-6} \dot{\rho}_{*}^4$. So we expect the terms given in
Eq.~\eref{eq:primitive-three-c} to dominate for small $k$.
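This crossover is easy to make explicit (a sketch with illustrative numbers; $\epsilon$ and the dropped $\Or(1)$ coefficients are placeholders): the ratio of the curvature term to the flat-space term scales as $(\epsilon^{-2}k^{-8})/(\epsilon^{-1}k^{-6}) = 1/(\epsilon k^2)$, so the new contribution dominates precisely for $k \lesssim \epsilon^{-1/2}$:

```python
def ratio(k, eps):
    # (curvature term ~ eps^-2 k^-8) / (flat-space term ~ eps^-1 k^-6),
    # dropping O(1) coefficients: equals 1/(eps * k**2)
    return (eps**-2 * k**-8) / (eps**-1 * k**-6)

eps = 0.01                    # illustrative slow-roll parameter
print(ratio(3, eps))          # ~11: curvature term dominates at small k
print(ratio(30, eps))         # ~0.1: flat-space term dominates at large k
print(ratio(eps**-0.5, eps))  # = 1 at the crossover scale k ~ eps^{-1/2}
```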
On the other hand, some of the usual terms in the three point function are
obtained by calculating the contribution from the second term in Eq.~\eref{action_refined}. These terms are given by
\begin{equation}
\fl
\langle
\varphi_{\vec{k}_{1}}\varphi_{\vec{k}_{2}}\varphi_{\vec{k}_{3}}
\rangle
\supseteq
- \frac{\dot{\phi}_{*} \dot{\rho}_{*}^3}
{2 k_{1}^{3} k_{2}^{3} k_{3}^{3}}
\int d^3 x \; \sqrt{\gamma}
Q_{\vec{k}_{1}} Q_{\vec{k}_{2}} Q_{\vec{k}_{3}}
\sum_{i>j} \frac{k_i^2 k_j^2}{k_t} .
\end{equation}
Adding on the terms from the field redefinition (see \S\ref{section:field_redefinition_terms}), given by
Eqs.~\eref{field_redefinition} and~\eref{field_redefinition2}, and taking the dominant terms for large $k$, we
obtain
\begin{equation}
\fl
\langle
\varphi_{\vec{k}_{1}}\varphi_{\vec{k}_{2}}\varphi_{\vec{k}_{3}}
\rangle
\supseteq
- \frac{\dot{\phi}_{*} \dot{\rho}_{*}^{3}}
{8 k_{1}^{3} k_{2}^{3} k_{3}^{3}}
\int d^3 x \; \sqrt{\gamma}
Q_{\vec{k}_{1}} Q_{\vec{k}_{2}} Q_{\vec{k}_{3}}
\Big( \frac{4}{k_t} \sum_{i>j} k_i^2 k_j^2
- \frac{1}{2} \sum_{i} k_i^3
+ \frac{1}{2} \sum_{i \ne j}k_i k_j^2
\Big)
\end{equation}
\subsection{Terms due to the field redefinition}
\label{section:field_redefinition_terms}
The field redefinition given by Eqs.~\eref{field_redefinition}--\eref{field_redefinition2} contributes a number of terms to the three-point function. These are calculated with the aid of a convolution (see Eq.~\eref{eqn:convolution}) followed by contraction of the fields in such a way that only connected diagrams are produced. The first term of Eq.~\eref{field_redefinition} results in a calculation much like that carried out in \S\ref{sec:deltaN}. The second and third terms of Eq.~\eref{field_redefinition} require a marginally more complicated calculation, in which one must integrate by parts to throw the $(\triangle +3)^{-1}$ onto the $Q_{\vec{k}}$ in the convolution before contracting the fields. The net effect of the redefinition, Eq.~\eref{field_redefinition}, is then
\begin{eqnarray}
\nonumber
\fl
\langle
\varphi_{\vec{k}_{1}}\varphi_{\vec{k}_{2}}\varphi_{\vec{k}_{3}}
\rangle
-&
\langle
\varphi_{c\,\vec{k}_{1}}\varphi_{c\,\vec{k}_{2}}\varphi_{c\,\vec{k}_{3}}
\rangle
=\\
&
\frac{\dot\phi_* \dot\rho_*^3 \lambda^4}{16 \prod_l k_l^3}
\int d^3 x \; \sqrt{\gamma}
Q_{\vec{k}_{1}} Q_{\vec{k}_{2}} Q_{\vec{k}_{3}}
\left(
\sum_i k_i^3 + 3 \sum_i k_i - \sum_{i\ne j} k_i^2 k_j
\right)
\end{eqnarray}
\subsection{The $\delta N$ formalism}
\label{sec:deltaN}
Finally, one uses the $\delta N$ prescription to obtain the full
$\zeta$ correlation function. Multiplying three copies of
Eq.~\eref{eq:delta-N}, taking correlations using Wick's theorem
and truncating to tree-level terms which do not involve unconstrained
momentum integrations, we find
\begin{equation}
\fl
\langle
\zeta_{\vec{k}_{1}}\zeta_{\vec{k}_{2}}\zeta_{\vec{k}_{3}}
\rangle
=
\left(\frac{\delta N}{\delta \phi_\ast}\right)^3
\langle
\varphi_{\vec{k}_{1}}\varphi_{\vec{k}_{2}}\varphi_{\vec{k}_{3}}
\rangle
+ \frac{1}{2}
\left( \frac{\delta N}{\delta \phi_\ast} \right)^2
\frac{\delta^2 N}{\delta\phi_\ast^2}
\left[
\langle\varphi_{\vec{k}_{1}}\varphi_{\vec{k}_{2}}
(\varphi\star\varphi)_{\vec{k}_{3}}\rangle
+ \mbox{cyclic}
\right],
\end{equation}
where $\star$ denotes a `convolution,'
\begin{equation}
\label{eqn:convolution}
(\varphi\star\varphi)_{\vec{k}}(t)\equiv \int d^3 x \sqrt{\gamma} Q_{\vec{k}}^{*}(\vec{x}) \varphi^2 (t,\vec{x}),
\end{equation}
``cyclic'' indicates that all cyclic
permutations of $\{ 1, 2, 3 \}$ should be included in the sum,
and $N$ measures by how many e-folds the mode in question is outside
the horizon (see description below Eq.~\eref{eq:delta-N}).
Since we are working with a single-field model of inflation,
the derivatives of $N$ can be evaluated directly, yielding
\begin{equation}
\frac{\delta N}{\delta \phi_\ast} =
- \frac{\dot{\rho}_\ast}{\dot{\phi}_\ast} ,
\quad \mbox{and} \quad
\frac{\delta^2 N}{\delta \phi_\ast^2} =
\frac{\dot{\rho}_\ast \ddot{\phi}_\ast}{\dot{\phi}_\ast^3}
+ \frac{1}{2}
- \frac{1}{\dot{\phi}_\ast^2 e^{2 \rho_\ast}} .
\end{equation}
\subsection{The $\zeta$ correlation function}
Collecting all these terms, the final result for the three point function evaluated on a late time slice is
\begin{eqnarray}
\fl\nonumber
\langle
\zeta_{\vec{k}_{1}}\zeta_{\vec{k}_{2}}\zeta_{\vec{k}_{3}}
\rangle =
\frac{\dot{\rho}_{*}^{6}}
{8 \dot{\phi}_{*}^{2} k_{1}^{3} k_{2}^{3} k_{3}^{3}}
\int d^3 x \; \sqrt{\gamma}
Q_{\vec{k}_{1}} Q_{\vec{k}_{2}} Q_{\vec{k}_{3}} \\ \nonumber \qquad
\mbox{} \times \Bigg[
\frac{4}{k_t} \sum_{i>j} k_i^2 k_j^2
+ \frac{1}{2} \sum_{i} k_i^3
+ \frac{1}{2} \sum_{i \ne j} k_i k_j^2
+ \frac{2\dot{\rho}_* \ddot{\phi}_*}{\dot{\phi}_*^3}
\sum_{i} k_i^3
\\ \qquad\qquad \mbox{}
+ \frac{2 \dot{\rho}_*^2}{\dot{\phi}_*^2}
\Big( -12 \frac{k_1 k_2 k_3}{k_t^2}
+ 6 k_t
- \frac{6}{k_t^2} \sum_{i \ne j} k_i k_j^2
-\frac{1}{\dot\rho_{*}^{2} e^{2 \rho_*}} \sum_i k_i^3
\Big)
\Bigg]
\label{final_correlator}
\end{eqnarray}
Of course, this uses the large $k$ approximation for $\varphi^{\mathrm{cl}}$ and so
it can only be regarded as approximate.
The leading term in this expression scales with momentum like $[k^{-6}]$.
This is the term computed by Maldacena, and is dominant for sufficiently
large $k$, that is, on small scales.
The correction terms we have computed scale with momentum like
$[k^{-8}]$. These terms can become dominant on larger scales.
On sufficiently large scales, of course, one must remember that
Eq.~\eref{final_correlator} will be accompanied by other corrections
that scale even faster as powers of momentum, like $[k^{-2n}]$ for
any integer $n \geq 3$, and that these corrections will eventually
overwhelm the ones we have computed. In this regime, for accuracy, one
should use the full expression for the modes~\eref{mode} and perform a
numerical assessment of the three point function.
\subsection{An estimate of $f_{\mathrm{NL}}$}
\label{sec:projection}
We are finally in a position to estimate the magnitude of the observable
non-linearity parameter $f_{\mathrm{NL}}$ which is produced by the sensitivity to
curvature in Eq.~\eref{final_correlator}. We define the sign of $f_{\mathrm{NL}}$
according to the WMAP convention, where the bispectrum is parameterized
in harmonic space via
\begin{equation}
B(k_1,k_2,k_3) =
\frac{6}{5} f_{\mathrm{NL}} \left\{
P(k_1) P(k_2) + P(k_1) P(k_3) + P(k_2) P(k_3)
\right\} .
\label{eq:fnl-def}
\end{equation}
One must be careful in interpreting this formula for modes on $S^3$, because
when we work with the harmonics $Q_{\vec{k}}$, the $k_i$ no longer have the
same meaning as in flat space. The correct way to compare the magnitude of
$f_{\mathrm{NL}}$ between models with different spatial geometries is to project
onto the sky and compare the non-linearity parameter in the angular
expectation values of $\zeta$.
To obtain the correct projection, we begin by writing
the three-$\zeta$ correlator in terms of a function $\xi(k_1,k_2,k_3)$
defined by
\begin{equation}
\langle
\zeta_{\vec{k}_{1}}\zeta_{\vec{k}_{2}}\zeta_{\vec{k}_{3}}
\rangle
=
\int d^3 x \; \sqrt{\gamma}
Q_{\vec{k}_{1}} Q_{\vec{k}_{2}} Q_{\vec{k}_{3}} \xi(k_1, k_2, k_3) .
\end{equation}
To obtain the angular correlation function of the $\zeta$s,
we follow the route outlined in
\S\ref{sec:sky}: the correlator in harmonic space is transformed back to
coordinate space, and evaluated at the radial distance corresponding to
last scattering. In terms of the polar coordinate on $S^3$, this is
given by $\chi = \chi_L$. One finds
\begin{eqnarray}
\fl\nonumber
\langle
\zeta_{l_1 m_1} \zeta_{l_2 m_2} \zeta_{l_3 m_3} \rangle_{\chi_L}
= C_{l_1\;\;\, l_2\;\;\, l_3}^{m_1 m_2 m_3}
\sum_{k_i \geq (l_i + 1)}
\Pi_{k_1}^{l_1} (\chi_L)
\Pi_{k_2}^{l_2} (\chi_L)
\Pi_{k_3}^{l_3} (\chi_L) \\
\times
\int_0^\pi \sin^2 \chi \; d \chi \;
\Pi_{k_1}^{l_1} (\chi)
\Pi_{k_2}^{l_2} (\chi)
\Pi_{k_3}^{l_3} (\chi)
\xi(k_1, k_2, k_3)
\end{eqnarray}
where
\begin{equation}
C_{l_1\;\;\, l_2\;\;\, l_3}^{m_1 m_2 m_3} =
\int d \Omega^2(\hat{\vec{x}}) \;
Y_{l_1}^{m_1} (\hat{\vec{x}})
Y_{l_2}^{m_2} (\hat{\vec{x}})
Y_{l_3}^{m_3} (\hat{\vec{x}}) ,
\end{equation}
and $d \Omega^2(\hat{\vec{x}})$ is an element of solid angle on $S^2$ in the direction
of the unit vector $\hat{\vec{x}}$. The integration over $\chi$ is
symmetric or antisymmetric about $\chi = \pi/2$, with the two regions
interfering constructively for $k_t$ odd and destructively for $k_t$ even.
Moreover, the $\Pi_k^l(\chi)$ under the integral can be replaced by
spherical Bessel functions for the range of $\chi$, $k$ and $l$ of interest.
It follows that
\begin{eqnarray}
\fl\nonumber
\langle
\zeta_{l_1 m_1} \zeta_{l_2 m_2} \zeta_{l_3 m_3} \rangle_{\chi_L}
\approx
C_{l_1\;\;\, l_2\;\;\, l_3}^{m_1 m_2 m_3}
\Big( \frac{2}{\pi} \Big)^3
\int_0^{\pi / 2} \sin^2 \chi \; d \chi
\\ \nonumber
\mbox{} \times
\Bigg\{
\sum_{k_1 \geq (l_1 + 1)} k_1^2 j_{l_1} (k_1 \chi)
j_{l_1} (k_1 \chi_L)
\Bigg\}
\Bigg\{ 1 \leftrightarrow 2 \Bigg\}
\Bigg\{ 1 \leftrightarrow 3 \Bigg\}
\\
\mbox{} \times
[1-(-1)^{k_t}] \xi(k_1,k_2,k_3)
\end{eqnarray}
The influence of the $[1-(-1)^{k_t}]$ term averages out to $1$ as we sum
over, say, $k_3$, and the summation over the $k_i$ may then be replaced by
integrals to give
\begin{eqnarray}
\fl\nonumber
\langle
\zeta_{l_1 m_1} \zeta_{l_2 m_2} \zeta_{l_3 m_3} \rangle_{\chi_L}
\approx
C_{l_1\;\;\, l_2\;\;\, l_3}^{m_1 m_2 m_3}
\Big( \frac{2}{\pi} \Big)^3
\int_0^{\pi / 2} \sin^2 \chi \; d \chi
\\ \label{eq:angular-bispectrum-intermediate}
\mbox{} \times
\Bigg\{
\int_{(l_1 + 1)}^\infty d k_1 \; k_1^2 j_{l_1} (k_1 \chi)
j_{l_1} (k_1 \chi_L)
\Bigg\}
\Bigg\{ 1 \leftrightarrow 2 \Bigg\}
\Bigg\{ 1 \leftrightarrow 3 \Bigg\}
\xi(k_1,k_2,k_3)
\end{eqnarray}
First consider the part of $\xi$ which scales with momentum like
$[k^{-6}]$, coming from those terms in Eq.~\eref{final_correlator}
which were computed by Maldacena and are present in the flat space
expectation value. In order to carry out the integrations in
Eq.~\eref{eq:angular-bispectrum-intermediate} explicitly
it is most convenient if the terms involving each $k_i$ in
$\xi$ factorize, so that each $k_i$ integral can be evaluated independently.
The necessary components of $\xi$ are
\begin{equation}
\label{xi_large_k}
\fl
\xi(k_1,k_2,k_3)
=
\frac{\dot{\rho}_{*}^{6}}
{8 \dot{\phi}_{*}^{2} k_{1}^{3} k_{2}^{3} k_{3}^{3}}
\Big(
\frac{4}{k_t} \sum_{i>j} k_i^2 k_j^2
+ \frac{1}{2} \sum_{i} k_i^3
+ \frac{1}{2} \sum_{i \ne j} k_i k_j^2
+ \frac{2\dot{\rho}_* \ddot{\phi}_*}{\dot{\phi}_*^3}
\sum_{i} k_i^3
\Big)+\ldots
\end{equation}
There is a small variation with $k$ owing to the variation of
$\dot{\rho}_\ast$, $\dot{\phi}_\ast$ and $\ddot{\phi}_\ast$ with the
epoch of horizon exit. Ignoring this slow variation, however,
the only term which does not factorize involves $1/k_t$.
A similar obstruction was encountered by Smith \& Zaldarriaga
\cite{Smith:2006ud}, who found a factorizable form by
introducing a Schwinger parameter,
\begin{equation}
\frac{1}{k_t} \equiv \int_0^\infty d t \; e^{-t (k_1 + k_2 + k_3)} .
\end{equation}
To understand which regions make significant contributions to the
integral in
Eq.~\eref{eq:angular-bispectrum-intermediate}
one can account for the influence of the Bessel functions
using a stationary phase method.
In practice, one employs a WKB approximation
to write these functions as an amplitude multiplied by a term with
rapidly varying phase.
An appropriate WKB solution can be constructed from the defining
equation of the spherical Bessel functions,
\begin{equation}
\frac{d^2}{d z^2}
\left[ z j_l (z) \right]
=\Big({l(l+1)\over z^2} -1 \Big)
\left[ z j_l(z) \right] .
\end{equation}
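One can verify this defining equation directly (a sketch using the closed form for $l = 1$, $z\,j_1(z) = \sin z/z - \cos z$, with second derivatives taken by central differences):

```python
import math

def u(z):
    # u(z) = z * j_1(z) = sin(z)/z - cos(z), closed form for l = 1
    return math.sin(z) / z - math.cos(z)

l, h = 1, 1e-4
residuals = []
for z in (1.0, 5.0, 12.0):
    lhs = (u(z + h) - 2 * u(z) + u(z - h)) / h**2   # central-difference u''
    rhs = (l * (l + 1) / z**2 - 1.0) * u(z)
    residuals.append(abs(lhs - rhs))

print(residuals)   # all tiny: the ODE is satisfied at each test point
```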
The derivative of the WKB phase satisfies
\begin{equation}
\frac{d}{d k}(\textrm{WKB phase})
=
\frac{d}{d k} \int_{\sqrt{l(l+1)}}^{k\chi} \sqrt{1-{l(l+1)\over z^2}}
\; d z
=
\chi \sqrt{1-{l(l+1)\over k^2 \chi^2}}.
\end{equation}
First consider contributions to
the $\chi$ integral in Eq.~\eref{eq:angular-bispectrum-intermediate}
from the region where $\chi \gg \chi_L$.
There are two cases, depending whether $k^2$ is larger or smaller than
$l(l+1)/\chi^2$.
For very large $k$, the
stationary phase approximation implies that
there is essentially no contribution to the integral.
We are therefore left with the region $k^2 \le l(l+1)/\chi^2$.
Since $\chi \gg \chi_L$ it follows that $k^2 \chi_L^2 \ll l(l+1)$, and
therefore $j_l (k \chi_L)$ is becoming exponentially suppressed.
It follows that there is also a negligible contribution in this case.
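The exponential suppression of $j_l(k\chi_L)$ when $k^2\chi_L^2 \ll l(l+1)$ invoked here is easy to see numerically (a sketch using Miller's downward recurrence for $j_l$; the chosen order and arguments are illustrative):

```python
import math

def spherical_jl(l, x):
    # Miller's downward recurrence for j_l, normalized via j_0 = sin(x)/x;
    # start order chosen well above both l and x for stability
    M = l + 25
    saved = [0.0] * (M + 2)
    saved[M + 1], saved[M] = 0.0, 1e-30       # arbitrary seed values
    for n in range(M, 0, -1):
        saved[n - 1] = (2 * n + 1) / x * saved[n] - saved[n + 1]
    scale = (math.sin(x) / x) / saved[0]
    return saved[l] * scale

print(spherical_jl(8, 2.0))    # of order 1e-5: argument below the order
print(spherical_jl(8, 10.0))   # of order 0.1: oscillatory regime
```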
The only significant contribution comes from the region where
$\chi \lesssim \chi_L$.
In this region we can approximate
$\sin \chi \approx \chi$ because $\chi_L \ll 1$; the error this induces
in the region $\chi > \chi_L$ is immaterial because the integrand is
exponentially suppressed there.
Moreover,
in view of what has been said about the region where $\chi \gg \chi_L$,
there is very little
error in extending the range of $\chi$ integration to infinity.
Accordingly, we have
\begin{eqnarray}
\fl\nonumber
\langle \zeta_{l_1 m_1} \zeta_{l_2 m_2} \zeta_{l_3 m_3} \rangle_{\chi_L}
\approx
C_{l_1\;\;\, l_2\;\;\, l_3}^{m_1 m_2 m_3}
\Big( \frac{2}{\pi} \Big)^3
\int_0^{\infty} \chi^2 \; d \chi \\
\times
\Bigg\{
\int_{(l_1 +1)}^{\infty} d k_1 \; k_1^2 j_{l_1} (k_1 \chi)
j_{l_1} (k_1 \chi_L)
\Bigg\}
\Bigg\{ 1 \leftrightarrow 2 \Bigg\}
\Bigg\{ 1 \leftrightarrow 3 \Bigg\}
\xi(k_1,k_2,k_3) .
\end{eqnarray}
We are assuming that $\xi$ is factorizable, but if it is not
a similar discussion applies after the introduction of Schwinger parameters.
Let us relate this expression to the flat space limit, where $\chi_L \ll 1$.
There is only an exponentially suppressed contribution to each $k$
integral from the region $k_i \in [0, l_i + 1]$, because if $\chi_L \ll 1$
then it must also be true that $k_i^2 \chi_L^2 \ll l_i (l_i + 1)$
for each $i$,
and it follows that the Bessel function $j_{l_i}(k_i \chi_L)$ is undergoing
exponential suppression in this region. We therefore incur essentially
no penalty in extending the lower limit of integration to $0$, giving
\begin{eqnarray}
\fl\nonumber
\langle \zeta_{l_1 m_1} \zeta_{l_2 m_2} \zeta_{l_3 m_3} \rangle_{\chi_L}
\approx
C_{l_1\;\;\, l_2\;\;\, l_3}^{m_1 m_2 m_3}
\Big( \frac{2}{\pi} \Big)^3
\int_0^{\infty} \chi^2 \; d \chi \\
\mbox{} \times
\Bigg\{
\int_{0}^{\infty} d k_1 \; k_1^2 j_{l_1} (k_1 \chi)
j_{l_1} (k_1 \chi_L)
\Bigg\}
\Bigg\{ 1 \leftrightarrow 2 \Bigg\}
\Bigg\{ 1 \leftrightarrow 3 \Bigg\}
\xi(k_1,k_2,k_3)
\end{eqnarray}
This is the standard formula for the angular bispectrum of the
curvature perturbation with flat spatial slices. It follows that in
the approximate flat space limit, we can compare $\xi$ between the
cases of $\mathbb{R}^3$ and $S^3$ spatial slices to find an estimate
of the comparative magnitude of $f_{\mathrm{NL}}$. One finds
\begin{eqnarray}
\nonumber\fl
f_{\mathrm{NL}}
= - \frac{5}{12 \sum_l k_l^3}
\Big[
\frac{\dot\phi_*^2}{\dot\rho_*^2}
\Big( \frac{4}{k_t} \sum_{i>j} k_i^2 k_j^2
+ \frac{1}{2} \sum_i k_i^3
+ \frac{1}{2} \sum_{i\ne j} k_i k_j^2
+ \frac{2\dot\rho_* \ddot \phi_*}{\dot\phi_*^3}
\sum_i k_i^3
\Big) \\ \qquad
\mbox{} +
2 \Big( -12 \frac{k_1 k_2 k_3}{k_t^2}
+ 6 k_t
- \frac{6}{k_t^2} \sum_{i \ne j} k_i k_j^2
-\frac{1}{\dot\rho_*^2 e^{2 \rho_*}}\sum_i k_i^3
\Big)
\Big] .
\label{fnl}
\end{eqnarray}
The correction terms can be larger than the flat space result when
$k \lesssim \epsilon^{-1/2}$.
\section{Conclusions}
\label{sec:conclusions}
We work in the scenario where the Universe is a slowly rolling but
positively curved spacetime, whose constant-density spatial slices are
copies of $S^3$ rather than $\mathbb{R}^3$.
This possibility is compatible with observation if
$|\Omega_k| \lesssim 10^{-2}$--$10^{-3}$, depending whether the universe
is taken to be positively or negatively curved.
Indeed, interpreting present observational limits literally,
the diameter of the CMB $d_{\ell=2}$ satisfies approximately
$d_{\ell=2}\approx 0.46$ (in units where the radius of curvature
is unity), so that the $S^2$ corresponding to
the surface of last scattering spans about a twelfth of the
circumference of a great circle on the $S^3$ of constant time. On such a last
scattering wall one can expect that the fluctuation eigenmodes on $S^3$ of
$k\gtrsim 8$ will contribute to the CMB.
Modes of long wavelength, corresponding to small $k$, will pass outside
the horizon early and therefore will be sensitive to the presence of curvature.
The question is to what degree a realistic observable such as the power
spectrum, or $f_{\mathrm{NL}}$, can constrain the appearance of primordial curvature.
We have shown that such modes, of small $k$, will not contribute
significantly
to the spectrum, but may contribute to $f_{\mathrm{NL}}$.
We perform the appropriate selection of the vacuum for our slowly rolling de
Sitter type spacetime, corresponding to the Hartle--Hawking state,
and demonstrate how one should execute the calculation
of both the two and three point functions on this manifold. Along the way
we find exact expressions for the scalar fluctuation modes in global de
Sitter coordinates, and since these are rather unwieldy we explore appropriate
approximations. For a generic potential $V$ one finds that the contributions
to the action due to curvature at leading order in slow roll are remarkably
simple. Further, we show how the
$S^3$ slicing can be reliably compared with the conventional flat-slicing
calculation on an $S^2$ surface corresponding to last scattering.
In this way we can establish continuity with the $\mathbb{R}^3$ calculation
as one approaches the flat-space limit.
We estimate the contribution of small-$k$ harmonics to the bispectrum.
To compare with observations, it is most convenient
to state the result in terms of $f_{\mathrm{NL}}$, given in Eq.~\eref{fnl}.
As written, this expression should be understood to be valid
in the approximately equilateral case where all $k_i$ have
an approximately equal magnitude. However, we believe our calculation
could be generalized, using methods similar to those of
Maldacena \cite{Maldacena:2002vr}, to accommodate the squeezed limit.
How large an $f_{\mathrm{NL}}$ can be obtained? The $k=1$ mode is the homogeneous
background, and $k=2$ is pure gauge. Therefore the interesting
contributions to microwave background fluctuations must come from
modes where $k \ge 3$.
To estimate an upper limit,
we specialize to the equilateral limit where $k_i = k$
for all $i$. After collecting terms in Eq.~\eref{fnl}, this yields
\begin{equation}
f_{\mathrm{NL}} = -\frac{5}{36} (23 \epsilon_* - 6 \eta_*) - \frac{145}{54 k^2} \, .
\end{equation}
The first term, proportional to the slow-roll parameters $\epsilon$ and
$\eta$, is precisely the flat-space contribution first derived by
Maldacena. The second term is new, and accounts for the leading effect of
curvature. It is irrelevant in the limit $k \rightarrow \infty$, but can
be large for small $k$.
Since the signal is maximized by choosing $k$
as small as possible, we can obtain an approximate upper bound by
setting $k = 3$, leading to $f_{\mathrm{NL}} \sim 0.3$.
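The quoted bound follows directly from the equilateral-limit expression (a quick numerical sketch; the slow-roll values $\epsilon_* = \eta_* = 0.01$ are illustrative placeholders, not fits to data):

```python
def f_nl(k, eps, eta):
    # equilateral-limit estimate from the text: flat-space piece
    # plus the leading curvature correction
    return -(5.0 / 36.0) * (23.0 * eps - 6.0 * eta) - 145.0 / (54.0 * k**2)

curvature_term = 145.0 / (54.0 * 3**2)   # |curvature contribution| at k = 3
print(curvature_term)                    # ~0.298, the quoted f_NL ~ 0.3
print(f_nl(3, 0.01, 0.01))               # small k: curvature term dominates
print(f_nl(100, 0.01, 0.01))             # large k: flat-space piece only
```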
It follows that the effect of curvature is marginally below the expected
detection threshold for the \emph{Planck} satellite, usually supposed
to be of order $f_{\mathrm{NL}} \sim 5$
\cite{Komatsu:2001rj}, but might lie on the limit of what is
practicable with futuristic technology such as
a high-redshift 21cm survey
\cite{Cooray:2006km,Pillepich:2006fj}.
In the latter case, however, one would need
some way to distinguish this small signal from the ubiquitous
non-linearities of gravity itself \cite{Boubekeur:2009uk}.
\ack
TC acknowledges support from EPSRC. DS is supported by STFC.
\section*{Acknowledgements}
The author would like to thank Tiziano Peraro for useful discussions.
This work is supported by Fondazione Cassa di Risparmio di Padova e Rovigo (CARIPARO) and
by Padua University Project CPDA144437.
\\The Feynman diagrams depicted in this paper were generated using {{\sc FeynArts}}~\cite{Hahn:2000kx}.
\bibliographystyle{utphys}
\section{Introduction}
\label{sec:intro}
With the successful results delivered by Run I, more accurate theoretical predictions for measured quantities are needed. In order to test phenomenological predictions we make use of scattering amplitudes, which can be studied in terms of their symmetries and analytic properties. Tree-level or Leading-Order (LO) computations provide qualitative information, affected by large uncertainties due to the slow convergence of the perturbative expansion in the coupling constant. Therefore, to establish a proper comparison between theory and data, Next-to-Leading-Order (NLO) accuracy is needed.\\
As a main ingredient of the NLO contributions we consider one-loop corrections, in which any amplitude can be decomposed into an explicit set of Master Integrals (MI's), where the coefficients appearing in this combination are rational functions of the kinematic variables~\cite{Passarino:1978jh}. It is possible to recover the structure of scattering amplitudes
at the integral level by constructing the integrands through the multi-particle
pole expansion arising from the analyticity and unitarity properties
of the S-matrix. In fact, scattering amplitudes analytically continued to complex
momenta reveal their singularity structures in terms of poles and branch
cuts. The unitarity-based method (UBM) allows one to determine the coefficients of the MI's by
expanding the integrand of tree-level cut amplitudes into an expression
that resembles the cut of the basis integrals.
In this talk, we review the four-dimensional formulation (FDF) proposed in~\cite{Fazio:2014xea}, in which the Four-Dimensional-Helicity (FDH) scheme~\cite{Bern:1991aq,Bern:1995db,Bern:2002zk} is extended by considering ingredients in four dimensions and by providing explicit representations of the polarisation and helicity states of the four-dimensional particles propagating in the loop. FDF has been successfully applied to reproduce one-loop corrections to
$gg \to gg$, $q {\bar q} \to gg$, $gg \to Hg$ (in the heavy top
limit), as well as $gg \to ggg$ and $gg \to gggg$ \cite{Bobadilla:2015wma}.
In addition, we study the Colour-Kinematics (C/K) dual representation of QCD amplitudes by considering the kinematic part of the numerators, which is found to obey Jacobi identities
and anti-symmetry relations similar to the ones holding for the
corresponding structure constants of the Lie algebra~\cite{Bern:2008qj,Bern:2010ue}.
\par\noindent We report the results obtained in~\cite{Mastrolia:2015maa}, where we consider the tree-level diagrams for $gg\to X$, for massless final-state particles, with $X = ss, q\bar{q}, gg$, in four and $d$ dimensions. We work in axial gauge, describing scalars in the adjoint representation and fermions in the fundamental one. We deal with the Jacobi relation of the kinematic numerators keeping the partons off-shell. Due to the off-shellness of the external particles, the C/K-duality is broken, and anomalous terms emerge. This anomaly vanishes in the on-shell limit, as it should, recovering the exact C/K-duality.\\
\section{Four-Dimensional-Formulation}
\label{AppA}
In this section we briefly recall the main features of the FDF scheme.
\begin{itemize}
\item We use barred notation for quantities referring to unobserved particles, which live in a $d$-dimensional space. Thus, the metric tensor
\begin{align}
\bar g^{\mu \nu} = g^{\mu \nu} + \tilde g^{\mu \nu} \, ,
\end{align}
can be decomposed in terms of a four-dimensional tensor $g$ and a $-2 \epsilon$-dimensional one, $ \tilde g$.
The tensors $g$ and $\tilde g$ project a $d$-dimensional vector $\bar q$ into the four-dimensional and the
$-2 \epsilon$-dimensional subspaces, respectively.
\item $d$-dimensional momenta $\bar \ell$ are decomposed as
\begin{align}
\bar \ell = \ell + \tilde \ell \, , \qquad \bar \ell^2 = \ell^2 -\mu^2 = m^2 \,
\label{Eq:Dec0}
\end{align}
\item The algebra of matrices $\tilde \gamma^\mu = \tilde g^{\mu}_{\phantom{\mu} \nu} \, \bar \gamma^\nu$,
\begin{align}
[ \tilde \gamma^{\alpha}, \gamma^{5} ] &= 0 \, , &
\{\tilde \gamma^{\alpha}, \gamma^{\mu} \} &=0 \ , \label{Eq:Gamma01} &
\{\tilde \gamma^{\alpha}, \tilde \gamma^{\beta} \} &= 2 \, \tilde g^{\alpha \beta} \, .
\end{align}\label{Eq:Gamma02}
is implemented through the substitutions
\begin{align}
\tilde g^{\alpha \beta} \to G^{AB}, \qquad \tilde \ell^{\alpha} \to i \, \mu \, Q^A \; , \qquad \tilde \gamma^\alpha \to \gamma^5 \, \Gamma^A\, .
\label{Eq:SubF}
\end{align}
together with the set of selection rules, ($-2\epsilon$)-SRs,
\begin{align}
G^{AB}G^{BC} &= G^{AC}, & G^{AA}&=0, & G^{AB}&=G^{BA}, &
\Gamma^A G^{AB} &= \Gamma^B,\pagebreak[0] \nonumber \\ \Gamma^A \Gamma^{A} &=0, & Q^A \Gamma^{A} &=1, &
Q^A G^{AB} &= Q^B, & Q^A Q^{A} &=1.
\label{Eq:2epsA}
\end{align}
which, ensuring the exclusion of terms containing odd powers of $\mu$, completely
defines the FDF and allows the construction of integrands which, upon
integration, yield the same result as in the FDH scheme.
\item
The spinors of a $d$-dimensional fermion fulfil the completeness relations
\begin{align}
\sum_{\lambda=\pm}u_{\lambda}\left(\ell \right)\bar{u}_{\lambda}\left(\ell \right) & = \slashed \ell + i \mu \gamma^5 + m \, , &
\sum_{\lambda=\pm}v_{\lambda}\left(\ell \right)\bar{v}_{\lambda}\left(\ell \right) & = \slashed \ell + i \mu \gamma^5 - m \, ,
\label{Eq:CompF4}
\end{align}
which consistently reconstruct the numerator of the cut propagator.
\item In the axial gauge, the helicity sum of a $d$-dimensional transverse polarisation vector can be disentangled as
\begin{small}
\begin{align}
&\sum_{i=1}^{d -2} \, \varepsilon_{i\, (d)}^\mu\left (\bar \ell , \bar \eta \right )\varepsilon_{i\, (d)}^{\ast \nu}\left (\bar \ell , \bar \eta \right ) =
\left ( - g^{\mu \nu} +\frac{ \ell^\mu \ell^\nu}{\mu^2} \right) -\left ( \tilde g^{\mu \nu} +
\frac{ \tilde \ell^\mu \tilde \ell^\nu}{\mu^2} \right ) \, ,
\label{Eq:CompGD2}
\end{align}
\end{small}
\noindent where the first term can be regarded as the cut propagator of a massive vector boson,
\begin{align}
\sum_{\lambda=\pm,0}\varepsilon_{\lambda}^{\mu}(\ell) \, \varepsilon_{\lambda}^{*\nu}(\ell)&= -g^{\mu\nu}+\frac{\ell^{\mu}\ell^{\nu}}{\mu^{2}} \, , \label{flat}
\end{align}
and the second term on the r.h.s. of Eq.~(\ref{Eq:CompGD2}) is related to the numerator of the cut propagator of the scalar $s^{\bullet}$ and can be expressed in terms
of the $(-2 \epsilon)$-SRs as:
\begin{equation}
\tilde g^{\mu \nu} +\frac{ \tilde \ell^\mu \tilde \ell^\nu}{\mu^2} \quad \to \quad G^{AB} - Q^A Q^B \, .
\label{Eq:Pref}
\end{equation}
\end{itemize}
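The decomposition in Eq.~(\ref{Eq:Dec0}), combined with the anticommutation relation $\{\gamma^5,\gamma^\mu\}=0$, implies $(\slashed \ell + i\mu\gamma^5)^2 = (\ell^2-\mu^2)\,\mathbb{1}$, i.e. the statement $\bar\ell^2 = \ell^2 - \mu^2$ realised on the four-dimensional Dirac algebra. This can be verified numerically with explicit gamma matrices (a standalone sketch; the momentum components and the value of $\mu$ are arbitrary illustrative numbers):

```python
import numpy as np

# Explicit Dirac-representation gamma matrices, metric (+,-,-,-).
sig = [np.array([[0, 1], [1, 0]], dtype=complex),
       np.array([[0, -1j], [1j, 0]]),
       np.array([[1, 0], [0, -1]], dtype=complex)]
Z = np.zeros((2, 2), dtype=complex)
g0 = np.diag([1, 1, -1, -1]).astype(complex)
g = [np.block([[Z, s], [-s, Z]]) for s in sig]
g5 = 1j * g0 @ g[0] @ g[1] @ g[2]

# Arbitrary illustrative four-momentum l and (-2 eps)-component scale mu.
l = np.array([5.0, 1.0, 2.0, 3.0])          # l^2 = 25 - 14 = 11
mu = 1.5
lsq = l[0]**2 - l[1:] @ l[1:]

slash = l[0] * g0 - sum(l[i + 1] * g[i] for i in range(3))
M = slash + 1j * mu * g5                    # FDF loop-momentum combination

# Since {g5, g^mu} = 0, the cross terms cancel and M @ M = (l^2 - mu^2) * 1.
assert np.allclose(M @ M, (lsq - mu**2) * np.eye(4))
print("lbar^2 =", lsq - mu**2)
```

The same cancellation of the cross terms is what makes the fermionic completeness relations of Eq.~(\ref{Eq:CompF4}) consistently reconstruct the cut propagator of Eq.~(\ref{Eq:FRfer}).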
Within the FDF scheme, the QCD $d$-dimensional Feynman rules in axial gauge take the following four-dimensional form:
\begin{subequations}
\vspace{-0.4cm}
\begin{align}
\parbox{20mm}{
\unitlength=0.20bp%
\begin{feynartspicture}(300,300)(1,1)
\FADiagram{}
\FAProp(4.,10.)(16.,10.)(0.,){/Cycles}{0}
\FALabel(5.5,8.93)[t]{\tiny $a, \alpha$}
\FALabel(14.5,8.93)[t]{\tiny $b, \beta$}
\FALabel(10.,12.5)[]{\tiny $k$}
\FAVert(4.,10.){0}
\FAVert(16.,10.){0}
\end{feynartspicture}} &= i \, \frac{ \delta^{ab} }{k^2 -\mu^2 +i 0}\bigg(-g^{\alpha\beta}+\frac{k^{\alpha}k^{\beta}}{\mu^2}\bigg)\,
\label{Eq:FRglu}\\[-3.0ex]
\parbox{20mm}{\unitlength=0.20bp%
\begin{feynartspicture}(300,300)(1,1)
\FADiagram{}
\FAProp(4.,10.)(16.,10.)(0.,){/ScalarDash}{0}
\FALabel(5.5,8.93)[t]{\tiny $a, A$}
\FALabel(14.5,8.93)[t]{\tiny $b, B$}
\FALabel(10.,12.5)[]{\tiny $k$}
\FAVert(4.,10.){0}
\FAVert(16.,10.){0}
\end{feynartspicture}} &= i \, \,\frac{\delta^{ab}}{k^2 -\mu^2+ i0} \left(G^{AB}-Q^AQ^B\right)\, ,
\\[-3.0ex]
\parbox{20mm}{\unitlength=0.20bp%
\begin{feynartspicture}(300,300)(1,1)
\FADiagram{}
\FAProp(4.,10.)(16.,10.)(0.,){/Straight}{1}
\FALabel(5.5,8.93)[t]{\tiny $i$}
\FALabel(14.5,8.93)[t]{\tiny $j$}
\FALabel(10.,12.5)[]{\tiny $k$}
\FAVert(4.,10.){0}
\FAVert(16.,10.){0}
\end{feynartspicture}} &= i \, \delta^{ij} \,\frac{ \slashed k + i \mu \gamma^5 +m }{k^2 -m^2 -\mu^2+i0} \, ,
\label{Eq:FRfer}
\end{align}
\label{Eq:FR4}
\end{subequations}
\section{One-loop amplitudes}
\label{sec:2P}
In this section we present preliminary results obtained through FDF for the leading colour-ordered one-loop helicity amplitudes $A_{5}\left(1^{+},2^{+},3^{+},4^{+},\text{H}\right)$ and $A_{6}\left(1^{+},2^{+},3^{+},4^{+},5^{+},\text{H}\right)$ in the heavy top mass limit.\\
Besides providing the analytic values of the coefficients of each master integral, we give the necessary ingredients to carry out this computation.
In order to apply generalised-unitarity methods within FDF, we consider as examples the one-loop $2 \to 2,3,4$ scattering amplitudes, where external particles can be either gluons, quarks or the Higgs boson.
\smallskip
In general, due to the decomposition formulae, any massless $n$-point one-loop amplitude can be
decomposed in terms of MIs, as follows
\begin{multline}
A_n^{\text{1-loop}} {}= \frac{1}{(4 \pi)^{2-\epsilon}} \, \sum_{i<j<k<l}^{n-1} \bigg [ c_{i|j|k|l;\,0}\, I_{i|j|k|l}+c_{ij|k|l;\,0}\, I_{ij|k|l}
+c_{ij|kl;\,0}\, I_{ij|kl}\\
+ c_{i|j|k|l;\,4}\, I_{i|j|k|l}[\mu^{4}]+c_{ij|k|l;\,2}\, I_{ij|k|l}[\mu^{2}]
+c_{ij|kl;\,2}\, I_{ij|kl}[\mu^{2}] \bigg ] \, .
\label{Eq:Decomposition}
\end{multline}
In Eq.~(\ref{Eq:Decomposition}), the first line corresponds to the cut-constructible part, while the second one to the rational part. Once again, we emphasise that FDF gives the full contribution to the one-loop amplitude, with no need to distinguish between these two pieces.
The coefficients $c$'s entering in the decompositions~(\ref{Eq:Decomposition}) can
be obtained by using the generalised unitarity techniques for quadruple~\cite{Britto:2004nc,Badger:2008cm},
triple~\cite{Mastrolia:2006ki,Forde:2007mi,Badger:2008cm}, and double~\cite{Britto:2005ha, Britto:2006sj, Mastrolia:2009dr} cuts. Since internal particles are massless, the single-cut
techniques~\cite{Kilgore:2007qr,Britto:2009wz, Britto:2010um} are not needed for this computation. In general, the cut $C_{i_1\cdots i_k}$, defined by the conditions $D_{i_1} =\cdots = D_{i_k}=0$, allows for the determination of the coefficients $c_{i_1\cdots i_k; \, n}$.
\subsection{The gggggH amplitude}
We show the explicit structure of the analytic contribution of the one-loop Higgs plus five-gluon all-plus amplitude. For the sake of simplicity, we do not write the coefficients of the finite part of the amplitude.
\noindent The leading-order contribution of the six-point amplitude can be written as,
\begin{align}
A_{6,H}^{\text{tree}}\left(1^{+},2^{+},3^{+},4^{+},5^{+},H\right)=\frac{-i\,m_{H}^{4}}{\langle1|2\rangle\langle2|3\rangle\langle3|4\rangle\langle4|5\rangle\langle5|1\rangle}.
\end{align}
\begin{figure}[htb!]
\begin{subequations}
\begin{align*}
&\parbox{15mm}{\input{FeynmanDiagrams/5gHQ1a.tex}}\qquad
\parbox{15mm}{\input{FeynmanDiagrams/5gHQ1b.tex}}\qquad
\parbox{15mm}{\input{FeynmanDiagrams/5gHQ1c.tex}}\qquad
\parbox{15mm}{\input{FeynmanDiagrams/5gHQ2a.tex}}\qquad
\parbox{15mm}{\input{FeynmanDiagrams/5gHQ2b.tex}}\qquad
\parbox{15mm}{\input{FeynmanDiagrams/5gHQ3a.tex}}\qquad
\parbox{15mm}{\input{FeynmanDiagrams/5gHQ3b.tex}}
\\
&\parbox{15mm}{\input{FeynmanDiagrams/5gHT1a.tex}}\qquad
\parbox{15mm}{\input{FeynmanDiagrams/5gHT1b.tex}}\quad
\parbox{15mm}{\input{FeynmanDiagrams/5gHT2a.tex}}\qquad
\parbox{15mm}{\input{FeynmanDiagrams/5gHT2b.tex}}\qquad
\parbox{15mm}{\input{FeynmanDiagrams/5gHT2c.tex}}\qquad
\parbox{15mm}{\input{FeynmanDiagrams/5gHT3.tex}}\\
&\parbox{15mm}{\input{FeynmanDiagrams/5gHD1a.tex}}\qquad
\parbox{15mm}{\input{FeynmanDiagrams/5gHD2a.tex}}\qquad
\parbox{15mm}{\input{FeynmanDiagrams/5gHD2b.tex}}\qquad
\parbox{15mm}{\input{FeynmanDiagrams/5gHD3.tex}}
\end{align*}
\end{subequations}
\caption{Independent box-, triangle- and bubble-integral topologies for the amplitude $A_6^{\text{1-loop}}(H,1,2,3,4,5)$.}\label{6top}
\end{figure}
The one-loop correction to this amplitude is obtained by considering the independent topologies depicted in fig.~\ref{6top} and takes the form,
\begin{align}
A_{6}^{1-loop}\left(H,1^+,2^+,3^+,4^+,5^+\right)=&\frac{1}{2}A_{6}^{tree}\left(s_{1234}s_{1235}-s_{123}m_{H}^{2}\right)\,I_{123|4|5|H}\left[1\right]-\frac{1}{2}A_{6}^{tree}s_{34}s_{45}\,I_{H12|3|4|5}\left[1\right]\pagebreak[0] \nonumber \\
&-\frac{1}{2}A_{6}^{tree}\left(s_{234}s_{345}-s_{34}s_{2345}\right)\,I_{H1|2|34|5}+\frac{1}{2}A_{6}^{tree}\left(s_{1234}-m_{H}^{2}\right)\,I_{1234|5|H}\left[1\right]\pagebreak[0] \nonumber \\
&+c_{123|4|H|5}I_{123|4|H|5}[\mu^{4}]+c_{H12|3|4|5}I_{H12|3|4|5}[\mu^{4}]\pagebreak[0] \nonumber \\
&+c_{1234|5|H}I_{1234|5|H}[\mu^{2}]+c_{1234|H|5}I_{1234|H|5}[\mu^{2}]+c_{H123|4|5}I_{H123|4|5}[\mu^{2}]\pagebreak[0] \nonumber \\
&+c_{H12|34|5}I_{H12|34|5}[\mu^{2}]+c_{123|4|5H}I_{123|4|5H}[\mu^{2}]+c_{123|4H|5}I_{123|4H|5}[\mu^{2}]\pagebreak[0] \nonumber \\
&+c_{12|345H}I_{12|345H}[\mu^{2}]+c_{123|45H}I_{123|45H}[\mu^{2}]+c_{H1|2345}I_{H1|2345}[\mu^{2}]\pagebreak[0] \nonumber \\
&+\text{cyclic perm,}
\end{align}
where the $c$'s are non-vanishing coefficients.\\
A similar study was carried out for the analytic expression of the one-loop Higgs plus four-gluon amplitudes, wherein the helicity configurations $A_{5}\left(H,1^{+},2^{+},3^{+},4^{+}\right)$, $A_{5}\left(H,1^{-},2^{+},3^{+},4^{+}\right)$, $A_{5}\left(H,1^{-},2^{-},3^{+},4^{+}\right)$ and $A_{5}\left(H,1^{-},2^{+},3^{-},4^{+}\right)$ have been considered, finding agreement with~\cite{Badger:2006us,Badger:2009hw}.\\
The procedure for computing the one-loop amplitudes given above has
been fully automated. In particular, we have implemented the FDF Feynman rules (including
the $(-2\epsilon)$-SRs) in {{\sc FeynArts}}/{{\sc FeynCalc}}\,~\cite{Hahn:2000kx}, in order to automatically build the
tree-level amplitudes to be sewn in the cuts. Then, the coefficients of the master integrals are determined by applying integrand reduction via Laurent expansion~\cite{Mastrolia:2012bu}, which has been implemented in Mathematica, by using the package {{\sc S@M}}~\cite{Maitre:2007jq}.
\section{Colour-Kinematics-duality}
\label{sec:3P}
In this section we briefly describe the diagrammatic study of the C/K-duality we presented in \cite{Mastrolia:2015maa}, where off-shell tree-level currents are embedded in higher-multiplicity/multi-loop amplitudes. This investigation is carried out by considering the tree-level diagrams for the process $gg\to X$, for massless final states, with $X=ss, q\bar{q},gg$, in four dimensions. The same analysis has then been extended to dimensionally regulated amplitudes, taking FDF as the regularisation scheme. The calculation has been performed in axial gauge, describing scalars in the adjoint representation and fermions in the fundamental one.
\subsection{Colour-kinematics duality for gluons}
\label{CKg}
\begin{figure}[h]
\centering
\includegraphics[scale=1.15]{fig5.eps}
\caption{Jacobi combination for gluons.}\label{BCJs}
\end{figure}
\par\noindent We consider the tree-level scattering $gg\to gg$, which receives contributions from four Feynman diagrams. However, because of the colour algebra, the contribution from the $4$-gluon vertex can be absorbed into the $s$-, $t$- and $u$-channels, so that the amplitude is expressed in terms of three diagrams. The corresponding numerators, say $n_1$, $n_2$ and $n_3$, can be combined in Jacobi-like fashion as shown in Fig.~\ref{BCJs},
\begin{align}
N_{\text{g}}=-n_1+n_2+n_3. \label{BCJ}
\end{align}
The numerator of the gluon propagator in axial gauge has the form
\begin{equation}
\Pi^{\mu\nu}(p,q)=\Pi^{\mu\nu}_{\text{Fey}}+\Pi^{\mu\nu}_{\text{Ax}}(p,q),
\end{equation}
where $\Pi^{\mu\nu}_{\text{Fey}}$ corresponds to the numerator of the propagator in Feynman gauge and $\Pi^{\mu\nu}_{\text{Ax}}(p,q)$ labels the term depending on an arbitrary light-like reference momentum $q^{\mu}$,
\begin{align}
\Pi^{\mu\nu}_{\text{Fey}}&=-i\,g^{\mu\nu}\,, &
\Pi^{\mu\nu}_{\text{Ax}}(p,q)&=i\,\frac{p^\mu q^\nu+q^\mu p^\nu}{q\cdot p}.
\end{align}
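A useful property of this numerator is transversality with respect to the light-like reference vector, $q_\mu\,\Pi^{\mu\nu}(p,q)=0$, which follows directly from $q^2=0$. A quick numerical check (a standalone sketch; the momenta below are arbitrary illustrative values):

```python
import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])        # mostly-minus metric

def pi_axial(p, q):
    """Axial-gauge numerator -g^{mu nu} + (p^mu q^nu + q^mu p^nu)/(q.p),
    both indices upstairs; numerically g^{mu nu} equals eta here."""
    qp = q @ eta @ p
    return -eta + (np.outer(p, q) + np.outer(q, p)) / qp

p = np.array([3.0, 1.0, -2.0, 0.5])           # illustrative off-shell momentum
q = np.array([1.0, 0.0, 0.0, 1.0])            # light-like reference, q^2 = 0

Pi = pi_axial(p, q)
# Transversality with respect to the reference vector: q_mu Pi^{mu nu} = 0,
# since the q^2 p^nu piece vanishes for a light-like q.
assert np.allclose((eta @ q) @ Pi, 0)
print("q_mu Pi^{mu nu} = 0 verified")
```

Note that $\Pi^{\mu\nu}_{\text{Fey}}$ alone is not transverse; it is the axial term that restores $q_\mu\,\Pi^{\mu\nu}=0$.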
The explicit form of (\ref{BCJ}) is given by the contraction of an
off-shell current with gluon polarisations as,
\begin{align}
\left(N_{\text{g}}\right)_{\alpha_1...\alpha_4}=
\big(J^{\mu_{1}..\mu_{4}}_{\text{g-Fey}}+J^{\mu_{1}...\mu_{4}}_{\text{g-Ax}}\big)
\varepsilon_{\mu_1}\left(p_1,q_1\right)
\varepsilon_{\mu_2}\left(p_2,q_2\right)
\varepsilon_{\mu_3}\left(p_3,q_3\right)
\varepsilon_{\mu_4}\left(p_4,q_4\right),
\label{BCJtg}
\end{align}
where $J_{\text{g-Fey}}^{\mu_{1}\ldots\mu_{4}}$ is the sum of the Feynman gauge-like terms of the three numerators,
\begin{align}
-i\,{J}_{\text{g-Fey}}^{\mu_{1}\mu_{2}\mu_{3}\mu_{4}}(p_1,p_2,p_3,p_4)
=&p_{1}^{\mu_{1}}[g^{\mu_{3}\mu_{4}}\left(p_{1}+2p_{4}\right)^{\mu_{2}}-g^{\mu_{2}\mu_{4}}\left(p_{1}+2p_{4}\right)^{\mu_{3}}+g^{\mu_{2}\mu_{3}}\left(p_{1}+2p_{3}\right)^{\mu_{4}}]\pagebreak[0] \nonumber \\
&+\text{cyclic perm.}
\label{jfeyg}
\end{align}
and
\begin{multline}
-i\, J_{\text{g-Ax}}^{\mu_{1}\mu_{2}\mu_{3}\mu_{4}}(p_{1},p_{2},p_{3},p_{4})=\frac{1}{q\cdot(p_{1}+p_{2})}\Bigg\{\\
\left(p_{1}^{\mu_{1}}p_{1}^{\mu_{2}}-p_{2}^{\mu_{2}}p_{2}^{\mu_{1}}-\left(p_{1}^{2}-p_{2}^{2}\right)g^{\mu_{1}\mu_{2}}\right)[q\cdot\left(p_{4}-p_{3}\right)g^{\mu_{3}\mu_{4}}-\left(p_{3}+2p_{4}\right)^{\mu_{3}}q^{\mu_{4}}+\left(p_{4}+2p_{3}\right)^{\mu_{4}}q^{\mu_{3}}]\\+\left(p_{3}^{\mu_{3}}p_{3}^{\mu_{4}}-p_{4}^{\mu_{3}}p_{4}^{\mu_{4}}-\left(p_{3}^{2}-p_{4}^{2}\right)g^{\mu_{3}\mu_{4}}\right)[q\cdot\left(p_{1}-p_{2}\right)g^{\mu_{1}\mu_{2}}+\left(p_{1}+2p_{2}\right)^{\mu_{1}}q^{\mu_{2}}-\left(p_{2}+2p_{1}\right)^{\mu_{2}}q^{\mu_{1}}]\Bigg\}
\\-[(1234)\to(4123)]-[(1234)\to(4231)].
\label{jaxg}
\end{multline}
is the contribution depending on the reference momentum.
\begin{figure}[htb!]
\begin{align*}
\parbox{8mm}{\input{FeynmanDiagrams2/4g.tex}}\quad\quad&=\quad
\parbox{8mm}{\input{FeynmanDiagrams2/4gP2bg.tex}}\quad\quad+\quad
\parbox{8mm}{\input{FeynmanDiagrams2/4gP3bg.tex}}\quad\quad+\quad
\parbox{8mm}{\input{FeynmanDiagrams2/4gP4bg.tex}}\quad\quad+\quad
\parbox{8mm}{\input{FeynmanDiagrams2/4gP1bg.tex}}\quad\quad+\quad
\parbox{8mm}{\input{FeynmanDiagrams2/4g12g.tex}}\\
\\[-2.5ex]
&\qquad+\quad
\parbox{8mm}{\input{FeynmanDiagrams2/4g13g.tex}}\quad\quad+\quad
\parbox{8mm}{\input{FeynmanDiagrams2/4g14g.tex}}\quad\quad+\quad
\parbox{8mm}{\input{FeynmanDiagrams2/4g23g.tex}}\quad\quad+\quad
\parbox{8mm}{\input{FeynmanDiagrams2/4g24g.tex}}\quad\quad+\quad
\parbox{8mm}{\input{FeynmanDiagrams2/4g34g.tex}}
\end{align*}
\caption{Off-shell colour-kinematics duality for gluons. The Jacobi combination of tree-level numerators (l.h.s) is expressed in terms of subdiagrams only (r.h.s.). }
\label{bcjLoopa}
\end{figure}
With the expressions of the off-shell currents of Eqs.~(\ref{jfeyg}) and (\ref{jaxg}), we embed the Jacobi-like combination of tree-level numerators into a generic diagram, where, in the most general case, the legs $p_1,p_2,p_3$ and $p_4$ become internal lines and the polarisations associated with the particles are replaced by the numerators of their propagators.
Accordingly, Eq.~\eqref{BCJ} generalises to the following contraction,
\begin{align}
N_{\text{g}} &=(N_{\text{g}})_{\alpha_1\hdots\alpha_4}X^{\alpha_1\hdots\alpha_4},
\label{eq:Ns:def}
\end{align}
between the tensor $(N_{\text{g}})_{\alpha_1\hdots\alpha_4}$, defined as,
\begin{equation}
(N_{\text{g}})_{\alpha_1\hdots\alpha_4}=-
\big(J^{\mu_{1}\hdots\mu_{4}}_{\text{g-Fey}}+J_{\text{g-Ax}}^{\mu_{1}\hdots\mu_{4}}\big)
\Pi_{\mu_1\alpha_1}\!\!\left(p_1,q_1\right)
\Pi_{\mu_2\alpha_2}\!\!\left(p_2,q_2\right)
\Pi_{\mu_3\alpha_3}\!\!\left(p_3,q_3\right)
\Pi_{\mu_4\alpha_4}\!\!\left(p_4,q_4\right),
\label{BCJgN}
\end{equation}
and the arbitrary tensor $X^{\alpha_1\hdots\alpha_4}$, standing for the residual kinematic dependence of the diagrams, associated either with higher-point tree-level or with multi-loop topologies.
Using momentum conservation, we find that the r.h.s. of \eqref{BCJgN} can be cast in the following suggestive form,
\begin{align}
\left(N_{\text{g}}\right)_{\alpha_1...\alpha_4}\!&=
\sum_{i=1}^{4}p_i^2(A^i_g)_{\alpha_1...\alpha_4}+\sum_{\substack{i,j=1\\
i\neq j}}^{4}p_i^2p_j^2(C^{ij}_g)_{\alpha_1...\alpha_4},
\label{BCJgN1}
\end{align}
where $A^{i}_g$ and $C^{ij}_g$ are tensors depending both on the momenta $p_i$ of the gluons, possibly including the loop variables, and on the reference momenta $q_i$ of the gluon propagators.
Remarkably, Eq.~(\ref{BCJgN1}) shows the full decomposition of a combination of generic numerators built from the Jacobi relation in terms of the squared momenta of the particles entering the Jacobi identity defined in Fig.~\ref{BCJs}. In particular, this result implies that the C/K-duality is certainly satisfied when imposing the on-shell cut conditions $p_i^2 =0$. A diagrammatic representation of the consequences of the decomposition (\ref{BCJgN1}) in (\ref{eq:Ns:def}) is given in Fig.~\ref{bcjLoopa}, where each term in the r.h.s. contains an effective vertex, associated with the C/K-violating term, which we have fully identified. Similar results have been obtained for $ss$ and $q\bar q$ final states.
\subsection{Colour-kinematics duality in $d$-dimensions}
In this Section we study the C/K-duality for tree-level amplitudes in
dimensional regularisation by employing the FDF scheme, recently
introduced in~\cite{Fazio:2014xea}.
\begin{figure}[htb]
\centering
\includegraphics[scale=1.05]{bcjfdf1.eps}
\includegraphics[scale=1.05]{bcjfdf2.eps}
\includegraphics[scale=1.05]{bcjfdf3.eps}
\caption{Jacobi combinations for FDF particles.}
\label{BCJFDF}
\end{figure}
The relations depicted in Fig.~\ref{BCJFDF} are the basic building blocks for the determination of higher-order scattering amplitudes within generalised-unitarity-based methods. The particles with dots represent the generalised (or dimensionally regulated) particles, while all other lines identify particles living in four dimensions.
\noindent A diagrammatic analysis, similar to the one of Section~\ref{CKg}, was carried out in~\cite{Mastrolia:2015maa}, where the C/K-duality is shown to be obeyed by the numerators of tree-level amplitudes within the FDF scheme, through non-trivial relations involving both massless and massive particles. The C/K-duality is recovered once the transversality conditions of the (generalised) gluon polarisations and the Dirac equation are taken into account. More specifically, generalised gluons of momentum $p$ obey $\varepsilon_\lambda\cdot p=0\,(\lambda=0,\pm)$, while for generalised quarks one has $\bar{u}(p)(\slashed p-i\mu\gamma_5)=0$ and $(\slashed p-i\mu\gamma_5)v(p)=0$.
As a non-trivial example, we provide explicit expressions for the C/K-dual building blocks for the process $g^{\bullet}g^{\bullet}(s^{\bullet}s^{\bullet})\to q\bar q g$, where the initial-state gluons are treated as $d$-dimensional particles, whereas the final state remains fully four-dimensional.
\section{Conclusions}
\label{sec:4}
At the one-loop level, we have made use of unitarity methods and the Four-Dimensional-Formulation scheme to compute the analytic expressions for $gg\to ggH$ and $gg\to gggH$.
Possible applications and extensions of this study are the computation of the full analytic expression for $\text{Higgs}+3$ jets in the final state, as well as the two-loop implementation of the Four-Dimensional-Formulation scheme.
On the Colour-Kinematics side, we have explicitly shown that any higher-point/loop diagram obtained from the Jacobi identity between kinematic numerators of off-shell diagrams (constructed with Feynman rules in axial gauge)
can be decomposed in terms of sub-diagrams where one or two internal propagators are pinched. As a consequence of this decomposition, we have diagrammatically proved that the colour-kinematics duality is satisfied at multi-loop level only by imposing on-shellness of the four particles entering the Jacobi combination. This behaviour holds for $d$-dimensionally regulated amplitudes.
\section{Introduction}
\label{sectionintroduction}
Suppose we have data $(X_1,Y_1),\ldots, (X_n,Y_n)$ from a distribution
$P$, where $X_i\in\mathbb{R}^d$ and $Y_i\in\mathbb{R}$. Further, we
have a second set of data $X_{n+1},\ldots, X_N$ from the same
distribution but without the $Y$'s. We refer to ${\cal L} =
\{(X_i,Y_i)\dvtx i=1,\ldots, n\}$ as the \textit{labeled data} and
${\cal U} = \{X_i\dvtx i=n+1,\ldots, N\}$ as the \textit{unlabeled data}.
There has been a major effort, mostly in the machine learning
literature, to find ways to use the unlabeled data together with the
labeled data to constuct good predictors of $Y$. These methods are
known as \textit{semisupervised methods}. It is generally assumed that the
$m=N-n$ unobserved labels $Y_{n+1},\ldots, Y_N$ are missing
completely at random and we shall assume this throughout.
To motivate semisupervised inference,
consider the following example.
We download a large number $N$ of webpages $X_i$.
We select a small subset of size $n$ and label these
with some attribute $Y_i$.
The
downloading process is cheap
whereas the labeling process is expensive so typically $N$ is huge
while $n$ is much smaller.
\begin{figure}
\includegraphics{1092f01.eps}
\caption{The covariate $X=(X_1,X_2)$ is two dimensional.
The response $Y$ is binary and is shown as a square
or a circle. Left: the labeled data.
Right: labeled and unlabeled data.}
\label{figtoy}
\end{figure}
Figure~\ref{figtoy} shows a toy example
of how unlabeled data can help with prediction.
In this case, $Y$ is binary, $X\in\mathbb{R}^2$
and we want to find the decision boundary
$\{x\dvtx P(Y=1|X=x)=1/2\}$.
The left plot shows a few labeled data points
from which it would be challenging to find the boundary.
The right plot shows labeled and unlabeled points.
The unlabeled data show that there are two clusters.
If we make the seemingly reasonable assumption
that $f(x)=P(Y=1|X=x)$ is very smooth over the two clusters,
then identifying the decision boundary becomes much easier.
In other words, if we assume some link between $P_X$ and $f$, then we
can use the unlabeled data;
see Figure~\ref{figexplain}.
\begin{figure}[b]
\fbox{\parbox{353pt}{\Large
\[
{\cal L}_n \ \Longrightarrow \ \hat f
\hspace{1in}
{\cal L}_n \ \Longrightarrow \ \hat f\
\stackrel{\mathsf{SS}\ \mathsf{assumption}}{\Longleftarrow \!=\!=\!=\!=}
\hat{P}_X
\Longleftarrow
{\cal U}_N
\]}}
\caption{Supervised learning (left) uses only the labeled data ${\cal L}_n$.
Semisupervised learning (right) uses the unlabeled data ${\cal U}_N$ to estimate
the marginal distribution $P_X$
which helps estimate $f$ if there is some link between $P_X$ and $f$.
This link is the semisupervised (SS) assumption.}
\label{figexplain}
\end{figure}
The assumption that
the regression function $f(x) = \mathbb{E}(Y|X=x)$ is very smooth over
the clusters
is known as the \textit{cluster assumption}.
In the special case where the clusters are low-dimensional submanifolds,
the assumption is called
the \textit{manifold assumption}.
These assumptions link the regression function $f$ to the distribution
$P_X$ of $X$.
Many semisupervised methods are developed based on the above
assumptions, although this is not always made explicit. Even with such
a link, it is not obvious that semisupervised methods
will outperform supervised methods.
Making precise
how and when these assumptions actually improve inferences is
surprisingly elusive, and most papers
do not address this issue; some exceptions are \citet{rigollet07},
\citet{SSLTR}, \citet{LWnips07}, \citet{nadler09},
\citet{ben-davidcolt08}, \citet{NIPS20091025},
\citet{belkinniyogi} and \citet{partha}.
These authors have shown that the degree to which
unlabeled data improves performance is very sensitive to the cluster
and manifold assumptions. In this paper, we introduce \textit{adaptive
semisupervised inference}. We define a parameter $\alpha$ that
controls the sensitivity of the distance metric to the density, and
hence the strength of the semisupervised assumption. When $\alpha= 0$
there is no semisupervised assumption, that is, there is no link
between $f$ and $P_X$. When $\alpha= \infty$ there is a very strong
semisupervised assumption. We use the data to estimate $\alpha$, and
hence we adapt to the appropriate assumption linking $f$ and $P_X$.
We add that we focus on regression, while most previous
literature deals only with binary outcomes (classification).
This paper makes the following contributions:
\begin{longlist}[(6)]
\item[(1)] We formalize the link between
the regression function $f$ and the marginal distribution
$P_X$ by defining a class of function spaces
based on a metric that depends on $P_X$.
This is called a \textit{density sensitive metric}.
\item[(2)] We show how to consistently estimate the density-sensitive metric.
\item[(3)] We propose a semi-supervised kernel estimator based on
the density-sensitive metric.
\item[(4)]
We provide some minimax bounds and
show that under some conditions the semisupervised method
has smaller predictive risk than any supervised method.
\item[(5)] The function classes depend on a parameter $\alpha$ that
controls how strong the
semisupervised assumption is.
We show that it is possible to adapt to $\alpha$.
\item[(6)] We provide numerical simulations to support the theory.
\end{longlist}
We now give an informal statement of our main results.
In Section~\ref{sectionminimax} we define a nonparametric class of
distributions
${\cal P}_n$.
Let $0 < \xi< d-3$ and assume that
$m \geq n^{2/(2+\xi)}$.
Let ${\cal S}_n$ denote the set of supervised estimators;
these estimators use only the labeled data.
Let ${\cal SS}_N$ denote the set of semisupervised estimators;
these estimators use the labeled data and unlabeled data.
Then:
\begin{longlist}[(3)]
\item[(1)]
(Theorem~\ref{thmupper-bound} and Corollary~\ref{corollaryupper}.)
There is a semisupervised estimator $\hat f$ such that
\begin{equation}
\sup_{P\in{\cal P}_n}R_P(\hat f) \leq \biggl(
\frac{C}{n} \biggr)^{{2}/({2+\xi})},
\end{equation}
where $R_P(\hat f)$ is the risk of the estimator $\hat f$ under
distribution $P$.
\item[(2)] (Theorem~\ref{thmlower-bound}.)
For supervised estimators ${\cal S}_n$ we have
\begin{equation}
\inf_{\hat f\in{\cal S}_n}\sup_{P\in{\cal P}_n} R_P(\hat
f) \geq \biggl(\frac{C}{n} \biggr)^{{2}/({d-1})}.
\end{equation}
\item[(3)] Combining these two results we conclude that
\begin{equation}
\frac{\inf_{\hat f\in{\cal SS}_N}\sup_{P\in{\cal P}_n} R_P(\hat f)} {
\inf_{\hat f\in{\cal S}_n}\sup_{P\in{\cal P}_n} R_P(\hat f)} \leq
\biggl(\frac{C}{n} \biggr)^{{2(d-3-\xi)}/({(2+\xi)(d-1)})} \to0
\end{equation}
and hence, semisupervised estimation dominates supervised estimation.
\end{longlist}
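To get a feeling for the size of this gap, the two rates can be compared numerically (a standalone sketch; we set $C=1$, and the values of $d$ and $\xi$ are illustrative choices satisfying $0 < \xi < d-3$):

```python
import math

# Minimax rates with C = 1 (illustrative): the semisupervised upper bound
# n^(-2/(2+xi)) vs the supervised lower bound n^(-2/(d-1)).
d, xi = 20, 1.0                       # illustrative values, 0 < xi < d - 3

def ss_rate(n):
    return n ** (-2.0 / (2.0 + xi))

def s_rate(n):
    return n ** (-2.0 / (d - 1.0))

for n in (10**2, 10**4, 10**6):
    print(n, ss_rate(n) / s_rate(n))  # the ratio shrinks as n grows

# The exponent of the ratio reproduces 2(d-3-xi)/((2+xi)(d-1)) from the text.
ratio_exp = math.log(s_rate(10.0) / ss_rate(10.0)) / math.log(10.0)
assert abs(ratio_exp - 2 * (d - 3 - xi) / ((2 + xi) * (d - 1))) < 1e-9
```

For moderately large $d$ the supervised rate is crippled by the curse of dimensionality, while the semisupervised rate depends on $d$ only through $\xi$; hence the ratio tends to zero as $n$ grows.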
\begin{Remark*}
We assume, as is standard in the literature
on semisupervised learning,
that the marginal $P_X$ is the same for the labeled and unlabeled
data. Extensions to the case where the marginal distribution changes
are possible, but are beyond the scope of the paper.
\end{Remark*}
\textit{Related work.}
There are a number of papers that
discuss conditions under which
semisupervised methods can succeed
or that discuss metrics that are useful for
semisupervised methods.
These include Castelli and Cover
(\citeyear{castellicover95,castellicover96}),
\citet{ratsaby1995learning},
\citet{bousquet04}, \citet{SSLTR}, \citet{LWnips07},
\citet{NIPS20091025},
\citet{ben-davidcolt08},
\citet{nadler09}, \citet{orlitsky}, \citet{bigral11},
\citet{belkinniyogi}, \citet{partha}
and references therein.
Papers on semisupervised inference in the statistics
literature are rare; some exceptions include
\citet{Culp2008}, \citet{Culp2011} and
\citet{West2007}.
To the best of our knowledge,
there are no papers that explicitly study
adaptive methods that allow the data to choose the
strength of the semisupervised assumption.
There is a connection between our work and the semisupervised
classification method in \citet{rigollet07}. He divides the covariate
space ${\cal X}$ into clusters $C_1,\ldots, C_k$ defined by the upper
level sets $\{p_X > \lambda\}$ of the density $p_X$ of $P_X$. He
assumes that the indicator function $I(x) = I(p(y|x) > 1/2)$ is
constant over each cluster $C_j$. In our regression framework, we could
similarly assume that
\[
f(x) = \sum_{j=1}^k f_{\theta_j}(x)
I(x\in C_j) + g(x) I(x\in C_0),
\]
where $f_\theta(x)$ is a parametric regression function, $g$ is a
smooth (but nonparametric) function and $C_0 = {\cal X} -
\bigcup_{j=1}^k C_j$. This yields\vspace*{1pt} parametric,
dimension-free rates over ${\cal X}-C_0$. However, this creates a
rather unnatural and harsh boundary at $\{x\dvtx p_X(x) = \lambda\}$.
Also, this does not yield improved rates over $C_0$. Our approach may
be seen as a smoother version of this idea.\vadjust{\goodbreak}
\textit{Outline.} This paper is organized as follows. In Section
\ref{secsetup} we give definitions and assumptions. In Section
\ref{sectiondefine-metrics} we define density sensitive metrics and the
function spaces defined by these metrics. In Section~\ref{secest} we
define a density sensitive semisupervised estimator, and we bound its
risk. In Section~\ref{sectionminimax} we present some minimax results.
We discuss adaptation in Section~\ref{secadap}. We provide simulations
in Section~\ref{secsim}. Section~\ref{secdisc} contains the closing
discussion. Many technical details and extensions are contained in the
supplemental article
[\citet{AzizyanSSLsupplement}].
\section{Definitions}
\label{secsetup}
Recall that $X_i\in\mathbb{R}^d$ and $Y_i\in\mathbb{R}$.
Let
\begin{equation}
{\cal L}_n = \bigl\{(X_1,Y_1),\ldots,(X_n,Y_n)\bigr\}
\end{equation}
be an i.i.d. sample from $P$. Let $P_X$
denote the $X$-marginal of $P$, and let
\begin{equation}
{\cal U}_N=\{X_{n+1},\ldots, X_N\}
\end{equation}
be an i.i.d. sample from $P_X$.
Let $f(x) \equiv f_P(x)= \mathbb{E}(Y|X=x)$. An estimator of $f$ that
is a function of ${\cal L}_n$ is called a \textit{supervised learner},
and the set of such estimators is denoted by ${\cal S}_n$. An estimator
that is a function of ${\cal L}_n \cup{\cal U}_N$ is called a
\textit{semisupervised learner}, and the set of such estimators is
denoted by ${\cal SS}_N$. Define the risk of an estimator $\hat f$ by
\begin{equation}
R_P(\hat f) = \mathbb{E}_P \biggl[\int\bigl(\hat f(x)
- f_P(x)\bigr)^2 \,dP(x) \biggr],
\end{equation}
where $\mathbb{E}_P$ denotes the expectation over data drawn
from the distribution $P$.
Of course,
${\cal S}_n \subset{\cal SS}_N$ and
hence
\[
\inf_{\hat g\in{\cal SS}_N}\sup_{P\in{\cal P}}R_P(\hat g)
\leq \inf_{\hat g\in{\cal S}_n}\sup_{P\in{\cal P}}R_P(
\hat g).
\]
We will show that,
under certain conditions,
semisupervised methods outperform
supervised methods in the sense that the left-hand side of the above equation
is substantially smaller than the right-hand side.
More precisely,
for certain classes of
distributions
${\cal P}_n$,
we show that
\begin{equation}
\frac{\inf_{\hat g\in{\cal SS}_N}\sup_{P\in{\cal P}_n}R_P(\hat g)} {
\inf_{\hat g\in{\cal S}_n}\sup_{P\in{\cal P}_n}R_P(\hat g)} \to0
\end{equation}
as $n\to\infty$.
In this case we say that
semisupervised learning is \textit{effective}.
\begin{Remark*}
In order for the asymptotic analysis to reflect the
behavior of finite samples, we need to let ${\cal P}_n$ change with
$n$, and we need $N = N(n)\to\infty$ and $n/N(n)\to0$ as $n\to\infty$.
As an analogy, one needs to let the number of covariates in a
regression problem increase with the sample size to develop relevant
asymptotics for high-dimensional regression. Moreover, ${\cal P}_n$
must have distributions that get more concentrated as $n$ increases.
The reason is that if $n$ is very large and $P_X$ is smooth, then there
is no advantage to semisupervised inference. This is consistent with
the finding in \citet{ben-davidcolt08} who show that if $P_X$ is smooth,
then ``$\ldots$ knowledge of that distribution cannot improve the
labeled sample complexity by more than a constant
factor.''
\end{Remark*}
\textit{Other notation.}
If $A$ is a set and $\delta\geq0$, we define
\[
A\oplus\delta= \bigcup_{x\in A} B(x,\delta),
\]
where $B(x,\delta)$ denotes a ball of radius $\delta$ centered at $x$.
Given a set $A\subseteq\mathbb{R}^d$, define
$d_A(x_1,x_2)$
to be the length of the shortest path in $A$ connecting $x_1$ and $x_2$.
We write
$a_n = O(b_n)$ if
$|a_n/b_n|$ is bounded for all large $n$.
Similarly,
$a_n = \Omega(b_n)$ if
$|a_n/b_n|$ is bounded away from 0 for all large $n$.
We write
$a_n \asymp b_n$ if
$a_n = O(b_n)$ and
$a_n = \Omega(b_n)$.
We also write
$a_n \preceq b_n$ if
there exists $C>0$ such that
$a_n \leq C b_n$ for all large $n$.
Define
$a_n \succeq b_n$ similarly.
We use symbols of the form
$c,c_1,c_2,\ldots, C,C_1,C_2,\ldots$
to denote generic positive constants whose
value can change in different expressions.
\section{Density-sensitive function spaces}
\label{sectiondefine-metrics}
We define a smoothed version of $P_X$ as follows.
(This is needed since we allow the marginal distribution $P_X$ to be singular.)
Let $K$ denote a symmetric kernel on $\mathbb{R}^d$
with compact support,
let $\sigma>0$
and define
\begin{equation}
p_\sigma(x) \equiv p_{X,\sigma}(x) = \int\frac{1}{\sigma^d}K \biggl(
\frac{\|x-u\|}{\sigma} \biggr) \,dP_X(u).
\end{equation}
Thus,
$p_{X,\sigma}$ is the density of the convolution
$P_{X,\sigma}=P_X\star\mathbb{K}_\sigma$
where
$\mathbb{K}_\sigma$ is the measure with density
$K_\sigma(\cdot) = \sigma^{-d}K(\cdot/\sigma)$.
$P_{X,\sigma}$ always has a density even if $P_X$ does not.
This is important because, in high-dimensional problems,
it is not uncommon to find that $P_X$ can be highly concentrated near a
low-dimensional manifold. These are
precisely the cases where semisupervised methods are often useful
[\citet{ben-davidcolt08}].
Indeed, this was one of the original motivations for semisupervised inference.
We define $P_{X,0} = P_X$.
For notational simplicity, we shall sometimes drop the $X$ and simply write
$p_\sigma$ instead of $p_{X,\sigma}$.
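As an illustration, the plug-in estimate $\hat p_\sigma$ used later in Section~\ref{secest} can be computed directly from the unlabeled sample. The following is a minimal Python sketch (not part of the paper's implementation), assuming a compactly supported Epanechnikov-type kernel and omitting the kernel's normalizing constant:

```python
import numpy as np

def smoothed_density(x, X_unlabeled, sigma):
    """Plug-in smoothed density
        p_hat_sigma(x) = (1/m) * sum_i sigma^{-d} K(||x - X_i|| / sigma),
    with the compactly supported kernel K(u) = (1 - u^2)_+ (an
    illustrative choice; any symmetric compact-support kernel works)."""
    m, d = X_unlabeled.shape
    u = np.linalg.norm(X_unlabeled - x, axis=1) / sigma
    K = np.clip(1.0 - u**2, 0.0, None)   # K(u) = 0 outside the unit ball
    return K.sum() / (m * sigma**d)
```

Compact support of $K$ matters here: it makes $p_{X,\sigma}$ vanish a fixed distance $C_0\sigma$ away from the support of $P_X$, which is used in the lower-bound proof of Section~\ref{sectionminimax}.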
\subsection{The exponential metric}
Following previous work in the area, we will assume that the regression
function is smooth in regions where $P_X$ puts lots of mass. To make
this precise, we define a \textit{density sensitive metric} as follows.
For any pair $x_1$ and $x_2$ let $\Gamma(x_1,x_2)$ denote the set of
all continuous finite curves from $x_1$ to $x_2$ with unit speed
everywhere, and let $L(\gamma)$ be the length\vadjust{\goodbreak} of curve $\gamma$; hence
$\gamma(L(\gamma))=x_2$. For any $\alpha\geq0$ define the
\textit{exponential metric}
\begin{equation}
D(x_1,x_2) \equiv D_{P,\alpha,\sigma}(x_1,x_2)
= \inf_{\gamma\in\Gamma(x_1,x_2)} \int_0^{L(\gamma)} \exp
\bigl[-\alpha p_{X,\sigma
}\bigl(\gamma(t)\bigr) \bigr]\,dt.
\end{equation}
In the supplement, we also
consider a second metric, the \textit{reciprocal metric}.
Large $\alpha$ makes points connected by high density
paths closer; see Figure~\ref{figdensitymetricpicture}.
\begin{figure}
\includegraphics{1092f03.eps}
\caption{With a density metric, the points $X$ and $Z$ are
closer than the points $X$ and $Y$ because there is a high density
path connecting $X$ and $Z$.}
\label{figdensitymetricpicture}
\end{figure}
Note that $\alpha=0$ corresponds to Euclidean distance.
Similar definitions are used in \citet{orlitsky}, \citet{bigral11} and
\citet{bousquet04}.
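In practice, the infimum over continuous paths is typically approximated by a shortest path on a neighborhood graph over sample points, with each short edge weighted by $e^{-\alpha p_\sigma(\cdot)}$ times its Euclidean length. The Python sketch below is one such discretization; it is an illustrative assumption on our part, not the authors' algorithm (which is given in the supplement):

```python
import heapq
import numpy as np

def exp_metric_graph(points, p_sigma, alpha, radius):
    """Build a radius-neighborhood graph approximating the exponential
    metric: edge (i, j) with ||x_i - x_j|| <= radius gets weight
    exp(-alpha * p_sigma(midpoint)) * ||x_i - x_j||."""
    n = len(points)
    adj = [[] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            dist = float(np.linalg.norm(points[i] - points[j]))
            if dist <= radius:
                mid = (points[i] + points[j]) / 2.0
                w = float(np.exp(-alpha * p_sigma(mid))) * dist
                adj[i].append((j, w))
                adj[j].append((i, w))
    return adj

def shortest_path(adj, src):
    """Dijkstra from src: returns approximate D(x_src, x_j) for all j."""
    D = [float("inf")] * len(adj)
    D[src] = 0.0
    pq = [(0.0, src)]
    while pq:
        d, i = heapq.heappop(pq)
        if d > D[i]:
            continue
        for j, w in adj[i]:
            nd = d + w
            if nd < D[j]:
                D[j] = nd
                heapq.heappush(pq, (nd, j))
    return D
```

With $\alpha=0$ the edge weights are Euclidean lengths, recovering the Euclidean path distance; increasing $\alpha$ shrinks distances along high-density paths, as in Figure~\ref{figdensitymetricpicture}.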
\subsection{The regression function}
Recall that $f(x)\equiv f_P(x)=E(Y|X=x)$ denotes the regression
function. We assume that $X\in[0,1]^d\equiv{\cal X}$ and that $|Y| \leq
M$ for some finite constant $M$.\setcounter{footnote}{2}\footnote{The
results can be extended to unbounded $Y$ with suitable conditions on
the tails of the distribution of $Y$.} We formalize the semisupervised
smoothness assumption by defining the following scale of function
spaces. Let ${\cal F} \equiv{\cal F}(P,\alpha,\sigma,L)$ denote the set of
functions $f\dvtx [0,1]^d \to\mathbb{R}$ such that, for all
$x_1,x_2\in{\cal X}$,
\begin{equation}
\bigl|f(x_1) - f(x_2)\bigr| \leq L D_{P,\alpha,\sigma}(x_1,x_2).
\end{equation}
Let
${\cal P}(\alpha,\sigma,L)$ denote all joint distributions for $(X,Y)$
such that $f_P\in{\cal F}(P,\alpha,\break\sigma,L)$
and such that $P_X$ is supported on ${\cal X}$.
\subsection{Properties of the function spaces}
Let $B_{P,\alpha,\sigma}(x,\varepsilon)=\{z\dvtx D_{P,\alpha,\sigma
}(x,z) \leq\varepsilon\}$
be a ball of size $\varepsilon$.
Let $S_P$ denote the support of $P$, and
let ${\cal N}_{P,\alpha,\sigma}(\varepsilon)$ denote the covering number,
the smallest number of balls of size $\varepsilon$
required to cover $S_P$.
The covering number measures the size of the function space,
and the variance of any regression estimator
on the space ${\cal F}(P,\alpha,\sigma,L)$ depends on this covering number.
Here, we mention a few properties of
${\cal N}_{P,\alpha,\sigma}(\varepsilon)$.
In the Euclidean case $\alpha=0$,
we have
${\cal N}_{P,0,\sigma}(\varepsilon) \leq(C/\varepsilon)^d$.
But when $\alpha>0$ and $P$ is concentrated on or near
a set of dimension less than $d$, the
${\cal N}_{P,\alpha,\sigma}(\varepsilon)$ can be much smaller than
$(C/\varepsilon)^d$.
The next result gives a few examples
showing that concentrated distributions have small covering numbers.
We say that a set $A$ is \textit{regular} if there is a $C>0$ such that,
for all small $\varepsilon>0$,
\begin{equation}
\sup_{ \stackrel{x,y\in A}{\|x-y\| \leq\varepsilon}} \frac
{d_A(x,y)}{\|x-y\|} \leq C,
\end{equation}
where $d_A(x_1,x_2)$
is the length of the shortest path in $A$ connecting $x_1$ and $x_2$.
Recall that $S_P$ denotes the support of $P$.
\begin{lemma}
Suppose that $S_P$ is regular.
\begin{longlist}[(4)]
\item[(1)] For all $\alpha$, $\sigma$ and $P$,
${\cal N}_{P,\alpha,\sigma}(\varepsilon)\preceq\varepsilon^{-d}$.
\item[(2)] Suppose that $P = \sum_{j=1}^k \delta_{x_j}$
where $\delta_x$ is a point mass at $x$.
Then, for any $\alpha\geq0$ and any $\varepsilon>0$,
${\cal N}_{P,\alpha,\sigma}(\varepsilon)\leq k$.
\item[(3)] Suppose that $\mathsf{dim}(S_P) = r < d$.
Then,
${\cal N}_{P,\alpha,\sigma}(\varepsilon) \preceq\varepsilon^{-r}$.
\item[(4)]
Suppose that $S_P = W\oplus\gamma$ where
$\mathsf{dim}(W) =r<d$.
Then, for $\varepsilon\geq C \gamma$,
${\cal N}_{P,\alpha,\sigma}(\varepsilon) \preceq
(\frac{1}{\varepsilon} )^{r}$.
\end{longlist}
\end{lemma}
\begin{pf}
(1) The first statement follows since the covering number of $S_P$ is
no more than the covering number of $[0,1]^d$ and on $[0,1]^d$,
$D_{P,\alpha,\sigma}(x,y) \leq\|x-y\|$. Now $[0,1]^d$ can be covered
by $O(\varepsilon^{-d})$ Euclidean balls.
(2) The second statement follows since $\{\{x_1\},\ldots, \{x_k\}\}$
forms an $\varepsilon$-covering for any $\varepsilon$.
(3) We have that $D_{P,\alpha,\sigma}(x,y) \leq d_{S_P}(x,y)$.
Regularity implies that, for small $d_{S_P}(x,y)$,
$D_{P,\alpha,\sigma}(x,y) \leq c\|x-y\|$. We can thus cover $S_P$ by
$C\varepsilon^{-r}$ balls of size~$\varepsilon$.
(4) As in (3), cover $W$ with $N=O(\varepsilon^{-r})$ balls of $D$ size
$\varepsilon$. Denote these balls by $B_1,\ldots, B_N$. Define $C_j =
\{x\in S_P\dvtx d_{S_P}(x,B_j) \leq\gamma\}$. The $C_j$ form a
covering of size $N$ and each $C_j$ has $D_{P,\alpha,\sigma}$ diameter
$\max\{\varepsilon,\gamma\}$.
\end{pf}
\section{Semisupervised kernel estimator}
\label{secest}
We consider the following
semisupervised
estimator which uses a kernel that is
sensitive to the density.
Let $Q$ be a kernel and let
$Q_h(x) = h^{-d} Q(x/h)$.
Let
\begin{equation}
\label{eqdefine-estimator} \hat f_{h,\alpha,\sigma}(x) = \frac{\sum^n_{i=1}Y_i Q_h
(\widehat D_{\alpha,\sigma}(x,X_i) )} {
\sum^n_{i=1}Q_h (\widehat D_{\alpha,\sigma}(x,X_i) )},
\end{equation}
where
\begin{eqnarray}
\hat D_{\alpha,\sigma}(x_1,x_2) &=& \inf
_{\gamma\in\Gamma(x_1,x_2)} \int_0^{L(\gamma)}\exp \bigl[ -
\alpha\hat p_\sigma\bigl(\gamma (t)\bigr) \bigr] \,dt,
\\
\hat{p}_\sigma(x) &=& \frac{1}{m}\sum_{i=1}^m
\frac{1}{\sigma^d} K \biggl( \frac{\|x-X_{i+n}\|}{\sigma} \biggr),
\end{eqnarray}
and
$m = N-n$ denotes the number of unlabeled points.
We use a kernel estimator for the regression function
because it is simple, commonly used and, as we shall see, has
a fast rate of convergence in the semisupervised case.
The estimator
$\hat D_{\alpha,\sigma}(x_1,x_2)$
is discussed in detail in the supplement
where we study its properties
and we give an algorithm for
computing it.
Now we give an upper bound
on the risk of $\hat f_{h,\alpha,\sigma}$.
In the following we take, for simplicity,
$Q(x) = I(\|x\| \leq1)$.
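With the box kernel $Q(x)=I(\|x\|\leq1)$, the estimator (\ref{eqdefine-estimator}) reduces to averaging the $Y_i$ over labeled points within estimated distance $h$ of the query point. Here is a minimal Python sketch, assuming the distances $\hat D_{\alpha,\sigma}(x,X_i)$ have been precomputed (e.g., by a graph approximation); the zero default for empty balls is our convention, not the paper's:

```python
import numpy as np

def ss_kernel_estimate(D_hat, Y, h):
    """Density-sensitive kernel regression estimate at one query point,
    with box kernel Q(u) = I(|u| <= 1): the average of Y_i over labeled
    points whose precomputed distance D_hat[i] to the query is <= h."""
    in_ball = D_hat <= h
    if not np.any(in_ball):
        return 0.0   # convention when no labeled point falls within h
    return float(Y[in_ball].mean())
```

The unlabeled data enter only through $\hat D_{\alpha,\sigma}$: they reshape the neighborhoods over which the labeled responses are averaged.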
\begin{theorem}
\label{thmupper-bound}
Suppose that $|Y|\leq M$.
Define the event $\mathcal{G}_m = \{ \|\hat p_\sigma- p_\sigma
\|_\infty\leq\varepsilon_m\}$
(which depends on the unlabeled data)
and suppose that
$\mathbb{P}({\cal G}_m^c)\leq1/m$.
Then,
for every $P\in{\cal P}(\alpha,\sigma,L)$,
\begin{equation}
R_P(\hat f_{h,\alpha,\sigma})\leq L^2 \bigl(h
e^{\alpha\varepsilon_m}\bigr)^{2} + \frac{M^2 (2 + {1}/{e} ) {\cal
N}(P,\alpha,\sigma,e^{-\varepsilon_m\alpha}h/2)}{n} +
\frac{4M^2}{m}.\hspace*{-32pt}
\end{equation}
\end{theorem}
\begin{pf}
The risk is
\begin{eqnarray*}
R_P(\hat f) &=& \mathbb{E}_{n,N} \biggl[ I\bigl({\cal G}_m^c\bigr)
\int\bigl(\hat{f}_{h,\alpha,\sigma}(x) - f(x) \bigr)^2 \,dP(x) \biggr]\\
&&{} +
\mathbb{E}_{n,N} \biggl[ I({\cal G}_m)
\int\bigl(\hat{f}_{h,\alpha,\sigma}(x) - f(x) \bigr)^2 \,dP(x) \biggr].
\end{eqnarray*}
Since $|Y| \leq M$
and $\sup_x|\hat f(x)| \leq M$,
\[
\mathbb{E}_{n,N} \biggl[ I\bigl({\cal G}_m^c\bigr) \int\bigl(
\hat{f}_{h,\alpha,\sigma}(x) - f(x)\bigr)^2 \,dP(x) \biggr]\leq 4
M^2 \mathbb{P}\bigl(\mathcal{G}_m^c\bigr)
\leq\frac{4M^2}{m}.
\]
Now we bound the second term.
Condition on the unlabeled data.
Replacing the Euclidean distance with $\hat D_{\alpha,\sigma}$ in
the proof of Theorem 5.2 in \citet{gyorfi2002nonparametric},
we have that
\begin{eqnarray*}
&&
\mathbb{E}_{n} \biggl[ \int\bigl(\hat{f}_{h,\alpha,\sigma}(x) - f(x)
\bigr)^2 \,dP(x) \biggr]\\
&&\qquad\leq L^2 R^{2} +
\frac{M^2 (2 + {1}/{e} ) \int{dP(x)}/{P(\hat
B_{\alpha,\sigma}(x,h))}}{n},
\end{eqnarray*}
where
\[
R = \sup \bigl\{ D_{P,\alpha,\sigma}(x_1,x_2)\dvtx
(x_1,x_2) \mbox{ such that }\hat D_{\alpha,\sigma}(x_1,x_2)
\leq h \bigr\}
\]
and
$\hat B_{\alpha,\sigma}(x,h) = \{z\dvtx \hat{D}_{\alpha,\sigma
}(x,z)\leq h\}$.
On the event ${\cal G}_m$,
we have from Lemma~2 in the supplement
that
$e^{-\alpha\varepsilon_m} D_{\alpha,\sigma}(x_1,x_2) \leq
\hat D_{\alpha,\sigma}(x_1,x_2) \leq
e^{\alpha\varepsilon_m} D_{\alpha,\sigma}(x_1,x_2)$
for all $x_1,x_2$.
Hence,
$R^{2}\leq e^{2\alpha\varepsilon_m} h^{2}$ and
\[
\int\frac{dP(x)}{P(\hat B_{\alpha,\sigma}(x,h))}\leq \int\frac{dP(x)}{P(B_{P,\alpha,\sigma}(x, e^{-\alpha\varepsilon_m}h))}.
\]
A simple covering argument [see page 76 of \citet{gyorfi2002nonparametric}] shows that,
for any $\delta>0$,
\[
\int\frac{dP(x)}{P(B_{P,\alpha,\sigma}(x,\delta))} \leq{\cal N}(P,\alpha,\sigma,\delta/2).
\]
The result follows.
\end{pf}
\begin{corollary}
\label{corollaryupper}
If
${\cal N}(P,\alpha,\sigma,\delta) \leq(C/\delta)^\xi$
for
$\delta\geq(1/2)e^{-\alpha\varepsilon_m} (n e^{2\alpha\varepsilon
_m})^{-{1}/({2+\xi})}$
and $N \geq2n$,
then
\begin{equation}
R_P(\hat f_{\alpha,\sigma,h}) \leq e^{\alpha\varepsilon_m (2\vee\xi)} \biggl[
L^2 h^2 + \frac{1}{n} \biggl(\frac{C}{h}
\biggr)^\xi \biggr] + \frac{4 M^2}{m}.
\end{equation}
Hence, if
$m \geq n^{2/(2+\xi)}$
and $h \asymp(n e^{\alpha\varepsilon_m (2-\xi)})^{-{1}/({2+\xi})}$,
then
\begin{equation}
\sup_{P\in{\cal P}(\alpha,\sigma,L)}R_P(\hat f_{h,\alpha,\sigma}) \preceq
\biggl(\frac{C}{n} \biggr)^{{2}/({2+\xi})}.
\end{equation}
\end{corollary}
\section{Minimax bounds}
\label{sectionminimax}
To characterize when semisupervised methods
outperform supervised methods,
we show that there is a class
of distributions~${\cal P}_n$
(which we allow to change with $n$) such that
$R_{SS}$ is much smaller than
$R_{S}$, where
\[
R_S = \inf_{\hat f\in{\cal S}_n}\sup_{P\in{\cal P}_n}
R_P(\hat f) \quad\mbox{and}\quad R_{SS}= \inf
_{\hat f\in{\cal SS}_N}\sup_{P\in{\cal P}_n} R_P(\hat f).
\]
To do so, it suffices to find a lower bound on
$R_{S}$ and an upper bound on $R_{SS}$.
Intuitively, ${\cal P}_n$ should be a set of
distributions whose $X$-marginals are highly concentrated
on or near lower-dimensional sets, since this is where semisupervised
methods deliver improved performance.
Indeed, as we mentioned earlier,
for very smooth distributions $P_X$
we do not expect semisupervised learners to offer much improvement.
\subsection{The class ${\cal P}_n$}
Here we define the class ${\cal P}_n$.
Let $N= N(n)$ and $m = m(n)=N-n$ and define
\begin{equation}
\varepsilon_m \equiv\varepsilon(m,\sigma) = \sqrt{\frac{C \log m}{m
\sigma^d}}.
\end{equation}
Let $\xi\in[0,d-3)$, $\gamma>0$ and define
\begin{equation}
{\cal P}_{n} = \bigcup_{(\alpha,\sigma)\in{\cal A}_n\times\Sigma_n} {\cal Q}(
\alpha,\sigma,L),
\end{equation}
where
${\cal Q}(\alpha,\sigma,L)\subset{\cal P}(\alpha,\sigma,L)$ and
${\cal A}_n\times\Sigma_n \subset[0,\infty]^2$
satisfy the following conditions:
\begin{eqnarray*}
&&\mbox{(C1)}\quad {\cal Q}(\alpha,\sigma, L) \\
&&\hphantom{\mbox{(C1)}}\quad\qquad= \biggl\{P\in{
\cal P}(\alpha,\sigma,L)\dvtx {\cal N}(P,\alpha,\sigma,\varepsilon) \leq \biggl(
\frac{C}{\varepsilon} \biggr)^\xi\ \forall \varepsilon\geq \biggl(
\frac{1}{n} \biggr)^{{1}/({2+\xi})} \biggr\};
\\
&&\mbox{(C2)}\quad \alpha\leq\frac{\log2}{\varepsilon(m,\sigma)};
\\
&&\mbox{(C3)}\quad \biggl(\frac{1}{m} \biggr)^{{1}/({d(1+\gamma)})} \leq \sigma\leq
\frac{1}{4C_0} \biggl(\frac{1}{n} \biggr)^{{1}/({d-1})},
\end{eqnarray*}
where $C_0$ is the diameter of the support of $K$.
Here are some remarks about ${\cal P}_{n}$:
\begin{longlist}[(3)]
\item[(1)] (C2) implies that
$e^{\alpha\varepsilon_m} \leq2$; hence,
(C3) and Theorem 1.3 in the supplement imply that
$(1/2)D_{P,\alpha,\sigma}(x_1,x_2) \leq
\hat D_{\alpha,\sigma}(x_1,x_2) \leq
2D_{P,\alpha,\sigma}(x_1,x_2)$
with probability at least $1-1/m$.
\item[(2)] The constraint in (C1) on ${\cal N}(\varepsilon)$ holds
whenever $P$ is concentrated on or near a
set of dimension less than $d$ and $\alpha/\sigma^d$ is large.
The constraint
does not need to hold for
arbitrarily small $\varepsilon$.
\item[(3)] Some papers on semisupervised learning
simply assume that $N=\infty$
since in practice $N$ is usually very large compared to $n$.
In that case,
there is no upper bound on $\alpha$ and no
lower bound on $\sigma$.
\end{longlist}
The class ${\cal P}_n$ may seem complicated.
This is because showing conditions where
semisupervised learning provably outperforms
supervised learning is subtle.
Intuitively, the class ${\cal P}_n$
is simply the set of highly concentrated distributions
with $\alpha/\sigma$ large.
\subsection{Supervised lower bound}
\begin{theorem}
\label{thmlower-bound}
Suppose that
$m \geq n^{{d(1+\gamma)}/({d-1})}$.
There exists $C>0$ such that
\begin{equation}
R_S = \inf_{\hat f\in{\cal S}_n}\sup_{P\in{\cal P}_{n}}
R_P(\hat f) \geq \biggl(\frac{C}{n} \biggr)^{{2}/({d-1})}.
\end{equation}
\end{theorem}
\begin{pf}
Let
$A_1$ and $A_0$ be the top and bottom of the cube ${\cal X}$,
\begin{eqnarray*}
A_1 &=& \bigl\{ (x_1,\ldots, x_{d-1},1)\dvtx 0
\leq x_1,\ldots, x_{d-1} \leq 1\bigr\},
\\
A_0 &=& \bigl\{ (x_1,\ldots, x_{d-1},0)\dvtx 0
\leq x_1,\ldots, x_{d-1} \leq 1\bigr\}.
\end{eqnarray*}
Fix $\varepsilon= n^{-{1}/({d-1})}$.
Let $q = (1/\varepsilon)^{d-1}\asymp n$.
For any integers
$s=(s_1,\ldots, s_{d-1})\in N^{d-1}$
with $0 \leq s_i \leq1/\varepsilon$,
define the tendril
\[
\bigl\{ (s_1 \varepsilon,s_2 \varepsilon,\ldots,
s_{d-1}\varepsilon,x_d)\dvtx \varepsilon\leq x_d \leq1-
\varepsilon\bigr\}.\vadjust{\goodbreak}
\]
There are
$q = (1/\varepsilon)^{d-1}\approx n$
such tendrils.
Let us label the tendrils as
$T_1,\ldots, T_q$.
Note that the tendrils do not quite join up with $A_0$ or $A_1$.
Let
\[
C = A_0 \cup A_1 \cup \Biggl(\bigcup
_{j=1}^q T_j \Biggr).
\]
Define a measure $\mu$ on $C$ as follows:
\[
\mu= \frac{1}{4} \mu_0 + \frac{1}{4}
\mu_1 + \frac{1}{2 q (1-2\varepsilon)} \sum_j
\nu_j,
\]
where $\mu_0$ is $(d-1)$-dimensional Lebesgue measure on $A_0$,
$\mu_1$ is $(d-1)$-dimensional Lebesgue measure on $A_1$ and
$\nu_j$ is one-dimensional Lebesgue measure on $T_j$.
Thus, $\mu$ is a probability measure and
$\mu(C)=1$.
\begin{figure}
\includegraphics{1092f04.eps}
\caption{The extended tendrils used in the proof of the lower bound,
in the special case where $d=2$.
Each tendril has length $1-\varepsilon$ and joins up with either the top $A_1$
or bottom $A_0$ but not both.}
\label{figsimpletendrils}
\end{figure}
Now we define extended tendrils that are joined to the top or bottom of
the cube
(but not both).
See Figure~\ref{figsimpletendrils}.
If
\[
T_j = \bigl\{ (s_1 \varepsilon,s_2 \varepsilon,\ldots, s_{d-1}\varepsilon,x_d)\dvtx \varepsilon\leq x_d
\leq1-\varepsilon\bigr\}
\]
is a tendril, define
its extensions
\begin{eqnarray*}
T_{j,0} &=& \bigl\{ (s_1 \varepsilon,s_2
\varepsilon,\ldots, s_{d-1}\varepsilon,x_d)\dvtx 0 \leq
x_d \leq1-\varepsilon\bigr\},
\\
T_{j,1} &=& \bigl\{ (s_1 \varepsilon,s_2
\varepsilon,\ldots, s_{d-1}\varepsilon,x_d)\dvtx \varepsilon\leq
x_d \leq1\bigr\}.
\end{eqnarray*}
Given
$\omega\in\Omega=\{0,1\}^q$, let
\[
S_\omega= A_0 \cup A_1 \cup \Biggl(\bigcup
_{j=1}^q T_{j,\omega
_j} \Biggr)
\]
and
\[
P_{\omega,X} = \frac{1}{4}\mu_0 + \frac{1}{4}
\mu_1 + \frac{1}{2 q(1-\varepsilon)}\sum_j
\nu_{j,\omega_j},
\]
where
$\nu_{j,\omega_j}$ is one-dimensional Lebesgue measure on
$T_{j,\omega_j}$.
This
$P_{\omega,X}$ is a probability measure supported on $S_\omega$.
Notice that $S_\omega$ consists of two connected components, namely,
\[
U_{\omega}^{(1)} = A_1 \cup \biggl(\bigcup
_{j: \omega_j=1} T_{j,\omega_j} \biggr) \quad\mbox{and}\quad
U_{\omega}^{(0)} = A_0 \cup \biggl(\bigcup
_{j: \omega_j=0} T_{j,\omega_j} \biggr).
\]
Let
\[
f_\omega(x) = \frac{L \varepsilon}{8} I\bigl(x\in U_{\omega}^{(1)}
\bigr).
\]
Finally,
we define
$P_\omega= P_{\omega,X}\times P_{\omega, Y|X}$
where
$P_{\omega, Y|X}$
is a point mass at $f_\omega(X)$.
Define
$d^2(f,g) = \int(f(x)-g(x))^2 \,d\mu(x)$.
We complete the proof with a series of claims.\vspace*{9pt}
\textit{Claim} 1: For each $\omega\in\Omega$, $P_\omega\in{\cal
P}_{n}$.\vspace*{9pt}
\textit{Proof}:
Let
\[
\sigma= \biggl(\frac{1}{m} \biggr)^{{1}/({d(1+\gamma)})}
\]
and let
\begin{equation}
\label{eqalp} \frac{3}{2+\xi} \frac{\log m}{m^{{1}/({1+\gamma})}} \leq\alpha \leq \sqrt{
\frac{m^{{\gamma}/({1+\gamma})}}{\log m}}.
\end{equation}
It follows that (C2) and (C3) hold. We must verify (C1). If $x$ and $y$
are in the same connected component, then
$|f_\omega(x)-f_\omega(y)|=0$. Now let $x$ and $y$ be in different
components, that is, $x\in U_\omega^{(1)}, y\in U_\omega^{(0)}$. Let us
choose $x$ and $y$ as close as possible in Euclidean distance; hence
$\|x-y\|=\varepsilon$. Let $\gamma$ be any path connecting $x$ to $y$.
Since $x$ and $y$ lie on different components, there exists a subset
$\gamma_0$ of $\gamma$ of length at least $\varepsilon$ on which
$P_\omega$ puts zero mass. By assumption (C3),
$\sigma\leq\varepsilon/(4C_0)$ and hence $P_{X,\sigma}$ puts zero mass
on the portion of $\gamma_0$ that is at least $C_0\sigma$ away from the
support of $P_\omega$. This has length at least $\varepsilon- 2 C_0
\sigma\geq\varepsilon/2$. Since $p_{X,\sigma}(x)=0$ on a portion
of~$\gamma_0$,
\[
D_{P,\alpha,\sigma}(x,y) \geq\frac{\varepsilon}{2} = \frac{\|x-y\|}{2}.
\]
Hence,
$\|x-y\| \leq2 D_{P,\alpha,\sigma}(x,y)$.
Then
\[
\frac{|f_\omega(x)-f_\omega(y)|}{D_{P,\alpha,\sigma}(x,y)} \leq
\frac{2|f_\omega(x)-f_\omega(y)|}{\|x-y\|},
\]
and the latter is maximized by finding two points $x$ and $y$ as close
together with nonzero numerator. In this case, $\|x-y\| = \varepsilon$
and $|f_\omega(x)-f_\omega(y)| = L \varepsilon/8$. Hence,
$|f_\omega(x)-f_\omega(y)|\leq L D_{P,\alpha,\sigma}(x,y)$ as required.
Now we show that each $P=P_\omega$ satisfies
\[
{\cal N}(P,\alpha,\sigma,\varepsilon) \leq \biggl(\frac{C}{\varepsilon
}
\biggr)^\xi
\]
for all $\varepsilon\geq n^{-{1}/({2+\xi})}$. Cover the top $A_1$ and
bottom $A_0$ of the cubes with Euclidean spheres of radius $\delta$.
There are $O((1/\delta)^{d-1})$ such spheres. The $D_{P,\alpha,\sigma}$
radius of each sphere is at most $\delta e^{-\alpha K(0)/\sigma^d}$.
Thus, these form an $\varepsilon$ covering as long as $\delta
e^{-\alpha K(0)/\sigma^d}\leq\varepsilon$. Thus the covering number of
the top and bottom is at most $2(1/\delta)^{d-1} \leq 2(1/(e^{\alpha
K(0)/\sigma^d}\varepsilon))^{d-1}$. Now cover the tendrils with
one-dimensional segments of length $\delta$. The $D_{P,\alpha,\sigma}$
radius of each segment is at most $\delta e^{-\alpha/\sigma^d}$. Thus,
these form an $\varepsilon$ covering as long as $\delta e^{-\alpha
K(0)/\sigma^d}\leq\varepsilon$. Thus the covering number of the
tendrils is at most $q/\delta= n/\delta\leq n/(\varepsilon e^{\alpha
K(0)/\sigma^d})$. Thus we can cover the support with
\[
N(\varepsilon)\leq2 \biggl(\frac{1}{e^{\alpha K(0)/\sigma^d}\varepsilon
} \biggr)^{d-1} +
\frac{n}{\varepsilon e^{\alpha K(0)/\sigma^d}}
\]
balls of size $\varepsilon$. It follows from (\ref{eqalp}) that
$N(\varepsilon)\leq(1/\varepsilon)^\xi$ for $\varepsilon\geq
n^{-{1}/({2+\xi})}$ as required.\vspace*{9pt}
\textit{Claim} 2: For any $\omega$, and any $g \geq0$, $\int g(x)
\,dP_\omega(x) \geq \frac{1}{2} \int g(x) \,d\mu(x)$.\vspace*{9pt}
\textit{Proof}:
We have
\begin{eqnarray*}
\int_{S_\omega} g \,dP_\omega
&\geq& \int
_{C} g \,dP_\omega= \frac{1}{4}\int
_{A_0} g \,d\mu_0 + \frac{1}{4}\int
_{A_1} g \,d\mu_1 + \frac{\sum_j \int_{T_j} g \,d\nu_{j,\omega}}{2 q (1-\varepsilon)}
\\
&=& \frac{1}{4}\int_{A_0} g \,d\mu_0 +
\frac{1}{4}\int_{A_1} g \,d\mu_1 \\
&&{} +
\frac{(({1-2\varepsilon})\sum_j \int_{T_j} g \,d\nu_{j})/({1-\varepsilon})} {
2 q (1-2\varepsilon)} \times
\frac{{1}/{2} + q (1-2\varepsilon)}{ {1}/{2} + q (1-\varepsilon
)}
\\
&\geq& \frac{1}{2} \biggl( \frac{1}{4}\int_{A_0}
g \,d\mu_0 + \frac{1}{4}\int_{A_1} g d
\mu_1 + \frac{\sum_j \int_{T_j} g \,d\nu_{j}} {
2 q (1-2\varepsilon)} \biggr) = \frac{1}{2}\int g \,d\mu.
\end{eqnarray*}
\textit{Claim} 3:
For any $\omega,\nu\in\Omega$,
\[
d^2(f_\omega,f_\nu) = \frac{ \rho(\omega,\nu) (L/8)^2 \varepsilon^2 (1-2\varepsilon)} {
2 q (1-2\varepsilon)}.
\]
\textit{Proof}: This follows from direct calculation.\vspace*{9pt}
\textit{Claim} 4: If $\rho(\omega,\nu)=1$, then $\|P_\omega^n \wedge
P_\nu^n\| \geq1/(16e)$.\vadjust{\goodbreak}
\textit{Proof}: Suppose that $\rho(\omega,\nu)=1$. $P_\omega$ and
$P_\nu$ are the same everywhere except $T_{j,0}\cup T_{j,1}$, where $j$
is the index where $\omega$ and $\nu$ differ (assume $\omega_j=0$ and
$\nu_j=1$). Define $A= T_{j,0} \times\{0\}$ and
$B=T_{j,1}\times\{L\varepsilon/8\}$. Note that $A\cap B=\varnothing$. So,
\[
P_\omega(T_{j,0}\cup T_{j,1})= P_\omega(A)=P_\nu(T_{j,0}
\cup T_{j,1})=P_\nu(B)=\frac{1-\varepsilon
}{2q(1-\varepsilon)}
\]
and
\begin{eqnarray*}
\mathsf{TV}(P_\omega,P_\nu)&=&\bigl|P_\omega(A)-P_\nu(A)\bigr|
=\bigl|P_\omega(B)-P_\nu(B)\bigr|
\\
&=&\frac{1-\varepsilon}{2q(1-\varepsilon)}=\frac{1}{2q} = \frac{\varepsilon^{d-1}}{2}.
\end{eqnarray*}
Thus,
\[
\bigl\|P_\omega^n\land P_\nu^n\bigr\| \geq
\tfrac{1}{8} \bigl(1-\mathsf{TV}(P_\omega,P_\nu)
\bigr)^{2n} \geq\tfrac{1}{8} \bigl(1- \varepsilon^{d-1}/2
\bigr)^{2n}.
\]
Since $\varepsilon=n^{-{1}/({d-1})}$, this implies that
\[
\bigl\|P_\omega^n\land P_\nu^n\bigr\| \geq
\frac{1}{8} \biggl(1-\frac
{1}{2n} \biggr)^{2n} \geq
\frac{1}{16e}
\]
for all large $n$.\vspace*{9pt}
\textit{Completion of the proof.}
Recall that $\varepsilon= n^{-{1}/({d-1})}$.
Combining Assouad's lemma
(see Lemma 3
in the supplement)
with the above claims,
we have
\begin{eqnarray*}
R_S &=& \inf_{\hat f\in{\cal S}_n}\sup_{P\in{\cal P}_{n}}
R_P(\hat f) \geq \inf_{\hat f\in{\cal S}_n}\sup
_{P\in{\cal P}_\Omega} R_P(\hat f) \geq \frac{1}{2}\inf
_{\hat f}\max_{\omega\in\Omega} \mathbb{E}_{\omega}
\bigl[d^2(f_{\omega},\hat f)\bigr]
\\
&\geq& \frac{q}{16} \times\frac{(L/8)^2\varepsilon^2(1-2\varepsilon
)}{2q(1-2\varepsilon)} \times\frac{1}{16e} = C
\frac{q\varepsilon^2(1-2\varepsilon)}{2q(1-2\varepsilon)}
\\
&\geq& C \varepsilon^2 = C n^{-{2}/({d-1})}.
\end{eqnarray*}
\upqed\end{pf}
\subsection{Semisupervised upper bound}
Now we state the upper bound for this class.
\begin{theorem}
Let $h = (n e^{2(2-\xi)})^{-{1}/({2+\xi})}$.
Then
\begin{equation}
\sup_{P\in{\cal P}_{n}}R_P(\hat f_{h,\alpha,\sigma})\leq \biggl(
\frac{C}{n} \biggr)^{{2}/({2+\xi})}.
\end{equation}
\end{theorem}
\begin{pf}
This follows from (C2), (C3) and
Corollary~\ref{corollaryupper}.
\end{pf}
\subsection{Comparison of lower and upper bound}
Combining the last two theorems we have:\vadjust{\goodbreak}
\begin{corollary}
Under the conditions of the previous theorem,
and assuming that $d > \xi+ 3$,
\begin{equation}
\frac{R_{SS}}{R_{S}} \preceq \biggl(\frac{1}{n} \biggr)
^{{2(d-3-\xi)}/({ (2+\xi)(d-1)})}\to0
\end{equation}
as $n\to\infty$.
\end{corollary}
This establishes the effectiveness of semisupervised inference
in the minimax sense.
\section{Adaptive semisupervised inference}
\label{secadap}
We have established a bound on the risk
of the density-sensitive semisupervised kernel estimator.
The bound is achieved by using an estimate $\hat D_{\alpha,\sigma}$
of the
density-sensitive distance.
However, this requires knowing the density-sensitive parameter $\alpha
$, along with
other parameters.
It is critical to choose $\alpha$ (and $h$) appropriately, otherwise
we might
incur a large error if the semisupervised assumption does not hold, or
holds with a different density sensitivity value $\alpha$.
We consider two methods for choosing the parameters.
The following result shows that we can adapt to the correct degree of
semisupervisedness if cross-validation is used to select the
appropriate $\alpha,\sigma$ and $h$. This implies that the estimator
gracefully degrades to a supervised learner if the semisupervised
assumption (sensitivity of regression function to marginal density)
does not hold ($\alpha= 0$).
For any $f$, define the risk
$R(f) = \E[(f(X)-Y)^2]$ and the excess risk
$\cE(f) = R(f) - R(f^*) = \E[(f(X)-f^*(X))^2]$
where $f^*$ is the true regression function.
Let
${\cal H}$ be a finite set of bandwidths,
let ${\cal A}$ be a finite set of values for $\alpha$
and let $\Sigma$ be a finite set of values for $\sigma$.
Let $\theta= (h,\alpha,\sigma)$,
$\Theta= {\cal H}\times{\cal A}\times\Sigma$
and $J = |\Theta|$.
Divide the data into training data $T$
and validation data $V$.
For notational simplicity, let both sets have size $n$.
Let
${\cal F} = \{\hat f^T_{\theta}\}_{\theta\in\Theta}$
denote the semisupervised kernel estimators trained on
data $T$ using $\theta\in\Theta$.
For each $\hat f_{\theta}^T\in{\cal F}$ let
\[
\hat R^V \bigl(\hat f^T_{\theta}\bigr) =
\frac{1}{n}\sum^n_{i=1}\bigl(\hat
f^T_{\theta}(X_i)-Y_i
\bigr)^2,
\]
where the sum is over $V$.
Let $Y_i = f^*(X_i) + \varepsilon_i$ with $\varepsilon_i \stackrel
{\mathrm{i.i.d.}}{\sim} {\cal N}(0,\sigma^2)$.
Also, we assume that
$|f(x)|, |\hat f^T_{\theta}(x)| \leq M$, where $M>0$ is a
constant.\footnote{Note that the estimator can always be truncated if necessary.}
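The selection rule in Theorem~\ref{thmcrossval} below is an exhaustive search over the finite grid $\Theta={\cal H}\times{\cal A}\times\Sigma$ using the validation risk $\hat R^V$. A minimal Python sketch, where `fit` is a hypothetical training routine returning the prediction function $\hat f^T_\theta$ (the training step itself is not spelled out here):

```python
from itertools import product
import numpy as np

def select_theta(fit, H, A, Sigma, train, validation):
    """Grid-search cross-validation over theta = (h, alpha, sigma):
    return the theta minimizing the empirical validation risk
        R_hat(f) = (1/n) * sum_i (f(X_i) - Y_i)^2
    over the validation set, along with that minimum risk."""
    XV, YV = validation
    best_theta, best_risk = None, float("inf")
    for theta in product(H, A, Sigma):
        f = fit(theta, train)                      # f = f_hat^T_theta
        preds = np.array([f(x) for x in XV])
        risk = float(np.mean((preds - YV) ** 2))
        if risk < best_risk:
            best_theta, best_risk = theta, risk
    return best_theta, best_risk
```

Since $\alpha=0$ is in the grid, the procedure falls back to an ordinary (Euclidean) supervised kernel estimator when the semisupervised assumption does not help, which is the graceful-degradation property discussed above.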
\begin{theorem}
\label{thmcrossval}
Let ${\cal F} = \{\hat f^T_{\theta}\}_{\theta\in\Theta}$
denote the semisupervised kernel estimators trained on
data $T$ using $\theta\in\Theta$.
Use validation data $V$ to pick
\[
\hat\theta= \mathop{\arg\min}_{\theta\in\Theta} \hat R^V\bigl(\hat
f^T_{\theta}\bigr)\vadjust{\goodbreak}
\]
and define the corresponding estimator
$\hat f = \hat f_{\hat\theta}$. Then,
for every $0 < \delta< 1$,
\begin{equation}
\E\bigl[\cE(\hat f\,)\bigr] \leq\frac{1}{1-a} \biggl[\min
_{\theta\in\Theta} \E\bigl[\cE(\hat f_{\theta})\bigr] +
\frac{\log(J/\delta)}{nt} \biggr] + 4\delta M^2,
\end{equation}
where $0<a<1$ and $0<t < 15/(38(M^2+\sigma^2))$ are constants. $\E$
denotes expectation over
everything that is random.
\end{theorem}
\begin{pf}
First,\vspace*{1pt} we derive a general concentration of $\hat\cE(f)$ around $\cE(f)$
where
$\hat\cE(f) = \hat R(f) - \hat R(f^*) = -\frac1{n}\sum^n_{i=1}U_i$
and $U_i = -(Y_i-f(X_i))^2+(Y_i-f^*(X_i))^2$.
If the variables $U_i$ satisfy the following moment condition:
\[
\E\bigl[\bigl|U_i - \E[U_i]\bigr|^k\bigr] \leq
\frac{\mathsf{Var}(U_i)}{2}k! r^{k-2}
\]
for some $r>0$, then the Craig--Bernstein (CB) inequality [Craig (\citeyear{Cra33})]
states that with probability $>1-\delta$,
\[
\frac1{n}\sum^n_{i=1}
\bigl(U_i - \E[U_i]\bigr) \leq\frac{\log(1/\delta
)}{nt} +
\frac{t \operatorname{\mathsf{Var}}(U_i)}{2(1-c)}
\]
for $0\leq tr \leq c <1$. The moment conditions are satisfied by
bounded random variables as well as Gaussian random variables; see, for example,
\citet{RandprojJHaupt}.
To apply this inequality, we first show that $\mathsf{Var}(U_i) \leq
4(M^2+\sigma^2)\cE(f)$, using the model $Y_i = f^*(X_i) + \varepsilon_i$ with $\varepsilon_i \stackrel{\mathrm{i.i.d.}}{\sim}
{\cal N}(0,\sigma^2)$.
Also, we assume that $|f(x)|$, $|\hat f(x)| \leq M$, where $M>0$ is a constant.
\begin{eqnarray*}
\mathsf{Var}(U_i) &\leq& \E\bigl[U_i^2\bigr]
= \E\bigl[\bigl(-\bigl(Y_i-f(X_i)\bigr)^2+
\bigl(Y_i-f^*(X_i)\bigr)^2
\bigr)^2\bigr]
\\
&=& \E\bigl[\bigl(-\bigl(f^*(X_i)+\varepsilon_i-f(X_i)
\bigr)^2+(\varepsilon_i)^2\bigr)^2
\bigr]
\\
&=& \E\bigl[\bigl(-\bigl(f^*(X_i)-f(X_i)
\bigr)^2-2\varepsilon_i\bigl(f^*(X_i) -
f(X_i)\bigr)\bigr)^2\bigr]
\\
&\leq& 4 M^2 \cE(f) + 4 \sigma^2 \cE(f) = 4
\bigl(M^2+\sigma^2\bigr)\cE(f).
\end{eqnarray*}
Therefore using the CB inequality we get, with probability
$>1-\delta$,
\[
\cE(f) - \hat\cE(f) \leq\frac{\log(1/\delta)}{nt} + \frac{t
2(M^2+\sigma^2)\cE(f)}{(1-c)}.
\]
Now set $c = tr = 8t(M^2+\sigma^2)/15$ and let $t < 15/(38(M^2+\sigma
^2))$. With this choice,
$c < 1$ and define
\[
a = \frac{t 2(M^2+\sigma^2)}{(1-c)} <1.
\]
Then, using $a$ and rearranging terms, with probability $>1-\delta$,
\[
(1-a)\cE(f) - \hat\cE(f) \leq\frac{\log(1/\delta)}{nt},
\]
where $t < 15/(38(M^2+\sigma^2))$.
Then, using the previous
concentration result, and taking union bound over all $f\in{\cal F}$,
we have with probability $>1-\delta$,
\[
\cE(f) \leq\frac1{1-a} \biggl[\hat\cE^V(f) + \frac{\log(J/\delta
)}{nt}
\biggr].
\]
Now,
\begin{eqnarray*}
\cE(\hat f_{\hat\theta})
&=& R(\hat f_{\hat\theta
}) - R
\bigl(f^*\bigr)
\\
&\leq& \frac1{1-a} \biggl[\hat R^V(\hat
f_{\hat\theta}) - \hat R^V\bigl(f^*\bigr) + \frac{\log(J/\delta)}{nt}
\biggr]
\\
&\leq& \frac1{1-a} \biggl[\hat R^V(f) - \hat
R^V\bigl(f^*\bigr) + \frac{\log(J/\delta)}{nt} \biggr].
\end{eqnarray*}
Taking expectation with respect to validation dataset,
\[
\E_V\bigl[\cE(\hat f_{\hat\theta})\bigr] \leq\frac1{1-a}
\biggl[R(f) - R\bigl(f^*\bigr) + \frac{\log(J/\delta)}{nt} \biggr] + 4\delta
M^2.
\]
Now taking expectation with respect to training dataset,
\[
\E_{\mathrm{TV}}\bigl[\cE(\hat f_{\hat\theta})\bigr] \leq \frac1{1-a}
\biggl[\E_T\bigl[R(f) - R\bigl(f^*\bigr)\bigr] + \frac{\log(J/\delta
)}{nt}
\biggr] + 4 \delta M^2.
\]
Since this holds for all $f \in{\cal F}$, we get
\[
\E_{\mathrm{TV}}\bigl[\cE(\hat f_{\hat\theta})\bigr] \leq \frac1{1-a}
\biggl[\min_{f\in{\cal F}}\E_T\bigl[\cE(f)\bigr] +
\frac{\log(J/\delta)}{nt} \biggr] + 4 \delta M^2.
\]
The result follows.
\end{pf}
In practice, $\Theta$ may be taken to be
of size $n^a$ for some $a>0$.
Then we can approximate the optimal $h,\sigma$ and $\alpha$
with sufficient accuracy to achieve the optimal rate.
Setting $\delta= 1/(4 M^2n)$, we then see that
the penalty for
adaptation is
$\frac{\log(J/\delta)}{nt} + 4\delta M^2 = O(\log n /n)$
and hence introduces only a logarithmic term.
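The selection rule analyzed above can be sketched in code. This is an illustrative sketch only: a plain Nadaraya--Watson smoother with a Gaussian kernel stands in for the density-sensitive estimator $\hat f^T_{\theta}$, only the bandwidth $h$ is swept (in the paper $\theta=(h,\alpha,\sigma)$), and all function names are hypothetical.

```python
import numpy as np

def nw_regression(X_train, Y_train, X_eval, h):
    """Nadaraya-Watson estimate with a Gaussian kernel of bandwidth h."""
    d2 = (X_eval[:, None] - X_train[None, :]) ** 2
    W = np.exp(-d2 / (2.0 * h * h))
    W /= W.sum(axis=1, keepdims=True)
    return W @ Y_train

def select_by_validation(X_T, Y_T, X_V, Y_V, bandwidths):
    """Return the theta (here just h) minimizing the validation risk R^V."""
    risks = [np.mean((nw_regression(X_T, Y_T, X_V, h) - Y_V) ** 2)
             for h in bandwidths]
    best = int(np.argmin(risks))
    return bandwidths[best], risks[best]
```

Sweeping the full grid $\Theta={\cal H}\times{\cal A}\times\Sigma$ only changes the loop over parameters, not the structure of the selection rule.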
\begin{Remark*}
Cross-validation is not the only way to adapt. For example, the
adaptive method in \citet{Kpotufe} can also be used here.
\end{Remark*}
\section{Simulation results}
\label{secsim}
In this section we describe the results of a series of numerical
experiments on a simulated data set to demonstrate the effect of using
the exponential version of the density sensitive metric for small, labeled
sample sizes. For the marginal distribution of $X$, we used a
slightly modified version of the swiss roll distribution used in
\citet{Culp2011SSS}. Figure~\ref{figswissroll} shows a sample from
this distribution, where the point size represents the response $Y$.
We repeatedly sampled $N=400$ points from this
distribution, and computed the mean squared error of the kernel
regression estimator using a set of values for $\alpha$ and for
labeled sample size ranging from $n=5$ to $n=320$.
We used the approximation method described in the supplement [see
equation (10)]
with the number of nearest neighbors used set to $k=20$.
\begin{figure}
\includegraphics{1092f05.eps}
\caption{The swiss roll data set. Point size represents regression function.}
\label{figswissroll}
\end{figure}
\begin{figure}
\includegraphics{1092f06.eps}
\caption{MSE of kernel regression on the swiss roll data set for a
range of labeled sample sizes using different values of $\alpha$.}
\label{figsimulation}
\end{figure}
Figure~\ref{figsimulation} shows the average results after $300$
repetitions of this procedure with error bars indicating a $95\%$
confidence interval. As expected, we observe that for small labeled
sample sizes, increasing $\alpha$ can decrease the error. But as the
labeled sample size increases, using the density sensitive metric
becomes decreasingly beneficial, and can even hurt.
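The experimental protocol of this section can be sketched as follows. This is a hedged illustration, not the paper's code: a simple noisy 2-D spiral stands in for the swiss roll, the response is the position along the spiral, and a Euclidean-metric kernel smoother replaces the density-sensitive estimator, so only the repeat-sample-and-average structure mirrors the text.

```python
import numpy as np

def sample_spiral(n, rng):
    """Simplified noisy 2-D spiral; the response is the position along it."""
    t = rng.uniform(1.5 * np.pi, 4.5 * np.pi, n)
    X = np.c_[t * np.cos(t), t * np.sin(t)] + 0.1 * rng.standard_normal((n, 2))
    return X, t

def kernel_regress(X_lab, Y_lab, X_test, h):
    """Gaussian-kernel smoother in the (Euclidean) ambient metric."""
    d2 = ((X_test[:, None, :] - X_lab[None, :, :]) ** 2).sum(-1)
    d2 = d2 - d2.min(axis=1, keepdims=True)   # row rescaling, cancels below
    W = np.exp(-d2 / (2.0 * h * h))
    W /= W.sum(axis=1, keepdims=True)
    return W @ Y_lab

def mse_for_labeled_size(n_lab, reps=50, N=400, h=3.0, seed=0):
    """Average MSE over repeated draws, mirroring the repeat-and-average loop."""
    rng = np.random.default_rng(seed)
    errs = []
    for _ in range(reps):
        X, Y = sample_spiral(N, rng)
        lab = rng.choice(N, n_lab, replace=False)
        errs.append(np.mean((kernel_regress(X[lab], Y[lab], X, h) - Y) ** 2))
    return float(np.mean(errs))
```

Running this for a range of labeled sample sizes reproduces the qualitative trend: the error drops sharply as the number of labeled points grows.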
\section{Discussion}
\label{secdisc}
Semisupervised methods are very powerful, but like all methods, they
only work under certain conditions. We have shown that, under certain
conditions, semisupervised methods provably outperform supervised
methods. In particular, the advantage of semisupervised methods is
mainly when the distribution $P_X$ of $X$ is concentrated near a
low-dimensional set rather than when $P_X$ is smooth.
We introduced a family of estimators indexed by a parameter $\alpha$.
This parameter controls the strength of the semi-supervised assumption.
The behavior of the semi-supervised method depends critically on
$\alpha$. Finally, we showed that cross-validation can be used to
automatically adapt to $\alpha$ so that $\alpha$ does not need to be
known. Hence, our method takes advantage of the unlabeled data when the
semi-supervised assumption holds, but does not add extra bias when the
assumption fails. Our simulations confirm that our proposed estimator
has good risk when the semi-supervised smoothness holds.\looseness=-1
The analysis in this paper can be extended in several ways. First, it
is possible to use other density sensitive metrics such as the
diffusion distance [\citet{wasserman08spectral}]. Second, we
defined a method to estimate the density sensitive metric that works
under broader conditions than the two existing methods due to
\citet{orlitsky} and \citet{bigral11}. We suspect that faster
methods can be developed. Finally, other estimators besides kernel
estimators can be used. We will report on these extensions elsewhere.
\begin{supplement}
\stitle{Supplement to ``Density-sensitive semisupervised inference''\\}
\slink[doi]{10.1214/13-AOS1092SUPP}
\sdatatype{.pdf}
\sfilename{aos1092\_supp.pdf}
\sdescription{Contains technical details, proofs and extensions.}
\end{supplement}
\section{INTRODUCTION}
\par Recent developments in mesoscopic physics revived the interest in shot noise \cite{nato,samina,martin}. Shot noise refers to the current fluctuations due to the discreteness of the electron charge and was discovered by W. Schottky in 1918 in thermionic vacuum tubes. In these devices the electrons can be totally uncorrelated: in that case their emission follows a Poisson distribution and gives rise to the power spectral density:
\begin{equation}
P=2e^{*}I=P_{Poisson}
\end{equation}
where $e^{*}$ and $I$ are respectively the carrier charge and the net average current. The key point is that shot noise reflects the degree of correlation between electrons. In samples small enough to experience no inelastic scattering, some interesting features such as the so-called "quantum suppression of shot noise" below $P_{Poisson}$ were predicted and experimentally verified \cite{kumar}. When superconducting reservoirs are used to connect these small devices, the shot noise should increase above the prediction for normal materials because of Cooper pairs of charge $e^{*}=2e$ participating in the current transport \cite{dejong}. Experimental results regarding the interplay of shot noise suppression and Andreev reflections at NS interfaces are difficult to obtain. Most of the results were obtained with room temperature electronics with two independent FET amplifiers and cross correlation techniques \cite{glattli}. This technique is limited to source resistances of the order of $100\Omega$ at $4.2K$.
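As a numerical illustration of the Schottky formula $P=2e^{*}I$ (a sketch with hypothetical numbers, chosen only to match the current scale discussed in this paper):

```python
# Schottky's formula P = 2 e* I for uncorrelated carriers.
E = 1.602176634e-19  # elementary charge [C]

def shot_noise_psd(current, charge=E):
    """Poisson current-noise power spectral density, in A^2/Hz."""
    return 2.0 * charge * current

I = 10e-3                              # 10 mA bias, the scale used here
P_normal = shot_noise_psd(I)           # e* = e, normal carriers
P_cooper = shot_noise_psd(I, 2 * E)    # e* = 2e, Cooper pairs
print(P_normal, P_cooper / P_normal)   # ~3.2e-21 A^2/Hz, and a doubling for 2e
```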
The cross correlation technique can still work with samples using a nanowire \cite{strunk} or pinholes in an insulator \cite{dielemann} as the normal metal but not with wider normal sections. We present a new experiment allowing the measurement of the shot noise in low impedance devices with high reliability since the noise of the measurement apparatus is small compared to the ground noise of interest. We will describe the low temperature resistance bridge with the SQUID first. Then we will give an overview of the two feedback loops and discuss the performances of the system.
\section{RESISTANCE BRIDGE}
\par We wish to perform measurements from the thermal noise regime ($V_{bias}=0$) to the full shot noise regime ($e^{*}V\gg kT$). The maximum bias current which can be applied is determined by the gap voltage of the metal used for the superconducting electrodes. For a junction made of niobium this value is about $3mV$, therefore if its resistance is $0.25\Omega$ the corresponding bias current is around $10mA$. On the other hand the ground level of the noise current is in the tens of $pA/\sqrt{Hz}$ range, so a very good resolution is required. A classical method of applying a bias voltage at low temperature consists of a reference resistance of value much smaller than $R_{x}$ placed in parallel with the latter, thus creating a low impedance voltage source \cite{devoret}. This scheme is not applicable in our case as most of the current flows through the small resistance and the $10mA$ required in the sample would impose excessively high total currents not compatible with low noise sources. We have designed a new experiment based on a resistance bridge to obtain a high sensitivity while supplying high dc bias currents.
\\The low temperature part of the circuit consists of the resistance bridge and the SQUID in a calorimeter (Figure \ref{fig:grandg}). It is composed of a reference resistance $R_{ref}$ and the sample $R_{x}$, both connected with 4 wires. The sample is connected with short gold wires which give an additional resistance $r$ much smaller than $R_{ref}$ and $R_{x}$. The input coil of the SQUID is connected in series with the voltage leads by means of superconducting wires. The aim is to apply a high bias current that is not measured by the SQUID, so that the SQUID electronics can be set to a high sensitivity in order to detect small currents.
\begin{figure}[h]
\psfig{figure=grandg.eps,width=80mm}
\caption{\footnotesize Schematic of the circuit in the noise measurement configuration. The noise current $I_{n_{x}}$ of the sample is related to the output measured voltage noise by equation[\ref{vnout}]. At $4.2K$, $R_{ref}=0.177\Omega$, $r=4m\Omega$ and $0.25\Omega<R_{x}<0.3\Omega$. The current sources $i_{n_{ref}}$, $i_{n_{x}}$ and $i_{n_{r}}$ are the equivalent noise current generators for $R_{ref}$, ${R_{x}}$ and $r$. The noise current generator $i_{n_{setup}}$ represents the total noise of the experimental setup expressed as a current in the input coil. It is mainly due to the Voltage Controlled Current Source (VCCS) and the SQUID electronics.}
\label{fig:grandg}
\end{figure}
A DC current $I_{ref}$ injected through $R_{ref}$ sets the voltage $V_{bias}=R_{ref}I_{ref}$. The bridge is balanced with a DC feedback current $I_{x}$ injected in $R_{x}$ so that $i_{squid}=0$. The equation which rules the DC balance is:
\begin{equation}
V_{bias}=R_{ref}I_{ref}=R_{x}I_{x}
\label{equilibre}
\end{equation}
\\As $i_{squid}$ is then null in the DC limit, the SQUID can measure with a high gain the AC variations of $i_{squid}$ arising from the noise of the resistances composing the bridge. The noise voltage $V_{n_{out}}$ measured by the spectrum analyzer at the output of the SQUID electronics is:
\begin{eqnarray}
V_{n_{out}}^{2}={1\over{G}^{2}}\Bigg[\left(1\over{\Sigma R}\right)^{2}\Big[{(R_{ref}i_{n_{ref}})}^{2}+{(ri_{n_{r}})}^{2}\nonumber\\
+{(R_{dyn_{x}}i_{n_{x}})}^{2}\Big]+{i_{n_{setup}}}^{2}\Bigg]
\label{vnout}
\end{eqnarray}
where $\Sigma R=r+{R_{ref}+R_{dyn_{x}}}$ and $G$ is the overall gain of the SQUID electronics, $G={i_{squid}\over{V_{out}}}$. The term $R_{dyn_{x}}$ denotes the dynamic resistance of the sample, which is not strictly ohmic: $R_{dyn_{x}}={\left( \partial V \over{\partial I} \right)_{I_{x}}}$. The bracketed expression in equation~[\ref{vnout}] is the sum of the noise contributions of the three resistances and of the non-fundamental noise arising from the experimental setup, expressed as a current in the input coil and denoted $i_{n_{setup}}$. This equation is simply derived from the expression of $i_{squid}$ using current division of each equivalent noise current between its own source resistance and the rest of the circuit (see Figure~\ref{fig:grandg}).
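The current division behind the expression for $V_{n_{out}}^{2}$ can be checked numerically. The sketch below evaluates the input-coil-referred noise $(G\,V_{n_{out}})^{2}$ at zero bias, under the assumptions that all resistors sit at $4.2K$, that $R_{dyn_{x}}=0.25\Omega$, and using the measured setup-noise floor of $2.6\times10^{-23}\,A^{2}/Hz$ reported later in the text:

```python
import math

kB = 1.380649e-23  # Boltzmann constant [J/K]

def thermal_i2(R, T):
    """Johnson-Nyquist current noise density 4 k_B T / R, in A^2/Hz."""
    return 4.0 * kB * T / R

# Assumed values: figure-quoted resistances, all at 4.2 K,
# R_dyn_x taken as 0.25 ohm, and the measured setup-noise floor.
R_ref, r, R_dyn, T = 0.177, 4e-3, 0.25, 4.2
sigma_R = R_ref + r + R_dyn
i2_setup = 2.6e-23  # white-noise floor of VCCS + SQUID [A^2/Hz]

# Each noise current i_n sees the divider R_source / Sigma_R, so its
# contribution to the input-coil current is (R_source * i_n / Sigma_R)^2.
i2_coil = (thermal_i2(R_ref, T) * R_ref**2
           + thermal_i2(r, T) * r**2
           + thermal_i2(R_dyn, T) * R_dyn**2) / sigma_R**2 + i2_setup

print(math.sqrt(i2_coil))  # ~2.4e-11 A/sqrt(Hz), i.e. ~24 pA/sqrt(Hz)
```

The resulting level of a few tens of $pA/\sqrt{Hz}$ is consistent with the ground noise scale quoted in the previous section.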
\begin{figure}[h]
\psfig{figure=petitg.eps,width=80mm}
\caption{\footnotesize Schematic of the circuit showing the DC feedback loop. The low noise Voltage Controlled Current Source (VCCS) is driven by the squid electronics through a regulation program. It provides the DC current $I_{x}$ which counterbalances $I_{ref}$ so that $V_{bias}$ is applied on the sample $R_{x}$. }
\label{fig:petitg}
\end{figure}
The wiring and the temperature control are very similar to those used in reference~\cite{plomb}. The SQUID and the reference resistance (made of constantan, $R_{ref}=0.177\Omega$) are kept at helium temperature by copper rods ending in the helium bath. The sample holder allows very precise temperature control. Special attention is paid to RF filtering to reduce the noise on the SQUID. We also use magnetic shielding with superconducting materials. The twisted superconducting wires connected to the SQUID are shielded with superconducting tinned tubes. Thermalization pads and the reference resistance are placed in a soldered lead box. Finally the whole calorimeter is surrounded by two layers of magnetic shielding tape \cite{vacuum} and a thick soldered lead foil. Flux jumps are totally suppressed and the SQUID can be operated with DC currents.
\section{FEEDBACK SCHEMES}
\par We use a Quantum Design \cite{qd} DC-SQUID and an electronics designed and built in our laboratory. It works as a Flux Locked Loop (FLL) system with modulation at $500kHz$. The FLL operation is realized on the modulation/feedback coil.
A practical experiment includes two different stages. First the desired bias current is reached. The second stage is devoted to the noise measurement with the spectrum analyzer (SR785).
During the first stage the current $I_{ref}$ delivered by a floating battery is increased manually and the feedback current $I_{x}$ is fed through $R_{x}$ in order to satisfy equation \ref{equilibre}. Figure \ref{fig:petitg} shows this "DC feedback" loop. At this point the SQUID electronics has a low gain corresponding to $\approx 100{\Phi_{0}}/V$ i.e. $G\approx 20\mu A/V$. The dynamics of the system are then substantial enough to inject relatively high currents. A Labview program running on a PC regulates $V_{out}$ to keep it null while $I_{ref}$ is manually increased. First a voltmeter (Keithley2000) reads $V_{out}$; the program then drives a voltage source (HP3245A) whose output is filtered by a first order $RC$ low pass filter with a long time constant. This DC feedback voltage finally drives a purposely designed low noise Voltage Controlled Current Source (VCCS). The transconductance of this VCCS is $1mA/V$. Its output feedback current $I_{x}$ is reinjected through $R_{x}$.
The high value of the transconductance makes it necessary to strongly filter the noise generated by the electronics before the VCCS. Indeed input voltage noises of a few $nV/\sqrt{Hz}$ must be obtained in order to generate no more than a few $pA/\sqrt{Hz}$ at the output of the VCCS. That is why an RC low pass filter with a $4.7s$ time constant is inserted before the VCCS. Therefore the only white noise generated by the DC feedback loop is the intrinsic noise of the VCCS. The VCCS is supplied with two lead batteries ($\pm 12V$) and is placed with the other lead battery providing $I_{ref}$ in a $\mu$-metal tube closed at one end. This results in spectra showing strictly no peaks (Fig. \ref{fig:spectra}) and helps to obtain a very high stability of the SQUID electronics.
\\When the desired bias current is reached and the bridge is balanced according to equation \ref{equilibre}, $i_{squid}$ becomes zero in the DC limit; the gain of the SQUID electronics can then be increased continuously to $1{\Phi_{0}}/V$, i.e. $G=184nA/V$. At this stage, corresponding to Figure \ref{fig:grandg}, the FFT analyzer can measure the noise spectrum of interest. The FFT is typically computed in the $[16Hz - 12.8kHz]$ range with 800 FFT lines. The dispersion on the spectra is $\approx 0.4pA/\sqrt{Hz}$ after 3000 averages; the acquisition time is then less than 4 minutes. The same operations are repeated to get spectra at different bias currents.
\section{PERFORMANCE OF THE SYSTEM}
Experiments have been carried out on macroscopic metallic resistors to test the absence of shot noise due to the electronics or the reference resistor. The sample was simply replaced by a resistance made of the same constantan wire as $R_{ref}$ and of similar value. Spectra were performed from zero bias current to the mA range. They were all found superimposed within the dispersion determined by the averaging. Neither the $1/f$ noise nor the white noise was affected by the DC bias current. This indicates that no excess noise is added by the electronics or $R_{ref}$ when a bias current is supplied. Consequently any detectable current dependence of the noise observed with a real sample will undoubtedly be due to the sample itself. The results were also consistently reproducible, even after warming up the experiment.
\begin{figure}
\psfig{figure=iexp.eps,width=80mm}
\caption{\footnotesize Intrinsic total noise of the experiment. The white noise level of $2.6\times 10^{-23} A^{2}/Hz{}(5.1pA/\sqrt{Hz})$ is due to the VCCS and the SQUID system. The low frequency excess noise is due to the amplifiers composing the VCCS.}
\label{fig:iexp}
\end{figure}
Equation \ref{vnout} gives the total output noise which is measured. At zero bias current it is the sum of the thermal noise generated by the resistance bridge and the equivalent noise $i_{n_{setup}}^2$ of the whole electronics in the input coil. As there is no shot noise at zero bias, the contribution from the three resistances is simply the thermal noise of each, given for a resistance of value $R$ at temperature $T$ by the Johnson-Nyquist relation $i_{n}^{2}=4k_{B}T/R$. Therefore $i_{n_{setup}}$ can be obtained from the raw data at zero bias using equation \ref{vnout}:
\begin{equation}
{i_{n_{setup}}}^{2}={(GV_{n_{out}})}^{2}-{{{4k_{B}}\over{{(\Sigma R)}^{2}}}} \left[T_{x}(R_{dyn_{x}}+r)+4.2R_{ref}\right]
\label{iexp}
\end{equation}
The noise obtained is shown in Figure \ref{fig:iexp}. It is fitted by a constant added to a power law of the frequency to take into account the low frequency excess noise. Typically we found $i_{exp}^2=2.6\times 10^{-23}+3\times 10^{-20}/f^{1.5}$, i.e. a white noise level of $5.1pA/\sqrt{Hz}$. This noise comes from the VCCS and the SQUID setup. It is in very good agreement with noise simulations of the VCCS using SPICE modelling and with measurements of the SQUID system noise.
\\The VCCS is composed of two ultra low noise ($LT1028$ and $LT1128$) operational amplifiers and 4 matched resistances. A noise level of $4.5pA/\sqrt{Hz}$ is predicted by SPICE simulations using the typical noise specifications of the operational amplifiers and the actual values of the resistors fixing the transconductance. The second source of noise is the SQUID and its electronics, which has been measured: it contributes $\approx 2.5pA/\sqrt{Hz}$ (i.e. $13\mu\Phi_{0}/\sqrt{Hz})$. Therefore at $4.2K$ the intrinsic noise power of the experiment is only 6 \% of the thermal noise of the resistance bridge ($0.4\Omega$).
\\To conclude, let us show some results illustrating the ability of our setup to detect small changes of noise. We can easily measure the evolution with temperature of the thermal noise of a $0.22\Omega$ sample and check that it follows the Johnson-Nyquist relation (Figure \ref{fig:johnson}).
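The temperature sweep of Figure \ref{fig:johnson} can be reproduced in outline (a sketch assuming a constant $0.22\Omega$; as the caption notes, the real resistance is slightly temperature dependent):

```python
kB = 1.380649e-23  # Boltzmann constant [J/K]

def johnson_current_noise(R, T):
    """Thermal current noise density sqrt(4 k_B T / R), in A/sqrt(Hz)."""
    return (4.0 * kB * T / R) ** 0.5

R = 0.22  # sample resistance [ohm]; assumed constant here
for T in (1.5, 4.2, 10.0):
    print(T, johnson_current_noise(R, T))  # rises as sqrt(T)
```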
\begin{figure}
\psfig{figure=johnson.eps,width=80mm}
\caption{\footnotesize Temperature dependence of thermal noise for a $0.22\Omega$ sample. The experimental results (dots) are in very good agreement with the Johnson-Nyquist formula (line). The latter is not a straight line because the resistance is slightly temperature dependent.}
\label{fig:johnson}
\end{figure}
\begin{figure}
\psfig{figure=spectra.eps,width=80mm}
\caption{\footnotesize Typical examples of $S_{I}(f)$ in a SNS junction of $\approx 0.25\Omega$ at $4.2K$. The solid curve is a fit to the $8mA$ data by the sum of a $1/f$ and a frequency-independent contribution. The acquisition time is about 4 minutes for the $12.8kHz$ span with 800 points, yielding a dispersion of $\approx 0.4pA/\sqrt{Hz}$. These data clearly show an increase of the noise level with the DC bias current.}
\label{fig:spectra}
\end{figure}
Because of the clear domination of the thermal noise of the bridge at zero bias, it is possible to study the crossover between the thermal and the shot noise regimes as well as the full shot noise regime. Figure \ref{fig:spectra} shows the first results obtained in an SNS (Nb-Al-Nb) junction of $\approx 0.5\mu m$ length \cite{jehl}.
Unlike in the preliminary experiment with a macroscopic metallic resistance instead of the sample, we clearly observe a modification of the noise spectra with the applied current. The low frequency noise behaves as $1/f$ with a coefficient roughly proportional to ${I_{bias}}^{2}$, indicating resistance fluctuations. The higher frequency noise, which is the noise of interest, clearly increases with the current. Because of its very low noise, our experiment provides very reliable shot noise results with high dc bias currents. It opens new perspectives in the very rich field of shot noise measurements in mesoscopic physics.
\subsubsection*{Acknowledgment.}}
\usepackage{amsmath,amsfonts,amsthm,amssymb,verbatim,enumerate,quotes,graphicx}
\usepackage[flushmargin]{footmisc}
\usepackage[noadjust]{cite}
\usepackage{color}
\newcommand{\comments}[1]{}
\numberwithin{equation}{section}
\makeatletter
\def\xdef\@thefnmark{}\@footnotetext{\xdef\@thefnmark{}\@footnotetext}
\makeatother
\newcommand{\red}[1]{{\color{red} #1}}
\newcommand{\blue}[1]{{\color{blue} #1}}
\newcommand{\green}[1]{{\color{green} #1}}
\definecolor{orange}{rgb}{1,0.5,0}
\newcommand{\orange}[1]{{\color{orange} #1}}
\newcommand{\displaystyle}{\displaystyle}
\newcommand{\mbox{int}\,}{\mbox{int}\,}
\theoremstyle{plain}
\newtheorem{thm}{Theorem}[section]
\newtheorem{prop}[thm]{Proposition}
\newtheorem{cor}[thm]{Corollary}
\newtheorem{cors}[thm]{Corollaries}
\newtheorem{lem}[thm]{Lemma}
\theoremstyle{definition}
\newtheorem{dfn}[thm]{Definition}
\newtheorem{ex}[thm]{Example}
\newtheorem{exs}[thm]{Examples}
\theoremstyle{remark}
\newtheorem{nota}[thm]{Note}
\newtheorem{rmk}[thm]{Remark}
\renewcommand{\qedsymbol}{$\blacksquare$}
\newcommand{{\mathbb{C}}}{{\mathbb{C}}}
\newcommand{{\hat{\mathbb{C}}}}{{\hat{\mathbb{C}}}}
\newcommand{{\mathbb{C}^*}}{{\mathbb{C}^*}}
\newcommand{\mathcal B}{\mathcal B}
\newcommand{{\mathbb{R}}}{{\mathbb{R}}}
\newcommand{{\mathbb{Z}}}{{\mathbb{Z}}}
\newcommand{{\mathbb{N}}}{{\mathbb{N}}}
\newcommand{{\mathbb{Q}}}{{\mathbb{Q}}}
\newcommand{{\mathcal{K}}}{{\mathcal{K}}}
\renewcommand{\L}{\Lambda}
\renewcommand{\Re}{\operatorname{Re}}
\renewcommand{\Im}{\operatorname{Im}}
\newcommand{\operatorname{dist}}{\operatorname{dist}}
\newcommand{\varepsilon}{\varepsilon}
\newcommand{\bar\partial}{\bar\partial}
\newcommand{\e}{\varepsilon}
\newcommand{\partial}{\partial}
\newcommand{\alpha}{\alpha}
\newcommand{\zeta}{\zeta}
\def\mline#1{\par\hspace*{-\leftmargin}\parbox{\textwidth}{\vspace{5pt}\begin{center}\hspace{10pt}$\displaystyle #1$\end{center}\vspace{5pt}}}
\makeatletter
\newcommand{\hbox to \@totalleftmargin{\hfil}}{\hbox to \@totalleftmargin{\hfil}}
\makeatother
\begin{document}
\bibliographystyle{amsalpha}
\title[Escaping Fatou components of transcendental self-maps of ${\mathbb{C}}^*$]{Escaping Fatou components of transcendental self-maps of the punctured plane}
\author[D. Mart\'i-Pete]{David Mart\'i-Pete}
\address{Department of Mathematics\\ Faculty of Science\\ Kyoto University\\ Kyoto 606-8502\\ Japan}
\email{[email protected]}
\date{\today}
\thanks{This research was supported by The Open University and by the Grant-in-Aid for Scientific Research JP16F16807 from the Japanese Society for the Promotion of Science.}
\maketitle
\begin{abstract}
We study the iteration of transcendental self-maps of ${\mathbb{C}}^*:={\mathbb{C}}\setminus \{0\}$, that is, holomorphic functions $f:{\mathbb{C}}^*\to{\mathbb{C}}^*$ for which both zero and infinity are essential singularities. We use approximation theory to construct functions in this class with escaping Fatou components, both wandering domains and Baker domains, that accumulate to $\{0,\infty\}$ in any possible way under iteration. We also give the first explicit examples of transcendental self-maps of ${\mathbb{C}}^*$ with Baker domains and with wandering domains. In doing so, we develop a sufficient condition for a function to have a simply connected escaping wandering domain. Finally, we remark that our results also provide new examples of entire functions with escaping Fatou components.
\end{abstract}
\section{Introduction}
Complex dynamics concerns the iteration of a holomorphic function on a Riemann surface $S$. Given a point $z\in S$, we consider the sequence given by its iterates $f^n(z)=(f\circ\displaystyle\mathop{\cdots}^{n}\circ f)(z)$ and study the possible behaviours as $n$ tends to infinity. We partition $S$ into the \textit{Fatou set}, or stable set,
$$
F(f):=\bigl\{z\in S\ :\ (f^n)_{n\in\mathbb N} \mbox{ is a normal family in some neighbourhood of } z\bigr\}
$$
and the \textit{Julia set} $J(f):=S\setminus F(f)$, where the chaotic behaviour takes place. We refer to a connected component of $F(f)$ as a \textit{Fatou component} of $f$. If $S\subseteq {\hat{\mathbb{C}}}$, $f:S\rightarrow S$ is holomorphic and ${\hat{\mathbb{C}}}\setminus S$ consists of essential singularities, then conjugating by a M\"obius transformation, we can reduce to one of the following three cases:
\begin{itemize}
\item $S={\hat{\mathbb{C}}}:={\mathbb{C}}\cup\{\infty\}$ and $f$ is a rational map;
\item $S={\mathbb{C}}$ and $f$ is a transcendental entire function;
\item $S={\mathbb{C}^*}:={\mathbb{C}}\setminus\{0\}$ and \textit{both} zero and infinity are essential singularities.
\end{itemize}
We study this third class of maps, which we call \textit{transcendental self-maps of} ${\mathbb{C}^*}$. Such maps are all of the form
\begin{equation}
f(z)=z^n\exp\bigl(g(z)+h(1/z)\bigr),
\label{eqn:bhat}
\end{equation}
where $n\in{\mathbb{Z}}$ and $g,h$ are non-constant entire functions. We define the \textit{index}~of~$f$, denoted by $\textup{ind}(f)$, as the index (or winding number) of $f(\gamma)$ with respect to the origin for any positively oriented simple closed curve $\gamma$ around the origin; note that $\textup{ind}(f)=n$ in \eqref{eqn:bhat}. Transcendental self-maps of ${\mathbb{C}}^*$ arise in a natural way in many situations, for example, when you complexify circle maps, like the so-called Arnol'd standard family: $f_{\alpha,\beta}(z)=ze^{i\alpha}e^{\beta(z-1/z)/2}$, $0\leqslant \alpha\leqslant 2\pi,\ \beta\geqslant 0$ \cite{fagella99}. Note that if $f$ has three or more omitted points, then, by Picard's theorem, $f$ is constant and, consequently, a non-constant holomorphic function $f:{\mathbb{C}}^*\rightarrow {\mathbb{C}}^*$ has no omitted values. The book \cite{milnor06} is a basic reference on the iteration of holomorphic functions in one complex variable. See \cite{bergweiler93} for a survey on transcendental entire and meromorphic functions. Although the iteration of transcendental (entire) functions dates back to the times of Fatou \cite{fatou26}, R{\aa}dstr\"om \cite{radstrom53} was the first to consider the iteration of transcendental self-maps of ${\mathbb{C}}^*$. An extensive list of references on this topic can be found in \cite{martipete}.
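The index can be computed numerically by accumulating the change of $\arg f$ along the unit circle (an illustrative sketch, not from the paper; the sample maps below are chosen so that their indices can be read off from the normal form \eqref{eqn:bhat}):

```python
import cmath

def index(f, samples=4096):
    """Winding number of f(unit circle) around 0, by summing phase steps."""
    total = 0.0
    prev = f(1 + 0j)
    for k in range(1, samples + 1):
        z = cmath.exp(2j * cmath.pi * k / samples)
        cur = f(z)
        total += cmath.phase(cur / prev)  # phase increment, in (-pi, pi]
        prev = cur
    return round(total / (2 * cmath.pi))

# For f(z) = z^n exp(g(z) + h(1/z)) the index equals n, e.g. n = 2 below.
print(index(lambda z: z**2 * cmath.exp(z + 1/z)))
```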
We recall the definition of the \textit{escaping set} of an entire function $f$,
$$
I(f):=\{z\in{\mathbb{C}}\ :\ f^n(z)\rightarrow \infty \mbox{ as } n\rightarrow \infty\},
$$
whose investigation has provided important insight into the Julia set of entire functions. For polynomials, the escaping set consists of the basin of attraction of infinity and its boundary equals the Julia set. For transcendental entire functions, Eremenko showed that $I(f)\cap J(f)\neq \emptyset$, $J(f)=\partial I(f)$ and the components of $\overline{I(f)}$ are all unbounded \cite{eremenko89}. If $f$ is a transcendental self-map of ${\mathbb{C}}^*$, then the escaping set of $f$ is given by
$$
I(f):=\{z\in{\mathbb{C}^*}\ :\ \omega(z,f)\subseteq \{0,\infty\}\},
$$
where $\omega(z,f)$ is the classical omega-limit set $\omega(z,f):=\bigcap_{n\in{\mathbb{N}}}\overline{\{f^k(z)\ :\ k\geqslant n\}}$ with the closure being taken in ${\hat{\mathbb{C}}}$. In \cite{martipete1}, we studied the basic properties of $I(f)$ for transcendental self-maps of ${\mathbb{C}}^*$ and introduced the following notion. We define the \textit{essential itinerary} of a point $z\in I(f)$ as the symbol sequence \mbox{$e=(e_n)\in\{0,\infty\}^{{\mathbb{N}}_0}$} given by
$$
e_n:=\left\{
\begin{array}{ll}
0, & \mbox{ if } |f^n(z)|\leqslant 1,\vspace{5pt}\\
\infty, & \mbox{ if } |f^n(z)|>1,
\end{array}
\right.
$$
for all $n\in {\mathbb{N}}_0$. Then, for each sequence $e\in\{0,\infty\}^{{\mathbb{N}}_0}$, we consider the set of points whose essential itinerary is eventually a shift of $e$, that is,
$$
I_e(f):=\{z\in I(f)\ :\ \exists \ell,k\in{\mathbb{N}}_0,\ \forall n\in\mathbb{N}_0,\ |f^{n+\ell}(z)|>1\Leftrightarrow e_{n+k}=\infty\}.
$$
Observe that if $e_1,e_2\in\{0,\infty\}^{{\mathbb{N}}_0}$ satisfy $\sigma^m(e_1)=\sigma^n(e_2)$ for some $m,n\in{\mathbb{N}}_0$, where $\sigma$ is the Bernoulli shift map (we say that $e_1$ and $e_2$ are \textit{equivalent}), then $I_{e_1}(f)=I_{e_2}(f)$ and, otherwise, the sets $I_{e_1}(f)$ and $I_{e_2}(f)$ are disjoint. Hence, the concept of essential itinerary provides a partition of $I(f)$ into uncountably many non-empty sets of the form $I_e(f)$ for some $e\in\{0,\infty\}^{\mathbb{N}_0}$. In \cite{martipete1}, we also showed that, for each $e\in\{0,\infty\}^{{\mathbb{N}}_0}$, we have $I_e(f)\cap J(f)\neq \emptyset$, $J(f)=\partial I_e(f)$~and the components of $\overline{I_e(f)}$ are all unbounded in ${\mathbb{C}}^*$, that is, their closure in ${\hat{\mathbb{C}}}$ contains zero or infinity. We say that $U$ is an \textit{escaping Fatou component} of $f$ if $U$ is a component of $F(f)\cap I(f)$.
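The definition of the essential itinerary can be illustrated directly (a sketch, not from the paper): the map below is the Arnol'd family member $f_{0,1}(z)=ze^{(z-1/z)/2}$ from the introduction, for which real starting points $x>1$ escape to infinity and $0<x<1$ escape to zero, realizing the two constant itineraries.

```python
import cmath

def essential_itinerary(f, z, n_symbols=8):
    """First n symbols e_n: 0 if |f^n(z)| <= 1, 'inf' otherwise."""
    symbols = []
    for _ in range(n_symbols):
        symbols.append(0 if abs(z) <= 1 else 'inf')
        try:
            z = f(z)
        except OverflowError:        # orbit left floating-point range near infinity
            z = complex(1e308)
        except ZeroDivisionError:    # orbit hit 0.0 by underflow near zero
            z = complex(1e-308)
    return symbols

# Arnol'd family member f_{0,1}(z) = z exp((z - 1/z)/2).
f = lambda z: z * cmath.exp((z - 1/z) / 2)
print(essential_itinerary(f, 2 + 0j))    # constant itinerary at infinity
print(essential_itinerary(f, 0.5 + 0j))  # constant itinerary at zero
```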
As usual, the set of singularities of the inverse function, $\mbox{sing}(f^{-1})$, which consists of the critical values and the finite asymptotic values of $f$, plays an important role in the dynamics of $f$. In \cite{fagella-martipete} we studied the class
$$
\mathcal B^*:=\{f \mbox{ transcendental self-map of } {\mathbb{C}^*}\ :\ \mbox{sing}(f^{-1}) \mbox{ is bounded away from } 0,\infty\},
$$
which is the analogue of the Eremenko-Lyubich class~$\mathcal{B}$ considered in \cite{eremenko-lyubich92}. We proved that if $f\in \mathcal B^*$, then $I(f)\subseteq J(f)$ or, in other words, functions in the class~$\mathcal{B}^*$ have no escaping Fatou components.
In this paper, we are concerned with transcendental self-maps of ${\mathbb{C}}^*$ that have escaping Fatou components. By normality, if $U$ is a Fatou component of a transcendental self-map $f$ of ${\mathbb{C}}^*$ and $U\cap I(f)\neq\emptyset$, then $U\subseteq I(f)$. Moreover, note that every pair of points in an escaping Fatou component $U$ have, eventually, the same essential itinerary, and hence we can associate an essential itinerary to $U$, which is unique up to equivalence. We mentioned before that $I_e(f)\cap J(f)\neq \emptyset$ for each sequence $e\in\{0,\infty\}^{{\mathbb{N}}_0}$ (see \cite[Theorem~1.1]{martipete1}). Therefore it is a natural question whether for each $e\in\{0,\infty\}^{{\mathbb{N}}_0}$, there exists a transcendental self-map of~${\mathbb{C}}^*$ with a Fatou component in $I_e(f)$.
For both transcendental entire functions and transcendental self-maps of~${\mathbb{C}}^*$, escaping Fatou components can be classified into the following two kinds: let $U$ be a Fatou component of $f$ and denote by $U_n$, $n\in\mathbb{N}$, the Fatou component of $f$ that contains $f^n(U)$; then we say that\vspace{-1pt}
\begin{itemize}
\item $U$ is a \textit{wandering domain} if $U_m\cap U_n=\emptyset$ for all $m,n\in{\mathbb{N}}$ such that $m\neq n$,\vspace*{1pt}
\item $U$ is a \textit{Baker domain} (or a preimage of it) if $U\subseteq I(f)$ and $U$ is (\textit{pre})\textit{periodic}, that is, $U_{p+m}=U_m$ for some $p\in \mathbb{N}$, the \textit{period} of $U$, and $m=0$ ($m>0$).\vspace{-1pt}
\end{itemize}
Note that not all wandering domains are in $I(f)$. For instance, Bishop~\cite[Theorem~17.1]{bishop15} constructed an entire function in the class $\mathcal B$ with a wandering domain whose orbit is unbounded but does not escape.
The first example of a transcendental entire function with a wandering domain was given by Baker \cite{baker63, baker76} and was an infinite product that had a sequence of multiply connected Fatou components escaping to infinity; see \cite{bergweiler-rippon-stallard13} for a detailed study of the properties of such functions. For holomorphic self-maps of ${\mathbb{C}}^*$, Baker \cite{baker87} showed that all Fatou components, except possibly one, are simply connected, and hence this kind of wandering domains cannot occur. Further examples of simply connected wandering domains of entire functions are due, for example, to Herman \cite[Example~2]{baker84} or Baker \cite[Example~5.3]{baker84}.
Baker \cite{baker87} also constructed the first holomorphic self-map of ${\mathbb{C}}^*$ (which is entire) with a wandering domain that escapes to infinity. The first examples of trans\-cendental self-maps of ${\mathbb{C}}^*$ with a wandering domain are due to Kotus \cite{kotus90}, where the wandering domain accumulates to zero, infinity or both of them. In the same paper, Kotus also constructed an example with an infinite limit set (by adapting the techniques from \cite{eremenko-lyubich92}). Mukhamedshin \cite{mukhamedshin91} used quasiconformal surgery to create a trans\-cendental self-map of~${\mathbb{C}}^*$ with a Herman ring and two wandering domains, one escaping to zero and the other one to infinity. Finally, Baker and Dom\'inguez \cite[Theorem~6]{baker-dominguez98} gave an example of a doubly connected wandering domain that is relatively compact in~${\mathbb{C}}^*$ and all of whose images are simply connected and escape to infinity.
In our notation, all the previous examples of wandering domains of transcendental self-maps of ${\mathbb{C}}^*$ had essential itinerary $e\in\{\overline{\infty},\overline{0}, \overline{\infty 0}\}$, where $\overline{e_1e_2\hdots e_p}$ denotes the $p$-periodic sequence that repeats $e_1e_2\hdots e_p$. The following result provides examples of transcendental self-maps of ${\mathbb{C}}^*$ that have a wandering domain with any prescribed essential itinerary $e\in\{0,\infty\}^{{\mathbb{N}}_0}$.
\begin{thm}
\label{thm:wandering-domains}
For each sequence $e\in\{0,\infty\}^{{\mathbb{N}}_0}$ and $n\in{\mathbb{Z}}$, there exists a trans\-cendental self-map $f$ of ${\mathbb{C}}^*$ such that $\textup{ind}(f)=n$ and the set $I_e(f)$ contains a wandering domain.
\end{thm}
Observe that, in particular, in Theorem~\ref{thm:wandering-domains} we obtain functions with wandering domains whose essential itinerary is not necessarily a periodic sequence.
The other type of escaping Fatou component is a Baker domain. The first example of a transcendental entire function with a Baker domain was already given by Fatou \cite{fatou26}: $f(z)=z+1+e^{-z}$. See \cite{rippon08} for a survey on Baker domains.
A result of Cowen \cite{cowen81} on holomorphic self-maps of $\mathbb D$ whose Denjoy-Wolff point lies on $\partial \mathbb D$ led to the following classification of Baker domains by Fagella and Henriksen \cite{fagella-henriksen06}, where $U/f$ is the Riemann surface obtained by identifying points of $U$ that belong to the same orbit under $f$:
\begin{itemize}
\item a Baker domain $U$ is \textit{hyperbolic} if $U/f$ is conformally equivalent to $\{z\in{\mathbb{C}}\ :$\linebreak $-s<\Im z<s\}/{\mathbb{Z}}$ for some $s>0$;
\item a Baker domain $U$ is \textit{simply parabolic} if $U/f$ is conformally equivalent to $\{z\in{\mathbb{C}}\ :\ \Im z>0\}/{\mathbb{Z}}$;
\item a Baker domain $U$ is \textit{doubly parabolic} if $U/f$ is conformally equivalent to ${\mathbb{C}}/{\mathbb{Z}}$.
\end{itemize}
Note that this classification does not require $f$ to be entire and is valid also for Baker domains of transcendental self-maps of ${\mathbb{C}}^*$. K\"onig \cite{koenig99} provided a geometric characterisation for each of these types (see Lemma~\ref{lem:bd-koenig}). It is known that if $U$ is a doubly parabolic Baker domain, then $f_{|U}$ is not univalent, but if $U$ is a hyperbolic or simply parabolic Baker domain, then $f_{|U}$ can be either univalent or multivalent. Several examples of each type have been constructed, and recently Bergweiler and Zheng completed the table of examples by constructing a transcendental entire function with a simply parabolic Baker domain in which the function is not univalent \cite[Theorem~1.1]{bergweiler-zheng12}.
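To illustrate how this characterisation is applied, here is a short standard computation (sketched here for illustration; it is not taken from the surrounding text) showing that the Baker domain of Fatou's example $f(z)=z+1+e^{-z}$ is doubly parabolic:

```latex
% For f(z) = z + 1 + e^{-z} one has
%   \Re f(z) \geqslant \Re z + 1 - e^{-\Re z} > \Re z   for   \Re z > 0,
% so the half-plane \{ \Re z > 0 \} lies in an invariant Baker domain U.
% For z_0 in this half-plane,
%   |f^{n+1}(z_0) - f^n(z_0)| = |1 + e^{-f^n(z_0)}| \leqslant 2,
% while \textup{dist}(f^n(z_0), \partial U) \geqslant \Re f^n(z_0) \to \infty,
% hence c_n \to 0 and U is doubly parabolic by case (c) of Lemma~\ref{lem:bd-koenig}.
```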
The only previous examples of Baker domains of transcendental self-maps of~${\mathbb{C}}^*$ that the author is aware of are due to Kotus \cite{kotus90}. She used approximation \mbox{theory} to construct two functions with invariant hyperbolic Baker domains whose points escape to zero and to infinity respectively. The following theorem provides functions with Baker domains that have any periodic essential itinerary $e\in\{0,\infty\}^{{\mathbb{N}}_0}$ and, in particular, Baker domains whose points accumulate to both zero and infinity.
\begin{thm}
\label{thm:baker-domains}
For each periodic sequence $e\in\{0,\infty\}^{{\mathbb{N}}_0}$ and $n\in{\mathbb{Z}}$, there exists a trans\-cendental self-map $f$ of ${\mathbb{C}}^*$ such that $\textup{ind}(f)=n$ and $I_e(f)$ contains a hyperbolic Baker domain.
\end{thm}
\begin{rmk}
We observe that our method can be modified to produce doubly parabolic Baker domains as well. However, the construction of simply parabolic Baker domains using approximation theory seems more difficult.
\end{rmk}
We also give the first explicit examples of transcendental self-maps of ${\mathbb{C}}^*$ with wandering domains and Baker domains. They all have the property that in a neighbourhood of infinity they behave like known examples of transcendental entire functions with wandering domains and Baker domains.
\begin{ex}
\label{ex:main-ex}
The following transcendental self-maps of ${\mathbb{C}}^*$ have escaping Fatou components:
\begin{enumerate}
\item[(i)] The function $f(z)=z\exp\left(\frac{\sin z}{z}+\frac{2\pi}{z}\right)$ has a bounded wandering domain that escapes to infinity (see Example~\ref{ex:wand-domain}).
\item[(ii)] The function $f(z)=2z\exp\bigl(\exp(-z)+1/z\bigr)$ has an invariant hyperbolic Baker domain that contains a right half-plane and whose points escape to infinity (see Example~\ref{ex:hyp-baker-domain}).
\item[(iii)] The function $f(z)=z\exp\left((e^{-z}+1)/z\right)$ has an invariant doubly parabolic Baker domain that contains a right half-plane and whose points escape to infinity (see Example~\ref{ex:dpar-baker-domain}).
\end{enumerate}
\end{ex}
It seems hard to find explicit examples of functions with Baker domains and wandering domains with any given essential itinerary, but it would be interesting to have a concrete example of a function with an escaping Fatou component that accumulates to both zero and infinity. It also seems difficult to find explicit examples of functions with simply parabolic Baker domains.
We remark that in order to show that the function from Example~\ref{ex:main-ex}~(i) has a simply connected escaping wandering domain we introduced a new criterion (see Lemma~\ref{lem:Julia-in-annulus}) which is of more general interest.
Let $f$ be a transcendental self-map of ${\mathbb{C}}^*$, then there exists a transcendental entire function $\tilde{f}$ such that $\exp \circ \,\tilde{f}=f\circ \exp$; we call $\tilde{f}$ a \textit{lift} of $f$. If the function $f$ has a wandering domain, then $\tilde{f}$ has a wandering domain, while if $f$ has a Baker domain, then $\tilde{f}$ has either a Baker domain (of the same type) or a wandering domain; see Lemmas~\ref{lem:semiconj-wd}~and~\ref{lem:semiconj-bd}.
It is important that in both Theorems~\ref{thm:wandering-domains}~and~\ref{thm:baker-domains} we can choose the index of the function since, for example, if $\textup{ind}(f)\neq 1$, then $f$ does not have Herman rings. In \cite{martipete4} the author compares the escaping set of $f$ with that of a lift $\tilde{f}$ of $f$ according to $\textup{ind}(f)$.
Finally, observe that our constructions using approximation theory can also produce holomorphic self-maps of ${\mathbb{C}}^*$ of the form $f(z)=z^n\exp(g(z))$, with $n\in{\mathbb{Z}}$ and $g$ a non-constant entire function. In particular, they can provide new examples of transcendental entire functions with no zeros in ${\mathbb{C}}^*$ that have wandering domains and Baker domains.
\vspace{5pt}
\noindent
\textbf{Structure of the paper.} In Sections 2 and 3 we prove that the functions from Example~\ref{ex:main-ex} have the properties that we state. In Section 4 we introduce the tools from approximation theory that we will use in Sections 5 and 6 to construct functions with escaping wandering domains and Baker domains respectively. Theorem~\ref{thm:wandering-domains} is proved in Section 5 and Theorem~\ref{thm:baker-domains} is proved in Section 6.
\vspace{5pt}
\noindent
\textbf{Notation.} In this paper ${\mathbb{N}}_0={\mathbb{N}}\cup\{0\}=\{0,1,2,\hdots\}$ and, for $z_0\in {\mathbb{C}}$ and $r>0$, we define
$$
D(z_0,r):=\{z\in{\mathbb{C}}\ :\ |z-z_0|<r\},\quad \mathbb H_r:=\{z\in{\mathbb{C}}\ :\ \Re z>r\}.
$$
\vspace{10pt}
\noindent
\textbf{Acknowledgments.} The author would like to thank his supervisors Phil Rippon and Gwyneth Stallard for their support and guidance in the preparation of this paper.
\section{Explicit functions with wandering domains}
\label{sec:explicit-wd}
As mentioned in the introduction, the author is not aware of any previous explicit examples of transcendental self-maps of ${\mathbb{C}}^*$ with wandering domains or Baker domains, as all such functions were constructed using approximation theory.
Kotus \cite{kotus90} showed that transcendental self-maps of ${\mathbb{C}}^*$ can have escaping wandering domains by constructing examples of such functions using approximation theory. Here we give an explicit example of such a function by modifying a transcendental entire function that has a wandering domain.
\begin{ex}
The function $f(z)=z\exp\bigl(\frac{\sin z}{z}+\frac{2\pi}{z}\bigr)$ is a transcendental self-map of~${\mathbb{C}}^*$ which has a bounded simply connected wandering domain that escapes to infi\-nity (see Figure~\ref{fig:wand-domain}).
\label{ex:wand-domain}
\end{ex}
\begin{figure}[h!]
\includegraphics[width=.49\linewidth]{002-dot.png}
\includegraphics[width=.49\linewidth]{wd001-12-2.png}
\caption[Phase space of a transcendental self-map of ${\mathbb{C}}^*$ which has a wandering domain]{Phase space of the function $f(z)=z\exp\left(\frac{\sin z}{z}+\frac{2\pi}{z}\right)$ from Example~\ref{ex:wand-domain} which has a wandering domain. On the right, the wandering domain for large values of $\textup{Re}\, z$.}
\label{fig:wand-domain}
\end{figure}
Baker \cite[Example~5.3]{baker84} (see also \cite[Example~2]{rippon-stallard08}) studied the dynamics of the trans\-cendental entire function $f_1(z)=z+\sin z+2\pi$, which has a wandering domain, containing the point $z=\pi$, that escapes to infinity. Observe that the function $f$ from Example~\ref{ex:wand-domain} satisfies
\begin{equation}
f(z)=z+\sin z+2\pi+o(1)\quad \mbox{ as } \mbox{Re}\,z\rightarrow +\infty
\label{eq:ex-wand-domain}
\end{equation}
in a horizontal band defined by $|\textup{Im}\, z|<K$ for some $K>0$.
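This asymptotic behaviour is easy to check numerically. The following Python snippet (an illustration added here; it is not part of the argument, and the sample points and error bound are arbitrary choices) compares $f$ with $z+\sin z+2\pi$ at a few points of a horizontal band:

```python
import cmath

def f(z):
    # The map f(z) = z * exp(sin(z)/z + 2*pi/z) from Example ex:wand-domain.
    return z * cmath.exp((cmath.sin(z) + 2 * cmath.pi) / z)

def approx(z):
    # The entire function z + sin(z) + 2*pi that f approximates for large Re(z).
    return z + cmath.sin(z) + 2 * cmath.pi

# In the band |Im z| < 1 the error f(z) - approx(z) decays like O(1/Re z).
for x in (100.0, 1000.0, 10000.0):
    for y in (-0.5, 0.0, 0.5):
        z = complex(x, y)
        assert abs(f(z) - approx(z)) < 50.0 / x, (z, abs(f(z) - approx(z)))
```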
We first prove a general result which gives a sufficient condition for a function to have a bounded wandering domain (see Figure~\ref{fig:annuli-lemma}), using some of the ideas from \cite[Lemma~7(c)]{rippon-stallard08}. Given a doubly connected open set $A\subseteq {\mathbb{C}}$, we define the \textit{inner boundary}, $\partial_\textup{in}A$, and the \textit{outer boundary}, $\partial_\textup{out}A$, of $A$ to be the boundary of the bounded and unbounded complementary components of $A$, respectively.
\begin{lem}
Let $f$ be a function that is holomorphic on ${\mathbb{C}}^*$, let $M$ be an affine map, let $A$ be a doubly connected closed set in ${\mathbb{C}}^*$ with bounded complementary component $B$, and let $C\subseteq B$ be compact. Put
$$
A_n:=M^n(A),\quad B_n:=M^n(B)\quad \mbox{ and }\quad C_n:=M^n(C)\quad \mbox{ for } n\in{\mathbb{N}}_0,
$$
and suppose that
\begin{itemize}
\item $A_n\cup B_n\subseteq {\mathbb{C}}^*$ for $n\in{\mathbb{N}}_0$,
\item the sets $\{B_n\}_{n\in{\mathbb{N}}_0}$ are pairwise disjoint,
\item $f(\partial_\textup{in}\,A_n)\subseteq C_{n+1}$ for $n\in{\mathbb{N}}_0$,
\item $f(\partial_\textup{out}\,A_n)\subseteq {\mathbb{C}}^*\setminus (A_{n+1}\cup B_{n+1})$ for $n\in{\mathbb{N}}_0$.
\end{itemize}
Then $f$ has bounded simply connected wandering domains $\{U_n\}_{n\in{\mathbb{N}}_0}$ such that
$$
\partial_\textup{in}\,A_n\subseteq U_n \quad \mbox{ and } \quad \partial U_n \subseteq A_n \quad \mbox{ for } n\in{\mathbb{N}}_0.
$$
\label{lem:Julia-in-annulus}
\end{lem}
\begin{figure}[h!]
\centering
\vspace*{-10pt}
\def\svgwidth{\linewidth}
\input{annuli-lemma.pdf_tex}
\vspace{15pt}
\caption[Sketch of a construction which implies that a function has a wandering domain]{Sketch of the construction in Lemma \ref{lem:Julia-in-annulus}.}
\label{fig:annuli-lemma}
\end{figure}
In order to prove this lemma, we first need the following result on limit functions of holomorphic iterated function systems by Keen and Lakic \cite[Theorem~1]{keen-lakic03}.
\begin{lem}
Let $X$ be a subdomain of the unit disc $\mathbb D$. Then all limit functions of any sequence of functions $(F_n)$ of the form
$$
F_n:=f_n\circ f_{n-1}\circ \cdots \circ f_2\circ f_1\quad \mbox{ for } n\in{\mathbb{N}},
$$
where $f_n:\mathbb D\to X$ is a holomorphic function for all $n\in {\mathbb{N}}$, are constant functions in $\overline{X}$ if and only if $X\neq\mathbb D$.
\label{lem:keen-lakic}
\end{lem}
We now proceed to prove Lemma~\ref{lem:Julia-in-annulus}.
\begin{proof}[Proof of Lemma~\ref{lem:Julia-in-annulus}]
Since $f(\overline{B_n})\subseteq C_{n+1}\subseteq B_{n+1}$, the iterates of $f$ on each set $\overline{B_n}$ omit more than three points and hence, by Montel's theorem, the sets $\{\overline{B_n}\}_{n\in{\mathbb{N}}_0}$ are all contained in $F(f)$. For $n\in {\mathbb{N}}_0$, let $U_n$ denote the Fatou component of $f$ that contains $\overline{B_n}$. We now show that the functions
$$
\Phi_{k}(z):=M^{-k}(f^k(z))\quad \mbox{ for } k\in{\mathbb{N}}_0,
$$
form a normal family in $U_n$ for all $n\in{\mathbb{N}}_0$.
Suppose first that the Fatou components $\{U_n\}_{n\in{\mathbb{N}}_0}$ are not distinct. Then there are two sets $B_m$ and $B_{m+p}$ with $m\in{\mathbb{N}}_0$ and $p>0$ which lie in the same Fatou component $U_m=U_{m+p}$. Then, since $f^p(B_m)\subseteq B_{m+p}$ and $B_n\to \infty$ as $n\to \infty$, $U_m$ must be periodic and in $I(f)$, and hence a Baker domain.
Let $z_m\in B_m$ and let $K$ be any compact connected subset of $U_m$ such that $K\supseteq B_m$. Then by Baker's distortion lemma (see \cite[Lemma~6.2]{martipete1} or \cite[Lemma~2.22]{martipete} for a proof of the version of this result that we use here), there exist constants \mbox{$C(K)>1$} and $n_0\in{\mathbb{N}}_0$ such that
$$
|f^k(z)|\leqslant C(K) |f^k(z_m)|\quad \mbox{ for } z\in K,\ k\geqslant n_0.
$$
Since $M$, and hence $M^{-k}$, is an affine transformation, $M^{-k}$ preserves the ratios of distances, so
$$
|\Phi_k(z)|=|M^{-k}(f^k(z))|\leqslant C(K)|M^{-k}(f^k(z_m))|=C(K)|z_m'|
$$
where $z_m'\in B_m$ satisfies $M^k(z_m')=f^k(z_m)$. Hence the family $\{\Phi_k\}_{k\in {\mathbb{N}}_0}$ is locally uniformly bounded on $U_m$, and hence is normal on $U_m$.
Suppose next that the Fatou components $\{U_n\}_{n\in{\mathbb{N}}_0}$ are disjoint. In this case we consider the sequence of functions
$$
\varphi_k(z):=M^{-(k+1)}(f(M^k(z)))\quad \mbox{ for } k\in{\mathbb{N}}_0,
$$
which are defined on $U_n$, for $n\in{\mathbb{N}}_0$. Then
\begin{equation}
\Phi_k(z) = (\varphi_{k-1} \circ \cdots \circ \varphi_1 \circ \varphi_0)(z) = M^{-k}(f^k(z))\quad \mbox{ for } k\in{\mathbb{N}}_0.
\label{eq:annuli-lem-1}
\end{equation}
Since the Fatou components $\{U_n\}_{n\in{\mathbb{N}}_0}$ are pairwise disjoint and
$$
f^k(U_n)\subseteq U_{n+k},
$$
we deduce that
$$
f^k(U_n)\cap B_{n+k+1}=\emptyset
$$
and hence
$$
\Phi_k(U_n)\cap B_{n+1}=\emptyset\quad \mbox{ for } k,n\in{\mathbb{N}}_0.
$$
Thus $\{\Phi_k\}_{k\in{\mathbb{N}}_0}$ is normal on each $U_n$, by Montel's theorem, as required.
Now take $n\in{\mathbb{N}}_0$, and let $\{\Phi_{k_j}\}_{j\in{\mathbb{N}}_0}$ be a locally uniformly convergent subsequence of $\{\Phi_k\}_{k\in{\mathbb{N}}_0}$ on $B_n$. Note that
$$
M^k(B_n)=B_{n+k} \quad \mbox{ so } \quad f(M^k(B_n))\subseteq C_{n+k+1}
$$
and hence, for $k\in {\mathbb{N}}_0$,
$$
\varphi_k(B_n)=M^{-(k+1)}(f(M^k(B_n)))\subseteq M^{-(k+1)}(C_{n+k+1})=C_n.
$$
We can now apply Lemma~\ref{lem:keen-lakic}, after a Riemann mapping from $B_n$ to the open unit disc $\mathbb D$, to deduce from \eqref{eq:annuli-lem-1} that there exists $\alpha_n\in \overline{B_n}$ such that, for all $z\in U_n$,
$$
\Phi_{k_j}(z)\to\alpha_n\quad \mbox{ as } j\to\infty.
$$
To complete the proof that $U_n$ is bounded by $\partial_\textup{out}\,A_n$ for all $n\in{\mathbb{N}}$, suppose to the contrary that there is a point $z_0\in\partial_\textup{out}\,A_n$ that lies in $U_n$ for some $n\in{\mathbb{N}}$. Let $\gamma\subseteq U_n$ be a curve that joins $z_0$ to a point $z_1\in B_n$. Since $\gamma$ is compact, $\Phi_{k_j}(\gamma)\to \alpha_n$ as $j\to\infty$, which contradicts the fact that $f^k(\gamma)\cap \partial_\textup{out}\,A_{n+k}\neq\emptyset$ for all $k\in{\mathbb{N}}$ (this follows from the hypothesis that $f(\partial_\textup{out}\,A_n)\subseteq (A_{n+1}\cup B_{n+1})^c$ for $n\in{\mathbb{N}}_0$). Thus, $\partial U_n\subseteq A_n$ for all $n\in{\mathbb{N}}$, and so the proof is complete.
\end{proof}
We now use Lemma~\ref{lem:Julia-in-annulus} to show that the function $f$ from Example~\ref{ex:wand-domain} has a bounded wandering domain that escapes to infinity along the positive real axis.
\begin{proof}[Proof of Example~\ref{ex:wand-domain}]
The transcendental entire function $g(z)=z+\sin z$ has\linebreak superattracting fixed points at the odd multiples of $\pi$. For $n\in{\mathbb{N}}_0$, take\linebreak \mbox{$B_n := D((2n+1)\pi,r)$} and $C_n:=D((2n+1)\pi,r/2)$ for some $r>0$ sufficiently small that \mbox{$g(B_n)\subseteq C_{n}$} and put
$$
R_n:= \{z\in {\mathbb{C}}\ :\ |\textup{Re}\, z-(2n+1)\pi|\leqslant 3\pi/2,\ |\textup{Im}\, z|\leqslant 3\}.
$$
It follows from a straightforward computation that $g(\partial R_n)\subseteq R_n^c$ for all $n\in{\mathbb{N}}_0$ (see Figure~\ref{fig:spiral}).
\begin{figure}[h!]
\centering
\def\svgwidth{.65\linewidth}
\input{spiral-final2.pdf_tex}
\caption[Image of a rectangle by the map~\mbox{$g(z)=z+\sin z$}]{Rectangle $R_0$ and its image under $g(z)=z+\sin z$.}
\label{fig:spiral}
\end{figure}
Then, by \eqref{eq:ex-wand-domain}, there exists $N\in{\mathbb{N}}_0$ such that $f(B_n)\subseteq C_{n+1}$ and $f(\partial R_n)\subseteq R_{n+1}^c$ for all $n>N$. Thus, we can apply Lemma~\ref{lem:Julia-in-annulus} to $f$ with $M(z)=z+2\pi$ and $A_n:=R_n\setminus B_n$ for $n>N$ and conclude that the function $f$ has wandering domains $U_n$ that contain $B_n$ and whose boundary is contained in $R_n$.
\end{proof}
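The proof can also be illustrated numerically: far enough to the right, an orbit of $f$ started at a centre $(2n+1)\pi$ tracks the translated centres closely. A Python sketch (an added illustration, not part of the proof; the starting index $n=100$ and the error bound $0.1$ are arbitrary choices):

```python
import math

def f(x):
    # f(x) = x * exp((sin(x) + 2*pi)/x); maps the positive real axis to itself.
    return x * math.exp((math.sin(x) + 2 * math.pi) / x)

x = 201 * math.pi          # centre of B_100, i.e. (2*100 + 1)*pi
for k in range(1, 51):
    x = f(x)
    centre = (201 + 2 * k) * math.pi
    # the orbit stays close to the centres of the translated discs B_{100+k}
    assert abs(x - centre) < 0.1, (k, x - centre)
```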
The next lemma relates the wandering domains of a transcendental self-map of~${\mathbb{C}}^*$ and a lift of it.
\begin{lem}
Let $f$ be a transcendental self-map of ${\mathbb{C}}^*$ and let $\tilde{f}$ be a lift of $f$. Then, if $U$ is a wandering domain of $f$, every component of $\exp^{-1}(U)$ is a wandering domain of $\tilde{f}$ which must be simply connected.
\label{lem:semiconj-wd}
\end{lem}
\begin{proof}
By a result of Bergweiler \cite{bergweiler95}, every component of $\exp^{-1}(U)$ is a Fatou component of $\tilde{f}$. Let $U_0$ be a component of $\exp^{-1}(U)$ and suppose to the contrary that there exist $m,n\in{\mathbb{N}}_0$, $m\neq n$, and a point $z_0\in\tilde{f}^m(U_0)\cap \tilde{f}^n(U_0)$. Then, there exist points $z_1,z_2\in U_0$ such that \vspace*{-5pt}
$$
f^m(e^{z_1})=\exp \tilde{f}^m(z_1)=\exp z_0=\exp \tilde{f}^n(z_2)=f^n(e^{z_2}).\vspace*{-5pt}
$$
Since $e^{z_1},e^{z_2}\in U$, this contradicts the assumption that $U$ is a wandering domain of $f$. Hence $U_0$ is a wandering domain of $\tilde{f}$.
Finally, by \cite[Theorem~1]{baker87}, the Fatou component $U$ is either simply connected or doubly connected and surrounds the origin. Since the exponential function is periodic, taking a suitable branch of the logarithm one can show that the components of $\exp^{-1}(U)$ are simply connected.
\end{proof}
\begin{rmk}
Observe that the converse of Lemma~\ref{lem:semiconj-wd} does not hold. If $f$ is a transcendental self-map of ${\mathbb{C}}^*$ with an attracting fixed point $z_0$ and $A$ is the immediate basin of attraction of $z_0$, then there is a lift $\tilde{f}$ of $f$ such that a component of $\exp^{-1}(A)$ is a wandering domain.
\end{rmk}
If a transcendental self-map of ${\mathbb{C}}^*$ has an escaping wandering domain, then we can use the previous lemma to obtain automatically an example of a transcendental entire function with an escaping wandering domain.
\begin{ex}
The transcendental entire function $\tilde{f}(z)=z+\frac{\sin e^z}{e^z}+\frac{2\pi}{e^z}$, which is a lift of the function $f$ from Example~\ref{ex:wand-domain}, has infinitely many grand orbits of bounded wandering domains that escape to infinity.
\end{ex}
\section{Explicit functions with Baker domains}
\label{sec:explicit-bd}
We now turn our attention to Baker domains. As we mentioned in the introduction, Baker domains can be classified into hyperbolic, simply parabolic and doubly parabolic according to the Riemann surface $U/f$ obtained by identifying the points of the Baker domain $U$ that belong to the same orbit under iteration by the function $f$. K\"onig \cite{koenig99} introduced the following notation.
\begin{dfn}[Conformal conjugacy]
Let $U\subseteq {\mathbb{C}}$ be a domain and let \mbox{$f:U\to U$} be analytic. Then a domain $V\subseteq U$ is \textit{absorbing} (or \textit{fundamental}) for $f$ if $V$ is simply connected, $f(V)\subseteq V$ and for each compact set $K\subseteq U$, there exists $N=N_K$ such that $f^N(K)\subseteq V$.
Let $\mathbb H:=\{z\in{\mathbb{C}}\ :\ \textup{Re}\, z>0\}$. The triple $(V,\phi,T)$ is called a \textit{conformal conjugacy} (or \textit{eventual conjugacy}) of $f$ in $U$ if
\begin{enumerate}
\item[(a)] $V$ is absorbing for $f$;
\item[(b)] $\phi:U\to \Omega\in\{\mathbb H, {\mathbb{C}}\}$ is analytic and univalent in $V$;
\item[(c)] $T:\Omega\to\Omega$ is a bijection and $\phi(V)$ is absorbing for $T$;
\item[(d)] $\phi(f(z))=T(\phi(z))$ for $z\in U$.
\end{enumerate}
In this situation we write $f\sim T$.
\end{dfn}
Observe that properties (b) and (d) imply that $f$ is univalent in $V$. K\"onig also provided the following geometrical characterization of the three types of Baker domains \cite[Theorem~3]{koenig99}. This characterisation is also valid for any simply connected Baker domain of a transcendental self-map of ${\mathbb{C}}^*$.
\begin{lem}
Let $U$ be a $p$-periodic Baker domain of a meromorphic function~$f$ in which $f^{np}\to\infty$ and on which $f^p$ has a conformal conjugacy. For $z_0\in U$, put
$$
c_n=c_n(z_0):=\frac{|f^{(n+1)p}(z_0)-f^{np}(z_0)|}{\textup{dist}(f^{np}(z_0),\partial U)}.
$$
Then exactly one of the following cases holds:
\begin{enumerate}
\item[(a)] $U$ is hyperbolic and $f^p\sim T(z)=\lambda z$ with $\lambda>1$, which is \mbox{equivalent~to}
$$
c_n>c\quad \mbox{ for } z_0\in U,\ n\in {\mathbb{N}},\quad \mbox{ where } c=c(f)>0.
$$
\item[(b)] $U$ is simply parabolic and $f^p\sim T(z)=z\pm i$, which is equivalent to
$$
\liminf_{n\to\infty} c_n>0 \quad \mbox{ for } z_0\in U,\quad \mbox{ but } \inf_{z_0\in U}\limsup_{n\to\infty} c_n=0;
$$
\item[(c)] $U$ is doubly parabolic and $f^p\sim T(z)=z+1$, which is equivalent to
$$
\lim_{n\to\infty} c_n=0\quad \mbox{ for } z_0\in U.
$$
\end{enumerate}
\label{lem:bd-koenig}
\end{lem}
\begin{figure}[h!]
\centering
\def\svgwidth{\linewidth}
\input{bd-types1.pdf_tex}
\hspace*{17pt}(a) $U$ hyperbolic\hspace*{14pt} (b) $U$\! simply\! parabolic \hspace*{3pt} (c) $U$\! doubly\! parabolic
\caption[Classification of Baker domains with their absorbing domains]{Classification of Baker domains with their absorbing domains.}
\label{fig:bd-types}
\end{figure}
We now give a couple of explicit examples of transcendental self-maps of ${\mathbb{C}}^*$, with a hyperbolic and a doubly parabolic Baker domain, respectively.
\begin{ex}
For every $\lambda>1$, the function $f_\lambda(z)=\lambda z\exp(e^{-z}+1/z)$ is a transcendental self-map of ${\mathbb{C}}^*$ which has an invariant, simply connected, hyperbolic Baker domain $U\subseteq {\mathbb{C}}^*\setminus {\mathbb{R}}_-$ whose boundary contains both zero and infinity, and the points in~$U$ escape to infinity (see Figure~\ref{fig:hyp-baker-domain}).
\label{ex:hyp-baker-domain}
\end{ex}
\begin{proof}[Proof of Example~\ref{ex:hyp-baker-domain}]
First observe that
\begin{equation}
\begin{array}{rl}
f_\lambda(z)\hspace*{-8pt} & =\lambda z\exp\left(e^{-z}+\tfrac{1}{z}\right)\vspace{5pt}\\
& = \lambda z\left(1+e^{-z}+\tfrac{1}{2!}e^{-2z}+\cdots\right)\left(1+\tfrac{1}{z}+\tfrac{1}{2!}\tfrac{1}{z^2}+\cdots \right)\vspace{5pt}\\
& = \lambda z\left(1+O\left(\tfrac{1}{z}\right)\right) \mbox{ as } \textup{Re}\,z\to\infty.
\end{array}
\label{eq:ex-bd-1}
\end{equation}
Hence $f_\lambda$ maps $\mathbb H_R:=\{z\in{\mathbb{C}}\ :\ \textup{Re}\,z>R\}$ into itself, for $R>0$ sufficiently large, so $\mathbb H_R\subseteq U$, where $U$ is an invariant Fatou component of $f_\lambda$. Also, for real $x>0$,
$$
f_\lambda(x)=\lambda x\exp\left(e^{-x}+\tfrac{1}{x}\right)>\lambda x>x
$$
so $f_\lambda^n(x)\to\infty$ as $n\to \infty$. Thus, $U$ is an invariant Baker domain of $f$ which contains the positive real axis, so $\partial U$ contains zero and infinity.
To show that $U$ is a hyperbolic Baker domain, consider $z_0\in U$. By the contraction property of the hyperbolic metric in $U$, the orbit of $z_0$ escapes to infinity in~$\mathbb H_R$. Hence, by \eqref{eq:ex-bd-1} and since $0\in U^c$,
$$
\begin{array}{rl}
c_n\hspace*{-6pt} &\displaystyle=\frac{|f^{n+1}(z_0)-f^n(z_0)|}{\textup{dist}\,(f^n(z_0),\partial U)}\geqslant \frac{\left|\lambda f^n(z_0)\left(1+O\left(\frac{1}{f^n(z_0)}\right)\right)-f^n(z_0)\right|}{|f^n(z_0)|}\vspace{10pt}\\
&\displaystyle\geqslant\lambda -1 -\frac{O(1)}{|f^n(z_0)|} \mbox{ as } n\to \infty,
\end{array}
$$
so
$$
\liminf_{n\to\infty} c_n\geqslant \lambda-1>0,
$$
and thus the Baker domain $U$ is hyperbolic.
Finally, observe that the negative real axis is invariant under $f$, and therefore $(-\infty,0)\cap U=\emptyset$. Since doubly connected Fatou components must surround zero, $U$ is simply connected.
\end{proof}
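The hyperbolic behaviour is visible numerically on the positive real axis, where the orbit grows geometrically with ratio tending to $\lambda$. A Python sketch for $\lambda=2$ (an added illustration, not part of the proof):

```python
import math

LAM = 2.0  # the parameter lambda > 1; lambda = 2 matches Figure fig:hyp-baker-domain

def f(x):
    # f_lambda(x) = lambda * x * exp(e^{-x} + 1/x), restricted to x > 0
    return LAM * x * math.exp(math.exp(-x) + 1.0 / x)

x = 1.0
ratios = []
for _ in range(40):
    x, prev = f(x), x
    ratios.append(x / prev)

# The orbit escapes geometrically with ratio tending to lambda, consistent
# with c_n >= lambda - 1 + o(1), the hyperbolic case of Lemma bd-koenig.
assert all(r > LAM - 0.1 for r in ratios)
assert abs(ratios[-1] - LAM) < 1e-6
```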
\begin{figure}[h!]
\includegraphics[width=.49\linewidth]{hypbd-01.png}
\includegraphics[width=.49\linewidth]{hypbd-02.png}
\caption[Phase space of a transcendental self-map of ${\mathbb{C}}^*$ which has a hyperbolic Baker domain]{Phase space of the function $f_2(z)=2z\exp(e^{-z}+1/z)$ from Example~\ref{ex:hyp-baker-domain}. On the right, zoom of a neighbourhood of zero.}
\label{fig:hyp-baker-domain}
\end{figure}
The function $f(z)=2 z\exp(e^{-z}+1/z)$ has a repelling fixed point on the negative real axis. If we replace the term $1/z$ by $1/z^2$, then the function $f(z)=2 z\exp(e^{-z}+1/z^2)$ has the positive real axis in a Baker domain while the negative real axis is in the fast escaping set.
We now give a second explicit example of transcendental self-map of ${\mathbb{C}}^*$ with a Baker domain which, in this case, is doubly parabolic.
\begin{ex}
The function $f(z)=z\exp\left((e^{-z}+1)/z\right)$ is a transcendental self-map of ${\mathbb{C}}^*$ which has an invariant, simply connected, doubly parabolic Baker domain $U\subseteq {\mathbb{C}}^*\setminus {\mathbb{R}}_-$ whose boundary contains both zero and infinity, and the points in $U$ escape to infinity (see Figure~\ref{fig:dpar-baker-domain}). \vspace*{-10pt}
\label{ex:dpar-baker-domain}
\end{ex}
\begin{proof}[Proof of Example~\ref{ex:dpar-baker-domain}]
Looking at the power series expansion of $f$, we have
$$
\begin{array}{rl}
f(z)\hspace*{-8pt} & = z\exp\left(\tfrac{e^{-z}}{z}+\tfrac{1}{z}\right)\vspace{5pt}\\
& = z\left(1+\tfrac{e^{-z}}{z}+\tfrac{1}{2!}\tfrac{e^{-2z}}{z^2}+\cdots\right)\left(1+\tfrac{1}{z}+\tfrac{1}{2!}\tfrac{1}{z^2}+\cdots \right)\vspace{5pt}\\
& = z\left(1+\tfrac{1}{z}+O\left(\tfrac{1}{z^2}\right)\right) \mbox{ as } \textup{Re}\,z\to\infty.
\end{array}
$$
Therefore $f$ maps the right half-plane $\mathbb H_R:=\{z\in{\mathbb{C}}\ :\ \textup{Re}\, z>R\}$ into itself for sufficiently large values of $R>0$ and $\mathbb H_R$ is contained in an invariant Baker domain $U$ of $f$, in which $\textup{Re}\,f^n(z)\to+\infty$ as $n\to\infty$. Since $f(x)>x$ for all $x>0$, the positive real axis lies in $U$. Let $z_0\in U$, then
$$
f^{n+1}(z_0)-f^n(z_0)\! =\! f^n(z_0)\! \left(\! 1+O\! \left(\! \frac{1}{f^n(z_0)}\! \right)\! \right)-f^n(z_0)\! =\! O(1)\mbox{ as } n\to \infty
$$
and, if $R$ is as above,
$$
\textup{dist}(f^n(z_0),\partial U)\geqslant \textup{Re}\,f^n(z_0)-R \quad \mbox{ as } n\to \infty,
$$
so
$$
c_n=\frac{|f^{n+1}(z_0)-f^{n}(z_0)|}{\textup{dist}(f^{n}(z_0),\partial U)}\leqslant \frac{O(1)}{\textup{Re}\,f^n(z_0)-R}\to 0 \quad \mbox{ as } n\to\infty.
$$
Thus, by Lemma~\ref{lem:bd-koenig}, the Baker domain $U$ is doubly parabolic.
Finally, observe that, for $x\in (-\infty,0)$, $f^n(x)\to \infty$ along the negative real axis as $n\to\infty$, so $(-\infty,0)\cap U=\emptyset$ and hence $U$ is simply connected.
\end{proof}
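Again the parabolic behaviour can be seen numerically on the positive real axis, where the steps $f^{n+1}(x)-f^n(x)$ tend to $1$ while the orbit drifts to infinity. A Python sketch (an added illustration, not part of the proof):

```python
import math

def f(x):
    # f(x) = x * exp((e^{-x} + 1)/x), restricted to the positive real axis
    return x * math.exp((math.exp(-x) + 1.0) / x)

x = 1.0
for _ in range(200):
    x_prev, x = x, f(x)

# The steps tend to 1, so |f^{n+1} - f^n| = O(1) while the distance to the
# boundary grows, matching c_n -> 0 in the doubly parabolic case of
# Lemma bd-koenig.
assert x > 150.0
assert abs((x - x_prev) - 1.0) < 0.01
```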
\begin{figure}[h!]
\includegraphics[width=.49\linewidth]{dpbd-01.png}
\includegraphics[width=.49\linewidth]{dpbd-02.png}
\caption[\mbox{Phase space of a transcendental self-map of ${\mathbb{C}}^*$} \mbox{which has a doubly parabolic Baker domain}]{Phase space of the function $f(z)=z\exp\left((e^{-z}+1)/z\right)$ from Example~\ref{ex:dpar-baker-domain}. On the right, zoom of a neighbourhood of zero.}
\label{fig:dpar-baker-domain}
\end{figure}
\begin{lem}
Let $f$ be a transcendental self-map of ${\mathbb{C}}^*$ and let $\tilde{f}$ be a lift of $f$. Then, if $U$ is a Baker domain of $f$, every component $U_k,\ k\in{\mathbb{Z}},$ of $\exp^{-1}(U)$ is either a (preimage of a) Baker domain or a wandering domain \mbox{of~$\tilde{f}$}. Moreover, if $U$ is simply connected and $U_k$ is a Baker domain, then $U_k$ is hyperbolic, simply parabolic or doubly parabolic if and only if $U$ is hyperbolic, simply parabolic or doubly parabolic, respectively.
\label{lem:semiconj-bd}
\end{lem}
\begin{proof}
By \cite{bergweiler95}, every component of $\exp^{-1}(U)$ is a Fatou component of $\tilde{f}$. Moreover, since $\exp^{-1}(I(f))\subseteq I(\tilde{f})$, $U_k$ is either a Baker domain, a preimage of a Baker domain or an escaping wandering domain of $\tilde{f}$.
Suppose that $U$ has period $p\geqslant 1$ and $U_k$ is periodic. Then the Baker domain $U_k$ has period $q$ with $p\mid q$. Let $(V,\phi,T)$ be a conformal conjugacy of $f^q$ in $U$. Then $(\tilde{V},\tilde{\phi},T)$ is a conformal conjugacy of $\tilde{f}^q$ in $U_k$, where $\tilde{V}$ is the component of $\exp^{-1}V$ that lies in $U_k$ and $\tilde{\phi}=\phi\circ\exp$. Thus, the Baker domains $U$ and $U_k$ are of the same type.
\end{proof}
As before, we use Lemma~\ref{lem:semiconj-bd} to provide examples of transcendental entire functions with Baker domains and wandering domains.
\begin{ex}
The entire function $\tilde{f}(z)\!=\!\ln \lambda\!+\!z\!+\!\exp(-e^z)\!+\!e^{-z}$, which is a lift of the function $f$ from Example~\ref{ex:hyp-baker-domain}, has an invariant hyperbolic Baker domain that contains the real line.
\end{ex}
\begin{ex}
The entire function $\tilde{f}(z)=z+\frac{\exp(-e^z)}{e^z}+e^{-z}$, which is a lift of the function $f$ from Example~\ref{ex:dpar-baker-domain}, has an invariant doubly parabolic Baker domain that contains the real line.
\end{ex}
\section{Preliminaries on approximation theory} \label{sec:approx-theory}
In this section we state the results from approximation theory that will be used in Sections~\ref{sec:wd} and \ref{sec:bd} to construct examples of functions with wandering domains and Baker domains, respectively. We follow the terminology from \cite[Chapter~IV]{gaier87}, and introduce Weierstrass and Carleman sets. Recall that if $F\subseteq {\mathbb{C}}$ is a closed set, then $A(F)$ denotes the set of continuous functions $f:F\to{\mathbb{C}}$ that are holomorphic in the interior of $F$.
\begin{dfn}[Weierstrass set]
\label{dfn:weierstrass-set}
We say that a closed set $F\subseteq{\mathbb{C}} $ is a \textit{Weierstrass set} in ${\mathbb{C}}$ if each $f\in A(F)$ can be approximated by entire functions \textit{uniformly} on $F$; that is, for every $\varepsilon>0$, there is an entire function $g$ for which
$$
|f(z)-g(z)|<\varepsilon\quad \mbox{ for all } z\in F.
$$
\end{dfn}
The next result is due to Arakelyan and provides a characterisation of Weierstrass sets \cite{arakeljan64}. If $F\subseteq {\mathbb{C}}$ is compact and ${\mathbb{C}}\setminus F$ is connected, then it follows from Mergelyan's theorem \cite[Theorem~1~on~p.~97]{gaier87} that functions in $A(F)$ can be uniformly approximated on $F$ by polynomials.
\begin{lem}[Arakelyan's theorem]
A closed set $F\subseteq {\mathbb{C}}$ is a Weierstrass set if and only if the following two conditions are satisfied:
\begin{enumerate}
\item[\emph{(K$_1$)}] ${\hat{\mathbb{C}}}\setminus F$ is connected;
\item[\emph{(K$_2$)}] ${\hat{\mathbb{C}}}\setminus F$ is locally connected at infinity.
\end{enumerate}
\end{lem}
If in addition both the set $F$ and the function $f\in A(F)$ are symmetric with respect to the real line, then the approximating function $g$ can be chosen to be symmetric as well (see \cite[Section 2]{gauthier13}).
Sometimes we may want to approximate a function in $A(F)$ so that the error is bounded by a given strictly positive function $\varepsilon:{\mathbb{C}}\to{\mathbb{R}}_+$ that is not constant, and $\varepsilon(z)$ may tend to zero as $z\to\infty$.
\begin{dfn}[Carleman set]
\label{dfn:carleman-set}
We say that a closed set $F\subseteq {\mathbb{C}}$ is a \textit{Carleman set} in ${\mathbb{C}}$ if every function $f\in A(F)$ admits \textit{tangential approximation} on $F$ by entire functions; that is, for every strictly positive function $\varepsilon\in\mathcal C(F)$, there is an entire function $g$ for which
$$
|f(z)-g(z)|<\varepsilon(z)\quad \mbox{ for all } z\in F.
$$
\end{dfn}
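The prototypical example of a Carleman set is the real line: a classical theorem of Carleman states that, for every continuous function $f:{\mathbb{R}}\to{\mathbb{C}}$ and every strictly positive continuous function $\varepsilon:{\mathbb{R}}\to{\mathbb{R}}_+$, there is an entire function $g$ for which
$$
|f(x)-g(x)|<\varepsilon(x)\quad \mbox{ for all } x\in {\mathbb{R}};
$$
that is, ${\mathbb{R}}$ is a Carleman set in ${\mathbb{C}}$.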
It is clear that Carleman sets are a special case of Weierstrass sets and hence conditions ($\text{K}_1$) and ($\text{K}_2$) are necessary. Nersesyan's theorem gives sufficient conditions for tangential approximation \cite{nersesjan71}.
\begin{lem}[Nersesyan's theorem]
A closed set $F$ is a Carleman set in~${\mathbb{C}}$ if and only if conditions \emph{(K$_1$)}, \emph{(K$_2$)} and
\begin{enumerate}
\item[\emph{(A)}] for every compact set $K\subseteq {\mathbb{C}}$ there exists a neighbourhood $V$ of infinity in ${\hat{\mathbb{C}}}$ such that no component of $\mbox{int}\, F$ intersects both $K$ and $V$,
\end{enumerate}
are satisfied.
\label{lem:nersesjan}
\end{lem}
Note that there is also a symmetric version of this result: if the set $F$ and the functions $f$ and $\varepsilon$ are in addition symmetric with respect to ${\mathbb{R}}$ then the entire function $g$ can be chosen to be symmetric with respect to ${\mathbb{R}}$ \cite[Section 2]{gauthier13}.
In some cases, depending on the geometry of the set $F$ and the decay of the error function $\varepsilon$, we can perform tangential approximation on Weierstrass sets without needing condition (A); the next result can be found in \cite[Corollary on p.~162]{gaier87}.
\begin{lem}
Suppose $F\subseteq {\mathbb{C}}$ is a closed set satisfying conditions ($\text{K}_1$) and ($\text{K}_2$) that lies in a sector
$$
W_\alpha:=\{z\in {\mathbb{C}}\ :\ |\textup{arg}\,z|\leqslant \alpha/2\},
$$
for some $0<\alpha\leqslant 2\pi$. Suppose $\tilde{\varepsilon}(t)$ is a real function that is continuous and positive for $t\geqslant 0$ and satisfies
$$
\int_1^{+\infty} t^{-(\pi/\alpha)-1}\log\tilde{\varepsilon}(t)dt>-\infty.
$$
Then every function $f\in A(F)$ admits $\varepsilon$-approximation on the set $F$ with $\varepsilon(z)=\tilde{\varepsilon}(|z|)$ for $z\in F$.
\label{lem:approx-sectors}
\end{lem}
\section{Construction of functions with wandering domains}
\label{sec:wd}
To prove Theorem \ref{thm:wandering-domains} we modify Baker's construction of a holomorphic self-map of ${\mathbb{C}}^*$ with a wandering domain escaping to infinity \cite[Theorem 4]{baker87} to create instead a transcendental self-map of ${\mathbb{C}}^*$ with a wandering domain that accumulates to zero and to infinity according to a prescribed essential itinerary $e\in\{0,\infty\}^{{\mathbb{N}}_0}$ and with index $n\in {\mathbb{Z}}$.
\begin{proof}[Proof of Theorem \ref{thm:wandering-domains}]
We construct two entire functions $g$ and $h$ using Nersesyan's theorem so that the function $f(z)=z^n\exp\bigl(g(z)+h(1/z)\bigr)$, which is a transcendental self-map of ${\mathbb{C}}^*$, has the following properties:
\begin{itemize}
\item there is a bi-infinite sequence of annular sectors $\{A_m\}_{m\in{\mathbb{Z}}\setminus\{0\}}$ that accumulate at zero and infinity and integers $s(m)\in{\mathbb{Z}}\setminus\{0\}$, for $m\in{\mathbb{Z}}\setminus\{0\}$, such that $f(A_m)\subseteq A_{s(m)}$ for all $m\in{\mathbb{Z}}\setminus\{0\}$;
\item the discs $B_+:=\overline{D(2,1/4)}$ and $B_-:=1/B_+=\overline{D(32/63, 4/63)}$ both map strictly inside themselves under $f$, $f(B_+)\subseteq \textup{int}\,B_+$ and $f(B_-)\subseteq \textup{int}\,B_-$;
\item there is a bi-infinite sequence of closed discs $\{B_m\}_{m\in{\mathbb{Z}}\setminus\{0\}}$ such that $f(B_m)\subseteq \textup{int}\,B_+$, if $m>0$, and $f(B_m)\subseteq \textup{int}\,B_-$, if $m<0$.
\end{itemize}
Here $s(m):=\pi(\pi^{-1}(m)+1)$ and the map $\pi:{\mathbb{N}}_0\longrightarrow {\mathbb{Z}}\setminus \{0\}$ is an ordering of the sets $\{A_m\}_{m\in{\mathbb{Z}}\setminus\{0\}}$ according to the sequence $e$; that is, $\pi(k)$ is the position of the $k$th component in the orbit of the wandering domain. More formally, we define
\begin{equation}
\pi(k):=\left\{
\begin{array}{ll}
\displaystyle\#\{\ell\in{\mathbb{N}}_0\ :\ e_\ell=\infty \mbox{ for } \ell<k\}+1, & \mbox{ if } e_{k}=\infty,\vspace{5pt}\\
\displaystyle-\,\#\{\ell\in{\mathbb{N}}_0\ :\ e_\ell=0 \mbox{ for } \ell<k\}-1, & \mbox{ if } e_{k}=0,
\end{array}
\right.
\label{eq:p-function}
\end{equation}
for $k\in{\mathbb{N}}_0$ (see Figure \ref{fig:sketch-wd}).
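For instance, if the essential itinerary begins $e_0e_1e_2e_3e_4=\infty\,\infty\,0\,\infty\,0$, then
$$
\pi(0)=1,\quad \pi(1)=2,\quad \pi(2)=-1,\quad \pi(3)=3,\quad \pi(4)=-2,
$$
so that $s(1)=2$, $s(2)=-1$, $s(-1)=3$ and $s(3)=-2$; that is, the orbit of the wandering domain visits the components $A_1,A_2,A_{-1},A_3,A_{-2},\hdots$ in this order.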
\begin{figure}[h!]
\centering
\vspace*{45pt}
\def\svgwidth{.8\linewidth}
\input{wd3.pdf_tex}
\vspace*{30pt}
\caption[Sketch~of~the~construction~of~a~transcendental \mbox{self-map\,of\,${\mathbb{C}}^*$\,with\,a\,wandering\,domain}]{Sketch of the construction in the proof of Theorem \ref{thm:wandering-domains}.}
\label{fig:sketch-wd}
\end{figure}
By Montel's theorem, the domains $\{A_m\}_{m\in {\mathbb{Z}}\setminus\{0\}}$, $\{B_m\}_{m\in{\mathbb{Z}}\setminus\{0\}}$ and $B_+,B_-$ are all contained in the Fatou set. Since $f(B_+)\subseteq \textup{int}\,B_+$, the function $f$~has an attracting fixed point in $B_+$ and the sets $\{B_m\}_{m\in{\mathbb{N}}}$ are contained in the preimages of the immediate basin of attraction of this fixed point. Likewise, the sets $\{B_{-m}\}_{m\in{\mathbb{N}}}$ belong to the basin of attraction of an attracting fixed point in $B_-$. Observe that in order to show that $A_{\pi(0)}$ is contained in a wandering domain that escapes following the essential itinerary $e$ we need to prove that every $A_m$ is contained in a different Fatou component.
Now let us construct the entire functions $g$ and $h$ so that the function $f(z)=z^n\exp\bigl(g(z)+h(1/z)\bigr)$ has the properties stated above. Note that in this construction $\log z$ denotes the principal branch of the logarithm with \mbox{$-\pi<\textup{arg}\,z<\pi$}. Let $0<R<\pi/2$ and, for $m>0$, define
$$
\begin{array}{l}
A_m:=\{z\in{\mathbb{C}}\ :\ -R\leqslant\mbox{arg}(z)\leqslant R,\ k_m\leqslant |z|\leqslant k_m e^{2R}\},\vspace{5pt}\\
B_m:=\overline{D\bigl((k_{m+1}-k_m)/2,\ 1/8\bigr)},
\end{array}
$$
where $(k_m)$ is any sequence of positive real numbers such that \mbox{$k_m\! >5/2$} and $k_{m+1}>k_m+1/4$ for all $m\in {\mathbb{N}}$. We define $A_{-m}:=1/A_m$ and $B_{-m}:=1/B_m$ for all $m\in{\mathbb{N}}$. Note that, for $m\in{\mathbb{Z}}\setminus\{0\}$, the set $\log A_m$ is a square of side $2R$ centred at a point that we denote by $a_m\in {\mathbb{R}}$; explicitly, $a_m=\ln k_m+R$ for $m>0$ and $a_m=-a_{-m}$ for $m<0$. Hence, $\log A_m$ contains the disc $D(a_m,R)$ for all $m\in{\mathbb{Z}}\setminus\{0\}$. The set
$$
F:=\overline{D(0,1)}\cup B_+\cup \bigcup_{m>0} (A_m\cup B_m)
$$
which consists of a countable union of disjoint compact sets, is a Carleman set.
Let $\delta_+,\delta_->0$ be such that $|w-\ln 2|\! <\! \delta_+$ and \mbox{$|w-\ln 32/63|\! <\! \delta_-$} imply, respectively, that $|e^w-2|<1/8$ and $|e^w-32/63|<2/63$. Let $K:=\min\{R/4,\ \delta_+/4,\ \delta_-/4\}$. By Lemma \ref{lem:nersesjan}, there is an entire function $g$ that satisfies the following conditions:
$$
\left\{
\begin{array}{ll}
|g(z)-a_{s(m)}-n\log z|<R/4, & \mbox{if } z\in A_m \mbox{ with } m>0,\vspace{10pt}\\
|g(z)-\ln 2-n\log z|<\delta_+/4, & \displaystyle\mbox{if } z\in \bigcup_{m>0} B_m\cup B_+,\vspace{5pt}\\
|g(z)|<K, & \mbox{if } z\in \overline{D(0,1)}.
\end{array}
\right.\vspace{5pt}
$$
Similarly, there is an entire function $h$ that satisfies the following conditions:
$$ \left\{
\begin{array}{ll}
|h(z)-a_{s(-m)}-n\log (1/z)|<R/4, & \mbox{if } z\in A_m \mbox{ with } m>0,\vspace{10pt}\\
|h(z)-\ln 32/63-n\log (1/z)|<\delta_-/4, & \displaystyle\mbox{if } z\in \bigcup_{m>0} B_m\cup B_+,\vspace{5pt}\\
|h(z)|<K, & \mbox{if } z\in \overline{D(0,1)}.
\end{array}
\right.\vspace{5pt}
$$
Therefore, since the sets $B_-$ and $A_m$, $m<0$, are contained in $D(0,1)$ and the sets $B_+$ and $A_m$, $m>0$, are contained in ${\mathbb{C}}\setminus \overline{D(0,1)}$, the function $\log f(z)=g(z)+h(1/z)+n\log z$ satisfies
$$
\left\{
\begin{array}{ll}
|\log f(z)-a_{s(m)}|<R/2, & \mbox{if } z\in A_m \mbox{ with } m\neq 0,\vspace{10pt}\\
|\log f(z)-\ln 2|<\delta_+/2, & \displaystyle\mbox{if } z\in \bigcup_{m>0} B_m\cup B_+,\vspace{10pt}\\
|\log f(z)-\ln 32/63|<\delta_-/2, & \displaystyle\mbox{if } z\in \bigcup_{m<0} B_m\cup B_-,\\
\end{array}
\right.
$$
and hence $f$ has the required mapping properties.
Finally, note that this construction is symmetric with respect to the real line and hence all Fatou components of $f$ that intersect the real line will be symmetric too. Thus, since transcendental self-maps of~${\mathbb{C}}^*$ cannot have doubly connected Fatou components that do not surround the origin \cite[Theorem 1]{baker87}, the Fatou components containing the sets $\{A_m\}_{m\in{\mathbb{Z}}\setminus \{0\}}$ are pairwise disjoint and $A_{\pi(0)}$ is contained in a wandering domain in $I_e(f)$.
\end{proof}
\section{Construction of functions with Baker domains}
\label{sec:bd}
In this section we construct holomorphic self-maps of ${\mathbb{C}}^*$ with Baker domains. The construction is split into two cases: first, we deal with the case that the function $f$ is a transcendental entire or meromorphic function, that is, $f(z)=z^n\exp(g(z))$ where $n\in{\mathbb{Z}}$ and $g$ is a non-constant entire function (see Theorem~\ref{thm:baker-domains-entire}), and then we deal with the case that the function $f$ is a transcendental self-map of~${\mathbb{C}}^*$, that is, $f(z)=z^n\exp(g(z)+h(1/z))$ where $n\in {\mathbb{Z}}$ and $g,h$ are non-constant entire functions (see Theorem~\ref{thm:baker-domains}). For transcendental self-maps of~${\mathbb{C}}^*$, we are able to construct functions with Baker domains that have any given \textit{periodic} essential itinerary $e\in\{0,\infty\}^{{\mathbb{N}}_0}$.
To that end, we use Lemma~\ref{lem:approx-sectors} to obtain entire functions $g$ and, if necessary, $h$ so that the function $f$ has a Baker domain. After this approximation process, the resulting function $f$ will behave like the function~$T_\lambda(z)=\lambda z$, $\lambda>1$, in a certain half-plane~$W$. We first require the following result, which estimates the asymptotic distance between the boundaries of $\log W$ and $\log T_\lambda(W)\subseteq \log W$.
\begin{lem}
Let $W=\{z\in{\mathbb{C}} : \textup{Re}\, z\geqslant 2\}$ and, for $\lambda>1$, let \mbox{$T_\lambda(z)=\lambda z$}. For $r>0$, let $\delta (r)$ denote the vertical distance between the curves $\partial\log W$ and $\partial\log T_\lambda(W)\subseteq \log W$ along the vertical line $V_r:=\{z\in{\mathbb{C}} : \textup{Re}\, z=r\}$. Then $\delta(r)\sim 2(\lambda-1)e^{-r}$ as $r\to+\infty$.
\label{lem:approx-BD}
\end{lem}
\begin{proof}
Since $\log z=\ln |z|+i\,\mbox{arg}(z)$, the quantity $\delta(r)$ equals the difference between the arguments of the points $z_1,z_2$ with $\textup{Im}\, z_k>0$, $k\in\{1,2\}$, where the vertical lines $\partial W$ and $\partial T_\lambda(W)$ intersect the circle $\exp V_r$ of radius $e^r$ (see Figure \ref{fig:sketch-bd}).
\begin{figure}[h!]
\centering
\vspace{30pt}
\def\svgwidth{.8\linewidth}
\input{bd.pdf_tex}
\vspace{15pt}
\caption[Definition of the function $\delta(r)$]{Definition of the function $\delta(r)$.}
\label{fig:sketch-bd}
\end{figure}
Since $\textup{arg}\,z_1=\arccos(2/e^r)$, $\textup{arg}\,z_2=\arccos(2\lambda/e^r)$ and $\arccos x=\pi/2-x+O(x^3)$ as $x\to 0$, we have
$$
\delta(r)=\arccos \frac{2}{e^r}-\arccos\frac{2\lambda}{e^r}\sim\left(\frac{\pi}{2}-\frac{2}{e^r}\right)-\left(\frac{\pi}{2}-\frac{2\lambda}{e^r}\right)=\frac{2(\lambda-1)}{e^r},
$$
as $r\to+\infty$, as required.
\end{proof}
Given $N\in{\mathbb{N}}$ and a periodic sequence \mbox{$e=\overline{e_0e_1\cdots e_{N-1}}\in\{0,\infty\}^{{\mathbb{N}}_0}$}, let $p,q\in{\mathbb{N}}_0$ denote
\begin{equation}
\begin{array}{l}
p=p(e):=\#\{k\in{\mathbb{N}}_0\ :\ e_k=\infty \mbox{ for } k<N\},\vspace{5pt}\\
q=q(e):=\#\{k\in{\mathbb{N}}_0\ :\ e_k=0 \mbox{ for } k<N\},
\end{array}
\label{eq:p-and-q}
\end{equation}
so that $p+q=N$. We want to construct a holomorphic function \mbox{$f:{\mathbb{C}}^*\to{\mathbb{C}}^*$} with an $N$-cycle of Baker domains that has components $U_i^\infty$, $0\leqslant i<p$, and $U_i^0$, $0\leqslant i<q$, in which
$$
f_{|U_i^\infty}^{Nn}\to\infty\quad \mbox{ and } \quad f_{|U_i^0}^{Nn}\to 0 \quad \mbox{ locally uniformly as } n\to \infty.
$$
If zero is \textit{not} an essential singularity of $f$, then $q=0$ and $N=p$. Note that the closure of a Baker domain in ${\hat{\mathbb{C}}}$ may contain both zero and infinity.
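For instance, for the period-five itinerary $e=\overline{\infty\infty00\infty}$, we have $N=5$, $p=3$ and $q=2$, so, labelling the components in order of first visit, the cycle is traversed in the order
$$
U_0^\infty\mapsto U_1^\infty\mapsto U_0^0\mapsto U_1^0\mapsto U_2^\infty\mapsto U_0^\infty.
$$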
For $p\in{\mathbb{N}}$ and $X\subseteq {\mathbb{C}}^*$, we define
$$
\sqrt[p]{X}:=\{z\in{\mathbb{C}}^*\, :\, z^p\in X,\ |\textup{arg}\,z|<\pi/p\}.
$$
In order to construct a function with an $N$-periodic Baker domain that has $p$ components around zero or infinity, we will semiconjugate the function $T_\lambda$ that we want to approximate in the half-plane $W$ by the $p$th root function:
$$
\xymatrix{
W \ar[r]^{T_\lambda} & W\\
\sqrt[p]{W} \ar[u]^{z^p} \ar[r]_{T_{\lambda,p}} & \sqrt[p]{W}. \ar[u]_{z^p}
}
$$
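Note that if $z\in\sqrt[p]{W}$, then $|\textup{arg}\,z|<\pi/p$, so $T_\lambda(z^p)=\lambda z^p$ and the principal $p$th root give simply
$$
T_{\lambda,p}(z):=\sqrt[p]{T_\lambda(z^p)}=\sqrt[p]{\lambda}\,z \quad \mbox{ for } z\in \sqrt[p]{W};
$$
that is, in these coordinates the model map is multiplication by $\sqrt[p]{\lambda}$.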
Next we look at the effect of this semiconjugation on the function $\delta$.
\begin{lem}
Let $W$ and $T_\lambda$, $\lambda>1$, be as in Lemma~\ref{lem:approx-BD}. For $p\in{\mathbb{N}}$ and $\lambda>1$, define the function $T_{\lambda,p}(z):=\sqrt[p]{T_\lambda(z^p)}$ on $\sqrt[p]{W}$ and, for $r>0$, let $\delta_{p}(r)$ denote the vertical distance between the curves $\partial \log \sqrt[p]{W}$ and $\partial \log T_{\lambda,p}(\sqrt[p]{W})\subseteq \log \sqrt[p]{W}$ along the vertical line \mbox{$V_r:=\{z\in{\mathbb{C}} : \textup{Re}\, z=r\}$}. Then $\delta_{p}(r)\sim 2(\lambda-1)e^{-pr}/p$ as $r\to+\infty$.
\label{lem:approx-root-BD}
\end{lem}
\begin{proof}
The function $z\mapsto z^p$ maps the circle of radius $e^r$ to the circle of radius $e^{pr}$ while the function $z\mapsto \sqrt[p]{z}$ divides the argument of points on that circle by $p$, so
$$
\delta_{p}(r)=\frac{\delta(pr)}{p}
$$
and hence, by Lemma~\ref{lem:approx-BD}, $\delta_{p}(r)\sim 2(\lambda-1)e^{-pr}/p$ as $r\to+\infty$.
\end{proof}
In the following theorem we construct transcendental entire or meromorphic functions that are self-maps of ${\mathbb{C}}^*$ and have Baker domains in which points escape to infinity. These functions are of the form $f(z)=z^n\exp(g(z))$ where $n\in{\mathbb{Z}}$ and $g$~is a non-constant entire function.
\begin{thm}
\label{thm:baker-domains-entire}
For every $N\in{\mathbb{N}}$ and $n\in{\mathbb{Z}}$, there exists a holomorphic self-map~$f$ of ${\mathbb{C}}^*$ with $\textup{ind}(f)=n$ that is a transcendental entire function, if $n\geqslant 0$, or a transcendental meromorphic function, if $n<0$, and has a cycle of hyperbolic Baker domains of period $N$.
\end{thm}
\begin{proof}
Let $\lambda>1$, let $\omega_{N}:=e^{2\pi i/N}$ and define
$$
V_{m}:=\omega_{N}^m\sqrt[N]{W}\subseteq {\mathbb{C}}\setminus \overline{\mathbb D} \quad \mbox{ for } 0\leqslant m<N,
$$
where $W$ is the closed half-plane from Lemma~\ref{lem:approx-BD}. We denote by $V$ the union of all $V_m$ for $0\leqslant m<N$, and let $R:={\mathbb{R}}_-$, if $N$ is odd, or $R:=\{z\in{\mathbb{C}}^*\,:\,\textup{arg}\,z=\pi(1-1/N)\}$, if $N$ is even. Then put
\begin{equation}
d:=\min\{(\sqrt[N]{2}-1)/3,\ \textup{dist}(V,R)/4\},
\label{eq:bd-1}
\end{equation}
and define the closed connected set
\begin{equation}
B:=\{z\in{\mathbb{C}}\,:\,\textup{dist}\,(z,V)\geqslant d \mbox{ and } \textup{dist}\,(z,R)\geqslant d\},
\label{eq:bd-2}
\end{equation}
which satisfies $B':=\overline{D(1,d)}\subseteq \textup{int}\,B$ (see Figure~\ref{fig:sketch-bd-entire}).
\begin{figure}[h!]
\centering
\def\svgwidth{.60\linewidth}
\input{bd-entire.pdf_tex}
\caption[Sketch of the construction of a transcendental entire or meromorphic function that is a self-map of ${\mathbb{C}}^*$ and has a cycle of hyperbolic Baker domains]{Sketch of the construction in the proof of Theorem~\ref{thm:baker-domains-entire} with \mbox{$N\! =\! 3$}. The sets $B$ and $V_m$, $0\leqslant m<N$, are shaded in grey.}
\label{fig:sketch-bd-entire}
\end{figure}
Observe that the closed set $F:=B\cup V$ satisfies the hypothesis of Lemma \ref{lem:approx-sectors}; namely ${\hat{\mathbb{C}}}\setminus F$ is connected and ${\hat{\mathbb{C}}} \setminus F$ is locally connected at infinity, and $F\subseteq W_\alpha$ with $\alpha=2\pi$. We now define a function $\hat{g}$ on~$F$:
\begin{equation}
\hat{g}(z)\!:=\!\left\{\begin{array}{ll}
\!\!\!\log \left(\omega_{N}^{m+1}\sqrt[N]{\lambda (z/\omega_{N}^m)^{N}}\right)\! -n\log z, & \!\mbox{for } z\in V_m,\ 0\leqslant m<N,\vspace{5pt}\\
\!\!\!-n\log z, & \!\mbox{for } z\in B,
\end{array}\right.
\label{eq:bd-3}
\end{equation}
where we have taken an analytic branch of the logarithm defined on ${\mathbb{C}}^*\setminus R$ and hence on $F$. Then $\hat{g}\in A(F)$.
For $r>0$, we define the positive continuous function
\begin{equation}
\varepsilon(r):=\min\{d',\ k^{-(N+1)},\ r^{-(N+1)}\}
\label{eq:bd-4}
\end{equation}
where the constant $d'>0$ is so small that $|e^z-1|<d$ for $|z|<d'$ and the constant $k>0$ is so large that, for all $z\in \log T_\lambda(W)$ with $\textup{Re}\,z<k$, the disc $D(z,k^{-(N+1)})$ is compactly contained in $\log W$ and, moreover, if $\delta_{N}(r)$ is the function from Lemma~\ref{lem:approx-root-BD}, then
\begin{equation}
\varepsilon(r)<\delta_{N}(\ln (\lambda r)) \quad \mbox{ for } r\geqslant k,
\label{eq:bd-5}
\end{equation}
which is possible since
$$
\delta_{N}(\ln (\lambda r))\sim \frac{2(\lambda-1)}{N\lambda^N r^N} \quad \mbox{ as } r\to+\infty.
$$
Since $\varepsilon$ satisfies
$$
\int_1^{+\infty} r^{-3/2}\ln\varepsilon(r)\,dr=C-(N+1)\int_{r_0'}^{+\infty} \frac{\ln r}{r^{3/2}}\,dr>-\infty
$$
for some constants $C\in{\mathbb{R}}$ and $r_0'\geqslant 1$, by Lemma \ref{lem:approx-sectors} (with $\alpha=2\pi$), there is an entire function~$g$ such that
\begin{equation}
|g(z)-\hat{g}(z)|<\varepsilon(|z|)\quad\mbox{ for all } z\in F.
\label{eq:bd-entire-approx}
\end{equation}
We put
\begin{equation}
f(z):=z^n\exp(g(z))=z^n\exp(\hat{g}(z))\exp(g(z)-\hat{g}(z)).
\label{eq:bd-6}
\end{equation}
By Lemma \ref{lem:approx-root-BD} and (\ref{eq:bd-3}-\ref{eq:bd-entire-approx}), $f(V_m)\subseteq V_{m+1}$ for \mbox{$0\leqslant m<N- 1$} and $f(V_{N-1})\subseteq V_0$ and, by (\ref{eq:bd-1}-\ref{eq:bd-entire-approx}), $f(B)\subseteq D(1,d)$. Hence each set $V_m$ is contained in an $N$-periodic Fatou component~$U_m$ for $0\leqslant m<N$ and~$B$ is contained in the immediate basin of attraction of an attracting fixed point that lies in $B'$. It follows that the Fatou components $U_m$ are all simply connected.
To conclude the proof of Theorem~\ref{thm:baker-domains-entire}, it only remains to check that the Fatou components $U_m$, $0\leqslant m<N$, are hyperbolic Baker domains. Due to symmetry, it suffices to deal with the case $m=0$. Let $z_0\in U_0$. Since $V_0\subseteq U_0$ is an absorbing region, we can assume without loss of generality that $z_0\in V_0$ and $|z_0|$ is sufficiently large. For $n\in{\mathbb{N}}$, let
$$
\epsilon_n:=g(f^{n-1}(z_0))-\hat{g}(f^{n-1}(z_0))
$$
which, by \eqref{eq:bd-entire-approx}, satisfies
$$
|\epsilon_n|<\varepsilon(|f^{n-1}(z_0)|) \quad \mbox{ for all } n\in{\mathbb{N}}.
$$
For $n\in{\mathbb{N}}$, define
$$
C_n:=\prod_{0<k\leqslant n} \exp \epsilon_k=\exp \sum_{0<k\leqslant n} \epsilon_k,
$$
which represents the quotient $f^n(z_0)/\bigl((\omega_N\sqrt[N]{\lambda})^n z_0\bigr)$ of $f^n(z_0)$ by the $n$th iterate of the model map $z\mapsto \omega_N\sqrt[N]{\lambda}\,z$. Using the triangle inequality, we obtain
\begin{equation}
|C_n|\leqslant \exp \sum_{0<k\leqslant n} |\epsilon_k|<\exp\sum_{0<k\leqslant n} \varepsilon(|f^{k-1}(z_0)|).
\label{eq:bd-9}
\end{equation}
Next, we are going to show that $|C_n|$ is bounded above for all $n\in{\mathbb{N}}$. To that end, we find a lower bound for $|f^{k}(z_0)|$ for $k\in{\mathbb{N}}$ assuming, if necessary, that $|z_0|=r_0$ is sufficiently large. Put $K:=(\sqrt[N]{\lambda}-1)/2>0$, where we may assume that $\lambda$ was chosen with $\lambda>3^N$, so that $K>1$. Then $|C_1|>1/K$ for $r_0>0$ sufficiently large and, by \eqref{eq:bd-6} and \eqref{eq:bd-3},
$$
|f(z_0)|=\sqrt[N]{\lambda}|z_0||C_1|\geqslant \frac{\sqrt[N]{\lambda}}{K}r_0=\mu r_0,
$$
with $\mu:=\sqrt[N]{\lambda}/K>1$. Hence, by induction and the symmetry properties of the sets $V_m$, $0\leqslant m<N$,
\begin{equation}
|f^k(z_0)|\geqslant \mu^k r_0\quad \mbox{ for } k\in{\mathbb{N}}.
\label{eq:bd-7}
\end{equation}
In particular, $z_0\in I(f)$ so, by normality, the periodic Fatou components $U_m$, $0\leqslant m<N$, are Baker domains. We deduce by \eqref{eq:bd-9}, \eqref{eq:bd-4} and \eqref{eq:bd-7} that $|C_n|<e^S$ for all $n\in{\mathbb{N}}$, where $S<+\infty$ is the sum of the following geometric series
$$
S:=\sum_{k=0}^\infty \frac{1}{(\mu^kr_0)^{N+1}}=\frac{1}{r_0^{N+1}} \sum_{k=0}^\infty \left( \frac{1}{\mu^{N+1}} \right)^k =\frac{\mu^{N+1}}{r_0^{N+1}(\mu^{N+1}-1)}.
$$
Next we use the characterisation of Lemma~\ref{lem:bd-koenig} to show that the Baker domains are hyperbolic. For $n\in{\mathbb{N}}$, define
$$
c_n=c_n(z_0)=\frac{|f^{(n+1)N}(z_0)-f^{nN}(z_0)|}{\textup{dist}(f^{nN}(z_0),\partial U_0)}.
$$
We have
$$
f^{nN}(z_0)=C_{nN}\sqrt[N]{\lambda^{nN}z_0^N}=C_{nN}\lambda^{n}z_0 \quad \mbox{ for } n\in{\mathbb{N}}
$$
and therefore
$$
|f^{(n+1)N}(z_0)-f^{nN}(z_0)|\sim |C_\infty|\lambda^n(\lambda-1)|z_0|\quad \mbox{ as } n\to\infty,
$$
where $C_\infty:=\lim_{n\to\infty} C_n$. Also, $\textup{dist}(f^{nN}(z_0),\partial U_0)\leqslant e^{S}\lambda^n|z_0|$ and hence, if $c:=(\lambda-1)/2>0$ and $r_0$ is sufficiently large, we have $c_n(z_0)>c$ for all sufficiently large $n$. Thus, by Lemma~\ref{lem:bd-koenig}, the Baker domain $U_0$ is hyperbolic. This completes the proof of Theorem~\ref{thm:baker-domains-entire}.
\end{proof}
Finally we prove Theorem \ref{thm:baker-domains} in which we construct a function~$f$ that is a transcendental self-map of ${\mathbb{C}}^*$ with $\textup{ind}(f)=n$ that has a cycle of hyperbolic Baker domains in $I_e(f)$, where $e$ is any prescribed periodic essential itinerary $e\in\{0,\infty\}^{{\mathbb{N}}_0}$.
\begin{proof}[Proof of Theorem \ref{thm:baker-domains}]
Let $N\in {\mathbb{N}}$ be the period of $e$ and let $p,q\in{\mathbb{N}}_0$ denote, respectively, the number of symbols $\infty$ and $0$ in the sequence $e_0e_1\hdots e_{N-1}$, so that $p+q=N$; see \eqref{eq:p-and-q}. We modify the proof of Theorem~\ref{thm:baker-domains-entire} to obtain a transcendental self-map of ${\mathbb{C}}^*$ of the form
$$
f(z):=z^n\exp(g(z)z^{N+1}+h(1/z)/z^{N+1})
$$
that has a hyperbolic Baker domain $U$ in $I_e(f)$, where the entire functions $g, h$~will be constructed using approximation theory.
We start by defining a collection of $p$ sets $\{V_m^\infty\}_{0\leqslant m<p}$, whose closure in ${\hat{\mathbb{C}}}$ contains infinity. Put $\omega_{p}:=e^{2\pi i/p}$ once again and define
$$
V_{m}^\infty:=\omega_{p}^m\sqrt[p]{W}\subseteq {\mathbb{C}}\setminus \overline{D(0,\rho)} \quad \mbox{ for } 0\leqslant m<p,
$$
where $W$ is the half-plane from Lemma~\ref{lem:approx-BD} and $\rho:=1+(\sqrt[N]{2}-1)/6$. We denote by $V_\infty$ the union of all $V_m^\infty$, $0\leqslant m<p$.
As before, we define a set $B_\infty$ that will be contained in an immediate basin of attraction of $f$ and put $R_\infty={\mathbb{R}}_-$, if $p$ is odd, or $R_\infty=\{z\in{\mathbb{C}}^*\,:\,\textup{arg}\,z=\pi(1-1/p)\}$, if $p$ is even. Then, let
$$
d_\infty:=\min\{(\sqrt[N]{2}-1)/6,\ \textup{dist}(V_\infty,R_\infty)/4\},
$$
and define the closed connected set
$$
B_\infty:=\{z\in{\mathbb{C}}\,:\,\textup{dist}\,(z,V_\infty)\geqslant d_\infty \mbox{ and } \textup{dist}\,(z,R_\infty)\geqslant d_\infty\}\setminus D(0,\rho),
$$
which compactly contains the disc $B_\infty':=\overline{D((1+\sqrt[N]{2})/2,(\sqrt[N]{2}-1)/6)}$. Finally, we define the disc $D:=D(0,1/\rho)$, which is contained in $\mathbb D$. We will construct the function $g$ by approximating it on the closed set $F_\infty:=V_\infty\cup B_\infty\cup D$, which satisfies the hypothesis of Lemma \ref{lem:approx-sectors}; namely ${\hat{\mathbb{C}}}\setminus F_\infty$ is connected and ${\hat{\mathbb{C}}} \setminus F_\infty$ is locally connected at infinity, and $F_\infty\subseteq W_\alpha$ with $\alpha=2\pi$ (see Figure~\ref{fig:sketch-bd-cstar-1side}).
\begin{figure}[h!]
\centering
\def\svgwidth{.60\linewidth}
\input{bd-cstar-3.pdf_tex}
\caption[Sketch of the construction of a transcendental self-map of ${\mathbb{C}}^*$ that has a cycle of hyperbolic Baker domains I]{Sketch of the construction of the entire function $g$ in the proof of Theorem~\ref{thm:baker-domains} with $e=\overline{\infty\infty00\infty}$. The sets $D$, $B_\infty$ and $V_m^\infty$, \mbox{$0\leqslant m<p$}, are shaded in grey.}
\label{fig:sketch-bd-cstar-1side}
\end{figure}
Similarly, we define a set $B_0$ and a collection of $q$ unbounded sets $\{V_m^0\}_{0\leqslant m<q}$ by using the same procedure as above, just replacing $p$ by $q$, and then, if $V_0$ is the union of all $V_m^0$, $0\leqslant m<q$, we put $F_0:=V_0\cup B_0\cup D$. The Fatou set of the function $f$ will contain all the sets $V_m^\infty$, $0\leqslant m<p$, and all the sets $\tilde{V}_m^0:=1/V_m^0$, $0\leqslant m<q$, which are unbounded in ${\mathbb{C}}^*$.
In order to define the functions $\hat{g}\in A(F_\infty)$ and $\hat{h}\in A(F_0)$, we first introduce some notation to describe how $\hat{g}$ and $\hat{h}$ map the components of $V_\infty$ and $V_0$, respectively; we use the same notation as in Theorem~\ref{thm:wandering-domains}. Let $\pi:\{0,\hdots,N-1\}\to \{-q,\hdots,-1,1,\hdots,p\}$ denote the function given by, for $0\leqslant k<N$,
$$
\pi (k):=\left\{
\begin{array}{ll}
\#\{\ell\in{\mathbb{N}}_0\ :\ e_\ell=\infty \mbox{ for } \ell<k\}+1, & \mbox{ if } e_k=\infty,\vspace{5pt}\\
-\,\#\{\ell\in{\mathbb{N}}_0\ :\ e_\ell=0 \mbox{ for } \ell<k\}-1, & \mbox{ if } e_k=0.\\
\end{array}
\right.
$$
The function $\pi$ is an ordering of the components of $V_\infty \cup 1/V_0$ according to the sequence $e$. Suppose that $V$ is the starting component; that is, $V=\tilde{V}_0^0$, if $e_0=0$, and $V=V_0^\infty$, if $e_0=\infty$. Then
$$
f^k(V)\subseteq\left\{
\begin{array}{ll}
V_{\pi(k)}^\infty, & \mbox{ if } \pi(k)>0,\vspace{5pt}\\
\tilde{V}_{-\pi(k)}^0, & \mbox{ if } \pi(k)<0.
\end{array}\right.\vspace*{-5pt}
$$
For $m\in \{-q,\hdots,-1,1,\hdots ,p\}$, we define the function
$$
s(m):=\pi(\pi^{-1}(m)+1 \pmod{N}),\vspace*{-5pt}
$$
which describes the image of the component $V_m^\infty$, if $m>0$, and $\tilde{V}_m^0$, if $m<0$, so that the function $f$ to be constructed has a Baker domain that has essential itinerary $e$. More formally, for $0\leqslant m<p$,
$$
f(V_m^\infty)\subseteq\left\{
\begin{array}{ll}
V_{s(m)}^\infty, & \mbox{ if } s(m)>0,\vspace{5pt}\\
\tilde{V}_{-s(m)}^0, & \mbox{ if } s(m)<0;
\end{array}
\right.\vspace*{-5pt}
$$
and, for $0\leqslant m<q$,
$$
f(\tilde{V}_m^0)\subseteq\left\{
\begin{array}{ll}
V_{s(-m)}^\infty, & \mbox{ if } s(-m)>0,\vspace{5pt}\\
\tilde{V}_{-s(-m)}^0, &\mbox{ if } s(-m)<0.
\end{array}
\right.
$$
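For instance, for the itinerary $e=\overline{\infty\infty00\infty}$ of Figure~\ref{fig:sketch-bd-cstar-1side}, we have
$$
\pi(0)=1,\quad \pi(1)=2,\quad \pi(2)=-1,\quad \pi(3)=-2,\quad \pi(4)=3,
$$
and $s$ describes the $5$-cycle $1\mapsto 2\mapsto -1\mapsto -2\mapsto 3\mapsto 1$: the Baker domain visits two components around infinity, then two around zero, and one more around infinity before returning to the starting component.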
We now give the details of the construction of the entire function $g$ from the function $\hat{g}\in A(F_\infty)$. For $z\in V_m^\infty$, $0\leqslant m<p$, we put
$$
\hat{g}(z):=\left\{\begin{array}{ll}
\left(\log \left(\omega_{p}^{s(m)}\sqrt[p]{\lambda (z/\omega_{p}^m)^{p}}\right)-n\log z\right)/z^{N+1}, & \mbox{ if } s(m)>0,\vspace{5pt}\\
\left(\log \left(\omega_{p}^{s(m)}/\sqrt[p]{\lambda (z/\omega_{p}^m)^{p}}\right)-n\log z\right)/z^{N+1}, & \mbox{ if } s(m)<0,
\end{array}\right.
$$
for $z\in B_\infty$, we put $\hat{g}(z):=(\log(1+(\sqrt[N]{2}-1)/2)-n\log z)/z^{N+1}$ and, for $z\in D$, we put $\hat{g}(z):=0$, where we have taken an analytic branch of the logarithm defined on ${\mathbb{C}}^*\setminus R_\infty$ and hence on $V_\infty\cup B_\infty$ (see Figure~\ref{fig:sketch-bd-cstar-1side}). Then $\hat{g}\in A(F_\infty)$. For $r>0$, we define the positive continuous function $\varepsilon_\infty$ by
$$
\varepsilon_\infty(r):=\min\{d'_\infty,\ k_\infty^{-(N+1)},\ r^{-(N+1)}\} /(2r^{N+1})
$$
where the constant $d'_\infty>0$ is so small that $|e^z-1|<d_\infty$ for $|z|<d'_\infty$ and the constant $k_\infty>0$ is so large that, for all $z\in \log T_\lambda(W)$ with $\textup{Re}\,z<k_\infty$, the disc $D(z,k_\infty^{-(N+1)})$ is compactly contained in $\log W$ and, moreover, if $\delta_{N}(r)$ is the function from Lemma~\ref{lem:approx-root-BD}, then
$$
\varepsilon_\infty(r)\cdot 2r^{N+1}<\delta_{N}(\ln (\lambda r)) \quad \mbox{ for } r\geqslant k_\infty,
$$
which, as before, is possible since
$$
\delta_{N}(\ln (\lambda r)) \sim \frac{2(\lambda-1)}{N\lambda^Nr^N} \quad \mbox{ as } r\to+\infty.
$$
Since $\varepsilon_\infty$ satisfies
$$
\int_1^{+\infty} r^{-3/2}\ln\varepsilon_\infty(r)\,dr>-\infty,
$$
by Lemma \ref{lem:approx-sectors} (with $\alpha=2\pi$), there is an entire function~$g$ such that
\begin{equation}
|g(z)-\hat{g}(z)|<\left\{\begin{array}{ll}
\varepsilon_\infty(|z|) & \mbox{ for } z\in V_\infty\cup B_\infty,\vspace{5pt}\\
1/2 & \mbox{ for } z\in D.
\end{array}\right.
\label{eq:bd-entire-approx}
\end{equation}
Similarly, we can construct an entire function $h$ that approximates a function $\hat{h}\in A(F_0)$ so that the function
$$
\begin{array}{rl}
f(z):=&\hspace{-6pt}z^n\exp(g(z)z^{N+1}+h(1/z)/z^{N+1})\vspace{5pt}\\
=&\hspace{-6pt}z^n\exp(\hat{g}(z)z^{N+1})\exp(\hat{h}(1/z)/z^{N+1}) \cdot \vspace{5pt}\\
&\cdot \exp((g(z)-\hat{g}(z))z^{N+1})\exp((h(1/z)-\hat{h}(1/z))/z^{N+1})
\end{array}
$$
has the desired properties. Observe that if $z\in V_\infty\cup B_\infty$, then $1/z\in D$ and if $1/z\in V_0\cup B_0$, then $z\in D$. Thus, $\hat{h}(1/z)=0$ for $z\in V_\infty\cup B_\infty$ and
$$
\begin{array}{c}
|\hat{h}(1/z)/z^{N+1}+(g(z)-\hat{g}(z))z^{N+1}+(h(1/z)-\hat{h}(1/z))/z^{N+1}|\leqslant\vspace{5pt}\\
\leqslant 0+1/(2|z|^{N+1})+1/(2|z|^{N+1})=1/|z|^{N+1}
\end{array}
$$
for $z\in V_\infty\cup B_\infty$.
\begin{figure}[ht!]
\centering
\def.60\linewidth{.60\linewidth}
\input{bd-cstar-2.pdf_tex}
\caption[Sketch of the construction of a transcendental self-map of ${\mathbb{C}}^*$ that has a cycle of hyperbolic Baker domains II]{Sketch of the construction of the function $f$ in the proof of Theorem~\ref{thm:baker-domains} with $e=\overline{\infty\infty00\infty}$.
}
\label{fig:sketch-bd-cstar-2sides}
\end{figure}
Finally, a similar argument to that in the proof of Theorem~\ref{thm:baker-domains-entire} shows that the Fatou components that we have constructed are hyperbolic Baker domains; we omit the details.
\end{proof}
\section{Introduction}
The RadiaBeam/SLAC dechirper has been installed and commissioned at the Linac Coherent Light Source (LCLS)~\cite{LCLS}. It consists of two pairs of flat plates with small corrugations, with the beam passing in between. The purpose of a dechirper is to compensate residual energy chirp---energy to longitudinal position correlation---just upstream of the undulator in a linac-based, free electron laser (FEL).
The flat geometry allows for adjustment of strength of the dechirper; having both a horizontal and vertical module allows for cancellation of unavoidable quad wake effects.
The LCLS, however, does not generally need extra chirp control; instead, the installed dechirper has been used more as a fast kicker, to facilitate a two-color mode of operation~\cite{two_color}.
Analytical formulas for the longitudinal and transverse wakes of a dechirper have been developed to make it easier to do parameter studies and to plan the effective use of the device~\cite{Bane}.
These formulas are more accurate than results of perturbation calculations of the past~\cite{perturbation}. Note that we are here interested in very short-range wakes/high frequency impedances: the typical rms bunch length $\sigma_z\sim10$~$\mu$m, which implies that the typical frequency of interest is $f\sim c/(2\pi\sigma_z)=5$~THz ($c$ is the speed of light).
In this report we begin by introducing the concepts of wakefield and impedance, particularly in periodic structures and in flat geometry. Then we discuss the corrugated structure as a dechirper, the surface impedance approach to wake calculation, and how explicit analytical formulas for the wakes of the dechirper are obtained. This is followed by comparisons with numerical simulations and with measurements at the LCLS, and finally by a summary.
This report summarizes much of the theoretical work on the dechirper done together with G.~Stupakov, I.~Zagorodnov, and E.~Gjonaj. The comparisons with measurements are taken from Refs.~\cite{MarcG, Zemella}.
The formulas of this report are given in Gaussian units; to change a wake or impedance to MKS units, one merely multiplies by $Z_0c/(4\pi)$, with $Z_0=377$~$\Omega$.
\section{Wakefields and Impedances}
Let us here limit consideration to periodic structures with boundaries made of metal or dielectrics; {\it e.g.} resistive pipes, dielectric tubes, periodic cavities.
A driving charge $Q$ passes at the speed of light $c$ through such a structure. A test charge moves on a parallel trajectory, also at speed $c$, but at distance $s$ behind. The longitudinal (point charge) wake, $w(s)$, is the (longitudinal) voltage loss of the test particle per unit charge $Q$ per unit length of structure.
Thus, the point charge wake, for structure period $p$, is
\begin{equation}
w(s)=-\frac{1}{Qp}\int_0^p E_z(z,t)\Big|_{t=(s+z)/c}\,dz\ ,
\end{equation}
with $E_z$ the longitudinal electric field, $z$ the longitudinal position, and $t$ the time of the test particle.
The {\it bunch wake} is given by the convolution of the point charge wake and the (longitudinal) bunch distribution $\lambda(s)$:
\begin{equation}
W_\lambda(s)=-\int_0^\infty w(s')\lambda(s-s')\,ds'\ ;\label{bunch_wake_eq}
\end{equation}
note that a value of $W_\lambda<0$ means energy loss by the beam.
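As a numerical aside (ours, not part of the original report), the bunch-wake convolution above can be discretized directly. The function name \texttt{bunch\_wake} and the step wake/uniform bunch used here are illustrative placeholders, chosen to mimic the ideal dechirper discussed later in this report:

```python
import numpy as np

def bunch_wake(w, lam, s, ds):
    """Riemann-sum version of W_lambda(s) = -int_0^inf w(s') lam(s-s') ds'."""
    sp = np.arange(0.0, s[-1] - s[0] + ds, ds)  # s' >= 0 grid
    W = np.empty_like(s)
    for i, si in enumerate(s):
        W[i] = -np.sum(w(sp) * lam(si - sp)) * ds
    return W

# Example: ideal step wake w(s) = w0*H(s) and a uniform bunch of length ell.
w0, ell = 1.0, 1.0
w = lambda sp: w0 * np.ones_like(sp)
lam = lambda x: np.where((x > -ell / 2) & (x < ell / 2), 1.0 / ell, 0.0)

s = np.linspace(-0.5, 0.5, 201)
W = bunch_wake(w, lam, s, ds=1e-3)
# Inside the bunch, W is linear in s: W_lambda(s) = -w0*(s + ell/2)/ell.
```

For this combination the numerical result reproduces the linear bunch wake quoted below for the ideal dechirper, up to an overall constant offset that does not affect the chirp.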
The impedance, for wavenumber $k=2\pi f/c$, is given by the Fourier transform of the wake, $\tilde w(k)$:
\begin{equation}
Z(k)=\tilde w(k)=\frac{1}{c}\int_0^\infty w(s)e^{iks}\,ds\ .
\end{equation}
The transverse wakes and impedances, $w_y(s)$, $Z_y(k)$, which are concerned with the transverse force on the test particle, are defined analogously to the longitudinal ones.
\subsection{Case of Flat Geometry}
Typically we are interested in wakes of {\it pencil beams}, {\it i.e.} of beams with small transverse extent. In round (cylindrically symmetric) geometry it is known that, for particles near the axis, the longitudinal wake is (nearly) independent of their transverse positions, and the dominant transverse wake---the dipole wake---depends linearly on the offset of the driving particle. Consider, however, the flat geometry of two (longitudinally) corrugated infinite plates, with the aperture defined by $y=\pm a$. The transverse wakes and impedances, for particles near the axis, are of the form:~\cite{surface_impedance}
\begin{equation}
w_{y}(s)=y_{0}\, w_{d}(s) + y\, w_{q}(s)\ ,\quad w_x(s)= w_q(s)(x_0-x)\ ,
\end{equation}
with $(x_0, y_0)$ the transverse offset of the driving charge, $(x,y)$ that of the test charge.
\subsection{Wakes at the Origin, at $s=0^+$}
For a periodic structure, the wake approaches a finite constant as $s\rightarrow0$, $w_0\equiv w(0^+)$, one that is independent of the properties of the boundary material. This has been shown to be true if the boundary is a resistive metal, a (metallic) disk-loaded structure, or a dielectric tube. It is probably generally true for boundaries made of metal or dielectric (see~\cite{Baturin}; but not if the boundary is a plasma~\cite{Gennady_plasma}). The constant $w_0$ depends only on the shape and size of the aperture ({\it e.g.} iris radius in disk-loaded RF structure) and on the (transverse) location of the particles. The same statement can be made about the slope of the transverse wakes at the origin $w_{y0}'$---for both the dipole and quad wakes discussed above.
For example, in a round structure, with the particles moving on axis, $w_0=4/ a^2$, with $a$ the radius of the aperture. However, for particles on axis in flat geometry, $w_0=\pi^2/(4a^2)$, where here $a$ represents the half-aperture of the structure.
One can see that, if one knows the impedance $Z(k)$, then $w_0$ is given by ($c/\pi$ times) the area under $Re[Z(k)]$. However, one can obtain $w_0$ also from the asymptotic behavior of $Z(k)$ at high frequencies~\cite{Gluckstern, zeroth_order}. The asymptotic impedance $Z_a(k)$ is related to the wake at the origin $w_0$. By letting the wake $w(s)\approx H(s)w_0$, we find the asymptotic form
\begin{equation}
Z_{a}(k)
=
\frac{w_0}{c}
\int_{0}^\infty
ds\, e^{iks}
=
i \frac{w_0}{kc}\ ,
\end{equation}
where we have neglected the contribution to the integral at the upper limit.
Thus if we know $Z(k)$, we can obtain $w_0$ by the relation~\cite{zeroth_order}
\begin{equation}
w_0=-ikcZ_{a}(k)=-ic\lim_{k\to\infty} kZ(k)\ ;\label{w0_eq}
\end{equation}
this procedure will yield a positive constant.
In the transverse case the wakes start at the origin linearly:
\begin{equation}
w_y(s)= H(s)w'_{y0} s\ ,
\end{equation}
with $w'_{y0}$ the value of the slope of the wake at the origin. Substituting into the impedance formula, we find the form of the asymptotic impedance
\begin{align}\label{eq:9}
Z_{ya}(k)
=
-
i\frac{w'_{y0}}{c}
\int_{0}^\infty
ds\,se^{iks}
=
i \frac{w'_{y0}}{ck^2}
\ ,
\end{align}
(where again we have neglected the contribution to the integral at the upper limit). Thus, from the high frequency impedance we obtain~\cite{zeroth_order}
\begin{equation}
w_{y0}'=-ik^2cZ_{ya}(k)=-ic\lim_{k\to\infty} k^2Z_{y}(k)\ ,\label{w0p_eq}
\end{equation}
which is a positive constant.
\section{The Concept of a Dechirper}
In a linac-based, X-ray FEL, by the use of accelerating structures and chicanes, a low energy, low (peak) current beam ($\sim10$~MeV, $\sim100$~A) is converted to one with high energy and high current ($\sim5$--10~GeV, $\sim1$~kA). After the last bunch compressor the beam is typically left with an energy--longitudinal position correlation (an energy ``chirp"), with the bunch tail at higher energy than the head (see Fig.~\ref{phase_sketch_fi}, the blue curve).
\begin{figure}[htb]
\centering
\includegraphics[draft=false, width=.4\textwidth]{fig1.pdf}
\caption{ Sketch of typical longitudinal phase space of beam at the end of acceleration in a linac-based FEL, before the dechirper (blue), and after passing through the dechirper (red). The front of the bunch is to the left. }\label{phase_sketch_fi}
\end{figure}
A typical value of chirp might be $\nu=40$~MeV/mm. To cancel the chirp, one can run the beam off crest in downstream RF cavities. Running the beam on the zero crossing of the wave, we would need a length $L_{rf}=\nu/(G_{rf}k_{rf})$ of extra RF. With peak RF gradient $G_{rf}=20$~MeV/m and wave number $k_{rf}=27$/m (for frequency $f=1.3$~GHz), we would need $L_{rf}=74$~m of extra active RF.
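The quoted RF length follows from a one-line estimate; the numbers below are the ones given in the text (units converted to MeV and m):

```python
# Extra active RF needed to remove a chirp nu by running on zero crossing.
nu = 40e3        # chirp: 40 MeV/mm = 4e4 MeV/m
G_rf = 20.0      # peak RF gradient, MeV/m
k_rf = 27.0      # RF wave number, 1/m (f = 1.3 GHz)

L_rf = nu / (G_rf * k_rf)
print(round(L_rf, 1))  # -> 74.1 (m)
```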
A dechirper is a passive way to achieve the same result in a few meters of structure.
An ideal dechirper would have the wake: $w(s)=w_0H(s)$, with $H(s)=0$ (1) for $s<0$ ($s>0$).
This is because at the end of an X-ray FEL the bunch distribution is (approximately) uniform: $\lambda(s)=H(s+\ell/2)H(\ell/2-s)/\ell$, with $\ell$ the full bunch length.
For an ideal dechirper and a uniform bunch distribution, Eq.~\ref{bunch_wake_eq} yields $W_\lambda(s)=-w_0s/\ell$, which is linear in $s$, with the tail losing the most energy. The induced chirp is $\nu=-QLw_0/\ell$, with $Q$ the bunch charge and $L$ the length of the structure.
Note that a resistive beam pipe or periodic (passive) RF cavities do not work well as dechirpers, since the wake $w(s)$ drops quickly as $s$ moves away from the origin.
\subsection{Corrugated Structure as Dechirper}
A metallic pipe with small corrugations (see Fig.~\ref{geometry_fi}) can function as a good dechirper for an X-ray FEL~\cite{dechirper}. Ideally we would like $h,p\ll a$, and it is important that $h\gtrsim p$. If $h,p\ll a$, then perturbation solutions exist; in the case of round geometry,
the wake is dominated by a single mode: $w(s)\approx H(s)w_0\cos ks$, with $w_0=4/a^2$ and $k=\sqrt{2p/(aht)}$~\cite{round}.
\begin{figure}[htb]
\centering
\includegraphics[draft=false, width=.30\textwidth]{fig2.pdf}
\caption{Geometry of a dechirper showing three corrugations. The blue ellipse represents an electron beam propagating along the $z$ axis. For the RadiaBeam/SLAC dechirper, (typical) half-gap $a=0.7$~mm, $h=0.5$~mm, $p=0.5$~mm, and $t=0.25$~mm.} \label{geometry_fi}
\end{figure}
As an example calculation we consider the beam properties of the NGLS project, where $Q=300$~pC and $\ell=150$~$\mu$m.
We performed a time-domain wake calculation using I.~Zagorodnov's ECHO code~\cite{ECHO} for (round) dechirper parameters $a=3$~mm, $h=0.45$~mm, $p=1$~mm, $t=p/2$, and $L=8.2$~m. In Fig.~\ref{NGLS_fi} we show the numerical result (the blue curve), the analytical result (the dashed red line), and the bunch shape (in black, with the head to the left). We see that the numerical result, over the core of the beam, agrees well with the analytical one.
\begin{figure}[!htb]
\centering
\includegraphics[draft=false, width=.45\textwidth]{fig3.pdf}
\caption{ Dechirper for NGLS~\cite{dechirper}: wake of model of NGLS bunch distribution (blue). The dashed, red line gives the linear chirp approximation.
The bunch shape $\lambda$, with the head to the left, is given in black.}
\label{NGLS_fi}
\end{figure}
Using a corrugated structure with flat geometry as dechirper has the advantage over round geometry in that the strength of interaction can be changed by simply changing the gap between the two plates. However, as we saw above, $w_0$ becomes weaker (for the same aperture). In addition, an unavoidable quad wake is excited, even when the beam is on axis; its effect, however, can simply be canceled by having half the dechirper oriented horizontally and half vertically. This, in fact, is the configuration of the RadiaBeam/SLAC dechirper that is installed in the LCLS.
\section{Surface Impedance Approach}
To find the impedance of a structure like the dechirper one can use the method of field matching (see {\it e.g.}~\cite{Zhen}).
However, if the corrugations are small ($h\ll a$), the fields excited by the driving particle can be solved using a simpler method, a surface impedance approach~\cite{surface_impedance}.
According to this method, on the walls we let
\begin{equation}
\tilde E_z(k)=\zeta(k)\tilde H_x(k)\ ,
\end{equation}
with $\zeta(k)$ the surface impedance; and $E$, $H$, are the electric and magnetic fields.
We solve Maxwell's equations (in the frequency domain) for the fields excited by the point driving charge. For flat geometry calculation we perform also the Fourier transform in $x$ of the fields, {\it e.g.} for the magnetic field we use
\begin{equation}
\hat H_x(q)=\int_{-\infty}^\infty dx\,\tilde H_xe^{iqx}\ .
\end{equation}
We finally obtain the flat geometry {\it generalized impedances}, {\it i.e.} impedances where the transverse positions of driving and test particle can be located anywhere within the aperture.
The longitudinal generalized impedance can be written in the form:~\cite{Bane}
\begin{equation}
Z(x_0,y_0,x,y,k)=\int_{0}^\infty dq\, f(q,y,y_0,k,\zeta)e^{-iq(x-x_0)}\ ,\label{Z_surf_eq}
\end{equation}
where $f$ is an explicit, analytical function of its arguments. The transverse generalized impedance is given in the same form, with a different (explicit analytical) function in the integral, $f_y(q,y,y_0,k,\zeta)$. Finally, the wake is obtained by numerically performing the inverse Fourier transform.
Thus, by performing two numerical integrals, we obtain an estimate of the generalized longitudinal and transverse wakes.
A subset of these results, one in which we are normally interested, is the special case of pencil beams, {\it i.e.} the case where $x\approx x_0$, $y\approx y_0$.
For the case of a beam passing near a single plate, we begin with the impedance for two plates separated by distance $2a$. In the expression for $f(q,y,y_0,k,\zeta)$ described above we let $y=a-b$ and then $a\rightarrow\infty$. Using the new version of $f$ and performing the same integrals as before, we obtain the wakes of a beam passing by a single dechirper jaw at distance $b$.
The only problem with using the surface impedance approach for the calculation of the RadiaBeam/SLAC dechirper is that it normally is valid only if $(h/a)\ll1$, whereas here nominally $(h/a)=(0.5/0.7)=0.7\not\ll1$ (and similarly for the single jaw case).
However, for the short, LCLS-type bunches---with rms length of 10's of microns---we demonstrate below that this approach still works when used with a surface impedance that represents the wall corrugations at high frequencies.
\section{Explicit Analytical Approximations of Wakes}
We have further simplified the results by extracting (analytical) parameters and using simplified formulas for cases of pencil beams. In the {\it zeroth order} approximation~\cite{zeroth_order}, we let $w(s)=H(s)w_0$, where $w_0$ is obtained using Eqs.~\ref{w0_eq}, \ref{Z_surf_eq} (longitudinal case), and $w_y(s)=H(s)w_{y0}'s$, where $w_{y0}'$ is obtained using Eqs.~\ref{w0p_eq}, \ref{Z_surf_eq} (using $f_y$; transverse case).
For better agreement with results obtained directly from Eq.~\ref{Z_surf_eq} and with numerical simulations, we use the {\it first order} approximation~\cite{Bane}, which in the longitudinal case is of form
\begin{equation}
w(s)=H(s)w_0e^{-\sqrt{s/s_0}}\ .\label{wa_eq}
\end{equation}
Parameter $s_0$ can also be derived in analytical form from the structure of the impedance at high frequencies. Eq.~\ref{wa_eq} corresponds to the two-term, high-frequency Taylor expansion:
\begin{equation}
Z(k)\approx i\frac{w_0}{kc}\left[1-\frac{(1+i)}{\sqrt{2ks_{0}}}\right]\quad\quad (k\rightarrow\infty)\ .\label{imp_round_eqb}
\end{equation}
Thus, to obtain parameter $s_0$, we begin with general form of the impedance (Eq.~\ref{Z_surf_eq}), and substitute in the high frequency surface impedance for the corrugated structure:~\cite{Gennady_periodic,Bane}
\begin{equation}
\zeta(k)=\frac{1}{\alpha p}\left(\frac{2it}{\pi k}\right)^{1/2}\ ,
\end{equation}
with $\alpha\approx1-0.465\sqrt{t/p}-0.070(t/p)$.
We then expand to two terms in Taylor series (at high $k$), integrate over $dq$, and find $s_0$ by comparing with the form of Eq.~\ref{imp_round_eqb}.
A similar procedure is followed for the transverse (dipole and quad) wakes; in this case, the wake is of the form
\begin{equation}
w_{y}(s)=
2H(s)w_{0y}'
s_{0y}\left[1-\left(1+\sqrt{\frac{s}{s_{0y}}}\right)e^{-\sqrt{s/s_{0y}}}\right]\ ,\label{wya_eq}
\end{equation}
with parameters $w_{0y}'$ and $s_{0y}$.
Note that the analytical forms for the longitudinal and transverse, short-range wakes (Eqs.~\ref{wa_eq}, \ref{wya_eq}) have been used before, for the case of periodic, disk-loaded accelerating structures~\cite{nlc_wake, nlc_wakex}; a major difference, however, was that the parameters $s_0$, $s_{0y}$, were there obtained through fitting to numerical results.
We now give results for two specific example calculations. In the first example we consider a short beam on the axis of a two-plate dechirper and find the quad wake, where the two analytical wake parameters are given by~\cite{Bane}
\begin{equation}
w_{0q}'= \frac{\pi^4}{32a^4}\ ,\quad s_{0q}=\frac{a^2t}{2\pi\alpha^2p^2}\left(\frac{15}{16}\right)^2\ .
\end{equation}
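For concreteness, these parameter formulas can be evaluated numerically. The geometry values below are the RadiaBeam/SLAC ones quoted earlier ($a=0.7$~mm, $t=0.25$~mm, $p=0.5$~mm; all lengths in mm), and \texttt{w\_quad} is our own name for the first-order transverse wake form given above; this is a sketch in Gaussian units, not a definitive implementation:

```python
import math

a, t, p = 0.7, 0.25, 0.5  # half-gap and corrugation dimensions, mm
alpha = 1 - 0.465 * math.sqrt(t / p) - 0.070 * (t / p)

w0q = math.pi**4 / (32 * a**4)                                    # mm^-4
s0q = a**2 * t / (2 * math.pi * alpha**2 * p**2) * (15 / 16)**2   # mm

def w_quad(s):
    """First-order quad wake w_y(s) = 2 w0q' s0q [1-(1+x)e^{-x}], x=sqrt(s/s0q)."""
    x = math.sqrt(s / s0q)
    return 2 * w0q * s0q * (1 - (1 + x) * math.exp(-x))
```

Expanding the bracket for small $s$ recovers the linear behaviour at the origin, $w_y(s)\approx w_{0q}'s$, as it should.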
We consider the RadiaBeam/SLAC dechirper parameters with half-gap $a=0.7$~mm, and a Gaussian driving bunch with $\sigma_z=10$~$\mu$m. We obtain the zeroth and first order analytical bunch wakes by performing the convolution Eq.~\ref{bunch_wake_eq}. The analytical and numerical results of ECHO(2D)~\cite{echo2d} are shown in Fig.~\ref{two_plate_ex_fi}. We see that the agreement of the 1st order analytical result and the numerical one is quite good.
\begin{figure}[!htb]
\centering
\includegraphics[draft=false, width=.45\textwidth]{fig4.pdf}
\caption{ Quad bunch wake for a Gaussian beam on axis, with $\sigma_z=10$~$\mu$m, $a=0.7$~mm, in the RadiaBeam/SLAC dechirper~\cite{Bane}. Given are the numerical results of ECHO(2D) (blue), and the analytical zeroth order (red) and 1st order results (green). The bunch shape $\lambda(s)$ is shown in black.}\label{two_plate_ex_fi}
\end{figure}
Our second example considers the dipole wake of a beam offset by distance $b$ from one plate. For this case the wake parameters are given by~\cite{single_plate}
\begin{equation}
w_{0d}'= \frac{1}{b^3}\ ,\quad s_{0d}=\frac{8b^2t}{9\pi\alpha^2p^2}\ .
\end{equation}
In Fig.~\ref{one_plate_ex_fi} we plot, as functions of $b$, the analytically obtained (1st order) kick factor $\varkappa_{yd}$---the average of the bunch wake $W_{\lambda d}(s)$---and the numerical result obtained using CST Studio~\cite{CST}. The bunch here is Gaussian with length $\sigma_z=100$~$\mu$m. We see from the figure that the agreement of the analytical and numerical calculations is very good.
\begin{figure}[!htb]
\centering
\includegraphics[draft=false, width=.45\textwidth]{fig5.pdf}
\caption{ Single plate dipole kick factor $\varkappa_{yd}$ as function of distance of the beam from the wall $b$, showing the CST results (blue symbols) and those of the 1st order analytical model (red dashes)~\cite{single_plate_confirm}. The bunch is Gaussian with length $\sigma_z=100$~$\mu$m.}\label{one_plate_ex_fi}
\end{figure}
\vspace{-8mm}\section{Measurements}
One example measurement of the longitudinal effect of the beam on axis between the jaws of both dechirper modules is shown in Fig.~\ref{dechirp_fi}. Here $Q=190$~pC, energy $E=4.4$~GeV, and dechirper half gap $a=1.2$~mm. The X-band, deflecting cavity diagnostic, XTCAV, measures longitudinal phase space of the bunch after the undulator in the LCLS. From it, we obtain the longitudinal bunch distribution $\lambda(s)$ and the induced energy chirp $\Delta E(s)$ due to the dechirper, by taking the average energy at position $s$ minus that when the dechirper jaws are wide open. Here the bunch shape is approximately uniform, with peak current $I=cQ\lambda=1.0$~kA. For the calculation we convolve the measured $\lambda(s)$ with the analytical $w(s)$ (see Eq.~\ref{bunch_wake_eq}), noting that $w_0=\pi^2/(4a^2)$ and $s_0=9a^2t/(8\pi\alpha^2p^2)$; then $\Delta E=eQW_\lambda L$, with $L=4$~m, the length of the dechirper.
In Fig.~\ref{dechirp_fi} the measured chirp using XTCAV (with arbitrary horizontal and vertical offsets) is given in blue, the calculation (with the head of the bunch at $t\equiv s/c=-85$~fs) is given in red dashes. We see that the agreement in induced chirp is good.
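As a rough sanity check (ours, not from the reference), the formulas quoted in this section can be combined to estimate the wake-induced energy change at the tail of an ideal uniform bunch; the corrugation values $t=0.25$~mm, $p=0.5$~mm are taken from the dechirper geometry given earlier, and the Gaussian-unit wake is converted to MKS by the factor $Z_0c/(4\pi)$ noted in the introduction:

```python
import math

Z0, c = 376.73, 2.998e8
a, t, p = 1.2e-3, 0.25e-3, 0.5e-3          # half-gap and corrugations, m
Q, I, L = 190e-12, 1.0e3, 4.0              # charge (C), current (A), length (m)

alpha = 1 - 0.465 * math.sqrt(t / p) - 0.070 * (t / p)
w0 = math.pi**2 / (4 * a**2) * Z0 * c / (4 * math.pi)  # V/(C m) per m
s0 = 9 * a**2 * t / (8 * math.pi * alpha**2 * p**2)    # m

ell = c * Q / I                            # uniform bunch length, m
n = 10000
ds = ell / n
# W_lambda at the tail: -(w0/ell) * int_0^ell exp(-sqrt(s'/s0)) ds' (midpoint rule)
W_tail = -(w0 / ell) * sum(
    math.exp(-math.sqrt((k + 0.5) * ds / s0)) * ds for k in range(n))
dE_tail = Q * W_tail * L                   # energy change at tail, eV per particle
```

With these inputs the estimate comes out at roughly $-10$~MeV at the bunch tail, the right order of magnitude for the measured chirp in Fig.~\ref{dechirp_fi}.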
\begin{figure}[!htb]
\centering
\includegraphics[draft=false, width=.45\textwidth]{fig6.pdf}
\caption{Chirp induced in beam passing on axis between the jaws of both dechirper modules, according to measurement (blue) and calculation (red dashes)~\cite{MarcG}. Here $Q=190$~pC, $I=1.0$~kA, $E=4.4$~GeV. The bunch head is to the left.}
\label{dechirp_fi}\vspace{-6mm}
\end{figure}
In the next example the beam was kept fixed, half the dechirper (the horizontal half) was scanned across the beam (keeping the half-gap $a$ fixed), while the other half (the vertical half) was opened wide, and the transverse kick was measured. The parameters are half gap $a=1$~mm, bunch charge $Q=152$~pC, and energy $E=6.6$~GeV. The results are given in Fig.~\ref{two_plates_a1mm_fi}, where we plot the offset at a downstream BPM, $\Delta x_w$ {\it vs.} offset of the beam from the axis in the dechirper, $x$. The figure shows the data (plotting symbols) and the calculations (the curves).
The bunch shape as obtained by measurement (with head to the left), is given in the inset plot.
For the scale of the wake effect, note that a value of $\Delta x_w=0.4$~mm corresponds to an average kick of $V_x=160$~kV.
\begin{figure}[!htb]
\centering
\includegraphics[draft=false, width=.45\textwidth]{fig7.pdf}
\caption{Downstream deflection, $\Delta x_w$, as function of offset in the (horizontal) dechirper, $x$, for half-gap $a=1$~mm~\cite{Zemella}. Here $Q=152$~pC, $E=6.6$~GeV. For the analytic curves, the gap parameter was reduced by 11\% to fit the experimental data.
The bunch current, with head to the left, is shown in the inset.}\label{two_plates_a1mm_fi}\vspace{-2mm}
\end{figure}
For the comparison calculations, we first numerically performed the convolution of Eq.~\ref{bunch_wake_eq} to obtain the bunch wake; the result (the dashed curve in the figure) is then given by $\Delta x_w=eQW_{\lambda x}L_{BPM}/E$, with $L_{BPM}=16.26$~m the distance between the dechirper and the measuring BPM. For the analytic curves, the gap parameter was reduced by 11\% to fit the experimental data.
The agreement between theory and measurements is good; the discrepancy in scale is small. In aligning the structure, the ends of the plates are independently adjusted; one possible cause of the discrepancy is that, during measurement, the jaws have an unknown residual tilt.
The final example is a single-jaw measurement of the downstream offset due to the transverse kick, $\Delta x_w$ {\it vs.} beam offset in the dechirper, $b$ (see Fig.~\ref{north_kick_fi}). Note that $\Delta x_w>0$ indicates a kick toward the jaw, and a value of $\Delta x_w=0.6$~mm corresponds to a kick of $V_x=480$~kV.
The beam parameters were charge $Q=180$~pC, peak current $I=3.5$~kA,
and energy $E=13$~GeV.
The absolute offset of the beam from the dechirper plate is not well known.
Therefore, a fit of the model to the measurement was performed, where an overall shift in beam offset $\Delta b$ was allowed.
We see that the fit of the theory (the curve) to the data (the plotting symbols) is good. The fit gives an overall shift of $\Delta b=-161$~$\mu$m, and the data in the plot has been shifted by this amount.
During this measurement, the longitudinal wake effect was simultaneously recorded, using a downstream BPM at a location of dispersion. The agreement between the longitudinal theory and measurement was again good, when an overall shift of $\Delta b=-138$~$\mu$m was assumed in the theory. The fact that, in both cases, the theory and measurement curves agree well, and that the fitted shifts are close to each other, is confirmation of our wake models and of our analysis.
\begin{figure}[!htb]
\vspace{-4mm}
\centering
\includegraphics[draft=false, width=.45\textwidth]{fig8.pdf}
\caption{Downstream deflection, $\Delta x_w$ as functions of beam offset from a single dechirper plate, $b$~\cite{Zemella}. Here $Q=180$~pC, $I=3.5$~kA, $E=13$~GeV. The symbols give the data points, with their $b$ values shifted by $-161$~$\mu$m; the curves give the analytical theory. }\label{north_kick_fi}
\vspace{-2mm}
\end{figure}
\vspace{-7mm}\section{Summary}
The corrugated, metallic structure can be used for passive chirp control at the end of a linac-based, X-ray FEL.
It can also be used as a fast kicker, to facilitate two-color, fresh-slice operation of an FEL such as the Linac Coherent Light Source (where it is regularly being used for this purpose).
Using the surface impedance approach, we are able to obtain analytical solutions for the wakes of the structure: longitudinal, dipole, quad wakes; two-plate case, on axis and off; and single plate case.
These wakes agree well with numerical simulations for LCLS-type beam parameters, in spite of the fact that the corrugation perturbation is often not small. They also agree quite well with measurements performed using the RadiaBeam/SLAC dechirper at the LCLS.
More comparisons with numerical simulations can be found in \cite{Bane, single_plate_confirm}, with measurements in \cite{MarcG, Zemella}.
\section{Introduction}
The last passage percolation model (LPP) is one of the most well studied models in the Kardar-Parisi-Zhang (KPZ) universality class of stochastic growth models. In this model, to each site $(i,j)\in\Z^2$, one associates an independent random variable $\omega_{i,j}$ exponentially distributed with parameter one. In the simplest case, the point-to-point LPP model, for a given point $(m,n)$ in the first quadrant, one defines the last passage time as
\begin{equation}\label{eq1}
G(m,n)=\max_{\pi:(0,0)\to (m,n)} \sum_{(i,j)\in\pi} \omega_{i,j},
\end{equation}
where the maximum is taken over all up-right paths, that is, paths whose incremental steps are either $(1,0)$ or $(0,1)$. Consider the spatial direction $x$ to be $(1,-1)$ and the time direction $t$ to be $(1,1)$. Then one defines a height function $h(x,t=N)=G(N+x,N-x)$, see~\cite{Jo03b} and also~\cite{PS00,PS02} for a continuous analogue related to the Hammersley process~\cite{Ham72}. The height function has been studied extensively: at time $N$, it has fluctuations of order $N^{1/3}$ and non-trivial correlation over distance $N^{2/3}$, which are the KPZ scaling exponents~\cite{KPZ86,KMH92,BKS85}. Furthermore, both the one-point distributions~\cite{BR99b,BR99,PS00,BDJ99,Jo00b,CFS16} as well as the limiting processes~\cite{Jo03b,PS02,BFPS06,BFS07b,BFP09} are known for a few initial conditions (or geometries in LPP framework). Finally, the correlations in time of the interface are non-trivial over macroscopic distances~\cite{Fer08,CFP10b} and they have been recently partially studied~\cite{Jo18,BG18,JR19,BL17,FO18}.
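Because up-right paths reaching $(m,n)$ must pass through $(m-1,n)$ or $(m,n-1)$, the last passage time satisfies the recursion $G(m,n)=\omega_{m,n}+\max(G(m-1,n),G(m,n-1))$ and can be computed by dynamic programming. The following minimal sketch (ours, with illustrative parameters) samples the exponential LPP model and checks the law-of-large-numbers behaviour $G(N,N)/N\to 4$:

```python
import numpy as np

def last_passage(omega):
    """Last passage times G(i, j) for all sites, via the up-right recursion."""
    m, n = omega.shape
    G = np.zeros((m, n))
    for i in range(m):
        for j in range(n):
            best = 0.0
            if i > 0:
                best = G[i - 1, j]
            if j > 0:
                best = max(best, G[i, j - 1])
            G[i, j] = omega[i, j] + best
    return G

rng = np.random.default_rng(0)
N = 200
omega = rng.exponential(1.0, size=(N, N))  # i.i.d. Exp(1) weights
G = last_passage(omega)
print(G[-1, -1] / N)  # close to 4, with O(N^{-2/3}) corrections
```

The $N^{1/3}$ fluctuations of $G(N,N)$ around $4N$ (Tracy--Widom GUE) can be seen by repeating the sample over many seeds.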
Another very interesting but less studied aspect of models in the KPZ universality class is the geometrical properties of the geodesics (also known as maximizers). For the LPP model, geodesics are the paths achieving the maximum in \eqref{eq1}. In the case of the exponential random LPP described above, for any end-point $(m,n)$, there is a unique geodesic. Geodesics follow characteristic directions and if the end-point is at distance $\cO(N)$ from the origin, then it has spatial fluctuations of order $\cO(N^{2/3})$ with respect to the line joining the origin with the end-point~\cite{Jo00,BSS14}.
Consider two or more end-points. To each of the end-points there is one geodesic from $(0,0)$ and thus the set of geodesics as seen from the end-points in the direction of the origin, have a non-trivial coalescing structure. Some recent studies of this structure in LPP and related models can be found in~\cite{BBS20,BGHH20,Ham16,Pim16,BSS17,Ham17}. One might expect that on a large scale the coalescing structure is universal and thus not depending on the details of the chosen random variables defining the LPP models (provided, of course, that the model is still in the KPZ class, which rules out, for instance, heavy tailed random variable).
The fact that the height function decorrelates over macroscopic times is reflected in the geometrical behaviour of the geodesics. For instance, taking two end-points at distance $\cO(N^{2/3})$ of each other, the coalescence point of the two geodesics will be at distance of order $\cO(N)$ from the end-points~\cite{FS03b} and have a non-trivial distribution over the full macroscopic scale, as already noticed in some numerical studies in~\cite{FerPhD}. More refined recent results are also available~\cite{SS19,Zha19}.
In the study of the covariance of the time-time correlations~\cite{FO18} it was proven that taking one end-point as $(N,N)$ and the second $(\tau N,\tau N)$, then as $\tau\to 1$, the first order correction to the covariance of the LPP is $\cO((1-\tau)^{2/3}N^{2/3})$ and is completely independent of the geometry of the LPP, i.e., it is the same whether one considers the point-to-point LPP as in \eqref{eq1} or the line-to-point LPP, for which the geodesics start from a point on the antidiagonal crossing the origin. This suggests that the coalescing structure of the end-points in $\{(N+k,N-k),|k|\leq \delta N^{2/3}\}$, for a small $\delta>0$, should be independent of the LPP geometry over a time-span $o(N)$ from the end-points. In particular, the coalescing structure should be locally the same as the one from the stationary model, introduced in~\cite{BCS06}. In~\cite{BBS20} a result in this direction has been proven. Among other results, they showed that the tree of point-to-point geodesics starting from every vertex in a box of side length $\delta N^{2/3}$ going to a point at distance $N$ agrees inside the box with the tree of stationary geodesics.
The goal of this work is to improve on previous results in the following points:
\begin{enumerate}
\item In the case of point-to-point LPP, we extend previous results by showing (Theorem~\ref{thm:coal}) that the coalescence with the stationary geodesics holds with high probability for any geodesic starting in a large box around the origin and terminating in a cylinder whose width is of order $N^{2/3}$ and whose length is of order $N$ (see Figure~\ref{Fig:boxes}). In other words, we obtain the correct dimensions of the cylinder around the point $(N,N)$.
\item In the case of point-to-point LPP, we improve the lower bound of the coalescence result from exponent $3/8$ to the correct exponent $1/2$ (Theorem~\ref{thm:coal}). In the process of proving it, we provide a simple probabilistic proof (Theorem~\ref{thm:locCylinder}) for the concentration of geodesics around their characteristics with the optimal exponential decay.
\item In the case of point-to-point LPP, we obtain an upper bound on the probability of the coalescence event (Theorem~\ref{thm:LBcoal}) that differs from the lower bound only by a logarithmic factor, i.e., we indeed obtain the correct exponent.
\item In the case of general initial conditions, we obtain (Theorem~\ref{thm:coalGeneralIC}) a lower bound on the probability that the geodesic tree agrees with the stationary one in a cylinder of width $N^{2/3}$ and length $N$. The order of the lower bound depends on the concentration of the exit points of the geodesics around the origin.
\end{enumerate}
Another problem that is closely related to the coalescence of the point-to-point geodesic with the stationary one is the question of coalescence of semi-infinite geodesics. More precisely, consider the probability that two infinite geodesics starting $k^{2/3}$ away from each other coalesce only after $R k$ steps. A lower bound of order $C R^{-c}$ was obtained in~\cite{Pim16}. A matching upper bound, together with the identification of the exponent $c=2/3$, was found in~\cite{BSS17}. The analogous result for point-to-point coalescence was obtained more recently in~\cite{Zha19}. Finally, in~\cite{BBS20} it is proven that the infinite geodesics in fact coalesce with their point-to-point counterparts, recovering the polynomial decay obtained in \cite{Zha19}.
In a second type of coalescence results, one considers the probability that geodesics leaving from two points located at distance of order $N^{2/3}$ from each other and terminating at or around $(N,N)$ coalesce. In the setup of Brownian LPP, one takes $k$ geodesics leaving from a small interval of order $\epsilon N^{2/3}$ and terminating at time $N$ in an interval of the same order. Then the probability that they are disjoint is of order $\epsilon^{(k^2-1)/2}$ with a subpolynomial correction, see~\cite[Theorem 1.1]{Ham17}. It was conjectured there that the lower bound should have the same exponent; for $k=2$ this is proven in \cite[Theorem 2.4]{BGH19}. Furthermore, an upper bound of order $\tau^{2/9}$ on the probability that two geodesics starting from the points $(0,0)$ and $(0,N^{2/3})$ and terminating at $(N,N)$ do not coalesce by time $(1-\tau)N$ is obtained in~\cite[Theorem 2.8]{BBS20}.
Our Theorem~\ref{thm:coal2} gives the exact exponent $1/2$ for the probability that any two geodesics starting from a large box of dimensions of order $N\times N^{2/3}$ and terminating at a common point in a small box of size $N\times N^{2/3}$ coalesce (see Figure~\ref{Fig:boxes} for more accurate dimensions). Theorem~\ref{thm:coal2} can be compared with the rarity of disjoint geodesics considered in \cite{Ham17}, although here the geodesics start from a big box rather than a small one.
What, then, is the reason for the discrepancy between the different exponents (the exponent $3/2$ in the results of \cite{Ham17} and the exponent $1/2$ in this paper)? Clearly, the geometry is different, as in this paper we consider geodesics starting from a large box around the origin, as opposed to a small box of size $\delta N^{2/3}$ in \cite{Ham17}. Let us give a heuristic argument that may settle this discrepancy. Divide the interval $I:=\{(0,i)\}_{0 \leq i \leq N^{2/3}}$ into $\delta^{-1}$ sub-intervals of size $\delta N^{2/3}$. If the event that two geodesics starting from $I$ do not meet by the time they reach the small interval around the point $(N,N)$ is dominated by the event that the two geodesics leave from the same small sub-interval, and if these events decorrelate on the scale of $N^{2/3}$, then by \cite[Theorem 1.1]{Ham17} we have roughly $\delta^{-1}$ decorrelated events, each of probability (up to logarithmic corrections) $\delta^{3/2}$. This would imply that the probability that two geodesics starting from $I$ do not meet by the time they reach a small interval around $(N,N)$ is (up to logarithmic corrections) $\delta^{-1}\delta^{3/2}=\delta^{1/2}$.
Concerning the methods used in this paper, one input we use is a control over the lateral fluctuations of the geodesics in the LPP. In Theorem~\ref{thm:locCylinder} we show that the probability that the geodesic of the point-to-point LPP is not localized within a distance $M N^{2/3}$ of the characteristic line decays like $e^{-c M^3}$, which is the optimal power in the decay. This is proven using the approach of~\cite{BSS14}, see Theorem~\ref{prop1}, once the analogous mid-point estimate is derived, see Theorem~\ref{thm:coal}. The novelty here is a simple and short proof of the latter, using only comparison with stationary models. This probabilistic method is much simpler than previous ones.
To prove Theorem~\ref{thm:coal}, the first step is to show that with high probability the spatial trajectories of both the geodesic of the point-to-point LPP and the one of the stationary model with density $1/2$ are sandwiched between the geodesics of the stationary models with some densities $\rho_+>1/2$ and $\rho_-<1/2$. This reduces the problem to finding bounds on the coalescing probability only for the two geodesics of the stationary models, which is done using the coupling between different stationary models introduced in~\cite{FS18}. The main ingredient in the proof of Theorem~\ref{thm:LBcoal} is to show that, with some positive probability, the geodesics of the stationary models with different densities do not coalesce too early. Here the application of the queueing representation of the coupling in~\cite{FS18} is more delicate than the one needed for the lower bound, as we now have to force the geodesics away from each other.
\paragraph{Outline of the paper.} In Section~\ref{SectResults} we define the model and state the main results. In Section~\ref{SectPreliminaries} we recall some recurrent notations and basic results on stationary LPP. In Section~\ref{SectLocalStat} we first prove Theorem~\ref{thm:locCylinder} on the localization of the point-to-point geodesics and then show that the geodesics can be sandwiched between two versions of the stationary model, see Lemma~\ref{cor:og}. This allows us to prove Theorems~\ref{thm:coal} and~\ref{thm:coalGeneralIC} in Section~\ref{sectLowerBound}. Section~\ref{SectUpperBound} deals with the proofs of Theorems~\ref{thm:LBcoal} and~\ref{thm:coal2}.
\paragraph{Acknowledgments.} The authors are grateful to M\'arton Bal\'azs for initial discussions on the topic that led to our collaboration.
O.\ Busani was supported by the EPSRC EP/R021449/1 Standard Grant of the UK. This study did not involve any underlying data. The work of P.L. Ferrari was partly funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy - GZ 2047/1, projekt-id 390685813 and by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) - Projektnummer 211504053 - SFB 1060.
\section{Main results}\label{SectResults}
Let $\omega=\{\omega_x\}_{x\in \Z^2}$ be i.i.d.\ $\textrm{Exp}(1)$-distributed random weights on the vertices of $\Z^2$. For $o\in \Z^2$, define the last-passage percolation (LPP) process on $o+\Z_{\geq0}^2$ by
\begin{equation}\label{v:G}
G_{o,y}=\max_{x_{\bullet}\,\in\,\Pi_{o,y}}\sum_{k=0}^{\abs{y-o}_1}\omega_{x_k}\quad\text{ for } y\in o+\Z_{\geq 0}^2.
\end{equation}
$\Pi_{o,y}$ is the set of paths $x_{\bullet}=(x_k)_{k=0}^n$ that start at $x_0=o$, end at $x_n=y$ with $n=\abs{y-o}_1$, and have increments $x_{k+1}-x_k\in\{\mathrm{e}_1,\mathrm{e}_2\}$. The a.s.\ unique path $\pi_{o,y}\in \Pi_{o,y}$ that attains the maximum in \eqref{v:G} is the {\it geodesic} from $o$ to $y$.
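For concreteness, the last-passage time \eqref{v:G} can be evaluated by dynamic programming, and the geodesic recovered by backtracking the maximizer. The following is a minimal numerical sketch (in Python; the function names and lattice size are ours, purely for illustration):

```python
import numpy as np

def lpp_times(omega):
    """Last-passage times G_{(0,0),(i,j)} for a matrix of weights omega."""
    n, m = omega.shape
    G = np.zeros((n, m))
    for i in range(n):
        for j in range(m):
            prev = max(G[i - 1, j] if i > 0 else 0.0,
                       G[i, j - 1] if j > 0 else 0.0)
            G[i, j] = omega[i, j] + prev
    return G

def geodesic(G):
    """Backtrack the a.s. unique up-right maximizer from the corner to (0,0)."""
    i, j = G.shape[0] - 1, G.shape[1] - 1
    path = [(i, j)]
    while (i, j) != (0, 0):
        if i == 0:
            j -= 1
        elif j == 0:
            i -= 1
        elif G[i - 1, j] >= G[i, j - 1]:  # ties have probability zero
            i -= 1
        else:
            j -= 1
        path.append((i, j))
    return path[::-1]

rng = np.random.default_rng(0)
omega = rng.exponential(1.0, size=(200, 200))
G = lpp_times(omega)
pi = geodesic(G)
# sanity check: the passage time is the weight collected along the geodesic
assert np.isclose(G[-1, -1], sum(omega[i, j] for i, j in pi))
```

The quadratic-time recursion is exactly the growth rule of the associated corner-growth model; the backtracking step uses the a.s.\ uniqueness of the maximizer for continuous weights.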
Let $\cL=\{x\in\Z^2| x_1+x_2=0\}$ be the antidiagonal crossing the origin. Given some random variables (in general not independent) $\{h_0(x)\}_{x\in \cL}$ on $\cL$, independent of $\omega$, define the last-passage time with initial condition $h_0$ by
\begin{equation}\label{eq1.2}
G^{h_0}_{\cL,y}=\max_{x_{\bullet}\,\in\,\Pi_{\cL,y}}\bigg(h_0(x_0)+\sum_{k=1}^{\abs{y-x_0}_1}\omega_{x_k}\bigg)\quad \textrm{for } y>\cL,
\end{equation}
where $y>\cL$ is meant in the sense of the order on the lattice. We also denote by $Z^{h_0}_{{\cL,y}}$ the point $x_0$ from which the geodesic from $\cL$ to $y$ leaves the line $\cL$, and refer to it as the \emph{exit point} of the last-passage percolation with initial condition $h_0$.
One can define stationary models parameterized by a density $\rho\in (0,1)$, both for the LPP on the positive quadrant and for the LPP to the north-east of $\cL$; see Section~\ref{SectStatLPP} for detailed explanations. In these cases we denote the stationary LPP by $G^\rho_{o,y}$ and $G^\rho_{\cL,y}$, respectively.
For $\sigma\in\R_+$ and $0<\tau<1$ we define the cylinder of width $\sigma N^{2/3}$ and length $\tau N$
\begin{equation}
\cC^{\sigma,\tau}=\{i\mathrm{e}_4+j\mathrm{e}_3:(1-\tau) N\leq i \leq N,-\tfrac\sigma2 N^{2/3}\leq j \leq \tfrac\sigma2 N^{2/3}\}.
\end{equation}
Similarly, for $\sigma\in\R_+$ and $0<\tau<1$ we define a set of width $\sigma N^{2/3}$ and length $\tau N$
\begin{equation}\label{R}
\cR^{\sigma,\tau}=\{i\mathrm{e}_4+j\mathrm{e}_3:0 \leq i \leq \tau N,-\tfrac\sigma2 N^{2/3}\leq j \leq \tfrac\sigma2 N^{2/3},|j|<i\}.
\end{equation}
\begin{remark}
Note that the shape of $\cR^{\sigma,\tau}$ in \eqref{R} is somewhat different from the blue cylinder in Figure~\ref{Fig:boxes}. The reason is that in this paper we use exit points with respect to the vertical and horizontal axes, so that the shape defined in \eqref{R} is easier to work with. We stress that similar results can be obtained for a box as in Figure~\ref{Fig:boxes} by using exit points with respect to the antidiagonal $\cL$.
\end{remark}
Due to the correspondence to stochastic growth models in the KPZ universality class, we denote the time direction by $(1,1)$ and the spatial direction by $(1,-1)$. In particular, for any $0<\tau<1$ define the time horizon
\begin{equation}
L_\tau=\{\tau N \mathrm{e}_4+i\mathrm{e}_3:-\infty<i<\infty\}.
\end{equation}
Let $x,y,z\in\Z^2$ be such that $x,y\leq z$. For the geodesics $\pi_{x,z}$ and $\pi_{y,z}$ we define the coalescence point
\begin{equation}
C_p(\pi_{x,z},\pi_{y,z})=\inf\{u\in\Z^2:u\in \pi_{x,z}\cap\pi_{y,z}\},
\end{equation}
where the infimum is with respect to the order $\leq$ on the lattice.
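Since two geodesics with a common terminal point agree beyond their first common point (by a.s.\ uniqueness of geodesics), the infimum above is simply the earliest common point along either path. A toy sketch (the example paths are ours, not from the paper):

```python
def coalescence_point(path1, path2):
    """Infimum (w.r.t. the order <=) of the common points of two up-right
    geodesics sharing their terminal point.  Along an up-right path the
    order <= agrees with the order of traversal, so the infimum is the
    common point with the smallest coordinate sum."""
    common = set(path1) & set(path2)
    return min(common, key=lambda p: p[0] + p[1])

# two up-right paths to the common endpoint (2, 2)
pi1 = [(0, 0), (1, 0), (1, 1), (2, 1), (2, 2)]
pi2 = [(0, 1), (1, 1), (2, 1), (2, 2)]
```

Here `coalescence_point(pi1, pi2)` returns $(1,1)$, after which the two paths coincide, matching the definition of $C_p$.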
\subsubsection*{Upper and lower bounds on the coalescing point}
The first main result of this paper is that, with probability going to $1$ as $\delta\to 0$, the geodesics of the stationary LPP with density $1/2$ ending at any point in the cylinder $\cC^{\delta,\tau}$ are indistinguishable from the geodesics of the point-to-point LPP from the origin, for any $\tau\leq \delta^{3/2}/(\log(\delta^{-1}))^3$.
\begin{theorem}\label{thm:coal} Let $o=(0,0)$.
There exist $C,\delta_0>0$ such that for any $\delta\in (0,\delta_0)$ and $\tau\leq\delta^{3/2}/(\log(\delta^{-1}))^3$,
\begin{equation}
\P\Big(C_p(\pi^{1/2}_{o,x},\pi_{y,x})\leq L_{1-\tau} \quad \forall x\in \cC^{\delta,\tau},y\in \cR^{\frac18\log\delta^{-1},1/4}\Big)\geq 1- C\delta^{1/2}\log(\delta^{-1})
\end{equation}
for all $N$ large enough.
\end{theorem}
As a direct corollary we have that in a cylinder of spatial width $o(N^{2/3})$ and temporal length $o(N)$ around the end-point $(N,N)$, the geodesics are indistinguishable from the stationary ones.
\begin{corollary}\label{cor:coal}
For any $\e>0$,
\begin{equation}
\lim_{N\to\infty}
\P\Big(C_p(\pi^{1/2}_{o,x},\pi_{o,x})\leq L_{1-N^{-\e}} \quad \forall x\in \cC^{N^{-\e},N^{-\e}}\Big) =1.
\end{equation}
\end{corollary}
Theorem~\ref{thm:coal} generalizes to LPP with a large class of initial conditions, namely to LPP from $\cL$ with initial condition $h_0$, like the ones considered in~\cite{CFS16,FO18}.
We make the following assumption on $h_0$. Recall the exit point $Z^{h_0}_{\cL,x}$ mentioned right after \eqref{eq1.2}.
\begin{assumption}\label{ass_IC}
Let $x^1=N\mathrm{e}_4 + \tfrac34 \delta N^{2/3} \mathrm{e}_3$ and $x^2=N\mathrm{e}_4 - \tfrac34 \delta N^{2/3} \mathrm{e}_3$. Assume that
\begin{equation}
\P(Z^{h_0}_{\cL,x^1}\leq \log(\delta^{-1}) N^{2/3})\geq 1-Q(\delta)
\end{equation}
and
\begin{equation}
\P(Z^{h_0}_{\cL,x^2}\geq -\log(\delta^{-1}) N^{2/3})\geq 1-Q(\delta)
\end{equation}
for all $N$ large enough, with a function $Q(\delta)$ satisfying $\lim_{\delta\to 0}Q(\delta)=0$.
\end{assumption}
Under Assumption~\ref{ass_IC} the analogue of Theorem~\ref{thm:coal} (and thus of Corollary~\ref{cor:coal}) holds true.
\begin{theorem}\label{thm:coalGeneralIC}
Under Assumption~\ref{ass_IC}, there exist $C,\delta_0>0$ such that for any $\delta\in (0,\delta_0)$ and $\tau\leq\delta^{3/2}/(\log(\delta^{-1}))^3$,
\begin{equation}
\P\Big(C_p(\pi^{1/2}_{\cL,x},\pi^{h_0}_{\cL,x})\leq L_{1-\tau} \quad \forall x\in \cC^{\delta,\tau}\Big)\geq 1- C\delta^{1/2}\log(\delta^{-1})- Q(\delta)
\end{equation}
for all $N$ large enough.
\end{theorem}
The exponent $1/2$ in Theorem~\ref{thm:coal} is optimal, as our next result shows.
\begin{theorem}\label{thm:LBcoal} Let $o=(0,0)$.
There exist $C,\delta_0>0$ such that for any $\delta\in (0,\delta_0)$ and $\tau\leq\delta^{3/2}/(\log(\delta^{-1}))^3$,
\begin{equation}
\P\Big(C_p(\pi^{1/2}_{o,x},\pi_{y,x})\leq L_{1-\tau} \quad \forall x\in \cC^{\delta,\tau},y\in \cR^{(\frac18\log\delta^{-1}),\tau}\Big)\leq 1- C\delta^{1/2}
\end{equation}
for all $N$ large enough.
\end{theorem}
The following result is closely related to Theorems~\ref{thm:coal} and~\ref{thm:LBcoal}; it considers the question of coalescence of point-to-point geodesics.
\begin{theorem}\label{thm:coal2}
There exist $C,\delta_0>0$ such that for any $\delta\in (0,\delta_0)$ and $\tau\leq\delta^{3/2}/(\log(\delta^{-1}))^3$,
\begin{equation}
1-C\delta^{1/2}\log(\delta^{-1})\leq \P\Big(C_p(\pi_{w,x},\pi_{y,x})\leq L_{1-\tau} \quad \forall x\in \cC^{\delta,\tau},w,y\in \cR^{(\frac18\log\delta^{-1}),\tau}\Big)\leq 1- C\delta^{1/2}
\end{equation}
for all $N$ large enough.
\end{theorem}
\subsubsection*{Cubic decay of localization}
In order to prove the main theorems, we will need some control on the spatial fluctuations of the geodesics for the point-to-point problem. As this estimate is of independent interest, we state it below as Theorem~\ref{thm:locCylinder}. For an up-right path $\gamma$ we denote
\begin{equation}
\begin{aligned}
\Gamma^u_k(\gamma)&=\max\{l:(k,l)\in\gamma\},\\
\Gamma^l_k(\gamma)&=\min\{l:(k,l)\in\gamma\}.
\end{aligned}
\end{equation}
When $\gamma$ is a geodesic associated with a direction, we denote the direction by $\xi=(\xi_1,\xi_2)$ with $\xi_1+\xi_2=1$, and we set
\begin{equation}
\Gamma_k(\gamma)=\max\{|\Gamma^u_k(\gamma)-\tfrac{\xi_2}{\xi_1} k|,|\Gamma^l_k(\gamma)-\tfrac{\xi_2}{\xi_1} k|\}.
\end{equation}
\begin{theorem}\label{thm:locCylinder}
Let $\e\in (0,1]$. Then there exist $N_0(\e)$ and $c_1(\e)$ such that for $\xi$ satisfying $\e\leq \xi_2/\xi_1\leq 1/\e$,
\begin{equation}
\P\big(\Gamma_{k}(\pi_{o,\xi N})>M(\tau N)^{2/3}\textrm{ for some }k\in[0,\tau \xi_1 N] \big)\leq e^{-c_1M^3}
\end{equation}
for all $\tau N\geq N_0$ and all $M\leq (\tau N)^{1/3}/\log(N)$.
\end{theorem}
A statement similar to Theorem~\ref{thm:locCylinder}, with a Gaussian bound, is Proposition~2.1 of~\cite{BG18}. The authors employed Theorems~10.1 and~10.5 of~\cite{BSS14}. Potentially their argument could be improved to obtain a cubic decay using the bounds from random matrices of~\cite{LR10}, but we did not verify this. Instead, we provide a short and self-contained proof of the localization result using only comparison with stationarity.
\begin{remark}\label{remptptlocal}
Theorem~\ref{thm:locCylinder} states the optimal localization scale for small $\tau$. By symmetry of the point-to-point problem, the same statement holds with $\tau$ replaced by $1-\tau$, and gives the optimal localization scale for $\tau$ close to $1$.
\end{remark}
\subsubsection*{A family of random initial conditions}
Now let us consider the family of initial conditions interpolating between the flat initial condition (i.e., point-to-line LPP) and the stationary initial condition, for which the time-time covariance was studied in~\cite{FO18}. For $\sigma\geq 0$, let us define
\begin{equation}\label{eq2.10}
h_0(k,-k)=\sigma \times \left\{
\begin{array}{ll}
\sum_{\ell=1}^k(X_\ell-Y_\ell),&\textrm{ for }k\geq 1,\\
0,&\textrm{ for }k=0,\\
-\sum_{\ell=k+1}^0 (X_\ell-Y_\ell),&\textrm{ for }k\leq -1.
\end{array}\right.
\end{equation}
where $\{X_k,Y_k\}_{k\in\Z}$ are independent random variables with $X_k,Y_k\sim\textrm{Exp}(1/2)$. For $\sigma=0$ this corresponds to the point-to-line LPP, while for $\sigma=1$ it is the stationary case with density $1/2$.
\begin{proposition}
For LPP with initial condition \eqref{eq2.10}, Assumption~\ref{ass_IC} holds with
\begin{equation}
Q(\delta)= C e^{-c (\log(\delta^{-1}))^3}
\end{equation}
for some constants $C,c>0$.
\end{proposition}
\begin{proof}
The estimates leading to the proofs are all contained in the proof of Lemma~5.2 of~\cite{FO18}, see part \emph{(b) Random initial conditions}. Replacing in that proof $\tau=1$, $M=\tilde M = 2^{-2/3} \tfrac34 \delta$ and $\alpha=2^{-2/3} \log(\delta^{-1})$ we obtain $Q(\delta)\leq \min\{C e^{-c (\alpha-M)^3},C e^{-c (\alpha-M)^4/\alpha}\}$. As $\alpha-M\simeq \alpha$ for small $\delta$, the result follows.
\end{proof}
\begin{figure}
\begin{center}
\begin{tikzpicture}[scale=0.65, every node/.style={transform shape}]
\def\s{1.2}
\draw [color=red,fill=red!20] (10,10) -- (10.5,9.5) -- (7.5,6.5)--(6.5,7.5) --(9.5,10.5) --(10,10) ;
\draw [color=blue,fill=blue!20] (0,0) -- (-1.5,1.5) -- (2,5)--(5,2) --(1.5,-1.5) -- (0,0) ;
\draw[red,line width=2pt] plot [smooth,tension=0.8] coordinates{(-1,1) (0,2) (2,3.6) (3,5) (4,5.5) (5,6) (6.2,6.7) (8,8) (9,9.2)};
\draw[red,line width=2pt] plot [smooth,tension=0.8] coordinates{(-1,1) (0,2) (2,3.6) (3,5) (4,5.5) (5,6) (6.2,6.5) (8,7.5) (9,8.5)} ;
\draw[red,line width=2pt] plot [smooth,tension=0.8] coordinates{(-1,1) (0,2) (2,3.6) (3,5) (4,5.5) (5,6) (6.2,6.7) (8,8.5) (8.7,9.6)};
\draw[red,line width=2pt] plot [smooth,tension=0.8] coordinates{ (8,8) (9,8.7) (9.5,9.2)};
\draw[line width=0.5pt] plot [smooth,tension=0.8] coordinates{(2,2) (3,4) (4,5) (5,6) (6.2,6.7) (8,8) (9,9.2)};
\draw[line width=0.5pt] plot [smooth,tension=0.8] coordinates{(2,2) (3,4) (4,5) (5,6) (6.2,6.5) (8,7.5) (9,8.5)} ;
\draw[line width=0.5pt] plot [smooth,tension=0.8] coordinates{(2,2) (3,4) (4,5) (5,6) (6.2,6.7) (8,8.5) (8.7,9.6)};
\draw[line width=0.5pt] plot [smooth,tension=0.8] coordinates{ (8,8) (9,8.7) (9.5,9.2)};
\draw [<->](9.8,10.8)--(10.8,9.8);
\node [scale=\s,rotate=-45] at (10.6,10.6) {$\delta N^{2/3}$};
\draw [<->](6.3,7.7)--(9.3,10.7);
\node [scale=\s,rotate=45] at (7.5,9.5) {$\delta^{3/2}\log(\delta^{-1}) N$};
\draw[<->] (2.2,5.2) -- (5.2,2.2);
\node [scale=\s,rotate=-45] at (4.7,3.5) {$\log(\delta^{-1}) N^{2/3}$};
\draw[<->] (-1.8,1.8) -- (1.7,5.3);
\node [scale=\s,rotate=45] at (-0.7,3.7) {$N/4$};
\draw[<->] (1.7,-1.7) -- (11.7,8.3);
\node [scale=\s,rotate=45] at (6.5,2.5) {$N$};
\draw [<->](0,10)--(0,0) -- (10,0);
\end{tikzpicture}
\caption{Illustration of Theorems~\ref{thm:coal} and~\ref{thm:LBcoal}. With high probability, the geodesic tree (black curves) consisting of all geodesics starting from a fixed point in the blue cylinder and terminating at any point in the red cylinder agrees, inside the red cylinder, with the stationary tree (red curves) of density $1/2$.}\label{Fig:boxes}
\end{center}
\end{figure}
\section{Preliminaries}\label{SectPreliminaries}
\subsection{Some general notation}
We mention here some of the notations which will be used throughout the paper. We denote
$\Z_{\ge0}=\{0,1,2,3, \dotsc\}$ and $\Z_{>0}=\{1,2,3,\dotsc\}$. We use four standard vectors in $\R^2$, namely $\mathrm{e}_1=(1,0)$, $\mathrm{e}_2=(0,1)$, $\mathrm{e}_3=(1,-1)$, and $\mathrm{e}_4=(1,1)$. The direction $\mathrm{e}_3$ represents the \emph{spatial direction}, while $\mathrm{e}_4$ represents the \emph{temporal direction}. Furthermore, for a point $x=(x_1,x_2)\in\R^2$ the $\ell^1$-norm is $\abs{x}_1=\abs{x_1} + \abs{x_2}$. We also use the partial ordering on $\R^2$: for $x=(x_1,x_2)\in\R^2$ and $y=(y_1,y_2)\in\R^2$, we write $x\le y$ if $x_1\le y_1$ and $x_2\le y_2$. Given two points $x,y\in\Z^2$ with $x\leq y$, we define the box $[x,y]=\{z\in\Z^2 | x\leq z\leq y\}$. For $u\in\Z^2$ we denote $\Z^2_{\geq u}=\{x:x\geq u\}$.
Finally, for $\lambda>0$, $X\sim\mathrm{Exp}(\lambda)$ denotes a random variable $X$ with exponential distribution of rate $\lambda$, in other words $\P(X>t)=e^{-\lambda t}$ for $t\ge 0$, so that the mean is $\E(X)=\lambda^{-1}$ and the variance is $\Var(X)=\lambda^{-2}$. To lighten notation, we do not write integer parts explicitly, as our results are insensitive to shifting points by order $1$. For instance, for $\xi=(\xi_1,\xi_2)\in\R^2$, $\xi N$ means $(\lfloor \xi_1 N\rfloor,\lfloor \xi_2 N\rfloor)$.
\subsection{Ordering of paths}
We construct two partial orders on directed paths in $\Z^2$.
\begin{enumerate}
\item[$\leq$:] For $x,y\in \Z^2$ we write $x \leq y$ if $y$ is above and to the right of $x$, i.e.
\begin{equation}
x_1 \leq y_1 \quad \text{and} \quad x_2\leq y_2.
\end{equation}
We also write $x < y$ if
\begin{equation}
x \leq y \quad \text{and} \quad x\neq y.
\end{equation}
An up-right path is a (finite or infinite) sequence $\cY=(y_k)_{k}$ in $\Z^2$ such that $y_k-y_{k-1}\in\{\mathrm{e}_1,\mathrm{e}_2\}$ for all $k$. Let $\cU\cR$ be the set of up-right paths in $\Z^2$.
If $A,B\subset \Z^2$, we write $A \leq B$ if
\begin{equation}\label{og3}
x \leq y \quad \forall x\in A\cap\cY,y\in B\cap\cY \quad \forall\cY\in \cU\cR,
\end{equation}
where we take the inequality to be vacuously true if one of the intersections in \eqref{og3} is empty.
\item[$\preceq$:] For $x,y\in \Z^2$ we write $x \preceq y$ if $y$ is below and to the right of $x$, i.e.
\begin{equation}
x_1 \leq y_1 \quad \text{and} \quad x_2\geq y_2.
\end{equation}
We also write $x\prec y$ if
\begin{equation}
x \preceq y \quad \text{and} \quad x\neq y.
\end{equation}
A down-right path is a sequence $\cY=(y_k)_{k\in\Z}$ in $\Z^2$ such that $y_k-y_{k-1}\in\{\mathrm{e}_1,-\mathrm{e}_2\}$ for all $k\in\Z$. Let $\cD\cR$ be the set of infinite down-right paths in $\Z^2$.
If $A,B\subset \Z^2$, we write $A \preceq B$ if
\begin{equation}\label{og1}
x \preceq y \quad \forall x\in A\cap\cY,y\in B\cap\cY \quad \forall\cY\in \cD\cR,
\end{equation}
where we take the inequality to be vacuously true if one of the intersections in \eqref{og1} is empty (see Figure~\ref{fig:ogs}).
\end{enumerate}
\begin{figure}[t]
\begin{center}
\begin{tikzpicture}[scale=0.65, every node/.style={transform shape}]
\def\s{1.33}
\draw [thick] (0,5.5) -- (0,4) -- (2,4) -- (2,2) -- (6,2) -- (6,1) -- (8,1) -- (8,0) -- (10,0);
\node [scale=\s,above] at (9,0) {$\mathcal{Y}$};
\draw [thin, color=blue] (0,0) -- (0,0.25)-- (0.25,0.25) -- (0.75,0.25) -- (0.75,0.5) -- (2,0.5) -- (2,0.75) -- (2.5,0.75) -- (2.5,1.25) -- (3,1.25) -- (3,1.5) -- (3.5,1.5) -- (3.75,1.5) -- (3.75,1.75) -- (4,1.75) -- (4,2.25) -- (4.5,2.25) -- (4.5,3) -- (5,3) -- (5,3.25) -- (5.75,3.25) -- (5.75,3.75) -- (6.25,3.75) -- (6.25,4.5) -- (6.5,4.5) -- (6.5,4.75) -- (7.25,4.75) -- (7.25,5.25);
\node [scale=\s,right] at (7.25,5) {$\gamma_1$};
\fill (4,2) circle (3pt);
\node [scale=\s,above] at (4,2.25) {$x$};
\draw [thin, color=blue] (2,0) -- (2.5,0)-- (2.5,0.25) -- (2.75,0.25) -- (2.75,0.5) -- (3.5,0.5) -- (3.5,0.75) -- (4,0.75) -- (4,1) -- (4.5,1) -- (4.5,1.25) -- (5.25,1.25) -- (5.25,1.5) -- (6.75,1.5) -- (6.75,2) -- (7,2) -- (7,2.25) -- (7.5,2.25) -- (7.5,2.75) -- (8,2.75) -- (8,3) -- (8.5,3) -- (8.5,3.25) -- (9,3.25);
\node [scale=\s,above] at (9,3.25) {$\gamma_2$};
\fill (6,1.5) circle (3pt);
\node [scale=\s,right] at (6,1.75) {$y$};
\end{tikzpicture}
\caption{The two geodesics $\gamma_1$ and $\gamma_2$ are ordered, i.e., $\gamma_1\prec \gamma_2$: for any down-right path $\cY$ in $\Z^2$, the points $x=\cY\cap \gamma_1$ and $y=\cY\cap \gamma_2$ are ordered, i.e., $x\prec y$.}
\label{fig:ogs}
\end{center}
\end{figure}
\subsection{Stationary LPP}\label{SectStatLPP}
Stationary LPP on $\Z_{\geq 0}^2$ has been introduced in~\cite{PS01} by adding boundary terms on the $\mathrm{e}_1$ and $\mathrm{e}_2$ axis. In~\cite{BCS06} it was shown that it can be set up by using more general boundary domains. In this paper we are going to use two of them.
\paragraph{Boundary weights on the axis.}
For a base point $o=(o_1,o_2)\in \Z^2$ and a parameter value $\rho\in(0,1)$ we introduce the stationary last-passage percolation process $G^\rho_{o,\bullet}$ on $o+\Z_{\ge0}^2$. This process has boundary conditions given by two independent sequences
\begin{equation}\label{IJ}
\{I^\rho_{o+i\mathrm{e}_1}\}_{i=1}^{\infty} \quad\text{and}\quad
\{J^\rho_{o+j\mathrm{e}_2}\}_{j=1}^{\infty}
\end{equation}
of i.i.d.\ random variables with $I^\rho_{o+\mathrm{e}_1}\sim\textrm{Exp}(1-\rho)$ and $J^\rho_{o+\mathrm{e}_2}\sim\textrm{Exp}(\rho)$.
Put $G^\rho_{o,o}=0$ and on the boundaries
\begin{equation}\label{Gr1} G^\rho_{o,\,o+\,k\mathrm{e}_1}=\sum_{i=1}^k I_{o+i\mathrm{e}_1}
\quad\text{and}\quad
G^\rho_{o,\,o+\,l\mathrm{e}_2}= \sum_{j=1}^l J_{o+j\mathrm{e}_2}.
\end{equation}
Then in the bulk
for $x=(x_1,x_2)\in o+ \Z_{>0}^2$,
\begin{equation}\label{Gr2}
G^\rho_{o,\,x}= \max_{1\le k\le x_1-o_1} \; \Bigl\{ \;\sum_{i=1}^k I_{o+i\mathrm{e}_1} + G_{o+k\mathrm{e}_1+\mathrm{e}_2, \,x} \Bigr\}
\bigvee
\max_{1\le \ell\le x_2-o_2}\; \Bigl\{ \;\sum_{j=1}^\ell J_{o+j\mathrm{e}_2} + G_{o+\ell \mathrm{e}_2+\mathrm{e}_1, \,x} \Bigr\} .
\end{equation}
\paragraph{Boundary weights on antidiagonal.}
The stationary model with density $\rho$ can be realized by putting boundary weights on $\cL$ as follows. Let $\{X_k,k\in\Z\}$ and $\{Y_k,k\in\Z\}$ be independent random variables with $X_k\sim\textrm{Exp}(1-\rho)$ and $Y_k\sim\textrm{Exp}(\rho)$. Then, define
\begin{equation}
h_0(k,-k)=\left\{
\begin{array}{ll}
\sum_{\ell=1}^k(X_\ell-Y_\ell),&\textrm{ for }k\geq 1,\\
0,&\textrm{ for }k=0,\\
-\sum_{\ell=k+1}^0 (X_\ell-Y_\ell),&\textrm{ for }k\leq -1.
\end{array}\right.
\end{equation}
Then the LPP defined by \eqref{eq1.2} with initial condition $h_0$ is stationary, that is, the increments $G^{h_0}_{\cL,x+\mathrm{e_1}}-G^{h_0}_{\cL,x}\sim\textrm{Exp}(1-\rho)$ as well as $G^{h_0}_{\cL,x+\mathrm{e_2}}-G^{h_0}_{\cL,x}\sim\textrm{Exp}(\rho)$ for all $x>\cL$.
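The antidiagonal boundary condition above can be sampled directly from its increments $h_0(k)-h_0(k-1)=X_k-Y_k$. A small sketch (names are ours; this only illustrates the definition, not any statement of the paper):

```python
import numpy as np

def sample_h0(rho, K, rng):
    """Boundary values h0(k,-k), |k| <= K, for the stationary model of
    density rho, built from the increments h0(k) - h0(k-1) = X_k - Y_k."""
    X = {k: rng.exponential(1.0 / (1.0 - rho)) for k in range(-K + 1, K + 1)}
    Y = {k: rng.exponential(1.0 / rho) for k in range(-K + 1, K + 1)}
    h0 = {0: 0.0}
    for k in range(1, K + 1):        # k >= 1: partial sums of X - Y
        h0[k] = h0[k - 1] + X[k] - Y[k]
    for k in range(0, -K, -1):       # k <= -1: negative partial sums
        h0[k - 1] = h0[k] - (X[k] - Y[k])
    return h0

rng = np.random.default_rng(1)
h0 = sample_h0(0.5, 1000, rng)
```

At $\rho=1/2$ the increments have mean $1/(1-\rho)-1/\rho=0$, so $h_0$ is a mean-zero random walk, consistent with the family \eqref{eq2.10} at $\sigma=1$.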
Next we define the \emph{exit points} of geodesics; these will play an important role in our analysis.
\begin{definition}[Exit points]\label{DefExitPoints}$ $ \\
(a) For a point $p\in o+\Z_{>0}^2$, let $Z^\rho_{o,p}$ be the signed exit point of the geodesic $\pi^\rho_{o,p}$ of $G^\rho_{o,p}$ from the west and south boundaries of $o+\Z_{>0}^2$. More precisely,
\begin{equation}\label{exit2}
Z^\rho_{o,p}=
\begin{cases}
\argmax{k} \bigl\{ \,\sum_{i=1}^k I_{o+i\mathrm{e}_1} + G_{o+k\mathrm{e}_1+\mathrm{e}_2, \,p} \bigr\} &\textrm{if } \pi^\rho_{o,p}\cap \{o+\mathrm{e}_1\}\neq\emptyset,\\
-\argmax{\ell}\bigl\{ \;\sum_{j=1}^\ell J_{o+j\mathrm{e}_2} + G_{o+\ell \mathrm{e}_2+\mathrm{e}_1, \,p} \bigr\} &\textrm{if } \pi^\rho_{o,p}\cap \{o+\mathrm{e}_2\}\neq\emptyset.
\end{cases}
\end{equation}
(b) For a point $p>\cL$, we denote by $Z^{h_0}_{\cL,p}\in\Z$ the exit point of the LPP from $\cL$ with initial condition $h_0$; that is, the starting point of the geodesic from $\cL$ to $p$ is given by $(Z^{h_0}_{\cL,p},-Z^{h_0}_{\cL,p})$. In the case of the stationary model with parameter $\rho$, the exit point is denoted by $Z^{\rho}_{\cL,p}$.
\end{definition}
Note that the value $G^\rho_{o,x}$ can equivalently be determined from the boundary conditions \eqref{Gr1} and the recursion
\begin{equation}\label{recu}
G^\rho_{o,x}=\omega_x+G^\rho_{o,x-\mathrm{e}_1}\vee G^\rho_{o,x-\mathrm{e}_2}.
\end{equation}
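The boundary sums \eqref{Gr1} together with the recursion \eqref{recu} give a direct way to simulate the stationary model. A hedged numerical sketch (function names and sizes are ours):

```python
import numpy as np

def stationary_lpp(rho, m, n, rng):
    """Stationary LPP on [0,m]x[0,n]: boundary sums on the axes,
    bulk values via G(x) = omega(x) + max(G(x-e1), G(x-e2))."""
    I = rng.exponential(1.0 / (1.0 - rho), size=m + 1)  # I^rho ~ Exp(1-rho)
    J = rng.exponential(1.0 / rho, size=n + 1)          # J^rho ~ Exp(rho)
    omega = rng.exponential(1.0, size=(m + 1, n + 1))   # bulk weights ~ Exp(1)
    G = np.zeros((m + 1, n + 1))
    G[1:, 0] = np.cumsum(I[1:])   # boundary values on the e1-axis
    G[0, 1:] = np.cumsum(J[1:])   # boundary values on the e2-axis
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            G[i, j] = omega[i, j] + max(G[i - 1, j], G[i, j - 1])
    return G

rng = np.random.default_rng(2)
G = stationary_lpp(0.5, 50, 50, rng)
```

Stationarity gives $\E G^\rho_{o,(m,n)} = m/(1-\rho)+n/\rho$ exactly, since the increments in each axis direction keep the boundary rates; this is a convenient sanity check for the simulation.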
\subsection{Backward LPP}
Next we consider LPP maximizing over down-left paths. For $y\leq o$, define
\begin{equation}\label{v:Gr}
\widehat{G}_{o,y}=G_{y,o},
\end{equation}
and let the associated geodesic be denoted by $\hat{\pi}_{o,y}$. For each $o=(o_1,o_2)\in\Z^2$ and a parameter value $\rho\in(0,1)$ define a stationary last-passage percolation process $\widehat{G}^\rho$ on $o+\Z^2_{\le0}$, with boundary variables on the north and east boundaries, in the following way. Let
\begin{equation}\label{IJ hat}
\{\hat{I}^\rho_{o-i\mathrm{e}_1}\}_{i=1}^{\infty}
\quad\text{and}\quad
\{\hat{J}^\rho_{o-j\mathrm{e}_2}\}_{j=1}^{\infty}
\end{equation}
be two independent sequences of i.i.d.\ random variables with marginal distributions $\hat{I}^\rho_{o-i\mathrm{e}_1}\sim\textrm{Exp}(1-\rho)$ and $\hat{J}^\rho_{o-j\mathrm{e}_2}\sim\textrm{Exp}(\rho)$. The boundary variables in \eqref{IJ} and those in \eqref{IJ hat} are taken independent of each other. Put $\widehat{G}^\rho_{o,\,o}=0$ and on the boundaries
\begin{equation}\label{Gr11} \widehat{G}^\rho_{o,\,o-k\mathrm{e}_1}=\sum_{i=1}^k \hat{I}_{o-i\mathrm{e}_1}
\quad\text{and}\quad
\widehat{G}^\rho_{o,\,o-l\mathrm{e}_2}= \sum_{j=1}^l \hat{J}_{o-j\mathrm{e}_2}. \end{equation}
Then in the bulk
for $x=(x_1,x_2)\in o+ \Z_{<0}^2$,
\begin{equation}\label{Gr12}
\widehat{G}^\rho_{o,\,x}= \max_{1\le k\le o_1-x_1} \; \Bigl\{ \;\sum_{i=1}^k \hat{I}_{o-i\mathrm{e}_1} + \widehat{G}_{ \,o-k\mathrm{e}_1-\mathrm{e}_2,x} \Bigr\}
\bigvee
\max_{1\le \ell\le o_2-x_2}\; \Bigl\{ \;\sum_{j=1}^\ell \hat{J}_{o-j\mathrm{e}_2} + \widehat{G}_{ \,o-\ell \mathrm{e}_2-\mathrm{e}_1,x} \Bigr\} .
\end{equation}
For a southwest endpoint $x\in o+\Z_{<0}^2$, let $\widehat{Z}^\rho_{o,x}$ be the signed exit point of the geodesic $\hat{\pi}_{o,x}$ of $\widehat{G}^\rho_{o,x}$ from the north and east boundaries of $o+\Z_{<0}^2$. Precisely,
\begin{equation}\label{exit}
\widehat{Z}^\rho_{o,\,x}=
\begin{cases}
\argmax{k} \bigl\{ \,\sum_{i=1}^k \hat{I}_{o-i\mathrm{e}_1} + \widehat{G}_{\,o-k\mathrm{e}_1-\mathrm{e}_2,x} \bigr\}, &\text{if } \hat{\pi}_{o,x}\cap o-\mathrm{e}_1\neq\emptyset,\\
-\argmax{\ell}\bigl\{ \;\sum_{j=1}^\ell \hat{J}_{o-j\mathrm{e}_2} + \widehat{G}_{\,o-\ell \mathrm{e}_2-\mathrm{e}_1,x} \bigr\}, &\text{if } \hat{\pi}_{o,x}\cap o-\mathrm{e}_2\neq\emptyset.
\end{cases}
\end{equation}
\subsection{Comparison Lemma}\label{SectCompLemma}
We are going to use a comparison between point-to-point LPP and stationary LPP, via a lemma by Cator and Pimentel.
\begin{lemma}\label{Lem:Comparison}
Let $o=(0,0)$ and consider two points $p^1\preceq p^2$.\\
If $Z^\rho_{o,p^1}\geq 0$, then
\begin{equation}
G_{o,p^2}-G_{o,p^1}\leq G^\rho_{o,p^2}-G^\rho_{o,p^1}.
\end{equation}
If $Z^\rho_{o,p^2}\leq 0$, then
\begin{equation}
G_{o,p^2}-G_{o,p^1}\geq G^\rho_{o,p^2}-G^\rho_{o,p^1}.
\end{equation}
\end{lemma}
Clearly, by reversing space, we can use this comparison lemma also for backward LPP.
\begin{lemma}\label{Lem:ComparisonB}
Consider two points $p^1\preceq p^2$ with $p^1,p^2>\cL$.\\
If $Z^\rho_{\cL,p^1}\geq Z^{h_0}_{\cL,p^2}$, then
\begin{equation}
G^{h_0}_{\cL,p^2}-G^{h_0}_{\cL,p^1}\leq G^\rho_{\cL,p^2}-G^\rho_{\cL,p^1}.
\end{equation}
If $Z^\rho_{\cL,p^2}\leq Z^{h_0}_{\cL,p^1}$, then
\begin{equation}
G^{h_0}_{\cL,p^2}-G^{h_0}_{\cL,p^1}\geq G^\rho_{\cL,p^2}-G^\rho_{\cL,p^1}.
\end{equation}
\end{lemma}
For $p^1_2=p^2_2$, Lemma~\ref{Lem:Comparison} is proven as Lemma~1 of~\cite{CP15b}, while Lemma~\ref{Lem:ComparisonB} is Lemma~2.1 of~\cite{Pim18}. The generalization to the case of geodesics starting from $\cL$ (or from any down-right paths) is straightforward, see e.g.\ Lemma~3.5 of~\cite{FGN17}.
\section{Local Stationarity}\label{SectLocalStat}
\subsection{Localization over a time-span $\tau N$.}
In this section we prove Theorem~\ref{thm:locCylinder}.
We shall need the following estimates on the tail of the exit point of the stationary process. To a density $\rho\in (0,1)$ we associate the direction
\begin{equation}
\xi(\rho)=\left(\frac{(1-\rho)^2}{(1-\rho)^2+\rho^2},\frac{\rho^2}{(1-\rho)^2+\rho^2}\right)
\end{equation}
and, vice versa, to each direction $\xi=(\xi_1,\xi_2)$ corresponds a density
\begin{equation}
\rho(\xi)=\frac{\sqrt{\xi_2}}{\sqrt{\xi_1}+\sqrt{\xi_2}}.
\end{equation}
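As a sanity check (not part of the original argument), the two maps are inverse to each other; note that the normalization $\xi_1+\xi_2=1$ is built into the definition of $\xi(\rho)$:

```latex
% Check that \rho(\xi(\rho))=\rho for \rho\in(0,1): the common factor
% (1-\rho)^2+\rho^2 cancels inside the square roots,
\rho(\xi(\rho))
  =\frac{\sqrt{\rho^2}}{\sqrt{(1-\rho)^2}+\sqrt{\rho^2}}
  =\frac{\rho}{(1-\rho)+\rho}
  =\rho .
```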
\begin{lemma}[Theorem 2.5 and Proposition 2.7 of \cite{EJS20}]\label{spc}
Let $\e\in (0,1]$. Then there exist $N_0(\e)$, $c_0(\e)$, $r_0(\e)>0$ such that for every direction $\xi$ with $\e\leq \xi_2/\xi_1\leq 1/\e$, $N\geq N_0$ and $r\geq r_0$:
\begin{align}
\P(|Z^{\nu}_{o,\xi N}|> rN^{2/3})&\leq e^{-c_0r^3},\label{spc1}\\
\P(Z^{\nu}_{o,\xi N-rN^{2/3}\mathrm{e}_3}> 0)&\leq e^{-c_0r^3},\label{spc2}\\
\P(Z^{\nu}_{o,\xi N+rN^{2/3}\mathrm{e}_3}< 0)&\leq e^{-c_0r^3},\label{spc3}
\end{align}
for all densities $\nu$ satisfying $|\nu-\rho(\xi)|\leq N^{-1/3}$.
\end{lemma}
Using Lemma~\ref{spc} one can control the path of a point-to-point geodesic.
\begin{figure}
\begin{center}
\begin{tikzpicture}[scale=0.65, every node/.style={transform shape}]
\def\s{1.2}
\draw (0,0) -- (0,10) -- (10,10)--(10,0)--(0,0);
\draw [dashed](0,0)--(10,10);
\filldraw (4,4) circle (2pt);
\node [scale=\s,left] at (4,4) {$\tau\xi N$};
\filldraw (4,6) circle (2pt);
\node [scale=\s,left] at (4,6) {$p^1$};
\draw [dashed,blue] (0,0)--(4,6);
\node [scale=\s,left,blue] at (2,3.6) {$\zeta^1$};
\filldraw (4,8) circle (2pt);
\node [scale=\s,left] at (4,8) {$p^3$};
\filldraw (4,3) circle (2pt);
\node [scale=\s,left] at (4,3) {$p^2$};
\draw [dashed,blue] (4,3)--(10,10);
\node [scale=\s,blue] at (8,7) {$\zeta^2$};
\end{tikzpicture}
\caption{Illustration of the geometry of the points and characteristics that appear in the proof of Theorem \ref{thm:loc}.}\label{fig:hfig}
\end{center}
\end{figure}
\begin{theorem}\label{thm:loc}
Let $\e\in (0,1]$. Then there exist $N_0(\e)$ and $c_1(\e)$ such that for $\xi$ satisfying $\e\leq \xi_2/\xi_1\leq 1/\e$,
\begin{equation}
\P\big(\Gamma_{ \tau \xi_1 N}(\pi_{o,\xi N})>M(\tau N)^{2/3} \big)\leq e^{-c_1M^3}
\end{equation}
for all $\tau N\geq N_0$ and all $M\leq (\tau N)^{1/3}/\log(N)$.
\end{theorem}
\begin{proof}
We will show in detail that
\begin{equation}
\P\Big(\Gamma^u_{\tau \xi_1 N}(\pi_{o^1,o^2})>\tau \xi_2N +M (\tau N)^{2/3}\Big)\leq e^{-c_1M^3},
\end{equation}
where $o^1=o$ and $o^2=\xi N$. Similarly one proves
\begin{equation}
\P\Big(\Gamma^l_{\tau \xi_1 N}(\pi_{o^1,o^2})<\tau \xi_2N -M (\tau N)^{2/3}\Big)\leq e^{-c_1M^3}.
\end{equation}
Then Theorem~\ref{thm:loc} follows directly from the definition of $\Gamma_{\tau \xi_1 N}(\pi_{o,\xi N})$. Set the points (see Figure \ref{fig:hfig})
\begin{equation}
\begin{aligned}
p^1&=\tau N \xi+\tfrac M4(\tau N)^{2/3}\mathrm{e}_2,\\
p^2&=\tau N \xi-\tfrac M8 \tfrac{1-\tau}{\tau} (\tau N)^{2/3}\mathrm{e}_2,\\
p^3&=\tau N \xi+\tfrac M2(\tau N)^{2/3}\mathrm{e}_2,
\end{aligned}
\end{equation}
and the characteristics associated with $(o^1,p^1)$ and $(o^2,p^2)$
\begin{equation}
\begin{aligned}
\zeta^1&=(\tau \xi_1 N, \tau \xi_2 N + \tfrac M4 (\tau N)^{2/3}),\\
\zeta^2&=((1-\tau) \xi_1 N, (1-\tau) \xi_2 N + \tfrac M8 \tfrac{1-\tau}{\tau} (\tau N)^{2/3}).
\end{aligned}
\end{equation}
The associated densities are
\begin{equation}
\begin{aligned}\label{den}
\rho_1&=\frac{\sqrt{\tau \xi_2 N+\tfrac M4 (\tau N)^{2/3}}}{\sqrt{\tau \xi_1 N}+\sqrt{\tau \xi_2 N+\tfrac M4 (\tau N)^{2/3}}},\\
\rho_2&=\frac{\sqrt{(1-\tau) \xi_2 N +\tfrac M8 \tfrac{1-\tau}{\tau} (\tau N)^{2/3}}}{\sqrt{(1-\tau) \xi_1 N}+\sqrt{(1-\tau) \xi_2 N + \tfrac M8 \tfrac{1-\tau}{\tau} (\tau N)^{2/3}}}.
\end{aligned}
\end{equation}
Note that by \eqref{spc2}--\eqref{spc3} there exists $c_0>0$ such that
\begin{equation}
\begin{aligned}\label{zub}
\P\big(Z^{\rho_1}_{o^1,p^3}>0\big)\leq e^{-c_0M^{3}},\\
\P\big(\hat{Z}^{\rho_2}_{o^2,p^3}<0\big)\leq e^{-c_0M^{3}}.
\end{aligned}
\end{equation}
Define, for $i\geq 0$,
\begin{equation}
\begin{aligned}
J_i&=G_{o^1,p^3+(i+1)\mathrm{e}_2}-G_{o^1,p^3+i\mathrm{e}_2},\\
\widehat{J}_i&=G_{o^2,p^3+\mathrm{e}_1+i\mathrm{e}_2}-G_{o^2,p^3+\mathrm{e}_1+(i+1)\mathrm{e}_2},\\
J^{\rho_1}_i&=G^{\rho_1}_{o^1,p^3+(i+1)\mathrm{e}_2}-G^{\rho_1}_{o^1,p^3+i\mathrm{e}_2},\\
\widehat{J}^{\rho_2}_i&=G^{\rho_2}_{o^2,p^3+\mathrm{e}_1+i\mathrm{e}_2}-G^{\rho_2}_{o^2,p^3+\mathrm{e}_1+(i+1)\mathrm{e}_2}.
\end{aligned}
\end{equation}
Then, by Lemma~\ref{Lem:Comparison}, it follows from \eqref{zub} that with probability at least $1-2e^{-c_0M^{3}}$
\begin{equation}
J_i\leq J_i^{\rho_1} \textrm{ and }\widehat{J}_i^{\rho_2} \leq \widehat{J}_i
\end{equation}
for all $i\geq 0$, and therefore that
\begin{equation}
J_i-\widehat{J}_i\leq J_i^{\rho_1}-\widehat{J}_i^{\rho_2}\label{rwc}
\end{equation}
for all $i\geq 0$. Set $\rho=\rho(\xi)$. Note that for $M\leq (\tau N)^{1/3}/\log(N)$, a series expansion gives
\begin{equation}
\begin{aligned}\label{rhod}
\rho_1&=\rho+\kappa(\rho)\frac M8(\tau N)^{-1/3}+o((\tau N)^{-1/3}),\\
\rho_2&=\rho+\kappa(\rho)\frac M{16}(\tau N)^{-1/3}+o((\tau N)^{-1/3}),\\
\rho_1-\rho_2&=\kappa(\rho)\frac M{16}(\tau N)^{-1/3}+o((\tau N)^{-1/3})>0,
\end{aligned}
\end{equation}
with $\kappa(\rho)=(1-\rho)(1-2\rho(1-\rho))/\rho>0$ for all $\rho\in (0,1)$.
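The first-order terms in \eqref{rhod} follow from a direct differentiation; here is a sketch of the computation for $\rho_1$, using the normalization $\xi_1+\xi_2=1$:

```latex
% Write \rho_1=f(b+\epsilon) with f(b)=\sqrt b/(\sqrt a+\sqrt b),
% a=\tau\xi_1 N, b=\tau\xi_2 N, \epsilon=\tfrac M4(\tau N)^{2/3}. Then
f'(b)=\frac{\sqrt a}{2\sqrt b\,(\sqrt a+\sqrt b)^2},
\qquad
\rho_1=\rho+\epsilon f'(b)+o((\tau N)^{-1/3})
      =\rho+\frac{\sqrt{\xi_1}}{\sqrt{\xi_2}(\sqrt{\xi_1}+\sqrt{\xi_2})^2}
        \,\frac M8 (\tau N)^{-1/3}+o((\tau N)^{-1/3}),
% and, since \xi_1+\xi_2=1 gives 1-2\rho(1-\rho)=(\sqrt{\xi_1}+\sqrt{\xi_2})^{-2},
% the prefactor equals \tfrac{1-\rho}{\rho}(1-2\rho(1-\rho))=\kappa(\rho).
```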
Define
\begin{equation}
S_i=\sum_{k=0}^{i}J_k-\widehat{J}_k\quad \textrm{and}\quad W_i=\sum_{k=0}^{i}J^{\rho_1}_k-\widehat{J}^{\rho_2}_k
\end{equation}
so that by \eqref{rwc}
\begin{equation}
S_i\leq W_i\textrm{ for }i\geq 0.
\end{equation}
Note that
\begin{equation}
\{\Gamma^u_{\tau \xi_1 N}(\pi_{o^1,o^2})>\tau \xi_2 N +M (\tau N)^{2/3}\} \subseteq \bigg\{\sup_{i\geq \tfrac M2(\tau N)^{2/3}} S_i>0\bigg\} \subseteq \bigg\{\sup_{i\geq \tfrac{M}{2} (\tau N)^{2/3}} W_i>0\bigg\}.
\end{equation}
It follows that it is enough to show that there exists $c_1>0$ such that
\begin{equation}\label{ub5}
\P\bigg(\sup_{i\geq \tfrac M2(\tau N)^{2/3}} W_i>0\bigg) \leq e^{-c_1M^{3}}.
\end{equation}
Note that
\begin{equation}\label{ub4}
\begin{aligned}
\P\bigg(\sup_{i\geq \tfrac M2 (\tau N)^{2/3}} W_i>0\bigg)& \leq \P\Big(W_{\tfrac M2 (\tau N)^{2/3}}>-\frac{ \chi(\rho)M^2}{64}(\tau N)^{1/3}\Big)\\ &+\P\bigg(\sup_{i\geq \tfrac M2 (\tau N)^{2/3}} W_i-W_{\tfrac M2 (\tau N)^{2/3}}>\frac{ \chi(\rho)M^2}{64}(\tau N)^{1/3}\bigg)
\end{aligned}
\end{equation}
where $\chi(\rho)=\kappa(\rho)/\rho^2$.
Plugging \eqref{rhod} into Lemma~\ref{lem:crw} gives
\begin{equation}
\begin{aligned}\label{ub3}
\P\bigg(\sup_{i\geq \tfrac M2 (\tau N)^{2/3}} W_i-W_{\tfrac M2 (\tau N)^{2/3}}>\frac{ \chi(\rho)M^2}{64}(\tau N)^{1/3}\bigg)
&\leq \frac{\rho_1}{\rho_2}e^{-(\rho_1-\rho_2)\frac {\chi(\rho)M^2}{64 }(\tau N)^{1/3}}\\
&\leq 2e^{-\chi(\rho) \kappa(\rho) M^3/1024}
\end{aligned}
\end{equation}
for all $\tau N$ large enough.
Next, using the exponential Chebyshev inequality, we show that
\begin{equation}\label{eq3.20}
\P\Big(W_{\tfrac M2(\tau N)^{2/3}}> -\frac{ \chi(\rho)M^2}{64}(\tau N)^{1/3}\Big)\leq 2 e^{-M^3 \chi(\rho)\kappa(\rho)/8192}
\end{equation}
for all $\tau N$ large enough, which completes the proof. Indeed, using
\begin{equation}
\left(\frac{1}{\rho_1}-\frac{1}{\rho_2}\right)\tfrac M2(\tau N)^{2/3} = -\frac{M^2\chi(\rho)}{32}(\tau N)^{1/3}+o((\tau N)^{1/3}),
\end{equation}
we get, using also the independence of the $J$'s and $\widehat J$'s, that
\begin{equation}
\begin{aligned}
\textrm{l.h.s.\ of }\eqref{eq3.20} &= \P\Bigg(\sum_{k=0}^{\tfrac M2(\tau N)^{2/3}}(J_k^{\rho_1}-\rho_1^{-1}-\widehat{J}_k^{\rho_2}+\rho_2^{-1})> \frac{ \chi(\rho)M^2}{64}(\tau N)^{1/3}+o((\tau N)^{1/3})\Bigg)\\
&\leq \inf_{\lambda>0} \frac{\E\Big(e^{\lambda (J_1^{\rho_1}-\rho_1^{-1}-\widehat{J}_1^{\rho_2}+\rho_2^{-1})}\Big)^{\tfrac M2(\tau N)^{2/3}}}{e^{\lambda [\frac{ \chi(\rho)M^2}{64}(\tau N)^{1/3}+o((\tau N)^{1/3})]}}\\
&=\inf_{\mu>0} e^{-M(M\kappa(\rho)-32 \mu)\mu/(64\rho^2)+o(1)} \leq 2 e^{-M^3\kappa(\rho)\chi(\rho)/8192},
\end{aligned}
\end{equation}
for all $\tau N$ large enough, where in the third step we set $\lambda=\mu (\tau N)^{-1/3}$ and performed simple computations.
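For completeness, the computations in the last step can be made explicit. Assuming the standard description of the stationary boundary increments (so that $J_1^{\rho_1}\sim\mathrm{Exp}(\rho_1)$ and $\widehat J_1^{\rho_2}\sim\mathrm{Exp}(\rho_2)$ are independent), one has, for $0<\lambda<\rho_1$:

```latex
% Moment generating function of one centered increment:
\E\Big(e^{\lambda (J_1^{\rho_1}-\rho_1^{-1}-\widehat{J}_1^{\rho_2}+\rho_2^{-1})}\Big)
 =\frac{\rho_1}{\rho_1-\lambda}\,e^{-\lambda/\rho_1}
  \cdot\frac{\rho_2}{\rho_2+\lambda}\,e^{\lambda/\rho_2}
 =e^{\frac{\lambda^2}{2}(\rho_1^{-2}+\rho_2^{-2})+\cO(\lambda^3)}.
% With \lambda=\mu(\tau N)^{-1/3}, raising this to the power \tfrac M2(\tau N)^{2/3}
% contributes M\mu^2/(2\rho^2)+o(1) to the exponent; combined with the denominator
% this yields the quadratic -M(M\kappa(\rho)-32\mu)\mu/(64\rho^2), whose minimum
% over \mu>0 is attained at \mu=M\kappa(\rho)/64 and equals -M^3\kappa(\rho)\chi(\rho)/8192.
```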
\end{proof}
\begin{theorem}\label{prop1}
Let $o=(0,0)$ and $\e\in (0,1]$. Then there exist $N_1(\e)$, $c(\e)$, $C(\e)$ such that for every direction $\xi$ with $\e\leq \xi_2/\xi_1\leq 1/\e$ and every $v\leq N^{1/3}/\log(N)$, for $N>N_1$
\begin{equation}
\P\Big(\max_{k\in[0, \xi_1N]}\Gamma_k(\pi_{o,\xi N})< v N^{2/3}\Big)\geq 1 - Ce^{-c v^3}.
\end{equation}
\end{theorem}
\begin{proof}
The proof follows the approach of \cite{BSS14}, using the pointwise control of the fluctuations of the geodesic around the characteristic from Theorem~\ref{thm:loc}. Let $m=\min\{j: 2^{-j}N \leq N^{1/2}\}$. Choose $u_1<u_2<\ldots$ with $u_1=v/10$ and $u_j-u_{j-1}=u_1 2^{-(j-1)/2}$. We define
\begin{equation}
u(k)=\Gamma^u_k(\pi_{o,\xi N})-\tfrac{\xi_2}{\xi_1} k, \quad k\in [0,\xi_1 N]
\end{equation}
and the following events
\begin{equation}
\begin{aligned}
A_j&=\{u(k 2^{-j} N) \leq u_j N^{2/3}, 1\leq k \leq 2^j-1\},\\
B_{j,k}&=\{u(k 2^{-j} N) > u_j N^{2/3}\}, \quad k=1,\ldots,2^j-1,\\
L&=\{\sup_{x\in[0,1]} |u((k+x) 2^{-m} N)-u(k 2^{-m} N)|\leq \tfrac12 v N^{2/3}, 0\leq k \leq 2^m-1\},\\
G&=\{u(k)\leq v N^{2/3}\textrm{ for all }0\leq k\leq \xi_1N\}.
\end{aligned}
\end{equation}
Notice that $A_j^c=\bigcup_{k=1}^{2^j-1} B_{j,k}$. Also, since $\lim_{j\to\infty} u_j\leq v/2$, we have
\begin{equation}
\bigcup_{j=1}^m\bigcup_{k=1}^{2^j-1} (B_{j,k}\cap A_{j-1})\supseteq \{u(k 2^{-m} N)\geq \tfrac12v N^{2/3}\textrm{ for some }k=1,\ldots,2^m-1\}.
\end{equation}
This implies that
\begin{equation}
G\supseteq \Bigg(\bigcup_{j=1}^m\bigcup_{k=1}^{2^j-1} (B_{j,k}\cap A_{j-1})\Bigg)^c\cap L.
\end{equation}
Thus we have
\begin{equation}\label{eq2}
\P(G^c)\leq \P(L^c)+\sum_{j=1}^m\sum_{k=1}^{2^j-1}\P(B_{j,k}\cap A_{j-1}).
\end{equation}
Since the geodesics have discrete steps, in $n$ time steps a geodesic can wander off by at most $n$ steps from its characteristic. For all $N$ large enough, $N^{1/2}<\tfrac12 v N^{2/3}$ and therefore $\P(L)=1$. Thus we only need to bound $\P(B_{j,k}\cap A_{j-1})$. Since for even $k$ the two events are incompatible, we may restrict to odd $k$.
If $A_{j-1}$ holds, then the geodesic at $t_1=(k-1)2^{-j}N$ and $t_2=(k+1) 2^{-j}N$ satisfies
\begin{equation}
u(t_1)\leq u_{j-1} N^{2/3}\quad \text{ and } \quad u(t_2)\leq u_{j-1} N^{2/3}.
\end{equation}
Consider the point-to-point LPP from $\hat o^1$ to $\hat o^2$ with
\begin{equation}
\hat o^1=(t_1,t_1\tfrac{\xi_2}{\xi_1}+u_{j-1}) \quad \text{and} \quad \hat o^2=(t_2, t_2 \tfrac{\xi_2}{\xi_1}+u_{j-1}).
\end{equation}
Let $\hat u(i)=\Gamma^u_i(\pi_{\hat o^1,\hat o^2})$ for $i\in[t_1,t_2]$. Then, by the order of geodesics
\begin{equation}
u(i)\leq \hat u(i)\textrm{ for } i\in[t_1,t_2],
\end{equation}
so that
\begin{equation}
\{u(i)>u_jN^{2/3}\} \subseteq \{\hat u(i)>u_jN^{2/3}\}\textrm{ for } i\in[t_1,t_2].
\end{equation}
This gives
\begin{equation}
\P(B_{j,k}\cap A_{j-1}) \leq \P\big(\hat u(k 2^{-j}N)>u_jN^{2/3}).
\end{equation}
Since the law of $\hat u$ is the one of a point-to-point LPP over a time distance $t_2-t_1=2^{-j+1}N$, we can apply Theorem~\ref{thm:loc} with $\tau=1/2$, $N$ replaced by $t_2-t_1$, and $M$ satisfying $(u_{j}-u_{j-1}) N^{2/3}=M (\tfrac12(t_2-t_1))^{2/3}$. This gives
\begin{equation}
\P\big(\hat u(k 2^{-j}N)>u_jN^{2/3}) \leq e^{-c_1 (u_1 2^{-(j-1)/2} 2^{2j/3})^3}\leq e^{-c_1 u_1^3 2^{j/2}}.
\end{equation}
Applying this bound in \eqref{eq2} and summing the resulting geometric series, $\sum_{j=1}^m 2^j e^{-c_1 u_1^3 2^{j/2}}\leq C e^{-c v^3}$ with $u_1=v/10$, leads to $\P(G^c)\leq C e^{-c v^3}$ for some constants $C,c>0$.
\end{proof}
Now we have all the ingredients to prove Theorem~\ref{thm:locCylinder}.
\begin{proof}[Proof of Theorem~\ref{thm:locCylinder}]
Theorem~\ref{thm:loc} implies that with probability at least $1-e^{-c_1 M^3/8}$, the geodesic from $o$ to $\xi N$ does not deviate more than $\tfrac12 M (\tau N)^{2/3}$ away from the point $\tau \xi N$. Given this event, by the order of geodesics, the geodesic from $o$ to $\xi N$ is sandwiched between the geodesic from $\tfrac12 M (\tau N)^{2/3} \mathrm{e}_1$ to $\tau \xi N+\tfrac12 M (\tau N)^{2/3} \mathrm{e}_1$ and the one from $-\tfrac12 M(\tau N)^{2/3}\mathrm{e}_1$ to $\tau \xi N-\tfrac12 M (\tau N)^{2/3} \mathrm{e}_1$. By Theorem~\ref{prop1}, the latter two geodesics fluctuate by no more than $\tfrac12 M (\tau N)^{2/3}$, with probability at least $1-C e^{-cM^3/\tau^2}$, which implies the claim.
\end{proof}
\subsection{Localization of the exit point}
In this subsection, we estimate the location of the exit point for densities slightly larger or smaller than $1/2$. This will allow us to sandwich the point-to-point geodesics by the stationary ones. Notice that to apply Lemma~\ref{Lem:Comparison}, it would be enough to require in the event $\cA_1$ (resp.\ $\cA_2$) below that the exit point is positive (resp.\ negative) and bounded by $15 r N^{2/3}$ (resp.\ $-15 r N^{2/3}$), since the exit point for the LPP $G_{o,x}$ is $0$. However, with this slight strengthening (requiring the exit point to be at distance at least $rN^{2/3}$ from the origin), the proof applies to more general initial conditions, provided the exit points of the LPP with initial condition $h_0$ on $\cL$ are localized in $[-r N^{2/3},r N^{2/3}]$ with high probability.
\begin{figure}
\begin{center}
\begin{tikzpicture}[scale=0.65, every node/.style={transform shape}]
\def\s{1.2}
\draw [color=red,fill=red!20] (8,0) -- (12,0) -- (12,10)--(8,10) --(8,0);
\node [scale=\s,above] at (3,10) {$x^2$};
\node [scale=\s,above] at (17,10) {$x^1$};
\draw [dashed] (3,0)--(3,10);
\draw [dashed,red,thick] (4.5,0)--(3,10);
\draw [red,thick] (3.5,0)--(3,10);
\node [scale=\s,red] at (3.8,0.3) {$\xi_+$};
\draw [dashed] (17,0)--(17,10);
\draw [dashed,red,thick] (18.5,0)--(17,10);
\draw [red,thick] (17.5,0)--(17,10);
\node [scale=\s,red] at (17.8,0.3) {$\xi_+$};
\draw [blue,thick] (7.5,10)--(7.5,0);
\node [scale=\s,left,blue] at (7.5,5) {$D^2$};
\draw [blue,thick] (15,10)--(15,0);
\node [scale=\s,left,blue] at (15,5) {$D^1$};
\draw [<->] (3,11)--(10,11);
\node [scale=\s,above] at (6,11) {$as_r N^{2/3}$};
\draw [<->,right] (13.2,9.9)--(13.2,0.1);
\node [scale=\s,right] at (12,5) {$t_r N$};
\draw [<->] (8,-0.3)--(10,-0.3);
\node [scale=\s,below] at (9,-0.3) {$\frac{s_r}4 N^{2/3}$};
\draw [<->] (3,-0.3)--(4.5,-0.3);
\node [scale=\s,below] at (3.5,-0.3) {$8rt_r N^{2/3}$};
\draw [<->] (17,-0.2)--(18.5,-0.2);
\node [scale=\s,below] at (17.8,-0.3) {$8rt_r N^{2/3}$};
\draw [<->] (4.5,-0.3)--(7.5,-0.3);
\node [scale=\s,below] at (6,-0.3) {$2M(t_r N)^{2/3}$};
\draw [<->] (0,-1.4)--(20,-1.4);
\node [scale=\s,below] at (10,-1.6) {$s_r N^{2/3}$};
\draw[black,thick] plot [smooth,tension=0.8] coordinates{(3.8,-1.2)(5.2,0) (5.5,2) (4.7,4) (5.4,6.5) (3,10)};
\draw[black,thick] plot [smooth,tension=0.8] coordinates{(16.5,-1.2)(16.8,0) (15.5,2) (15.7,5) (15.4,6.5) (17,10)};
\node [scale=\s,black] at (5.8,5) {$\pi^{\rho_+}_{o,x^2}$};
\node [scale=\s,black] at (16.3,5) {$\pi^{\rho_+}_{o,x^1}$};
\draw (0,10)--(20,10);
\draw (0,0)--(20,0);
\draw (10,0)--(10,10);
\end{tikzpicture}
\caption{Illustration of the geometry around the end-point $(N,N)$ magnified and rotated by $\pi/4$. Choosing $t_r,s_r$ properly forces the geodesic $\pi^{\rho_+}_{o,x^2}$ to traverse to the left of $D^2$ and $\pi^{\rho_+}_{o,x^1}$ to the right of $D^1$.}\label{fig:dim}
\end{center}
\end{figure}
Fix $r>0$ and let $s_r,t_r>0$ be parameters, to be determined later, representing the space and time scales respectively. Let $0<a<1$. Define the points
\begin{equation}
\begin{aligned}\label{ep}
x^1&=N\mathrm{e}_4 + a s_r N^{2/3} \mathrm{e}_3,\\
x^2&=N\mathrm{e}_4 - a s_r N^{2/3} \mathrm{e}_3.
\end{aligned}
\end{equation}
Define the densities
\begin{equation}
\rho_+ = \tfrac12 + r N^{-1/3},\quad \rho_- = \tfrac12 - r N^{-1/3},
\end{equation}
and, with $o=(0,0)$, the events
and, with $o=(0,0)$, the events
\begin{equation}\label{A}
\begin{aligned}
\cA_1&=\big\{Z^{\rho_+}_{o,x^2}\geq rN^{2/3} ,Z^{\rho_+}_{o,x^1}\leq 15rN^{2/3}\big\},\\
\cA_2&=\big\{Z^{\rho_-}_{o,x^2}\geq -15rN^{2/3} ,Z^{\rho_-}_{o,x^1}\leq -rN^{2/3}\big\},\\
\cA&=\cA_1\cup\cA_2.
\end{aligned}
\end{equation}
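For later use (this expansion enters, implicitly, the computations of $\tilde r$ and $\hat r$ in the proof of Lemma~\ref{lem:sb} below), the direction associated with $\rho_\pm$ expands around the diagonal as follows:

```latex
% Expansion of \xi(\rho) around \rho=1/2: with \rho_\pm=\tfrac12\pm\delta,
% \delta=rN^{-1/3}, one has
(1-\rho_\pm)^2=\tfrac14\mp\delta+\delta^2,\quad
\rho_\pm^2=\tfrac14\pm\delta+\delta^2,\quad
(1-\rho_\pm)^2+\rho_\pm^2=\tfrac12+2\delta^2,
% so that
\xi(\rho_\pm)=\big(\tfrac12\mp2\delta,\tfrac12\pm2\delta\big)+\cO(\delta^2)
  =\big(\tfrac12\mp2rN^{-1/3},\tfrac12\pm2rN^{-1/3}\big)+\cO(r^2N^{-2/3}).
```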
The event $\cA$ is highly probable for large $r$, as the following lemma shows.
\begin{lemma}\label{lem:sb}
Assume $0\leq s_r\leq 2r$ and $0<a<1$. There exist $c,N_0>0$ such that for $N>N_0$ and $0<r<N^{1/3}/\log(N)$,
\begin{equation}
\P(\cA)\geq 1-e^{-cr^3}.
\end{equation}
\end{lemma}
\begin{proof}
We will show the claim for $\cA_1$; the claim for $\cA_2$ can be proven similarly, and the result then follows by a union bound. To prove the claim for $\cA_1$, by a union bound it is enough to show that
\begin{align}
&\P\big(Z^{\rho_+}_{o,x^2}\geq rN^{2/3}\big)\geq 1-e^{-cr^3}\label{lb1},\\
&\P\big(Z^{\rho_+}_{o,x^1}\leq 15rN^{2/3}\big)\geq 1-e^{-cr^3}\label{lb2}.
\end{align}
Let $x^3=x^2-r N^{2/3} \mathrm{e}_1$. Then by stationarity of the model,
\begin{equation}\label{eq3.44}
\P\big(Z^{\rho_+}_{o,x^2}< rN^{2/3}\big) = \P\big(Z^{\rho_+}_{o,x^3}< 0\big).
\end{equation}
Now we want to use \eqref{spc3}. For that denote $\tilde N=N-\frac{r}{2} N^{2/3}$ and write
\begin{equation}
x^3=\xi(\rho_+) 2\tilde N+\tilde r \tilde N^{2/3} \mathrm{e_3}.
\end{equation}
Solving with respect to $\tilde r$ we obtain
\begin{equation}
\tilde r=\frac{7}{2}r-a s_r +\cO(r^2/N^{1/3}).
\end{equation}
Applying \eqref{spc3} with $\nu\to \rho_+$, $N\to \tilde N$ and $r\to \tilde r$ gives
\begin{equation}
\P\big(Z^{\rho_+}_{o,x^3}< 0\big) \leq e^{-c_0\tilde r^3}.
\end{equation}
Since $s_r\leq 2r$ and $r\leq N^{1/3}/\log(N)$, we have $\tilde r\geq r$ for all $N$ large enough, which proves \eqref{lb1}.
Let $x^4=x^1-15 r N^{2/3} \mathrm{e}_1$. Then by stationarity of the model,
\begin{equation}\label{eq3.48}
\P\big(Z^{\rho_+}_{o,x^1}>15 rN^{2/3}\big) = \P\big(Z^{\rho_+}_{o,x^4}> 0\big).
\end{equation}
We will apply this time \eqref{spc2}. Denote $\tilde N = N - \frac{15}{2} r N^{2/3}$ and write
\begin{equation}
x^4=\xi(\rho_+) 2\tilde N-\hat r \tilde N^{2/3} \mathrm{e_3}.
\end{equation}
Solving with respect to $\hat r$ we obtain
\begin{equation}
\hat r=\frac{7}{2}r-a s_r +\cO(r^2/N^{1/3})\geq r
\end{equation}
for all $N$ large enough. Applying \eqref{spc2} with $\nu\to \rho_+$, $N\to \tilde N$ and $r\to \hat r$ proves \eqref{lb2}.
\end{proof}
\subsection{Uniform sandwiching of geodesics terminating in $\cC^{s_r/2,t_r}$.}
Consider the following assumption.
\begin{assumption}\label{ass}
Let $M_0>0$, $a=3/8$, $s_r\leq \min\{r,4\}$ and make the following assumptions on the parameters:
\begin{equation}\label{eq3.67}
r \leq N^{1/3}/\log(N),\quad M_0\leq M\leq \tfrac1{16} s_r t_r^{-2/3}-4 r t_r^{1/3}.
\end{equation}
\end{assumption}
We discuss this assumption in Remark~\ref{rem:assumptions} below.
Under Assumption~\ref{ass}, the geodesics $\pi^{1/2}_{o,x}$ and $\pi_{y,x}$, for $y\in\cR^{r/2,1/4}$, are controlled by those with densities $\rho_+$ and $\rho_-$ for all $x\in \cC^{s_r/2,t_r}$. This is the content of the following result, whose proof we defer to the end of this section.
\begin{lemma}\label{cor:og}
Under Assumption~\ref{ass}, there exists $C,c>0$ such that
\begin{equation}\label{eq3.78}
\P\Big(\pi^{\rho_-}_{o,x}\preceq \pi^{1/2}_{o,x},\pi_{y,x} \preceq \pi^{\rho_+}_{o,x} \quad \forall x\in \cC^{s_r/2,t_r},y\in \cR^{r/2,1/4}\Big)\geq 1-Ce^{-cM^3}-2e^{-cr^3}
\end{equation}
for all $N$ large enough.
\end{lemma}
Define
\begin{equation}
\begin{aligned}\label{c}
c^1&=\pi^{\rho_+}_{o,x^1}\cap L_{1-t_r},\\
c^2&=\pi^{\rho_+}_{o,x^2}\cap L_{1-t_r}.
\end{aligned}
\end{equation}
To ease the notation we also denote
\begin{equation}
\begin{aligned}
w^2&=(1-t_r)N\mathrm{e}_4- as_rN^{2/3}\mathrm{e}_3=x^2-t_rN\mathrm{e}_4, \\
w^1&=(1-t_r)N\mathrm{e}_4+ as_rN^{2/3}\mathrm{e}_3=x^1-t_rN\mathrm{e}_4.
\end{aligned}
\end{equation}
\begin{lemma}\label{lem:cyl}
There exist $c,N_0,M_0>0$ such that for $t_r N>N_0$, $r\leq N^{1/3}/\log(N)$ and $M\geq M_0$,
\begin{align}
&\P\Big(w^2-M(t_rN)^{2/3}\mathrm{e}_3 \preceq c^2\preceq w^2+(8rt_rN^{2/3}+M(t_rN)^{2/3})\mathrm{e}_3\Big) \geq 1-e^{-cM^3},\label{lb3}\\
&\P\Big(w^1-M(t_rN)^{2/3}\mathrm{e}_3 \preceq c^1 \preceq w^1+(8rt_rN^{2/3}+M(t_rN)^{2/3})\mathrm{e}_3 \Big) \geq 1-e^{-cM^3}.\label{lb4}
\end{align}
\end{lemma}
\begin{proof}
Let $p^2$ be the point of intersection of the characteristic $\xi_+$ starting from $x^2$ with the line $L_{1-t_r}$. We have
\begin{equation}
p^2=w^2+(4r t_r N^{2/3}+\cO(r^3 t_r))\mathrm{e}_3=w^2+4r t_r N^{2/3}(1+o(1))\mathrm{e}_3
\end{equation}
for $r\leq N^{1/3}/\log(N)$ and $N$ large enough, implying
\begin{equation}
w^2 \preceq p^2\preceq w^2 + 8rt_r N^{2/3}\mathrm{e}_3=:z^2.
\end{equation}
By the order on geodesics $c^2\preceq \pi^{\rho_+}_{o,x^2+4r t_r N^{2/3}\mathrm{e}_3}$ and if $Z^{\rho_+}_{z^2+M(t_r N)^{2/3}\mathrm{e}_3,x^2+4r t_r N^{2/3}\mathrm{e}_3}<0$, then $\pi^{\rho_+}_{o,x^2+4r t_r N^{2/3}\mathrm{e}_3}\preceq z^2+M(t_r N)^{2/3}\mathrm{e}_3$. Thus
\begin{equation}
\P(c^2\preceq z^2+M(t_r N)^{2/3}\mathrm{e}_3)\geq \P(Z^{\rho_+}_{z^2+M(t_r N)^{2/3}\mathrm{e}_3,x^2+4r t_r N^{2/3}\mathrm{e}_3}<0).
\end{equation}
Using \eqref{spc2}, the latter is bounded from below by $1-e^{-c_0 M^3}$ provided $M\geq M_0$ and $t_r N\geq N_0$.
A similar bound can be obtained for
\begin{equation}
\P(w^2-M(t_r N)^{2/3}\mathrm{e}_3\preceq c^2)
\end{equation}
using \eqref{spc3}. Thus we have shown that \eqref{lb3} holds. The proof of \eqref{lb4} is almost identical and thus we do not repeat the details.
\end{proof}
Set
\begin{equation}
\begin{aligned}\label{q}
q^1&=(1-t_r)N\mathrm{e}_4+ N^{2/3}(a s_r-2 M t_r^{2/3})\mathrm{e}_3,\\
q^2&=(1-t_r)N\mathrm{e}_4+N^{2/3}(8 r t_r-a s_r+2M t_r^{2/3})\mathrm{e}_3
\end{aligned}
\end{equation}
and define the lines (see Figure~\ref{fig:dim})
\begin{equation}
\begin{aligned}
D^1&=\{q^1+\alpha t_r N\mathrm{e}_4:0<\alpha<1\},\\
D^2&=\{q^2+\alpha t_r N\mathrm{e}_4:0<\alpha<1\}.
\end{aligned}
\end{equation}
\begin{lemma}\label{lem:cyl2}
There exist $N_1,c,C>0$ such that for every $N\geq N_1$ and $M\leq N^{1/3}/\log(N)$, $r\leq N^{1/3}/\log(N)$,
\begin{align}
&\P\Big(D^1 \preceq \pi^{\rho_+}_{o,x^1}\Big)\geq 1- Ce^{-c M^3},\label{ub7}\\
&\P\Big(\pi^{\rho_+}_{o,x^2} \preceq D^2\Big)\geq 1- Ce^{-c M^3}.\label{ub8}
\end{align}
\end{lemma}
\begin{proof}
We will show \eqref{ub8}, as \eqref{ub7} can be proven similarly. Let
\begin{equation}
u^2=q^2-M(t_rN)^{2/3}\mathrm{e}_3.
\end{equation}
By Theorem~\ref{prop1} we have
\begin{equation}\label{or}
\P\big(\pi_{u^2,x^2}\preceq D^2\big)\geq 1- Ce^{-c M^3}.
\end{equation}
Recall the definition \eqref{c} of $c^2$. By Lemma~\ref{lem:cyl}
\begin{equation}
\P\big(c^2\preceq u^2\big)\geq 1- e^{-c M^3},
\end{equation}
which implies that
\begin{equation}\label{or2}
\P\big(\pi^{\rho_+}_{o,x^2}\preceq \pi_{u^2,x^2}\big)\geq 1- Ce^{-c M^3}.
\end{equation}
Equations \eqref{or} and \eqref{or2} imply \eqref{ub8}.
\end{proof}
\begin{remark}\label{rem:assumptions}
Now we can discuss the origin of the conditions in Assumption~\ref{ass}.
The bound on $r$ comes from Lemma~\ref{lem:cyl2}. The condition on $M$ is a consequence of the conditions $q^2\preceq (1-t_r)N\mathrm{e}_4-\tfrac14 s_r N^{2/3} \mathrm{e}_3$ and $(1-t_r)N\mathrm{e}_4+\tfrac14 s_r N^{2/3} \mathrm{e}_3\preceq q^1$. As we want $M$ to grow to infinity, we need to take $t_r \ll s_r / r$.
\end{remark}
For $0<\tau<1$ and $\sigma\in\R_+$, define the anti-diagonal segment
\begin{equation}
\cI^{\sigma,\tau}=\{(1-\tau) N\mathrm{e}_4+ i\mathrm{e}_3, i\in [-\tfrac\sigma{2} N^{2/3},\tfrac{\sigma}{2}N^{2/3}]\},
\end{equation}
located right below the cylinder $\cC^{\sigma,\tau}$.
Define the events
\begin{equation}
\begin{aligned}
\cO&=\big\{Z^{\rho_-}_{o,x}\in [-15rN^{2/3},-rN^{2/3}],Z^{\rho_+}_{o,x}\in [rN^{2/3},15rN^{2/3}]\quad \forall x\in \cC^{s_r/2,t_r}\big\},\\
\cB&= \Big\{\{\pi^{\rho_-}_{o,x} \cap \cI^{s_r,t_r}\neq \emptyset\} \cap \{\pi^{\rho_+}_{o,x} \cap \cI^{s_r,t_r}\neq \emptyset\} \quad \forall x\in \cC^{s_r/2,t_r}\Big \}.
\end{aligned}
\end{equation}
\begin{corollary}\label{cor:up}
Under Assumption~\ref{ass} there exists $C,c,N_0>0$ such that for $N>N_0$
\begin{align}
\P(\cO)&\geq 1-2e^{-cr^3}-C e^{-cM^3},\label{r1}\\
\P(\cB)&\geq 1-e^{-cM^3}\label{r2}.
\end{align}
\end{corollary}
\begin{proof}
It might be helpful to take a look at Figure~\ref{fig:dim} while reading the proof. We prove in detail the statements for $\rho_+$, since the proof for $\rho_-$ is almost identical.
By our choice of parameters,
\begin{equation}
D^2\preceq \cC^{s_r/2,t_r} \preceq D^1.
\end{equation}
By Lemma~\ref{lem:cyl2} and order of geodesics, with probability at least $1-C e^{-c M^3}$,
\begin{equation}\label{eq3.75}
\pi^{\rho_+}_{o,x^2}\preceq \pi^{\rho_+}_{o,x}\preceq \pi^{\rho_+}_{o,x^1}
\end{equation}
for all $x\in \cC^{s_r/2,t_r}$. By Lemma~\ref{lem:sb} the exit point of the geodesics to $x^1$ and $x^2$ for the stationary model with density $\rho_+$ lies between $rN^{2/3}$ and $15r N^{2/3}$ with probability at least $1-e^{-c r^3}$, which leads to \eqref{r1}.
To prove \eqref{r2}, first notice that Assumption~\ref{ass} implies
\begin{equation}
w^1+(8rt_rN^{2/3}+M(t_rN)^{2/3})\mathrm{e}_3 \preceq (1-t_r)N\mathrm{e}_4+\tfrac12 s_r N^{2/3}\mathrm{e}_3
\end{equation}
and
\begin{equation}
(1-t_r)N\mathrm{e}_4-\tfrac12 s_r N^{2/3}\mathrm{e}_3 \preceq w^2-M(t_rN)^{2/3}\mathrm{e}_3.
\end{equation}
Thus by Lemma~\ref{lem:cyl} the geodesics $\pi^{\rho_+}_{o,x^1}$ and $\pi^{\rho_+}_{o,x^2}$ cross $\cI^{s_r,t_r}$ with probability at least $1-e^{-c M^3}$. This, together with \eqref{eq3.75}, implies \eqref{r2}.
\end{proof}
\begin{proof}[Proof of Lemma~\ref{cor:og}]
Consider the straight line going from $(0,rN^{2/3})$ to $x^1$ parameterized by $(u,l_1(u))$ with
\begin{equation}
l_1(u)=r N^{2/3}+u \frac{N-(\tfrac38 s_r +r)N^{2/3}}{N+\tfrac38 s_r N^{2/3}},
\end{equation}
and the straight line $(u,l_2(u))$, which overlaps in its first part with the boundary of $\cR^{r/2,1/4}$, defined through
\begin{equation}
l_2(u)=\frac{r}{2}N^{2/3}+u.
\end{equation}
By our assumption $s_r\leq r$,
\begin{equation}\label{lb}
\inf_{0\leq u \leq \frac{N}{4}}( l_1(u)-l_2(u))\geq \frac{r}{2}N^{2/3}-\frac{\frac{7}{4}r N^{2/3}}{N+\frac38 r N^{2/3}} \frac{N}{4}\geq \frac1{16}r N^{2/3}.
\end{equation}
It follows from \eqref{lb} and Theorem~\ref{prop1} that for some $C,c>0$
\begin{equation}\label{eq4.80}
\P(\Gamma_u^l(\pi_{(0,rN^{2/3}),x^1})<l_2(u) \textrm{ for some }0\leq u \leq \tfrac{N}{4})\leq Ce^{-cr^3}.
\end{equation}
By the analogue of Lemma~\ref{lem:cyl2} for $\rho_-$, with probability at least $1-C_1 e^{-c_1 M^3}$, $\pi^{\rho_-}_{o,x}\preceq \pi^{\rho_-}_{o,x^1}$ for all $x\in \cC^{s_r/2,t_r}$. Furthermore, on the event $\cA_2$, the geodesic $\pi^{\rho_-}_{o,x^1}$ starts from a point above $(0,r N^{2/3})$. Combining these two facts with \eqref{eq4.80}, we get
\begin{equation}\label{lb5}
\P\Big(\pi^{\rho_-}_{o,x}\preceq \pi_{(0,rN^{2/3}),x^1} \preceq \cR^{r/2,1/4} \quad \forall x\in \cC^{s_r/2,t_r}\Big)\geq 1-C_1 e^{-c_1 M^3}-Ce^{-cr^3}.
\end{equation}
A similar result can be obtained for $\pi^{\rho_+}_{o,x}$, which combined with \eqref{lb5} gives
\begin{equation}\label{eq4.82}
\P\Big(\pi^{\rho_-}_{o,x} \preceq \cR^{r/2,1/4} \preceq \pi^{\rho_+}_{o,x}\quad \forall x\in \cC^{s_r/2,t_r}\Big)\geq 1-2C_1 e^{-c_1 M^3}-2Ce^{-cr^3}.
\end{equation}
By the order of geodesics, on the event of \eqref{eq4.82}, every geodesic starting in $\cR^{r/2,1/4}$ and ending at $x$, is sandwiched between $\pi^{\rho_-}_{o,x}$ and $\pi^{\rho_+}_{o,x}$.
Next, for each point $x\in \cC^{s_r/2,t_r}$, its associated density $\rho(x)$ satisfies $|\rho(x)-\tfrac12|\leq N^{-1/3}$ for all $s_r\leq 4(1-t_r)$ and $N$ large enough. By \eqref{spc1} of Lemma~\ref{spc}, with probability at least $1-e^{-c_0 r^3}$ the exit point of $\pi^{1/2}_{o,x}$ is also between $-r N^{2/3}$ and $r N^{2/3}$. Thus, by an appropriate choice of the constants $C,c$, the sandwiching of $\pi^{1/2}_{o,x}$ in \eqref{eq3.78} holds.
\end{proof}
\section{Lower bound for the probability of no coalescence }\label{sectLowerBound}
\subsection{Point-to-point case: proof of Theorem~\ref{thm:coal}}
Using the results of Section~\ref{SectLocalStat} we first relate the bound for the coalescing point of $\pi^{1/2}_{o,x}$ and $\pi_{o,x}$ to that of the coalescing point of $\pi^{\rho_+}_{o,x}$ and $\pi^{\rho_-}_{o,x}$.
\begin{lemma}\label{cor:cub}
Under Assumption~\ref{ass}, there exists $C,c>0$ such that
\begin{equation}\label{eq4.1}
\begin{aligned}
&\P\Big(C_p(\pi^{1/2}_{o,x},\pi_{y,x})\leq L_{1-t_r} \quad \forall x\in \cC^{s_r/2,t_r},y\in\cR^{r/2,1/4}\Big)\geq 1-Ce^{-cM^3}-2e^{-cr^3}\\
&-\P\Big(\exists x\in \cC^{s_r,t_r}: C_p(\pi^{\rho_+}_{o,x},\pi^{\rho_-}_{o,x})\geq \cI^{s_r/2,t_r},\cB \Big)
\end{aligned}
\end{equation}
for all $N$ large enough.
\end{lemma}
\begin{proof}
We bound the probability of the complement event. Define the event
\begin{equation}\label{eventG}
\cG=\{\pi^{\rho_-}_{o,x}\preceq \pi^{1/2}_{o,x},\pi_{y,x} \preceq \pi^{\rho_+}_{o,x} \quad \forall x\in \cC^{s_r/2,t_r}, y\in\cR^{r/2,1/4}\}.
\end{equation}
Then
\begin{equation}
\begin{aligned}
& \P\Big(\exists x \in \cC^{s_r/2,t_r},y\in\cR^{r/2,1/4}: C_p(\pi^{1/2}_{o,x},\pi_{y,x})>L_{1-t_r}\Big) \leq \P(\cB^c)+\P(\cG^c)\\
&+ \P\Big(\{\exists x \in \cC^{s_r/2,t_r},y\in\cR^{r/2,1/4}: C_p(\pi^{1/2}_{o,x},\pi_{y,x})>L_{1-t_r}\}\cap \cB\cap \cG\Big).
\end{aligned}
\end{equation}
Note that if $\cB$ and $\cG$ hold, then both geodesics $\pi^{1/2}_{o,x}$ and $\pi_{y,x}$ are sandwiched between $\pi^{\rho_-}_{o,x}$ and $\pi^{\rho_+}_{o,x}$, and their crossings with the line $L_{1-t_r}$ occur in the segment $\cI^{s_r,t_r}$. Furthermore, if $C_p(\pi^{1/2}_{o,x},\pi_{y,x})>L_{1-t_r}$ and $\cG$ holds, then also $C_p(\pi^{\rho_+}_{o,x},\pi^{\rho_-}_{o,x})>L_{1-t_r}$. This, together with Corollary~\ref{cor:up} and Lemma~\ref{cor:og}, proves the claim.
\end{proof}
Thus, to prove Theorem~\ref{thm:coal} it remains to get an upper bound for the last probability in \eqref{eq4.1}.
For $x,y,z\in\Z^2$ such that $x\leq y \leq z$, let $\gamma_{x,z}$ be an up-right path going from $x$ to $z$. Define the exit point of $\gamma_{x,z}$ with respect to the point $y$
\begin{equation}
\cZ_y(\gamma_{x,z})=\sup\{u\in\gamma_{x,z}:u_1=y_1 \text{ or } u_2=y_2\}.
\end{equation}
Define the sets
\begin{equation}
\begin{aligned}
\cH&=\{(1-t_r)N\mathrm{e}_4+\tfrac {s_r}2N^{2/3}\mathrm{e}_3-i\mathrm{e}_1, \quad 1 \leq i \leq \tfrac {s_r}2N^{2/3}\},\\
\cV&=\{(1-t_r)N\mathrm{e}_4-\tfrac {s_r}2N^{2/3}\mathrm{e}_3-i\mathrm{e}_2, \quad 1 \leq i \leq \tfrac {s_r}2N^{2/3}\},
\end{aligned}
\end{equation}
and the point
\begin{equation}
v^c=[(1-t_r)N-\tfrac{s_r}{2} N^{2/3}]\mathrm{e}_4.
\end{equation}
Define the event
\begin{equation}
E_1=\{\cZ_{v^c}(\pi^{\rho_+}_{o,x})\in \cH\cup\cV ,\cZ_{v^c}(\pi^{\rho_-}_{o,x})\in \cH\cup\cV \quad \forall x\in \cC^{s_r/2,t_r}\}.
\end{equation}
Note that (see Figure~\ref{fig:coal})
\begin{equation}
E_1=\cB,
\end{equation}
since to cross the set $\cI^{s_r,t_r}$ the geodesic must cross either $\cH$ or $\cV$ and, vice versa, if the geodesic crosses $\cH\cup\cV$, then it also crosses $\cI^{s_r,t_r}$.
\begin{figure}
\begin{center}
\begin{tikzpicture}[scale=0.5, every node/.style={transform shape}]
\def\s{2.0}
\draw [black,thick](0,0) -- (0,10);
\node [scale=\s,left,black] at (0,5) {$\cV$};
\draw [black,thick](0,0) -- (10,0);
\node [scale=\s,below,black] at (5,0) {$\cH$};
\draw [color=red,fill=red!20] (2.5,7.5) -- (7.5,2.5) -- (13.5,8.5) --(8.5,13.5) -- (2.5,7.5);
\draw [blue,thick] (0,10)--(10,0);
\node [scale=\s,red] at (8.5,8.5) {$\cC^{s_r/2,t_r}$};
\node [scale=\s,blue] at (3.3,5.5) {$\cI^{s_r,t_r}$};
\draw[black,thick] plot [smooth,tension=0.6] coordinates{(-3,-3) (-1,1) (2,2) (4,3) (7,6) };
\draw[black,thick] plot [smooth,tension=0.6] coordinates{(-4,-2) (-3,-0.5) (-1,1)};
\node [scale=\s,above,blue] at (-2.5,1.2) {$C_p(\pi^{1/2}_{o,x},\pi_{o,x})$};
\node [scale=\s,above] at (7,6) {$x$};
\draw [blue,fill](-1,1) circle (1mm);
\draw [black,fill](0,0) circle (0.5mm);
\node [scale=\s,left] at (0,0) {$v^c$};
\end{tikzpicture}
\caption{On the event $E_1\cap E_2$, the geodesics $\pi^{\rho_+}_{o,x}$ and $\pi^{\rho_-}_{o,x}$ coalesce before crossing $\cI^{s_r,t_r}$.}
\label{fig:coal}
\end{center}
\end{figure}
On the edges of $\Z_{\geq 0}^2$ define the random field $B$ through
\begin{equation}\label{bf3}
B_{x,x+\mathrm{e}_k}=G(\pi_{o,x+\mathrm{e}_k})-G(\pi_{o,x}),\quad k=1,2
\end{equation}
for all $x>o$. Similarly we define
\begin{equation}
\begin{aligned} B^{\rho_+}_{x,x+\mathrm{e}_k}&=G(\pi^{\rho_+}_{o,x+\mathrm{e}_k})-G(\pi^{\rho_+}_{o,x}), \quad k=1,2,\\ B^{\rho_-}_{x,x+\mathrm{e}_k}&=G(\pi^{\rho_-}_{o,x+\mathrm{e}_k})-G(\pi^{\rho_-}_{o,x}),\quad k=1,2.\label{bf2}
\end{aligned}
\end{equation}
One can couple the random fields $B,B^{\rho_-}$ and $B^{\rho_+}$ (see \cite[Theorem 2.1]{FS18}) such that
\begin{equation}
\begin{aligned}\label{og}
B^{\rho_-}_{x-\mathrm{e}_1,x}&\leq B_{x-\mathrm{e}_1,x} \leq B^{\rho_+}_{x-\mathrm{e}_1,x},\\
B^{\rho_+}_{x-\mathrm{e}_2,x}&\leq B_{x-\mathrm{e}_2,x} \leq B^{\rho_-}_{x-\mathrm{e}_2,x}.
\end{aligned}
\end{equation}
Let $o\leq v\leq x$. Then, since each geodesic has to pass through a site either to the right of $v$ or above $v$, we have
\begin{equation}
\begin{aligned}
G_{o,x}=G_{o,v}+\max\Big\{&\sup_{0\leq l \leq x_1-v_1}\sum_{i=0}^l B_{v+i\mathrm{e}_1,v+(i+1)\mathrm{e}_1}+G_{v+(l+1)\mathrm{e}_1+\mathrm{e}_2,x},\\
&\sup_{0\leq l\leq x_2-v_2}\sum_{i=0}^l B_{v+i\mathrm{e}_2,v+(i+1)\mathrm{e}_2}+G_{v+(l+1)\mathrm{e}_2+\mathrm{e}_1,x}\Big\}.
\end{aligned}
\end{equation}
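As a quick illustration (not part of the proof), the decomposition above can be checked numerically on a small grid: with the convention that the passage time collects the weights of both endpoints, splitting the path at its last crossing of the two rays out of $v$ reproduces $G_{o,x}$ exactly. The grid size, the seed, and the points $v,x$ below are arbitrary illustration choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8                                            # grid {0,...,n}^2
w = rng.exponential(1.0, size=(n + 1, n + 1))    # i.i.d. Exp(1) bulk weights

# G[u] = last-passage time from o=(0,0) to u over up-right paths
# (weights of both endpoints counted)
G = np.zeros((n + 1, n + 1))
for i in range(n + 1):
    for j in range(n + 1):
        prev = 0.0
        if i > 0 or j > 0:
            prev = max(G[i - 1, j] if i > 0 else -np.inf,
                       G[i, j - 1] if j > 0 else -np.inf)
        G[i, j] = w[i, j] + prev

def G_pp(u, x):
    """Last-passage time over up-right paths from u to x (endpoints counted)."""
    (a, b), (c, d) = u, x
    H = np.zeros((c - a + 1, d - b + 1))
    for i in range(c - a + 1):
        for j in range(d - b + 1):
            prev = 0.0
            if i > 0 or j > 0:
                prev = max(H[i - 1, j] if i > 0 else -np.inf,
                           H[i, j - 1] if j > 0 else -np.inf)
            H[i, j] = w[a + i, b + j] + prev
    return H[-1, -1]

v, x = (3, 2), (n, n)
# increments of G along the horizontal / vertical rays out of v
Bh = [G[v[0] + i + 1, v[1]] - G[v[0] + i, v[1]] for i in range(x[0] - v[0])]
Bv = [G[v[0], v[1] + i + 1] - G[v[0], v[1] + i] for i in range(x[1] - v[1])]

# split at the last crossing point (the index l runs up to x_k - v_k - 1 so
# that the continuation point v+(l+1)e_k + e_{3-k} stays inside the box)
rhs = G[v] + max(
    max(sum(Bh[:l + 1]) + G_pp((v[0] + l + 1, v[1] + 1), x)
        for l in range(x[0] - v[0])),
    max(sum(Bv[:l + 1]) + G_pp((v[0] + 1, v[1] + l + 1), x)
        for l in range(x[1] - v[1])))
assert abs(G[x] - rhs) < 1e-9
```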
Thus setting $v=v^c$, on the event $E_1\cap \cG$, for every $x\in\cC^{s_r/2,t_r}$
\begin{equation}
\begin{aligned}\label{wr}
G_{o,x}=G_{o,v}+\max\Big\{&\sup_{u\in \cH}\sum_{i=0}^{u_1-v^c_1} B_{v^c+i\mathrm{e}_1,v^c+(i+1)\mathrm{e}_1}+G_{u+\mathrm{e}_2,x},\\
&\sup_{u\in \cV}\sum_{i=0}^{u_2-v^c_2} B_{v^c+i\mathrm{e}_2,v^c+(i+1)\mathrm{e}_2}+G_{u+\mathrm{e}_1,x}\Big\}.
\end{aligned}
\end{equation}
$G^{\rho_+}_{o,x}$ and $G^{\rho_-}_{o,x}$ can be decomposed in the same way.
This shows that on the event $E_1\cap \cG$ the restrictions of the geodesics $\pi_{o,x},\pi^{\rho_+}_{o,x},\pi^{\rho_-}_{o,x}$ to $\Z^2_{>v^c}$ are functions of the weights \eqref{bf3}--\eqref{bf2} and the bulk weights north-east of $v^c$. More precisely, define $\cE^B_{v^c}$ to be the set of edges in the south-west boundary of $\Z^2_{>v^c}$ that are incident to $\cH\cup \cV$, i.e.
\begin{equation}
\begin{aligned}
\cE_{v^c}^H&=\{(x-\mathrm{e}_2,x):x\in \cH\},\\
\cE_{v^c}^V&=\{(x-\mathrm{e}_1,x):x\in \cV\},\\
\cE^B_{v^c}&=\cE_{v^c}^H\cup \cE_{v^c}^V.
\end{aligned}
\end{equation}
The representation \eqref{wr} shows that on the event $E_1\cap \cG$, for every $x\in\cC^{s_r/2,t_r}$, the restrictions of the geodesics $\pi_{o,x},\pi^{\rho_+}_{o,x},\pi^{\rho_-}_{o,x}$ to $\Z^2_{> v^c}$ are functions of the bulk weights
\begin{equation}\label{w}
\{\omega_x\}_{x\in \Z^2_{>v^c}}
\end{equation}
and the boundary weights
\begin{equation}\label{bf5}
\{B_e\}_{e\in\cE^B_{v^c}}, \{B^{\rho_+}_e\}_{e\in\cE^B_{v^c}}\textrm{ and }\{B^{\rho_-}_e\}_{e\in\cE^B_{v^c}}
\end{equation}
respectively. The stationary geodesics with densities $\rho_+$ and $\rho_-$ coalesce before reaching $\cI^{s_r,t_r}$ unless the following event holds:
\begin{equation}
E_2=\{\exists e\in\cE^B_{v^c}: B^{\rho_+}_e\neq B^{\rho_-}_e \}.
\end{equation}
\begin{lemma}\label{lem:e1e2}
We have
\begin{equation}
\{\exists x\in \cC^{s_r/2,t_r}: C_p(\pi^{\rho_+}_{o,x},\pi^{\rho_-}_{o,x})\geq \cI^{s_r,t_r}, \cB \} \subseteq E_1\cap E_2.
\end{equation}
\end{lemma}
\begin{proof}
The representation \eqref{wr} for the stationary models implies that $G^{\rho_-}_{o,x}$ and $G^{\rho_+}_{o,x}$ are functions of the weights in \eqref{w} and the stationary weights in \eqref{bf5}. Hence, whenever the two stationary boundary fields agree on every edge of $\cE^B_{v^c}$, that is on $E_2^c$,
\begin{equation}
C_p(\pi^{\rho_+}_{o,x},\pi^{\rho_-}_{o,x})\leq \cH\cup \cV \leq \cI^{s_r,t_r} \quad \forall x\in \cC^{s_r/2,t_r},
\end{equation}
which implies the result, as $\cB=E_1$ by definition.
\end{proof}
Thus it remains to find an upper bound for $\P(E_2)$. For $m>0$, let us define
\begin{equation}
\cA^m=\big\{B^{\rho_+}_{v^c+i\mathrm{e}_1,v^c+(i+1)\mathrm{e}_1}=B^{\rho_-}_{v^c+i\mathrm{e}_1,v^c+(i+1)\mathrm{e}_1}, 0\leq i \leq m\big\}.
\end{equation}
Recall that $\rho_+=1/2+rN^{-1/3}$. In~\cite{BBS20} the following bound is proven.
\begin{lemma}[Lemma~5.9 of~\cite{BBS20}]
Let $m\geq1$. For $0<\theta<\rho_+$, it holds
\begin{equation}
\begin{aligned}\label{tub}
\P(\cA^m)&\geq 1- \frac{2rN^{-1/3}}{\frac12+rN^{-1/3}}\\
&+\frac{\frac12-rN^{-1/3}}{\frac12+rN^{-1/3}}\Bigg[1+\frac{2r\theta N^{-1/3}+\theta^2}{\frac14-(r^2N^{-2/3}+2rN^{-1/3}\theta+\theta^2)}\Bigg]^{m}\frac{1}{1+2\theta r^{-1}N^{1/3}}.
\end{aligned}
\end{equation}
\end{lemma}
\begin{corollary}\label{cor:co}
There exists $C>0$, such that for every $r>0$ and $0<\eta<1/4000$,
\begin{equation}\label{aa}
\P(\cA^{\eta r^{-2}N^{2/3}})\geq 1-C\eta^{1/2}
\end{equation}
for all $N$ large enough.
\end{corollary}
\begin{proof}
We set $\theta=\eta^{-1/2}rN^{-1/3}$ and plug this into \eqref{tub}. Taking $N\to\infty$ we obtain
\begin{equation}
\lim_{N\to\infty}\P(\cA^{\eta r^{-2}N^{2/3}})\geq 1- \frac{e^{4+8\sqrt{\eta}}\sqrt{\eta}}{1+2\sqrt{\eta}}.
\end{equation}
Taking for instance $C=62$, for all $0<\eta<1/4000$ we have $1\geq C \sqrt{\eta}\geq \frac{e^{4+8\sqrt{\eta}}\sqrt{\eta}}{1+2\sqrt{\eta}}$, which implies the result.
\end{proof}
We are now able to bound the probability of $E_2$.
\begin{corollary}\label{cor:e2}
There exists $C>0$, such that for every $r>0$ and $0<s_r<1/(2000 r^2)$
\begin{equation}
\P(E_2)\leq Cs_r^{1/2} r
\end{equation}
for all $N$ large enough.
\end{corollary}
\begin{proof}
Define
\begin{equation}
\begin{aligned}
E^H_2&=\{\exists e\in\cE_{v^c}^H : B^{\rho_+}_e\neq B^{\rho_-}_e \},\\
E^V_2&=\{\exists e\in\cE_{v^c}^V : B^{\rho_+}_e\neq B^{\rho_-}_e \},
\end{aligned}
\end{equation}
and note that
\begin{equation}
E_2=E^H_2\cup E^V_2.
\end{equation}
By the symmetry of the problem, it is enough to show
\begin{equation}
\P(E^H_2)\leq Cs_r^{1/2} r.
\end{equation}
Apply Corollary \ref{cor:co} with $m=\frac{s_r}{2}N^{2/3}$ i.e.\ with $\eta=\frac{s_r r^2}{2}$. Then $\P\big((E^H_2)^c\big)=\P(\cA^{\eta r^{-2}N^{2/3}})$ and \eqref{aa} gives the claimed result.
\end{proof}
\begin{corollary}
Consider the parameters satisfying Assumption~\ref{ass} and $s_r r^2<1/4000$. Then, there exist constants $c,C>0$ such that
\begin{equation}\label{eq5.28}
\P\Big(C_p(\pi^{1/2}_{o,x},\pi_{y,x})\leq L_{1-t_r} \quad \forall x\in \cC^{s_r/2,t_r},y\in\cR^{r/2,1/4}\Big) \geq 1-Ce^{-cM^3}-e^{-cr^3}-Cs_r^{1/2}r
\end{equation}
for all $N$ large enough.
\end{corollary}
\begin{proof}
By Lemma~\ref{lem:e1e2}
\begin{equation}
\P\Big(\exists x\in \cC^{s_r/2,t_r}: C_p(\pi^{\rho_+}_{o,x},\pi^{\rho_-}_{o,x})\geq \cI^{s_r,t_r}, \cB \Big) \leq \P(E_2).
\end{equation}
The result follows from Corollary~\ref{cor:e2} and Lemma~\ref{cor:cub}.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:coal}]
To end the proof of Theorem~\ref{thm:coal}, we just need to express the parameters $r,s_r,t_r,M$ in terms of $\delta$ so that Assumption~\ref{ass} is in force. For a small $\delta>0$ consider the scaling
\begin{equation}\label{eq4.31}
\begin{aligned}
s_r&=2\delta,\\
t_r&=\delta^{3/2}/(\log(1/\delta))^3,\\
r&=\tfrac14\log(1/\delta),\\
M&=\tfrac14\log(1/\delta).
\end{aligned}
\end{equation}
Let us verify the assumptions. For all $\delta\leq 0.05$, the last inequality in \eqref{eq3.67} holds true and also $s_r<r$; moreover, $s_r r^2=\tfrac{\delta}{8}(\log(1/\delta))^2<1/4000$ once $\delta$ is small enough. Finally, for $\delta\leq \exp(-4 M_0)$, we have $M\geq M_0$ as well. For small $\delta$, the largest error term in \eqref{eq5.28} is $C s_r^{1/2} r$, which however goes to $0$ as $\delta$ goes to $0$.
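Explicitly, under \eqref{eq4.31} the error terms in \eqref{eq5.28} become (with the constants $C,c$ from \eqref{eq5.28})
\begin{equation*}
Ce^{-cM^3}+e^{-cr^3}\leq (C+1)\,\delta^{\frac{c}{64}(\log(1/\delta))^2},\qquad
Cs_r^{1/2}r=\frac{C\sqrt{2}}{4}\,\delta^{1/2}\log(1/\delta),
\end{equation*}
so the first two terms decay faster than any power of $\delta$, while the dominant term $Cs_r^{1/2}r$ still vanishes as $\delta\to0$.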
\end{proof}
\begin{remark}
Of course, \eqref{eq4.31} is not the only possible choice of parameters. For instance, one can take $t_r=\delta^{3/2}/\log(1/\delta)$, but then we have to take smaller values of $r$ and $M$, e.g., $r=M=\tfrac{1}{16}\sqrt{\log(1/\delta)}$, for which the last inequality in \eqref{eq3.67} holds for $\delta\leq 0.01$, but the decay of the estimates $e^{-c r^3}$ and $e^{-c M^3}$ is much slower.
\end{remark}
\subsection{General initial conditions: proof of Theorem~\ref{thm:coalGeneralIC}}
In this section we prove Theorem~\ref{thm:coalGeneralIC}. The strategy of the proof is identical to that of Theorem~\ref{thm:coal}, and thus we focus only on the differences.
\begin{proof}[Proof of Theorem~\ref{thm:coalGeneralIC}]
Here we think of the stationary LPP as starting from the line $\cL=\{x\in\Z^2\,|\, x_1+x_2=0\}$ rather than from the point $o=(0,0)$. The first ingredient of the proof is a version of Lemma~\ref{lem:sb}, adapted so that the statement holds with $Z^{\rho_\pm}_{\cL,x^k}$ in place of $Z^{\rho_\pm}_{o,x^k}$\footnote{Since the geodesics have slope very close to $1$, one might expect that $r N^{2/3}$ and $15 r N^{2/3}$ should be replaced by their halves. However, the statement does not need to be changed, since we did not choose these boundaries to be sharp.}. Next we use line-to-point versions of \eqref{eq3.44} and \eqref{eq3.48}, which can indeed be obtained since $\{Z^{\rho_+}_{o,x^3}<0\}=\{Z^{\rho_+}_{\cL,x^3}<0\}$, and therefore the bounds from Lemma~\ref{spc} can still be applied.
Lastly, for general initial conditions, the exit point of $Z^{h_0}_{\cL,x}$ is not $0$ anymore. By Assumption~\ref{ass_IC} it is between $-r N^{2/3}$ and $r N^{2/3}$ with $r=\log(\delta^{-1})$ with probability at least $1-Q(\delta)$. Thus we replace the bound $1-e^{-c r^3}$ for $r=\log(\delta^{-1})$ with $1-Q(\delta)$. The rest of the proof is unchanged.
\end{proof}
\section{Upper bound for the probability of no coalescence}\label{SectUpperBound}
In this section we will prove Theorem~\ref{thm:LBcoal}, but for this we need some preparations.
\subsection{Coupling of stationary models with distinct densities}
Let $\arrv=(\arr_j)_{j\in\Z}$ and $\servv=(\serv_j)_{j\in\Z}$ be two independent sequences of i.i.d.\ exponential random variables of intensities $\beta$ and $\alpha$ respectively, where $0<\beta<\alpha<1$. We think of $a_j$ as the inter-arrival time between customer $j$ and customer $j-1$, and of $s_j$ as the service time of customer $j$. The waiting time of the $j$'th customer is given by
\begin{equation}\label{waittime}
w_j=\sup_{i\leq j}\Big(\sum_{k=i}^{j}s_{k-1}-a_k\Big)^+.
\end{equation}
The distribution of $w_0$ (and by stationarity the distribution of any $w_j$ for $j\in \Z$) is given by
\begin{equation}\label{des}
\P(w_0\in dw):=f(dw)=\big(1-\frac{\beta}{\alpha}\big)\delta_0(dw)+\frac{(\alpha-\beta)\beta}{\alpha}e^{-(\alpha-\beta)w}dw.
\end{equation}
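As a sanity check, \eqref{des} is indeed a probability measure: the density part integrates to the complement of the mass of the atom at $0$,
\begin{equation*}
\int_0^\infty \frac{(\alpha-\beta)\beta}{\alpha}e^{-(\alpha-\beta)w}\,dw=\frac{\beta}{\alpha},\qquad
\Big(1-\frac{\beta}{\alpha}\Big)+\frac{\beta}{\alpha}=1,
\end{equation*}
and the atom $\P(w_0=0)=1-\beta/\alpha$ is the familiar stationary probability that an arriving customer finds the server idle.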
The queueing map $D:\R_+^\Z\times \R_+^\Z\rightarrow\R_+^\Z$ takes the sequence of interarrival times and the service times and maps them to the inter-departure times
\begin{equation}
\begin{aligned}
\depav&=D(\arrv,\servv),\\
d_j&=(w_{j-1}+s_{j-1}-a_j)^-+s_j.
\end{aligned}
\end{equation}
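The waiting-time formula \eqref{waittime} is the sup version of the Lindley recursion $w_j=(w_{j-1}+s_{j-1}-a_j)^+$. The following sketch (not part of the proof; the intensities and the empty-queue initial condition are illustration choices) checks this equivalence on simulated data, together with the formula for the inter-departure times $d_j$.

```python
import random

random.seed(1)
n = 120
beta, alpha = 0.5, 1.0                             # arrival/service intensities, beta < alpha
a = [random.expovariate(beta) for _ in range(n)]   # inter-arrival times a_j
s = [random.expovariate(alpha) for _ in range(n)]  # service times s_j

# Lindley recursion started from an empty queue: w_0 = 0,
# w_j = (w_{j-1} + s_{j-1} - a_j)^+
w = [0.0] * n
d = [0.0] * n                                      # inter-departure times
for j in range(1, n):
    u = w[j - 1] + s[j - 1] - a[j]
    w[j] = max(0.0, u)
    d[j] = max(0.0, -u) + s[j]                     # d_j = (w_{j-1}+s_{j-1}-a_j)^- + s_j

# the sup formula agrees with the recursion (finite-horizon version:
# the sup over i <= j is truncated at i = 1, matching the w_0 = 0 start)
for j in range(1, n):
    sup_form = max(0.0, max(sum(s[k - 1] - a[k] for k in range(i, j + 1))
                            for i in range(1, j + 1)))
    assert abs(w[j] - sup_form) < 1e-9
```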
We denote by $\nu^{\beta,\alpha}$ the distribution of $(D(\arrv,\servv),\servv)$ on $\R_+^\Z\times \R_+^\Z$, that is
\begin{equation}\label{qd}
\nu^{\beta,\alpha}\sim (D(\arrv,\servv),\servv).
\end{equation}
By Burke's Theorem~\cite{Bur56}, $D(\arrv,\servv)$ is a sequence of i.i.d.\ exponential random variables of intensity $\beta$; consequently, the measure $ \nu^{\beta,\alpha}$ is referred to as a stationary measure of the queue.
One can write
\begin{equation}\label{eqEjs}
d_j=e_j+s_{j},
\end{equation}
where $e_j$ is called the $j$'th \textit{idle time} and is given by
\begin{equation}\label{e}
e_j=(w_{j-1}+s_{j-1}-a_j)^-.
\end{equation}
$e_j$ is the time between the departure of customer $j-1$ and the arrival of customer $j$ during which the server is idle. Define
\begin{align}
x_j=s_{j-1}-a_j,
\end{align}
and the summation operator
\begin{equation}\label{S}
S^k_l=\sum_{i=k}^lx_i.
\end{equation}
Summing the $e_j$, we obtain the cumulative idle time (see Chapter 9.2, Eq.~2.7 of~\cite{WhittBook}).
\begin{lemma}[Lemma A1 of~\cite{BBS20}]
For any $k\leq l$
\begin{equation}\label{sume}
\sum_{i=k}^l e_i=\Big(\inf_{k\leq i\leq l}w_{k-1}+S^k_i\Big)^-.
\end{equation}
\end{lemma}
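A quick numerical sanity check of this identity (not part of the proof; the intensities, horizon, and seed are illustration choices): with $x^-=\max(-x,0)$, the cumulative idle time computed from the recursion matches the right-hand side of \eqref{sume} for every pair $k\leq l$.

```python
import random

random.seed(2)
n = 60
a = [random.expovariate(0.4) for _ in range(n)]     # inter-arrival times
s = [random.expovariate(0.9) for _ in range(n)]     # service times

x = [0.0] + [s[j - 1] - a[j] for j in range(1, n)]  # x_j = s_{j-1} - a_j
w = [0.0] * n                                       # waiting times (empty start)
e = [0.0] * n                                       # idle times
for j in range(1, n):
    w[j] = max(0.0, w[j - 1] + x[j])
    e[j] = max(0.0, -(w[j - 1] + x[j]))             # e_j = (w_{j-1}+s_{j-1}-a_j)^-

def neg(v):
    """negative part: v^- = max(-v, 0)"""
    return max(0.0, -v)

# check: sum_{i=k}^{l} e_i = ( inf_{k<=i<=l} (w_{k-1} + S^k_i) )^-  for all k <= l
for k in range(1, n):
    S, m, tot = 0.0, float("inf"), 0.0
    for i in range(k, n):
        S += x[i]                                   # running sum S^k_i
        m = min(m, w[k - 1] + S)
        tot += e[i]
        assert abs(tot - neg(m)) < 1e-9
```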
It has long been known that LPP on the lattice can be seen as queues in tandem. In particular, the stationary distribution for LPP can be seen as a stationary distribution of queues in tandem. In \cite{FS18}, Fan and Sepp\"al\"ainen found the multi-species stationary distribution for LPP. For $0<\beta<\alpha<1$, let $I^\beta=\{I^\beta_i\}_{i\in\Z}$ and $I^\alpha=\{I^\alpha_i\}_{i\in\Z}$ be two i.i.d.\ random sequences such that
\begin{equation}\label{wg}
I^\beta_1\sim \text{Exp}(1-\beta)\textrm{ and }
I^\alpha_1\sim \text{Exp}(1-\alpha).
\end{equation}
Let $x\in \Z$ such that $o=(0,0)\leq N\mathrm{e}_4+x \mathrm{e}_1$. Let $G^\alpha,G^\beta$ be stationary LPP as in Section~\ref{SectStatLPP} with the weights in \eqref{wg}. Define the sequences $I^{\beta,x}$ and $I^{\alpha,x}$ by
\begin{equation}
\begin{aligned}
I^{\beta,x}_i&=G^\beta_{(1-t_r)N\mathrm{e}_4+i\mathrm{e}_1}-G^\beta_{(1-t_r)N\mathrm{e}_4+(i-1)\mathrm{e}_1} \quad\textrm{for }i>x,\\
I^{\alpha,x}_i&=G^\alpha_{(1-t_r)N\mathrm{e}_4+i\mathrm{e}_1}-G^\alpha_{(1-t_r)N\mathrm{e}_4+(i-1)\mathrm{e}_1} \quad\textrm{for }i>x.
\end{aligned}
\end{equation}
The multi-species results in \cite{FS18}, in particular Theorem~2.3 of~\cite{FS18}, show that if we take $(I^{\alpha},I^{\beta})\sim \nu^{1-\alpha,1-\beta}$ then
\begin{align}
(I^{\alpha,x},I^{\beta,x})\sim \nu^{1-\alpha,1-\beta}|_{x+\R^{\Z_+}},
\end{align}
where $\nu^{1-\alpha,1-\beta}|_{x+\R^{\Z_+}}$ is the restriction of $\nu^{1-\alpha,1-\beta}$ to $x+\R^{\Z_+}$.
\subsection{Control of the stationary geodesics at time $(1-t_r)N$}
As the main ingredient of the proof, we show in Proposition~\ref{prop:Lb} below that with positive probability the geodesics ending at $N\mathrm{e}_4$ for the stationary models with densities $\rho_+$ and $\rho_-$ do not coalesce before time $(1-t_r)N$.
Let $r_0>0$ be a constant to be determined later, set $z_0=-r_0t_r^{2/3} N^{2/3}$ and $z_1=r_0t_r^{2/3} N^{2/3}$, and define
\begin{equation}\label{ex}
\cH^{\rho}=\sup\{i\in\Z |(1-t_r)N\mathrm{e}_4+(i,0)\in \pi^\rho_{o,N\mathrm{e}_4}\}
\end{equation}
to be the exit point of the geodesic $\pi^\rho_{o,N\mathrm{e}_4}$ with respect to the horizontal line $(\Z,(1-t_r)N)$, which geometrically is at position $\widetilde \cH^\rho=(1-t_r)N\mathrm{e}_4+(\cH^\rho,0)$.
Define
\begin{equation}
I= I_-\cup I_+\textrm{ where } I_-=\{z_0,\ldots,0\}, I_+=\{1,\ldots,z_1\}.
\end{equation}
\begin{proposition}\label{prop:Lb}
Under the choice of parameters in \eqref{eq4.31}, there exist $C,\delta_0>0$ such that for $\delta<\delta_0$ and $N$ large enough
\begin{equation}
\P(\cH^{{\rho_-}}\in I_-, \cH^{{\rho_+}}>0 )\geq C\delta^{1/2}.
\end{equation}
\end{proposition}
Before we turn to the proof of Proposition \ref{prop:Lb}, we need some preliminary results. The following lemma shows that, with probability close to $1/2$, the geodesic $\pi^{\rho_-}_{o,N\mathrm{e}_4}$ crosses the interval $I$ in its left half $I_-$.
\begin{lemma}\label{lem:gl}
Under \eqref{eq4.31}, there exist $\delta_0,c>0$ such that for $\delta<\delta_0$ and for large enough $N$
\begin{equation}
\P(\cH^{{\rho_-}}\in I_-)\geq \frac12-e^{-cr_0^3}.
\end{equation}
\end{lemma}
\begin{proof}
Let $v=(1-t_r)Ne_4-r_0t_r^{2/3}N^{2/3}e_1$. Note that
\begin{equation}\label{eq5}
\{Z^{\rho_-}_{v,Ne_4}\in [0,-z_0]\}=\{\cH^{{\rho_-}}\in I_-\}.
\end{equation}
Moreover
\begin{equation}\label{eq4}
\P(Z^{\rho_-}_{v,Ne_4}\in [0,-z_0])=\P(Z^{\rho_-}_{v,Ne_4}\leq -z_0)-\P(Z^{\rho_-}_{v,Ne_4}< 0).
\end{equation}
The exit point $Z^\rho_{v,Ne_4}$ is stochastically monotone in $\rho$, that is,
\begin{equation}
\P(Z^\rho_{v,Ne_4}\leq x)\geq \P(Z^\lambda_{v,Ne_4}\leq x) \quad \text{for $\rho\leq\lambda$}.
\end{equation}
It follows that
\begin{equation}
\begin{aligned}
\P(Z^{\rho_-}_{v,Ne_4}\leq -z_0)&\geq \P(Z^{1/2}_{v,Ne_4}\leq -z_0)=\P(Z^{1/2}_{v-z_0,Ne_4}\leq 0)\\
&=\P(Z^{1/2}_{(1-t_r)Ne_4,Ne_4}\leq 0)=1/2\label{eq},
\end{aligned}
\end{equation}
where the last equality follows from symmetry. Consider the characteristic $\rho_-$ emanating from $Ne_4$ and its intersection point $c^0$ with the set $\{(1-t_r)Ne_4+ie_1\}_{i\in\Z}$. A simple approximation of the characteristic $\frac{(\rho_-)^2}{(1-\rho_-)^2}$ shows that
\begin{equation}
c^0_1= (1-t_r)N -8rt_rN^{2/3}+\cO(N^{1/3}).
\end{equation}
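For completeness, the Taylor expansion behind this approximation reads
\begin{equation*}
\frac{(\rho_-)^2}{(1-\rho_-)^2}=\Big(\frac{1-2rN^{-1/3}}{1+2rN^{-1/3}}\Big)^2=1-8rN^{-1/3}+\cO(N^{-2/3}),
\end{equation*}
so over the distance $t_rN$ between $Ne_4$ and the line $\{(1-t_r)Ne_4+ie_1\}_{i\in\Z}$ the characteristic deviates from the diagonal by $8rt_rN^{2/3}(1+o(1))$, to the left since $\rho_-<\tfrac12$.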
It follows that
\begin{equation}
\begin{aligned}
c^0_1-v_1&\geq r_0t_r^{2/3} N^{2/3}-8rt_rN^{2/3}(1+o(1))\\
&=\Big(r_0\delta(\log(\delta^{-1}))^{-2}-2\delta^{3/2}(\log(\delta^{-1}))^{-2}(1+o(1))\Big)N^{2/3}.
\end{aligned}
\end{equation}
For $\delta$ small enough
\begin{equation}
c^0_1-v_1\geq \frac12 r_0\delta(\log(\delta^{-1}))^{-2}N^{2/3}
\end{equation}
and
\begin{equation}
\frac{c^0_1-v_1}{(t_r N)^{2/3}}\geq \frac{r_0}2.
\end{equation}
It follows from Lemma~\ref{lem:sb} that
\begin{equation}\label{eq3}
\P(Z^{\rho_-}_{v,Ne_4}<0)\leq e^{-cr_0^3}.
\end{equation}
Plugging \eqref{eq} and \eqref{eq3} in \eqref{eq4} and using \eqref{eq5} implies the result.
\end{proof}
Let $o_2=Ne_4$ and define
\begin{equation}
\hat{I}_i=\widehat{G}_{o_2,(1-t_r)N\mathrm{e}_4-(i+1)\mathrm{e}_1}-\widehat{G}_{o_2,(1-t_r)N\mathrm{e}_4-i\mathrm{e}_1} \quad\textrm{for }i\in \Z
\end{equation}
and let $(\arr_j)_{j\in\Z}$ and $(\serv_j)_{j\in\Z}$ be two independent sequences of i.i.d.\ random variables, independent of $\hat{I}$, such that
\begin{equation}
a_0\sim \text{Exp}(1-{\rho_+}),\quad
s_0\sim \text{Exp}(1-{\rho_-}).
\end{equation}
For $i\in\Z$ define the shifted random variables
\begin{equation}
X^1_i=\hat{I}_i-2,\quad X^2_i=s_i-2,\quad X^3_i=a_i-2.
\end{equation}
Finally define the random walks
\begin{equation}
\begin{aligned}
&S^{a,x}_i=\sum_{k=x}^{i} X^a_k, \quad \textrm{for }a\in\{1,2,3\},\\
&S^{a,b,x}_i=S^{a,x}_i-S^{b,x}_i \quad \textrm{for }a,b\in\{1,2,3\}.
\end{aligned}
\end{equation}
In particular,
\begin{equation}
S^{2,3,x}_i=\sum_{k=x}^{i}(s_k-a_k).
\end{equation}
We also define unbiased versions of $S^{a,x}$ for $a\in\{2,3\}$
\begin{equation}
\bar{S}^{2,x}_i=\sum_{k=x}^{i}(s_k-(1-{\rho_-})^{-1}),\quad
\bar{S}^{3,x}_i=\sum_{k=x}^{i}(a_k-(1-{\rho_+})^{-1}).
\end{equation}
A simple computation gives, for $i>x$,
\begin{align}\label{ba}
S^{2,x}_i&\leq \bar{S}^{2,x}_i\leq S^{2,x}_i +(i-x)\frac{ rN^{-1/3}}{\frac14+\frac12rN^{-1/3}}\\
\bar{S}^{3,x}_i&\leq S^{3,x}_i\leq \bar{S}^{3,x}_i +(i-x)\frac{ rN^{-1/3}}{\frac14-\frac12rN^{-1/3}}.\label{ba2}
\end{align}
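Indeed, \eqref{ba} and \eqref{ba2} follow by comparing the centerings: with $\rho_\mp=\tfrac12\mp rN^{-1/3}$,
\begin{equation*}
2-(1-\rho_-)^{-1}=\frac{2rN^{-1/3}}{\frac12+rN^{-1/3}}=\frac{rN^{-1/3}}{\frac14+\frac12 rN^{-1/3}}\geq 0,\qquad
(1-\rho_+)^{-1}-2=\frac{rN^{-1/3}}{\frac14-\frac12 rN^{-1/3}}\geq 0,
\end{equation*}
and summing these constants over the summation range gives the stated bounds.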
Our next result controls the maximum of $S^{1,z_0}$ on $I$.
\begin{lemma}\label{lem:com}
There exist $C,c>0$ such that for any fixed $y>9r_0^{1/2}$
\begin{equation}
\P\Big(\sup_{z_0\leq i\leq z_1} |S^{1,z_0}_i|\leq y(z_1-z_0)^{1/2}\Big) \geq 1-2Ce^{-c(y-9r_0^{1/2})^2}-2e^{-cr_0^3}
\end{equation}
for all $N$ large enough.
\end{lemma}
\begin{proof}
Let
\begin{equation}
\lambda_\pm=\frac12\pm r_0(t_rN)^{-1/3}.
\end{equation}
We first show that
\begin{equation}\label{ine2}
\begin{aligned}
\P(\widehat Z^{\lambda_+}_{N\mathrm{e}_4,x}>0,\widehat Z^{\lambda_-}_{N\mathrm{e}_4,x}<0 \quad\forall x\in I)\geq 1-2e^{-cr_0^3}.
\end{aligned}
\end{equation}
We will show that
\begin{equation}\label{ine}
\P(\widehat Z^{\lambda_+}_{N\mathrm{e}_4,x}>0\quad\forall x\in I)\geq 1-e^{-cr_0^3},
\end{equation}
a similar bound can be shown for $\hat{Z}^{\lambda_-}_{N\mathrm{e}_4,x}$, and \eqref{ine2} then follows by a union bound. Note that
\begin{equation}
\P(\widehat Z^{\lambda_+}_{N\mathrm{e}_4,x}>0\quad\forall x\in I)=\P(\widehat Z^{\lambda_+}_{N\mathrm{e}_4,(1-t_r)Ne_4+z_1e_1}>0).
\end{equation}
Consider the characteristic $\lambda_+$ emanating from $Ne_4$ and its intersection point $d$ with the set $\{(1-t_r)Ne_4+ie_1\}_{i\in\Z}$. A simple approximation of the characteristic $\frac{(\lambda_+)^2}{(1-\lambda_+)^2}$ shows that
\begin{equation}
d_1= (1-t_r)N +8r_0(t_rN)^{2/3}+\cO(N^{1/3})\geq (1-t_r)N+2r_0 (t_r N)^{2/3}
\end{equation}
for all $N$ large enough.
It follows that
\begin{equation}
d_1-[(1-t_r)N+z_1]\geq r_0(t_r N)^{2/3}.
\end{equation}
As in previous proofs, applying Lemma~\ref{lem:sb} we conclude \eqref{ine} and therefore \eqref{ine2}.
Define
\begin{equation}
\begin{aligned}
\hat{I}^{\lambda_-}_i&=G^{\lambda_-}_{Ne_4,(1-t_r)Ne_4+ie_1}-G^{\lambda_-}_{Ne_4,(1-t_r)Ne_4+(i+1)e_1},\\
\hat{I}^{\lambda_+}_i&=G^{\lambda_+}_{Ne_4,(1-t_r)Ne_4+ie_1}-G^{\lambda_+}_{Ne_4,(1-t_r)Ne_4+(i+1)e_1}.
\end{aligned}
\end{equation}
Using the Comparison Lemma, see Section~\ref{SectCompLemma}, we obtain
\begin{equation}
\P(\hat{I}^{\lambda_-}_i\leq \hat{I}_i \leq \hat{I}^{\lambda_+}_i \quad i\in I)\geq 1-2e^{-cr_0^3}.
\end{equation}
Therefore, we also have
\begin{equation}\label{ine3}
\P\Big(\sum_{k=z_0}^{i}(\hat I^{\lambda_-}_i-2)\leq S^{1,z_0}_i \leq \sum_{k=z_0}^{i}(\hat I^{\lambda_+}_i-2)\Big)\geq 1-2e^{-cr_0^3}.
\end{equation}
Denote $\widehat S_i:=\sum_{k=z_0}^{i}(\hat I^{\lambda_+}_k-(1-\lambda_+)^{-1})$, which is a martingale starting at time $i=z_0$. Then
\begin{equation}
\begin{aligned}
&\P\Big(\sup_{z_0\leq i\leq z_1}\sum_{k=z_0}^{i}(\hat I^{\lambda_+}_k-2)\leq yr_0^{1/2}(t_rN)^{1/3}\Big)\\
&\geq\P\Big(\sup_{z_0\leq i\leq z_1}\widehat S_i+(z_1-z_0)((1-\lambda_+)^{-1}-2)\leq yr_0^{1/2}(t_rN)^{1/3}\Big)\\
&\geq \P\Big(\sup_{z_0\leq i\leq z_1}r_0^{-1/2}(t_rN)^{-1/3}\widehat S_i\leq y-9r_0^{1/2}\Big),
\end{aligned}
\end{equation}
where in the third line we used
\begin{equation}
(z_1-z_0)((1-\lambda_+)^{-1}-2)= 8r_0(t_rN)^{1/3}+\cO(1)\leq 9r_0(t_rN)^{1/3}
\end{equation}
for all $N$ large enough. The scaling is chosen such that, setting $i=z_0+2 \tau r_0(t_r N)^{2/3}$, the scaled random walk $r_0^{-1/2}(t_rN)^{-1/3}\widehat S_i$ converges weakly, as $N\to\infty$, to a Brownian motion on the interval $\tau\in [0,1]$ with some (finite) diffusion constant. Thus there exist constants $C,c>0$ such that for any given $y>9 r_0^{1/2}$
\begin{equation}\label{ine4}
\P\Big(\sup_{z_0\leq i\leq z_1}\sum_{k=z_0}^{i}(\hat I^{\lambda_+}_k-2)\leq yr_0^{1/2}(t_rN)^{1/3}\Big)\geq 1-Ce^{-c(y-9r_0^{1/2})^2}
\end{equation}
for all $N$ large enough. Similarly we show that for any given $y>9r_0^{1/2}$
\begin{equation}\label{ine5}
\P\Big(\inf_{z_0\leq i\leq z_1}\sum_{k=z_0}^{i}(\hat I^{\lambda_-}_k-2)\geq -yr_0^{1/2}(t_rN)^{1/3}\Big)\geq 1-Ce^{-c(y-9r_0^{1/2})^2}
\end{equation}
for all $N$ large enough. From \eqref{ine3}, \eqref{ine4} and \eqref{ine5} it follows that
\begin{equation}
\begin{aligned}
&\P\Big(\sup_{z_0\leq i\leq z_1} |S^{1,z_0}_i|\leq y(z_1-z_0)^{1/2}\Big)=\P\Big(\sup_{z_0\leq i\leq z_1} |S^{1,z_0}_i|\leq yr_0^{1/2}(t_rN)^{1/3}\Big) \\
&\geq\P\Big( \inf_{z_0\leq i\leq z_1}\sum_{k=z_0}^{i}(\hat I^{\lambda_-}_k-2)\geq -yr_0^{1/2}(t_rN)^{1/3},\sup_{z_0\leq i\leq z_1}\sum_{k=z_0}^{i}(\hat I^{\lambda_+}_k-2)\leq yr_0^{1/2}(t_rN)^{1/3}\Big)
\\
&\geq 1-2e^{-c(y-9r_0^{1/2})^2}-2e^{-cr_0^3}.
\end{aligned}
\end{equation}
\end{proof}
Next we control the fluctuations of the random walk $S^{2,z_0}$.
\begin{lemma}\label{lem:Llb1}
Let $\cC=\{\sup_{z_0\leq i \leq z_1}|S^{2,z_0}_i| \leq y(z_1-z_0)^{1/2} \}$. Under the choice of parameters in \eqref{eq4.31}, there exist $C,c,\delta_0>0$ such that for $\delta<\delta_0$ and any fixed $y>r_0^{1/2}$
\begin{equation}\label{ine1}
\P(\cC)>1-Ce^{-c(y-r_0^{1/2})^2}
\end{equation}
for all $N$ large enough.
\end{lemma}
\begin{proof}
By \eqref{ba} we have
\begin{equation}
\P(\cC )>\P\Big(\sup_{z_0\leq i \leq z_1}|(z_1-z_0)^{-1/2} \bar{S}^{2,z_0}_i| \leq y-(z_1-z_0)^{1/2}\frac{ rN^{-{1/3}}}{\frac14+\frac12rN^{-1/3}}\Big).
\end{equation}
Note that
\begin{equation}\label{eq6.52}
\begin{aligned}
(z_1-z_0)^{1/2}\frac{ rN^{-{1/3}}}{\frac14+\frac12rN^{-1/3}}&=(2r_0)^{1/2}(t_rN)^{1/3}\frac{ rN^{-{1/3}}}{\frac14+\frac12rN^{-1/3}}
=(2r_0)^{1/2}\frac{t_r^{1/3} r}{\frac14+\frac12rN^{-1/3}}\\
&=(2r_0)^{1/2}\frac{\tfrac14\delta^{1/2}}{\frac14+\frac12rN^{-1/3}}.
\end{aligned}
\end{equation}
Thus for all $N$ large enough and $\delta$ small enough
\begin{equation}
\P(\cC )>\P\Big(\sup_{z_0\leq i \leq z_1}|(z_1-z_0)^{-1/2} \bar{S}^{2,z_0}_i| \leq y-r_0^{1/2}\Big).
\end{equation}
Also notice that $(z_1-z_0)^{-1/2} \bar{S}^{2,z_0}_i$ converges weakly to a Brownian motion as $N\to\infty$. Using Doob's maximal inequality, one deduces that \eqref{ine1} indeed holds for $N$ large enough.
\end{proof}
For $M>0$ define
\begin{equation}
\cE_2=\Big\{\sup_{i\in I}|S^{2,z_0}_i| \leq M(z_1-z_0)^{1/2} \Big\}.
\end{equation}
For $x>0$, define the sets
\begin{equation}
\begin{aligned}
\cE_1&=\Big\{\cH^{{\rho_-}}\in I_-,\sup_{i\in I}|S^{2,1,z_0}_i|\leq 2M(z_1-z_0)^{1/2}\Big\},\\
\cE_{3,x}&=\Big\{\inf_{i\in I_-} (x+S^{2,3,z_0}_i) > -2M(z_1-z_0)^{1/2} ,\inf_{i\in I_+} (x+S^{2,3,z_0}_i)<-7M(z_1-z_0)^{1/2}\Big\}.
\end{aligned}
\end{equation}
Note that on the event $\cE_2$
\begin{equation}
S^{2,1,z_0}_i>-M(z_1-z_0)^{1/2}-S^{1,z_0}_i\textrm{ and }S^{2,1,z_0}_i<M(z_1-z_0)^{1/2}-S^{1,z_0}_i,
\end{equation}
which implies
\begin{equation}\label{subs}
\cE_1\cap \cE_2 \supseteq \{\cH^{{\rho_-}}\in I_-,\sup_{i\in I}|S^{1,z_0}_i|\leq M(z_1-z_0)^{1/2}\}\cap\cE_2.
\end{equation}
Similarly, on the event $\cE_2$
\begin{equation}\label{eq4.42}
S^{2,3,z_0}_i>-M(z_1-z_0)^{1/2}-S^{3,z_0}_i\textrm{ and }
S^{2,3,z_0}_i<M(z_1-z_0)^{1/2}-S^{3,z_0}_i,
\end{equation}
so that, for $x\in[0,(z_1-z_0)^{1/2}]$ and $M\geq 1$,
\begin{equation}
\begin{aligned}\label{subs2}
\cE_{3,x}\cap \cE_2
&\supseteq\{\inf_{i\in I_-} (x-S^{3,z_0}_i) > -M(z_1-z_0)^{1/2} ,\inf_{i\in I_+} (x-S^{3,z_0}_i)<-8M(z_1-z_0)^{1/2}\}\cap \cE_2\\
&\supseteq \{\sup_{i\in I_-} S^{3,z_0}_i < M(z_1-z_0)^{1/2} ,\sup_{i\in I_+} S^{3,z_0}_i>9M(z_1-z_0)^{1/2}\}\cap \cE_2\\
&\supseteq \{\sup_{i\in I_-} S^{3,z_0}_i < M(z_1-z_0)^{1/2} ,\sup_{i\in I_+} S^{3,z_0}_i-S^{3,z_0}_0>9M(z_1-z_0)^{1/2}-S^{3,z_0}_0\}\cap \cE_2\\
&\supseteq \{\sup_{i\in I_-} |S^{3,z_0}_i| < M(z_1-z_0)^{1/2} ,\sup_{i\in I_+} S^{3,0}_i>10M(z_1-z_0)^{1/2}\}\cap \cE_2\\
&\supseteq \{\sup_{i\in I_-} |S^{3,z_0}_i| < M(z_1-z_0)^{1/2} , S^{3,0}_{z_1}>10M(z_1-z_0)^{1/2}\}\cap \cE_2,
\end{aligned}
\end{equation}
where in the first inclusion we used \eqref{eq4.42}, and in the second we used $x\geq 0$ for the first term and $x\leq(z_1-z_0)^{1/2}$ for the second one.
Next we prove that, conditioned on $\cE_2$, the events $\cE_1$ and $\cE_{3,x}$ occur with positive probability.
\begin{lemma}\label{lem:Llb2} There exist $c_2,r_0>0$ and $M\geq 1$ such that for $x\in [0,(z_1-z_0)^{1/2}]$
\begin{equation}
\P(\cE_1,\cE_{3,x}|\cE_2)>c_2.
\end{equation}
for all $N$ large enough.
\end{lemma}
\begin{proof}
Define
\begin{equation}
\begin{aligned}
&\cF_1=\{\cH^{{\rho_-}}\in I_-,\sup_{i\in I}|S^{1,z_0}_i|\leq M(z_1-z_0)^{1/2}\},\\
&\cF_3=\{\sup_{i\in I_-} |S^{3,z_0}_i| < M(z_1-z_0)^{1/2} , S^{3,0}_{z_1}>10M(z_1-z_0)^{1/2}\}.
\end{aligned}
\end{equation}
By \eqref{subs} and \eqref{subs2} and the independence of $S^{1,z_0}$ and $S^{3,z_0}$
\begin{equation}\label{atb}
\P(\cE_1,\cE_{3,x}|\cE_2)\geq \P(\cF_1,\cF_3|\cE_2)=\P(\cF_1)\P(\cF_3).
\end{equation}
Next we want to derive lower bounds for $\P(\cF_1)$ and $\P(\cF_3)$.
Note that
\begin{equation}
\P(\cF_1)\geq \P(\cH^{{\rho_-}}\in I_-)-\P(\sup_{i\in I}|S^{1,z_0}_i|\geq M(z_1-z_0)^{1/2}).
\end{equation}
By Lemma~\ref{lem:gl} and Lemma~\ref{lem:com} there exists $r_0>0$ and $M>9r_0^{1/2}$ for which
\begin{equation}
\P(\cH^{{\rho_-}}\in I_-)\geq 1/4\quad \textrm{ and }\quad\P(\sup_{i\in I}|S^{1,z_0}_i|\geq M(z_1-z_0)^{1/2})\leq 1/8,
\end{equation}
so that
\begin{equation}\label{f1}
\P(\cF_1)\geq 1/8.
\end{equation}
Let us now find a lower bound for $\P(\cF_3)$:
\begin{align}
&\P(\cF_3)=\P\Big(\sup_{i\in I_-} |S^{3,z_0}_i| < M(z_1-z_0)^{1/2} ,S^{3,0}_{z_1}>10M(z_1-z_0)^{1/2}\Big)\\
&=\P\Big(\sup_{i\in I_-} |S^{3,z_0}_i| < M(z_1-z_0)^{1/2}\Big)\P\Big( S^{3,0}_{z_1}>10M(z_1-z_0)^{1/2}\Big)
\end{align}
since the processes $\{S^{3,z_0}_{i}\}_{i\in I_-}$ and $\{S^{3,0}_{i}\}_{i\in I_+}$ are independent.
Note that the centered random walks $\{(z_1-z_0)^{-1/2}\bar S^{3,z_0}_{i}\}_{i\in I_-}$ and $\{(z_1-z_0)^{-1/2}\bar S^{3,0}_{i}\}_{i\in I_+}$ converge weakly to a Brownian motion.
Furthermore, the difference coming from the non-zero drift of $S^{3,z_0}_i$ is, by \eqref{ba2}, bounded by
\begin{equation}
\sup_{i\in I}|(z_1-z_0)^{-1/2}S^{3,z_0}_i-(z_1-z_0)^{-1/2}\bar S^{3,z_0}_i|\leq (z_1-z_0)^{1/2}\frac{r N^{-1/3}}{\frac14-\frac12 r N^{-1/3}}
\end{equation}
which, similarly to \eqref{eq6.52}, is of order $(r_0\delta)^{1/2}$ and hence negligible for small $\delta$.
Thus by choosing $M$ large enough, there exists $c_2>0$ such that for $N$ large enough
\begin{equation}
\begin{aligned}
&\P\Big(\sup_{i\in I_-} |S^{3,z_0}_i| < M(z_1-z_0)^{1/2}\Big)\geq 1/2,\\
&\P\Big( S^{3,0}_{z_1}>10M(z_1-z_0)^{1/2}\Big)\geq 16c_2.
\end{aligned}
\end{equation}
It follows that
\begin{align}\label{f3}
\P(\cF_3)\geq 8c_2.
\end{align}
Plugging \eqref{f1} and \eqref{f3} in \eqref{atb} we obtain the result.
\end{proof}
Now we can prove the main statement of this section.
\begin{proof}[Proof of Proposition~\ref{prop:Lb}]
Note that
\begin{equation}
\Big\{\cH^{{\rho_-}}\in I_-,\sup_{i\in I_-}\sum_{k=z_0}^i(\hat I_k^{{\rho_+}}-\hat{I}_k)<\sup_{i\in I_+}\sum_{k=z_0}^i(\hat I_k^{{\rho_+}}-\hat{I}_k)\Big\}
\subseteq \Big\{ \cH^{{\rho_-}}\in I_-, \cH^{{\rho_+}}>0 \Big\}.
\end{equation}
Indeed, if $\cH^{{\rho_-}}\in I_-$ then also $\cH^{{\rho_+}}\geq z_0$, and the second condition implies that $\cH^{{\rho_+}}\not\in I_-$.
Using a decomposition as in \eqref{eqEjs}, we can write
\begin{equation}
\begin{aligned}
&\Big\{\cH^{{\rho_-}}\in I_-,\sup_{i\in I_-}\sum_{k=z_0}^i(\hat I_k^{{\rho_+}}-\hat{I}_k)<\sup_{i\in I_+}\sum_{k=z_0}^i(\hat I_k^{{\rho_+}}-\hat{I}_k)\Big\}\\
=&\Big\{\cH^{{\rho_-}}\in I_-, \sup_{i\in I_-}\sum_{k=z_0}^ie_k+\sum_{k=z_0}^i(\hat I_k^{{\rho_-}}-\hat{I}_k)<\sup_{i\in I_+}\sum_{k=z_0}^ie_k+\sum_{k=z_0}^i(\hat I_k^{{\rho_-}}-\hat{I}_k)\Big\}\label{i}
\end{aligned}
\end{equation}
where $e_j=\hat I^{\rho_+}_j-\hat I^{\rho_-}_j$. By \eqref{sume} the following event has the same probability as \eqref{i}
\begin{multline}
\cE_4=\Big\{\cH^{{\rho_-}}\in I_-,\\
\sup_{i\in I_-}\Big(\inf_{z_0\leq l\leq i}w_{z_0-1}+S^{2,3,z_0}_l\Big)^-+S^{2,1,z_0}_i<\sup_{i\in I_+}\Big(\inf_{z_0\leq l\leq i}w_{z_0-1}+S^{2,3,z_0}_l\Big)^-+S^{2,1,z_0}_i\Big\}.
\end{multline}
It follows that
\begin{equation}\label{Llb5}
\P(\cH^{{\rho_-}}\in I_-, \cH^{{\rho_+}}>0) \geq \P(\cE_4).
\end{equation}
For $x>0$, define
\begin{multline}
\cE_{4,x}=\Big\{\sup_{i\in I_-}S^{2,1,z_0}_i>\sup_{i\in I_+}S^{2,1,z_0}_i,\\
\sup_{i\in I_-}\Big(\inf_{z_0\leq l\leq i}x+S^{2,3,z_0}_l\Big)^-+S^{2,1,z_0}_i<\sup_{i\in I_+}\Big(\inf_{z_0\leq l\leq i}x+S^{2,3,z_0}_l\Big)^-+S^{2,1,z_0}_i\Big\}.
\end{multline}
Note that
\begin{equation}
\cE_2\cap\cE_1\cap\cE_{3,x} \subseteq \cE_{4,x}.
\end{equation}
In our case, the value of $x$ in $\cE_{4,x}$ is random and distributed according to \eqref{des}. Therefore we have
\begin{equation}
\begin{aligned}
\P(\cE_4)&=\int_0^\infty\P(\cE_4|w_{z_0-1}=w)f(dw)=\int_0^\infty\P(\cE_{4,w})f(dw)\\
&\geq \int_0^\infty\P(\cE_2,\cE_1,\cE_{3,w})f(dw)\geq \int_0^{(z_1-z_0)^{1/2}}\P(\cE_2,\cE_1,\cE_{3,w})f(dw),\label{Llb}
\end{aligned}
\end{equation}
where in the second equality we used the fact that the processes $\{S^{i,z_0}_j\}_{j\geq z_0,i\in\{1,2,3\}}$ are independent of $w_{z_0-1}$.
Taking $M$ large enough, Lemma~\ref{lem:Llb1} gives
\begin{equation}\label{e2b}
\P(\cE_2)\geq 1/2
\end{equation}
for all $N$ large enough. Together with Lemma~\ref{lem:Llb2}, \eqref{e2b} implies that for any $w\in [0,(z_1-z_0)^{1/2}]$, $\delta<\delta_0$, and large enough $N$
\begin{equation}\label{Llb4}
\P(\cE_2,\cE_1,\cE_{3,w})\geq \tfrac12 c_2.
\end{equation}
Plugging \eqref{Llb4} in \eqref{Llb}
\begin{equation}
\P(\cE_4)\geq \tfrac12 c_2\Big(1-\frac{\frac12-rN^{-1/3}}{\frac12+rN^{-1/3}}e^{-2rN^{-1/3}(z_1-z_0)^{1/2}}\Big).
\end{equation}
Note that
\begin{equation}
2rN^{-1/3}(z_1-z_0)^{1/2}=2r (2r_0)^{1/2}t_r^{1/3}=2^{3/2} r_0^{1/2}\delta^{1/2}\rightarrow 0
\end{equation}
if $r_0 \delta\rightarrow 0$ as $\delta\to 0$. Then by first order approximation of the exponential function, there exists $C>0$ such that for $\delta<\delta_0$, $r_0\leq \delta^{-1}(\log\delta^{-1})^{-1}$, and large enough $N$
\begin{equation}\label{Llb6}
\P(\cE_4)\geq C\delta^{1/2}.
\end{equation}
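In more detail (a sketch of the first-order approximation used here): abbreviating $\varepsilon=rN^{-1/3}$ and $s=(z_1-z_0)^{1/2}$,
\begin{equation*}
1-\frac{\frac12-\varepsilon}{\frac12+\varepsilon}e^{-2\varepsilon s}
=1-\big(1-4\varepsilon+O(\varepsilon^2)\big)\big(1-2\varepsilon s+O(\varepsilon^2s^2)\big)
=2\varepsilon s+O(\varepsilon)+O(\varepsilon^2 s^2),
\end{equation*}
which, combined with $2\varepsilon s=2^{3/2}r_0^{1/2}\delta^{1/2}$ from the previous display, yields \eqref{Llb6}.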
Using \eqref{Llb6} in \eqref{Llb5} we obtain the result.
\end{proof}
\subsection{Proof of Theorem~\ref{thm:LBcoal}}
Finally we prove the second main result of this paper.
\begin{proof}[Proof of Theorem~\ref{thm:LBcoal}]
Let
\begin{equation}
\rho'_+=\frac12+\frac1{120}rN^{-1/3},\quad
\rho'_-=\frac12-\frac1{120}rN^{-1/3},
\end{equation}
and define
\begin{equation}
\cA'=\left\{-\frac r8 N^{2/3} \leq Z^{ \rho'_-}_{o,N\mathrm{e}_4}\leq Z^{\rho'_+}_{o,N\mathrm{e}_4}\leq \frac r8 N^{2/3}\right\}.
\end{equation}
By Lemma~\ref{lem:sb}
\begin{equation}\label{Ap}
\P(\cA')\geq 1-e^{-cr^3}.
\end{equation}
Define
\begin{equation}
y^1=\frac14rN^{2/3} \mathrm{e}_1,\quad
y^2=\frac14rN^{2/3} \mathrm{e}_2.
\end{equation}
Let $o^2=N\mathrm{e}_4$. Similar to \eqref{ex} we define
\begin{equation}
H^j=\sup\{i:(i,(1-t_r)N)\in \pi_{y^j,o^2}\} \quad \textrm{for }j\in\{1,2\}.
\end{equation}
Note that on the event $\cA'$, the geodesics $\pi^{\rho'_-}_{o,o^2},\pi^{\rho'_+}_{o,o^2}$ are sandwiched between the geodesics $\pi_{y^1,o^2},\pi_{y^2,o^2}$, which implies that if the geodesics $\pi^{\rho'_-}_{o,o^2},\pi^{\rho'_+}_{o,o^2}$ did not coalesce then neither did $\pi_{y^1,o^2},\pi_{y^2,o^2}$, i.e.,
\begin{equation}\label{cont}
\cA'\cap \{\cH^{\rho'_-}\in I_-, \cH^{\rho'_+}>0\} \subseteq \{C_p(\pi_{y^1,o^2},\pi_{y^2,o^2})>L_{1-t_r}\}.
\end{equation}
Indeed, on the event $\cA'$
\begin{equation}
-y^2_2<Z^{\rho'_-}_{o,o^2}\leq Z^{\rho'_+}_{o,o^2} \leq y^1_1
\end{equation}
so that
\begin{equation}
\pi_{y^2,o^2} \preceq \pi^{\rho'_-}_{o,o^2}\preceq \pi^{\rho'_+}_{o,o^2} \preceq \pi_{y^1,o^2}
\end{equation}
which implies that under $\cA'\cap \{\cH^{\rho'_-}\in I_-, \cH^{\rho'_+}>0\}$
\begin{equation}
L_{1-t_r}\leq C_p(\pi^{\rho'_+}_{o,o^2},\pi^{\rho'_-}_{o,o^2})\leq C_p(\pi_{y^1,o^2},\pi_{y^2,o^2}).
\end{equation}
Note that as $y^1,y^2\in \cR^{r/2,1/4}$ and $o^2\in \cC^{s_r/2,t_r}$
\begin{equation}\label{eq8}
\{C_p(\pi_{y^1,o^2},\pi_{y^2,o^2})>L_{1-t_r}\}\subseteq \{\exists x \in \cC^{s_r/2,t_r},y\in\cR^{r/2,1/4}: C_p(\pi^{1/2}_{o,x},\pi_{y,x})>L_{1-t_r}\}.
\end{equation}
Indeed, if the geodesics $\pi_{y^1,o^2}$ and $\pi_{y^2,o^2}$ do not meet before the time horizon $L_{1-t_r}$, at least one of them did not coalesce with the geodesic $\pi^{1/2}_{o,o^2}$ before $L_{1-t_r}$. It follows from \eqref{cont}, \eqref{eq8} and \eqref{Ap} that
\begin{equation}
\P(\exists x \in \cC^{s_r/2,t_r},y\in\cR^{r/2,1/4}: C_p(\pi^{1/2}_{o,x},\pi_{y,x})>L_{1-t_r})\geq C\delta^{1/2}.
\end{equation}
\end{proof}
\subsection{Proof of Theorem~\ref{thm:coal2}}
\begin{proof}[Proof of Theorem~\ref{thm:coal2}]
Note that
\begin{equation}\label{inc}
\begin{aligned}
&\{C_p(\pi^{1/2}_{o,x},\pi_{y,x})\leq L_{1-\tau} \quad \forall x\in \cC^{\delta,\tau},y\in \cR^{\frac18\log\delta^{-1},1/4}\}\\
&\subseteq \{C_p(\pi_{w,x},\pi_{y,x})\leq L_{1-\tau} \quad \forall x\in \cC^{\delta,\tau},w,y\in \cR^{\frac18\log\delta^{-1},1/4}\}.
\end{aligned}
\end{equation}
Indeed, on the event that any geodesic starting from $\cR^{\frac18\log\delta^{-1},1/4}$ and terminating in $\cC^{\delta,\tau}$ coalesces with the stationary geodesic before the time horizon $L_{1-\tau}$, any two geodesics starting from $\cR^{\frac18\log\delta^{-1},1/4}$ and terminating in $\cC^{\delta,\tau}$ must coalesce as well. Theorem~\ref{thm:coal} and \eqref{inc} imply the lower bound in Theorem~\ref{thm:coal2}.
Next note that
\begin{equation}
\begin{aligned}\label{inc2}
&\{\exists x\in \cC^{\delta,\tau},y\in \cR^{\frac18\log\delta^{-1},1/4}: C_p(\pi^{1/2}_{o,x},\pi_{y,x})> L_{1-\tau}, |Z^{1/2}_{o,x}|\leq \frac1{16}\log(\delta^{-1})N^{2/3} \}\\
&\subseteq \{C_p(\pi_{w,x},\pi_{y,x})\leq L_{1-\tau} \quad \forall x\in \cC^{\delta,\tau},w,y\in \cR^{\frac18\log\delta^{-1},1/4}\}^c.
\end{aligned}
\end{equation}
To illustrate the validity of \eqref{inc2}, assume w.l.o.g.\ that $x=N\mathrm{e}_4$, $y=o$ such that $C_p(\pi^{1/2}_{o,N\mathrm{e}_4},\pi_{o,N\mathrm{e}_4})>L_{1-\tau}$ and that $\frac1{16}\log(\delta^{-1})N^{2/3}\geq Z^{1/2}_{o,N\mathrm{e}_4}=a>0$. It follows that
\begin{equation}\label{set}
C_p(\pi_{a\mathrm{e}_1,N\mathrm{e}_4},\pi_{o,N\mathrm{e}_4})> L_{1-\tau}
\end{equation}
holds. The event in \eqref{set} is contained in the event in the last line of \eqref{inc2} which implies \eqref{inc2}.
\eqref{inc2} implies that
\begin{equation}
\begin{aligned}\label{ine7}
&\P\Big(C_p(\pi^{1/2}_{o,x},\pi_{y,x})\leq L_{1-\tau} \quad \forall x\in \cC^{\delta,\tau},y\in \cR^{\frac18\log\delta^{-1},1/4}\Big)\\
&+\P\Big(|Z^{1/2}_{o,x}|>\frac1{16}\log(\delta^{-1})N^{2/3} \quad \text{ for some }x\in \cC^{\delta,\tau}\Big)\\
&\geq\P\Big(C_p(\pi_{w,x},\pi_{y,x})\leq L_{1-\tau} \quad \forall x\in \cC^{\delta,\tau},w,y\in \cR^{\frac18\log\delta^{-1},1/4}\Big).
\end{aligned}
\end{equation}
Next we claim that for some $c>0$
\begin{equation}\label{ine6}
\P\Big(|Z^{1/2}_{o,x}|>\frac1{16}\log(\delta^{-1})N^{2/3} \quad \text{ for some }x\in \cC^{\delta,\tau}\Big)\leq e^{-c\log(\delta^{-1})^3}.
\end{equation}
Indeed, it follows by \eqref{r1} with $r=\frac1{15}\frac1{16}\log(\delta^{-1})$ that
\begin{equation}
\begin{aligned}
&\P\Big(Z^{1/2}_{o,x}>\frac1{16}\log(\delta^{-1})N^{2/3} \quad \text{ for some }x\in \cC^{\delta,\tau}\Big)\\
&\leq \P\Big(Z^{\rho_+}_{o,x}>\frac1{16}\log(\delta^{-1})N^{2/3} \quad \text{ for some }x\in \cC^{\delta,\tau}\Big) \leq e^{-c\log(\delta^{-1})^3}.
\end{aligned}
\end{equation}
A similar bound can be obtained for the lower tail to obtain \eqref{ine6}.
Using Theorem~\ref{thm:LBcoal} and \eqref{ine6} in \eqref{ine7} we obtain the upper bound in Theorem~\ref{thm:coal2}.
\end{proof}
\section{Introduction}
Swimming on the microscale at low Reynolds numbers requires special propulsion
mechanisms which are effective in the presence of dominating viscous
forces.
Phoretic swimmers create gradients in external fields such as
concentration or temperature which in turn give rise
to symmetry-breaking interfacial forces leading to propulsion
if they overcome the friction force of the microswimmer
\cite{Illien2017}.
Besides phoretic swimmers, self-deforming or shape-changing
swimmers are the largest class
of microswimmers. They deform their body in a cyclic way in order to propel.
This general principle has the advantage that it works
independently of the environment, i.e., it does not require an external
field that could potentially alter the properties of the fluid.
The disadvantage of swimming by deformation is, on the other
hand, that there are necessarily ``moving parts'' causing additional
viscous flow in the fluid and forces on the swimmer.
As a consequence, at low Reynolds numbers,
the cyclic deformation pattern must not be invariant under time-reversal:
the scallop theorem formulated by Purcell states that
periodic reciprocal patterns of deformation cannot lead to an effective
net motion on the microscale because of
the linearity of the Stokes equation \cite{Purcell1977}.
In nature, many
different examples of deformation swimmers can be found such as bacteria,
algae and spermatozoa \cite{Lauga2009,Elgeti2015}. These natural swimmers often
rely on the movement of a few flagella or many cilia on their surface
\cite{Taylor1951,Berg1973,Goldstein2015,Jeanneret2016}.
Flagella employ a periodic forcing but overcome the scallop theorem by
exploiting friction along the elastic flagellum to break
time-reversibility.
This requires a matching of
driving and frictional damping time scales for
efficient propulsion. Often it also requires
the ability of local actuation for the periodic forcing.
This makes this concept hard to reproduce or imitate in a controlled
fashion in an
artificial system \cite{Dreyfus2005,Tottori2013}.
Another basic strategy to overcome the scallop theorem are
deformation cycles that involve at least two control parameters
and drive the swimmer periodically along a closed contour
in this at least two-dimensional parameter space.
Different shape changing
artificial swimmers have been
developed based on this concept starting with
Purcell's three-link swimmer \cite{Purcell1977} and including
swimmers performing
small deformations of spheres, circles, or cylinders
\cite{Lighthill1952,Blake1971,shapere1987,Felderhof1994,avron2004},
or shape-changing
vesicles \cite{evans2010}.
The most simple shape changing microswimmer is arguably the
one-dimensional linear three-bead swimmer developed by Najafi and Golestanian,
where three beads change their distance in a non-time-reversible
way \cite{NajafiGolestanian2004,Golestanian2008}. By extending the linear
three bead arrangement to a second dimension in
a triangular shape, a three bead swimmer can perform
two-dimensional motions (circles) \cite{Ledesma-Aguilar2012} and
steer \cite{Rizvi2018}.
Nevertheless, despite the simplicity of the
concept, this type of swimmer is difficult to implement
experimentally because it requires fine control
over, at least, two control parameters such as the bead positions
of the three-bead swimmer \cite{Leoni2009,Golestanian2010}.
We employ a different general strategy in order to
overcome the scallop theorem, which is
widely applicable and only involves control of a single
global and scalar
control parameter, which couples, however, to a hysteretic
(or bistable)
shape transition of the system, see Fig.\ \ref{fig:hysteresis_sketch}.
If also the sequence of shapes exhibits hysteresis, this
converts the time-reversible
motion in one-dimensional control parameter space into
a non-time-reversible motion in a higher-dimensional
parameter or shape space.
Hysteretic shape transitions can be realized, for example,
by using the intrinsic properties of elastic
materials. In this work, we will realize
such a hysteretic shape transition
based on a swelling process of a flat and thin circular elastic disk, where
material swelling with swelling ratios of only a few percent
in the central region of the disk
leads to a shape transition from the flat disk shape into curved
conformations, such as a dome-like shape
\cite{KleinEfrati2007,Efrati2009,Pezulla2015}. The snapping into
an elliptic dome-like shape actually faintly resembles the opening
and closing of a
scallop. By further enhancing the elastic disk
with a fixed frame with attractive interactions
we can endow this transition with genuine hysteretic effects.
These hysteresis effects allow us to break the reciprocity
of the shape cycle
although we employ simple cyclic and fully time-reversible
oscillations of
the swelling factor as single global and scalar
control parameter.
The main point of this paper is to give the proof of concept
that this leads to net propulsion and is a viable
realization of a microswimmer.
The principle of exploiting a periodically driven
hysteretic shape transition
in order to achieve net propulsion has been introduced
in Ref.\ \citenum{Djelloul2017} using the
buckling transition of spherical elastic
shells as propulsion mechanism.
The buckling of spherical shells
is a subcritical hysteretic shape transition
\cite{Knoche2011a,Knoche2014,Baumgarten2019},
which will turn out to be conceptually similar to the
elastic instability triggered in the elastic disk by localized
swelling.
Deformation cycles of
elastic materials have been applied before to design
artificial swimmers.
In Ref.\ \citenum{Palagi2016}, structured light fields were
used to drive elastic deformation waves on swimmers with a
homogeneous body made of a soft material.
Other approaches focused on elastic
double layers, where swelling of one layer can induce bending;
such externally controllable
swelling layers can be engineered, for example, using
thermoresponsive microgels \cite{Mourran2017}.
These ideas
were used to design swimmers with a
helical structure that can be propelled by conformation changes of the
helix \cite{Mourran2017,Zhang2017,Koens2018}. Here conformation changes
are non-reciprocal, partly because of hysteresis effects in the
heating cycle of the thermoresponsive gel.
Deformation swimmers are relatively slow in general, because the swimming
distance only scales quadratically with the deformation
displacement for many
deformation swimmers \cite{Lighthill1952,Blake1971}.
Therefore, a high frequency of conformation
changes is needed in order to achieve a significant swimming velocity.
This applies also to the concept of swimming by hysteretic shape changes.
The driving frequency is, however, not limited by an
additional damping time scale (as, for example, in flagellar motion)
because breaking
of the time-reversibility is inherent
in the hysteretic shape sequence itself but,
at high driving frequencies, one could leave the realm of
low Reynolds numbers.
\begin{figure}
\centering
\includegraphics[width=0.9\linewidth]{Fig1.pdf}
\caption{A completely time-reversible oscillation of the control parameter
(horizontal axis, here: swelling ratio)
gives rise to a non-time-reversible
shape cycle of an elastic disk
because of hysteresis of the triggered shape
transition. The deformation cycle between shapes A and C
resembles the
opening and closing of a scallop but is hysteretic (compressed
shape B and the transition state between B and C are only
visited upon swelling).
}
\label{fig:hysteresis_sketch}
\end{figure}
The paper is organized as follows.
At first, we present a ``dry''
numerical analysis of the elastic deformation cycle of the elastic disk
with material swelling in the interior in the absence of a surrounding fluid and
corresponding hydrodynamic interactions.
We quantify the hysteretic effects of the swelling transition and
show that there is most likely no genuine hysteresis in
experiments on simple swelling disks.
We can enhance the model system by adding an additional fixed frame with an
attractive interaction to the disk in order to generate a robust and
genuine hysteretic shape transition.
Then we perform a ``wet'' hydrodynamic simulation featuring a Rotne-Prager
interaction in order to prove the swimming ability of this system and
characterize the net propulsion. Finally, a
simplified 9-bead model is presented that mimics the essence of the
underlying swimming mechanism and is able to qualitatively reproduce and
explain its main characteristics.
\section{Swelling of a flat disk}
\subsection{Theory}
In the following, we consider a flat circular disk with radius
$R_\mathrm{out}$ and thickness $h$. The disk shall be very thin
($h\ll R_\mathrm{out}$), so we can use a two-dimensional model. We
parametrize our two-dimensional disk in polar coordinates $(r,\,\varphi)$. The
basic idea is to deform the disk into a curved shape
by a localized, i.e., inhomogeneous swelling process
which changes the disk's metric
\cite{KleinEfrati2007,Efrati2009,Pezulla2015}.
By swelling we mean a local isotropic swelling, where
the rest lengths of fibers change by a position-dependent factor
$A(r,\,\varphi)$ independent of fiber orientation.
In the following, we restrict
ourselves to radially symmetric
swelling functions $A(r)$; the neutral case of the flat disk is
represented by $A(r) = 1$, $A(r)>1$ corresponds to local swelling,
$A(r)<1$ to local shrinking of fibers.
In order to calculate the change in metric by a swelling
function $A(r)$ we re-parametrize
the deformed shape using Gaussian normal coordinates.
The Gaussian radial coordinate is given by
$\rho \equiv \int_0^r A(\tilde{r}) d\tilde{r}$, which is the
distance of a point to the origin of the coordinate system following the
surface and reduces to the standard radial coordinate $r$ for a
flat disk $A(r)=1$. The angular coordinate $\varphi$ remains
unchanged.
In Gaussian coordinates a deformed fiber in radial direction has
length $dl_\rho = d\rho = A(r) dr$, a deformed
circumferential fiber
a length $dl_\varphi = rA(r) d\varphi$.
In general, the fiber length is related to the
metric by $dl^2 = g_{\rho\rho} d\rho^2 + 2g_{\rho\varphi}d\rho d\varphi+
g_{\varphi\varphi} d\varphi^2$
such that the metric tensor of the swollen disk can be read
off as
%
\begin{align}
\label{eq:metric_tensor_disk}
\bar{\textbf{g}} = \begin{pmatrix}
1 & 0\\
0 & r^2(\rho)A^2(r(\rho))
\end{pmatrix}
\end{align}
in Gaussian normal coordinates.
This is the so-called target metric which represents the
preferred equilibrium state of the swollen disk \cite{Efrati2009}.
According to the Theorema Egregium the Gaussian curvature $\bar{K}$
can be deduced solely from the metric tensor. Using the Brioschi formula
(with respect to the Gaussian coordinates $\rho$ and $\varphi$
of the metric) we find
\begin{align}
\bar{K}(\rho) &= -\frac{\partial_\rho^2(r(\rho)A(r(\rho)))}
{r(\rho)A(r(\rho))},
\label{eq:GaussKrho}
\end{align}
as a function of the Gaussian radial coordinate $\rho$.
We can transform to the standard radial coordinate $r$ by using
$dr/d\rho = 1/A(r)$, which gives
\begin{align}
\bar{K}(r) &= - \frac{A'(r) + r A''(r) - r A'^2(r)/A(r)}
{r A^3(r)}.
\label{eq:GaussK}
\end{align}
This is the Gaussian curvature if a shape with the metric
(\ref{eq:metric_tensor_disk}) could be embedded
into three-dimensional Euclidean space.
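The equivalence of \eqref{eq:GaussKrho} and \eqref{eq:GaussK} under the change of coordinates $dr/d\rho=1/A(r)$ can be checked symbolically; the following sketch (using sympy, for a generic smooth $A(r)$) verifies the chain-rule computation:

```python
import sympy as sp

r = sp.symbols('r', positive=True)
A = sp.Function('A', positive=True)(r)

# Eq. (2): K(rho) = -d^2_rho(r A) / (r A), rewritten in the coordinate r
# via the chain rule d/drho = (1/A) d/dr  (since dr/drho = 1/A).
f = r * A
d_drho = lambda g: sp.diff(g, r) / A
K_from_rho = -d_drho(d_drho(f)) / f

# Eq. (3) as printed in the text.
K_claimed = -(sp.diff(A, r) + r * sp.diff(A, r, 2)
              - r * sp.diff(A, r)**2 / A) / (r * A**3)

assert sp.simplify(K_from_rho - K_claimed) == 0
```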
In order to deform the disk into a curved shape, we define a simple
class of swelling patterns.
Similar to Pezulla {\it et al.} \cite{Pezulla2015}, we
divide the disk into two parts: An inner disk with
$0\le r \le R_{\mathrm{in}}$ and an outer annulus with
$R_{\mathrm{in}}<r\le R_\mathrm{out}$. Within these two
regions, the swelling function $A(r)$ shall be piecewise constant.
To simplify things further, we define the inner disk to swell
with a constant factor $\alpha$, while the outer annulus
shall always do the exact opposite, i.e., it
shrinks with the inverse constant factor $1/\alpha$.
In total, the considered
swelling functions $A(r)$ can be written as
\begin{align}
\label{eq:def_alpha}
A(r) = \begin{cases}
{\alpha}, & r\in [0, R_{\mathrm{in}}]\\
\frac{1}{\alpha}, & r\in(R_{\mathrm{in}}, R_\mathrm{out}].
\end{cases}
\end{align}
The swelling process is thus defined by two simple control parameters: The
swelling factor $\alpha$ and the geometrical ratio
$R_\mathrm{in}/R_\mathrm{out}$, which will be kept constant at
$R_\mathrm{in}/R_\mathrm{out}=0.5$ in the following
so that we can focus on the influence of $\alpha$.
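For this piecewise constant pattern, the Gaussian radial coordinate introduced above can be evaluated explicitly:
\begin{align*}
\rho(r) = \int_0^r A(\tilde r)\,d\tilde r =
\begin{cases}
\alpha r, & r\in[0, R_\mathrm{in}],\\
\alpha R_\mathrm{in} + (r-R_\mathrm{in})/\alpha, & r\in(R_\mathrm{in}, R_\mathrm{out}].
\end{cases}
\end{align*}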
We can distinguish between two general cases: (a) $\alpha>1$
means that the material in the inner disk
is swelling, while the outer annulus is shrinking,
which is illustrated in Fig.\ \ref{fig:swell_sketch}(a).
The opposite case, (b) $\alpha < 1$, leads to a shrinking inner
disk and a swelling annulus (Fig.\ \ref{fig:swell_sketch}(b)).
In order to apply the Brioschi formula (\ref{eq:GaussK}),
we have to smear out the step function at $r=R_{\mathrm{in}}$
in eq.\ (\ref{eq:def_alpha}). For $\alpha>1$,
we have $A'(r)\approx 0$
except around $r=R_{\mathrm{in}}$, where $-A'(r)$ is peaked.
Therefore, the last term in the numerator in eq.\ (\ref{eq:GaussK})
dominates and gives a positive Gaussian curvature
which is peaked around $r=R_{\mathrm{in}}$ if a surface with the
piecewise metric \eqref{eq:metric_tensor_disk}
could be embedded into three-dimensional space.
Although the metric \eqref{eq:metric_tensor_disk}
is piecewise flat, Gaussian curvature has to be introduced
because the inner disk is bonded to the outer annulus.
This Gaussian curvature is positive for $\alpha>1$ and negative
for $\alpha<1$.
Because the metric can actually not be embedded, the Gaussian
curvature will be redistributed on the entire disk to minimize
the elastic energy.
We thus expect a
target elliptic shape with an overall positive Gaussian curvature
for $\alpha>1$, and
a hyperbolic target shape with a negative Gaussian curvature for $\alpha<1$.
\begin{figure}[t]
\centering
\includegraphics[width=0.9\linewidth]{Fig2.pdf}
\caption{Schematic demonstration of the swelling pattern. The
disk is divided into two parts. The inner disk (red)
swells with a constant factor $\alpha$, the outer annulus
(blue) with $1/\alpha$. (a): With $\alpha > 1$, the inner
disk expands and the annulus shrinks. (b): Shrinking in the
inner region and material expansion in the annulus with
$\alpha < 1$.}
\label{fig:swell_sketch}
\end{figure}
Obviously, for $\alpha \neq 1$ the outer annulus
and the inner disk are incompatible at $r=R_\mathrm{in}$.
Therefore, the
surface described by the metric \eqref{eq:metric_tensor_disk} has
no immersion in the Euclidean three-dimensional
embedding space and is an example of non-Euclidean
geometry \cite{Efrati2009}.
The actual shape of the surface in three-dimensional
space is then defined by the minimization of the elastic energy,
where the elastic energy of the deformed
swollen state is defined with respect to the
above target metric $\bar{\textbf{g}}$
from eq.\ \eqref{eq:metric_tensor_disk}.
The incompatibility of the two parts and the
resulting non-existence of an immersion means that there always is
a residual elastic energy after minimization \cite{Efrati2009}.
The elastic energy contains a stretching and a bending
contribution.
The stretching contribution is
caused by strains $\varepsilon_{ij} = (g_{ij} -\bar{g}_{ij})/2$, where
$\textbf{g}$ is the actual metric that the {\em deformed}
swollen state assumes, and given by \cite{Efrati2009}
\begin{align}
E_{\mathrm{s}} &= \int du_1 du_2 \sqrt{|\bar{\textbf{g}}|}
\frac{1}{8} A^{ijkl} (g_{ij}-\bar{g}_{ij}) (g_{kl}-\bar{g}_{kl}),
\label{eq:Es}\\
A^{ijkl} &= \frac{Y_\mathrm{2D}}{1-\nu^2} \left(
\nu \bar{g}^{ij} \bar{g}^{kl}
+ \frac{1-\nu}{2} \left(\bar{g}^{ik} \bar{g}^{jl}+
\bar{g}^{il} \bar{g}^{jk}
\right) \right),
\label{eq:A}
\end{align}
where we use Einstein summation, raising of indices is
performed with the target metric, and $(u_1,u_2)=(\rho,\varphi)$
in Gaussian normal parametrization.
The elastic tensor $A^{ijkl}$ is given by
the two dimensional Young modulus $Y_\mathrm{2D}$
and the Poisson ratio $\nu$, which characterize the stretching elasticity
of the disk material.
The stretching energy thus penalizes deviations from the
target metric.
Likewise, the bending energy is defined with the curvature
tensor $\textbf{L}$ and with respect to a target curvature tensor
$\bar{\textbf{L}}$, which represents a spontaneous curvature
of the material. We assume that local isotropic swelling
does not introduce any spontaneous curvature to the system
such that $\bar{\textbf{L}}=0$.
The general expression for the bending energy is \cite{Efrati2009}
\begin{align}
E_\mathrm{B} &= \int du_1 du_2 \sqrt{|\bar{\textbf{g}}|}
\frac{h^2}{24} A^{ijkl} \left( L_{ij}-\bar{L}_{ij}\right)
\left( L_{kl}-\bar{L}_{kl}\right)
\label{eq:EB}\\
&\approx \int d\bar{A} \frac{1}{2}\kappa_\mathrm{B}
\left( 4H^2 - 2(1-\nu) K \right),
\label{eq:EB_HK}
\end{align}
where the last line applies to $\bar{\textbf{L}}=0$,
$H$ is the mean curvature, $K$ the Gaussian curvature,
and $\kappa_\mathrm{B} = Y_\mathrm{2D}h^2/(12(1-\nu^2))$
the bending modulus of the disk.
The bending energy penalizes deviations from the flat
shape for vanishing target curvature $\bar{\textbf{L}}=0$.
The last line in (\ref{eq:EB_HK}) is an approximation because
we assume $\bar{g}_{ij}\approx g_{ij}$ in $A^{ijkl}$.
Typical strains $\varepsilon_{ij}$
are $\propto (1-\alpha)^2$ such that corrections are
${O}((1-\alpha)^2E_\mathrm{B})$
and will be small at the transition for thin disks
(see eq.\ (\ref{eq:alphace}) and Fig.\ \ref{fig:alphaFvK} below).
For numerical energy minimization the disk and its elastic energies
\eqref{eq:Es} and \eqref{eq:EB} have to be suitably discretized.
\subsubsection{Model}
We calculate the disk's shape with the help of a numerical energy
minimization and use a simple spring mesh model for discretization.
The disk is
triangulated with a Delaunay triangulation (implemented with the fade2D
library \cite{Fade2D}), where every edge $i$ between two vertices represents a
mechanical spring with a rest length $l_i$. The fineness of the mesh is
controlled by the number of vertices $n_\mathrm{B}$ on the boundary of the
disk. In this model, a swelling process is performed by a simple
multiplication of the springs' rest lengths with the swelling function
$A(r)$. The discretized version of the
elastic stretching energy \eqref{eq:Es}
can be written as the sum over all
spring energies,
\begin{align}
\label{eq:stretch_energy_spring}
E_{\mathrm{s}} = \sum\limits_i\frac{1}{2}k_i
(|\vec{r}_{2,i} - \vec{r}_{1,i}| - l_i)^2.
\end{align}
The vectors $\vec{r}_{2,i}$ and $\vec{r}_{1,i}$ describe the positions of the
vertices that define the beginning and the end of a spring. The spring
constants are denoted by $k_i$. In a hexagonal mesh,
the two-dimensional Young modulus $Y_\mathrm{2D}$ is given
by the spring constant $k$ and the Poisson ratio
$\nu$ is fixed,
\cite{Ostoja-Starzewski2002,Nelson1988}
\begin{align}
Y_\mathrm{2D} = \frac{2}{\sqrt{3}}k ~~~\mbox{and}~~\nu = \frac{1}{3}.
\end{align}
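The discrete stretching energy \eqref{eq:stretch_energy_spring} vectorizes naturally; a minimal sketch (the array layout is our illustrative choice, not taken from the paper's implementation):

```python
import numpy as np

def stretch_energy(pos, springs, rest, k):
    """Eq. (10): sum of harmonic spring energies over the mesh.
    pos     -- (n, 3) array of vertex coordinates
    springs -- (m, 2) array of vertex index pairs, one row per edge
    rest    -- (m,) rest lengths l_i (after multiplication with A(r))
    k       -- (m,) spring constants k_i"""
    d = pos[springs[:, 1]] - pos[springs[:, 0]]
    lengths = np.linalg.norm(d, axis=1)
    return 0.5 * np.sum(k * (lengths - rest)**2)
```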
In order to evaluate the bending energy \eqref{eq:EB_HK} on
the spring mesh, the curvatures $H$ and $K$ have to be calculated
on the mesh. The mean curvature $H_i$ at a mesh vertex $i$
can be expressed in terms of an area gradient \cite{SurfaceEvolver}:
\begin{align}
H_i = \frac{3}{2}\frac{|\nabla_i A_i|}{A_i}.
\end{align}
The quantity $A_i$ represents the area in the mesh that is associated to the
vertex $i$, see the colored area in Fig.\ \ref{fig:associated_area}. The
gradient $\nabla_i A_i$ then describes derivatives of this area with respect
to the coordinates of the vertex $i$.
The Gaussian curvature $K$, on the other hand, can be calculated using the
Gauss-Bonnet-theorem. We find
\begin{align}
K_i=(2\pi-\sum\limits_j \theta_j )/(A_i/3),
\end{align}
where $\theta_j$ is the angle between the neighboring
vertices $j$ and $j+1$ of the vertex $i$
located at $\vec{r}_i$, see Fig.\ \ref{fig:associated_area}.
Finally, the discretized bending energy \eqref{eq:EB_HK} becomes
\begin{align}
\label{eq:EB_HK_discrete}
E_\mathrm{B} = \sum\limits_i \frac{A_i}{3} \kappa_\mathrm{B}
\left( 2H_i^2 - \frac{2}{3} K_i \right).
\end{align}
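The angle-deficit formula for $K_i$ can be sketched as follows (the ordered neighbor ring and the associated area $A_i$ are assumed given; names are illustrative):

```python
import numpy as np

def angle_deficit_curvature(center, ring, area):
    """Discrete Gaussian curvature at an interior vertex via Gauss-Bonnet:
    K_i = (2*pi - sum_j theta_j) / (A_i / 3), where theta_j is the angle
    between the springs to consecutive neighbors.
    center -- (3,) position of vertex i
    ring   -- (n, 3) ordered positions of the neighbors around vertex i
    area   -- associated area A_i"""
    deficit = 2.0 * np.pi
    n = len(ring)
    for j in range(n):
        a = ring[j] - center
        b = ring[(j + 1) % n] - center
        c = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
        deficit -= np.arccos(np.clip(c, -1.0, 1.0))
    return deficit / (area / 3.0)
```

For a flat patch the incident angles sum to $2\pi$ and the deficit, hence $K_i$, vanishes, consistent with the bending energy penalizing only curved configurations.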
\begin{figure}[b]
\centering
\includegraphics[width=0.5\linewidth]{Fig3.pdf}
\caption{Illustration of the direct neighborhood of a vertex
at position $\vec{r}_i$. The area $A_i$ that is associated
with this vertex is shown cyan. The angle between the
springs to the neighbor vertices at $\vec{r}_j$ and
$\vec{r}_{j+1}$ is called $\theta_j$ and is used in the
calculation of the Gaussian curvature.}
\label{fig:associated_area}
\end{figure}
The total energy $E=E_{\mathrm{s}}+ E_\mathrm{B}$ has to be minimized
with respect to all vertex coordinates in the three-dimensional embedding
space. In
order to overcome possible local energy minima, small fluctuations can be
added to the vertex coordinates in terms of a random displacement
$\vec{r}_i\rightarrow \vec{r}_i+\vec{\delta}_i$ with $|\vec{\delta}_i|\ll
l$. After the energy has been minimized with respect
to all vertex positions, the resulting mesh
represents the preferred configuration of the swollen and
deformed disk, see the illustration in Fig.\ \ref{fig:meshes}.
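A toy version of this minimization step, assuming a generic quasi-Newton minimizer rather than the paper's actual solver, relaxes a single ``swollen'' spring (rest length $1.2$, endpoints initially at distance $1$) with a small random perturbation mimicking the fluctuations $|\vec{\delta}_i|\ll l$:

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative setup, not the paper's code: one spring, rest length 1.2.
springs = np.array([[0, 1]])
rest = np.array([1.2])
k = np.array([1.0])

def energy(x):
    pos = x.reshape(-1, 3)
    d = pos[springs[:, 1]] - pos[springs[:, 0]]
    lengths = np.linalg.norm(d, axis=1)
    return 0.5 * np.sum(k * (lengths - rest)**2)

x0 = np.array([0.0, 0.0, 0.0, 1.0, 0.0, 0.0])
# small random displacement to escape possible local minima
x0 += 5e-4 * np.random.default_rng(0).standard_normal(x0.size)
res = minimize(energy, x0)  # default BFGS with numerical gradient
```

After relaxation the endpoints sit at the swollen rest distance, the analogue of the mesh assuming its preferred (deformed) configuration.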
\begin{figure}[t]
\centering
\includegraphics[width=0.90\linewidth]{Fig4.pdf}
\caption{Example Delaunay triangulation with $n_\mathrm{B} =
40$ vertices on the boundary representing the spring
mesh. Left side: flat disk with $\alpha = 1$. Right side:
resulting elliptic shape for $\alpha > 1$
(swelling of interior disk) and hyperbolic
shape for $\alpha < 1$ (shrinking of interior disk).
Swollen springs are shown in red,
while shrunk springs are shown in blue.}
\label{fig:meshes}
\end{figure}
\subsubsection{Control parameters}
After all, our system of the swelling elastic disk is defined by a small set
of dimensionless control parameters. These are the previously
mentioned swelling
factor $\alpha$ and the ratio of the inner and outer radius
$R_\mathrm{in}/R_\mathrm{out}$. In addition,
we also want to be able to describe a disk where the
inner disk and the outer annulus consist of
different materials \cite{Pezulla2015}.
Therefore, we introduce
different elastic moduli and thus different spring constants. The spring
constant $k_\mathrm{in}$ is valid for interior springs
with $r \leq R_\mathrm{in}$,
while $k_\mathrm{out}$ belongs to outer springs with
$r> R_\mathrm{in}$, and the ratio
$k_\mathrm{in}/k_\mathrm{out}$ is another control parameter. Finally, the
thickness of the disk has an influence, even in a two-dimensional model:
the relative importance of the bending energy \eqref{eq:EB_HK}
is governed by $\kappa_\mathrm{B}/Y_\mathrm{2D} \propto h^2$, i.e.,
a thicker disk is harder to bend.
This is usually captured by the dimensionless
F\"oppl-von K\'arm\'an number, a ratio of Young modulus and
bending modulus, which we define for our disk as
\begin{align}
\gamma_\mathrm{FvK} \equiv
\frac{Y_\mathrm{2D} R_\mathrm{out}^2}{\kappa_\mathrm{B}}
= 12(1-\nu^2)\frac{R_\mathrm{out}^2}{h^2}.
\label{eq:FvK}
\end{align}
The F\"oppl-von K\'arm\'an number is large for thin disks and
is the fourth and last control parameter of our system.
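For orientation, eq.\ \eqref{eq:FvK} is easy to evaluate numerically; the helper name below is ours, for illustration:

```python
def gamma_fvk(nu, R_over_h):
    """Foppl-von Karman number, eq. (14): 12 (1 - nu^2) (R_out / h)^2."""
    return 12.0 * (1.0 - nu**2) * R_over_h**2

# With the Poisson ratio nu = 1/3 fixed by the hexagonal spring mesh, the
# value gamma_FvK = 600 used in the simulations below corresponds to an
# aspect ratio R_out/h = 7.5, i.e. a moderately thin disk.
```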
\subsection{Results}
\begin{figure*}[t!]
\centering
\includegraphics[width=1.0\linewidth]{Fig5.pdf}
\caption{Energies (a), shape's height $\Delta z$ and
negative
lowest Hessian eigenvalue $-\lambda_\mathrm{min}$ (b)
in the spring mesh model as functions of the stretch factor
$\alpha$. Circles denote numerical values calculated with
decreasing $\alpha$, while crosses are related to increasing
$\alpha$. The disk is always flat if it is located in the
white area and always curved in the cyan regions. The blue
shapes illustrate the corresponding conformations of the
disk. The red areas mark the regions of pseudo-hysteretic
effects. Arrows illustrate the directions inside the
hysteresis loops. The simulated disk had a mesh with
$n_\mathrm{B}=120$ boundary vertices,
$R_\mathrm{in}/R_\mathrm{out} = 0.5$,
$\gamma_\mathrm{FvK} = 600$,
$k_\mathrm{in}/k_\mathrm{out} = 0.24$ and maximum
fluctuations of
$\delta_\mathrm{max}= 5\times 10^{-4}R_\mathrm{out}$.}
\label{fig:energyplot}
\end{figure*}
Starting with a flat disk with $\alpha = 1$, we increase/decrease
$\alpha$ in small steps $\Delta \alpha$ and minimize the energy after
each step. Figure \ref{fig:energyplot}(a) shows the resulting energies:
the total energy, the spring energy and the bending energy (separated in
mean curvature and Gaussian curvature part), as functions
of $\alpha$, while Fig.\ \ref{fig:energyplot}(b) shows the total
height $\Delta z$ of the shape. The shape can deform both
into positive and negative $z$-direction with equal probability;
we count the height $\Delta z$ of the shape always as the positive
absolute value of the maximal difference of $z$-coordinates.
For small changes of $\alpha$ the disk
stays flat at first, only the spring energy increases quadratically
because of the change of the springs' rest lengths. We have
$E=E_\mathrm{s}$ in this regime.
Swelling ($\alpha>1$) or shrinking ($\alpha<1$) of the interior
disk imparts elastic compression or stretching energy to the
flat state, which is released in the snapping transition.
At a critical swelling factor
$\alpha_{\mathrm{c2,e}}$ for increasing $\alpha$ (or
$\alpha_{\mathrm{c2,h}}$ for decreasing $\alpha$, respectively)
a transition into a curved conformation with $\Delta z >
0$ occurs.
We find two stable curved configurations: for increasing $\alpha$
above $\alpha_{\mathrm{c2,e}}>1$
the disk snaps into an elliptic (subscript ``e'') dome-like shape,
while
it snaps into
a hyperbolic (subscript ``h'') saddle for decreasing $\alpha$ beyond
$\alpha_{\mathrm{c2,h}}<1$ (see Fig.\ \ref{fig:meshes}).
At these transitions, $E_\mathrm{s}$ is reduced,
because the springs can relax to
a certain degree. On the other hand, $E_\mathrm{B}$ is increased because
of the increased curvatures in the dome- or saddle-like
shapes. Decreasing (increasing) $\alpha$ again
in order to get back to $\alpha = 1$,
we do not see a transition back into the flat state
at $\alpha_{\mathrm{c2,e}}$ (or
$\alpha_{\mathrm{c2,h}}$, respectively). Instead, the shape remains
curved for $\alpha <\alpha_{\mathrm{c2,e}}$
($\alpha > \alpha_{\mathrm{c2,h}}$). The curved
disk then flattens continuously, with $E_\mathrm{B}$ and $\Delta z$ decreasing,
until $\alpha = \alpha_\mathrm{c,e}$ (or
$\alpha = \alpha_\mathrm{c,h}$) is reached. There, $E_\mathrm{B}$ and
$\Delta z$ vanish continuously, and the disk is flat
again. In conclusion, we find an apparent hysteresis loop in the
deformation behavior within the red areas between $\alpha_\mathrm{c,e}$ and
$\alpha_\mathrm{c2,e}$ (or $\alpha_\mathrm{c,h}$ and
$\alpha_\mathrm{c2,h}$).
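The role of the imposed numerical fluctuations can be illustrated with a one-degree-of-freedom toy energy (a sketch only; $E(z;\alpha)=(1-\alpha)z^2+z^4$ is not the disk's actual energy functional but has an analogous pitchfork structure): a gradient-descent minimizer started exactly in the flat state $z=0$ stays on the flat branch even beyond the bifurcation, while small random kicks, playing the role of the displacements $|\delta_i|$, let it find the buckled branch $z^*=\sqrt{(\alpha-1)/2}$.

```python
import numpy as np

def sweep(alphas, delta=0.0, steps=5000, lr=0.01, seed=0):
    """Quasi-static sweep: after each change of alpha, relax the toy energy
    E(z) = (1 - alpha) z^2 + z^4 by gradient descent, starting from the
    previous minimum plus a random kick of size delta (mimicking the
    numerical fluctuations |delta_i|)."""
    rng = np.random.default_rng(seed)
    z, heights = 0.0, []
    for a in alphas:
        z += delta * rng.standard_normal()   # imposed fluctuation
        for _ in range(steps):
            z -= lr * (2 * (1 - a) * z + 4 * z**3)   # gradient descent on E
        heights.append(abs(z))
    return np.array(heights)

alphas = np.linspace(1.0, 1.2, 21)
flat = sweep(alphas, delta=0.0)     # stays on the flat branch: pseudo-hysteresis
kicked = sweep(alphas, delta=1e-3)  # fluctuations find the buckled branch
```

Without kicks the minimizer never leaves the stationary flat state; with kicks the buckled branch is found essentially at the bifurcation, which is the mechanism by which fluctuations shrink the red region.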
\subsubsection{Pseudo-hysteresis and long-wavelength bifurcation}
The stability of the disk's conformation upon approaching the transition
can be analyzed in more detail with the help of the
eigenvalues of the Hessian matrix of the system's total energy. If the smallest
eigenvalue $\lambda_\mathrm{min}$ becomes negative, there is a
deformation mode leading directly
to a lower energy, and the system becomes unstable. The
blue scale on the right side of Fig.\ \ref{fig:energyplot}(b) shows the
smallest eigenvalue of our elastic system (note that we plot the
negative eigenvalue $-\lambda_\mathrm{min}$).
It is zero in the flat configuration
until $\alpha$ exceeds $\alpha_\mathrm{c,e}$ (or $\alpha_\mathrm{c,h}$).
Then,
still in the flat configuration in the red area, $\lambda_\mathrm{min}$
becomes significantly negative, indicating that the system is unstable.
This means
that already in the entire red area, there is
an unstable deformation mode available that
leads directly into the curved
conformation. After the transition, the curved conformation remains
stable. Therefore, we conclude that the red area is {\it not} an area of
genuine hysteresis. The transition to the curved shape could directly happen at
$\alpha_\mathrm{c,e}$ (or $\alpha_\mathrm{c,h}$) if the system finds the
existing unstable deformation mode. The disk remains flat in the red area
only for numerical
reasons; the values of
$\alpha_\mathrm{c2,e}$ and $\alpha_\mathrm{c2,h}$ move closer to
$\alpha_\mathrm{c,e}$ and $\alpha_\mathrm{c,h}$ and, thus,
the size of the red region
actually shrinks if we increase the imposed random displacements
$|\delta_i|$ (numerical fluctuations). In the experimental system,
we expect that thermal fluctuations will always allow the
disk to find the unstable mode such that hysteresis
will be absent.
We can perform a linear stability analysis of the flat compressed state
of the inner disk
in order to further characterize the bifurcation into an elliptic dome
for increasing $\alpha$ above $\alpha_\mathrm{c,e}$.
In the limit of a small stiff outer annulus
($k_\mathrm{in}/k_\mathrm{out}\ll 1$
and $R_\mathrm{in}/R_\mathrm{out} \approx 1$),
the effect of swelling the
interior with a factor $\alpha>1$ is to establish a compressive
homogeneous pre-stress
$\sigma_{xx} = \sigma_{yy} = -\sigma_0 = -Y_\mathrm{2D} (\alpha-1)$
in the interior.
We can perform
a linear stability analysis of the flat state $z(x,y)=0$ of an infinite
plate under pre-stress $-\sigma_0$ using plate theory. Expanding
the Airy stress function
$\chi(x,y) = -\sigma_0(x^2+y^2)/2+ \chi_1(x,y)$
and the normal displacement $z(x,y) = z_1(x,y)$ around
the flat, homogeneously pre-stressed state we find the following
plate equations to linear order in $\chi_1$ and $z_1$:
\begin{equation}
\kappa_\mathrm{B} \nabla^4 z_1 +\sigma_0 \nabla^2 z_1=0~,~~
\nabla^4 \chi_1 =0.
\label{eq:stab}
\end{equation}
An Ansatz $\chi_1 = a e^{i\vec{q}\cdot\vec{r}}$ and
$z_1 = b e^{i\vec{q}\cdot\vec{r}}$ for an oscillatory
instability of the flat state with a two-dimensional
wave vector $\vec{q} = (q_x,q_y)$ leads to the condition
\begin{equation}
\sigma_0 = Y_\mathrm{2D}(\alpha-1) = \kappa_\mathrm{B} q^2,
\end{equation}
which is fulfilled for $\alpha> \alpha_\mathrm{c,e}$ with
$\alpha_{c,e}-1 \approx \kappa_\mathrm{B} q^2/Y_\mathrm{2D}$.
The resulting instability
is a long-wavelength instability, i.e.,
sets in at the smallest available wave vector $q$,
as opposed to buckling of a
spherical shell under pressure, where the pressure also leads to a
homogeneous compressive pre-stress, but the buckling instability
is a short-wavelength instability
because of the non-vanishing background curvature
\cite{Hutchinson1967,Baumgarten2018,Baumgarten2019}.
For an inner disk of radius $R_\mathrm{in}\approx
R_\mathrm{out}$ the shortest available wave vectors
have $q \sim 1/R_\mathrm{out}$.
Closer inspection shows that the unstable radially symmetric, oscillating
modes are
$z_1(r) = b J_0(r\sqrt{\sigma_0/\kappa_\mathrm{B}})$ (with
the Bessel function $J_0$).
The approximate boundary condition $\partial_r z_1(R_\mathrm{out})=0$
for a small stiff outer annulus leads to
\begin{equation}
\alpha_{c,e}-1 \approx
3.83^2 \frac{\kappa_\mathrm{B}}{Y_\mathrm{2D}R_\mathrm{out}^2} =
3.83^2 \gamma_\mathrm{FvK}^{-1},
\label{eq:alphace}
\end{equation}
where $3.83$ is the first non-trivial zero of the Bessel function $J_1(x)$.
This is in good agreement with numerical results even
for $k_\mathrm{in}/k_\mathrm{out}< 1$
and $R_\mathrm{in}/R_\mathrm{out} =0.5$
as shown in Fig.\ \ref{fig:alphaFvK}.
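Equation \eqref{eq:alphace} can be checked with a few lines of code; the sketch below locates the first non-trivial zero of $J_1$ by bisection (via the integral representation of $J_1$, so no special-function library is required) and evaluates $\alpha_{c,e}$ for an exemplary value $\gamma_\mathrm{FvK}=1066$.

```python
import math

def J1(x, n=2000):
    # Bessel function J1 via its integral representation
    # J1(x) = (1/pi) * int_0^pi cos(theta - x*sin(theta)) dtheta (trapezoidal rule)
    h = math.pi / n
    s = 0.5 * (math.cos(0.0) + math.cos(math.pi - x * math.sin(math.pi)))
    for k in range(1, n):
        t = k * h
        s += math.cos(t - x * math.sin(t))
    return s * h / math.pi

# first non-trivial zero of J1 by bisection in [3, 4]
lo, hi = 3.0, 4.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if J1(lo) * J1(mid) <= 0:
        hi = mid
    else:
        lo = mid
x1 = 0.5 * (lo + hi)                  # ~3.8317

gamma_FvK = 1066                      # exemplary Foeppl-von Karman number
alpha_ce = 1 + x1**2 / gamma_FvK      # eq. (alphace)
```

For $\gamma_\mathrm{FvK}=1066$ this gives $\alpha_{c,e}\approx 1.014$, i.e., a swelling of only about $1.4\,\%$ suffices to destabilize the flat state.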
\begin{figure}[t!]
\centering
\includegraphics[width=0.9\linewidth]{Fig6.pdf}
\caption{Critical swelling factor $\alpha_{c,e}$ as a function
of $\gamma_{\mathrm{FvK}}$ for
$R_\mathrm{in}/R_\mathrm{out} = 0.5$ and different values of
$k_\mathrm{in}/k_\mathrm{out}$. The solid black line
represents the theory curve given by
eq.\ \eqref{eq:alphace}.}
\label{fig:alphaFvK}
\end{figure}
The stability analysis with the stability equation \eqref{eq:stab}
also shows that a genuine hysteresis should
be absent, and the bifurcation is a supercritical pitchfork
bifurcation similar to the Euler buckling bifurcation of a beam.
This is in contrast
to buckling of a spherical shell under pressure,
which is a subcritical bifurcation
\cite{Baumgarten2019}.
\section{Re-establishing hysteresis}
\subsection{Framing the disk and additional attractive interaction}
Now we want to modify the system in a way that a genuine hysteresis
is re-established,
which will also be present in experiments. The basic idea is to energetically
penalize slightly deformed intermediate states of the disk
during the transition to the curved shape
resulting in an additional energy barrier for this transition.
This barrier stabilizes the flat disk and has to be
overcome or reduced by additional swelling before the disk can snap
into a curved state.
In order
to realize that, the first step is to ``frame the disk'':
we combine our flat disk with
radius $R_\mathrm{out}$ in the $xy$-plane (Fig.\ \ref{fig:frame_sketch}(a))
with an additional fixed, undeformable frame (Fig.\ \ref{fig:frame_sketch}
(b)). As a result, the boundary of the disk is now fixed (Fig.\
\ref{fig:frame_sketch}(c)). Therefore, the piecewise constant swelling
function \eqref{eq:def_alpha} is replaced by a
globally constant, homogeneous swelling factor $\alpha$ for the whole disk
(but not for the frame);
$\alpha>1$ still corresponds to swelling the interior
of the disk and, accordingly, leads to a transition
into
a dome-like shape (Fig.\ \ref{fig:frame_sketch}(d)). A saddle shape
can no longer be realized in this set-up.
Framing is equivalent to the above limit of a very thin and stiff
outer annulus.
\begin{figure}[b]
\centering
\includegraphics[width=0.7\linewidth]{Fig7.png}
\caption{Illustration of the set-up of a flat disk inside of a
fixed frame. The flat disk (a) is placed inside a fixed and
undeformable frame (b)
with an attractive central region
(in red). Disk and frame are compatible in the
flat state (c). If the disk swells uniformly, an elliptic
dome-like shape results (d).}
\label{fig:frame_sketch}
\end{figure}
The second step in order to create a genuine
hysteresis is to introduce an additional attractive
interaction between the frame and the disk.
The central region of the frame $A_c$
(red area in Fig.\ \ref{fig:frame_sketch}) attracts the disk leading to an
additional potential energy for the disk. Inspired by an attractive van der
Waals force, we choose a Lennard-Jones potential, shifted such that
its minimum lies at $z=0$, for the interaction,
\begin{align}
\label{eq:LJ-pot}
v_\mathrm{pot}(z) =
4\varepsilon\left[
\left(\frac{z+\sqrt[6]{2}\sigma}{\sigma}\right)^{-12}
-\left(\frac{z+\sqrt[6]{2}\sigma}{\sigma}\right)^{-6}\right],
\end{align}
with the total attractive potential energy $E_\mathrm{pot} = \int_{A_c} dA
v_\mathrm{pot}(z)$ or $E_\mathrm{pot} = \sum_{i\in A_c} A_i
v_\mathrm{pot}(z_i)$ for the mesh model of the disk.
The attractive potential has a finite range $\sigma$ and a potential
depth $-\varepsilon$ at $z=0$;
the force of this potential vanishes in the completely flat state ($z=0$),
which improves the numerical stability. For
$\sigma \ll R_\mathrm{out}$, on the other hand,
the force also nearly vanishes in the
completely deformed state
because most parts of the disk are out of the potential
range. As a result, only the transition itself is energetically
penalized.
\subsection{Hysteresis and short-wavelength bifurcation}
In order to gain insight into the influence of
an attractive potential $v_\mathrm{pot}(z)$ onto the
instability of the swelling disk, we can consider the
case where the attractive potential acts over the
whole area of the disk, i.e., $A_c=A$.
Then the linear stability analysis leads to a short-wavelength
instability as eq.\ \eqref{eq:stab} becomes modified to
\begin{equation}
\kappa_\mathrm{B} \nabla^4 z_1 +\sigma_0 \nabla^2 z_1
+ v_\mathrm{pot}''(0)z_1=0
\label{eq:stab2}
\end{equation}
resulting in an instability condition $\kappa_\mathrm{B} q^4-\sigma_0q^2 +
v_\mathrm{pot}''(0)<0$ (if $v_\mathrm{pot}'(0)=0$ and
with $v_\mathrm{pot}''(0)= 36\times 2^{2/3} \varepsilon/\sigma^2$ for the
potential \eqref{eq:LJ-pot}).
This is exactly equivalent to the wrinkling condition of a
membrane on an elastic substrate (or a Winkler foundation) under compressive
stress \cite{Huang2005}, where $v_\mathrm{pot}''(0)$ corresponds to the
substrate stiffness.
Interestingly, this is also equivalent to the short-wavelength
instability condition for buckling of a
spherical shell under pressure with the
homogeneous compressive pre-stress playing the role of the pre-stress from
homogeneous pressure and the curvature of the potential
$v_\mathrm{pot}''(0)$ playing the
role of the background curvature term \cite{Baumgarten2018}.
Now, an instability sets in at the smallest $\sigma_0$
for which the instability condition can be fulfilled, which is the case for
$\sigma_0> 2\sqrt{\kappa_\mathrm{B} v_\mathrm{pot}''(0)}$ or for
$\alpha> \alpha_\mathrm{c,f}$ with
$\alpha_{c,f}-1 = {2\sqrt{\kappa_\mathrm{B}
v_\mathrm{pot}''(0)}}/{Y_\mathrm{2D}}\propto h^2$
(subscript ``f'' for framed disk)
and at the wave vector
$q_0 = \left( {\sigma_0}/{2\kappa_B} \right)^{1/2} =
(v_\mathrm{pot}''(0)/\kappa_\mathrm{B})^{1/4}$. This
is a short-wavelength instability with $q_0>1/L$ if
$v_\mathrm{pot}''(0)$ is sufficiently large.
We also expect to find a subcritical bifurcation
with hysteresis in analogy
to buckling of a spherical shell under pressure
\cite{Baumgarten2019}.
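The curvature $v_\mathrm{pot}''(0)=36\times 2^{2/3}\varepsilon/\sigma^2$ and the threshold condition can be verified numerically; in the following sketch, $\varepsilon$ and $\sigma$ are expressed in simulation units ($k_\mathrm{in}=R_\mathrm{out}=1$), and the value of $\kappa_\mathrm{B}$ is a hypothetical placeholder chosen for illustration only.

```python
import math

eps, sig = 7.3e-7, 0.01   # potential parameters in simulation units

def v_pot(z):
    # shifted Lennard-Jones potential, eq. (LJ-pot): minimum -eps at z = 0
    u = (z + 2**(1/6) * sig) / sig
    return 4 * eps * (u**(-12) - u**(-6))

# second derivative at z = 0: central finite difference vs. analytic value
h = 1e-4 * sig
v2_num = (v_pot(h) - 2 * v_pot(0.0) + v_pot(-h)) / h**2
v2_ana = 36 * 2**(2/3) * eps / sig**2

kappa_B = 1e-3                               # hypothetical bending rigidity
sigma0_c = 2 * math.sqrt(kappa_B * v2_ana)   # threshold sigma_0 = 2*sqrt(kappa_B v'')
q0 = (v2_ana / kappa_B) ** 0.25              # most unstable wave number
```

At threshold the instability condition is marginally fulfilled at $q_0$, i.e., $\kappa_\mathrm{B}q_0^4-\sigma_0 q_0^2+v_\mathrm{pot}''(0)=0$, which the script reproduces.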
For a localized potential, i.e., if the attractive region $A_c$ is smaller
than $A$ as in Fig.\ \ref{fig:frame_sketch}, we expect that
the critical swelling factor is further increased such that the
unstable wavelength
$1/q_0 = \left( {2\kappa_B}/\sigma_0 \right)^{1/2}$ fits into the
size $\sqrt{A_c}$ of the attractive region.
This
results in a condition $\sigma_0 > \max[2\kappa_\mathrm{B}/A_c, 2\sqrt{\kappa_\mathrm{B} v_\mathrm{pot}''(0)}]$.
\begin{figure*}[t!]
\centering
\includegraphics[width=1.0\linewidth]{Fig8.pdf}
\caption{Energies (a), shape's height $\Delta z$ and negative
lowest Hessian eigenvalue $-\lambda_\mathrm{min}$ (b)
in the spring mesh of a disk in a fixed frame with
attractive potential as functions of the swelling factor
$\alpha$. Crosses denote numerical values calculated with
increasing $\alpha$, while circles correspond to decreasing
$\alpha$. The region with a genuine hysteresis is marked in
yellow, while pseudo-hysteretic effects are again marked in
red. The disk is always flat in the white
area and always curved in the cyan region. Arrows
illustrate the directions inside the hysteresis loops. The
simulated system is the same system from
Fig.\ \ref{fig:energyplot} but with an additional potential
energy for each vertex given by eq.\ \eqref{eq:LJ-pot}. The
simulated disk had a mesh with $n_\mathrm{B}=120$ boundary
vertices,
$\gamma_\mathrm{FvK} = 1066$ and maximum fluctuations of
$\delta_\mathrm{max}= 10^{-4}R_\mathrm{out}$. The
parameters of the attractive potential were set to
$\varepsilon =
7.3\times 10^{-7}\,k_\mathrm{in}$ and
$\sigma = 0.01\,R_\mathrm{out}$.
The potential acted in an
inner region $A_c$ with a radius of $0.2\,R_\mathrm{out}$.}
\label{fig:energies_fixedFrame}
\end{figure*}
Analogously to Fig.\ \ref{fig:energyplot}, the behavior of the
system including frame and attractive interaction is shown in Fig.\
\ref{fig:energies_fixedFrame}.
Increasing $\alpha$ starting at $\alpha = 1$,
the framed
disk again stays flat at first until $\alpha_\mathrm{c2,f}$ is reached,
where the transition into the curved, dome-like shape occurs. There, we see a
significant reduction of the spring energy and an increase of the bending
energy. In contrast to Fig.\ \ref{fig:energyplot}, also the total energy is
reduced drastically during the transition. Decreasing $\alpha$ again, the
behavior is qualitatively the same as before: the shape flattens
continuously but
stays curved until $\alpha_\mathrm{c3,f}$ is reached, where the disk is flat
again. The significant difference from the simple set-up above is
revealed by the smallest Hessian eigenvalue $\lambda_\mathrm{min}$
(Fig.\ \ref{fig:energies_fixedFrame}(b), blue scale). Between
$\alpha_\mathrm{c3,f}$ and $\alpha_{c,f}$ (yellow area), there are no negative
eigenvalues, which means that both the flat disk and the curved shape are
(meta-)stable in this region. Only if $\alpha$ exceeds $\alpha_\mathrm{c,f}$,
$\lambda_\mathrm{min}$ becomes negative signalling an unstable flat shape. In
conclusion, the region between $\alpha_\mathrm{c,f}$ and
$\alpha_\mathrm{c2,f}$ (red area) is again a region of pseudo-hysteresis,
where the numerics show hysteresis that vanishes in the presence of
sufficient random fluctuations and, thus, in the experiment.
The yellow area, in contrast, corresponds to a genuine
hysteresis that should be robustly observable
in an experiment even in the presence of some fluctuations.
This also gives rise to a shape hysteresis as indicated in
Fig.\ \ref{fig:hysteresis_sketch}. The intermediate states
upon snapping into the dome-like shape feature a flattened
region around the center, while
this feature is missing when the shape continuously
flattens.
\section{Hydrodynamics}
\subsection{Model}
In the following, we want to show that the hysteretic shape transition
of the modified framed elastic disk can be exploited
as a propulsion mechanism for a microswimmer under a
periodic time-reversible driving of the swelling factor $\alpha$.
To this end, we need to
model the hydrodynamic interaction between the elastic disk and a surrounding
fluid. For this proof of concept, we simulate the Stokesian
dynamics \cite{Durlosfsky1987}
and use the Rotne-Prager interaction \cite{Dhont1996,RotnePrager1969}. This
interaction describes the movement of a small sphere in the flow field of
another sphere. Therefore, we model our disk as a sheet of small
spheres and place spheres of radius $a\ll R_\mathrm{out}$
on every vertex of the spring mesh, see Fig.\ \ref{fig:sphere_mesh}. The
velocity $\vec{v}_i$ of every sphere $i$ can then be calculated from the
knowledge of the external forces $\vec{f}_j$ on all spheres $j$ via
\begin{align}
\label{eq:v_rotne_prager}
\begin{split}
\vec{v_i} = \frac{1}{6\pi \eta a}\vec{f_i}+
\sum\limits_{j\neq i} \frac{1}{6\pi \eta a}
\left(\frac{3a}{4|\vec{r_i}-\vec{r_j}|}
\left(\underline{\underline{\textbf{I}}}
+ \frac{(\vec{r_i}-\vec{r_j})\otimes
(\vec{r_i}-\vec{r_j})}{|\vec{r_i}-\vec{r_j}|^2}\right)\right.\\
+ \left.\frac{a^3}{4|\vec{r_i}-\vec{r_j}|^3}
\left(\underline{\underline{\textbf{I}}}
-3\frac{(\vec{r_i}-\vec{r_j})\otimes
(\vec{r_i}-\vec{r_j})}{|\vec{r_i}-\vec{r_j}|^2}\right)
\right)\vec{f_j}.
\end{split}
\end{align}
The constant $\eta$ describes the viscosity of the surrounding
fluid and $\underline{\underline{\textbf{I}}}$ represents the
three-dimensional unit matrix. In this model, we ignore torques
and rotations for simplicity.
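Equation \eqref{eq:v_rotne_prager} translates directly into a dense $O(N^2)$ mobility evaluation; the following minimal sketch (the function name and the explicit double loop are ours, not an optimized implementation) can be checked against the flow field of a single translating sphere.

```python
import numpy as np

def sphere_velocities(r, f, a, eta):
    """Velocities of spheres at positions r (N,3) under forces f (N,3):
    self-mobility 1/(6*pi*eta*a) plus the flow fields of the other
    translating spheres, as in eq. (v_rotne_prager)."""
    n = len(r)
    v = f / (6 * np.pi * eta * a)          # self term
    I = np.eye(3)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            d = r[i] - r[j]
            dist = np.linalg.norm(d)
            rr = np.outer(d, d) / dist**2  # projector on the separation
            M = (3 * a / (4 * dist)) * (I + rr) \
                + (a**3 / (4 * dist**3)) * (I - 3 * rr)
            v[i] += M @ f[j] / (6 * np.pi * eta * a)
    return v
```

For two spheres a distance $d$ apart with a force $F$ on only one of them, the induced transverse velocity of the force-free sphere is $F\,[1/(8\pi\eta d)+a^2/(24\pi\eta d^3)]$, which provides a simple consistency check.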
\begin{figure}[h]
\centering
\includegraphics[width=0.6\linewidth]{Fig9.pdf}
\caption{Illustration of the sphere mesh. Small spheres (blue
circles) are placed on the vertices of a Delaunay
triangulation. Example mesh with $n_\mathrm{B} = 40$
boundary vertices. The spheres have a radius
$a=2\pi R_\mathrm{out}/(10n_\mathrm{B})$, which is about
$10\,\%$ of their average distance.}
\label{fig:sphere_mesh}
\end{figure}
The forces $\vec{f}_i$ can be calculated (analytically) from the
discretized stretching and bending energies \eqref{eq:stretch_energy_spring}
and \eqref{eq:EB_HK_discrete} as gradients with respect to
the vertex position $\vec{r_i}$
at each time step, and eq.\ \eqref{eq:v_rotne_prager} gives the resulting
vertex velocities.
The trajectory of each sphere and the disk's center
of mass as the average of all sphere positions
are calculated by a simple Euler integration of the velocities with
a time step $\Delta t$.
\subsection{Simulation}
To simulate the movement of the disk, the trajectories of the spheres are
calculated based on the forces acting on them.
For a fixed swelling factor $\alpha$ this dynamics will relax
into the same force-free equilibrium state that we determined
also by ``dry'' or static energy minimization in Fig.\
\ref{fig:energies_fixedFrame}. Using the dynamics \eqref{eq:v_rotne_prager}
we can, however, obtain a realistic dynamics of each sphere position
and, thus, of the deformation and propulsion dynamics of the whole disk
in the presence of hydrodynamic interactions
in a viscous fluid.
The general
concept of the simulation stays the same as in the static case.
Again, $\alpha$ is changed
in small steps $\Delta \alpha$. After each step, the trajectories of the
spheres are calculated until the forces on the spheres fall below a threshold,
$\sum |\vec{f}_i| < \epsilon$. The simulation gives, in principle,
the corresponding
hydrodynamic time scale $\Delta \tau_\mathrm{h}$ on which this elastic
relaxation happens.
That means that there are actually two
different time scales operating in this system. The swelling time scale
$T_\mathrm{sw}$ is defined by the swelling process and is the time that
the disk needs to run through a complete deformation cycle.
This is the time scale that can be externally controlled in an
experiment, where swelling frequencies $f_\mathrm{sw}
= 1/T_\mathrm{sw}\sim 5 {\rm Hz}$ are possible
for disks made from thermoresponsive hydrogels
if plasmonic heating of embedded gold particles by laser light
is utilized
\cite{Mourran2017}.
If we
divide a deformation cycle into $N$ small changes $\Delta \alpha_n$
and $\Delta \tau_\mathrm{h,n}$ is the
hydrodynamic relaxation time for each step, we
obtain the second time scale
$\tau_\mathrm{h} = \sum_n \Delta
\tau_\mathrm{h,n}$, which is the hydrodynamic relaxation time scale
for one deformation cycle and determined by the interplay
of elastic forces and hydrodynamic friction. In
our simulation model we assume that hydrodynamic relaxation is much faster
than the swelling process,
$\tau_\mathrm{h} \ll T_\mathrm{sw}$, i.e.,
the disk swells slowly compared to its
deformation motion caused by the swelling, and we can use
quasi-equilibrated forces ($\sum |\vec{f}_i| < \epsilon$) along
the swelling cycle.
We can estimate an order of magnitude for
the hydrodynamic relaxation time scale. The typical
total force onto a disk with
shape height $\Delta z$ close to the instability is
$F \sim \Delta z \sigma_0$ (see eq.\ \eqref{eq:stab}, which equates
areal force densities);
the disk has a friction coefficient $\sim \eta R_\mathrm{out}$, such
that the typical
velocity is
$\partial_t {\Delta z} \sim F/\eta R_\mathrm{out}\sim \Delta
z \sigma_0/\eta R_\mathrm{out}$, which
leads to relaxation times
\begin{equation}
\tau_\mathrm{h}
\sim\frac{ \eta R_\mathrm{out}}{\sigma_0} \sim
\frac{\eta R_\mathrm{out}}{Y_\mathrm{3D}h(\alpha-1)} \sim
\frac{\eta \gamma_\mathrm{FvK}^{3/2}}{Y_\mathrm{3D}}\sim
\frac{\eta R_\mathrm{out}^3}{Y_\mathrm{3D}h^3}
\label{eq:tauh}
\end{equation}
(using $\sigma_0 \sim
Y_\mathrm{2D}(\alpha-1) \sim
Y_\mathrm{2D}/\gamma_\mathrm{FvK} $ close
to the instability for an unframed disk, see eq.\ (\ref{eq:alphace})).
Typical elastic moduli for
thermoresponsive hydrogels
are $Y_\mathrm{3D} \sim 10-100 {\rm kPa}$ \cite{Matzelle2003}
and F\"oppl-von K\'arm\'an numbers for the disks in Ref.\ \citenum{Mourran2017}
are $\gamma_\mathrm{FvK} \approx 300$
(for $R_\mathrm{out} =30 {\rm \mu m}$ and $h = 5 {\rm \mu m}$)
and result in fast hydrodynamic
relaxation time scales
$\tau_\mathrm{h} \sim 10^{-5} {\rm s}$ in water, such that
swelling frequencies up to $f_\mathrm{sw} \sim 10^5 {\rm Hz}$
still satisfy quasi force-equilibrium along
the swelling cycle as assumed in our simulation.
On the other hand,
the hydrodynamic relaxation time scale $\tau_\mathrm{h}$ should
be large enough (the hydrodynamically damped deformation or snapping
velocity of the disk slow enough)
for the
underlying assumption of
low Reynolds number hydrodynamics to apply.
This is the case if
${\rm Re} \sim \rho R_\mathrm{out}^2/\eta \tau_\mathrm{h}<1$ or
$\tau_\mathrm{h} > \rho R_\mathrm{out}^2/\eta $.
Inserting $\tau_\mathrm{h}$ from eq.\ (\ref{eq:tauh})
we obtain a condition on the disk geometry,
$ h^3/R_\mathrm{out} < \eta^2/Y_\mathrm{3D}\rho \sim 10^{-1}{\rm \mu m}^2$,
where the last estimate is
for the density and viscosity of water and moduli
$Y_\mathrm{3D} \sim 10 {\rm k Pa}$.
This implies that disks have to be designed sufficiently
thin (and, thus, bendable) to remain at
low Reynolds numbers, which
is possibly a critical point for experimental
realizations.
For disks of radius $R_\mathrm{out} =30 {\rm \mu m}$, thicknesses
$h<2 {\rm \mu m}$ are required.
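The order-of-magnitude estimates of this and the preceding paragraphs can be collected in a short back-of-the-envelope script (SI units, parameter values as quoted in the text):

```python
# Estimates for a thermoresponsive hydrogel disk in water (SI units)
eta, rho = 1e-3, 1e3        # viscosity [Pa s] and density [kg/m^3] of water
Y3D = 1e4                   # elastic modulus ~10 kPa
R_out, h = 30e-6, 5e-6      # disk radius and thickness [m] (Ref. Mourran2017)

tau_h = eta * R_out**3 / (Y3D * h**3)          # eq. (tauh): relaxation time
Re = rho * R_out**2 / (eta * tau_h)            # Reynolds-number estimate
h_max = (eta**2 * R_out / (Y3D * rho))**(1/3)  # low-Re bound h^3/R_out < eta^2/(Y3D rho)
```

With these numbers $\tau_\mathrm{h}\approx 2\times 10^{-5}\,{\rm s}$, the Reynolds-number estimate for an $h=5\,{\rm \mu m}$ disk exceeds unity, and the low-Reynolds-number bound gives $h\lesssim 1.4\,{\rm \mu m}$, consistent with the $h<2\,{\rm \mu m}$ requirement stated above.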
As long as the low Reynolds number
assumption applies, the swimming distance per deformation
cycle is independent of the time scale $T_\mathrm{sw}$ of the
swelling process and, thus, of the deformation velocity:
the speed of shape changes affects the swimming velocity but
not the swimming distance.
The quality of our discretization and Stokesian dynamics
simulation scheme can be assessed by monitoring the
fluid flow
\begin{equation}
j_S = \sum_i \int_{A_i} dA\,
(\vec{u}(\vec{r}) - \vec{v}_i)\cdot \vec{n}_i
\label{eq:jS}
\end{equation}
through the discretized surface (with unit normals $\vec{n}_i$),
which is given by the relative
velocity of the fluid flow $\vec{u}(\vec{r})$ with respect to the
disk vertex velocities $\vec{v}_i$. The fluid flow field
can be obtained from the Rotne-Prager interaction as
\begin{align}
\label{eq:u_rotne_prager}
\begin{split}
\vec{u}(\vec{r}) =
\sum\limits_{j} \frac{1}{6\pi \eta a}
\left(\frac{3a}{4|\vec{r}-\vec{r_j}|}
\left(\underline{\underline{\textbf{I}}}
+ \frac{(\vec{r}-\vec{r_j})\otimes
(\vec{r}-\vec{r_j})}{|\vec{r}-\vec{r_j}|^2}\right)\right.\\
+ \left.\frac{a^3}{4|\vec{r}-\vec{r_j}|^3}
\left(\underline{\underline{\textbf{I}}}
-3\frac{(\vec{r}-\vec{r_j})\otimes
(\vec{r}-\vec{r_j})}{|\vec{r}-\vec{r_j}|^2}\right)
\right)\vec{f_j},
\end{split}
\end{align}
while the velocity $\vec{v}_i$ of vertex $i$ is
given by (\ref{eq:v_rotne_prager})
and is essentially $\vec{u}(\vec{r}_i)$ with a regularization
of the $j=i$ term.
The quality of the approximation can be measured by the
dimensionless permeability $|j_S/j_P|$, where $j_P$ is
the fluid stream through a theoretically perfectly permeable
surface that moves with a velocity $\vec{v}_i$ through
the fluid (i.e., setting $\vec{u}(\vec{r})=0$ in eq.\ (\ref{eq:jS})).
Small permeabilities indicate that discretization and Stokesian dynamics
is a good approximation; ideally, we reach $|j_S/j_P|=0$ because no fluid
should pass through the surface.
For $a\approx 0.1 l_0$ (where $l_0=2\pi R_\mathrm{out}/n_\mathrm{B}$
is the typical rest length of a spring in the discretized mesh) we
find surprisingly
small permeabilities $|j_S/j_P| < 20\%$ for discretizations
with $n_\mathrm{B}>100$, which is remarkable in view of the fact
that less than $5\%$ of the surface area is covered by spheres.
\subsection{Swimming motion of the snapping elastic disk}
In order to quantify the swimming motion of the disk
and prove the concept of a net swimming motion, we measure
the movement
of the disk's center of mass as a function of time over multiple swelling
cycles.
Because of its symmetry, the disk only moves into the direction
perpendicular to its initial plane, the $z$-direction. Therefore, Fig.\
\ref{fig:swimming_distance} shows the $z$-coordinate of the center of mass,
$z_\mathrm{CoM}$, as a function of time for ten full swelling
cycles.
We will assume in the following
that the dome-like shape always snaps downwards, i.e.,
the opening is in negative $z$-direction as shown
in Fig.\ \ref{fig:frame_sketch}.
In each
swelling cycle $\alpha(t)$ the swelling factor $\alpha$ changes between
$\alpha_\mathrm{min}=1$ and $\alpha_\mathrm{max}=1.1> \alpha_\mathrm{c2,f}$
in 200 steps back and forth in a completely
time-reversible fashion.
\begin{figure}[t!]
\centering
\includegraphics[width=0.8\linewidth]{Fig10.png}
\caption{$z$-coordinate $z_\mathrm{CoM}$ of the disk's center
of mass as a function of time for ten full deformation
cycles. The simulated disk featured the same parameters as
the disk in Fig.\ \ref{fig:energies_fixedFrame} but with
$n_\mathrm{B}= 60$ and
$\varepsilon =
5\times 10^{-8}\,k_\mathrm{out}R_\mathrm{out}^2$. The spheres
in the mesh had a size of
$a=2\pi R_\mathrm{out}/(10n_\mathrm{B})$. In each
swelling cycle, the swelling factor $\alpha$ varied between
$1$ and $1.1$ in 200 steps. The lower graph shows a zoom
into a single (the first) cycle.
The net swimming motion is in the direction of the
opening. }
\label{fig:swimming_distance}
\end{figure}
Each swimming cycle consists of three phases
(see Fig.\ \ref{fig:swimming_distance}).
Beginning at $\alpha=1$ and
$z_\mathrm{CoM}(t=0) = 0$, the disk does not move at
first as long as it stays flat, i.e., until $\alpha > \alpha_\mathrm{c2,f}$,
where the disk snaps into a dome-like shape in the numerics.
The second phase starts with
the short snapping process itself.
During snapping into the
dome-like shape, the swimmer moves into the positive $z$-direction
(the direction of the
tip of the elliptic dome, upwards in Fig.\ \ref{fig:frame_sketch}).
This movement happens nearly instantaneously on the swelling time scale.
Subsequently, for $\alpha_\mathrm{c2,f}<\alpha<\alpha_\mathrm{max}$,
the height $\Delta z$
of the dome shape slowly continues to grow.
When the swelling factor starts to shrink again, the third
phase starts.
Here, the shape flattens again, and
we see a movement into the negative $z$-direction. When
the disk arrives back in its original state ($\alpha = 1$), a small net
motion into the negative $z$-direction, i.e.\ in the direction of the
opening, remains,
which results in an effective
slope in the diagram over multiple swelling cycles in Fig.\
\ref{fig:swimming_distance}.
The effective
speed of this swimmer is quite small,
with a net swimming distance of just below
$1\,\%$ of the disk's radius $R_\mathrm{out}$ after ten deformation cycles
for the parameter values given in Fig.\ \ref{fig:swimming_distance}.
This is a consequence of the known problem of all shape-changing
microswimmers that
the swimming distance typically
scales only quadratically with the deformation
displacement \cite{Lighthill1952,Blake1971}.
Nevertheless, this proves the principle
that the hysteretic shape transition of a flat
elastic disk can be used as a propulsion mechanism for a microswimmer.
Shape-changing swimmers can still be effective if the
driving frequency (the swelling cycle frequency for our swimmer)
is sufficiently high.
At low Reynolds numbers, the resulting swimming distance
$\Delta z_\mathrm{CoM}=z_\mathrm{CoM}(t=T_\mathrm{sw})$
is also independent of the shape of the
actual swelling cycle $\alpha(t)$ but can only depend on the values
$\alpha_\mathrm{min}=1$ and $\alpha_\mathrm{c2,f}$,
which characterize the hysteretic part of the swelling cycle.
We can use the reciprocal theorem \cite{Stone1996} in order to obtain
analytical insight into
the dependence of the swimming distance $\Delta z_\mathrm{CoM}$ on the
deformation displacement $\Delta z$ of the swimmer.
We focus on the $z$-component as axisymmetric deformations can only
lead to motion along the axis of symmetry of the disk.
For a deformed disk shape $A(t)$ with a
shape deformation $z=z(r,t)$ and a deformation velocity
$\dot{z}$ in $z$-direction ($r$ is the
radial coordinate in the $xy$-plane)
the reciprocal theorem gives
\begin{align}
F_D v_\mathrm{CoM} &= - \int_{A(t)} dA\, (\vec{n}\cdot \sigma_D)_z
\dot{z}
\end{align}
where $F_D$ is the $z$-component of the
viscous drag force of a disk with rigid shape $A(t)$
and velocity $v_\mathrm{CoM}$ in $z$-direction,
$\sigma_D$ the corresponding stress tensor of the fluid,
and $(\vec{n}\cdot \sigma_D)_z$ the
$z$-component of the normal stress of the fluid
onto the shape $A(t)$; these quantities
are related
via $F_D = \int_{A(t)}dA (\vec{n}\cdot \sigma_D)_z$.
This leads to
\begin{align}
v_\mathrm{CoM} &= - \frac{ \int_{A(t)} dA
(\vec{n}\cdot \sigma_D)_z \dot{z}}{F_D}
\nonumber\\
&= -
2\pi \int_0^{R_\mathrm{out}}dr r [\sigma](r,t)
(1+ (\partial_r z)^2)^{1/2} \dot{z}(r,t)
\label{eq:vcom}
\end{align}
where
\begin{align}
[\sigma](r,t) &\equiv \frac{(\vec{n}\cdot \sigma_D)_z}{F_D}
= \frac{1}{2\pi R_\mathrm{out}}
\frac{1}{(R_\mathrm{out}^2-r^2)^{1/2}}
\left( 1- {O}\left( (\partial_rz)^2 \right)\right)
\label{eq:sigma}
\end{align}
is the stress difference (divided by drag force) between
the two faces of a disk with rigid shape $A(t)$.
The first term in the last equation is the result for the
flat disk (see Refs.\ \citenum{Gupta1957,Tanzosh1996,Felderhof2012}).
For weakly deformed rigid disks there are corrections; for the scaling of the
net swimming distance with the shape height $\Delta z$ it is crucial
how these corrections scale with $\Delta z$.
For the total drag force
$F_D = \int_{A(t)}dA (\vec{n}\cdot \sigma_D)_z$
the corrections are quadratic, $-{O}\left( F_D(z/R_\mathrm{out})^2\right)$.
This can be shown by
interpreting the weakly deformed rigid disk as perturbed flat disk
and applying the reciprocal theorem \cite{Masoud2019}. Because
both the
fluid velocity field and the pressure only vary quadratically
in $z$ close to the flat disk shape \cite{Tanzosh1996}, boundary
conditions to the flow and, thus, the
fluid stresses and the drag receive only quadratic corrections.
This is also
supported by an exact result for the axisymmetric Stokes flow past
spherical caps of opening angle
$\beta$ and radius $R$ \cite{Dorrepaal1976}, which can also
be interpreted as weakly bent rigid disks.
The friction force $F_\mathrm{D,cap} =
\eta v_\mathrm{CoM} R(6\beta + 8\sin\beta + \sin(2\beta))$
reduces in the disk limit $\beta \ll 1$, where
$R_\mathrm{out} \approx R \beta$ and $z/R_\mathrm{out}\approx
\beta/2$, to
$F_D \approx 16\eta v_\mathrm{CoM}R_\mathrm{out}
\left(1- \frac{2}{3}(z/R_\mathrm{out})^2\right)$ in the first two leading orders.
Therefore, we expect to find a leading order reduction of the
stress jump (\ref{eq:sigma}), which is quadratic in $z/R_\mathrm{out}$
for a weakly deformed rigid disk.
This allows us to extract the scaling
of the swimming velocity as a function of the deformation function
$z(r,t)$ which describes the shape hysteresis.
The shape hysteresis as sketched in Fig.\ \ref{fig:hysteresis_sketch}
is mainly caused by flattened shapes during snapping ($\dot z(r)<0$,
$v_\mathrm{CoM}>0$),
while the shape remains dome-like during re-flattening ($\dot z(r)>0$,
$v_\mathrm{CoM}<0$).
Expanding in the deformation $z$
in eq. (\ref{eq:vcom}) and integrating over time to obtain
the swimming distance,
$\Delta z_\mathrm{CoM} = \int_0^{T_\mathrm{sw}}v_\mathrm{CoM}\,dt$,
we realize that the leading order term
only depends on final and initial state.
It
gives the same swimming distance
$ \Delta z_\mathrm{CoM,snap}\sim -\Delta z$ both in the
snapping and re-flattening phase and
cancels out exactly for a full
shape cycle. The next-to-leading term, however, gives a slightly
bigger contribution in the re-flattening phase, which should scale
as
$\Delta z_\mathrm{CoM}/R_\mathrm{out}
\sim -(\Delta z/R_\mathrm{out})^3$.
Therefore,
the net swimming distance is only a third
order effect in the hysteretic shape height $\Delta z$.
This is even smaller than the quadratic order observed, for example,
for deforming spheres \cite{Lighthill1952,Blake1971} and
appears
to be a hydrodynamic consequence of the disk geometry.
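The cancellation argument can be illustrated with a toy model (an assumption for illustration, not the paper's hydrodynamic calculation): write $v_\mathrm{CoM}=-c(z)\dot z$ with a drag-asymmetry coefficient $c(z)=c_0\bigl(1-a(sz)^2\bigr)$ whose quadratic correction carries a branch-dependent shape factor $s$ (flatter shapes during snapping, dome-like during re-flattening). The $z$-independent part of $c$ then cancels exactly over a closed cycle, and the residual is cubic in the shape height:

```python
import numpy as np

# Toy model of the cancellation argument: v_CoM = -c(z)*zdot with
# c(z) = c0*(1 - a*(s*z)**2). The values c0, a, s_snap, s_flat are
# illustrative assumptions; only the scaling with dz matters here.
c0, a = 1.0, 0.5
s_snap, s_flat = 1.0, 0.8   # different shape branches -> hysteresis

def net_distance(dz, n=2000):
    z = np.linspace(0.0, dz, n)
    c_up = c0 * (1 - a * (s_snap * z) ** 2)   # branch traversed one way
    c_dn = c0 * (1 - a * (s_flat * z) ** 2)   # branch traversed back
    f = c_dn - c_up                           # leading orders cancel
    return np.sum(0.5 * (f[:-1] + f[1:]) * np.diff(z))  # ∮ c(z) dz

dzs = np.array([0.05, 0.1, 0.2, 0.4])
d = np.array([net_distance(x) for x in dzs])
slopes = np.diff(np.log(np.abs(d))) / np.diff(np.log(dzs))
print(slopes)   # close to 3: net displacement is cubic in dz
```

For identical branches ($s_\mathrm{snap}=s_\mathrm{flat}$) the net distance vanishes, recovering the scallop theorem for a non-hysteretic cycle.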
This is confirmed in Fig.\ \ref{fig:zcom_deltaz}, where
the net swimming distance for swimmers with different thicknesses
is shown as a function of their shape height in the snapped state.
The shape height in the subcritical bifurcation with hysteresis
is limited by the stretching factor $\alpha$ of the shape.
Because a fiber along the diameter of the disk will stretch by a factor
$\alpha-1$, the change in height will scale as $\Delta
z/R_\mathrm{out} \sim (\alpha-1)^{1/2}$
for a curved disk.
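The geometric scaling $\Delta z/R_\mathrm{out}\sim(\alpha-1)^{1/2}$ can be verified by computing the arc length of a diameter fiber for an assumed parabolic dome profile $z(r)=\Delta z\,(1-(r/R)^2)$ (a model shape, not the simulated one):

```python
import numpy as np

# Relative stretch alpha-1 of a diameter fiber of a parabolic dome
# z(r) = dz*(1-(r/R)^2); expect (alpha-1) ∝ (dz/R)^2, i.e.
# dz/R ~ (alpha-1)^(1/2).
R = 1.0

def stretch(dz, n=20001):
    r = np.linspace(-R, R, n)
    zp = -2 * dz * r / R**2                      # slope z'(r)
    f = np.sqrt(1 + zp**2)
    arc = np.sum(0.5 * (f[:-1] + f[1:]) * np.diff(r))
    return arc / (2 * R) - 1                     # alpha - 1

s1, s2 = stretch(0.02), stretch(0.04)
print(s2 / s1)   # ≈ 4: doubling dz quadruples alpha-1
```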
For the hysteretic part of the deformation cycle, the relevant
stretching factor is $\alpha = \alpha_\mathrm{c,f}$.
This results in a net swimming distance
\begin{equation}
\frac{|\Delta z_\mathrm{CoM}|}{R_\mathrm{out}} \sim
\left(\frac{\Delta z}{R_\mathrm{out}}\right)^3
\sim (\alpha_\mathrm{c,f}-1)^{3/2}.
\label{eq:zcom}
\end{equation}
The swimming distance $\Delta z_\mathrm{CoM}$ per stroke
can be increased by increasing $\alpha_\mathrm{c,f}-1$ and, thus,
the width of the
yellow hysteresis area in Fig.\ \ref{fig:energies_fixedFrame},
which can be achieved, for example, by increasing the thickness
and, thus,
the bending modulus of the disk. This is explicitly
demonstrated in Fig.\ \ref{fig:zcom_deltaz}.
The thickness dependence $\alpha_\mathrm{c,f}-1\propto h^2$ results in $|\Delta
z_\mathrm{CoM}| \propto h^3$.
\begin{figure}[t!]
\centering
\includegraphics[width=0.8\linewidth]{Fig11.pdf}
\caption{Net swimming distance $|\Delta z_\mathrm{CoM}/R_\mathrm{out}|$ per
cycle as a function of the shape height $\Delta z/R_\mathrm{out}$
for both disks and the 9-bead swimmer.
For the 9-bead swimmer the parameter $\Delta z$ is directly
prescribed.
For disks the increasing shape height $\Delta z$ in the
snapped state
is realized by increasing the thickness
in the range $h/R_\mathrm{out}=0.02-0.14$, the other
parameters are as in Fig.\ \ref{fig:energies_fixedFrame}.
The fit is $\Delta z_\mathrm{CoM}/R_\mathrm{out}
= 0.022(\Delta z/R_\mathrm{out})^3$.}
\label{fig:zcom_deltaz}
\end{figure}
The deformation of the disk into a dome-like shape resembles
the deformation pattern of a scallop as envisioned by Purcell
\cite{Purcell1977};
our disk is, however, hysteretic, which enables swimming.
Interestingly, we find swimming into negative
$z$-direction, which is {\it the direction away from the
tip of the dome}. This is opposite to the swimming direction
we expect from Purcell's high Reynolds number scallop, which moves
in the direction of its tip by thrusting fluid out of the opening.
Our disk, on the contrary, will pull in fluid through the
opening because the size of the opening is fixed by the frame.
Also at high Reynolds numbers this leads to inertial
thrust in the direction of the opening.
\section{9-bead model}
\begin{figure*}[!t]
\centering
\includegraphics[width=0.7\linewidth]{Fig12.pdf}
\caption{Illustration of the cyclic deformation
in the 9-bead model. First row:
corresponding states of the disk in the fixed frame. Second
row: three-dimensional sketch of the 9-bead swimmer
states. Third row: two-dimensional side view of the 9-bead
model. Each column represents
one deformation state (1-2-3-1).}
\label{fig:9-bead-illustration}
\end{figure*}
In order to further elucidate the underlying propulsion mechanism, we
present a highly simplified 9-bead model of the hysteretic scallop.
In this model, the swimmer
consists of only nine spheres that move on simple prescribed trajectories in
relative coordinates.
The 9-bead-model features three deformation phases, which mimic
the above three phases of the disk swimmer; these deformation phases are
visualized in Fig.\ \ref{fig:9-bead-illustration}. The sketches in the first
row show the corresponding states of the full, framed disk analogously to
Fig.\ \ref{fig:frame_sketch}. The second row shows three-dimensional
illustrations of the 9-bead swimmer and the last row a two-dimensional side
view.
The undeformed flat state is represented in the left column. Nine spheres are
aligned symmetrically in the $xy$-plane. The central sphere (red) is
surrounded by four spheres (blue) that are placed symmetrically at a
distance of $R_\mathrm{out}/2$. The last four spheres (green) are arranged in
the same way but at a distance of $R_\mathrm{out}$ from the central
sphere. They represent the fixed frame. The connection lines between the
spheres indicate the structure of the object and have no direct physical
meaning. The second state (second column) represents a typical
transition state of
the disk during the snapping transition into the dome-like shape.
There, the outer regions
of the disk are already close to their final position, but the central region
does not show a tip yet. This is modelled by elevating the five inner spheres
(red and blue) in $z$-direction by $\Delta z$.
The third state is the
final stable elliptic dome-like shape after the transition. Here, the
central sphere is elevated by another distance $\Delta z$. Finally,
the last state is again the flat and relaxed disk so that the cycle can be
repeated. During the evolution from one state into another, the spheres move
on trajectories that linearly interpolate the positions of the spheres
between the initial and final states. Therefore, the first evolution phase
($1\rightarrow2$) and the second phase ($2\rightarrow 3$) represent an
increasing swelling factor $\alpha$, while the last phase ($3\rightarrow 1$)
is related to a decreasing $\alpha$.
\subsection{Swimming behavior}
\begin{figure}[h!]
\centering
\includegraphics[width=0.8\linewidth]{Fig13.pdf}
\caption{$z$-coordinate of the 9-bead swimmer's center of mass as a
function of time during a single deformation cycle. The bead size is
$a = 0.1\,R_\mathrm{out}$ and $\Delta z = 0.5\,R_\mathrm{out}$. The
sketches illustrate the conformations of the swimmer during the
deformation cycle.}
\label{fig:9bead_swimming}
\end{figure}
The swimming motion of the 9-bead model is calculated with the same
hydrodynamic model as before, but now the relative velocities of the spheres
are prescribed, so the forces acting on them have to be calculated. The knowledge of
eight relative velocities, together with the condition that the
swimmer is force-free, i.e., the total force
must vanish, enables us to calculate the motion of the center of mass
\cite{NajafiGolestanian2004,Golestanian2008}. The results are shown in Fig.\
\ref{fig:9bead_swimming} for a single deformation cycle. In the first
deformation phase ($0<t< 1/3\,T_\mathrm{sw}$), the swimmer moves upwards in
positive $z$-direction. In the second phase, when the central sphere forms the
tip ($1/3\, T_\mathrm{sw}<t< 2/3\, T_\mathrm{sw}$), there is still an upwards
movement, which is slower than before because fewer spheres show a
relative motion in this phase. In the final phase ($t> 2/3\, T_\mathrm{sw}$),
all spheres relax back to their original position causing a quick movement in
negative direction. After one complete cycle, at $t=T_\mathrm{sw}$, we see a
small net movement in negative direction with
$\Delta z_\mathrm{CoM} = -6.6\times 10^{-4}\,R_\mathrm{out}$. In
conclusion, we see the same qualitative behavior as for
the full disk model
in Fig.\ \ref{fig:swimming_distance}.
We can state that the simplified
9-bead model captures the swimming mechanism of a swelling disk that
has a fixed outer frame.
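The force-free calculation for prescribed relative velocities can be sketched with the classic three-sphere swimmer of Najafi \& Golestanian, the simplest relative of the 9-bead model: given the arm-length rates and an Oseen-level mobility matrix, the bead forces follow from the constraint that the total force vanishes. All parameter values below are illustrative assumptions, not the 9-bead values of the paper:

```python
import numpy as np

# Three collinear spheres of radius a; prescribed arm velocities u1, u2;
# Oseen-level mobility along the line; total force must vanish.
mu, a = 1.0, 0.1

def mobility(x):
    n = len(x)
    M = np.eye(n) / (6 * np.pi * mu * a)       # self-mobility
    for i in range(n):
        for j in range(n):
            if i != j:                          # pair mobility along axis
                M[i, j] = 1 / (4 * np.pi * mu * abs(x[i] - x[j]))
    return M

def step(x, u1, u2, dt):
    """Advance positions; u1, u2 are the prescribed arm-length rates."""
    M = mobility(x)
    # rows: v2-v1 = u1, v3-v2 = u2, F1+F2+F3 = 0 (force-free swimmer)
    A = np.vstack([M[1] - M[0], M[2] - M[1], np.ones(3)])
    F = np.linalg.solve(A, np.array([u1, u2, 0.0]))
    return x + (M @ F) * dt

def run_cycle(seq, T=1.0, dt=1e-3):
    x = np.array([-1.0, 0.0, 1.0])
    x0 = x.mean()
    for (u1, u2) in seq:
        for _ in range(int(T / dt)):
            x = step(x, u1, u2, dt)
    return x.mean() - x0                        # net CoM displacement

eps = 0.3
# Non-reciprocal 4-phase cycle: expand arm 1, expand arm 2, then contract both
swim = run_cycle([(eps, 0), (0, eps), (-eps, 0), (0, -eps)])
# Reciprocal stroke (Purcell's scallop): expand and re-contract arm 1 only
recip = run_cycle([(eps, 0), (-eps, 0)])
print(swim, recip)   # finite net displacement vs ~0 for the reciprocal stroke
```

The reciprocal stroke yields essentially zero net displacement (scallop theorem), while the non-reciprocal cycle swims; the paper's 9-bead computation follows the same logic with nine beads and three-dimensional mobilities.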
For the 9-bead swimmer simulation results in Fig.\ \ref{fig:zcom_deltaz}
show that the swimming
distance per cycle is also compatible with
$\Delta z_\mathrm{CoM}/R_\mathrm{out}\propto -(\Delta z/R_\mathrm{out})^3$
for small prescribed shape heights $\Delta z$ and collapses onto the
results for the swelling disk.
\begin{table}[h]
\small
\caption{\ Swimmer parameter $\gamma$ (see eq.\ (\ref{eq:gamma}))
for the 9-bead and disk
swimmers at the beginning of the respective phases.}
\label{tbl:gamma}
\begin{tabular*}{0.48\textwidth}{@{\extracolsep{\fill}}lll}
\hline
phase & $\gamma_\mathrm{9-bead}$ & $\gamma_\mathrm{disk}$ \\
\hline
$1\to 2$ (neutral/weakly pushing,
$\gamma$ small) & $-0.0068$ & $0.39$\\
$2\to 3$ (pulling, $\gamma<0$) & $-3.63$ & $-10.88$ \\
$3\to 1$ (pushing, $\gamma>0$) & $9.18$ & $1.24$ \\
\hline
\end{tabular*}
\end{table}
We can also compare the characteristics of the resulting fluid
flow field between the full disk model and the 9-bead model;
Fig.\ \ref{fig:streamplots_combined} shows that there
is good qualitative agreement between both models.
In particular, the characteristic stagnation point
and the vortex ring below the snapping disk are
reproduced. These features are characteristic for
non-convex bodies and are also observed for dragged
spherical caps \cite{Dorrepaal1976}.
We can expand the axisymmetric flow field in the far-field
into Legendre polynomials \cite{Lighthill1952,Blake1971} and
extract the dipole contribution $p_2 P_2(\cos\theta)/r^2$ of the
radial part $u_r(r,\theta)$ of the flow field $\vec{u}(\vec{r})$
from eq.\ (\ref{eq:u_rotne_prager}) and normalize by the
center of mass velocity $|v_\mathrm{CoM}|$ to define
the dimensionless parameter
\begin{equation}
\gamma \equiv \frac{2p_2}{3R_\mathrm{out}^2|v_\mathrm{CoM}|}.
\label{eq:gamma}
\end{equation}
Values $\gamma <0$ ($p_2<0$) indicate ``pulling'' motion of the
swimmer, values $\gamma>0$ ($p_2>0$) ``pushing'' motion, and
$\gamma \approx 0$ ($p_2\approx 0$) a neutral motion \cite{Lauga2009}.
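The projection behind eq.\ (\ref{eq:gamma}) can be sketched as follows, using the orthogonality of the Legendre polynomials; the synthetic far field and its coefficients are assumptions for illustration only:

```python
import numpy as np

# Extract the dipole coefficient p2 of a radial far field
# u_r = p1*P1(cos t)/r + p2*P2(cos t)/r^2 by projecting onto P2,
# then form gamma = 2*p2/(3*R_out^2*|v_cm|). Synthetic coefficients.
R_out, v_cm = 1.0, 0.1
p1_true, p2_true = 0.05, -0.3

theta = np.linspace(0.0, np.pi, 4001)
x = np.cos(theta)
P2 = 0.5 * (3 * x**2 - 1)

def gamma_at(r):
    u_r = p1_true * x / r + p2_true * P2 / r**2     # synthetic far field
    # orthogonality: int_{-1}^{1} P2(x)^2 dx = 2/5  ->  prefactor 5/2
    g = u_r * P2 * np.sin(theta)
    c2 = 2.5 * np.sum(0.5 * (g[:-1] + g[1:]) * np.diff(theta))
    p2 = c2 * r**2                                   # undo the 1/r^2 decay
    return 2 * p2 / (3 * R_out**2 * abs(v_cm))

print(gamma_at(50.0))   # ≈ -2.0, recovering 2*p2_true/(3*R_out^2*|v_cm|)
```

The $P_1$ (Stokeslet-like) term projects to zero, so the recovered $\gamma$ depends only on the dipole part, as intended.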
The results for $\gamma$ for both 9-bead and disk swimmer
are shown in table \ref{tbl:gamma}.
At the beginning of phase $1\to 2$
(state 1 in Fig.\ \ref{fig:9-bead-illustration})
the disk is approximately neutral, while the 9-bead swimmer
is a weak pusher; the disk appears neutral because the
subcritical
snapping instability is caused by short-wavelength deformations.
At the beginning of
phase $2\to 3$
(state 2 in Fig.\ \ref{fig:9-bead-illustration}) disk and 9-bead swimmer
are pullers,
and at the beginning of phase $3\to 1$
(state 3 in Fig.\ \ref{fig:9-bead-illustration}) they are
pushers.
\begin{figure*}[p]
\centering
\includegraphics[width=1.0\linewidth]{Fig14.pdf}
\caption{Velocity field $\vec{u}(\vec{r})$ in the $(y=0)$\,-plane for
each deformation phase. The left side shows the disk with a fixed
frame from Fig.\ \ref{fig:energies_fixedFrame} and a discretization of
$n_\mathrm{B}=40$. The right side shows a 9-bead swimmer with
$a=0.1\,R_{\mathrm{out}}$ and $\Delta z =
$0.5\,R_{\mathrm{out}}$. Colors indicate the absolute value $|\vec{u}|$ in
units of $k_\mathrm{out}/\eta$ for the disk swimmer and in units of
$R_\mathrm{out}/T_{\mathrm{sw}}$ for the 9-bead swimmer. The arrows
show the direction of the fluid velocity vectors.}
\label{fig:streamplots_combined}
\end{figure*}
\section{Conclusions}
We pursued the concept of a low Reynolds number swimmer
that utilizes a hysteretic shape transition
in order to convert a
completely time-reversible oscillation of a
control parameter into a directed swimming motion.
We proved this concept with
a flat circular elastic disk that undergoes a
shape transition into curved
shapes by a localized swelling
of an inner disk or an exterior annulus. While
swelling the inner region of the disk with a constant swelling factor
$\alpha$ and the outer annulus with $1/\alpha$, we saw a transition into an
elliptic dome-like shape for $\alpha > 1$ and a transition into a hyperbolic
saddle shape for $\alpha < 1$.
The control parameter of this shape transition is the
swelling factor $\alpha$.
We found the transition to be a supercritical bifurcation with only
numerical hysteresis, which will disappear in a real
experiment in the presence of some fluctuations.
We could re-establish a genuinely hysteretic shape transition by
replacing the outer annulus by a fixed outer frame and by
introducing an additional
attractive short-range interaction in the central region.
The details of this interaction are not relevant, as long as it
is an attractive short-range interaction, as it arises, for example,
from van der Waals forces or screened electrostatic forces.
We then see a hysteretic subcritical shape transition between a
flat state and a dome-like shape.
Embedding this framed disk into a viscous fluid at low Reynolds
numbers, a Stokesian
dynamics simulation of the hydrodynamic interaction with
the surrounding fluid
under a time-reversible, cyclic variation of $\alpha$ showed
that the swimmer effectively moves in
the direction of the opening of the dome.
This way, a self-deforming microswimmer can be realized that uses only a
single scalar
control parameter, the swelling factor $\alpha$. This control parameter
has to be changed only by a few percent in order to trigger a drastic
conformation change with the snapping transition into the curved shape.
Interestingly, the snapping into
an elliptic dome-like shape resembles the opening
and closing of a
scallop as envisioned in the scallop theorem
\cite{Purcell1977}. As opposed to Purcell's scallop
the elastic disk swimmer actually performs directed motion at
low Reynolds numbers because of the additional hysteresis.
The swimming direction of the snapping elastic disk is
into the direction of the opening of the dome (away from the tip).
As for many shape-changing low Reynolds number microswimmers
swimming is a higher order effect in the deformation displacement.
For the snapping elastic disk
the net swimming distance per swelling cycle
is only a third order effect in the
height of the dome-like shape
($|\Delta z_\mathrm{CoM}/R_\mathrm{out}| \sim
(\Delta z/R_\mathrm{out})^3$).
The swimming mechanism by hysteretic snapping is reproduced
by a simplified 9-bead model of the disk. Qualitative agreement of the
resulting flow fields, their pusher/puller characteristics, and the
net swimming distance (see figs.\ \ref{fig:zcom_deltaz} and
\ref{fig:streamplots_combined}) shows that the 9-bead model
captures the essence of the swimming mechanism.
An experimental realization of the snapping disk
microswimmer concept could be possible
using thermoresponsive hydrogels, which have already been
used for the implementation
of helical microswimmers \cite{Mourran2017,Zhang2017}.
These hydrogels can be swollen by
plasmonic heating of embedded gold particles by laser light.
Therefore, localized swelling is possible by embedding gold particles
only in specific parts of the hydrogel, such that these hydrogels
are suited to realize the proposed disk microswimmers.
With hydrogel properties
from the literature, an estimate of the expected swimming speed is also
possible. Mourran {\it et al.} investigated the thermal swelling behavior of
thermoresponsive hydrogels with the help of circular
disks~\cite{Mourran2017}. These disks had a diameter of $30\,\mathrm{\mu m}$
and a thickness of $5\,\mathrm{\mu m}$. They showed cyclic diameter changes of
several percent, driven by a pulsed laser at frequencies of
$1-5\,$Hz. Therefore, a frequency of $5\,$Hz seems realistic for the
realization of the deformation cycle of our microswimmer with comparable
dimensions. With a diameter of $30\,\mathrm{\mu m}$ and a swimming distance of
about $1\,\%$ of the radius after ten cycles, a net velocity of
$0.075\,\mathrm{\mu m}/s$ follows.
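This back-of-the-envelope estimate is simple arithmetic with the quoted numbers:

```python
# Speed estimate from the quoted hydrogel experiments: disk diameter
# 30 um (radius 15 um), net displacement ~1% of the radius per ten
# cycles, actuation frequency 5 Hz.
R_out = 15.0                            # um
dist_per_cycle = 0.01 * R_out / 10.0    # um per cycle
v = dist_per_cycle * 5.0                # um/s at 5 Hz
print(v)                                # 0.075 um/s
```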
Generally, shape-changing swimmers such as the snapping disk
can only be made effective if the
driving frequency (the swelling cycle frequency for our swimmer)
is sufficiently high. On the one hand, we expect higher
frequencies to be possible for thin disks.
On the other hand,
the swimming distance per swelling cycle
can be increased by increasing
the thickness of the disk ($|\Delta z_\mathrm{CoM}| \propto h^3$).
One has to bear in mind that the snapping transition
dynamics has to be sufficiently slow for low Reynolds number
hydrodynamics to apply. We estimated the typical hydrodynamical
relaxation time scale, which is the typical time scale for snapping
in a viscous fluid, to be
$\tau_\mathrm{h} \sim \eta R_\mathrm{out}/\sigma_0 \sim
\gamma_\mathrm{FvK}^{3/2} \eta/Y_\mathrm{3D}$, which is
of the order of $\tau_\mathrm{h} \sim 10^{-5} {\rm s}$ for
the thermoresponsive hydrogel disks from Ref.\ \citenum{Mourran2017}.
Then ${\rm Re}<1$ is equivalent to
$\tau_\mathrm{h} > \rho R_\mathrm{out}^2/\eta $ and restricts
the low Reynolds number regime to sufficiently thin disks with
$ h^3/R_\mathrm{out} < 10^{-1}{\rm \mu m}^2$.
At higher Reynolds numbers, we also
expect swimming motion in the direction away from the
tip of the dome.
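At the scaling level (prefactors dropped), combining $\tau_\mathrm{h}\sim(R_\mathrm{out}/h)^3\,\eta/Y_\mathrm{3D}$ with ${\rm Re}<1\Leftrightarrow\tau_\mathrm{h}>\rho R_\mathrm{out}^2/\eta$ gives $h^3/R_\mathrm{out}<\eta^2/(\rho Y_\mathrm{3D})$. The sketch below evaluates this bound with assumed material values, water viscosity and density and a hydrogel modulus $Y_\mathrm{3D}\sim10\,$kPa, and reproduces the quoted order of magnitude:

```python
# Scaling-level check (prefactors dropped) of the low-Reynolds condition
# h^3/R_out < eta^2/(rho*Y_3D). Material values are assumptions: water
# (eta, rho) and a ~10 kPa hydrogel elastic modulus Y_3D.
eta, rho, Y3D = 1e-3, 1e3, 1e4       # Pa*s, kg/m^3, Pa
bound_m2 = eta**2 / (rho * Y3D)      # upper bound on h^3/R_out, in m^2
bound_um2 = bound_m2 * 1e12          # convert m^2 -> um^2
print(bound_um2)                     # ~0.1 um^2, consistent with the text
```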
The concept of a low Reynolds number swimmer
based on the principle of a hysteretic shape transition
can be applied to other snapping systems such as
tube-like \cite{Overveldea2015} or
shell-based snapping systems \cite{Gorissen2020}.
All systems with a hysteretic snapping instability should give
rise to a net swimming motion
in a viscous fluid at low Reynolds numbers.
As opposed to our disk swimmer, these tube and shell objects
enclose volume and employ
volume or pressure control, such that genuinely autonomous
swimming is not possible because swimming requires attached
devices to control volume or pressure \cite{Djelloul2017}.
Swelling or shrinking materials, as proposed in this work, offer
an alternative way to control the equilibrium area and, consequently,
the equilibrium enclosed volume of these objects,
which can often
trigger the same snapping transition at fixed actual volume.
\section*{Conflicts of interest}
There are no conflicts to declare.
\section*{Acknowledgements}
We acknowledge financial support by the Deutsche Forschungsgemeinschaft
via SPP 1726 ``Microswimmers'' (KI 662/7-2).
\section{Introduction}\label{sec:introduction}
Galaxy photometric surveys provide an efficient tool to explore the evolution of the Universe and the properties of galaxies. By taking galaxy images in a few photometric bands and deriving the corresponding fluxes, a great deal of important information can be obtained, such as magnitude, color, morphology, type, size, etc. Several powerful cosmological probes, e.g. weak gravitational lensing and angular galaxy clustering measurements, can then be employed in cosmological studies. A number of ongoing and next-generation photometric surveys are running or will start in the near future, e.g. the Sloan Digital Sky Survey (SDSS)\footnote{\url{ http://www.sdss.org/}} \citep{Fukugita96,York00}, the Dark Energy Survey (DES)\footnote{\url{https://www.darkenergysurvey.org/}} \citep{Abbott2016,Abbott2021}, the Legacy Survey of Space and Time (LSST) or Vera C. Rubin Observatory\footnote{\url{https://www.lsst.org/}} \citep{Abell2009, Ivezic2019}, the Euclid space telescope\footnote{\url{https://www.euclid-ec.org/}} \citep{Laureijs2011}, and the Wide-Field Infrared Survey Telescope (WFIRST) or Nancy Grace Roman Space Telescope (RST) \footnote{\url{https://www.stsci.edu/roman}} \citep{Green2012, Akeson2019}. These surveys are expected to observe huge numbers of galaxies in wide and deep fields, and can provide extremely precise measurements of the cosmic large-scale structure (LSS), the properties of dark energy and dark matter, galaxy formation and evolution, and so on.
The photometric redshift (photo-$z$) is a key quantity in photometric surveys. In cosmological studies, accurate photo-$z$ measurements provide reliable galaxy location information along the line of sight, which is crucial for probing the LSS with photometric surveys \citep{Zhan2006, Banerji2008}. Besides, the photo-$z$ accuracy is one of the main systematics in weak lensing surveys, and accurate measurements of photo-$z$ and its variance can effectively suppress this systematic and precisely extract information on the kinematic and dynamical evolution of the Universe \citep{Ma2006, Mandelbaum2008, Abdalla2008, Hearin2010}. Currently, there are two main methods used to derive galaxy photo-$z$ from photometric data. One is the template fitting method, which uses typical galaxy spectral energy distributions (SEDs) to fit photometric data and capture spectral features across multiple bands for deriving photo-$z$ \citep{Lanzetta96, Fernandez99, Bolzonella2000}. The other is the training method, which extracts photo-$z$ by developing empirical relations between redshift and different galaxy properties, such as magnitude, color, and morphology \citep{Collister2004, Sadeh2016, Brescia2021}. This method is usually implemented with neural networks, using galaxies with measured spectroscopic redshifts as the training sample.
Both methods have their own advantages. The template fitting method can be widely used at all redshifts if the selected galaxy SED templates are representative enough for the whole sample. On the other hand, the training method can obtain more accurate photo-$z$ results if the training sample contains a sufficiently large set of high-quality spectroscopic galaxy data, covering the whole redshift range and all features of galaxies in the photometric survey.
Although obtaining a qualified training sample would be a challenge for future next-generation photometric surveys that observe deep fields with large redshift coverage, fortunately, a number of high-quality spectroscopic galaxy surveys are currently running or planned, such as the Dark Energy Spectroscopic Instrument (DESI)\footnote{\url{https://www.desi.lbl.gov/}} \citep{Levi2019}, the Prime Focus Spectrograph \citep[PFS,][]{Tamura16}, the Multi-Object Optical and Near-infrared Spectrograph (MOONS)\footnote{\url{https://vltmoons.org/}} \citep{Cirasuolo20,Maiolino20}, the 4-metre Multi-Object Spectroscopic Telescope (4MOST)\footnote{\url{https://www.4most.eu/cms/}} \citep{deJong2019}, MegaMapper \citep{Schlegel19}, the Fiber-Optic Broadband Optical Spectrograph (FOBOS) \citep{Bundy19}, and SpecTel \citep{Ellis19}. On the other hand, machine learning algorithms, especially neural networks, have developed remarkably in recent years. Neural networks are now widely used in astronomical and cosmological studies, and a number of network architectures have been proposed to solve different problems. Two widely used networks are the multi-layer perceptron (MLP) and the convolutional neural network (CNN). The MLP is the simplest form of neural network, constructed from an input layer, several hidden layers, and an output layer \citep{Haykin1994}, while the CNN, proposed by \citet{Fukushima1982} and \citet{Lecun1998}, can extract features from images by learning kernel arrays, and has gained great success in detection and recognition tasks.
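As a schematic illustration of an MLP regressor for photo-$z$, the sketch below maps seven band fluxes through fully connected layers to a single redshift output. The layer sizes and random weights are placeholders (a real network would be trained on a spectroscopic sample):

```python
import numpy as np

# Schematic MLP for photo-z regression: 7 band fluxes -> hidden layers
# -> one redshift output. Sizes and weights are placeholder assumptions.
rng = np.random.default_rng(0)
sizes = [7, 32, 32, 1]
params = [(rng.normal(0, 0.1, (m, n)), np.zeros(n))
          for m, n in zip(sizes[:-1], sizes[1:])]

def forward(flux):
    h = flux
    for k, (W, b) in enumerate(params):
        h = h @ W + b
        if k < len(params) - 1:
            h = np.maximum(h, 0.0)        # ReLU on hidden layers
    return h

z_pred = forward(rng.normal(size=(5, 7)))  # 5 mock galaxies, 7 bands
print(z_pred.shape)   # (5, 1): one photo-z estimate per galaxy
```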
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{Transmission.pdf}
\caption{The intrinsic (dashed curves) and real (solid curves) transmissions of the CSST seven photometric filters. The real transmission curves include the effect of detector quantum efficiency. The details of the transmission parameters can be found in \citet{Cao2018} and Meng et al. (in preparation).}
\label{fig:transmission curve}
\end{figure}
In this work, we focus on the training method by employing neural networks to explore the photo-$z$ accuracy in the photometric surveys of the China Space Station Telescope (CSST). The CSST is a next-generation two-meter space telescope, which is planned to launch around 2024~\citep{Zhan2011, Zhan2018, Zhan2021, Cao2018, Gong2019}. It will be in the same orbit as the China Manned Space Station for maintenance and repair. It has seven photometric bands from the near-ultraviolet (NUV) to the near-infrared (NIR), i.e. $NUV, u, g, r, i, z, y$, covering the wavelength range from $\sim250$ nm to $\sim1000$ nm, with 5$\sigma$ magnitude limits of 25.4, 25.4, 26.3, 26.0, 25.9, 25.2 and 24.4, respectively. The intrinsic transmissions and the real transmissions including detector quantum efficiency of the CSST seven photometric filters are illustrated in Figure~\ref{fig:transmission curve}. The details of the transmission parameters can be found in \cite{Cao2018} and Meng et al. (in preparation). In total, 17,500 deg$^2$ of sky will be observed during its ten-year mission period, and photometric and spectroscopic surveys will be performed simultaneously. The CSST will address questions such as the properties of dark matter and dark energy, the evolution of the LSS, the formation of galaxies, and other important problems in cosmological and astrophysical studies. Many CSST scientific goals and surveys rely on photo-$z$ measurements, e.g. the weak gravitational lensing survey, which is the main CSST scientific driver.
In previous studies, the CSST photo-$z$ accuracy was preliminarily investigated by employing the SED template fitting and MLP neural network methods using mock flux data directly generated from galaxy SED templates \citep{Cao2018,Zhou2021}. Although relevant background and instrumental noises were properly considered, these results are still idealized, giving a photo-$z$ accuracy of $\sim0.02-0.03$ and an outlier fraction of $\sim 3\%$ for the SED fitting method, and $\sim0.01-0.02$ and $\sim0.1\%$ for the neural network. In this work, we simulate CSST mock galaxy images in the photometric surveys, and measure galaxy fluxes from these images with the aperture photometry technique. Besides, since galaxy morphology should also contain photo-$z$ information~\citep{Wadadekar2005, Soo2018, WilsonD2020}, we also estimate photo-$z$ directly from galaxy images. The MLP and CNN are adopted to extract photo-$z$ from the CSST galaxy flux and image data, respectively. We also propose a hybrid network using the transfer learning technique to efficiently derive photo-$z$ information from both flux and image data by effectively combining the MLP and CNN.
This paper is organized as follows: in Section~\ref{sec:mock data}, we explain the details of generation of the mock image and flux data. In Section~\ref{sec:neural network}, we present architectures of the neural networks we use and details of training process. The results are shown in Section~\ref{sec:result}. We summarize our results in Section~\ref{sec:conclusion}.
\section{Mock Data}\label{sec:mock data}
In order to simulate galaxy images of the CSST photometric survey as realistically as possible, we generate mock images based on the observations in the COSMOS field performed by the Advanced Camera for Surveys of the Hubble Space Telescope (HST-ACS). The image simulation process will be explained in more detail by Meng et al. (in preparation). This survey covers about 1.7 deg$^2$ in the F814W band, which has a spatial resolution similar to that of the CSST, with an 80\% energy concentration radius $R_{80}\simeq0.15''$~\citep{Koekemoer2007,Massey2010,Bohlin2016,Cao2018,Gong2019,Zhan2021}. The background noise of the COSMOS HST-ACS F814W survey is also quite low (expected to be $\sim$1/3 of the CSST survey), hence it provides a good foundation for simulating the CSST galaxy images.
Note that we only use the measurements from one HST band, i.e. F814W, to generate mock galaxy images, without the color gradient considered. In real surveys, galaxy morphology will look different in different photometric bands. This would result in wavelength-dependent features and provide more information about galaxy properties, which can be helpful in the neural network training process and improve the accuracy of the photo-$z$ estimate. In addition, based on simulations, the CSST point spread function (PSF) should be more regular or Gaussian-like than the PSF of the HST, so the image distortion should be more easily corrected in the CSST data processing. However, since the CSST field of view is much larger than that of the HST (about 300$\times$ larger), precisely understanding and measuring the variations of the PSFs at different locations of the focal plane will be a challenging task, which may affect the image calibration. In this work, for simplicity, we ignore the color gradient and PSF effects on galaxy images.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{Redshift_distr.pdf}
\caption{Galaxy redshift distribution of the sample selected from the COSMOS catalog used in the neural networks. These sources are selected with SNR greater than 10 in the $g$ or $i$ band. The distribution peaks around $z=0.7$ and extends to $z\sim4$, the same as the CSST photometric galaxy sample.}
\label{fig:redshift distri}
\end{figure}
\begin{figure*}
\centering
\includegraphics[width=2.\columnwidth]{Image_examples.pdf}
\caption{Examples of simulated galaxy images in the seven CSST photometric bands. Note that the noise in the $NUV, u$ and $y$ bands is larger than in the others, since these bands have lower transmissions. Many sources, especially at high redshifts, are overwhelmed by the background noise in some bands, which indicates that neural networks are needed to extract information from these sources.}
\label{fig:image examples}
\end{figure*}
To obtain high-quality images as ``ideal'' images for the following mock image production, we select the central 0.85$\times$0.85 deg$^2$ area of this survey, including $\sim$192,000 galaxies. Then we rescale the pixel size of the COSMOS HST-ACS F814W survey from 0.03$^{\prime\prime}$ to 0.075$^{\prime\prime}$ to match the CSST pixel size. Next we extract a square stamp image for each galaxy from the survey area, with the galaxy in the center of the stamp image. The size of the square stamp is 15 times the galaxy's semi-major axis, which means the galaxy stamp images have different sizes. The galaxy's semi-major axis and other relevant galaxy morphological information can be obtained from the COSMOS weak lensing source catalog~\citep{Leauthaud2007}. In addition, we also mask all sources with signal-to-noise ratio (SNR) $>3\sigma$ in the stamp image except for the galaxy in the center, and replace them by the CSST background noise as we show later, which is helpful for the neural networks to extract galaxy information.
By making use of galaxy spectral energy distributions (SEDs), we can rescale the galaxy images of the COSMOS HST-ACS F814W survey to the CSST flux level. We first need to find the corresponding SED for each galaxy by fitting its flux data. Here the COSMOS2015 catalog\footnote{Note that there are still some issues in the COSMOS2015 catalog, and the COSMOS2020 catalog makes relevant improvements and could be used in future analysis~\citep{Weaver2021}.} is adopted to match galaxies in the HST-ACS F814W survey, which contains 220,000 galaxies with measurements of galaxy redshift, magnitude, size, and so on~\citep{Laigle2016}. To avoid additional errors, we use the 31 SED templates from the $\it LePhare$ code~\citep{Arnouts1999, Ilbert2006}, which are also used in the COSMOS2015 catalog, to fit the galaxy flux data. Since the CSST has a large wavelength coverage from the NUV to the NIR, for galaxies detected at $z\gtrsim2$, we also extend the wavelength range of these templates from $\sim900$ \r{A} down to $\sim90$ \r{A} using the BC03 method \citep{Bruzual03}. The details can be found in \cite{Cao2018}. We select high-quality photometric data with reliable photo-$z$ measurements, and about 100,000 galaxies are selected. Besides, when fitting the flux data with the SED templates and finding the best-fit SEDs, we also include dust extinction and emission lines, such as Ly$\alpha$, H$\alpha$, H$\beta$, [OII] and [OIII].
After obtaining the best-fit SED for each galaxy, we can calculate the theoretical flux data observed by the CSST. The theoretical flux, i.e. the electron counting rate in $\rm e^- s^{-1}$, in a band $j$ can be estimated as
\begin{equation}
C^j_{\rm th} = A^j_{\rm eff} \int S(\lambda) \tau_j(\lambda) \frac{\lambda}{hc} d\lambda.
\end{equation}
Here $A^j_{\rm eff}$ is the effective telescope aperture area, $h$ and $c$ are the Planck constant and the speed of light, $S(\lambda)$ is the galaxy SED, and $\tau_j(\lambda)=T_j(\lambda)Q_j(\lambda)M_j(\lambda)$ is the total system throughput, where $T_j(\lambda)$, $Q_j(\lambda)$, and $M_j(\lambda)$ are the intrinsic filter transmission, detector quantum efficiency, and total mirror efficiency, respectively \citep{Cao2018,Zhou2021}. The theoretical electron counts in band $j$ can then be expressed as
\begin{equation}
F^j_{\rm th} = C^j_{\rm th}\, t_{\rm exp} N^j_{\rm exp}\, ,
\end{equation}
where $t_{\rm exp}=150$ s is the exposure time, and $N^j_{\rm exp}$ is the number of exposures: $N^j_{\rm exp}=2$ for the $u$, $g$, $r$, $i$ and $z$ bands, and 4 for the $NUV$ and $y$ bands.
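For illustration, the two formulas above can be sketched in Python as follows (a minimal numerical sketch; the function names, wavelength grid, and the `area_eff` value in the example are our placeholders, not CSST instrument data):

```python
import numpy as np

H_PLANCK = 6.626e-27   # Planck constant [erg s]
C_LIGHT = 2.998e18     # speed of light [Angstrom / s]

def electron_count_rate(wave, sed, throughput, area_eff):
    """C_th^j = A_eff^j * int S(lambda) tau_j(lambda) lambda/(hc) dlambda  [e-/s].

    wave       : wavelength grid [Angstrom]
    sed        : galaxy SED S(lambda) [erg s^-1 cm^-2 Angstrom^-1]
    throughput : total system throughput tau_j(lambda), dimensionless
    area_eff   : effective aperture area A_eff^j [cm^2]
    """
    integrand = sed * throughput * wave / (H_PLANCK * C_LIGHT)
    # trapezoidal integration over the wavelength grid
    return area_eff * float(np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(wave)))

def total_electrons(c_th, t_exp=150.0, n_exp=2):
    """F_th^j = C_th^j * t_exp * N_exp^j  [e-]."""
    return c_th * t_exp * n_exp
```

Since $\lambda/(hc)$ converts the energy flux density to a photon rate, the integrand has units of photons per second per cm$^2$ per \AA\ before multiplying by the aperture area.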
For the measured galaxy flux $F^j_{\rm obs}$ in the HST-ACS F814W survey, we collect the flux of the central galaxy in each stamp image within a photometric aperture of 2 times the Kron radius \citep{Kron1980}, i.e. $2\times R_{\rm Kron}$. We can then rescale the measured galaxy flux to the flux expected for the CSST, and produce the CSST mock galaxy image in each band. Note that, in the meantime, we rescale the background of the stamp image by the same factor, i.e. we rescale the whole stamp image including both the central galaxy and the background, which is used in the following background correction.
Besides rescaling the galaxy flux, we also adjust the background noise in the stamp image to the CSST level. Since the background noise in the HST-ACS F814W survey is expected to be much lower than that of the CSST survey, we need to add additional noise into the image in band $j$, and we have
\begin{equation}
N_{\rm add}^j = \sqrt{\left(N_{\rm bkg,th}^j\right)^2 - \left(N_{\rm img}^j\right)^2},
\label{eq:N_add}
\end{equation}
where $N_{\rm img}^j$ is the background noise of the rescaled image discussed above, and $N_{\rm bkg,th}^j$ is the theoretical CSST background noise per pixel, which can be estimated by
\begin{equation}
N_{\rm bkg,th}^j = \sqrt{(B^j_{\rm sky} + B_{\rm dark})t_{\rm exp}N^j_{\rm exp} + R_{\rm n}^2 N^j_{\rm exp}}.
\label{eq:N_bkg}
\end{equation}
Here $B_{\rm dark}=0.02\ \rm e^{-}s^{-1}pix^{-1}$ is the dark current, $R_{\rm n}=5\ \rm e^{-}pix^{-1}$ is the read-out noise, and $B^j_{\rm sky}$ is the sky background in unit of $\rm e^{-}s^{-1}pix^{-1}$, which is given by
\begin{equation}
B^j_{\rm sky} = A^j_{\rm eff} l_{\rm pix}^2 \int I_{\rm sky}(\lambda) \tau_j(\lambda) \frac{\lambda}{hc} d\lambda,
\label{eq:B_sky}
\end{equation}
where $l_{\rm pix}$ is the pixel size in arcseconds, and $I_{\rm sky}(\lambda)$ is the surface brightness of the sky background in units of $\rm erg\,s^{-1}\,cm^{-2}\,${\AA}$^{-1}\,{\rm arcsec}^{-2}$. We evaluate $B^j_{\rm sky}$ based on the measurements of the earthshine and zodiacal light for the `average' sky background case given in~\cite{Ubeda2011}. We find that $B^j_{\rm sky}$ is 0.0023, 0.018, 0.142, 0.187, 0.187, 0.118 and 0.035 $\rm e^{-}s^{-1}pix^{-1}$ for the $NUV$, $u$, $g$, $r$, $i$, $z$ and $y$ bands, respectively, consistent with the results in \cite{Cao2018}. The corresponding $N_{\rm bkg,th}^j$ are 10.65, 7.84, 9.93, 10.59, 10.59, 9.56 and 11.53 $\rm e^{-}$ for these bands. The additional noise $N_{\rm add}^j$ can thus be calculated by subtracting the rescaled image noise $N_{\rm img}^j$ in quadrature, as shown in Equation~\ref{eq:N_add}, and is added to the pixels of the stamp images by sampling from a Gaussian distribution with mean $=0$ and $\sigma=N_{\rm add}^j$ in band $j$. We thereby obtain the final mock CSST galaxy images for each band.
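For concreteness, Equations~\ref{eq:N_add} and \ref{eq:N_bkg} can be sketched in Python as follows (a minimal sketch assuming the instrument parameters quoted above; the function names are ours):

```python
import numpy as np

def csst_background_noise(b_sky, b_dark=0.02, r_n=5.0, t_exp=150.0, n_exp=2):
    """Theoretical per-pixel CSST background noise N_bkg,th [e-] (Eq. N_bkg)."""
    return np.sqrt((b_sky + b_dark) * t_exp * n_exp + r_n**2 * n_exp)

def raise_noise_to_csst(image, n_img, b_sky, n_exp=2, seed=None):
    """Add Gaussian noise so the total per-pixel noise reaches the CSST level (Eq. N_add)."""
    n_bkg = csst_background_noise(b_sky, n_exp=n_exp)
    n_add = np.sqrt(n_bkg**2 - n_img**2)   # quadrature difference of the two noise levels
    rng = np.random.default_rng(seed)
    return image + rng.normal(0.0, n_add, size=np.shape(image))
```

With the sky levels quoted above, this reproduces e.g. $N_{\rm bkg,th}\approx9.93\ \rm e^{-}$ for the $g$ band and $\approx10.65\ \rm e^{-}$ for the $NUV$ band (with $N_{\rm exp}=4$).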
Next we measure the galaxy flux data from the CSST mock galaxy images using the aperture photometry method. Firstly, to obtain high-SNR source detections and morphological measurements, we stack the $g$, $r$, $i$, and $z$ band images of each galaxy to create a detection image. Secondly, we measure the Kron radius along the galaxy major- and minor-axes to define an elliptical aperture of size $1\times R_{\rm Kron}$, which improves the SNR. Finally, the flux and error in band $j$ are calculated from the electron number $N^{j}_{\rm e^-}$ and error $\sigma^{j}_{\rm e^-}$ measured within the aperture. Note that if a galaxy is very faint in a band, the measured flux can be negative due to the background noise. This does not affect the training process of our neural networks: as we show in Section~\ref{sec:neural network}, we rescale such fluxes and preserve their information. The SNR of a measured galaxy in band $j$ can be estimated as
\begin{equation}
{\rm SNR}_j = \frac{F^j_{\rm obs}}{\sqrt{F^j_{\rm obs}+N_{\rm pix}(B_{\rm sky} + B_{\rm dark})t_{\rm exp}N^j_{\rm exp} + R_{\rm n}^2 N^j_{\rm exp}N_{\rm pix}}},
\end{equation}
where $F^j_{\rm obs}$ is the observed electron counts, and $N_{\rm pix}$ is the number of pixels covered by a galaxy for the CSST, which can be derived from the COSMOS2015 catalog.
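This SNR estimate can be evaluated as, e.g. (a sketch assuming the instrument parameters quoted above):

```python
import numpy as np

def csst_snr(f_obs, n_pix, b_sky, b_dark=0.02, r_n=5.0, t_exp=150.0, n_exp=2):
    """SNR of a galaxy's aperture flux f_obs [e-] spread over n_pix pixels."""
    # variance: source shot noise + sky and dark current over the aperture + read noise
    var = f_obs + n_pix * (b_sky + b_dark) * t_exp * n_exp + r_n**2 * n_exp * n_pix
    return f_obs / np.sqrt(var)
```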
In Figure~\ref{fig:redshift distri}, we show the redshift distribution of the galaxy sample selected from the COSMOS catalog (the selection details can be found in the next section). Since the selected galaxy sample has high-quality photo-$z$ measurements, we assume that these photo-$z$s can be treated as spectroscopic redshifts (spec-$z$s) and used in the neural networks for method validation purposes. We can see that the distribution peaks around $z=0.6-0.7$ and extends to $z\sim4$, which is consistent with the CSST galaxy redshift distribution in previous studies \citep{Cao2018,Gong2019,Zhou2021}. In Figure~\ref{fig:image examples}, we show the CSST mock galaxy stamp images at different redshifts. We find that the low-$z$ galaxies have higher SNR with low backgrounds, while high-$z$ galaxies can be dominated by the background noise, especially in the bands with low transmissions. This indicates that machine learning techniques may be necessary to extract information from these bands. The corresponding galaxy fluxes measured from the mock images by aperture photometry in the seven CSST bands are shown in Figure~\ref{fig:sed examples}, where the SEDs are rescaled to the levels of the flux data for comparison. As we can see, our measured mock flux data match the corresponding SEDs very well and correctly represent their features.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{SED_examples.pdf}
\caption{The flux data directly measured by aperture photometry in the seven photometric bands from the mock CSST galaxy images shown in Figure~\ref{fig:image examples}. The SEDs of the corresponding galaxies are shown as solid curves for comparison.}
\label{fig:sed examples}
\end{figure}
\section{Neural Networks}\label{sec:neural network}
The MLP and CNN are used to derive photo-$z$s from the mock flux and image data obtained in the last section, respectively. We also explore effective ways of combining these two networks to test the improvement of the photo-$z$ accuracy when including both flux and image data. The details of the architectures and training processes of these networks are discussed in this section. All networks are implemented with Keras\footnote{\url{https://keras.io}} using TensorFlow\footnote{\url{https://www.tensorflow.org}} as the backend.
\subsection{Network Architecture}\label{sec:net arch}
The flux data of a galaxy observed by the CSST would contain seven discrete data points in the seven CSST photometric bands, and we adopt the MLP to predict photo-$z$ from these data.
The MLP is the simplest form of neural networks, consisting of an input layer, several hidden layers and an output layer. Every layer has several units or perceptrons, and the hidden layers can learn the internal relationship between input data points. Layers are connected by weights and biases, which can be initialized by random distributions and optimized in the training process.
We use the measured band fluxes, colors, and errors as the inputs of the MLP \citep{Zhou2021}. In order to speed up the training process and let the network effectively extract the relevant information, we need to rescale these data. A useful rescaling we find is to take the logarithm of the data. However, as discussed in the last section, since the fluxes and errors are measured by aperture photometry and are subject to noise fluctuations, we may encounter some negative fluxes, especially in the CSST $NUV$, $u$, and $y$ bands with relatively low transmissions. To handle this issue, we propose to use the following function
\begin{equation}
f(x) = \left\{
\begin{array}{lcc}
\log(x) & & {x > 0,} \\
-\log(-x) & & {x < 0.} \\
\end{array} \right.
\label{eq:scale function}
\end{equation}
This function naturally converts a negative value into a logarithmic one and effectively preserves its information, which we find useful for deriving photo-$z$\footnote{We also tried setting the negative fluxes to zero or using their measured upper limits, but we find that the negative fluxes contain useful information for deriving photo-$z$, and we should preserve them in the rescaling process.}. Before rescaling, we perform a normalization-like process on the fluxes and errors to speed up the training of the network. For the flux data, we divide each flux by a fixed value close to the measured flux; here we take this fixed value to be the flux limit of each CSST band, derived from the magnitude limits shown in Section~\ref{sec:introduction}. Note that our result is not sensitive to the choice of this fixed value in each band. For the error, we divide it by its corresponding flux to obtain a relative error. To include more information, we also construct a color-like quantity, obtained as the ratio of fluxes in two bands. The relative errors and color-like quantities are also rescaled by Equation~\ref{eq:scale function}. For simplicity, we refer to these rescaled normalized fluxes, relative errors, and color-like quantities as fluxes, errors, and colors in the following discussion.
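The rescaling of Equation~\ref{eq:scale function} and the normalization steps can be sketched as follows (a minimal sketch; pairing adjacent bands for the six colors is our illustrative assumption, and $x=0$ is left undefined, as in Equation~\ref{eq:scale function}):

```python
import numpy as np

def signed_log(x):
    """Eq. (scale function): log(x) for x > 0, -log(-x) for x < 0; x = 0 undefined."""
    x = np.asarray(x, dtype=float)
    return np.sign(x) * np.log(np.abs(x))

def mlp_inputs(flux, err, flux_limit):
    """Build the 20 MLP inputs from 7-band data: 7 fluxes, 7 errors, 6 colors."""
    f = flux / flux_limit          # normalize by the per-band flux limits
    rel_err = err / flux           # relative error
    color = flux[:-1] / flux[1:]   # color-like flux ratios (adjacent-band pairing assumed)
    return np.concatenate([signed_log(f), signed_log(rel_err), signed_log(color)])
```

Note that a negative flux simply maps to a negative logarithm, so its sign information survives the rescaling.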
Therefore, our MLP has 20 inputs: seven fluxes, seven errors, and six colors. We construct 6 hidden layers with 40 units or neurons in each layer, i.e. a classic $n:2n:\dots:2n:1$ structure is applied, where $n$ is the number of input data elements. We find that fewer hidden layers cannot give accurate results, while more layers lower the efficiency without further improving the accuracy. A BatchNormalization layer is applied after every hidden layer except the first one to reduce overfitting~\citep{Ioffe2015}, and every hidden layer is activated by the Rectified Linear Unit (ReLU) nonlinear function $y={\rm max}(0,x)$~\citep{Nair2010}. The output of the MLP is the photometric redshift. The details of the architecture of the MLP are shown in the blue dash-dotted box of Figure~\ref{fig:network architecture} and Table~\ref{tab:MLP_param}.
\begin{figure*}
\centering
\includegraphics[scale=0.37]{Architecture.pdf}
\caption{Architecture of the MLP, CNN and hybrid networks. The MLP structure is shown in the blue dash-dotted box. The inputs are rescaled fluxes, colors and errors, and 6 hidden layers in total are structured. The CNN structure is displayed in the dashed black box. The inputs are $(32\times32\times7)$ images, convolved and downsampled by the convolutional layer. Then three inception blocks are structured to obtain features of size 2, which are flattened by global average pooling into a vector of size 72. Next a fully-connected layer with 40 units is applied, and then the photo-$z$ is obtained. The inception block is illustrated in the dashed blue box. We use $(3\times3)$ and $(5\times5)$ kernels to extract features in parallel, and $(1\times1)$ kernels are adopted to reduce the number of features and increase the computational efficiency. The hybrid network is constructed by concatenating the vectors extracted by the MLP and CNN from their fully-connected layers. 6 fully-connected layers with 80 units are structured in total, and after each layer BatchNormalization and the ReLU activation function are applied.}
\label{fig:network architecture}
\end{figure*}
We adopt the CNN to analyze the galaxy images. The mock CSST galaxy images observed in the seven photometric bands can be seen as 2D arrays or matrices with seven channels. The CNN can process multiple-channel arrays, which is suitable in our case for extracting photo-$z$ from galaxy images. Note that the galaxy images we obtain have different sizes due to the stamp-slicing procedure based on the semi-major axis mentioned in Section~\ref{sec:mock data}, but the CNN can only process images of the same size. We therefore centrally crop the images larger than a threshold size $S_{\rm threshold}$ and pad the images smaller than $S_{\rm threshold}$ with random normal noise. Padding with random noise instead of 0 or fixed values mimics the observations more realistically; the random noise level is the typical background derived from Equation~\ref{eq:N_bkg} in Section~\ref{sec:mock data}. $S_{\rm threshold}$ is set to 32 pixels in our work, and other sizes, $S_{\rm threshold}=16$ and 64, are also explored. We find that a smaller $S_{\rm threshold}$ loses too much information, since most galaxies occupy more than 16 pixels, while a larger $S_{\rm threshold}$ is dominated by the random padding noise, so that the CNN cannot effectively concentrate on extracting information from the galaxy at the center of the image. We also make use of inception blocks to extract image features in parallel~\citep{Szegedy2014}. The inception block is widely used for predicting photometric redshifts from galaxy images, and networks constructed with it display excellent performance~\citep[see e.g.][]{Pasquet2019,Henghes2021}. The inception block is illustrated in the dashed blue box of Figure~\ref{fig:network architecture}. This block uses kernels of different sizes, i.e. $(3\times3)$ and $(5\times5)$, to extract image features in parallel, while $(1\times1)$ kernels are used to reduce the number of channels and increase the computational efficiency.
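The crop-or-pad preprocessing can be sketched as follows (the `noise_sigma` default is a placeholder for the band-dependent background noise of Equation~\ref{eq:N_bkg}):

```python
import numpy as np

def crop_or_pad(img, size=32, noise_sigma=10.0, seed=None):
    """Centrally crop images larger than `size`; pad smaller ones with Gaussian background."""
    rng = np.random.default_rng(seed)
    h, w = img.shape
    if h > size:                                  # central crop in y
        off = (h - size) // 2
        img, h = img[off:off + size, :], size
    if w > size:                                  # central crop in x
        off = (w - size) // 2
        img, w = img[:, off:off + size], size
    out = rng.normal(0.0, noise_sigma, size=(size, size))  # noise canvas
    top, left = (size - h) // 2, (size - w) // 2
    out[top:top + h, left:left + w] = img         # place the image at the canvas center
    return out
```

For the seven-band stamps, the same operation is applied channel by channel, with the noise level of each band.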
\renewcommand{\arraystretch}{1.5}
\begin{table}
\caption{Details of MLP architecture.}
\label{tab:MLP_param}
\begin{center}
\begin{tabular}{lcc}
\hline
Layers & Output Status$^a$ & Number of params.$^b$ \\
\hline
\hline
Input & 20 & 0 \\
\hline
FC$^c$ & 40 & 840 \\
\hline
ReLU & 40 & 0 \\
\hline
FC$^c$ & 40 & 1640 \\
\hline
BatchNormalization & 40 & 160$^d$ \\
\hline
ReLU & 40 & 0 \\
\hline
& ...$^e$ & \\
\hline
Output & 1 & 41 \\
\hline
\hline
\end{tabular}
\end{center}
\vspace{-2mm}
\textbf{Notes}.\\
$^a$ Number of data points or neurons. \\
$^b$ Total number of parameters: 9,881. \\
$^c$ FC: fully connected layer. \\
$^d$ Half of them are non-trainable parameters.\\
$^e$ 4 repeats of FC + BatchNormalization + ReLU. \\
\end{table}
\renewcommand{\arraystretch}{1.5}
\begin{table}
\caption{Details of CNN architecture.}
\label{tab:CNN_param}
\begin{center}
\begin{tabular}{lcc}
\hline
Layers & Output Status$^a$ & Number of params.$^b$ \\
\hline
\hline
Input & (32, 32, 7) & 0 \\
\hline
Conv2D & (16, 16, 32) & 2048 \\
\hline
BatchNormalization & (16, 16, 32) & 128$^c$ \\
\hline
ReLU & (16, 16, 32) & 0 \\
\hline
Inception & (8, 8, 72) & 9824 \\
\hline
Inception & (4, 4, 72) & 11744 \\
\hline
Inception & (2, 2, 72) & 11744 \\
\hline
GlobalAveragePooling & 72 & 0 \\
\hline
FC$^d$ & 40 & 2920 \\
\hline
ReLU & 40 & 0 \\
\hline
Output & 1 & 41 \\
\hline
\hline
\end{tabular}
\end{center}
\vspace{-2mm}
\textbf{Notes}.\\
$^a$ Format: (dimension, dimension, channel) or number of neurons.\\
$^b$ Total number of parameters: 38,449.\\
$^c$ Half of them are non-trainable parameters.\\
$^d$ FC: fully connected layer.\\
\end{table}
In our CNN, the inputs are $(32\times32\times7)$ images. Firstly, we apply a convolutional layer with 32 kernels of size $(3\times3)$ and stride 2 to extract features and downsample the images to size 16. Then we stack 3 inception blocks, obtaining feature maps of size 2. After these blocks, we use global average pooling to vectorize the features into 72 values~\citep{Lin2013}, and then apply a fully-connected layer of 40 units. Finally, the network outputs the photo-$z$ value. In addition, after each convolutional layer, a BatchNormalization layer and the LeakyReLU activation function are applied~\citep{Maas2013}. The architecture of our CNN is illustrated in the dashed black box of Figure~\ref{fig:network architecture}, and its details are listed in Table~\ref{tab:CNN_param}.
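As an illustration, this architecture can be written down with the Keras functional API (a sketch only: the activations and the per-branch filter counts inside the inception blocks are simplified choices that reproduce the layer output shapes of Table~\ref{tab:CNN_param}, not necessarily the exact parameter counts):

```python
from tensorflow import keras
from tensorflow.keras import layers

def inception_block(x, branch_filters=24):
    """Parallel (1x1), (3x3) and (5x5) stride-2 branches, with (1x1) reductions
    before the large kernels; outputs 3 * branch_filters = 72 channels."""
    b1 = layers.Conv2D(branch_filters, 1, strides=2, padding="same", activation="relu")(x)
    b3 = layers.Conv2D(branch_filters, 1, padding="same", activation="relu")(x)
    b3 = layers.Conv2D(branch_filters, 3, strides=2, padding="same", activation="relu")(b3)
    b5 = layers.Conv2D(branch_filters, 1, padding="same", activation="relu")(x)
    b5 = layers.Conv2D(branch_filters, 5, strides=2, padding="same", activation="relu")(b5)
    return layers.Concatenate()([b1, b3, b5])

inp = keras.Input(shape=(32, 32, 7))
x = layers.Conv2D(32, 3, strides=2, padding="same")(inp)   # -> (16, 16, 32)
x = layers.BatchNormalization()(x)
x = layers.LeakyReLU()(x)
for _ in range(3):                   # -> (8, 8, 72) -> (4, 4, 72) -> (2, 2, 72)
    x = inception_block(x)
x = layers.GlobalAveragePooling2D()(x)                     # -> 72-vector
x = layers.Dense(40, activation="relu")(x)
out = layers.Dense(1)(x)                                   # photo-z
cnn = keras.Model(inp, out)
```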
Since the CNN extracts photo-$z$ from the galaxy image mainly using morphological information, it is necessary to explore the improvement of the photo-$z$ accuracy using both the galaxy flux and image data. Hence, we construct a hybrid network that combines the MLP and CNN to include all data, i.e. galaxy fluxes, errors, colors, and images. In the hybrid network, the CNN and MLP parts are the same as the ones described above. Their fully connected layers are concatenated, which means that the features extracted by the CNN and MLP from the galaxy images and fluxes are combined together. The concatenated vector has size 80 (i.e. 40 from the CNN and 40 from the MLP). Then we structure 6 fully connected layers with 80 units in each layer, with BatchNormalization and the ReLU activation function applied after each layer. The network then outputs the photo-$z$ value. The illustration of the hybrid network is shown in Figure~\ref{fig:network architecture}.
\subsection{Training}\label{sec:training}
The photo-$z$ data of the CSST will be mainly used in the analysis of weak gravitational lensing surveys, which requires a high-quality sample with accurate photo-$z$ information. Therefore, following previous works \citep{Cao2018,Zhou2021}, we select approximately 40,000 high-quality sources, with SNR in the $g$ or $i$ band larger than 10, from the original data set (containing $\sim$100,000 galaxies). These 40,000 sources have both image and flux data, and are assumed to have accurate spec-$z$ measurements, although currently they only have high-quality photo-$z$ measurements in the COSMOS catalog. In the future real CSST survey, we will use the real spec-$z$ sample obtained from spectroscopic surveys as the training sample. We should note that this selection criterion is conservative, and large numbers of galaxies with low SNRs, which could be used in the weak lensing survey, are removed. However, since systematic errors are the main limitation on the accuracy of next-generation weak lensing measurements, and the photo-$z$ uncertainty is one of their main components, we try to suppress the systematics with high priority, even at the cost of losing a fraction of galaxies. Since the quality of the galaxy shape measurement should also be considered as an important criterion in weak lensing surveys, other potentially better selection criteria will be explored in future works. We then divide the above sample into training and testing data, randomly saving 10,000 sources for testing and the remaining 30,000 for training, i.e. the ratio of training to testing data is roughly $3:1$. In order to investigate the effect of the number of training data on the photo-$z$ estimation, we also try ratios of roughly $1:1$ and $1:3$ for the training and testing samples.
Note that we have assumed that all the samples we use to train the networks have measured spec-$z$s provided by future spectroscopic surveys, as discussed in the introduction. Galaxies in these samples can fully represent the features of the galaxies in the CSST photometric survey, such as the redshift and magnitude distributions, galaxy types, and so on. In real surveys, this assumption may not hold, and the selection effects of spectroscopic samples will influence the photo-$z$ estimates. Since the result derived from the neural networks is highly dependent on the training sample, if a class of galaxies is not covered by the spec-$z$ training set, the networks probably would not provide proper photo-$z$ estimates for these galaxies. However, we may fix this problem by analyzing the photometric and spectroscopic samples and identifying the galaxies missing from the spec-$z$ training set by some means, e.g. adopting the self-organizing map suggested in~\citet{Masters2015}. We can then try to cover these galaxies on purpose with the help of available spectroscopic surveys.
In our MLP, we use the mean absolute error (MAE) as the loss function, which calculates the average absolute difference between predicted and true redshifts. The Adam optimizer is adopted to optimize the learnable weights~\citep{Kingma2014}. Adam is an efficient stochastic optimization method requiring only first-order gradients, and it automatically adjusts the learning rate of every trainable weight during training. In addition, we define an accuracy metric as the fraction of predictions satisfying $|z_{\rm true} - z_{\rm pred}| < 0.15(1 + z_{\rm true})$, which the network monitors during training. We set the initial learning rate to $10^{-4}$, and use the ModelCheckpoint callback to save the model with the highest validation accuracy. The maximum number of epochs is set to 150 in the training process. In order to reduce the effect of statistical noise, the MLP training data are augmented by random realizations based on the flux errors, assuming a Gaussian distribution. This method has been found to be effective for eliminating the effect of statistical noise in the training process~\citep{Zhou2021}. Here we use 50 realizations for every training source; we find that more realizations do not improve the results significantly. In Figure~\ref{fig:acc} we show the accuracy versus epoch for the training and validation samples as blue solid and dashed curves, respectively. We can see that the validation accuracy can reach $\sim$0.98. The validation samples are mainly used to tune the hyper-parameters of a network; in the tuning phase, we randomly select 10\% of the training sample as the validation sample.
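The Gaussian flux-realization augmentation can be sketched as (the function name and shapes are illustrative):

```python
import numpy as np

def augment_flux_data(flux, err, n_real=50, seed=None):
    """Draw n_real Gaussian realizations of a source's band fluxes, N(flux, err) per band."""
    rng = np.random.default_rng(seed)
    return rng.normal(flux, err, size=(n_real, len(flux)))
```

Each realization is then rescaled and fed to the network as an independent training input, so the network sees the measured fluxes together with their noise distribution.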
In the CNN, we also use the MAE loss and the Adam optimizer with the initial learning rate set to $10^{-4}$, and the learning rate is reduced by a factor of 10 through the ReduceLROnPlateau callback when the validation accuracy does not improve within 5 epochs. To make the training efficient, we use the EarlyStopping callback to stop training if no improvement occurs within 10 epochs, and save the model with the highest validation accuracy. To augment the training data, the galaxy images for training are rotated and flipped, resulting in $8\times$ the original data size. In Figure~\ref{fig:acc}, the green lines show the accuracy curves for the CNN training (solid) and validation (dashed) samples. We can see that the training stops at the 82nd epoch, meaning that the previous 10 epochs showed no improvement in the validation accuracy, and the model is saved at the 72nd epoch.
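The geometric augmentation (the four right-angle rotations of each image plus their mirror images, giving the $8\times$ data size) can be implemented as, e.g.:

```python
import numpy as np

def eightfold(img):
    """Return the 4 right-angle rotations of an image plus their left-right flips."""
    rotations = [np.rot90(img, k) for k in range(4)]
    return rotations + [np.fliplr(r) for r in rotations]
```

The same calls work for multi-channel $(H, W, C)$ stamps, since `np.rot90` and `np.fliplr` act on the first two axes by default.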
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{Acc_plot.pdf}
\caption{Accuracy metrics of the different networks in training (solid) and validation (dashed). Blue, green, purple, and red lines represent the MLP, CNN, hybrid, and hybrid transfer networks, respectively. The validation accuracy reaches approximately 0.98 for the MLP. For the CNN, the training stops at the 82nd epoch, since the validation accuracy does not improve over the previous 10 epochs, and the maximum accuracy is higher than that of the MLP. The hybrid and hybrid transfer networks reach their highest accuracy after only 47 and 30 training epochs, respectively. The vertical dotted lines show the epochs at which training stops.}
\label{fig:acc}
\end{figure}
For the hybrid network, the settings of the loss function, optimizer and callbacks are the same as for the CNN described above. The hybrid network inputs include both the flux and image data of a galaxy, i.e. fluxes, errors, and colors for the MLP part and the corresponding galaxy image for the CNN part. The inputs of the MLP part are also augmented by 50 realizations based on the flux errors. For the CNN part, we need to input the corresponding galaxy image for each random flux realization. To make the training process efficient and to extract more information, we input each galaxy image randomly rotated or flipped. This means that, for a galaxy, the inputs of the hybrid network contain 50 realizations of the flux data and the corresponding 50 randomly rotated or flipped galaxy images.
To investigate whether we can further improve the network efficiency and accuracy, we also propose and explore a hybrid transfer network, inspired by techniques from transfer learning. Transfer learning is a technique focusing on using knowledge gained from one problem to solve a different but related problem~\citep{West2007}. Here we borrow this idea and combine the features learned by the MLP and CNN. As we show in the following discussion, this network has better training efficiency and obtains higher photo-$z$ accuracy. Since our MLP and CNN have already been well trained and provide good photo-$z$ accuracies, we can directly freeze the weights obtained by the MLP and CNN and transfer them to the MLP and CNN parts of the hybrid network, i.e. constructing a hybrid transfer network. In this way, the MLP and CNN parts are restricted to learning features that are useful for predicting photo-$z$, and the concatenated features in the joint fully connected layers can improve the estimation. Note that the architecture of the hybrid transfer network is identical to that of the hybrid network, except for the different training strategy inspired by transfer learning. Besides, in the MLP and CNN parts, we only freeze the first five fully connected layers of the MLP and the layers before the global average pooling layer of the CNN; the last fully connected layer in the MLP part and the one in the CNN part are still involved in the training process, to provide more flexibility for extracting photo-$z$ information.
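A minimal Keras sketch of this transfer strategy, using small stand-ins for the trained MLP and CNN branches (the layer sizes here are illustrative, not the full architectures of the paper):

```python
from tensorflow import keras
from tensorflow.keras import layers

# Small stand-ins for the trained MLP and CNN feature extractors.
mlp_in = keras.Input(shape=(20,))
h = layers.Dense(40, activation="relu")(mlp_in)
mlp_feat = layers.Dense(40, activation="relu")(h)      # last FC layer of the MLP part
mlp = keras.Model(mlp_in, mlp_feat)

cnn_in = keras.Input(shape=(32, 32, 7))
g = layers.Conv2D(8, 3, strides=4)(cnn_in)
g = layers.GlobalAveragePooling2D()(g)
cnn_feat = layers.Dense(40, activation="relu")(g)      # last FC layer of the CNN part
cnn = keras.Model(cnn_in, cnn_feat)

# Transfer step: freeze everything except each branch's last fully connected
# layer, which stays trainable together with the joint head.
for layer in mlp.layers[:-1] + cnn.layers[:-1]:
    layer.trainable = False

z = layers.Concatenate()([mlp.output, cnn.output])     # 40 + 40 = 80 features
for _ in range(6):                                     # joint head: 6 FC layers of 80 units
    z = layers.Dense(80)(z)
    z = layers.BatchNormalization()(z)
    z = layers.ReLU()(z)
out = layers.Dense(1)(z)                               # photo-z
hybrid = keras.Model([mlp.input, cnn.input], out)
```

In practice the frozen weights would be loaded from the separately trained MLP and CNN before building the joint head.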
In Figure~\ref{fig:acc}, the purple and red lines show the accuracy curves of the hybrid and hybrid transfer networks, respectively. We notice that validation curves (purple and red dashed curves) of the two networks can reach high accuracy in early epochs, and the training stops at the 47th and 30th epochs for the hybrid and hybrid transfer networks, respectively. This indicates that the hybrid network can be optimized efficiently, and the hybrid transfer network even performs better.
\section{Result}\label{sec:result}
\begin{figure*}
\centerline{
\includegraphics[scale=0.26]{result_mlp.pdf}
\includegraphics[scale=0.26]{result_cnn.pdf}}
\centerline{
\includegraphics[scale=0.26]{result_hybrid.pdf}
\includegraphics[scale=0.26]{result_hybrid_transfer.pdf}}
\caption{\label{fig:photoz_result} Results of the MLP, CNN, hybrid and hybrid transfer networks when the ratio of training to testing data is roughly $3:1$. The color bar at the right of each panel denotes the number density of sources per pixel of the testing data. The MLP using flux data provides a lower photo-$z$ scatter, while the CNN using image data has a smaller outlier fraction. The hybrid network improves both the accuracy and the outlier fraction, and the hybrid transfer network further suppresses the outliers and gives the best photo-$z$ result.}
\end{figure*}
To assess the accuracy of the photo-$z$ extracted by the networks, we define catastrophic outliers as sources with $|\Delta z|/(1+z_{\rm true}) > 0.15$, where $\Delta z = z_{\rm pred} - z_{\rm true}$, and adopt the normalized median absolute deviation (NMAD)~\citep{Brammer2008}, which is calculated as:
\begin{equation}
\sigma_{\rm NMAD} = 1.48 \times \rm {median}\left(\left| \frac{\Delta z - \rm{median}(\Delta z)}{1 + z_{\rm true}}\right|\right).
\end{equation}
This scatter or deviation naturally suppresses the weight of the outlier redshifts, and provides a proper estimate of the photo-$z$ accuracy. We also calculate the average of $|\Delta z|/(1+z_{\rm true})$ as the mean absolute error (MAE) in the analysis.
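These three statistics can be computed as, e.g.:

```python
import numpy as np

def photoz_metrics(z_pred, z_true):
    """Outlier fraction eta (|dz|/(1+z_true) > 0.15), sigma_NMAD, and the MAE of |dz|/(1+z_true)."""
    z_pred, z_true = np.asarray(z_pred, float), np.asarray(z_true, float)
    dz = z_pred - z_true
    norm = dz / (1.0 + z_true)
    eta = np.mean(np.abs(norm) > 0.15)                 # catastrophic outlier fraction
    sigma_nmad = 1.48 * np.median(np.abs((dz - np.median(dz)) / (1.0 + z_true)))
    mae = np.mean(np.abs(norm))
    return eta, sigma_nmad, mae
```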
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{Binned_eta.pdf}
\caption{The percentage of outliers as a function of photometric redshift (top panel) and spectroscopic redshift (bottom panel). The blue, orange, green, and red curves are for the MLP, CNN, hybrid, and hybrid transfer networks, respectively. The average outlier fractions and errors derived from ten mock flux testing datasets using the MLP are shown as blue dashed curves and data points with error bars.}
\label{fig:outliers}
\end{figure}
In the upper-left panel of Figure~\ref{fig:photoz_result}, we show the result from the MLP using the flux data. We find an outlier fraction $\eta = 1.43\%$ and $\sigma_{\rm NMAD} = 0.023$ for the testing data. The outlier fraction is improved by a factor of 2 compared to that obtained with the SED template-fitting method in \cite{Cao2018}. On the other hand, the result is worse than that given in \cite{Zhou2021}. This is because the flux data we use in this work are obtained by directly measuring the galaxy images with aperture photometry, instead of being estimated from SED templates as in \cite{Zhou2021}, and hence are more realistic and noisier. We can see that the MLP tends to predict larger redshifts at $z\lesssim0.5$ and smaller ones at $z\gtrsim1.5$. We also note that the predictions have few outliers in the redshift range $0.5<z<1.5$, which ensures high photo-$z$ accuracy for most of the galaxies observed by the CSST (see Figure~\ref{fig:redshift distri}).
The CNN result is shown in the upper-right panel of Figure~\ref{fig:photoz_result}, with $\eta = 1.21\%$ and $\sigma_{\rm NMAD} = 0.025$. This result has a smaller outlier fraction but a slightly larger $\sigma_{\rm NMAD}$ compared to the MLP using the flux data. Since our galaxy flux data are measured from the galaxy images by aperture photometry, the images in principle also abstractly encode the flux and color information, in addition to the morphology. Hence, if the CNN can successfully extract all or part of this information, it is capable of obtaining a comparable or even better photo-$z$ result than that using the flux data only. Based on the current result, it seems that our CNN can indeed extract some flux information from the galaxy images in addition to the morphology. In Figure~\ref{fig:photoz_result}, we can see that the outlier distribution of the CNN is similar to that of the MLP, but the outlier fraction is further suppressed over the whole redshift range.
In the lower-left and lower-right panels of Figure~\ref{fig:photoz_result}, we show the results of the hybrid and hybrid transfer networks, respectively. We find that the hybrid network applying the transfer learning technique can indeed mildly improve the photo-$z$ estimation, with $\eta= 0.90\%$ and $\sigma_{\rm NMAD} = 0.020$, compared to the hybrid network with $\eta= 1.03\%$ and $\sigma_{\rm NMAD} = 0.020$. This indicates that both the hybrid and hybrid transfer networks can provide accurate photo-$z$ estimates by including the galaxy flux and image data, and that the hybrid transfer network performs even better, with higher efficiency and accuracy. This implies that the features from the trained CNN and MLP are probably better than the features learned directly by the hybrid network when predicting photo-$z$. In Figure~\ref{fig:photoz_result}, we notice that the hybrid transfer network can further suppress the outliers by 37\% and 26\% compared to the MLP and CNN cases, respectively, especially at $z\lesssim0.5$ and $z\gtrsim1.5$. In addition, the results of the hybrid and hybrid transfer networks are clearly better than the MLP-only and CNN-only cases, which means that the CNN can indeed extract useful morphological information from the galaxy images to improve the photo-$z$ estimation.
The results shown above are obtained with a ratio of $r=3:1$ for the training and testing data. We also test whether a smaller training sample severely affects our photo-$z$ estimation, since the network training is restricted to spectroscopic data providing accurate redshifts, and we may not have a large qualified training sample in real observations. Here we try two more training-testing ratios, $r=1:1$ and $1:3$, i.e. 20,000 and 10,000 training galaxies are randomly selected, respectively, with the rest reserved for testing. After the training and testing processes, the networks give $\eta=1.67\%$, 1.51\%, 1.25\%, 1.06\% and $\sigma_{\rm NMAD} = 0.026$, 0.027, 0.022, 0.021 for the MLP, CNN, hybrid and hybrid transfer networks, respectively, in the $r=1:1$ (20,000 training galaxies) case. The results become $\eta=2.39\%$, 1.98\%, 1.29\%, 1.29\% and $\sigma_{\rm NMAD}=0.028$, 0.030, 0.023, 0.023 in the $r=1:3$ (10,000 training galaxies) case. We note that decreasing the training data does lead to worse predictions, but not dramatically. The hybrid and hybrid transfer networks are the least sensitive to this ratio among the four networks, and still provide $\eta\sim1\%$ and $\sigma_{\rm NMAD}\sim0.02$, similar to the $r=3:1$ (30,000 training galaxies) case. On the other hand, the results become clearly worse for the MLP and CNN. This indicates that, compared to the hybrid and hybrid transfer networks, relatively more data are needed to sufficiently train the MLP and CNN. We should also note that it is the absolute number of training galaxies, rather than the ratio of training to testing data, that determines the performance of a trained model. As long as we have a large enough spec-$z$ sample that represents the features of all galaxies in a photometric survey, the neural networks can be well trained and applied to derive photo-$z$, regardless of the ratio of spectroscopic to photometric data.
In addition, the testing sample should also be large enough; otherwise the results can be affected by statistical effects such as cosmic variance.
In Figure~\ref{fig:outliers}, the percentages of outliers as a function of photometric and spectroscopic redshift are illustrated in the top and bottom panels, respectively. The redshift bin size is set to $\Delta z=0.5$. The average outlier fractions and errors derived from ten mock flux testing datasets using the MLP are also shown, which indicate the statistical effects of the number of sources in the training sample at different redshifts. Although it is currently hard for us to generate enough mock images to derive errors in the CNN case, the errors should be similar to the MLP case, since the two give similar photo-$z$ estimation results. This is also representative of the hybrid and hybrid transfer networks. In the bottom panel, we can see that, generally speaking, the outlier fraction increases with redshift. In $z=0.5-1.5$, all four networks give the lowest outlier fractions, which is probably due to the large training sample in this range (see Figure~\ref{fig:redshift distri}). In $z=3.5-4$, there are few galaxies and the results are dominated by statistical errors, which leads to large scatter among the four networks. The results can be heavily disturbed when shown as a function of photo-$z$, as indicated in the top panel. In this case, we still have few outliers in $z=0.5-1$, but the fraction does not rise visibly with photo-$z$; it stays relatively flat.
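The binned outlier fractions described above can be reproduced schematically as follows. This sketch is our own illustration, using the same bin width $\Delta z=0.5$ and an assumed outlier cut of 0.15; the binning variable is either photo-$z$ or spec-$z$, matching the top and bottom panels.

```python
import numpy as np

def binned_outlier_fraction(z_ref, z_phot, z_spec, z_max=4.0, dz_bin=0.5, cut=0.15):
    """Outlier percentage in redshift bins of width dz_bin.

    z_ref is the binning variable: z_phot for the top panel,
    z_spec for the bottom panel. Empty bins are returned as NaN."""
    dz = np.abs(z_phot - z_spec) / (1.0 + z_spec)
    edges = np.arange(0.0, z_max + dz_bin, dz_bin)
    fractions = np.full(len(edges) - 1, np.nan)
    idx = np.digitize(z_ref, edges) - 1
    for b in range(len(edges) - 1):
        in_bin = idx == b
        if in_bin.any():
            fractions[b] = 100.0 * np.mean(dz[in_bin] > cut)
    return edges, fractions

# Toy usage with mock redshifts (not the CSST samples)
rng = np.random.default_rng(1)
z_spec = rng.uniform(0.0, 4.0, 5000)
z_phot = z_spec + 0.03 * (1.0 + z_spec) * rng.standard_normal(5000)
edges, frac = binned_outlier_fraction(z_spec, z_phot, z_spec)
```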
\section{Summary and Conclusion}\label{sec:conclusion}
We explore the photo-$z$ accuracy of the CSST photometric observations using neural networks. Four neural networks, i.e. the MLP, CNN, hybrid and hybrid transfer networks, are adopted and tested to extract the photo-$z$ information from the CSST mock flux and image data.
We first simulate the CSST photometric observations to generate mock galaxy images in the seven bands, which are created from the HST-ACS observations and the COSMOS catalog, taking into account the CSST instrumental effects. Then we measure fluxes and errors in each band from the galaxy images using the aperture photometry method. Since the photo-$z$ data would mainly be used in the analysis of the CSST weak gravitational lensing surveys, about 40,000 high-quality sources with ${\rm SNR} > 10$ in the $g$ or $i$ band are selected as the training and testing samples for our neural networks.
The MLP and CNN are used to predict photo-$z$ from fluxes and images, respectively. Since the flux measured by aperture photometry can be negative for faint galaxies, especially in the bands with low transmission, we rescale the measured galaxy fluxes, errors, and colors, which also speeds up the training process of the networks. These flux data are fed into the constructed MLP with six fully connected layers to derive photo-$z$. The mock galaxy images in the seven CSST photometric bands are cropped or padded to the same size as the inputs for the CNN. Inception blocks are employed to extract image features in parallel with different kernel sizes. To investigate the improvement of photo-$z$ accuracy when using both flux and image data, we develop a hybrid network by properly combining the MLP and CNN. We also propose a hybrid transfer network, adopting the transfer learning technique, which can have higher efficiency and accuracy.
In the MLP training process, to effectively suppress statistical noise in the flux data, for each galaxy we randomly generate 50 realizations from Gaussian distributions based on the flux errors as the training data. We also augment the galaxy images by rotating and flipping, producing a factor of 8 more training data in the CNN training process. In the hybrid network, given a flux realization, we randomly input a correspondingly rotated or flipped galaxy image into the CNN part, which efficiently provides accurate photo-$z$ results. In the hybrid transfer network, we freeze the weights of the MLP and CNN parts at the values obtained by training them separately on the flux and image data. We find that the MLP, CNN, hybrid and hybrid transfer networks can all derive accurate photo-$z$ results, with similar deviations of 0.020-0.025 and outlier fractions around $1\%$. The CNN outlier fraction is lower than that of the MLP, since our CNN can effectively extract both flux and morphology information from the galaxy images. The hybrid (hybrid transfer) network offers the best results, with 28\% and 15\% (37\% and 26\%) improvements compared to the MLP and CNN, respectively. The effect of the training-testing ratio is also explored, and we find that a smaller training sample does not significantly degrade the networks.
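The two augmentation schemes described above (50 Gaussian flux realizations per galaxy, and 8 rotated/flipped image variants) can be sketched as follows. The function names and array shapes are our own illustration, not the actual training pipeline.

```python
import numpy as np

def augment_fluxes(flux, flux_err, n_real=50, seed=42):
    """Draw n_real Gaussian realizations N(flux, flux_err) per galaxy.

    flux, flux_err: arrays of shape (n_gal, n_band);
    returns shape (n_gal * n_real, n_band)."""
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal((flux.shape[0], n_real, flux.shape[1]))
    noisy = flux[:, None, :] + flux_err[:, None, :] * noise
    return noisy.reshape(-1, flux.shape[1])

def dihedral_variants(img):
    """The 8 rotated/flipped variants of a 2D image (factor-8 augmentation)."""
    variants = []
    for k in range(4):
        rot = np.rot90(img, k)
        variants.append(rot)
        variants.append(np.fliplr(rot))
    return variants

# Toy usage: 100 galaxies, 7 CSST bands, 10% flux errors
flux = np.abs(np.random.default_rng(0).standard_normal((100, 7)))
err = 0.1 * np.ones_like(flux)
train = augment_fluxes(flux, err)
variants = dihedral_variants(np.arange(16.0).reshape(4, 4))
```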
Note that we only predict photo-$z$ values without uncertainties or probability distributions. In future work, we will try to use probabilistic deep learning methods, such as Bayesian neural networks (BNN)~\citep{MacKay1995, Neal1996, Blundell2015, Gal2015, Wilson2020} or other approaches to output both photo-$z$ values and uncertainties, and even their probability distributions.
\section*{Acknowledgements}
X.C.Z. and Y.G. acknowledge the support of MOST-2018YFE0120800, 2020SKA0110402, NSFC-11822305, NSFC-11773031, NSFC-11633004, and the CAS Interdisciplinary Innovation Team. X.L.C. acknowledges the support of the National Natural Science Foundation of China through grants No. 11473044 and 11973047, and the Chinese Academy of Sciences grants QYZDJ-SSW-SLH017, XDB 23040100, and XDA15020200. L.P.F. acknowledges the support from NSFC grant 11933002, and the Dawn Program 19SG41 \& the Innovation Program 2019-01-07-00-02-E00032 of SMEC. This work is also supported by science research grants from the China Manned Space Project with No. CMS-CSST-2021-B01 and CMS-CSST-2021-A01.
\section*{Data Availability}
The data that support the findings of this study are available from the corresponding author, upon reasonable request.
\bibliographystyle{mnras}
\section{Introduction}
Throughout this paper $(R, \mathfrak m)$ is a Noetherian local ring of positive dimension $d$ with infinite residue field, and $ {\bf a} \subset R$ is an ideal.
We say that ${\bf a}$ is defined set-theoretically by $s$ elements if there exist $f_1, \ldots, f_s \in \mathfrak m$ such that $\sqrt{{\bf a}} = \sqrt{(f_1, \ldots, f_s)}$. If $s = h:= \operatorname{ht}({\bf a})$ and $\sqrt{{\bf a}} = \sqrt{(f_1, \ldots, f_h)}$, then ${\bf a}$ is called a set-theoretic complete intersection.
One of the remarkable results on set-theoretic complete intersections was proved by R. C. Cowsik and M. V. Nori. They showed that if $\Bbbk$ is a field of characteristic $p >0$, then the defining ideal of any affine curve in $\mathbb A_{\Bbbk}^n$ is a set-theoretic complete intersection (\cite[Theorem~1]{cowsik-nori}).
J. Herzog proved that all
monomial curves in the affine space $\mathbb A^3_{\Bbbk}$ over any field $\Bbbk$ are set-theoretic complete intersections.
This was also proved by H. Bresinsky \cite{bresinsky-1979}.
It has been a long standing open problem to characterise ideals that are set-theoretic complete intersections. We shall review a few of these results.
Symbolic powers of ideals have played an important role in connection with the problem of set-theoretic complete intersections. The $n$-th symbolic power of an ideal ${\bf a}$ is defined as
${\displaystyle {\bf a}^{(n)}=\bigcap_{\mathfrak p\in \operatorname{Ass}({\bf a})}({\bf a}^nR_{\mathfrak p}\cap R)}$.
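For instance (a standard example, included here only for illustration and not taken from the works cited below), ordinary and symbolic powers already differ for the squarefree monomial ideal of the three coordinate axes in $\mathbb A^3$:

```latex
\[
I=(xy,\,xz,\,yz)=(x,y)\cap(x,z)\cap(y,z)\subset \Bbbk[x,y,z],
\qquad
I^{(2)}=(x,y)^2\cap(x,z)^2\cap(y,z)^2 .
\]
```

Here $xyz=(xy)z=(xz)y=(yz)x$ lies in each of the three squares, so $xyz\in I^{(2)}$, while $I^2$ is generated in degree $4$; hence $I^2\subsetneq I^{(2)}$.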
It has also been of importance because of its connection with algebraic geometry which goes back to the work of O. Zariski \cite{zariski} and M. Nagata \cite{nagata}. If $\Bbbk$ is an algebraically closed field, then the $n$-th symbolic power of a given prime ideal consists of functions that vanish up to order $n$ on the algebraic variety defined by the prime ideal \cite{eisenbud}.
Let ${\displaystyle \mathcal R_s({\bf a}):= \bigoplus_{n \geq 0} {\bf a}^{(n)} t^n}$ denote the symbolic Rees algebra of an ideal ${\bf a}$. The study of the symbolic Rees algebras of ideals in a Noetherian local domain was of interest due to a result of D. Rees which gave a counter-example to a problem of Zariski using symbolic Rees algebras.
Rees showed that if $\mathfrak p$ is a prime ideal with $\operatorname{ht}(\mathfrak p)=1$ in a two-dimensional Noetherian normal local domain \cite{rees}, then the symbolic Rees algebra is Noetherian if and only if some symbolic power of $\mathfrak p$ is principal.
Rees' work was generalized by A. Ooishi \cite{ooishi-1985}. The above results were obtained under the condition $\operatorname{dim} (R/ {\bf a})=1$. In 1990, S. Goto et al. studied the Noetherian property for any unmixed ideal in a Noetherian local ring \cite{goto}.
The symbolic Rees algebras of prime ideals have also been studied in \cite{huneke-1982}, \cite{roberts-1985}, \cite{katz-ratliff-1986}, \cite{schenzel}, \cite{schenzel-1988}, \cite{sannai-2019}, \cite{grifo-2021}.
In 1981, R. C. Cowsik proved the following remarkable result, which rejuvenated interest in symbolic powers of ideals:
\begin{theorem}
\cite{cowsik-1981} Let
$(R, \mathfrak m)$ be a Noetherian local ring of dimension $d$ and ${\bf a} \not =\mathfrak m $ a radical ideal. If the symbolic Rees algebra
$\mathcal R_{s}({\bf a}) := \oplus_{n \geq 0} {\bf a}^{(n)}$ is Noetherian, then ${\bf a}$
is the radical of an ideal generated by $d-1$ elements. In particular, if ${\bf a}$ has height $d-1$, then it is a set-theoretic complete intersection.
\end{theorem}
The converse of Cowsik's result need not be true \cite[Corollary~1.2]{goto-nis-wat}. One of the main results of this paper generalizes Cowsik's result on set-theoretic complete intersections.
Cowsik's work motivated several researchers to investigate the Noetherian property of the symbolic Rees algebra. In 1987, C. Huneke gave necessary and sufficient conditions for
$\mathcal R_{s}(\mathfrak p)$ to be Noetherian when $R$ is a $3$-dimensional regular local ring and $\mathfrak p$ is a height two prime ideal \cite{huneke-1987}. Huneke's result was
generalised for $\operatorname{dim} R \geq 3$ by M. Morales in \cite{morales-1991} and later by S. Goto \cite{goto}. We state the most general form of the result.
Recall that a local ring $R$ is called unmixed if all the associated primes of the completion of $R$ have the same dimension.
\begin{theorem}
\cite[Theorem~1.1]{goto}
Let $(R, \mathfrak m)$ be a local ring of dimension $d$ and $\mathfrak p$ a prime ideal of height $d-1$.
\been
\item
Suppose $\mathcal R_s(\mathfrak p)$ is Noetherian. Then there exists a system of parameters $x, f_1, \ldots, f_{d-1}$ with
$f_i \in \mathfrak p^{(k_i)}$ for some $k_i>0$, $i=1, \ldots, d-1$, satisfying
\beq
\label{goto condition}
e_{(x, f_1, \ldots, f_{d-1}) }(R)
= e(R_{\mathfrak p}) {\lambda} \left( \f{R}{ (\mathfrak p,x) }\right) \prod_{i=1}^{d-1} k_i .
\eeq
\item
If $R$ is an unmixed local ring containing a system of parameters $x, f_1, \ldots, f_{d-1}$ satisfying (\ref{goto condition}), then $\mathcal R_s(\mathfrak p)$ is Noetherian.
\eeen
\end{theorem}
These results provided a useful way to find Noetherian symbolic Rees algebras and to find ideals that are set-theoretic complete intersections.
In this paper we extend the work of Huneke \cite{huneke-1987}, Morales \cite{morales-1991} and Goto and Nishida \cite{goto} to an arbitrary ideal ${\bf a}$ with $1\leq \operatorname{ht} {\bf a} \leq d-1$ (Theorem~\ref{main theorem}, Theorem~\ref{main theorem converse}). We give necessary and sufficient conditions for the Noetherian property of $\mathcal R_s({\bf a})$. For a radical
ideal ${\bf a}$, we prove in Theorem \ref{analytic} that if $\mathcal R_s({\bf a})$ is Noetherian and $R/{\bf a}^{(n)}$ is Cohen-Macaulay for large $n$, then ${\bf a}$ is a set-theoretic complete intersection.
We demonstrate our results with a few examples. In Section~3, using our main theorem, we prove that the symbolic Rees algebra of the edge ideal of the complete graph is Noetherian. In Section~4, we give a new proof of the Noetherian property of the symbolic Rees algebra of the Fermat ideal. In Section~5, we prove that the symbolic Rees algebra of the Jacobian ideal of the defining polynomial of certain hyperplane arrangements is Noetherian.
\section{The Main Result}
In this section we prove the main result of this paper.
We generalise a result of Cowsik, Huneke, Morales and Goto to an arbitrary ideal ${\bf a}$ of height $h$, where $1\leq h\leq d-1$, and give necessary and sufficient conditions for the Noetherian property of $\mathcal R_s({\bf a})$. In order to prove the main theorem, we first prove a few preliminary results. For an ideal ${\bf a}$, the associated graded ring is defined as
${\displaystyle \mathcal G({\bf a}) := \bigoplus_{n \geq 0} {\bf a}^n / {\bf a}^{n+1}}$ and for any element $x \in {\bf a}^n\setminus {\bf a}^{n+1} $, let $x^{\star}$ denote the image of $x$ in
${\bf a}^{n}/{\bf a}^{n+1}.$
\begin{lemma}
\label{regular}
Let $(R,\mathfrak m)$ be a Noetherian local ring of dimension $d\geq 1$ and ${\bf a}$ be an ideal with $\operatorname{dim} R/{\bf a}=s\geq 1$.
Let $R/{\bf a}^{n}$ be Cohen-Macaulay for all $n\geq 1$.
\been
\item
\label{regular-1}
Then there exist $x_1,\ldots, x_{s}$ such that $x_1^{\star},\ldots ,x_{s}^{\star}$ is a $\mathcal G({\bf a})$-regular sequence
and
$x_1,\ldots ,x_{s}$ is an $R$-regular sequence and
\item
\label{regular-2}
$\operatorname{dim} R/{\bf a} =\operatorname{dim} R-\operatorname{ht} {\bf a} $.
\eeen
\end{lemma}
\begin{proof}
(\ref{regular-1}) We prove this by induction on $\operatorname{dim} (R/{\bf a})=s$. First assume that $\operatorname{dim} (R/{\bf a})=1$; we show that there exists $x_1 \in R$ such that $x_1^{\star}$ is a $\mathcal G({\bf a})$-regular element and $x_1$ is an $R$-regular element.
As $R/{\bf a}^n$ is Cohen-Macaulay for all $n\geq 1,$ $\operatorname{Ass} (R/{\bf a}^n)=
\{P_1,\ldots ,P_t\}$ where $P_1, P_2,\ldots, P_t$ are the minimal primes of ${\bf a}.$
By the prime avoidance lemma, choose $x\in \mathfrak m\setminus \displaystyle{\cup_{i=1}^t P_i}$. Then $x$ is $R/{\bf a}^{n}$-regular for all $n\geq 1$. To show that $x^{\star}$ is $\mathcal G({\bf a})$-regular, it is enough to show that $x$ is regular on ${\bf a}^n/ {\bf a}^{n+1}$ for all $n \geq 1$. From the exact sequence
\begin{equation*}
0\longrightarrow {\bf a}^{n}/{\bf a}^{n+1}\longrightarrow R/{\bf a}^{n+1}\longrightarrow R/{\bf a}^{n}\longrightarrow 0 \hspace{.2in} (n \geq 1),
\end{equation*}
$x$ is ${\bf a}^{n}/{\bf a}^{n+1}$-regular for all $n \geq 1$; since $x$ is also regular on $R/{\bf a}=\mathcal G({\bf a})_0$, it follows that $x^*$ is $\mathcal G({\bf a})$-regular.
We now show that $x$ is $R$-regular. There exists $n\geq 0$ such that $x\in {\bf a}^n\setminus{\bf a}^{n+1}$. Suppose $xy=0$ for some $y\neq 0$, and choose $m$ such that $y\in {\bf a}^m\setminus {\bf a}^{m+1}$, so that $y^*\neq 0$. Since $x^*$ is $\mathcal G({\bf a})$-regular, $x^*y^*\neq 0$ in ${\bf a}^{n+m}/{\bf a}^{n+m+1}$, so $xy\notin {\bf a}^{n+m+1}$; in particular $xy\neq 0$, a contradiction. Hence $x$ is $R$-regular.
Now assume that
$\operatorname{dim} R/{\bf a} >1$.
Using the above argument choose $x_1$ so that $x_1^{\star}$ is $\mathcal G({\bf a})$-regular and $x_1$ is $R$-regular. As $x_1$ is $R/{\bf a}^n$- regular for all $n\geq 1$, we have $(x_1)\cap {\bf a}^n=x_1 {\bf a}^n$ for all $n\geq 1$ which implies that $\mathcal G({\bf a}\overline R)\cong \mathcal G({\bf a})/x_1^* \mathcal G({\bf a})$, where $\overline{R}=R/(x_1)$. Since $x_1$ is $R/ {\bf a}^n$-regular for all $n \geq 1$, we have $ \overline {R}/ \overline{{\bf a}^n}$ is Cohen-Macaulay of dimension $s-1$ for all $n \geq 1$. By induction hypothesis, we can choose $x_2,\ldots, x_{s} \in R$ such that $\overline{x_2}^*,\ldots, \overline{x_{s}}^*$ is $\mathcal G({\bf a}\overline R)\cong \mathcal G({\bf a})/x_1^* \mathcal G({\bf a})$-regular sequence and $\overline{x_2},\ldots, \overline{x_{s}}$ is $\overline{R}$-regular sequence. Thus $x_1^*,\ldots ,x_{s}^*$ is a $\mathcal G({\bf a})$-regular sequence and
$x_1,\ldots ,x_{s}$ is an $R$-regular sequence. This proves (\ref{regular-1}).
(\ref{regular-2}) By part (\ref{regular-1}), $\underline {x} = x_1, \ldots, x_s$ is a regular sequence on both $R$ and $R/{\bf a}$, so $\operatorname{dim} ( R/ ({\bf a} + (\underline{x}))) =0.$ Hence
$\overline{{\bf a}}$ is an $\mathfrak m/(\underline{x})$-primary ideal in $\overline{R}=R/ (\underline{x})$. Since $\underline{x}$ is an $R$-regular sequence,
$$ \operatorname{ht}({\bf a})
\geq \operatorname{ht}(\overline{{\bf a}})
= \operatorname{dim} (R/ (\underline{x}) )
= \operatorname{dim}(R) -\operatorname{dim} R/{\bf a} \geq \operatorname{ht}({\bf a}),$$
where the last inequality $\operatorname{ht}({\bf a}) + \operatorname{dim} R/{\bf a} \leq \operatorname{dim} R$ holds in any Noetherian local ring.
Therefore $s=\operatorname{dim} R/{\bf a}=\operatorname{dim} R-\operatorname{ht} {\bf a}$.
\end{proof}
\begin{example} Lemma~\ref{regular} need not hold true if we drop the assumption that $R/{\bf a}^n$ is Cohen-Macaulay for all $n \geq 1$.
Let $R= k[[x,y,z]] /(xy,xz)= k[[\overline{x}, \overline{y}, \overline{z}]]$ and $I = (\overline{x})$. Then $R/I$ is Cohen-Macaulay, but $R/I^n$ is not Cohen-Macaulay for any $n \geq 2$. Since $\operatorname{dim}(R/I) =2$ and $\operatorname{depth}(R) =1,$ we cannot find a regular sequence of length two in $R$. \qed
\end{example}
In \cite[Theorem~3.3]{ghnv}, Goto et al. proved that if $R$ is an unmixed local ring and ${\bf a}$ an ideal such that $\ell({\bf a}^{(k)}) = \operatorname{ht}({\bf a}^{(k)})$ for some $k \geq 1$, then $\mathcal R_s({\bf a})$ is Noetherian, where $\ell({\bf a})$ denotes the analytic spread of ${\bf a}$. With the additional assumption that $R/{\bf a}^{(n)}$ is Cohen-Macaulay for all $n \gg 0$, they proved the converse \cite[Theorem 3.6]{ghnv}. In the next theorem we give an alternate proof of the converse.
\begin{theorem}\label{analytic}
Let $(R,\mathfrak m)$ be a local ring of dimension $d$ with infinite residue field $R/\mathfrak m$. Let ${\bf a}$ be an ideal of positive height $h$. If $R/{\bf a}^{(n)}$ is Cohen-Macaulay for all $n \gg 0$ and $\mathcal R_s({\bf a})$ is Noetherian, then $\ell({\bf a}^{(k)})=\operatorname{ht}({\bf a}^{(k)})$ for some $k\geq 1$ and ${\bf a}$ is a set-theoretic complete intersection.
\end{theorem}
\begin{proof}
Since $\mathcal R_s({\bf a})$ is Noetherian, by \cite[Theorem~3.2]{ghnv} there exists $m \geq 1$ such that $({\bf a}^{(m)})^r={\bf a}^{(mr)}$ for all $r \geq 1$. By our assumption, there exists an $n_0$ such that $R/{\bf a}^{(n)}$ is Cohen-Macaulay for all $n \geq n_0$. Choose a multiple $k$ of $m$ with $k \geq n_0$, so that both conditions are satisfied, and put $I = {\bf a}^{(k)}$. Then
$I^n={\bf a}^{(kn)}$ and $R/I^n$ is Cohen-Macaulay for all $n \geq 1$.
Hence by Lemma \ref{regular}(\ref{regular-2}), $\operatorname{dim} R/I=\operatorname{dim} R-\operatorname{ht} I$. Since $R/I^n$ is Cohen-Macaulay we have $\operatorname{depth} (R/I^n)=\operatorname{dim} R/I^n=d- \operatorname{ht}(I)$. By Burch's inequality \cite{burch}, we get
$$
\ell(I) \leq d - \inf_n \operatorname{depth} (R/I^n)=\operatorname{ht}(I).
$$
As $\operatorname{ht}(I) \leq \ell(I)$, equality holds which implies that $\ell({\bf a}^{(k)})=\operatorname{ht}({\bf a}^{(k)})$.
Since $R/\mathfrak m$ is infinite, there is a minimal reduction $J=(f_1,\ldots ,f_h)$ of ${\bf a}^{(k)}$ generated by $h$ elements. Hence $\sqrt{{\bf a}} =\sqrt{{\bf a}^{(k)}}=\sqrt{J}$, so ${\bf a}$ is a set-theoretic complete intersection.
\end{proof}
The rest of this section is dedicated to a generalization of the results in \cite{huneke-1987}, \cite{morales-1991} and \cite{goto}. We first prove some preliminary results.
\begin{theorem}
\label{main theorem}
Let $(R,\mathfrak m)$ be a local ring of dimension $d\geq 1$ with infinite residue field. Let ${\bf a}$ be an ideal of height $h$ with $1 \leq h \leq d-1$
such that $R/{\bf a}^{(m)}$ is Cohen-Macaulay for large $m$ and $\mathcal R_s({\bf a})$ is Noetherian. Then there exist
$x_1,\ldots, x_{d-h},$ a system of parameters for $R/{\bf a},$ and $f_i\in {{\bf a}}^{(k_i)}$ for $i=1,2,\ldots, h$ such that
\begin{enumerate}
\item
\label{main theorem one}
$x_1,\ldots ,x_{d-h}$ is a regular sequence in $R$.
\item
\label{main theorem two}
$x_1,\ldots ,x_{d-h},f_1,\ldots ,f_h$ is a system of parameters for $R$.
\item
\label{main theorem three}
Let $\operatorname{MinAss} (R/{\bf a}):=\{P_1,\ldots ,P_s\}$. Then
\beqn
e_{(x_1,\ldots ,x_{d-h},f_1,\ldots ,f_h)}(R)=\left ( \displaystyle{\prod_{j=1}^{h}k_j}\right )\displaystyle{\sum_{i=1}^se_{ {\bf a} R_{P_i}}(R_{P_i})e_{(x_1,\ldots ,x_{d-h})}(R/P_i)}.
\eeqn
\end{enumerate}
\end{theorem}
\begin{proof}
(\ref{main theorem one}) As $\mathcal R_s({\bf a})$ is Noetherian, by Theorem \ref{analytic} there exists $k\geq 1$ such that $\ell( {\bf a}^{(k)})=h$; as in the proof of Theorem \ref{analytic}, we may choose $k$ so that, in addition, $R/{\bf a}^{(kn)}$ is Cohen-Macaulay for all $n \geq 1$. Put $I= {\bf a}^{(k)}$. Then by the proof of Lemma~\ref{regular}, we can choose an $R$-regular sequence $x_1, \ldots ,x_{d-h}\in \mathfrak m$.
(\ref{main theorem two}) As the residue field is infinite and $\ell(I) = h$, we can choose a minimal reduction $J=(f_1,\ldots ,f_{h})R$ of $I$. Since $\operatorname{dim} R/I=d-h$, by Lemma~\ref{regular} we can choose $x_1,\ldots ,x_{d-h}\in \mathfrak m$ whose images in $R/I$ form a system of parameters for $R/I$. As $J$ is a minimal reduction of $I$, we have $\operatorname{MinAss}(R/I)=\operatorname{MinAss} (R/J)$, which implies that $x_1,\ldots ,x_{d-h}$ is also a system of parameters for $R/J$. Thus $x_1,\ldots ,x_{d-h}, f_1,\ldots, f_h$ is a system of parameters of $R$.
(\ref{main theorem three}) Put $B=R/(\underline{x})$ where $\underline{x} = (x_1, \ldots, x_{d-h})$. Then $JB$ is a reduction of $IB$.
Since $R/I^{n}$ is Cohen-Macaulay and $\underline{x}$ is a system of parameters for $R/I^n$, we have
\beq
\label{eq3}\nonumber
e_{(\underline{x}, J)}(R)=e_J(B)
&=& e_I(B) \hspace*{5cm} \mbox{ (as $J$ is a reduction of $I$)}\\
&=& {\limm} \left[\f{h!}{n^{h}}{\lambda}_B \left( \f{B}{I^{n}B} \right)\right]\\ \nonumber
&=& {\limm} \left[ \f{h!}{n^{h}}
{\lambda}_R \left( \f{R}{(\underline{x})+I^{n}} \right) \right]\\\nonumber
&=& {\limm} \f{h!}{n^{h}}
\left[ e_{\underline{x}} \left (\f{R}{I^{n}} \right) \right]\\ \nonumber
&=& {\limm} \f{h!}{n^{h}} \left[\displaystyle{\sum_{i=1}^s
{\lambda}_{R_{P_i}}\left(\f{R_{P_i}}{I^{n}R_{P_i}}\right)
e_{\underline{x}}\left(\f{R}{P_i}\right)}\right]
\hspace{.2in} \mbox{(by \cite[Theorem 14.7]{mat})}\\ \nonumber
&=& {\limm} \f{k^hh!}{(kn)^{h}}
\left[ \sum_{i=1}^s {\lambda}_{R_{P_i}} \left(\f{R_{P_i}}{{{\bf a}}^{kn}R_{P_i}} \right)
e_{\underline{x}} \left(\f{R}{P_i} \right) \right] \hspace{.2in}
\mbox{[since $I^{n}R_{P_i}={{\bf a}}^{(kn)}R_{P_i} = {{\bf a}}^{kn}R_{P_i}$]}\\
&=& k^h \left[ \displaystyle{\sum_{i=1}^s e_{{{\bf a}}R_{P_i}}(R_{P_i})
e_{\underline{x}}(R/P_i)} \right] .
\eeq
\end{proof}
In order to prove the converse of the above theorem we need the following result of E. B\"oger.
An ideal $I$ is said to be equimultiple if $\operatorname{ht} I=\ell(I)$.
\begin{theorem}[\cite{B\"oger}] \label{boger}
Let $J$ be an equimultiple ideal of a quasi-unmixed local ring $(R,\mathfrak m)$, and let $I \supseteq J$ be an ideal. Then $J$ is a reduction of $I$ if and only if $\sqrt{I}=\sqrt{J}$ and $e(I_{\mathfrak p}) = e(J_{\mathfrak p})$ for every minimal prime $\mathfrak p$ of $I$.
\end{theorem}
\begin{theorem}
\label{main theorem converse}
Let $(R,\mathfrak m)$ be a quasi-unmixed
local ring of dimension $d\geq 1$. Let ${\bf a}$ be an ideal of height $h$ with $1 \leq h \leq d-1$ and $\operatorname{MinAss}(R/{\bf a})=\{P_1,\ldots ,P_s\}$ such that $\operatorname{ht} P_i=h$ for all $i=1,\ldots ,s$.
If there exist $x_1,\ldots, x_{d-h}$ a system of parameters of $R/{\bf a}$ and $ f_i\in {\bf a}^{(k_i)}$ such that conditions $(\ref{main theorem one})$, $(\ref{main theorem two})$ and $(\ref{main theorem three})$ of Theorem \ref{main theorem} are satisfied, then $\mathcal R_s({\bf a})$ is Noetherian.
\end{theorem}
\begin{proof}
By passing to high powers of the $f_i$'s we may assume that $k=k_i$ for all $1\leq i \leq h$. Fix $k$ and put $I={{\bf a}}^{(k)}$ and $J=(f_1,\ldots ,f_{h})$. We will show that $J$ is a reduction of $I$. Let $B=R/(x_1,\ldots ,x_{d-h})$. Then as $\underline{x}=(x_1,\ldots ,x_{d-h})$ is $R$-regular, by \cite[Theorem 14.11]{mat} and condition (\ref{main theorem three}) we have
\beq
\label{eq7}
e_J(B)
=e_{(\underline{x})+J}(R)
= k^{h} \sum_{i=1}^s
e_{{\bf a} R_{P_i}}(R_{P_i})
e_{\underline{x}} \left( \f{R}{P_i}\right) .
\eeq
Let $F=\operatorname{MinAss}_R (R/J)$. Then for all $n\geq 1$, $F=\operatorname{MinAss}_R (R/J^{n})$. Hence we get
\beq
\label{converse eq1} \nonumber
{e_J(B)}
&=& \left[\limm \f{h!}{n^{h}} {\lambda}_B(B/J^{n}) \right]\\ \nonumber
& = & \limm \left[ \f{h!}{n^{h}} {\lambda}_R(R/(\underline{x})+J^{n})\right]\\ \nonumber
& \geq & \limm \f{h!}{n^{h}} \left[e_{\underline{x}R} \left( \f{R}{J^{n}} \right) \right]\\ \nonumber
& = & \limm \f{h!}{n^{h}} \left[ \displaystyle{\sum_{P\in F}{\lambda}_{R_P}(R_P/J^{n}R_P)e_{\underline{x}}(R/P)} \right]
\mbox{by \cite[Theorem~14.7]{mat}}\\
&=&{\displaystyle{\sum_{P\in F}e_{JR_P}(R_P)e_{\underline{x}}(R/P)}}\label{converse eq}
\eeq
As $\operatorname{ht} I=\operatorname{ht} J=h$ and $J \subseteq I$, we have $P_i\in F$ for all $i=1, \ldots, s$, so
\beq
\label{converse eq2}
\sum_{P\in F}e_{JR_P}(R_P)e_{\underline{x}}(R/P)
\geq \sum_{i=1}^se_{JR_{P_i}}(R_{P_i})e_{\underline{x}}(R/P_i).
\eeq
Now from (\ref{converse eq}) and (\ref{converse eq2}) we get
\beq\label{eq11}
{e_J(B)}
\geq \sum_{i=1}^se_{JR_{P_i}}(R_{P_i})e_{\underline{x}}(R/P_i).
\eeq
Since $J\subseteq I$, for $1\leq i \leq s$,
\beq
e_{JR_{P_i}}(R_{P_i}) &\geq & e_{IR_{P_i}}(R_{P_i})
\label{converse eq4}
\eeq
Since $I^{n}R_{P_i} = ({{\bf a}}^{(k)}R_{P_i})^{n}={{\bf a}}^{kn}R_{P_i}$, we have
\beq
\label{converse eq5}
e_{IR_{P_i}}(R_{P_i})= \limm \frac{h!}{n^h}{\lambda}(R_{P_i}/I^{n}R_{P_i})
= \limm \frac{h!}{n^h}{\lambda}(R_{P_{i}}/{{\bf a}}^{kn}R_{P_i})
= k^{h}e_{{{\bf a}}R_{P_i}}(R_{P_i}) .
\eeq
Thus from (\ref{eq11}), (\ref{converse eq4}), (\ref{converse eq5}) and (\ref{eq7}) we have
\begin{eqnarray*}
e_J(B)&\geq &\displaystyle{\sum_{i=1}^se_{JR_{P_i}}(R_{P_i})e_{\underline{x}}(R/P_i)} \\
& \geq & \displaystyle{\sum_{i=1}^se_{IR_{P_i}}(R_{P_i})e_{\underline{x}}(R/P_i)}\\
& = & \displaystyle{\sum_{i=1}^se_{{{\bf a}}R_{P_i}}(R_{P_i})k^{h}e_{\underline{x}}(R/P_i)}\\
& = & e_{J}(B) \label{eq6}
\end{eqnarray*}
Therefore $F=\{P_1,\ldots ,P_s\}$ and $e_{JR_{P_i}}(R_{P_i})=e_{IR_{P_i}}(R_{P_i})$ for all $1\leq i \leq s.$ Now, since $R$ is quasi-unmixed and $J$ is an equimultiple ideal, by Theorem~\ref{boger}, $J$ is a reduction of $I$, which implies that $\ell(I)=h$. Thus by \cite[Theorem 3.6]{ghnv}, $\mathcal R_s({\bf a})$ is Noetherian.
\end{proof}
\begin{example} Theorem \ref{main theorem converse} may not be true if $R$ is quasi-unmixed but $\operatorname{ht}({\bf a})=0$.
Let $R = k[[x,y]]/ (xy) = k[[ \overline{x}, \overline{y}]]$ and let ${\bf a} = (\overline{x})$. Then $J = (\overline{x})$ is a minimal reduction of ${\bf a}$, and $f= \overline{x + y}$ is a system of parameters for $R$ as $\operatorname{dim}(R)=1$.
Since ${\bf a}^m R_{{\bf a}} \cap R = {\bf a} = (\overline{x})$ for all $m \geq 1$, $R/ {\bf a}^{(m)}$ is Cohen-Macaulay for all $m \geq 1$, but $\mathcal R_s({\bf a})$ is not Noetherian. \qed
\end{example}
\begin{example} Theorem~\ref{main theorem converse} does not hold true if $R$ is not quasi-unmixed.
Let $R = k[[x,y,z]]/ (xy, xz) = k[[ \overline{x}, \overline{y}, \overline{z}]]$ and let ${\bf a} = (\overline{x},\overline{y})$. Put $f_1 = \overline{x + y}$. Then $(f_1) {\bf a} = {\bf a}^2,$ so $(f_1)$ is a minimal reduction of ${\bf a}$, and $f_1 \in {\bf a}={\bf a}^{(1)}$. Moreover, if $x_1 = \overline{x + y+z}$, then $x_1, f_1$ is a system of parameters for $R$, and the image of $x_1$ is a parameter for $R/{\bf a}$.
Since ${\lambda}(R/(x_1, f_1)^n)=\binom{n+1}{2}+n,$
$e_{(x_1,f_1)}(R)=1.$ As ${\bf a}$ is a prime ideal, $h=1$, $k_1 = 1$, $s=1$, we get
\beqn
\left ( \displaystyle{\prod_{j=1}^{h}k_j}\right )\displaystyle{\sum_{i=1}^se_{{\bf a} R_{P_i}}(R_{P_i})e_{(x_1,\ldots ,x_{d-h})}(R/P_i) }
= e_{{\bf a} R_{{\bf a}}}(R_{{\bf a}}) \cdot e_{( \overline{x+y+z} )} (R/ {\bf a})=e_{(x_1,f_1)}(R)=1.
\eeqn
As ${\bf a}^n R_{{\bf a}} \cap R = (\overline{x}, \overline{y^n})$ for all $n \geq 1,$ $\mathcal R_s({\bf a})$ is not Noetherian. \qed
\end{example}
\section{The symbolic Rees algebra of the edge ideal of the complete graph}
Let $\Bbbk$ be a field and $S = \Bbbk[x_1, \ldots, x_{n}]$ a polynomial ring in $n$ indeterminates. Let $G$ be the complete graph on $n$ vertices and let $I=I(G) = (x_i x_j \mid \{i,j\} \text{ is an edge of } G) \subset S$ be the corresponding edge ideal of $G$.
By a result of Herzog, Hibi and Trung \cite[Corollary~1.3]{herzog-hibi-trung}, $\mathcal R_s(I) $ is Noetherian. We give another proof of this result.
Let ${\bf x} := x_1, \ldots, x_n$ and $\sigma_1({\bf x}),\ldots, \sigma_n({\bf x})$ be the elementary symmetric functions of $\bf x.$
\begin{proposition} \label{sym} The functions $\sigma_j({\bf x})$ for $j=1, 2, \ldots, n$ form a homogeneous system of parameters of $S$ and
$${\lambda}(S/(\sigma_1, \sigma_2, \ldots, \sigma_n))=n!.$$
\end{proposition}
\begin{proof}
Note that each $x_j$, for $j=1, 2, \ldots, n$, is a root of the monic polynomial
$$x^n-\sigma_1x^{n-1}+\sigma_2x^{n-2}-\cdots+(-1)^n\sigma_n.$$
Hence $S$ is an integral extension of the polynomial ring $R=\Bbbk[\sigma_1, \sigma_2, \ldots, \sigma_n].$ Therefore $J=(\sigma_1, \ldots, \sigma_n)S$ is $\mathfrak m=(x_1, x_2, \ldots, x_n)S$-primary. Thus $\sigma_1, \sigma_2, \ldots, \sigma_n$ is a homogeneous system of parameters of $S.$
For the length, note that since $\sigma_1, \sigma_2, \ldots, \sigma_n$ is an $S$-regular sequence, the Hilbert series of $S/J$ is given by
$$
H(S/J, u)=\sum_{m=0}^\infty {\lambda}((S/J)_m)u^m
=\frac{(1-u)(1-u^2)\cdots(1-u^n)}{(1-u)^n}=\prod_{j=1}^{n-1}(1+u+u^2+\cdots+u^j).$$
Therefore ${\lambda}(S/J)=H(S/J, 1)=n!.$
\end{proof}
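The length count $\lambda(S/J)=n!$ can also be checked mechanically; the following sketch (ours, for illustration only) expands the product form of the Hilbert series above by repeated polynomial multiplication and sums its coefficients.

```python
from math import factorial

def hilbert_series_coeffs(n):
    """Coefficients of H(S/J, u) = prod_{j=1}^{n-1} (1 + u + ... + u^j)."""
    coeffs = [1]
    for j in range(1, n):
        block = [1] * (j + 1)              # 1 + u + ... + u^j
        new = [0] * (len(coeffs) + j)      # degree grows by j
        for a, ca in enumerate(coeffs):
            for b, cb in enumerate(block):
                new[a + b] += ca * cb
        coeffs = new
    return coeffs

# H(S/J, 1) = lambda(S/J) = n!, and the top degree is 1 + 2 + ... + (n-1)
h = hilbert_series_coeffs(4)
```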
\begin{example}
\label{complete-graph}
Let $S = \Bbbk[x_1, \ldots, x_{n}]$. Put $R= S_{\mathfrak m}$ and $\mathfrak a = I(G)R$ where $\mathfrak m = (x_1, \ldots, x_{n})$.
\been
\item
\label{complete-graph-11}
$\mathfrak a$ is unmixed and $R/ \mathfrak a^{(k)}$ is Cohen-Macaulay for all $k \geq 1$.
\item
\label{complete-graph-2}
$\mathcal R_s(\mathfrak a)$ is Noetherian.
\item
\label{complete-graph-3}
$\mathfrak a$ is a set-theoretic complete intersection.
\eeen
\end{example}
\begin{proof} (\ref{complete-graph-11}) Put $\mathfrak p_i = (x_1, \ldots, x_{i-1}, x_{i+1}, \ldots, x_{n} )$, $1 \leq i \leq n$.
By \cite[Corollary 3.35]{tyul} it follows that
for all $k \geq 1$,
\beqn
\mathfrak a^{(k)} = \bigcap_{i=1}^{n} \mathfrak p_i^{k}.
\eeqn
As $\operatorname{ht} \mathfrak p_i=n-1$ for all $\mathfrak p_i\in \operatorname{Ass} (R/\mathfrak a)$, $\mathfrak a$ is an unmixed ideal.
Since $x=\sigma_1 \not \in \mathfrak p_i$ for all $i=1, \ldots, n$, it is a nonzerodivisor on $R/ \mathfrak a^{(k)}$ for all $k \geq 1$. Hence $R/\mathfrak a^{(k)}$ is Cohen-Macaulay for all $k\geq 1$.
(\ref{complete-graph-2}) Let $x=\sigma_1.$ Then for all $j=2, \ldots, n$,
\beq
\label{example-symmetric-fi}
\sigma_{j} ({\bf x}) &\in& I^{(j-1)} \hspace{.5in} \mbox{\cite[Lemma 2.6]{bocci}}\\
\label{example-symmetric-multiplicity-1}
e_{(x)} ( R/ \mathfrak p_i)
&=& {\lambda}\left( \f{R} {(x_1, \ldots, x_{i-1}, x, x_{i+1}, \ldots, x_{n} )}\right)
=1\\
\label{example-symmetric-multiplicity-2} \nonumber
e_{\mathfrak a}( R_{\mathfrak p_i})
&=& {\lambda} \left( \f{R_{\mathfrak p_i}}{\mathfrak a R_{\mathfrak p_i} }\right)
= {\lambda} \left( \f{R_{\mathfrak p_i}}{ \mathfrak p_iR_{\mathfrak p_i}}\right)
=1\\
e_{(x, \sigma_2, \ldots, \sigma_n)}(R) &=& \lambda \left( \frac{R} {(\sigma_1, \ldots, \sigma_n)R }\right)
= n!.
\eeq
By the equation (\ref{example-symmetric-fi}),
$k_j := j-1$ for all $j=2, \ldots, n$.
Hence by Proposition \ref{sym}
\beq
\label{example-symmetric-RHS}
\prod_{j=2}^{n} k_j \sum_{i=1}^{n} e_{\mathfrak a}( R_{\mathfrak p_i})\, e_{(x)}( R/ \mathfrak p_i)
= (n-1)!\, n = n! = e_{(x,\sigma_2, \ldots, \sigma_n)}(R).
\eeq
By Proposition \ref{sym}, equation (\ref{example-symmetric-RHS}) and Theorem \ref{main theorem} we conclude that $\mathcal R_s(\mathfrak a )$ is Noetherian.
(\ref{complete-graph-3}) By (\ref{complete-graph-11}), (\ref{complete-graph-2}) and Theorem \ref{analytic}, $\mathfrak a$ is a set-theoretic complete intersection.
\end{proof}
\section{The symbolic Rees algebra of the Fermat ideal}
Let $S = \mathbb C[x,y,z]$ be the polynomial ring and $\mathfrak m = (x,y,z)$. Let $n \geq 3$ and
\beqn
r_n := y^n-z^n, \hspace{.2in} s_n =z^n-x^n, \hspace{.2in} t_n = x^n-y^n, \text{ and }
J_n=(x r_n, y s_n, z t_n).
\eeqn
The ideal $J_n$ is called the Fermat ideal. The ideal $J_n$ defines a set of $n^2 + 3$ points in $\mathbb P^2$.
For $n=3$, this ideal was studied in \cite{dum-sze-gas} in relation with the containment problem. In \cite{nagel-alexandra} the authors study the symbolic Rees algebra of $J_n$.
\begin{lemma}
\label{lemma-fermat-ideal}
Let $R = S_{\mathfrak m}$ and $I_n= J_nR$. Then
\been
\item
\label{lemma-fermat-ideal-1}
$J_n$ is a radical ideal.
\item
\label{lemma-fermat-ideal-2}
$ e(R/ I_n) = e(S/ J_n) = n^2 + 3$.
\eeen
\end{lemma}
\begin{proof} Let $\eta$ be a primitive $n^{\rm th}$ root of unity.
Put $P_{ij} = (y-\eta^i z, z -\eta^j x)$ ($i,j=0, \ldots, n-1$), $P_1 = (y,z)$, $P_2 = (x,z)$, $P_3 = (x,y)$ and
\beq
\label{ex2-defn-Kn}
K_n = \bigcap_{i,j=0}^{n-1} P_{ij} R \cap P_1R \cap P_2R \cap P_3R.
\eeq
Then clearly, $I_n \subseteq K_n$.
To prove the lemma it is enough to show that equality holds.
A minimal free resolution of $J_n $ is
\beq
\label{minimal free resolution}
0
\rightarrow
S[-n-3] \oplus S[-2n]
\xrightarrow{ \displaystyle
\begin{pmatrix}
yz & x^{n-1} \\
xy & z^{n-1}\\
-xz & -y^{n-1}
\end{pmatrix}} S[-n-1]^3
\xrightarrow{ \displaystyle
\begin{pmatrix}
xr_n & ys_n & zt_n\\
\end{pmatrix}}
S
\rightarrow \frac{S}{J_n}
\rightarrow 0.
\eeq
Since $\mathfrak m$ is a maximal ideal in $S$, $\operatorname{depth}(S/ J_n)_{\mathfrak m} \leq \operatorname{dim} (S/ J_n)_{\mathfrak m}=1$. From (\ref{minimal free resolution}), the projective dimension of $(S/ J_n)_{\mathfrak m}$ is $2$.
By the Auslander-Buchsbaum formula, $\operatorname{depth} (S/ J_n)_{\mathfrak m} = 3 - 2 = 1 = \operatorname{dim} (S/ J_n)_{\mathfrak m}$, so $ (S/ J_n)_{\mathfrak m}$ is Cohen-Macaulay.
Using the exact sequence (\ref{minimal free resolution}), we obtain the Hilbert series of $S/J_n$. Let $u$ be an indeterminate. Then
$$H(S/J_n, u)=\frac{1-3 u^{n+1}+u^{n+3}+u^{2n}}{(1-u)^3}.$$
Since $\operatorname{dim} S/J_n=1,$ we can write $H(S/J_n, u)=\frac{f(u)}{1-u}$ for some $f(u)\in \mathbb Z[u].$ Let $g(u)=1-3 u^{n+1}+u^{n+3}+u^{2n}.$ Then $g(u)=(1-u)^2f(u).$ Therefore
$$e(S/J_n)=f(1)=\lim_{u\to 1} \frac{g(u)}{(1-u)^2}=n^2+3.$$
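Since $g(1)=g'(1)=0$, the multiplicity equals the value at $u=1$ of the exact quotient $g(u)/(1-u)^2$. The sketch below verifies $e(S/J_n)=n^2+3$ for small $n$ by performing this division on coefficient lists; the helper names are ours, not from the text.

```python
# Verify e(S/J_n) = n^2 + 3, where g(u) = 1 - 3u^{n+1} + u^{n+3} + u^{2n}
# and e(S/J_n) is the value at u = 1 of g(u)/(1-u)^2.

def divide_by_one_minus_u(g):
    """Divide a polynomial g (coeff list, low degree first) by (1 - u).
    The quotient coefficients are the running partial sums of g; the
    division is exact precisely when g(1) = 0."""
    h, acc = [], 0
    for c in g[:-1]:
        acc += c
        h.append(acc)
    assert acc + g[-1] == 0  # remainder g(1) must vanish
    return h

def fermat_g(n):
    """Coefficient list of the resolution numerator for the Fermat ideal."""
    deg = max(2 * n, n + 3)
    g = [0] * (deg + 1)
    g[0] += 1
    g[n + 1] += -3
    g[n + 3] += 1
    g[2 * n] += 1          # for n = 3 this adds onto the u^{n+3} term
    return g

for n in range(3, 10):
    f = divide_by_one_minus_u(divide_by_one_minus_u(fermat_g(n)))
    assert sum(f) == n * n + 3   # e(S/J_n) = f(1) = n^2 + 3
```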
Every minimal prime of $K_n$ is also a minimal prime of $J_n.$ Let $a=n^2+3$ and let
$$J_nR= \bigcap_{i,j=0}^{n-1} Q_{ij} R \cap Q_1R \cap Q_2R \cap Q_3R
\cap \bigcap_{j=a+1}^{b} Q_j$$ be a reduced primary decomposition of $J_nR$, and let $Q_j$ be $\mathfrak p_j$-primary for all $j=1,2,\ldots,b.$ By the associativity formula for multiplicities, we have
$$
e(\mathfrak m, R/I_n)=\sum_{j=1}^be(\mathfrak m, R/\mathfrak p_j) {\lambda} ((R/I_n)_{\mathfrak p_j})=n^2+3.
$$
This shows that $a=b=n^2+3$ and $Q_j=\mathfrak p_j$ for all $j=1,2,\ldots, a.$ Therefore
$I_n=K_n$ and $I_n$ is a radical ideal.
\end{proof}
\begin{example}
\label{example-fermat-ideal}
Let $R = S_{\mathfrak m}$ and $I_n= J_nR$. Then
\been
\item
\label{example-fermat-ideal-1}
For all $n\geq 3,$ the analytic spread of $I_n$ is $3$.
\item
\label{example-fermat-ideal-2}
For all $k \geq 1$, $R/I_n^{(k)}$ is Cohen-Macaulay.
\item
\label{example-fermat-ideal-3}
Put ${\bf a}= I_n$. Then
\been
\item
\label{example-fermat-ideal-a}
$\mathcal R_s({\bf a})$ is Noetherian.
\item
\label{example-fermat-ideal-b}
${\bf a}$ is a set-theoretic complete intersection.
\eeen
\eeen
\begin{proof} (\ref{example-fermat-ideal-1})
Suppose that $\ell(I_n)=2.$ Since $I_n$ is a radical ideal and $R_\mathfrak p$ is regular for all minimal primes of $I_n,$ by \cite[Theorem]{cn1976},
$I_n$ is generated by two elements. This is a contradiction. Hence $\ell(I_n)=3.$ \\
(\ref{example-fermat-ideal-2})
Since $I_n$ is a height two ideal of a $3$-dimensional local ring,
$I_n^{(k)}$ is an unmixed ideal for all $k\geq 1.$ Hence $R/I_n^{(k)}$ is a Cohen-Macaulay local ring for all $k\geq 1.$
(\ref{example-fermat-ideal-a})
Put $r = r_n$, $s=s_n$, $t=t_n$, $x_1 = x+y+z$, $f_1 = rs^{n-2}t - 2 r^{n-2}st$ and
$f_2 = rs^{n-2}t + 2( s^{n-2} r^2 x^n + t^{n-2} s^2 y^n + r^{n-2} t^2 z^n).$ We claim that
$x_1, f_1, f_2$ is a system of parameters in $R$. We can write
\beqn
r s^{n-2}t - 2 r^{n-2}st = rst ( s^{n-3}- 2 r^{n-3} ).
\eeqn
Let $\mathfrak p$ be a prime ideal with $\mathfrak p \supseteq (x_1,f_1,f_2)$.
If $r \in \mathfrak p$, then as $r+s+t=0$,
\beqn
\mathfrak p \supseteq (x_1, r, f_2)
= (x_1, r, t^{n-2} s^2 y^n )
= (x_1, r, (-r-s)^{n-2}s^2y^n)
= (x_1, r, s ^ny^n ).
\eeqn
Let $\eta$ be a primitive $n^{\rm th}$ root of unity. Then we can write
\beqn
r &=& y^n-z^n = \prod_{i=1}^{n} (y- \eta^i z)\\
s &=& z^n-x^n = \prod_{j=1}^{n} (z- \eta^j x).
\eeqn
Hence, for some $i,j = 1, \ldots, n$,
\beq
\nonumber
\mathfrak p
&\supseteq& (x_1, y- \eta^i z, (z- \eta^j x) y)\\ \nonumber
&=& (x+y+z, y- \eta^i z, (z- \eta^j (x+y+z -y-z)) y)\\ \nonumber
&=& (x+y+z, y- \eta^i z, (z- \eta^j ( -y-z)) y)\\ \nonumber
&=& (x+y+z, y- \eta^i z, (z+ \eta^j ( y+z)) y)\\ \nonumber
&=& (x+y+z, y- \eta^i z, (z+ \eta^j ( \eta^i z +z)) ( \eta^i z))\\ \nonumber
&=& (x+y+z, y- \eta^i z, \eta^i z^2 ( 1+ \eta^{i+j} + \eta^j )) \\
\label{mprimary}
&=& (x+y+z, y- \eta^i z, z^2).
\eeq
Hence $\mathfrak p=\mathfrak m$.
Similarly, if $s \in \mathfrak p$ or $t \in \mathfrak p$, then
using the similar argument as in (\ref{mprimary}) we get $\mathfrak p=\mathfrak m$.
We now compute $e_{(x_1)} (R/ (f_1,f_2))$. Since $x_1,f_1, f_2$ is a system of parameters,
\beq \nonumber
&&e_{(x_1)} (R/ (f_1,f_2)) \\ \nonumber
&=& e_{(x_1, f_1, f_2)}(R)\\
\label{ex2-mult-sop}
&=& \begin{cases}
e_{(x_1 , r, f_2)}(R) + e_{(x_1, s, f_2)}(R) + e_{(x_1, t, f_2)}(R) & n=3\\
e_{(x_1 , r, f_2)}(R) + e_{(x_1, s, f_2)}(R) + e_{(x_1, t, f_2)}(R) + e_{(x_1, s^{n-3}- 2r^{n-3}, f_2)}(R) & n >3\\
\end{cases}.
\eeq
Let $n>3$ and $s^{n-3}- 2 r^{n-3} \in \mathfrak p$. Then
\beqn
\mathfrak p
&\supseteq& (x_1, s^{n-3}- 2 r^{n-3} , f_2).
\eeqn
Let $\zeta$ be a primitive $(n-3)^{\rm th}$ root of unity. Then
\beqn
s^{n-3}- 2 r^{n-3} = \prod_{i=0}^{n-4} (s- \sqrt[n-3]{2}\zeta^i r).
\eeqn
Fix $i$ ($0 \leq i< n-3$). Put $\xi = \zeta^i$ and $v= s-ar$ where $a= \sqrt[n-3]{2}\, \xi$. Since $r + s + t =0$,
\beq
\label{ex2-simplifying-terms} \nonumber
t &=& -(r+s) = - (1+a) r -v\\ \nonumber
f_2
&=& rs^{n-2}t + 2( s^{n-2} r^2 x^n + t^{n-2} s^2 y^n + r^{n-2} t^2 z^n)\\ \nonumber
&=& r ( v + ar)^{n-2} (-r(1+a) -v) + 2 ( v + ar)^{n-2} r^2 x^n\\ \nonumber
&&+ 2 ( -(1+a) r - v)^{n-2} (ar)^2 y^n +2 r^{n-2} ( -(1 +a) r -v)^2 z^n \\ \nonumber
&=& -a^{n-2}(1+a) r^n + 2 a^{n-2} x^n r^n +(-1)^{n-2}2a^2 (1+a)^{n-2} y^n r^n + 2 (1+a)^2 z^n r^n + g(v) \\
&=& [a^{n-2}(1+a) + 2 a^{n-2} x^n +(-1)^{n-2}2a^2 (1+a)^{n-2} y^n + 2 (1+a)^2 z^n] r^n + g(v),
\eeq
where $g(v)$ denotes the terms in $f_2$ which involve positive powers of $v$.
Since $a^{n-2}(1+a) + 2 a^{n-2} x^n +(-1)^{n-2}2a^2 (1+a)^{n-2} y^n + 2 (1+a)^2 z^n$ is a unit in $R$,
if $v \in \mathfrak p$, then
$\mathfrak p \supseteq (x_1, v, f_2) = (x_1, v, r^n) = (x+y+z, s- a r, r^n)$. Hence,
$\mathfrak p \supseteq (x+y+z, s,r)$ which implies that for some $j,k=0, \ldots, n-1$,
$\mathfrak p \supseteq (x+y+z, z- \eta^j x, y- \eta^k z)$. Hence $\mathfrak p = \mathfrak m $.
Since $r+s+t=0$, $(x_1, r, f_2) = (x_1, r, t^{n-2}s^2y^n)$; in particular, $x_1, r, f_2$ is a system of parameters. Hence
\beq \nonumber
e_{(x_1 , r, f_2)}(R)
& =& e_{(x_1, r, t^{n-2} s^2 y^n)}(R)\\ \nonumber
& =& e_{(x_1, r, y^n)}(R) + e_{(x_1, r, t^{n-2})}(R) + e_{(x_1, r, s^2)}(R)\\
\label{ex2-x1rf2-sop}
& =& n^2 + n^2(n-2) + 2n^2
= n^3 + n^2.
\eeq
Similarly,
\beq
\label{ex2-sx1sf2-sop}
e(x_1, s, f_2) = e(x_1, t, f_2) = n^3 + n^2.
\eeq
We now compute $e(x_1, s^{n-3}-2r^{n-3}, f_2)$. Since $x_1, s^{n-3}-2r^{n-3}, f_2$ is a system of parameters,
\beq
\label{ex2-x2sn-3f2-sop}
e(x_1,s^{n-3}-2r^{n-3}, f_2)
= n e(x_1, s^{n-3} -2 r^{n-3}, r) = n e( x_1, s^{n-3}, r) = n^3(n-3).
\eeq
Combining (\ref{ex2-mult-sop}), (\ref{ex2-x1rf2-sop}), (\ref{ex2-sx1sf2-sop}) and (\ref{ex2-x2sn-3f2-sop}) we get that for all $n \geq 3$
\begin{equation}\label{mul1}
e_{(x_1)}(R/ (f_1,f_2))
= 3(n^3+n^2) + n^4-3n^3 = n^4 + 3n^2= n^2(n^2 + 3).
\end{equation}
As $f_1, f_2 \in {\bf a}^{(n)}$, $k_1=k_2=n$, and we have
\begin{equation}\label{mul2}
\left ( \displaystyle{\prod_{j=1}^{2}k_j}\right )
\displaystyle{\sum_{i=1}^{n^2+3} e_{{\bf a} R_{P_i}}(R_{P_i})e_{(x_1)}(R/P_i)}
= n^2 (n^2 + 3)
\end{equation}
Then by (\ref{mul1}) and Theorem \ref{main theorem} we have $\mathcal R_s({\bf a})$ is Noetherian.
(\ref{example-fermat-ideal-b}) follows from (\ref{example-fermat-ideal-2}), (\ref{example-fermat-ideal-a}) and Theorem~\ref{analytic}.
\end{proof}
\end{example}
\section{The Jacobian ideal of hyperplane arrangements}
This example was motivated by the paper by J.~Migliore, U.~Nagel, and H.~Schenck \cite{mig-nag-sch}.
\begin{example}
\label{jacobian}
Let $S = k[x,y,z,w]$, let $f = w(x+y) (x+y+z+w)$, and let $J(f)S$ be the Jacobian ideal of $f$. Put $R = S_{\mathfrak m}$ where $\mathfrak m = (x,y,z,w)$ and ${{\bf a}} = J(f)R$.
Then
\been
\item
\label{jacobian-one}
${{\bf a}}$ is a height two unmixed ideal and $R/{{\bf a}}^{(n)}$ is Cohen-Macaulay for all $n \geq 1$.
\item
\label{jacobian-two}
The symbolic Rees algebra $\mathcal R_s({{\bf a}})$ is Noetherian.
\item
\label{jacobian-three}
${{\bf a}}$ is a set-theoretic complete intersection.
\eeen
\begin{proof} (\ref{jacobian-one}) One can verify that $f_x=f_y$ and
\beqn
{{\bf a}} &=& (f_x=2xw+2yw+zw+w^2, f_z=xw+yw, f_w=x^2+2xy+y^2+xz+yz+2xw+2yw)\\
&=& (zw+w^2, x^2+2xy+y^2+xz+yz, xw+yw)\\
&=& ( w(z+w), (x+y)(x+y+z), w(x+y))\\
&=& (z+w, x+y) \cap (w, x+y) \cap (w, x+y+z).
\eeqn
Put $\mathfrak p_1 = (z+w, x+y)$, $\mathfrak p_2= (w, x+y)$ and $\mathfrak p_3= (w, x+y+z)$.
Then
\beqn
{{\bf a}}^{(n)} = \mathfrak p_1^n \cap \mathfrak p_2^n \cap \mathfrak p_3^n.
\eeqn
Consider the exact sequences:
\beq
\label{ses p1 p2}
0 {\longrightarrow} \f{R}{\mathfrak p_1^n \cap \mathfrak p_2^n}
{\longrightarrow} \f{R}{\mathfrak p_1^n } \oplus \f{R}{ \mathfrak p_2^n}
{\longrightarrow} \f{R}{\mathfrak p_1^n + \mathfrak p_2^n}
{\longrightarrow} 0,\\
\label{ses p1 p2 p3}
0 {\longrightarrow} \f{R}{\mathfrak p_1^n \cap \mathfrak p_2^n \cap \mathfrak p_3^n}
{\longrightarrow} \f{R}{\mathfrak p_1^n \cap \mathfrak p_2^n } \oplus \f{R}{ \mathfrak p_3^n}
{\longrightarrow} \f{R}{ (\mathfrak p_1^n \cap \mathfrak p_2^n) + \mathfrak p_3^n}
{\longrightarrow} 0.
\eeq
Note that $R/\mathfrak p_1^n$, $R/\mathfrak p_2^n$ and $R/\mathfrak p_3^n$ are Cohen-Macaulay.
Since
\beqn
\mathfrak p_1^n + \mathfrak p_2^n &=&
( (x+y)^{n-i} (w^i, (z+w)^i):i=0, \ldots, n)
\eeqn
$\operatorname{ht}(\mathfrak p_1^n + \mathfrak p_2^n)=3$ and $\sqrt{\mathfrak p_1^n + \mathfrak p_2^n } = (x+y, z,w)$. We claim that $x$ is a nonzerodivisor on $R/ (\mathfrak p_1^n + \mathfrak p_2^n)$.
After a change of variables let $y^{\prime} = x+y$, $z^{\prime} = w +z$. Then we have
\beqn
(\mathfrak p_1^n + \mathfrak p_2^n)S &=& ( (y^{\prime})^{n-i} (w^i, (z^{\prime})^i):i=0, \ldots, n) \subseteq k[x, y^{\prime}, z^{\prime},w],\\
((\mathfrak p_1^n \cap \mathfrak p_2^n) + \mathfrak p_3^n )S &=& ( (z^{\prime}, y^{\prime})^n \cap (w, y^{\prime})^n) + ( w, y^{\prime} +z^{\prime})^n \subseteq k[x, y^{\prime}, z^{\prime},w].
\eeqn
We can write $\mathfrak p_1^n + \mathfrak p_2^n = \cap Q_j$ (a finite intersection) where each $Q_j$ is generated by pure powers of the variables (see \cite[Theorem~1.3.1]{herzog-hibi}). Then, as $(y^{\prime})^n, (z^{\prime})^n, w^n \in \mathfrak p_1^n + \mathfrak p_2^n$ and $x$ does not occur in any of the generators of $\mathfrak p_1^n + \mathfrak p_2^n$, each $Q_j$ is of the form $( (y^{\prime})^{n_{1,j}}, (z^{\prime})^{n_{2,j}}, w^{n_{3,j}})$ for some
${n_{1,j}}, {n_{2,j}}, {n_{3,j}} \in \mathbb N$. Since $\sqrt{\mathfrak p_1^n + \mathfrak p_2^n}S = \cap \sqrt{Q_j}S$ and $\sqrt{Q_j}S = (y^{\prime}, z^{\prime}, w)$ for all $j$, localizing we conclude that $(y^{\prime}, z^{\prime}, w)R$ is the only associated prime of $(\mathfrak p_1^n + \mathfrak p_2^n)R$ and hence $x$ is a nonzerodivisor on $R/ (\mathfrak p_1^n + \mathfrak p_2^n)$. By the depth lemma applied to the exact sequence (\ref{ses p1 p2}), we see that $R/(\mathfrak p_1^n \cap \mathfrak p_2^n)$
is Cohen-Macaulay.
Similarly, we conclude that $x$ is a nonzerodivisor on $R/ ( \mathfrak p_1^n \cap \mathfrak p_2^n) + \mathfrak p_3^n$. By the depth lemma applied to the exact sequence (\ref{ses p1 p2 p3}), it follows that $R/{\bf a}^{(n)}=R/(\mathfrak p_1^n\cap \mathfrak p_2^n\cap \mathfrak p_3^n)$ is Cohen-Macaulay.
(\ref{jacobian-two}) Put $x_1 = x$, $x_2 =z$, $g_1 = w(z+w)$, $g_2 = w(x+y)$, $g_3 = (x+y) (x+y+z)$, $f_1 = g_1 + g_3$ and $f_2 = f = w(x+y)(x+y+z+w) $. Then $f_1 \in {{\bf a}}$. As
\beqn
zf_2 &=& zw (x+y) (x+y+z+w) = g_1 g_2 - g_2 g_3,
\eeqn
$zf_2\in {\bf a}^2 \subseteq {{\bf a}}^{(2)} \subset \sqrt{{{\bf a}}^{(2)}} = \mathfrak p_1 \cap \mathfrak p_2 \cap \mathfrak p_3$. Since $z \not \in \mathfrak p_i$, for $i=1,2,3$, $f_2 \in {{\bf a}}^{(2)}$.
Now,
\beq \nonumber
e_{(x_1, x_2, f_1, f_2)}(R)
&=& e_{(x, z, g_1 + g_3, f)}(R)\\ \nonumber
&=& e_{(x,z,w(z+w) + (x+y) (x+y+z), w(x+y)(x+y+z+w) )}(R)\\ \nonumber
&=& e_{(x,z, w^2 + y^2, wy(y+w))}(R)\\ \nonumber
& = & e_{( x,z,w^2 + y^2, y)}(R) + e_{( x,z,w^2 + y^2, w)}(R) + e_{( x,z,w^2 + y^2, y+w)}(R) \\
\label{eq1}
&=& 6.
\eeq
Since $k_1 = 1$ and $k_2=2$, we have
\beq \nonumber
\left ( \displaystyle{\prod_{j=1}^{2}k_j}\right )\displaystyle{\sum_{i=1}^se_{{\bf a} R_{P_i}}(R_{P_i})e_{(x_1,x_{2})}(R/P_i)}
&=& 1 \cdot 2 \cdot \sum_{i=1}^3 e_{{\bf a} R_{P_i}}(R_{P_i})e_{(x_1,x_2)}(R/\mathfrak p_i) \\ \nonumber
&=& 2 \cdot \sum_{i=1}^3 e_{{\mathfrak p_i}{R_{\mathfrak p_i}}}(R_{\mathfrak p_i}) e_{(x,z)} (R/ \mathfrak p_i)\\ \nonumber
&=& 2 \cdot 3 \\
\label{eq2}
&=& 6.
\eeq
By (\ref{eq1}), (\ref{eq2}) and Theorem~\ref{main theorem converse}, $\mathcal R_s({\bf a})$ is Noetherian.
(\ref{jacobian-three}) Since $R/{\bf a}^{(n)}$ is Cohen-Macaulay for all $n\geq 1$ and $\mathcal R_s({\bf a})$ is Noetherian, by Theorem~\ref{analytic}, ${\bf a}$ is a set-theoretic complete intersection.
\end{proof}
\end{example}
{\bf Acknowledgement:} We thank Prof.~S.~Goto for many useful discussions during his visit to IIT Bombay.
\section{Introduction}
\label{sec:intro}
\begin{figure}
\centering
\includegraphics[width=\linewidth]{images/circular_oversize-crop.pdf}
\caption{\textbf{Comparison between circular convolution and oversized convolution.} We only show horizontal convolution for illustration purposes. a) Circular convolution in ParCNet V1 inevitably distorts context information at the boundary of images. b) Oversized convolution resolves the distortion while maintaining the global receptive field over the whole image.}
\label{fig:circular_oversize}
\end{figure}
Transformers have shown great potential in computer vision recently. Vision transformer (ViT)~\cite{dosovitskiy2020image} and its variants~\cite{touvron2021training,yuan2021tokens,wang2021pyramid,liu2021swin} have been adopted in various vision tasks such as object detection~\cite{carion2020end,fang2021you}, semantic segmentation~\cite{zheng2021rethinking}, and multi-modal tasks such as visual question answering~\cite{khan2020mmft} and text-to-image synthesis~\cite{ramesh2022hierarchical}. Despite the great performance of vision transformers, they do not surpass convolutional neural networks (CNNs) in every aspect. For example, the computational complexity of self-attention modules, one of the critical designs in transformers, is quadratic ($\mathcal{O}(N^2C)$) in the resolution of inputs~\cite{vaswani2017attention}. This property restricts their adoption in real applications such as defect inspection, which finds small defects in high-resolution images~\cite{zhang2021multi}. Moreover, transformers are arguably more data-hungry than CNNs~\cite{dosovitskiy2020image,touvron2021training}, making them difficult to deploy in long-tail applications where no large-scale data is available. Lastly, CNNs have been intensively studied over the past several decades~\cite{lecun1995convolutional}: many off-the-shelf dedicated optimizations have already been developed for existing deployment hardware (CPU, GPU, FPGA, ASIC, \emph{etc.}), and some acceleration and deployment techniques are designed mainly around convolution operations, such as operator fusion~\cite{roesch2019relay} and multi-level tiling~\cite{zheng2020ansor,chen2018tvm}.
Thus pushing the envelope of CNNs is still important and valuable. Recent works have improved CNNs from multiple perspectives. A straightforward approach is to take the benefits of both CNNs and transformers by mixing their building blocks~\cite{graham2021levit,srinivas2021bottleneck,mehta2021mobilevit,chen2022mobile,li2022efficientformer}. While bringing together merits from both parties, these approaches still keep the ViT blocks and thus retain the quadratic complexity problem. Another line of research is to design purely convolutional architectures. For example, with larger convolution kernels, ConvNeXt~\cite{liu2022convnet}, RepLKNet~\cite{ding2022scaling}, and ParCNetV1~\cite{zhang2022parc} successfully improved the performance of CNNs by encoding broader spatial contexts.
Specifically, ParCNetV1 introduced \textbf{p}osition \textbf{a}ware ci\textbf{r}cular \textbf{c}onvolutions (ParC) to neural networks. It uses depth-wise circular 1D convolutions of input feature map size ($C \times H \times 1$ and $C \times 1 \times W$) to achieve global receptive fields. To avoid spatial over-smoothing caused by global kernels, ParCNetV1 augmented the feature input with absolute position encoding to ensure the feature output is still location sensitive. ParCNetV1 also brought attention mechanisms into the framework by adopting squeeze-and-excitation operations. These modifications lead to the superior performance of ParCNetV1, especially on mobile devices.
Despite the improved model efficiency and accuracy, ParCNetV1 still suffers from some design drawbacks. Firstly, as mentioned in~\cite{zhang2022parc} and shown in Fig~\ref{fig:circular_oversize}, the circular padding introduced spatial distortion by performing convolutions crossing image borders. Secondly, the attention design is relatively weak compared with transformers which may limit the framework performance. Thirdly, it is not feasible to apply global convolution to all blocks in CNNs, especially those shallow blocks due to expensive computational costs and over-smoothing effects.
To address these issues, we propose a pure convolutional neural network architecture called ParCNetV2. It is composed of three essential improvements over ParCNetV1.
We push the kernel size to the extreme by doubling the size of the circular convolution kernel, and remove the absolute positional encoding.
As shown in Figure~\ref{fig:circular_oversize}, through large (equal to the size of the input) padding, the convolution operation avoids feature distortion around image borders. A welcome feature of this design is that the oversized kernel implicitly encodes spatial locations when it convolves with the feature maps using constant paddings~\cite{kayhan2020translation}. This enables us to discard the positional encoding module without hurting the network performance. We explain why $2\times$ is the extreme in Sec.~\ref{subsec:oversized}.
The original ParC block uses a limited attention mechanism inserted at the end of the channel mixing phase. We propose to use the more flexible bifurcate gate unit (BGU)
at both the token mixing phase (spatial BGU) and the channel mixing phase (channel BGU) in our newly designed block. Compared to the squeeze-and-excitation block, the BGU is stronger while being more compact and general enough to combine with various structures, including spatial attention and channel attention. The enhanced attention mechanism also simplifies our ParC V2 block, as both phases of the new block adopt the BGU structure.
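This excerpt describes the BGU only at a high level. The sketch below is our purely hypothetical reading of a gate unit in this spirit: the input is projected, split into a value branch and a gate branch, and recombined by elementwise multiplication. The function name, shapes, and the sigmoid gate are our assumptions, not ParCNetV2's actual design.

```python
# Hypothetical bifurcate-gate-style unit: project, split into two branches,
# and gate one branch with the other (per-token, per-channel attention).
import numpy as np

rng = np.random.default_rng(0)

def bifurcate_gate(x, w):
    """x: (N, C) tokens; w: (C, 2C) projection. Shapes are illustrative."""
    v, g = np.split(x @ w, 2, axis=-1)   # value branch and gate branch
    gate = 1.0 / (1.0 + np.exp(-g))      # sigmoid gate as attention weights
    return v * gate                       # elementwise gating

x = rng.standard_normal((4, 8))
w = rng.standard_normal((8, 16))
y = bifurcate_gate(x, w)
assert y.shape == x.shape                 # the unit preserves token shape
```

Unlike squeeze-and-excitation, which produces one channel weight vector shared by all spatial positions, a gate of this form gives every token its own channel mask.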
In contrast to ParCNetV1, which applies large kernel convolutions only in later-stage CNN blocks, we unify the block design by mixing large kernel convolutions with local depth-wise convolutions in all blocks. Both types of convolutions operate on a fraction of the input feature map channels. The exact portion is determined by the block's depth in the model, following the general rule that the share of oversized kernel convolutions grows as the block gets deeper. This progressive design combines local and global convolutions in one convolution step, unlike many other works that stack the two sequentially~\cite{graham2021levit,xiao2021early} or as two separate branches~\cite{chen2022mobile,mehta2021mobilevit,dai2021coatnet,zhang2022parc}. To this end, the redesigned ParC V2 structure is capable of performing local convolutions, global convolutions, token-channel mixing, and BGU-based attention all in one block.
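The text gives only a qualitative rule for the local-global channel split (deeper blocks use a larger oversized-kernel share). The linear schedule below is purely our hypothetical illustration; ParCNetV2's actual per-block ratios are not specified in this excerpt.

```python
# Hypothetical depth-dependent channel split between local depth-wise
# convolution and oversized (global) convolution within one block.
def split_channels(channels, block_index, num_blocks):
    """Return (local, global) channel counts for a block at a given depth."""
    global_ch = round(channels * (block_index + 1) / num_blocks)
    return channels - global_ch, global_ch

ratios = [split_channels(64, i, 8) for i in range(8)]
assert all(local + glob == 64 for local, glob in ratios)
assert ratios[0][1] < ratios[-1][1]   # global share grows with depth
```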
To summarize, the main contributions of this paper are as follows:
\iffalse
The ParCNet is one of the recent works that follow these three rules to improve conventional CNNs. Specifically, it introduces global circular convolution, which uses one-dimensional kernels of feature map sizes ($H \times 1 \times C$ or $1 \times W \times C$) to enlarge the perception field and uses squeeze-and-excitation to mix the channels and as an attention mechanism under CNNs architecture space.
In this paper, we introduce an updated pure convolution-based ParC Block that further enhances long-range interactions and channel mixing with attention mechanisms. For the long-range interactions, we propose to further double the size of 1D kernels to $2\times$ the input feature maps size, and remove the position encoding as we find that extra constant value paddings provide an implicit way to encode the relative locations of features and kernels. This together with the conventional $3 \times 3$ depth-wise convolution constitutes the core operations for modeling spatial relations of input features. For the channel mixing-based attention mechanism, we propose to use bifurcate gate units to replace squeeze-and-excitations, which enables each token to have its channel mask for more flexible attention.
Apart from stronger model capacity, the updated ParC V2 block is also more extensible as it can be used in early stages of CNNs and larger CNN models. This is because we choose to use Fast Fourier transform (FFT) to implement the circular convolution. On large feature maps, the FFT acceleration makes the computational cost for the ParC V2 block comparable with its 2D convolution counterparts.
\fi
\begin{itemize}
\item We propose oversized convolutions for the effective modeling of long-range feature interactions in CNNs. Compared to ParCNetV1, it enables homogeneous convolution across all spatial locations, while removes the need for extra position encoding.
\item We propose two bifurcate gate units (spatial BGU and channel BGU), which are compact and powerful attention modules. They boost the performance of ParCNet V2 and could be easily integrated into other network structures.
\item We bring oversized convolution to shallow layers of CNNs and unify the local-global convolution design across blocks.
\end{itemize}
Extensive experiments demonstrate that ParCNet V2 outperforms all other CNNs under similar parameter and computation budgets. It also beats state-of-the-art ViTs and CNN-ViT hybrids.
\section{Related Works}
\textbf{Convolution Networks.} Before transformers were introduced to vision tasks, convolutional neural networks had dominated vision architectures in a variety of computer vision tasks, such as image classification~\cite{krizhevsky2017imagenet,simonyan2014very,he2016deep}, object detection~\cite{ren2015faster,redmon2016you}, and semantic segmentation~\cite{chen2017deeplab,ronneberger2015u}. ResNet~\cite{he2016deep} introduced residual connections to eliminate network degradation, enabling very deep convolutional networks. It has been a strong baseline in various vision tasks. MobileNets~\cite{howard2017mobilenets,sandler2018mobilenetV2,howard2019searching} introduced depth separable convolution and ShuffleNets~\cite{zhang2018shufflenet,ma2018shufflenet} proposed group point-wise convolution with channel shuffling, both aimed to build light-weight models with small memory and computation footprint. After the appearance of vision transformers, researchers improved pure convolution networks with ideas from transformers. RepLKNet~\cite{ding2022scaling} increased kernel size to as large as $31\times 31$, which can extract long-range dependencies in contrast to commonly used $3\times 3$ kernels. ConvNeXt~\cite{liu2022convnet} reviewed the design of the vision transformers and gradually modernized a standard ResNet toward a transformer. They built a pure CNN model that competes favorably with the ViTs while maintaining the simplicity and efficiency of standard CNNs. ParCNet~\cite{zhang2022parc} proposed a pure convolution network with position-aware circular convolution, which achieved better performance than popular light-weight CNNs and vision transformers.
\textbf{Vision Transformers.} Dosovitskiy~\etal introduced the transformer model into vision tasks and proposed ViT~\cite{dosovitskiy2020image}. It cropped images into $16\times 16$ patches as input tokens to the transformer and used positional encoding to learn spatial information. However, the vanilla ViT was hard to train and huge datasets are required such as JFT-300M~\cite{sun2017revisiting}. DeiT~\cite{touvron2021training} exploited knowledge distillation to train ViT models and achieved competitive accuracy with less pretraining data. To further enhance the model architecture, some researchers attempted to optimize ViTs with ideas from CNNs. T2T-ViT~\cite{yuan2021tokens} introduced a token-to-token process to progressively tokenize images to tokens and structurally aggregate tokens. PVT~\cite{wang2021pyramid} inserted convolution into each stage of ViT to reduce the number of tokens and build hierarchical multi-stage structures. Swin transformer~\cite{liu2021swin} computed self-attention among shifted local windows, which has become the new baseline of many vision tasks. PiT~\cite{heo2021rethinking} jointly used pooling layers and depth-wise convolution layers to achieve channel multiplication and spatial reduction. Yu~\etal~\cite{yu2022metaformer} pointed out that the general architecture of the transformers is more essential to the model's performance instead of the specific token mixer module. They initiated the concept of MetaFormer which is compatible with using convolutions, self-attention, and even pooling as the token mixer.
\textbf{Hybrid Convolution Networks and Vision Transformers.} In addition to ViTs, another popular line of research is to combine elements of ViTs and CNNs to absorb the strengths of both architectures. LeViT~\cite{graham2021levit} proposed a hybrid neural network for fast inference and significantly outperformed existing CNNs and ViTs concerning the speed/accuracy trade-off. BoTNet~\cite{srinivas2021bottleneck} replaces the standard convolutions with multi-head attention in the final three bottleneck blocks of ResNet. CvT~\cite{wu2021cvt} introduced depth-wise and point-wise convolution in front of the self-attention unit, which introduced shift, scale, and distortion invariance while maintaining the merits of transformers. Some other works focused on improving efficiency with hybrid models. CMT~\cite{guo2022cmt} combined a convolutional inverted feed-forward network with a lightweight multi-head self-attention way and took advantage of transformers to capture long-range dependencies and CNN to model local features. MobileViT~\cite{mehta2021mobilevit} proposed a lightweight model and a fast training strategy for mobile devices. Mobile-Former~\cite{chen2022mobile} adopted a parallel structure to combine convolution blocks and attention blocks. EfficientFormer~\cite{li2022efficientformer} revisited the inefficient designs in transformers and introduced a dimension-consistent design with latency-driven slimming.
Although many works have successfully combined transformers and CNNs for vision tasks, they are not as focused as our work on the systematic design of a global receptive field, an advanced attention mechanism, and a unified local-global balance across the whole network. We present a newly evolved version of these designs and demonstrate the potential of pure CNNs compared with transformers and hybrid architectures.
\begin{figure*}
\centering
\includegraphics[width=0.80\linewidth]{images/parc_evolution.pdf}
\caption{\textbf{The transitions from the original ParC V1 to the ParC V2 block.} Compared with ParCNetV1, we first introduce oversized convolutions to further enhance capacity while simplifying the architecture; then we design the bifurcate gate unit to improve efficiency and strengthen attention; finally we propose a uniform local-global block and construct the whole network from this uniform block.
Different colors indicate different functions, \emph{i.e.}, red, blue and yellow represent spatial interaction, channel mixing and attention mechanism, respectively.}
\label{fig:evolution}
\end{figure*}
\section{Methods}
An overview of the ParCNet V2 architecture is presented in Fig.~\ref{fig:evolution} (d). Compared with the original ParCNet, we first substitute the position-aware circular convolution with an oversized convolution that encodes long-range dependencies along with position information (Fig.~\ref{fig:evolution} (b)). Then we introduce bifurcate gate units as a stronger attention mechanism (Fig.~\ref{fig:evolution} (c)). Finally, we propose a uniform block that balances local and global convolutions to build the full ParCNetV2 (Fig.~\ref{fig:evolution} (d)). The following sections describe these components in detail.
\subsection{Oversized convolution}
\label{subsec:oversized}
To increase the model capacity and encode long-range spatial context, we exploit an oversized depth-wise convolution with kernel size about twice the input feature size. As shown in Fig.~\ref{fig:evolution} (b), our module contains two types of convolutions: an oversized convolution in the vertical direction (ParC-O-H) and one in the horizontal direction (ParC-O-W). Here, zero padding is used to provide absolute position information and to prepare for acceleration, which will be explained later in this section.
Formally, let the input feature map be denoted as $X\in \mathcal{R}^{C\times H \times W}$, where $C$ denotes the number of channels, and $H$ and $W$ denote the height and width of the input feature map, respectively. The kernels of the vertical and horizontal oversized convolutions are $k^{h}\in\mathcal{R}^{C\times (2H - 1)\times 1}$ and $k^{w}\in\mathcal{R}^{C\times 1\times (2W - 1)}$, respectively. To simplify the formulation, we denote the indices of the kernels as
\begin{align*}
& \{-(H-1), -(H-2), \cdots, 0, \cdots, (H-1)\}, \\
& \{-(W-1), -(W-2), \cdots, 0, \cdots, (W-1)\},
\end{align*}
and index $0$ is the center of the kernels. We use this kernel size because it naturally covers every global position while keeping the output size equal to the input size without any post-processing: larger kernels would require post-processing to adjust the output size, while smaller kernels cannot simultaneously preserve position cues and provide a global receptive field.
Let $Y$ denote the output of the first convolution and $Z$ the output of the second; the outputs of the oversized convolutions at location $(i, j)$ are computed as:
\begin{align}
Y_{i,j} & = \sum_{s=-(H - 1)}^{H - 1}\Tilde{k}^{h}_s X_{i+s, j},\label{eq:parc_h} \\
Z_{i,j} & = \sum_{t=-(W - 1)}^{W - 1}\Tilde{k}^{w}_t Y_{i, j+t},\label{eq:parc_w}
\end{align}
where Eq.~\eqref{eq:parc_h} denotes ParC-O-H and Eq.~\eqref{eq:parc_w} denotes ParC-O-W.
Zero-padding means that $X_{i, j} = 0$ and $Y_{i, j} = 0$ if $i\notin [0, H - 1]$ or $j\notin[0, W - 1]$. Swapping the order of the vertical and horizontal convolutions does not affect the model output. The oversized convolutions bring two advantages:
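Eqs.~\eqref{eq:parc_h} and \eqref{eq:parc_w} can be checked with a direct, unvectorized NumPy sketch of the vertical pass; for brevity the channel dimension is dropped, and this is an illustration rather than the paper's actual implementation:

```python
import numpy as np

def parc_o_h(x, kh):
    """Vertical oversized convolution (ParC-O-H), single channel.

    x:  (H, W) feature map; kh: length 2H-1 kernel whose centre
    entry kh[H-1] corresponds to kernel index s = 0.
    """
    H, W = x.shape
    y = np.zeros_like(x)
    for i in range(H):
        for j in range(W):
            for s in range(-(H - 1), H):
                ii = i + s
                if 0 <= ii < H:  # zero padding: x is 0 outside [0, H-1]
                    y[i, j] += kh[s + H - 1] * x[ii, j]
    return y

x = np.random.randn(4, 3)
y = parc_o_h(x, np.random.randn(2 * 4 - 1))  # 2H - 1 = 7 taps
assert y.shape == x.shape                    # output resolution preserved
```

Because every output row $i$ reads a different window of the length-$2H-1$ kernel, each spatial position is weighted by its own slice of parameters, which is what embeds the position information discussed below.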
\begin{figure}
\centering
\includegraphics[width=0.96\linewidth]{images/oversize_conv.pdf}
\caption{\textbf{Illustration of the oversized convolution.} We use kernels that are almost twice the size of input feature maps, and pad with zeros to keep the resolution.}
\label{fig:oversize_conv}
\end{figure}
\textbf{Covering absolute position information.} Zero-padding directly embeds absolute position information into each location. As shown in Fig.~\ref{fig:oversize_conv}, each output position is produced by a different slice of the kernel parameters sliding over the input features, so position information is embedded in the model weights, similar to relative position embeddings~\cite{shaw2018self}. As a result, explicit position embeddings are no longer required and can be abandoned, making the network more concise.
\textbf{Improving capacity with limited computational complexity.} The kernel size is increased to about twice the original size, so the capacity of the model is significantly enhanced. The oversized kernel is, however, more computationally intensive; we therefore seamlessly use Fast Fourier Transform (FFT) acceleration to reduce the FLOPs. According to the convolution theorem, for discrete signals, element-wise multiplication in the Fourier domain is equivalent to convolution in the spatial domain. As shown in Fig.~\ref{fig:parc_branch}, we first pad the input to the target size, then follow a pipeline of FFT, element-wise multiplication, and inverse FFT. Since the multiplication complexity of a 1D FFT of length $N$ is $\mathcal{O}(N\log N)$, the FFT version of the oversized convolution is more efficient than the spatial version, whose complexity is $\mathcal{O}(N^2)$. For example, when the input size is $H\times W$, the spatial implementation has a complexity of $\mathcal{O}(CHW(H+W))$, while FFT acceleration runs with a lower complexity of $\mathcal{O}(CHW(\log H+\log W))$.
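The convolution-theorem equivalence behind the FFT acceleration can be verified numerically. This 1D sketch zero-pads a signal and an oversized kernel to the full linear-convolution length, so the circular convolution implied by the DFT matches the spatial result:

```python
import numpy as np

H = 8
x = np.random.randn(H)                # one column of the feature map
k = np.random.randn(2 * H - 1)        # oversized 1D kernel

# Spatial-domain linear convolution.
direct = np.convolve(x, k)            # length H + (2H-1) - 1 = 3H - 2

# Fourier-domain version: zero-pad both signals to the full linear-
# convolution length so the DFT's circular convolution equals it.
n = len(x) + len(k) - 1
fft_res = np.fft.irfft(np.fft.rfft(x, n) * np.fft.rfft(k, n), n)

assert np.allclose(direct, fft_res)
```

In the actual module the 'same'-size output of Eqs.~(1)--(2) corresponds to taking the centre slice of this full convolution, which does not affect the equivalence.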
To deal with input images of different resolutions, each convolution kernel is first resized with linear interpolation to $C\times (2H - 1)\times 1$ and $C\times 1 \times (2W - 1)$. This keeps the model's global receptive field at any input size and lets it learn scale-invariant features. Thus we obtain better mIoU on semantic segmentation tasks without multi-scale testing. Details are discussed in Sec.~\ref{subsec:semantic}.
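A minimal sketch of this kernel-resizing step, using NumPy's `np.interp` as a stand-in for whatever interpolation routine the actual implementation uses:

```python
import numpy as np

def resize_kernel(k, new_len):
    # Linearly interpolate a trained 1D oversized kernel to the
    # length required by a new input resolution.
    old = np.linspace(0.0, 1.0, num=len(k))
    new = np.linspace(0.0, 1.0, num=new_len)
    return np.interp(new, old, k)

k = np.random.randn(2 * 56 - 1)        # kernel trained at H = 56
k_big = resize_kernel(k, 2 * 64 - 1)   # evaluated at H = 64
assert k_big.shape == (2 * 64 - 1,)
```

Linear interpolation preserves the kernel endpoints exactly, so the global extent of the kernel always matches $2H-1$ for the new resolution.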
\subsection{Bifurcate Gate Unit}
To make the model as data-driven as ViTs, ParCNet V1 adopted an extra attention branch, which was demonstrated to be key to boosting performance on various tasks. In this work, we redesign the attention mechanism with two major improvements: strengthened attention and better computational efficiency. Specifically, inspired by the gated linear unit (GLU)~\cite{dauphin2017language}, which improves the MLP by stacking a gating component, we propose the Bifurcate Gate Unit (BGU). Unlike GLU, which applies the gate to two homologous features, the proposed BGU applies the gate to features from two separate branches: one serves as attention and the other as features. BGU inherits the high computational efficiency of GLU and accomplishes attention and feature extraction in a single unit. Furthermore, we extend the BGU design to both spatial and channel interaction modules. As shown in Fig.~\ref{fig:bgu}, the proposed BGU is a general module that splits the computation into two branches. One branch applies a point-wise convolution and a GELU activation to produce attention weights. The other transforms the features depending on the purpose of the module, \emph{i.e.}, a ParC branch to extract spatial information in the spatial BGU, and a point-wise convolution to perform channel mixing in the channel BGU. The outputs of the two branches are fused by element-wise multiplication followed by an additional point-wise convolution. Our experiments show that the gating operation significantly increases model capacity and boosts performance.
\begin{figure}
\centering
\includegraphics[width=0.96\linewidth]{images/bgu.pdf}
\caption{\textbf{Illustration of the Bifurcate Gate Unit (BGU).} We propose a general BGU that can easily be integrated into various network structures. For the Spatial BGU, we insert our ParC branch and a point-wise convolution to extract spatial features, while in the Channel BGU we simply adopt a point-wise convolution for channel mixing.}
\label{fig:bgu}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.96\linewidth]{images/parc_branch.pdf}
\caption{\textbf{Illustration of the ParC branch.} A ParC branch consists of a depth-wise convolution, a local-oversized convolution with split ratio $r$, and a point-wise convolution. Fast Fourier Transform (FFT) can accelerate the oversized convolution during inference.}
\label{fig:parc_branch}
\end{figure}
\textbf{Spatial BGU.} In the spatial BGU, we aim to extract representative spatial information covering both local and global dependencies. We adopt the ParC branch as the feature-transform branch, which consists of a standard depth-wise convolution, local and oversized separable convolutions, and a point-wise convolution; it is described in detail in Sec.~\ref{subsec:uniform}. Formally, our spatial BGU is defined as:
\begin{align*}
& X_1 = \operatorname{ParC}(X), \\
& X_2 = \operatorname{GELU}(\operatorname{PWConv_1}(X)), \\
& \operatorname{SpatialBGU}(X) = \operatorname{PWConv_2}(X_1\odot X_2).
\end{align*}
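As a concrete illustration of this gating pattern, the following NumPy sketch flattens spatial positions into tokens and substitutes an identity for the ParC branch; both are simplifications for exposition, not the actual implementation:

```python
import numpy as np

def gelu(x):
    # tanh approximation of GELU
    return 0.5 * x * (1.0 + np.tanh(np.sqrt(2.0 / np.pi)
                                    * (x + 0.044715 * x ** 3)))

def spatial_bgu(X, W1, W2, parc=lambda x: x):
    # X: (N, C) flattened tokens.  Branch 1 transforms features
    # (identity stands in for the ParC branch); branch 2 produces the
    # gate via a point-wise conv (per-token matmul) and GELU.
    X1 = parc(X)
    X2 = gelu(X @ W1)
    return (X1 * X2) @ W2            # fuse, then final point-wise conv

N, C = 16, 8
X = np.random.randn(N, C)
Y = spatial_bgu(X, np.random.randn(C, C), np.random.randn(C, C))
assert Y.shape == (N, C)
```

The element-wise product makes the gate branch act as a per-position, per-channel attention map over the feature branch, which is the data-driven behaviour the BGU is designed to provide.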
\textbf{Channel BGU.} For the channel mixing module, the original feed-forward network (FFN) of common transformers usually contains two point-wise convolutions separated by a GELU activation. The first layer expands the number of channels by a factor of $\alpha$, and the second layer shrinks the dimension back to the original:
\begin{equation*}
\operatorname{FFN}(X) = \operatorname{GELU}(XW_1+b_1)W_2+b_2,
\end{equation*}
where $W_1\in\mathbf{R}^{C\times \alpha C}$ and $W_2\in\mathbf{R}^{\alpha C\times C}$ denote the weights of the two point-wise convolutions, and $b_1$ and $b_2$ are the bias terms. $\alpha=4$ is common practice in most transformers. Similarly, in our channel BGU, we set an expansion ratio $\Tilde{\alpha}$ for each branch:
\begin{align*}
& X_1 = X\widetilde{W_1}+\Tilde{b_1}, \\
& X_2 = \operatorname{GELU}(X\widetilde{W_2}+\Tilde{b_2}), \\
& \operatorname{ChannelBGU}(X) = (X_1\odot X_2)\widetilde{W_3} + \Tilde{b_3},
\end{align*}
where $\widetilde{W_1}, \widetilde{W_2}\in\mathbf{R}^{C\times \Tilde{\alpha}C}$ and $\widetilde{W_3}\in\mathbf{R}^{\Tilde{\alpha}C\times C}$ denote the weights of the point-wise convolutions, and $\Tilde{b_1}$, $\Tilde{b_2}$, $\Tilde{b_3}$ denote the biases.
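The three equations above map directly onto a few matrix products; this NumPy sketch (zero biases, random weights, tokens as rows) is an illustration under those simplifying assumptions:

```python
import numpy as np

def gelu(x):
    # tanh approximation of GELU
    return 0.5 * x * (1.0 + np.tanh(np.sqrt(2.0 / np.pi)
                                    * (x + 0.044715 * x ** 3)))

def channel_bgu(X, W1, b1, W2, b2, W3, b3):
    X1 = X @ W1 + b1                 # linear feature branch
    X2 = gelu(X @ W2 + b2)           # gating (attention) branch
    return (X1 * X2) @ W3 + b3       # fuse and project back to C

C, alpha_t = 8, 2.5
Ce = int(alpha_t * C)                # expanded width, alpha~ * C
X = np.random.randn(16, C)
Y = channel_bgu(X,
                np.random.randn(C, Ce), np.zeros(Ce),
                np.random.randn(C, Ce), np.zeros(Ce),
                np.random.randn(Ce, C), np.zeros(C))
assert Y.shape == X.shape
```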
We adjust $\Tilde{\alpha}$ to keep the model size close to the original FFN. The number of parameters in the original FFN is $2\alpha C^2$, while in our BGU-based FFN it is $2\Tilde{\alpha}C^2 + \Tilde{\alpha}C^2=3\Tilde{\alpha}C^2$. To keep the number of parameters almost unchanged, we set $2\alpha C^2=3\Tilde{\alpha}C^2$, which gives
\begin{equation}
\Tilde{\alpha}=2\alpha/3.\label{eq:expansion_ratio}
\end{equation}
The expansion ratio in most existing models is 4, so Eq.~\eqref{eq:expansion_ratio} gives $\Tilde{\alpha}=8/3\approx 2.67$; we round this to $\Tilde{\alpha} = 2.5$ to approximate the original FFN.
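The parameter-count arithmetic can be checked in a few lines (bias terms ignored, as in the text):

```python
import math

C, alpha = 64, 4
ffn = 2 * alpha * C ** 2              # two C <-> alpha*C projections
alpha_tilde = 2 * alpha / 3           # matching condition 2a = 3a~
bgu_exact = 3 * alpha_tilde * C ** 2
assert math.isclose(bgu_exact, ffn)   # exact match at alpha~ = 8/3

# The rounded choice alpha~ = 2.5 leaves the BGU slightly lighter
# than the original FFN:
bgu_used = 3 * 2.5 * C ** 2
assert bgu_used < ffn
```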
\subsection{Uniform local-global convolution}
\label{subsec:uniform}
ParC V1 used two different block types: traditional convolutional MBConv blocks~\cite{howard2019searching} in the shallow layers and ParC operations in the deep layers. We suspect the main reason for this design is that the feature-map resolution is high in the shallow layers, so directly using the global convolution block there would be too computationally expensive. Mixing different blocks within one large network is cumbersome, so we design a unified block composed of both local and global convolutions and use it throughout the entire network. Existing research~\cite{dosovitskiy2020image} shows that, even though transformers are capable of learning global features, they mostly learn local dependencies in the shallow layers. In hybrid networks, using local convolutions in the shallow layers and self-attention in the deep layers has become common practice~\cite{graham2021levit,mehta2021mobilevit}. We therefore let the shallow layers concentrate on local feature extraction and gradually introduce oversized convolutions to capture long-range interactions. As shown in Fig.~\ref{fig:parc_branch}, the local and global convolutions in our uniform block follow the same vertical-horizontal structure; one uses standard local kernels and the other uses oversized kernels. Let $r$ denote the ratio of global convolution channels to all channels. The uniform local-global convolution is defined as:
\begin{align*}
X^* & = \operatorname{DWConv}\left(X\right),
\\
Y_{local} & = \operatorname{Conv-W}\left(\operatorname{Conv-H}\left(X^*_{1:(1-r)C}\right)\right), \\
Y_{global} & = \operatorname{ParC-O-W}\left(\operatorname{ParC-O-H}\left(X^*_{(1-r)C:C}\right)\right), \\
\operatorname{ParC}(X) & = \operatorname{PWConv}\left(\left[Y_{local}, Y_{global}\right]\right).
\end{align*}
We first use a standard depth-wise convolution to extract local cues, then split the channels into two parts for the local separable convolutions and the oversized convolutions. Finally, we adopt a point-wise convolution to add channel mixing to the branch. By increasing $r$ with depth, we build the desired local-global transition across the whole network. The exact formulas defining the ratio are detailed in Sec.~\ref{sec::fullnet}.
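Omitting the depth-wise and point-wise convolutions, the channel split between local and oversized kernels can be sketched as follows; a single shared kernel per branch is used here, a simplification of the per-channel kernels in the actual model:

```python
import numpy as np

def conv1d_same(x, k):
    # 'same'-size correlation along axis 0 with zero padding;
    # the kernel length must be odd so the centre tap aligns.
    pad = len(k) // 2
    xp = np.pad(x, ((pad, pad), (0, 0)))
    return np.stack([np.correlate(xp[:, j], k, mode='valid')
                     for j in range(x.shape[1])], axis=1)

def uniform_local_global(X, r, k_local, k_global):
    # X: (C, H, W); r: fraction of channels given the oversized kernel.
    C, H, W = X.shape
    split = int((1 - r) * C)
    out = []
    for c in range(C):
        k = k_local if c < split else k_global
        y = conv1d_same(X[c], k)       # vertical pass (ParC-H analogue)
        y = conv1d_same(y.T, k).T      # horizontal pass (ParC-W analogue)
        out.append(y)
    return np.stack(out)

X = np.random.randn(8, 6, 6)
k_local = np.random.randn(3)           # standard local kernel
k_global = np.random.randn(2 * 6 - 1)  # oversized kernel, 2H - 1 taps
Y = uniform_local_global(X, r=0.5, k_local=k_local, k_global=k_global)
assert Y.shape == X.shape
```

Raising `r` from 0.25 toward 1 across the stages reproduces the local-to-global transition described above without changing the block's structure.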
\subsection{ParCNetV2}
\label{sec::fullnet}
Based on the modules proposed above, we build ParCNet V2 at two scales: a smaller ParCNetV2-10M to compare with ParCNet V1, and a larger ParCNetV2-26M to compare with modern convolutional and transformer networks. We adopt a hierarchical architecture with four stages, like Swin and ConvNeXt. In ParCNetV2-10M, the numbers of uniform blocks in the four stages are $\{3, 3, 9, 3\}$ and the embedding dimensions are $\{48, 96, 192, 384\}$, while in ParCNetV2-26M the numbers of blocks are $\{3, 3, 12, 3\}$ and the embedding dimensions are $\{64, 128, 320, 512\}$. The expansion ratio $\Tilde{\alpha}$ of the channel BGU is 2.5.
The ratio $r$ of the uniform local-global blocks is adjusted according to the relative depth of the block within the model. For simplicity, we empirically follow a linear schedule and assign $r$ as $\{0.25, 0.5, 0.75, 1\}$ in the four stages. This schedule brings a marginal improvement by itself, while it facilitates the subsequent bifurcate-gate-unit improvements, which provide a large increase in model performance.
\section{Experiments}
In this section, we present quantitative and qualitative experiments to demonstrate the effectiveness of the proposed model. First, we conduct ablation studies on ImageNet-1K~\cite{deng2009imagenet} image classification to show the improvement over ParCNet V1. Then we compare against convolutional neural networks and show that ParCNet V2 performs best among pure convolutional networks. Next, we compare our model with modern backbones, including pure CNNs, transformers, and hybrid networks. Finally, we choose semantic segmentation as a downstream task and conduct experiments on the ADE20K dataset~\cite{zhou2019semantic}. All experiments are implemented in PyTorch~\cite{paszke2019pytorch}.
\begin{table*}
\centering
\small
\setlength{\tabcolsep}{10pt}
\begin{tabular}{cl|ccc|cc}
\toprule
Row & Backbone & Oversized conv & BGU & Uniform local-global & Param(M) & Top-1(\%) \\
\hline
1 & ParCNet V1 (baseline) & - & - & - & 8.6 & 69.8 \\
2 & ParCNetV2-10M & \checkmark & \checkmark & \checkmark & 9.7 & \textbf{73.7} \\
\hline
3 & ParCNetV2-10M & & \checkmark & \checkmark & 9.6 & 72.7 \\
4 & ParCNetV2-10M & \checkmark & & \checkmark & 9.7 & 72.8 \\
5 & ParCNetV2-10M & \checkmark & \checkmark & & 9.6 & 73.5 \\
\bottomrule
\end{tabular}
\caption{\textbf{Ablation study of each component on the ImageNet-1K classification task.} We use smaller ParCNetV2-10M in ablation and all experiments are trained with 100 epochs for fast evaluation. Top-1 accuracy on the validation set is reported.}
\label{tab:ablation_components}
\end{table*}
\subsection{Ablation Study and Comparison with ParCNet}
In this section, we conduct an ablation study on ImageNet-1K classification to show that each component of ParCNet V2 is critical. To speed up the experiments and to compare with ParCNet V1, we use the smaller ParCNetV2-10M. Following the practice of ablation studies in~\cite{graham2021levit,ding2022scaling}, all experiments are trained on ImageNet-1K for 100 epochs; other settings are the same as the image classification experiments in Sec.~\ref{subsec:classification}.
Experimental results are shown in Tab.~\ref{tab:ablation_components}. Comparing Rows 1 and 2, ParCNet V2 surpasses ParCNet V1 by a large margin of 3.9\% (73.7\% \emph{vs.} 69.8\%) with only a small growth in parameter count. We also ablate each component of ParCNet V2 to show that all of them are critical for performance.
\textbf{Oversized convolution.} Oversized convolution increases model capacity and encodes position information. Without it, the model loses not only capacity and position information but also the ability to learn long-range dependencies. Comparing Rows 2 and 3, top-1 accuracy without oversized convolution drops by 1.0\% (73.7\% \emph{vs.} 72.7\%), demonstrating that long-range dependencies are important to the network.
\textbf{Bifurcate gate units.} The bifurcate gate unit is an important mechanism that introduces data-driven operations into ParCNet V2; it increases non-linearity and enhances fitting ability. Removing BGU causes a large decline of 0.9\% (73.7\% \emph{vs.} 72.8\%), as shown in Rows 2 and 4. BGU plays a role similar to the data-driven squeeze-and-excitation block in ParC V1, but differs in two points. First, BGU does not increase the parameter count: with $\Tilde{\alpha}=2.5$, our channel BGU is even more lightweight than the original FFN. Second, the two branches in our BGU are balanced, sharing a similar number of parameters and computational cost, unlike the heavy main branch and lightweight channel attention in most methods.
\textbf{Uniform local-global convolution block.} The uniform local-global convolution block unifies the blocks used in different stages. We empirically assign a linear schedule of the global ratio over the stages, which brings a marginal performance gain of 0.2\% (73.7\% \emph{vs.} 73.5\%), as shown in Rows 2 and 5. This result supports the view that vision networks extract more local features in the shallow layers and more global features in the deep layers. The model may improve further by tuning or even learning the local-global ratio; we leave this as future work.
\subsection{Performance Comparison with CNNs}
\label{subsec:pure_conv}
We conduct the image classification task on ImageNet-1K~\cite{deng2009imagenet}, the most widely used benchmark for this task. We train the proposed ParCNet V2 models on the ImageNet-1K training set and report top-1 accuracy on the validation set. We adopt random cropping, random horizontal flipping, label smoothing~\cite{muller2019does}, mixup~\cite{zhang2017mixup}, cutmix~\cite{yun2019cutmix}, and random erasing~\cite{zhong2020random} to augment the training data. We train each model for 300 epochs using the AdamW~\cite{kingma2014adam,loshchilov2017decoupled} optimizer with momentum $0.9$, weight decay $5\times10^{-2}$, and batch size $1024$. A cosine schedule~\cite{loshchilov2016sgdr} and a warm-up strategy are employed to adjust the learning rate, with the initial learning rate set to $5\times 10^{-4}$. We adopt a variant of LayerScale~\cite{touvron2021going} in the attention layer, similar to~\cite{guo2022visual}, and apply an exponential moving average~\cite{polyak1992acceleration} to improve training.
The comparison with pure convolutional networks on image classification is listed in Tab.~\ref{tab:pure_conv}. ParCNetV2-26M clearly outperforms the other convolutional networks by a large margin, including variants of ResNet (ResNet50~\cite{he2016deep}, ResNeXt50-32x4d~\cite{xie2017aggregated}, ResNeSt50~\cite{zhang2022resnest}), a NAS architecture (RegNetY-4G~\cite{radosavovic2020designing}), ConvNeXt~\cite{liu2022convnet}, and a MetaFormer architecture (PoolFormer-S24~\cite{yu2022metaformer}). In particular, our model surpasses ParCNetV1-27M~\cite{zhang2022parc}, which indicates that pushing further in the direction of larger convolutions and stronger attention mechanisms pays off. In addition, ParCNetV2-26M performs better than well-known CNNs of roughly twice its size, such as ResNet101, ResNeXt101-64x4d, and RegNetY-8G. These results demonstrate that the ParCNetV2 design is an effective and promising paradigm.
\begin{table}
\centering
\setlength{\tabcolsep}{6pt}
\small
\begin{tabular}{l|cccc}
\toprule
Models & Param(M) & FLOPs(G) & Top-1(\%) \\
\hline
ResNet50 & 22.6 & 4.1 & 76.8 \\
ResNet101 & 44.6 & 7.9 & 80.8 \\
ResNeXt50-32x4d & 25.0 & 4.2 & 78.8 \\
ResNeXt101-64x4d & 84 & 32 & 80.9 \\
RegNetY-4G & 21 & 4.0 & 81.3 \\
RegNetY-8G & 44.2 & 8.0 & 81.7 \\
ResNeSt50 & 27.5 & 5.4 & 81.1 \\
ConvNeXt-T & 28.6 & 4.5 & 82.1 \\
PoolFormer-S24 & 21.4 & 3.6 & 80.3 \\
ParCNetV1-27M & 26.6 & 4.5 & 82.1 \\
ParCNetV2-26M & 25.6 & 4.8 & \textbf{82.9} \\
\bottomrule
\end{tabular}
\caption{\textbf{Comparison with modern convolution networks on image classification.} All experiments are trained on ImageNet-1K dataset with 300 epochs (fully trained). Top-1 accuracy on the validation set is reported. \textbf{ParCNetV1-27M}: ParCNetV1 with bigger backbone.}
\label{tab:pure_conv}
\end{table}
\begin{table*}
\centering
\setlength{\tabcolsep}{16pt}
\small
\begin{tabular}{l|l|l|cccc}
\toprule
Frameworks & Models & Date & Param(M) & FLOPs(G) & Top-1(\%) \\
\hline
\multirow{5}{*}{ViTs} & DeIT-S & ICML 2021 & 22 & 4.6 & 79.9 \\
& T2T-ViT-14 & ICCV 2021 & 21.5 & 4.8 & 81.5 \\
& T2T-ViT-19 & ICCV 2021 & 39 & 8.5 & 81.9 \\
& Swin-T & ICCV 2021 & 28.3 & 4.5 & 81.3 \\
& MViTV2-T & CVPR 2022 & 24 & 4.7 & 82.3 \\
\hline
\multirow{9}{*}{Hybrid} & ConViT-S & ICML 2021 & 27 & 5.4 & 81.3 \\
& CoaT-Lite Small & ICCV 2021 & 20 & 4.0 & 81.9 \\
& LeViT-384 & ICCV 2021 & 39.1 & 2.3 & 82.6 \\
& Twins-SVT-S & NeurIPS 2021 & 24.0 & 2.9 & 81.7 \\
& LIT & AAAI 2022 & 27 & 4.1 & 81.5 \\
& MobileViTV2 & Arxiv 22.06 & 18.5 & 7.5 & 81.2 \\
& CvT-13 & ICCV 2021 & 20.1 & 4.5 & 81.6 \\
& EfficientFormer-L3 & NeurIPS 2022 & 31.3 & 3.9 & 82.4 \\
& Next-ViT-S & Arxiv 22.08 & 31.7 & 5.8 & 82.5 \\
\hline
\multirow{4}{*}{CNNs} & RegNetY-4G & CVPR 2020 & 20.6 & 4.0 & 81.3 \\
& ConvNeXt-T & CVPR 2022 & 28.6 & 4.5 & 82.1 \\
& ParCNetV1-27M & ECCV 2022 & 26.6 & 4.5 & 82.1 \\
& ParCNetV2-26M & - & 25.6 & 4.8 & \textbf{82.9} \\
\bottomrule
\end{tabular}
\caption{\textbf{Comparison with state-of-the-art convolutional and transformer networks on the ImageNet-1K classification dataset.} Top-1 accuracy on the validation set is reported. \textbf{ParCNetV1-27M}: ParCNetV1 with a bigger backbone.}
\label{tab:classification}
\end{table*}
\subsection{Performance Comparison with CNNs and ViTs}
\label{subsec:classification}
As shown in Tab.~\ref{tab:classification}, ParCNetV2-26M achieves the best performance among the latest state-of-the-art methods, including CNNs, ViTs, and hybrid networks. Compared with well-known transformers such as Swin-T~\cite{liu2021swin}, ParCNetV2-26M improves accuracy by a large margin of 1.6\% with comparable parameters and computational cost. This demonstrates that our pure convolutional model exploits the design concepts of transformers more efficiently. Compared with hybrid models, ParCNetV2-26M outperforms the latest EfficientFormer-L3~\cite{li2022efficientformer} and Next-ViT~\cite{li2022next} with far fewer parameters. Combined with the analysis of pure convolutional networks in Sec.~\ref{subsec:pure_conv}, our proposed model achieves the highest classification accuracy with comparable parameters and computational cost across various kinds of architectures.
\subsection{ParC V2 Performance on Downstream Tasks}
\label{subsec:semantic}
\begin{table}
\centering
\setlength{\tabcolsep}{8pt}
\small
\begin{tabular}{l|cccc}
\toprule
backbone & Param(M) & FLOPs(G) & mIoU(\%) \\
\hline
DeiT-S$^\ddagger$ & 52 & 1099 & 44.0 \\
Swin$^\dagger$ & 60 & 945 & 46.1 \\
Twins-SVT-S$^\dagger$ & 54 & 901 & 47.1 \\
ResNet101$^\dagger$ & 96 & 1029 & 44.9 \\
ResNeSt101 & 89 & 1074 & 44.2 \\
ConvNeXt$^\dagger$ & 60 & 939 & 46.7 \\
\hline
ParCNetV1-27M & 56 & 936 & 46.7 \\
ParCNetV2-26M & 55 & 946 & \textbf{48.2} \\
\bottomrule
\end{tabular}
\caption{\textbf{Comparisons on ADE20K.} All models use UperNet as a basic framework and are trained with 160K iterations. $^\dagger$ indicates multi-scale testing and the others are evaluated with single-scale testing. $^\ddagger$ indicates additional deconvolution layers are used to produce hierarchical feature maps. FLOPs are measured with the input size of (2048, 512).}
\label{tab:segmentation}
\end{table}
To evaluate the transfer ability of ParC V2, we conduct experiments on the semantic segmentation task with the ADE20K~\cite{zhou2019semantic} dataset. We train ParCNetV2-26M on the training set and report mIoU on the validation set with both single-scale and multi-scale testing. MMSEG~\cite{contributors2020mmsegmentation} is used as the base framework and UperNet~\cite{xiao2018unified} as the segmentation head. All model variants are trained for 160K iterations with a batch size of 16, using weights pre-trained on ImageNet-1K. Other settings follow~\cite{liu2022convnet}.
Tab.~\ref{tab:segmentation} lists the mIoU, model size, and FLOPs for different backbones. Since the oversized convolution kernels are linearly interpolated to twice the input feature size, our model is scale-invariant and does not require multi-scale testing during evaluation, which is much more efficient than methods that do. From the results, ParCNetV2-26M surpasses many transformers with similar computational cost: +4.2\% mIoU over DeiT-S~\cite{touvron2021training} and +2.1\% mIoU over Swin-T~\cite{liu2021swin}. It also beats many convolutional networks: +3.3\% mIoU over ResNet101~\cite{he2016deep}, +4.0\% over ResNeSt101~\cite{zhang2022resnest}, and +1.5\% over ConvNeXt~\cite{liu2022convnet}. In particular, our model is +1.5\% mIoU higher than ParCNetV1-27M~\cite{zhang2022parc}, showing that it is stronger than its predecessor.
\iffalse
\subsection{Inference Acceleration}
\begin{figure}
\centering
\begin{tikzpicture}
\begin{axis}[
width=\linewidth,
title={Inference Latency},
xlabel={kernel size},
ylabel={latency (ms)},
xmin=20, xmax=280,
ymin=-1, ymax=40,
xtick={31,63,127,191,255},
ytick={0,5,10,15,20,25,30,35},
legend pos=north west,
legend cell align={left},
ymajorgrids=true,
grid style=dashed,
]
\addplot+[
color=red,
mark=diamond,
dash pattern=on 4pt off 1pt,
]
coordinates {
(31,0.287)(63,1.35)(127,8.22)(191,25.73)(255,59.84)
};
\addlegendentry{CPU}
\addplot+[
color=red,
mark=square,
]
coordinates {
(31,0.695)(63,0.758)(127,3.67)(191,12.72)(255,20.24)
};
\addlegendentry{CPU w/ FFT}
\addplot+[
color=blue,
mark=diamond,
dash pattern=on 4pt off 1pt,
]
coordinates {
(31,0.135)(63,0.135)(127,0.360)(191,1.07)(255,1.99)
};
\addlegendentry{GPU}
\addplot+[
color=blue,
mark=square,
]
coordinates {
(31,0.361)(63,0.367)(127,0.357)(191,0.462)(255,0.711)
};
\addlegendentry{GPU w/ FFT}
\end{axis}
\end{tikzpicture}
\caption{Inference latency of ParC V2 on spatial domain and accelerated with Fast Fourier Transform (FFT) on CPU and GPU devices. CPU used here is i7 6700, and GPU here is TITAN X.}
\label{fig:latency}
\end{figure}
In Sec.~\ref{subsec:oversized} we proposed a fast FFT-based implementation, which can serve as a more efficient alternative to the spatial-domain implementation when the input resolution is large. We test the inference speed of a single oversized convolution block on an i7-6700 CPU and a TITAN X GPU. The results are shown in Fig.~\ref{fig:latency}. When the kernel size is no larger than $31$, the standard spatial convolution and the FFT implementation cost almost the same inference time on both devices. As the kernel size increases, the latency of the spatial operation grows rapidly, especially on the CPU, while the FFT version runs much faster. For example, with a kernel size of $127\times 127$ (input size $64\times 64$), the FFT version takes only 33\% of the spatial version's inference time on the CPU and 35\% on the GPU. Such oversized kernels are rare in general convolutional neural networks but are critical in our approach. We emphasize that these results depend heavily on hardware optimization, and FFT kernels are usually less optimized than widely used convolutions; in other words, FFT acceleration has even more potential in our proposed method.
\fi
\section{Conclusion}
This paper presents ParCNet V2, a pure convolutional neural network with state-of-the-art performance. It extends position-aware circular convolution with oversized convolutions and strengthens attention through bifurcate gate units. In addition, it utilizes a uniform local-global convolution block to unify the design of early-stage and late-stage convolution blocks. We conduct extensive experiments on image classification and semantic segmentation to show the effectiveness and superiority of the proposed ParCNet V2 architecture.
{\small
\bibliographystyle{ieee_fullname}
\section{Introduction}
Quantum computing provides a paradigm that exploits quantum-mechanical principles, such as superposition and entanglement, to solve computational problems far more efficiently than current or future classical computers~\cite{Shor1994, Grover1997, Nielsen2010, Montanaro2016, Childs2018}. A significant potential application of quantum computing is in finding high quality solutions to combinatorial optimisation problems. This class of problems appears often and across a broad range of contexts, for example, in commercial settings such as vehicle routing, in financial settings such as portfolio optimisation, and even in medical research such as protein folding. Often these problems have solution spaces that grow exponentially with increasing problem size, which makes finding the optimal solutions for large problems classically intractable \cite{intractability}. Quantum computers have the ability to operate on these exponentially large solution spaces in quantum parallel by using superposition of computational basis states, one assigned to each solution, the total number of which grows exponentially in the number of qubits. This ability is what allows optimisation algorithms such as the Grover adaptive search (GAS) \cite{GAS2,GAS,GAS3} to deliver significant speed up in finding optimal solutions within unstructured solution spaces relative to a classical random-search or exhaustive search procedure.
However, due to environmental noise, decoherence, and an insufficient number of qubits for error protection, current and near-term quantum computers cannot produce effective and accurate computation at large circuit depths \cite{Preskill2018, NISQ_limitations}. Since the aforementioned GAS algorithm necessarily requires large circuit depths of $\mathcal{O}(\sqrt{N})$, where $N$ is the size of the solution space, it is not likely to be implementable on near-term quantum devices. Consequently, much of the recent quantum algorithm research and development has been focused on achieving quantum advantages while restricted to small circuit depths. One such restricted circuit depth algorithm, focused specifically on combinatorial optimisation, is the Quantum Approximate Optimisation Algorithm (QAOA) \cite{QAOA, farhi2015quantum, farhi2019, Morales2020, Zhou2020}.
The initial motivation behind the alternating operator ansatz which underpins the QAOA is related to the quantum adiabatic theorem \cite{Adiabatic_theorem} and its use in the quantum adiabatic algorithm (QAA) \cite{Adiabatic_QC}. The quantum adiabatic theorem states that a quantum system will remain in its ground state if its Hamiltonian changes sufficiently slowly with time. The QAA involves preparing a system in the ground state of a known Hamiltonian, then slowly evolving it to the ground state of a Hamiltonian that encodes the cost function of the optimisation problem. The QAOA seeks to approximate this process by instead applying these two Hamiltonians on a quantum circuit in alternating fashion, where the application times are controlled and tuned via a classical optimisation process. Alternating application of the Hamiltonians is in essence an amplitude amplification process~\cite{Hadfield2019}. The classical optimisation process seeks to improve the expectation value of solution quality as measured from the final amplitude amplified state, hence increasing the probability that a measurement of this state produces a high quality solution.
The Quantum Walk Optimisation Algorithm (QWOA) \cite{Marsh2019,marsh2020combinatorial} was developed as a generalisation of this process, where it was recognised that application of the two Hamiltonians was essentially equivalent to a continuous time quantum walk over a connected graph, and a quality-dependent phase shift applied to each solution state on the graph. This theoretical framework has proven extremely useful in subsequent research and indeed in the research presented in this paper. For example, the QWOA has been shown to provide a significant improvement in performance over the QAOA for a portfolio optimisation problem when restricting the quantum walk mixing process to just a subset of valid solutions \cite{portfolio}.
Despite its recent popularity, we identify three primary weaknesses of the QAOA, each of which reduces its efficiency in finding optimal solutions to large combinatorial optimisation problems. These issues are explored in depth later in this paper, though can be summarised as follows. The first limitation is that optimising for the expectation value of quality does not provide maximum speed-up in finding optimal solutions; a more effective approach would focus directly on maximising amplification of optimal solutions. The second limitation is that the QAOA does not take advantage of the degrees of freedom available within the QWOA framework: it applies no transformation to the quality distribution prior to the quality-dependent phase shifts, and in its typical form it uses the transverse-field mixing operator. By generalising to the QWOA framework and exploring its degrees of freedom, we see that optimal solutions can be amplified significantly more by first transforming the quality distribution to a binary one (combined with a Grover mixer). The third and perhaps most significant limitation of the QAOA is that its variational procedure is very computationally expensive, as with any high-dimensional optimisation process \cite{LONG2019108}, and this expense prevents the method from providing practical speedup. By exploring and addressing each of these three weaknesses we arrive at the underlying mechanics of a novel algorithm, introduced in this paper as the Maximum Amplification Optimisation Algorithm (MAOA).
The remainder of this paper is organised as follows. In \cref{sec:Justifying_MAOA} the QWOA framework is reviewed and the aforementioned limitations of the QAOA are explored within this framework. This exploration provides a natural introduction to the amplification process which is central to the MAOA. In \cref{sec:Connection_with_Grovers}, the connection between this amplification process and Grover's search is outlined. This relationship is central in developing the MAOA. In \cref{sec:MAOA}, the MAOA is introduced and presented in detail, followed by a discussion on the GAS and RGAS in \cref{sec:GAS}, as these will form a relevant baseline for the comparison of algorithm performance. Numerical simulations, results and analysis are then presented in \cref{sec:Simulation} and \cref{sec:MAOA_analysis}.
\section{Justifying the Maximum Amplification Optimisation Algorithm}
\label{sec:Justifying_MAOA}
\subsection{The QWOA framework}
A detailed description of the theoretical framework of the Quantum Walk Optimisation Algorithm (QWOA) was given in a previous paper \cite{CVRP}; it is included here for clarity and completeness. Formally, we consider a mapping $f: \mathbb{S} \longrightarrow \mathbb{R}$, which returns a measure of the quality associated with each possible solution in the solution space $\mathbb{S}$, where $\mathbb{S}$ has cardinality $N$.
The starting point of the QWOA is a quantum system with $N$ basis states, one for each solution in $\mathbb{S}$, initialised in an equal superposition,
\begin{equation}
\ket{s} = \frac{1}{\sqrt{N}}\sum_{x \in \mathbb{S}}\ket{x}.
\label{eq:eqsuperpos}
\end{equation}
\noindent This initial state is then evolved through repeated and alternating application of the \emph{quality-dependent phase-shift} and \emph{quantum-walk-mixing} unitaries. The quality-dependent phase-shift unitary is defined as
\begin{equation}
U_Q(\gamma_j) = \exp(-\text{i} \gamma_j Q ),
\end{equation}
%
\noindent where $\gamma_j \in \mathbb{R}$ and $Q$ is a diagonal operator such that $Q\ket{x} = f(x) \ket{x}$. The quantum walk mixing unitary is defined as
\begin{equation}
U_W(t_j) = \exp(-\text{i} t_j A ),
\end{equation}
\noindent where $t_j > 0$, and $A$ is the adjacency matrix of a circulant graph that connects the feasible solutions to the problem, i.e. the graph contains $N$ vertices, one for each solution in $\mathbb{S}$ and the vertices/solutions are connected according to the adjacency matrix, $A$. Note that the Laplacian matrix of the graph could also be used, but it would produce equivalent behaviour. The graphs are selected to have circulant connectivity, because all circulant graphs are diagonalised by the Fourier transform and hence can be efficiently implemented on a quantum computer~\cite{Qiang2016, Loke2017b, Zhou2017, Qiang2021}.
The first unitary $U_Q$ applies a phase-shift at each vertex proportional to the quality of the solution at that vertex, with the proportionality constant given by the parameter, $\gamma_j$. The second unitary $U_W$ can be understood as performing a quantum walk over the graph for time $t_j$, mixing the amplitudes across vertices. Following the mixing of phase-shifted amplitudes across the vertices of the graph, constructive and destructive interference will result in quality-dependent amplitude amplification, controlled by the parameters $\gamma_j$ and $t_j$. Application of $U_Q$ and $U_W$ is repeated $r$ times, resulting in a final state of the system given by
\begin{equation} \label{eq:QWOA}
\ket{\bm{\gamma}, \bm{t}} = U_W(t_r) U_Q(\gamma_r)...U_W(t_{1})U_Q(\gamma_{1}) \ket{s},
\end{equation}
\noindent where $\bm{t} = (t_1, t_2, ..., t_r)$ and $\bm{\gamma} = (\gamma_1, \gamma_2, ..., \gamma_r)$.
By tuning the parameters $\bm{\gamma}$ and $\bm{t}$, it is possible to amplify the amplitudes corresponding to high-quality solutions, and therefore increase the probability that a measurement of the system collapses it into a high-quality solution. The process of tuning the parameters, also known as the variational procedure, is conducted iteratively through the use of a classical optimisation algorithm which takes as its objective function the expectation value of the $Q$ operator:
\begin{equation}
\label{eq:expectation}
c(\bm{\gamma}, \bm{t}) = \bra{\bm{\gamma}, \bm{t}}Q\ket{\bm{\gamma}, \bm{t}}.
\end{equation}
The QWOA framework also assumes there exists an indexing algorithm which provides a one-to-one mapping from the indices $\{0,1,...,N-1\}$ to solutions in the solution space. This allows the quantum walk and graph connectivity to be restricted to just the space of valid solutions. Note also that the QAOA is contained within the QWOA framework, where application of the transverse field/mixing operator of the QAOA is equivalent to a quantum walk over a hypercube \cite{Marsh2019}.
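For concreteness, the framework above can be sketched in numpy for a toy problem; the cycle graph, random qualities and parameter values below are illustrative assumptions only, not part of the algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 8
qualities = rng.random(N)  # f(x) for each indexed solution (toy data)

# Circulant mixing graph (a cycle), diagonalised once for repeated walks
A = np.roll(np.eye(N), 1, axis=0) + np.roll(np.eye(N), -1, axis=0)
evals, evecs = np.linalg.eigh(A)

s = np.full(N, 1 / np.sqrt(N), dtype=complex)  # equal superposition |s>

def qwoa_state(gammas, ts):
    state = s.copy()
    for g, t in zip(gammas, ts):
        state = np.exp(-1j * g * qualities) * state  # U_Q(gamma_j)
        state = evecs @ (np.exp(-1j * t * evals)     # U_W(t_j)
                         * (evecs.conj().T @ state))
    return state

psi = qwoa_state([0.4, 1.1], [0.3, 0.7])  # r = 2 iterations
c_value = float(np.real(psi.conj() @ (qualities * psi)))  # objective <Q>
```

A classical optimiser would then vary the $2r$ parameters to maximise (or minimise) `c_value`, which is the objective $c(\bm{\gamma}, \bm{t})$.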
\subsection{The issue with optimising for expectation value of quality}
\label{sec:expectation_value}
Many combinatorial optimisation problems exhibit solution qualities with normal-like distributions (examples of which are shown in \cref{sec:Simulation}), in which case sub-optimal solutions will significantly outnumber the optimal solution(s). Since the variational procedure within the QWOA framework operates on the expectation value for quality, $c(\bm{\gamma}, \bm{t})$, it tends to amplify a large group of sub-optimal solutions more than a less numerous group of optimal solutions. The reasoning for this is subtle. The expectation value for quality would clearly be optimised if the optimal solutions are completely amplified. However, with restricted circuit depths, the maximum possible amplification is limited. In this context of limited and quality-dependent amplification, the expectation value of quality is optimised when favouring amplification of the sub-optimal solutions, simply due to their superior number within the solution space. Moreover, it is also possible for the least-optimal solutions to be amplified as a secondary effect, since these are less numerous and do not exert enough influence on the expectation value of quality to favour their suppression during the optimisation process. Both of these effects were seen clearly in our previous QWOA simulation results \cite{CVRP}, included here in \cref{fig:CVRP_amplifications}.
This clearly demonstrates that the QWOA/QAOA variational procedure does not, in general, maximise amplification of the optimal solutions. On the other hand, the variational procedure produces an amplified state in which the expected value of quality has been optimised. This may not seem like an issue, because when measuring from such an amplified state, we expect to find a solution of reasonably high quality. This is not, however, the most effective approach when searching for the optimal solution(s) to a large problem. Instead, the focus should be on producing a quantum state in which the optimal solutions are maximally amplified.
\begin{figure}[H]
\centering
\includegraphics[width=0.95\columnwidth]{figures/CVRP_amplifications.pdf}
\caption{Probability amplification of solutions to a vehicle routing problem as a function of their cost after 10 and 35 QWOA iterations \cite{CVRP}. Note that the optimal (lowest-cost) solutions are amplified considerably less than the more numerous sub-optimal solutions. In addition, the least-optimal (highest-cost) solutions are amplified in spite of their low quality.}
\label{fig:CVRP_amplifications}
\end{figure}
\subsection{Amplification of optimal solutions as a more effective metric}
The context of the QWOA and QAOA is in solving large and hence classically-intractable combinatorial problems with near-term and hence restricted-circuit-depth quantum computers. For large problems and restricted circuit depths, it is not possible for the optimised expectation value of quality to converge to the highest solution quality; instead, it will converge to a reasonably high, sub-optimal quality. This is because the amount of amplification available is limited by the restricted circuit depth, and for the expectation value of quality to converge to that of the optimal solutions, these optimal solutions require extremely large amounts of amplification, from what is initially only a very small component of the initial equal superposition. If the goal of these near-term algorithms is to find a solution of reasonably high quality, then the variational procedure will be successful in producing a quantum state which, when measured, will regularly produce reasonably high quality solutions. However, this is not, and should not be, the goal. Firstly, the variational procedure is computationally expensive, and secondly, producing a reasonably high quality solution is trivial through classical means. For example, randomly sampling 1,000 solutions and taking from these the best quality solution is expected to produce a solution with quality in or near the top 0.1\% of all solutions. Instead, the goal of the QWOA and QAOA should be to find an optimal solution, or at least a solution which is very close to being optimal (near-optimal).
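The random-sampling claim can be illustrated with a quick Monte Carlo check: the best of $n$ uniformly random samples sits, on average, at the $1/(n+1)$ quantile of the solution space. A sketch (the trial counts are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
n, trials = 1000, 2000

# Each sample's "quantile" is the fraction of the solution space that beats it;
# the best of n samples has the smallest quantile.
best_quantiles = rng.random((trials, n)).min(axis=1)
mean_quantile = best_quantiles.mean()  # expected value: 1/(n+1), i.e. ~ top 0.1%
```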
Amplification produced in the QWOA amplified states is necessarily limited by restricted circuit depth. As such, a QWOA amplified state is likely to require repeated preparation and measurement in order to produce an optimal or near-optimal solution, since with limited amplification, measurement of these solutions remains relatively unlikely. The amplification of a solution represents the rate at which it will be measured from the amplified state relative to the rate at which it would be measured via classical random sampling. The amplification of near-optimal and optimal solutions is therefore the single metric most relevant to the speed-up provided by measuring any particular amplified state. It is for this reason that the following section is focused on maximising amplification of optimal solutions.
\subsection{Effect of graph structure and quality degeneracy on amplification of a single optimal solution}
In order to understand how the two primary degrees of freedom within the QWOA framework, the graph structure used for the quantum walk and the quality distribution, affect the amplification of the optimal solution in a solution space, a meticulous numerical investigation has been carried out, discussed in detail in \cref{sec:AppendixA} and \cref{sec:AppendixB}, with the important results summarised as follows:
\begin{itemize}
\item Increased degeneracy in a quality distribution increases the amplification of the single optimal solution. As such, a binary marking function applied to a quality distribution produces the largest amplification of a single marked solution for any graph. Note that degeneracy here refers to degeneracy in the non-optimal qualities. A binary marking function is defined here as a function which transforms the quality distribution into one of marked (quality $=1$) and unmarked (quality $=0$) solutions. In general the binary marking function will mark all solutions with quality superior to some specified threshold value, but in this case the single optimal solution is the only marked solution.
\item A binary marking function also produces maximum amplification of the single marked solution with repeated applications of the same phase-shift and walk-time parameters, reducing the optimisation landscape for the classically tuned parameters to one that is 2-dimensional for any number of iterations.
\item Specifically in the case of a binary marking function, of all the investigated graphs, the complete graph produces the maximum amplification of the single marked solution. Note that the complete graph is the graph in which each vertex/solution is connected to every other.
\end{itemize}
Besides the fact that a binary-marked complete graph produces the highest amplification of the optimal solution with repeated application of the same parameter pairs, it also has another significant advantage, discussed in the next section.
\subsection{The reduced complete graph}
The complete graph presents a unique opportunity in that its behaviour can be greatly simplified when there is degeneracy in the distribution of qualities across its vertices~\cite{Marsh2021, Grover_Equivalence}. The degenerate vertices can be combined through an edge contraction process to produce a graph with significantly fewer vertices which produces equivalent behaviour with respect to amplitude amplification within degenerate groups. This simplified graph will be referred to from here on as the reduced graph. Degenerate vertices can be combined in this way because each of them is functionally equivalent within the graph: they each receive identical phase shifts and have identical connectivity with regard to neighbouring vertices of each quality. \cref{fig:Edge_contraction} illustrates the edge contraction process and shows how it produces a reduced graph. Note the illustration is specifically for an example solution space containing solutions with 3 distinct qualities. The solution space can therefore be divided into 3 subsets, $S_A$, $S_B$ and $S_C$, with respective qualities $q_A$, $q_B$ and $q_C$, containing respectively $a$, $b$ and $c$ solutions each. In any case, the reduced graph is a weighted graph where each vertex represents one group of degenerate vertices from its parent graph. The weight of each regular edge corresponds to the square root of the number of edges connecting the respective degenerate groups in the parent graph. The weight of each self loop is equal to one less than the number of vertices in the respective degenerate group. The self loops are necessary because they account for the mixing of amplitude that occurs within vertices of the same group.
\begin{figure}[ht]
\centering
\includegraphics[width=0.95\columnwidth]{figures/Edge_contraction.pdf}
\caption{An illustration of the edge contraction process for a quality distribution containing 3 distinct qualities, distributed over a complete graph. This shows how the behaviour of an arbitrarily large complete graph can be greatly simplified.}
\label{fig:Edge_contraction}
\end{figure}
The reduced graph in \cref{fig:Edge_contraction} can be characterised by the following quality operator $Q_3$, adjacency matrix $A_3$, and initial equal superposition state $\ket{s_3}$, where
\[ Q_3 = \left[ \begin{array}{ccc} q_A & 0 & 0 \\ 0 & q_B & 0 \\ 0 & 0 & q_C \\ \end{array} \right], \;\; A_3 = \left[ \begin{array}{ccc} a-1 & \sqrt{ab} & \sqrt{ac} \\ \sqrt{ab} & b-1 & \sqrt{bc} \\ \sqrt{ac} & \sqrt{bc} & c-1 \\ \end{array} \right],\]
\[\ket{s_3} = \left[ \begin{array}{c} \sqrt{\frac{a}{N}} \\ \sqrt{\frac{b}{N}} \\ \sqrt{\frac{c}{N}} \\ \end{array} \right].\]
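The equivalence between the full complete graph and the reduced graph can be verified numerically; the sketch below (group sizes, qualities and parameters are arbitrary) applies one phase-shift and walk iteration to both systems and compares the probability contained in group $S_A$:

```python
import numpy as np

def evolve(A, q, state, gamma, t):
    # One QWOA iteration U_W(t) U_Q(gamma), with the walk via eigendecomposition
    evals, evecs = np.linalg.eigh(A)
    state = np.exp(-1j * gamma * q) * state
    return evecs @ (np.exp(-1j * t * evals) * (evecs.conj().T @ state))

a, b, c = 3, 4, 5            # sizes of the degenerate groups (toy values)
qA, qB, qC = 1.0, 0.5, 0.2   # their qualities (toy values)
N = a + b + c
gamma, t = 1.3, 0.2          # arbitrary parameters

# Full system: complete graph on N vertices, group A occupying the first a slots
q_full = np.concatenate([np.full(a, qA), np.full(b, qB), np.full(c, qC)])
A_full = np.ones((N, N)) - np.eye(N)
s_full = np.full(N, 1 / np.sqrt(N), dtype=complex)
psi_full = evolve(A_full, q_full, s_full, gamma, t)
p_A_full = np.sum(np.abs(psi_full[:a]) ** 2)

# Reduced system: Q_3, A_3 and |s_3> as defined above
q3 = np.array([qA, qB, qC])
A3 = np.array([[a - 1,          np.sqrt(a * b), np.sqrt(a * c)],
               [np.sqrt(a * b), b - 1,          np.sqrt(b * c)],
               [np.sqrt(a * c), np.sqrt(b * c), c - 1         ]])
s3 = np.sqrt(np.array([a, b, c]) / N).astype(complex)
psi3 = evolve(A3, q3, s3, gamma, t)
p_A_reduced = abs(psi3[0]) ** 2
```

The two probabilities agree to machine precision, for any choice of $\gamma$ and $t$, since $A_3$ is simply the restriction of the complete-graph adjacency matrix to the group-symmetric subspace in which the dynamics take place.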
This process clearly displays a significant advantage of the complete graph: its behaviour with regard to amplitude amplification within degenerate groups of vertices is insensitive to solution placement within the solution space or across the vertices of the graph. This hints at the possibility of analytically derived optimal parameters for amplification into an optimal set of solutions, or at the very least, parameters whose resulting amplitude amplification is also insensitive to solution placement/ordering.
\subsection{Optimal parameters for a binary marking function on a complete graph}
Up to this point, the focus has been on just a single optimal solution and its amplification. However, in reality, there is no way to know the number of solutions marked by a binary marking function on an unknown solution space, or similarly the number of solutions in the most-optimal partition for some other partitioning of the solution space. As such, the focus will now be on investigating amplification of some unknown fraction of marked solutions.
The two-partition, or binary-marked, problem on a complete graph is characterised by the following reduced graph adjacency matrix, quality operator and initial state, where $m$ represents the number of marked vertices (solutions), and $N$ represents the total number of vertices (solutions), namely
\[ Q_2 = \left[ \begin{array}{cc} 1 & 0 \\ 0 & 0 \\ \end{array} \right], \;\; A_2 = \left[ \begin{array}{cc} m-1 & \sqrt{m(N-m)} \\ \sqrt{m(N-m)} & N-m-1 \\ \end{array} \right] ,\]
\[\ket{s_2} = \left[ \begin{array}{c} \sqrt{\frac{m}{N}} \\ \sqrt{\frac{N-m}{N}} \\ \end{array} \right].\]
\noindent Consider a single iteration amplified state,
\[\ket{s_2'} = U_W(t) U_Q(\gamma) \ket{s_2}.\]
\noindent Substituting $m=\rho N$, where $\rho$ is the ratio of marked solutions (which is also the initial marked solution probability), it is possible to derive an expression for the probability contained in the marked solutions of the amplified state. Dividing by $\rho$, then taking the limit for small $\rho$, the final expression for amplification of the marked solutions after a single iteration becomes
\begin{equation}
\label{eq:amplification_r=1}
3+2(\cos{Nt}(\cos{\gamma}-1)-\cos{\gamma})+2\sin{Nt} \sin{\gamma},
\end{equation}
%
\noindent which takes a maximum value of 9 for $\gamma=\pi$ and $t=\frac{\pi}{N}$.
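This maximum can be checked numerically on the reduced two-vertex system defined above; a sketch (the sizes $N$ and $m$ are arbitrary, chosen so that $\rho = m/N$ is small):

```python
import numpy as np

N, m = 10**6, 10              # toy sizes; rho = m/N is small
rho = m / N
gamma, t = np.pi, np.pi / N   # the derived optimal parameters

q2 = np.array([1.0, 0.0])
A2 = np.array([[m - 1,                np.sqrt(m * (N - m))],
               [np.sqrt(m * (N - m)), N - m - 1           ]])
s2 = np.sqrt(np.array([m, N - m]) / N).astype(complex)

# One iteration U_W(pi/N) U_Q(pi), with the walk done by eigendecomposition
evals, evecs = np.linalg.eigh(A2)
state = np.exp(-1j * gamma * q2) * s2
state = evecs @ (np.exp(-1j * t * evals) * (evecs.conj().T @ state))
amplification = abs(state[0]) ** 2 / rho  # approaches 9 as rho -> 0
```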
As described previously, and demonstrated within \cref{sec:AppendixB}, amplification of a single marked vertex on the complete graph is maximised by repeated application of the same parameters. Since the above expression for amplification is independent of $m$, and any binary marked complete graph can be characterised by $Q_2$, $A_2$ and $\ket{s_2}$, the amplification of marked vertices should be independent of whether we are looking at a single marked vertex, or any arbitrary number of marked vertices, so long as the ratio of marked vertices is small. So amplification should also be maximised by repeated application of the same parameters for complete graphs with multiple marked vertices. As such, repeated application of the derived parameters, $\gamma=\pi$ and $t=\frac{\pi}{N}$, is expected to maximally amplify marked vertices on the binary marked complete graph, for arbitrary $r$.
In order to show that the binary marking function remains the most effective at amplifying a small group of marked vertices on a very large complete graph and across a range of iteration numbers $r$, the performance of various partitions of the complete graph will be assessed, namely 2-, 3-, 5- and 10-part partitions. In addition, the derived parameters, $\gamma=\pi$ and $t=\frac{\pi}{N}$, will be applied to the binary-partitioned complete graph to confirm that they do indeed produce optimal amplification. In each case, the graph will have a total of $N=10^8$ vertices, with 10 vertices in the marked partition (with quality 1). The remaining vertices will be partitioned into equal-sized groups to achieve the required total number of partitions, and each group is assigned a single quality from those distributed uniformly over the interval [0,1]. Note that amplified probabilities for the marked vertices are computed using \cref{eq:QWOA} with the adjacency matrix, quality operator and initial state taken as those for each respective reduced graph. The process for optimising amplification into the marked vertices, for a given number of iterations and a given partitioned graph, consists of randomly generating 10,000 sets of $2r$ initial parameters. The three sets of parameters that produce maximum initial amplification are taken as the initial values in a Nelder-Mead optimisation procedure. This process was repeated 24 times, and the maximum probability from the 72 optimised results was taken as the final optimised probability. The results of this analysis are shown in \cref{fig:various_partition_optimisation}, where it is clear that the binary (two-part) partition performs the best, and that the rate of amplification of the optimal solutions decreases significantly as we tend towards the typical QWOA process with increasing numbers of partitions (i.e. decreasing degeneracy).
The solid curve shows the maximal amplification given by $(2r+1)^2$, which fits the observed maximum amplification, a fact that will be addressed in \cref{sec:Connection_with_Grovers}. Repeated application of the derived optimal parameters also clearly matches with the results from the optimisation procedure, at least up until $r=15$, after which point the optimisation procedure is outperformed by the derived parameters, likely because the optimisation procedure is not rigorous enough to find the global maxima in the higher dimensional optimisation landscapes.
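The $(2r+1)^2$ scaling can be reproduced on the binary-marked reduced system by simply repeating the derived parameter pair; a sketch using the same sizes as the simulation ($N=10^8$, 10 marked solutions; the rotation count $r=32$ is an arbitrary choice):

```python
import numpy as np

N, m, r = 10**8, 10, 32
rho = m / N
gamma, t = np.pi, np.pi / N   # the derived optimal parameters

q2 = np.array([1.0, 0.0])
A2 = np.array([[m - 1,                np.sqrt(m * (N - m))],
               [np.sqrt(m * (N - m)), N - m - 1           ]])
s2 = np.sqrt(np.array([m, N - m]) / N).astype(complex)

evals, evecs = np.linalg.eigh(A2)
state = s2.copy()
for _ in range(r):            # repeat the same (gamma, t) pair r times
    state = np.exp(-1j * gamma * q2) * state
    state = evecs @ (np.exp(-1j * t * evals) * (evecs.conj().T @ state))

amplification = abs(state[0]) ** 2 / rho  # close to (2r+1)^2 = 4225
```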
\begin{figure}[ht]
\centering
\includegraphics[width=0.95\columnwidth]{figures/Parameter_optimisation_for_various_partitions.pdf}
\caption{Optimised amplification of 10 marked vertices/solutions on a complete graph of $10^8$ total vertices/solutions, with increasing iterations of the QWOA process. We see that a small number of optimal solutions can be amplified significantly more by assigning them into the marked set of a binary partition, than for any other partitioning of the solution space with less degeneracy in the non-optimal solutions.} \label{fig:various_partition_optimisation}
\end{figure}
It is important to note that, for a given circuit depth, characterised by $r$ QWOA iterations, maximum amplification of a small group of high quality solutions can be achieved by using a binary marking function over a complete graph and repeatedly applying the derived set of parameters, $\gamma=\pi$ and $t=\frac{\pi}{N}$. This is the primary mechanism underlying the MAOA, and allows the variational procedure of the typical QWOA or QAOA approach to be avoided entirely. To understand how this process can be implemented within a generalised optimisation procedure, it is first important to understand how it is related to Grover's search.
\section{Connection with Grover's search}
\label{sec:Connection_with_Grovers}
\subsection{Grover's rotation and diffusion operators}
To understand how the MAOA works, it is useful to first understand how the alternating application of continuous-time quantum walks over a complete graph and quality-dependent phase shifts with a binary marking function is related to Grover's search \cite{OG_grovers}. Note that the following is outlined in further detail by \citet{Grover_Equivalence}.
Consider a binary marking function which takes as its inputs a threshold quality, $T$, and a solution quality, $f(x)$, and returns a 1 if the solution quality is superior to the threshold quality and a 0 otherwise. Application of this marking function transforms the diagonal $Q$ operator to one with eigenvalues of 1 for marked states and 0 for unmarked states. When combined with the optimal phase-shift parameter, $\gamma=\pi$, this transforms the quality-dependent phase-shift unitary, $U_Q(\gamma)$, into an operator which applies a $\pi$ phase shift to all marked states. This is functionally equivalent to the Grover rotation operator, which applies a $\pi$ phase rotation to the marked state(s) \cite{OG_grovers}.
The adjacency matrix of the complete graph can be expressed as $A=N\ket{s}\bra{s}-\mathbb{I}$, so the quantum walk unitary applied for time $t=\frac{\pi}{N}$ can be expressed as
\begin{equation}
U_W(\frac{\pi}{N}) = \mathrm{e}^{-\text{i} \frac{\pi}{N} A} = \mathrm{e}^{-\text{i} \frac{\pi}{N} (N\ket{s}\bra{s}-\mathbb{I})} = \mathrm{e}^{\text{i} \frac{\pi}{N}} (\mathbb{I}-2\ket{s}\bra{s}).
\end{equation}
This is equivalent to the Grover diffusion operator \cite{OG_grovers} up to a global phase, and hence produces equivalent behaviour with regards to mixing of states.
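Both equivalences can be verified numerically: the walk unitary matches the diffusion operator up to the global phase, and alternating the two operators reproduces the standard Grover success probability $\sin^2((2r+1)\arcsin\sqrt{\rho})$. A sketch (the solution space size and marked set below are arbitrary):

```python
import numpy as np

N = 64
marked = np.array([3, 17])   # arbitrary marked solutions
rho = len(marked) / N

s = np.full(N, 1 / np.sqrt(N), dtype=complex)
A = np.ones((N, N)) - np.eye(N)  # complete graph: A = N|s><s| - I

# U_W(pi/N) built directly from the matrix exponential (via eigendecomposition)
evals, evecs = np.linalg.eigh(A)
U_W = evecs @ np.diag(np.exp(-1j * np.pi / N * evals)) @ evecs.conj().T

# Grover diffusion operator, including the global phase e^{i pi/N}
D = np.exp(1j * np.pi / N) * (np.eye(N) - 2 * np.outer(s, s))

# U_Q(pi) with a binary marking function: a pi phase shift on marked states
phases = np.ones(N)
phases[marked] = -1.0

r = 3
psi = s.copy()
for _ in range(r):
    psi = U_W @ (phases * psi)

p_marked = float(np.sum(np.abs(psi[marked]) ** 2))
p_grover = np.sin((2 * r + 1) * np.arcsin(np.sqrt(rho))) ** 2
```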
Given that the two QWOA unitaries applied in sequence with the derived parameters, $\gamma=\pi$ and $t=\frac{\pi}{N}$, are equivalent to a single iteration of Grover's search, and both processes operate on the initial equal superposition, $r$ such QWOA iterations produce the same amplitude amplification of the marked solutions as a Grover's search terminated after $r$ rotations. The key difference between the two processes is that Grover's search aims for complete convergence and requires large circuit depths, whereas, with restricted circuit depths, the MAOA terminates the process early and produces only partial convergence into the marked solutions.
Even though it has been demonstrated that this process of maximum amplification is functionally equivalent to a truncated Grover's search, it is also important to note that Grover's search, truncated or not, is not on its own a generalised optimisation procedure. The Grover's search procedure makes use of the amplification process to accelerate the search for a predefined element or group of elements from a larger space, but for a combinatorial optimisation problem, the defining features or qualities of an optimal solution or group of solutions are not known. Additional work is therefore required to incorporate this maximum amplification process into a generalised optimisation procedure, the framework for which is explored in detail in the following sections.
\subsection{The low-convergence regime of Grover's search}
The amplified probability of the marked states during a Grover's search depends only on the number of completed rotations ($r$) and the ratio of the marked solutions to the total solution space ($\rho=\frac{m}{N}$) \cite{Grover_probability_equation, GAS}, as given by
\begin{equation}
P(r,\rho) = \sin((2r+1)\arcsin(\sqrt{\rho}))^2.
\label{eq:Grovers_prob}
\end{equation}
%
In the limit of small $r$ and $\rho$, \cref{eq:Grovers_prob} reduces to
\begin{equation}
P_{LC}(r,\rho) = \rho (2r+1)^2,
\label{eq:low_convergence_prob}
\end{equation}
%
which gives the probability of measuring marked solutions when the convergence into these states is low. This low-convergence estimate is accurate to within $1\%$ when the amplified probability is less than $\frac{1}{40}$. In \cref{fig:grovers_curve}, we plot these probability expressions with increasing rotation counts, $r$, relative to the rotation count required for complete convergence, i.e. $$r_c=\frac{\pi}{4\arcsin{\sqrt{\rho}}}-\frac{1}{2}.$$
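These expressions are straightforward to sanity-check numerically; a sketch (the values of $\rho$ and $r$ are arbitrary, chosen to lie well inside the low-convergence regime):

```python
import numpy as np

def grover_prob(r, rho):
    # Exact marked-solution probability after r Grover rotations
    return np.sin((2 * r + 1) * np.arcsin(np.sqrt(rho))) ** 2

def low_convergence_prob(r, rho):
    # Small-r, small-rho approximation of the above
    return rho * (2 * r + 1) ** 2

def r_c(rho):
    # Rotation count required for complete convergence
    return np.pi / (4 * np.arcsin(np.sqrt(rho))) - 0.5

rho = 1e-6   # e.g. one marked solution in a million
r = 50       # far fewer rotations than r_c(rho) ~ 785
p_exact = grover_prob(r, rho)
p_approx = low_convergence_prob(r, rho)
rel_error = abs(p_approx - p_exact) / p_exact
```

Here `p_exact` is below $\tfrac{1}{40}$ and the low-convergence estimate agrees with it to well within $1\%$, consistent with the accuracy claim above.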
\begin{figure}[ht]
\centering
\includegraphics[width=0.95\columnwidth]{figures/Grovers_curve.pdf}
\caption{The amplified probability of a small group of marked solutions with increasing number of Grover rotations $r$ relative to $r_c$.}
\label{fig:grovers_curve}
\end{figure}
The region within which the low-convergence approximation is accurate shall be referred to from here on as the low-convergence regime. When applying a certain rotation count, $r$, a sufficiently small marked-vertex ratio will guarantee that the amplified state is one that exists within this regime, and hence one that is maximally amplified, since for larger marked-vertex ratios, the marked-solution probability will fall below that predicted by \cref{eq:low_convergence_prob}. The amplification applied to the marked solutions within the low-convergence regime can be read directly from \cref{eq:low_convergence_prob}: $\rho$ is the initial marked-solution probability, and hence $(2r+1)^2$ is the amplification relative to the marked-solution probability in the non-amplified state. Once in this regime, adjusting the threshold so as to further reduce the marked ratio will not result in any further amplification of the marked states, but will instead only reduce the total size of the marked set. Note that at this point it should be apparent why the $(2r+1)^2$ amplification curve was included in \cref{fig:various_partition_optimisation}, as it shows perfect agreement with the amplification produced by the application of the derived optimal parameters to the binary-marked complete graph.
\section{The Maximum Amplification Optimisation Algorithm}
\label{sec:MAOA}
In summary of what has so far been established, for a given circuit depth characterised by $r$ QWOA iterations, a binary marking function on the complete graph produces the maximum possible amplification of the marked states via repeated applications of the QWOA parameters, $\gamma=\pi$ and $t=\frac{\pi}{N}$, so long as the selected quality threshold for the marking function produces a marked ratio small enough to guarantee that the amplified state is in the low-convergence regime of a functionally equivalent Grover's search terminated after $r$ rotations. With respect to finding the optimal solution(s), repeated preparation and measurement of this maximally amplified state represents the maximum speedup available at the specified restricted circuit depth. Much of this section will therefore be focused on presenting a method to reliably and efficiently find a quality threshold which will produce this maximally amplified state.
Note that whether operating within the QWOA framework or that of the truncated Grover's search, the amplification process remains functionally equivalent, so from this point on, the Grover framework will be adopted, i.e. $r$ will be referred to as the number of rotations.
\subsection{Threshold response curves}
In order to understand how the MAOA locates a maximally amplifying quality threshold, it is first useful to introduce the concept of a threshold response curve, which quantifies, for a fixed number of rotations, $r$, how the probability of measuring a marked solution varies with the quality threshold, $T$. The threshold response curve is given by \cref{eq:Grovers_prob} as $P(r,\rho(T))$ where $\rho(T)$ is the marked-solution ratio produced by the marking function. The threshold response of a system represents the rate of successful measurements of marked solutions. A strong response would occur where the probability of measurement is close to $1$. An example threshold response curve for $r=128$ is shown in \cref{fig:r128_response}. This curve is the response of a system which has a quality distribution matching that of the standard normal distribution, i.e. mean quality of 0, standard deviation equal to 1, and with a sufficiently large number of solutions such that the distribution of solution qualities is approximately continuous. This response curve is for a minimisation problem, i.e. the marked set is all solutions with qualities less than $T$.
\begin{figure}[ht]
\centering
\includegraphics[width=0.95\columnwidth]{figures/example_response_curve_r128.pdf}
\caption{Threshold response curve for 128 rotation ($r=128$) amplified states over a solution space with qualities distributed as per the standard normal distribution. The low-convergence, high-convergence and chaotic regimes are all labelled accordingly.}
\label{fig:r128_response}
\end{figure}
In general, the total combined number of peaks and troughs on either side of the median is equal to the rotation count, $r$, so the response curve in \cref{fig:r128_response} contains $64$ peaks and $64$ troughs. It is useful to define three different regimes within the threshold response curve, each of which is labelled in \cref{fig:r128_response}. The chaotic regime refers to the range of thresholds within which there is a tightly spaced fluctuation between high and low response. In terms of a traditional Grover's search, this is where the number of rotations significantly exceeds the minimum required for complete convergence, $r>2r_c$. The high-convergence regime refers to the region around the optimal quality at which peak response occurs. Again, in terms of a traditional Grover's search, this is where the number of rotations first approaches that required for complete convergence, $0.1r_c<r<2r_c$. Finally, the low-convergence regime refers to the region of qualities in which solutions are too few in number for $r$ rotations to produce high convergence. Note that this low-convergence regime is identical to that defined earlier, where maximum amplification of marked solutions is achieved, and corresponds with $P(r,\rho(T))<\tfrac{1}{40}$ and $r<0.1r_c$.
The goal of the first part of the MAOA is thus to find a threshold, $T$, located within the low-convergence regime of the threshold response curve $P(r,\rho(T))$, where $r$ corresponds with the restricted circuit depth at which the final process of repeated state preparation/measurement is to be carried out. Lowering the threshold while monitoring the response of the system at the final value of $r$ is unlikely to be practical, as it requires controlled navigation of the chaotic regime, which contains $\frac{r}{2}-1$ peaks and demands a highly accurate estimate of the median. There exists a much more efficient method: navigate from an initial threshold located loosely around the median on the $r=1$ response curve through to a final threshold within the low-convergence regime of the final $r$ response curve, doubling the rotation count as required.
\subsection{Navigating the threshold response curves}
The process of navigating from somewhere near the median on the $r=1$ response curve to the low-convergence regime on the final $r$ response curve is illustrated in \cref{fig:r1_to_r8}, where the final value for $r$ is taken as 8. This iterative process of doubling the rotation count, and moving from the high-convergence peak of one curve to the trough below it on the next curve, allows the threshold to remain in the well-behaved high-convergence regime. This permits reliable navigation from one peak to the next until the rotation count corresponding with the desired final circuit depth is reached. With this method, the number of peaks that must be navigated to arrive at the final peak at $r$ grows with $\log_2(r)$. By contrast, navigating the entire threshold response curve at the final $r$ would involve a number of peaks growing linearly with $r$; moreover, any initial threshold would not have a well-defined position on that response curve due to the tight spacing of peaks around the median. The other benefit of navigating through successive response curves is that state preparations/measurements made during the early stages of the process, at low $r$, incur significantly lower computational effort.
\begin{figure}[ht]
\centering
\includegraphics[width=0.95\columnwidth]{figures/r1_to_r8.pdf}
\caption{An illustration of how the MAOA navigates across threshold response curves from a threshold near the median for the $r=1$ curve to a final threshold in the low-convergence regime of the $r=8$ curve.}
\label{fig:r1_to_r8}
\end{figure}
It is important to note the necessary connection between doubling the rotation count and the peak-to-trough relationship between successive threshold response curves, which is not merely a coincidence specific to the standard normal distribution. The peak in the high-convergence regime for a given $r$ occurs when the argument of the sine function in \cref{eq:Grovers_prob} equals $\frac{\pi}{2}$, while the trough immediately preceding the high-convergence regime occurs when this argument equals $\pi$. Since this argument is approximately proportional to $r$, doubling $r$ at the appropriate threshold transforms the peak in the high-convergence regime at $r$ into the trough immediately preceding the high-convergence regime at $2r$. This relationship becomes tighter for large $r$, but holds sufficiently well at low $r$, as is evident in \cref{fig:r1_to_r8}.
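This peak-to-trough correspondence can be checked numerically. The sketch below again assumes the truncated-Grover form $P(r,\rho)=\sin^2\!\big((2r+1)\arcsin\sqrt{\rho}\big)$ for \cref{eq:Grovers_prob}: the marked ratio producing the $r$-rotation peak lands on the trough preceding the $2r$-rotation peak.

```python
import math

def prob(r, rho):
    # Assumed truncated-Grover success probability (see text).
    return math.sin((2 * r + 1) * math.asin(math.sqrt(rho))) ** 2

r = 64
# Marked ratio at the high-convergence peak for r rotations:
# (2r + 1) * arcsin(sqrt(rho)) = pi / 2.
rho_peak = math.sin(math.pi / (2 * (2 * r + 1))) ** 2
print(prob(r, rho_peak))      # ~1: peak of the r-rotation curve

# At the same threshold with 2r rotations the argument becomes
# (4r + 1) / (2r + 1) * pi / 2, which approaches pi: the trough
# immediately preceding the 2r-rotation peak.
print(prob(2 * r, rho_peak))  # ~0: trough of the 2r-rotation curve
```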
The peak-finding method is presented in detail in \cref{sec:pseudocode}, but essentially reduces to recording the number of consecutive successful marked-solution measurements at steadily improving thresholds (skipping to the next threshold whenever a measurement fails to produce a marked solution). If 20 successful measurements are made in a row, the threshold is held and the rotation count doubled (this happens most of the time, and happens near the peak). If 20 consecutive successful measurements are not made, the process of checking successively improving thresholds continues until the threshold is past the peak, at which point a weighted mean of the success counts is used to locate the peak. Whether the peak has been passed is assessed initially using the interquartile range of a small sample of the non-amplified solution space, and later using the peak-to-peak gaps of the previously navigated threshold response curves. These previous peak-to-peak spacings are also used to derive an adaptive threshold step-size, updated throughout, which accounts for the narrowing of peak-to-peak gaps with increasing $r$ and for any other changes in the distribution at larger separations from the mean.
When arriving at the final peak (within the high-convergence regime of the final $r$) an adaptive search \cite{PAS,HAS} with fixed circuit depth can be performed, knowing that any threshold where $P(r,\rho(T))\leq\frac{1}{40}$ is guaranteed to be located in the low-convergence regime and hence will be one that generates maximum amplification in all marked solutions for the given $r$. This adaptive search procedure is shown in \cref{fig:r1_to_r8} as the final descent on the $r=8$ curve.
An important feature of this threshold-finding process is that it operates independently of the exact shape of the underlying distribution of solution qualities. This matters because, if the distribution were known exactly and could be accurately fit with a relatively efficient sampling process, a threshold within the low-convergence regime could be trivially deduced. This may be appropriate in some cases, but in practice, for larger rotation counts, the relevant quality thresholds lie at large deviations from the mean, and the behaviour of the quality distribution in these regions becomes less certain without significant sampling effort. Even for problems with solution spaces that are in general normally distributed, the distributions can deviate from a perfect normal distribution at large deviations from the mean, as occurs, for example, in the vehicle routing problem of \cref{sec:Simulation}. In any case, due to imperfections in a problem's actual quality distribution relative to its idealised form, extrapolating from a classical random sampling can result in selection of a threshold which is too close to or too far from the mean. If too close to the mean, the threshold could fall within a trough in the chaotic regime, exhibiting a threshold response otherwise identical to that of a maximally amplifying threshold but producing significantly less amplification. If too far from the mean, there may be no marked solutions at all, or so few that they are unlikely ever to be measured.
\subsection{General strategy}
\noindent To summarise, the general strategy for the MAOA consists of two parts: \\
\\
\underline{Part 1:} For a given problem and restricted rotation count, $r$, find a quality threshold which produces maximum amplification of the marked solutions (through the application of $r$ Grover rotations). Note that this threshold will be one within the low-convergence regime and hence will amplify marked solutions by the maximum factor of $(2r+1)^2$. \\
\\
\underline{Part 2:} Using the quality threshold acquired in part 1, repeatedly prepare and measure the maximally amplified state. The repeated measurement of this amplified state will produce random high quality solutions from the marked set at a rate of approximately $\frac{1}{40}$ or slightly less, depending on the final marked-solution ratio.
\subsection{Computational effort}
\label{sec:Comp_effort}
In order to assess the performance of the MAOA, it is important to quantify the computational effort it expends. Assuming that the computation of solution qualities requires a significant fraction of the total computational effort, we use the number of calls to the quality function, $f(x)$, as the measure of computational effort. Note that in both the Grover and QWOA frameworks, each time the marked solutions are phase shifted, the quality function is effectively called twice: once to mark the relevant solutions, and once for the uncomputation process, which resets the relevant ancillary qubits for subsequent iterations. In addition, once a final solution is measured from the amplified state, its quality must still be computed, adding one more call to the quality function. As such, the preparation and measurement of an amplified state incurs a computational effort of $2r+1$. A classical random sampling of the solution space incurs a computational effort equal to the number of samples, as it requires only a single call to the quality function for each randomly selected solution. Note that to fully quantify any speedup relative to classical random sampling, the computational effort involved in the quantum walk/mixing process should also be accounted for. This is not considered as part of this work; however, since the number of walks/mixes equals the rotation count, it would pose only a linear overhead on top of the computational effort associated with the quality-function calls.
\subsection{Note on the use of expectation value of quality}
Recently, \citet{golden2021threshold} proposed a modification of the QAOA which makes use of a Grover mixer combined with a binary marking function. Note that the Grover mixer is effectively a continuous-time quantum walk over the complete graph \cite{Grover_Equivalence}, so the underlying mechanics of their algorithm are close to those of the MAOA. The primary differences are that the performance of the amplified state is monitored via the expectation value of quality, the final pair of parameters are left free for a tuning process, and the parameter corresponding with the Grover mixer is otherwise taken as $\pi$ rather than $\frac{\pi}{N}$. As an aside, the periodicity of the walk on the complete graph is such that a walk time of $\frac{(2i+1)\pi}{N}$, where $i \in \{0,1,2,\dots\}$, produces maximum mixing, whereas a walk time which is a multiple of $\frac{2\pi}{N}$ completes an integer number of cycles and produces no mixing, and hence no amplification. This can be observed in the amplification expression in \cref{eq:amplification_r=1}. As such, a mixing parameter equal to $\pi$ will function well only for a solution space with an odd number of solutions.
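This periodicity can be verified directly. For the complete graph with adjacency $A=J-I$, the walk factorises over its two eigenspaces, giving $e^{-iAt}|\psi\rangle = e^{it}\big[|\psi\rangle + (e^{-iNt}-1)\langle s|\psi\rangle |s\rangle\big]$, where $|s\rangle$ is the uniform state. A minimal sketch (the example size $N=8$ is an arbitrary choice):

```python
import cmath
import math

def walk(state, t):
    # Continuous-time quantum walk on the complete graph (adjacency A = J - I),
    # applied via its two eigenspaces:
    # exp(-iAt)|psi> = exp(it) [ |psi> + (exp(-iNt) - 1) <s|psi> |s> ].
    N = len(state)
    s = 1.0 / math.sqrt(N)                      # amplitude of uniform state |s>
    overlap = s * sum(state)                    # <s|psi>
    mix = cmath.exp(-1j * N * t) - 1.0
    return [cmath.exp(1j * t) * (a + mix * overlap * s) for a in state]

N = 8
psi = [1.0 / math.sqrt(N)] * N
psi[0] = -psi[0]                                # one marked, phase-flipped solution

no_mix = walk(psi, 2 * math.pi / N)             # full cycle: no mixing at all
grover = walk(psi, math.pi / N)                 # maximum mixing (Grover diffusion)

print([round(abs(a) ** 2, 6) for a in no_mix])  # all probabilities still 1/8
print(abs(grover[0]) ** 2)                      # marked probability amplified
```

At $t=\frac{2\pi}{N}$ the relative phase between eigenspaces is $e^{-2\pi i}=1$, so the state is unchanged up to a global phase; at $t=\frac{\pi}{N}$ the walk reduces to the Grover diffusion operator.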
By monitoring the expectation value of quality, \citet{golden2021threshold} choose a threshold for their marking function which maximises the expected quality produced by the amplified state. The issue with this is that, as discussed in \cref{sec:expectation_value}, a threshold which maximises expectation value of quality will not produce maximum amplification of the highest quality solutions. In fact, measurement of a state in which the optimal solution(s) are maximally amplified will only occasionally produce a marked solution (approximately 1 in every 40 measurements or less), and hence will have an expectation value for quality which is close to the mean quality of the solution space. The expectation value is therefore not a useful metric in determining whether a state produces maximum amplification of the optimal solution(s). This is illustrated in \cref{fig:expectation_response}, where a 128-rotation amplified state is prepared over a solution space with qualities distributed as per the standard normal distribution. The expectation value of quality is shown relative to a target quality corresponding to the minimum solution out of 100 million total solutions. The amplification of the marked set is also shown, relative to the maximum possible amplification.
\begin{figure}[ht]
\centering
\includegraphics[width=0.95\columnwidth]{figures/Expectation_response.pdf}
\caption{Threshold response for the expectation value of quality over a quality distribution matching the standard normal distribution. Amplification of the marked set is also shown, relative to the maximum possible amplification. This demonstrates that selection of a binary threshold which maximises expected quality produces significantly less amplification of optimal solutions when compared to a threshold selected via the MAOA.}
\label{fig:expectation_response}
\end{figure}
Note that by tuning the applied parameters, it is possible to produce expected qualities along the upper envelope of the MAOA curve, as shown by the dashed curve in \cref{fig:expectation_response}. This dashed curve matches the observed behaviour shown in Figure 1 of \citet{golden2021threshold}. With fixed parameters, the 128-rotation amplified state of the MAOA oscillates in accordance with the chaotic regime of Grover's search. As shown in \cref{fig:expectation_response}, the threshold which produces the best expectation value for quality corresponds with an amplification of the marked set of only around 40\% of the maximum possible amplification. The MAOA threshold, on the other hand, produces essentially the maximum possible amplification. Even ignoring the computational effort involved in tuning the phase-shift and mixing parameters of this binary-threshold version of the QAOA, as well as that of optimising the expected quality, this approach cannot produce the same speedup as the MAOA, simply because it produces less than half of the maximum possible amplification of the marked solutions.
\section{The Grover adaptive search as a benchmark}
\label{sec:GAS}
As mentioned in the introduction, a highly effective quantum optimisation algorithm already exists: the Grover adaptive search (GAS) \cite{GAS2,GAS,GAS3}. However, it is unlikely to be implementable on near-term quantum devices, due to its requirement for large rotation counts of $\mathcal{O}(\sqrt{N})$. Nevertheless, the performance of this algorithm makes for a useful baseline against which to compare the performance of the MAOA. The D{\"u}rr and H{\o}yer (randomised rotation count) variation of the algorithm, summarised by \citet{GAS}, will be employed for comparison purposes. The GAS effectively uses the Grover's search procedure to amplify the iteratively improving marked set and perform a hesitant adaptive search \cite{HAS}, which is essentially a pure adaptive search \cite{PAS} in which the probability of successfully finding an element of the improving set is generally less than 1.
Since the GAS navigates the solution space in a random manner, there is no way of knowing whether, for a particular rotation count, the chosen threshold falls within the chaotic regime, and hence the rotation count must be randomised throughout. As the marked set improves, and hence the marked ratio decreases, the rotation count required to suitably amplify the marked set increases. So although the rotation count needs to be randomised, it is randomly selected from a uniform distribution with a steadily growing upper bound. The rate at which this upper bound grows is controlled by a parameter, $\lambda$. In their work, \citet{GAS} show that the GAS performs optimally on a solution space with uniformly distributed qualities when $\lambda=1.34$. As will become clear in \cref{sec:Simulation}, combinatorial optimisation problems often have solution spaces with something closer to a normal distribution of qualities. We confirm via simulation that $\lambda=1.34$ remains optimal for normally distributed solution spaces, and therefore use this value in subsequent simulations.
A novel but quite natural modification of the GAS, making it more suitable for the restricted circuit depth context of near-term quantum computation, is simply to place a limit on the maximum allowable rotation count, with the procedure otherwise identical. This modified version will be referred to as the restricted Grover adaptive search (RGAS). As will be seen in \cref{sec:Simulation}, the RGAS remains effective in providing speedup over classical random sampling in finding optimal solutions. The speedup is related to the limit placed on the rotation count: larger rotation count restrictions produce better performance, closer to that of the original GAS, and, unsurprisingly, smaller restrictions perform worse. The RGAS therefore makes for a useful comparison with the MAOA, as both can be restricted to the same circuit depth/rotation count, levelling the playing field and ensuring comparison between two algorithms equally well suited to the context of near-term quantum computation. Note that we should expect the MAOA to outperform the RGAS: because the RGAS requires a randomised rotation count, many of its amplified states are prepared using smaller rotation counts than the upper limit, and hence do not make use of the maximum possible amplification, whereas, once the final threshold has been located, the MAOA produces the maximally amplified state every time.
Note also that the typical QWOA and QAOA procedures have not been included for comparison, since the MAOA achieves significantly more amplification of optimal solutions and does so without the need for a computationally expensive variational procedure to arrive at the optimal phase and walk parameters. The MAOA does require a single dimensional optimisation to arrive at the final quality threshold, but this is readily achievable.
\section{Numerical simulation}
\label{sec:Simulation}
Now that the Maximum Amplification Optimisation Algorithm (MAOA), Grover adaptive search (GAS) and its restricted circuit depth variant (RGAS) have all been established, their performances will be compared in the context of combinatorial optimisation. Firstly, each of the three algorithms will be applied, via numerical simulations, to a capacitated vehicle routing problem. Next, they will be applied to a portfolio optimisation problem. Lastly, the MAOA and the RGAS will be compared in the limit of arbitrarily large normally distributed solution spaces. In every case, curves for success probability vs. computational effort will be computed from the results of 10,000 simulations. Success probability is defined in each case, but is essentially the probability of having found an optimal solution after a specified amount of computational effort. Each simulation operates over the actual distribution of solution qualities for each particular problem, where each of these distributions is precomputed. The key assumptions underlying each of the simulations are listed below, and are consistent with the dynamics of a truncated Grover's search:
\begin{enumerate}
\item A given threshold, $T$, produces a marked-solution ratio, $\rho(T)$, which is fully defined by the precomputed quality distribution.
\item For a given threshold, $T$, and rotation count, $r$, the probability of measuring a marked solution is given by $P(r,\rho(T))$ as per \cref{eq:Grovers_prob}.
\item When a marked solution is successfully measured, it has an equal probability of being any one of the marked solutions, so the returned solution is randomly selected (uniformly) from the full set of marked solutions for the specified threshold.
\item The computational effort required for each preparation and measurement of an amplified state is quantified by $2r+1$, as per \cref{sec:Comp_effort}.
\end{enumerate}
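Taken together, these assumptions define a minimal per-preparation simulator. The sketch below is illustrative only: the marked set is a hypothetical placeholder, the marked ratio is set to the maximally amplifying value discussed earlier, and the truncated-Grover form $P(r,\rho)=\sin^2\!\big((2r+1)\arcsin\sqrt{\rho}\big)$ is assumed for \cref{eq:Grovers_prob}.

```python
import math
import random

def success_prob(r, rho):
    # Assumption 2: assumed truncated-Grover measurement probability.
    return math.sin((2 * r + 1) * math.asin(math.sqrt(rho))) ** 2

def prepare_and_measure(r, marked, rho):
    # One preparation/measurement of the amplified state.
    # Returns (measured solution or None, computational effort expended).
    effort = 2 * r + 1                           # assumption 4
    if random.random() < success_prob(r, rho):   # assumption 2
        return random.choice(marked), effort     # assumption 3: uniform over marked
    return None, effort

random.seed(1)
r = 64
marked = list(range(25))             # hypothetical marked set for illustration
rho = 1 / (40 * (2 * r + 1) ** 2)    # assumption 1: rho fixed by the threshold;
                                     # here, a maximally amplifying value
hits, total_effort = 0, 0
for _ in range(1000):
    outcome, effort = prepare_and_measure(r, marked, rho)
    total_effort += effort
    hits += outcome is not None
print(hits, total_effort)            # roughly 1000/40 hits; 129 calls per shot
```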
The classical method of randomly sampling the solution space in search of optimal solutions, akin to exhaustive search, is included in all cases as a baseline for comparison, since the performance of random sampling is well defined and problem independent. We acknowledge, however, that exhaustively searching the solution space for high quality solutions is not the fastest classical approach for most combinatorial optimisation problems. Nevertheless, this comparison allows the speedup of the MAOA to be clearly quantified, as discussed in \cref{sec:MAOA_analysis}.
\subsection{The capacitated vehicle routing problem}
A detailed theoretical framework for the capacitated vehicle routing problem (CVRP) is presented by \citet{CVRP} and adopted without modification here for the purpose of generating a solution space and corresponding quality distribution for analysis. The problem essentially involves seeking the lowest cost routes for delivering supplies from a central depot to a number of external locations. The cardinality of the solution space is given by
\begin{equation}
\label{eq:N_CVRP}
N_{CV}(l) = \sum_{k=1}^{l}{l-1 \choose k-1} \frac{l!}{k!},
\end{equation}
where $l$ is the number of locations. For the following simulation, we set $l=10$ giving $N=58,941,091$.
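The cardinality in \cref{eq:N_CVRP} can be checked directly; a short sketch:

```python
from math import comb, factorial

def n_cvrp(l):
    # Cardinality of the CVRP solution space, N_CV(l):
    # sum over k vehicles of the partitionings and route orderings.
    return sum(comb(l - 1, k - 1) * factorial(l) // factorial(k)
               for k in range(1, l + 1))

print(n_cvrp(10))  # 58941091, matching N for the 10-location instance
```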
\begin{figure}[ht]
\centering
\begin{subfigure}{1\columnwidth}
\centering
\includegraphics[width=0.85\columnwidth]{figures/CVRP_distribution_linear.pdf}
\caption{}
\label{fig:CVRP_linear}
\end{subfigure}
\begin{subfigure}{1\columnwidth}
\centering
\includegraphics[width=0.85\columnwidth]{figures/CVRP_distribution_log.pdf}
\caption{}
\label{fig:CVRP_log}
\end{subfigure}
\caption{Distribution of solution qualities for the randomly generated 10-location vehicle routing problem, shown with (a) a linear scale and (b) a log scale.}
\label{fig:CVRP_distribution}
\end{figure}
To generate a quality distribution, the package vector was randomly generated from integers on the interval [5,30]; a symmetric cost matrix was generated with depot-to-location costs drawn from integers [10,20] and inter-location costs from integers [1,15]; and the vehicle capacity was taken as 20. Note that integer values were used in the cost matrix to increase degeneracy in the quality distribution of the solution space, providing some variety compared to the portfolio optimisation problem, which shows virtually no degeneracy. The distribution of solution qualities is shown in \cref{fig:CVRP_linear}, where it is clear that the qualities approximate a normal distribution. The same distribution is shown with a log scale in \cref{fig:CVRP_log}, where it becomes clearer that the distribution is not perfectly normal; rather, the distribution at lower costs is somewhat discontinuous and truncated. In fact, there are 12 solutions which share the lowest cost.
The simulation results of the MAOA, GAS and RGAS applied to this 10-location CVRP are shown in \cref{fig:CVRP_results} along with the classical random-sampling method, where the success probability refers to the probability of one of the 12 highest quality (lowest cost) solutions being measured.
The GAS method is included to show how an algorithm unrestricted in circuit depth would perform. The rotation count that would be required to produce complete convergence into a single optimal solution over a solution space of this size is given by $r_c=6,029$. Since a user knows the size of the solution space, but not the degeneracy of the optimal solution(s), this is the user-specified maximum rotation count required for the GAS in this instance. In contrast, the MAOA and the RGAS are tested with restricted circuit depths corresponding to rotation counts of $r \in \{8,16,32,64,128\}$. In all cases, the quantum algorithms provide speedup over the classical method, but the amount of speedup increases with increasing rotation counts. The MAOA also outperforms the RGAS, as predicted in \cref{sec:GAS}. To clarify, due to the higher upfront computational expense of the threshold finding process, the RGAS begins to perform better than the MAOA as $r$ approaches $r_c$, but this is not likely to be relevant in the context of large solution spaces and restricted circuit depth near-term quantum computing.
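The quoted convergence rotation counts are consistent with the usual Grover estimate $r_c \approx \frac{\pi}{4}\sqrt{N}$ for a single optimal solution; a quick check (the floor convention used here is an assumption):

```python
import math

def r_c(N):
    # Rotations for complete convergence to a single optimal solution:
    # r_c ~ (pi / 4) * sqrt(N); floor convention assumed.
    return int(math.pi / 4 * math.sqrt(N))

print(r_c(58_941_091))  # 6029, the value quoted for the 10-location CVRP
print(r_c(61_757_600))  # 6172, the value quoted for the portfolio problem
```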
\begin{figure}[H]
\centering
\includegraphics[width=0.95\columnwidth]{figures/CVRP_BT_vs_rGAS.pdf}
\caption{Simulation results for a large vehicle routing problem. The MAOA consistently outperforms the RGAS, both of which significantly outperform a classical random-sampling approach.}
\label{fig:CVRP_results}
\end{figure}
\begin{figure*}
\centering
\includegraphics[width=1.3\columnwidth]{figures/Portfolio_distribution.pdf}
\caption{Distribution of expected returns and risks associated with each portfolio choice for a large portfolio optimisation problem.}
\label{fig:Portfolio_distribution}
\end{figure*}
\subsection{Portfolio optimisation}
It was demonstrated by \citet{portfolio} that a portfolio optimisation problem based on the Markowitz model \cite{markowitz_portfolio} is a suitable candidate for application of the QWOA framework; as such, it makes an equally suitable candidate for the MAOA and GAS methods. This work gives a somewhat different treatment to the problem, however, by treating the two components of the objective function, risk and expected return, separately. Given $n$ different stocks/assets, a particular choice of portfolio can be expressed by $\bm{z}=(z_1,z_2,...,z_n)$, where for each asset $i$ we have $z_i \in \{-1,0,1\}$, with each value corresponding to a short position, no position, or a long position, respectively. In addition, the portfolio positions are constrained by the net position, $I$, such that:
\begin{equation}
\sum_{i=1}^{n}{z_i}=I.
\end{equation}
Under the Markowitz model, using data for the daily expected percentage return of asset $i$, $R_i$, and the co-variance values between assets $i$ and $j$, $\sigma_{ij}$, for a group of $n$ assets, the expected return and associated risk for each portfolio choice can then be characterised by:
\begin{equation}
\label{eq:return}
\text{Return} = \sum_{i=1}^{n}{R_i z_i}
\end{equation}
\begin{equation}
\label{eq:risk}
\text{Risk} = \sum_{i,j=1}^{n}{\sigma_{ij} z_i z_j}.
\end{equation}
The number of unique and valid portfolio choices available, or the cardinality of the solution space, is given by:
\begin{equation}
N_P(n,I) = \sum_{s=0}^{\left \lfloor{\frac{n-I}{2}}\right \rfloor}{{n \choose I+s}{n-I-s \choose s}},
\end{equation}
where $s$ represents the number of possible shorts, the first term represents the placement of longs within $\bm{z}$, and the second term represents the subsequent placement of the shorts. For the purpose of generating an example solution space and corresponding quality distribution, data for the daily adjusted close prices from 01/01/2019 to 31/12/2020 was analysed for 20 different stocks from the ASX.20 index: AMP, ANZ, AMC, BHP, BXB, CBA, CSL, IAG, MQG, GMG, NAB, RIO, SCG, S32, TLS, WES, BKL, CMW, HUB, ALU.
The net position, $I$, was taken to be 7, such that the total number of solutions or portfolio choices was $N(20,7)=61,757,600$. The distribution of risks and returns for the resulting solution space is shown in \cref{fig:Portfolio_distribution}, where risks are scaled down by a factor of $100$. The distribution resembles a 2D Gaussian, skewed towards high-risk portfolios. In order to navigate the two-dimensional optimisation landscape, the marking function uses two thresholds, one for risk and one for return. It is presumed that a balance between optimising for low risk while still maximising returns is desirable. As such, the risk threshold is fixed at the value corresponding to the lowest $10\%$ of all solutions, transforming the optimisation problem into a maximisation of expected return within the subspace of low-risk solutions. As discussed in more detail in \cref{sec:MAOA_benefits}, the MAOA has a unique ability to navigate multidimensional optimisation landscapes; because the GAS/RGAS do not share this ability, the problem must be transformed into a one-dimensional problem to allow for effective comparison between the different methods.
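The quoted cardinality can be verified directly from the expression for $N_P(n,I)$; a short sketch:

```python
from math import comb

def n_portfolio(n, I):
    # Cardinality of the constrained portfolio solution space, N_P(n, I):
    # s shorts, I + s longs, remaining assets take no position.
    return sum(comb(n, I + s) * comb(n - I - s, s)
               for s in range((n - I) // 2 + 1))

print(n_portfolio(20, 7))  # 61757600, matching the quoted N(20, 7)
```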
The simulation results for this problem are shown in \cref{fig:Portfolio_results}, where the probability of success is taken as the probability of finding the single highest-return portfolio from those within the lowest $10\%$ for risk. The user-specified maximum rotation count required for the GAS in this instance is given by $r_c=6,172$, derived directly from the known size of the solution space. In contrast, the MAOA and the RGAS are tested with restricted circuit depths corresponding to rotation counts of $r \in \{32,64,128,256,512\}$. These rotation counts are higher than for the CVRP because here there is only a single optimal solution, compared with 12 for the CVRP; the smaller ratio of optimal solutions requires higher rotation counts for comparable performance. In any case, the results are consistent with those for the CVRP simulations. In all cases, the quantum algorithms provide speedup over the classical method, with the amount of speedup increasing with increasing rotation count. The MAOA also once again consistently outperforms the RGAS.
\begin{figure}[ht]
\centering
\includegraphics[width=0.95\columnwidth]{figures/Portfolio_BT_vs_rGAS.pdf}
\caption{Simulation results for a large portfolio optimisation problem. The MAOA consistently outperforms the RGAS, both of which significantly outperform a classical random-sampling approach.}
\label{fig:Portfolio_results}
\end{figure}
\subsection{Simulating an arbitrarily large problem}
The above problems are of a scale where the quality distributions can be readily computed on a desktop computer. In reality, the MAOA is designed to be applied to problems with significantly larger solution spaces, which are intractable through classical methods. It is therefore valuable to understand how the MAOA performs in the limit of very large problems. As can be seen in the CVRP and portfolio optimisation problems, combinatorial optimisation problems often possess solution spaces which have qualities that are normally distributed. It is therefore possible to simulate an arbitrarily large problem by using the standard normal distribution. It is not unreasonable to think that a large enough problem would have a quality distribution which is normally distributed and which approximates a continuous distribution, even at large deviations from the mean. For such a problem, it is therefore possible to take the marked-solution ratio, $\rho(T)$, as the cumulative distribution function of the standard normal distribution at $T$. This allows for the RGAS and MAOA to be simulated in optimising an arbitrarily large normally distributed problem.
For the purpose of assessing the behaviour of the RGAS and MAOA algorithms in the limit of large problems, they will be analysed for a constant restricted rotation count, $r=64$. In each case, they will be seeking a solution within a certain most-optimal target group, forming a fraction of the solution space referred to as the target ratio, $\mu$. A smaller target ratio corresponds with a search for higher quality solutions. The RGAS finds the target high-quality solutions by sequentially partitioning the solution space into smaller and smaller improving subsets, until a solution within the target group is measured. On the other hand, for a maximum rotation count of $r=64$, the MAOA repeatedly measures from a state prepared with a threshold which produces maximum amplification. Since this is known to occur when the probability of successfully measuring a marked solution satisfies $P(r,\rho(T)) \leq \frac{1}{40}$, with the marked solutions amplified by a factor of $(2r+1)^2$, the final marked-solution ratio is approximated by:
\begin{equation}
\label{eq:final_marked_ratio}
\rho(r)=\frac{1}{40(2r+1)^2}.
\end{equation}
So in this case, the MAOA will be somewhat regularly measuring marked solutions within roughly the top $\rho=\num{1.5e-6}$, regardless of the target ratio, $\mu$. It is through this method that the MAOA seeks to find a solution from the target group. The performance of each algorithm has been simulated over a range of target ratios, $\mu \in \{10^{-6},10^{-7},10^{-8},10^{-9},10^{-10}\}$, and the results are shown in \cref{fig:SND_results}. The MAOA is shown to perform better than the RGAS in the limit of small $\mu$; in other words, the MAOA consistently finds solutions of the highest qualities faster than the RGAS.
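As a quick numerical check of \cref{eq:final_marked_ratio} (a sketch; the helper name `final_marked_ratio` is ours):

```python
def final_marked_ratio(r: int) -> float:
    """Marked-solution ratio at which the maximally amplified state is
    repeatedly measured, given a restricted rotation count r."""
    return 1.0 / (40.0 * (2 * r + 1) ** 2)

rho = final_marked_ratio(64)
print(f"rho(64) = {rho:.2e}")   # ~1.5e-6, matching the value quoted in the text
```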
\begin{figure}[H]
\centering
\includegraphics[width=0.95\columnwidth]{figures/SND_r64_logplots.pdf}
\caption{The simulated performance of both the MAOA and RGAS in optimising arbitrarily large, normally distributed solution spaces. Theoretically predicted curves for the MAOA are also included, which are derived in \cref{sec:MAOA_analysis}. The MAOA consistently outperforms the RGAS when searching for the highest-quality solutions.}
\label{fig:SND_results}
\end{figure}
\section{Analysis of the Maximum Amplification Optimisation Algorithm in the large problem limit}
\label{sec:MAOA_analysis}
To understand how well the MAOA performs relative to classical random sampling of the solution space, it is useful first to quantify the probability of successfully finding a target solution in the classical case. Since the behaviour of interest is that in the limit of very large solution spaces, it is reasonable to ignore the removal of sampled solutions from the solution space (i.e. sampling with repeats). Note this assumption is only reasonable when the target ratio is much larger than that for a single target solution, $\mu\gg\frac{1}{N}$. Given this assumption, the probability of success in the classical case is given by:
\begin{equation}
P_C(e,\mu)=1-(1-\mu)^{e}.
\end{equation}
Note that $e$ refers to the computational effort, which in this case can also be understood as the number of classical samples taken, since the two are equivalent here. The equation is best understood as being derived from the complement of a successful measurement, i.e. the probability of success after $e$ samples is equal to one minus the probability of no successes after $e$ samples. The probability of failing to sample a target solution $e$ times in a row is $(1-\mu)^e$, since $\mu$ is the probability of a successful sample.
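This complement argument can be verified directly by simulation. The sketch below (our illustration; the function names are hypothetical) compares the closed form $P_C(e,\mu)$ with a Monte Carlo estimate of sampling with repeats:

```python
import random

def p_classical(e: int, mu: float) -> float:
    """P_C(e, mu) = 1 - (1 - mu)^e: probability that at least one of e
    independent samples lands in the target fraction mu."""
    return 1.0 - (1.0 - mu) ** e

def p_classical_mc(e: int, mu: float, trials: int = 50_000) -> float:
    """Monte Carlo estimate: each trial draws e uniform samples and
    succeeds if any falls below mu (sampling with repeats)."""
    hits = sum(
        any(random.random() < mu for _ in range(e))
        for _ in range(trials)
    )
    return hits / trials

random.seed(1)
e, mu = 50, 0.02
print(p_classical(e, mu))      # exact: ~0.636
print(p_classical_mc(e, mu))   # close to the exact value
```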
The equation for the probability of success for the MAOA can be derived in a similar fashion, since the MAOA, once an appropriate threshold has been found, essentially reduces to repeated preparation and subsequent measurement of the maximally amplified state. The equation is therefore given by:
\begin{equation}
\label{eq:success_prob_MAOA}
P_Q(e,\mu,r)=1-(1-\mu (2r+1)^2)^{\frac{e}{2r+1}}.
\end{equation}
The $(2r+1)^2$ term relates directly to the maximum amplification of marked states due to $r$ rotations. The power, $\frac{e}{2r+1}$, gives the number of measurements, since each state preparation and measurement requires computational effort $(2r+1)$. As can be seen in \cref{fig:SND_results}, this analytically derived expression for the success probability is consistent with simulation results in the limit of small target ratios (i.e. those significantly smaller than the final value for $\rho$).
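For concreteness, the two success probabilities can be compared under a fixed effort budget. The following sketch (our illustration; it assumes $\mu(2r+1)^2 \ll 1$, so that \cref{eq:success_prob_MAOA} remains valid) evaluates both:

```python
def p_quantum(e: float, mu: float, r: int) -> float:
    """P_Q(e, mu, r): success probability after spending effort e on
    repeated preparation/measurement of the maximally amplified state."""
    return 1.0 - (1.0 - mu * (2 * r + 1) ** 2) ** (e / (2 * r + 1))

# For a fixed effort budget, amplification makes success far more likely
# than classical sampling (valid while mu*(2r+1)^2 stays well below 1).
mu, r, e = 1e-7, 64, 100_000
p_c = 1.0 - (1.0 - mu) ** e
p_q = p_quantum(e, mu, r)
print(f"P_C = {p_c:.4f}, P_Q = {p_q:.4f}")
```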
Rearranging each of these equations for the effort in the classical case, $e_C$, and the quantum case, $e_Q$, taking their ratio, and taking the limit of small target ratios, an expression for the speedup of the MAOA over classical random sampling can be derived:
\begin{equation}
\lim_{\mu\to0} \frac{e_C}{e_Q}=\lim_{\mu\to0} \frac{\log(1-\mu(2r+1)^2)}{(2r+1)\log(1-\mu)}=2r+1.
\end{equation}
Note that this result can also be understood intuitively: each state preparation provides an amplification of $(2r+1)^2$ at the expense of $(2r+1)$ computational effort, so what remains is a speedup of $(2r+1)$. This result implies that the MAOA is capable of producing a speedup (over classical random sampling) in finding near-optimal or optimal solutions to large combinatorial optimisation problems, and that this speedup grows linearly in the achievable circuit depth. Since the MAOA produces states in which optimal solutions are maximally amplified, and does so without a computationally expensive variational procedure, it represents the upper limit of speedup available in the context of restricted circuit depths using a deterministic quantum amplitude amplification protocol.
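The convergence of the effort ratio to $2r+1$ can be confirmed numerically (a sketch; the helper name `speedup` is ours):

```python
import math

def speedup(mu: float, r: int) -> float:
    """e_C / e_Q for a fixed success probability, obtained by inverting
    P_C and P_Q (the target probability cancels in the ratio)."""
    return math.log(1.0 - mu * (2 * r + 1) ** 2) / (
        (2 * r + 1) * math.log(1.0 - mu)
    )

r = 64
for mu in (1e-6, 1e-8, 1e-10):
    print(f"mu = {mu:.0e}  speedup = {speedup(mu, r):.3f}")
# The ratio approaches 2r + 1 = 129 as mu -> 0.
```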
It is worth noting, however, that the amplification of optimal solutions produced by the MAOA at a given restricted circuit depth is maximal only within the class of ``amplitude amplification" algorithms, that is, algorithms which amplify target states through the interleaved application of phase shifts and diffusion/mixing operators. Examples include the rotation and diffusion operators of Grover's algorithm, as well as the alternating operator ansatz of QAOA/QWOA. It may be possible to project into high-quality solution states with significantly less circuit depth by using circuit ans\"atze outside of the ``amplitude amplification" framework. One such example is the filtering variational quantum algorithm by \citet{amaro2022filtering}, which has shown promise to converge to optimal solutions in significantly fewer operations compared with QAOA for a MaxCut problem on random cubic weighted graphs. However, an overall speedup is yet to be demonstrated for such methods, due to the computational expense of the required variational procedures.
\label{sec:MAOA_benefits}
The MAOA presents additional useful features beyond its demonstrated speedup:
\begin{enumerate}
\item The analytically derived expression in \cref{eq:success_prob_MAOA} can be used to inform a user how likely they are to have found a solution within a particular target ratio, $\mu$, after a specified amount of computational expense.
\item The MAOA also provides additional flexibility for multi-dimensional optimisation problems. For example, in the portfolio problem, the peak finding process, implemented over the risk threshold, can be used to isolate a known fraction of the lowest risk options. Fixing the risk threshold and transitioning to optimisation over the return threshold then allows the user to optimise within this space of lowest risk options. Note that this multi-stage optimisation procedure can be generalised to other problems too.
\item Repeated sampling of the MAOA amplified state produces a large set of near-optimal solutions. In contrast, the RGAS only produces one solution at each of the measured improving qualities. At the tail end of the RGAS procedure, these near optimal/improving solutions would be measured extremely rarely. On the other hand, for the MAOA, the near-optimal solutions are measured regularly throughout the duration of sampling. Note that the ratio of these regularly sampled marked solutions can be approximated as per \cref{eq:final_marked_ratio}. This is beneficial in acquiring a significantly larger group of near-optimal solutions, which may be of interest in some cases. For example, it may allow one to then select between solutions for features not accounted for within the optimisation procedure or alternatively to simply have access to back-up high quality solutions.
\end{enumerate}
\section{Conclusion}
\label{sec:Conclusion}
This paper serves as a comprehensive introduction to the Maximum Amplification Optimisation Algorithm (MAOA), a near-term quantum algorithm designed for finding high quality solutions to large combinatorial optimisation problems, while constrained to restricted circuit depths. Other existing near-term algorithms, QAOA and QWOA, focus on producing amplified states in which the expected value of quality has been optimised. When measuring from such an amplified state, we expect to find a solution of high quality. However, as we have demonstrated in this paper, the highest quality solutions in such a state are not amplified to the maximum extent possible.
The MAOA shifts the paradigm by seeking an amplified state in which the highest quality solutions are maximally amplified, then repeatedly sampling from the maximally amplified state. Since the highest quality solutions are amplified to the maximum extent possible, subject to a given circuit depth, the frequency with which they will be measured is also maximised, hence delivering maximum possible speedup.
Perhaps more importantly, we demonstrate that these maximally amplified states can be produced via a known set of parameters, which entirely removes the computationally expensive variational process typically associated with the QAOA and QWOA. As such, the MAOA is capable of producing optimal solutions to large combinatorial optimisation problems faster than classical exhaustive search, a result which has remained elusive for variational algorithms.
\section{Acknowledgements}
We would like to thank Sam Marsh and Edric Matwiejew for their important contribution to this work. To Sam, thank you for providing a number of important insights, including inspiration for the graph reduction process, as well as pointing out the connections between QWOA and Grover's search. To Edric, thank you for all your work developing and assisting in the use of the QWOA simulation software with which the proficiency of a binary marking function was first noticed, leading the way to the MAOA's development. This work was also supported with resources provided by the Pawsey Supercomputing Centre, with funding from the Australian Government and the Government of Western Australia.
\section{Introduction}\label{sec:intro}
Many interesting physical phenomena arise when quantum systems are
subjected to the influence of external time-dependent conditions. For
instance, accelerated neutral objects may radiate photons, even in the
absence of permanent dipole moments. This is the so-called {\em motion
induced radiation} or Dynamical Casimir effect (DCE)~\cite{reviews}. On
the other hand, neutral objects moving sideways with constant relative speed
may influence each
other by a frictional force proportional to a power of the velocity
(quantum friction)~\cite{qfriction}.
In this work, we study quantum dissipative effects which are due to the motion
of an atom coupled to a vacuum real scalar field. We consider
the cases of an isolated atom and an atom
in the presence of a (planar) plate. The latter will be assumed to
behave as an `imperfect' mirror, regarding its reflection and transmission
properties for vacuum-field waves.
Our description of the microscopic degrees of freedom will be similar
for both the plane and the atom. Indeed, in both cases, they
will be assumed to be modes linearly coupled to the vacuum field,
and to have a harmonic-oscillator like action, with an intrinsic frequency
parameter. We assume the plate to be homogeneous, so that the frequency will be
one and the same for all the points on the plate. This is
essentially the model considered in~\cite{Farias:2014wca} in which
we analyzed quantum friction, except that here we also include a damping
parameter, to account for losses in the dielectric. For the point-like
particle, on the other hand, we use a single harmonic oscillator, with a
linear coupling to the vacuum field. In Ref.~\cite{ludmila} thermal corrections were
also considered.
We use a model based on the assumptions above to derive the vacuum
persistence amplitude as a functional of the trajectory of the particle,
for different kinds of motion. Our goal is to explore the internal
excitation process of the atom with the emission of a photon, the pair creation of
photons due to the DCE, the appearance of quantum contactless friction, and the
corrections to free emission which are due to the presence of the mirror.
Regarding related works, the relevance of the internal degrees of freedom
of the plate in the context of optomechanics has been analyzed and reviewed
in Ref.~\cite{HuMof}. In Ref.~\cite{PAMN}, the radiation produced by an
atom moving non-relativistically in free space has been studied in detail
(see also Ref.~\cite{Law}). It was shown there that, when the atom
oscillates with a mechanical frequency smaller than the internal
excitation energy, the radiation produced, that consists of photon pairs,
can be considered as a microscopic counterpart of the DCE. In the opposite
regime, the atom becomes mechanically excited, and then emits single
photons returning to its ground state. Accelerated harmonic oscillators
have also been considered in the context of the Unruh effect, as toy models
for particle detectors~\cite{Unruheffect}.
We generalize here previous analyses, to account for the presence of a
plate, treating in a unified fashion photon emission and quantum friction.
The possibility of enhancing the quantum friction forces by considering
arbitrary angles between the atom's direction of motion and the surface has
been discussed in Ref.\cite{DalvitBelen}. It has also been shown that the
presence of a plate may influence the fringe visibility in an atomic
interference experiment (see Refs.\cite{Villanueva,Ccappa}). A molecule
moving with constant speed over a dielectric with periodic grating can show
parametric self-induced excitation and, in turn, it can produce a
detectable radiation \cite{Capasso}. Note that this situation can be
mimicked by the superposition of constant velocity and oscillatory motions
over a flat surface. Although different, this phenomenon is reminiscent of the
classical Smith-Purcell radiation for charged objects moving with constant
velocity over a periodic grating, and of its eventual influence on a
double-slit experiment with electrons \cite{quantumSmithPurcell}. The
problem of moving atoms near a plate is also relevant when discussing
dynamical corrections to the Casimir-Polder interaction
\cite{CasimirPolder}.
This paper is organized as follows: in Section~\ref{sec:thesys}, we
introduce the model for a particle in free space and define its effective
action. Then in Section~\ref{sec:no mirror} we evaluate that effective
action perturbatively in the coupling between the atom and field. To the
leading order in a weak-coupling expansion, there is a threshold for the
imaginary part of the effective action, associated with the internal
excitation of the atom before radiation emission. The next-to-leading order (NTLO) shows the
combination of this effect and the usual Dynamical Casimir effect, that
does not involve such excitation. In Section \ref{sec:model mirror} we
introduce the model for the imperfect mirror, considering quantum harmonic
oscillators as microscopic degrees of freedom coupled to an environment as
a source of internal dissipation. In Section~\ref{sec:mirror} we evaluate
the vacuum persistence amplitude for the case of an atom moving near the
plate, up to first order in both couplings (atom-field and mirror-field).
We apply the general expressions for the imaginary part of the effective
action to the calculation of dissipative effects, for qualitatively
different particle paths, and look for effects of quantum friction and
motion induced radiation separately. We will see that, due to resonant
effects, the internal dissipation of the mirror is crucial to obtain
finite results. We present our conclusions in Section~\ref{sec:conc}.
\section{The system and its effective action: isolated particle}\label{sec:thesys}
Throughout this paper, we consider the non-relativistic motion of a point
particle in three spatial dimensions, with a trajectory described by $t \to {\mathbf r}(t)
\in {\mathbb R}^3$, with $|\dot{\mathbf r}(t)| < 1$ (we use natural
units, such that $c = 1$ and $\hbar =1$).
We then introduce the in-out effective action,
$\Gamma[{\mathbf r}(t)]$, a functional of the particle's trajectory,
which is defined by means of the expression:
\begin{equation}\label{eq:defgrt}
e^{ i \Gamma[{\mathbf r}(t)]} \;\equiv\;
\frac{\int {\mathcal D}\phi \; e^{i {\mathcal S}(\phi)}}{\int {\mathcal
D}\phi \; e^{i {\mathcal S}_0(\phi)}}
\,=\,
\frac{\int {\mathcal D}\phi \; e^{i {\mathcal S}_I(\phi)} \;
e^{i {\mathcal S}_0(\phi)}}{\int {\mathcal
D}\phi \; e^{i {\mathcal S}_0(\phi)}}
\,\equiv \,
\langle \, e^{i {\mathcal S}_I(\phi)} \, \rangle_0 \;,
\end{equation}
where the functional integrals are over $\phi(x)$, a vacuum real scalar
field in $3+1$ dimensions, equipped
with an action ${\mathcal S}$, which consists of two terms:
\begin{equation}\label{eq:defs}
{\mathcal S}(\phi) \;=\; {\mathcal S}_0(\phi) \,+\, {\mathcal
S}_I^{(p)}(\phi) \;.
\end{equation}
${\mathcal S}_0$ denotes the part of the action which describes the
free propagation of the field:
\begin{equation}
{\mathcal S}_0(\phi) \;=\; \frac{1}{2} \,\int d^4x \left[ \partial_\mu
\phi (x) \partial^\mu \phi (x) + i \epsilon \phi^2(x) \right] \;,
\;\; x = (x^0,x^1,x^2,x^3) \;,
\end{equation}
while ${\mathcal S}_I^{(p)}$ represents the coupling of the scalar field to the
particle. In the kind
of model that we consider here, it is assumed to be
quadratic, namely:
\begin{equation}
{\mathcal S}_I^{(p)}(\phi) \;=\; -\frac{1}{2} \int_{x,y} \phi (x)
V_{p}(x,y)
\phi(y) \;,
\end{equation}
where we have introduced a shorthand notation for the integration over
space-time points. The kernel $V_p$ is a `potential' resulting from the
integration of microscopic degrees of freedom; it can be regarded as a
symmetric function of $x$ and $y$. In what follows, we consider its form in more detail.
Assuming a single degree of freedom corresponding to a bosonic oscillator,
endowed with a coordinate $q$ living in the particle's
internal space, the potential $V_p$ stems from the functional integral over $q$:
\begin{equation}\label{eq:gq}
e^{- \frac{i}{2} \int_{x,y} \phi(x) V_p(x,y) \phi(y) }
\;=\;
\int {\mathcal D}q \; e^{i {\mathcal S}_p(q,\phi;{\mathbf r})} \;,
\end{equation}
where the particle's action, ${\mathcal S}_p$, is given by:
\begin{align}\label{eq:defspo}
{\mathcal S}_p(q,\phi;{\mathbf r}) &=\;{\mathcal S}_p^{(0)}(q)
\,+\,{\mathcal S}_p^{int}(q,\phi;{\mathbf r}) \;,
\nonumber\\
{\mathcal S}^{(0)}_p(q) &=\;\frac{1}{2}\int dt \,
\big( \dot{q}^2 - ( \Omega_p^2 - i \epsilon) q^2 \big) \;,
\nonumber\\
{\mathcal S}_p^{int}(q,\phi;{\mathbf r}) &=\; g \,
\int dt \, q(t)
\; \phi(t,{\mathbf r}(t)) \;.
\end{align}
Here, $\Omega_p$ is the harmonic oscillator frequency, and $g$ determines
the coupling between the oscillator and the real scalar field. Note that
$g$ has the dimensions of $[{\rm mass}]^{1/2}$.
We see that:
\begin{equation}
V_p(x,y) \;=\; g^2 \,
\delta\big({\mathbf x} - {\mathbf r}(x^0)\big)
\,
\Delta_p(x^0-y^0)
\,
\delta\big({\mathbf y} - {\mathbf r}(y^0)\big)
\end{equation}
where:
\begin{equation}\label{eq:defdp}
\Delta_p(x^0 - y^0) \;=\; \int \frac{d\nu}{2\pi} \, e^{-i \nu
(x^0-y^0)} \, \widetilde{\Delta}_p(\nu) \;,\;\;\;
\widetilde{\Delta}_p(\nu) \,\equiv \, \frac{1}{\nu^2 - \Omega_p^2
+ i \epsilon} \;.
\end{equation}
We could proceed in an alternative way, and first integrate out the scalar field in order to obtain an effective action
for the harmonic oscillator. Although we will not follow this approach here, for later use we note that
such an integration gives, when ${\mathbf r}=0$,
\begin{equation}
{\mathcal S}_p^{eff}(q) = {\mathcal S}_p^{(0)}(q)-\frac{g^2}{2}\int dt\, dt' q(t)G_0(t-t')q(t')\, ,
\end{equation}
where $G_0(t-t')$ is the Feynman propagator for the scalar field evaluated at coincident spatial points
\begin{equation}
G_0(t-t')=\int \frac{d^4 p}{(2\pi)^4}\frac{e^{-i p_0(t-t')}}{p_0^2 -{\mathbf p}^2+i\epsilon}
\end{equation}
The integral over the spatial momentum is linearly divergent, so the propagator becomes proportional
to $\Lambda \delta(t-t')$, where $\Lambda$ is a $3$-momentum cutoff. This divergence produces a
shift $\delta\Omega$ in the natural frequency of the oscillator
\begin{equation}\label{eq:renOmega}
\Omega_p + \delta\Omega = \Omega_p^{(ren)}\, ,\,\, \delta\Omega=-\frac {g^2}{4\pi^2}\frac{\Lambda}{\Omega_p^{(ren)}}\, .
\end{equation}
The divergence is of course a consequence of considering point-like interactions with the field.
The effective action, $\Gamma_p[{\mathbf r}(t)]$, is a functional of the trajectory and is given by:
\begin{equation}\label{eq:gp}
e^{i\Gamma_p[{\mathbf r}(t)]} \;=\; \left\langle e^{-\frac{i}{2} \int_{x,y} \phi(x)
V_p(x,y) \phi(y)} \right\rangle_0 \;,
\end{equation}
where the average is taken with the free field action. The imaginary part of the effective action contains the information about the dissipative effects due to the coupling between the
moving harmonic oscillator and the field.
\section{Accelerated oscillator in free space}\label{sec:no mirror}
A perturbative expansion of $\Gamma_p[{\mathbf r}(t)]$ in powers of $V_p$
will produce a series of terms: \mbox{$\Gamma_p \,=\, \Gamma^{(1)}_p +
\Gamma^{(2)}_p + \ldots$}, where the index denotes the order in $V_p$. We
will consider just the first two terms in what follows, which already give
non-trivial results. The first-order term is given by:
\begin{equation}
\Gamma^{(1)}_p = - \frac{1}{2} \, \int_{x,y} V_p(x,y) \big\langle
\phi(x) \phi(y)\big\rangle_0 \;,
\end{equation}
where $\langle \phi(x) \phi(y) \rangle_0 \equiv G_0(x,y)$, is the
Feynman propagator:
\begin{align}
G_0(x,y) &=\; \int \frac{d^4p}{(2\pi)^4} \, e^{-i p^0 (x^0-y^0)+i {\mathbf
p}\cdot ({\mathbf x} - {\mathbf y})} \, \widetilde{G}_0(p)
\;,\nonumber\\
\widetilde{G}_0(p) &=\; \frac{i}{(p^0)^2 - {\mathbf p}^2 + i \epsilon} \;,
\end{align}
while the second-order one, $\Gamma^{(2)}_p$, becomes:
\begin{equation}
\Gamma^{(2)}_p \;=\; \frac{i}{4} \, \int_{x,y,x',y'}
\, V_p (x,y) \, V_p (x',y') \, G_0(x,x') \, G_0(y,y') \;.
\end{equation}
Let us first evaluate $\Gamma^{(1)}_p$.
Introducing the explicit forms of $V_p$ and $G_0$, we see that:
\begin{equation}
\Gamma^{(1)}_p \;=\; - \frac{g^2}{2} \,\int dx^0 \, \int dy^0 \;
\Delta_p(x^0-y^0) \;
G_0(x^0-y^0,{\mathbf r}(x^0)- {\mathbf r}(y^0)) \;,
\end{equation}
and, in terms of the respective Fourier transforms,
\begin{align}
\Gamma^{(1)}_p \;=\; - i \frac{g^2}{2}&\int \frac{d\nu}{2\pi} \,
\int \frac{dp^0}{2\pi} \,\int \frac{d^3p}{(2\pi)^3}\,
\int dx^0 \, \int dy^0 \,\Big[ \widetilde{\Delta}_p(\nu) \nonumber\\
&\times \frac{e^{-i (\nu + p^0) (x^0 - y^0)+ i {\mathbf p} \cdot ( {\mathbf r}(x^0)
- {\mathbf r}(y^0))}}{(p^0)^2 - {\mathbf p}^2 + i \epsilon} \Big]
\;.
\end{align}
Performing the shift $\nu \to \nu - p^0$,
\begin{equation}
\Gamma^{(1)}_p =\; \frac{1}{2}\,\int \frac{d\nu}{2\pi} \,
\int \frac{d^3p}{(2\pi)^3}\, f(-{\mathbf p}, -\nu)
f( {\mathbf p}, \nu) \;
\Pi(\nu,{\mathbf p},\Omega_p) \;,
\end{equation}
\begin{equation}\label{eq:defpi1p}
\Pi(\nu,{\mathbf p},\Omega_p) \;\equiv\; - i g^2 \,
\int \frac{dp^0}{2\pi} \; \frac{1}{(p^0 - \nu)^2 - \Omega_p^2 + i
\epsilon} \,\frac{1}{(p^0)^2 - {\mathbf p}^2 + i \epsilon} \;,
\end{equation}
where we have introduced:
\begin{equation}
f({\mathbf p},\nu) \,=\,
\int dt \, e^{-i {\mathbf p}\cdot {\mathbf r}(t)} \;
e^{i\nu t} \;.
\end{equation}
After some algebra, and introducing a Feynman parameter $\alpha$,
($p \equiv |{\mathbf p}|$)
\begin{align}\label{eq:defpi}
\Pi(\nu, p,\Omega_p) &=\; \frac{g^2}{4} \; \int_0^1 \, d\alpha \,
\frac{1}{[D(\alpha, \nu, p)]^{3/2}} \nonumber\\
D(\alpha, \nu, p) &\equiv\; \alpha \, \Omega_p^2 + (1-\alpha) \, p^2 -
\alpha (1-\alpha) \, \nu^2 - i \epsilon \;.
\end{align}
Therefore,
\begin{equation}
{\rm Im} [ \Gamma^{(1)}_p]\;=\; \frac{1}{2} \,
\int \frac{d\nu}{2\pi} \int \frac{d^3p}{(2\pi)^3} \;
\big| f({\mathbf p}, \nu) \big|^2 \;
{\rm Im}\big[\Pi(\nu, p,\Omega_p) \big] \;,
\end{equation}
where, from (\ref{eq:defpi}), one finds:
\begin{equation}\label{eq:ImPi}
{\rm Im}\big[\Pi(\nu, p,\Omega_p) \big] \;=\; \frac{\pi g^2}{ 2 p \Omega_p} \,
\big[ \delta( \nu - p - \Omega_p) + \delta( \nu + p + \Omega_p)
\big] \;.
\end{equation}
Thus,
\begin{equation}\label{res:order1}
{\rm Im} [ \Gamma^{(1)}_p]\;=\; \frac{g^2}{8 \,\Omega_p} \,
\int \frac{d^3p}{(2\pi)^3} \;\frac{1}{p} \,
\big| f({\mathbf p}, p + \Omega_p) \big|^2 \;.
\end{equation}
Since $p \geq 0$, note that, for this order to produce a non-vanishing
imaginary part, the frequency must overcome a threshold, namely, $|\nu| >
\Omega_p$. Of course, also $|\nu| > p$ must be satisfied. Those thresholds
may be identified as the frequencies for which the two $0+1$-dimensional
propagators involved in a 1-loop Feynman diagram become on-shell (one of
those propagators has a `mass' equal to $p$ and the other to $\Omega_p$).
On physical grounds, the emission is produced when the center of mass motion is capable
of exciting the harmonic oscillator, and this happens only above the threshold.
As shown in Ref.\cite{PAMN}, the process involves the emission of single ``photons''
as opposed to the case of the usual DCE, in which there is pair creation.
Let us now consider the evaluation of $\Gamma_p^{(2)}[{\mathbf r}(t)]$.
\begin{align}\label{eq:gp_1}
\Gamma^{(2)}_p &=\; \frac{1}{4} \, \int
\frac{d^3{\mathbf p}}{(2\pi)^3}
\int
\frac{d^3{\mathbf q}}{(2\pi)^3}
\int\frac{dp^0}{2\pi}
\int\frac{dq^0}{2\pi} \;
\int\frac{d\nu}{2\pi} \;
f({\mathbf p},p^0)
f({\mathbf q},q^0 ) \nonumber\\
& \times f(-{\mathbf p}, -p^0 - \nu) f(-{\mathbf q}, -q^0 +\nu)
\; C(p^0, q^0, \nu, {\mathbf p}, {\mathbf q}) \;,
\end{align}
with the kernel
\begin{align}\label{eq:defc}
C(p^0, q^0, \nu, {\mathbf p}, {\mathbf q}) = - i \, g^4
\int\frac{d\omega}{2\pi} \Big[
\frac{1}{(\omega - \nu)^2 -\Omega_p^2 + i \epsilon}
\frac{1}{\omega^2 -\Omega_p^2 + i \epsilon}
\frac{1}{(\omega + p^0)^2 - {\mathbf p}^2 + i \epsilon}
\frac{1}{(\omega - q^0)^2 - {\mathbf q}^2 + i \epsilon}
\Big]\;.
\end{align}
Rather than writing the full expression for the imaginary part of
$\Gamma_p^{(2)}$, we now consider its particular form, as well as that of
$\Gamma_p^{(1)}$, for small amplitudes. Both may be expanded in powers of
the departure ${\mathbf y}(t)$ of the particle from an equilibrium position
${\mathbf r}_0$, defined by ${\mathbf r}(t) = {\mathbf r}_0 +
\mathbf{y}(t)$.
This requires first expanding $f = f^{(0)} + f^{(1)} + f^{(2)} + \ldots$, where
$f^{(0)} = 2 \pi \, e^{- i {\mathbf p} \cdot {\mathbf r}_0} \delta(\nu)$ is
independent of the departure. In terms of $\tilde{y}^i(\nu)$, the
components of the
Fourier transform of ${\mathbf y}(t)$, the first and second order terms
in the expansion of $f$, are:
\begin{align}
& f^{(1)} \,=\, - i \, e^{- i {\mathbf p} \cdot {\mathbf r}_0} \; p^i
\tilde{y}^i(\nu) \;,\;\;
f^{(2)} \,=\, - \frac{1}{2} \, e^{- i {\mathbf p} \cdot {\mathbf r}_0}
\; p^i p^j \, (\tilde{y}^i \star \tilde{y}^j)(\nu) \;, \nonumber\\
& (\tilde{y}^i \star \tilde{y}^j)(\nu) \,=\, \int \frac{d\nu'}{2\pi} \;
\tilde{y}^i(\nu - \nu') \tilde{y}^j(\nu') \;.
\end{align}
Besides, we shall assume that ${\mathbf r}_0$ is the average position
around which the particle departs, so that $\tilde{y}^i(0) = 0$.
It is worth noting some general properties of the terms in the
small-amplitude expansion of $f$. Higher-order terms
involve higher convolution products of the Fourier transform of the
departure, which correspond to higher powers of the departure itself.
Therefore, if the departure involves just one harmonic mode,
the $n$-th order term will contain frequencies up to $n$ times that of the
harmonic mode.
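This harmonic content can be illustrated numerically. For a single-mode departure along a fixed direction, the factor $e^{-i {\mathbf p}\cdot {\mathbf r}(t)}$ reduces, up to a constant phase, to $e^{-i\, pa \cos(\omega t)}$, whose Fourier coefficients are Bessel functions by the Jacobi-Anger expansion, with the $n$-th harmonic of order $(pa)^n$. The following sketch (our illustration, not from the source) computes these coefficients by direct quadrature:

```python
import cmath
import math

def harmonic_coeff(pa: float, n: int, steps: int = 4000) -> complex:
    """n-th Fourier coefficient of exp(-i*pa*cos(theta)) over one period;
    by the Jacobi-Anger expansion its modulus equals |J_n(pa)|."""
    total = 0.0 + 0.0j
    for k in range(steps):
        theta = 2.0 * math.pi * k / steps
        total += cmath.exp(-1j * pa * math.cos(theta)) * cmath.exp(1j * n * theta)
    return total / steps

pa = 0.1   # small dimensionless amplitude p*a
for n in (1, 2, 3):
    print(n, abs(harmonic_coeff(pa, n)))
# |c_n| scales as (pa)^n / (2^n n!) for small pa: each order in the
# amplitude expansion contributes one further multiple of the drive frequency.
```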
We then have, for the first and second order terms in the expansion:
\subsection{First order effective action $\Gamma^{(1)}_p$}
For $\Gamma_p^{(1)}$, up to the second order in ${\mathbf y}(t)$:
\begin{equation}
{\rm Im}[\Gamma^{(1)}_p] \;=\; \frac{1}{2} \,
\int \,\frac{d\nu}{2\pi} \;
\vert \tilde{y}^j(\nu)\vert^2 \; m_p (\nu,\Omega_p),
\end{equation} where
\begin{equation}\label{mijno}
m_p (\nu,\Omega_p) \;=\; \frac{g^2}{12 \pi \Omega_p} \, \theta(|\nu| -
\Omega_p) \, \big(|\nu| - \Omega_p \big)^3 \;.
\end{equation}
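The threshold behaviour of this kernel can be made explicit with a small numerical sketch (our illustration; `m_p` below implements the coefficient appearing in Eq.\eqref{mijno}):

```python
import math

def m_p(nu: float, omega: float, g: float = 1.0) -> float:
    """Dissipative kernel of Eq. (mijno): vanishes below the threshold
    |nu| = Omega_p and grows as (|nu| - Omega_p)^3 above it."""
    x = abs(nu) - omega
    if x <= 0.0:
        return 0.0
    return g**2 * x**3 / (12.0 * math.pi * omega)

# No first-order dissipation below threshold; cubic onset above it.
print(m_p(0.5, 1.0))   # 0.0
print(m_p(2.0, 1.0))   # proportional to (2 - 1)^3
```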
\subsection{Second order effective action $\Gamma_p^{(2)}$}
Up to the second order in the amplitude, we also have
\begin{equation}\label{eq:Gamma2pert}
\Gamma^{(2)}_p \;=\; \frac{1}{2} \,\int \frac{d^3{\mathbf p}}{(2\pi)^3}
\int \frac{d^3{\mathbf q}}{(2\pi)^3}
\int \frac{d\nu}{2\pi}
\; C(\nu,{\mathbf p}, {\mathbf q}) \;
p^i p^j \;
\tilde{y}^i(-\nu) \tilde{y}^j(\nu)
\;,
\end{equation}
with the kernel
\begin{equation}
C(\nu, {\mathbf p}, {\mathbf q}) \,=\, - i \, g^4
\int\frac{d\omega}{2\pi} \;
\frac{1}{(\omega - \nu)^2 - {\mathbf p}^2 + i \epsilon} \;
\frac{1}{\omega^2 - {\mathbf q}^2 + i \epsilon} \;
\frac{1}{[\omega^2 -\Omega_p^2 + i \epsilon]^2} \;.
\end{equation}
In order to evaluate this kernel, we write
\begin{equation}
\frac{1}{[\omega^2 -\Omega_p^2 + i \epsilon]^2} = \frac{d}{d\Omega_p^2}\frac{1}{[\omega^2 -\Omega_p^2 + i \epsilon]}\, ,
\end{equation}
decompose the last two factors in the integrand in partial fractions and use Eq.\eqref{eq:defpi1p}. The result is
\begin{equation}
C(\nu,{\mathbf p}, {\mathbf q}) \; = \; g^2 \frac{d}{d\Omega_p^2}\left[\frac{1}{q^2-\Omega_p^2}\left(\Pi(\nu,p,q)-\Pi(\nu,p,\Omega_p)\right)\right]\, .
\end{equation}
Using Eq.\eqref{eq:ImPi} we obtain
\begin{eqnarray}\label{eq:ImPif}
{\rm Im}\big[C(\nu, {\mathbf p}, {\mathbf q})\big]&=&\pi g^4\big[\frac{1}{pq}\frac{1}{(q^2-\Omega_p^2)^2}\delta(\nu-p-q) - \frac{1}{2 p\Omega_p^2}\frac{1}{q^2-\Omega_p^2}
\delta'(\nu-p-\Omega_p)\\ \nonumber
&+&\frac{1}{2p\Omega_p^3}\frac{1}{(q^2-\Omega_p^2)^2} (q^2-3\Omega_p^2)\delta(\nu-p-\Omega_p)\big]\, ,
\end{eqnarray}
where we have taken into account that, in order to obtain $\Gamma_p^{(2)}$, $C(\nu, {\mathbf p}, {\mathbf q})$ is multiplied
by an even function of $\nu$. Inserting this result into Eq.\eqref{eq:Gamma2pert} we obtain
\begin{equation}\label{ImGamma2pertfin}
{\rm Im}\big[\Gamma_p^{(2)}\big]=\frac{g^4}{24\pi^3}\int \frac{d\nu}{2 \pi}\vert \tilde y(\nu)\vert^2\Sigma(\nu, \Omega_p)\, ,
\end{equation}
where $\Sigma=\Sigma_1+\Sigma_2+\Sigma_3$ and
\begin{eqnarray}
\Sigma_1(\nu,\Omega_p)&=& \int_0^\nu dq\frac{q(\nu-q)^3}{(q^2-\Omega_p^2)^2}\\ \label{Sigma1}
\Sigma_2(\nu,\Omega_p)&=& \frac{3}{2}\theta(\nu-\Omega_p)\frac{(\nu-\Omega_p)^2}{\Omega_p^2}\int_0^\infty dq \frac{q^2}{(q^2-\Omega_p^2)}\\\label{Sigma2}
\Sigma_3(\nu,\Omega_p)&=& \frac{1}{2}\theta(\nu-\Omega_p)\frac{(\nu-\Omega_p)^3}{\Omega_p^3}\int_0^\infty dq\frac{q^2(q^2-3\Omega_p^2)}{(q^2-\Omega_p^2)^2}
\label{Sigma3}
\, .
\end{eqnarray}
Several comments are in order. We see that, at second order, there is a
non-vanishing contribution when the center of mass frequency is below the
threshold. This is the contribution coming from $\Sigma_1$ and is related
to the usual pair creation in the DCE, corrected here by the internal
structure of the moving particle. Indeed, $\Sigma_1$ comes from the term
proportional to $\delta(\nu-p-q)$ in Eq.\eqref{eq:ImPif}, which describes
the creation of a pair of particles with energies $p$ and $q$ respectively,
the $\delta$ function
enforcing energy conservation.
Above the threshold, the three terms contribute to the dissipative effects,
and constitute a correction to $\Gamma_p^{(1)}$. There are some subtle
points here. The integrals defining the $\Sigma_i$ have potential divergences
at $q=\Omega_p$ and for $q\to \infty$. One can readily check that the poles
at $q=\Omega_p$ cancel when the three terms are added. However, $\Sigma_2$
and $\Sigma_3$ are linearly divergent in the ultraviolet, and thus
proportional to a $3$-momentum cutoff $\Lambda$. Due to the coupling to
the scalar field, the frequency $\Omega_p$ of the harmonic oscillator gets
renormalized by a divergent term proportional to $g^2\Lambda$ (see
Eq.\eqref{eq:renOmega}).
When working up to order $g^4$, this shift in the natural frequency must be
taken into account in the first order effective action. From Eq.\eqref{mijno} we obtain
\begin{equation}
m_p(\nu,\Omega_p)=
m_p(\nu,\Omega_p^{(ren)})-\frac{g^4\Lambda}{48\pi^3(\Omega_p^{(ren)})^2}\left(\frac{(\nu-\Omega_p^{(ren)})^3}{\Omega_p^{(ren)}}+ 3(\nu-\Omega_p^{(ren)})^2\right)\, .
\end{equation}
It is easy to see that the extra terms in the above equation
generate two extra terms in $\Gamma_p^{(1)}(\Omega_p)$,
that cancel the divergences of $\Sigma_2$ and $\Sigma_3$. After this cancellation, $\Gamma_p^{(2)}$ produces a finite correction to $\Gamma_p^{(1)}$. It is given by Eq.\eqref{ImGamma2pertfin} with $\Sigma\to\Sigma^{(ren)}$ and
\begin{eqnarray}
\Sigma_1^{\rm (ren)}(\nu,\Omega)&=& \int_0^\nu dq\frac{q(\nu-q)^3}{(q^2-\Omega^2)^2}\\ \label{Sigma1r}
\Sigma_2^{\rm (ren)}(\nu,\Omega)&=& \frac{3}{2}\theta(\nu-\Omega)\frac{(\nu-\Omega)^2}{\Omega_p^2}\int_0^\infty dq \left[\frac{q^2}{(q^2-\Omega^2)}-1\right]\\\label{Sigma2r}
\Sigma_3^{\rm (ren)}(\nu,\Omega)&=& \frac{1}{2}\theta(\nu-\Omega)\frac{(\nu-\Omega)^3}{\Omega^3}\int_0^\infty dq\left[\frac{q^2(q^2-3\Omega^2)}{(q^2-\Omega^2)^2} -1\right]
\label{Sigma3r}
\, .
\end{eqnarray}
To simplify the notation, we have written $\Omega \equiv \Omega_p^{(ren)}$. Although each $\Sigma_i$ has a pole at $q=\Omega$, the sum is finite.
Moreover, splitting the integrals as $\int_0^\infty=\int_0^\nu+\int_\nu^\infty$, $\Sigma^{\rm (ren)}$ can be computed analytically. We omit here the resulting
long expressions, and plot $\Sigma^{\rm (ren)}/\Omega$ as a function of the dimensionless external frequency $\nu/\Omega$ in Fig.~\ref{fig:sigma}. Below threshold,
the result corresponds to the DCE due to the oscillation of the atom. In the limit $\nu\ll\Omega$ the result is proportional to $\nu^5$, which is
expected by dimensional analysis, since in this limit the effective coupling is $g^4/\Omega^4$ (see Eq.\eqref{eq:defc}). For $\nu\simeq\Omega$,
but still below threshold, the result includes the effect of the internal structure of the atom on the DCE.
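Below threshold only $\Sigma_1$ contributes, so the quoted $\nu^5$ law is easy to probe numerically. The following sketch (our own check, not part of the derivation; it assumes `scipy` is available and uses illustrative parameter values) evaluates $\Sigma_1$ directly and compares it with the leading small-$\nu$ behaviour $\nu^5/(20\,\Omega^4)$, which follows from expanding the denominator for $\nu\ll\Omega$:

```python
# Below threshold (nu < Omega) only Sigma_1 contributes; for nu << Omega
# the integrand's denominator is ~Omega^4 and the q-integral gives nu^5/20.
from scipy.integrate import quad

def sigma1(nu, omega):
    """Sigma_1(nu, Omega) = int_0^nu dq q (nu - q)^3 / (q^2 - Omega^2)^2.
    For nu < Omega there is no pole on the integration range [0, nu]."""
    integrand = lambda q: q * (nu - q) ** 3 / (q ** 2 - omega ** 2) ** 2
    val, _ = quad(integrand, 0.0, nu)
    return val

omega = 1.0
nu = 1e-2  # deep below threshold
approx = nu ** 5 / (20.0 * omega ** 4)
print(sigma1(nu, omega) / approx)  # -> close to 1
```

Doubling $\nu$ multiplies the result by very nearly $2^5 = 32$, confirming the scaling expected from dimensional analysis.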
\begin{figure}
\includegraphics[scale=2]{Sigma}
\caption{\label{fig:sigma}Second order correction to the imaginary part of the effective action for a particle in vacuum, whose internal degree of freedom is a QHO of frequency $\Omega$, and whose center-of-mass exhibits a single-frequency motion of frequency $\nu$.}
\end{figure}
Above threshold, the second order result is a small correction to the first order one, and combines both the DCE and the emission of single photons
through excitation-deexcitation processes. This correction to the imaginary part of the effective action goes as $- g^4(\nu/\Omega)^3$ for $\nu/\Omega\gg 1$ (note that the first order result is proportional to $g^2(\nu/\Omega)^3$ in this limit).
\section{Imperfect mirror: microscopic model}\label{sec:model mirror}
Dielectric slabs are in general nonlinear, inhomogeneous, dispersive and
also dissipative media. These aspects make the quantization of a field
difficult when all of them have to be taken into account simultaneously. There
are different approaches to address this problem. On the one hand, one can
use a phenomenological description based on the macroscopic electromagnetic
properties of the materials. The quantization can be performed starting
from the macroscopic Maxwell equations, and including noise terms to
account for absorption. In this approach a canonical quantization scheme is
not possible, unless one couples the electromagnetic field to a reservoir,
following the standard route to include dissipation in simple quantum
mechanical systems. Another possibility is to establish a first-principles
model in which the slabs are described through their microscopic degrees of
freedom, which are coupled to the electromagnetic field. In this kind of
model, losses are also incorporated by considering a thermal bath, which
allows for the possibility of absorption of light. There is a large body of
literature on the quantization of the electromagnetic field in dielectrics.
Regarding microscopic models, the fully canonical quantization of the
electromagnetic field in dispersive and lossy dielectrics has been
performed by Huttner and Barnett (HB) \cite{hb92}. In the HB model, the
electromagnetic field is coupled to matter (the polarization field), and
the matter is coupled to a reservoir that is included into the model to
describe the losses. In the context of the theory of open quantum systems,
one can think of the HB model as a composite system in which the relevant
degrees of freedom belong to two subsystems (the electromagnetic field and
the matter), and the matter degrees of freedom are in turn coupled to an
environment (the thermal reservoir). The indirect coupling between the
electromagnetic field and the thermal reservoir is responsible for the
losses. It is well known that if we include the absorption, associated
with a dispersive medium, then the dielectric constant will be a complex
quantity, whose real and imaginary parts are related by the Kramers-Kronig
relations. Losses in quantum mechanics imply a coupling to a reservoir
whose degrees of freedom have to be added to the Lagrangian. This suggests
that, in order to quantize the vacuum field in a dielectric in a way that
is consistent with the Kramers-Kronig relations, one has to introduce the
medium into the formalism explicitly. This should be done in such a way
that the interaction between light and matter will generate both dispersion
and damping of the field. In the microscopic theory, the interaction
${\mathcal S}_I$ between the scalar field and
the imperfect mirror consists of a term of the form:
\begin{equation}
{\mathcal S}_I^{(m)}(\phi) \;=\; -\frac{1}{2} \int_{x,y} \phi (x)
V_{m}(x,y)
\phi(y) \;.
\end{equation}
The kernel $V_m$ is a `potential' resulting from the integration of
microscopic degrees of freedom of the polarization field plus the external
reservoir. It can be regarded as a symmetric function of $x$ and $y$, since
the integrals in ${\mathcal S}_I$ symmetrize any bi-local function.
We can apply the above discussion to note that the potential $V_m$,
originated in the interaction between the vacuum field and the imperfect
mirror, may also be obtained by a variant of the previous procedure for
$V_p$. Indeed, we introduce a bosonic field $Q(t,x^1,x^2)$, living on
$x^3 = 0$ (the plane occupied by the plate), which plays the role of the
polarization field and is also coupled to an external environment (at
equilibrium) with degrees of freedom denoted by $q_n(t,x^1,x^2)$:
\begin{equation}\label{eq:gm}
e^{- \frac{i}{2} \int_{x,y} \phi(x) V_m(x,y) \phi(y) }
\;=\;
\int {\mathcal D}Q \; e^{i {\mathcal S}^{eff}_m(Q,\phi)} \;,
\end{equation}
where the effective action ${\mathcal S}^{eff}_m(Q,\phi)$ is the result of integrating out the degrees of freedom $q_n$:
\begin{equation}\label{eq:gm2}
e^{i {\mathcal S}^{eff}_m(Q,\phi)} = \int {\mathcal D}q_n \; e^{i \left( {\mathcal S}_m(Q,\phi) + {\mathcal S}_m(Q,q_n)\right) } \;,
\end{equation}
where
\begin{align}\label{eq:defsQ}
{\mathcal S}_m(Q,\phi) &=\;{\mathcal S}_m^{(0)}(Q)
\,+\,{\mathcal S}_m^{int}(Q,\phi) \;,
\nonumber\\
{\mathcal S}^{(0)}_m(Q) &=\;\frac{1}{2}\int dt dx^1 dx^2\,
\big[ (\partial_t Q)^2 - ( \Omega_m^2 - i \epsilon) Q^2 \big] \;,
\nonumber\\
{\mathcal S}_m^{int}(Q,\phi) &=\; \gamma \,
\int dt dx^1 dx^2 \, Q(t,x^1,x^2)
\; \phi(t,x^1,x^2,0) \;,
\end{align}
and
\begin{align}\label{eq:defsq}
{\mathcal S}_m(Q,q_n) &=\;{\mathcal S}_m^{(0)}(q_n)
\,+\,{\mathcal S}_m^{int}(Q,q_n) \;,
\nonumber\\
{\mathcal S}^{(0)}_m(q_n) &=\;\frac{1}{2}\sum_n \int dt dx^1 dx^2\,
\big[ (\partial_t q_n)^2 - ( \omega_n^2 - i \epsilon) q_n^2 \big] \;,
\nonumber\\
{\mathcal S}_m^{int}(Q,q_n) &=\; \sum_n \lambda_n \,
\int dt dx^1 dx^2 \, Q(t,x^1,x^2)
\; q_n(t,x^1,x^2) \;.
\end{align}
Here, $\Omega_m$ is a frequency, and $\gamma$ determines
the coupling, which has the dimensions of $[{\rm mass}]^{3/2}$ (the same happens with $\omega_n$ and $\lambda_n$).
Following~\cite{Farias:2014wca}, the microscopic matter degrees of freedom
on the mirror, which we assume to occupy the $x^3 = 0$ plane, are taken to
behave as one-dimensional harmonic oscillators, one at each point of the
plate. Their generalized coordinates take values in an internal
space. Besides, no couplings between the coordinates at different points of the
plate are included, and each oscillator couples linearly both to the
vacuum field and to the external reservoir.
After integrating out the thermal environment, we obtain an effective action for
the internal degrees of freedom of the mirror,
\begin{equation}
{\mathcal S}^{eff}_m(Q) = {\mathcal S}_m^{(0)}(Q) + \int dt\, dt'\, dx^1dx^2\, Q(t, x^1,x^2) K(t,t') Q(t', x^1,x^2),
\end{equation}
where $K(t,t')$ is a nonlocal kernel that depends on the temperature and spectral density of the environment.
As is well known, for an environment formed by an infinite set of harmonic oscillators at high temperatures
with an ohmic spectral density, this kernel becomes local and proportional to a dissipation coefficient $\xi$ \cite{QBM}.
In this limit, the effect of the environment on the in-out effective action can be taken into account simply
by replacing $\Omega_m^2$ by $\Omega_m^2-i\xi$.
The interaction then becomes local, and $V_m$ takes
the form:
\begin{equation}\label{eq:rvm}
V_m(x,y) \;\equiv\; \gamma^2 \; \Delta_m(x^0-y^0) \, \delta({\mathbf
x}-{\mathbf y}) \; \delta(x^3) \;,
\end{equation}
where $\gamma$ is the constant that couples the plate's harmonic-oscillator
degrees of freedom to the vacuum field.
The precise form of $\Delta_m$ is more conveniently expressed in
terms of its Fourier transform, namely:
\begin{align}\label{eq:deflambdam}
\Delta_m(x^0-y^0) &=\;
\int \frac{d\nu}{2\pi} \, e^{-i \nu (x^0 - y^0)} \,
\widetilde{\Delta}_m(\nu) \nonumber\\
\widetilde{\Delta}_m(\nu) &\equiv\; \frac{1}{\nu^2 - \Omega_m^2 + i
\epsilon + i \xi} \; .
\end{align}
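For reference, $-{\rm Im}\,\widetilde{\Delta}_m(\nu)$ is a Lorentzian-like absorption peak at $\nu = \Omega_m$, whose width is controlled by $\xi$. A minimal numerical sketch (our own illustration, with illustrative parameter values; the $i\epsilon$ is dropped since $\xi > 0$ already provides the regularization):

```python
# Absorption profile of the mirror kernel of Eq. (eq:deflambdam):
# -Im[Delta_m(nu)] = xi / ((nu^2 - Omega_m^2)^2 + xi^2), peaked at Omega_m.
import numpy as np

def delta_m(nu, omega_m=1.0, xi=0.05):
    """Fourier transform Delta_m(nu); epsilon dropped since xi > 0."""
    return 1.0 / (nu ** 2 - omega_m ** 2 + 1j * xi)

nu = np.linspace(0.5, 1.5, 2001)
absorption = -delta_m(nu).imag
print(nu[np.argmax(absorption)])  # -> close to 1.0, i.e. Omega_m
```

As $\xi \to 0$ this peak sharpens into the on-shell pole of the undamped oscillator.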
We will also be interested in the case of a mirror imposing
`perfect', i.e., Dirichlet, boundary conditions. Such a case
might be obtained by taking particular limits starting from a given $V_m$;
for example,
\begin{equation}\label{eq:rvmd}
V_m(x,y) \;\to\; V_D(x,y) \,\equiv\, \eta \; \delta({\mathbf x_\parallel}-{\mathbf
y_\parallel}) \, \delta(x^3) \, \delta(y^3) \;, \;\; \eta \to \infty \;,
\end{equation}
where we have adopted the notational convention that, for any spatial
vector ${\mathbf a}$, ${\mathbf a_\parallel} \equiv (a^1, a^2)$. This
Dirichlet limit may be reached from different $V_m$ kernels, although it is
more convenient, given the simple geometry of the system
considered, to use images in order to write the exact scalar field
propagator. The same can be said about Neumann boundary conditions, for
which the field propagator in the presence of the mirror is also obtained
by using images.
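The image construction mentioned above can be illustrated with a short sketch of our own (not taken from the text): for a Dirichlet mirror at $x^3 = 0$, subtracting the free propagator evaluated at the mirror image of one argument yields a propagator that vanishes on the plane. Purely for illustration we use the Euclidean massless propagator $G_0(x,y) = 1/[4\pi^2 (x-y)^2]$:

```python
# Method of images for a Dirichlet mirror at x3 = 0:
#   G_D(x, y) = G_0(x, y) - G_0(x, y_img),
# with y_img the reflection of y through the plane. G_D vanishes when
# either argument lies on the mirror.
import numpy as np

def g0(x, y):
    """Euclidean massless scalar propagator in 4d (illustrative)."""
    r2 = np.sum((np.asarray(x, float) - np.asarray(y, float)) ** 2)
    return 1.0 / (4.0 * np.pi ** 2 * r2)

def g_dirichlet(x, y):
    y_img = np.asarray(y, float) * np.array([1.0, 1.0, 1.0, -1.0])  # flip x3
    return g0(x, y) - g0(x, y_img)

x_on_plane = [0.3, 0.1, -0.2, 0.0]   # point with x3 = 0
y = [1.0, 0.5, 0.0, 0.7]
print(abs(g_dirichlet(x_on_plane, y)) < 1e-12)  # True
```

For Neumann conditions one would instead add the image term, making the normal derivative vanish on the plane.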
\section{Moving atom in the presence of a plate} \label{sec:mirror}
In this Section, we deal with contributions which contain both $V_m$ and
$V_p$. For the sake of simplicity, we will always work to first
order in $V_p$. Therefore, the structure of the term
calculated here is as follows:
\begin{equation}
\Gamma_{mp}[{\mathbf r}(t)] \;=\;- \frac{1}{2} \, \int_{x,y} V_p(x,y) \big\langle
\phi(x) \phi(y)\big\rangle_m \;,
\end{equation}
where now:
\begin{equation}
\langle \, \ldots \, \rangle_m \;\equiv\;
\frac{\int {\mathcal D}\phi \; \ldots \; e^{i [{\mathcal S}_0(\phi)
- \frac{1}{2} \int_{x,y} \phi(x) V_m(x,y) \phi(y)]}}{\int {\mathcal
D}\phi \; e^{i [{\mathcal S}_0(\phi) - \frac{1}{2} \int_{x,y}
\phi(x) V_m(x,y)\phi(y)]}} \;.
\end{equation}
In other words, the kind of contribution we consider here looks like
$\Gamma_p^{(1)}$, albeit with the free propagator $\big\langle \phi(x)
\phi(y)\big\rangle_0$ replaced by the propagator in the presence of the
mirror, $\big\langle \phi(x) \phi(y)\big\rangle_m$. The latter may be
incorporated either exactly or under simplifying assumptions, in
order to find the imaginary part explicitly.
The first non-trivial contribution that we evaluate arises when one
expands the propagator to first order in $V_m$:
\begin{equation}
\Gamma^{(1)}_{mp} \;=\; \frac{i}{2} \, \int_{x,y,x',y'}
\, V_m (x,y) \, V_p (x',y') \, G(x,x') \, G(y,y') \;,
\end{equation}
which, written in terms of the Fourier-transformed objects, becomes
\begin{align}\label{eq:gint_2}
\Gamma^{(1)}_{mp} \;=\; \frac{i}{2} \, \gamma^2 \, g^2 \int
\frac{d^2{\mathbf p_\parallel}}{(2\pi)^2} & \int\frac{dp_3}{2\pi}
\int\frac{dq_3}{2\pi} \int\frac{d\omega}{2\pi}
\int\frac{d\nu}{2\pi} \;\Big[ f({\mathbf p_\parallel},p^3,-\omega -
\nu) \nonumber\\
\times f(-{\mathbf p_\parallel},q^3, \omega + \nu) \;
& \widetilde{\Delta}_m(\omega) \,
\widetilde{G}(-\omega,{\mathbf p_\parallel},p^3)\,
\, \widetilde{\Delta}_p(\nu) \,
\widetilde{G}(\omega,-{\mathbf p_\parallel},q^3) \Big]\,
\end{align}
which, by a shift of variables, may be written as follows:
\begin{equation}\label{eq:gint2_1}
\Gamma^{(1)}_{mp} \;=\; \frac{1}{2} \, \int
\frac{d^2{\mathbf p_\parallel}}{(2\pi)^2} \int\frac{dp_3}{2\pi}
\int\frac{dq_3}{2\pi} \int\frac{d\nu}{2\pi} \;
f({\mathbf p_\parallel},p^3, -\nu)
f(-{\mathbf p_\parallel},q^3,\nu)
\; B(\nu,{\mathbf p_\parallel},p^3,q^3)
\end{equation}
where
\begin{equation}\label{eq:defb}
B(\nu,{\mathbf p_\parallel},p^3,q^3) \;=\; i \,\gamma^2 \, g^2 \;
\int\frac{d\omega}{2\pi} \; \widetilde{\Delta}_m(\omega) \,
\widetilde{G}(-\omega,{\mathbf p_\parallel},p^3)\,
\, \widetilde{\Delta}_p(\nu - \omega) \,
\widetilde{G}(\omega,-{\mathbf p_\parallel},q^3) \;.
\end{equation}
Introducing the explicit form of the propagator in momentum space, and of
the $\widetilde{\Delta}$ functions, we see that:
\begin{align}\label{eq:defb1}
B(\nu,{\mathbf p_\parallel},p^3,q^3) \,=\, - i \, \gamma^2 \, g^2 &
\int\frac{d\omega}{2\pi} \Big[ \frac{1}{\omega^2 -\Omega_m^2 + i
\epsilon + i \xi} \;
\frac{1}{\omega^2 - {\mathbf p_\parallel}^2 - (p^3)^2 + i \epsilon}
\nonumber\\
&\times \frac{1}{(\omega - \nu)^2 -\Omega_p^2 + i \epsilon} \;
\frac{1}{\omega^2 - {\mathbf p_\parallel}^2 - (q^3)^2 + i \epsilon} \Big]\;.
\end{align}
Since $B(\nu,{\mathbf p_\parallel},p^3,q^3) = B(\nu,{\mathbf
p_\parallel},q^3,p^3)$, we conclude that:
\begin{align}\label{eq:ig2_1}
{\rm Im}\big[\Gamma^{(1)}_{mp}\big] &=\; \frac{1}{2} \, \int
\frac{d^2{\mathbf p_\parallel}}{(2\pi)^2} \int\frac{dp_3}{2\pi}
\int\frac{dq_3}{2\pi} \int\frac{d\nu}{2\pi} \;
f({\mathbf p_\parallel},p^3, -\nu)
f(-{\mathbf p_\parallel},q^3,\nu) \nonumber\\
&\times\; {\rm Im}\Big[B(\nu,{\mathbf p_\parallel},p^3,q^3) \Big] \;.
\end{align}
Now we come to the actual evaluation of $B$, which resembles a loop (box)
diagram in a $0+1$-dimensional quantum field theory.
Introducing Feynman parameters, and integrating out $\omega$, after a
lengthy calculation we find:
\begin{align}\label{eq:B_1}
B(\nu,{\mathbf p_\parallel},p^3,q^3)& =
- \frac{\gamma^2 \, g^2}{2 \Omega_p}
\Big\{ \frac{\Omega_m +\Omega_p}{\Omega_m} \big[ \frac{1}{p^2 - \Omega_m^2 + i (\epsilon +
\xi)} \frac{1}{q^2 - \Omega_m^2 + i (\epsilon + \xi)} \nonumber\\
& \hspace{2cm}\times \frac{1}{\nu^2 - (\Omega_m + \Omega_p)^2 + i
(\epsilon+\xi)} \big]
\nonumber\\
&- \; \frac{1}{(p^2 - \Omega_m^2 + i \xi)(q^2 - p^2) p} \;\;
\frac{p + \Omega_p}{\nu^2 - (p + \Omega_p)^2 + i \epsilon} \nonumber\\
& - \; \frac{1}{(q^2 - \Omega_m^2 + i \xi)(p^2 - q^2)q} \;\;
\frac{q + \Omega_p}{\nu^2 - (q + \Omega_p)^2 + i \epsilon} \Big\}\;.
\end{align}
Therefore, taking the imaginary part, the $\epsilon \to 0$ limit, and
keeping the leading terms when $\xi \to 0$,
\begin{align}\label{eq:B_3}
{\rm Im}\big[B(\nu,{\mathbf p_\parallel},p^3,q^3)\big] &= \,
\frac{\pi \gamma^2 g^2 }{2 \Omega_p}
\Big\{ \frac{\Omega_m + \Omega_p}{\Omega_m} \,
\delta_\xi\big(\nu^2 -(\Omega_m + \Omega_p)^2\big) \nonumber\\
\times \, \big[{\mathcal P}_\xi(p^2 - \Omega_m^2) \, {\mathcal
P}_\xi(q^2 -
\Omega_m^2) \,- & \pi^2 \delta_\xi(p^2 - \Omega_m^2) \,
\delta_\xi(q^2 - \Omega_m^2) \big] \nonumber\\
- \frac{p + \Omega_p}{p (q^2 - p^2)} \; {\mathcal P}_\xi(p^2 - \Omega_m^2) \;
\delta\big( \nu^2 - (p + \Omega_p)^2\big) & - \, \frac{q + \Omega_p}{q (p^2
- q^2)} \; {\mathcal P}_\xi(q^2 -
\Omega_m^2) \; \delta\big(\nu^2 - (q + \Omega_p)^2 \big)
\Big\} \;,
\end{align}
where we have introduced notations for the approximants of the Cauchy
principal value ${\mathcal P}$ and of Dirac's $\delta$-function:
\begin{equation}\label{eq:approximants}
{\mathcal P}_\xi(x) \;=\; \frac{x}{x^2 + \xi^2} \;\; , \;\;\;
\delta_\xi(x) \;=\; \frac{1}{\pi} \, \frac{\xi}{x^2 + \xi^2} \;,
\end{equation}
respectively.
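The limiting behaviour of these approximants is easy to verify numerically: $\delta_\xi$ is a nascent $\delta$-function (unit area as $\xi \to 0$), while ${\mathcal P}_\xi(x) \to 1/x$ for $|x| \gg \xi$. A minimal sketch of our own, with illustrative values:

```python
# Approximants of Eq. (eq:approximants): P_xi -> principal value of 1/x,
# delta_xi -> Dirac delta, as xi -> 0.
from math import pi
from scipy.integrate import quad

def p_xi(x, xi):
    # approximant of the principal value of 1/x
    return x / (x ** 2 + xi ** 2)

def delta_xi(x, xi):
    # nascent Dirac delta of unit area
    return xi / (pi * (x ** 2 + xi ** 2))

xi = 0.01
# 'points' flags the narrow peak at x = 0 for the quadrature routine
area, _ = quad(delta_xi, -1.0, 1.0, args=(xi,), points=[0.0])
print(area)                  # -> close to 1
print(2.0 * p_xi(2.0, xi))   # -> close to 1, i.e. P_xi(x) ~ 1/x
```

Note that, as stressed in the text, standard $\delta$-function identities hold only in the $\xi \to 0$ limit of these approximants.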
The terms retained in (\ref{eq:B_3}) above are the most relevant ones
when $\xi \to 0$, regarding their contribution to the
integrals over frequency and momenta in the imaginary part of the effective
action. Besides, we have neglected terms which cancel each other in
${\rm Im}[B]$ in that limit.
On the other hand, note that, besides $\delta_\xi$, (\ref{eq:B_3}) also contains $\delta$
functions (they arise when taking the $\epsilon \to 0$ limit, and are
independent of $\xi$). Using standard $\delta$-function properties (note
that, in principle, they are not valid for $\delta_\xi$), we see that:
\begin{align}\label{eq:B_4}
{\rm Im}\big[B(\nu,{\mathbf p_\parallel},p^3,q^3)\big] \, =\,
\frac{\pi \gamma^2 g^2 }{2 \Omega_p}
\Big\{ & \frac{\Omega_m + \Omega_p}{\Omega_m} \,
\delta_\xi\big(\nu^2 -(\Omega_m + \Omega_p)^2\big) \,
\big[{\mathcal P}_\xi(p^2 - \Omega_m^2) \, {\mathcal P}_\xi(q^2 -
\Omega_m^2) \nonumber\\
& - \, \pi^2 \delta_\xi(p^2 - \Omega_m^2) \,
\delta_\xi(q^2 - \Omega_m^2) \big] \nonumber\\
\,- \, \frac{\theta(|\nu| - \Omega_p)}{2 (|\nu| - \Omega_p)} \, & {\mathcal
P}_\xi\big((|\nu| - \Omega_p)^2 - \Omega_m^2\big) \;
\big[ \frac{\delta(p - |\nu| + \Omega_p)}{q^2 - (|\nu|-\Omega_p)^2 } +
\frac{\delta(q - |\nu| + \Omega_p)}{p^2 - (|\nu|-\Omega_p)^2 } \big]
\Big\} \;.
\end{align}
We identify in (\ref{eq:B_4}) the sum of two contributions, which are quite
different regarding how and when they turn on, as functions of $\nu$.
Indeed, the first one has a $\delta_\xi$ function of $|\nu|$ minus the sum
of $\Omega_m$ and $\Omega_p$, while the second one contains a threshold at $|\nu|
= \Omega_p$. Also, for the latter to contribute, the function $f$ must be non-vanishing
when $|\nu|$ surpasses $p$ (or $q$). That will depend, of course, on the
nature of the motion considered.
In the following examples, depending on the nature of the motion involved
(reflected in $f$), we shall be able to take the $\xi \to 0$ limit.
This will allow us to simplify the expressions as much as possible,
so that they depend on the smallest possible number of parameters.
\subsection{Quantum friction}
The first example that we consider here corresponds to quantum friction,
namely, to motion with a constant velocity parallel to the plate:
\begin{equation}
{\mathbf r}(t) \;=\; {\mathbf r}_0 + {\mathbf u} \, t \;,
\end{equation}
with ${\mathbf u} = (u , 0, 0)$ and $ {\mathbf r}_0 = (0,0,a)$. We
have:
\begin{align}
& f({\mathbf p_\parallel},p^3, \nu) \,=\, 2 \pi \, e^{-i p^3 a} \,
\delta(\nu - p^1 u) \nonumber\\
& f({\mathbf p_\parallel},p^3, - \nu)
f(-{\mathbf p_\parallel},q^3,\nu ) \,=\, e^{-i
(p^3+q^3) a} \,
T \, 2 \pi \, \delta(p^1 u + \nu) \;,
\end{align}
where $T$ denotes the extent of the time interval in the effective action
(which, for a constant velocity, must be extensive in time). We see that,
since $|u| < 1$, $f$ is non-vanishing only
when $|\nu| < p$, and therefore there are no contributions to the
imaginary part coming from the term which has a threshold.
Therefore, we shall have:
\begin{align}\label{eq:igf}
\frac{{\rm Im}[\Gamma^{(1)}_{mp}]}{T} &=\;(\Omega_m + \Omega_p) \,
\frac{\pi \gamma^2 g^2 }{4 \Omega_p \Omega_m} \int \frac{d^2{\mathbf
p_\parallel}}{(2\pi)^2} \int\frac{dp_3}{2\pi}
\int\frac{dq_3}{2\pi} \int\frac{d\nu}{2\pi} \;
e^{-i (p^3+q^3) a} \, 2 \pi \, \delta(p^1 u + \nu) \nonumber\\
& \,
\delta_\xi\big(\nu^2 -(\Omega_m + \Omega_p)^2\big) \,
\big[{\mathcal P}_\xi(p^2 - \Omega_m^2) \, {\mathcal P}_\xi(q^2 -
\Omega_m^2) \, - \, \pi^2 \delta_\xi(p^2 - \Omega_m^2) \,
\delta_\xi(q^2 - \Omega_m^2) \big]
\;.
\end{align}
One sees first that the $\xi \to 0$ limit may be safely taken. Moreover,
the term which contains the product of three $\delta$-functions vanishes,
since their simultaneous contribution requires a frequency which is larger
than $p$ or $q$ in modulus.
Thus
\begin{equation}\label{eq:igf1}
\frac{{\rm Im}[\Gamma^{(1)}_{mp}]}{T} \;=\;
\frac{\pi \gamma^2 g^2 }{8\Omega_p \Omega_m} \int \frac{d^2{\mathbf
p_\parallel}}{(2\pi)^2} \int\frac{dp_3}{2\pi}
\int\frac{dq_3}{2\pi} \; e^{-i (p^3+q^3) a}
\frac{\delta\big(|p^1 u| -(\Omega_m + \Omega_p)\big)}{(p^2 -
\Omega_m^2)\,(q^2 - \Omega_m^2)} \;,
\end{equation}
and, integrating out $p^3$ and $q^3$,
\begin{equation}\label{eq:igf_2}
\frac{{\rm Im}\big[\Gamma^{(1)}_{mp}\big]}{T} \;=\;
\frac{\pi \gamma^2 g^2}{32 \Omega_p \Omega_m} \, \int
\frac{d^2{\mathbf p_\parallel}}{(2\pi)^2}
\; \frac{e^{- 2 a\sqrt{ {\mathbf p_\parallel}^2 -
\Omega_m^2}}}{{\mathbf p_\parallel}^2 - \Omega_m^2}
\delta\big( |p^1 u| - (\Omega_m +\Omega_p) \big) \;.
\end{equation}
We then make use of the remaining $\delta$ function to integrate out $p^1$:
\begin{equation}\label{eq:igf_3}
\frac{{\rm Im}\big[\Gamma^{(1)}_{mp}\big]}{T} \;=\;
\frac{\gamma^2 \, g^2}{32 \pi \Omega_p \Omega_m u} \,
\int_0^\infty dp^2
\; \frac{e^{- 2 a\sqrt{ (p^2)^2 + \frac{1}{u^2}(\Omega_m +
\Omega_p)^2 - \Omega_m^2}}}{(p^2)^2 +
\frac{1}{u^2}(\Omega_m + \Omega_p)^2 - \Omega_m^2} \;,
\end{equation}
or,
\begin{equation}\label{eq:igf_4}
\frac{{\rm Im}\big[\Gamma^{(1)}_{mp}\big]}{T} \;=\;
\frac{\gamma^2 \, g^2 \, a}{32 \pi \Omega_m \Omega_p} \,
\int_0^\infty dx \; \frac{e^{- \frac{2}{u}\sqrt{ x^2 + a^2 (\Omega_m +
\Omega_p)^2 - a^2 u^2 \Omega_m^2}}}{ x^2 +
a^2(\Omega_m + \Omega_p)^2 - a^2 u^2 \Omega_m^2} \;,
\end{equation}
which has the proper dimensions and is consistent with previous results
corresponding to friction between planes; in that case the result becomes
proportional to the area of the planes, and the dimensionality of the
coupling $g$ is different (it coincides with that of $\gamma$).
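The remaining integral in Eq.~(\ref{eq:igf_4}) is readily evaluated numerically. The sketch below is our own check (parameter values are illustrative, with $\Omega_m = \Omega_p = 1$ and the velocity $u$ in units of $c = 1$); it confirms the exponential suppression of the frictional dissipation at small velocity and large distance:

```python
# Dimensionless integral of Eq. (eq:igf_4):
#   int_0^inf dx exp(-(2/u) sqrt(x^2 + c^2)) / (x^2 + c^2),
# with c^2 = a^2 (Om + Op)^2 - a^2 u^2 Om^2 > 0 for u < 1.
import numpy as np
from scipy.integrate import quad

def friction_integral(u, a, omega_m=1.0, omega_p=1.0):
    c2 = a ** 2 * (omega_m + omega_p) ** 2 - a ** 2 * u ** 2 * omega_m ** 2
    integrand = lambda x: np.exp(-(2.0 / u) * np.sqrt(x ** 2 + c2)) / (x ** 2 + c2)
    val, _ = quad(integrand, 0.0, np.inf)
    return val

# exponential suppression with distance and with decreasing velocity:
print(friction_integral(0.5, 1.0) > friction_integral(0.5, 2.0))  # True
print(friction_integral(0.5, 1.0) > friction_integral(0.1, 1.0))  # True
```

The $e^{-2c/u}$ scale of the integrand makes the effect negligible unless $u$ is an appreciable fraction of the speed of light or $a(\Omega_m+\Omega_p)$ is small.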
\subsection{Small oscillations}
In this example, we consider an expansion entirely analogous to that of
the freely oscillating particle, albeit now in the presence of the plate. We
then use the same expansion for the function $f$, namely, $f = f^{(0)} + f^{(1)} +
f^{(2)} + \ldots$, with exactly the same terms.
Inserting this expansion into the general expression for the imaginary part of $\Gamma_{mp}^{(1)}$,
and retaining up to terms of the second order in the departure ${\mathbf
y}$ from the equilibrium position, we see that the only surviving
contribution is the following:
\begin{equation}\label{eq:gosc_1}
{\rm Im}[\Gamma^{(1)}_{mp}] =\frac{1}{2} \, \int
\frac{d^2{\mathbf p_\parallel}}{(2\pi)^2} \int\frac{dp_3}{2\pi}
\int\frac{dq_3}{2\pi} \int\frac{d\nu}{2\pi}
f^{(1)}({\mathbf p_\parallel},p^3, -\nu)
f^{(1)}(-{\mathbf p_\parallel},q^3,\nu)
{\rm Im}[B(\nu,{\mathbf p_\parallel},p^3,q^3)] \;.
\end{equation}
Inserting the explicit forms of (\ref{eq:B_4}) and $f^{(1)}$ above, we note
that departures which are parallel or normal to the plane contribute with
different weights; indeed, the remaining symmetries of
the system imply that the result has the structure:
\begin{equation}\label{eq:gosc_2}
{\rm Im}[\Gamma^{(1)}_{mp}] \;=\;\frac{1}{2} \, \int\frac{d\nu}{2\pi}
\big( \,m_\shortparallel(\nu) \, |\tilde{\mathbf
y}_\shortparallel(\nu)|^2 \,+\, m_\perp(\nu) \, |\tilde{y}_3(\nu)
|^2 \big) \;,
\end{equation}
depending on two scalar functions:
\begin{align}\label{eq:mpar}
m_\shortparallel(\nu) & = \, \frac{1}{2} \,
\frac{\pi \gamma^2 g^2 }{2 \Omega_p}
\Big\{ \frac{\Omega_m + \Omega_p}{\Omega_m} \,
\delta_\xi\big(\nu^2 -(\Omega_m + \Omega_p)^2\big) \nonumber\\
\times \,\int \frac{d^2{\mathbf p_\parallel}}{(2\pi)^2} |{\mathbf p_\parallel}|^2
& \Big[ \big( \; \int \frac{dp^3}{2\pi}\, {\mathcal P}_\xi(p^2 - \Omega_m^2)
e^{-i p^3 a} \;\big)^2 \,
\, - \, \pi^2 \big( \; \int \frac{dp^3}{2\pi}\, \delta_\xi(p^2 - \Omega_m^2)
e^{-ip^3 a} \;\big)^2 \Big]\nonumber\\
\, - \, \frac{\theta(|\nu| - \Omega_p)}{(|\nu| - \Omega_p)} \,&
{\mathcal P}_\xi\big((|\nu| - \Omega_p)^2 - \Omega_m^2\big) \;
\int \frac{d^2{\mathbf p_\parallel}}{(2\pi)^2} |{\mathbf
p_\parallel}|^2 \,
\int \frac{dp^3}{2\pi} \int \frac{dq^3}{2\pi}
\frac{\delta(p - |\nu| + \Omega_p)}{q^2 - (|\nu|-\Omega_p)^2}
e^{-i(p^3+q^3) a} \Big\} \;,
\end{align}
and
\begin{align}\label{eq:mper}
m_\perp(\nu) & = \, - \frac{\pi \gamma^2 g^2 }{2 \Omega_p}
\Big\{ \frac{\Omega_m + \Omega_p}{\Omega_m} \,
\delta_\xi\big(\nu^2 -(\Omega_m + \Omega_p)^2\big) \nonumber\\
\times \,\int \frac{d^2{\mathbf p_\parallel}}{(2\pi)^2}
& \Big[ \big( \; \int \frac{dp^3}{2\pi}\, p^3 \, {\mathcal P}_\xi(p^2 - \Omega_m^2)
e^{-i p^3 a} \;\big)^2 \,
\, - \, \pi^2 \big( \; \int \frac{dp^3}{2\pi}\,p^3 \, \delta_\xi(p^2 - \Omega_m^2)
e^{-ip^3 a} \;\big)^2 \Big]\nonumber\\
\, - \, \frac{\theta(|\nu| - \Omega_p)}{(|\nu| - \Omega_p)} \,&
{\mathcal P}_\xi\big((|\nu| - \Omega_p)^2 - \Omega_m^2\big) \;
\int \frac{d^2{\mathbf p_\parallel}}{(2\pi)^2}
\int \frac{dp^3}{2\pi} \int \frac{dq^3}{2\pi}
p^3 \, q^3 \, \frac{\delta(p - |\nu| + \Omega_p)}{q^2 - (|\nu|-\Omega_p)^2}
e^{-i(p^3+q^3) a}\, \Big\} \;.
\end{align}
After performing the integrals over $p^3$ and $q^3$, we find for
$m_\shortparallel$ a more explicit expression:
\begin{align}\label{eq:mpar1}
m_\shortparallel(\nu) & = \, \frac{1}{2} \,
\frac{\pi \gamma^2 g^2 }{2 \Omega_p}
\Big\{ \frac{\Omega_m + \Omega_p}{4 \pi \Omega_m} \,
\delta_\xi\big(\nu^2 -(\Omega_m + \Omega_p)^2\big) \, A_\shortparallel(\xi,\Omega_m,a) \nonumber\\
& + \, \frac{1}{8 \pi^2} \, \theta(|\nu| - \Omega_p) \, (|\nu| - \Omega_p)^2 \,
{\mathcal P}_\xi\big((|\nu| - \Omega_p)^2 - \Omega_m^2\big) \; B_\shortparallel((|\nu| - \Omega_p) a ) \Big\} \;,
\end{align}
with
\begin{eqnarray}
&& A_\shortparallel(\xi,\Omega_m,a)=\int_{-\Omega_m^2}^\infty \frac{du\, u }{u^2 +\xi^2} \,
e^{-2 \beta a} \, [ u \, \cos( 2 \alpha a) + \xi \, \sin( 2 \alpha a) ]\, ,\nonumber\\
&& B_\shortparallel((|\nu| - \Omega_p) a ) = \int_0^1 du \,[\frac{1-u^2}{u} \, \sin(2(|\nu| - \Omega_p) a u)]\, ,
\end{eqnarray}
and
\begin{equation}\label{eq:defab}
\alpha \equiv \sqrt{\frac{\sqrt{u^2 + \xi^2} - u}{2}} \;\;,\;\;\;
\beta \equiv \sqrt{\frac{\sqrt{u^2 + \xi^2} + u}{2}} \;.
\end{equation}
In obtaining (\ref{eq:mpar1}), no small-$\xi$ approximation has been made, and the results are shown in Fig. \ref{fig:m_parallel} for different values of the distance to the plate $a$ and of the material frequency $\Omega_m$. The plots show a resonant behaviour for $|\nu | = \Omega_m + \Omega_p$, and the specific shape of this resonance depends on the distance $a$, as we will discuss below.
An entirely similar procedure allows us to find:
\begin{align}\label{eq:mperp1}
m_\perp(\nu) & = \,
\frac{\pi \gamma^2 g^2 }{2 \Omega_p}
\Big\{ \frac{\Omega_m + \Omega_p}{16 \pi \Omega_m} \,
\delta_\xi\big(\nu^2 -(\Omega_m + \Omega_p)^2\big) \,
A_\perp(\xi,\Omega_m,a)
\nonumber\\
& - \, \frac{1}{8 \pi^2} \, \theta(|\nu| - \Omega_p) \,
(|\nu| - \Omega_p)^2 \,
{\mathcal P}_\xi\big((|\nu| - \Omega_p)^2 - \Omega_m^2\big) \;
B_\perp((|\nu| - \Omega_p) a ) \Big\}\;,
\end{align}
with $\alpha$ and $\beta$ as in (\ref{eq:defab}) and
\begin{eqnarray}
&& A_\perp(\xi,\Omega_m,a)=\int_{-\Omega_m^2}^\infty \, du\,
e^{-2 \beta a} \, \cos( 2 \alpha a) \, ,\nonumber\\
&& B_\perp((|\nu| - \Omega_p) a )=\int_0^1 du \, u \, \sin(2(|\nu| - \Omega_p) a u)\, .
\end{eqnarray}
We show $m_\perp$ in Fig. \ref{fig:m_perp} for different characteristics of the materials. The same resonant behaviour is observed near $|\nu| = \Omega_p + \Omega_m$, together with an oscillatory behaviour that was absent for $m_\shortparallel$. These oscillations have a frequency that depends on the distance to the plate, and their presence is related to the fact that this distance is modified by the center-of-mass motion, in contrast to what happens for the parallel contribution.
\begin{figure}
\includegraphics[scale=1.5]{m_parallel}
\caption{\label{fig:m_parallel}Second order correction to the imaginary part of the effective action, for a particle, modeled as a QHO of frequency $\Omega_p$, whose center-of-mass moves with small oscillations of frequency $\nu$, parallel to the plane, at a distance $a$ above it. The plane is modeled as a continuous set of harmonic oscillators of frequency $\Omega_m$. We have defined $\tilde{a}=a \Omega_p$, $\tilde{\Omega}_m=\Omega_m / \Omega_p$, and we have set the dissipation of the plate as $\xi / \Omega_p^2 = 0.01$.}
\end{figure}
\begin{figure}
\includegraphics[scale=1.5]{m_perp}
\caption{\label{fig:m_perp}Second order correction to the imaginary part of the effective action, for a particle, modeled as a QHO of frequency $\Omega_p$, whose center-of-mass moves with small oscillations of frequency $\nu$ at a distance $a$, normal to a plane. The plane is modeled as a continuous set of harmonic oscillators of frequency $\Omega_m$. We have defined $\tilde{a}=a \Omega_p$, $\tilde{\Omega}_m=\Omega_m / \Omega_p$, and we have set the dissipation of the plate as $\xi / \Omega_p^2 = 0.01$.}
\end{figure}
The coefficients $B_\shortparallel$ and $B_\perp$ can be computed
explicitly, the result being
\begin{eqnarray}
&& B_\shortparallel(x)= -B_\perp(x) + {\rm Si}(2x)\, ,\nonumber\\
&& B_\perp(x)= \frac{-2 x \cos(2x)+\sin(2x)}{4 x^2}\, ,
\end{eqnarray}
where ${\rm Si}(x)$ is the sine-integral function.
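These closed forms can be cross-checked numerically against the integral definitions of $B_\shortparallel$ and $B_\perp$, using the sine integral from `scipy` (the test point is arbitrary; this is our own verification sketch):

```python
# Check B_perp(x) = (-2x cos 2x + sin 2x)/(4x^2) and
#       B_par(x)  = -B_perp(x) + Si(2x)
# against their defining integrals over u in [0, 1].
import numpy as np
from scipy.integrate import quad
from scipy.special import sici  # sici(z) returns (Si(z), Ci(z))

def b_perp_closed(x):
    return (-2.0 * x * np.cos(2 * x) + np.sin(2 * x)) / (4.0 * x ** 2)

def b_par_closed(x):
    return -b_perp_closed(x) + sici(2.0 * x)[0]

x = 1.7  # arbitrary test point
b_perp_num, _ = quad(lambda u: u * np.sin(2 * x * u), 0.0, 1.0)
b_par_num, _ = quad(lambda u: (1 - u ** 2) / u * np.sin(2 * x * u), 0.0, 1.0)
print(abs(b_perp_num - b_perp_closed(x)) < 1e-8)  # True
print(abs(b_par_num - b_par_closed(x)) < 1e-8)    # True
```

The $B_\shortparallel$ integrand is finite at $u \to 0$ (it tends to $2x$), so the quadrature needs no special treatment of the endpoint.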
It is worth noting that there is a qualitative difference between the $a \to
\infty$ behaviours of $m_\shortparallel$ and $m_\perp$ above the threshold
$|\nu| = \Omega_p$. Indeed, while the latter vanishes, the former reaches
the finite limit:
\begin{equation}\label{eq:mpar2}
m_\shortparallel(\nu) \; \to \; \frac{\gamma^2 g^2 }{64\Omega_p}
\, (|\nu| - \Omega_p)^2 \, {\mathcal P}_\xi\big((|\nu| - \Omega_p)^2 -
\Omega_m^2\big) \;.
\end{equation}
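The finite limit of $m_\shortparallel$ versus the vanishing of $m_\perp$ traces back to the large-argument behaviour of the two coefficient functions: $B_\perp(x) \to 0$ while $B_\shortparallel(x) \to {\rm Si}(\infty) = \pi/2$. A quick numerical check (our own sketch):

```python
# Large-x asymptotics behind the a -> infinity limits:
#   B_perp(x)     -> 0        (decays like cos(2x)/(2x)),
#   B_parallel(x) -> pi/2     (since Si(z) -> pi/2 for z -> infinity).
import numpy as np
from scipy.special import sici

def b_perp(x):
    return (-2.0 * x * np.cos(2 * x) + np.sin(2 * x)) / (4.0 * x ** 2)

def b_par(x):
    return -b_perp(x) + sici(2.0 * x)[0]

x = 1e4  # 'a -> infinity' regime of the dimensionless argument
print(abs(b_perp(x)) < 1e-3)               # True
print(abs(b_par(x) - np.pi / 2) < 1e-3)    # True
```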
The difference between the response for the two different kinds of
oscillations can be traced back to the fact that the respective effective
actions depend on different properties of the scalar field propagator
in the presence of the plate. Indeed, for parallel motion, one needs the
propagator between two points at the same distance $a$ from the plate,
while for perpendicular motion one has to take two derivatives with respect
to the third coordinate, and then evaluate at the average distance, $a$.
This results in different $a \to \infty$ limits. Physically, this may be
interpreted as a consequence of the different response properties of the
plate for normal vs parallel incidence.
We see that, up to this order, the emission probability for both parallel
and perpendicular motions is a combination of the approximants of the Cauchy
principal value ${\mathcal P}$ and of Dirac's $\delta$-function (see
Eq.\eqref{eq:approximants}), both localized at the resonant frequency
$\vert\nu\vert = \Omega_m + \Omega_p$. Moreover, in the limit $\xi\to 0$
the coefficients are finite and can be computed explicitly, using the fact
that $\alpha\simeq 0$ and $\beta\simeq \sqrt{\vert u\vert}$. We obtain:
\begin{eqnarray}
A_\shortparallel(0,\Omega_m,a)&=& A_\perp(0,\Omega_m,a) = \int_{-\Omega_m^2}^\infty \, du\,
e^{-2 \sqrt{\vert u\vert}a}\nonumber\\
&=& \frac{2}{a^2}\big(2-(1+\Omega_m a)e^{-\Omega_m a}\big)\, .
\end{eqnarray}
It is interesting to remark that the emission probabilities have peaks at the resonant frequency (with a height determined by the coefficients of the $\delta_\xi$ functions) and regions of enhancement and suppression at both sides of the resonant frequency, with amplitudes given by the coefficients that multiply the principal values. The ratio of the coefficients of $\delta_\xi$
and ${\mathcal P}_\xi$ in Eqs.\eqref{eq:mpar1} and \eqref{eq:mperp1} determines, for each kind of motion, the dominant behaviour. We illustrate this in Fig. \ref{fig:plots_m}.
\begin{figure}
\includegraphics[scale=1.5]{plots_m}
\caption{\label{fig:plots_m}Second order correction to the imaginary part of the effective action for $\Omega_m / \Omega_p=2$. We show the behaviour near the resonance, that occurs at $|\nu| = \Omega_m + \Omega_p$. We have defined $\tilde{a}= a \Omega_p$.}
\end{figure}
The structure of the results is reminiscent of what happens with the spontaneous decay rate of an excited atom immersed in an absorbing dielectric \cite{spontaneous}. Note also that the dissipation coefficient $\xi$ regulates the otherwise
infinite results that would be obtained for non-lossy dielectrics. In the electromagnetic case, the presence of $\xi$ ensures the validity of the Kramers-Kronig relations. In the particular case $\xi=0$, one should work beyond the perturbative approximation in the interaction between the mirror's degrees of freedom and the quantum field. In our case, the atom is outside the dielectric, and the corrections to the free-space probability of emission are due to the vacuum fluctuations that are present near the surface of the dielectric plane \cite{Bartolo}.
\section{Conclusions}\label{sec:conc}
We have calculated the vacuum persistence amplitude for a moving harmonic
oscillator, first in free space, and afterwards in the presence of a
dielectric plane. The in-out effective action was
perturbatively evaluated, in an expansion in powers of the coupling between
the atom and the field. We presented the result for the corresponding
imaginary part as a functional of the atom's trajectory, showing that,
to the lowest non-trivial order, there is a threshold.
This is associated with the possibility of internal excitation of the atom,
before radiation emission. We also found that the NTLO exhibits the combination
of the previously mentioned effect with the usual DCE (which does not
involve such excitation process).
An interesting point of the calculation is the shift in the natural
frequency of the oscillator (and therefore in the energy levels of the
atom) produced by the vacuum fluctuations. It is mandatory to take into
account this shift in order to obtain finite corrections to the vacuum
persistence amplitude at the NTLO. Further, we have considered the motion of
the atom in the presence of an imperfect mirror, modelling the mirror by quantum
harmonic oscillators as microscopic degrees of freedom coupled to an
environment as a source of internal dissipation. Again, we have evaluated
the vacuum persistence amplitude for the case of an atom moving near the
plate, up to first order in both couplings between the atom and the
microscopic degrees of freedom and the vacuum field. We have shown that, at
the same order in which there is DCE, there also is quantum contactless friction
and corrections to free emission. These corrections show a peculiar
behaviour when the external frequency equals the sum of the frequency of
the atom and the frequency of the microscopic degrees of freedom, with
regions of enhancement and suppression of the vacuum persistence amplitude.
We pointed out that this is similar to what happens with the spontaneous
emission of an atom immersed in a lossy dielectric. The inclusion of
losses in the dielectric is crucial to get a finite vacuum persistence
amplitude for an accelerated motion of the atom. Friction effects are less
sensitive to dissipation, and have a well defined limit for non-lossy
dielectrics.
\section{Acknowledgements}
This work was supported by ANPCyT, CONICET, UBA and UNCuyo, Argentina. M. B. Far\'\i as acknowledges financial support from the National Research Fund of Luxembourg under CORE Grant No. 11352881.
\section{Introduction}
Biological systems often contain microscopic objects of different
sizes whose self-propulsion plays a crucial role in
vital functions. Prominent examples are white blood cells chasing
intruders~\cite{fenteany_COH04}, motor proteins facilitating
the transport of e.g.\ RNA inside cells~\cite{kanai_Neuron04},
or bacteria such as Escherichia coli which are simply searching
for food~\cite{berg_04}.
On the other hand, synthetic microswimmers are designed
to offer promising solutions to problems that are hard
to approach with the toolbox available to conventional
nanotechnology~\cite{ebbens_SM10,yamamoto_powder15}.
Potential fields of applications are highly directional
drug delivery, improved biomarkers or contrast
agents~\cite{abdelmohsen_JMCB14}, as well as the elimination
of pollutants in the framework of environmental
protection~\cite{gao_ACSNano14}.
By far the simplest synthetic self-propelling agents are
binary Janus-particles of spherical shape. These particles
may be driven by catalytic reactions with an ingredient of the
solvent such as hydrogen peroxide~\cite{howse_PRL07} or
hydrazine~\cite{gao_JACS14}. If the `fuel' exhibits a
concentration gradient, then the activity of these active
Brownian particles (ABPs) becomes position dependent, leading to phenomena that have
recently been demonstrated to resemble
chemotaxis~\cite{peng_AC15,ghosh_PRE15,vuijk_18}.
Instead of being chemically driven, Janus particles may
also respond to light as a result of an inhomogeneous
surface-heating and the resulting local temperature
gradient~\cite{lozano_NatureComm16}. This resembles
phototactic behavior, and since light intensities are
easily controlled in a laboratory setup, not only
position-, but also rapidly varying time-dependent
activity profiles may be employed to manipulate ABPs.
Recent reviews about the state of the art of artificial
nanomotors have been presented by Yamamoto et
al.~\cite{yamamoto_powder15} and Bechinger et
al.~\cite{bechinger_RevModPhys16}.
Some of the authors have recently applied the linear response
(Green-Kubo) approach to ABPs and successfully computed average
swim speeds~\cite{sharma_JCP16} in homogeneous systems as well
as a torque-free polarization and the resulting density
distributions in systems with spatially
inhomogeneous activity profiles~\cite{sharma_PRE17}. In the
present work, we generalize that formalism to systems in
which the activity fields are additionally time dependent.
Recently, Geiseler et al.\ have analyzed the behavior of
microswimmers in active density waves that are sweeping
over the ABPs and thereby driving them into selected
directions~\cite{geiseler_PRE16,geiseler_SR17}. Techniques
to manipulate the global direction of motion of
ABPs are naturally of paramount interest to the practitioner.
While existing studies are restricted to two-dimensional
setups and thus affected by boundary effects of the substrate
on which the particles are sliding, our linear response
approach will be applied to three-dimensional bulk
systems in the absence of boundaries.
In Sec.\ \ref{modelandtheory}, we first present our model,
the system of units and a short introduction to linear response
theory. Section \ref{sec:orientation} derives the orientation
profiles of ABPs that are exposed to a propagating sinusoidal
activity wave. The resulting torque-free polarization profile
inside the activity gradient is derived as a fully analytical
expression and compared to Langevin dynamics simulations.
The corresponding density distributions are derived in Sec.\ \ref{sec:densities}
and again a close agreement with the simulations is reported.
Section \ref{sec:flux} studies the fluxes induced into the ABPs
by the propagating activity wave, which strongly depend on
the particle size. This fact is exploited in Sec.\ \ref{sec:separation}
in which a mixture of ABPs of different sizes is de-mixed by
selective applications of induced fluxes. We discuss the
parameter ranges in which the linear response ansatz yields
accurate predictions (Sec.\ \ref{sec:activity}) and summarize
our findings in Sec.\ \ref{sec:summary}.
\section{Model and Theory}\label{modelandtheory}
\subsection{Numerical simulations}
In this work, we introduce a standard particle with a diameter of $b = 1$
and mass $m = 1$, which define the length and mass units. We explicitly
specify the translational diffusion coefficient $D_t = k_B T/\zeta$
and the drag coefficient $\zeta$; with $k_B = 1$, this also fixes the
dimensionless temperature of the system. The unit time $\tau$ is the time
the standard particle needs to diffuse over the distance of its own
diameter, i.e.\ $\tau = b^2/(6D_t)$. We have scaled the remaining
parameters to yield $\tau = 1$.
The Langevin dynamics simulations were carried out with the
molecular dynamics simulation
package LAMMPS~\cite{plimpton_CompPhys95} in fully three dimensional
setups. The activity is generated by an explicit force that acts in the
direction of the particle's internal orientation vector, while noise is
introduced by a Langevin thermostat at a system temperature of $T = 5/3$.
We use an integration time-step of $4\cdot 10^{-4}$. In the
simulations of sections \ref{sec:orientation} -- \ref{sec:flux},
500 active particles are
placed into a box of the size $L^3 = 15^3$ and
periodic boundaries. The particles have a diameter of $b=1$,
but no interaction, so that actually an ensemble of single
particles is simulated. In Sec.\ \ref{sec:separation},
particles of different diameters are distributed inside a
larger box of size $15\times 15\times 150$, being bounded in
z-direction with impenetrable walls, but periodic in x- and
y-directions.
The remaining parameters are summarized in Tab.\ \ref{tab:parameters}.
Further technical details regarding the implementation of ABPs
in LAMMPS have been described in our previous publication~\cite{merlitz_SM17}.
\begin{table}[htb]
\begin{tabular}[b]{c | c | c | c }
diameter, $b $ & 0.5 & 1 & 2 \\
\hline
mass, $m $ & 1/8 & 1 & 8 \\
frictional drag coefficient, $\zeta $ & 5 & 10 & 20 \\
momentum relaxation time, $\tau_m$ & 1/40 & 1/10 & 2/5 \\
translational diffusion coefficient, $D_t $ & 1/3 & 1/6 & 1/12 \\
rotational diffusion coefficient, $D_r$ & 4 & 1/2 & 1/16 \\
rotational relaxation time, $\tau_r $ & 1/8 & 1 & 8 \\
ratio, $\tau_r / \tau_m$ & 5 & 10 & 20 \\
\hline
\end{tabular}
\caption{Simulation parameters for ABPs of different diameter $b$. The
mass $m$ is required for the integrator, but the resulting momentum
relaxation time $\tau_m$ is significantly shorter than $\tau_r$ and therefore,
the motion of the particle is overdamped on the time scale of $\tau_r$.
\label{tab:parameters}}
\end{table}
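The rows of Tab.\ \ref{tab:parameters} can be cross-checked against the relations $D_t = k_B T/\zeta$ (stated above) and $\tau_r = 1/(2D_r)$ (introduced below), together with the standard inertial relation $\tau_m = m/\zeta$, which we assume here. A minimal sketch of this consistency check (Python assumed as tooling, with $k_B = 1$, $T = 5/3$):

```python
# Consistency check of the parameter table, assuming D_t = k_B T / zeta,
# tau_m = m / zeta and tau_r = 1 / (2 D_r), with k_B = 1 and T = 5/3.
T = 5 / 3
rows = {  # diameter b: (m, zeta, tau_m, D_t, D_r, tau_r)
    0.5: (1 / 8, 5, 1 / 40, 1 / 3, 4, 1 / 8),
    1.0: (1, 10, 1 / 10, 1 / 6, 1 / 2, 1),
    2.0: (8, 20, 2 / 5, 1 / 12, 1 / 16, 8),
}
for b, (m, zeta, tau_m, D_t, D_r, tau_r) in rows.items():
    assert abs(D_t - T / zeta) < 1e-12        # Einstein relation
    assert abs(tau_m - m / zeta) < 1e-12      # momentum relaxation time
    assert abs(tau_r - 1 / (2 * D_r)) < 1e-12 # rotational relaxation time
```

All three assertions hold for each row, confirming that the table follows the Stokes-Einstein-type scalings $D_t \sim b^{-1}$ and $D_r \sim b^{-3}$.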
With the choice of parameters as above, the motion of an active particle is
overdamped on the time scale of $\tau_r$. In the overdamped limit, the motion
of an active particle can be modelled by the Langevin equations
\begin{align}\label{full_langevin}
&\!\!\!\!\!\!\dot{\boldsymbol{r}} = \frac{f(\boldsymbol{r}, t)}{\zeta}\,\boldsymbol{p} + \boldsymbol{\xi}\;\;,
\;\;\;
\dot{\boldsymbol{p}} = \boldsymbol{\eta}\times\boldsymbol{p} \,,
\end{eqnarray}
with coordinates $\boldsymbol{r}$ and the embedded unit vector $\boldsymbol{p}$
which defines the particle orientation. $f$ is the modulus
of the force that drives the particle into the direction
of its orientation vector, and
$\zeta$ is the frictional drag coefficient.
The stochastic vectors $\boldsymbol{\xi}(t)$ and $\boldsymbol{\eta}(t)$ are Gaussian distributed with zero mean and
time correlations
$\langle\boldsymbol{\xi}(t)\boldsymbol{\xi}(t')\rangle=2D_t\delta(t-t')$ and
$\langle\boldsymbol{\eta}(t)\boldsymbol{\eta}(t')\rangle=2D_r\delta(t-t')$,
with the translational and rotational diffusion coefficients
$D_t$ and $D_r$, respectively. The latter is related to the
rotational relaxation time according to $\tau_r = 1/(2D_r)$.
Note that in the second part of Eq.\ \eqref{full_langevin}, the
orientation vector does not couple to the activity field and hence
no direct torque acts on the particle due to the
position-dependent activity. Our linear response approach, presented in the following subsection, is based on the overdamped set of equations~\eqref{full_langevin}.
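A minimal Euler--Maruyama integration of Eqs.~\eqref{full_langevin} may be sketched as follows (Python with numpy is assumed as tooling; this is an illustrative sketch, not the authors' LAMMPS setup, using the $b=1$ parameters of Tab.\ \ref{tab:parameters} and a homogeneous activity $f = f_0$ for simplicity):

```python
import numpy as np

rng = np.random.default_rng(0)
dt, n_steps = 4e-4, 10_000
zeta, D_t, D_r, f0 = 10.0, 1 / 6, 0.5, 2.5   # b = 1 row of the table

r = np.zeros(3)                        # particle position
p = np.array([0.0, 0.0, 1.0])          # orientation unit vector
for _ in range(n_steps):
    # translational update: active drift along p plus thermal noise
    r += (f0 / zeta) * p * dt + np.sqrt(2 * D_t * dt) * rng.standard_normal(3)
    # rotational diffusion: p is rotated by the random vector eta
    eta = np.sqrt(2 * D_r * dt) * rng.standard_normal(3)
    p += np.cross(eta, p)
    p /= np.linalg.norm(p)             # re-normalize after the Euler step
```

Re-normalizing $\boldsymbol{p}$ after each step keeps the orientation on the unit sphere, a constraint that the continuous-time equation preserves exactly but the discrete Euler step does not.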
\subsection{Linear response ansatz}
It follows exactly from~\eqref{full_langevin} that the joint $N$-particle probability distribution $P(t) \equiv P(\bm{r}^N, \bm{p}^N, t)$ evolves according to~\cite{gardiner85}
\begin{equation}
\frac{\partial P(t)}{\partial t} = \Omega_a(t) P(t)
\label{fp}
\end{equation}
with the time-evolution operator $\Omega_a$.
The time-evolution operator can be split into a sum of two terms,
$\Omega_{\rm a}(t)=\Omega_{\rm eq}+\delta\Omega_{\rm a}(t)$,
where the equilibrium contribution is given by
\begin{align}\label{smol_op_eq}
\Omega_{\rm eq} = \sum_{i=1}^{N} \boldsymbol{\nabla}_{i}\!\cdot\!
\big[
D_\text{t}\!\left(\boldsymbol{\nabla}_{i}\! - \beta\boldsymbol{F}_i\right)
\big] \!+\! D_\text{r}\boldsymbol{R}_i^2,
\end{align}
with rotation operator $\boldsymbol{R}\!=\!\boldsymbol{p}\times\!\nabla_{\!\boldsymbol{p}}$
\cite{morse1953methods}, $\beta = 1/(k_{\rm B}T)$, and
an external force $\boldsymbol{F}_i$ which is absent in what follows.
The active part of the dynamics is described by the operator $\delta\Omega_{\rm a} = -\sum_i \boldsymbol{\nabla}_{i}\!\cdot (v_0(\boldsymbol{r}_i,t)\boldsymbol{p}_i)$, where $v_0 = f/\zeta$ is the self-propulsion speed. We refer to Refs.~\cite{sharma2016communication,sharma_PRE17} for a detailed elaboration of
the linear response formalism. Here, we only give a brief outline of the method. We obtain from Eq.~\eqref{fp} an exact expression for
the non-equilibrium average of a test function $f \equiv f(\boldsymbol{r}^N\!\!,\boldsymbol{p}^N\!)$ as
\begin{align}\label{faverage}
\langle f \rangle(t) = \langle f \rangle_{\rm eq} - \int_{-\infty}^{t}\!\!dt'\,
\langle G(t') e_{-}^{\int_{t'}^{t}ds\,{\Omega}^{\dagger}_{\rm a}(s)}f \rangle_{\rm eq},
\end{align}
where $e_{-}$ is a negatively ordered exponential function~\cite{brader2012first}. We have defined $G(t) = K(t) + V(t)$ with
\begin{align}
K(t) &= \sum_{i=1}^{N} v_0(\boldsymbol{r}_i,t)\,\boldsymbol{p}_i \cdot \beta\boldsymbol{F}_i \label{KP},\\
V(t) &= \sum_{i=1}^{N} \boldsymbol{p}_i\cdot \nabla_i v_0(\boldsymbol{r}_i,t)
\label{VP}
\end{align}
and the adjoint operator is given by
${\Omega}^{\dagger}_{\rm a}(t)={\Omega}^{\dagger}_{\rm eq}-\delta\Omega_{\rm a}(t)$, where ${\Omega}^{\dagger}_{\rm eq}=\sum_{i}
D_\text{t}\!\left(\boldsymbol{\nabla}_{i}\! + \beta\boldsymbol{F}_i\right)
\!\cdot\!\boldsymbol{\nabla}_{i} \!+\! D_\text{r}\boldsymbol{R}_i^2$.
Linear response corresponds to the system response when the full-time
evolution operator in~\eqref{faverage} is replaced by the time-independent
equilibrium adjoint operator. This is equivalent to assuming that the active
system is close to equilibrium and that the activity corresponds to a small
perturbation.
\section{Polarization in the activity field}
\label{sec:orientation}
The activity field is a time-dependent
function of the $z$-coordinate, which defines the magnitude
of the driving force as
\begin{equation}\label{eq:f(z)}
f(z,t) = f_0 \left\{ \sin \left[\omega (z - v t)\right] + s\right\}\;,
\end{equation}
with the factor $f_0 = 2.5$, the system time $t$, the phase
velocity $v$ and the shift $s = 1.0$. A vertical shift of
$s \ge 1$ is required to avoid unphysical negative values for
the activity. The periodicity is accounted for with the choice
\begin{equation}
\omega = \frac{2n\pi}{L}
\end{equation}
and a positive integer $n$. The average orientation per particle
is defined as
\begin{equation}
\bm {p}(\bm{r}) = \frac{\langle \sum_i
\delta(\bm{r}-\bm{r}_i)\bm{p}_i\rangle}{\rho(\bm{r})}\;,
\end{equation}
with the one-body density $\rho(\bm{r}) =\langle \sum_i
\delta(\bm{r}-\bm{r}_i)\rangle$. The steady-state average orientation corresponding to a time-independent inhomogeneous activity field has already been obtained in Ref.~\cite{sharma_PRE17}. For a space- and time-dependent activity, the orientation can be obtained using Eq.~\eqref{faverage} as
\begin{align}
\boldsymbol{p}(\boldsymbol{r},t) &= \int_0^{t}\!dt'\int d\boldsymbol{r}' v_0(\boldsymbol{r}',t')\,\chi(|\boldsymbol{r} - \boldsymbol{r}'|,|t-t'|),
\label{pexpression}
\end{align}
where the space-time response function $\chi(|\boldsymbol{r} - \boldsymbol{r}'|,|t-t'|)$
is given by
\begin{align}
\label{response}
\chi(|\boldsymbol{r} - \boldsymbol{r}'|,|t-t'|) =\frac{e^{-2D_{r}|(t-t')|}}{3}\boldsymbol{\nabla} G_{\rm VH}^{\rm s}(|\boldsymbol{r}-\boldsymbol{r}'|,|t-t'|).
\end{align}
In Eq.~\eqref{response}, $G_{\rm VH}^{\rm s}(r,t)$ is the self-part of the Van Hove function. This function can be approximated as a Gaussian~\cite{Hansen90}
\begin{equation}
G_{\rm VH}^s(\bm{r},t) = \frac{1}{(4\pi D_t t)^{3/2}}\, e^{-r^2/4D_tt}.
\end{equation}
We note that this approximation is valid even in the case of interacting
particles for any spherically-symmetric interaction potential, provided the
density is sufficiently low~\cite{sharma_PRE17}. Therefore, the presented
result is a generic one and not limited to ideal gases. In this work, we have
considered non-interacting particles as a special case, for which the Gaussian
approximation for the Van Hove function is exact.
\begin{figure}[t]
\includegraphics[angle=270,width=\columnwidth]{orientation.ps}
\caption{Average orientation of the ABP in the coordinate
system which moves with the phase velocity $v$ of the activity wave. Symbols
are MD simulations, closed curves are the approximation Eq.\ \eqref{eq:p}.
In this example, $n=2$ and $s=1$.}
\label{fig:orientation}
\end{figure}
Using Eqs.~\eqref{pexpression} and \eqref{response}, the average orientation corresponding to the activity wave in Eq.~\eqref{eq:f(z)} is obtained as
\begin{eqnarray} \nonumber
p(z,t) =
&-&\frac{f_0\,\omega}{3\zeta}\int_{0}^{\infty} dt'\,
\int_{-\infty}^{\infty} dz'\, \cos \left[\omega (z' - v(t-t')) \right]\\
&\cdot&
\frac{\exp \left[-2D_r t' - \frac{(z-z')^2}{4D_t t'}\right]}
{\sqrt{4\pi D_t t'}}\;,
\end{eqnarray}
yielding
\begin{equation}\label{eq:p}
p(z,t) = -\frac{f_0\, \omega \cos \left[ \omega (z - vt) +
\psi \right]}{3 \zeta \sqrt{(2D_r + D_t\omega^2)^2 + v^2}}\;,
\end{equation}
where we have introduced the phase shift
\begin{equation}\label{eq:psi}
\psi = \arctan\left[\frac{v}{2D_r + D_t \omega^2} \right]\;.
\end{equation}
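Two qualitative features of Eqs.~\eqref{eq:p} and \eqref{eq:psi} -- the amplitude decaying and the phase shift growing with increasing $|v|$ -- can be checked directly. A small numerical sketch (Python assumed as tooling, with the $n=2$, $b=1$ parameters used in Fig.\ \ref{fig:orientation}):

```python
import numpy as np

f0, zeta, D_t, D_r = 2.5, 10.0, 1 / 6, 0.5
L, n = 15.0, 2
w = 2 * n * np.pi / L                  # angular frequency of the activity wave
k = 2 * D_r + D_t * w**2

def amplitude(v):                      # prefactor of the cosine in Eq. (eq:p)
    return f0 * w / (3 * zeta * np.sqrt(k**2 + v**2))

def phase(v):                          # phase shift psi of Eq. (eq:psi)
    return np.arctan(v / k)

vels = [0.0, 0.8, 1.6, 3.2, 6.4]
amps = [amplitude(v) for v in vels]
phis = [phase(v) for v in vels]
assert all(a1 > a2 for a1, a2 in zip(amps, amps[1:]))  # amplitude decays with v
assert all(p1 < p2 for p1, p2 in zip(phis, phis[1:]))  # phase shift grows with v
```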
The walker's orientation, Eq.\ \eqref{eq:p}, is stationary in the
comoving coordinate frame $\xi = z-vt$ and is shown in Fig.\
\ref{fig:orientation}: With increasing phase velocity, the
amplitude of the orientation is decreasing, while the phase
shift increases. This is immediately obvious in Eq.\ \eqref{eq:p},
which exhibits the phase velocity in its denominator, and
in Eq.\ \eqref{eq:psi} in which the phase shift increases
linearly with $v$ in its leading order Taylor term. With
increasing phase velocity of the activity wave, the activity
changes rapidly so that the particle is eventually unable
to respond to changes of the external field. The crucial time scale is
the rotational relaxation time of $\tau_r = 1$, and
when the activity wave runs from its minimum to its
next maximum during that time, the orientation profile
is essentially turning flat. This happens at phase velocities
of the order of $|v_{\rm max}| \approx \pi / 2$.
The orientation is generally
pointing against the activity gradient, i.e.\ the ABP is
turning toward the direction in which activity
decreases~\cite{sharma_PRE17}.
The linear response theory generally overestimates the degree
of orientation, as well as the amount of phase shift. The
deviation from the simulation results increases with the magnitude
of the driving force, or the pre-factor $f_0$ in Eq.\
\eqref{eq:f(z)}. On the other hand, we have observed almost
perfect agreement with the simulation data when $f_0$ was
set smaller than 0.1. It is clear that the validity of
the linear response approximation remains restricted to
the regime of low activities.
\section{Density distributions}\label{sec:densities}
\begin{figure}[t]
\includegraphics[angle=270,width=\columnwidth]{density.ps}
\caption{Density distributions of the ABPs in the coordinate
comoving frame at different phase velocities of the activity wave. Symbols
are MD simulations, closed curves are the approximation Eq.\ \eqref{eq:rho1}.
In this example, $n=2$ and $s=1$.}
\label{fig:density}
\end{figure}
Traveling activity waves as in Eq.~\eqref{eq:f(z)} induce traveling
orientation waves (Eq.~\eqref{eq:p}) and traveling density waves. One cannot,
however, use the linear response approach to calculate the density distribution,
because the density is invariant under the activity at linear order. To calculate the density
distribution, we integrate over $\boldsymbol{p}$, $x$ and $y$ in Eq.~\eqref{fp} to obtain
the following one-dimensional equation for the density $\rho(z,t)$
\begin{equation}
\frac{\partial \rho(z,t)}{\partial t} = \frac{\partial}{\partial z}\left[D_{t}\frac{\partial \rho(z,t)}{\partial z} - \frac{f_0(z,t)}{\zeta}p(z,t) \rho(z,t) \right].
\label{eq:density}
\end{equation}
In the comoving frame $\xi$, Eq.~\eqref{eq:density} can be recast as a continuity equation
\begin{equation}\label{eq:delrho}
\frac{\partial \rho}{\partial t} + \frac{\partial J}{\partial \xi}
= 0\;,
\end{equation}
where $\rho = \rho(z-vt) \equiv \rho(\xi)$ and $J(\xi)$ is the flux in the comoving frame given as
\begin{equation}\label{Jflux}
J(\xi) = -D_t \frac{\partial \rho}{\partial \xi} + \frac{f(\xi)}{\zeta}\, p(\xi)\, \rho(\xi)
- v\, \rho(\xi).
\end{equation}
Since the density is stationary in the $\xi$-frame, it follows from Eq.~\eqref{eq:delrho} that $J(\xi)$ is constant. On integrating Eq.~\eqref{Jflux} and using the periodicity of $p(\xi)$ and $\rho(\xi)$, one obtains the following equation for the flux
\begin{equation}\label{eq:j2}
\frac{J}{\rho_b L D_t} = \frac{\left[
1 - \exp\left\{
-\int_0^L \frac{b(z)}{D_t} \, dz
\right\}
\right]
}{
\int_0^L dx\, \int_0^L dy\, \exp \left\{
-\int_x^{x+y} \frac{b(z)}{D_t} \, dz
\right\}
}\;
\end{equation}
and the density
\begin{equation}\label{eq:rho1}
\frac{\rho(\xi)}{\rho_b} = \frac{L
\int_0^L dy\, \exp\left[
- \int_{\xi}^{\xi+y} \frac{b(\xi')}{D_t}\, d\xi'
\right]
}{
\int_0^L d\xi\, \int_0^L dy\, \exp \left[
- \int_{\xi}^{\xi+y} \frac{b(\xi')}{D_t}\, d\xi'
\right]
}\;,
\end{equation}
with the particle bulk density $\rho_b = N/V$ and the function
$b(\xi) = \zeta^{-1} f(\xi) \, p(\xi) - v$.
The integrals in Eq.~\eqref{eq:rho1} have to be evaluated numerically; solutions are
shown in Fig.\ \ref{fig:density}, where we have used the theoretical prediction of Eq.~\eqref{eq:p} for $p(\xi)$. The linear response theory
offers an excellent approximation to the simulation data in
the present parameter ranges. Even at high phase
velocities, the density distributions are almost
quantitatively reproduced. A comparison to Fig.\
\ref{fig:orientation} reveals the different time scales at which
the variables respond to the activity field: While the density
distribution is almost uniform at phase velocities as low as
$|v| \approx 1.6$, the polarization pattern is not yet flat
at a far higher velocity of $|v| \approx 6.4$.
This is a consequence of the longer relaxation times
associated with translocations of the ABPs, as
compared to their rotational relaxation times. In the
present parameter setting, the particle positions relax
roughly an order of magnitude slower than their orientations.
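The numerical evaluation of Eq.~\eqref{eq:rho1} reduces to cumulative integrals of $b(\xi)/D_t$ on a periodic grid. A sketch of one way this may be done (Python with numpy assumed as tooling; $p(\xi)$ is taken from Eq.~\eqref{eq:p}, and the grid resolution is an illustrative choice):

```python
import numpy as np

f0, s, zeta, D_t, D_r = 2.5, 1.0, 10.0, 1 / 6, 0.5
L, n, v = 15.0, 2, 0.4
w = 2 * n * np.pi / L
k = 2 * D_r + D_t * w**2
psi = np.arctan(v / k)

def b_fun(x):                                   # b(xi) = f(xi) p(xi)/zeta - v
    f = f0 * (np.sin(w * x) + s)
    p = -f0 * w * np.cos(w * x + psi) / (3 * zeta * np.sqrt(k**2 + v**2))
    return f * p / zeta - v

m = 600                                         # grid points per box length L
x = np.linspace(0.0, 2 * L, 2 * m, endpoint=False)
dx = x[1] - x[0]
B = np.concatenate(([0.0], np.cumsum(b_fun(x) / D_t) * dx))  # B[j] ~ int_0^{x_j}

i = np.arange(m)                                # xi-grid indices in [0, L)
j = np.arange(m)                                # y-grid indices in [0, L)
E = np.exp(-(B[i[:, None] + j[None, :]] - B[i[:, None]]))
num = E.sum(axis=1) * dx                        # int_0^L dy exp(-int ...)
rho = L * num / (num.sum() * dx)                # rho(xi)/rho_b of Eq. (eq:rho1)
assert np.isclose(rho.mean(), 1.0)              # density normalized to rho_b
assert rho.min() > 0 and rho.max() > rho.min()  # positive, non-uniform profile
```

The normalization check reflects the fact that, by construction, the box-averaged density equals the bulk density $\rho_b$.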
\section{Induced flux}\label{sec:flux}
\begin{figure}[t]
\includegraphics[angle=270,width=\columnwidth]{omega_b1.ps}
\caption{Induced drift velocities
(Eq.\ \ref{eq:vd}), as a function of the phase velocity of
the activity wave. Different curves correspond to different angular
frequencies $\omega = 2n\pi/L$ that satisfy the periodic boundary condition.
Here, the size of the ABP equals $b=1$ and the activity factor is $f_0 = 2.5$.
}
\label{fig:omega_b1}
\end{figure}
The average drift velocity
of the ABPs in laboratory frame can be written as
\begin{equation}\label{eq:vd}
v_d = \frac{J}{\rho_b} + v\;.
\end{equation}
Figure \ref{fig:omega_b1} displays solutions to Eq.~\eqref{eq:vd} for
different angular frequencies $\omega = 2n\pi/L$ of the activity waves.
As a function of the phase velocity, the induced drift generally
exhibits a global maximum, which, however, differs with the angular
frequency. The overall maximum value of the drift velocity is found
with $n = 6$ and a phase velocity of about $v \approx 0.75$. We note that
an earlier study has reported situations in which negative
drifts occur, i.e.\ drifts in the direction opposite to the propagation of the
activity wave~\cite{geiseler_PRE16,geiseler_SR17}. So far, we have
not been able to reproduce such a phenomenon with our setup in
three dimensional systems, neither in linear response approximation
nor in simulation. It may well be the case that the parameter range
covered in our studies did not allow for such a reversal of the
direction of drift.
\begin{figure}[t]
\includegraphics[angle=270,width=\columnwidth]{vd_vs_vp_all2.ps}
\caption{Induced drift velocities as a function of phase
velocity
for different particle sizes and angular frequencies $\omega = 2n\pi/L$. Data
points are simulations, solid curves computed from Eq.\ \eqref{eq:vd},
and dashed curves are the approximation \eqref{eq:vd_analyt}.}
\label{fig:vd_vs_vp_all}
\end{figure}
It is natural to ask how an induced drift would depend upon
the properties of the particle. We have repeated the analysis of
Fig.\ \ref{fig:omega_b1} with particles of sizes $b=0.5$ and
$b=2$, with properties as summarized in Tab.\ \ref{tab:parameters}.
Figure \ref{fig:vd_vs_vp_all} only contains curves for angular frequencies
at which the respective particles reach the highest drift velocities.
For small ABPs, MD simulations and linear response theory (Eq.\ \eqref{eq:vd})
display a reasonably close agreement. For the larger particle of
diameter $b=2$ (green curve and triangles), however, significant
deviations are visible. Due to their longer rotational relaxation
times, the dynamics of these particles are less
affected by Brownian motion, i.e.\ more ballistic, and linear
response theory begins to break down. In Sec.\ \ref{sec:activity}
we are going to discuss the validity range
of the linear response approach in detail.
It is instructive to investigate an approximation to the induced
drift velocity, considering that the orientation of the particle
relaxes faster than its density profile. If the traveling wave
propagates sufficiently fast, then $\rho(\xi)$ in Eq.~\eqref{Jflux}
may be approximated by the uniform bulk density, turning
Eq.~\eqref{eq:vd} into
\begin{equation}\label{eq:vd_app}
v_d(\xi) \approx \frac{1}{\zeta} p(\xi)\, f(\xi)\;,
\end{equation}
which can be averaged over the box to yield the average drift velocity
\begin{equation}\label{eq:vd_analyt}
v_d = \frac{f_0^2 \omega v}{6 \zeta^2 \left[(2 D_r + D_t \omega^2)^2 + v^2 \right]}\;.
\end{equation}
The plots in Fig.\ \ref{fig:vd_vs_vp_all} show that this approximation
(dashed curves) is reasonable only in the case of the large particles, which
display a sufficiently slow reaction to the incoming activity field so
that $\rho(\xi) \approx \rho_b$ remains valid. Yet, Eq.~\eqref{eq:vd_analyt}
provides us with a glimpse of how the underlying dynamics of the ABPs
enables the induction of drift: This function has its extremum at
\begin{equation}\label{eq:maxima}
\tilde{v} = 4D_r \sim b^{-3}\;,\;\;\;\; \tilde{\omega} =
\sqrt{\frac{2D_r}{D_t}}\sim b^{-1}\;,
\end{equation}
at which it reaches the induced drift velocity of
\begin{equation}
v_{d, \rm max} = \frac{\sqrt{2}}{48}\,
\frac{f_0^2}{\zeta^2 \sqrt{D_r D_t}} \sim b^0\;,
\end{equation}
where we used the fact that $\zeta \sim b$. The optimum choice for the velocity
of the traveling wave is determined by the rotational
diffusion and thus strongly dependent on the particle size. While
the $b^{-3}$-scaling of this approximation to the optimum
phase velocity definitely overestimates the simulation results,
for which the scaling is closer to $b^{-2}$, the ideal
angular frequency is inversely proportional to $b$ in
both approximation and simulation. The
peak drift velocity is not an explicit function of the particle
size, which is also supported by the simulations
(Fig.\ \ref{fig:vd_vs_vp_all}) and is thus not a spurious consequence of
the uniform-density approximation.
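The extremum quoted in Eq.~\eqref{eq:maxima} and the corresponding peak value can be verified directly against Eq.~\eqref{eq:vd_analyt}. A quick numerical sketch (Python with numpy assumed, $b=1$ parameters of Tab.\ \ref{tab:parameters}):

```python
import numpy as np

f0, zeta, D_t, D_r = 2.5, 10.0, 1 / 6, 0.5

def v_d(v, w):                         # approximate drift, Eq. (eq:vd_analyt)
    k = 2 * D_r + D_t * w**2
    return f0**2 * w * v / (6 * zeta**2 * (k**2 + v**2))

v_opt = 4 * D_r                        # predicted optimum phase velocity
w_opt = np.sqrt(2 * D_r / D_t)         # predicted optimum angular frequency
peak = np.sqrt(2) * f0**2 / (48 * zeta**2 * np.sqrt(D_r * D_t))
assert np.isclose(v_d(v_opt, w_opt), peak)

# a coarse grid scan confirms that (v_opt, w_opt) is the global maximum
V, W = np.meshgrid(np.linspace(0.01, 10, 300), np.linspace(0.01, 10, 300))
assert v_d(V, W).max() <= peak * (1 + 1e-9)
```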
In the laboratory, of course, the driving force $f_0$ is likely
to depend on the particle diameter, and in this way the
maximum achievable drift
velocities may turn out species-dependent.
\section{Separation of mixtures of ABPs with different sizes}
\label{sec:separation}
\begin{figure}[t]
\includegraphics[width=\columnwidth]{threesizes.ps}
\caption{A mixture of three ABP-species of different
diameters. Depending on the choices for the phase velocity $v$ and the angular
frequency $\omega = 2n\pi/L$ of the activity wave, either the large
particles (upper panel), the medium sized particles (center panel) or
the small particles (lower panel) are enriched near the wall.}
\label{fig:threesizes_b1}
\end{figure}
In this set of simulations, the box length was increased to $150$
in the z-direction, and its boundaries were fixed with short-range
repulsive walls. Three particle species were added, 500 each,
with parameters as summarized in Tab.\ \ref{tab:parameters}. No
interactions between particles were enabled, so that these are
pure averages of single particle ensembles. Then, activity waves
were sent through the system with positive phase velocities
(from the left to the right in Fig.\ \ref{fig:threesizes_b1}),
using parameters at which the solid curves of
Fig.\ \ref{fig:vd_vs_vp_all}, i.e.\ Eq.~\eqref{eq:vd}, had
their maxima.
As can be seen in Fig.\ \ref{fig:vd_vs_vp_all}, the different values
of induced drift velocities allow for a separation of the mixture
of particle species: Either one of the three species may be enriched
at the right hand wall, depending on the particular choice of the
activity wave. It is therefore possible to separate a mixture of
ABPs according to their diameter. In the present simulation, the
density distributions of the enriched species had turned stationary
after simulation times of roughly $10^4$, so that a continuation
of that procedure did not improve its selectivity any further.
\section{Validity of the linear response approach}\label{sec:activity}
The solution to Eq.~\eqref{fp} is based on a separation
of the time evolution operator $\Omega_{\rm a}(t)=\Omega_{\rm eq}+\delta\Omega_{\rm a}(t)$
into its (diffusive) equilibrium part and a second, activity
driven part, which is treated as a perturbation. This linear
response ansatz produces accurate dynamics as long as diffusion
dominates over driven motion.
To compare both contributions, we relate the persistence length
of ballistic motion, $l_b = v_b \tau_r = f\tau_r/\zeta$, to
the average diffusion distance $l_d = \sqrt{6D_t\tau_r}$ covered
during the same time interval $\tau_r$. As long as $l_b/l_d \ll 1$,
diffusion dominates the motion. In our present simulations,
the peak driving force reaches $f = 5$, at which
$l_b/l_d = 1/4$ ($b=1/2$), $l_b/l_d = 1/2$ ($b=1$) and
$l_b/l_d = 1$ ($b=2$), which indicates that the formalism
is likely to break down in case of the large particles,
as is obvious in Fig.\ \ref{fig:vd_vs_vp_all}.
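The quoted ratios follow from elementary arithmetic with the values of Tab.\ \ref{tab:parameters}; as a quick check (Python assumed as tooling):

```python
import math

f = 5.0                                  # peak driving force, f = f0 (1 + s)
rows = {0.5: (5, 1 / 8, 1 / 3), 1.0: (10, 1.0, 1 / 6), 2.0: (20, 8.0, 1 / 12)}
ratios = {}
for b, (zeta, tau_r, D_t) in rows.items():
    l_b = f * tau_r / zeta               # ballistic persistence length
    l_d = math.sqrt(6 * D_t * tau_r)     # diffusive distance during tau_r
    ratios[b] = l_b / l_d
assert math.isclose(ratios[0.5], 1 / 4)
assert math.isclose(ratios[1.0], 1 / 2)
assert math.isclose(ratios[2.0], 1.0)
```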
Diffusion-dominated dynamics is not uncommon for ABPs on
the sub-$100$~nm scale. As an example, we consider the
dynamics of Au-Pt Janus particles of diameter $30$ nm, as prepared
and analyzed by Lee et al.~\cite{lee_NanoLett14}: In a solution
of water with 2.5\% H$_2$O$_2$, these particles developed
a considerable ballistic speed of $2.2\cdot 10^4$ particle
diameters per second. However, due to their small diameters,
the rotational diffusion was rapid and allowed for a
persistent motion over a distance of only $l_b \approx 4.6$~nm
during the directional persistence time of
$\tau_r \approx 7~\mu$s. Given the diffusion coefficient
of the passive particle, $D_t \approx 0.013~{\rm nm^2\,ns^{-1}}$,
the distance of $l_d = \sqrt{6D_t \tau_r} \approx 23$~nm
was covered during one persistence time. This yields the ratio
$l_b/l_d \approx 1/5$, so that linear response theory may
safely be applied to this system.
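The estimate for the particles of Lee et al.\ reduces to elementary arithmetic on the numbers quoted in the text (Python assumed as tooling):

```python
import math

d = 30.0                                # particle diameter in nm
v = 2.2e4 * d                           # ballistic speed in nm/s
tau_r = 7e-6                            # persistence time in s
D_t = 0.013 * 1e9                       # nm^2/ns converted to nm^2/s

l_b = v * tau_r                         # persistence length, about 4.6 nm
l_d = math.sqrt(6 * D_t * tau_r)        # diffusive distance, about 23 nm
assert 4.5 < l_b < 4.8
assert 22 < l_d < 24
assert 0.15 < l_b / l_d < 0.25          # ratio of roughly 1/5
```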
We summarize that the linear response approximation
describes ABPs at activity levels that are comparable to or
smaller than their diffusive motion,
and may hence be a method of choice for small ABPs
with diameters in the sub-$0.1~\mu$m range, as well as
for larger particles which are only weakly driven.
\section{Summary}\label{sec:summary}
In the present work, a linear response (Green-Kubo)
approach has been applied to ABPs
that are exposed to time-varying activity fields. We
have demonstrated how the distribution of particle orientations,
their densities and the resulting induced fluxes are approximated,
and shown that these approximations agree closely with Langevin
dynamics simulations (Sections \ref{sec:orientation} --
\ref{sec:flux}).
The accuracy of linear response theory is guaranteed as long as
the contribution from the active motion remains weak, so that
the diffusive thermal motion dominates the dynamics. This
is naturally the case for small ABPs of sub-$100\,$nm diameter,
or for larger particles that are only weakly driven (Sec.\ \ref{sec:activity}).
Dynamic activity waves are capable of inducing fluxes into systems
of ABPs, and the efficiency of that coupling is a function of
the particle diameter, as is most easily seen in the
approximation~\eqref{eq:maxima}. Tuning the phase and angular
velocity of the wave allows the practitioner to efficiently drive
particles of selected diameters through the system, which might
be used for a controlled separation
of mixtures of ABPs according to their sizes (Sec.\ \ref{sec:separation}).
Within the accuracy of the formalism, linear response theory yields
the parameters of the activity waves which may be applied to facilitate
such directed transport at maximal efficiency.
\section{Introduction}
As an important concept representing the complexity of a dynamical system, the notion of chaos was first introduced by Li and Yorke in 1975 (\cite{Li-Yorke}), and has attracted a lot of attention. Other versions of chaos, such as Devaney chaos, positive entropy and weak mixing (see for example \cite{Devaney,Adler-Konheim-McAndrew,Furstenberg} for details), were proposed over the past few decades. The implications among these notions of chaos became a central topic as well.
In 1991, Iwanik showed that weak mixing implies Li-Yorke chaos (\cite{Iwanik}). Later, Huang and Ye (\cite{Huang-Ye}) proved that Devaney chaos implies Li-Yorke chaos. In the same year, it was shown by Blanchard et al. that positive entropy implies Li-Yorke chaos (see \cite{Blanchard-Glasner-Kolyada}); for the amenable group case see \cite{Huang-Xu-Yi,Kerr-Li,Wang-Zhang}. Moreover, the result also holds for sofic group actions, by Kerr and Li (\cite{Kerr-Li2}). In \cite{Downarowicz}, Downarowicz proved that positive topological entropy implies mean Li-Yorke chaos; see \cite{Huang-Li-Ye-Zhou} for another approach. See a recent survey \cite{Li-Ye} and the references therein for more results and details.
Ergodic theory has a long history of interaction with other mathematical fields and in particular with combinatorics and number theory. The seminal work of H. Furstenberg \cite{Furstenberg2}, where an ergodic proof of the theorem of Szemer\'edi \cite{Szemeredi} on arithmetic progressions was given, linked problems in ergodic theory, combinatorics, and number theory, and provided an ideal ground for cross-fertilization. Many
researchers were interested in multiple ergodic averages. To get manageable problems, one typically restricts the class of eligible sequences and usually assumes that they are polynomial sequences, sequences arising from smooth functions, sequences related to the
prime numbers, or random sequences of integers. One can see \cite{Rosenblatt-Wierdl} and the references therein for more results. In \cite{Leibman}, the author obtained a pointwise ergodic theorem for polynomial actions of $\Z^d$ by translations on a nilmanifold.
Inspired by the previous works, our aim in this paper is to investigate relationships between positive entropy and mean Li-Yorke chaos along polynomials of several variables and prime numbers. Precisely, throughout this paper, by a \emph{topological dynamical system} (TDS for short), we mean a pair $(X,T)$, where $X$ is a compact metric space and $T:X\to X$ is a homeomorphism, and by a \emph{probability space}, we mean a triple $(X,\mathscr{B}_X,\mu)$, where $(X,\mathscr{B}_X)$ is a standard Borel space and $\mu$ is a probability measure on $(X,\mathscr{B}_X)$. For a probability space $(X,\mathscr{B}_X,\mu)$, if $T:X\to X$ is an invertible measure preserving transformation, by a \emph{measure preserving system}, we mean the quadruple $(X,\mathscr{B}_X,\mu,T)$.
\begin{thm}\label{mainthem1}Let $s\in\N$, $P:\Z^s\to\Z$ be a non-constant polynomial, $\{\Phi_n\}_{n=1}^{\infty}$ a F\o lner sequence of $\Z^s$, and $(X,T)$ be a TDS with $h_{top}(X,T)>0$. Then there exists a Cantor subset $C$ of $X$ such that for every pair of distinct points $x,y\in C$ one has
$$\left\{
\begin{aligned}
\limsup\limits_{N\to\infty}\frac{1}{|\Phi_N|}\sum\limits_{u\in\Phi_N}\rho(T^{P(u)}x,T^{P(u)}y)>0, \\
\liminf\limits_{N\to\infty}\frac{1}{|\Phi_N|}\sum\limits_{u\in\Phi_N}\rho(T^{P(u)}x,T^{P(u)}y)=0.
\end{aligned}
\right.
$$
\end{thm}
For any $N\in \N$, let $\pi(N)$ denote the number of primes less than or equal to $N$. The authors of \cite{Frantzikinakis-Host-Kra} studied multiple recurrence and convergence for sequences related to the prime numbers. Based on those results, we have the following theorem.
\begin{thm}\label{mainthm2}Let $s\in\N$, $P:\Z^s\to\Z$ be a non-constant polynomial, and $(X,T)$ be a TDS with $h_{top}(X,T)>0$. Then there exists a Cantor subset $C$ of $X$, such that for every distinct $x,y\in C$ we have
$$\left\{
\begin{aligned}
\limsup\limits_{N\to\infty}\frac{1}{\pi(N)^s}\sum\limits_{\mbox{\tiny$\begin{array}{c}
1\leq p_1,\dotsc, p_s\leq N,\\
p_1,\dotsc,p_s\in\p\end{array}$}}\rho(T^{P(p_1,\dotsc,p_s)}x,T^{P(p_1,\dotsc,p_s)}y)>0, \\
\liminf\limits_{N\to\infty}\frac{1}{\pi(N)^s}\sum\limits_{\mbox{\tiny$\begin{array}{c}
1\leq p_1,\dotsc, p_s\leq N,\\
p_1,\dotsc,p_s\in\p\end{array}$}}\rho(T^{P(p_1,\dotsc,p_s)}x,T^{P(p_1,\dotsc,p_s)}y)=0.
\end{aligned}
\right.
$$
\end{thm}
This paper is organized as follows. In Section 2, we list basic notions and results needed in our arguments. In Section 3, we study some properties of ergodic averages along a non-constant polynomial. In Section 4, we prove Theorem \ref{mainthem1}. Finally, we consider mean Li-Yorke chaos along polynomials of prime numbers and prove Theorem \ref{mainthm2} in Section 5.
\section{Preliminaries}
In this section, we will review some basic notions and fundamental properties that will be used later.
\subsection{Amenable groups and Banach density in an amenable group}
Recall that a countable discrete group $G$ is called \emph{amenable} if there exists a sequence of finite subsets $\Phi_n\subset G$ such that $\lim\limits_{n\to+\infty}\frac{|g\Phi_n\triangle \Phi_n|}{|\Phi_n|}=0$ holds for every $g\in G$, and we say that such a sequence $\{\Phi_n\}_{n=1}^{+\infty}$ is a \emph{F\o lner sequence of $G$}. It is clear that $\Z^s$ is an amenable group for any $s\in\N$.
Let $G$ be an amenable group, and $\{\Phi_n\}_{n=1}^{+\infty}$ be a F\o lner sequence of $G$. For any given subset $F$ of $G$, we denote the \emph{upper density of $F$ with respect to $\{\Phi_n\}_{n=1}^{+\infty}$} by
$$\overline{d}_{\{\Phi_n\}}(F):=\limsup\limits_{n\to+\infty}\frac{\big\vert F\cap\Phi_n\big\vert}{\vert\Phi_n\vert}.$$
The \emph{upper Banach density of $F$} is defined by
$$d^*(F):=\sup\big\{\overline{d}_{\{\Phi_n\}}(F):\ \{\Phi_n\}_{n=1}^{+\infty}\ \text{is a F\o lner sequence}\big\}.$$
For $G=\Z^s$, $s\in\N$, the above definition differs from the original definition of upper Banach density, where the supremum is taken only over intervals instead of over arbitrary F\o lner sequences. However, the authors of \cite{Beiglock-Bergelson-Fish} proved that the two notions are equivalent (see \cite[Lemma 3.3]{Beiglock-Bergelson-Fish}).
\begin{lem}\label{density} Let $G$ be an amenable group, $\{\Phi_n\}_{n=1}^{+\infty}$ be a F\o lner sequence of $G$, and $F\subseteq G$. Then there exists a sequence $\{t_{n}\}_{n=1}^{+\infty}\subset G$ such that $d^*(F)=\overline{d}_{\{\Phi_nt_n\}}(F)$.
\end{lem}
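For $G=\Z$ with the F\o lner sequence $\Phi_n=\{0,1,\dotsc,n\}$, the definition of $\overline{d}_{\{\Phi_n\}}$ can be probed numerically. The following sketch (the helper name is ours, and a finite maximum over large $n$ stands in for the $\limsup$) recovers the familiar values: the even numbers have upper density $\frac12$, while the perfect squares have upper density $0$:

```python
# Toy illustration of upper density along Phi_n = {0, 1, ..., n} in G = Z.
def upper_density(F, n_max=10_000):
    """Max of |F ∩ Phi_n| / |Phi_n| over large n, a finite stand-in for limsup."""
    best = 0.0
    for n in range(n_max // 2, n_max + 1, 100):
        count = sum(1 for g in range(n + 1) if g in F)
        best = max(best, count / (n + 1))
    return best

evens = {2 * k for k in range(20_000)}
squares = {k * k for k in range(200)}
print(upper_density(evens))    # close to 1/2
print(upper_density(squares))  # close to 0
```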
\subsection{Disintegration of measures}For a TDS $(X,T)$, we denote the collection of all Borel probability measures of $X$ by $\mathcal{M}(X)$, the collection of all $T$-invariant Borel probability measures of $X$ by $\mathcal{M}(X,T)$, and the collection of all ergodic measures of $(X,T)$ by $\mathcal{M}^e(X,T)$. We now recall the main results and properties of conditional expectation and disintegration of
measures. We refer to \cite[Chapter 5]{Einsiedler-Ward} for more details.
Let $(X,\mathscr{B}_X,\mu)$ be a probability space, and $\mathscr{A}\subseteq\mathscr{B}_X$ a sub-$\sigma$-algebra. Then there is a map
$$E(\cdot|\mathscr{A}):L^1(X,\mathscr{B}_X,\mu)\to L^1(X,\mathscr{A},\mu)$$
called the \emph{conditional expectation}, that satisfies the following properties.
\begin{enumerate}
\item For $f\in L^1(X,\mathscr{B}_X,\mu)$, the image function $E(f|\mathscr{A})$ is characterized almost everywhere by the two properties:
\begin{itemize}
\item $E(f|\mathscr{A})$ is $\mathscr{A}$-measurable;
\item for any $A\in\mathscr{A}$, $\int_A E(f|\mathscr{A})d\mu=\int_A fd\mu$.
\end{itemize}
\item $E(\cdot|\mathscr{A})$ is a linear operator of norm 1. Moreover, $E(\cdot|\mathscr{A})$ is positive.
\item For $f\in L^1(X,\mathscr{B}_X,\mu)$ and $g\in L^{\infty}(X,\mathscr{A},\mu)$,
$$E(g\cdot f|\mathscr{A})=g\cdot E(f|\mathscr{A})$$
almost everywhere.
\item If $\mathscr{A}'\subseteq\mathscr{A}$ is a sub-$\sigma$-algebra, then
$$E\big(E(f|\mathscr{A})\big|\mathscr{A}'\big)=E(f|\mathscr{A}')$$
almost everywhere.
\end{enumerate}
The following result is well known (see e.g. \cite[Theorem 14.26]{Glasner}, \cite[Section 5.2]{Einsiedler-Ward}).
\begin{thm}\label{function limits}Let $(X,\mathscr{B}_X,\mu)$ be a probability space. Suppose that $\{\mathscr{A}_n\}_{n=1}^{\infty}$ is a decreasing sequence (resp. an increasing sequence) of sub-$\sigma$-algebras of $\mathscr{B}_X$ and $\mathscr{A}=\bigcap\limits_{n\geq1}\mathscr{A}_n$ (resp. $\mathscr{A}=\bigvee\limits_{n\geq1}\mathscr{A}_n$). Then for any $f\in L^1(\mu)$,
$$E(f|\mathscr{A}_n)\to E(f|\mathscr{A})$$
as $n\to\infty$ in $L^1(\mu)$ and $\mu$-almost everywhere.
\end{thm}
Let $(X,\mathscr{B}_X,\mu)$ be a Borel probability space, and $\mathscr{A}\subseteq\mathscr{B}_X$ a $\sigma$-algebra. Then $\mu$ can \emph{be disintegrated over $\mathscr{A}$} as
\[
\mu=\int_X \mu_x^{\mathscr{A}} d\mu(x)
\]
in the sense that for any $f\in L^1(X,\mathscr{B}_X,\mu)$, one has
\begin{equation}
E(f|\mathscr{A})(x)=\int f(y)d\mu_x^{\mathscr{A}}(y)\quad
\text{for }\mu\text{-a.e.\ }x\in X,\label{eq:Ef-mux}
\end{equation}
where $\mu_x^{\mathscr{A}}\in \mathcal{M}(X)$.
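A standard example may help fix ideas (included here for orientation, not part of the original argument): take $X=[0,1]^2$ with Lebesgue measure $\mu$ and let $\mathscr{A}$ be the $\sigma$-algebra generated by the first coordinate. Then one may take
\[
\mu_{(x_1,x_2)}^{\mathscr{A}}=\delta_{x_1}\times\mathrm{Leb}_{[0,1]},\qquad
E(f|\mathscr{A})(x_1,x_2)=\int_0^1 f(x_1,t)\,dt,
\]
so that \eqref{eq:Ef-mux} reduces to Fubini's theorem.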
\subsection{Entropy}
Let $(X,\mathscr{B}_X,\mu,T)$ be a measure preserving system. A partition of $X$ is a cover of $X$, whose elements are pairwise disjoint. For a finite measurable partition $\alpha$, the \emph{measure-theoretic entropy of $\mu$ relative to $\alpha$}, denoted by $h_{\mu}(T,\alpha)$, is defined as
$$h_{\mu}(T,\alpha)=\lim_{n\to\infty}\frac{1}{n}H_{\mu}
\biggl(\bigvee_{i=0}^{n-1}T^{-i}\alpha\biggr),$$
where $H_{\mu}(\alpha)=-\sum\limits_{A\in\alpha}\mu(A)\log \mu(A)$. The \emph{measure-theoretic entropy of $\mu$} is defined as
$$h_{\mu}(X,T)=\sup\limits_{\alpha}h_{\mu}(T,\alpha),$$
where the supremum ranges over all finite measurable partitions of $X$.
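As a numerical illustration of the definition (ours, not part of the paper): for the full $2$-shift with the $(\frac12,\frac12)$-Bernoulli measure and the partition $\alpha$ into the two cylinders of length one, $H_{\mu}\bigl(\bigvee_{i=0}^{n-1}T^{-i}\alpha\bigr)=n\log 2$, so $h_{\mu}(T,\alpha)=\log 2$. An empirical block-entropy estimate from a simulated orbit recovers this value:

```python
import math, random
from collections import Counter

random.seed(0)
N, k = 200_000, 8                       # sample length, block length
bits = [random.randrange(2) for _ in range(N)]

# Empirical entropy of k-blocks divided by k estimates h_mu(T, alpha) = log 2.
blocks = Counter(tuple(bits[i:i + k]) for i in range(N - k + 1))
total = sum(blocks.values())
H_k = -sum(c / total * math.log(c / total) for c in blocks.values())
print(H_k / k, math.log(2))   # the two values are close
```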
The Pinsker $\sigma$-algebra of a system $(X,\mathscr{B}_X,\mu,T)$ is defined as
$$P_{\mu}(T)=\{A\in\mathscr{B}_X: h_{\mu}(T,\{A,X\backslash A\})=0\}.$$
The following Rokhlin-Sinai theorem identifies the Pinsker $\sigma$-algebra as the ``remote past'' of a generating partition (see \cite{Rokhlin-Sinai}).
\begin{thm}\label{Rokhlin-Sinai}Let $(X,\mathscr{B}_X,\mu,T)$ be a measure preserving system. Then there exists a sub-$\sigma$-algebra $\mathscr{P}$ of $\mathscr{B}_X$ such that $T^{-1}\mathscr{P}\subset\mathscr{P}$, $\bigvee\limits_{k=0}^{\infty}T^k\mathscr{P}=\mathscr{B}_X$ and $\bigcap\limits_{k=0}^{\infty}T^{-k}\mathscr{P}=P_{\mu}(T).$
\end{thm}\section{Ergodic averages along a non-constant polynomial}
In this section, partially following the arguments in \cite{Li-Qiao}, we study ergodic averages along a non-constant polynomial. For a measure preserving system $(X,\mathscr{B}_X,\mu,T)$ and a measurable function $f$, we write $Tf$ for the function defined by $(Tf)(x)=f(Tx)$. In \cite{Leibman}, the author proved the following result.
\begin{thm}\label{Leibman}Let $(X,\mathscr{B}_X,\mu,T)$ be a measure preserving system, $s\in\N$ and $P:\Z^s\to\Z$ be a polynomial. Then for any $f\in L^{\infty}(\mu)$ and any F\o lner sequence $\{\Phi_N\}_{N=1}^{\infty}$ of $\Z^s$, the averages
$$\frac{1}{|\Phi_N|}\sum\limits_{u\in\Phi_{N}}T^{P(u)}f$$
converge in $L^2(\mu)$ as $N\to\infty$.
\end{thm}
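A quick sanity check of Theorem \ref{Leibman} in the simplest setting (our own illustration, not from \cite{Leibman}): for the irrational rotation $Tx=x+\alpha \pmod 1$ with Lebesgue measure, $s=1$, $P(u)=u^2$, $\Phi_N=\{1,\dotsc,N\}$ and $f(x)=\cos(2\pi x)$, the averages converge to $\int f\,d\mu=0$, since $(u^2\alpha)_u$ equidistributes modulo one by Weyl's theorem:

```python
import math

alpha = math.sqrt(2)          # irrational rotation number
x0 = 0.1                      # starting point

def average(N, P=lambda u: u * u):
    """(1/|Phi_N|) * sum_{u in Phi_N} f(T^{P(u)} x0) with f(x) = cos(2*pi*x)."""
    return sum(math.cos(2 * math.pi * ((x0 + P(u) * alpha) % 1.0))
               for u in range(1, N + 1)) / N

for N in (100, 1_000, 10_000):
    print(N, average(N))      # tends to 0 as N grows
```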
\begin{prop}\label{disintegration} Let $s\in\N$, $P:\Z^s\to\Z$ be a non-constant polynomial and $\{\Phi_N\}_{N=1}^{\infty}$ be a F\o lner sequence of $\Z^s$. Then for any measure preserving system $(X,\mathscr{B}_X,\mu,T)$, there exists a disintegration of $\mu$
$$\mu=\int \tau_xd\mu(x),$$
in the sense that, there exists a subset $X_0\in\mathscr{B}_X$ of $X$ with $\mu(X_0)=1$, and $\{N_i\}_{i=1}^{\infty}\subset\N$ such that for any $f\in C(X)$ and $x\in X_0$ one has
$$\lim\limits_{i\to\infty}\frac{1}{|\Phi_{N_i}|}\sum_{u\in\Phi_{N_i}}f(T^{P(u)}x)=\int fd\tau_x,$$
and
$$\int\int fd\tau_xd\mu(x)=\int fd\mu.$$
\end{prop}
\begin{proof}Let $\{g_n\}_{n=1}^{\infty}$ be a dense subset of $C(X)$. For $g_1$, by Theorem \ref{Leibman}, there exist an increasing sequence $\{N_i^1\}_{i=1}^{\infty}\subset\N$ and $X_1\in\mathscr{B}_X$ with $\mu(X_1)=1$ such that for every $x\in X_1$ the averages
$$\frac{1}{|\Phi_{N_i^1}|}\sum\limits_{u\in\Phi_{N_i^1}}T^{P(u)}g_1(x)$$
converge as $i\to\infty$. Continuing this process, we obtain $\{N_i^{k+1}\}_{i=1}^{\infty}\subset\{N_i^k\}_{i=1}^{\infty}$ and $X_{k+1}\subset X_k$ with $\mu(X_{k+1})=1$ such that for every $x\in X_{k+1}$ the averages
$$\frac{1}{|\Phi_{N_i^{k+1}}|}\sum\limits_{u\in\Phi_{N_i^{k+1}}}T^{P(u)}g_{k+1}(x)$$
converge as $i\to\infty$, for $k=1,2,\dotsc$. Let $N_i=N_i^i$ for every $i\in\N$ and $X_{\infty}=\bigcap_{i=1}^{\infty}X_i$; then $\mu(X_{\infty})=1$ and for every $x\in X_{\infty}$ and $k\in\N$ the averages
$$\frac{1}{|\Phi_{N_i}|}\sum\limits_{u\in\Phi_{N_i}}T^{P(u)}g_k(x)$$
converge as $i\to\infty$. Let $X_0=X_{\infty}\cap \supp(\mu)$; then $\mu(X_0)=1$.
For every $f\in C(X)$, there exists $\{f_j\}_{j=1}^{\infty}\subset\{g_i\}_{i=1}^{\infty}$ such that $\Vert f_j-f\Vert_{L^{\infty}}\to 0$ as $j\to\infty$. For any $x_0\in X_0$ and $\varepsilon>0$ there exists $j_0$ such that $\Vert f_{j_0}-f\Vert_{L^{\infty}}\leq\varepsilon$. Since $f_{j_0}$ and $f$ are continuous and $X_0\subseteq \supp(\mu)$, one has $T^nx_0\notin\{x\in X: \vert f_{j_0}(x)-f(x)\vert>\varepsilon\}$ for every $n\in\Z$. For $j_0$ there exists $i_0\in\N$ such that $\big\vert\frac{1}{|\Phi_{N_i}|}\sum\limits_{u\in\Phi_{N_i}}T^{P(u)}
f_{j_0}(x_0)-\frac{1}{|\Phi_{N_m}|}\sum\limits_{u\in\Phi_{N_m}}T^{P(u)}f_{j_0}(x_0)\big\vert<\varepsilon$ for every $i,m>i_0$. Then by the triangle inequality one has
$$\bigg\vert\frac{1}{|\Phi_{N_i}|}\sum\limits_{u\in\Phi_{N_i}}T^{P(u)}
f(x_0)-\frac{1}{|\Phi_{N_m}|}\sum\limits_{u\in\Phi_{N_m}}T^{P(u)}f(x_0)\bigg\vert<3\varepsilon$$
for every $i,m>i_0$. Thus the averages
$$\frac{1}{|\Phi_{N_i}|}\sum\limits_{u\in\Phi_{N_i}}T^{P(u)}f(x)$$
converge as $i\to\infty$ for every $f\in C(X)$ and $x\in X_0$.
For every $x\in X_0$, we set $L_x:C(X)\to\R$, $f\mapsto\lim\limits_{i\to\infty}\frac{1}{|\Phi_{N_i}|}\sum\limits_{u\in\Phi_{N_i}}T^{P(u)}f(x)$. Then $L_x$ is a positive linear functional with $L_x(1)=1$. By the Riesz representation theorem, there exists $\tau_x\in \mathcal{M}(X)$ such that for any $f\in C(X)$ one has $L_x(f)=\int fd\tau_x$. Moreover,
\begin{align*}
\int\int fd\tau_xd\mu(x)&=\int\lim\limits_{i\to\infty}\frac{1}{|\Phi_{N_i}|}\sum\limits_{u\in\Phi_{N_i}}T^{P(u)}f(x)d\mu(x)\\
&=\lim\limits_{i\to\infty}\frac{1}{|\Phi_{N_i}|}\sum\limits_{u\in\Phi_{N_i}}\int T^{P(u)}f(x)d\mu(x)\\
&=\int fd\mu,
\end{align*}
where the second equality holds by the dominated convergence theorem, since $f$ is bounded. This ends the proof.
\end{proof}
\begin{thm}\label{characteristic}Let $s\in\N$, $P:\Z^s\to\Z$ be a non-constant polynomial, $\{\Phi_N\}_{N=1}^{\infty}$ be a F\o lner sequence of $\Z^s$, and $(X,\mathscr{B}_X,\mu,T)$ be a measure preserving system with the Pinsker $\sigma$-algebra $P_{\mu}(T)$. Then for any $f\in L^{\infty}(\mu)$, one has
$$\frac{1}{|\Phi_N|}\sum\limits_{u\in\Phi_N}T^{P(u)}f-\frac{1}{|\Phi_{N}|}\sum\limits_{u\in\Phi_N}T^{P(u)}E(f|P_{\mu}(T))\to 0,$$
as $N\to\infty$ in $L^2(\mu)$.
\end{thm}
\begin{proof}By Theorem \ref{Rokhlin-Sinai}, there exists a sub-$\sigma$-algebra $\mathscr{P}$ of $\mathscr{B}_X$ such that $T^{-1}\mathscr{P}\subset\mathscr{P}$, $\bigvee\limits_{k=0}^{\infty}T^k\mathscr{P}=\mathscr{B}_X$ and $\bigcap\limits_{k=0}^{\infty}T^{-k}\mathscr{P}=P_{\mu}(T)$. Let $f\in L^{\infty}(\mu)$ be given; firstly, we assume that $f$ is $\mathscr{P}$-measurable. Since $f\in L^{\infty}(\mu)$, without loss of generality, we can assume $\Vert f\Vert_{\infty}\leq 1$. Let $f^{\infty}$ denote $E(f|P_{\mu}(T))$ and $f^n$ denote $E(f|T^{-n}\mathscr{P})$ for $n\in\N$. For any $0<\varepsilon<\frac{1}{2}$, by Theorem \ref{function limits}, there exists $m\in\N$ such that $\Vert f^m-f^{\infty}\Vert_{L^2(\mu)}<\varepsilon$. Then for every $N\in\N$
\begin{footnotesize}
\begin{align}\label{e1}
&\bigg\Vert\frac{1}{|\Phi_N|}\sum\limits_{u\in\Phi_N}T^{P(u)}f-\frac{1}{|\Phi_N|}\sum\limits_{u\in\Phi_{N}}T^{P(u)}f^{\infty} \bigg\Vert_{L^2(\mu)}\nonumber\\
&\leq\bigg\Vert\frac{1}{|\Phi_N|}\sum\limits_{u\in\Phi_N}T^{P(u)}f-\frac{1}{|\Phi_{N}|}\sum\limits_{u\in\Phi_{N}}T^{P(u)}f^{m} \bigg\Vert_{L^2(\mu)}+
\bigg\Vert\frac{1}{|\Phi_N|}\sum\limits_{u\in\Phi_N}T^{P(u)}(f^m-f^{\infty}) \bigg\Vert_{L^2(\mu)}\nonumber\\
&\leq\bigg\Vert\frac{1}{|\Phi_N|}\sum\limits_{u\in\Phi_N}T^{P(u)}f-\frac{1}{|\Phi_N|}\sum\limits_{u\in\Phi_N}T^{P(u)}f^{m} \bigg\Vert_{L^2(\mu)}+\varepsilon.
\end{align}
\end{footnotesize}
Let $E=\{(u,v)\in\Z^s\times\Z^s\colon|P(u)-P(v)|\leq m\}$. By Lemma \ref{density}, for the F\o lner sequence $\{\Phi_n':=[-n,n]^{s}\times[-n,n]^s\}$ there exists $\{t_n\}\subset\Z^{2s}$ such that $d^*(E)=\overline{d}_{\{\Phi_n'+t_n\}}(E)$. For any $n\in\N$, we have $\big\vert E\cap(\Phi_n'+t_n)\big\vert\leq (2n+1)^{(2s-1)}(2m+1)k$, where $k$ is the degree of $P$. Then
$$d^*(E)=\overline{d}_{\{\Phi_n'+t_n\}}(E)\leq\limsup\limits_{n\to\infty}\frac{(2n+1)^{(2s-1)}(2m+1)k}{(2n+1)^{2s}}= 0.$$
This means $\overline{d}_{\{\Phi_n\times\Phi_n\}}(E)=0$. Thus there exists $N_1\in\N$ such that $\frac{|E\cap(\Phi_N\times\Phi_N)|}{|\Phi_N|^2}<\frac{\varepsilon^2}{4}$ holds for any $N>N_1$. Then for every $N>N_1$ we have
\begin{align}\label{e2}
&\bigg\Vert\frac{1}{|\Phi_N|}\sum\limits_{u\in\Phi_N}T^{P(u)}f-\frac{1}{|\Phi_N|}\sum\limits_{u\in\Phi_N}T^{P(u)}f^{m} \bigg\Vert_{L^2(\mu)}^2\nonumber\\
&=\frac{1}{|\Phi_{N}|^2}\sum\limits_{u,v\in\Phi_N}\int(T^{P(u)}f-T^{P(u)}f^m)(T^{P(v)}f-T^{P(v)}f^m) d\mu\nonumber\\
&=\frac{1}{|\Phi_{N}|^2}\sum\limits_{u,v\in\Phi_N}\bigl(A_{uv}+B_{uv}-C_{uv}-D_{uv}\bigr),
\end{align}
where
$$A_{uv}=\int T^{P(u)}f\cdot T^{P(v)}fd\mu,$$
$$B_{uv}=\int T^{P(u)}f^m\cdot T^{P(v)}f^{m}d\mu,$$
$$C_{uv}=\int T^{P(u)}f\cdot T^{P(v)}f^md\mu,$$
$$D_{uv}=\int T^{P(u)}f^m\cdot T^{P(v)}fd\mu.$$
For $(u,v)\in(\Phi_N\times\Phi_N)\setminus E$, if $P(u)>P(v)+m$, we have
\begin{small}
\begin{align*}
A_{uv}&=\int E(T^{P(u)-P(v)}f\cdot f\big|T^{-m}\mathscr{P})d\mu\\
&=\int T^{P(u)-P(v)}f\cdot E(f\big|T^{-m}\mathscr{P})d\mu\\
&=C_{uv},
\end{align*}
\end{small}
and
\begin{small}
\begin{align*}
D_{uv}&=\int E\left(T^{P(u)}f^m\cdot T^{P(v)}f\big|T^{-\left(P(v)+m\right)}\mathscr{P}\right)d\mu\\
&=\int T^{P(u)}f^m\cdot E\left(T^{P(v)}f\big|T^{-(P(v)+m)}\mathscr{P}\right)d\mu\\
&=\int T^{P(u)}f^m\cdot T^{P(v)}f^m\,d\mu=B_{uv}.
\end{align*}
\end{small}
Similarly, when $P(v)>P(u)+m$ we have $A_{uv}=D_{uv}$ and $B_{uv}=C_{uv}$. Moreover, since $\Vert f\Vert_{\infty}=1$, we have $|A_{uv}|$, $|B_{uv}|$, $|C_{uv}|$, $|D_{uv}|\leq 1$. Thus we have
\begin{footnotesize}
\begin{align*}
(\ref{e2})&\leq\frac{1}{|\Phi_N|^2}\left(\bigg|\sum\limits_{(u,v)\in F_N}\left(A_{uv}+B_{uv}-C_{uv}-D_{uv}\right)\bigg|+\sum\limits_{(u,v)\in E_N}\big\vert A_{uv}+B_{uv}-C_{uv}-D_{uv}\big\vert\right)\\
&=\frac{1}{|\Phi_N|^2}\sum\limits_{(u,v)\in E_N}\big\vert A_{uv}+B_{uv}-C_{uv}-D_{uv}\big\vert\\
&\leq\frac{4|E_N|}{|\Phi_N|^2}\leq\varepsilon^2,
\end{align*}
\end{footnotesize}
where $F_N=(\Phi_N\times\Phi_N)\setminus E$ and $E_N=E\cap(\Phi_N\times\Phi_N)$; the sum over $F_N$ vanishes by the identities above. Then $(\ref{e1})\leq 2\varepsilon$ for any $N>N_1$. Since $\varepsilon$ is arbitrary, the conclusion holds for
$\mathscr{P}$-measurable functions in $L^{\infty}(\mu)$, and also for $T^r\mathscr{P}$-measurable functions, $r\in\N$, since $\mu$ is $T$-invariant. For a general function $f\in L^{\infty}(\mu)$, there exist $T^k\mathscr{P}$-measurable functions $f_{k}\in L^{\infty}(\mu)$ which satisfy the conclusion and converge to $f$ as $k\to\infty$, and hence the conclusion holds for $f$ as well. Thus the result holds for all functions in $L^\infty(\mu)$.
\end{proof}
\begin{lem}\label{tauup-mu}Let $s\in\N$, $P:\Z^s\to\Z$ be a non-constant polynomial, $(X,\mathscr{B}_X,\mu,T)$ be a measure preserving system with the Pinsker $\sigma$-algebra $P_{\mu}(T)$, and $\mu=\int \mu_xd\mu(x)$ be the disintegration of $\mu$ over $P_{\mu}(T)$. Then for the disintegration of $\mu$ as in Proposition \ref{disintegration}, $\mu=\int \tau_xd\mu(x)$, and every $f\in C(X)$ we have
$$\int fd\tau_x=\int\int fd\tau_yd\mu_x(y)$$
for $\mu$-a.e. $x\in X$.
\end{lem}
\begin{proof}Let $\{\Phi_n\}_{n=1}^{\infty}$ be a F\o lner sequence of $\Z^s$. By Proposition \ref{disintegration}, there exists $\{N_i\}_{i=1}^{\infty}\subseteq\N$ such that
$$\frac{1}{|\Phi_{N_i}|}\sum\limits_{u\in\Phi_{N_i}}f(T^{P(u)}x)\to\int fd\tau_x$$
as $i\to\infty$ for $\mu$-a.e. $x\in X$. Combining this with Theorem \ref{characteristic} we have
$$\frac{1}{|\Phi_{N_i}|}\sum\limits_{u\in\Phi_{N_i}}E(f|P_{\mu}(T))(T^{P(u)}x)\to\int fd\tau_x$$
in $L^2(\mu)$ as $i\to\infty$. Thus $\int fd\tau_x$ is $P_{\mu}(T)$-measurable, which means
$$\int fd\tau_x=E\Big(\int fd\tau_y\Big|P_{\mu}(T)\Big)(x)=\int\int fd\tau_yd\mu_x(y).$$
\end{proof}
\section{Proof of Theorem \ref{mainthem1}}
In this section, we will give the proof of Theorem \ref{mainthem1}. To begin with, we introduce some definitions and properties.
Let $(X,T)$ be a topological dynamical system with the metric $\rho$. For a point $x\in X$, the \emph{stable set of $x$} is defined as
$$W^s(x,T)=\{y\in X: \lim\limits_{k\to\infty}\rho(T^kx,T^ky)=0\},$$
and the \emph{unstable set of $x$} is defined as
$$W^u(x,T)=\{y\in X:\lim\limits_{k\to\infty}\rho(T^{-k}x,T^{-k}y)=0\}.$$
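A standard example to keep in mind (included here for orientation, not part of the original argument): for the hyperbolic automorphism $T$ of $\mathbb{T}^2$ induced by the matrix
\[
A=\begin{pmatrix}2&1\\1&1\end{pmatrix},
\]
whose eigenvalues are $\lambda_{\pm}=(3\pm\sqrt{5})/2$, the stable set $W^s(x,T)$ is the line through $x$ in the direction of the eigenvector of the contracting eigenvalue $\lambda_{-}<1$, and $W^u(x,T)$ is the line in the expanding direction; since these directions have irrational slope, both sets are dense in $\mathbb{T}^2$.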
In \cite{Huang-Xu-Yi}, the authors showed the following results.
\begin{thm}\label{W^s(x,T)dense}(\cite[proof of Theorem 1]{Huang-Xu-Yi}) Let $(X,\mathscr{B},\mu,T)$ be an ergodic system with $h_{\mu}(T)>0$, $P_{\mu}(T)$ be the Pinsker $\sigma$-algebra of $(X,\mathscr{B},\mu,T)$ and $\mu=\int\mu_xd\mu(x)$ be the disintegration of $\mu$ over $P_{\mu}(T)$. Then for $\mu$-a.e. $x\in X$ one has
$$\overline{W^s(x,T)\cap \supp(\mu_x)}=\supp(\mu_x)\quad\text{and}\quad \overline{W^u(x,T)\cap \supp(\mu_x)}=\supp(\mu_x).$$
\end{thm}
Let $(X,\mathscr{B}_X,\mu)$ be a Borel probability space, $\mathscr{A}\subseteq\mathscr{B}_X$ a $\sigma$-algebra, and $\mu=\int \mu_x^{\mathscr{A}} d\mu(x)$ be the disintegration of $\mu$ over $\mathscr{A}$. Then $\mu\times_{\mathscr{A}}\mu$ is the measure on $(X\times X,\mathscr{B}_X\times\mathscr{B}_X)$ defined by
\[\mu\times_{\mathscr{A}}\mu(A)=\int \mu_x^{\mathscr{A}}\times\mu_x^{\mathscr{A}}(A)d\mu(x).
\]
The following theorem is a classic result (see \cite[Theorem 0.4(iii)]{Danilenko} and \cite[Lemma 4.2]{Huang-Xu-Yi}; see also
\cite[Theorem 4]{Glasner-Thouvenot-Weiss} for free actions).
\begin{thm}\label{Pinsker algebra}Let $(X,\mathscr{B},\mu,T)$ be an ergodic system with the Pinsker $\sigma$-algebra $P_{\mu}(T)$. If $\lambda=\mu\times_{P_{\mu}(T)}\mu$ and $\pi:X\times X\to X$ is the canonical projection to the first factor, then $P_{\lambda}\left(T\times T\vert \pi^{-1}P_{\mu}(T)\right)=\pi^{-1}(P_{\mu}(T))$ (mod $\lambda$).
\end{thm}
Note that, under the above settings, for every $A\in P_{\lambda}\left(T\times T\vert\pi^{-1}(P_{\mu}(T))\right)$, there exists $A_0\in P_{\mu}(T)$ such that $A=\pi^{-1}(A_0)=A_0\times X$ (mod $\lambda$). Then one has
\begin{align*}
h_{\lambda}(T\times T,\{A,(X\times X)\setminus A\})&=h_{\lambda}(T\times T, \{A_0\times X, (X\setminus A_0)\times X\})\\
&=h_{\mu}(T,\{A_0, X\setminus A_0\})=0.
\end{align*}
This means $A\in P_{\lambda}(T\times T)$. Thus $P_{\lambda}\left(T\times T\right)=\pi^{-1}(P_{\mu}(T))$(mod $\lambda$).
Now we are ready to give the proof of Theorem \ref{mainthem1}.
\begin{proof}[Proof of Theorem \ref{mainthem1}]Since $h_{top}(X,T)>0$, there exists $\mu\in \mathcal{M}^e(X,T)$ such that $h_{\mu}(X,T)>0$. Let $P_{\mu}(T)$ be the Pinsker $\sigma$-algebra and $\mu=\int \mu_xd\mu(x)$ be the disintegration of $\mu$ over $P_{\mu}(T)$. We set $\lambda= \mu\times_{P_{\mu}(T)}\mu$; then for the measure preserving system $(X\times X,\mathscr{B}_X\times\mathscr{B}_X,\lambda,T\times T)$, by Proposition \ref{disintegration}, there exist $\{N_i\}_{i=1}^{\infty}$ and a disintegration $\lambda=\int \tau_zd\lambda(z)$ of $\lambda$, such that
$$\frac{1}{|\Phi_{N_i}|}\sum\limits_{u\in\Phi_{N_i}}f(T^{P(u)}x_1,T^{P(u)}x_2)\to\int fd\tau_{(x_1,x_2)}$$
holds for every $f\in C(X\times X)$ and $\lambda$-a.e. $(x_1,x_2)$.
By Theorem \ref{Pinsker algebra}, we know the Pinsker $\sigma$-algebra of $(X\times X,\mathscr{B}_X\times \mathscr{B}_X,\lambda, T\times T)$ is $\pi^{-1}(P_{\mu}(T))$, where $\pi:X\times X\to X$, $(x_1,x_2)\to x_1$ is the canonical projection to the first coordinate. Thus $\lambda=\int \mu_x\times\mu_xd\mu(x)$ can be also regarded as the disintegration of $\lambda$ over $\pi^{-1}(P_{\mu}(T))$. Then by Lemma \ref{tauup-mu} one has
$$\int fd\tau_{(z_1,z_2)}=\int\int fd\tau_{(y_1,y_2)}d\mu_{z_1}\times\mu_{z_1}(y_1,y_2)$$
for every $f\in C(X^2)$ and $\lambda$-a.e. $(z_1,z_2)\in X\times X$. This means that, for $\mu$-a.e. $x\in X$, the map $(z_1,z_2)\mapsto\int fd\tau_{(z_1,z_2)}$ is $\mu_x\times\mu_x$-a.e. constant on $\supp(\mu_x)\times \supp(\mu_x)$.
We set $f_0(x_1,x_2)=\rho(x_1,x_2)$, then $f_0(x_1,x_2)>0$ for every $(x_1,x_2)\notin \Delta_X:=\{(x,x)\colon x\in X\}$. Since $\lambda(\Delta_X)=0$, we have
$$0<\int f_0d\lambda=\int\int\int f_0d\tau_{(y_1,y_2)}d\mu_z\times\mu_zd\mu(z).$$
Then there exist a subset $X_2$ of $X$ with $\mu(X_2)>0$ and a constant $c>0$ such that
$\int\int f_0d\tau_{(y_1,y_2)}d\mu_z\times\mu_z(y_1,y_2)>c$ for any $z\in X_2$. Since $\int f_0d\tau_{(y_1,y_2)}$ is $\mu_z\times\mu_z$-a.e. constant, we get $\int f_0d\tau_{(y_1,y_2)}>c$ for $\mu_z\times\mu_z$-a.e. $(y_1,y_2)$. That is,
$$\frac{1}{|\Phi_{N_i}|}\sum\limits_{u\in\Phi_{N_i}}\rho(T^{P(u)}x_1,T^{P(u)}x_2)\to\int f_0d\tau_{(x_1,x_2)}>c>0$$
for $\mu_z\times\mu_z$-a.e. $(x_1,x_2)\in X\times X$ and any $z\in X_2$. We set
$$A=\{(x_1,x_2)\in X\times X: \limsup_{N\to\infty}\frac{1}{|\Phi_{N}|}\sum\limits_{u\in\Phi_{N}}\rho(T^{P(u)}x_1,T^{P(u)}x_2)>c\},$$
it is a $G_\delta$ subset of $X\times X$, and $A\cap(\supp(\mu_z)\times\supp(\mu_z))$ is a dense $G_\delta$ subset of $\supp(\mu_z)\times\supp(\mu_z)$ for every $z\in X_2$.
By Theorem \ref{W^s(x,T)dense}, we know there exists a subset $X_3$ of $X$ with $\mu(X_3)=1$ such that for every $x\in X_3$
$$\overline{W^s(x,T)\cap\supp(\mu_x)}=\supp(\mu_x).$$
We set
$$B=\{(x_1,x_2)\in X\times X: \liminf_{N\to\infty}\frac{1}{|\Phi_{N}|}\sum\limits_{u\in\Phi_{N}}\rho(T^{P(u)}x_1,T^{P(u)}x_2)=0\},$$
it is a $G_{\delta}$ subset of $X\times X$, and now we shall show that $W^s(x,T)\times W^s(x,T)\subseteq B$. Since $X$ is compact, we may assume $\mathrm{diam}(X)=1$. For every $y_1,y_2\in W^s(x,T)$ and any $\varepsilon>0$ there exists $K_0\in\N$ such that for every $k>K_0$ one has $\rho(T^ky_1,T^ky_2)<\frac{\varepsilon}{2}$.
Let $E=\{u\in\Z^s: P(u)\leq K_0\}$. By Lemma \ref{density}, for the F\o lner sequence $\{\Phi_n':=[-n,n]^{s}\}$ there exists $\{t_n\}\subset\Z^{s}$ such that $d^*(E)=\overline{d}_{\{\Phi_n'+t_n\}}(E)$. For any $n\in\N$, we have $\big\vert E\cap(\Phi_n'+t_n)\big\vert\leq (2n+1)^{(s-1)}(K_0+1)k$, where $k$ is the degree of $P$. Then
$$d^*(E)=\overline{d}_{\{\Phi_n'+t_n\}}(E)\leq\limsup\limits_{n\to\infty}\frac{(2n+1)^{(s-1)}(K_0+1)k}{(2n+1)^{s}}= 0.$$
This means $\overline{d}_{\{\Phi_n\}}(E)=0$. Thus there exists $N_1\in\N$ such that $\frac{|E_N|}{|\Phi_N|}<\frac{\varepsilon}{2}$ holds for every $N>N_1$, and
\begin{align*}
&\frac{1}{|\Phi_N|}\sum\limits_{u\in\Phi_N}\rho(T^{P(u)}y_1,T^{P(u)}y_2)\\
&=\frac{1}{|\Phi_N|}\sum\limits_{u\in E_N}\rho(T^{P(u)}y_1,T^{P(u)}y_2)+\frac{1}{|\Phi_N|}\sum\limits_{u\in\Phi_N\backslash E_N}\rho(T^{P(u)}y_1,T^{P(u)}y_2)\\
&\leq\frac{|E_N|}{|\Phi_N|}+\frac{\varepsilon}{2}\leq\varepsilon,
\end{align*}
where $E_N=E\cap\Phi_N$. This implies $(y_1,y_2)\in B$, thus $W^s(x,T)\times W^s(x,T)\subseteq B$.
Then $B\cap\big(\supp(\mu_x)\times\supp(\mu_x)\big)$ is a dense $G_{\delta}$ subset of $\supp(\mu_x)\times\supp(\mu_x)$ for every $x\in X_3$. Thus $A\cap B\cap\big(\supp(\mu_x)\times\supp(\mu_x)\big)$ is a dense $G_{\delta}$ subset of $\supp(\mu_x)\times\supp(\mu_x)$ for every $x\in X_2\cap X_3$. Then, by Mycielski's theorem, there exists a Cantor subset $C$ of $\supp(\mu_x)$ with $(C\times C)\setminus\Delta_X\subseteq A\cap B$, which satisfies the requirements of Theorem \ref{mainthem1}.
\end{proof}
\begin{rem}In the above proof, for any given non-constant polynomial $Q:\Z^s\to\Z^-$, by showing $W^u(x,T)\times W^u(x,T)\subseteq B$ and noticing the fact that $\overline{W^u(x,T)\cap\supp(\mu_x)}= \supp(\mu_x)$, we can also show that $B\cap \big(\supp(\mu_x)\times\supp(\mu_x)\big)$ is a dense $G_{\delta}$ subset of $\supp(\mu_x)\times\supp(\mu_x)$. Hence Theorem \ref{mainthem1} also holds along $Q$.
\end{rem}
\begin{rem}For a given F\o lner sequence $\{\Phi_n\}_{n=1}^{\infty}$, let $t_n=\min\limits_{u\in\Phi_n}\min\{n_i: u=(n_1,n_2,\dotsc,n_s)\}$. If there exists $t_0\in\N$ such that $t_0\leq t_n$ for every $n\in\N$ (for example $\{[0,n]^s\}_{n=1}^{\infty}$), then for any $P:\Z^s\to\Z$, Theorem \ref{mainthem1} also holds.
\end{rem}
\section{Mean Li-Yorke chaos along polynomials of prime numbers}
To study averages over the primes, the authors of \cite{Frantzikinakis-Host-Kra} replaced such averages with certain weighted averages over the integers; in the proof of their Lemma 1 they obtained the following result.
\begin{lem}\cite[Proof of Lemma 1]{Frantzikinakis-Host-Kra}\label{Host-Kra} For any $\varepsilon>0$ there exists $N_0\in\N$, such that for any $N>N_0$ and any map $a:\N\to\R$ with $\vert a(n)\vert\leq1$ one has
$$\bigg\vert\frac{1}{\pi(N)}\sum\limits_{p\in\p,p<N}a(p)-\frac{1}{N}\sum\limits_{n=0}^{N-1}\Lambda(n)a(n)\bigg\vert<\varepsilon,$$
where $\Lambda\colon\N\to\R$ is the von Mangoldt function defined by
$$\Lambda(n)=
\begin{cases}
\log p & \text{if $n=p^m$ for some $m\in\mathbb{N}$ and $p\in\p$,}\\
0& \text{otherwise.}
\end{cases}$$
\end{lem}
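With $a\equiv 1$, Lemma \ref{Host-Kra} reduces to the prime number theorem in the form $\frac{1}{N}\sum_{n<N}\Lambda(n)\to 1$, a fact also used below. A direct numerical check (our sketch, using a naive sieve; the function name is ours):

```python
import math

def mangoldt_psi_and_pi(N):
    """Return (sum of Lambda(n) for n <= N, number of primes <= N)."""
    sieve = [True] * (N + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(N ** 0.5) + 1):
        if sieve[i]:
            for j in range(i * i, N + 1, i):
                sieve[j] = False
    psi, pi = 0.0, 0
    for p in range(2, N + 1):
        if sieve[p]:
            pi += 1
            q = p
            while q <= N:          # Lambda(p^m) = log p for every prime power
                psi += math.log(p)
                q *= p
    return psi, pi

psi, pi = mangoldt_psi_and_pi(100_000)
print(psi / 100_000, pi)   # psi/N is close to 1; pi(10^5) = 9592
```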
Then for any $s\in\N$ and map $a:\N^s\to\R$ with $\vert a(n)\vert\leq 1$, one has
\begin{scriptsize}
\begin{align*}
&\bigg\vert\frac{1}{(\pi(N))^s}\sum\limits_{\mbox{\tiny$\begin{array}{c}
p_1,\dotsc p_s\in\p\nonumber\\
p_1,\dotsc,p_s<N\end{array}$}}a(p_1,\dotsc,p_s)-\frac{1}{N^s}
\sum\limits_{n_1,\dotsc,n_s=0}^{N-1}\Lambda(n_1)\dotsc\Lambda(n_s)a(n_1,\dotsc,n_s)\bigg\vert\\
&\leq\sum_{\ell=1}^{s}\left(\frac{1}{N^{\ell-1}}\sum\limits_{n_1,\dotsc,n_{\ell-1}=0}^{N-1}\Lambda(n_1)\dotsc\Lambda(n_{\ell-1})
\frac{1}{\pi(N)^{s-\ell}}\sum\limits_{\mbox{\tiny$\begin{array}{c}
p_{\ell+1},\dotsc p_s\in\p\nonumber\\
p_{\ell+1},\dotsc,p_s<N\end{array}$}}
\bigg\vert A_{n_1,\dotsc,n_{\ell-1},p_{\ell+1},\dotsc,p_s}(N)\bigg\vert\right)
\end{align*}
\end{scriptsize}
where
\begin{align*}
A_{n_1,\dotsc,n_{\ell-1},p_{\ell+1},\dotsc,p_s}(N)=\frac{1}{\pi(N)}\sum\limits_{p_\ell\in\p,\,p_{\ell}\leq N}
a(n_1,\dotsc,n_{\ell-1},p_{\ell},\dotsc,p_s)\\
-\frac{1}{N}\sum\limits_{n_{\ell}=0}^{N-1}\Lambda(n_{\ell})\, a(n_1,\dotsc,n_{\ell},p_{\ell+1},\dotsc,p_s).
\end{align*}
By Lemma \ref{Host-Kra}, for large enough $N\in\N$ we have $\vert A_{n_1,\dotsc,n_{\ell-1},p_{\ell+1},\dotsc,p_s}(N)\vert<\frac{\varepsilon}{2s}$ for all $n_1,\dotsc,n_{\ell-1}$, $p_{\ell+1},\dotsc,p_s$ and every $\ell=1,\dotsc,s$. Then, by the well-known fact that $\Lambda$ has mean one, for large enough $N$ one has $\frac{1}{N^{\ell-1}}\sum\limits_{n_1,\dotsc,n_{\ell-1}=0}^{N-1}\Lambda(n_1)\dotsc\Lambda(n_{\ell-1})\leq 2$ for every $\ell=1,2,\dotsc,s$. Summarizing the above discussion, we obtain the following lemma.
\begin{lem}\label{P-host-kra} For any $s\in\N$ and $\varepsilon>0$, there exists $N_0\in\N$ such that for any $N>N_0$ and any map $a:\N^s\to[-1,1]$ one has
$$\bigg\vert\frac{1}{(\pi(N))^s}\sum\limits_{\mbox{\tiny$\begin{array}{c}
p_1,\dotsc p_s\in\p\nonumber\\
p_1,\dotsc,p_s<N\end{array}$}}a(p_1,\dotsc,p_s)-\frac{1}{N^s}
\sum\limits_{n_1,\dotsc,n_s=0}^{N-1}\Lambda(n_1)\dotsc\Lambda(n_s)a(n_1,\dotsc,n_s)\bigg\vert<\varepsilon.$$
\end{lem}
In the proof of \cite[Theorem 3]{Frantzikinakis-Host-Kra}, the authors proved the following lemma.
\begin{lem}\label{P-host-kra2}For any $\varepsilon>0$ and measure preserving system $(X,\mathscr{B}_X,T,\mu)$, there exist $W_0, N_0\in\N$ such that for any $N>N_0$ and any $a\colon X\times\N\to\R$ with $a(\cdot,n)\in C(X)$ and $\vert a(x,n)\vert\leq 1$, we have
\begin{scriptsize}
\begin{align*}&\bigg\Vert \frac{1}{[W_0N/3]}\sum\limits_{0\leq n<[W_0N/3]}\big(\Lambda(n)a(x,n)\big)-\frac{1}{\phi(W_0)}\sum\limits_{\mbox{\tiny$\begin{array}{c}
0\leq r<W_0,\nonumber\\
(r,W_0)=1\end{array}$}}\frac{1}{[N/3]}\sum\limits_{0\leq n<[N/3]}a(x,W_0n+r)\bigg\Vert_{L^2(\mu)}<\varepsilon,
\end{align*}
\end{scriptsize}
where $\phi$ is the Euler function.
\end{lem}
\begin{thm}\label{P-L^2} Let $s\in\N$, $P:\N^s\to\N$ be a non-constant integer polynomial, then for any measure preserving system $(X,\mathscr{B}_X,\mu,T)$ and $f\in L^{\infty}(X)$, the average
$$\frac{1}{\pi(N)^s}\sum\limits_{\mbox{\tiny$\begin{array}{c}
0\leq p_1,\dotsc, p_s< N,\\
p_1,\dotsc, p_s\in\p\end{array}$}}T^{P(p_1,p_2,\dotsc,p_s)}f$$
converges in $L^2(\mu)$ as $N\to\infty$.
\end{thm}
\begin{proof}Without loss of generality, we can assume $\Vert f\Vert_{\infty}\leq 1$. For any $x\in X$, we set $a_x:\N^s\to\R$ as $a_x(n_1,\dotsc,n_s)=f(T^{P(n_1,\dotsc,n_s)}x)$. Then by Lemma \ref{P-host-kra}, it suffices to show that the average
$$\frac{1}{N^s}\sum\limits_{0\leq n_1,\dotsc, n_s<N}\bigg(\Lambda(n_1)\dotsc\Lambda(n_s)T^{P(n_1,n_2,\dotsc,n_s)}f\bigg)$$
converges in $L^2(\mu)$ as $N\to\infty$. To this end, for any $W\in\N$ we estimate
\begin{footnotesize}
\begin{align}\label{P-e3}
&\bigg\Vert \frac{1}{[WN/3]^s}\sum\limits_{n_1,\dotsc,n_s=0}^{[WN/3]-1}\bigg(\Lambda(n_1)\dotsc\Lambda(n_s)a_x(n_1,\dotsc, n_s)\bigg)\nonumber\\
&\ \ \ \ \ \ \ \ \ \ \ \ -\frac{1}{\phi(W)^s}\sum\limits_{\mbox{\tiny$\begin{array}{c}
0\leq r_1,\dotsc,r_s<W,\nonumber\\
(r_i,W)=1\end{array}$}}\frac{1}{[N/3]^s}\sum\limits_{n_1,\dotsc,n_s=0}^{[N/3]-1}a_x(Wn_1+r_1,\dotsc,Wn_s+r_s)\bigg\Vert_{L^2(\mu)}\nonumber\\
&\leq \sum\limits_{\ell=1}^{s}\frac{1}{\phi(W)^{\ell-1}}\sum\limits_{\mbox{\tiny$\begin{array}{c}
0\leq r_1,\dotsc,r_{\ell-1}<W,\\
(r_i,W)=1\end{array}$}}\frac{1}{[N/3]^{\ell-1}}\sum\limits_{n_1,\dotsc,n_{\ell-1}=0}^{[N/3]-1}
B_{r_1\dotsc r_{\ell-1}}^{n_1\dotsc n_{\ell-1}}([WN/3])
\end{align}
\end{footnotesize}
where $\phi$ is the Euler function,
\begin{footnotesize}
$$B_{r_1\dotsc r_{\ell-1}}^{n_1\dotsc n_{\ell-1}}([WN/3])=\frac{1}{[WN/3]^{s-\ell}}\sum\limits_{n_{\ell+1},\dotsc, n_s=0}^{[WN/3]-1}\Lambda(n_{\ell+1})\dotsc\Lambda(n_s)D_{r_1\dotsc r_{\ell-1}}^{n_1\dotsc n_{\ell-1},n_{\ell+1},\dotsc,n_s}([WN/3]),$$
\end{footnotesize}
and
\begin{scriptsize}
\begin{align*}
D_{r_1,\dotsc,r_{\ell-1}}^{n_1,\dotsc,n_{\ell-1},n_{\ell+1},\dotsc,n_s}([WN/3])=\bigg\Vert\frac{1}{[WN/3]}\sum\limits_{n_\ell=0}^{[WN/3]-1}
\Lambda(n_\ell)a_x(Wn_1+r_1,\dotsc, Wn_{\ell-1}+r_{\ell-1},n_\ell,\dotsc, n_s)\\
-\frac{1}{\phi(W)}\sum\limits_{\mbox{\tiny$\begin{array}{c}
0\leq r_{\ell}<W,\nonumber\\
(r_\ell,W)=1\end{array}$}}\frac{1}{[N/3]}\sum\limits_{n_\ell=0}^{[N/3]-1}a_x(Wn_1+r_1,\dotsc,Wn_\ell+r_{\ell},n_{\ell+1},\dotsc,n_s)\bigg\Vert_{L^2(\mu)}.
\end{align*}
\end{scriptsize}
For any $\varepsilon>0$, by Lemma \ref{P-host-kra2}, there exists $W_0\in\N$ such that for $N\in\N$ large enough, one has
$$D_{r_1,\dotsc,r_{\ell-1}}^{n_1,\dotsc,n_{\ell-1},n_{\ell+1},\dotsc,n_s}([W_0N/3])<\frac{\varepsilon}{2s},$$
for every $r_1,\dotsc,r_{\ell-1}$, $n_1,\dotsc,n_{\ell-1},n_{\ell+1},\dotsc,n_s$ and $\ell=1,\dotsc,s$. By the fact that $\Lambda$ has mean one, for large enough $N\in\N$ we have
$$\frac{1}{[W_0N/3]^{s-\ell}}\sum\limits_{n_{\ell+1}\dotsc n_s=0}^{[W_0N/3]-1}\Lambda(n_{\ell+1})\dotsc\Lambda(n_s)<2$$
for every $1\leq\ell\leq s$. Thus there exist $W_0\in\N$ and $N_0\in\N$ such that for any $N>N_0$ one has $(\ref{P-e3})<\varepsilon$.
By Theorem \ref{Leibman},
$$\frac{1}{\phi(W_0)^s}\sum\limits_{\mbox{\tiny$\begin{array}{c}
1\leq r_1,\dotsc,r_s<W_0,\nonumber\\
(r_i,W_0)=1\end{array}$}}\frac{1}{[N/3]^s}\sum\limits_{n_1\dotsc n_s=0}^{[N/3]-1}a_x(W_0n_1+r_1,\dotsc,W_0n_s+r_s)$$
converges in $L^2(\mu)$ as $N\to\infty$. We set
$$E([W_0N/3])(x)=\frac{1}{[W_0N/3]^s}\sum\limits_{n_1,\dotsc,n_s=0}^{[W_0N/3]-1}\Lambda(n_1)\dotsc\Lambda(n_s)a_x(n_1,\dotsc, n_s).$$
Using the triangle inequality, we have that for large enough $M$ and $N$,
$$\bigg\Vert E([W_0N/3])(x)-E([W_0M/3])(x)\bigg\Vert<\varepsilon.$$
Since $E([W_0N/3]+i)(x)-E([W_0N/3])(x)\to0$ as $N\to\infty$ for $0\leq i<W_0/3$, we conclude that $E(N)(x)$ is a Cauchy sequence. This finishes the proof.
\end{proof}
Arguing as in the proof of Proposition \ref{disintegration}, we have the following proposition.
\begin{prop}\label{P-disintegration}Let $s\in\N$ and $P:\N^s\to\N$. Then for any measure preserving system $(X,\mathscr{B}_X,\mu,T)$, there exists a disintegration of $\mu$, $\mu=\int\tau_xd\mu(x)$, in the sense that there exist $\{N_i\}_{i=1}^{\infty}$ and a Borel subset $X_0\subseteq X$ with $\mu(X_0)=1$, such that
$$\lim\limits_{i\to\infty}\frac{1}{\pi(N_i)^s}\sum\limits_{\mbox{\tiny$\begin{array}{c}
1\leq p_1,\dotsc,p_s\leq N_i,\\
p_i\in\p\end{array}$}}f(T^{P(p_1,p_2,\dotsc,p_s)}x)=\int fd\tau_x$$
holds for every $x\in X_0$ and $f\in C(X)$, and
$$\int\int fd\tau_xd\mu(x)=\int fd\mu.$$
\end{prop}
Note that, for any $s\in\N$, non-constant polynomial $P:\N^s\to\Z$ with degree $m$ and $t_0<t_1\in\Z$, let
$$E(N)=\{(p_1,p_2,\dotsc,p_s)\in\p^s\cap[1,N]^s:\ \ t_0<P(p_1,p_2,\dotsc,p_s)<t_1\}.$$
We have
$$\vert E(N)\vert\leq\pi(N)^{(s-1)}(t_1-t_0)m.$$
Thus $\frac{\vert E(N)\vert}{\pi(N)^s}\to 0$ as $N\to\infty$. Then, following the proofs of Theorems \ref{characteristic} and \ref{tauup-mu}, one has the following results.
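The counting bound above can be illustrated numerically; the choices below ($s=2$, $P(p_1,p_2)=p_1p_2$ of degree $m=2$, $t_0=0$, $t_1=100$) are only an illustration, not values from the text:

```python
def primes_upto(N):
    """Simple Eratosthenes sieve returning all primes <= N."""
    sieve = [True] * (N + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(N ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p::p] = [False] * len(sieve[p * p::p])
    return [n for n in range(2, N + 1) if sieve[n]]

def exceptional_fraction(N, t0=0, t1=100, m=2):
    """|E(N)| and |E(N)| / pi(N)^s for P(p1, p2) = p1 * p2 (s = 2),
    together with the bound pi(N)^(s-1) * (t1 - t0) * m from the text."""
    ps = primes_upto(N)
    count = sum(1 for p1 in ps for p2 in ps if t0 < p1 * p2 < t1)
    bound = len(ps) * (t1 - t0) * m
    return count, bound, count / len(ps) ** 2
```

As $N$ grows, the fraction of exceptional prime tuples indeed shrinks toward zero.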
\begin{thm}\label{P-characteristic}Let $s\in\N$, $P\colon\N^s\to\N$, $(X,\mathscr{B}_X,\mu,T)$ be a measure preserving system, and $P_{\mu}(T)$ be its Pinsker $\sigma$-algebra. Then for any $f\in L^{\infty}(\mu)$
$$\frac{1}{\pi(N)^s}\sum\limits_{\mbox{\tiny$\begin{array}{c}
0\leq p_1,\dotsc,p_s<N,\\
p_i\in\p\end{array}$}}\bigg(T^{P(p_1,\dotsc,p_s)}f-T^{P(p_1,\dotsc,p_s)}E(f|P_{\mu}(T))\bigg)\to0$$
in $L^2(\mu)$ as $N\to\infty$.
\end{thm}
\begin{lem}\label{P-constant}Let $(X,\mathscr{B}_X,\mu,T)$ be a measure preserving system, $P_{\mu}(T)$ be its Pinsker $\sigma$-algebra and $\mu=\int\mu_xd\mu(x)$ be the disintegration of $\mu$ over $P_{\mu}(T)$. If $\mu=\int\tau_xd\mu(x)$ is the disintegration of $\mu$ as in Proposition \ref{P-disintegration}, then for every $f\in C(X)$ one has
$$\int f d\tau_x=\int\int f d\tau_yd\mu_x(y)$$
for $\mu$-a.e. $x\in X$.
\end{lem}
The proof of Theorem \ref{mainthm2} can be obtained by following the proof of Theorem \ref{mainthem1} and combining it with the above results.
\bibliographystyle{amsplain}
\section{Introduction}
With the advent of the era of the Internet of Things (IoT), the unprecedented growth of latency-critical applications can hardly be satisfied by {\em mobile cloud computing (MCC)} alone. To cater to low-latency requirements while alleviating the burden on backhaul networks, {\em mobile-edge computing (MEC)}, also interchangeably known as {\em fog computing}, has brought about a paradigm shift by extending cloud capabilities to the very edge of the radio access network (RAN) (see \cite{mao2017survey} and the references therein).
Both industry and academia have devoted constant effort to providing next-generation mobile networks with ultra-reliable low-latency communications (uRLLC). Among pioneering industrial efforts, Cisco has proposed fog computing as a promising candidate for the IoT architecture \cite{Bonomi2012ACM}. In academia, \cite{6574874,7264984,7442079,6675770} focused on one-to-one offloading schemes with one mobile user and one corresponding cloudlet, \cite{6678114,7511367} presented multi-user cases with multiple edge servers, while \cite{6787113} considered many-to-one scenarios where multiple mobile users offload computing to one edge server.
Recently, the intrinsic collaborative properties of the input data for computation offloading were investigated for augmented reality (AR) in \cite{7906521}. In fact, in many mobile applications such as AR and virtual reality (VR), multiple mobile devices share parts of the computing input/output in common, making it possible to further reduce computing latency at the edge. In \cite{8332500}, some important insights on the interplay among social interactions in VR mobile social networks were revealed, and a significant reduction in end-to-end latency was achieved through stochastic optimization techniques. \cite{8335683} exploited potential spatial data correlation in VR applications to minimize the delay of accomplishing computation.
On another front, the joint optimization of computation offloading and communications resources (such as power, bandwidth, and rate) has proven to improve the performance of fog computing by explicitly taking channel conditions and communications constraints into account. In an early work \cite{4536215}, offloading decision making was examined through the estimation of bandwidth data, without considering the allocation of communication resources or channel conditions. For communications-aware computation offloading, \cite{7769867687e} minimized the local user's computation latency in a multi-user cooperative scenario, while \cite{8234686} minimized the energy consumption of remote fog computing nodes. However, this line of work has not taken the aforementioned shared-data feature into account, thus failing to fully reap the advantages of fog computing.
In this paper, we consider a multi-user fog computing system, in which multiple single-antenna mobile users running applications featuring shared data can choose between (partially) offloading their computing tasks to a nearby single-antenna cloudlet and executing them locally, and then download the results from the cloudlet. The mobile users' overall energy consumption is minimized via joint optimization of computation offloading and communications resource allocation. Although the energy minimization problem of shared-data featured offloading was investigated in the existing literature, e.g., \cite{7906521}, the optimal solution was not found there, nor was an explicit conclusion drawn regarding the influence of channel conditions on computation offloading. From this point of view, our work provides an in-depth understanding of shared-data featured offloading in MEC systems.
\section{System Model}
We consider a mobile-edge system that consists of $U$ mobile users running AR applications, denoted as ${\cal U}=\{1, ..., U\}$, and one base station (BS) equipped with computing facilities working as a cloudlet. All of the mobile users and the BS are assumed to be equipped with single antenna.
The input data size for user $u$ is denoted by $D^I_u$, $\forall u \in {\cal U}$, of which $D_S^I$ bits are the shared data common to all $U$ mobile users, and $D_u^L$ bits are the data executed locally by user $u$. The shared data can be transmitted in parts by the users, denoted by \(D_{u,S}^I\), \(\forall u \in {\cal U}\), such that \(\sum_{u=1}^UD_{u,S}^I=D_S^I\). The amount of input data that is exclusively transmitted by user $u$ is thus given by $\bar{D}^I_u=D^I_u-D^I_{S}-D^L_u, \forall u\in {\cal U}$.
\begin{figure}[h]
\centering
\includegraphics[width=8.5cm]{timing_diagram.eps}
\caption{\bf{Timing illustration for the considered multi-user MEC system.}}
\label{fig-sample}
\end{figure}
It can be seen from Fig. 1 that there are two consecutive sub-phases in both the input data offloading and the results downloading phases: the shared and the individual data transmission. The transmission duration for offloading the shared input data is denoted by $t_{u,S}^{ul}$, $\forall u \in {\cal U}$; the offloading duration for the individual data is denoted by $t_u^{ul}$, $\forall u \in {\cal U}$; and the durations for downloading the shared and the individual output data are $t^{dl}_{u,S}$ and $t^{dl}_u$, $\forall u \in {\cal U}$, respectively. The remote computation times are also illustrated in Fig. 1, where $t_S^C$ and $t_u^C$, $\forall u \in {\cal U}$, denote those for the shared and the individual data transmitted to the cloudlet, respectively. Similarly, $F$ and $f_u$, $\forall u \in {\cal U}$, denote the computational frequencies (in cycles/s) allocated by the cloudlet to the shared and the individual tasks, respectively. In addition, the local computation time is denoted by $t_{u,L}^C$, $\forall u \in {\cal U}$.
\subsection{Uplink Transmission}
As observed from Fig. 1, there are two consecutive uplink transmission sub-phases: the shared data and the individual data offloading \cite{7906521}. Each mobile user offloads its computation task to the cloudlet server via frequency division multiple access (FDMA). The channel coefficient from user $u$ is given by $h_u, \forall u \in {\cal U}$, which is assumed to remain unchanged during the uplink transmission duration. With the transmission power given by $p_{u,S}^{ul}$, the achievable individual data rate for offloading the shared data is expressed as:
\begin{equation}
R_{u,S}^{ul}=W_u^{ul}\log_2(1+\dfrac{p_{u,S}^{ul}|h_u|^2}{N_0}), \forall u \in {\cal U},
\end{equation}
where $W_u^{ul}=\tfrac{W^{ul}}{U}$ with $W^{ul}$ denoting the overall bandwidth available for the uplink transmission, and $N_0$ is the additive white Gaussian noise (AWGN) power. Accordingly, $t_{u,S}^{ul}=D^I_{u,S}/R_{u,S}^{ul}$, and the energy consumed by the $u$-th user in the shared data offloading sub-phase is given as
\begin{equation}
E_{u,S}^{ul}=t^{ul}_{u,S}p_{u,S}^{ul}=\dfrac{t^{ul}_{u,S}}{|h_u|^2}f(\dfrac{D^I_{u,S}}{t^{ul}_{u,S}}), \forall u \in {\cal U}, \label{eq:shared data energy}
\end{equation}
where the function $f(x)$ is defined as $f(x)=N_0(2^{\tfrac{x}{W^{ul}_u}}-1)$.
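For concreteness, the offloading energy \eqref{eq:shared data energy}--\eqref{eq:individual data energy} can be evaluated numerically. The parameter values below are illustrative placeholders, not taken from the paper:

```python
import math

# Illustrative placeholder parameters (not the paper's values)
N0 = 1e-13        # AWGN power (W)
W_u = 1e6         # per-user uplink bandwidth W_u^{ul} (Hz)
h2 = 1e-6         # uplink channel power gain |h_u|^2

def f(x):
    # f(x) = N0 * (2^(x / W_u) - 1), as defined above
    return N0 * (2 ** (x / W_u) - 1)

def offload_energy(D_bits, t):
    # E = (t / |h|^2) * f(D / t), c.f. (2)-(3)
    return (t / h2) * f(D_bits / t)
```

Since $2^x-1$ is convex and exceeds $x\ln 2$, the energy decreases monotonically as the transmission time grows, approaching $N_0 D\ln 2/(W_u^{ul}|h_u|^2)$ from above; this is why the latency constraints are active at the optimum.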
Similarly, the energy consumption for the $u$-th user in the individual data offloading sub-phase is expressed as:
\begin{equation}
E_u^{ul}=t^{ul}_{u}p_{u}^{ul}=\dfrac{t^{ul}_{u}}{|h_u|^2}f(\dfrac{D^I_u-D^I_S-D^L_u}{t^{ul}_{u}}), \forall u \in {\cal U}.\label{eq:individual data energy}
\end{equation}
\subsection{Computation Model}
Based on the energy model in \cite{6787113}, given the local computing bits $D_u^L$, the energy consumption for executing local computation is given by:
\begin{equation}
E^C_u=\kappa_0\dfrac{(\lambda_0 D_u^{L})^3}{{t^{C}_{u,L}}^2}, \forall u \in {\cal U}, \label{eq:local computing energy}
\end{equation}
where $\lambda_0$ (in cycles/bit) denotes the number of CPU cycles needed for processing one bit of input data, and $\kappa_0$ is the energy consumption capacitance coefficient.
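A quick numerical sketch of the local computing energy \eqref{eq:local computing energy}, using the values of $\kappa_0$ and $\lambda_0$ adopted later in the simulations:

```python
kappa0 = 1e-26    # effective capacitance coefficient (simulation value)
lambda0 = 1e3     # CPU cycles per bit (simulation value)

def local_energy(D_L_bits, t):
    # E_u^C = kappa0 * (lambda0 * D_L)^3 / t^2, c.f. (4)
    return kappa0 * (lambda0 * D_L_bits) ** 3 / t ** 2
```

The energy grows cubically in the number of locally computed bits and inversely quadratically in the allowed computing time, which is why it is optimal to stretch the local computation over the whole latency budget.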
\subsection{Downlink Transmission}
Similar to the uplink transmission, the downlink transmission phase also has two separate sub-phases: the shared and the individual results downloading. The shared output data are multicasted to the mobile users by the cloudlet at its maximum transmitting power $P_{\max}$. The achievable individual rate for the shared data downloading is thus given by
\begin{equation}
R^{dl}_{u,S}=W_u^{dl}\log_2(1+\dfrac{P_{\max}|g_u|^2}{N_0}), \forall u \in {\cal U},
\end{equation}
where $W_u^{dl}=\tfrac{W^{dl}}{U}$ with $W^{dl}$ denoting the overall bandwidth available for downlink transmission. The downlink channel coefficient is given by $g_u$, $\forall u \in {\cal U}$. The relation between the shared output data and the input data is given by \(D_S^O=a_0D_S^I\), where \(a_0\) is the factor representing the number of output bits produced by executing one bit of input data. Accordingly, $t^{dl}_{u,S}=D^O_S/R^{dl}_{u,S}, \forall u \in {\cal U}$, and thus the latency for transmitting the shared output data to all mobile users is given by
\begin{equation}
t^{dl}_S=\max_{u\in{\cal U}}\{t^{dl}_{u,S}\}.
\end{equation}
This is because the individual results downloading cannot be initiated until the shared data has finished transmission.
After the multicasting transmission, the individual output data is sent to each mobile user via FDMA. Denoting the downlink transmitting power for the $u$-th individual data by $p^{dl}_u$, the achievable rate for individual data downloading is thus expressed as:
\begin{equation}
R^{dl}_u=W^{dl}_u\log_2(1+\dfrac{p^{dl}_u|g_u|^2}{N_0}), \forall u \in {\cal U}.
\end{equation}
Similarly, denoting the individual output data size by \(D_u^O\), \(\forall u \in {\cal U}\), \(D_u^O=a_0\bar D_u^I=a_0(D_u^I-D_S^I-D_u^L)\), and \(t_u^{dl}=D^O_u/R^{dl}_u\).
For energy consumption, the overall energy consumed for decoding the result sent back by the cloudlet at the $u$-th mobile user is given by \cite{7906521}
\begin{equation}
E^{dl}_u=(t^{dl}_{u,S}+t^{dl}_u)\rho^{dl}_u, \forall u \in {\cal U}, \label{eq:downloading energy}
\end{equation}
where $\rho^{dl}_u$ (in Joules/second) captures the energy expenditure per second.
In addition, the total energy consumed by the BS for results transmission is given by,
\begin{align}
\sum_{u \in {\cal U}}\dfrac{t_u^{dl}}{|g_u|^2}f(\dfrac{a_0(D^I_u-D^I_S-D^L_u)}{t^{dl}_u}),
\end{align} which is required not to exceed $E_{\max}$ by the BS operator.
\subsection{Total Latency}
Next, we consider the overall computing latency. As illustrated in Fig. 1, the individual data downloading in Phase II cannot start until the cloudlet completes the individual data computing and the BS finishes the shared data transmission over the downlink.
Moreover, the individual data computing cannot start before either the corresponding individual data finishes offloading or the cloudlet completes the shared data computing, i.e., before \(\max\{t^{ul}_{u,S}+t^{ul}_u,\displaystyle\max_{u \in {\cal U}}\{t^{ul}_{u,S}\}+t^C_S\}\). Furthermore, as also seen from Fig. 1, the shared results can only start being transmitted in the downlink after the cloudlet completes the shared data computing and all the individual data finishes offloading in the uplink, i.e., after \(\max\bigg\{\displaystyle\max_{u \in {\cal U}}\{t^{ul}_{u,S}\}+t^C_S, \displaystyle\max_{u \in {\cal U}}\{t^{ul}_{u,S}+t^{ul}_u\}\bigg\}\). Combining the above facts, the total computing latency is expressed as follows:
\begin{equation}
\begin{split}
\label{equ:latency expression}
&\tau_u=\max\Bigg\{\max\{t^{ul}_{u,S}+t^{ul}_u,\displaystyle\max_{u \in {\cal U}}\{t^{ul}_{u,S}\}+t^C_S\}+t^C_u,\\
&\max\bigg\{\displaystyle\max_{u \in {\cal U}}\{t^{ul}_{u,S}\}+t^C_S, \displaystyle\max_{u \in {\cal U}}\{t^{ul}_{u,S}+t^{ul}_u\}\bigg\}+t^{dl}_S\Bigg\}+t^{dl}_u ,\\
&\forall u \in {\cal U}.\\
\end{split}
\end{equation}
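The nested max structure of \eqref{equ:latency expression} is easy to mis-read; a direct transcription is given below as a sketch, assuming $t^C_S$ and $t^{dl}_S$ are the system-wide scalars defined above while the remaining times are per-user:

```python
def total_latency(t_ulS, t_ul, t_CS, t_C, t_dlS, t_dl):
    """Evaluate tau_u of (10) for all users.
    t_ulS, t_ul, t_C, t_dl are per-user lists; t_CS and t_dlS are the
    (scalar) shared computing and shared downloading times."""
    U = range(len(t_ul))
    m_share = max(t_ulS[u] for u in U)            # last shared offload ends
    m_all = max(t_ulS[u] + t_ul[u] for u in U)    # last individual offload ends
    start_dlS = max(m_share + t_CS, m_all)        # shared results may start here
    return [max(max(t_ulS[u] + t_ul[u], m_share + t_CS) + t_C[u],
                start_dlS + t_dlS) + t_dl[u] for u in U]
```

For a single user with $t^{ul}_{u,S}=1$, $t^{ul}_u=2$, $t^C_S=0.5$, $t^C_u=0.3$, $t^{dl}_S=1$ and $t^{dl}_u=0.4$, this yields $\tau_u=4.4$.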
\section{Problem Formulation}
The overall energy consumption at the mobile users consists of three parts: data offloading over the uplink (c.f. (2) and (3)), local computing (c.f. (4)), and results retrieving (c.f. (8)), which is thus given by
\begin{equation}
\begin{split}
&E_{total}=\sum_{u\in{\cal U}}\kappa_0\dfrac{(\lambda_0D^L_u)^3}{{t^C_{u,L}}^2}+\sum_{u\in{\cal U}}\dfrac{t_{u,S}^{ul}}{|h_u|^2}f(\dfrac{D^I_{u,S}}{t^{ul}_{u,S}})\\
&+\sum_{u\in{\cal U}}\dfrac{t^{ul}_u}{|h_u|^2}f(\dfrac{D^I_u-D^I_S-D^L_u}{t^{ul}_u})+\sum_{u\in{\cal U}}(t^{dl}_{u,S}+t^{dl}_u)\rho^{dl}_u .
\end{split}
\end{equation}
The objective is to minimize the overall energy consumption given by $E_{total}$, subject to the computing latency constraints, the maximum local computing frequencies, and the total energy consumption on the individual data at the BS. Specifically, the optimization problem is formulated as below:
\begin{subequations}
\begin{align}
\mathrm{(P1)}: &{\kern-12pt}\mathop{\mathtt{min}}_{\{t_{u,S}^{ul},t^{ul}_u,t^C_{u,L},t^{dl}_u,D^L_u, D^I_{u,S}\}}{\kern-12pt}
~E_{total}\\
&{\kern20pt}\mathtt {s.t.} \notag\\
&~\tau_u \leq T_{max}, \forall u \in {\cal U},\label{eq:latency constraint}\\
&\sum_{u \in {\cal U}}\dfrac{t_u^{dl}}{|g_u|^2}f(\dfrac{a_0(D^I_u-D^I_S-D^L_u)}{t^{dl}_u}) \leq E_{max}, \label{eq:downlink energy constraint}\\
&0 \leq t^C_{u,L}\leq T_{max}, \forall u \in {\cal U},\label{eq:local latency constraint}\\
&\lambda_0D^L_u\leq t^C_{u,L}f_{u,max},\label{eq:local bits constraint}\\
&0\leq D^L_u \leq D_u^I-D_S^I, \forall u \in {\cal U},\\
&\sum_{u\in{\cal U}}D^I_{u,S}=D^I_S, D^I_{u,S}\geq 0,\label{eq:shared data constraint}\\
&t_{u,S}^{ul}\geq0, t^{ul}_u\geq0, t^C_{u,L}\geq0, t^{dl}_u\geq 0, \forall u \in {\cal U}.
\end{align}
\end{subequations}
Constraints \eqref{eq:latency constraint} and \eqref{eq:local latency constraint} give the latency constraints: the time taken for accomplishing the computing tasks, by offloading and by local computing respectively, cannot exceed the maximum allowed length. \eqref{eq:downlink energy constraint} states that the energy available for the downlink transmission of the remote computing node cannot exceed a maximum level. \eqref{eq:local bits constraint} restricts the number of allowable local computing bits imposed by the local computing capability. Besides, \eqref{eq:shared data constraint} requires that the shared data bits offloaded by all mobile users add up to exactly the amount of shared bits in the user group.
\section{Optimal scheme for joint offloading and communication resource allocation}
\subsection{Problem Reformulation}
Although the latency expression \eqref{equ:latency expression} looks complex in its form, \eqref{eq:latency constraint} is still a convex constraint. For ease of exposition, we assume herein that the cloudlet executes the shared and the individual computing within the durations of the individual data offloading and the shared results downloading, respectively, i.e., \(t_S^C\ll t_u^{ul}\) and \(t_u^C\ll t_{u,S}^{dl}\), \(\forall u \in {\cal U}\)\footnote{We assume herein that the computation capacity at the cloudlet is much higher than those at the mobile users, and thus the computing time is much shorter than the data transmission time.}. As a result, \eqref{eq:latency constraint} can be simplified as below:
\begin{equation}
\max_{u\in{\cal U}}\{t^{ul}_{u,S}+t^{ul}_u\}+t^{dl}_S+t^{dl}_u \leq T_{max}, \forall u \in {\cal U}.\label{eq:transformed latency constraint}
\end{equation}
By introducing the auxiliary variable $t^{dl}$, which satisfies $t^{dl}_u \leq t^{dl}, \forall u \in {\cal U}$, \eqref{eq:transformed latency constraint} reduces to
\begin{equation}
t^{ul}_{u,S}+t^{ul}_u\leq T_{max}-t^{dl}_S-t^{dl}, \forall u \in {\cal U} .\label{eq:final latency constraint}
\end{equation}
Notice that \(E_u^C\) (c.f. (4)) is monotonically decreasing with respect to the local computing time $t_{u,L}^C$ for each mobile user. To obtain the minimal energy consumption, it is obvious that $t_{u,L}^C=T_{max}, \forall u \in {\cal U}$. The optimization problem to be solved is then reformulated as:
\begin{subequations}
\begin{align}
\mathrm{(P1^\prime)}: &{\kern-12pt}\mathop{\mathtt{min}}_{\{t_{u,S}^{ul},t^{ul}_u,t^{dl}_u,t^{dl},D^L_u, D^I_{u,S}\}}{\kern-12pt}
~E_{total}\\
&{\kern20pt}\mathtt {s.t.}\notag\\
& ~(12c-12h), (14).\\
&t^{dl}_u \leq t^{dl}, \forall u \in {\cal U}.
\end{align}
\end{subequations}
\subsection{Joint offloading and communication resource allocation}
Introducing dual variables ${\bm {\beta}}, {\bm {\omega}}, {\bm {\sigma}}, {\nu}$, the Lagrangian of problem $(P1^\prime)$ is presented as:
\begin{equation}
\begin{split}
&L({\bm {\beta}}, {\bm {\omega}}, {\bm {\sigma}}, {\nu},t_{u,S}^{ul},t^{ul}_u,t^{dl}_u,t^{dl},D^L_u, D^I_{u,S})=\\
&\sum_{u\in{\cal U}}\dfrac{t_{u,S}^{ul}}{|h_u|^2}f(\dfrac{D^I_{u,S}}{t^{ul}_{u,S}})+\sum_{u\in{\cal U}}\dfrac{t^{ul}_u}{|h_u|^2}f(\dfrac{D^I_u-D^I_S-D^L_u}{t^{ul}_u})\\
&+\sum_{u\in{\cal U}}\kappa_0\dfrac{(\lambda_0D^L_u)^3}{{t^C_{u,L}}^2}+\sum_{u\in{\cal U}}(t^{dl}_{u,S}+t^{dl}_u)\rho^{dl}_u+\sum_{u\in{\cal U}}\beta_u(t^{ul}_{u,S}\\
&+t^{ul}_u-T_{max}+t^{dl}_S+t^{dl})+\sum_{u\in{\cal U}}\omega_u(\lambda_0D^L_u\\
&-t^C_{u,L}f_{u,max})+\sum_{u\in{\cal U}}\sigma_u(t^{dl}_u-t^{dl})\\
&+\nu[\sum_{u\in{\cal U}}\dfrac{t^{dl}_u}{|g_u|^2}f(\dfrac{a_0(D^I_u-D^I_S-D^L_u)}{t^{dl}_u})-E_{max}],
\end{split}
\end{equation}
where ${\bm {\beta}}=\{\beta_1, ..., \beta_U\}$ are the dual variables associated with the latency constraint \eqref{eq:final latency constraint}, ${\bm {\omega}}=\{\omega_1, ..., \omega_U\}$ are associated with the local computing bits constraint \eqref{eq:local bits constraint}, ${\bm {\sigma}}=\{\sigma_1, ..., \sigma_U\}$ are associated with the constraint on the auxiliary variable $t^{dl}$, and $\nu$ captures the downlink transmission energy constraint \eqref{eq:downlink energy constraint}. Hence, the Lagrangian dual function is expressed as:
\begin{equation}
\begin{split}
&g({\bm {\beta}}, {\bm {\omega}},{\bm {\sigma}}, {\nu})\\
&=\smash{\displaystyle\min_{\{t_{u,S}^{ul},t^{ul}_u,t^{dl}_u,t^{dl},D^L_u, D^I_{u,S}\}}} L({\bm {\beta}}, {\bm {\omega}},{\bm {\sigma}}, {\nu},t_{u,S}^{ul},t^{ul}_u,t^{dl}_u,t^{dl},\\
&D^L_u, D^I_{u,S}),\label{eq:Lagrangian dual function}
\end{split}
\end{equation}
\quad\quad s.t.
\quad\quad\quad\quad\quad\quad (12f-12h).
Consequently, the corresponding dual problem is formulated as:
\begin{equation}
\smash{\displaystyle\max_{\{{\bm {\beta}}, {\bm {\omega}}, {\bm {\sigma}}, {\nu}\}}} g({\bm {\beta}}, {\bm {\omega}}, {\bm {\sigma}},{\nu})\label{eq:Dual Problem}
\end{equation}
\quad\quad s.t.
\begin{equation}
\quad\quad\quad\quad\quad\quad {\bm{\beta}}\succeq 0, {\bm{\omega}}\succeq 0, {\bm{\sigma}}\succeq 0, \nu\geq0. \notag
\end{equation}
\begin{proposition}
Given a set of dual variables \({\bm {\beta}}, {\bm {\omega}}, {\bm {\sigma}}, {\nu}\), the optimal solution to the Lagrangian dual problem (16) can be determined as follows. \label{proposition1}
The optimal primal variables \(t_{u,S}^{ul}\), \(t_{u}^{ul}\), and \(t_u^{dl}\), are given by
\begin{equation}
\hat t^{ul}_{u,S}=\dfrac{\hat D^I_{u,S}}{\dfrac{W^{ul}_u}{\ln 2}[W_0(\dfrac{1}{e}(\dfrac{\beta_u|h_u|^2}{N_0}-1))+1]}, \forall u \in {\cal U}.\label{equ:share time}
\end{equation}
\begin{equation}
\label{equ:uplink time}
\hat t^{ul}_{u}=\dfrac{D^I_u-D^I_S-\hat D^L_u}{\dfrac{W^{ul}_u}{\ln 2}[W_0(\dfrac{1}{e}(\dfrac{\beta_u|h_u|^2}{N_0}-1))+1]}, \forall u \in {\cal U}.
\end{equation}
\begin{equation}
\label{equ:downlink time}
\hat t^{dl}_u=\dfrac{a_0(D^I_u-D^I_S-\hat D^L_u)}{\dfrac{W^{dl}_u}{a_0\ln 2}[W_0(\dfrac{1}{e}(\dfrac{(\rho^{dl}_u+\sigma_u)|g_u|^2}{\nu N_0}-1))+1]}, \forall u \in {\cal U}.
\end{equation}
where $W_0(x)$ is the principal branch of the Lambert $W$ function, defined as the solution of $W_0(x)e^{W_0(x)}=x$ \cite{8234686}, and $e$ is the base of the natural logarithm;
the optimal auxiliary variable \(t^{dl}\) is given by:
\begin{equation}
\label{equ:auxiliary tdl}
\hat t^{dl}=\left\{
\begin{aligned}
0, \quad&\sum_{u\in{\cal U}}\beta_u-\sum_{u\in{\cal U}}\sigma_u >0,\\
T_{max}-t^{dl}_S, \quad&otherwise;
\end{aligned}
\right.
\end{equation}
and the optimal local computing data size is given by
\begin{equation}
\label{equ:local bits}
\begin{split}
&\hat D^L_u=\\
&\min\Bigg\{T_{max}\sqrt{\bigg[\dfrac{N_0\ln 2}{3\kappa_0{\lambda_0}^3}(\dfrac{2^{\tfrac{\hat r^{ul}_{u}}{W^{ul}_u}}}{W^{ul}_u|h_u|^2}+\dfrac{\nu a_0\cdot 2^{\tfrac{a_0 \hat r^{dl}_{u}}{W^{dl}_u}}}{W^{dl}_u|g_u|^2})-\dfrac{\omega_u}{3\kappa_0 \lambda_0^2}\bigg]^+}\\
&, D^I_u-D^I_S\Bigg\}, \forall u \in {\cal U}, \notag
\end{split}
\end{equation}
where $\hat r^{ul}_{u}=\frac{W^{ul}_u}{\ln 2}[W_0(\frac{1}{e}(\frac{\beta_u|h_u|^2}{N_0}-1))+1]$ and $\hat r^{dl}_{u}=\frac{W^{dl}_u}{a_0\ln 2}[W_0(\frac{1}{e}(\frac{(\rho^{dl}_u+\sigma_u)|g_u|^2}{\nu N_0}-1))+1]$, $\forall u \in {\cal U}$.
\end{proposition}
\begin{IEEEproof}
Please refer to Appendix A.
\end{IEEEproof}
In fact, on one hand, \(\hat r_u^{ul}\) and \(\hat r_u^{dl}\) can be interpreted as the optimal transmission rates for the shared/individual data offloading and the individual data downloading, respectively, given the dual variables. On the other hand, for each user $u$, the optimal transmission rate for the shared data is identical to that for the individual data over the uplink, given that the uplink channel gains remain unchanged during the whole offloading phase.
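The rate expressions above can be evaluated with a few Newton iterations for the principal branch $W_0$. The sketch below assumes the argument of $W_0$ is nonnegative; `scipy.special.lambertw` could equally be used:

```python
import math

def lambert_w0(x):
    """Principal branch W0(x) for x >= 0 via Newton's method on w*e^w = x.
    Self-contained sketch; log(1+x) is a standard upper starting guess."""
    if x == 0.0:
        return 0.0
    w = math.log(1.0 + x)
    for _ in range(50):
        ew = math.exp(w)
        w -= (w * ew - x) / (ew * (w + 1.0))
    return w

def optimal_uplink_rate(W_u, beta_u, h2, N0):
    # r_hat = (W_u / ln 2) * [W0(((beta_u*|h_u|^2/N0) - 1)/e) + 1],
    # c.f. (17)-(18); assumes the argument of W0 is nonnegative
    arg = (beta_u * h2 / N0 - 1.0) / math.e
    return (W_u / math.log(2)) * (lambert_w0(arg) + 1.0)
```

A quick sanity check is $W_0(e)=1$, since $1\cdot e^1=e$.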
Next, to obtain the optimal offloading bits of the shared data for each user, i.e., \(\hat D_{u,S}^I\), we need the following lemma.
\begin{lemma}
The optimal offloaded shared data for user $u$ is expressed as,
\begin{equation}
\label{equ:shared bits}
\hat D^I_{u,S}=\left\{
\begin{aligned}
D^I_S, \quad&u=\hat{u}:=\mathop{\arg\min}\limits_{1 \leq v \leq U} \Delta_v,\\
0, \quad&otherwise,
\end{aligned}
\right.
\end{equation}
where $\Delta_u=\frac{f(\hat r^{ul}_{u,S})}{\hat r^{ul}_{u,S}|h_u|^2}+\frac{\beta_u}{\hat r^{ul}_{u,S}}, \forall u \in {\cal U}$.
\end{lemma}
\begin{IEEEproof}
Please refer to Appendix B.
\end{IEEEproof}
Notably, it is easily observed from Lemma 1 that the shared data is optimally offloaded by one specific user instead of multiple ones.
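Lemma 1's selection rule is straightforward to sketch numerically; the parameter values below are illustrative placeholders, not the paper's:

```python
N0, W_u = 1e-13, 1e6      # illustrative placeholders

def delta(r, h2, beta):
    # Delta_u = f(r)/(r*|h_u|^2) + beta_u/r with f(x) = N0*(2**(x/W_u) - 1)
    f_r = N0 * (2 ** (r / W_u) - 1)
    return f_r / (r * h2) + beta / r

def pick_shared_user(rates, h2s, betas):
    # Lemma 1: the whole shared chunk D_S^I goes to the arg-min user
    deltas = [delta(r, h2, b) for r, h2, b in zip(rates, h2s, betas)]
    return min(range(len(deltas)), key=deltas.__getitem__)
```

With identical rates and dual prices, the user with the strongest uplink channel gain is selected, as intuition suggests.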
Based on Proposition 1, the dual problem can be iteratively solved by the ellipsoid method (with constraints), the details of which can be found in \cite{EE364b}. The algorithm for solving $\mathrm{(P1^\prime)}$ is summarized in Table \ref{table: Algorithm I}.
\small\begin{table}[htp]
\begin{center}
\caption{Algorithm I for solving \(\mathrm{(P1^\prime)}\)} \label{table: Algorithm I}
\vspace{-0.75em}
\hrule
\vspace{0.50em}
\begin{algorithmic}[1]
\REQUIRE \((\bm{\beta^{(0)}}, \bm {\omega^{(0)}}, \bm {\sigma^{(0)}}, \nu^{(0)})\)
\REPEAT
\STATE Solve \eqref{eq:Lagrangian dual function} given \((\bm {\beta^{(i)}},\bm {\omega^{(i)}},\bm {\sigma^{(i)}}, \nu^{(i)})\) according to Proposition~\ref{proposition1} and obtain \(\{\hat t_{u,S}^{ul}, \hat t_{u}^{ul}, \hat t_u^{dl}, \hat t_{dl}, \hat D_u^L, \hat D_{u,S}^I\}\);
\STATE Update the subgradients of \(\bm{\beta},\bm{ \omega},\bm {\sigma}, \nu\), respectively, i.e., \(t^{ul}_{u,S}+t^{ul}_u-T_{max}+\displaystyle\max_{u \in {\cal U}}\{t^{dl}_{u,S}\}+t^{dl}\), \(\lambda_0D^L_u-t^C_{u,L}f_{u,max}\), \(t^{dl}_u-t^{dl}\), \(\sum_{u\in{\cal U}}\frac{t^{dl}_u}{|g_u|^2}f(\dfrac{a_0(D^I_u-D^I_S-D^L_u)}{t^{dl}_u})-E_{max}\), in accordance with the ellipsoid method \cite{EE364b};
\UNTIL the predefined accuracy threshold is satisfied.
\ENSURE The optimal dual variables to the dual problem \eqref{eq:Dual Problem} \((\bm{\beta^\ast},\bm{\omega^\ast},\bm{\sigma^\ast}, \nu^\ast)\)
\STATE Solve \eqref{eq:Lagrangian dual function} again with \((\bm{\beta^\ast},\bm{\omega^\ast},\bm{\sigma^\ast},\nu^\ast)\)
\ENSURE \(\{t_{u,S}^{ul\ast}, t_{u}^{ul\ast}, t_u^{dl\ast}, t^{dl\ast}, D_u^{L\ast}, D_{u,S}^{\ast}\}\)
\end{algorithmic}
\vspace{0.50em}
\hrule
\end{center}
\vspace{-1.0em}
\end{table}
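For reference, the core ellipsoid update behind Algorithm I can be sketched as follows. This is the generic textbook (unconstrained) version \cite{EE364b} for minimizing a convex function, which is what maximizing the concave dual via subgradients amounts to after a sign flip; it is a sketch, not the paper's exact implementation:

```python
import math

def ellipsoid_min(f, grad, x0, P0, iters=200):
    """Basic ellipsoid method: x is the ellipsoid center, P its shape matrix
    (requires dimension n >= 2). grad returns a (sub)gradient of convex f."""
    n = len(x0)
    x, P = list(x0), [row[:] for row in P0]
    best_x, best_f = list(x), f(x)
    for _ in range(iters):
        g = grad(x)
        Pg = [sum(P[i][j] * g[j] for j in range(n)) for i in range(n)]
        s = sum(g[i] * Pg[i] for i in range(n))
        if s <= 1e-30:                      # (sub)gradient vanished: stop
            break
        gt = [v / math.sqrt(s) for v in Pg]  # normalized cut direction P*g
        x = [x[i] - gt[i] / (n + 1) for i in range(n)]
        c = n * n / (n * n - 1.0)
        P = [[c * (P[i][j] - 2.0 / (n + 1) * gt[i] * gt[j]) for j in range(n)]
             for i in range(n)]
        if f(x) < best_f:
            best_x, best_f = list(x), f(x)
    return best_x, best_f
```

The constrained variant used in Algorithm I cuts on the violated dual-feasibility constraint instead of the objective when the current iterate is infeasible.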
\section{Numerical Results}
\begin{normalsize}
In this section, numerical results of the proposed algorithm are presented together with those of baseline algorithms. Apart from the local-computing-only scheme, in which users execute all data bits locally, three other offloading schemes serve as baselines: 1) {\it Offloading without considering the shared data}: the collaborative properties are ignored, and every user makes its offloading decision without coordination with other users; 2) {\it Full offloading only}: the shared data is taken into consideration, but the whole input data of every user is forced to be offloaded to the edge computing node, excluding the local computing capability from the computation tasks; 3) {\it Offloading with equal time lengths}: taking the correlated data into consideration, the data offloading and downloading are performed with equal time lengths for each user, with optimal solutions obtained through CVX.
In the simulation, the available bandwidth is assumed to be $W^{ul}=W^{dl}=10$\,MHz, the maximum downlink transmit power $P_{max}=1$\,W, and the input data size $D_u^I=10$\,kbits for all users. The power spectral density of the additive white Gaussian noise (AWGN) is $-169$\,dBm/Hz. The mobile energy expenditure per second in the downlink is $\rho^{dl}_u=0.625$\,J/s \cite{7906521}, and the maximum local computing capability is $f_{u,max}=1$\,GHz. Besides, $\lambda_0=1\times 10^3$\,cycle/bit, $a_0=1$, and $\kappa_0=10^{-26}$. The pathloss model is $PL=128.1+37.6\log_{10}(d_u)$, where $d_u$ represents the distance between user $u$ and the edge computing node in kilometers.
\end{normalsize}
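For reference, the stated pathloss model translates into a linear channel gain and total noise power as follows (a sketch; the normalisation of $|h_u|^2$ against the noise is an assumption of this illustration):

```python
import numpy as np

def channel_gain(d_km, noise_dbm_hz=-169.0, bandwidth_hz=10e6):
    """Linear channel gain from the pathloss model PL = 128.1 + 37.6 log10(d),
    d in km, plus the total AWGN power over the given bandwidth."""
    pl_db = 128.1 + 37.6 * np.log10(d_km)
    gain = 10.0 ** (-pl_db / 10.0)                      # linear power gain
    noise_w = 10.0 ** ((noise_dbm_hz - 30.0) / 10.0) * bandwidth_hz
    return gain, noise_w
```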
\begin{figure}
\centering
\includegraphics[scale=0.37]{differ_Tmax_new.eps}
\caption{\textbf{Energy consumption versus different latency constraints}}
\label{fig:energy_latency}
\end{figure}
\begin{normalsize}
Fig.~2 depicts how the energy consumption changes with the latency constraint. For all listed offloading algorithms, the energy consumption decreases as the latency requirement is relaxed, and the proposed offloading scheme achieves the lowest energy consumption: the best energy saving is obtained only through the joint participation of local computing and shared-data coordination. Moreover, although the equal-time-length offloading has lower complexity than the proposed algorithm, it cannot compete with it in terms of energy saving. The reason is that forcing the offloading time durations to be equal makes the shared data be transmitted by all users simultaneously, whereas, as concluded earlier, the most energy-efficient strategy is to let the correlated bits be transmitted by one specific user.
The energy consumed for computing one data bit increases sharply as the latency constraint diminishes. For the local-computing-only scheme, when the latency constraint is 0.01 second, the energy needed to finish the computation tasks reaches 1000 mJ, nearly 100 times more than that of all the offloading algorithms; it then drops to 10 mJ when the latency constraint is relaxed to 0.1 second. The curve for local computing only is therefore omitted from Fig.~2, so that the comparison among the offloading schemes remains clear.
\end{normalsize}
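The rapid growth of the local-computing energy as the deadline shrinks can be reproduced with the common $\kappa_0 f^2$ energy-per-cycle model (an assumption of this sketch, consistent with the paper's parameters $\kappa_0$ and $\lambda_0$):

```python
def local_energy(T, D_bits=1e4, lam0=1e3, kappa0=1e-26):
    """Energy (J) to process D_bits locally within deadline T, assuming
    energy-per-cycle kappa0*f^2 and CPU frequency f = lam0*D_bits/T."""
    cycles = lam0 * D_bits
    f = cycles / T          # frequency needed to just meet the deadline
    return kappa0 * cycles * f ** 2
```

Under this model the per-user energy grows as $1/T^2$: a factor of 100 between deadlines of 0.1\,s and 0.01\,s, matching the ratio quoted above (absolute totals scale with the number of users).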
\begin{normalsize}
Fig.~3 demonstrates how the energy consumption changes with the percentage of shared data. As long as the shared data is taken into consideration in the offloading decisions, the overall energy consumption decreases as the proportion of shared data grows. Compared to the scheme that ignores the shared data, the proposed offloading scheme saves more energy as the percentage of shared data increases; the same trend applies to the full-offloading-only algorithm, since it also accounts for the shared data in its offloading decisions. Note that the energy consumption of full offloading only does not always fall below that of offloading without considering shared data. The reason is that, for a given latency constraint, the importance of the local computing capability in saving the mobile users' energy diminishes as the share of common data increases: most of the data is offloaded to the edge node and few input bits remain for local computing. Consequently, the energy consumption of the full-offloading-only scheme approaches that of the proposed algorithm as the percentage of shared data increases; a similar trend applies to the equal-time-length offloading.
\end{normalsize}
\begin{figure}
\centering
\includegraphics[scale=0.388]{differ_perc_new.eps}
\caption{\textbf{Energy consumption versus different percentages of shared data}}
\label{fig:energy_shared}
\end{figure}
\section{Conclusions}
\begin{normalsize}
In this paper, a multi-user fog computing system was considered, in which multiple single-antenna mobile users running applications featuring shared data can partially offload their individual computation tasks to a nearby single-antenna cloudlet and then download the results from it. The mobile users' energy consumption minimization problem, subject to the total latency, the total downlink transmission energy and the local computing constraints, was formulated as a convex problem whose optimal solution is obtained by the classical Lagrangian duality method. Based upon the semi-closed-form solution, it was proved that the shared data is optimally transmitted by only one of the mobile users instead of multiple ones collaboratively. The proposed joint computation offloading and communications resource allocation was verified by simulations against other baseline algorithms that ignore the shared data property or the mobile users' own computing capabilities.
\end{normalsize}
\ifCLASSOPTIONcaptionsoff
\newpage
\fi
\begin{appendices}
\section{}\label{appendix:proof of rank-one for W_p^prime}
In order to find the optimal solutions of the primary problem, we need to examine the related partial derivatives $\frac{\partial L}{\partial D^L_u}, \frac{\partial L}{\partial D^I_{u,S}}, \frac{\partial L}{\partial t^{ul}_{u,S}}, \frac{\partial L}{\partial t^{ul}_{u}}, \frac{\partial L}{\partial t^{dl}_{u}}, \frac{\partial L}{\partial t^{dl}}, \forall u \in {\cal U}$. After obtaining these partial derivatives, the KKT conditions can be applied to find the optimal solutions. For example, let $\frac{\partial L}{\partial D^L_u}$ and $\frac{\partial L}{\partial D^I_{u,S}}$ be equal to 0. The inverse function of $y=f(x)-xf'(x)$ for $x>0$ is given by $x=\tfrac{W^{ul}_u}{ln2}[W_0(-\tfrac{y}{eN_0}-\tfrac{1}{e})+1]$. Then it follows that $f(\hat r^{ul}_{u,S})-\hat r^{ul}_{u,S}f'(\hat r^{ul}_{u,S})=f(\hat r^{ul}_{u})-\hat r^{ul}_{u}f'(\hat r^{ul}_{u})=-\beta_u|h_u|^2$, and the optimal uplink transmission rate of the shared data ${\hat r^{ul}_{u,S}}$ and that of the exclusively offloaded data ${\hat r^{ul}_u}$ are thus derived. Then the expressions of the optimal primary variables are readily obtained as shown in \eqref{equ:share time}, \eqref{equ:uplink time}, \eqref{equ:downlink time}, \eqref{equ:auxiliary tdl}, \eqref{equ:local bits}, and \eqref{equ:shared bits}.
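The Lambert-W inversion above can be verified numerically. The following sketch assumes the Shannon-type power-rate function $f(r)=N_0(2^{r/W^{ul}}-1)$ (this normalisation is an assumption of the illustration, chosen to be consistent with the stated inverse); a self-contained Newton iteration stands in for a library Lambert-W routine:

```python
import numpy as np

N0, W_UL = 1.0e-13, 1.0e7   # illustrative noise power (W) and bandwidth (Hz)

def f(r):
    """Transmit power required to sustain rate r (inverted Shannon capacity)."""
    return N0 * (2.0 ** (r / W_UL) - 1.0)

def f_prime(r):
    return N0 * np.log(2.0) / W_UL * 2.0 ** (r / W_UL)

def lambert_w0(x, tol=1e-12):
    """Principal branch of the Lambert W function (x > -1/e), via Newton."""
    w = np.log1p(max(x, -0.36))  # rough starting point
    for _ in range(100):
        ew = np.exp(w)
        step = (w * ew - x) / (ew * (w + 1.0))
        w -= step
        if abs(step) < tol:
            break
    return w

def rate_from_y(y):
    """Inverse of y = f(x) - x f'(x) for x > 0, as stated in the appendix."""
    return W_UL / np.log(2.0) * (lambert_w0(-y / (np.e * N0) - 1.0 / np.e) + 1.0)
```

Plugging $y=f(r)-r f'(r)$ back into `rate_from_y` recovers $r$, confirming the inverse-function expression used to derive $\hat r^{ul}_{u,S}$ and $\hat r^{ul}_u$.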
\section{}\label{appendix:proof of shared data offloading}
To determine how the shared input data offloading $\hat D^I_{u,S}$ is distributed among users, we need to examine the partial Lagrangian with respect to $D^I_{u,S}$ and $t_{u,S}^{ul}$. Replacing the shared data offloading time \(t_{u,S}^{ul}\) with \(\frac{D_{u,S}^I}{\hat r_{u,S}^{ul}}\), the partial Lagrangian is expressed as
\begin{equation}
\label{eq:partial Lagrangian}
\begin{split}
&\smash{\displaystyle\min_{\{D^I_{u,S}\}}} \overline{L}=\sum_{u \in {\cal U}}\Big[\dfrac{t^{ul}_{u,S}}{|h_u|^2} f\Big(\dfrac{D^I_{u,S}}{t^{ul}_{u,S}}\Big)+\beta_u t^{ul}_{u,S}\Big]\\
&=\sum_{u \in {\cal U}}\Big[\dfrac{D^I_{u,S}}{\hat r^{ul}_{u,S}|h_u|^2}f(\hat r^{ul}_{u,S})+\beta_u\dfrac{D^I_{u,S}}{\hat r^{ul}_{u,S}}\Big]\\
&=\sum_{u \in {\cal U}}\Delta_u\cdot D^I_{u,S}
\end{split} \tag{24a}
\end{equation}
\quad\quad s.t.
\begin{equation}
\sum_{u \in {\cal U}}D^I_{u,S}=D^I_S, D^I_{u,S}\geq 0, \forall u \in {\cal U}, \tag{24b}
\end{equation}
where we define $\Delta_u=\dfrac{f(\hat r^{ul}_{u,S})}{\hat r^{ul}_{u,S}|h_u|^2}+\dfrac{\beta_u}{\hat r^{ul}_{u,S}}$ as a constant given the dual variable \(\beta_u\)'s. As a result, the optimal solution to the linear programming (LP) (24) is easily obtained as shown in \eqref{equ:shared bits}.
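Because (24) is linear in $D^I_{u,S}$ with a single coupling constraint, its optimum places all shared bits on the user with the smallest coefficient $\Delta_u$; a minimal sketch:

```python
import numpy as np

def assign_shared_data(delta, D_S):
    """Optimal solution of the LP: minimise sum_u delta_u * D_u subject to
    sum_u D_u = D_S and D_u >= 0 -- put everything on the cheapest user."""
    D = np.zeros_like(np.asarray(delta, dtype=float))
    D[np.argmin(delta)] = D_S
    return D
```

This reproduces the conclusion that the shared data is optimally transmitted by one specific user rather than split among several.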
\end{appendices}
\begin{IEEEbiography}[{\includegraphics[width=1in,height=1.25in,clip,keepaspectratio]{picture}}]{John Doe}
\blindtext
\end{IEEEbiography}
\bibliographystyle{ieeetr}
\section{Introduction}
\label{intro}
The international Future Circular Collider (FCC) study aims to design p-p, $\rm e^{+}e^{-}$ and e-p colliders to be built in a new 100~km tunnel in the Geneva region.
The $\rm e^{+}e^{-}$ collider (FCC-ee) has a centre of mass energy range between 91.2 and 365~GeV with instantaneous luminosities as high as $\rm 2.3\times10^{36}cm^{-2}s^{-1}$ and $\rm 1.55\times10^{34}cm^{-2}s^{-1}$, respectively~\cite{FCCCDRLept}. The design of the interaction region is crucial to reach such unprecedented energies and luminosities.
The main characteristics of the interaction region optics design
are determined by the crab-waist scheme with a local chromatic correction system and a horizontal crossing angle of
30~mrad at the interaction point.
A description of the main challenges of the interaction region and machine detector interface (MDI) design can be found in Ref.~\cite{ref:mdi_epj}.
The baseline optics for the FCC-ee double-ring collider is described in Ref.~\cite{ref:prab-oide}.
The total synchrotron radiation power is limited by design at 100\,MW for the two beams and consequently the stored current per beam
varies from 1.4\,A at Z to 5.4\,mA at the $\mathrm{t \bar{t}}$ data taking stage.
Following the LEP-2 experience where the highest local critical energy was 72\,keV for photons emitted at 260\,m from the IP~\cite{gvh}
the FCC-ee optics design maintains critical energies from bending magnets below 100\,keV
starting at 100\,m from the interaction point; the critical energy from the first bend after the interaction point
is higher, at 691\,keV for the $\mathrm{t \bar{t}}$ threshold.
An asymmetric optics has been designed to meet these critical energy goals in the interaction region.
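The critical-energy figures quoted above follow the standard bending-magnet relation $\varepsilon_c\,[\mathrm{keV}] \approx 2.218\, E^3\,[\mathrm{GeV}]/\rho\,[\mathrm{m}]$; a one-line check (the bending radius used here is a back-of-the-envelope illustration, not a lattice parameter):

```python
def critical_energy_kev(E_gev, rho_m):
    """Critical photon energy of bending-magnet synchrotron radiation,
    in keV, from beam energy (GeV) and bending radius (m)."""
    return 2.218 * E_gev ** 3 / rho_m
```

For example, a 691\,keV critical energy at the 182.5\,GeV $\mathrm{t\bar t}$ beam energy corresponds to a bending radius of roughly 19.5\,km.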
Synchrotron radiation mask tips
are placed in the horizontal plane just in front of the first final focus quadrupole at 2.1\,m from the interaction point.
The free length between the interaction point and the first final focus quadrupole is 2.2\,m, which is inside the detector.
Figure~\ref{GEANT4_IR} shows the {\sc geant4} model with the shielding and the luminometer that was used for background simulation studies~\cite{FCCCDRLept}.
\begin{figure}[ht!]
\centering\includegraphics[width=0.5\textwidth]{G4-IR_CDR.png}
\caption{Sketch of the FCC-ee interaction region. (a) Detail of the shielding; (b) luminometer; (c) HOM absorber; (d) thick tungsten shielding\,\cite{FCCCDRLept}.}
\label{GEANT4_IR}
\end{figure}
The compactness of the MDI design, determined by the space available to host all the necessary components like the first final focus quadrupole and the anti-solenoids being placed inside the detector, poses interesting technical challenges.
Part of the challenge is the development of modern flexible software tools
for the beam optics model, including the beam induced background scattering processes with an interface with the experiments.\par
This essay is organised as follows.
We give an overview of the existing accelerator codes related to the interaction region and MDI design in Section~\ref{sec:2}; in Section~\ref{sec:fccsw} we describe the experiment software and the current strategy to interface the accelerator and experiment codes; in Section~\ref{sec:geometry} we discuss the geometrical description of the relevant elements. Finally, in Section~\ref{sec:concl} we summarise the status and the challenges ahead.
\section{Review of the accelerator codes and MDI considerations}
\label{sec:2}
From the beginning of the studies for FCC, it was realised that the design of the interaction regions is particularly challenging and it is necessary that the requirements of the machine and experiments should be evaluated and optimised concurrently.
\subsection{Lattice codes}
\label{sec:lattice}
The most popular code used for accelerator design at CERN and several other laboratories is the Methodical Accelerator Design program ({\sc mad}).
The {\sc mad} program has been developed, maintained and upgraded for more than 3 decades. The main version used at LEP was {\sc mad8}\,\cite{Grote:1991zp}, written in FORTRAN V. Since then, it was largely rewritten as {\sc mad-x} using a combination of FORTRAN95 and C, and later also C++ for selected modules. The activity started in the new millennium\,\cite{Grote:2003ct}, coinciding with the end of LEP operation and the shift of priorities motivated by developments for the LHC and its injectors, where synchrotron radiation only plays a minor role.
For the FCC-ee design, we also profit from the more recent experience and code developments for lepton colliders, by working with the Strategic Accelerator Design ({\sc sad})\,\cite{sad} program used for SuperKEKB. {\sc sad}, like {\sc mad-x}, is an accelerator lattice design code, developed independently at KEK and therefore it provides a good opportunity for comparison and cross-checking.
Both {\sc sad} and {\sc mad-x} read the machine description from formatted text files, using their own specific commands to specify magnet types, strengths and position. These input files are typically referred to as sequence files.
A translator to convert the {\sc sad} lattices into {\sc mad-x} format exists.
The {\sc mad-x} lattice input text format allows the specification of aperture information like beam pipe shapes, sizes, and more recently also information on materials, provided as formatted comments\,\cite{Deniau:2019nca}.
Even for large machines like FCC with the order of 10\,000 magnets, the size of the sequence files is always manageable, often less than a megabyte.
The basis for the MDI work is the lattice description in {\sc mad-x} format, typically starting from {\sc sad} translated to {\sc mad-x} format, with additional information on aperture and beam pipe material.
The output from {\sc mad-x} provides extra information per element, such as beam position, beam size, TWISS parameters (beta functions, phase advance, etc.), $6\times 6$ element transfer matrices and optionally $6\times 6\times 6$ second order transfer maps. The {\sc mad-x} output format is known as the TFS format\,\cite{Grote:1991zp}, a human readable tabular format with extra general header information specifying parameters that apply to the whole machine, like nominal beam energy and beam particle type.
Even if we select all options and write numbers with 17 digits to avoid any loss in precision, the file sizes remain below 100 megabytes.
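For illustration, a TFS table of this kind can be consumed with a few lines of Python (a minimal sketch; real files carry many more header entries and columns):

```python
def read_tfs(path):
    """Minimal reader for the MAD-X TFS table format: '@' header lines,
    a '*' column-name line, a '$' column-type line, then data rows."""
    header, names, rows = {}, [], []
    with open(path) as fh:
        for line in fh:
            tok = line.split()
            if not tok:
                continue
            if tok[0] == '@':                     # global parameter
                header[tok[1]] = ' '.join(tok[3:]).strip('"')
            elif tok[0] == '*':                   # column names
                names = tok[1:]
            elif tok[0] == '$':                   # column types, skipped here
                continue
            else:                                 # one table row per element
                rows.append(dict(zip(names, tok)))
    return header, rows
```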
\begin{figure}[ht!]
\centering\includegraphics[width=0.9\textwidth]{FCC_MADX_MDISim_ROOT_GEANT4.pdf}
\caption{Illustration of the FCC-hh interaction region using {\sc root}, and interfacing the {\sc mad-x} generated machine layout with
{\sc MDISim} to {\sc geant4} to track the protons through the interaction region and generate synchrotron radiation photons~\cite{Collamati_2017,Collamati:2017hlr}.}
\label{FCC_MADX_MDISim_ROOT_GEANT4}
\end{figure}
\subsection{MDI considerations}
\label{sec:mdi}
The software used in MDI studies must be able to simulate in detail what happens in the interaction region.
The beam pipe aperture should be sufficiently large to allow the beam particles to pass without any major losses for all modes of operation, including injection, and also be compatible with possible failure scenarios.
Heating by HOM (higher-order-mode) losses, induced by the electromagnetic fields of the intense bunched beams, must be minimised.
Even under optimal conditions, the intense particle beams stored in the FCC will always result in some level of particle losses and synchrotron radiation, the latter is considered by the experiments as unwanted beam induced backgrounds. The modeling of the beam induced background effects has often been performed with custom Monte Carlo codes and relying on input and geometry data in specific formats which have to be written or modified by hand.
These days, computers are sufficiently powerful and have enough memory and storage capacity
that it could be possible to build a supercode that combines all that is needed in a single program.
Another possibility would be to make all codes available from a unified code library.
This was already attempted in the nineties\,\cite{Iselin:1996jz} for accelerator codes, but with rather limited adoption and lack of support.
The choices made can also be influenced by sociological perspectives and personal preferences. Working with large programs which have grown historically can be intimidating and may appear less rewarding than creating well defined smaller programs and code pieces associated with the names of a few authors.
For FCC (ee and hh), we have made an effort in the development of {\sc MDISim}~\cite{ref:mdisim} to use and combine existing codes as much as possible using a light, flexible interface, minimising the need for hand coded geometries, and privileging open source and well supported codes and exchange formats. We use {\sc MDISim} to read the TFS files generated by {\sc mad-x} and automatically translate them into an exchange format, directly readable by {\sc geant4}~\cite{GEANT4} and {\sc root}~\cite{ROOT}. The format used is GDML\,\cite{GDML} for the geometry information, complemented by magnet strengths, initial beam positions and directions provided by automatically generated human readable text files. {\sc geant4} is used to track the particles through the interaction region with generation of secondary particles and simulation of interactions in materials. The ROOT Event Visualization Environment is used for the display.
An example is shown in Figure~\ref{FCC_MADX_MDISim_ROOT_GEANT4} \,\cite{Collamati_2017}.
Further information on the geometry and material outside the beam pipe like vacuum equipment, shielding, magnet material or the experiment detectors can then be added on the GDML level.
Many commercial and open software codes are being developed that can work with GDML and interface with other flexible geometry descriptions including CAD formats\,\cite{instep,salome,cadmesh,blender,step,vtcad,sw2gdml,cadmc,pyg4ometry}.
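As an illustration of the GDML exchange format, a toy geometry (a copper tube placed in a box world; all names and dimensions are invented for this sketch) can be emitted with the Python standard library:

```python
import xml.etree.ElementTree as ET

def toy_gdml():
    """Build a minimal GDML document: one tube ('pipe') in a box world."""
    gdml = ET.Element('gdml')
    solids = ET.SubElement(gdml, 'solids')
    ET.SubElement(solids, 'box', name='world_s', x='1000', y='1000', z='1000',
                  lunit='mm')
    ET.SubElement(solids, 'tube', name='pipe_s', rmin='34', rmax='35', z='900',
                  deltaphi='360', aunit='deg', lunit='mm')
    structure = ET.SubElement(gdml, 'structure')
    pipe = ET.SubElement(structure, 'volume', name='pipe_l')
    ET.SubElement(pipe, 'materialref', ref='G4_Cu')
    ET.SubElement(pipe, 'solidref', ref='pipe_s')
    world = ET.SubElement(structure, 'volume', name='world_l')
    ET.SubElement(world, 'materialref', ref='G4_Galactic')
    ET.SubElement(world, 'solidref', ref='world_s')
    ET.SubElement(ET.SubElement(world, 'physvol'), 'volumeref', ref='pipe_l')
    setup = ET.SubElement(gdml, 'setup', name='Default', version='1.0')
    ET.SubElement(setup, 'world', ref='world_l')
    return ET.tostring(gdml, encoding='unicode')
```

Magnet strengths and beam initial conditions are then supplied alongside, as noted above, in plain text files.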
\subsection{Beam induced backgrounds}
\label{sec:beambkg}
To simulate and minimise the impact of all relevant processes that can result in the loss of beam particles or secondary particles generated by the beams in the experiment detectors is a complex task. Beam-gas, Touschek and thermal photon scattering as well as synchrotron radiation will always be present, even if beams are not colliding, and they require the whole ring to be studied.
For FCC-ee, the minimisation of synchrotron radiation effects is of primary importance and has strongly influenced the basic design and layout choices.
The collisions of the beams in the interaction regions will generate additional losses
and produce synchrotron radiation by deflection in the electromagnetic fields of the opposing beam (referred to as beamstrahlung).
The {\sc MDISim} code is capable of efficiently generating the accelerator and beam pipe geometry for shower simulations for the whole ring. The {\sc geant4} toolkit\,\cite{GEANT4} has code for all major particle scattering processes including the generation of synchrotron radiation\,\cite{Burkhardt:2007zza}, as well as tracking in magnetic fields, and is well suited for detector simulations. {\sc geant4} can be considered as a candidate for a supercode that simulates both machine and experiment, and has been used as the basis for the combined codes {\sc g4beamline} and {\sc bdsim}\,\cite{G4beamline,BDSIM}.
For benchmarking purposes, we use {\sc geant4} directly to track a few particles over several turns in small machines, and have been contributing to developments to improve the tracking precision in {\sc geant4}. For large machines like FCC this is not realistic at present:
we would need to consider some $10^{11}$ particles per bunch, circulating many times in a 100\,km ring, to be able to determine the effects of radiation and the loss of a tiny fraction of the circulating particles in the interaction regions.
At present, we restrict the {\sc geant4} based simulations to roughly a kilometer around the interaction region and work with
other, more dedicated codes to complete these studies when needed.
To gain speed, it is easier to guarantee a high level of numerical precision in accelerator codes that track deviations from the design path rather than tracking absolute positions.
The synchrotron radiation is emitted in excellent approximation in beam direction and not deflected by magnet fields. With scattering and reflection processes included, we find that only the synchrotron radiation generated in a limited range (some hundred meters) around the interaction is relevant as a source of detector backgrounds, and that more general effects like non-Gaussian tail generation in collisions can be taken into account by a proper choice of the beam distribution. Details of the {\sc geant4} based synchrotron radiation background simulations are described
in Ref.~\cite{MarianThesis}. We also make comparisons with the more dedicated {\sc sync\_bkg}~\cite{ref:sync} and {\sc synrad+}~\cite{ref:synrad} codes.
Non-Gaussian tails were observed at LEP\,\cite{Burkhardt:1999xb}. They can be generated by scattering processes, and be enhanced by non-linearities in the beam-beam interaction or strong sextupoles in combination with machine imperfections. A popular code, much used at LHC to study the effect of non-linearities and imperfections in multiturn tracking is {\sc SixTrack}\,\cite{SixTrack}.
It would have to be adapted for $\mathrm{e^+e^-}$ multiturn tracking, for example to account for the synchrotron radiation damping. A new tracking tool named {\sc Xtrack}, part of the Xsuite project\,\cite{xsuite} is being considered for collimation studies at FCC-ee.
The {\sc guineapig}\,\cite{guineapig} code is used as generator of beamstrahlung and radiative Bhabha scattering in the interaction regions. We also use the {\sc bbbrem}~\cite{Kleiss:1994wq} code as generator for the simulation of radiative Bhabha scattering. This process is characterised by an energy loss of one of the colliding particles and very small scattering angles, such that the scattered particles remain within the beam pipe.
The process has been integrated into {\sc sad} and was in particular used for simulations of FCC-ee at the Z energy, where the beam intensity and luminosity are highest.
Particle losses by beam-gas and thermal photon scattering in multiple turns around the ring are taken into account using {\sc sad, mad-x}, or {\sc ptc} for the transport element by element, performing aperture checks at element boundaries. The results\,\cite{ciarma} were compared
with the more detailed {\sc geant4} simulations performed around the interaction regions~\cite{Boscolo:2018ltl} and they were found to be in good agreement.
The vacuum pressure profile in the MDI area, as well as in the whole ring, can be used as input for the beam-gas scattering simulations rather than assuming a constant pressure profile inside the beam pipe.
The local pressure profile can be evaluated by means of the Monte Carlo code {\sc molflow+}\,\cite{molflow}.
{\sc molflow+} provides detailed 3D calculations of vacuum properties in the molecular flow regime, such as pressure profiles, effective pumping speeds, and adsorption distributions which are of interest mainly for vacuum engineers.
It also allows the simulation of gas propagation in CAD imported geometries and simulates pumpdown processes.
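The pressure profile enters the beam-gas estimate through the line density of residual-gas molecules; a sketch of the resulting scattering probability per passage (the cross-section and profile values are placeholders supplied by the caller, not measured FCC-ee numbers):

```python
import numpy as np

K_B = 1.380649e-23  # Boltzmann constant, J/K

def beam_gas_probability(s_m, p_pa, sigma_m2, temp_k=300.0):
    """Scattering probability per passage along the profile:
    sigma * integral n(s) ds, with molecular density n = P/(k_B T).
    Uses a trapezoidal rule over the sampled pressure profile."""
    s = np.asarray(s_m, dtype=float)
    n = np.asarray(p_pa, dtype=float) / (K_B * temp_k)
    return sigma_m2 * np.sum(0.5 * (n[1:] + n[:-1]) * np.diff(s))
```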
Touschek scattering is an intra-beam scattering particularly relevant for low energy storage rings,
and is
the major beam lifetime limitation for lepton colliders like DA\textPhi NE, SuperKEKB and all the modern low and ultra-low emittance light sources.
For the high energy FCC-ee, a simpler, more dedicated Monte Carlo embedded in particle tracking as developed for DA\textPhi NE~\cite{Boscolo:2012ws} and used for SuperKEKB should be fully sufficient.
Beam polarisation will be important for the FCC-ee and in particular for the precise determination of the beam energy.
It requires studies of the whole ring including realistic modeling of imperfections and optics corrections~\cite{Gianfelice-Wendt:2016jgk}.
A good candidate for detailed simulations with polarisation is the {\sc sitros} code~\cite{Kewisch:1983uq}.
Another good candidate is {\sc bmad}~\cite{bmad} which is a flexible software toolkit for the simulation of charged particles and X-rays including the spin tune and polarisation.
\section{Experiment software}
\label{sec:fccsw}
The software used to study the physics potential of FCC goes generically under the name of FCCSW and comprises a set of tools covering the needs of an experiment: signal generation, detector description and simulation, event reconstruction and final analysis~\cite{fccswchep}.
FCCSW is a result of a process started just after the FCC project kick-off in 2014. The design goal has been to support physics and detector studies with parameterised, fast, and full simulation, also allowing a mixture of the three. It has to be modular enough to allow for evolution, allowing component parts to be improved separately. Finally it has to allow multi-paradigms for analysis, with C++ and Python at the same level. The strategy to meet these challenging requirements has been to adopt solutions developed for LHC, such as the Gaudi framework~\cite{gaudi} and to look at ongoing common projects; among the latter, it is worth mentioning those developed under the AIDA EU R\&D effort~\cite{AIDA}: {\sc podio}~\cite{podio}, used to define the event data model, and {\sc dd4hep}~\cite{dd4hep}, used for the geometrical description of all the elements relevant for the physics measurements, i.e.\ the sensitive and passive elements of the sub-detectors, supports, magnet and elements of the interaction region affecting the detector performance, such as the beam pipe and other elements which can scatter or produce particle debris in the detector.
The Gaudi framework implements an architecture in which data flow through a transient data store (in memory), where they can be modified by algorithms representing the various steps of the data processing chain, e.g. generation, simulation or reconstruction. Readers and converters are special algorithms that can inject data into the transient store for processing.
Because of its capability of coping effectively with the data processing needs of High Energy Physics (HEP) experiments and the challenges of HL-LHC, Gaudi has also been chosen as the main framework of the Key4hep common software project, around which FCCSW is going to evolve in the future.
\subsection{MDI induced backgrounds}
\label{sec:fccswmdibkg}
The processes described in Section~\ref{sec:beambkg}, in addition to influencing the beam lifetime and stability, can be sources of backgrounds - and therefore of systematic effects - for the physics measurements and need to be controlled as precisely as possible.
Before FCC-ee, the use of the codes described in Section~\ref{sec:mdi} was restricted to a few experts who were producing estimates of what turned out to be small effects, which were possibly mentioned as upper limits on systematic errors in final analysis. The only beam-related effects included by physicists in the simulation of the detector response were spreads in the beam energy and the position of the effective interaction point, which are important and non-negligible but do not cover the full picture.
The unprecedented design luminosities of FCC-ee require a better evaluation of all the effects, including those of Section~\ref{sec:beambkg}, which can only be obtained by simulating these backgrounds in the experimental apparatus to properly estimate detector occupancies and the level of spurious objects, such as additional tracks.
This requires inter-operability of the relevant codes with FCCSW.
\subsection{Interplay between accelerator and experiment codes}
\label{sec:fccswmdi}
There are several levels at which software programs can inter-operate. The one we pursue here is the lowest-level option, namely inter-operation through common data formats, which also works for programs running on different hardware or operating systems. We have seen in Section~\ref{sec:2} that the codes for MDI-induced backgrounds typically produce outputs in the form of formatted text files. There is no common output format for all the programs, but there is enough information to understand the outputs and use them in other contexts.
The underlying idea is to develop a set of Gaudi readers and/or converters to inject the events produced by the MDI-background codes in the data processing chain.\footnote{While in the default running mode inter-operability is through persistent files, the availability of dedicated readers and/or converters opens the way for alternative inter-operation options, for example through FIFO channels~\cite{FIFOref}.}
FCCSW will then simulate the interaction of these particles in the detector to evaluate occupancies and levels of spurious objects.
Eventually FCCSW will provide the possibility to
overlay these events on signal events for a more detailed background simulation, possibly with a weighted mixture of MDI processes.
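As a sketch of such a reader (shown outside Gaudi, for illustration only): assume a background generator emits one whitespace-separated particle per line with columns px py pz E x y z; the actual column layouts differ from code to code, so this layout is an assumption.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class BackgroundParticle:
    px: float
    py: float
    pz: float
    e: float
    x: float
    y: float
    z: float

def read_background_events(lines) -> List[BackgroundParticle]:
    """Parse one particle per line, skipping blanks and '#' comments.
    The column layout (px py pz E x y z) is an assumed example."""
    out = []
    for line in lines:
        line = line.strip()
        if not line or line.startswith('#'):
            continue
        vals = [float(v) for v in line.split()[:7]]
        out.append(BackgroundParticle(*vals))
    return out
```

A Gaudi reader would wrap logic of this kind and push the resulting records into the transient event store for the detector simulation downstream.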
\subsection{Towards an MDI ``supercode''}
\label{sec:supercode}
By `supercode' we do not mean a new big program doing everything but a common interface to all relevant codes to effectively simulate a single big program behaviour. The ultimate goal is that physicists wanting to study these backgrounds are able to do so from any of the supported computing infrastructures.
A software ecosystem based on Gaudi allows an approach of this type.
The relevant components which need to be provided and/or identified are the following:
\begin{enumerate}
\item A shared file system available on the computing infrastructures supported for the project;
\item A software stack providing all the relevant applications built in a coherent way;
\item A set of good default configuration files available in the shared file system accessible by the relevant applications;
\item A wrapper to run external applications in Gaudi;
\item A set of Gaudi readers and converters, as mentioned in the previous section;
\item A set of application command line controls covering all the identified needs.
\end{enumerate}
For the shared file system the obvious choice is
CernVM-FS~\cite{cvmfs,cvmfstwo} which is now ubiquitous in HEP communities and beyond.
The Key4hep stack~\cite{key4hep} is the choice for point 2, with all the stack software available under {\tt /cvmfs/sw.hsf.org/}. Some of the relevant codes, e.g. {\sc guineapig}, are already available in Key4hep. Part of the work is to make sure that all the relevant software is in a form suitable for being added and maintained in the common stack.
Good default configuration files should be the result of the detailed evaluation of each of the codes mentioned earlier; they could be stored on the shared CernVM-FS repository, though the possibility to use different settings should be maintained.
The integration with Gaudi, points 4 and 5, is part of the work, and no insurmountable issues are anticipated.
The identification of a common set of switches to cope with all the codes will need some iterations, though, again, it should not pose insolvable problems.
The whole integration process just described is being proof-tested with {\sc guineapig} with promising results already. Detailed documentation of this prototype work and of course of all the components described in this sub-section are part of the work. It should contain examples at different levels, together with a detailed reference guide.
\section{Aspects related to geometry description}
\label{sec:geometry}
The description of the geometry and materials of the relevant accelerator and detector elements is a crucial ingredient of most of the codes discussed in this essay. Having a coherent description, based on the same source of information is highly desirable for several reasons, not least to reduce the risk of errors due to several implementations of the same item.
The design and implementation of these elements follows different paths depending on the nature of the element. {\it Detector components} are designed starting from a detector concept and then it is mostly space constraints that are applied. The tool chosen to describe the relevant concepts is DD4hep~\cite{dd4hep}, an open source toolset introducing the compact detector description concept, provided through minimalistic XML formats, to allow composition of basic sub-detector elements to form complex detector structures.
DD4hep was developed for conceptual design studies and initially applied to linear collider cases. However, its flexibility and generality quickly appealed to the LHC community; as of today, DD4hep has been adopted by CMS for use starting from Run3 and is being seriously considered by LHCb and ATLAS.
{\it Accelerator elements} usually arise from optimisation studies carried out with CAD engineering tools, which allow working in 3D with the solid features that are essential for the task. Currently the tool mostly used is {\sc Autodesk Inventor}~\cite{adeskinv}, though {\sc catia}~\cite{catia} is currently being evaluated; both these tools are commercial, because the open source CAD tools currently available do not provide the required quality.
While there are good reasons for both approaches, the desirable requirement to have the same source of information becomes a challenge, because a satisfactory conversion procedure of CAD supported formats to DD4hep is currently missing.
DD4hep has some capabilities to read CAD formats through an interface to the external open source library {\sc AssImp}~\cite{assimp}. Besides the fact that only read support is currently implemented (which would allow only one-way conversion, namely CAD to DD4hep), the major limitations come from the CAD file formats currently supported by {\sc AssImp} and their overlap with those supported by {\sc Autodesk Inventor}. Early investigations have shown that currently the only testable solution is to use the STL (Standard Tessellation Language) format~\cite{stlref}. However, this format has some inherent limitations, since it focuses on surfaces and does not seem to provide a natural way to describe material information, which is a must-have for any simulation activity.
Other conversion options are under investigation such as the ones discussed in Ref.~\cite{pyg4ometry} or the suggestions available for {\sc Geant4} users~\cite{instep,salome,cadmesh,blender,step,vtcad,sw2gdml,cadmc,pyg4ometry}. Finding a satisfactory solution to the mutual conversion between CAD formats and DD4hep is clearly one of the challenges of the MDI detector studies.
\section{Conclusion}
\label{sec:concl}
In this essay we reviewed the main aspects affecting accelerator codes and their interplay with experimental software.
The existing accelerator codes are the result of many years of development and have been validated with respect to various accelerator facilities. Often two or more alternative codes exist for each of the different operational and experimental aspects. The challenge in these cases is to get full control of the codes, whose sources are often not available in version repositories.
We need to facilitate access and configuration and, when relevant, provide clear and, if required, solid ways of combining the results (see, e.g., the case of synchrotron radiation).
For the integration with experimental software, the main challenge is to provide the relevant Gaudi components to enable the interplay between accelerators codes and the data processing chain.
Finally, to achieve the objective of a single geometry source for all the components, a solid conversion solution to and from CAD will be required.
The interaction region design for FCC is particularly challenging and requires a combined optimisation of accelerator, engineering and experiment aspects.
A comprehensive and flexible software environment as outlined in this essay will be very helpful for the MDI design of a future collider.
\bibliographystyle{ieeetr}
\noindent{\bf{\em Introduction:}}
While stationary black holes in 4 spacetime dimensions (4D) are
stable to perturbations, higher dimensional analogues are not. Indeed, as first illustrated by
Gregory and Laflamme in the early 90s~\cite{Gregory:1993vy}, black strings and
p-branes are linearly unstable to long wavelength perturbations in 5 and
higher dimensions. Since then, a number of interesting black objects
in higher dimensional gravity have been discovered, many of them exhibiting
similar instabilities (see e.g.~\cite{Emparan:2008eg}).
An open question for all unstable black objects is what the end-state of
the perturbed system is. For black strings, Gregory and Laflamme~\cite{Gregory:1993vy} conjectured
that the instability would cause the horizon to pinch off at periodic intervals,
giving rise to a sequence of black holes. One reason for this conjecture comes from entropic
considerations: for a given mass per unit length and periodic spacing above a critical
wavelength $\lambda_c$, a sequence of hyperspherical black holes has higher entropy
than the corresponding black string. Classically, event horizons cannot bifurcate
without the appearance of a naked singularity~\cite{Hawking:1973uf}. Thus, reaching the
conjectured end-state would constitute a violation of cosmic censorship, without
``unnatural'' initial conditions or fine-tuning, and be an example
of a classical system evolving to a regime where quantum gravity is required.
This conjecture was essentially taken for granted until several years later when
it was proved that the generators of the horizon cannot
pinch off in finite affine time~\cite{Horowitz:2001cz}. From this,
it was conjectured that a new, non-uniform black string end-state would be reached~\cite{Horowitz:2001cz}.
Subsequently, stationary, non-uniform black string
solutions were found~\cite{Gubser:2001ac,Wiseman:2002zc}, however, they had less entropy than the uniform string
and so could not be the putative new end-state,
at least for dimensions lower than 13~\cite{Sorkin:2004qq}.
A full numerical investigation studied
the system beyond the linear regime~\cite{Choptuik:2003qd}, though not
far enough to elucidate the end-state before the code ``crashed''.
At that point the horizon resembled
spherical black holes connected by black strings, though no
definitive trends could be extracted, still allowing for
both conjectured possibilities:
(a) a pinch-off in infinite affine time,
(b) evolving to a new, non-uniform state.
If (a), a question arises whether pinch-off happens in
infinite {\em asymptotic} time; if so, any bifurcation
would never be seen by outside observers, and
cosmic censorship would hold. While this might be a natural conclusion,
it was pointed out in~\cite{Garfinkle:2004em,Marolf:2005vn} that
due to the exponentially diverging rate between affine time and
a well-behaved asymptotic time, pinch-off could
occur in finite asymptotic time.
A further body of (anecdotal) evidence supporting the GL conjecture
comes from the striking resemblance of the equations
governing black hole horizons to those describing fluid flows, the latter which
do exhibit instabilities that often result in break-up of the fluid. The fluid/horizon
connection harkens back to the
membrane paradigm~\cite{Thorne:1986iy}, and also in
more recently developed correspondences~\cite{Bhattacharyya:2008jc,Emparan:2009at}.
In~\cite{Cardoso:2006ks} it was shown that the dispersion relation of Rayleigh-Plateau unstable
modes in hyper-cylindrical fluid flow with tension agreed well with those of the GL modes
of a black string.
Similar behavior was found for instabilities of a self-gravitating cylinder of
fluid in Newtonian gravity~\cite{Cardoso:2006sj}.
In~\cite{Camps:2010br}, using a perturbative expansion of the Einstein field equations~\cite{Emparan:2009at}
to relate the dynamics of the horizon to that of a viscous fluid,
the GL dispersion relation was {\em derived} to good approximation,
thus going one step further than showing analogous behavior between
fluids and horizons.
What is particularly intriguing about fluid analogies, and what they might
imply about the black string case,
is that break-up of an unstable flow is preceded by formation of spheres
separated by thin necks.
For high viscosity liquids, a single neck forms before
break-up. For lower viscosity fluids, smaller ``satellite'' spheres can form
in the necks, with more generations forming the lower the viscosity (see
~\cite{Eggers:1997zz} for a review). In the membrane paradigm, black holes have
lower shear viscosity to entropy ratio than any known fluid~\cite{Kovtun:2004de}.
Here we revisit the evolution of 5D black strings using a new code.
This allows us to follow the evolution well beyond the earlier study~\cite{Choptuik:2003qd}.
We find that the dynamics of the horizon
unfolds as predicted by the low viscosity fluid analogues: the string initially evolves
to a configuration resembling a hyperspherical black hole connected by thin
string segments; the string segments are themselves unstable, and the
pattern repeats in a self-similar manner to ever smaller scales. Due to finite
computational resources, we cannot follow the dynamics indefinitely. If
the self-similar cascade continues as suggested by the simulations,
arbitrarily small length scales, and in consequence arbitrarily large curvatures,
will be revealed outside the horizon in finite asymptotic time.
\noindent{\bf{\em Numerical approach:}}
We solve the vacuum Einstein field equations
in a 5-dimensional (5D)
asymptotically flat spacetime with an $SO(3)$ symmetry. Since
perturbations of 5D black strings violating this
symmetry are stable and decay~\cite{Gregory:1993vy}, we do not expect
imposing this symmetry to qualitatively affect the results
presented here.
We use the generalized harmonic formulation of the field
equations~\cite{Pretorius:2004jg}, and adopt a {\em Cartesian} coordinate
system related to spherical polar coordinates via
$\bar{x}^i=(\bar{t},\bar{x},\bar{y},\bar{z},\bar{w})=(t,r\cos\phi\sin\theta,r\sin\phi\sin\theta,r\cos\theta,z)$.
The black string horizon has topology $S^2\times R$; $(\theta,\phi$)
are coordinates on the 2-sphere, and $z$ ($\bar{w}$) is the coordinate
in the string direction, which we make periodic with length $L$.
We impose a {\em Cartesian Harmonic} gauge condition,
i.e. $\nabla_\alpha \nabla^\alpha \bar{x}^i =0$,
as empirically this seems to result in more stable numerical
evolution compared to spherical harmonic coordinates.
The $SO(3)$ symmetry is enforced using the variant of the
``cartoon'' method~\cite{Alcubierre:1999ab} described in~\cite{Pretorius:2004jg},
where we only evolve a $\bar{y}=\bar{z}=0$ slice of the spacetime.
We further add {\em constraint damping}~\cite{Gundlach:2005eh},
which introduces two parameters $\kappa$ and $\rho$; we use $(\kappa,\rho)=(1,-0.5)$,
where a non-zero $\rho$ is essential to damp an unstable zero-wavelength
mode arising in the $z$ direction.
We discretize the equations using
4th order finite difference approximations, and integrate in
time using 4th order Runge-Kutta.
To resolve the small length scales that develop during evolution
we use Berger and Oliger adaptive mesh refinement.
Truncation error estimates are used to dynamically generate the mesh hierarchy, and
we use a spatial and temporal refinement ratio of 2.
At the outer boundary we impose Dirichlet conditions, with the metric
set to that of the initial data. These conditions are not strictly
physically correct at finite radius, though the outer boundary is placed
sufficiently far that it is causally disconnected
from the horizon for the time of the simulation.
We use black hole excision on the inner surface; namely, we
find the apparent horizon (AH) using a flow method, and dynamically adjust
this boundary (the {\em excision surface}) to be some distance within the AH. Due to the causal nature of spacetime
inside the AH, no boundary conditions are placed on the excision surface.
We adopt initial data describing a perturbed black string of mass per unit length $M$
and length $L=20M\approx 1.4 L_c$ ($L_c$ is the critical length above which all perturbations
are unstable). This data was used in~\cite{Choptuik:2003qd} and
we refer the reader to that work for further details.
We evaluate the following curvature scalars on the AH:
\begin{equation}\label{inv_def}
K=I R_{AH}^4/12, \ \ \ S=27 \left( 12 J^2 I^{-3}-1 \right ) + 1 \, ,
\end{equation}
where $I=R_{abcd}R^{abcd}$, $J=R_{abcd} R^{cdef} R_{ef}{}^{ab}$ and $R_{AH}$ is the
areal radius of the AH at the corresponding point (note that
$I$ and $J$ are usually defined in terms of the Weyl tensor,
which here equals the Riemann tensor as we are in vacuum).
$K$ and $S$ have been scaled
to evaluate to $\{6,1\}$ for the hyperspherical black hole and black
string respectively.
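As a quick sanity check of this normalization (a check of our own; the closed-form horizon Kretschmann scalars used below are standard results from the literature, not quoted in this paper), $K=I R_{AH}^4/12$ indeed evaluates to $1$ on the horizon of an exact black string, whose cross-section is 4D Schwarzschild with $I=48M^2/r^6$, and to $6$ on the horizon of the 5D Schwarzschild-Tangherlini black hole with $f(r)=1-\mu/r^2$ and $I=72\mu^2/r^8$:

```python
# Consistency check of the normalization K = I * R_AH^4 / 12 on the horizons
# of the two exact solutions.  The Kretschmann scalars are the standard
# closed-form results, assumed here rather than taken from the paper.

def K_black_string(M):
    r = 2.0 * M                  # horizon radius of the Schwarzschild cross-section
    I = 48.0 * M**2 / r**6       # Kretschmann scalar at the horizon
    return I * r**4 / 12.0       # evaluates to 1

def K_tangherlini(mu):
    r = mu ** 0.5                # horizon radius, f(r) = 1 - mu/r^2
    I = 72.0 * mu**2 / r**8      # Kretschmann scalar at the horizon
    return I * r**4 / 12.0       # evaluates to 6

print(K_black_string(1.0), K_tangherlini(1.0))  # 1.0 6.0
```

Both values are independent of the mass parameter, as they must be for a dimensionless normalization.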
\noindent{\bf{\em Results:}}
The results described here are from simulations where the computational domain
is $(r,z)\in ([0,320M]\times[0,20M])$. The coarsest grid covering the entire
domain has a resolution of $(N_r,N_z)=(1025,9)$ points. For convergence
studies we ran simulations with 3 values of the maximum estimated
truncation error $\tau$: [``low'',``medium'',``high''] resolution have
$\tau=[\tau_0,\tau_0/8,\tau_0/64]$ respectively.
This leads to an initial hierarchy where the horizon of the black string
is covered by 4, 5 and 6 additional refined levels for the
low to high resolutions, respectively. Each
simulation was stopped when the estimated computational resources
required for continued evolution were prohibitively high (which naturally occurred
later in physical time for the lower resolutions); by then the
hierarchies were as deep as 17 levels.
Fig.~\ref{fig:AH_radius} shows the integrated AH area $A$ within $z\in[0,L]$
versus time. At the end of the lowest resolution run the total area
is $A=(1.369\pm0.005) A_0$\footnote{The error in the area was estimated
from convergence at the latest time data was available from all simulations}, where $A_0$ is the initial area; interestingly,
this almost reaches the value of $1.374 A_0$ that an exact 5D black hole
of the same total mass would have.
\begin{figure}
\begin{center}
\includegraphics[width=3.in,clip=true]{i7_AH_area.ps}
\caption{(Normalized) apparent horizon area vs. time.
}
\label{fig:AH_radius}
\end{center}
\end{figure}
Fig.~\ref{fig:AH_embed} shows snapshots of embedding diagrams
of the AH, and Fig.~\ref{fig:AH_invariants} shows the curvature
invariants (\ref{inv_def}) evaluated on the AH at the last time step,
both from the medium resolution run.
\begin{figure}
\begin{center}
\includegraphics[width=3.in,clip=true]{i6_L0_AH_embed_t1_to_10_b.ps}
\includegraphics[width=3.in,clip=true]{i6_L0_AH_embed_t13_to_23_b.ps}
\includegraphics[width=3.in,clip=true]{i6_L0_AH_embed_t24_to_36_b.ps}
\caption{Embedding diagram of the apparent horizon at several
instances in the evolution of the perturbed black string,
from the medium resolution run.
$R$ is areal radius, and the embedding coordinate
$Z$ is defined so that the proper length of the horizon in
the space-time $z$ direction (for a fixed $t,\theta,\phi$) is exactly equal to the
Euclidean length of $R(Z)$ in the above figure.
For visual aid copies of the diagrams reflected about $R=0$ have
also been drawn in.
The light (dark) lines denote the first (last) time from the time-segment
depicted in the corresponding panel.
The computational domain is periodic in $z$ with period $\delta z = 20M$; at the
initial (final) time of the simulation $\delta Z=20M$ ($\delta Z=27.2M$). }
\label{fig:AH_embed}
\end{center}
\end{figure}
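The embedding construction described in the caption of Fig.~\ref{fig:AH_embed} can be made concrete as follows (a sketch of our own; we assume the horizon data is available as the areal radius $R(z)$ together with the induced metric component $h_{zz}(z)$ along the string direction, so that matching proper length forces $dZ/dz=\sqrt{h_{zz}-(dR/dz)^2}$):

```python
import math

# Sketch (our own construction) of the embedding coordinate Z:
# choose Z(z) so that the Euclidean arc length of the curve (Z, R(Z))
# equals the proper length of the horizon along z, i.e.
#   (dZ/dz)^2 + (dR/dz)^2 = h_zz  =>  dZ/dz = sqrt(h_zz - (dR/dz)^2).
# Inputs are equal-length samples of z, R(z) and h_zz(z) on the horizon.

def embedding_coordinate(z, R, h_zz):
    Z = [0.0]
    for i in range(1, len(z)):
        dz = z[i] - z[i - 1]
        dRdz = (R[i] - R[i - 1]) / dz
        h_mid = 0.5 * (h_zz[i] + h_zz[i - 1])
        # clip tiny negative values caused by finite differencing
        Z.append(Z[-1] + math.sqrt(max(h_mid - dRdz**2, 0.0)) * dz)
    return Z
```

For a uniform horizon ($R$ constant, $h_{zz}=1$) this reduces to $Z=z$, while a growing satellite stretches $Z$ relative to $z$, which is why the period $\delta Z$ in Fig.~\ref{fig:AH_embed} grows from $20M$ to $27.2M$.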
The shape of the AH, and that the invariants are
tending to the limits associated with pure black strings or black
holes at corresponding locations on the AH, suggests it is
reasonable to describe the local geometry as being similar
to a sequence of black holes connected by black strings.
This also strongly suggests that satellite formation will continue
self-similarly, as each string segment resembles
a uniform black string that is sufficiently long to be unstable.
Even if at some point in the cascade shorter segments were to form,
this would not be a stable configuration as generically
the satellites will have some non-zero $z$-velocity,
causing adjacent satellites to merge and effectively lengthening the
connecting string segments.
With this interpretation, we summarize key features of the AH dynamics
in Table~\ref{tab_properties}.
\begin{table}
{\small
\begin{tabular}[t]{| c || c | c | c | c | c |}
\hline
Gen. & $t_i/M$ & $R_{s,i}/M$ & $L_{s,i}/R_{s,i}$ & $n_s$ & $R_{h,f}/M$\\
\hline
1 & $118.1\pm0.5$ & $2.00$ & $10.0$ & $ 1 $ & $4.09\pm0.5\%$\\
\hline
2 & $203.1\pm0.5$ & $0.148\pm1\%$ & $105\pm1\%$ & $ 1 $ & $0.63\pm2\%$\\
\hline
3 & $223\pm2$ & $0.05\pm20\%$ & $\approx 10^2$ & $ >1$ & $0.1 - 0.2$ \\
\hline
4 & $\approx 227$ & $\approx 0.02 $ & $\approx 10^2$ & $ >1(?)$ & ? \\
\hline
\end{tabular}
}
\caption{Properties of the evolving black string apparent horizon,
{\em interpreted} as proceeding through several
self-similar generations, where each local string segment temporarily
reaches a near-steady state before the onset of the next GL instability.
$t_i$ is the time when the instability has grown to where
the nascent spherical
region reaches an areal radius $1.5$ times
the surrounding string-segment radius $R_{s,i}$, which has
an estimated proper length $L_{s,i}$ (the critical $L/R$ is $\approx 7.2$~\cite{Gregory:1993vy}).
$n_s$ is the number of satellites that form per segment, that
each attain a radius $R_{h,f}$ measured at the end of the simulation.
Errors, where appropriate, come from convergence tests.
After the second generation
the number and distribution of satellites that form
depend sensitively on grid parameters, and perhaps the only
``convergent'' result we have then
is that at roughly $t=223$ a third generation {\em does} develop.
We surmise the reason for this is that the long
parent string segments could have multiple unstable modes
with similar growth rates, and which mode is first excited
is significantly affected by truncation error.
We have only had the resources to run the lowest resolution simulation for sufficiently long
to see the onset of the 4th generation, hence the lack of error estimates
and presence of question marks in the corresponding row.
}
\label{tab_properties}
\end{table}
\begin{figure}
\begin{center}
\includegraphics[width=3.0in,clip=true]{i6_L0_inv_area.ps}
\caption{Curvature invariants evaluated on the apparent horizon at the
last time of the simulation depicted in Fig.~\ref{fig:AH_embed}.
The invariant $K$ evaluates to $1$ for an exact black string, and $6$ for an exact
spherical black hole; similarly for $S$ (\ref{inv_def}).
}
\label{fig:AH_invariants}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=3.2in,clip=true]{i6_L0_ln_AH_r_z_const_ln_t.ps}
\caption{
Logarithm of the areal radius vs. logarithm of time
for select points on the apparent horizon from
the simulation depicted in Fig.~\ref{fig:AH_embed}.
We have shifted the time axis {\em assuming} self-similar behavior;
the putative naked singularity forms at asymptotic time $t/M\approx 231$.
The coordinates
at $z=15,5$ and $4.06$ correspond to the maxima of the areal
radii of the first and second generation satellites, and
one of the third generation satellites at the time the simulation
stopped.
The value $z=6.5$ is a representative slice in the middle of a
piece of the horizon that remains string-like throughout the evolution.
}
\label{fig:AH_radius_vs_lnt}
\end{center}
\end{figure}
We estimate when this self-similar cascade will end.
The time when the first satellite appears is controlled
by the perturbation imparted by the initial data; here that
is $T_0/M\approx 118$.
Subsequent timescales should approximately represent the generic
development of the instability. The time for the
first instability {\em after} that sourced by the initial data
is $T_1/M\approx 80$. Beyond that, with the caveats that we have
a small number of points and poor control over errors at late
times, each subsequent instability unfolds
on a time-scale $X\approx1/4$ times that of the preceding one.
This is to be expected if, as for the exact black string,
the time scale is proportional to the string radius. The time $t_0$
of the end-state is then $t_0 \approx T_0 + \sum_{i=0}^\infty T_1 X^i = T_0 + T_1/(1-X)$.
For the data here, $t_0 /M\approx 231$; then the local
string segments reach zero radius, and
the curvature visible to exterior observers diverges.
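The same geometric-series extrapolation can be replayed directly on the generation onset times $t_i$ of Table~\ref{tab_properties} (a sketch of our own; the onset times are the tabulated values, and the contraction ratio is estimated from the last two inter-generation gaps):

```python
# Geometric-series estimate of the cascade end time from the onset times t_i
# of successive generations (values from Table I).  If the gaps between
# generations shrink by a roughly constant factor X, the remaining gaps sum
# to a finite limit t0 = t_n + gap_n * X / (1 - X).

t = [118.1, 203.1, 223.0, 227.0]          # onset times t_i / M from Table I
gaps = [b - a for a, b in zip(t, t[1:])]  # [85.0, 19.9, 4.0]
X = gaps[-1] / gaps[-2]                   # ~0.2, cf. X ~ 1/4 in the text
t0 = t[-1] + gaps[-1] * X / (1.0 - X)
print(round(t0, 1))                       # 228.0, same ballpark as t0/M ~ 231
```

The few-percent spread between this table-based estimate and the $t_0/M\approx 231$ quoted above reflects the quoted uncertainties in the late-generation onset times.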
Fig.~\ref{fig:AH_radius_vs_lnt} shows a few points on the AH, scaled
assuming this behavior. In the Rayleigh-Plateau analogue,
the shrinking neck of a fluid stream has a self-similar scaling solution
that satisfies $r\propto (t_0-t)$, or, $d\ln r/d(-\ln(t_0-t))=-1$,
where $r$ is the stream radius (see ~\cite{eggers_pinch}, and~\cite{miyamoto_extend}
for extensions to higher dimensions);
to within $10-20\%$ this is the average slope we see (e.g. Fig.~\ref{fig:AH_radius_vs_lnt})
at string segments of the AH at late times.
\noindent{\bf{\em Conclusions:}}
We have
studied the dynamics of a perturbed,
unstable 5D black string. The
horizon behaves
similarly to
the surface of a stream of low viscosity
fluid subject to the Rayleigh-Plateau instability. Multiple
generations of spherical satellites, connected by ever thinner string
segments, form. Curvature invariants on the horizon suggest
this is a self-similar process, where at each stage the local string/spherical
segments resemble the corresponding exact solutions. Furthermore, the time scale
for the formation of the next generation is proportional
to the local string radius, implying the cascade will terminate in finite
asymptotic time. Since local curvature scalars grow with inverse
powers of the string radius, this end-state will thus be a naked, curvature singularity.
If quantum gravity resolves these singularities, a series of spherical
black holes will emerge. However, small
momentum perturbations in the extra dimension would induce the merger of
these black holes, thus for a compact
extra dimension the end state of the GL instability will be a single
black hole with spherical topology.
The kind of singularity reached here via a self-similar process is
akin to that formed in critical gravitational collapse~\cite{Choptuik:1992jv}; however,
here no fine-tuning is required. Thus,
5 (and presumably higher) dimensional Einstein gravity allows solutions
that generically violate cosmic censorship.
Angular momentum will likely not alter this conclusion, since
as argued in~\cite{Emparan:2009at}, and shown
in~\cite{Dias:2010eu}, rotation does not suppress
the unstable modes, and moreover induces super-radiant
and gyrating instabilities ~\cite{Marolf:2004fya}.
\noindent{\bf{\em Acknowledgments:}}
We thank
V. Cardoso, M. Choptuik, R. Emparan, D. Garfinkle, K. Lake,
S. Gubser, G. Horowitz, D. Marolf, R. Myers,
W. Unruh and R. Wald for stimulating discussions.
This work was supported by NSERC (LL), CIFAR (LL),
the Alfred P. Sloan Foundation (FP), and NSF grant PHY-0745779 (FP).
Simulations were run on the {\bf Woodhen} cluster at Princeton University and
at LONI. Research at Perimeter Institute is
supported through Industry Canada and by the Province of Ontario
through the Ministry of Research \& Innovation.
\bibliographystyle{apsrev}
\section{Introduction}
\setcounter{equation}{0}
In \cite{PS1} a general method for proving large deviations estimates for dynamical systems $(X,T)$ is developed. In this note we make explicit the connection between the main results of \cite{PS1} and the notion of weak Gibbs measures, which was not spelled out in the original paper.
Let $X$ be a compact metric space and $T\,{:}\; X\rightarrow X$ a continuous map which is onto. $M_1(X)$ is the set of Borel probability measures on $X$ (with weak convergence topology) and $M_1(X,T)$ the subset of $T$-invariant probability measures.
Let $x\in X$ and
$$
{\mathcal E}_n(x):=\frac{1}{n}\sum_{k=0}^{n-1}\delta_{T^kx}\,.
$$
The metric entropy of $\nu\in M_1(X,T)$ is denoted $h(T,\nu)$ and
$B_m(x,\varepsilon)$ is the dynamical ball
$\{y\in X\,{:}\; d(T^kx,T^ky)\leq\varepsilon\,,\;k=0,\ldots,m-1\}$.
There are several variants in the literature for the definition of weak Gibbs measures
(see e.g. \cite{BV} and \cite{Yu}).
In this paper a weak Gibbs measure is defined as follows.
\begin{defn}\label{multi-defn2}
Let $\varphi\in C(X)$.
A probability measure $\nu$ is a \emph{weak Gibbs measure for $\varphi$} if $\forall \delta>0$ $\exists \varepsilon_\delta>0$ such that for
$0<\varepsilon\leq\varepsilon_\delta$ $\exists N_{\delta,\varepsilon}<\infty$, $\forall m\geq N_{\delta,\varepsilon}$, $\forall x\in X$,
$$
-\delta\leq \frac{1}{m}\ln\nu(B_m(x,\varepsilon))-\int \varphi\,d{\mathcal E}_m(x)\leq \delta\,.
$$
\end{defn}
The set of weak Gibbs measures for a given $\varphi$ is convex (possibly empty).
Gibbs measures as defined in \cite{Bo} (see \cite{Bo}, theorem 1.2) and quasi-Gibbs measures (see \cite{HR}, proposition 2.1) are examples of weak Gibbs measures since these measures satisfy the stronger inequalities: there exists
$0<\varepsilon_0<\infty$ such that for
$0<\varepsilon\leq\varepsilon_0$ $\exists K_{\varepsilon}<\infty$,
$\forall m$, $\forall x\in X$,
$$
-\frac{K_{\varepsilon}}{m}\leq \frac{1}{m}\ln\nu(B_m(x,\varepsilon))-\int \varphi\,d{\mathcal E}_m(x)\leq\frac{K_{\varepsilon}}{m}\,.
$$
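As a concrete illustration (an example of our own, not from this note), a two-state Markov measure on the full shift over $\{0,1\}$ satisfies these stronger inequalities: with the metric $d(x,y)=2^{-k}$, where $k$ is the first index at which $x$ and $y$ differ, and $\varepsilon<1$, the dynamical ball $B_m(x,\varepsilon)$ is the cylinder $[x_0\ldots x_{m-1}]$, and for $\varphi(x)=\ln P(x_0,x_1)$ the deviation equals $(\ln\pi(x_0)-\ln P(x_{m-1},x_m))/m$, which is $O(1/m)$ uniformly in $x$:

```python
import math

# Illustration (our own example): a two-state Markov measure on the full
# shift.  The dynamical ball B_m(x, eps) is the cylinder [x_0 ... x_{m-1}],
# so nu(B_m(x, eps)) = pi(x_0) * prod_{k=0}^{m-2} P(x_k, x_{k+1}), and with
# phi(x) = ln P(x_0, x_1) the deviation
#   (1/m) ln nu(B_m(x, eps)) - int phi dE_m(x)
# equals (ln pi(x_0) - ln P(x_{m-1}, x_m)) / m, i.e. it is bounded by
# K_eps / m with K_eps = max|ln pi| + max|ln P|.

P = {(0, 0): 0.7, (0, 1): 0.3, (1, 0): 0.4, (1, 1): 0.6}  # transition matrix
pi = {0: 4.0 / 7.0, 1: 3.0 / 7.0}                          # stationary law

def deviation(x, m):
    """(1/m) ln nu(B_m(x, eps)) - (1/m) sum_{k<m} phi(T^k x)."""
    log_ball = math.log(pi[x[0]]) + sum(
        math.log(P[(x[k], x[k + 1])]) for k in range(m - 1))
    birkhoff = sum(math.log(P[(x[k], x[k + 1])]) for k in range(m)) / m
    return log_ball / m - birkhoff
```

Here $\pi$ is the stationary law of $P$, and the uniform constant $K_\varepsilon=\max|\ln\pi|+\max|\ln P|$ plays the role of the constant in the displayed inequality above.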
\section{Results}
\setcounter{equation}{0}
If $\nu$ is a weak Gibbs measure, then
\begin{eqnarray*}
0&=
\lim_{\varepsilon\downarrow 0}\liminf_m\inf_{x\in X}\Big(\frac{1}{m}\ln\nu(B_m(x,\varepsilon))-\int \varphi\,d{\mathcal E}_m(x)\Big)\\
&=
\lim_{\varepsilon\downarrow 0}\limsup_m\sup_{x\in X}\Big(\frac{1}{m}\ln\nu(B_m(x,\varepsilon))-\int \varphi\,d{\mathcal E}_m(x)\Big)\,,
\end{eqnarray*}
that is, $-\varphi$ is a lower, respectively upper, energy function for $\nu$ in the sense of \cite{PS1}
(definitions 3.2 and 3.4). Indeed, in \cite{PS1} a function $e$ on $X$ is called a \emph{lower energy function for} $\nu$ if it is
upper semi-continuous and
\begin{equation}\label{eq1}
\lim_{\varepsilon\downarrow 0}\liminf_m\inf_{x\in X}\Big(\frac{1}{m}\ln\nu(B_m(x,\varepsilon))+\int e\,d{\mathcal E}_m(x)\Big)\geq 0\,.
\end{equation}
It is called an \emph{upper energy function for} $\nu$ if it is lower semi-continuous, bounded and
\begin{equation}\label{eq2}
\lim_{\varepsilon\downarrow 0}\limsup_m\sup_{x\in X}\Big(\frac{1}{m}\ln\nu(B_m(x,\varepsilon))+\int e\,d{\mathcal E}_m(x)\Big)\leq 0 \,.
\end{equation}
The terminology used in \cite{PS1} comes from statistical mechanics.
\begin{pro}\label{pro1}
If the continuous function $e$ verifies (\ref{eq1}) and (\ref{eq2}), then $\nu$ is a weak Gibbs measure for
$\varphi=-e$.
\end{pro}
\medbreak\noindent{\bf Proof.}\enspace
For any $\delta>0$, if $\varepsilon$ is small enough and $m$ large enough,
\begin{eqnarray*}
-\delta &\leq \inf_{m\geq N_{\delta,\varepsilon}}
\inf_{x\in X}\Big(\frac{1}{m}\ln\nu(B_m(x,\varepsilon))-\int \varphi\,d{\mathcal E}_m(x)\Big)\\
&\leq
\liminf_m\Big(\frac{1}{m}\ln\nu(B_m(x,\varepsilon))-\int \varphi\,d{\mathcal E}_m(x)\Big)\\
&\leq
\limsup_m \Big(\frac{1}{m}\ln\nu(B_m(x,\varepsilon))-\int \varphi\,d{\mathcal E}_m(x)\Big)\\
&\leq \sup_{m\geq N_{\delta,\varepsilon}}
\sup_{x\in X}\Big(\frac{1}{m}\ln\nu(B_m(x,\varepsilon))-\int \varphi\,d{\mathcal E}_m(x)\Big)\leq \delta\,,
\end{eqnarray*}
so that for $\forall m\geq N_{\delta,\varepsilon}$ and $\forall x\in X$
$$
-\delta\leq \frac{1}{m}\ln\nu(B_m(x,\varepsilon))-\int \varphi\,d{\mathcal E}_m(x)\leq \delta\,.
$$
\qed
For any dynamical system $(X,T)$ and any weak Gibbs measure the following large deviations estimates are true.
\begin{thm}\label{thm2}
Let $\nu$ be a weak Gibbs measure for $\varphi$.
1. If $G\subset M_1(X)$ is open, then for any ergodic probability measure $\rho\in G$
$$
\liminf_m\frac{1}{m}\ln\nu({\mathcal E}_m\in G)\geq h(T,\rho)+\int \varphi\,d\rho\,.
$$
2. If $F\subset M_1(X)$ is convex and closed, then
$$
\limsup_m\frac{1}{m}\ln\nu({\mathcal E}_m\in F)
\leq \sup_{\rho\in F\cap M_1(X,T)}\big(h(T,\rho)+\int \varphi\,d\rho\big)\,.
$$
\end{thm}
\medbreak\noindent{\bf Proof.}\enspace
Proposition 3.1 and theorem 3.2 in \cite{PS1}. \qed
\begin{pro}\label{pro2}
If $\nu$ is a weak Gibbs measure for $\varphi$, then the topological pressure $P(\varphi)=0$.
\end{pro}
\medbreak\noindent{\bf Proof.}\enspace
This is an immediate consequence from theorem \ref{thm2}, theorem 9.10 and corollary 9.10.1 in \cite{Wa}.
Let $G=F=M_1(X)$. Then
$$
P(\varphi)=\sup_{\rho\,{\rm ergodic}}\big(h(T,\rho)+
\int \varphi\,d\rho\big )
\leq 0 \leq
\sup_{\rho\in M_1(X,T)}\big(h(T,\rho)+
\int \varphi\,d\rho\big )=P(\varphi)\,.
$$
\qed
The following hypothesis about the entropy-map $h(T,\cdot)$ and the dynamical system $(X,T)$
are sufficient to obtain a full large deviations principle.
\begin{thm}
Let $\nu$ be a weak Gibbs measure for $\varphi$.
If the entropy map $h(T,\cdot)$ is upper semi-continuous, then
for $F\subset M_1(X)$ closed
$$
\limsup_m\frac{1}{m}\ln\nu({\mathcal E}_m\in F)
\leq \!\!\!\!\sup_{\rho\in F\cap M_1(X,T)}\big(h(T,\rho)+\int \varphi\,d\rho\big)\,.
$$
If the ergodic measures are entropy dense, then for $G\subset M_1(X)$ open
$$
\liminf_m\frac{1}{m}\ln\nu({\mathcal E}_m\in G)
\geq \!\!\!\!\sup_{\rho\in G\cap M_1(X,T)}\big(h(T,\rho)+
\int \varphi\,d\rho\big)\,.
$$
\end{thm}
\medbreak\noindent{\bf Proof.}\enspace Theorems 3.1 and 3.2 in \cite{PS1}. \qed
Entropy density of
the ergodic measures means (\cite{PS1}): for any $\mu\in M_1(X,T)$, any neighbourhood $N$ of $\mu$ and any $h^*<h(T,\mu)$, there exists an ergodic measure $\rho\in N$ such that
$h(T,\rho)\geq h^*$. Entropy density holds under various types of specification properties for the dynamical system $(X,T)$, see e.g. \cite{PS1}, \cite{PS2}, \cite{CTY}, \cite{KLO} and \cite{GK}.
\begin{pro}\label{pro4}
If $\nu\in M_1(X,T)$ is a weak Gibbs measure for $\varphi$, then it is an equilibrium measure for $\varphi$.
\end{pro}
\medbreak\noindent{\bf Proof.}\enspace
By definition an equilibrium measure $\mu\in M_1(X,T)$ for a continuous function $f$
satisfies the variational principle
$$
P(f)=\sup\Big\{h(T,\rho)+\int f\,d\rho\,{:}\; \rho\in M_1(X,T)\Big\}
=h(T,\mu)+\int f\,d\mu\,.
$$
Since $P(\varphi)=0$, $h(T,\nu)\leq -\int \varphi\,d\nu$. Since $\nu$ is a weak Gibbs measure for $\varphi$,
\begin{eqnarray*}
\limsup_m\int \varphi\,d{\mathcal E}_m(x)&=\lim_{\varepsilon\downarrow 0}\limsup_m
\frac{1}{m}\ln\nu(B_m(x,\varepsilon))\\
\liminf_m\int \varphi\,d{\mathcal E}_m(x)&=\lim_{\varepsilon\downarrow 0}\liminf_m
\frac{1}{m}\ln\nu(B_m(x,\varepsilon))\,.
\end{eqnarray*}
By the ergodic theorem there exists an integrable function $\varphi^*$ such that
$$
\lim_m \int \varphi\,d{\mathcal E}_m(x)=\varphi^*(x)\quad\nu-{\rm a.s.}
$$
and
$$
\int \varphi\,d\nu=\int \varphi^*\,d\nu\,.
$$
Therefore
$$
h(T,\nu)\leq -\int \varphi(x)\,d\nu(x)=\int\big(-\lim_{\varepsilon\downarrow 0}\limsup_m
\frac{1}{m}\ln\nu(B_m(x,\varepsilon))\big)\,d\nu(x)\,.
$$
Let ${\mathcal P}=\{A_1,\ldots,A_p\}$ be a finite measurable partition of $X$,
$\max_i{\rm diam}A_i<\varepsilon$. For $x\in X$, let ${\mathcal P}^n(x)$ be the element of the partition ${\mathcal P}^n={\mathcal P}\vee T^{-1}{\mathcal P}\vee \cdots\vee T^{-n+1}{\mathcal P}$ containing $x$. By the
Shannon-McMillan-Breiman theorem
$$
h_{\mathcal P}(x):=\lim_n-\frac{1}{n}\ln \nu({\mathcal P}^n(x))\quad \nu-{\rm a.s.}
$$
and
$$
\int h_{\mathcal P}(x)\,d\nu(x)=h_{\mathcal P}(T,\nu)\,,
$$
where
$$
h_{\mathcal P}(T,\nu)=\lim_n\Big(-\frac{1}{n}\sum_{B\in{\mathcal P}^n}\nu(B)\ln \nu(B)\Big)\leq h(T,\nu)\,.
$$
Since $B_n(x,\varepsilon)\supset {\mathcal P}^n(x)$, for any $\varepsilon>0$,
$$
\int \big(-\limsup_m \frac{1}{m}\ln\nu(B_m(x,\varepsilon))\big)\,d\nu(x)
\leq \int h_{\mathcal P}(x)\,d\nu(x)\leq h(T,\nu)\,,
$$
so that $-\int \varphi\,d\nu\leq h(T,\nu)$.
\qed
\bigskip
\noindent
{\bf Concluding remark\,}
The results in \cite{PS1} are proven for continuous ${\bf Z}_+^d$-actions or ${\bf Z}^d$-actions on $X$.
The results of this note are also true for these cases. The empirical measure ${\mathcal E}_n(x)$ and the dynamical ball $B_n(x,\varepsilon)$ are defined as in \cite{PS1}.
\vspace{1.8cm}
\section{Introduction}
Consider the problem of finding the parameters $\{\alpha_j,x_j\}_{j=1}^n$ of the exponential sum
\begin{equation}\label{eq:exp-sum}
f(s) = \sum_{j=1}^n \alpha_j x_j^s
\end{equation}
from the noisy samples $\{f(s_k)+\epsilon_k\}_{k=0}^N$. This exponential fitting problem appears in a wide range of different settings, such as direction of arrival estimation, parametric spectrum estimation, finite rate of innovation sampling, phase retrieval, as well as Pad\'{e} approximation, Gaussian quadrature, and moment problems, to name a few (see e.g. \cite{auton1981, batenkov2013b,lyubich2004, pereyra2010,stoica1995} and the references therein).
An instance of \eqref{eq:exp-sum} of particular interest occurs when $|x_j|=1$ for each $j=1,\dots,n$. This setting is motivated by the problem of super-resolution (SR) of sparse measures of the form $\mu(t) = \sum_{j=1}^n \alpha_j \delta(t-t_j)$ from the samples of its Fourier transform
\begin{equation}\label{eq:ft-of-measure}
f(s)=f_{\mu}(s)=\int e^{2\pi\imath t s}d\mu(t) = \sum_{j=1}^n \alpha_j e^{2\pi\imath s t_j},
\end{equation}
known approximately in some bandwidth $s\in[-\Omega,\Omega]$ \cite{donoho1992a}. An important question in applied harmonic analysis is to develop robust reconstruction procedures to solve the SR problem with the best possible accuracy, a question which we consider still open even in one spatial dimension. In this paper we consider only the model \eqref{eq:ft-of-measure}; however, some of our results may be extended to the more general model \eqref{eq:exp-sum} (i.e. for arbitrary $x_j\in\mathbb{C}\setminus\{0\}$). The min-max stability of solving \eqref{eq:ft-of-measure} has recently been established in \cite{batenkov2021b} when two or more nodes $x_j$ nearly collide, such that the minimal separation $\delta$ is much smaller than $1/\Omega$, see \prettyref{thm:minmax} below. However, we are not aware of a tractable algorithm provably attaining these bounds.
Prony's method (PM) \cite{prony1795} is an explicit algebraic procedure (see \prettyref{alg:classical-prony} below) which provides an exact answer to the exponential fitting problem (for arbitrary $x_j\in\mathbb{C}$) under the assumption of exact data (i.e., in the noiseless regime $\epsilon_k\equiv0$), requiring access to only $2n$ consecutive samples $f(0),f(1),\dots,f(2n-1)$. The main insight by de Prony was that the linear parameters $\{\alpha_j\}_{j=1}^n$ can be eliminated from the equations, reducing the problem of recovering $\{x_j\}_{j=1}^n$ to finding roots of a certain algebraic polynomial (the Prony polynomial). The coefficients $\alpha_j$ are recovered in the second step by solving a Vandermonde-type linear system. In the presence of noise, PM is considered to be suboptimal -- however, to the best of our knowledge, no rigorous analysis of its stability has been available, in particular in the context of the super-resolution problem.
\subsection{Main contributions}
Our main result in this paper is a rigorous proof that \emph{Prony's method is optimal} for the SR problem when the measurement bandwidth $\Omega$ is constant while the minimal node separation satisfies $\delta\to 0$. By analyzing each step of PM and taking care of error propagation, we show that the error amplification factors for both the nodes and amplitudes are asymptotically equivalent to the min-max bounds (i.e., the best achievable reconstruction errors under the worst-case perturbation scenario) of \prettyref{thm:minmax} under the optimal noise scaling (a.k.a. the threshold SNR). These results are given by \prettyref{thm:node-accuracy-Dima} and \prettyref{thm:coeffs-accuracy-Dima} respectively. In effect, our results provide a generalization of \prettyref{thm:minmax} to the true multi-cluster setting (but still restricted to $\Omega=\operatorname{const}$). As a direct corollary, we also show that PM is numerically stable in finite-precision arithmetic (\prettyref{sec:finite-precision}).
Since PM is a multi-step procedure, in principle the errors from each step might have an adverse effect on the next step (such a phenomenon may occur, for instance, when solving linear systems by the $LU$ decomposition method, where the condition number of the original matrix may be unnecessarily amplified, \cite{higham1996}). Our main technical contribution is an accurate analysis of the inter-relations between the different errors in each step of PM, eventually resulting in previously unnoticed cancellations. For comparison, a ``naive'' estimation by textbook numerical analysis methods provides too pessimistic bounds, as we demonstrate in \prettyref{prop:prony-naive-analysis} and \prettyref{prop:ampl-naive}.
\begin{remark}\label{rem:amplitudes-remark}
The error inter-relations become especially prominent in the multi-cluster setting, as for example it turns out that if the approximate nodes $\{\tilde{x}_j\}_{j=1}^n$ recovered from the first step of PM are further perturbed in an arbitrary direction, for instance by projecting them back to the unit circle, then the resulting errors in the amplitudes $\alpha_j$ may no longer be optimal (contrary to the single cluster example in \prettyref{prop:ampl-naive}). Cf. \prettyref{sec:numerics}.
\end{remark}
\subsection{Towards optimal SR}\label{sub:dpm-intro}
The full SR problem (in particular when $\Omega\delta$ is small but fixed) is apparently still algorithmically open. We believe our results may have implications for analyzing the high-resolution SR algorithms such as ESPRIT/Matrix Pencil/MUSIC, towards establishing their (non-)optimality. Furthermore, the error analysis for different steps may have implications for implementing these methods, cf. \prettyref{sec:numerics} and also \prettyref{rem:amplitudes-remark}.
Recently we have developed the Decimated Prony's Method (DPM) \cite{decimatedProny} (cf. \prettyref{sec:numerics}), which reduces the full SR problem to a sequence of small problems indexed by a ``decimation parameter'' $\lambda$, followed by further filtering of the results. In more detail, for each admissible $\lambda$, the spectrum $f$ is sampled at $2n$ equispaced frequencies $\{\lambda k\}_{k=0}^{2n-1}$ and the resulting system is subsequently solved by applying PM. Min-max bounds are attained if one can choose $\lambda=O(\Omega)$. This ``decimation'' approach was first proposed in \cite{batenkov2013a}, and further developed in a number of publications on SR \cite{batenkov2017c, batenkov2018,batenkov2020,batenkov2022}, as well as in the resolution of the Gibbs phenomenon \cite{batenkov2015b}. Decimation was the key idea in \cite{batenkov2021b} for establishing the upper bound on the min-max error $\Lambda$ in \prettyref{thm:minmax} as well. \emph{As a consequence of the results of the present paper, and the numerical experiments in \cite{decimatedProny} and \prettyref{sec:numerics}, we conjecture that DPM in fact attains the upper bounds on $\Lambda$.} We leave the rigorous proof of this conjecture to a future work.
\subsection{Organization of the paper}
In \prettyref{sec:sr-prony-details} we describe the min-max bounds for SR, and present PM with an initial sub-optimal stability analysis in this context. The main results are formulated in \prettyref{sec:main-results}, and subsequently proved in Sections~\ref{Sec:Prelims},\ref{Sec:PfMainthm} and \ref{Sec:PfAmplitude}, with the more technical proofs delegated to the appendices. \prettyref{sec:finite-precision} is devoted to analyzing the performance of PM in finite-precision arithmetic, while \prettyref{sec:numerics} demonstrates the different theoretical results numerically.
\subsection{Notation} We utilize the following common notations. For $\zeta\in \mathbb{N}$, $[\zeta]$ denotes the set $\{1,2,\dots,\zeta\}$. Asymptotic inequality $A\lessapprox B,A\gtrapprox B$ ($A\asymp B$) means inequality (resp. equality) up to constants. If not specified otherwise, the constants are assumed to be independent of the minimal separation $\delta$ and the perturbation size $\epsilon$. We will use the notation $\operatorname{col}\left\{ y_i\right\}_{i=0}^N := \begin{bmatrix}
y_0&\dots & y_N
\end{bmatrix}^T$, where $\left\{y_i \right\}_{i=0}^N\subseteq \mathbb{C}$ are arbitrary scalars.
\section{Super-resolution and Prony's method}\label{sec:sr-prony-details}
\subsection{Optimal super-resolution}\label{sub:superres-intro}
The fundamental limits of SR in the sparse model were investigated in several works in recent years, starting with the seminal paper \cite{donoho1992a} and further developed in \cite{batenkov2020, batenkov2021b, demanet2015,li2021, liu2022b}. For the purposes of this paper, we shall consider the following min-max accuracy bounds derived in \cite{batenkov2021b}. In what follows, we re-formulate the original bounds in terms of the geometry of the complex nodes $x_j:=e^{2\pi\imath t_j}$ directly, thereby making the notations consistent with \eqref{eq:exp-sum}.
\begin{defn}[Minimax rate] Let $F$ denote a set of signals of interest of the form $\mu=(\vec{\alpha},\vec{x})$, where $\vec{\alpha}=(\alpha_1,\dots,\alpha_n)$ and $\vec{x}=(x_1,\dots,x_n)$. Given a signal $\mu\in F$ and a perturbation function $e(s)\in L_{\infty}([-\Omega,\Omega])$ with $\|e\|_{\infty}\leq\epsilon$, let $\widetilde{\mu}=\widetilde{\mu}(f_{\mu}+e)$ denote any deterministic algorithm which produces an approximation $(\widetilde{\vec{\alpha}},\widetilde{\vec{x}})\in F$. Then the min-max error rates for recovering each node $x_j$ and amplitude $\alpha_j$ are given by
\begin{align*}
\mm^{\vec{x},j}(\epsilon, F, \Omega) &= \inf_{\widetilde{\mu}=(\widetilde{\vec{\alpha}},\widetilde{\vec{x}})} \ \sup_{\mu=(\vec{\alpha},\vec{x}) \in F} \ \sup_{e:\ \|e\|_{\infty}\le \epsilon} |x_j-\widetilde{x_j}|,\\
\mm^{\vec{\alpha},j}(\epsilon, F,\Omega) &= \inf_{\widetilde{\mu}=(\widetilde{\vec{\alpha}},\widetilde{\vec{x}})} \ \sup_{\mu=(\vec{\alpha},\vec{x}) \in F} \ \sup_{e:\ \|e\|_{\infty}\le \epsilon} |\alpha_j-\widetilde{\alpha_j}|.
\end{align*}
\end{defn}
As already noted in \cite{donoho1992a}, SR becomes difficult (and, in some sense, nontrivial) when some of the $n$ nodes $\{x_j\}$ form ``clusters'' of extent much smaller than the Rayleigh-Nyquist limit $1/\Omega$. To make this notion precise, let $\delta$ denote an a-priori minimal separation between any two $x_j$ (the definition below is more general than the one used in \cite{batenkov2021b}).
\begin{defn}[Clustered configuration]\label{def:cluster}
Given $n\geq 2$, the set of nodes $\{x_1,\dots,x_n\}$ is said to form a $(K_x,n,\zeta,\ell_*,\delta,\tau,\eta,T)$-cluster if there exists a partition $\bigcup_{s\in [\zeta]}\mathcal{C}_s=[n]$, with $\mathcal{C}_s\bigcap \mathcal{C}_{s'}=\emptyset$ for $s,s'\in [\zeta],\ s\neq s'$, such that:
\begin{enumerate}
\item There exists a compact $K_x\subseteq \mathbb{C}$ such that $x_i\in K_x, \ i=1,\dots,n$.
\item $\operatorname{card}\left(\mathcal{C}_s \right)=\ell_s$ for $s\in [\zeta]$ and $\ell_* = \max_{s\in [\zeta]}\ell_s$;
\item there exist $\tau>1$ and $0<\delta<1$ such that for any $i,j\in \mathcal{C}_s, \ s\in [\zeta]$
\begin{equation*}
\delta \leq \left|x_i-x_j \right|\leq \tau \delta;
\end{equation*}
\item there exist $\eta>1$ and $T>\tau \delta$ such that for any $i\in \mathcal{C}_s,\ j\in \mathcal{C}_{s'}$ where $s,s'\in[\zeta],\ s\neq s'$ \begin{equation*}
T\leq \left|x_i-x_j \right|\leq \eta T.
\end{equation*}
\end{enumerate}
Here $\operatorname{card}\left(A \right)$ stands for the cardinality of the set $A$.
\end{defn}
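Definition \ref{def:cluster} is easy to instantiate numerically. The following Python sketch (all parameter values and helper names are our own, purely illustrative) builds a single cluster of $\ell$ nodes on the unit circle with pairwise chordal distances between $\delta$ and $(\ell-1)\delta$, together with two well-separated nodes, and checks the separation conditions:

```python
import numpy as np
from itertools import combinations

def cluster_nodes(ell=3, delta=1e-3, gap=1.0, n_far=2):
    # angle step chosen so that adjacent chordal distances equal delta exactly
    step = 2 * np.arcsin(delta / 2)
    ang_cluster = step * np.arange(ell)
    # far nodes: separated from the cluster (and from each other) by `gap` radians
    ang_far = ang_cluster[-1] + gap * np.arange(1, n_far + 1)
    return np.exp(1j * ang_cluster), np.exp(1j * ang_far)

xc, xf = cluster_nodes()
d_in = [abs(u - v) for u, v in combinations(xc, 2)]
d_cross = [abs(u - v) for u in xc for v in xf]
# within the cluster: delta <= |x_i - x_j| <= tau*delta with tau = ell - 1 here;
# across clusters:    |x_i - x_j| >= T with T = 2*sin(gap/2) >> tau*delta
```

Here the intra-cluster parameter is $\tau=\ell-1$ and the inter-cluster separation is $T=2\sin(\mathrm{gap}/2)$, both far apart in scale, as the definition requires.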
In the remainder of the paper we will assume that $K_x=\mathbb{S}^1$ is the unit circle (this assumption corresponds to \eqref{eq:ft-of-measure}, which is the model of interest in this paper) and omit $K_x$ from the clustered configuration parameters.
For clustered configurations, in \cite{batenkov2021b} the worst-case bounds for the recovery problem were established as follows. Define the \emph{super-resolution factor} $\textrm{SRF}=(\Omega\delta)^{-1}$.
\begin{thm}[\cite{batenkov2021b}]\label{thm:minmax}
Let $F$ denote the set of signals whose node set forms a cluster with $\ell_1=\ell_*$ and $\ell_2=\ell_3=\dots=\ell_\zeta=1$, and $\{\alpha_j\}_{j=1}^n$ bounded from below and above. For $\textrm{SRF}:={1\over{\Omega\delta}} \geq O(1)$, and
$\epsilon \lessapprox (\Omega\delta)^{2\ell_*-1}$:
\begin{align*}
\mm^{\vec{x},j}(\epsilon,F,\Omega) &\asymp
\begin{cases}
\textrm{SRF}^{2\ell_*-1} \delta \epsilon & x_j \in \mathcal{C}_1, \\
{\epsilon\over\Omega} & x_j \notin \mathcal{C}_1,
\end{cases} \\
\mm^{\vec{\alpha},j}(\epsilon,F,\Omega) &\asymp
\begin{cases}
\textrm{SRF}^{2\ell_*-1} \epsilon & x_j \in \mathcal{C}_1, \\
\epsilon & x_j \notin \mathcal{C}_1.
\end{cases}
\end{align*}
\end{thm}
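To illustrate the scalings in \prettyref{thm:minmax}, the following sketch evaluates the predicted rates, up to the unspecified constants (the parameter values are our own choices):

```python
def minmax_rates(omega, delta, ell_star, eps):
    """Predicted min-max error rates (up to constants) from the theorem:
    in-cluster vs. isolated nodes and amplitudes."""
    srf = 1.0 / (omega * delta)
    return {
        "node_cluster": srf ** (2 * ell_star - 1) * delta * eps,
        "node_isolated": eps / omega,
        "amp_cluster": srf ** (2 * ell_star - 1) * eps,
        "amp_isolated": eps,
    }

rates = minmax_rates(omega=10, delta=1e-3, ell_star=2, eps=1e-9)
# with SRF = 100 and ell_* = 2, in-cluster errors are amplified by SRF^3 = 1e6
```

The example makes the asymmetry of the theorem concrete: the isolated-node errors stay at the noise level, while the in-cluster errors blow up by a factor $\textrm{SRF}^{2\ell_*-1}$.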
For discussion of the relations of \prettyref{thm:minmax} to other works on the subject, the reader is referred to \cite[Section 1.4]{batenkov2021b}. Related results are known in the signal processing literature for the Gaussian noise model: \cite{lee1992} provides similar bounds in terms of the Cram\'er-Rao bound (in the case of a single cluster), and the expression for the threshold SNR \emph{for detection} was shown to scale like $\textrm{SRF}^{-2}$ for $n=2$ in \cite{stoica1995}, which is also consistent with \cite{shahram2005}.
The upper bound on the minmax error is realized by a non-tractable ``oracle-type'' algorithm, which, given a measurement function $g(s)=f_{\mu}(s)+e(s)$, produces \emph{some} signal parameters $\{\alpha_j',x_j'\}_{j=1}^n$ for which $\max_{s\in[-\Omega,\Omega]} |\sum_{j=1}^n \alpha_j' x_j'^s - g(s)| \leq \epsilon$. We are not aware of any tractable method which provably achieves the upper bound on $\mm$, although some partial results in this direction are known. The ESPRIT algorithm (originally proposed in \cite{roy1989}) was analyzed in \cite{li2020a} in the case of a discrete measurement model, showing that the node errors are bounded by $\textrm{SRF}^{2\ell_*-2}\epsilon$ provided that $\epsilon\lessapprox (\Omega\delta)^{4\ell_*-3}/\Omega$. The MUSIC algorithm was partially analyzed in \cite{li2021}, where perturbation bounds on the noise-space correlation function were established; however this analysis does not imply an effective bound on $\Lambda$.
\subsection{Prony's method and its (apparent) instability}
PM reduces the problem to a three-step procedure, which involves solution of two linear systems, in combination with a root-finding step, as described in \prettyref{alg:classical-prony}.
PM is generally considered to be suboptimal for solving \eqref{eq:exp-sum} when $N\gg 2n-1$ (see e.g. \cite{kahn1992, vanblaricum1978} and references therein) due to its inability to utilize the additional measurements. It is also somewhat of a ``folk knowledge'' that PM is ``numerically unstable'', usually attributed to the fact that it involves a root-finding step, while extracting roots is known to be ill-conditioned for root clusters (which is precisely our case of $\textrm{SRF} \gg 1$). While we are not aware of any rigorous numerical analysis of Prony's method in the literature, a rudimentary computation seems to confirm the above claims.
\begin{algorithm2e}[hbt]
\SetKwInOut{Input}{Input} \SetKwInOut{Output}{Output}
\Input{Sequence $\{\tilde{m}_k\equiv f_{\mu}(k)+\epsilon_k\},\;k=0,1,\dots,2n-1$}
\Output{Estimates for the nodes $\{x_j\}$ and amplitudes
$\{\alpha_j\}$}
Construct the Hankel matrix
$$
\tilde{H}_n =
\begin{bmatrix}
\tilde{m}_{0} & \tilde{m}_{1} & \tilde{m}_{2} & \dots & \tilde{m}_{n-1}\\
\vdots\\
\\
\tilde{m}_{n-1} & \tilde{m}_{n} & \tilde{m}_{n+1} & \dots & \tilde{m}_{2n-2}
\end{bmatrix}
$$
Assuming $\operatorname{det}\tilde{H}_n \neq 0$, solve the linear system
$$
\tilde{H}_n \cdot \operatorname{col}\{q_i\}_{i=0}^{n-1} = -\operatorname{col}\{\tilde{m}_i\}_{i=n}^{2n-1}
$$
Compute the roots $\{\tilde{x}_j\}$ of the (perturbed) Prony polynomial $q(z) = z^n+\sum_{j=0}^{n-1} q_{j}z^j$ \;
Construct $\tilde{V}:= \big[\tilde{x}_j^k\big]_{k=0,\dots,n-1}^{j=1,\dots,n}$ and solve the linear system
$$
\tilde{V} \cdot \operatorname{col}\{\tilde{\alpha}_i\}_{i=1}^n = \operatorname{col}\{\tilde{m}_i\}_{i=0}^{n-1}
$$
\Return the estimated $\tilde{x}_j$ and $\tilde{\alpha}_j$.
\caption{The Classical Prony's method}
\label{alg:classical-prony}
\end{algorithm2e}
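A direct implementation of \prettyref{alg:classical-prony} takes only a few lines. The sketch below (Python/NumPy; the function name \texttt{prony} is ours) recovers the nodes and amplitudes exactly in the noiseless regime:

```python
import numpy as np

def prony(m):
    """Classical Prony's method from 2n moments m_0, ..., m_{2n-1}."""
    m = np.asarray(m, dtype=complex)
    n = len(m) // 2
    # Steps 1-2: Hankel system for the Prony polynomial coefficients q_0..q_{n-1}
    H = np.array([[m[i + j] for j in range(n)] for i in range(n)])
    q = np.linalg.solve(H, -m[n:2 * n])
    # Step 3: roots of q(z) = z^n + q_{n-1} z^{n-1} + ... + q_0
    x = np.roots(np.concatenate([[1.0], q[::-1]]))
    # Step 4: Vandermonde system V[k, j] = x_j^k for the amplitudes
    V = np.vander(x, n, increasing=True).T
    a = np.linalg.solve(V, m[:n])
    return x, a

# noiseless sanity check, including a pair of nearby nodes on the unit circle
x_true = np.exp(2j * np.pi * np.array([0.0, 0.01, 0.3]))
a_true = np.array([1.0, -2.0, 0.5])
moments = np.array([(a_true * x_true ** k).sum() for k in range(6)])
x_est, a_est = prony(moments)
```

With exact moments the recovered $\tilde{x}_j,\tilde{\alpha}_j$ agree with the true parameters up to roundoff; the stability questions studied in this paper concern what happens once the moments are perturbed.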
The following is proved in \prettyref{app:proof-prony-naive}.
\begin{prop}\label{prop:prony-naive-analysis}
Suppose $\vec{x}$ forms a single cluster ($\ell=n$) configuration and $\Omega=2n-1$. If the perturbations $\epsilon_k$ in the moments $m_k$ satisfy $|\epsilon_k|\leqslant\epsilon\ll 1$, then
\begin{enumerate}
\item The coefficients of the Prony polynomial are recovered with accuracy
$$
|p_i-q_i|\lessapprox \delta^{3-3\ell}\epsilon.
$$
\item The nodes $\{x_j\}_{j=1}^n$ are recovered by Prony's method (Algorithm \ref{alg:classical-prony}) with accuracy $|\tilde{x}_j-x_j| \lessapprox \delta^{4-4\ell}\epsilon$.
\end{enumerate}
\end{prop}
Apparent difficulties arise also when estimating the errors in recovering the $\alpha_j$'s, even assuming that the nodes have been recovered with optimal accuracy. The following is proved in \prettyref{app:proof-prony-naive-ampl}.
\begin{prop}\label{prop:ampl-naive}
Let $\vec{m}=\operatorname{col}\{m_i\}_{i=0}^{n-1}$, $\vec{\tilde{m}}=\operatorname{col}\{\tilde{m}_i\}_{i=0}^{n-1}$. Under the assumptions of \prettyref{prop:prony-naive-analysis}, for $\epsilon \lessapprox \delta^{3\ell-3}$, let $\tilde{\vec{x}}$ be a node vector satisfying $|\tilde{x}_j-x_j|\lessapprox \delta^{2-2\ell}\epsilon$. Let $\tilde{\vec{\alpha}}=\operatorname{col}\{\tilde{\alpha}_i\}_{i=1}^{n}$ be the solution of the linear system $\tilde{V} \tilde{\vec{\alpha}} = \tilde{\vec{m}}$ where $\tilde{V}$ is the Vandermonde matrix corresponding to $\tilde{\vec{x}}$, and $\|\tilde{\vec{m}}-\vec{m}\|_{\infty} \leq \epsilon$. Then $|\tilde{\alpha}_j-\alpha_j|\lessapprox \delta^{3-3\ell}\epsilon$ for all $j=1,\dots,n$.
\end{prop}
Now let us consider the setting of \prettyref{sub:superres-intro} again, and suppose that $\ell=n$, $\Omega$ is constant and $\delta \rightarrow 0$. The above analysis suggests that Prony's method should not be used for solving \eqref{eq:exp-sum} already in this simplified setting, as the estimates derived in Propositions~\ref{prop:prony-naive-analysis} and \ref{prop:ampl-naive} are clearly suboptimal with respect to the min-max error of Theorem \ref{thm:minmax}. However, the actual numerical performance of Prony's method turns out to be much more accurate. Indeed, a simple numerical experiment (Figure \ref{fig:prony-initial-check}) demonstrates that Prony's method exhibits cancellation of errors in both steps 2 and 3, resulting in errors of the order $\delta^{2-2\ell}\epsilon$ for both the coefficients of the Prony polynomial and the roots themselves. On the other hand, replacing the error vector $\Delta\vec{p}$ with a random complex vector $\Delta\vec{r}$ such that $|\Delta r_j|=|\Delta p_j|=|q_j-p_j|$, we observe that the corresponding root perturbations are of the order $\delta^{3-3\ell}\epsilon$. Thus, the analysis should take into account both the Hankel structure of $\tilde{H}_n$ in step 2, and the inter-relations between the errors in different coefficients in step 3.
The bound in \prettyref{prop:ampl-naive} turns out to be extremely pessimistic as well, as evident from Figure \ref{fig:prony-initial-check} (right panel). Here again the bound is not attained as described, the Vandermonde structure of $V$ (resp. $\tilde{V}$) evidently playing a crucial role with regard to stability. Further experiments suggesting the asymptotic optimality of Prony's method, both in terms of the stability coefficient and the threshold SNR, have been recently reported in \cite{decimatedProny} in the clustered geometry. Motivated by the numerical evidence as described, in this paper we close this gap and derive the true error tolerance of Prony's method.
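The experiment behind Figure \ref{fig:prony-initial-check} (middle panel) can be reproduced qualitatively in a few lines. The sketch below is our own minimal version with assumed parameters $n=\ell=2$, $\delta=10^{-3}$, $\epsilon=10^{-12}$; it compares the root errors of PM against the roots obtained from a same-magnitude random perturbation of the Prony coefficients:

```python
import numpy as np
rng = np.random.default_rng(0)

n, delta, eps = 2, 1e-3, 1e-12          # single cluster, ell = n = 2
x = np.exp(1j * delta * np.arange(n))   # two nodes at chordal distance ~delta
a = np.array([1.0, 1.0])
m = np.array([(a * x ** k).sum() for k in range(2 * n)])

def prony_coeffs(mom):
    H = np.array([[mom[0], mom[1]], [mom[1], mom[2]]])
    return np.linalg.solve(H, -mom[n:])          # [q0, q1]

p = prony_coeffs(m)                              # exact coefficients
q = prony_coeffs(m + eps * (rng.standard_normal(2 * n)
                            + 1j * rng.standard_normal(2 * n)))

roots = lambda c: np.roots([1.0, c[1], c[0]])    # z^2 + c1*z + c0
err = lambda c: max(min(abs(r - xi) for r in roots(c)) for xi in x)

err_prony = err(q)
# random perturbation of p with the same entrywise magnitude as q - p
r = p + np.abs(q - p) * np.exp(2j * np.pi * rng.random(n))
err_random = err(r)
# err_prony behaves like delta^(2-2l)*eps, err_random like delta^(3-3l)*eps
```

With these values the predicted scales are $\delta^{-2}\epsilon=10^{-6}$ for PM versus $\delta^{-3}\epsilon=10^{-3}$ for the unstructured perturbation, a three-orders-of-magnitude gap.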
\begin{figure}
\begin{center}
\includegraphics[width=0.32\linewidth]{./figures/fig1-1-DP}
\includegraphics[width=0.32\linewidth]{./figures/fig1-2-DP}
\includegraphics[width=0.32\linewidth]{./figures/fig1-3-DP}
\end{center}
\caption{Left: the condition number $\kappa(H_n)$ of the Hankel matrix scales as $\delta^{2-2\ell}$. Middle: both $\Delta\vec{q}$ and $|x_j-\tilde{x}_j|$ scale as $\delta^{2-2\ell}\epsilon$, showing that the errors in the coefficients of the Prony polynomial are not independent. For comparison, choosing a random perturbation of $\vec{p}$ with same magnitude as $\Delta\vec{q}$ and computing the roots $y_j$ of the resulting polynomial, we observe that $|y_j-x_j| = O(\delta^{3-3\ell}\epsilon)$, as predicted by \prettyref{prop:prony-naive-analysis}. Right: the amplitude errors committed by Prony's method (blue) and by replacing the recovered nodes with random perturbations (red). Here, in contrast, the bound of \prettyref{prop:ampl-naive} is not attained. \emph{All computations are done in floating-point arithmetic, with $\epsilon=10^{-15}$.}}
\label{fig:prony-initial-check}
\end{figure}
\section{Main results}\label{sec:main-results}
Consider Prony's method in Algorithm \ref{alg:classical-prony}. We will consider the number of nodes (resp. amplitudes) $n$ to be \emph{fixed}. Denote the (true) nodes and the amplitudes of Prony's problem as $\left\{x_j \right\}_{j=1}^n$ and $\left\{\alpha_j \right\}_{j=1}^n$, respectively. We further assume that the nodes form a clustered configuration as in Definition \ref{def:cluster}, whereas the amplitudes satisfy $\mathfrak{m}_{\alpha}\leq |\alpha_i|\leq \mathfrak{M}_{\alpha}, \ i=1,\dots,n$ for some $0<\mathfrak{m}_{\alpha}<\mathfrak{M}_{\alpha}$. The corresponding Hankel matrix is denoted by $H_n$,
whereas the corresponding (unperturbed) monic Prony polynomial is given by
\begin{equation}\label{eq:prony-monic-def}
p\left(z\right)=\prod_{i=1}^{n}\left(z-x_{i}\right) = z^n+\sum_{j=0}^{n-1}p_{j}z^{j}.
\end{equation}
The coefficients of $p(z)$ are obtained by solving the linear system
\begin{align}
&H_n \cdot \operatorname{col}\left\{ p_i\right\}_{i=0}^{n-1} = - \operatorname{col}\left\{m_i \right\}_{i=n}^{2n-1}, \quad
H_n: = \begin{bmatrix}
m_0 & m_1 & \dots &m_{n-1}\\
\vdots & \vdots & \vdots & \vdots\\
m_{n-1} & m_n & \dots & m_{2n-2}
\end{bmatrix}.\label{eq:UnpertHanekl}
\end{align}
We assume that the algebraic moments $m_{k}=\sum_{j=1}^{n}\alpha_{j}x_{j}^{k}$ are measured with perturbations (disturbances) of size $\epsilon\geq 0$. The latter give rise to a perturbed Hankel matrix
\begin{align}
&\tilde{H}_n: = H_n+\epsilon \mathrm{D}, \quad \mathrm{D} : = \begin{bmatrix}
d_0 & d_1 & \dots &d_{n-1}\\
\vdots & \vdots & \vdots & \vdots\\
d_{n-1} & d_n & \dots & d_{2n-2}
\end{bmatrix}, \quad d_i = \text{O}(1), \ i\in \left\{0,\dots, 2n-2 \right\}.\label{eq:PertHankel}
\end{align}
The solution of the linear system
\begin{align}
&\tilde{H}_n\cdot \operatorname{col}\left\{q_i \right\}_{i=0}^{n-1} = - \operatorname{col}\left\{m_i+\epsilon d_i \right\}_{i=n}^{2n-1} \label{eq:PertHanekl}
\end{align}
provides the coefficients of a perturbed monic Prony polynomial
\begin{equation}\label{eq:q-monic-def}
q\left(z;\left\{ d_i\right\},\epsilon\right)=\prod_{i=1}^{n}\left(z-\tilde{x}_{i}\right) = z^n+\sum_{j=0}^{n-1}q_{j}z^{j}.
\end{equation}
The roots of $q\left(z;\left\{ d_i\right\},\epsilon\right)$ are then used to obtain the perturbed amplitudes $\left\{\tilde{\alpha}_j \right\}_{j=1}^n$ (see Algorithm \ref{alg:classical-prony}). The goal of this work is to derive efficient bounds on the errors $\left\{\left|x_j -\tilde{x}_j \right| \right\}_{j=1}^n$ and $\left\{\left|\alpha_j -\tilde{\alpha}_j \right| \right\}_{j=1}^n$, depending on $\epsilon$ and to derive the condition on $\epsilon$ which ensures the bounds.
In this paper we use the ``homogeneous'' version given in Algorithm \ref{alg:homo-prony}, which is computationally equivalent to \prettyref{alg:classical-prony}.
\begin{algorithm2e}[hbt]
\SetKwInOut{Input}{Input} \SetKwInOut{Output}{Output}
\Input{Sequence $\{\tilde{m}_k\},\;k=0,1,\dots,2n-1$}
\Output{Estimates for the nodes $\{x_j\}$ and amplitudes
$\{\alpha_j\}$}
Compute the roots $\{\tilde{x}_j\}$ of the Prony polynomial
\begin{equation}\label{eq:qbar-def}
\bar{q}(z)=\operatorname{det}\begin{bmatrix}1 & z & z^{2} & \dots & z^{n-1} & z^{n}\\
\tilde{m}_{0} & \tilde{m}_{1} & \tilde{m}_{2} & \dots & \tilde{m}_{n-1} & \tilde{m}_{n}\\
\vdots\\
\\
\tilde{m}_{n-1} & \tilde{m}_{n} & \tilde{m}_{n+1} & \dots & \tilde{m}_{2n-2} & \tilde{m}_{2n-1}
\end{bmatrix}.
\end{equation}
Solve the linear system $\tilde{V} \tilde{\vec{\alpha}} = \tilde{\vec{m}}$, where $\tilde{\vec{m}} = \operatorname{col}\{\tilde{m}_i\}_{i=0}^{n-1}$ \;
\Return the estimated $\tilde{x}_j$ and $\tilde{\alpha}_j$.
\caption{The Homogeneous Prony's method}
\label{alg:homo-prony}
\end{algorithm2e}
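Step 1 of \prettyref{alg:homo-prony} amounts to reading off the coefficients of $\bar{q}(z)$ from the determinant \eqref{eq:qbar-def}: expanding along the first row, the coefficient of $z^{i}$ is the signed maximal minor of the moment block with column $i$ deleted. A minimal sketch (helper names are ours):

```python
import numpy as np

def homogeneous_prony_nodes(mt):
    """Roots of the homogeneous Prony polynomial, with coefficients obtained
    from the cofactor expansion of the determinant along its first row."""
    mt = np.asarray(mt, dtype=complex)
    n = len(mt) // 2
    # the n x (n+1) moment block below the row of powers of z
    M = np.array([[mt[i + j] for j in range(n + 1)] for i in range(n)])
    # coefficient of z^i equals (-1)^i * det(M with column i deleted)
    coeffs = np.array([(-1) ** i * np.linalg.det(np.delete(M, i, axis=1))
                       for i in range(n + 1)])
    return np.roots(coeffs[::-1])   # np.roots expects highest degree first

# noiseless sanity check with two nodes on the unit circle
x_true = np.exp(2j * np.pi * np.array([0.1, 0.35]))
mom = np.array([(np.array([1.0, 2.0]) * x_true ** k).sum() for k in range(4)])
x_est = homogeneous_prony_nodes(mom)
```

Note that the leading coefficient is $\operatorname{det}\tilde{H}_n$, consistent with the monic normalization relation $\bar{q}(z)=(-1)^n\operatorname{det}(\tilde{H}_n)\,q(z)$ used later in the paper.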
We now state the main theorems of this work. The node errors $\left\{\left|x_i-\tilde{x}_i \right|\right\}_{i=1}^n$ are bounded as follows.
\begin{thm}\label{thm:node-accuracy-Dima}
Suppose that the node $x_j$ belongs to a cluster of size $\ell_j$, and $\ell_*$ is the size of the largest cluster in $\left\{x_j \right\}_{j=1}^n$. Denote $m_k=\sum_{i=1}^n \alpha_i x_i^k$. Let $\left\{d_i\right\}_{i=0}^{2n-1}=O(1)$ be tolerance coefficients. For each $\epsilon$, let $\tilde{x}_j=\tilde{x}_j(\left\{ d_i\right\},\epsilon), \ j=1,\dots,n$ be the exact roots of the Prony polynomial $\bar{q}\left(z;\left\{ d_i\right\},\epsilon\right)$ in \eqref{eq:qbar-def}. Then for $\epsilon \lessapprox \delta^{2\ell_*-1}$ and each $j=1,\dots,n$ we have $|x_j-\tilde{x}_j| \lessapprox \delta^{2-2\ell_j}\epsilon$.
\end{thm}
Theorem \ref{thm:node-accuracy-Dima} then leads to the following result for the amplitude errors $\left\{\left|\alpha_i-\tilde{\alpha}_i \right|\right\}_{i=1}^n$.
\begin{thm}\label{thm:coeffs-accuracy-Dima}
Let $\left\{d_i \right\}_{i=0}^{2n-1}, \left\{\breve{d}_i \right\}_{i=0}^{n-1} = O(1)$ be two sets of tolerance coefficients. Under the assumptions of \prettyref{thm:node-accuracy-Dima}, and for each $\epsilon$ satisfying the condition $\epsilon\lessapprox\delta^{2\ell_*-1}$, let $\tilde{x}_j=\tilde{x}_j(\left\{ d_i\right\},\epsilon), \ j=1,\dots,n$ denote the exact roots of the perturbed Prony polynomial $\bar{q}\left(z;\left\{ d_i\right\},\epsilon\right)$, and consider $\operatorname{col}\left\{\tilde{\alpha}_i \right\}_{i=1}^n$ which satisfies
\begin{align*}
&\tilde{V}\cdot\operatorname{col}\left\{\tilde{\alpha}_i \right\}_{i=1}^n = \operatorname{col}\left\{m_i+\epsilon \breve{d}_i \right\}_{i=0}^{n-1},\quad \tilde{V} = \begin{bmatrix}
1& \dots &1 &1\\
\tilde{x}_1 & \dots & \tilde{x}_{n-1}& \tilde{x}_n\\
\vdots & \vdots & \vdots&\vdots\\
\tilde{x}_1^{n-1} & \dots & \tilde{x}_{n-1}^{n-1}& \tilde{x}_n^{n-1}
\end{bmatrix}.
\end{align*}
Then for each $j=1,\dots,n$ we have
$$
|\alpha_j-\tilde{\alpha}_j| \lessapprox
\begin{cases}
\delta^{1-2\ell_j}\epsilon & \ell_j > 1;\\
\epsilon & \ell_j=1.
\end{cases}
$$
\end{thm}
As a consequence, we show in \prettyref{thm:finite-precision} that \prettyref{alg:classical-prony} (and thus also \prettyref{alg:homo-prony}) retains the stability bounds when executed in finite precision arithmetic. However, we require several auxiliary definitions in order to state the precise result, and thus we defer these developments to \prettyref{sec:finite-precision}.
Some preliminary results, which are needed for the proofs, are given in Section \ref{Sec:Prelims}. The proof of Theorem \ref{thm:node-accuracy-Dima} is the subject of Section \ref{Sec:PfMainthm}, while the proof of Theorem \ref{thm:coeffs-accuracy-Dima} is given in Section \ref{Sec:PfAmplitude}. Some numerical results are presented in \prettyref{sec:numerics}.
\section{Preliminary results}\label{Sec:Prelims}
The following lemma will be useful for obtaining an explicit representation of the perturbed Prony polynomial $\bar{q}(z)$ as in \eqref{eq:qbar-def}.
\begin{lem}\label{Lem:HomogPronyPoly}
Let
\begin{align}
\bar{p}\left(z\right)=\operatorname{det}\begin{bmatrix}1 & z & z^{2} & \dots & z^{n-1} & z^{n}\\
m_{0} & m_{1} & m_{2} & \dots & m_{n-1} & m_{n}\\
\vdots\\
\\
m_{n-1} & m_{n} & m_{n+1} & \dots & m_{2n-2} & m_{2n-1}
\end{bmatrix}\label{eq:HomogPronyPol}.
\end{align}
Furthermore, let $\tilde{m}_j=m_j+\epsilon d_j, \ j=0,\dots ,2n-1$, let $\bar{q}(z)$ be given by \eqref{eq:qbar-def}, and let $p(z),q(z)$ be the monic Prony polynomials given in \eqref{eq:prony-monic-def}, \eqref{eq:q-monic-def}. Then
\begin{align}
\bar{p}(z) &= (-1)^n\operatorname{det}\left(H_n \right)p(z)\label{eq:HomogPronyPol1}\\
\bar{q}(z) &= (-1)^n\operatorname{det}\left(\tilde{H}_n \right)q(z)\label{eq:HomogPronyPol2}
\end{align}
\end{lem}
\begin{proof}
See Appendix \ref{Lem:HomogPronyPolyPf}.
\end{proof}
By Lemma \ref{Lem:HomogPronyPoly} and \prettyref{prop:vand-decomp} we immediately have
\begin{align}\label{eq:PbarExplicitExpr}
&\bar{p}(z) = (-1)^n\left(\prod_{k=1}^n \alpha_k\right) \cdot \left(\prod_{1\leq m<\ell\leq n}\left(x_{\ell}-x_m \right)^2\right)\cdot \left(\prod_{m=1}^n(z-x_m) \right).
\end{align}
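The identity \eqref{eq:PbarExplicitExpr} is straightforward to verify numerically; the following sketch (with our own parameter choices) compares both sides at a test point:

```python
import numpy as np
rng = np.random.default_rng(1)

n = 3
x = np.exp(2j * np.pi * rng.random(n))   # nodes on the unit circle
a = rng.random(n) + 0.5                   # amplitudes bounded away from zero
m = np.array([(a * x ** k).sum() for k in range(2 * n)])

def pbar(z):
    # the (n+1) x (n+1) determinant defining the homogeneous Prony polynomial
    top = z ** np.arange(n + 1)
    M = np.array([[m[i + j] for j in range(n + 1)] for i in range(n)])
    return np.linalg.det(np.vstack([top, M]))

def rhs(z):
    vdm_sq = np.prod([(x[l] - x[k]) ** 2
                      for k in range(n) for l in range(k + 1, n)])
    return (-1) ** n * np.prod(a) * vdm_sq * np.prod(z - x)

z0 = 0.3 + 0.7j
```

The squared Vandermonde factor is exactly $\operatorname{det}(H_n)$, which is why $\bar{p}(z)$ vanishes to high order in $\delta$ when the nodes cluster.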
{ We introduce several definitions that are essential for deriving an explicit $\epsilon$ expansion of the perturbed Prony polynomial $\bar{q}\left(z;\left\{ d_i\right\},\epsilon\right)$. The notations correspond to\footnote{In the original version of \cite{horn2012matrix} there are several typos in Section 0.8.12, which are fixed in the official errata.} \cite[Section 0.8.12]{horn2012matrix}. Given any $\kappa \in \left\{1,\dots,n+1 \right\}$, let $\mathcal{Q}_{\kappa}^{n+1}$ be the set of all increasing sequences of elements from $[n+1]$ of length $\kappa$, namely
\begin{equation}\label{eq:QkapDef}
\mathcal{Q}_{\kappa}^{n+1}=\left\{\left(i_1,\dots,i_{\kappa} \right) \ | \ 1\leq i_1<\dots<i_{\kappa}\leq n+1\right\}.
\end{equation}
The latter is an ordered set with the standard lexicographic order. Given a matrix $A\in\mathbb{C}^{n\times n}$, let $C_r(A)\in \mathbb{C}^{\binom{n}{r}\times \binom{n}{r}}, \ r\in [n]$ be the $r$-th multiplicative compound of $A$, which consists of all $r\times r$ minors of $A$. Namely, the rows and columns of $C_r(A)$ are indexed by $\beta,\gamma\in \mathcal{Q}^n_{r}$ with the entry $\left[ C_r(A)\right]_{\beta,\gamma}$ being the determinant of the $r\times r$ submatrix obtained by choosing from $A$ the rows in $\beta$ and columns in $\gamma$. We further define $C_0\left(A\right) = 1$. Let $\operatorname{adj}_r(A), \ r\in [n-1] $ be the $r$-th adjugate of $A$, where we define $\operatorname{adj}_n(A)=1$ and $\operatorname{adj}_0(A)=\operatorname{det}{A}$. In particular, $\operatorname{adj}_1(A)$ is the standard adjugate of $A$.
Introducing
\begin{equation}\label{PronPert1}
\begin{array}{lll}
&G(z)=\left[\begin{array}{ccccc}
1 & z & \ldots & z^{n-1} & z^n \\
m_0 & m_1 & \ldots & m_{n-1} & m_n \\
& &\vdots & & \\
m_{n-1} & m_n & \ldots & m_{2 n-2} & m_{2 n-1}
\end{array}\right], \ D = \begin{bmatrix}
0 & \ldots & 0 \\
d_0 & \dots & d_n \\
\vdots\\
d_{n-1} & \dots & d_{2 n-1}
\end{bmatrix}
\end{array}
\end{equation}
and using \eqref{eq:PertHankel} and \eqref{eq:HomogPronyPol}, we see that the perturbed Prony polynomial satisfies
$\bar{q}\left(z;\left\{ d_i\right\},\epsilon\right) = \operatorname{det}(G(z)+\epsilon D)$. By \cite[Formula 0.8.12.3]{horn2012matrix}, we have the expansion
\begin{equation}\label{eq:CompundExpansion}
\bar{q}\left(z;\left\{ d_i\right\},\epsilon\right) = \sum_{\kappa=0}^{n+1} \epsilon^{\kappa}\theta_{n+1-\kappa}(z), \quad \theta_{n+1-\kappa}(z) = \operatorname{tr}\left( \operatorname{adj}_{n+1-\kappa}(D) C_{n+1-\kappa}(G(z))\right).
\end{equation}
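The compound/adjugate machinery can be checked numerically as well. The sketch below implements $C_r(A)$ and $\operatorname{adj}_r(A)$ directly from their minor-based definitions (the sign convention is our reading of \cite[Section 0.8.12]{horn2012matrix}) and verifies both $C_r(A)\operatorname{adj}_r(A)=\operatorname{det}(A)\,I$ and the expansion of $\operatorname{det}(A+\epsilon B)$ for a random $3\times 3$ pair:

```python
import numpy as np
from itertools import combinations

def _det(M):
    # determinant of the empty (0 x 0) matrix is 1 by convention
    return 1.0 if M.size == 0 else np.linalg.det(M)

def compound(A, r):
    """r-th multiplicative compound: all r x r minors in lexicographic order."""
    n = A.shape[0]
    idx = list(combinations(range(n), r))
    return np.array([[_det(A[np.ix_(b, g)]) for g in idx] for b in idx])

def adj_r(A, r):
    """r-th adjugate: signed complementary minors, transposed, so that
    compound(A, r) @ adj_r(A, r) = det(A) * I."""
    n = A.shape[0]
    idx = list(combinations(range(n), r))
    comp = lambda s: [i for i in range(n) if i not in s]
    return np.array([[(-1) ** (sum(b) + sum(g))
                      * _det(A[np.ix_(comp(g), comp(b))])
                      for g in idx] for b in idx])

rng = np.random.default_rng(2)
A, B, eps = rng.standard_normal((3, 3)), rng.standard_normal((3, 3)), 0.37
lhs = np.linalg.det(A + eps * B)
rhs = sum(eps ** k * np.trace(adj_r(B, 3 - k) @ compound(A, 3 - k))
          for k in range(4))
```

The terms with $k$ factors of $\epsilon$ collect exactly the contributions where $k$ rows come from $B$, which is the structure exploited by Lemma \ref{lem:qbarAsymp}.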
Fixing $\kappa\in [n+1]\cup \{0\}$, the following lemma provides an explicit description of the coefficient of $\epsilon^{\kappa}$ in \eqref{eq:CompundExpansion}.
\begin{lem}\label{lem:qbarAsymp}
The coefficient of $\epsilon^{\kappa}$ in \eqref{eq:CompundExpansion} is given by
\begin{equation*}
\begin{array}{lll}
&\theta_{n+1-\kappa}(z) = \begin{cases}
\bar{p}(z), \qquad \kappa = 0 \vspace{0.1cm} \\
\sum_{\gamma \in \mathcal{Q}_{n+1-\kappa}^{n+1}}\sum_{\beta \in \mathcal{Q}^{n+1}_{n+1-\kappa}: 1\in \beta}\left[\operatorname{adj}_{n+1-\kappa}(D) \right]_{\gamma, \beta} \Gamma_{\beta,\gamma}(z),\qquad 1\leq \kappa \leq n-1 \vspace{0.1cm} \\
\sum_{i=1}^{n+1}\left[\operatorname{adj}(D) \right]_{i, 1}z^{i-1}, \qquad \kappa = n \vspace{0.1cm} \\
\operatorname{det}(D), \qquad \kappa = n+1
\end{cases} ,
\end{array}
\end{equation*}
where for any $\gamma,\beta\in \mathcal{Q}_{n+1-\kappa}^{n+1}, \ 1\leq \kappa \leq n-1$ such that $1\in \beta$, the term $\Gamma_{\beta,\gamma}(z)$ is given by
\begin{equation*}
\begin{array}{lll}
\Gamma_{\beta,\gamma}(z) = \operatorname{det}\begin{bmatrix}
z^{\mathrm{a}} & z^{\mathrm{a}+\mathrm{k}_1} & \ldots & z^{\mathrm{a}+\mathrm{k}_{n-\kappa}} \\
m_{\mathrm{b}} & m_{\mathrm{b}+\mathrm{k}_1} & \ldots & m_{\mathrm{b}+\mathrm{k}_{n-\kappa}} \\
m_{\mathrm{b}+\mathrm{l}_1} & m_{\mathrm{b}+\mathrm{l}_1+\mathrm{k}_1} & \ldots & m_{\mathrm{b}+\mathrm{l}_1+\mathrm{k}_{n-\kappa}} \\
\vdots\\
m_{\mathrm{b}+\mathrm{l}_{n-\kappa-1}} & m_{\mathrm{b}+\mathrm{l}_{n-\kappa-1}+\mathrm{k}_{1}} & \ldots & m_{\mathrm{b}+\mathrm{l}_{n-\kappa-1}+\mathrm{k}_{n-\kappa}}
\end{bmatrix}_{n-\kappa+1}
\end{array}
\end{equation*}
where
\begin{equation}\label{eq:GammaParam}
\begin{array}{lll}
& \gamma = \left(i_1,\dots,i_{n+1-\kappa} \right), \quad \beta = \left(1,j_1,\dots,j_{n-\kappa} \right),\\
& \mathrm{a} = i_1-1, \quad \mathrm{b} = (j_1 -2) + (i_1-1),\\
&\mathrm{k}_1 = i_2-i_1, \dots, \mathrm{k}_{n-\kappa}=i_{n+1-\kappa}-i_1,\\
&\mathrm{l}_1 = j_2-j_1,\dots, \mathrm{l}_{n-\kappa-1} = j_{n-\kappa}-j_1.
\end{array}
\end{equation}
\end{lem}
\begin{proof}
See Appendix \ref{lem:qbarAsympPf}.
\end{proof}
According to Lemma \ref{lem:qbarAsymp}, we have
\begin{equation}\label{eq:rdef}
\begin{array}{lll}
r(z)&=r\left(z;\left\{ d_i\right\},\epsilon\right) := \bar{q}\left(z;\left\{ d_i\right\},\epsilon\right) - \bar{p}(z) \\
&=\sum_{\kappa=1}^{n-1}\epsilon^{\kappa}\left[\sum_{\gamma \in \mathcal{Q}^{n+1}_{n+1-\kappa}}\sum_{\beta \in \mathcal{Q}^{n+1}_{n+1-\kappa}: 1\in \beta}\left[\operatorname{adj}_{n+1-\kappa}(D) \right]_{\gamma, \beta} \Gamma_{\beta,\gamma}(z) \right]\\
&\hspace{3mm}+\epsilon^n\sum_{i=1}^{n+1}\left[\operatorname{adj}(D) \right]_{i, 1}z^{i-1} + \epsilon^{n+1}\operatorname{det}(D).
\end{array}
\end{equation}
We aim to obtain an upper bound on $r(z)$ in terms of the clustered configuration parameters (see Definition \ref{def:cluster}). For that purpose, we consider the expansion \eqref{eq:rdef} and proceed with analyzing $\Gamma_{\beta,\gamma}(z)$ (subject to \eqref{eq:GammaParam}), in the case $1\leq \kappa\leq n-1$.
\begin{thm}\label{Lem:GammaEval}
Let $1\leq \kappa \leq n-1$. Consider $\Gamma_{\beta,\gamma}(z)$ in Lemma \ref{lem:qbarAsymp}, subject to \eqref{eq:GammaParam}. There exist polynomials $\phi^{\beta,\gamma}_{(\omega_1,\dots,\omega_{n-\kappa})}(z), \ (\omega_1,\dots,\omega_{n-\kappa})\in \mathcal{Q}_{n-\kappa}^n$ (see \eqref{eq:phi-explicit} in Appendix \ref{lem:GammBetAlphExp} for an explicit description of $\phi^{\beta,\gamma}_{(\omega_1,\dots,\omega_{n-\kappa})}(z)$) such that
\begin{equation}\label{eq:GammabetgamExp}
\begin{array}{lll}
\Gamma_{\beta,\gamma}(z)&=\sum_{(\omega_1,\dots,\omega_{n-\kappa})\in \mathcal{Q}_{n-\kappa}^n}\left\{\left(\prod_{s=1}^{n-\kappa} (x_{\omega_s}-z) \right)\left(\prod_{1\leq s<t\leq n-\kappa } (x_{\omega_t}-x_{\omega_s})^2 \right)\phi^{\beta,\gamma}_{(\omega_1,\dots,\omega_{n-\kappa})}(z)\right\}.
\end{array}
\end{equation}
Here $\left\{x_j \right\}_{j=1}^n$ and $\left\{\alpha_j \right\}_{j=1}^n$ are the nodes and amplitudes, respectively, of the noiseless Prony's problem.
\end{thm}
\begin{proof}
See Appendix \ref{lem:GammBetAlphExp}.
\end{proof}
\begin{remark}
In Theorem \ref{Lem:GammaEval} with $\kappa = n-1$ we use the convention that $\prod_{1\leq s<t\leq n-\kappa } (x_{\omega_t}-x_{\omega_s})^2=1$.
\end{remark}
Combining \eqref{eq:rdef} and Theorem \ref{Lem:GammaEval}, we obtain the following representation of $r(z)$:
\begin{equation}\label{eq:rdef1}
\begin{array}{lll}
r(z)&=\epsilon^{n+1}\operatorname{det}(D)+\epsilon^n\sum_{i=1}^{n+1}\left[\operatorname{adj}(D) \right]_{i, 1}z^{i-1}\\
&+\sum_{\kappa=1}^{n-1}\epsilon^{\kappa}\left[\sum_{\gamma \in \mathcal{Q}^{n+1}_{n+1-\kappa}}\sum_{\beta \in \mathcal{Q}^{n+1}_{n+1-\kappa}: 1\in \beta}\sum_{(\omega_1,\dots,\omega_{n-\kappa})\in \mathcal{Q}_{n-\kappa}^n}\left[\operatorname{adj}_{n+1-\kappa}(D) \right]_{\gamma, \beta} \right.\\
&\hspace{18mm}\left.\times \left(\prod_{s=1}^{n-\kappa} (x_{\omega_s}-z) \right)\left(\prod_{1\leq s<t\leq n-\kappa } (x_{\omega_t}-x_{\omega_s})^2 \right)\phi^{\beta,\gamma}_{(\omega_1,\dots,\omega_{n-\kappa})}(z) \right].
\end{array}
\end{equation}
Recall the definition of a clustered configuration (Definition \ref{def:cluster}) and consider the expansion \eqref{eq:rdef1}. Since we assume that $\left\{d_i\right\}_{i=0}^{2n-1}=O(1)$ (see Theorem \ref{thm:node-accuracy-Dima}), we have that $\left[\operatorname{adj}(D) \right]_{i, 1}=O(1), \ 1\leq i \leq n+1$ and $\left[\operatorname{adj}_{n+1-\kappa}(D) \right]_{\gamma, \beta} = O(1)$ for any $\gamma,\beta\in \mathcal{Q}_{n+1-\kappa}^{n+1},\ 1\leq \kappa \leq n-1, \ 1\in \beta$. Furthermore, taking into account the continuity of the polynomials $\phi^{\beta,\gamma}_{(\omega_1,\dots,\omega_{n-\kappa})}(z)$, we have that for any $K\subset \mathbb{C}$ compact, we can find a constant $C_1 = C_1(K,\mathfrak{M}_{\alpha},n,\eta,T)$ (i.e., depending on the compact $K$, the clustered configuration parameters $n,\eta,T$ and the upper bound on the amplitudes) such that for all $\gamma,\beta\in \mathcal{Q}_{n+1-\kappa}^{n+1},\ 1\leq \kappa \leq n-1, \ 1\in \beta$ and $(\omega_1,\dots,\omega_{n-\kappa})\in \mathcal{Q}_{n-\kappa}^n$
\begin{equation}\label{eq:phibound} \left\|\phi^{\beta,\gamma}_{(\omega_1,\dots,\omega_{n-\kappa})} \right\|_{L^{\infty}(K)}\leq C_1
\end{equation}
uniformly in $\delta<1$.}
{
\subsection{First-order asymptotic constant}
It may be of interest to have an explicit expression for the first-order (in $\epsilon$) term of the node error $\eta_j:=x_j-\tilde{x}_j$ as $\epsilon\to 0^+$. The following lemma provides this term.
\begin{lem}\label{lem:etaj}
The following holds for all $j=1,\dots,n$:
\begin{equation}
\eta_j:=x_j-\tilde{x}_j=(-1)^{n+1} \frac{\Psi_j(D)}{\alpha_j \prod_{m\in[n]\setminus\{j\}} (x_j-x_m)^2} \epsilon + O(\epsilon^2), \quad \epsilon\to 0^+.\label{eq:Etaj}
\end{equation}
Here $\left\{\Psi_j(D)\right\}_{j=1}^n$ are polynomials in $\mathfrak{X}:=\{x_1,\dots,x_n\}$ given in \eqref{eq:psi-j-explicit} below.
\end{lem}
\begin{remark}
Note that if $j\in\mathcal{C}_t$ with $\operatorname{card}(\mathcal{C}_t)=\ell_t$, then \eqref{eq:Etaj} implies that $|\eta_j| \lessapprox |\alpha_j|^{-1}\delta^{2-2\ell_t}\epsilon$ as $\epsilon \to 0^+$, giving the infinitesimal version of \prettyref{thm:node-accuracy-Dima}, and by itself already a vast improvement upon \prettyref{prop:prony-naive-analysis}. Furthermore, the dependence on $1/|\alpha_j|$ in \eqref{eq:Etaj} expresses the intuitive fact that it is harder to accurately recover nodes with smaller amplitudes.
\end{remark}
\begin{proof}[Proof of \prettyref{lem:etaj}]
Recall that $\bar{q}(\tilde{x}_j)=0$. To first order in $\epsilon$, \eqref{eq:rdef1} implies
\begin{equation}\label{eq:first-order-basic-identity}
0 = \bar{q}(\tilde{x}_j)=\bar{p}(\tilde{x}_j) + \epsilon \sum_{\gamma \in \mathcal{Q}^{n+1}_{n}}\sum_{\beta \in \mathcal{Q}^{n+1}_{n}: 1\in \beta} \left[\operatorname{adj}_{n}(D) \right]_{\gamma, \beta}\Gamma_{\beta,\gamma}(\tilde{x}_j) + O(\epsilon^2),
\end{equation}
where, denoting $\mathfrak{q}_s=(1,2,\dots,s-1,s+1,\dots,n)\in\mathcal{Q}^n_{n-1}$, one has
$$
\Gamma_{\beta,\gamma}(\tilde{x}_j) = \sum_{s=1}^n \Biggl\{ \biggl(\prod_{i<k;\ i,k\in[n]\setminus\{s\}} (x_i-x_k)^2 \biggr) \biggl(\prod_{m\in[n]\setminus\{s\}}(x_m-\tilde{x}_j) \biggr) \phi^{\beta,\gamma}_{\mathfrak{q}_s}(\tilde{x}_j) \Biggr\},
$$
with $\phi^{\beta,\gamma}_{\mathfrak{q}_s}$ given by \eqref{eq:phi-explicit} in Appendix~\ref{lem:GammBetAlphExp}. As $\epsilon\to 0^+$ with all other parameters fixed, the only term in $\Gamma_{\beta,\gamma}(\tilde{x}_j)$ which does not vanish is the one corresponding to $s=j$; indeed, for $s\neq j$ the product $\prod_{m\in[n]\setminus\{s\}}(x_m-\tilde{x}_j)$ contains the factor $x_j-\tilde{x}_j\to 0$. Further substitution of \eqref{eq:PbarExplicitExpr} into \eqref{eq:first-order-basic-identity} implies
\begin{align*}
&(-1)^n\left(\prod_{k=1}^n \alpha_k\right) \cdot \left(\prod_{1\leq m<\ell\leq n}\left(x_{\ell}-x_m \right)^2\right)\cdot \left(\prod_{m=1}^n(\tilde{x}_j-x_m) \right) \\
& =-\epsilon \biggl(\prod_{i<k;\ i,k\in[n]\setminus\{j\}} (x_i-x_k)^2 \biggr) \biggl(\prod_{m\in[n]\setminus\{j\}}(x_m-\tilde{x}_j) \biggr) \sum_{\gamma \in \mathcal{Q}^{n+1}_{n}}\sum_{\beta \in \mathcal{Q}^{n+1}_{n}: 1\in \beta}\left[\operatorname{adj}_n(D)\right]_{\gamma,\beta} \phi^{\beta,\gamma}_{\mathfrak{q}_j}(\tilde{x}_j) + O(\epsilon^2).
\end{align*}
Using the explicit form \eqref{eq:phi-explicit} leads to \eqref{eq:Etaj} where
\begin{equation}\label{eq:psi-j-explicit}
\Psi_j(D) = \sum_{\gamma \in \mathcal{Q}^{n+1}_{n}}\sum_{\beta \in \mathcal{Q}^{n+1}_{n}: 1\in \beta} \left[\operatorname{adj}_n(D)\right]_{\gamma,\beta} s_{\underline{\lambda}}\left(\mathfrak{X} \right)\frac{x_j^{\mathrm{a}}\,\psi_{\underline{\lambda},\mathfrak{q}_j}}{\prod_{s<t;\ s,t\in[n]\setminus\{j\}} (x_{s}-x_{t})}.
\end{equation}
To finish the proof, note that the rightmost factor in \eqref{eq:psi-j-explicit} is a polynomial (cf. Appendix~\ref{lem:GammBetAlphExp}, \eqref{eq:gamma-beta-gamma-explicit} - \eqref{eq:phi-explicit}).
\end{proof}
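The linear-in-$\epsilon$ behavior of the node error predicted by Lemma \ref{lem:etaj} is easy to probe numerically. The following sketch (an illustration under concrete toy data, assuming \texttt{numpy}; the helper \texttt{perturbed\_nodes} is ours) builds $\bar{q}(z)=\operatorname{det}(G(z)+\epsilon D)$ for $n=2$ as in \eqref{PronPert1}, extracts its roots, and checks that the node error scales like $\epsilon$.

```python
import numpy as np

# unperturbed nodes and amplitudes (n = 2); moments m_k = sum_j alpha_j x_j^k
x = np.array([0.5, -0.3])
alpha = np.array([1.0, 2.0])
m = np.array([np.sum(alpha * x ** k) for k in range(4)])
d = np.array([0.7, -0.2, 0.4, 0.9])   # O(1) perturbation directions

def perturbed_nodes(eps):
    # qbar(z) = det(G(z) + eps*D) with G, D as in the text (n = 2)
    def qbar(z):
        M = np.array([[1.0, z, z ** 2],
                      [m[0] + eps * d[0], m[1] + eps * d[1], m[2] + eps * d[2]],
                      [m[1] + eps * d[1], m[2] + eps * d[2], m[3] + eps * d[3]]])
        return np.linalg.det(M)
    zs = np.linspace(-1.0, 1.0, 9)
    coef = np.polyfit(zs, [qbar(z) for z in zs], 2)   # qbar has degree n = 2 in z
    return np.sort(np.roots(coef).real)

node_err = lambda eps: np.max(np.abs(perturbed_nodes(eps) - np.sort(x)))
e1, e2 = node_err(1e-4), node_err(1e-5)
assert e1 < 1e-2 and 5.0 < e1 / e2 < 20.0   # error decays linearly in eps
```

For these well-separated nodes ($\ell_t=1$) the observed first-order constant is $O(1)$, consistent with \eqref{eq:Etaj}.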
}
\section{Proof of Theorem \ref{thm:node-accuracy-Dima}}\label{Sec:PfMainthm}
{
Recall the definition of a clustered configuration (Definition \ref{def:cluster}). Let $j_*\in [n]$ and assume that $j_*\in \mathcal{C}_{\mu}$, where the cluster size is $\ell_{\mu}=\operatorname{card}(\mathcal{C}_{\mu})$. Assume further that $0<|z-x_{j_*}| = \rho_*$ with $\rho_*<\delta$. Henceforth, we fix $\epsilon = \bar{\epsilon} \delta^{2\ell_*-1}$ and $\rho_* = \bar{\rho}_*\delta^{2(\ell_*-\ell_{\mu})+1}$, where $\bar{\epsilon}<1$ and $\bar{\rho}_*<\frac{1}{3}\min(1,\eta T)$ are independent of $\delta$ and will be determined subsequently. Finally, since $|z-x_{j_*}| = \rho_*$, we have that $z$ lies in the closed $\frac{1}{3}$-neighborhood $\overline{\mathbb{S}^1+\operatorname{Ball}\left(0,\frac{1}{3}\right)}$. Let $C_1$ be the constant in \eqref{eq:phibound} which corresponds to this closed neighborhood.
In this section we provide a proof of Theorem \ref{thm:node-accuracy-Dima}. The underlying idea is to show that, subject to the notations above, we can find a pair $(\bar{\epsilon},\bar{\rho}_*)$ (independent of $\delta$) for which $\left|\bar{p}\left(z\right)\right| - |r(z)|>0$ for arbitrarily small $\delta<1$. Rouch\'e's theorem \cite{conway1978functions} then guarantees that the perturbed polynomial $\bar{q}\left(z;\left\{ d_i\right\},\epsilon\right)$ has a root $\tilde{x}_{j_*}$ such that $\left| x_{j_*}-\tilde{x}_{j_*}\right|\leq \rho_*$. The latter readily yields the result of Theorem \ref{thm:node-accuracy-Dima}.
The proof requires several auxiliary results. From \eqref{eq:PbarExplicitExpr} we find
\begin{align}
\left|\bar{p}\left(z\right)\right|&= \left(\prod_{k=1}^n |\alpha_k|\right)\cdot \left(\prod_{1\leq m<l\leq n}\left|x_{l}-x_m \right|^2\right) \cdot \left(\prod_{m=1}^n|z-x_m| \right)\nonumber \\
&\geq \mathfrak{m}_{\alpha}^n \cdot \left(\prod_{1\leq m<l\leq n}\left|x_{l}-x_m \right|^2\right) \cdot \left(\prod_{m=1}^n|z-x_m| \right). \label{eq:pLowerBdMultiClus11}
\end{align}
We bound the two rightmost terms separately. First, given $m,\ell\in [n],\ m\neq \ell$, the nodes $x_m,x_{\ell}$ satisfy either $m,\ell\in \mathcal{C}_s$ for some $s\in [\zeta]$, or $m\in \mathcal{C}_{s},\ \ell\in \mathcal{C}_{s'}$ where $s,s'\in[\zeta],\ s\neq s'$. The number of ways to choose two nodes which \emph{do not} belong to the same cluster is
\begin{align}
\varrho &= \binom{n}{2}-\sum_{s\in [\zeta]}\binom{\ell_{s}}{2}. \label{eq:xiDef11}
\end{align}
\begin{remark}\label{rem:GenBin}
The latter formula employs generalized binomial coefficients. In particular, if for some $s\in [\zeta]$, we have $\ell_s = 1$ (i.e., $\mathcal{C}_s$ consists of a single isolated node), then $\binom{\ell_s}{2}=0$. The same convention will be used in \eqref{eq:varpiiota} below.
\end{remark}
Recalling the definition of a clustered configuration (Definition \ref{def:cluster}) we have
\begin{align}
&\prod_{1\leq m<l\leq n}\left|x_{l}-x_m \right|^2 \geq T^{2\varrho} \delta^{n(n-1)-2\varrho}.\label{eq:rightterm111}
\end{align}
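The bound \eqref{eq:rightterm111} is straightforward to check on a concrete configuration. The following sketch (illustrative only, assuming \texttt{numpy}; the specific real-line configuration is a toy choice of ours, whereas the paper works near $\mathbb{S}^1$) builds two clusters with intra-cluster gaps $\geq\delta$ and inter-cluster gaps $\geq T$, computes $\varrho$ from \eqref{eq:xiDef11}, and verifies the inequality.

```python
import numpy as np
from itertools import combinations
from math import comb

# a toy clustered configuration on the line: intra gaps >= delta, inter gaps >= T
delta, T = 0.01, 0.4
clusters = [np.array([0.0, delta, 2 * delta]),    # cluster of size 3
            np.array([0.6, 0.6 + delta])]         # cluster of size 2
x = np.concatenate(clusters)
n = len(x)
ells = [len(c) for c in clusters]
varrho = comb(n, 2) - sum(comb(l, 2) for l in ells)   # number of cross-cluster pairs
prod_sq = np.prod([(x[j] - x[i]) ** 2 for i, j in combinations(range(n), 2)])
# the lower bound of eq. (rightterm111)
assert prod_sq >= T ** (2 * varrho) * delta ** (n * (n - 1) - 2 * varrho)
```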
Moreover,
\begin{align}\label{eq:rightterm211}
\prod_{m=1}^n|z-x_m| &= |z-x_{j_*}|\cdot \left[\prod_{m\notin \mathcal{C}_{\mu}}|z-x_m| \right] \cdot \left[\prod_{m\in \mathcal{C}_{\mu}\setminus\left\{j_* \right\}}|z-x_m| \right]
\end{align}
Using \eqref{eq:rightterm111} and \eqref{eq:rightterm211}, we obtain
\begin{equation}\label{eq:pLowerBdMultiClus111}
\begin{array}{lll}
\left|\bar{p}\left(z\right)\right|&\geq \mathfrak{m}_{\alpha}^n \rho_* \left(\delta-\rho_*\right)^{\ell_{\mu}-1} T^{n+2\varrho-\ell_{\mu}} \delta^{n(n-1)-2\varrho}\\
&\geq \mathfrak{m}_{\alpha}^n \left(\frac{2}{3} \right)^{\ell_{\mu}-1}T^{n+2\varrho-\ell_{\mu}}\bar{\rho}_*\delta^{n(n-1)+2\ell_*-\ell_{\mu}-2\varrho}\\
& = C_2 \bar{\rho}_*\delta^{n(n-1)+2\ell_*-\ell_{\mu}-2\varrho}
\end{array}
\end{equation}
with the constant $C_2 = C_2\left(n,\mathfrak{m}_{\alpha},T,\left\{\mathcal{C}_s\right\}_{s=1}^{\zeta} \right)=\mathfrak{m}_{\alpha}^n \left(\frac{2}{3} \right)^{\ell_{\mu}-1}T^{n+2\varrho-\ell_{\mu}}$.
Next, consider the expansion \eqref{eq:rdef1} and write it as
\begin{equation}\label{eq:rdef2}
\begin{array}{lll}
&r(z)=\epsilon^{n+1}\Omega_{n+1}(z)+\epsilon^n\Omega_n(z)+\sum_{\kappa=1}^{n-1}\epsilon^{\kappa}\Omega_{\kappa}(z),\\
&\Omega_{n+1}(z) =\operatorname{det}(D) ,\quad \Omega_{n}(z) = \sum_{i=1}^{n+1}\left[\operatorname{adj}(D) \right]_{i, 1}z^{i-1}, \\
&\Omega_{\kappa}(z) = \sum_{\gamma \in \mathcal{Q}^{n+1}_{n+1-\kappa}}\sum_{\beta \in \mathcal{Q}^{n+1}_{n+1-\kappa}: 1\in \beta}\sum_{(\omega_1,\dots,\omega_{n-\kappa})\in \mathcal{Q}_{n-\kappa}^n}\left(\left[\operatorname{adj}_{n+1-\kappa}(D) \right]_{\gamma, \beta} \right. \\
&\hspace{13mm}\left.\times \left(\prod_{s=1}^{n-\kappa} (x_{\omega_s}-z) \right)\left(\prod_{1\leq s<t\leq n-\kappa } (x_{\omega_t}-x_{\omega_s})^2 \right)\phi^{\beta,\gamma}_{\omega_1,\dots,\omega_{n-\kappa}}(z)\right), \quad 1\leq \kappa \leq n-1.
\end{array}
\end{equation}
We introduce the following notation for $1\leq \kappa \leq n-1$
\begin{equation}\label{eq:PcalDef}
\mathcal{P}_{(\omega_1,\dots,\omega_{n-\kappa})}:=\left(\prod_{s=1}^{n-\kappa} |x_{\omega_s}-z| \right)\left(\prod_{1\leq s<t\leq n-\kappa } |x_{\omega_t}-x_{\omega_s}|^2 \right), \quad (\omega_1,\dots,\omega_{n-\kappa})\in \mathcal{Q}_{n-\kappa}^n
\end{equation}
Then, recalling \eqref{eq:phibound} and $\left\{d_i\right\}_{i=0}^{2n-1}=O(1)$, we have
\begin{equation}\label{eq:OmegaPhi}
\begin{array}{lll}
&\epsilon^{\kappa}\left|\Omega_{\kappa}(z) \right|\leq C_1 \max_{\gamma,\beta \in \mathcal{Q}_{n+1-\kappa}^{n+1}}\left|\left[\operatorname{adj}_{n+1-\kappa}(D) \right]_{\gamma, \beta} \right|\Phi_{\kappa},\\
&\Phi_{\kappa} =\sum_{\gamma \in \mathcal{Q}^{n+1}_{n+1-\kappa}}\sum_{\beta \in \mathcal{Q}^{n+1}_{n+1-\kappa}: 1\in \beta}\sum_{(\omega_1,\dots,\omega_{n-\kappa})\in \mathcal{Q}_{n-\kappa}^n} \left[\epsilon^{\kappa}\mathcal{P}_{(\omega_1,\dots,\omega_{n-\kappa})}\right].
\end{array}
\end{equation}
To proceed further, we derive the following relations on the expressions $\mathcal{P}_{(\omega_1,\dots,\omega_{n-\kappa})}$.
\begin{lem}\label{lem:PcalRecur}
Let $(\omega_1,\dots,\omega_{n-\kappa})\in \mathcal{Q}_{n-\kappa}^n, \ \kappa\leq n-2$ and fix $1\leq s\leq n-\kappa$. Then
\begin{equation}\label{eq:PcalDivision}
\frac{\mathcal{P}_{(\omega_1,\dots,\omega_{n-\kappa})}}{\mathcal{P}_{(\omega_1,\dots,\omega_{s-1},\omega_{s+1},\dots,\omega_{n-\kappa})}} \geq \min \left( \rho_* \delta^{2(\ell_{\mu}-1)},(\tau \delta - \rho_*) \delta^{2(\ell_{\mu}-1)},(\eta T + \rho_*) \delta^{2(\ell_{*}-1)}\right).
\end{equation}
Moreover,
\begin{equation}\label{eq:PcalDiv}
\frac{\epsilon^{\kappa}\mathcal{P}_{(\omega_1,\dots,\omega_{n-\kappa})}}{\epsilon^{\kappa+1}\mathcal{P}_{(\omega_1,\dots,\omega_{s-1},\omega_{s+1},\dots,\omega_{n-\kappa})}} \geq \frac{\bar{\rho}_*}{\bar{\epsilon}}.
\end{equation}
\end{lem}
\begin{proof}
See Appendix \ref{lem:PcalRecurPf}.
\end{proof}
Using Lemma \ref{lem:PcalRecur} and \eqref{eq:OmegaPhi} we conclude that there exist positive constants $\Xi_{\kappa}, \ 2\leq \kappa\leq n-1$ that are independent of $\delta$ and satisfy
\begin{equation}\label{eq:OmegaPhi1}
\epsilon^{\kappa}\left|\Omega_{\kappa}(z) \right|\leq \left(\frac{\bar{\epsilon}}{\bar{\rho}_*} \right)^{\kappa-1}\Xi_{\kappa}\Phi_1, \quad 2\leq \kappa\leq n-1.
\end{equation}
Introducing $\Xi_{n+1}= \left|\operatorname{det}(D) \right|$, $\Xi_{n} = \sum_{i=1}^{n+1}\left| \left[\operatorname{adj}(D) \right]_{i, 1}\right|\left(\frac{4}{3} \right)^{i-1}$ and employing \eqref{eq:OmegaPhi1}, we obtain the following upper bound
\begin{equation}\label{eq:rdef3}
\begin{array}{lll}
|r(z)|&\leq \epsilon^{n+1}\Xi_{n+1}+\epsilon^n\Xi_n+\left(1+\frac{\bar{\epsilon}}{\bar{\rho}_*}\Xi_1+\dots + \left(\frac{\bar{\epsilon}}{\bar{\rho}_*} \right)^{n-2}\Xi_{n-1}\right)\Phi_1.
\end{array}
\end{equation}
Consider first the two leftmost terms on the right-hand side.
\begin{prop}\label{prop:HighPowerBds}
The following holds
\begin{equation}\label{eq:HighPoweBd}
(\epsilon/\bar{\epsilon})^{n+1} < (\epsilon/\bar{\epsilon})^n \leq \delta^{n(n-1)+2\ell_*-\ell_\mu-2\varrho}
\end{equation}
\end{prop}
\begin{proof}
See Appendix \ref{prop:HighPowerBdsPf}.
\end{proof}
To continue with \eqref{eq:rdef3}, we would like to upper bound $\Phi_1$ (see \eqref{eq:OmegaPhi} for the definition).
\begin{lem}\label{lem:PcalEst}
Let $\epsilon = \bar{\epsilon} \delta^{2\ell_*-1}$ and $\rho_* = \bar{\rho}_*\delta^{2(\ell_*-\ell_{\mu})+1}$, where $\delta<1$, $\bar{\epsilon}<1$ and $\bar{\rho}_*<\frac{1}{3}\min(1,\eta T)$ are independent of $\delta$. Then,
\begin{equation*}
\begin{array}{lll}
&\frac{\epsilon\mathcal{P}_{(\omega_1,\dots,\omega_{n-1})}}{\bar{\epsilon}\delta^{n(n-1)+2\ell_*-\ell_\mu-2\varrho}}\lessapprox \begin{cases}
1, & j_* \notin \left\{\omega_1,\dots,\omega_{n-1}\right\},\\
\bar{\rho}_*\delta^{2(\ell_*-\ell_{\mu})}, & t \notin \left\{\omega_1,\dots,\omega_{n-1}\right\} \text{ for some } t\in \mathcal{C}_{\mu}\setminus \left\{j_* \right\},\\
\bar{\rho}_*\delta^{2(\ell_*-\ell_{\iota})+1}, & b \notin \left\{\omega_1,\dots,\omega_{n-1}\right\} \text{ for some } b\in \mathcal{C}_{\iota},\ \iota\neq \mu,
\end{cases}
\end{array}
\end{equation*}
with a constant that is independent of $\delta$.
\end{lem}
\begin{proof}
See Appendix \ref{lem:PcalEstPf}.
\end{proof}
We are now ready to finish the proof of Theorem \ref{thm:node-accuracy-Dima}. By combining \eqref{eq:OmegaPhi} in the case $\kappa=1$ and Lemma \ref{lem:PcalEst}, we obtain that there exists some $\Xi_0>0$, independent of $\delta$, such that
\begin{equation}\label{eq:Phi1bound}
\Phi_1\leq \Xi_0 \bar{\epsilon}\left(1+ \bar{\rho}_*\delta^{2(\ell_*-\ell_{\mu})}+\bar{\rho}_*\delta^{2(\ell_*-\ell_{\iota})+1}\right)\delta^{n(n-1)+2\ell_*-\ell_\mu-2\varrho}.
\end{equation}
Recalling \eqref{eq:pLowerBdMultiClus111} and \eqref{eq:rdef3}, we have
\begin{equation}\label{eq:Rouche1}
\begin{array}{lll}
\frac{\left|\bar{p}\left(z\right)\right| - |r(z)|}{\delta^{n(n-1)+2\ell_*-\ell_\mu-2\varrho}}&\geq C_2 \bar{\rho}_* - \frac{\epsilon^{n+1}\Xi_{n+1}+\epsilon^n\Xi_n+\left(1+\frac{\bar{\epsilon}}{\bar{\rho}_*}\Xi_1+\dots + \left(\frac{\bar{\epsilon}}{\bar{\rho}_*} \right)^{n-2}\Xi_{n-1}\right)\Phi_1}{\delta^{n(n-1)+2\ell_*-\ell_\mu-2\varrho}}\\
&\overset{\eqref{eq:HighPoweBd},\eqref{eq:Phi1bound}}{\geq} C_2 \bar{\rho}_* -\bar{\epsilon}^{n+1}\Xi_{n+1}-\bar{\epsilon}^n\Xi_n-\bar{\epsilon}\left(1+\frac{\bar{\epsilon}}{\bar{\rho}_*}\Xi_1+\dots + \left(\frac{\bar{\epsilon}}{\bar{\rho}_*} \right)^{n-2}\Xi_{n-1}\right)\\
&\hspace{12mm} \times \Xi_0 \left(1+ \bar{\rho}_*\delta^{2(\ell_*-\ell_{\mu})}+\bar{\rho}_*\delta^{2(\ell_*-\ell_{\iota})+1}\right).
\end{array}
\end{equation}
Recalling that $\delta<1$ and $\bar{\rho}_*<\frac{1}{3}$, let $\bar{\epsilon}= \nu\bar{\rho}_*^2, \ 0<\nu\leq 1$. Then,
\begin{equation}\label{eq:Rouche2}
\begin{array}{lll}
\frac{\left|\bar{p}\left(z\right)\right| - |r(z)|}{\delta^{n(n-1)+2\ell_*-\ell_\mu-2\varrho}}&\geq \bar{\rho}_*\left[C_2 - \bar{\rho}_*^{2n+1}\Xi_{n+1} -\bar{\rho}_*^{2n-1}\Xi_{n}-3\Xi_0\bar{\rho}_*\left(1+\bar{\rho}_*\Xi_1+\dots + \bar{\rho}_*^{n-2}\Xi_{n-1}\right)\right].
\end{array}
\end{equation}
Noting that all terms on the right-hand side are independent of $\delta$, it is clear that there exists $\bar{\rho}_*$ sufficiently small such that $\inf_{\delta<1}\left(\frac{\left|\bar{p}\left(z\right)\right| - |r(z)|}{\delta^{n(n-1)+2\ell_*-\ell_\mu-2\varrho}}\right)\geq C_3>0$ for some constant $C_3$, which depends on all parameters of the clustered configuration other than $\delta$ (see Definition \ref{def:cluster}), as well as on the bounds $0<\mathfrak{m}_{\alpha}<\mathfrak{M}_{\alpha}$ on the amplitudes. Since $0<\nu\leq 1$ was arbitrary, the result of Theorem \ref{thm:node-accuracy-Dima} then follows by applying Rouch\'e's theorem \cite{conway1978functions} and the fact that $p(z)$ and $\bar{p}\left(z\right)$ differ by a multiplicative constant. Indeed, note that the perturbed node $\tilde{x}_{j_*}$ satisfies
\begin{equation*}
|x_{j_*}-\tilde{x}_{j_*}|\leq \rho_*=\bar{\rho}_*\delta^{2(\ell_*-\ell_{\mu})+1} \lessapprox \frac{\epsilon}{\delta^{2\ell_{\mu}-2}}, \quad j_*\in \mathcal{C}_{\mu}, \ \ell_{\mu}=\operatorname{card}(\mathcal{C}_{\mu})
\end{equation*}
as stated in Theorem \ref{thm:node-accuracy-Dima}.
}
\section{Proof of Theorem \ref{thm:coeffs-accuracy-Dima}}\label{Sec:PfAmplitude}
{
In this section we consider the amplitude approximation errors $\left\{\left|\alpha_{j}-\tilde{\alpha}_{j} \right|\right\}_{j=1}^n$. Throughout the section we assume that the conditions of Theorem \ref{thm:node-accuracy-Dima} hold. In particular, $\epsilon = \bar{\epsilon} \delta^{2\ell_*-1}$, and for any $j\in [n]$ with $j\in \mathcal{C}_t$ we denote by $\rho_j = \bar{\rho}_j\delta^{2(\ell_*-\ell_{t})+1}$ the upper bound on $\left| x_j-\tilde{x}_j\right|$. Here, $\bar{\epsilon}<1$ and $\bar{\rho}_j<\frac{1}{3}\min(1,\eta T)$ are independent of $\delta$. Note that $\rho_j\lessapprox \frac{\epsilon}{\delta^{2\ell_{t}-2}}$, as obtained from Theorem \ref{thm:node-accuracy-Dima}.
Let $\left\{d_i \right\}_{i=0}^{2n-1}, \left\{\breve{d}_i \right\}_{i=0}^{n-1} = O(1)$ be two sets of tolerance coefficients (see Theorem \ref{thm:coeffs-accuracy-Dima}).
Recall that the algebraic moments are given by $m_{k}=\sum_{j=1}^{n}\alpha_{j}x_{j}^{k}$. Therefore, introducing
\begin{equation}
b = \operatorname{col} \left\{m_i \right\}_{i=0}^{n-1} , \quad \tilde{b} = \operatorname{col}\left\{m_i+\epsilon \breve{d}_i \right\}_{i=0}^{n-1}, \quad b_{e} = \tilde{b}-b \label{eq:FreeVars}
\end{equation}
we have that both the perturbed and unperturbed amplitudes satisfy
\begin{align}
V\cdot \operatorname{col}\left\{\alpha_i \right\}_{i=1}^n = b, \quad \tilde{V}\cdot \operatorname{col}\left\{ \tilde{\alpha}_i\right\}_{i=1}^n =\tilde{b} \label{eq:Linsyst}
\end{align}
with $\tilde{V}$ given in the formulation of Theorem \ref{thm:coeffs-accuracy-Dima} and $V$ having the same form as $\tilde{V}$, with the perturbed nodes $\left\{\tilde{x}_j\right\}_{j=1}^n$ replaced by the unperturbed nodes $\left\{x_j\right\}_{j=1}^n$. Denote further
\begin{align}
\tilde{V} - V =: V_e , \quad \operatorname{col}\left\{\tilde{\alpha}_i \right\}_{i=1}^n - \operatorname{col}\left\{\alpha_i \right\}_{i=1}^n=: \alpha_e.\label{eq:LinsystNot}
\end{align}
Then
\begin{align}
& \tilde{V} \alpha_e = b_e -V_e \operatorname{col}\left\{\alpha_i \right\}_{i=1}^n \Rightarrow \alpha_e = \tilde{V}^{-1}\left( b_e - V_e\operatorname{col}\left\{\alpha_i \right\}_{i=1}^n\right) = \tilde{V}^{-1}b_e - \left(I - \tilde{V}^{-1}V\right)\operatorname{col}\left\{\alpha_i \right\}_{i=1}^n .\label{eq:VDMsystem1}
\end{align}
For the rightmost term in \eqref{eq:VDMsystem1}, let
\begin{equation}
\tilde{L}_i(z) = \prod_{k\neq i}\frac{z-\tilde{x}_k}{\tilde{x}_i-\tilde{x}_k} = \sum_{b=0}^{n-1} \tilde{l}_{i,b}z^b,\quad i=1,\dots,n \label{eq:PertLagr}
\end{equation}
be the Lagrange basis corresponding to $\tilde{x}_1,\dots, \tilde{x}_n$. Since the columns of $\tilde{V}^{-T}$ contain the coefficients of the latter Lagrange basis, we have
\begin{equation}
\tilde{V}^{-1} = \begin{bmatrix}
\tilde{l}_{1,0} & \dots & \tilde{l}_{1,n-1}\\
\vdots & \vdots & \vdots \\
\tilde{l}_{n,0} & \dots & \tilde{l}_{n,n-1}
\end{bmatrix}\Longrightarrow \left[\tilde{V}^{-1}V\right]_{s,b} = \sum_{k=0}^{n-1}\tilde{l}_{s,k}x_b^k = \tilde{L}_{s}(x_b) = \prod_{m\neq s}\frac{x_b-\tilde{x}_m}{\tilde{x}_s-\tilde{x}_m}, \quad s,b\in [n].\label{eq:VDMEval}
\end{equation}
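The identification of the rows of $\tilde{V}^{-1}$ with the Lagrange coefficient vectors in \eqref{eq:VDMEval} is easy to confirm numerically. A minimal sketch, assuming \texttt{numpy} (the toy nodes below are ours):

```python
import numpy as np

x = np.array([0.9, 0.3, -0.5])
n = len(x)
V = np.vander(x, n, increasing=True).T    # V[i, j] = x_j ** i, so V @ alpha = moments
L = np.zeros((n, n))                      # L[i, k] = coefficient of z^k in L_i(z)
for i in range(n):
    others = np.delete(x, i)
    num = np.poly(others)[::-1]           # ascending coefficients of prod_k (z - x_k)
    L[i] = num / np.prod(x[i] - others)   # divide by prod_k (x_i - x_k)
# rows of V^{-1} are exactly the Lagrange coefficient vectors: L_i(x_b) = delta_{ib}
assert np.allclose(L @ V, np.eye(n))
assert np.allclose(np.linalg.inv(V), L)
```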
We begin by considering the term $\tilde{V}^{-1}b_e$ in \eqref{eq:VDMsystem1}.
\begin{prop}\label{eq:Clust1Sum}
Fix $j\in [n]$ such that $j\in \mathcal{C}_t$. The following holds
\begin{align*}
&\left|\left[\tilde{V}^{-1} b_e \right]_j\right|\lessapprox \frac{\epsilon}{\delta^{\ell_t-1}}.
\end{align*}
\end{prop}
\begin{proof}
See Appendix \ref{eq:Clust1SumPf}.
\end{proof}
Consider now the term $\left(I - \tilde{V}^{-1}V\right)$, appearing in \eqref{eq:VDMsystem1}.
\begin{prop}\label{eq:Clust2Sum}
Fix $j\in [n]$ such that $j\in \mathcal{C}_t$. The following estimate holds:
\begin{align}
\left|\boldsymbol{\delta}_{j,s}-\left[\tilde{V}^{-1}V\right]_{j,s} \right| \lessapprox \begin{cases}
\frac{\epsilon}{\delta^{2\ell_t-1}}, \quad s\in \mathcal{C}_t ,\\
\frac{\epsilon}{\delta^{\ell_t + \max_{a\neq t}\ell_a-2}}, \quad s\notin \mathcal{C}_t
\end{cases}\label{eq:TildVVBdClus}
\end{align}
where $\boldsymbol{\delta}_{j,s}$ denotes the Kronecker delta.
\end{prop}
\begin{proof}
See Appendix \ref{eq:Clust2SumPf}.
\end{proof}
From Propositions \ref{eq:Clust1Sum} and \ref{eq:Clust2Sum}, and $\mathfrak{m}_{\alpha}\leq \alpha_i \leq \mathfrak{M}_{\alpha},\ i\in [n]$, we obtain the following estimate for $(\alpha_e)_j$ (see \eqref{eq:VDMsystem1}), where $j\in \mathcal{C}_t$:
\begin{align}
\left|(\alpha_e)_j \right| &\leq \left|\left[\tilde{V}^{-1} b_e \right]_j\right|+ \sum_{s=1}^n\left(\left|\boldsymbol{\delta}_{j,s}-\left[\tilde{V}^{-1}V\right]_{j,s} \right|\cdot \left|\alpha_s \right|\right) \nonumber\\
&\lessapprox \frac{\epsilon}{\delta^{\ell_t-1}}+\frac{\epsilon}{\delta^{2\ell_t-1}}+\frac{\epsilon}{\delta^{\ell_t + \max_{a\neq t}\ell_a-2}}\lessapprox\frac{\epsilon}{\delta^{2\ell_*-1}}.\label{eq:alphaeClust}
\end{align}
Note that the analysis in Proposition \ref{eq:Clust2Sum} yields an upper bound which couples the different clusters together via their sizes $\left\{\ell_{a} \right\}_{a\in [\zeta]}$. Therefore, in \eqref{eq:alphaeClust}, we see that clusters of larger size may influence the accuracy of amplitudes belonging to clusters of smaller size. This leads to a non-sharp upper bound on the amplitude errors, which will be improved in the next section.
\begin{remark}\label{eq:SingleNodeAmp}
If $x_j, \ j\in \mathcal{C}_t$ is an isolated node, the estimate \eqref{eq:alphaeClust} is replaced by
\begin{align}
\left|(\alpha_e)_j \right| &\lessapprox \epsilon +\frac{\epsilon}{\delta^{\max_{a\neq t}\ell_a-1}} \lessapprox \frac{\epsilon}{\delta^{\ell_*-1}}. \label{eq:alphaeClustIsolated}
\end{align}
See Remark \ref{Rem:SingClus} for the necessary modifications.
\end{remark}
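The $O(\epsilon)$ behavior of the amplitude error can be probed numerically in the simplest regime of well-separated nodes (all clusters of size one, so the $\delta$-powers above are trivial). The sketch below is an illustration with toy data of ours, assuming \texttt{numpy}: perturbed nodes with $|x_j-\tilde{x}_j|=O(\epsilon)$ and perturbed moments are fed into the linear system \eqref{eq:Linsyst}, and the resulting error scales linearly in $\epsilon$.

```python
import numpy as np

# well-separated nodes (all "clusters" of size 1) and amplitudes
x = np.array([0.8, 0.1, -0.6])
alpha = np.array([1.0, -2.0, 0.5])
n = len(x)
m = np.array([np.sum(alpha * x ** k) for k in range(n)])
dx = np.array([0.4, -0.7, 0.2])    # O(1) node-perturbation directions
dm = np.array([0.3, 0.9, -0.5])    # O(1) moment-perturbation directions

def amp_err(eps):
    xt = x + eps * dx                           # perturbed nodes, |x_j - xt_j| = O(eps)
    bt = m + eps * dm                           # perturbed moments
    Vt = np.vander(xt, n, increasing=True).T    # Vt[i, j] = xt_j ** i
    return np.max(np.abs(np.linalg.solve(Vt, bt) - alpha))

e1, e2 = amp_err(1e-4), amp_err(1e-5)
assert e1 < 1e-2 and 5.0 < e1 / e2 < 20.0   # amplitude error is O(eps)
```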
}
{
\subsection{An improved bound on the amplitude error}\label{sec:impBound}
\newcommand{\ensuremath{\kappa}}{\ensuremath{\kappa}}
From \eqref{eq:TildVVBdClus} and \eqref{eq:alphaeClust} it can be seen that the leading contribution to the amplitude error stems
from nodes $x_s$ satisfying $s\in \mathcal{C}_a$ with $a\neq t$ (i.e., \emph{out of cluster nodes}), if $\ell_a > \ell_t$. Here we aim to improve the previous analysis, in order to achieve a better bound on the amplitude error.
Let $b\in [\zeta], \ b\neq t$. We consider the contribution of the nodes in $\mathcal{C}_b= \left\{x_{i_1},\dots,x_{i_{\ell_b}} \right\}$ to {$\left|(\alpha_e)_j \right|,\ j\in \mathcal{C}_t$ (leading to the suboptimal bound $\frac{\epsilon}{\delta^{\ell_t + \max_{a\neq t}\ell_a-2}}$ in \eqref{eq:alphaeClust}) } through $\left(I - \tilde{V}^{-1}V\right)\operatorname{col}\left\{\alpha_i \right\}_{i=1}^n$ (see \eqref{eq:VDMsystem1}). { For simplicity of notations, we assume henceforth that $b=1$ and $x_{i_k}=x_k,\ k\in [\ell_1]$. This can always be achieved by index permutation. }
Since $t\neq 1$, the contribution of $\mathcal{C}_1$ is given by
\begin{align}
&\mathcal{V}_{1}:=\sum_{\nu=1}^{\ell_1} \left[\tilde{V}^{-1} V\right]_{j,\nu} \alpha_{\nu} \overset{\eqref{eq:VDMEval}}{=}\sum_{\nu=1}^{\ell_1} \alpha_{{\nu}} \prod_{m \neq j} \frac{x_{{\nu}}-\tilde{x}_{m}}{\tilde{x}_{j}-\tilde{x}_{m} }.\label{eq:ContribCc}
\end{align}
The following theorem bounds $\mathcal{V}_1$. {Although we assume $b=1$ for ease of notation, the theorem remains valid for any cluster $\mathcal{C}_{b}$ with $b\neq t$ (the proof remains the same, up to more tedious notation)}.
\begin{thm}\label{Thm:ImprovedClust}
Assume the conditions of \prettyref{thm:coeffs-accuracy-Dima}. The following estimate holds:
\begin{align*}
\mathcal{V}_1 \lessapprox \delta^{1-\ell_t}\epsilon.
\end{align*}
\end{thm}
\begin{proof}
Denote
\begin{align*}
A&:=\det \tilde{H}_n = (-1)^n \biggl(\prod_{k=1}^{n}\Tilde{\alpha}_k\biggr)\bigg(\prod_{1\leq m < s \leq n}(\tilde{x}_{m}-\tilde{x}_s)^2\bigg)\\
B&:= \prod_{m\neq j}\left(\tilde{x}_j-\tilde{x}_m\right),\quad C:= \prod_{\nu=1}^{\ell_1} (x_{{\nu}}-\tilde{x}_j).
\end{align*}
These quantities can be effectively bounded \emph{from below}, using the following facts:
\begin{enumerate}
\item $|\tilde{x}_j-\tilde{x}_m|\gtrapprox \delta$ by \prettyref{thm:node-accuracy-Dima};
\item $|\tilde{\alpha}_j-\alpha_j| \leqslant O(1)$ by \eqref{eq:alphaeClust} (so by decreasing $\bar{\epsilon}$ in the proof of \prettyref{thm:node-accuracy-Dima}, this quantity can be made smaller than, say, $|\alpha_j/2|$);
\item $x_j\notin \mathcal{C}_1$.
\end{enumerate}
Therefore we have
\begin{align*}
A & \gtrapprox \delta^{2\sum_{s\in[\zeta]} \binom{\ell_s}{2}},\quad B \gtrapprox \delta^{\ell_t-1}, \quad
C \gtrapprox 1
\end{align*}
Note that $\bar{q}(x_{{\nu}}) = A \prod_{m=1}^n \left(x_{{\nu}}-\tilde{x}_m\right) = \underbrace{\bar{p}(x_{{\nu}})}_{=0}+r(x_{{\nu}})$ for any $\nu \in [\ell_1]$. Recalling \eqref{eq:rdef} and \eqref{eq:ContribCc}, we have
\begin{align*}
A\cdot B\cdot C \cdot \mathcal{V}_1 &= \sum_{\nu=1}^{\ell_1} \alpha_{{\nu}} \biggl[ \prod_{r\in [\ell_1]\setminus\{\nu\}} (x_{r}-\tilde{x}_j) \biggr] \bar{q}(x_{{\nu}})\\
&= \sum_{\ensuremath{\kappa}=1}^{n-1}\epsilon^\ensuremath{\kappa} \sum_{\gamma \in \mathcal{Q}^{n+1}_{n+1-\kappa}}\sum_{\beta \in \mathcal{Q}^{n+1}_{n+1-\kappa}: 1\in \beta}\left[\operatorname{adj}_{n+1-\kappa}(D) \right]_{\gamma, \beta} \sum_{\nu=1}^{\ell_1} \alpha_{{\nu}} \biggl[ \prod_{r\in [\ell_1]\setminus\{\nu\}} (x_{r}-\tilde{x}_j) \biggr] \Gamma_{\beta,\gamma}(x_{{\nu}}) \\
&\quad + \sum_{\nu=1}^{\ell_1} \alpha_{{\nu}} \biggl[ \prod_{r\in [\ell_1]\setminus\{\nu\}} (x_{r}-\tilde{x}_j) \biggr] \biggl\{{ \left( \sum_{i=1}^{n+1}\left[\operatorname{adj}(D) \right]_{i, 1}x_{\nu}^{i-1}\right)}\epsilon^n + {\operatorname{det}(D)}\epsilon^{n+1}\biggr\}\\
&=: E+F,
\end{align*}
where by \eqref{eq:GammaParam} and \eqref{eq:gamma-beta-gamma-explicit},
\begin{equation*}
\begin{array}{lll}
\Gamma_{\beta,\gamma}(x_{{\nu}})&=\sum_{(\omega_1,\dots,\omega_{n-\kappa})\in \mathcal{Q}^{n}_{n-\kappa}}\left\{\left(\prod_{s=1}^{n-\ensuremath{\kappa}}\alpha_{\omega_s} \right)\left(\prod_{s=1}^{n-\kappa} (x_{\omega_s}-x_{{\nu}}) \right)\right. \vspace{0.1cm}\\
&\hspace{5mm}\left.\times \left(\prod_{1\leq s<t\leq n-\kappa } (x_{\omega_t}-x_{\omega_s}) \right)s_{\underline{\lambda}}\left(x_{{\nu}},x_{\omega_1},\dots,x_{\omega_{n-\kappa}} \right) x_{{\nu}}^{\mathrm{a}}\psi_{\underline{\lambda},\omega_1,\dots,\omega_{n-\kappa}}\right\}.
\end{array}
\end{equation*}
Here $\psi_{\underline{\lambda},\omega_1,\dots,\omega_{n-\kappa}}$ is given in \eqref{eq:psilambdDef} and $s_{\underline{\lambda}}\left(x_{\nu},x_{\omega_1},\dots,x_{\omega_{n-\kappa}} \right)$ is the Schur polynomial for the partition $\underline{\lambda}$ and in the variables $x_{\nu},x_{\omega_1},\dots,x_{\omega_{n-\kappa}}$ \cite[Chapter 3]{macdonald1998symmetric}. Note that to get a nonzero summand, $\omega_1,\dots,\omega_{n-\ensuremath{\kappa}}$ should be different from ${\nu}$.
To take care of the term $F$, we have
\begin{equation*}
\frac{\epsilon^{n-1}}{\bar{\epsilon}^{n-1}\left|A\right|}\lessapprox \delta^{(n-1)(2\ell_*-1)-\sum_{s\in [\zeta]}\ell_s(\ell_s-1)} = \delta^{(n-1)(2\ell_*-1)-g_{\zeta,n}(\ell_1,\dots,\ell_{n})}
\end{equation*}
where we employ the notations of Appendix \ref{prop:HighPowerBdsPf}. The proof therein shows that $n(2\ell_*-1)-g_{\zeta,n}(\ell_1,\dots,\ell_n)-2\ell_*+\ell_{\mu}\geq 0$ for any $1\leq \ell_{\mu}\leq n$. Setting $\ell_{\mu}=1$, we get $(n-1)(2\ell_*-1)-\sum_{s\in [\zeta]}\ell_s(\ell_s-1)\geq 0$. Therefore, it is clear that $\frac{\epsilon^{n-1}}{\bar{\epsilon}^{n-1}\left|A\right|}\lessapprox 1$, whence $\left|\frac{F}{A} \right|\lessapprox \epsilon$.
Next, we consider $E$. To shorten the notation, we will write $\mathfrak{q}=(\omega_1,\dots,\omega_{n-\kappa})$ for a general element in $\mathcal{Q}^{n}_{n-\kappa}$. Rearranging the order of summation, we can write $E = \sum_{\kappa=1}^{n-1}\epsilon^{\kappa}E_{\kappa}$, where
\begin{align}\label{eq:e-term}
\begin{split}
&E_{\ensuremath{\kappa}}:= \sum_{\gamma \in \mathcal{Q}^{n+1}_{n+1-\kappa}}\sum_{\beta \in \mathcal{Q}^{n+1}_{n+1-\kappa}: 1\in \beta}\left[\operatorname{adj}_{n+1-\kappa}(D) \right]_{\gamma, \beta} \sum_{\mathfrak{q} \in \mathcal{Q}^{n}_{n-\kappa}} \tilde{E}_{\kappa}^{(\gamma,\beta)}(\mathfrak{q})\\
&\tilde{E}_{\kappa}^{(\gamma,\beta)}(\mathfrak{q}):= \left(\prod_{s=1}^{n-\ensuremath{\kappa}}\alpha_{\omega_s} \right) \left(\prod_{1\leq s<t\leq n-\kappa } (x_{\omega_t}-x_{\omega_s}) \right)\psi_{\underline{\lambda},\mathfrak{q}} \\
&\qquad \qquad \times \sum_{\nu=1}^{\ell_1} \alpha_{\nu} s_{\underline{\lambda}}\left(x_{\nu},x_{\omega_1},\dots,x_{\omega_{n-\kappa}} \right) x_{\nu}^{\mathrm{a}} \left(\prod_{s=1}^{n-\kappa} (x_{\omega_s}-x_{\nu}) \right) \prod_{r\in [\ell_1]\setminus\{\nu \}} (x_{r}-\tilde{x}_j).
\end{split}
\end{align}
Henceforth, we will denote, as in Lemma~\ref{lem:etaj}, $\mathfrak{q}_s=(1,2,\dots,s-1,s+1,\dots,n)\in\mathcal{Q}^n_{n-1}$ and $\mathfrak{X}=\{x_1,\dots,x_n\}$.
Consider first the case $\kappa=1$. Given $(\omega_1,\dots,\omega_{n-1})\in \mathcal{Q}_{n-1}^n$, we consider the product $\prod_{s=1}^{n-1} (x_{\omega_s}-x_{{\nu}})$ and notice that it equals zero unless $\left\{{\nu} \right\}\bigcup \left\{\omega_j \right\}_{j=1}^{n-1}=[n]$. Thus, we obtain
\begin{equation*}
\begin{array}{lll}
&E_1 = \sum_{\gamma \in \mathcal{Q}^{n+1}_{n}}\sum_{\beta \in \mathcal{Q}^{n+1}_{n}: 1\in \beta}\left[\operatorname{adj}_{n}(D) \right]_{\gamma, \beta}\sum_{\mathfrak{q}_{\nu},\nu \in [\ell_1]}\tilde{E}_{1}^{(\gamma,\beta)}(\mathfrak{q}_{\nu}),\\
&\tilde{E}_{1}^{(\gamma,\beta)}(\mathfrak{q}_{\nu}):= \left(\prod_{s\in [n]\setminus \left\{\nu \right\}}\alpha_{s} \right) \left(\prod_{1\leq s<t\leq n, s,t\neq \nu } (x_{t}-x_{s}) \right)\psi_{\underline{\lambda},\mathfrak{q}_{\nu}} \\
&\qquad \qquad \times \sum_{\nu=1}^{\ell_1} \alpha_{\nu} s_{\underline{\lambda}}\left(\mathfrak{X} \right) x_{\nu}^{\mathrm{a}} \left(\prod_{s\in [n]\setminus\{\nu\}}(x_{s}-x_{\nu}) \right) \prod_{r\in [\ell_1]\setminus\{\nu \}} (x_{r}-\tilde{x}_j)
\end{array}
\end{equation*}
which we treat as a multivariate polynomial. Using this presentation and the properties of $s_{\underline{\lambda}}$ and $\psi_{\underline{\lambda},\mathfrak{q}_{\nu}}$, it
can be readily verified that $E_1$ satisfies the following properties:
\begin{enumerate}
\item For any $s<t, \ s,t\notin\mathcal{C}_1$, if $x_s=x_t$ then $E_1=0$: { in this case we have $\prod_{1\leq s<t\leq n, s,t\neq \nu } (x_{t}-x_{s})\equiv 0$ in $\tilde{E}_{1}^{(\gamma,\beta)}(\mathfrak{q}_{\nu})$.}
\item $E_1$ is invariant with respect to any transposition of two nodes not in $\mathcal{C}_1$. { This follows from symmetry of the polynomials $s_{\underline{\lambda}}$ and $\psi_{\underline{\lambda},\mathfrak{q}_{\nu}}\prod_{1\leq s<t\leq n, s,t\neq \nu } (x_{t}-x_{s})$}.
\item For any $s<t, \ s,t\in\mathcal{C}_1$, if $x_s=x_t$ then $E_1=0$. { This follows from $\prod_{s\in [n]\setminus\{\nu\}}(x_{s}-x_{\nu})\equiv 0$ in $\tilde{E}_{1}^{(\gamma,\beta)}(\mathfrak{q}_{\nu})$.}
\item $E_1$ is invariant with respect to any transposition of the nodes of $\mathcal{C}_1$. {This again follows from symmetry of the polynomials in $\tilde{E}_{1}^{(\gamma,\beta)}(\mathfrak{q}_{\nu})$ and summation over $\mathfrak{q}_{\nu},\ \nu\in [\ell_1]$. }
\end{enumerate}
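These properties imply divisibility by squared differences via a standard argument, which we recall for a generic polynomial $P$: if $P$ vanishes whenever $x_s=x_t$, the factor theorem gives $P=(x_s-x_t)R$ for some polynomial $R$; if, in addition, $P$ is invariant under the transposition $x_s\leftrightarrow x_t$, then
\begin{equation*}
(x_s-x_t)\,R(\dots,x_s,\dots,x_t,\dots)=P=-(x_s-x_t)\,R(\dots,x_t,\dots,x_s,\dots),
\end{equation*}
so $R$ is antisymmetric in $x_s,x_t$; hence $R$ also vanishes on $x_s=x_t$, and $(x_s-x_t)^2$ divides $P$.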
As a result, $E_1$ is divisible by
$$
H:= \prod_{s<t; s,t\notin\mathcal{C}_1} (x_s-x_t)^2 \prod_{1\leq r < s \leq \ell_1}(x_{r}-x_{s})^2.
$$
This immediately implies
$$
|E_1| \lessapprox |H| \lessapprox \delta^{2\sum_{s\neq 1} \binom{\ell_s}{2}+2\binom{\ell_1}{2}}=\delta^{2\sum_{s\in[\zeta]}\binom{\ell_s}{2}} \implies \epsilon |E_1/A| \lessapprox \epsilon.
$$
We now proceed by induction on $\kappa$ to show that $|E_{\ensuremath{\kappa}} \epsilon^\ensuremath{\kappa} / A| \lessapprox \epsilon$ for $\ensuremath{\kappa}=1,\dots,n-1$.
It can be readily verified that for each $\mathfrak{q}\in \mathcal{Q}^{n}_{n-\kappa}$, $\tilde{E}_{\kappa}^{(\gamma,\beta)}(\mathfrak{q})$ in \eqref{eq:e-term} satisfies the symmetry properties (1)--(4) above with respect to the sets $\mathfrak{q}\setminus\mathcal{C}_1$ and $\mathfrak{q}\cap \mathcal{C}_1$. Consequently, for each $\gamma,\beta$, $\tilde{E}_{\kappa}^{(\gamma,\beta)}(\mathfrak{q})$ is divisible by $H(\mathfrak{q})$ where
$$
H(\mathfrak{q}):= \prod_{s<t, s,t\in \mathfrak{q}\setminus\mathcal{C}_1 } (x_s-x_t)^2 \prod_{r<s, r,s\in \mathfrak{q} \cap \mathcal{C}_1}(x_{r}-x_{s})^2
$$
does not depend on $\gamma,\beta$.
Let $\mathfrak{q}_1\in \mathcal{Q}^n_{n-\ensuremath{\kappa}}$, $\mathfrak{q}_2\in \mathcal{Q}^n_{n-\ensuremath{\kappa}-1}$ such that $\mathfrak{q}_2 \subset \mathfrak{q}_1$ with the obvious meaning (i.e. they differ by a single index). Since $\epsilon \lessapprox \delta^{2\ell_*-1}$, we conclude, by an argument similar to the one employed in the proof of Lemma \ref{lem:PcalRecur}, that
$$
\epsilon |H(\mathfrak{q}_2)| \lessapprox |H(\mathfrak{q}_1)|.
$$
Applying the induction hypothesis, we have for each $\mathfrak{q}\in\mathcal{Q}^n_{n-\ensuremath{\kappa}}$ and each $\gamma,\beta$ that
$$
\epsilon^{\ensuremath{\kappa}-1} |H(\mathfrak{q})| \lessapprox |H| \implies \epsilon^{\ensuremath{\kappa}-1} \left| \tilde{E}_{\kappa}^{(\gamma,\beta)}(\mathfrak{q}) \right| \lessapprox |H| \implies \epsilon^{\ensuremath{\kappa}-1} | E_{\ensuremath{\kappa}} | \lessapprox |H|.
$$
We conclude that $|E_{\ensuremath{\kappa}} \epsilon^\ensuremath{\kappa} / A| \lessapprox \epsilon$ for $\ensuremath{\kappa}=1,\dots,n-1$.
This proves $\mathcal{V}_1 \lessapprox \delta^{1-\ell_t}\epsilon$, as required.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:coeffs-accuracy-Dima}]
Theorem \ref{Thm:ImprovedClust} provides the following improved estimate on $\alpha_e$, appearing in \eqref{eq:VDMsystem1}:
\begin{align}
&\left|(\alpha_e)_j \right|\overset{}{\lessapprox} \frac{\epsilon}{\delta^{\ell_t-1}}+\frac{\epsilon}{\delta^{2\ell_t-1}}+\frac{\epsilon}{\delta^{\ell_t-1}}\lessapprox\frac{\epsilon}{\delta^{2\ell_t-1}},\quad j\in [n], \label{eq:alphaeClust1}
\end{align}
where the term $\frac{\epsilon}{\delta^{2\ell_*-2}}$ in \eqref{eq:alphaeClust} (which is obtained from nodes not in $\mathcal{C}_t$) is replaced by the improved estimate $\frac{\epsilon}{\delta^{\ell_t-1}}$, which follows from Theorem \ref{Thm:ImprovedClust}. This finishes the proof of Theorem \ref{thm:coeffs-accuracy-Dima}.
\end{proof}
\begin{remark}\label{Rem:SingClus11}
For an isolated node $x_j$, Theorem \ref{Thm:ImprovedClust} implies that the upper bound on $\left|(\alpha_e)_j \right|$ in Remark \ref{eq:SingleNodeAmp} is replaced by $\left|(\alpha_e)_j \right| \lessapprox \epsilon$.
\end{remark}
}
\begin{comment}
\subsection{An improved bound on the amplitude error in the multi-cluster case}
The next proposition derives the $O(\epsilon)$ term in \eqref{eq:ContribCc}.
\begin{prop}\label{prop:Acal}
The following holds
\begin{align}
\lim_{\epsilon \to 0^+}\epsilon^{-1}\mathcal{V}_b = \frac{(-1)^{n+1}}{{\prod_{m\neq j}(x_j-x_m)}}\sum_{q=1}^{\ell_b}\frac{\Psi_{i_q}}{(x_{i_q}-x_j)\prod_{m\neq i_q} (x_{i_q}-x_m)}=:\mathcal{U}_b. \label{eq:ContribCc1}
\end{align}
\end{prop}
\begin{proof} See Appendix \ref{prop:AcalPf}.
\end{proof}
Note that the term $\frac{1}{\prod_{m\neq j}(x_j-x_m)}$ in $
\mathcal{U}_b$ satisfies
\begin{equation}\label{eq:DenomClust}
\frac{1}{\prod_{m\neq j}|x_j-x_m|} \lessapprox \frac{1}{\delta^{\ell_t-1}}
\end{equation}
since $j\in \mathcal{C}_t,\ \operatorname{card}\left(\mathcal{C}_t \right)=\ell_t$. Using Proposition \ref{prop:Acal} we are ready to prove the main theorem of this section.
\begin{thm}\label{Thm:ImprovedClust}
The following estimate holds for all $b\in [p], \ b\neq t$:
\begin{align*}
\mathcal{V}_b = O\left(\frac{\epsilon}{\delta^{\ell_t-1}}\right),\quad \epsilon \to 0^+.
\end{align*}
\end{thm}
\begin{proof}
Introduce
\begin{align*}
\kappa_a\left(\mathfrak{X} \right) = \prod_{m\neq a}(x_a-x_m), \quad a\in [n].
\end{align*}
Using Proposition \ref{prop:Acal} and substituting $\left\{\Psi_s\right\}_{s=1}^n$ from \eqref{eq:SecondSum}, we find
\begin{align}
& (-1)^{n+1}\mathcal{U}_b\cdot \prod_{a=1}^{\ell_b}\left[(x_{i_a}-x_j)\prod_{m\neq i_a}(x_{i_a}-x_m) \right]\cdot {\prod_{m\neq j}(x_j-x_m)}=\sum_{q=1}^{\ell_b}\Psi_{i_q}\prod_{a\in [\ell_b],a\neq q}\left[(x_{i_a}-x_j)\kappa_{i_a}(\mathfrak{X})\right]\nonumber\\
&\qquad \overset{\eqref{eq:SecondSum}}{=}\sum_{m=0}^np_m\sum_{r=1}^n(-1)^{r+1}d_{r-1+m}W_r(\mathfrak{X}),\label{eq:DecompFirstOrd}
\end{align}
where {\color{blue} $\operatorname{Sym}$ is only defined in \eqref{eq:SymFunc}.}
\begin{align}
&W_r(\mathfrak{X}) = \sum_{q=1}^{\ell_b}\operatorname{Sym}_{n-r,i_q}\prod_{a\in [\ell_b],a\neq q}\left[(x_{i_a}-x_j)\kappa_{i_a}(\mathfrak{X})\right], \quad r\in [n].\label{eq:DecompFirstOrd1}
\end{align}
Next, we make the following key observation:
\begin{prop}\label{prop:PermInv}
$W_r(\mathfrak{X}), \ r\in [n]$ is invariant under any permutation $\sigma\in S_{\ell_b}$ acting on $\mathfrak{X}$ via $x_{i_a}\mapsto x_{i_{\sigma(a)}}, \ a\in[\ell_b]$ and $x_i \mapsto x_i, \ i\notin \mathcal{C}_b$.
\end{prop}
\begin{proof}
See Appendix \ref{prop:PermInvPf}.
\end{proof}
Proposition \ref{prop:PermInv} implies the following lemma.
\begin{lem}\label{lem:DiscrepancyDivisibility}
For every $r\in [n]$, the multivariate polynomial $W_r(\mathfrak{X})$ is divisible by the squared Vandermonde determinant on the nodes in $\mathcal{C}_b$, i.e. there exists a polynomial $Q(\mathfrak{X})$ in the variables $\mathfrak{X}$ such that
\begin{equation*}
W_r(\mathfrak{X}) = Q(\mathfrak{X})\cdot \prod_{1\leq q<a\leq \ell_b}(x_{i_a}-x_{i_q})^2.
\end{equation*}
\begin{proof}
It is easy to check that if $x_{i_{m_1}}=x_{i_{m_2}}$ for any $m_1,m_2\in [\ell_b]$ and $m_1\neq m_2$, then $W_r\left( \mathfrak{X}\right)$ equals zero. Therefore, as a polynomial in $x_{i_1},\dots,x_{i_{\ell_b}}$, $W_r\left( \mathfrak{X}\right)$ is divisible by every factor of the form $x_{i_{m_2}}-x_{i_{m_1}}$. By Proposition \ref{prop:PermInv}, $W_r\left( \mathfrak{X}\right)$ is invariant under transpositions of the form $\sigma_{i_{m_1},i_{m_2}}$. Hence, $W_r\left( \mathfrak{X}\right)$ must be divisible by every factor of the form $(x_{i_{m_2}}-x_{i_{m_1}})^2$.
\end{proof}
\end{lem}
Finally, consider \eqref{eq:DecompFirstOrd}. The product $\prod_{a=1}^{\ell_b}\left[(x_{i_a}-x_j)\prod_{m\neq i_a}(x_{i_a}-x_m) \right]\cdot {\prod_{m\neq j}(x_j-x_m)}$ on the left-hand side is of order $\delta^{\ell_t-1+\ell_b(\ell_b-1)}$ (see \eqref{eq:DenomClust}). By Lemma \ref{lem:DiscrepancyDivisibility}, we see that the right-hand side is at least of order $\delta^{\ell_b(\ell_b-1)}$. Hence, we find that $\mathcal{U}_b \lessapprox \frac{1}{\delta^{\ell_t-1}}$. By Proposition \ref{prop:Acal}, $\lim_{\epsilon \to 0^+}\epsilon^{-1}\mathcal{V}_b=\mathcal{U}_b$, finishing the proof of the theorem.
\end{proof}
\end{comment}
\section{Stability in finite precision computations}\label{sec:finite-precision}
\newcommand{\ensuremath{\mathcal{H}_n}}{\ensuremath{\mathcal{H}_n}}
So far, the results presented in Theorem \ref{thm:node-accuracy-Dima} and Theorem \ref{thm:coeffs-accuracy-Dima} analyze the performance of Prony's method assuming computations in exact arithmetic. In particular, we show that
the forward errors $\left\{|\tilde{x}_j-x_j|\right\}_{j=1}^n$ and $\left\{|\tilde{\alpha}_j-\alpha_j|\right\}_{j=1}^n$ are bounded by the condition number of the problem (see Theorem \ref{thm:minmax}) multiplied by the noise level. Therefore, Prony's method achieves the theoretically optimal recovery bound for both the nodes and the amplitudes.
In this section we analyze the performance of Prony's method in the regime of finite-precision computations. In order to carry out such an analysis, we recall the definition of a backward error induced by a particular numerical algorithm. We define the normwise backward error in the spirit of approximation theory:
\begin{defn}\label{def:GenBackErr}
Let $f:\mathbb{C}^m \to \mathbb{C}^n$ be a given function. Fix $x\in \mathbb{C}^m$ and let $y\in \mathbb{C}^n$ be an approximation of $f(x)$. The backward error in $y$ is defined as
\begin{align*}
\eta(y) = \inf\left\{\nu \ : \ y=f\left(x+\Delta x\right), \left| \Delta x\right|\leq \nu \left| x\right| \right\}.
\end{align*}
Namely, the (normwise) backward error is the smallest relative error between $x$ and a perturbed $x+\Delta x$, which is mapped to $y$ under $f$.
\end{defn}
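As a simple illustration of \prettyref{def:GenBackErr} (our own toy example, not part of the analysis below), consider the scalar map $f(x)=x^2$: the minimal perturbation satisfies $(x+\Delta x)^2=y$, so the backward error is computable in closed form.

```python
import numpy as np

# Toy backward error for f(x) = x**2 with x > 0 (illustrative example).
# The smallest perturbation with (x + dx)**2 = y is dx = sqrt(y) - x,
# so eta(y) = |dx| / |x|.  (Function name and values are ours.)
def backward_error_square(x, y):
    dx = np.sqrt(y) - x
    return abs(dx) / abs(x)

x = 2.0
y = 4.0 + 1e-8                    # a slightly inexact value of f(x) = 4
eta = backward_error_square(x, y)
# forward error <= cond(f, x) * eta + O(eta**2), and cond = 2 for x**2
forward_rel = abs(y - x ** 2) / x ** 2
assert forward_rel <= 2 * eta * (1 + 1e-6)
```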
In the context of computations, one can think of $y$ in Definition \ref{def:GenBackErr} as obtained from $f(x)$ by round-off, due to finite-precision computation. A well-known approach in numerical analysis is to bound the backward errors committed by a particular numerical algorithm, and conclude that the actual (forward) error is bounded by the multiple of the backward error and the condition number \cite{higham1996}. A prototypical bound of the forward error
for Definition \ref{def:GenBackErr} is then of the form
\begin{equation*}
\frac{\left|y-f(x) \right|}{\left| f(x)\right|}\leq \operatorname{cond}(f,x)\eta(y) + O(\eta(y)^2),
\end{equation*}
where $\operatorname{cond}(f,x)$ is an appropriate condition number associated with the function $f$ and the point $x$. Unfortunately, deriving a priori bounds on the backward error is difficult in general, and therefore it is frequently of interest to compute a bound numerically, given the output of a particular algorithm.
To concretize the discussion above for the context of Prony's method and Algorithm \ref{alg:classical-prony}, we introduce the following definition. {In what follows, for a vector $\vec{y}=\operatorname{col}\{y_i\}_{i=0}^N$ and $0\leq j < k \leq N$ two indices, we denote $\vec{y}[j:k]=\operatorname{col}\{y_i\}_{i=j}^k$.}
\begin{defn}\label{def:backward-errors}
Let $\vec{q}^{\circ} = \operatorname{col}\left\{ q^{\circ}_j\right\}_{j=0}^{n-1}$, $\vec{x}^{\circ}=\operatorname{col}\{x^{\circ}_j\}_{j=1}^n$ and $\vec{\alpha}^{\circ}= \operatorname{col}\left\{\alpha^{\circ}_j \right\}_{j=1}^n$ denote the results of Steps 2, 3 and 4, respectively, in a (finite-precision) numerical implementation of Prony's method (\prettyref{alg:classical-prony}). The backward errors for each of the steps are defined as follows (all inequalities of the form $|\vec{a}|\leq c$ where $\vec{a}$ is a vector or a matrix are to be interpreted component-wise):
\begin{itemize}
\item {\underline{Step 2:} Let $\tilde{\vec{m}} = \operatorname{col}\left\{\tilde{m}_j \right\}_{j=0}^{2n-1}$ be the vector of perturbed moments. For a vector $\vec{v}\in\mathbb{C}^{2n-1}$, let $\ensuremath{\mathcal{H}_n}(\vec{v})\in\mathbb{C}^{n\times n}$ denote the Hankel matrix constructed from $\vec{v}$. We define, for $\vec{\hat{m}}=\operatorname{col}\{\hat{m}_j\}_{j=0}^{2n-1}$,
\begin{align}\label{eq:backward-error-q}
\operatorname{berr}_1(\vec{q}^{\circ};\tilde{\vec{m}}) &= \inf \biggl\{\epsilon_1: \ensuremath{\mathcal{H}_n}(\vec{\hat{m}}[0:2n-2]) \cdot \vec{q}^{\circ} = -\vec{\hat{m}}[n:2n-1],\quad |\hat{\vec{m}}-\tilde{\vec{m}}|\leq \epsilon_1 \biggr\}
\end{align}
to be the backward error corresponding to the Hankel system
$$
\ensuremath{\mathcal{H}_n}(\vec{\tilde{m}}[0:2n-2]) \cdot \operatorname{col}\left\{\tilde{q}_j \right\}_{j=0}^{n-1} = -\vec{\tilde{m}}[n:2n-1].
$$ }
\item \underline{Step 3:} Denote $\hat{\vec{q}} = \operatorname{col}\left\{\hat{q}_j \right\}_{j=0}^{n-1}$ and let
\begin{align}\label{eq:eq:backward-error-x}
\operatorname{berr}_2(\vec{x}^{\circ};\vec{q}^{\circ}) &= \inf\biggl\{\epsilon_2: (x^{\circ}_j)^n+\sum_{i=0}^{n-1} \hat{q}_i (x^{\circ}_j)^i=0\quad \forall j=1,\dots,n,\quad |\vec{q}^{\circ}-\vec{\hat{q}}|\leq\epsilon_2\biggr\}
\end{align}
be the backward error corresponding to the numerically obtained roots of the Prony polynomial.
\item {\underline{Step 4:} Let $\vec{m}^*\in\mathbb{C}^{n}$ and $\vec{x}^{*}=\operatorname{col}\left\{x_j^* \right\}_{j=1}^n$, and define
\begin{align}\label{eq:eq:backward-error-alpha}
\operatorname{berr}_3(\vec{\alpha}^{\circ};\vec{\tilde{m}}[0:n-1],\vec{x}^{\circ}) &= \inf\biggl\{\epsilon_3: V(\vec{x}^*)\vec{\alpha}^{\circ}=\vec{m}^*,\quad |\vec{x}^*-\vec{x}^{\circ}| \leq \epsilon_3,\; \bigl|\vec{m}^*-\vec{\tilde{m}}[0:n-1]\bigr|\leq\epsilon_3 \biggr\}
\end{align}
to be the backward error corresponding to solving the Vandermonde system for the amplitudes.}
\end{itemize}
\end{defn}
Our goal is to estimate the total backward error of Algorithm \ref{alg:classical-prony}, by aggregating over the backward errors of the steps above. To estimate the backward errors in practice, there are several available results in the literature, e.g. \cite{bartels1992a,higham1992,stetter2004,sun1998}. For the Hankel linear system structured backward error (when the right-hand side depends on a subset of the same parameters as the Hankel matrix), we have the following:
{\begin{prop}\label{Prop:berr1}
Let $\vec{r}=\vec{r}(\vec{\tilde{m}},\vec{q}^{\circ})=\vec{\tilde{m}}[n:2n-1]+\ensuremath{\mathcal{H}_n}(\vec{\tilde{m}}[0:2n-2]) \cdot \vec{q}^{\circ}\in\mathbb{C}^n$ denote the actual residual. Then
$$
\operatorname{berr}_1(\vec{q}^{\circ};\vec{\tilde{m}}) \lessapprox \min \{ \|\vec{\delta}\|_{2}: \vec{\delta}\in\mathbb{C}^{2n}, C_n \vec{\delta} = \vec{r} \} \leq \|C_n^{\dagger}\|_2 \|\vec{r}\|_2,
$$
where
\begin{equation}\label{eq:Cmat}
C_n(\vec{q}^{\circ})=C_n = \begin{bmatrix}
q^{\circ}_0 & q^{\circ}_1 & \dots & q^{\circ}_{n-1} & 1 & 0 & \dots & 0\\
0 & q^{\circ}_0 & q^{\circ}_1 & \dots & q^{\circ}_{n-1} & 1 & 0 & \dots & 0\\
\ddots & \ddots & \ddots\\
0 & \dots & 0 & q^{\circ}_0 & q^{\circ}_1 & \dots & q^{\circ}_{n-1} & 1
\end{bmatrix}\in\mathbb{C}^{n\times 2n}.
\end{equation}
\end{prop}
\begin{proof}
Following \cite{higham1992}, denote $\vec{\delta}=\vec{\hat{m}}-\vec{\tilde{m}}$, then the constraint
\begin{equation}\label{eq:hankel-constraint}
\ensuremath{\mathcal{H}_n}(\vec{\hat{m}}[0:2n-2]) \cdot \vec{q}^{\circ} = -\vec{\hat{m}}[n:2n-1]
\end{equation}
in \eqref{eq:backward-error-q} can be rewritten as $C_n\vec{\delta}=\vec{r}$. The exact value of the backward error is the solution to the underdetermined constrained minimization problem $\min \{ \| \vec{\delta}\|_{\infty}: C_n\vec{\delta}=\vec{r} \}$, which can be, up to constants, bounded by the solution in the 2-norm.
\end{proof}
}
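For illustration, the bound of Proposition \ref{Prop:berr1} can be evaluated numerically as in the following sketch (randomly generated data; helper names are ours, and we solve $C_n\vec{\delta}=-\vec{r}$ so that the perturbed Hankel system holds exactly, the sign being immaterial for the bound):

```python
import numpy as np

# Numerical sketch of the Hankel structured backward error bound (made-up
# data): build C_n from q_circ, compute the residual r, and take the
# minimum-norm delta with C_n @ delta = -r; then m_hat = m + delta solves
# the perturbed Hankel system exactly.
def hankel(v, n):
    # n x n Hankel matrix from a vector of length 2n - 1
    return np.array([[v[i + j] for j in range(n)] for i in range(n)])

n = 3
rng = np.random.default_rng(0)
m = rng.standard_normal(2 * n)               # perturbed moments m_tilde
q = rng.standard_normal(n)                   # computed coefficients q_circ

r = m[n:2 * n] + hankel(m[:2 * n - 1], n) @ q    # actual residual

C = np.zeros((n, 2 * n))
for i in range(n):
    C[i, i:i + n] = q                        # multiplies delta[i : i+n]
    C[i, i + n] = 1.0                        # multiplies delta[n + i]

delta = np.linalg.pinv(C) @ (-r)             # minimum 2-norm perturbation
m_hat = m + delta
# the perturbed Hankel system holds exactly (up to round-off):
assert np.allclose(hankel(m_hat[:2 * n - 1], n) @ q, -m_hat[n:2 * n])
bound = np.linalg.norm(np.linalg.pinv(C), 2) * np.linalg.norm(r)
assert np.linalg.norm(delta, np.inf) <= bound + 1e-12
```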
The backward error $\operatorname{berr}_2(\vec{x}^{\circ};\vec{q}^{\circ})$, given in \eqref{eq:eq:backward-error-x}, can easily be computed as follows: since the roots computed in finite precision, $\{x^{\circ}_j\}_{j=1}^n$, are known, the coefficients $\hat{\vec{q}} = \operatorname{col}\left\{\hat{q}_j \right\}_{j=0}^{n-1}$ of the corresponding monic polynomial can be computed explicitly, followed by direct evaluation of the norm $\left\|\vec{q}^{\circ}-\hat{\vec{q}} \right\|_{\infty}$.
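In Python, this computation may be sketched as follows (illustrative values; note that \texttt{np.poly} returns monic coefficients in descending powers, while the Prony polynomial $z^n+\sum_{i=0}^{n-1}q_iz^i$ is indexed in ascending powers, hence the reversal):

```python
import numpy as np

# Sketch of the berr_2 computation (illustrative values): recover the monic
# polynomial coefficients from the computed roots and compare with q_circ.
def berr2(q_circ, x_circ):
    coeffs = np.poly(x_circ)      # monic coefficients, descending powers
    q_hat = coeffs[1:][::-1]      # ascending order: q_0, ..., q_{n-1}
    return np.linalg.norm(q_circ - q_hat, np.inf)

x_circ = np.array([0.9, 1.0, 1.1])
q_exact = np.poly(x_circ)[1:][::-1]
assert berr2(q_exact, x_circ) < 1e-14
```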
As for the structured backward Vandermonde error, we have the following.
\begin{prop}
$\operatorname{berr}_3(\vec{\alpha}^{\circ};{\vec{\tilde{m}}[0:n-1]},\vec{x}^{\circ})$ can be bounded using the following steps:
\begin{enumerate}
\item Compute the linearized backward error
$$
\operatorname{berr_3^{lin}}(\vec{\alpha}^{\circ};{\vec{\tilde{m}}[0:n-1]},\vec{x}^{\circ}) = \min \biggl\{\|(\vec{\delta_1},\vec{\delta_2})\|_{2}: \underbrace{\bigl[ V'(\vec{x}^{\circ}) \operatorname{diag}(\vec{\alpha}^{\circ}) \quad -I_{n\times n}\bigr]}_{:=U} (\vec{\delta_1},\vec{\delta_2})^T = \vec{r}\biggr\},
$$
where, as before, $\vec{r}={\vec{\tilde{m}}[0:n-1]}-V(\vec{x}^{\circ}) \vec{\alpha}^{\circ}$ and $V' = \big[{(x_j^{\circ})}^k \quad k{(x_j^{\circ})}^{k-1} \big]^{j=1,\dots,n}_{k=0,\dots,n-1}$ is the confluent Vandermonde matrix. The solution is given by $(\vec{\delta_1},\vec{\delta_2})^T = U^{\dagger}\vec{r}$.
\item Substitute the obtained minimal perturbation in $\vec{x}$ and obtain an actual bound for the right-hand side perturbation:
$$
\vec{\delta_2}'=V(\vec{x}^{\circ}+\vec{\delta_1})\vec{\alpha}^{\circ}-{\vec{\tilde{m}}[0:n-1]}.
$$
\item Take the bound for the actual backward error to be the maximum between $\|\vec{\delta_1}\|,\;\|\vec{\delta_2}\|,\;\|\vec{\delta_2}'\|$.
\end{enumerate}
\end{prop}
\begin{proof}
See p.26 in \cite{bartels1992a}. The $\infty$-norm estimates can again be bounded by the solution in the Euclidean norm.
\end{proof}
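The three steps above may be sketched numerically as follows (synthetic data; we write the linearized constraint as $U(\vec{\delta_1},\vec{\delta_2})^T=\vec{r}$ with $U=[D \;\; I]$, $D_{k,j}=k\,(x_j^{\circ})^{k-1}\alpha_j^{\circ}$, absorbing the sign of $\vec{\delta_2}$ into its definition):

```python
import numpy as np

# Sketch of the three-step bound for berr_3 (synthetic data).  Here
# V(x)[k, j] = x_j**k, and D[k, j] = k * x_j**(k-1) * a_j linearizes
# V(x + d1) @ a around x, so U = [D, I] and U @ (d1, d2) = r.
def vandermonde(z):
    return np.vander(z, len(z), increasing=True).T   # V[k, j] = z_j**k

rng = np.random.default_rng(1)
n = 3
x = np.exp(2j * np.pi * rng.random(n))               # nodes on the unit circle
a = rng.standard_normal(n) + 1j * rng.standard_normal(n)
m = vandermonde(x) @ a + 1e-10 * rng.standard_normal(n)   # noisy moments

# step 1: minimum-norm solution of the linearized constraint
r = m - vandermonde(x) @ a
k = np.arange(n)[:, None]
D = k * x[None, :] ** np.maximum(k - 1, 0) * a[None, :]
U = np.hstack([D, np.eye(n)])
d = np.linalg.pinv(U) @ r
d1, d2 = d[:n], d[n:]
# step 2: substitute d1 and recompute the right-hand side perturbation
d2p = vandermonde(x + d1) @ a - m
# step 3: the bound is the largest of the three norms
bound = max(np.linalg.norm(d1), np.linalg.norm(d2), np.linalg.norm(d2p))
assert bound < 1e-8                  # small residual implies small berr_3
```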
\begin{thm}\label{thm:finite-precision}
Suppose that the numerical algorithms in steps 2, 3 and 4 of \prettyref{alg:classical-prony} are \emph{backward stable}, i.e. the backward errors \eqref{eq:backward-error-q}, \eqref{eq:eq:backward-error-x} and \eqref{eq:eq:backward-error-alpha} are on the order of machine epsilon $\epsilon_M$, and also that $\epsilon_M \lessapprox \epsilon$. Further assume that for \eqref{eq:Cmat}, it holds that $\|C_n^{\dagger}(\vec{q}^{\circ})\|_2 = O(1)$ (i.e. independent of the minimal separation $\delta$). Then the bounds of \prettyref{thm:node-accuracy-Dima} and \prettyref{thm:coeffs-accuracy-Dima} hold for $\{x^{\circ}_j\}_{j=1}^n$ and $\{\alpha^{\circ}_j\}_{j=1}^n$ in place of $\{\tilde{x}_j\}_{j=1}^n$ and $\{\tilde{\alpha}_j\}_{j=1}^n$.
\end{thm}
\begin{proof}
By backward stability of Step 2, $\vec{q}^{\circ}$ is the exact solution to the \emph{Hankel} system
$$
{\ensuremath{\mathcal{H}_n}(\vec{\hat{m}}[0:2n-2]) \cdot \vec{q}^{\circ} = -\vec{\hat{m}}[n:2n-1]}
$$
with moment vector {$\vec{\hat{m}}\in\mathbb{C}^{2n}$} such that
$\|\vec{\hat{m}}-\vec{\tilde{m}}\|=O(\epsilon_M)$. By backward stability of Step 3 (root finding), $\vec{x}^{\circ}$ are the exact roots of the polynomial $\hat{q}(z)$ such that $\|\vec{q}^{\circ}-\vec{\hat{q}}\|=O(\epsilon_M)$. Using Proposition \ref{Prop:berr1} {and \eqref{eq:hankel-constraint}}, we have
\begin{align*}
\operatorname{berr}_1(\vec{q}^{\circ},\vec{\tilde{m}})&\lessapprox \|C_n(\vec{q}^{\circ})^{\dagger}\|_2 { \|\vec{\tilde{m}}[n:2n-1]+\ensuremath{\mathcal{H}_n}(\vec{\tilde{m}}[0:2n-2]) \cdot \vec{q}^{\circ}\|_2}\\
&\leq \|C_n(\vec{q}^{\circ})^{\dagger}\|_2 {\| \left(\tilde{\vec{m}}-\hat{\vec{m}} \right)[n:2n-1]+\left[\ensuremath{\mathcal{H}_n}(\vec{\tilde{m}}[0:2n-2]) -\ensuremath{\mathcal{H}_n}(\vec{\hat{m}}[0:2n-2]) \right]\vec{q}^{\circ}\|_2}\\
& = O(\epsilon_M).
\end{align*}
Since $\epsilon_M\lessapprox \epsilon$, we conclude that $\vec{x}^{\circ}$ are the exact roots of a polynomial $\bar{q}(\vec{m}^{\circ};z)$ {where $\vec{m^{\circ}}\in\mathbb{C}^{2n}$} with $\|\vec{m}^{\circ}-\vec{\tilde{m}}\|\leq \| \vec{m}^{\circ}-\vec{\hat{m}}\|+\|\vec{\hat{m}}-\vec{\tilde{m}}\|=O(\epsilon_M)$. The latter implies that the bounds for $|x^{\circ}_j-x_j|$ hold as specified in
\prettyref{thm:node-accuracy-Dima}. By backward stability of Step 4, there exist $\vec{x}^*$ and $\vec{m}^*$ such that $\alpha^{\circ}$ is the exact solution of the Vandermonde system $V(\vec{x}^*)\vec{\alpha}^{\circ}=\vec{m}^*$ with $\vec{x}^*-\vec{x}^{\circ}=O(\epsilon_M)$ and $\|\vec{m}^*-{\vec{\tilde{m}}[0:n-1]}\|=O(\epsilon_M)$. Clearly $\vec{x}^*$ are the exact roots of a polynomial with coefficients $\vec{q}^*$ satisfying $\|\vec{q}^*-\hat{\vec{q}}\|=O(\epsilon_M)$. But then we also have $\|\vec{q}^*-\vec{q}^{\circ}\|=O(\epsilon_M)$ and by the same computation as above we conclude that $\vec{x}^*$ are the exact roots of a polynomial { $\bar{q}(\vec{\breve{m}};z)$ with $\|\vec{\breve{m}}-\vec{\tilde{m}}\|=O(\epsilon_M)$.} Applying \prettyref{thm:coeffs-accuracy-Dima} completes the proof.
\end{proof}
\section{Numerical results}\label{sec:numerics}
The numerical performance of both PM and DPM has already been investigated in \cite{decimatedProny}, suggesting their optimality in the corresponding regime (respectively, as either $\delta\ll {1\over \Omega}$ or $\textrm{SRF} \gg 1$, where $\textrm{SRF}:={1\over{\Omega\delta}}$). In particular, the results reported in \cite{decimatedProny} confirm the predictions of \prettyref{thm:node-accuracy-Dima} and \prettyref{thm:coeffs-accuracy-Dima}. Numerical results in the multi-cluster setting for additional SR algorithms such as ESPRIT, MUSIC and Matrix Pencil are available in, e.g., \cite{li2020a,li2021,batenkov2021b}. Here we complement the experiments in \cite{decimatedProny} by computing the backward errors in each step of PM (cf. \prettyref{thm:finite-precision}), implying that PM attains the optimal bounds in finite-precision arithmetic as well (\prettyref{fig:vanilla-nonproj-DP-backward}).
In all experiments, we consider a clustered configuration with $n=3$ nodes, where node $j=3$ is isolated, and construct the signal with random complex amplitudes (as in the model \eqref{eq:ft-of-measure}), while adding random bounded perturbations (noise) to the measurements. We measure the actual error amplification factors (i.e., the condition numbers of this problem) of the nodes and amplitudes (cf. \cite[Algorithm 3.3]{batenkov2021b}):
$$\mathcal{K}_{x,j}:=\epsilon^{-1}\Omega|x_j-\tilde{x}_j|, \quad \mathcal{K}_{\alpha,j}:=\epsilon^{-1}|\alpha_j-\tilde{\alpha}_j|,$$
choosing $\epsilon, \delta$ at random from a pre-defined range. Then, reconstruction of the signal is performed using one of three methods: the classical Prony method, the Decimated Prony method, and Matrix Pencil. For each method, we compare the scaling of the condition numbers in two scenarios: projecting the recovered nodes onto the unit circle prior to recovering
the amplitudes versus non-projecting them. All computations were done in double-precision floating-point arithmetic.
As evident from \prettyref{fig:prony-initial-check} (middle pane), the correlations between the errors in the coefficients of the Prony polynomial $\bar{q}(z)$ are essential for obtaining the correct asymptotics for the errors in $\{x_j\}_{j=1}^n$, as done in the proof of \prettyref{thm:node-accuracy-Dima}. On the other hand, \prettyref{fig:prony-initial-check} (right pane) does not appear to suggest that the correlations between the errors in $\{x_j\}_{j=1}^n$ have any influence on the accuracy of recovering the $\alpha_j$'s. In hindsight, the reason is clear: the proof of the estimates \eqref{eq:alphaeClust} (or \eqref{eq:alphaeClustIsolated}) does not require any correlations between the different errors. In contrast, the improved analysis in \prettyref{sec:impBound} uses these correlations in an essential way via the expression $\sum_{\nu=1}^{\ell_1} \alpha_{{\nu}} \biggl[ \prod_{r\in [\ell_1]\setminus\{\nu\}} (x_{r}-\tilde{x}_j) \biggr] \bar{q}(x_{{\nu}})$, resulting in the improved bound \eqref{eq:alphaeClust1}. Thus, if we were to perturb the recovered nodes in random directions, Theorem~\ref{Thm:ImprovedClust} would no longer be valid, and the bound \eqref{eq:alphaeClust} would be the ``next best thing''. It turns out that a simple {\bf projection of the complex nodes $\{\tilde{x}_j\}_{j=1}^n$ to the unit circle prior to recovering the amplitudes} $\alpha_j$ provides the required perturbation -- \prettyref{fig:discrepancy-DP} demonstrates the loss of accuracy of the non-cluster node $j=3$ when projecting all nodes. Here we also plot the normalized ``cluster discrepancy'' $\mathcal{V}_1/\epsilon$ given by \eqref{eq:ContribCc} measuring the influence between the different clusters, which should scale according to either \eqref{eq:TildVVBdClus} (second estimate) in the projected case, or according to Theorem~\ref{Thm:ImprovedClust} in the non-projected case. Note that the multi-cluster geometry is essential to observe such behavior.
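To make the projection experiment concrete, here is a self-contained toy version of the comparison (synthetic nodes, amplitudes and noise of our choosing, not the exact configuration behind the figures):

```python
import numpy as np

# Toy comparison: classical Prony method, recovering the amplitudes either
# from the raw recovered nodes or from the nodes projected onto the unit
# circle first.  All data below is synthetic and illustrative.
def prony(moments, n, project=False):
    # Step 2: solve the Hankel system for the Prony polynomial coefficients
    H = np.array([[moments[i + j] for j in range(n)] for i in range(n)])
    q = np.linalg.solve(H, -moments[n:2 * n])
    # Step 3: roots of z^n + q_{n-1} z^{n-1} + ... + q_0
    roots = np.roots(np.concatenate(([1.0], q[::-1])))
    if project:
        roots = roots / np.abs(roots)
    # Step 4: solve the Vandermonde system for the amplitudes
    V = np.vander(roots, n, increasing=True).T
    return roots, np.linalg.solve(V, moments[:n])

n = 3
delta = 1e-2
x = np.exp(1j * np.array([0.0, delta, 2.0]))   # cluster of two nodes + isolated node
alpha = np.array([1.0 + 0.5j, -0.7 + 0.2j, 0.9 - 0.3j])
m = np.array([np.sum(alpha * x ** k) for k in range(2 * n)])
rng = np.random.default_rng(3)
eps = 1e-10
m_noisy = m + eps * (rng.standard_normal(2 * n) + 1j * rng.standard_normal(2 * n))

def matched_errors(project):
    roots, amps = prony(m_noisy, n, project=project)
    idx = [int(np.argmin(np.abs(roots - xj))) for xj in x]   # match to true nodes
    return np.abs(amps[idx] - alpha)

err_nonproj = matched_errors(project=False)
err_proj = matched_errors(project=True)
# without projection, the isolated-node amplitude error stays on the order
# of the noise level eps; projection typically degrades it
assert err_nonproj[2] < 1e-6
```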
\begin{figure}
\includegraphics[width=0.9\linewidth]{./figures/prony-backward-errors}
\caption{{\small Classical Prony method - asymptotic optimality. For cluster nodes $j=1,2$, the node errors $\mathcal{E}_{x,j}=|\tilde{x}_j-x_j|$ (left) scale like $\delta^{2-2\ell}$, while the amplitude errors $\mathcal{E}_{a,j}=|\tilde{\alpha}_j-\alpha_j|$ (middle) scale like $\delta^{1-2\ell}$. For the non-cluster node $j=3$, both errors are bounded by a constant. Right: the backward errors of each step, as specified in \prettyref{def:backward-errors}, are on the order of machine epsilon, implying numerical stability of PM according to \prettyref{thm:finite-precision}. Here $\epsilon=10^{-15}$ in all experiments.}}
\label{fig:vanilla-nonproj-DP-backward}
\end{figure}
\begin{figure}
\includegraphics[width=0.9\linewidth]{./figures/ampl_prony_discrepancy_randomComplex-DP}
\caption{Prony's method: accuracy of amplitude recovery where the nodes are projected (left) or non-projected (center) prior to recovering the amplitudes. Right panel: the corresponding normalized amplitude discrepancy function $\mathcal{V}_1/\epsilon$ (see text). The results are consistent with the estimates \eqref{eq:TildVVBdClus} (projected) and Theorem~\ref{Thm:ImprovedClust} (non-projected).}
\label{fig:discrepancy-DP}
\end{figure}
Interestingly, the cancellation phenomenon just demonstrated appears in other methods for SR which are based on decoupling the recovery of $\{x_j\}_{j=1}^n$ from that of $\{\alpha_j\}_{j=1}^n$. Under the same projection perturbation, the loss of accuracy can be seen for the Decimated Prony method (DPM, \prettyref{fig:DPM-merged}) and also for Matrix Pencil (\prettyref{fig:MP-merged}). In all the above, if we project the nodes before computing the amplitudes, then the amplitude accuracy deteriorates according to the estimate \eqref{eq:alphaeClust}. {\bf Thus, non-projecting the nodes is crucial for maintaining the accuracy of non-cluster amplitudes.} While this phenomenon is perhaps to be expected for DPM, which still relies on PM, Matrix Pencil entails an eigenvalue decomposition, and therefore it is, a priori, not obvious that cancellations should occur there as well. We believe our insights can help towards a complete analysis of Matrix Pencil and related methods in the multi-cluster geometry.
\begin{figure}
\includegraphics[width=0.98\linewidth]{./figures/DPM-final-fig}
\caption{\small Decimated Prony Method - asymptotic optimality. For cluster nodes $j=1,2$, the node amplification factors $\mathcal{K}_x$ (left) scale like $\textrm{SRF}^{2\ell-2}$, and for the non-cluster node ($j=3$) it is bounded by a constant. The amplitude error amplification factors $\mathcal{K}_a$ for the non-cluster node with no projection (middle) are bounded by a constant while with projection (right) they scale like $\textrm{SRF}^{\ell-1}$. For the cluster nodes, both amplitude error amplification factors scale like $\textrm{SRF}^{2\ell-1}$.}
\label{fig:DPM-merged}
\end{figure}
\begin{figure}
\includegraphics[width=0.98\linewidth]{./figures/MP-final-fig}
\caption{\small Matrix Pencil's asymptotic optimality. For cluster nodes, the node amplification factors $\mathcal{K}_x$ (left) scale like $\textrm{SRF}^{2\ell-2}$, and for the non-cluster node ($j=3$) it is bounded by a constant. The amplitude error amplification factors $\mathcal{K}_a$ for the non-cluster node with no projection (middle) are bounded by a constant while with projection (right) they scale like $\textrm{SRF}^{\ell-1}$ for large enough $\textrm{SRF}$. For the cluster nodes, both amplitude error amplification factors scale like $\textrm{SRF}^{2\ell-1}$.}
\label{fig:MP-merged}
\end{figure}
\newpage
\printbibliography
\section{Theory of cooling in a transmon array coupled to a cavity}
Here we describe the theory behind the bath-engineering protocol implemented in the experiment. We first consider the general case of an array of $L$ transmon qubits, each capacitively coupled to its nearest neighbors. After showing how, in such an array, an engineered bath can induce transitions between array eigenstates which conserve excitation number, we specialize to the case of a three-site array ($L=3$) used in our experiment. As we will show, theoretical predictions are in qualitative agreement with experimental observations, with the single-mode approximation of the 3D microwave cavity likely contributing a large part of the discrepancy.
\subsection{General L-site array}
A general array of $L$ capacitively-coupled transmon qubits ($j=1, \dots L$) is described by the Hamiltonian
\begin{equation}\label{H_array}
H_{\rm array}=\hbar \sum_{j=1}^L \left(\omega_j b_j^\dagger b_j+\frac{\alpha_j}{2} b_j^\dagger b_j^\dagger b_j b_j \right)+\hbar J\sum_{j=1}^{L-1}(b_{j+1}^\dagger b_j+b_j^\dagger b_{j+1}).
\end{equation}
The first sum describes each individual transmon as an anharmonic oscillator with creation/annihilation operators $b_j^\dagger$ and $b_j$. This is valid in the limit in which we work, where the Josephson energy of each qubit dominates the capacitive charging energy by roughly two orders of magnitude. Capacitive coupling, or dipole-dipole interaction, between the qubits gives rise to the second term in Eq.~(\ref{H_array}), with $J$ being the hopping amplitude for an excitation to jump from one qubit to the next. Here, for simplicity, we ignore couplings beyond nearest-neighbors, because the dipole interaction scales inversely with the cube of the qubit-qubit distance, making next-nearest-neighbor coupling amplitudes close to an order of magnitude lower than corresponding nearest-neighbor values.
For a derivation of Eq.~(\ref{H_array}) via circuit quantization, see the supplementary materials of either~\cite{PhysRevLett.110.030601} or ~\cite{DalArxiv}. These derivations specialize to the lowest two transmon levels, but considering higher levels is straightforward. In this work, we operate in a parameter regime where the qubit-qubit coupling is much lower than the qubit frequencies; in such a regime the rotating-wave approximation is well-justified, and we use it here to obtain the Bose-Hubbard Hamiltonian. This Hamiltonian has U(1) invariance and therefore preserves the number of excitations.
The array is coupled to a single-mode cavity $ H_{\rm cav}=\hbar \omega_c a^\dagger a$ (with $a^\dagger$ and $a$ the creation/destruction operators) via an interaction term
$H_{\rm int}=\hbar \sum_{j=1}^L g_j(b_j a^\dagger +b_j^\dagger a)$. The coupling $g_j$ between qubit $j$ and the cavity depends on the product of the strength of the electromagnetic field of the cavity mode at the location of qubit $j$ and the transition dipole moment of that qubit, and hence varies from qubit to qubit. For example, in a setup such as the one used in the main text, where the array lies roughly in the center of the cavity, the qubit-cavity coupling to the lowest cavity mode will be largest in the center of the array.
Since the operators which create and destroy transmon excitations satisfy the bosonic commutation relation $[b_j, b_j^\dagger]=1$, it is natural to view these excitations as bosons on a lattice, with each transmon embodying a lattice site. Each transmon has a negative anharmonicity $\alpha_j < 0$, representing an effective on-site interaction between the bosonic excitations. In the case where the transmon anharmonicity is uniform across the array, close to what we realize in the experiment, the array Hamiltonian in Eq.~(\ref{H_array}) has the familiar form of a Bose-Hubbard model in the regime of attractive interactions (here $\alpha $ plays the role of the Bose-Hubbard parameter $U$).
To complete the description of this circuit-QED system we add the drive term, representing an external classical drive applied to the cavity: $H_{\rm drive}=\hbar \epsilon(t)(a^\dagger e^{-i \omega_d t}+a e^{i \omega_d t})$, where
$\epsilon(t)$ is the strength of the external drive and $\omega_d$ the drive frequency. The full Hamiltonian of the system qubit-array and cavity is the sum of all the aforementioned terms:
\begin{equation}\label{H_full}
H=H_{\rm array}+H_{\rm cav}+H_{\rm int}+H_{\rm drive}.
\end{equation}
This Hamiltonian describes all aspects of the array-cavity system important to our experiment, including the cooling via engineered dissipation, as we will show shortly. To see this, we cast the Hamiltonian in a simpler form via a few standard transformations. First, we move to the rotating frame of the drive by applying the unitary transformation: $U=e^{i \omega_d t(a^\dagger a+\sum_j b_j^\dagger b_j)}$. The transformed Hamiltonian $\tilde{H}=U H U^\dagger -i \hbar U \partial_t U^\dagger$\footnote{Recall that using the Baker-Campbell-Hausdorff formula we have:
$\tilde{H}=U H U^\dagger=e^{X}H e^{-X}=H+[X,H]+\frac{1}{2!}[X,[X,H]]+\frac{1}{3!}[X,[X,[X,H]]]\dots $ } takes the form:
\begin{equation}\label{H}
\tilde{H}=\hbar \sum_{j=1}^L \left(\Delta_{j} b_j^\dagger b_j+\frac{\alpha_j}{2} b_j^\dagger b_j^\dagger b_j b_j \right)+\hbar J\sum_{j=1}^{L-1}(b_{j+1}^\dagger b_j+b_j^\dagger b_{j+1})+
\hbar\Delta_c a^\dagger a+\hbar \sum_{j=1}^L g_j(b_j a^\dagger +b_j^\dagger a)+\hbar\epsilon(t)(a^\dagger +a ),
\end{equation}
where we have introduced the detunings: $\Delta_c=\omega_c-\omega_d$ and $\Delta_{j}=\omega_j-\omega_d$. In essence, this rotation gives new effective qubit and cavity frequencies while removing the oscillating time-dependence on the cavity operators in the drive term.
Ignoring for a moment both the drive and the anharmonic transmon terms in $\tilde{H}$, the array-cavity system is simply a set of coupled harmonic oscillators which exhibits its own set of normal modes. To find these modes, we write the quadratic terms of the Hamiltonian, $H_0$, in matrix form:
\begin{eqnarray}\label{H0}
H_0&=&\hbar \sum_{j=1}^L \Delta_{j} b_j^\dagger b_j+\hbar J\sum_{j=1}^{L-1}(b_{j+1}^\dagger b_j+b_j^\dagger b_{j+1})+
\hbar\Delta_c a^\dagger a+\hbar \sum_{j=1}^L g_j(b_j a^\dagger +b_j^\dagger a) \\
&=& \hbar\mathbf{v}^\dagger \mathcal{H}_0 \mathbf{v},
\end{eqnarray}
where we have introduced the $L+1$-component vector $\mathbf{v}=(a, b_1, b_2, \dots b_L)$ consisting of the annihilation operators for each oscillator, and the $(L+1)\times (L+1)$ matrix $\mathcal{H}_0$ representing their frequencies and couplings. Now the normal modes of the coupled system are found simply by diagonalizing the matrix $\mathcal{H}_0$. We thus find the matrix $M$ whose columns are the eigenvectors of $\mathcal{H}_0$; then $M^{-1} \mathcal{H}_0 M$ is diagonal. Calling $N = M^{-1}$, the corresponding change of basis $\mathbf{W} = N \mathbf{v}$ gives the eigenmodes of the system. Writing these modes explicitly in terms of the matrix elements of $N$ and $M$ will be useful for later purposes---calling the new basis vectors $\mathbf{W}= (A, B_1, B_2, \dots B_L)$, we have:
\begin{eqnarray}
A=N_{00} a+\sum_{l=1}^L N_{0 l} b_{l}, \label{AofN}\\
B_j=N_{j0} a+\sum_{l=1}^L N_{j l} b_{l},\label{BofN}
\end{eqnarray}
and their inverse transformations:
\begin{eqnarray}
a=M_{00} A+\sum_{l=1}^L M_{ 0 l} B_{l},\label{aofN} \\
b_j=M_{j0} A+\sum_{l=1}^L M_{j l} B_{l}.\label{bofN}
\end{eqnarray}
The corresponding eigenvalues or normal-mode frequencies are denoted by $\lambda_i$. In this new basis $H_0$ is, as expected, a sum of uncoupled harmonic oscillators: $H_0=\hbar \lambda_0 A^\dagger A+\hbar \sum_{j=1}^L \lambda_j B_j^\dagger B_j$.
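As a numerical illustration of this diagonalization (with assumed parameter values, not the experimental ones), one can build $\mathcal{H}_0$ for $L=3$ in the rotating frame and read off $M$ and $N=M^{-1}$ directly:

```python
import numpy as np

# Illustrative dispersive parameters for an L = 3 array coupled to a cavity;
# these numbers are assumptions, not the values of the experiment.
L = 3
Delta_c = 1.0                        # cavity detuning
Delta_q = np.array([3.0, 3.0, 3.0])  # qubit detunings
J = 0.05                             # nearest-neighbor hopping
g = np.array([0.08, 0.10, 0.08])     # qubit-cavity couplings

# Build the (L+1)x(L+1) matrix H0 in the basis v = (a, b_1, ..., b_L)
H0 = np.zeros((L + 1, L + 1))
H0[0, 0] = Delta_c
for j in range(L):
    H0[j + 1, j + 1] = Delta_q[j]
    H0[0, j + 1] = H0[j + 1, 0] = g[j]
for j in range(L - 1):
    H0[j + 1, j + 2] = H0[j + 2, j + 1] = J

lam, M = np.linalg.eigh(H0)          # columns of M are the eigenvectors
N = M.T                              # H0 is real symmetric, so N = M^{-1} = M^T

# identify the cavity-like mode A as the one with the largest cavity weight
cav = int(np.argmax(np.abs(N[:, 0])))
print("normal-mode frequencies:", lam)
print("cavity weight of the cavity-like mode:", abs(N[cav, 0]))
```

Since $\mathcal{H}_0$ is real symmetric, $M$ is orthogonal; in this dispersive toy example the printed cavity weight is close to unity, consistent with $M_{00},N_{00}\approx 1$.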
In the dispersive limit, where $g_j \ll |\omega_{\mathrm{c}} - \omega_{\mathrm{j}}|$, we expect the dressed mode $A$ to largely consist of the cavity mode, with small contributions from each qubit. Equivalently, this means that $M_{0 l}$ and $N_{0 l}$ are each much smaller than $M_{00}$ and $N_{00}$, which are both close to unity. Conversely, the $B_j$ modes are mostly linear combinations of transmon excitations ($b_1, b_2, \dots b_L$) with a small component from the cavity. In symbols, $|M_{j0}|\ll |M_{jl}|$ and $|N_{j0}|\ll |N_{jl}|$ for $j,l \neq 0$ (see for instance the specific example from the experiment, shown in Eq.~(\ref{N_ex}) and ~(\ref{M_ex})).
In this new basis the drive term becomes: $\epsilon(t)(a^\dagger +a ) = \epsilon(t)\left(M_{00}(A^\dagger +A) +\sum_{l=1}^L M_{0 l} (B^\dagger_{l}+B_{l})\right)$. Because of the mode mixing, the drive on the original cavity-mode $a$ now excites all of the normal modes $\{A, \{B_\mathrm{j}\}\}$. However, since $M_{0l} \ll M_{00}$, we can neglect the $B_j$ terms in this operator unless the drive is close to resonance with one of the qubit-like modes. In that case, the coefficients $M_{0l}$ determine how responsive the qubit-like mode is to a drive pulse, i.e., which states are dark and which are bright. We will return to this point later.
Continuing onwards in solving the original Hamiltonian, we now put back the anharmonic transmon terms which we neglected when finding the linear eigenmodes. In terms of the new normal-mode operators, the anharmonic term is
\begin{eqnarray}
\sum_{j=1}^L\frac{\alpha_j }{2} b_j^\dagger b_j^\dagger b_j b_j= \frac{1}{2}\sum_{j=1}^L \alpha_j (M_{j0} A^\dagger+\sum_{l=1}^L M_{j l} B^\dagger_{l})(M_{j0} A^\dagger+\sum_{p=1}^L M_{jp} B^\dagger_{p})(M_{j0} A+\sum_{q=1}^L M_{jq} B_{q})(M_{j0} A+\sum_{s=1}^L M_{js} B_{s}).
\end{eqnarray}
Invoking the rotating wave approximation to neglect terms which do not conserve excitation number, the remaining terms can be grouped into three categories:
\begin{itemize}
\item self-Kerr corrections to the qubit-like operators:
$\sum_{lpqs=1}^L \mu_{lpqs}B^\dagger_l B^\dagger_p B_q B_s$,
where we have defined the tensor:
\begin{equation}
\mu_{lpqs}=\sum_{j=1}^L \alpha_j M_{jl}M_{jp}M_{jq}M_{js},
\end{equation}
\item a self-Kerr correction for the cavity:
$\Pi_0 A^\dagger A^\dagger A A$,
where we have defined the constant:
\begin{equation}
\Pi_0=\sum_{j=1}^L \alpha_j M_{j0}^4,
\end{equation}
\item and a cross-Kerr term that couples together the $A$ and $B_j$ operators:
$4 A^\dagger A \sum_{lp} \eta_{lp} B^\dagger_l B_p$,
where we have defined the tensor:
\begin{equation} \label{eta}
\eta_{lp}=\sum_{j=1}^L \alpha_j M_{j0}^2 M_{jl}M_{jp}.
\end{equation}
\end{itemize}
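A short numerical check of these definitions (again with assumed, illustrative parameters): build $M$ from a toy dispersive $\mathcal{H}_0$, form $\mu$, $\Pi_0$, and $\eta$, and compare their magnitudes:

```python
import numpy as np

# Toy dispersive mode matrix M for an L = 3 array coupled to a cavity;
# all parameter values here are illustrative assumptions.
L = 3
H0 = np.diag([1.0, 3.0, 3.0, 3.0])          # (Delta_c, Delta_1, ..., Delta_3)
g, J = [0.08, 0.10, 0.08], 0.05
for j in range(L):
    H0[0, j + 1] = H0[j + 1, 0] = g[j]
for j in range(L - 1):
    H0[j + 1, j + 2] = H0[j + 2, j + 1] = J
_, M = np.linalg.eigh(H0)
cav = int(np.argmax(np.abs(M[0, :])))       # cavity-like eigenvector
M[:, [0, cav]] = M[:, [cav, 0]]             # put it in column 0

alpha = np.full(L, -0.2)                    # uniform anharmonicity (assumed)
Mq = M[1:, :]                               # rows j = 1..L of the mixing matrix

# mu_{lpqs} = sum_j alpha_j M_{jl} M_{jp} M_{jq} M_{js}
mu = np.einsum('j,jl,jp,jq,js->lpqs', alpha,
               Mq[:, 1:], Mq[:, 1:], Mq[:, 1:], Mq[:, 1:])
# Pi_0 = sum_j alpha_j M_{j0}^4
Pi0 = np.sum(alpha * Mq[:, 0] ** 4)
# eta_{lp} = sum_j alpha_j M_{j0}^2 M_{jl} M_{jp}
eta = np.einsum('j,j,jl,jp->lp', alpha, Mq[:, 0] ** 2, Mq[:, 1:], Mq[:, 1:])

print("|Pi_0| =", abs(Pi0), " max|eta| =", np.abs(eta).max(),
      " max|mu| =", np.abs(mu).max())
```

In the dispersive regime the printed values display the hierarchy [qubit self-Kerr] $\gg$ [cross-Kerr] $\gg$ [cavity self-Kerr] discussed below.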
Now we have the full Hamiltonian in the new basis $\mathbf{W}$:
\begin{eqnarray}
\tilde{H}_W& =& \hbar \lambda_0 A^\dagger A+\frac{\hbar}{2}\Pi_0 A^\dagger A^\dagger A A+\hbar \sum_{j=1}^L \lambda_j B_j^\dagger B_j+\frac{\hbar}{2}\sum_{lpqs=1}^L \mu_{lpqs}B^\dagger_l B^\dagger_p B_q B_s+2 \hbar A^\dagger A \sum_{lp} \eta_{lp} B^\dagger_l B_p\nonumber \\
& & +\hbar \epsilon(t)M_{00}(A^\dagger +A) +\hbar \epsilon(t)\sum_{l=1}^L M_{0 l} (B^\dagger_{l}+B_{l}).
\end{eqnarray}
Given our assumption $|M_{j0}|\ll |M_{jl}|$ for $l,j \neq 0$ we have: $\Pi_0\ll \eta_{lp} \ll\mu_{lpqs}$. In other words, in this new dressed-state basis $\mathbf{W}$ we have:
[qubit self-Kerr] $\gg$ [cross-Kerr] $\gg$ [cavity self-Kerr]. For simplicity we henceforth neglect the cavity self-Kerr term $ \Pi_0 A^\dagger A^\dagger A A$.
Next, we displace the new cavity-like operator $A$ and write it as a classical part and a small quantum correction:
\begin{equation}
A=\bar{A}(t)+D,
\end{equation}
by applying the unitary transformation: $U_D=\exp[\bar{A}(t)A^\dagger-\bar{A}^\star(t)A]$.
Under this transformation $U_D A U_D^\dagger=A-\bar{A}(t)=D$ and
the Hamiltonian $\tilde{H}_W$ transforms as:
\begin{equation}
\tilde{H}_{W,D}=U_D \tilde{H}_{W} U_D^\dagger-i \hbar U_D \partial_t U_D^\dagger.
\end{equation}
By direct substitution of $A=\bar{A}(t)+D$ into the Hamiltonian we find:
\begin{equation}
A^\dagger A = (\bar{A}^\star +D^\dagger)(\bar{A} +D)=|\bar{A}|^2+(\bar{A}D^\dagger+\bar{A}^\star D)+D^\dagger D
\end{equation}
and therefore:
\begin{eqnarray}
\tilde{H}_{W,D}& =&\hbar \lambda_0 \left[|\bar{A}|^2+(\bar{A}D^\dagger+\bar{A}^\star D)+D^\dagger D\right] +\hbar \sum_{j=1}^L\lambda_j B_j^\dagger B_j+\frac{\hbar}{2}\sum_{lpqs=1}^L \mu_{lpqs}B^\dagger_l B^\dagger_p B_q B_s+\hbar \epsilon(t)M_{00}(D^\dagger +D+\bar{A}^\star+\bar{A})\nonumber\\
& +& 2\hbar \left[|\bar{A}|^2+(\bar{A}D^\dagger+\bar{A}^\star D)+D^\dagger D\right]\sum_{lp} \eta_{lp} B^\dagger_l B_p-i\hbar (\dot{\bar{A}}(t) D^\dagger- \dot{\bar{A}}^\star(t) D)+\hbar \epsilon(t)\sum_{l=1}^L M_{0 l} (B^\dagger_{l}+B_{l}).
\end{eqnarray}
We choose $\bar A(t)$ by requiring the terms linear in $D$ and $D^\dagger$ to vanish
\begin{equation}\label{dispA}
\hbar \lambda_0 \bar{A}(t)+(2\hbar \sum_{lp} \eta_{lp} B^\dagger_l B_p)\bar{A}(t)+\hbar \epsilon(t)M_{00} =-i \hbar\dot{\bar{A}}(t),
\end{equation}
so that we can eliminate the terms involving the drive. Eliminating the drive is equivalent to moving to a frame where the cavity evolves according to the usual Heisenberg equation $\dot{A}=\frac{i}{\hbar}[H,A]-\frac{\kappa}{2}M_{00} A$, and therefore solving for the classical part
$\bar{A}(t)$ is equivalent to Eq.~(\ref{dispA}).
Since $\eta_{lp} \ll \lambda_0$ we neglect that term and consider only:
\begin{equation}
\lambda_0 \bar{A}(t)+\epsilon(t)M_{00} =-i \dot{\bar{A}}(t).
\end{equation}
In the stationary case $\dot{\bar{A}}(t)=0$ the solution is:
\begin{equation}\label{stationary_case}
\bar{A}(t)=\frac{-\epsilon(t) M_{00}}{\lambda_0-i \frac{\kappa}{2}M_{00}},
\end{equation}
where we have included the correction due to the cavity damping rate $\kappa$.
The final Hamiltonian then becomes:
\begin{eqnarray}\label{Hcool}
\tilde{H}_{W,D}& =&\hbar \lambda_0 \left[|\bar{A}|^2+D^\dagger D\right] +\hbar \sum_{j=1}^L \lambda_j B_j^\dagger B_j+\frac{\hbar}{2}\sum_{lpqs=1}^L \mu_{lpqs}B^\dagger_l B^\dagger_p B_q B_s+\hbar \epsilon(t)\sum_{l=1}^L M_{0 l} (B^\dagger_{l}+B_{l})\nonumber\\
& +& 2\hbar \left[|\bar{A}|^2+(\bar{A}D^\dagger+\bar{A}^\star D)+D^\dagger D\right]\sum_{lp} \eta_{lp} B^\dagger_l B_p.
\end{eqnarray}
Ignoring the constant offset term $\hbar\lambda_0|\bar{A}|^2$, we break the Hamiltonian up into the following pieces. We call $H_D$ the term involving only the dressed cavity operator:
\begin{equation}
H_D=\hbar \lambda_0 D^\dagger D;
\end{equation}
$H_B$ the one involving only the dressed qubit operators:
\begin{equation}\label{HB}
H_B=\hbar \sum_{j=1}^L \lambda_j B_j^\dagger B_j+\frac{\hbar}{2}\sum_{lpqs=1}^L \mu_{lpqs}B^\dagger_l B^\dagger_p B_q B_s;
\end{equation}
with $H_{D,B}$ the interaction between dressed cavity and dressed qubits:
\begin{equation}\label{H_DB}
H_{D,B}=2\hbar \left[|\bar{A}|^2+(\bar{A}D^\dagger+\bar{A}^\star D)+D^\dagger D\right]\sum_{lp} \eta_{lp} B^\dagger_l B_p.
\end{equation}
Finally, we have $H_{B,\rm{drive}}$, the driving term on the dressed qubit operators:
\begin{equation}\label{H_B_drive}
H_{B,\rm{drive}}=\hbar \epsilon(t)\sum_{l=1}^L M_{0 l} (B^\dagger_{l}+B_{l}).
\end{equation}
The final Hamiltonian is the sum of all of these terms:
\begin{equation} \label{Heff}
H^{\rm eff}=H_D +H_B+H_{D,B}+H_{B,\rm{drive}}.
\end{equation}
\subparagraph{Cooling} The term $H_{D,B}$ is the one we are most interested in.
Depending on the detuning of the drive, this term can act either as a cooling term or as a term that induces a state-dependent shift of the cavity resonance and therefore allows one to detect the state of the system from a homodyne measurement. It is important to notice that this term preserves the U(1) invariance of the Hamiltonian $H_B$ and therefore conserves the total qubit excitation number $N$, thus allowing for cooling within a given excitation-number manifold.
Let us focus on the cooling process first: we assume the loss rate of photons from the cavity to be high with respect to the dynamics involving
$H_{D,B}$. Therefore, we neglect the terms $D^\dagger D$ and $\bar{A}^\star D$
and we call the remaining operator the cooling term:
\begin{equation}\label{Vcool}
V_{\rm cool}=2 \hbar \bar{A}(t) D^\dagger \sum_{lp} \eta_{lp} B^\dagger_l B_p=2 \hbar \bar{A}(t) D^\dagger \mathcal{O}_B.
\end{equation}
Within our approximation, the dissipation makes $V_{\rm cool}^\dagger$ ineffective.
In Eq.~(\ref{Vcool}) we have defined the operator:
\begin{equation}\label{Ob_ope}
\mathcal{O}_B=\sum_{lp} \eta_{lp} B^\dagger_l B_p.
\end{equation}
If the pump is red-detuned from the cavity, then
the cooling operator $V_{\rm cool}$ scatters one qubit excitation to a lower energy state by creating a photon with higher frequency than the incoming one (strictly speaking, it scatters a pump photon up to the cavity frequency). Therefore the total number of qubit excitations is conserved, but some excess energy has been transferred from the qubit array to a cavity photon.
To estimate the rate of cooling between two states we use Fermi's golden rule:
\begin{equation}\label{FermiGolden}
\Gamma_{\rm cool}=\frac{2 \pi}{\hbar} \sum_f |\langle \Psi_f | V_{\rm cool} | \Psi_i\rangle|^2 \delta(E_i-E_f).
\end{equation}
In the Hilbert space of the dressed excitations $|\rm qubits \rangle \otimes |\rm cavity \rangle$, the initial state is $| \Psi_i\rangle=|\psi_i, 0\rangle$. We consider the qubits initially in an excited state $\psi_i$ with energy $E_i$ and no photons in the cavity. The final state instead is $| \Psi_f\rangle=|\psi_f, 1\rangle$, where the qubits are in a lower energy state $\psi_f$ with energy $E_f$, and the excess energy $E_i-E_f$ is carried away by a new photon. Therefore we have:
\begin{eqnarray}
\Gamma_{i\to f}& =& \frac{2 \pi}{\hbar} (2 \hbar \bar{A}(t))^2 \sum_q \langle \psi_i, 0 | \mathcal{O}_B^\dagger D | \psi_f, 1\rangle \langle \psi_f, 1 | D^\dagger \mathcal{O}_B | \psi_i, 0\rangle \delta(E_i-E_f-\epsilon_q)\nonumber\\
& = & (2 \pi \hbar) (2 \bar{A}(t))^2 \langle \psi_i| \mathcal{O}_B^\dagger | \psi_f\rangle \langle \psi_f| \mathcal{O}_B | \psi_i\rangle \sum_q \langle 0| D | 1\rangle \langle 1| D^\dagger | 0\rangle \delta(E_i-E_f-\epsilon_q)\nonumber\\
& = & (2 \bar{A}(t))^2|M_{if}|^2 \int_{- \infty}^{+\infty} dt \sum_q \langle 0| D | 1\rangle \langle 1| D^\dagger | 0\rangle e^{\frac{i t}{\hbar}(E_i-E_f-\epsilon_q)}\nonumber\\
& = & (2 \bar{A}(t))^2|M_{if}|^2 \int_{- \infty}^{+\infty} dt \; e^{i t (\omega_i-\omega_f)}
\sum_q \langle 0| D(t) | 1\rangle \langle 1| D^\dagger | 0\rangle \nonumber\\
& = & (2 \bar{A}(t))^2 |M_{if}|^2 \int_{- \infty}^{+\infty} dt \; e^{i t (\omega_i-\omega_f)}
\langle 0| D(t) D^\dagger | 0\rangle \nonumber\\
& = & (2 \bar{A}(t))^2 |M_{if}|^2 S_{DD}(\omega_i-\omega_f),
\label{RateCooling}
\end{eqnarray}
where $M_{if}=\langle \psi_f| \mathcal{O}_B | \psi_i\rangle$ and
\begin{equation}
S_{DD}(\omega)=\frac{\kappa}{(\omega-\Delta_c)^2+(\kappa/2)^2}
\end{equation}
is the spectral density of the cavity field fluctuations. The modulus squared of the classical drive amplitude $|\bar{A}(t)|^2$ is equal to the average photon number in the cavity: $|\bar{A}(t)|^2=\bar{n}$, so we see that the cooling rate is linear in the photon number:
\begin{equation}\label{RateCooling_short}
\Gamma_{i\to f}= 4 \bar{n}|M_{if}|^2 \frac{\kappa}{(\omega_i-\omega_f-\Delta_c)^2+(\kappa/2)^2}.
\end{equation}
In the stationary case (Eq.~(\ref{stationary_case})) $\bar{A}(t)\sim \epsilon(t)$, and therefore the drive power is directly proportional to the photon number $P_{\rm in}\propto \bar{n}$, giving the result contained in the main text (Eq.(4)) that: $\Gamma_{\rm cool} \propto P_{\rm in}$.
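The rate formula in Eq.~(\ref{RateCooling_short}) is easy to explore numerically; the sketch below (with invented values for $\kappa$, $M_{if}$, and $\omega_i-\omega_f$) evaluates the Lorentzian rate and its linear scaling with $\bar{n}$:

```python
import numpy as np

# Cooling rate of Eq. (RateCooling_short); kappa, the matrix element, and
# the transition frequency below are illustrative assumptions.
kappa = 2 * np.pi * 1.0                     # cavity linewidth
M_if = 0.02                                 # |<psi_f| O_B |psi_i>|
w_if = 2 * np.pi * 30.0                     # transition frequency omega_i - omega_f

def gamma_cool(nbar, Delta_c):
    """Gamma_{i->f} = 4 nbar |M_if|^2 kappa / ((w_if - Delta_c)^2 + (kappa/2)^2)."""
    return 4 * nbar * abs(M_if) ** 2 * kappa / ((w_if - Delta_c) ** 2 + (kappa / 2) ** 2)

# On resonance (Delta_c = w_if) the rate is maximal and scales linearly in
# nbar, i.e. linearly in the drive power P_in.
for nbar in (1, 2, 4):
    print(nbar, gamma_cool(nbar, w_if))
```

Detuning the drive away from the transition by a few linewidths suppresses the rate, which is the selectivity exploited in the cooling protocol.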
The transition rates just derived are perturbative, since the derivation relied on Fermi's Golden rule. Thus they are valid only in the limit that the rate of transitions is small with respect to the decay rate of the cavity:
\begin{equation}
\Gamma_{\rm cool} < \kappa.
\end{equation}
Beyond this regime the perturbative description of the cooling process is not appropriate anymore because
the effect of the $V_{\rm cool}^\dagger$ term will no longer be negligible, and will induce other processes besides the cooling \cite{Murch_PRL}.
\subparagraph{Stark shift}
For low intracavity photon number, the frequencies $\omega_i$ and $\omega_f$ used above in Eqs.~(\ref{RateCooling}) and~(\ref{RateCooling_short}) are simply the bare eigenvalues of the Hamiltonian in Eq.~(\ref{HB}) that we diagonalized to find the eigenmodes. For higher photon number, the first term $2\hbar \,|\bar{A}|^2\sum_{lp} \eta_{lp} B^\dagger_l B_p$ of the Hamiltonian $H_{D,B}$ (Eq.~(\ref{H_DB})) starts to be significant; this term contributes a
Stark shift to the energies of the eigenstates. We consider this Stark shift perturbatively and define a photon-number dependent frequency as:
\begin{equation}
\omega_i(\bar{n})=\omega_i^0+2 \bar{n}\langle \psi_i | \mathcal{O}_B|\psi_i\rangle.
\end{equation}
For the low photon number used in the experiment (at most five photons at steady state in the cavity), this perturbative correction is a good approximation to the exact solution.
\subparagraph{Measurement}
By performing a homodyne measurement, we can measure the shift of the cavity frequency depending on the state of the array:
\begin{equation}\label{cavity_pull}
\left[ \lambda_0 +2 \sum_{lp} \eta_{lp} B^\dagger_l B_p\right] D^\dagger D.
\end{equation}
Therefore we introduce the operator $\chi$ describing the cavity pull:
\begin{equation}\label{chi_shift_op}
\chi= 2\sum_{lp} \eta_{lp} B^\dagger_l B_p=2 \mathcal{O}_B.
\end{equation}
The expectation value of this operator on a generic state of the array $S$ constitutes the observable that identifies that state:
\begin{equation}\label{chi_shift}
\chi_S= 2\langle \mathcal{O}_B\rangle_S =2\langle \sum_{lp} \eta_{lp} B^\dagger_l B_p \rangle_S.
\end{equation}
We point out an important difference in the behavior of the operator $\mathcal{O}_B$ during the cooling process versus during the measurement.
During the cooling process the pump drive is centered at a red detuning $\Delta_c=\omega_i-\omega_f$ given by the energy difference
between the two states, initial and final, constituting the cooling transition.
The operator $\mathcal{O}_B$ then will induce transitions between states whose energy difference lies within a bandwidth $\sim\kappa$ centered on $\Delta_c$.
During the readout, instead, the drive is at the cavity frequency, i.e.\ at zero detuning, so to a good approximation we can neglect terms
rotating faster than the linewidth $\kappa$ of the cavity. For small lattice sizes and large coupling $J$, as is the case in our experiment with $L=3$, the excitation energies are large compared to $\kappa$; therefore the operator $\chi$ will not induce transitions between different states during the readout, and the measurement can then be considered quantum nondemolition (QND).
\subsection{Long array limit}
For long arrays, $L\gg 1$, and small hopping strength $J$, the energy levels will become closely spaced and therefore eventually the measurement will not be QND anymore. Once there exist pairs of states with energy difference $\Delta E$ less than $\sim\kappa$, the measurement drive will (generically) cause transitions between them. This same physics will adversely affect the cooling process that we have discussed.
The cooling process implements the removal of an amount of energy $\Delta E=\hbar(\omega_i -\omega_f)$ ($\omega_i> \omega_f$) that is well-defined up to the cavity linewidth $\kappa$. $\Delta E\gg \kappa$ is easily achievable in small arrays. In the limit of long arrays, eventually $\Delta E \sim \kappa$ or smaller and both Stokes and anti-Stokes processes become possible and the (non-equilibrium) quantum bath no longer has zero effective temperature \cite{SteveRMP}.
Consider the simplest case in which all of the qubits have a uniform frequency and are coupled only to nearest neighbors. Neglecting the anharmonic terms initially, i.e.\ imagining a chain of coupled linear harmonic oscillators, gives the simple Hamiltonian
\begin{equation}\label{H_tightb}
H_{\rm array}=\hbar \omega_0 \sum_{j=1}^L b_j^\dagger b_j+\hbar J\sum_{j=1}^{L-1}(b_{j+1}^\dagger b_j+b_j^\dagger b_{j+1}).
\end{equation}
This chain of oscillators exhibits normal modes with excitation amplitudes varying sinusoidally along the array. Or, in the language of excitations as bosonic particles, this is a tight-binding model of free bosons on a lattice, which is diagonalized by introducing modes of the form:
\begin{equation}
B_n=\sqrt{\frac{2}{L+1}} \sum_{j=1}^L\sin(k_n j) b_j, \qquad k_n=\frac{\pi}{L+1} n, \quad n=1,2,\dots, L.
\end{equation}
In this form the Hamiltonian becomes
\begin{equation}\label{Hcos}
H_{\rm array}=\hbar \sum_{n=1}^L [\omega_0+2J\cos(k_n)] B^\dagger_n B_n.
\end{equation}
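The sine-mode transformation can be checked numerically; a minimal sketch (with illustrative values of $L$, $\omega_0$, and $J$) verifying that the modes $B_n$ diagonalize the hopping Hamiltonian with the cosine band of Eq.~(\ref{Hcos}):

```python
import numpy as np

# Tight-binding chain: check that the sine modes diagonalize the
# tridiagonal hopping matrix with eigenvalues omega_0 + 2 J cos(k_n).
# The parameter values are illustrative.
L, w0, J = 8, 5.0, 0.1
H = w0 * np.eye(L) + J * (np.eye(L, k=1) + np.eye(L, k=-1))

n = np.arange(1, L + 1)
k = np.pi * n / (L + 1)
# S[n-1, j-1] = sqrt(2/(L+1)) sin(k_n j); rows are the modes B_n
S = np.sqrt(2 / (L + 1)) * np.sin(np.outer(k, np.arange(1, L + 1)))

D = S @ H @ S.T                    # should be diagonal
band = w0 + 2 * J * np.cos(k)
print(np.allclose(np.diag(D), band))
```

The rows of $S$ are orthonormal, so the same matrix also inverts the transformation, $b_j=\sum_n S_{nj}B_n$.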
Note that unlike in the previous derivation we did not include the coupling to the cavity when diagonalizing the quadratic Hamiltonian. Including these terms we now obtain
\begin{eqnarray}\label{HCavQub}
H &=& \hbar \sum_{n=1}^L [\omega_0+2J\cos(k_n)] B^\dagger_n B_n + \hbar\omega_c a^\dagger a + \hbar \sum_{j=1}^L g_j\left(b_j a^\dagger + b_j^\dagger a\right)\\
&=&\hbar \sum_{n=1}^L [\omega_0+2J\cos(k_n)] B^\dagger_n B_n + \hbar\omega_c a^\dagger a + \hbar \sum_{m=1}^L \xi_m\left(B_m a^\dagger + B_m^\dagger a\right)
\end{eqnarray}
where
\begin{equation}
\xi_m= \sqrt{\frac{2}{L+1}}\sum_{j=1}^L g_j\sin(k_mj).
\end{equation}
The quartic interaction term becomes in this basis
\begin{eqnarray}
H_{\mathrm int} &=& \frac{\hbar}{2}\sum_{j=1}^L \alpha_j b^\dagger_j b^\dagger_j b_j b_j\\
&=& \hbar\sum_{m,n,p,q=1}^L\Xi_{mnpq}B^\dagger_mB^\dagger_nB_pB_q,
\end{eqnarray}
where
\begin{equation}
\Xi_{mnpq}=\frac{2}{(L+1)^2} \sum_{j =1}^L \alpha_j\sin(k_mj)\sin(k_nj)\sin(k_pj)\sin(k_qj).
\end{equation}
Proceeding as before we can diagonalize the quadratic part of the Hamiltonian in the presence of a drive on the cavity and extract the cooling operator defined in Eq.~(\ref{Vcool}) and Eq.~(\ref{Ob_ope}) and hence determine the cooling matrix elements $M_{if}$. We will not present those details here but rather simply note certain qualitative features of the result that can be seen more easily in the present version of the derivation where we have delayed inclusion of the cavity coupling until after diagonalization of the array Hamiltonian.
If $g_j$ varies slowly and smoothly with position then $\xi_m$ will be large only for small $m$. It follows that the Raman scattering process does not permit large changes of (quasi-)momentum and the matrix $\eta_{lp}$ in Eq.~(\ref{Ob_ope}) will be non-zero only near the diagonal. If only small momentum changes are permitted then only small energy changes are permitted and the cooling will be weak. If, on the other hand, the set of $\{g_j\}$ has low symmetry, then the $\{\xi_m\}$, and hence the transition matrix elements $M_{if}$ derived from them, are not constrained by symmetry and are generically non-zero.
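This concentration of the $\xi_m$ at small $m$ for smooth coupling profiles is easy to see numerically. A sketch, using a hypothetical Gaussian profile for $g_j$ (any slowly varying profile behaves similarly):

```python
import numpy as np

L = 20
j = np.arange(1, L + 1)
k = np.pi * np.arange(1, L + 1) / (L + 1)            # k_m = pi m / (L+1)
S = np.sqrt(2.0 / (L + 1)) * np.sin(np.outer(k, j))  # sine-transform matrix

# Hypothetical smooth coupling profile g_j, centered on the array
g = np.exp(-((j - (L + 1) / 2) / (L / 4)) ** 2)

xi = S @ g                                # mode couplings xi_m
weight = xi**2 / np.sum(xi**2)
low_fraction = weight[: L // 2].sum()     # weight in the low-momentum half
# low_fraction is close to 1: only small momentum transfers are allowed
```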
We now discuss a protocol to cool towards the lower energy eigenstates of a general $L$-site array. We illustrate the basic idea using the single-excitation manifold as an example.
Based on the band structure of the 1D tight-binding model, we know that the single-excitation manifold energies are spread around the bare qubit frequency $\omega_0$ with a total width of $ 4 J$.
The levels will be more densely spaced at the top and bottom of the band (see Fig.~\ref{Teff}) due to the van Hove singularities in the extremes of the tight-binding band structure.
For simplicity we assume that all cooling matrix elements are generically non-zero. Suppose we start from a high-energy state $E_i$ and set the drive detuning to satisfy $\kappa<\Delta_c< 4J$, so that transitions within the band can be cooled. As illustrated in Fig.~\ref{Teff}, a cascade of several cooling steps brings the initial energy down to a final value $E^{(f)}=E_i \bmod \Delta_c<\Delta_c$, at which point no further cooling steps can occur. To lower the energy further, we can slowly decrease the detuning $\Delta_c$ towards $\sim\kappa$. We will be able to cool to the lowest energy state if $J(\frac{\pi}{L+1})^2>\kappa$. Otherwise this process will cool the $N$-particle subspace down to an effective temperature $T_{\mathrm{eff}} \sim \kappa$ \cite{SteveRMP}.
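The cascade logic can be sketched as follows (a schematic bookkeeping of the removed energy with illustrative numbers, not a simulation of the dissipative dynamics):

```python
def cascade(E_initial, delta_c):
    """Apply cooling steps, each removing delta_c of energy, until no
    transition of size delta_c remains; returns E_initial mod delta_c."""
    E = E_initial
    while E >= delta_c:
        E -= delta_c
    return E

# Band of width 4J with J = 1 and drive detuning delta_c = 0.75:
E_final = cascade(4.0, 0.75)   # the cascade stalls at 4.0 mod 0.75 = 0.25
```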
It may occur that the above procedure fails because some particular matrix element in the cascade vanishes.
This can be overcome by scanning the cooling pump detuning $\Delta_c$ down and up multiple times to find a transition that allows escape from the trapped state.
The cooling rate calculations used above for illustrative purposes are straightforward to carry out within the manifold of single excitations. Higher excitation manifolds will exhibit more complex and interesting dynamics because as energy is removed from the system by Raman processes, boson-boson collisions will further relax the particle distribution function. The dynamics will depend importantly on whether or not the underlying model is integrable. These considerations are well beyond the scope of the present experimental state-of-the-art, but could well become important in future realizations of quantum simulators.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=.30\textwidth]{Teff.pdf}
\caption{Illustration of the temperature limit for the single excitation subspace of a degenerate $L$-site chain. To be able to cool to the ground state we require $J(\frac{\pi}{L+1})^2>\kappa$.}
\label{Teff}
\end{center}
\end{figure}
\subsection{Three-qubit array used in the experiment}
We now specialize the above results to the case of $L=3$ sites.
The Hamiltonian $H_0$ in Eq.~(\ref{H0}) is:
\begin{equation}
H_0=\hbar \sum_{j=1}^3 \Delta_{j} b_j^\dagger b_j+\hbar J\sum_{j=1}^{2}(b_{j+1}^\dagger b_j+b_j^\dagger b_{j+1})+\hbar J_{\rm 13} (b_{1}^\dagger b_3+b_3^\dagger b_1)+
\hbar\Delta_c a^\dagger a+\hbar \sum_{j=1}^3 g_j(b_j a^\dagger +b_j^\dagger a),
\end{equation}
where we have also included the next-nearest-neighbor coupling $J_{\rm 13}$ between the 1st and 3rd sites.
This Hamiltonian has the matrix form:
\begin{equation}
\mathcal{H}_0=\hbar \left(\begin{array}{cccc}\Delta_c & g_1 & g_2 &g_3\\ g_1 & \Delta_1 & J & J_{13}\\ g_2 & J & \Delta_2 & J\\ g_3 & J_{13} & J & \Delta_3 \end{array} \right)
\end{equation}
acting on a four-component vector $\mathbf{v}=(a, b_1, b_2, b_3)$, so that $H_0=\mathbf{v}^\dagger \mathcal{H}_0 \mathbf{v}$.
The matrix $N$ defined in Eqs.~(\ref{AofN}) and (\ref{BofN}) for the experimental parameters given
in Section \ref{Device_param} is:
\begin{equation}\label{N_ex}
N=\left(\begin{array}{cccc}
-0.986 & -0.073 & -0.126 & -0.077 \\
-0.163 & 0.426 & 0.690 & 0.561\\
0.014 & -0.680 & -0.158 & 0.716\\
-0.013 & -0.592 & 0.695 & -0.408
\end{array} \right).
\end{equation}
The rows of this matrix are the coefficients of the new dressed operators $(A, B_1, B_2, B_3)$ expressed in the old basis $(a, b_1, b_2, b_3)$. As expected, $A\approx a$ remains an operator that is mostly cavity-like; similarly, the $B_j$ operators are mostly linear combinations of qubit operators.
This is ensured by the condition $g_j/(\omega_j-\omega_c) \ll 1$ for each $j$. In the experimental setup we have: $g_{1,3}/(\omega_{1,3}-\omega_c)\sim 0.06$ and $g_{2}/(\omega_2-\omega_c) \sim 0.12$.
If the couplings $g_1$ and $g_3$ were identical, the model would be symmetric with respect to the exchange of $b_1 \leftrightarrow b_3$, and the operators $A, B_1, B_3$ would be even under such transformation, while $B_2$ would be odd,
transforming as $B_2 \leftrightarrow -B_2$.
In the experimental case, $g_1$ and $g_3$ are close but not identical, and similarly the qubit frequencies $\omega_1$ and $\omega_3$ are slightly different. This symmetry is thus broken in the experiment, but only weakly. We see this, for instance, in the matrix $N$, which is almost, but not completely, invariant under the exchange $b_1 \leftrightarrow b_3$.
The matrix $M=N^{-1}$, which according to Eqs.~(\ref{aofN}) and (\ref{bofN}) has as elements the decomposition of the original operators $(a, b_1, b_2, b_3)$ in the new basis $(A, B_1, B_2, B_3)$, is:
\begin{equation} \label{M_ex}
M=\left(\begin{array}{cccc}
-0.986 & -0.164 & 0.014 & -0.013\\
-0.073 & 0.426 & -0.680 & -0.592\\
-0.126 & 0.690 & -0.158 & 0.694\\
-0.077 & 0.561 & 0.716 & -0.410\\
\end{array} \right).
\end{equation}
Inspecting the elements of this matrix confirms that for this range of parameters our assumptions $M_{00}\approx 1$ and $M_{0l}\ll 1$ are valid.
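Since the normal-mode transformation diagonalizing the quadratic Hamiltonian is orthogonal, $M=N^{-1}$ should coincide with $N^T$ up to the quoted rounding. This can be checked directly from the printed matrices (an illustrative Python sketch):

```python
import numpy as np

# Matrices N and M as quoted in Eqs. (N_ex) and (M_ex)
N = np.array([[-0.986, -0.073, -0.126, -0.077],
              [-0.163,  0.426,  0.690,  0.561],
              [ 0.014, -0.680, -0.158,  0.716],
              [-0.013, -0.592,  0.695, -0.408]])
M = np.array([[-0.986, -0.164,  0.014, -0.013],
              [-0.073,  0.426, -0.680, -0.592],
              [-0.126,  0.690, -0.158,  0.694],
              [-0.077,  0.561,  0.716, -0.410]])

# M = N^{-1}, and for an orthogonal transformation N^{-1} = N^T
assert np.allclose(M, N.T, atol=0.01)
assert np.allclose(N @ M, np.eye(4), atol=0.02)
```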
The tensor $\eta$ in Eq.~(\ref{eta}) is:
\begin{equation}
\eta=2 \pi \times \left(\begin{array}{ccc}
-2.422 & 0.227 & -1.241\\
0.227 & -1.279 & 0.338 \\
-1.241 & 0.338 & -2.445\\
\end{array} \right) \rm MHz.
\end{equation}
The fact that the off-diagonal elements of the above matrix involving the operator $B_2$ ($\eta_{12}$ and $\eta_{23}$) are relatively small is a manifestation of the near-symmetry of our system described above. The operator $V_{\rm cool}$ in Eq.~(\ref{Vcool}) cannot connect states of different symmetry (in the perfectly symmetric case $B_1$ and $B_3$ would be even while $B_2$ would be odd, and the elements $\eta_{12}=\eta_{21}$ and $\eta_{23}=\eta_{32}$ would vanish).
\subparagraph{Single-excitation subspace} Let us consider first the eigenstates with only a single excitation in the array. For these states the nonlinear term in Hamiltonian Eq.~(\ref{HB}) vanishes and we are left with a simple diagonal Hamiltonian:
\begin{equation}\label{HB_3_one}
H_B=\hbar \sum_{j=1}^L \lambda_j B_j^\dagger B_j.
\end{equation}
A basis $\mathcal{B}$ for the Hilbert space of the one-excitation manifold
is the following:
\begin{eqnarray}
{\bf |1\rangle}& = & | 1, 0, 0\rangle= {B_1^\dagger} |G\rangle \label{basis11},\\
{\bf |2\rangle} &= &| 0,1, 0\rangle= {B_2^\dagger} |G\rangle\label{basis12},\\
{\bf |3\rangle} &= &| 0, 0, 1\rangle= {B_3^\dagger} |G\rangle\label{basis13}.
\end{eqnarray}
We have introduced here the notation $|m,n,l\rangle \propto (B_1^\dagger)^m (B_2^\dagger)^n (B_3^\dagger)^l |G\rangle$ as a shorthand way of representing a normalized state with $m$ excitations in the lowest eigenmode, $n$ in the second mode and $l$ in the third mode. Recall that the three modes do not correspond to the three lattice sites; each operator $B_j$ has in general a nonzero component on each site, as we found in Eq.~(\ref{BofN}).
In this one-excitation basis, the Hamiltonian in Eq.~(\ref{HB_3_one}) is diagonal, and therefore the one-excitation eigenstates $\{|E_i\rangle,\; i=1,2,3\}$ coincide with this basis: $|E_1\rangle={\bf |1\rangle}$, $|E_2\rangle={\bf |2\rangle}$, $|E_3\rangle={\bf |3\rangle}$, provided that we have ordered them from lowest to highest energy, $\lambda_1<\lambda_2<\lambda_3$.
Expressed in this basis, the operator $\mathcal{O}_B$ in Eq.~(\ref{Ob_ope}) has the matrix form:
\begin{equation}
\mathcal{O}_B=\left(\begin{array}{ccc}
\eta_{11} & \eta_{21} & \eta_{31} \\
\eta_{12} & \eta_{22} & \eta_{32} \\
\eta_{13} & \eta_{23} & \eta_{33}
\end{array}\right).
\end{equation}
The cooling rate in this manifold, for example, from $\ket{E_3}$ to $\ket{E_1}$ is easily evaluated from Eq.~(\ref{RateCooling}) as:
\begin{equation}\label{cool_rate_E}
\Gamma_{E_3\to E_1}=(2 \bar{A}(t))^2 |\eta_{13}|^2 S_{DD}(\lambda_3-\lambda_1).
\end{equation}
The cavity pull, or $\chi$-shift, corresponding to each state is calculated via:
\begin{equation}\label{chi_E}
\frac{\chi_{ E_i}}{\kappa}=\frac{2 \langle \mathcal{O}_B\rangle_{E_i}}{\kappa}=\frac{2 \eta_{ii}}{\kappa}.
\end{equation}
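For the experimental $\eta$ matrix and $\kappa/2\pi=10$ MHz (see Section \ref{Device_param}), Eq.~(\ref{chi_E}) gives the following per-state pulls (a simple numerical evaluation; the factors of $2\pi$ cancel in the ratio):

```python
import numpy as np

eta_diag = np.array([-2.422, -1.279, -2.445])  # eta_11, eta_22, eta_33, in 2*pi MHz
kappa = 10.0                                   # kappa/(2*pi) in MHz

# Per-state cavity pull chi_{E_i}/kappa = 2 eta_ii / kappa, Eq. (chi_E)
chi_over_kappa = 2 * eta_diag / kappa
# roughly -0.48, -0.26 and -0.49 for E_1, E_2, E_3
```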
\subparagraph{Two-excitation subspace} Let us consider the manifold consisting of states which have two excitations in the array. We define a basis for this manifold similar to that used for the single-excitation subspace, noting that because of the nonlinear terms, this basis is not an eigenbasis for the Hamiltonian:
\begin{eqnarray}
{\bf |1\rangle}& = & | 2, 0, 0\rangle= \frac{1}{\sqrt{2}} {B_1^\dagger}^2 |G\rangle \label{basis1},\\
{\bf |2\rangle} &= &| 0,2, 0\rangle= \frac{1}{\sqrt{2}} {B_2^\dagger}^2 |G\rangle\label{basis2},\\
{\bf |3\rangle} &= &| 0, 0, 2\rangle= \frac{1}{\sqrt{2}} {B_3^\dagger}^2 |G\rangle\label{basis3},\\
{\bf |4\rangle} &= & | 1, 1, 0\rangle= B_1^\dagger B_2^\dagger |G\rangle\label{basis4},\\
{\bf |5\rangle} &= & | 0, 1, 1\rangle= B_2^\dagger B_3^\dagger |G\rangle\label{basis5},\\
{\bf |6\rangle} &= & | 1, 0, 1\rangle= B_1^\dagger B_3^\dagger |G\rangle\label{basis6}.
\end{eqnarray}
The Hamiltonian in Eq.~(\ref{HB})
\begin{equation}
H_B=\hbar \sum_{j=1}^3 \lambda_j B_j^\dagger B_j+\frac{\hbar}{2}\sum_{l,p,q,s=1}^3 \mu_{lpqs}B^\dagger_l B^\dagger_p B_q B_s
\end{equation}
expressed in the basis $\mathcal{B}$ has the matrix form:
\begin{equation}\label{matrixM}
\mathcal{H}_B=\left(\begin{array}{cccccc} 2 \lambda_1+ \mu_{1111} & \mu_{2211} & \mu_{3311}& \sqrt{2} \mu_{1211}& \sqrt{2} \mu_{2311} & \sqrt{2} \mu_{1311} \\ \mu_{2211}& 2 \lambda_2+ \mu_{2222} & \mu_{3322}& \sqrt{2} \mu_{1222}& \sqrt{2} \mu_{2322} & \sqrt{2} \mu_{1322} \\ \mu_{3311} & \mu_{3322}& 2 \lambda_3+ \mu_{3333}&\sqrt{2} \mu_{1233}& \sqrt{2} \mu_{2333} & \sqrt{2} \mu_{1333} \\
\sqrt{2} \mu_{1112} & \sqrt{2} \mu_{2212} &\sqrt{2} \mu_{3312} & \lambda_1+\lambda_2+2 \mu_{1212} & 2 \mu_{2312} & 2 \mu_{1312} \\
\sqrt{2} \mu_{1123} & \sqrt{2} \mu_{2223} &\sqrt{2} \mu_{3323} & 2 \mu_{1223} &\lambda_2+\lambda_3+2 \mu_{2323} & 2 \mu_{1323} \\
\sqrt{2} \mu_{1113} & \sqrt{2} \mu_{2213} &\sqrt{2} \mu_{3313} & 2 \mu_{1213} & 2 \mu_{2313} &\lambda_1+\lambda_3+2 \mu_{1313} \\
\end{array} \right)
\end{equation}
and the operator $\mathcal{O}_B$ in Eq.~(\ref{Ob_ope}) takes the form:
\begin{equation}
\mathcal{O}_B=\left(\begin{array}{cccccc} 2 \eta_{11} & 0 & 0& \sqrt{2} \eta_{12} & 0 & \sqrt{2} \eta_{13}\\
0 & 2 \eta_{22} & 0 & \sqrt{2} \eta_{12} & \sqrt{2} \eta_{23} & 0 \\
0 & 0 & 2 \eta_{33} & 0 & \sqrt{2} \eta_{23} & \sqrt{2} \eta_{13} \\
\sqrt{2} \eta_{12} & \sqrt{2} \eta_{12} & 0 & \eta_{11}+\eta_{22} & \eta_{13} & \eta_{23} \\
0 & \sqrt{2} \eta_{23} & \sqrt{2}\eta_{23} & \eta_{13} & \eta_{22}+\eta_{33} & \eta_{12} \\
\sqrt{2} \eta_{13} & 0 & \sqrt{2}\eta_{13} & \eta_{23} & \eta_{12} & \eta_{11}+\eta_{33}
\end{array} \right).
\end{equation}
To find the eigenstates of the two-excitation manifold we need to diagonalize the matrix:
\begin{equation}
\mathcal{\tilde{H}}_{B}=\mathcal{H}_B+ 2 \hbar |\bar{A}|^2 \mathcal{O}_B,
\end{equation}
which represents $H_B$ plus the photon-dependent Stark shift $2 \hbar |\bar{A}|^2 \sum_{lp} \eta_{lp} B^\dagger_l B_p$.
The six eigenstates of the matrix $\mathcal{\tilde{H}}_{B}$ correspond to the six $F$-states introduced in the main text, with corresponding
eigenfrequencies $\epsilon_j$, $j=1, 2, \dots, 6$.
The cooling rate from $\ket{F_i}$ to $\ket{F_j}$ is evaluated from Eq.~(\ref{RateCooling}) as:
\begin{equation}\label{cool_rate_F}
\Gamma_{F_i\to F_j}=(2 \bar{A}(t))^2 |\langle F_i| \mathcal{O}_B | F_j\rangle|^2 S_{DD}(\epsilon_i-\epsilon_j).
\end{equation}
The cavity pull in a generic state $\ket{F_j}$ is:
\begin{eqnarray}\label{chi_F}
\frac{\chi_{ F_j}}{\kappa}=\frac{2 \langle F_j |\mathcal{O}_B|{F_j}\rangle}{\kappa}.
\end{eqnarray}
\newpage
\section{Summary of theoretical results}
\subsection{Cooling rates}
As shown in Eqs.~(\ref{RateCooling}), (\ref{cool_rate_E}), and (\ref{cool_rate_F}), we can theoretically predict the cooling rate for a specific transition as a function of the average photon number $\bar{n}$ and of the detuning of the cooling drive from the cavity resonance:
\begin{equation}
\Gamma_{i\to f}= 4 \bar{n} |M_{if}|^2 \frac{\kappa}{(\omega_i-\omega_f-\Delta_c)^2+(\kappa/2)^2}.
\end{equation}
As noted above, the frequencies $\omega_i$ and $\omega_f$ in the above equation depend on the photon number via the Stark shift, an effect that was observed experimentally (see Fig. 3b of the main text). At resonance, the cooling rate per photon number is then simply proportional to the square of the matrix element of the cooling operator between the two states, $M_{if}=\langle \psi_f| \mathcal{O}_B | \psi_i\rangle$:
\begin{equation}\label{cool_per_photon}
\frac{\Gamma^{\rm res}_{i\to f}}{\bar{n}}= \frac{16}{\kappa} |M_{if}|^2.
\end{equation}
In Table \ref{table_cool_rates} we show the cooling rates per photon number for different transitions in the single- and two-excitation subspaces, comparing the theoretical predictions with the rates measured in the experiment. The theoretical rates are within a factor of two of the measured ones, and are consistently higher. This suggests that the single-mode cavity approximation, or other simplifications in the theoretical model, neglects decay processes that weaken the effective cooling rate accessible experimentally.
\begin{center}
\begin{table}
\caption{Cooling rates $\Gamma_{i\rightarrow f}$ in MHz}
\begin{tabular}{c | c | c | c }
\hline
$\ket{\psi_i}$ & $\ket{\psi_f}$ & Experiment & Theory \\ \hline
$\ket{F_6}$ & $\ket{F_5}$ & 0 & 0 \\
 & $\ket{F_4}$ & 11.6 & 16.1 \\
 & $\ket{F_3}$ & 4.2 & 9.8 \\
 & $\ket{F_2}$ & 0 & 0.28 \\
 & $\ket{F_1}$ & 0 & 0 \\ \hline
$\ket{F_5}$ & $\ket{F_4}$ & 0.4 & 0.86 \\
 & $\ket{F_3}$ & 0.5 & 0.75 \\
 & $\ket{F_2}$ & 5.8 & 12.5 \\
 & $\ket{F_1}$ & 0 & 0.2 \\ \hline
$\ket{F_4}$ & $\ket{F_3}$ & NA & 10.6 \\
 & $\ket{F_2}$ & 3.1 & 4.2 \\
 & $\ket{F_1}$ & 8.4 & 10.2 \\ \hline
$\ket{F_3}$ & $\ket{F_2}$ & 0.6 & 0.66 \\
 & $\ket{F_1}$ & 11 & 20.6 \\ \hline
$\ket{F_2}$ & $\ket{F_1}$ & 2.6 & 10.3 \\ \hline
$\ket{E_3}$ & $\ket{E_2}$ & 0.54 & 0.52 \\ \hline
$\ket{E_3}$ & $\ket{E_1}$ & 13 & 15.5 \\ \hline
$\ket{E_2}$ & $\ket{E_1}$ & 0.54 & 1.15 \\ \hline
\end{tabular}
\label{table_cool_rates}
\end{table}
\end{center}
We now refer back to the simple tight-binding model of free excitations hopping on the lattice, analyzed above starting from Eq.~(\ref{H_tightb}); we expect it to apply well to the single-excitation manifold, where the nonlinear terms vanish. On a three-site lattice the only allowed Fourier modes are $k_1=\pi/4$, $k_2=\pi/2$ and $k_3=3\pi/4$; these correspond, respectively, to the momenta of the states $E_3$, $E_2$, $E_1$. Given that $J>0$, the state with the highest momentum has the lowest energy (here $E_1$, with momentum $k_3=3\pi/4$). When the couplings obey $g_1=g_3$, the system has parity symmetry and the eigenstates have definite spatial parity. Cooling can then be effective between states $E_3$ and $E_1$ because they have the same spatial parity. Conversely, cooling is highly suppressed from $E_3$ to $E_2$ or from $E_2$ to $E_1$, because those transitions require a change of parity. This is clearly confirmed by the cooling rates measured between the states of the $E$-manifold (see the lower part of Table \ref{table_cool_rates}): the rate $\Gamma_{E_3\rightarrow E_1}$ is more than twenty times larger than $\Gamma_{E_3\rightarrow E_2}$ and $\Gamma_{E_2\rightarrow E_1}$.
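The $E$-manifold theory entries of Table \ref{table_cool_rates} can be reproduced from Eq.~(\ref{cool_per_photon}) together with the $\eta$ matrix elements quoted earlier, assuming the tabulated values are resonant per-photon rates expressed in inverse microseconds:

```python
import math

two_pi = 2 * math.pi
kappa = two_pi * 10e6                    # cavity linewidth, rad/s
eta = {'E3->E2': two_pi * 0.227e6,       # |eta_12|, rad/s
       'E3->E1': two_pi * 1.241e6,       # |eta_13|
       'E2->E1': two_pi * 0.338e6}       # |eta_23|

# Resonant per-photon rate 16 |M_if|^2 / kappa, in inverse microseconds
rate = {t: 16 * m**2 / kappa / 1e6 for t, m in eta.items()}
# close to the tabulated theory values 0.52, 15.5 and 1.15
```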
\subsection{Calculation of $T_1$ Purcell-limited relaxation time}
We calculate the Purcell relaxation rate as the decay rate of a qubit-array state due to the action of the bare cavity operator $a$; this operator destroys a photon and induces transitions to lower states. In order to estimate the rate for such a process to occur, we compute the overlap between the $\ket{E_i}$ and $\ket{F_j}$ states and the bare cavity mode $a$.
From Eq.~(\ref{BofN}), we know the decomposition of each dressed qubit operator $B_j$ in terms of the bare operators:
\begin{equation}\label{BofN_bis}
B_j=N_{j0} a+\sum_{l=1}^L N_{j l} b_{l}.
\end{equation}
Since the single-particle states $\ket{E_j}$ are simply given by the dressed operator $B_j$ acting on the vacuum (with no excitations in either qubit or cavity), the overlap between the $\ket{E_j}$ state and the bare cavity mode is simply given by $N_{j0}$. Then, the Purcell-limited decay rate for this process is the square of this overlap times the cavity decay rate $\kappa$:
\begin{equation}
R_{E_j \to G}=|\langle G|a |E_j\rangle|^2 \kappa=N_{j0}^2 \kappa, \qquad T_1=\frac{1}{R_{E_j \to G}}.
\end{equation}
The results for the $|E_j\rangle \to |G\rangle$ decay times are shown in the top three rows of Table~\ref{table_T1_th}.
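These entries follow from the first column of the matrix $N$ in Eq.~(\ref{N_ex}) (the cavity amplitude of each dressed mode) together with $\kappa/2\pi=10$ MHz. A numerical sketch, keeping in mind that the rounded entries of $N$ reproduce the tabulated times only approximately:

```python
import math

kappa = 2 * math.pi * 10e6            # cavity linewidth, rad/s

# Cavity amplitudes N_{j0} of the dressed qubit-like modes
# (first column of N, qubit rows)
N_j0 = [-0.163, 0.014, -0.013]

# Purcell-limited lifetimes T_1 = 1/(N_{j0}^2 kappa), in microseconds
T1_us = [1e6 / (n**2 * kappa) for n in N_j0]
# the most cavity-like mode decays in ~0.6 us, while the nearly dark modes
# live ~80 and ~95 us, approximately matching the tabulated 0.6, 80 and 97 us
```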
The analysis for the decay rates of the $|F_j\rangle$ states is similar, with the decay rate from an initial two-excitation state to a single-excitation state given by
\begin{equation}
R_{F_j\to E_i} = |\langle E_i|a |F_j\rangle|^2 \kappa.
\end{equation}
To calculate these values we expand $\ket{F_j}$ in the basis $\mathcal{B}$ of states $\{ {\bf | n\rangle}\}_{n\in [1,6]}$ in Eqs.~(\ref{basis1})-(\ref{basis6}):
$|F_j\rangle=\sum_{n=1}^6 c^{F_j}_n {\bf |n\rangle}$. Then the decay rate to a specific one-excitation state $\ket{E_i}$ is
\begin{equation}
R_{F_j\to E_i}= \left|\langle E_i|a \sum_{n=1}^6 c^{F_j}_n {\bf |n\rangle} \right|^2 \kappa=\left|\sum_{n=1}^6 c^{F_j}_n \langle E_i |a {\bf |n\rangle} \right|^2 \kappa;
\end{equation}
and the decay rate for a direct decay to the ground state given by
\begin{equation}
R_{F_j\to G}= \left|\langle G |a^2 \sum_{n=1}^6 c^{F_j}_n {\bf |n\rangle} \right|^2 \kappa=\left|\sum_{n=1}^6 c^{F_j}_n \langle G |a^2 {\bf |n\rangle} \right|^2 \kappa.
\end{equation}
The total Purcell-limited decay rate is the sum of the rates given above, with a Purcell-limited $T_1$ time given by the inverse of that sum. The $T_1$ times found from the above analysis are shown in Table~\ref{table_T1_th}.
For comparison, the $T_1$ decay times measured in the experiment are shown in Table \ref{table_T1_exp_down} for the downward rates. Because the sample sits at finite temperature, spurious upward transitions were observed as well; the corresponding rates are shown in Table \ref{table_T1_exp_up}.
In most cases, the theoretical prediction for $T_1$ is of the same order of magnitude as the experimentally measured value. Nevertheless, there are some qualitative discrepancies. For instance, the theoretical $T_1$ times for the $F$-states are in general shorter than the measured ones. We attribute this to a limitation of the single-mode cavity model, which can be shown to predict a shorter lifetime than a model that takes into account higher modes of the 3D cavity. A more mundane discrepancy arises from the fact that, as $\ket{E_1}$ and $\ket{E_2}$ are almost dark states, their dominant decay channel is not Purcell decay via coupling to the cavity, but rather on-chip material losses.
\begin{center}
\begin{table}
\caption{ $T_1$ theoretical Purcell-limited decay time in $\mu$s}
\begin{tabular}{c |c| c |c|c|c|c }
\hline
state & $\omega/(2 \pi)$ GHz & $T_1$(tot) & $T_1(E_1)$ & $T_1(E_2)$& $T_1(E_3)$ & $T_1(G)$ \\
\hline
$E_1$ & 4.61164 & 97 & & & & 97\\
$E_2$& 4.85539 & 80 & & & & 80\\
$E_3$ & 5.0196 & 0.6 & & & & 0.6 \\
\hline
$F_1$ & 9.11862 & 20 & 32& 201 & 93 & 438\\
$F_2$ & 9.3201& 7.5 & 8.8 & 57 & 705 & $>$1ms \\
$F_3$& 9.48676 & 1.3 & 1.3 & 50 & 212 & $>$1ms \\
$F_4$ & 9.64465 & 1.2 & 1.3 & 69 & 30 & 182\\
$F_5$& 9.7987 & 0.6 & 49 & 0.6 & 159 & $>$1ms \\
$F_6$ & 9.97278 & 0.9 & 78 & $>$1ms & 1.17 & 5.7\\
\hline
\end{tabular}
\label{table_T1_th}
\end{table}
\end{center}
\begin{center}
\begin{table}
\caption{ $T_1$ experimental fitted \emph{downward} decay time in $\mu$s}
\begin{tabular}{c |c| c |c|c|c|c }
\hline
state & $\omega/(2 \pi)$ GHz & $T_1$(tot) & $T_1(E_1)$ & $T_1(E_2)$& $T_1(E_3)$ & $T_1(G)$ \\
\hline
$E_1$ & 4.6078 & 28.5 & & & & 28.5\\
$E_2$ & 4.7854 & 30.5 & & & & 30.5\\
$E_3$ & 5.06916 & 3.2 & & & & 3.2\\
\hline
$F_1$ & 9.1184 & 18.0 & 20.4 & 151 & & \\
$F_2$ & 9.2592& 15.1 & 30.3 & 30.1 & & \\
$F_3$ & 9.4230 & 8.8 & 13.5 & 25.4 & & \\
$F_4$ & 9.5618 & 4.6 & 5.3 & 34.6 & & \\
$F_5$ & 9.7788 & 3.1 & 100.7 & 3.3 & 70.0 & \\
$F_6$ & 10.0539 & 1.5 & 20.6 & 42.6 & 1.6 & \\
\hline
\end{tabular}
\label{table_T1_exp_down}
\end{table}
\end{center}
\begin{center}
\begin{table}
\caption{ $T_1$ experimental fitted \emph{upward} transition time in $\mu$s}
\begin{tabular}{c |c| c |c|c|c|c }
\hline
state & $\omega/(2 \pi)$ GHz & $T_1$(tot) & $T_1(E_1)$ & $T_1(E_2)$& $T_1(E_3)$ & $T_1(G)$ \\
\hline
$E_1$ & 4.6078 & 82.4 & & & & 82.4\\
$E_2$ & 4.7854 & 182.3 & & & & 182.3\\
$E_3$ & 5.06916 & 167.0 & & & & 167.0\\
\hline
$F_1$ & 9.1184 & 104.2 & 104.2 & & & \\
$F_2$ & 9.2592& 82.0 & 237.5 & 125.2 & & \\
$F_3$ & 9.4230 & 45.3 & 98.1 & 84.2 & & \\
$F_4$ & 9.5618 & 36.7 & 43.3 & 239.4 & & \\
$F_5$ & 9.7788 & 9.7 & 50.8 & 28.2 & 20.7 & \\
$F_6$ & 10.0539 & 3.0 & 9.8 & 33.6 & 5.0 & \\
\hline
\end{tabular}
\label{table_T1_exp_up}
\end{table}
\end{center}
\subsection{Dark and bright states}
\label{Dark_bright}
Coherently preparing an array eigenstate is accomplished via pulses applied to the same port of the cavity used to perform cooling and state measurement. Thus, the response of the system to this drive is contained in the operator $H_{B,\mathrm{drive}}$ discussed in the previous section (see Eq.~(\ref{H_B_drive}) and its surroundings). In the dressed basis, this operator is $H_{B, \rm{drive}}=\hbar\epsilon(t)\sum_{l=1}^L M_{0 l} (B^\dagger_{l}+B_{l})$, so the relevant value(s) of $M_{0l}$ for a particular eigenstate determine the magnitude of the response. For the single-excitation manifold, this is an easy calculation, as the operators $B^\dagger_l$ create a single-particle eigenstate from the vacuum. For the two-particle manifold the calculation is more involved, as the eigenbasis of $B^\dagger_l B_l$
does not coincide with the two-particle eigenmodes. A second, equivalent method is to work in the bare basis and compute directly the matrix element of the operator $H_{\rm int}=\hbar \sum_{j=1}^3 g_j(b_j a^\dagger +b_j^\dagger a)$ between the states $\ket{S,0_{\rm ph}}$ and $\ket{G,1_{\rm ph}}$, for the desired array eigenstate $\ket{S}$. This calculation yields
\begin{equation}\label{d_SG}
d_{S,G}=|\langle \Psi_S |H_{\rm int}|\Psi_G\rangle|=\hbar \left |\langle 0_{\rm ph} |a|1_{\rm ph}\rangle\langle S|\sum_{j=1}^3 g_j b_j^\dagger |G\rangle \right|.
\end{equation}
In Fig.~\ref{Edark} we plot this coupling $d_{E,G}$ between the $\ket{E}$-states and the ground state $G$, as a function of flux (current in the coil). The figure shows that the theory is in qualitative agreement with the measurements. $\ket{E_3}$ is predicted to be always bright, in agreement with the spectroscopy image in Fig.~\ref{fig:SuppSpec}. The $\ket{E_1}$ and $\ket{E_2}$ states instead each become dark at a specific value of the flux (current in the coil). There is some uncertainty in the location of this dark spot due to the uncertainty $\Delta g_j=\pm 7$ MHz in the measured values of the couplings $g_j$, which affects this result significantly. Theoretically we find $I_{\rm dark }(E_1)=11.3\pm 0.7$ mA and $I_{\rm dark }(E_2)=13.1\pm 0.7$ mA. These predicted values are shown, with the corresponding error bars, in Fig.~\ref{Edark}.
Experimentally the measured values are $I^{\rm exp}_{\rm dark }(E_1)=10.71$ mA and $I^{\rm exp}_{\rm dark }(E_2)=10.64$ mA; these are marked with a single dashed vertical line in Fig.~\ref{Edark} (the two lines are too close to be distinguished on the scale of the graph).
The measured value $I^{\rm exp}_{\rm dark }(E_1)$ falls within the uncertainty range of the theoretical prediction; however, $I^{\rm exp}_{\rm dark }(E_2)$ lies somewhat outside it, and it is also slightly smaller than $I^{\rm exp}_{\rm dark }(E_1)$, contrary to what the theory predicts.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=.60\textwidth]{E_coupling_data_err.pdf}
\caption{Bright and dark features of the $E$-manifold states. The figure shows the predicted coupling $d_{S,G}$ of Eq.~(\ref{d_SG})
between a state $S$ in the single-excitation manifold and the cavity pulse. The vertical line shows the experimental value of the current at which the states $E_1$ and $E_2$ become dark (the two values are too close to be distinguished). The error bars for the theoretical location of the dark point of the $E_1$ and $E_2$ states are shown along the horizontal axis.}
\label{Edark}
\end{center}
\end{figure}
\section{Experimental Details}
\subsection{Device Parameters}
\label{Device_param}
Our device consists of three transmon qubits~\cite{PhysRevA.76.042319} on a single silicon chip. Each qubit is formed by two aluminum paddles, connected by either a single double-angle-evaporated Al/AlO$_x$/Al Josephson junction (middle qubit) or a superconducting quantum interference device (SQUID) consisting of two junctions (outer qubits). The qubit array is located in the center of a copper waveguide cavity with dressed frequency $\omega_c/2\pi = 7.116$ GHz and $\kappa/2\pi = 10$ MHz. Each qubit couples to the cavity via a Jaynes-Cummings $\hat{\sigma}_x(\hat{a}+\hat{a}^\dagger)$ interaction with strength $g_i$. Because the middle qubit is located in the center of the cavity, where the $\vec{E}$-field strength is greatest, and its paddles are longer, it couples to the cavity more strongly than the outer qubits, with strengths $g_{\rm mid}/2\pi = 264 \pm 7$ MHz and $g_{\rm out}/2\pi = 155/149 \pm 7$ MHz. The outer qubits are characterized by a charging energy $E_c/h = 214$ MHz, which also gives the magnitude of the anharmonicity for a transmon ($\alpha=-E_c/h$), and have a Josephson energy which gives, at zero flux, $\omega_{q1}/2\pi = 5.074$ GHz for the left qubit and $\omega_{q3}/2\pi = 5.165$ GHz for the right. For the middle qubit, $E_c/h = 240$ MHz and the qubit frequency is $\omega_{q2}/2\pi= 4.892$ GHz. The qubits are spaced by 1 mm, giving a nearest-neighbor coupling strength of $J/h = 177$ MHz and a next-nearest-neighbor coupling strength of $J_{13}/h = 26$ MHz, with uncertainties of a couple of MHz mainly due to the uncertainty in the calibrated $g_i$ values. The qubit-cavity couplings $g_i$ and the qubit charging energy $E_c/h$ were independently calibrated by suppressing the qubit-qubit interactions. To do so we attached two coils to the cavity: one, wrapped around the whole cavity, gave a uniform field; the other, fixed off-center on top of the cavity, produced a gradient field.
Using the combination of these two coils we tuned the qubits such that they are effectively non-interacting, $\Delta_{q_i q_j} > 10J_{q_i q_j}$. The qubit frequencies at zero flux as well as the qubit-qubit couplings were obtained by fitting the spectroscopically measured eigenenergies of the ten lowest-lying eigenstates of the array to the Bose-Hubbard Hamiltonian, as described in the next section.
\begin{figure}
\includegraphics[totalheight=0.4\textwidth]{chiplayout.pdf}
\caption{To-scale layout and dimensions of the chip with the three transmons. Josephson junctions are not illustrated.}
\label{fig:Layout}
\end{figure}
The dimensions and layout of the chip placed in the cavity are shown in Fig.~\ref{fig:Layout}. As can be seen, the transmons have slightly different dimensions, so that they interact with the cavity with different strengths (since the field of the fundamental cavity mode is roughly uniform over the dimensions of the small chip, the difference in interaction strengths comes primarily from the different antenna configurations). In our chip the interaction between the cavity and the middle transmon is nearly twice as strong as that between the cavity and the transmons at the ends of the array (recall from the spectroscopy results presented in the main text that the measured $g_{2}/2\pi$ was $264$ MHz while the measured $g_{1,3}/2\pi$ was about $150$ MHz). Numerical simulations indicated that this mismatch in coupling strengths would improve the achievable cooling rates.
Coupling between the transmons themselves comes from two sources: cavity-mediated interactions and direct capacitive (or dipole-dipole) coupling. Cavity-mediated interactions arise when both qubits couple to the cavity mode, as described in Ref.~\cite{Majer2007}. This coupling strength is $J_{ij,\mathrm{cavity}}=\frac{1}{2} g_i g_j (\frac{1}{\Delta_i}+\frac{1}{\Delta_j})$, about 20 MHz for adjacent qubits and 10 MHz for edge-edge coupling at the working point of the experiment. Direct dipole-dipole coupling, discussed in Refs.~\cite{PhysRevLett.110.030601,DalArxiv}, arises from the capacitance between transmon paddles. In our case, this coupling is on the order of 150 MHz for adjacent qubits and 10 MHz for the edge qubits.
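Plugging the zero-flux device parameters of Section \ref{Device_param} into the cavity-mediated coupling formula gives magnitudes consistent with these quoted values (an illustrative evaluation; at the experimental working point the detunings differ somewhat):

```python
# Cavity-mediated coupling J_ij = (g_i g_j / 2)(1/Delta_i + 1/Delta_j), in MHz
g1, g2, g3 = 155.0, 264.0, 149.0                 # qubit-cavity couplings
w1, w2, w3, wc = 5074.0, 4892.0, 5165.0, 7116.0  # qubit and cavity frequencies
D1, D2, D3 = w1 - wc, w2 - wc, w3 - wc           # detunings (negative here)

J12 = 0.5 * g1 * g2 * (1 / D1 + 1 / D2)          # adjacent qubits
J13 = 0.5 * g1 * g3 * (1 / D1 + 1 / D3)          # edge-edge
# |J12| ~ 19 MHz and |J13| ~ 12 MHz, consistent with the quoted
# "about 20 MHz ... and 10 MHz"
```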
\subsection{Spectroscopy}
We perform spectroscopy to extract, as a function of applied magnetic flux, the eigenenergies of the nine lowest-lying excited states of the array with respect to the global zero-particle ground state. These nine states consist of three states in the single-particle manifold and six in the two-particle manifold. As in the main text, these states are denoted $\ket{G}$, $\{\ket{E_i}\}$, and $\{\ket{F_i}\}$ respectively. We probe the array for coil currents between $-2$ and $+17$ mA.
Since the qubit population without any excitation predominantly lies in $\ket{G}$, standard two-tone spectroscopy reveals the $\ket{G}\rightarrow\ket{E_1}$, $\ket{G}\rightarrow\ket{E_2}$, and $\ket{G}\rightarrow\ket{E_3}$ transitions. To perform this spectroscopy, the reflected phase of a tone near the cavity resonance (7.116 GHz) is continuously monitored as a second tone sweeps from 3.7 to 5.3 GHz. This measurement results in Fig.~\ref{fig:SuppSpec}a, with three main lines indicating the single-particle energies.
Extraction of the two-particle energies is more involved. For the $\ket{F_6}$ state, the energy can be directly measured via a two-photon transition from $\ket{G}$, as shown in Fig.~\ref{fig:SuppSpec}b. For all other $\ket{F_i}$ states, however, the energies must be measured indirectly via transitions from a single-particle state. We use $\ket{E_1}$ and $\ket{E_3}$ as stepping stones to measure the $\ket{E_1} \rightarrow \ket{F_i}$ and $\ket{E_3} \rightarrow \ket{F_i}$ transitions, by running two additional spectroscopy scans: one with the addition of a tone at the $\ket{G}\rightarrow\ket{E_1}$ frequency, and another with the addition of a tone at the $\ket{G}\rightarrow\ket{E_3}$ frequency. The results, shown in Fig.~\ref{fig:SuppSpec}b, allow the identification of all six two-particle states.
From the extracted energies of the one- and two-particle manifolds (see Fig.~1 in the main text), we extract parameters of our device by fitting these values to predictions based on the Bose-Hubbard Hamiltonian with an additional next-nearest-neighbor coupling term. After taking into account the variation of the qubit frequencies with flux, that Hamiltonian is
\begin{equation}
\hat{H} = \hbar\sum\limits_{i = 1}^{3} \left( \omega_i(\phi) \hat{b}_i^\dagger \hat{b}_i + \frac{\alpha_i}{2}\hat{b}^\dagger_i\hat{b}^\dagger_i\hat{b}_i\hat{b}_i \right) + \hbar J\left( \hat{b}_1^\dagger\hat{b}_2 + \hat{b}_2^\dagger\hat{b}_3 + h.c.\right) + \hbar J_{13}\left(\hat{b}_1^\dagger\hat{b}_3 + h.c. \right)
\end{equation}
with the qubit frequency as a function of flux described by $\omega_i(\phi) = \omega_{0i}\sqrt{\cos\left(B_i I+A\right)}$ for the outer qubits with SQUIDs (for the middle qubit $\omega$ is constant). The parameter $B_i$ for each edge qubit is the ratio between the current applied to the coil and the flux threading the qubit's SQUID loop, and $A$ is an overall offset due to potential flux trapped in the SQUID loops during cooldown.
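The flux dependence used in the fit can be sketched numerically. The values of $\omega_{0i}$, $B_i$, and $A$ below are placeholders, not the fitted device parameters.

```python
import numpy as np

# Flux dependence omega_i(phi) = omega_{0i} * sqrt(cos(B_i * I + A)) of an
# edge qubit.  omega0, B, and A are placeholder numbers (assumed), not the
# fitted values from the experiment.

omega0, B, A = 2 * np.pi * 5.0e9, 0.3, 0.0   # rad/s, rad/mA, rad (assumed)

def omega(I_mA):
    """Edge-qubit frequency versus coil current (the middle qubit is fixed)."""
    return omega0 * np.sqrt(np.cos(B * I_mA + A))

# With no trapped flux (A = 0) the qubit sits at its maximum frequency at I = 0.
```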
\begin{figure}
\includegraphics[totalheight=0.4\textwidth]{SpecRaw.pdf}
\caption{Raw data from spectroscopy, showing (a) the $\ket{G}\rightarrow\ket{E_i}$ transitions probed with two microwave tones, and (b) the $\ket{E_1}\rightarrow\ket{F_i}$ (blue) and $\ket{E_3}\rightarrow\ket{F_i}$ (red) transitions probed with three microwave tones.}
\label{fig:SuppSpec}
\end{figure}
\subsection{Dispersive readout of the array}
Due to the dispersive coupling between the qubits and the cavity, each array eigenstate induces a shift in the resonant frequency of the readout cavity. The frequency corresponding to the array in $\ket{G}$ is measurable directly via the reflected phase measurement on a network analyzer, as the array is in its global ground state without excitation pulses. This frequency is 7.116 GHz when our system is biased at 10 mA. To measure the resonator frequencies corresponding to the excited states, we use microwave pulses to prepare the array in the desired eigenstate $\ket{i}$, then measure the reflected phase $\theta_i$ of a 7.116 GHz tone, referenced to the reflected phase with the array in $\ket{G}$. This measurement was done using our LJPA in phase-preserving mode. The standard equations for the reflected phase shift from a resonator imply that the frequency shift $\chi_i$ for a given eigenstate is related to $\theta_i$ by $\chi_i = (\kappa/2)\tan\left(\theta_i/2\right)$. The measured reflected phase angle $\theta_{\rm exp}$ and the corresponding $\chi_{\rm exp}$ are shown in Table~\ref{table_chi}.
In the same table are also shown the $\chi$ shifts calculated theoretically according to Eqs.~(\ref{chi_E}) and (\ref{chi_F}).
Except for $\ket{F_3}$ and $\ket{F_4}$, all of the states are resolvable. In fact, by using the LJPA in phase-sensitive mode and adjusting the measurement frequency and amplification axis (phase of the detected quadrature), we can obtain additional separation between states of interest for a particular experimental run.
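The phase-to-shift conversion above is a one-liner; applied to the measured angles it reproduces the $\chi_{\rm exp}/\kappa$ column of the table (for the single-particle states, where the quoted rounding is exact).

```python
import math

# Convert the measured reflected phase theta (radians) into the dispersive
# shift in units of kappa, via chi = (kappa/2) * tan(theta/2).
# The theta values are the ones quoted in the chi-shift table.

def chi_over_kappa(theta):
    return 0.5 * math.tan(theta / 2.0)

thetas = {"E1": 1.37, "E2": 0.74, "E3": 1.43}
shifts = {state: round(chi_over_kappa(t), 2) for state, t in thetas.items()}
# Reproduces the chi_exp/kappa column: E1 -> 0.41, E2 -> 0.19, E3 -> 0.43.
```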
To extract the population of a given state after a particular experimental sequence, we repeat the sequence many ($\sim 10^6$) times and histogram the measured phase or quadrature-amplitude values. After the run, we take a set of calibration histograms, in which we prepare each of the ten states and immediately make a measurement with the same frequency and amplification axis used in the experiment. We then fit the measured histograms to a sum of Gaussians with the same means and variances as the calibration histograms and, from the amplitude of each Gaussian, extract the corresponding state's population during that run. See the next section for an example of this procedure.
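Because the Gaussian means and variances are fixed by the calibration histograms, the fit for the populations is linear in the amplitudes. The sketch below illustrates this with synthetic three-state data; all numbers are stand-ins for the real calibration values.

```python
import numpy as np

# Population extraction sketch: with means and variances fixed by calibration,
# the measured histogram is a linear combination of known Gaussian shapes, so
# the amplitudes (populations) follow from linear least squares.
# All numbers are synthetic stand-ins for real calibration data.

rng = np.random.default_rng(0)
bins = np.linspace(-4.0, 8.0, 201)
centers = 0.5 * (bins[:-1] + bins[1:])

means, sigma = np.array([0.0, 3.0, 6.0]), 0.8   # "calibration" values (assumed)
true_pops = np.array([0.6, 0.3, 0.1])

# Synthetic experimental histogram drawn from the mixture.
samples = np.concatenate([
    rng.normal(m, sigma, int(200_000 * p)) for m, p in zip(means, true_pops)
])
hist, _ = np.histogram(samples, bins=bins, density=True)

# Design matrix: one unit-normalized Gaussian per calibration state.
G = np.stack([
    np.exp(-0.5 * ((centers - m) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    for m in means
], axis=1)
pops, *_ = np.linalg.lstsq(G, hist, rcond=None)
# pops recovers approximately (0.6, 0.3, 0.1).
```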
\begin{center}
\begin{table}
\caption{$\chi$ shifts}
\begin{tabular}{c |c| c| c}
\hline
state & $\theta_{\rm exp}$(rad) & $\chi_{\rm exp}/\kappa$ & $\chi_{\rm th}/\kappa$ \\
\hline
E1 & 1.37 & 0.41& 0.49 \\
E2 & 0.74 & 0.19 & 0.26 \\
E3 & 1.43 & 0.43 & 0.48 \\
\hline
F1 & 2.09 & 0.86 & 1.07 \\
F2 & 1.64 & 0.53 & 0.68 \\
F3 & 1.82 & 0.64 & 0.75\\
F4 & 1.77 & 0.61 & 0.70 \\
F5 & 2.03 & 0.80 & 0.82 \\
F6 & 2.16 & 0.93 & 0.90 \\
\hline
\end{tabular}
\label{table_chi}
\end{table}
\end{center}
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=.60\textwidth]{Histograms.pdf}
\caption{An example of calibration histograms.}
\label{CalibHisto}
\end{center}
\end{figure}
\subsection{Extraction of Decay Rates}
Natural decay dynamics were measured by coherently initializing the system in the state of interest, waiting for a time, then measuring the array state, as described above. We fit the population dynamics for each state to a decay model given by the matrix rate equation $\partial_t \vec{c} = \Gamma \vec{c}$, where $\vec{c}$ is a vector containing all of the state populations as a function of time and $\Gamma$ is a matrix of transition rates between states. This procedure is similar to that used by Peterer et al.~\cite{PhysRevLett.114.010501} in their study of a single transmon qubit. Based on the results in Ref.~\cite{PhysRevLett.114.010501}, this model suppresses transitions from the two-particle manifold directly to the zero-particle ground state. We also suppress transitions between states in the same manifold, since these transition frequencies are on the order of a few hundred MHz, and the photon shot noise spectrum, which plays a dominant role in dissipation for this system, has very little support at these frequencies. Best-fit parameters are given in Table~\ref{table_T1_exp_down} and were used to generate the natural decay map in Fig.~2 of the main text. For the bath-engineering decays, a similar model was used for the fit, with the addition of parameters for the intramanifold decay; all intermanifold rates were held fixed to the previously measured natural values.
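A minimal version of the rate-equation model can be written down for a toy three-level cascade. The rates below are invented for illustration; as in the text, direct two-to-zero-particle transitions and intramanifold transitions are omitted.

```python
import numpy as np

# Minimal sketch of the decay model dc/dt = Gamma c used for the fits.
# Rates are invented for illustration.  Toy state ordering: (G, E, F),
# a three-level cascade F -> E -> G with no direct F -> G term.

gamma_FE, gamma_EG = 0.2, 0.1          # decay rates in 1/us (assumed)
Gamma = np.array([
    [0.0,  gamma_EG,   0.0],           # G gains from E; no direct F -> G
    [0.0, -gamma_EG,   gamma_FE],      # E is fed by F and drains to G
    [0.0,  0.0,       -gamma_FE],      # F only decays
])

def populations(t, c0=np.array([0.0, 0.0, 1.0])):
    """Propagate c(t) = exp(Gamma t) c0 via eigendecomposition of Gamma."""
    w, V = np.linalg.eig(Gamma)
    return (V @ np.diag(np.exp(w * t)) @ np.linalg.inv(V) @ c0).real

c = populations(10.0)   # start in F, evolve for 10 us
# Total population is conserved; the F population decays as exp(-gamma_FE t).
```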
Errors in the fit, primarily at low population, likely occur due to effects such as spontaneous $T_1$ decay, which cause the readout histograms to be skewed from their nominally Gaussian shape. See \cite{GirvinT1} for a detailed explanation of this effect. As can be seen in the main text, this effect seems to occur more often for the natural decays than during the cooling protocols.
\subsection{Calibration of Bath Engineering Drive Power}
Calibration of the powers used to drive the bath-engineering transitions is done in the standard circuit-QED manner~\cite{PhysRevA.74.042318}: first the qubit-cavity $\chi$-shift is calibrated; then, with this value known, the intracavity photon number is inferred from the Stark shift on the qubit. To do this in our system, we tune the edge qubits to below 3 GHz, decoupling them from the middle qubit. We then measure the $\chi$-shift between the middle qubit and the cavity by measuring, for a variety of incident powers at the cavity frequency, both the measurement-induced dephasing rate and the resulting Stark shift. As shown in Ref.~\cite{PhysRevA.74.042318}, the measurement-induced dephasing rate is $8\chi^2\bar{n}/\kappa$, while the Stark shift is $2\chi\bar{n}$, so by comparing the slopes of these quantities versus power we extract $\chi$ (since $\kappa$ is known). From there, we use the Stark shift to calibrate the intracavity photon number over the range of frequencies and powers used in the bath-engineering experiment.
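The slope comparison reduces to simple algebra: both quantities are linear in $\bar{n}$ (hence in power), so $\chi = \kappa \times (\text{slope ratio})/4$. The values of $\kappa$ and $\chi$ below are illustrative numbers, not device values.

```python
# Dephasing rate 8 chi^2 nbar / kappa and Stark shift 2 chi nbar are both
# linear in nbar, so chi = kappa * (slope_dephasing / slope_stark) / 4.
# kappa and chi_true are illustrative numbers (assumed), not device values.

kappa = 2.0e6                       # cavity linewidth (Hz), assumed
chi_true = 0.5e6                    # plays the role of the unknown shift

slope_dephasing = 8 * chi_true**2 / kappa   # d Gamma_m / d nbar
slope_stark = 2 * chi_true                  # d Delta_Stark / d nbar

chi_extracted = kappa * (slope_dephasing / slope_stark) / 4
# chi_extracted equals chi_true by construction.
```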
\subsection{Comparison of Cooling and Coherent Driving to Excite a Dark State}
Here we include a brief description of our attempts to populate dark states using coherent microwave drives rather than via the autonomous feedback protocol discussed at the end of the main paper.
In theory, without dephasing and dissipation, and in the absence of higher Jaynes-Cummings energy states, one could drive coherent transitions to the desired state except at the singular flux bias point where the matrix element of this transition is exactly zero. In practice, for experimentally available drive powers, driving an almost-dark transition may take so long that dissipation (qubit $T_1$, $T_\phi$, cavity loss) reduces the fidelity to unacceptable values. This was the case at a flux bias of 10 mA (where the single-particle states were almost dark): in contrast to the usual coherent pulses, which take tens of nanoseconds (we used 64 ns), the pulse needed to excite the $|G\rangle \rightarrow |E_1\rangle$ transition took around 1 $\mu$s, which limited our fidelity to about 65\% (compared to the cooling process, which achieved a fidelity of 80\% for the single-excitation state).
Another limitation of the coherent excitations is that, due to higher levels in the spectrum, off-resonant transitions with a larger dipole moment than the dark transition may be driven before the desired transition. This was the case at 10.71 mA (``complete'' darkness), where even at the maximum available drive power we saw no population in the $\ket{E_1}$ state, while at these powers higher states (either in the two-excitation subspace or in an even higher manifold; from the dispersive shifts it was difficult to tell) began to be populated.
\section{Introduction}
\setcounter{equation}{0}
\hspace{3mm}
Recent investigations \cite{acg91,blsu92} of the compact U(1) lattice
gauge theory in 4 dimensions have produced energy histograms
indicative of a first-order transition. In the corresponding Monte
Carlo simulations, however, the tunneling between the phases is
strongly suppressed. In order to overcome the difficulties due to
the lack of transitions the authors of \cite{acg91} introduce an
iterative reweighting for different $\beta$, while the authors of
\cite{blsu92} use a matching of hot and cold start results. The
problem is that on larger lattices conventional algorithms are not
able to induce transitions at all. Therefore, we have looked for
algorithmic alternatives.
To reduce the slowing down of the Monte Carlo algorithm in systems
with a rough free-energy landscape the method of simulated tempering
\cite{mp92} has been proposed and applied to the random-field Ising
model. In this method the inverse temperature $\beta$ is promoted to
the status of dynamical variable, taking values which range over
some definite set. In this manner one tries to utilize the fact that
at lower $\beta$ the free-energy barriers are lower. In an application
of this procedure to spin-glass simulations \cite{kr94} it has turned
out, however, that adjusting the set of temperatures and handling the
corresponding probability distribution in an efficient way is far from
straightforward. Nevertheless it has been possible to develop a
procedure \cite{kr94} leading to a reduction of slowing down
comparable to the one obtained with the multicanonical method
\cite{bc92}, and with the additional advantages of allowing full
vectorization of the code and of providing the canonical ensemble
directly.
In the Potts model in 2 dimensions, the strength of the first-order
transition decreases with the number of the degrees of freedom $q$ of
the spins, the transition becoming of second order for $q<5$. This
has been used to set up an algorithm \cite{kw93} in which $q$ becomes
a dynamical variable: by opening the easier pathway along the
mountains of the joint probability distribution of $q$ and energy, one
avoids the need of relying, for large $q$, on the strongly suppressed
tunneling for equilibrating the configurations. To implement
transitions between different $q$ cluster steps \cite{sw87} have been
inserted. It turns out that by this algorithm one gains large factors
in the autocorrelation times also in comparison to the multicanonical
algorithm \cite{bn91}.
Proceeding along similar lines, we have obtained an efficient
algorithm for the U(1) gauge theory \cite{krw94}. We start from the
Wilson action supplemented by a monopole term \cite{bss85},
\begin{equation}
S=\beta \sum_{\mu>\nu,x} (1-\cos \Theta_{\mu\nu,x})+
\lambda \sum_{\rho,x} |M_{\rho,x}| ,
\label{eq1}
\end{equation}
where $M_{\rho,x}$ is the monopole content of 3 dimensional cubes
\cite{dt80}. One finds that the strength of the first order transition
decreases with $\lambda$, the transition ultimately becoming of second
order. Thus, by making $\lambda$ a dynamical variable, we can again
dispose of the tunneling transitions and proceed instead along the
much easier pathway running over the top of the joint probability
distribution $P(E,\lambda)$, $E$ being the average plaquette
energy. With the use of appropriate Metropolis steps for the link
variables as well as $\lambda$, moreover, one can make the algorithm
fully vectorizable and parallelizable.
Before running the dynamical-parameter algorithm one has to determine
the position of the phase transition as function of $\lambda$ and some
parameters in the generalized action, which serve to enforce the
prescribed $\lambda$ distribution. On lattices of moderate size
(e.g.~$8^4$) this is relatively easy because there is still some
overlap between the peaks. On larger lattices determining these
quantities is much more difficult and it becomes then crucial to
perform the calculation without excessive computational cost. In the
present paper we develop a method to achieve this goal. We
demonstrate its effectiveness illustrating results obtained for a
$16^4$ lattice. We will see that our method enables us to observe
transitions also on large lattices, which is very important for a
reliable determination of the transition point.
In Section 2 we outline the general features of the
dynamical-parameter method. In Section 3 we derive relations among
transition probabilities on which we will base the determination of
the quantities required for the implementation of the algorithm. The
detailed procedure followed for their calculation is described in
Section 4. In Section 5 we will present some numerical results.
\section{Outline of method}
\setcounter{equation}{0}
\hspace{3mm}
Conventional methods simulate the probability distribution
\begin{equation}
\mu_{\lambda}(\Theta)= \exp(-S_{\lambda}(\Theta))/Z_{\lambda}
\end{equation}
where $\lambda$ is a fixed parameter. In order to make $\lambda$ a
dynamical variable we consider $\mu_{\lambda}(\Theta)$ as the
conditioned probability to get a configuration $\Theta$ given a value
of $\lambda$ and prescribe a probability distribution $f(\lambda)$ to
get the joint probability distribution
$\mu(\Theta,\lambda)=f(\lambda)\mu_{\lambda}(\Theta)$. To simulate
$\mu(\Theta,\lambda)$ we need it in the form
\begin{equation}
\mu(\Theta,\lambda)=\exp(-S(\Theta,\lambda))/Z
\label{eq2}
\end{equation}
where
\begin{equation}
S(\Theta,\lambda)=S_{\lambda}(\Theta)+g(\lambda)
\label{eq3}
\end{equation}
and
\begin{equation}
g(\lambda)=-\log(f(\lambda) Z/Z_{\lambda}).
\end{equation}
Eventually we will require that the values of $\lambda$ be approximately
equidistributed, i.e. $f(\lambda) \approx$ const, which then gives
$g(\lambda) \approx \log Z_{\lambda}+$ const.
In our application of the algorithm each update of the link variables
$\Theta_{\mu,x}$ is followed by an update of $\lambda$, which can take
values from a discrete, ordered set $\lambda_q$ with $q=1,\ldots,n$.
The individual update steps are Metropolis steps in both cases. For
the $\lambda$ update we use a proposal matrix
$\frac{1}{2}(\delta_{q+1,q'}+\delta_{q,q'+1}+\delta_{q,1}
\delta_{q',1}+\delta_{q,n}\delta_{q',n})$ and an acceptance probability
$\min(1,\exp(S(\Theta,\lambda_{q})-S(\Theta,\lambda_{q'})))$. The
above form of the proposal matrix implies that, if the current value
$\lambda_q$ is not extremal, then we choose as new candidate value for
$\lambda$ one of the two neighboring values, $\lambda_{q-1}$ or
$\lambda_{q+1}$, with equal probability, whereas, if $\lambda_q$ lies
at the boundary of the set of possible values, we preselect either its
(only) neighboring value or $\lambda_q$ itself, again with equal
probability.
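The $\lambda$ update just described can be sketched directly; `propose` implements the proposal matrix (interior values pick a neighbor with probability $1/2$ each, boundary values pick their single neighbor or themselves), and `action(theta, q)` is a user-supplied placeholder standing in for $S(\Theta,\lambda_q)$.

```python
import math
import random

# Metropolis step for the dynamical parameter lambda, following the proposal
# matrix in the text.  action(theta, q) stands in for S(Theta, lambda_q).

def propose(q, n):
    """Draw q' from the proposal matrix, with q = 0, ..., n-1."""
    if q == 0:
        return random.choice([0, 1])
    if q == n - 1:
        return random.choice([n - 2, n - 1])
    return random.choice([q - 1, q + 1])

def update_lambda(theta, q, n, action):
    """One Metropolis step in q at fixed configuration theta."""
    qp = propose(q, n)
    if random.random() < min(1.0, math.exp(action(theta, q) - action(theta, qp))):
        return qp
    return q
```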
In order to implement the simulation, one must fix $\beta(\lambda_q)$
(cf.~(\ref{eq1})) and $g(\lambda_q)$ (cf.~(\ref{eq2}) and
(\ref{eq3})), for all values of $q$. We demand
$\beta(\lambda_q)\approx \beta_w(\lambda_q)$, where $\beta_w$ is the
value of $\beta$ which makes both phases equally probable. Our
condition for fixing $g(\lambda_q)$ is $f(\lambda)\approx$ const. In
order to determine $\beta(\lambda_q)$ and $g(\lambda_q)$, we will use
the fact that in a simulation the transition probabilities between
neighboring values of $\lambda$ are very sensitive to these
quantities.
\section{Transition probabilities}
\setcounter{equation}{0}
\hspace{3mm}
To derive relations which can be used for the envisaged determination of
$\beta(\lambda_q)$ and $g(\lambda_q)$ we use the probability for the
transition from a value $\lambda_q$ to a neighboring value $\lambda_{q'}$
\begin{equation}
W(\Theta,q;q')=
\frac{1}{2}\min(1,\exp(S(\Theta,\lambda_{q})-S(\Theta,\lambda_{q'})))
\label{tp}
\end{equation}
and note that detailed balance implies
\begin{equation}
f(\lambda_{q-1})\mu_{\lambda_{q-1}}(\Theta)W(\Theta,q-1;q)=
f(\lambda_q)\mu_{\lambda_q}(\Theta)W(\Theta,q;q-1) \; .
\label{db}
\end{equation}
Let us consider subsets $K(q)$ of configurations $\Theta$
with probability distributions proportional to $\mu_{\lambda_q}(\Theta)$
and weight $w_K(q)=\sum_{\Theta \in K(q)} \mu_{\lambda_q}(\Theta)$.
If we introduce the average transition probability for the set $K(q)$
\begin{equation}
p_K(q;q')=\frac{1}{w_K(q)}\sum_{\Theta \in K}\mu_{\lambda_q}(\Theta)
W(\Theta,q;q') \; ,
\label{atp}
\end{equation}
by averaging (\ref{db}) we obtain
\begin{equation}
f(\lambda_{q-1})\ w_K(q-1)\ p_{K}(q-1;q)=
f(\lambda_q)\ w_K(q)\ p_{K}(q;q-1) \; .
\label{adb}
\end{equation}
We now apply (\ref{adb}) to sets $K_c$ and $K_h$ of configurations in the
cold phase and in the hot phase separately. Because we are interested
in cases where transitions between the phases are extremely rare, in
practice it is easy to obtain sets of this type with numbers of
configurations sufficient for the present purpose. Also, for the
same reason, the corresponding equations can be considered to be
independent. We assume that our two conditions,
$\beta(\lambda)=\beta_w(\lambda)$ and
$f(\lambda)=$const, are satisfied. The condition on $\beta$
implies that the two phases are equally populated, i.e.
$w_{K_c}=w_{K_h}$. Moreover, since the two subsets $K_h$ and $K_c$
essentially exhaust the whole set of configurations (in the cases
we are considering the overlap is extremely small), all of the weights
are, to a very good approximation, equal to ${1 \over 2}$. Using this
fact, the constancy of $f(\lambda)$, and (\ref{adb}) for $K=K_c$
and $K=K_h$ separately, we get a pair of equations which simplifies to
\begin{eqnarray}
p_{K_c}(q-1;q)=p_{K_c}(q;q-1) \; , \nonumber\\
p_{K_h}(q-1;q)=p_{K_h}(q;q-1) \; .
\label{trans}
\end{eqnarray}
This is what we will exploit to determine
$\beta(\lambda_q)$ and $g(\lambda_q)$.
Our strategy will be to adjust $\beta(\lambda_q)$ and $g(\lambda_q)$, for
known $\beta(\lambda_{q-1})$ and $g(\lambda_{q-1})$, in such a way that
(\ref{trans}) holds. Starting from given $\beta(\lambda_1)$ and
arbitrarily chosen $g(\lambda_1)$, in this manner we can obtain
$\beta(\lambda_q)$ and $g(\lambda_q)$ for $q=2,\ldots,n$.
\section{Determination of $\beta(\lambda)$ and $g(\lambda)$}
\setcounter{equation}{0}
\hspace{3mm}
To begin our procedure we select a value for $\lambda_1$ in the region
where the peaks of the probability distribution associated to the two
phases strongly overlap so that tunneling occurs frequently and
$\beta(\lambda_1)$ can easily be obtained by a conventional
simulation. Because only the differences $g(\lambda_{q-1})-g(\lambda_q)$
are relevant we can choose $g(\lambda_1)$ arbitrarily. Then for
$q=2,\ldots,n$ we consecutively determine $\beta(\lambda_q)$ and
$g(\lambda_q)$ for known $\beta(\lambda_{q-1})$ and $g(\lambda_{q-1})$.
In order to proceed from $q-1$ to $q$ we choose a new $\lambda_q$,
approximately at the same distance from $\lambda_{q-1}$ as in the previous
steps. As a first rough approximation we obtain $\beta(\lambda_q)$ by
extrapolation from the former values. At this point we use the sets
of cold and hot configurations $K_c(q-1)$ and $K_h(q-1)$ at
$\lambda_{q-1}$, available from the previous step, and generate two new
sets of $\Theta$ configurations $K_c(q)$ and $K_h(q)$ at $\lambda_q$ by
short Monte Carlo runs with cold and hot start, respectively.
For each set $K_i(q')$ we can easily calculate the quantity
\begin{equation}
\tilde{p}_{K_i}(q';q'') =\frac{1}{N_{K_i(q')}} \sum_{\Theta \in K_i(q')}
W(\Theta,q';q'') \; ,
\end{equation}
where $N_{K_i(q')}$ is the number of configurations in the set and
$W(\Theta,q';q'')$ is given by (\ref{tp}) (of course, the variables
$q'$, $q''$ stand for $q-1$, $q$ or $q$, $q-1$, as appropriate).
Since $\tilde{p}$ approximates (\ref{atp}), this allows us to calculate
the quantities $p_{K_i}$ which, for the correct choice of
$\beta(\lambda_q)$ and $g(\lambda_q)$, should satisfy (\ref{trans}).
We adjust then $\beta(\lambda_q)$ and $g(\lambda_q)$ until the
equations (\ref{trans}) are satisfied. In practice this takes
only a very small amount of computer time. We obtain good estimates
for $\beta(\lambda_q)$ and $g(\lambda_q)$ though only approximate
quantities enter (\ref{trans}) because the peaks related to
the phases vary only little with $\beta$. In addition, the quantities
$\tilde{p}_{K_i}(q';q'')$ are used to adjust the distances between
neighboring $\lambda$ values in such a way that one has roughly
equal transition probabilities for all steps.
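The adjustment of $g(\lambda_q)$ can be sketched as a one-dimensional root search: the acceptance of the forward move depends on the bare action difference minus $\Delta g = g(\lambda_q)-g(\lambda_{q-1})$, and the backward move on the bare difference plus $\Delta g$. The Gaussian arrays below are synthetic stand-ins for measured action differences, not simulation output.

```python
import numpy as np

# Tune dg = g(lambda_q) - g(lambda_{q-1}) so the balance conditions hold.
# ds_fwd[i]: bare S_{lambda_{q-1}}(Theta) - S_{lambda_q}(Theta) on configs at
# lambda_{q-1}; ds_bwd: the analogous difference for configs at lambda_q.
# Synthetic Gaussian stand-ins for measured differences (assumed).

rng = np.random.default_rng(1)
ds_fwd = rng.normal(3.0, 1.0, 5000)
ds_bwd = rng.normal(-3.0, 1.0, 5000)

def p_tilde(ds, shift):
    """Average acceptance (1/2) min(1, exp(ds + shift)) over configurations."""
    return 0.5 * np.minimum(1.0, np.exp(ds + shift)).mean()

# Bisect on dg until forward and backward probabilities match: the forward
# move sees exp(ds - dg), the backward move exp(ds + dg).
lo, hi = -20.0, 20.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if p_tilde(ds_fwd, -mid) > p_tilde(ds_bwd, mid):
        lo = mid
    else:
        hi = mid
dg = 0.5 * (lo + hi)
# By symmetry of the synthetic data, the balance point sits near dg = 3.
```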
After a larger number of $q$ steps the errors may accumulate. Therefore
we perform short runs of the dynamical-parameter algorithm to test
whether the simulation does indeed travel along the mountains of the
distribution in the hot as well as in the cold phase. If it gets stuck
we slightly increase or decrease the couplings $\beta(\lambda_q)$ in the
region of $\lambda$ where the transitions fail. We determine then the
corresponding values $g(\lambda_q)$ from the conditions
\begin{equation}
\tilde{p}_{K_c}(q-1;q)+\tilde{p}_{K_h}(q-1;q)=
\tilde{p}_{K_c}(q;q-1)+\tilde{p}_{K_h}(q;q-1)
\label{tr}
\end{equation}
and run the dynamical algorithm again. Typically one or two trials are
sufficient.
After performing the simulations with dynamical $\lambda$, improved
$\beta(\lambda_q)$ can be obtained by reweighting \cite{fs88} the
distribution at the values of $\lambda$ where deviations from
the equidistribution of configurations in the cold and hot phase
are seen to occur. Corresponding new values for $g(\lambda_q)$ are
then obtained from (\ref{tr}). Alternatively improved values for
$g(\lambda_q)$ can be obtained by replacing the current values with
$g(\lambda_q)+\ln(f(\lambda_q))$.
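The single-histogram reweighting invoked here can be sketched as follows: expectation values at a nearby coupling $\beta'$ follow from configurations sampled at $\beta$ by weighting each measurement with $\exp(-(\beta'-\beta)E)$. The energies below are synthetic Gaussian data, not output of the U(1) simulation.

```python
import numpy as np

# Single-histogram reweighting sketch (Ferrenberg-Swendsen style):
# <O>_{beta'} = sum_i w_i O_i / sum_i w_i with w_i = exp(-(beta'-beta) E_i).
# Synthetic Gaussian "energies" stand in for real simulation data.

rng = np.random.default_rng(2)
beta, beta_new = 1.0, 1.02
E = rng.normal(100.0, 5.0, 50_000)      # toy energy samples at beta

w = np.exp(-(beta_new - beta) * (E - E.mean()))   # centered for stability
E_reweighted = np.sum(w * E) / np.sum(w)
# For Gaussian E with variance s^2, the mean shifts by -(beta'-beta) s^2 = -0.5.
```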
\section{Numerical results}
\setcounter{equation}{0}
\hspace{3mm}
Our method has made it possible to determine the phase transition region
for a lattice as large as $16^4$ using only a moderate amount of
computer time. We have used approximately $2\times 10^4$ sweeps per
each value of $\lambda$ to get the sets of configurations $K_c$ and
$K_h$ and approximately $4\times 10^4$ sweeps, in total, in the short
test runs with the dynamical algorithm. These preliminary calculations
have been used to determine $\lambda_q$, $\beta(\lambda_q)$ and
$g(\lambda_q)$ following the procedure described in Section 4. Our
results for these parameters are reproduced in Table 1. Altogether we
used 25 values for $\lambda$ ranging from $\lambda=0.4$ (our starting
point) down to $\lambda=0$. In our simulations with dynamical $\lambda$
we performed approximately $10^6$ sweeps of the lattice and we observed
a large number of transitions between the phases also on the $16^4$
lattice.
We define as location $\beta_{\mbox{\scriptsize{C}}}$ of the phase
transition the maximum of the specific heat. We have used reweighting
\cite{fs88} in order to explore a range of $\beta$ in the neighborhood
of the value $\beta(\lambda_q)$. As a matter of fact reweighting is
necessary not only to find $\beta_{\mbox{\scriptsize{C}}}$, but also to
determine accurately $\beta_w$ (the value of $\beta$ where the
configurations are equidistributed between the phases) since in order to
implement the procedure of Section 4 we only needed to make the areas
under the peaks approximately equal.
Figure 1 shows $\beta_{\mbox{\scriptsize{C}}}$ as function of $\lambda$
for the $16^4$ lattice and also our earlier results \cite{krw94} for the
$8^4$ lattice. In particular, for the $16^4$ lattice at $\lambda=0$, we
obtain the value $\beta_{\mbox{\scriptsize{C}}}=1.01084(5)$, where the
error has been estimated from the fluctuation of different samples. This
confirms the result $\beta_{\mbox{\scriptsize{C}}}=1.01082(6)$ obtained
in Ref.~\cite{blsu92} by a matching procedure.
In Figure 2 we show the distribution $P(E,\lambda)$ at $\beta_w$ (for
$\lambda=0.6$ at $\beta_C$) which we got (after reweighting) from our
simulations. The data have been obtained with the dynamical-parameter
algorithm, except for $\lambda=0.5$ and $\lambda=0.6$ where the peaks
overlap and the conventional Metropolis algorithm is adequate. Comparing
with the corresponding figure for the $8^4$ lattice in \cite{krw94} the
much stronger suppression of tunneling in the transition region is
obvious. In fact, on the $8^4$ lattice, because of the overlap between
the peaks, there is still substantial tunneling. (For this reason, in
our earlier simulations on the $8^4$ lattice we could determine
$g(\lambda_q)$ following the less sophisticated procedure based on
(\ref{tr}).)
In regards to the efficiency of our algorithm versus conventional
methods, the number of sweeps required to observe comparable numbers of
tunnelings is greatly reduced already on an $8^4$ lattice. One must
make a distinction here (cf.\ the discussion in \cite{krw94}) according to
whether one is interested in all values of $\lambda$, in which case
our method produces all of the results in one stroke, or in a single
$\lambda$. In the latter case, since our method requires that one still
simulates a whole range of $\lambda$ values, fairness requires that the
observed mean time between tunnelings be multiplied roughly by the
number of $\lambda$ values considered. Even in this case, with an $8^4$
lattice there is still considerable gain, for example, for
$\lambda=-0.3$, and some gain remains also for $\lambda=0$ \cite{krw94}.
With a $16^4$ lattice a comparison is, as a matter of fact, impossible,
simply because the separation between the phases is so strong that with
conventional algorithms one does not observe any transition at all.
With our algorithm, instead, on a $16^4$ lattice and for $\lambda=0$, we
observe average tunneling times of the order of $10^3$ (for tunneling
time we follow the definition of \cite{bp91}). If we were interested
only in $\lambda=0$, this number ought to be multiplied by 25, i.e. the
number of $\lambda_q$ involved. This is certainly not a small time,
however it is small as compared to infinity, which corresponds to
observing no transition at all.
For a further reduction of the autocorrelation times, in addition to
circumventing tunneling, one would have to replace the local Metropolis
steps for $\Theta$ with more efficient ones. In the dynamical parameter
algorithm for the Potts model \cite{kw93} the cluster steps, which were
originally introduced to implement the transitions between different
$q$, have the additional advantage of reducing critical slowing down
and, correspondingly, the autocorrelation time in the second order
region. However, at this stage an implementation of cluster steps for
gauge theories with continuous groups appears very problematic, if not
plainly impossible \cite{km94}. A more promising direction to pursue
might be along the lines of multi-scale algorithms \cite{ab91}, provided
that these could be modified to account for the actual structure of the
configurations.
\section*{Acknowledgements}
\hspace{3mm}
One of us (W.K.) wishes to thank the Physics Department of Boston University
for kind hospitality during his visits.
This research was supported in part under DFG grants Ke 250/7-2 and 250/12-1
and under DOE grant DE-FG02-91ER40676.
The computations were done on the CM5 of the Center for Computational
Science of Boston University and on the CM5 of the GMD at St.~Augustin.
\newpage